diff --git a/kosmos-g/README.md b/kosmos-g/README.md
deleted file mode 100644
index 26d6aa807..000000000
--- a/kosmos-g/README.md
+++ /dev/null
@@ -1,139 +0,0 @@
-# Kosmos-G: Generating Images in Context with Multimodal Large Language Models
-[Paper](https://arxiv.org/abs/2310.02992) | [Project Page](https://xichenpan.github.io/kosmosg/)
-
-## Checkpoints
-
-Download checkpoints for stage1, stage2, and the final model.
-
-```shell
-mkdir kosmosg_checkpoints
-cd kosmosg_checkpoints
-DLINK=$(echo -n "aHR0cHM6Ly9jb252ZXJzYXRpb25odWIuYmxvYi5jb3JlLndpbmRvd3MubmV0L2JlaXQtc2hhcmUtcHVibGljL2tvc21vc2cvVmlULUwtMTQtc2QucHQ/c3Y9MjAyMy0wMS0wMyZzdD0yMDI0LTA0LTEwVDEzJTNBMTElM0E0NFomc2U9MjA1MC0wNC0xMVQxMyUzQTExJTNBMDBaJnNyPWMmc3A9ciZzaWc9NGNYSklqVlJaSElCV3FIalBnRG4lMkYwMW9jenBEV1hpcG1QQ1VrM1o4dmJRJTNE" | base64 --decode)
-wget -O ViT-L-14-sd.pt $DLINK
-DLINK=$(echo -n "aHR0cHM6Ly9jb252ZXJzYXRpb25odWIuYmxvYi5jb3JlLndpbmRvd3MubmV0L2JlaXQtc2hhcmUtcHVibGljL2tvc21vc2cvY2hlY2twb2ludF9zdGFnZTEucHQ/c3Y9MjAyMy0wMS0wMyZzdD0yMDI0LTA0LTEwVDEzJTNBMTElM0E0NFomc2U9MjA1MC0wNC0xMVQxMyUzQTExJTNBMDBaJnNyPWMmc3A9ciZzaWc9NGNYSklqVlJaSElCV3FIalBnRG4lMkYwMW9jenBEV1hpcG1QQ1VrM1o4dmJRJTNE" | base64 --decode)
-wget -O checkpoint_stage1.pt $DLINK
-DLINK=$(echo -n "aHR0cHM6Ly9jb252ZXJzYXRpb25odWIuYmxvYi5jb3JlLndpbmRvd3MubmV0L2JlaXQtc2hhcmUtcHVibGljL2tvc21vc2cvY2hlY2twb2ludF9zdGFnZTIucHQ/c3Y9MjAyMy0wMS0wMyZzdD0yMDI0LTA0LTEwVDEzJTNBMTElM0E0NFomc2U9MjA1MC0wNC0xMVQxMyUzQTExJTNBMDBaJnNyPWMmc3A9ciZzaWc9NGNYSklqVlJaSElCV3FIalBnRG4lMkYwMW9jenBEV1hpcG1QQ1VrM1o4dmJRJTNE" | base64 --decode)
-wget -O checkpoint_stage2.pt $DLINK
-DLINK=$(echo -n "aHR0cHM6Ly9jb252ZXJzYXRpb25odWIuYmxvYi5jb3JlLndpbmRvd3MubmV0L2JlaXQtc2hhcmUtcHVibGljL2tvc21vc2cvY2hlY2twb2ludF9maW5hbC5wdD9zdj0yMDIzLTAxLTAzJnN0PTIwMjQtMDQtMTBUMTMlM0ExMSUzQTQ0WiZzZT0yMDUwLTA0LTExVDEzJTNBMTElM0EwMFomc3I9YyZzcD1yJnNpZz00Y1hKSWpWUlpISUJXcUhqUGdEbiUyRjAxb2N6cERXWGlwbVBDVWszWjh2YlElM0Q=" | base64 --decode)
-wget -O checkpoint_final.pt $DLINK
-```
-
-## Setup
-
-### Using Docker Image [Recommended]
-
-You can use our pre-built Docker image:
-
-```bash
-docker run --runtime=nvidia --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --name kosmosg --privileged=true -it -v /mnt:/mnt/ xichenpan/kosmosg:v1 /bin/bash
-git clone https://github.com/microsoft/unilm.git
-cd unilm/kosmos-g
-pip install torchscale/
-pip install open_clip/
-pip install fairseq/
-pip install infinibatch/
-```
-
-You can also start from the official NVIDIA Docker image and install all dependencies manually:
-
-```bash
-docker run --runtime=nvidia --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 --name kosmosg --privileged=true -it -v /mnt:/mnt/ nvcr.io/nvidia/pytorch:22.10-py3 /bin/bash
-apt-get install -y libsm6 libxext6 libxrender-dev
-git clone https://github.com/microsoft/unilm.git
-cd unilm/kosmos-g
-bash vl_setup.sh
-```
-
-### Using Base Environment
-Make sure you have PyTorch 1.13.0 and nvcc 11.x installed.
-```bash
-git clone https://github.com/microsoft/unilm.git
-cd unilm/kosmos-g
-bash vl_setup.sh
-```
-
-## Demo
-
-If you would like to host a local Gradio demo, run the following command after [setup](#setup):
-```bash
-bash runapp.sh
-```
-Be sure to adjust the guidance scale if you find the default one leads to over-saturated images.
-
-## Training
-
-### Preparing dataset
-
-Refer to [this guide](scripts/README.md) to prepare the dataset.
-
-### Train script
-After preparing the data, run the following command to train the model.
-Be sure to change the directories in the script to your own.
-For the image decoder aligning stage:
-```bash
-bash runalign.sh
-```
-For the instruction tuning stage:
-```bash
-bash runtrain.sh
-```
-
-## Evaluation
-
-### FID score on COCO (2014) val set
-
-Download and unzip the COCO (2014) val set:
-```shell
-mkdir coco
-cd coco
-wget http://images.cocodataset.org/zips/val2014.zip
-wget http://images.cocodataset.org/annotations/annotations_trainval2014.zip
-unzip val2014.zip
-```
-Specify the cfg in `sample_kosmosg_coco.py` and run the script to evaluate:
-```shell
-bash runeval_coco.sh
-```
-
-### DINO score, CLIP-I score and CLIP-T score on DreamBench
-Download DreamBench:
-```shell
-mkdir dreambench
-cd dreambench
-git clone https://github.com/google/dreambooth.git
-```
-
-We keep only one image for each entity as described in our paper.
-```shell
-bash scripts/remove_dreambench_multiimg.sh /path/to/dreambench/dreambooth/dataset
-```
-
-Specify the cfg in `sample_kosmosg_dreambench.py` and run the script to evaluate:
-```shell
-bash runeval_dreambench.sh
-```
-
-## Citation
-
-If you find this repository useful, please consider citing our work:
-```
-@article{kosmos-g,
-  title={{Kosmos-G}: Generating Images in Context with Multimodal Large Language Models},
-  author={Xichen Pan and Li Dong and Shaohan Huang and Zhiliang Peng and Wenhu Chen and Furu Wei},
-  journal={ArXiv},
-  year={2023},
-  volume={abs/2310.02992}
-}
-```
-
-## Acknowledgement
-
-This repository is built using [torchscale](https://github.com/microsoft/torchscale), [fairseq](https://github.com/facebookresearch/fairseq), and [open_clip](https://github.com/mlfoundations/open_clip). We thank the authors of [Nerfies](https://github.com/nerfies/nerfies.github.io), who kindly open-sourced the template of the project page.
-
-## License
-This project is licensed under the license found in the LICENSE file in the root directory of this source tree.
-
-[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct)
-
-### Contact Information
-
-For help or issues using models, please submit a GitHub issue.
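The FID evaluation in the README above compares samples produced by `sample_kosmosg_coco.py` against the COCO val images. As a reference, the score computation itself can be sketched with the `pytorch-fid` package; the two folder paths below are hypothetical stand-ins, and `runeval_coco.sh` remains the supported route:

```python
# Minimal FID sketch, assuming `pip install pytorch-fid`.
# Folder paths are hypothetical; runeval_coco.sh handles this end to end.
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

fid = calculate_fid_given_paths(
    ["coco/val2014", "outputs/kosmosg_coco_samples"],  # real vs. generated
    batch_size=50,
    device="cuda" if torch.cuda.is_available() else "cpu",
    dims=2048,  # pool3 features of the standard InceptionV3
)
print(f"FID: {fid:.2f}")
```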
diff --git a/kosmos-g/app.py b/kosmos-g/app.py
deleted file mode 100644
index 32b84e731..000000000
--- a/kosmos-g/app.py
+++ /dev/null
@@ -1,145 +0,0 @@
-from app_model import AppModel
-from app_utils import *
-from controlnet.app_canny import create_demo_canny
-from controlnet.app_depth import create_demo_depth
-from controlnet.app_ip2p import create_demo_ip2p
-from controlnet.app_lineart import create_demo_lineart
-from controlnet.app_mlsd import create_demo_mlsd
-from controlnet.app_normal import create_demo_normal
-from controlnet.app_openpose import create_demo_openpose
-from controlnet.app_scribble import create_demo_scribble
-from controlnet.app_scribble_interactive import create_demo_scribble_interactive
-from controlnet.app_segmentation import create_demo_segmentation
-from controlnet.app_shuffle import create_demo_shuffle
-from controlnet.app_softedge import create_demo_softedge
-from fairseq import options
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-from fairseq.distributed import utils as distributed_utils
-
-
-def main(cfg):
-    appmodel = AppModel(cfg)
-
-    with gr.Blocks() as app:
-        with gr.Row():
-            with gr.Column(scale=5):
-                ckpt = gr.Textbox(value=cfg.model.pretrained_ckpt_path, show_label=False, container=False)
-            with gr.Column(scale=4):
-                current_ckpt = gr.Textbox(show_label=False, container=False)
-            with gr.Column(scale=1, min_width=100):
-                scheduler = gr.Dropdown(['dpms', 'pndm', 'ddim'], value='dpms', show_label=False, container=False,
-                                        min_width=60)
-        with gr.Row():
-            with gr.Column(scale=5):
-                lora = gr.Dropdown(['None'] + appmodel.get_available_lora(), value=cfg.model.lora_name,
-                                   show_label=False, container=False)
-            with gr.Column(scale=4):
-                current_lora = gr.Textbox(show_label=False, container=False)
-            with gr.Column(scale=1, min_width=60):
-                set_ckpt_scheduler_button = gr.Button('Set', container=False, min_width=60)
-
-        set_ckpt_scheduler_button.click(
-            fn=appmodel.set_ckpt_scheduler_fn, inputs=[ckpt, scheduler], outputs=current_ckpt, queue=False
-        ).then(fn=appmodel.load_lora, inputs=lora, outputs=current_lora, queue=False)
-
-        with gr.Tabs():
-            with gr.TabItem('KOSMOS-G'):
-                with gr.Blocks():
-                    with gr.Row():
-                        with gr.Column(scale=1):
-                            prompt = gr.Textbox(label="Prompt", max_lines=1,
-                                                placeholder="Use <i> to represent the images in prompt")
-                            num_input_images = gr.Slider(1, MAX_INPUT_IMAGES, value=DEFAULT_INPUT_IMAGES, step=1,
-                                                         label="Number of input images:")
-                            input_images = [gr.Image(label=f'img{i}', type="pil",
-                                                     visible=True if i < DEFAULT_INPUT_IMAGES else False)
-                                            for i in range(MAX_INPUT_IMAGES)]
-                            num_input_images.change(variable_images, num_input_images, input_images)
-                            text_guidance_scale = gr.Slider(1, 15, value=6, step=0.5, label="Text Guidance Scale")
-
-                            seed = gr.Slider(label="Seed", minimum=MIN_SEED, maximum=MAX_SEED, step=1, value=0)
-                            randomize_seed = gr.Checkbox(label='Randomize seed', value=True)
-                            run_button = gr.Button(label="Run")
-                            with gr.Accordion("Advanced options", open=False):
-                                lora_scale = gr.Slider(0, 1, value=0, step=0.05, label="LoRA Scale")
-                                num_inference_steps = gr.Slider(label="num_inference_steps", minimum=10, maximum=100,
-                                                                value=50, step=5)
-                                negative_prompt = gr.Textbox(label="Negative Prompt", max_lines=1, value="")
-                                num_images_per_prompt = gr.Slider(1, MAX_IMAGES_PER_PROMPT,
                                                                  value=4, step=1, label="Number of Images")
-                        with gr.Column(scale=2):
-                            result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery",
-                                                        columns=2, height='100%')
-
-                    ips = [prompt, lora_scale, num_inference_steps, text_guidance_scale, negative_prompt,
-                           num_images_per_prompt, *input_images]
-
-                    prompt.submit(
-                        fn=appmodel.set_ckpt_scheduler_fn, inputs=[ckpt, scheduler], outputs=current_ckpt, queue=False
-                    ).then(fn=appmodel.load_lora, inputs=lora, outputs=current_lora, queue=False).then(
-                        fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-                    ).then(fn=appmodel.kosmosg_generation, inputs=ips, outputs=result_gallery)
-
-                    run_button.click(
-                        fn=appmodel.set_ckpt_scheduler_fn, inputs=[ckpt, scheduler], outputs=current_ckpt, queue=False
-                    ).then(fn=appmodel.load_lora, inputs=lora, outputs=current_lora, queue=False).then(
-                        fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-                    ).then(fn=appmodel.kosmosg_generation, inputs=ips, outputs=result_gallery)
-
-                    gr.Examples(
-                        examples=[
-                            ['<i>', 'appimg/dog.jpg', None],
-                            ['<i> swimming underwater', 'appimg/dog.jpg', None],
-                            ['<i> in Batman suit', 'appimg/dog.jpg', None],
-                            ['<i> as an oil painting by Vincent van Gogh', 'appimg/dog.jpg', None],
-                            ['<i> in Minecraft', 'appimg/dog.jpg', None],
-                            ['<i> in the suit of <i>', 'appimg/dog2.jpg', 'appimg/ironman.jpg'],
-                            ['<i> in Unity3D', 'appimg/car.jpg', None],
-                            ['<i>', 'appimg/bengio.jpg', None],
-                            ['<i> as an oil painting in the style of <i>', 'appimg/bengio.jpg', 'appimg/vangogh.jpg'],
-                            ['<i> wearing <i>', 'appimg/bengio.jpg', 'appimg/sunglasses.jpg'],
-                            ['<i> in <i>\'s jacket', 'appimg/bengio.jpg', 'appimg/huang.jpg'],
-                            ['<i> taking a selfie at <i>', 'appimg/bengio.jpg', 'appimg/ijen.jpg'],
-                            ['<i> in the style of <i>', 'appimg/bengio.jpg', 'appimg/uname.jpg'],
-                        ],
-                        inputs=[prompt, input_images[0], input_images[1]],
-                        cache_examples=False,
-                        examples_per_page=100
-                    )
-
-            with gr.TabItem('ControlNet KOSMOS-G'):
-                with gr.Tabs():
-                    with gr.TabItem('Canny'):
-                        create_demo_canny(appmodel.controlnet_generation_canny)
-                    with gr.TabItem('MLSD'):
-                        create_demo_mlsd(appmodel.controlnet_generation_mlsd)
-                    with gr.TabItem('Scribble'):
-                        create_demo_scribble(appmodel.controlnet_generation_scribble)
-                    with gr.TabItem('Scribble Interactive'):
-                        create_demo_scribble_interactive(appmodel.controlnet_generation_scribble_interactive)
-                    with gr.TabItem('SoftEdge'):
-                        create_demo_softedge(appmodel.controlnet_generation_softedge)
-                    with gr.TabItem('OpenPose'):
-                        create_demo_openpose(appmodel.controlnet_generation_openpose)
-                    with gr.TabItem('Segmentation'):
-                        create_demo_segmentation(appmodel.controlnet_generation_segmentation)
-                    with gr.TabItem('Depth'):
-                        create_demo_depth(appmodel.controlnet_generation_depth)
-                    with gr.TabItem('Normal map'):
-                        create_demo_normal(appmodel.controlnet_generation_normal)
-                    with gr.TabItem('Lineart'):
-                        create_demo_lineart(appmodel.controlnet_generation_lineart)
-                    with gr.TabItem('Content Shuffle'):
-                        create_demo_shuffle(appmodel.controlnet_generation_shuffle)
-                    with gr.TabItem('Instruct Pix2Pix'):
-                        create_demo_ip2p(appmodel.controlnet_generation_ip2p)
-
-    app.queue(concurrency_count=1).launch(share=True)
-
-
-if __name__ == "__main__":
-    parser = options.get_training_parser()
-    args = options.parse_args_and_arch(parser, modify_parser=None)
-
-    cfg = convert_namespace_to_omegaconf(args)
-    distributed_utils.call_main(cfg, main)
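The `app.py` above wires every UI trigger through the same chain: reload the checkpoint and scheduler, reload the LoRA, re-seed, then generate. A minimal, self-contained sketch of this gradio `.click(...).then(...)` pattern, with hypothetical stub functions standing in for `set_ckpt_scheduler_fn` and `kosmosg_generation` (assumes gradio 3.x, where `.then()` is available):

```python
# Minimal sketch of the event chain used above (gradio 3.x).
# `prepare` and `generate` are made-up stand-ins for the real handlers.
import gradio as gr

def prepare(text):
    # e.g. reload checkpoint, scheduler, and LoRA before sampling
    return f"ready: {text}"

def generate(text):
    return f"output for: {text}"

with gr.Blocks() as demo:
    prompt = gr.Textbox(label="Prompt")
    status = gr.Textbox(label="Status")
    output = gr.Textbox(label="Output")
    run = gr.Button("Run")
    # Each step runs only after the previous one finishes;
    # queue=False keeps the lightweight setup step off the queue.
    run.click(fn=prepare, inputs=prompt, outputs=status, queue=False) \
        .then(fn=generate, inputs=prompt, outputs=output)

if __name__ == "__main__":
    demo.launch()
```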
diff --git a/kosmos-g/app_model.py b/kosmos-g/app_model.py
deleted file mode 100644
index ef4e9d49d..000000000
--- a/kosmos-g/app_model.py
+++ /dev/null
@@ -1,513 +0,0 @@
-import argparse
-import gc
-import os
-
-from torchvision.transforms import InterpolationMode
-
-BICUBIC = InterpolationMode.BICUBIC
-
-from tiktoken.core import Encoding
-from torchvision.transforms import CenterCrop, Compose, Resize
-from diffusers import ControlNetModel
-from diffusers.schedulers import DPMSolverMultistepScheduler, PNDMScheduler, DDIMScheduler
-
-from app_utils import *
-from controlnet.preprocessor import ControlNet_Preprocessor
-from fairseq import checkpoint_utils, tasks, utils
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-from unilm.data.vl.openimage_loader import NumpyNormalize
-
-
-class AppModel:
-    def __init__(self, cfg):
-        if isinstance(cfg, argparse.Namespace):
-            cfg = convert_namespace_to_omegaconf(cfg)
-        cfg.model.align = False
-        cfg.model.checkpoint_activations = False
-        utils.import_user_module(cfg.common)
-        task = tasks.setup_task(cfg.task)
-        model = task.build_model(cfg.model)
-        model.freeze_params(model.parameters())
-        model.half()
-        model.cuda()
-        model.eval()
-
-        self.cfg = cfg
-        self.model = model
-        self.model_cache = {}
-        self.bos_id = task.dictionary.bos()
-        self.eos_id = task.dictionary.eos()
-        self.boi_id = task.dictionary.index(BOI_SYMBOL)
-        self.eoi_id = task.dictionary.index(EOI_SYMBOL)
-        self.dictionary = task.dictionary
-        self.tokenizer = task.tokenizer
-        self.text_transform = self._build_text_transform()
-        self.image_transform = self._build_image_transform()
-        self.task_name = ""
-        self.controlnet = None
-        self.controlnet_preprocessor = ControlNet_Preprocessor()
-
-    def _build_image_transform(self):
-        preprocess_image = {
-            'gpt': Compose([
-                Resize(224, interpolation=BICUBIC),
-                CenterCrop(224),
-                NumpyNormalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711))
-            ]),
-            'diff': Compose([
-                Resize(512),
-                CenterCrop(512),
-                NumpyNormalize([0.5], [0.5])
-            ])
-        }
-        return preprocess_image
-
-    def _build_text_transform(self):
-        def text_transform(text):
-            append_eos = False
-            fs_dict = self.dictionary
-            if isinstance(self.tokenizer, Encoding):
-                words = list(map(str, self.tokenizer.encode(text, allowed_special="all")))
-            else:
-                words = self.tokenizer.encode(text, out_type=str)
-            # ids = [fs_dict.bos_index]
-            ids = []
-            for i, word in enumerate(words):
-                idx = fs_dict.index(word)
-                ids.append(idx)
-            if append_eos:
-                ids.append(fs_dict.eos_index)
-            return ids
-
-        return text_transform
-
-    def load_controlnet_weight(self, task_name):
-        if task_name == self.task_name:
-            return
-
-        torch.cuda.empty_cache()
-        gc.collect()
-        self.controlnet = ControlNetModel.from_pretrained(CONTROLNET_MODEL_IDS[task_name], torch_dtype=torch.float16)
-        self.model.freeze_params(self.controlnet.parameters())
-        self.controlnet.cuda()
-        self.controlnet.eval()
-        torch.cuda.empty_cache()
-        gc.collect()
-        self.task_name = task_name
-
-    def get_available_lora(self):
-        # traverse all the available lora in cfg.model.lora_dir in created time order
-        if self.cfg.model.lora_dir == '':
-            return []
-        files = [f for f in os.listdir(self.cfg.model.lora_dir) if
-                 os.path.isfile(os.path.join(self.cfg.model.lora_dir, f))]
-        files_sorted = sorted(files, key=lambda x: os.path.getctime(os.path.join(self.cfg.model.lora_dir, x)))
-
-        return files_sorted
-
-    def load_lora(self, lora_name):
-        if lora_name == 'None':
-            if self.cfg.model.lora_name != 'None':
-                for _, module in self.model.diffusion_unet.unet.named_modules():
-                    if hasattr(module, "set_lora_layer"):
-                        module.set_lora_layer(None)
-                self.model.diffusion_unet.lora = False
-            self.cfg.model.lora_name = 'None'
-            return 'None'
-        try:
-            state_dict, network_alphas = self.model.diffusion_unet.lora_state_dict(
-                os.path.join(self.cfg.model.lora_dir, lora_name)
-            )
-
-            self.model.diffusion_unet.load_lora_into_unet(
-                state_dict, network_alphas=network_alphas, unet=self.model.diffusion_unet.unet, low_cpu_mem_usage=True
-            )
-            self.model.diffusion_unet.lora = True
-            self.cfg.model.lora_name = lora_name
-            return lora_name
-        except Exception:
-            return 'None'
-
-    def set_ckpt_scheduler_fn(self, ckpt, scheduler):
-        # reset scheduler if the class is changed
-        if scheduler == 'dpms' and not isinstance(self.model.diffusion_unet.scheduler, DPMSolverMultistepScheduler):
-            self.model.diffusion_unet.scheduler = DPMSolverMultistepScheduler.from_pretrained(
-                self.cfg.model.pretrained_model_name_or_path, subfolder="scheduler", torch_dtype=torch.float16,
-                revision="fp16"
-            )
-        elif scheduler == 'pndm' and not isinstance(self.model.diffusion_unet.scheduler, PNDMScheduler):
-            self.model.diffusion_unet.scheduler = PNDMScheduler.from_pretrained(
-                self.cfg.model.pretrained_model_name_or_path, subfolder="scheduler", torch_dtype=torch.float16,
-                revision="fp16"
-            )
-        elif scheduler == 'ddim' and not isinstance(self.model.diffusion_unet.scheduler, DDIMScheduler):
-            self.model.diffusion_unet.scheduler = DDIMScheduler.from_pretrained(
-                self.cfg.model.pretrained_model_name_or_path, subfolder="scheduler", torch_dtype=torch.float16,
-                revision="fp16"
-            )
-
-        if ckpt != self.cfg.model.pretrained_ckpt_path:
-            try:
-                if ckpt not in self.model_cache:
-                    state = checkpoint_utils.load_checkpoint_to_cpu(ckpt)
-                    self.model_cache[ckpt] = state["model"]
-                msg = self.model.load_state_dict(self.model_cache[ckpt], strict=False, args=self.cfg.model)
-                self.cfg.model.pretrained_ckpt_path = ckpt
-
-            except Exception:
-                if ckpt in self.model_cache:
-                    del self.model_cache[ckpt]
-        exp = self.cfg.model.pretrained_ckpt_path.split('/')[-2]
-        pt = self.cfg.model.pretrained_ckpt_path.split('/')[-1].split('.')[0].split('_')[-1]
-        return f'exp: {exp} pt: {pt}'
-
-    def kosmosg_preprocess(self, prompt, negative_prompt, *args, single_batch=True):
-        img_src_tokens = [im for im in args if im is not None]
-        assert len(img_src_tokens) == prompt.count('<i>'), \
-            "Number of images in prompt does not match the number of images uploaded"
-
-        gpt_img_src_tokens = [torch.tensor(self.image_transform['gpt'](im)) for im in img_src_tokens]
-
-        src_tokens = [self.bos_id]
-        img_gpt_input_mask = [0]
-
-        for i in range(len(img_src_tokens)):
-            text_snippet = prompt.split('<i>', 1)[0]
-            prompt = prompt.split('<i>', 1)[1]
-            text_token = self.text_transform(text_snippet)
-
-            src_tokens.extend(text_token + [self.boi_id] * (self.cfg.task.image_token_length + 1) + [self.eoi_id])
-            img_gpt_input_mask.extend([0] * len(text_token) + [0] + [1] * self.cfg.task.image_token_length + [0])
-
-        text_token = self.text_transform(prompt)
-        src_tokens.extend(text_token)
-        img_gpt_input_mask.extend([0] * len(text_token))
-
-        src_tokens = torch.LongTensor(src_tokens)
-        gpt_img_src_tokens = torch.stack(gpt_img_src_tokens).to(torch.float16) \
-            if len(gpt_img_src_tokens) > 0 else None
-        img_gpt_input_mask = torch.tensor(img_gpt_input_mask, dtype=torch.bool)
-
-        negative_tokens = torch.LongTensor([self.bos_id] + self.text_transform(negative_prompt))
-
-        if single_batch:
-            return src_tokens.unsqueeze(0), gpt_img_src_tokens, img_gpt_input_mask.unsqueeze(0), \
-                negative_tokens.unsqueeze(0)
-        else:
-            return src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens
-
-    @torch.inference_mode()
-    def kosmosg_generation(self, prompt, lora_scale, num_inference_steps, text_guidance_scale, negative_prompt,
-                          num_images_per_prompt, *args):
-        src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens = \
-            self.kosmosg_preprocess(prompt, negative_prompt, *args)
-
-        kwargs = {
-            'num_inference_steps': num_inference_steps,
-            'text_guidance_scale': text_guidance_scale,
-            'num_images_per_prompt': num_images_per_prompt,
-            'lora_scale': lora_scale,
-        }
-
-        image = self.model.sample(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens, **kwargs)
-        return image
-
-    @torch.inference_mode()
-    def controlnet_generation_canny(self, prompt, num_inference_steps, text_guidance_scale, negative_prompt,
-                                    num_images_per_prompt, control_image, image_resolution, low_threshold,
-                                    high_threshold, *args):
-        assert control_image is not None, 'Image is required'
-        assert image_resolution <= MAX_IMAGE_RESOLUTION, 'Image resolution is too high'
-
-        control_image = self.controlnet_preprocessor.preprocess_canny(control_image, image_resolution, low_threshold,
-                                                                      high_threshold)
-        src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens = \
-            self.kosmosg_preprocess(prompt, negative_prompt, *args)
-
-        self.load_controlnet_weight('Canny')
-
-        kwargs = {
-            'num_inference_steps': num_inference_steps,
-            'text_guidance_scale': text_guidance_scale,
-            'num_images_per_prompt': num_images_per_prompt,
-            'lora_scale': 0.0,
-        }
-
-        image = self.model.sample_controlnet(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens,
-                                             control_image, self.controlnet, **kwargs)
-        return [control_image] + image
-
-    @torch.inference_mode()
-    def controlnet_generation_mlsd(self, prompt, num_inference_steps, text_guidance_scale, negative_prompt,
-                                   num_images_per_prompt, control_image, image_resolution, preprocess_resolution,
-                                   value_threshold, distance_threshold, *args):
-        assert control_image is not None, 'Image is required'
-        assert image_resolution <= MAX_IMAGE_RESOLUTION, 'Image resolution is too high'
-
-        control_image = self.controlnet_preprocessor.preprocess_mlsd(control_image, image_resolution,
-                                                                     preprocess_resolution, value_threshold,
-                                                                     distance_threshold)
-        src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens = \
-            self.kosmosg_preprocess(prompt, negative_prompt, *args)
-
-        self.load_controlnet_weight('MLSD')
-
-        kwargs = {
-            'num_inference_steps': num_inference_steps,
-            'text_guidance_scale': text_guidance_scale,
-            'num_images_per_prompt': num_images_per_prompt,
-            'lora_scale': 0.0,
-        }
-
-        image = self.model.sample_controlnet(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens,
-                                             control_image, self.controlnet, **kwargs)
-        return [control_image] + image
-
-    @torch.inference_mode()
-    def controlnet_generation_scribble(self, prompt, num_inference_steps, text_guidance_scale, negative_prompt,
-                                       num_images_per_prompt, control_image, image_resolution, preprocess_resolution,
-                                       preprocessor_name, *args):
-        assert control_image is not None, 'Image is required'
-        assert image_resolution <= MAX_IMAGE_RESOLUTION, 'Image resolution is too high'
-
-        control_image = self.controlnet_preprocessor.preprocess_scribble(control_image, image_resolution,
-                                                                         preprocess_resolution, preprocessor_name)
-        src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens = \
-            self.kosmosg_preprocess(prompt, negative_prompt, *args)
-
-        self.load_controlnet_weight('scribble')
-
-        kwargs = {
-            'num_inference_steps': num_inference_steps,
-            'text_guidance_scale': text_guidance_scale,
-            'num_images_per_prompt': num_images_per_prompt,
-            'lora_scale': 0.0,
-        }
-
-        image = self.model.sample_controlnet(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens,
-                                             control_image, self.controlnet, **kwargs)
-        return [control_image] + image
-
-    @torch.inference_mode()
-    def controlnet_generation_scribble_interactive(self, prompt, num_inference_steps, text_guidance_scale,
-                                                   negative_prompt, num_images_per_prompt, control_image_and_mask,
-                                                   image_resolution, *args):
-        assert control_image_and_mask is not None, 'Image is required'
-
-        control_image = self.controlnet_preprocessor.preprocess_scribble_interactive(control_image_and_mask,
-                                                                                     image_resolution)
-        src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens = \
-            self.kosmosg_preprocess(prompt, negative_prompt, *args)
-
-        self.load_controlnet_weight('scribble')
-
-        kwargs = {
-            'num_inference_steps': num_inference_steps,
-            'text_guidance_scale': text_guidance_scale,
-            'num_images_per_prompt': num_images_per_prompt,
-            'lora_scale': 0.0,
-        }
-
-        image = self.model.sample_controlnet(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens,
-                                             control_image, self.controlnet, **kwargs)
-        return [control_image] + image
-
-    @torch.inference_mode()
-    def controlnet_generation_softedge(self, prompt, num_inference_steps, text_guidance_scale, negative_prompt,
-                                       num_images_per_prompt, control_image, image_resolution, preprocess_resolution,
-                                       preprocessor_name, *args):
-        assert control_image is not None, 'Image is required'
-        assert image_resolution <= MAX_IMAGE_RESOLUTION, 'Image resolution is too high'
-
-        control_image = self.controlnet_preprocessor.preprocess_softedge(control_image, image_resolution,
-                                                                         preprocess_resolution, preprocessor_name)
-        src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens = \
-            self.kosmosg_preprocess(prompt, negative_prompt, *args)
-
-        self.load_controlnet_weight('softedge')
-
-        kwargs = {
-            'num_inference_steps': num_inference_steps,
-            'text_guidance_scale': text_guidance_scale,
-            'num_images_per_prompt': num_images_per_prompt,
-            'lora_scale': 0.0,
-        }
-
-        image = self.model.sample_controlnet(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens,
-                                             control_image, self.controlnet, **kwargs)
-        return [control_image] + image
-
-    @torch.inference_mode()
-    def controlnet_generation_openpose(self, prompt, num_inference_steps, text_guidance_scale, negative_prompt,
-                                       num_images_per_prompt, control_image, image_resolution, preprocess_resolution,
-                                       preprocessor_name, *args):
-        assert control_image is not None, 'Image is required'
-        assert image_resolution <= MAX_IMAGE_RESOLUTION, 'Image resolution is too high'
-
-        control_image = self.controlnet_preprocessor.preprocess_openpose(control_image, image_resolution,
-                                                                         preprocess_resolution, preprocessor_name)
-        src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens = \
-            self.kosmosg_preprocess(prompt, negative_prompt, *args)
-
-        self.load_controlnet_weight('Openpose')
-
-        kwargs = {
-            'num_inference_steps': num_inference_steps,
-            'text_guidance_scale': text_guidance_scale,
-            'num_images_per_prompt': num_images_per_prompt,
-            'lora_scale': 0.0,
-        }
-
-        image = self.model.sample_controlnet(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens,
-                                             control_image, self.controlnet, **kwargs)
-        return [control_image] + image
-
-    @torch.inference_mode()
-    def controlnet_generation_segmentation(self, prompt, num_inference_steps, text_guidance_scale, negative_prompt,
-                                           num_images_per_prompt, control_image, image_resolution,
-                                           preprocess_resolution, preprocessor_name, *args):
-        assert control_image is not None, 'Image is required'
-        assert image_resolution <= MAX_IMAGE_RESOLUTION, 'Image resolution is too high'
-
-        control_image = self.controlnet_preprocessor.preprocess_segmentation(control_image, image_resolution,
-                                                                             preprocess_resolution, preprocessor_name)
-        src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens = \
-            self.kosmosg_preprocess(prompt, negative_prompt, *args)
-
-        self.load_controlnet_weight('segmentation')
-
-        kwargs = {
-            'num_inference_steps': num_inference_steps,
-            'text_guidance_scale': text_guidance_scale,
-            'num_images_per_prompt': num_images_per_prompt,
-            'lora_scale': 0.0,
-        }
-
-        image = self.model.sample_controlnet(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens,
-                                             control_image, self.controlnet, **kwargs)
-        return [control_image] + image
-
-    @torch.inference_mode()
-    def controlnet_generation_depth(self, prompt, num_inference_steps, text_guidance_scale, negative_prompt,
-                                    num_images_per_prompt, control_image, image_resolution, preprocess_resolution,
-                                    preprocessor_name, *args):
-        assert control_image is not None, 'Image is required'
-        assert image_resolution <= MAX_IMAGE_RESOLUTION, 'Image resolution is too high'
-
-        control_image = self.controlnet_preprocessor.preprocess_depth(control_image, image_resolution,
-                                                                      preprocess_resolution, preprocessor_name)
-        src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens = \
-            self.kosmosg_preprocess(prompt, negative_prompt, *args)
-
-        self.load_controlnet_weight('depth')
-
-        kwargs = {
-            'num_inference_steps': num_inference_steps,
-            'text_guidance_scale': text_guidance_scale,
-            'num_images_per_prompt': num_images_per_prompt,
-            'lora_scale': 0.0,
-        }
-
-        image = self.model.sample_controlnet(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens,
-                                             control_image, self.controlnet, **kwargs)
-        return [control_image] + image
-
-    @torch.inference_mode()
-    def controlnet_generation_normal(self, prompt, num_inference_steps, text_guidance_scale, negative_prompt,
-                                     num_images_per_prompt, control_image, image_resolution, preprocess_resolution,
-                                     preprocessor_name, *args):
-        assert control_image is not None, 'Image is required'
-        assert image_resolution <= MAX_IMAGE_RESOLUTION, 'Image resolution is too high'
-
-        control_image = self.controlnet_preprocessor.preprocess_normal(control_image, image_resolution,
-                                                                       preprocess_resolution, preprocessor_name)
-        src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens = \
-            self.kosmosg_preprocess(prompt, negative_prompt, *args)
-
-        self.load_controlnet_weight('NormalBae')
-
-        kwargs = {
-            'num_inference_steps': num_inference_steps,
-            'text_guidance_scale': text_guidance_scale,
-            'num_images_per_prompt': num_images_per_prompt,
-            'lora_scale': 0.0,
-        }
-
-        image = self.model.sample_controlnet(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens,
-                                             control_image, self.controlnet, **kwargs)
-        return [control_image] + image
-
-    @torch.inference_mode()
-    def controlnet_generation_lineart(self, prompt, num_inference_steps, text_guidance_scale, negative_prompt,
-                                      num_images_per_prompt, control_image, image_resolution, preprocess_resolution,
-                                      preprocessor_name, *args):
-        assert control_image is not None, 'Image is required'
-        assert image_resolution <= MAX_IMAGE_RESOLUTION, 'Image resolution is too high'
-
-        control_image = self.controlnet_preprocessor.preprocess_lineart(control_image, image_resolution,
-                                                                        preprocess_resolution, preprocessor_name)
-        src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens = \
-            self.kosmosg_preprocess(prompt, negative_prompt, *args)
-
-        if 'anime' in preprocessor_name:
-            self.load_controlnet_weight('lineart_anime')
-        else:
-            self.load_controlnet_weight('lineart')
-
-        kwargs = {
-            'num_inference_steps': num_inference_steps,
-            'text_guidance_scale': text_guidance_scale,
-            'num_images_per_prompt': num_images_per_prompt,
-            'lora_scale': 0.0,
-        }
-
-        image = self.model.sample_controlnet(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens,
-                                             control_image, self.controlnet, **kwargs)
-        return [control_image] + image
-
-    @torch.inference_mode()
-    def controlnet_generation_shuffle(self, prompt, num_inference_steps, text_guidance_scale, negative_prompt,
-                                      num_images_per_prompt, control_image, image_resolution, preprocessor_name, *args):
-        assert control_image is not None, 'Image is required'
-        assert image_resolution <= MAX_IMAGE_RESOLUTION, 'Image resolution is too high'
-
-        control_image = self.controlnet_preprocessor.preprocess_shuffle(control_image, image_resolution,
-                                                                        preprocessor_name)
-        src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens = \
-            self.kosmosg_preprocess(prompt, negative_prompt, *args)
-
-        self.load_controlnet_weight('shuffle')
-
-        kwargs = {
-            'num_inference_steps': num_inference_steps,
-            'text_guidance_scale': text_guidance_scale,
-            'num_images_per_prompt': num_images_per_prompt,
-            'lora_scale': 0.0,
-        }
-
-        image = self.model.sample_controlnet(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens,
-                                             control_image, self.controlnet, **kwargs)
-        return [control_image] + image
-
-    @torch.inference_mode()
-    def controlnet_generation_ip2p(self, prompt, num_inference_steps, text_guidance_scale, negative_prompt,
-                                   num_images_per_prompt, control_image, image_resolution, *args):
-        assert control_image is not None, 'Image is required'
-        assert image_resolution <= MAX_IMAGE_RESOLUTION, 'Image resolution is too high'
-
-        control_image = self.controlnet_preprocessor.preprocess_ip2p(control_image, image_resolution)
-        src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens = \
-            self.kosmosg_preprocess(prompt, negative_prompt, *args)
-
-        self.load_controlnet_weight('ip2p')
-
-        kwargs = {
-            'num_inference_steps': num_inference_steps,
-            'text_guidance_scale': text_guidance_scale,
-            'num_images_per_prompt': num_images_per_prompt,
-            'lora_scale': 0.0,
-        }
-
-        image = self.model.sample_controlnet(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens,
-                                             control_image, self.controlnet, **kwargs)
-        return [control_image] + image
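`kosmosg_preprocess` above flattens an interleaved prompt into one token stream plus a parallel mask marking the slots that will later be filled with image embeddings. A toy illustration of that layout, with made-up token ids and a shortened image span (the real code uses the fairseq dictionary and `cfg.task.image_token_length` placeholder tokens per `<i>`):

```python
# Toy illustration of the interleaved layout built by kosmosg_preprocess.
# IDs are made up; the real code uses a much longer image span.
BOS, BOI, EOI = 0, 98, 99
IMAGE_TOKEN_LENGTH = 4  # the real config uses a larger value

def build_stream(text_ids_by_segment):
    """text_ids_by_segment: token ids for the text between <i> markers."""
    src_tokens, img_mask = [BOS], [0]
    for i, seg in enumerate(text_ids_by_segment):
        src_tokens += seg
        img_mask += [0] * len(seg)
        if i < len(text_ids_by_segment) - 1:  # an <i> follows this segment
            # BOI, image_token_length placeholder slots, EOI
            src_tokens += [BOI] * (IMAGE_TOKEN_LENGTH + 1) + [EOI]
            # only the placeholder slots are marked for image embeddings
            img_mask += [0] + [1] * IMAGE_TOKEN_LENGTH + [0]
    return src_tokens, img_mask

# "<i> swimming underwater" -> empty text before the image, 2 tokens after
tokens, mask = build_stream([[], [11, 12]])
assert len(tokens) == len(mask)
print(tokens)  # [0, 98, 98, 98, 98, 98, 99, 11, 12]
print(mask)    # [0, 0, 1, 1, 1, 1, 0, 0, 0]
```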
diff --git a/kosmos-g/app_utils.py b/kosmos-g/app_utils.py
deleted file mode 100644
index 8c138810a..000000000
--- a/kosmos-g/app_utils.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import random
-
-import gradio as gr
-import numpy as np
-import torch
-
-controlnet_example = [
-    ['appimg/doctor.jpg', '<i>', 'appimg/bengio.jpg', None],
-    ['appimg/doctor.jpg', '<i> as an oil painting in the style of <i>', 'appimg/bengio.jpg', 'appimg/vangogh.jpg'],
-]
-
-BOI_SYMBOL = "<image>"
-EOI_SYMBOL = "</image>"
-MIN_SEED = 0
-MAX_SEED = np.iinfo(np.int32).max
-MAX_COLORS = 12
-MAX_INPUT_IMAGES = 10
-DEFAULT_INPUT_IMAGES = 2
-MAX_IMAGES_PER_PROMPT = 4
-DEFAULT_IMAGES_PER_PROMPT = 1
-
-MIN_IMAGE_RESOLUTION = 256
-MAX_IMAGE_RESOLUTION = 768
-DEFAULT_IMAGE_RESOLUTION = 768
-
-CONTROLNET_MODEL_IDS = {
-    'Openpose': 'lllyasviel/control_v11p_sd15_openpose',
-    'Canny': 'lllyasviel/control_v11p_sd15_canny',
-    'MLSD': 'lllyasviel/control_v11p_sd15_mlsd',
-    'scribble': 'lllyasviel/control_v11p_sd15_scribble',
-    'softedge': 'lllyasviel/control_v11p_sd15_softedge',
-    'segmentation': 'lllyasviel/control_v11p_sd15_seg',
-    'depth': 'lllyasviel/control_v11f1p_sd15_depth',
-    'NormalBae': 'lllyasviel/control_v11p_sd15_normalbae',
-    'lineart': 'lllyasviel/control_v11p_sd15_lineart',
-    'lineart_anime': 'lllyasviel/control_v11p_sd15s2_lineart_anime',
-    'shuffle': 'lllyasviel/control_v11e_sd15_shuffle',
-    'ip2p': 'lllyasviel/control_v11e_sd15_ip2p',
-    'inpaint': 'lllyasviel/control_v11e_sd15_inpaint',
-}
-
-
-def randomize_seed_fn(seed, randomize_seed):
-    if randomize_seed:
-        seed = random.randint(0, MAX_SEED)
-    random.seed(seed)
-    np.random.seed(seed)
-    torch.manual_seed(seed)
-    torch.cuda.manual_seed_all(seed)
-    return seed
-
-
-def variable_images(k):
-    k = int(k)
-    return [gr.Image.update(visible=True)] * k + [gr.Image.update(visible=False)] * (MAX_INPUT_IMAGES - k)
diff --git a/kosmos-g/appimg/bengio.jpg b/kosmos-g/appimg/bengio.jpg
deleted file mode 100644
index f1f6e9d27..000000000
Binary files a/kosmos-g/appimg/bengio.jpg and /dev/null differ
diff --git a/kosmos-g/appimg/car.jpg b/kosmos-g/appimg/car.jpg
deleted file mode 100644
index 13cedab26..000000000
Binary files a/kosmos-g/appimg/car.jpg and /dev/null differ
diff --git a/kosmos-g/appimg/doctor.jpg b/kosmos-g/appimg/doctor.jpg
deleted file mode 100644
index c043f5d23..000000000
Binary files a/kosmos-g/appimg/doctor.jpg and /dev/null differ
diff --git a/kosmos-g/appimg/dog.jpg b/kosmos-g/appimg/dog.jpg
deleted file mode 100644
index 72148ad1a..000000000
Binary files a/kosmos-g/appimg/dog.jpg and /dev/null differ
diff --git a/kosmos-g/appimg/dog2.jpg b/kosmos-g/appimg/dog2.jpg
deleted file mode 100644
index f7726c1a2..000000000
Binary files a/kosmos-g/appimg/dog2.jpg and /dev/null differ
diff --git a/kosmos-g/appimg/huang.jpg b/kosmos-g/appimg/huang.jpg
deleted file mode 100644
index 3ea336f57..000000000
Binary files a/kosmos-g/appimg/huang.jpg and /dev/null differ
diff --git a/kosmos-g/appimg/ijen.jpg b/kosmos-g/appimg/ijen.jpg
deleted file mode 100644
index 0003c7322..000000000
Binary files a/kosmos-g/appimg/ijen.jpg and /dev/null differ
diff --git a/kosmos-g/appimg/ironman.jpg b/kosmos-g/appimg/ironman.jpg
deleted file mode 100644
index b3d2b5a73..000000000
Binary files a/kosmos-g/appimg/ironman.jpg and /dev/null differ
diff --git a/kosmos-g/appimg/sunglasses.jpg b/kosmos-g/appimg/sunglasses.jpg
deleted file mode 100644
index 037574579..000000000
Binary files a/kosmos-g/appimg/sunglasses.jpg and /dev/null differ
diff --git a/kosmos-g/appimg/uname.jpg b/kosmos-g/appimg/uname.jpg
deleted file mode 100644
index 57902f648..000000000
Binary files a/kosmos-g/appimg/uname.jpg and /dev/null differ
diff --git a/kosmos-g/appimg/vangogh.jpg b/kosmos-g/appimg/vangogh.jpg
deleted file mode 100644
index d372252c3..000000000
Binary files a/kosmos-g/appimg/vangogh.jpg and /dev/null differ
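Each of the ControlNet tabs below follows one contract: build the same prompt, seed, and advanced controls, then pass them, in order, to a generation function whose trailing `*input_images` are the `<i>` reference images and which returns the control image followed by the generations. A stub illustrating that contract (a hypothetical stand-in for the `AppModel.controlnet_generation_*` methods, not the real sampler):

```python
# Sketch of the shared create_demo_* contract (hypothetical stub).
from PIL import Image

def stub_generation_fn(prompt, num_inference_steps, text_guidance_scale,
                       negative_prompt, num_images_per_prompt,
                       control_image, image_resolution, *input_images):
    """Stand-in for AppModel.controlnet_generation_*: returns the control
    image followed by num_images_per_prompt outputs, as the real ones do."""
    outputs = [Image.new("RGB", (64, 64)) for _ in range(num_images_per_prompt)]
    return [control_image] + outputs

result = stub_generation_fn("<i> wearing <i>", 50, 6.0, "", 2,
                            Image.new("RGB", (64, 64)), 768,
                            Image.new("RGB", (64, 64)), Image.new("RGB", (64, 64)))
print(len(result))  # 3: control image + 2 generations
```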
diff --git a/kosmos-g/controlnet/app_canny.py b/kosmos-g/controlnet/app_canny.py
deleted file mode 100644
index 5e6a1514f..000000000
--- a/kosmos-g/controlnet/app_canny.py
+++ /dev/null
@@ -1,57 +0,0 @@
-from app_utils import *
-
-
-def create_demo_canny(generation_fn):
-    with gr.Blocks() as demo:
-        with gr.Row():
-            with gr.Column(scale=1):
-                image = gr.Image(label="Control image")
-                prompt = gr.Textbox(label="Prompt", max_lines=1,
-                                    placeholder="Use <i> to represent the images in prompt")
-                num_input_images = gr.Slider(1, MAX_INPUT_IMAGES, value=DEFAULT_INPUT_IMAGES, step=1,
-                                             label="Number of input images:")
-                input_images = [
-                    gr.Image(label=f'img{i}', type="pil", visible=True if i < DEFAULT_INPUT_IMAGES else False)
-                    for i in range(MAX_INPUT_IMAGES)]
-                num_input_images.change(variable_images, num_input_images, input_images)
-
-                seed = gr.Slider(label="Seed", minimum=MIN_SEED, maximum=MAX_SEED, step=1, value=0)
-                randomize_seed = gr.Checkbox(label='Randomize seed', value=True)
-                run_button = gr.Button(label="Run")
-                with gr.Accordion("Advanced options", open=False):
-                    num_inference_steps = gr.Slider(label="num_inference_steps", minimum=10, maximum=100, value=50,
-                                                    step=5)
-                    text_guidance_scale = gr.Slider(1, 15, value=6, step=0.5, label="Text Guidance Scale")
-                    negative_prompt = gr.Textbox(label="Negative Prompt", max_lines=1,
-                                                 value="")
-                    num_images_per_prompt = gr.Slider(1, MAX_IMAGES_PER_PROMPT, value=DEFAULT_IMAGES_PER_PROMPT, step=1,
-                                                      label="Number of Images")
-                    image_resolution = gr.Slider(label='Image resolution', minimum=MIN_IMAGE_RESOLUTION,
-                                                 maximum=MAX_IMAGE_RESOLUTION, value=DEFAULT_IMAGE_RESOLUTION, step=256)
-                    canny_low_threshold = gr.Slider(label='Canny low threshold', minimum=1, maximum=255, value=100,
-                                                    step=1)
-                    canny_high_threshold = gr.Slider(label='Canny high threshold', minimum=1, maximum=255, value=200,
-                                                     step=1)
-
-            with gr.Column(scale=2):
-                result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery", columns=2,
-                                            height='100%')
-        ips = [prompt, num_inference_steps, text_guidance_scale, negative_prompt, num_images_per_prompt, image,
-               image_resolution, canny_low_threshold, canny_high_threshold, *input_images]
-
-        prompt.submit(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        run_button.click(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        gr.Examples(
-            examples=controlnet_example,
-            inputs=[image, prompt, input_images[0], input_images[1]],
-            cache_examples=False,
-            examples_per_page=100
-        )
-
-    return demo
diff --git a/kosmos-g/controlnet/app_depth.py b/kosmos-g/controlnet/app_depth.py
deleted file mode 100644
index 80fd8a4d0..000000000
--- a/kosmos-g/controlnet/app_depth.py
+++ /dev/null
@@ -1,60 +0,0 @@
-from app_utils import *
-
-
-def create_demo_depth(generation_fn):
-    with gr.Blocks() as demo:
-        with gr.Row():
-            with gr.Column(scale=1):
-                image = gr.Image(label="Control image")
-                prompt = gr.Textbox(label="Prompt", max_lines=1,
-                                    placeholder="Use <i> to represent the images in prompt")
-                num_input_images = gr.Slider(1, MAX_INPUT_IMAGES, value=DEFAULT_INPUT_IMAGES, step=1,
-                                             label="Number of input images:")
-                input_images = [
-                    gr.Image(label=f'img{i}', type="pil", visible=True if i < DEFAULT_INPUT_IMAGES else False)
-                    for i in range(MAX_INPUT_IMAGES)]
-                num_input_images.change(variable_images, num_input_images, input_images)
-
-                seed = gr.Slider(label="Seed", minimum=MIN_SEED, maximum=MAX_SEED, step=1, value=0)
-                randomize_seed = gr.Checkbox(label='Randomize seed', value=True)
-                run_button = gr.Button(label="Run")
-                with gr.Accordion("Advanced options", open=False):
-                    num_inference_steps = gr.Slider(label="num_inference_steps", minimum=10, maximum=100, value=50,
-                                                    step=5)
-                    text_guidance_scale = gr.Slider(1, 15, value=6, step=0.5, label="Text Guidance Scale")
-                    negative_prompt = gr.Textbox(label="Negative Prompt", max_lines=1,
-                                                 value="")
-                    num_images_per_prompt = gr.Slider(1, MAX_IMAGES_PER_PROMPT, value=DEFAULT_IMAGES_PER_PROMPT, step=1,
-                                                      label="Number of Images")
-                    image_resolution = gr.Slider(label='Image resolution', minimum=MIN_IMAGE_RESOLUTION,
-                                                 maximum=MAX_IMAGE_RESOLUTION, value=DEFAULT_IMAGE_RESOLUTION, step=256)
-                    preprocess_resolution = gr.Slider(label='Preprocess resolution', minimum=128, maximum=512,
-                                                      value=384, step=1)
-                    preprocessor_name = gr.Radio(
-                        label='Preprocessor',
-                        choices=['Midas', 'DPT', 'None'],
-                        type='value',
-                        value='DPT')
-
-            with gr.Column(scale=2):
-                result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery", columns=2,
-                                            height='100%')
-        ips = [prompt, num_inference_steps, text_guidance_scale, negative_prompt, num_images_per_prompt, image,
-               image_resolution, preprocess_resolution, preprocessor_name, *input_images]
-
-        prompt.submit(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        run_button.click(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        gr.Examples(
-            examples=controlnet_example,
-            inputs=[image, prompt, input_images[0], input_images[1]],
-            cache_examples=False,
-            examples_per_page=100
-        )
-
-    return demo
diff --git a/kosmos-g/controlnet/app_ip2p.py b/kosmos-g/controlnet/app_ip2p.py
deleted file mode 100644
index b032b303c..000000000
--- a/kosmos-g/controlnet/app_ip2p.py
+++ /dev/null
@@ -1,50 +0,0 @@
-from app_utils import *
-
-
-def create_demo_ip2p(generation_fn):
-    with gr.Blocks() as demo:
-        with gr.Row():
-            with gr.Column(scale=1):
-                image = gr.Image(label="Control image")
-                prompt = gr.Textbox(label="Prompt", max_lines=1,
-                                    placeholder="Use <i> to represent the images in prompt")
-                num_input_images = gr.Slider(1, MAX_INPUT_IMAGES, value=DEFAULT_INPUT_IMAGES, step=1,
-                                             label="Number of input images:")
-                input_images = [
-                    gr.Image(label=f'img{i}', type="pil", visible=True if i < DEFAULT_INPUT_IMAGES else False)
-                    for i in range(MAX_INPUT_IMAGES)]
-                num_input_images.change(variable_images, num_input_images, input_images)
-
-                seed = gr.Slider(label="Seed", minimum=MIN_SEED, maximum=MAX_SEED, step=1, value=0)
-                randomize_seed = gr.Checkbox(label='Randomize seed', value=True)
-                run_button = gr.Button(label="Run")
-                with gr.Accordion("Advanced options", open=False):
-                    num_inference_steps = gr.Slider(label="num_inference_steps", minimum=10, maximum=100, value=50,
-                                                    step=5)
-                    text_guidance_scale = gr.Slider(1, 15, value=6, step=0.5, label="Text Guidance Scale")
-                    negative_prompt = gr.Textbox(label="Negative Prompt", max_lines=1,
-                                                 value="")
-                    num_images_per_prompt = gr.Slider(1, MAX_IMAGES_PER_PROMPT, value=DEFAULT_IMAGES_PER_PROMPT, step=1,
-                                                      label="Number of Images")
-                    image_resolution = gr.Slider(label='Image resolution', minimum=MIN_IMAGE_RESOLUTION,
-                                                 maximum=MAX_IMAGE_RESOLUTION, value=DEFAULT_IMAGE_RESOLUTION, step=256)
-
-            with gr.Column(scale=2):
-                result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery", columns=2,
-                                            height='100%')
-        ips = [prompt, num_inference_steps, text_guidance_scale, negative_prompt, num_images_per_prompt, image,
-               image_resolution, *input_images]
-
-        prompt.submit(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        run_button.click(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        gr.Examples(
-            examples=controlnet_example,
-            inputs=[image, prompt, input_images[0], input_images[1]],
-            cache_examples=False,
-            examples_per_page=100
-        )
diff --git a/kosmos-g/controlnet/app_lineart.py b/kosmos-g/controlnet/app_lineart.py
deleted file mode 100644
index cd00f16c7..000000000
--- a/kosmos-g/controlnet/app_lineart.py
+++ /dev/null
@@ -1,69 +0,0 @@
-from app_utils import *
-
-
-def create_demo_lineart(generation_fn):
-    with gr.Blocks() as demo:
-        with gr.Row():
-            with gr.Column(scale=1):
-                image = gr.Image(label="Control image")
-                prompt = gr.Textbox(label="Prompt", max_lines=1,
-                                    placeholder="Use <i> to represent the images in prompt")
-                num_input_images = gr.Slider(1, MAX_INPUT_IMAGES, value=DEFAULT_INPUT_IMAGES, step=1,
-                                             label="Number of input images:")
-                input_images = [
-                    gr.Image(label=f'img{i}', type="pil", visible=True if i < DEFAULT_INPUT_IMAGES else False)
-                    for i in range(MAX_INPUT_IMAGES)]
-                num_input_images.change(variable_images, num_input_images, input_images)
-
-                seed = gr.Slider(label="Seed", minimum=MIN_SEED, maximum=MAX_SEED, step=1, value=0)
-                randomize_seed = gr.Checkbox(label='Randomize seed', value=True)
-                run_button = gr.Button(label="Run")
-                with gr.Accordion("Advanced options", open=False):
-                    num_inference_steps = gr.Slider(label="num_inference_steps", minimum=10, maximum=100, value=50,
-                                                    step=5)
-                    text_guidance_scale = gr.Slider(1, 15, value=6, step=0.5, label="Text Guidance Scale")
-                    negative_prompt = gr.Textbox(label="Negative Prompt", max_lines=1,
-                                                 value="")
-                    num_images_per_prompt = gr.Slider(1, MAX_IMAGES_PER_PROMPT, value=DEFAULT_IMAGES_PER_PROMPT, step=1,
-                                                      label="Number of Images")
-                    image_resolution = gr.Slider(label='Image resolution', minimum=MIN_IMAGE_RESOLUTION,
-                                                 maximum=MAX_IMAGE_RESOLUTION, value=DEFAULT_IMAGE_RESOLUTION, step=256)
-                    preprocess_resolution = gr.Slider(label='Preprocess resolution', minimum=128, maximum=512,
-                                                      value=512, step=1)
-                    preprocessor_name = gr.Radio(
-                        label='Preprocessor',
-                        choices=[
-                            'Lineart',
-                            'Lineart coarse',
-                            'None',
-                            'Lineart (anime)',
-                            'None (anime)',
-                        ],
-                        type='value',
-                        value='Lineart',
-                        info=
-                        'Note that "Lineart (anime)" and "None (anime)" are for anime base models like Anything-v3.'
-                    )
-
-            with gr.Column(scale=2):
-                result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery", columns=2,
-                                            height='100%')
-        ips = [prompt, num_inference_steps, text_guidance_scale, negative_prompt, num_images_per_prompt, image,
-               image_resolution, preprocess_resolution, preprocessor_name, *input_images]
-
-        prompt.submit(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        run_button.click(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        gr.Examples(
-            examples=controlnet_example,
-            inputs=[image, prompt, input_images[0], input_images[1]],
-            cache_examples=False,
-            examples_per_page=100
-        )
-
-    return demo
diff --git a/kosmos-g/controlnet/app_mlsd.py b/kosmos-g/controlnet/app_mlsd.py
deleted file mode 100644
index 0e2881def..000000000
--- a/kosmos-g/controlnet/app_mlsd.py
+++ /dev/null
@@ -1,59 +0,0 @@
-from app_utils import *
-
-
-def create_demo_mlsd(generation_fn):
-    with gr.Blocks() as demo:
-        with gr.Row():
-            with gr.Column(scale=1):
-                image = gr.Image(label="Control image")
-                prompt = gr.Textbox(label="Prompt", max_lines=1,
-                                    placeholder="Use <i> to represent the images in prompt")
-                num_input_images = gr.Slider(1, MAX_INPUT_IMAGES, value=DEFAULT_INPUT_IMAGES, step=1,
-                                             label="Number of input images:")
-                input_images = [
-                    gr.Image(label=f'img{i}', type="pil", visible=True if i < DEFAULT_INPUT_IMAGES else False)
-                    for i in range(MAX_INPUT_IMAGES)]
-                num_input_images.change(variable_images, num_input_images, input_images)
-
-                seed = gr.Slider(label="Seed", minimum=MIN_SEED, maximum=MAX_SEED, step=1, value=0)
-                randomize_seed = gr.Checkbox(label='Randomize seed', value=True)
-                run_button = gr.Button(label="Run")
-                with gr.Accordion("Advanced options", open=False):
-                    num_inference_steps = gr.Slider(label="num_inference_steps", minimum=10, maximum=100, value=50,
-                                                    step=5)
-                    text_guidance_scale = gr.Slider(1, 15, value=6, step=0.5, label="Text Guidance Scale")
-                    negative_prompt = gr.Textbox(label="Negative Prompt", max_lines=1,
-                                                 value="")
-                    num_images_per_prompt = gr.Slider(1, MAX_IMAGES_PER_PROMPT, value=DEFAULT_IMAGES_PER_PROMPT, step=1,
-                                                      label="Number of Images")
-                    image_resolution = gr.Slider(label='Image resolution', minimum=MIN_IMAGE_RESOLUTION,
-                                                 maximum=MAX_IMAGE_RESOLUTION, value=DEFAULT_IMAGE_RESOLUTION, step=256)
-                    preprocess_resolution = gr.Slider(label='Preprocess resolution', minimum=128, maximum=512,
-                                                      value=512, step=1)
-                    mlsd_value_threshold = gr.Slider(label='Hough value threshold (MLSD)', minimum=0.01, maximum=2.0,
-                                                     value=0.1, step=0.01)
-                    mlsd_distance_threshold = gr.Slider(label='Hough distance threshold (MLSD)', minimum=0.01,
-                                                        maximum=20.0, value=0.1, step=0.01)
-
-            with gr.Column(scale=2):
-                result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery", columns=2,
-                                            height='100%')
-        ips = [prompt, num_inference_steps, text_guidance_scale, negative_prompt, num_images_per_prompt, image,
-               image_resolution, preprocess_resolution, mlsd_value_threshold, mlsd_distance_threshold, *input_images]
-
-        prompt.submit(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        run_button.click(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        gr.Examples(
-            examples=controlnet_example,
-            inputs=[image, prompt, input_images[0], input_images[1]],
-            cache_examples=False,
-            examples_per_page=100
-        )
-
-    return demo
diff --git a/kosmos-g/controlnet/app_normal.py b/kosmos-g/controlnet/app_normal.py
deleted file mode 100644
index a59165717..000000000
--- a/kosmos-g/controlnet/app_normal.py
+++ /dev/null
@@ -1,59 +0,0 @@
-from app_utils import *
-
-
-def create_demo_normal(generation_fn):
-    with gr.Blocks() as demo:
-        with gr.Row():
-            with gr.Column(scale=1):
-                image = gr.Image(label="Control image")
-                prompt = gr.Textbox(label="Prompt", max_lines=1,
-                                    placeholder="Use <i> to represent the images in prompt")
-                num_input_images = gr.Slider(1, MAX_INPUT_IMAGES, value=DEFAULT_INPUT_IMAGES, step=1,
-                                             label="Number of input images:")
-                input_images = [
-                    gr.Image(label=f'img{i}', type="pil", visible=True if i < DEFAULT_INPUT_IMAGES else False)
-                    for i in range(MAX_INPUT_IMAGES)]
-                num_input_images.change(variable_images, num_input_images, input_images)
-
-                seed = gr.Slider(label="Seed", minimum=MIN_SEED, maximum=MAX_SEED, step=1, value=0)
-                randomize_seed = gr.Checkbox(label='Randomize seed', value=True)
-                run_button = gr.Button(label="Run")
-                with gr.Accordion("Advanced options", open=False):
-                    num_inference_steps = gr.Slider(label="num_inference_steps", minimum=10, maximum=100, value=50,
-                                                    step=5)
-                    text_guidance_scale = gr.Slider(1, 15, value=6, step=0.5, label="Text Guidance Scale")
-                    negative_prompt = gr.Textbox(label="Negative Prompt", max_lines=1,
-                                                 value="")
-                    num_images_per_prompt = gr.Slider(1, MAX_IMAGES_PER_PROMPT, value=DEFAULT_IMAGES_PER_PROMPT, step=1,
-                                                      label="Number of Images")
-                    image_resolution = gr.Slider(label='Image resolution', minimum=MIN_IMAGE_RESOLUTION,
-                                                 maximum=MAX_IMAGE_RESOLUTION, value=DEFAULT_IMAGE_RESOLUTION, step=256)
-                    preprocess_resolution = gr.Slider(label='Preprocess resolution', minimum=128, maximum=512,
-                                                      value=384, step=1)
-                    preprocessor_name = gr.Radio(label='Preprocessor',
-                                                 choices=['NormalBae', 'None'],
-                                                 type='value',
-                                                 value='NormalBae')
-
-            with gr.Column(scale=2):
-                result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery", columns=2,
-                                            height='100%')
-        ips = [prompt, num_inference_steps, text_guidance_scale, negative_prompt, num_images_per_prompt, image,
-               image_resolution, preprocess_resolution, preprocessor_name, *input_images]
-
-        prompt.submit(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        run_button.click(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        gr.Examples(
-            examples=controlnet_example,
-            inputs=[image, prompt, input_images[0], input_images[1]],
-            cache_examples=False,
-            examples_per_page=100
-        )
-
-    return demo
diff --git a/kosmos-g/controlnet/app_openpose.py b/kosmos-g/controlnet/app_openpose.py
deleted file mode 100644
index ca3c8cdb6..000000000
--- a/kosmos-g/controlnet/app_openpose.py
+++ /dev/null
@@ -1,59 +0,0 @@
-from app_utils import *
-
-
-def create_demo_openpose(generation_fn):
-    with gr.Blocks() as demo:
-        with gr.Row():
-            with gr.Column(scale=1):
-                image = gr.Image(label="Control image")
-                prompt = gr.Textbox(label="Prompt", max_lines=1,
-                                    placeholder="Use <i> to represent the images in prompt")
-                num_input_images = gr.Slider(1, MAX_INPUT_IMAGES, value=DEFAULT_INPUT_IMAGES, step=1,
-                                             label="Number of input images:")
-                input_images = [
-                    gr.Image(label=f'img{i}', type="pil", visible=True if i < DEFAULT_INPUT_IMAGES else False)
-                    for i in range(MAX_INPUT_IMAGES)]
-                num_input_images.change(variable_images, num_input_images, input_images)
-
-                seed = gr.Slider(label="Seed", minimum=MIN_SEED, maximum=MAX_SEED, step=1, value=0)
-                randomize_seed = gr.Checkbox(label='Randomize seed', value=True)
-                run_button = gr.Button(label="Run")
-                with gr.Accordion("Advanced options", open=False):
-                    num_inference_steps = gr.Slider(label="num_inference_steps", minimum=10, maximum=100, value=50,
-                                                    step=5)
-                    text_guidance_scale = gr.Slider(1, 15, value=6, step=0.5, label="Text Guidance Scale")
-                    negative_prompt = gr.Textbox(label="Negative Prompt", max_lines=1,
-                                                 value="")
-                    num_images_per_prompt = gr.Slider(1, MAX_IMAGES_PER_PROMPT, value=DEFAULT_IMAGES_PER_PROMPT, step=1,
-                                                      label="Number of Images")
-                    image_resolution = gr.Slider(label='Image resolution', minimum=MIN_IMAGE_RESOLUTION,
-                                                 maximum=MAX_IMAGE_RESOLUTION, value=DEFAULT_IMAGE_RESOLUTION, step=256)
-                    preprocess_resolution = gr.Slider(label='Preprocess resolution', minimum=128, maximum=512,
-                                                      value=512, step=1)
-                    preprocessor_name = gr.Radio(label='Preprocessor',
-                                                 choices=['Openpose', 'None'],
-                                                 type='value',
-                                                 value='Openpose')
-
-            with gr.Column(scale=2):
-                result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery", columns=2,
-                                            height='100%')
-        ips = [prompt, num_inference_steps, text_guidance_scale, negative_prompt, num_images_per_prompt, image,
-               image_resolution, preprocess_resolution, preprocessor_name, *input_images]
-
-        prompt.submit(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        run_button.click(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        gr.Examples(
-            examples=controlnet_example,
-            inputs=[image, prompt, input_images[0], input_images[1]],
-            cache_examples=False,
-            examples_per_page=100
-        )
-
-    return demo
diff --git a/kosmos-g/controlnet/app_scribble.py b/kosmos-g/controlnet/app_scribble.py
deleted file mode 100644
index 444722ae9..000000000
--- a/kosmos-g/controlnet/app_scribble.py
+++ /dev/null
@@ -1,60 +0,0 @@
-from app_utils import *
-
-
-def create_demo_scribble(generation_fn):
-    with gr.Blocks() as demo:
-        with gr.Row():
-            with gr.Column(scale=1):
-                image = gr.Image(label="Control image")
-                prompt = gr.Textbox(label="Prompt", max_lines=1,
-                                    placeholder="Use <i> to represent the images in prompt")
-                num_input_images = gr.Slider(1, MAX_INPUT_IMAGES, value=DEFAULT_INPUT_IMAGES, step=1,
-                                             label="Number of input images:")
-                input_images = [
-                    gr.Image(label=f'img{i}', type="pil", visible=True if i < DEFAULT_INPUT_IMAGES else False)
-                    for i in range(MAX_INPUT_IMAGES)]
-                num_input_images.change(variable_images, num_input_images, input_images)
-
-                seed = gr.Slider(label="Seed", minimum=MIN_SEED, maximum=MAX_SEED, step=1, value=0)
-                randomize_seed = gr.Checkbox(label='Randomize seed', value=True)
-                run_button = gr.Button(label="Run")
-                with gr.Accordion("Advanced options", open=False):
-                    num_inference_steps = gr.Slider(label="num_inference_steps", minimum=10, maximum=100, value=50,
-                                                    step=5)
-                    text_guidance_scale = gr.Slider(1, 15, value=6, step=0.5, label="Text Guidance Scale")
-                    negative_prompt = gr.Textbox(label="Negative Prompt", max_lines=1,
-                                                 value="")
-                    num_images_per_prompt = gr.Slider(1, MAX_IMAGES_PER_PROMPT, value=DEFAULT_IMAGES_PER_PROMPT, step=1,
-                                                      label="Number of Images")
-                    image_resolution = gr.Slider(label='Image resolution', minimum=MIN_IMAGE_RESOLUTION,
-                                                 maximum=MAX_IMAGE_RESOLUTION, value=DEFAULT_IMAGE_RESOLUTION, step=256)
-                    preprocess_resolution = gr.Slider(label='Preprocess resolution', minimum=128, maximum=512,
-                                                      value=512, step=1)
-                    preprocessor_name = gr.Radio(
-                        label='Preprocessor',
-                        choices=['HED', 'PidiNet', 'None'],
-                        type='value',
-                        value='HED')
-
-            with gr.Column(scale=2):
-                result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery", columns=2,
-                                            height='100%')
-        ips = [prompt, num_inference_steps, text_guidance_scale, negative_prompt, num_images_per_prompt, image,
-               image_resolution, preprocess_resolution, preprocessor_name, *input_images]
-
-        prompt.submit(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        run_button.click(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        gr.Examples(
-            examples=controlnet_example,
-            inputs=[image, prompt, input_images[0], input_images[1]],
-            cache_examples=False,
-            examples_per_page=100
-        )
-
-    return demo
diff --git a/kosmos-g/controlnet/app_scribble_interactive.py b/kosmos-g/controlnet/app_scribble_interactive.py
deleted file mode 100644
index 57e7c9a59..000000000
--- a/kosmos-g/controlnet/app_scribble_interactive.py
+++ /dev/null
@@ -1,75 +0,0 @@
-from app_utils import *
-
-
-def create_canvas(w, h):
-    return np.zeros(shape=(h, w, 3), dtype=np.uint8) + 255
-
-
-def create_demo_scribble_interactive(generation_fn):
-    with gr.Blocks() as demo:
-        with gr.Row():
-            with gr.Column(scale=1):
-                canvas_width = gr.Slider(label='Canvas width',
-                                         minimum=256,
-                                         maximum=MAX_IMAGE_RESOLUTION,
-                                         value=DEFAULT_IMAGE_RESOLUTION,
-                                         step=1)
-                canvas_height = gr.Slider(label='Canvas height',
-                                          minimum=256,
-                                          maximum=MAX_IMAGE_RESOLUTION,
-                                          value=DEFAULT_IMAGE_RESOLUTION,
-                                          step=1)
-                create_button = gr.Button('Open drawing canvas!')
-                image = gr.Image(tool='sketch', brush_radius=10)
-                prompt = gr.Textbox(label="Prompt", max_lines=1,
-                                    placeholder="Use <i> to represent the images in prompt")
-                num_input_images = gr.Slider(1, MAX_INPUT_IMAGES, value=DEFAULT_INPUT_IMAGES, step=1,
-                                             label="Number of input images:")
-                input_images = [
-                    gr.Image(label=f'img{i}', type="pil", visible=True if i < DEFAULT_INPUT_IMAGES else False)
-                    for i in range(MAX_INPUT_IMAGES)]
-                num_input_images.change(variable_images, num_input_images, input_images)
-
-                seed = gr.Slider(label="Seed", minimum=MIN_SEED, maximum=MAX_SEED, step=1, value=0)
-                randomize_seed = gr.Checkbox(label='Randomize seed', value=True)
-                run_button = gr.Button(label="Run")
-                with gr.Accordion("Advanced options", open=False):
-                    num_inference_steps = gr.Slider(label="num_inference_steps", minimum=10, maximum=100, value=50,
-                                                    step=5)
-                    text_guidance_scale = gr.Slider(1, 15, value=6, step=0.5, label="Text Guidance Scale")
-                    negative_prompt = gr.Textbox(label="Negative Prompt", max_lines=1,
-                                                 value="")
-                    num_images_per_prompt = gr.Slider(1, MAX_IMAGES_PER_PROMPT, value=DEFAULT_IMAGES_PER_PROMPT, step=1,
-                                                      label="Number of Images")
-                    image_resolution = gr.Slider(label='Image resolution', minimum=MIN_IMAGE_RESOLUTION,
-                                                 maximum=MAX_IMAGE_RESOLUTION, value=DEFAULT_IMAGE_RESOLUTION,
-                                                 step=256)
-
-            with gr.Column(scale=2):
-                result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery", columns=2,
-                                            height='100%')
-        create_button.click(
-            fn=create_canvas,
-            inputs=[canvas_width, canvas_height],
-            outputs=image,
-            queue=False,
-            api_name=False,
-        )
-        ips = [prompt, num_inference_steps, text_guidance_scale, negative_prompt, num_images_per_prompt, image,
-               image_resolution, *input_images]
-
-        prompt.submit(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        run_button.click(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        gr.Examples(
-            examples=controlnet_example,
-            inputs=[image, prompt, input_images[0], input_images[1]],
-            cache_examples=False,
-            examples_per_page=100
-        )
-
-    return demo
diff --git a/kosmos-g/controlnet/app_segmentation.py b/kosmos-g/controlnet/app_segmentation.py
deleted file mode 100644
index 4ba7412b1..000000000
--- a/kosmos-g/controlnet/app_segmentation.py
+++ /dev/null
@@ -1,59 +0,0 @@
-from app_utils import *
-
-
-def create_demo_segmentation(generation_fn):
-    with gr.Blocks() as demo:
-        with gr.Row():
-            with gr.Column(scale=1):
-                image = gr.Image(label="Control image")
-                prompt = gr.Textbox(label="Prompt", max_lines=1,
-                                    placeholder="Use <i> to represent the images in prompt")
-                num_input_images = gr.Slider(1, MAX_INPUT_IMAGES, value=DEFAULT_INPUT_IMAGES, step=1,
-                                             label="Number of input images:")
-                input_images = [
-                    gr.Image(label=f'img{i}', type="pil", visible=True if i < DEFAULT_INPUT_IMAGES else False)
-                    for i in range(MAX_INPUT_IMAGES)]
-                num_input_images.change(variable_images, num_input_images, input_images)
-
-                seed = gr.Slider(label="Seed", minimum=MIN_SEED, maximum=MAX_SEED, step=1, value=0)
-                randomize_seed = gr.Checkbox(label='Randomize seed', value=True)
-                run_button = gr.Button(label="Run")
-                with gr.Accordion("Advanced options", open=False):
-                    num_inference_steps = gr.Slider(label="num_inference_steps", minimum=10, maximum=100, value=50,
-                                                    step=5)
-                    text_guidance_scale = gr.Slider(1, 15, value=6, step=0.5, label="Text Guidance Scale")
-                    negative_prompt = gr.Textbox(label="Negative Prompt", max_lines=1,
-                                                 value="")
-                    num_images_per_prompt = gr.Slider(1, MAX_IMAGES_PER_PROMPT, value=DEFAULT_IMAGES_PER_PROMPT, step=1,
-                                                      label="Number of Images")
-                    image_resolution = gr.Slider(label='Image resolution', minimum=MIN_IMAGE_RESOLUTION,
-                                                 maximum=MAX_IMAGE_RESOLUTION, value=DEFAULT_IMAGE_RESOLUTION, step=256)
-                    preprocess_resolution = gr.Slider(label='Preprocess resolution', minimum=128, maximum=512,
-                                                      value=512, step=1)
-                    preprocessor_name = gr.Radio(label='Preprocessor',
-                                                 choices=['UPerNet', 'None'],
-                                                 type='value',
-                                                 value='UPerNet')
-
-            with gr.Column(scale=2):
-                result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery", columns=2,
-                                            height='100%')
-        ips = [prompt, num_inference_steps, text_guidance_scale, negative_prompt, num_images_per_prompt, image,
-               image_resolution, preprocess_resolution, preprocessor_name, *input_images]
-
-        prompt.submit(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips, outputs=result_gallery)
-
-        run_button.click(
-            fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False
-        ).then(fn=generation_fn, inputs=ips,
outputs=result_gallery) - - gr.Examples( - examples=controlnet_example, - inputs=[image, prompt, input_images[0], input_images[1]], - cache_examples=False, - examples_per_page=100 - ) - - return demo diff --git a/kosmos-g/controlnet/app_shuffle.py b/kosmos-g/controlnet/app_shuffle.py deleted file mode 100644 index 6fd63eefc..000000000 --- a/kosmos-g/controlnet/app_shuffle.py +++ /dev/null @@ -1,58 +0,0 @@ -from app_utils import * - - -def create_demo_shuffle(generation_fn): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(scale=1): - image = gr.Image(label="Control image") - prompt = gr.Textbox(label="Prompt", max_lines=1, - placeholder="Use to represent the images in prompt") - num_input_images = gr.Slider(1, MAX_INPUT_IMAGES, value=DEFAULT_INPUT_IMAGES, step=1, - label="Number of input images:") - input_images = [ - gr.Image(label=f'img{i}', type="pil", visible=True if i < DEFAULT_INPUT_IMAGES else False) - for i in range(MAX_INPUT_IMAGES)] - num_input_images.change(variable_images, num_input_images, input_images) - - seed = gr.Slider(label="Seed", minimum=MIN_SEED, maximum=MAX_SEED, step=1, value=0) - randomize_seed = gr.Checkbox(label='Randomize seed', value=True) - run_button = gr.Button(label="Run") - with gr.Accordion("Advanced options", open=False): - num_inference_steps = gr.Slider(label="num_inference_steps", minimum=10, maximum=100, value=50, - step=5) - text_guidance_scale = gr.Slider(1, 15, value=6, step=0.5, label="Text Guidance Scale") - negative_prompt = gr.Textbox(label="Negative Prompt", max_lines=1, - value="") - num_images_per_prompt = gr.Slider(1, MAX_IMAGES_PER_PROMPT, value=DEFAULT_IMAGES_PER_PROMPT, step=1, - label="Number of Images") - image_resolution = gr.Slider(label='Image resolution', minimum=MIN_IMAGE_RESOLUTION, - maximum=MAX_IMAGE_RESOLUTION, value=DEFAULT_IMAGE_RESOLUTION, step=256) - preprocessor_name = gr.Radio( - label='Preprocessor', - choices=['ContentShuffle', 'None'], - type='value', - value='ContentShuffle') - - with gr.Column(scale=2): - result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery", columns=2, - height='100%') - ips = [prompt, num_inference_steps, text_guidance_scale, negative_prompt, num_images_per_prompt, image, - image_resolution, preprocessor_name, *input_images] - - prompt.submit( - fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False - ).then(fn=generation_fn, inputs=ips, outputs=result_gallery) - - run_button.click( - fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False - ).then(fn=generation_fn, inputs=ips, outputs=result_gallery) - - gr.Examples( - examples=controlnet_example, - inputs=[image, prompt, input_images[0], input_images[1]], - cache_examples=False, - examples_per_page=100 - ) - - return demo diff --git a/kosmos-g/controlnet/app_softedge.py b/kosmos-g/controlnet/app_softedge.py deleted file mode 100644 index aca15f1fd..000000000 --- a/kosmos-g/controlnet/app_softedge.py +++ /dev/null @@ -1,65 +0,0 @@ -from app_utils import * - - -def create_demo_softedge(generation_fn): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(scale=1): - image = gr.Image(label="Control image") - prompt = gr.Textbox(label="Prompt", max_lines=1, - placeholder="Use to represent the images in prompt") - num_input_images = gr.Slider(1, MAX_INPUT_IMAGES, value=DEFAULT_INPUT_IMAGES, step=1, - label="Number of input images:") - input_images = [ - gr.Image(label=f'img{i}', type="pil", visible=True if i < 
DEFAULT_INPUT_IMAGES else False) - for i in range(MAX_INPUT_IMAGES)] - num_input_images.change(variable_images, num_input_images, input_images) - - seed = gr.Slider(label="Seed", minimum=MIN_SEED, maximum=MAX_SEED, step=1, value=0) - randomize_seed = gr.Checkbox(label='Randomize seed', value=True) - run_button = gr.Button(label="Run") - with gr.Accordion("Advanced options", open=False): - num_inference_steps = gr.Slider(label="num_inference_steps", minimum=10, maximum=100, value=50, - step=5) - text_guidance_scale = gr.Slider(1, 15, value=6, step=0.5, label="Text Guidance Scale") - negative_prompt = gr.Textbox(label="Negative Prompt", max_lines=1, - value="") - num_images_per_prompt = gr.Slider(1, MAX_IMAGES_PER_PROMPT, value=DEFAULT_IMAGES_PER_PROMPT, step=1, - label="Number of Images") - image_resolution = gr.Slider(label='Image resolution', minimum=MIN_IMAGE_RESOLUTION, - maximum=MAX_IMAGE_RESOLUTION, value=DEFAULT_IMAGE_RESOLUTION, step=256) - preprocess_resolution = gr.Slider(label='Preprocess resolution', minimum=128, maximum=512, - value=512, step=1) - preprocessor_name = gr.Radio(label='Preprocessor', - choices=[ - 'HED', - 'PidiNet', - 'HED safe', - 'PidiNet safe', - 'None', - ], - type='value', - value='PidiNet') - - with gr.Column(scale=2): - result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery", columns=2, - height='100%') - ips = [prompt, num_inference_steps, text_guidance_scale, negative_prompt, num_images_per_prompt, image, - image_resolution, preprocess_resolution, preprocessor_name, *input_images] - - prompt.submit( - fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False - ).then(fn=generation_fn, inputs=ips, outputs=result_gallery) - - run_button.click( - fn=randomize_seed_fn, inputs=[seed, randomize_seed], outputs=seed, queue=False, api_name=False - ).then(fn=generation_fn, inputs=ips, outputs=result_gallery) - - gr.Examples( - examples=controlnet_example, - inputs=[image, prompt, input_images[0], input_images[1]], - cache_examples=False, - examples_per_page=100 - ) - - return demo diff --git a/kosmos-g/controlnet/cv_utils.py b/kosmos-g/controlnet/cv_utils.py deleted file mode 100644 index d81177c5e..000000000 --- a/kosmos-g/controlnet/cv_utils.py +++ /dev/null @@ -1,17 +0,0 @@ -import cv2 -import numpy as np - - -def resize_image(input_image, resolution, interpolation=None): - H, W, C = input_image.shape - H = float(H) - W = float(W) - k = float(resolution) / max(H, W) - H *= k - W *= k - H = int(np.round(H / 64.0)) * 64 - W = int(np.round(W / 64.0)) * 64 - if interpolation is None: - interpolation = cv2.INTER_LANCZOS4 if k > 1 else cv2.INTER_AREA - img = cv2.resize(input_image, (W, H), interpolation=interpolation) - return img diff --git a/kosmos-g/controlnet/depth_estimator.py b/kosmos-g/controlnet/depth_estimator.py deleted file mode 100644 index 6dd670e33..000000000 --- a/kosmos-g/controlnet/depth_estimator.py +++ /dev/null @@ -1,25 +0,0 @@ -import PIL.Image -import numpy as np -from controlnet_aux.util import HWC3 -from transformers import pipeline - -from controlnet.cv_utils import resize_image - - -class DepthEstimator: - def __init__(self): - self.model = pipeline('depth-estimation') - - def __call__(self, image: np.ndarray, **kwargs) -> PIL.Image.Image: - detect_resolution = kwargs.pop('detect_resolution', 512) - image_resolution = kwargs.pop('image_resolution', 512) - image = np.array(image) - image = HWC3(image) - image = resize_image(image, resolution=detect_resolution) - image = 
PIL.Image.fromarray(image) - image = self.model(image) - image = image['depth'] - image = np.array(image) - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - return PIL.Image.fromarray(image) diff --git a/kosmos-g/controlnet/image_segmentor.py b/kosmos-g/controlnet/image_segmentor.py deleted file mode 100644 index a7357ee33..000000000 --- a/kosmos-g/controlnet/image_segmentor.py +++ /dev/null @@ -1,39 +0,0 @@ -import PIL.Image -import cv2 -import numpy as np -import torch -from controlnet_aux.util import HWC3, ade_palette -from transformers import AutoImageProcessor, UperNetForSemanticSegmentation - -from controlnet.cv_utils import resize_image - - -class ImageSegmentor: - def __init__(self): - self.image_processor = AutoImageProcessor.from_pretrained( - 'openmmlab/upernet-convnext-small') - self.image_segmentor = UperNetForSemanticSegmentation.from_pretrained( - 'openmmlab/upernet-convnext-small') - - @torch.inference_mode() - def __call__(self, image: np.ndarray, **kwargs) -> PIL.Image.Image: - detect_resolution = kwargs.pop('detect_resolution', 512) - image_resolution = kwargs.pop('image_resolution', 512) - image = HWC3(image) - image = resize_image(image, resolution=detect_resolution) - image = PIL.Image.fromarray(image) - - pixel_values = self.image_processor(image, - return_tensors='pt').pixel_values - outputs = self.image_segmentor(pixel_values) - seg = self.image_processor.post_process_semantic_segmentation( - outputs, target_sizes=[image.size[::-1]])[0] - color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) - for label, color in enumerate(ade_palette()): - color_seg[seg == label, :] = color - color_seg = color_seg.astype(np.uint8) - - color_seg = resize_image(color_seg, - resolution=image_resolution, - interpolation=cv2.INTER_NEAREST) - return PIL.Image.fromarray(color_seg) diff --git a/kosmos-g/controlnet/preprocessor.py b/kosmos-g/controlnet/preprocessor.py deleted file mode 100644 index 1b7b6d2fb..000000000 --- a/kosmos-g/controlnet/preprocessor.py +++ /dev/null @@ -1,269 +0,0 @@ -import gc - -import PIL.Image -import numpy as np -import torch -from controlnet_aux import (CannyDetector, ContentShuffleDetector, HEDdetector, LineartAnimeDetector, LineartDetector, - MidasDetector, MLSDdetector, NormalBaeDetector, OpenposeDetector, PidiNetDetector) -from controlnet_aux.util import HWC3 - -from controlnet.cv_utils import resize_image -from controlnet.depth_estimator import DepthEstimator -from controlnet.image_segmentor import ImageSegmentor - - -class ControlNet_Preprocessor: - MODEL_ID = 'lllyasviel/Annotators' - - def __init__(self): - self.model = None - self.name = '' - - def load(self, name: str) -> None: - if name == self.name: - return - if name == 'HED': - self.model = HEDdetector.from_pretrained(self.MODEL_ID) - elif name == 'Midas': - self.model = MidasDetector.from_pretrained(self.MODEL_ID) - elif name == 'MLSD': - self.model = MLSDdetector.from_pretrained(self.MODEL_ID) - elif name == 'Openpose': - self.model = OpenposeDetector.from_pretrained(self.MODEL_ID) - elif name == 'PidiNet': - self.model = PidiNetDetector.from_pretrained(self.MODEL_ID) - elif name == 'NormalBae': - self.model = NormalBaeDetector.from_pretrained(self.MODEL_ID) - elif name == 'Lineart': - self.model = LineartDetector.from_pretrained(self.MODEL_ID) - elif name == 'LineartAnime': - self.model = LineartAnimeDetector.from_pretrained(self.MODEL_ID) - elif name == 'Canny': - self.model = CannyDetector() - elif name == 'ContentShuffle': - self.model = 
ContentShuffleDetector() - elif name == 'DPT': - self.model = DepthEstimator() - elif name == 'UPerNet': - self.model = ImageSegmentor() - else: - raise ValueError - torch.cuda.empty_cache() - gc.collect() - self.name = name - - def __call__(self, image: PIL.Image.Image, **kwargs) -> PIL.Image.Image: - if self.name == 'Canny': - if 'detect_resolution' in kwargs: - detect_resolution = kwargs.pop('detect_resolution') - image = np.array(image) - image = HWC3(image) - image = resize_image(image, resolution=detect_resolution) - image = self.model(image, **kwargs) - return PIL.Image.fromarray(image) - elif self.name == 'Midas': - detect_resolution = kwargs.pop('detect_resolution', 512) - image_resolution = kwargs.pop('image_resolution', 512) - image = np.array(image) - image = HWC3(image) - image = resize_image(image, resolution=detect_resolution) - image = self.model(image, **kwargs) - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - return PIL.Image.fromarray(image) - else: - image = np.array(image) - return self.model(image, **kwargs) - - @torch.inference_mode() - def preprocess_canny(self, image, image_resolution, low_threshold, high_threshold): - self.load('Canny') - control_image = self( - image=image, - low_threshold=low_threshold, - high_threshold=high_threshold, - detect_resolution=image_resolution - ) - return control_image - - @torch.inference_mode() - def preprocess_mlsd(self, image, image_resolution, preprocess_resolution, value_threshold, distance_threshold): - self.load('MLSD') - control_image = self( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - thr_v=value_threshold, - thr_d=distance_threshold, - ) - return control_image - - @torch.inference_mode() - def preprocess_scribble(self, image, image_resolution, preprocess_resolution, preprocessor_name): - if preprocessor_name == 'None': - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - elif preprocessor_name == 'HED': - self.load(preprocessor_name) - control_image = self( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - scribble=False, - ) - elif preprocessor_name == 'PidiNet': - self.load(preprocessor_name) - control_image = self( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - safe=False, - ) - else: - raise ValueError - return control_image - - @torch.inference_mode() - def preprocess_scribble_interactive(self, image_and_mask, image_resolution): - image = image_and_mask['mask'] - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - return control_image - - @torch.inference_mode() - def preprocess_softedge(self, image, image_resolution, preprocess_resolution, preprocessor_name): - if preprocessor_name == 'None': - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - elif preprocessor_name in ['HED', 'HED safe']: - safe = 'safe' in preprocessor_name - self.load('HED') - control_image = self( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - scribble=safe, - ) - elif preprocessor_name in ['PidiNet', 'PidiNet safe']: - safe = 'safe' in preprocessor_name - self.load('PidiNet') - control_image = self( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - 
safe=safe, - ) - else: - raise ValueError - return control_image - - @torch.inference_mode() - def preprocess_openpose(self, image, image_resolution, preprocess_resolution, preprocessor_name): - if preprocessor_name == 'None': - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - else: - self.load('Openpose') - control_image = self( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - hand_and_face=True, - ) - return control_image - - @torch.inference_mode() - def preprocess_segmentation(self, image, image_resolution, preprocess_resolution, preprocessor_name): - if preprocessor_name == 'None': - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - else: - self.load(preprocessor_name) - control_image = self( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - ) - return control_image - - @torch.inference_mode() - def preprocess_depth(self, image, image_resolution, preprocess_resolution, preprocessor_name): - if preprocessor_name == 'None': - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - else: - self.load(preprocessor_name) - control_image = self( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - ) - return control_image - - @torch.inference_mode() - def preprocess_normal(self, image, image_resolution, preprocess_resolution, preprocessor_name): - if preprocessor_name == 'None': - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - else: - self.load('NormalBae') - control_image = self( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - ) - return control_image - - @torch.inference_mode() - def preprocess_lineart(self, image, image_resolution, preprocess_resolution, preprocessor_name): - if preprocessor_name in ['None', 'None (anime)']: - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - elif preprocessor_name in ['Lineart', 'Lineart coarse']: - coarse = 'coarse' in preprocessor_name - self.load('Lineart') - control_image = self( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - coarse=coarse, - ) - elif preprocessor_name == 'Lineart (anime)': - self.load('LineartAnime') - control_image = self( - image=image, - image_resolution=image_resolution, - detect_resolution=preprocess_resolution, - ) - else: - raise ValueError - return control_image - - @torch.inference_mode() - def preprocess_shuffle(self, image, image_resolution, preprocessor_name): - if preprocessor_name == 'None': - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - else: - self.load(preprocessor_name) - control_image = self( - image=image, - image_resolution=image_resolution, - ) - return control_image - - @torch.inference_mode() - def preprocess_ip2p(self, image, image_resolution): - image = HWC3(image) - image = resize_image(image, resolution=image_resolution) - control_image = PIL.Image.fromarray(image) - return control_image diff --git a/kosmos-g/data/dict.txt b/kosmos-g/data/dict.txt deleted file mode 100644 index 42ce3cd53..000000000 --- 
a/kosmos-g/data/dict.txt +++ /dev/null @@ -1,63997 +0,0 @@ -. 1 -▁the 1 -, 1 -▁to 1 -▁and 1 -▁of 1 -▁a 1 -s 1 -▁in 1 -▁I 1 -▁that 1 -' 1 -▁for 1 -▁is 1 -▁was 1 -- 1 -▁on 1 -’ 1 -▁it 1 -▁with 1 -▁The 1 -▁as 1 -▁at 1 -▁be 1 -t 1 -▁he 1 -▁have 1 -▁from 1 -▁by 1 -▁are 1 -▁" 1 -▁you 1 -▁his 1 -▁“ 1 -▁this 1 -▁said 1 -▁not 1 -▁has 1 -▁an 1 -▁( 1 -▁but 1 -▁had 1 -▁we 1 -▁her 1 -▁they 1 -▁will 1 -▁my 1 -▁or 1 -▁were 1 -▁their 1 -) 1 -: 1 -▁up 1 -▁about 1 -▁out 1 -▁who 1 -▁one 1 -▁all 1 -▁been 1 -▁she 1 -▁can 1 -▁more 1 -▁would 1 -▁It 1 -▁which 1 -▁so 1 -▁me 1 -▁He 1 -▁when 1 -▁time 1 -▁also 1 -▁just 1 -▁like 1 -▁there 1 -m 1 -▁what 1 -▁him 1 -▁after 1 -▁into 1 -▁In 1 -▁our 1 -▁first 1 -▁over 1 -▁them 1 -▁people 1 -▁do 1 -" 1 -▁other 1 -▁some 1 -? 1 -▁two 1 -▁A 1 -▁if 1 -▁its 1 -▁than 1 -▁get 1 -▁your 1 -▁could 1 -▁year 1 -▁back 1 -▁no 1 -▁new 1 -▁ 1 -.” 1 -re 1 -d 1 -,” 1 -▁only 1 -▁But 1 -” 1 -▁last 1 -▁now 1 -▁years 1 -▁We 1 -▁because 1 -I 1 -▁how 1 -▁then 1 -▁- 1 -▁know 1 -." 1 -▁before 1 -▁This 1 -! 1 -▁where 1 -ing 1 -▁And 1 -▁She 1 -▁way 1 -▁being 1 -▁down 1 -▁make 1 -; 1 -▁going 1 -▁even 1 -▁— 1 -▁day 1 -▁made 1 -▁any 1 -▁through 1 -▁see 1 -▁most 1 -S 1 -▁much 1 -▁don 1 -▁did 1 -▁off 1 -▁go 1 -▁well 1 -▁very 1 -▁think 1 -▁while 1 -ve 1 -▁us 1 -▁work 1 -▁good 1 -▁still 1 -▁around 1 -▁three 1 -▁many 1 -▁those 1 -," 1 -/ 1 -▁really 1 -▁home 1 -▁take 1 -▁They 1 -▁against 1 -▁right 1 -▁these 1 -▁want 1 -▁got 1 -▁during 1 -ed 1 -▁U 1 -▁say 1 -▁told 1 -▁didn 1 -ll 1 -▁– 1 -▁should 1 -a 1 -▁may 1 -▁such 1 -▁little 1 -▁since 1 -▁life 1 -▁Trump 1 -▁here 1 -▁long 1 -▁So 1 -▁too 1 -▁need 1 -▁next 1 -▁week 1 -▁between 1 -▁part 1 -▁game 1 -▁says 1 -▁team 1 -▁same 1 -th 1 -▁company 1 -▁things 1 -▁That 1 -▁both 1 -▁world 1 -▁another 1 -▁never 1 -▁come 1 -▁You 1 -▁There 1 -▁' 1 -▁few 1 -▁As 1 -▁New 1 -▁second 1 -▁own 1 -▁something 1 -▁If 1 -▁went 1 -▁found 1 -▁family 1 -▁use 1 -▁help 1 -▁government 1 -▁under 1 -The 1 -▁end 1 -▁state 1 -▁used 1 -▁left 1 -▁million 1 -▁am 1 -▁season 1 -▁took 1 -y 1 -▁place 1 -▁man 1 -▁again 1 -▁came 1 -▁high 1 -▁lot 1 -e 1 -▁every 1 -▁put 1 -▁night 1 -er 1 -▁including 1 -▁each 1 -▁away 1 -▁best 1 -▁school 1 -▁four 1 -▁look 1 -). 1 -year 1 -... 1 -▁set 1 -▁When 1 -▁called 1 -ly 1 -▁For 1 -), 1 -▁days 1 -▁2 1 -▁always 1 -▁until 1 -▁find 1 -▁great 1 -▁better 1 -▁love 1 -▁show 1 -▁later 1 -in 1 -▁percent 1 -We 1 -▁10 1 -▁police 1 -▁without 1 -▁thing 1 -▁On 1 -▁country 1 -▁might 1 -▁3 1 -▁play 1 -▁per 1 -▁top 1 -▁won 1 -▁started 1 -▁1 1 -▁thought 1 -▁different 1 -▁big 1 -▁feel 1 -▁number 1 -▁public 1 -▁head 1 -▁group 1 -▁market 1 -▁children 1 -▁though 1 -▁five 1 -▁‘ 1 -▁able 1 -▁After 1 -▁today 1 -▁point 1 -▁already 1 -▁... 
1 -▁having 1 -▁report 1 -▁does 1 -▁My 1 -▁start 1 -▁enough 1 -▁city 1 -▁past 1 -▁asked 1 -▁house 1 -▁give 1 -▁business 1 -▁far 1 -▁United 1 -▁support 1 -▁sure 1 -▁keep 1 -▁women 1 -▁President 1 -▁case 1 -▁run 1 -▁old 1 -▁getting 1 -▁No 1 -▁early 1 -▁side 1 -▁wanted 1 -old 1 -▁At 1 -▁making 1 -▁doing 1 -▁months 1 -▁why 1 -▁ever 1 -▁former 1 -▁working 1 -▁One 1 -▁least 1 -▁What 1 -▁money 1 -▁hard 1 -▁4 1 -It 1 -▁several 1 -▁done 1 -▁water 1 -A 1 -▁Friday 1 -▁according 1 -▁times 1 -▁seen 1 -▁across 1 -▁story 1 -▁change 1 -▁small 1 -▁together 1 -▁room 1 -▁full 1 -▁information 1 -▁car 1 -▁American 1 -▁month 1 -▁area 1 -▁doesn 1 -▁5 1 -▁Mr 1 -▁system 1 -▁All 1 -▁wasn 1 -▁looking 1 -▁half 1 -n 1 -▁face 1 -▁let 1 -▁along 1 -▁name 1 -▁must 1 -▁local 1 -▁open 1 -] 1 -▁call 1 -▁morning 1 -▁once 1 -▁State 1 -▁trying 1 -▁ago 1 -▁person 1 -▁news 1 -▁free 1 -▁third 1 -▁Monday 1 -▁US 1 -▁power 1 -▁less 1 -▁-- 1 -i 1 -▁bit 1 -▁course 1 -▁His 1 -▁line 1 -▁saw 1 -▁Tuesday 1 -▁& 1 -▁become 1 -▁points 1 -▁Thursday 1 -▁To 1 -▁six 1 -▁kind 1 -▁added 1 -▁win 1 -▁actually 1 -▁yet 1 -▁move 1 -o 1 -▁Wednesday 1 -— 1 -▁hand 1 -▁book 1 -▁anything 1 -▁men 1 -com 1 -▁[ 1 -▁behind 1 -to 1 -C 1 -▁North 1 -▁deal 1 -▁known 1 -▁fact 1 -▁tell 1 -▁important 1 -▁games 1 -▁May 1 -▁However 1 -▁someone 1 -▁using 1 -▁real 1 -the 1 -▁law 1 -▁S 1 -▁coming 1 -▁president 1 -▁House 1 -▁given 1 -▁often 1 -▁woman 1 -▁close 1 -▁hit 1 -▁members 1 -▁taken 1 -▁almost 1 -▁taking 1 -▁job 1 -▁felt 1 -▁care 1 -▁believe 1 -▁York 1 -▁front 1 -▁Sunday 1 -▁20 1 -▁community 1 -▁media 1 -▁post 1 -▁young 1 -▁South 1 -▁Saturday 1 -▁quarter 1 -▁party 1 -▁looked 1 -▁hours 1 -▁friends 1 -▁held 1 -▁live 1 -▁data 1 -▁order 1 -▁saying 1 -▁City 1 -▁knew 1 -▁began 1 -▁court 1 -▁With 1 -▁try 1 -▁food 1 -▁others 1 -r 1 -▁While 1 -▁following 1 -▁future 1 -▁large 1 -▁within 1 -▁reported 1 -▁health 1 -▁office 1 -▁body 1 -▁series 1 -▁read 1 -▁University 1 -▁John 1 -▁everything 1 -▁service 1 -▁minutes 1 -▁due 1 -▁gave 1 -▁late 1 -▁States 1 -▁whether 1 -▁mother 1 -▁National 1 -▁lead 1 -▁weeks 1 -▁share 1 -▁comes 1 -▁6 1 -▁12 1 -▁March 1 -▁near 1 -▁lost 1 -▁kids 1 -▁major 1 -▁outside 1 -▁plan 1 -▁couple 1 -▁death 1 -▁became 1 -▁players 1 -▁expected 1 -▁among 1 -▁final 1 -▁played 1 -▁God 1 -▁likely 1 -▁County 1 -es 1 -▁political 1 -▁process 1 -▁pretty 1 -▁forward 1 -▁students 1 -▁nothing 1 -▁pay 1 -▁experience 1 -▁recent 1 -and 1 -▁de 1 -▁needed 1 -▁possible 1 -▁social 1 -▁myself 1 -▁price 1 -▁billion 1 -▁short 1 -▁p 1 -▁video 1 -▁decided 1 -▁Now 1 -▁2018 1 -▁World 1 -▁child 1 -on 1 -▁turned 1 -▁B 1 -▁everyone 1 -▁shares 1 -▁stop 1 -▁stock 1 -▁June 1 -man 1 -▁released 1 -▁idea 1 -▁film 1 -▁building 1 -▁bad 1 -▁decision 1 -▁event 1 -▁continue 1 -▁15 1 -▁campaign 1 -▁statement 1 -▁probably 1 -▁companies 1 -▁makes 1 -▁8 1 -▁himself 1 -▁China 1 -▁whole 1 -". 
1 -▁single 1 -▁re 1 -▁available 1 -▁leave 1 -▁Then 1 -▁meeting 1 -▁friend 1 -▁based 1 -▁research 1 -▁program 1 -▁means 1 -▁running 1 -▁current 1 -▁further 1 -▁control 1 -▁history 1 -▁isn 1 -▁level 1 -▁son 1 -▁couldn 1 -▁11 1 -▁7 1 -▁father 1 -▁April 1 -or 1 -▁officials 1 -▁soon 1 -▁return 1 -▁talk 1 -▁low 1 -▁led 1 -▁national 1 -▁2017 1 -▁security 1 -▁mind 1 -▁announced 1 -▁black 1 -▁O 1 -▁services 1 -▁July 1 -▁clear 1 -D 1 -▁growth 1 -▁quite 1 -▁problem 1 -▁fire 1 -▁hope 1 -▁worked 1 -▁D 1 -▁record 1 -▁eyes 1 -▁turn 1 -▁St 1 -▁playing 1 -▁period 1 -▁inside 1 -▁mean 1 -▁parents 1 -▁either 1 -▁age 1 -▁Police 1 -▁project 1 -▁door 1 -▁issue 1 -▁30 1 -▁January 1 -▁special 1 -▁matter 1 -▁shot 1 -▁include 1 -▁earlier 1 -▁strong 1 -▁tried 1 -▁recently 1 -▁heard 1 -▁C 1 -▁else 1 -▁question 1 -▁received 1 -▁town 1 -▁America 1 -▁light 1 -▁music 1 -▁position 1 -▁phone 1 -▁happened 1 -▁white 1 -▁moment 1 -▁heart 1 -▁however 1 -▁rest 1 -▁wrote 1 -up 1 -▁living 1 -ers 1 -▁industry 1 -▁provide 1 -▁Some 1 -▁role 1 -an 1 -▁Washington 1 -?" 1 -▁bring 1 -▁died 1 -▁non 1 -▁rather 1 -▁9 1 -▁list 1 -▁total 1 -▁News 1 -▁reason 1 -▁An 1 -▁stay 1 -▁human 1 -▁issues 1 -▁air 1 -▁road 1 -▁Department 1 -▁election 1 -▁Facebook 1 -▁West 1 -▁especially 1 -▁career 1 -▁cost 1 -▁average 1 -▁E 1 -▁remember 1 -▁needs 1 -▁T 1 -▁moved 1 -▁anyone 1 -▁tax 1 -▁development 1 -▁spent 1 -▁happy 1 -▁According 1 -▁federal 1 -▁These 1 -▁goal 1 -▁November 1 -", 1 -▁December 1 -▁nearly 1 -▁military 1 -▁seems 1 -▁seven 1 -▁longer 1 -▁rights 1 -▁war 1 -▁girl 1 -▁wife 1 -▁brought 1 -▁White 1 -▁fun 1 -al 1 -▁sent 1 -▁cut 1 -▁finally 1 -▁space 1 -▁killed 1 -▁attack 1 -▁hold 1 -▁policy 1 -▁words 1 -▁example 1 -2 1 -▁ready 1 -▁Not 1 -▁October 1 -▁field 1 -▁player 1 -▁visit 1 -▁study 1 -▁16 1 -▁member 1 -▁plans 1 -▁online 1 -▁general 1 -▁countries 1 -▁action 1 -▁chance 1 -▁September 1 -▁trade 1 -▁result 1 -▁release 1 -▁above 1 -▁hands 1 -▁San 1 -▁India 1 -▁instead 1 -▁site 1 -▁higher 1 -▁weekend 1 -▁summer 1 -▁quickly 1 -▁lives 1 -▁understand 1 -▁husband 1 -▁increase 1 -▁director 1 -▁How 1 -▁baby 1 -▁Her 1 -▁2016 1 -▁key 1 -▁cent 1 -▁August 1 -▁meet 1 -▁access 1 -▁financial 1 -en 1 -▁School 1 -▁daughter 1 -▁R 1 -▁main 1 -▁shows 1 -▁14 1 -▁official 1 -▁13 1 -▁involved 1 -▁coach 1 -▁Court 1 -You 1 -▁eight 1 -▁P 1 -▁race 1 -▁ask 1 -▁interest 1 -1 1 -▁staff 1 -▁By 1 -▁Canada 1 -▁form 1 -▁co 1 -▁rate 1 -▁M 1 -▁met 1 -AP 1 -▁ahead 1 -▁gone 1 -▁hear 1 -▁ground 1 -▁class 1 -B 1 -▁board 1 -▁latest 1 -▁reports 1 -▁buy 1 -k 1 -▁similar 1 -▁international 1 -▁Dr 1 -▁currently 1 -▁California 1 -▁published 1 -▁Minister 1 -▁N 1 -▁easy 1 -▁February 1 -▁fight 1 -▁themselves 1 -▁18 1 -T 1 -▁club 1 -▁TV 1 -▁worth 1 -▁potential 1 -▁feeling 1 -▁results 1 -▁continued 1 -▁wrong 1 -▁watch 1 -▁true 1 -▁David 1 -▁hospital 1 -▁global 1 -▁sense 1 -▁kept 1 -▁situation 1 -▁investigation 1 -▁technology 1 -▁vote 1 -▁G 1 -▁wouldn 1 -P 1 -▁energy 1 -▁talking 1 -▁League 1 -▁break 1 -▁fall 1 -▁thinking 1 -g 1 -4 1 -▁training 1 -▁performance 1 -▁included 1 -c 1 -▁create 1 -▁walk 1 -▁nice 1 -▁Street 1 -▁K 1 -▁offer 1 -▁private 1 -time 1 -▁fans 1 -▁happen 1 -▁British 1 -▁wants 1 -▁showed 1 -▁areas 1 -▁store 1 -▁Donald 1 -▁maybe 1 -▁goes 1 -▁personal 1 -▁oil 1 -▁seemed 1 -▁Even 1 -▁opened 1 -3 1 -▁questions 1 -▁leader 1 -▁J 1 -▁Russia 1 -▁entire 1 -▁self 1 -▁difficult 1 -▁risk 1 -▁loss 1 -▁James 1 -▁below 1 -▁17 1 -▁land 1 -▁Twitter 1 -This 1 -▁bed 1 -▁Christmas 1 -▁relationship 1 -▁value 1 -▁economic 1 -▁starting 1 -▁named 1 -▁previous 1 -▁products 1 -▁allowed 1 -▁London 1 -▁built 1 
-▁huge 1 -▁drive 1 -▁match 1 -▁website 1 -▁25 1 -based 1 -▁Park 1 -▁step 1 -▁allow 1 -▁red 1 -▁administration 1 -▁whose 1 -▁Well 1 -▁helped 1 -▁pick 1 -▁certain 1 -▁guy 1 -▁round 1 -▁ended 1 -▁Center 1 -▁states 1 -▁simply 1 -▁F 1 -▁production 1 -▁sales 1 -▁50 1 -▁changes 1 -▁European 1 -▁itself 1 -▁Michael 1 -▁legal 1 -▁region 1 -▁100 1 -M 1 -▁attention 1 -▁opportunity 1 -▁sign 1 -▁leading 1 -▁giving 1 -▁looks 1 -▁takes 1 -▁feet 1 -▁events 1 -▁lower 1 -▁created 1 -▁Day 1 -▁followed 1 -▁teams 1 -He 1 -▁view 1 -E 1 -▁various 1 -▁hour 1 -K 1 -▁evidence 1 -▁bill 1 -▁moving 1 -▁groups 1 -▁alone 1 -▁finished 1 -▁sometimes 1 -▁un 1 -▁hair 1 -▁chief 1 -▁cannot 1 -▁Just 1 -▁problems 1 -▁fourth 1 -▁passed 1 -▁movie 1 -▁build 1 -▁focus 1 -▁! 1 -▁writing 1 -▁amount 1 -▁immediately 1 -▁stuff 1 -▁rating 1 -▁check 1 -▁towards 1 -▁medical 1 -▁present 1 -▁seem 1 -▁test 1 -▁First 1 -▁ball 1 -▁fell 1 -▁paid 1 -▁Of 1 -▁cause 1 -▁Inc 1 -▁boy 1 -▁dead 1 -▁floor 1 -▁star 1 -▁follow 1 -▁growing 1 -▁ran 1 -l 1 -▁word 1 -▁reached 1 -▁stand 1 -▁impact 1 -▁date 1 -▁officer 1 -▁East 1 -▁UK 1 -▁brother 1 -▁email 1 -R 1 -▁trip 1 -▁works 1 -▁leaders 1 -▁Republican 1 -▁additional 1 -▁appeared 1 -▁despite 1 -▁response 1 -▁original 1 -▁force 1 -▁Our 1 -▁War 1 -▁middle 1 -?” 1 -▁During 1 -▁expect 1 -▁Do 1 -▁Russian 1 -▁church 1 -▁Cup 1 -▁wait 1 -▁football 1 -▁changed 1 -▁learn 1 -▁terms 1 -▁although 1 -▁i 1 -G 1 -▁usually 1 -▁ways 1 -▁officers 1 -▁reading 1 -▁sold 1 -▁biggest 1 -▁table 1 -▁voice 1 -▁version 1 -▁picture 1 -▁Paul 1 -▁raised 1 -▁stage 1 -▁pass 1 -▁scene 1 -▁college 1 -▁Obama 1 -▁range 1 -▁stories 1 -N 1 -▁comment 1 -▁title 1 -▁walked 1 -▁addition 1 -▁families 1 -▁Since 1 -▁property 1 -▁Here 1 -▁center 1 -▁common 1 -▁workers 1 -h 1 -!" 1 -▁cases 1 -▁photo 1 -▁written 1 -▁safety 1 -▁leaving 1 -▁completely 1 -▁senior 1 -▁stopped 1 -▁pain 1 -▁straight 1 -▁throughout 1 -▁track 1 -▁England 1 -▁dog 1 -▁Europe 1 -▁complete 1 -▁girls 1 -▁Group 1 -▁Congress 1 -▁anti 1 -▁remain 1 -am 1 -▁type 1 -▁closed 1 -▁Senate 1 -▁International 1 -▁born 1 -▁books 1 -▁character 1 -▁gets 1 -▁interview 1 -▁loved 1 -▁scored 1 -▁figure 1 -▁Australia 1 -▁popular 1 -▁write 1 -▁afternoon 1 -▁customers 1 -▁Chinese 1 -▁student 1 -▁Texas 1 -▁More 1 -▁blood 1 -▁station 1 -▁pressure 1 -▁economy 1 -▁High 1 -▁address 1 -▁account 1 -▁considered 1 -▁education 1 -▁returned 1 -▁Or 1 -▁firm 1 -▁significant 1 -▁shooting 1 -▁content 1 -▁reach 1 -▁effort 1 -▁sleep 1 -▁target 1 -▁beginning 1 -▁nine 1 -▁incident 1 -▁deep 1 -▁opening 1 -▁served 1 -▁toward 1 -▁driving 1 -▁40 1 -▁From 1 -▁Last 1 -▁remains 1 -▁beautiful 1 -▁fine 1 -▁provided 1 -▁blog 1 -▁capital 1 -▁success 1 -▁24 1 -▁upon 1 -▁conference 1 -▁increased 1 -▁daily 1 -▁population 1 -▁haven 1 -▁includes 1 -▁answer 1 -▁sort 1 -▁waiting 1 -▁described 1 -▁charge 1 -▁sound 1 -▁gas 1 -▁foreign 1 -▁league 1 -p 1 -▁spend 1 -▁seeing 1 -▁source 1 -▁L 1 -▁via 1 -▁design 1 -▁serious 1 -▁French 1 -▁mom 1 -▁manager 1 -▁sister 1 -co 1 -▁song 1 -▁guys 1 -▁fast 1 -▁term 1 -▁evening 1 -▁? 
1 -▁practice 1 -▁sexual 1 -▁perfect 1 -▁Press 1 -z 1 -▁ability 1 -x 1 -▁signed 1 -& 1 -▁largest 1 -▁compared 1 -▁details 1 -▁bank 1 -▁receive 1 -▁pulled 1 -▁vehicle 1 -▁article 1 -▁add 1 -▁miles 1 -▁arrested 1 -▁Council 1 -▁nation 1 -▁exactly 1 -▁executive 1 -▁charges 1 -▁safe 1 -day 1 -▁model 1 -▁Two 1 -▁arrived 1 -▁pre 1 -▁efforts 1 -▁General 1 -▁size 1 -▁message 1 -▁victory 1 -▁Korea 1 -able 1 -▁product 1 -▁press 1 -▁runs 1 -no 1 -▁positive 1 -▁drug 1 -▁sitting 1 -▁Los 1 -▁simple 1 -▁costs 1 -▁dark 1 -▁cover 1 -▁watching 1 -▁paper 1 -▁2015 1 -▁George 1 -▁hot 1 -▁travel 1 -▁goals 1 -▁e 1 -▁Google 1 -▁Florida 1 -▁learned 1 -▁sat 1 -▁investment 1 -▁posted 1 -▁related 1 -▁eventually 1 -▁19 1 -▁shared 1 -5 1 -▁particularly 1 -▁band 1 -▁H 1 -▁letter 1 -b 1 -▁regular 1 -▁base 1 -▁failed 1 -▁spot 1 -▁wall 1 -▁Co 1 -▁treatment 1 -▁effect 1 -▁piece 1 -▁particular 1 -▁caught 1 -▁quality 1 -▁extra 1 -▁rules 1 -▁agency 1 -▁De 1 -▁Smith 1 -▁street 1 -▁schools 1 -▁cold 1 -ness 1 -▁21 1 -▁residents 1 -it 1 -▁bought 1 -▁previously 1 -▁begin 1 -▁eat 1 -▁features 1 -▁act 1 -▁Most 1 -▁border 1 -▁revenue 1 -F 1 -▁forced 1 -▁English 1 -▁required 1 -▁users 1 -▁Bank 1 -▁onto 1 -▁joined 1 -▁contact 1 -▁accused 1 -▁beyond 1 -There 1 -▁calls 1 -▁certainly 1 -▁sit 1 -▁Don 1 -▁herself 1 -of 1 -9 1 -▁jobs 1 -▁multiple 1 -▁married 1 -▁eye 1 -▁parts 1 -▁issued 1 -▁prices 1 -▁holding 1 -▁material 1 -▁speak 1 -▁contract 1 -▁enjoy 1 -▁winning 1 -▁annual 1 -▁demand 1 -▁offered 1 -▁favorite 1 -▁management 1 -▁charged 1 -▁agreed 1 -▁Red 1 -▁Indian 1 -▁conditions 1 -▁authorities 1 -▁overall 1 -▁driver 1 -▁King 1 -▁district 1 -▁meant 1 -▁numbers 1 -▁note 1 -▁forces 1 -▁spending 1 -▁guess 1 -▁aren 1 -▁sex 1 -▁Party 1 -▁network 1 -▁Democratic 1 -▁Union 1 -▁gun 1 -8 1 -▁yesterday 1 -▁natural 1 -▁box 1 -▁lack 1 -▁Although 1 -▁hearing 1 -▁beat 1 -▁People 1 -▁interesting 1 -▁search 1 -▁weather 1 -▁older 1 -▁La 1 -▁art 1 -If 1 -▁page 1 -▁album 1 -▁claims 1 -▁Al 1 -▁cash 1 -▁save 1 -▁minister 1 -▁approach 1 -▁Angeles 1 -▁Many 1 -▁choice 1 -▁lived 1 -▁# 1 -us 1 -▁revealed 1 -▁agreement 1 -▁Why 1 -▁levels 1 -▁Road 1 -▁spoke 1 -▁Mark 1 -▁calling 1 -▁Today 1 -▁budget 1 -▁wide 1 -▁challenge 1 -le 1 -▁Also 1 -▁comments 1 -is 1 -▁violence 1 -▁caused 1 -7 1 -▁individual 1 -▁double 1 -▁located 1 -▁review 1 -▁majority 1 -▁fear 1 -▁Like 1 -6 1 -▁dinner 1 -as 1 -▁walking 1 -▁defense 1 -In 1 -▁France 1 -▁investors 1 -▁missing 1 -▁trial 1 -▁asking 1 -▁Another 1 -▁Black 1 -▁sell 1 -▁grow 1 -J 1 -▁movement 1 -▁department 1 -▁adding 1 -▁Democrats 1 -▁German 1 -▁ensure 1 -▁consider 1 -▁projects 1 -▁W 1 -▁telling 1 -▁card 1 -▁blue 1 -▁Mike 1 -▁claimed 1 -▁District 1 -▁confirmed 1 -▁Apple 1 -▁quick 1 -f 1 -▁window 1 -▁organization 1 -▁injury 1 -▁employees 1 -▁Because 1 -▁Times 1 -▁pair 1 -▁poor 1 -▁income 1 -▁Clinton 1 -▁managed 1 -▁mostly 1 -▁noted 1 -▁mid 1 -▁boys 1 -▁yourself 1 -▁thousands 1 -▁normal 1 -▁Bill 1 -▁dropped 1 -▁competition 1 -▁excited 1 -▁patients 1 -▁places 1 -▁judge 1 -▁College 1 -▁knows 1 -▁stood 1 -▁reality 1 -▁park 1 -▁designed 1 -▁basis 1 -▁amazing 1 -ie 1 -▁seat 1 -▁weight 1 -L 1 -▁perhaps 1 -▁language 1 -▁protect 1 -▁cancer 1 -▁construction 1 -▁2019 1 -They 1 -▁clean 1 -▁credit 1 -▁computer 1 -▁successful 1 -term 1 -▁minute 1 -▁Yes 1 -▁Chicago 1 -▁Office 1 -▁send 1 -▁Americans 1 -▁finish 1 -▁Both 1 -▁attempt 1 -▁battle 1 -▁putting 1 -▁whatever 1 -▁prison 1 -▁damage 1 -O 1 -▁Africa 1 -▁picked 1 -▁placed 1 -▁yards 1 -▁Canadian 1 -▁2014 1 -▁hurt 1 -▁cars 1 -▁V 1 -▁directly 1 -▁north 1 -▁businesses 1 -▁rise 1 -▁Once 1 -▁Johnson 1 
-▁concerns 1 -What 1 -▁definitely 1 -▁app 1 -▁Chris 1 -▁fully 1 -▁Health 1 -▁Japan 1 -▁wish 1 -▁bar 1 -▁speed 1 -▁female 1 -▁avoid 1 -▁Germany 1 -▁host 1 -▁green 1 -▁Man 1 -▁systems 1 -▁planned 1 -▁standing 1 -▁join 1 -▁Robert 1 -▁crime 1 -▁1, 1 -▁sports 1 -▁rates 1 -▁believed 1 -▁central 1 -▁traffic 1 -▁Association 1 -▁Act 1 -▁century 1 -u 1 -▁planning 1 -▁improve 1 -▁funding 1 -▁heavy 1 -▁Israel 1 -▁lose 1 -▁committee 1 -▁developed 1 -▁culture 1 -▁subject 1 -▁net 1 -▁appear 1 -▁@ 1 -▁limited 1 -▁Photo 1 -▁Bay 1 -ton 1 -▁difference 1 -▁funds 1 -▁Chief 1 -▁train 1 -▁22 1 -▁Justice 1 -▁Let 1 -son 1 -▁Tom 1 -▁claim 1 -▁activities 1 -▁gold 1 -▁provides 1 -▁ten 1 -▁cool 1 -▁Jones 1 -▁physical 1 -▁pictures 1 -▁fit 1 -▁EU 1 -▁River 1 -▁serve 1 -▁professional 1 -▁doctor 1 -▁highest 1 -▁image 1 -▁learning 1 -▁ride 1 -▁reasons 1 -nd 1 -▁Mexico 1 -▁television 1 -▁specific 1 -▁supposed 1 -▁Committee 1 -▁ice 1 -▁bus 1 -0 1 -na 1 -▁operations 1 -▁offers 1 -▁push 1 -▁Brown 1 -▁Other 1 -▁drop 1 -▁brand 1 -▁sale 1 -▁missed 1 -▁truth 1 -▁block 1 -▁$1 1 -▁gives 1 -▁X 1 -▁feature 1 -▁analysts 1 -▁continues 1 -▁wearing 1 -▁society 1 -▁helping 1 -▁bottom 1 -▁easily 1 -▁appears 1 -ism 1 -de 1 -▁launched 1 -▁murder 1 -▁nature 1 -▁environment 1 -▁necessary 1 -▁sector 1 -▁watched 1 -▁Thomas 1 -▁Prime 1 -▁camera 1 -▁decades 1 -▁please 1 -▁Britain 1 -▁mentioned 1 -▁arms 1 -ist 1 -▁brain 1 -▁speech 1 -▁Hall 1 -▁showing 1 -That 1 -W 1 -▁grew 1 -▁Maybe 1 -▁explained 1 -▁African 1 -▁debt 1 -▁ones 1 -out 1 -one 1 -▁victims 1 -▁south 1 -▁becoming 1 -▁criminal 1 -▁Scott 1 -▁broke 1 -▁parties 1 -▁23 1 -▁style 1 -▁Government 1 -▁Central 1 -▁Commission 1 -ar 1 -▁60 1 -▁Air 1 -– 1 -▁fighting 1 -▁tree 1 -▁dad 1 -▁worse 1 -▁attacks 1 -▁platform 1 -▁conversation 1 -▁Q 1 -▁Western 1 -▁traditional 1 -▁threat 1 -▁powerful 1 -▁modern 1 -▁happens 1 -▁activity 1 -▁CEO 1 -▁Oh 1 -▁peace 1 -▁talked 1 -▁Republicans 1 -▁Big 1 -▁programs 1 -ley 1 -▁owner 1 -▁Those 1 -ic 1 -▁Church 1 -▁items 1 -▁option 1 -▁produced 1 -▁raise 1 -▁Green 1 -▁NFL 1 -▁(@ 1 -▁victim 1 -▁finding 1 -▁Me 1 -▁digital 1 -▁sites 1 -▁twice 1 -▁device 1 -▁partner 1 -▁candidate 1 -▁trust 1 -▁score 1 -▁crisis 1 -by 1 -▁carry 1 -▁interested 1 -▁lines 1 -▁hotel 1 -▁Amazon 1 -▁earnings 1 -▁nuclear 1 -▁wonderful 1 -▁crowd 1 -▁Post 1 -▁al 1 -▁keeping 1 -▁Jan 1 -▁condition 1 -▁whom 1 -less 1 -▁rule 1 -▁losing 1 -▁direction 1 -▁Christian 1 -el 1 -▁climate 1 -▁Iran 1 -▁Martin 1 -▁highly 1 -▁alleged 1 -▁covered 1 -▁cross 1 -▁Despite 1 -▁cities 1 -▁truly 1 -▁benefit 1 -▁shown 1 -▁options 1 -▁Their 1 -▁corner 1 -▁clearly 1 -▁voters 1 -▁rain 1 -▁filed 1 -▁launch 1 -▁leadership 1 -▁council 1 -▁closer 1 -▁tough 1 -▁miss 1 -▁Your 1 -▁active 1 -▁actions 1 -at 1 -▁markets 1 -_ 1 -▁location 1 -▁presidential 1 -▁benefits 1 -▁discovered 1 -▁dream 1 -▁choose 1 -▁path 1 -▁slow 1 -▁photos 1 -▁Australian 1 -▁snow 1 -▁talks 1 -But 1 -▁arm 1 -▁screen 1 -▁commercial 1 -▁larger 1 -▁responsible 1 -▁2018. 1 -▁realized 1 -▁headed 1 -▁kill 1 -▁Peter 1 -▁Ben 1 -▁dollars 1 -▁completed 1 -▁cast 1 -ia 1 -▁injured 1 -▁scheduled 1 -▁spring 1 -pm 1 -▁remained 1 -▁skin 1 -▁starts 1 -▁tour 1 -▁communities 1 -▁standard 1 -▁Star 1 -▁Sam 1 -▁Kim 1 -▁Can 1 -▁Super 1 -▁sources 1 -▁characters 1 -▁Over 1 -▁Hill 1 -▁request 1 -▁mouth 1 -▁proposed 1 -▁Williams 1 -▁Joe 1 -▁birth 1 -▁camp 1 -▁names 1 -▁Ryan 1 -▁carried 1 -▁author 1 -▁primary 1 -▁setting 1 -▁laws 1 -▁color 1 -▁suggested 1 -▁hadn 1 -▁2017. 
1 -▁ex 1 -off 1 -ling 1 -▁science 1 -▁plant 1 -▁hoping 1 -▁Company 1 -▁opposition 1 -▁vehicles 1 -▁fan 1 -▁wind 1 -▁Go 1 -▁worst 1 -No 1 -rs 1 -▁fair 1 -▁committed 1 -▁attorney 1 -▁Lake 1 -▁tomorrow 1 -▁hasn 1 -▁section 1 -▁opportunities 1 -▁concerned 1 -▁spokesman 1 -▁code 1 -▁analysis 1 -▁advantage 1 -▁Jack 1 -land 1 -▁disease 1 -V 1 -▁pull 1 -▁direct 1 -▁stated 1 -▁abuse 1 -▁established 1 -▁status 1 -▁touch 1 -▁guilty 1 -▁sea 1 -▁pro 1 -▁flight 1 -▁feels 1 -▁agree 1 -▁knowledge 1 -we 1 -▁unique 1 -▁village 1 -▁winter 1 -▁slightly 1 -▁independent 1 -▁Instead 1 -▁busy 1 -▁insurance 1 -▁resources 1 -▁individuals 1 -▁teacher 1 -▁records 1 -▁mine 1 -▁progress 1 -▁etc 1 -▁Research 1 -▁super 1 -▁Service 1 -la 1 -▁shop 1 -▁2013 1 -▁easier 1 -▁Carolina 1 -▁healthy 1 -▁homes 1 -▁unit 1 -▁focused 1 -▁Lee 1 -ta 1 -H 1 -▁* 1 -▁2017, 1 -▁Re 1 -▁emergency 1 -▁radio 1 -▁surprise 1 -▁spread 1 -▁injuries 1 -▁kid 1 -▁seeking 1 -▁26 1 -▁filled 1 -▁supply 1 -▁respect 1 -▁sun 1 -▁broken 1 -▁Daily 1 -▁prior 1 -▁marriage 1 -▁critical 1 -▁foot 1 -▁27 1 -▁separate 1 -▁mobile 1 -▁elections 1 -▁28 1 -▁operating 1 -▁Before 1 -▁weren 1 -▁catch 1 -▁politics 1 -▁aware 1 -▁0 1 -▁warm 1 -▁mission 1 -▁fund 1 -Y 1 -▁steps 1 -▁coffee 1 -▁policies 1 -▁2012 1 -for 1 -▁prevent 1 -▁massive 1 -▁truck 1 -▁identified 1 -▁birthday 1 -▁episode 1 -▁kitchen 1 -▁happening 1 -▁Syria 1 -▁noticed 1 -▁Associated 1 -▁intelligence 1 -▁plays 1 -▁assault 1 -▁restaurant 1 -▁tournament 1 -▁software 1 -▁rose 1 -▁presence 1 -▁/ 1 -▁centre 1 -▁ordered 1 -▁El 1 -▁reduce 1 -▁entered 1 -▁equipment 1 -▁2016, 1 -▁audience 1 -un 1 -▁suspect 1 -▁Great 1 -▁Army 1 -▁killing 1 -▁weapons 1 -▁Secretary 1 -▁trouble 1 -▁bag 1 -▁creating 1 -▁fellow 1 -▁ideas 1 -▁doubt 1 -▁devices 1 -▁Jesus 1 -▁housing 1 -▁thanks 1 -▁Supreme 1 -▁tells 1 -▁greater 1 -▁fresh 1 -▁male 1 -▁Club 1 -▁mental 1 -▁session 1 -▁decide 1 -▁hundreds 1 -▁hate 1 -like 1 -▁Who 1 -▁offering 1 -ra 1 -▁smaller 1 -▁Pakistan 1 -▁Virginia 1 -▁strategy 1 -▁extremely 1 -▁Will 1 -▁Island 1 -▁lunch 1 -▁allows 1 -▁Lord 1 -▁strength 1 -▁recorded 1 -▁develop 1 -▁animals 1 -▁Mary 1 -▁William 1 -▁facing 1 -U 1 -▁rock 1 -▁providing 1 -▁concern 1 -▁Ms 1 -▁introduced 1 -▁notice 1 -▁owned 1 -rd 1 -▁allegations 1 -▁drugs 1 -▁removed 1 -▁Valley 1 -▁seconds 1 -X 1 -ma 1 -▁decade 1 -▁complex 1 -▁Paris 1 -▁crazy 1 -▁ship 1 -▁suffered 1 -( 1 -▁ban 1 -ity 1 -▁winner 1 -▁ratio 1 -▁display 1 -▁meaning 1 -▁clothes 1 -▁text 1 -▁wear 1 -▁Board 1 -▁degree 1 -▁eating 1 -▁Be 1 -▁wonder 1 -▁Ohio 1 -▁guard 1 -▁civil 1 -▁internet 1 -▁fifth 1 -▁prepared 1 -▁Steve 1 -▁proud 1 -▁Institute 1 -▁Security 1 -▁produce 1 -▁yes 1 -▁elected 1 -▁tonight 1 -▁draw 1 -▁exchange 1 -▁smile 1 -▁machine 1 -▁enjoyed 1 -▁Pro 1 -▁fired 1 -ry 1 -▁worry 1 -▁allowing 1 -And 1 -do 1 -▁explain 1 -▁willing 1 -▁journey 1 -▁debate 1 -▁Boston 1 -▁Meanwhile 1 -▁appearance 1 -▁remaining 1 -▁effects 1 -▁heat 1 -▁die 1 -▁Asia 1 -▁influence 1 -▁Year 1 -▁Public 1 -▁measures 1 -▁la 1 -▁purchase 1 -▁plenty 1 -▁skills 1 -▁dogs 1 -▁Director 1 -▁except 1 -▁drink 1 -▁dangerous 1 -▁Toronto 1 -:// 1 -▁speaking 1 -▁younger 1 -▁domestic 1 -▁memory 1 -▁trading 1 -▁Jackson 1 -▁bringing 1 -▁Matt 1 -▁documents 1 -!” 1 -ne 1 -te 1 -▁bigger 1 -▁apartment 1 -▁earned 1 -▁user 1 -▁realize 1 -▁Richard 1 -▁citizens 1 -▁dollar 1 -▁require 1 -▁religious 1 -da 1 -▁file 1 -▁negative 1 -▁coverage 1 -My 1 -▁struck 1 -▁crash 1 -▁application 1 -▁pushed 1 -▁legislation 1 -▁sweet 1 -line 1 -▁initial 1 -”. 
1 -▁generally 1 -▁justice 1 -▁measure 1 -▁Fox 1 -▁draft 1 -▁2018, 1 -▁sick 1 -▁actual 1 -▁illegal 1 -▁lots 1 -▁increasing 1 -▁understanding 1 -▁Federal 1 -▁sides 1 -▁debut 1 -▁hopes 1 -▁thank 1 -▁surprised 1 -So 1 -▁approved 1 -▁imagine 1 -▁effective 1 -be 1 -▁surgery 1 -▁owners 1 -v 1 -▁Still 1 -▁operation 1 -▁leg 1 -▁affected 1 -▁accept 1 -▁county 1 -▁lawyer 1 -▁opinion 1 -▁holiday 1 -▁species 1 -▁nearby 1 -▁selling 1 -▁chair 1 -When 1 -▁solution 1 -▁mass 1 -▁broadcast 1 -▁2011 1 -▁responded 1 -▁existing 1 -▁glass 1 -▁Every 1 -▁collection 1 -▁signs 1 -▁pieces 1 -▁stayed 1 -▁occurred 1 -▁enforcement 1 -▁balance 1 -▁trees 1 -▁buying 1 -▁respond 1 -▁Thank 1 -▁edge 1 -▁Korean 1 -▁connection 1 -▁approximately 1 -▁storm 1 -▁29 1 -led 1 -▁slowly 1 -▁secret 1 -▁shut 1 -▁writer 1 -▁gain 1 -▁paying 1 -▁absolutely 1 -▁2010 1 -▁Le 1 -▁university 1 -▁drove 1 -▁changing 1 -▁studies 1 -▁Finally 1 -▁purpose 1 -▁facility 1 -▁500 1 -▁FBI 1 -▁figures 1 -▁discuss 1 -▁possibly 1 -▁believes 1 -▁Good 1 -▁throw 1 -▁en 1 -▁anyway 1 -▁wedding 1 -he 1 -▁stores 1 -▁stars 1 -▁attended 1 -▁Saudi 1 -▁denied 1 -▁originally 1 -▁Get 1 -▁turning 1 -▁candidates 1 -▁thoughts 1 -▁gift 1 -▁refused 1 -▁Andrew 1 -▁Foundation 1 -▁faith 1 -▁decisions 1 -▁capacity 1 -▁Under 1 -▁estimated 1 -▁fish 1 -▁award 1 -▁70 1 -▁challenges 1 -▁confidence 1 -▁arrest 1 -▁mark 1 -▁familiar 1 -▁faces 1 -▁island 1 -▁Home 1 -▁Japanese 1 -▁customer 1 -▁shopping 1 -▁forget 1 -▁Michigan 1 -▁knowing 1 -▁animal 1 -ch 1 -▁declined 1 -▁nor 1 -▁Sen 1 -▁cell 1 -▁Life 1 -▁presented 1 -▁shots 1 -▁songs 1 -▁totally 1 -▁moments 1 -▁count 1 -▁pointed 1 -▁leaves 1 -▁sounds 1 -back 1 -▁basic 1 -▁determined 1 -▁apparently 1 -▁handle 1 -▁Sports 1 -▁enter 1 -▁legs 1 -ation 1 -▁distance 1 -▁supported 1 -▁turns 1 -▁Ireland 1 -▁2016. 
1 -▁immigration 1 -▁funny 1 -▁advice 1 -▁reportedly 1 -men 1 -▁II 1 -▁expressed 1 -▁famous 1 -▁link 1 -w 1 -▁associated 1 -▁sad 1 -▁banks 1 -▁crew 1 -▁experienced 1 -▁Fire 1 -▁none 1 -▁reporters 1 -▁cards 1 -▁90 1 -▁Jim 1 -▁serving 1 -▁Earth 1 -▁Please 1 -▁fuel 1 -▁infrastructure 1 -▁plane 1 -▁unable 1 -▁regional 1 -▁gotten 1 -▁worried 1 -▁structure 1 -▁quiet 1 -▁route 1 -▁Grand 1 -▁behavior 1 -ian 1 -▁Alex 1 -ville 1 -▁admitted 1 -▁Open 1 -▁Game 1 -▁largely 1 -▁rising 1 -▁generation 1 -▁liked 1 -▁parking 1 -▁agencies 1 -▁listen 1 -ion 1 -▁suddenly 1 -all 1 -▁researchers 1 -▁east 1 -▁passing 1 -▁usual 1 -▁defensive 1 -▁views 1 -▁Z 1 -▁variety 1 -▁giant 1 -▁Kevin 1 -▁faced 1 -▁dance 1 -▁wake 1 -▁ruling 1 -▁comfortable 1 -▁uses 1 -▁allegedly 1 -▁Instagram 1 -▁ongoing 1 -▁pool 1 -▁smart 1 -▁obviously 1 -▁tired 1 -▁cookies 1 -▁200 1 -▁streets 1 -▁Three 1 -▁west 1 -▁accounts 1 -▁strike 1 -▁mention 1 -▁surface 1 -▁Yet 1 -▁aircraft 1 -She 1 -▁regarding 1 -▁Royal 1 -ner 1 -▁seek 1 -▁plus 1 -▁treated 1 -▁governor 1 -▁reserved 1 -▁counter 1 -▁freedom 1 -▁solid 1 -▁delivered 1 -▁31 1 -▁Sun 1 -▁offensive 1 -▁standards 1 -▁agent 1 -▁youth 1 -▁Dan 1 -▁units 1 -▁notes 1 -▁dress 1 -▁anymore 1 -▁considering 1 -▁actor 1 -▁deliver 1 -▁expensive 1 -ies 1 -▁Centre 1 -▁teachers 1 -▁Love 1 -▁basketball 1 -▁flat 1 -▁protection 1 -▁responsibility 1 -▁newspaper 1 -▁Charles 1 -▁Only 1 -▁schedule 1 -▁falling 1 -▁experts 1 -▁images 1 -▁proposal 1 -▁Blue 1 -j 1 -Z 1 -▁Francisco 1 -▁80 1 -▁scoring 1 -▁artist 1 -▁warned 1 -▁Dec 1 -▁Management 1 -▁core 1 -▁professor 1 -▁multi 1 -▁Wall 1 -▁Services 1 -▁performed 1 -go 1 -▁patient 1 -▁Prince 1 -▁Each 1 -▁failure 1 -▁unless 1 -▁becomes 1 -▁attend 1 -▁drivers 1 -▁shape 1 -▁grown 1 -▁seasons 1 -▁Arizona 1 -▁thus 1 -▁seriously 1 -▁cat 1 -▁types 1 -ka 1 -▁rare 1 -▁2015, 1 -▁chairman 1 -▁Hospital 1 -▁= 1 -▁shoulder 1 -▁Force 1 -so 1 -▁developing 1 -▁receiving 1 -▁champion 1 -▁apart 1 -▁jail 1 -▁river 1 -▁helps 1 -ish 1 -▁pace 1 -▁scale 1 -▁Long 1 -▁testing 1 -▁warning 1 -▁estate 1 -▁prime 1 -▁Though 1 -▁otherwise 1 -▁Jr 1 -▁accident 1 -▁Harry 1 -▁farm 1 -field 1 -▁positions 1 -▁wild 1 -▁About 1 -▁tight 1 -▁airport 1 -▁zone 1 -Q 1 -▁taxes 1 -▁tiny 1 -▁combined 1 -▁welcome 1 -▁secure 1 -not 1 -▁Women 1 -▁perform 1 -▁initially 1 -Photo 1 -▁35 1 -▁buildings 1 -▁entirely 1 -▁supporters 1 -ri 1 -▁web 1 -▁organizations 1 -▁doctors 1 -▁fill 1 -▁factors 1 -▁Ford 1 -▁remove 1 -▁Middle 1 -▁AP 1 -▁background 1 -▁alive 1 -▁tied 1 -▁Sea 1 -▁listed 1 -▁2, 1 -▁corporate 1 -▁Houston 1 -▁Division 1 -▁assets 1 -▁numerous 1 -▁Mom 1 -▁task 1 -▁announcement 1 -▁stuck 1 -As 1 -▁Beach 1 -▁division 1 -▁alongside 1 -▁votes 1 -▁replaced 1 -▁featured 1 -▁split 1 -▁visited 1 -▁boat 1 -point 1 -20 1 -▁sorry 1 -▁Nov 1 -▁exercise 1 -▁sharing 1 -ro 1 -▁Coast 1 -▁fashion 1 -▁Time 1 -▁classes 1 -▁carrying 1 -▁Bob 1 -▁Mrs 1 -▁army 1 -▁conducted 1 -▁voted 1 -age 1 -▁Reuters 1 -▁aid 1 -▁dry 1 -▁conflict 1 -▁afraid 1 -▁acting 1 -▁facilities 1 -▁Islamic 1 -▁intended 1 -▁breaking 1 -▁ring 1 -▁Market 1 -▁shift 1 -▁engine 1 -▁Davis 1 -▁vision 1 -▁determine 1 -me 1 -▁declared 1 -▁adult 1 -▁novel 1 -▁discussion 1 -▁ultimately 1 -ment 1 -▁somewhere 1 -▁province 1 -▁seats 1 -▁lawsuit 1 -▁relatively 1 -▁update 1 -▁prove 1 -▁suit 1 -▁tools 1 -▁Santa 1 -▁Southern 1 -▁directed 1 -▁extended 1 -▁glad 1 -▁transfer 1 -▁possibility 1 -ty 1 -▁formed 1 -▁limit 1 -▁appeal 1 -▁Italy 1 -▁bodies 1 -▁southern 1 -▁plants 1 -▁Adam 1 -▁Kelly 1 -▁Taylor 1 -▁hits 1 -▁reduced 1 -▁bike 1 -▁Copyright 1 -▁Sometimes 1 -▁flow 1 -▁panel 1 -▁sought 1 
-▁Jeff 1 -▁Northern 1 -▁threw 1 -▁soldiers 1 -▁bid 1 -▁beach 1 -▁conservative 1 -▁therefore 1 -▁Bush 1 -▁ourselves 1 -▁accepted 1 -▁stands 1 -▁Italian 1 -ting 1 -▁survey 1 -▁45 1 -▁protest 1 -▁lay 1 -▁Tim 1 -▁heading 1 -▁THE 1 -▁matches 1 -▁campus 1 -▁fly 1 -▁strange 1 -▁replace 1 -▁okay 1 -▁obvious 1 -▁Premier 1 -▁Daniel 1 -end 1 -▁promise 1 -▁NBA 1 -▁Best 1 -▁2008 1 -▁consumers 1 -▁upcoming 1 -▁struggle 1 -▁millions 1 -▁row 1 -▁vice 1 -▁sub 1 -▁neighborhood 1 -▁Posted 1 -▁suggest 1 -▁emotional 1 -▁jump 1 -▁connected 1 -▁Muslim 1 -Reuters 1 -▁square 1 -▁maintain 1 -▁empty 1 -▁neck 1 -▁defeat 1 -▁ending 1 -▁pounds 1 -▁Little 1 -▁Spanish 1 -▁chain 1 -▁marijuana 1 -▁Attorney 1 -▁horse 1 -way 1 -▁Feb 1 -▁movies 1 -▁environmental 1 -▁owns 1 -▁Brexit 1 -▁joint 1 -▁expansion 1 -▁y 1 -▁talent 1 -▁union 1 -▁authority 1 -▁secretary 1 -▁bridge 1 -▁taught 1 -▁forecast 1 -▁metal 1 -▁Bowl 1 -▁increasingly 1 -▁plastic 1 -▁Nick 1 -▁pop 1 -▁function 1 -▁tech 1 -▁Law 1 -▁chose 1 -▁sport 1 -▁identify 1 -▁returning 1 -▁films 1 -over 1 -▁cap 1 -born 1 -▁requires 1 -▁adults 1 -▁traded 1 -▁shouldn 1 -▁faster 1 -▁improved 1 -▁gender 1 -▁assistant 1 -▁concept 1 -▁quarterback 1 -▁Capital 1 -▁Brian 1 -▁interests 1 -▁relief 1 -▁Back 1 -▁Energy 1 -▁Un 1 -▁baseball 1 -▁promised 1 -▁championship 1 -▁treat 1 -▁potentially 1 -ni 1 -▁identity 1 -▁statements 1 -▁instance 1 -▁manage 1 -▁Georgia 1 -▁Pacific 1 -per 1 -▁expectations 1 -▁Are 1 -▁appointed 1 -▁impossible 1 -▁consumer 1 -os 1 -game 1 -▁northern 1 -▁causing 1 -▁parent 1 -▁supporting 1 -▁suffering 1 -▁apply 1 -▁Iraq 1 -▁lady 1 -!) 1 -▁marketing 1 -▁applications 1 -run 1 -▁Miami 1 -▁Read 1 -▁Manchester 1 -▁achieve 1 -▁brings 1 -▁stepped 1 -▁models 1 -▁Colorado 1 -▁conduct 1 -▁ceremony 1 -▁Jordan 1 -▁profit 1 -▁mix 1 -▁stake 1 -▁tears 1 -▁hell 1 -▁magazine 1 -▁sentence 1 -▁Internet 1 -▁Team 1 -long 1 -▁USA 1 -▁values 1 -▁everybody 1 -▁tests 1 -▁typically 1 -▁troops 1 -▁retired 1 -▁commitment 1 -▁partners 1 -est 1 -▁Bar 1 -▁Young 1 -▁pushing 1 -▁exciting 1 -▁retail 1 -▁anywhere 1 -▁laid 1 -▁beer 1 -▁wine 1 -▁Louis 1 -▁2009 1 -▁Avenue 1 -▁greatest 1 -▁singer 1 -▁Conference 1 -▁stress 1 -▁desire 1 -▁pic 1 -▁minor 1 -▁entry 1 -▁valued 1 -▁Media 1 -▁boost 1 -▁lights 1 -▁Old 1 -▁Thanks 1 -▁breath 1 -▁Global 1 -▁purchased 1 -▁doors 1 -▁cuts 1 -▁chest 1 -▁Jewish 1 -▁Following 1 -▁stick 1 -which 1 -▁Jason 1 -+ 1 -▁Oct 1 -▁Israeli 1 -▁Mo 1 -▁teaching 1 -▁soft 1 -▁experiences 1 -▁Turkey 1 -▁t 1 -▁Rs 1 -▁holds 1 -▁boss 1 -▁kick 1 -▁grand 1 -▁shoes 1 -▁angry 1 -▁attempted 1 -▁$2 1 -▁poll 1 -▁surrounding 1 -▁Catholic 1 -ter 1 -▁Y 1 -▁reporting 1 -▁Van 1 -▁houses 1 -▁bright 1 -▁approval 1 -▁2014, 1 -▁era 1 -▁contributed 1 -▁upset 1 -▁percentage 1 -▁Wilson 1 -▁lawmakers 1 -▁visitors 1 -▁pregnant 1 -il 1 -▁hole 1 -▁advanced 1 -▁labor 1 -▁Up 1 -▁Award 1 -▁internal 1 -▁loan 1 -▁studio 1 -▁coast 1 -▁Joseph 1 -▁armed 1 -▁Judge 1 -▁tend 1 -▁feed 1 -▁invited 1 -▁flying 1 -well 1 -yard 1 -▁industrial 1 -▁Irish 1 -▁Show 1 -▁Zealand 1 -▁waste 1 -▁Society 1 -▁hitting 1 -▁escape 1 -ized 1 -▁false 1 -▁drama 1 -side 1 -▁forever 1 -▁roll 1 -▁alternative 1 -▁leads 1 -▁rich 1 -▁Queen 1 -▁Right 1 -▁materials 1 -▁immediate 1 -▁drinking 1 -▁Kansas 1 -▁visiting 1 -▁honor 1 -▁discussed 1 -▁Miller 1 -▁Frank 1 -Oh 1 -▁begins 1 -▁Jersey 1 -▁specifically 1 -▁argued 1 -▁awesome 1 -▁Rep 1 -▁sixth 1 -▁videos 1 -ca 1 -house 1 -▁barely 1 -▁artists 1 -▁affect 1 -▁volume 1 -▁laugh 1 -▁lucky 1 -▁Minnesota 1 -▁struggling 1 -▁advance 1 -▁native 1 -▁earth 1 -▁appropriate 1 -▁squad 1 -▁electric 1 -▁Vegas 1 
[Omitted: the body of a SentencePiece-style vocabulary file deleted in the same diff. Each removed line holds one "<token> <count>" entry; a leading "▁" (U+2581) marks a word-initial piece, and every count here is the placeholder value 1. Thousands of entries are elided, as the individual tokens carry no further information.]
1 -▁NC 1 -Because 1 -▁shouting 1 -▁dragon 1 -▁Earl 1 -▁canceled 1 -▁beneficial 1 -▁holy 1 -▁Eagle 1 -▁bucket 1 -▁evaluation 1 -▁40- 1 -▁Elementary 1 -▁revenge 1 -▁suppliers 1 -▁bombs 1 -▁recruiting 1 -▁abandon 1 -▁Colombia 1 -▁paperwork 1 -▁shifts 1 -▁negotiating 1 -▁pad 1 -▁poster 1 -▁limitations 1 -lar 1 -,500 1 -▁nail 1 -▁admits 1 -▁retained 1 -▁2000, 1 -too 1 -boy 1 -▁Thomson 1 -▁Lynn 1 -▁floors 1 -▁fierce 1 -▁indictment 1 -▁charts 1 -▁sigh 1 -Ha 1 -▁sadness 1 -▁dam 1 -▁(10 1 -▁surviving 1 -▁clever 1 -bridge 1 -▁declining 1 -▁convenient 1 -rock 1 -▁Louisville 1 -▁threaten 1 -▁firearm 1 -▁starred 1 -▁sweep 1 -burn 1 -▁recognised 1 -▁protections 1 -▁Honor 1 -▁88 1 -▁absent 1 -▁Similarly 1 -▁Private 1 -▁miracle 1 -▁creativity 1 -▁confronted 1 -▁Modern 1 -▁disappear 1 -▁pills 1 -▁graphics 1 -▁chains 1 -▁Czech 1 -▁keyboard 1 -▁protocol 1 -▁sticks 1 -▁liver 1 -▁Barnes 1 -iv 1 -▁consultation 1 -▁portrait 1 -▁HBO 1 -▁Stop 1 -▁drilling 1 -▁whoever 1 -▁economist 1 -▁jurisdiction 1 -▁Berkeley 1 -▁Ob 1 -▁southwest 1 -▁Racing 1 -▁incumbent 1 -▁Imagine 1 -▁entries 1 -NA 1 -▁viable 1 -▁grin 1 -▁Quinn 1 -ona 1 -ations 1 -▁wiped 1 -▁Ph 1 -ps 1 -AD 1 -▁sends 1 -▁stays 1 -▁Lab 1 -▁experiments 1 -Of 1 -▁Try 1 -▁stones 1 -▁Palmer 1 -▁Stay 1 -▁Studios 1 -▁Leave 1 -▁Cuban 1 -▁McCarthy 1 -▁Sudan 1 -▁Wallace 1 -onic 1 -▁rang 1 -▁gesture 1 -▁drank 1 -cu 1 -▁Reynolds 1 -▁composition 1 -▁Tre 1 -▁adorable 1 -▁ladder 1 -▁lacking 1 -▁zones 1 -!!! 1 -▁anchor 1 -▁Em 1 -▁creator 1 -▁licence 1 -▁Abe 1 -▁Jazz 1 -▁stack 1 -▁Zone 1 -▁Criminal 1 -▁bearing 1 -Or 1 -9,000 1 -▁Aunt 1 -▁Email 1 -▁magazines 1 -▁probation 1 -▁Gates 1 -▁permanently 1 -▁Kurdish 1 -site 1 -▁fabulous 1 -ert 1 -▁h 1 -▁puppy 1 -▁medals 1 -zer 1 -▁accessed 1 -▁themes 1 -▁willingness 1 -▁sandwich 1 -▁Kai 1 -▁Crystal 1 -▁achieving 1 -▁audit 1 -ali 1 -▁$500 1 -▁Training 1 -▁Holland 1 -▁cruel 1 -▁rankings 1 -▁greenhouse 1 -▁FOR 1 -▁2006. 
1 -▁evaluate 1 -▁derived 1 -▁lending 1 -▁lone 1 -▁Florence 1 -▁advances 1 -▁Kit 1 -▁harmful 1 -▁heroin 1 -weight 1 -▁Rebecca 1 -▁awaiting 1 -▁Asset 1 -▁deliberately 1 -▁aluminum 1 -▁stunned 1 -▁Hitler 1 -▁Pra 1 -▁essay 1 -62 1 -Look 1 -▁prohibited 1 -bin 1 -▁Chancellor 1 -▁Usually 1 -▁Living 1 -▁watches 1 -▁massacre 1 -▁respondents 1 -▁Nova 1 -mal 1 -▁structural 1 -▁experimental 1 -▁marking 1 -▁0.5 1 -▁vacuum 1 -▁palace 1 -▁flee 1 -▁Trevor 1 -▁cooler 1 -▁outer 1 -ica 1 -▁affiliate 1 -▁trails 1 -▁mega 1 -▁Better 1 -▁nationally 1 -▁favorable 1 -▁86 1 -▁endorsed 1 -▁blogging 1 -▁87 1 -▁MD 1 -coming 1 -▁suck 1 -▁bikes 1 -▁92 1 -▁rebuild 1 -▁UP 1 -▁Cash 1 -▁unhappy 1 -▁achievements 1 -▁porn 1 -▁Lanka 1 -▁toddler 1 -▁profitable 1 -▁orientation 1 -▁Gallery 1 -▁Draft 1 -▁marketplace 1 -▁Wang 1 -▁defenders 1 -▁overhaul 1 -▁backwards 1 -▁DA 1 -▁arc 1 -▁slated 1 -▁Brock 1 -▁aiming 1 -▁Student 1 -▁offences 1 -▁packing 1 -▁welcoming 1 -▁stripped 1 -▁op 1 -▁pa 1 -23 1 -▁stakes 1 -▁Joint 1 -▁boxing 1 -▁Lloyd 1 -44 1 -47 1 -▁7:30 1 -▁alternate 1 -▁GMT 1 -▁si 1 -▁regards 1 -▁Fans 1 -▁Prof 1 -▁slowing 1 -▁void 1 -▁appealed 1 -▁Adrian 1 -▁(9 1 -▁Movement 1 -▁Gro 1 -▁outrage 1 -▁consisted 1 -▁Shore 1 -05 1 -▁counterparts 1 -▁nod 1 -▁hilarious 1 -▁Bee 1 -gy 1 -▁neighboring 1 -▁tribe 1 -bs 1 -▁rhythm 1 -▁shelves 1 -▁Political 1 -▁forecasts 1 -500 1 -▁83 1 -Here 1 -▁Bella 1 -▁expressing 1 -▁gates 1 -▁rebound 1 -▁Thankfully 1 -dal 1 -▁Globe 1 -▁Fo 1 -▁referee 1 -▁embarrassing 1 -▁Blair 1 -▁ports 1 -wi 1 -▁Idaho 1 -▁arguably 1 -▁Richardson 1 -▁conclude 1 -▁harvest 1 -▁denies 1 -▁suburb 1 -.0% 1 -▁inaugural 1 -▁Kam 1 -▁Tea 1 -▁Holmes 1 -▁competitions 1 -▁showers 1 -▁matchup 1 -▁Eight 1 -▁insiders 1 -▁timeline 1 -▁billionaire 1 -▁Duncan 1 -ati 1 -▁FOX 1 -▁enforce 1 -aw 1 -▁robots 1 -▁gunman 1 -take 1 -▁lightning 1 -55 1 -▁Seoul 1 -▁depressed 1 -read 1 -▁Working 1 -ase 1 -▁angel 1 -▁empire 1 -▁Bulls 1 -▁reluctant 1 -▁USB 1 -▁dip 1 -▁oral 1 -▁Newton 1 -▁finances 1 -▁homicide 1 -▁tightly 1 -▁Construction 1 -▁subscribe 1 -▁hostile 1 -83 1 -▁provinces 1 -▁Stuart 1 -▁Lily 1 -▁edited 1 -▁helmet 1 -▁defendants 1 -nic 1 -Every 1 -▁subscribers 1 -▁Olivia 1 -▁Joy 1 -▁Walk 1 -▁Ze 1 -▁concepts 1 -▁miserable 1 -▁sidewalk 1 -▁deceased 1 -rick 1 -▁2003, 1 -▁BMW 1 -▁Kerry 1 -▁Immigration 1 -▁apartments 1 -such 1 -shaped 1 -▁concentrate 1 -▁accounted 1 -▁Harbor 1 -▁fog 1 -▁Starting 1 -▁buttons 1 -▁embedded 1 -▁fleeing 1 -▁unconscious 1 -34 1 -▁UCLA 1 -▁Abdul 1 -▁realm 1 -▁embraced 1 -▁Strong 1 -▁contractors 1 -▁Carroll 1 -▁pays 1 -▁110 1 -▁Fan 1 -▁Falcons 1 -Source 1 -▁considers 1 -Le 1 -▁Ste 1 -▁epidemic 1 -▁GPS 1 -▁terrified 1 -▁adventures 1 -▁breeze 1 -▁bits 1 -▁recruit 1 -▁triumph 1 -▁Min 1 -▁translation 1 -▁persistent 1 -▁rub 1 -▁Wing 1 -▁PHOTO 1 -ions 1 -▁incorporated 1 -▁physics 1 -▁filter 1 -▁Gil 1 -▁Rosa 1 -▁Gibson 1 -▁precise 1 -▁Ethan 1 -▁knocking 1 -ical 1 -ther 1 -▁semester 1 -▁hunger 1 -▁anticipate 1 -▁cheer 1 -▁salaries 1 -▁Tel 1 -▁organizers 1 -▁Bor 1 -▁accessories 1 -minded 1 -▁aides 1 -▁dug 1 -▁Pompeo 1 -▁MLS 1 -▁Andrea 1 -SP 1 -grade 1 -▁Sgt 1 -▁obligation 1 -▁parish 1 -▁gloves 1 -▁Melissa 1 -▁runway 1 -▁1.2 1 -▁PE 1 -▁kills 1 -▁1988 1 -▁landmark 1 -▁resting 1 -▁sustain 1 -▁agrees 1 -▁grasp 1 -▁thereby 1 -▁Kris 1 -▁doubles 1 -▁Days 1 -▁lightly 1 -▁undermine 1 -▁Dana 1 -▁realizing 1 -anna 1 -RE 1 -▁lied 1 -▁implementing 1 -▁arch 1 -▁Peterson 1 -▁genes 1 -human 1 -▁AD 1 -▁Southwest 1 -▁Making 1 -▁50- 1 -▁celebrations 1 -▁forty 1 -▁handy 1 -NO 1 -▁reunion 1 -▁municipality 1 -▁camping 1 -▁Stephanie 1 -▁strictly 1 
-▁Char 1 -▁Jeffrey 1 -▁backlash 1 -▁Advisors 1 -bal 1 -▁exploded 1 -Mr 1 -▁Glasgow 1 -▁Shop 1 -▁instructed 1 -▁2001, 1 -▁Ky 1 -▁cinema 1 -▁Amber 1 -▁fireworks 1 -▁observe 1 -▁careers 1 -lyn 1 -▁Commander 1 -▁3-1 1 -▁detected 1 -▁Corbyn 1 -▁Tory 1 -▁dumb 1 -▁dried 1 -▁Shane 1 -▁Member 1 -▁Com 1 -▁Kal 1 -▁jam 1 -▁accountability 1 -ele 1 -▁eleven 1 -▁clarity 1 -FC 1 -▁economics 1 -▁compiled 1 -▁Emmanuel 1 -▁magnitude 1 -% 1 -nu 1 -▁amazed 1 -▁Works 1 -▁0.2 1 -▁detective 1 -▁inauguration 1 -▁treasure 1 -fish 1 -▁dawn 1 -iest 1 -▁tense 1 -33 1 -▁2.0 1 -▁observation 1 -▁Megan 1 -▁94 1 -$1 1 -▁collar 1 -▁profession 1 -fe 1 -▁shirts 1 -▁nephew 1 -▁Diamond 1 -▁lungs 1 -▁gauge 1 -▁turkey 1 -ES 1 -▁Erdogan 1 -▁tougher 1 -▁tribal 1 -▁Sin 1 -▁carriers 1 -▁Abraham 1 -▁violating 1 -▁commitments 1 -▁killings 1 -▁incentives 1 -▁evacuated 1 -▁Vo 1 -▁Average 1 -41 1 -▁Packers 1 -▁gotta 1 -▁nest 1 -▁accent 1 -▁programmes 1 -▁happier 1 -81 1 -▁homework 1 -▁Ne 1 -CP 1 -▁paused 1 -▁wheat 1 -▁Wildlife 1 -▁shine 1 -▁designers 1 -▁wallet 1 -▁ecosystem 1 -▁ka 1 -▁feat 1 -▁testify 1 -▁Ellis 1 -ches 1 -▁Block 1 -▁caps 1 -▁comparable 1 -▁organizing 1 -▁93 1 -▁Celtics 1 -▁fined 1 -▁steadily 1 -▁recipes 1 -▁Veterans 1 -▁Clarke 1 -GA 1 -▁Rex 1 -▁bloc 1 -▁united 1 -▁secretly 1 -▁devastated 1 -▁Probably 1 -▁blade 1 -▁Diana 1 -hander 1 -▁spark 1 -▁betting 1 -▁bout 1 -▁7- 1 -File 1 -▁28- 1 -▁cups 1 -▁indoor 1 -▁servers 1 -▁suite 1 -▁adapted 1 -78 1 -▁blend 1 -▁Ch 1 -▁warehouse 1 -▁Brother 1 -▁Sales 1 -▁execute 1 -▁contents 1 -▁syndrome 1 -▁puzzle 1 -▁restoration 1 -▁situated 1 -powered 1 -▁Case 1 -▁fraction 1 -▁Randy 1 -84 1 -▁NPR 1 -▁WA 1 -▁RCMP 1 -▁sneak 1 -/2 1 -▁distracted 1 -▁Dance 1 -▁securing 1 -▁duration 1 -▁Es 1 -▁Lagos 1 -▁obligations 1 -▁enables 1 -AS 1 -▁casino 1 -Day 1 -▁pig 1 -▁unaware 1 -▁Bin 1 -mel 1 -▁Cho 1 -Brien 1 -dor 1 -▁Workers 1 -▁digging 1 -▁beast 1 -▁starters 1 -▁travels 1 -▁inequality 1 -gu 1 -▁extract 1 -kg 1 -▁interactive 1 -▁Lower 1 -▁Chile 1 -▁reiterated 1 -▁clerk 1 -▁drones 1 -▁tire 1 -▁grandson 1 -▁milestone 1 -▁trim 1 -▁fist 1 -▁Cricket 1 -▁edges 1 -▁sa 1 -▁passport 1 -▁Bu 1 -any 1 -09 1 -▁Survey 1 -hole 1 -▁profound 1 -▁practicing 1 -74 1 -▁elevator 1 -▁yen 1 -▁Glass 1 -▁hacking 1 -▁Canyon 1 -▁comedian 1 -▁ignoring 1 -ec 1 -▁projections 1 -▁ritual 1 -▁26- 1 -▁essence 1 -mat 1 -cal 1 -▁accurately 1 -▁Seahawks 1 -▁tires 1 -▁Pac 1 -▁fond 1 -▁booth 1 -link 1 -controlled 1 -ME 1 -▁crystal 1 -▁poses 1 -ranked 1 -▁$40 1 -▁91 1 -▁Maduro 1 -DE 1 -▁Winnipeg 1 -▁insist 1 -DS 1 -▁Devils 1 -▁imported 1 -aba 1 -Pacific 1 -▁Salman 1 -▁weekends 1 -gue 1 -▁Gay 1 -also 1 -ub 1 -▁VA 1 -▁constituency 1 -▁parole 1 -rus 1 -▁criticised 1 -▁Guide 1 -▁renamed 1 -▁debuted 1 -▁frankly 1 -▁snack 1 -▁waved 1 -▁Virgin 1 -bad 1 -▁cracked 1 -▁Boulevard 1 -▁colored 1 -▁wasted 1 -▁Fine 1 -▁cognitive 1 -ock 1 -wise 1 -▁cigarette 1 -▁Ce 1 -▁processed 1 -▁favorites 1 -▁adaptation 1 -▁pitches 1 -▁Windsor 1 -▁bell 1 -▁LONDON 1 -▁rap 1 -4, 1 -nce 1 -OK 1 -gal 1 -▁lesser 1 -3. 
1 -▁Liberals 1 -▁hay 1 -when 1 -▁reign 1 -▁Fargo 1 -▁confusing 1 -▁dumped 1 -▁aggravated 1 -▁Edge 1 -▁toast 1 -ib 1 -▁laser 1 -▁Gu 1 -driven 1 -▁disabilities 1 -▁Banks 1 -▁Batman 1 -▁52- 1 -▁Sol 1 -▁Abbott 1 -hood 1 -▁Grandma 1 -▁mosque 1 -▁EP 1 -▁faithful 1 -rat 1 -▁scholars 1 -▁Curtis 1 -▁Self 1 -▁shorts 1 -▁positioned 1 -▁Heights 1 -▁Rodgers 1 -▁thrust 1 -verse 1 -▁Malcolm 1 -▁Ellen 1 -▁pill 1 -▁Mur 1 -▁placement 1 -▁shallow 1 -▁crackdown 1 -PL 1 -ral 1 -▁1984 1 -▁outlined 1 -▁Contact 1 -▁cousins 1 -una 1 -▁bipartisan 1 -pen 1 -▁Mis 1 -▁Markle 1 -:00 1 -▁baking 1 -front 1 -▁gambling 1 -▁Nasdaq 1 -▁tin 1 -▁inspiring 1 -▁Vol 1 -▁Bristol 1 -▁intends 1 -▁Province 1 -▁Shu 1 -bus 1 -▁Trophy 1 -▁Bernard 1 -billion 1 -▁ballots 1 -▁Poor 1 -# 1 -Ta 1 -▁vacant 1 -▁alternatives 1 -qua 1 -▁presenting 1 -▁Fourth 1 -▁180 1 -wick 1 -▁dedication 1 -77 1 -▁Chargers 1 -▁hurry 1 -▁Andre 1 -▁deleted 1 -▁armor 1 -▁neighbours 1 -▁rivers 1 -▁desktop 1 -▁Less 1 -▁FM 1 -oli 1 -▁GP 1 -▁20,000 1 -oo 1 -sson 1 -▁answering 1 -▁Nixon 1 -▁Bros 1 -▁Raj 1 -▁Gene 1 -IC 1 -▁detectives 1 -▁construct 1 -▁doubts 1 -▁towel 1 -▁Mat 1 -▁consequence 1 -east 1 -driving 1 -▁ammunition 1 -order 1 -▁Perez 1 -▁SAN 1 -▁0.3 1 -▁Peninsula 1 -▁hop 1 -▁playground 1 -▁Mills 1 -oc 1 -▁liability 1 -▁surgeon 1 -court 1 -name 1 -▁giants 1 -▁leap 1 -▁Tamil 1 -▁tide 1 -fall 1 -▁Campaign 1 -ory 1 -des 1 -▁Jar 1 -▁Seriously 1 -tie 1 -▁Rush 1 -▁lakh 1 -▁Sue 1 -ima 1 -▁Researchers 1 -▁overhead 1 -▁Kor 1 -▁Joey 1 -▁fruits 1 -▁demonstrations 1 -▁possess 1 -▁Upper 1 -▁lottery 1 -▁genuinely 1 -▁locate 1 -▁Emirates 1 -▁Belgian 1 -▁cease 1 -▁Legal 1 -▁Outside 1 -▁Christine 1 -▁weighing 1 -thirds 1 -▁Mediterranean 1 -▁nickname 1 -▁Ak 1 -lis 1 -▁horrific 1 -▁130 1 -▁heating 1 -▁Westminster 1 -▁dignity 1 -▁talents 1 -▁sworn 1 -lam 1 -▁Linux 1 -▁steering 1 -▁massage 1 -▁es 1 -▁duck 1 -▁preference 1 -▁legislators 1 -▁subsidiary 1 -▁deportation 1 -Ro 1 -▁gaps 1 -▁disappearance 1 -▁schemes 1 -standing 1 -▁portrayed 1 -▁rallied 1 -▁Tillerson 1 -▁UC 1 -▁Portuguese 1 -▁committing 1 -▁Hindu 1 -TO 1 -▁cord 1 -▁kindness 1 -▁evacuation 1 -▁incentive 1 -▁Minneapolis 1 -▁warmer 1 -22 1 -▁Ring 1 -▁autism 1 -▁Studio 1 -cor 1 -▁corridor 1 -▁Pearl 1 -ami 1 -▁MA 1 -depth 1 -▁hearings 1 -▁batch 1 -▁Tehran 1 -▁Cosby 1 -▁limiting 1 -} 1 -▁supplied 1 -eri 1 -▁displaced 1 -▁Ruby 1 -▁diaper 1 -▁casualties 1 -▁interrupted 1 -▁pouring 1 -▁tours 1 -▁enabling 1 -▁Spa 1 -▁Caroline 1 -risk 1 -▁raced 1 -▁Unless 1 -42 1 -▁intact 1 -▁independently 1 -▁cautious 1 -▁Dolphins 1 -LE 1 -▁bullying 1 -▁squeeze 1 -eo 1 -▁1986 1 -▁prestigious 1 -▁Sterling 1 -▁diamond 1 -▁fridge 1 -▁crushed 1 -▁performers 1 -▁generating 1 -▁prey 1 -▁confrontation 1 -▁89 1 -▁fare 1 -▁Weekly 1 -▁rider 1 -ei 1 -▁Core 1 -mont 1 -▁GT 1 -▁Welsh 1 -▁MI 1 -▁bargain 1 -▁considerably 1 -▁entrepreneurs 1 -▁compassion 1 -▁Baldwin 1 -▁historian 1 -▁shadows 1 -▁relaxing 1 -▁poet 1 -▁Silva 1 -▁cycling 1 -▁(17 1 -cost 1 -▁Miles 1 -SU 1 -tri 1 -▁logical 1 -▁disorders 1 -▁Ravens 1 -▁Armed 1 -▁im 1 -▁Kremlin 1 -▁heck 1 -▁credible 1 -Come 1 -▁Climate 1 -▁comparing 1 -.25 1 -▁sir 1 -▁Lance 1 -▁Ag 1 -▁holders 1 -▁ICE 1 -▁proving 1 -▁Annual 1 -▁militant 1 -▁dresses 1 -▁quilt 1 -From 1 -▁credentials 1 -▁Reply 1 -▁layout 1 -▁rewards 1 -▁disruption 1 -▁territories 1 -▁outright 1 -Car 1 -▁8- 1 -fu 1 -▁elbow 1 -▁minus 1 -▁glow 1 -▁LeBron 1 -PS 1 -▁Always 1 -▁Durant 1 -▁qualities 1 -▁handsome 1 -▁Grande 1 -▁forests 1 -tes 1 -▁prisoner 1 -▁Operation 1 -▁venues 1 -▁Faith 1 -▁Holly 1 -▁tolerance 1 -▁curriculum 1 -▁wolf 1 -▁incoming 1 -▁una 1 
-▁gardens 1 -▁Pe 1 -▁slavery 1 -▁mentor 1 -▁Les 1 -▁Pierre 1 -▁spine 1 -▁MLB 1 -”) 1 -▁destroying 1 -▁ordering 1 -▁cemetery 1 -▁ideology 1 -▁reflecting 1 -▁albeit 1 -▁Christie 1 -ening 1 -▁labels 1 -▁isolation 1 -▁diving 1 -▁corporation 1 -▁distress 1 -table 1 -▁shareholder 1 -▁forehead 1 -▁Finland 1 -▁Beast 1 -enberg 1 -▁Clay 1 -▁Shin 1 -▁Jackie 1 -▁Start 1 -▁Hamas 1 -▁USC 1 -▁declaring 1 -▁touches 1 -▁aisle 1 -▁Theater 1 -▁Arnold 1 -▁Cookies 1 -PC 1 -▁MO 1 -▁dies 1 -▁artwork 1 -▁Foods 1 -.99 1 -▁tendency 1 -▁des 1 -▁Sp 1 -▁Markets 1 -▁Christianity 1 -▁hood 1 -▁Wil 1 -▁sadly 1 -▁anticipation 1 -▁ETF 1 -▁stats 1 -▁Champion 1 -▁calories 1 -▁medications 1 -▁Cemetery 1 -▁gasoline 1 -▁computing 1 -▁Notre 1 -▁Horse 1 -▁2005. 1 -▁partisan 1 -▁enjoys 1 -▁ego 1 -▁haul 1 -▁Jenkins 1 -far 1 -▁warmth 1 -▁USD 1 -▁laboratory 1 -▁Bachelor 1 -pper 1 -▁delivers 1 -▁buyer 1 -▁denying 1 -▁differ 1 -▁skull 1 -▁maintains 1 -▁demo 1 -▁eclipse 1 -▁Bruins 1 -▁unacceptable 1 -▁27- 1 -▁throne 1 -▁Leicester 1 -ello 1 -▁credibility 1 -▁paths 1 -▁SE 1 -▁accommodation 1 -▁subsidies 1 -▁colours 1 -▁1987 1 -▁hammer 1 -▁Pink 1 -PO 1 -▁tricks 1 -▁bend 1 -▁97 1 -▁allegation 1 -▁shootings 1 -▁Auburn 1 -▁tooth 1 -▁Bra 1 -▁dominate 1 -▁airports 1 -▁junk 1 -▁handing 1 -▁Pitt 1 -▁Pin 1 -ema 1 -▁Matthews 1 -▁attacker 1 -▁flies 1 -▁Ronaldo 1 -▁academy 1 -▁explicitly 1 -▁explicit 1 -▁Cavaliers 1 -▁enrolled 1 -▁containers 1 -▁Lan 1 -▁accelerate 1 -▁rip 1 -RO 1 -▁HE 1 -▁Bru 1 -▁bicycle 1 -▁presumably 1 -43 1 -▁atop 1 -▁pond 1 -tte 1 -▁Stein 1 -▁sensation 1 -nia 1 -▁bridges 1 -▁partnerships 1 -eye 1 -▁2002, 1 -▁mainland 1 -▁Fun 1 -▁autumn 1 -▁Hispanic 1 -nis 1 -▁retiring 1 -▁overly 1 -John 1 -▁96 1 -▁convincing 1 -▁AFC 1 -▁counterpart 1 -▁chill 1 -▁Vatican 1 -tha 1 -▁stare 1 -▁Clean 1 -▁banner 1 -▁columnist 1 -big 1 -▁Micro 1 -▁drain 1 -rum 1 -▁weekday 1 -▁Hungary 1 -▁Enterprise 1 -▁scrap 1 -▁Chen 1 -▁Hay 1 -▁fool 1 -▁poised 1 -▁teaches 1 -▁flown 1 -▁Deal 1 -▁Know 1 -▁complications 1 -▁Jill 1 -▁hackers 1 -▁automated 1 -▁weaker 1 -▁Jenny 1 -▁Kra 1 -▁Fraser 1 -▁Kenny 1 -▁bugs 1 -▁fiber 1 -cus 1 -west 1 -▁optimism 1 -▁authentic 1 -▁subway 1 -▁surrender 1 -▁pupils 1 -▁loudly 1 -generation 1 -▁Beck 1 -▁Inspector 1 -▁Colts 1 -▁locals 1 -▁instructor 1 -AT 1 -▁scam 1 -▁Tam 1 -fo 1 -▁pairs 1 -▁abusive 1 -▁lovers 1 -▁fulfill 1 -▁Regardless 1 -▁Gregory 1 -▁Mont 1 -▁staged 1 -▁appetite 1 -▁twisted 1 -anti 1 -▁paired 1 -▁inbox 1 -▁Ser 1 -▁pork 1 -▁relay 1 -▁Wonder 1 -▁Simmons 1 -▁naval 1 -AR 1 -▁thrive 1 -▁hiking 1 -▁mansion 1 -▁CITY 1 -▁ho 1 -▁advantages 1 -▁charm 1 -lit 1 -▁likelihood 1 -▁Disease 1 -▁Northwest 1 -▁Historic 1 -Hi 1 -▁Customs 1 -▁Grade 1 -▁shipped 1 -▁mat 1 -▁donor 1 -▁streams 1 -▁mindset 1 -▁Tra 1 -▁sliding 1 -▁afterward 1 -.75 1 -▁cultures 1 -▁compact 1 -▁Imperial 1 -▁verify 1 -His 1 -▁habitat 1 -▁Silicon 1 -bel 1 -▁visitor 1 -38 1 -▁Mets 1 -▁Kin 1 -▁sacks 1 -zy 1 -▁Var 1 -▁pepper 1 -▁140 1 -base 1 -▁Ry 1 -▁dangers 1 -▁Zoo 1 -ep 1 -▁justified 1 -▁drum 1 -going 1 -wall 1 -▁Nar 1 -▁Je 1 -37 1 -▁denial 1 -ridge 1 -▁educate 1 -▁functioning 1 -▁mobility 1 -ido 1 -▁Armstrong 1 -▁offenders 1 -▁specialty 1 -▁airlines 1 -▁economists 1 -Na 1 -▁Mag 1 -▁namely 1 -▁Raymond 1 -▁Culture 1 -SO 1 -▁BE 1 -running 1 -bra 1 -▁spill 1 -▁stiff 1 -▁tracked 1 -▁intensive 1 -▁slaves 1 -▁withdrew 1 -▁Welcome 1 -▁revelation 1 -▁Munich 1 -piece 1 -▁IP 1 -▁employ 1 -▁rounded 1 -▁scent 1 -▁sucked 1 -▁complexity 1 -▁graduating 1 -▁joked 1 -▁Interstate 1 -ade 1 -▁homer 1 -eg 1 -▁poison 1 -▁spinning 1 -▁Titans 1 -▁seasonal 1 -▁Stevens 1 -▁Hand 1 -▁directing 1 
-▁1985 1 -▁failures 1 -▁sponsors 1 -▁shoppers 1 -▁LED 1 -93 1 -▁Gal 1 -▁Planning 1 -▁infamous 1 -▁presently 1 -▁comics 1 -▁inability 1 -▁Chu 1 -▁obstacles 1 -▁metro 1 -▁Verizon 1 -▁parenting 1 -▁Mah 1 --25 1 -finals 1 -▁Op 1 -▁sunshine 1 -▁ferry 1 -El 1 -▁echoed 1 -▁litigation 1 -▁98 1 -▁expose 1 -▁Dal 1 -▁therapist 1 -▁someday 1 -▁accordance 1 -▁daddy 1 -▁Voice 1 -▁railroad 1 -▁Ros 1 -▁cows 1 -▁LG 1 -▁coupled 1 -▁bilateral 1 -▁collections 1 -▁processor 1 -▁skies 1 -▁slim 1 -▁Hat 1 -▁tuition 1 -breaking 1 -▁breasts 1 -though 1 -▁Alexa 1 -▁Finals 1 -ux 1 -▁elder 1 -▁critically 1 -▁Conservatives 1 -▁mob 1 -chan 1 -▁altered 1 -nal 1 -▁Coleman 1 -app 1 -▁tricky 1 -▁instrumental 1 -▁recalls 1 -▁Sharks 1 -▁seated 1 -▁switching 1 -step 1 -▁thorough 1 -▁turnout 1 -▁Chair 1 -▁volatile 1 -08 1 -▁celebrates 1 -▁deposits 1 -through 1 -▁mi 1 -▁owe 1 -▁farther 1 -▁Below 1 -▁Representative 1 -▁needle 1 -▁hooked 1 -▁9/11 1 -▁clearance 1 -▁Arctic 1 -▁dock 1 -03 1 -▁crush 1 -▁lacked 1 -▁Made 1 -▁bust 1 -▁cornerback 1 -sley 1 -cur 1 -▁’” 1 -▁cheating 1 -▁grabbing 1 -▁accord 1 -▁Enforcement 1 -▁vibrant 1 -▁QB 1 -author 1 -▁contested 1 -▁den 1 -▁kidney 1 -▁utilities 1 -▁ensemble 1 -▁rainfall 1 -▁voluntary 1 -▁Sat 1 -▁Liz 1 -▁humble 1 -▁separately 1 -▁surveyed 1 -mit 1 -▁statewide 1 -▁Alzheimer 1 -▁Residents 1 -▁detection 1 -▁Nate 1 -▁Sally 1 -ified 1 -▁neighbourhood 1 -▁descent 1 -▁Additional 1 -® 1 -▁Sophie 1 -▁taxpayer 1 -▁Sho 1 -▁joins 1 -press 1 -▁Nike 1 -▁rejection 1 -▁preview 1 -▁promptly 1 -▁solidarity 1 -▁Yahoo 1 -▁CT 1 -▁Flash 1 -▁Para 1 -▁colorful 1 -▁pedestrian 1 -▁vegan 1 -▁buzz 1 -▁lining 1 -▁3-2 1 -▁Pu 1 -filled 1 -▁infections 1 -▁Echo 1 -▁crashing 1 -▁migrant 1 --21 1 -▁tuned 1 -▁suburbs 1 -gel 1 -▁Reds 1 -▁borrowing 1 -▁soundtrack 1 -great 1 -space 1 -▁Mack 1 -ori 1 -▁ethical 1 -▁administrator 1 -ila 1 -▁relied 1 -▁bride 1 -▁Haven 1 -▁Lang 1 -▁Burns 1 -▁lobbying 1 -▁capitalization 1 -stock 1 -▁stumbled 1 -▁6:30 1 -▁environments 1 -▁currencies 1 -▁undergo 1 -▁Annie 1 -▁searches 1 -▁Jacksonville 1 -▁varied 1 -▁kidnapping 1 -ration 1 -▁Sharon 1 -▁spends 1 -▁stint 1 -▁advancing 1 -▁Table 1 -▁Victorian 1 -▁j 1 -▁RE 1 -▁messaging 1 -lla 1 -▁Derby 1 -▁Flight 1 -▁liquor 1 -▁$18 1 -▁Presidential 1 -ture 1 -China 1 -▁substantially 1 -▁Mick 1 -▁advisers 1 -cia 1 -▁Kara 1 -▁Walsh 1 -▁Martha 1 -▁gratitude 1 -▁30% 1 -▁defeating 1 -▁HR 1 -▁wears 1 -▁Kushner 1 -▁recycling 1 -▁cheeks 1 -pers 1 -core 1 -▁smashed 1 -third 1 -▁ruin 1 -▁beaches 1 -▁bitch 1 -▁settling 1 -▁Brighton 1 -▁disputed 1 -ments 1 -▁Bit 1 -▁Soccer 1 -▁purely 1 -▁fixing 1 -▁colonial 1 -▁Senators 1 -▁Sr 1 -▁enters 1 -▁Professional 1 -▁deploy 1 -▁headache 1 -▁premises 1 -▁erupted 1 -Lo 1 -▁rat 1 -▁disposal 1 -▁Mum 1 -▁encourages 1 -▁Chip 1 -black 1 -▁Universe 1 -▁Truth 1 -▁$100,000 1 -▁supermarket 1 -▁conclusions 1 -gle 1 -▁lol 1 -▁observations 1 -▁Fool 1 -ora 1 -bury 1 -▁manufactured 1 -▁Bannon 1 -▁Peru 1 -▁breakthrough 1 -▁Books 1 -▁Messi 1 -▁sacred 1 -▁pursued 1 -▁Inn 1 -MO 1 -▁Emperor 1 -▁enjoyable 1 -stage 1 -ose 1 -▁fictional 1 -ET 1 -▁leagues 1 -▁shuttle 1 -▁scenarios 1 -▁Corey 1 -▁stove 1 -▁Hello 1 -▁NDP 1 -▁Kil 1 -▁suited 1 -▁fitted 1 -▁rented 1 -▁remembering 1 -▁crawl 1 -▁lion 1 -▁rude 1 -%) 1 -▁advisor 1 -▁pulls 1 -yu 1 -▁Zo 1 -aj 1 -uma 1 -Black 1 -▁Hor 1 -▁NSA 1 -▁recognise 1 -▁cooperate 1 -▁quotes 1 -▁Zuckerberg 1 -▁hack 1 -04 1 -power 1 -mes 1 -▁instruction 1 -▁Broad 1 -bert 1 -▁socialist 1 -▁wishing 1 -family 1 -?! 
1 -gas 1 -▁disputes 1 -▁submission 1 -▁Branch 1 -▁Spider 1 -▁invisible 1 -▁terribly 1 -ious 1 -▁UAE 1 -▁Fear 1 -▁Winston 1 -▁Erin 1 -▁assumptions 1 -▁Die 1 -dollar 1 -dia 1 -ash 1 -▁Reddit 1 -▁las 1 -rah 1 -▁IBM 1 -▁voiced 1 -log 1 -▁concentrated 1 -▁inherited 1 -▁Jam 1 -▁stretching 1 -▁Janet 1 -▁divorced 1 -▁rested 1 -▁intentionally 1 -▁encouragement 1 -government 1 -▁cryptocurrency 1 -▁verse 1 -▁absorb 1 -▁associates 1 -▁leaf 1 -▁psychology 1 -▁revolutionary 1 -▁merit 1 -▁minorities 1 -▁Rad 1 -▁dominance 1 -▁Performance 1 -digit 1 -▁jersey 1 -▁terrifying 1 -IP 1 -▁doll 1 -▁generic 1 -ae 1 -▁Ivan 1 -▁Danish 1 -▁Ultimately 1 -▁lid 1 -▁Fri 1 -▁honors 1 -▁endured 1 -▁trunk 1 -▁wrongdoing 1 -▁useless 1 -▁50,000 1 -▁observers 1 -▁kissing 1 -▁Papa 1 -▁vague 1 -something 1 -▁civic 1 -▁Solutions 1 -▁ca 1 -▁1979 1 -▁inviting 1 -▁curse 1 -▁routinely 1 -▁Eye 1 -▁choir 1 -▁utterly 1 -▁Update 1 -▁portal 1 -▁monument 1 -▁Ground 1 -▁tucked 1 -▁32- 1 -▁rational 1 -▁flank 1 -▁40% 1 -▁travelers 1 -▁Medal 1 -▁Ly 1 -▁tier 1 -▁Travel 1 -▁concerts 1 -▁nutrition 1 -▁Java 1 -commerce 1 -▁behave 1 -▁yarn 1 -▁(16 1 -▁cooling 1 -▁supplier 1 -▁Mommy 1 -▁smoothly 1 -▁tee 1 -hal 1 -▁unlimited 1 -ago 1 -▁baked 1 -▁Omar 1 -nie 1 -▁confront 1 -▁occurring 1 -▁immense 1 -▁shining 1 -Okay 1 -▁measuring 1 -▁demon 1 -▁Pictures 1 -▁1982 1 -▁$17 1 -BC 1 -▁boarding 1 -▁Duterte 1 -▁AS 1 -▁Panama 1 -▁stamp 1 -▁bounced 1 -nick 1 -▁Aviation 1 -▁Sebastian 1 -▁gods 1 -▁Fashion 1 -▁devil 1 -▁roller 1 -▁GE 1 -▁Judiciary 1 -▁4,000 1 -▁glorious 1 -▁renowned 1 -▁servants 1 -▁Often 1 -:) 1 -▁Investigation 1 -▁delicate 1 -▁Buck 1 -▁Zen 1 -▁Burke 1 -▁cigarettes 1 -▁pharmaceutical 1 -▁Maggie 1 -▁sanctuary 1 -▁masses 1 -dom 1 -▁ballistic 1 -▁Ji 1 -cast 1 -President 1 -▁mounting 1 -▁spun 1 -▁scorer 1 -▁Nissan 1 -▁fines 1 -▁owed 1 -▁refusal 1 -▁loads 1 -▁GO 1 -▁backpack 1 -▁wreck 1 -▁eventual 1 -▁SS 1 -▁5% 1 -▁beautifully 1 -▁touring 1 -Un 1 -▁bureau 1 -▁murders 1 -▁2004. 1 -▁bowling 1 -▁umbrella 1 -▁lounge 1 -▁specialized 1 -Mar 1 -▁Freeman 1 -aging 1 -▁attendees 1 -▁issuing 1 -▁goalie 1 -▁refuses 1 -▁injection 1 -▁Strip 1 -▁fold 1 -ser 1 -Please 1 -iff 1 -fold 1 -hen 1 -▁Nat 1 -▁DE 1 -▁Sharma 1 -▁ER 1 -▁tribes 1 -Maybe 1 -▁Rai 1 -▁MY 1 -▁160 1 -▁courtroom 1 -say 1 -▁annoyed 1 -hop 1 -▁winger 1 -▁Confederate 1 -▁terrain 1 -▁CR 1 -nell 1 -▁rallies 1 -▁laughs 1 -▁dynamics 1 -▁tournaments 1 -sic 1 -tax 1 -ets 1 -▁Hawks 1 -▁dull 1 -▁Opera 1 -▁invented 1 -AF 1 -▁faded 1 -▁assessed 1 -▁graduates 1 -▁Seeking 1 -▁licenses 1 -▁commodity 1 -HS 1 -▁Mus 1 -▁potato 1 -▁Perth 1 -▁poly 1 -▁Maple 1 -▁Sadly 1 -▁Prix 1 -▁objectives 1 -▁Rivers 1 -▁Belt 1 -▁Gate 1 -zen 1 -▁resumed 1 -▁Cody 1 -▁assassination 1 -▁tariff 1 -▁distinctive 1 -html 1 -▁finishes 1 -▁Estate 1 -selling 1 -▁advise 1 -▁acquisitions 1 -▁7. 
1 -▁contributor 1 -▁Originally 1 -▁Finn 1 -▁Actor 1 -▁shark 1 -▁VI 1 -kel 1 -▁analytics 1 -▁applicants 1 -pointers 1 -▁29- 1 -mie 1 -â 1 -▁Author 1 -▁misleading 1 -br 1 -▁recipient 1 -▁genius 1 -▁regulator 1 -▁exploit 1 -tur 1 -▁Robertson 1 -▁variable 1 -▁Mari 1 -▁forthcoming 1 -▁skeptical 1 -▁superstar 1 -▁Rail 1 -▁flipped 1 -▁Hopkins 1 -▁factories 1 -“ 1 -▁Meyer 1 -▁Cut 1 -▁aggression 1 -▁folded 1 -▁dessert 1 -▁Coming 1 -▁stir 1 -▁1) 1 -body 1 -▁nationalist 1 -love 1 -▁obsessed 1 -▁preparations 1 -▁$10,000 1 -▁Kha 1 -▁soda 1 -▁Joan 1 -▁Step 1 -▁administrators 1 -▁Naval 1 -▁Bat 1 -▁Darren 1 -▁Gilbert 1 -media 1 -kov 1 -▁clashes 1 -NC 1 -▁Superior 1 -▁competitor 1 -▁FCC 1 -▁Double 1 -ees 1 -▁alley 1 -▁Circle 1 -HA 1 -fan 1 -▁Thor 1 -▁assumption 1 -▁alleging 1 -▁alleges 1 -▁dealer 1 -▁Nice 1 -▁Switch 1 -▁Carrie 1 -▁restrict 1 -▁continually 1 -▁breed 1 -acre 1 -▁mu 1 -USA 1 -▁Edition 1 -▁supervisor 1 -IM 1 -▁notorious 1 -▁admitting 1 -▁solved 1 -▁sank 1 -▁rises 1 -lands 1 -▁theaters 1 -▁prizes 1 -▁consume 1 -▁shapes 1 -▁Adelaide 1 -▁Shawn 1 -CBS 1 -▁corresponding 1 -aged 1 -▁launches 1 -▁sh 1 -▁Donna 1 -▁Operations 1 -▁Sister 1 -▁1999, 1 -▁Erik 1 -iti 1 -▁slice 1 -▁connectivity 1 -▁oath 1 -▁Dee 1 -▁unsure 1 -sur 1 -▁dealers 1 -▁Otherwise 1 -▁algorithm 1 -ole 1 -▁regain 1 -Buy 1 -▁priests 1 -ova 1 -▁RT 1 -building 1 -▁consistency 1 -▁tenants 1 -TR 1 -▁chickens 1 -▁shrugged 1 -▁navy 1 -▁affiliated 1 -matic 1 -▁marched 1 -▁Wait 1 -▁particles 1 -▁certification 1 -▁chased 1 -▁Mt 1 -▁mercy 1 -Vi 1 -▁shout 1 -▁binding 1 -▁Roth 1 -▁lecture 1 -largest 1 -▁punished 1 -▁continuously 1 -▁HERE 1 -▁Range 1 -ante 1 -▁smallest 1 -▁tattoo 1 -▁vintage 1 -▁spacecraft 1 -▁Aside 1 -DP 1 -aya 1 -PD 1 -▁pillow 1 -▁surplus 1 -▁Jess 1 -▁Louise 1 -▁lucrative 1 -▁Wyoming 1 -▁Jail 1 -See 1 -▁reversed 1 -▁Shadow 1 -▁uncovered 1 -▁DH 1 -nik 1 -▁publications 1 -▁buddy 1 -▁hats 1 -▁noble 1 -▁kidding 1 -▁Title 1 -fin 1 -▁Close 1 -▁elect 1 -▁Critics 1 -people 1 -▁speeches 1 -▁IRS 1 -▁crashes 1 -▁inclusive 1 -▁boundary 1 -▁cement 1 -▁selfish 1 -▁subjected 1 -▁PlayStation 1 -▁1983 1 -▁snacks 1 -pan 1 -EA 1 -▁Opposition 1 -▁ruined 1 -▁risky 1 -▁watchdog 1 -▁Brand 1 -uka 1 -­ 1 -▁locks 1 -Get 1 -▁Shannon 1 -▁borrow 1 -▁spouse 1 -▁Nobel 1 -pointer 1 -▁Cro 1 -▁Hassan 1 -▁Face 1 -6, 1 -▁Davidson 1 -▁hi 1 -▁diplomats 1 -▁Walt 1 -▁flour 1 -▁$1.5 1 -▁Luther 1 -▁logistics 1 -▁freak 1 -▁characterized 1 -▁naming 1 -▁forensic 1 -▁fundraiser 1 -▁contracted 1 -▁Slam 1 -▁Independence 1 -held 1 -om 1 -▁Saskatchewan 1 -▁allocation 1 -▁confidential 1 -▁gravity 1 -▁princess 1 -▁reasonably 1 -▁feminist 1 -ox 1 -▁rehabilitation 1 -▁Apart 1 -▁bullets 1 -▁Fat 1 -▁Kerr 1 -▁recruitment 1 -▁Detective 1 -▁Growth 1 -horn 1 -▁McGregor 1 -▁emphasized 1 -▁Bulldogs 1 -BA 1 -▁2) 1 -watch 1 -▁ashamed 1 -▁owing 1 -▁furious 1 -▁Diane 1 -▁Gonzalez 1 -▁jealous 1 -▁Med 1 -▁tasked 1 -▁Crawford 1 -▁assembled 1 -▁tipped 1 -▁Ve 1 -mu 1 -▁questionable 1 -▁Hundreds 1 -city 1 -▁shiny 1 -▁varying 1 -▁newer 1 -▁Horn 1 -▁LOL 1 -▁departed 1 -▁broadly 1 -▁Ask 1 -▁float 1 -SD 1 -▁upward 1 -▁Wilder 1 -▁neo 1 -▁quo 1 -▁Emmy 1 -▁filings 1 -▁midfield 1 -▁Save 1 -▁brains 1 -▁coached 1 -▁backdrop 1 -▁debates 1 -▁Garrett 1 -▁boycott 1 -▁recordings 1 -▁interfere 1 -▁libraries 1 -▁costumes 1 -etti 1 -fully 1 -vy 1 -▁Manuel 1 -▁$1,000 1 -▁punched 1 -▁jets 1 -Ex 1 -▁Briefs 1 -▁underwear 1 -▁unsuccessful 1 -▁Heaven 1 -▁meter 1 -▁outdoors 1 -▁disrupt 1 -▁olive 1 -▁convictions 1 -▁Gor 1 -ava 1 -▁rehab 1 -▁jungle 1 -▁Said 1 -▁verbal 1 -▁Tai 1 -▁Current 1 -▁practiced 1 -WA 1 -▁hips 1 -▁spike 1 
-▁niece 1 -▁exterior 1 -▁fatally 1 -▁Brook 1 -▁settlements 1 -▁bush 1 -▁negotiated 1 -xi 1 -threatening 1 -▁nominations 1 -▁Commercial 1 -▁Biden 1 -▁Pine 1 -▁unchanged 1 -▁Study 1 -RT 1 -Big 1 -▁beard 1 -▁civilization 1 -uff 1 -▁contention 1 -▁Tucker 1 -▁capped 1 -▁Initiative 1 -▁Ny 1 -ben 1 -▁Patterson 1 -▁Pokemon 1 -Like 1 -▁Monica 1 -▁TR 1 -▁sympathy 1 -▁conceded 1 -pal 1 -▁Number 1 -▁Twin 1 -▁Pirates 1 -▁(12 1 -▁Marathon 1 -▁chunk 1 -▁enterprises 1 -▁Der 1 -▁CLOSE 1 -bb 1 -MB 1 -ef 1 -▁labeled 1 -▁35- 1 -▁messy 1 -▁NASCAR 1 -▁prosperity 1 -▁recruited 1 -kh 1 -▁Ducks 1 -▁newborn 1 -▁stressful 1 -▁calculations 1 -▁Molly 1 -▁Castro 1 -▁Dun 1 -▁Syracuse 1 -▁Mattis 1 -▁Mini 1 -▁turnovers 1 -▁Lion 1 -▁quicker 1 -▁sewing 1 -▁investigator 1 -▁broadcaster 1 -▁emailed 1 -▁Kurt 1 -▁bore 1 -First 1 -▁Conservation 1 -▁integrate 1 -▁1,500 1 -iva 1 -▁functionality 1 -▁350 1 -▁Pot 1 -▁Orthodox 1 -gg 1 -06 1 -án 1 -▁chatting 1 -Bo 1 -▁softly 1 -▁affidavit 1 -▁́ 1 -▁magnetic 1 -▁clause 1 -gov 1 -▁towers 1 -▁witch 1 -▁Thousands 1 -▁evolve 1 -▁1-1 1 -▁swung 1 -▁correspondent 1 -▁Grammy 1 -▁Cy 1 -▁1967 1 -▁intake 1 -▁explored 1 -300 1 -More 1 -ological 1 -▁acute 1 -CL 1 -nk 1 -▁Queens 1 -hy 1 -▁1.3 1 -▁delegates 1 -▁sunset 1 -ander 1 -▁dental 1 -match 1 -▁Reform 1 -▁leaks 1 -▁Hugh 1 -▁pistol 1 -▁Wu 1 -▁cluster 1 -▁caravan 1 -▁effectiveness 1 -ants 1 -▁payout 1 -▁% 1 -▁attitudes 1 -▁upgrades 1 -▁strengthening 1 -▁pour 1 -▁Latest 1 -▁shelters 1 -▁Wo 1 -▁imminent 1 -▁0.4 1 -▁Rhode 1 -▁lawmaker 1 -▁Schumer 1 -▁Elder 1 -▁gown 1 -▁jack 1 -...” 1 -▁Monte 1 -▁Planet 1 -Com 1 -▁Formula 1 -mis 1 -▁Pedro 1 -▁Texans 1 -▁Pretty 1 -▁commands 1 -▁traits 1 -▁inhabitants 1 -▁unlock 1 -▁von 1 -▁adjustments 1 -▁swap 1 -▁Gross 1 -▁Istanbul 1 -specific 1 -▁sofa 1 -▁MMA 1 -▁stall 1 -▁Ferrari 1 -▁Moving 1 -▁healthier 1 -▁Lieutenant 1 -president 1 -ili 1 -mus 1 -▁overdose 1 -▁mechanisms 1 -▁Roll 1 -rise 1 -▁Michel 1 -▁prop 1 -▁Rest 1 -▁boasts 1 -▁topping 1 -▁Lightning 1 -▁Hold 1 -▁wickets 1 -▁Scientists 1 -▁liberty 1 -▁Yale 1 -▁underwent 1 -▁cafe 1 -▁documentation 1 -▁relying 1 -▁whatsoever 1 -sta 1 -▁banning 1 -▁2020, 1 -▁motive 1 -▁regulate 1 -▁undergoing 1 -▁undoubtedly 1 -▁endure 1 -▁30,000 1 -▁Ghost 1 -▁faint 1 -▁personalities 1 -▁posing 1 -▁Walking 1 -,000 1 -▁Narendra 1 -▁WE 1 -iki 1 -▁nonetheless 1 -▁profiles 1 -nor 1 -eb 1 -▁wives 1 -▁positively 1 -▁futures 1 -▁animation 1 -▁Superintendent 1 -ō 1 -changing 1 -▁Sim 1 -▁Joyce 1 -▁Cre 1 -▁header 1 -▁committees 1 -▁necessity 1 -▁housed 1 -▁cares 1 -▁Sugar 1 -▁Germans 1 -▁shattered 1 -ico 1 -▁Holiday 1 -agh 1 -eta 1 -▁thriller 1 -oz 1 -equity 1 -▁Gardner 1 -▁chapters 1 -▁freed 1 -▁darker 1 -▁LSU 1 -osa 1 -▁sack 1 -▁intervene 1 -▁packs 1 -▁Rand 1 -▁pee 1 -02 1 -▁Juventus 1 -▁Leslie 1 -▁broadband 1 -▁fried 1 -▁relies 1 -square 1 -▁nails 1 -▁surfaced 1 -▁Fer 1 -▁broker 1 -▁Clemson 1 -▁Tro 1 -normal 1 -▁Ay 1 -▁outing 1 -bri 1 -▁encounters 1 -▁determining 1 -▁disc 1 -▁centered 1 -\ 1 -TH 1 -▁Latino 1 -VA 1 -dra 1 -▁Mugabe 1 -West 1 -although 1 -▁statistical 1 -▁turnover 1 -eyed 1 -▁societies 1 -▁hubby 1 -▁Hy 1 -▁refuge 1 -▁famously 1 -▁Lost 1 -▁straw 1 -▁Berry 1 -Most 1 -▁$300 1 -▁Sachs 1 -case 1 -may 1 -▁martial 1 -▁exceed 1 -▁nominees 1 -▁LI 1 -▁Chapel 1 -▁prescribed 1 -▁assure 1 -▁aerial 1 -▁examining 1 -▁Frederick 1 -▁8:30 1 -▁hid 1 -▁Davies 1 -▁intriguing 1 -▁Mich 1 -▁terrific 1 -▁Timothy 1 -▁Standing 1 -▁finest 1 -▁ranch 1 -▁lieutenant 1 -▁skating 1 -▁imprisonment 1 -▁BP 1 -▁Philippine 1 -▁Betty 1 -▁translate 1 -▁judgement 1 -▁25% 1 -ā 1 -aro 1 -▁Prevention 1 -▁publicity 1 
-six 1 -▁Pri 1 -▁Analysts 1 -▁Kid 1 -▁recipients 1 -▁barred 1 -tle 1 -▁Train 1 -▁hugged 1 -▁Sel 1 -▁decisive 1 -▁Parks 1 -▁networking 1 -self 1 -▁neuro 1 -ato 1 -▁Ala 1 -▁downward 1 -▁sp 1 -UN 1 -▁Haley 1 -▁Match 1 -▁API 1 -▁Plant 1 -FL 1 -▁1976 1 -▁Colonel 1 -▁composer 1 -▁timely 1 -▁neighbouring 1 -▁lethal 1 -▁precedent 1 -▁sidelines 1 -▁comp 1 -▁whip 1 -▁distribute 1 -▁problematic 1 -▁admire 1 -▁enthusiastic 1 -bach 1 -▁Austrian 1 -▁Holocaust 1 -▁Bills 1 -▁Pride 1 -▁prospective 1 -▁entrepreneur 1 -▁Petroleum 1 -ill 1 -▁dentist 1 -▁reacted 1 -▁1981 1 -▁Breaking 1 -▁LOVE 1 -▁tapped 1 -▁Ol 1 -word 1 -▁automation 1 -makers 1 -▁Moses 1 -▁mono 1 -▁Geneva 1 -▁groceries 1 -▁origins 1 -▁activated 1 -▁dividends 1 -▁1968 1 -▁superhero 1 -▁RAM 1 -▁contender 1 -▁prone 1 -▁eliminating 1 -▁Wagner 1 -▁Wendy 1 -▁Croatia 1 -▁homemade 1 -TC 1 -▁Stage 1 -lay 1 -▁PGA 1 -94 1 -▁secular 1 -▁preserved 1 -▁tales 1 -▁branded 1 -▁clan 1 -▁dire 1 -▁meditation 1 -▁5:30 1 -▁servant 1 -▁cough 1 -▁loading 1 -▁holder 1 -▁inevitably 1 -▁1% 1 -▁sunlight 1 -▁dismiss 1 -▁nerves 1 -▁confessed 1 -▁figuring 1 -▁Wildcats 1 -ically 1 -▁destructive 1 -▁readily 1 -IA 1 -▁mines 1 -ä 1 -▁Million 1 -av 1 -▁Telegraph 1 -▁fragile 1 -8, 1 -▁Need 1 -▁Rockets 1 -▁sovereignty 1 -▁litter 1 -▁Tara 1 -▁$60 1 -▁CM 1 -▁Clippers 1 -▁visibility 1 -race 1 -och 1 -▁picnic 1 -▁indicted 1 -▁hailed 1 -▁Lots 1 -kind 1 -▁st 1 -▁performer 1 -▁feast 1 -▁alright 1 -▁cellphone 1 -▁Hans 1 -▁ordinance 1 -▁Indonesian 1 -▁Aid 1 -▁Safe 1 -▁Irving 1 -wear 1 -▁destinations 1 -▁mock 1 -inger 1 -▁tablets 1 -▁Tur 1 -▁Healthcare 1 -▁ambition 1 -▁Bolton 1 -▁loses 1 -leg 1 -▁machinery 1 -▁bless 1 -▁baseman 1 -▁aggressively 1 -▁Miguel 1 -▁1.1 1 -▁"" 1 -aga 1 -pop 1 -▁merchandise 1 -▁Trinity 1 -▁Join 1 -▁wipe 1 -▁stabbing 1 -▁Gardens 1 -▁predictions 1 -▁mount 1 -▁demographic 1 -▁Clearly 1 -▁investigative 1 -200 1 -▁Lindsey 1 -▁physicians 1 -▁1974 1 -▁clay 1 -sion 1 -▁Qualcomm 1 -▁neighbour 1 -▁frames 1 -▁Pastor 1 -▁Statistics 1 -og 1 -▁survivor 1 -▁airplane 1 -funded 1 -ching 1 -▁arrange 1 -▁curiosity 1 -▁merged 1 -▁lump 1 -sharing 1 -▁speeding 1 -ras 1 -▁Congo 1 -▁Majority 1 -▁wholesale 1 -▁signatures 1 -til 1 -▁NYC 1 -▁strategist 1 -▁Somehow 1 -▁900 1 -▁breeding 1 -Last 1 -▁cliff 1 -mates 1 -NY 1 -▁Break 1 -▁Stra 1 -▁sights 1 -▁Were 1 -▁tan 1 -▁Yang 1 -▁aggregate 1 -▁Rocky 1 -▁Preston 1 -▁Level 1 -▁Debbie 1 -uch 1 -▁unbelievable 1 -▁portions 1 -▁workshops 1 -▁linking 1 -▁Allah 1 -workers 1 -▁stark 1 -▁Wine 1 -▁extends 1 -▁Mann 1 -▁unbeaten 1 -▁Mountains 1 -sburg 1 -▁widow 1 -▁burial 1 -▁Cost 1 -March 1 -▁blogger 1 -mph 1 -▁compromised 1 -▁thoughtful 1 -▁planting 1 -▁Wimbledon 1 -▁Parish 1 -▁insisting 1 -ew 1 -OL 1 -▁Funeral 1 -▁prototype 1 -▁Terms 1 -▁Everton 1 -▁dude 1 -brook 1 -▁trailing 1 -▁narrowly 1 -▁unlawful 1 -▁bang 1 -▁Worth 1 -▁Mohamed 1 -7, 1 -▁(15 1 -▁endorsement 1 -▁Learn 1 -▁poop 1 -▁gifted 1 -▁designation 1 -▁straightforward 1 -ute 1 -▁Pur 1 -illo 1 -▁Duchess 1 -▁Khashoggi 1 -CH 1 -▁Phase 1 -▁punt 1 -▁podium 1 -▁bosses 1 -▁Buddhist 1 -open 1 -▁Liu 1 -▁1.4 1 -▁tactical 1 -met 1 -▁Arabic 1 -▁violate 1 -▁proudly 1 -▁navigation 1 -▁pics 1 -▁Census 1 -unconstitutional 1 -▁BB 1 -▁Mir 1 -▁satisfy 1 -▁Soul 1 -▁Cardinal 1 -▁neglect 1 -shan 1 -▁cleaner 1 -.:) 1 -length 1 -▁creek 1 -▁(20 1 -▁Mine 1 -▁discharge 1 -▁cartoon 1 -▁pine 1 -▁Consider 1 -ug 1 -▁Fleet 1 -▁ambitions 1 -▁campaigning 1 -▁hikes 1 -▁Celtic 1 -▁dash 1 -cer 1 -▁Blog 1 -EP 1 -mun 1 -▁spiral 1 -▁receivers 1 -▁stretches 1 -▁takeover 1 -▁Isn 1 -▁Ahmad 1 -▁Advisory 1 -▁capitalism 1 -▁superintendent 1 
-▁imposing 1 -▁protesting 1 -▁predominantly 1 -▁attackers 1 -▁proposition 1 -▁Nigerians 1 -▁der 1 -▁obtaining 1 -▁transplant 1 -PR 1 -▁ju 1 -leading 1 -▁mommy 1 -▁applicable 1 -▁silk 1 -▁regulated 1 -▁Metal 1 -▁charming 1 -▁sailing 1 -▁Past 1 -▁1972 1 -▁Fellow 1 -▁enrollment 1 -▁Sai 1 -▁PhD 1 -▁mold 1 -RP 1 -▁McG 1 -▁sue 1 -▁Myers 1 -▁analyze 1 -▁Phillip 1 -band 1 -ui 1 -▁Wings 1 -▁pal 1 -▁organs 1 -▁parameters 1 -▁Lam 1 -▁divine 1 -▁outreach 1 -▁Ur 1 -▁Charleston 1 -▁WR 1 -▁Trek 1 -hall 1 -▁Insider 1 -▁Hence 1 -▁tying 1 -▁carved 1 -▁slap 1 -ters 1 -▁Palestine 1 -CE 1 -▁Sur 1 -aries 1 -▁Form 1 -▁Lok 1 -▁surveys 1 -▁laps 1 -▁Fin 1 -▁125 1 -▁Bet 1 -ale 1 -oon 1 -▁beverage 1 -▁CBC 1 -help 1 -▁Broadcasting 1 -service 1 -▁rewarded 1 -▁trailed 1 -▁0.7 1 -▁Jenner 1 -▁Bart 1 -▁doorway 1 -▁Islamist 1 -▁charities 1 -▁NJ 1 -▁£3 1 -▁souls 1 -▁pipes 1 -▁Search 1 -ered 1 -tch 1 -ique 1 -mine 1 -jan 1 -▁1973 1 -▁thereafter 1 -▁Large 1 -▁Anaheim 1 -▁(0 1 -▁Everybody 1 -▁Astros 1 -▁slam 1 -around 1 -▁dragging 1 -working 1 -▁condemn 1 -▁accordingly 1 -▁clinics 1 -▁utilize 1 -▁foreigners 1 -▁Az 1 -▁congregation 1 -eth 1 -▁revival 1 -▁fatigue 1 -woman 1 -▁controller 1 -▁deliberate 1 -IE 1 -Her 1 -▁Solar 1 -▁rivalry 1 -store 1 -▁rocked 1 -▁100- 1 -▁successive 1 -▁Task 1 -▁despair 1 -▁begging 1 -▁affection 1 -▁Pick 1 -EM 1 -▁Au 1 -▁Color 1 -▁bullpen 1 -▁township 1 -▁Leafs 1 -▁fuels 1 -ual 1 -▁crisp 1 -hot 1 -▁Beer 1 -▁pressures 1 -▁Cathedral 1 -▁Midwest 1 -▁aesthetic 1 -▁wrapping 1 -▁Kenneth 1 -▁insurers 1 -yr 1 -▁Bengal 1 -▁Ranch 1 -▁distraction 1 -▁Shakespeare 1 -▁reckless 1 -▁Nan 1 -road 1 -▁Samantha 1 -▁deter 1 -▁desires 1 -▁guitarist 1 -▁blasted 1 -▁identities 1 -tine 1 -▁guessed 1 -▁Hayes 1 -▁dreamed 1 -▁teamed 1 -RI 1 -ifying 1 -▁unnamed 1 -▁Uganda 1 -▁Forbes 1 -ground 1 -▁MC 1 -▁Affordable 1 -▁Natalie 1 -▁profitability 1 -area 1 -▁Athens 1 -▁clarify 1 -▁Comp 1 -▁Parents 1 -▁dementia 1 -▁designing 1 -▁lace 1 -▁Extra 1 -▁Roosevelt 1 -▁bacon 1 -▁efficiently 1 -▁wildly 1 -▁midterm 1 -▁0.6 1 -▁judged 1 -▁Already 1 -phone 1 -▁conditioning 1 -▁eldest 1 -▁Stories 1 -▁persuade 1 -▁Return 1 -▁Alexis 1 -▁ob 1 -ore 1 -▁responders 1 -▁Content 1 -roll 1 -▁sting 1 -▁calculate 1 -▁lamp 1 -▁ATR 1 -▁recommends 1 -▁ACC 1 -▁APC 1 -▁(11 1 -▁creepy 1 -▁Gill 1 -▁Ant 1 -▁remembers 1 -▁flame 1 -▁gentleman 1 -▁norm 1 -▁EBITDA 1 -force 1 -▁1969 1 -▁Turn 1 -▁Turnbull 1 -▁exceptions 1 -double 1 -▁fixture 1 -▁Beauty 1 -▁Name 1 -▁antibiotics 1 -▁ni 1 -test 1 -▁Initially 1 -value 1 -Did 1 -▁elevation 1 -▁Trent 1 -▁IPO 1 -Love 1 -▁Chloe 1 -▁360 1 -▁waving 1 -▁juvenile 1 -▁Basically 1 -▁Guinea 1 -▁Seeing 1 -▁complement 1 -rt 1 -▁distinguished 1 -▁Santiago 1 -▁benefited 1 -▁Jet 1 -▁drift 1 -▁Gavin 1 -▁punish 1 -▁Municipal 1 -▁restructuring 1 -▁freelance 1 -▁elephant 1 -▁adjustment 1 -▁postponed 1 -▁posters 1 -▁proximity 1 -▁fairy 1 -anga 1 -▁LGBTQ 1 -ii 1 -▁threatens 1 -track 1 -▁envelope 1 -▁incorporate 1 -▁Deutsche 1 -▁kindergarten 1 -▁supervision 1 -▁builds 1 -▁Bayern 1 -ologist 1 -▁1998, 1 -mb 1 -▁quantity 1 -▁Cindy 1 -▁Hol 1 -▁Sharif 1 -▁tastes 1 -▁associations 1 -lot 1 -▁_ 1 -check 1 -bag 1 -▁monsters 1 -▁jurors 1 -▁initiate 1 -▁2001. 1 -▁moms 1 -▁Primary 1 -▁Harold 1 -America 1 -▁Rosen 1 -▁mineral 1 -▁Charlottesville 1 -▁Patricia 1 -▁hospitalized 1 -▁Eden 1 -▁Motors 1 -▁withdrawn 1 -▁Bud 1 -NS 1 -▁2003. 
1 -▁Lamb 1 -▁pushes 1 -][ 1 -▁Abby 1 -▁Iceland 1 -▁Vision 1 -▁Meeting 1 -▁Few 1 -▁CC 1 -wife 1 -▁Subscribe 1 -▁coordination 1 -▁Advanced 1 -▁highlighting 1 -▁tornado 1 -▁gospel 1 -▁spit 1 -▁1975 1 -look 1 -▁configuration 1 -▁1.6 1 -▁HP 1 -▁Software 1 -▁satisfying 1 -▁bidding 1 -▁Steam 1 -▁Paradise 1 -▁Springfield 1 -▁tomatoes 1 -▁Learning 1 -bee 1 -▁wagon 1 -▁Plaza 1 -▁Basketball 1 -▁transcript 1 -▁Zero 1 -▁1978 1 -▁31- 1 -▁Steele 1 -▁1900 1 -▁9:30 1 -▁deliveries 1 -▁notification 1 -▁Partnership 1 -nar 1 -▁unwanted 1 -▁municipalities 1 -▁adverse 1 -▁spells 1 -▁variations 1 -▁analyzed 1 -▁inspector 1 -▁Rohingya 1 -▁doctrine 1 -▁Coffee 1 -‐ 1 -▁Monroe 1 -▁1977 1 -▁allocated 1 -▁judging 1 -▁reasoning 1 -oriented 1 -▁Kemp 1 -▁tumor 1 -▁pussy 1 -▁Vienna 1 -▁hydrogen 1 -▁Sabha 1 -▁livestock 1 -▁transfers 1 -▁Behind 1 -Neill 1 -▁mechanics 1 -▁burns 1 -▁portable 1 -MR 1 -▁surroundings 1 -▁turmoil 1 -ush 1 -▁mortality 1 -▁Enjoy 1 -▁polite 1 -awa 1 -wal 1 -▁semifinals 1 -▁NOW 1 -▁administered 1 -▁battled 1 -▁rocky 1 -▁Macon 1 -▁overturned 1 -▁polar 1 -▁flaws 1 -▁prompt 1 -▁dense 1 -▁promotional 1 -0.00 1 -would 1 -▁Mohammad 1 -▁Barrett 1 -▁wandering 1 -▁Sherman 1 -▁rains 1 -▁accompanying 1 -ERS 1 -▁patches 1 -▁socially 1 -▁interceptions 1 -▁expedition 1 -▁festive 1 -▁excluded 1 -control 1 -▁absurd 1 -▁veto 1 -▁sacked 1 -▁vulnerability 1 -▁outfits 1 -▁ft 1 -▁codes 1 -shirts 1 -▁preferences 1 -▁rows 1 -▁rebuilding 1 -▁footsteps 1 -▁Ker 1 -▁jar 1 -▁grim 1 -▁booking 1 -▁hydro 1 -pt 1 -DO 1 -▁riot 1 -▁publishers 1 -▁agreeing 1 -▁Chrome 1 -iel 1 -▁NFC 1 -▁poisoning 1 -▁Oval 1 -▁Pixel 1 -▁knitting 1 -▁adopting 1 -berry 1 -▁Lyon 1 -▁horn 1 -▁Federer 1 -EU 1 -▁favored 1 -▁constitute 1 -▁Chang 1 -▁canvas 1 -▁Yellow 1 -group 1 -▁institute 1 -▁editors 1 -▁Northeast 1 -▁Levi 1 -▁manufacture 1 -▁Body 1 -▁Fantasy 1 -▁dial 1 -▁yuan 1 -▁elegant 1 -Ar 1 -▁Computer 1 -▁backward 1 -▁suspend 1 -pet 1 -▁robbed 1 -▁brushed 1 -▁Pokémon 1 -▁wander 1 -▁downhill 1 -▁abruptly 1 -▁Appeals 1 -▁HA 1 -▁stranded 1 -Once 1 -▁hey 1 -▁pornography 1 -▁vegetable 1 -▁borrowed 1 -▁Month 1 -▁$35 1 -▁Whenever 1 -▁Judy 1 -▁expired 1 -▁certainty 1 -▁Flint 1 -▁Romania 1 -▁rubbed 1 -ón 1 -▁PAC 1 -▁OPEC 1 -MAN 1 -▁rumours 1 -▁Klein 1 -▁tab 1 -▁Snapchat 1 -Those 1 -▁Touch 1 -▁nightclub 1 -▁Lindsay 1 -▁guarantees 1 -▁enacted 1 -▁grandma 1 -ego 1 -▁wandered 1 -▁Caleb 1 -▁Methodist 1 -▁Premium 1 -▁Hood 1 -uck 1 -▁swift 1 -▁0.8 1 -▁Gr 1 -still 1 -▁tenth 1 -▁clues 1 -▁Hogan 1 -▁gunshot 1 -od 1 -▁variation 1 -▁originated 1 -gh 1 -▁Hub 1 -▁consuming 1 -both 1 -▁obsession 1 -▁Singer 1 -▁unanimously 1 -South 1 -▁Bengaluru 1 -▁Kylie 1 -▁arise 1 -▁raids 1 -▁Surrey 1 -▁colony 1 -▁Malaysian 1 -▁ITV 1 -▁10:30 1 -▁Roma 1 -▁festivals 1 --35 1 -▁wanna 1 -▁scholar 1 -green 1 -▁Volkswagen 1 -▁sustainability 1 -▁exploitation 1 -▁circular 1 -▁sculpture 1 -▁endangered 1 -▁JavaScript 1 -▁Knicks 1 -sign 1 -Right 1 -▁mixing 1 -▁pretending 1 -pping 1 -▁Cool 1 -▁stationed 1 -▁misdemeanor 1 -▁uniforms 1 -▁Ole 1 -▁abstract 1 -▁Boss 1 -▁malicious 1 -▁THIS 1 -▁Cardiff 1 -50,000 1 -▁Dor 1 -▁Quick 1 -▁outline 1 -avi 1 -▁steals 1 -▁WhatsApp 1 -▁AB 1 -▁inadequate 1 -▁troubles 1 -▁sponsorship 1 -▁candles 1 -ged 1 -▁Rat 1 -▁handles 1 -▁Brendan 1 -▁connects 1 -▁quantum 1 -▁Running 1 -▁Bol 1 -▁nonsense 1 -▁Job 1 -▁Focus 1 -▁aforementioned 1 -▁catcher 1 -▁exceeded 1 -▁warrants 1 -▁Rubio 1 -▁aligned 1 -▁diapers 1 -▁elimination 1 -▁commerce 1 -▁strings 1 -▁hugely 1 -▁Morocco 1 -▁Income 1 -▁Sex 1 -▁sovereign 1 -▁Petersburg 1 -▁Shell 1 -TE 1 -▁Zu 1 -▁Kill 1 -dle 1 -▁Host 1 -▁integral 1 
-▁tally 1 -tus 1 -▁stigma 1 -▁coordinated 1 -▁concussion 1 -ify 1 -▁charitable 1 -▁seating 1 -▁wildfire 1 -▁vocals 1 -▁Mile 1 -stra 1 -▁pumpkin 1 -park 1 -▁broadcasting 1 -▁Hon 1 -▁lenders 1 -▁Lowe 1 -las 1 -▁kidnapped 1 -▁marker 1 -▁demonstrates 1 -▁interception 1 -▁extremist 1 -▁CAGR 1 -▁partnered 1 -▁proves 1 -▁contests 1 -▁cereal 1 -▁Speed 1 -cing 1 -▁adjusting 1 -▁mentality 1 -▁gameplay 1 -▁texture 1 -▁sweater 1 -▁diplomat 1 -▁symbols 1 -owski 1 -▁sexuality 1 -▁Charter 1 -▁Sad 1 -AA 1 -▁halted 1 -▁attracting 1 -▁creators 1 -▁beats 1 -food 1 -▁variant 1 -▁Brigade 1 -▁Fight 1 -▁Mae 1 -▁expressions 1 -▁salmon 1 -▁reservations 1 -▁Listen 1 -▁balloon 1 -▁premise 1 -gra 1 -▁snaps 1 -▁wary 1 -▁chorus 1 -▁Property 1 -▁Mega 1 -▁Mai 1 -▁assaulting 1 -GO 1 -▁footprint 1 -▁tackling 1 -▁dismissal 1 -talk 1 -SH 1 -cc 1 -▁keeper 1 -▁looming 1 -▁extensively 1 -▁80% 1 -MM 1 -▁miners 1 -▁blankets 1 -▁Neu 1 -▁Thrones 1 -▁Franco 1 -▁potty 1 -▁biology 1 -▁classmates 1 -growing 1 -sar 1 -▁90% 1 -▁jumper 1 -** 1 -▁assisting 1 -800 1 -▁Tina 1 -▁helicopters 1 -seeded 1 -▁TA 1 -▁balcony 1 -▁underwater 1 -▁Ethiopia 1 -▁superb 1 -▁easiest 1 -▁Planned 1 -▁Lori 1 -▁bachelor 1 -▁conscience 1 -▁revelations 1 -▁Marketing 1 -did 1 -▁strive 1 -▁min 1 -▁specialists 1 -wheel 1 -▁govern 1 -▁defining 1 -▁apples 1 -▁Jun 1 -▁danced 1 -▁Riverside 1 -▁seekers 1 -FM 1 -▁60- 1 -▁beers 1 -▁1.8 1 -▁unrelated 1 -▁obesity 1 -lly 1 -▁whistle 1 -2.5 1 -▁LOS 1 -▁Dollar 1 -▁Somalia 1 -▁ANC 1 -tas 1 -▁blaming 1 -▁edged 1 -focused 1 -▁payroll 1 -▁sandwiches 1 -▁Sid 1 -▁weed 1 -▁Mara 1 -▁wonders 1 -▁mentions 1 -▁planets 1 -▁Embassy 1 -▁impeachment 1 -ax 1 -und 1 -▁Mil 1 -▁discomfort 1 -▁precision 1 -ites 1 -▁learnt 1 -▁tactic 1 -▁possessions 1 -▁slapped 1 -▁shotgun 1 -▁imp 1 -last 1 -▁disgusting 1 -▁midway 1 -gin 1 -▁Penny 1 -▁burglary 1 -▁Sta 1 -▁[1] 1 -▁prep 1 -▁studios 1 -▁NEWS 1 -fest 1 -▁module 1 -yer 1 -source 1 -▁swinging 1 -▁inmate 1 -▁floods 1 -▁bishop 1 -▁Dell 1 -▁warfare 1 -▁Noble 1 -▁Jin 1 -▁Fitzgerald 1 -▁Parenthood 1 -▁dioxide 1 -▁Raptors 1 -▁uncommon 1 -▁minimize 1 -uf 1 -▁1951 1 -chu 1 -▁Lives 1 -Ba 1 -▁mama 1 -▁idiot 1 -▁Pierce 1 -▁15% 1 -▁handgun 1 -▁Select 1 -▁Jays 1 -igan 1 -▁Principal 1 -▁renewal 1 -▁eternal 1 -▁z 1 -▁Pack 1 -▁pissed 1 -▁discovering 1 -deal 1 -▁metric 1 -oy 1 -▁columns 1 -▁com 1 -ska 1 -▁lbs 1 -▁monitored 1 -AN 1 -▁Traffic 1 -▁lemon 1 -▁randomly 1 -Men 1 -▁Resort 1 -▁brokerage 1 -▁burnt 1 -▁Beverly 1 -.15 1 -▁Kingston 1 -▁requesting 1 -▁Actress 1 -bone 1 -▁thunder 1 -▁Fifth 1 -▁retaliation 1 -▁Aurora 1 -▁helm 1 -boat 1 -▁Auckland 1 -▁marble 1 -▁Moss 1 -▁daylight 1 -▁solving 1 -▁glowing 1 -pay 1 -chen 1 -▁ditch 1 -▁medieval 1 -▁probability 1 -▁Apollo 1 -▁Date 1 -▁ultrasound 1 -▁renovation 1 -▁sleeve 1 -▁petrol 1 -▁cheering 1 -tti 1 -▁sensitivity 1 -▁Ter 1 -▁pact 1 -▁successes 1 -▁Forward 1 -▁>> 1 -▁Falcon 1 -▁realization 1 -▁(1) 1 -ering 1 -▁Il 1 -▁troubling 1 -▁Desert 1 -▁Fresh 1 -nch 1 -▁mound 1 -bat 1 -SL 1 -▁urgency 1 -▁Prison 1 -▁uploaded 1 -▁[...] 
1 -▁THAT 1 -▁Plymouth 1 -▁Williamson 1 -▁illnesses 1 -▁quarterbacks 1 -▁lure 1 -▁Newman 1 -▁downside 1 -Since 1 --22 1 -bot 1 -▁tended 1 -▁freight 1 -▁foam 1 -Am 1 -▁lightweight 1 -▁gum 1 -▁raining 1 -▁coincidence 1 -▁informal 1 -▁squeezed 1 -▁staffers 1 -▁Corporate 1 -▁Kathy 1 -▁ideological 1 -▁Mun 1 -▁punk 1 -▁Transit 1 -▁170 1 -▁CDC 1 -▁comprised 1 -▁skinny 1 -▁homeowners 1 -▁spa 1 -▁Record 1 -ls 1 -▁yell 1 -▁knit 1 -▁conjunction 1 -▁cm 1 -▁borough 1 -▁minors 1 -▁Um 1 -▁reduces 1 -▁ounce 1 -▁Stand 1 -▁Hilton 1 -▁stellar 1 -▁compatible 1 -▁Allison 1 -▁Technical 1 -▁medicines 1 -▁passive 1 -▁Movie 1 -▁peanut 1 -▁chaotic 1 -▁thermal 1 -Over 1 -▁villain 1 -▁tolerate 1 -nn 1 -▁Download 1 -▁laundering 1 -▁Progressive 1 -▁dislike 1 -▁rainbow 1 -▁pat 1 -▁succession 1 -wer 1 -▁reminding 1 -▁1: 1 -▁delete 1 -▁Dur 1 -▁vampire 1 -▁Durham 1 -▁dough 1 -Li 1 -▁convey 1 -▁Caller 1 -▁guessing 1 -SON 1 -▁NE 1 -▁undergraduate 1 -▁dancers 1 -▁cum 1 -▁startups 1 -eng 1 -▁slope 1 -▁Tracy 1 -▁sized 1 -▁macro 1 -▁replay 1 -▁halls 1 -▁mod 1 -▁Mia 1 -▁refreshing 1 -▁Damascus 1 -▁Gomez 1 -▁Machine 1 -▁Hull 1 -sp 1 -▁backgrounds 1 -▁temper 1 -ree 1 -▁overlooked 1 -▁Norfolk 1 -▁cries 1 -,400 1 -▁orchestra 1 -▁amendments 1 -▁plunged 1 -▁PT 1 -mp 1 -▁Clinic 1 -▁Berg 1 -▁forgiveness 1 -▁4.5 1 -▁homeland 1 -ked 1 -▁rim 1 -▁Experts 1 -▁static 1 -ics 1 -▁confirming 1 -▁concealed 1 -▁compelled 1 -▁dick 1 -lived 1 -▁subscriber 1 -▁Continue 1 -▁warrior 1 -▁combining 1 -mas 1 -▁Vietnamese 1 -▁Neal 1 -▁niche 1 -▁Vince 1 -▁proteins 1 -▁measurements 1 -▁whale 1 -▁progression 1 -▁classification 1 -▁rooted 1 -▁WikiLeaks 1 -▁diameter 1 -▁capturing 1 -▁0.9 1 -▁refund 1 -▁1961 1 -NT 1 -▁sour 1 -▁Pam 1 -▁fueled 1 -▁Gul 1 -▁Counsel 1 -atory 1 -▁Direct 1 -eva 1 -uz 1 -▁inclined 1 -▁undertaken 1 -late 1 -edge 1 -▁canal 1 -▁1971 1 -▁collusion 1 -▁Canal 1 -▁unexpectedly 1 -▁Cuomo 1 -▁norms 1 -▁decorations 1 -▁filmmaker 1 -▁Row 1 -▁bolster 1 -then 1 -▁performs 1 -▁Choice 1 -▁Products 1 -▁lifelong 1 -wire 1 -▁Clara 1 -▁copied 1 -▁exotic 1 -cross 1 -▁abundance 1 -▁Hero 1 -▁rack 1 -▁vanished 1 -▁Bang 1 -▁individually 1 -▁feud 1 -sea 1 -▁freezer 1 -▁Loading 1 -hat 1 -OT 1 -:10 1 -▁ash 1 -▁Boris 1 -▁concludes 1 -▁surgical 1 -bell 1 -▁wool 1 -▁luggage 1 -SF 1 -▁supplement 1 -▁propose 1 -tone 1 -▁Pinterest 1 -▁depicted 1 -▁pulse 1 -▁Southampton 1 -▁EV 1 -▁#2 1 -2) 1 -▁Mostly 1 -lli 1 -▁slick 1 -▁comfortably 1 -400 1 -▁Ultra 1 -▁$19 1 -▁symbolic 1 -▁Value 1 -▁Wy 1 -▁discharged 1 -bre 1 -wind 1 -▁Multi 1 -▁Fernando 1 -▁Meet 1 -▁silently 1 -▁alerted 1 -▁brakes 1 -▁swiftly 1 -▁communist 1 -▁rebellion 1 -▁indicators 1 -▁Newport 1 -▁collectively 1 -▁sucks 1 -▁WHO 1 -▁smelled 1 -▁Amsterdam 1 -▁peninsula 1 -▁specified 1 -▁exams 1 -▁Willie 1 -bie 1 -▁swallow 1 -▁stray 1 -ann 1 -However 1 -▁aliens 1 -▁behaviors 1 -Ka 1 -▁attributes 1 -▁freaking 1 -▁policing 1 -▁Skip 1 -▁disk 1 -league 1 -▁countryside 1 -▁BA 1 -▁melt 1 -▁boosting 1 -▁cooperative 1 -▁Chemical 1 -▁ordeal 1 -` 1 -IR 1 -▁Cyber 1 -▁stunt 1 -▁architectural 1 -▁cracks 1 -▁excellence 1 -▁2002. 1 -▁Ava 1 -iah 1 -World 1 -web 1 -▁blows 1 -▁entertain 1 -▁reunited 1 -▁Dawson 1 -▁poems 1 -▁sedan 1 -▁preceded 1 -kick 1 -▁absorbed 1 -▁ra 1 -▁varieties 1 -▁rainy 1 -▁$21 1 -mos 1 -▁steer 1 -feld 1 -Di 1 -wen 1 -▁2000. 
1 -▁timber 1 -hara 1 -oka 1 -▁prolonged 1 -▁summoned 1 -▁Floyd 1 -▁Kerala 1 -▁Bradford 1 -▁Hungarian 1 -▁eve 1 -▁grounded 1 -▁Paper 1 -BO 1 -▁Carey 1 -SB 1 -cell 1 -▁Lakes 1 -▁Serbia 1 -stick 1 -▁Bend 1 -▁Vernon 1 -▁hacked 1 -▁turnaround 1 -▁glove 1 -▁Raw 1 -▁Fra 1 -OP 1 -▁louder 1 -▁corrected 1 -▁Figure 1 -▁Argentine 1 -▁sprint 1 -iri 1 -▁Heath 1 -Up 1 -SM 1 -▁dancer 1 -rest 1 -June 1 -Life 1 -▁Pal 1 -▁artillery 1 -nine 1 -▁equation 1 -Ga 1 -▁offender 1 -▁embarrassment 1 -▁disastrous 1 -tive 1 -▁repeating 1 -▁consult 1 -used 1 -▁develops 1 -▁disasters 1 -▁magnificent 1 -pack 1 -▁appoint 1 -▁cycles 1 -▁circulation 1 -▁Mitt 1 -etic 1 -SCNG 1 -▁predictable 1 -AG 1 -▁queue 1 -break 1 -▁“[ 1 -▁Regiment 1 -▁CP 1 -▁Previously 1 -ike 1 -▁Camera 1 -rr 1 -▁rendered 1 -▁youths 1 -▁innocence 1 -▁honesty 1 -▁Chapman 1 -▁Creative 1 -▁stalled 1 -▁Scout 1 -▁evolving 1 -rating 1 -ass 1 -▁Dixon 1 -▁confined 1 -ude 1 -▁accelerated 1 -▁conferences 1 -▁2% 1 -▁tempted 1 -▁warns 1 -▁influences 1 -▁weakened 1 -▁monitors 1 -▁CAPA 1 -▁Birthday 1 -tta 1 -▁excuses 1 -ech 1 -▁$22 1 -▁Devon 1 -▁Nokia 1 -▁Clayton 1 -▁Doc 1 -▁mentioning 1 -▁vest 1 -▁vicious 1 -▁visas 1 -imo 1 -▁Pruitt 1 -▁sounding 1 -▁swings 1 -▁Swan 1 -▁Tab 1 -▁Gre 1 -▁Collection 1 -▁Legion 1 -▁Xavier 1 -▁Barclays 1 -turn 1 -▁progressed 1 -▁flavors 1 -▁EA 1 -▁jumps 1 -▁1964 1 -▁blessings 1 -▁jo 1 -▁militia 1 -▁tearing 1 -▁nude 1 -▁surged 1 -▁MRS 1 -▁beam 1 -▁drummer 1 -▁campuses 1 -▁traction 1 -ando 1 -▁condo 1 -rag 1 -▁Redskins 1 -▁Across 1 -▁smells 1 -▁Eva 1 -▁Kre 1 -▁counseling 1 -▁Bag 1 -llo 1 -▁Sears 1 -° 1 -▁Hebrew 1 -▁Snyder 1 -▁Cer 1 -▁Conway 1 -▁(2) 1 -▁framed 1 -▁merchant 1 -uda 1 -chy 1 -language 1 -deep 1 -▁Calvin 1 -▁heavier 1 -▁impress 1 -▁Ran 1 -▁composite 1 -▁synthetic 1 -rid 1 -▁ne 1 -▁Koch 1 -aire 1 -▁Higher 1 -▁seller 1 -▁surprises 1 -▁Pie 1 -▁Bruno 1 -▁Santos 1 -▁Manitoba 1 -▁Armenian 1 -proof 1 -▁berth 1 -▁constituents 1 -▁Shan 1 -▁NL 1 -▁Jer 1 -quin 1 -▁retrieve 1 -▁comprising 1 -▁BO 1 -▁PTI 1 -▁pardon 1 -▁economically 1 -▁screw 1 -▁spur 1 -▁Doctors 1 -▁Nam 1 -▁Chester 1 -▁obstacle 1 -▁Historical 1 -▁candle 1 -▁Nicolas 1 -:50 1 -make 1 -▁NAFTA 1 -▁ONE 1 -EE 1 -▁Kas 1 -▁Burton 1 -▁meds 1 -▁Sunni 1 -▁105 1 -its 1 -▁implied 1 -▁$24 1 -▁nursery 1 -▁marching 1 -▁Retail 1 -▁Diaz 1 -▁Anything 1 -▁lowering 1 -sia 1 -▁Downtown 1 -▁premiums 1 -▁Bucks 1 -State 1 -▁arrow 1 -▁hinted 1 -file 1 -▁undocumented 1 -gro 1 -▁($0. 
[Deleted dictionary file: several thousand SentencePiece-style vocabulary entries in `<token> <count>` format, one per diff line (e.g. `-▁motorists 1`, `-▁youngsters 1`, `-alo 1`), every entry listed with a count of 1.]
-▁Harvest 1 -▁Autumn 1 -▁impacting 1 -▁Anonymous 1 -▁cruiser 1 -named 1 -meaning 1 -▁Monkey 1 -National 1 -▁aspire 1 -gam 1 -▁casinos 1 -ı 1 -▁paradise 1 -▁storing 1 -▁adrenaline 1 -▁socialism 1 -▁Lea 1 -▁Hole 1 -▁burgers 1 -pri 1 -▁transporting 1 -▁stroll 1 -▁Teacher 1 -▁tram 1 -▁UBS 1 -▁builders 1 -OW 1 -▁Gy 1 -▁Publishing 1 -▁Rodrigo 1 -▁Wick 1 -induced 1 -▁sweetheart 1 -▁Fry 1 -▁TIME 1 -▁possesses 1 -▁deed 1 -▁zoom 1 -▁Everett 1 -▁Sonny 1 -▁arrogant 1 -▁eccentric 1 -▁Resolution 1 -mur 1 -▁sunrise 1 -ifies 1 -▁Baton 1 -▁ransom 1 -▁getaway 1 -▁warranty 1 -▁mins 1 -▁versa 1 -Bi 1 -▁greatness 1 -▁shrug 1 -▁sluggish 1 -songwriter 1 -cup 1 -▁Kol 1 -▁richer 1 -▁Vehicle 1 -▁insurer 1 -▁bowel 1 -ket 1 -erry 1 -▁Lap 1 -Dec 1 -▁outcry 1 -▁Huskies 1 -▁pseudo 1 -▁butterflies 1 -Ab 1 -▁seriousness 1 -▁dr 1 -cip 1 -▁escalate 1 -▁philosopher 1 -▁binge 1 -▁maze 1 -▁Buzz 1 -▁abrupt 1 -▁robotics 1 -ump 1 -sports 1 -▁42- 1 -eman 1 -▁flashes 1 -▁Olive 1 -▁Ricardo 1 -▁bulls 1 -▁Leonardo 1 -moderate 1 -▁Ou 1 -▁Ped 1 -▁Till 1 -▁spiders 1 -▁swords 1 -▁calmed 1 -appointed 1 -▁diligence 1 -▁optimization 1 -▁Eleanor 1 -uel 1 -,’’ 1 -▁outsider 1 -▁Ai 1 -▁southbound 1 -▁bible 1 -avan 1 -▁1991. 1 -▁chatter 1 -Note 1 -wald 1 -▁3000 1 -▁widening 1 -▁$33 1 -▁Agricultural 1 -▁Bournemouth 1 -▁fortress 1 -▁GBX 1 -sdale 1 -cycle 1 -▁Melvin 1 -▁Combat 1 -▁poo 1 -mack 1 -▁onwards 1 -WC 1 -▁inflicted 1 -▁readings 1 -mother 1 -▁cameo 1 -ways 1 -▁wedge 1 -▁selections 1 -▁Fried 1 -▁Hoo 1 -▁youthful 1 -▁rupees 1 -▁attain 1 -ager 1 -.44 1 -▁freedoms 1 -▁Mercer 1 -▁Kendrick 1 -giving 1 -ere 1 -Fe 1 -▁Cougars 1 -▁newcomer 1 -str 1 -▁brushing 1 -Ko 1 -▁Blvd 1 -▁más 1 -▁gearing 1 -▁gatherings 1 -▁Theodore 1 -▁inaugurated 1 -rol 1 -▁Duffy 1 -▁Rally 1 -▁detached 1 -▁Pole 1 -hm 1 -▁motivate 1 -▁hourly 1 -▁Loop 1 -▁marches 1 -fill 1 -▁fashioned 1 -▁slender 1 -▁lipstick 1 -▁Maps 1 -▁Sample 1 -40,000 1 -▁erased 1 -▁Creator 1 -▁117 1 -{ 1 -▁MGM 1 -▁Rust 1 -▁Rap 1 -▁advisors 1 -▁Francois 1 -purpose 1 -▁2.9 1 -IX 1 -▁elevate 1 -▁duct 1 -▁piss 1 -▁Sham 1 -▁Grass 1 -▁sch 1 -dro 1 -▁Gap 1 -▁Cir 1 -▁restricting 1 -▁tumble 1 -pipe 1 -rea 1 -▁Thorn 1 -▁Pos 1 -February 1 -▁inheritance 1 -▁candid 1 -▁breathtaking 1 -▁consolidate 1 -▁Mona 1 -▁wearable 1 -▁pleasing 1 -▁hesitated 1 -▁Archie 1 -▁Nets 1 -oda 1 -▁devote 1 -▁quota 1 -dir 1 -▁Nest 1 -cutting 1 -▁Fix 1 -Their 1 -▁narrator 1 -▁registry 1 -▁infringement 1 -▁Jasmine 1 -▁Various 1 -▁Happ 1 -▁Costco 1 -ih 1 -▁lashed 1 -▁avenues 1 -▁behaved 1 -rm 1 -▁transcripts 1 -0.5 1 -effective 1 -▁tanker 1 --29 1 -▁alt 1 -▁Goldberg 1 -▁` 1 -▁bribes 1 -▁environmentally 1 -▁onstage 1 -▁Toledo 1 -▁extinct 1 -▁lobster 1 -▁Flo 1 -▁amassed 1 -▁Nunes 1 -▁northbound 1 -▁coating 1 -▁Zak 1 -▁dangerously 1 -▁bastard 1 -▁impulse 1 -▁drilled 1 -▁Sinn 1 -dead 1 -▁cartoons 1 -▁admin 1 -▁Buffett 1 -BU 1 -▁organise 1 -▁mysteries 1 -▁vacancy 1 -▁bowler 1 -▁reunite 1 -▁BBQ 1 -▁hover 1 -▁puff 1 -▁Lowry 1 -▁Lor 1 -▁whereby 1 -▁mutually 1 -motion 1 -1.1 1 -▁Alien 1 -▁DI 1 -▁Cord 1 -due 1 -OH 1 -NG 1 -▁AMC 1 -▁Sym 1 -▁unspecified 1 -▁stumbling 1 -▁erotic 1 -▁Sheldon 1 -▁plaza 1 -▁shaved 1 -▁Mala 1 -▁exiting 1 -▁Javier 1 -▁Surf 1 -▁4) 1 -▁Cooperation 1 -▁Hate 1 -▁Warning 1 -▁12% 1 -▁Mud 1 -▁Terra 1 -▁antibiotic 1 -▁stew 1 -▁Dome 1 -▁legalized 1 -▁Merry 1 -▁paste 1 -▁quantitative 1 -▁McKay 1 -▁freestyle 1 -▁subpoena 1 -▁Yates 1 -Share 1 -▁expires 1 -better 1 -pr 1 -▁banners 1 -astic 1 -▁blinked 1 -▁apprentice 1 -▁Diocese 1 -▁Angie 1 -▁Driving 1 -thin 1 -▁adulthood 1 -▁Bridges 1 -boo 1 -uer 1 -▁#3 1 -,100 1 -stat 1 -Mc 1 -▁Seg 1 -wee 1 
-cultural 1 -aff 1 -▁opposes 1 -▁streamed 1 -▁lad 1 -▁Hector 1 -▁sermon 1 -▁Questions 1 -▁DP 1 -▁cultivation 1 -▁DeVos 1 -▁Scripture 1 -▁manufactures 1 -▁Manor 1 -▁Warwick 1 -▁chalk 1 -▁Venture 1 -▁Cats 1 -▁Resorts 1 -▁Hancock 1 -▁sinister 1 -▁Presbyterian 1 -▁tract 1 -▁hinder 1 -▁ki 1 -▁Jackets 1 -▁bracelet 1 -▁Smoke 1 -▁enclosure 1 -▁Sons 1 -▁regretted 1 -▁methane 1 -ality 1 -▁Americas 1 -▁dove 1 -▁Mavericks 1 -▁Cummings 1 -▁Rum 1 -▁growers 1 -▁waterfront 1 -▁creations 1 -▁Slater 1 -chain 1 -▁induce 1 -cl 1 -▁humiliation 1 -▁uneasy 1 -mil 1 -tai 1 -▁pave 1 -▁Says 1 -▁intoxicated 1 -▁poultry 1 -▁messenger 1 -▁surging 1 -▁155 1 -▁Kimberly 1 -▁pillars 1 -▁1982, 1 -nova 1 -▁leftover 1 -▁pertaining 1 -plex 1 -poor 1 -▁ridiculously 1 -▁freshmen 1 -▁axis 1 -▁roasted 1 -6.5 1 -▁procedural 1 -▁denim 1 -▁suppression 1 -2- 1 -▁Bash 1 -▁envy 1 -▁piercing 1 -suit 1 -pher 1 -té 1 -▁catering 1 -▁Lib 1 -▁sourced 1 -▁Merc 1 -▁Marcos 1 -▁swirling 1 -▁Sau 1 -▁1870 1 -▁verb 1 -nne 1 -bla 1 -RU 1 -▁illustrations 1 -rone 1 -▁Pul 1 -nea 1 -▁incurred 1 -▁Slide 1 -▁cricketer 1 -▁unseen 1 -fact 1 -▁Oaks 1 -▁Bulgarian 1 -▁Ming 1 -▁Slovakia 1 -▁sculptures 1 -▁EL 1 -▁conditional 1 -▁£6 1 -▁Recovery 1 -▁Somerset 1 -▁northwestern 1 -zh 1 -cri 1 -▁stalking 1 -oop 1 -▁8:00 1 -▁bitten 1 -▁rattled 1 -▁ache 1 -▁evangelical 1 -▁ext 1 -▁"( 1 -▁landslide 1 -▁examiner 1 -▁seeming 1 -isse 1 -▁Huddersfield 1 -▁tabloid 1 -▁unfinished 1 -▁unreasonable 1 -▁crest 1 -▁vol 1 -▁Olson 1 -▁hectic 1 -▁quitting 1 -▁dodge 1 -Je 1 -▁er 1 -IST 1 -▁WB 1 -▁sprayed 1 -▁dishwasher 1 -▁senate 1 -▁nanny 1 -▁Daly 1 -▁doubted 1 -mme 1 -mia 1 -DT 1 -dai 1 -/2018 1 -▁predator 1 -▁Volvo 1 -▁prevailed 1 -▁Anfield 1 -▁contraction 1 -▁constituencies 1 -▁MacDonald 1 -▁planner 1 -▁2015-16 1 -iza 1 -▁Patty 1 -▁downright 1 -▁shooters 1 -▁paw 1 -▁worm 1 -▁ke 1 -▁Romans 1 -▁IL 1 -death 1 -▁Mackenzie 1 -▁Sutherland 1 -▁confinement 1 -awaited 1 -▁restless 1 -▁congratulate 1 -▁duel 1 -price 1 -▁upped 1 -▁submitting 1 -▁generators 1 -▁Muk 1 -SN 1 -▁Transfer 1 -60,000 1 -▁consultations 1 -▁Statement 1 -▁felon 1 -▁Arabian 1 -Inter 1 -▁Meta 1 -▁hitch 1 -▁Carnival 1 -▁bladder 1 -▁Bernstein 1 -▁startling 1 -▁confiscated 1 -▁discontinued 1 -Fire 1 -▁Pain 1 -▁periodically 1 -▁1905 1 -bha 1 -▁sexist 1 -▁anchored 1 -▁Thu 1 -nus 1 -NF 1 -én 1 -▁Jung 1 -▁eyebrow 1 -▁Newsletter 1 -▁prophet 1 -▁Romero 1 -▁Tun 1 -▁arrows 1 -▁ponder 1 -▁colourful 1 -▁USDA 1 -▁Trout 1 -winner 1 -▁Damian 1 -ske 1 -▁inland 1 -▁WW 1 -▁RI 1 -central 1 -alla 1 -▁font 1 -quist 1 -▁Compared 1 -▁mor 1 -▁TC 1 -▁Paramount 1 -▁Brewer 1 -birth 1 -▁dinosaurs 1 -▁occupying 1 -Sp 1 -graph 1 -▁isolate 1 -▁tallest 1 -USD 1 -▁Ariz 1 -▁Person 1 -▁undo 1 -▁broadcasts 1 -birds 1 -▁exhibitions 1 -▁Muhammadu 1 -▁Abigail 1 -▁Gracie 1 -▁Nuggets 1 -▁induction 1 -▁oriented 1 -▁Application 1 -▁jockey 1 -▁accessibility 1 -uter 1 -▁Powers 1 -▁Rie 1 -▁augmented 1 -▁Pond 1 -▁remorse 1 -male 1 -▁Oman 1 -▁authorize 1 -▁sy 1 -vir 1 -▁VIDEO 1 -▁antics 1 -▁restrained 1 -▁berries 1 -▁cabinets 1 -illi 1 -reaching 1 -▁URL 1 -▁rag 1 -▁til 1 -▁Shields 1 -▁phenomena 1 -▁educator 1 -▁Excellence 1 -▁collateral 1 -▁router 1 -▁Cai 1 -▁vampires 1 -▁undermined 1 -▁lions 1 -▁DD 1 -▁goddess 1 -▁1907 1 -▁Croatian 1 -▁upsetting 1 -▁Schu 1 -sector 1 -▁Candy 1 -▁countered 1 -▁Stuff 1 -▁Stranger 1 -▁Soros 1 -▁carnival 1 -▁radius 1 -▁slices 1 -▁consequently 1 -Nobody 1 -RG 1 -TOR 1 -7) 1 -▁inhabit 1 -ESS 1 -8.5 1 -▁obese 1 -▁archaeological 1 -▁sobbing 1 -▁Audio 1 -bl 1 -▁NV 1 -▁climax 1 -▁Academic 1 -▁Buckingham 1 -achi 1 -▁analytical 1 -▁Waterloo 1 
-▁clubhouse 1 -▁Mei 1 -▁sniff 1 -▁gi 1 -/3 1 -▁cautiously 1 -agi 1 -▁emulate 1 -stroke 1 -101 1 -▁seamless 1 -▁flex 1 -▁nephews 1 -▁subdued 1 -▁bunk 1 -▁McC 1 -▁slaying 1 -▁WS 1 -▁frost 1 -▁delaying 1 -▁Detectives 1 -▁quiz 1 -▁Robot 1 -▁intimidated 1 -▁seventeen 1 -ener 1 -▁Mouse 1 -▁tendencies 1 -▁Outdoor 1 -mere 1 -▁MK 1 -▁Cassie 1 -▁deserving 1 -▁Romeo 1 -▁dances 1 -▁Misty 1 -▁emission 1 -▁pest 1 -Point 1 -sym 1 -▁TN 1 -genic 1 -▁Rubin 1 -▁dearly 1 -▁indexes 1 -▁bins 1 -▁Reader 1 -▁Gunn 1 -▁moan 1 -▁dispatch 1 -fun 1 -▁3.6 1 -lying 1 -▁Somewhere 1 -▁bean 1 -▁PI 1 -▁roaming 1 -▁deserted 1 -▁Goal 1 -▁deploying 1 -▁VC 1 -poll 1 -vian 1 -▁hubs 1 -research 1 -▁prominently 1 -▁preaching 1 -▁propel 1 -▁Carnegie 1 -cycl 1 -▁Predators 1 -ín 1 -rad 1 -▁Outperform 1 -▁Scan 1 -▁MAR 1 -▁Push 1 -▁Shiv 1 -harm 1 -short 1 -▁treasurer 1 -▁hypothetical 1 -▁Facility 1 -▁Kathryn 1 -▁outscored 1 -▁Viktor 1 -View 1 -highest 1 -▁Drivers 1 -ulating 1 -sale 1 -▁informs 1 -LR 1 -cation 1 -▁Katz 1 -▁impatient 1 -▁longing 1 -▁Toy 1 -▁chewing 1 -ect 1 -▁cinnamon 1 -▁horribly 1 -▁Malone 1 -▁Destiny 1 -▁majors 1 -▁pleas 1 -berger 1 -▁Release 1 -▁groundbreaking 1 -▁Scots 1 -▁Khalid 1 -Mike 1 -ifi 1 -▁Zar 1 -▁Forty 1 -▁Spe 1 -▁cartel 1 -▁Volunteers 1 -GM 1 -▁withdrawing 1 -▁divert 1 -▁clarification 1 -▁entre 1 -▁wreckage 1 -▁lays 1 -.33 1 -pir 1 -▁Nicholson 1 -▁curl 1 -▁Parsons 1 -▁7:00 1 -▁wired 1 -,000,000 1 -▁defects 1 -▁pathways 1 -▁banging 1 -▁Physical 1 -ddle 1 -▁spices 1 -▁PEOPLE 1 -▁utmost 1 -▁crappy 1 -▁urgently 1 -▁confuse 1 -▁Fuji 1 -manship 1 -▁hazards 1 -▁whim 1 -▁Garland 1 -▁outage 1 -OG 1 -▁Stokes 1 -ABC 1 -▁NH 1 -▁LT 1 -▁MAN 1 -▁Whilst 1 -▁relaxation 1 -▁1979, 1 -▁accessory 1 -▁Tale 1 -▁productions 1 -lund 1 -▁glut 1 -idge 1 -▁Pant 1 -▁bulbs 1 -▁boiled 1 -▁Coun 1 -▁Howe 1 -▁plugged 1 -▁famine 1 -▁Neville 1 -▁craving 1 -▁haunting 1 -▁delegate 1 -▁Mandela 1 -▁willingly 1 -▁Lund 1 -▁Blo 1 -▁3.4 1 -▁Pres 1 -▁weave 1 -▁benches 1 -rounder 1 -▁dinosaur 1 -▁PDF 1 -▁Sn 1 -▁Lennon 1 -▁1860 1 -▁Copa 1 -▁allergy 1 -▁drifting 1 -▁Viking 1 -▁hopeless 1 -▁-1 1 -course 1 -▁Ambulance 1 -▁Selena 1 -▁bu 1 -lessly 1 -▁savvy 1 -▁tutor 1 -▁arising 1 -▁NS 1 -▁Nah 1 -▁nerd 1 -▁paced 1 -LB 1 -approved 1 -girlfriend 1 -▁gadgets 1 -▁pioneering 1 -▁fluffy 1 -▁lush 1 -▁pup 1 -▁mills 1 -▁manga 1 -Gi 1 -serv 1 -▁1989. 
1 -▁Stevenson 1 -▁Franken 1 -▁Humphrey 1 -▁histories 1 -▁velvet 1 -▁Fellowship 1 -▁mercury 1 -▁Tropical 1 -▁discriminatory 1 -▁$38 1 -Being 1 -▁poisoned 1 -▁determines 1 -▁ti 1 -▁cone 1 -▁succeeding 1 -▁Display 1 -▁withheld 1 -mass 1 -▁vodka 1 -▁Maddie 1 -▁troopers 1 -▁culprit 1 -give 1 -Both 1 -stro 1 -▁LIVE 1 -▁inventor 1 -▁trailers 1 -▁Seal 1 -▁squat 1 -▁arthritis 1 -▁yourselves 1 -▁benefiting 1 -▁Religious 1 -▁Marriage 1 -gmail 1 -▁Conrad 1 -ril 1 -▁Advocate 1 -▁preach 1 -▁fascist 1 -▁1967, 1 -▁dresser 1 -▁exemptions 1 -▁1984, 1 -▁intensely 1 -▁CES 1 -aunt 1 -amp 1 -▁oz 1 -▁outages 1 -▁Blackhawks 1 -▁consoles 1 -▁paralyzed 1 -▁tit 1 -▁aroma 1 -angle 1 -▁magistrate 1 -▁punishing 1 -▁Silk 1 -▁Assistance 1 -▁WAY 1 -kra 1 -sbury 1 -▁leased 1 -▁maple 1 -▁cre 1 -▁backstage 1 -▁petitions 1 -++ 1 -cept 1 -▁buckets 1 -phe 1 -▁Hyper 1 -▁indecent 1 -Te 1 -.11 1 -▁Declaration 1 -▁cumulative 1 -▁McMaster 1 -▁Hutchinson 1 -skin 1 -distance 1 -▁deception 1 -▁IF 1 -▁wards 1 -▁Seems 1 -▁09 1 -▁receipts 1 -▁Nav 1 -▁peel 1 -▁sacrificed 1 -▁plastics 1 -vio 1 -fulness 1 -▁poking 1 -▁Dong 1 -▁abduction 1 -▁Treatment 1 -▁subscriptions 1 -▁dashed 1 -▁Camden 1 -▁Este 1 -▁Kul 1 -▁bowled 1 -▁Beaver 1 -▁deadlines 1 -▁informant 1 -▁joyful 1 -rain 1 -▁ram 1 -▁joints 1 -▁existential 1 -▁Kolb 1 -▁fertilizer 1 -▁thereof 1 -.26 1 -▁repayment 1 -ads 1 -Em 1 -▁DHS 1 -▁claws 1 -▁fists 1 -▁Phi 1 -▁RF 1 -▁Spice 1 -▁stainless 1 -▁routines 1 -▁writings 1 -▁cinematic 1 -▁[9] 1 -▁95% 1 -▁Yong 1 -▁assurances 1 -▁mouths 1 -▁Senegal 1 -▁scrimmage 1 -▁concluding 1 -lac 1 -▁Cassidy 1 -▁cocktails 1 -▁Expand 1 -▁referral 1 -▁VPN 1 -▁Info 1 -▁Tran 1 -▁barge 1 -▁Countries 1 -▁biking 1 -▁harvesting 1 -▁homered 1 -▁uni 1 -Link 1 -quan 1 -▁miracles 1 -▁1904 1 -GT 1 -▁incapable 1 -▁expansive 1 -▁Spi 1 -▁Peggy 1 -▁NGO 1 -▁pancakes 1 -▁ethnicity 1 -enburg 1 -▁gusts 1 -▁cashier 1 -▁kin 1 -▁Sec 1 -▁Virtual 1 -▁43- 1 -▁ubiquitous 1 -▁iPod 1 -▁upfront 1 -▁Coral 1 -▁espionage 1 -cie 1 -▁behold 1 -▁chiefs 1 -▁Kru 1 -otti 1 -▁Underwood 1 -▁trolls 1 -▁databases 1 -30,000 1 -▁monumental 1 -lation 1 -dian 1 -▁transmit 1 -▁IX 1 -▁Stones 1 -▁linguistic 1 -script 1 -▁scaling 1 -▁leasing 1 -▁consultancy 1 -▁Vettel 1 -▁Colbert 1 -▁attracts 1 -▁malaria 1 -▁Normal 1 -rist 1 -ssy 1 -local 1 -LM 1 -▁Canon 1 -/9 1 -▁medalist 1 -▁Liberties 1 -▁frantically 1 -▁noteworthy 1 -▁Cheese 1 -▁Starr 1 -▁Blackburn 1 -gn 1 -▁FO 1 -anu 1 -▁Cali 1 -▁rewrite 1 -▁acids 1 -ider 1 -▁Addressing 1 -▁Bharat 1 -▁bonding 1 -▁DIY 1 -▁hemp 1 -▁NYSE 1 -chel 1 -▁smashing 1 -”), 1 -▁motherhood 1 -▁sweaty 1 -image 1 -▁buzzing 1 -scenes 1 -▁coherent 1 -▁tandem 1 -▁Strength 1 -▁Akron 1 -▁Gibbs 1 -Gar 1 -▁Coastal 1 -▁Staples 1 -▁categorized 1 -▁lobbyists 1 -pens 1 -entry 1 -▁linen 1 -▁avocado 1 -▁disdain 1 -▁gymnastics 1 -▁Lem 1 -▁Killer 1 -brother 1 -▁Into 1 -▁Legends 1 -▁criticisms 1 -TL 1 -▁bottled 1 -▁dwelling 1 -▁Ling 1 -▁Ent 1 -▁Marcel 1 -▁mourn 1 -▁reins 1 -▁aluminium 1 -▁incarceration 1 -▁Sage 1 -▁Colonial 1 -Team 1 -▁announcer 1 -▁nominate 1 -▁Chevy 1 -factor 1 -▁hammered 1 -.48 1 -xx 1 -▁Barker 1 -kers 1 -shin 1 -nc 1 -▁Lum 1 -Very 1 -▁Napoleon 1 -▁transcend 1 -▁submerged 1 -▁unnoticed 1 -oke 1 -▁Veronica 1 -▁comb 1 -laden 1 -▁envision 1 -▁Outstanding 1 -▁heartfelt 1 -▁Ton 1 -▁camel 1 -ws 1 -▁Frances 1 -▁implicated 1 -▁Canton 1 -▁researched 1 -▁accumulate 1 -▁forwards 1 -▁abnormal 1 -▁Fel 1 -:1 1 -▁Bea 1 -▁detrimental 1 -▁offending 1 -▁chant 1 -▁Walters 1 -▁Grizzlies 1 -▁Peyton 1 -▁gruesome 1 -▁resemblance 1 -Having 1 -▁Oslo 1 -▁Remain 1 -▁cho 1 -▁outlines 1 -▁lamented 1 -▁Rom 1 
-▁hacker 1 -▁Tier 1 -▁(21 1 -▁(30 1 -▁reinstated 1 -▁Individual 1 -▁Semi 1 -▁swarm 1 -▁$55 1 -▁190 1 -iano 1 -▁Idol 1 -▁adventurous 1 -▁interfering 1 -▁GETTY 1 -▁pianist 1 -▁Emerson 1 -▁diets 1 -▁kitty 1 --47 1 -add 1 -▁Complete 1 -▁authored 1 -rri 1 -▁FR 1 -▁Marriott 1 -▁Fiction 1 -▁bisexual 1 -80,000 1 -▁noodles 1 -moon 1 -bing 1 -tee 1 -▁Laurel 1 -shift 1 -▁Lauderdale 1 -▁Fusion 1 -▁eviction 1 -▁alot 1 -▁abolished 1 -▁HQ 1 -▁1909 1 -▁Cora 1 -▁cor 1 -▁hears 1 -▁heater 1 -▁Helena 1 -▁Frog 1 -hou 1 -▁Levy 1 -▁Greenville 1 -▁radically 1 -ñ 1 --50 1 -▁donkey 1 -–0 1 -▁Volume 1 -Sh 1 -▁honours 1 -▁shipment 1 -▁deepen 1 -Port 1 -▁arresting 1 -rap 1 -ña 1 -▁astronaut 1 -Pri 1 -▁Bosnia 1 -▁ABR 1 -▁Leg 1 -▁Shiite 1 -Ju 1 -combe 1 -lak 1 -▁militias 1 -▁mistakenly 1 -▁incorrectly 1 -▁Michele 1 -▁Algeria 1 -▁2022. 1 -▁Rak 1 -▁rum 1 -kit 1 -▁Jump 1 -▁Cause 1 -▁Coroner 1 -▁Correctional 1 -Mon 1 -▁Pont 1 -▁pledges 1 -▁rampage 1 -▁stud 1 -▁bacterial 1 -iq 1 -▁recommending 1 -.37 1 -▁PCs 1 -ardi 1 -Trans 1 -VO 1 -70,000 1 -dance 1 -▁Artists 1 -▁Anniversary 1 -▁grazing 1 -▁unconventional 1 -▁OLED 1 -maybe 1 -▁nieces 1 -▁Gerry 1 -▁Hind 1 -▁Jaguar 1 -▁nipples 1 -Bra 1 -▁Kee 1 -▁compulsory 1 -▁repurchase 1 -▁Evelyn 1 -▁). 1 -▁retains 1 -inda 1 -Old 1 -cess 1 -▁partition 1 -▁Ugh 1 -▁flats 1 -▁cling 1 -▁invaluable 1 -▁contracting 1 -▁twisting 1 -personal 1 -▁fascination 1 -▁Khloe 1 -▁Blackpool 1 -▁OC 1 -▁evidently 1 -▁Zion 1 -AV 1 -.49 1 -▁peg 1 -END 1 -ony 1 -▁faulty 1 -rep 1 -▁Forever 1 -▁Bah 1 -▁gears 1 -▁reclaim 1 -▁NW 1 -▁grease 1 -▁Suisse 1 -▁newsletters 1 -▁38, 1 -▁cardinal 1 -▁woken 1 -▁corridors 1 -/2017 1 -▁Downs 1 -▁toggle 1 -▁Looks 1 -▁alum 1 -arra 1 -▁beams 1 -common 1 -▁reeling 1 -ury 1 -▁blacks 1 -▁pillar 1 -rama 1 -▁Huckabee 1 -▁Becker 1 -confidence 1 -lash 1 -▁socket 1 -▁Chand 1 -5.5 1 -Call 1 -▁commissions 1 -▁restrictive 1 -.14 1 -▁ticking 1 -▁1945, 1 -▁brightest 1 -lig 1 -▁morale 1 -▁Guards 1 -▁supremacist 1 -▁outpost 1 -▁collapsing 1 -▁educating 1 -▁enclave 1 -▁intimacy 1 -▁veterinarian 1 -▁Snapdragon 1 -▁avatar 1 -▁outsiders 1 -lang 1 -escent 1 -▁provoke 1 -▁Comm 1 -living 1 -▁Mayer 1 -▁Macedonia 1 -▁Pea 1 -▁WP 1 -▁ridden 1 -▁prowess 1 -▁forcibly 1 -▁Lana 1 -▁Regarding 1 -▁Animals 1 -▁forearm 1 -som 1 -▁apprehended 1 -▁Ei 1 -▁(26 1 -▁ventured 1 -Joe 1 -rani 1 -▁Enterprises 1 -▁tutorial 1 -▁pops 1 -▁puzzles 1 -▁textile 1 -▁baptism 1 -▁Shepard 1 -▁Letter 1 -Facebook 1 -quet 1 -▁Renee 1 -▁Kh 1 -▁landscapes 1 -▁tampering 1 -▁Present 1 -Care 1 -▁stronghold 1 -▁Burnett 1 -Tom 1 -▁Paint 1 -▁Lockheed 1 -▁Welfare 1 -▁oblivious 1 -▁ligament 1 -▁rotten 1 -dict 1 -▁145 1 -▁70- 1 -MENT 1 -cr 1 -▁Ellison 1 -▁subordinate 1 --60 1 -▁bursting 1 -▁Mou 1 -titled 1 -▁Zimmerman 1 -hack 1 -▁anguish 1 -▁?? 
1 -▁granite 1 -▁resolving 1 -Next 1 -▁Leaving 1 -▁Garner 1 -▁Rogue 1 -▁Witt 1 -▁exported 1 -bis 1 -▁Lorenzo 1 -▁Hack 1 -bye 1 -▁vandalism 1 -▁apologies 1 -▁AF 1 -▁Blasio 1 -▁sensed 1 -▁foes 1 -film 1 -clean 1 -▁proliferation 1 -▁53- 1 -pole 1 -▁sew 1 -▁supermarkets 1 -▁ankles 1 -▁4.0 1 -▁advertise 1 -▁frank 1 -▁recognizable 1 -▁scenic 1 -▁Built 1 -ibly 1 -▁pundits 1 -▁volley 1 --1, 1 -▁Eyes 1 -▁Tesco 1 -▁Vine 1 -grad 1 -▁booze 1 -▁Negro 1 -zia 1 -▁verbally 1 -▁Wolfe 1 -▁Hamm 1 -unit 1 -sworth 1 -▁WARNING 1 -▁subdivision 1 -▁massively 1 -▁patriotic 1 -Power 1 -▁articulate 1 -▁encrypted 1 -▁MSCI 1 -▁1972, 1 -▁Labs 1 -▁coloring 1 -▁bundled 1 -1.7 1 -dark 1 -1% 1 -▁Yemeni 1 -▁unrealistic 1 -▁icing 1 -▁affluent 1 -▁fragrance 1 -▁Wife 1 -▁evade 1 -▁newsroom 1 -▁[10] 1 -payment 1 -horse 1 -▁perceptions 1 -▁differentiate 1 -2.8 1 -▁Lat 1 -▁vo 1 -▁playlist 1 -▁Shock 1 -.39 1 -▁notoriously 1 -hhh 1 -▁intuitive 1 -▁mushroom 1 -▁dealership 1 -▁Tee 1 -▁sterling 1 -▁misinformation 1 -▁crease 1 -▁inventories 1 -× 1 -–1 1 -vote 1 -fort 1 -▁Slowly 1 -▁lamps 1 -▁Pell 1 -tec 1 -quarters 1 -▁Mit 1 -▁dragons 1 -grave 1 -▁Drag 1 -▁Bone 1 -ace 1 -▁spans 1 -▁Traditional 1 -▁Sheridan 1 -▁efficacy 1 -▁prosperous 1 -▁Moines 1 -▁Aqua 1 -▁Wor 1 -▁Cent 1 -▁snowy 1 -▁buff 1 -▁GH 1 -▁Career 1 -▁Giles 1 -blog 1 -WS 1 -▁distrust 1 -EB 1 -▁Arabs 1 -▁TX 1 -▁GR 1 -▁lookout 1 -▁Olympia 1 -.46 1 -PU 1 -map 1 -itan 1 -▁AV 1 -▁Naples 1 -▁Guinness 1 -▁Pipeline 1 -assi 1 -ania 1 -▁monk 1 -▁Sup 1 -Op 1 -▁proportions 1 -▁promoter 1 -▁DB 1 -▁afforded 1 -č 1 -о 1 -▁Jeremiah 1 -▁indicative 1 -▁silhouette 1 -▁sturdy 1 -▁Rihanna 1 -▁Frazier 1 -▁auditorium 1 -▁displacement 1 -▁regulating 1 -▁Slo 1 -▁Founder 1 -▁Aspen 1 -▁Hin 1 -edu 1 -sty 1 -▁1901 1 -▁Faculty 1 -tol 1 -resistant 1 -▁spies 1 -▁canon 1 -GI 1 -eter 1 -nard 1 -▁dismay 1 -▁Ble 1 -▁insecure 1 -▁rails 1 -hair 1 -▁Album 1 -▁pilgrims 1 -▁Process 1 -▁bpd 1 -▁draped 1 -▁harvested 1 -Cor 1 -▁impressions 1 -▁Articles 1 -▁blatant 1 -▁owl 1 -▁ja 1 -▁Damien 1 -▁elf 1 -Chi 1 -▁Laurent 1 -▁Henri 1 -▁Barca 1 -▁Mandy 1 -▁Levine 1 -▁Haz 1 -▁actresses 1 -▁favoured 1 -▁Ard 1 -▁Photographer 1 -camera 1 -lib 1 -▁assortment 1 -▁Permanent 1 -▁umpire 1 -▁cruising 1 -ots 1 -▁decor 1 -GAAP 1 -song 1 -▁Sarri 1 -formerly 1 -▁caliber 1 -▁HOW 1 -▁Drink 1 -▁offensively 1 -▁Charge 1 -▁competitiveness 1 -fix 1 -▁jeopardy 1 -▁mythology 1 -▁Fischer 1 -▁homosexual 1 -.28 1 -▁addictive 1 -former 1 -Rock 1 -virus 1 -▁Moments 1 -▁Crane 1 -seed 1 -▁PPP 1 -logy 1 -▁props 1 -▁Oculus 1 -ece 1 -▁Amar 1 -burgh 1 -▁Nikola 1 -Real 1 -contract 1 -▁arises 1 -▁Xiao 1 -▁deteriorated 1 -▁stricter 1 -BP 1 -▁dime 1 -▁backbone 1 -▁Communities 1 -▁Crimson 1 -▁librarian 1 -▁Concord 1 -▁organising 1 -▁Fulton 1 -▁BlackRock 1 -▁radioactive 1 -▁Abel 1 -iu 1 -▁pajamas 1 -▁thwart 1 -▁$42 1 -▁BI 1 -▁(+ 1 -▁Honolulu 1 -▁campaigners 1 -▁Natasha 1 -▁Lafayette 1 -▁footwear 1 -▁Europeans 1 -▁Barron 1 -▁Hulk 1 -▁FI 1 -hed 1 -▁Letters 1 -▁fl 1 -▁wreath 1 -▁Ajax 1 -▁Kasich 1 -Three 1 -▁Shopping 1 -▁Petty 1 -▁upstream 1 -▁canine 1 -ndy 1 -SG 1 -fri 1 -▁Verde 1 -▁provoked 1 -▁Millennium 1 -▁Tribunal 1 -▁WordPress 1 -▁turbines 1 -▁monks 1 -▁Harare 1 -▁118 1 -▁$2,000 1 -▁Promise 1 -▁Fiat 1 -▁oxide 1 -▁knots 1 -▁Livingston 1 -▁wand 1 -▁tripled 1 -▁enroll 1 -▁Folk 1 -▁Kohl 1 -▁Budapest 1 -▁dec 1 -▁cosmic 1 -▁ovation 1 -Bill 1 -▁Underground 1 -▁Tests 1 -▁Koh 1 -▁handbag 1 -▁£7 1 -mato 1 -throw 1 -▁torment 1 -TN 1 -▁peril 1 -enstein 1 -shell 1 -▁soothing 1 -▁humility 1 -▁fabrics 1 -▁Bluff 1 -▁CN 1 -▁interchange 1 -vik 1 -▁Blacks 1 -▁horsepower 1 -▁scares 
1 -erie 1 -▁Scheme 1 -▁skirts 1 -▁Rin 1 -▁limitation 1 -iger 1 -▁Obi 1 -critical 1 -▁sanitation 1 -▁Wynne 1 -Image 1 -▁Miracle 1 -▁Terri 1 -▁cuddle 1 -▁jars 1 -▁armored 1 -▁wrongly 1 -bottom 1 -▁tangled 1 -▁correlated 1 -fic 1 -▁Chung 1 -▁emptied 1 -▁Crescent 1 -▁sizable 1 -▁Arn 1 -▁1968, 1 -▁Fuck 1 -James 1 -▁Sala 1 -Top 1 -▁knowledgeable 1 -ambo 1 -legal 1 -▁Canterbury 1 -▁Coliseum 1 -▁concentrating 1 -▁mediocre 1 -▁Olivier 1 -▁bananas 1 -▁Manufacturers 1 -tero 1 -usually 1 -▁JC 1 -▁modeled 1 -Friday 1 -▁towering 1 -ility 1 -▁presided 1 -▁roared 1 -▁IM 1 -▁Bauer 1 -▁domination 1 -▁Assessment 1 -WI 1 -▁Sylvia 1 -thank 1 -▁Ventures 1 -unk 1 -witch 1 -▁Sz 1 -once 1 -▁stocked 1 -▁sightings 1 -▁simplest 1 -▁Citi 1 -▁purported 1 -▁Herrera 1 -rink 1 -▁(22 1 -▁intrigue 1 -▁vector 1 -iman 1 -▁Dante 1 -▁Din 1 -▁ambush 1 -▁Newfoundland 1 -▁speculative 1 -▁Salvation 1 -▁strawberry 1 -▁HUGE 1 -▁Bedford 1 -▁Wentz 1 -▁Jolie 1 -▁loft 1 -▁Amit 1 -▁menace 1 -▁tumors 1 -▁alarms 1 -▁Shark 1 -armed 1 -▁Kimmel 1 -▁Borg 1 -▁Chal 1 -▁narcotics 1 -▁goodwill 1 -1.6 1 -▁crave 1 -▁GET 1 -uce 1 -▁Notes 1 -▁surrogate 1 -▁wrestle 1 -▁Wireless 1 -▁Phantom 1 -▁tabs 1 -▁Billion 1 -▁recreate 1 -farm 1 --70 1 -jer 1 -▁(4) 1 -dding 1 -▁Dwayne 1 -▁Rouhani 1 -dot 1 -▁EN 1 -▁dreadful 1 -▁Gem 1 -Copyright 1 -▁optimize 1 -lish 1 -▁11:00 1 -▁Midland 1 -▁mit 1 -moto 1 -▁Novak 1 -▁Chandra 1 -▁Blizzard 1 -▁incarcerated 1 -▁feather 1 -▁downed 1 -▁invade 1 -▁FEMA 1 -▁Jonah 1 -▁PB 1 -▁Sasha 1 -▁deport 1 -▁Lemon 1 -▁Problem 1 -▁Session 1 -▁Hearing 1 -▁puzzled 1 -▁Beard 1 -▁fa 1 -mina 1 -▁Customer 1 -▁cancers 1 -mot 1 -▁stocking 1 -▁rabbits 1 -▁Arbor 1 -▁Disaster 1 -▁fugitive 1 -▁Beacon 1 -▁Dexter 1 -▁Plastic 1 -▁rust 1 -▁Ner 1 -▁Homer 1 -▁propelled 1 -▁Abdel 1 -▁Chilean 1 -▁offend 1 -hur 1 -▁Alberto 1 -dev 1 -▁Sco 1 -▁clusters 1 -dominated 1 -hate 1 -▁doctorate 1 -▁furnace 1 -▁Luxembourg 1 -▁haha 1 -▁foreseeable 1 -▁Gateway 1 -eating 1 -▁thinner 1 -▁sweatshirt 1 -▁husbands 1 -▁crate 1 -▁Tul 1 -cade 1 -▁consolation 1 -▁happiest 1 -▁hamper 1 -▁Reggie 1 -channel 1 --6) 1 -rge 1 -xo 1 -jpg 1 -▁worms 1 -▁wounding 1 -▁reproduce 1 -▁$39 1 -▁lawful 1 -god 1 -▁draining 1 -▁soy 1 -yar 1 -▁darkest 1 -lack 1 -▁Conversation 1 -▁Episcopal 1 -▁styling 1 -▁pediatrician 1 -▁Sindh 1 -▁gu 1 -greg 1 -2.4 1 -▁hateful 1 -AZ 1 -poli 1 -▁unfolding 1 -▁narratives 1 -▁Consumers 1 -shu 1 -▁Dominion 1 -▁tha 1 -Also 1 -alone 1 -▁Angelo 1 -sense 1 -▁Sher 1 -▁cooled 1 -uca 1 -ados 1 -▁Tobacco 1 -▁collaborating 1 -▁Shanahan 1 -▁calcium 1 -▁supervisors 1 -tv 1 -▁walkway 1 -bber 1 -▁Guest 1 -▁blended 1 -▁Soft 1 -▁1975, 1 -▁Yankee 1 -▁discredit 1 -▁Ska 1 -▁invalid 1 -▁underage 1 -▁Added 1 -Peter 1 -cool 1 -▁sage 1 -▁Sonic 1 -▁nuisance 1 -и 1 -▁decorative 1 -▁Havana 1 -▁sodium 1 -▁Winner 1 -▁oval 1 -▁Farms 1 -hra 1 -▁tilted 1 -▁Sixth 1 -▁Liv 1 -▁Geographic 1 -▁Acosta 1 -▁shortcomings 1 -Adam 1 -▁$65 1 -▁Forrest 1 -▁Lately 1 -1.8 1 -▁Jaime 1 -▁stun 1 -▁wig 1 -▁bravery 1 -▁Schi 1 -LI 1 -▁woodland 1 -:13 1 -▁Clarkson 1 -▁sleeper 1 -▁400,000 1 -▁Rating 1 -▁perched 1 -allow 1 -▁derail 1 -▁establishments 1 -▁galleries 1 -▁repetitive 1 -eration 1 -▁surcharge 1 -▁vying 1 -▁sustaining 1 -▁Sunset 1 -lian 1 -▁deposition 1 -▁5-2 1 -▁6:00 1 -▁awaits 1 -▁mac 1 -▁rethink 1 -kur 1 -▁cherished 1 -▁soften 1 -▁270 1 -▁9% 1 -▁WH 1 -▁repercussions 1 -▁brunette 1 -▁authentication 1 -▁creeping 1 -▁tro 1 -rg 1 -▁habitats 1 -▁gin 1 -▁chanted 1 -▁Ami 1 -cons 1 -saw 1 -chip 1 -▁Suzanne 1 -▁Vita 1 -▁Alpine 1 -ml 1 -▁Esther 1 -▁Twilight 1 -▁reassured 1 -sberg 1 -–4 1 -▁Rabbit 1 -tale 1 -▁haunt 1 -▁FE 
1 -▁Wheat 1 -▁Weston 1 -▁Nordic 1 -▁glitter 1 -▁Nut 1 -▁Buccaneers 1 -▁defamation 1 -▁exquisite 1 -▁knob 1 -▁strawberries 1 -▁neurological 1 -▁whining 1 -▁conquest 1 -▁Edison 1 -▁ordained 1 -▁Lack 1 -▁balances 1 -true 1 -▁Connie 1 -▁Astro 1 -▁Tsu 1 -▁Haas 1 -legged 1 -▁instructors 1 -ifier 1 -▁Tie 1 -oco 1 -▁Rossi 1 -▁Olsen 1 -▁reimbursement 1 --80 1 -▁fiddle 1 -hui 1 -▁Becca 1 -uto 1 -1.2 1 -ometer 1 -▁hoodie 1 -▁rusher 1 -▁hunted 1 -▁Lie 1 -rra 1 -▁Examiner 1 -▁issuance 1 -▁Alyssa 1 -▁Sherlock 1 -▁MacBook 1 -▁entice 1 -▁Lords 1 -▁80- 1 -.16 1 -▁vigorously 1 -▁WWII 1 -▁treasures 1 -ndra 1 -▁myths 1 -▁Cake 1 -▁119 1 -▁clinched 1 -▁nationality 1 -:11 1 -▁Shelton 1 -pun 1 -▁longevity 1 -rier 1 -Chicago 1 -▁Kobe 1 -▁coincide 1 -▁declares 1 -▁1903 1 -▁orchestrated 1 -▁wisely 1 -▁Engel 1 -▁objection 1 -▁eyeing 1 -▁Zan 1 -▁coincided 1 -▁Medina 1 -von 1 -▁roam 1 -▁pervasive 1 -▁$90 1 -lbs 1 -▁selfies 1 -▁customary 1 -▁STEM 1 -▁severed 1 -▁Hua 1 -▁springs 1 -▁RSS 1 -▁Alvin 1 -▁lessen 1 -▁Graves 1 -tica 1 -▁cohort 1 -▁disco 1 -obi 1 -▁HO 1 -ffy 1 -▁Meek 1 -thing 1 -▁washer 1 -▁Calling 1 -▁blouse 1 -▁expulsion 1 -▁counterfeit 1 -Miss 1 -▁Character 1 -▁indulge 1 -▁Skin 1 -▁cylinder 1 -dul 1 -vale 1 -▁collects 1 -▁Olympian 1 -▁mast 1 -move 1 -found 1 -▁SV 1 -▁perks 1 -▁frequencies 1 -▁memorandum 1 -▁spaghetti 1 -▁Taka 1 -▁Cop 1 -▁turtles 1 -tl 1 -boyfriend 1 -Cl 1 -▁Uh 1 -▁foray 1 -▁Cesar 1 -▁Religion 1 -▁Tayyip 1 -▁Ernst 1 -▁jug 1 -▁leapt 1 -Another 1 -▁dismantle 1 -▁Songs 1 -▁disciplines 1 -buck 1 -▁fatty 1 -▁outset 1 -GL 1 -▁overcoming 1 -rp 1 -▁slips 1 -▁donned 1 -▁symptom 1 -▁Herb 1 -▁gems 1 -▁resorts 1 -▁handset 1 -▁overflow 1 -▁Thom 1 -▁(£ 1 -▁Dre 1 -▁reluctance 1 -▁Buchanan 1 -enda 1 -▁fortnight 1 -▁Truman 1 -job 1 -▁depended 1 -▁1976, 1 -Wa 1 -▁Barnett 1 -▁caves 1 -.23 1 -nit 1 -osi 1 -▁1978, 1 -▁126 1 -▁Antarctica 1 -▁PayPal 1 -cke 1 -▁recollection 1 -▁Fitness 1 -▁EUR 1 -fy 1 -ño 1 -event 1 -dri 1 -▁booths 1 -▁Colton 1 -▁diminish 1 -thus 1 -▁vomit 1 -▁misguided 1 -▁Dairy 1 -mple 1 -▁Definitely 1 -▁Iris 1 -▁COO 1 -gos 1 -▁playmaker 1 -borg 1 -▁str 1 -▁helmets 1 -existent 1 -▁Divine 1 -▁snag 1 -plo 1 -▁textbook 1 -hero 1 -▁VM 1 -▁seismic 1 -▁pl 1 -▁crumbling 1 -▁wherein 1 -▁gripping 1 -▁Burgess 1 -▁appalling 1 -▁trapping 1 -▁$1.8 1 -▁BYU 1 -▁Kayla 1 -▁cyclist 1 -▁peep 1 -▁1080 1 -dler 1 -mbo 1 -happy 1 -▁stain 1 -▁gloss 1 -▁Baxter 1 -▁Ocasio 1 -▁Fruit 1 -aron 1 -▁Cullen 1 -private 1 -▁stylist 1 -▁Dress 1 -▁Conduct 1 -sort 1 -▁mileage 1 -▁Hurt 1 -▁PARIS 1 -▁messing 1 -▁improperly 1 -▁pounded 1 -▁1980, 1 -▁definitions 1 -▁stuffing 1 -▁Schiff 1 -▁UV 1 -▁JA 1 -kle 1 -▁tailor 1 -OE 1 -anza 1 -1⁄2 1 -▁pregnancies 1 -▁striving 1 -▁uptick 1 -▁classmate 1 -▁legalize 1 -emo 1 -leader 1 -▁Avoid 1 -WR 1 -▁1969, 1 -▁Fab 1 -▁gra 1 -thinking 1 -▁fearless 1 -▁tremendously 1 -▁Regulation 1 -▁passages 1 -▁DAY 1 -▁horrifying 1 -▁stimulation 1 -▁Contemporary 1 -▁vinegar 1 -▁charismatic 1 -▁systematically 1 -▁straps 1 -▁programmed 1 -tent 1 -▁squares 1 -▁diner 1 -▁bowlers 1 -▁toppled 1 -▁Sed 1 -▁compel 1 -▁Spacey 1 -▁gracious 1 -▁55- 1 -.98 1 -▁profoundly 1 -▁Skripal 1 -▁Coordinator 1 -▁specimens 1 -Despite 1 -RK 1 -▁Tibetan 1 -▁Noor 1 -▁Joanna 1 -▁Fact 1 -▁Lifetime 1 -.34 1 -Gr 1 -▁Mustang 1 -▁dedicate 1 -nnie 1 -▁650 1 -▁avenue 1 -▁Giovanni 1 -▁Judaism 1 -▁precursor 1 -.77 1 -▁ox 1 -▁carve 1 -ination 1 -/6 1 -▁TSA 1 -▁Coyotes 1 -▁Juliet 1 -▁Cage 1 -▁shortened 1 -▁precaution 1 -▁evidenced 1 -dig 1 -▁applaud 1 -▁1965, 1 -▁Prospect 1 -▁trench 1 -▁casket 1 -▁KNOW 1 -gur 1 -major 1 -▁1974, 1 -▁CU 1 -▁stealth 1 
-▁earns 1 -▁mailing 1 -nish 1 -▁Bil 1 -▁sexism 1 -▁Gunners 1 -ctic 1 -▁journeys 1 -▁suppressed 1 -▁peas 1 -RN 1 -.78 1 -▁Ramaphosa 1 -▁insanity 1 -▁knelt 1 -▁plunging 1 -priced 1 -▁commuter 1 -▁GC 1 -▁mutations 1 -▁sup 1 -▁132 1 -yk 1 -ages 1 -▁saturated 1 -odo 1 -bly 1 -▁ATP 1 -▁pirates 1 -qi 1 -▁Roo 1 -▁1977, 1 -▁1971, 1 -▁dams 1 -▁digs 1 -▁Method 1 -3.7 1 -120 1 -▁Engineer 1 -▁palette 1 -▁+1 1 -▁Nic 1 -▁validation 1 -▁analog 1 -.43 1 -▁Traditionally 1 -.47 1 -▁rebounded 1 -▁conquered 1 -▁incorporates 1 -▁1981, 1 -▁1973, 1 -gler 1 -▁cr 1 -▁Randolph 1 -▁ministries 1 -▁onslaught 1 -▁Nutrition 1 -▁Woodland 1 -▁Fig 1 -Cortez 1 -Ok 1 -▁Jeb 1 -2019 1 -overs 1 -iver 1 -▁3.8 1 -▁46- 1 -▁cosmetics 1 -App 1 -▁Written 1 -▁Andreas 1 -▁distractions 1 -▁absentee 1 -cash 1 -▁birdies 1 -▁Sword 1 -▁farmland 1 -▁Sabres 1 -▁Plum 1 -▁$49 1 -▁Carmel 1 -▁Shia 1 -▁tentative 1 -zed 1 -▁constructing 1 -▁bounty 1 -▁Nikkei 1 -▁dungeon 1 -▁lunches 1 -▁ACL 1 -▁Constable 1 -▁openness 1 -▁filthy 1 -▁distorted 1 -▁Broken 1 -▁Eaton 1 -▁echoes 1 -▁deflected 1 -▁Toro 1 -MW 1 -▁convict 1 -▁behaving 1 -▁perpetual 1 -▁Estonia 1 -fen 1 -▁proportional 1 -▁forecasting 1 -▁Supervisor 1 -▁Wizard 1 -cream 1 -▁sediment 1 -▁Christchurch 1 -▁spikes 1 -▁Steph 1 -▁Albanian 1 -▁inflammatory 1 -▁secretive 1 -▁RPG 1 -kun 1 -▁harassing 1 -▁disconnected 1 -▁Sigh 1 -▁giggled 1 -▁Horror 1 -▁bamboo 1 -▁defiance 1 -▁exploding 1 -▁Trinidad 1 -▁Guzman 1 -▁Gruden 1 -▁Kou 1 -▁unborn 1 -▁HK 1 -▁Carla 1 -feet 1 -▁ants 1 -▁exercised 1 -▁GW 1 -▁licking 1 -▁worthless 1 -▁Avenatti 1 -▁camouflage 1 -▁plethora 1 -▁McGrath 1 -▁mould 1 -▁kneeling 1 -▁Grassley 1 -▁Savior 1 -▁sushi 1 -▁Klan 1 -▁plc 1 -▁penetrate 1 -▁1986. 1 -▁precautions 1 -▁vagina 1 -▁latch 1 -SK 1 -▁Pon 1 -eke 1 -▁Goodwin 1 -▁Mess 1 -.13 1 -▁Patch 1 -▁Helsinki 1 -▁hydraulic 1 -▁Graduate 1 -▁Diamondbacks 1 -▁Pearce 1 -▁stature 1 -▁Desmond 1 -▁Headquarters 1 -sound 1 -▁fullback 1 -▁Bowen 1 -▁Aidan 1 -▁04 1 -▁3.3 1 -▁EVER 1 -▁Bake 1 -▁plantation 1 -▁nods 1 -▁Nguyen 1 -▁Machado 1 -does 1 -▁swimmer 1 -▁antitrust 1 -▁Everest 1 -umb 1 -▁NAB 1 -▁Alas 1 -"? 1 -▁banquet 1 -fighting 1 -▁NGOs 1 -▁Bac 1 -▁Kidd 1 -anger 1 -young 1 -▁lateral 1 -ime 1 -▁conveyed 1 -▁Analytics 1 -▁wrath 1 -▁Fulham 1 -▁coordinating 1 -FB 1 -▁pies 1 -▁cider 1 -▁Applications 1 -▁heartbroken 1 -▁Shelley 1 -mara 1 -▁Firm 1 -oul 1 ----------------- 1 -▁grievances 1 -▁Bing 1 -▁devised 1 -▁Erica 1 -eat 1 -lop 1 -▁lineage 1 -▁Naga 1 -▁AHL 1 -▁kilograms 1 -vit 1 -▁coworkers 1 -▁punitive 1 -▁parody 1 -HI 1 -▁Trouble 1 -▁detour 1 -▁mistress 1 -▁shone 1 -alis 1 -▁trustee 1 -operative 1 -▁Flower 1 -pla 1 -▁Rei 1 -▁ion 1 -WW 1 -▁whine 1 -▁Zambia 1 -▁5:00 1 -▁pivot 1 -▁thy 1 -lite 1 -lani 1 -▁Md 1 -▁atheist 1 -▁modification 1 -▁Anglican 1 -▁CLICK 1 -▁grandpa 1 -▁Postal 1 -▁showered 1 -▁Southeastern 1 -▁UW 1 -▁casualty 1 --7) 1 -bats 1 -▁Mane 1 -▁typed 1 -▁lib 1 -▁£10 1 -▁acclaim 1 -BY 1 -▁clinging 1 -▁prudent 1 -▁1902 1 -▁hawk 1 -▁leases 1 -▁UPS 1 -modern 1 -▁truce 1 -▁meteorologist 1 -▁colonel 1 -▁Doll 1 -▁Ind 1 -▁Daryl 1 -▁Patients 1 -▁demographics 1 -250 1 -WP 1 -▁exploits 1 -▁milestones 1 -oud 1 -▁Franz 1 -▁Albuquerque 1 -▁Merchant 1 -▁stink 1 -▁Simone 1 -▁Smash 1 -▁(27 1 -III 1 -▁Demi 1 -extremely 1 -▁caveat 1 -▁dismal 1 -▁crystals 1 -▁Clan 1 -Kim 1 -▁DT 1 -lant 1 -laughs 1 -▁jealousy 1 -▁manageable 1 -▁OTHER 1 -▁brunch 1 -mmy 1 -▁faults 1 -▁??? 
1 -traditional 1 -▁dissolve 1 -10,000 1 -▁dispatcher 1 -▁proponents 1 -▁HU 1 -HT 1 -▁carmaker 1 -.86 1 -hip 1 -▁User 1 -▁wartime 1 -▁buyout 1 -▁Gym 1 -:16 1 -▁capacities 1 -▁Idlib 1 -▁matrix 1 -▁fingertips 1 -▁blitz 1 -▁Lottery 1 -▁bandwidth 1 -▁Workshop 1 -▁Server 1 -▁Seed 1 -▁receptor 1 -ail 1 -mut 1 -▁tainted 1 -▁Prices 1 -▁Sib 1 -▁Tat 1 -ior 1 -AY 1 -▁childcare 1 -▁follower 1 -▁calming 1 -Long 1 -▁cavalry 1 -▁hospice 1 -ANT 1 -▁Address 1 -▁Harmon 1 -cic 1 -▁hampered 1 -Post 1 -▁culturally 1 -▁blasting 1 -▁influencing 1 -▁Admission 1 -▁Betsy 1 -▁ceramic 1 -▁fielding 1 -▁campground 1 -▁Rebel 1 -▁Ear 1 -▁Embed 1 -▁jerseys 1 -designed 1 -▁SIM 1 -▁Sinai 1 -▁CFL 1 -▁reactors 1 -nagar 1 -▁Lithuania 1 -raising 1 -dou 1 -▁safest 1 -1.4 1 -ease 1 -▁navigating 1 -▁squeezing 1 -▁fertile 1 -▁Villanova 1 -▁fray 1 -▁Corrections 1 -usi 1 -▁scripture 1 -▁plumbing 1 -▁quartet 1 -▁pacing 1 -▁tidy 1 -eka 1 -▁reef 1 -▁inhabited 1 -▁tread 1 -▁curly 1 -release 1 -/7 1 -▁SAP 1 -▁Lucia 1 -▁shin 1 -▁Parts 1 -▁Worcester 1 -▁Beyoncé 1 -▁ancestry 1 -▁guarding 1 -▁twentieth 1 -▁Griffith 1 -▁dugout 1 -▁muted 1 -▁landfall 1 -▁OP 1 -▁sided 1 -hol 1 -▁stump 1 -▁malls 1 -▁ratios 1 -▁brightness 1 -▁Doe 1 -▁122 1 -▁herd 1 -▁caregivers 1 -▁monsoon 1 -▁bipolar 1 -▁Zurich 1 -▁beacon 1 -▁Mao 1 -▁1984. 1 -▁1990. 1 -▁plateau 1 -▁Papua 1 -▁payload 1 -▁Copy 1 -kas 1 -▁treasury 1 -▁arranging 1 -▁Quincy 1 -tler 1 -▁Kuala 1 -▁Prop 1 -▁Reeves 1 -▁chop 1 -esteem 1 -▁oils 1 -▁$48 1 -▁derailed 1 -1_ 1 -▁PH 1 -▁nationalists 1 -▁palms 1 -tru 1 -▁staunch 1 -.27 1 -hane 1 -▁DUI 1 -▁sane 1 -▁Equipment 1 -▁tumour 1 -▁Magnus 1 -▁Elle 1 -▁Peterborough 1 -OA 1 -▁String 1 -▁Throw 1 -abortion 1 -▁mailed 1 -▁resembled 1 -▁Carly 1 -pee 1 -Grand 1 -nning 1 -▁anyways 1 -vari 1 -▁Namibia 1 -draft 1 -▁specs 1 -▁holistic 1 -▁Sang 1 -▁bicycles 1 -▁cocoa 1 -▁Gould 1 -▁Evidence 1 -▁Andersen 1 -▁grooming 1 -▁fortunately 1 -▁Bunny 1 -▁recharge 1 -tens 1 -▁Sud 1 -▁hillside 1 -▁drawers 1 -▁upscale 1 -▁02 1 -▁domains 1 -▁bob 1 -▁Ferdinand 1 -▁Literature 1 -▁Opportunity 1 -▁amnesty 1 -▁unbearable 1 -▁Whitaker 1 -▁sponge 1 -▁Regular 1 -▁boiler 1 -heat 1 -▁165 1 -Women 1 -▁Donaldson 1 -▁Vest 1 -▁giggle 1 -ARD 1 -▁checkout 1 -quo 1 -▁fiancé 1 -▁digitally 1 -▁rejects 1 -▁Offer 1 -▁PST 1 -▁slider 1 -▁Fiji 1 -▁clutching 1 -▁grappling 1 -▁whirlwind 1 -▁bathtub 1 -▁mildly 1 -Germain 1 -▁grenade 1 -▁planners 1 -▁chord 1 -▁Tess 1 -pose 1 -save 1 -Market 1 -▁Dyke 1 -▁Banner 1 -å 1 -▁Rowan 1 -▁scalp 1 -▁fr 1 -▁McLaughlin 1 -▁benign 1 -▁Putting 1 -▁Password 1 -lost 1 -▁Telecom 1 -▁Wie 1 -▁illuminated 1 -▁purge 1 -orn 1 -▁scissors 1 -▁screenings 1 -▁babe 1 -Health 1 -▁alliances 1 -▁feasibility 1 -▁Huffington 1 -▁integrating 1 -▁Protesters 1 -▁Dreams 1 -▁121 1 -▁Adele 1 -▁sap 1 -▁compile 1 -▁Anyways 1 -▁persisted 1 -▁paws 1 -▁instituted 1 -erra 1 -▁McKe 1 -▁outlaw 1 -kumar 1 -▁Quin 1 -▁disagreements 1 -ă 1 -▁tumultuous 1 -▁McKenna 1 -▁comfy 1 -▁Mackay 1 -▁probing 1 -▁Birds 1 -▁Ferry 1 -chal 1 -▁Landing 1 -▁overdoses 1 -Beat 1 -drug 1 -▁disgusted 1 -▁EMS 1 -▁pe 1 -▁2023 1 -▁1988. 
1 -▁Gail 1 -▁Moo 1 -Max 1 -▁borne 1 -▁reelection 1 -▁diarrhea 1 -▁intrusion 1 -▁melody 1 -▁diversified 1 -info 1 -▁Taj 1 -▁convertible 1 -▁Baird 1 -▁tyres 1 -Tell 1 -ulator 1 -▁Fle 1 -▁Blast 1 -▁[12] 1 -atta 1 -▁ga 1 -▁documenting 1 -▁Arun 1 -▁MH 1 -▁McGee 1 -uku 1 -▁setbacks 1 -▁knight 1 -tak 1 -BER 1 -▁64- 1 -▁proclamation 1 -▁superficial 1 -▁fetal 1 -mag 1 -sproportionately 1 -▁championed 1 -▁pixel 1 -▁submarines 1 -▁Cheney 1 -wang 1 -▁spatial 1 -▁nurture 1 -▁Harding 1 -▁watershed 1 -carbon 1 -▁mal 1 -▁smokers 1 -▁tonne 1 -▁bribe 1 -▁mustard 1 -escu 1 -esse 1 -▁3.7 1 -▁CVS 1 -awi 1 -▁Cos 1 -▁roaring 1 -▁Cummins 1 -▁parachute 1 -▁Reverend 1 -▁Chopra 1 -▁Anthem 1 -▁vitamins 1 -▁reconcile 1 -▁Wolverines 1 -▁Gender 1 -nix 1 -▁insulation 1 -▁Manson 1 -▁blackout 1 -▁Irwin 1 -▁operative 1 -▁Wigan 1 -▁Status 1 -▁Been 1 -Featured 1 -OB 1 -▁inflated 1 -▁ushered 1 -▁cookbook 1 -SEC 1 -▁Pod 1 -▁recorder 1 -Year 1 -▁Spy 1 -Ve 1 -▁claw 1 -▁suspense 1 -▁nicotine 1 -▁complied 1 -▁exceeds 1 -▁Isabella 1 -▁Kiwi 1 -▁Snake 1 -▁mag 1 -Bu 1 -▁4.2 1 -▁Pala 1 -▁bud 1 -▁redesign 1 -▁Gat 1 -▁glued 1 -▁frustrations 1 -VM 1 -ott 1 -▁Interview 1 -▁Carlton 1 -these 1 -▁approvals 1 -▁lunchtime 1 -▁Amtrak 1 -▁Cherokee 1 -▁reversing 1 -▁Helm 1 -▁ecosystems 1 -▁rocker 1 -▁Points 1 -Chief 1 -▁discriminate 1 -▁Congratulations 1 -▁garment 1 -▁Gin 1 -bble 1 -▁sandy 1 -▁Crypto 1 -▁curtail 1 -training 1 -▁nominal 1 -▁Tail 1 -▁Measure 1 -▁coil 1 -▁statistic 1 -▁gram 1 -▁Forecast 1 -centre 1 -▁analogy 1 -▁meth 1 -▁crane 1 -▁Meadow 1 -▁cur 1 -▁Zac 1 -▁surgeons 1 -▁Belmont 1 -▁ominous 1 -▁infusion 1 -▁paints 1 -ody 1 -▁autobiography 1 -▁sloppy 1 -▁DW 1 -▁bunny 1 -▁Nexus 1 -▁Janeiro 1 -▁playwright 1 -▁crackers 1 -▁Janata 1 -▁Boot 1 -▁tuning 1 -▁bending 1 -9.5 1 -dock 1 -▁Dag 1 -▁Judith 1 -▁vacated 1 -▁Hedge 1 -▁Solid 1 -▁prose 1 -Los 1 -ANG 1 -▁Saban 1 -grass 1 -▁Collier 1 -▁coolest 1 -▁Chau 1 -etto 1 -▁64, 1 -▁unravel 1 -▁rhyme 1 -▁barracks 1 -▁medicinal 1 -▁waterfall 1 -jit 1 -▁imperfect 1 -▁Pamela 1 -▁Sena 1 -▁Terrace 1 -odd 1 -▁twenties 1 -▁Videos 1 -breaker 1 -▁rentals 1 -▁sketches 1 -▁Zika 1 -▁Vir 1 -yal 1 -▁Allow 1 -▁Boise 1 -▁Sharing 1 -chemical 1 -▁riff 1 -▁Journalism 1 -nath 1 -▁Aw 1 -▁Vit 1 -▁Nano 1 -▁bigotry 1 -▁fashionable 1 -▁Najib 1 -▁guise 1 -▁1983, 1 -pas 1 -▁grieve 1 -▁gladly 1 -▁Colony 1 -▁44- 1 -▁Pull 1 -▁$44 1 -▁Tiny 1 -▁Levin 1 -rog 1 -▁nightly 1 -▁Kelowna 1 -▁fantasies 1 -▁HSBC 1 -▁decidedly 1 -▁tallied 1 -ckle 1 -▁furry 1 -Jesus 1 -▁sling 1 -▁motorway 1 -▁recycle 1 -direct 1 -▁Feed 1 -jun 1 -▁statistically 1 -weed 1 -▁fry 1 -▁pudding 1 -PER 1 -▁Adolf 1 -▁algae 1 -▁motorbike 1 -▁Dorsey 1 -▁Cyril 1 -▁Cayman 1 -pus 1 -▁minded 1 -▁atoms 1 -▁malfunction 1 -▁favourable 1 -▁adolescent 1 -▁dogged 1 -▁validate 1 -▁heartbreak 1 -▁1985. 
1 -js 1 -▁nervously 1 -▁cub 1 -▁Certain 1 -▁Clive 1 ---- 1 -▁addicts 1 -▁11% 1 -▁Peach 1 -▁McIlroy 1 -▁collegiate 1 -▁contagious 1 -▁Famous 1 -another 1 -▁pilgrimage 1 -▁sandals 1 -▁Lal 1 -misunderstanding 1 -support 1 -▁affordability 1 -▁Clinical 1 -▁Ages 1 -▁Understanding 1 -shoot 1 -2.2 1 -▁Mana 1 -▁Nag 1 -▁fiance 1 -▁Moe 1 -▁jewel 1 -released 1 -▁regimes 1 -▁discontent 1 -chat 1 -▁commune 1 -▁ninety 1 -▁factual 1 -▁rigged 1 -▁Glover 1 -hia 1 -ahan 1 -▁Archives 1 -▁Halo 1 -▁theoretically 1 -coach 1 -▁Sponsor 1 -▁Marxist 1 -▁Thought 1 -▁licked 1 -polis 1 -dden 1 -▁compost 1 -▁upbringing 1 -▁Dundee 1 -▁Saddam 1 -▁repression 1 -▁Scholarship 1 -▁Wakefield 1 -▁ironically 1 -▁210 1 -main 1 -▁Britt 1 -▁Landry 1 -▁sanction 1 -▁144 1 -▁Hari 1 -▁crank 1 -▁detain 1 -▁ginger 1 -▁blush 1 -▁Pir 1 -▁drown 1 -–6 1 -▁Occasionally 1 -bah 1 -▁sect 1 -uan 1 -gno 1 -▁cellar 1 -▁grape 1 -▁medically 1 -▁Signal 1 -criminal 1 -nak 1 -▁armour 1 -2.6 1 -▁shouts 1 -▁raffle 1 -▁logistical 1 -▁incarnation 1 -KS 1 -ffin 1 -▁ante 1 -▁Thur 1 -▁mosquitoes 1 -leave 1 -KO 1 -▁whispers 1 -▁Mart 1 -▁Serb 1 -▁[11] 1 -▁DID 1 -▁Peoples 1 -▁Ley 1 -▁dinners 1 -▁sailor 1 -ception 1 -▁coined 1 -▁runaway 1 -▁welcomes 1 -Ge 1 -▁implant 1 -▁peppers 1 -▁Numerous 1 -▁lodging 1 -▁pavilion 1 -▁Celebration 1 -▁entrenched 1 -▁understatement 1 -▁overseen 1 -▁poetic 1 -▁fruition 1 -struct 1 -▁Polo 1 -▁· 1 -.41 1 -▁Shri 1 -▁lobbyist 1 -oto 1 -▁gunshots 1 -rack 1 -▁pans 1 -wat 1 -▁Surveillance 1 -loaded 1 -▁distortion 1 -:14 1 -lord 1 -▁revamped 1 -boards 1 -nas 1 -div 1 -▁metre 1 -▁Guru 1 -2018 1 -▁dysfunction 1 -▁McCartney 1 -▁Pavilion 1 -▁priceless 1 -▁doubtful 1 -▁synthesis 1 -▁retrospect 1 -▁VII 1 -▁pawn 1 -▁Sonia 1 -▁kayak 1 -▁lurking 1 -▁eh 1 -▁($4 1 -▁worsened 1 -▁cornerstone 1 -▁Historically 1 -▁tre 1 -▁tracker 1 -▁4.4 1 -10. 1 -inator 1 -▁usher 1 -ASH 1 -▁Investigations 1 -▁Avalanche 1 -▁Voting 1 -served 1 -▁Separately 1 -▁Langley 1 -NU 1 -▁Bain 1 -▁Assam 1 -▁ignited 1 -▁LCD 1 -▁defenceman 1 -▁PL 1 -mg 1 -ution 1 -▁Guerrero 1 -▁Moroccan 1 -▁substantive 1 -▁Sabrina 1 -▁Cock 1 -▁Nim 1 -bio 1 -▁esta 1 -Wi 1 -▁Effect 1 -alu 1 -▁Readers 1 -▁39, 1 -▁Bard 1 -▁thickness 1 -▁Orion 1 -▁oversized 1 -rush 1 -.76 1 -▁bulletin 1 -▁frown 1 -▁hotter 1 -▁facilitated 1 -▁Bacon 1 -▁Physics 1 -▁veterinary 1 -▁Beyonce 1 -▁Geoffrey 1 -▁spacious 1 -▁Enlarge 1 -▁Fill 1 -▁expands 1 -▁Stro 1 -▁Slim 1 -▁Custom 1 -▁Brees 1 -▁tuna 1 -▁qua 1 -Keep 1 -▁disorderly 1 -▁wooded 1 -▁Spieth 1 -▁Moreno 1 -▁exits 1 -▁baker 1 -▁Vampire 1 -▁Pepsi 1 -▁06 1 -▁revise 1 -▁Kinder 1 -▁$1.7 1 -▁counselling 1 -▁simplify 1 -▁interpretations 1 -▁wrists 1 -hitter 1 -▁"... 
1 -▁jelly 1 -▁arraigned 1 -▁mischief 1 -▁Anchorage 1 -Welcome 1 -above 1 -▁righteous 1 -▁Payton 1 -▁(24 1 -▁2050 1 -3.3 1 -▁comrades 1 -▁Performing 1 -Indian 1 -Sal 1 -illy 1 -▁villains 1 -cate 1 -▁dismissing 1 -▁treason 1 -▁goose 1 -▁summon 1 -▁wo 1 -ids 1 -▁differing 1 -ér 1 -atu 1 -▁Humboldt 1 -▁McDermott 1 -▁NAACP 1 -▁Claudia 1 -▁Norwich 1 -pressed 1 -▁Zachary 1 -▁paving 1 -▁accommodations 1 -prop 1 -▁breeds 1 -▁overshadowed 1 -▁replaces 1 -▁breakaway 1 -▁Neuro 1 -glo 1 -▁assassin 1 -▁Benny 1 -▁strand 1 -▁hind 1 -.62 1 -▁Lunch 1 -dorf 1 -▁5-4 1 -▁Cary 1 -▁snail 1 -▁prized 1 -▁Dock 1 -▁Cookie 1 -▁contemplate 1 -join 1 -▁Tracking 1 -umble 1 -▁Recording 1 -▁Akin 1 -▁watering 1 -▁mop 1 -▁widen 1 -▁127 1 -8.00 1 -▁snatch 1 -famous 1 -▁bouts 1 -▁Supplied 1 -▁resurgence 1 -▁Doncaster 1 -▁obsolete 1 -▁swimmers 1 -▁Strictly 1 -paced 1 -▁Psalm 1 -▁Isabel 1 -▁ISO 1 -▁Mang 1 -phon 1 -mostly 1 -▁alphabet 1 -lag 1 -▁inactive 1 -▁unleash 1 -▁bombed 1 -▁placebo 1 -▁astounding 1 -▁congratulations 1 -▁Slideshow 1 -▁Jarrett 1 -▁YA 1 -▁Jax 1 -▁Keystone 1 -wards 1 -uth 1 -▁biscuits 1 -▁80,000 1 -▁seventy 1 -▁Pictured 1 -▁Bonn 1 -▁Sah 1 -▁negotiators 1 -▁$30,000 1 -correct 1 -▁Malibu 1 -▁Syed 1 -▁Dividend 1 -▁wield 1 -▁bred 1 -lei 1 -▁adorned 1 -▁revert 1 -▁Connection 1 -▁Masa 1 -▁McCo 1 -bir 1 -▁confidently 1 -▁McM 1 -tested 1 -▁Fitzpatrick 1 -▁Nursing 1 -▁Lumpur 1 -▁provocation 1 -▁handshake 1 -bull 1 -▁handmade 1 -▁apron 1 -wrote 1 -▁whisk 1 -▁shoreline 1 -▁inspires 1 -ents 1 -▁greedy 1 -:18 1 -esa 1 -▁Khu 1 -▁Went 1 -photo 1 -▁sus 1 -▁prostitutes 1 -▁35% 1 -▁Ober 1 --31 1 -ryn 1 -▁outweigh 1 -Young 1 -▁turbulent 1 -▁Extension 1 -▁Hague 1 -▁Lit 1 -▁ballpark 1 -▁mutant 1 -itter 1 -▁complicate 1 -▁decreases 1 -▁Adel 1 -▁Ballet 1 -inen 1 -▁Angle 1 -▁tho 1 -took 1 -▁Fro 1 -▁trot 1 -▁depiction 1 -▁Pray 1 -▁override 1 -▁downloading 1 -▁Platinum 1 -▁Smollett 1 -▁prohibiting 1 -▁Hum 1 -▁jihadists 1 -▁eerie 1 -duct 1 -▁pixels 1 -▁pleasantly 1 -▁shameful 1 -▁airplanes 1 -▁Sands 1 -▁Hmm 1 -▁shack 1 -weak 1 -▁engages 1 -▁Cau 1 -dur 1 -▁Hands 1 -▁$800 1 -ces 1 -▁tack 1 -mination 1 -Play 1 -▁pesticides 1 -▁facilitating 1 -▁prosecuting 1 -▁calves 1 -▁Sacred 1 -▁breweries 1 -▁resurrection 1 -▁brilliantly 1 -▁Disorder 1 -▁disbanded 1 -▁deliberations 1 -▁litre 1 -.42 1 -▁Brig 1 -logical 1 -▁Pace 1 -anya 1 -dependent 1 -▁Bron 1 -▁transitional 1 -nese 1 -iter 1 -▁Welles 1 -▁Resistance 1 -▁modular 1 -▁hoop 1 -▁impoverished 1 -▁Atkinson 1 -▁stacks 1 -▁expectancy 1 -DU 1 -▁Patton 1 -▁disappears 1 -▁rescuers 1 -▁Marian 1 -▁firsthand 1 -hman 1 -▁scooped 1 -▁ledge 1 -▁prick 1 -▁Tears 1 -gene 1 -Class 1 -▁crippling 1 -▁executions 1 -▁airborne 1 --2, 1 -▁Bray 1 -▁peach 1 -▁affirmed 1 -▁groaned 1 -▁Pub 1 -▁Rav 1 -grand 1 -utter 1 -▁invoked 1 -▁Bold 1 -2.7 1 -▁Sixers 1 -▁angst 1 -▁Cadillac 1 -▁Cumberland 1 -▁Gymnastics 1 -▁abdominal 1 -▁autistic 1 -dium 1 -▁mosques 1 -▁grinning 1 -Line 1 -▁Favorite 1 -▁graveyard 1 -▁Request 1 -ously 1 -morning 1 -▁adapting 1 -pid 1 -▁Highlights 1 -▁Barrow 1 -rá 1 -▁novelty 1 -▁tendon 1 -tham 1 -plain 1 -crime 1 -daughter 1 -governmental 1 -▁wrestlers 1 -▁downfall 1 -Ann 1 -▁RNA 1 -▁chap 1 -▁ref 1 -▁humiliating 1 -▁spiked 1 -▁Wilkinson 1 -▁connector 1 -▁orderly 1 -connected 1 -▁endemic 1 -hom 1 -▁Apr 1 -▁Vivian 1 -▁notched 1 -▁Zee 1 -▁Silent 1 -▁acquaintance 1 -▁enzyme 1 -rino 1 -▁crocodile 1 -▁fraught 1 -1.9 1 -▁knowingly 1 -▁mantle 1 -▁2023. 
1 -▁ethanol 1 -▁Sig 1 -▁Brave 1 -▁echoing 1 -▁progresses 1 -▁hallmark 1 -comp 1 -▁fluent 1 -▁Responding 1 -Ya 1 -▁circuits 1 -▁hiked 1 -directed 1 -Dan 1 -sters 1 -▁intolerance 1 -▁deteriorating 1 -▁inclination 1 -▁caramel 1 -▁hindsight 1 -:01 1 -▁framing 1 -cious 1 -▁Sevilla 1 -▁Prasad 1 -tribu 1 -▁steroids 1 -▁Gallup 1 -▁listener 1 -▁Drugs 1 -▁Stacy 1 -▁cleansing 1 -Sunday 1 -▁seamlessly 1 -.68 1 -▁Mono 1 -▁Nur 1 -▁lest 1 -▁PU 1 -▁Combined 1 -▁vacations 1 -▁excerpt 1 -▁Eb 1 -▁acronym 1 -▁pragmatic 1 -▁metabolic 1 -▁Percy 1 -▁Nie 1 -▁FS 1 -▁Vander 1 -▁Dep 1 -▁circled 1 -Figure 1 -▁Hunger 1 -▁Irene 1 -▁scrape 1 -▁Thatcher 1 -logist 1 -▁Sadie 1 -▁lured 1 -▁outbreaks 1 -Nov 1 -wl 1 -▁Sanford 1 -▁shortest 1 -î 1 -▁Ubuntu 1 -denuclearization 1 -▁Clerk 1 -▁Townsend 1 -▁stemmed 1 -▁Missing 1 -uti 1 -▁skid 1 -▁campsite 1 -lander 1 -▁4.1 1 -lf 1 -▁Pak 1 -fine 1 -▁pals 1 -▁Soldier 1 -Mary 1 -▁bursts 1 -▁70,000 1 -▁mutation 1 -▁Ame 1 -▁5: 1 -▁1898 1 -▁Abuse 1 -▁liaison 1 -▁Kelsey 1 -▁extortion 1 -▁Mosque 1 -pul 1 -pil 1 -▁marquee 1 -▁Highlands 1 -▁clenched 1 -▁girlfriends 1 -▁Pho 1 -▁Agnes 1 -▁straightened 1 -▁attained 1 -▁Sparks 1 -apo 1 -ried 1 -▁packets 1 -▁futile 1 -defined 1 -▁Pandora 1 -▁curry 1 -▁adjourned 1 -iate 1 -ppe 1 -▁advancements 1 -▁Yun 1 -▁shuffle 1 -▁differs 1 -▁Musa 1 -absolutely 1 -wala 1 -▁bureaucratic 1 -tino 1 -mah 1 -▁(5) 1 -▁spurt 1 -▁disturb 1 -▁chicks 1 -▁Cel 1 -▁futuristic 1 -▁throttle 1 -▁referencing 1 -▁Antoine 1 -▁Airline 1 -▁Random 1 -▁viewpoint 1 -▁genetics 1 -gia 1 -online 1 -▁creamy 1 -▁mash 1 -▁standardized 1 -limit 1 -▁Nicaragua 1 -▁1850 1 -▁exploiting 1 -.89 1 -▁trump 1 -▁ticked 1 -▁versatility 1 -▁refining 1 -▁bookings 1 -▁Strauss 1 -▁clot 1 -▁unintended 1 -▁Cyclone 1 -▁BREAKING 1 -▁forwarded 1 -▁Glory 1 -▁openings 1 -▁goofy 1 -Which 1 -▁airspace 1 -▁redshirt 1 -▁und 1 -▁alloy 1 -▁Yar 1 -▁cultivated 1 -akis 1 -▁Hav 1 -cca 1 -▁bouquet 1 -▁Ramadan 1 -▁Armour 1 -kawa 1 -▁Ballard 1 -sometimes 1 -▁Ally 1 -▁Pound 1 -▁Notably 1 -▁Sap 1 -▁convened 1 -▁Adventures 1 -▁timeout 1 -▁orchard 1 -▁disrespectful 1 -▁mare 1 -▁KU 1 -▁Distribution 1 -▁syringe 1 -▁appease 1 -▁Battery 1 -▁Speech 1 -▁Term 1 -▁Gemma 1 -aza 1 -religious 1 -▁Partner 1 -worst 1 -▁Mik 1 -▁residences 1 -▁coop 1 -worm 1 -chenko 1 -▁Railways 1 -▁harass 1 -▁guild 1 -▁TRUMP 1 -▁Winfrey 1 -▁Presidency 1 -▁taboo 1 -▁1982. 1 -▁thyroid 1 -tat 1 -...... 1 -▁haze 1 -▁formulation 1 -ugu 1 -remember 1 -▁BR 1 -3.6 1 -▁LTE 1 -▁4-6 1 -izes 1 -▁Rig 1 -▁Makes 1 -▁Prep 1 -▁9,000 1 -▁erratic 1 -▁Timberwolves 1 -▁Ivory 1 -▁emoji 1 -▁inflict 1 -maconda 1 -▁documentaries 1 -▁dusk 1 -▁Rahman 1 -▁mandates 1 -:12 1 -▁Turks 1 -Mor 1 -▁Published 1 -▁Shea 1 -▁seam 1 -▁Aziz 1 -▁milli 1 -▁hotline 1 -▁poignant 1 -▁Samaritan 1 -▁Accountability 1 -▁faux 1 -▁TPP 1 -▁Grim 1 -▁agitated 1 -▁exchanging 1 -▁inquire 1 -▁railways 1 -▁surrounds 1 -▁turbo 1 -▁apt 1 -litre 1 -▁textbooks 1 -▁Quad 1 -▁Treat 1 -▁affiliates 1 -▁Greatest 1 -▁1896 1 -eme 1 -▁familiarity 1 -▁Vega 1 -▁Slow 1 -▁coated 1 -bbi 1 -▁Pochettino 1 -▁lobbied 1 -▁mitigation 1 -▁Sawyer 1 -▁complimentary 1 -▁informative 1 -▁horrendous 1 -▁exporters 1 -▁Shelter 1 -ê 1 -▁nun 1 -▁Regulatory 1 -▁stamped 1 -sister 1 -▁Rath 1 -▁Sense 1 -signing 1 -▁1987. 
1 -Baby 1 -▁fri 1 -▁Locke 1 -▁col 1 -emia 1 -▁curls 1 -▁Gut 1 --90 1 -▁captivity 1 -▁resembling 1 -Would 1 -▁legislator 1 -▁Harlem 1 -▁Tumblr 1 -▁150,000 1 -▁Mazda 1 -▁rave 1 -$4 1 -▁Howell 1 -▁Borders 1 -▁Rein 1 -▁unilateral 1 -▁Equal 1 -▁BEIJING 1 -▁Sustainable 1 -▁embattled 1 -▁obligated 1 -▁tedious 1 -▁Expedition 1 -▁ecstatic 1 -▁Knowledge 1 -▁blazing 1 -tek 1 -▁broom 1 -▁funk 1 -▁invoke 1 -▁cleric 1 -counter 1 -▁iteration 1 -▁Xu 1 -▁directory 1 -▁TW 1 -▁Neg 1 -▁federally 1 -NZ 1 -▁Found 1 -weather 1 -▁Stephenson 1 -SW 1 -▁Vet 1 -▁Reliance 1 -▁untouched 1 -▁Dudley 1 -▁Jacqueline 1 -▁WTO 1 -▁Galway 1 -▁Flow 1 -▁cigar 1 -/17 1 -quire 1 -judge 1 -▁fouled 1 -▁hustle 1 -▁HI 1 -shing 1 -zio 1 -fla 1 -zin 1 -▁trickle 1 -uffer 1 -▁Lyle 1 -nzo 1 -▁YMCA 1 -▁alligator 1 -▁fracking 1 -.79 1 -▁Dianne 1 -▁Database 1 -▁trolley 1 -▁witty 1 -▁frail 1 -▁Crowley 1 -▁loot 1 -▁vow 1 -▁CPS 1 -▁Denny 1 -▁postpone 1 -INT 1 -▁Cay 1 -▁Bam 1 -bench 1 -ridden 1 -▁cube 1 -/18 1 -qa 1 -▁Choi 1 -yp 1 -▁License 1 -▁asphalt 1 -▁hypocrisy 1 -▁pristine 1 -▁testosterone 1 -▁Hubbard 1 -▁disposition 1 -ardo 1 -ively 1 -KP 1 -▁Suicide 1 -▁stereotype 1 -▁cultivate 1 -▁cupboard 1 -▁1600 1 -▁Welch 1 -▁reissued 1 -6.7 1 -▁Lamp 1 -▁spawned 1 -▁Sputnik 1 -unga 1 -▁Maz 1 -▁stationary 1 -mul 1 -▁fuelled 1 -Talk 1 -▁outfield 1 -▁cliffs 1 -▁ven 1 -▁mastermind 1 -▁whichever 1 -▁burgeoning 1 -▁withholding 1 -▁Talbot 1 -▁marginalized 1 -▁Hel 1 -▁formations 1 -According 1 -Canadian 1 -▁accusers 1 -▁Aiden 1 -▁weaving 1 -▁YES 1 -▁hu 1 -▁pra 1 -▁precarious 1 -▁Eid 1 -▁BOSTON 1 -▁Monsanto 1 -▁dictionary 1 -▁synonymous 1 -▁centrist 1 -▁Reason 1 -360 1 -:22 1 -▁rout 1 -▁flawless 1 -▁careless 1 -LAND 1 -▁fooled 1 -megapixel 1 -▁menus 1 -▁disrupting 1 -▁Fake 1 -▁CDs 1 -▁theology 1 -▁forbid 1 -CON 1 -▁1897 1 -▁deprivation 1 -▁metabolism 1 -▁Jefferies 1 -▁UNESCO 1 -▁budding 1 -▁Mitsubishi 1 -▁hooks 1 -▁Whip 1 -▁thicker 1 -▁Crossing 1 -▁Kle 1 -▁marina 1 -▁Bronze 1 -▁forcefully 1 -0.0 1 -▁understandably 1 -fon 1 -▁camper 1 -▁Luca 1 -▁soar 1 -NB 1 -▁Parent 1 -:17 1 -▁takeaway 1 -′ 1 -▁Saskatoon 1 -▁chimney 1 -▁questionnaire 1 -▁Sor 1 -▁est 1 -▁sensing 1 -▁royalties 1 -▁mastered 1 -▁implication 1 -▁depleted 1 -:23 1 -▁Zin 1 -▁Kristin 1 -▁bri 1 -▁(23 1 -▁Cooke 1 -▁Jewel 1 -Bob 1 -▁boon 1 -▁MOSCOW 1 -▁controversies 1 -▁evasion 1 -▁WM 1 -▁Mozilla 1 -▁Childhood 1 -▁bode 1 -tsu 1 -▁squarely 1 -▁Hotels 1 -▁Merri 1 -▁naps 1 -▁duplicate 1 -▁Pulse 1 -▁ministerial 1 -zzle 1 -▁crossings 1 -▁bliss 1 -esco 1 -disproportionate 1 -▁Apache 1 -▁baths 1 -▁Memory 1 -▁lament 1 -▁WiFi 1 -▁starvation 1 -croft 1 -▁Bridget 1 -▁customization 1 -▁revisions 1 -▁disgrace 1 -▁JJ 1 -lasting 1 -▁needy 1 -▁Hab 1 -▁risked 1 -▁WC 1 -▁Pun 1 -cil 1 -▁DUP 1 -rai 1 -▁polymer 1 -▁Batt 1 -▁utilization 1 -▁Paxton 1 -▁Exhibition 1 -▁toddlers 1 -▁blasts 1 -rena 1 -▁terminals 1 -biggest 1 -▁migrate 1 -▁tripped 1 -▁Turk 1 -▁distinctly 1 -▁buckle 1 -brain 1 -▁Rau 1 -▁elbows 1 --34 1 -▁Bot 1 -clad 1 -gies 1 -▁Gamble 1 -▁disparity 1 -▁LORD 1 -▁Phelps 1 -▁glared 1 -▁Jesuit 1 -▁Haryana 1 -▁trout 1 -▁summed 1 -▁groan 1 -▁disguised 1 -▁noticeably 1 -▁Russo 1 -▁Stack 1 -▁Examples 1 -▁litres 1 -▁Essentially 1 -▁Wouldn 1 -▁Coco 1 -▁Zhu 1 -Too 1 -▁Insiders 1 -▁disruptions 1 -▁photographic 1 -▁GL 1 -▁Integrated 1 -▁circling 1 -▁snuck 1 -▁abs 1 -▁Hospice 1 -▁gazed 1 -▁instruct 1 -▁Darling 1 -▁Crowd 1 -Ray 1 -▁rooting 1 -▁enhancement 1 -▁Julio 1 -▁portfolios 1 -▁dolphins 1 -▁Hannity 1 -▁empirical 1 -pies 1 -▁Aba 1 -▁woven 1 -▁picket 1 -▁Northampton 1 -▁allocate 1 -▁counters 1 -write 1 -▁Firstly 1 -▁Rolls 1 
-TX 1 -chie 1 -▁Shire 1 -▁monarchy 1 -▁Lew 1 -bby 1 -▁deductions 1 -club 1 -rom 1 -▁boasting 1 -▁repetition 1 -▁41, 1 -voice 1 -▁Sca 1 -▁Sunrise 1 -▁Macau 1 -▁succumbed 1 -▁Joanne 1 -.94 1 -▁quieter 1 -Though 1 -▁Directorate 1 -▁domestically 1 -▁Roche 1 -▁REIT 1 -▁Peel 1 -▁roofs 1 -▁Guys 1 -▁Portal 1 -▁Lai 1 -▁airfield 1 -▁epi 1 -а 1 -▁5000 1 -▁upheaval 1 -▁Assuming 1 -▁respite 1 -▁Kershaw 1 -▁blames 1 -▁Swim 1 -▁jammed 1 -▁Mansfield 1 -▁Mutual 1 -▁Offensive 1 -▁embargo 1 -▁littered 1 -▁Jas 1 -▁[13] 1 -▁null 1 -▁Worse 1 -eiro 1 -investment 1 -▁LC 1 -Press 1 -▁Cub 1 -▁03 1 -CG 1 -▁qualifiers 1 -▁Colt 1 -▁partying 1 -Work 1 -▁Berger 1 -▁$700 1 -▁emit 1 -▁reminders 1 -▁curfew 1 -ube 1 -▁handicap 1 -▁siding 1 -2.00 1 -▁poisonous 1 -▁MAC 1 -Google 1 -▁QC 1 -▁weakest 1 -▁ir 1 -Bank 1 -ZA 1 -▁thugs 1 -▁pear 1 -Gra 1 -▁injections 1 -▁timer 1 -▁sneaking 1 -zhou 1 -▁Gibraltar 1 -▁repository 1 -▁residue 1 -▁Chatham 1 -▁Kieran 1 -▁Awesome 1 -wad 1 -▁westbound 1 -journal 1 -thought 1 -▁culminated 1 -▁Dahl 1 -▁Adviser 1 -▁flung 1 -▁Export 1 -▁equals 1 -▁charger 1 -udge 1 -▁magician 1 -▁blowout 1 -▁Survivor 1 -ría 1 -▁endeavour 1 -▁egregious 1 -▁Occupy 1 -toxic 1 -▁Holm 1 -Hopefully 1 -▁birthdays 1 -▁Creed 1 -▁Tanya 1 -▁Jor 1 -▁buzzer 1 -tman 1 -▁Santo 1 -▁Weir 1 -pain 1 -▁Yugoslavia 1 -▁Buddhism 1 -▁FRANCISCO 1 -▁Raqqa 1 -▁nostalgic 1 -▁Vaughan 1 -▁Tahoe 1 -RIS 1 -ISH 1 -▁accredited 1 -▁Kad 1 -▁Mariah 1 -▁awkwardly 1 -DG 1 -▁ashore 1 -▁underestimate 1 -▁cupcakes 1 -easy 1 -ATA 1 -▁ballad 1 -▁1979. 1 -kwa 1 -▁hauling 1 -▁XP 1 -▁Lama 1 -▁irritation 1 -▁Shapiro 1 -▁Karan 1 -▁disadvantaged 1 -▁Span 1 -3.8 1 -▁Reilly 1 -▁wink 1 -▁manned 1 -▁seals 1 -▁PV 1 -rach 1 -▁fingerprints 1 -▁periodic 1 -Each 1 -▁IPL 1 -▁Awareness 1 -▁Marjory 1 -▁Sweeney 1 -▁giggling 1 -▁Madhya 1 -▁Slate 1 -▁Approximately 1 -▁Wrong 1 -▁extraordinarily 1 -▁labeling 1 -▁transmitter 1 -▁WAR 1 -Atlantic 1 -Pakistan 1 -▁μ 1 -▁frontline 1 -▁iceberg 1 -▁powerless 1 -▁Vor 1 -▁Ticket 1 -▁CPI 1 -▁ageing 1 -HER 1 -née 1 -▁tuck 1 -▁influenza 1 -▁shampoo 1 -▁REPORT 1 -▁deterrent 1 -▁Annapolis 1 -▁demeanor 1 -▁shaving 1 -▁encompasses 1 -▁outlining 1 -▁trendy 1 -.64 1 -Pass 1 -▁universally 1 -▁growled 1 -▁mashed 1 -▁detectors 1 -▁Educational 1 -▁revolves 1 -▁styled 1 -▁ideally 1 -mobile 1 -ady 1 -/11 1 -▁sparks 1 -fus 1 -▁redundant 1 -▁Lisbon 1 -fect 1 -▁1983. 
1 -▁Omega 1 -oph 1 -kara 1 -▁Madden 1 -▁Exactly 1 -▁pp 1 -▁piling 1 --100 1 -▁sac 1 -▁lockdown 1 -rion 1 -▁compliant 1 -eria 1 -▁Alternatively 1 -▁Accord 1 -https 1 -Given 1 -▁ND 1 -OX 1 -▁BlackBerry 1 -▁reproduced 1 -▁annoyance 1 -▁sighting 1 -▁Addison 1 -▁Fortnite 1 -▁Gai 1 -▁Midnight 1 -budget 1 -▁circumvent 1 -▁Analyst 1 -Without 1 -▁47- 1 -▁Goals 1 -paper 1 -ential 1 -▁immortal 1 -▁Discover 1 -Nor 1 -▁Pulitzer 1 -▁shrunk 1 -▁thrift 1 -▁paramount 1 -▁Hidden 1 -▁André 1 -▁Scarlett 1 -next 1 -▁intending 1 -GF 1 -▁Gad 1 -▁ailing 1 -▁hardened 1 -▁260 1 -▁− 1 -morph 1 -.22 1 -RES 1 -hay 1 -▁theatres 1 -▁Eisenhower 1 -▁culminating 1 -▁adversity 1 -▁McCann 1 -▁Sundance 1 -▁Carrier 1 -German 1 -▁Burk 1 -▁GOD 1 -against 1 -past 1 -phan 1 -access 1 -▁Organic 1 -.87 1 -▁KA 1 -▁spilling 1 -▁worsening 1 -▁Lud 1 -▁Account 1 -▁Belarus 1 -▁clutter 1 -▁immersive 1 -▁Tunnel 1 -▁dangling 1 -▁Mellon 1 -dah 1 -▁aching 1 -▁apartheid 1 -▁ur 1 -▁14% 1 -comb 1 -▁leftovers 1 -▁Crash 1 -fried 1 -▁Bowling 1 -▁42, 1 -▁Lawn 1 -▁fluctuations 1 -▁emphasizing 1 -▁ray 1 -3.2 1 -Islam 1 -▁whispering 1 -exp 1 -▁Lex 1 -Morning 1 -▁Dispatch 1 -▁Limit 1 -▁coupons 1 -▁fueling 1 -▁retirees 1 -doing 1 -▁Initial 1 -▁disclosures 1 -▁Rider 1 -tering 1 -Bur 1 -▁har 1 -▁prestige 1 -▁kettle 1 -▁zinc 1 -emp 1 -▁risking 1 -2017 1 -▁vetting 1 -chuk 1 -▁reposted 1 -management 1 -Something 1 -▁denounce 1 -posed 1 -▁restraints 1 -Ky 1 -▁Profile 1 -cking 1 -gil 1 -lain 1 -▁flushed 1 -▁Porto 1 -▁Speak 1 -▁mayoral 1 -▁MBA 1 -▁paraphernalia 1 -▁trembling 1 -▁Gillespie 1 -▁Psychology 1 -▁rebounding 1 -▁280 1 -must 1 -▁clocks 1 -▁intimidate 1 -hid 1 -▁131 1 -▁abound 1 -▁depict 1 -▁Councilman 1 -▁Jai 1 -AND 1 -▁Shim 1 -▁approximate 1 -▁fleeting 1 -▁1893 1 -▁incompetent 1 -kka 1 -▁Robbins 1 -▁Turtle 1 -RED 1 -▁ambassadors 1 -▁Sudanese 1 -.66 1 -▁unheard 1 -▁Aria 1 -▁simplified 1 -▁[14] 1 -▁3,500 1 -south 1 -Vo 1 -Music 1 -▁sled 1 -▁cape 1 -▁blocker 1 -▁Shake 1 -▁implants 1 -▁mole 1 -▁removes 1 -▁Anand 1 -▁robes 1 -▁compounded 1 -tract 1 -rator 1 -Listen 1 -▁Might 1 -▁OTTAWA 1 -▁repertoire 1 -▁misunderstood 1 -▁Jurgen 1 -▁degradation 1 -▁ostensibly 1 -▁unjust 1 -▁Fau 1 -often 1 -▁Hailey 1 -lean 1 -▁uproar 1 -▁Mobil 1 -.59 1 -▁typo 1 -Express 1 -▁salesman 1 -1.3 1 -gr 1 -FD 1 -▁cooks 1 -eed 1 -icia 1 -ghan 1 -▁peered 1 -mol 1 -▁implying 1 -trap 1 -▁Treasure 1 -NK 1 -▁Affleck 1 -▁Tem 1 -▁Hue 1 -▁1944, 1 -▁dotted 1 -▁Cli 1 -▁Angelina 1 -▁Boom 1 -▁dependency 1 -▁limbo 1 -▁Tracey 1 -▁grapple 1 -▁Lyons 1 -product 1 -▁Eduardo 1 -.84 1 -▁1⁄2 1 -▁Amb 1 -▁lever 1 -▁blizzard 1 -▁ferocious 1 -▁remedies 1 -▁Lansing 1 -▁Podcast 1 -▁flea 1 -▁Tak 1 -▁chairperson 1 -$5 1 -▁misled 1 -▁underdog 1 -NER 1 -▁betray 1 -▁marketers 1 -▁Vicky 1 -▁roundabout 1 -sick 1 -mounted 1 -▁Carlisle 1 -▁Covington 1 -▁galaxies 1 -▁undrafted 1 -▁compromising 1 -▁Brom 1 -▁Kramer 1 -▁HDR 1 -▁wording 1 -tex 1 -▁Dickens 1 -▁3.9 1 -▁rover 1 -rud 1 -▁clam 1 -pong 1 -▁Alli 1 -▁mule 1 -▁£8 1 -▁Achilles 1 -▁Fifteen 1 -▁contingency 1 -▁Liquid 1 -▁Entry 1 -▁fulfillment 1 -important 1 -.74 1 -▁$54 1 -uploads 1 -▁broaden 1 -Other 1 -▁Hau 1 -▁decoration 1 -▁Humane 1 -▁Cod 1 -aji 1 -▁Hasan 1 -uous 1 -▁guerrilla 1 -▁negativity 1 -▁Mauricio 1 -▁Kirsten 1 -▁Lukaku 1 -▁humane 1 -▁1895 1 -▁dipping 1 -▁Pap 1 -▁Gators 1 -▁striped 1 -cion 1 -▁listens 1 -report 1 -model 1 -▁Amen 1 -▁salty 1 -▁backstop 1 -▁prescribe 1 -▁horrors 1 -▁exacerbated 1 -fledged 1 -fr 1 -lix 1 -▁groundwater 1 -:19 1 -▁pero 1 -HF 1 -▁receptors 1 -ī 1 -▁Plenty 1 -▁Ritchie 1 -▁rambling 1 -▁robberies 1 -▁purity 1 -▁Ing 1 -▁rigs 1 -▁Faye 1 
-▁defer 1 -Min 1 -▁transitioning 1 -single 1 -▁desserts 1 -▁Photography 1 -▁perpetrator 1 -▁124 1 -▁Sigma 1 -▁heavens 1 -▁Kurdistan 1 -▁McLean 1 -▁hypo 1 -▁Foles 1 -▁revered 1 -wei 1 -holding 1 -▁dispose 1 -▁Figures 1 -0.1 1 -▁Trial 1 -▁transitions 1 -▁silicon 1 -▁Aki 1 -▁torpedo 1 -▁(28 1 -▁Bangalore 1 -▁apologizing 1 -▁diabetic 1 -▁Marseille 1 -uja 1 -▁Medium 1 -▁accents 1 -▁meadow 1 -▁wrecked 1 -▁fielded 1 -▁humbled 1 -▁Specialist 1 -▁chants 1 -▁sentimental 1 -▁1,600 1 -▁Entrepreneur 1 -▁manners 1 -▁overheard 1 -▁Sus 1 -Alex 1 -.21 1 -▁rectangular 1 -▁wheeled 1 -▁Glo 1 -▁Interactive 1 -▁trough 1 -▁Colon 1 -▁Rollins 1 -▁batsmen 1 -▁HM 1 -▁compatibility 1 -.69 1 -▁hostages 1 -110 1 -▁Northam 1 -▁compensated 1 -▁scrum 1 -▁fossils 1 -.83 1 -▁hysterical 1 -isin 1 -▁herbal 1 -▁Loyola 1 -▁Harbaugh 1 -▁headlights 1 -▁Witness 1 -▁accuser 1 -▁Shiva 1 -▁Sanjay 1 -▁Zombie 1 -▁unnatural 1 -▁11,000 1 -▁Jur 1 -▁reliant 1 -▁intently 1 -▁Riot 1 -▁traitor 1 -▁Landon 1 -Uh 1 -▁Demon 1 -▁Crest 1 -▁Fay 1 -▁overrun 1 -▁Mori 1 -▁Sergey 1 -▁Whale 1 -▁tweak 1 -efficient 1 -▁Lies 1 -▁appreciative 1 -▁debacle 1 -▁populous 1 -▁swirl 1 -▁ZTE 1 -▁Unified 1 -▁sparkle 1 -▁confidentiality 1 -▁Wat 1 -▁coordinates 1 -▁Barney 1 -Bay 1 -mba 1 -▁Somebody 1 -Boo 1 -▁ARM 1 -▁5.1 1 -▁Rank 1 -▁Christy 1 -▁2018) 1 -▁1972. 1 -▁Ninja 1 -gha 1 -▁Motorola 1 -▁detector 1 -▁Nicki 1 -Robert 1 -tile 1 -▁distracting 1 -sler 1 -▁THC 1 -outperform 1 -450 1 -dh 1 -▁Direction 1 -▁livelihood 1 -▁asserting 1 -▁Jain 1 -▁Tad 1 -▁portraying 1 -▁Rent 1 -▁Administrative 1 -▁attributable 1 -▁entrepreneurial 1 -▁Ridley 1 -▁Taiwanese 1 -▁hastily 1 -loo 1 -▁Lorraine 1 -▁CEOs 1 -Give 1 -kid 1 -▁Towers 1 -▁characterize 1 -▁05 1 -▁Temperatures 1 -▁SSD 1 -until 1 -ships 1 -Leave 1 --3, 1 -▁smelling 1 -▁thirst 1 -2.1 1 -▁1200 1 -▁physicist 1 -▁Andres 1 -▁PJ 1 -▁admittedly 1 -▁sympathize 1 -▁Position 1 -▁Amateur 1 -▁Heller 1 -▁phoned 1 -▁braking 1 -▁stripping 1 -▁Caption 1 -▁Abd 1 -aja 1 -▁purposely 1 -▁Isa 1 -Texas 1 -▁thirsty 1 -ilia 1 -hook 1 -▁tails 1 -▁Whoever 1 -▁seeded 1 -▁Freeland 1 -collar 1 -▁Eg 1 -▁unthinkable 1 -▁unicorn 1 -▁Intermediate 1 -▁baptized 1 -▁momentarily 1 -▁Hawke 1 -Lago 1 -oth 1 -▁$52 1 -Bad 1 -rano 1 -▁linemen 1 -▁Cons 1 -rib 1 -▁fer 1 -▁motivations 1 -▁warranted 1 -▁$300,000 1 -▁professionalism 1 -Brown 1 -▁Spears 1 -▁lofty 1 -▁signage 1 -early 1 -nig 1 -▁Thal 1 -▁superpower 1 -▁Patti 1 -▁shedding 1 -ogo 1 -▁netting 1 -abo 1 -▁($5 1 -▁Lankan 1 -▁schizophrenia 1 -▁vulgar 1 -▁outpouring 1 -▁Architecture 1 -▁Winchester 1 -▁Resident 1 -▁omission 1 -▁Surprisingly 1 -▁FROM 1 -▁Sie 1 -▁gigs 1 -▁compositions 1 -fel 1 -▁Jab 1 -▁countdown 1 -▁Lola 1 -▁Fork 1 -▁CIO 1 -▁contra 1 -▁guitars 1 -▁250,000 1 -cine 1 -▁Pauline 1 -▁Sturgeon 1 -▁latitude 1 -▁Stevie 1 -▁marred 1 -▁eminent 1 -▁cuff 1 -Feb 1 -▁Buckley 1 -▁1/4 1 -Qu 1 -▁Featured 1 -please 1 -▁Yorkers 1 -tab 1 -oria 1 -▁Andhra 1 -▁Omni 1 -▁256 1 -▁Drum 1 -▁Zanu 1 -▁Okanagan 1 -▁Tallahassee 1 -▁beneficiary 1 -▁dazzling 1 -▁Yield 1 -▁reassurance 1 -▁moderator 1 -▁Improvement 1 -tap 1 -▁Silence 1 -▁$41 1 -employed 1 -▁timeframe 1 -▁Nai 1 -▁Fitz 1 -▁dishonest 1 -▁Spence 1 -▁Tasmania 1 -▁Nagar 1 -▁65- 1 -▁PML 1 -▁Bayer 1 -▁moratorium 1 -▁molecule 1 -▁McDonnell 1 -▁Tatum 1 -generated 1 -▁$15,000 1 -▁plummeted 1 -▁Shen 1 -▁EM 1 -dozen 1 -▁discern 1 -▁pigment 1 -▁Quiet 1 -gain 1 -mental 1 -▁q 1 -▁Santana 1 -▁wrestled 1 -▁Maxim 1 -ANA 1 -▁Challenger 1 -hailing 1 -▁specimen 1 -▁descending 1 -▁ivory 1 -▁juicy 1 -▁inexperienced 1 -▁Applied 1 -▁leveled 1 -▁Biology 1 -▁Micah 1 -▁marital 1 -▁yanked 1 
-▁Bid 1 -▁curated 1 -▁glossy 1 -▁Albania 1 -▁1865 1 -▁vile 1 -▁Rene 1 -LES 1 -▁7.30 1 -▁trusts 1 -▁jihadist 1 -▁Spectrum 1 -▁Ortega 1 -▁Achievement 1 -▁unresponsive 1 -▁detachment 1 -▁patted 1 -▁54- 1 -▁Premiership 1 -liness 1 -pretty 1 -entrepreneurship 1 -▁polo 1 -▁fabricated 1 -bbed 1 -▁Lois 1 -▁buds 1 -▁vigorous 1 -raz 1 -▁covert 1 -tian 1 -▁Saudis 1 -▁deepening 1 -▁separatist 1 -dress 1 -▁Illustrated 1 -▁eclectic 1 -▁tavern 1 -▁Courier 1 -▁biometric 1 -е 1 -▁Location 1 -▁Miley 1 -icular 1 -haul 1 -▁Bundy 1 -▁Maher 1 -▁squads 1 -▁aura 1 -▁relentlessly 1 -▁bonded 1 -▁plaster 1 -▁Vu 1 -▁burglar 1 -▁unconditional 1 -▁TWO 1 -▁redirect 1 -yana 1 -▁WrestleMania 1 -▁distraught 1 -▁energies 1 -▁bruising 1 -▁Lincolnshire 1 -5,000. 1 -▁Dixie 1 -▁satisfactory 1 -stay 1 -▁consciously 1 -▁Recep 1 -Fig 1 -▁crippled 1 -▁Agents 1 -▁homicides 1 -▁Kari 1 -▁cardio 1 -▁Advertisement 1 -▁cerebral 1 -▁qualitative 1 -▁Borussia 1 -▁Schwab 1 -▁Granny 1 -quite 1 -▁Vil 1 -▁summons 1 -17. 1 -▁governorship 1 -▁shields 1 -CV 1 -nuclear 1 -▁GS 1 -endra 1 -▁whack 1 -▁Ramon 1 -▁Ismail 1 -▁Waiting 1 -▁beasts 1 -gation 1 -▁DON 1 -▁Miz 1 -▁nodding 1 -▁Cena 1 -cop 1 -▁Mulvaney 1 -▁adjoining 1 -▁ambiguous 1 -▁embroiled 1 -▁invariably 1 -▁unmanned 1 -▁unsettling 1 -▁irregularities 1 -▁spinach 1 -▁wealthiest 1 -▁screenwriter 1 -▁Size 1 -▁Zionist 1 -▁superiority 1 -▁Grayson 1 -▁carrot 1 -▁tornadoes 1 -Dar 1 -maid 1 -▁Garda 1 -gni 1 -▁Rana 1 -keep 1 -itive 1 -▁disable 1 -liberal 1 -▁Tijuana 1 -▁Invitational 1 -ière 1 -▁attentive 1 -▁Cohn 1 -▁screenshot 1 -Pay 1 -▁Insights 1 -review 1 -▁blistering 1 -▁Stormy 1 -▁sirens 1 -▁Soldiers 1 -▁denote 1 -▁Chem 1 -tip 1 -▁ornament 1 -hoff 1 -▁Tales 1 -▁Bigg 1 -▁loaned 1 -hub 1 -▁terminology 1 -▁Harriet 1 -▁Issue 1 -Um 1 -▁babysitter 1 -▁Mystery 1 -▁bagged 1 -▁nonpartisan 1 -▁Presley 1 -▁Fisheries 1 -▁banged 1 -ought 1 -▁walker 1 -Company 1 -Calif 1 -▁Rahm 1 -▁huddled 1 -mails 1 -anni 1 -▁magically 1 -said 1 -▁Browne 1 -▁cages 1 -▁TOKYO 1 -▁merging 1 -▁bragging 1 -▁ADHD 1 -▁ratified 1 -▁Gemini 1 -▁negotiator 1 -▁traffickers 1 -▁Issa 1 -ANCE 1 -▁ire 1 -▁Gucci 1 -▁Tire 1 -▁Dodger 1 -▁1964, 1 -▁Anwar 1 -▁Stores 1 -▁pur 1 -▁writ 1 -▁Seng 1 -cial 1 -rage 1 -▁kidnap 1 -▁calculating 1 -▁covenant 1 -▁deterioration 1 -▁vengeance 1 -Open 1 -▁swapped 1 -▁slapping 1 -vac 1 -▁oneself 1 -father 1 -▁“... 
1 -▁Darius 1 -▁sunscreen 1 -▁Dirk 1 -▁Callum 1 -▁shines 1 -▁clap 1 -▁amaze 1 -imposed 1 -▁Journalists 1 -4.8 1 -Whatever 1 -▁Monitor 1 -▁swoop 1 -▁Trustees 1 -▁waits 1 -▁Bau 1 -Tri 1 -▁Mahmoud 1 -▁blunder 1 -▁mammoth 1 -▁canopy 1 -▁Bombay 1 -▁impasse 1 -▁refill 1 -▁fig 1 -▁competence 1 -▁practitioner 1 -▁Paddy 1 -▁hires 1 -bane 1 -▁wastewater 1 -oti 1 -▁cramped 1 -▁$43 1 -▁parallels 1 -▁knack 1 -▁Ginger 1 -▁scoreboard 1 -▁stares 1 -▁worsen 1 -trial 1 -ood 1 -▁impart 1 -▁steered 1 -▁Shit 1 -▁Cabrera 1 -▁Rudolph 1 -▁Sanctuary 1 -▁constellation 1 -▁replenish 1 -▁Priyanka 1 -▁Café 1 -▁tumbling 1 -▁shady 1 -▁sipping 1 -▁boobs 1 -▁scams 1 -▁naughty 1 -▁TJ 1 -mbi 1 -.73 1 -▁8.5 1 -▁$46 1 -closed 1 -▁nineteenth 1 -Does 1 -torn 1 -▁75- 1 -▁regimen 1 -▁refine 1 -nate 1 -▁ge 1 -▁gran 1 -Nazi 1 -VILLE 1 -▁Turbo 1 -▁Governors 1 -▁notions 1 -knit 1 -▁exert 1 -▁landscaping 1 -▁presiding 1 -▁Navajo 1 -▁uninsured 1 -▁Sul 1 -▁Funding 1 -▁Buckeyes 1 -▁collisions 1 -consuming 1 -Things 1 -Fla 1 -▁Yoshi 1 -gone 1 -▁^ 1 -▁13% 1 -wah 1 -▁burner 1 -▁ignores 1 -▁Constant 1 -▁imprint 1 -▁1899 1 -▁comedic 1 -▁passions 1 -▁46, 1 -iam 1 -▁Slovenia 1 -automatic 1 -▁mailbox 1 -finger 1 -▁charcoal 1 -▁intuition 1 -▁Oreo 1 -▁motors 1 -▁patrolling 1 -▁TCU 1 -Mont 1 -▁Bulletin 1 -▁Freshman 1 -▁wipes 1 -bourne 1 -EAR 1 -▁Spo 1 -reading 1 -Hy 1 -▁Bergen 1 -▁Reddy 1 -elia 1 -AW 1 -▁bla 1 -▁mishap 1 -payer 1 -▁scraps 1 -▁outnumbered 1 -▁SWAT 1 -▁invitations 1 -▁sim 1 -▁leopard 1 -▁appalled 1 -▁menacing 1 -▁revamp 1 -▁valuations 1 -▁proclaim 1 -▁slashing 1 -violent 1 -▁Kenney 1 -Jones 1 -▁Equality 1 -tall 1 -▁ping 1 -dated 1 -flag 1 -▁obsessive 1 -▁5) 1 -▁salads 1 -▁Merit 1 -▁compliments 1 -▁Beirut 1 -▁leveraging 1 -ū 1 -▁constable 1 -▁vehemently 1 -▁Commissioners 1 -▁Sino 1 -▁Collin 1 -▁pout 1 -jay 1 -▁13,000 1 -▁reiterate 1 -Office 1 -▁climbs 1 -ticket 1 -▁pulp 1 -▁Ryu 1 -ssian 1 -▁Bolivia 1 -▁Kern 1 -▁MSU 1 -▁derivatives 1 -▁MG 1 -▁chemist 1 -380 1 -▁pledging 1 -▁anthology 1 -▁Friedrich 1 -▁complementary 1 -▁Greenland 1 -▁Riders 1 -▁Peck 1 -educated 1 -▁Devices 1 -George 1 -pressure 1 -mac 1 -▁modelling 1 -mund 1 -Shi 1 -▁graders 1 -▁SUVs 1 -▁dia 1 -▁poets 1 -▁7-5 1 -17) 1 -▁sprung 1 -▁geometry 1 -▁viability 1 -travel 1 -isch 1 -▁stroked 1 -▁Debt 1 -▁diligently 1 -▁McGill 1 -NH 1 -▁$2.2 1 -Nu 1 -partisan 1 -gree 1 -Kit 1 -▁calorie 1 -ipe 1 -blown 1 -▁TDs 1 -▁Invest 1 -▁insured 1 -▁Grid 1 -▁podcasts 1 -AST 1 -▁Kem 1 -▁Nicky 1 -▁migrated 1 -▁Romance 1 -▁similarity 1 -hul 1 -eaux 1 -▁Amos 1 -▁revolving 1 -▁Emerging 1 -▁Heisman 1 -▁parrot 1 -▁jab 1 -▁fasting 1 -▁Sami 1 -▁Lounge 1 -▁dizzy 1 -▁spirited 1 -▁mixes 1 -▁Showtime 1 -fifth 1 -▁Bog 1 -Name 1 -€ 1 -▁Trace 1 -▁clo 1 -▁coke 1 -▁Tao 1 -iso 1 -▁judgments 1 -▁$3,000 1 -Park 1 -^ 1 -▁scorers 1 -▁VIOLATION 1 -▁harrowing 1 -▁residual 1 -▁Called 1 -▁spearheaded 1 -▁bassist 1 -▁commonplace 1 -asco 1 -▁diversify 1 -▁google 1 -▁1,300 1 -phor 1 -▁redesigned 1 -▁vouchers 1 -▁4:00 1 -▁discard 1 -blast 1 -▁Waymo 1 -▁honorable 1 -▁Lux 1 -▁quarterfinal 1 -▁1:00 1 -▁moose 1 -▁Rotary 1 -▁vines 1 -▁Vicki 1 -▁redeem 1 -▁revolver 1 -▁waterways 1 -▁lifespan 1 -vehicle 1 -▁stoked 1 -▁2024 1 -wage 1 -abil 1 -▁Ferris 1 -▁KT 1 -▁silenced 1 -▁makeover 1 -▁Tho 1 -▁Byzantine 1 -▁JFK 1 -▁Adidas 1 -▁pricey 1 -▁Caracas 1 -▁330 1 -Week 1 -00) 1 -▁0% 1 -▁vaccinated 1 -cake 1 -▁scripted 1 -▁Doha 1 -▁REAL 1 -24. 
1 -nice 1 -union 1 -ulf 1 -▁51- 1 -iac 1 -▁Karim 1 -▁1894 1 -▁Seymour 1 -▁affinity 1 -▁refineries 1 -▁swimsuit 1 -mortem 1 -want 1 -Fo 1 -yx 1 -▁refunds 1 -▁Tong 1 -▁Salmon 1 -Save 1 -▁treaties 1 -▁Returning 1 -Gold 1 -▁sham 1 -urs 1 -TRA 1 -HAN 1 -▁Malay 1 -▁growl 1 -:21 1 -▁Plate 1 -▁Bethlehem 1 -▁Greitens 1 -▁aperture 1 -typical 1 -.00% 1 -lers 1 -▁avert 1 -▁Citing 1 -▁onward 1 -▁broadcasters 1 -▁Elton 1 -▁14,000 1 -▁Birth 1 -:24 1 -▁Hog 1 -drawn 1 -▁Dip 1 -sai 1 -vig 1 -▁3:00 1 -edi 1 -▁auditor 1 -▁Clair 1 -▁Hmmm 1 -▁apprehension 1 -▁composure 1 -▁Chevron 1 -▁shi 1 -▁Appalachian 1 -icle 1 -▁backside 1 -▁seizing 1 -nite 1 -▁novice 1 -therapy 1 -▁coyote 1 -▁landowners 1 -▁moaned 1 -▁snuggle 1 -▁Bride 1 -▁Doom 1 -▁Oo 1 -4.7 1 -▁Jed 1 -inal 1 -wara 1 -▁Qi 1 -▁Rutherford 1 -▁memorabilia 1 -▁servicing 1 -▁Subaru 1 -▁1963, 1 -▁Zinke 1 -▁darkened 1 -vat 1 -eek 1 -▁Janice 1 -▁Dickson 1 -▁Martial 1 -▁supremacists 1 -▁Johann 1 -▁baton 1 -α 1 -United 1 -▁cringe 1 -▁Brewery 1 -pick 1 -▁shocks 1 -ô 1 -▁receptionist 1 -▁progressively 1 -▁simulated 1 -▁Flor 1 -▁emphasizes 1 -▁1970, 1 -▁delve 1 -▁Dav 1 -▁Chesapeake 1 -▁Dmitry 1 -▁hemisphere 1 -▁conveniently 1 -augh 1 -▁Nathaniel 1 -▁Darrell 1 -▁Counties 1 -▁Pru 1 -▁550 1 -▁Deadline 1 -▁Carney 1 -▁liberated 1 -bc 1 -▁1892 1 -▁astronomers 1 -▁phased 1 -▁Gaz 1 -▁..... 1 -▁Healthy 1 -RNA 1 -▁Extreme 1 -▁surpass 1 -▁Rancho 1 -▁Portfolio 1 -▁fisheries 1 -▁Blockchain 1 -▁Monterey 1 -▁$150,000 1 -▁stalemate 1 -▁Azure 1 -▁Bethany 1 -▁sculptor 1 -▁spitting 1 -▁Weeks 1 -▁Treasurer 1 -▁loops 1 -▁Teaching 1 -▁nipple 1 -▁mergers 1 -▁id 1 -acting 1 -feel 1 -▁altering 1 -▁138 1 -▁Boca 1 -wich 1 -ensis 1 -▁newfound 1 -▁Bak 1 -▁freeing 1 -▁$3.5 1 -▁cheesy 1 -▁scathing 1 -▁Pascal 1 -▁intercourse 1 -▁wiring 1 -▁Storage 1 -▁Housewives 1 -▁MIL 1 -▁mountainous 1 -▁Ani 1 -ngling 1 -▁reviewer 1 -▁swath 1 -▁Hindus 1 -About 1 -▁hopping 1 -▁49- 1 -▁miner 1 -assa 1 -rable 1 -▁Ruben 1 -▁contemplated 1 -posted 1 -brown 1 -▁Billie 1 -▁replacements 1 -engine 1 -”? 
1 -▁Gord 1 -▁Wilhelm 1 -▁deadlock 1 -▁Unlimited 1 -▁Lesson 1 -▁Coachella 1 -▁Surprise 1 -▁(29 1 -▁HMS 1 -Han 1 -yev 1 -▁16% 1 -▁prostitute 1 -1000 1 -▁lighthouse 1 -nau 1 -▁NDA 1 -punk 1 -▁Vick 1 -eep 1 -▁laboratories 1 -▁Dew 1 -eco 1 -▁infrared 1 -▁earmarked 1 -▁Reef 1 -▁Huang 1 -▁Medi 1 -▁mend 1 -▁Stamp 1 -▁AWS 1 -inated 1 -ige 1 -▁Jalen 1 -reach 1 -▁4.6 1 -▁Bless 1 -▁plow 1 -aca 1 -▁BERLIN 1 -▁Hermione 1 -▁Vodafone 1 -▁predicament 1 -phil 1 -spring 1 -▁dysfunctional 1 -▁spanned 1 -▁Ele 1 -▁Constand 1 -▁Tusk 1 -▁Willy 1 -▁immersed 1 -▁178 1 -▁cranky 1 -▁bolts 1 -▁precinct 1 -▁rupee 1 -tard 1 -▁dispersed 1 -▁centralized 1 -▁Mean 1 -Water 1 -▁contended 1 -▁Clifton 1 -▁Marijuana 1 -▁tragedies 1 -▁iShares 1 -▁Gon 1 -▁checkpoints 1 -khan 1 -▁grumpy 1 -▁baron 1 -zier 1 -▁walkout 1 -▁Suit 1 -▁Moose 1 -▁Nieto 1 -competitive 1 -▁Lexi 1 -Remember 1 -▁MLAs 1 -▁deflect 1 -320 1 -▁Navi 1 -chet 1 -▁Object 1 -ango 1 -▁Hiro 1 -▁Chit 1 -gla 1 -▁enthusiast 1 -▁gall 1 -▁Dickinson 1 -▁irritating 1 -▁Donnelly 1 -▁murky 1 -▁jogging 1 -▁bogus 1 -gat 1 -▁Bliss 1 -▁pottery 1 -▁compose 1 -Cam 1 -▁comparatively 1 -▁gunned 1 -▁subset 1 -▁Globes 1 -3.4 1 -owitz 1 -▁Sav 1 -▁vis 1 -capital 1 -▁simultaneous 1 -▁Zimmer 1 -▁Lavrov 1 -▁adversaries 1 -▁reliably 1 -▁monologue 1 -▁chore 1 -▁SEAL 1 -▁Ambrose 1 -▁Outlook 1 -▁smirk 1 -7.00 1 -iation 1 -together 1 -▁gosh 1 -nominated 1 -▁Ful 1 -Was 1 -Pu 1 -▁Papers 1 -▁1,400 1 -▁Essential 1 -yen 1 -æ 1 -▁dwindling 1 -▁paranoia 1 -▁philanthropist 1 -▁ramifications 1 -▁Aerospace 1 -▁1861 1 -▁trustworthy 1 -▁handcuffed 1 -▁eastbound 1 -Media 1 -tour 1 -▁finite 1 -.97 1 -▁earners 1 -▁rebuke 1 -▁censor 1 -▁juices 1 -▁char 1 -▁Rees 1 -Law 1 -▁205 1 -▁Linden 1 -▁Scarlet 1 -▁Maguire 1 -▁shoving 1 -▁affirmative 1 -▁ambient 1 -▁5-3 1 -died 1 -▁adored 1 -▁Xander 1 -▁Website 1 -▁LLP 1 -▁Martian 1 -▁Alps 1 -▁Schedule 1 -▁awakened 1 -uge 1 -eater 1 -▁pubs 1 -KA 1 -▁realism 1 -▁shudder 1 -▁Gio 1 -bak 1 -adjusted 1 -always 1 -▁millennial 1 -▁nineteen 1 -▁electron 1 -▁Polk 1 -credit 1 -▁torrent 1 -▁Greeks 1 -▁Lenovo 1 -▁patriotism 1 -ppa 1 -▁Tripoli 1 -▁flooring 1 -producing 1 -▁passionately 1 -▁drafts 1 -▁cooker 1 -haus 1 -formed 1 -▁restrain 1 -▁rundown 1 -earth 1 -▁fibers 1 -▁Seventh 1 -▁Boots 1 -▁Hug 1 -10) 1 -▁Heading 1 -▁examinations 1 -slow 1 -▁manipulating 1 -▁paternal 1 -▁vacancies 1 -▁limestone 1 -▁Apprentice 1 -▁Degree 1 -▁$99 1 -drum 1 -▁Floor 1 -professional 1 -▁Bread 1 -▁eradicate 1 -▁stadiums 1 -baum 1 -▁Fate 1 -PB 1 -▁unfit 1 -▁Jaw 1 -▁repairing 1 -▁truthful 1 -▁Collective 1 -▁Nad 1 -ndo 1 -olin 1 -▁cabbage 1 -▁undecided 1 -▁bedding 1 -▁Camille 1 -▁Davos 1 -▁Aircraft 1 -▁propped 1 -▁Kart 1 -▁Sirius 1 -▁representations 1 -▁correspond 1 -▁MAX 1 -▁Lent 1 -▁sha 1 -shoulder 1 -▁walkers 1 -▁groundwork 1 -▁jeopardize 1 -Lee 1 -didn 1 -▁Instant 1 -▁Epi 1 -▁destroyer 1 -▁Cul 1 -▁Cran 1 -▁Hide 1 -▁Taipei 1 -▁enraged 1 -▁impromptu 1 -▁glamour 1 -▁bombshell 1 -▁blurred 1 -▁crowdfunding 1 -▁Straight 1 -▁Subsequently 1 -▁childbirth 1 -▁exporter 1 -music 1 -▁-> 1 -▁Corker 1 -▁frontman 1 -rights 1 -oids 1 -bush 1 -▁aiding 1 -▁politic 1 -4.6 1 -Mal 1 -toe 1 -eda 1 -tani 1 -▁safeguards 1 -▁Cosmo 1 -▁Pil 1 -▁2:00 1 -lion 1 -/04/2018 1 -▁depreciation 1 -▁excruciating 1 -▁WIN 1 -▁Comfort 1 -▁hurtful 1 -▁Consulting 1 -▁thorn 1 -▁Fleetwood 1 -▁Styles 1 -egi 1 -lumin 1 -▁contractual 1 -▁bitterness 1 -▁handler 1 -▁collaborators 1 -:27 1 -▁enlarged 1 -gru 1 -▁35,000 1 -ambi 1 -▁convene 1 -esis 1 -uld 1 -▁Latvia 1 -▁Scot 1 -.57 1 -111 1 -Bharatiya 1 -getting 1 -▁Majesty 1 -▁SINGAPORE 1 -▁Zidane 1 
-▁accelerator 1 -▁obedience 1 -▁homophobic 1 -▁shredded 1 -▁gravy 1 -:28 1 -▁separatists 1 -▁Derry 1 -SCO 1 -▁Atta 1 -▁Playboy 1 -▁oppressive 1 -▁Toll 1 -▁Split 1 -▁relish 1 -▁NEC 1 -cute 1 -▁conditioner 1 -lp 1 -▁dialect 1 -▁insignificant 1 -▁contends 1 -▁roommates 1 -▁refreshed 1 -▁Sloan 1 -enthusiastically 1 -▁vase 1 -▁Yup 1 -▁ascent 1 -▁Lite 1 -▁$47 1 -▁layered 1 -lateral 1 -▁epilepsy 1 -▁Potential 1 -▁devoid 1 -▁Cahill 1 -▁Darcy 1 -▁fisherman 1 -eering 1 -▁Cris 1 -mode 1 -▁locomotives 1 -▁barren 1 -▁Ike 1 -Mag 1 -moor 1 -ronic 1 -▁cured 1 -▁transcription 1 -▁intensify 1 -▁moderately 1 -▁Henrik 1 -▁Debra 1 -▁jokingly 1 -▁5.2 1 -project 1 -▁Begin 1 -▁1864 1 -▁damning 1 -▁Rings 1 -▁Mister 1 -▁suffice 1 -▁« 1 -▁Yukon 1 -▁ponytail 1 -gang 1 -Sch 1 -▁challengers 1 -▁4.7 1 -▁clueless 1 -▁murderous 1 -▁stains 1 -rek 1 -mour 1 -oya 1 -▁lax 1 -▁reopening 1 -▁Shooting 1 -▁estates 1 -▁temps 1 -▁slab 1 -▁Mort 1 -▁thou 1 -▁OPP 1 -▁reflex 1 -▁Certificate 1 -▁Opponents 1 -▁academia 1 -▁Antarctic 1 -▁Surgery 1 -▁Universities 1 -▁spotting 1 -▁penchant 1 -▁17% 1 -▁grunt 1 -▁exponentially 1 -▁fuse 1 -▁childish 1 -▁spanking 1 -olu 1 -▁SHARE 1 -fl 1 -truck 1 -▁streamline 1 -Andrew 1 -daily 1 -▁Gab 1 -ondo 1 -NBC 1 -▁Haus 1 -▁endlessly 1 -labor 1 -▁swayed 1 -bli 1 -▁warehouses 1 -▁1889 1 -▁HoldingsChannel 1 -fresh 1 -▁800,000 1 -▁manor 1 -▁staples 1 -▁Stat 1 -▁bots 1 -▁Critical 1 -▁Atomic 1 -▁backseat 1 -▁Harmony 1 -▁sported 1 -PAC 1 -capacity 1 -rar 1 -▁smug 1 -Sport 1 -Islamic 1 -▁inscription 1 -Dear 1 -▁pastoral 1 -▁counselors 1 -▁locality 1 -▁perch 1 -▁Labrador 1 -▁withhold 1 -▁paranormal 1 -▁Kroger 1 -▁trimming 1 -▁mater 1 -.58 1 -▁Blogger 1 -–5 1 -Additional 1 -▁Grow 1 -▁Ruiz 1 -▁downloads 1 -Lab 1 -lied 1 -▁underestimated 1 -▁129 1 -▁1973. 1 -kot 1 -▁Ware 1 -▁aquarium 1 -▁hepatitis 1 -flat 1 -▁Juncker 1 -▁Derbyshire 1 -▁nagging 1 -▁Wanderers 1 -▁segregated 1 -▁constrained 1 -▁BU 1 -▁Hag 1 -▁Avon 1 -▁sleeps 1 -Associated 1 -straight 1 -▁chilled 1 -Daily 1 -▁retaliate 1 -military 1 -ouche 1 -total 1 -▁hardships 1 -liest 1 -limited 1 -.’’ 1 -emic 1 -▁repealed 1 -▁respecting 1 -▁Button 1 -▁Joaquin 1 -▁aquatic 1 -▁spreadsheet 1 -Down 1 -▁hound 1 -▁adapter 1 -▁WHY 1 -▁sprinkle 1 -▁SHA 1 -▁Ernie 1 -▁Cycle 1 -▁burdens 1 -otta 1 -▁Nou 1 -4.4 1 -illon 1 -Lord 1 -hri 1 -▁Elf 1 -▁ballroom 1 -▁Vanity 1 -▁Joining 1 -WO 1 -▁taxed 1 -▁compass 1 -ATR 1 -▁Flex 1 -▁marginally 1 -▁1981. 
1 -▁shear 1 -▁Aadhaar 1 -▁Thirteen 1 -▁deficiency 1 -▁invading 1 -▁overflowing 1 -▁Chesterfield 1 -▁sectarian 1 -▁untrue 1 -▁satire 1 -ulate 1 -▁Choose 1 -igen 1 -▁Hanover 1 -urge 1 -▁Lille 1 -▁clamp 1 -▁headlined 1 -▁buyback 1 -lette 1 -▁dramas 1 -▁Buster 1 -iling 1 -▁spraying 1 -▁homecoming 1 -▁EDT 1 -INA 1 -▁Lands 1 -fro 1 -▁pseudonym 1 -▁carnage 1 -▁Vargas 1 -▁Trenton 1 -▁nighttime 1 -lina 1 -▁adaptive 1 -▁downpour 1 -obo 1 -▁Terror 1 -▁waterproof 1 -watt 1 -.72 1 -▁Shankar 1 -▁Riding 1 -▁Jamaican 1 -tun 1 -terrorist 1 -cab 1 -▁cliché 1 -▁exhaustive 1 -▁Detention 1 -▁engagements 1 -▁Libby 1 -▁mapped 1 -▁entrusted 1 -▁premiership 1 -SV 1 -▁Jungle 1 -▁analysed 1 -▁FIR 1 -female 1 -ngo 1 -Ki 1 -▁Grimm 1 -▁copying 1 -times 1 -▁doctoral 1 -jal 1 -▁defiant 1 -▁brackets 1 -▁uncles 1 -meyer 1 -DX 1 -:44 1 -▁resorted 1 -▁bolstered 1 -nat 1 -▁Gonzaga 1 -▁Incorporated 1 -▁stalwart 1 -▁Emerald 1 -▁stagnant 1 -▁Mikhail 1 -▁cortex 1 -▁heterosexual 1 -▁impunity 1 -▁Scene 1 -SY 1 -tailed 1 -▁Pee 1 -▁insightful 1 -gul 1 -lex 1 -tiv 1 -▁layoffs 1 -rrington 1 -intelligence 1 -Christmas 1 -▁defied 1 -bai 1 -xe 1 -▁strands 1 -▁parity 1 -bike 1 -▁Augustine 1 -▁valleys 1 -▁taps 1 -▁Weird 1 -Bri 1 -▁flute 1 -▁aches 1 -▁coward 1 -▁Jubilee 1 -▁microscope 1 -▁Hanoi 1 -▁bracing 1 -▁Skinner 1 -▁medic 1 -▁touchscreen 1 -▁birthplace 1 -▁Balance 1 -▁adoptive 1 -▁Winning 1 -▁Chow 1 -▁reconnect 1 -▁grouping 1 -▁Kano 1 -OF 1 -▁Fixed 1 -▁Fol 1 -▁Romo 1 -160 1 -▁$85 1 -▁Gain 1 -▁carts 1 -estimate 1 -▁Tibet 1 -▁joyous 1 -▁insulted 1 -▁lull 1 -▁PRESENTATION 1 -▁Poverty 1 -▁reconnaissance 1 -▁Accern 1 -▁Pvt 1 -▁authoritative 1 -▁Tyrone 1 -▁Population 1 -▁persistence 1 -▁Castillo 1 -▁spate 1 -riel 1 -▁Pr 1 -▁Subway 1 -▁Devi 1 -sho 1 -Hol 1 -▁bedside 1 -▁oppressed 1 -▁fostering 1 -▁motioned 1 -Think 1 -▁Mika 1 -▁chime 1 -▁Valve 1 -▁involuntary 1 -Steve 1 -IVE 1 -▁diminishing 1 -▁divides 1 -dee 1 -▁Southgate 1 -▁43, 1 -▁Mullen 1 -▁1-3 1 -xia 1 -mente 1 -▁(31 1 -▁formulate 1 -▁kar 1 -▁Laboratories 1 -▁masculine 1 -▁pineapple 1 -▁crotch 1 -▁unrestricted 1 -▁Fahrenheit 1 -▁1976. 
1 -▁Jonny 1 --52 1 -tant 1 -▁Singles 1 -▁mumbled 1 -EK 1 -▁Courthouse 1 -▁Eh 1 -▁Greenwood 1 -▁signings 1 -▁looms 1 -▁Moro 1 -▁births 1 -lower 1 -▁Lukas 1 -▁shiver 1 -▁Bara 1 -▁digress 1 -shares 1 -▁barefoot 1 -▁Equi 1 -▁bland 1 -▁undeniable 1 -▁unreliable 1 -4.3 1 -▁Frei 1 -▁temporal 1 -▁baffled 1 -▁reviewers 1 -build 1 -▁centerpiece 1 -▁Atkins 1 -▁Lars 1 -▁Combine 1 -18) 1 -▁securely 1 -rica 1 -▁Options 1 -▁restated 1 -▁Radar 1 -▁Jac 1 -▁Davenport 1 -▁expiration 1 -▁Siemens 1 -▁Covenant 1 -▁HTTP 1 -▁Marquez 1 -▁dictated 1 -▁wallpaper 1 -VB 1 -▁Shift 1 -▁Marr 1 -Whether 1 -pocket 1 -hun 1 -▁4.3 1 -▁137 1 -▁Consul 1 -▁Duff 1 -▁pep 1 -▁syn 1 -▁Label 1 -▁coli 1 -▁lovingly 1 -9% 1 -ubi 1 -▁grips 1 -▁autoplay 1 -Kar 1 -▁specification 1 -▁indefinite 1 -▁locomotive 1 -▁Jio 1 -▁nuns 1 -▁Byrd 1 -▁Bombardier 1 -▁Embiid 1 -▁garrison 1 -▁Ghosn 1 -▁foreclosure 1 -▁deviation 1 -▁Felipe 1 -▁agility 1 -▁botched 1 -▁GREAT 1 -▁Means 1 -▁unsupported 1 -▁Alright 1 -▁Eliza 1 -hail 1 -▁Tweed 1 -▁Playoff 1 -▁reinforcements 1 -▁Choir 1 -▁Gabrielle 1 -▁racer 1 -layer 1 -▁heavenly 1 -▁Heinz 1 -ISE 1 -▁accolades 1 -▁Christi 1 -ehl 1 -▁NR 1 -:26 1 -▁simulations 1 -lough 1 -▁contradiction 1 -▁paperback 1 -▁Spec 1 -▁Continuing 1 -▁extravagant 1 -▁Utilities 1 -▁annexation 1 -▁breaching 1 -▁Horne 1 -▁kneel 1 -▁yielding 1 -experience 1 -represented 1 -▁Tol 1 -Qaida 1 -▁sweetest 1 -▁soybeans 1 -▁Platt 1 -▁miraculous 1 -▁Alamo 1 -▁whistleblower 1 -atur 1 -mash 1 -▁PEG 1 -▁loaf 1 -▁Christensen 1 -▁ISLAMABAD 1 -▁Kourtney 1 -▁funniest 1 -▁Duggar 1 -▁adept 1 -▁moderation 1 -▁fortified 1 -respect 1 -▁quell 1 -▁afloat 1 -▁defy 1 -▁ku 1 -hundred 1 -▁ru 1 -▁levelled 1 -▁Basil 1 -4.1 1 -▁importing 1 -▁rad 1 -▁dismantled 1 -jen 1 -▁ceilings 1 -▁Commodore 1 -▁Enquirer 1 -▁Roethlisberger 1 -▁glancing 1 -▁psychedelic 1 -▁unforgettable 1 -▁Epstein 1 -▁Cath 1 -▁virtues 1 -▁discs 1 -▁aunts 1 -doc 1 -▁floats 1 -▁psyche 1 -▁Dems 1 -▁Imam 1 -nav 1 -99) 1 -▁absorbing 1 -▁fetus 1 -▁VIII 1 -▁uttered 1 -▁impartial 1 -▁meteor 1 -▁Pena 1 -indiscernible 1 -▁bluff 1 -▁Spielberg 1 -▁THERE 1 -▁retake 1 -dina 1 -riot 1 -HM 1 -mite 1 -▁inappropriately 1 -▁KB 1 -▁athleticism 1 -bru 1 -▁filtered 1 -equal 1 -▁Saleh 1 -▁feminists 1 -▁DU 1 -▁golfer 1 -▁centimeters 1 -4.2 1 -▁filibuster 1 -▁chromosome 1 -▁combustion 1 -▁Harrington 1 -▁minivan 1 -▁Blaine 1 -▁Merck 1 -▁inward 1 -▁racket 1 -near 1 -▁playbook 1 -▁NU 1 -▁divisional 1 -▁rushes 1 -▁Iranians 1 -lid 1 -▁Rox 1 -▁brood 1 -roid 1 -:33 1 -▁Guests 1 -firm 1 -Louis 1 -▁Ezra 1 -rata 1 -▁awfully 1 -________________ 1 -ono 1 -▁Dynamics 1 -▁Johan 1 -▁solemn 1 -▁intervening 1 -▁culmination 1 -hitting 1 --45 1 -▁Kaplan 1 -▁braid 1 -▁deductible 1 -▁rulings 1 -▁san 1 -▁Ollie 1 -▁MAY 1 -▁unload 1 -▁artery 1 -▁sponsoring 1 -▁XS 1 -▁Bullet 1 -▁stupidity 1 -▁witches 1 -▁leaps 1 -▁amino 1 -Really 1 -▁SEA 1 -▁traveler 1 -AO 1 -▁turbine 1 -▁CLEVELAND 1 -▁Rockefeller 1 -▁dispensaries 1 -▁prophecy 1 -▁estimation 1 -▁Guardians 1 -▁Version 1 -▁casing 1 -▁Zelda 1 -IGHT 1 -▁paradox 1 -▁tart 1 -▁Sung 1 --2017 1 -▁COM 1 -▁continents 1 -?!" 
1 -▁pastors 1 -sent 1 -▁fue 1 -▁Bour 1 -lington 1 -▁Dolly 1 -▁sparse 1 -▁Smile 1 -▁AZ 1 -▁diver 1 -MY 1 -▁midterms 1 -▁obnoxious 1 -▁Organizers 1 -▁psychologists 1 -▁Aztec 1 -▁wept 1 -▁Aren 1 -▁Sko 1 -▁citation 1 -▁99% 1 -everything 1 -cul 1 -suke 1 -▁resonate 1 -Dad 1 -2009 1 -▁ineligible 1 -▁Yum 1 -▁vowing 1 -8.6 1 -ür 1 -catching 1 -▁republican 1 -llie 1 -▁Victims 1 -▁immoral 1 -▁efficiencies 1 -▁Revival 1 -▁robbing 1 -▁clung 1 -▁biologist 1 -▁cla 1 -▁Hurley 1 -▁Tencent 1 -▁Brew 1 -regular 1 -▁bitterly 1 -kee 1 -▁dataset 1 -TSX 1 -▁hovered 1 -▁smoother 1 -Center 1 -▁Trick 1 -▁frogs 1 -friends 1 -▁0.000 1 -chem 1 -▁bolted 1 -▁excitedly 1 -▁leaping 1 -western 1 -▁referees 1 -▁Novel 1 -▁anatomy 1 -▁occupies 1 -▁contraception 1 -▁uterus 1 -▁pegged 1 -▁Jarvis 1 -▁caregiver 1 -▁pretext 1 -▁CNET 1 -▁147 1 -▁bullies 1 -▁1966, 1 -▁Merritt 1 -▁Whitman 1 -▁feeder 1 -Pat 1 -standard 1 -▁PK 1 -▁predatory 1 -▁lore 1 -▁Gmail 1 -hope 1 -▁$1.9 1 -▁disposed 1 -▁outburst 1 -=" 1 -▁1885 1 -solid 1 -▁TL 1 -▁awarding 1 -▁Gutierrez 1 -▁giraffe 1 -▁Neighborhood 1 -▁flair 1 -▁Gian 1 -▁Cottage 1 -▁JUST 1 -▁luncheon 1 -▁programmer 1 -wild 1 -▁CSU 1 -▁sands 1 -▁humiliated 1 -whether 1 -▁conceive 1 -▁endanger 1 -▁Virat 1 -2.3 1 -UB 1 -▁markings 1 -▁themed 1 -▁Eating 1 -▁Chattanooga 1 -▁remembrance 1 -▁disclosing 1 -▁168 1 -▁saints 1 -▁knocks 1 -▁Pharmaceutical 1 -▁Salon 1 -▁Alongside 1 -▁gritty 1 -.'” 1 -▁tolerant 1 -wor 1 -▁Wit 1 -▁pierced 1 -jon 1 -Japan 1 -Until 1 -▁authorised 1 -▁subsidized 1 -▁Saga 1 -▁hangover 1 -▁Rus 1 -▁Vehicles 1 -▁equip 1 -▁quilts 1 -▁Zel 1 -▁clocked 1 -▁ascertain 1 -▁asserts 1 -▁Chol 1 -▁McCl 1 -▁Chemistry 1 -▁obscene 1 -▁brunt 1 -▁idiots 1 -▁romp 1 -▁automobiles 1 -iyah 1 -▁Gras 1 -ogen 1 -▁2014-15 1 -▁1975. 1 -fitting 1 -▁signalled 1 -fore 1 -Jewish 1 -▁ebook 1 -rite 1 -▁spawn 1 -▁docks 1 -▁weeping 1 -Show 1 -▁Gum 1 -iwa 1 -▁Anger 1 -▁Pha 1 -▁mentoring 1 -▁Aden 1 -▁MCDANIEL 1 -▁raccoon 1 -▁Chichester 1 -▁eurozone 1 -▁Dyer 1 -▁Aj 1 -▁Oversight 1 -▁Gilmore 1 -▁(40 1 -▁handcuffs 1 -▁Berman 1 -▁8.1 1 -STER 1 -▁tweaks 1 -▁Seat 1 -▁vibration 1 -▁Britons 1 -rna 1 --0) 1 -▁1974. 
1 -udo 1 -▁Mash 1 -▁sheltered 1 -▁examines 1 -▁JB 1 -aware 1 -izz 1 -▁Corridor 1 -▁Mozambique 1 -▁plentiful 1 -▁contradictory 1 -▁filtering 1 -uction 1 -▁ponds 1 -▁Hogg 1 -▁Pension 1 -4.00 1 -▁hurled 1 -▁Marg 1 -▁flirting 1 -learning 1 -▁1962, 1 -boys 1 -▁scholarly 1 -▁Adi 1 -▁1942, 1 -▁1,100 1 -▁Businesses 1 -ا 1 -▁Bolsonaro 1 -▁financier 1 -▁prototypes 1 -▁Asheville 1 -▁multilateral 1 -TED 1 -▁Rip 1 -▁effortlessly 1 -illion 1 -ACK 1 -▁fragmented 1 -▁Decker 1 -binding 1 -▁Ev 1 -▁0-2 1 -▁Chico 1 -▁antagonist 1 -▁brag 1 -▁flared 1 -▁Ax 1 -▁enhancements 1 -▁Lime 1 -▁recounts 1 -akh 1 -▁Margin 1 -▁arrogance 1 -001 1 -opoulos 1 -▁commencement 1 -▁Depp 1 -▁LS 1 -▁winless 1 -▁roadmap 1 -▁Farr 1 -▁$120 1 -▁thwarted 1 -ongo 1 -AX 1 -▁Alder 1 -location 1 -▁Angola 1 -▁manuscripts 1 -liff 1 -ode 1 -▁slug 1 -–7 1 -▁Bram 1 -▁slalom 1 -▁decency 1 -▁troupe 1 -▁Remove 1 -▁erection 1 -▁abandonment 1 -▁flop 1 -▁mute 1 -▁consensual 1 -▁rapist 1 -▁Rem 1 -▁Bucs 1 -analysis 1 -▁Tami 1 -▁Milli 1 -mix 1 -▁Briefing 1 -▁malt 1 -▁fonts 1 -▁softened 1 -▁memes 1 -▁chewed 1 -▁biases 1 -▁Zor 1 -lma 1 -▁BMO 1 -kk 1 -▁implicit 1 -created 1 -▁debilitating 1 -▁timid 1 -▁Greenwich 1 -▁apprenticeship 1 -▁Torre 1 -NL 1 -chill 1 -nton 1 -▁parameter 1 -▁householder 1 -bian 1 -nta 1 -▁Daw 1 -▁Aly 1 -▁Quran 1 -described 1 -▁Sla 1 -stadt 1 -▁Inner 1 -▁marvelous 1 -▁mystical 1 -▁patented 1 -sad 1 -▁visuals 1 -▁Cart 1 -▁harming 1 -▁McBride 1 -▁moniker 1 -▁sacrificing 1 -▁biodiversity 1 -▁Boehner 1 -NX 1 -▁1881 1 -▁Mund 1 -▁migraine 1 -▁ancestor 1 -▁inefficient 1 -rial 1 -▁improvised 1 -Off 1 -▁assassinated 1 -▁simulate 1 -▁nudge 1 -▁USSR 1 -▁Fritz 1 -▁needless 1 -▁ripple 1 -▁1862 1 -▁elicit 1 -▁desks 1 -Drive 1 -▁Daley 1 -▁HER 1 -kari 1 -Yo 1 -▁workflow 1 -▁Crimes 1 -▁throwback 1 -Land 1 -▁Myles 1 -▁Shame 1 -▁foresee 1 -▁Bautista 1 -▁substitution 1 -▁unsuspecting 1 -▁iced 1 -▁taxis 1 -▁Conversely 1 -Er 1 -▁Tool 1 -▁Benton 1 -▁undone 1 -▁Fantastic 1 -▁Polar 1 -London 1 -▁Raul 1 -byte 1 -▁Ashes 1 -▁hides 1 -Saharan 1 -▁Shak 1 -AIDS 1 -▁overtake 1 -▁Guthrie 1 -▁Maureen 1 -▁brilliance 1 -▁french 1 -▁Gingrich 1 -▁Pilgrim 1 -▁undervalued 1 -▁uneventful 1 -:31 1 -▁Vaughn 1 -▁testifying 1 -lone 1 -▁bows 1 -▁Thousand 1 -▁Loan 1 -▁loophole 1 -Box 1 -▁Tex 1 -▁portrays 1 -▁bestowed 1 -oper 1 -▁133 1 -aco 1 -Key 1 -ART 1 -▁hanged 1 -moo 1 -DY 1 -PORT 1 -▁Sou 1 -▁concerted 1 -▁intellect 1 -▁Diabetes 1 -▁Logistics 1 -▁fodder 1 -boot 1 -▁Munster 1 -▁bustling 1 -▁Dou 1 -label 1 -▁robbers 1 -▁Singaporean 1 -▁handwriting 1 -Oct 1 -▁Supporting 1 -▁picturesque 1 -unless 1 -▁exiled 1 -▁swan 1 -▁thinkers 1 -▁Done 1 -▁contesting 1 -▁yells 1 -▁gestured 1 -▁(# 1 -IDE 1 -▁Amal 1 -▁Increasing 1 -▁Muir 1 -▁platoon 1 -▁Rift 1 -▁François 1 -▁moonlight 1 -crew 1 -▁saliva 1 -▁Samson 1 -▁repeats 1 -▁attest 1 -▁interviewer 1 -cylinder 1 -▁undercut 1 -bine 1 -▁Agri 1 -▁diagnose 1 -create 1 -▁ter 1 -▁Interesting 1 -▁SAT 1 -▁cruised 1 -▁collaborations 1 --33 1 -▁Jul 1 -▁Mortgage 1 -▁monstrous 1 -▁Wiggins 1 -▁parlor 1 -▁namesake 1 -▁corpses 1 -▁bon 1 -▁18% 1 -▁rumoured 1 -▁Shinzo 1 -▁harshly 1 -▁ticks 1 -▁1971. 
1 -▁warden 1 -,250 1 -Instead 1 -▁7-0 1 -▁Pirate 1 -eze 1 -▁Quote 1 -▁occupational 1 -▁$2.4 1 -▁alas 1 -▁Advocates 1 -tang 1 -▁tributes 1 -▁Laws 1 -▁fragment 1 -▁probes 1 -▁Botswana 1 -▁agitation 1 -▁infertility 1 -▁allotted 1 -▁clump 1 -▁EB 1 -▁showcases 1 -▁Rousey 1 -cigarettes 1 -tura 1 -▁IVF 1 -amu 1 -▁shitty 1 -▁aids 1 -▁duly 1 -particularly 1 -brush 1 -▁carton 1 -church 1 -▁Nye 1 -9.6 1 -▁CSS 1 -Four 1 -▁Tessa 1 -▁£9 1 -igne 1 -▁Bello 1 -▁Gothic 1 -▁Slack 1 -pia 1 -▁Chap 1 -▁homestead 1 -▁compiler 1 -▁Orr 1 -sche 1 -▁splashed 1 -leaf 1 -▁Gladstone 1 -▁toxins 1 -▁Alfa 1 -▁occupancy 1 -▁Ligue 1 -▁abolish 1 -▁Rib 1 -▁hospitalization 1 -▁fest 1 -▁VO 1 -nies 1 -tooth 1 -lead 1 -▁Philipp 1 -▁fabrication 1 -▁defends 1 -▁gloom 1 -▁DEA 1 -▁215 1 -appa 1 -▁Nik 1 -▁Barrie 1 -mmel 1 -▁warmly 1 -▁gutter 1 -▁1968. 1 -▁additionally 1 -MV 1 -score 1 -▁Boul 1 -▁Inspired 1 -▁Dude 1 -▁pioneers 1 -▁STAR 1 -:02 1 -▁Creation 1 -conference 1 -▁entails 1 -▁Cube 1 -enabled 1 -sei 1 -▁presume 1 -▁succeeds 1 -▁toned 1 -:59 1 -▁herb 1 -▁waged 1 -▁moaning 1 -▁banter 1 -▁Baku 1 -▁Monarch 1 -▁Abubakar 1 -▁Coventry 1 -▁Priebus 1 -▁ancestral 1 -▁Xinhua 1 -▁preventive 1 -▁disapproval 1 -▁simulator 1 -▁sprang 1 -▁fathom 1 -roo 1 -stones 1 -indi 1 -▁Farage 1 -▁rookies 1 -▁nick 1 -▁screws 1 -▁sealing 1 -▁usable 1 -answer 1 -Space 1 -plica 1 -▁slaughtered 1 -uchi 1 -▁ICU 1 -▁funky 1 -▁commend 1 -▁DVDs 1 -▁cascade 1 -▁Drummond 1 -▁ce 1 -Dade 1 -▁Coverage 1 -▁clipped 1 -▁Alone 1 -seen 1 -▁MDC 1 -▁tacos 1 -▁OVER 1 -▁flicked 1 -▁Wah 1 -▁Funds 1 -▁crammed 1 -▁bridal 1 -▁moss 1 -cient 1 -▁inconvenient 1 -hora 1 -psy 1 -▁Wanda 1 -▁Chamberlain 1 -5.7 1 -equipped 1 -▁neighbourhoods 1 -▁trespassing 1 -▁1969. 1 -▁defunct 1 -strike 1 -▁deem 1 -hours 1 -GHz 1 -▁Dak 1 -▁JS 1 -rex 1 -▁Duran 1 -▁legit 1 -▁forgiving 1 -▁Kaz 1 -nap 1 -nya 1 -▁payday 1 -▁camped 1 -▁atom 1 -inate 1 -▁Stor 1 -▁Worst 1 -▁condominium 1 -▁Peanut 1 -▁JR 1 -▁Belgrade 1 -▁staggered 1 -▁Boa 1 -▁Geological 1 -▁Yama 1 -▁senseless 1 -▁homeschool 1 -▁compressed 1 -▁hairy 1 -▁quantify 1 -▁thi 1 -▁Chess 1 -▁entrants 1 -▁Knock 1 -bold 1 -▁Roku 1 -▁Marley 1 -▁morph 1 -▁Matters 1 -▁attendants 1 -wit 1 -▁loosen 1 --4, 1 -:46 1 -▁Futures 1 -rub 1 -▁Wilkins 1 -▁Leipzig 1 -▁discrete 1 -▁Judd 1 -▁Kelvin 1 -▁Angry 1 -▁exc 1 -▁taxing 1 -blogspot 1 -▁preferring 1 -▁uplifting 1 -aar 1 -2.9 1 -▁Domino 1 -▁polluted 1 -▁Shahid 1 -▁Jules 1 -▁causal 1 -Chinese 1 -▁Berk 1 -▁1887 1 -Money 1 -edo 1 -Roman 1 -▁embryos 1 -▁Bren 1 -▁staffed 1 -▁1840 1 -▁Auction 1 --75 1 -▁IDF 1 -▁mentors 1 -▁jog 1 -branded 1 -▁1941, 1 -▁Connolly 1 -▁Fountain 1 -▁leukemia 1 -▁tycoon 1 -▁anomaly 1 -▁Britney 1 -▁whipping 1 -▁Import 1 -▁2025. 
1 -▁splits 1 -▁Cinderella 1 -▁Municipality 1 -oba 1 -▁Edo 1 -▁dispel 1 -liga 1 -lynn 1 -▁ramping 1 -▁Daesh 1 -▁seminars 1 -bear 1 -▁lizard 1 -▁avoidance 1 -▁cafes 1 -▁1863 1 -és 1 -▁176 1 -▁Merlin 1 -▁Punch 1 -bass 1 -OUR 1 -▁incompetence 1 -▁Cologne 1 -▁Loo 1 -▁Burma 1 -▁visitation 1 -▁1888 1 -nga 1 -▁Luka 1 -▁Kyrie 1 -▁Weapons 1 -▁Goodell 1 -cid 1 -▁Breath 1 -▁nuanced 1 -▁Gir 1 -▁ravaged 1 -▁bureaucrats 1 -▁Lon 1 -▁Akbar 1 -LED 1 -▁Samoa 1 -▁retrospective 1 -ILL 1 -RON 1 -nio 1 -▁optimized 1 -▁learners 1 -▁pouch 1 -▁sar 1 -▁protester 1 -▁Comprehensive 1 -▁chaplain 1 -▁dissolution 1 -▁Delaney 1 -▁communism 1 -▁unsustainable 1 -▁Ves 1 -▁hue 1 -▁brothel 1 -leton 1 -▁GAAP 1 -▁primer 1 -▁Mask 1 -industrial 1 -▁Fairfield 1 -▁enrich 1 -Les 1 -▁LAKE 1 -▁kidneys 1 -▁Sac 1 -▁voucher 1 -▁Pages 1 -▁Fairy 1 -▁Couple 1 -▁Digi 1 -elman 1 -truth 1 -▁Guantanamo 1 -▁cassette 1 -▁pharmacies 1 -▁commotion 1 -▁(50 1 -▁Paterson 1 -▁Guitar 1 -▁Frozen 1 -Os 1 -▁Seb 1 -▁Panda 1 -▁Joker 1 -tir 1 -dol 1 -▁Qur 1 -video 1 -▁Option 1 -▁mayhem 1 -▁din 1 -▁ruining 1 -▁carbs 1 -▁Winners 1 -▁stretcher 1 -▁presses 1 -Based 1 -▁185 1 -▁negligent 1 -▁Quentin 1 -▁Telangana 1 -▁broccoli 1 -▁glimmer 1 -▁heaviest 1 -▁Oswald 1 -▁bombardment 1 -▁Liber 1 -▁Babies 1 -▁Secure 1 -▁Buffy 1 -▁remanded 1 -▁containment 1 -▁sprained 1 -▁snip 1 -BD 1 -police 1 -▁resigning 1 -.61 1 -authored 1 -▁Lynne 1 -▁Burt 1 -rman 1 -▁stitching 1 -▁qui 1 -▁gunpoint 1 -▁alias 1 -▁SEO 1 -▁drains 1 -▁elastic 1 -▁Scientology 1 -▁protracted 1 -▁Rakhine 1 -▁bittersweet 1 -▁tidal 1 -▁Zane 1 -▁NOAA 1 -▁amazement 1 -▁cutest 1 -▁Corona 1 -.93 1 -▁Hendricks 1 -▁Roh 1 -▁suites 1 -▁nan 1 -rates 1 -having 1 -▁spills 1 -lein 1 -perhaps 1 -▁disperse 1 -▁assertions 1 -▁underscored 1 -Sur 1 -▁???? 1 -▁puddle 1 -civil 1 -.81 1 -▁Gael 1 -▁halves 1 -▁geological 1 -▁chills 1 -dong 1 -:08 1 -▁projector 1 -▁durability 1 -▁Ludwig 1 -▁Navarro 1 -▁biotechnology 1 -FW 1 -▁Swimming 1 -▁renegotiate 1 -▁à 1 -gaard 1 -▁publicist 1 -▁£20 1 -▁takeoff 1 -▁liquids 1 -▁Newtown 1 -arily 1 -▁cemented 1 -▁stout 1 -▁vocational 1 -▁affirm 1 -▁Basel 1 -target 1 -ident 1 -▁ascend 1 -▁Dirty 1 -▁rattle 1 -British 1 -▁annoy 1 -ADA 1 -▁Houthi 1 -▁Blum 1 -▁Kahn 1 -▁peeled 1 -urb 1 -kovic 1 -▁\ 1 -OV 1 -▁straws 1 -9.00 1 -Arab 1 -▁validated 1 -▁indifferent 1 -zal 1 -▁burying 1 -▁Decision 1 -▁agile 1 -▁Dora 1 -quest 1 -▁Bose 1 -▁dumpster 1 -aha 1 -▁5.7 1 -50) 1 -▁undertook 1 -▁Proud 1 -▁competes 1 -▁Chim 1 --38 1 -7.6 1 -▁Grun 1 -aker 1 -▁squirrels 1 -▁relinquish 1 -sweet 1 -▁worldview 1 -▁incite 1 -claim 1 -▁ailments 1 -:04 1 -trop 1 -▁disgruntled 1 -▁imaginative 1 -▁obituary 1 -▁Purchase 1 -▁Protocol 1 -▁$4.5 1 -▁plains 1 -▁leans 1 -▁Meh 1 -▁intro 1 -▁Beef 1 -▁suspending 1 -▁Sven 1 -▁Homo 1 -vier 1 -▁merry 1 -▁offside 1 -▁Bally 1 -▁nets 1 -▁Myth 1 -hong 1 -▁dismantling 1 -▁Autism 1 -▁700,000 1 -▁unlucky 1 -▁Glendale 1 -▁fumbled 1 -▁gutted 1 -▁perpetrated 1 -▁strewn 1 -▁mirrored 1 -xton 1 -▁Sick 1 -Mel 1 -▁Rho 1 -nock 1 -glu 1 -shak 1 -▁Orient 1 -▁Escape 1 -▁(32 1 -middle 1 -▁awakening 1 -▁Traders 1 -▁Fir 1 -bodied 1 -▁Royce 1 -▁Dust 1 -▁Buch 1 -ikh 1 -ject 1 -decade 1 -▁quilting 1 -▁177 1 -▁Paraguay 1 -▁astronomical 1 -aur 1 -girls 1 -▁flowering 1 -▁Dutton 1 --95 1 -▁FTC 1 -▁Hayley 1 -▁alma 1 -MIT 1 -ssie 1 -▁Thoughts 1 -▁experimentation 1 -▁generously 1 -Perhaps 1 -▁Wonderful 1 -▁Submit 1 -▁managerial 1 -▁handheld 1 -Master 1 -▁itching 1 -▁pearl 1 -▁tending 1 -▁scratches 1 -OUND 1 -▁solace 1 -▁Heavenly 1 -▁storylines 1 -sufficient 1 -▁bruise 1 -▁Brass 1 -▁Smithsonian 1 -▁audible 1 -▁dummy 1 -▁Turin 1 
-▁cunt 1 -▁1977. 1 -▁Twitch 1 -▁Mb 1 -▁pla 1 -▁billboard 1 -ordinate 1 -riya 1 -▁1884 1 -▁Anil 1 -▁Birch 1 -bud 1 -:37 1 -trin 1 -▁hoops 1 -▁pew 1 -▁garments 1 -▁reformed 1 -natal 1 -/15 1 -▁Benghazi 1 -▁Mendoza 1 -▁carpenter 1 -▁Diesel 1 -▁Galloway 1 -bearing 1 -▁confines 1 -▁exec 1 -rse 1 -lysis 1 -ulous 1 -▁prolong 1 -▁advert 1 -▁budge 1 -strength 1 -▁detecting 1 -▁Archive 1 -bring 1 -▁Doors 1 -ffe 1 -▁projectile 1 -▁Pes 1 -▁Bermuda 1 -▁gimmick 1 -▁rammed 1 -▁truths 1 -▁Dro 1 -▁trenches 1 -▁Kell 1 -▁Jeanne 1 -▁fared 1 -▁elves 1 -▁É 1 -immigrant 1 -Pan 1 -▁Garry 1 -▁Tori 1 -▁Rite 1 -▁Mimi 1 -▁indifference 1 -def 1 -_1 1 -CAR 1 -▁intrinsic 1 -▁screenshots 1 -▁weekdays 1 -▁alternating 1 -▁commemoration 1 -▁resonance 1 -icing 1 -▁149 1 -:29 1 -▁undue 1 -▁delusion 1 -▁crooked 1 -▁BM 1 -broken 1 -▁Kamala 1 -▁appointing 1 -6.00 1 -aph 1 -▁reasoned 1 -▁£100 1 -▁LDS 1 -▁assailant 1 -▁reaffirm 1 -▁mediation 1 -GER 1 -▁symbolize 1 -talking 1 -▁Rea 1 -6.2 1 -cked 1 -▁Honour 1 -▁Available 1 -▁Maui 1 -▁Bianca 1 -▁Tavern 1 -▁Yellen 1 -▁terminus 1 -▁larvae 1 -▁OMG 1 -▁Staten 1 -▁[17] 1 -uar 1 -Scott 1 -▁VAT 1 -▁synth 1 -oxy 1 -▁matchups 1 -▁summarize 1 -▁152 1 -▁kite 1 -▁smoothie 1 -eza 1 -▁1967. 1 -▁Kes 1 -▁badges 1 -▁reefs 1 -▁skater 1 -▁emailing 1 -▁necessities 1 -▁Assault 1 -▁interfered 1 -4.9 1 -▁Married 1 -▁Newsom 1 -▁jobless 1 -▁Including 1 -▁deli 1 -xy 1 -abu 1 -Say 1 -captain 1 -▁CG 1 -▁homo 1 -▁skit 1 -circ 1 -▁peanuts 1 -▁Exit 1 -▁Hamid 1 -▁2016) 1 -▁numerical 1 -▁finalised 1 -▁diagram 1 -aligned 1 -▁breaths 1 -▁Saad 1 -▁Chaos 1 -ITY 1 -▁tarp 1 -▁Alejandro 1 -▁decentralized 1 -▁translating 1 -▁cadre 1 -▁Mickelson 1 -▁Yellowstone 1 -▁protestors 1 -▁Pumpkin 1 -▁Vy 1 -nh 1 -▁Cheng 1 -▁flattened 1 -▁zipper 1 -▁KY 1 -▁1978. 1 -▁Sao 1 -▁unearthed 1 -▁strangled 1 -▁val 1 -lol 1 -▁endeavors 1 -▁Marks 1 -▁53, 1 -coloured 1 -▁tabled 1 -▁Esk 1 -▁appreciates 1 -▁Producer 1 -▁Lauer 1 -▁GIF 1 -▁5.8 1 -▁Klaus 1 -development 1 -▁Distinguished 1 -▁Eileen 1 -▁Twice 1 -▁rife 1 -▁marrow 1 -▁boredom 1 -▁acquainted 1 -▁Darnold 1 -▁blurry 1 -ović 1 -osphere 1 -▁Git 1 -▁Asher 1 -▁Glu 1 -▁gorilla 1 -ises 1 -▁Zara 1 -▁Facing 1 -rash 1 -kri 1 -▁wrench 1 -▁?" 
1 -▁escapes 1 -▁BJ 1 -▁Springer 1 -▁thunderstorm 1 -▁Lub 1 -▁begs 1 -▁Commenting 1 -▁composers 1 -▁Jal 1 -▁realising 1 -▁HOUSTON 1 -▁Yosemite 1 -▁imitate 1 -▁Babcock 1 -▁FIRST 1 -blade 1 -▁Rhine 1 -▁ICO 1 -▁mosaic 1 -▁Garfield 1 -▁skewed 1 -mari 1 -▁HUD 1 -▁FP 1 -▁gull 1 -▁anxiously 1 -▁rejoin 1 -▁BL 1 -▁bundles 1 -,'” 1 -gne 1 -rro 1 -MF 1 -GN 1 -members 1 -UE 1 -▁Brits 1 -AIN 1 -▁Subject 1 -▁comforted 1 -▁Burning 1 -ibo 1 -hammer 1 -▁harp 1 -▁Odyssey 1 -▁equilibrium 1 -▁gravitational 1 -▁mathematician 1 -▁turret 1 -▁disinformation 1 -▁Lizzie 1 -▁motorist 1 -▁157 1 -▁selfless 1 -▁LHP 1 -▁Piece 1 -▁pant 1 -▁Lowell 1 -Au 1 -▁textiles 1 -▁2018: 1 -▁Ain 1 -▁€2 1 -Sir 1 -▁Chai 1 -▁Alam 1 -▁flux 1 -▁outsourcing 1 -ás 1 -▁jolt 1 -▁fractures 1 -▁Taken 1 -omer 1 -▁Profit 1 -▁Satellite 1 -▁profiling 1 -▁hobbies 1 -/05/2018 1 -▁Exclusive 1 -▁Brat 1 -▁Dio 1 -▁activating 1 -uris 1 -▁Fiscal 1 -▁Axel 1 -▁1891 1 -mpl 1 -▁hardline 1 -▁3-3 1 -continue 1 -studded 1 -▁vets 1 -▁Ep 1 -immer 1 -5.8 1 -▁LO 1 -French 1 -▁titan 1 -▁4-5 1 -appropriate 1 -▁Material 1 -tori 1 -NFL 1 -▁DeSantis 1 -▁Elephant 1 -▁nucleus 1 -▁nurturing 1 -▁ventilation 1 -▁Marquette 1 -▁greenback 1 -▁Threat 1 -▁wallets 1 -▁McCall 1 -wani 1 -Twitter 1 -phobia 1 -▁practise 1 -▁Dalit 1 -▁Cate 1 -anticipated 1 -France 1 -▁seaside 1 -▁Psycho 1 -icate 1 -▁headwinds 1 -▁Arundel 1 -cf 1 -90,000 1 -▁therapists 1 -▁FF 1 -▁conducts 1 -▁interfaces 1 -▁fullest 1 -▁sen 1 -▁plainly 1 -▁Recall 1 -▁sarcastic 1 -▁Spokane 1 -▁irrespective 1 -▁notwithstanding 1 -▁oatmeal 1 -▁retaliatory 1 -deeply 1 -▁throng 1 -▁voicemail 1 -▁paramilitary 1 -▁shelling 1 -▁Triangle 1 -▁pokemon 1 -▁renters 1 -▁ISIL 1 -▁[16] 1 -▁drugged 1 -▁ledger 1 -▁47, 1 -▁Designer 1 -▁Rah 1 -▁Gayle 1 -▁purportedly 1 -▁awaken 1 -▁$2.1 1 -▁Deco 1 -▁Maroon 1 -Thursday 1 -advance 1 -▁farmhouse 1 -▁Naka 1 -sorry 1 -▁1886 1 -▁vividly 1 -▁Savi 1 -▁Older 1 -yah 1 -:38 1 -afa 1 -▁healer 1 -Jim 1 -physical 1 -▁programmers 1 -▁Owners 1 -▁potion 1 -▁Clooney 1 -▁Macquarie 1 -▁kennel 1 -▁inundated 1 -▁Cassandra 1 -▁broth 1 -▁UM 1 -▁digestive 1 -▁salsa 1 -▁Paolo 1 -▁secretaries 1 -▁134 1 -tana 1 -▁locating 1 -▁punishable 1 -▁tirelessly 1 -▁hyp 1 -Pop 1 -▁tentatively 1 -0.3 1 -3.9 1 -▁Quaker 1 -▁1945. 
1 -▁Tough 1 -▁diners 1 -▁quotas 1 -campus 1 -cken 1 -▁Changing 1 -▁GU 1 -HU 1 -reported 1 -▁Defender 1 -▁brisk 1 -▁thermo 1 -Neal 1 -7.7 1 -▁smugglers 1 -▁Pia 1 -ş 1 -▁absorption 1 -▁conceding 1 -▁officiating 1 -▁Exeter 1 -▁edging 1 -TING 1 -knee 1 -▁Normandy 1 -▁physiological 1 -▁WANT 1 -:41 1 -▁Kis 1 --1990 1 -▁unfounded 1 -▁reuse 1 -▁heartache 1 -▁Momma 1 -▁SHE 1 -▁Madame 1 -▁nasal 1 -▁Ey 1 -▁snowball 1 -▁rusty 1 -▁Haynes 1 -▁Doris 1 -holes 1 -completely 1 -▁disgraced 1 -ennial 1 -▁Indoor 1 -3.1 1 -▁intruder 1 -▁Pav 1 -thon 1 -▁Sicily 1 -▁cervical 1 -▁physique 1 -▁McGuire 1 -▁Pfizer 1 -▁Therapy 1 -▁slut 1 -▁Heathrow 1 -▁rainforest 1 -▁Friendship 1 -▁halting 1 -:03 1 -▁painkillers 1 -▁Plato 1 -WAY 1 -rough 1 -▁Delivery 1 -dec 1 -▁Jody 1 -▁extinguished 1 -footed 1 -▁Polly 1 -bic 1 -Anyone 1 -cies 1 -р 1 -ghi 1 -Fest 1 -▁slur 1 -▁Fabian 1 -▁condom 1 -▁Arpaio 1 -▁Gronkowski 1 -▁improbable 1 -▁Kaduna 1 -▁paralysis 1 -▁antibodies 1 -▁Cutler 1 -▁Branson 1 -▁Weiner 1 -▁PRESS 1 -▁racks 1 -▁sizeable 1 -▁churn 1 -▁acquaintances 1 -jah 1 -▁DG 1 -▁towed 1 -▁Amish 1 -racing 1 -▁Doo 1 -▁Parties 1 -▁Idris 1 -volume 1 -ej 1 -▁vests 1 -▁#4 1 -▁looting 1 -▁hp 1 -▁billionaires 1 -▁Damage 1 -▁comm 1 -design 1 -▁6-7 1 -▁Capp 1 -atha 1 -▁gadget 1 -▁wetlands 1 -▁blends 1 -▁backfield 1 -Sea 1 -fuel 1 -mean 1 -▁Bast 1 -essa 1 -▁MSP 1 -hei 1 -▁Libertarian 1 -▁adversary 1 -▁clumsy 1 -▁treacherous 1 -▁Cascade 1 -▁Circus 1 -▁Residential 1 -▁Swi 1 -▁Fitbit 1 -ucker 1 -▁ethos 1 -▁NB 1 -▁Shad 1 -▁converts 1 -minus 1 -▁planetary 1 -▁spoiler 1 -▁Edith 1 -▁clicks 1 -▁mil 1 -▁preached 1 -▁surpassing 1 -▁Jah 1 -Sky 1 -▁smacked 1 -▁Clock 1 -▁Punk 1 -▁oust 1 -▁Ris 1 -▁Role 1 -▁embarking 1 -▁45% 1 -▁Coutinho 1 -▁buffalo 1 -▁inhibit 1 -▁Changes 1 -▁enrichment 1 -▁Hilary 1 -▁ONLY 1 -▁democrat 1 -▁AMY 1 -▁golfers 1 -▁Juno 1 -erly 1 -unc 1 -▁ph 1 -loc 1 -Tra 1 -▁Chanel 1 -▁pastry 1 -ió 1 -▁Blessed 1 -ege 1 -▁invent 1 -▁stung 1 -TW 1 -▁comparative 1 -▁hideous 1 -▁nesting 1 -▁Centennial 1 -▁Migration 1 -▁Holloway 1 -▁muzzle 1 -▁unnecessarily 1 -▁Automobile 1 -▁scarcity 1 -▁vaping 1 -▁Briggs 1 -5.6 1 -▁Proposition 1 -▁licences 1 -▁hacks 1 -EPA 1 -102 1 -scope 1 -▁rescheduled 1 -▁Invalid 1 -▁insensitive 1 -centered 1 -false 1 -▁prematurely 1 -▁natives 1 -Someone 1 -▁Perl 1 -▁wrongful 1 -▁clone 1 -rana 1 -▁Geek 1 -▁BidaskClub 1 -▁Heinrich 1 -▁tiring 1 -▁Stirling 1 -xie 1 -▁Posts 1 -▁Hollow 1 --5, 1 -kul 1 -▁evacuations 1 -▁expressive 1 -▁RBC 1 -oki 1 -joy 1 -▁downgrade 1 -▁statutes 1 -▁Youtube 1 -Western 1 -▁rejoice 1 -▁“‘ 1 -▁136 1 -Martin 1 -▁Anders 1 -▁Tens 1 -shooting 1 -▁Minute 1 -▁Fitch 1 -▁stockpile 1 -:32 1 -▁Recession 1 -▁geometric 1 -▁Kenyatta 1 -▁transformative 1 -▁Corbin 1 -▁Heads 1 -hani 1 -▁progressives 1 -▁ding 1 -▁Osama 1 -▁nonprofits 1 -peak 1 -hh 1 -▁weaponry 1 -▁TP 1 -▁Blanco 1 -▁$53 1 -▁DOES 1 -▁Fee 1 -▁alterations 1 -mire 1 -▁jerked 1 -▁Sack 1 -cord 1 -▁tyre 1 -▁UConn 1 -▁exodus 1 -▁lousy 1 -▁Hotspur 1 -▁polio 1 -▁Hershey 1 -▁Carn 1 -VP 1 -yed 1 -▁pct 1 -▁storefront 1 -▁organism 1 -Leary 1 -▁Regions 1 -▁Rohit 1 -▁pollen 1 -▁$2.3 1 -▁Bail 1 -▁showroom 1 -▁Calls 1 -▁shuffled 1 -▁vans 1 -▁pas 1 -▁millionaire 1 -current 1 -▁pioneered 1 -▁Saba 1 -inho 1 -▁$2.6 1 -▁External 1 -▁Rachael 1 -▁inflatable 1 -▁Goddess 1 -▁72- 1 -▁Francesco 1 -▁Molina 1 -▁complexion 1 -▁FIA 1 -castle 1 -▁Shareholders 1 -aye 1 -▁routed 1 -105 1 -▁designate 1 -▁Aggies 1 -discrimination 1 -Tu 1 -▁Const 1 -EW 1 -inn 1 -▁Dul 1 -▁leveraged 1 -▁WEST 1 -▁classy 1 -▁Economists 1 -grove 1 -▁weep 1 -▁Rosenberg 1 -▁rehearsals 1 -FG 1 -metal 1 -▁blending 1 
-▁Petra 1 -bbe 1 -▁introductory 1 -▁Dynasty 1 -▁wavelength 1 -▁Failure 1 -▁Roughly 1 -▁winters 1 -▁erupt 1 -▁mayors 1 -eum 1 -▁[18] 1 -3.00 1 -▁Mighty 1 -▁85% 1 -▁grove 1 -grandchildren 1 -▁noses 1 -▁payoff 1 -8.8 1 -:07 1 -▁sow 1 -▁Bund 1 -▁Toshiba 1 -▁citrus 1 -▁Ghouta 1 -▁Structure 1 -▁Sabbath 1 -▁Paddock 1 -▁burrow 1 -▁Rauner 1 -▁hush 1 -▁Colleen 1 -▁Become 1 -▁Jurassic 1 -End 1 -▁Mathews 1 -▁Lenin 1 -/19 1 -▁freshwater 1 -650 1 -▁orphan 1 -Corp 1 -▁Meyers 1 -▁Beam 1 -▁deteriorate 1 -▁pronounce 1 -▁logos 1 -▁adolescents 1 -MDB 1 -erton 1 -▁ambulances 1 -▁furiously 1 -▁1961, 1 -0.6 1 -▁correctness 1 -bred 1 -▁Sega 1 -▁standby 1 -▁punter 1 -▁deficiencies 1 -▁drizzle 1 -▁Sampson 1 -▁Bullock 1 -▁Horgan 1 -▁acknowledgement 1 -▁ISS 1 -▁320 1 -▁Yuri 1 -▁nicest 1 -▁Parti 1 -HB 1 -▁Babe 1 -▁JT 1 -uva 1 -▁Chel 1 -▁McGowan 1 -▁splendid 1 -▁Werner 1 -▁mismanagement 1 -▁Miriam 1 -▁equaliser 1 -▁dormant 1 -▁slideshow 1 -▁captains 1 -lore 1 -▁fleece 1 -▁globalization 1 -zek 1 -▁gy 1 -▁Markus 1 -▁Scale 1 -▁handwritten 1 -NV 1 -▁Bark 1 -▁Darkness 1 -:34 1 -▁benchmarks 1 -traumatic 1 -guy 1 -utz 1 -▁syndicate 1 -▁attractiveness 1 -▁perilous 1 -▁upstate 1 -YN 1 -▁OFF 1 -liv 1 -▁deity 1 -▁fondly 1 -▁spiritually 1 -▁catapult 1 -▁Pebble 1 -▁Vineyard 1 -▁Tacoma 1 -▁regression 1 -letter 1 -holm 1 -gender 1 -ales 1 -charged 1 -tay 1 -hardt 1 -▁0.1% 1 -uber 1 -▁scant 1 -▁RP 1 -▁artificially 1 -tune 1 -!!!! 1 -▁Kant 1 -▁cutter 1 -cold 1 -▁grassy 1 -▁KR 1 -▁lotion 1 -:52 1 -▁demonetisation 1 -▁worrisome 1 -▁pumpkins 1 -▁CBO 1 -ordinator 1 -▁someplace 1 -▁apos 1 -▁ardent 1 -▁Unknown 1 -▁Eighth 1 -▁resent 1 -▁centred 1 -▁ducked 1 -into 1 -Light 1 -▁Cambodian 1 -▁campers 1 -▁Morse 1 -aris 1 -▁conditioned 1 -/16 1 -garh 1 -▁Sahara 1 -▁hostess 1 -▁BCE 1 -whi 1 -today 1 -learn 1 -▁Bry 1 -▁Wid 1 -▁hikers 1 -▁Restoration 1 -▁cohesive 1 -▁derogatory 1 -▁impetus 1 -▁rake 1 -▁Karlsson 1 -▁tracing 1 -▁1830 1 -▁[15] 1 -▁resounding 1 -▁fret 1 -▁praises 1 -▁Ginny 1 -▁condoms 1 -▁LAS 1 -▁unilaterally 1 -rance 1 -▁pharma 1 -proclaimed 1 -▁revoke 1 -▁Bangor 1 -prone 1 -▁unloaded 1 -▁wiggle 1 -Monday 1 -▁Sight 1 -▁ramped 1 -▁$40,000 1 -▁resumes 1 -▁deceptive 1 -▁lapse 1 -▁pent 1 -rax 1 -▁Sent 1 -fee 1 -▁Raphael 1 -▁opaque 1 -▁unstoppable 1 -:47 1 -▁omitted 1 -▁Keane 1 -▁hinges 1 -▁stepfather 1 -▁refurbished 1 -▁telecoms 1 -▁peaking 1 -published 1 -▁dawned 1 -▁flyers 1 -▁playable 1 -▁esteem 1 -▁derive 1 -▁jewels 1 -jam 1 -▁brushes 1 -▁barber 1 -▁Qatari 1 -▁Shine 1 -deck 1 -▁Philosophy 1 -▁frigid 1 -▁pedigree 1 -▁Diversity 1 -▁contour 1 -▁teleport 1 -▁Zhou 1 -▁excelled 1 -▁ramps 1 -▁scraped 1 -▁melee 1 -▁Pall 1 -▁Neighbors 1 -seekers 1 -▁Terrorism 1 -▁STR 1 -▁Mead 1 -▁NASDAQ 1 -Richard 1 -▁ridicule 1 -▁rightfully 1 -rav 1 -▁erode 1 -▁Wee 1 -▁Sheet 1 -▁augment 1 -▁stripe 1 -▁forecasters 1 -▁Kwa 1 -▁palpable 1 -▁Immigrant 1 -▁teddy 1 -▁biotech 1 -▁unmarked 1 -▁etched 1 -wai 1 -▁Maw 1 -▁overwhelm 1 -▁Plants 1 -▁laced 1 -Ten 1 -▁Dh 1 -▁Spending 1 -Sullivan 1 -yellow 1 -▁Oriental 1 -Pol 1 -▁printers 1 -urban 1 -▁Progressives 1 -igno 1 -4% 1 -vation 1 -▁adversely 1 -▁Trap 1 -▁pups 1 -▁hairstyle 1 -resolution 1 -▁1882 1 -▁Broadcast 1 -▁NF 1 -grid 1 -CW 1 -▁Gou 1 -▁rods 1 -▁Adjusted 1 -▁Okinawa 1 -▁Nis 1 -▁Sixteen 1 -▁Winnie 1 -▁Jaya 1 -:48 1 -haw 1 -uit 1 -sites 1 -▁Stockton 1 -▁Males 1 -▁Goose 1 -kos 1 -MN 1 -▁gan 1 -ivate 1 -:09 1 -ife 1 -nke 1 -▁spinner 1 -▁tur 1 -▁Khal 1 -▁Aus 1 -:49 1 -▁Josef 1 -▁juggle 1 -6.3 1 -MER 1 -ggin 1 -▁Dept 1 -▁misunderstand 1 -▁stabilized 1 -▁Exercise 1 -▁Gideon 1 -▁Reservation 1 -▁Schwarzenegger 1 
-▁admiring 1 -▁conducive 1 -▁prosthetic 1 -▁fumes 1 -▁biopic 1 -▁misplaced 1 -▁Radi 1 -▁flashy 1 -▁medallist 1 -▁relapse 1 -▁napping 1 -▁Peer 1 -▁cleaners 1 -▁Tips 1 -▁waning 1 -▁Grimes 1 -▁sourcing 1 -▁booklet 1 -▁abusers 1 -▁Hadid 1 -▁grasped 1 -nti 1 -▁computational 1 -ulin 1 -▁Guns 1 -▁Otis 1 -▁Tum 1 -▁Huge 1 -Marie 1 -▁repealing 1 -seller 1 -▁expel 1 -NW 1 -▁Jaitley 1 -▁Solskjaer 1 -▁welterweight 1 -▁gazing 1 -▁NEED 1 -▁innovate 1 -▁nestled 1 -▁tantrum 1 -rh 1 -▁crusade 1 -▁Ref 1 -▁Marcelo 1 -▁gloomy 1 -▁Rigg 1 -▁Forestry 1 -▁Developers 1 -▁visionary 1 -dim 1 -▁ABS 1 -▁Isis 1 -▁jams 1 -Night 1 -▁Dig 1 -▁$350 1 -▁rhetorical 1 -▁victor 1 --2000 1 -2008 1 -▁Auditor 1 -▁dampen 1 -▁DETROIT 1 -▁Hathaway 1 -▁storied 1 -▁giddy 1 -7.8 1 -▁NOTICE 1 -▁1947, 1 -▁Killing 1 -▁Sherry 1 -▁Cata 1 -▁Bethel 1 -▁predictive 1 -▁Karma 1 -▁Dealer 1 -▁Afro 1 -▁stockings 1 -ape 1 -rice 1 -Yesterday 1 -breeze 1 -▁nonstop 1 -▁Instructions 1 -▁timeless 1 -collectively 1 -▁Links 1 -▁smartest 1 -180 1 -Help 1 -▁Julien 1 -uo 1 -6.6 1 -pea 1 -emon 1 -▁Vos 1 -lbert 1 -▁6.2 1 -▁Pluto 1 -▁innate 1 -loe 1 -▁orbital 1 -▁Caitlin 1 -▁astronomy 1 -▁fledgling 1 -▁Mbappe 1 -▁enhances 1 -Aid 1 -,999 1 -▁bordering 1 -rob 1 -▁Pau 1 -Five 1 -▁Pang 1 -asta 1 -▁Madness 1 -▁leadoff 1 -▁Incredible 1 -▁exceedingly 1 -▁toughness 1 -▁1876 1 -13) 1 -▁tamp 1 -▁comedians 1 -▁Boh 1 -▁Sew 1 -▁parcels 1 -▁Terrell 1 -▁pondering 1 -verb 1 -▁conflicted 1 -▁bagel 1 -▁162 1 -spread 1 -▁PRI 1 -▁Nowadays 1 -▁restitution 1 -▁cliche 1 -amos 1 -▁BEACH 1 -▁quipped 1 -winter 1 -▁indices 1 -▁Scandinavian 1 -Ke 1 -▁Afterward 1 -▁146 1 -▁1868 1 -▁detonated 1 -▁manifestation 1 -▁wagons 1 -ahi 1 -▁combating 1 -▁Naz 1 -period 1 -▁buildup 1 -▁vanity 1 -BJP 1 -▁deci 1 -▁teaming 1 -vant 1 -▁watchers 1 -▁Accordingly 1 -▁Warm 1 -endo 1 -▁148 1 -ographer 1 -▁variance 1 -▁Kaufman 1 -▁decipher 1 -▁proverbial 1 -▁ploy 1 -▁Susie 1 -▁4.8 1 -▁diffuse 1 -▁slippers 1 -reviewed 1 -evich 1 -▁spirituality 1 -▁Dorset 1 -▁Pony 1 -dica 1 -summer 1 -▁Iain 1 -▁600,000 1 -▁promoters 1 -▁Wald 1 -▁Funk 1 -timer 1 -▁1-800- 1 -treat 1 -aving 1 -EZ 1 -mura 1 -▁Hawking 1 -▁displeasure 1 -▁grudge 1 -▁COL 1 -alle 1 -▁1948, 1 -▁Rab 1 -▁GTX 1 -▁Tinder 1 -▁Alley 1 -▁repent 1 -▁plum 1 -▁Hul 1 -▁PARK 1 -button 1 -▁Yuan 1 -▁pollster 1 -▁Humans 1 -▁hy 1 -▁lifeline 1 -athi 1 -▁Thy 1 -▁Filip 1 -▁Torah 1 -▁disliked 1 -enna 1 -▁Portman 1 -▁grenades 1 -eil 1 -▁triumphant 1 -▁entertainer 1 -▁mower 1 -▁Obrador 1 -▁storyteller 1 -▁Virgil 1 -▁coffers 1 -▁molested 1 -▁Loud 1 -▁Activity 1 -▁hamlet 1 -▁Elise 1 -▁Ness 1 -▁sweetness 1 -▁Mansion 1 -thanks 1 -igi 1 -▁recoup 1 -▁1871 1 -▁Frankly 1 -OY 1 -▁INC 1 -▁ignite 1 -▁endangering 1 -▁$64 1 -▁classify 1 -▁pap 1 -9.9 1 -▁nearer 1 -▁Ott 1 -nza 1 -▁Gow 1 -eron 1 -▁climbers 1 -▁Named 1 -itor 1 -▁grading 1 -▁Clause 1 -▁vested 1 -▁Flanagan 1 -▁Jimenez 1 -▁Nassau 1 -▁enticing 1 -▁Sinatra 1 -▁Movies 1 -▁Seminary 1 -▁touting 1 -flies 1 -Malley 1 -▁KL 1 -kamp 1 -▁troublesome 1 -▁firmware 1 -▁Lad 1 -activated 1 -citizen 1 -wicket 1 -▁Testing 1 -▁Houthis 1 -▁Pale 1 -▁suicides 1 -▁fax 1 -belt 1 -Put 1 -▁rut 1 -▁pagan 1 -cooked 1 -330 1 -(" 1 -updated 1 -managed 1 -▁Lough 1 -bp 1 -okay 1 -Nick 1 -ž 1 -▁Ezekiel 1 -▁pods 1 -▁stale 1 -▁Hawley 1 -▁avenge 1 -▁hating 1 -▁Guer 1 -▁Soo 1 -things 1 -▁minimise 1 -▁6.1 1 -lva 1 -» 1 -asu 1 -▁coworker 1 -▁Deck 1 -dun 1 -▁Fabric 1 -vad 1 -▁keyboards 1 -▁rightful 1 -▁Cebu 1 -▁modernization 1 -ORE 1 -▁tits 1 -▁landmarks 1 -▁onlookers 1 -▁Hud 1 -▁emblem 1 -▁Kejriwal 1 -▁quarantine 1 -▁Rosemary 1 -▁fiasco 1 -▁sulfur 1 -▁Greyhound 1 
[The diff continues with the deletion of a SentencePiece vocabulary file: one `<token> <count>` entry per line (e.g. `▁cursing 1`), with word-initial pieces marked by `▁` (U+2581). Several thousand entries, all with count 1, are omitted here.]
-▁otro 1 -▁Whew 1 -▁rebirth 1 -▁sequentially 1 -▁HOME 1 -aglia 1 -▁reconstructed 1 -▁buns 1 -▁steamy 1 -▁Ethel 1 -▁Cola 1 -.20% 1 -enger 1 -either 1 -▁Winds 1 -abiding 1 -▁rupture 1 -LK 1 -abra 1 -Sean 1 -jection 1 --72 1 -▁glee 1 -▁Fries 1 -▁compatriot 1 -ntz 1 -DNA 1 -▁multimillion 1 -capped 1 -ambu 1 -▁bandmate 1 -▁Mazz 1 -▁Ceremony 1 -▁lunatic 1 -▁multiplied 1 -▁pelvic 1 -▁reunification 1 -▁symmetry 1 -▁herbicide 1 -▁Muscat 1 -▁supermodel 1 -▁unequivocally 1 -▁plaid 1 -▁Phar 1 -▁Kie 1 -PY 1 -▁Massive 1 -▁fanned 1 -▁tireless 1 -▁lunge 1 -▁grasses 1 -FORD 1 -▁Cavi 1 -thru 1 -▁wits 1 -▁rotor 1 -▁quirks 1 -▁Asus 1 -▁Parry 1 -▁Auntie 1 -▁interns 1 -▁quin 1 -▁Minnie 1 -▁Filipinos 1 -▁Vols 1 -behind 1 -efficiency 1 -▁Hanks 1 -▁instructing 1 -▁skeptics 1 -Jackson 1 -▁busier 1 -▁CPA 1 -▁chronicle 1 -aban 1 -ampa 1 --63 1 --58 1 -guided 1 -▁Feast 1 -▁Fli 1 -▁grassland 1 -▁Reasons 1 -▁340 1 -▁converse 1 -▁backfire 1 -Wh 1 -▁Consolidated 1 -▁VEGAS 1 -▁unmistakable 1 -▁Moffat 1 -▁Signature 1 -▁Meridian 1 -▁Registry 1 -▁Cllr 1 -▁slush 1 -▁Naidu 1 -▁Kareem 1 -▁sob 1 -Der 1 -▁Gillette 1 -▁deterred 1 -▁Lazarus 1 -▁Teach 1 -▁301 1 -▁heatwave 1 -▁deregulation 1 -▁Liability 1 -▁Tav 1 -▁Saddle 1 -▁Paleo 1 -▁Marble 1 -▁crosswalk 1 -▁Tight 1 -▁Micron 1 -▁Dah 1 -▁Tools 1 -▁canceling 1 -▁Leia 1 -dea 1 -functional 1 -▁aspiration 1 -▁aptly 1 -▁Regulations 1 -▁WN 1 -▁Oceanic 1 -▁0.4% 1 -▁remodeling 1 -▁shuddered 1 -▁pickle 1 -▁Node 1 -▁totalling 1 -▁crunchy 1 -Government 1 -▁messes 1 -Science 1 -▁summarized 1 -▁circulate 1 -▁stalling 1 -▁Visiting 1 -▁cosy 1 -everyone 1 -derived 1 -Lin 1 -▁factored 1 -esta 1 -▁Deir 1 -▁simmer 1 -▁strangest 1 -Pool 1 -▁Relay 1 -▁Rol 1 -▁potholes 1 -▁Lacy 1 -▁Newspaper 1 -▁Zia 1 -▁militar 1 -▁Willoughby 1 -▁disqualify 1 -▁flimsy 1 -▁frivolous 1 -▁propensity 1 -▁tributary 1 -▁Balochistan 1 -▁overcrowding 1 -▁Maurizio 1 -▁Flanders 1 -▁hypocritical 1 -▁oncology 1 -▁macroeconomic 1 -▁LIFE 1 -▁(2008) 1 -onga 1 -▁Perdue 1 -▁stewards 1 -▁DeAndre 1 -▁Skull 1 -▁babysitting 1 -▁plume 1 -▁Spoiler 1 -cliffe 1 -▁Granger 1 -▁Prussian 1 -▁Chil 1 -lucky 1 -▁189 1 -Africa 1 -▁firefighting 1 -▁fostered 1 -▁Friendly 1 -▁Davey 1 -▁latched 1 -▁calmer 1 -▁2009) 1 -▁reposition 1 -222 1 -6.7% 1 -Gov 1 -▁despised 1 -▁TG 1 -Anthony 1 -▁expedite 1 -anov 1 -Table 1 -Alright 1 -▁hunts 1 -clip 1 -gross 1 -▁hunk 1 -▁Arrivals 1 -▁warring 1 -▁Badger 1 -▁Bev 1 -▁Divide 1 -▁Licht 1 -productive 1 -▁eff 1 -▁Babylon 1 -▁Admittedly 1 -▁Crimestoppers 1 -▁turquoise 1 -▁overarching 1 -▁Leisure 1 -▁sparring 1 -▁Faulkner 1 -▁Burundi 1 -▁trotted 1 -▁rapport 1 -▁Ripley 1 -▁Dijk 1 -▁Vivo 1 -nker 1 -▁702 1 -▁Quan 1 -▁footnote 1 -▁Raising 1 -▁elated 1 -▁devolved 1 -bite 1 -▁heyday 1 -▁implements 1 -▁cowardly 1 -naut 1 -Bring 1 -▁Maloney 1 -▁Spade 1 -▁Padma 1 -▁withdrawals 1 -[3] 1 -▁byproduct 1 -▁Pear 1 -▁Bree 1 -ark 1 -inian 1 -▁lawns 1 -▁Mura 1 -▁Painting 1 -▁oyster 1 -uther 1 -▁Congregation 1 -Ven 1 -informed 1 -▁Bundaberg 1 -▁Cheltenham 1 -▁Constantinople 1 -▁DeGeneres 1 -▁McIntosh 1 -▁Mindanao 1 -▁intestinal 1 -▁zebra 1 -▁embodies 1 -▁inscribed 1 -▁Morneau 1 -node 1 -▁Firearms 1 -▁unannounced 1 -▁colonists 1 -▁Avatar 1 -▁CFPB 1 -▁Wildfire 1 -▁Westchester 1 -igue 1 -▁PKK 1 -▁dived 1 -▁oysters 1 -Union 1 -hhhh 1 -▁Fancy 1 -mona 1 -asan 1 -ometric 1 -▁Voter 1 -▁timelines 1 -▁occult 1 -mania 1 -▁edits 1 -▁soliciting 1 -▁217 1 -▁hurriedly 1 -▁Stubb 1 -▁‘‘ 1 -▁foreigner 1 -▁Penal 1 -▁meager 1 -▁10-0 1 -amide 1 -▁resurrect 1 -▁Lauri 1 -▁hunch 1 -▁hitmaker 1 -▁inched 1 -▁bled 1 -▁pancake 1 -▁keenly 1 -▁airstrike 1 -▁indebted 1 
-▁Khawaja 1 -▁Wainwright 1 -▁descriptive 1 -▁gratification 1 -▁caterpillar 1 -▁impractical 1 -shut 1 -▁UNLV 1 -▁Goldsmith 1 -▁Š 1 -▁rites 1 -▁Lear 1 -▁Townsville 1 -▁loyalists 1 -▁flops 1 -▁Seventy 1 -Santa 1 -▁hoard 1 -▁CHP 1 -▁snorted 1 -▁Felt 1 -AME 1 -waste 1 -hopper 1 -▁Mott 1 -▁Rapper 1 -− 1 -▁taunting 1 -▁flares 1 -gap 1 -Files 1 -▁frying 1 -▁laborers 1 -inclusive 1 -▁giveaways 1 -▁Wax 1 -hello 1 -▁VS 1 -▁Tua 1 -▁winked 1 -▁pathologist 1 -▁Exhibit 1 -▁foundational 1 -▁Devo 1 -▁Aristotle 1 -▁deforestation 1 -▁Eternal 1 -▁doubleheader 1 -▁Valverde 1 -▁oncologist 1 -▁Constance 1 -▁Antioch 1 -▁Prepare 1 -▁polarizing 1 -▁imperialism 1 -▁scorched 1 -▁pulpit 1 -▁appointee 1 -39) 1 -▁Nes 1 -▁caliphate 1 -barrel 1 -▁Goss 1 -▁CAF 1 -semi 1 -▁Jillian 1 -▁thrashed 1 -▁Duc 1 -▁GF 1 -▁PAT 1 -wanted 1 -▁Steps 1 -Ali 1 -▁shimmering 1 -uu 1 -▁Whiteside 1 -pond 1 -Bot 1 -ues 1 -▁Falling 1 -Obama 1 -emergency 1 -▁fins 1 -▁gorge 1 -▁dope 1 -▁additives 1 -likely 1 -▁burdened 1 -▁sto 1 -▁Curse 1 -▁bashing 1 -▁Raf 1 -▁Grin 1 -▁patterned 1 -▁unwind 1 -▁Correction 1 -TZ 1 -▁shined 1 -Af 1 -▁Oppenheimer 1 -▁Explosive 1 -▁achievable 1 -▁eerily 1 -▁feisty 1 -▁airwaves 1 -▁snagged 1 -▁Lantern 1 -▁Vai 1 -tempo 1 -▁Preliminary 1 -▁wholeheartedly 1 -▁$0.02 1 -▁embroidered 1 -▁geologist 1 -▁DK 1 -▁Vasil 1 -▁Gaines 1 -▁wares 1 -▁poppy 1 -▁Oba 1 -▁Matsu 1 -enta 1 -▁Darby 1 -quat 1 -▁SALT 1 -chev 1 -▁perpetually 1 -▁Celebrate 1 -hah 1 -▁Duo 1 -▁thump 1 -ahn 1 -▁rebates 1 -—“ 1 -▁$77 1 -▁warmest 1 -ICS 1 -▁Biz 1 -▁BBB 1 -cide 1 -Ball 1 -dean 1 -aven 1 -▁acutely 1 -▁7.2 1 -▁Pow 1 -▁Sting 1 -▁melts 1 -▁Caucasian 1 -▁bewildered 1 -▁inseparable 1 -▁polluting 1 -▁recognising 1 -editor 1 -▁Ernesto 1 -▁iterations 1 -▁janitor 1 -▁Judah 1 -▁(2009) 1 -▁sauna 1 -▁Vulcan 1 -▁remarried 1 -acion 1 -▁Appropriations 1 -Hun 1 -▁nonviolent 1 -▁confronts 1 -▁holster 1 -uary 1 -logged 1 -▁Load 1 -▁shielding 1 -▁denials 1 -Due 1 -washing 1 -▁Waco 1 -cote 1 -▁Oddly 1 -▁6.0 1 -▁Hyun 1 -▁Khe 1 -lunk 1 -▁afterlife 1 -▁attachments 1 -▁hinge 1 -▁approves 1 -▁AMP 1 -▁mugs 1 -▁incision 1 -▁alerting 1 -▁curbs 1 -received 1 -▁Randle 1 -Hoo 1 -▁HATE 1 -iang 1 -▁Zal 1 -beg 1 -glyc 1 -nham 1 -▁Robo 1 -▁NOTHING 1 -▁cuddling 1 -▁Ekiti 1 -▁Bloomington 1 -▁Tebow 1 -▁(2017) 1 -▁Domingo 1 -▁Coo 1 -▁excessively 1 -▁underwhelming 1 -!:) 1 -▁Tyne 1 -▁Established 1 -Magic 1 -changer 1 -▁Sauce 1 -▁strategists 1 -▁Told 1 -▁PIC 1 -▁Arya 1 -▁wade 1 -▁Dancer 1 -▁outfitted 1 -REN 1 -▁qu 1 -evaluate 1 -▁Pero 1 -Anything 1 -unde 1 -chart 1 -▁Quickly 1 -ál 1 -Anyway 1 -▁Explore 1 -wak 1 -Fight 1 -▁Cari 1 -mitted 1 -holic 1 --49 1 -▁Siege 1 -▁memorials 1 -▁noodle 1 -▁Abram 1 -▁conscientious 1 -▁signatories 1 -▁Tobago 1 -▁Murkowski 1 -▁Rappler 1 -▁Ferrell 1 -provide 1 -▁Bernier 1 -▁swine 1 -▁Honeywell 1 -▁CAA 1 -rogen 1 -watched 1 -▁pellets 1 -▁Earn 1 -▁Concerned 1 -▁Ledger 1 -▁banished 1 -px 1 -▁owls 1 -▁Goodbye 1 -▁vitality 1 -▁Cecilia 1 -▁Ado 1 -iente 1 -Web 1 -▁concussions 1 -kki 1 -dick 1 -▁disapprove 1 -▁transplants 1 -IU 1 -ROW 1 -122 1 -blank 1 -▁Ashwin 1 -hound 1 -izzi 1 -Ag 1 -▁discreet 1 -▁fumbles 1 -▁Buc 1 -▁Tir 1 -▁pinched 1 -▁1855 1 -▁trackers 1 -▁steward 1 -▁steroid 1 -▁pundit 1 -▁Ruff 1 -▁guaranteeing 1 -▁Piedmont 1 -▁Rotterdam 1 -▁bubbly 1 -▁liberating 1 -▁sophistication 1 -▁Potomac 1 -▁capitol 1 -▁McMurray 1 -▁Subscription 1 -▁Midstream 1 -▁Magnolia 1 -IQ 1 -▁£85 1 -▁Beatty 1 -▁plugging 1 -▁transgression 1 -▁clientele 1 -▁undated 1 -uzi 1 -114 1 -▁keyword 1 -▁Highlanders 1 -▁Picks 1 -▁buzzed 1 -▁pathology 1 -▁Insp 1 -▁Painter 1 -▁65% 1 -cino 1 -▁Slu 1 
-▁Held 1 -yak 1 -olf 1 -▁sidekick 1 -▁CIS 1 -▁repo 1 -▁explorers 1 -uster 1 -▁grounding 1 -vish 1 -cliff 1 -▁tacit 1 -▁GDPR 1 -▁MON 1 -▁Harm 1 -▁PLA 1 -▁bays 1 -▁HT 1 -▁Cops 1 -▁Tou 1 -▁Worker 1 -wain 1 -▁1100 1 -soon 1 -▁Cabo 1 -cephal 1 -▁Madam 1 -▁Deferred 1 -▁Territories 1 -▁chimpanzee 1 -▁nauseous 1 -▁McCaskill 1 -▁Tarantino 1 -▁cardiologist 1 -▁Pyramid 1 -▁exhibitors 1 -▁midwives 1 -▁Homestead 1 -▁Osun 1 -printed 1 -/30 1 -▁rpm 1 -▁Dorn 1 -▁callers 1 -▁REM 1 -▁gro 1 -▁simmering 1 -▁huff 1 -▁necklaces 1 -▁Handel 1 -▁Spit 1 -lob 1 -▁benefitted 1 -fellow 1 -▁inventions 1 -VIEW 1 -▁princes 1 -cera 1 -▁Kli 1 -▁Dup 1 -thos 1 -▁unearth 1 -▁TRI 1 -▁lira 1 -JC 1 -dition 1 -▁souvenir 1 -▁negligible 1 -▁sympathies 1 -▁molesting 1 -▁Khalifa 1 -▁Scores 1 -▁2010) 1 -▁loudspeaker 1 -▁tiene 1 -▁Minecraft 1 -▁(2006) 1 -▁omni 1 -rigg 1 -▁Tit 1 -▁Rage 1 -ln 1 -▁Hir 1 -TAL 1 -▁firestorm 1 -▁Keating 1 -▁percentile 1 -CIS 1 -▁HOL 1 -Bit 1 -▁Brush 1 -Bridge 1 -ansky 1 -▁Korn 1 -▁effortless 1 -▁(100 1 -▁auditors 1 -▁1937, 1 -▁growling 1 -Germany 1 -▁whitewash 1 -▁tranquil 1 -▁supplemented 1 -▁tiers 1 -▁224 1 -▁rangers 1 -▁CONCACAF 1 -▁hereditary 1 -▁Nightmare 1 -▁convergence 1 -▁Carmichael 1 -▁molten 1 -▁underpinning 1 -▁Splash 1 -democratic 1 -▁Barnsley 1 -▁HDMI 1 -▁clogged 1 -▁Aramco 1 -▁Dewey 1 -▁Consulate 1 -▁Deepika 1 -▁const 1 -▁jumpsuit 1 -▁complains 1 -▁6.9 1 -▁SNC 1 -struck 1 -strict 1 -▁apocalyptic 1 -▁focussed 1 -GET 1 -▁iPads 1 -▁beginners 1 -▁Shall 1 -▁neutralize 1 -▁Nal 1 -fueled 1 -prepared 1 -▁trusty 1 -pio 1 -solving 1 -▁hipster 1 -yte 1 -▁Harp 1 -▁ether 1 -▁ethno 1 -▁é 1 -▁Intercept 1 -▁Aging 1 -smoke 1 -▁OSU 1 -▁repel 1 -▁Patriarch 1 -rava 1 -▁Corvette 1 -▁Vintage 1 -▁shelved 1 -▁Faisal 1 -▁Quartz 1 -▁pamphlet 1 -▁skyscraper 1 -▁dissident 1 -▁pretzel 1 -▁airway 1 -▁ExxonMobil 1 -▁Belleville 1 -▁Dalai 1 -▁colluded 1 -mak 1 -magic 1 -cheek 1 -▁Proof 1 -▁devoured 1 -▁beaver 1 -▁wry 1 -▁Gö 1 -trophy 1 -roughly 1 -▁tact 1 -▁TRA 1 -▁Wasn 1 -▁flickering 1 -▁glowed 1 -▁SAR 1 -▁silicone 1 -▁WWF 1 -▁blender 1 -▁dusting 1 -▁tickle 1 -hopping 1 -▁raked 1 -▁Bevin 1 -Sputnik 1 -▁nervousness 1 -▁WBC 1 -▁orbits 1 -▁LD 1 -kirk 1 -▁1.25 1 -zoo 1 -▁meld 1 -Ever 1 -▁Ideas 1 -▁sweetie 1 -9.5% 1 -▁90,000 1 -▁Plane 1 -▁disciple 1 -nky 1 -▁fuller 1 -guess 1 -▁fireball 1 -.80% 1 -▁Activision 1 -▁Puigdemont 1 -▁reciprocal 1 -▁Bazaar 1 -▁Dresden 1 -▁DELHI 1 -▁precedence 1 -▁fairytale 1 -▁foreground 1 -▁budgeting 1 -NAN 1 -▁snare 1 -▁Kovac 1 -▁pang 1 -itzer 1 -▁drier 1 -▁expertly 1 -▁Arcade 1 -▁Cron 1 -▁Repair 1 -▁rebranded 1 -/0 1 -▁loomed 1 -▁wealthier 1 -▁eluded 1 -▁Gaelic 1 -▁lapses 1 -▁previews 1 -▁slabs 1 -mmmm 1 -▁Hays 1 -▁Bader 1 -operate 1 -▁nourish 1 -pak 1 -extreme 1 -▁whisked 1 -▁Cog 1 -▁Eisen 1 -▁acclimat 1 -▁neglecting 1 -▁reimburse 1 -kilometre 1 -▁quadruple 1 -▁parse 1 -▁limped 1 -.10% 1 -lose 1 -▁Gand 1 -▁converter 1 -▁cucumber 1 -Following 1 --12) 1 -vention 1 -bourg 1 -▁Defendant 1 -▁Prudential 1 -▁Survival 1 -▁admiral 1 -▁Jehovah 1 -▁sacrament 1 -▁Hobbs 1 -▁Braxton 1 -▁Ensemble 1 -▁maturing 1 -▁smartwatch 1 -▁cheesecake 1 -▁Haspel 1 -Seven 1 -▁Stud 1 -▁Knowles 1 -▁polarized 1 -▁Trojan 1 -▁clinically 1 -▁afflicted 1 -▁alleyway 1 -copy 1 -▁NSC 1 -quad 1 -easing 1 -▁civilisation 1 -▁mowed 1 -▁config 1 --2012 1 -▁Ashland 1 -▁shockingly 1 -▁Copp 1 -▁peck 1 -▁1858 1 -▁autographs 1 -KR 1 -▁prefix 1 -▁firewood 1 -▁practised 1 -werk 1 -▁puncture 1 -Saint 1 -▁Include 1 -Getting 1 -handling 1 -Australian 1 -▁Couch 1 -▁fuelling 1 -▁Pax 1 -▁concealing 1 -▁Hager 1 -▁chunky 1 -haya 1 -▁fished 1 -▁Gim 1 -Hor 1 
-▁Blount 1 -▁Novartis 1 -▁appreciating 1 -▁DENVER 1 -▁pretense 1 -▁encoding 1 -▁Haji 1 -▁sobre 1 -Cho 1 -▁prepping 1 -▁Herring 1 -antha 1 -zur 1 -rrie 1 -▁sobs 1 -▁$73 1 -▁Rami 1 -▁ruptured 1 -▁Wim 1 -▁Aung 1 -TG 1 -evo 1 -▁mobilized 1 -▁harsher 1 -▁HAR 1 -▁westward 1 -inducing 1 -▁Rach 1 -client 1 -mony 1 -Para 1 -regulated 1 -▁Kaka 1 -boe 1 -▁Spend 1 -▁bol 1 -PDP 1 -▁orthodox 1 -▁carcass 1 -▁smoker 1 -▁Telling 1 -▁Charl 1 -buying 1 -▁Guelph 1 -▁Homicide 1 -▁Riviera 1 -▁Favre 1 -▁Dungeon 1 -▁mudslide 1 -▁incursion 1 -▁Wentworth 1 -▁treble 1 -▁horrid 1 -▁shipwreck 1 -▁Corning 1 -▁Hardware 1 -phonic 1 -▁Eskom 1 -▁Hutton 1 -▁Nia 1 -zhi 1 -▁Bison 1 -▁Fringe 1 -▁Lombard 1 -▁whiz 1 -▁Chew 1 -▁(11) 1 -▁Gho 1 -gbe 1 -▁Loc 1 -▁NIH 1 -▁Viacom 1 -▁Czechoslovakia 1 -▁CHE 1 -▁Grigor 1 -▁Naf 1 -onne 1 -▁fountains 1 -▁linkage 1 -Writing 1 -frequency 1 -knowledge 1 -▁Gase 1 -▁OD 1 -▁bustle 1 -▁profess 1 -▁nuke 1 -Finally 1 -▁Louisa 1 -▁Yam 1 -▁millennia 1 -▁photographing 1 -skilled 1 -▁Sek 1 -▁Wired 1 -▁warship 1 -flation 1 -watering 1 -zor 1 -distinguishable 1 --78 1 -Del 1 -▁tightness 1 -▁Bucharest 1 -▁Residence 1 -▁espresso 1 -▁menstrual 1 -▁Caitlyn 1 -▁Pandey 1 -▁insidious 1 -▁melodic 1 -▁Cornelius 1 -▁Rhythm 1 -▁Warwickshire 1 -▁helpers 1 -▁saloon 1 -▁nibble 1 -▁fauna 1 -▁Crunch 1 -▁Dickerson 1 -▁Description 1 -▁440 1 -▁Warfare 1 -▁integer 1 -▁chipping 1 -Update 1 -▁communists 1 -▁payouts 1 -kom 1 -▁unintentionally 1 -▁Staying 1 -arity 1 -▁sausages 1 -zl 1 -berries 1 -▁physicists 1 -▁Owl 1 -▁Importantly 1 -▁Fior 1 -Invision 1 -▁Glove 1 -▁Kaye 1 -▁Loose 1 -▁Nicol 1 -▁reassess 1 -▁Pepe 1 -▁POS 1 -▁birthing 1 -DAY 1 -▁Williamsburg 1 -Public 1 -grin 1 -▁cranes 1 -Auto 1 -▁eruptions 1 -Hawk 1 --66 1 -▁OM 1 -eared 1 -roc 1 -▁Honest 1 -▁Crude 1 -▁Abortion 1 -▁binoculars 1 -▁embodiment 1 -▁resuming 1 -▁cleverly 1 -▁Skywalker 1 -▁physicality 1 -▁bonnet 1 -▁Poison 1 -▁Reconstruction 1 -▁advantageous 1 -▁Disability 1 -▁Cornish 1 -▁María 1 -▁doughnuts 1 -▁injustices 1 -▁[26] 1 -▁Recommended 1 -▁Vidal 1 -▁Charley 1 -▁Waffle 1 -▁barricades 1 -▁dwindled 1 -▁interpreting 1 -▁Kush 1 -ild 1 -▁Verge 1 -▁Hindustan 1 -▁deceived 1 -▁Kappa 1 -▁insulated 1 -▁runtime 1 -tera 1 -▁Ashe 1 -crib 1 -▁creed 1 -▁BH 1 -gift 1 -▁Leahy 1 -▁Scr 1 -bash 1 -several 1 -▁drool 1 -▁Zim 1 -punch 1 -edition 1 -▁Delphi 1 -▁Ansari 1 -laine 1 -AE 1 -▁puppets 1 -▁Puma 1 -▁limping 1 -▁corrosion 1 -▁Burnaby 1 -▁Hogwarts 1 -▁Manziel 1 -▁UNITED 1 -▁Uncategorized 1 -▁exploratory 1 -▁grievous 1 -▁periphery 1 -▁unopposed 1 -▁remittance 1 -▁Luxury 1 -▁Maddox 1 -▁sociologist 1 -▁Panhandle 1 -▁biomedical 1 -▁glint 1 -▁Maximum 1 -▁copious 1 -▁concoction 1 -▁wretched 1 -▁criticising 1 -▁enigmatic 1 -▁BST 1 -▁Dozier 1 -▁collectible 1 -▁fortification 1 -hie 1 -▁mammal 1 -▁deflection 1 -▁shrouded 1 -currently 1 -▁postwar 1 -spire 1 -▁Harman 1 -04) 1 -▁Sey 1 -▁eyelids 1 -▁closets 1 -QUE 1 -▁flaming 1 -gbo 1 -▁delta 1 -▁Loma 1 -▁lash 1 -▁charted 1 -history 1 -raf 1 -▁Mee 1 -▁$3.4 1 -▁Redwood 1 -▁equates 1 -▁perverse 1 -▁THR 1 -▁laughable 1 -▁Builders 1 -▁Confederacy 1 -▁Kearney 1 -▁kangaroo 1 -▁prognosis 1 -▁Adapt 1 -▁hydrocarbon 1 -▁inverted 1 -▁Hillsborough 1 -▁drumming 1 -▁Rape 1 -▁Brunei 1 -▁burp 1 -▁→ 1 -▁playfully 1 -hunting 1 -ukh 1 -gum 1 -▁recourse 1 -▁reigns 1 -naya 1 -▁Newspapers 1 -MX 1 -▁Cheat 1 -fied 1 -▁crypt 1 -agu 1 -cub 1 -▁catwalk 1 -▁allure 1 -▁encompass 1 -▁sis 1 -brough 1 -▁Galli 1 -shon 1 -▁clearances 1 -▁GCC 1 -▁Ket 1 -Spring 1 -▁NOR 1 -ipa 1 -Food 1 -▁Maru 1 -▁Siegel 1 -osity 1 -▁Valeri 1 -▁Stefano 1 -▁vibrate 1 -eux 1 -▁Parma 1 -▁+3 1 
-▁Discuss 1 -29) 1 -adult 1 -▁extracting 1 -▁Davy 1 -Raw 1 -▁Triumph 1 -▁centimetres 1 -▁embankment 1 -▁evacuees 1 -▁sizzling 1 -▁spruce 1 -▁Arvind 1 -▁synergy 1 -▁Longhorns 1 -▁oblique 1 -▁Doubt 1 -▁Couture 1 -▁rabid 1 -▁pornographic 1 -▁biomass 1 -▁Butch 1 -nac 1 -▁Protective 1 -▁inhumane 1 -▁HF 1 -▁Ennis 1 -▁compressor 1 -▁castles 1 -▁Rodeo 1 -Ze 1 -▁Nei 1 -▁Heartland 1 -▁depictions 1 -▁dwellers 1 -▁carers 1 -▁Div 1 -aaa 1 -▁fal 1 -▁Babu 1 -pala 1 -365 1 -▁scribe 1 -▁stashed 1 -chter 1 -▁cheaply 1 -effects 1 -▁waterfalls 1 -▁Emir 1 -▁tutoring 1 -Brad 1 -▁muffin 1 -▁Utd 1 -▁battleship 1 -▁$60,000 1 -▁Shy 1 -▁Grape 1 -▁Oy 1 -▁Pillar 1 -▁$78 1 -KER 1 -sión 1 -▁Bernabeu 1 -▁McCormack 1 -▁Verstappen 1 -▁registrar 1 -▁Jurors 1 -▁Dawkins 1 -▁bragged 1 -▁Bragg 1 -GAN 1 -▁Warehouse 1 -started 1 -▁Dunbar 1 -▁Bloomfield 1 -▁Hariri 1 -▁Hilda 1 -▁Danger 1 -▁unsurprisingly 1 -▁chlor 1 -▁Wherever 1 -climate 1 -▁grilling 1 -Bas 1 -▁dorsal 1 -▁Poker 1 -iler 1 -▁RX 1 -▁preventative 1 -feeling 1 -▁elaborated 1 --76 1 -bili 1 -▁Gregor 1 -▁ablaze 1 -▁Atari 1 -▁AOL 1 -▁encoded 1 -meier 1 -▁honk 1 -▁conservationist 1 -oche 1 -endi 1 -▁Hacker 1 -▁infiltrated 1 -Str 1 -▁liberate 1 -▁Kish 1 -▁retaliated 1 -▁battalions 1 -▁RJ 1 -▁bona 1 -▁parishioners 1 -▁Shoes 1 -▁genera 1 -▁Trin 1 -▁coax 1 -▁Minerals 1 -loid 1 -▁overtook 1 -▁Philharmonic 1 -▁Wrigley 1 -▁bouncy 1 -▁Grateful 1 -▁Substance 1 -▁infotainment 1 -▁lovable 1 -▁severance 1 -▁Brookings 1 -▁Manchin 1 -▁localities 1 -▁embroidery 1 -▁misused 1 -adia 1 -▁porque 1 -▁Braden 1 -▁7:15 1 -▁unwell 1 -▁concur 1 -▁minions 1 -▁Lean 1 -▁Hunting 1 -lep 1 -▁XV 1 -▁tripod 1 -▁dissenting 1 -▁Quo 1 -▁archived 1 -▁tinted 1 -▁randomized 1 -▁DAX 1 -▁roundtable 1 -▁choreographer 1 -▁microphones 1 --43 1 -▁Thunderbird 1 -fault 1 -fol 1 -▁GoPro 1 -▁redeemed 1 -▁Warden 1 -▁averted 1 -▁freaks 1 -▁$2.9 1 -▁abundantly 1 -▁Kesha 1 -cko 1 -▁honed 1 -▁Graphic 1 -▁Signals 1 -▁Raman 1 -▁decked 1 -▁headway 1 -Keefe 1 -according 1 -ppi 1 -trail 1 -▁Preview 1 -▁sw 1 -forming 1 -INGS 1 -▁Frederic 1 -▁pensioners 1 -▁Flags 1 -someone 1 -zzi 1 -Ye 1 -▁Chubb 1 -▁Arroyo 1 -▁Delgado 1 -▁Rothschild 1 -▁archaeology 1 -▁breezy 1 -▁conundrum 1 -▁mercenary 1 -▁swastika 1 -▁Corbett 1 -▁Possible 1 -▁torturing 1 -▁snicker 1 -▁magnate 1 -▁Offshore 1 -▁blaring 1 -▁belated 1 -▁Humanitarian 1 -▁Keenum 1 -▁attaching 1 -▁Haslam 1 -▁Employers 1 -▁Mohawk 1 -▁Pledge 1 -▁Ericsson 1 -▁otter 1 -▁Margo 1 -31) 1 -▁Milner 1 -▁Mace 1 -▁WSJ 1 -▁newborns 1 -▁eyesight 1 -▁implicitly 1 -▁anglers 1 -▁Pieter 1 -▁Schw 1 -▁Diary 1 -expect 1 -▁Truex 1 -▁wading 1 -MK 1 -▁Spartan 1 -▁reverted 1 -▁shortcuts 1 -▁converged 1 -▁Kier 1 -▁Willem 1 -▁Vital 1 -▁encompassing 1 -popular 1 -program 1 -crest 1 -▁Gav 1 -▁Siva 1 -▁reconciled 1 -cigarette 1 -KG 1 -▁Dump 1 -kala 1 -▁erroneous 1 -▁Odi 1 -▁aspirin 1 -▁Dorian 1 -▁PMI 1 -grader 1 -▁McManus 1 -▁accompanies 1 -▁centenary 1 -▁chandelier 1 -▁inconsistencies 1 -▁uncomfortably 1 -▁intoxication 1 -▁montage 1 -▁untimely 1 -▁isolating 1 -▁infestation 1 -▁mulch 1 -▁Frankenstein 1 -▁Morrissey 1 -▁Steward 1 -▁shards 1 -Carlo 1 -▁Wasp 1 -▁[27] 1 -▁inserting 1 -4.9% 1 -▁archaeologist 1 -▁INTER 1 -orah 1 -zard 1 -▁sixes 1 -▁Mentor 1 -▁Charges 1 -▁equate 1 -▁shriek 1 -during 1 -fie 1 -▁outcast 1 -▁raiding 1 -▁evergreen 1 -multi 1 -jian 1 -▁postage 1 -▁Reporters 1 -▁Maoist 1 -▁climber 1 -▁Mili 1 -▁$3.8 1 -iere 1 -▁Piers 1 -▁Kaur 1 -ppel 1 -▁Afri 1 -н 1 -command 1 -▁Homeless 1 -▁hurl 1 -▁Sufi 1 -processing 1 -▁petitioned 1 -▁papal 1 -▁Orwell 1 -▁formative 1 -▁clinching 1 -catch 1 -daddy 1 -▁roasting 1 
-sufficiency 1 -▁Gottlieb 1 -▁Responsibility 1 -▁aerobic 1 -▁compulsion 1 -▁figurine 1 -▁munitions 1 -▁genomic 1 -▁Rhonda 1 -▁nimble 1 -▁coronation 1 -▁Sesame 1 -▁unorthodox 1 -▁elemental 1 -▁Gartner 1 -–15 1 -Royal 1 -▁Santander 1 -▁Wight 1 -▁Pom 1 -nger 1 -▁liter 1 -▁excavated 1 -▁Griff 1 -▁skulls 1 -▁Dunk 1 -▁underpin 1 -▁245 1 -▁193 1 -▁Cott 1 -suited 1 -▁Cou 1 -▁Kingsley 1 -▁binds 1 -▁molding 1 -▁redirected 1 -▁blacklist 1 -▁Sounders 1 -▁unleashing 1 -Father 1 -Eleven 1 -▁Reflect 1 -Tonight 1 -▁IPCC 1 -▁Lore 1 -ishi 1 -▁powdered 1 -▁Gaul 1 -▁exponential 1 -dil 1 -▁ATMs 1 -145 1 -▁Junk 1 -▁wearer 1 -▁scab 1 -▁Caucasus 1 -▁Schroeder 1 -▁circumcision 1 -▁connotation 1 -▁Earthquake 1 -▁orchid 1 -▁Fidelity 1 -▁rename 1 -▁suppressing 1 -▁armchair 1 -▁CHI 1 -▁Nex 1 -▁ESA 1 -▁cordoned 1 -▁Yuki 1 -▁tarnished 1 -▁heavyweights 1 -▁countering 1 -▁zeal 1 -▁Gulen 1 -▁torches 1 -▁bison 1 -▁vineyard 1 -▁hoarding 1 -construction 1 -▁Estimates 1 -▁Wach 1 -▁Vac 1 -▁Zer 1 -▁Faced 1 -▁thinly 1 -utor 1 -▁Weibo 1 -Video 1 -crisis 1 -▁Keel 1 -▁hybrids 1 -bent 1 -▁Sask 1 -lingual 1 -▁Blackwell 1 -izers 1 -Mil 1 -▁Hung 1 -▁Folks 1 -▁cruisers 1 -REE 1 -▁McPherson 1 -▁parchment 1 -▁Bortles 1 -▁fuselage 1 -▁García 1 -▁angular 1 -▁Ahmedabad 1 -▁Telstra 1 -▁Thorpe 1 -▁Horace 1 -▁unwittingly 1 -▁Nearby 1 -▁meteorite 1 -▁ATF 1 -▁Fabio 1 -▁[28] 1 -▁loom 1 -▁Squire 1 -▁concurrently 1 -Ang 1 -▁Busy 1 -▁SPCA 1 -▁333 1 -▁McGu 1 -▁petitioner 1 -lance 1 -▁greets 1 -▁knuckle 1 -izon 1 -▁Canary 1 -▁boomers 1 -vr 1 -▁gearbox 1 -emptive 1 -▁Aldo 1 -murder 1 -pour 1 -▁Interested 1 -▁Dreamers 1 -▁guideline 1 -▁uber 1 -nette 1 -TES 1 --42 1 -▁Krista 1 -session 1 -▁Athletes 1 -original 1 -NYSEARCA 1 -▁Awakens 1 -▁Osinbajo 1 -▁unnerving 1 -▁Venetian 1 -▁Estrada 1 -▁dunno 1 -▁Personnel 1 -▁Bachchan 1 -▁Jaipur 1 -▁Engagement 1 -9.9% 1 -▁incidentally 1 -▁Mobility 1 -▁steeped 1 -▁Lazio 1 -▁Bick 1 -▁bleachers 1 -▁shopkeeper 1 -▁DOWN 1 -▁feeble 1 -EEN 1 -▁Newly 1 -▁disgraceful 1 -isan 1 -▁Winn 1 -▁coerced 1 -▁scrutinized 1 -iber 1 -▁humanoid 1 -▁Nasr 1 -▁0-3 1 -▁carriages 1 -dron 1 -▁$450 1 -▁Donegal 1 -▁napkin 1 -▁Ewing 1 -▁stomp 1 -▁artworks 1 -covering 1 -zha 1 -▁Ghi 1 -▁Sousa 1 -▁confessions 1 -possibly 1 -problem 1 -▁movers 1 -▁foiled 1 -▁flatly 1 -▁conversational 1 -▁Cowan 1 -▁Eich 1 -▁sixties 1 -Project 1 -▁Helmet 1 -▁undoing 1 -▁Successful 1 -▁powering 1 -▁($8 1 -▁capitalists 1 -▁32% 1 -▁Terrier 1 -▁catered 1 -▁fam 1 -gina 1 -▁Khamenei 1 -▁arrears 1 -▁hammock 1 -▁superstition 1 -▁Allianz 1 -▁Féin 1 -▁penetrating 1 -▁Hackett 1 -▁Addiction 1 -▁fudge 1 -▁Tarrant 1 -▁Amri 1 -▁Transaction 1 -▁Sever 1 -▁keg 1 -▁halved 1 -▁=> 1 -▁flinch 1 -▁spooked 1 -▁Donations 1 -▁sobering 1 -▁Memo 1 -▁darts 1 -▁Corden 1 -hama 1 -▁Tham 1 -▁Specialty 1 -▁Solution 1 -▁sermons 1 -▁FSU 1 --96 1 -▁additive 1 -YO 1 -crack 1 -▁Kron 1 -▁slime 1 -▁SMU 1 -▁FLA 1 -}} 1 -225 1 -warm 1 -▁gland 1 -▁AfD 1 -▁Celia 1 -▁Kona 1 -▁nurtured 1 -▁flak 1 -Dun 1 -loud 1 -▁10.30 1 -▁cravings 1 -▁325 1 -hma 1 -▁Rotherham 1 -▁evangelist 1 -▁exasperated 1 -▁illuminating 1 -▁permissible 1 -▁Lovato 1 -▁McArthur 1 -▁Purpose 1 -▁Soap 1 -Test 1 -▁instrumentation 1 -grab 1 -▁Sangh 1 -▁complicity 1 -▁diagnostics 1 -▁Plot 1 -▁Grain 1 -▁Cpl 1 -▁Latvian 1 -▁aligning 1 -▁broadest 1 -▁Lili 1 -zov 1 -▁materialized 1 -dailycaller 1 -▁Zheng 1 -cone 1 -▁Stable 1 -▁Sno 1 -▁censored 1 -44) 1 -lapse 1 -Australia 1 -▁Hooker 1 -▁Gianni 1 -▁handout 1 -▁penguins 1 -▁Mechanical 1 -▁Gall 1 -▁(8) 1 -▁recklessly 1 -▁207 1 -▁hilly 1 -▁suc 1 -maine 1 -PRO 1 -▁Newell 1 -▁diagonal 1 -▁tantrums 1 -▁Reaching 1 
-Tar 1 -chard 1 -▁shamed 1 -▁Ayatollah 1 -▁cognition 1 -▁swivel 1 -▁Picasso 1 -▁impossibly 1 -▁ravine 1 -▁WORK 1 -▁abnormalities 1 -▁deafening 1 -▁whimsical 1 -boom 1 -▁Dum 1 -▁scarlet 1 -▁unmatched 1 -▁tenuous 1 -▁cleanliness 1 -▁NIC 1 -310 1 -190 1 -▁Khar 1 -▁Thar 1 -▁blueberry 1 -Form 1 -.40% 1 -▁flailing 1 -▁Meade 1 -lica 1 -▁overgrown 1 -▁BSP 1 -Walk 1 -▁7.3 1 -jung 1 -▁nip 1 -▁grate 1 -▁Luiz 1 -▁BHP 1 -Jean 1 -▁theorist 1 -▁Maa 1 -▁forested 1 -▁haste 1 -▁232 1 -Matthew 1 -▁Shai 1 -▁Massa 1 -▁7.0 1 -▁Minsk 1 -▁Pei 1 -270 1 -cure 1 -▁Yulia 1 -cli 1 -▁topless 1 -▁Aubameyang 1 -▁Kawasaki 1 -▁Unsurprisingly 1 -▁Vasquez 1 -▁inexplicably 1 -▁recyclable 1 -▁tyrant 1 -▁wobbly 1 -▁Liquor 1 -▁gnome 1 -▁khaki 1 -▁coronary 1 -▁Draghi 1 -▁recliner 1 -▁Sachin 1 -▁KKK 1 -▁Cilic 1 -▁unease 1 -▁Puig 1 -▁Gove 1 -▁SAG 1 -▁Garage 1 -▁justifying 1 -▁DNR 1 -▁CTO 1 -este 1 -lke 1 -▁rooftops 1 -▁Jayne 1 -▁beetle 1 -▁Riva 1 -▁Nix 1 -▁$1000 1 -▁CIF 1 -▁adverts 1 -Prime 1 -▁Ramona 1 -▁SET 1 -32) 1 -▁inhibitors 1 -▁Konta 1 -▁Oke 1 -▁guesses 1 -▁boardroom 1 -▁Gibb 1 -Soviet 1 -Tweet 1 -stricken 1 -▁subsidize 1 -▁Casper 1 -▁bas 1 -▁socialize 1 -▁breakthroughs 1 -▁rodent 1 -▁Brasil 1 -▁Fist 1 -▁Argos 1 -▁rappers 1 -▁$76 1 -▁suitors 1 -▁secondly 1 -▁terra 1 -Set 1 -▁TED 1 -▁infringe 1 -▁Beto 1 -gastrointestinal 1 -utobiographical 1 -▁EVERYTHING 1 -▁MINNEAPOLIS 1 -▁Xperia 1 -▁conjecture 1 -▁exaggerating 1 -▁fracturing 1 -▁lenient 1 -▁rudimentary 1 -▁shenanigans 1 -▁Raheem 1 -▁Kaspersky 1 --85 1 -▁inhalation 1 -▁Gladys 1 -▁neuroscience 1 -▁Motorcycle 1 -[2] 1 -▁Enugu 1 -▁Godzilla 1 -▁budgeted 1 -▁musing 1 -▁rugs 1 -▁huts 1 -▁corporal 1 -VER 1 -▁Germain 1 -▁181 1 -▁Avalon 1 -▁headlining 1 -▁patchwork 1 -▁CBA 1 -▁Worm 1 -▁CMA 1 -forced 1 -▁proactively 1 -▁rioting 1 -enham 1 -▁GPA 1 -czyk 1 -Probably 1 -certified 1 -▁Satya 1 -ethnic 1 -noid 1 -▁Provo 1 -▁Syd 1 -▁PVC 1 -▁cocked 1 -▁Darth 1 -▁hoof 1 -▁Typical 1 -▁supervisory 1 -String 1 -▁redo 1 -▁EG 1 -я 1 -▁Bulawayo 1 -▁disembark 1 -▁idyllic 1 -▁lounging 1 -▁obligatory 1 -▁submissive 1 -▁Wholesale 1 -▁Quintana 1 -onte 1 -▁Prisoner 1 -▁DOC 1 -couple 1 -▁Hoping 1 -oski 1 -Hop 1 -▁fanbase 1 -▁Miya 1 -▁Gabon 1 -taken 1 -▁schoolchildren 1 -LING 1 -exist 1 -ERA 1 -▁Isla 1 -440 1 -▁grunted 1 -▁WBA 1 -▁misconception 1 -▁mush 1 -leak 1 -▁racetrack 1 -▁38% 1 -gga 1 -▁Douma 1 -▁shaman 1 -▁housewife 1 -Sex 1 -▁Accounting 1 -frequent 1 -▁Scholars 1 -assisted 1 -raised 1 -▁Lyric 1 -▁watery 1 -▁VT 1 -▁scowl 1 -▁lords 1 -▁1-4 1 -▁bourgeois 1 -▁prejudices 1 -▁FAR 1 -▁Sele 1 -▁dries 1 -potential 1 -▁Sheryl 1 -в 1 -▁Hannibal 1 -▁Inspection 1 -▁Shrewsbury 1 -▁mercenaries 1 -▁persuading 1 -▁Leopard 1 -▁Mathieu 1 -▁Equifax 1 -▁Acquisition 1 -▁ammonia 1 -▁Eminem 1 -▁Yikes 1 -▁repressive 1 -▁herdsmen 1 -▁Culver 1 -▁Emory 1 -▁Adoption 1 -▁Rubber 1 -▁Startup 1 -▁karate 1 -▁unplanned 1 -▁deems 1 -▁heartwarming 1 -▁Oye 1 -▁214 1 -▁Finley 1 -▁stipulation 1 -▁artisans 1 -wane 1 -▁Evansville 1 -fated 1 -▁Reaction 1 -▁1–0 1 -▁walled 1 -51) 1 -▁cull 1 -▁frighten 1 -▁Lieb 1 -fle 1 -▁WV 1 -▁Quarterly 1 -▁PEN 1 -WOOD 1 -5.9% 1 -hosted 1 -gut 1 -▁ROM 1 -▁Kristi 1 -▁herds 1 -modal 1 -bright 1 -▁Michal 1 -▁soured 1 -▁fetched 1 -▁Advantage 1 -▁Bolshevik 1 -▁Contributed 1 -▁Creighton 1 -▁Injuries 1 -▁Tuscaloosa 1 -▁belligerent 1 -▁degrading 1 -▁glyphosate 1 -▁juggernaut 1 -▁trampoline 1 -▁Eureka 1 -▁reverence 1 -▁Pana 1 -▁Incidentally 1 -arte 1 -▁opium 1 -▁blurted 1 -▁Editorial 1 -▁abbreviated 1 -▁183 1 --68 1 -▁Joined 1 -▁Apocalypse 1 -▁stacking 1 -▁Gentry 1 -ismo 1 -▁cornered 1 -▁compress 1 -disciplinary 1 
-endorf 1 -▁Penney 1 -▁Swin 1 -▁befriended 1 -ISIS 1 -▁Shami 1 -▁Nara 1 -Iran 1 -▁lesbians 1 -▁errand 1 -▁NYU 1 -▁380 1 -▁Weak 1 -▁Grady 1 -Tax 1 -▁enrolling 1 -▁reprimand 1 -spirited 1 -▁scammers 1 -ienne 1 -▁refocus 1 -▁HAL 1 -moment 1 -▁Jewell 1 -▁MCC 1 -▁Vend 1 -▁Pharaoh 1 -▁Symptoms 1 -▁anomalies 1 -▁Valdez 1 -▁skunk 1 -▁ingenious 1 -▁Limbaugh 1 -▁Cristina 1 -arna 1 -▁LePage 1 -▁redistribution 1 -▁Lug 1 -ectomy 1 -hp 1 -ffle 1 -▁elliptical 1 -▁retrial 1 -Jon 1 -▁EAST 1 -▁209 1 -▁fondness 1 -prove 1 -▁Ajit 1 -▁196 1 -pani 1 -▁Slovak 1 -IOUS 1 -came 1 -!!!!! 1 -▁francs 1 -▁debuting 1 -▁Troop 1 -Total 1 -uthorizing 1 -Len 1 -▁Timbers 1 -▁blip 1 -▁watercolor 1 -computer 1 -ifiable 1 -defunct 1 -▁oblige 1 -Uni 1 -▁36% 1 -ULL 1 -▁Dever 1 -freedom 1 -▁Chik 1 -tari 1 -▁strolling 1 -▁Refuge 1 -finished 1 -▁Install 1 -riding 1 -manager 1 -▁spoons 1 -▁Font 1 -▁kilometer 1 -tube 1 -seriously 1 -▁ridges 1 -fork 1 -▁scanners 1 -▁INVESTMENT 1 -▁Zeppelin 1 -▁proficiency 1 -▁protruding 1 -▁psychiatry 1 -▁hapless 1 -▁estrogen 1 -▁Terrence 1 -▁hydroelectric 1 -▁sharpen 1 -▁’80 1 -tying 1 -lima 1 -▁Gorilla 1 -▁luring 1 -▁512 1 -▁për 1 -▁Saha 1 -▁Argus 1 -▁wiretap 1 -▁resorting 1 -175 1 -alam 1 -yuki 1 -▁Bender 1 -▁improv 1 -▁brides 1 -▁ppg 1 -▁KN 1 -▁Mooney 1 -bala 1 -▁Kelli 1 -▁Timmy 1 -▁sprinting 1 -▁expletive 1 -▁fearsome 1 -pix 1 -▁hai 1 -.30% 1 -Side 1 -▁motorsport 1 -▁Niro 1 -boggling 1 -▁BLM 1 -▁instructional 1 -vera 1 -spell 1 -amente 1 -acci 1 -▁dra 1 -▁recurrent 1 -▁$130,000 1 -▁Bec 1 -UV 1 -▁Fallen 1 -▁caved 1 -▁Expansion 1 -▁Shurmur 1 -▁labyrinth 1 -▁Gamecocks 1 -▁Hemsworth 1 -▁Chamisa 1 -▁Emotional 1 -▁Nikita 1 -kim 1 -▁Channing 1 -▁Rowley 1 -nearly 1 -▁unwillingness 1 -dn 1 -▁2008) 1 -▁Correct 1 -▁Halle 1 -izar 1 -▁kan 1 -▁STOP 1 -▁destroyers 1 -▁Taft 1 -ograph 1 -▁puffy 1 -▁Edit 1 -▁doggie 1 -▁Dressed 1 -▁gymnast 1 -▁Scrap 1 -sentence 1 -tolerance 1 -Anti 1 -▁Haitians 1 -▁Hasbro 1 -ANI 1 -▁CCC 1 -Iron 1 -▁7.4 1 -▁Trustee 1 -anthrop 1 -▁finishers 1 -▁accomplices 1 -zinger 1 -?!?! 
1 -Hope 1 -VG 1 -rude 1 -Luke 1 -UTH 1 -▁popularized 1 -anan 1 -▁homosexuals 1 -Nine 1 -▁aggressor 1 -▁glorified 1 -▁Innocent 1 -▁homophobia 1 -▁macOS 1 -▁malice 1 -▁Oyster 1 -rv 1 -▁Rel 1 -▁Nowhere 1 -▁refinement 1 -pace 1 -eki 1 -▁bytes 1 -▁Kirkland 1 -▁Zah 1 -▁expeditions 1 -bola 1 -▁impeached 1 -▁Camry 1 -▁badger 1 -▁Oka 1 -▁friendlies 1 -EMA 1 -▁UBC 1 -▁Puck 1 -35) 1 -▁Romano 1 -▁inscriptions 1 -▁squeal 1 -▁EMT 1 -▁Zaha 1 -▁apathy 1 -▁(41 1 -▁.22 1 -RAN 1 -▁McK 1 -wielding 1 -Amazon 1 -benefit 1 -Dispatch 1 -▁Emilia 1 -READ 1 -Connect 1 -▁199 1 -▁VHS 1 -▁malign 1 -▁!:) 1 -▁Helio 1 -Nation 1 -▁Recogniz 1 -▁Scripps 1 -▁Vazquez 1 -▁alumnus 1 -▁contraband 1 -▁50/50 1 -▁sarcastically 1 -▁unregulated 1 -▁sleigh 1 -▁alpine 1 -▁polarization 1 -▁Hwang 1 -▁Bergman 1 -▁Humanities 1 -▁Baidu 1 -05) 1 --71 1 -▁MISS 1 -▁blacked 1 -▁webcast 1 -▁tubing 1 -▁29% 1 -expert 1 -▁blot 1 -▁LOVED 1 -▁Preserve 1 -hydro 1 -▁swaps 1 -▁dogma 1 -109 1 -▁Macedonian 1 -▁Rolf 1 -▁pointers 1 -▁Affair 1 -VS 1 -▁learner 1 -lade 1 -▁hymns 1 -Pra 1 -marriage 1 -foreign 1 -▁Sev 1 -▁Ironman 1 -▁Mirza 1 -▁sketchy 1 -▁Khur 1 -slide 1 -▁$3.6 1 -▁Kazan 1 -▁Revenge 1 -▁Niki 1 -▁maneuvering 1 -lta 1 -ggie 1 -07) 1 -▁emptying 1 -▁outnumber 1 -ocracy 1 -▁snowed 1 -OUT 1 -▁outbursts 1 -_2 1 -▁Chandigarh 1 -▁McNair 1 -▁Nurmagomedov 1 -▁disillusioned 1 -▁fissure 1 -▁incredulous 1 -▁uncontrollably 1 -▁Feminist 1 -▁► 1 -▁Ukip 1 -▁Wharf 1 -icon 1 -▁revising 1 -▁Candice 1 -▁Marawi 1 -▁Bottas 1 -▁Remy 1 -▁Insta 1 -▁absurdity 1 -▁Francesca 1 -▁Ripple 1 -itra 1 -▁consumes 1 -▁couches 1 -▁Kruger 1 -▁Noise 1 -2-3 1 -frac 1 -▁Mock 1 -▁Freeze 1 -Town 1 -▁bondage 1 -▁sweetly 1 -▁donut 1 -▁Ababa 1 -PDF 1 -▁reinvent 1 -▁Nagy 1 -KT 1 -Children 1 -▁2040 1 -▁seeker 1 -▁Melan 1 -Review 1 -▁matte 1 -▁wholesome 1 -▁Kitchener 1 -▁Regal 1 -▁Macri 1 -▁stoop 1 -SAN 1 -▁warmup 1 -▁rags 1 -▁Ganesh 1 -▁1845 1 -▁emojis 1 -▁COLUMBIA 1 -▁Neanderthal 1 -▁erstwhile 1 -▁scalable 1 -▁anthropology 1 -▁cocoon 1 -▁bunnies 1 -▁indulgence 1 -▁Rochelle 1 -▁Cornyn 1 -▁Craven 1 -▁Torrey 1 -▁penthouse 1 -▁blackness 1 -▁colonization 1 -▁onscreen 1 -▁lob 1 -▁widows 1 -▁demoted 1 -▁panicking 1 -▁Bellamy 1 -▁solidified 1 -230 1 -▁baggy 1 -▁Gorge 1 -▁farmed 1 -▁storming 1 -▁Bandit 1 -▁colonialism 1 -▁Schneiderman 1 -Track 1 -▁...... 
1 -▁Dole 1 -▁dusted 1 -1-0 1 -▁Scouting 1 -▁rediscover 1 -Hill 1 -▁Hadley 1 -▁selectively 1 -▁tacked 1 -▁ratify 1 -▁Rhy 1 -▁liters 1 -▁Nasir 1 -▁shirtless 1 -otte 1 -▁Brink 1 -▁Oster 1 -▁REITs 1 -starting 1 -▁Scorpio 1 -▁upstart 1 -▁Hawthorn 1 -▁richly 1 -▁dijo 1 -azar 1 -▁Chrys 1 -▁FN 1 -▁avant 1 -▁afterthought 1 -▁Paramedics 1 -▁dilapidated 1 -▁enrolment 1 -▁euphoria 1 -▁innocuous 1 -▁twelfth 1 -▁Argentinian 1 -▁pitfalls 1 -▁tulip 1 -▁telco 1 -▁Stapleton 1 -▁confrontations 1 -▁CapEx 1 -▁pathogens 1 --84 1 -feeding 1 -▁Gallant 1 -gba 1 -nado 1 -▁Jimmie 1 -CCI 1 -▁privy 1 -cian 1 -zawa 1 -▁clouded 1 -▁Plantation 1 -▁Cull 1 -▁keel 1 -▁Richter 1 -▁clutches 1 -▁snowboarding 1 -▁$3.3 1 -▁Nino 1 -▁contaminants 1 -▁asteroids 1 -▁Psy 1 -▁charms 1 -▁instilled 1 -hue 1 -▁glazed 1 -vina 1 -▁degraded 1 -▁bookcase 1 -▁plummet 1 -▁recur 1 -36) 1 -pants 1 -▁ADD 1 -guil 1 -pronounced 1 -mentioned 1 -▁humanities 1 -▁innovators 1 -▁Shadows 1 -▁victimized 1 -Local 1 -▁Dusty 1 -kana 1 -▁Sold 1 -▁boar 1 -▁VE 1 -▁Dumb 1 -▁Koko 1 -▁Rik 1 -▁Fey 1 -03) 1 -▁rapists 1 -▁toothbrush 1 -uddle 1 -▁Geno 1 -21) 1 -OVE 1 -ALS 1 -▁observatory 1 -▁paparazzi 1 -▁perjury 1 -▁snooze 1 -▁unaccounted 1 -▁autoimmune 1 -▁Moussa 1 -▁187 1 -▁Kiri 1 -▁Buick 1 -▁Nadler 1 -▁gush 1 -▁rivalries 1 -ief 1 -▁nonfiction 1 -gallon 1 -verbal 1 -▁Motel 1 -▁Sparta 1 -▁Azar 1 -▁unconditionally 1 -▁aborted 1 -Rod 1 -▁peppered 1 -▁Ez 1 -grain 1 -▁Mattel 1 -▁subpoenas 1 -▁levers 1 -49) 1 -gari 1 -▁reptiles 1 -▁777 1 -▁CBN 1 -▁ascending 1 -▁Jug 1 -▁tormented 1 -▁asses 1 -ossa 1 -▁Clem 1 -gall 1 -▁snub 1 -Strong 1 -Antoni 1 -fishing 1 -▁Hick 1 -bab 1 -rescue 1 -randa 1 -▁intestines 1 -Ask 1 -▁buckled 1 -▁Horner 1 -▁McCra 1 -él 1 -McC 1 -weekly 1 -chia 1 -▁cupboards 1 -▁Raza 1 -▁Boon 1 -▁Sund 1 -Shut 1 -▁Cromwell 1 -▁epitome 1 -▁observance 1 -▁postpartum 1 -▁aboriginal 1 -▁minimizing 1 -▁wistful 1 -▁empathetic 1 -▁nabbed 1 -▁awry 1 -▁Fuente 1 -▁stairway 1 -▁keepers 1 -▁stoned 1 -▁Farmington 1 -▁Eastbourne 1 -▁Marxism 1 -▁Jagger 1 -▁Nicolaus 1 -mixed 1 -▁Monta 1 -▁BW 1 -crat 1 -▁Jeter 1 -▁Hits 1 -▁roadblocks 1 -▁muttering 1 -passing 1 -▁captained 1 -eche 1 -▁1853 1 -ADR 1 -▁quiver 1 -▁Yak 1 -▁oilfield 1 -▁(38 1 -▁heroism 1 -YP 1 -1-2 1 -▁Rohan 1 -▁375 1 -▁overhauled 1 -ONG 1 -odge 1 -▁itchy 1 -surprise 1 -[7] 1 -smoking 1 -[5] 1 -dip 1 -▁hereby 1 -Winter 1 -▁glider 1 -▁lance 1 -▁Fru 1 -hna 1 -▁electrodes 1 -▁cuddled 1 -▁pours 1 -iru 1 -▁Maro 1 -▁Californian 1 -▁ration 1 -▁misrepresent 1 -ingen 1 -drill 1 -▁Macro 1 -▁Burmese 1 -▁Conservancy 1 -▁Enemy 1 -▁McFarland 1 -▁rollercoaster 1 -▁tussle 1 -▁Neighbour 1 -▁Powder 1 -▁Comedian 1 -▁Kawhi 1 -▁Malloy 1 -▁litany 1 -Justin 1 -▁wallow 1 -▁soprano 1 -▁pry 1 -▁Voyager 1 -▁Submitted 1 -▁Overnight 1 -▁peeking 1 -▁Items 1 -▁powertrain 1 -▁evaporated 1 -▁refuted 1 -evic 1 -Lake 1 -▁Factors 1 -▁Investing 1 -▁9.30 1 -liev 1 -▁ABB 1 -iyya 1 -▁oligarch 1 -▁modesty 1 -enforcement 1 -virtual 1 -▁184 1 -▁assassinate 1 -mpo 1 -▁Kne 1 -▁Harr 1 -▁1820 1 -▁torched 1 -▁Ingrid 1 -▁softening 1 -spiration 1 -▁Eff 1 -▁Graphics 1 -animal 1 -doping 1 -rba 1 -▁Hoop 1 -23) 1 -▁Aber 1 -]] 1 -▁RW 1 -picked 1 -▁cyclo 1 -lace 1 -▁DirecTV 1 -▁Guggenheim 1 -▁Toulouse 1 -▁fifties 1 -▁Hemingway 1 -▁Phuket 1 -▁prefecture 1 -▁Attendees 1 -▁Precision 1 -▁Lucifer 1 -▁titanium 1 -▁Linn 1 -▁Manfred 1 -▁jamming 1 -▁Hobby 1 -▁astute 1 -▁FULL 1 -▁Epp 1 -▁Mastercard 1 -khi 1 -lais 1 -▁patriarchal 1 -▁crossbow 1 -▁wrenching 1 -▁Diaries 1 -▁Sanaa 1 -▁cautionary 1 -▁WTF 1 -▁democratically 1 -▁moderated 1 -▁hoisted 1 -▁Moyes 1 -▁Tito 1 -lew 1 -▁Nero 1 
-▁21) 1 -neu 1 -▁nursed 1 -▁Arg 1 -▁objectively 1 -▁Yao 1 -▁Gasol 1 -▁Newsweek 1 -▁fender 1 -Email 1 -▁BDS 1 -▁quirk 1 -▁hopelessly 1 -▁sever 1 -cee 1 -▁Viper 1 -concern 1 -▁maximise 1 -▁Swami 1 -Code 1 --77 1 -▁mounds 1 -tracked 1 -▁teary 1 -▁Kib 1 -▁railroads 1 -▁Punjabi 1 -▁occurrences 1 -respond 1 -yad 1 -▁richness 1 -ologic 1 -▁+2 1 -shev 1 -iji 1 -▁enveloped 1 -ivan 1 -▁Timo 1 -▁CONFERENCE 1 -▁Furniture 1 -▁Nickelodeon 1 -▁SYDNEY 1 -▁Shiffrin 1 -▁defensemen 1 -▁shabby 1 -instead 1 -▁Peskov 1 -▁Illegal 1 -▁snob 1 -▁Corporal 1 -▁Redemption 1 -▁familial 1 -▁Belinda 1 -▁entertainers 1 -▁Dutchman 1 -▁throughput 1 -▁Caravan 1 -▁7:45 1 -lec 1 -▁deplorable 1 -▁Rental 1 -▁Letterman 1 -▁sprawled 1 -▁Brace 1 -▁tang 1 -euro 1 -Train 1 -▁importation 1 -▁envisaged 1 -▁Afrin 1 -▁Keen 1 -▁sunburn 1 -▁Millard 1 -▁Cum 1 -jur 1 -▁reimbursed 1 -▁resale 1 -RX 1 -HN 1 -yra 1 -▁188 1 -▁410 1 -▁Bok 1 -▁Instrument 1 -Feel 1 -▁grandstand 1 -AVE 1 -79) 1 -ön 1 -▁Hutch 1 -▁tinker 1 -arms 1 -▁melon 1 -▁Sakura 1 -▁busting 1 -▁diverge 1 -▁'70 1 -▁Antony 1 -né 1 -▁Omo 1 -▁Osman 1 -▁Mandatory 1 -▁Opportunities 1 -▁arithmetic 1 -▁exclamation 1 -▁rebuttal 1 -▁redistricting 1 -▁scallop 1 -▁Margarita 1 -▁TechCrunch 1 -▁depressive 1 -▁Cypress 1 -▁wafer 1 -▁piggy 1 -▁Navalny 1 -▁Tinker 1 -kol 1 -finium 1 -▁extant 1 -▁Texan 1 -▁Loeb 1 -▁Berna 1 -▁Singleton 1 -▁Coats 1 -▁skyrocketed 1 -chuck 1 -▁adultery 1 -▁noir 1 -▁LU 1 -XL 1 -▁faltered 1 --2015 1 -▁decider 1 -▁countertop 1 -▁clowns 1 -▁Benefits 1 -spending 1 -▁Whitehall 1 -▁Marge 1 -Follow 1 -AIR 1 -▁tolls 1 -▁Toward 1 -▁Melton 1 -▁cluttered 1 -8.5% 1 -foo 1 -▁Bhi 1 -▁Bagg 1 -2,500 1 -qualified 1 -▁Gua 1 -▁seduce 1 -ffey 1 -karan 1 -▁Lyc 1 -▁fused 1 -▁PAN 1 -tenth 1 -▁desist 1 -▁Jayson 1 -▁neckline 1 -▁Compact 1 -▁fir 1 -ancies 1 -▁Roller 1 -ITA 1 -▁shaded 1 -adu 1 -▁conceivable 1 -▁hypocrite 1 -▁leeway 1 -▁saturation 1 -▁vulture 1 -▁archaic 1 -▁ponies 1 -▁Piazza 1 -▁functionalities 1 -▁BRI 1 -▁(2007) 1 -▁(37 1 -▁checkup 1 -▁recon 1 -▁unsigned 1 -▁dearest 1 -▁vascular 1 -▁ramble 1 -▁Sellers 1 -▁pigeons 1 -▁44% 1 -▁Rac 1 -▁Hemp 1 -▁Closer 1 -▁primetime 1 -ament 1 -fashion 1 -Gun 1 -panel 1 -ADE 1 -▁2500 1 -▁Chak 1 -▁coupe 1 -▁detract 1 -▁Gull 1 -▁Weapon 1 -▁Ramp 1 -generating 1 -KING 1 -▁combatants 1 -▁conclusive 1 -▁OEM 1 -▁cocky 1 -▁Mormons 1 -PRNewswire 1 -▁Brilliant 1 -▁Ombudsman 1 -▁Saratoga 1 -▁crimson 1 -▁immunization 1 -▁transatlantic 1 -▁levies 1 -▁opined 1 -▁shilling 1 -▁Dracula 1 -▁Hutchison 1 -▁Gympie 1 -▁Equities 1 -▁dependencies 1 -▁improvisation 1 -▁Eighteen 1 -▁Houghton 1 -▁PNC 1 -▁cracker 1 -Irish 1 -▁analogous 1 -▁exude 1 -bauer 1 -▁ASP 1 -▁Atom 1 -Ow 1 -▁Bohemian 1 -▁proton 1 -▁Polytechnic 1 -▁Middletown 1 -▁brows 1 -▁Deter 1 -▁Publishers 1 -▁bandages 1 -lv 1 -▁traveller 1 -api 1 -▁Insight 1 -Bru 1 -▁Wiki 1 -▁8-1 1 -▁Cheer 1 -▁Cancel 1 -▁Vito 1 -▁$35,000 1 -insurance 1 -▁maxim 1 -▁inflicting 1 -▁Flair 1 -▁goblins 1 -protesters 1 -▁dh 1 -▁configure 1 -▁munch 1 -sail 1 -▁KGB 1 -▁Explain 1 -▁warped 1 -▁sandbox 1 -▁plunder 1 -▁rump 1 -▁pence 1 -▁secession 1 -▁Terre 1 -▁Ember 1 -▁Lens 1 -▁Astra 1 -▁Bao 1 -christ 1 -▁Sava 1 -▁cutoff 1 -vill 1 -▁fright 1 -▁embezzlement 1 -▁handkerchief 1 -▁abysmal 1 -▁calculus 1 -▁ICBM 1 -▁spaced 1 -▁Thierry 1 -▁IAAF 1 -▁NTSB 1 -▁opiate 1 -▁Arriving 1 -▁Momentum 1 -▁paternity 1 -▁Geelong 1 -yahoo 1 -▁ridership 1 -▁Sandberg 1 -▁skew 1 -▁SPDR 1 -▁sprouts 1 -▁Vik 1 -▁Coors 1 -▁debutant 1 -▁Alexei 1 -▁$180 1 -▁PNG 1 -▁Farrow 1 -▁wideout 1 -▁snippets 1 -▁SCH 1 -▁stimulated 1 -inta 1 -▁Zimbabweans 1 -▁Alek 1 -Bee 1 -▁Straw 1 -uzzi 1 
-▁Rask 1 -ifa 1 -▁8.30 1 -▁cheerfully 1 -▁Raid 1 -aña 1 -▁unifying 1 -▁ranting 1 -▁Zamb 1 -▁XM 1 -▁taper 1 -occur 1 -▁bowing 1 -iye 1 -Road 1 -▁Inuit 1 -▁drawbacks 1 -akka 1 -HQ 1 -doctor 1 -70) 1 -ckler 1 -▁1847 1 -cog 1 -▁Nep 1 -▁Advocacy 1 -▁acknowledgment 1 -▁bulging 1 -▁extravaganza 1 -▁inexplicable 1 -multiple 1 -▁Tactical 1 -▁Blanchard 1 -▁Clancy 1 -Sad 1 -▁Dropbox 1 -▁Wylie 1 -▁vigilance 1 -isto 1 -putting 1 -▁FBS 1 -▁glacial 1 -▁moot 1 -▁Briton 1 -▁{\ 1 -▁Jayden 1 -▁piping 1 -▁Thurman 1 -▁uniqueness 1 -▁MEN 1 -paul 1 -▁banish 1 -▁regiments 1 -▁STE 1 -laced 1 -▁Slayer 1 -▁Dix 1 -5.3% 1 -▁Alford 1 -▁psych 1 -▁preface 1 -fetched 1 -hopefully 1 -Pretty 1 -▁dissipate 1 -jel 1 -▁Kapil 1 -▁$84 1 -jaz 1 -▁Example 1 -▁fairies 1 -▁Mollie 1 -starred 1 -▁Spear 1 -rilla 1 -ghar 1 -▁resourceful 1 -▁payers 1 -▁Carlin 1 -lok 1 -stupid 1 -▁Antonin 1 -▁Kannada 1 -▁benefactor 1 -▁dissemination 1 -▁equestrian 1 -▁furthest 1 -▁plutonium 1 -▁shrapnel 1 -▁unauthorised 1 -▁Guarantee 1 -▁Lennox 1 -▁raspberry 1 -▁DPRK 1 -▁celery 1 -▁morgue 1 -▁PepsiCo 1 -▁microSD 1 -▁postponement 1 -▁coincidentally 1 -▁8:15 1 -▁Maddon 1 -▁COMP 1 -▁Tonya 1 -wag 1 -▁[29] 1 -▁Sip 1 -▁Israelites 1 -▁$140 1 -▁commenter 1 -▁2012-13 1 -▁Deaf 1 -55) 1 -▁obstructed 1 -▁hallucinations 1 -▁ul 1 -▁butler 1 -▁XR 1 -▁Gry 1 -▁utensils 1 -▁Osa 1 -▁nuance 1 -▁gait 1 -▁Assist 1 -▁recited 1 -▁tickled 1 -▁reared 1 -taining 1 -▁riddle 1 -▁Founding 1 -statement 1 -▁Tobin 1 -History 1 -▁Capcom 1 -Honey 1 -▁Forks 1 -Fast 1 -▁sporty 1 -▁Solari 1 -▁tempt 1 -▁blockage 1 -260 1 -GV 1 -▁Muppet 1 -▁cervix 1 -▁reassigned 1 -▁tranche 1 -▁Tahir 1 -▁gamut 1 -▁transistor 1 -▁Darvish 1 -▁Peabody 1 -▁levee 1 -▁Howie 1 -▁Tsar 1 -▁Noon 1 -▁Romelu 1 -▁Zab 1 -▁§ 1 -▁cert 1 -▁Sato 1 -▁Discount 1 -▁pecan 1 -▁dealerships 1 -angu 1 -NOT 1 -mega 1 -▁sneer 1 -▁tal 1 -uq 1 -▁rowdy 1 -▁ornamental 1 -▁industrialist 1 -XT 1 -▁labelling 1 -▁Crowder 1 -▁Riddle 1 -▁Tub 1 -sphere 1 -▁offshoot 1 -▁Exporting 1 -accept 1 -finish 1 -▁divulge 1 -▁cripple 1 -▁candies 1 -▁glaze 1 -▁pelt 1 -▁carbonate 1 -LX 1 -▁rearrange 1 -▁graze 1 -▁deliberation 1 -▁remnant 1 -▁1852 1 -iston 1 -enthal 1 -rump 1 -owa 1 -below 1 -▁consul 1 -warming 1 -▁6.30 1 -▁creeps 1 -▁Archibald 1 -▁Computing 1 -▁Himalayan 1 -▁consolidating 1 -▁removable 1 -Intercontinental 1 -▁oats 1 -▁Journalist 1 -▁destabilize 1 -▁Madigan 1 -lium 1 -▁lagged 1 -▁Antigua 1 -▁replication 1 -Low 1 -▁Nomura 1 -▁(2005) 1 -▁FAQ 1 -▁Malkin 1 -minster 1 -▁soggy 1 -▁Sutter 1 -▁Ec 1 -90) 1 -rami 1 -pacific 1 -gata 1 -▁Liza 1 -–12 1 -▁stoke 1 -< 1 -▁ml 1 -numbered 1 -ATED 1 -▁Azam 1 -▁confessional 1 -▁CY 1 -Toole 1 -▁subpoenaed 1 -▁underscoring 1 -▁snatching 1 -▁Scary 1 -Victor 1 -▁rebuked 1 -4.0 1 -▁Jenni 1 -Football 1 -▁loath 1 -Sense 1 -checked 1 -▁SALE 1 -amento 1 -Rad 1 -pton 1 -▁Mechanics 1 -▁bootleg 1 -▁bloat 1 -▁Verne 1 -▁naught 1 -▁Kilda 1 -▁Molo 1 -▁researches 1 -Putin 1 -▁adrenal 1 -▁branched 1 -▁Atlético 1 -▁Doughty 1 -▁Hoboken 1 -▁Legislation 1 -▁Taxpayers 1 -▁aristocracy 1 -▁exoplanet 1 -▁nonsensical 1 -▁voracious 1 -▁earbuds 1 -▁wriggle 1 -▁Smiling 1 -▁Renegade 1 -▁Mauritania 1 -▁saddest 1 -▁Sajid 1 -▁gooey 1 -▁amicable 1 -▁Pollution 1 -▁$800,000 1 -▁birch 1 -▁cipher 1 -Combining 1 -▁Folau 1 -▁TOTAL 1 -121 1 -▁bugging 1 -▁rustling 1 -▁Seafood 1 -▁sparingly 1 -▁poodle 1 -.12% 1 -▁Amgen 1 -▁retorted 1 -Ready 1 -▁Radiohead 1 -▁filly 1 -rosa 1 -▁Dimensional 1 -▁mooring 1 -▁GAR 1 -▁snowboard 1 -▁640 1 -▁11-2 1 -▁ramen 1 -▁Coyote 1 -158 1 -.11% 1 -888 1 -=1 1 --1-1 1 -▁triage 1 -NDA 1 -▁Boil 1 -▁wane 1 -▁Trem 1 -vg 1 -▁Kyiv 1 -880 
1 -▁Cleary 1 -▁Philo 1 -▁chemically 1 -apocalyptic 1 -Beautiful 1 -circuit 1 -Lewis 1 -▁Astor 1 -▁Kofi 1 -▁Nomad 1 -▁wobble 1 -▁faze 1 -▁Hmmmm 1 -shape 1 -presidential 1 -▁Bigfoot 1 -▁looped 1 -▁Carp 1 -kau 1 -▁bemoan 1 -▁doggy 1 -Yep 1 -144 1 -▁Heron 1 -hump 1 -complex 1 -▁hilariously 1 -▁spotty 1 -▁Creature 1 -▁Figueroa 1 -▁Guadalupe 1 -▁Holyrood 1 -▁Recognition 1 -▁WILLIAMS 1 -▁dispensation 1 -▁exclusivity 1 -▁experiential 1 -▁frenetic 1 -▁ghastly 1 -▁hooligan 1 -▁merchandising 1 -▁mobilizing 1 -▁receivable 1 -▁remedial 1 -▁scheming 1 -▁squabble 1 -▁Mahatma 1 -▁Magellan 1 -▁McCown 1 -▁purview 1 -▁Nexstar 1 -▁allusion 1 -▁Struggle 1 -▁Meteorologist 1 -▁Southport 1 -▁Skyline 1 -▁tenacious 1 -▁Jeong 1 -5-1 1 -▁squatting 1 -▁Odom 1 -▁slumping 1 -▁mpg 1 -longest 1 -▁570 1 -▁reflexes 1 -▁venting 1 -▁govt 1 -▁formatted 1 -▁tethered 1 -▁decadent 1 -▁Clearing 1 -▁Lovell 1 -▁Martyn 1 -▁defiantly 1 -▁abject 1 -▁Ponce 1 -▁SPL 1 -iyi 1 -▁hiker 1 -▁idealistic 1 -▁EFL 1 -▁Windy 1 -▁Spoke 1 -▁Zulu 1 -▁Lose 1 -▁Reacting 1 -▁punctured 1 -jaw 1 -▁periodical 1 -▁Larger 1 -▁Hoh 1 -▁Jacket 1 -▁#7 1 -▁Wreck 1 -▁compounding 1 -corporate 1 -▁enriching 1 -▁HARD 1 -▁businesswoman 1 -▁retriever 1 -CLE 1 -▁Taxes 1 -Corruption 1 -sensitivity 1 -[8] 1 -Style 1 -Dev 1 -dham 1 -▁reschedule 1 -▁Brexiteers 1 -▁octa 1 -▁slink 1 -itate 1 -▁(14) 1 -▁braved 1 -Uncle 1 -▁Kassi 1 -▁overhear 1 -▁Bucky 1 -oxide 1 -▁9-1 1 -▁strayed 1 -▁Require 1 -DEX 1 -▁Inquirer 1 -▁ProPublica 1 -▁Scrooge 1 -▁Sicilian 1 -▁Trotsky 1 -▁VANCOUVER 1 -▁clarinet 1 -▁cremation 1 -▁euphemism 1 -▁mesmerizing 1 -▁methadone 1 -▁refrigeration 1 -▁odour 1 -▁bigoted 1 -▁Subcommittee 1 -▁Hickory 1 -▁Tianjin 1 -▁Neuroscience 1 -▁Ranbir 1 -▁Coughlin 1 -▁beauties 1 -▁Aimee 1 -▁$0.04 1 -▁fraudsters 1 -▁Tribeca 1 -Dance 1 -▁Pickford 1 -▁JW 1 -4.5% 1 -▁FOUR 1 -▁[38] 1 -vital 1 -▁Sabo 1 -▁crux 1 -▁Osi 1 -▁ou 1 -▁easement 1 -Design 1 -▁Plata 1 -▁53% 1 -▁patrolled 1 -▁firemen 1 -▁adventurer 1 -▁transactional 1 -▁$92 1 -▁attaches 1 -.16% 1 -▁pinching 1 -▁FIL 1 -gaming 1 -▁vamp 1 -▁teal 1 -maz 1 -▁wayside 1 -butter 1 -▁Dread 1 -▁undergoes 1 -unterproductive 1 -▁Ditto 1 -................ 
1 -zana 1 -▁hampering 1 -giant 1 -flavored 1 -Custom 1 -Indeed 1 -▁swarming 1 -▁Triton 1 -Front 1 -censor 1 -rabi 1 -212 1 -▁creditor 1 -alarm 1 -YT 1 -USB 1 -oge 1 -▁CPP 1 -ATH 1 -▁Orrin 1 -▁Trailing 1 -▁regionally 1 -▁intrude 1 -▁Gour 1 -▁phenom 1 -ر 1 -▁Chávez 1 -▁Ferreira 1 -▁Leverkusen 1 -▁Occidental 1 -▁Resurrection 1 -▁Seychelles 1 -▁Uighur 1 -▁adhering 1 -▁impersonation 1 -▁laurel 1 -▁mutilated 1 -▁reinvigorate 1 -▁intoxicating 1 -▁MARTIN 1 -▁emblematic 1 -▁Fayette 1 -▁pulsing 1 -Cost 1 -▁Kubrick 1 -▁Foothill 1 -▁tolerable 1 -▁Racine 1 -▁sagging 1 -▁assuage 1 -▁BCS 1 -▁unassuming 1 -▁memento 1 -▁Halsey 1 -▁kiddie 1 -▁Livestock 1 -▁Silverstone 1 -▁mercilessly 1 -▁Fairview 1 -▁bono 1 -▁Ajayi 1 -▁RG 1 -▁BEA 1 -wolves 1 -▁riskier 1 -▁trudged 1 -▁neuron 1 -capitalist 1 -▁Abedi 1 -bending 1 -▁gusty 1 -▁$4.4 1 -275 1 -▁Lorne 1 -▁earthy 1 -▁gunning 1 -▁Caf 1 -▁shied 1 -▁Smokey 1 -▁ousting 1 -▁Mideast 1 -▁abolishing 1 -▁Kling 1 -LIVE 1 -211 1 -▁Lander 1 -▁backcourt 1 -▁Feeding 1 -▁Vida 1 -seated 1 -avel 1 -▁superbly 1 -▁Alexandr 1 -Apparently 1 -▁Tooth 1 -▁exhale 1 -▁Inge 1 -▁invalidate 1 -▁470 1 -▁underfunded 1 -▁pom 1 -golden 1 -PCC 1 -▁Aryan 1 -azzo 1 -▁REP 1 -blatt 1 -▁Deripaska 1 -▁Identification 1 -▁Tagovailoa 1 -▁Yerevan 1 -▁cascading 1 -▁evocative 1 -▁fiddling 1 -▁innumerable 1 -▁whaling 1 -▁Heineken 1 -▁postponing 1 -▁McKinsey 1 -▁Meditation 1 -▁Archaeology 1 -▁razed 1 -▁Seuss 1 -▁aloof 1 -▁dimmed 1 -▁FaceTime 1 -▁Borneo 1 -▁SAC 1 -▁nope 1 -▁Swartz 1 -mole 1 -▁Kew 1 -299 1 -▁Moi 1 -▁Yahya 1 -▁rearview 1 -▁Hain 1 -▁Persia 1 -▁PTA 1 -6-7 1 -▁orthodoxy 1 -▁scribbled 1 -▁forthright 1 -▁XIV 1 -dyne 1 -▁Minerva 1 -▁abstained 1 -article 1 -▁Oakville 1 -▁TBS 1 -kola 1 -▁Simba 1 -▁β 1 -▁AAPL 1 -bbl 1 -004 1 -▁2.6% 1 -▁flatten 1 -▁skeptic 1 -ullo 1 -streaming 1 -elite 1 -$15 1 -▁Revel 1 -▁Kinn 1 -233 1 -▁WINS 1 -.27% 1 -▁GG 1 -zyme 1 -▁outflow 1 -anski 1 -éri 1 -▁helplessness 1 -User 1 -wrapped 1 -Dutch 1 -Round 1 -Child 1 -York 1 -▁cleanly 1 -GAR 1 -Player 1 -opol 1 -+2 1 -▁instantaneous 1 -▁unequivocal 1 -▁Cavendish 1 -▁Croydon 1 -▁Divorce 1 -▁Flaherty 1 -▁MILWAUKEE 1 -▁ORLEANS 1 -▁Yvette 1 -▁abbreviation 1 -▁couture 1 -▁evasive 1 -▁neurosurgeon 1 -▁pleasurable 1 -▁scrutinise 1 -▁Conspiracy 1 -▁banknotes 1 -▁Mulligan 1 -▁Foul 1 -▁jingle 1 -▁BASF 1 -▁Pentecostal 1 -▁Lonnie 1 -▁Croix 1 -▁Venkat 1 -▁Tripura 1 -▁shellfish 1 -▁Maclean 1 -▁incurable 1 -▁reprehensible 1 -▁MANY 1 -crash 1 -genetic 1 -▁barista 1 -ergic 1 -LAS 1 -▁Fortuna 1 -▁spasm 1 -▁Whig 1 -▁placate 1 -▁accordion 1 -▁Savoy 1 -▁$750,000 1 -▁tiebreak 1 -▁Kuo 1 -Gui 1 -▁solemnly 1 -▁Cava 1 -itta 1 -laugh 1 -▁3.2% 1 -▁staunchly 1 -▁tiptoe 1 -▁vacationing 1 -▁DRAM 1 -▁Cleo 1 -taste 1 -▁feathered 1 -bong 1 -▁defenseless 1 -onie 1 -morrow 1 -▁Johar 1 -▁simplifying 1 -▁snarl 1 -opia 1 -▁springboard 1 -▁curing 1 -Especially 1 -Dancing 1 -▁.5 1 -▁interrogate 1 -▁Armada 1 -▁Christophe 1 -hidden 1 -packing 1 -▁quickness 1 -scratch 1 -▁carted 1 -binary 1 -SEA 1 -**** 1 -▁Newmarket 1 -▁Handicap 1 -▁$8,000 1 -▁Begg 1 -▁Raju 1 -▁Chrono 1 -▁Aguirre 1 -▁Confidence 1 -▁DUBAI 1 -▁Fontaine 1 -▁Haunted 1 -▁Schroder 1 -▁amplitude 1 -▁gliding 1 -▁pancreas 1 -▁carjacking 1 -▁enchanting 1 -▁dominion 1 -Nittany 1 -▁kudos 1 -▁Laredo 1 -▁tactile 1 -▁fumbling 1 -▁purification 1 -▁Tupac 1 -▁Sentence 1 -▁Duckworth 1 -▁Bowser 1 -▁Encounter 1 -▁queried 1 -▁concourse 1 -▁prologue 1 -▁GCSE 1 -america 1 -........ 
1 -▁unoccupied 1 -▁Shabbat 1 -▁ROIC 1 -▁outplayed 1 -▁Stonewall 1 -▁overhauling 1 -▁Vanilla 1 -RAY 1 -▁mechanically 1 -▁560 1 -▁snowboarder 1 -following 1 -▁1:45 1 -▁kickstart 1 -▁Johnnie 1 -▁faithfulness 1 -3-7 1 -▁hopelessness 1 -cep 1 -▁Trash 1 -hti 1 -▁EBIT 1 -▁TBI 1 -▁earnestly 1 -8-0 1 -▁Nerd 1 -▁narcotic 1 -culp 1 -▁Bosco 1 -▁Critic 1 -bodies 1 -▁Blame 1 -▁valor 1 -▁conjured 1 -EPT 1 -▁510 1 -▁Bonner 1 -▁Tse 1 -psycho 1 -Andy 1 -▁$700,000 1 -▁Forster 1 -mead 1 -▁Neel 1 -jö 1 -individual 1 -mbro 1 -▁latent 1 -▁actuality 1 -Somebody 1 -▁540 1 -studio 1 -NPR 1 -Bol 1 -Grady 1 -▁Nestor 1 -▁blazed 1 -▁Poz 1 -▁Absolute 1 -/23 1 -Arg 1 -EFE 1 -▁Authentic 1 -▁Contribution 1 -▁Deportivo 1 -▁Lakshmi 1 -▁McGuinness 1 -▁Nuremberg 1 -▁Symantec 1 -▁corrosive 1 -▁insurrection 1 -▁matriarch 1 -▁serotonin 1 -▁Turkmenistan 1 -▁meditating 1 -▁Sapphire 1 -▁Metacritic 1 -▁buttress 1 -▁proliferate 1 -▁embryonic 1 -▁Shiloh 1 -▁McCarron 1 -▁cede 1 -▁Tennant 1 -▁tiredness 1 -▁Fireworks 1 -▁atheism 1 -▁transplanted 1 -▁finalise 1 -▁EVEN 1 -▁husky 1 -▁Lesser 1 -hampton 1 -▁flicking 1 -▁incessantly 1 -▁ecommerce 1 -QA 1 -▁Bela 1 -▁Sensex 1 -▁leaky 1 -▁Linus 1 -▁Uruguayan 1 -▁ironed 1 -▁ROME 1 -▁Brill 1 -asked 1 -carce 1 -Insaf 1 -hte 1 -▁workstation 1 -▁Valentino 1 -▁countrymen 1 -▁Marisa 1 -ará 1 -▁pared 1 -▁Moderate 1 -▁studded 1 -Roll 1 -▁needlessly 1 -▁9.8 1 -evidence 1 -simply 1 -Souza 1 -▁MER 1 -▁pocketed 1 -▁masking 1 -usta 1 -▁bil 1 -▁AIG 1 -▁Galla 1 -▁9-11 1 -▁Socialism 1 -Half 1 -gora 1 -▁persevere 1 -▁Universit 1 -Print 1 -▁Tornado 1 -▁pickled 1 -▁curate 1 -▁DAVID 1 -▁FACEBOOK 1 -▁Gladiator 1 -▁Ignatius 1 -▁Kandahar 1 -▁Lacazette 1 -▁Maradona 1 -▁Pujols 1 -▁Rapoport 1 -▁assimilation 1 -▁cavities 1 -▁condiment 1 -▁peddling 1 -▁ultraviolet 1 -▁upholstery 1 -██ 1 -▁mutilation 1 -▁Ricketts 1 -▁Enchant 1 -▁Nougat 1 -▁Clothing 1 -▁adoration 1 -▁acumen 1 -▁Wahlberg 1 -Baghdadi 1 -▁predetermined 1 -▁Trafficking 1 -▁inhaling 1 -Susan 1 -▁Preference 1 -▁Worry 1 -▁Austro 1 -uze 1 -▁Latham 1 -▁gerrymandering 1 -▁Guerra 1 -▁Hinton 1 -▁nga 1 -▁Mikael 1 -▁Samar 1 -opp 1 -▁precipitated 1 -▁brainstorming 1 -▁58% 1 -▁ceded 1 -marc 1 -fiber 1 -▁Myra 1 -RRI 1 -Director 1 -▁prima 1 -▁appetizer 1 -▁Tiff 1 -▁Koro 1 -izzle 1 -▁Siren 1 -▁Fedora 1 -▁Marten 1 -▁Thistle 1 -▁€5 1 -Modern 1 -Field 1 -trouble 1 -3.5% 1 -▁Javi 1 -▁Ladd 1 -▁discontinue 1 -baker 1 -Mission 1 -▁Nikol 1 -▁metaphorical 1 -▁finesse 1 -sofar 1 -▁Meteor 1 -hydroxy 1 -▁anthropo 1 -▁Anxiety 1 -▁Bismarck 1 -▁Lithium 1 -▁McCullough 1 -▁Shropshire 1 -▁Sichuan 1 -▁pantomime 1 -▁plebiscite 1 -▁stimulant 1 -▁receding 1 -▁Yoruba 1 -▁reconvene 1 -▁Expedia 1 -▁masonry 1 -▁DSP 1 -▁silica 1 -▁loyalties 1 -▁Gaia 1 -▁unearned 1 -▁Gentlemen 1 -▁Devan 1 -▁[40] 1 -▁Hatfield 1 -▁CONS 1 -▁Bagley 1 -▁Dé 1 -▁visor 1 -▁Frankel 1 -▁33,000 1 -▁DEL 1 -Path 1 -gran 1 -▁misrepresented 1 -▁Artistic 1 -▁WeChat 1 -Veon 1 -▁ENT 1 -▁312 1 -▁Mush 1 -▁Zappa 1 -▁crackling 1 -▁Sabi 1 -ilda 1 -▁brook 1 -Sab 1 -▁bleaching 1 -1.1% 1 -diver 1 -▁Rajan 1 -▁intercom 1 -▁Sault 1 -▁irate 1 -▁Abedin 1 -▁8-4 1 -▁dialing 1 -▁Sahel 1 -▁peruse 1 -▁Sé 1 -▁obsess 1 -▁Backup 1 -▁Garri 1 -▁Chalk 1 -▁Hatter 1 -▁talkative 1 -▁jerky 1 -▁1789 1 -295 1 -▁befitting 1 -Minn 1 -Turkish 1 -authentic 1 -▁exaggerate 1 -▁Kono 1 -▁recede 1 -Hur 1 -▁ruffle 1 -▁Resist 1 -▁vaulted 1 -gau 1 -▁yelp 1 -▁hashing 1 -▁overblown 1 -▁hacer 1 -▁Koll 1 -........... 
1 -179 1 -▁cruelly 1 -▁Padukone 1 -▁deluxe 1 -▁sepsis 1 -▁unhinged 1 -▁matinee 1 -▁Bearcats 1 -▁Suleiman 1 -▁Sanofi 1 -▁otherworldly 1 -▁Arrieta 1 -▁Cersei 1 -▁Scherzer 1 -▁Bognor 1 -▁Hitachi 1 -▁Catalina 1 -▁discernible 1 -▁Symbol 1 -▁amalgamation 1 -▁Charlene 1 -▁Pricing 1 -▁Centaur 1 -▁kinship 1 -▁tangent 1 -ugi 1 -▁objectivity 1 -3.6% 1 -▁overtures 1 -▁predictability 1 -cari 1 -▁Sabine 1 -▁[36] 1 -▁Estado 1 -orski 1 -▁lube 1 -▁Albertans 1 -▁Exile 1 -ouri 1 -▁lurched 1 -▁defie 1 -evolving 1 -▁twitched 1 -chol 1 -▁Papi 1 -▁Shoal 1 -▁Addie 1 -▁Urbana 1 -tonne 1 -▁Emirati 1 -▁Naa 1 -▁Kaylee 1 -COR 1 -Cooper 1 -▁john 1 -Rest 1 -▁flier 1 -HON 1 -▁Vla 1 -▁Florian 1 -▁doubtless 1 -dominant 1 -Avengers 1 -Secretary 1 -Search 1 -containing 1 -message 1 -thread 1 -exit 1 -▁Cambria 1 -▁Voyage 1 -▁TSN 1 -▁Printing 1 -▁Marcell 1 -starring 1 -origin 1 -81) 1 -jacket 1 -▁£11 1 -NOC 1 -Guy 1 -▁approachable 1 -▁Solis 1 -▁nightfall 1 -rgu 1 -▁SHOULD 1 -▁Veselnitskaya 1 -▁hemorrhage 1 -▁luscious 1 -▁masturbation 1 -▁punctuation 1 -▁Burroughs 1 -▁Vulture 1 -▁Libraries 1 -▁emcee 1 -▁Kafka 1 -▁Dunlap 1 -▁Tractor 1 -▁Privy 1 -▁10-1 1 -▁fable 1 -▁minicamp 1 -▁Illini 1 -▁toolkit 1 -▁unproductive 1 -▁cropping 1 -▁Stig 1 -▁(21) 1 -▁Nisar 1 -▁Automated 1 -▁hatchet 1 -ngel 1 -▁Afternoon 1 -139 1 -wik 1 -▁Farid 1 -▁spectral 1 -▁halfback 1 -▁skein 1 -▁Sacha 1 -▁rebranding 1 -▁Marte 1 -141 1 -▁249 1 -▁scorecard 1 -Text 1 -91) 1 -▁Barely 1 -▁LF 1 -▁alterable 1 -▁Vigo 1 -▁adaptable 1 -▁mangrove 1 -▁Furman 1 -▁Yemi 1 -▁Void 1 -▁goldfish 1 -▁5.5% 1 -▁Daylight 1 -chlor 1 -UMP 1 -Pack 1 -▁yay 1 -▁Geller 1 -▁Horst 1 -▁1801 1 -Doo 1 -242 1 -▁Neh 1 -▁Hebron 1 -▁clawing 1 -▁whirl 1 -▁tau 1 -▁Xeno 1 -Ice 1 -▁Tsa 1 -disclosure 1 -▁Blackwood 1 -▁Shared 1 -grabbing 1 -▁grumble 1 -Nusra 1 -▁NRG 1 -nek 1 -580 1 -▁faraway 1 -▁freaky 1 -▁bluster 1 -dressing 1 -▁unconsciously 1 -chell 1 -TAR 1 -▁Guin 1 -▁DSLR 1 -▁Arunachal 1 -▁Cupertino 1 -▁Navigator 1 -▁Salvatore 1 -▁Spalding 1 -▁Twickenham 1 -▁Wrangler 1 -▁abrasive 1 -▁radiating 1 -▁Osbourne 1 -▁interstellar 1 -▁Divi 1 -▁livid 1 -▁Jockey 1 -▁backpacker 1 -gier 1 -▁Mojave 1 -▁sizzle 1 -▁McCallum 1 -▁Milligan 1 -TRO 1 -▁Flick 1 -▁Hardaway 1 -▁dimple 1 -Shock 1 -▁knotted 1 -▁Textile 1 -▁Hardik 1 -puram 1 -555 1 -▁interplay 1 -▁Simeone 1 -▁Horvat 1 -▁languishing 1 -junk 1 -▁Councilwoman 1 -▁Neon 1 -▁Seville 1 -500,000 1 -Fix 1 -▁Leila 1 -▁Platte 1 -▁Marius 1 -▁Sonos 1 -▁cred 1 -▁CTA 1 -▁fucker 1 -orum 1 -▁blacklisted 1 -meta 1 -▁Jacoby 1 -Thr 1 -4.4% 1 -▁improvise 1 -▁1834 1 -▁Kettle 1 -▁Mather 1 -▁SPX 1 -▁Lotte 1 -vé 1 -▁stutter 1 -foil 1 -▁MMR 1 -organization 1 -▁computerized 1 -provoking 1 -terrible 1 -Strike 1 -▁envisage 1 -VV 1 -▁1833 1 -▁condense 1 -HOL 1 -▁Publicity 1 -▁Wike 1 -rika 1 -▁Segal 1 -▁Grat 1 -▁Shale 1 -▁delegated 1 -▁corny 1 -▁LV 1 -few 1 -.74% 1 -▁Kochi 1 -ē 1 -▁Anushka 1 -▁Compliance 1 -▁Jokic 1 -▁SMITH 1 -▁Swallow 1 -▁fluoride 1 -▁igniting 1 -▁lecturing 1 -▁luxuries 1 -▁reassert 1 -▁restaurateur 1 -▁hypnotic 1 -▁reconfigure 1 -▁Parallel 1 -▁Sanctions 1 -▁Worcestershire 1 -▁Viscount 1 -▁Torrance 1 -▁pennant 1 -▁Duarte 1 -itating 1 -▁Pollack 1 -ouli 1 -▁catchment 1 -▁passersby 1 -▁Giveaway 1 -▁HONG 1 -hopped 1 -▁ripen 1 -▁WADA 1 -▁Hayat 1 -▁propelling 1 -▁rustle 1 -1.3% 1 -▁mushy 1 -3-8 1 -▁Mackey 1 -▁grimaced 1 -▁XML 1 -▁Boi 1 -▁Zafar 1 -▁arctic 1 -▁prying 1 -Fil 1 -▁Petrov 1 -▁organist 1 -▁LAX 1 -▁mentored 1 -▁weeklong 1 -▁Rosso 1 -Party 1 -▁corresponded 1 -▁$1,200 1 -ogenesis 1 -throp 1 -▁Barring 1 -Shabab 1 -▁comma 1 -▁rumbled 1 -▁Estimate 1 -▁Teja 1 -▁wiggled 1 
-▁remorseful 1 -▁Thr 1 -CHR 1 -▁Erich 1 -zyg 1 -64) 1 -▁$80,000 1 -fry 1 -▁blushing 1 -QUA 1 -Sleep 1 -GUE 1 -▁Jamestown 1 -▁allergen 1 -▁Vickers 1 -.67% 1 -470 1 -avar 1 -▁yoke 1 -▁cursor 1 -▁Zun 1 -▁Apply 1 -nunciation 1 -OUS 1 -κ 1 -▁Emmerdale 1 -▁Rosneft 1 -▁masturbating 1 -▁microorganisms 1 -▁uncharted 1 -▁Masjid 1 -▁Sprout 1 -▁ballerina 1 -▁Cosmopolitan 1 -▁Amitabh 1 -▁Bernanke 1 -▁Sheehan 1 -▁prawn 1 -▁Axios 1 -▁Mahrez 1 -Pod 1 -▁Magnum 1 -▁dissolving 1 -▁Bathurst 1 -▁Distance 1 -▁Ashleigh 1 -▁scarring 1 -93) 1 -▁vertebrate 1 -▁Leake 1 -▁parsley 1 -Object 1 -▁unreported 1 -▁willfully 1 -▁Turned 1 -▁reneg 1 -▁[42] 1 -comment 1 -▁Almond 1 -▁cursory 1 -▁Missionary 1 -▁furloughed 1 -▁RIA 1 -NESS 1 -▁windscreen 1 -▁PHOTOS 1 -ibel 1 -▁potted 1 -▁56% 1 -▁66% 1 -217 1 -▁Yue 1 -▁Offset 1 -▁Waltz 1 -▁Shira 1 -Chan 1 -7.50 1 -▁refunded 1 -▁Groom 1 -▁orcs 1 -▁Libra 1 -▁Sonora 1 -▁wonky 1 -▁DIG 1 -▁Dali 1 -▁63% 1 -▁Demar 1 -▁Brownie 1 -▁ZIP 1 -▁SSL 1 -Chat 1 -bius 1 -▁1776 1 -SIM 1 -Turkey 1 -pill 1 -constant 1 -promise 1 -▁RUN 1 -▁Nobu 1 -▁Ferro 1 -▁aristocrat 1 -▁misinterpret 1 -veld 1 -▁bucked 1 -▁unsurprising 1 -▁fussing 1 -▁potter 1 -▁Trist 1 -sensor 1 -skirt 1 -aï 1 -NED 1 -π 1 -▁2018-2024 1 -▁Consensus 1 -▁Garcetti 1 -▁appendage 1 -▁manipulator 1 -▁narcissism 1 -▁prodigious 1 -▁reminiscing 1 -▁cronies 1 -▁stagnate 1 -▁rudder 1 -▁humbly 1 -▁Bhushan 1 -▁Herzog 1 -▁Spartanburg 1 -▁florist 1 -▁Sienna 1 -▁Purcell 1 -crunch 1 -▁Kermit 1 -Close 1 -▁Stallone 1 -rope 1 -▁caressed 1 -▁Bradbury 1 -▁silt 1 -▁Manoj 1 -▁Shaheen 1 -▁Alca 1 -▁Belarusian 1 -▁Jakub 1 -▁bitumen 1 -▁shoveling 1 -kash 1 -▁valuing 1 -▁diehard 1 -▁Fault 1 -capacit 1 -▁Fonte 1 -▁jaded 1 -▁Razor 1 -▁Array 1 -▁JSON 1 -▁2010-11 1 -▁Andrey 1 -▁Sef 1 -▁detectable 1 -▁FK 1 -▁0.75 1 -▁Curb 1 -▁degenerate 1 -omin 1 -▁overheated 1 -▁Whitehouse 1 -▁craziest 1 -▁TheStreet 1 -▁Raha 1 -VPN 1 -▁cradled 1 -deliver 1 -bean 1 -loose 1 -huri 1 -▁Commando 1 -▁Sada 1 -▁minimized 1 -▁Zoey 1 -yoshi 1 -attempt 1 -▁mobilise 1 -▁radiate 1 -▁Davie 1 -Ultimately 1 -weird 1 -▁voiceover 1 -▁naturalized 1 -▁0.04 1 -▁Gib 1 -▁Kristoff 1 -esser 1 -▁Bana 1 -LIGHT 1 -▁Chisholm 1 -▁Fabregas 1 -▁Hampden 1 -▁Laundry 1 -▁Litigation 1 -▁Vauxhall 1 -▁alpaca 1 -▁consumable 1 -▁encyclopedia 1 -▁mayonnaise 1 -▁fancied 1 -▁lightsaber 1 -à 1 -▁Serpent 1 -▁prettier 1 -▁eulogy 1 -▁specificity 1 -▁Comrade 1 -▁Toews 1 -▁shading 1 -▁vogue 1 -▁Boundary 1 -▁unremarkable 1 -▁Samurai 1 -▁turban 1 -▁Longoria 1 -▁pubic 1 -▁Cassini 1 -▁Bowden 1 -▁Gustavo 1 -▁quench 1 -▁Proto 1 -▁Cavani 1 -▁untested 1 -firing 1 -▁blindfolded 1 -gwe 1 -▁admonished 1 -▁Gillis 1 -▁Sahib 1 -▁Crosse 1 -430 1 -▁Alber 1 -▁mobster 1 -itha 1 -▁overthrown 1 -▁rehabilitated 1 -▁shrieking 1 -saurus 1 -▁barter 1 -boi 1 -▁Vau 1 -▁Imogen 1 -▁entryway 1 -▁sire 1 -▁alluding 1 -▁backcountry 1 -▁Tej 1 -▁Guar 1 -▁flyover 1 -ceiling 1 -alcoholic 1 -Rather 1 -transform 1 -▁advertiser 1 -▁Oyo 1 -▁YEAR 1 -▁stu 1 -▁Theodor 1 -coup 1 -▁markup 1 -▁Coney 1 -223 1 -▁MSG 1 -▁0.000000 1 -▁$1.50 1 -▁ACS 1 -ryan 1 -œ 1 -▁Allardyce 1 -▁Gleason 1 -▁Soprano 1 -▁corrugated 1 -▁denominator 1 -▁harmonies 1 -▁postdoctoral 1 -▁pyjama 1 -▁stencil 1 -😂 1 -▁Sentencing 1 -▁Falkirk 1 -▁metaphysical 1 -▁cryptographic 1 -▁0.0001 1 -▁greyhound 1 -▁Satoshi 1 -▁Inglewood 1 -▁Dravid 1 -▁Dundas 1 -▁Lizard 1 -▁charade 1 -▁Rutland 1 -▁nourishment 1 -▁Hardwick 1 -▁Jadeja 1 -▁Muriel 1 -Kid 1 -▁Reh 1 -▁Identify 1 -▁snore 1 -▁saluted 1 -ussi 1 -▁Telford 1 -▁reinsurance 1 -▁Benito 1 -▁Franchise 1 -▁Ecuadorian 1 -▁Keira 1 -eviating 1 -▁unkind 1 -onium 1 -▁Torino 
1 -leh 1 -169 1 -PPP 1 -▁brightened 1 -▁Janie 1 -411 1 -zilla 1 -▁beefed 1 -.42% 1 -▁Homan 1 -cía 1 -▁Acadia 1 -▁CCS 1 -guide 1 -cina 1 -.32% 1 -▁Rohr 1 -▁Bamba 1 -▁manger 1 -▁POP 1 -▁Dyna 1 -▁Blur 1 -▁avoidable 1 -Growth 1 -Eastern 1 -▁encroach 1 -Country 1 -9.8% 1 -Johnny 1 -▁bearable 1 -commissioned 1 -▁233 1 -▁CMC 1 -Wan 1 -ember 1 -jitsu 1 -anny 1 -ğ 1 -▁Calipari 1 -▁Crenshaw 1 -▁Müller 1 -▁Preparation 1 -▁conciliatory 1 -▁defamatory 1 -▁enthused 1 -▁multiplier 1 -▁ombudsman 1 -▁pretentious 1 -▁sumptuous 1 -▁untouchable 1 -▁Andromeda 1 -▁Fabulous 1 -▁Bergdahl 1 -▁Polanski 1 -▁Grief 1 -▁Encourage 1 -▁MoviePass 1 -▁Palmyra 1 -▁Gawker 1 -▁biennial 1 -▁Grisham 1 -▁Shamrock 1 -▁Pixie 1 -▁Yelich 1 -▁annotation 1 -▁sesame 1 -▁Recycling 1 -▁misogynistic 1 -▁DeAngelo 1 -▁Bellingham 1 -▁’60 1 -710 1 -▁MarketWatch 1 -▁Tray 1 -cp 1 -▁prodding 1 -▁610 1 -▁Nepali 1 -▁Wives 1 -▁Safeway 1 -▁Gorka 1 -▁eBook 1 -▁Considered 1 -▁totem 1 -▁Huy 1 -▁Taxation 1 -▁Behavioral 1 -▁Feder 1 -778 1 -cau 1 -1-3 1 -▁Akira 1 -▁Kerri 1 -▁Marla 1 -▁encompassed 1 -sitting 1 -▁Dota 1 -flop 1 -▁267 1 -spoon 1 -▁308 1 -151 1 -▁2.7% 1 -mama 1 -▁.@ 1 -1.7% 1 -▁MAP 1 -DMA 1 -▁TAP 1 -▁Della 1 -▁rusted 1 -2.4% 1 -▁Zed 1 -adopt 1 -ICAL 1 -prior 1 -▁applauding 1 -surface 1 -▁254 1 -▁EDM 1 -▁Magical 1 -▁Coen 1 -hospital 1 -practice 1 -plagued 1 -Jordan 1 -Future 1 -ravaged 1 -▁Lawton 1 -▁Caro 1 -Except 1 -▁lowland 1 -matically 1 -VIS 1 -▁Blackman 1 -▁overdosed 1 - 1 -▁Aluminum 1 -▁Assurance 1 -▁Bernadette 1 -▁Demonstrators 1 -▁Tipperary 1 -▁delicacy 1 -▁follicle 1 -▁militancy 1 -▁scurried 1 -▁solstice 1 -▁unfazed 1 -▁unfulfilled 1 -▁arterial 1 -▁xenophobic 1 -▁Patagonia 1 -▁Knife 1 -▁minuscule 1 -▁Montoya 1 -▁FOIA 1 -▁Brandenburg 1 -▁wicketkeeper 1 -▁£100,000 1 -Hack 1 -▁strident 1 -anchor 1 -▁Haskins 1 -▁Smartphone 1 -▁Joao 1 -▁Virus 1 -002 1 -▁Carrington 1 -▁Patsy 1 -▁lily 1 -▁sultan 1 -topia 1 -▁wan 1 -▁Amari 1 -▁Custer 1 -coding 1 -▁sprouted 1 -▁Techno 1 -▁Jeffries 1 -▁Ballon 1 -▁$91 1 -▁grainy 1 -▁muddled 1 -▁caso 1 -▁Kato 1 -▁Mutant 1 ->> 1 -▁equalised 1 -▁uttering 1 -▁faltering 1 -▁cual 1 -▁XIII 1 -▁Hashem 1 -▁tallies 1 -cycling 1 -▁Tack 1 -shawn 1 -▁0-4 1 -▁Kaba 1 -▁Hz 1 -▁1810 1 -▁Saar 1 -15,000 1 -▁healthiest 1 -▁LET 1 -▁quid 1 -easter 1 -.23% 1 -atio 1 -gale 1 -▁Throwing 1 -▁Divisional 1 -–11 1 -intentioned 1 -Elect 1 -▁RAN 1 -Olympic 1 -sanctuary 1 -— 1 -▁520 1 -▁rehearse 1 -▁257 1 -▁Rive 1 -▁Armani 1 -▁elude 1 -entered 1 -Beth 1 -Loughlin 1 -potentially 1 -▁Argent 1 -▁LOC 1 -jiang 1 -▁husk 1 -▁Barnaby 1 -▁Chaffetz 1 -▁Morecambe 1 -▁Registrar 1 -▁WHILE 1 -▁Zhejiang 1 -▁distancing 1 -▁elaborating 1 -▁paralysed 1 -▁perpendicular 1 -▁résumé 1 -▁abomination 1 -▁maddening 1 -▁inferred 1 -▁culvert 1 -▁Museveni 1 -▁Schubert 1 -▁Batista 1 -▁(1995) 1 -▁Thermal 1 -▁Broadband 1 -▁mutated 1 -▁demonstrabl 1 -▁Hooray 1 -▁Lampard 1 -▁mucus 1 -▁dumplings 1 -▁Healey 1 -▁wristband 1 -▁Chrysti 1 -Must 1 -▁vanguard 1 -▁Dunford 1 -▁instructive 1 -.17% 1 -▁neces 1 -eena 1 -▁Coptic 1 -▁pinball 1 -▁figuratively 1 -▁GNU 1 -▁(20) 1 -▁Petition 1 -▁Chara 1 -rical 1 -▁blissfully 1 -▁motoring 1 -▁Alderman 1 -zag 1 -öl 1 -▁autocratic 1 -▁accentuated 1 -ogie 1 -▁Plo 1 -▁Adj 1 -illary 1 -▁Ahn 1 -▁reformist 1 -▁nominally 1 -igny 1 -▁3–0 1 -scorer 1 -linda 1 -▁succumbing 1 -▁ordination 1 -▁adamantly 1 -▁Glee 1 -fou 1 -▁hilltop 1 -▁relished 1 -▁Lust 1 -shek 1 -.24% 1 -▁Cyr 1 -▁coalesce 1 -▁wrongfully 1 -▁criminalize 1 -▁tramp 1 -▁Dye 1 -▁mountainside 1 -▁Cavan 1 -▁redeeming 1 -ksha 1 -▁Hint 1 -contest 1 -▁squashed 1 -▁Campo 1 -gency 1 -Ride 1 -assuming 
1 -▁Knoll 1 -▁bling 1 -166 1 -Usually 1 -Early 1 -Record 1 -Lauren 1 -ndez 1 -▁835 1 -▁uncharacteri 1 -immediately 1 -medium 1 -6-4 1 -▁Dakar 1 -▁Adil 1 -▁sketched 1 -▁juxtapos 1 -▁Calvary 1 -▁Consistent 1 -▁Disturb 1 -▁INSTAGRAM 1 -▁MADRID 1 -▁Villanueva 1 -▁probabilities 1 -▁replete 1 -▁McAvoy 1 -▁Obispo 1 -▁Proverbs 1 -▁Ossoff 1 -▁crevice 1 -▁Schilling 1 -▁reproach 1 -ilian 1 -▁0.03 1 -▁vestige 1 -▁Maddow 1 -▁Puzzle 1 -▁Schiller 1 -▁preoccupation 1 -▁reais 1 -▁Sinaloa 1 -▁Cooney 1 -▁swimwear 1 -▁Armando 1 -▁abhorrent 1 -▁mythological 1 -▁$94 1 -LDS 1 -▁concocted 1 -▁galley 1 -▁Sapp 1 -▁ska 1 -▁obliterated 1 -▁slimy 1 -idia 1 --1960 1 -▁beckoned 1 -▁Southside 1 -▁upswing 1 -metabol 1 -▁Inglis 1 -▁mingled 1 -810 1 -Dude 1 -hedra 1 -▁rocketed 1 -143 1 -1.6% 1 -▁781 1 -Julia 1 -▁sternly 1 -▁57% 1 -▁constitut 1 -▁Kite 1 -▁supremely 1 -▁Destroy 1 -heeled 1 -uffle 1 -busting 1 -▁Heaton 1 -vara 1 -▁nugget 1 -▁pajama 1 -chau 1 -▁Portable 1 -▁Bulldog 1 -Original 1 -extraordinary 1 -magnetic 1 -Hispanic 1 -▁Stability 1 -▁MIC 1 -▁encapsulate 1 -▁Chakra 1 -▁sept 1 -▁pious 1 -▁chilli 1 -▁405 1 -Var 1 -▁brainstorm 1 -angan 1 -▁figurative 1 -74) 1 -NCA 1 -logger 1 -Neither 1 -▁Manbij 1 -▁Napolitano 1 -▁Tendulkar 1 -▁misdeeds 1 -▁recognisable 1 -▁sabbatical 1 -▁sapphire 1 -▁romaine 1 -▁thrice 1 -▁Puritan 1 -▁Shulkin 1 -scholastic 1 -▁$0.06 1 -▁Eurozone 1 -▁Eau 1 -▁Gadot 1 -▁Marietta 1 -▁rehash 1 -CMA 1 -▁kickbacks 1 -▁Adelson 1 -▁mountaineer 1 -▁Caruso 1 -▁Fanny 1 -jean 1 -already 1 -▁reliving 1 -▁Grier 1 -▁scowled 1 -▁bucking 1 -▁cutie 1 -Quest 1 -chow 1 -▁Sully 1 -▁quivering 1 -▁Clone 1 -▁Nabi 1 -▁Colla 1 -▁tacky 1 -▁FYI 1 -▁130,000 1 -▁Ensure 1 -yung 1 -▁overruled 1 -▁burly 1 -▁canister 1 -▁soiled 1 -▁disinfect 1 -▁.45 1 -▁contradicting 1 -▁Lef 1 -▁rollover 1 -▁Winger 1 -▁8.7 1 -$50 1 -tried 1 -▁enabler 1 -▁7-9 1 -▁Abdullahi 1 -▁Chev 1 -▁Dern 1 -▁Ammon 1 -Meanwhile 1 -Several 1 -▁Smol 1 -▁detonate 1 -007 1 -▁bunches 1 -dragon 1 -revolution 1 -▁mystic 1 -142 1 -▁aspirational 1 -▁Gaw 1 -▁smacking 1 -▁feign 1 -REX 1 -660 1 -▁Apostolic 1 -▁Gangneung 1 -▁Kirkpatrick 1 -▁Pellegrini 1 -▁Societies 1 -▁unnerved 1 -▁cathartic 1 -▁samurai 1 -▁unhelpful 1 -▁McAdoo 1 -▁PvP 1 -Defamation 1 -▁bourse 1 -▁Benzinga 1 -▁sordid 1 -▁Widodo 1 -▁Capricorn 1 -▁Delilah 1 -▁Rinne 1 -▁Amba 1 -▁Meehan 1 -▁Ugly 1 -▁Stroud 1 -▁390 1 -315 1 -▁Altria 1 -▁Haifa 1 -▁trotting 1 -VL 1 -▁Nursery 1 -▁dung 1 -▁[41] 1 -▁Antique 1 -TBA 1 -▁Druid 1 -▁sheltering 1 -▁Dying 1 -▁bronc 1 -[6] 1 -▁wavering 1 -▁Rugg 1 -▁Catcher 1 -▁Lowery 1 -▁Provisional 1 -spiel 1 -hahaha 1 -▁237 1 -▁coaxed 1 -▁RPM 1 -▁laud 1 -▁formatting 1 -▁Coffs 1 -▁reelected 1 -ananda 1 -▁780 1 -152 1 -Within 1 -JM 1 -▁ticketed 1 -▁braided 1 -▁($10 1 -▁demonize 1 -▁seventeenth 1 -cius 1 -▁CFA 1 -FRA 1 -rational 1 -▁Redford 1 -▁elegantly 1 -▁Arsen 1 -▁Publisher 1 -▁(22) 1 -holiday 1 -Herald 1 -▁Aye 1 -185 1 -▁controversially 1 -▁LAW 1 -▁Maura 1 -▁squander 1 -▁Recommend 1 -▁Bauman 1 -Store 1 -1-4 1 -282 1 -▁turd 1 -▁atrium 1 -honor 1 -UFF 1 -▁mol 1 -glow 1 -▁brine 1 -▁BEIRUT 1 -▁Casablanca 1 -▁Gretzky 1 -▁Keselowski 1 -▁hyperbole 1 -▁intricacies 1 -▁majesty 1 -▁mausoleum 1 -▁precipitous 1 -▁propagation 1 -▁underprivileged 1 -▁writhing 1 -▁cabal 1 -▁Citrus 1 -▁Fairgrounds 1 -▁climatic 1 -▁Celebrating 1 -▁Confused 1 -▁sweatpants 1 -▁Amarillo 1 -▁Grenada 1 -▁whiny 1 -▁democratization 1 -▁Bennington 1 -▁uninjured 1 -oscopic 1 -Factor 1 -▁unsubscribe 1 -▁extrajudicial 1 -▁Provost 1 -▁Balloon 1 -▁Johor 1 -▁Heyward 1 -▁Larsson 1 -▁passerby 1 -▁Hesse 1 -▁Ewan 1 -▁Mounted 1 -▁12:15 1 
-▁hydropower 1 -▁Tere 1 -▁Zil 1 -▁Lewes 1 -mediated 1 -rake 1 -YES 1 -▁gainers 1 -.46% 1 -▁Described 1 -Amy 1 -▁Millar 1 -▁Kuni 1 -▁1.9% 1 -▁hardcover 1 -▁Pender 1 -▁Synod 1 -▁Counselor 1 -Move 1 -aaaa 1 -▁angelic 1 -262 1 -▁Geb 1 -▁Ryo 1 -▁Outback 1 -HIT 1 -▁looser 1 -▁Jell 1 -INO 1 -MAS 1 -leash 1 -hrer 1 -▁rationing 1 -▁littering 1 -▁Archae 1 -▁Quinte 1 -baba 1 -▁Dine 1 -238 1 -▁Scottie 1 -▁lapsed 1 -rí 1 -Join 1 -campaign 1 -Number 1 -righteous 1 -prevent 1 -routine 1 -Taking 1 -▁threesome 1 -undu 1 -adol 1 -ffler 1 -▁MEA 1 -▁Kran 1 -▁£10,000 1 -▁corpora 1 -Cru 1 -▁remade 1 -▁stunted 1 -ppo 1 -▁Collaboration 1 -▁Nifty 1 -▁Sarajevo 1 -▁Voldemort 1 -▁antiquity 1 -▁nightmarish 1 -▁ovaries 1 -▁perpetuating 1 -▁polygamy 1 -▁syllabus 1 -▁cabaret 1 -▁noxious 1 -▁galactic 1 -▁abhor 1 -▁Nylander 1 -▁ISBN 1 -honour 1 -▁regressive 1 -▁blindsided 1 -▁Bernice 1 -▁Misha 1 -▁Belk 1 -▁gatekeeper 1 -▁cheater 1 -HOO 1 -▁subculture 1 -▁slaughterhouse 1 -▁doodle 1 -▁Ponder 1 -▁Pandit 1 -examined 1 -▁Yves 1 -▁36,000 1 -▁$97 1 -▁clipboard 1 -▁Redlands 1 -▁encircled 1 -▁Yost 1 -▁Priority 1 -iyar 1 -▁Hervey 1 -▁Masi 1 -▁basking 1 -▁Cronk 1 -▁Humble 1 -▁freshness 1 -▁Gerhard 1 -▁Mendel 1 -▁croc 1 -▁blossoming 1 -241 1 -▁looping 1 -LEX 1 -▁EMA 1 -uille 1 -157 1 -▁KEY 1 -▁Westworld 1 -▁termite 1 -▁spew 1 -▁Kalyan 1 -▁dispatching 1 -410 1 -Hillary 1 -Level 1 -olina 1 -▁Thra 1 -▁vented 1 -▁outpaced 1 -hain 1 -Jin 1 -▁TNA 1 -isn 1 -VIN 1 -▁precondition 1 -▁Exo 1 -kota 1 -▁Menlo 1 -ASX 1 --2-1 1 -▁backstroke 1 -▁Bahamian 1 -▁Cessna 1 -▁Guernsey 1 -▁Heitkamp 1 -▁Hokkaido 1 -▁Invictus 1 -▁Kauffman 1 -▁Mammoth 1 -▁OKLAHOMA 1 -▁Sequoia 1 -▁Socrates 1 -▁advisable 1 -▁claustrophobic 1 -▁colorectal 1 -▁dribbling 1 -▁ferocity 1 -▁jubilant 1 -▁milligrams 1 -▁reversible 1 -▁scalability 1 -▁smoldering 1 -▁unflattering 1 -▁Likud 1 -▁Github 1 -▁overvalued 1 -▁lasagna 1 -ström 1 -▁unaffordable 1 -▁Commandments 1 -▁sustainably 1 -▁Boswell 1 -▁Beloved 1 -▁Croat 1 -▁widower 1 -▁Remo 1 -▁Coldplay 1 -▁scrawled 1 -▁Lingard 1 -▁slimmer 1 -▁irked 1 -▁forgettable 1 -▁Lehmann 1 -▁truss 1 -▁spanked 1 -▁methodically 1 -▁foreseen 1 -780 1 -▁Togo 1 -Mile 1 -▁Cartel 1 -▁stormwater 1 -▁Valor 1 -▁Honorable 1 -▁$98 1 -▁Leith 1 -▁Crooked 1 -▁2.8% 1 -▁sharpest 1 -▁Ruin 1 -.22% 1 -▁munching 1 -▁Shirt 1 -▁Else 1 -CCA 1 -▁Pham 1 -▁frisk 1 -medal 1 -Entrepreneurship 1 -8:30 1 -ologically 1 -breed 1 -▁CHO 1 -ERRY 1 -Cop 1 -looked 1 -inform 1 -ference 1 -trigger 1 -▁Kalu 1 -educational 1 -wishers 1 -fence 1 -▁Saff 1 -▁Menon 1 -Korea 1 -▁Covered 1 -▁SIT 1 -kwu 1 -Message 1 -Private 1 -tainment 1 -▁Logano 1 -MHz 1 -▁peeve 1 -▁mejor 1 -▁Thick 1 -▁shroud 1 -▁colouring 1 -▁Chappell 1 -▁coincidental 1 -▁purist 1 -Mun 1 -▁Cone 1 -▁Rusk 1 -pacing 1 -quier 1 -▁Dac 1 -OTCPK 1 -▁Ancelotti 1 -▁Clermont 1 -▁Continuous 1 -▁Discipline 1 -▁Enhanced 1 -▁Propulsion 1 -▁Velasquez 1 -▁Yolanda 1 -▁contiguous 1 -▁evacuating 1 -▁femininity 1 -▁fragility 1 -▁phenotype 1 -▁Lothian 1 -▁profane 1 -▁enamel 1 -7,500 1 -▁Pigeon 1 -▁WHIP 1 -▁unsupervised 1 -▁shrugging 1 -▁Acura 1 -Toy 1 -▁Moritz 1 -▁Collar 1 -.19% 1 -▁napped 1 -▁jogger 1 -▁Mith 1 -▁[43] 1 -▁outlying 1 -▁neutered 1 -▁harshest 1 -▁Zeit 1 -▁madman 1 -▁Gonna 1 -▁heave 1 -▁rejuvenated 1 -▁chatty 1 -▁betterment 1 -▁Negan 1 -▁Sadio 1 -▁debug 1 -/25 1 -JO 1 -finite 1 -▁Maran 1 -▁Barros 1 -▁Bundle 1 -▁Slope 1 -▁Ecker 1 -▁Guang 1 -▁mathematically 1 -▁$4.1 1 -▁Zig 1 -▁relent 1 -▁dressage 1 -▁Roper 1 -▁Interviewer 1 -.31% 1 -▁DIA 1 -document 1 -2.8% 1 -▁BRIT 1 -psi 1 -Medi 1 -▁2,800 1 -▁UAB 1 -▁passively 1 -▁Brid 1 -▁weirdness 1 
-excellent 1 -▁Relationship 1 -▁Palu 1 -▁Volta 1 -5-8 1 -.54% 1 -blow 1 -▁(30) 1 -▁SAA 1 -▁Shiro 1 -▁Moth 1 -fren 1 -▁Cuthbert 1 -▁Heidelberg 1 -▁Hickenlooper 1 -▁Panorama 1 -▁capacitor 1 -▁impenetrable 1 -▁sustenance 1 -NGO 1 -▁Zillow 1 -▁Cyborg 1 -▁Attendance 1 -▁Ticketmaster 1 -▁interracial 1 -▁Racial 1 -▁subcontinent 1 -▁salmonella 1 -▁Armen 1 -▁fortitude 1 -▁Ackerman 1 -▁chatbot 1 -▁Pitino 1 -▁nutty 1 -▁JCPOA 1 -▁Lionsgate 1 -▁Whitley 1 -▁Duval 1 -▁Sharpton 1 -▁OWN 1 -mistake 1 -▁Thad 1 -▁conspicuously 1 -▁bellies 1 -▁schoolteacher 1 -▁biz 1 -NIS 1 -cob 1 -▁Zayed 1 -▁GOLD 1 -▁Tiru 1 -orbit 1 -237 1 -Vice 1 -▁sweetened 1 -▁Cheek 1 -▁Presently 1 -ène 1 -▁continu 1 -▁Eun 1 -▁FDR 1 -▁Seaside 1 -▁Ferri 1 -▁PTC 1 -▁Ritt 1 -▁zig 1 -▁padlock 1 -lita 1 -Tac 1 -▁Tung 1 -nami 1 -chamber 1 -▁Rated 1 -impl 1 -▁Agua 1 -▁Steyn 1 -▁Judi 1 -▁ASIC 1 -destruct 1 -comfortable 1 -Fake 1 -Between 1 -laundering 1 -availability 1 -suffering 1 -conflict 1 -▁afflict 1 -▁1803 1 -▁Elli 1 -.21% 1 -▁boomed 1 -▁Rockland 1 -archy 1 -▁stiffer 1 -▁Esca 1 -▁£35 1 -▁Goldie 1 -▁321 1 -▁AMOLED 1 -▁Aishwarya 1 -▁barbeque 1 -▁calibration 1 -▁cylindrical 1 -▁ibuprofen 1 -▁infrastructural 1 -▁innuendo 1 -▁irreparable 1 -▁vindictive 1 -▁circumference 1 -▁jovial 1 -▁virulent 1 -▁Ziegler 1 -▁apnea 1 -▁Talmud 1 -▁país 1 -▁embers 1 -▁decoy 1 -▁Racecourse 1 -▁harnessing 1 -▁Reproductive 1 -▁Berwick 1 -▁Delicious 1 -▁undetermined 1 -▁Partridge 1 -▁Altogether 1 -▁Zionism 1 -▁Farrah 1 -▁Wikimedia 1 -005 1 -▁airfare 1 -▁$8.5 1 -▁FIVE 1 -▁revocation 1 -targeted 1 -▁massaging 1 -ummies 1 -▁Caine 1 -▁Chern 1 -▁macho 1 -▁pong 1 -▁Clon 1 -167 1 -▁espoused 1 -▁Acre 1 -▁SIG 1 -▁1825 1 -245 1 -▁Permit 1 -▁yer 1 -▁fiscally 1 -▁Forsberg 1 -243 1 -▁turntable 1 -▁Longer 1 -▁264 1 -YL 1 -▁Spelling 1 -monitor 1 -cough 1 -▁Fae 1 -▁Habit 1 -▁CST 1 -▁Suga 1 -▁Verse 1 -editing 1 -advisor 1 -Commerce 1 -[12] 1 -Silver 1 -opter 1 -▁manicure 1 -▁Blac 1 -▁Familia 1 -▁mah 1 -325 1 -lute 1 -elves 1 -▁silvery 1 -▁Darn 1 -▁Gara 1 -▁Duh 1 -racism 1 -▁Simeon 1 -▁Patna 1 -height 1 -packaged 1 -▁Leib 1 -▁Aegean 1 -▁Penticton 1 -▁STORIES 1 -▁baritone 1 -▁purveyor 1 -▁sleuth 1 -▁Crockett 1 -▁Invesco 1 -▁Plessis 1 -▁astrology 1 -▁Trayvon 1 -▁dowry 1 -▁CrossFit 1 -▁gypsy 1 -▁Observation 1 -▁Spectre 1 -▁birdied 1 -▁Shandong 1 -▁Happened 1 -Karabakh 1 -▁dissimilar 1 -▁Essendon 1 -▁Essence 1 -▁• 1 -▁Happily 1 -▁Thief 1 -▁Quetta 1 -▁[44] 1 -▁Crocker 1 -▁Cantrell 1 -▁Villar 1 -▁NNPC 1 -▁Neumann 1 -▁honing 1 -▁icky 1 -snake 1 -▁Stam 1 -▁Siena 1 -▁Margie 1 -▁MAD 1 -▁DFS 1 -▁Mmm 1 -MDA 1 -▁Shou 1 -▁loner 1 -▁fallacy 1 -urai 1 -▁BAC 1 -Arnaud 1 -▁dazzled 1 -▁leach 1 -▁bemoaned 1 -▁expended 1 -NYC 1 -259 1 -chun 1 -▁yawning 1 -▁Leung 1 -▁modernist 1 -▁Kean 1 -▁Stowe 1 -▁2.9% 1 -ophile 1 -▁Hagg 1 -▁Haile 1 -▁skateboarding 1 -crawl 1 -▁SEN 1 -2.9% 1 -▁frayed 1 -▁1828 1 -oggle 1 -rchaeological 1 -▁$7,000 1 -▁pec 1 -_{ 1 -▁Ota 1 -lba 1 -▁PORT 1 -typically 1 -▁thirteenth 1 -chemistry 1 -Mexican 1 -emphasis 1 -[13] 1 -acid 1 -.05% 1 -EGA 1 -▁Akan 1 -132 1 -Davis 1 -▁Bhat 1 -▁Wildcat 1 -ikka 1 -▁merciful 1 -▁Shining 1 -franc 1 -Ross 1 -nately 1 -3.8% 1 -▁Browder 1 -▁Everglades 1 -▁JERUSALEM 1 -▁Khabib 1 -▁SXSW 1 -▁english 1 -▁expiry 1 -▁pokémon 1 -▁repurposed 1 -▁Floridian 1 -▁Pomerania 1 -▁shelving 1 -▁vandals 1 -▁Paradox 1 -▁Wilkerson 1 -▁Martínez 1 -▁gagged 1 -▁Terrible 1 -▁Cmn 1 -▁Bargain 1 -▁Kalin 1 -▁guardrail 1 -▁recreating 1 -▁ethnicities 1 -wh 1 -▁rainstorm 1 -▁Terran 1 -▁Boyfriend 1 -▁foment 1 -▁Lauder 1 -▁Etienne 1 -▁Thumb 1 -248 1 -▁Oceania 1 -depending 1 -▁Gaiman 1 -▁judo 
1 -▁inkling 1 -▁regretting 1 -▁Gabriella 1 -anovic 1 -▁larva 1 -▁rotational 1 -▁Buffon 1 -▁usability 1 -▁Antoni 1 -Intel 1 -mouse 1 -▁Sarnia 1 -▁fem 1 -▁Montag 1 -▁weirdly 1 -▁Muta 1 -▁6.5% 1 -▁Casting 1 -▁Schne 1 -identify 1 -▁tamper 1 -burner 1 -Gary 1 -▁tallying 1 -ensa 1 -▁START 1 -▁REV 1 -jac 1 -▁VIX 1 -▁ensue 1 -▁Kante 1 -▁summoning 1 -▁518 1 -▁Filming 1 -▁stupidly 1 -▁Culp 1 -▁Patil 1 -μ 1 -hamstring 1 -▁Sooner 1 -▁mason 1 -▁Mato 1 -▁infraction 1 -▁hace 1 -Pray 1 -Cut 1 -▁Legg 1 -Return 1 -▁Moser 1 -▁Hatcher 1 -▁Veer 1 -raki 1 -214 1 -▁HOUSE 1 -▁Lourdes 1 -▁MONTREAL 1 -▁Refinery 1 -▁Susquehanna 1 -▁displeased 1 -▁exuberance 1 -▁unselfish 1 -▁unverified 1 -unpredictability 1 -▁Astronaut 1 -▁Drexel 1 -▁BRICS 1 -▁Ibadan 1 -▁Univision 1 -▁Cordova 1 -▁poachers 1 -▁blurring 1 -▁SUNY 1 -▁Fournier 1 -▁Ibiza 1 -domestic 1 -▁crass 1 -▁copycat 1 -▁Breen 1 -▁Kingsbury 1 -▁Newcomer 1 -▁QPR 1 -▁NIS 1 -▁SCORE 1 -▁Carrey 1 -▁Grupo 1 -▁Scotsman 1 -aylor 1 -▁signatory 1 -▁comprehensively 1 -4.7% 1 -1.9% 1 -octane 1 -▁domesticated 1 -▁Kogi 1 -▁Zaman 1 -▁NPD 1 -▁Ponzi 1 -ugger 1 -▁Neely 1 -▁Betis 1 -▁Donkey 1 -▁Virtually 1 -UNC 1 -▁580 1 -▁musc 1 -▁topography 1 -▁Goran 1 -▁triplets 1 -▁trumped 1 -▁LPG 1 -▁patchy 1 -Sync 1 -▁Mule 1 -2.7% 1 -▁Damond 1 -▁TRE 1 -▁Bish 1 -▁335 1 -▁Mikel 1 -shield 1 -▁NCP 1 -festival 1 -▁bc 1 -Iranian 1 -Guardian 1 -hyun 1 -▁MWC 1 -breeding 1 -hug 1 -politi 1 -▁Armistice 1 -▁Opioid 1 -▁PLYMOUTH 1 -▁annihilation 1 -▁countenance 1 -▁sulphur 1 -▁chaperone 1 -▁annuity 1 -▁raspberries 1 -▁Sabathia 1 -▁caddie 1 -▁spores 1 -▁Monopoly 1 -▁Broome 1 -▁lobbies 1 -▁Abilene 1 -▁bayonet 1 -▁Aditya 1 -▁disorganized 1 -Founder 1 -▁unwrapped 1 -▁impartiality 1 -▁horseshoe 1 -960 1 -Mini 1 -▁Mashable 1 -▁Harlow 1 -▁Shetland 1 -▁Nanny 1 -188 1 -▁8-3 1 -▁eloquently 1 -▁Ruh 1 -▁misinterpreted 1 -▁Medici 1 -▁Lora 1 -▁shrieked 1 -▁Sandeep 1 -greet 1 -▁Rubi 1 -▁Wooden 1 -305 1 -▁2.2% 1 -▁taser 1 -▁Nook 1 -▁Krause 1 -▁unofficially 1 -▁pensioner 1 -rrr 1 -opathic 1 -▁Lilian 1 -▁Sonam 1 -▁java 1 -▁Slice 1 -▁DAN 1 -▁ACM 1 -Job 1 -crystal 1 -▁Schell 1 -Twenty 1 -expanding 1 -legitimate 1 -previously 1 -unfortunate 1 -arti 1 -escalation 1 -▁Condon 1 -▁fortify 1 -▁6-6 1 -▁dishonor 1 -▁Lumber 1 -ieron 1 -originally 1 -▁painstaking 1 -▁Whiting 1 -chrome 1 -▁Pyr 1 -▁maul 1 -▁Basu 1 -▁Fenn 1 -▁Awakening 1 -▁Burgundy 1 -▁Difficult 1 -▁Kootenay 1 -▁Literacy 1 -▁Malhotra 1 -▁Musharraf 1 -▁Penrith 1 -▁dichotomy 1 -▁knead 1 -▁nudging 1 -▁unchallenged 1 -▁Lesotho 1 -▁Sirisena 1 -▁TripAdvisor 1 -▁netminder 1 -▁THINK 1 -▁2018-2019 1 -▁grapefruit 1 -▁technologist 1 -▁Goodreads 1 -▁▪ 1 -▁chided 1 -▁Ranieri 1 -▁Pussy 1 -▁purging 1 -▁Armagh 1 -▁Đ 1 -▁Walcott 1 -▁Shaffer 1 -▁BOOK 1 -▁Breyer 1 -▁repelled 1 -▁Guilty 1 -▁LaVar 1 -▁unspeakable 1 -▁Wiener 1 -▁Rosetta 1 -shame 1 -▁transplantation 1 -▁$170 1 -▁ogre 1 -▁lapped 1 -▁Fence 1 -▁discreetly 1 -▁UVA 1 -▁Dragic 1 -rique 1 -▁holo 1 -▁Supermarket 1 -▁Eritrean 1 -committal 1 -▁Hilltop 1 -▁Qual 1 -directional 1 -BAC 1 -Didn 1 -vila 1 -▁Hakim 1 -▁curd 1 -shared 1 -1.8% 1 -▁Jama 1 -fixed 1 -▁Bolivian 1 -nir 1 -.43% 1 -▁MAG 1 -▁sarin 1 -▁Reeve 1 -▁FRA 1 -▁diversifying 1 -▁outpace 1 -▁circum 1 -▁Penh 1 -▁Noosa 1 -interesting 1 -▁disrespected 1 -▁foursome 1 -▁Continued 1 -▁Conti 1 -▁Tsi 1 -config 1 -▁$96 1 -fail 1 -▁twinge 1 -▁tiled 1 -Pain 1 -intensity 1 -Double 1 -Defense 1 -▁243 1 -Major 1 -chase 1 -promotion 1 -▁crumb 1 -▁246 1 -▁unmet 1 -▁childlike 1 -▁Juba 1 -Spider 1 -chloro 1 -▁mime 1 -▁Comer 1 -3000 1 -▁6-10 1 -verting 1 -▁Accuracy 1 -▁Innovative 1 -▁Thackeray 1 
-▁coinciding 1 -▁dredging 1 -▁episodic 1 -▁judicious 1 -▁metabolite 1 -▁protégé 1 -▁uninterested 1 -▁Mulcair 1 -▁dildo 1 -▁earphones 1 -▁Territorial 1 -▁forlorn 1 -▁IAEA 1 -▁loosing 1 -▁MILAN 1 -▁incurring 1 -▁(1994) 1 -▁streamlining 1 -▁crucifixion 1 -▁Plasma 1 -drinking 1 -▁Adkins 1 -▁(2-1) 1 -▁gritted 1 -figuration 1 -▁Wellesley 1 -cita 1 -▁sickle 1 -zhen 1 -quish 1 -Hour 1 -▁retroactively 1 -▁gastric 1 -▁mp 1 -▁fizzled 1 -▁¿ 1 -▁shareholding 1 -▁Karla 1 -▁Enoch 1 -1999 1 -▁interceptor 1 -▁corroborat 1 -▁discus 1 -scribing 1 -▁conf 1 -▁Lorna 1 -▁theorized 1 -▁Morley 1 -▁defrauding 1 -▁Twisted 1 -▁spearheading 1 -▁renounced 1 -uncharacteristic 1 -▁18.5 1 -bpd 1 -pont 1 -▁Jalal 1 -▁Kashi 1 -▁claimant 1 -▁Ringo 1 -▁Delia 1 -▁Marner 1 -▁Nelly 1 -▁Kopp 1 -umming 1 -▁Jacqui 1 -▁vitally 1 -heartedly 1 -▁Destroyer 1 -▁Poon 1 -570 1 -radar 1 -▁WIC 1 -▁7000 1 -▁Swain 1 -▁relational 1 -▁Expected 1 -▁Yeh 1 -▁Traci 1 -revolutionary 1 -yourself 1 -OKE 1 -Ipsos 1 -▁Streaming 1 -▁pummel 1 -▁discounting 1 -204 1 -▁Provision 1 -.36% 1 -NHL 1 -▁vida 1 -▁Tye 1 -▁posthumous 1 -▁struct 1 -▁MCG 1 -Rex 1 -▁Eccles 1 -▁Culinary 1 -▁Guillaume 1 -▁Ignoring 1 -▁Incredibly 1 -▁Jealous 1 -▁Jernigan 1 -▁Lopetegui 1 -▁Pregnant 1 -▁Technique 1 -▁apricot 1 -▁bazaar 1 -▁dandelion 1 -▁machinations 1 -▁simulcast 1 -▁twinkling 1 -▁vignette 1 -▁Montessori 1 -▁legitimize 1 -▁indelible 1 -▁rattlesnake 1 -▁RADIO 1 -▁pruning 1 -▁Tatiana 1 -▁Smoky 1 -▁decompress 1 -▁Gleeson 1 -cleveland 1 -▁Mullin 1 -▁BDSM 1 -▁Bunker 1 -ayi 1 -▁equi 1 -▁Memoir 1 -▁287 1 -▁♥ 1 -▁Penang 1 -dealer 1 -▁Vinson 1 -▁Mallya 1 -▁Vucci 1 -▁moo 1 -▁deformed 1 -▁IPv 1 -▁Klingon 1 -beach 1 -▁radicalized 1 -▁divider 1 -▁Ehr 1 -SMA 1 -▁Hashim 1 -▁Quaid 1 -▁fanning 1 -▁toughen 1 -▁Reus 1 -▁duh 1 -▁72% 1 -▁Henning 1 -▁abnormally 1 -▁seduced 1 -▁leaker 1 -▁Lookout 1 -Hip 1 -jevic 1 -▁Laure 1 -▁Terrill 1 -▁Kabir 1 -Around 1 -▁AFF 1 -photograph 1 -OSH 1 -▁Marcy 1 -▁Deanna 1 -▁Mood 1 -▁chairmen 1 -.37% 1 -▁cringed 1 -gong 1 -▁vitriol 1 -Scotland 1 -altitude 1 -▁crunched 1 -Manuel 1 -gravity 1 -▁Maren 1 -▁animate 1 -.89% 1 -▁Beren 1 -zah 1 -▁Colman 1 -▁ducking 1 -valid 1 -▁woeful 1 -speaker 1 -▁Blackmon 1 -▁carcinogen 1 -Hallelujah 1 -▁Amendola 1 -▁AstraZeneca 1 -▁Encyclopedia 1 -▁Esposito 1 -▁LaSalle 1 -▁Simcoe 1 -▁abdicate 1 -▁excavator 1 -▁fluctuating 1 -▁impossibility 1 -▁lacklustre 1 -▁paediatric 1 -▁Himachal 1 -▁Instinct 1 -▁Prosper 1 -▁Samajwadi 1 -▁TOWNSHIP 1 -▁Infowars 1 -▁Wexford 1 -▁Giulia 1 -▁inquiring 1 -▁Monaghan 1 -▁Dobbs 1 -▁supersonic 1 -▁pungent 1 -▁concurred 1 -▁interlude 1 -▁malaise 1 -▁morphology 1 -▁Woolsey 1 -▁awning 1 -▁Coulson 1 -▁Bartolo 1 -▁Stormi 1 -▁mistaking 1 -giver 1 -▁declassified 1 -▁Organized 1 -▁shackled 1 -▁Tricia 1 -▁Naik 1 -▁Reception 1 -nault 1 -▁Masha 1 -▁Hitting 1 -LAC 1 -▁vacuuming 1 -▁foolishness 1 -▁McEl 1 -.82% 1 -▁Blackberry 1 -▁hazel 1 -▁Remus 1 -▁suitability 1 -▁weasel 1 -Nope 1 -▁blighted 1 -▁STU 1 -▁squirming 1 -▁readied 1 -395 1 -▁Salina 1 -DOS 1 -▁Parnell 1 -▁Gambian 1 -415 1 -▁Laughter 1 -▁DISC 1 -leck 1 -rë 1 -▁Murali 1 -▁Ogle 1 -▁traversed 1 -idium 1 -Race 1 -▁inhaler 1 -▁GBP 1 -OLE 1 -▁sprout 1 -Court 1 -▁Asha 1 -swim 1 -▁Aram 1 -Brook 1 -pane 1 -plenty 1 -museum 1 -phrase 1 -theatre 1 -▁1821 1 -▁gre 1 -▁flail 1 -$30 1 -▁tinge 1 -▁Starter 1 -▁Employ 1 -▁cobble 1 -▁Cache 1 -nelli 1 -▁analytic 1 -Need 1 -▁ANYTHING 1 -▁Astronomy 1 -▁Brzezinski 1 -▁Deschamps 1 -▁Himalayas 1 -▁Lethbridge 1 -▁Macmillan 1 -▁conceivably 1 -▁hydrant 1 -▁reincarnation 1 -▁schizophrenic 1 -▁edifice 1 -▁AirPods 1 -▁Maitland 1 -▁extricate 1 
-▁hippocampus 1 -▁Vanderpump 1 -▁testicles 1 -Hodgkin 1 -▁roiled 1 -▁Wilfred 1 -▁Subdivision 1 -▁Clothes 1 -▁recieve 1 -▁Murdock 1 -▁hummus 1 -▁Seguin 1 -▁ungrateful 1 -▁Presence 1 -▁conscript 1 -▁erratically 1 -▁Vinny 1 -▁disbursement 1 -▁Genesee 1 -▁confide 1 -truly 1 -▁apologist 1 -▁vivo 1 -▁Noodle 1 -▁Carne 1 -▁poached 1 -▁carol 1 -▁Herod 1 -▁Banff 1 -▁Abstract 1 -stairs 1 -▁Gobert 1 -jeet 1 -▁Mentioned 1 -▁FGM 1 -minor 1 -▁USAF 1 -Mine 1 -▁Valentina 1 -▁subconsciously 1 -▁culled 1 -▁Maio 1 -▁1824 1 -▁Sli 1 -.49% 1 -▁obsessively 1 -united 1 -NASA 1 -▁Placement 1 -discovery 1 -otomy 1 -JK 1 -lala 1 -▁outwardly 1 -▁beeping 1 -▁VCU 1 -.68% 1 -bier 1 -▁Wilma 1 -▁hellish 1 -▁skillful 1 -ghost 1 -▁Rhea 1 -▁pj 1 -ailing 1 -198 1 -830 1 -▁shocker 1 -▁straightening 1 -▁staining 1 -▁Farming 1 -▁Kym 1 -▁Zav 1 -▁Jumping 1 -Baltimore 1 -Beauty 1 -Technical 1 -Jacob 1 -federal 1 -▁disseminate 1 -forgotten 1 -Internet 1 -▁Miko 1 -physics 1 -▁Rattle 1 -trading 1 -specifically 1 -▁daze 1 -▁Lakh 1 -Truth 1 -2.1% 1 -▁absolut 1 -▁Sandro 1 -▁Canaccord 1 -▁Cognitive 1 -▁Destruction 1 -▁Intensive 1 -▁Nagasaki 1 -▁decomposition 1 -▁gambit 1 -▁rejoicing 1 -▁unfathomable 1 -▁Cumberbatch 1 -▁Magnitsky 1 -▁SPORTS 1 -▁Sprinkle 1 -▁Symposium 1 -▁inescapable 1 -▁[46] 1 -▁backbencher 1 -▁Maintaining 1 -▁Empowerment 1 -▁Vishal 1 -▁complies 1 -▁Verdict 1 -▁fecal 1 -▁Ryerson 1 -▁koala 1 -▁Trunk 1 -▁Sonja 1 -hurt 1 -Color 1 -▁Clarita 1 -▁meteorological 1 -▁444 1 -▁marginalised 1 -▁Neeson 1 -▁Foundry 1 -lowski 1 -▁Lockwood 1 -▁Admiralty 1 -ripe 1 -▁NIGHT 1 -▁truckload 1 -▁disallowed 1 -casual 1 -▁Panera 1 -▁anemia 1 -▁spewed 1 -▁vibrator 1 -▁$$ 1 -▁Sewell 1 -▁Curtin 1 -Iowa 1 -▁Optical 1 -residential 1 -▁briskly 1 -▁Linton 1 -▁gulped 1 -enbaum 1 -▁BUR 1 -▁Admin 1 -NIT 1 -▁1818 1 -▁Homewood 1 -▁Protecting 1 -▁enlisting 1 -▁1,900 1 -▁Stahl 1 -▁Koon 1 -▁Teller 1 -▁Skr 1 -▁Pitcher 1 -▁MoU 1 -▁Sander 1 -rigan 1 -install 1 -ippy 1 -▁Handy 1 -nationalist 1 -▁Rishi 1 -▁minefield 1 -.57% 1 -▁reclaiming 1 -▁Schau 1 -▁Stepan 1 -Netflix 1 -▁extradite 1 -▁9-2 1 -Teen 1 -AAPL 1 -▁Teenager 1 -agawa 1 -significantly 1 -▁Holtz 1 -▁Ladder 1 -▁pathogen 1 -194 1 -▁TCP 1 -iev 1 -▁FAST 1 -.29% 1 -▁Martín 1 -▁Astronomical 1 -▁HAPPY 1 -▁croissant 1 -▁divorcing 1 -▁impersonate 1 -▁multitasking 1 -▁unbridled 1 -▁Petrobras 1 -▁Hokies 1 -▁Gallardo 1 -▁Oshawa 1 -▁zigzag 1 -▁sullen 1 -▁Ramallah 1 -▁mannerisms 1 -▁Otago 1 -▁potluck 1 -▁SDSU 1 -▁900,000 1 -▁Greenblatt 1 -▁militaries 1 -▁PRA 1 -▁piecing 1 -▁cavernous 1 -▁surreptitiously 1 -buff 1 -▁wheelbase 1 -▁Parson 1 -▁Mattingly 1 -▁skirmishes 1 -▁Capo 1 -192 1 -▁responsiveness 1 -Grace 1 -linski 1 -▁Doodle 1 -gyn 1 -▁Watertown 1 -▁TDP 1 -▁petro 1 -▁herpes 1 -▁1.00 1 -▁Hanley 1 -▁Wiese 1 -▁RSL 1 -▁esa 1 -▁lapping 1 -▁dampened 1 -▁$4.6 1 -▁\( 1 -▁mumble 1 -▁seeped 1 -▁brownish 1 -▁Inquisition 1 -achieve 1 -▁fruity 1 -▁disapproved 1 -▁Galle 1 -197 1 -▁10.4 1 -▁Justify 1 -▁foolishly 1 -▁longitude 1 -▁toolbox 1 -▁Allred 1 -further 1 -▁Bertha 1 -▁Landis 1 -745 1 -▁Raila 1 -Tex 1 -.47% 1 -▁Salmond 1 -Knight 1 -▁Fag 1 -▁Durst 1 -▁Convert 1 -4.3% 1 -Citizen 1 -Georgia 1 -Value 1 -Disney 1 -Movie 1 -▁flounder 1 -wack 1 -.61% 1 -▁Indio 1 -▁Chiu 1 -▁gunfight 1 -▁instinctive 1 -▁Appreciation 1 -▁Lausanne 1 -▁Mennonite 1 -▁Srinivas 1 -▁adequacy 1 -▁caribou 1 -▁geriatric 1 -▁jittery 1 -▁rouble 1 -▁sorcerer 1 -▁tutelage 1 -▁Algiers 1 -▁amyloid 1 -▁Marlborough 1 -▁Kettering 1 -▁immunotherapy 1 -▁Entrance 1 -▁Kirstj 1 -▁WRONG 1 -▁bonkers 1 -▁Colchester 1 -▁Imagination 1 -▁Rawlings 1 -▁sulfate 1 -▁measly 1 
-▁multiplication 1 -▁Shopify 1 -▁gelding 1 -▁Oroville 1 -▁possessive 1 -▁Refresh 1 -▁Laird 1 -▁Briar 1 -▁Samu 1 -▁(24) 1 -cracy 1 -▁alibi 1 -▁Caden 1 -▁downsizing 1 -▁Growers 1 -▁Coyne 1 -▁repatriated 1 -ALLY 1 -▁diminishes 1 -▁Natal 1 -▁fiercest 1 -▁NHK 1 -▁Kizer 1 -▁BAE 1 -ulsi 1 -▁Serial 1 -▁Lala 1 -missing 1 -▁videotaped 1 -▁tenderly 1 -aqa 1 -ETT 1 -▁Chateau 1 -FIN 1 -▁Abou 1 -▁#9 1 -▁stationery 1 -▁Barty 1 -▁precept 1 -.62% 1 -▁Stereo 1 -holy 1 -▁Bani 1 -▁Chamb 1 -penny 1 -▁alternated 1 -▁2,100 1 -▁USPS 1 -3.1% 1 -marsh 1 -▁0.15 1 -▁skyrocket 1 -Ratcliffe 1 -▁Arora 1 -▁Leela 1 -▁zealous 1 -passenger 1 -Editor 1 -Travel 1 -▁WTV 1 -▁sparkled 1 -framed 1 -▁meaty 1 -Stranger 1 -▁Asean 1 -branch 1 -▁MINI 1 -▁Seminar 1 -▁spotless 1 -▁fetching 1 -▁Mention 1 -opolis 1 -▁9.6 1 -▁slurp 1 -▁AbbVie 1 -▁Boilermakers 1 -▁Pulaski 1 -▁STOXX 1 -▁Saipov 1 -▁dictating 1 -▁disjointed 1 -▁idiosyncratic 1 -▁propagandist 1 -▁ransacked 1 -▁sugarcane 1 -▁unabated 1 -▁unduly 1 -▁vassal 1 -▁Suggest 1 -▁rosemary 1 -▁Cactus 1 -▁Nitish 1 -▁expunge 1 -▁Panamanian 1 -▁hoarse 1 -▁Cybersecurity 1 -▁Raptor 1 -▁ramming 1 -▁flanker 1 -▁Burberry 1 -▁tranquility 1 -▁Blanca 1 -▁620 1 -▁Horan 1 -▁SportsLine 1 -Building 1 -celli 1 -vous 1 -▁HealthCare 1 -▁Holtby 1 -higher 1 -▁KILL 1 -▁Nutri 1 -▁Leonid 1 -▁1816 1 -▁Sylvan 1 -▁Buz 1 -HEN 1 -▁beholden 1 -▁flocking 1 -▁defrauded 1 -▁Gillard 1 -capita 1 -emission 1 -▁scuttled 1 -▁LINE 1 -▁dune 1 -▁Bucket 1 -ilation 1 -293 1 -▁Atheist 1 -beating 1 -▁MiG 1 -▁Brownsville 1 -▁SME 1 -▁mga 1 -▁infringed 1 -▁LTD 1 -▁hinged 1 -▁Skies 1 -ACTION 1 -94) 1 -▁delightfully 1 -▁bookshop 1 -▁quack 1 -▁Katelyn 1 -ewicz 1 -▁flexed 1 -porn 1 -▁tsp 1 -▁Forza 1 -▁Brind 1 -▁cementing 1 -▁WALL 1 -getter 1 -▁WHL 1 -sleeve 1 -Sisi 1 -raptor 1 -corrupt 1 -▁CIT 1 -▁Mandal 1 -▁Sexy 1 -poverty 1 -Afghan 1 -Natural 1 -▁grieved 1 -basically 1 -▁zooming 1 -▁Newberry 1 -▁boaters 1 -▁spurr 1 -▁orient 1 -TECH 1 -enactment 1 -▁ATC 1 -.38% 1 -hok 1 -229 1 -▁Audubon 1 -▁Mesopotamia 1 -▁Simmonds 1 -▁Spruce 1 -▁diocesan 1 -▁inclusivity 1 -▁monetization 1 -▁restorative 1 -▁scepticism 1 -▁shambles 1 -▁superstitious 1 -▁Erasmus 1 -▁Grimsby 1 -▁arbiter 1 -▁crosshairs 1 -▁Lundqvist 1 -denuclearisation 1 -▁Bhopal 1 -▁opulent 1 -▁Bastille 1 -▁Puppet 1 -▁silliness 1 -▁Toomey 1 -▁Curtain 1 -▁holiness 1 -▁Laureate 1 -▁breaststroke 1 -▁emigration 1 -▁Towson 1 -▁ruckus 1 -▁czar 1 -▁Parole 1 -▁Panzer 1 -▁refract 1 -Roma 1 -▁Keefe 1 -▁Mossad 1 -207 1 -▁Machinery 1 -▁Hustle 1 -▁robocall 1 -handle 1 -▁YAY 1 -▁Rafe 1 -▁USAID 1 -liver 1 -convert 1 -▁FARC 1 -Whoa 1 -▁rekindled 1 -▁Scenic 1 -▁sooooo 1 -▁64% 1 -amher 1 -▁leech 1 -▁blanketed 1 -holz 1 -▁boundless 1 -▁Kitts 1 -▁Breathe 1 -Xi 1 -lode 1 -▁288 1 -▁Cian 1 -▁Ebert 1 -▁Mago 1 -▁unionized 1 -cena 1 -Network 1 -▁Asahi 1 -▁Hoya 1 -▁Nevis 1 -▁Supplement 1 -▁Rabb 1 -▁Eph 1 -shark 1 -7.0 1 -▁seasonality 1 -▁Essen 1 -▁Vala 1 -▁Daria 1 -▁layering 1 -▁Borges 1 -▁Jaffe 1 -▁Anime 1 -▁WWI 1 -FTA 1 -▁flexing 1 -▁11-1 1 -conceived 1 -carriage 1 -▁pissing 1 -Ocean 1 -▁bribed 1 -▁refresher 1 -▁blaster 1 -▁pike 1 -▁Seine 1 -kong 1 -▁pollute 1 -209 1 -342 1 -▁Mahan 1 -walled 1 -▁Westerners 1 -▁Maza 1 -265 1 -▁Sensing 1 -▁Bum 1 -Ü 1 -▁Ambedkar 1 -▁COULD 1 -▁Lannister 1 -▁Psychiatric 1 -▁Technician 1 -▁fertilization 1 -▁ideologue 1 -▁reshaping 1 -▁Waverly 1 -▁Theoretical 1 -▁unusable 1 -▁Adebayo 1 -▁Celgene 1 -▁Karthik 1 -▁Meijer 1 -▁sombre 1 -▁Vallejo 1 -realistic 1 -▁Norwalk 1 -▁Burrell 1 -▁Cambridgeshire 1 -▁Racism 1 -▁Monahan 1 -5.5% 1 -ibble 1 -▁Occupational 1 -▁bereft 1 -coroh 1 -▁Oren 1 -▁1792 
1 -▁cashless 1 -▁313 1 -▁3–1 1 -▁Shipyard 1 -▁$1,6 1 -▁DePaul 1 -▁262 1 -▁confounding 1 -▁DEAD 1 -153 1 -oppy 1 -▁reignited 1 -▁Lash 1 -▁underweight 1 -▁Olo 1 -▁Sauer 1 -bhu 1 -▁dangled 1 -▁Kirkuk 1 -▁inwardly 1 -VAL 1 -▁Mohd 1 -▁Veda 1 -▁wreaked 1 -quito 1 -Kam 1 -▁Redstone 1 -▁Pinot 1 -▁bac 1 -Keeping 1 -▁Abbie 1 -▁crushes 1 -stepping 1 -▁Acadian 1 -▁nullify 1 -pru 1 -▁Sink 1 -6.5% 1 -146 1 -▁bolder 1 -▁fifteenth 1 -▁Liver 1 -▁DBS 1 -azine 1 -pora 1 -▁workhorse 1 -cala 1 -Grey 1 -spawn 1 -▁Fuku 1 -▁Gator 1 -3.4% 1 -▁Tile 1 -Federal 1 -▁modulate 1 -pattern 1 -▁overbearing 1 -▁deliberated 1 -depressant 1 -wadi 1 -▁258 1 -▁Receiver 1 -▁Gwa 1 -eño 1 -pressing 1 -Rolling 1 -▁Bankruptcy 1 -▁DeBoer 1 -▁Innocence 1 -▁MacLeod 1 -▁NETWORK 1 -▁Strowman 1 -▁conceit 1 -▁constipation 1 -▁gratuitous 1 -▁monopolies 1 -▁Rumours 1 -▁Cronulla 1 -▁InSight 1 -▁jigsaw 1 -▁savanna 1 -chukwu 1 -▁psychoactive 1 -▁Stormont 1 -Spangled 1 -▁Geography 1 -▁Uprising 1 -▁Wolfsburg 1 -▁OBE 1 -▁pug 1 -▁Harder 1 -▁coastguard 1 -RIDGE 1 -▁Juul 1 -▁Darnell 1 -▁eyewitnesses 1 -▁Viv 1 -▁Gaspar 1 -▁Barrick 1 -▁Supplemental 1 -▁Luciano 1 -▁Woodford 1 -▁unwritten 1 -▁Kiran 1 -coaster 1 -soldier 1 -▁officiated 1 -westerly 1 -▁jerking 1 -▁140,000 1 -▁Raab 1 -2-1 1 -▁cobbled 1 -▁baggie 1 -▁Smell 1 -▁tinged 1 -▁$5.2 1 -▁molar 1 -244 1 -▁Basra 1 -▁Rayne 1 -▁Karate 1 -cilla 1 -adapt 1 -rien 1 -▁exes 1 -▁ABV 1 -▁terrorized 1 -passed 1 -▁Negr 1 -▁JIT 1 -▁antagonistic 1 -▁Emin 1 -▁SAY 1 -▁redefined 1 -turbo 1 -owie 1 -▁12-0 1 -911 1 -20% 1 -▁musk 1 -▁Chest 1 -▁Stephon 1 -Chain 1 -▁Gabi 1 -▁ADV 1 -▁Stroll 1 -▁Lease 1 -akura 1 -umbling 1 -Definitely 1 -Either 1 -universal 1 -▁accentuate 1 -Spirit 1 -▁Trisha 1 -▁penalize 1 -Season 1 -▁Holman 1 -174 1 -▁fizz 1 -▁Oliva 1 -.56% 1 -▁Geri 1 -▁departmental 1 -twist 1 -▁Continent 1 -▁pled 1 -▁auntie 1 -.96% 1 -FER 1 -▁vigor 1 -י 1 -▁Applicants 1 -▁Hennessy 1 -▁Khloé 1 -▁Leamington 1 -▁Medtronic 1 -▁Mendocino 1 -▁Stoltenberg 1 -▁babbling 1 -▁catalytic 1 -▁daffodil 1 -▁extermination 1 -▁inconsequential 1 -▁introspection 1 -▁introspective 1 -▁panorama 1 -▁stylized 1 -▁summaries 1 -▁Instructor 1 -▁peppermint 1 -▁sterilize 1 -▁estuary 1 -▁Michelangelo 1 -▁holographic 1 -▁Commuter 1 -▁REVIEW 1 -▁Sturridge 1 -▁Gossip 1 -▁sag 1 -▁bosom 1 -▁mishandling 1 -▁Vogt 1 -▁sledding 1 -▁Sardinia 1 -▁Conquer 1 -▁Ashcroft 1 -▁Holbrook 1 -▁CSP 1 -▁FRANKFURT 1 -▁cyberattack 1 -▁Kasper 1 -▁Hump 1 -cai 1 -▁Connelly 1 -▁galvanized 1 -▁otra 1 -OWER 1 -▁Dugan 1 -▁Winfield 1 -.18% 1 -▁Dima 1 -▁WGN 1 -▁Rainier 1 -▁32,000 1 -▁creaking 1 -▁Yaw 1 -▁splintered 1 -▁3.1% 1 -▁Admit 1 -▁Kreme 1 -pable 1 -fia 1 -▁110,000 1 -▁Kurz 1 -▁Munch 1 -294 1 -▁heeded 1 -▁ruthlessly 1 -▁modus 1 -▁£300 1 -▁Ballot 1 -▁SSA 1 -bend 1 -▁idealism 1 -▁SAB 1 -▁Agile 1 -▁Lega 1 -Quan 1 -schke 1 -▁unapologetic 1 -▁TPS 1 -▁Rupp 1 -.66% 1 -arious 1 -▁Merced 1 -▁Searching 1 -▁Nacho 1 -▁stabilizer 1 -▁5-5 1 -▁antlers 1 -rbi 1 -▁$125,000 1 -Aren 1 -119 1 -apparently 1 -thirty 1 -▁Benji 1 -adian 1 -aloo 1 -Valley 1 -chef 1 -▁interconnect 1 -suggest 1 -▁Mainz 1 -.52% 1 -▁Yangon 1 -REIT 1 -vv 1 -rooted 1 -▁FIS 1 -DIC 1 -▁Maul 1 -corner 1 -▁Veil 1 -Bone 1 -elberg 1 -▁5-8 1 -▁Aguilera 1 -▁Governing 1 -▁LANSING 1 -▁Participation 1 -▁Ulysses 1 -▁associating 1 -▁botanist 1 -▁confiscation 1 -▁decapitated 1 -▁regenerative 1 -▁resiliency 1 -▁incubation 1 -▁jinx 1 -▁omnibus 1 -▁infographic 1 -▁Chengdu 1 -▁morsel 1 -▁brainwashed 1 -▁Kospi 1 -▁Softball 1 -▁Boogie 1 -▁Fillmore 1 -278 1 -▁realignment 1 -▁Eil 1 -▁HOPE 1 -Tip 1 -▁Hagel 1 -▁Winkler 1 -▁cask 1 -Kai 1 
-▁carbohydrate 1 -▁Meantime 1 -▁Waldorf 1 -▁furrowed 1 -▁Duque 1 -▁paddy 1 -▁$4.8 1 -3.7% 1 -▁trickier 1 -▁GED 1 -▁Formerly 1 -tinib 1 -▁scolding 1 -▁Faiz 1 -▁Controlled 1 -▁thumped 1 -▁Mojo 1 -▁3.4% 1 -▁Emmi 1 -▁dingy 1 -▁unjustly 1 -▁Applewhite 1 -odor 1 -163 1 -▁660 1 -avage 1 -▁Maxime 1 -hyper 1 -▁Alternate 1 -▁overworked 1 -consul 1 -ANDA 1 -▁Presented 1 -▁Rickie 1 -▁vez 1 -.34% 1 -decker 1 -shooter 1 -▁Mazar 1 -▁$650 1 -▁Grou 1 -▁slop 1 -▁DSS 1 -224 1 -▁Quirk 1 -▁Shaker 1 -9.0 1 -logging 1 -Manufacturing 1 -reliance 1 -Support 1 -twice 1 -Austin 1 -primary 1 -▁aggravate 1 -▁confiscate 1 -wound 1 -nature 1 -▁showpiece 1 -▁snorkel 1 -▁foreshadow 1 -relatively 1 -Pope 1 -zko 1 -enhanced 1 -▁{* 1 -dox 1 -▁Weigh 1 -▁airmen 1 -ن 1 -▁Bregman 1 -▁Deshaun 1 -▁Fáil 1 -▁Mutharika 1 -▁Oswego 1 -▁Spoelstra 1 -▁smattering 1 -▁stockpiling 1 -▁straddling 1 -▁pulsating 1 -▁Wyndham 1 -▁delirious 1 -▁Carabao 1 -▁EVERYONE 1 -▁Varsity 1 -▁Hundley 1 -▁Steffen 1 -▁Bedfordshire 1 -▁1793 1 -▁amulet 1 -▁tricycle 1 -▁Joc 1 -▁McKnight 1 -▁Woodruff 1 -▁WATER 1 -▁Omarosa 1 -▁Whitworth 1 -▁cobblestone 1 -▁Sinema 1 -▁dandy 1 -▁Bulger 1 -▁genitalia 1 -▁bugged 1 -▁Fredericton 1 -▁cockroaches 1 -uska 1 -▁townspeople 1 -▁sari 1 -▁zealot 1 -elsen 1 -▁electrifying 1 -UFC 1 -stressed 1 -Contra 1 -▁advisories 1 -▁Seo 1 -Myers 1 -▁pouches 1 -▁surpasses 1 -KED 1 -▁97% 1 -▁southernmost 1 -▁Managed 1 -▁Hatton 1 -▁STEP 1 -▁TPC 1 -eligible 1 -▁goalscoring 1 -▁MLK 1 -▁Kitten 1 -▁Firestone 1 -▁acidity 1 -▁crepe 1 -HHH 1 -▁bookseller 1 -▁clover 1 -▁Canning 1 -980 1 -allied 1 -▁yum 1 -▁Dorm 1 -162 1 -▁9-3 1 -▁TOWN 1 -▁Evangelist 1 -▁Posh 1 -▁Maud 1 -▁Hitch 1 -▁reprised 1 -▁relishing 1 -▁2.25 1 -▁biggie 1 -▁Reina 1 -.76% 1 -▁*} 1 -▁Stead 1 -▁globalisation 1 -talented 1 -external 1 -hence 1 -AMZN 1 -▁energize 1 -▁ravage 1 -Moonlight 1 -184 1 -▁populate 1 -▁Maharaj 1 -▁1807 1 -.63% 1 -▁$70,000 1 -▁Qian 1 -▁0.08 1 -▁admonish 1 -▁hardening 1 -Syrian 1 -finalist 1 -ROA 1 -HOU 1 -▁Má 1 -▁firework 1 -▁nitri 1 -▁origination 1 -acca 1 -LIE 1 -▁Kinda 1 -▁Exam 1 -▁Pavlov 1 -▁1-6 1 -2.3% 1 -▁Abercrombie 1 -▁Bedouin 1 -▁Bigelow 1 -▁Constabulary 1 -▁Immaculate 1 -▁Nanjing 1 -▁Nunavut 1 -▁Sociedad 1 -▁almighty 1 -▁anesthetic 1 -▁circumstantial 1 -▁cognizant 1 -▁culpability 1 -▁exemplifies 1 -▁promenade 1 -▁sensitivities 1 -▁ambiance 1 -▁(1996) 1 -▁Hyperloop 1 -▁Cicero 1 -▁Dupont 1 -▁Montrose 1 -hush 1 -▁Swindon 1 -▁Newbury 1 -▁Sportsradio 1 -▁12.30 1 -▁lanky 1 -▁Hinkle 1 -▁Pursuit 1 -▁Hobson 1 -▁Torture 1 -▁Blaney 1 -▁friggin 1 -▁Å 1 -▁Polynesian 1 -▁Durkin 1 -▁él 1 -Thou 1 -xu 1 -unknown 1 -▁bacterium 1 -▁Lambda 1 -caller 1 -▁shipbuilding 1 -▁Ofsted 1 -joke 1 -▁Detailed 1 -▁Extend 1 -▁Pathway 1 -▁postmodern 1 -▁❤ 1 -▁Fluor 1 -▁MySpace 1 -▁Wiseman 1 -▁Enix 1 -▁sedated 1 -▁implicate 1 -zumi 1 -▁smuggler 1 -▁typeface 1 -▁Sportsnet 1 -▁CHF 1 -▁workaround 1 -▁rewind 1 -Friend 1 -▁airbase 1 -▁LCS 1 -▁Mattie 1 -▁smelter 1 -▁Wishing 1 -▁purest 1 -▁Grigg 1 -▁.38 1 -176 1 -golf 1 -▁Kellie 1 -216 1 -▁clink 1 -▁reconnected 1 -▁860 1 -▁crossword 1 -▁Yarn 1 -▁Glennon 1 -▁NCR 1 -.13% 1 -▁Warming 1 -▁Nazar 1 -▁Goku 1 -▁Fleur 1 -defining 1 -[14] 1 -▁DCF 1 -▁scaffold 1 -▁Counsell 1 -Def 1 -influenced 1 -▁contort 1 -▁Meir 1 -drain 1 -BURG 1 -▁Jordy 1 -▁Haden 1 -▁disburse 1 -▁orchestrate 1 -▁Bü 1 -▁bushy 1 -▁JACKSON 1 -ì 1 -▁Abreu 1 -▁Allegations 1 -▁Halliburton 1 -▁KARACHI 1 -▁México 1 -▁Scooby 1 -▁Varanasi 1 -▁apparition 1 -▁dormitories 1 -▁gargantuan 1 -▁insatiable 1 -▁javelin 1 -▁observable 1 -▁subdivided 1 -▁biodegradable 1 -▁snubbed 1 -éc 1 -▁Lidl 1 -▁dinghy 1 
-▁Templar 1 -▁Trimble 1 -▁swanky 1 -▁KCNA 1 -▁Seacrest 1 -▁Steyer 1 -▁Dolce 1 -▁Millwall 1 -▁Radiation 1 -locate 1 -▁Trier 1 -▁Chosen 1 -239 1 -▁Sindhu 1 -▁VICE 1 -▁Tamp 1 -▁armband 1 -▁435 1 -cision 1 -▁conclusively 1 -▁torpedoes 1 -▁г 1 -lease 1 -▁superseded 1 -▁amphetamine 1 -▁eyeliner 1 -▁Thou 1 -▁EXCLUSIVE 1 -206 1 -5-7 1 -.03% 1 -▁Kole 1 -▁Whose 1 -▁socialized 1 -▁awoken 1 -▁hectare 1 -▁ingest 1 -▁1819 1 -▁incrementally 1 -▁Pittman 1 -ochi 1 -Feed 1 -hearing 1 -▁trailhead 1 -▁$190 1 -▁WEC 1 -▁perpetuated 1 -▁Maury 1 -Kent 1 -186 1 -▁¢ 1 -▁urinal 1 -▁DEP 1 -▁deliciously 1 -▁Kanu 1 -▁Bibb 1 -▁Tyrann 1 -migration 1 -▁scouted 1 -capture 1 -analyst 1 -propelled 1 -Northern 1 -strongly 1 -HIN 1 -grandson 1 -▁KDE 1 -calorie 1 -Emerick 1 -▁Grig 1 -atomic 1 -▁Glyn 1 -▁Chatterjee 1 -▁Chippewa 1 -▁Connacht 1 -▁Missoula 1 -▁connoisseur 1 -▁expropriation 1 -▁gilded 1 -▁strangling 1 -▁vociferous 1 -▁Odebrecht 1 -▁duchess 1 -▁Exhaust 1 -▁Policies 1 -▁Winthrop 1 -▁Glamour 1 -▁antimicrobial 1 -▁flint 1 -▁chimp 1 -/2018/11/ 1 -▁ecologist 1 -adora 1 -▁minibus 1 -ideal 1 -▁uneducated 1 -▁Koeman 1 -▁BRE 1 -▁LNP 1 -▁thicket 1 -▁fervently 1 -▁Ayton 1 -▁versed 1 -▁rebelled 1 -▁antidepressant 1 -▁Denham 1 -▁clamoring 1 -▁cutbacks 1 -BODY 1 -▁stiffen 1 -▁undressed 1 -▁sneered 1 -▁SES 1 -raped 1 -▁Adamu 1 -▁61% 1 -agiar 1 -▁chirping 1 -ogene 1 -▁precariously 1 -▁flawlessly 1 -▁Shaq 1 -hotel 1 -update 1 -▁CLO 1 -rova 1 -▁Onye 1 -▁740 1 -▁ilk 1 -▁Layne 1 -▁knitter 1 -246 1 -▁Charming 1 -▁Matz 1 -▁heartily 1 -▁Hov 1 -▁MPC 1 -UND 1 -▁Buses 1 -▁Reel 1 -▁Spra 1 -▁Evangel 1 -▁Leger 1 -▁enlighten 1 -honey 1 -▁raved 1 -guitar 1 -welcome 1 -Hindu 1 -favour 1 -Emma 1 -MAX 1 -▁misfit 1 -▁materialise 1 -Currently 1 -tourism 1 -▁Versa 1 -▁Darla 1 -▁overstay 1 -▁Neve 1 -▁secretion 1 -Albert 1 -▁Loon 1 -168 1 -▁Anadolu 1 -▁Armageddon 1 -▁Brigitte 1 -▁Chaudhary 1 -▁Guadalajara 1 -▁Redknapp 1 -▁Synagogue 1 -▁Whedon 1 -▁bronchitis 1 -▁cradling 1 -▁escrow 1 -▁strangulation 1 -▁ugliness 1 -▁attributing 1 -▁voluminous 1 -▁Pringle 1 -▁rosary 1 -▁Bahadur 1 -▁Rochdale 1 -▁barbershop 1 -▁Yakubu 1 -▁Rhett 1 -▁Musgrave 1 -▁gladiator 1 -▁MotoGP 1 -▁gullible 1 -▁Elimination 1 -▁Oliveira 1 -▁Dulles 1 -▁webinar 1 -▁snowshoe 1 -/09/ 1 -▁Fanning 1 -▁suitor 1 -enka 1 -405 1 -ATHER 1 -▁anchoring 1 -▁ploughed 1 -▁Annex 1 -▁299 1 -▁sloping 1 -▁perverted 1 -▁Taz 1 -Miller 1 -sleeping 1 -rva 1 -▁Increasingly 1 -consciousness 1 -▁Nang 1 -▁Yoon 1 -▁Derick 1 -▁Mild 1 -▁erred 1 -rø 1 -▁LGBTI 1 -▁sifting 1 -208 1 -▁JAMA 1 -▁Kaleb 1 -.44% 1 -▁Batavia 1 -▁276 1 -▁Katya 1 -▁Woodson 1 -▁Yonge 1 -▁corona 1 -▁Genie 1 -Seeing 1 -▁spank 1 -258 1 -▁RSA 1 -▁DSM 1 -▁Necro 1 -dumb 1 -▁Sturm 1 -▁birthright 1 -.99% 1 -▁revamping 1 -worthiness 1 -▁USOC 1 -▁GLA 1 -▁Danville 1 -▁Surat 1 -TRE 1 -▁Temper 1 -▁Garret 1 -Trac 1 -▁Priory 1 -andro 1 -▁Utes 1 -▁9-5 1 -▁googled 1 -▁Namely 1 -▁seething 1 -▁1798 1 -Brendan 1 -Seriously 1 -flexible 1 -Senator 1 -singh 1 -▁sloth 1 -▁perked 1 -.85% 1 -KEN 1 -Place 1 -▁BAY 1 -319 1 -▁Dug 1 -▁legged 1 -▁1806 1 -▁Opti 1 -▁Bakh 1 -▁Tompkins 1 -▁Ayodhya 1 -▁Bairstow 1 -▁Britannia 1 -▁Carthage 1 -▁Coincidentally 1 -▁Excellency 1 -▁ICICI 1 -▁Schefter 1 -▁agrarian 1 -▁agribusiness 1 -▁asymmetrical 1 -▁bludgeon 1 -▁impersonating 1 -▁intensifies 1 -▁maestro 1 -▁traversing 1 -▁Yokohama 1 -▁ZANU 1 -▁Substitute 1 -▁commissar 1 -▁Kamen 1 -▁curving 1 -▁surfacing 1 -▁Silvio 1 -▁Connectivity 1 -▁babble 1 -▁Montague 1 -▁Plague 1 -▁Tahiti 1 -▁(2-0) 1 -▁intrinsically 1 -▁porridge 1 -▁Midwife 1 -▁Vinnie 1 -▁torrid 1 -▁Reddick 1 -▁VFW 1 -▁Brentford 1 
-▁affixed 1 -▁undefined 1 -▁Cooley 1 -▁Postmedia 1 -▁Cristian 1 -▁sighing 1 -▁Bustle 1 -▁Rooster 1 -▁yep 1 -▁Vere 1 -▁Templeton 1 -▁DRM 1 -▁Tetra 1 -▁$4.9 1 -▁Loy 1 -▁quilted 1 -▁Duvall 1 -▁Ballina 1 -▁Downie 1 -4.8% 1 -▁AUD 1 -▁SPLC 1 -bile 1 -uuuu 1 -venir 1 -▁$105 1 -attached 1 -▁10-3 1 -▁rework 1 -318 1 -▁Barak 1 -▁Console 1 -▁Elves 1 -▁1.75 1 -▁Eyre 1 -▁malfunctioning 1 -▁Browse 1 -▁Ghat 1 -▁misread 1 -▁***** 1 -301 1 -254 1 -▁NEA 1 -6-0 1 -▁defaulted 1 -▁jeweler 1 -,080 1 -2.6% 1 -subscribe 1 -incredibly 1 -Brazil 1 -surgical 1 -▁Curie 1 -▁assimilate 1 -Geo 1 -▁godly 1 -umu 1 -▁splatter 1 -▁Elie 1 -▁11-0 1 -▁tango 1 -matsu 1 -▁TIM 1 -▁outstrip 1 -€™ 1 -▁HGTV 1 -▁Judgment 1 -▁Moncton 1 -▁Mufti 1 -▁Muskegon 1 -▁Osprey 1 -▁Rousseau 1 -▁Thonhoff 1 -▁Uranium 1 -▁ambivalent 1 -▁dissonance 1 -▁fission 1 -▁legacies 1 -▁lollipop 1 -▁progesterone 1 -▁unfaithful 1 -▁Transylvania 1 -▁groggy 1 -▁Petraeus 1 -▁sodomy 1 -▁Gilchrist 1 -WORTH 1 -▁Habsburg 1 -▁mingling 1 -▁Sorensen 1 -▁Karaoke 1 -▁kinase 1 -▁Senegalese 1 -▁equipping 1 -▁gloat 1 -▁Andretti 1 -▁Cordray 1 -▁Pershing 1 -▁Kudos 1 -▁Sayoc 1 -▁dapper 1 -▁Arrowhead 1 -▁Serrano 1 -Kiss 1 -▁Charging 1 -▁UNDER 1 -▁undemocratic 1 -▁scarier 1 -▁dozed 1 -▁reevaluate 1 -▁Functional 1 -▁Waugh 1 -ipher 1 -▁deactivated 1 -▁pronoun 1 -▁pag 1 -▁PASS 1 -▁Hoyt 1 -cheese 1 -▁68% 1 -▁Julianne 1 -Perfect 1 -▁stuttering 1 -▁electrode 1 -▁opinionated 1 -▁Oates 1 -noise 1 -▁sketching 1 -???????? 1 -▁6-9 1 -▁decreed 1 -/200 1 -▁selector 1 -▁Wap 1 -277 1 -870 1 -▁960 1 -▁proverb 1 -▁midrange 1 -▁Moreland 1 -▁GRE 1 -lase 1 -▁Katha 1 -▁fourteenth 1 -liar 1 -Cell 1 -▁Hader 1 -▁commercialize 1 -▁Fermi 1 -Orthodox 1 -Amazing 1 -Consider 1 -▁$199 1 -hash 1 -Moore 1 -tipped 1 -▁Dealing 1 -171 1 -▁16.5 1 -▁Dura 1 -▁revolutionize 1 -ATT 1 -▁Vande 1 -▁Tani 1 -▁10.3 1 -▁Proton 1 -▁NAD 1 -▁misogynist 1 -▁Dominguez 1 -▁ENERGY 1 -▁Harlequin 1 -▁Khosrowshahi 1 -▁Pocono 1 -▁auspicious 1 -▁dejected 1 -▁iodine 1 -▁lazily 1 -▁resurfacing 1 -▁undisturbed 1 -▁Linebacker 1 -▁detonation 1 -▁denuclearize 1 -▁Patreon 1 -▁salacious 1 -▁Orchid 1 -▁pixie 1 -▁maggot 1 -▁mutiny 1 -▁Statute 1 -▁contravene 1 -▁Genuity 1 -▁biochemical 1 -▁Citation 1 -▁diode 1 -▁forerunner 1 -▁Brando 1 -▁probate 1 -▁Nagpur 1 -▁caressing 1 -▁Segura 1 -▁musket 1 -▁Cartier 1 -▁cleft 1 -▁Barrymore 1 -▁Jeopardy 1 -▁resolutely 1 -▁Dickey 1 -▁Ginn 1 -▁Stampede 1 -▁Tether 1 -▁clockwork 1 -▁Rutte 1 -HSAA 1 -▁mooted 1 -▁Crump 1 -generational 1 -odont 1 -▁monolithic 1 -▁fae 1 -▁bourgeoisie 1 -▁Kyra 1 -▁squealing 1 -▁Kondo 1 -▁Finishing 1 -▁GEO 1 -▁generalization 1 -▁fantastically 1 -▁Shakur 1 -227 1 -▁formulating 1 -ergy 1 -▁Addo 1 -▁degeneration 1 -shka 1 -▁Hiring 1 -▁crusty 1 -▁1804 1 -▁maneuvered 1 -.41% 1 -▁grated 1 -▁ushering 1 -1:30 1 -WIN 1 -▁NAME 1 -strateg 1 -▁squint 1 -610 1 -conferencing 1 -▁Eton 1 -▁Snape 1 -▁Hose 1 -▁darned 1 -▁Lindy 1 -declared 1 -Tel 1 -▁Mixon 1 -privilege 1 -tremendous 1 -▁scrutinize 1 -▁coerce 1 -eeeeee 1 -▁NIM 1 -▁BoE 1 -embo 1 -repeated 1 -▁FBR 1 -(4) 1 -▁cel 1 -▁Benning 1 -▁Machin 1 -ensure 1 -▁Malam 1 -▁reissue 1 -MSFT 1 -▁802.11 1 -BUSINESS 1 -SPECIAL 1 -▁CINCINNATI 1 -▁Fellaini 1 -▁GDAX 1 -▁Gothenburg 1 -▁McCrory 1 -▁Preakness 1 -▁Probation 1 -▁Svitolina 1 -▁aquaculture 1 -▁colluding 1 -▁hospitable 1 -▁microcosm 1 -▁Aquinas 1 -▁piqued 1 -▁Menzies 1 -▁Gatsby 1 -▁floodgates 1 -▁Militia 1 -▁Dispute 1 -▁Champlain 1 -Person 1 -▁abbot 1 -▁Nilsson 1 -▁Dermot 1 -▁indulgent 1 -▁Mukesh 1 -▁Millionaire 1 -▁PSNI 1 -▁Hackney 1 -▁Brahm 1 -▁Mirren 1 -▁Marathi 1 -▁Beattie 1 -▁Balfour 1 -▁Agnew 1 
-▁craftsmen 1 -▁Vinay 1 -▁mach 1 -▁Maximilian 1 -▁hubris 1 -▁rationality 1 -▁Finney 1 -▁Dagger 1 -▁Sina 1 -▁PFA 1 -umbu 1 -▁lyricist 1 -▁Ripper 1 -unu 1 -▁Kuch 1 -▁panhandle 1 -▁Scoop 1 -▁snowflake 1 -▁Sade 1 -▁Coburn 1 -dien 1 -▁PPV 1 -▁practicality 1 -▁oversupply 1 -kopf 1 -thorne 1 -thick 1 -▁Sita 1 -▁Concerto 1 -▁007 1 -▁Selected 1 -▁Ocon 1 -▁Bertie 1 -bop 1 -▁Carper 1 -▁Woolf 1 -▁kha 1 -▁SARS 1 -ophag 1 -3.9% 1 -▁Kuma 1 -▁auctioneer 1 -▁Kore 1 -dual 1 -▁cloned 1 -385 1 -iña 1 -▁interned 1 -▁duplicated 1 -▁Lanc 1 -▁hustled 1 -martial 1 -politics 1 -▁electrically 1 -Animal 1 -aunch 1 -156 1 -▁jean 1 -▁furlough 1 -Invest 1 -▁consequent 1 -▁valiant 1 -249 1 -▁Tilly 1 -processed 1 -▁curricula 1 -earning 1 -▁hmmm 1 -Jake 1 -gwu 1 -SOUNDBITE 1 -rheumatoid 1 -▁Apparel 1 -▁DeWine 1 -▁Dáil 1 -▁Guantánamo 1 -▁Haqqani 1 -▁Manifesto 1 -▁Pulisic 1 -▁RALEIGH 1 -▁Sotomayor 1 -▁Voodoo 1 -▁disapproving 1 -▁orangutan 1 -▁sheepishly 1 -▁Quigley 1 -▁martyrdom 1 -▁dashcam 1 -▁Envoy 1 -▁interfaith 1 -▁Submission 1 -▁Finlay 1 -▁Bradenton 1 -▁Zoning 1 -▁Bettman 1 -▁retinal 1 -▁TIP 1 -▁bitches 1 -▁Unexpected 1 -630 1 -▁Cheung 1 -▁replicating 1 -▁Yonhap 1 -▁pasa 1 -▁weaning 1 -▁POV 1 -▁336 1 -▁Hizb 1 -▁croak 1 -Removing 1 -▁Planck 1 -URA 1 -▁Reverse 1 -▁teetering 1 -▁bod 1 -signal 1 -▁Idiot 1 -haba 1 -▁abolitionist 1 -conv 1 -▁devouring 1 -▁broadside 1 -▁steeply 1 -▁Analog 1 -▁XRP 1 -▁Kyler 1 -540 1 -▁SSE 1 -▁3.3% 1 -▁PALM 1 -▁Huey 1 -▁Merle 1 -/21 1 -▁Sponsored 1 -▁glided 1 -rade 1 -▁Chasing 1 -725 1 -▁CUP 1 -▁tailback 1 -▁Parton 1 -189 1 -▁Somers 1 -▁Operational 1 -▁cloaked 1 -▁Rink 1 -▁LGA 1 -Lynn 1 -▁sidestep 1 -▁cargoes 1 -explain 1 -▁Klim 1 -Elizabeth 1 -Columbia 1 -▁scuttle 1 -▁Buttler 1 -Whoever 1 -▁Filmmaker 1 -iska 1 -▁publicize 1 -▁touchline 1 -▁hatching 1 -▁Debut 1 -GMO 1 -▁octagon 1 -▁Smithfield 1 -▁Leclerc 1 -▁Monterrey 1 -▁ORLANDO 1 -▁Procurement 1 -▁SPRINGFIELD 1 -▁Tahrir 1 -▁christened 1 -▁cortisol 1 -▁duffel 1 -▁naysayers 1 -▁pessimism 1 -▁repulsive 1 -▁unconcerned 1 -▁Utrecht 1 -▁fanciful 1 -▁denigrate 1 -▁juxtaposition 1 -▁Montecito 1 -▁Buckinghamshire 1 -▁embedding 1 -▁Stagecoach 1 -▁shredding 1 -▁conflate 1 -▁KTLA 1 -▁Stampeders 1 -▁Johansen 1 -▁Skinny 1 -▁Essay 1 -ugly 1 -▁unseasonabl 1 -▁Oskar 1 -▁Rigby 1 -▁Soren 1 -▁Granville 1 -▁Buckner 1 -▁Carrera 1 -▁Hye 1 -▁Sancho 1 -▁Waka 1 -▁Beware 1 -▁cashew 1 -▁Harbin 1 -▁Sherpa 1 -▁airflow 1 -▁huffed 1 -▁mace 1 -▁$0.10 1 -▁marshmallow 1 -▁Tarek 1 -▁purify 1 -▁authorisation 1 -troph 1 -▁Ambien 1 -▁Stringer 1 -ogu 1 -▁welder 1 -gher 1 -Gbps 1 -▁Kandi 1 -▁flavorful 1 -164 1 -hanna 1 -▁Eugen 1 -136 1 -▁Westgate 1 -▁Omer 1 -▁stagger 1 -KDKA 1 -▁Peek 1 -355 1 -▁treading 1 -▁Bahr 1 -▁wince 1 -▁Principle 1 -▁Ferran 1 -distinct 1 -insights 1 -▁grimace 1 -▁Kyung 1 -slightly 1 -▁rationally 1 -broke 1 -▁Condor 1 -arajan 1 -▁Tug 1 -▁Stormer 1 -747 1 -mandated 1 -▁Gather 1 -wives 1 -▁Markey 1 -▁Mohi 1 -▁hurrying 1 -▁Slay 1 -▁Algonquin 1 -▁Australasia 1 -▁Endeavour 1 -▁Endurance 1 -▁Fiorentina 1 -▁Ghulam 1 -▁Keuchel 1 -▁Khomeini 1 -▁Maiduguri 1 -▁Nightingale 1 -▁calibrated 1 -▁hyperinflation 1 -▁inevitability 1 -▁irreplaceable 1 -▁ointment 1 -▁parochial 1 -▁syndication 1 -б 1 -▁Vegetable 1 -▁infrequently 1 -▁communique 1 -▁mittens 1 -▁Snider 1 -▁SCOTT 1 -▁Shohei 1 -▁wiretapping 1 -▁Camelot 1 -▁Brantford 1 -▁Pencil 1 -▁EMP 1 -▁Madhav 1 -▁McCarty 1 -▁extramarital 1 -chicken 1 -▁unceremoniously 1 -▁ISLAND 1 -▁Okafor 1 -▁Organizing 1 -▁Ashanti 1 -▁Venue 1 -▁redefining 1 -▁Lagarde 1 -▁Keanu 1 -▁Diplomatic 1 -▁Spent 1 -▁Geraldine 1 -royal 1 -▁Bastian 1 -▁laborious 
1 -▁Attraction 1 -▁hypothesized 1 -▁Tunis 1 -▁MED 1 -▁waxing 1 -▁earshot 1 -oiled 1 -▁balmy 1 -▁Marsden 1 -▁trawl 1 -▁jacked 1 -▁scooted 1 -▁ingesting 1 -▁skylight 1 -▁nada 1 -▁$5.6 1 -▁firebrand 1 -▁DMK 1 -288 1 -▁perishable 1 -▁2009-10 1 -▁manicured 1 -▁Leary 1 -348 1 -975 1 -AFC 1 -▁lawlessness 1 -▁Overland 1 -▁trickling 1 -▁nineties 1 -▁honking 1 -asculin 1 -226 1 -nani 1 -▁premarket 1 -▁insular 1 -▁Nicaraguan 1 -▁Kade 1 -clin 1 -Alexa 1 -cancel 1 -425 1 -______ 1 -.69% 1 -.97% 1 -▁Quicken 1 -▁HY 1 -▁revere 1 -▁pacemaker 1 -▁counterpoint 1 ------- 1 -855 1 -▁298 1 -▁Gah 1 -▁Nephi 1 -▁gau 1 -▁SPF 1 -▁Objective 1 -march 1 -338 1 -▁logistic 1 -apnews 1 -referred 1 -▁Keir 1 -.71% 1 -relevant 1 -Hungarian 1 -Greek 1 -Standard 1 -.95% 1 -▁SDG 1 -▁tasteful 1 -exchange 1 -▁Patrik 1 -▁1794 1 -wanda 1 -split 1 -ankar 1 -▁$550 1 -Large 1 -ativa 1 -tercollegiate 1 -▁Celebrities 1 -▁Cespedes 1 -▁Chernobyl 1 -▁Espinosa 1 -▁Kinshasa 1 -▁LITTLE 1 -▁TOWIE 1 -▁exasperation 1 -▁foreboding 1 -▁fraternal 1 -▁insinuate 1 -▁masquerading 1 -▁procuring 1 -▁unbeatable 1 -▁vengeful 1 -▁Hoffenheim 1 -▁Gurgaon 1 -▁skimming 1 -▁renegade 1 -▁decimal 1 -▁modulation 1 -▁gauze 1 -▁forbade 1 -▁Artisan 1 -▁Carillion 1 -▁flax 1 -▁canvassing 1 -▁delving 1 -▁schematic 1 -▁Countdown 1 -▁tiresome 1 -▁mojo 1 -▁TUC 1 -LNG 1 -▁Boardwalk 1 -▁Tidal 1 -▁embar 1 -6.0 1 -recht 1 -▁Definition 1 -CIO 1 -▁ugh 1 -{\ 1 -honored 1 -▁provenance 1 -▁splattered 1 -▁snooping 1 -▁MAGA 1 -▁Brahmin 1 -▁spaceflight 1 -▁tearfully 1 -▁jumbled 1 -▁snarling 1 -witted 1 -FORMER 1 -▁Abdo 1 -ław 1 -▁Instantly 1 -▁autographed 1 -▁Wiggle 1 -▁crayon 1 -stitch 1 -▁EXC 1 -▁islanders 1 -▁splashes 1 -▁21⁄2 1 -▁Hough 1 -ateur 1 -▁Raised 1 -▁homemaker 1 -LET 1 -COP 1 -wiki 1 -▁Astrid 1 -▁1805 1 -▁clamor 1 -▁Quake 1 -▁retainer 1 -▁ATK 1 -▁stri 1 -▁Rogan 1 -avour 1 -▁2035 1 -▁Lace 1 -▁.......... 
1 -▁Barbour 1 -Guard 1 -▁Trevi 1 -grim 1 -▁Prag 1 -licensed 1 -383 1 -Security 1 -exposed 1 -Rub 1 -▁hookup 1 -heath 1 -▁Karo 1 -bottle 1 -172 1 -▁Cretaceous 1 -▁Ceylon 1 -▁Chidambaram 1 -▁Guevara 1 -▁Muhammed 1 -▁Saracens 1 -▁carcinoma 1 -▁cauldron 1 -▁cyanide 1 -▁dispersion 1 -▁interrogator 1 -▁malevolent 1 -▁satisfies 1 -▁smallpox 1 -▁aeroplane 1 -▁Cobain 1 -▁homebuyers 1 -▁Razorbacks 1 -▁Psychologist 1 -▁bribing 1 -▁Ravindra 1 -▁angrier 1 -▁skimmed 1 -Shot 1 -▁Ziggy 1 -▁CAIRO 1 -▁antisemitism 1 -▁tween 1 -finally 1 -▁Chairwoman 1 -▁Alonzo 1 -▁tuft 1 -▁Whittier 1 -▁Nek 1 -▁[50] 1 -▁Poplar 1 -failing 1 -▁transverse 1 -▁Hightower 1 -Drop 1 -▁Dieter 1 -▁Ç 1 -▁Goyal 1 -▁denunciation 1 -▁Schloss 1 -▁Seward 1 -▁Disick 1 -▁10.2 1 -▁Jumbo 1 -▁Firefly 1 -▁Gorgeous 1 -▁reseller 1 -▁propping 1 -▁ballgame 1 -3.2% 1 -▁1809 1 -▁restive 1 -▁Goldwater 1 -udh 1 -▁Borden 1 -ripping 1 -▁Maison 1 -▁Horde 1 -▁Sonnen 1 -▁Maxx 1 -▁resettled 1 -▁Ruck 1 -▁bagging 1 -cooled 1 -▁Punta 1 -▁midget 1 -▁Tyra 1 -▁tidbit 1 -▁180,000 1 -▁Diving 1 -▁HSE 1 -▁antsy 1 -▁15.5 1 -▁falsehood 1 -SET 1 -▁Sergi 1 -▁moped 1 -▁1:10 1 -tuning 1 -▁£60 1 -▁EDF 1 -▁chaser 1 -▁kilt 1 -▁conserved 1 -▁OSCE 1 -unfair 1 -▁occ 1 -▁Kaif 1 -▁Carrot 1 -4.2% 1 -.55% 1 -evolution 1 -▁ETH 1 -▁Arne 1 -577 1 -Oregon 1 -REUTERS 1 -chancellor 1 -discretion 1 -feminist 1 -relationship 1 -▁undersea 1 -Civil 1 -trafficking 1 -▁jumble 1 -Proposed 1 -▁Palla 1 -Member 1 -species 1 -▁yada 1 -tempered 1 -▁Moby 1 -▁logger 1 -ilio 1 -genesis 1 -▁snacking 1 -▁algo 1 -▁melodrama 1 -▁Ahmadinejad 1 -▁Anheuser 1 -▁Forbidden 1 -▁GNOME 1 -▁Hennepin 1 -▁Maserati 1 -▁Prosecuting 1 -▁Streisand 1 -▁TARDIS 1 -▁Universidad 1 -▁Windhoek 1 -▁admirably 1 -▁citadel 1 -▁emancipation 1 -▁irreverent 1 -▁liturgical 1 -▁reclamation 1 -▁gondola 1 -▁recuperate 1 -▁Benioff 1 -▁deGrom 1 -▁Carlsbad 1 -▁Synergy 1 -▁Sikkim 1 -▁FactSet 1 -▁Attitude 1 -▁ratepayers 1 -▁Godrej 1 -▁displacing 1 -▁transfixed 1 -▁Pradhan 1 -▁baht 1 -▁Antoinette 1 -▁unsatisfactory 1 -▁Carrasco 1 -.98% 1 -▁grasshopper 1 -▁gaggle 1 -▁Tyneside 1 -▁Abyss 1 -▁Mejia 1 -▁DEN 1 -▁Sushma 1 -▁1.15 1 -▁Khris 1 -▁Krugman 1 -−1 1 -▁uninformed 1 -▁Simpli 1 -▁mumps 1 -▁riled 1 -▁Clarion 1 -▁Roswell 1 -▁patty 1 -▁Satanic 1 -▁Antifa 1 -hier 1 -▁tracksuit 1 -▁11:59 1 -▁Zaw 1 -▁Debian 1 -▁BOY 1 -▁transcribed 1 -▁decorator 1 -▁Goodness 1 -▁arbor 1 -▁bullion 1 -▁Rosh 1 -▁Traveler 1 -▁introverted 1 -▁undies 1 -▁casa 1 -▁sparing 1 -▁Caron 1 -▁Tempest 1 -▁MCL 1 -▁Dubbe 1 -▁Mainstream 1 -▁59% 1 -▁Abia 1 -▁Extract 1 -solve 1 -▁ascertained 1 -▁demolishing 1 -adequate 1 -miscommunication 1 -1,500 1 -▁Cairn 1 -NIA 1 -▁1780 1 -▁Gag 1 -▁Zambian 1 -▁Brack 1 -▁Faithful 1 -▁Somewhat 1 -▁digested 1 -▁unraveling 1 -▁Marlin 1 -▁$5.4 1 -▁delved 1 -▁Strang 1 -▁8000 1 -▁drawdown 1 -politan 1 -▁nigh 1 -▁grappled 1 -▁Alina 1 -YOU 1 -▁Mylan 1 -existence 1 -integration 1 -Instagram 1 -Marco 1 -▁baffle 1 -▁waxed 1 -▁Lush 1 -5.6% 1 -▁infamously 1 -Danny 1 -▁Garde 1 -philic 1 -emption 1 -▁unsold 1 -Woman 1 -▁gouge 1 -appeared 1 -incredible 1 -Quick 1 -▁Occupation 1 -8+ 1 -▁KwaZulu 1 -و 1 -▁Desjardins 1 -▁Fonseca 1 -▁McDonagh 1 -▁Monrovia 1 -▁Tlaib 1 -▁anniversaries 1 -▁emaciated 1 -▁extrovert 1 -▁invertebrate 1 -▁nativity 1 -▁rapprochement 1 -▁rickshaw 1 -▁subterranean 1 -▁symbiotic 1 -▁Slightly 1 -▁cutlery 1 -▁monochrome 1 -▁Tabitha 1 -▁fairgrounds 1 -▁Jeddah 1 -▁metastatic 1 -Rashtriya 1 -▁(1992) 1 -graf 1 -▁Rakesh 1 -▁Hafiz 1 -▁Gannon 1 -▁Soleil 1 -▁brisket 1 -▁Translate 1 -▁Brawl 1 -ranger 1 -▁Medford 1 -▁codenamed 1 -▁PAL 1 -▁Lakeshore 1 -▁Krystal 1 
-[deleted vocabulary file contents: one "token 1" entry per line, SentencePiece-style "▁"-prefixed tokens; entries elided]
-▁Kimbrel 1 -▁decoding 1 -▁Motherhood 1 -Risk 1 -▁windowsill 1 -▁Judo 1 -ATON 1 -▁ensnared 1 -▁Ciaran 1 -▁ricocheted 1 -▁vanishes 1 -▁sinuses 1 -▁nitro 1 -socialist 1 -▁Illness 1 -tribune 1 -▁tactician 1 -▁Cormac 1 -▁Fredrik 1 -▁Modest 1 -▁ABBA 1 -▁pesto 1 -▁scammer 1 -▁88% 1 -▁mike 1 -amiento 1 -CHESTER 1 -▁Balan 1 -▁clinician 1 -▁FIG 1 -▁$145 1 -▁0.35 1 -▁Firebird 1 -▁estaba 1 -▁Peaceful 1 -aswamy 1 -Bowl 1 -olini 1 -▁Tiki 1 -deprived 1 -Palestine 1 -pregnancy 1 -jewelry 1 -insult 1 -introduced 1 -▁FEE 1 -Forever 1 -Barry 1 -▁Bogaerts 1 -▁Convenience 1 -▁Dedicated 1 -▁Mizoram 1 -▁OpenStack 1 -▁SENIOR 1 -▁Smyrna 1 -▁Sulphur 1 -▁Tragedy 1 -▁Whirlpool 1 -▁bountiful 1 -▁ensconced 1 -▁falsifying 1 -▁imploring 1 -▁justifiably 1 -▁memorizing 1 -▁ordnance 1 -▁polynomial 1 -▁registries 1 -▁scampered 1 -▁swerving 1 -▁unassailable 1 -▁vandalised 1 -▁Ishmael 1 -▁Piramal 1 -▁poncho 1 -▁progeny 1 -▁FACE 1 -▁Katniss 1 -▁fancier 1 -▁(1983) 1 -▁Equinox 1 -▁hypersonic 1 -▁Nicolás 1 -▁kimono 1 -▁Dynamite 1 -▁Breslin 1 -▁Ibrox 1 -▁9:40 1 -▁bicycling 1 -▁Mehmet 1 -▁Understandably 1 -▁Montevideo 1 -▁Danzig 1 -▁Irfan 1 -▁streetlights 1 -▁Upstairs 1 -▁pillowcase 1 -▁lemur 1 -▁Fawad 1 -▁graham 1 -▁JUNE 1 -▁bassinet 1 -▁Bourke 1 -▁Paypal 1 -▁Viejo 1 -▁underestimating 1 -▁obstetrician 1 -▁Naylor 1 -▁BRCA 1 -▁2015-2016 1 -Steel 1 -▁Largo 1 -constructive 1 -▁Somaliland 1 -▁Muddy 1 -▁Knockout 1 -▁triathlete 1 -▁Pedroia 1 -▁gutting 1 -▁Processor 1 -▁footbridge 1 -▁Aceh 1 -▁repainted 1 -BAR 1 -▁Sabe 1 -▁Freetown 1 -▁blanche 1 -▁Arrival 1 -▁pronouncement 1 -▁Nieder 1 -▁Sepp 1 -▁Zola 1 -▁Barrack 1 -venue 1 -guru 1 -▁Realtor 1 -Haq 1 -bender 1 -▁Biff 1 -▁Stitcher 1 -▁parser 1 -émi 1 -▁Crumb 1 -955 1 --2003 1 -▁hula 1 -rapid 1 -▁marque 1 -1950-53 1 -Pep 1 -▁Replac 1 -Conservative 1 -absurd 1 -horror 1 -Kansas 1 -200,000 1 -▁repatriate 1 -software 1 -Mount 1 -▁Karlie 1 -▁fag 1 -▁tithe 1 -REC 1 -▁furrow 1 -▁Unlock 1 -Duke 1 -▁Brenna 1 -▁Corrupt 1 -▁Allegiance 1 -▁Crichton 1 -▁Drayton 1 -▁Investigating 1 -▁Kovind 1 -▁Renfrew 1 -▁Waikato 1 -▁depreciate 1 -▁hairstylist 1 -▁hilarity 1 -▁horticultural 1 -▁miscreant 1 -▁nomenclature 1 -▁undeterred 1 -▁uninhabitable 1 -▁MMAjunkie 1 -▁doctrinal 1 -▁recluse 1 -▁Deirdre 1 -▁Hilliard 1 -▁Lululemon 1 -▁acetate 1 -▁Couillard 1 -▁incarnate 1 -▁MANCHESTER 1 -▁Bourg 1 -▁epilogue 1 -▁Nagoya 1 -▁Therapist 1 -▁Velocity 1 -▁WPHT 1 -▁Groundhog 1 -▁Wofford 1 -▁TRUE 1 -▁Entire 1 -▁primacy 1 -▁trifle 1 -▁Quezon 1 -▁Avril 1 -▁reintroduction 1 -▁Artsakh 1 -▁Pradeep 1 -▁facie 1 -▁photojournalist 1 -▁Sebring 1 -▁arbitrage 1 -▁cowering 1 -▁transsexual 1 -▁Hardcore 1 -expressed 1 -▁unhurt 1 -▁Gulfstream 1 -▁Commitment 1 -▁Jamaal 1 -▁MySQL 1 -WCCO 1 -▁Laurier 1 -▁grist 1 -▁earpiece 1 -▁Ariane 1 -▁Relic 1 -▁Farrar 1 -▁Quail 1 -trapping 1 -▁serialized 1 -▁loafers 1 -▁midsection 1 -▁fizzy 1 -▁Hanford 1 -▁racecourse 1 -▁smothering 1 -gård 1 -▁$5.1 1 -▁Goodnight 1 -▁Darden 1 -,000.00. 
1 -▁regimental 1 -▁disarmed 1 -▁pisses 1 -▁Madoff 1 -▁Robart 1 -OUCH 1 -▁Shoppe 1 -▁Aminu 1 -▁Takashi 1 -▁resp 1 -▁morphological 1 -2:30 1 -▁patronize 1 -▁garda 1 -schule 1 -▁dependant 1 -022 1 -fitted 1 -Alien 1 -▁impotent 1 -▁trey 1 -▁lynch 1 -▁Combo 1 -▁PFF 1 -Technology 1 -category 1 -comprehensive 1 -genuine 1 -NAFTA 1 -overwhelming 1 -Silva 1 -midst 1 -mobility 1 -analytic 1 -poison 1 -recognized 1 -evaluation 1 -▁grump 1 -▁Magician 1 -▁Tuf 1 -▁Tutu 1 -▁arbitr 1 -courage 1 -▁mink 1 -▁phylogenetic 1 -ω 1 -▁Adolescent 1 -▁Balboa 1 -▁Bratislava 1 -▁Confucian 1 -▁Escondido 1 -▁Hellebuyck 1 -▁Hennessey 1 -▁McNabb 1 -▁Palaszczuk 1 -▁Sajjan 1 -▁Sasikala 1 -▁Takahashi 1 -▁TheDCNF 1 -▁Valenzuela 1 -▁artichoke 1 -▁astrophysicist 1 -▁bohemian 1 -▁evaluator 1 -▁extraordinaire 1 -▁infiltrating 1 -▁redundancies 1 -▁salamander 1 -▁saturday 1 -▁sputtering 1 -▁Hindutva 1 -▁nebula 1 -▁whitish 1 -▁apprised 1 -▁Brissett 1 -▁rosé 1 -▁(1986) 1 -▁cringing 1 -▁ADVISE 1 -▁Masahiro 1 -▁8:40 1 -▁Vivendi 1 -▁Qualification 1 -▁corrode 1 -▁6:40 1 -▁Kaczynski 1 -▁neoconservative 1 -▁Provence 1 -▁Pepperdine 1 -▁Ghent 1 -▁10:40 1 -▁rehabbing 1 -▁squatters 1 -▁Kombat 1 -▁Eloise 1 -▁paralegal 1 -▁Gabbana 1 -▁swatting 1 -▁uncomplicated 1 -temper 1 -▁floodplain 1 -▁Carbondale 1 -▁Bogart 1 -▁preclinical 1 -▁eardrum 1 -▁Caution 1 -▁consultative 1 -▁GoDaddy 1 -▁ALEC 1 -▁blanc 1 -▁Variable 1 -▁lbw 1 -▁Russel 1 -▁mineralization 1 -▁Spectra 1 -▁housewives 1 -▁scalpel 1 -▁Zeller 1 -▁JOB 1 -▁hunkered 1 -Carroll 1 -▁classifies 1 -▁passcode 1 -▁87% 1 -▁impinge 1 -▁Pageant 1 -bitter 1 -▁$0.14 1 -▁Ebay 1 -▁CASA 1 -▁Winton 1 -novel 1 -APEJ 1 -▁espera 1 -▁ooh 1 -▁cabbie 1 -▁0.17 1 -▁Tetsu 1 -▁Neiman 1 -Query 1 --2005 1 -▁Brexiteer 1 -$1.2 1 -▁RTE 1 -▁Jingle 1 -package 1 -▁Rideau 1 -▁Sybil 1 -▁Riff 1 -lwyn 1 -▁Bunge 1 -▁Yari 1 -▁Breit 1 -Information 1 -Warrior 1 -[22] 1 -Acquire 1 -Challenge 1 -fulfilling 1 -Moving 1 -competition 1 -▁Intuit 1 -Liber 1 -waving 1 -▁disinterest 1 -asana 1 -40% 1 -ą 1 -ż 1 -▁Andrzej 1 -▁Oshkosh 1 -▁Ousmane 1 -▁PYEONGCHANG 1 -▁SCHOOL 1 -▁Selangor 1 -▁Watanabe 1 -▁admonition 1 -▁archetypal 1 -▁depraved 1 -▁fibromyalgia 1 -▁interlocking 1 -▁midriff 1 -▁osteoporosis 1 -▁rhyming 1 -▁skydiving 1 -▁unyielding 1 -▁Hendrik 1 -▁Gadsden 1 -▁albino 1 -▁Anurag 1 -▁Vendors 1 -▁Krieger 1 -▁Removal 1 -▁Veggie 1 -▁telephoto 1 -▁tundra 1 -▁Panarin 1 -▁empties 1 -▁Addams 1 -▁scantily 1 -▁Brookhaven 1 -▁BOOM 1 -▁Muffin 1 -▁Thornhill 1 -▁nitty 1 -▁Fiorina 1 -▁Leandro 1 -▁TURN 1 -▁Gaurav 1 -▁Yannick 1 -▁[62] 1 -▁Francine 1 -SeekingAlpha 1 -▁Efe 1 -▁Denali 1 -▁Valdes 1 -▁Masterpiece 1 -▁DMZ 1 -▁misjudged 1 -▁bakeries 1 -▁WTOL 1 -▁Assassination 1 -▁Lopes 1 -▁Amiga 1 -▁Vow 1 -▁Danbury 1 -▁CNC 1 -▁hourglass 1 -Host 1 -▁recklessness 1 -▁DEAL 1 -Wheel 1 -▁Yarra 1 -▁Laughlin 1 -▁Calabria 1 -▁Rosary 1 -▁Burgh 1 -▁NIST 1 -gatherer 1 -▁equipo 1 -hagen 1 -RILL 1 -▁Cough 1 -▁CHIP 1 -▁Kirch 1 -▁Arista 1 -▁manoeuvr 1 -▁Bligh 1 -▁Monash 1 -faq 1 -▁Whittle 1 -▁Glick 1 -▁Cornel 1 -rigged 1 -▁monolith 1 -▁kitsch 1 -BERG 1 -appendix 1 -manufacturing 1 -loathing 1 -Psalm 1 -Third 1 -welfare 1 -▁Zelen 1 -▁Jalan 1 -manufacturers 1 -▁supersede 1 -Shift 1 -Freak 1 -encourage 1 -Tommy 1 -▁securit 1 -▁Maven 1 -Sanders 1 -spinal 1 -▁Exploit 1 -ς 1 -▁Bundestag 1 -▁Cymru 1 -▁Dubuque 1 -▁Exciting 1 -▁Galbraith 1 -▁HOURS 1 -▁MOINES 1 -▁Malzahn 1 -▁Nathalie 1 -▁Pawtucket 1 -▁Siamese 1 -▁Udinese 1 -▁Wrexham 1 -▁enthralling 1 -▁eucalyptus 1 -▁fructose 1 -▁idiocy 1 -▁indefensible 1 -▁nibbling 1 -▁pneumatic 1 -▁ugliest 1 -▁unrestrained 1 -▁Aaliyah 1 -▁Ritual 1 
-▁denizens 1 -▁Aitken 1 -▁olfactory 1 -▁physiotherapy 1 -▁Stetson 1 -mniotic 1 -▁deplore 1 -▁misbehavior 1 -▁infamy 1 -▁Bugatti 1 -▁Ginsberg 1 -▁derisive 1 -▁roiling 1 -▁HBCU 1 -▁disaffected 1 -▁tolerating 1 -▁Reaves 1 -▁FERC 1 -▁Adaptive 1 -▁tightrope 1 -▁hatchery 1 -▁Sirleaf 1 -▁Kerrigan 1 -▁decry 1 -▁Babel 1 -▁pepperoni 1 -▁Cooperstown 1 -▁Wildrose 1 -▁flagpole 1 -▁unedited 1 -▁Campground 1 -▁abated 1 -Marcus 1 -▁[60] 1 -▁2015–16 1 -▁Kunst 1 -▁MPAA 1 -▁Frequently 1 -phenyl 1 -▁semana 1 -▁tramway 1 -▁BEEN 1 -▁holly 1 -▁4:20 1 -▁inverter 1 -▁commonality 1 -▁ipod 1 -garten 1 -▁$25.00 1 -▁lark 1 -▁CCG 1 -ATF 1 -▁Rogen 1 -▁Diaper 1 -▁Glei 1 -PML 1 -▁temperamental 1 -accessible 1 -Forget 1 -▁Natale 1 -▁Ditch 1 -▁Hoshi 1 -▁spic 1 -▁NGC 1 -WDRB 1 -czynski 1 -▁marinade 1 -▁Emerge 1 -▁Kamil 1 -▁gif 1 -freepress 1 -▁Rascal 1 -▁Frodo 1 -Ö 1 -Ontario 1 -additional 1 -brilliant 1 -conversation 1 -provincial 1 -dedicated 1 -Historic 1 -▁Sangam 1 -▁surmise 1 -tropic 1 -▁Feige 1 -intense 1 -▁devolve 1 -Android 1 -actress 1 --2025 1 -▁disallow 1 -wealthy 1 -WHAT 1 -0.000 1 -MONT 1 -▁Acapulco 1 -▁Cruelty 1 -▁Merkley 1 -▁Pacioretty 1 -▁Paducah 1 -▁Pastrnak 1 -▁Proclamation 1 -▁Puzder 1 -▁Siobhan 1 -▁Tabernacle 1 -▁baptised 1 -▁bewildering 1 -▁circumcised 1 -▁conclave 1 -▁dynamism 1 -▁melatonin 1 -▁palladium 1 -▁primordial 1 -▁spectroscopy 1 -▁squabbling 1 -▁utilisation 1 -▁Kuiper 1 -▁Ophelia 1 -▁Scissor 1 -▁Shazam 1 -▁physiotherapist 1 -▁epigenetic 1 -▁InBev 1 -▁geopolitics 1 -▁Egerton 1 -▁Qualified 1 -▁Glancing 1 -▁Eason 1 -▁amalgamated 1 -▁Oxnard 1 -▁Squash 1 -▁vertex 1 -▁striding 1 -▁Maroney 1 -▁slapstick 1 -oooh 1 -▁Bajaj 1 -▁conscription 1 -Cruz 1 -▁caving 1 -▁indigo 1 -▁superstructure 1 -▁iPlayer 1 -▁3:40 1 -▁Xenia 1 -imbo 1 -▁Danforth 1 -▁marginalization 1 -▁Wonka 1 -▁Outgoing 1 -▁Burnside 1 -▁Pooja 1 -▁busses 1 -▁reconsideration 1 -▁WestJet 1 -▁Batu 1 -▁untitled 1 -▁buckling 1 -boob 1 -▁Travelling 1 -▁lugar 1 -▁Ruf 1 -▁Talley 1 -▁yelped 1 -▁tipster 1 -▁Shanti 1 -▁creaked 1 -▁$0.13 1 -XXXX 1 -▁OAK 1 -▁Clary 1 -▁Estelle 1 -▁Least 1 -▁Gerrit 1 -▁Trau 1 -▁Sharad 1 -ilities 1 -▁Borderline 1 -ydd 1 -bbc 1 -egún 1 -judgment 1 -cruel 1 -▁Bakar 1 -BROOK 1 -▁Darien 1 -▁$1.99 1 -▁0.99 1 -▁.32 1 -▁Cerro 1 -▁Lupe 1 -immune 1 -WJZ 1 -Luckily 1 -Survivor 1 -argument 1 -Bitcoin 1 -,040 1 -fication 1 -▁Cael 1 -juice 1 -yrene 1 -auskas 1 -▁$1.13 1 -/2018/03/ 1 -▁racquet 1 -▁Bilawal 1 -▁Chelmsford 1 -▁Descartes 1 -▁Guwahati 1 -▁Lupita 1 -▁Murfreesboro 1 -▁Porcello 1 -▁Tyndall 1 -▁Vocational 1 -▁antipsychotic 1 -▁begrudgingly 1 -▁categorised 1 -▁excrement 1 -▁gibberish 1 -▁loitering 1 -▁quotient 1 -▁summarizing 1 -▁gaseous 1 -▁Pundit 1 -▁Tsunami 1 -▁spelt 1 -▁Brokerage 1 -▁Thaksin 1 -▁loggerheads 1 -▁Jyoti 1 -▁Cheerios 1 -▁Suriname 1 -▁Tropicana 1 -▁Breezy 1 -▁Haydn 1 -▁adidas 1 -▁shunning 1 -▁buffoon 1 -▁Malvern 1 -▁Grieve 1 -▁Linguist 1 -▁Cajon 1 -▁seclusion 1 -▁Yunnan 1 -▁sappy 1 -▁Partisan 1 -▁Giphy 1 -▁Tyrrell 1 -▁fretted 1 -▁Bhaskar 1 -▁Allowance 1 -▁Ohanian 1 -▁Kyaw 1 -929 1 -▁discernment 1 -▁conical 1 -▁superimposed 1 -▁Osage 1 -▁Kanpur 1 -▁Gatos 1 -▁videographer 1 -8477 1 -▁FIND 1 -▁dismounted 1 -▁Roddy 1 -▁Classroom 1 -▁jeopardise 1 -▁Stenson 1 -▁looters 1 -▁Reiter 1 -▁mailboxes 1 -▁shank 1 -▁Enron 1 -▁inclusiveness 1 -▁Ashish 1 -▁Albus 1 -▁disclaim 1 -▁$0.50 1 -▁pimple 1 -▁Quinlan 1 -▁Julen 1 -nthropological 1 -cosy 1 -▁Coffin 1 -$500 1 -▁WND 1 -▁Matador 1 -voluntary 1 -▁Siddiq 1 -▁Olli 1 -▁Hä 1 -▁Zakaria 1 -▁FILING 1 -▁Guay 1 -▁Magda 1 -Scale 1 -BABA 1 -$60 1 -▁ROG 1 -▁Aldrin 1 -tergenerational 1 
-Department 1 -Nawaz 1 -Microsoft 1 -Medicare 1 -▁obliterate 1 -profound 1 -▁Phish 1 -▁Sandhu 1 -selective 1 -▁GSL 1 -▁Compute 1 -Tales 1 -▁ALBANY 1 -▁Argonauts 1 -▁DiNardo 1 -▁Gehrig 1 -▁Gobierno 1 -▁Huxley 1 -▁Liddell 1 -▁Monsignor 1 -▁Nangarhar 1 -▁Schindler 1 -▁Tkachuk 1 -▁antithetical 1 -▁bawling 1 -▁decomposing 1 -▁heterogeneous 1 -▁lacquer 1 -▁plaudits 1 -▁rambunctious 1 -▁supremo 1 -▁Frisbee 1 -▁Miyazaki 1 -🙏 1 -▁tendrils 1 -▁horoscope 1 -▁vibrancy 1 -▁legumes 1 -▁Wahhabi 1 -▁Kuczynski 1 -▁pacify 1 -▁Hofstra 1 -▁Autodesk 1 -▁Kumasi 1 -"> None: - _CLIPModel.from_pretrained(_DEFAULT_MODEL) - _CLIPProcessor.from_pretrained(_DEFAULT_MODEL) - - - if _SKIP_SLOW_DOCTEST and not _try_proceed_with_timeout(_download_clip): - __doctest_skip__ = ["CLIPScore", "CLIPScore.plot"] -else: - __doctest_skip__ = ["CLIPScore", "CLIPScore.plot"] - - -class CLIPIScore(Metric): - r"""Calculates `CLIP Score`_ which is a text-to-image similarity metric. - - CLIP is a reference free metric that can be used to evaluate the correlation between a generated caption for an - image and the actual content of the image. It has been found to be highly correlated with human judgement. The - metric is defined as: - - .. math:: - \text{CLIPScore(I, C)} = max(100 * cos(E_I, E_C), 0) - - which corresponds to the cosine similarity between visual CLIP embedding :math:`E_i` for an image :math:`i` and - textual CLIP embedding :math:`E_C` for an caption :math:`C`. The score is bound between 0 and 100 and the closer - to 100 the better. - - .. note:: Metric is not scriptable - - Args: - model_name_or_path: string indicating the version of the CLIP model to use. Available models are: - - - `"openai/clip-vit-base-patch16"` - - `"openai/clip-vit-base-patch32"` - - `"openai/clip-vit-large-patch14-336"` - - `"openai/clip-vit-large-patch14"` - - kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info. 
-
-    Raises:
-        ModuleNotFoundError:
-            If transformers package is not installed or version is lower than 4.10.0
-
-    Example:
-        >>> import torch
-        >>> _ = torch.manual_seed(42)
-        >>> metric = CLIPIScore(model_name_or_path="openai/clip-vit-base-patch16")
-        >>> score = metric(torch.randint(255, (3, 224, 224)), torch.randint(255, (3, 224, 224)))
-    """
-
-    is_differentiable: bool = False
-    higher_is_better: bool = True
-    full_state_update: bool = True
-    plot_lower_bound: float = 0.0
-
-    score: Tensor
-    n_samples: Tensor
-    plot_upper_bound = 100.0
-
-    def __init__(
-        self,
-        model_name_or_path: Literal[
-            "openai/clip-vit-base-patch16",
-            "openai/clip-vit-base-patch32",
-            "openai/clip-vit-large-patch14-336",
-            "openai/clip-vit-large-patch14",
-        ] = _DEFAULT_MODEL,  # type: ignore[assignment]
-        **kwargs: Any,
-    ) -> None:
-        super().__init__(**kwargs)
-        self.model, self.processor = _get_model_and_processor(model_name_or_path)
-        self.add_state("score", torch.tensor(0.0), dist_reduce_fx="sum")
-        self.add_state("n_samples", torch.tensor(0, dtype=torch.long), dist_reduce_fx="sum")
-
-    @staticmethod
-    def _clip_score_update(
-        images1: Union[Image.Image, List[Image.Image]],
-        images2: Union[Image.Image, List[Image.Image]],
-        model: _CLIPModel,
-        processor: _CLIPProcessor,
-    ) -> Tuple[Tensor, int]:
-        if len(images1) != len(images2):
-            raise ValueError(
-                f"Expected the number of images to be the same but got {len(images1)} and {len(images2)}"
-            )
-
-        device = next(model.parameters()).device
-        img1_processed_input = processor(images=images1, return_tensors="pt")
-        img2_processed_input = processor(images=images2, return_tensors="pt")
-
-        img1_features = model.get_image_features(img1_processed_input["pixel_values"].to(device))
-        img1_features = img1_features / img1_features.norm(p=2, dim=-1, keepdim=True)
-
-        img2_features = model.get_image_features(img2_processed_input["pixel_values"].to(device))
-        img2_features = img2_features / img2_features.norm(p=2, dim=-1, keepdim=True)
-
-        # cosine similarity between feature vectors
-        score = 100 * (img1_features * img2_features).sum(axis=-1)
-        return score, len(images1)
-
-    def update(self, images1: Union[Image.Image, List[Image.Image]],
-               images2: Union[Image.Image, List[Image.Image]]) -> None:
-        """Update CLIP-I score on a batch of image pairs.
-
-        Args:
-            images1: Either a single [N, C, H, W] tensor or a list of [C, H, W] tensors
-            images2: Either a single [N, C, H, W] tensor or a list of [C, H, W] tensors
-
-        Raises:
-            ValueError:
-                If not all images have format [C, H, W]
-            ValueError:
-                If the number of images do not match
-        """
-        score, n_samples = self._clip_score_update(images1, images2, self.model, self.processor)
-        self.score += score.sum(0)
-        self.n_samples += n_samples
-
-    def compute(self) -> Tensor:
-        """Compute accumulated clip score."""
-        return torch.max(self.score / self.n_samples, torch.zeros_like(self.score))
-
-    def plot(self, val: Union[Tensor, Sequence[Tensor], None] = None, ax: Optional[_AX_TYPE] = None) -> _PLOT_OUT_TYPE:
-        """Plot a single or multiple values from the metric.
-
-        Args:
-            val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results.
-                If no value is provided, will automatically call `metric.compute` and plot that result.
-            ax: A matplotlib axis object.
If provided will add plot to that axis - - Returns: - Figure and Axes object - - Raises: - ModuleNotFoundError: - If `matplotlib` is not installed - - .. plot:: - :scale: 75 - - >>> # Example plotting a single value - >>> import torch - >>> from torchmetrics.multimodal import CLIPScore - >>> metric = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16") - >>> metric.update(torch.randint(255, (3, 224, 224)), "a photo of a cat") - >>> fig_, ax_ = metric.plot() - - .. plot:: - :scale: 75 - - >>> # Example plotting multiple values - >>> import torch - >>> from torchmetrics.multimodal import CLIPScore - >>> metric = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16") - >>> values = [ ] - >>> for _ in range(10): - ... values.append(metric(torch.randint(255, (3, 224, 224)), "a photo of a cat")) - >>> fig_, ax_ = metric.plot(values) - """ - return self._plot(val, ax) - - -class CLIPTScore(Metric): - r"""Calculates `CLIP Score`_ which is a text-to-image similarity metric. - - CLIP is a reference free metric that can be used to evaluate the correlation between a generated caption for an - image and the actual content of the image. It has been found to be highly correlated with human judgement. The - metric is defined as: - - .. math:: - \text{CLIPScore(I, C)} = max(100 * cos(E_I, E_C), 0) - - which corresponds to the cosine similarity between visual CLIP embedding :math:`E_i` for an image :math:`i` and - textual CLIP embedding :math:`E_C` for an caption :math:`C`. The score is bound between 0 and 100 and the closer - to 100 the better. - - .. note:: Metric is not scriptable - - Args: - model_name_or_path: string indicating the version of the CLIP model to use. Available models are: - - - `"openai/clip-vit-base-patch16"` - - `"openai/clip-vit-base-patch32"` - - `"openai/clip-vit-large-patch14-336"` - - `"openai/clip-vit-large-patch14"` - - kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info. 
- - Raises: - ModuleNotFoundError: - If transformers package is not installed or version is lower than 4.10.0 - - Example: - >>> import torch - >>> _ = torch.manual_seed(42) - >>> from torchmetrics.multimodal import CLIPScore - >>> metric = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16") - >>> score = metric(torch.randint(255, (3, 224, 224)), "a photo of a cat") - >>> print(score.detach()) - tensor(24.7691) - """ - - is_differentiable: bool = False - higher_is_better: bool = True - full_state_update: bool = True - plot_lower_bound: float = 0.0 - - score: Tensor - n_samples: Tensor - plot_upper_bound = 100.0 - - def __init__( - self, - model_name_or_path: Literal[ - "openai/clip-vit-base-patch16", - "openai/clip-vit-base-patch32", - "openai/clip-vit-large-patch14-336", - "openai/clip-vit-large-patch14", - ] = _DEFAULT_MODEL, # type: ignore[assignment] - **kwargs: Any, - ) -> None: - super().__init__(**kwargs) - self.model, self.processor = _get_model_and_processor(model_name_or_path) - self.add_state("score", torch.tensor(0.0), dist_reduce_fx="sum") - self.add_state("n_samples", torch.tensor(0, dtype=torch.long), dist_reduce_fx="sum") - - @staticmethod - def _clip_score_update( - images: Union[Image.Image, List[Image.Image]], - text: Union[str, List[str]], - model: _CLIPModel, - processor: _CLIPProcessor, - ) -> Tuple[Tensor, int]: - if len(text) != len(images): - raise ValueError( - f"Expected the number of images and text examples to be the same but got {len(images)} and {len(text)}" - ) - device = next(model.parameters()).device - processed_input = processor(text=text, images=images, return_tensors="pt", padding=True) - - img_features = model.get_image_features(processed_input["pixel_values"].to(device)) - img_features = img_features / img_features.norm(p=2, dim=-1, keepdim=True) - - txt_features = model.get_text_features( - processed_input["input_ids"].to(device), processed_input["attention_mask"].to(device) - ) - txt_features = txt_features / txt_features.norm(p=2, dim=-1, keepdim=True) - - # cosine similarity between feature vectors - score = 100 * (img_features * txt_features).sum(axis=-1) - return score, len(text) - - def update(self, images: Union[Image.Image, List[Image.Image]], text: Union[str, List[str]]) -> None: - """Update CLIP score on a batch of images and text. - - Args: - images: Either a single [N, C, H, W] tensor or a list of [C, H, W] tensors - text: Either a single caption or a list of captions - - Raises: - ValueError: - If not all images have format [C, H, W] - ValueError: - If the number of images and captions do not match - """ - score, n_samples = self._clip_score_update(images, text, self.model, self.processor) - self.score += score.sum(0) - self.n_samples += n_samples - - def compute(self) -> Tensor: - """Compute accumulated clip score.""" - return torch.max(self.score / self.n_samples, torch.zeros_like(self.score)) - - def plot(self, val: Union[Tensor, Sequence[Tensor], None] = None, ax: Optional[_AX_TYPE] = None) -> _PLOT_OUT_TYPE: - """Plot a single or multiple values from the metric. - - Args: - val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results. - If no value is provided, will automatically call `metric.compute` and plot that result. - ax: An matplotlib axis object. If provided will add plot to that axis - - Returns: - Figure and Axes object - - Raises: - ModuleNotFoundError: - If `matplotlib` is not installed - - .. 
plot:: - :scale: 75 - - >>> # Example plotting a single value - >>> import torch - >>> from torchmetrics.multimodal import CLIPScore - >>> metric = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16") - >>> metric.update(torch.randint(255, (3, 224, 224)), "a photo of a cat") - >>> fig_, ax_ = metric.plot() - - .. plot:: - :scale: 75 - - >>> # Example plotting multiple values - >>> import torch - >>> from torchmetrics.multimodal import CLIPScore - >>> metric = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16") - >>> values = [ ] - >>> for _ in range(10): - ... values.append(metric(torch.randint(255, (3, 224, 224)), "a photo of a cat")) - >>> fig_, ax_ = metric.plot(values) - """ - return self._plot(val, ax) diff --git a/kosmos-g/eval/dino_score.py b/kosmos-g/eval/dino_score.py deleted file mode 100644 index cc798826e..000000000 --- a/kosmos-g/eval/dino_score.py +++ /dev/null @@ -1,157 +0,0 @@ -# Copyright The Lightning team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from typing import Any, List, Optional, Sequence, Union, Tuple - -import torch -import torch.nn.functional as F -from PIL import Image -from torch import Tensor -from torch.nn import Module as _DINOModel -from torchmetrics import Metric -from torchmetrics.utilities.imports import _MATPLOTLIB_AVAILABLE, _TRANSFORMERS_AVAILABLE -from torchmetrics.utilities.plot import _AX_TYPE, _PLOT_OUT_TYPE -from torchvision import transforms -from torchvision.transforms import Compose as _DINOProcessor -from typing_extensions import Literal - -if not _MATPLOTLIB_AVAILABLE: - __doctest_skip__ = ["DINOScore.plot"] - -_DEFAULT_MODEL: str = "dino_vits16" - - -class DINOScore(Metric): - r"""Calculates `DINO Score`_ which is a image-to-image similarity metric. - - .. note:: Metric is not scriptable - - Args: - model_name_or_path: string indicating the version of the DINO model to use. Available models are: - - - `"dino_vits16"` - - kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info. 
- - Raises: - ModuleNotFoundError: - If transformers package is not installed or version is lower than 4.10.0 - """ - - is_differentiable: bool = False - higher_is_better: bool = True - full_state_update: bool = True - plot_lower_bound: float = 0.0 - - score: Tensor - n_samples: Tensor - plot_upper_bound = 100.0 - - def __init__( - self, - model_name_or_path: Literal[ - "dino_vits16", - ] = _DEFAULT_MODEL, # type: ignore[assignment] - **kwargs: Any, - ) -> None: - super().__init__(**kwargs) - self.model, self.processor = self._get_model_and_processor(model_name_or_path) - self.add_state("score", torch.tensor(0.0), dist_reduce_fx="sum") - self.add_state("n_samples", torch.tensor(0, dtype=torch.long), dist_reduce_fx="sum") - - @staticmethod - def _get_model_and_processor( - model_name_or_path: Literal[ - "dino_vits16", - ] = "dino_vits16", - ) -> Tuple[_DINOModel, _DINOProcessor]: - if _TRANSFORMERS_AVAILABLE: - model = torch.hub.load('facebookresearch/dino:main', model_name_or_path) - processor = transforms.Compose([ - transforms.Resize(256, interpolation=transforms.InterpolationMode.BICUBIC), - transforms.CenterCrop(224), - transforms.ToTensor(), - transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)), - ]) - return model, processor - - raise ModuleNotFoundError( - "`dino_score` metric requires `transformers` package be installed." - " Either install with `pip install transformers>=4.0` or `pip install torchmetrics[multimodal]`." - ) - - @staticmethod - def _dino_score_update( - images1: Union[Image.Image, List[Image.Image]], - images2: Union[Image.Image, List[Image.Image]], - model: _DINOModel, - processor: _DINOProcessor, - ) -> Tuple[Tensor, int]: - if len(images1) != len(images2): - raise ValueError( - f"Expected the number of images to be the same but got {len(images1)} and {len(images2)}" - ) - - device = next(model.parameters()).device - - img1_processed_input = [processor(i) for i in images1] - img2_processed_input = [processor(i) for i in images2] - - img1_processed_input = torch.stack(img1_processed_input).to(device) - img2_processed_input = torch.stack(img2_processed_input).to(device) - - img1_features = model(img1_processed_input) - img2_features = model(img2_processed_input) - - # cosine similarity between feature vectors - score = 100 * F.cosine_similarity(img1_features, img2_features, dim=-1) - return score, len(images1) - - def update(self, images1: Union[Image.Image, List[Image.Image]], - images2: Union[Image.Image, List[Image.Image]]) -> None: - """Update DINO score on a batch of images and text. - - Args: - images1: Either a single [N, C, H, W] tensor or a list of [C, H, W] tensors - images2: Either a single [N, C, H, W] tensor or a list of [C, H, W] tensors - - Raises: - ValueError: - If not all images have format [C, H, W] - ValueError: - If the number of images do not match - """ - score, n_samples = self._dino_score_update(images1, images2, self.model, self.processor) - self.score += score.sum(0) - self.n_samples += n_samples - - def compute(self) -> Tensor: - """Compute accumulated dino score.""" - return torch.max(self.score / self.n_samples, torch.zeros_like(self.score)) - - def plot(self, val: Union[Tensor, Sequence[Tensor], None] = None, ax: Optional[_AX_TYPE] = None) -> _PLOT_OUT_TYPE: - """Plot a single or multiple values from the metric. - - Args: - val: Either a single result from calling `metric.forward` or `metric.compute` or a list of these results. - If no value is provided, will automatically call `metric.compute` and plot that result. 
- ax: An matplotlib axis object. If provided will add plot to that axis - - Returns: - Figure and Axes object - - Raises: - ModuleNotFoundError: - If `matplotlib` is not installed - """ - return self._plot(val, ax) diff --git a/kosmos-g/eval/dreambench_prompts.py b/kosmos-g/eval/dreambench_prompts.py deleted file mode 100644 index 13266c3bd..000000000 --- a/kosmos-g/eval/dreambench_prompts.py +++ /dev/null @@ -1,147 +0,0 @@ -OBJECT = { - "backpack": "backpack", - "backpack_dog": "backpack", - "bear_plushie": "stuffed animal", - "berry_bowl": "bowl", - "can": "can", - "candle": "candle", - "clock": "clock", - "colorful_sneaker": "sneaker", - "duck_toy": "toy", - "fancy_boot": "boot", - "grey_sloth_plushie": "stuffed animal", - "monster_toy": "toy", - "pink_sunglasses": "glasses", - "poop_emoji": "toy", - "rc_car": "toy", - "red_cartoon": "cartoon", - "robot_toy": "toy", - "shiny_sneaker": "sneaker", - "teapot": "teapot", - "vase": "vase", - "wolf_plushie": "stuffed animal" -} - -LIVE_OBJECT = { - "cat": "cat", - "cat2": "cat", - "dog": "dog", - "dog2": "dog", - "dog3": "dog", - "dog5": "dog", - "dog6": "dog", - "dog7": "dog", - "dog8": "dog", -} - -OBJECT_PROMPTS = [ - 'a {} in the jungle', - 'a {} in the snow', - 'a {} on the beach', - 'a {} on a cobblestone street', - 'a {} on top of pink fabric', - 'a {} on top of a wooden floor', - 'a {} with a city in the background', - 'a {} with a mountain in the background', - 'a {} with a blue house in the background', - 'a {} on top of a purple rug in a forest', - 'a {} with a wheat field in the background', - 'a {} with a tree and autumn leaves in the background', - 'a {} with the Eiffel Tower in the background', - 'a {} floating on top of water', - 'a {} floating in an ocean of milk', - 'a {} on top of green grass with sunflowers around it', - 'a {} on top of a mirror', - 'a {} on top of the sidewalk in a crowded street', - 'a {} on top of a dirt road', - 'a {} on top of a white rug', - 'a red {}', - 'a purple {}', - 'a shiny {}', - 'a wet {}', - 'a cube shaped {}' -] - -LIVE_OBJECT_PROMPTS = [ - 'a {} in the jungle', - 'a {} in the snow', - 'a {} on the beach', - 'a {} on a cobblestone street', - 'a {} on top of pink fabric', - 'a {} on top of a wooden floor', - 'a {} with a city in the background', - 'a {} with a mountain in the background', - 'a {} with a blue house in the background', - 'a {} on top of a purple rug in a forest', - 'a {} wearing a red hat', - 'a {} wearing a santa hat', - 'a {} wearing a rainbow scarf', - 'a {} wearing a black top hat and a monocle', - 'a {} in a chef outfit', - 'a {} in a firefighter outfit', - 'a {} in a police outfit', - 'a {} wearing pink glasses', - 'a {} wearing a yellow shirt', - 'a {} in a purple wizard outfit', - 'a red {}', - 'a purple {}', - 'a shiny {}', - 'a wet {}', - 'a cube shaped {}' -] - -KOSMOSG_OBJECT_PROMPTS = [ - '{} in the jungle', - '{} in the snow', - '{} on the beach', - '{} on a cobblestone street', - '{} on top of pink fabric', - '{} on top of a wooden floor', - '{} with a city in the background', - '{} with a mountain in the background', - '{} with a blue house in the background', - '{} on top of a purple rug in a forest', - '{} with a wheat field in the background', - '{} with a tree and autumn leaves in the background', - '{} with the Eiffel Tower in the background', - '{} floating on top of water', - '{} floating in an ocean of milk', - '{} on top of green grass with sunflowers around it', - '{} on top of a mirror', - '{} on top of the sidewalk in a crowded street', - '{} on 
top of a dirt road', - '{} on top of a white rug', - '{}, red', - '{}, purple', - '{}, shiny', - '{}, wet', - '{}, cube shaped' -] - -KOSMOSG_LIVE_OBJECT_PROMPTS = [ - '{} in the jungle', - '{} in the snow', - '{} on the beach', - '{} on a cobblestone street', - '{} on top of pink fabric', - '{} on top of a wooden floor', - '{} with a city in the background', - '{} with a mountain in the background', - '{} with a blue house in the background', - '{} on top of a purple rug in a forest', - '{} wearing a red hat', - '{} wearing a santa hat', - '{} wearing a rainbow scarf', - '{} wearing a black top hat and a monocle', - '{} in a chef outfit', - '{} in a firefighter outfit', - '{} in a police outfit', - '{} wearing pink glasses', - '{} wearing a yellow shirt', - '{} in a purple wizard outfit', - '{}, red', - '{}, purple', - '{}, shiny', - '{}, wet', - '{}, cube shaped' -] \ No newline at end of file diff --git a/kosmos-g/fairseq/.circleci/config.yml b/kosmos-g/fairseq/.circleci/config.yml deleted file mode 100644 index de40a6e9c..000000000 --- a/kosmos-g/fairseq/.circleci/config.yml +++ /dev/null @@ -1,159 +0,0 @@ -# Use 2.1 for orbs -version: 2.1 - -# ------------------------------------------------------------------------------------- -# Environments to run the jobs in -# ------------------------------------------------------------------------------------- -gpu: &gpu - environment: - CUDA_VERSION: "11.1" - machine: - image: ubuntu-1604-cuda-11.1:202012-01 - resource_class: gpu.nvidia.medium.multi - - -# ------------------------------------------------------------------------------------- -# Re-usable commands -# ------------------------------------------------------------------------------------- -cache_key: &cache_key cache-key-{{ .Environment.CIRCLE_JOB }}-{{ checksum ".circleci/config.yml" }}-{{ checksum "setup.py"}} - -install_dep_common: &install_dep_common - - run: - name: Install Common Dependencies - command: | - source activate fairseq - pip install --upgrade setuptools - pip install bitarray boto3 deepspeed editdistance fastBPE iopath ipdb ipython pyarrow pytest sacremoses sentencepiece subword-nmt hydra-core==1.0.7 omegaconf==2.0.6 - pip install --progress-bar off pytest - pip install --progress-bar off fairscale - pip install -i https://test.pypi.org/simple/ bitsandbytes-cuda111 -U - python -c 'import torch; print("Torch version:", torch.__version__)' - python -m torch.utils.collect_env - -install_dep_fused_ops: &install_dep_fused_ops - - run: - name: Install Megatron/Apex Dependencies - working_directory: ~/ - command: | - source activate fairseq - git clone https://github.com/NVIDIA/apex - cd apex - git checkout e2083df5eb96643c61613b9df48dd4eea6b07690 - pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--deprecated_fused_adam" --global-option="--xentropy" --global-option="--fast_multihead_attn" ./ - cd ~/ - git clone --depth=1 --branch v2.4 https://github.com/NVIDIA/Megatron-LM.git - cd Megatron-LM - pip install -e . 
- - -install_dep_pt19: &install_dep_pt19 - - run: - name: Install Pytorch Dependencies - command: | - source activate fairseq - pip install --upgrade setuptools - pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html - python -c 'import torch; print("Torch version:", torch.__version__)' - -install_dep_pt18: &install_dep_pt18 - - run: - name: Install Pytorch Dependencies - command: | - source activate fairseq - pip install --upgrade setuptools - pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html - python -c 'import torch; print("Torch version:", torch.__version__)' - -install_repo: &install_repo - - run: - name: Install Repository - command: | - source activate fairseq - pip install . - python setup.py build_ext --inplace - -run_unittests: &run_unittests - - run: - name: Run Unit Tests - command: | - source activate fairseq - pytest tests/gpu/test_binaries_gpu.py - -check_nvidia_driver: &check_nvidia_driver - - run: - name: Check NVIDIA Driver - working_directory: ~/ - command: | - pyenv versions - nvidia-smi - -create_conda_env: &create_conda_env - - run: - name: Install and Create Conda Environment - command: | - curl -o ~/miniconda.sh -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh - chmod +x ~/miniconda.sh - ~/miniconda.sh -b -p $HOME/miniconda - rm ~/miniconda.sh - echo 'export PATH=$HOME/miniconda/bin:$PATH' >> $BASH_ENV - source $BASH_ENV - if [ ! -d ~/miniconda/envs/fairseq ] - then - conda create -y -n fairseq python=3.8 - fi - source activate fairseq - python --version - pip install --upgrade pip -# ------------------------------------------------------------------------------------- -# Jobs to run -# ------------------------------------------------------------------------------------- - -jobs: - gpu_tests_pt19: - <<: *gpu - - working_directory: ~/fairseq-py - - steps: - - checkout - - <<: *check_nvidia_driver - - <<: *create_conda_env - - restore_cache: - key: *cache_key - - <<: *install_dep_pt19 - - <<: *install_dep_common - - <<: *install_dep_fused_ops - - save_cache: - paths: - - ~/miniconda/ - key: *cache_key - - <<: *install_repo - - <<: *run_unittests - - gpu_tests_pt18: - <<: *gpu - - working_directory: ~/fairseq-py - - steps: - - checkout - - <<: *check_nvidia_driver - - <<: *create_conda_env - - restore_cache: - key: *cache_key - - <<: *install_dep_pt18 - - <<: *install_dep_common - - <<: *install_dep_fused_ops - - save_cache: - paths: - - ~/miniconda/ - key: *cache_key - - <<: *install_repo - - <<: *run_unittests - -workflows: - version: 2 - build: - jobs: - - gpu_tests_pt18 - - gpu_tests_pt19 diff --git a/kosmos-g/fairseq/.github/ISSUE_TEMPLATE.md b/kosmos-g/fairseq/.github/ISSUE_TEMPLATE.md deleted file mode 100644 index 5c4c4493e..000000000 --- a/kosmos-g/fairseq/.github/ISSUE_TEMPLATE.md +++ /dev/null @@ -1,3 +0,0 @@ -## 👉 [Please follow one of these issue templates](https://github.com/pytorch/fairseq/issues/new/choose) 👈 - -Note: to keep the backlog clean and actionable, issues may be immediately closed if they do not follow one of the above issue templates. 
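The three eval metrics deleted above (CLIP-I and CLIP-T in the clip-score file, plus `eval/dino_score.py`) share one accumulate-then-clamp pattern: `update()` sums per-pair scores and a sample count, and `compute()` returns `max(score / n_samples, 0)`. A minimal usage sketch on one DreamBench-style pair; note the module path `eval.clip_score` is an assumption (that file's diff header is cut off in this dump) and the image paths are purely illustrative:

```python
from PIL import Image

from eval.clip_score import CLIPIScore, CLIPTScore  # assumed module path
from eval.dino_score import DINOScore

# One metric instance per score reported on DreamBench.
clip_i = CLIPIScore(model_name_or_path="openai/clip-vit-base-patch16")
clip_t = CLIPTScore(model_name_or_path="openai/clip-vit-base-patch16")
dino = DINOScore(model_name_or_path="dino_vits16")

# Hypothetical files: one reference image per entity, one generation per prompt.
reference = Image.open("dreambench/dataset/dog/00.jpg")
generated = Image.open("outputs/dog/a_dog_in_the_snow.png")
caption = "a dog in the snow"

clip_i.update([generated], [reference])  # subject fidelity: image-image CLIP
clip_t.update([generated], [caption])    # prompt fidelity: image-text CLIP
dino.update([generated], [reference])    # subject fidelity: DINO ViT features

# Each compute() returns the clamped running mean, max(score / n_samples, 0).
print(clip_i.compute(), clip_t.compute(), dino.compute())
```

In a full evaluation the `update()` calls would sit inside a loop over every entity and every prompt, so a single `compute()` at the end yields the dataset-level average.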
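The `dreambench_prompts.py` tables above pair each DreamBench entity with a class noun (`OBJECT` / `LIVE_OBJECT`) and a 25-template prompt list; the `KOSMOSG_*` variants are the same templates with the leading article dropped and the attribute prompts rephrased as suffixes (`'a red {}'` becomes `'{}, red'`). A small expansion helper as a sketch; the function name and flat import path are hypothetical:

```python
from dreambench_prompts import (  # hypothetical import path
    KOSMOSG_LIVE_OBJECT_PROMPTS,
    KOSMOSG_OBJECT_PROMPTS,
    LIVE_OBJECT,
    LIVE_OBJECT_PROMPTS,
    OBJECT,
    OBJECT_PROMPTS,
)


def expand_prompts(entity: str, kosmosg: bool = False) -> list:
    """Return the 25 evaluation prompts for one DreamBench entity."""
    if entity in OBJECT:
        noun = OBJECT[entity]
        templates = KOSMOSG_OBJECT_PROMPTS if kosmosg else OBJECT_PROMPTS
    else:  # live subjects (cats and dogs) use the outfit/accessory templates
        noun = LIVE_OBJECT[entity]
        templates = KOSMOSG_LIVE_OBJECT_PROMPTS if kosmosg else LIVE_OBJECT_PROMPTS
    return [t.format(noun) for t in templates]


print(expand_prompts("backpack_dog")[0])        # "a backpack in the jungle"
print(expand_prompts("backpack_dog", True)[0])  # "backpack in the jungle"
print(expand_prompts("dog2")[11])               # "a dog wearing a santa hat"
```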
diff --git a/kosmos-g/fairseq/.github/ISSUE_TEMPLATE/bug_report.md b/kosmos-g/fairseq/.github/ISSUE_TEMPLATE/bug_report.md deleted file mode 100644 index aa15123d8..000000000 --- a/kosmos-g/fairseq/.github/ISSUE_TEMPLATE/bug_report.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -name: 🐛 Bug Report -about: Submit a bug report to help us improve -labels: 'bug, needs triage' ---- - -## 🐛 Bug - - - -### To Reproduce - -Steps to reproduce the behavior (**always include the command you ran**): - -1. Run cmd '....' -2. See error - - - - -#### Code sample - - -### Expected behavior - - - -### Environment - - - fairseq Version (e.g., 1.0 or main): - - PyTorch Version (e.g., 1.0) - - OS (e.g., Linux): - - How you installed fairseq (`pip`, source): - - Build command you used (if compiling from source): - - Python version: - - CUDA/cuDNN version: - - GPU models and configuration: - - Any other relevant information: - -### Additional context - - diff --git a/kosmos-g/fairseq/.github/ISSUE_TEMPLATE/documentation.md b/kosmos-g/fairseq/.github/ISSUE_TEMPLATE/documentation.md deleted file mode 100644 index 3a6e2e9ea..000000000 --- a/kosmos-g/fairseq/.github/ISSUE_TEMPLATE/documentation.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -name: 📚 Documentation/Typos -about: Report an issue related to documentation or a typo -labels: 'documentation, needs triage' ---- - -## 📚 Documentation - -For typos and doc fixes, please go ahead and: - -1. Create an issue. -2. Fix the typo. -3. Submit a PR. - -Thanks! diff --git a/kosmos-g/fairseq/.github/ISSUE_TEMPLATE/feature_request.md b/kosmos-g/fairseq/.github/ISSUE_TEMPLATE/feature_request.md deleted file mode 100644 index 93c866804..000000000 --- a/kosmos-g/fairseq/.github/ISSUE_TEMPLATE/feature_request.md +++ /dev/null @@ -1,24 +0,0 @@ ---- -name: 🚀 Feature Request -about: Submit a proposal/request for a new feature -labels: 'enhancement, help wanted, needs triage' ---- - -## 🚀 Feature Request - - -### Motivation - - - -### Pitch - - - -### Alternatives - - - -### Additional context - - diff --git a/kosmos-g/fairseq/.github/ISSUE_TEMPLATE/how-to-question.md b/kosmos-g/fairseq/.github/ISSUE_TEMPLATE/how-to-question.md deleted file mode 100644 index 04f3f15d3..000000000 --- a/kosmos-g/fairseq/.github/ISSUE_TEMPLATE/how-to-question.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -name: ❓ Questions/Help -about: If you have questions, please first search existing issues and docs -labels: 'question, needs triage' ---- - -## ❓ Questions and Help - -### Before asking: -1. search the issues. -2. search the docs. - - - -#### What is your question? - -#### Code - - - -#### What have you tried? - -#### What's your environment? - - - fairseq Version (e.g., 1.0 or main): - - PyTorch Version (e.g., 1.0) - - OS (e.g., Linux): - - How you installed fairseq (`pip`, source): - - Build command you used (if compiling from source): - - Python version: - - CUDA/cuDNN version: - - GPU models and configuration: - - Any other relevant information: diff --git a/kosmos-g/fairseq/.github/PULL_REQUEST_TEMPLATE.md b/kosmos-g/fairseq/.github/PULL_REQUEST_TEMPLATE.md deleted file mode 100644 index d005e2df4..000000000 --- a/kosmos-g/fairseq/.github/PULL_REQUEST_TEMPLATE.md +++ /dev/null @@ -1,16 +0,0 @@ -# Before submitting - -- [ ] Was this discussed/approved via a Github issue? (no need for typos, doc improvements) -- [ ] Did you read the [contributor guideline](https://github.com/pytorch/fairseq/blob/main/CONTRIBUTING.md)? -- [ ] Did you make sure to update the docs? -- [ ] Did you write any new necessary tests? 
- -## What does this PR do? -Fixes # (issue). - -## PR review -Anyone in the community is free to review the PR once the tests have passed. -If we didn't discuss your PR in Github issues there's a high chance it will not be merged. - -## Did you have fun? -Make sure you had fun coding 🙃 diff --git a/kosmos-g/fairseq/.github/stale.yml b/kosmos-g/fairseq/.github/stale.yml deleted file mode 100644 index b12867dab..000000000 --- a/kosmos-g/fairseq/.github/stale.yml +++ /dev/null @@ -1,30 +0,0 @@ -# Configuration for probot-stale - https://github.com/probot/stale -# Mostly copied from github.com/facebook/react/blob/master/.github/stale.yml -# Number of days of inactivity before an issue becomes stale -daysUntilStale: 90 -# Number of days of inactivity before a stale issue is closed -daysUntilClose: 7 -# Issues with these labels will never be considered stale -exemptLabels: - - bug -# Label to use when marking an issue as stale -staleLabel: stale -issues: - # Comment to post when marking an issue as stale. - markComment: > - This issue has been automatically marked as stale. - **If this issue is still affecting you, please leave any comment** (for example, "bump"), and we'll keep it open. - We are sorry that we haven't been able to prioritize it yet. If you have any new additional information, please include it with your comment! - # Comment to post when closing a stale issue. - closeComment: > - Closing this issue after a prolonged period of inactivity. If this issue is still present in the latest release, please create a new issue with up-to-date information. Thank you! -pulls: - # Comment to post when marking a pull request as stale. - markComment: > - This pull request has been automatically marked as stale. - **If this pull request is still relevant, please leave any comment** (for example, "bump"), and we'll keep it open. - We are sorry that we haven't been able to prioritize reviewing it yet. Your contribution is very much appreciated. - # Comment to post when closing a stale pull request. - closeComment: > - Closing this pull request after a prolonged period of inactivity. If this issue is still present in the latest release, please ask for this pull request to be reopened. Thank you! - diff --git a/kosmos-g/fairseq/.github/workflows/build.yml b/kosmos-g/fairseq/.github/workflows/build.yml deleted file mode 100644 index a80e0f92c..000000000 --- a/kosmos-g/fairseq/.github/workflows/build.yml +++ /dev/null @@ -1,60 +0,0 @@ -name: build - -on: - # Trigger the workflow on push to main or any pull request - push: - branches: - - main - pull_request: - -jobs: - build: - - strategy: - max-parallel: 4 - matrix: - platform: [ubuntu-latest, macos-latest] - python-version: [3.8, 3.9] - - runs-on: ${{ matrix.platform }} - - steps: - - uses: actions/checkout@v2 - - - name: Set up Python ${{ matrix.python-version }} - uses: actions/setup-python@v2 - with: - python-version: ${{ matrix.python-version }} - - - name: Conditionally install pytorch - if: matrix.platform == 'windows-latest' - run: pip3 install torch -f https://download.pytorch.org/whl/torch_stable.html - - - name: Install locally - run: | - python -m pip install --upgrade pip - git submodule update --init --recursive - python setup.py build_ext --inplace - python -m pip install --editable . 
- - - name: Install optional test requirements - run: | - python -m pip install iopath transformers pyarrow - python -m pip install git+https://github.com/facebookresearch/fairscale.git@main - - - name: Lint with flake8 - run: | - pip install flake8 - # stop the build if there are Python syntax errors or undefined names - flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics --extend-exclude fairseq/model_parallel/megatron - # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide - flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics --extend-exclude fairseq/model_parallel/megatron - - - name: Run tests - run: | - python setup.py test - - - name: Lint with black - run: | - pip install black - black --check . --extend-exclude 'examples|fairseq\/model_parallel\/megatron' diff --git a/kosmos-g/fairseq/.github/workflows/build_wheels.yml b/kosmos-g/fairseq/.github/workflows/build_wheels.yml deleted file mode 100644 index 726170859..000000000 --- a/kosmos-g/fairseq/.github/workflows/build_wheels.yml +++ /dev/null @@ -1,41 +0,0 @@ -name: build_wheels - -on: - push: - branches: - - v[0-9]+.[0-9]+.[x0-9]+ - tags: - - v* - -jobs: - build_wheels: - name: Build wheels on ${{ matrix.os }} - runs-on: ${{ matrix.os }} - strategy: - matrix: - os: [ubuntu-latest, macos-latest] - - steps: - - uses: actions/checkout@v2 - - - name: Install Python - uses: actions/setup-python@v2 - with: - python-version: '3.7' - - - name: Install cibuildwheel - run: | - python -m pip install cibuildwheel - - - name: Build wheels for CPython - run: | - python -m cibuildwheel --output-dir dist - env: - CIBW_BUILD: "cp36-*64 cp37-*64 cp38-*64" - CIBW_MANYLINUX_X86_64_IMAGE: manylinux1 - CIBW_BEFORE_BUILD: git submodule update --init --recursive && pip install . - - - uses: actions/upload-artifact@v2 - with: - name: wheels - path: ./dist/*.whl diff --git a/kosmos-g/fairseq/.gitignore b/kosmos-g/fairseq/.gitignore deleted file mode 100644 index 4be13638d..000000000 --- a/kosmos-g/fairseq/.gitignore +++ /dev/null @@ -1,141 +0,0 @@ -# JetBrains PyCharm IDE -.idea/ - -# Byte-compiled / optimized / DLL files -__pycache__/ -*.py[cod] -*$py.class - -# C extensions -*.so - -# macOS dir files -.DS_Store - -# Distribution / packaging -.Python -env/ -build/ -develop-eggs/ -dist/ -downloads/ -eggs/ -.eggs/ -lib/ -lib64/ -parts/ -sdist/ -var/ -wheels/ -*.egg-info/ -.installed.cfg -*.egg - -# Checkpoints -checkpoints - -# PyInstaller -# Usually these files are written by a python script from a template -# before PyInstaller builds the exe, so as to inject date/other infos into it. 
-*.manifest -*.spec - -# Installer logs -pip-log.txt -pip-delete-this-directory.txt - -# Unit test / coverage reports -htmlcov/ -.tox/ -.coverage -.coverage.* -.cache -nosetests.xml -coverage.xml -*.cover -.hypothesis/ - -# Translations -*.mo -*.pot - -# Django stuff: -*.log -local_settings.py - -# Flask stuff: -instance/ -.webassets-cache - -# Scrapy stuff: -.scrapy - -# Sphinx documentation -docs/_build/ - -# PyBuilder -target/ - -# Jupyter Notebook -.ipynb_checkpoints - -# pyenv -.python-version - -# celery beat schedule file -celerybeat-schedule - -# SageMath parsed files -*.sage.py - -# dotenv -.env - -# virtualenv -.venv -venv/ -ENV/ - -# Spyder project settings -.spyderproject -.spyproject - -# Rope project settings -.ropeproject - -# mkdocs documentation -/site - -# mypy -.mypy_cache/ - -# Generated files -/fairseq/temporal_convolution_tbc -/fairseq/modules/*_layer/*_forward.cu -/fairseq/modules/*_layer/*_backward.cu -/fairseq/version.py - -# data -data-bin/ - -# reranking -/examples/reranking/rerank_data - -# Cython-generated C++ source files -/fairseq/data/data_utils_fast.cpp -/fairseq/data/token_block_utils_fast.cpp - -# VSCODE -.vscode/ftp-sync.json -.vscode/settings.json - -# Experimental Folder -experimental/* - -# Weights and Biases logs -wandb/ - -# Hydra artifacts -nohup.out -multirun -outputs diff --git a/kosmos-g/fairseq/.gitmodules b/kosmos-g/fairseq/.gitmodules deleted file mode 100644 index 07a55d45d..000000000 --- a/kosmos-g/fairseq/.gitmodules +++ /dev/null @@ -1,4 +0,0 @@ -[submodule "fairseq/model_parallel/megatron"] - path = fairseq/model_parallel/megatron - url = https://github.com/ngoyal2707/Megatron-LM - branch = fairseq diff --git a/kosmos-g/fairseq/.isort.cfg b/kosmos-g/fairseq/.isort.cfg deleted file mode 100644 index aed482f47..000000000 --- a/kosmos-g/fairseq/.isort.cfg +++ /dev/null @@ -1,2 +0,0 @@ -[settings] -known_third_party = _cffi_backend,agg_results,aml,bitarray,boto3,botocore,dump_hubert_feature,dynamicconv_cuda,editdistance,faiss,fasttext,feature_utils,ffmpeg,g2p_en,h5py,hydra,hypothesis,indicnlp,inflect,iopath,joblib,kaldi_io,kenlm,libfb,librosa,lightconv_cuda,matplotlib,misc,mmpt,mmpt_cli,model,nltk,npy_append_array,numpy,omegaconf,pandas,pathbuilder,preprocessing,progressbar,pythainlp,random_sequence_shuffler,regex,sacrebleu,sacremoses,scipy,sentencepiece,setuptools,six,sklearn,soundfile,sweep,sweep_wmt_en2de_transformer_big_common,tabulate,torch,torchaudio,tqdm,unidecode,utils,videoreader,wav2vec_cluster_faiss,wget,yaml diff --git a/kosmos-g/fairseq/.pre-commit-config.yaml b/kosmos-g/fairseq/.pre-commit-config.yaml deleted file mode 100644 index 4817f6e87..000000000 --- a/kosmos-g/fairseq/.pre-commit-config.yaml +++ /dev/null @@ -1,40 +0,0 @@ -exclude: 'build|stubs' - -default_language_version: - python: python3 - -repos: -- repo: https://github.com/pre-commit/pre-commit-hooks - rev: v4.0.1 - hooks: - - id: trailing-whitespace - - id: check-ast - - id: check-merge-conflict - - id: no-commit-to-branch - args: ['--branch=master'] - - id: check-added-large-files - args: ['--maxkb=500'] - - id: end-of-file-fixer - -- repo: https://github.com/ambv/black - rev: 21.12b0 - hooks: - - id: black - language_version: python3.8 - -- repo: https://gitlab.com/pycqa/flake8 - rev: 3.9.2 - hooks: - - id: flake8 - args: [ - # only error for syntax errors and undefined names - "--select=E9,F63,F7,F82", - ] - -- repo: https://github.com/pycqa/isort - rev: 5.10.1 - hooks: - - id: isort - exclude: README.md - additional_dependencies: [toml] - args: ["--profile", 
"black"] diff --git a/kosmos-g/fairseq/CODE_OF_CONDUCT.md b/kosmos-g/fairseq/CODE_OF_CONDUCT.md deleted file mode 100644 index a0cbeaab7..000000000 --- a/kosmos-g/fairseq/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,77 +0,0 @@ -# Code of Conduct - -## Our Pledge - -In the interest of fostering an open and welcoming environment, we as -contributors and maintainers pledge to make participation in our project and -our community a harassment-free experience for everyone, regardless of age, body -size, disability, ethnicity, sex characteristics, gender identity and expression, -level of experience, education, socio-economic status, nationality, personal -appearance, race, religion, or sexual identity and orientation. - -## Our Standards - -Examples of behavior that contributes to creating a positive environment -include: - -* Using welcoming and inclusive language -* Being respectful of differing viewpoints and experiences -* Gracefully accepting constructive criticism -* Focusing on what is best for the community -* Showing empathy towards other community members - -Examples of unacceptable behavior by participants include: - -* The use of sexualized language or imagery and unwelcome sexual attention or - advances -* Trolling, insulting/derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or electronic - address, without explicit permission -* Other conduct which could reasonably be considered inappropriate in a - professional setting - -## Our Responsibilities - -Project maintainers are responsible for clarifying the standards of acceptable -behavior and are expected to take appropriate and fair corrective action in -response to any instances of unacceptable behavior. - -Project maintainers have the right and responsibility to remove, edit, or -reject comments, commits, code, wiki edits, issues, and other contributions -that are not aligned to this Code of Conduct, or to ban temporarily or -permanently any contributor for other behaviors that they deem inappropriate, -threatening, offensive, or harmful. - -## Scope - -This Code of Conduct applies within all project spaces, and it also applies when -an individual is representing the project or its community in public spaces. -Examples of representing a project or community include using an official -project e-mail address, posting via an official social media account, or acting -as an appointed representative at an online or offline event. Representation of -a project may be further defined and clarified by project maintainers. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported by contacting the project team at . All -complaints will be reviewed and investigated and will result in a response that -is deemed necessary and appropriate to the circumstances. The project team is -obligated to maintain confidentiality with regard to the reporter of an incident. -Further details of specific enforcement policies may be posted separately. - -Project maintainers who do not follow or enforce the Code of Conduct in good -faith may face temporary or permanent repercussions as determined by other -members of the project's leadership. 
- -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, -available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see -https://www.contributor-covenant.org/faq - diff --git a/kosmos-g/fairseq/CONTRIBUTING.md b/kosmos-g/fairseq/CONTRIBUTING.md deleted file mode 100644 index 60e902588..000000000 --- a/kosmos-g/fairseq/CONTRIBUTING.md +++ /dev/null @@ -1,82 +0,0 @@ -# Contributing to Facebook AI Research Sequence-to-Sequence Toolkit (fairseq) -We want to make contributing to this project as easy and transparent as -possible. - -## Pull Requests -We actively welcome your pull requests. - -1. Fork the repo and create your branch from `main`. -2. If you've added code that should be tested, add tests. -3. If you've changed APIs, update the documentation. -4. Ensure the test suite passes. -5. Make sure your code lints. -6. If you haven't already, complete the Contributor License Agreement ("CLA"). - -## Contributor License Agreement ("CLA") -In order to accept your pull request, we need you to submit a CLA. You only need -to do this once to work on any of Facebook's open source projects. - -Complete your CLA here: - -## Issues -We use GitHub issues to track public bugs. Please ensure your description is -clear and has sufficient instructions to be able to reproduce the issue. - -## License -By contributing to Facebook AI Research Sequence-to-Sequence Toolkit (fairseq), -you agree that your contributions will be licensed under the LICENSE file in -the root directory of this source tree. - -## Pre-commit hooks -In order to ensure your code lints, there are pre-commit hooks configured in the repository which you can install. -After installation, they will automatically run each time you commit. -An abbreviated guide is given below; for more information, refer to [the offical pre-commit documentation](https://pre-commit.com/). - -### Installation -``` -pip install pre-commit -pre-commit install -``` - -### Usage -Just commit your changes: -``` -git commit -m "My informative commit message" -``` - -If there was a failure, you will get feedback -``` -[INFO] Initializing environment for https://github.com/PyCQA/flake8. -[INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks. -[INFO] Once installed this environment will be reused. -[INFO] This may take a few minutes... -[INFO] Installing environment for https://github.com/PyCQA/flake8. -[INFO] Once installed this environment will be reused. -[INFO] This may take a few minutes... -Trim Trailing Whitespace.................................................Failed -- hook id: trailing-whitespace -- exit code: 1 -- files were modified by this hook -Fixing examples/nllb/modeling/wmt15_benchmark/eval_langs2.sh -Fix End of Files.........................................................Failed -- hook id: end-of-file-fixer -- exit code: 1 -- files were modified by this hook -Fixing examples/few_shot/scripts/schedule_jobs_few_shot.py -flake8...................................................................Passed -``` - -Certain hooks modify your files to comply. -To include these modifications, you will need to add them (i.e. `git add ...`) and commit again. 
- -If all is well, you should see something like: -``` -Trim Trailing Whitespace.................................................Passed -Fix End of Files.........................................................Passed -flake8...................................................................Passed -[gshard-fix-ci 8698644e1] Fix lint, add pre-commit hooks - 10 files changed, 148 insertions(+), 110 deletions(-) - create mode 100644 .flake8 - create mode 100644 .pre-commit-config.yaml - rename examples/nllb/modeling/wmt15_benchmark/{eval_langs2.py => eval_langs2.sh} (99%) - ``` diff --git a/kosmos-g/fairseq/LICENSE b/kosmos-g/fairseq/LICENSE deleted file mode 100644 index b96dcb048..000000000 --- a/kosmos-g/fairseq/LICENSE +++ /dev/null @@ -1,21 +0,0 @@ -MIT License - -Copyright (c) Facebook, Inc. and its affiliates. - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. diff --git a/kosmos-g/fairseq/README.md b/kosmos-g/fairseq/README.md deleted file mode 100644 index f1db14a96..000000000 --- a/kosmos-g/fairseq/README.md +++ /dev/null @@ -1,237 +0,0 @@ -

- -
-
- [badges: MIT License | Latest Release | Build Status | Documentation Status] -

- --------------------------------------------------------------------------------- - -Fairseq(-py) is a sequence modeling toolkit that allows researchers and -developers to train custom models for translation, summarization, language -modeling and other text generation tasks. - -We provide reference implementations of various sequence modeling papers: - -
List of implemented papers

- -* **Convolutional Neural Networks (CNN)** - + [Language Modeling with Gated Convolutional Networks (Dauphin et al., 2017)](examples/language_model/conv_lm/README.md) - + [Convolutional Sequence to Sequence Learning (Gehring et al., 2017)](examples/conv_seq2seq/README.md) - + [Classical Structured Prediction Losses for Sequence to Sequence Learning (Edunov et al., 2018)](https://github.com/pytorch/fairseq/tree/classic_seqlevel) - + [Hierarchical Neural Story Generation (Fan et al., 2018)](examples/stories/README.md) - + [wav2vec: Unsupervised Pre-training for Speech Recognition (Schneider et al., 2019)](examples/wav2vec/README.md) -* **LightConv and DynamicConv models** - + [Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019)](examples/pay_less_attention_paper/README.md) -* **Long Short-Term Memory (LSTM) networks** - + Effective Approaches to Attention-based Neural Machine Translation (Luong et al., 2015) -* **Transformer (self-attention) networks** - + Attention Is All You Need (Vaswani et al., 2017) - + [Scaling Neural Machine Translation (Ott et al., 2018)](examples/scaling_nmt/README.md) - + [Understanding Back-Translation at Scale (Edunov et al., 2018)](examples/backtranslation/README.md) - + [Adaptive Input Representations for Neural Language Modeling (Baevski and Auli, 2018)](examples/language_model/README.adaptive_inputs.md) - + [Lexically constrained decoding with dynamic beam allocation (Post & Vilar, 2018)](examples/constrained_decoding/README.md) - + [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context (Dai et al., 2019)](examples/truncated_bptt/README.md) - + [Adaptive Attention Span in Transformers (Sukhbaatar et al., 2019)](examples/adaptive_span/README.md) - + [Mixture Models for Diverse Machine Translation: Tricks of the Trade (Shen et al., 2019)](examples/translation_moe/README.md) - + [RoBERTa: A Robustly Optimized BERT Pretraining Approach (Liu et al., 2019)](examples/roberta/README.md) - + [Facebook FAIR's WMT19 News Translation Task Submission (Ng et al., 2019)](examples/wmt19/README.md) - + [Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)](examples/joint_alignment_translation/README.md) - + [Multilingual Denoising Pre-training for Neural Machine Translation (Liu et al., 2020)](examples/mbart/README.md) - + [Neural Machine Translation with Byte-Level Subwords (Wang et al., 2020)](examples/byte_level_bpe/README.md) - + [Unsupervised Quality Estimation for Neural Machine Translation (Fomicheva et al., 2020)](examples/unsupervised_quality_estimation/README.md) - + [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations (Baevski et al., 2020)](examples/wav2vec/README.md) - + [Generating Medical Reports from Patient-Doctor Conversations Using Sequence-to-Sequence Models (Enarvi et al., 2020)](examples/pointer_generator/README.md) - + [Linformer: Self-Attention with Linear Complexity (Wang et al., 2020)](examples/linformer/README.md) - + [Cross-lingual Retrieval for Iterative Self-Supervised Training (Tran et al., 2020)](examples/criss/README.md) - + [Deep Transformers with Latent Depth (Li et al., 2020)](examples/latent_depth/README.md) - + [Unsupervised Cross-lingual Representation Learning for Speech Recognition (Conneau et al., 2020)](https://arxiv.org/abs/2006.13979) - + [Self-training and Pre-training are Complementary for Speech Recognition (Xu et al., 2020)](https://arxiv.org/abs/2010.11430) - + [Robust wav2vec 2.0: Analyzing Domain Shift in 
Self-Supervised Pre-Training (Hsu et al., 2021)](https://arxiv.org/abs/2104.01027) - + [Unsupervised Speech Recognition (Baevski et al., 2021)](https://arxiv.org/abs/2105.11084) - + [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition (Xu et al., 2021)](https://arxiv.org/abs/2109.11680) - + [VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding (Xu et al., 2021)](https://arxiv.org/pdf/2109.14084.pdf) - + [VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding (Xu et al., 2021)](https://aclanthology.org/2021.findings-acl.370.pdf) - + [NormFormer: Improved Transformer Pretraining with Extra Normalization (Shleifer et al., 2021)](examples/normformer/README.md) -* **Non-autoregressive Transformers** - + Non-Autoregressive Neural Machine Translation (Gu et al., 2017) - + Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement (Lee et al., 2018) - + Insertion Transformer: Flexible Sequence Generation via Insertion Operations (Stern et al., 2019) - + Mask-Predict: Parallel Decoding of Conditional Masked Language Models (Ghazvininejad et al., 2019) - + [Levenshtein Transformer (Gu et al., 2019)](examples/nonautoregressive_translation/README.md) -* **Finetuning** - + [Better Fine-Tuning by Reducing Representational Collapse (Aghajanyan et al., 2020)](examples/rxf/README.md) - -

- -### What's New: -* December 2021 [Released Direct speech-to-speech translation code](examples/speech_to_speech/README.md) -* October 2021 [Released VideoCLIP and VLM models](examples/MMPT/README.md) -* October 2021 [Released multilingual finetuned XLSR-53 model](examples/wav2vec/README.md) -* September 2021 [`master` branch renamed to `main`](https://github.com/github/renaming). -* July 2021 [Released DrNMT code](examples/discriminative_reranking_nmt/README.md) -* July 2021 [Released Robust wav2vec 2.0 model](examples/wav2vec/README.md) -* June 2021 [Released XLMR-XL and XLMR-XXL models](examples/xlmr/README.md) -* May 2021 [Released Unsupervised Speech Recognition code](examples/wav2vec/unsupervised/README.md) -* March 2021 [Added full parameter and optimizer state sharding + CPU offloading](examples/fully_sharded_data_parallel/README.md) -* February 2021 [Added LASER training code](examples/laser/README.md) -* December 2020: [Added Adaptive Attention Span code](examples/adaptive_span/README.md) -* December 2020: [GottBERT model and code released](examples/gottbert/README.md) -* November 2020: Adopted the [Hydra](https://github.com/facebookresearch/hydra) configuration framework - * [see documentation explaining how to use it for new and existing projects](docs/hydra_integration.md) -* November 2020: [fairseq 0.10.0 released](https://github.com/pytorch/fairseq/releases/tag/v0.10.0) -* October 2020: [Added R3F/R4F (Better Fine-Tuning) code](examples/rxf/README.md) -* October 2020: [Deep Transformer with Latent Depth code released](examples/latent_depth/README.md) -* October 2020: [Added CRISS models and code](examples/criss/README.md) - -
Previous updates

- -* September 2020: [Added Linformer code](examples/linformer/README.md) -* September 2020: [Added pointer-generator networks](examples/pointer_generator/README.md) -* August 2020: [Added lexically constrained decoding](examples/constrained_decoding/README.md) -* August 2020: [wav2vec2 models and code released](examples/wav2vec/README.md) -* July 2020: [Unsupervised Quality Estimation code released](examples/unsupervised_quality_estimation/README.md) -* May 2020: [Follow fairseq on Twitter](https://twitter.com/fairseq) -* April 2020: [Monotonic Multihead Attention code released](examples/simultaneous_translation/README.md) -* April 2020: [Quant-Noise code released](examples/quant_noise/README.md) -* April 2020: [Initial model parallel support and 11B parameters unidirectional LM released](examples/megatron_11b/README.md) -* March 2020: [Byte-level BPE code released](examples/byte_level_bpe/README.md) -* February 2020: [mBART model and code released](examples/mbart/README.md) -* February 2020: [Added tutorial for back-translation](https://github.com/pytorch/fairseq/tree/main/examples/backtranslation#training-your-own-model-wmt18-english-german) -* December 2019: [fairseq 0.9.0 released](https://github.com/pytorch/fairseq/releases/tag/v0.9.0) -* November 2019: [VizSeq released (a visual analysis toolkit for evaluating fairseq models)](https://facebookresearch.github.io/vizseq/docs/getting_started/fairseq_example) -* November 2019: [CamemBERT model and code released](examples/camembert/README.md) -* November 2019: [BART model and code released](examples/bart/README.md) -* November 2019: [XLM-R models and code released](examples/xlmr/README.md) -* September 2019: [Nonautoregressive translation code released](examples/nonautoregressive_translation/README.md) -* August 2019: [WMT'19 models released](examples/wmt19/README.md) -* July 2019: fairseq relicensed under MIT license -* July 2019: [RoBERTa models and code released](examples/roberta/README.md) -* June 2019: [wav2vec models and code released](examples/wav2vec/README.md) - -

- -### Features: - -* multi-GPU training on one machine or across multiple machines (data and model parallel) -* fast generation on both CPU and GPU with multiple search algorithms implemented: - + beam search - + Diverse Beam Search ([Vijayakumar et al., 2016](https://arxiv.org/abs/1610.02424)) - + sampling (unconstrained, top-k and top-p/nucleus) - + [lexically constrained decoding](examples/constrained_decoding/README.md) (Post & Vilar, 2018) -* [gradient accumulation](https://fairseq.readthedocs.io/en/latest/getting_started.html#large-mini-batch-training-with-delayed-updates) enables training with large mini-batches even on a single GPU -* [mixed precision training](https://fairseq.readthedocs.io/en/latest/getting_started.html#training-with-half-precision-floating-point-fp16) (trains faster with less GPU memory on [NVIDIA tensor cores](https://developer.nvidia.com/tensor-cores)) -* [extensible](https://fairseq.readthedocs.io/en/latest/overview.html): easily register new models, criterions, tasks, optimizers and learning rate schedulers -* [flexible configuration](docs/hydra_integration.md) based on [Hydra](https://github.com/facebookresearch/hydra) allowing a combination of code, command-line and file based configuration -* [full parameter and optimizer state sharding](examples/fully_sharded_data_parallel/README.md) -* [offloading parameters to CPU](examples/fully_sharded_data_parallel/README.md) - -We also provide [pre-trained models for translation and language modeling](#pre-trained-models-and-examples) -with a convenient `torch.hub` interface: - -``` python -en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model') -en2de.translate('Hello world', beam=5) -# 'Hallo Welt' -``` - -See the PyTorch Hub tutorials for [translation](https://pytorch.org/hub/pytorch_fairseq_translation/) -and [RoBERTa](https://pytorch.org/hub/pytorch_fairseq_roberta/) for more examples. - -# Requirements and Installation - -* [PyTorch](http://pytorch.org/) version >= 1.5.0 -* Python version >= 3.6 -* For training new models, you'll also need an NVIDIA GPU and [NCCL](https://github.com/NVIDIA/nccl) -* **To install fairseq** and develop locally: - -``` bash -git clone https://github.com/pytorch/fairseq -cd fairseq -pip install --editable ./ - -# on MacOS: -# CFLAGS="-stdlib=libc++" pip install --editable ./ - -# to install the latest stable release (0.10.x) -# pip install fairseq -``` - -* **For faster training** install NVIDIA's [apex](https://github.com/NVIDIA/apex) library: - -``` bash -git clone https://github.com/NVIDIA/apex -cd apex -pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" \ - --global-option="--deprecated_fused_adam" --global-option="--xentropy" \ - --global-option="--fast_multihead_attn" ./ -``` - -* **For large datasets** install [PyArrow](https://arrow.apache.org/docs/python/install.html#using-pip): `pip install pyarrow` -* If you use Docker make sure to increase the shared memory size either with `--ipc=host` or `--shm-size` - as command line options to `nvidia-docker run` . - -# Getting Started - -The [full documentation](https://fairseq.readthedocs.io/) contains instructions -for getting started, training new models and extending fairseq with new model -types and tasks. - -# Pre-trained models and examples - -We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below, -as well as example training and evaluation commands. 
- -* [Translation](examples/translation/README.md): convolutional and transformer models are available -* [Language Modeling](examples/language_model/README.md): convolutional and transformer models are available - -We also have more detailed READMEs to reproduce results from specific papers: - -* [XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale (Babu et al., 2021)](examples/wav2vec/xlsr/README.md) -* [Cross-lingual Retrieval for Iterative Self-Supervised Training (Tran et al., 2020)](examples/criss/README.md) -* [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations (Baevski et al., 2020)](examples/wav2vec/README.md) -* [Unsupervised Quality Estimation for Neural Machine Translation (Fomicheva et al., 2020)](examples/unsupervised_quality_estimation/README.md) -* [Training with Quantization Noise for Extreme Model Compression ({Fan*, Stock*} et al., 2020)](examples/quant_noise/README.md) -* [Neural Machine Translation with Byte-Level Subwords (Wang et al., 2020)](examples/byte_level_bpe/README.md) -* [Multilingual Denoising Pre-training for Neural Machine Translation (Liu et al., 2020)](examples/mbart/README.md) -* [Reducing Transformer Depth on Demand with Structured Dropout (Fan et al., 2019)](examples/layerdrop/README.md) -* [Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)](examples/joint_alignment_translation/README.md) -* [Levenshtein Transformer (Gu et al., 2019)](examples/nonautoregressive_translation/README.md) -* [Facebook FAIR's WMT19 News Translation Task Submission (Ng et al., 2019)](examples/wmt19/README.md) -* [RoBERTa: A Robustly Optimized BERT Pretraining Approach (Liu et al., 2019)](examples/roberta/README.md) -* [wav2vec: Unsupervised Pre-training for Speech Recognition (Schneider et al., 2019)](examples/wav2vec/README.md) -* [Mixture Models for Diverse Machine Translation: Tricks of the Trade (Shen et al., 2019)](examples/translation_moe/README.md) -* [Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019)](examples/pay_less_attention_paper/README.md) -* [Understanding Back-Translation at Scale (Edunov et al., 2018)](examples/backtranslation/README.md) -* [Classical Structured Prediction Losses for Sequence to Sequence Learning (Edunov et al., 2018)](https://github.com/pytorch/fairseq/tree/classic_seqlevel) -* [Hierarchical Neural Story Generation (Fan et al., 2018)](examples/stories/README.md) -* [Scaling Neural Machine Translation (Ott et al., 2018)](examples/scaling_nmt/README.md) -* [Convolutional Sequence to Sequence Learning (Gehring et al., 2017)](examples/conv_seq2seq/README.md) -* [Language Modeling with Gated Convolutional Networks (Dauphin et al., 2017)](examples/language_model/README.conv.md) - -# Join the fairseq community - -* Twitter: https://twitter.com/fairseq -* Facebook page: https://www.facebook.com/groups/fairseq.users -* Google group: https://groups.google.com/forum/#!forum/fairseq-users - -# License - -fairseq(-py) is MIT-licensed. -The license applies to the pre-trained models as well.
- -# Citation - -Please cite as: - -``` bibtex -@inproceedings{ott2019fairseq, - title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling}, - author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli}, - booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations}, - year = {2019}, -} -``` diff --git a/kosmos-g/fairseq/docs/Makefile b/kosmos-g/fairseq/docs/Makefile deleted file mode 100644 index c2f5b1a89..000000000 --- a/kosmos-g/fairseq/docs/Makefile +++ /dev/null @@ -1,20 +0,0 @@ -# Minimal makefile for Sphinx documentation -# - -# You can set these variables from the command line. -SPHINXOPTS = -SPHINXBUILD = python -msphinx -SPHINXPROJ = fairseq -SOURCEDIR = . -BUILDDIR = _build - -# Put it first so that "make" without argument is like "make help". -help: - @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - -.PHONY: help Makefile - -# Catch-all target: route all unknown targets to Sphinx using the new -# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). -%: Makefile - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) \ No newline at end of file diff --git a/kosmos-g/fairseq/docs/_static/theme_overrides.css b/kosmos-g/fairseq/docs/_static/theme_overrides.css deleted file mode 100644 index 2a0764193..000000000 --- a/kosmos-g/fairseq/docs/_static/theme_overrides.css +++ /dev/null @@ -1,9 +0,0 @@ -.wy-table-responsive table td kbd { - white-space: nowrap; -} -.wy-table-responsive table td { - white-space: normal !important; -} -.wy-table-responsive { - overflow: visible !important; -} diff --git a/kosmos-g/fairseq/docs/command_line_tools.rst b/kosmos-g/fairseq/docs/command_line_tools.rst deleted file mode 100644 index c16300ff5..000000000 --- a/kosmos-g/fairseq/docs/command_line_tools.rst +++ /dev/null @@ -1,85 +0,0 @@ -.. _Command-line Tools: - -Command-line Tools -================== - -Fairseq provides several command-line tools for training and evaluating models: - -- :ref:`fairseq-preprocess`: Data pre-processing: build vocabularies and binarize training data -- :ref:`fairseq-train`: Train a new model on one or multiple GPUs -- :ref:`fairseq-generate`: Translate pre-processed data with a trained model -- :ref:`fairseq-interactive`: Translate raw text with a trained model -- :ref:`fairseq-score`: BLEU scoring of generated translations against reference translations -- :ref:`fairseq-eval-lm`: Language model evaluation - - -.. _fairseq-preprocess: - -fairseq-preprocess -~~~~~~~~~~~~~~~~~~ -.. automodule:: fairseq_cli.preprocess - - .. argparse:: - :module: fairseq.options - :func: get_preprocessing_parser - :prog: fairseq-preprocess - - -.. _fairseq-train: - -fairseq-train -~~~~~~~~~~~~~ -.. automodule:: fairseq_cli.train - - .. argparse:: - :module: fairseq.options - :func: get_training_parser - :prog: fairseq-train - - -.. _fairseq-generate: - -fairseq-generate -~~~~~~~~~~~~~~~~ -.. automodule:: fairseq_cli.generate - - .. argparse:: - :module: fairseq.options - :func: get_generation_parser - :prog: fairseq-generate - - -.. _fairseq-interactive: - -fairseq-interactive -~~~~~~~~~~~~~~~~~~~ -.. automodule:: fairseq_cli.interactive - - .. argparse:: - :module: fairseq.options - :func: get_interactive_generation_parser - :prog: fairseq-interactive - - -.. _fairseq-score: - -fairseq-score -~~~~~~~~~~~~~ -.. automodule:: fairseq_cli.score - - .. argparse:: - :module: fairseq_cli.score - :func: get_parser - :prog: fairseq-score - - -.. 
_fairseq-eval-lm: - -fairseq-eval-lm -~~~~~~~~~~~~~~~ -.. automodule:: fairseq_cli.eval_lm - - .. argparse:: - :module: fairseq.options - :func: get_eval_lm_parser - :prog: fairseq-eval-lm diff --git a/kosmos-g/fairseq/docs/conf.py b/kosmos-g/fairseq/docs/conf.py deleted file mode 100644 index 87b0db98c..000000000 --- a/kosmos-g/fairseq/docs/conf.py +++ /dev/null @@ -1,134 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -# -# fairseq documentation build configuration file, created by -# sphinx-quickstart on Fri Aug 17 21:45:30 2018. -# -# This file is execfile()d with the current directory set to its -# containing dir. -# -# Note that not all possible configuration values are present in this -# autogenerated file. -# -# All configuration values have a default; values that are commented out -# serve to show the default. - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. - -import os -import sys -from fairseq import __version__ - - -# source code directory, relative to this file, for sphinx-autobuild -sys.path.insert(0, os.path.abspath("..")) - -source_suffix = [".rst"] - -# -- General configuration ------------------------------------------------ - -# If your documentation needs a minimal Sphinx version, state it here. -# -# needs_sphinx = '1.0' - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -extensions = [ - "sphinx.ext.autodoc", - "sphinx.ext.intersphinx", - "sphinx.ext.viewcode", - "sphinx.ext.napoleon", - "sphinxarg.ext", -] - -# Add any paths that contain templates here, relative to this directory. -templates_path = ["_templates"] - -# The master toctree document. -master_doc = "index" - -# General information about the project. -project = "fairseq" -copyright = "Facebook AI Research (FAIR)" -author = "Facebook AI Research (FAIR)" - -github_doc_root = "https://github.com/pytorch/fairseq/tree/main/docs/" - -# The version info for the project you're documenting, acts as replacement for -# |version| and |release|, also used in various other places throughout the -# built documents. -# -# The short X.Y version. -version = __version__ -# The full version, including alpha/beta/rc tags. -release = __version__ - -# The language for content autogenerated by Sphinx. Refer to documentation -# for a list of supported languages. -# -# This is also used if you do content translation via gettext catalogs. -# Usually you set "language" from the command line for these cases. -language = None - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This patterns also effect to html_static_path and html_extra_path -exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"] - -# The name of the Pygments (syntax highlighting) style to use. -pygments_style = "sphinx" -highlight_language = "python" - -# If true, `todo` and `todoList` produce output, else they produce nothing. -todo_include_todos = False - - -# -- Options for HTML output ---------------------------------------------- - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -html_theme = "sphinx_rtd_theme" - -# Theme options are theme-specific and customize the look and feel of a theme -# further. 
For a list of options available for each theme, see the -# documentation. -# -# html_theme_options = {} - -# Add any paths that contain custom static files (such as style sheets) here, -# relative to this directory. They are copied after the builtin static files, -# so a file named "default.css" will overwrite the builtin "default.css". -html_static_path = ["_static"] - -html_context = { - "css_files": [ - "_static/theme_overrides.css", # override wide tables in RTD theme - ], -} - -# Custom sidebar templates, must be a dictionary that maps document names -# to template names. -# -# This is required for the alabaster theme -# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars -# html_sidebars = { -# '**': [ -# 'about.html', -# 'navigation.html', -# 'relations.html', # needs 'show_related': True theme option to display -# 'searchbox.html', -# 'donate.html', -# ] -# } - - -# Example configuration for intersphinx: refer to the Python standard library. -intersphinx_mapping = { - "numpy": ("http://docs.scipy.org/doc/numpy/", None), - "python": ("https://docs.python.org/", None), - "torch": ("https://pytorch.org/docs/master/", None), -} diff --git a/kosmos-g/fairseq/docs/criterions.rst b/kosmos-g/fairseq/docs/criterions.rst deleted file mode 100644 index d6b8ca6b6..000000000 --- a/kosmos-g/fairseq/docs/criterions.rst +++ /dev/null @@ -1,31 +0,0 @@ -.. role:: hidden - :class: hidden-section - -.. _Criterions: - -Criterions -========== - -Criterions compute the loss function given the model and batch, roughly:: - - loss = criterion(model, batch) - -.. automodule:: fairseq.criterions - :members: - -.. autoclass:: fairseq.criterions.FairseqCriterion - :members: - :undoc-members: - -.. autoclass:: fairseq.criterions.adaptive_loss.AdaptiveLoss - :members: - :undoc-members: -.. autoclass:: fairseq.criterions.composite_loss.CompositeLoss - :members: - :undoc-members: -.. autoclass:: fairseq.criterions.cross_entropy.CrossEntropyCriterion - :members: - :undoc-members: -.. autoclass:: fairseq.criterions.label_smoothed_cross_entropy.LabelSmoothedCrossEntropyCriterion - :members: - :undoc-members: diff --git a/kosmos-g/fairseq/docs/data.rst b/kosmos-g/fairseq/docs/data.rst deleted file mode 100644 index 6a390cb33..000000000 --- a/kosmos-g/fairseq/docs/data.rst +++ /dev/null @@ -1,58 +0,0 @@ -.. role:: hidden - :class: hidden-section - -.. module:: fairseq.data - -Data Loading and Utilities -========================== - -.. _datasets: - -Datasets --------- - -**Datasets** define the data format and provide helpers for creating -mini-batches. - -.. autoclass:: fairseq.data.FairseqDataset - :members: -.. autoclass:: fairseq.data.LanguagePairDataset - :members: -.. autoclass:: fairseq.data.MonolingualDataset - :members: - -**Helper Datasets** - -These datasets wrap other :class:`fairseq.data.FairseqDataset` instances and -provide additional functionality: - -.. autoclass:: fairseq.data.BacktranslationDataset - :members: -.. autoclass:: fairseq.data.ConcatDataset - :members: -.. autoclass:: fairseq.data.ResamplingDataset - :members: -.. autoclass:: fairseq.data.RoundRobinZipDatasets - :members: -.. autoclass:: fairseq.data.TransformEosDataset - :members: - - -Dictionary ----------- - -.. autoclass:: fairseq.data.Dictionary - :members: - - -Iterators ---------- - -.. autoclass:: fairseq.data.CountingIterator - :members: -.. autoclass:: fairseq.data.EpochBatchIterator - :members: -.. autoclass:: fairseq.data.GroupedIterator - :members: -.. 
autoclass:: fairseq.data.ShardedIterator - :members: diff --git a/kosmos-g/fairseq/docs/docutils.conf b/kosmos-g/fairseq/docs/docutils.conf deleted file mode 100644 index 526acffd3..000000000 --- a/kosmos-g/fairseq/docs/docutils.conf +++ /dev/null @@ -1,2 +0,0 @@ -[writers] -option-limit=0 diff --git a/kosmos-g/fairseq/docs/fairseq.gif b/kosmos-g/fairseq/docs/fairseq.gif deleted file mode 100644 index 5782fdbc7..000000000 Binary files a/kosmos-g/fairseq/docs/fairseq.gif and /dev/null differ diff --git a/kosmos-g/fairseq/docs/fairseq_logo.png b/kosmos-g/fairseq/docs/fairseq_logo.png deleted file mode 100644 index 75472cbb5..000000000 Binary files a/kosmos-g/fairseq/docs/fairseq_logo.png and /dev/null differ diff --git a/kosmos-g/fairseq/docs/getting_started.rst b/kosmos-g/fairseq/docs/getting_started.rst deleted file mode 100644 index 745ad7763..000000000 --- a/kosmos-g/fairseq/docs/getting_started.rst +++ /dev/null @@ -1,216 +0,0 @@ -Evaluating Pre-trained Models -============================= - -First, download a pre-trained model along with its vocabularies: - -.. code-block:: console - - > curl https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2 | tar xvjf - - -This model uses a `Byte Pair Encoding (BPE) -vocabulary `__, so we'll have to apply -the encoding to the source text before it can be translated. This can be -done with the -`apply\_bpe.py `__ -script using the ``wmt14.en-fr.fconv-cuda/bpecodes`` file. ``@@`` is -used as a continuation marker and the original text can be easily -recovered with e.g. ``sed s/@@ //g`` or by passing the ``--remove-bpe`` -flag to :ref:`fairseq-generate`. Prior to BPE, input text needs to be tokenized -using ``tokenizer.perl`` from -`mosesdecoder `__. - -Let's use :ref:`fairseq-interactive` to generate translations interactively. -Here, we use a beam size of 5 and preprocess the input with the Moses -tokenizer and the given Byte-Pair Encoding vocabulary. It will automatically -remove the BPE continuation markers and detokenize the output. - -.. code-block:: console - - > MODEL_DIR=wmt14.en-fr.fconv-py - > fairseq-interactive \ - --path $MODEL_DIR/model.pt $MODEL_DIR \ - --beam 5 --source-lang en --target-lang fr \ - --tokenizer moses \ - --bpe subword_nmt --bpe-codes $MODEL_DIR/bpecodes - | loading model(s) from wmt14.en-fr.fconv-py/model.pt - | [en] dictionary: 44206 types - | [fr] dictionary: 44463 types - | Type the input sentence and press return: - Why is it rare to discover new marine mammal species? - S-0 Why is it rare to discover new marine mam@@ mal species ? - H-0 -0.0643349438905716 Pourquoi est-il rare de découvrir de nouvelles espèces de mammifères marins? - P-0 -0.0763 -0.1849 -0.0956 -0.0946 -0.0735 -0.1150 -0.1301 -0.0042 -0.0321 -0.0171 -0.0052 -0.0062 -0.0015 - -This generation script produces three types of outputs: a line prefixed -with *O* is a copy of the original source sentence; *H* is the -hypothesis along with an average log-likelihood; and *P* is the -positional score per token position, including the -end-of-sentence marker which is omitted from the text. - -Other types of output lines you might see are *D*, the detokenized hypothesis, -*T*, the reference target, *A*, alignment info, *E* the history of generation steps. - -See the `README `__ for a -full list of pre-trained models available. - -Training a New Model -==================== - -The following tutorial is for machine translation. 
For an example of how -to use Fairseq for other tasks, such as :ref:`language modeling`, please see the -``examples/`` directory. - -Data Pre-processing -------------------- - -Fairseq contains example pre-processing scripts for several translation -datasets: IWSLT 2014 (German-English), WMT 2014 (English-French) and WMT -2014 (English-German). To pre-process and binarize the IWSLT dataset: - -.. code-block:: console - - > cd examples/translation/ - > bash prepare-iwslt14.sh - > cd ../.. - > TEXT=examples/translation/iwslt14.tokenized.de-en - > fairseq-preprocess --source-lang de --target-lang en \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/iwslt14.tokenized.de-en - -This will write binarized data that can be used for model training to -``data-bin/iwslt14.tokenized.de-en``. - -Training -------- - -Use :ref:`fairseq-train` to train a new model. Here are a few example settings that work -well for the IWSLT 2014 dataset: - -.. code-block:: console - - > mkdir -p checkpoints/fconv - > CUDA_VISIBLE_DEVICES=0 fairseq-train data-bin/iwslt14.tokenized.de-en \ - --optimizer nag --lr 0.25 --clip-norm 0.1 --dropout 0.2 --max-tokens 4000 \ - --arch fconv_iwslt_de_en --save-dir checkpoints/fconv - -By default, :ref:`fairseq-train` will use all available GPUs on your machine. Use the -``CUDA_VISIBLE_DEVICES`` environment variable to select specific GPUs and/or to -change the number of GPU devices that will be used. - -Also note that the batch size is specified in terms of the maximum -number of tokens per batch (``--max-tokens``). You may need to use a -smaller value depending on the available GPU memory on your system. - -Generation ---------- - -Once your model is trained, you can generate translations using -:ref:`fairseq-generate` **(for binarized data)** or -:ref:`fairseq-interactive` **(for raw text)**: - -.. code-block:: console - - > fairseq-generate data-bin/iwslt14.tokenized.de-en \ - --path checkpoints/fconv/checkpoint_best.pt \ - --batch-size 128 --beam 5 - | [de] dictionary: 35475 types - | [en] dictionary: 24739 types - | data-bin/iwslt14.tokenized.de-en test 6750 examples - | model fconv - | loaded checkpoint checkpoints/fconv/checkpoint_best.pt - S-721 danke . - T-721 thank you . - ... - -To generate translations with only a CPU, use the ``--cpu`` flag. BPE -continuation markers can be removed with the ``--remove-bpe`` flag. - -Advanced Training Options -========================= - -Large mini-batch training with delayed updates ----------------------------------------------- - -The ``--update-freq`` option can be used to accumulate gradients from -multiple mini-batches and delay updating, creating a larger effective -batch size. Delayed updates can also improve training speed by reducing -inter-GPU communication costs and by saving idle time caused by variance -in workload across GPUs. See `Ott et al. -(2018) `__ for more details. A short Python sketch of this -mechanism appears after the FP16 section below. - -To train on a single GPU with an effective batch size that is equivalent -to training on 8 GPUs: - -.. code-block:: console - - > CUDA_VISIBLE_DEVICES=0 fairseq-train --update-freq 8 (...) - -Training with half precision floating point (FP16) --------------------------------------------------- - -.. note:: - - FP16 training requires a Volta GPU and CUDA 9.1 or greater - -Recent GPUs enable efficient half precision floating point computation, -e.g., using `Nvidia Tensor Cores -`__. -Fairseq supports FP16 training with the ``--fp16`` flag: - -.. code-block:: console - - > fairseq-train --fp16 (...)
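To make the delayed-updates mechanism concrete, here is a minimal PyTorch sketch of gradient accumulation. It is an illustration only, not fairseq's trainer code; the tiny linear model, random data, and ``update_freq`` value are placeholder assumptions.

.. code-block:: python

    import torch

    # Stand-ins for the model/optimizer/criterion that fairseq builds from the task.
    model = torch.nn.Linear(16, 4)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.25)
    criterion = torch.nn.CrossEntropyLoss()
    update_freq = 8  # accumulate gradients over 8 mini-batches

    optimizer.zero_grad()
    for i in range(32):
        x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
        # Scale the loss so the accumulated gradient matches one large batch.
        loss = criterion(model(x), y) / update_freq
        loss.backward()  # gradients accumulate in the parameters' .grad buffers
        if (i + 1) % update_freq == 0:
            optimizer.step()       # one parameter update per 8 mini-batches
            optimizer.zero_grad()  # reset the accumulated gradients

The effective batch size is ``update_freq`` times the per-iteration batch size, which is what ``--update-freq 8`` achieves on a single GPU.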
- -Distributed training --------------------- - -Distributed training in fairseq is implemented on top of ``torch.distributed``. -The easiest way to launch jobs is with the `torch.distributed.launch -`__ tool. - -For example, to train a large English-German Transformer model on 2 nodes each -with 8 GPUs (in total 16 GPUs), run the following command on each node, -replacing ``node_rank=0`` with ``node_rank=1`` on the second node and making -sure to update ``--master_addr`` to the IP address of the first node: - -.. code-block:: console - - > python -m torch.distributed.launch --nproc_per_node=8 \ - --nnodes=2 --node_rank=0 --master_addr="192.168.1.1" \ - --master_port=12345 \ - $(which fairseq-train) data-bin/wmt16_en_de_bpe32k \ - --arch transformer_vaswani_wmt_en_de_big --share-all-embeddings \ - --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \ - --lr-scheduler inverse_sqrt --warmup-init-lr 1e-07 --warmup-updates 4000 \ - --lr 0.0005 \ - --dropout 0.3 --weight-decay 0.0 --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --max-tokens 3584 \ - --max-epoch 70 \ - --fp16 - -On SLURM clusters, fairseq will automatically detect the number of nodes and -GPUs, but a port number must be provided: - -.. code-block:: console - - > salloc --gpus=16 --nodes 2 (...) - > srun fairseq-train --distributed-port 12345 (...). - -Sharding very large datasets ----------------------------- - -It can be challenging to train over very large datasets, particularly if your -machine does not have much system RAM. Most tasks in fairseq support training -over "sharded" datasets, in which the original dataset has been preprocessed -into non-overlapping chunks (or "shards"). - -For example, instead of preprocessing all your data into a single "data-bin" -directory, you can split the data and create "data-bin1", "data-bin2", etc. -Then you can adapt your training command like so: - -.. code-block:: console - - > fairseq-train data-bin1:data-bin2:data-bin3 (...) - -Training will now iterate over each shard, one by one, with each shard -corresponding to an "epoch", thus reducing system memory usage. diff --git a/kosmos-g/fairseq/docs/hydra_integration.md b/kosmos-g/fairseq/docs/hydra_integration.md deleted file mode 100644 index 6a1529838..000000000 --- a/kosmos-g/fairseq/docs/hydra_integration.md +++ /dev/null @@ -1,284 +0,0 @@ -## Hydra - -[Hydra](https://github.com/facebookresearch/hydra) is an open-source Python -framework that simplifies the development of research and other complex -applications. The key feature is the ability to dynamically create a -hierarchical configuration by composition and override it through config files -and the command line. The name Hydra comes from its ability to run multiple -similar jobs - much like a Hydra with multiple heads. - -## Motivation - -Until recently, all components in fairseq were configured through a shared -`args` namespace that was created at application startup. Components declared -their own `add_args` method to update the argparse parser, hoping that the names -would not clash with arguments from other components. While this model works for -smaller applications, as fairseq grew and became integrated into other -applications, this became problematic. In order to determine how to configure -each component, one needed to a) examine what args were added by this component, -and b) read the code to figure out what shared arguments it is using that were -added in other places. 
Reproducing models involved sharing commands that often -contained dozens of command line switches. - -The model described above is still supported by fairseq for backward -compatibility, but will be deprecated some time in the future. - -New components in fairseq should now create a dataclass that encapsulates all -parameters required to configure this component. The dataclass is registered -along with the component, and fairseq takes care of constructing and providing -this configuration object to the component's constructor. Note that sharing -parameters can optionally still work, but one has to explicitly point to the -"source of truth" (see inheritance example below). These changes make components -in fairseq more independent and re-usable by other applications: all that is -needed to create a component is to initialize its dataclass and overwrite some -of the defaults. - -While configuring fairseq through command line (using either the legacy argparse -based or the new Hydra based entry points) is still fully supported, you can now -take advantage of configuring fairseq completely or piece-by-piece through -hierarchical YAML configuration files. These files can also be shipped as -examples that others can use to run an identically configured job. - -Additionally, Hydra has a rich and growing [library of -plugins](https://github.com/facebookresearch/hydra/tree/master/plugins) that -provide functionality such as hyperparameter sweeping (including using Bayesian -optimization through the [Ax](https://github.com/facebook/Ax) library), job -launching across various platforms, and more. - -## Creating or migrating components - -In general, each new (or updated) component should provide a companion -[dataclass](https://www.python.org/dev/peps/pep-0557/). These dataclasses are -typically located in the same file as the component and are passed as arguments -to the `register_*()` functions. Top-level configs that should be present in -every fairseq application are placed in the -[global](fairseq/dataclass/configs.py) config file and added to the -`FairseqConfig` object. - -Each dataclass is a plain-old-data object, similar to a `NamedTuple`. These -classes are decorated with a `@dataclass` decorator, and typically inherit from -`FairseqDataclass` (which adds some functionality for backward compatibility). -Each field must have a type, and generally has metadata (such as a help string) -and a default value. Only primitive types or other config objects are allowed as -data types for each field. - -#### Example: - -```python -from dataclasses import dataclass, field -from fairseq.dataclass import FairseqDataclass - -@dataclass -class InteractiveConfig(FairseqDataclass): - buffer_size: int = field( - default=0, - metadata={ - "help": "read this many sentences into a buffer before processing them" - }, - ) - input: str = field( - default="-", - metadata={"help": "file to read from; use - for stdin"}, - ) -``` - -### Inheriting values - -Some components require sharing a value. For example, a learning rate scheduler -and an optimizer may both need to know the initial learning rate value. One can -declare a field that, by default, will inherit its value from another config -node in the same hierarchy: - -```python -@dataclass -class FairseqAdamConfig(FairseqDataclass): - ... - lr: List[float] = II("optimization.lr") - ... -``` - -`II("optimization.lr")` is syntactic sugar for `"${optimization.lr}"`, which is -the value one can use in a YAML config file or through command line to achieve -the same effect.
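As a self-contained illustration of how such an interpolation resolves, here is a minimal sketch using OmegaConf directly (fairseq imports `II` from the `omegaconf` package); the config classes and values below are invented for this example and are not fairseq's real configs:

```python
from dataclasses import dataclass, field
from typing import List

from omegaconf import II, OmegaConf


@dataclass
class OptimizationConfig:
    lr: List[float] = field(default_factory=lambda: [0.25])


@dataclass
class AdamConfig:
    # Resolves to optimization.lr unless explicitly overridden.
    lr: List[float] = II("optimization.lr")


@dataclass
class RootConfig:
    optimization: OptimizationConfig = field(default_factory=OptimizationConfig)
    optimizer: AdamConfig = field(default_factory=AdamConfig)


cfg = OmegaConf.structured(RootConfig)
print(cfg.optimizer.lr)   # [0.25], inherited from optimization.lr
cfg.optimization.lr = [0.1]
print(cfg.optimizer.lr)   # [0.1], the interpolation tracks its source
```

Overriding `optimization.lr` (in YAML or on the command line) therefore changes both fields at once.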
Note that this assumes that there is an "optimization" config -object in the root config and it has a field called "lr". - -### Tasks and Models - -Creating Tasks and Models works the same as before, except that legacy -implementations now inherit from `LegacyFairseq*` base classes, while new -components inherit from `FairseqTask` and `FairseqModel` and provide a dataclass -to the `register_*()` functions. - -#### Task example: - -```python -@dataclass -class LanguageModelingConfig(FairseqDataclass): - data: Optional[str] = field( - default=None, metadata={"help": "path to data directory"} - ) - ... - -@register_task("language_modeling", dataclass=LanguageModelingConfig) -class LanguageModelingTask(FairseqTask): - ... - @classmethod - def setup_task(cls, cfg: LanguageModelingConfig): - ... -``` - -#### Model example: - -```python -@dataclass -class TransformerLanguageModelConfig(FairseqDataclass): - activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field( - default="relu", metadata={"help": "activation function to use"} - ) - dropout: float = field(default=0.1, metadata={"help": "dropout probability"}) - ... - -@register_model("transformer_lm", dataclass=TransformerLanguageModelConfig) -class TransformerLanguageModel(FairseqLanguageModel): - ... - @classmethod - def build_model(cls, cfg: TransformerLanguageModelConfig, task: FairseqTask): - ... -``` - -### Other components - -Other components work as before, but they now take their configuration dataclass -as the only constructor argument: - -```python -@dataclass -class MosesTokenizerConfig(FairseqDataclass): - source_lang: str = field(default="en", metadata={"help": "source language"}) - ... - -@register_tokenizer("moses", dataclass=MosesTokenizerConfig) -class MosesTokenizer(object): - def __init__(self, cfg: MosesTokenizerConfig): - ... -``` - -Note that if you are adding a new registry for a new set of components, you need -to add it to the `FairseqConfig` object in `fairseq/dataclass/configs.py`: - -```python -@dataclass -class FairseqConfig(object): - ... - my_new_registry: Any = None -``` - -## Training with `fairseq-hydra-train` - -To fully take advantage of the configuration flexibility offered by Hydra, you may -want to train new models using the `fairseq-hydra-train` entry point. Legacy CLI -tools such as `fairseq-train` will remain supported for the foreseeable future -but will be deprecated eventually. - -On startup, Hydra will create a configuration object that contains a hierarchy -of all the necessary dataclasses populated with their default values in the -code. The default values are overwritten by values found in YAML files in the -`fairseq/config` directory (which currently sets minimal defaults) and then -further overwritten by values provided through command line arguments. - -Some of the most common use cases are shown below: - -### 1. Override default values through command line: - -```shell script -$ fairseq-hydra-train \ - distributed_training.distributed_world_size=1 \ - dataset.batch_size=2 \ - task.data=data-bin \ - model=transformer_lm/transformer_lm_gpt \ - task=language_modeling \ - optimization.max_update=5000 -``` - -Note that along with explicitly providing values for parameters such as -`dataset.batch_size`, this also tells Hydra to overlay configuration found in -`fairseq/config/model/transformer_lm/transformer_lm_gpt.yaml` over the default -values in the dataclass. If you want to train a model without specifying a -particular architecture, you can simply specify `model=transformer_lm`.
This only -works for migrated tasks and models. - -### 2. Replace bundled configs with an external config: - -```shell script -$ fairseq-hydra-train \ - --config-dir /path/to/external/configs \ - --config-name wiki103 -``` - -where `/path/to/external/configs/wiki103.yaml` contains: - -```yaml -# @package _group_ - -model: - _name: transformer_lm -distributed_training: - distributed_world_size: 1 -dataset: - batch_size: 2 -task: - _name: language_modeling - data: /path/to/data - add_bos_token: false - max_target_positions: 1024 -optimization: - max_update: 50000 - lr: [ 0.25 ] -criterion: cross_entropy -optimizer: adam -lr_scheduler: - _name: cosine -``` - -Note that here bundled configs from `fairseq/config` directory are not used, -however the defaults from each dataclass will still be used (unless overwritten -by your external config). - -Additionally you can choose to break up your configs by creating a directory -structure in the same location as your main config file, with the names of the -top-level fields (such as "model", "dataset", etc), and placing config files -with meaningful names that would populate that specific section of your -top-level config file (for example, you might have -`model/small_transformer_lm.yaml`, `model/big_transformer_lm.yaml`, etc). You -can then specify the correct configuration via command line, defaults in the -main config, or even launch all of them as a sweep (see Hydra documentation on -how to do this). - -### 3. Add an external config directory to Hydra search path: - -This allows combining default configuration (including using any bundled config -files), while specifying your own config files for some parts of the -configuration. - -```shell script -$ fairseq-hydra-train \ - distributed_training.distributed_world_size=1 \ - dataset.batch_size=2 \ - task.data=/path/to/data/ \ - model=transformer_lm/2_layers \ - task=language_modeling \ - optimization.max_update=5000 \ - --config-dir /path/to/external/configs -``` - -where `/path/to/external/configs` has the following structure: -``` -. -+-- model -| +-- transformer_lm -| | +-- 2_layers.yaml -``` - -and `2_layers.yaml` contains a copy of `transformer_lm_gpt.yaml` but with -`decoder_layers` set to 2. You can add other configs to configure other -components as well. diff --git a/kosmos-g/fairseq/docs/index.rst b/kosmos-g/fairseq/docs/index.rst deleted file mode 100644 index 591db86cd..000000000 --- a/kosmos-g/fairseq/docs/index.rst +++ /dev/null @@ -1,49 +0,0 @@ -.. fairseq documentation master file, created by - sphinx-quickstart on Fri Aug 17 21:45:30 2018. - You can adapt this file completely to your liking, but it should at least - contain the root `toctree` directive. - -:github_url: https://github.com/pytorch/fairseq - - -fairseq documentation -===================== - -Fairseq is a sequence modeling toolkit written in `PyTorch -`_ that allows researchers and developers to -train custom models for translation, summarization, language modeling and other -text generation tasks. - -.. toctree:: - :maxdepth: 1 - :caption: Getting Started - - getting_started - command_line_tools - -.. toctree:: - :maxdepth: 1 - :caption: Extending Fairseq - - overview - tutorial_simple_lstm - tutorial_classifying_names - -.. 
toctree:: - :maxdepth: 2 - :caption: Library Reference - - tasks - models - criterions - optim - lr_scheduler - data - modules - - -Indices and tables -================== - -* :ref:`genindex` -* :ref:`search` diff --git a/kosmos-g/fairseq/docs/lr_scheduler.rst b/kosmos-g/fairseq/docs/lr_scheduler.rst deleted file mode 100644 index bbc09dc22..000000000 --- a/kosmos-g/fairseq/docs/lr_scheduler.rst +++ /dev/null @@ -1,34 +0,0 @@ -.. role:: hidden - :class: hidden-section - -.. _Learning Rate Schedulers: - -Learning Rate Schedulers -======================== - -Learning Rate Schedulers update the learning rate over the course of training. -Learning rates can be updated after each update via :func:`step_update` or at -epoch boundaries via :func:`step`. - -.. automodule:: fairseq.optim.lr_scheduler - :members: - -.. autoclass:: fairseq.optim.lr_scheduler.FairseqLRScheduler - :members: - :undoc-members: - -.. autoclass:: fairseq.optim.lr_scheduler.cosine_lr_scheduler.CosineSchedule - :members: - :undoc-members: -.. autoclass:: fairseq.optim.lr_scheduler.fixed_schedule.FixedSchedule - :members: - :undoc-members: -.. autoclass:: fairseq.optim.lr_scheduler.inverse_square_root_schedule.InverseSquareRootSchedule - :members: - :undoc-members: -.. autoclass:: fairseq.optim.lr_scheduler.reduce_lr_on_plateau.ReduceLROnPlateau - :members: - :undoc-members: -.. autoclass:: fairseq.optim.lr_scheduler.triangular_lr_scheduler.TriangularSchedule - :members: - :undoc-members: diff --git a/kosmos-g/fairseq/docs/make.bat b/kosmos-g/fairseq/docs/make.bat deleted file mode 100644 index baa9d02a7..000000000 --- a/kosmos-g/fairseq/docs/make.bat +++ /dev/null @@ -1,36 +0,0 @@ -@ECHO OFF - -pushd %~dp0 - -REM Command file for Sphinx documentation - -if "%SPHINXBUILD%" == "" ( - set SPHINXBUILD=python -msphinx -) -set SOURCEDIR=. -set BUILDDIR=_build -set SPHINXPROJ=fairseq - -if "%1" == "" goto help - -%SPHINXBUILD% >NUL 2>NUL -if errorlevel 9009 ( - echo. - echo.The Sphinx module was not found. Make sure you have Sphinx installed, - echo.then set the SPHINXBUILD environment variable to point to the full - echo.path of the 'sphinx-build' executable. Alternatively you may add the - echo.Sphinx directory to PATH. - echo. - echo.If you don't have Sphinx installed, grab it from - echo.http://sphinx-doc.org/ - exit /b 1 -) - -%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% -goto end - -:help -%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% - -:end -popd diff --git a/kosmos-g/fairseq/docs/models.rst b/kosmos-g/fairseq/docs/models.rst deleted file mode 100644 index 054622d58..000000000 --- a/kosmos-g/fairseq/docs/models.rst +++ /dev/null @@ -1,104 +0,0 @@ -.. role:: hidden - :class: hidden-section - -.. module:: fairseq.models - -.. _Models: - -Models -====== - -A Model defines the neural network's ``forward()`` method and encapsulates all -of the learnable parameters in the network. Each model also provides a set of -named *architectures* that define the precise network configuration (e.g., -embedding dimension, number of layers, etc.). - -Both the model type and architecture are selected via the ``--arch`` -command-line argument. Once selected, a model may expose additional command-line -arguments for further configuration. - -.. note:: - - All fairseq Models extend :class:`BaseFairseqModel`, which in turn extends - :class:`torch.nn.Module`. Thus any fairseq Model can be used as a - stand-alone Module in other PyTorch code. - - -Convolutional Neural Networks (CNN) ------------------------------------ - -.. 
module:: fairseq.models.fconv -.. autoclass:: fairseq.models.fconv.FConvModel - :members: -.. autoclass:: fairseq.models.fconv.FConvEncoder - :members: - :undoc-members: -.. autoclass:: fairseq.models.fconv.FConvDecoder - :members: - - -Long Short-Term Memory (LSTM) networks --------------------------------------- - -.. module:: fairseq.models.lstm -.. autoclass:: fairseq.models.lstm.LSTMModel - :members: -.. autoclass:: fairseq.models.lstm.LSTMEncoder - :members: -.. autoclass:: fairseq.models.lstm.LSTMDecoder - :members: - - -Transformer (self-attention) networks -------------------------------------- - -.. module:: fairseq.models.transformer -.. autoclass:: fairseq.models.transformer.TransformerModel - :members: -.. autoclass:: fairseq.models.transformer.TransformerEncoder - :members: -.. autoclass:: fairseq.models.transformer.TransformerEncoderLayer - :members: -.. autoclass:: fairseq.models.transformer.TransformerDecoder - :members: -.. autoclass:: fairseq.models.transformer.TransformerDecoderLayer - :members: - - -Adding new models ------------------ - -.. currentmodule:: fairseq.models -.. autofunction:: fairseq.models.register_model -.. autofunction:: fairseq.models.register_model_architecture -.. autoclass:: fairseq.models.BaseFairseqModel - :members: - :undoc-members: -.. autoclass:: fairseq.models.FairseqEncoderDecoderModel - :members: - :undoc-members: -.. autoclass:: fairseq.models.FairseqEncoderModel - :members: - :undoc-members: -.. autoclass:: fairseq.models.FairseqLanguageModel - :members: - :undoc-members: -.. autoclass:: fairseq.models.FairseqMultiModel - :members: - :undoc-members: -.. autoclass:: fairseq.models.FairseqEncoder - :members: -.. autoclass:: fairseq.models.CompositeEncoder - :members: -.. autoclass:: fairseq.models.FairseqDecoder - :members: - - -.. _Incremental decoding: - -Incremental decoding --------------------- - -.. autoclass:: fairseq.models.FairseqIncrementalDecoder - :members: - :undoc-members: diff --git a/kosmos-g/fairseq/docs/modules.rst b/kosmos-g/fairseq/docs/modules.rst deleted file mode 100644 index 9631c93d4..000000000 --- a/kosmos-g/fairseq/docs/modules.rst +++ /dev/null @@ -1,9 +0,0 @@ -Modules -======= - -Fairseq provides several stand-alone :class:`torch.nn.Module` classes that may -be helpful when implementing a new :class:`~fairseq.models.BaseFairseqModel`. - -.. automodule:: fairseq.modules - :members: - :undoc-members: diff --git a/kosmos-g/fairseq/docs/optim.rst b/kosmos-g/fairseq/docs/optim.rst deleted file mode 100644 index c3326456b..000000000 --- a/kosmos-g/fairseq/docs/optim.rst +++ /dev/null @@ -1,38 +0,0 @@ -.. role:: hidden - :class: hidden-section - -.. _optimizers: - -Optimizers -========== - -Optimizers update the Model parameters based on the gradients. - -.. automodule:: fairseq.optim - :members: - -.. autoclass:: fairseq.optim.FairseqOptimizer - :members: - :undoc-members: - -.. autoclass:: fairseq.optim.adadelta.Adadelta - :members: - :undoc-members: -.. autoclass:: fairseq.optim.adagrad.Adagrad - :members: - :undoc-members: -.. autoclass:: fairseq.optim.adafactor.FairseqAdafactor - :members: - :undoc-members: -.. autoclass:: fairseq.optim.adam.FairseqAdam - :members: - :undoc-members: -.. autoclass:: fairseq.optim.fp16_optimizer.FP16Optimizer - :members: - :undoc-members: -.. autoclass:: fairseq.optim.nag.FairseqNAG - :members: - :undoc-members: -.. 
autoclass:: fairseq.optim.sgd.SGD - :members: - :undoc-members: diff --git a/kosmos-g/fairseq/docs/overview.rst b/kosmos-g/fairseq/docs/overview.rst deleted file mode 100644 index 026b3b5c7..000000000 --- a/kosmos-g/fairseq/docs/overview.rst +++ /dev/null @@ -1,74 +0,0 @@ -Overview -======== - -Fairseq can be extended through user-supplied `plug-ins -`_. We support five kinds of -plug-ins: - -- :ref:`Models` define the neural network architecture and encapsulate all of the - learnable parameters. -- :ref:`Criterions` compute the loss function given the model outputs and targets. -- :ref:`Tasks` store dictionaries and provide helpers for loading/iterating over - Datasets, initializing the Model/Criterion and calculating the loss. -- :ref:`Optimizers` update the Model parameters based on the gradients. -- :ref:`Learning Rate Schedulers` update the learning rate over the course of - training. - -**Training Flow** - -Given a ``model``, ``criterion``, ``task``, ``optimizer`` and ``lr_scheduler``, -fairseq implements the following high-level training flow:: - - for epoch in range(num_epochs): - itr = task.get_batch_iterator(task.dataset('train')) - for num_updates, batch in enumerate(itr): - task.train_step(batch, model, criterion, optimizer) - average_and_clip_gradients() - optimizer.step() - lr_scheduler.step_update(num_updates) - lr_scheduler.step(epoch) - -where the default implementation for ``task.train_step`` is roughly:: - - def train_step(self, batch, model, criterion, optimizer, **unused): - loss = criterion(model, batch) - optimizer.backward(loss) - return loss - -**Registering new plug-ins** - -New plug-ins are *registered* through a set of ``@register`` function -decorators, for example:: - - @register_model('my_lstm') - class MyLSTM(FairseqEncoderDecoderModel): - (...) - -Once registered, new plug-ins can be used with the existing :ref:`Command-line -Tools`. See the Tutorial sections for more detailed walkthroughs of how to add -new plug-ins. - -**Loading plug-ins from another directory** - -New plug-ins can be defined in a custom module stored in the user system. In -order to import the module, and make the plugin available to *fairseq*, the -command line supports the ``--user-dir`` flag that can be used to specify a -custom location for additional modules to load into *fairseq*. - -For example, assuming this directory tree:: - - /home/user/my-module/ - └── __init__.py - -with ``__init__.py``:: - - from fairseq.models import register_model_architecture - from fairseq.models.transformer import transformer_vaswani_wmt_en_de_big - - @register_model_architecture('transformer', 'my_transformer') - def transformer_mmt_big(args): - transformer_vaswani_wmt_en_de_big(args) - -it is possible to invoke the :ref:`fairseq-train` script with the new architecture with:: - - fairseq-train ... --user-dir /home/user/my-module -a my_transformer --task translation diff --git a/kosmos-g/fairseq/docs/requirements.txt b/kosmos-g/fairseq/docs/requirements.txt deleted file mode 100644 index c734a1f04..000000000 --- a/kosmos-g/fairseq/docs/requirements.txt +++ /dev/null @@ -1,2 +0,0 @@ -sphinx<2.0 -sphinx-argparse diff --git a/kosmos-g/fairseq/docs/tasks.rst b/kosmos-g/fairseq/docs/tasks.rst deleted file mode 100644 index 5f65c3c86..000000000 --- a/kosmos-g/fairseq/docs/tasks.rst +++ /dev/null @@ -1,61 +0,0 @@ -.. role:: hidden - :class: hidden-section - -.. module:: fairseq.tasks - -.. 
_Tasks: - -Tasks -===== - -Tasks store dictionaries and provide helpers for loading/iterating over -Datasets, initializing the Model/Criterion and calculating the loss. - -Tasks can be selected via the ``--task`` command-line argument. Once selected, a -task may expose additional command-line arguments for further configuration. - -Example usage:: - - # setup the task (e.g., load dictionaries) - task = fairseq.tasks.setup_task(args) - - # build model and criterion - model = task.build_model(args) - criterion = task.build_criterion(args) - - # load datasets - task.load_dataset('train') - task.load_dataset('valid') - - # iterate over mini-batches of data - batch_itr = task.get_batch_iterator( - task.dataset('train'), max_tokens=4096, - ) - for batch in batch_itr: - # compute the loss - loss, sample_size, logging_output = task.get_loss( - model, criterion, batch, - ) - loss.backward() - - -Translation ------------ - -.. autoclass:: fairseq.tasks.translation.TranslationTask - -.. _language modeling: - -Language Modeling ------------------ - -.. autoclass:: fairseq.tasks.language_modeling.LanguageModelingTask - - -Adding new tasks ----------------- - -.. autofunction:: fairseq.tasks.register_task -.. autoclass:: fairseq.tasks.FairseqTask - :members: - :undoc-members: diff --git a/kosmos-g/fairseq/docs/tutorial_classifying_names.rst b/kosmos-g/fairseq/docs/tutorial_classifying_names.rst deleted file mode 100644 index b02fec048..000000000 --- a/kosmos-g/fairseq/docs/tutorial_classifying_names.rst +++ /dev/null @@ -1,415 +0,0 @@ -Tutorial: Classifying Names with a Character-Level RNN -====================================================== - -In this tutorial we will extend fairseq to support *classification* tasks. In -particular we will re-implement the PyTorch tutorial for `Classifying Names with -a Character-Level RNN `_ -in fairseq. It is recommended to quickly skim that tutorial before beginning -this one. - -This tutorial covers: - -1. **Preprocessing the data** to create dictionaries. -2. **Registering a new Model** that encodes an input sentence with a simple RNN - and predicts the output label. -3. **Registering a new Task** that loads our dictionaries and dataset. -4. **Training the Model** using the existing command-line tools. -5. **Writing an evaluation script** that imports fairseq and allows us to - interactively evaluate our model on new inputs. - - -1. Preprocessing the data -------------------------- - -The original tutorial provides raw data, but we'll work with a modified version -of the data that is already tokenized into characters and split into separate -train, valid and test sets. - -Download and extract the data from here: -`tutorial_names.tar.gz `_ - -Once extracted, let's preprocess the data using the :ref:`fairseq-preprocess` -command-line tool to create the dictionaries. While this tool is primarily -intended for sequence-to-sequence problems, we're able to reuse it here by -treating the label as a "target" sequence of length 1. We'll also output the -preprocessed files in "raw" format using the ``--dataset-impl`` option to -enhance readability: - -.. code-block:: console - - > fairseq-preprocess \ - --trainpref names/train --validpref names/valid --testpref names/test \ - --source-lang input --target-lang label \ - --destdir names-bin --dataset-impl raw - -After running the above command you should see a new directory, -:file:`names-bin/`, containing the dictionaries for *inputs* and *labels*. - - -2. 
Registering a new Model --------------------------- - -Next we'll register a new model in fairseq that will encode an input sentence -with a simple RNN and predict the output label. Compared to the original PyTorch -tutorial, our version will also work with batches of data and GPU Tensors. - -First let's copy the simple RNN module implemented in the `PyTorch tutorial -`_. -Create a new file named :file:`fairseq/models/rnn_classifier.py` with the -following contents:: - - import torch - import torch.nn as nn - - class RNN(nn.Module): - - def __init__(self, input_size, hidden_size, output_size): - super(RNN, self).__init__() - - self.hidden_size = hidden_size - - self.i2h = nn.Linear(input_size + hidden_size, hidden_size) - self.i2o = nn.Linear(input_size + hidden_size, output_size) - self.softmax = nn.LogSoftmax(dim=1) - - def forward(self, input, hidden): - combined = torch.cat((input, hidden), 1) - hidden = self.i2h(combined) - output = self.i2o(combined) - output = self.softmax(output) - return output, hidden - - def initHidden(self): - return torch.zeros(1, self.hidden_size) - -We must also *register* this model with fairseq using the -:func:`~fairseq.models.register_model` function decorator. Once the model is -registered we'll be able to use it with the existing :ref:`Command-line Tools`. - -All registered models must implement the :class:`~fairseq.models.BaseFairseqModel` -interface, so we'll create a small wrapper class in the same file and register -it in fairseq with the name ``'rnn_classifier'``:: - - from fairseq.models import BaseFairseqModel, register_model - - # Note: the register_model "decorator" should immediately precede the - # definition of the Model class. - - @register_model('rnn_classifier') - class FairseqRNNClassifier(BaseFairseqModel): - - @staticmethod - def add_args(parser): - # Models can override this method to add new command-line arguments. - # Here we'll add a new command-line argument to configure the - # dimensionality of the hidden state. - parser.add_argument( - '--hidden-dim', type=int, metavar='N', - help='dimensionality of the hidden state', - ) - - @classmethod - def build_model(cls, args, task): - # Fairseq initializes models by calling the ``build_model()`` - # function. This provides more flexibility, since the returned model - # instance can be of a different type than the one that was called. - # In this case we'll just return a FairseqRNNClassifier instance. - - # Initialize our RNN module - rnn = RNN( - # We'll define the Task in the next section, but for now just - # notice that the task holds the dictionaries for the "source" - # (i.e., the input sentence) and "target" (i.e., the label). - input_size=len(task.source_dictionary), - hidden_size=args.hidden_dim, - output_size=len(task.target_dictionary), - ) - - # Return the wrapped version of the module - return FairseqRNNClassifier( - rnn=rnn, - input_vocab=task.source_dictionary, - ) - - def __init__(self, rnn, input_vocab): - super(FairseqRNNClassifier, self).__init__() - - self.rnn = rnn - self.input_vocab = input_vocab - - # The RNN module in the tutorial expects one-hot inputs, so we can - # precompute the identity matrix to help convert from indices to - # one-hot vectors. We register it as a buffer so that it is moved to - # the GPU when ``cuda()`` is called. 
- self.register_buffer('one_hot_inputs', torch.eye(len(input_vocab))) - - def forward(self, src_tokens, src_lengths): - # The inputs to the ``forward()`` function are determined by the - # Task, and in particular the ``'net_input'`` key in each - # mini-batch. We'll define the Task in the next section, but for - # now just know that *src_tokens* has shape `(batch, src_len)` and - # *src_lengths* has shape `(batch)`. - bsz, max_src_len = src_tokens.size() - - # Initialize the RNN hidden state. Compared to the original PyTorch - # tutorial we'll also handle batched inputs and work on the GPU. - hidden = self.rnn.initHidden() - hidden = hidden.repeat(bsz, 1) # expand for batched inputs - hidden = hidden.to(src_tokens.device) # move to GPU - - for i in range(max_src_len): - # WARNING: The inputs have padding, so we should mask those - # elements here so that padding doesn't affect the results. - # This is left as an exercise for the reader. The padding symbol - # is given by ``self.input_vocab.pad()`` and the unpadded length - # of each input is given by *src_lengths*. - - # One-hot encode a batch of input characters. - input = self.one_hot_inputs[src_tokens[:, i].long()] - - # Feed the input to our RNN. - output, hidden = self.rnn(input, hidden) - - # Return the final output state for making a prediction - return output - -Finally let's define a *named architecture* with the configuration for our -model. This is done with the :func:`~fairseq.models.register_model_architecture` -function decorator. Thereafter this named architecture can be used with the -``--arch`` command-line argument, e.g., ``--arch pytorch_tutorial_rnn``:: - - from fairseq.models import register_model_architecture - - # The first argument to ``register_model_architecture()`` should be the name - # of the model we registered above (i.e., 'rnn_classifier'). The function we - # register here should take a single argument *args* and modify it in-place - # to match the desired architecture. - - @register_model_architecture('rnn_classifier', 'pytorch_tutorial_rnn') - def pytorch_tutorial_rnn(args): - # We use ``getattr()`` to prioritize arguments that are explicitly given - # on the command-line, so that the defaults defined below are only used - # when no other value has been specified. - args.hidden_dim = getattr(args, 'hidden_dim', 128) - - -3. Registering a new Task -------------------------- - -Now we'll register a new :class:`~fairseq.tasks.FairseqTask` (via its args-based variant :class:`~fairseq.tasks.LegacyFairseqTask`) that will load our -dictionaries and dataset. Tasks can also control how the data is batched into -mini-batches, but in this tutorial we'll reuse the batching provided by -:class:`fairseq.data.LanguagePairDataset`. - -Create a new file named :file:`fairseq/tasks/simple_classification.py` with the -following contents:: - - import os - import torch - - from fairseq.data import Dictionary, LanguagePairDataset - from fairseq.tasks import LegacyFairseqTask, register_task - - - @register_task('simple_classification') - class SimpleClassificationTask(LegacyFairseqTask): - - @staticmethod - def add_args(parser): - # Add some command-line arguments for specifying where the data is - # located and the maximum supported input length. - parser.add_argument('data', metavar='FILE', - help='file prefix for data') - parser.add_argument('--max-positions', default=1024, type=int, - help='max input length') - - @classmethod - def setup_task(cls, args, **kwargs): - # Here we can perform any setup required for the task.
This may include - # loading Dictionaries, initializing shared Embedding layers, etc. - # In this case we'll just load the Dictionaries. - input_vocab = Dictionary.load(os.path.join(args.data, 'dict.input.txt')) - label_vocab = Dictionary.load(os.path.join(args.data, 'dict.label.txt')) - print('| [input] dictionary: {} types'.format(len(input_vocab))) - print('| [label] dictionary: {} types'.format(len(label_vocab))) - - return SimpleClassificationTask(args, input_vocab, label_vocab) - - def __init__(self, args, input_vocab, label_vocab): - super().__init__(args) - self.input_vocab = input_vocab - self.label_vocab = label_vocab - - def load_dataset(self, split, **kwargs): - """Load a given dataset split (e.g., train, valid, test).""" - - prefix = os.path.join(self.args.data, '{}.input-label'.format(split)) - - # Read input sentences. - sentences, lengths = [], [] - with open(prefix + '.input', encoding='utf-8') as file: - for line in file: - sentence = line.strip() - - # Tokenize the sentence, splitting on spaces - tokens = self.input_vocab.encode_line( - sentence, add_if_not_exist=False, - ) - - sentences.append(tokens) - lengths.append(tokens.numel()) - - # Read labels. - labels = [] - with open(prefix + '.label', encoding='utf-8') as file: - for line in file: - label = line.strip() - labels.append( - # Convert label to a numeric ID. - torch.LongTensor([self.label_vocab.add_symbol(label)]) - ) - - assert len(sentences) == len(labels) - print('| {} {} {} examples'.format(self.args.data, split, len(sentences))) - - # We reuse LanguagePairDataset since classification can be modeled as a - # sequence-to-sequence task where the target sequence has length 1. - self.datasets[split] = LanguagePairDataset( - src=sentences, - src_sizes=lengths, - src_dict=self.input_vocab, - tgt=labels, - tgt_sizes=torch.ones(len(labels)), # targets have length 1 - tgt_dict=self.label_vocab, - left_pad_source=False, - # Since our target is a single class label, there's no need for - # teacher forcing. If we set this to ``True`` then our Model's - # ``forward()`` method would receive an additional argument called - # *prev_output_tokens* that would contain a shifted version of the - # target sequence. - input_feeding=False, - ) - - def max_positions(self): - """Return the max input length allowed by the task.""" - # The source should be less than *args.max_positions* and the "target" - # has max length 1. - return (self.args.max_positions, 1) - - @property - def source_dictionary(self): - """Return the source :class:`~fairseq.data.Dictionary`.""" - return self.input_vocab - - @property - def target_dictionary(self): - """Return the target :class:`~fairseq.data.Dictionary`.""" - return self.label_vocab - - # We could override this method if we wanted more control over how batches - # are constructed, but it's not necessary for this tutorial since we can - # reuse the batching provided by LanguagePairDataset. - # - # def get_batch_iterator( - # self, dataset, max_tokens=None, max_sentences=None, max_positions=None, - # ignore_invalid_inputs=False, required_batch_size_multiple=1, - # seed=1, num_shards=1, shard_id=0, num_workers=0, epoch=1, - # data_buffer_size=0, disable_iterator_cache=False, - # ): - # (...) - - -4. Training the Model ---------------------- - -Now we're ready to train the model. We can use the existing :ref:`fairseq-train` -command-line tool for this, making sure to specify our new Task (``--task -simple_classification``) and Model architecture (``--arch -pytorch_tutorial_rnn``): - -.. 
note:: - - You can also configure the dimensionality of the hidden state by passing the - ``--hidden-dim`` argument to :ref:`fairseq-train`. - -.. code-block:: console - - > fairseq-train names-bin \ - --task simple_classification \ - --arch pytorch_tutorial_rnn \ - --optimizer adam --lr 0.001 --lr-shrink 0.5 \ - --max-tokens 1000 - (...) - | epoch 027 | loss 1.200 | ppl 2.30 | wps 15728 | ups 119.4 | wpb 116 | bsz 116 | num_updates 3726 | lr 1.5625e-05 | gnorm 1.290 | clip 0% | oom 0 | wall 32 | train_wall 21 - | epoch 027 | valid on 'valid' subset | valid_loss 1.41304 | valid_ppl 2.66 | num_updates 3726 | best 1.41208 - | done training in 31.6 seconds - -The model files should appear in the :file:`checkpoints/` directory. - - -5. Writing an evaluation script -------------------------------- - -Finally we can write a short script to evaluate our model on new inputs. Create -a new file named :file:`eval_classifier.py` with the following contents:: - - from fairseq import checkpoint_utils, data, options, tasks - - # Parse command-line arguments for generation - parser = options.get_generation_parser(default_task='simple_classification') - args = options.parse_args_and_arch(parser) - - # Setup task - task = tasks.setup_task(args) - - # Load model - print('| loading model from {}'.format(args.path)) - models, _model_args = checkpoint_utils.load_model_ensemble([args.path], task=task) - model = models[0] - - while True: - sentence = input('\nInput: ') - - # Tokenize into characters - chars = ' '.join(list(sentence.strip())) - tokens = task.source_dictionary.encode_line( - chars, add_if_not_exist=False, - ) - - # Build mini-batch to feed to the model - batch = data.language_pair_dataset.collate( - samples=[{'id': -1, 'source': tokens}], # bsz = 1 - pad_idx=task.source_dictionary.pad(), - eos_idx=task.source_dictionary.eos(), - left_pad_source=False, - input_feeding=False, - ) - - # Feed batch to the model and get predictions - preds = model(**batch['net_input']) - - # Print top 3 predictions and their log-probabilities - top_scores, top_labels = preds[0].topk(k=3) - for score, label_idx in zip(top_scores, top_labels): - label_name = task.target_dictionary.string([label_idx]) - print('({:.2f})\t{}'.format(score, label_name)) - -Now we can evaluate our model interactively. Note that we have included the -original data path (:file:`names-bin/`) so that the dictionaries can be loaded: - -.. code-block:: console - - > python eval_classifier.py names-bin --path checkpoints/checkpoint_best.pt - | [input] dictionary: 64 types - | [label] dictionary: 24 types - | loading model from checkpoints/checkpoint_best.pt - - Input: Satoshi - (-0.61) Japanese - (-1.20) Arabic - (-2.86) Italian - - Input: Sinbad - (-0.30) Arabic - (-1.76) English - (-4.08) Russian diff --git a/kosmos-g/fairseq/docs/tutorial_simple_lstm.rst b/kosmos-g/fairseq/docs/tutorial_simple_lstm.rst deleted file mode 100644 index f52988507..000000000 --- a/kosmos-g/fairseq/docs/tutorial_simple_lstm.rst +++ /dev/null @@ -1,518 +0,0 @@ -Tutorial: Simple LSTM -===================== - -In this tutorial we will extend fairseq by adding a new -:class:`~fairseq.models.FairseqEncoderDecoderModel` that encodes a source -sentence with an LSTM and then passes the final hidden state to a second LSTM -that decodes the target sentence (without attention). - -This tutorial covers: - -1. **Writing an Encoder and Decoder** to encode/decode the source/target - sentence, respectively. -2. 
**Registering a new Model** so that it can be used with the existing - :ref:`Command-line tools`. -3. **Training the Model** using the existing command-line tools. -4. **Making generation faster** by modifying the Decoder to use - :ref:`Incremental decoding`. - - -1. Building an Encoder and Decoder ----------------------------------- - -In this section we'll define a simple LSTM Encoder and Decoder. All Encoders -should implement the :class:`~fairseq.models.FairseqEncoder` interface and -Decoders should implement the :class:`~fairseq.models.FairseqDecoder` interface. -These interfaces themselves extend :class:`torch.nn.Module`, so FairseqEncoders -and FairseqDecoders can be written and used in the same ways as ordinary PyTorch -Modules. - - -Encoder -~~~~~~~ - -Our Encoder will embed the tokens in the source sentence, feed them to a -:class:`torch.nn.LSTM` and return the final hidden state. To create our encoder -save the following in a new file named :file:`fairseq/models/simple_lstm.py`:: - - import torch.nn as nn - from fairseq import utils - from fairseq.models import FairseqEncoder - - class SimpleLSTMEncoder(FairseqEncoder): - - def __init__( - self, args, dictionary, embed_dim=128, hidden_dim=128, dropout=0.1, - ): - super().__init__(dictionary) - self.args = args - - # Our encoder will embed the inputs before feeding them to the LSTM. - self.embed_tokens = nn.Embedding( - num_embeddings=len(dictionary), - embedding_dim=embed_dim, - padding_idx=dictionary.pad(), - ) - self.dropout = nn.Dropout(p=dropout) - - # We'll use a single-layer, unidirectional LSTM for simplicity. - self.lstm = nn.LSTM( - input_size=embed_dim, - hidden_size=hidden_dim, - num_layers=1, - bidirectional=False, - batch_first=True, - ) - - def forward(self, src_tokens, src_lengths): - # The inputs to the ``forward()`` function are determined by the - # Task, and in particular the ``'net_input'`` key in each - # mini-batch. We discuss Tasks in the next tutorial, but for now just - # know that *src_tokens* has shape `(batch, src_len)` and *src_lengths* - # has shape `(batch)`. - - # Note that the source is typically padded on the left. This can be - # configured by adding the `--left-pad-source "False"` command-line - # argument, but here we'll make the Encoder handle either kind of - # padding by converting everything to be right-padded. - if self.args.left_pad_source: - # Convert left-padding to right-padding. - src_tokens = utils.convert_padding_direction( - src_tokens, - padding_idx=self.dictionary.pad(), - left_to_right=True - ) - - # Embed the source. - x = self.embed_tokens(src_tokens) - - # Apply dropout. - x = self.dropout(x) - - # Pack the sequence into a PackedSequence object to feed to the LSTM. - x = nn.utils.rnn.pack_padded_sequence(x, src_lengths, batch_first=True) - - # Get the output from the LSTM. - _outputs, (final_hidden, _final_cell) = self.lstm(x) - - # Return the Encoder's output. This can be any object and will be - # passed directly to the Decoder. - return { - # this will have shape `(bsz, hidden_dim)` - 'final_hidden': final_hidden.squeeze(0), - } - - # Encoders are required to implement this method so that we can rearrange - # the order of the batch elements during inference (e.g., beam search). - def reorder_encoder_out(self, encoder_out, new_order): - """ - Reorder encoder output according to `new_order`. 
- - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - `encoder_out` rearranged according to `new_order` - """ - final_hidden = encoder_out['final_hidden'] - return { - 'final_hidden': final_hidden.index_select(0, new_order), - } - - -Decoder -~~~~~~~ - -Our Decoder will predict the next word, conditioned on the Encoder's final -hidden state and an embedded representation of the previous target word -- which -is sometimes called *teacher forcing*. More specifically, we'll use a -:class:`torch.nn.LSTM` to produce a sequence of hidden states that we'll project -to the size of the output vocabulary to predict each target word. - -:: - - import torch - from fairseq.models import FairseqDecoder - - class SimpleLSTMDecoder(FairseqDecoder): - - def __init__( - self, dictionary, encoder_hidden_dim=128, embed_dim=128, hidden_dim=128, - dropout=0.1, - ): - super().__init__(dictionary) - - # Our decoder will embed the inputs before feeding them to the LSTM. - self.embed_tokens = nn.Embedding( - num_embeddings=len(dictionary), - embedding_dim=embed_dim, - padding_idx=dictionary.pad(), - ) - self.dropout = nn.Dropout(p=dropout) - - # We'll use a single-layer, unidirectional LSTM for simplicity. - self.lstm = nn.LSTM( - # For the first layer we'll concatenate the Encoder's final hidden - # state with the embedded target tokens. - input_size=encoder_hidden_dim + embed_dim, - hidden_size=hidden_dim, - num_layers=1, - bidirectional=False, - ) - - # Define the output projection. - self.output_projection = nn.Linear(hidden_dim, len(dictionary)) - - # During training Decoders are expected to take the entire target sequence - # (shifted right by one position) and produce logits over the vocabulary. - # The *prev_output_tokens* tensor begins with the end-of-sentence symbol, - # ``dictionary.eos()``, followed by the target sequence. - def forward(self, prev_output_tokens, encoder_out): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (Tensor, optional): output from the encoder, used for - encoder-side attention - - Returns: - tuple: - - the last decoder layer's output of shape - `(batch, tgt_len, vocab)` - - the last decoder layer's attention weights of shape - `(batch, tgt_len, src_len)` - """ - bsz, tgt_len = prev_output_tokens.size() - - # Extract the final hidden state from the Encoder. - final_encoder_hidden = encoder_out['final_hidden'] - - # Embed the target sequence, which has been shifted right by one - # position and now starts with the end-of-sentence symbol. - x = self.embed_tokens(prev_output_tokens) - - # Apply dropout. - x = self.dropout(x) - - # Concatenate the Encoder's final hidden state to *every* embedded - # target token. - x = torch.cat( - [x, final_encoder_hidden.unsqueeze(1).expand(bsz, tgt_len, -1)], - dim=2, - ) - - # Using PackedSequence objects in the Decoder is harder than in the - # Encoder, since the targets are not sorted in descending length order, - # which is a requirement of ``pack_padded_sequence()``. Instead we'll - # feed nn.LSTM directly. - initial_state = ( - final_encoder_hidden.unsqueeze(0), # hidden - torch.zeros_like(final_encoder_hidden).unsqueeze(0), # cell - ) - output, _ = self.lstm( - x.transpose(0, 1), # convert to shape `(tgt_len, bsz, dim)` - initial_state, - ) - x = output.transpose(0, 1) # convert to shape `(bsz, tgt_len, hidden)` - - # Project the outputs to the size of the vocabulary. 
- x = self.output_projection(x) - - # Return the logits and ``None`` for the attention weights - return x, None - - -2. Registering the Model ------------------------- - -Now that we've defined our Encoder and Decoder we must *register* our model with -fairseq using the :func:`~fairseq.models.register_model` function decorator. -Once the model is registered we'll be able to use it with the existing -:ref:`Command-line Tools`. - -All registered models must implement the -:class:`~fairseq.models.BaseFairseqModel` interface. For sequence-to-sequence -models (i.e., any model with a single Encoder and Decoder), we can instead -implement the :class:`~fairseq.models.FairseqEncoderDecoderModel` interface. - -Create a small wrapper class in the same file and register it in fairseq with -the name ``'simple_lstm'``:: - - from fairseq.models import FairseqEncoderDecoderModel, register_model - - # Note: the register_model "decorator" should immediately precede the - # definition of the Model class. - - @register_model('simple_lstm') - class SimpleLSTMModel(FairseqEncoderDecoderModel): - - @staticmethod - def add_args(parser): - # Models can override this method to add new command-line arguments. - # Here we'll add some new command-line arguments to configure dropout - # and the dimensionality of the embeddings and hidden states. - parser.add_argument( - '--encoder-embed-dim', type=int, metavar='N', - help='dimensionality of the encoder embeddings', - ) - parser.add_argument( - '--encoder-hidden-dim', type=int, metavar='N', - help='dimensionality of the encoder hidden state', - ) - parser.add_argument( - '--encoder-dropout', type=float, default=0.1, - help='encoder dropout probability', - ) - parser.add_argument( - '--decoder-embed-dim', type=int, metavar='N', - help='dimensionality of the decoder embeddings', - ) - parser.add_argument( - '--decoder-hidden-dim', type=int, metavar='N', - help='dimensionality of the decoder hidden state', - ) - parser.add_argument( - '--decoder-dropout', type=float, default=0.1, - help='decoder dropout probability', - ) - - @classmethod - def build_model(cls, args, task): - # Fairseq initializes models by calling the ``build_model()`` - # function. This provides more flexibility, since the returned model - # instance can be of a different type than the one that was called. - # In this case we'll just return a SimpleLSTMModel instance. - - # Initialize our Encoder and Decoder. - encoder = SimpleLSTMEncoder( - args=args, - dictionary=task.source_dictionary, - embed_dim=args.encoder_embed_dim, - hidden_dim=args.encoder_hidden_dim, - dropout=args.encoder_dropout, - ) - decoder = SimpleLSTMDecoder( - dictionary=task.target_dictionary, - encoder_hidden_dim=args.encoder_hidden_dim, - embed_dim=args.decoder_embed_dim, - hidden_dim=args.decoder_hidden_dim, - dropout=args.decoder_dropout, - ) - model = SimpleLSTMModel(encoder, decoder) - - # Print the model architecture. - print(model) - - return model - - # We could override the ``forward()`` if we wanted more control over how - # the encoder and decoder interact, but it's not necessary for this - # tutorial since we can inherit the default implementation provided by - # the FairseqEncoderDecoderModel base class, which looks like: - # - # def forward(self, src_tokens, src_lengths, prev_output_tokens): - # encoder_out = self.encoder(src_tokens, src_lengths) - # decoder_out = self.decoder(prev_output_tokens, encoder_out) - # return decoder_out - -Finally let's define a *named architecture* with the configuration for our -model. 
This is done with the :func:`~fairseq.models.register_model_architecture` -function decorator. Thereafter this named architecture can be used with the -``--arch`` command-line argument, e.g., ``--arch tutorial_simple_lstm``:: - - from fairseq.models import register_model_architecture - - # The first argument to ``register_model_architecture()`` should be the name - # of the model we registered above (i.e., 'simple_lstm'). The function we - # register here should take a single argument *args* and modify it in-place - # to match the desired architecture. - - @register_model_architecture('simple_lstm', 'tutorial_simple_lstm') - def tutorial_simple_lstm(args): - # We use ``getattr()`` to prioritize arguments that are explicitly given - # on the command-line, so that the defaults defined below are only used - # when no other value has been specified. - args.encoder_embed_dim = getattr(args, 'encoder_embed_dim', 256) - args.encoder_hidden_dim = getattr(args, 'encoder_hidden_dim', 256) - args.decoder_embed_dim = getattr(args, 'decoder_embed_dim', 256) - args.decoder_hidden_dim = getattr(args, 'decoder_hidden_dim', 256) - - -3. Training the Model ---------------------- - -Now we're ready to train the model. We can use the existing :ref:`fairseq-train` -command-line tool for this, making sure to specify our new Model architecture -(``--arch tutorial_simple_lstm``). - -.. note:: - - Make sure you've already preprocessed the data from the IWSLT example in the - :file:`examples/translation/` directory. - -.. code-block:: console - - > fairseq-train data-bin/iwslt14.tokenized.de-en \ - --arch tutorial_simple_lstm \ - --encoder-dropout 0.2 --decoder-dropout 0.2 \ - --optimizer adam --lr 0.005 --lr-shrink 0.5 \ - --max-tokens 12000 - (...) - | epoch 052 | loss 4.027 | ppl 16.30 | wps 420805 | ups 39.7 | wpb 9841 | bsz 400 | num_updates 20852 | lr 1.95313e-05 | gnorm 0.218 | clip 0% | oom 0 | wall 529 | train_wall 396 - | epoch 052 | valid on 'valid' subset | valid_loss 4.74989 | valid_ppl 26.91 | num_updates 20852 | best 4.74954 - -The model files should appear in the :file:`checkpoints/` directory. While this -model architecture is not very good, we can use the :ref:`fairseq-generate` script to -generate translations and compute our BLEU score over the test set: - -.. code-block:: console - - > fairseq-generate data-bin/iwslt14.tokenized.de-en \ - --path checkpoints/checkpoint_best.pt \ - --beam 5 \ - --remove-bpe - (...) - | Translated 6750 sentences (153132 tokens) in 17.3s (389.12 sentences/s, 8827.68 tokens/s) - | Generate test with beam=5: BLEU4 = 8.18, 38.8/12.1/4.7/2.0 (BP=1.000, ratio=1.066, syslen=139865, reflen=131146) - - -4. Making generation faster ---------------------------- - -While autoregressive generation from sequence-to-sequence models is inherently -slow, our implementation above is especially slow because it recomputes the -entire sequence of Decoder hidden states for every output token (i.e., it is -``O(n^2)``). We can make this significantly faster by instead caching the -previous hidden states. - -In fairseq this is called :ref:`Incremental decoding`. Incremental decoding is a -special mode at inference time where the Model only receives a single timestep -of input corresponding to the immediately previous output token (for teacher -forcing) and must produce the next output incrementally. Thus the model must -cache any long-term state that is needed about the sequence, e.g., hidden -states, convolutional states, etc. 
- -To implement incremental decoding we will modify our model to implement the -:class:`~fairseq.models.FairseqIncrementalDecoder` interface. Compared to the -standard :class:`~fairseq.models.FairseqDecoder` interface, the incremental -decoder interface allows ``forward()`` methods to take an extra keyword argument -(*incremental_state*) that can be used to cache state across time-steps. - -Let's replace our ``SimpleLSTMDecoder`` with an incremental one:: - - import torch - from fairseq.models import FairseqIncrementalDecoder - - class SimpleLSTMDecoder(FairseqIncrementalDecoder): - - def __init__( - self, dictionary, encoder_hidden_dim=128, embed_dim=128, hidden_dim=128, - dropout=0.1, - ): - # This remains the same as before. - super().__init__(dictionary) - self.embed_tokens = nn.Embedding( - num_embeddings=len(dictionary), - embedding_dim=embed_dim, - padding_idx=dictionary.pad(), - ) - self.dropout = nn.Dropout(p=dropout) - self.lstm = nn.LSTM( - input_size=encoder_hidden_dim + embed_dim, - hidden_size=hidden_dim, - num_layers=1, - bidirectional=False, - ) - self.output_projection = nn.Linear(hidden_dim, len(dictionary)) - - # We now take an additional kwarg (*incremental_state*) for caching the - # previous hidden and cell states. - def forward(self, prev_output_tokens, encoder_out, incremental_state=None): - if incremental_state is not None: - # If the *incremental_state* argument is not ``None`` then we are - # in incremental inference mode. While *prev_output_tokens* will - # still contain the entire decoded prefix, we will only use the - # last step and assume that the rest of the state is cached. - prev_output_tokens = prev_output_tokens[:, -1:] - - # This remains the same as before. - bsz, tgt_len = prev_output_tokens.size() - final_encoder_hidden = encoder_out['final_hidden'] - x = self.embed_tokens(prev_output_tokens) - x = self.dropout(x) - x = torch.cat( - [x, final_encoder_hidden.unsqueeze(1).expand(bsz, tgt_len, -1)], - dim=2, - ) - - # We will now check the cache and load the cached previous hidden and - # cell states, if they exist, otherwise we will initialize them to - # zeros (as before). We will use the ``utils.get_incremental_state()`` - # and ``utils.set_incremental_state()`` helpers. - initial_state = utils.get_incremental_state( - self, incremental_state, 'prev_state', - ) - if initial_state is None: - # first time initialization, same as the original version - initial_state = ( - final_encoder_hidden.unsqueeze(0), # hidden - torch.zeros_like(final_encoder_hidden).unsqueeze(0), # cell - ) - - # Run one step of our LSTM. - output, latest_state = self.lstm(x.transpose(0, 1), initial_state) - - # Update the cache with the latest hidden and cell states. - utils.set_incremental_state( - self, incremental_state, 'prev_state', latest_state, - ) - - # This remains the same as before - x = output.transpose(0, 1) - x = self.output_projection(x) - return x, None - - # The ``FairseqIncrementalDecoder`` interface also requires implementing a - # ``reorder_incremental_state()`` method, which is used during beam search - # to select and reorder the incremental state. - def reorder_incremental_state(self, incremental_state, new_order): - # Load the cached state. - prev_state = utils.get_incremental_state( - self, incremental_state, 'prev_state', - ) - - # Reorder batches according to *new_order*. - reordered_state = ( - prev_state[0].index_select(1, new_order), # hidden - prev_state[1].index_select(1, new_order), # cell - ) - - # Update the cached state. 
- utils.set_incremental_state( - self, incremental_state, 'prev_state', reordered_state, - ) - -Finally, we can rerun generation and observe the speedup: - -.. code-block:: console - - # Before - - > fairseq-generate data-bin/iwslt14.tokenized.de-en \ - --path checkpoints/checkpoint_best.pt \ - --beam 5 \ - --remove-bpe - (...) - | Translated 6750 sentences (153132 tokens) in 17.3s (389.12 sentences/s, 8827.68 tokens/s) - | Generate test with beam=5: BLEU4 = 8.18, 38.8/12.1/4.7/2.0 (BP=1.000, ratio=1.066, syslen=139865, reflen=131146) - - # After - - > fairseq-generate data-bin/iwslt14.tokenized.de-en \ - --path checkpoints/checkpoint_best.pt \ - --beam 5 \ - --remove-bpe - (...) - | Translated 6750 sentences (153132 tokens) in 5.5s (1225.54 sentences/s, 27802.94 tokens/s) - | Generate test with beam=5: BLEU4 = 8.18, 38.8/12.1/4.7/2.0 (BP=1.000, ratio=1.066, syslen=139865, reflen=131146) diff --git a/kosmos-g/fairseq/examples/.gitignore b/kosmos-g/fairseq/examples/.gitignore deleted file mode 100644 index 1ef816f2c..000000000 --- a/kosmos-g/fairseq/examples/.gitignore +++ /dev/null @@ -1,2 +0,0 @@ -!*/*.sh -!*/*.md diff --git a/kosmos-g/fairseq/examples/MMPT/.gitignore b/kosmos-g/fairseq/examples/MMPT/.gitignore deleted file mode 100644 index 70a255dc9..000000000 --- a/kosmos-g/fairseq/examples/MMPT/.gitignore +++ /dev/null @@ -1,139 +0,0 @@ -# Byte-compiled / optimized / DLL files -__pycache__/ -*.py[cod] -*$py.class - -# C extensions -*.so - -# Distribution / packaging -.Python -build/ -develop-eggs/ -dist/ -downloads/ -eggs/ -.eggs/ -lib/ -lib64/ -parts/ -sdist/ -var/ -wheels/ -pip-wheel-metadata/ -share/python-wheels/ -*.egg-info/ -.installed.cfg -*.egg -MANIFEST - -# PyInstaller -# Usually these files are written by a python script from a template -# before PyInstaller builds the exe, so as to inject date/other infos into it. -*.manifest -*.spec - -# Installer logs -pip-log.txt -pip-delete-this-directory.txt - -# Unit test / coverage reports -htmlcov/ -.tox/ -.nox/ -.coverage -.coverage.* -.cache -nosetests.xml -coverage.xml -*.cover -*.py,cover -.hypothesis/ -.pytest_cache/ - -# Translations -*.mo -*.pot - -# Django stuff: -*.log -local_settings.py -db.sqlite3 -db.sqlite3-journal - -# Flask stuff: -instance/ -.webassets-cache - -# Scrapy stuff: -.scrapy - -# Sphinx documentation -docs/_build/ - -# PyBuilder -target/ - -# Jupyter Notebook -.ipynb_checkpoints - -# IPython -profile_default/ -ipython_config.py - -# pyenv -.python-version - -# pipenv -# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. -# However, in case of collaboration, if having platform-specific dependencies or dependencies -# having no cross-platform support, pipenv may install dependencies that don't work, or not -# install all needed dependencies. -#Pipfile.lock - -# PEP 582; used by e.g. 
github.com/David-OConnor/pyflow -__pypackages__/ - -# Celery stuff -celerybeat-schedule -celerybeat.pid - -# SageMath parsed files -*.sage.py - -# Environments -.env -.venv -env/ -venv/ -ENV/ -env.bak/ -venv.bak/ - -# Spyder project settings -.spyderproject -.spyproject - -# Rope project settings -.ropeproject - -# mkdocs documentation -/site - -# mypy -.mypy_cache/ -.dmypy.json -dmypy.json - -# Pyre type checker -.pyre/ -runs -data -pretrained_models -projects/mmfusion_* -log_test -third-party -python_log -slurm_snapshot_code -lightning_logs -demos diff --git a/kosmos-g/fairseq/examples/MMPT/CONFIG.md b/kosmos-g/fairseq/examples/MMPT/CONFIG.md deleted file mode 100644 index bbd1403df..000000000 --- a/kosmos-g/fairseq/examples/MMPT/CONFIG.md +++ /dev/null @@ -1,41 +0,0 @@ -### Config Files Explained - -Taking `projects/mfmmlm.yaml` for example, which runs pretraining using masked frame model (MFM) and masked language model (MLM) on a single BERT: - -```yaml -project_dir: mfmmlm # specify the project dir for this baseline. -run_task: - - how2.yaml # run pretraining on how2 when launching `projects/taskmfmmlm.yaml` - - [vtt.yaml, vttcap.yaml, vttqa.yaml, youcook.yaml, youcookcap.yaml, crosstask.yaml, coin.yaml] # run fine-tuning tasks. -base_dir: task # a global template folder to specify each training task. -task_group: - pretrain: # section for pretraining. Most baselines differ in this section. - task_list: - - how2.yaml # reconfig `projects/task/how2.yaml` - dataset: - aligner: MFMMLMAligner # overwrite the aligner for the MFMMLM training task. - model: - model_cls: MMFusionMFMMLM # overwrite the model, which constructs negative examples for MFM on-the-fly. - loss: - loss_cls: MFMMLM # overwrite the loss as MFMMLM, which combines MFM and MLM together. - fairseq: # all fairseq args can be specified under this name. - dataset: - batch_size: 128 - finetune: # section for fine-tuning tasks; we mostly don't need to change anything here, since we want to see how pretraining contributes to finetuning. - task_list: # specify the list of downstream tasks, e.g., copy `projects/task/vtt.yaml` to `projects/mfmmlm`. - - vtt.yaml - - vttqa.yaml - - youcook.yaml - - youcookcap.yaml - - crosstask.yaml - - coin.yaml - test: # section for testing. - task_list: - - test_vtt.yaml - - test_vttqa.yaml - - test_youcook.yaml - - test_youcookcap.yaml - - test_crosstask.yaml - - test_crosstask_zs.yaml - - test_coin.yaml -``` diff --git a/kosmos-g/fairseq/examples/MMPT/DATASET.md b/kosmos-g/fairseq/examples/MMPT/DATASET.md deleted file mode 100644 index 930403eb3..000000000 --- a/kosmos-g/fairseq/examples/MMPT/DATASET.md +++ /dev/null @@ -1,34 +0,0 @@ -# Dataset - -We understand video data are challenging to download and process. For videos, we provide our preprocessing scripts under `scripts/video_feature_extractor` (deeply adapted from `https://github.com/antoine77340/video_feature_extractor`); for text, we provide pre-tokenizing scripts under `scripts/text_token_extractor`. - -### S3D Feature Extraction -We use pre-trained [S3D](https://github.com/antoine77340/S3D_HowTo100M) for video feature extraction. Please place the models as `pretrained_models/s3d_dict.npy` and `pretrained_models/s3d_howto100m.pth`. - -We implement a `PathBuilder` to automatically track video ids and map source video paths to their feature locations (you may need `conda install -c anaconda pandas`). Decoding may need `pip install ffmpeg-python`.
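As a rough illustration of the bookkeeping `PathBuilder` performs (the real class lives in the extraction scripts and its exact fields and API may differ), the mapping amounts to a table from video ids to source and feature paths:

```python
import os
import pandas as pd

# Hypothetical paths for illustration only; the actual layout is set by
# the configs under scripts/video_feature_extractor.
video_dir, feature_dir = "data/how2/videos", "data/how2/features"

video_ids = sorted(os.path.splitext(fn)[0] for fn in os.listdir(video_dir))
table = pd.DataFrame({
    "video_id": video_ids,
    "video_path": [os.path.join(video_dir, v + ".mp4") for v in video_ids],
    "feature_path": [os.path.join(feature_dir, v + ".npy") for v in video_ids],
})
table.to_csv(os.path.join(feature_dir, "path_index.csv"), index=False)
```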
- -### Howto100M -[Howto100M](https://www.di.ens.fr/willow/research/howto100m/) is a large-scale video pre-training dataset. You may download the videos yourself and run our preprocessing scripts. - -Several key differences between our preprocessing and that of existing papers: (1) we use `raw_caption.json` instead of `caption.json` to have pure self-supervision on text (`caption.json` has manual removal of stop words); (2) we remove partially duplicated texts that are originally designed for real-time readability (see `mmpt/processors/dedupprocessor.py`); (3) then we shard video/text features using `ShardedTensor` in `mmpt/utils/shardedtensor.py` for fast loading during training (faster than `h5py`). - -#### Steps -##### video -To extract video features: edit and run `bash scripts/video_feature_extractor/how2/s3d.sh` (consider running this on multiple machines; by default, we store features in fp16 to save space and for faster training). - -Split the available video ids into `data/how2/how2_s3d_train.lst` and `data/how2/how2_s3d_val.lst`. - -Lastly, pack video features into `ShardedTensor` using `python scripts/video_feature_extractor/shard_feature.py`. - -##### text -Clean captions using `python -m mmpt.processors.dedupprocessor`. - -Tokenize the deduplicated captions `data/how2/raw_caption_dedup.pkl` into sharded numpy arrays: -``` -python scripts/text_token_extractor/pretokenization.py scripts/text_token_extractor/configs/bert-base-uncased.yaml -``` - -### Youcook, MSRVTT etc. -We use the versions of Youcook and MSRVTT that come with Howto100M and MILNCE. Please download the data to `data/youcook` and `data/msrvtt` accordingly; you can also check `projects/task/youcook.yaml`, `projects/task/vtt.yaml`, etc. for details. -We extract features for Youcook and MSRVTT similarly to the first step for Howto100M, but we read text from the metadata directly and perform on-the-fly tokenization. - diff --git a/kosmos-g/fairseq/examples/MMPT/README.md b/kosmos-g/fairseq/examples/MMPT/README.md deleted file mode 100644 index 4a84819d9..000000000 --- a/kosmos-g/fairseq/examples/MMPT/README.md +++ /dev/null @@ -1,166 +0,0 @@ -# VideoCLIP and VLM - -You have just found the toolkit for multimodal video understanding! It contains implementations of two recent multi-modal video understanding papers [VideoCLIP](https://arxiv.org/pdf/2109.14084.pdf) (EMNLP, 2021) and [VLM](https://aclanthology.org/2021.findings-acl.370.pdf) (ACL Findings, 2021), along with high-performance toolkits that are typically lacking in existing codebases. The toolkit is designed to contain generic performance-tuned components that can be potentially adapted to other frameworks (we initially use fairseq). - -VideoCLIP is a contrastive learning model for zero-shot transfer to retrieval/classification/sequence labeling style tasks. - - - -VLM is a masked language model style pre-training method using only one encoder with a masked modality model (MMM) for retrieval/generation/sequence labeling style tasks. - - - -### News -[Oct. 2021] Initial release of implementations for the following papers: -[VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding](https://arxiv.org/pdf/2109.14084.pdf) (Xu et al., EMNLP 2021) -[VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding](https://aclanthology.org/2021.findings-acl.370.pdf) (Xu et al., ACL Findings 2021) - - -### Installation -We aim to minimize the dependency of this repo on other packages. -We use fairseq as the main trainer (no models/datasets dependency on fairseq.
We will support other trainers in the future): -``` -git clone https://github.com/pytorch/fairseq -cd fairseq -pip install -e . # also optionally follow fairseq README for apex installation for fp16 training. -export MKL_THREADING_LAYER=GNU # fairseq may need this for numpy. -``` - -Then install this toolkit: -``` -cd examples/MMPT # MMPT can be in any folder, not necessarily under fairseq/examples. -pip install -e . -``` - -The code was developed under Python=3.8.8, Pytorch=1.8, cuda=11.0 with fairseq=1.0.0a0+af0389f, and tested under Python=3.8.8, pytorch=1.9, cuda=11.0, fairseq=1.0.0a0+8e7bc73 during code release. -Most models require `transformers==3.4` for API compatibility: `pip install transformers==3.4`. -In addition, some downstream tasks may need `conda install pandas`. - - -### Usage -#### Download Checkpoints -We use pre-trained [S3D](https://github.com/antoine77340/S3D_HowTo100M) for video feature extraction. Please place the models as `pretrained_models/s3d_dict.npy` and `pretrained_models/s3d_howto100m.pth`. - -Download the VideoCLIP checkpoint `https://dl.fbaipublicfiles.com/MMPT/retri/videoclip/checkpoint_best.pt` to `runs/retri/videoclip` or the VLM checkpoint `https://dl.fbaipublicfiles.com/MMPT/mtm/vlm/checkpoint_best.pt` to `runs/mtm/vlm`. - -#### Demo of Inference -Run `python locallaunch.py projects/retri/videoclip.yaml --dryrun` to get all `.yaml`s for VideoCLIP. - -```python -import torch - -from mmpt.models import MMPTModel - - -model, tokenizer, aligner = MMPTModel.from_pretrained( - "projects/retri/videoclip/how2.yaml") - -model.eval() - - -# B, T, FPS, H, W, C (VideoCLIP is trained on 30 fps of s3d) -video_frames = torch.randn(1, 2, 30, 224, 224, 3) -caps, cmasks = aligner._build_text_seq( - tokenizer("some text", add_special_tokens=False)["input_ids"] -) - -caps, cmasks = caps[None, :], cmasks[None, :] # bsz=1 - -with torch.no_grad(): - output = model(video_frames, caps, cmasks, return_score=True) -print(output["score"]) # dot-product -``` - -#### Data Preparation -See [dataset](DATASET.md) for each dataset. - -#### Global Config for Training Pipeline -We organize a global config file for a training/testing pipeline under projects (see a detailed [explanation](CONFIG.md)). For example, VideoCLIP is in `projects/retri/videoclip.yaml` and VLM is in `projects/mtm/vlm.yaml`. - -We wrap all cmds into `locallaunch.py` and `mmpt_cli/localjob.py`. You can check the concrete cmds with `--dryrun` and then drop it for the actual run. - -First, running `python locallaunch.py projects/retri/videoclip.yaml --dryrun` will generate the configs for pre-training, zero-shot evaluation, fine-tuning and testing of VideoCLIP under `projects/retri/videoclip`. - -Then each (either training or evaluation) process will be configured by a concrete config file (we save all complex arguments into the concrete config file for reproducibility, including fairseq args). For example, to run zero-shot evaluation on youcook: -``` -python locallaunch.py projects/retri/videoclip/test_youcook_zs.yaml --jobtype local_predict # zero-shot evaluation. -python locallaunch.py projects/retri/videoclip/youcook_videoclip.yaml --jobtype local_single --dryrun # fine-tuning: use --dryrun to check cmds and drop it to make an actual run; local_small will run on two gpus (as in the paper). -python locallaunch.py projects/retri/videoclip/test_youcook_videoclip.yaml --jobtype local_predict # testing on the fine-tuned model.
-``` - -Pretraining can be run as: -``` -python locallaunch.py projects/retri/videoclip/how2.yaml --jobtype local_single --dryrun # check then drop dryrun; the paper's runs used local_big with 8 gpus. -``` -You may need to change `--jobtype`; check/extend `LocalJob` in `mmpt_cli/localjob.py` for multi-gpu/multi-node pre-training. - -Detailed instructions for pretraining and fine-tuning can be found in the [pretraining instructions](pretraining.md) and [finetuning instructions](endtask.md). - - -### Development -Several components of this toolkit can be re-used for future research (and also our ongoing research). - -#### Framework Wrapper -We currently only support fairseq, but most components can easily be adapted to other frameworks like huggingface. This repo is a fairseq `--user-dir` with fairseq wrappers. For example, `mmpt/tasks` includes a `FairseqMMTTask`, which manages `mmpt/datasets` with `FairseqDataset`, `mmpt/models` with `FairseqModel`, and `mmpt/losses` with `FairseqCriterion`. - -#### Processors -**Multi**modal research introduces complexity in modality alignment, from different input sources all the way to the losses. Inspired by [MMF](https://github.com/facebookresearch/mmf), this toolkit leverages `mmpt/processors` to handle various needs of data preprocessing and loading, **alleviating** the need for multiple `torch.utils.data.Dataset` subclasses (which can be tricky for ablation studies). -Processors can also be decoupled from `torch.utils.data.Dataset` for offline preprocessing instead of on-the-fly data preprocessing. - -We decompose a `mmpt.MMDataset` into 4 types of processors: `MetaProcessor`, `VideoProcessor`, `TextProcessor` and `Aligner`. They can be configured in the `dataset` field of a config file (e.g., see `projects/task/how2.yaml`). -`MetaProcessor` is used to load the meta data of a dataset, e.g., all video ids of the how2 dataset. -`VideoProcessor` is used to load the video features of a dataset, e.g., S3D features for each second of a video. -`TextProcessor` is used to load the text (feature), e.g., BERT pre-tokenized text clips for the how2 dataset (with `start`s and `end`s of timestamps and `cap` for `token_ids`). -`Aligner` is the core class that prepares the training data for different baselines, e.g., sampling a clip, masking tokens for MLM, etc. - -#### Performance-tuned Components -To speed up pre-training, this toolkit uses sharded features stored in mmaped numpy, backed by `ShardedTensor` in `mmpt/utils/shardedtensor.py` (adopted from the MARGE paper). This reduces the IO load for multi-GPU training without loading all features for a video into memory each time, and `ShardedTensor` ensures features are stored in contiguous disk space for near-random access. This is used for both How2 video features and texts in `mmpt/processors/how2processor.py`.
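To make the idea concrete, here is a minimal sketch of offset-indexed, memory-mapped storage in the spirit of `ShardedTensor` (names and file layout here are illustrative, not the actual `mmpt.utils.shardedtensor` API):

```python
import numpy as np

# Variable-length per-video features are concatenated into one flat array,
# with a cumulative-offsets index for near-random access.
feats = [np.random.rand(n, 512).astype(np.float16) for n in (7, 3, 11)]
offsets = np.cumsum([0] + [f.shape[0] for f in feats])
np.save("shard.npy", np.concatenate(feats, axis=0))
np.save("shard_offsets.npy", offsets)

# At training time, mmap the shard and slice out one video's features
# without reading the whole shard into memory.
flat = np.load("shard.npy", mmap_mode="r")
offsets = np.load("shard_offsets.npy")
vid = 1
feat = flat[offsets[vid]:offsets[vid + 1]]  # only this slice touches disk
```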
- - -### Citation -If this codebase is useful for your work, please cite the following papers: - -```BibTeX -@inproceedings{xu-etal-2021-videoclip, - title = "{VideoCLIP}: Contrastive Pre-training for\\Zero-shot Video-Text Understanding", - author = "Xu, Hu and - Ghosh, Gargi and - Huang, Po-Yao and - Okhonko, Dmytro and - Aghajanyan, Armen and - Metze, Florian and - Zettlemoyer, Luke and - Feichtenhofer, Christoph", - booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)", - month = nov, - year = "2021", - address = "Online", - publisher = "Association for Computational Linguistics", -} - -@inproceedings{xu-etal-2021-vlm, - title = "{VLM}: Task-agnostic Video-Language Model Pre-training for Video Understanding", - author = "Xu, Hu and - Ghosh, Gargi and - Huang, Po-Yao and - Arora, Prahal and - Aminzadeh, Masoumeh and - Feichtenhofer, Christoph and - Metze, Florian and - Zettlemoyer, Luke", - booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", - month = aug, - year = "2021", - address = "Online", - publisher = "Association for Computational Linguistics", - url = "https://aclanthology.org/2021.findings-acl.370", - doi = "10.18653/v1/2021.findings-acl.370", - pages = "4227--4239", -} -``` - -### Bug Reports -This repo is in its initial stage; bug reports are welcome at huxu@fb.com - -### Copyright -The majority of Multimodal Pre-training (MMPT) is licensed under CC-BY-NC; however, portions of the project are available under separate license terms: Evaluation Codes/Models: Howto100M and HuggingFace Transformers are licensed under the Apache2.0 license; COIN and NLG-eval are licensed under the MIT license; CrossTask is licensed under the BSD-3; DiDeMo is licensed under the BSD-2 license. diff --git a/kosmos-g/fairseq/examples/MMPT/endtask.md b/kosmos-g/fairseq/examples/MMPT/endtask.md deleted file mode 100644 index 769095532..000000000 --- a/kosmos-g/fairseq/examples/MMPT/endtask.md +++ /dev/null @@ -1,41 +0,0 @@ -# Zero-shot Transfer and Finetuning - -(If you are new to the ideas of `mmpt.processors`, see [README](README.md) first.) -All finetuning datasets (specifically `processors`) are defined in `mmpt.processors.dsprocessor`. -Given the complexity of different types of finetuning tasks, each task may have its own meta/video/text/aligner processors and `mmpt/evaluators/{Predictor,Metric}`. - -### Tasks - -Currently, we support 5 end datasets: `MSRVTT`, `Youcook`, `COIN`, `Crosstask` and `DiDeMo` with the following tasks: -text-video retrieval: `MSRVTT`, `Youcook`, `DiDeMo`; -video captioning: `Youcook`; -video question answering: `MSRVTT-QA`. - -To add your own dataset, you can specify the corresponding processors and configure them in the `dataset` field of a config file, such as `projects/task/vtt.yaml`. - -### Zero-shot Transfer (no Training) -Zero-shot transfer will run the pre-trained model (e.g., VideoCLIP) directly on testing data. Configs with the pattern `projects/task/*_zs_*.yaml` are dedicated to zero-shot transfer. - -### Fine-tuning - -The training of a downstream task is similar to pretraining, except that you may need to specify `restore_file` in `fairseq.checkpoint` and reset optimizers; see `projects/task/ft.yaml`, which is included by `projects/task/vtt.yaml`. - -We typically do finetuning on 2 gpus (`local_small`). - -### Testing -For each finetuning dataset, you may need to specify a testing config, similar to `projects/task/test_vtt.yaml`.
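As described in the next paragraphs, a testing config wires a predictor together with a metric. For the retrieval-style tasks, the metric typically boils down to recall@k over a text-video similarity matrix; the following is an illustrative sketch only, not the actual `mmpt.evaluators` API:

```python
import numpy as np

# Illustrative recall@k for text->video retrieval (not the real
# mmpt.evaluators.Metric interface). sim[i, j] scores text i against
# video j; the ground-truth video for text i is assumed to be video i.
def recall_at_k(sim: np.ndarray, k: int) -> float:
    ranked = (-sim).argsort(axis=1)  # per text, videos sorted by score
    hits = (ranked[:, :k] == np.arange(len(sim))[:, None]).any(axis=1)
    return float(hits.mean())

sim = np.random.rand(100, 100)  # stand-in for model scores
print({f"R@{k}": recall_at_k(sim, k) for k in (1, 5, 10)})
```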
- -We define `mmpt.evaluators.Predictor` for different types of prediction. For example, `MSRVTT` and `Youcook` are video-retrieval tasks and are expected to use `RetrievalPredictor`. You may need to define your own predictor type and specify it in the `predictor` field of a testing config. - -Each task may also have its own metric for evaluation. This can be created in `mmpt.evaluators.Metric` and specified in the `metric` field of a testing config. - -Launching a test is as simple as launching training: specify the path of a testing config: -```python locallaunch.py projects/mfmmlm/test_vtt.yaml``` -Testing will be launched locally by default since prediction is computationally less expensive. - -### Third-party Libraries -We list the following finetuning tasks that require third-party libraries. - -Youcook captioning: `https://github.com/Maluuba/nlg-eval` - -CrossTask: `https://github.com/DmZhukov/CrossTask`'s `dp` under `third-party/CrossTask` (`python setup.py build_ext --inplace`) diff --git a/kosmos-g/fairseq/examples/MMPT/locallaunch.py b/kosmos-g/fairseq/examples/MMPT/locallaunch.py deleted file mode 100644 index e20fd816f..000000000 --- a/kosmos-g/fairseq/examples/MMPT/locallaunch.py +++ /dev/null @@ -1,148 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import argparse -import os - -from omegaconf import OmegaConf - -from mmpt.utils import recursive_config, overwrite_dir -from mmpt_cli.localjob import LocalJob - - -class JobLauncher(object): - JOB_CONFIG = { - "local": LocalJob, - } - - def __init__(self, yaml_file): - self.yaml_file = yaml_file - job_key = "local" - - if yaml_file.endswith(".yaml"): - config = recursive_config(yaml_file) - if config.task_type is not None: - job_key = config.task_type.split("_")[0] - else: - raise ValueError("unknown extension of job file:", yaml_file) - self.job_key = job_key - - def __call__(self, job_type=None, dryrun=False): - if job_type is not None: - self.job_key = job_type.split("_")[0] - print("[JobLauncher] job_key", self.job_key) - job = JobLauncher.JOB_CONFIG[self.job_key]( - self.yaml_file, job_type=job_type, dryrun=dryrun) - return job.submit() - - -class Pipeline(object): - """a job that loads yaml config.""" - - def __init__(self, fn): - """ - load a yaml config of a job and save generated configs as yaml for each task. - return: a list of files to run as specified by `run_task`. - """ - if fn.endswith(".py"): - # a python command. - self.backend = "python" - self.run_yamls = [fn] - return - - job_config = recursive_config(fn) - if job_config.base_dir is None: # single file job config. - self.run_yamls = [fn] - return - - self.project_dir = os.path.join("projects", job_config.project_dir) - self.run_dir = os.path.join("runs", job_config.project_dir) - - if job_config.run_task is not None: - run_yamls = [] - for stage in job_config.run_task: - # each stage can have multiple tasks running in parallel.
- if OmegaConf.is_list(stage): - stage_yamls = [] - for task_file in stage: - stage_yamls.append( - os.path.join(self.project_dir, task_file)) - run_yamls.append(stage_yamls) - else: - run_yamls.append(os.path.join(self.project_dir, stage)) - self.run_yamls = run_yamls - configs_to_save = self._overwrite_task(job_config) - self._save_configs(configs_to_save) - - def __getitem__(self, idx): - yaml_files = self.run_yamls[idx] - if isinstance(yaml_files, list): - return [JobLauncher(yaml_file) for yaml_file in yaml_files] - return [JobLauncher(yaml_files)] - - def __len__(self): - return len(self.run_yamls) - - def _save_configs(self, configs_to_save: dict): - # save - os.makedirs(self.project_dir, exist_ok=True) - for config_file in configs_to_save: - config = configs_to_save[config_file] - print("saving", config_file) - OmegaConf.save(config=config, f=config_file) - - def _overwrite_task(self, job_config): - configs_to_save = {} - self.base_project_dir = os.path.join("projects", job_config.base_dir) - self.base_run_dir = os.path.join("runs", job_config.base_dir) - - for config_sets in job_config.task_group: - overwrite_config = job_config.task_group[config_sets] - if ( - overwrite_config.task_list is None - or len(overwrite_config.task_list) == 0 - ): - print( - "[warning]", - job_config.task_group, - "has no task_list specified.") - # we don't want this added to a final config. - task_list = overwrite_config.pop("task_list", None) - for config_file in task_list: - config_file_path = os.path.join( - self.base_project_dir, config_file) - config = recursive_config(config_file_path) - # overwrite it. - if overwrite_config: - config = OmegaConf.merge(config, overwrite_config) - overwrite_dir(config, self.run_dir, basedir=self.base_run_dir) - save_file_path = os.path.join(self.project_dir, config_file) - configs_to_save[save_file_path] = config - return configs_to_save - - -def main(args): - job_type = args.jobtype if args.jobtype else None - # parse multiple pipelines. - pipelines = [Pipeline(fn) for fn in args.yamls.split(",")] - - for pipe_id, pipeline in enumerate(pipelines): - if not hasattr(pipeline, "project_dir"): - for job in pipeline[0]: - job(job_type=job_type, dryrun=args.dryrun) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("yamls", type=str) - parser.add_argument( - "--dryrun", - action="store_true", - help="run config and prepare to submit without launch the job.", - ) - parser.add_argument( - "--jobtype", type=str, default="", - help="force to run jobs as specified.") - args = parser.parse_args() - main(args) diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/__init__.py b/kosmos-g/fairseq/examples/MMPT/mmpt/__init__.py deleted file mode 100644 index 6ff86ddd5..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -try: - # fairseq user dir - from .datasets import FairseqMMDataset - from .losses import FairseqCriterion - from .models import FairseqMMModel - from .tasks import FairseqMMTask -except ImportError: - pass diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/datasets/__init__.py b/kosmos-g/fairseq/examples/MMPT/mmpt/datasets/__init__.py deleted file mode 100644 index 2578235e1..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/datasets/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Facebook, Inc. 
and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -from .mmdataset import * - -try: - from .fairseqmmdataset import * -except ImportError: - pass diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/datasets/fairseqmmdataset.py b/kosmos-g/fairseq/examples/MMPT/mmpt/datasets/fairseqmmdataset.py deleted file mode 100644 index 02c49141d..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/datasets/fairseqmmdataset.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -TODO (huxu): fairseq wrapper class for all dataset you defined: mostly MMDataset. -""" - -from collections import OrderedDict - -from torch.utils.data import Dataset -from torch.utils.data.dataloader import default_collate -from fairseq.data import FairseqDataset, data_utils - - -class FairseqMMDataset(FairseqDataset): - """ - A wrapper class for MMDataset for fairseq. - """ - - def __init__(self, mmdataset): - if not isinstance(mmdataset, Dataset): - raise TypeError("mmdataset must be of type `torch.utils.data.dataset`.") - self.mmdataset = mmdataset - - def set_epoch(self, epoch, **unused): - super().set_epoch(epoch) - self.epoch = epoch - - def __getitem__(self, idx): - with data_utils.numpy_seed(43211, self.epoch, idx): - return self.mmdataset[idx] - - def __len__(self): - return len(self.mmdataset) - - def collater(self, samples): - if hasattr(self.mmdataset, "collator"): - return self.mmdataset.collator(samples) - if len(samples) == 0: - return {} - if isinstance(samples[0], dict): - batch = OrderedDict() - for key in samples[0]: - if samples[0][key] is not None: - batch[key] = default_collate([sample[key] for sample in samples]) - return batch - else: - return default_collate(samples) - - def size(self, index): - """dummy implementation: we don't use --max-tokens""" - return 1 - - def num_tokens(self, index): - """dummy implementation: we don't use --max-tokens""" - return 1 diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/datasets/mmdataset.py b/kosmos-g/fairseq/examples/MMPT/mmpt/datasets/mmdataset.py deleted file mode 100644 index 3d07283f9..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/datasets/mmdataset.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from collections import OrderedDict - -from torch.utils.data import Dataset -from torch.utils.data.dataloader import default_collate - -from ..utils import set_seed - - -class MMDataset(Dataset): - """ - A generic multi-modal dataset. - Args: - `meta_processor`: a meta processor, - handling loading meta data and return video_id and text_id. - `video_processor`: a video processor, - handling e.g., decoding, loading .np files. - `text_processor`: a text processor, - handling e.g., tokenization. - `aligner`: combine the video and text feature - as one training example. 
- """ - - def __init__( - self, - meta_processor, - video_processor, - text_processor, - align_processor, - ): - self.split = meta_processor.split - self.meta_processor = meta_processor - self.video_processor = video_processor - self.text_processor = text_processor - self.align_processor = align_processor - - def __len__(self): - return len(self.meta_processor) - - def __getitem__(self, idx): - if self.split == "test": - set_seed(idx) - video_id, text_id = self.meta_processor[idx] - video_feature = self.video_processor(video_id) - text_feature = self.text_processor(text_id) - output = self.align_processor(video_id, video_feature, text_feature) - # TODO (huxu): the following is for debug purpose. - output.update({"idx": idx}) - return output - - def collater(self, samples): - """This collator is deprecated. - set self.collator = MMDataset.collater. - see collator in FairseqMMDataset. - """ - - if len(samples) == 0: - return {} - if isinstance(samples[0], dict): - batch = OrderedDict() - for key in samples[0]: - if samples[0][key] is not None: - batch[key] = default_collate( - [sample[key] for sample in samples]) - # if torch.is_tensor(batch[key]): - # print(key, batch[key].size()) - # else: - # print(key, len(batch[key])) - return batch - else: - return default_collate(samples) - - def print_example(self, output): - print("[one example]", output["video_id"]) - if ( - hasattr(self.align_processor, "subsampling") - and self.align_processor.subsampling is not None - and self.align_processor.subsampling > 1 - ): - for key in output: - if torch.is_tensor(output[key]): - output[key] = output[key][0] - - # search tokenizer to translate ids back. - tokenizer = None - if hasattr(self.text_processor, "tokenizer"): - tokenizer = self.text_processor.tokenizer - elif hasattr(self.align_processor, "tokenizer"): - tokenizer = self.align_processor.tokenizer - if tokenizer is not None: - caps = output["caps"].tolist() - if isinstance(caps[0], list): - caps = caps[0] - print("caps", tokenizer.decode(caps)) - print("caps", tokenizer.convert_ids_to_tokens(caps)) - - for key, value in output.items(): - if torch.is_tensor(value): - if len(value.size()) >= 3: # attention_mask. - print(key, value.size()) - print(key, "first", value[0, :, :]) - print(key, "last", value[-1, :, :]) - else: - print(key, value) - print("[end of one example]") diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/evaluators/__init__.py b/kosmos-g/fairseq/examples/MMPT/mmpt/evaluators/__init__.py deleted file mode 100644 index 2d06b9d79..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/evaluators/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -from .metric import * -from .evaluator import * - - -# experimental. -try: - from .expmetric import * -except ImportError: - pass diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/evaluators/evaluator.py b/kosmos-g/fairseq/examples/MMPT/mmpt/evaluators/evaluator.py deleted file mode 100644 index 94d9c5ec9..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/evaluators/evaluator.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import os -import glob -import numpy as np - -from . import metric as metric_path -from . 
import predictor as predictor_path
-
-
-class Evaluator(object):
-    """
-    Perform evaluation on a single (downstream) task.
-    Make this both offline and online.
-    TODO(huxu): saving evaluation results.
-    """
-
-    def __init__(self, config, eval_dataloader=None):
-        if config.metric is None:
-            raise ValueError("config.metric is", config.metric)
-        metric_cls = getattr(metric_path, config.metric)
-        self.metric = metric_cls(config)
-        if config.predictor is None:
-            raise ValueError("config.predictor is", config.predictor)
-        predictor_cls = getattr(predictor_path, config.predictor)
-        self.predictor = predictor_cls(config)
-        self.eval_dataloader = eval_dataloader
-
-    def __call__(self):
-        try:
-            print(self.predictor.pred_dir)
-            for pred_file in glob.glob(
-                    self.predictor.pred_dir + "/*_merged.npy"):
-                outputs = np.load(pred_file)
-                results = self.metric.compute_metrics(outputs)
-                self.metric.print_computed_metrics(results)
-
-            outputs = np.load(os.path.join(
-                self.predictor.pred_dir, "merged.npy"))
-            results = self.metric.compute_metrics(outputs)
-            return {"results": results, "metric": self.metric}
-        except FileNotFoundError:
-            print("\n[missing]", self.predictor.pred_dir)
-            return {}
-
-    def evaluate(self, model, eval_dataloader=None, output_file="merged"):
-        if eval_dataloader is None:
-            eval_dataloader = self.eval_dataloader
-        outputs = self.predictor.predict_loop(
-            model, eval_dataloader, output_file)
-        results = self.metric.compute_metrics(**outputs)
-        return results
diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/evaluators/metric.py b/kosmos-g/fairseq/examples/MMPT/mmpt/evaluators/metric.py
deleted file mode 100644
index f00178b40..000000000
--- a/kosmos-g/fairseq/examples/MMPT/mmpt/evaluators/metric.py
+++ /dev/null
@@ -1,312 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import json
-
-
-class Metric(object):
-    def __init__(self, config, metric_names):
-        self.metric_names = metric_names
-
-    def best_metric(self, metric):
-        return metric[self.metric_names[0]]
-
-    def save_metrics(self, fn, metrics):
-        with open(fn, "w") as fw:
-            json.dump(metrics, fw)
-
-    def print_computed_metrics(self, metrics):
-        raise NotImplementedError
-
-
-class RetrievalMetric(Metric):
-    """
-    Modified from `howto100m/metrics.py`.
-    History of changes:
-    refactored as a class;
-    added metric_key in __init__.
-    """
-
-    def __init__(self, config, metric_names=["R1", "R5", "R10", "MR"]):
-        super().__init__(config, metric_names)
-        self.error = False  # TODO(huxu): add to config to print errors.
-
-    def compute_metrics(self, outputs, texts, **kwargs):
-        x = outputs
-        sx = np.sort(-x, axis=1)
-        d = np.diag(-x)
-        d = d[:, np.newaxis]
-        ind = sx - d
-        ind = np.where(ind == 0)
-        ind = ind[1]
-        metrics = {}
-        metrics["R1"] = float(np.sum(ind == 0)) / len(ind)
-        metrics["R5"] = float(np.sum(ind < 5)) / len(ind)
-        metrics["R10"] = float(np.sum(ind < 10)) / len(ind)
-        metrics["MR"] = np.median(ind) + 1
-
-        max_idx = np.argmax(outputs, axis=1)
-        if self.error:
-            # print top-20 errors.
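-            # (texts[i], texts[max_idx[i]]) pairs each query with its top-1 retrieved text.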
-            error = []
-            for ex_idx in range(20):
-                error.append((texts[ex_idx], texts[max_idx[ex_idx]]))
-            metrics["error"] = error
-        return metrics
-
-    def print_computed_metrics(self, metrics):
-        r1 = metrics["R1"]
-        r5 = metrics["R5"]
-        r10 = metrics["R10"]
-        mr = metrics["MR"]
-        print(
-            "R@1: {:.4f} - R@5: {:.4f} - R@10: {:.4f} - Median R: {}".format(
-                r1, r5, r10, mr
-            )
-        )
-        if "error" in metrics:
-            print(metrics["error"])
-
-
-class DiDeMoMetric(Metric):
-    """
-    History of changes:
-    python 2.x to python 3.x.
-    reference: https://github.com/LisaAnne/LocalizingMoments/blob/master/utils/eval.py
-    Code to evaluate your results on the DiDeMo dataset.
-    """
-    def __init__(self, config, metric_names=["rank1", "rank5", "miou"]):
-        super().__init__(config, metric_names)
-
-    def compute_metrics(self, outputs, targets, **kwargs):
-        assert len(outputs) == len(targets)
-        rank1, rank5, miou = self._eval_predictions(outputs, targets)
-        metrics = {
-            "rank1": rank1,
-            "rank5": rank5,
-            "miou": miou
-        }
-        return metrics
-
-    def print_computed_metrics(self, metrics):
-        rank1 = metrics["rank1"]
-        rank5 = metrics["rank5"]
-        miou = metrics["miou"]
-        # print("Average rank@1: %f" % rank1)
-        # print("Average rank@5: %f" % rank5)
-        # print("Average iou: %f" % miou)
-
-        print(
-            "Average rank@1: {:.4f} Average rank@5: {:.4f} Average iou: {:.4f}".format(
-                rank1, rank5, miou
-            )
-        )
-
-    def _iou(self, pred, gt):
-        intersection = max(0, min(pred[1], gt[1]) + 1 - max(pred[0], gt[0]))
-        union = max(pred[1], gt[1]) + 1 - min(pred[0], gt[0])
-        return float(intersection) / union
-
-    def _rank(self, pred, gt):
-        return pred.index(tuple(gt)) + 1
-
-    def _eval_predictions(self, segments, data):
-        '''
-        Inputs:
-        segments: for each item in the ground truth data, rank possible video segments given the description and video.
-        In DiDeMo, there are 21 possible moments extracted for each video, so the list of video segments will be of length 21.
-        The first video segment should be the video segment that best corresponds to the text query.
-        There are 4180 sentences in the validation data, so when evaluating a model on the val dataset,
-        segments should be a list of length 4180, and each item in segments should be a list of length 21.
-        data: ground truth data.
-        '''
-        average_ranks = []
-        average_iou = []
-        for s, d in zip(segments, data):
-            pred = s[0]
-            ious = [self._iou(pred, t) for t in d['times']]
-            average_iou.append(np.mean(np.sort(ious)[-3:]))
-            ranks = [self._rank(s, t) for t in d['times'] if tuple(t) in s]  # the `if` guard handles (s, e) pairs absent from the prediction.
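-            # keep the 3 best (lowest) ranks across the multiple annotations, per the reference eval script.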
- average_ranks.append(np.mean(np.sort(ranks)[:3])) - rank1 = np.sum(np.array(average_ranks) <= 1)/float(len(average_ranks)) - rank5 = np.sum(np.array(average_ranks) <= 5)/float(len(average_ranks)) - miou = np.mean(average_iou) - - # print("Average rank@1: %f" % rank1) - # print("Average rank@5: %f" % rank5) - # print("Average iou: %f" % miou) - return rank1, rank5, miou - - -class NLGMetric(Metric): - def __init__( - self, - config, - metric_names=[ - "Bleu_1", "Bleu_2", "Bleu_3", "Bleu_4", - "METEOR", "ROUGE_L", "CIDEr" - ] - ): - super().__init__(config, metric_names) - # please install NLGEval from `https://github.com/Maluuba/nlg-eval` - from nlgeval import NLGEval - self.nlg = NLGEval() - - def compute_metrics(self, outputs, targets, **kwargs): - return self.nlg.compute_metrics( - hyp_list=outputs, ref_list=targets) - - def print_computed_metrics(self, metrics): - Bleu_1 = metrics["Bleu_1"] - Bleu_2 = metrics["Bleu_2"] - Bleu_3 = metrics["Bleu_3"] - Bleu_4 = metrics["Bleu_4"] - METEOR = metrics["METEOR"] - ROUGE_L = metrics["ROUGE_L"] - CIDEr = metrics["CIDEr"] - - print( - "Bleu_1: {:.4f} - Bleu_2: {:.4f} - Bleu_3: {:.4f} - Bleu_4: {:.4f} - METEOR: {:.4f} - ROUGE_L: {:.4f} - CIDEr: {:.4f}".format( - Bleu_1, Bleu_2, Bleu_3, Bleu_4, METEOR, ROUGE_L, CIDEr - ) - ) - - -class QAMetric(Metric): - def __init__( - self, - config, - metric_names=["acc"] - ): - super().__init__(config, metric_names) - - def compute_metrics(self, outputs, targets, **kwargs): - from sklearn.metrics import accuracy_score - return {"acc": accuracy_score(targets, outputs)} - - def print_computed_metrics(self, metrics): - print("acc: {:.4f}".format(metrics["acc"])) - - -class COINActionSegmentationMetric(Metric): - """ - COIN dataset listed 3 repos for Action Segmentation. - Action Sets, NeuralNetwork-Viterbi, TCFPN-ISBA. - The first and second are the same. - https://github.com/alexanderrichard/action-sets/blob/master/eval.py - - Future reference for the third: - `https://github.com/Zephyr-D/TCFPN-ISBA/blob/master/utils/metrics.py` - """ - def __init__(self, config, metric_name=["frame_acc"]): - super().__init__(config, metric_name) - - def compute_metrics(self, outputs, targets): - n_frames = 0 - n_errors = 0 - n_errors = sum(outputs != targets) - n_frames = len(targets) - return {"frame_acc": 1.0 - float(n_errors) / n_frames} - - def print_computed_metrics(self, metrics): - fa = metrics["frame_acc"] - print("frame accuracy:", fa) - - -class CrossTaskMetric(Metric): - def __init__(self, config, metric_names=["recall"]): - super().__init__(config, metric_names) - - def compute_metrics(self, outputs, targets, **kwargs): - """refactored from line 166: - https://github.com/DmZhukov/CrossTask/blob/master/train.py""" - - recalls = self._get_recalls(Y_true=targets, Y_pred=outputs) - results = {} - for task, rec in recalls.items(): - results[str(task)] = rec - - avg_recall = np.mean(list(recalls.values())) - results["recall"] = avg_recall - return results - - def print_computed_metrics(self, metrics): - print('Recall: {0:0.3f}'.format(metrics["recall"])) - for task in metrics: - if task != "recall": - print('Task {0}. 
Recall = {1:0.3f}'.format( - task, metrics[task])) - - def _get_recalls(self, Y_true, Y_pred): - """refactored from - https://github.com/DmZhukov/CrossTask/blob/master/train.py""" - - step_match = {task: 0 for task in Y_true.keys()} - step_total = {task: 0 for task in Y_true.keys()} - for task, ys_true in Y_true.items(): - ys_pred = Y_pred[task] - for vid in set(ys_pred.keys()).intersection(set(ys_true.keys())): - y_true = ys_true[vid] - y_pred = ys_pred[vid] - step_total[task] += (y_true.sum(axis=0) > 0).sum() - step_match[task] += (y_true*y_pred).sum() - recalls = { - task: step_match[task] / n for task, n in step_total.items()} - return recalls - - -class ActionRecognitionMetric(Metric): - def __init__( - self, - config, - metric_names=["acc", "acc_splits", "r1_splits", "r5_splits", "r10_splits"] - ): - super().__init__(config, metric_names) - - def compute_metrics(self, outputs, targets, splits, **kwargs): - all_video_embd = outputs - labels = targets - split1, split2, split3 = splits - accs = [] - r1s = [] - r5s = [] - r10s = [] - for split in range(3): - if split == 0: - s = split1 - elif split == 1: - s = split2 - else: - s = split3 - - X_pred = all_video_embd[np.where(s == 2)[0]] - label_test = labels[np.where(s == 2)[0]] - logits = X_pred - X_pred = np.argmax(X_pred, axis=1) - acc = np.sum(X_pred == label_test) / float(len(X_pred)) - accs.append(acc) - # compute recall. - sorted_pred = (-logits).argsort(axis=-1) - label_test_sp = label_test.reshape(-1, 1) - - r1 = np.mean((sorted_pred[:, :1] == label_test_sp).sum(axis=1), axis=0) - r5 = np.mean((sorted_pred[:, :5] == label_test_sp).sum(axis=1), axis=0) - r10 = np.mean((sorted_pred[:, :10] == label_test_sp).sum(axis=1), axis=0) - r1s.append(r1) - r5s.append(r5) - r10s.append(r10) - - return {"acc": accs[0], "acc_splits": accs, "r1_splits": r1s, "r5_splits": r5s, "r10_splits": r10s} - - def print_computed_metrics(self, metrics): - for split, acc in enumerate(metrics["acc_splits"]): - print("Top 1 accuracy on split {}: {}; r1 {}; r5 {}; r10 {}".format( - split + 1, acc, - metrics["r1_splits"][split], - metrics["r5_splits"][split], - metrics["r10_splits"][split], - ) - ) diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/evaluators/predictor.py b/kosmos-g/fairseq/examples/MMPT/mmpt/evaluators/predictor.py deleted file mode 100644 index 2ffef6ab4..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/evaluators/predictor.py +++ /dev/null @@ -1,595 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import os -import random -import json -import numpy as np -import torch -import pickle -import math - -from tqdm import tqdm - - -class Predictor(object): - """this base class is used to save predictions to disk - (and being called by a evaluator later). - Predictor has minimum support of single gpu prediction. - """ - def __init__(self, config): - self.pred_dir = None # on-the-fly eval does not save the results. 
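-        # config.eval.save_path, when present, switches on saving predictions to disk.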
- if hasattr(config, "eval") and config.eval is not None: - self.pred_dir = config.eval.save_path - os.makedirs(self.pred_dir, exist_ok=True) - - def __call__(self, outputs): - """extract the prediction and save it.""" - raise NotImplementedError - - def predict_loop(self, model, eval_dataloader, output_file=None): - """on-the-fly prediction on a single gpu.""" - self.full_scores = [] - model.eval() - model = model.to(0) - with torch.no_grad(): - for data in eval_dataloader: - data = self.to_ctx(data) - outputs = model(**data) - outputs.update(data) - self(outputs) - return self.finalize(output_file) - - def finalize(self, output_file): - pass - - def to_ctx(self, data, ctx=0, dtype=None): - if isinstance(data, dict): - for key in data: - if torch.is_tensor(data[key]): - if dtype is not None and data[key].dtype == torch.float32: - data[key] = data[key].to(dtype) - data[key] = data[key].to(ctx) - return data - else: - raise ValueError("non-dict type of batch is not supported yet.") - - -class NLGPredictor(Predictor): - """Predicting Text from MMFusion models.""" - """TODO: make a context.""" - def __init__(self, config): - super().__init__(config) - from transformers import AutoTokenizer - - self.tokenizer = AutoTokenizer.from_pretrained( - config.dataset.bert_name, - bos_token="[CLS]", eos_token="[SEP]") - self.bos_token_id = self.tokenizer.bos_token_id - self.eos_token_id = self.tokenizer.eos_token_id - - def predict_loop(self, model, eval_dataloader, output_file=None): - """TODO: refactor base classes.""" - ctx = 0 - outputs = {"outputs": [], "targets": [[]]} - model.eval() - model = model.to(ctx) - with torch.no_grad(): - for data in tqdm(eval_dataloader): - data = self.to_ctx(data, ctx) - self(data, model, outputs) - return self.finalize(outputs, output_file) - - def __call__(self, data, model, outputs): - data.update({ - "bos_token_id": self.bos_token_id, - "eos_token_id": self.eos_token_id - }) - - output = model.generate(**data) - assert len(output) == len(data["ref"]) - for idx, _output in enumerate(output): - generated_text = self.tokenizer.decode( - _output, skip_special_tokens=True) - if generated_text == "": - generated_text = "none" - outputs["outputs"].append(generated_text) - outputs["targets"][0].append(data["ref"][idx]) - if random.random() < 0.001: - print("_output", _output) - print("generated_text", generated_text) - print("ref", data["ref"][idx]) - - def finalize(self, outputs, output_file=None): - if output_file is not None: - with open(os.path.join( - self.pred_dir, output_file + ".json"), "w") as fw: - json.dump(outputs, fw, indent=4) - return outputs - - -class RetrievalPredictor(Predictor): - """generated `pooled_video` and `pooled_text`.""" - def __init__(self, config): - super().__init__(config) - from transformers import AutoTokenizer - self.tokenizer = AutoTokenizer.from_pretrained( - config.dataset.bert_name) - - def predict_loop( - self, - model, - eval_dataloader, - output_file="retrieval.npy" - ): - """on-the-fly prediction on a single gpu.""" - full_scores = [] - texts = [] - model.eval() - model = model.cuda() - with torch.no_grad(): - for data in eval_dataloader: - # convert to dict. 
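-                # some loaders yield a tuple (caps, cmasks, vfeats, vmasks, video_id); normalize it.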
- if not isinstance(data, dict): - data = { - "caps": data[0], - "cmasks": data[1], - "vfeats": data[2], - "vmasks": data[3], - "video_id": data[4] - } - data = self.to_ctx(data) - outputs = model(**data) - outputs.update(data) - self(outputs, full_scores) - for _cap in data["caps"]: - texts.append( - self.tokenizer.decode(_cap, skip_special_tokens=True) - ) - - return self.finalize(full_scores, texts, output_file) - - def __call__(self, sample, full_scores): - scores = self._get_pooled_outputs(sample) - self._append_scores(scores, full_scores) - - def finalize(self, full_scores, texts, output_file=None): - outputs = self._aggregate_scores(full_scores) - if output_file is not None: - np.save(os.path.join(self.pred_dir, output_file + ".npy"), outputs) - return {"outputs": outputs, "texts": texts} - - def _get_pooled_outputs(self, outputs): - if "pooled_video" in outputs: - return outputs["pooled_video"], outputs["pooled_text"] - else: - raise ValueError("unknown format of outputs.") - - def _append_scores(self, scores, full_scores): - assert len(scores) == 2 - if len(full_scores) == 0: - full_scores.append([]) - full_scores.append([]) - full_scores[0].append(scores[0].cpu().detach().numpy()) - full_scores[1].append(scores[1].cpu().detach().numpy()) - - def _aggregate_scores(self, scores): - assert len(scores) == 2 - video_hidden = np.concatenate(scores[0], axis=0) - text_hidden = np.concatenate(scores[1], axis=0) - # clear up. - self.full_scores = [] - return np.matmul(text_hidden, video_hidden.T) - - -class QAPredictor(Predictor): - """generated `pooled_video` and `pooled_text`.""" - def __init__(self, config): - super().__init__(config) - """predictor maintains scores and aggregate them.""" - - def predict_loop(self, model, eval_dataloader, output_file="qa.npy"): - """on-the-fly prediction on a single gpu.""" - self.full_scores = [] - model.eval() - model = model.cuda() - with torch.no_grad(): - for data in eval_dataloader: - # reshape ans and dup video 5 times. - v_len = data["vfeats"].size(1) - hidden_size = data["vfeats"].size(2) - data["vfeats"] = data["vfeats"].unsqueeze(1).repeat(1, 5, 1, 1).view(-1, v_len, hidden_size) - data["vmasks"] = data["vmasks"].unsqueeze(1).repeat(1, 5, 1).view(-1, v_len) - - t_len = data["caps"].size(-1) - data["caps"] = data["caps"].view(-1, t_len) - data["cmasks"] = data["cmasks"].view(-1, t_len) - - data = self.to_ctx(data) - outputs = model(**data) - outputs.update(data) - self(outputs) - return self.finalize(output_file) - - def __call__(self, sample): - hidden_size = sample["pooled_video"].size(-1) - pooled_video = sample["pooled_video"].view(-1, 5, hidden_size) - pooled_text = sample["pooled_text"].view(-1, 5, hidden_size) - scores = torch.bmm(pooled_video, pooled_text.transpose(2, 1)) - scores = scores.argmax(-1) - self._append_scores(scores[:, 0], sample["answers"], self.full_scores) - - def finalize(self, output_file=None): - outputs, targets = self._aggregate_scores(self.full_scores) - if output_file is not None: - np.save(os.path.join(self.pred_dir, output_file + ".npy"), outputs) - return {"outputs": outputs, "targets": targets} - - def _append_scores(self, scores, answers, full_scores): - if len(full_scores) == 0: - full_scores.append([]) - full_scores.append([]) - full_scores[0].append(scores.cpu().detach().numpy()) - full_scores[1].append(answers.cpu().detach().numpy()) - - def _aggregate_scores(self, scores): - assert len(scores) == 2 - outputs = np.concatenate(scores[0], axis=0) - targets = np.concatenate(scores[1], axis=0) - # clear up. 
- self.full_scores = [] - return outputs, targets - - -class CrossTaskPredictor(Predictor): - """ - CrossTaskPredictor needs to compute the average of logits - for overlapped sliding-window. - """ - def __init__(self, config): - super().__init__(config) - self.lsm = torch.nn.LogSoftmax(dim=1) - self.max_video_len = config.dataset.max_video_len - self.sliding_window = config.dataset.sliding_window - self.sliding_window_size = config.dataset.sliding_window_size - self.annotation_path = config.dataset.annotation_path - - def predict_loop(self, model, eval_dataloader, output_file="result.pkl"): - """refactored from line 144: - https://github.com/DmZhukov/CrossTask/blob/master/train.py - """ - ctx = 0 - model.eval() - model = model.to(ctx) - # this is not a loss but just compute neg_log_prob. - Y_pred = {} - Y_true = {} - with torch.no_grad(): - for batch in eval_dataloader: - self(batch, model, Y_pred, Y_true) - return self.finalize(Y_pred, Y_true, output_file) - - def __call__(self, sample, model, Y_pred, Y_true): - # please install dp from `https://github.com/DmZhukov/CrossTask` - from dp import dp - vid, task = sample['video_id'][0], sample['task'][0] - sample = self.to_ctx(sample) - # compute the average logits over sliding windows. - output = model(**sample) - batch_logits = output["logits"].cpu() - - video_len = sample["video_len"][0] - - # the following version is slow. - logits = torch.zeros((video_len, batch_logits.size(1))) - logits_counts = torch.zeros((video_len, 1), dtype=torch.long) - # use the same loop as aligner to recover. - batch_logit_idx = 0 - for window_start in range(0, video_len, self.sliding_window): - video_end = min(video_len - window_start, self.sliding_window_size) - logits[window_start: window_start + video_end] += batch_logits[ - batch_logit_idx: batch_logit_idx + video_end] - batch_logit_idx += video_end - logits_counts[window_start: window_start + video_end] += torch.ones((video_end, 1), dtype=torch.long) - - if (video_len - window_start) <= self.sliding_window_size: - break - - logits /= logits_counts - assert logits.size() == (video_len, batch_logits.size(1)), "{}, {}".format(logits.size(), video_len) - - O = self.lsm(logits) - y = np.zeros(O.size(), dtype=np.float32) - dp(y, -O.detach().cpu().numpy()) - if task not in Y_pred: - Y_pred[task] = {} - Y_pred[task][vid] = y - annot_path = os.path.join( - self.annotation_path, task+'_'+vid+'.csv') - if os.path.exists(annot_path): - if task not in Y_true: - Y_true[task] = {} - Y_true[task][vid] = self._read_assignment( - *y.shape, annot_path) - - def finalize(self, Y_pred, Y_true, output_file=None): - if output_file is not None: - with open( - os.path.join(self.pred_dir, output_file + ".pkl"), - "wb") as fw: - pickle.dump( - {"Y_pred": Y_pred, "Y_true": Y_true}, fw, - protocol=pickle.HIGHEST_PROTOCOL) - return {"outputs": Y_pred, "targets": Y_true} - - def _read_assignment(self, T, K, path): - """ - refactored from https://github.com/DmZhukov/CrossTask/blob/master/data.py - Howto interpret contraints on loss that is going to be minimized: - lambd is a big number; - self.lambd * C is a big number for all valid position (csv stores invalids) - - def forward(self, O, Y, C): - return (Y*(self.lambd * C - self.lsm(O))).mean(dim=0).sum() - - This will load the csv file and fill-in the step col from start to end rows. 
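-        e.g., a line "3,12.0,45.8" sets Y[12:46, 2] = 1 (step ids in the csv are 1-based).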
- """ - - Y = np.zeros([T, K], dtype=np.uint8) - with open(path, 'r') as f: - for line in f: - step, start, end = line.strip().split(',') - start = int(math.floor(float(start))) - end = int(math.ceil(float(end))) - step = int(step) - 1 - Y[start:end, step] = 1 - return Y - - -class COINPredictor(Predictor): - """ - COINPredictor is similar to CrossTask on sliding windows. - """ - def __init__(self, config): - super().__init__(config) - self.max_video_len = config.dataset.max_video_len - self.sliding_window = config.dataset.sliding_window - self.sliding_window_size = config.dataset.sliding_window_size - - def predict_loop(self, model, eval_dataloader, output_file="result.pkl"): - """refactored from line 144: - https://github.com/DmZhukov/CrossTask/blob/master/train.py - """ - ctx = 0 - model.eval() - model = model.to(ctx) - # this is not a loss but just compute neg_log_prob. - Y_pred = [] - Y_true = [] - with torch.no_grad(): - for batch in eval_dataloader: - self(batch, model, Y_pred, Y_true) - return self.finalize(Y_pred, Y_true, output_file) - - def __call__(self, sample, model, Y_pred, Y_true): - sample = self.to_ctx(sample) - # compute the average logits over sliding windows. - output = model(**sample) - logits = self._merge_windows(sample, output) - Y_pred.append(logits.argmax(dim=1)) - Y_true.append(sample["video_targets"].squeeze(0).cpu()) - - def _merge_windows(self, sample, output): - targets = sample["targets"].reshape(-1).cpu() - valid_mask = targets != -100 - targets = targets[valid_mask] - batch_logits = output["logits"].cpu() - batch_logits = batch_logits.reshape(-1, batch_logits.size(-1)) - batch_logits = batch_logits[valid_mask] - - video_len = sample["video_len"][0] - - # the following version is slow. - logits = torch.zeros((video_len, batch_logits.size(1))) - logits_counts = torch.zeros((video_len, 1), dtype=torch.long) - # use the same loop as aligner to recover. - batch_logit_idx = 0 - for window_start in range(0, video_len, self.sliding_window): - video_end = min(video_len - window_start, self.sliding_window_size) - logits[window_start: window_start + video_end] += batch_logits[ - batch_logit_idx: batch_logit_idx + video_end] - batch_logit_idx += video_end - logits_counts[window_start: window_start + video_end] += torch.ones((video_end, 1), dtype=torch.long) - if (video_len - window_start) <= self.sliding_window_size: - break - logits /= logits_counts - assert logits.size() == (video_len, batch_logits.size(1)), "{}, {}".format(logits.size(), video_len) - return logits - - def finalize(self, Y_pred, Y_true, output_file=None): - Y_pred = torch.cat(Y_pred, dim=0).numpy() - Y_true = torch.cat(Y_true, dim=0).numpy() - assert len(Y_pred) == len(Y_true) - - error_mask = Y_pred != Y_true - print("sample error", Y_pred[error_mask][:10], Y_true[error_mask][:10]) - print("sample error", Y_pred[error_mask][10:20], Y_true[error_mask][10:20]) - - if output_file is not None: - with open( - os.path.join(self.pred_dir, output_file + ".pkl"), - "wb") as fw: - pickle.dump( - {"Y_pred": Y_pred, "Y_true": Y_true}, fw, - protocol=pickle.HIGHEST_PROTOCOL) - return {"outputs": Y_pred, "targets": Y_true} - - -class COINZSPredictor(COINPredictor): - """ - COINZSPredictor for COIN zero-shot prediction. 
- """ - - def __init__(self, config): - super().__init__(config) - self.dataset_config = config.dataset - - def predict_loop(self, model, eval_dataloader, output_file="result.pkl"): - """refactored from line 144: - https://github.com/DmZhukov/CrossTask/blob/master/train.py - """ - ctx = 0 - model.eval() - model = model.to(ctx) - - with torch.no_grad(): - outputs = eval_dataloader.dataset.meta_processor.meta_text_labels( - self.dataset_config) - outputs = self.to_ctx(outputs, ctx) - label_hidden_states = model.forward_text(**outputs).cpu() - label_sim = label_hidden_states @ label_hidden_states.t() - num_labels = label_sim.size(0) - eye_mask = ~torch.eye(num_labels, dtype=torch.bool) - label_sim = label_sim.masked_select(eye_mask).view(num_labels, num_labels - 1) - lbd = label_sim.max() - - # this is not a loss but just compute neg_log_prob. - Y_pred = [] - Y_true = [] - with torch.no_grad(): - for batch in eval_dataloader: - self(batch, label_hidden_states, model, lbd, Y_pred, Y_true) - return self.finalize(Y_pred, Y_true, output_file) - - def reshape_subsample(self, sample): - for key in sample: - if torch.is_tensor(sample[key]): - sample[key] = self.flat_subsample(sample[key]) - return sample - - def flat_subsample(self, tensor): - if len(tensor.size()) > 1 and tensor.size(0) == 1: - tensor = tensor.squeeze(0) - return tensor - - def __call__(self, sample, label_hidden_states, model, lbd, Y_pred, Y_true): - sample = self.reshape_subsample(sample) - sample = self.to_ctx(sample) - # compute the average logits over sliding windows. - sample["output_hidden_states"] = True - video_outputs = model.forward_video(**sample).cpu() - output = {"logits": video_outputs[:, 1:sample["vmasks"].size(1)+1] @ label_hidden_states.t()} - logits = self._merge_windows(sample, output) - # logic of zero-shot for sequence labeling. - logits_argmax = logits.argmax(dim=1) + 1 # 0 is "O" label. - logits_max = logits.max(dim=1)[0] - - pred = torch.zeros_like(logits_argmax) - label_select = logits_max > lbd # 73 or 74 - pred[label_select] = logits_argmax[label_select] - - Y_pred.append(pred) - Y_true.append(sample["video_targets"].squeeze(0).cpu()) - - def finalize(self, Y_pred, Y_true, output_file=None): - Y_pred = torch.cat(Y_pred, dim=0).numpy() - Y_true = torch.cat(Y_true, dim=0).numpy() - assert len(Y_pred) == len(Y_true) - - error_mask = Y_pred != Y_true - print("sample error", Y_pred[error_mask][:10], Y_true[error_mask][:10]) - print("sample error", Y_pred[error_mask][10:20], Y_true[error_mask][10:20]) - - if output_file is not None: - with open( - os.path.join(self.pred_dir, output_file + ".pkl"), - "wb") as fw: - pickle.dump( - {"Y_pred": Y_pred, "Y_true": Y_true}, fw, - protocol=pickle.HIGHEST_PROTOCOL) - return {"outputs": Y_pred, "targets": Y_true} - - -class DiDeMoPredictor(Predictor): - """reference: https://github.com/LisaAnne/LocalizingMoments/blob/master/utils/eval.py - https://github.com/LisaAnne/LocalizingMoments/blob/master/utils/data_processing.py - """ - def __init__(self, config): - super().__init__(config) - # load targets. - with open(config.dataset.test_path) as data_file: - self.test_data = json.load(data_file) - - def predict_loop(self, model, eval_dataloader, output_file="didemo.npy"): - """ - TODO: two solutions here. - """ - import itertools - # 21 chunks. - self.possible_segments = [(0,0), (1,1), (2,2), (3,3), (4,4), (5,5)] - for i in itertools.combinations(range(6), 2): - self.possible_segments.append(i) - # pick segments from a video. 
- - """on-the-fly prediction on a single gpu.""" - self.full_scores = [] - model.eval() - model = model.cuda() - with torch.no_grad(): - for data in eval_dataloader: - # TODO special forwarding logic here. - data = self.to_ctx(data) - data["output_hidden_states"] = True - hidden_video = model.forward_video(**data) - data["output_hidden_states"] = False - pooled_text = model.forward_text(**data) - outputs = { - "hidden_video": hidden_video, - "pooled_text": pooled_text - } - outputs.update(data) - self(outputs) - return self.finalize(output_file) - - def __call__(self, sample): - # TODO: make an index select from self.possible_segments. - hidden_video = sample["hidden_video"] - pooled_text = sample["pooled_text"] - vmasks = sample["vmasks"] - # probably maintain valid results here. - - hidden_video = hidden_video[:, 1:-1, :] - # probably maintain valid results here. - pooled_video = [] - for s, e in self.possible_segments: - pooled_video.append( - torch.mean( - hidden_video[:, int(s*5):int((e+1)*5), :], - dim=1, keepdim=True) - ) - pooled_video = torch.cat(pooled_video, dim=1) - scores = torch.bmm( - pooled_video, pooled_text.unsqueeze(-1)).squeeze(-1).cpu() - - ranks = scores.argsort(dim=-1, descending=True) - - for batch_idx, rank in enumerate(ranks): - rank_of_moment = [] - for m_idx, moment in enumerate(rank): - s, e = self.possible_segments[moment.item()] - if torch.any( - vmasks[batch_idx, int(s*5):int((e+1)*5)] - ): - rank_of_moment.append((s, e)) - self.full_scores.append(rank_of_moment) - - def finalize(self, output_file=None): - outputs = self._aggregate_scores(self.full_scores) - if output_file is not None: - np.save(os.path.join(self.pred_dir, output_file + ".npy"), outputs) - return {"outputs": outputs, "targets": self.test_data} - - def _aggregate_scores(self, scores): - self.full_scores = [] - return scores diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/losses/__init__.py b/kosmos-g/fairseq/examples/MMPT/mmpt/losses/__init__.py deleted file mode 100644 index 8dc32c96d..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/losses/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -from .loss import * -from .nce import * - -try: - from .fairseqmmloss import * -except ImportError: - pass - -try: - from .expnce import * -except ImportError: - pass diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/losses/fairseqmmloss.py b/kosmos-g/fairseq/examples/MMPT/mmpt/losses/fairseqmmloss.py deleted file mode 100644 index c0415d107..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/losses/fairseqmmloss.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -TODO (huxu): a general fairseq criterion for all your pre-defined losses. -""" - -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq import metrics - - -@register_criterion("mmloss") -class MMCriterion(FairseqCriterion): - def __init__(self, task): - super().__init__(task) - # TODO (huxu): wrap forward call of loss_fn and eval_fn into task. - self.mmtask = task.mmtask - - def forward(self, model, sample): - """Compute the loss for the given sample. 
-        Returns a tuple with three elements:
-        1) the loss
-        2) the sample size, which is used as the denominator for the gradient
-        3) logging outputs to display while training
-        """
-        outputs = self.mmtask(model, sample)
-
-        loss, loss_scalar, max_len, batch_size, sample_size = (
-            outputs["loss"],
-            outputs["loss_scalar"],
-            outputs["max_len"],
-            outputs["batch_size"],
-            outputs["sample_size"],
-        )
-
-        logging_output = {
-            "loss": loss_scalar,
-            "ntokens": max_len * batch_size,  # dummy report.
-            "nsentences": batch_size,  # dummy report.
-            "sample_size": sample_size,
-        }
-
-        return loss, 1, logging_output
-
-    @staticmethod
-    def reduce_metrics(logging_outputs) -> None:
-        """Aggregate logging outputs from data parallel training.
-        Since we use NCE, our actual batch_size is 1 per GPU;
-        we then take the mean over workers."""
-        loss_sum = sum(log.get("loss", 0.0) for log in logging_outputs)
-        sample_size = sum(log.get("sample_size", 0) for log in logging_outputs)
-        metrics.log_scalar("loss", loss_sum / sample_size, round=3)
-
-    @staticmethod
-    def logging_outputs_can_be_summed() -> bool:
-        """
-        Whether the logging outputs returned by `forward` can be summed
-        across workers prior to calling `reduce_metrics`. Setting this
-        to True will improve distributed training speed.
-        """
-        return True
diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/losses/loss.py b/kosmos-g/fairseq/examples/MMPT/mmpt/losses/loss.py
deleted file mode 100644
index 99c05d067..000000000
--- a/kosmos-g/fairseq/examples/MMPT/mmpt/losses/loss.py
+++ /dev/null
@@ -1,87 +0,0 @@
-# Copyright (c) Facebook, Inc. All Rights Reserved
-
-import torch
-
-from torch import nn
-
-
-class Loss(object):
-    def __call__(self, *args, **kwargs):
-        raise NotImplementedError
-
-
-# Dummy Loss for testing.
-class DummyLoss(Loss): - def __init__(self): - self.loss = nn.CrossEntropyLoss() - - def __call__(self, logits, targets, **kwargs): - return self.loss(logits, targets) - - -class DummyK400Loss(Loss): - """dummy k400 loss for MViT.""" - def __init__(self): - self.loss = nn.CrossEntropyLoss() - - def __call__(self, logits, targets, **kwargs): - return self.loss( - logits, torch.randint(0, 400, (logits.size(0),), device=logits.device)) - - -class CrossEntropy(Loss): - def __init__(self): - self.loss = nn.CrossEntropyLoss() - - def __call__(self, logits, targets, **kwargs): - return self.loss(logits.reshape(-1, logits.size(-1)), targets.reshape(-1)) - - -class ArgmaxCrossEntropy(Loss): - def __init__(self): - self.loss = nn.CrossEntropyLoss() - - def __call__(self, logits, targets, **kwargs): - return self.loss(logits, targets.argmax(dim=1)) - - -class BCE(Loss): - def __init__(self): - self.loss = nn.BCEWithLogitsLoss() - - def __call__(self, logits, targets, **kwargs): - targets = targets.squeeze(0) - return self.loss(logits, targets) - - -class NLGLoss(Loss): - def __init__(self): - self.loss = nn.CrossEntropyLoss() - - def __call__(self, logits, text_label, **kwargs): - targets = text_label[text_label != -100] - return self.loss(logits, targets) - - -class MSE(Loss): - def __init__(self): - self.loss = nn.MSELoss() - - def __call__(self, logits, targets, **kwargs): - return self.loss(logits, targets) - - -class L1(Loss): - def __init__(self): - self.loss = nn.L1Loss() - - def __call__(self, logits, targets, **kwargs): - return self.loss(logits, targets) - - -class SmoothL1(Loss): - def __init__(self): - self.loss = nn.SmoothL1Loss() - - def __call__(self, logits, targets, **kwargs): - return self.loss(logits, targets) diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/losses/nce.py b/kosmos-g/fairseq/examples/MMPT/mmpt/losses/nce.py deleted file mode 100644 index ed7be8d37..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/losses/nce.py +++ /dev/null @@ -1,156 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -softmax-based NCE loss, used by this project. -""" - -import torch - -from torch import nn - -from .loss import Loss - - -class NCE(Loss): - def __init__(self): - # TODO (huxu): define temperature. - self.loss = nn.CrossEntropyLoss() - - def __call__(self, align_scores, **kargs): - # note: we reuse the same shape as cls head in BERT (batch_size, 2) - # but NCE only needs one logits. - # (so we drop all weights in the second neg logits.) - align_scores = align_scores[:, :1] - # duplicate negative examples - batch_size = align_scores.size(0) // 2 - pos_scores = align_scores[:batch_size] - neg_scores = align_scores[batch_size:].view(1, batch_size).repeat( - batch_size, 1) - scores = torch.cat([pos_scores, neg_scores], dim=1) - return self.loss( - scores, - torch.zeros( - (batch_size,), - dtype=torch.long, - device=align_scores.device), - ) - - -class T2VContraLoss(Loss): - """NCE for MM joint space, on softmax text2video matrix. - """ - def __init__(self): - # TODO (huxu): define temperature. 
- self.loss = nn.CrossEntropyLoss() - - def __call__(self, pooled_video, pooled_text, **kargs): - batch_size = pooled_video.size(0) - logits = torch.mm(pooled_text, pooled_video.transpose(1, 0)) - targets = torch.arange( - batch_size, - dtype=torch.long, - device=pooled_video.device) - return self.loss(logits, targets) - - -class V2TContraLoss(Loss): - """NCE for MM joint space, with softmax on video2text matrix.""" - - def __init__(self): - # TODO (huxu): define temperature. - self.loss = nn.CrossEntropyLoss() - - def __call__(self, pooled_video, pooled_text, **kargs): - batch_size = pooled_video.size(0) - logits = torch.mm(pooled_video, pooled_text.transpose(1, 0)) - targets = torch.arange( - batch_size, - dtype=torch.long, - device=pooled_video.device) - return self.loss(logits, targets) - - -class MMContraLoss(Loss): - def __init__(self): - self.loss = nn.CrossEntropyLoss() - - def __call__(self, pooled_video, pooled_text, **kwargs): - logits_per_video = pooled_video @ pooled_text.t() - logits_per_text = pooled_text @ pooled_video.t() - - targets = torch.arange( - pooled_video.size(0), - dtype=torch.long, - device=pooled_video.device) - loss_video = self.loss(logits_per_video, targets) - loss_text = self.loss(logits_per_text, targets) - return loss_video + loss_text - - -class MTM(Loss): - """Combination of MFM and MLM.""" - - def __init__(self): - self.loss = nn.CrossEntropyLoss() - - def __call__( - self, - video_logits, - text_logits, - video_label, - text_label, - **kwargs - ): - text_logits = torch.cat([ - text_logits, - torch.zeros( - (text_logits.size(0), 1), device=text_logits.device) - ], dim=1) - vt_logits = torch.cat([video_logits, text_logits], dim=0) - # loss for video. - video_label = torch.zeros( - (video_logits.size(0),), - dtype=torch.long, - device=video_logits.device - ) - - # loss for text. - text_label = text_label.reshape(-1) - labels_mask = text_label != -100 - selected_text_label = text_label[labels_mask] - - vt_label = torch.cat([video_label, selected_text_label], dim=0) - return self.loss(vt_logits, vt_label) - - -class MFMMLM(Loss): - """Combination of MFM and MLM.""" - - def __init__(self): - self.loss = nn.CrossEntropyLoss() - - def __call__( - self, - video_logits, - text_logits, - video_label, - text_label, - **kwargs - ): - # loss for video. - video_label = torch.zeros( - (video_logits.size(0),), - dtype=torch.long, - device=video_logits.device - ) - masked_frame_loss = self.loss(video_logits, video_label) - - # loss for text. - text_label = text_label.reshape(-1) - labels_mask = text_label != -100 - selected_text_label = text_label[labels_mask] - masked_lm_loss = self.loss(text_logits, selected_text_label) - return masked_frame_loss + masked_lm_loss diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/models/__init__.py b/kosmos-g/fairseq/examples/MMPT/mmpt/models/__init__.py deleted file mode 100644 index 825250cd0..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/models/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-from .mmfusion import * -from .transformermodel import * -from .mmfusionnlg import * - -try: - from .fairseqmmmodel import * -except ImportError: - pass - -try: - from .expmmfusion import * -except ImportError: - pass diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/models/fairseqmmmodel.py b/kosmos-g/fairseq/examples/MMPT/mmpt/models/fairseqmmmodel.py deleted file mode 100644 index b7dd64369..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/models/fairseqmmmodel.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.models import ( - BaseFairseqModel, - register_model, - register_model_architecture -) - - -@register_model("mmmodel") -class FairseqMMModel(BaseFairseqModel): - """a fairseq wrapper of model built by `task`.""" - - @classmethod - def build_model(cls, args, task): - return FairseqMMModel(task.mmtask.model) - - def __init__(self, mmmodel): - super().__init__() - self.mmmodel = mmmodel - - def forward(self, *args, **kwargs): - return self.mmmodel(*args, **kwargs) - - def upgrade_state_dict_named(self, state_dict, name): - - super().upgrade_state_dict_named(state_dict, name) - - keys_to_delete = [] - - for key in state_dict: - if key not in self.state_dict(): - keys_to_delete.append(key) - for key in keys_to_delete: - print("[INFO]", key, "not used anymore.") - del state_dict[key] - - # copy any newly defined parameters. - for key in self.state_dict(): - if key not in state_dict: - print("[INFO] adding", key) - state_dict[key] = self.state_dict()[key] - - -# a dummy arch, we config the model. -@register_model_architecture("mmmodel", "mmarch") -def mmarch(args): - pass diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/models/mmfusion.py b/kosmos-g/fairseq/examples/MMPT/mmpt/models/mmfusion.py deleted file mode 100644 index 2509e26b6..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/models/mmfusion.py +++ /dev/null @@ -1,926 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# Copyright (c) Facebook, Inc. All Rights Reserved - - -import torch - -from torch import nn - -try: - from transformers import AutoConfig, AutoTokenizer -except ImportError: - pass - -from . import transformermodel - - -class MMPTModel(nn.Module): - """An e2e wrapper of inference model. - """ - @classmethod - def from_pretrained(cls, config, checkpoint="checkpoint_best.pt"): - import os - from ..utils import recursive_config - from ..tasks import Task - config = recursive_config(config) - mmtask = Task.config_task(config) - checkpoint_path = os.path.join(config.eval.save_path, checkpoint) - mmtask.build_model(checkpoint=checkpoint_path) - # TODO(huxu): make the video encoder configurable. 
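-        # the default video backbone is an S3D pretrained on HowTo100M, emitting 512-d features.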
- from ..processors.models.s3dg import S3D - video_encoder = S3D('pretrained_models/s3d_dict.npy', 512) - video_encoder.load_state_dict( - torch.load('pretrained_models/s3d_howto100m.pth')) - from transformers import AutoTokenizer - tokenizer = AutoTokenizer.from_pretrained( - config.dataset.bert_name, use_fast=config.dataset.use_fast - ) - from ..processors import Aligner - aligner = Aligner(config.dataset) - return ( - MMPTModel(config, mmtask.model, video_encoder), - tokenizer, - aligner - ) - - def __init__(self, config, model, video_encoder, **kwargs): - super().__init__() - self.max_video_len = config.dataset.max_video_len - self.video_encoder = video_encoder - self.model = model - - def forward(self, video_frames, caps, cmasks, return_score=False): - bsz = video_frames.size(0) - assert bsz == 1, "only bsz=1 is supported now." - seq_len = video_frames.size(1) - video_frames = video_frames.view(-1, *video_frames.size()[2:]) - vfeats = self.video_encoder(video_frames.permute(0, 4, 1, 2, 3)) - vfeats = vfeats['video_embedding'] - vfeats = vfeats.view(bsz, seq_len, vfeats.size(-1)) - padding = torch.zeros( - bsz, self.max_video_len - seq_len, vfeats.size(-1)) - vfeats = torch.cat([vfeats, padding], dim=1) - vmasks = torch.cat([ - torch.ones((bsz, seq_len), dtype=torch.bool), - torch.zeros((bsz, self.max_video_len - seq_len), dtype=torch.bool) - ], - dim=1 - ) - output = self.model(caps, cmasks, vfeats, vmasks) - if return_score: - output = {"score": torch.bmm( - output["pooled_video"][:, None, :], - output["pooled_text"][:, :, None] - ).squeeze(-1).squeeze(-1)} - return output - - -class MMFusion(nn.Module): - """a MMPT wrapper class for MMBert style models. - TODO: move isolated mask to a subclass. - """ - def __init__(self, config, **kwargs): - super().__init__() - transformer_config = AutoConfig.from_pretrained( - config.dataset.bert_name) - self.hidden_size = transformer_config.hidden_size - self.is_train = False - if config.dataset.train_path is not None: - self.is_train = True - # 0 means no iso; 1-12 means iso up to that layer. - self.num_hidden_layers = transformer_config.num_hidden_layers - self.last_iso_layer = 0 - if config.dataset.num_iso_layer is not None: - self.last_iso_layer = config.dataset.num_iso_layer - 1 + 1 - - if config.model.mm_encoder_cls is not None: - mm_encoder_cls = getattr(transformermodel, config.model.mm_encoder_cls) - model_config = AutoConfig.from_pretrained(config.dataset.bert_name) - model_config.max_video_len = config.dataset.max_video_len - # TODO: a general way to add parameter for a model. - model_config.use_seg_emb = config.model.use_seg_emb - self.mm_encoder = mm_encoder_cls.from_pretrained( - config.dataset.bert_name, config=model_config) - elif config.model.video_encoder_cls is not None\ - and config.model.text_encoder_cls is not None: - video_encoder_cls = getattr(transformermodel, config.model.video_encoder_cls) - model_config = AutoConfig.from_pretrained(config.dataset.bert_name) - model_config.max_video_len = config.dataset.max_video_len - # TODO: make each model a set of config class. - if hasattr(model_config, "num_layers"): - model_config.num_layers = config.model.num_hidden_video_layers - else: - model_config.num_hidden_layers = config.model.num_hidden_video_layers - self.video_encoder = video_encoder_cls.from_pretrained( - config.dataset.bert_name, config=model_config) - # exact same NLP model from Huggingface. 
- text_encoder_cls = getattr(transformermodel, config.model.text_encoder_cls) - self.text_encoder = text_encoder_cls.from_pretrained( - config.dataset.bert_name) - else: - raise ValueError("the encoder must be either MM or two backbones.") - - def forward( - self, - caps, - cmasks, - vfeats, - vmasks, - **kwargs - ): - raise NotImplementedError( - "Please derive MMFusion module." - ) - - def _mm_on_the_fly( - self, - cmasks, - vmasks, - attention_mask - ): - """helper function for mask, seg_ids and token_type_ids.""" - if attention_mask is None: - attention_mask = self._mm_attention_mask(cmasks, vmasks) - - """ - 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 - | first sequence | second sequence | - """ - token_type_ids = torch.cat( - [ - torch.zeros( - (vmasks.size(0), vmasks.size(1) + 2), - dtype=torch.long, - device=vmasks.device, - ), - torch.ones( - (cmasks.size(0), cmasks.size(1) - 2), - dtype=torch.long, - device=cmasks.device, - ), - ], - dim=1, - ) - return attention_mask, token_type_ids - - def _mm_attention_mask(self, cmasks, vmasks): - assert cmasks.size(0) == vmasks.size(0), "{}, {}, {}, {}".format( - str(cmasks.size()), - str(vmasks.size()), - str(cmasks.size(0)), - str(vmasks.size(0)), - ) - - mm_mask = torch.cat([cmasks[:, :1], vmasks, cmasks[:, 1:]], dim=1) - if self.last_iso_layer == 0: - # hard attention mask. - return mm_mask - else: - # a gpu iso mask; 0 : num_iso_layer is isolated; - # num_iso_layer: are MM-fused. - # make an iso layer - batch_size = cmasks.size(0) - iso_mask = self._make_iso_mask(batch_size, cmasks, vmasks) - mm_mask = mm_mask[:, None, :].repeat(1, mm_mask.size(-1), 1) - iso_mm_masks = [] - # hard attention mask. - iso_mask = iso_mask[:, None, :, :].repeat( - 1, self.last_iso_layer, 1, 1) - iso_mm_masks.append(iso_mask) - if self.last_iso_layer < self.num_hidden_layers: - mm_mask = mm_mask[:, None, :, :].repeat( - 1, self.num_hidden_layers - self.last_iso_layer, 1, 1 - ) - iso_mm_masks.append(mm_mask) - iso_mm_masks = torch.cat(iso_mm_masks, dim=1) - return iso_mm_masks - - def _make_iso_mask(self, batch_size, cmasks, vmasks): - cls_self_mask = torch.cat( - [ - torch.ones( - (batch_size, 1), dtype=torch.bool, device=cmasks.device), - torch.zeros( - (batch_size, cmasks.size(1) + vmasks.size(1) - 1), - dtype=torch.bool, device=cmasks.device) - ], dim=1) - - iso_video_mask = torch.cat( - [ - # [CLS] is not used. - torch.zeros( - (batch_size, 1), dtype=torch.bool, device=cmasks.device - ), - vmasks, - # assume to be 1. - cmasks[:, 1:2], - # 2 means [CLS] + [SEP] - torch.zeros( - (batch_size, cmasks.size(1) - 2), - dtype=torch.bool, - device=cmasks.device, - ), - ], - dim=1, - ) - iso_text_mask = torch.cat( - [ - torch.zeros( - (batch_size, 2 + vmasks.size(1)), - dtype=torch.bool, - device=cmasks.device, - ), # [CLS] is not used. - cmasks[:, 2:], # assume to be 1. - ], - dim=1, - ) - cls_self_mask = cls_self_mask[:, None, :] - iso_video_mask = iso_video_mask[:, None, :].repeat( - 1, vmasks.size(1) + 1, 1) - iso_text_mask = iso_text_mask[:, None, :].repeat( - 1, cmasks.size(1) - 2, 1) - return torch.cat([cls_self_mask, iso_video_mask, iso_text_mask], dim=1) - - def _pooling_vt_layer( - self, - layered_sequence_output, - cmasks, - vmasks - ): - layer_idx = self.last_iso_layer \ - if self.last_iso_layer > 0 else self.num_hidden_layers - hidden_state = layered_sequence_output[layer_idx] - # also output pooled_video and pooled_text. - batch_size = cmasks.size(0) - # pool the modality. 
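- # fused sequence layout: [CLS] video tokens [SEP] text tokens [SEP],
- # so the text span starts right after the video span.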
- text_offset = vmasks.size(1) + 2 # [CLS] + [SEP]
- # video tokens + [SEP]
- video_outputs = hidden_state[:, 1:text_offset]
- video_attention_mask = torch.cat(
- [
- vmasks,
- torch.ones(
- (batch_size, 1), dtype=torch.bool, device=vmasks.device),
- ],
- dim=1,
- )
- assert video_outputs.size(1) == video_attention_mask.size(1)
- pooled_video = torch.sum(
- video_outputs * video_attention_mask.unsqueeze(-1), dim=1
- ) / video_attention_mask.sum(1, keepdim=True)
- # pooled_video = torch.mean(video_outputs[0], dim=1)
-
- # text tokens + [SEP]
- text_attention_mask = cmasks[:, 2:]
- text_outputs = hidden_state[:, text_offset:]
- assert text_outputs.size(1) == text_attention_mask.size(1)
- pooled_text = torch.sum(
- text_outputs * text_attention_mask.unsqueeze(-1), dim=1
- ) / text_attention_mask.sum(1, keepdim=True)
- return pooled_video, pooled_text
-
-
-class MMFusionMFMMLM(MMFusion):
- """Forward function for MFM and MLM."""
- def forward(
- self,
- caps,
- cmasks,
- vfeats,
- vmasks,
- attention_mask=None,
- video_label=None,
- text_label=None,
- **kwargs
- ):
- output_hidden_states = False if self.is_train else True
-
- target_vfeats, non_masked_frame_mask = None, None
- if video_label is not None:
- target_vfeats = vfeats.masked_select(
- video_label.unsqueeze(-1)).view(
- -1, vfeats.size(-1)
- )
- # mask video token.
- vfeats[video_label] = 0.0
- non_masked_frame_mask = vmasks.clone()
- non_masked_frame_mask[video_label] = False
-
- attention_mask, token_type_ids = self._mm_on_the_fly(
- cmasks, vmasks, attention_mask)
-
- outputs = self.mm_encoder(
- input_ids=caps,
- input_video_embeds=vfeats,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- masked_frame_labels=video_label,
- target_video_hidden_states=target_vfeats,
- non_masked_frame_mask=non_masked_frame_mask,
- masked_lm_labels=text_label,
- output_hidden_states=output_hidden_states,
- )
-
- video_logits, text_logits = outputs[0], outputs[1]
-
- if self.is_train: # return early for training.
- return {
- "video_logits": video_logits,
- "text_logits": text_logits,
- }
-
- pooled_video, pooled_text = self._pooling_vt_layer(
- outputs[2], cmasks, vmasks)
- return {"pooled_video": pooled_video, "pooled_text": pooled_text}
-
-
-class MMFusionMTM(MMFusionMFMMLM):
- def __init__(self, config, **kwargs):
- super().__init__(config)
- """
- For reproducibility:
- self.mm_encoder will be initialized then discarded.
- """
- from .transformermodel import MMBertForMTM
- model_config = AutoConfig.from_pretrained(config.dataset.bert_name)
- model_config.max_video_len = config.dataset.max_video_len
- model_config.use_seg_emb = config.model.use_seg_emb
- self.mm_encoder = MMBertForMTM.from_pretrained(
- config.dataset.bert_name, config=model_config)
-
-
-class MMFusionShare(MMFusion):
- """A retrieval wrapper using mm_encoder as both the video and text backbone.
- TODO: move formally.
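- Both towers reuse one mm_encoder: forward_video wraps video tokens with
- the caption's [CLS]/[SEP] pair, while forward_text feeds caption tokens only.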
- """ - def forward( - self, - caps, - cmasks, - vfeats, - vmasks, - attention_mask=None, - video_label=None, - text_label=None, - output_hidden_states=False, - **kwargs - ): - pooled_video = self.forward_video( - vfeats, - vmasks, - caps, - cmasks, - output_hidden_states - ) - - pooled_text = self.forward_text( - caps, - cmasks, - output_hidden_states - ) - - return {"pooled_video": pooled_video, "pooled_text": pooled_text} - - def forward_video( - self, - vfeats, - vmasks, - caps, - cmasks, - output_hidden_states=False, - **kwargs - ): - input_ids = caps[:, :2] - - attention_mask = torch.cat([ - cmasks[:, :1], - vmasks, - cmasks[:, 1:2] - ], dim=1) - - token_type_ids = torch.zeros( - (vmasks.size(0), vmasks.size(1) + 2), - dtype=torch.long, - device=vmasks.device) - - outputs = self.mm_encoder( - input_ids=input_ids, - input_video_embeds=vfeats, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - output_hidden_states=True - ) - video_outputs = outputs[0] - - if output_hidden_states: - return video_outputs - - batch_size = cmasks.size(0) - - video_attention_mask = torch.cat( - [ - torch.zeros( - (batch_size, 1), dtype=torch.bool, device=vmasks.device), - vmasks, - torch.ones( - (batch_size, 1), dtype=torch.bool, device=vmasks.device), - ], - dim=1, - ) - assert video_outputs.size(1) == video_attention_mask.size(1) - - video_attention_mask = video_attention_mask.type(video_outputs.dtype) \ - / video_attention_mask.sum(1, keepdim=True) - - pooled_video = torch.bmm( - video_outputs.transpose(2, 1), - video_attention_mask.unsqueeze(2) - ).squeeze(-1) - return pooled_video # video_outputs - - def forward_text( - self, - caps, - cmasks, - output_hidden_states=False, - **kwargs - ): - input_ids = torch.cat([ - caps[:, :1], caps[:, 2:], - ], dim=1) - - attention_mask = torch.cat([ - cmasks[:, :1], - cmasks[:, 2:] - ], dim=1) - - token_type_ids = torch.cat([ - torch.zeros( - (cmasks.size(0), 1), - dtype=torch.long, - device=cmasks.device), - torch.ones( - (cmasks.size(0), cmasks.size(1) - 2), - dtype=torch.long, - device=cmasks.device) - ], dim=1) - - outputs = self.mm_encoder( - input_ids=input_ids, - input_video_embeds=None, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - output_hidden_states=True - ) - text_outputs = outputs[0] - - if output_hidden_states: - return text_outputs - - batch_size = caps.size(0) - # text tokens + [SEP] - text_attention_mask = torch.cat([ - torch.zeros( - (batch_size, 1), dtype=torch.bool, device=cmasks.device), - cmasks[:, 2:] - ], dim=1) - - assert text_outputs.size(1) == text_attention_mask.size(1) - - text_attention_mask = text_attention_mask.type(text_outputs.dtype) \ - / text_attention_mask.sum(1, keepdim=True) - - pooled_text = torch.bmm( - text_outputs.transpose(2, 1), - text_attention_mask.unsqueeze(2) - ).squeeze(-1) - return pooled_text # text_outputs - - -class MMFusionSeparate(MMFusionShare): - def forward_video( - self, - vfeats, - vmasks, - caps, - cmasks, - output_hidden_states=False, - **kwargs - ): - input_ids = caps[:, :2] - - attention_mask = torch.cat([ - cmasks[:, :1], - vmasks, - cmasks[:, 1:2] - ], dim=1) - - token_type_ids = torch.zeros( - (vmasks.size(0), vmasks.size(1) + 2), - dtype=torch.long, - device=vmasks.device) - - outputs = self.video_encoder( - input_ids=input_ids, - input_video_embeds=vfeats, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - output_hidden_states=True - ) - video_outputs = outputs[0] - - if output_hidden_states: - return video_outputs - - batch_size = 
cmasks.size(0)
-
- video_attention_mask = torch.cat(
- [
- torch.zeros(
- (batch_size, 1), dtype=torch.bool, device=vmasks.device),
- vmasks,
- torch.ones(
- (batch_size, 1), dtype=torch.bool, device=vmasks.device),
- ],
- dim=1,
- )
- assert video_outputs.size(1) == video_attention_mask.size(1)
-
- video_attention_mask = video_attention_mask.type(video_outputs.dtype) \
- / video_attention_mask.sum(1, keepdim=True)
-
- pooled_video = torch.bmm(
- video_outputs.transpose(2, 1),
- video_attention_mask.unsqueeze(2)
- ).squeeze(-1)
- return pooled_video # video_outputs
-
- def forward_text(
- self,
- caps,
- cmasks,
- output_hidden_states=False,
- **kwargs
- ):
- input_ids = torch.cat([
- caps[:, :1], caps[:, 2:],
- ], dim=1)
-
- attention_mask = torch.cat([
- cmasks[:, :1],
- cmasks[:, 2:]
- ], dim=1)
- # different from the sharing model, we use all-zero token types.
- token_type_ids = torch.zeros(
- (cmasks.size(0), cmasks.size(1) - 1),
- dtype=torch.long,
- device=cmasks.device)
-
- outputs = self.text_encoder(
- input_ids=input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- output_hidden_states=True
- )
- text_outputs = outputs[0]
-
- if output_hidden_states:
- return text_outputs
-
- batch_size = caps.size(0)
- # text tokens + [SEP]
- text_attention_mask = torch.cat([
- torch.zeros(
- (batch_size, 1), dtype=torch.bool, device=cmasks.device),
- cmasks[:, 2:]
- ], dim=1)
-
- assert text_outputs.size(1) == text_attention_mask.size(1)
-
- text_attention_mask = text_attention_mask.type(text_outputs.dtype) \
- / text_attention_mask.sum(1, keepdim=True)
-
- pooled_text = torch.bmm(
- text_outputs.transpose(2, 1),
- text_attention_mask.unsqueeze(2)
- ).squeeze(-1)
- return pooled_text # text_outputs
-
-
-class MMFusionJoint(MMFusion):
- """Fine-tuning wrapper for the retrieval task."""
-
- def forward(
- self,
- caps,
- cmasks,
- vfeats,
- vmasks,
- attention_mask=None,
- video_label=None,
- text_label=None,
- **kwargs
- ):
- # TODO (huxu): other ways to do negative examples; move the following
- # into your criterion forward.
- output_hidden_states = True
-
- attention_mask, token_type_ids = self._mm_on_the_fly(
- cmasks, vmasks, attention_mask)
-
- separate_forward_split = (
- None if self.is_train else vmasks.size(1) + 2
- ) # [CLS] + [SEP]
-
- outputs = self.mm_encoder(
- input_ids=caps,
- input_video_embeds=vfeats,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- output_hidden_states=output_hidden_states,
- separate_forward_split=separate_forward_split,
- )
-
- pooled_video, pooled_text = self._pooling_vt_layer(
- outputs[2], cmasks, vmasks)
- return {"pooled_video": pooled_video, "pooled_text": pooled_text}
-
-
-class MMFusionActionSegmentation(MMFusion):
- """Fine-tuning wrapper for action segmentation.
- TODO: rename this for VLM.
- """
- def forward(
- self,
- caps,
- cmasks,
- vfeats,
- vmasks,
- attention_mask=None,
- **kwargs
- ):
- # ActionSegmentation assumes batch_size=1; flatten the clip dimension into the batch.
- caps = caps.view(-1, caps.size(-1))
- cmasks = cmasks.view(-1, cmasks.size(-1))
- vfeats = vfeats.view(-1, vfeats.size(2), vfeats.size(3))
- vmasks = vmasks.view(-1, vmasks.size(-1))
-
- # this may not cover all shapes of attention_mask.
- attention_mask = attention_mask.view(
- -1, attention_mask.size(2), attention_mask.size(3)) \
- if attention_mask is not None else None
-
- # TODO (huxu): other ways to do negative examples; move the following
- # into your criterion forward.
- output_hidden_states = True
-
- # video forwarding, text is dummy; attention_mask is rebuilt on the fly if not given.
- attention_mask, token_type_ids = self._mm_on_the_fly(
- cmasks, vmasks, attention_mask)
-
- logits = self.mm_encoder(
- input_ids=caps,
- input_video_embeds=vfeats,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- output_hidden_states=output_hidden_states,
- )
- return {"logits": logits[0][:, 1:vmasks.size(1)+1]}
-
-
-class MMFusionActionLocalization(MMFusion):
- """Fine-tuning model for the action localization task."""
-
- def __init__(self, config, **kwargs):
- super().__init__(config)
- tokenizer = AutoTokenizer.from_pretrained(
- config.dataset.bert_name)
- self.cls_token_id = tokenizer.cls_token_id
- self.sep_token_id = tokenizer.sep_token_id
- self.pad_token_id = tokenizer.pad_token_id
-
- def forward(
- self,
- caps,
- cmasks,
- vfeats,
- vmasks,
- attention_mask=None,
- **kwargs
- ):
- # ActionLocalization assumes batch_size=1; squeeze it.
- caps = caps.squeeze(0)
- cmasks = cmasks.squeeze(0)
- vfeats = vfeats.squeeze(0)
- vmasks = vmasks.squeeze(0)
- attention_mask = attention_mask.squeeze(0) if attention_mask is not None else None
-
- # TODO (huxu): other ways to do negative examples; move the following
- # into your criterion forward.
- output_hidden_states = True
-
- # a length-1 dummy video token.
- dummy_vfeats = torch.zeros(
- (caps.size(0), 1, vfeats.size(-1)), device=vfeats.device, dtype=vfeats.dtype)
- dummy_vmasks = torch.ones(
- (caps.size(0), 1), dtype=torch.bool,
- device=vfeats.device)
-
- dummy_caps = torch.LongTensor(
- [[self.cls_token_id, self.sep_token_id,
- self.pad_token_id, self.sep_token_id]],
- ).to(caps.device).repeat(vfeats.size(0), 1)
- dummy_cmasks = torch.BoolTensor(
- [[0, 1, 0, 1]] # pads are valid for attention.
- ).to(caps.device).repeat(vfeats.size(0), 1)
-
- # video forwarding, text is dummy; never use attention_mask.
- attention_mask, token_type_ids = self._mm_on_the_fly(
- dummy_cmasks, vmasks, None)
-
- outputs = self.mm_encoder(
- input_ids=dummy_caps,
- input_video_embeds=vfeats,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- output_hidden_states=output_hidden_states,
- )
-
- layer_idx = self.last_iso_layer \
- if self.last_iso_layer > 0 else self.num_hidden_layers
-
- video_seq = outputs[2][layer_idx][:, 1:vmasks.size(1)+1].masked_select(
- vmasks.unsqueeze(-1)
- ).view(-1, self.hidden_size)
-
- # text forwarding, video is dummy.
- attention_mask, token_type_ids = self._mm_on_the_fly(
- cmasks, dummy_vmasks, None)
-
- outputs = self.mm_encoder(
- input_ids=caps,
- input_video_embeds=dummy_vfeats,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- output_hidden_states=output_hidden_states,
- )
-
- _, pooled_text = self._pooling_vt_layer(
- outputs[2], cmasks, dummy_vmasks)
- # this line is not right.
- logits = torch.mm(video_seq, pooled_text.transpose(1, 0))
- return {"logits": logits}
-
-
-# --------------- MMFusionSeparate for end tasks ---------------
-
-class MMFusionSeparateActionSegmentation(MMFusionSeparate):
- """Fine-tuning wrapper for action segmentation."""
- def forward(
- self,
- caps,
- cmasks,
- vfeats,
- vmasks,
- attention_mask=None,
- **kwargs
- ):
- # ActionSegmentation assumes batch_size=1.
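- # flatten the (1, n_clips, ...) inputs into a clip-level batch below.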
- caps = caps.view(-1, caps.size(-1))
- cmasks = cmasks.view(-1, cmasks.size(-1))
- vfeats = vfeats.view(-1, vfeats.size(2), vfeats.size(3))
- vmasks = vmasks.view(-1, vmasks.size(-1))
- logits = self.forward_video(
- vfeats,
- vmasks,
- caps,
- cmasks,
- output_hidden_states=True
- )
- return {"logits": logits[:, 1:vmasks.size(1)+1]}
-
-
-class MMFusionSeparateActionLocalization(MMFusionSeparate):
- def __init__(self, config, **kwargs):
- super().__init__(config)
- tokenizer = AutoTokenizer.from_pretrained(
- config.dataset.bert_name)
- self.cls_token_id = tokenizer.cls_token_id
- self.sep_token_id = tokenizer.sep_token_id
- self.pad_token_id = tokenizer.pad_token_id
-
- def forward(
- self,
- caps,
- cmasks,
- vfeats,
- vmasks,
- **kwargs
- ):
- # ActionLocalization assumes batch_size=1; squeeze it.
- caps = caps.squeeze(0)
- cmasks = cmasks.squeeze(0)
- vfeats = vfeats.squeeze(0)
- vmasks = vmasks.squeeze(0)
-
- # TODO (huxu): other ways to do negative examples; move the following
- # into your criterion forward.
- dummy_caps = torch.LongTensor(
- [[self.cls_token_id, self.sep_token_id,
- self.pad_token_id, self.sep_token_id]],
- ).to(caps.device).repeat(vfeats.size(0), 1)
- dummy_cmasks = torch.BoolTensor(
- [[0, 1, 0, 1]] # pads are valid for attention.
- ).to(caps.device).repeat(vfeats.size(0), 1)
-
- outputs = self.forward_video(
- vfeats,
- vmasks,
- dummy_caps,
- dummy_cmasks,
- output_hidden_states=True
- )
-
- video_seq = outputs[:, 1:vmasks.size(1)+1].masked_select(
- vmasks.unsqueeze(-1)
- ).view(-1, self.hidden_size)
-
- pooled_text = self.forward_text(
- caps,
- cmasks,
- output_hidden_states=False
- )
-
- # this line is not right.
- logits = torch.mm(video_seq, pooled_text.transpose(1, 0))
- return {"logits": logits}
-
-
-class MMFusionShareActionLocalization(MMFusionShare):
- def __init__(self, config, **kwargs):
- super().__init__(config)
- tokenizer = AutoTokenizer.from_pretrained(
- config.dataset.bert_name)
- self.cls_token_id = tokenizer.cls_token_id
- self.sep_token_id = tokenizer.sep_token_id
- self.pad_token_id = tokenizer.pad_token_id
-
- def forward(
- self,
- caps,
- cmasks,
- vfeats,
- vmasks,
- **kwargs
- ):
- # ActionLocalization assumes batch_size=1; squeeze it.
- caps = caps.squeeze(0)
- cmasks = cmasks.squeeze(0)
- vfeats = vfeats.squeeze(0)
- vmasks = vmasks.squeeze(0)
-
- # TODO (huxu): other ways to do negative examples; move the following
- # into your criterion forward.
- dummy_caps = torch.LongTensor(
- [[self.cls_token_id, self.sep_token_id,
- self.pad_token_id, self.sep_token_id]],
- ).to(caps.device).repeat(vfeats.size(0), 1)
- dummy_cmasks = torch.BoolTensor(
- [[0, 1, 0, 1]] # pads are valid for attention.
- ).to(caps.device).repeat(vfeats.size(0), 1)
-
- outputs = self.forward_video(
- vfeats,
- vmasks,
- dummy_caps,
- dummy_cmasks,
- output_hidden_states=True
- )
-
- video_seq = outputs[:, 1:vmasks.size(1)+1].masked_select(
- vmasks.unsqueeze(-1)
- ).view(-1, self.hidden_size)
-
- pooled_text = self.forward_text(
- caps,
- cmasks,
- output_hidden_states=False
- )
-
- # this line is not right.
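- # (dot product of every video token against each pooled caption embedding.)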
- logits = torch.mm(video_seq, pooled_text.transpose(1, 0)) - return {"logits": logits} diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/models/mmfusionnlg.py b/kosmos-g/fairseq/examples/MMPT/mmpt/models/mmfusionnlg.py deleted file mode 100644 index 9207e77da..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/models/mmfusionnlg.py +++ /dev/null @@ -1,999 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors, Facebook AI Research authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# Copyright (c) Facebook, Inc. All Rights Reserved - - -import torch - -from torch.nn import functional as F - -from typing import Optional, Iterable - -try: - from transformers import BertPreTrainedModel - from transformers.modeling_bert import BertOnlyMLMHead - - from transformers.file_utils import ModelOutput - from transformers.modeling_outputs import CausalLMOutput - from transformers.generation_utils import ( - BeamHypotheses, - top_k_top_p_filtering - ) -except ImportError: - pass - -from .mmfusion import MMFusion -from .transformermodel import MMBertModel -from ..modules import VideoTokenMLP - - -class MMFusionNLG(MMFusion): - def __init__(self, config, **kwargs): - super().__init__(config) - if config.model.max_decode_length is not None: - self.max_length = min( - config.model.max_decode_length, - config.dataset.max_len - config.dataset.max_video_len - 3 - ) - else: - self.max_length = \ - config.dataset.max_len - config.dataset.max_video_len - 3 - self.gen_param = config.gen_param if config.gen_param is not None \ - else {} - - def forward( - self, - caps, - cmasks, - vfeats, - vmasks, - attention_mask, - video_label=None, - text_label=None, - **kwargs - ): - """use pre-trained LM header for generation.""" - attention_mask, token_type_ids = self._mm_on_the_fly( - cmasks, vmasks, attention_mask) - - outputs = self.mm_encoder( - input_ids=caps, - input_video_embeds=vfeats, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - masked_lm_labels=text_label, - ) - return {"logits": outputs[0]} - - @torch.no_grad() - def generate( - self, - caps, cmasks, vfeats, vmasks, - attention_mask=None, - bos_token_id=None, - eos_token_id=None, - **kwargs - ): - # a simplified interface from - # https://huggingface.co/transformers/v3.4.0/_modules/transformers/generation_utils.html#GenerationMixin.generate - - # caps now only have - # [CLS], [SEP] (for video) and [CLS] (as bos_token) - assert caps.size(1) == 3 - - attention_mask, token_type_ids = self._mm_on_the_fly( - cmasks, vmasks, attention_mask) - - output = self.mm_encoder.generate( - input_ids=caps, - input_video_embeds=vfeats, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - bos_token_id=bos_token_id, - eos_token_id=eos_token_id, - max_length=self.max_length, - **self.gen_param - ) - return output - - -class MMBertForNLG(BertPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.bert = 
MMBertModel(config)
- self.videomlp = VideoTokenMLP(config)
- # we do not use `BertGenerationOnlyLMHead`
- # so that the pretrained MLM head can be reused.
- self.cls = BertOnlyMLMHead(config)
- self.hidden_size = config.hidden_size
- self.init_weights()
-
- def get_output_embeddings(self):
- return self.cls.predictions.decoder
-
- def forward(
- self,
- input_ids=None,
- input_video_embeds=None,
- attention_mask=None,
- token_type_ids=None,
- position_ids=None,
- head_mask=None,
- inputs_embeds=None,
- masked_lm_labels=None,
- output_attentions=None,
- output_hidden_states=None,
- return_dict=None,
- ):
- # similar to MMBertForMFMMLM without MFM.
- video_tokens = self.videomlp(input_video_embeds)
- outputs = self.bert(
- input_ids,
- video_tokens,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- output_attentions=output_attentions,
- output_hidden_states=output_hidden_states,
- return_dict=return_dict,
- )
-
- sequence_output = outputs[0]
-
- prediction_scores = None
- if masked_lm_labels is not None:
- text_offset = input_video_embeds.size(1) + 1 # [CLS]
- # recover caps format: [CLS] [SEP] text [SEP]
- text_sequence_output = torch.cat(
- [sequence_output[:, :1], sequence_output[:, text_offset:]],
- dim=1
- )
-
- # only compute the selected (masked) tokens during training to speed up.
- hidden_size = text_sequence_output.size(-1)
- # masked_lm_labels = masked_lm_labels.reshape(-1)
- labels_mask = masked_lm_labels != -100
-
- selected_text_output = text_sequence_output.masked_select(
- labels_mask.unsqueeze(-1)
- ).view(-1, hidden_size)
- prediction_scores = self.cls(selected_text_output)
-
- if not return_dict:
- output = (
- prediction_scores,
- ) + outputs[2:]
- return output
-
- # for generation.
- text_offset = input_video_embeds.size(1) + 2 # [CLS] + [SEP]
- text_sequence_output = sequence_output[:, text_offset:]
- prediction_scores = self.cls(text_sequence_output)
- return CausalLMOutput(
- loss=None,
- logits=prediction_scores,
- )
-
- def prepare_inputs_for_generation(
- self,
- input_ids,
- input_video_embeds,
- attention_mask=None,
- token_type_ids=None,
- **model_kwargs
- ):
- # must return a dictionary.
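- # keep the full video prefix plus the text generated so far, and trim the
- # masks to that combined length at every decoding step.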
- seq_len = input_ids.size(1) + input_video_embeds.size(1)
- if attention_mask is not None:
- if len(attention_mask.size()) == 4:
- attention_mask = attention_mask[:, :, :seq_len, :seq_len]
- elif len(attention_mask.size()) == 3:
- attention_mask = attention_mask[:, :seq_len, :seq_len]
- else:
- attention_mask = attention_mask[:, :seq_len]
- if token_type_ids is not None:
- token_type_ids = token_type_ids[:, :seq_len]
-
- return {
- "input_ids": input_ids,
- "input_video_embeds": input_video_embeds,
- "attention_mask": attention_mask,
- "token_type_ids": token_type_ids,
- }
-
- @torch.no_grad()
- def generate(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- decoder_input_ids: Optional[torch.LongTensor] = None,
- max_length: Optional[int] = None,
- min_length: Optional[int] = None,
- do_sample: Optional[bool] = None,
- early_stopping: Optional[bool] = None,
- num_beams: Optional[int] = None,
- temperature: Optional[float] = None,
- top_k: Optional[int] = None,
- top_p: Optional[float] = None,
- repetition_penalty: Optional[float] = None,
- bad_words_ids: Optional[Iterable[int]] = None,
- bos_token_id: Optional[int] = None,
- pad_token_id: Optional[int] = None,
- eos_token_id: Optional[int] = None,
- length_penalty: Optional[float] = None,
- no_repeat_ngram_size: Optional[int] = None,
- num_return_sequences: Optional[int] = None,
- attention_mask: Optional[torch.LongTensor] = None,
- decoder_start_token_id: Optional[int] = None,
- use_cache: Optional[bool] = None,
- **model_kwargs
- ) -> torch.LongTensor:
- r"""
- Generates sequences for models with a language modeling head. The method currently supports greedy decoding,
- beam-search decoding, sampling with temperature, sampling with top-k or nucleus sampling.
- Adapted in part from `Facebook's XLM beam search code
- `__.
- Apart from :obj:`input_ids` and :obj:`attention_mask`, all the arguments below will default to the value of the
- attribute of the same name inside the :class:`~transformers.PretrainedConfig` of the model. The default values
- indicated are the default values of those config attributes.
- Most of these parameters are explained in more detail in `this blog post
- `__.
- Parameters:
- input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- The sequence used as a prompt for the generation. If :obj:`None` the method initializes
- it as an empty :obj:`torch.LongTensor` of shape :obj:`(1,)`.
- decoder_input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
- initial input_ids for the decoder of encoder-decoder type models. If :obj:`None` then only
- decoder_start_token_id is passed as the first token to the decoder.
- max_length (:obj:`int`, `optional`, defaults to 20):
- The maximum length of the sequence to be generated.
- min_length (:obj:`int`, `optional`, defaults to 10):
- The minimum length of the sequence to be generated.
- do_sample (:obj:`bool`, `optional`, defaults to :obj:`False`):
- Whether or not to use sampling; use greedy decoding otherwise.
- early_stopping (:obj:`bool`, `optional`, defaults to :obj:`False`):
- Whether to stop the beam search when at least ``num_beams`` sentences are finished per batch or not.
- num_beams (:obj:`int`, `optional`, defaults to 1):
- Number of beams for beam search. 1 means no beam search.
- temperature (:obj:`float`, `optional`, defaults to 1.0):
- The value used to modulate the next token probabilities.
- top_k (:obj:`int`, `optional`, defaults to 50): - The number of highest probability vocabulary tokens to keep for top-k-filtering. - top_p (:obj:`float`, `optional`, defaults to 1.0): - If set to float < 1, only the most probable tokens with probabilities that add up to ``top_p`` or - higher are kept for generation. - repetition_penalty (:obj:`float`, `optional`, defaults to 1.0): - The parameter for repetition penalty. 1.0 means no penalty. See `this paper - `__ for more details. - pad_token_id (:obj:`int`, `optional`): - The id of the `padding` token. - bos_token_id (:obj:`int`, `optional`): - The id of the `beginning-of-sequence` token. - eos_token_id (:obj:`int`, `optional`): - The id of the `end-of-sequence` token. - length_penalty (:obj:`float`, `optional`, defaults to 1.0): - Exponential penalty to the length. 1.0 means no penalty. - Set to values < 1.0 in order to encourage the model to generate shorter sequences, to a value > 1.0 in - order to encourage the model to produce longer sequences. - no_repeat_ngram_size (:obj:`int`, `optional`, defaults to 0): - If set to int > 0, all ngrams of that size can only occur once. - bad_words_ids(:obj:`List[int]`, `optional`): - List of token ids that are not allowed to be generated. In order to get the tokens of the words that - should not appear in the generated text, use :obj:`tokenizer.encode(bad_word, add_prefix_space=True)`. - num_return_sequences(:obj:`int`, `optional`, defaults to 1): - The number of independently computed returned sequences for each element in the batch. - attention_mask (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Mask to avoid performing attention on padding token indices. Mask values are in ``[0, 1]``, 1 for - tokens that are not masked, and 0 for masked tokens. - If not provided, will default to a tensor the same shape as :obj:`input_ids` that masks the pad token. - `What are attention masks? <../glossary.html#attention-mask>`__ - decoder_start_token_id (:obj:`int`, `optional`): - If an encoder-decoder model starts decoding with a different token than `bos`, the id of that token. - use_cache: (:obj:`bool`, `optional`, defaults to :obj:`True`): - Whether or not the model should use the past last key/values attentions (if applicable to the model) to - speed up decoding. - model_kwargs: - Additional model specific kwargs will be forwarded to the :obj:`forward` function of the model. - Return: - :obj:`torch.LongTensor` of shape :obj:`(batch_size * num_return_sequences, sequence_length)`: - The generated sequences. The second dimension (sequence_length) is either equal to :obj:`max_length` or - shorter if all batches finished early due to the :obj:`eos_token_id`. - Examples:: - tokenizer = AutoTokenizer.from_pretrained('distilgpt2') # Initialize tokenizer - model = AutoModelWithLMHead.from_pretrained('distilgpt2') # Download model and configuration from S3 and cache. - outputs = model.generate(max_length=40) # do greedy decoding - print('Generated: {}'.format(tokenizer.decode(outputs[0], skip_special_tokens=True))) - tokenizer = AutoTokenizer.from_pretrained('openai-gpt') # Initialize tokenizer - model = AutoModelWithLMHead.from_pretrained('openai-gpt') # Download model and configuration from S3 and cache. 
- input_context = 'The dog'
- input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
- outputs = model.generate(input_ids=input_ids, num_beams=5, num_return_sequences=3, temperature=1.5) # generate 3 independent sequences using beam search decoding (5 beams) with sampling from initial context 'The dog'
- for i in range(3): # 3 output sequences were generated
- print('Generated {}: {}'.format(i, tokenizer.decode(outputs[i], skip_special_tokens=True)))
- tokenizer = AutoTokenizer.from_pretrained('distilgpt2') # Initialize tokenizer
- model = AutoModelWithLMHead.from_pretrained('distilgpt2') # Download model and configuration from S3 and cache.
- input_context = 'The dog'
- input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
- outputs = model.generate(input_ids=input_ids, max_length=40, temperature=0.7, num_return_sequences=3, do_sample=True) # generate 3 candidates using sampling
- for i in range(3): # 3 output sequences were generated
- print('Generated {}: {}'.format(i, tokenizer.decode(outputs[i], skip_special_tokens=True)))
- tokenizer = AutoTokenizer.from_pretrained('ctrl') # Initialize tokenizer
- model = AutoModelWithLMHead.from_pretrained('ctrl') # Download model and configuration from S3 and cache.
- input_context = 'Legal My neighbor is' # "Legal" is one of the control codes for ctrl
- input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
- outputs = model.generate(input_ids=input_ids, max_length=50, temperature=0.7, repetition_penalty=1.2) # generate sequences
- print('Generated: {}'.format(tokenizer.decode(outputs[0], skip_special_tokens=True)))
- tokenizer = AutoTokenizer.from_pretrained('gpt2') # Initialize tokenizer
- model = AutoModelWithLMHead.from_pretrained('gpt2') # Download model and configuration from S3 and cache.
- input_context = 'My cute dog' # context for generation
- bad_words_ids = [tokenizer.encode(bad_word, add_prefix_space=True) for bad_word in ['idiot', 'stupid', 'shut up']]
- input_ids = tokenizer.encode(input_context, return_tensors='pt') # encode input context
- outputs = model.generate(input_ids=input_ids, max_length=100, do_sample=True, bad_words_ids=bad_words_ids) # generate sequences without allowing bad_words to be generated
- """
-
- # We cannot generate if the model does not have a LM head
- if self.get_output_embeddings() is None:
- raise AttributeError(
- "You tried to generate sequences with a model that does not have a LM Head. "
- "Please use another model class (e.g.
`OpenAIGPTLMHeadModel`, `XLNetLMHeadModel`, `GPT2LMHeadModel`, `CTRLLMHeadModel`, `T5WithLMHeadModel`, `TransfoXLLMHeadModel`, `XLMWithLMHeadModel`, `BartForConditionalGeneration` )"
- )
-
- max_length = max_length if max_length is not None else self.config.max_length
- min_length = min_length if min_length is not None else self.config.min_length
- do_sample = do_sample if do_sample is not None else self.config.do_sample
- early_stopping = early_stopping if early_stopping is not None else self.config.early_stopping
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- num_beams = num_beams if num_beams is not None else self.config.num_beams
- temperature = temperature if temperature is not None else self.config.temperature
- top_k = top_k if top_k is not None else self.config.top_k
- top_p = top_p if top_p is not None else self.config.top_p
- repetition_penalty = repetition_penalty if repetition_penalty is not None else self.config.repetition_penalty
- bos_token_id = bos_token_id if bos_token_id is not None else self.config.bos_token_id
- pad_token_id = pad_token_id if pad_token_id is not None else self.config.pad_token_id
- eos_token_id = eos_token_id if eos_token_id is not None else self.config.eos_token_id
- length_penalty = length_penalty if length_penalty is not None else self.config.length_penalty
- no_repeat_ngram_size = (
- no_repeat_ngram_size if no_repeat_ngram_size is not None else self.config.no_repeat_ngram_size
- )
- bad_words_ids = bad_words_ids if bad_words_ids is not None else self.config.bad_words_ids
- num_return_sequences = (
- num_return_sequences if num_return_sequences is not None else self.config.num_return_sequences
- )
- decoder_start_token_id = (
- decoder_start_token_id if decoder_start_token_id is not None else self.config.decoder_start_token_id
- )
-
- if input_ids is not None:
- batch_size = input_ids.shape[0] # overridden by the input batch_size
- else:
- batch_size = 1
-
- assert isinstance(max_length, int) and max_length > 0, "`max_length` should be a strictly positive integer."
- assert isinstance(min_length, int) and min_length >= 0, "`min_length` should be a positive integer."
- assert isinstance(do_sample, bool), "`do_sample` should be a boolean."
- assert isinstance(early_stopping, bool), "`early_stopping` should be a boolean."
- assert isinstance(use_cache, bool), "`use_cache` should be a boolean."
- assert isinstance(num_beams, int) and num_beams > 0, "`num_beams` should be a strictly positive integer."
- assert temperature > 0, "`temperature` should be strictly positive."
- assert isinstance(top_k, int) and top_k >= 0, "`top_k` should be a positive integer."
- assert 0 <= top_p <= 1, "`top_p` should be between 0 and 1."
- assert repetition_penalty >= 1.0, "`repetition_penalty` should be >= 1."
- assert input_ids is not None or (
- isinstance(bos_token_id, int) and bos_token_id >= 0
- ), "If input_ids is not defined, `bos_token_id` should be a positive integer."
- assert pad_token_id is None or (
- isinstance(pad_token_id, int) and (pad_token_id >= 0)
- ), "`pad_token_id` should be a positive integer."
- assert (eos_token_id is None) or (
- isinstance(eos_token_id, int) and (eos_token_id >= 0)
- ), "`eos_token_id` should be a positive integer."
- assert length_penalty > 0, "`length_penalty` should be strictly positive."
- assert (
- isinstance(no_repeat_ngram_size, int) and no_repeat_ngram_size >= 0
- ), "`no_repeat_ngram_size` should be a positive integer."
- assert ( - isinstance(num_return_sequences, int) and num_return_sequences > 0 - ), "`num_return_sequences` should be a strictly positive integer." - assert ( - bad_words_ids is None or isinstance(bad_words_ids, list) and isinstance(bad_words_ids[0], list) - ), "`bad_words_ids` is either `None` or a list of lists of tokens that should not be generated" - - if input_ids is None: - assert isinstance(bos_token_id, int) and bos_token_id >= 0, ( - "you should either supply a context to complete as `input_ids` input " - "or a `bos_token_id` (integer >= 0) as a first token to start the generation." - ) - input_ids = torch.full( - (batch_size, 1), - bos_token_id, - dtype=torch.long, - device=next(self.parameters()).device, - ) - else: - assert input_ids.dim() == 2, "Input prompt should be of shape (batch_size, sequence length)." - - # not allow to duplicate outputs when greedy decoding - if do_sample is False: - if num_beams == 1: - # no_beam_search greedy generation conditions - assert ( - num_return_sequences == 1 - ), "Greedy decoding will always produce the same output for num_beams == 1 and num_return_sequences > 1. Please set num_return_sequences = 1" - - else: - # beam_search greedy generation conditions - assert ( - num_beams >= num_return_sequences - ), "Greedy beam search decoding cannot return more sequences than it has beams. Please set num_beams >= num_return_sequences" - - # create attention mask if necessary - # TODO (PVP): this should later be handled by the forward fn() in each model in the future see PR 3140 - if (attention_mask is None) and (pad_token_id is not None) and (pad_token_id in input_ids): - attention_mask = input_ids.ne(pad_token_id).long() - elif attention_mask is None: - attention_mask = input_ids.new_ones(input_ids.shape) - - # set pad_token_id to eos_token_id if not set. 
Important that this is done after - # attention_mask is created - if pad_token_id is None and eos_token_id is not None: - print( - "Setting `pad_token_id` to {} (first `eos_token_id`) to generate sequence".format(eos_token_id) - ) - pad_token_id = eos_token_id - - # vocab size - if hasattr(self.config, "vocab_size"): - vocab_size = self.config.vocab_size - elif ( - self.config.is_encoder_decoder - and hasattr(self.config, "decoder") - and hasattr(self.config.decoder, "vocab_size") - ): - vocab_size = self.config.decoder.vocab_size - else: - raise ValueError("either self.config.vocab_size or self.config.decoder.vocab_size needs to be defined") - - # set effective batch size and effective batch multiplier according to do_sample - if do_sample: - effective_batch_size = batch_size * num_return_sequences - effective_batch_mult = num_return_sequences - else: - effective_batch_size = batch_size - effective_batch_mult = 1 - - if self.config.is_encoder_decoder: - if decoder_start_token_id is None: - # see if BOS token can be used for decoder_start_token_id - if bos_token_id is not None: - decoder_start_token_id = bos_token_id - elif ( - hasattr(self.config, "decoder") - and hasattr(self.config.decoder, "bos_token_id") - and self.config.decoder.bos_token_id is not None - ): - decoder_start_token_id = self.config.decoder.bos_token_id - else: - raise ValueError( - "decoder_start_token_id or bos_token_id has to be defined for encoder-decoder generation" - ) - - assert hasattr(self, "get_encoder"), "{} should have a 'get_encoder' function defined".format(self) - assert callable(self.get_encoder), "{} should be a method".format(self.get_encoder) - - # get encoder and store encoder outputs - encoder = self.get_encoder() - encoder_outputs: ModelOutput = encoder(input_ids, attention_mask=attention_mask, return_dict=True) - - # Expand input ids if num_beams > 1 or num_return_sequences > 1 - if num_return_sequences > 1 or num_beams > 1: - # TODO: make this a call-back function. - # input_ids=caps, - # input_video_embeds=vfeats, - # attention_mask=attention_mask, - # token_type_ids=token_type_ids, - input_video_embeds = model_kwargs.pop("input_video_embeds", None) - token_type_ids = model_kwargs.pop("token_type_ids", None) - - input_ids_len = input_ids.shape[-1] - input_ids = input_ids.unsqueeze(1).expand( - batch_size, effective_batch_mult * num_beams, input_ids_len) - - input_video_embeds_len, input_video_embeds_hidden = input_video_embeds.size(1), input_video_embeds.size(2) - input_video_embeds = input_video_embeds.unsqueeze(1).expand( - batch_size, effective_batch_mult * num_beams, input_video_embeds_len, input_video_embeds_hidden) - - attention_mask_from_len, attention_mask_to_len = attention_mask.size(1), attention_mask.size(2) - attention_mask = attention_mask.unsqueeze(1).expand( - batch_size, effective_batch_mult * num_beams, attention_mask_from_len, attention_mask_to_len - ) - - token_type_ids_len = token_type_ids.size(1) - token_type_ids = token_type_ids.unsqueeze(1).expand( - batch_size, effective_batch_mult * num_beams, token_type_ids_len - ) - - # contiguous ... 
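- # collapse the (batch, beams x return_sequences, ...) expansion into one
- # flat batch of size effective_batch_size * num_beams.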
- input_ids = input_ids.contiguous().view( - effective_batch_size * num_beams, input_ids_len - ) # shape: (batch_size * num_return_sequences * num_beams, cur_len) - - input_video_embeds = input_video_embeds.contiguous().view( - effective_batch_size * num_beams, input_video_embeds_len, input_video_embeds_hidden) - - attention_mask = attention_mask.contiguous().view( - effective_batch_size * num_beams, attention_mask_from_len, attention_mask_to_len - ) # shape: (batch_size * num_return_sequences * num_beams, cur_len) - - token_type_ids = token_type_ids.contiguous().view( - effective_batch_size * num_beams, token_type_ids_len - ) - - model_kwargs["input_video_embeds"] = input_video_embeds - model_kwargs["token_type_ids"] = token_type_ids - - if self.config.is_encoder_decoder: - device = next(self.parameters()).device - if decoder_input_ids is not None: - # give initial decoder input ids - input_ids = decoder_input_ids.repeat(effective_batch_size * num_beams, 1).to(device) - else: - # create empty decoder input_ids - input_ids = torch.full( - (effective_batch_size * num_beams, 1), - decoder_start_token_id, - dtype=torch.long, - device=device, - ) - cur_len = input_ids.shape[-1] - - assert ( - batch_size == encoder_outputs.last_hidden_state.shape[0] - ), f"expected encoder_outputs.last_hidden_state to have 1st dimension bs={batch_size}, got {encoder_outputs.last_hidden_state.shape[0]} " - - # expand batch_idx to assign correct encoder output for expanded input_ids (due to num_beams > 1 and num_return_sequences > 1) - expanded_batch_idxs = ( - torch.arange(batch_size) - .view(-1, 1) - .repeat(1, num_beams * effective_batch_mult) - .view(-1) - .to(input_ids.device) - ) - - # expand encoder_outputs - encoder_outputs["last_hidden_state"] = encoder_outputs.last_hidden_state.index_select( - 0, expanded_batch_idxs - ) - - # save encoder_outputs in `model_kwargs` - model_kwargs["encoder_outputs"] = encoder_outputs - - else: - cur_len = input_ids.shape[-1] - - assert ( - cur_len < max_length - ), f"The context has {cur_len} number of tokens, but `max_length` is only {max_length}. 
Please make sure that `max_length` is bigger than the number of tokens, by setting either `generate(max_length=...,...)` or `config.max_length = ...`" - - if num_beams > 1: - output = self._generate_beam_search( - input_ids, - cur_len=cur_len, - max_length=max_length, - min_length=min_length, - do_sample=do_sample, - early_stopping=early_stopping, - temperature=temperature, - top_k=top_k, - top_p=top_p, - repetition_penalty=repetition_penalty, - no_repeat_ngram_size=no_repeat_ngram_size, - bad_words_ids=bad_words_ids, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - batch_size=effective_batch_size, - num_return_sequences=num_return_sequences, - length_penalty=length_penalty, - num_beams=num_beams, - vocab_size=vocab_size, - attention_mask=attention_mask, - use_cache=use_cache, - model_kwargs=model_kwargs, - ) - else: - output = self._generate_no_beam_search( - input_ids, - cur_len=cur_len, - max_length=max_length, - min_length=min_length, - do_sample=do_sample, - temperature=temperature, - top_k=top_k, - top_p=top_p, - repetition_penalty=repetition_penalty, - no_repeat_ngram_size=no_repeat_ngram_size, - bad_words_ids=bad_words_ids, - pad_token_id=pad_token_id, - eos_token_id=eos_token_id, - batch_size=effective_batch_size, - attention_mask=attention_mask, - use_cache=use_cache, - model_kwargs=model_kwargs, - ) - - return output - - def _generate_beam_search( - self, - input_ids, - cur_len, - max_length, - min_length, - do_sample, - early_stopping, - temperature, - top_k, - top_p, - repetition_penalty, - no_repeat_ngram_size, - bad_words_ids, - pad_token_id, - eos_token_id, - batch_size, - num_return_sequences, - length_penalty, - num_beams, - vocab_size, - attention_mask, - use_cache, - model_kwargs, - ): - """Generate sequences for each example with beam search.""" - - # generated hypotheses - generated_hyps = [ - BeamHypotheses(num_beams, max_length, length_penalty, early_stopping=early_stopping) - for _ in range(batch_size) - ] - - # scores for each sentence in the beam - beam_scores = torch.zeros((batch_size, num_beams), dtype=torch.float, device=input_ids.device) - - # for greedy decoding it is made sure that only tokens of the first beam are considered to avoid sampling the exact same tokens three times - if do_sample is False: - beam_scores[:, 1:] = -1e9 - beam_scores = beam_scores.view(-1) # shape (batch_size * num_beams,) - - # cache compute states - past = None - - # done sentences - done = [False for _ in range(batch_size)] - - while cur_len < max_length: - model_inputs = self.prepare_inputs_for_generation( - input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache, **model_kwargs - ) - outputs = self(**model_inputs, return_dict=True) # (batch_size * num_beams, cur_len, vocab_size) - next_token_logits = outputs.logits[:, -1, :] # (batch_size * num_beams, vocab_size) - - # if model has past, then set the past variable to speed up decoding - if "past_key_values" in outputs: - past = outputs.past_key_values - elif "mems" in outputs: - past = outputs.mems - - if self.config.is_encoder_decoder and do_sample is False: - # TODO (PVP) still a bit hacky here - there might be a better solution - next_token_logits = self.adjust_logits_during_generation( - next_token_logits, cur_len=cur_len, max_length=max_length - ) - - scores = F.log_softmax(next_token_logits, dim=-1) # (batch_size * num_beams, vocab_size) - - scores = self.postprocess_next_token_scores( - scores=scores, - input_ids=input_ids, - no_repeat_ngram_size=no_repeat_ngram_size, - 
bad_words_ids=bad_words_ids,
- cur_len=cur_len,
- min_length=min_length,
- max_length=max_length,
- eos_token_id=eos_token_id,
- repetition_penalty=repetition_penalty,
- batch_size=batch_size,
- num_beams=num_beams,
- )
-
- assert scores.shape == (batch_size * num_beams, vocab_size), "Shapes of scores: {} != {}".format(
- scores.shape, (batch_size * num_beams, vocab_size)
- )
-
- if do_sample:
- _scores = scores + beam_scores[:, None].expand_as(scores) # (batch_size * num_beams, vocab_size)
- # Temperature
- if temperature != 1.0:
- _scores = _scores / temperature
- # Top-p/top-k filtering
- _scores = top_k_top_p_filtering(
- _scores, top_k=top_k, top_p=top_p, min_tokens_to_keep=2
- ) # (batch_size * num_beams, vocab_size)
- # re-organize to group the beam together to sample from all beam_idxs
- _scores = _scores.contiguous().view(
- batch_size, num_beams * vocab_size
- ) # (batch_size, num_beams * vocab_size)
-
- # Sample 2 next tokens for each beam (so we have some spare tokens and match output of greedy beam search)
- probs = F.softmax(_scores, dim=-1)
- next_tokens = torch.multinomial(probs, num_samples=2 * num_beams) # (batch_size, num_beams * 2)
- # Compute next scores
- next_scores = torch.gather(_scores, -1, next_tokens) # (batch_size, num_beams * 2)
- # sort the sampled vector to make sure that the first num_beams samples are the best
- next_scores, next_scores_indices = torch.sort(next_scores, descending=True, dim=1)
- next_tokens = torch.gather(next_tokens, -1, next_scores_indices) # (batch_size, num_beams * 2)
-
- else:
- next_scores = scores + beam_scores[:, None].expand_as(scores) # (batch_size * num_beams, vocab_size)
-
- # re-organize to group the beam together (we are keeping the top hypotheses across beams)
- next_scores = next_scores.view(
- batch_size, num_beams * vocab_size
- ) # (batch_size, num_beams * vocab_size)
-
- next_scores, next_tokens = torch.topk(next_scores, 2 * num_beams, dim=1, largest=True, sorted=True)
-
- assert next_scores.size() == next_tokens.size() == (batch_size, 2 * num_beams)
-
- # next batch beam content
- next_batch_beam = []
-
- # for each sentence
- for batch_idx in range(batch_size):
-
- # if we are done with this sentence, add a pad token
- if done[batch_idx]:
- assert (
- len(generated_hyps[batch_idx]) >= num_beams
- ), "Batch can only be done if at least {} beams have been generated".format(num_beams)
- assert (
- eos_token_id is not None and pad_token_id is not None
- ), "generated beams >= num_beams -> eos_token_id and pad_token have to be defined"
- next_batch_beam.extend([(0, pad_token_id, 0)] * num_beams) # pad the batch
- continue
-
- # next sentence beam content, this will get added to next_batch_beam
- next_sent_beam = []
-
- # next tokens for this sentence
- for beam_token_rank, (beam_token_id, beam_token_score) in enumerate(
- zip(next_tokens[batch_idx], next_scores[batch_idx])
- ):
- # get beam and token IDs
- beam_id = beam_token_id // vocab_size
- token_id = beam_token_id % vocab_size
-
- effective_beam_id = batch_idx * num_beams + beam_id
- # add to generated hypotheses if end of sentence
- if (eos_token_id is not None) and (token_id.item() == eos_token_id):
- # if beam_token does not belong to top num_beams tokens, it should not be added
- is_beam_token_worse_than_top_num_beams = beam_token_rank >= num_beams
- if is_beam_token_worse_than_top_num_beams:
- continue
- generated_hyps[batch_idx].add(
- input_ids[effective_beam_id].clone(),
- beam_token_score.item(),
- )
- else:
- # add next predicted token since it is not
eos_token - next_sent_beam.append((beam_token_score, token_id, effective_beam_id)) - - # once the beam for next step is full, don't add more tokens to it. - if len(next_sent_beam) == num_beams: - break - - # Check if we are done so that we can save a pad step if all(done) - done[batch_idx] = done[batch_idx] or generated_hyps[batch_idx].is_done( - next_scores[batch_idx].max().item(), cur_len - ) - - # update next beam content - assert len(next_sent_beam) == num_beams, "Beam should always be full" - next_batch_beam.extend(next_sent_beam) - assert len(next_batch_beam) == num_beams * (batch_idx + 1), "We should have added num_beams each step" - - # stop when we are done with each sentence - if all(done): - break - - # sanity check / prepare next batch - assert len(next_batch_beam) == batch_size * num_beams - beam_scores = beam_scores.new([x[0] for x in next_batch_beam]) - beam_tokens = input_ids.new([x[1] for x in next_batch_beam]) - beam_idx = input_ids.new([x[2] for x in next_batch_beam]) - - # re-order batch and update current length - input_ids = input_ids[beam_idx, :] - input_ids = torch.cat([input_ids, beam_tokens.unsqueeze(1)], dim=-1) - cur_len = cur_len + 1 - - # re-order internal states - if past is not None: - past = self._reorder_cache(past, beam_idx) - - # extend attention_mask for new generated input if only decoder - # (huxu): move out since we trim attention_mask by ourselves. - # if self.config.is_encoder_decoder is False: - # attention_mask = torch.cat( - # [attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1 - # ) - - # finalize all open beam hypotheses and add to generated hypotheses - for batch_idx in range(batch_size): - if done[batch_idx]: - continue - - # test that beam scores match previously calculated scores if not eos and batch_idx not done - if eos_token_id is not None and all( - (token_id % vocab_size).item() != eos_token_id for token_id in next_tokens[batch_idx] - ): - assert torch.all( - next_scores[batch_idx, :num_beams] == beam_scores.view(batch_size, num_beams)[batch_idx] - ), "If batch_idx is not done, final next scores: {} have to equal to accumulated beam_scores: {}".format( - next_scores[:, :num_beams][batch_idx], - beam_scores.view(batch_size, num_beams)[batch_idx], - ) - - # need to add best num_beams hypotheses to generated hyps - for beam_id in range(num_beams): - effective_beam_id = batch_idx * num_beams + beam_id - final_score = beam_scores[effective_beam_id].item() - final_tokens = input_ids[effective_beam_id] - generated_hyps[batch_idx].add(final_tokens, final_score) - - # depending on whether greedy generation is wanted or not define different output_batch_size and output_num_return_sequences_per_batch - output_batch_size = batch_size if do_sample else batch_size * num_return_sequences - output_num_return_sequences_per_batch = 1 if do_sample else num_return_sequences - - # select the best hypotheses - sent_lengths = input_ids.new(output_batch_size) - best = [] - - # retrieve best hypotheses - for i, hypotheses in enumerate(generated_hyps): - sorted_hyps = sorted(hypotheses.beams, key=lambda x: x[0]) - for j in range(output_num_return_sequences_per_batch): - effective_batch_idx = output_num_return_sequences_per_batch * i + j - best_hyp = sorted_hyps.pop()[1] - sent_lengths[effective_batch_idx] = len(best_hyp) - best.append(best_hyp) - - # prepare for adding eos - sent_max_len = min(sent_lengths.max().item() + 1, max_length) - decoded = input_ids.new(output_batch_size, sent_max_len) - # shorter batches are padded if 
needed
- if sent_lengths.min().item() != sent_lengths.max().item():
- assert pad_token_id is not None, "`pad_token_id` has to be defined"
- decoded.fill_(pad_token_id)
-
- # fill with hypotheses and eos_token_id if the latter fits in
- for i, hypo in enumerate(best):
- decoded[i, : sent_lengths[i]] = hypo
- if sent_lengths[i] < max_length:
- decoded[i, sent_lengths[i]] = eos_token_id
-
- return decoded
-
- def _generate_no_beam_search(
- self,
- input_ids,
- cur_len,
- max_length,
- min_length,
- do_sample,
- temperature,
- top_k,
- top_p,
- repetition_penalty,
- no_repeat_ngram_size,
- bad_words_ids,
- pad_token_id,
- eos_token_id,
- batch_size,
- attention_mask,
- use_cache,
- model_kwargs,
- ):
- """Generate sequences for each example without beam search (num_beams == 1).
- All returned sequences are generated independently.
- """
- # length of generated sentences / unfinished sentences
- unfinished_sents = input_ids.new(batch_size).fill_(1)
- sent_lengths = input_ids.new(batch_size).fill_(max_length)
-
- past = None
- while cur_len < max_length:
- model_inputs = self.prepare_inputs_for_generation(
- input_ids, past=past, attention_mask=attention_mask, use_cache=use_cache, **model_kwargs
- )
-
- outputs = self(**model_inputs, return_dict=True)
- next_token_logits = outputs.logits[:, -1, :]
- scores = self.postprocess_next_token_scores(
- scores=next_token_logits,
- input_ids=input_ids,
- no_repeat_ngram_size=no_repeat_ngram_size,
- bad_words_ids=bad_words_ids,
- cur_len=cur_len,
- min_length=min_length,
- max_length=max_length,
- eos_token_id=eos_token_id,
- repetition_penalty=repetition_penalty,
- batch_size=batch_size,
- num_beams=1,
- )
-
- # if model has past, then set the past variable to speed up decoding
- if "past_key_values" in outputs:
- past = outputs.past_key_values
- elif "mems" in outputs:
- past = outputs.mems
-
- if do_sample:
- # Temperature (higher temperature => more likely to sample low probability tokens)
- if temperature != 1.0:
- scores = scores / temperature
- # Top-p/top-k filtering
- next_token_logscores = top_k_top_p_filtering(scores, top_k=top_k, top_p=top_p)
- # Sample
- probs = F.softmax(next_token_logscores, dim=-1)
- next_token = torch.multinomial(probs, num_samples=1).squeeze(1)
- else:
- # Greedy decoding
- next_token = torch.argmax(next_token_logits, dim=-1)
-
- # print(next_token_logits[0,next_token[0]], next_token_logits[0,eos_token_id])
-
- # update generations and finished sentences
- if eos_token_id is not None:
- # pad finished sentences if eos_token_id exists
- tokens_to_add = next_token * unfinished_sents + (pad_token_id) * (1 - unfinished_sents)
- else:
- tokens_to_add = next_token
-
- # add token and increase length by one
- input_ids = torch.cat([input_ids, tokens_to_add.unsqueeze(-1)], dim=-1)
- cur_len = cur_len + 1
-
- if eos_token_id is not None:
- eos_in_sents = tokens_to_add == eos_token_id
- # if sentence is unfinished and the token to add is eos, sent_lengths is filled with current length
- is_sents_unfinished_and_token_to_add_is_eos = unfinished_sents.mul(eos_in_sents.long()).bool()
- sent_lengths.masked_fill_(is_sents_unfinished_and_token_to_add_is_eos, cur_len)
- # unfinished_sents is set to zero if eos in sentence
- unfinished_sents.mul_((~eos_in_sents).long())
-
- # stop when there is a </s> in each sentence, or if we exceed the maximum length
- if unfinished_sents.max() == 0:
- break
-
-
- # extend attention_mask for new generated input if only decoder
- # if self.config.is_encoder_decoder is False:
- # attention_mask = torch.cat(
- # 
[attention_mask, attention_mask.new_ones((attention_mask.shape[0], 1))], dim=-1 - # ) - - return input_ids diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/models/transformermodel.py b/kosmos-g/fairseq/examples/MMPT/mmpt/models/transformermodel.py deleted file mode 100644 index 6acc419f0..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/models/transformermodel.py +++ /dev/null @@ -1,734 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# Copyright (c) Facebook, Inc. All Rights Reserved - -import torch - -from torch import nn - -try: - from transformers.modeling_bert import ( - BertPreTrainedModel, - BertModel, - BertEncoder, - BertPredictionHeadTransform, - ) -except ImportError: - pass - -from ..modules import VideoTokenMLP, MMBertEmbeddings - - -# --------------- fine-tuning models --------------- -class MMBertForJoint(BertPreTrainedModel): - """A BertModel with isolated attention mask to separate modality.""" - - def __init__(self, config): - super().__init__(config) - self.videomlp = VideoTokenMLP(config) - self.bert = MMBertModel(config) - self.init_weights() - - def forward( - self, - input_ids=None, - input_video_embeds=None, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - next_sentence_label=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - separate_forward_split=None, - ): - return_dict = ( - return_dict if return_dict is not None - else self.config.use_return_dict - ) - video_tokens = self.videomlp(input_video_embeds) - - outputs = self.bert( - input_ids, - video_tokens, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - separate_forward_split=separate_forward_split, - ) - - return outputs - - -class MMBertForTokenClassification(BertPreTrainedModel): - """A BertModel similar to MMJointUni, with extra wrapper layer - to be fine-tuned from other pretrained MMFusion model.""" - - def __init__(self, config): - super().__init__(config) - self.videomlp = VideoTokenMLP(config) - self.bert = MMBertModel(config) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - # TODO(huxu): 779 is the number of classes for COIN: move to config? 
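As a quick orientation for the wrappers in this file, a hypothetical driver for `MMBertForJoint` might look like the sketch below. All shapes and config fields here are illustrative assumptions (the real values come from the MMPT yaml configs), and it presumes a transformers version old enough to still ship `transformers.modeling_bert`.

```python
import torch
from transformers import AutoConfig

# hypothetical setup: bert-base hidden size, 512-d S3D video features
config = AutoConfig.from_pretrained("bert-base-uncased")
config.input_dim = 512        # consumed by VideoTokenMLP
config.max_video_len = 32     # consumed by MMBertEmbeddings

model = MMBertForJoint(config)  # class defined above

input_ids = torch.randint(0, config.vocab_size, (2, 16))   # [CLS] [SEP] caption [SEP]
input_video_embeds = torch.randn(2, 8, config.input_dim)   # 8 frames of video features

# video features are projected by videomlp and spliced in after [CLS],
# so sequence_output covers [CLS] + 8 video tokens + 15 text tokens
sequence_output, pooled_output = model(
    input_ids=input_ids,
    input_video_embeds=input_video_embeds,
)[:2]
```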
- self.classifier = nn.Linear(config.hidden_size, 779) - self.init_weights() - - def forward( - self, - input_ids=None, - input_video_embeds=None, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - next_sentence_label=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - separate_forward_split=None, - ): - return_dict = ( - return_dict if return_dict is not None - else self.config.use_return_dict - ) - - video_tokens = self.videomlp(input_video_embeds) - outputs = self.bert( - input_ids, - video_tokens, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - separate_forward_split=separate_forward_split, - ) - - return (self.classifier(outputs[0]),) - - -# ------------ pre-training models ---------------- - -class MMBertForEncoder(BertPreTrainedModel): - """A BertModel for Contrastive Learning.""" - def __init__(self, config): - super().__init__(config) - self.videomlp = VideoTokenMLP(config) - self.bert = MMBertModel(config) - self.init_weights() - - def forward( - self, - input_ids=None, - input_video_embeds=None, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - return_dict = ( - return_dict if return_dict is not None - else self.config.use_return_dict - ) - if input_video_embeds is not None: - video_tokens = self.videomlp(input_video_embeds) - else: - video_tokens = None - - outputs = self.bert( - input_ids, - video_tokens, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - return outputs - - -class MMBertForMFMMLM(BertPreTrainedModel): - """A BertModel with shared prediction head on MFM-MLM.""" - def __init__(self, config): - super().__init__(config) - self.videomlp = VideoTokenMLP(config) - self.bert = MMBertModel(config) - self.cls = MFMMLMHead(config) - self.hidden_size = config.hidden_size - self.init_weights() - - def get_output_embeddings(self): - return self.cls.predictions.decoder - - def forward( - self, - input_ids=None, - input_video_embeds=None, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - masked_frame_labels=None, - target_video_hidden_states=None, - non_masked_frame_mask=None, - masked_lm_labels=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - return_dict = ( - return_dict if return_dict is not None - else self.config.use_return_dict - ) - if input_video_embeds is not None: - video_tokens = self.videomlp(input_video_embeds) - else: - video_tokens = None - - if target_video_hidden_states is not None: - target_video_hidden_states = self.videomlp( - target_video_hidden_states) - - non_masked_frame_hidden_states = video_tokens.masked_select( - non_masked_frame_mask.unsqueeze(-1) - ).view(-1, self.hidden_size) - - outputs = self.bert( - input_ids, - video_tokens, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - 
output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - mfm_scores, prediction_scores = None, None - if masked_frame_labels is not None and masked_lm_labels is not None: - # split the sequence. - text_offset = masked_frame_labels.size(1) + 1 # [CLS] - video_sequence_output = sequence_output[ - :, 1:text_offset - ] # remove [SEP] as not in video_label. - text_sequence_output = torch.cat( - [sequence_output[:, :1], sequence_output[:, text_offset:]], - dim=1 - ) - - hidden_size = video_sequence_output.size(-1) - selected_video_output = video_sequence_output.masked_select( - masked_frame_labels.unsqueeze(-1) - ).view(-1, hidden_size) - - # only compute select tokens to training to speed up. - hidden_size = text_sequence_output.size(-1) - # masked_lm_labels = masked_lm_labels.reshape(-1) - labels_mask = masked_lm_labels != -100 - - selected_text_output = text_sequence_output.masked_select( - labels_mask.unsqueeze(-1) - ).view(-1, hidden_size) - mfm_scores, prediction_scores = self.cls( - selected_video_output, - target_video_hidden_states, - non_masked_frame_hidden_states, - selected_text_output, - ) - - output = ( - mfm_scores, - prediction_scores, - ) + outputs - return output - - -class BertMFMMLMPredictionHead(nn.Module): - def __init__(self, config): - super().__init__() - self.transform = BertPredictionHeadTransform(config) - # The output weights are the same as the input embeddings, but there is - # an output-only bias for each token. - self.decoder = nn.Linear( - config.hidden_size, config.vocab_size, bias=False) - - self.bias = nn.Parameter(torch.zeros(config.vocab_size)) - - # Need a link between the two variables so that the bias is correctly - # resized with `resize_token_embeddings` - self.decoder.bias = self.bias - - def forward( - self, - video_hidden_states=None, - target_video_hidden_states=None, - non_masked_frame_hidden_states=None, - text_hidden_states=None, - ): - video_logits, text_logits = None, None - if video_hidden_states is not None: - video_hidden_states = self.transform(video_hidden_states) - non_masked_frame_logits = torch.mm( - video_hidden_states, - non_masked_frame_hidden_states.transpose(1, 0) - ) - masked_frame_logits = torch.bmm( - video_hidden_states.unsqueeze(1), - target_video_hidden_states.unsqueeze(-1), - ).squeeze(-1) - video_logits = torch.cat( - [masked_frame_logits, non_masked_frame_logits], dim=1 - ) - - if text_hidden_states is not None: - text_hidden_states = self.transform(text_hidden_states) - text_logits = self.decoder(text_hidden_states) - return video_logits, text_logits - - -class MFMMLMHead(nn.Module): - def __init__(self, config): - super().__init__() - self.predictions = BertMFMMLMPredictionHead(config) - - def forward( - self, - video_hidden_states=None, - target_video_hidden_states=None, - non_masked_frame_hidden_states=None, - text_hidden_states=None, - ): - video_logits, text_logits = self.predictions( - video_hidden_states, - target_video_hidden_states, - non_masked_frame_hidden_states, - text_hidden_states, - ) - return video_logits, text_logits - - -class MMBertForMTM(MMBertForMFMMLM): - def __init__(self, config): - BertPreTrainedModel.__init__(self, config) - self.videomlp = VideoTokenMLP(config) - self.bert = MMBertModel(config) - self.cls = MTMHead(config) - self.hidden_size = config.hidden_size - self.init_weights() - - -class BertMTMPredictionHead(nn.Module): - def __init__(self, config): - super().__init__() - self.transform = 
BertPredictionHeadTransform(config) - self.decoder = nn.Linear( - config.hidden_size, config.vocab_size, bias=False) - - def forward( - self, - video_hidden_states=None, - target_video_hidden_states=None, - non_masked_frame_hidden_states=None, - text_hidden_states=None, - ): - non_masked_frame_hidden_states = non_masked_frame_hidden_states.transpose(1, 0) - video_logits, text_logits = None, None - if video_hidden_states is not None: - video_hidden_states = self.transform(video_hidden_states) - - masked_frame_logits = torch.bmm( - video_hidden_states.unsqueeze(1), - target_video_hidden_states.unsqueeze(-1), - ).squeeze(-1) - - non_masked_frame_logits = torch.mm( - video_hidden_states, - non_masked_frame_hidden_states - ) - video_on_vocab_logits = self.decoder(video_hidden_states) - video_logits = torch.cat([ - masked_frame_logits, - non_masked_frame_logits, - video_on_vocab_logits], dim=1) - - if text_hidden_states is not None: - text_hidden_states = self.transform(text_hidden_states) - # text first so label does not need to be shifted. - text_on_vocab_logits = self.decoder(text_hidden_states) - text_on_video_logits = torch.mm( - text_hidden_states, - non_masked_frame_hidden_states - ) - text_logits = torch.cat([ - text_on_vocab_logits, - text_on_video_logits - ], dim=1) - - return video_logits, text_logits - - -class MTMHead(nn.Module): - def __init__(self, config): - super().__init__() - self.predictions = BertMTMPredictionHead(config) - - def forward( - self, - video_hidden_states=None, - target_video_hidden_states=None, - non_masked_frame_hidden_states=None, - text_hidden_states=None, - ): - video_logits, text_logits = self.predictions( - video_hidden_states, - target_video_hidden_states, - non_masked_frame_hidden_states, - text_hidden_states, - ) - return video_logits, text_logits - - -class MMBertModel(BertModel): - """MMBertModel has MMBertEmbedding to support video tokens.""" - - def __init__(self, config, add_pooling_layer=True): - super().__init__(config) - # overwrite embedding - self.embeddings = MMBertEmbeddings(config) - self.encoder = MultiLayerAttentionMaskBertEncoder(config) - self.init_weights() - - def forward( - self, - input_ids=None, - input_video_embeds=None, - attention_mask=None, - token_type_ids=None, - position_ids=None, - head_mask=None, - inputs_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - separate_forward_split=None, - ): - output_attentions = ( - output_attentions - if output_attentions is not None - else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict = ( - return_dict if return_dict is not None - else self.config.use_return_dict - ) - - if input_ids is not None and inputs_embeds is not None: - raise ValueError( - "You cannot specify both input_ids " - "and inputs_embeds at the same time" - ) - elif input_ids is not None: - if input_video_embeds is not None: - input_shape = ( - input_ids.size(0), - input_ids.size(1) + input_video_embeds.size(1), - ) - else: - input_shape = ( - input_ids.size(0), - input_ids.size(1), - ) - elif inputs_embeds is not None: - if input_video_embeds is not None: - input_shape = ( - inputs_embeds.size(0), - inputs_embeds.size(1) + input_video_embeds.size(1), - ) - else: - input_shape = ( - input_ids.size(0), - input_ids.size(1), - ) - else: - raise ValueError( - "You have to specify either input_ids 
or inputs_embeds") - - device = input_ids.device if input_ids is not None \ - else inputs_embeds.device - - if attention_mask is None: - attention_mask = torch.ones(input_shape, device=device) - if token_type_ids is None: - token_type_ids = torch.zeros( - input_shape, dtype=torch.long, device=device) - - # We can provide a self-attention mask of dimensions - # [batch_size, from_seq_length, to_seq_length] - # ourselves in which case - # we just need to make it broadcastable to all heads. - extended_attention_mask: torch.Tensor = \ - self.get_extended_attention_mask( - attention_mask, input_shape, device) - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to - # [batch_size, num_heads, seq_length, seq_length] - if self.config.is_decoder and encoder_hidden_states is not None: - ( - encoder_batch_size, - encoder_sequence_length, - _, - ) = encoder_hidden_states.size() - encoder_hidden_shape = ( - encoder_batch_size, encoder_sequence_length) - if encoder_attention_mask is None: - encoder_attention_mask = torch.ones( - encoder_hidden_shape, device=device) - encoder_extended_attention_mask = self.invert_attention_mask( - encoder_attention_mask - ) - else: - encoder_extended_attention_mask = None - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or - # [num_hidden_layers x num_heads] - # and head_mask is converted to shape - # [num_hidden_layers x batch x num_heads x seq_length x seq_length] - - head_mask = self.get_head_mask( - head_mask, self.config.num_hidden_layers) - - embedding_output = self.embeddings( - input_ids, - input_video_embeds, - position_ids=position_ids, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - ) - - if separate_forward_split is not None: - split_embedding_output = \ - embedding_output[:, :separate_forward_split] - split_extended_attention_mask = extended_attention_mask[ - :, :, :, :separate_forward_split, :separate_forward_split - ] - split_encoder_outputs = self.encoder( - split_embedding_output, - attention_mask=split_extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - assert ( - len(split_encoder_outputs) <= 2 - ), "we do not support merge on attention for now." - encoder_outputs = [] - encoder_outputs.append([split_encoder_outputs[0]]) - if len(split_encoder_outputs) == 2: - encoder_outputs.append([]) - for _all_hidden_states in split_encoder_outputs[1]: - encoder_outputs[-1].append([_all_hidden_states]) - - split_embedding_output = \ - embedding_output[:, separate_forward_split:] - split_extended_attention_mask = extended_attention_mask[ - :, :, :, separate_forward_split:, separate_forward_split: - ] - - split_encoder_outputs = self.encoder( - split_embedding_output, - attention_mask=split_extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - assert ( - len(split_encoder_outputs) <= 2 - ), "we do not support merge on attention for now." 
- encoder_outputs[0].append(split_encoder_outputs[0]) - encoder_outputs[0] = torch.cat(encoder_outputs[0], dim=1) - if len(split_encoder_outputs) == 2: - for layer_idx, _all_hidden_states in enumerate( - split_encoder_outputs[1] - ): - encoder_outputs[1][layer_idx].append(_all_hidden_states) - encoder_outputs[1][layer_idx] = torch.cat( - encoder_outputs[1][layer_idx], dim=1 - ) - encoder_outputs = tuple(encoder_outputs) - else: - encoder_outputs = self.encoder( - embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = encoder_outputs[0] - pooled_output = ( - self.pooler(sequence_output) if self.pooler is not None else None - ) - - return (sequence_output, pooled_output) + encoder_outputs[1:] - - def get_extended_attention_mask(self, attention_mask, input_shape, device): - """This is borrowed from `modeling_utils.py` with the support of - multi-layer attention masks. - The second dim is expected to be number of layers. - See `MMAttentionMaskProcessor`. - Makes broadcastable attention and causal masks so that future - and masked tokens are ignored. - - Arguments: - attention_mask (:obj:`torch.Tensor`): - Mask with ones indicating tokens to attend to, - zeros for tokens to ignore. - input_shape (:obj:`Tuple[int]`): - The shape of the input to the model. - device: (:obj:`torch.device`): - The device of the input to the model. - - Returns: - :obj:`torch.Tensor` The extended attention mask, \ - with a the same dtype as :obj:`attention_mask.dtype`. - """ - # We can provide a self-attention mask of dimensions - # [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable - # to all heads. 
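A 4-D mask in the branch below means the caller supplied one mask per encoder layer, shaped [batch, n_layers, from_seq, to_seq]; a broadcast head dimension is inserted, and each layer of the multi-layer encoder later slices out its own mask. A toy sketch of the shape bookkeeping (sizes are illustrative):

```python
import torch

batch, n_layers, seq = 2, 12, 16
per_layer_mask = torch.ones(batch, n_layers, seq, seq)

extended = per_layer_mask[:, :, None, :, :]   # [batch, n_layers, 1 head, from, to]
extended = (1.0 - extended) * -10000.0

# inside MultiLayerAttentionMaskBertEncoder, layer i consumes its own slice:
layer_mask_0 = extended[:, 0, :, :, :]
assert layer_mask_0.shape == (batch, 1, seq, seq)
```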
- if attention_mask.dim() == 4: - extended_attention_mask = attention_mask[:, :, None, :, :] - extended_attention_mask = extended_attention_mask.to( - dtype=self.dtype - ) # fp16 compatibility - extended_attention_mask = (1.0 - extended_attention_mask) \ - * -10000.0 - return extended_attention_mask - else: - return super().get_extended_attention_mask( - attention_mask, input_shape, device - ) - - -class MultiLayerAttentionMaskBertEncoder(BertEncoder): - """extend BertEncoder with the capability of - multiple layers of attention mask.""" - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - output_attentions=False, - output_hidden_states=False, - return_dict=False, - ): - all_hidden_states = () if output_hidden_states else None - all_attentions = () if output_attentions else None - for i, layer_module in enumerate(self.layer): - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - layer_head_mask = head_mask[i] if head_mask is not None else None - - layer_attention_mask = ( - attention_mask[:, i, :, :, :] - if attention_mask.dim() == 5 - else attention_mask - ) - - if getattr(self.config, "gradient_checkpointing", False): - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(layer_module), - hidden_states, - layer_attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - ) - else: - layer_outputs = layer_module( - hidden_states, - layer_attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - output_attentions, - ) - hidden_states = layer_outputs[0] - if output_attentions: - all_attentions = all_attentions + (layer_outputs[1],) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - return tuple( - v - for v in [hidden_states, all_hidden_states, all_attentions] - if v is not None - ) diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/modules/__init__.py b/kosmos-g/fairseq/examples/MMPT/mmpt/modules/__init__.py deleted file mode 100644 index 4c78594c2..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/modules/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -from .mm import * - -try: - from .expmm import * -except ImportError: - pass diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/modules/mm.py b/kosmos-g/fairseq/examples/MMPT/mmpt/modules/mm.py deleted file mode 100644 index 5d9777371..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/modules/mm.py +++ /dev/null @@ -1,145 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. -# Copyright (c) Facebook, Inc. All Rights Reserved - - -import torch - -from torch import nn - -try: - from transformers.modeling_bert import ( - BertEmbeddings, - ACT2FN, - ) -except ImportError: - pass - - -class VideoTokenMLP(nn.Module): - def __init__(self, config): - super().__init__() - input_dim = config.input_dim if hasattr(config, "input_dim") else 512 - self.linear1 = nn.Linear(input_dim, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size) - self.activation = ACT2FN[config.hidden_act] - self.linear2 = nn.Linear(config.hidden_size, config.hidden_size) - - def forward(self, hidden_states): - hidden_states = self.linear1(hidden_states) - hidden_states = self.activation(hidden_states) - hidden_states = self.LayerNorm(hidden_states) - hidden_states = self.linear2(hidden_states) - return hidden_states - - -class MMBertEmbeddings(BertEmbeddings): - def __init__(self, config): - super().__init__(config) - self.max_video_len = config.max_video_len - if hasattr(config, "use_seg_emb") and config.use_seg_emb: - """the original VLM paper uses seg_embeddings for temporal space. - although not used it changed the randomness of initialization. - we keep it for reproducibility. - """ - self.seg_embeddings = nn.Embedding(256, config.hidden_size) - - def forward( - self, - input_ids, - input_video_embeds, - token_type_ids=None, - position_ids=None, - inputs_embeds=None, - ): - input_tensor = input_ids if input_ids is not None else inputs_embeds - if input_video_embeds is not None: - input_shape = ( - input_tensor.size(0), - input_tensor.size(1) + input_video_embeds.size(1), - ) - else: - input_shape = (input_tensor.size(0), input_tensor.size(1)) - - if position_ids is None: - """ - Auto skip position embeddings for text only case. - use cases: - (1) action localization and segmentation: - feed in len-1 dummy video token needs text part to - skip input_video_embeds.size(1) for the right - position_ids for video [SEP] and rest text tokens. - (2) MMFusionShare for two forward passings: - in `forward_text`: input_video_embeds is None. - need to skip video [SEP] token. - - # video_len + 1: [CLS] + video_embed - # self.max_video_len + 1: [SEP] for video. - # self.max_video_len + 2: [SEP] for video. - # self.max_video_len + input_ids.size(1): rest for text. - """ - if input_video_embeds is not None: - video_len = input_video_embeds.size(1) - starting_offset = self.max_video_len + 1 # video [SEP] - ending_offset = self.max_video_len + input_ids.size(1) - else: - video_len = 0 - starting_offset = self.max_video_len + 2 # first text token. - ending_offset = self.max_video_len + input_ids.size(1) + 1 - position_ids = torch.cat([ - self.position_ids[:, :video_len + 1], - self.position_ids[:, starting_offset:ending_offset] - ], dim=1) - - if token_type_ids is None: - token_type_ids = torch.zeros( - input_shape, dtype=torch.long, device=self.position_ids.device - ) - - """ - the format of input_ids is [CLS] [SEP] caption [SEP] padding. - the goal is to build [CLS] video tokens [SEP] caption [SEP] . - """ - if inputs_embeds is None: - inputs_embeds = self.word_embeddings(input_ids) - if input_video_embeds is not None: - inputs_mm_embeds = torch.cat([ - inputs_embeds[:, :1], input_video_embeds, inputs_embeds[:, 1:] - ], dim=1) - else: - # text only for `MMFusionShare`. 
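The position-id arithmetic in `MMBertEmbeddings.forward` further below is easier to see with concrete numbers. A toy rerun of the video branch, assuming max_video_len = 32, a 4-frame clip, and input_ids of length 6 (counts chosen purely for illustration):

```python
import torch

max_video_len, video_len, text_ids_len = 32, 4, 6   # text_ids_len = input_ids.size(1)
position_ids = torch.arange(512).unsqueeze(0)       # stand-in for BERT's buffer

# [CLS] + video frames take positions 0..video_len; the text part resumes at
# max_video_len + 1, so the video [SEP] slot is reserved regardless of clip length.
pos = torch.cat([
    position_ids[:, :video_len + 1],
    position_ids[:, max_video_len + 1: max_video_len + text_ids_len],
], dim=1)
print(pos.tolist())   # [[0, 1, 2, 3, 4, 33, 34, 35, 36, 37]]
```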
- inputs_mm_embeds = inputs_embeds - - position_embeddings = self.position_embeddings(position_ids) - token_type_embeddings = self.token_type_embeddings(token_type_ids) - embeddings = inputs_mm_embeds + position_embeddings - embeddings += token_type_embeddings - - embeddings = self.LayerNorm(embeddings) - embeddings = self.dropout(embeddings) - return embeddings - - -class AlignHead(nn.Module): - """this will load pre-trained weights for NSP, which is desirable.""" - - def __init__(self, config): - super().__init__() - self.seq_relationship = nn.Linear(config.hidden_size, 2) - - def forward(self, dropout_pooled_output): - logits = self.seq_relationship(dropout_pooled_output) - return logits diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/modules/retri.py b/kosmos-g/fairseq/examples/MMPT/mmpt/modules/retri.py deleted file mode 100644 index d1b288f8e..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/modules/retri.py +++ /dev/null @@ -1,429 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import os -import numpy as np -import pickle -import time - -try: - import faiss -except ImportError: - pass - -from collections import defaultdict - -from ..utils import get_local_rank, print_on_rank0 - - -class VectorRetriever(object): - """ - How2 Video Retriver. - Reference usage of FAISS: - https://github.com/fairinternal/fairseq-py/blob/paraphrase_pretraining/fairseq/data/multilingual_faiss_dataset.py - """ - - def __init__(self, hidden_size, cent, db_type, examples_per_cent_to_train): - if db_type == "flatl2": - quantizer = faiss.IndexFlatL2(hidden_size) # the other index - self.db = faiss.IndexIVFFlat( - quantizer, hidden_size, cent, faiss.METRIC_L2) - elif db_type == "pq": - self.db = faiss.index_factory( - hidden_size, f"IVF{cent}_HNSW32,PQ32" - ) - else: - raise ValueError("unknown type of db", db_type) - self.train_thres = cent * examples_per_cent_to_train - self.train_cache = [] - self.train_len = 0 - self.videoid_to_vectoridx = {} - self.vectoridx_to_videoid = None - self.make_direct_maps_done = False - - def make_direct_maps(self): - faiss.downcast_index(self.db).make_direct_map() - - def __len__(self): - return self.db.ntotal - - def save(self, out_dir): - faiss.write_index( - self.db, - os.path.join(out_dir, "faiss_idx") - ) - with open( - os.path.join( - out_dir, "videoid_to_vectoridx.pkl"), - "wb") as fw: - pickle.dump( - self.videoid_to_vectoridx, fw, - protocol=pickle.HIGHEST_PROTOCOL - ) - - def load(self, out_dir): - fn = os.path.join(out_dir, "faiss_idx") - self.db = faiss.read_index(fn) - with open( - os.path.join(out_dir, "videoid_to_vectoridx.pkl"), "rb") as fr: - self.videoid_to_vectoridx = pickle.load(fr) - - def add(self, hidden_states, video_ids, last=False): - assert len(hidden_states) == len(video_ids), "{}, {}".format( - str(len(hidden_states)), str(len(video_ids))) - assert len(hidden_states.shape) == 2 - assert hidden_states.dtype == np.float32 - - valid_idx = [] - for idx, video_id in enumerate(video_ids): - if video_id not in self.videoid_to_vectoridx: - valid_idx.append(idx) - self.videoid_to_vectoridx[video_id] = \ - len(self.videoid_to_vectoridx) - - hidden_states = hidden_states[valid_idx] - if not self.db.is_trained: - self.train_cache.append(hidden_states) - self.train_len += hidden_states.shape[0] - if self.train_len < self.train_thres: - return - self.finalize_training() - else: - self.db.add(hidden_states) - - def 
finalize_training(self): - hidden_states = np.concatenate(self.train_cache, axis=0) - del self.train_cache - local_rank = get_local_rank() - if local_rank == 0: - start = time.time() - print("training db on", self.train_thres, "/", self.train_len) - self.db.train(hidden_states[:self.train_thres]) - if local_rank == 0: - print("training db for", time.time() - start) - self.db.add(hidden_states) - - def search( - self, - query_hidden_states, - orig_dist, - ): - if len(self.videoid_to_vectoridx) != self.db.ntotal: - raise ValueError( - "cannot search: size mismatch in-between index and db", - len(self.videoid_to_vectoridx), - self.db.ntotal - ) - - if self.vectoridx_to_videoid is None: - self.vectoridx_to_videoid = { - self.videoid_to_vectoridx[videoid]: videoid - for videoid in self.videoid_to_vectoridx - } - assert len(self.vectoridx_to_videoid) \ - == len(self.videoid_to_vectoridx) - - # MultilingualFaissDataset uses the following; not sure the purpose. - # faiss.ParameterSpace().set_index_parameter(self.db, "nprobe", 10) - queried_dist, index = self.db.search(query_hidden_states, 1) - queried_dist, index = queried_dist[:, 0], index[:, 0] - - outputs = np.array( - [self.vectoridx_to_videoid[_index] - if _index != -1 else (-1, -1, -1) for _index in index], - dtype=np.int32) - outputs[queried_dist <= orig_dist] = -1 - return outputs - - def search_by_video_ids( - self, - video_ids, - retri_factor - ): - if len(self.videoid_to_vectoridx) != self.db.ntotal: - raise ValueError( - len(self.videoid_to_vectoridx), - self.db.ntotal - ) - - if not self.make_direct_maps_done: - self.make_direct_maps() - - if self.vectoridx_to_videoid is None: - self.vectoridx_to_videoid = { - self.videoid_to_vectoridx[videoid]: videoid - for videoid in self.videoid_to_vectoridx - } - assert len(self.vectoridx_to_videoid) \ - == len(self.videoid_to_vectoridx) - - query_hidden_states = [] - vector_ids = [] - for video_id in video_ids: - vector_id = self.videoid_to_vectoridx[video_id] - vector_ids.append(vector_id) - query_hidden_state = self.db.reconstruct(vector_id) - query_hidden_states.append(query_hidden_state) - query_hidden_states = np.stack(query_hidden_states) - - # MultilingualFaissDataset uses the following; not sure the reason. - # faiss.ParameterSpace().set_index_parameter(self.db, "nprobe", 10) - _, index = self.db.search(query_hidden_states, retri_factor) - outputs = [] - for sample_idx, sample in enumerate(index): - # the first video_id is always the video itself. - cands = [video_ids[sample_idx]] - for vector_idx in sample: - if vector_idx >= 0 \ - and vector_ids[sample_idx] != vector_idx: - cands.append( - self.vectoridx_to_videoid[vector_idx] - ) - outputs.append(cands) - return outputs - - -class VectorRetrieverDM(VectorRetriever): - """ - with direct map. - How2 Video Retriver. 
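The caching in `add`/`finalize_training` above exists because a FAISS IVF index must be trained on a sample of vectors before any can be added. A condensed sketch of that lifecycle, mirroring the `flatl2` branch of the constructor (faiss is assumed installed; the sizes are illustrative):

```python
import numpy as np
import faiss

hidden, cent = 64, 16
quantizer = faiss.IndexFlatL2(hidden)
db = faiss.IndexIVFFlat(quantizer, hidden, cent, faiss.METRIC_L2)

xs = np.random.rand(cent * 48, hidden).astype("float32")
assert not db.is_trained
db.train(xs)       # train the coarse centroids once the cache is large enough
db.add(xs)         # only a trained index accepts vectors
dist, idx = db.search(xs[:2], 1)   # each vector is typically its own nearest neighbour
```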
- Reference usage of FAISS: - https://github.com/fairinternal/fairseq-py/blob/paraphrase_pretraining/fairseq/data/multilingual_faiss_dataset.py - """ - - def __init__( - self, - hidden_size, - cent, - db_type, - examples_per_cent_to_train - ): - super().__init__( - hidden_size, cent, db_type, examples_per_cent_to_train) - self.make_direct_maps_done = False - - def make_direct_maps(self): - faiss.downcast_index(self.db).make_direct_map() - self.make_direct_maps_done = True - - def search( - self, - query_hidden_states, - orig_dist, - ): - if len(self.videoid_to_vectoridx) != self.db.ntotal: - raise ValueError( - len(self.videoid_to_vectoridx), - self.db.ntotal - ) - - if not self.make_direct_maps_done: - self.make_direct_maps() - if self.vectoridx_to_videoid is None: - self.vectoridx_to_videoid = { - self.videoid_to_vectoridx[videoid]: videoid - for videoid in self.videoid_to_vectoridx - } - assert len(self.vectoridx_to_videoid) \ - == len(self.videoid_to_vectoridx) - - # MultilingualFaissDataset uses the following; not sure the reason. - # faiss.ParameterSpace().set_index_parameter(self.db, "nprobe", 10) - queried_dist, index = self.db.search(query_hidden_states, 1) - outputs = [] - for sample_idx, sample in enumerate(index): - # and queried_dist[sample_idx] < thres \ - if sample >= 0 \ - and queried_dist[sample_idx] < orig_dist[sample_idx]: - outputs.append(self.vectoridx_to_videoid[sample]) - else: - outputs.append(None) - return outputs - - def search_by_video_ids( - self, - video_ids, - retri_factor=8 - ): - if len(self.videoid_to_vectoridx) != self.db.ntotal: - raise ValueError( - len(self.videoid_to_vectoridx), - self.db.ntotal - ) - - if not self.make_direct_maps_done: - self.make_direct_maps() - if self.vectoridx_to_videoid is None: - self.vectoridx_to_videoid = { - self.videoid_to_vectoridx[videoid]: videoid - for videoid in self.videoid_to_vectoridx - } - assert len(self.vectoridx_to_videoid) \ - == len(self.videoid_to_vectoridx) - - query_hidden_states = [] - vector_ids = [] - for video_id in video_ids: - vector_id = self.videoid_to_vectoridx[video_id] - vector_ids.append(vector_id) - query_hidden_state = self.db.reconstruct(vector_id) - query_hidden_states.append(query_hidden_state) - query_hidden_states = np.stack(query_hidden_states) - - # MultilingualFaissDataset uses the following; not sure the reason. - # faiss.ParameterSpace().set_index_parameter(self.db, "nprobe", 10) - _, index = self.db.search(query_hidden_states, retri_factor) - outputs = [] - for sample_idx, sample in enumerate(index): - # the first video_id is always the video itself. - cands = [video_ids[sample_idx]] - for vector_idx in sample: - if vector_idx >= 0 \ - and vector_ids[sample_idx] != vector_idx: - cands.append( - self.vectoridx_to_videoid[vector_idx] - ) - outputs.append(cands) - return outputs - - -class MMVectorRetriever(VectorRetrieverDM): - """ - multimodal vector retriver: - text retrieve video or video retrieve text. 
- """ - - def __init__(self, hidden_size, cent, db_type, examples_per_cent_to_train): - super().__init__( - hidden_size, cent, db_type, examples_per_cent_to_train) - video_db = self.db - super().__init__( - hidden_size, cent, db_type, examples_per_cent_to_train) - text_db = self.db - self.db = {"video": video_db, "text": text_db} - self.video_to_videoid = defaultdict(list) - - def __len__(self): - assert self.db["video"].ntotal == self.db["text"].ntotal - return self.db["video"].ntotal - - def make_direct_maps(self): - faiss.downcast_index(self.db["video"]).make_direct_map() - faiss.downcast_index(self.db["text"]).make_direct_map() - - def save(self, out_dir): - faiss.write_index( - self.db["video"], - os.path.join(out_dir, "video_faiss_idx") - ) - faiss.write_index( - self.db["text"], - os.path.join(out_dir, "text_faiss_idx") - ) - - with open( - os.path.join( - out_dir, "videoid_to_vectoridx.pkl"), - "wb") as fw: - pickle.dump( - self.videoid_to_vectoridx, fw, - protocol=pickle.HIGHEST_PROTOCOL - ) - - def load(self, out_dir): - fn = os.path.join(out_dir, "video_faiss_idx") - video_db = faiss.read_index(fn) - fn = os.path.join(out_dir, "text_faiss_idx") - text_db = faiss.read_index(fn) - self.db = {"video": video_db, "text": text_db} - with open( - os.path.join(out_dir, "videoid_to_vectoridx.pkl"), "rb") as fr: - self.videoid_to_vectoridx = pickle.load(fr) - self.video_to_videoid = defaultdict(list) - - def add(self, hidden_states, video_ids): - """hidden_states is a pair `(video, text)`""" - assert len(hidden_states) == len(video_ids), "{}, {}".format( - str(len(hidden_states)), str(len(video_ids))) - assert len(hidden_states.shape) == 3 - assert len(self.video_to_videoid) == 0 - - valid_idx = [] - for idx, video_id in enumerate(video_ids): - if video_id not in self.videoid_to_vectoridx: - valid_idx.append(idx) - self.videoid_to_vectoridx[video_id] = \ - len(self.videoid_to_vectoridx) - - batch_size = hidden_states.shape[0] - hidden_states = hidden_states[valid_idx] - - hidden_states = np.transpose(hidden_states, (1, 0, 2)).copy() - if not self.db["video"].is_trained: - self.train_cache.append(hidden_states) - train_len = batch_size * len(self.train_cache) - if train_len < self.train_thres: - return - - hidden_states = np.concatenate(self.train_cache, axis=1) - del self.train_cache - self.db["video"].train(hidden_states[0, :self.train_thres]) - self.db["text"].train(hidden_states[1, :self.train_thres]) - self.db["video"].add(hidden_states[0]) - self.db["text"].add(hidden_states[1]) - - def get_clips_by_video_id(self, video_id): - if not self.video_to_videoid: - for video_id, video_clip, text_clip in self.videoid_to_vectoridx: - self.video_to_videoid[video_id].append( - (video_id, video_clip, text_clip)) - return self.video_to_videoid[video_id] - - def search( - self, - video_ids, - target_modality, - retri_factor=8 - ): - if len(self.videoid_to_vectoridx) != len(self): - raise ValueError( - len(self.videoid_to_vectoridx), - len(self) - ) - - if not self.make_direct_maps_done: - self.make_direct_maps() - if self.vectoridx_to_videoid is None: - self.vectoridx_to_videoid = { - self.videoid_to_vectoridx[videoid]: videoid - for videoid in self.videoid_to_vectoridx - } - assert len(self.vectoridx_to_videoid) \ - == len(self.videoid_to_vectoridx) - - src_modality = "text" if target_modality == "video" else "video" - - query_hidden_states = [] - vector_ids = [] - for video_id in video_ids: - vector_id = self.videoid_to_vectoridx[video_id] - vector_ids.append(vector_id) - query_hidden_state = 
self.db[src_modality].reconstruct(vector_id) - query_hidden_states.append(query_hidden_state) - query_hidden_states = np.stack(query_hidden_states) - - # MultilingualFaissDataset uses the following; not sure the reason. - # faiss.ParameterSpace().set_index_parameter(self.db, "nprobe", 10) - _, index = self.db[target_modality].search( - query_hidden_states, retri_factor) - outputs = [] - for sample_idx, sample in enumerate(index): - cands = [] - for vector_idx in sample: - if vector_idx >= 0: - cands.append( - self.vectoridx_to_videoid[vector_idx] - ) - outputs.append(cands) - return outputs diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/modules/vectorpool.py b/kosmos-g/fairseq/examples/MMPT/mmpt/modules/vectorpool.py deleted file mode 100644 index d2b23d2da..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/modules/vectorpool.py +++ /dev/null @@ -1,246 +0,0 @@ -# Copyright (c) Facebook, Inc. All Rights Reserved - -import torch -import os -import numpy as np -import pickle - -from . import retri -from ..utils import get_local_rank - - -class VectorPool(object): - """ - Base class of retrieval space. - """ - - def __init__(self, config): - from transformers import AutoConfig - self.hidden_size = AutoConfig.from_pretrained( - config.dataset.bert_name).hidden_size - self.retriever_cls = getattr(retri, config.retriever_cls) - - def __call__(self, sample, **kwargs): - raise NotImplementedError - - def build_retriver( - self, - retriever_cls=None, - hidden_size=None, - centroids=512, - db_type="flatl2", - examples_per_cent_to_train=48 - ): - - """merge results from multiple gpus and return a retriver..""" - self.retriver = retriever_cls( - hidden_size, centroids, db_type, examples_per_cent_to_train) - return self.retriver - - def __repr__(self): - if hasattr(self, "retriver"): - retriver_name = str(len(self.retriver)) - else: - retriver_name = "no retriver field yet" - return self.__class__.__name__ \ - + "(" + retriver_name + ")" - - -class VideoVectorPool(VectorPool): - """ - average clips of a video as video representation. - """ - def __init__(self, config): - super().__init__(config) - self.build_retriver(self.retriever_cls, self.hidden_size) - - def __call__(self, sample, subsampling, **kwargs): - hidden_states = ( - sample["pooled_video"] + sample["pooled_text"]) / 2. - hidden_states = hidden_states.view( - -1, subsampling, - hidden_states.size(-1)) - hidden_states = torch.mean(hidden_states, dim=1) - hidden_states = hidden_states.cpu().detach().numpy() - video_ids = [] - for offset_idx, video_id in enumerate(sample["video_id"]): - if isinstance(video_id, tuple) and len(video_id) == 3: - # a sharded video_id. - video_id = video_id[0] - video_ids.append(video_id) - assert len(video_ids) == len(hidden_states) - self.retriver.add( - hidden_states.astype("float32"), - video_ids - ) - - -class DistributedVectorPool(VectorPool): - """ - support sync of multiple gpus/nodes. 
- """ - def __init__(self, config): - super().__init__(config) - self.out_dir = os.path.join( - config.fairseq.checkpoint.save_dir, - "retri") - os.makedirs(self.out_dir, exist_ok=True) - self.hidden_states = [] - self.video_ids = [] - - def build_retriver( - self, - retriever_cls=None, - hidden_size=None, - centroids=4096, - db_type="flatl2", - examples_per_cent_to_train=48 - ): - if retriever_cls is None: - retriever_cls = self.retriever_cls - if hidden_size is None: - hidden_size = self.hidden_size - """merge results from multiple gpus and return a retriver..""" - if torch.distributed.is_initialized(): - self.save() - # sync saving. - torch.distributed.barrier() - world_size = torch.distributed.get_world_size() - else: - world_size = 1 - self.retriver = retriever_cls( - hidden_size, centroids, db_type, examples_per_cent_to_train) - # each gpu process has its own retriever. - for local_rank in range(world_size): - if get_local_rank() == 0: - print("load local_rank", local_rank) - hidden_states, video_ids = self.load(local_rank) - hidden_states = hidden_states.astype("float32") - self.retriver.add(hidden_states, video_ids) - return self.retriver - - def load(self, local_rank): - hidden_states = np.load( - os.path.join( - self.out_dir, - "hidden_state" + str(local_rank) + ".npy" - ) - ) - - with open( - os.path.join( - self.out_dir, "video_id" + str(local_rank) + ".pkl"), - "rb") as fr: - video_ids = pickle.load(fr) - return hidden_states, video_ids - - def save(self): - hidden_states = np.vstack(self.hidden_states) - assert len(hidden_states) == len(self.video_ids), "{}, {}".format( - len(hidden_states), - len(self.video_ids) - ) - local_rank = torch.distributed.get_rank() \ - if torch.distributed.is_initialized() else 0 - - np.save( - os.path.join( - self.out_dir, - "hidden_state" + str(local_rank) + ".npy"), - hidden_states) - - with open( - os.path.join( - self.out_dir, - "video_id" + str(local_rank) + ".pkl"), - "wb") as fw: - pickle.dump( - self.video_ids, - fw, - protocol=pickle.HIGHEST_PROTOCOL - ) - - -class DistributedVideoVectorPool(DistributedVectorPool): - """ - average clips of a video as video representation. - """ - def __call__(self, sample, subsampling, **kwargs): - hidden_states = ( - sample["pooled_video"] + sample["pooled_text"]) / 2. - hidden_states = hidden_states.view( - -1, subsampling, - hidden_states.size(-1)) - hidden_states = torch.mean(hidden_states, dim=1) - hidden_states = hidden_states.cpu().detach().numpy() - video_ids = [] - for offset_idx, video_id in enumerate(sample["video_id"]): - if isinstance(video_id, tuple) and len(video_id) == 3: - # a sharded video_id. - video_id = video_id[0] - video_ids.append(video_id) - assert len(video_ids) == len(hidden_states) - self.hidden_states.append(hidden_states) - self.video_ids.extend(video_ids) - - -# ------------ the following are deprecated -------------- - -class TextClipVectorPool(VectorPool): - def __init__(self, config): - from transformers import AutoConfig - hidden_size = AutoConfig.from_pretrained( - config.dataset.bert_name).hidden_size - retriever_cls = getattr(retri, config.retriever_cls) - self.build_retriver(retriever_cls, hidden_size) - - def __call__(self, sample, **kwargs): - clip_meta = sample["clip_meta"].cpu() - assert torch.all(torch.le(clip_meta[:, 4], clip_meta[:, 5])) - text_meta = [tuple(item.tolist()) for item in clip_meta[:, 3:]] - - if hasattr(self, "retriver"): - # build_retriver is called. 
- self.retriver.add( - sample["pooled_text"].cpu().numpy().astype("float32"), - text_meta - ) - else: - raise NotImplementedError - - -class MMClipVectorPool(VectorPool): - """ - Multimodal Clip-level vector pool. - """ - def __init__(self, out_dir): - """use hidden_states to store `(video, text)`.""" - """use video_ids to store `(video_id, start, end)`.""" - super().__init__(out_dir) - - def __call__(self, sample, **kwargs): - pooled_video = sample["pooled_video"].cpu().unsqueeze(1).numpy() - pooled_text = sample["pooled_text"].cpu().unsqueeze(1).numpy() - - self.hidden_states.append( - np.concatenate([pooled_video, pooled_text], axis=1) - ) - - video_starts = sample["video_start"].cpu() - video_ends = sample["video_end"].cpu() - assert torch.all(torch.le(video_starts, video_ends)) - - text_starts = sample["text_start"].cpu() - text_ends = sample["text_end"].cpu() - assert torch.all(torch.le(text_starts, text_ends)) - subsample_size = sample["pooled_video"].size(0) // len(sample["video_id"]) - video_ids = [video_id for video_id in sample["video_id"] - for _ in range(subsample_size) - ] - for video_id, video_start, video_end, text_start, text_end in zip( - video_ids, video_starts, video_ends, text_starts, text_ends): - self.video_ids.append(( - video_id, - (int(video_start), int(video_end)), - (int(text_start), int(text_end)) - )) diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/processors/__init__.py b/kosmos-g/fairseq/examples/MMPT/mmpt/processors/__init__.py deleted file mode 100644 index 434d1d92b..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/processors/__init__.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -from .processor import * - -from .how2processor import * -from .how2retriprocessor import * - -from .dsprocessor import * - -try: - from .rawvideoprocessor import * - from .codecprocessor import * - from .webvidprocessor import * - from .expprocessor import * - from .exphow2processor import * - from .exphow2retriprocessor import * - from .expcodecprocessor import * - from .expfeatureencoder import * - from .expdsprocessor import * -except ImportError: - pass diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/processors/dedupprocessor.py b/kosmos-g/fairseq/examples/MMPT/mmpt/processors/dedupprocessor.py deleted file mode 100644 index 8a1ad402c..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/processors/dedupprocessor.py +++ /dev/null @@ -1,242 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import random -import json -import pickle -from tqdm import tqdm -import os -import numpy as np - - -class CaptionDedupProcessor(object): - """remove overlapping of caption sentences(clip). 
- Some statistics: - caption: - {'t_clip_len': 246.6448431320854, - 'video_len': 281.09174795676245, - 'clip_tps': 0.8841283727427481, - 'video_tps': 0.7821156477732097, - 'min_clip_len': 0.0, - 'max_clip_len': 398.3, - 'mean_clip_len': 3.196580003006861, - 'num_clip': 77.15897706301081} - - raw_caption: - {'t_clip_len': 238.95908778424115, - 'video_len': 267.5914859862507, - 'clip_tps': 2.4941363624267963, - 'video_tps': 2.258989769647173, - 'min_clip_len': 0.0, - 'max_clip_len': 398.3, - 'mean_clip_len': 3.0537954186814265, - 'num_clip': 78.24986779481756} - """ - - def __init__(self, pkl_file): - with open(pkl_file, "rb") as fd: - self.data = pickle.load(fd) - self.stat = { - "t_clip_len": [], - "video_len": [], - "clip_tps": [], - "video_tps": [], - "clip_len": [], - } - - def __call__(self): - for idx, video_id in enumerate(tqdm(self.data)): - caption = json.loads(self.data[video_id]) - caption = self._dedup(caption) - if idx < 4096: # for the first 4096 examples, compute the statistics. - self.save_stat(video_id, caption) - self.data[video_id] = json.dumps(caption) - self.print_stat() - - def single(self, video_id): - caption = json.loads(self.data[video_id]) - for clip_idx, (start, end, text) in enumerate( - zip(caption["start"], caption["end"], caption["text"]) - ): - print(start, end, text) - print("@" * 100) - caption = self._dedup(caption) - for clip_idx, (start, end, text) in enumerate( - zip(caption["start"], caption["end"], caption["text"]) - ): - print(start, end, text) - print("#" * 100) - self.save_stat(video_id, caption) - self.print_stat() - - def finalize(self, tgt_fn): - with open(tgt_fn, "wb") as fw: - pickle.dump(self.data, fw, pickle.HIGHEST_PROTOCOL) - - def save_stat(self, video_id, caption): - video_fn = os.path.join( - "data/feat/feat_how2_s3d", video_id + ".npy" - ) - if os.path.isfile(video_fn): - with open(video_fn, "rb", 1) as fr: # 24 is the buffer size. buffered - version = np.lib.format.read_magic(fr) - shape, fortran, dtype = np.lib.format._read_array_header(fr, version) - video_len = shape[0] - - t_clip_len = 0.0 - t_tokens = 0 - for idx, (start, end, text) in enumerate( - zip(caption["start"], caption["end"], caption["text"]) - ): - clip_len = ( - (end - max(caption["end"][idx - 1], start)) - if idx > 0 - else end - start - ) - t_clip_len += clip_len - t_tokens += len(text.split(" ")) - self.stat["clip_len"].append(clip_len) - self.stat["t_clip_len"].append(t_clip_len) - self.stat["video_len"].append(video_len) - self.stat["clip_tps"].append(t_tokens / t_clip_len) - self.stat["video_tps"].append(t_tokens / video_len) - - def print_stat(self): - result = { - "t_clip_len": np.mean(self.stat["t_clip_len"]), - "video_len": np.mean(self.stat["video_len"]), - "clip_tps": np.mean(self.stat["clip_tps"]), - "video_tps": np.mean(self.stat["video_tps"]), - "min_clip_len": min(self.stat["clip_len"]), - "max_clip_len": max(self.stat["clip_len"]), - "mean_clip_len": np.mean(self.stat["clip_len"]), - "num_clip": len(self.stat["clip_len"]) / len(self.stat["video_tps"]), - } - print(result) - - def _dedup(self, caption): - def random_merge(end_idx, start, end, text, starts, ends, texts): - if random.random() > 0.5: - # print(clip_idx, "[PARTIAL INTO PREV]", end_idx) - # overlapped part goes to the end of previous. - ends[-1] = max(ends[-1], start) # ? - rest_text = text[end_idx:].strip() - if rest_text: - starts.append(max(ends[-1], start)) - ends.append(max(end, starts[-1])) - texts.append(rest_text) - else: # goes to the beginning of the current. 
- # strip the previous. - left_text = texts[-1][:-end_idx].strip() - if left_text: - # print(clip_idx, "[PREV PARTIAL INTO CUR]", end_idx) - ends[-1] = min(ends[-1], start) - texts[-1] = left_text - else: - # print(clip_idx, "[PREV LEFT NOTHING ALL INTO CUR]", end_idx) - starts.pop(-1) - ends.pop(-1) - texts.pop(-1) - starts.append(start) - ends.append(end) - texts.append(text) - - starts, ends, texts = [], [], [] - for clip_idx, (start, end, text) in enumerate( - zip(caption["start"], caption["end"], caption["text"]) - ): - if not isinstance(text, str): - continue - text = text.replace("\n", " ").strip() - if len(text) == 0: - continue - starts.append(start) - ends.append(end) - texts.append(text) - break - - for clip_idx, (start, end, text) in enumerate( - zip( - caption["start"][clip_idx + 1:], - caption["end"][clip_idx + 1:], - caption["text"][clip_idx + 1:], - ) - ): - if not isinstance(text, str): - continue - text = text.replace("\n", " ").strip() - if len(text) == 0: - continue - - # print(clip_idx, texts[-5:]) - # print(clip_idx, start, end, text) - if texts[-1].endswith(text): # subset of prev caption -> merge - # print(clip_idx, "[MERGE INTO PREV]") - ends[-1] = max(ends[-1], end) - elif text.startswith(texts[-1]): # superset of prev caption -> merge - # print(clip_idx, "[PREV MERGE INTO CUR]") - texts[-1] = text - starts[-1] = min(starts[-1], start) - ends[-1] = max(ends[-1], end) - else: # overlapping or non-overlapping. - for end_idx in range(1, len(text) + 1): - if texts[-1].endswith(text[:end_idx]): - random_merge(end_idx, start, end, text, starts, ends, texts) - break - else: - starts.append(start) - ends.append(end) - texts.append(text) - - assert (ends[-1] + 0.001) >= starts[-1] and len( - texts[-1] - ) > 0, "{} {} {} <- {} {} {}, {} {} {}".format( - str(starts[-1]), - str(ends[-1]), - texts[-1], - caption["start"][clip_idx - 1], - caption["end"][clip_idx - 1], - caption["text"][clip_idx - 1], - str(start), - str(end), - text, - ) - - return {"start": starts, "end": ends, "text": texts} - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser(description="dedup how2 caption") - parser.add_argument('--how2dir', default="data/how2") - args = parser.parse_args() - - raw_caption_json = os.path.join(args.how2dir, "raw_caption.json") - raw_caption_pickle = os.path.join(args.how2dir, "raw_caption.pkl") - raw_caption_dedup_pickle = os.path.join(args.how2dir, "raw_caption_dedup.pkl") - - def convert_to_pickle(src_fn, tgt_fn): - with open(src_fn) as fd: - captions = json.load(fd) - - for video_id in captions: - captions[video_id] = json.dumps(captions[video_id]) - - with open(tgt_fn, "wb") as fw: - pickle.dump(captions, fw, pickle.HIGHEST_PROTOCOL) - - if not os.path.isfile(raw_caption_pickle): - convert_to_pickle(raw_caption_json, raw_caption_pickle) - - deduper = CaptionDedupProcessor(raw_caption_pickle) - deduper() - deduper.finalize(raw_caption_dedup_pickle) - - """ - # demo - deduper = CaptionDedupProcessor("data/how2/raw_caption.pkl") - deduper.single("HfIeQ9pzL5U") - """ diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/processors/dsprocessor.py b/kosmos-g/fairseq/examples/MMPT/mmpt/processors/dsprocessor.py deleted file mode 100644 index ecebf0eea..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/processors/dsprocessor.py +++ /dev/null @@ -1,848 +0,0 @@ -# Copyright (c) Facebook, Inc. All Rights Reserved - -""" -Processors for all downstream (ds) tasks. 
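The suffix/prefix cases in `_dedup` above reduce to a simple string-splicing rule. The toy below captures the idea with a deterministic longest-overlap policy; note the real code scans overlaps shortest-first and randomizes which side keeps the overlapping span.

```python
def merge_captions(prev: str, cur: str) -> str:
    """Toy overlap merge: drop cur if prev already ends with it, replace prev
    if cur extends it, otherwise splice at the longest suffix/prefix overlap."""
    if prev.endswith(cur):        # cur is a subset of prev
        return prev
    if cur.startswith(prev):      # cur is a superset of prev
        return cur
    for end_idx in range(len(cur), 0, -1):
        if prev.endswith(cur[:end_idx]):
            return prev + cur[end_idx:]
    return prev + " " + cur       # no overlap: keep both

assert merge_captions("how to make tea", "make tea") == "how to make tea"
assert merge_captions("how to", "how to make tea") == "how to make tea"
assert merge_captions("pour the hot", "the hot water") == "pour the hot water"
```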
-""" - -import json -import os -import pickle -import random -import math -import numpy as np -import torch - -from collections import defaultdict - -from .processor import ( - MetaProcessor, - VideoProcessor, - TextProcessor, - Aligner, - MMAttentionMask2DProcessor, -) - -from .how2processor import TextGenerationProcessor - - -# ------------- A General Aligner for all downstream tasks----------------- - - -class DSAligner(Aligner): - """ - Downstream (DS) aligner shared by all datasets. - """ - - def __call__(self, video_id, video_feature, text_feature, wps=0.7): - # random sample a starting sec for video. - video_start = 0 - video_end = min(len(video_feature), self.max_video_len) - # the whole sequence is a single clip. - video_clips = {"start": [video_start], "end": [video_end]} - - text_feature = { - "cap": [text_feature], - "start": [video_start], - "end": [len(text_feature) / wps], - } - text_clip_indexs = [0] - - vfeats, vmasks = self._build_video_seq( - video_feature, video_clips - ) - caps, cmasks = self._build_text_seq( - text_feature, text_clip_indexs - ) - - return { - "caps": caps, - "cmasks": cmasks, - "vfeats": vfeats, - "vmasks": vmasks, - "video_id": video_id, - } - - -class NLGTextProcessor(TextProcessor): - """ - Also return the original text as ref. - """ - def __call__(self, text_id): - return super().__call__(text_id), text_id - - -class DSNLGAligner(DSAligner): - """extend with the capability of 2d mask for generation.""" - def __init__(self, config): - super().__init__(config) - self.attnmasker = MMAttentionMask2DProcessor() - from transformers import AutoTokenizer - tokenizer = AutoTokenizer.from_pretrained( - self.bert_name, use_fast=self.use_fast, - bos_token="[CLS]", eos_token="[SEP]" - ) - self.tokenizer = tokenizer - self.bos_token_id = tokenizer.bos_token_id - self.eos_token_id = tokenizer.eos_token_id - self.textgen = TextGenerationProcessor(tokenizer) - - def __call__(self, video_id, video_feature, text_feature): - output = super().__call__(video_id, video_feature, text_feature[0]) - if self.split == "test": - # output.update({"ref": text_feature[1]}) - output.update({"ref": self.tokenizer.decode( - output["caps"], skip_special_tokens=True)}) - text_label = output["caps"] - cmasks = torch.BoolTensor([1] * text_label.size(0)) - caps = torch.LongTensor([ - self.cls_token_id, - self.sep_token_id, - self.bos_token_id]) - else: - caps, text_label = self.textgen(output["caps"]) - cmasks = output["cmasks"] - - attention_mask = self.attnmasker( - output["vmasks"], cmasks, "textgen") - - output.update({ - "caps": caps, - "cmasks": cmasks, - "text_label": text_label, - "attention_mask": attention_mask, - }) - return output - - -# -------------------- MSRVTT ------------------------ - - -class MSRVTTMetaProcessor(MetaProcessor): - """MSRVTT dataset. - reference: `howto100m/msrvtt_dataloader.py` - """ - - def __init__(self, config): - super().__init__(config) - import pandas as pd - data = pd.read_csv(self._get_split_path(config)) - # TODO: add a text1ka flag. - if config.split == "train" \ - and config.full_test_path is not None \ - and config.jsfusion_path is not None: - # add testing videos from full_test_path not used by jfusion. 
- additional_data = pd.read_csv(config.full_test_path) - jsfusion_data = pd.read_csv(config.jsfusion_path) - - for video_id in additional_data["video_id"]: - if video_id not in jsfusion_data["video_id"].values: - data = data.append( - {"video_id": video_id}, ignore_index=True) - - if config.dup is not None and config.split == "train": - data = data.append([data] * (config.dup - 1), ignore_index=True) - self.data = data - - def __len__(self): - return len(self.data) - - def __getitem__(self, idx): - """slightly modify with if condition to combine train/test.""" - vid, sentence = None, None - vid = self.data["video_id"].values[idx] - if "sentence" in self.data: # for testing. - sentence = self.data["sentence"].values[idx] - else: # for training. - sentence = vid - return vid, sentence - - -class MSRVTTTextProcessor(TextProcessor): - """MSRVTT dataset. - reference: `msrvtt_dataloader.py` `MSRVTT_TrainDataLoader`. - TODO (huxu): add max_words. - """ - - def __init__(self, config): - super().__init__(config) - self.sentences = None - if config.json_path is not None and config.split == "train": - with open(config.json_path) as fd: - self.data = json.load(fd) - self.sentences = defaultdict(list) - for s in self.data["sentences"]: - self.sentences[s["video_id"]].append(s["caption"]) - - def __call__(self, text_id): - if self.sentences is not None: - rind = random.randint(0, len(self.sentences[text_id]) - 1) - sentence = self.sentences[text_id][rind] - else: - sentence = text_id - caption = self.tokenizer(sentence, add_special_tokens=False) - return caption["input_ids"] - - -class MSRVTTNLGTextProcessor(MSRVTTTextProcessor): - """TODO: change dsaligner and merge to avoid any NLG text processor.""" - def __call__(self, text_id): - if self.sentences is not None: - rind = random.randint(0, len(self.sentences[text_id]) - 1) - sentence = self.sentences[text_id][rind] - else: - sentence = text_id - caption = self.tokenizer(sentence, add_special_tokens=False) - return caption["input_ids"], sentence - - -class MSRVTTQAMetaProcessor(MetaProcessor): - """MSRVTT-QA: retrieval-based multi-choice QA from JSFusion dataset. - For simplicity, we use the train retrieval model. - reference: `https://github.com/yj-yu/lsmdc` - """ - - def __init__(self, config): - super().__init__(config) - import pandas as pd - csv_data = pd.read_csv(self._get_split_path(config), sep="\t") - data = [] - for video_id, a1, a2, a3, a4, a5, answer in zip( - csv_data["vid_key"].values, - csv_data["a1"].values, - csv_data["a2"].values, - csv_data["a3"].values, - csv_data["a4"].values, - csv_data["a5"].values, - csv_data["answer"].values): - video_id = video_id.replace("msr", "video") - data.append((video_id, (answer, [a1, a2, a3, a4, a5]))) - self.data = data - - def __len__(self): - return len(self.data) - - def __getitem__(self, idx): - return self.data[idx] - - -class MSRVTTQATextProcessor(TextProcessor): - """MSRVTT-QA dataset. - text_ans is of format `(answer, [a1, a2, a3, a4, a5])`. - """ - - def __call__(self, text_ans): - for ans_idx, ans in enumerate(text_ans[1]): - if isinstance(ans, str): - text_ans[1][ans_idx] = self.tokenizer(ans, add_special_tokens=False)["input_ids"] - return text_ans - - -class MSRVTTQAAligner(DSAligner): - """MSRVTT dataset. - similar to sample in how2. - we call __call__ multiple times. 
- """ - - def __call__(self, video_id, video_feature, text_feature, wps=0.7): - caps = [] - cmasks = [] - answer = text_feature[0] - for ans_idx, _text_feature in enumerate(text_feature[1]): - output = super().__call__( - video_id, video_feature, _text_feature, wps) - caps.append(output["caps"]) - cmasks.append(output["cmasks"]) - output.update({ - "caps": torch.stack(caps), - "cmasks": torch.stack(cmasks), - "answers": torch.LongTensor([answer]), - }) - return output - - -# -------------------- Youcook ----------------------- - - -class YoucookMetaProcessor(MetaProcessor): - """Youcook dataset. - reference: `howto100m/youcook_dataloader.py` - note that the data can be different as the - (1) some videos already in Howto100m are removed. - (2) stop words are removed from caption - TODO (huxu): make a flag to load the original caption. - (see youcookii_annotations_trainval.json). - - The max_video_len can be 264 and text can be 64 tokens. - In reality we may not need that long. see projects/task/youcook.yaml - """ - - def __init__(self, config): - super().__init__(config) - vfeat_dir = config.vfeat_dir - print(self._get_split_path(config)) - with open(self._get_split_path(config), "rb") as fd: - data = pickle.load(fd) - all_valid_video_ids = set( - [os.path.splitext(fn)[0] for fn in os.listdir(vfeat_dir)] - ) - recs = [] - video_ids = set() - valid_video_ids = set() - for rec in data: # filter videos not available. - udl_idx = rec["id"].rindex("_") - video_id = rec["id"][:udl_idx] - video_ids.add(video_id) - if video_id in all_valid_video_ids: - valid_video_ids.add(video_id) - recs.append(rec) - print("total video_ids in .pkl", len(video_ids)) - print("valid video_ids in .pkl", len(valid_video_ids)) - print("please verify {train,val}_list.txt") - data = recs - self.data = data - - with open(config.trainval_annotation) as fd: - self.youcook_annotation = json.load(fd)["database"] - if config.use_annotation_text is True: - print("using text in annotation.") - self.use_annotation_caption = True - else: - self.use_annotation_caption = False - - def __getitem__(self, idx): - def _get_video_and_caption(rec): - vid = rec["id"] - udl_idx = vid.rindex("_") - video_id, clip_id = vid[:udl_idx], int(vid[udl_idx + 1:]) - clip = self.youcook_annotation[video_id]["annotations"][clip_id] - start, end = clip["segment"] - if self.use_annotation_caption: - caption = clip["sentence"] - else: - caption = rec["caption"] - return (video_id, start, end), caption - - rec = self.data[idx] - video_info, text_info = _get_video_and_caption(rec) - return video_info, text_info - - -class YoucookVideoProcessor(VideoProcessor): - """video_fn is a tuple of (video_id, start, end) now.""" - - def __call__(self, video_fn): - video_id, start, end = video_fn - feat = np.load(os.path.join(self.vfeat_dir, video_id + ".npy")) - return feat[start:end] - - -class YoucookNLGMetaProcessor(MetaProcessor): - """NLG uses the original split: - `train_list.txt` and `val_list.txt` - """ - - def __init__(self, config): - super().__init__(config) - vfeat_dir = config.vfeat_dir - print(self._get_split_path(config)) - with open(self._get_split_path(config)) as fd: - video_ids = [ - line.strip().split("/")[1] for line in fd.readlines()] - print("total video_ids in train/val_list.txt", len(video_ids)) - - all_valid_video_ids = set( - [os.path.splitext(fn)[0] for fn in os.listdir(vfeat_dir)] - ) - video_ids = [ - video_id for video_id in video_ids - if video_id in all_valid_video_ids] - - print("valid video_ids in train/val_list.txt", len(video_ids)) 
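This feature-availability filter recurs across the meta processors in this file; distilled into one helper (hypothetical function name and directory, assuming one `<video_id>.npy` file per extracted video):

```python
import os

def keep_available(video_ids, vfeat_dir):
    # A split entry is usable only if its feature file was actually extracted.
    extracted = {os.path.splitext(fn)[0] for fn in os.listdir(vfeat_dir)}
    return [vid for vid in video_ids if vid in extracted]

# e.g. keep_available(["GLd3aX16zBg", "missing_vid"], "data/youcook/vfeat")
```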
- with open(config.trainval_annotation) as fd: - self.youcook_annotation = json.load(fd)["database"] - - data = [] - for video_id in video_ids: - for clip in self.youcook_annotation[video_id]["annotations"]: - start, end = clip["segment"] - caption = clip["sentence"] - data.append(((video_id, start, end), caption)) - self.data = data - - def __getitem__(self, idx): - return self.data[idx] - - -# --------------------- CrossTask ------------------------- - -class CrossTaskMetaProcessor(MetaProcessor): - def __init__(self, config): - super().__init__(config) - np.random.seed(0) # deterministic random split. - task_vids = self._get_vids( - config.train_csv_path, - config.vfeat_dir, - config.annotation_path) - - val_vids = self._get_vids( - config.val_csv_path, - config.vfeat_dir, - config.annotation_path) - - # filter out those task and vids appear in val_vids. - task_vids = { - task: [ - vid for vid in vids - if task not in val_vids or vid not in val_vids[task]] - for task, vids in task_vids.items()} - - primary_info = self._read_task_info(config.primary_path) - test_tasks = set(primary_info['steps'].keys()) - - # if args.use_related: - related_info = self._read_task_info(config.related_path) - task_steps = {**primary_info['steps'], **related_info['steps']} - n_steps = {**primary_info['n_steps'], **related_info['n_steps']} - # else: - # task_steps = primary_info['steps'] - # n_steps = primary_info['n_steps'] - all_tasks = set(n_steps.keys()) - # filter and keep task in primary or related. - task_vids = { - task: vids for task, vids in task_vids.items() - if task in all_tasks} - # vocab-by-step matrix (A) and vocab (M) - # (huxu): we do not use BoW. - # A, M = self._get_A(task_steps, share="words") - - train_vids, test_vids = self._random_split( - task_vids, test_tasks, config.n_train) - print("train_num_videos", sum(len(vids) for vids in train_vids.values())) - print("test_num_videos", sum(len(vids) for vids in test_vids.values())) - # added by huxu to automatically determine the split. - split_map = { - "train": train_vids, - "valid": test_vids, - "test": test_vids - } - task_vids = split_map[config.split] - - self.vids = [] - for task, vids in task_vids.items(): - self.vids.extend([(task, vid) for vid in vids]) - self.task_steps = task_steps - self.n_steps = n_steps - - def __getitem__(self, idx): - task, vid = self.vids[idx] - n_steps = self.n_steps[task] - steps = self.task_steps[task] - assert len(steps) == n_steps - return (task, vid, steps, n_steps), (task, vid, steps, n_steps) - - def __len__(self): - return len(self.vids) - - def _random_split(self, task_vids, test_tasks, n_train): - train_vids = {} - test_vids = {} - for task, vids in task_vids.items(): - if task in test_tasks and len(vids) > n_train: - train_vids[task] = np.random.choice( - vids, n_train, replace=False).tolist() - test_vids[task] = [ - vid for vid in vids if vid not in train_vids[task]] - else: - train_vids[task] = vids - return train_vids, test_vids - - def _get_vids(self, path, vfeat_dir, annotation_path): - """refactored from - https://github.com/DmZhukov/CrossTask/blob/master/data.py - changes: add `vfeat_dir` to check if the video is available. - add `annotation_path` to check if the video is available. - """ - - task_vids = {} - with open(path, 'r') as f: - for line in f: - task, vid, url = line.strip().split(',') - # double check the video is available. - if not os.path.exists( - os.path.join(vfeat_dir, vid + ".npy")): - continue - # double check the annotation is available. 
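Together with the feature check above, the annotation check below amounts to a simple usability predicate; a sketch under the same layout assumptions (`<vid>.npy` features, `<task>_<vid>.csv` annotations) with a hypothetical helper name:

```python
import os

def has_data(vfeat_dir, annotation_path, task, vid):
    """A video is kept only if both its features and its annotation exist."""
    return (os.path.exists(os.path.join(vfeat_dir, vid + ".npy"))
            and os.path.exists(os.path.join(
                annotation_path, task + "_" + vid + ".csv")))
```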
- if not os.path.exists(os.path.join( - annotation_path, - task + "_" + vid + ".csv")): - continue - if task not in task_vids: - task_vids[task] = [] - task_vids[task].append(vid) - return task_vids - - def _read_task_info(self, path): - titles = {} - urls = {} - n_steps = {} - steps = {} - with open(path, 'r') as f: - idx = f.readline() - while idx != '': - idx = idx.strip() - titles[idx] = f.readline().strip() - urls[idx] = f.readline().strip() - n_steps[idx] = int(f.readline().strip()) - steps[idx] = f.readline().strip().split(',') - next(f) - idx = f.readline() - return { - 'title': titles, - 'url': urls, - 'n_steps': n_steps, - 'steps': steps - } - - def _get_A(self, task_steps, share="words"): - raise ValueError("running get_A is not allowed for BERT.") - """Step-to-component matrices.""" - if share == 'words': - # share words - task_step_comps = { - task: [step.split(' ') for step in steps] - for task, steps in task_steps.items()} - elif share == 'task_words': - # share words within same task - task_step_comps = { - task: [[task+'_'+tok for tok in step.split(' ')] for step in steps] - for task, steps in task_steps.items()} - elif share == 'steps': - # share whole step descriptions - task_step_comps = { - task: [[step] for step in steps] for task, steps in task_steps.items()} - else: - # no sharing - task_step_comps = { - task: [[task+'_'+step] for step in steps] - for task, steps in task_steps.items()} - # BERT tokenizer here? - vocab = [] - for task, steps in task_step_comps.items(): - for step in steps: - vocab.extend(step) - vocab = {comp: m for m, comp in enumerate(set(vocab))} - M = len(vocab) - A = {} - for task, steps in task_step_comps.items(): - K = len(steps) - a = torch.zeros(M, K) - for k, step in enumerate(steps): - a[[vocab[comp] for comp in step], k] = 1 - a /= a.sum(dim=0) - A[task] = a - return A, M - - -class CrossTaskVideoProcessor(VideoProcessor): - def __call__(self, video_fn): - task, vid, steps, n_steps = video_fn - video_fn = os.path.join(self.vfeat_dir, vid + ".npy") - feat = np.load(video_fn) - return feat - - -class CrossTaskTextProcessor(TextProcessor): - def __call__(self, text_id): - task, vid, steps, n_steps = text_id - step_ids = [] - for step_str in steps: - step_ids.append( - self.tokenizer(step_str, add_special_tokens=False)["input_ids"] - ) - return step_ids - - -class CrossTaskAligner(Aligner): - """ - TODO: it's not clear yet the formulation of the task; finish this later. - """ - def __init__(self, config): - super().__init__(config) - self.annotation_path = config.annotation_path - self.sliding_window = config.sliding_window - self.sliding_window_size = config.sliding_window_size - - def __call__(self, video_id, video_feature, text_feature): - task, vid, steps, n_steps = video_id - annot_path = os.path.join( - self.annotation_path, task + '_' + vid + '.csv') - video_len = len(video_feature) - - labels = torch.from_numpy(self._read_assignment( - video_len, n_steps, annot_path)).float() - - vfeats, vmasks, targets = [], [], [] - # sliding window on video features and targets. 
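The loop below slides a fixed-stride window over the per-second video features. Reduced to just its index arithmetic (toy lengths, not tied to any dataset), the traversal behaves like this:

```python
def sliding_windows(video_len, stride, window_size):
    """Yield (start, end) windows the way the aligner loop below does."""
    for window_start in range(0, video_len, stride):
        # Each clip is capped by the window size and by the remaining video.
        video_end = min(video_len - window_start, window_size)
        yield window_start, window_start + video_end
        if (video_len - window_start) <= window_size:
            break  # the last window already covered the tail.

print(list(sliding_windows(video_len=10, stride=4, window_size=6)))
# [(0, 6), (4, 10)]
```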
- for window_start in range(0, video_len, self.sliding_window): - video_start = 0 - video_end = min(video_len - window_start, self.sliding_window_size) - video_clip = {"start": [video_start], "end": [video_end]} - - vfeat, vmask = self._build_video_seq( - video_feature[window_start: window_start + video_end], - video_clip - ) - - target = labels[window_start: window_start + video_end] - assert len(vfeat) >= len(target), "{},{}".format(len(vfeat), len(target)) - # TODO: randomly drop all zero targets for training ? - # if self.split == "train" and target.sum() == 0: - # continue - vfeats.append(vfeat) - vmasks.append(vmask) - targets.append(target) - - if (video_len - window_start) <= self.sliding_window_size: - break - - vfeats = torch.stack(vfeats) - vmasks = torch.stack(vmasks) - targets = torch.cat(targets, dim=0) - - caps, cmasks = [], [] - for step in text_feature: - step_text_feature = {"start": [0], "end": [1], "cap": [step]} - step_text_clip_index = [0] - cap, cmask = self._build_text_seq( - step_text_feature, step_text_clip_index - ) - caps.append(cap) - cmasks.append(cmask) - caps = torch.stack(caps) - cmasks = torch.stack(cmasks) - - return { - "caps": caps, - "cmasks": cmasks, - "vfeats": vfeats, # X for original code. - "vmasks": vmasks, - "targets": targets, - "video_id": vid, - "task": task, - "video_len": video_len # for later checking. - } - - def _read_assignment(self, T, K, path): - """ - refactored from https://github.com/DmZhukov/CrossTask/blob/master/data.py - Howto interpret contraints on loss that is going to be minimized: - lambd is a big number; - self.lambd * C is a big number for all valid position (csv stores invalids) - - def forward(self, O, Y, C): - return (Y*(self.lambd * C - self.lsm(O))).mean(dim=0).sum() - - This will load the csv file and fill-in the step col from start to end rows. - """ - - Y = np.zeros([T, K], dtype=np.uint8) - with open(path, 'r') as f: - for line in f: - step, start, end = line.strip().split(',') - start = int(math.floor(float(start))) - end = int(math.ceil(float(end))) - step = int(step) - 1 - Y[start:end, step] = 1 - return Y - - -# --------------------- COIN ------------------------- - -class MetaTextBinarizer(Aligner): - def __call__(self, text_feature): - text_feature = { - "cap": [text_feature], - "start": [0.], - "end": [100.], - } - text_clip_indexs = [0] - - caps, cmasks = self._build_text_seq( - text_feature, text_clip_indexs - ) - return {"caps": caps, "cmasks": cmasks} - - -class COINActionSegmentationMetaProcessor(MetaProcessor): - split_map = { - "train": "training", - "valid": "testing", - "test": "testing", - } - - def __init__(self, config): - super().__init__(config) - with open(self._get_split_path(config)) as fr: - database = json.load(fr)["database"] - id2label = {} - data = [] - # filter the data by split. 
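A condensed version of that split filtering on a COIN-style database (hypothetical records), matching the loop below:

```python
split_map = {"train": "training", "valid": "testing", "test": "testing"}

# A hypothetical miniature of the COIN annotation database.
database = {
    "vid1": {"subset": "training", "annotation": []},
    "vid2": {"subset": "testing", "annotation": []},
}

split = "train"
kept = [vid for vid, rec in database.items()
        if rec["subset"] == split_map[split]]
print(kept)  # ['vid1']
```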
- for video_id, rec in database.items(): - # always use testing to determine label_set - if rec["subset"] == "testing": - for segment in rec["annotation"]: - id2label[int(segment["id"])] = segment["label"] - # text_labels is used for ZS setting - self.text_labels = ["none"] * len(id2label) - for label_id in id2label: - self.text_labels[label_id-1] = id2label[label_id] - - id2label[0] = "O" - print("num of labels", len(id2label)) - - for video_id, rec in database.items(): - if not os.path.isfile(os.path.join(config.vfeat_dir, video_id + ".npy")): - continue - if rec["subset"] == COINActionSegmentationMetaProcessor.split_map[self.split]: - starts, ends, labels = [], [], [] - for segment in rec["annotation"]: - start, end = segment["segment"] - label = int(segment["id"]) - starts.append(start) - ends.append(end) - labels.append(label) - data.append( - (video_id, {"start": starts, "end": ends, "label": labels})) - self.data = data - - def meta_text_labels(self, config): - from transformers import default_data_collator - from ..utils import get_local_rank - - text_processor = TextProcessor(config) - binarizer = MetaTextBinarizer(config) - # TODO: add prompts to .yaml. - text_labels = [label for label in self.text_labels] - - if get_local_rank() == 0: - print(text_labels) - - outputs = [] - for text_label in text_labels: - text_feature = text_processor(text_label) - outputs.append(binarizer(text_feature)) - return default_data_collator(outputs) - - def __getitem__(self, idx): - return self.data[idx] - - -class COINActionSegmentationTextProcessor(TextProcessor): - def __call__(self, text_label): - return text_label - - -class COINActionSegmentationAligner(Aligner): - def __init__(self, config): - super().__init__(config) - self.sliding_window = config.sliding_window - self.sliding_window_size = config.sliding_window_size - - def __call__(self, video_id, video_feature, text_feature): - starts, ends, label_ids = text_feature["start"], text_feature["end"], text_feature["label"] - # sliding window. - video_len = len(video_feature) - - vfeats, vmasks, targets = [], [], [] - # sliding window on video features and targets. - for window_start in range(0, video_len, self.sliding_window): - video_start = 0 - video_end = min(video_len - window_start, self.sliding_window_size) - video_clip = {"start": [video_start], "end": [video_end]} - vfeat, vmask = self._build_video_seq( - video_feature[window_start: window_start + video_end], - video_clip - ) - # covers video length only. 
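The block below writes segment labels into a window-local target, with -100 (PyTorch's default cross-entropy `ignore_index`) marking padded frames so they contribute no loss. The same arithmetic isolated with made-up numbers:

```python
import math
import torch

window_start, video_end = 8, 6  # this window covers frames [8, 14)
vmask = torch.tensor([1, 1, 1, 1, 1, 0], dtype=torch.bool)  # last slot is pad

target = torch.full_like(vmask, -100, dtype=torch.long)
target[vmask] = 0  # real frames default to the background label.

# One annotated segment: label 7 over seconds [9.2, 12.8).
start, end, label_id = 9.2, 12.8, 7
if (window_start < end) and (start < (window_start + video_end)):
    start_offset = max(0, math.floor(start) - window_start)
    end_offset = min(video_end, math.ceil(end) - window_start)
    target[start_offset:end_offset] = label_id
print(target.tolist())  # [0, 7, 7, 7, 7, -100]
```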
- target = torch.full_like(vmask, -100, dtype=torch.long) - target[vmask] = 0 - for start, end, label_id in zip(starts, ends, label_ids): - if (window_start < end) and (start < (window_start + video_end)): - start_offset = max(0, math.floor(start) - window_start) - end_offset = min(video_end, math.ceil(end) - window_start) - target[start_offset:end_offset] = label_id - vfeats.append(vfeat) - vmasks.append(vmask) - targets.append(target) - if (video_len - window_start) <= self.sliding_window_size: - break - - vfeats = torch.stack(vfeats) - vmasks = torch.stack(vmasks) - targets = torch.stack(targets) - video_targets = torch.full((video_len,), 0) - for start, end, label_id in zip(starts, ends, label_ids): - start_offset = max(0, math.floor(start)) - end_offset = min(video_len, math.ceil(end)) - video_targets[start_offset:end_offset] = label_id - - caps = torch.LongTensor( - [[self.cls_token_id, self.sep_token_id, - self.pad_token_id, self.sep_token_id]], - ).repeat(vfeats.size(0), 1) - cmasks = torch.BoolTensor( - [[0, 1, 0, 1]] # pad are valid for attention. - ).repeat(vfeats.size(0), 1) - return { - "caps": caps, - "cmasks": cmasks, - "vfeats": vfeats, # X for original code. - "vmasks": vmasks, - "targets": targets, - "video_id": video_id, - "video_len": video_len, # for later checking. - "video_targets": video_targets - } - - -class DiDeMoMetaProcessor(MetaProcessor): - """reference: https://github.com/LisaAnne/LocalizingMoments/blob/master/utils/eval.py - https://github.com/LisaAnne/LocalizingMoments/blob/master/utils/data_processing.py - """ - def __init__(self, config): - super().__init__(config) - - assert "test" in self._get_split_path(config), "DiDeMo only supports zero-shot testing for now." - - with open(self._get_split_path(config)) as data_file: - json_data = json.load(data_file) - - data = [] - for record in json_data: - data.append((record["video"], record["description"])) - self.data = data - - def __len__(self): - return len(self.data) - - def __getitem__(self, idx): - return self.data[idx] - - -class DiDeMoTextProcessor(TextProcessor): - """reference: https://github.com/LisaAnne/LocalizingMoments/blob/master/utils/eval.py - https://github.com/LisaAnne/LocalizingMoments/blob/master/utils/data_processing.py - """ - - def __call__(self, text): - return self.tokenizer(text, add_special_tokens=False)["input_ids"] - - -class DiDeMoAligner(DSAligner): - """ - check video length. - """ - - def __call__(self, video_id, video_feature, text_feature): - # print(video_feature.shape[0]) - return super().__call__(video_id, video_feature, text_feature) diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/processors/how2processor.py b/kosmos-g/fairseq/examples/MMPT/mmpt/processors/how2processor.py deleted file mode 100644 index bed2168b1..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/processors/how2processor.py +++ /dev/null @@ -1,887 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. -# Copyright (c) Facebook, Inc. All Rights Reserved - - -import torch -import math -import pickle -import random -import os -import numpy as np - -from collections import deque -from typing import Optional, Tuple, List -from .processor import ( - Processor, - MetaProcessor, - TextProcessor, - Aligner, - MMAttentionMask2DProcessor -) - -from ..utils import ShardedTensor - - -class How2MetaProcessor(MetaProcessor): - def __init__(self, config): - super().__init__(config) - path = self._get_split_path(config) - with open(path) as fd: - self.data = [line.strip() for line in fd] - - def __getitem__(self, idx): - video_id = self.data[idx] - return video_id, video_id - - -class ShardedHow2MetaProcessor(How2MetaProcessor): - def __init__(self, config): - super().__init__(config) - self.split = str(config.split) - self.vfeat_dir = config.vfeat_dir - self._init_shard() - - def _init_shard(self): - if self.split == "train": - meta_fn = os.path.join(self.vfeat_dir, "train" + "_meta.pkl") - with open(meta_fn, "rb") as fr: - meta = pickle.load(fr) - elif self.split == "valid": - meta_fn = os.path.join(self.vfeat_dir, "val" + "_meta.pkl") - with open(meta_fn, "rb") as fr: - meta = pickle.load(fr) - elif self.split == "test": - print("use how2 val as test.") - meta_fn = os.path.join(self.vfeat_dir, "val" + "_meta.pkl") - with open(meta_fn, "rb") as fr: - meta = pickle.load(fr) - else: - raise ValueError("unsupported for MetaProcessor:", self.split) - video_id_to_shard = {} - for shard_id in meta: - for video_idx, video_id in enumerate(meta[shard_id]): - video_id_to_shard[video_id] = (shard_id, video_idx) - self.video_id_to_shard = video_id_to_shard - - def __getitem__(self, idx): - video_id, video_id = super().__getitem__(idx) - shard_id, shard_idx = self.video_id_to_shard[video_id] - meta = (video_id, idx, shard_id, shard_idx) - return meta, meta - - -class ShardedVideoProcessor(Processor): - """ - mmaped shards of numpy video features. 
- """ - - def __init__(self, config): - self.split = str(config.split) - self.vfeat_dir = config.vfeat_dir - - def __call__(self, video_id): - _, _, shard_id, video_idx = video_id - if self.split == "train": - shard = ShardedTensor.load( - os.path.join(self.vfeat_dir, "train" + "_" + str(shard_id)), - "r" - ) - elif self.split == "valid": - shard = ShardedTensor.load( - os.path.join(self.vfeat_dir, "val" + "_" + str(shard_id)), - "r" - ) - elif self.split == "test": - shard = ShardedTensor.load( - os.path.join(self.vfeat_dir, "val" + "_" + str(shard_id)), - "r" - ) - else: - raise ValueError("unknown split", self.split) - feat = shard[video_idx] - return feat - - -class ShardedTextProcessor(Processor): - def __init__(self, config): - self.tfeat_dir = str(config.tfeat_dir) - self.split = str(config.split) - - def __call__(self, video_id): - _, _, shard_id, shard_idx = video_id - if self.split == "train": - target_path = self.tfeat_dir + "train" + "_" + str(shard_id) - elif self.split == "valid": - target_path = self.tfeat_dir + "val" + "_" + str(shard_id) - elif self.split == "test": - target_path = self.tfeat_dir + "val" + "_" + str(shard_id) - else: - raise ValueError("unknown split", self.split) - - startend = ShardedTensor.load( - target_path + ".startends", "r")[shard_idx] - cap_ids = ShardedTensor.load( - target_path + ".caps_ids", "r")[shard_idx] - cap = [] - for clip_idx in range(len(cap_ids)): - clip = cap_ids[clip_idx] - cap.append(clip[clip != -1].tolist()) - start, end = startend[:, 0].tolist(), startend[:, 1].tolist() - return {"start": start, "end": end, "cap": cap} - - -class FixedLenAligner(Aligner): - """ - In the model we assume text is on the left (closer to BERT formulation) - and video is on the right. - We fix the total length of text + video. - max_video_len is in number of secs. - max_text_len is in number of tokens. - - special tokens formats: - we use the format [CLS] [SEP] text tokens [SEP] [PAD] ... - [CLS] will be splitted out into: - [CLS] video tokens [SEP] text tokens [SEP] [PAD] ... - token_type_ids will be generated by the model (for now). - 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 - | first sequence | second sequence | - so each sequence owns a [SEP] token for no-ops. - """ - - def __init__(self, config): - super().__init__(config) - self.text_clip_sampler = TextClipSamplingProcessor( - self.max_len - self.max_video_len - 3 - ) - """ - decide subsampling: - `config.subsampling` will change batch_size in trainer. - `config.clip_per_video` (used by RetriTask) doesn't - change batch_size in trainer. 
-        """
-        subsampling = config.subsampling \
-            if config.subsampling is not None else None
-        if config.clip_per_video is not None:
-            subsampling = config.clip_per_video
-        self.subsampling = subsampling
-
-    def _get_text_maxlen(self):
-        # use max text len
-        return self.text_clip_sampler.max_text_len
-
-    def __call__(self, video_id, video_feature, text_feature):
-        from transformers import default_data_collator
-        video_idx = video_id[1]
-        if self.subsampling is not None and self.subsampling >= 1:
-            batch = []
-            for _ in range(self.subsampling):
-                centerclip_idx = random.randint(
-                    0, len(text_feature["start"]) - 1)
-                batch.append(
-                    self.sampling(
-                        video_idx,
-                        video_feature,
-                        text_feature,
-                        centerclip_idx,
-                        self._get_text_maxlen()
-                    ))
-            batch = self.batch_post_processing(batch, video_feature)
-            batch = default_data_collator(batch)
-        else:
-            raise ValueError(
-                "dataset.subsampling must be >= 1 for efficient video loading.")
-
-        batch["video_id"] = video_id if isinstance(video_id, str) \
-            else video_id[0]
-        # e2e: make sure the video features are already a tensor.
-        assert torch.is_tensor(batch["vfeats"])
-        return batch
-
-    def sampling(
-        self,
-        video_idx,
-        video_feature,
-        text_feature,
-        centerclip_idx=None,
-        sampled_max_text_len=None,
-    ):
-        text_clip_indexs = self.text_clip_sampler(
-            text_feature, centerclip_idx,
-            sampled_max_text_len
-        )
-        if isinstance(video_feature, np.ndarray):
-            video_len = len(video_feature)
-        else:
-            video_len = math.ceil(text_feature["end"][-1])
-
-        video_end = min(
-            math.ceil(text_feature["end"][text_clip_indexs[-1]]),
-            video_len
-        )
-        video_start = max(
-            min(
-                math.floor(text_feature["start"][text_clip_indexs[0]]),
-                video_end),
-            0
-        )
-
-        video_clips = {"start": [video_start], "end": [video_end]}
-
-        # tensorize.
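Before `_build_video_seq` and `_build_text_seq` tensorize the clip below, the span arithmetic above snaps the sampled captions to whole-second video boundaries, clamped to the available features. The same computation on hypothetical caption times:

```python
import math

# A toy caption track: three clips with start/end times in seconds.
text_feature = {"start": [0.5, 4.1, 9.7], "end": [4.0, 9.5, 14.2]}
text_clip_indexs = [1, 2]  # the sampled clips (spelling as in the source)
video_len = 12             # seconds of features actually on disk

video_end = min(math.ceil(text_feature["end"][text_clip_indexs[-1]]), video_len)
video_start = max(
    min(math.floor(text_feature["start"][text_clip_indexs[0]]), video_end), 0)
print(video_start, video_end)  # 4 12
```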
- vfeats, vmasks = self._build_video_seq( - video_feature, video_clips - ) - caps, cmasks = self._build_text_seq( - text_feature, text_clip_indexs - ) - - text_start = text_clip_indexs[0] - text_end = text_clip_indexs[-1] + 1 - - return { - "caps": caps, - "cmasks": cmasks, - "vfeats": vfeats, - "vmasks": vmasks, - "video_start": video_start, - "video_end": video_end, - "text_start": text_start, - "text_end": text_end, - } - - -class VariedLenAligner(FixedLenAligner): - def __init__(self, config): - super().__init__(config) - self.sampled_min_len = config.sampled_min_len - self.sampled_max_len = config.sampled_max_len - - def _get_text_maxlen(self): - return random.randint(self.sampled_min_len, self.sampled_max_len) - - -class StartClipAligner(VariedLenAligner): - def sampling( - self, - video_idx, - video_feature, - text_feature, - centerclip_idx=None, - sampled_max_text_len=None, - ): - return super().sampling( - video_idx, video_feature, text_feature, 0) - - -class OverlappedAligner(VariedLenAligner): - """video clip and text clip has overlappings - but may not be the same start/end.""" - def __init__(self, config): - super().__init__(config) - self.sampled_video_min_len = config.sampled_video_min_len - self.sampled_video_max_len = config.sampled_video_max_len - - self.video_clip_sampler = VideoClipSamplingProcessor() - - def _get_video_maxlen(self): - return random.randint( - self.sampled_video_min_len, self.sampled_video_max_len) - - def sampling( - self, - video_idx, - video_feature, - text_feature, - centerclip_idx=None, - sampled_max_text_len=None, - ): - text_clip_indexs = self.text_clip_sampler( - text_feature, centerclip_idx, - sampled_max_text_len - ) - if isinstance(video_feature, np.ndarray): - video_len = len(video_feature) - else: - video_len = math.ceil(text_feature["end"][-1]) - low = math.floor(text_feature["start"][text_clip_indexs[0]]) - high = math.ceil(text_feature["end"][text_clip_indexs[-1]]) - if low < high: - center = random.randint(low, high) - else: - center = int((low + high) // 2) - center = max(0, min(video_feature.shape[0] - 1, center)) - - assert 0 <= center < video_feature.shape[0] - - video_clips = self.video_clip_sampler( - video_len, self._get_video_maxlen(), center - ) - video_start = video_clips["start"][0] - video_end = video_clips["end"][0] - - # tensorize. - vfeats, vmasks = self._build_video_seq( - video_feature, video_clips - ) - caps, cmasks = self._build_text_seq( - text_feature, text_clip_indexs - ) - - text_start = text_clip_indexs[0] - text_end = text_clip_indexs[-1] + 1 - - return { - "caps": caps, - "cmasks": cmasks, - "vfeats": vfeats, - "vmasks": vmasks, - "video_start": video_start, - "video_end": video_end, - "text_start": text_start, - "text_end": text_end, - } - - -class MFMMLMAligner(FixedLenAligner): - """ - `FixedLenAligner` with Masked Language Model and Masked Frame Model. 
- """ - - def __init__(self, config): - super().__init__(config) - keep_prob = config.keep_prob if config.keep_prob is not None else 1.0 - self.text_clip_sampler = TextClipSamplingProcessor( - self.max_len - self.max_video_len - 3, keep_prob - ) - self.sampled_min_len = config.sampled_min_len - self.sampled_max_len = config.sampled_max_len - self.masked_token_sampler = TextMaskingProcessor(config) - self.mm_type = config.mm_type \ - if config.mm_type is not None else "full" - self.attnmasker = MMAttentionMask2DProcessor() \ - if self.mm_type == "textgen" else None - self.masked_frame_sampler = FrameMaskingProcessor(config) - self.lazy_vfeat_mask = ( - False if config.lazy_vfeat_mask is None else config.lazy_vfeat_mask - ) - self.mm_prob = config.mm_prob if config.mm_prob is not None else 0. - - def __call__(self, video_id, video_feature, text_feature): - from transformers import default_data_collator - if self.subsampling is not None and self.subsampling > 1: - batch = [] - for _ in range(self.subsampling): - centerclip_idx = random.randint( - 0, len(text_feature["start"]) - 1) - sampled_max_text_len = random.randint( - self.sampled_min_len, self.sampled_max_len - ) - batch.append( - self.sampling( - video_id, - video_feature, - text_feature, - centerclip_idx, - sampled_max_text_len, - ) - ) - batch = self.batch_post_processing(batch, video_feature) - batch = default_data_collator(batch) - else: - batch = self.sampling(video_id, video_feature, text_feature) - batch = self.batch_post_processing(batch, video_feature) - batch["video_id"] = video_id if isinstance(video_id, str) \ - else video_id[0] - return batch - - def sampling( - self, - video_id, - video_feature, - text_feature, - centerclip_idx=None, - sampled_max_text_len=None, - ): - output = FixedLenAligner.sampling(self, - video_id, video_feature, text_feature, - centerclip_idx, sampled_max_text_len) - - masking_text, masking_video = None, None - if random.random() < self.mm_prob: - if random.random() > 0.5: - masking_text, masking_video = self.mm_type, "no" - else: - masking_text, masking_video = "no", "full" - video_feats = output["vfeats"] if not self.lazy_vfeat_mask else None - video_label = self.masked_frame_sampler( - output["vmasks"], masking_video, vfeats=video_feats) - caps, text_label = self.masked_token_sampler( - output["caps"], masking_text) - - output.update({ - "caps": caps, - "video_label": video_label, - "text_label": text_label, - }) - - if self.attnmasker is not None: - attention_mask = self.attnmasker( - output["vmasks"], output["cmasks"], masking_text) - output.update({ - "attention_mask": attention_mask - }) - return output - - -class FrameMaskingProcessor(Processor): - def __init__(self, config): - self.mfm_probability = 0.15 - if config.mfm_probability is not None: - self.mfm_probability = config.mfm_probability - - def __call__(self, vmasks, modality_masking=None, vfeats=None): - """ - We perform lazy masking to save data transfer time. - It only generates video_labels by default and MFM model - will do actualy masking. - Return: `video_label` is a binary mask. - """ - video_label = vmasks.clone() - if modality_masking is not None: - if modality_masking == "full": - probability_matrix = torch.full(video_label.shape, 1.) - elif modality_masking == "no": - probability_matrix = torch.full(video_label.shape, 0.) - elif modality_masking == "inverse": - probability_matrix = torch.full( - video_label.shape, 1. 
- self.mfm_probability) - else: - raise ValueError("unknown modality masking.", modality_masking) - else: - probability_matrix = torch.full( - video_label.shape, self.mfm_probability) - masked_indices = torch.bernoulli(probability_matrix).bool() - # We only compute loss on masked tokens - video_label[~masked_indices] = 0 - if vfeats is not None: - vfeats[video_label, :] = 0.0 - return video_label - - -class TextGenerationProcessor(Processor): - def __init__(self, tokenizer): - self.bos_token_id = tokenizer.bos_token_id - self.pad_token_id = tokenizer.pad_token_id - - def __call__(self, inputs): - labels = inputs.clone() - # [CLS] [SEP] for video - labels[:2] = -100 - # keep [SEP] for text. - pad_mask = labels == self.pad_token_id - labels[pad_mask] = -100 - inputs[2:] = torch.cat([ - torch.LongTensor([self.bos_token_id]), - inputs[2:-1]]) - inputs[pad_mask] = self.pad_token_id - assert len(inputs) == len(labels) - return inputs, labels - - -class TextMaskingProcessor(Processor): - def __init__(self, config): - """this function is borrowed from - `transformers/data/data_collator.DataCollatorForLanguageModeling`""" - self.mlm_probability = 0.15 - if config.mlm_probability is not None: - self.mlm_probability = config.mlm_probability - self.bert_name = config.bert_name - # [CLS] is used as bos_token and [SEP] is used as eos_token. - # https://huggingface.co/transformers/master/model_doc/bertgeneration.html - from transformers import AutoTokenizer - self.tokenizer = AutoTokenizer.from_pretrained( - self.bert_name, bos_token="[CLS]", eos_token="[SEP]") - self.textgen = TextGenerationProcessor(self.tokenizer) - - def __call__( - self, inputs: torch.Tensor, - modality_masking=None, - special_tokens_mask: Optional[torch.Tensor] = None - ) -> Tuple[torch.Tensor, torch.Tensor]: - """ - expand modality_masking into - None: traditional bert masking. - "no": no masking. - "full": all [MASK] token for generation. - "gen": autoregressive generation. - """ - """ - Prepare masked tokens inputs/labels for masked language modeling: - 80% MASK, 10% random, 10% original. - """ - labels = inputs.clone() - # We sample a few tokens in each sequence for MLM training - # (with probability `self.mlm_probability`) - if modality_masking is not None: - if modality_masking == "full": - probability_matrix = torch.full(labels.shape, 1.) - elif modality_masking == "no": - probability_matrix = torch.full(labels.shape, 0.) - elif modality_masking.startswith("textgen"): - # [CLS] [SEP] ... - inputs, labels = self.textgen(inputs) - if "mask" not in modality_masking: - return inputs, labels - inputs = self.mask_input(inputs, special_tokens_mask) - return inputs, labels - elif modality_masking == "mask": - inputs = self.mask_input(inputs, special_tokens_mask) - labels = torch.full(inputs.shape, -100) - return inputs, labels - elif modality_masking == "inverse": - probability_matrix = torch.full(labels.shape, 1. 
- self.mlm_probability) - else: - raise ValueError("unknown modality masking.", modality_masking) - else: - probability_matrix = torch.full(labels.shape, self.mlm_probability) - - if special_tokens_mask is None: - special_tokens_mask = self.get_special_tokens_mask( - labels.tolist(), already_has_special_tokens=True - ) - special_tokens_mask = torch.tensor( - special_tokens_mask, dtype=torch.bool) - else: - special_tokens_mask = special_tokens_mask.bool() - - probability_matrix.masked_fill_(special_tokens_mask, value=0.0) - masked_indices = torch.bernoulli(probability_matrix).bool() - labels[~masked_indices] = -100 # We only compute loss on masked tokens - - # 80% of the time, - # we replace masked input tokens with tokenizer.mask_token ([MASK]) - indices_replaced = ( - torch.bernoulli( - torch.full(labels.shape, 0.8)).bool() & masked_indices - ) - inputs[indices_replaced] = self.tokenizer.convert_tokens_to_ids( - self.tokenizer.mask_token - ) - - # 10% of the time, we replace masked input tokens with random word - indices_random = ( - torch.bernoulli(torch.full(labels.shape, 0.5)).bool() - & masked_indices - & ~indices_replaced - ) - random_words = torch.randint( - len(self.tokenizer), labels.shape, dtype=torch.long - ) - inputs[indices_random] = random_words[indices_random] - - # The rest of the time (10% of the time) we keep the masked input - # tokens unchanged - return inputs, labels - - def mask_input(self, inputs, special_tokens_mask=None): - # the following is new with masked autoregressive. - probability_matrix = torch.full( - inputs.shape, self.mlm_probability) - if special_tokens_mask is None: - special_tokens_mask = self.get_special_tokens_mask( - inputs.tolist(), already_has_special_tokens=True - ) - special_tokens_mask = torch.tensor( - special_tokens_mask, dtype=torch.bool) - else: - special_tokens_mask = special_tokens_mask.bool() - probability_matrix.masked_fill_(special_tokens_mask, value=0.0) - masked_indices = torch.bernoulli(probability_matrix).bool() - indices_replaced = ( - torch.bernoulli( - torch.full(inputs.shape, 0.8)).bool() & masked_indices - ) - inputs[indices_replaced] = self.tokenizer.convert_tokens_to_ids( - self.tokenizer.mask_token - ) - - # 10% of the time, we replace masked input tokens with random word - indices_random = ( - torch.bernoulli(torch.full(inputs.shape, 0.5)).bool() - & masked_indices - & ~indices_replaced - ) - random_words = torch.randint( - len(self.tokenizer), inputs.shape, dtype=torch.long - ) - inputs[indices_random] = random_words[indices_random] - return inputs - - def get_special_tokens_mask( - self, token_ids_0: List[int], - token_ids_1: Optional[List[int]] = None, - already_has_special_tokens: bool = False - ) -> List[int]: - """ - Note: the version from transformers do not consider pad - as special tokens. - """ - - if already_has_special_tokens: - if token_ids_1 is not None: - raise ValueError( - "You should not supply a second sequence if" - "the provided sequence of " - "ids is already formated with special tokens " - "for the model." - ) - return list(map(lambda x: 1 if x in [ - self.tokenizer.sep_token_id, - self.tokenizer.cls_token_id, - self.tokenizer.pad_token_id] else 0, token_ids_0)) - - if token_ids_1 is not None: - return [1] + ([0] * len(token_ids_0)) + [1] + ([0] * len(token_ids_1)) + [1] - return [1] + ([0] * len(token_ids_0)) + [1] - - -class TextClipSamplingProcessor(Processor): - def __init__(self, max_text_len, keep_prob=1.0): - self.max_text_len = max_text_len - self.max_video_len = 256 # always hold. 
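The sampler's `__call__` below grows a contiguous run of caption indices outward from a center clip until the token (and second) budget is spent. A deterministic simplification of that growth; the real sampler flips coins to choose a side and can skip neighbors via `keep_prob`:

```python
from collections import deque

def grow_clips(cap_lens, center, max_text_len):
    """Greedily take whole neighboring clips until the token budget is hit."""
    idxs = deque([center])
    text_len = cap_lens[center]
    left, right = center - 1, center + 1
    while (left >= 0 or right < len(cap_lens)) and text_len < max_text_len:
        if right < len(cap_lens):  # grow right first, then left.
            idxs.append(right)
            text_len += cap_lens[right]
            right += 1
        else:
            idxs.appendleft(left)
            text_len += cap_lens[left]
            left -= 1
    return list(idxs)

print(grow_clips([5, 8, 3, 6, 4], center=2, max_text_len=15))  # [1, 2, 3, 4]
```

Like the original, this adds a whole clip and only then re-checks the budget, so the final run can overshoot `max_text_len` by one clip.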
- self.keep_prob = keep_prob - - def __call__( - self, - text_feature, - centerclip_idx=None, - sampled_max_text_len=None, - sampled_max_video_len=None, - ): - # Let's use all caps for now and see if 256 can cover all of them. - if sampled_max_text_len is not None: - max_text_len = sampled_max_text_len - else: - max_text_len = self.max_text_len - if sampled_max_video_len is not None: - max_video_len = sampled_max_video_len - else: - max_video_len = self.max_video_len - - t_num_clips = len(text_feature["start"]) - - if centerclip_idx is None: - centerclip_idx = random.randint(0, t_num_clips - 1) - - start_idx, end_idx = centerclip_idx, centerclip_idx + 1 - text_clip_indexs = deque() - text_clip_indexs.append(start_idx) - text_len = len(text_feature["cap"][start_idx]) - - video_len = max( - 0, - text_feature["end"][start_idx] - - text_feature["start"][start_idx], - ) - - while ( - (start_idx > 0 or end_idx < t_num_clips) - and text_len < max_text_len - and video_len < max_video_len - ): - if random.random() > 0.5 and end_idx < t_num_clips: - # skip the next one? - if random.random() > self.keep_prob and (end_idx + 1) < t_num_clips: - end_idx = end_idx + 1 - text_clip_indexs.append(end_idx) - text_len += len(text_feature["cap"][end_idx]) - end_idx += 1 - elif start_idx > 0: - if random.random() > self.keep_prob and (start_idx - 1) > 0: - start_idx = start_idx - 1 - start_idx -= 1 - text_clip_indexs.insert(0, start_idx) - text_len += len(text_feature["cap"][start_idx]) - else: - if end_idx < t_num_clips: - if random.random() > self.keep_prob and (end_idx + 1) < t_num_clips: - end_idx = end_idx + 1 - text_clip_indexs.append(end_idx) - text_len += len(text_feature["cap"][end_idx]) - end_idx += 1 - else: - return text_clip_indexs - video_len = max( - 0, - text_feature["end"][text_clip_indexs[-1]] - - text_feature["start"][text_clip_indexs[0]], - ) - return text_clip_indexs - - -class VideoClipSamplingProcessor(Processor): - def __call__(self, video_len, max_video_len, center): - """ - `video_len`: length of the video. - `max_video_len`: maximum video tokens allowd in a sequence. - `center`: initial starting index. - """ - assert center >= 0 and center < video_len - t_clip_len = 0 - start, end = center, center - while (start > 0 or end < video_len) and t_clip_len < max_video_len: - # decide the direction to grow. - if start <= 0: - end += 1 - elif end >= video_len: - start -= 1 - elif random.random() > 0.5: - end += 1 - else: - start -= 1 - t_clip_len += 1 - return {"start": [start], "end": [end]} - - -class How2MILNCEAligner(FixedLenAligner): - """reference: `antoine77340/MIL-NCE_HowTo100M/video_loader.py`""" - - def __init__(self, config): - super().__init__(config) - self.num_candidates = 4 - self.min_time = 5.0 - self.num_sec = 3.2 - # self.num_sec = self.num_frames / float(self.fps) num_frames=16 / fps = 5 - # self.num_frames = 16 - - def sampling( - self, - video_id, - video_feature, - text_feature, - centerclip_idx=None, # will be ignored. - sampled_max_text_len=None # will be ignored. 
- ): - text, start, end = self._get_text(text_feature) - video = self._get_video(video_feature, start, end) - - vfeats = torch.zeros((self.max_video_len, video_feature.shape[1])) - vmasks = torch.zeros((self.max_video_len,), dtype=torch.bool) - vfeats[: video.shape[0]] = torch.from_numpy(np.array(video)) - vmasks[: video.shape[0]] = 1 - - caps, cmasks = [], [] - for words in text: - cap, cmask = self._build_text_seq(text_feature, words) - caps.append(cap) - cmasks.append(cmask) - caps = torch.stack(caps) - cmasks = torch.stack(cmasks) - # video of shape: (video_len) - # text of shape (num_candidates, max_text_len) - - return { - "caps": caps, - "cmasks": cmasks, - "vfeats": vfeats, - "vmasks": vmasks, - # "video_id": video_id, - } - - def _get_video(self, video_feature, start, end): - start_seek = random.randint(start, int(max(start, end - self.num_sec))) - # duration = self.num_sec + 0.1 - return video_feature[start_seek : int(start_seek + self.num_sec)] - - def _get_text(self, cap): - ind = random.randint(0, len(cap["start"]) - 1) - if self.num_candidates == 1: - words = [ind] - else: - words = [] - cap_start = self._find_nearest_candidates(cap, ind) - for i in range(self.num_candidates): - words.append([max(0, min(len(cap["cap"]) - 1, cap_start + i))]) - - start, end = cap["start"][ind], cap["end"][ind] - # TODO: May need to be improved for edge cases. - # expand the min time. - if end - start < self.min_time: - diff = self.min_time - end + start - start = max(0, start - diff / 2) - end = start + self.min_time - return words, int(start), int(end) - - def _find_nearest_candidates(self, caption, ind): - """find the range of the clips.""" - start, end = ind, ind - #diff = caption["end"][end] - caption["start"][start] - n_candidate = 1 - while n_candidate < self.num_candidates: - # the first clip - if start == 0: - return 0 - # we add () in the following condition to fix the bug. - elif end == (len(caption["start"]) - 1): - return start - (self.num_candidates - n_candidate) - elif (caption["end"][end] - caption["start"][start - 1]) < ( - caption["end"][end + 1] - caption["start"][start] - ): - start -= 1 - else: - end += 1 - n_candidate += 1 - return start - - -class PKLJSONStrTextProcessor(TextProcessor): - """`caption.json` from howto100m are preprocessed as a - dict `[video_id, json_str]`. - Json parsing tokenization are conducted on-the-fly and cached into dict. - """ - - def __init__(self, config, max_clip_text_len=96): - print("[Warning] PKLJSONStrTextProcessor is slow for num_workers > 0.") - self.caption_pkl_path = str(config.caption_pkl_path) - with open(self.caption_pkl_path, "rb") as fd: - self.data = pickle.load(fd) - self.max_clip_text_len = max_clip_text_len - from transformers import AutoTokenizer - self.tokenizer = AutoTokenizer.from_pretrained( - str(config.bert_name), use_fast=config.use_fast - ) - - def __call__(self, video_id): - caption = self.data[video_id] - if isinstance(caption, str): - import json - caption = json.loads(caption) - cap = [] - for clip_idx, text_clip in enumerate(caption["text"]): - clip_ids = [] - if isinstance(text_clip, str): - clip_ids = self.tokenizer( - text_clip[: self.max_clip_text_len], - add_special_tokens=False - )["input_ids"] - cap.append(clip_ids) - caption["cap"] = cap - caption.pop("text") # save space. 
-        self.data[video_id] = caption
-        return caption
diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/processors/how2retriprocessor.py b/kosmos-g/fairseq/examples/MMPT/mmpt/processors/how2retriprocessor.py
deleted file mode 100644
index b5a7730ec..000000000
--- a/kosmos-g/fairseq/examples/MMPT/mmpt/processors/how2retriprocessor.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .how2processor import (
-    ShardedHow2MetaProcessor,
-    ShardedVideoProcessor,
-    ShardedTextProcessor,
-    VariedLenAligner,
-    OverlappedAligner
-)
-
-
-class ShardedHow2VideoRetriMetaProcessor(ShardedHow2MetaProcessor):
-    def __init__(self, config):
-        super().__init__(config)
-        self.num_video_per_batch = config.num_video_per_batch
-        # chunk the video ids into candidate groups of `num_video_per_batch`;
-        # the stop bound keeps the number of groups a multiple of 8.
-        self.cands = [
-            self.data[batch_offset:batch_offset + self.num_video_per_batch]
-            for batch_offset in
-            range(0, (len(self.data) // (8 * self.num_video_per_batch)) * 8 * self.num_video_per_batch, self.num_video_per_batch)]
-
-    def __len__(self):
-        return len(self.cands)
-
-    def set_candidates(self, cands):
-        # no change to the number of batches.
-        print(len(self.cands), "->", len(cands))
-        # assert len(self.cands) == len(cands)
-        self.cands = cands
-
-    def __getitem__(self, idx):
-        video_ids = self.cands[idx]
-        assert isinstance(video_ids, list)
-        sharded_video_idxs = []
-        for video_id in video_ids:
-            shard_id, video_idx = self.video_id_to_shard[video_id]
-            sharded_video_idxs.append((video_id, -1, shard_id, video_idx))
-        return sharded_video_idxs, sharded_video_idxs
-
-
-class ShardedVideoRetriVideoProcessor(ShardedVideoProcessor):
-    """In the retrieval case the video_id is a list of tuples:
-    `(video_id, -1, shard_id, video_idx)`."""
-
-    def __call__(self, sharded_video_idxs):
-        assert isinstance(sharded_video_idxs, list)
-        cand_feats = []
-        for sharded_video_idx in sharded_video_idxs:
-            feat = super().__call__(sharded_video_idx)
-            cand_feats.append(feat)
-        return cand_feats
-
-
-class ShardedVideoRetriTextProcessor(ShardedTextProcessor):
-    """In the retrieval case the video_id is a list of tuples:
-    `(video_id, -1, shard_id, video_idx)`."""
-
-    def __call__(self, sharded_video_idxs):
-        assert isinstance(sharded_video_idxs, list)
-        cand_caps = []
-        for sharded_video_idx in sharded_video_idxs:
-            caps = super().__call__(sharded_video_idx)
-            cand_caps.append(caps)
-        return cand_caps
-
-
-class VideoRetriAligner(VariedLenAligner):
-    # RetriTask will trim dim-0.
-    def __call__(self, sharded_video_idxs, video_features, text_features):
-        from transformers import default_data_collator
-        batch, video_ids = [], []
-        for video_id, video_feature, text_feature in \
-                zip(sharded_video_idxs, video_features, text_features):
-            sub_batch = super().__call__(video_id, video_feature, text_feature)
-            batch.append(sub_batch)
-            if isinstance(video_id, tuple):
-                video_id = video_id[0]
-            video_ids.append(video_id)
-        batch = default_data_collator(batch)
-        batch["video_id"] = video_ids
-        return batch
-
-
-class VideoRetriOverlappedAligner(OverlappedAligner):
-    # RetriTask will trim dim-0.
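The `__call__` below runs the per-video aligner once per retrieval candidate and collates the resulting sub-batches. In isolation, `default_data_collator` stacks identically shaped tensors into a new leading candidate dimension (toy shapes below); that leading dimension is presumably the dim-0 the comment says `RetriTask` trims:

```python
import torch
from transformers import default_data_collator

# Two hypothetical per-video sub-batches with identically shaped tensors.
sub_batches = [
    {"caps": torch.zeros(2, 5, dtype=torch.long), "vfeats": torch.rand(2, 4)},
    {"caps": torch.ones(2, 5, dtype=torch.long), "vfeats": torch.rand(2, 4)},
]
batch = default_data_collator(sub_batches)
print(batch["caps"].shape, batch["vfeats"].shape)
# torch.Size([2, 2, 5]) torch.Size([2, 2, 4])
```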
- def __call__(self, sharded_video_idxs, video_features, text_features): - from transformers import default_data_collator - batch, video_ids = [], [] - for video_id, video_feature, text_feature in \ - zip(sharded_video_idxs, video_features, text_features): - sub_batch = super().__call__(video_id, video_feature, text_feature) - batch.append(sub_batch) - if isinstance(video_id, tuple): - video_id = video_id[0] - video_ids.append(video_id) - batch = default_data_collator(batch) - batch["video_id"] = video_ids - return batch diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/processors/models/s3dg.py b/kosmos-g/fairseq/examples/MMPT/mmpt/processors/models/s3dg.py deleted file mode 100644 index 6c7a691e3..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/processors/models/s3dg.py +++ /dev/null @@ -1,336 +0,0 @@ -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -"""Contains a PyTorch definition for Gated Separable 3D network (S3D-G) -with a text module for computing joint text-video embedding from raw text -and video input. The following code will enable you to load the HowTo100M -pretrained S3D Text-Video model from: - A. Miech, J.-B. Alayrac, L. Smaira, I. Laptev, J. Sivic and A. Zisserman, - End-to-End Learning of Visual Representations from Uncurated Instructional Videos. - https://arxiv.org/abs/1912.06430. - -S3D-G was proposed by: - S. Xie, C. Sun, J. Huang, Z. Tu and K. Murphy, - Rethinking Spatiotemporal Feature Learning For Video Understanding. - https://arxiv.org/abs/1712.04851. - Tensorflow code: https://github.com/tensorflow/models/blob/master/research/slim/nets/s3dg.py - -The S3D architecture was slightly modified with a space to depth trick for TPU -optimization. -""" - -import torch as th -import torch.nn.functional as F -import torch.nn as nn -import os -import numpy as np -import re - - -class InceptionBlock(nn.Module): - def __init__( - self, - input_dim, - num_outputs_0_0a, - num_outputs_1_0a, - num_outputs_1_0b, - num_outputs_2_0a, - num_outputs_2_0b, - num_outputs_3_0b, - gating=True, - ): - super(InceptionBlock, self).__init__() - self.conv_b0 = STConv3D(input_dim, num_outputs_0_0a, [1, 1, 1]) - self.conv_b1_a = STConv3D(input_dim, num_outputs_1_0a, [1, 1, 1]) - self.conv_b1_b = STConv3D( - num_outputs_1_0a, num_outputs_1_0b, [3, 3, 3], padding=1, separable=True - ) - self.conv_b2_a = STConv3D(input_dim, num_outputs_2_0a, [1, 1, 1]) - self.conv_b2_b = STConv3D( - num_outputs_2_0a, num_outputs_2_0b, [3, 3, 3], padding=1, separable=True - ) - self.maxpool_b3 = th.nn.MaxPool3d((3, 3, 3), stride=1, padding=1) - self.conv_b3_b = STConv3D(input_dim, num_outputs_3_0b, [1, 1, 1]) - self.gating = gating - self.output_dim = ( - num_outputs_0_0a + num_outputs_1_0b + num_outputs_2_0b + num_outputs_3_0b - ) - if gating: - self.gating_b0 = SelfGating(num_outputs_0_0a) - self.gating_b1 = SelfGating(num_outputs_1_0b) - self.gating_b2 = SelfGating(num_outputs_2_0b) - self.gating_b3 = SelfGating(num_outputs_3_0b) - - def forward(self, input): - """Inception block - """ - b0 = self.conv_b0(input) - b1 = self.conv_b1_a(input) - b1 = self.conv_b1_b(b1) - b2 = self.conv_b2_a(input) - b2 = self.conv_b2_b(b2) - b3 = self.maxpool_b3(input) - b3 = self.conv_b3_b(b3) - if self.gating: - b0 = self.gating_b0(b0) - b1 = self.gating_b1(b1) - b2 = self.gating_b2(b2) - b3 = self.gating_b3(b3) - return th.cat((b0, b1, b2, b3), dim=1) - - -class SelfGating(nn.Module): - def __init__(self, input_dim): - super(SelfGating, 
self).__init__() - self.fc = nn.Linear(input_dim, input_dim) - - def forward(self, input_tensor): - """Feature gating as used in S3D-G. - """ - spatiotemporal_average = th.mean(input_tensor, dim=[2, 3, 4]) - weights = self.fc(spatiotemporal_average) - weights = th.sigmoid(weights) - return weights[:, :, None, None, None] * input_tensor - - -class STConv3D(nn.Module): - def __init__( - self, input_dim, output_dim, kernel_size, stride=1, padding=0, separable=False - ): - super(STConv3D, self).__init__() - self.separable = separable - self.relu = nn.ReLU(inplace=True) - assert len(kernel_size) == 3 - if separable and kernel_size[0] != 1: - spatial_kernel_size = [1, kernel_size[1], kernel_size[2]] - temporal_kernel_size = [kernel_size[0], 1, 1] - if isinstance(stride, list) and len(stride) == 3: - spatial_stride = [1, stride[1], stride[2]] - temporal_stride = [stride[0], 1, 1] - else: - spatial_stride = [1, stride, stride] - temporal_stride = [stride, 1, 1] - if isinstance(padding, list) and len(padding) == 3: - spatial_padding = [0, padding[1], padding[2]] - temporal_padding = [padding[0], 0, 0] - else: - spatial_padding = [0, padding, padding] - temporal_padding = [padding, 0, 0] - if separable: - self.conv1 = nn.Conv3d( - input_dim, - output_dim, - kernel_size=spatial_kernel_size, - stride=spatial_stride, - padding=spatial_padding, - bias=False, - ) - self.bn1 = nn.BatchNorm3d(output_dim) - self.conv2 = nn.Conv3d( - output_dim, - output_dim, - kernel_size=temporal_kernel_size, - stride=temporal_stride, - padding=temporal_padding, - bias=False, - ) - self.bn2 = nn.BatchNorm3d(output_dim) - else: - self.conv1 = nn.Conv3d( - input_dim, - output_dim, - kernel_size=kernel_size, - stride=stride, - padding=padding, - bias=False, - ) - self.bn1 = nn.BatchNorm3d(output_dim) - - def forward(self, input): - out = self.relu(self.bn1(self.conv1(input))) - if self.separable: - out = self.relu(self.bn2(self.conv2(out))) - return out - - -class MaxPool3dTFPadding(th.nn.Module): - def __init__(self, kernel_size, stride=None, padding="SAME"): - super(MaxPool3dTFPadding, self).__init__() - if padding == "SAME": - padding_shape = self._get_padding_shape(kernel_size, stride) - self.padding_shape = padding_shape - self.pad = th.nn.ConstantPad3d(padding_shape, 0) - self.pool = th.nn.MaxPool3d(kernel_size, stride, ceil_mode=True) - - def _get_padding_shape(self, filter_shape, stride): - def _pad_top_bottom(filter_dim, stride_val): - pad_along = max(filter_dim - stride_val, 0) - pad_top = pad_along // 2 - pad_bottom = pad_along - pad_top - return pad_top, pad_bottom - - padding_shape = [] - for filter_dim, stride_val in zip(filter_shape, stride): - pad_top, pad_bottom = _pad_top_bottom(filter_dim, stride_val) - padding_shape.append(pad_top) - padding_shape.append(pad_bottom) - depth_top = padding_shape.pop(0) - depth_bottom = padding_shape.pop(0) - padding_shape.append(depth_top) - padding_shape.append(depth_bottom) - return tuple(padding_shape) - - def forward(self, inp): - inp = self.pad(inp) - out = self.pool(inp) - return out - - -class Sentence_Embedding(nn.Module): - def __init__( - self, - embd_dim, - num_embeddings=66250, - word_embedding_dim=300, - token_to_word_path="dict.npy", - max_words=16, - output_dim=2048, - ): - super(Sentence_Embedding, self).__init__() - self.word_embd = nn.Embedding(num_embeddings, word_embedding_dim) - self.fc1 = nn.Linear(word_embedding_dim, output_dim) - self.fc2 = nn.Linear(output_dim, embd_dim) - self.word_to_token = {} - self.max_words = max_words - token_to_word = 
np.load(token_to_word_path) - for i, t in enumerate(token_to_word): - self.word_to_token[t] = i + 1 - - def _zero_pad_tensor_token(self, tensor, size): - if len(tensor) >= size: - return tensor[:size] - else: - zero = th.zeros(size - len(tensor)).long() - return th.cat((tensor, zero), dim=0) - - def _split_text(self, sentence): - w = re.findall(r"[\w']+", str(sentence)) - return w - - def _words_to_token(self, words): - words = [ - self.word_to_token[word] for word in words if word in self.word_to_token - ] - if words: - we = self._zero_pad_tensor_token(th.LongTensor(words), self.max_words) - return we - else: - return th.zeros(self.max_words).long() - - def _words_to_ids(self, x): - split_x = [self._words_to_token(self._split_text(sent.lower())) for sent in x] - return th.stack(split_x, dim=0) - - def forward(self, x): - x = self._words_to_ids(x) - x = self.word_embd(x) - x = F.relu(self.fc1(x)) - x = th.max(x, dim=1)[0] - x = self.fc2(x) - return {'text_embedding': x} - - -class S3D(nn.Module): - def __init__(self, dict_path, num_classes=512, gating=True, space_to_depth=True): - super(S3D, self).__init__() - self.num_classes = num_classes - self.gating = gating - self.space_to_depth = space_to_depth - if space_to_depth: - self.conv1 = STConv3D( - 24, 64, [2, 4, 4], stride=1, padding=(1, 2, 2), separable=False - ) - else: - self.conv1 = STConv3D( - 3, 64, [3, 7, 7], stride=2, padding=(1, 3, 3), separable=False - ) - self.conv_2b = STConv3D(64, 64, [1, 1, 1], separable=False) - self.conv_2c = STConv3D(64, 192, [3, 3, 3], padding=1, separable=True) - self.gating = SelfGating(192) - self.maxpool_2a = MaxPool3dTFPadding( - kernel_size=(1, 3, 3), stride=(1, 2, 2), padding="SAME" - ) - self.maxpool_3a = MaxPool3dTFPadding( - kernel_size=(1, 3, 3), stride=(1, 2, 2), padding="SAME" - ) - self.mixed_3b = InceptionBlock(192, 64, 96, 128, 16, 32, 32) - self.mixed_3c = InceptionBlock( - self.mixed_3b.output_dim, 128, 128, 192, 32, 96, 64 - ) - self.maxpool_4a = MaxPool3dTFPadding( - kernel_size=(3, 3, 3), stride=(2, 2, 2), padding="SAME" - ) - self.mixed_4b = InceptionBlock( - self.mixed_3c.output_dim, 192, 96, 208, 16, 48, 64 - ) - self.mixed_4c = InceptionBlock( - self.mixed_4b.output_dim, 160, 112, 224, 24, 64, 64 - ) - self.mixed_4d = InceptionBlock( - self.mixed_4c.output_dim, 128, 128, 256, 24, 64, 64 - ) - self.mixed_4e = InceptionBlock( - self.mixed_4d.output_dim, 112, 144, 288, 32, 64, 64 - ) - self.mixed_4f = InceptionBlock( - self.mixed_4e.output_dim, 256, 160, 320, 32, 128, 128 - ) - self.maxpool_5a = self.maxPool3d_5a_2x2 = MaxPool3dTFPadding( - kernel_size=(2, 2, 2), stride=(2, 2, 2), padding="SAME" - ) - self.mixed_5b = InceptionBlock( - self.mixed_4f.output_dim, 256, 160, 320, 32, 128, 128 - ) - self.mixed_5c = InceptionBlock( - self.mixed_5b.output_dim, 384, 192, 384, 48, 128, 128 - ) - self.fc = nn.Linear(self.mixed_5c.output_dim, num_classes) - self.text_module = Sentence_Embedding(num_classes, - token_to_word_path=dict_path) - - def _space_to_depth(self, input): - """3D space to depth trick for TPU optimization. 
- """ - B, C, T, H, W = input.shape - input = input.view(B, C, T // 2, 2, H // 2, 2, W // 2, 2) - input = input.permute(0, 3, 5, 7, 1, 2, 4, 6) - input = input.contiguous().view(B, 8 * C, T // 2, H // 2, W // 2) - return input - - def forward(self, inputs): - """Defines the S3DG base architecture.""" - if self.space_to_depth: - inputs = self._space_to_depth(inputs) - net = self.conv1(inputs) - if self.space_to_depth: - # we need to replicate 'SAME' tensorflow padding - net = net[:, :, 1:, 1:, 1:] - net = self.maxpool_2a(net) - net = self.conv_2b(net) - net = self.conv_2c(net) - if self.gating: - net = self.gating(net) - net = self.maxpool_3a(net) - net = self.mixed_3b(net) - net = self.mixed_3c(net) - net = self.maxpool_4a(net) - net = self.mixed_4b(net) - net = self.mixed_4c(net) - net = self.mixed_4d(net) - net = self.mixed_4e(net) - net = self.mixed_4f(net) - net = self.maxpool_5a(net) - net = self.mixed_5b(net) - net = self.mixed_5c(net) - net = th.mean(net, dim=[2, 3, 4]) - return {'video_embedding': self.fc(net), 'mixed_5c': net} diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/processors/processor.py b/kosmos-g/fairseq/examples/MMPT/mmpt/processors/processor.py deleted file mode 100644 index 98edb051f..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/processors/processor.py +++ /dev/null @@ -1,274 +0,0 @@ -# Copyright (c) Facebook, Inc. All Rights Reserved - -import numpy as np -import os -import torch - - -class Processor(object): - """ - A generic processor for video (codec, feature etc.) and text. - """ - - def __call__(self, **kwargs): - raise NotImplementedError - - -class MetaProcessor(Processor): - """ - A meta processor is expected to load the metadata of a dataset: - (e.g., video_ids, or captions). - You must implement the `__getitem__` (meta datasets are rather diverse.). - """ - - def __init__(self, config): - self.split = config.split - - def __len__(self): - return len(self.data) - - def __getitem__(self, idx): - raise NotImplementedError - - def _get_split_path(self, config): - splits = { - "train": config.train_path, - "valid": config.val_path, - "test": config.test_path, - } - if config.split is not None: - return splits[config.split] - return config.train_path - - -class TextProcessor(Processor): - """ - A generic Text processor: rename this as `withTokenizer`. - tokenize a string of text on-the-fly. - Warning: mostly used for end tasks. - (on-the-fly tokenization is slow for how2.) - TODO(huxu): move this class as a subclass. - """ - - def __init__(self, config): - self.bert_name = str(config.bert_name) - self.use_fast = config.use_fast - from transformers import AutoTokenizer - self.tokenizer = AutoTokenizer.from_pretrained( - self.bert_name, use_fast=self.use_fast - ) - - def __call__(self, text_id): - caption = self.tokenizer(text_id, add_special_tokens=False) - return caption["input_ids"] - - -class VideoProcessor(Processor): - """ - A generic video processor: load a numpy video tokens by default. - """ - - def __init__(self, config): - self.vfeat_dir = config.vfeat_dir - - def __call__(self, video_fn): - if isinstance(video_fn, tuple): - video_fn = video_fn[0] - assert isinstance(video_fn, str) - video_fn = os.path.join(self.vfeat_dir, video_fn + ".npy") - feat = np.load(video_fn) - return feat - - -class Aligner(object): - """ - An alignprocessor align video and text and output a dict of tensors (for a model). 
- """ - def __init__(self, config): - """__init__ needs to be light weight for more workers/threads.""" - self.split = config.split - self.max_video_len = config.max_video_len - self.max_len = config.max_len - from transformers import AutoTokenizer - tokenizer = AutoTokenizer.from_pretrained( - str(config.bert_name), use_fast=config.use_fast - ) - self.cls_token_id = tokenizer.cls_token_id - self.sep_token_id = tokenizer.sep_token_id - self.pad_token_id = tokenizer.pad_token_id - self.mask_token_id = tokenizer.mask_token_id - - def __call__(self, video_id, video_feature, text_feature): - raise NotImplementedError - - def _build_video_seq(self, video_feature, video_clips=None): - """ - `video_feature`: available video tokens. - `video_clips`: video clip sequence to build. - """ - if not isinstance(video_feature, np.ndarray): - raise ValueError( - "unsupported type of video_feature", type(video_feature) - ) - - if video_clips is None: - # this is borrowed from DSAligner - video_start = 0 - video_end = min(len(video_feature), self.max_video_len) - # the whole sequence is a single clip. - video_clips = {"start": [video_start], "end": [video_end]} - - vfeats = np.zeros( - (self.max_video_len, video_feature.shape[1]), dtype=np.float32 - ) - vmasks = torch.zeros((self.max_video_len,), dtype=torch.bool) - video_len = 0 - for start, end in zip(video_clips["start"], video_clips["end"]): - clip_len = min(self.max_video_len - video_len, (end - start)) - if clip_len > 0: - vfeats[video_len: video_len + clip_len] = video_feature[ - start: start + clip_len - ] - vmasks[video_len: video_len + clip_len] = 1 - video_len += clip_len - vfeats = torch.from_numpy(vfeats) - - return vfeats, vmasks - - def _build_text_seq(self, text_feature, text_clip_indexs=None): - """ - `text_feature`: all available clips. - `text_clip_indexes`: clip sequence to build. - """ - if text_clip_indexs is None: - text_clip_indexs = [0] - - full_caps = [] - if isinstance(text_feature, dict): - for clip_idx in text_clip_indexs: - full_caps.extend(text_feature["cap"][clip_idx]) - else: - full_caps = text_feature - max_text_len = self.max_len - self.max_video_len - 3 - full_caps = full_caps[:max_text_len] - full_caps = ( - [self.cls_token_id, self.sep_token_id] + full_caps + [self.sep_token_id] - ) - text_pad_len = self.max_len - len(full_caps) - self.max_video_len - padded_full_caps = full_caps + [self.pad_token_id] * text_pad_len - caps = torch.LongTensor(padded_full_caps) - cmasks = torch.zeros((len(padded_full_caps),), dtype=torch.bool) - cmasks[: len(full_caps)] = 1 - - return caps, cmasks - - def batch_post_processing(self, batch, video_feature): - return batch - - -class MMAttentionMask2DProcessor(Processor): - """text generation requires 2d mask - that is harder to generate by GPU at this stage.""" - - def __call__(self, vmask, cmask, mtype): - if mtype == "textgen": - return self._build_textgeneration_mask(vmask, cmask) - elif mtype == "videogen": - return self._build_videogeneration_mask(vmask, cmask) - else: - return self._build_mm_mask(vmask, cmask) - - def _build_mm_mask(self, vmask, cmask): - mask_1d = torch.cat([cmask[:1], vmask, cmask[1:]], dim=0) - return mask_1d[None, :].repeat(mask_1d.size(0), 1) - - def _build_videogeneration_mask(self, vmask, cmask): - # cls_mask is only about text otherwise it will leak generation. - cls_text_mask = torch.cat([ - # [CLS] - torch.ones( - (1,), dtype=torch.bool, device=cmask.device), - # video tokens and [SEP] for video. 
-            torch.zeros(
-                (vmask.size(0) + 1,), dtype=torch.bool, device=cmask.device),
-            cmask[2:]
-        ], dim=0)
-
-        # concatenate horizontally.
-        video_len = int(vmask.sum())
-        video_masks = torch.cat([
-            # [CLS]
-            torch.ones(
-                (video_len, 1), dtype=torch.bool, device=cmask.device
-            ),
-            torch.tril(
-                torch.ones(
-                    (video_len, video_len),
-                    dtype=torch.bool, device=cmask.device)),
-            # video_padding
-            torch.zeros(
-                (video_len, vmask.size(0) - video_len),
-                dtype=torch.bool, device=cmask.device
-            ),
-            # [SEP] for video (unused).
-            torch.zeros(
-                (video_len, 1), dtype=torch.bool, device=cmask.device
-            ),
-            cmask[2:].unsqueeze(0).repeat(video_len, 1)
-        ], dim=1)
-
-        text_masks = cls_text_mask[None, :].repeat(
-            cmask.size(0) - 2, 1)
-        video_padding_masks = cls_text_mask[None, :].repeat(
-            vmask.size(0) - video_len, 1)
-
-        return torch.cat([
-            cls_text_mask[None, :],
-            video_masks,
-            video_padding_masks,
-            torch.cat([cmask[:1], vmask, cmask[1:]], dim=0)[None, :],
-            text_masks
-        ], dim=0)
-
-    def _build_textgeneration_mask(self, vmask, cmask):
-        # cls_mask is only about video otherwise it will leak generation.
-        cls_video_mask = torch.cat([
-            # [CLS]
-            torch.ones(
-                (1,), dtype=torch.bool, device=cmask.device),
-            vmask,
-            # [SEP]
-            torch.ones((1,), dtype=torch.bool, device=cmask.device),
-            torch.zeros(
-                (cmask.size(0) - 2,), dtype=torch.bool, device=cmask.device)
-        ], dim=0)
-
-        # concatenate horizontally.
-        text_len = int(cmask[2:].sum())
-        text_masks = torch.cat([
-            # [CLS]
-            torch.ones(
-                (text_len, 1), dtype=torch.bool, device=cmask.device
-            ),
-            vmask.unsqueeze(0).repeat(text_len, 1),
-            # [SEP] for video.
-            torch.ones(
-                (text_len, 1), dtype=torch.bool, device=cmask.device
-            ),
-            torch.tril(
-                torch.ones(
-                    (text_len, text_len),
-                    dtype=torch.bool, device=cmask.device)),
-            # padding.
-            torch.zeros(
-                (text_len, cmask.size(0) - text_len - 2),
-                dtype=torch.bool, device=cmask.device
-            )
-        ], dim=1)
-
-        cls_video_masks = cls_video_mask[None, :].repeat(
-            vmask.size(0) + 2, 1)
-        text_padding_masks = cls_video_mask[None, :].repeat(
-            cmask.size(0) - text_len - 2, 1)
-        return torch.cat([
-            cls_video_masks, text_masks, text_padding_masks], dim=0)
diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/__init__.py b/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/__init__.py
deleted file mode 100644
index e2e9323a5..000000000
--- a/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/__init__.py
+++ /dev/null
@@ -1,22 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-from .task import *
-from .vlmtask import *
-from .retritask import *
-
-try:
-    from .fairseqmmtask import *
-except ImportError:
-    pass
-
-try:
-    from .milncetask import *
-except ImportError:
-    pass
-
-try:
-    from .expretritask import *
-except ImportError:
-    pass
diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/fairseqmmtask.py b/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/fairseqmmtask.py
deleted file mode 100644
index f6b6115a3..000000000
--- a/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/fairseqmmtask.py
+++ /dev/null
@@ -1,104 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-A general fairseq task for MM pretraining.
-""" - -import random - -from fairseq.tasks import LegacyFairseqTask, register_task - -from .task import Task -from .retritask import RetriTask -from ..datasets import FairseqMMDataset -from .. import utils - - -@register_task("mmtask") -class FairseqMMTask(LegacyFairseqTask): - @staticmethod - def add_args(parser): - # Add some command-line arguments for specifying where the data is - # located and the maximum supported input length. - parser.add_argument( - "taskconfig", - metavar="FILE", - help=("taskconfig to load all configurations" "outside fairseq parser."), - ) - - @classmethod - def setup_task(cls, args, **kwargs): - return FairseqMMTask(args) - - def __init__(self, args): - super().__init__(args) - config = utils.load_config(args) - self.mmtask = Task.config_task(config) - self.mmtask.build_dataset() - self.mmtask.build_model() - self.mmtask.build_loss() - - def load_dataset(self, split, **kwargs): - split_map = { - "train": self.mmtask.train_data, - "valid": self.mmtask.val_data, - "test": self.mmtask.test_data, - } - if split not in split_map: - raise ValueError("unknown split type.") - if split_map[split] is not None: - self.datasets[split] = FairseqMMDataset(split_map[split]) - - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - data_buffer_size=0, - disable_iterator_cache=False, - skip_remainder_batch=False, - grouped_shuffling=False, - update_epoch_batch_itr=False, - ): - random.seed(epoch) - if dataset.mmdataset.split == "train" and isinstance(self.mmtask, RetriTask): - if epoch >= self.mmtask.config.retri_epoch: - if not hasattr(self.mmtask, "retri_dataloader"): - self.mmtask.build_dataloader() - self.mmtask.retrive_candidates(epoch) - - return super().get_batch_iterator( - dataset, - max_tokens, - max_sentences, - max_positions, - ignore_invalid_inputs, - required_batch_size_multiple, - seed, - num_shards, - shard_id, - num_workers, - epoch, - data_buffer_size, - disable_iterator_cache, - grouped_shuffling, - update_epoch_batch_itr, - ) - - @property - def source_dictionary(self): - return None - - @property - def target_dictionary(self): - return None diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/milncetask.py b/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/milncetask.py deleted file mode 100644 index 61b6ab059..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/milncetask.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-
-import torch
-
-from .task import Task
-
-
-class MILNCETask(Task):
-    def reshape_subsample(self, sample):
-        if (
-            hasattr(self.config.dataset, "subsampling")
-            and self.config.dataset.subsampling is not None
-            and self.config.dataset.subsampling > 1
-        ):
-            for key in sample:
-                if torch.is_tensor(sample[key]):
-                    tensor = self.flat_subsample(sample[key])
-                    if key in ["caps", "cmasks"]:
-                        size = tensor.size()
-                        batch_size = size[0] * size[1]
-                        expanded_size = (batch_size,) + size[2:]
-                        tensor = tensor.view(expanded_size)
-                    sample[key] = tensor
-        return sample
diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/retritask.py b/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/retritask.py
deleted file mode 100644
index b43f20fdd..000000000
--- a/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/retritask.py
+++ /dev/null
@@ -1,253 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-import os
-import torch
-import pickle
-import random
-
-from tqdm import tqdm
-from torch.utils.data import DataLoader
-from torch.utils.data.distributed import DistributedSampler
-
-from ..processors import (
-    ShardedHow2MetaProcessor,
-    ShardedVideoProcessor,
-    ShardedTextProcessor,
-    VariedLenAligner,
-)
-
-from ..datasets import MMDataset
-from .task import Task
-from ..modules import vectorpool
-from ..evaluators.predictor import Predictor
-from ..utils import set_seed, get_local_rank, get_world_size
-
-
-class RetriTask(Task):
-    """Abstract class for a task with retrieval."""
-
-    def reshape_subsample(self, sample):
-        for key in sample:
-            if torch.is_tensor(sample[key]):
-                sample[key] = self.flat_subsample(sample[key])
-        return sample
-
-    def flat_subsample(self, tensor):
-        if tensor.size(0) == 1:
-            tensor = tensor.squeeze(0)
-        return tensor
-
-    def build_dataloader(self):
-        """called by `get_batch_iterator` in fairseqmmtask."""
-        # TODO: the retrieval dataloader is hard-coded for now; make it
-        # configurable in .yaml.
-        # reuse the `train.lst`.
-        self.config.dataset.split = "train"
-        meta_processor = ShardedHow2MetaProcessor(self.config.dataset)
-        video_processor = ShardedVideoProcessor(self.config.dataset)
-        text_processor = ShardedTextProcessor(self.config.dataset)
-
-        aligner = VariedLenAligner(self.config.dataset)
-        aligner.subsampling = self.config.dataset.clip_per_video
-
-        self.retri_data = MMDataset(
-            meta_processor, video_processor, text_processor, aligner
-        )
-
-        retri_sampler = DistributedSampler(self.retri_data)
-        infer_scale = 16
-        batch_size = self.config.dataset.num_video_per_batch \
-            * infer_scale
-
-        self.retri_dataloader = DataLoader(
-            self.retri_data,
-            collate_fn=self.retri_data.collater,
-            batch_size=batch_size,
-            shuffle=False,
-            sampler=retri_sampler,
-            num_workers=self.config.fairseq.dataset.num_workers
-        )
-        return self.retri_dataloader
-
-    def retrive_candidates(self, epoch, dataloader=None):
-        if get_local_rank() == 0:
-            print("running retrieval model.")
-        out_dir = os.path.join(
-            self.config.fairseq.checkpoint.save_dir, "retri")
-        os.makedirs(out_dir, exist_ok=True)
-
-        if not os.path.isfile(
-            os.path.join(
-                out_dir, "batched_e" + str(epoch) + "_videos0.pkl")
-        ):
-            if dataloader is None:
-                dataloader = self.retri_dataloader
-
-            self.model.eval()
-            self.model.is_train = False
-
-            assert self.retri_data.meta_processor.data == \
-                self.train_data.meta_processor.data  # video_ids not mutated.
-
-            self._retri_predict(epoch, dataloader)
-
-            self.model.train()
-            self.model.is_train = True
-
-        torch.distributed.barrier()
-        output = self._retri_sync(epoch, out_dir)
-        torch.distributed.barrier()
-        self.train_data.meta_processor.set_candidates(output)
-        return output
-
-
-class VideoRetriTask(RetriTask):
-    """RetriTask on video level."""
-
-    def reshape_subsample(self, sample):
-        if (
-            hasattr(self.config.dataset, "clip_per_video")
-            and self.config.dataset.clip_per_video is not None
-            and self.config.dataset.clip_per_video > 1
-        ):
-            for key in sample:
-                if torch.is_tensor(sample[key]):
-                    sample[key] = self.flat_subsample(sample[key])
-        return sample
-
-    def flat_subsample(self, tensor):
-        if tensor.size(0) == 1:
-            tensor = tensor.squeeze(0)
-        return Task.flat_subsample(self, tensor)
-
-    def _retri_predict(self, epoch, dataloader):
-        set_seed(epoch)
-        # save for retrieval.
-        predictor = VideoPredictor(self.config)
-        predictor.predict_loop(
-            self.model, dataloader)
-        set_seed(epoch)  # get the same text clips.
-        # retrieval.
-        retri_predictor = VideoRetriPredictor(
-            self.config)
-        retri_predictor.predict_loop(
-            self.model, predictor.vecpool.retriver, epoch)
-        del predictor
-        del retri_predictor
-
-    def _retri_sync(self, epoch, out_dir):
-        # every GPU does the same merge.
-        batched_videos = []
-        for local_rank in range(get_world_size()):
-            fn = os.path.join(
-                out_dir,
-                "batched_e" + str(epoch) + "_videos" + str(local_rank) + ".pkl")
-            with open(fn, "rb") as fr:
-                batched_videos.extend(pickle.load(fr))
-        print(
-            "[INFO] batched_videos",
-            len(batched_videos), len(batched_videos[0]))
-        return batched_videos
-
-
-class VideoPredictor(Predictor):
-    def __init__(self, config):
-        vectorpool_cls = getattr(vectorpool, config.vectorpool_cls)
-        self.vecpool = vectorpool_cls(config)
-
-    def predict_loop(
-        self,
-        model,
-        dataloader,
-        early_stop=-1,
-    ):
-        with torch.no_grad():
-            if get_local_rank() == 0:
-                dataloader = tqdm(dataloader)
-            for batch_idx, batch in enumerate(dataloader):
-                if batch_idx == early_stop:
-                    break
-                self(batch, model)
-        return self.finalize()
-
-    def __call__(self, sample, model, **kwargs):
-        param = next(model.parameters())
-        dtype = param.dtype
-        device = param.device
-        subsample = sample["vfeats"].size(1)
-        sample = self.to_ctx(sample, device, dtype)
-        for key in sample:
-            if torch.is_tensor(sample[key]):
-                size = sample[key].size()
-                if len(size) >= 2:
-                    batch_size = size[0] * size[1]
-                    expanded_size = (
-                        (batch_size,) + size[2:] if len(size) > 2
-                        else (batch_size,)
-                    )
-                    sample[key] = sample[key].view(expanded_size)
-
-        outputs = model(**sample)
-        sample.update(outputs)
-        self.vecpool(sample, subsample)
-
-    def finalize(self):
-        print("[INFO]", self.vecpool)
-        if not self.vecpool.retriver.db.is_trained:
-            self.vecpool.retriver.finalize_training()
-        return self.vecpool.retriver
-
-
-class VideoRetriPredictor(Predictor):
-    """
-    Online Retrieval Predictor for Clips (used by RetriTask).
-    TODO: merge this with VisPredictor?
-    """
-
-    def __init__(self, config):
-        self.pred_dir = os.path.join(
-            config.fairseq.checkpoint.save_dir,
-            "retri")
-        self.num_cands = config.num_cands
-        self.num_video_per_batch = config.dataset.num_video_per_batch
-
-    def predict_loop(
-        self,
-        model,
-        retriver,
-        epoch,
-        early_stop=-1
-    ):
-        # a fake loop that only tries to recover video vectors
-        # from video_ids.
-        batched_videos = []
-        # obtain available video_ids.
-        video_ids = list(retriver.videoid_to_vectoridx.keys())
-
-        dataloader = random.sample(
-            video_ids,
-            len(video_ids) // self.num_video_per_batch
-        )
-
-        if get_local_rank() == 0:
-            dataloader = tqdm(dataloader)
-        for batch_idx, batch in enumerate(dataloader):
-            # batch is one video id.
-            if batch_idx == early_stop:
-                break
-            video_ids = retriver.search_by_video_ids(
-                [batch], self.num_cands)[0]
-            if len(video_ids) > self.num_video_per_batch:
-                # we moved the center to make the cluster robust.
-                video_ids = random.sample(video_ids, self.num_video_per_batch)
-            batched_videos.append(video_ids)
-        return self.finalize(batched_videos, epoch)
-
-    def finalize(self, batched_videos, epoch):
-        fn = os.path.join(
-            self.pred_dir,
-            "batched_e" + str(epoch) + "_videos" + str(get_local_rank()) + ".pkl")
-        with open(fn, "wb") as fw:
-            pickle.dump(batched_videos, fw, pickle.HIGHEST_PROTOCOL)
-        return batched_videos
diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/task.py b/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/task.py
deleted file mode 100644
index 8bb50f24d..000000000
--- a/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/task.py
+++ /dev/null
@@ -1,184 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-import torch
-
-from .. import tasks
-from .. import models
-from .. import losses
-from ..datasets import MMDataset
-from .. import processors
-
-
-class Task(object):
-    """
-    A task refers to one generic training task (e.g., training one model).
-    """
-
-    @classmethod
-    def config_task(cls, config):
-        """
-        Determine whether to load a hard-coded task or to configure a generic
-        one, depending on whether a task string is available in the config.
-        """
-        if config.task is not None:
-            # TODO (huxu): expand the search scope.
-            task_cls = getattr(tasks, config.task)
-            return task_cls(config)
-        else:
-            return Task(config)
-
-    def __init__(self, config):
-        self.config = config
-        self.train_data = None
-        self.val_data = None
-        self.test_data = None
-
-        self.model = None
-        self.loss_fn = None
-        self.eval_fn = None
-
-    def build_dataset(self):
-        """Fill in `self.train_data`, `self.val_data` and `self.test_data`.
-        TODO (huxu): move processor breakdown to MMDataset."""
-
-        meta_processor_cls = getattr(
-            processors, self.config.dataset.meta_processor)
-        video_processor_cls = getattr(
-            processors, self.config.dataset.video_processor)
-        text_processor_cls = getattr(
-            processors, self.config.dataset.text_processor)
-        aligner_cls = getattr(
-            processors, self.config.dataset.aligner)
-
-        if self.config.dataset.train_path is not None:
-            self.config.dataset.split = "train"
-            # may be used by meta processor.
-            # meta_processor controls different dataset.
-            meta_processor = meta_processor_cls(self.config.dataset)
-            video_processor = video_processor_cls(self.config.dataset)
-            text_processor = text_processor_cls(self.config.dataset)
-            aligner = aligner_cls(self.config.dataset)
-            self.train_data = MMDataset(
-                meta_processor, video_processor, text_processor, aligner
-            )
-            print("train_len", len(self.train_data))
-            output = self.train_data[0]
-            self.train_data.print_example(output)
-        if self.config.dataset.val_path is not None:
-            self.config.dataset.split = "valid"
-            # may be used by meta processor.
-            meta_processor = meta_processor_cls(self.config.dataset)
-            video_processor = video_processor_cls(self.config.dataset)
-            text_processor = text_processor_cls(self.config.dataset)
-            aligner = aligner_cls(self.config.dataset)
-            self.val_data = MMDataset(
-                meta_processor, video_processor, text_processor, aligner
-            )
-            print("val_len", len(self.val_data))
-            output = self.val_data[0]
-            self.val_data.print_example(output)
-
-        if self.config.dataset.split == "test":
-            # the following is run via launching fairseq-validate.
-            meta_processor = meta_processor_cls(self.config.dataset)
-            video_processor = video_processor_cls(self.config.dataset)
-            text_processor = text_processor_cls(self.config.dataset)
-            # build the aligner here as well; the test split cannot rely on
-            # the one created for the validation split above.
-            aligner = aligner_cls(self.config.dataset)
-
-            self.test_data = MMDataset(
-                meta_processor, video_processor, text_processor, aligner
-            )
-            print("test_len", len(self.test_data))
-            output = self.test_data[0]
-            self.test_data.print_example(output)
-
-    def build_model(self, checkpoint=None):
-        if self.model is None:
-            model_cls = getattr(models, self.config.model.model_cls)
-            self.model = model_cls(self.config)
-        if checkpoint is not None:
-            self.load_checkpoint(checkpoint)
-        return self.model
-
-    def load_checkpoint(self, checkpoint):
-        if self.model is None:
-            raise ValueError("model is not initialized.")
-        state_dict = torch.load(checkpoint)
-        state_dict = self._trim_state_dict(state_dict)
-        self.model.load_state_dict(state_dict, strict=False)
-        # if it's a fp16 model, turn it back.
-        if next(self.model.parameters()).dtype == torch.float16:
-            self.model = self.model.float()
-        return self.model
-
-    def _trim_state_dict(self, state_dict):
-        from collections import OrderedDict
-
-        if "state_dict" in state_dict:
-            state_dict = state_dict["state_dict"]
-        if "model" in state_dict:  # fairseq checkpoint format.
-            state_dict = state_dict["model"]
-        ret_state_dict = OrderedDict()
-        for key, value in state_dict.items():
-            # remove the fairseq wrapper prefix since this is a task.
- if key.startswith("mmmodel"): - key = key[len("mmmodel."):] - ret_state_dict[key] = value - return ret_state_dict - - def build_loss(self): - if self.loss_fn is None and self.config.loss is not None: - loss_cls = getattr(losses, self.config.loss.loss_cls) - self.loss_fn = loss_cls() - return self.loss_fn - - def flat_subsample(self, tensor): - size = tensor.size() - if len(size) >= 2: - batch_size = size[0] * size[1] - expanded_size = ( - (batch_size,) + size[2:] if len(size) > 2 - else (batch_size,) - ) - tensor = tensor.view(expanded_size) - return tensor - - def reshape_subsample(self, sample): - if ( - hasattr(self.config.dataset, "subsampling") - and self.config.dataset.subsampling is not None - and self.config.dataset.subsampling > 1 - ): - for key in sample: - if torch.is_tensor(sample[key]): - sample[key] = self.flat_subsample(sample[key]) - return sample - - def __call__(self, model, sample): - loss = None - loss_scalar = float("inf") - - sample = self.reshape_subsample(sample) - outputs = self.model(**sample) - sample.update(outputs) - if self.loss_fn is not None: - loss = self.loss_fn(**sample) - loss_scalar = loss.item() - - batch_size = sample["caps"].size(0) - sample_size = 1 - return { - "loss": loss, - "loss_scalar": loss_scalar, - "max_len": self.config.dataset.max_len, - "batch_size": batch_size, - "sample_size": sample_size, - } - - def build_dataloader(self): - """only used for trainer that lacks building loaders.""" - raise NotImplementedError diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/vlmtask.py b/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/vlmtask.py deleted file mode 100644 index 57dc4c917..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/tasks/vlmtask.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import torch - -from .task import Task - - -class VLMTask(Task): - """A VLM task for reproducibility. - the collator split subsamples into two sub-batches. - This has should have no logic changes. - but changed the randomness in frame masking. - """ - - def flat_subsample(self, tensor): - size = tensor.size() - if len(size) >= 2: - batch_size = size[0] * (size[1] // 2) - expanded_size = ( - (batch_size, 2) + size[2:] if len(size) > 2 - else (batch_size, 2) - ) - tensor = tensor.view(expanded_size) - tensor = torch.cat([tensor[:, 0], tensor[:, 1]], dim=0) - return tensor diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/utils/__init__.py b/kosmos-g/fairseq/examples/MMPT/mmpt/utils/__init__.py deleted file mode 100644 index 2429ee375..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/utils/__init__.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-import random -import numpy as np -import torch - -from .shardedtensor import * -from .load_config import * - - -def set_seed(seed=43211): - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - if torch.backends.cudnn.enabled: - torch.backends.cudnn.benchmark = False - torch.backends.cudnn.deterministic = True - - -def get_world_size(): - if torch.distributed.is_initialized(): - world_size = torch.distributed.get_world_size() - else: - world_size = 1 - return world_size - - -def get_local_rank(): - return torch.distributed.get_rank() \ - if torch.distributed.is_initialized() else 0 - - -def print_on_rank0(func): - local_rank = get_local_rank() - if local_rank == 0: - print("[INFO]", func) - - -class RetriMeter(object): - """ - Statistics on whether retrieval yields a better pair. - """ - def __init__(self, freq=1024): - self.freq = freq - self.total = 0 - self.replace = 0 - self.updates = 0 - - def __call__(self, data): - if isinstance(data, np.ndarray): - self.replace += data.shape[0] - int((data[:, 0] == -1).sum()) - self.total += data.shape[0] - elif torch.is_tensor(data): - self.replace += int(data.sum()) - self.total += data.size(0) - else: - raise ValueError("unsupported RetriMeter data type.", type(data)) - - self.updates += 1 - if get_local_rank() == 0 and self.updates % self.freq == 0: - print("[INFO]", self) - - def __repr__(self): - return "RetriMeter (" + str(self.replace / self.total) \ - + "/" + str(self.replace) + "/" + str(self.total) + ")" diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/utils/load_config.py b/kosmos-g/fairseq/examples/MMPT/mmpt/utils/load_config.py deleted file mode 100644 index ede4f9411..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/utils/load_config.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import os -import omegaconf -from omegaconf import OmegaConf - - -def load_config(args=None, config_file=None, overwrite_fairseq=False): - """TODO (huxu): move fairseq overwrite to another function.""" - if args is not None: - config_file = args.taskconfig - config = recursive_config(config_file) - - if config.dataset.subsampling is not None: - batch_size = config.fairseq.dataset.batch_size // config.dataset.subsampling - print( - "adjusting batch_size to {} due to subsampling {}.".format( - batch_size, config.dataset.subsampling - ) - ) - config.fairseq.dataset.batch_size = batch_size - - is_test = config.dataset.split is not None and config.dataset.split == "test" - if not is_test: - if ( - config.fairseq.checkpoint is None - or config.fairseq.checkpoint.save_dir is None - ): - raise ValueError("fairseq save_dir or save_path must be specified.") - - save_dir = config.fairseq.checkpoint.save_dir - os.makedirs(save_dir, exist_ok=True) - if config.fairseq.common.tensorboard_logdir is not None: - tb_run_dir = suffix_rundir( - save_dir, config.fairseq.common.tensorboard_logdir - ) - config.fairseq.common.tensorboard_logdir = tb_run_dir - print( - "update tensorboard_logdir as", config.fairseq.common.tensorboard_logdir - ) - os.makedirs(save_dir, exist_ok=True) - OmegaConf.save(config=config, f=os.path.join(save_dir, "config.yaml")) - - if overwrite_fairseq and config.fairseq is not None and args is not None: - # flatten fields. - for group in config.fairseq: - for field in config.fairseq[group]: - print("overwrite args." 
+ field, "as", config.fairseq[group][field]) - setattr(args, field, config.fairseq[group][field]) - return config - - -def recursive_config(config_path): - """allows for stacking of configs in any depth.""" - config = OmegaConf.load(config_path) - if config.includes is not None: - includes = config.includes - config.pop("includes") - base_config = recursive_config(includes) - config = OmegaConf.merge(base_config, config) - return config - - -def suffix_rundir(save_dir, run_dir): - max_id = -1 - for search_dir in os.listdir(save_dir): - if search_dir.startswith(run_dir): - splits = search_dir.split("_") - cur_id = int(splits[1]) if len(splits) > 1 else 0 - max_id = max(max_id, cur_id) - return os.path.join(save_dir, run_dir + "_" + str(max_id + 1)) - - -def overwrite_dir(config, replace, basedir): - for key in config: - if isinstance(config[key], str) and config[key].startswith(basedir): - config[key] = config[key].replace(basedir, replace) - if isinstance(config[key], omegaconf.dictconfig.DictConfig): - overwrite_dir(config[key], replace, basedir) diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt/utils/shardedtensor.py b/kosmos-g/fairseq/examples/MMPT/mmpt/utils/shardedtensor.py deleted file mode 100644 index 2424f360e..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt/utils/shardedtensor.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import os -import pickle -import numpy as np - - -class ShardedTensor(object): - def __init__(self, data, starts): - self.data = data - self.starts = starts - assert self.starts[0] == 0 - assert self.starts[-1] == len(self.data) - assert (self.starts[1:] >= self.starts[:-1]).all() - assert (self.starts > -1).all() - - @staticmethod - def from_list(xs): - starts = np.full((len(xs) + 1,), -1, dtype=np.long) - data = np.concatenate(xs, axis=0) - starts[0] = 0 - for i, x in enumerate(xs): - starts[i + 1] = starts[i] + x.shape[0] - assert (starts > -1).all() - return ShardedTensor(data, starts) - - def __getitem__(self, i): - return self.data[self.starts[i] : self.starts[i + 1]] - - def __len__(self): - return len(self.starts) - 1 - - def lengths(self): - return self.starts[1:] - self.starts[:-1] - - def save(self, path): - np.save(path + "_starts", self.starts) - np.save(path + "_data", self.data) - - @staticmethod - def load(path, mmap_mode=None): - starts = np.load(path + "_starts.npy", mmap_mode) - data = np.load(path + "_data.npy", mmap_mode) - return ShardedTensor(data, starts) diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt_cli/localjob.py b/kosmos-g/fairseq/examples/MMPT/mmpt_cli/localjob.py deleted file mode 100644 index 2675d3511..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt_cli/localjob.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-import os - -from mmpt.utils import recursive_config - - -class BaseJob(object): - def __init__(self, yaml_file, dryrun=False): - self.yaml_file = yaml_file - self.config = recursive_config(yaml_file) - self.dryrun = dryrun - - def submit(self, **kwargs): - raise NotImplementedError - - def _normalize_cmd(self, cmd_list): - cmd_list = list(cmd_list) - yaml_index = cmd_list.index("[yaml]") - cmd_list[yaml_index] = self.yaml_file - return cmd_list - - -class LocalJob(BaseJob): - - CMD_CONFIG = { - "local_single": [ - "fairseq-train", "[yaml]", "--user-dir", "mmpt", - "--task", "mmtask", "--arch", "mmarch", - "--criterion", "mmloss", - ], - "local_small": [ - "fairseq-train", "[yaml]", "--user-dir", "mmpt", - "--task", "mmtask", "--arch", "mmarch", - "--criterion", "mmloss", - "--distributed-world-size", "2" - ], - "local_big": [ - "fairseq-train", "[yaml]", "--user-dir", "mmpt", - "--task", "mmtask", "--arch", "mmarch", - "--criterion", "mmloss", - "--distributed-world-size", "8" - ], - "local_predict": ["python", "mmpt_cli/predict.py", "[yaml]"], - } - - def __init__(self, yaml_file, job_type=None, dryrun=False): - super().__init__(yaml_file, dryrun) - if job_type is None: - self.job_type = "local_single" - if self.config.task_type is not None: - self.job_type = self.config.task_type - else: - self.job_type = job_type - if self.job_type in ["local_single", "local_small"]: - if self.config.fairseq.dataset.batch_size > 32: - print("decreasing batch_size to 32 for local testing?") - - def submit(self): - cmd_list = self._normalize_cmd(LocalJob.CMD_CONFIG[self.job_type]) - if "predict" not in self.job_type: - # append fairseq args. - from mmpt.utils import load_config - - config = load_config(config_file=self.yaml_file) - for field in config.fairseq: - for key in config.fairseq[field]: - if key in ["fp16", "reset_optimizer", "reset_dataloader", "reset_meters"]: # a list of binary flag. - param = ["--" + key.replace("_", "-")] - else: - if key == "lr": - value = str(config.fairseq[field][key][0]) - elif key == "adam_betas": - value = "'"+str(config.fairseq[field][key])+"'" - else: - value = str(config.fairseq[field][key]) - param = [ - "--" + key.replace("_", "-"), - value - ] - cmd_list.extend(param) - - print("launching", " ".join(cmd_list)) - if not self.dryrun: - os.system(" ".join(cmd_list)) - return JobStatus("12345678") - - -class JobStatus(object): - def __init__(self, job_id): - self.job_id = job_id - - def __repr__(self): - return self.job_id - - def __str__(self): - return self.job_id - - def done(self): - return False - - def running(self): - return False - - def result(self): - if self.done(): - return "{} is done.".format(self.job_id) - else: - return "{} is running.".format(self.job_id) - - def stderr(self): - return self.result() - - def stdout(self): - return self.result() diff --git a/kosmos-g/fairseq/examples/MMPT/mmpt_cli/predict.py b/kosmos-g/fairseq/examples/MMPT/mmpt_cli/predict.py deleted file mode 100644 index 4071e196d..000000000 --- a/kosmos-g/fairseq/examples/MMPT/mmpt_cli/predict.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
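A hedged usage sketch for `LocalJob` above (ours, not repo code): with `dryrun=True`, `submit()` only prints the assembled command instead of executing it. The YAML path is one of the repo's own project configs and must exist on disk, since the constructor loads it.

```python
# Illustrative dry run of LocalJob; prints the command without executing it.
from mmpt_cli.localjob import LocalJob

job = LocalJob("projects/mtm/vlm/test_vtt.yaml",
               job_type="local_predict", dryrun=True)
status = job.submit()   # prints: launching python mmpt_cli/predict.py <yaml>
print(status)           # a placeholder JobStatus ("12345678")
```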
-import os -import glob -import argparse -import pprint -import omegaconf - -from omegaconf import OmegaConf -from torch.utils.data import DataLoader - -from mmpt.utils import load_config, set_seed -from mmpt.evaluators import Evaluator -from mmpt.evaluators import predictor as predictor_path -from mmpt.tasks import Task -from mmpt import processors -from mmpt.datasets import MMDataset - - -def get_dataloader(config): - meta_processor_cls = getattr(processors, config.dataset.meta_processor) - video_processor_cls = getattr(processors, config.dataset.video_processor) - text_processor_cls = getattr(processors, config.dataset.text_processor) - aligner_cls = getattr(processors, config.dataset.aligner) - - meta_processor = meta_processor_cls(config.dataset) - video_processor = video_processor_cls(config.dataset) - text_processor = text_processor_cls(config.dataset) - aligner = aligner_cls(config.dataset) - - test_data = MMDataset( - meta_processor, - video_processor, - text_processor, - aligner, - ) - print("test_len", len(test_data)) - output = test_data[0] - test_data.print_example(output) - - test_dataloader = DataLoader( - test_data, - batch_size=config.fairseq.dataset.batch_size, - shuffle=False, - num_workers=6, - collate_fn=test_data.collater, - ) - return test_dataloader - - -def main(args): - config = load_config(args) - - if isinstance(config, omegaconf.dictconfig.DictConfig): - print(OmegaConf.to_yaml(config)) - else: - pp = pprint.PrettyPrinter(indent=4) - pp.print(config) - - mmtask = Task.config_task(config) - mmtask.build_model() - - test_dataloader = get_dataloader(config) - checkpoint_search_path = os.path.dirname(config.eval.save_path) - results = [] - - prefix = os.path.basename(args.taskconfig) - if prefix.startswith("test"): - # loop all checkpoint for datasets without validation set. - if "best" not in config.fairseq.common_eval.path: - print("eval each epoch.") - for checkpoint in glob.glob(checkpoint_search_path + "/checkpoint*"): - model = mmtask.load_checkpoint(checkpoint) - ckpt = os.path.basename(checkpoint) - evaluator = Evaluator(config) - output = evaluator.evaluate( - model, test_dataloader, ckpt + "_merged") - results.append((checkpoint, output)) - # use the one specified by the config lastly. - model = mmtask.load_checkpoint(config.fairseq.common_eval.path) - evaluator = Evaluator(config) - output = evaluator.evaluate(model, test_dataloader) - results.append((config.fairseq.common_eval.path, output)) - - best_result = None - best_metric = 0. 
-        for checkpoint, result in results:
-            print(checkpoint)
-            evaluator.metric.print_computed_metrics(result)
-            best_score = evaluator.metric.best_metric(result)
-            if best_score > best_metric:
-                best_result = (checkpoint, result)
-                best_metric = best_score
-        print("best results:")
-        print(best_result[0])
-        evaluator.metric.print_computed_metrics(best_result[1])
-
-    elif prefix.startswith("vis"):
-        model = mmtask.load_checkpoint(config.fairseq.common_eval.path)
-        predictor_cls = getattr(predictor_path, config.predictor)
-        predictor = predictor_cls(config)
-        predictor.predict_loop(model, test_dataloader, mmtask, None)
-    else:
-        raise ValueError("unknown prefix of the config file", args.taskconfig)
-
-
-if __name__ == "__main__":
-    parser = argparse.ArgumentParser()
-    parser.add_argument("taskconfig", type=str)
-    args = parser.parse_args()
-    main(args)
diff --git a/kosmos-g/fairseq/examples/MMPT/pretraining.md b/kosmos-g/fairseq/examples/MMPT/pretraining.md
deleted file mode 100644
index 8f8e6d0fa..000000000
--- a/kosmos-g/fairseq/examples/MMPT/pretraining.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# Pretraining
-
-(If you are new to the ideas of `mmpt.processors`, see [README](README.md) first.)
-We mostly use the [howto100M](https://github.com/antoine77340/howto100m) dataset for pretraining (other datasets are coming), so you are less likely to need a new `MetaProcessor`, `VideoProcessor`, or `TextProcessor`; you will mostly work on a new `Aligner`, a new model, and a new loss.
-
-### Data Sharding
-Pretraining on Howto100M is heavy on IO, since millions of videos and captions live on the hard disk and cannot fit into memory.
-It is desirable to have an optimized preprocessing step before the actual dataloading.
-
-We support data sharding to pack multiple videos into shards of training data, for both videos and captions (see [dataset](DATASET.md) for preprocessing).
-These shards are memory-mapped to reduce the frequency of IO access to millions of files (see the processors whose names start with `Sharded*`).
-This is the default config for the how2 dataset, `projects/task/how2.yaml`.
-
-Many thanks to Dmytro Okhonko for sharing the code from the MARGE project.
-
-### Training
-Pretraining on Howto100M is expected to run on one or more nodes, where each node has 8 GPUs with 32 GB of memory.
-Launching a pretraining run with MFM+MLM can be done via:
-```python locallaunch.py projects/mfmmlm/how2.yaml```
-
-### Pre-training with a Retrieval Model (VideoCLIP)
-This project now supports alternating between running a retrieval model and pre-training.
-We implement a basic retrieval model that is built on the hidden states of a video and faiss.
-
-You may need to install faiss via `conda install faiss-cpu -c pytorch`.
-
-Right now, the hidden state of a video is computed as the average of the pooled visual/text hidden states of 8 of its clips.
-See `mmpt/tasks/retritask.py` for more details; a toy sketch of the indexing step follows below.
-The `.yaml` config for running pre-training with a retrieval model can be found at `projects/retri/videoretri.yaml`.
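The following toy sketch (ours, not the repo's code) illustrates that indexing step under assumed shapes and an inner-product metric: clip hidden states are average-pooled into one vector per video, indexed with faiss, and nearest neighbours are fetched as candidate batches for the next epoch.

```python
# Toy sketch of the retrieval step; all shapes and the metric are assumptions.
import faiss
import numpy as np

hidden = np.random.rand(1000, 8, 512).astype("float32")  # 1000 videos x 8 clips
video_vecs = hidden.mean(axis=1)                         # one pooled vector per video
index = faiss.IndexFlatIP(video_vecs.shape[1])           # exact inner-product index
index.add(video_vecs)
_, cands = index.search(video_vecs[:4], 16)              # 16 candidate videos each
print(cands.shape)                                       # (4, 16)
```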
diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mfmmlm.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mfmmlm.yaml deleted file mode 100644 index 0f3450a1e..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mfmmlm.yaml +++ /dev/null @@ -1,59 +0,0 @@ -project_dir: mfmmlm -run_task: - - how2.yaml - - [vtt.yaml, vttcap.yaml, vttqa.yaml, youcook.yaml, youcookcap.yaml, crosstask.yaml, coin.yaml] -base_dir: task -task_group: - pretrain: - task_list: - - how2.yaml - dataset: - subsampling: 32 - sampled_min_len: 10 - sampled_max_len: 64 - max_video_len: 32 - max_len: 96 - aligner: MFMMLMAligner - lazy_vfeat_mask: True - mfm_probability: 0.15 - mlm_probability: 0.15 - mm_prob: 0.5 - model: - model_cls: MMFusionMFMMLM - mm_encoder_cls: MMFusionForMFMMLM - loss: - loss_cls: MFMMLM - fairseq: - common: - fp16: true - dataset: - batch_size: 256 - optimization: - max_epoch: 15 - finetune: - task_list: - - vtt.yaml - - vttqa.yaml - - youcook.yaml - - youcookcap.yaml - - crosstask.yaml - - coin.yaml - dataset: - max_video_len: 32 - max_len: 96 - fairseq: - common: - fp16: true - # do not write any model or loss here (they are expected to be fixed in mmfusion). - test: - task_list: - - test_vtt.yaml - - test_vttqa.yaml - - test_youcook.yaml - - test_youcookcap.yaml - - test_crosstask.yaml - - test_crosstask_zs.yaml - - test_coin.yaml - dataset: - max_video_len: 32 - max_len: 96 diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mtm/mmfusionmtm.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mtm/mmfusionmtm.yaml deleted file mode 100644 index 337d66a2a..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/mmfusionmtm.yaml +++ /dev/null @@ -1,19 +0,0 @@ -includes: projects/mfmmlm.yaml -project_dir: mtm/mmfusionmtm -task_group: - pretrain: - task: VLMTask # reproducible - dataset: - aligner: MFMMLMAligner - model: - use_seg_emb: True # reproducible - model_cls: MMFusionMTM - mm_encoder_cls: MMBertForMFMMLM - loss: - loss_cls: MTM - finetune: - model: - use_seg_emb: True # reproducible - test: - model: - use_seg_emb: True # reproducible diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm.yaml deleted file mode 100644 index 022a2623c..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm.yaml +++ /dev/null @@ -1,8 +0,0 @@ -includes: projects/mtm/mmfusionmtm.yaml -project_dir: mtm/vlm -task_group: - pretrain: - dataset: - sampled_min_len: 8 - loss: - loss_cls: MTM diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/coin.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/coin.yaml deleted file mode 100644 index 48fd64a5f..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/coin.yaml +++ /dev/null @@ -1,47 +0,0 @@ -dataset: - video_processor: VideoProcessor - bert_name: bert-base-uncased - meta_processor: COINActionSegmentationMetaProcessor - train_path: data/coin/COIN.json - val_path: data/coin/COIN.json - vfeat_dir: data/feat/feat_coin_s3d - text_processor: COINActionSegmentationTextProcessor - aligner: COINActionSegmentationAligner - num_iso_layer: 12 - sliding_window: 8 - sliding_window_size: 32 - max_video_len: 32 - max_len: 96 -fairseq: - common: - tensorboard_logdir: run - log_interval: 1000 - fp16: true - dataset: - num_workers: 4 - batch_size: 1 - optimization: - lr: - - 5.0e-05 - clip_norm: 2.0 - optimizer: adam - adam_betas: (0.9, 0.98) - lr_scheduler: polynomial_decay - total_num_update: 1000000 - warmup_updates: 122 - weight_decay: 0.0 - ddp_backend: no_c10d - max_epoch: 8 - checkpoint: 
- restore_file: runs/mtm/vlm/checkpoint_best.pt - reset_optimizer: true - reset_dataloader: true - reset_meters: true - save_dir: runs/mtm/vlm/coin -task_type: sweep_big -model: - model_cls: MMFusionActionSegmentation - mm_encoder_cls: MMBertForTokenClassification - use_seg_emb: true -loss: - loss_cls: CrossEntropy diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/crosstask.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/crosstask.yaml deleted file mode 100644 index 4e706b549..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/crosstask.yaml +++ /dev/null @@ -1,53 +0,0 @@ -dataset: - video_processor: CrossTaskVideoProcessor - bert_name: bert-base-uncased - meta_processor: CrossTaskMetaProcessor - train_path: data/crosstask/crosstask_release/videos.csv - train_csv_path: data/crosstask/crosstask_release/videos.csv - val_path: data/crosstask/crosstask_release/videos_val.csv - val_csv_path: data/crosstask/crosstask_release/videos_val.csv - primary_path: data/crosstask/crosstask_release/tasks_primary.txt - related_path: data/crosstask/crosstask_release/tasks_related.txt - vfeat_dir: data/feat/feat_crosstask_s3d - annotation_path: data/crosstask/crosstask_release/annotations - n_train: 30 - text_processor: CrossTaskTextProcessor - aligner: CrossTaskAligner - num_iso_layer: 12 - sliding_window: 16 - sliding_window_size: 32 - max_video_len: 32 - max_len: 96 -fairseq: - common: - tensorboard_logdir: run - log_interval: 1000 - fp16: true - dataset: - num_workers: 4 - batch_size: 1 - optimization: - lr: - - 5.0e-05 - clip_norm: 2.0 - optimizer: adam - adam_betas: (0.9, 0.98) - lr_scheduler: polynomial_decay - total_num_update: 1000000 - warmup_updates: 122 - weight_decay: 0.0 - ddp_backend: no_c10d - max_epoch: 5 - checkpoint: - restore_file: runs/mtm/vlm/checkpoint11.pt - reset_optimizer: true - reset_dataloader: true - reset_meters: true - save_dir: runs/mtm/vlm/crosstask -task_type: sweep_small -model: - model_cls: MMFusionActionLocalization - mm_encoder_cls: MMBertForJoint - use_seg_emb: true -loss: - loss_cls: BCE diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/how2.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/how2.yaml deleted file mode 100644 index 7ca40ad81..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/how2.yaml +++ /dev/null @@ -1,55 +0,0 @@ -dataset: - video_processor: ShardedVideoProcessor - bert_name: bert-base-uncased - meta_processor: ShardedHow2MetaProcessor - train_path: data/how2/how2_s3d_train.lst - val_path: data/how2/how2_s3d_val.lst - vfeat_dir: data/feat/feat_how2_s3d_shard_small - text_processor: ShardedTextProcessor - tfeat_dir: data/feat/feat_how2_s3d_shard_small/raw_caption_dedup.bert-base-uncased. 
- aligner: MFMMLMAligner - subsampling: 32 - sampled_min_len: 8 - sampled_max_len: 64 - max_video_len: 32 - max_len: 96 - lazy_vfeat_mask: true - mfm_probability: 0.15 - mlm_probability: 0.15 - mm_prob: 0.5 -fairseq: - common: - tensorboard_logdir: run - log_interval: 1000 - fp16: true - dataset: - num_workers: 4 - batch_size: 256 - optimization: - lr: - - 5.0e-05 - clip_norm: 2.0 - optimizer: adam - adam_betas: (0.9, 0.98) - lr_scheduler: polynomial_decay - total_num_update: 1000000 - warmup_updates: 1000 - weight_decay: 0.0 - ddp_backend: no_c10d - max_epoch: 15 - checkpoint: - save_dir: runs/mtm/vlm - save_interval_updates: 1024 - keep_interval_updates: 2 - keep_last_epochs: 30 -task_type: sweep_big -slurm_config: big -eval: - save_path: runs/mtm/vlm -model: - model_cls: MMFusionMTM - mm_encoder_cls: MMBertForMFMMLM - use_seg_emb: true -loss: - loss_cls: MTM -task: VLMTask diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_coin.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_coin.yaml deleted file mode 100644 index 8df2e66ad..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_coin.yaml +++ /dev/null @@ -1,31 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: VideoProcessor - aligner: COINActionSegmentationAligner - bert_name: bert-base-uncased - test_path: data/coin/COIN.json - meta_processor: COINActionSegmentationMetaProcessor - vfeat_dir: data/feat/feat_coin_s3d - text_processor: COINActionSegmentationTextProcessor - num_iso_layer: 12 - sliding_window: 16 - sliding_window_size: 32 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 1 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/mtm/vlm/coin/checkpoint_best.pt -model: - model_cls: MMFusionActionSegmentation - mm_encoder_cls: MMBertForTokenClassification - use_seg_emb: true -eval: - save_path: runs/mtm/vlm/coin/eval -metric: COINActionSegmentationMetric -predictor: COINPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_crosstask.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_crosstask.yaml deleted file mode 100644 index d15984787..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_crosstask.yaml +++ /dev/null @@ -1,38 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: CrossTaskVideoProcessor - aligner: CrossTaskAligner - bert_name: bert-base-uncased - meta_processor: CrossTaskMetaProcessor - test_path: data/crosstask/crosstask_release/videos_val.csv - train_csv_path: data/crosstask/crosstask_release/videos.csv - val_path: data/crosstask/crosstask_release/videos_val.csv - val_csv_path: data/crosstask/crosstask_release/videos_val.csv - primary_path: data/crosstask/crosstask_release/tasks_primary.txt - related_path: data/crosstask/crosstask_release/tasks_related.txt - vfeat_dir: data/feat/feat_crosstask_s3d - annotation_path: data/crosstask/crosstask_release/annotations - n_train: 30 - text_processor: CrossTaskTextProcessor - num_iso_layer: 12 - sliding_window: 16 - sliding_window_size: 32 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 1 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/mtm/vlm/crosstask/checkpoint_best.pt -model: - model_cls: MMFusionActionLocalization - mm_encoder_cls: MMBertForJoint - use_seg_emb: true -eval: - save_path: runs/mtm/vlm/crosstask/eval -metric: CrossTaskMetric -predictor: CrossTaskPredictor diff --git 
a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_crosstask_zs.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_crosstask_zs.yaml deleted file mode 100644 index 59833c554..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_crosstask_zs.yaml +++ /dev/null @@ -1,38 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: CrossTaskVideoProcessor - aligner: CrossTaskAligner - bert_name: bert-base-uncased - meta_processor: CrossTaskMetaProcessor - test_path: data/crosstask/crosstask_release/videos_val.csv - train_csv_path: data/crosstask/crosstask_release/videos.csv - val_path: data/crosstask/crosstask_release/videos_val.csv - val_csv_path: data/crosstask/crosstask_release/videos_val.csv - primary_path: data/crosstask/crosstask_release/tasks_primary.txt - related_path: data/crosstask/crosstask_release/tasks_related.txt - vfeat_dir: data/feat/feat_crosstask_s3d - annotation_path: data/crosstask/crosstask_release/annotations - n_train: 30 - text_processor: CrossTaskTextProcessor - num_iso_layer: 12 - sliding_window: 16 - sliding_window_size: 32 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 1 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/mtm/vlm/checkpoint_best.pt -model: - model_cls: MMFusionActionLocalization - mm_encoder_cls: MMBertForJoint - use_seg_emb: true -eval: - save_path: runs/mtm/vlm/crosstask_zs/eval -metric: CrossTaskMetric -predictor: CrossTaskPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_vtt.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_vtt.yaml deleted file mode 100644 index a41557df6..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_vtt.yaml +++ /dev/null @@ -1,29 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: VideoProcessor - aligner: DSAligner - bert_name: bert-base-uncased - meta_processor: MSRVTTMetaProcessor - test_path: data/msrvtt/MSRVTT_JSFUSION_test.csv - vfeat_dir: data/feat/feat_vtt_s3d - text_processor: MSRVTTTextProcessor - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 256 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/mtm/vlm/vtt/checkpoint_last.pt -model: - model_cls: MMFusionJoint - mm_encoder_cls: MMBertForJoint - use_seg_emb: true -eval: - save_path: runs/mtm/vlm/vtt/eval -metric: RetrievalMetric -predictor: RetrievalPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_vttqa.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_vttqa.yaml deleted file mode 100644 index abf3309f7..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_vttqa.yaml +++ /dev/null @@ -1,29 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: VideoProcessor - aligner: MSRVTTQAAligner - bert_name: bert-base-uncased - meta_processor: MSRVTTQAMetaProcessor - test_path: data/msrvtt-qa/MSR_MC_test.csv - vfeat_dir: data/feat/feat_vtt_s3d - text_processor: MSRVTTQATextProcessor - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 256 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/mtm/vlm/vttqa/checkpoint_last.pt -model: - model_cls: MMFusionJoint - mm_encoder_cls: MMBertForJoint - use_seg_emb: true -eval: - save_path: runs/mtm/vlm/vttqa/eval -metric: QAMetric -predictor: QAPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_youcook.yaml 
b/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_youcook.yaml deleted file mode 100644 index 3a57d25c2..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_youcook.yaml +++ /dev/null @@ -1,31 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: YoucookVideoProcessor - aligner: DSAligner - bert_name: bert-base-uncased - meta_processor: YoucookMetaProcessor - test_path: data/youcook/youcook_val.pkl - trainval_annotation: data/youcook/youcookii_annotations_trainval.json - use_annotation_text: true - vfeat_dir: data/feat/feat_youcook_s3d - text_processor: TextProcessor - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 256 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/mtm/vlm/youcook/checkpoint_last.pt -model: - model_cls: MMFusionJoint - mm_encoder_cls: MMBertForJoint - use_seg_emb: true -eval: - save_path: runs/mtm/vlm/youcook/eval -metric: RetrievalMetric -predictor: RetrievalPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_youcookcap.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_youcookcap.yaml deleted file mode 100644 index b2595d7c3..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/test_youcookcap.yaml +++ /dev/null @@ -1,32 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: YoucookVideoProcessor - aligner: DSNLGAligner - bert_name: bert-base-uncased - meta_processor: YoucookNLGMetaProcessor - test_path: data/youcook/val_list.txt - trainval_annotation: data/youcook/youcookii_annotations_trainval.json - vfeat_dir: data/feat/feat_youcook_s3d - text_processor: NLGTextProcessor - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 256 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/mtm/vlm/youcookcap/checkpoint_best.pt -model: - model_cls: MMFusionNLG - mm_encoder_cls: MMBertForNLG - max_decode_length: 24 - use_seg_emb: true -eval: - save_path: runs/mtm/vlm/youcookcap/eval -metric: NLGMetric -predictor: NLGPredictor -gen_param: - num_beams: 5 diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/vtt.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/vtt.yaml deleted file mode 100644 index c6c5b1ab4..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/vtt.yaml +++ /dev/null @@ -1,49 +0,0 @@ -dataset: - video_processor: VideoProcessor - bert_name: bert-base-uncased - meta_processor: MSRVTTMetaProcessor - train_path: data/msrvtt/MSRVTT_train.csv - jsfusion_path: data/msrvtt/MSRVTT_JSFUSION_test.csv - full_test_path: data/msrvtt/MSRVTT_FULL_test.csv - dup: 20 - val_path: data/msrvtt/MSRVTT_JSFUSION_test.csv - vfeat_dir: data/feat/feat_vtt_s3d - text_processor: MSRVTTTextProcessor - json_path: data/msrvtt/MSRVTT_data.json - aligner: DSAligner - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - common: - tensorboard_logdir: run - log_interval: 1000 - fp16: true - dataset: - num_workers: 4 - batch_size: 256 - optimization: - lr: - - 5.0e-05 - clip_norm: 2.0 - optimizer: adam - adam_betas: (0.9, 0.98) - lr_scheduler: polynomial_decay - total_num_update: 1000000 - warmup_updates: 122 - weight_decay: 0.0 - ddp_backend: no_c10d - max_epoch: 10 - checkpoint: - restore_file: runs/mtm/vlm/checkpoint_best.pt - reset_optimizer: true - reset_dataloader: true - reset_meters: true - save_dir: runs/mtm/vlm/vtt -task_type: sweep_small -model: - model_cls: MMFusionJoint - mm_encoder_cls: MMBertForJoint - use_seg_emb: true 
-loss: - loss_cls: T2VContraLoss diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/vttqa.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/vttqa.yaml deleted file mode 100644 index 0a440c7dd..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/vttqa.yaml +++ /dev/null @@ -1,47 +0,0 @@ -dataset: - video_processor: VideoProcessor - bert_name: bert-base-uncased - meta_processor: MSRVTTMetaProcessor - train_path: data/msrvtt/MSRVTT_train.csv - dup: 20 - val_path: data/msrvtt/MSRVTT_JSFUSION_test.csv - vfeat_dir: data/feat/feat_vtt_s3d - text_processor: MSRVTTTextProcessor - json_path: data/msrvtt/MSRVTT_data.json - aligner: DSAligner - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - common: - tensorboard_logdir: run - log_interval: 1000 - fp16: true - dataset: - num_workers: 4 - batch_size: 128 - optimization: - lr: - - 5.0e-05 - clip_norm: 2.0 - optimizer: adam - adam_betas: (0.9, 0.98) - lr_scheduler: polynomial_decay - total_num_update: 1000000 - warmup_updates: 122 - weight_decay: 0.0 - ddp_backend: no_c10d - max_epoch: 5 - checkpoint: - restore_file: runs/mtm/vlm/checkpoint_best.pt - reset_optimizer: true - reset_dataloader: true - reset_meters: true - save_dir: runs/mtm/vlm/vttqa -task_type: sweep_small -model: - model_cls: MMFusionJoint - mm_encoder_cls: MMBertForJoint - use_seg_emb: true -loss: - loss_cls: V2TContraLoss diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/youcook.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/youcook.yaml deleted file mode 100644 index 9ee82b81b..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/youcook.yaml +++ /dev/null @@ -1,47 +0,0 @@ -dataset: - video_processor: YoucookVideoProcessor - bert_name: bert-base-uncased - meta_processor: YoucookMetaProcessor - train_path: data/youcook/youcook_train.pkl - val_path: data/youcook/youcook_val.pkl - trainval_annotation: data/youcook/youcookii_annotations_trainval.json - use_annotation_text: true - vfeat_dir: data/feat/feat_youcook_s3d - text_processor: TextProcessor - aligner: DSAligner - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - common: - tensorboard_logdir: run - log_interval: 1000 - fp16: true - dataset: - num_workers: 4 - batch_size: 128 - optimization: - lr: - - 5.0e-05 - clip_norm: 2.0 - optimizer: adam - adam_betas: (0.9, 0.98) - lr_scheduler: polynomial_decay - total_num_update: 1000000 - warmup_updates: 122 - weight_decay: 0.0 - ddp_backend: no_c10d - max_epoch: 10 - checkpoint: - restore_file: runs/mtm/vlm/checkpoint_best.pt - reset_optimizer: true - reset_dataloader: true - reset_meters: true - save_dir: runs/mtm/vlm/youcook -task_type: sweep_small -model: - model_cls: MMFusionJoint - mm_encoder_cls: MMBertForJoint - use_seg_emb: true -loss: - loss_cls: T2VContraLoss diff --git a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/youcookcap.yaml b/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/youcookcap.yaml deleted file mode 100644 index d29dfad5c..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/mtm/vlm/youcookcap.yaml +++ /dev/null @@ -1,45 +0,0 @@ -dataset: - video_processor: YoucookVideoProcessor - bert_name: bert-base-uncased - meta_processor: YoucookNLGMetaProcessor - train_path: data/youcook/train_list.txt - val_path: data/youcook/val_list.txt - trainval_annotation: data/youcook/youcookii_annotations_trainval.json - vfeat_dir: data/feat/feat_youcook_s3d - text_processor: NLGTextProcessor - aligner: DSNLGAligner - max_video_len: 32 - max_len: 96 -fairseq: - common: - tensorboard_logdir: run 
- log_interval: 1000 - fp16: true - dataset: - num_workers: 4 - batch_size: 128 - optimization: - lr: - - 5.0e-05 - clip_norm: 2.0 - optimizer: adam - adam_betas: (0.9, 0.98) - lr_scheduler: polynomial_decay - total_num_update: 1000000 - warmup_updates: 122 - weight_decay: 0.0 - ddp_backend: no_c10d - max_epoch: 10 - checkpoint: - restore_file: runs/mtm/vlm/checkpoint_best.pt - reset_optimizer: true - reset_dataloader: true - reset_meters: true - save_dir: runs/mtm/vlm/youcookcap -task_type: sweep_small -model: - model_cls: MMFusionNLG - mm_encoder_cls: MMBertForNLG - use_seg_emb: true -loss: - loss_cls: NLGLoss diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip.yaml deleted file mode 100644 index afd040ab0..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip.yaml +++ /dev/null @@ -1,10 +0,0 @@ -includes: projects/retri/videoretri.yaml -project_dir: retri/videoclip -task_group: - pretrain: - model: - model_cls: MMFusionSeparate - mm_encoder_cls: - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/coin_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/coin_videoclip.yaml deleted file mode 100644 index aaed5e47f..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/coin_videoclip.yaml +++ /dev/null @@ -1,49 +0,0 @@ -dataset: - video_processor: VideoProcessor - bert_name: bert-base-uncased - meta_processor: COINActionSegmentationMetaProcessor - train_path: data/coin/COIN.json - val_path: data/coin/COIN.json - vfeat_dir: data/feat/feat_coin_s3d - text_processor: COINActionSegmentationTextProcessor - aligner: COINActionSegmentationAligner - num_iso_layer: 12 - sliding_window: 8 - sliding_window_size: 32 - max_video_len: 32 - max_len: 96 -fairseq: - common: - tensorboard_logdir: run - log_interval: 1000 - fp16: true - dataset: - num_workers: 4 - batch_size: 1 - optimization: - lr: - - 5.0e-05 - clip_norm: 2.0 - optimizer: adam - adam_betas: (0.9, 0.98) - lr_scheduler: polynomial_decay - total_num_update: 1000000 - warmup_updates: 122 - weight_decay: 0.0 - ddp_backend: no_c10d - max_epoch: 8 - checkpoint: - restore_file: runs/retri/videoclip/checkpoint_best.pt - reset_optimizer: true - reset_dataloader: true - reset_meters: true - save_dir: runs/retri/videoclip/coin -task_type: sweep_big -model: - model_cls: MMFusionSeparateActionSegmentation - mm_encoder_cls: null - video_encoder_cls: MMBertForTokenClassification - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -loss: - loss_cls: CrossEntropy diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/crosstask_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/crosstask_videoclip.yaml deleted file mode 100644 index 758601e35..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/crosstask_videoclip.yaml +++ /dev/null @@ -1,55 +0,0 @@ -dataset: - video_processor: CrossTaskVideoProcessor - bert_name: bert-base-uncased - meta_processor: CrossTaskMetaProcessor - train_path: data/crosstask/crosstask_release/videos.csv - train_csv_path: data/crosstask/crosstask_release/videos.csv - val_path: data/crosstask/crosstask_release/videos_val.csv - val_csv_path: data/crosstask/crosstask_release/videos_val.csv - primary_path: data/crosstask/crosstask_release/tasks_primary.txt - related_path: data/crosstask/crosstask_release/tasks_related.txt - 
vfeat_dir: data/feat/feat_crosstask_s3d - annotation_path: data/crosstask/crosstask_release/annotations - n_train: 30 - text_processor: CrossTaskTextProcessor - aligner: CrossTaskAligner - num_iso_layer: 12 - sliding_window: 16 - sliding_window_size: 32 - max_video_len: 32 - max_len: 96 -fairseq: - common: - tensorboard_logdir: run - log_interval: 1000 - fp16: true - dataset: - num_workers: 4 - batch_size: 1 - optimization: - lr: - - 5.0e-05 - clip_norm: 2.0 - optimizer: adam - adam_betas: (0.9, 0.98) - lr_scheduler: polynomial_decay - total_num_update: 1000000 - warmup_updates: 122 - weight_decay: 0.0 - ddp_backend: no_c10d - max_epoch: 5 - checkpoint: - restore_file: runs/retri/videoclip/checkpoint_best.pt - reset_optimizer: true - reset_dataloader: true - reset_meters: true - save_dir: runs/retri/videoclip/crosstask -task_type: sweep_small -model: - model_cls: MMFusionSeparateActionLocalization - mm_encoder_cls: null - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -loss: - loss_cls: BCE diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/how2.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/how2.yaml deleted file mode 100644 index b49581e87..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/how2.yaml +++ /dev/null @@ -1,65 +0,0 @@ -dataset: - video_processor: ShardedVideoRetriVideoProcessor - bert_name: bert-base-uncased - meta_processor: ShardedHow2VideoRetriMetaProcessor - train_path: data/how2/how2_s3d_train.lst - val_path: data/how2/how2_s3d_val.lst - vfeat_dir: data/feat/feat_how2_s3d_shard_small - text_processor: ShardedVideoRetriTextProcessor - tfeat_dir: data/feat/feat_how2_s3d_shard_small/raw_caption_dedup.bert-base-uncased. - aligner: VideoRetriOverlappedAligner - subsampling: 1 - sampled_min_len: 8 - sampled_max_len: 64 - max_video_len: 32 - max_len: 96 - lazy_vfeat_mask: true - mfm_probability: 0.15 - mlm_probability: 0.15 - mm_prob: 0.5 - sampled_video_min_len: 3 - sampled_video_max_len: 32 - num_video_per_batch: 32 - clip_per_video: 16 -fairseq: - common: - tensorboard_logdir: run - log_interval: 1000 - fp16: true - dataset: - num_workers: 4 - batch_size: 1 - optimization: - lr: - - 5.0e-05 - clip_norm: 2.0 - optimizer: adam - adam_betas: (0.9, 0.98) - lr_scheduler: polynomial_decay - total_num_update: 1000000 - warmup_updates: 1000 - weight_decay: 0.0 - ddp_backend: no_c10d - max_epoch: 25 - checkpoint: - save_dir: runs/retri/videoclip - save_interval_updates: 1024 - keep_interval_updates: 2 - keep_last_epochs: 30 -task_type: sweep_big -slurm_config: big -eval: - save_path: runs/retri/videoclip -model: - model_cls: MMFusionSeparate - mm_encoder_cls: null - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -loss: - loss_cls: MMContraLoss -task: VideoRetriTask -retri_epoch: 1 -vectorpool_cls: VideoVectorPool -retriever_cls: VectorRetriever -num_cands: 64 diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_coin_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_coin_videoclip.yaml deleted file mode 100644 index 409906203..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_coin_videoclip.yaml +++ /dev/null @@ -1,33 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: VideoProcessor - aligner: COINActionSegmentationAligner - bert_name: bert-base-uncased - test_path: data/coin/COIN.json - meta_processor: 
COINActionSegmentationMetaProcessor - vfeat_dir: data/feat/feat_coin_s3d - text_processor: COINActionSegmentationTextProcessor - num_iso_layer: 12 - sliding_window: 16 - sliding_window_size: 32 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 1 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/retri/videoclip/coin/checkpoint_best.pt -model: - model_cls: MMFusionSeparateActionSegmentation - mm_encoder_cls: null - video_encoder_cls: MMBertForTokenClassification - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - save_path: runs/retri/videoclip/coin/eval -metric: COINActionSegmentationMetric -predictor: COINPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_coin_zs.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_coin_zs.yaml deleted file mode 100644 index b33739c7b..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_coin_zs.yaml +++ /dev/null @@ -1,33 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: VideoProcessor - aligner: COINActionSegmentationAligner - bert_name: bert-base-uncased - test_path: data/coin/COIN.json - meta_processor: COINActionSegmentationMetaProcessor - vfeat_dir: data/feat/feat_coin_s3d - text_processor: COINActionSegmentationTextProcessor - num_iso_layer: 12 - sliding_window: 16 - sliding_window_size: 32 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 1 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/retri/videoclip/checkpoint_best.pt -model: - model_cls: MMFusionSeparate - mm_encoder_cls: null - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - save_path: runs/retri/videoclip/coin_zs/eval -metric: COINActionSegmentationMetric -predictor: COINZSPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_crosstask_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_crosstask_videoclip.yaml deleted file mode 100644 index e82f54fbe..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_crosstask_videoclip.yaml +++ /dev/null @@ -1,40 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: CrossTaskVideoProcessor - aligner: CrossTaskAligner - bert_name: bert-base-uncased - meta_processor: CrossTaskMetaProcessor - test_path: data/crosstask/crosstask_release/videos_val.csv - train_csv_path: data/crosstask/crosstask_release/videos.csv - val_path: data/crosstask/crosstask_release/videos_val.csv - val_csv_path: data/crosstask/crosstask_release/videos_val.csv - primary_path: data/crosstask/crosstask_release/tasks_primary.txt - related_path: data/crosstask/crosstask_release/tasks_related.txt - vfeat_dir: data/feat/feat_crosstask_s3d - annotation_path: data/crosstask/crosstask_release/annotations - n_train: 30 - text_processor: CrossTaskTextProcessor - num_iso_layer: 12 - sliding_window: 16 - sliding_window_size: 32 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 1 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/retri/videoclip/crosstask/checkpoint_best.pt -model: - model_cls: MMFusionSeparateActionLocalization - mm_encoder_cls: null - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - save_path: runs/retri/videoclip/crosstask/eval -metric: CrossTaskMetric -predictor: CrossTaskPredictor diff --git 
a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_crosstask_zs_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_crosstask_zs_videoclip.yaml deleted file mode 100644 index 6fc357cc1..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_crosstask_zs_videoclip.yaml +++ /dev/null @@ -1,40 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: CrossTaskVideoProcessor - aligner: CrossTaskAligner - bert_name: bert-base-uncased - meta_processor: CrossTaskMetaProcessor - test_path: data/crosstask/crosstask_release/videos_val.csv - train_csv_path: data/crosstask/crosstask_release/videos.csv - val_path: data/crosstask/crosstask_release/videos_val.csv - val_csv_path: data/crosstask/crosstask_release/videos_val.csv - primary_path: data/crosstask/crosstask_release/tasks_primary.txt - related_path: data/crosstask/crosstask_release/tasks_related.txt - vfeat_dir: data/feat/feat_crosstask_s3d - annotation_path: data/crosstask/crosstask_release/annotations - n_train: 30 - text_processor: CrossTaskTextProcessor - num_iso_layer: 12 - sliding_window: 16 - sliding_window_size: 32 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 1 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/retri/videoclip/checkpoint_best.pt -model: - model_cls: MMFusionSeparateActionLocalization - mm_encoder_cls: null - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - save_path: runs/retri/videoclip/crosstask_zs/eval -metric: CrossTaskMetric -predictor: CrossTaskPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_didemo_zs.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_didemo_zs.yaml deleted file mode 100644 index 8dc716815..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_didemo_zs.yaml +++ /dev/null @@ -1,31 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: VideoProcessor - aligner: DiDeMoAligner - bert_name: bert-base-uncased - meta_processor: DiDeMoMetaProcessor - test_path: data/didemo/test_data.json - vfeat_dir: data/feat/feat_didemo_s3d - text_processor: DiDeMoTextProcessor - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 256 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/retri/videoclip/checkpoint_best.pt -model: - model_cls: MMFusionSeparate - mm_encoder_cls: null - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - save_path: runs/retri/videoclip/didemo_zs/eval -metric: DiDeMoMetric -predictor: DiDeMoPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_vtt_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_vtt_videoclip.yaml deleted file mode 100644 index 19321ad5f..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_vtt_videoclip.yaml +++ /dev/null @@ -1,31 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: VideoProcessor - aligner: DSAligner - bert_name: bert-base-uncased - meta_processor: MSRVTTMetaProcessor - test_path: data/msrvtt/MSRVTT_JSFUSION_test.csv - vfeat_dir: data/feat/feat_vtt_s3d - text_processor: MSRVTTTextProcessor - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 256 - valid_subset: test - num_workers: 2 - common_eval: 
- path: runs/retri/videoclip/vtt/checkpoint_last.pt -model: - model_cls: MMFusionSeparate - mm_encoder_cls: null - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - save_path: runs/retri/videoclip/vtt/eval -metric: RetrievalMetric -predictor: RetrievalPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_vtt_zs.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_vtt_zs.yaml deleted file mode 100644 index d149fa396..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_vtt_zs.yaml +++ /dev/null @@ -1,31 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: VideoProcessor - aligner: DSAligner - bert_name: bert-base-uncased - meta_processor: MSRVTTMetaProcessor - test_path: data/msrvtt/MSRVTT_JSFUSION_test.csv - vfeat_dir: data/feat/feat_vtt_s3d - text_processor: MSRVTTTextProcessor - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 256 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/retri/videoclip/checkpoint_best.pt -model: - model_cls: MMFusionSeparate - mm_encoder_cls: null - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - save_path: runs/retri/videoclip/vtt_zs/eval -metric: RetrievalMetric -predictor: RetrievalPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_vttqa_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_vttqa_videoclip.yaml deleted file mode 100644 index 295aeedbb..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_vttqa_videoclip.yaml +++ /dev/null @@ -1,31 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: VideoProcessor - aligner: MSRVTTQAAligner - bert_name: bert-base-uncased - meta_processor: MSRVTTQAMetaProcessor - test_path: data/msrvtt-qa/MSR_MC_test.csv - vfeat_dir: data/feat/feat_vtt_s3d - text_processor: MSRVTTQATextProcessor - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 256 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/retri/videoclip/vttqa/checkpoint_last.pt -model: - model_cls: MMFusionSeparate - mm_encoder_cls: null - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - save_path: runs/retri/videoclip/vttqa/eval -metric: QAMetric -predictor: QAPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_vttqa_zs.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_vttqa_zs.yaml deleted file mode 100644 index 7a876c822..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_vttqa_zs.yaml +++ /dev/null @@ -1,31 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: VideoProcessor - aligner: MSRVTTQAAligner - bert_name: bert-base-uncased - meta_processor: MSRVTTQAMetaProcessor - test_path: data/msrvtt-qa/MSR_MC_test.csv - vfeat_dir: data/feat/feat_vtt_s3d - text_processor: MSRVTTQATextProcessor - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 256 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/retri/videoclip/checkpoint_best.pt -model: - model_cls: MMFusionSeparate - mm_encoder_cls: null - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - 
save_path: runs/retri/videoclip/vttqa_zs/eval -metric: QAMetric -predictor: QAPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_youcook_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_youcook_videoclip.yaml deleted file mode 100644 index 86a4ab203..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_youcook_videoclip.yaml +++ /dev/null @@ -1,33 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: YoucookVideoProcessor - aligner: DSAligner - bert_name: bert-base-uncased - meta_processor: YoucookMetaProcessor - test_path: data/youcook/youcook_val.pkl - trainval_annotation: data/youcook/youcookii_annotations_trainval.json - use_annotation_text: true - vfeat_dir: data/feat/feat_youcook_s3d - text_processor: TextProcessor - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 256 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/retri/videoclip/youcook/checkpoint_last.pt -model: - model_cls: MMFusionSeparate - mm_encoder_cls: null - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - save_path: runs/retri/videoclip/youcook/eval -metric: RetrievalMetric -predictor: RetrievalPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_youcook_zs.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_youcook_zs.yaml deleted file mode 100644 index fd2941708..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/test_youcook_zs.yaml +++ /dev/null @@ -1,33 +0,0 @@ -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: YoucookVideoProcessor - aligner: DSAligner - bert_name: bert-base-uncased - meta_processor: YoucookMetaProcessor - test_path: data/youcook/youcook_val.pkl - trainval_annotation: data/youcook/youcookii_annotations_trainval.json - use_annotation_text: true - vfeat_dir: data/feat/feat_youcook_s3d - text_processor: TextProcessor - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - dataset: - batch_size: 256 - valid_subset: test - num_workers: 2 - common_eval: - path: runs/retri/videoclip/checkpoint_best.pt -model: - model_cls: MMFusionSeparate - mm_encoder_cls: null - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - save_path: runs/retri/videoclip/youcook_zs/eval -metric: RetrievalMetric -predictor: RetrievalPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/vtt_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/vtt_videoclip.yaml deleted file mode 100644 index d8b4079ac..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/vtt_videoclip.yaml +++ /dev/null @@ -1,51 +0,0 @@ -dataset: - video_processor: VideoProcessor - bert_name: bert-base-uncased - meta_processor: MSRVTTMetaProcessor - train_path: data/msrvtt/MSRVTT_train.csv - jsfusion_path: data/msrvtt/MSRVTT_JSFUSION_test.csv - full_test_path: data/msrvtt/MSRVTT_FULL_test.csv - dup: 20 - val_path: data/msrvtt/MSRVTT_JSFUSION_test.csv - vfeat_dir: data/feat/feat_vtt_s3d - text_processor: MSRVTTTextProcessor - json_path: data/msrvtt/MSRVTT_data.json - aligner: DSAligner - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - common: - tensorboard_logdir: run - log_interval: 1000 - fp16: true - dataset: - num_workers: 4 - batch_size: 224 - optimization: - lr: - - 5.0e-05 - 
clip_norm: 2.0 - optimizer: adam - adam_betas: (0.9, 0.98) - lr_scheduler: polynomial_decay - total_num_update: 1000000 - warmup_updates: 122 - weight_decay: 0.0 - ddp_backend: no_c10d - max_epoch: 10 - checkpoint: - restore_file: runs/retri/videoclip/checkpoint_best.pt - reset_optimizer: true - reset_dataloader: true - reset_meters: true - save_dir: runs/retri/videoclip/vtt -task_type: sweep_small -model: - model_cls: MMFusionSeparate - mm_encoder_cls: null - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -loss: - loss_cls: T2VContraLoss diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/vttqa_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/vttqa_videoclip.yaml deleted file mode 100644 index f0566d784..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/vttqa_videoclip.yaml +++ /dev/null @@ -1,49 +0,0 @@ -dataset: - video_processor: VideoProcessor - bert_name: bert-base-uncased - meta_processor: MSRVTTMetaProcessor - train_path: data/msrvtt/MSRVTT_train.csv - dup: 20 - val_path: data/msrvtt/MSRVTT_JSFUSION_test.csv - vfeat_dir: data/feat/feat_vtt_s3d - text_processor: MSRVTTTextProcessor - json_path: data/msrvtt/MSRVTT_data.json - aligner: DSAligner - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - common: - tensorboard_logdir: run - log_interval: 1000 - fp16: true - dataset: - num_workers: 4 - batch_size: 128 - optimization: - lr: - - 5.0e-05 - clip_norm: 2.0 - optimizer: adam - adam_betas: (0.9, 0.98) - lr_scheduler: polynomial_decay - total_num_update: 1000000 - warmup_updates: 122 - weight_decay: 0.0 - ddp_backend: no_c10d - max_epoch: 5 - checkpoint: - restore_file: runs/retri/videoclip/checkpoint_best.pt - reset_optimizer: true - reset_dataloader: true - reset_meters: true - save_dir: runs/retri/videoclip/vttqa -task_type: sweep_small -model: - model_cls: MMFusionSeparate - mm_encoder_cls: null - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -loss: - loss_cls: V2TContraLoss diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/youcook_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/youcook_videoclip.yaml deleted file mode 100644 index c2b13e551..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoclip/youcook_videoclip.yaml +++ /dev/null @@ -1,49 +0,0 @@ -dataset: - video_processor: YoucookVideoProcessor - bert_name: bert-base-uncased - meta_processor: YoucookMetaProcessor - train_path: data/youcook/youcook_train.pkl - val_path: data/youcook/youcook_val.pkl - trainval_annotation: data/youcook/youcookii_annotations_trainval.json - use_annotation_text: true - vfeat_dir: data/feat/feat_youcook_s3d - text_processor: TextProcessor - aligner: DSAligner - num_iso_layer: 12 - max_video_len: 32 - max_len: 96 -fairseq: - common: - tensorboard_logdir: run - log_interval: 1000 - fp16: true - dataset: - num_workers: 4 - batch_size: 128 - optimization: - lr: - - 5.0e-05 - clip_norm: 2.0 - optimizer: adam - adam_betas: (0.9, 0.98) - lr_scheduler: polynomial_decay - total_num_update: 1000000 - warmup_updates: 122 - weight_decay: 0.0 - ddp_backend: no_c10d - max_epoch: 10 - checkpoint: - restore_file: runs/retri/videoclip/checkpoint_best.pt - reset_optimizer: true - reset_dataloader: true - reset_meters: true - save_dir: runs/retri/videoclip/youcook -task_type: sweep_small -model: - model_cls: MMFusionSeparate - mm_encoder_cls: null - video_encoder_cls: MMBertForEncoder - 
text_encoder_cls: BertModel - num_hidden_video_layers: 6 -loss: - loss_cls: T2VContraLoss diff --git a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoretri.yaml b/kosmos-g/fairseq/examples/MMPT/projects/retri/videoretri.yaml deleted file mode 100644 index 969e1fb27..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/retri/videoretri.yaml +++ /dev/null @@ -1,51 +0,0 @@ -includes: projects/mfmmlm.yaml -project_dir: retri/videoretri -run_task: - - how2.yaml -task_group: - pretrain: - task: VideoRetriTask - retri_epoch: 1 - vectorpool_cls: VideoVectorPool - retriever_cls: VectorRetriever - num_cands: 64 - dataset: - train_path: data/how2/how2_s3d_train.lst - meta_processor: ShardedHow2VideoRetriMetaProcessor - video_processor: ShardedVideoRetriVideoProcessor - text_processor: ShardedVideoRetriTextProcessor - aligner: VideoRetriOverlappedAligner - sampled_video_min_len: 3 - sampled_video_max_len: 32 - sampled_min_len: 8 - sampled_max_len: 64 - num_video_per_batch: 32 - # do not use subsampling as it changes fairseq batch_size. - subsampling: 1 # disable subsampling - clip_per_video: 16 - fairseq: - dataset: - batch_size: 1 - optimization: - max_epoch: 25 - model: - model_cls: MMFusionShare - mm_encoder_cls: MMBertForEncoder - loss: - loss_cls: MMContraLoss - finetune: - task_list: [vtt_videoclip.yaml, youcook_videoclip.yaml, vttqa_videoclip.yaml, crosstask_videoclip.yaml, coin_videoclip.yaml] - test: - task_list: - - test_youcook_zs.yaml - - test_vtt_zs.yaml - - test_vttqa_zs.yaml - - test_crosstask_zs_videoclip.yaml - - test_coin_zs.yaml - - test_didemo_zs.yaml - - test_youcook_videoclip.yaml - - test_vtt_videoclip.yaml - - test_vttqa_videoclip.yaml - - test_crosstask_videoclip.yaml - - test_coin_videoclip.yaml - diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/coin.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/coin.yaml deleted file mode 100644 index e7772486e..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/coin.yaml +++ /dev/null @@ -1,25 +0,0 @@ -includes: projects/task/ft.yaml -task_type: sweep_big -dataset: - meta_processor: COINActionSegmentationMetaProcessor - train_path: data/coin/COIN.json - val_path: data/coin/COIN.json - vfeat_dir: data/feat/feat_coin_s3d - video_processor: VideoProcessor - text_processor: COINActionSegmentationTextProcessor - aligner: COINActionSegmentationAligner - num_iso_layer: 12 - sliding_window: 8 - sliding_window_size: 32 -model: - model_cls: MMFusionActionSegmentation - mm_encoder_cls: MMBertForTokenClassification -loss: - loss_cls: CrossEntropy -fairseq: - dataset: - batch_size: 1 - optimization: - max_epoch: 8 - checkpoint: - save_dir: runs/task/coin diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/coin_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/coin_videoclip.yaml deleted file mode 100644 index 69988bc18..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/coin_videoclip.yaml +++ /dev/null @@ -1,7 +0,0 @@ -includes: projects/task/coin.yaml -model: - model_cls: MMFusionSeparateActionSegmentation - mm_encoder_cls: - video_encoder_cls: MMBertForTokenClassification - text_encoder_cls: BertModel # dummy, not used. 
- num_hidden_video_layers: 6 diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/crosstask.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/crosstask.yaml deleted file mode 100644 index cb4dbb0cb..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/crosstask.yaml +++ /dev/null @@ -1,31 +0,0 @@ -includes: projects/task/ft.yaml -dataset: - meta_processor: CrossTaskMetaProcessor - train_path: data/crosstask/crosstask_release/videos.csv # dummy - train_csv_path: data/crosstask/crosstask_release/videos.csv - val_path: data/crosstask/crosstask_release/videos_val.csv # dummy - val_csv_path: data/crosstask/crosstask_release/videos_val.csv - primary_path: data/crosstask/crosstask_release/tasks_primary.txt - related_path: data/crosstask/crosstask_release/tasks_related.txt - vfeat_dir: data/feat/feat_crosstask_s3d - annotation_path: data/crosstask/crosstask_release/annotations - n_train: 30 - video_processor: CrossTaskVideoProcessor - text_processor: CrossTaskTextProcessor - aligner: CrossTaskAligner - num_iso_layer: 12 - sliding_window: 16 - sliding_window_size: 32 -model: - model_cls: MMFusionActionLocalization - mm_encoder_cls: MMBertForJoint -loss: - loss_cls: BCE -fairseq: - dataset: - batch_size: 1 - optimization: - max_epoch: 5 - checkpoint: - save_dir: runs/task/crosstask - restore_file: runs/task/checkpoint11.pt # for VLM diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/crosstask_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/crosstask_videoclip.yaml deleted file mode 100644 index 6ec613c07..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/crosstask_videoclip.yaml +++ /dev/null @@ -1,10 +0,0 @@ -includes: projects/task/crosstask.yaml -model: - model_cls: MMFusionSeparateActionLocalization - mm_encoder_cls: - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel # dummy, not used. - num_hidden_video_layers: 6 -fairseq: - checkpoint: - restore_file: runs/task/checkpoint_best.pt # overwrite the default of VLM. diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/default.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/default.yaml deleted file mode 100644 index 087fef71a..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/default.yaml +++ /dev/null @@ -1,20 +0,0 @@ -# this yaml cannot be run alone. you must use `how2.yaml`, `vtt.yaml` etc for training. -dataset: - video_processor: VideoProcessor - bert_name: bert-base-uncased -fairseq: - common: - tensorboard_logdir: run - log_interval: 1000 - dataset: - num_workers: 4 - optimization: - lr: [ 0.00005 ] - clip_norm: 2.0 - optimizer: adam - adam_betas: (0.9, 0.98) - lr_scheduler: polynomial_decay - total_num_update: 1000000 # backward compatible on fairseq 1.0.0a0+af0389f for reproducibility. - warmup_updates: 1000 - weight_decay: 0.0 - ddp_backend: no_c10d diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/ft.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/ft.yaml deleted file mode 100644 index c93b8a73e..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/ft.yaml +++ /dev/null @@ -1,13 +0,0 @@ -includes: projects/task/default.yaml -# all derived config will be run by fairseq-train. -task_type: sweep_small -fairseq: - optimization: - warmup_updates: 122 # copied from roberta glue: https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.glue.md - checkpoint: - # save_interval_updates: 512 - # borrowed from Roberta script. 
- restore_file: runs/task/checkpoint_best.pt - reset_optimizer: True - reset_dataloader: True - reset_meters: True diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/how2.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/how2.yaml deleted file mode 100644 index 094dd04bf..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/how2.yaml +++ /dev/null @@ -1,22 +0,0 @@ -includes: projects/task/default.yaml -task_type: sweep_big -slurm_config: big -dataset: - meta_processor: ShardedHow2MetaProcessor - train_path: data/how2/how2_s3d_train.lst - val_path: data/how2/how2_s3d_val.lst - video_processor: ShardedVideoProcessor - vfeat_dir: data/feat/feat_how2_s3d_shard_small - text_processor: ShardedTextProcessor - tfeat_dir: data/feat/feat_how2_s3d_shard_small/raw_caption_dedup.bert-base-uncased. - aligner: FixedLenAligner -# disable direct running of this yaml -eval: - save_path: runs/task -fairseq: - checkpoint: - save_dir: runs/task - save_interval_updates: 1024 - keep_interval_updates: 2 - keep_last_epochs: 30 - diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test.yaml deleted file mode 100644 index 0a9844524..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test.yaml +++ /dev/null @@ -1,13 +0,0 @@ -# this yaml cannot be run alone: implement a test_${dataset}.yaml -slurm_config: big -task_type: local_predict -dataset: - split: test - video_processor: VideoProcessor - aligner: DSAligner - bert_name: bert-base-uncased -fairseq: - dataset: - batch_size: 256 - valid_subset: test - num_workers: 2 diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_coin.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_coin.yaml deleted file mode 100644 index 6d919df7c..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_coin.yaml +++ /dev/null @@ -1,24 +0,0 @@ -includes: projects/task/test.yaml -dataset: - split: test - test_path: data/coin/COIN.json - meta_processor: COINActionSegmentationMetaProcessor - vfeat_dir: data/feat/feat_coin_s3d - video_processor: VideoProcessor - text_processor: COINActionSegmentationTextProcessor - aligner: COINActionSegmentationAligner - num_iso_layer: 12 - sliding_window: 16 - sliding_window_size: 32 -model: - model_cls: MMFusionActionSegmentation - mm_encoder_cls: MMBertForTokenClassification -eval: - save_path: runs/task/coin/eval -fairseq: - dataset: - batch_size: 1 - common_eval: - path: runs/task/coin/checkpoint_best.pt -metric: COINActionSegmentationMetric -predictor: COINPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_coin_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_coin_videoclip.yaml deleted file mode 100644 index b41f5bc48..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_coin_videoclip.yaml +++ /dev/null @@ -1,7 +0,0 @@ -includes: projects/task/test_coin.yaml -model: - model_cls: MMFusionSeparateActionSegmentation - mm_encoder_cls: - video_encoder_cls: MMBertForTokenClassification - text_encoder_cls: BertModel # dummy, not used. 
- num_hidden_video_layers: 6 diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_coin_zs.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_coin_zs.yaml deleted file mode 100644 index 5d19b09f1..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_coin_zs.yaml +++ /dev/null @@ -1,13 +0,0 @@ -includes: projects/task/test_coin.yaml -model: - model_cls: MMFusionSeparate - mm_encoder_cls: - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - save_path: runs/task/coin_zs/eval -fairseq: - common_eval: - path: runs/task/checkpoint_best.pt -predictor: COINZSPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_crosstask.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_crosstask.yaml deleted file mode 100644 index 6dd778e30..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_crosstask.yaml +++ /dev/null @@ -1,32 +0,0 @@ -includes: projects/task/test.yaml -dataset: - split: test - meta_processor: CrossTaskMetaProcessor - test_path: data/crosstask/crosstask_release/videos_val.csv - train_csv_path: data/crosstask/crosstask_release/videos.csv - val_path: data/crosstask/crosstask_release/videos_val.csv # dummy - val_csv_path: data/crosstask/crosstask_release/videos_val.csv - primary_path: data/crosstask/crosstask_release/tasks_primary.txt - related_path: data/crosstask/crosstask_release/tasks_related.txt - vfeat_dir: data/feat/feat_crosstask_s3d - annotation_path: data/crosstask/crosstask_release/annotations - n_train: 30 - video_processor: CrossTaskVideoProcessor - text_processor: CrossTaskTextProcessor - aligner: CrossTaskAligner - num_iso_layer: 12 - sliding_window: 16 - sliding_window_size: 32 -model: - model_cls: MMFusionActionLocalization - mm_encoder_cls: MMBertForJoint -eval: - save_path: runs/task/crosstask/eval -fairseq: - # read code and find what is the checkpoint arg. - dataset: - batch_size: 1 - common_eval: - path: runs/task/crosstask/checkpoint_best.pt -metric: CrossTaskMetric -predictor: CrossTaskPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_crosstask_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_crosstask_videoclip.yaml deleted file mode 100644 index df12535d2..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_crosstask_videoclip.yaml +++ /dev/null @@ -1,7 +0,0 @@ -includes: projects/task/test_crosstask.yaml -model: - model_cls: MMFusionSeparateActionLocalization - mm_encoder_cls: - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel # dummy, not used. 
- num_hidden_video_layers: 6 diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_crosstask_zs.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_crosstask_zs.yaml deleted file mode 100644 index 19386e495..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_crosstask_zs.yaml +++ /dev/null @@ -1,32 +0,0 @@ -includes: projects/task/test.yaml -dataset: - split: test - meta_processor: CrossTaskMetaProcessor - test_path: data/crosstask/crosstask_release/videos_val.csv - train_csv_path: data/crosstask/crosstask_release/videos.csv - val_path: data/crosstask/crosstask_release/videos_val.csv # dummy - val_csv_path: data/crosstask/crosstask_release/videos_val.csv - primary_path: data/crosstask/crosstask_release/tasks_primary.txt - related_path: data/crosstask/crosstask_release/tasks_related.txt - vfeat_dir: data/feat/feat_crosstask_s3d - annotation_path: data/crosstask/crosstask_release/annotations - n_train: 30 - video_processor: CrossTaskVideoProcessor - text_processor: CrossTaskTextProcessor - aligner: CrossTaskAligner - num_iso_layer: 12 - sliding_window: 16 - sliding_window_size: 32 -model: - model_cls: MMFusionActionLocalization - mm_encoder_cls: MMBertForJoint -eval: - save_path: runs/task/crosstask_zs/eval -fairseq: - # read code and find what is the checkpoint arg. - dataset: - batch_size: 1 - common_eval: - path: runs/task/checkpoint_best.pt # load the best from how2 on ACL submission: runs/task/checkpoint11.pt -metric: CrossTaskMetric -predictor: CrossTaskPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_crosstask_zs_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_crosstask_zs_videoclip.yaml deleted file mode 100644 index 7f0198276..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_crosstask_zs_videoclip.yaml +++ /dev/null @@ -1,7 +0,0 @@ -includes: projects/task/test_crosstask_zs.yaml -model: - model_cls: MMFusionSeparateActionLocalization - mm_encoder_cls: - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel # dummy, not used. - num_hidden_video_layers: 6 diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_didemo_zs.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_didemo_zs.yaml deleted file mode 100644 index 4b53dca71..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_didemo_zs.yaml +++ /dev/null @@ -1,23 +0,0 @@ -includes: projects/task/test.yaml -dataset: - meta_processor: DiDeMoMetaProcessor - test_path: data/didemo/test_data.json - video_processor: VideoProcessor - vfeat_dir: data/feat/feat_didemo_s3d - text_processor: DiDeMoTextProcessor - aligner: DiDeMoAligner - num_iso_layer: 12 -model: - model_cls: MMFusionSeparate - mm_encoder_cls: - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - save_path: runs/task/didemo_zs/eval -fairseq: - # read code and find what is the checkpoint arg. 
- common_eval: - path: runs/task/checkpoint_best.pt -metric: DiDeMoMetric -predictor: DiDeMoPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_vtt.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_vtt.yaml deleted file mode 100644 index 2f809b306..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_vtt.yaml +++ /dev/null @@ -1,19 +0,0 @@ -includes: projects/task/test.yaml -dataset: - meta_processor: MSRVTTMetaProcessor - test_path: data/msrvtt/MSRVTT_JSFUSION_test.csv - video_processor: VideoProcessor - vfeat_dir: data/feat/feat_vtt_s3d - text_processor: MSRVTTTextProcessor - num_iso_layer: 12 -model: - model_cls: MMFusionJoint - mm_encoder_cls: MMBertForJoint -eval: - save_path: runs/task/vtt/eval -fairseq: - # read code and find what is the checkpoint arg. - common_eval: - path: runs/task/vtt/checkpoint_last.pt -metric: RetrievalMetric -predictor: RetrievalPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_vtt_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_vtt_videoclip.yaml deleted file mode 100644 index cb6564394..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_vtt_videoclip.yaml +++ /dev/null @@ -1,8 +0,0 @@ -includes: projects/task/test_vtt.yaml -model: - model_cls: MMFusionSeparate - mm_encoder_cls: - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 - diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_vtt_zs.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_vtt_zs.yaml deleted file mode 100644 index 57340924b..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_vtt_zs.yaml +++ /dev/null @@ -1,13 +0,0 @@ -includes: projects/task/test_vtt.yaml -model: - model_cls: MMFusionSeparate - mm_encoder_cls: - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - save_path: runs/task/vtt_zs/eval -fairseq: - # read code and find what is the checkpoint arg. - common_eval: - path: runs/task/checkpoint_best.pt diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_vttqa.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_vttqa.yaml deleted file mode 100644 index ddf813c53..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_vttqa.yaml +++ /dev/null @@ -1,20 +0,0 @@ -includes: projects/task/test.yaml -dataset: - meta_processor: MSRVTTQAMetaProcessor - test_path: data/msrvtt-qa/MSR_MC_test.csv - video_processor: VideoProcessor - vfeat_dir: data/feat/feat_vtt_s3d - text_processor: MSRVTTQATextProcessor - aligner: MSRVTTQAAligner - num_iso_layer: 12 -model: - model_cls: MMFusionJoint - mm_encoder_cls: MMBertForJoint -eval: - save_path: runs/task/vttqa/eval -fairseq: - # read code and find what is the checkpoint arg. 
- common_eval: - path: runs/task/vttqa/checkpoint_last.pt -metric: QAMetric -predictor: QAPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_vttqa_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_vttqa_videoclip.yaml deleted file mode 100644 index 32a41e861..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_vttqa_videoclip.yaml +++ /dev/null @@ -1,8 +0,0 @@ -includes: projects/task/test_vttqa.yaml -model: - model_cls: MMFusionSeparate - mm_encoder_cls: - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 - diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_vttqa_zs.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_vttqa_zs.yaml deleted file mode 100644 index 5e0e29d20..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_vttqa_zs.yaml +++ /dev/null @@ -1,13 +0,0 @@ -includes: projects/task/test_vttqa.yaml -model: - model_cls: MMFusionSeparate - mm_encoder_cls: - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - save_path: runs/task/vttqa_zs/eval -fairseq: - # read code and find what is the checkpoint arg. - common_eval: - path: runs/task/checkpoint_best.pt diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_youcook.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_youcook.yaml deleted file mode 100644 index 092b680fa..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_youcook.yaml +++ /dev/null @@ -1,22 +0,0 @@ -includes: projects/task/test.yaml -dataset: - meta_processor: YoucookMetaProcessor - test_path: data/youcook/youcook_val.pkl - trainval_annotation: data/youcook/youcookii_annotations_trainval.json - use_annotation_text: True - video_processor: YoucookVideoProcessor - vfeat_dir: data/feat/feat_youcook_s3d # /checkpoint/huxu/feat/youcook_vmz # /checkpoint/prarora/berniehuang/feat_youcook_vmz - text_processor: TextProcessor - aligner: DSAligner - num_iso_layer: 12 -model: - model_cls: MMFusionJoint - mm_encoder_cls: MMBertForJoint -eval: - save_path: runs/task/youcook/eval -fairseq: - # read code and find what is the checkpoint arg. - common_eval: - path: runs/task/youcook/checkpoint_last.pt -metric: RetrievalMetric -predictor: RetrievalPredictor diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_youcook_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_youcook_videoclip.yaml deleted file mode 100644 index b85ea4347..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_youcook_videoclip.yaml +++ /dev/null @@ -1,8 +0,0 @@ -includes: projects/task/test_youcook.yaml -model: - model_cls: MMFusionSeparate - mm_encoder_cls: - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 - diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_youcook_zs.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_youcook_zs.yaml deleted file mode 100644 index 0a5875bea..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_youcook_zs.yaml +++ /dev/null @@ -1,13 +0,0 @@ -includes: projects/task/test_youcook.yaml -model: - model_cls: MMFusionSeparate - mm_encoder_cls: - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -eval: - save_path: runs/task/youcook_zs/eval -fairseq: - # read code and find what is the checkpoint arg. 
- common_eval: - path: runs/task/checkpoint_best.pt diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/test_youcookcap.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/test_youcookcap.yaml deleted file mode 100644 index 24f6518b7..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/test_youcookcap.yaml +++ /dev/null @@ -1,23 +0,0 @@ -includes: projects/task/test.yaml -dataset: - meta_processor: YoucookNLGMetaProcessor - test_path: data/youcook/val_list.txt - trainval_annotation: data/youcook/youcookii_annotations_trainval.json - video_processor: YoucookVideoProcessor - vfeat_dir: data/feat/feat_youcook_s3d - text_processor: NLGTextProcessor - aligner: DSNLGAligner -model: - model_cls: MMFusionNLG - mm_encoder_cls: MMBertForNLG - max_decode_length: 24 -eval: - save_path: runs/task/youcookcap/eval -fairseq: - # read code and find what is the checkpoint arg. - common_eval: - path: runs/task/youcookcap/checkpoint_best.pt -metric: NLGMetric -predictor: NLGPredictor -gen_param: - num_beams: 5 diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/vtt.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/vtt.yaml deleted file mode 100644 index 395e2ee6f..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/vtt.yaml +++ /dev/null @@ -1,25 +0,0 @@ -includes: projects/task/ft.yaml -dataset: - meta_processor: MSRVTTMetaProcessor - train_path: data/msrvtt/MSRVTT_train.csv - jsfusion_path: data/msrvtt/MSRVTT_JSFUSION_test.csv - full_test_path: data/msrvtt/MSRVTT_FULL_test.csv - dup: 20 - val_path: data/msrvtt/MSRVTT_JSFUSION_test.csv - vfeat_dir: data/feat/feat_vtt_s3d - text_processor: MSRVTTTextProcessor - json_path: data/msrvtt/MSRVTT_data.json - aligner: DSAligner - num_iso_layer: 12 -model: - model_cls: MMFusionJoint - mm_encoder_cls: MMBertForJoint -loss: - loss_cls: T2VContraLoss -fairseq: - dataset: - batch_size: 256 - optimization: - max_epoch: 10 - checkpoint: - save_dir: runs/task/vtt diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/vtt_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/vtt_videoclip.yaml deleted file mode 100644 index a9892cab0..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/vtt_videoclip.yaml +++ /dev/null @@ -1,12 +0,0 @@ -includes: projects/task/vtt.yaml -model: - model_cls: MMFusionSeparate - mm_encoder_cls: - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 -fairseq: - dataset: - batch_size: 224 -# model_cls: MMFusionShare -# mm_encoder_cls: MMBertForEncoder diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/vttqa.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/vttqa.yaml deleted file mode 100644 index 56d578eff..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/vttqa.yaml +++ /dev/null @@ -1,23 +0,0 @@ -includes: projects/task/ft.yaml -dataset: - meta_processor: MSRVTTMetaProcessor - train_path: data/msrvtt/MSRVTT_train.csv - dup: 20 - val_path: data/msrvtt/MSRVTT_JSFUSION_test.csv - vfeat_dir: data/feat/feat_vtt_s3d - text_processor: MSRVTTTextProcessor - json_path: data/msrvtt/MSRVTT_data.json - aligner: DSAligner - num_iso_layer: 12 -model: - model_cls: MMFusionJoint - mm_encoder_cls: MMBertForJoint -loss: - loss_cls: V2TContraLoss -fairseq: - dataset: - batch_size: 128 - optimization: - max_epoch: 5 - checkpoint: - save_dir: runs/task/vttqa diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/vttqa_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/vttqa_videoclip.yaml deleted file mode 100644 index 2d484ca8a..000000000 
--- a/kosmos-g/fairseq/examples/MMPT/projects/task/vttqa_videoclip.yaml +++ /dev/null @@ -1,10 +0,0 @@ -includes: projects/task/vttqa.yaml -model: - model_cls: MMFusionSeparate - mm_encoder_cls: - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 - -# model_cls: MMFusionShare -# mm_encoder_cls: MMBertForEncoder diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/youcook.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/youcook.yaml deleted file mode 100644 index e0cd84174..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/youcook.yaml +++ /dev/null @@ -1,25 +0,0 @@ -includes: projects/task/ft.yaml -dataset: - meta_processor: YoucookMetaProcessor - train_path: data/youcook/youcook_train.pkl - val_path: data/youcook/youcook_val.pkl - trainval_annotation: data/youcook/youcookii_annotations_trainval.json - use_annotation_text: True - video_processor: YoucookVideoProcessor - vfeat_dir: data/feat/feat_youcook_s3d # /checkpoint/huxu/feat/youcook_vmz # /checkpoint/prarora/berniehuang/feat_youcook_vmz - text_processor: TextProcessor - aligner: DSAligner - num_iso_layer: 12 -model: - model_cls: MMFusionJoint - mm_encoder_cls: MMBertForJoint -loss: - loss_cls: T2VContraLoss -fairseq: - dataset: - batch_size: 128 - optimization: - max_epoch: 10 - checkpoint: - save_dir: runs/task/youcook - diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/youcook_videoclip.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/youcook_videoclip.yaml deleted file mode 100644 index e3e901c30..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/youcook_videoclip.yaml +++ /dev/null @@ -1,9 +0,0 @@ -includes: projects/task/youcook.yaml -model: - model_cls: MMFusionSeparate - mm_encoder_cls: - video_encoder_cls: MMBertForEncoder - text_encoder_cls: BertModel - num_hidden_video_layers: 6 - # model_cls: MMFusionShare - # mm_encoder_cls: MMBertForEncoder diff --git a/kosmos-g/fairseq/examples/MMPT/projects/task/youcookcap.yaml b/kosmos-g/fairseq/examples/MMPT/projects/task/youcookcap.yaml deleted file mode 100644 index 047735f21..000000000 --- a/kosmos-g/fairseq/examples/MMPT/projects/task/youcookcap.yaml +++ /dev/null @@ -1,23 +0,0 @@ -# finetuning for youcook captioning. 
-includes: projects/task/ft.yaml -dataset: - meta_processor: YoucookNLGMetaProcessor - train_path: data/youcook/train_list.txt - val_path: data/youcook/val_list.txt - trainval_annotation: data/youcook/youcookii_annotations_trainval.json - video_processor: YoucookVideoProcessor - vfeat_dir: data/feat/feat_youcook_s3d - text_processor: NLGTextProcessor - aligner: DSNLGAligner -model: - model_cls: MMFusionNLG - mm_encoder_cls: MMBertForNLG -loss: - loss_cls: NLGLoss -fairseq: - dataset: - batch_size: 128 - optimization: - max_epoch: 10 - checkpoint: - save_dir: runs/task/youcookcap diff --git a/kosmos-g/fairseq/examples/MMPT/scripts/text_token_extractor/configs/bert-base-uncased.yaml b/kosmos-g/fairseq/examples/MMPT/scripts/text_token_extractor/configs/bert-base-uncased.yaml deleted file mode 100644 index 473dd9b45..000000000 --- a/kosmos-g/fairseq/examples/MMPT/scripts/text_token_extractor/configs/bert-base-uncased.yaml +++ /dev/null @@ -1,5 +0,0 @@ -dataset: - bert_name: bert-base-uncased - caption_pkl_path: data/how2/raw_caption_dedup.pkl - use_fast: true - target_dir: data/feat/feat_how2_s3d_shard_small diff --git a/kosmos-g/fairseq/examples/MMPT/scripts/text_token_extractor/pretokenization.py b/kosmos-g/fairseq/examples/MMPT/scripts/text_token_extractor/pretokenization.py deleted file mode 100644 index 29ae5dc15..000000000 --- a/kosmos-g/fairseq/examples/MMPT/scripts/text_token_extractor/pretokenization.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import pickle -import os -import argparse -import numpy as np - -from torch.utils.data import Dataset, DataLoader -from mmpt.processors import PKLJSONStrTextProcessor -from mmpt.utils import ShardedTensor, recursive_config - - -class TokenizerDataset(Dataset): - def __init__(self, config): - self.text_processor = PKLJSONStrTextProcessor(config) - self.video_ids = list(self.text_processor.data.keys()) - - def __getitem__(self, idx): - video_id = self.video_ids[idx] - return video_id, self.text_processor(video_id) - - def __len__(self): - return len(self.video_ids) - - -def numpify(shard_idx, video_ids, captions, target_dir, split, prefix, max_cap_len=32): - startends = [] - caps_ids = [] - for video_id in video_ids: - caption = captions[video_id] - startend = [] - cap_ids = [] - for start, end, cap in zip( - caption["start"], caption["end"], caption["cap"]): - startend.append(np.array([start, end]).astype("float32")) - cap_id = np.full((max_cap_len,), -1, dtype=np.int32) - cap = cap[:max_cap_len] - cap_id[:len(cap)] = cap - cap_ids.append(cap_id) - startends.append(np.stack(startend)) - caps_ids.append(np.stack(cap_ids)) - - startends = ShardedTensor.from_list(startends) - target_path = os.path.join( - target_dir, - prefix + split + "_" + str(shard_idx) - ) - print("save to", target_path) - startends.save(target_path + ".startends") - caps_ids = ShardedTensor.from_list(caps_ids) - caps_ids.save(target_path + ".caps_ids") - - -def sharding(config, out_file): - with open(out_file, "rb") as fr: - captions = pickle.load(fr) - target_dir = config.target_dir - prefix = os.path.basename( - os.path.splitext(config.caption_pkl_path)[0] - ) + "." + config.bert_name + "." 
- for split in ["train", "val"]: - target_path = os.path.join(target_dir, split + "_meta") - with open(target_path + ".pkl", "rb") as fr: - meta = pickle.load(fr) - print("load meta", target_path, len(meta)) - for shard_id in meta: - numpify( - shard_id, meta[shard_id], captions, - target_dir, split, prefix - ) - - -def tokenize(config, out_file): - def collator(samples): - return samples - dataset = TokenizerDataset(config) - data = {} - for idx, batch in enumerate( - DataLoader(dataset, collate_fn=collator, num_workers=16)): - for video_id, caption in batch: - data[video_id] = caption - if idx % 5000 == 0: - print(idx) - with open(out_file, "wb") as fw: - pickle.dump(data, fw, pickle.HIGHEST_PROTOCOL) - - -def main(args): - config = recursive_config(args.config).dataset - - out_file = os.path.splitext(config.caption_pkl_path)[0] \ - + "." + config.bert_name + ".pkl" - if not os.path.isfile(out_file): - tokenize(config, out_file) - sharding(config, out_file) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="pretokenize (raw_)caption.json into pkl.") - parser.add_argument('config', type=str) - args = parser.parse_args() - main(args) diff --git a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/extract.py b/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/extract.py deleted file mode 100644 index b5ee7b778..000000000 --- a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/extract.py +++ /dev/null @@ -1,157 +0,0 @@ -# Copyright Howto100M authors. -# Copyright (c) Facebook, Inc. All Rights Reserved - -import torch as th -import torch.nn.functional as F -import math -import numpy as np -import argparse - -from torch.utils.data import DataLoader -from model import get_model -from preprocessing import Preprocessing -from random_sequence_shuffler import RandomSequenceSampler - -from tqdm import tqdm -from pathbuilder import PathBuilder -from videoreader import VideoLoader - - -parser = argparse.ArgumentParser(description='Easy video feature extractor') - -parser.add_argument('--vdir', type=str) -parser.add_argument('--fdir', type=str) -parser.add_argument('--hflip', type=int, default=0) - -parser.add_argument('--batch_size', type=int, default=64, - help='batch size') -parser.add_argument('--type', type=str, default='2d', - help='CNN type') -parser.add_argument('--half_precision', type=int, default=0, - help='output half precision float') -parser.add_argument('--num_decoding_thread', type=int, default=4, - help='Num parallel thread for video decoding') -parser.add_argument('--l2_normalize', type=int, default=1, - help='l2 normalize feature') -parser.add_argument('--resnext101_model_path', type=str, default='model/resnext101.pth', - help='Resnext model path') -parser.add_argument('--vmz_model_path', type=str, default='model/r2plus1d_34_clip8_ig65m_from_scratch-9bae36ae.pth', - help='vmz model path') - -args = parser.parse_args() - - -# TODO: refactor all args into config. (current code is from different people.) 
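-# Each feature type below maps to its decoding settings: sampling fps, input frame size, whether to center-crop, and the number of output shards (0 = unsharded).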
-CONFIGS = { - "2d": { - "fps": 1, - "size": 224, - "centercrop": False, - "shards": 0, - }, - "3d": { - "fps": 24, - "size": 112, - "centercrop": True, - "shards": 0, - }, - "s3d": { - "fps": 30, - "size": 224, - "centercrop": True, - "shards": 0, - }, - "vmz": { - "fps": 24, - "size": 112, - "centercrop": True, - "shards": 0, - }, - "vae": { - "fps": 2, - "size": 256, - "centercrop": True, - "shards": 100, - } -} - -config = CONFIGS[args.type] - - -video_dirs = args.vdir -feature_dir = args.fdir - -video_dict = PathBuilder.build(video_dirs, feature_dir, ".npy", config["shards"]) - -dataset = VideoLoader( - video_dict=video_dict, - framerate=config["fps"], - size=config["size"], - centercrop=config["centercrop"], - hflip=args.hflip -) -n_dataset = len(dataset) -sampler = RandomSequenceSampler(n_dataset, 10) -loader = DataLoader( - dataset, - batch_size=1, - shuffle=False, - num_workers=args.num_decoding_thread, - sampler=sampler if n_dataset > 10 else None, -) -preprocess = Preprocessing(args.type) -model = get_model(args) - -with th.no_grad(): - for k, data in tqdm(enumerate(loader), total=loader.__len__(), ascii=True): - input_file = data['input'][0] - output_file = data['output'][0] - if len(data['video'].shape) > 3: - video = data['video'].squeeze() - if len(video.shape) == 4: - video = preprocess(video) - n_chunk = len(video) - if args.type == 'vmz': - n_chunk = math.ceil(n_chunk/float(3)) - features = th.cuda.FloatTensor(n_chunk, 512).fill_(0) - elif args.type == 's3d': - features = th.cuda.FloatTensor(n_chunk, 512).fill_(0) - elif args.type == "vae": - features = th.cuda.LongTensor(n_chunk, 1024).fill_(0) - else: - features = th.cuda.FloatTensor(n_chunk, 2048).fill_(0) - n_iter = int(math.ceil(n_chunk / float(args.batch_size))) - for i in range(n_iter): - factor = 1 - if args.type == 'vmz': - factor = 3 - min_ind = factor * i * args.batch_size - max_ind = factor * (i + 1) * args.batch_size - video_batch = video[min_ind:max_ind:factor].cuda() - if args.type == '2d': - batch_features = model(video_batch) # (51, 487), (51, 512) - elif args.type == 's3d': - batch_features = model(video_batch) - batch_features = batch_features['video_embedding'] - elif args.type == "vae": - # image_code. 
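-                    # the discrete VAE branch returns integer image codes, hence the LongTensor buffer above and the integer casts below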
- batch_features = model(video_batch) - else: - batch_pred, batch_features = model(video_batch) # (51, 487), (51, 512) - if args.l2_normalize: - batch_features = F.normalize(batch_features, dim=1) - features[i*args.batch_size:(i+1)*args.batch_size] = batch_features - features = features.cpu().numpy() - if args.half_precision: - if args.type == "vae": - features = features.astype(np.int16) - else: - features = features.astype('float16') - else: - if args.type == "vae": - features = features.astype(np.int32) - else: - features = features.astype('float32') - np.save(output_file, features) - else: - print('Video {} error.'.format(input_file)) diff --git a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/how2/s3d.sh b/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/how2/s3d.sh deleted file mode 100644 index 90102c89f..000000000 --- a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/how2/s3d.sh +++ /dev/null @@ -1,8 +0,0 @@ -#!/bin/bash - - -python scripts/video_feature_extractor/extract.py \ - --vdir \ - --fdir data/feat/feat_how2_s3d \ - --type=s3d --num_decoding_thread=4 \ - --batch_size 32 --half_precision 1 diff --git a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/model.py b/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/model.py deleted file mode 100644 index ac266e844..000000000 --- a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/model.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Howto100M authors and Facebook, Inc. All Rights Reserved - -import torch as th - -from torch import nn - - -class GlobalAvgPool(nn.Module): - def __init__(self): - super(GlobalAvgPool, self).__init__() - - def forward(self, x): - return th.mean(x, dim=[-2, -1]) - - -def get_model(args): - assert args.type in ['2d', '3d', 'vmz', 's3d', 'vae'] - if args.type == '2d': - print('Loading 2D-ResNet-152 ...') - import torchvision.models as models - model = models.resnet152(pretrained=True) - model = nn.Sequential(*list(model.children())[:-2], GlobalAvgPool()) - model = model.cuda() - elif args.type == 'vmz': - print('Loading VMZ ...') - from vmz34 import r2plus1d_34 - model = r2plus1d_34(pretrained_path=args.vmz_model_path, pretrained_num_classes=487) - model = model.cuda() - elif args.type == 's3d': - # we use one copy of s3d instead of dup another one for feature extraction. - from mmpt.processors.models.s3dg import S3D - model = S3D('pretrained_models/s3d_dict.npy', 512) - model.load_state_dict(th.load('pretrained_models/s3d_howto100m.pth')) - model = model.cuda() - - elif args.type == '3d': - print('Loading 3D-ResneXt-101 ...') - from videocnn.models import resnext - model = resnext.resnet101( - num_classes=400, - shortcut_type='B', - cardinality=32, - sample_size=112, - sample_duration=16, - last_fc=False) - model = model.cuda() - model_data = th.load(args.resnext101_model_path) - model.load_state_dict(model_data) - elif args.type == 'vae': - from openaivae import OpenAIParallelDiscreteVAE - model = OpenAIParallelDiscreteVAE() - model = model.cuda() - else: - raise ValueError("model not supported yet.") - - model.eval() - print('loaded') - return model diff --git a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/pathbuilder.py b/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/pathbuilder.py deleted file mode 100644 index 2392d6d63..000000000 --- a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/pathbuilder.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) Facebook, Inc. 
and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import os -import urllib.parse -import json -import pandas as pd - -from tqdm import tqdm - - -# TODO: extend to other datasets. -supported_formats = {} - - -class PathBuilder(object): - @classmethod - def build(cls, video_dirs, feature_dir, ext, shards=0, split=None): - meta_fn = os.path.join(feature_dir, "meta_plan.json") - os.makedirs(feature_dir, exist_ok=True) - if os.path.isfile(meta_fn): - with open(meta_fn) as fr: - meta = json.load(fr) - return meta - print("searching videos...") - - video_id_to_path = {} - for video_dir in video_dirs.split(","): - # TODO: add support for recursive listdir. - if video_dir in supported_formats: - supported_formats[video_dir].load(video_dir, video_id_to_path) - else: - for idx, fn in enumerate(tqdm(os.listdir(video_dir))): - video_fn = os.path.join(video_dir, fn) - if os.path.isfile(video_fn): - video_id = os.path.splitext(fn)[0] - video_id_to_path[video_id] = video_fn - elif os.path.isdir(video_fn): - # shards of folders. - shard_dir = video_fn - for idx, fn in enumerate(os.listdir(shard_dir)): - video_fn = os.path.join(shard_dir, fn) - if os.path.isfile(video_fn): - video_id = os.path.splitext(fn)[0] - video_id_to_path[video_id] = video_fn - - video_path, feature_path = [], [] - valid_ext = set() - for idx, video_id in enumerate(video_id_to_path): - video_path.append(video_id_to_path[video_id]) - if ext is None: - # use original file ext for format compatibility. - path = urllib.parse.urlparse(video_id_to_path[video_id]).path - ext = os.path.splitext(path)[1] - if ext not in valid_ext: - valid_ext.add(ext) - print("adding", ext) - if shards: - shard_id = str(idx % shards) - feature_fn = os.path.join( - feature_dir, shard_id, video_id + ext) - else: - feature_fn = os.path.join( - feature_dir, video_id + ext) - feature_path.append(feature_fn) - - print("targeting", len(feature_path), "videos") - meta = { - "video_path": video_path, "feature_path": feature_path} - with open(meta_fn, "w") as fw: - json.dump(meta, fw) - - if split is not None: - splits = split.split("/") - assert len(splits) == 2 - cur, total = int(splits[0]), int(splits[1]) - assert cur < total - import math - chunk = math.ceil(len(meta["video_path"]) / total) - start = cur * chunk - end = (cur + 1) * chunk - meta = { - "video_path": meta["video_path"][start:end], - "feature_path": meta["feature_path"][start:end] - } - - return meta diff --git a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/preprocessing.py b/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/preprocessing.py deleted file mode 100644 index fa0cec3a7..000000000 --- a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/preprocessing.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright Howto100m authors. -# Copyright (c) Facebook, Inc. 
All Rights Reserved - -import torch as th - -class Normalize(object): - - def __init__(self, mean, std): - self.mean = th.FloatTensor(mean).view(1, 3, 1, 1) - self.std = th.FloatTensor(std).view(1, 3, 1, 1) - - def __call__(self, tensor): - tensor = (tensor - self.mean) / (self.std + 1e-8) - return tensor - -class Preprocessing(object): - - def __init__(self, type): - self.type = type - if type == '2d': - self.norm = Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) - elif type == '3d': - self.norm = Normalize(mean=[110.6, 103.2, 96.3], std=[1.0, 1.0, 1.0]) - elif type == 'vmz': - self.norm = Normalize(mean=[110.201, 100.64, 95.997], std=[58.1489, 56.4701, 55.3324]) - - def _zero_pad(self, tensor, size): - n = size - len(tensor) % size - if n == size: - return tensor - else: - z = th.zeros(n, tensor.shape[1], tensor.shape[2], tensor.shape[3]) - return th.cat((tensor, z), 0) - - def __call__(self, tensor): - if self.type == '2d': - tensor = tensor / 255.0 - tensor = self.norm(tensor) - elif self.type == 'vmz': - #tensor = self._zero_pad(tensor, 8) - tensor = self._zero_pad(tensor, 10) - tensor = self.norm(tensor) - #tensor = tensor.view(-1, 8, 3, 112, 112) - tensor = tensor.view(-1, 10, 3, 112, 112) - tensor = tensor.transpose(1, 2) - elif self.type == '3d': - tensor = self._zero_pad(tensor, 16) - tensor = self.norm(tensor) - tensor = tensor.view(-1, 16, 3, 112, 112) - tensor = tensor.transpose(1, 2) - elif self.type == 's3d': - tensor = tensor / 255.0 - tensor = self._zero_pad(tensor, 30) - tensor = tensor.view(-1, 30, 3, 224, 224) # N x 30 x 3 x H x W - tensor = tensor.transpose(1, 2) # N x 3 x 30 x H x W - # for vae do nothing - return tensor diff --git a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/random_sequence_shuffler.py b/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/random_sequence_shuffler.py deleted file mode 100644 index 1f3e4acea..000000000 --- a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/random_sequence_shuffler.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) Facebook, Inc. All Rights Reserved - -import numpy as np - -from torch.utils.data.sampler import Sampler - - -class RandomSequenceSampler(Sampler): - - def __init__(self, n_sample, seq_len): - self.n_sample = n_sample - self.seq_len = seq_len - - def _pad_ind(self, ind): - zeros = np.zeros(self.seq_len - self.n_sample % self.seq_len) - ind = np.concatenate((ind, zeros)) - return ind - - def __iter__(self): - idx = np.arange(self.n_sample) - if self.n_sample % self.seq_len != 0: - idx = self._pad_ind(idx) - idx = np.reshape(idx, (-1, self.seq_len)) - np.random.shuffle(idx) - idx = np.reshape(idx, (-1)) - return iter(idx.astype(int)) - - def __len__(self): - return self.n_sample + (self.seq_len - self.n_sample % self.seq_len) diff --git a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/shard_feature.py b/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/shard_feature.py deleted file mode 100644 index f75e1dfae..000000000 --- a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/shard_feature.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
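-# Shard packs per-video .npy features into fixed-size ShardedTensor shards (shard_size videos per shard, 4096 by default) and writes a per-split meta pickle mapping shard ids to video ids.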
-import numpy as np -import os -import pickle - -from mmpt.utils import ShardedTensor - - -class Shard(object): - def __init__( - self, - vfeat_dir, - tfeat_dir, - target_dir, - file_paths, - shard_size=4096 - ): - self.vfeat_dir = vfeat_dir - self.tfeat_dir = tfeat_dir - self.target_dir = target_dir - self.video_ids = {} - for split, file_path in zip(["train", "val"], file_paths): - with open(file_path) as fr: - self.video_ids[split] = [ - line.strip() for line in fr.readlines()] - self.shard_size = shard_size - - def __call__(self, split="train"): - for split in ["train", "val"]: - meta = {} - for shard_idx, shard_offset in enumerate( - range(0, len(self.video_ids[split]), self.shard_size) - ): - print(shard_idx) - meta_shard = [] - video_shard = [] - for video_id in self.video_ids[split][shard_offset:shard_offset+self.shard_size]: - meta_shard.append(video_id) - npy_file = os.path.join(self.vfeat_dir, video_id + ".npy") - video_shard.append(np.load(npy_file)) - - meta[shard_idx] = meta_shard - video_shard = ShardedTensor.from_list(video_shard) - target_path = os.path.join( - self.target_dir, split + "_" + str(shard_idx)) - video_shard.save(target_path) - - target_path = os.path.join(self.target_dir, split + "_meta") - with open(target_path + ".pkl", "wb") as fw: - pickle.dump(meta, fw, pickle.HIGHEST_PROTOCOL) - - -if __name__ == "__main__": - shard = Shard( - "data/feat/feat_how2_s3d", - "data/how2/raw_caption_dedup.bert-base-uncased", - "data/feat/feat_how2_s3d_shard_small", - ["data/how2/how2_s3d_train.lst", "data/how2/how2_s3d_val.lst"] - ) - - shard() diff --git a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/videoreader.py b/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/videoreader.py deleted file mode 100644 index 429e05f8b..000000000 --- a/kosmos-g/fairseq/examples/MMPT/scripts/video_feature_extractor/videoreader.py +++ /dev/null @@ -1,242 +0,0 @@ -# Copyright Howto100M authors. -# Copyright (c) Facebook, Inc. 
All Rights Reserved - -import torch as th -import pandas as pd -import os -import numpy as np -import ffmpeg -import random - -from torch.utils.data import Dataset - - -class VideoLoader(Dataset): - """modified from how2's video_feature_extractor.""" - def __init__( - self, - csv=None, - video_dict=None, - framerate=1, - size=112, - centercrop=False, - hflip=False, - **kwargs - ): - if csv is None and video_dict is None: - raise ValueError("csv and video_dict cannot be both None.") - if csv is not None: - self.csv = pd.read_csv(csv) - if video_dict is not None: - self.csv = pd.DataFrame.from_dict(video_dict) - - self.centercrop = centercrop - self.size = size - self.framerate = framerate - self.hflip = hflip - - def __len__(self): - return len(self.csv) - - def _get_video_dim(self, video_path): - probe = ffmpeg.probe(video_path) - video_stream = next((stream for stream in probe['streams'] - if stream['codec_type'] == 'video'), None) - width = int(video_stream['width']) - height = int(video_stream['height']) - return height, width - - def _get_video_info(self, video_path): - probe = ffmpeg.probe(video_path) - video_stream = next((stream for stream in probe['streams'] - if stream['codec_type'] == 'video'), None) - return video_stream - - def _get_output_dim(self, h, w): - if isinstance(self.size, tuple) and len(self.size) == 2: - return self.size - elif h >= w: - return int(h * self.size / w), self.size - else: - return self.size, int(w * self.size / h) - - def __getitem__(self, idx): - video_path = self.csv['video_path'].values[idx] - output_file = self.csv['feature_path'].values[idx] - return self._decode(output_file, video_path) - - def _decode(self, output_file, video_path): - if not(os.path.isfile(output_file)) and os.path.isfile(video_path): - try: - h, w = self._get_video_dim(video_path) - except Exception: - print('ffprobe failed at: {}'.format(video_path)) - return {'video': th.zeros(1), 'input': video_path, - 'output': output_file} - try: - os.makedirs(os.path.dirname(output_file), exist_ok=True) - height, width = self._get_output_dim(h, w) - - cmd = ( - ffmpeg - .input(video_path) - .filter('fps', fps=self.framerate) - .filter('scale', width, height) - ) - if self.hflip: - cmd = cmd.filter('hflip') - - if self.centercrop: - x = int((width - self.size) / 2.0) - y = int((height - self.size) / 2.0) - cmd = cmd.crop(x, y, self.size, self.size) - video = self._run(cmd, output_file) - except Exception: - video = th.zeros(1) - else: - video = th.zeros(1) - - return {'video': video, 'input': video_path, 'output': output_file} - - def _run(self, cmd, output_file): - out, _ = ( - cmd.output('pipe:', format='rawvideo', pix_fmt='rgb24') - .run(capture_stdout=True, quiet=True) - ) - if self.centercrop and isinstance(self.size, int): - height, width = self.size, self.size - video = np.frombuffer(out, np.uint8).reshape([-1, height, width, 3]) - video = th.from_numpy(video.astype('float32')) - return video.permute(0, 3, 1, 2) - - -class VideoVerifier(VideoLoader): - def __getitem__(self, idx): - video_path = self.csv['video_path'].values[idx] - try: - return self._get_video_info(video_path) - except Exception: - # print('ffprobe failed at: {}'.format(video_path)) - return None - - -class VideoCompressor(VideoLoader): - def __init__( - self, - csv=None, - video_dict=None, - framerate=1, - size=112, - centercrop=False, - hflip=False, - crf=32, - **kwargs - ): - super().__init__( - csv, - video_dict, - framerate, - size, - centercrop, - hflip - ) - self.crf = crf - - def _run(self, cmd, 
output_file): - out, _ = ( - cmd.output(filename=output_file, crf=self.crf) - .run(quiet=True) - ) - video = None - return video - - -class VideoDownloader(VideoCompressor): - """download""" - def __getitem__(self, idx): - video_path = self.csv['video_path'].values[idx] - output_file = self.csv['feature_path'].values[idx] - if not(os.path.isfile(output_file)): - os.makedirs(os.path.dirname(output_file), exist_ok=True) - cmd = "wget -O" + output_file + " " + video_path - # import subprocess - # subprocess.check_output( - # cmd, - # stderr=subprocess.STDOUT, shell=True) - os.system(cmd) - return {'video': None, 'input': video_path, 'output': output_file} - - -class AvKeyframeVideoCompressor(VideoLoader): - """extract keyframes from a video and save them as jpgs. - TODO: consider merging with `CodecProcessor`. - """ - def __init__( - self, - csv=None, - video_dict=None, - framerate=1, - size=112, - centercrop=False, - max_num_frames=5, - **kwargs - ): - super().__init__(csv, video_dict, framerate, size, centercrop) - self.max_num_frames = max_num_frames - - def _get_video_dim(self, video_fn): - """decord cannot probe the size of a video; we use pyav instead.""" - import av - with av.open(video_fn) as container: - height = container.streams.video[0].codec_context.height - width = container.streams.video[0].codec_context.width - return height, width - - def _get_output_dim(self, height, width): - """ - keep the shorter side at `self.size`, stretch the other. - """ - if height >= width: - return int(height * self.size / width), self.size - else: - return self.size, int(width * self.size / height) - - def __getitem__(self, idx): - import av - video_path = self.csv['video_path'].values[idx] - output_file = self.csv['feature_path'].values[idx] - if not(os.path.isdir(output_file)) and os.path.isfile(video_path): - try: - h, w = self._get_video_dim(video_path) - except Exception: - print('probe failed at: {}'.format(video_path)) - return {'video': th.zeros(1), 'input': video_path, - 'output': output_file} - - try: - height, width = self._get_output_dim(h, w) - - # new for av. - with av.open(video_path) as container: - container.streams.video[0].thread_type = "AUTO" - container.streams.video[0].codec_context.height = height - container.streams.video[0].codec_context.width = width - if self.framerate == 0: # keyframe. 
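-                        # keep only keyframes; a random sample of max_num_frames of them is saved as jpgs below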
- container.streams.video[0].codec_context.skip_frame = 'NONKEY' - frames = [] - for frame in container.decode(video=0): - frames.append(frame) - frames = random.sample(frames, self.max_num_frames) - - os.makedirs(output_file, exist_ok=True) - for frame in frames: - frame.to_image().save( - os.path.join( - output_file, - "%04d.jpg" % frame.index)) - except Exception: - print('extract failed at: {}'.format(video_path)) - return {'video': th.zeros(1), 'input': video_path, - 'output': output_file} - video = th.zeros(1) - return {'video': video, 'input': video_path, 'output': output_file} diff --git a/kosmos-g/fairseq/examples/MMPT/setup.py b/kosmos-g/fairseq/examples/MMPT/setup.py deleted file mode 100644 index a9a82296e..000000000 --- a/kosmos-g/fairseq/examples/MMPT/setup.py +++ /dev/null @@ -1,24 +0,0 @@ -import setuptools - -with open("README.md", "r") as fh: - long_description = fh.read() - -setuptools.setup( - name="mmpt", - version="0.0.1", - author="Hu Xu, Po-yao Huang", - author_email="huxu@fb.com", - description="A package for multimodal pretraining.", - long_description=long_description, - long_description_content_type="text/markdown", - url="https://github.com/pytorch/fairseq/examples/MMPT", - packages=setuptools.find_packages(), - install_requires=[ - ], - classifiers=[ - "Programming Language :: Python :: 3", - "License :: CC-BY-NC", - "Operating System :: OS Independent", - ], - python_requires='>=3.6', -) diff --git a/kosmos-g/fairseq/examples/MMPT/videoclip.png b/kosmos-g/fairseq/examples/MMPT/videoclip.png deleted file mode 100644 index 50dd0abfe..000000000 Binary files a/kosmos-g/fairseq/examples/MMPT/videoclip.png and /dev/null differ diff --git a/kosmos-g/fairseq/examples/MMPT/vlm.png b/kosmos-g/fairseq/examples/MMPT/vlm.png deleted file mode 100644 index 55c97dbc9..000000000 Binary files a/kosmos-g/fairseq/examples/MMPT/vlm.png and /dev/null differ diff --git a/kosmos-g/fairseq/examples/__init__.py b/kosmos-g/fairseq/examples/__init__.py deleted file mode 100644 index 44bb24ae6..000000000 --- a/kosmos-g/fairseq/examples/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -try: - from fairseq.version import __version__ # noqa -except ImportError: - pass diff --git a/kosmos-g/fairseq/examples/adaptive_span/README.md b/kosmos-g/fairseq/examples/adaptive_span/README.md deleted file mode 100644 index d5224fb28..000000000 --- a/kosmos-g/fairseq/examples/adaptive_span/README.md +++ /dev/null @@ -1,90 +0,0 @@ -# Adaptive Span - -Adaptive Span is a novel self-attention mechanism that can learn its optimal -attention span. This allows us to significantly extend the maximum context size -used in Transformers, while maintaining control over their memory footprint -and computational time. It uses the Truncated BPTT technique for training, -as in [transformerXL](https://github.com/pytorch/fairseq/blob/main/examples/truncated_bptt/README.md). - -Adaptive Span was introduced in the paper -[Adaptive Attention Span in Transformers](https://arxiv.org/abs/1905.07799), -which achieved state-of-the-art language modeling results at the time of publication. - -We reproduce their results in fairseq and keep most of the -[original implementation](https://github.com/facebookresearch/adaptive-span) untouched. -You can also refer to their sweep file if any hyperparameter combination is unclear. - -##### 0. 
Setup - -First you need to process the Enwik8 dataset; we use the pre-tokenized dataset -from the [adaptive span paper](https://github.com/facebookresearch/adaptive-span/blob/master/get_data.sh). -You can download the dataset, and then run: -```bash -fairseq-preprocess --only-source --trainpref ~/data/enwik8/train.txt \ - --validpref ~/data/enwik8/valid.txt --testpref ~/data/enwik8/test.txt \ - --destdir ~/data/enwik8/data-bin/ --joined-dictionary --workers 20 -``` - -##### 1. Train an Adaptive Span model on Enwik8 - -We will train a 12-layer Adaptive Span model following the [hyperparameters -used in the original -paper](https://github.com/facebookresearch/adaptive-span/blob/master/experiments/enwik8.sh). - -The following command assumes 4 GPUs, so that the total batch size is 64 -sequences (4 x 16). Training should take 2-3 days on 4 V100 GPUs: -```bash -CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train \ - --user-dir examples/adaptive_span \ - --data ~/data/enwik8/data-bin/ \ - --fp16 --fp16-no-flatten-grads --max-update 600000 \ - --task truncated_bptt_lm --tokens-per-sample 512 --arch adaptive_span \ - --n-layer 12 --d-model 512 --n-head 8 --d-inner 2048 --dropout 0.3 \ - --attn-span 8192 --optimizer adagrad_with_grad_clip --adagrad-clip 0.03 \ - --validate-interval-updates 1000 \ - --lr-scheduler fixed --warmup-updates 32000 --batch-size-valid 32 \ - --lr 0.07 --criterion adaptive_span_loss --batch-size 16 --update-freq 1 \ - --seed 2 --log-format json --log-interval 25 --aux-loss-scaler 5e-07 -``` -This should land around 1.05 on validation and 1.03 on test. You can lower the ---aux-loss-scaler value for better performance (a longer span). It gives a ~0.03 bpc -improvement over the transformerXL baseline here. -If training on a single GPU, set `--update-freq=4` to accumulate 4x gradients -and simulate training on 4 GPUs. -You can also reproduce the transformerXL result on enwik8 using this code base. -It should land around 1.06 on test, matching the [original paper](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/run_enwik8_base.sh). -You can try it with: -```bash -CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train \ - --user-dir examples/truncated_bptt \ - ~/data/enwik8/data-bin/ \ - --task truncated_bptt_lm --fp16 --max-update 400000 \ - --tokens-per-sample 512 --arch transformer_xl --n-layer 12 \ - --d-model 512 --n-head 8 --d-head 64 --d-inner 2048 --dropout 0.1 \ - --dropatt 0.0 --mem-len 512 --optimizer adam --clip-norm 0.25 \ - --lr-scheduler cosine --warmup-updates 0 \ - --lr 0.00025 --batch-size 15 \ - --update-freq 1 --seed 2 --log-format json --log-interval 25 -``` - -##### 2. Evaluate -For Adaptive Span: -```bash -fairseq-eval-lm ~/data/enwik8/data-bin/ --path model/checkpoint_best.pt \ - --user-dir examples/adaptive_span \ - --task truncated_bptt_lm --batch-size 8 --tokens-per-sample 512 --gen-subset test -``` -For Transformer-XL evaluation: -```bash -fairseq-eval-lm ~/data/enwik8/data-bin/ --path model/checkpoint_best.pt \ - --user-dir examples/truncated_bptt/ --task truncated_bptt_lm --batch-size 8 \ - --tokens-per-sample 80 \ - --model-overrides '{"mem_len":2100,"clamp_len":820,"same_length":True}' \ - --gen-subset valid -``` - -*Note:* During training the model saw 512 tokens of context -(``--tokens-per-sample=512``), with batch size 8. These settings match the evaluation -settings from [the original -paper](https://github.com/facebookresearch/adaptive-span/blob/master/experiments/enwik8.sh). 
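For intuition, here is a minimal sketch (plain PyTorch, standalone, not part of the deleted code) of the soft-masking trick that makes the span length differentiable; it mirrors the `AdaptiveMask` module in `adaptive_span_attention.py` further below. The names `max_size`, `ramp_size`, and `z` are illustrative stand-ins for `attn_span`, `adapt_span_ramp`, and the learned `current_val` parameter:

```python
import torch

# Illustrative sizes: a maximum span of 8 positions with a ramp of length 4.
max_size, ramp_size = 8, 4
# z plays the role of AdaptiveMask.current_val: the learnable fraction of the span to keep.
z = torch.tensor(0.5, requires_grad=True)

# template is [-(max_size - 1), ..., 0]; adding z * max_size shifts the ramp.
template = torch.linspace(1 - max_size, 0, steps=max_size)
mask = ((template + z * max_size) / ramp_size + 1).clamp(0, 1)
# mask is now [0.25, 0.50, 0.75, 1.0, 1.0, 1.0, 1.0, 1.0]:
# the oldest positions are softly zeroed out, recent ones are kept.

# Because the ramp is piecewise linear in z, gradients flow into the span:
attn = torch.rand(max_size)      # stand-in for one row of attention weights
(attn * mask).sum().backward()
print(z.grad)                    # non-zero: the span length is trained end to end
```

The `--aux-loss-scaler` flag above weights an L1 penalty on this span parameter (see `get_loss()` in the deleted module), which pushes each head toward the shortest span the loss can tolerate.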
diff --git a/kosmos-g/fairseq/examples/adaptive_span/__init__.py b/kosmos-g/fairseq/examples/adaptive_span/__init__.py deleted file mode 100644 index e0a142a76..000000000 --- a/kosmos-g/fairseq/examples/adaptive_span/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - -# automatically import any Python files in the current directory -cur_dir = os.path.dirname(__file__) -for file in os.listdir(cur_dir): - path = os.path.join(cur_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - mod_name = file[: file.find(".py")] if file.endswith(".py") else file - module = importlib.import_module(__name__ + "." + mod_name) diff --git a/kosmos-g/fairseq/examples/adaptive_span/adagrad_with_grad_clip.py b/kosmos-g/fairseq/examples/adaptive_span/adagrad_with_grad_clip.py deleted file mode 100644 index 585ce184a..000000000 --- a/kosmos-g/fairseq/examples/adaptive_span/adagrad_with_grad_clip.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from torch.optim import Adagrad - -from fairseq.optim import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("adagrad_with_grad_clip") -class FairseqAdagradWithGradClip(LegacyFairseqOptimizer): - def __init__(self, args, params): - super().__init__(args) - self._optimizer = AdagradWithGradClip(params, **self.optimizer_config) - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - parser.add_argument('--adagrad-clip', default=0.0, type=float, metavar='D', - help='internal grad clip') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. 
- """ - return { - "lr": self.args.lr[0], - "weight_decay": self.args.weight_decay, - "grad_clip": self.args.adagrad_clip, - } - - @property - def supports_flat_params(self): - return False - - -def _clip_grad(clr, grad, group_grad_clip): - if group_grad_clip > 0: - norm = grad.norm(2).item() - if norm > group_grad_clip: - clr *= group_grad_clip / (norm + 1e-10) - return clr - - -class AdagradWithGradClip(Adagrad): - """Adagrad algorithm with custom gradient clipping""" - - def __init__( - self, - params, - lr=1e-2, - lr_decay=0, - weight_decay=0, - initial_accumulator_value=0, - grad_clip=0, - ): - Adagrad.__init__( - self, - params, - lr=lr, - lr_decay=lr_decay, - weight_decay=weight_decay, - initial_accumulator_value=initial_accumulator_value, - ) - self.defaults["grad_clip"] = grad_clip - self.param_groups[0].setdefault("grad_clip", grad_clip) - - def step(self, closure=None): - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - for p in group["params"]: - if p.grad is None: - continue - - grad = p.grad.data - state = self.state[p] - - state["step"] += 1 - - if group["weight_decay"] != 0: - if p.grad.data.is_sparse: - raise RuntimeError( - "weight_decay option is " - "not compatible with sparse " - "gradients" - ) - grad = grad.add(group["weight_decay"], p.data) - - clr = group["lr"] / (1 + (state["step"] - 1) * group["lr_decay"]) - - # clip - clr = _clip_grad(clr=clr, grad=grad, group_grad_clip=group["grad_clip"]) - - if grad.is_sparse: - # the update is non-linear so indices must be unique - grad = grad.coalesce() - grad_indices = grad._indices() - grad_values = grad._values() - size = grad.size() - - def make_sparse(values): - constructor = grad.new - if grad_indices.dim() == 0 or values.dim() == 0: - return constructor().resize_as_(grad) - return constructor(grad_indices, values, size) - - state["sum"].add_(make_sparse(grad_values.pow(2))) - std = state["sum"]._sparse_mask(grad) - std_values = std._values().sqrt_().add_(1e-10) - p.data.add_(-clr, make_sparse(grad_values / std_values)) - else: - state["sum"].addcmul_(1, grad, grad) - std = state["sum"].sqrt().add_(1e-10) - p.data.addcdiv_(-clr, grad, std) - - return loss diff --git a/kosmos-g/fairseq/examples/adaptive_span/adaptive_span_attention.py b/kosmos-g/fairseq/examples/adaptive_span/adaptive_span_attention.py deleted file mode 100644 index 07f757bb8..000000000 --- a/kosmos-g/fairseq/examples/adaptive_span/adaptive_span_attention.py +++ /dev/null @@ -1,160 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class AdaptiveMask(nn.Module): - """Soft masking function for adaptive size. - It masks out the last K values of an input. The masking value - goes from 1 to 0 gradually, so K can be learned with - back-propagation. - Args: - max_size: maximum size (i.e. 
input dimension) - ramp_size: size of the ramp going from 0 to 1 - init_val: initial size proportion not to be masked out - shape: learn multiple sizes independently of each other - """ - - def __init__(self, max_size, ramp_size, init_val=0, shape=(1,)): - nn.Module.__init__(self) - self._max_size = max_size - self._ramp_size = ramp_size - self.current_val = nn.Parameter(torch.zeros(*shape) + init_val) - mask_template = torch.linspace(1 - max_size, 0, steps=max_size) - self.register_buffer("mask_template", mask_template) - - def forward(self, x): - mask = self.mask_template.float() + self.current_val.float() * self._max_size - mask = mask / self._ramp_size + 1 - mask = mask.clamp(0, 1) - if x.size(-1) < self._max_size: - # the input could have been trimmed beforehand to save computation - mask = mask.narrow(-1, self._max_size - x.size(-1), x.size(-1)) - x = (x * mask).type_as(x) - return x - - def get_current_max_size(self, include_ramp=True): - current_size = math.ceil(self.current_val.max().item() * self._max_size) - if include_ramp: - current_size += self._ramp_size - current_size = max(0, min(self._max_size, current_size)) - return current_size - - def get_current_avg_size(self, include_ramp=True): - current_size = math.ceil( - self.current_val.float().mean().item() * self._max_size - ) - if include_ramp: - current_size += self._ramp_size - current_size = max(0, min(self._max_size, current_size)) - return current_size - - def clamp_param(self): - """this needs to be called after each update""" - self.current_val.data.clamp_(0, 1) - - -class AdaptiveSpan(nn.Module): - """Adaptive attention span for Transformer self-attention. - This module learns an attention span length from data for each - self-attention head. - Args: - attn_span: maximum attention span - adapt_span_loss: loss coefficient for the span length - adapt_span_ramp: length of the masking ramp - adapt_span_init: initial size ratio - adapt_span_cache: adapt cache size to reduce memory usage - """ - - def __init__( - self, - attn_span, - adapt_span_ramp, - adapt_span_init, - n_head, - adapt_span_layer, - **kargs - ): - nn.Module.__init__(self) - self._max_span = attn_span - self._n_head = n_head - self._adapt_span_layer = adapt_span_layer - if self._adapt_span_layer: - self._mask = AdaptiveMask( - max_size=self._max_span, - ramp_size=adapt_span_ramp, - init_val=adapt_span_init, - ) - else: - self._mask = AdaptiveMask( - max_size=self._max_span, - ramp_size=adapt_span_ramp, - init_val=adapt_span_init, - shape=(n_head, 1, 1), - ) - - def forward(self, attn, normalize=True): - """mask attention with the right span""" - # batch and head dimensions are merged together, so separate them first - self.clamp_param() - if self._adapt_span_layer: - attn = self._mask(attn) - else: - B = attn.size(0) # batch size - M = attn.size(1) # block size - attn = attn.reshape(B // self._n_head, self._n_head, M, -1) - attn = self._mask(attn) - attn = attn.view(B, M, -1) - return attn - - def get_trim_len(self): - """how much memory can be trimmed to reduce computation""" - L = self._max_span - trim_len = min(L - 1, L - self._mask.get_current_max_size()) - # too fine a granularity might be bad for memory management - trim_len = math.floor(trim_len / 64) * 64 - return trim_len - - def trim_memory(self, query, key, value, key_pe): - """trim out unnecessary memory beforehand to reduce computation""" - trim_len = self.get_trim_len() - cache_size = key.size(1) - query.size(1) - trim_len_cache = trim_len - (self._max_span - cache_size) - if trim_len_cache > 0: - key 
= key[:, trim_len_cache:, :] - value = value[:, trim_len_cache:, :] - elif trim_len_cache < 0: - # cache is too short! this happens when validation resumes - # after a lot of updates. - key = F.pad(key, [0, 0, -trim_len_cache, 0]) - value = F.pad(value, [0, 0, -trim_len_cache, 0]) - if trim_len > 0: - if key_pe is not None: - key_pe = key_pe[:, :, trim_len:] - return key, value, key_pe - - def get_cache_size(self): - """determine how long the cache should be""" - trim_len = self.get_trim_len() - # give a buffer of 64 steps since a span might increase - # in future updates - return min(self._max_span, self._max_span - trim_len + 64) - - def get_loss(self): - """a loss term for regularizing the span length""" - return self._max_span * self._mask.current_val.float().mean() - - def get_current_max_span(self): - return self._mask.get_current_max_size() - - def get_current_avg_span(self): - return self._mask.get_current_avg_size() - - def clamp_param(self): - self._mask.clamp_param() diff --git a/kosmos-g/fairseq/examples/adaptive_span/adaptive_span_loss.py b/kosmos-g/fairseq/examples/adaptive_span/adaptive_span_loss.py deleted file mode 100644 index 056245807..000000000 --- a/kosmos-g/fairseq/examples/adaptive_span/adaptive_span_loss.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from dataclasses import dataclass - -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import register_criterion -from fairseq.criterions.cross_entropy import CrossEntropyCriterion -from fairseq.dataclass import FairseqDataclass -from omegaconf import II - - -@dataclass -class AdaptiveSpanCriterionConfig(FairseqDataclass): - sentence_avg: bool = II("optimization.sentence_avg") - - -@register_criterion("adaptive_span_loss", dataclass=AdaptiveSpanCriterionConfig) -class AdaptiveSpanCriterion(CrossEntropyCriterion): - def __init__(self, task, sentence_avg): - super().__init__(task, sentence_avg) - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss; here it is summed, unlike in the original adaptive span code - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - net_output = model(**sample["net_input"]) - loss, aux_loss, avg_span, max_span = self.compute_loss( - model, net_output, sample, reduce=reduce - ) - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - loss /= sample_size - total_loss = loss + aux_loss - sample_size = 1 - - logging_output = { - "loss": loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - "total_loss": total_loss.data, - "avg_span": avg_span * sample_size, - "max_span": max_span * sample_size, - } - return total_loss, sample_size, logging_output - - def compute_loss(self, model, net_output, sample, reduce=True): - loss, _ = super().compute_loss(model, net_output, sample, reduce) - aux_loss = model.get_aux_loss() - avg_span = model.get_current_avg_span() - max_span = model.get_current_max_span() - return loss, aux_loss, avg_span, max_span - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - total_loss_sum = sum(log.get("total_loss", 0) for log in logging_outputs) - avg_span_sum = sum(log.get("avg_span", 0) for log in logging_outputs) - max_span_sum = sum(log.get("max_span", 0) for log in logging_outputs) - - # we divide by log(2) to convert the loss from base e to base 2 - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - metrics.log_scalar("avg_span", avg_span_sum / sample_size, sample_size, round=3) - metrics.log_scalar("max_span", max_span_sum / sample_size, sample_size, round=3) - # total loss contains the L1 norm on adaptive-span - metrics.log_scalar( - "total_loss", - total_loss_sum / sample_size / math.log(2), - sample_size, - round=3, - ) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - else: - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg) - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improve distributed training speed. - """ - return True diff --git a/kosmos-g/fairseq/examples/adaptive_span/adaptive_span_model.py b/kosmos-g/fairseq/examples/adaptive_span/adaptive_span_model.py deleted file mode 100644 index d96c95b85..000000000 --- a/kosmos-g/fairseq/examples/adaptive_span/adaptive_span_model.py +++ /dev/null @@ -1,263 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
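-# Standalone Transformer LM with per-head adaptive attention spans; kept close to the original facebookresearch/adaptive-span implementation (see the README above).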
- -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from fairseq.modules.layer_norm import LayerNorm - -from .adaptive_span_attention import AdaptiveSpan - -# Size notations: -# B = batch_size, H = d_model, M = block_size, L = attn_span - - -def _skew(X, pad_value): - """shift every row 1 step to right""" - # X = B x M x L - B, M, L = X.size() - X = F.pad(X, (0, M + 1), value=pad_value) # B x M x (L+M+1) - X = X.view(B, -1) # B x ML+MM+M - X = X[:, :-M] # B x ML+MM - X = X.view(B, M, M + L) # B x M x L+M - return X - - -def _unskew(X): - """reverse _skew operation""" - # X = B x M x L+M - B, M, L = X.size() - L -= M - X = X.view(B, -1) # B x ML+MM - X = F.pad(X, (0, M)) # B x ML+MM+M - X = X.view(B, M, M + L + 1) # B x M x L+M+1 - X = X[:, :, :L] # B x M x L - return X - - -class SeqAttention(nn.Module): - """Sequential self-attention layer. - Each token will attend to its previous fixed number of steps. - Note that attention doesn't include the current step itself. - """ - - def __init__(self, d_model, n_head, attn_span, dropout, adapt_span_layer, **kargs): - nn.Module.__init__(self) - self.dropout = nn.Dropout(dropout) - self.d_model = d_model # size of a single head - self.attn_span = attn_span - self.adaptive_span = AdaptiveSpan( - attn_span=attn_span, - n_head=n_head, - adapt_span_layer=adapt_span_layer, - **kargs - ) - - def forward(self, query, key, value, key_pe): - # query size = B x M x H - # key, value sizes = B x (M+L) x H - - key, value, key_pe = self.adaptive_span.trim_memory(query, key, value, key_pe) - - # compute attention from context - # B x M (dest) x (M+L) (src) - attn_cont = torch.matmul(query, key.transpose(-1, -2)) - attn_cont = _unskew(attn_cont) # B x M x L - - # compute the effect of position embedding - attn_pos = torch.matmul(query, key_pe) # B x M x L_pos - attn = attn_cont + attn_pos - - attn = attn / math.sqrt(self.d_model) # B x M X L_pos - - attn = F.softmax(attn.float(), dim=-1).type_as(attn) - - # trim attention lengths according to the learned span - attn = self.adaptive_span(attn) - - attn = self.dropout(attn) # B x M X L_pos - - attn_cont = _skew(attn, 0) # B x M X (L+M) - out = torch.matmul(attn_cont, value) # B x M x H - return out - - def get_cache_size(self): - return self.adaptive_span.get_cache_size() - - -class MultiHeadSeqAttention(nn.Module): - def __init__(self, d_model, n_head, **kargs): - nn.Module.__init__(self) - assert d_model % n_head == 0 - self.n_head = n_head - self.head_dim = d_model // n_head - self.attn = SeqAttention(d_model=self.head_dim, n_head=n_head, **kargs) - self.proj_query = nn.Linear(d_model, d_model, bias=False) - nn.init.xavier_normal_(self.proj_query.weight) - self.proj_out = nn.Linear(d_model, d_model, bias=False) - nn.init.xavier_normal_(self.proj_out.weight) - self.proj_val = nn.Linear(d_model, d_model, bias=False) - nn.init.xavier_normal_(self.proj_val.weight) - self.proj_key = nn.Linear(d_model, d_model, bias=False) - nn.init.xavier_normal_(self.proj_key.weight) - - def head_reshape(self, x): - K = self.n_head - D = self.head_dim - x = x.view(x.size()[:-1] + (K, D)) # B x (M+L) x K x D - x = x.transpose(1, 2).contiguous() # B x K x (M+L) x D - x = x.view(-1, x.size(-2), x.size(-1)) # B_K x (M+L) x D - return x - - def forward(self, query, key, value, key_pe): - B = query.size(0) - K = self.n_head - D = self.head_dim - M = query.size(1) - - query = self.proj_query(query) - query = self.head_reshape(query) - value = self.proj_val(value) - value = self.head_reshape(value) - key 
= self.proj_key(key) - key = self.head_reshape(key) - - out = self.attn(query, key, value, key_pe) # B_K x M x D - out = out.view(B, K, M, D) # B x K x M x D - out = out.transpose(1, 2).contiguous() # B x M x K x D - out = out.view(B, M, -1) # B x M x K_D - out = self.proj_out(out) - return out - - -class FeedForwardLayer(nn.Module): - def __init__(self, d_model, d_inner, dropout, **kargs): - nn.Module.__init__(self) - self.fc1 = nn.Linear(d_model, d_inner) - self.fc2 = nn.Linear(d_inner, d_model) - nn.init.xavier_uniform_(self.fc1.weight) - nn.init.xavier_uniform_(self.fc2.weight) - self.dropout = nn.Dropout(dropout) - - def forward(self, h): - h1 = F.relu(self.fc1(h)) - h1 = self.dropout(h1) - h2 = self.fc2(h1) - return h2 - - -class TransformerSeqLayer(nn.Module): - def __init__(self, d_model, **kargs): - nn.Module.__init__(self) - self.attn = MultiHeadSeqAttention(d_model=d_model, **kargs) - self.norm1 = LayerNorm(d_model) - self.ff = FeedForwardLayer(d_model=d_model, **kargs) - self.norm2 = LayerNorm(d_model) - - def forward(self, h, h_cache, key_pe): - # h = B x M x H - # h_cache = B x L x H - h_all = torch.cat([h_cache, h], dim=1) # B x (M+L) x H - attn_out = self.attn(h, h_all, h_all, key_pe) - h = self.norm1(h + attn_out) # B x M x H - if self.ff is not None: - ff_out = self.ff(h) - out = self.norm2(h + ff_out) # B x M x H - else: - out = h - return out - - def get_cache_size(self): - return self.attn.attn.get_cache_size() - - -class TransformerSeq(nn.Module): - def __init__( - self, - vocab_size, - d_model, - n_head, - n_layer, - attn_span, - emb_dropout, - aux_loss_scaler, - adapt_span_layer, - **kargs - ): - nn.Module.__init__(self) - # token embeddings - self.in_emb = nn.Embedding(vocab_size, d_model) - nn.init.normal_(self.in_emb.weight, mean=0, std=d_model ** -0.5) - self.out_emb = nn.Linear(d_model, vocab_size) - self.aux_loss_scaler = aux_loss_scaler - if emb_dropout > 0: - self.emb_dropout = nn.Dropout(emb_dropout) - else: - self.emb_dropout = None - # position embeddings - self.key_pe = nn.Parameter(torch.randn(1, d_model // n_head, attn_span)) - - self.layers = nn.ModuleList() - self.layers.extend( - TransformerSeqLayer( - d_model=d_model, - n_head=n_head, - attn_span=attn_span, - adapt_span_layer=adapt_span_layer, - **kargs - ) - for _ in range(n_layer) - ) - - def forward(self, x, h_cache, target=None): - # x size = B x M - block_size = x.size(1) - h = self.in_emb(x) # B x M x H - if self.emb_dropout is not None: - h = self.emb_dropout(h) - - h_cache_next = [] - for l, layer in enumerate(self.layers): - cache_size = layer.attn.attn.get_cache_size() - if cache_size > block_size: - h_cache_next_l = torch.cat( - [h_cache[l][:, -cache_size + block_size :, :], h], dim=1 - ).detach() - else: - h_cache_next_l = h[:, -cache_size:, :].detach() - h_cache_next.append(h_cache_next_l) - h = layer(h, h_cache[l], self.key_pe) # B x M x H - - if self.emb_dropout is not None: - h = self.emb_dropout(h) - - out = F.log_softmax(self.out_emb(h).float(), dim=-1).type_as(h) - dummy_loss = None - - return out, h_cache_next, dummy_loss - - def get_aux_loss(self): - loss = 0.0 - for layer in self.layers: - loss += layer.attn.attn.adaptive_span.get_loss() - return self.aux_loss_scaler * loss - - def get_current_max_span(self): - max_span = 0.0 - for layer in self.layers: - max_span = max( - max_span, layer.attn.attn.adaptive_span.get_current_max_span() - ) - return max_span - - def get_current_avg_span(self): - avg_span = 0.0 - for layer in self.layers: - avg_span += 
layer.attn.attn.adaptive_span.get_current_avg_span()
-        return avg_span / len(self.layers)
diff --git a/kosmos-g/fairseq/examples/adaptive_span/adaptive_span_model_wrapper.py b/kosmos-g/fairseq/examples/adaptive_span/adaptive_span_model_wrapper.py
deleted file mode 100644
index 5b147fe11..000000000
--- a/kosmos-g/fairseq/examples/adaptive_span/adaptive_span_model_wrapper.py
+++ /dev/null
@@ -1,145 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from dataclasses import dataclass
-from typing import Dict, List, Optional
-
-import torch
-from fairseq.dataclass import FairseqDataclass
-from fairseq.models import (
-    FairseqIncrementalDecoder,
-    FairseqLanguageModel,
-    register_model,
-)
-from .adaptive_span_model import TransformerSeq as AdaptiveSpanTransformerModel
-
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class AdaptiveSpanSmallConfig(FairseqDataclass):
-    # defaults come from https://github.com/facebookresearch/adaptive-span/blob/master/experiments/enwik8_small.sh
-    vocab_size: int = 50
-    d_model: int = 256
-    n_head: int = 4
-    d_inner: int = 1024
-    n_layer: int = 8
-    attn_span: int = 1024
-    dropout: float = 0.0
-    emb_dropout: float = 0.0
-    adapt_span_ramp: int = 32
-    adapt_span_init: float = 0.0
-    aux_loss_scaler: float = 0.000002
-    adapt_span_layer: bool = False
-
-
-@register_model("adaptive_span", dataclass=AdaptiveSpanSmallConfig)
-class AdaptiveSpanTransformer(FairseqLanguageModel):
-    @classmethod
-    def build_model(cls, cfg: AdaptiveSpanSmallConfig, task):
-        return cls(AdaptiveSpanDecoder(cfg, task))
-
-    def get_aux_loss(self):
-        return self.decoder.get_aux_loss()
-
-    def get_current_max_span(self):
-        return self.decoder.get_current_max_span()
-
-    def get_current_avg_span(self):
-        return self.decoder.get_current_avg_span()
-
-
-class AdaptiveSpanDecoder(FairseqIncrementalDecoder):
-    def __init__(self, cfg, task):
-
-        super().__init__(task.target_dictionary)
-
-        self.config = cfg
-        config = AdaptiveSpanSmallConfig(
-            vocab_size=len(task.target_dictionary),
-            d_model=cfg.d_model,
-            n_head=cfg.n_head,
-            d_inner=cfg.d_inner,
-            n_layer=cfg.n_layer,
-            attn_span=cfg.attn_span,
-            dropout=cfg.dropout,
-            emb_dropout=cfg.emb_dropout,
-            adapt_span_ramp=cfg.adapt_span_ramp,
-            adapt_span_init=cfg.adapt_span_init,
-            aux_loss_scaler=cfg.aux_loss_scaler,
-            adapt_span_layer=cfg.adapt_span_layer,
-        )
-        logger.info(config)
-        self.model = AdaptiveSpanTransformerModel(**config.__dict__)
-
-        self._mems = None
-
-    def forward(
-        self,
-        src_tokens,
-        incremental_state: Optional[Dict[str, List[torch.Tensor]]] = None,
-        encoder_out=None,
-    ):
-        bsz = src_tokens.size(0)
-        if incremental_state is not None:  # used during inference
-            mems = self.get_incremental_state(incremental_state, "mems")
-            src_tokens = src_tokens[:, -1:]  # only keep the most recent token
-        else:
-            mems = self._mems
-
-        if mems is None:
-            # first time init
-            mems = self.init_hid_cache(bsz)
-        output = self.model(x=src_tokens, h_cache=mems)
-        if incremental_state is not None:
-            self.set_incremental_state(incremental_state, "mems", output[1])
-        else:
-            self._mems = output[1]
-        return (output[0],)
-
-    def max_positions(self):
-        return self.config.attn_span
-
-    def init_hid_cache(self, batch_sz):
-        hid = []
-        for layer in self.model.layers:
-            param = next(self.model.parameters())
-            h = torch.zeros(
-                batch_sz,
-                layer.get_cache_size(),
-                self.config.d_model,
-                dtype=param.dtype,
-
device=param.device, - ) - hid.append(h) - return hid - - def get_aux_loss(self): - return self.model.get_aux_loss() - - def get_current_max_span(self): - return self.model.get_current_max_span() - - def get_current_avg_span(self): - return self.model.get_current_avg_span() - - def reorder_incremental_state( - self, - incremental_state: Dict[str, Dict[str, Optional[torch.Tensor]]], - new_order: torch.Tensor, - ): - """Reorder incremental state. - - This will be called when the order of the input has changed from the - previous time step. A typical use case is beam search, where the input - order changes between time steps based on the selection of beams. - """ - raise NotImplementedError("This is required for generation/beam search") - # mems = self.get_incremental_state(incremental_state, "mems") - # if mems is not None: - # new_mems = [mems_i.index_select(1, new_order) for mems_i in mems] - # self.set_incremental_state(incremental_state, "mems", new_mems) diff --git a/kosmos-g/fairseq/examples/adaptive_span/truncated_bptt_lm_task.py b/kosmos-g/fairseq/examples/adaptive_span/truncated_bptt_lm_task.py deleted file mode 100644 index a92da3a29..000000000 --- a/kosmos-g/fairseq/examples/adaptive_span/truncated_bptt_lm_task.py +++ /dev/null @@ -1 +0,0 @@ -../truncated_bptt/truncated_bptt_lm_task.py \ No newline at end of file diff --git a/kosmos-g/fairseq/examples/attention_head_selection/README.md b/kosmos-g/fairseq/examples/attention_head_selection/README.md deleted file mode 100644 index 2434f1fb2..000000000 --- a/kosmos-g/fairseq/examples/attention_head_selection/README.md +++ /dev/null @@ -1,161 +0,0 @@ -# Pay Better Attention to Attention: Head Selection in Multilingual and Multi-Domain Sequence Modeling (Gong et al., 2021) - -[https://arxiv.org/pdf/2106.10840.pdf](https://arxiv.org/pdf/2106.10840.pdf) - -## Introduction - -We present attention head selection strategies in multilingual and multi-domain sequence modeling including text translation, speech recognition and speech translation tasks. - -Below is an example of training multilingual/multi-domain speech recognition models. - -## Data Preparation -Prepare mTEDx data as in [mTEDx example](https://github.com/fairinternal/fairseq-py/blob/0d9c5851e6fac40f9e366b3633ccd615c2901788/examples/speech_to_text/docs/mtedx_example.md) and CoVoST data as in [CoVoST example](https://github.com/fairinternal/fairseq-py/blob/0d9c5851e6fac40f9e366b3633ccd615c2901788/examples/speech_to_text/docs/covost_example.md). Similarly prepare EuroParl data. 
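The subset names used in all of the commands below follow a fixed `<split>_<source-lang>_<target-lang>_<domain>` convention; `_load_samples_from_tsv` in `speech_to_text_dataset_with_domain.py` (further down in this diff) recovers the language and domain IDs from nothing but that name. A minimal sketch, with a made-up split name:

```python
# Sketch of the split naming convention (the split value here is illustrative);
# speech_to_text_dataset_with_domain.py parses the metadata out of it like this:
split = "train_es_es_tedx"  # <split>_<src-lang>_<tgt-lang>_<domain>
_, src_lang, tgt_lang, domain = split.split("_")
assert (src_lang, tgt_lang, domain) == ("es", "es", "tedx")
```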
-
-
-## Training a multilingual ASR model with attention head selection
-
-```bash
-data_dir=
-train_subset="train_ar_ar_tedx,train_de_de_tedx,train_el_el_tedx,train_es_es_tedx,train_fr_fr_tedx,train_it_it_tedx,train_pt_pt_tedx,train_ru_ru_tedx"
-valid_subset="valid_ar_ar_tedx,valid_de_de_tedx,valid_el_el_tedx,valid_es_es_tedx,valid_fr_fr_tedx,valid_it_it_tedx,valid_pt_pt_tedx,valid_ru_ru_tedx"
-strategy=
-
-fairseq-train ${data_dir} \
-  --user-dir examples/attention_head_selection/src \
-  --train-subset "${train_subset}" \
-  --valid-subset "${valid_subset}" \
-  --config-yaml 'config_asr.yaml' \
-  --arch 'head_selection_s2t_transformer_s' \
-  --task 'speech_to_text_head_selection' \
-  --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
-  --lr-scheduler 'inverse_sqrt' --stop-min-lr -1.0 --warmup-updates 10000 \
-  --lr 5e-4 \
-  --clip-norm 10.0 \
-  --seed 1 \
-  --max-epoch 400 \
-  --max-tokens 32000 \
-  --ignore-prefix-size 1 \
-  --dropout 0.3 \
-  --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \
-  --skip-invalid-size-inputs-valid-test \
-  --encoder-attn-head-select \
-  --total-encoder-attention-heads 8 \
-  --decoder-self-attn-head-select \
-  --total-decoder-attention-heads 8 \
-  --attn-head-select-strategy ${strategy} \
-  --task-type lang
-```
-
-## Training a multi-domain ASR model with attention head selection
-
-```bash
-data_dir=
-train_subset="train_es_es_tedx,train_fr_fr_tedx,train_pt_pt_tedx,train_it_it_tedx,train_ru_ru_tedx,train_el_el_tedx,train_ar_ar_tedx,train_de_de_tedx,train_ar_ar_cv,train_de_de_cv,train_es_es_cv,train_fr_fr_cv,train_it_it_cv,train_pt_pt_cv,train_ru_ru_cv,train_de_de_ep,train_es_es_ep,train_fr_fr_ep,train_it_it_ep,train_pt_pt_ep"
-valid_subset="dev_es_es_tedx,dev_fr_fr_tedx,dev_pt_pt_tedx,dev_it_it_tedx,dev_ru_ru_tedx,dev_el_el_tedx,dev_ar_ar_tedx,dev_de_de_tedx,dev_ar_ar_cv,dev_de_de_cv,dev_es_es_cv,dev_fr_fr_cv,dev_it_it_cv,dev_pt_pt_cv,dev_ru_ru_cv,dev_de_de_ep,dev_es_es_ep,dev_fr_fr_ep,dev_it_it_ep,dev_pt_pt_ep"
-strategy=
-
-fairseq-train ${data_dir} \
-  --user-dir examples/attention_head_selection/src \
-  --train-subset "${train_subset}" \
-  --valid-subset "${valid_subset}" \
-  --config-yaml 'config_asr.yaml' \
-  --arch head_selection_s2t_transformer_s \
-  --task speech_to_text_head_selection \
-  --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
-  --lr-scheduler 'inverse_sqrt' --stop-min-lr -1.0 --warmup-updates 10000 \
-  --lr 5e-4 \
-  --clip-norm 10.0 \
-  --seed 1 \
-  --max-epoch 400 \
-  --max-tokens 32000 \
-  --ignore-prefix-size 1 \
-  --dropout 0.3 \
-  --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \
-  --skip-invalid-size-inputs-valid-test \
-  --encoder-attn-head-select \
-  --total-encoder-attention-heads 8 \
-  --decoder-self-attn-head-select \
-  --total-decoder-attention-heads 8 \
-  --attn-head-select-strategy ${strategy} \
-  --task-type domain
-```
-
-## Inference in multilingual setting
-
-```bash
-MODEL_DIR=
-data_dir=
-gen_subset=
-train_subset="train_ar_ar_tedx,train_de_de_tedx,train_el_el_tedx,train_es_es_tedx,train_fr_fr_tedx,train_it_it_tedx,train_pt_pt_tedx,train_ru_ru_tedx"
-last_n=10
-CHECKPOINT_FILENAME="avg_last_${last_n}_checkpoint.pt"
-CHECKPOINT="_avg"
-RESULTS="${MODEL_DIR}/ckpt${CHECKPOINT}"
-if [ ! -d $RESULTS ]; then
-  mkdir -p $RESULTS
-fi;
-
-python scripts/average_checkpoints.py \
-  --inputs ${MODEL_DIR} --num-epoch-checkpoints ${last_n} \
-  --output "${MODEL_DIR}/${CHECKPOINT_FILENAME}"
-
-fairseq-generate ${data_dir} \
-  --user-dir examples/attention_head_selection/src \
-  --arch 'head_selection_s2t_transformer_s' \
-  --task 'speech_to_text_head_selection' \
-  --train-subset ${train_subset} \
-  --gen-subset ${gen_subset} \
-  --path "${MODEL_DIR}/${CHECKPOINT_FILENAME}" \
-  --config-yaml 'config_asr.yaml' \
-  --prefix-size 1 \
-  --max-tokens 40000 --beam 5 \
-  --skip-invalid-size-inputs-valid-test \
-  --results-path ${RESULTS} \
-  --scoring wer --wer-tokenizer 13a \
-  --wer-lowercase --wer-remove-punct --remove-bpe
-```
-
-## Inference in multi-domain setting
-
-```bash
-MODEL_DIR=
-data_dir=
-gen_subset=
-train_subset="train_es_es_tedx,train_fr_fr_tedx,train_pt_pt_tedx,train_it_it_tedx,train_ru_ru_tedx,train_el_el_tedx,train_ar_ar_tedx,train_de_de_tedx,train_ar_ar_cv,train_de_de_cv,train_es_es_cv,train_fr_fr_cv,train_it_it_cv,train_pt_pt_cv,train_ru_ru_cv,train_de_de_ep,train_es_es_ep,train_fr_fr_ep,train_it_it_ep,train_pt_pt_ep"
-last_n=10
-CHECKPOINT_FILENAME="avg_last_${last_n}_checkpoint.pt"
-CHECKPOINT="_avg"
-RESULTS="${MODEL_DIR}/ckpt${CHECKPOINT}"
-if [ ! -d $RESULTS ]; then
-  mkdir -p $RESULTS
-fi;
-
-python scripts/average_checkpoints.py \
-  --inputs ${MODEL_DIR} --num-epoch-checkpoints ${last_n} \
-  --output "${MODEL_DIR}/${CHECKPOINT_FILENAME}"
-
-fairseq-generate ${data_dir} \
-  --user-dir examples/attention_head_selection/src \
-  --arch 'head_selection_s2t_transformer_s' \
-  --task 'speech_to_text_head_selection' \
-  --train-subset ${train_subset} \
-  --gen-subset ${gen_subset} \
-  --path "${MODEL_DIR}/${CHECKPOINT_FILENAME}" \
-  --config-yaml 'config_asr.yaml' \
-  --prefix-size 1 \
-  --max-tokens 40000 --beam 5 \
-  --skip-invalid-size-inputs-valid-test \
-  --results-path ${RESULTS} \
-  --scoring wer --wer-tokenizer 13a \
-  --wer-lowercase --wer-remove-punct --remove-bpe
-```
-
-## Citation
-```bibtex
-@article{gong2021pay,
-  title={Pay Better Attention to Attention: Head Selection in Multilingual and Multi-Domain Sequence Modeling},
-  author={Gong, Hongyu and Tang, Yun and Pino, Juan and Li, Xian},
-  journal={arXiv preprint arXiv:2106.10840},
-  year={2021}
-}
-```
diff --git a/kosmos-g/fairseq/examples/attention_head_selection/src/__init__.py b/kosmos-g/fairseq/examples/attention_head_selection/src/__init__.py
deleted file mode 100644
index e69de29bb..000000000
diff --git a/kosmos-g/fairseq/examples/attention_head_selection/src/data/__init__.py b/kosmos-g/fairseq/examples/attention_head_selection/src/data/__init__.py
deleted file mode 100644
index e69de29bb..000000000
diff --git a/kosmos-g/fairseq/examples/attention_head_selection/src/data/speech_to_text_dataset_with_domain.py b/kosmos-g/fairseq/examples/attention_head_selection/src/data/speech_to_text_dataset_with_domain.py
deleted file mode 100644
index 1f1823a7a..000000000
--- a/kosmos-g/fairseq/examples/attention_head_selection/src/data/speech_to_text_dataset_with_domain.py
+++ /dev/null
@@ -1,242 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
- -import logging -from pathlib import Path -from typing import Dict, List, Optional -from dataclasses import dataclass - -import torch -from fairseq.data import ( - ConcatDataset, - Dictionary, - FairseqDataset, - ResamplingDataset -) -from fairseq.data.audio.data_cfg import S2TDataConfig -from fairseq.data.audio.speech_to_text_dataset import ( - SpeechToTextDatasetItem, - SpeechToTextDataset, - SpeechToTextDatasetCreator -) - -logger = logging.getLogger(__name__) - - -@dataclass -class SpeechToTextDatasetItemWithDomain(SpeechToTextDatasetItem): - src_lang_id: Optional[torch.Tensor] = None - tgt_lang_id: Optional[torch.Tensor] = None - domain_id: Optional[torch.Tensor] = None - - -class SpeechToTextDatasetWithDomain(SpeechToTextDataset): - - def __init__( - self, - split: str, - is_train_split: bool, - cfg: S2TDataConfig, - audio_paths: List[str], - n_frames: List[int], - src_texts: Optional[List[str]] = None, - tgt_texts: Optional[List[str]] = None, - speakers: Optional[List[str]] = None, - src_langs: Optional[List[str]] = None, - tgt_langs: Optional[List[str]] = None, - ids: Optional[List[str]] = None, - tgt_dict: Optional[Dictionary] = None, - pre_tokenizer=None, - bpe_tokenizer=None, - n_frames_per_step=1, - speaker_to_id=None, - src_lang_ids: Optional[List[int]] = None, - tgt_lang_ids: Optional[List[int]] = None, - domain_ids: Optional[List[int]] = None - ): - super().__init__( - split, is_train_split, cfg, audio_paths, n_frames, - src_texts, tgt_texts, speakers, src_langs, tgt_langs, - ids, tgt_dict, pre_tokenizer, bpe_tokenizer, - n_frames_per_step, speaker_to_id - ) - assert src_lang_ids is None or len(src_lang_ids) == self.n_samples - assert tgt_lang_ids is None or len(tgt_lang_ids) == self.n_samples - assert domain_ids is None or len(domain_ids) == self.n_samples - - self.src_lang_ids = src_lang_ids - self.tgt_lang_ids = tgt_lang_ids - self.domain_ids = domain_ids - - def __getitem__(self, index: int) -> SpeechToTextDatasetItemWithDomain: - item = super().__getitem__(index) - src_lang_id = self.src_lang_ids[index] - tgt_lang_id = self.tgt_lang_ids[index] - domain_id = self.domain_ids[index] - return SpeechToTextDatasetItemWithDomain( - index=item.index, source=item.source, - target=item.target, speaker_id=item.speaker_id, - src_lang_id=src_lang_id, - tgt_lang_id=tgt_lang_id, - domain_id=domain_id - ) - - def collater( - self, samples: List[SpeechToTextDatasetItem], return_order: bool = False - ) -> Dict: - if len(samples) == 0: - return {} - out = super().collater(samples, return_order=True) - order = out["order"] - src_lang_ids = torch.tensor([x.src_lang_id for x in samples], dtype=torch.long).index_select(0, order) - tgt_lang_ids = torch.tensor([x.tgt_lang_id for x in samples], dtype=torch.long).index_select(0, order) - domain_ids = torch.tensor([x.domain_id for x in samples], dtype=torch.long).index_select(0, order) - - out["src_lang_ids"] = src_lang_ids - out["tgt_lang_ids"] = tgt_lang_ids - out["domain_ids"] = domain_ids - if not return_order: - del out["order"] - return out - - -class SpeechToTextDatasetCreatorWithDomain(SpeechToTextDatasetCreator): - KEY_SRC_LANG_ID, KEY_TGT_LANG_ID = "src_lang_id", "tgt_lang_id" - KEY_DOMAIN_ID = "domain_id" - # default values - DEFAULT_SRC_LANG_ID, DEFAULT_TGT_LANG_ID, DEFAULT_DOMAIN_ID = 0, 0, 0 - - @classmethod - def _from_list( - cls, - split_name: str, - is_train_split, - samples: List[Dict], - cfg: S2TDataConfig, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id - ) -> SpeechToTextDatasetWithDomain: 
- audio_root = Path(cfg.audio_root) - ids = [s[cls.KEY_ID] for s in samples] - audio_paths = [(audio_root / s[cls.KEY_AUDIO]).as_posix() for s in samples] - n_frames = [int(s[cls.KEY_N_FRAMES]) for s in samples] - tgt_texts = [s[cls.KEY_TGT_TEXT] for s in samples] - src_texts = [s.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for s in samples] - speakers = [s.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for s in samples] - src_langs = [s.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for s in samples] - tgt_langs = [s.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for s in samples] - src_lang_ids = [s.get(cls.KEY_SRC_LANG_ID, cls.DEFAULT_SRC_LANG_ID) for s in samples] - tgt_lang_ids = [s.get(cls.KEY_TGT_LANG_ID, cls.DEFAULT_TGT_LANG_ID) for s in samples] - domain_ids = [s.get(cls.KEY_DOMAIN_ID, cls.DEFAULT_DOMAIN_ID) for s in samples] - return SpeechToTextDatasetWithDomain( - split_name, - is_train_split, - cfg, - audio_paths, - n_frames, - src_texts=src_texts, - tgt_texts=tgt_texts, - speakers=speakers, - src_langs=src_langs, - tgt_langs=tgt_langs, - ids=ids, - tgt_dict=tgt_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - n_frames_per_step=n_frames_per_step, - speaker_to_id=speaker_to_id, - src_lang_ids=src_lang_ids, - tgt_lang_ids=tgt_lang_ids, - domain_ids=domain_ids - ) - - @classmethod - def _load_samples_from_tsv( - cls, - root: str, - split: str, - src_lang_map, - tgt_lang_map, - domain_map - ): - # metadata from split - _, src_lang, tgt_lang, domain = split.split("_") - src_lang_id = src_lang_map[src_lang] - tgt_lang_id = tgt_lang_map[tgt_lang] - domain_id = domain_map[domain] - - samples = SpeechToTextDatasetCreator._load_samples_from_tsv(root, split) - for s in samples: - s.update({ - cls.KEY_SRC_LANG_ID: src_lang_id, - cls.KEY_TGT_LANG_ID: tgt_lang_id, - cls.KEY_DOMAIN_ID: domain_id - }) - return samples - - @classmethod - def _from_tsv( - cls, - root: str, - cfg: S2TDataConfig, - split: str, - tgt_dict, - is_train_split: bool, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id, - src_lang_map: Dict[str, int], - tgt_lang_map: Dict[str, int], - domain_map: Dict[str, int] - ) -> SpeechToTextDatasetItemWithDomain: - samples = cls._load_samples_from_tsv( - root, split, src_lang_map, - tgt_lang_map, domain_map - ) - return cls._from_list( - split, is_train_split, samples, cfg, tgt_dict, pre_tokenizer, - bpe_tokenizer, n_frames_per_step, speaker_to_id - ) - - @classmethod - def from_tsv( - cls, - root: str, - cfg: S2TDataConfig, - splits: str, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - is_train_split: bool, - epoch: int, - seed: int, - src_lang_map: Dict[str, int], - tgt_lang_map: Dict[str, int], - domain_map: Dict[str, int], - n_frames_per_step: int = 1, - speaker_to_id=None - ) -> SpeechToTextDatasetWithDomain: - datasets = [ - cls._from_tsv( - root, cfg, split, tgt_dict, is_train_split, pre_tokenizer, bpe_tokenizer, n_frames_per_step, speaker_to_id, src_lang_map, tgt_lang_map, domain_map - ) - for split in splits.split(",") - ] - - if is_train_split and len(datasets) > 1 and cfg.sampling_alpha != 1.0: - # temperature-based sampling - size_ratios = cls.get_size_ratios(datasets, alpha=cfg.sampling_alpha) - datasets = [ - ResamplingDataset( - d, size_ratio=r, seed=seed, epoch=epoch, replace=(r >= 1.0) - ) - for r, d in zip(size_ratios, datasets) - ] - - return ConcatDataset(datasets) if len(datasets) > 1 else datasets[0] diff --git a/kosmos-g/fairseq/examples/attention_head_selection/src/loss/__init__.py 
b/kosmos-g/fairseq/examples/attention_head_selection/src/loss/__init__.py
deleted file mode 100644
index e69de29bb..000000000
diff --git a/kosmos-g/fairseq/examples/attention_head_selection/src/loss/attention_head_selection.py b/kosmos-g/fairseq/examples/attention_head_selection/src/loss/attention_head_selection.py
deleted file mode 100644
index 4ba33954d..000000000
--- a/kosmos-g/fairseq/examples/attention_head_selection/src/loss/attention_head_selection.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch
-from torch.nn.modules.loss import _Loss
-
-
-class HeadSelectionLoss(_Loss):
-
-    def __init__(self, args):
-        super().__init__()
-        self.args = args
-        self.kl_weight = getattr(args, "kl_weight", 0.0)
-
-    def forward(self, head_samples, sample_sizes, prior=0.5, eps=1e-7):
-        """
-        head_samples: (num_tasks, num_layers, num_heads)
-        sample_sizes: (num_tasks, )
-        """
-        kl_loss = (head_samples * (torch.log(head_samples + eps) - math.log(prior))).sum(-1).sum(-1)
-        kl_loss /= (torch.numel(head_samples) / head_samples.size(0))
-        kl_loss = self.kl_weight * torch.matmul(kl_loss, sample_sizes)
-        return kl_loss
diff --git a/kosmos-g/fairseq/examples/attention_head_selection/src/models/__init__.py b/kosmos-g/fairseq/examples/attention_head_selection/src/models/__init__.py
deleted file mode 100644
index e69de29bb..000000000
diff --git a/kosmos-g/fairseq/examples/attention_head_selection/src/models/head_selection_s2t_transformer.py b/kosmos-g/fairseq/examples/attention_head_selection/src/models/head_selection_s2t_transformer.py
deleted file mode 100644
index 2c7ed89e8..000000000
--- a/kosmos-g/fairseq/examples/attention_head_selection/src/models/head_selection_s2t_transformer.py
+++ /dev/null
@@ -1,170 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
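A minimal sketch of driving the `HeadSelectionLoss` defined above, assuming the class is importable from the file just shown; the tensor sizes, `kl_weight`, and `sample_sizes` values are illustrative assumptions, not repo defaults (the loss reads only `kl_weight` from its `args`):

```python
# Illustrative only: sizes, kl_weight, and sample_sizes are assumptions.
from argparse import Namespace
import torch

# Gumbel-sigmoid samples in (0, 1), shape (num_tasks, num_layers, num_heads),
# as produced by AttnHeadSelector.head_select further down in this diff.
head_samples = torch.rand(2, 6, 8)
# Per-task sample counts in the current batch, shape (num_tasks,).
sample_sizes = torch.tensor([120.0, 40.0])

criterion = HeadSelectionLoss(Namespace(kl_weight=0.01))
loss = criterion(head_samples, sample_sizes, prior=0.5)  # scalar KL penalty
```

The result is a scalar regularizer that pulls per-head usage probabilities toward the `prior`, weighted by how many samples each task contributed.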
- -import logging -from typing import Dict, List, Optional -from pathlib import Path -import torch.nn as nn -from torch import Tensor -from fairseq import checkpoint_utils - -from fairseq.models import register_model, register_model_architecture -from fairseq.utils import safe_hasattr -from fairseq.models.speech_to_text.s2t_transformer import ( - S2TTransformerModel, - S2TTransformerEncoder, - TransformerDecoderScriptable -) -from fairseq.models.speech_to_text.s2t_transformer import base_architecture as s2t_base_architecture - -from ..modules.attn_head_selector import AttnHeadSelector -from ..modules.head_selection_transformer_layer import HeadSelectionTransformerEncoderLayer -from .head_selection_transformer import HeadSelectionTransformerDecoder - - -logger = logging.getLogger(__name__) - - -@register_model("head_selection_s2t_transformer") -class HeadSelectionS2TTransformerModel(S2TTransformerModel): - """ - Head selection implemented in S2TTransformer - """ - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - S2TTransformerModel.add_args(parser) - # encoder head selection - parser.add_argument( - "--encoder-attn-head-select", - action="store_true", - default=False, - help="encoder head selection" - ) - parser.add_argument( - "--total-encoder-attention-heads", - type=int, - help="total number of encoder attention heads" - ) - # decoder self attention selection - parser.add_argument( - "--decoder-self-attn-head-select", - action="store_true", - default=False, - help="decoder self-attention head selection" - ) - # decoder-encoder attention selection - parser.add_argument( - "--dec-enc-attn-head-select", - action="store_true", - default=False, - help="decoder-encoder attention head selection" - ) - parser.add_argument( - "--total-decoder-attention-heads", - type=int, - help="total number of decoder attention heads" - ) - # selection strategy - parser.add_argument( - "--attn-head-select-strategy", - type=str, - help="attention head selection strategy, subset or group" - ) - - @classmethod - def build_encoder(cls, args): - if safe_hasattr(args, "encoder_attn_head_select") and args.encoder_attn_head_select: - encoder = HeadSelectionS2TTransformerEncoder(args) - else: - encoder = S2TTransformerEncoder(args) - pretraining_path = getattr(args, "load_pretrained_encoder_from", None) - if pretraining_path is not None: - if not Path(pretraining_path).exists(): - logger.warning( - f"skipped pretraining because {pretraining_path} does not exist" - ) - else: - encoder = checkpoint_utils.load_pretrained_component_from_model( - component=encoder, checkpoint=pretraining_path - ) - logger.info(f"loaded pretrained encoder from: {pretraining_path}") - return encoder - - @classmethod - def build_decoder(cls, args, task, embed_tokens): - if (safe_hasattr(args, "decoder_self_attn_head_select") and args.decoder_self_attn_head_select) or (safe_hasattr(args, "dec_enc_attn_head_select") and args.dec_enc_attn_head_select): - return HeadSelectionTransformerDecoderScriptable(args, task.target_dictionary, embed_tokens) - else: - return TransformerDecoderScriptable(args, task.target_dictionary, embed_tokens) - - -class HeadSelectionS2TTransformerEncoder(S2TTransformerEncoder): - - def __init__(self, args): - super().__init__(args) - self.attn_head_selector = AttnHeadSelector( - args.encoder_tasks, - args.encoder_layers, - args.total_encoder_attention_heads, - args.encoder_attention_heads, - args.attn_head_select_strategy, - ) - self.task_ids = None - 
self.transformer_layers = nn.ModuleList([ - HeadSelectionTransformerEncoderLayer(args, layer_idx, attn_head_selector=self.attn_head_selector) for layer_idx in range(args.encoder_layers) - ]) - - def set_task_ids(self, task_ids): - self.task_ids = task_ids - - def _forward(self, src_tokens, src_lengths, return_all_hiddens=False): - self.attn_head_selector.head_select(self.task_ids) - return super()._forward(src_tokens, src_lengths, return_all_hiddens) - - -class HeadSelectionTransformerDecoderScriptable(HeadSelectionTransformerDecoder): - def extract_features( - self, - prev_output_tokens, - encoder_out: Optional[Dict[str, List[Tensor]]] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - full_context_alignment: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - ): - # call scriptable method from parent class - x, _ = self.extract_features_scriptable( - prev_output_tokens, - encoder_out, - incremental_state, - full_context_alignment, - alignment_layer, - alignment_heads, - ) - return x, None - - -@register_model_architecture(model_name="head_selection_s2t_transformer", arch_name="head_selection_s2t_transformer") -def base_architecture(args): - s2t_base_architecture(args) - args.encoder_attn_head_select = getattr(args, "encoder_attn_head_select", False) - args.decoder_self_attn_head_select = getattr(args, "decoder_self_attn_head_select", False) - args.dec_enc_attn_head_select = getattr(args, "dec_enc_attn_head_select", False) - args.total_encoder_attention_heads = getattr(args, "total_encoder_attention_heads", 8) - args.total_decoder_attention_heads = getattr(args, "total_decoder_attention_heads", 8) - args.attn_head_select_strategy = getattr(args, "attn_head_select_strategy", "group") - - -@register_model_architecture("head_selection_s2t_transformer", "head_selection_s2t_transformer_s") -def head_selection_s2t_transformer_s(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 256 * 8) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.dropout = getattr(args, "dropout", 0.1) - base_architecture(args) diff --git a/kosmos-g/fairseq/examples/attention_head_selection/src/models/head_selection_transformer.py b/kosmos-g/fairseq/examples/attention_head_selection/src/models/head_selection_transformer.py deleted file mode 100644 index b9d595699..000000000 --- a/kosmos-g/fairseq/examples/attention_head_selection/src/models/head_selection_transformer.py +++ /dev/null @@ -1,215 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
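Task identity reaches the selector out of band rather than through `forward()`: the caller stores it with `set_task_ids`, and `_forward` re-samples head assignments before running the stock encoder. A hedged calling-pattern sketch for the `HeadSelectionS2TTransformerEncoder` above (the encoder, features, and IDs are assumed to exist):

```python
# Calling-pattern sketch; `encoder`, `src_tokens` (fbank features) and
# `src_lengths` are assumed to have been built elsewhere.
import torch

task_ids = torch.tensor([0, 0, 1])  # one language/domain index per utterance
encoder.set_task_ids(task_ids)      # stored on the module, not passed to forward()
# _forward() first calls attn_head_selector.head_select(task_ids), then runs
# the inherited S2TTransformerEncoder forward pass with the sampled heads.
enc_out = encoder(src_tokens, src_lengths)
```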
- -from typing import Any, List, Dict, Optional -import torch -import torch.nn as nn -from torch import Tensor - -from fairseq.utils import safe_hasattr -from fairseq.models.transformer import ( - TransformerModel, - TransformerEncoder, - TransformerDecoder -) - -from ..modules.attn_head_selector import AttnHeadSelector -from ..modules.head_selection_transformer_layer import ( - HeadSelectionTransformerEncoderLayer, - HeadSelectionTransformerDecoderLayer -) - - -class HeadSelectionTransformerModel(TransformerModel): - def __init__(self, args, encoder, decoder): - super().__init__(args, encoder, decoder) - - @staticmethod - def add_args(parser): - TransformerModel.add_args(parser) - # encoder head selection - parser.add_argument( - "--encoder-attn-head-select", - action="store_true", - default=False, - help="encoder head selection" - ) - parser.add_argument( - "--total-encoder-attention-heads", - type=int, - help="total number of encoder attention heads" - ) - # decoder self attention - parser.add_argument( - "--decoder-self-attn-head-select", - action="store_true", - default=False, - help="decoder self-attention head selection" - ) - # decoder-encoder attention - parser.add_argument( - "--dec-enc-attn-head-select", - action="store_true", - default=False, - help="decoder-encoder attention head selection" - ) - parser.add_argument( - "--total-decoder-attention-heads", - type=int, - help="total number of decoder attention heads" - ) - # selection strategy - parser.add_argument( - "--attn-head-select-strategy", - type=str, - help="attention head selection strategy, subset or group" - ) - - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - if safe_hasattr(args, "encoder_attn_head_select") and args.encoder_attn_head_select: - return HeadSelectionTransformerEncoder( - args, src_dict, embed_tokens - ) - else: - return TransformerEncoder(args, src_dict, embed_tokens) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - if (safe_hasattr(args, "decoder_self_attn_head_select") and args.decoder_self_attn_head_select) or (safe_hasattr(args, "dec_enc_attn_head_select") and args.dec_enc_attn_head_select): - return HeadSelectionTransformerDecoder( - args, tgt_dict, embed_tokens - ) - else: - return TransformerDecoder(args, tgt_dict, embed_tokens) - - -class HeadSelectionTransformerEncoder(TransformerEncoder): - - def __init__(self, args, dictionary, embed_tokens): - self.num_tasks = args.encoder_tasks - self.num_layers = args.encoder_layers - self.total_num_heads = args.total_encoder_attention_heads - self.num_heads = args.encoder_attention_heads - self.select_strategy = args.attn_head_select_strategy - - super().__init__(args, dictionary, embed_tokens) - self.attn_head_selector = AttnHeadSelector( - self.num_tasks, - self.num_layers, - self.total_num_heads, - self.num_heads, - self.select_strategy - ) - self.task_ids = None - self.layers = nn.ModuleList( - [self.build_encoder_layer(args, i) for i in range(args.encoder_layers)] - ) - - def set_task_ids(self, task_ids): - self.task_ids = task_ids - - def build_encoder_layer(self, args, layer_idx=None): - return HeadSelectionTransformerEncoderLayer( - args, - layer_idx, - attn_head_selector=self.attn_head_selector - ) - - def forward( - self, - src_tokens, - src_lengths: Optional[torch.Tensor] = None, - return_all_hiddens: bool = False, - token_embeddings: Optional[torch.Tensor] = None, - ): - self.attn_head_selector.head_select(self.task_ids) - return super().forward(src_tokens, src_lengths, return_all_hiddens, 
token_embeddings) - - -class HeadSelectionTransformerDecoder(TransformerDecoder): - - def __init__( - self, - args, - dictionary, - embed_tokens, - no_encoder_attn=False, - output_projection=None, - ): - self.num_tasks = args.decoder_tasks - self.num_layers = args.decoder_layers - self.total_num_heads = args.total_decoder_attention_heads - self.num_heads = args.decoder_attention_heads - self.select_strategy = args.attn_head_select_strategy - super().__init__( - args, dictionary, embed_tokens, - no_encoder_attn=no_encoder_attn, - output_projection=output_projection - ) - self.self_attn_head_selector = None - self.enc_attn_head_selector = None - if safe_hasattr(args, "decoder_self_attn_head_select") and args.decoder_self_attn_head_select: - self.self_attn_head_selector = AttnHeadSelector( - self.num_tasks, - self.num_layers, - self.total_num_heads, - self.num_heads, - self.select_strategy - ) - if safe_hasattr(args, "dec_enc_attn_head_select") and args.dec_enc_attn_head_select: - self.enc_attn_head_selector = AttnHeadSelector( - self.num_tasks, - self.num_layers, - self.total_num_heads, - self.num_heads, - self.select_strategy - ) - self.task_ids = None - self.layers = nn.ModuleList( - [ - self.build_head_selection_decoder_layer(args, no_encoder_attn, idx) for idx in range(args.decoder_layers) - ] - ) - - def set_task_ids(self, task_ids): - self.task_ids = task_ids - - def build_head_selection_decoder_layer(self, args, no_encoder_attn=False, layer_idx=None): - return HeadSelectionTransformerDecoderLayer( - args, - layer_idx, - self.self_attn_head_selector, - self.enc_attn_head_selector, - no_encoder_attn=no_encoder_attn - ) - - def forward( - self, - prev_output_tokens, - encoder_out: Optional[Dict[str, List[Tensor]]] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - features_only: bool = False, - full_context_alignment: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - src_lengths: Optional[Any] = None, - return_all_hiddens: bool = False, - ): - if self.self_attn_head_selector is not None: - self.self_attn_head_selector.head_select(self.task_ids) - if self.enc_attn_head_selector is not None: - self.enc_attn_head_selector.head_select(self.task_ids) - return super().forward( - prev_output_tokens=prev_output_tokens, - encoder_out=encoder_out, - incremental_state=incremental_state, - features_only=features_only, - full_context_alignment=full_context_alignment, - alignment_layer=alignment_layer, - alignment_heads=alignment_heads, - src_lengths=src_lengths, - return_all_hiddens=return_all_hiddens - ) diff --git a/kosmos-g/fairseq/examples/attention_head_selection/src/modules/__init__.py b/kosmos-g/fairseq/examples/attention_head_selection/src/modules/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/kosmos-g/fairseq/examples/attention_head_selection/src/modules/attn_head_selector.py b/kosmos-g/fairseq/examples/attention_head_selection/src/modules/attn_head_selector.py deleted file mode 100644 index 346fc6230..000000000 --- a/kosmos-g/fairseq/examples/attention_head_selection/src/modules/attn_head_selector.py +++ /dev/null @@ -1,81 +0,0 @@ -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
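The decoder above mirrors the encoder pattern, except that self-attention and encoder-decoder attention can each carry their own `AttnHeadSelector`, and both are re-sampled at the top of `forward`. A sketch of one decoding call, with all inputs assumed:

```python
# Assumed-inputs sketch of HeadSelectionTransformerDecoder above.
decoder.set_task_ids(task_ids)  # same (bsz,) task indices as on the encoder side
# forward() re-samples self_attn_head_selector and enc_attn_head_selector
# (whichever are enabled) before delegating to TransformerDecoder.forward.
logits, extra = decoder(prev_output_tokens, encoder_out=encoder_out)
```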
-
-import torch
-import torch.nn as nn
-import math
-
-
-class AttnHeadSelector(nn.Module):
-    """
-    Latent variable modeling of attention head selection
-    """
-    def __init__(
-        self, num_tasks, num_layers,
-        total_num_heads, num_heads,
-        select_strategy="group",
-        head_select_temp=5.0
-    ):
-        super(AttnHeadSelector, self).__init__()
-        self.num_tasks = num_tasks
-        self.num_layers = num_layers
-        self.total_num_heads = total_num_heads
-        self.num_heads = num_heads
-        self.select_strategy = select_strategy
-        self.temp = head_select_temp
-
-        self.head_logits = torch.nn.Parameter(
-            torch.Tensor(self.num_tasks, self.num_layers, total_num_heads),
-            requires_grad=True
-        )
-        nn.init.uniform_(
-            self.head_logits, a=math.log(0.01),
-            b=math.log(1.0)
-        )
-
-    def gumbel_sample(self, logits, tau=1.0):
-        gumbels1 = -torch.empty_like(logits, memory_format=torch.legacy_contiguous_format).exponential_().log()
-        gumbels2 = -torch.empty_like(logits, memory_format=torch.legacy_contiguous_format).exponential_().log()
-        gumbels1 = (logits + gumbels1 - gumbels2) / tau
-        y_soft = gumbels1.sigmoid()
-        return y_soft
-
-    def subset_select(self, y_soft, topk, dim=-1):
-        top_values, top_inds = torch.topk(y_soft, k=topk, dim=dim)
-        top_ret = 1.0 - top_values.detach() + top_values
-        return top_inds.detach(), top_ret
-
-    def group_select(self, y_soft, topk, dim=-1):
-        # top_values: (num_tasks, num_layers, topk)
-        top_values, top_inds = torch.max(
-            y_soft.view(self.num_tasks, self.num_layers, -1, topk), dim=2
-        )
-        top_inds = top_inds * topk + torch.arange(topk, device=top_inds.device).unsqueeze(0).unsqueeze(1)
-        top_ret = 1.0 - top_values.detach() + top_values
-        return top_inds.detach(), top_ret
-
-    def head_select(self, task_ids=None):
-        # gumbel_sample
-        self.head_samples = self.gumbel_sample(self.head_logits, tau=self.temp)
-        # head select
-        if self.select_strategy == "subset":
-            self.subset_heads, self.subset_weights = self.subset_select(
-                self.head_samples,
-                topk=self.num_heads,
-            )
-        elif self.select_strategy == "group":
-            self.subset_heads, self.subset_weights = self.group_select(
-                self.head_samples,
-                topk=self.num_heads,
-            )
-        else:
-            raise ValueError("{} is not supported".format(self.select_strategy))
-
-        self.batch_subset = self.subset_heads[task_ids, :, :]
-        self.batch_weights = self.subset_weights[task_ids, :, :]
-
-    def forward(self, layer_idx):
-        assert layer_idx is not None
-        batch_subset = self.batch_subset[:, layer_idx, :]
-        batch_weights = self.batch_weights[:, layer_idx, :]
-        return batch_subset, batch_weights
diff --git a/kosmos-g/fairseq/examples/attention_head_selection/src/modules/head_selection_transformer_layer.py b/kosmos-g/fairseq/examples/attention_head_selection/src/modules/head_selection_transformer_layer.py
deleted file mode 100644
index c79214350..000000000
--- a/kosmos-g/fairseq/examples/attention_head_selection/src/modules/head_selection_transformer_layer.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
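The selector above is small enough to exercise in isolation. A self-contained sketch with assumed sizes, using the `group` strategy (one winner per group of `total_num_heads // num_heads` candidate heads):

```python
# Self-contained sketch of AttnHeadSelector above; all numbers are assumed.
import torch

selector = AttnHeadSelector(  # class defined in attn_head_selector.py above
    num_tasks=2, num_layers=6, total_num_heads=8, num_heads=4,
    select_strategy="group",
)
task_ids = torch.tensor([0, 0, 1])      # task index per batch element
selector.head_select(task_ids)          # Gumbel-sigmoid re-sampling, once per batch
heads, weights = selector(layer_idx=0)  # both shaped (3, 4)
# `heads` are selected head indices in [0, 8); `weights` evaluate to 1.0 in the
# forward pass while gradients flow to head_logits (straight-through trick).
```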
- -from fairseq.utils import safe_getattr -from fairseq.modules import TransformerEncoderLayer, TransformerDecoderLayer -from ..modules.multihead_attention_selection import MultiheadAttentionSelection - - -class HeadSelectionTransformerEncoderLayer(TransformerEncoderLayer): - - def __init__(self, args, layer_idx, attn_head_selector=None): - super().__init__(args) - self.layer_idx = layer_idx - self.self_attn = self.build_self_attention_selection( - self.embed_dim, args, attn_head_selector - ) - - def build_self_attention_selection(self, embed_dim, args, attn_head_selector=None): - return MultiheadAttentionSelection( - embed_dim, - args.total_encoder_attention_heads, - args.encoder_attention_heads, - dropout=args.attention_dropout, - self_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - layer_idx=self.layer_idx, - attn_head_selector=attn_head_selector - ) - - -class HeadSelectionTransformerDecoderLayer(TransformerDecoderLayer): - - def __init__( - self, - args, - layer_idx, - self_attn_head_selector=None, - enc_attn_head_selector=None, - no_encoder_attn=False, - add_bias_kv=False, - add_zero_attn=False, - ): - self.layer_idx = layer_idx - super().__init__(args, no_encoder_attn, add_bias_kv, add_zero_attn) - if self_attn_head_selector is not None: - self.self_attn = self.build_self_attention_selection( - self.embed_dim, args, - self_attn_head_selector=self_attn_head_selector, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn - ) - if enc_attn_head_selector is not None: - self.encoder_attn = self.build_encoder_attention_selection( - self.embed_dim, args, - enc_attn_head_selector=enc_attn_head_selector - ) - - def build_self_attention_selection( - self, embed_dim, args, self_attn_head_selector=None, - add_bias_kv=False, add_zero_attn=False - ): - return MultiheadAttentionSelection( - embed_dim, - args.total_decoder_attention_heads, - args.decoder_attention_heads, - dropout=args.attention_dropout, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - self_attention=not safe_getattr(args, "cross_self_attention"), - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - layer_idx=self.layer_idx, - attn_head_selector=self_attn_head_selector, - ) - - def build_encoder_attention_selection(self, embed_dim, args, enc_attn_head_selector=None): - return MultiheadAttentionSelection( - embed_dim, - args.total_decoder_attention_heads, - args.decoder_attention_heads, - kdim=args.encoder_embed_dim, - vdim=args.encoder_embed_dim, - dropout=args.attention_dropout, - encoder_decoder_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - layer_idx=self.layer_idx, - attn_head_selector=enc_attn_head_selector, - ) diff --git a/kosmos-g/fairseq/examples/attention_head_selection/src/modules/multihead_attention_selection.py b/kosmos-g/fairseq/examples/attention_head_selection/src/modules/multihead_attention_selection.py deleted file mode 100644 index 566ad822a..000000000 --- a/kosmos-g/fairseq/examples/attention_head_selection/src/modules/multihead_attention_selection.py +++ /dev/null @@ -1,355 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
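Before reading the attention module that follows, it helps to pin down its dimension bookkeeping. A worked example, assuming the sizes implied by the `head_selection_s2t_transformer_s` architecture and the README's `--total-*-attention-heads 8` flags:

```python
# Assumed-numbers sketch of the projection sizing in MultiheadAttentionSelection.
embed_dim = 256          # model width (head_selection_s2t_transformer_s)
num_heads = 4            # heads actually used per task
total_num_heads = 8      # candidate heads that q/k/v are projected for
head_dim = embed_dim // num_heads             # 64, set by MultiheadAttention
total_embed_dim = head_dim * total_num_heads  # 512: width of q/k/v projections
# Selection keeps num_heads of the total_num_heads outputs, so the tensor fed
# to out_proj is back to num_heads * head_dim = 256 = embed_dim.
assert num_heads * head_dim == embed_dim
```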
-
-from typing import Dict, Optional, Tuple
-import torch
-from fairseq import utils
-from fairseq.modules.quant_noise import quant_noise
-from torch import Tensor, nn
-from torch.nn import Parameter
-
-from fairseq.modules.multihead_attention import MultiheadAttention
-from ..modules.multihead_functional import multi_head_attention_forward
-
-
-class MultiheadAttentionSelection(MultiheadAttention):
-
-    def __init__(
-        self,
-        embed_dim,
-        total_num_heads,
-        num_heads,
-        kdim=None,
-        vdim=None,
-        dropout=0.0,
-        bias=True,
-        add_bias_kv=False,
-        add_zero_attn=False,
-        self_attention=False,
-        encoder_decoder_attention=False,
-        q_noise=0.0,
-        qn_block_size=8,
-        layer_idx=0,
-        attn_head_selector=None
-    ):
-        super().__init__(
-            embed_dim,
-            num_heads,
-            kdim=kdim,
-            vdim=vdim,
-            dropout=dropout,
-            bias=bias,
-            add_bias_kv=add_bias_kv,
-            add_zero_attn=add_zero_attn,
-            self_attention=self_attention,
-            encoder_decoder_attention=encoder_decoder_attention,
-            q_noise=q_noise,
-            qn_block_size=qn_block_size,
-        )
-        self.layer_idx = layer_idx
-        self.attn_head_selector = attn_head_selector
-        self.total_num_heads = total_num_heads
-        self.total_embed_dim = self.head_dim * total_num_heads
-        self.k_proj = quant_noise(
-            nn.Linear(self.kdim, self.total_embed_dim, bias=bias), q_noise, qn_block_size
-        )
-        self.v_proj = quant_noise(
-            nn.Linear(self.vdim, self.total_embed_dim, bias=bias), q_noise, qn_block_size
-        )
-        self.q_proj = quant_noise(
-            nn.Linear(embed_dim, self.total_embed_dim, bias=bias), q_noise, qn_block_size
-        )
-        if add_bias_kv:
-            self.bias_k = Parameter(torch.Tensor(1, 1, self.total_embed_dim))
-            self.bias_v = Parameter(torch.Tensor(1, 1, self.total_embed_dim))
-        else:
-            self.bias_k = self.bias_v = None
-        self.reset_parameters()
-
-    def forward(
-        self,
-        query,
-        key: Optional[Tensor],
-        value: Optional[Tensor],
-        key_padding_mask: Optional[Tensor] = None,
-        incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None,
-        need_weights: bool = True,
-        static_kv: bool = False,
-        attn_mask: Optional[Tensor] = None,
-        before_softmax: bool = False,
-        need_head_weights: bool = False,
-        # subset_heads: Optional[Tensor] = None,
-        # subset_weights: Optional[Tensor] = None
-    ) -> Tuple[Tensor, Optional[Tensor]]:
-        if need_head_weights:
-            need_weights = True
-
-        is_tpu = query.device.type == "xla"
-
-        subset_heads, subset_weights = self.attn_head_selector(self.layer_idx)
-
-        tgt_len, bsz, embed_dim = query.size()
-        src_len = tgt_len
-        assert list(query.size()) == [tgt_len, bsz, self.embed_dim]
-        if key is not None:
-            src_len, key_bsz, _ = key.size()
-            if not torch.jit.is_scripting():
-                assert key_bsz == bsz
-                assert value is not None
-                assert (src_len, bsz) == value.shape[:2]
-
-        if (
-            not self.onnx_trace
-            and not is_tpu  # don't use PyTorch version on TPUs
-            and incremental_state is None
-            and not static_kv
-            # A workaround for quantization to work. Otherwise JIT compilation
-            # treats bias in linear module as method.
- and not torch.jit.is_scripting() - ): - assert key is not None and value is not None - return multi_head_attention_forward( - query, - key, - value, - self.embed_dim, - self.total_num_heads, - self.num_heads, - torch.empty([0]), - torch.cat((self.q_proj.bias, self.k_proj.bias, self.v_proj.bias)), - self.bias_k, - self.bias_v, - self.add_zero_attn, - self.dropout_module.p, - self.out_proj.weight, - self.out_proj.bias, - self.training or self.dropout_module.apply_during_inference, - key_padding_mask, - need_weights, - attn_mask, - use_separate_proj_weight=True, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - subset_heads=subset_heads, - subset_weights=subset_weights - ) - - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if saved_state is not None and "prev_key" in saved_state: - # previous time steps are cached - no need to recompute - # key and value if they are static - if static_kv: - assert self.encoder_decoder_attention and not self.self_attention - key = value = None - else: - saved_state = None - - if self.self_attention: - q = self.q_proj(query) - k = self.k_proj(query) - v = self.v_proj(query) - elif self.encoder_decoder_attention: - # encoder-decoder attention - q = self.q_proj(query) - if key is None: - assert value is None - k = v = None - else: - k = self.k_proj(key) - v = self.v_proj(key) - - else: - assert key is not None and value is not None - q = self.q_proj(query) - k = self.k_proj(key) - v = self.v_proj(value) - q *= self.scaling - - if self.bias_k is not None: - assert self.bias_v is not None - k = torch.cat([k, self.bias_k.repeat(1, bsz, 1)]) - v = torch.cat([v, self.bias_v.repeat(1, bsz, 1)]) - if attn_mask is not None: - attn_mask = torch.cat( - [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1 - ) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [ - key_padding_mask, - key_padding_mask.new_zeros(key_padding_mask.size(0), 1), - ], - dim=1, - ) - - q = ( - q.contiguous() - .view(tgt_len, bsz * self.total_num_heads, self.head_dim) - .transpose(0, 1) - ) - if k is not None: - k = ( - k.contiguous() - .view(-1, bsz * self.total_num_heads, self.head_dim) - .transpose(0, 1) - ) - if v is not None: - v = ( - v.contiguous() - .view(-1, bsz * self.total_num_heads, self.head_dim) - .transpose(0, 1) - ) - - if saved_state is not None: - # saved states are stored with shape (bsz, num_heads, seq_len, head_dim) - if "prev_key" in saved_state: - _prev_key = saved_state["prev_key"] - assert _prev_key is not None - prev_key = _prev_key.view(bsz * self.total_num_heads, -1, self.head_dim) - if static_kv: - k = prev_key - else: - assert k is not None - k = torch.cat([prev_key, k], dim=1) - src_len = k.size(1) - if "prev_value" in saved_state: - _prev_value = saved_state["prev_value"] - assert _prev_value is not None - prev_value = _prev_value.view(bsz * self.total_num_heads, -1, self.head_dim) - if static_kv: - v = prev_value - else: - assert v is not None - v = torch.cat([prev_value, v], dim=1) - prev_key_padding_mask: Optional[Tensor] = None - if "prev_key_padding_mask" in saved_state: - prev_key_padding_mask = saved_state["prev_key_padding_mask"] - assert k is not None and v is not None - key_padding_mask = MultiheadAttention._append_prev_key_padding_mask( - key_padding_mask=key_padding_mask, - prev_key_padding_mask=prev_key_padding_mask, - batch_size=bsz, - src_len=k.size(1), - static_kv=static_kv, - ) - - saved_state["prev_key"] = k.view(bsz, 
self.total_num_heads, -1, self.head_dim)
-            saved_state["prev_value"] = v.view(bsz, self.total_num_heads, -1, self.head_dim)
-            saved_state["prev_key_padding_mask"] = key_padding_mask
-            # In this branch incremental_state is never None
-            assert incremental_state is not None
-            incremental_state = self._set_input_buffer(incremental_state, saved_state)
-        assert k is not None
-        assert k.size(1) == src_len
-
-        # This is part of a workaround to get around fork/join parallelism
-        # not supporting Optional types.
-        if key_padding_mask is not None and key_padding_mask.dim() == 0:
-            key_padding_mask = None
-
-        if key_padding_mask is not None:
-            assert key_padding_mask.size(0) == bsz
-            assert key_padding_mask.size(1) == src_len
-
-        if self.add_zero_attn:
-            assert v is not None
-            src_len += 1
-            k = torch.cat([k, k.new_zeros((k.size(0), 1) + k.size()[2:])], dim=1)
-            v = torch.cat([v, v.new_zeros((v.size(0), 1) + v.size()[2:])], dim=1)
-            if attn_mask is not None:
-                attn_mask = torch.cat(
-                    [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1
-                )
-            if key_padding_mask is not None:
-                key_padding_mask = torch.cat(
-                    [
-                        key_padding_mask,
-                        torch.zeros(key_padding_mask.size(0), 1).type_as(
-                            key_padding_mask
-                        ),
-                    ],
-                    dim=1,
-                )
-
-        attn_weights = torch.bmm(q, k.transpose(1, 2))
-        attn_weights = self.apply_sparse_mask(attn_weights, tgt_len, src_len, bsz)
-
-        assert list(attn_weights.size()) == [bsz * self.total_num_heads, tgt_len, src_len]
-
-        if attn_mask is not None:
-            attn_mask = attn_mask.unsqueeze(0)
-            if self.onnx_trace:
-                attn_mask = attn_mask.repeat(attn_weights.size(0), 1, 1)
-            attn_weights += attn_mask
-
-        if key_padding_mask is not None:
-            # don't attend to padding symbols
-            attn_weights = attn_weights.view(bsz, self.total_num_heads, tgt_len, src_len)
-            if not is_tpu:
-                attn_weights = attn_weights.masked_fill(
-                    key_padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool),
-                    float("-inf"),
-                )
-            else:
-                attn_weights = attn_weights.transpose(0, 2)
-                attn_weights = attn_weights.masked_fill(key_padding_mask, float("-inf"))
-                attn_weights = attn_weights.transpose(0, 2)
-            attn_weights = attn_weights.view(bsz * self.total_num_heads, tgt_len, src_len)
-
-        if before_softmax:
-            return attn_weights, v
-
-        attn_weights_float = utils.softmax(
-            attn_weights, dim=-1, onnx_trace=self.onnx_trace
-        )
-        attn_weights = attn_weights_float.type_as(attn_weights)
-        attn_probs = self.dropout_module(attn_weights)
-
-        assert v is not None
-
-        # evaluation
-        if subset_heads is not None and subset_heads.numel() == 1:
-            subset_heads = subset_heads.repeat(bsz)
-            subset_weights = subset_weights.repeat(bsz)
-
-        if subset_heads is None:
-            attn = torch.bmm(attn_probs, v)
-        else:
-            # training with head selection
-            mixed_attn = torch.bmm(attn_probs, v).contiguous().view(bsz, self.total_num_heads, tgt_len, self.head_dim)
-            attn = torch.stack(
-                [mixed_attn[torch.arange(bsz), subset_heads[:, col], :, :] for col in range(subset_heads.size(1))], dim=1
-            )
-            attn = attn * subset_weights.unsqueeze(2).unsqueeze(3)
-            attn = attn.contiguous().view(bsz * self.num_heads, tgt_len, self.head_dim)
-
-        assert list(attn.size()) == [bsz * self.num_heads, tgt_len, self.head_dim]
-        if self.onnx_trace and attn.size(1) == 1:
-            # when ONNX tracing a single decoder step (sequence length == 1)
-            # the transpose is a no-op copy before view, thus unnecessary
-            attn = attn.contiguous().view(tgt_len, bsz, embed_dim)
-        else:
-            attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim)
-        attn = self.out_proj(attn)
-        attn_weights: Optional[Tensor] =
None - if need_weights: - if subset_heads is None: - attn_weights = attn_weights_float.view( - bsz, self.num_heads, tgt_len, src_len - ).transpose(1, 0) - else: - mixed_attn_weights = attn_weights_float.view( - bsz, self.total_num_heads, tgt_len, src_len - ) - attn_weights = torch.stack( - [mixed_attn_weights[torch.arange(bsz), subset_heads[:, col], :, :] for col in range(subset_heads.size(1))], dim=1 - ).transpose(1, 0) - if not need_head_weights: - # average attention weights over heads - attn_weights = attn_weights.mean(dim=0) - - return attn, attn_weights diff --git a/kosmos-g/fairseq/examples/attention_head_selection/src/modules/multihead_functional.py b/kosmos-g/fairseq/examples/attention_head_selection/src/modules/multihead_functional.py deleted file mode 100644 index d5edc777e..000000000 --- a/kosmos-g/fairseq/examples/attention_head_selection/src/modules/multihead_functional.py +++ /dev/null @@ -1,278 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Optional, Tuple -import torch -from torch import Tensor -from torch.nn.functional import ( - linear, softmax, dropout, pad, - has_torch_function, - handle_torch_function, - _in_projection_packed, -) -import math -import warnings - - -def _scaled_dot_product_attention( - q: Tensor, - k: Tensor, - v: Tensor, - attn_mask: Optional[Tensor] = None, - dropout_p: float = 0.0, - bsz: int = 1, - subset_heads: Optional[Tensor] = None, - subset_weights: Optional[Tensor] = None, -) -> Tuple[Tensor, Tensor]: - B, Nt, E = q.shape - q = q / math.sqrt(E) - # B: bsz * total_num_heads - # (B, Nt, E) x (B, E, Ns) -> (B, Nt, Ns) - attn = torch.bmm(q, k.transpose(-2, -1)) - if attn_mask is not None: - attn += attn_mask - attn = softmax(attn, dim=-1) - if dropout_p > 0.0: - attn = dropout(attn, p=dropout_p) - if subset_heads is None: - # (B, Nt, Ns) x (B, Ns, E) -> (B, Nt, E) - output = torch.bmm(attn, v) - else: - mixed_output = torch.bmm(attn, v).contiguous().view(bsz, -1, Nt, E) - output = torch.stack( - [mixed_output[torch.arange(bsz), subset_heads[:, col], :, :] for col in range(subset_heads.size(1))], - dim=1 - ) - output = output * subset_weights.unsqueeze(2).unsqueeze(3) - output = output.contiguous().view(-1, Nt, E) - if subset_heads is not None: - _, Nt, Ns = attn.size() - mixed_attn = attn.view(bsz, -1, Nt, Ns) - attn = torch.stack( - [mixed_attn[torch.arange(bsz), subset_heads[:, col], :, :] for col in range(subset_heads.size(1))], dim=1 - ) - return output, attn - - -def _in_projection( - q: Tensor, - k: Tensor, - v: Tensor, - w_q: Tensor, - w_k: Tensor, - w_v: Tensor, - b_q: Optional[Tensor] = None, - b_k: Optional[Tensor] = None, - b_v: Optional[Tensor] = None, -) -> Tuple[Tensor, Tensor, Tensor]: - return linear(q, w_q, b_q), linear(k, w_k, b_k), linear(v, w_v, b_v) - - -def multi_head_attention_forward( - query: Tensor, - key: Tensor, - value: Tensor, - embed_dim_to_check: int, - total_num_heads: int, - num_heads: int, - in_proj_weight: Tensor, - in_proj_bias: Optional[Tensor], - bias_k: Optional[Tensor], - bias_v: Optional[Tensor], - add_zero_attn: bool, - dropout_p: float, - out_proj_weight: Tensor, - out_proj_bias: Optional[Tensor], - training: bool = True, - key_padding_mask: Optional[Tensor] = None, - need_weights: bool = True, - attn_mask: Optional[Tensor] = None, - use_separate_proj_weight: bool = False, - q_proj_weight: Optional[Tensor] = None, - k_proj_weight: 
Optional[Tensor] = None, - v_proj_weight: Optional[Tensor] = None, - static_k: Optional[Tensor] = None, - static_v: Optional[Tensor] = None, - subset_heads: Optional[Tensor] = None, - subset_weights: Optional[Tensor] = None, -): - tens_ops = (query, key, value, in_proj_weight, in_proj_bias, bias_k, bias_v, out_proj_weight, out_proj_bias) - if has_torch_function(tens_ops): - return handle_torch_function( - multi_head_attention_forward, - tens_ops, - query, - key, - value, - embed_dim_to_check, - total_num_heads, - num_heads, - in_proj_weight, - in_proj_bias, - bias_k, - bias_v, - add_zero_attn, - dropout_p, - out_proj_weight, - out_proj_bias, - training=training, - key_padding_mask=key_padding_mask, - need_weights=need_weights, - attn_mask=attn_mask, - use_separate_proj_weight=use_separate_proj_weight, - q_proj_weight=q_proj_weight, - k_proj_weight=k_proj_weight, - v_proj_weight=v_proj_weight, - static_k=static_k, - static_v=static_v, - subset_heads=subset_heads, - subset_weights=subset_weights - ) - - # set up shape vars - tgt_len, bsz, embed_dim = query.shape - src_len, _, _ = key.shape - assert embed_dim == embed_dim_to_check, \ - f"was expecting embedding dimension of {embed_dim_to_check}, but got {embed_dim}" - if isinstance(embed_dim, torch.Tensor): - # embed_dim can be a tensor when JIT tracing - head_dim = embed_dim.div(num_heads, rounding_mode='trunc') - else: - head_dim = embed_dim // num_heads - assert head_dim * num_heads == embed_dim, f"embed_dim {embed_dim} not divisible by num_heads {num_heads}" - if use_separate_proj_weight: - # allow MHA to have different embedding dimensions when separate projection weights are used - assert key.shape[:2] == value.shape[:2], \ - f"key's sequence and batch dims {key.shape[:2]} do not match value's {value.shape[:2]}" - else: - assert key.shape == value.shape, f"key shape {key.shape} does not match value shape {value.shape}" - - # - # compute in-projection - # - if not use_separate_proj_weight: - q, k, v = _in_projection_packed(query, key, value, in_proj_weight, in_proj_bias) - else: - assert q_proj_weight is not None, "use_separate_proj_weight is True but q_proj_weight is None" - assert k_proj_weight is not None, "use_separate_proj_weight is True but k_proj_weight is None" - assert v_proj_weight is not None, "use_separate_proj_weight is True but v_proj_weight is None" - if in_proj_bias is None: - b_q = b_k = b_v = None - else: - b_q, b_k, b_v = in_proj_bias.chunk(3) - q, k, v = _in_projection(query, key, value, q_proj_weight, k_proj_weight, v_proj_weight, b_q, b_k, b_v) - - # prep attention mask - if attn_mask is not None: - if attn_mask.dtype == torch.uint8: - warnings.warn("Byte tensor for attn_mask in nn.MultiheadAttention is deprecated. 
Use bool tensor instead.") - attn_mask = attn_mask.to(torch.bool) - else: - assert attn_mask.is_floating_point() or attn_mask.dtype == torch.bool, \ - f"Only float, byte, and bool types are supported for attn_mask, not {attn_mask.dtype}" - # ensure attn_mask's dim is 3 - if attn_mask.dim() == 2: - correct_2d_size = (tgt_len, src_len) - if attn_mask.shape != correct_2d_size: - raise RuntimeError(f"The shape of the 2D attn_mask is {attn_mask.shape}, but should be {correct_2d_size}.") - attn_mask = attn_mask.unsqueeze(0) - elif attn_mask.dim() == 3: - correct_3d_size = (bsz * total_num_heads, tgt_len, src_len) - if attn_mask.shape != correct_3d_size: - raise RuntimeError(f"The shape of the 3D attn_mask is {attn_mask.shape}, but should be {correct_3d_size}.") - else: - raise RuntimeError(f"attn_mask's dimension {attn_mask.dim()} is not supported") - - # prep key padding mask - if key_padding_mask is not None and key_padding_mask.dtype == torch.uint8: - warnings.warn("Byte tensor for key_padding_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead.") - key_padding_mask = key_padding_mask.to(torch.bool) - - # add bias along batch dimension (currently second) - if bias_k is not None and bias_v is not None: - assert static_k is None, "bias cannot be added to static key." - assert static_v is None, "bias cannot be added to static value." - k = torch.cat([k, bias_k.repeat(1, bsz, 1)]) - v = torch.cat([v, bias_v.repeat(1, bsz, 1)]) - if attn_mask is not None: - attn_mask = pad(attn_mask, (0, 1)) - if key_padding_mask is not None: - key_padding_mask = pad(key_padding_mask, (0, 1)) - else: - assert bias_k is None - assert bias_v is None - - # - # reshape q, k, v for multihead attention and make em batch first - # - q = q.contiguous().view(tgt_len, bsz * total_num_heads, head_dim).transpose(0, 1) - if static_k is None: - k = k.contiguous().view(k.shape[0], bsz * total_num_heads, head_dim).transpose(0, 1) - else: - # TODO finish disentangling control flow so we don't do in-projections when statics are passed - assert static_k.size(0) == bsz * total_num_heads, \ - f"expecting static_k.size(0) of {bsz * total_num_heads}, but got {static_k.size(0)}" - assert static_k.size(2) == head_dim, \ - f"expecting static_k.size(2) of {head_dim}, but got {static_k.size(2)}" - k = static_k - if static_v is None: - v = v.contiguous().view(v.shape[0], bsz * total_num_heads, head_dim).transpose(0, 1) - else: - # TODO finish disentangling control flow so we don't do in-projections when statics are passed - assert static_v.size(0) == bsz * total_num_heads, \ - f"expecting static_v.size(0) of {bsz * total_num_heads}, but got {static_v.size(0)}" - assert static_v.size(2) == head_dim, \ - f"expecting static_v.size(2) of {head_dim}, but got {static_v.size(2)}" - v = static_v - - # add zero attention along batch dimension (now first) - if add_zero_attn: - zero_attn_shape = (bsz * total_num_heads, 1, head_dim) - k = torch.cat([k, torch.zeros(zero_attn_shape, dtype=k.dtype, device=k.device)], dim=1) - v = torch.cat([v, torch.zeros(zero_attn_shape, dtype=v.dtype, device=v.device)], dim=1) - if attn_mask is not None: - attn_mask = pad(attn_mask, (0, 1)) - if key_padding_mask is not None: - key_padding_mask = pad(key_padding_mask, (0, 1)) - - # update source sequence length after adjustments - src_len = k.size(1) - - # merge key padding and attention masks - if key_padding_mask is not None: - assert key_padding_mask.shape == (bsz, src_len), \ - f"expecting key_padding_mask shape of {(bsz, src_len)}, but got 
{key_padding_mask.shape}" - key_padding_mask = key_padding_mask.view(bsz, 1, 1, src_len). \ - expand(-1, total_num_heads, -1, -1).reshape(bsz * total_num_heads, 1, src_len) - if attn_mask is None: - attn_mask = key_padding_mask - elif attn_mask.dtype == torch.bool: - attn_mask = attn_mask.logical_or(key_padding_mask) - else: - attn_mask = attn_mask.masked_fill(key_padding_mask, float("-inf")) - - # convert mask to float - if attn_mask is not None and attn_mask.dtype == torch.bool: - new_attn_mask = torch.zeros_like(attn_mask, dtype=torch.float) - new_attn_mask.masked_fill_(attn_mask, float("-inf")) - attn_mask = new_attn_mask - - # adjust dropout probability - if not training: - dropout_p = 0.0 - - # - # (deep breath) calculate attention and out projection - # - attn_output, attn_output_weights = _scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, bsz, subset_heads, subset_weights) - attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - attn_output = linear(attn_output, out_proj_weight, out_proj_bias) - - if need_weights: - # average attention weights over heads - attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len) - return attn_output, attn_output_weights.sum(dim=1) / num_heads - else: - return attn_output, None diff --git a/kosmos-g/fairseq/examples/attention_head_selection/src/speech_to_text_head_selection.py b/kosmos-g/fairseq/examples/attention_head_selection/src/speech_to_text_head_selection.py deleted file mode 100644 index 6e0ce11d6..000000000 --- a/kosmos-g/fairseq/examples/attention_head_selection/src/speech_to_text_head_selection.py +++ /dev/null @@ -1,180 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-
-import torch
-from fairseq.optim.amp_optimizer import AMPOptimizer
-from fairseq.tasks import register_task
-from fairseq.tasks.speech_to_text import SpeechToTextTask
-
-from .data.speech_to_text_dataset_with_domain import SpeechToTextDatasetCreatorWithDomain
-from .loss.attention_head_selection import HeadSelectionLoss
-
-
-@register_task("speech_to_text_head_selection")
-class SpeechToTextHeadSelectionTask(SpeechToTextTask):
-
-    @classmethod
-    def add_args(cls, parser):
-        SpeechToTextTask.add_args(parser)
-        parser.add_argument(
-            "--task-type",
-            type=str,
-            default="lang",
-            help="task type for head selection, lang or domain"
-        )
-        parser.add_argument(
-            "--kl-weight",
-            type=float,
-            default=0.0,
-            help="the weight of KL loss"
-        )
-
-    def __init__(self, args, tgt_dict):
-        super().__init__(args, tgt_dict)
-        self.task_type = args.task_type
-        assert self.task_type in ["lang", "domain"], "invalid task_type: {}, should be either lang or domain".format(self.task_type)
-        self.map_task_to_id(args.train_subset)
-        self.encoder_head_prior = float(args.encoder_attention_heads) / args.total_encoder_attention_heads
-        self.decoder_head_prior = float(args.decoder_attention_heads) / args.total_decoder_attention_heads
-        self.kl_loss = HeadSelectionLoss(args)
-
-    def map_task_to_id(self, train_subset):
-        src_lang_set, tgt_lang_set, domain_set = set(), set(), set()
-        for split in train_subset.split(","):
-            seq = split.split("_")
-            assert len(seq) == 4, "subset {} should be in the format of train_src_tgt_domain".format(split)
-            _, src_lang, tgt_lang, domain = seq
-            src_lang_set.add(src_lang)
-            tgt_lang_set.add(tgt_lang)
-            domain_set.add(domain)
-        src_langs = sorted(src_lang_set)
-        tgt_langs = sorted(tgt_lang_set)
-        domains = sorted(domain_set)
-        self.src_lang_map = {src_lang: i for (i, src_lang) in enumerate(src_langs)}
-        self.tgt_lang_map = {tgt_lang: i for (i, tgt_lang) in enumerate(tgt_langs)}
-        self.domain_map = {domain: i for (i, domain) in enumerate(domains)}
-        if self.task_type == "lang":
-            self.encoder_tasks = len(self.src_lang_map)
-            self.decoder_tasks = len(self.tgt_lang_map)
-        elif self.task_type == "domain":
-            self.encoder_tasks = len(self.domain_map)
-            self.decoder_tasks = len(self.domain_map)
-
-    def load_dataset(self, split, epoch=1, combine=False, **kwargs):
-        is_train_split = split.startswith("train")
-        pre_tokenizer = self.build_tokenizer(self.args)
-        bpe_tokenizer = self.build_bpe(self.args)
-        self.datasets[split] = SpeechToTextDatasetCreatorWithDomain.from_tsv(
-            self.args.data,
-            self.data_cfg,
-            split,
-            self.tgt_dict,
-            pre_tokenizer,
-            bpe_tokenizer,
-            is_train_split=is_train_split,
-            epoch=epoch,
-            seed=self.args.seed,
-            src_lang_map=self.src_lang_map,
-            tgt_lang_map=self.tgt_lang_map,
-            domain_map=self.domain_map,
-            speaker_to_id=self.speaker_to_id
-        )
-
-    def build_model(self, args):
-        args.encoder_tasks = self.encoder_tasks
-        args.decoder_tasks = self.decoder_tasks
-        return super(SpeechToTextHeadSelectionTask, self).build_model(args)
-
-    def get_sample_sizes(self, sample, task_ids, num_tasks):
-        """
-        task_ids: (bsz,)
-        get sample sizes for each task
-        """
-        bsz = task_ids.size(0)
-        mat = torch.zeros((num_tasks, bsz), device=task_ids.device)
-        mat[task_ids, torch.arange(bsz)] = 1.0
-        ntokens = torch.sum(sample['target'] != 1, dim=-1)
-        sample_sizes = torch.matmul(mat, ntokens.float())
-        return sample_sizes
-
-    def train_step(
-        self, sample, model, criterion, optimizer, update_num, ignore_grad=False
-    ):
-        model.train()
-        model.set_num_updates(update_num)
-        # task ids
-        if self.task_type == "lang":
-            encoder_task_ids = sample["src_lang_ids"]
-            decoder_task_ids = sample["tgt_lang_ids"]
-        elif self.task_type == "domain":
-            encoder_task_ids = sample["domain_ids"]
-            decoder_task_ids = sample["domain_ids"]
-        model.encoder.set_task_ids(encoder_task_ids)
-        model.decoder.set_task_ids(decoder_task_ids)
-
-        with torch.autograd.profiler.record_function("forward"):
-            with torch.cuda.amp.autocast(enabled=(isinstance(optimizer, AMPOptimizer))):
-                loss, sample_size, logging_output = criterion(model, sample)
-                # KL loss
-                if self.args.encoder_attn_head_select:
-                    sample_sizes = self.get_sample_sizes(sample, encoder_task_ids, self.encoder_tasks)
-                    loss += self.kl_loss(
-                        model.encoder.attn_head_selector.head_samples,
-                        sample_sizes,
-                        self.encoder_head_prior
-                    )
-                if self.args.decoder_self_attn_head_select:
-                    sample_sizes = self.get_sample_sizes(sample, decoder_task_ids, self.decoder_tasks)
-                    loss += self.kl_loss(
-                        model.decoder.self_attn_head_selector.head_samples,
-                        sample_sizes,
-                        self.decoder_head_prior
-                    )
-                if self.args.dec_enc_attn_head_select:
-                    sample_sizes = self.get_sample_sizes(sample, decoder_task_ids, self.decoder_tasks)
-                    loss += self.kl_loss(
-                        model.decoder.enc_attn_head_selector.head_samples,
-                        sample_sizes,
-                        self.decoder_head_prior
-                    )
-
-        if ignore_grad:
-            loss *= 0
-        with torch.autograd.profiler.record_function("backward"):
-            optimizer.backward(loss)
-        return loss, sample_size, logging_output
-
-    def valid_step(self, sample, model, criterion):
-        model.eval()
-        # task ids
-        if self.task_type == "lang":
-            encoder_task_ids = sample["src_lang_ids"]
-            decoder_task_ids = sample["tgt_lang_ids"]
-        elif self.task_type == "domain":
-            encoder_task_ids = sample["domain_ids"]
-            decoder_task_ids = sample["domain_ids"]
-        model.encoder.set_task_ids(encoder_task_ids)
-        model.decoder.set_task_ids(decoder_task_ids)
-        with torch.no_grad():
-            loss, sample_size, logging_output = criterion(model, sample)
-        return loss, sample_size, logging_output
-
-    def inference_step(
-        self, generator, models, sample, prefix_tokens=None, constraints=None
-    ):
-        with torch.no_grad():
-            # task ids
-            if self.task_type == "lang":
-                encoder_task_ids = sample["src_lang_ids"][:1]
-                decoder_task_ids = sample["tgt_lang_ids"][:1]
-            elif self.task_type == "domain":
-                encoder_task_ids = sample["domain_ids"][:1]
-                decoder_task_ids = sample["domain_ids"][:1]
-            for model in models:
-                model.encoder.set_task_ids(encoder_task_ids)
-                model.decoder.set_task_ids(decoder_task_ids)
-            return generator.generate(
-                models, sample, prefix_tokens=prefix_tokens, constraints=constraints
-            )
diff --git a/kosmos-g/fairseq/examples/backtranslation/README.md b/kosmos-g/fairseq/examples/backtranslation/README.md
deleted file mode 100644
index 73675f112..000000000
--- a/kosmos-g/fairseq/examples/backtranslation/README.md
+++ /dev/null
@@ -1,297 +0,0 @@
-# Understanding Back-Translation at Scale (Edunov et al., 2018)
-
-This page includes pre-trained models from the paper [Understanding Back-Translation at Scale (Edunov et al., 2018)](https://arxiv.org/abs/1808.09381).
-
-## Pre-trained models
-
-Model | Description | Dataset | Download
----|---|---|---
-`transformer.wmt18.en-de` | Transformer <br> ([Edunov et al., 2018](https://arxiv.org/abs/1808.09381)) <br> WMT'18 winner | [WMT'18 English-German](http://www.statmt.org/wmt18/translation-task.html) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt18.en-de.ensemble.tar.gz) <br> See NOTE in the archive
-
-## Example usage (torch.hub)
-
-We require a few additional Python dependencies for preprocessing:
-```bash
-pip install subword_nmt sacremoses
-```
-
-Then to generate translations from the full model ensemble:
-```python
-import torch
-
-# List available models
-torch.hub.list('pytorch/fairseq')  # [..., 'transformer.wmt18.en-de', ... ]
-
-# Load the WMT'18 En-De ensemble
-en2de_ensemble = torch.hub.load(
-    'pytorch/fairseq', 'transformer.wmt18.en-de',
-    checkpoint_file='wmt18.model1.pt:wmt18.model2.pt:wmt18.model3.pt:wmt18.model4.pt:wmt18.model5.pt',
-    tokenizer='moses', bpe='subword_nmt')
-
-# The ensemble contains 5 models
-len(en2de_ensemble.models)
-# 5
-
-# Translate
-en2de_ensemble.translate('Hello world!')
-# 'Hallo Welt!'
-```
-
-## Training your own model (WMT'18 English-German)
-
-The following instructions can be adapted to reproduce the models from the paper.
-
-
-#### Step 1. Prepare parallel data and optionally train a baseline (English-German) model
-
-First download and preprocess the data:
-```bash
-# Download and prepare the data
-cd examples/backtranslation/
-bash prepare-wmt18en2de.sh
-cd ../..
-
-# Binarize the data
-TEXT=examples/backtranslation/wmt18_en_de
-fairseq-preprocess \
-    --joined-dictionary \
-    --source-lang en --target-lang de \
-    --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
-    --destdir data-bin/wmt18_en_de --thresholdtgt 0 --thresholdsrc 0 \
-    --workers 20
-
-# Copy the BPE code into the data-bin directory for future use
-cp examples/backtranslation/wmt18_en_de/code data-bin/wmt18_en_de/code
-```
-
-(Optionally) Train a baseline model (English-German) using just the parallel data:
-```bash
-CHECKPOINT_DIR=checkpoints_en_de_parallel
-fairseq-train --fp16 \
-    data-bin/wmt18_en_de \
-    --source-lang en --target-lang de \
-    --arch transformer_wmt_en_de_big --share-all-embeddings \
-    --dropout 0.3 --weight-decay 0.0 \
-    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
-    --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
-    --lr 0.001 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
-    --max-tokens 3584 --update-freq 16 \
-    --max-update 30000 \
-    --save-dir $CHECKPOINT_DIR
-# Note: the above command assumes 8 GPUs. Adjust `--update-freq` if you have a
-# different number of GPUs.
-```
-
-Average the last 10 checkpoints:
-```bash
-python scripts/average_checkpoints.py \
-    --inputs $CHECKPOINT_DIR \
-    --num-epoch-checkpoints 10 \
-    --output $CHECKPOINT_DIR/checkpoint.avg10.pt
-```
-
-Evaluate BLEU:
-```bash
-# tokenized BLEU on newstest2017:
-bash examples/backtranslation/tokenized_bleu.sh \
-    wmt17 \
-    en-de \
-    data-bin/wmt18_en_de \
-    data-bin/wmt18_en_de/code \
-    $CHECKPOINT_DIR/checkpoint.avg10.pt
-# BLEU4 = 29.57, 60.9/35.4/22.9/15.5 (BP=1.000, ratio=1.014, syslen=63049, reflen=62152)
-# compare to 29.46 in Table 1, which is also for tokenized BLEU
-
-# generally it's better to report (detokenized) sacrebleu though:
-bash examples/backtranslation/sacrebleu.sh \
-    wmt17 \
-    en-de \
-    data-bin/wmt18_en_de \
-    data-bin/wmt18_en_de/code \
-    $CHECKPOINT_DIR/checkpoint.avg10.pt
-# BLEU+case.mixed+lang.en-de+numrefs.1+smooth.exp+test.wmt17+tok.13a+version.1.4.3 = 29.0 60.6/34.7/22.4/14.9 (BP = 1.000 ratio = 1.013 hyp_len = 62099 ref_len = 61287)
-```
-
-
-#### Step 2. Back-translate monolingual German data
-
-Train a reverse model (German-English) to do the back-translation:
-```bash
-CHECKPOINT_DIR=checkpoints_de_en_parallel
-fairseq-train --fp16 \
-    data-bin/wmt18_en_de \
-    --source-lang de --target-lang en \
-    --arch transformer_wmt_en_de_big --share-all-embeddings \
-    --dropout 0.3 --weight-decay 0.0 \
-    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
-    --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
-    --lr 0.001 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
-    --max-tokens 3584 --update-freq 16 \
-    --max-update 30000 \
-    --save-dir $CHECKPOINT_DIR
-# Note: the above command assumes 8 GPUs. Adjust `--update-freq` if you have a
-# different number of GPUs.
-```
-
-Let's evaluate the back-translation (BT) model to make sure it is well trained:
-```bash
-bash examples/backtranslation/sacrebleu.sh \
-    wmt17 \
-    de-en \
-    data-bin/wmt18_en_de \
-    data-bin/wmt18_en_de/code \
-    $CHECKPOINT_DIR/checkpoint_best.pt
-# BLEU+case.mixed+lang.de-en+numrefs.1+smooth.exp+test.wmt17+tok.13a+version.1.4.3 = 34.9 66.9/41.8/28.5/19.9 (BP = 0.983 ratio = 0.984 hyp_len = 63342 ref_len = 64399)
-# compare to the best system from WMT'17 which scored 35.1: http://matrix.statmt.org/matrix/systems_list/1868
-```
-
-Next prepare the monolingual data:
-```bash
-# Download and prepare the monolingual data
-# By default the script samples 25M monolingual sentences, which after
-# deduplication should be just over 24M sentences. These are split into 25
-# shards, each with 1M sentences (except for the last shard).
-cd examples/backtranslation/
-bash prepare-de-monolingual.sh
-cd ../..
-
-# Binarize each shard of the monolingual data
-TEXT=examples/backtranslation/wmt18_de_mono
-for SHARD in $(seq -f "%02g" 0 24); do \
-    fairseq-preprocess \
-        --only-source \
-        --source-lang de --target-lang en \
-        --joined-dictionary \
-        --srcdict data-bin/wmt18_en_de/dict.de.txt \
-        --testpref $TEXT/bpe.monolingual.dedup.${SHARD} \
-        --destdir data-bin/wmt18_de_mono/shard${SHARD} \
-        --workers 20; \
-    cp data-bin/wmt18_en_de/dict.en.txt data-bin/wmt18_de_mono/shard${SHARD}/; \
-done
-```
-
-Now we're ready to perform back-translation over the monolingual data.
-The following command generates via sampling, but it's possible to use greedy
-decoding (`--beam 1`), beam search (`--beam 5`),
-top-k sampling (`--sampling --beam 1 --sampling-topk 10`), etc.:
-```bash
-mkdir backtranslation_output
-for SHARD in $(seq -f "%02g" 0 24); do \
-    fairseq-generate --fp16 \
-        data-bin/wmt18_de_mono/shard${SHARD} \
-        --path $CHECKPOINT_DIR/checkpoint_best.pt \
-        --skip-invalid-size-inputs-valid-test \
-        --max-tokens 4096 \
-        --sampling --beam 1 \
-    > backtranslation_output/sampling.shard${SHARD}.out; \
-done
-```
-
-After BT, use the `extract_bt_data.py` script to re-combine the shards, extract
-the back-translations and apply length ratio filters:
-```bash
-python examples/backtranslation/extract_bt_data.py \
-    --minlen 1 --maxlen 250 --ratio 1.5 \
-    --output backtranslation_output/bt_data --srclang en --tgtlang de \
-    backtranslation_output/sampling.shard*.out
-
-# Ensure lengths are the same:
-# wc -l backtranslation_output/bt_data.{en,de}
-#   21795614 backtranslation_output/bt_data.en
-#   21795614 backtranslation_output/bt_data.de
-#   43591228 total
-```
-
-Binarize the filtered BT data and combine it with the parallel data:
-```bash
-TEXT=backtranslation_output
-fairseq-preprocess \
-    --source-lang en --target-lang de \
-    --joined-dictionary \
-    --srcdict data-bin/wmt18_en_de/dict.en.txt \
-    --trainpref $TEXT/bt_data \
-    --destdir data-bin/wmt18_en_de_bt \
-    --workers 20
-
-# We want to train on the combined data, so we'll symlink the parallel + BT data
-# in the wmt18_en_de_para_plus_bt directory. We link the parallel data as "train"
-# and the BT data as "train1", so that fairseq will combine them automatically
-# and so that we can use the `--upsample-primary` option to upsample the
-# parallel data (if desired).
-PARA_DATA=$(readlink -f data-bin/wmt18_en_de)
-BT_DATA=$(readlink -f data-bin/wmt18_en_de_bt)
-COMB_DATA=data-bin/wmt18_en_de_para_plus_bt
-mkdir -p $COMB_DATA
-for LANG in en de; do \
-    ln -s ${PARA_DATA}/dict.$LANG.txt ${COMB_DATA}/dict.$LANG.txt; \
-    for EXT in bin idx; do \
-        ln -s ${PARA_DATA}/train.en-de.$LANG.$EXT ${COMB_DATA}/train.en-de.$LANG.$EXT; \
-        ln -s ${BT_DATA}/train.en-de.$LANG.$EXT ${COMB_DATA}/train1.en-de.$LANG.$EXT; \
-        ln -s ${PARA_DATA}/valid.en-de.$LANG.$EXT ${COMB_DATA}/valid.en-de.$LANG.$EXT; \
-        ln -s ${PARA_DATA}/test.en-de.$LANG.$EXT ${COMB_DATA}/test.en-de.$LANG.$EXT; \
-    done; \
-done
-```
-
-
-#### Step 3. Train an English-German model over the combined parallel + BT data
-
-Finally we can train a model over the parallel + BT data:
-```bash
-CHECKPOINT_DIR=checkpoints_en_de_parallel_plus_bt
-fairseq-train --fp16 \
-    data-bin/wmt18_en_de_para_plus_bt \
-    --upsample-primary 16 \
-    --source-lang en --target-lang de \
-    --arch transformer_wmt_en_de_big --share-all-embeddings \
-    --dropout 0.3 --weight-decay 0.0 \
-    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
-    --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
-    --lr 0.0007 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
-    --max-tokens 3584 --update-freq 16 \
-    --max-update 100000 \
-    --save-dir $CHECKPOINT_DIR
-# Note: the above command assumes 8 GPUs. Adjust `--update-freq` if you have a
-# different number of GPUs.
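-# Note: `--upsample-primary 16` samples each example from the primary "train"
-# set (the parallel bitext) 16x more often than examples from the
-# back-translated "train1" set, so the smaller bitext is not drowned out.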
-```
-
-Average the last 10 checkpoints:
-```bash
-python scripts/average_checkpoints.py \
-    --inputs $CHECKPOINT_DIR \
-    --num-epoch-checkpoints 10 \
-    --output $CHECKPOINT_DIR/checkpoint.avg10.pt
-```
-
-Evaluate BLEU:
-```bash
-# tokenized BLEU on newstest2017:
-bash examples/backtranslation/tokenized_bleu.sh \
-    wmt17 \
-    en-de \
-    data-bin/wmt18_en_de \
-    data-bin/wmt18_en_de/code \
-    $CHECKPOINT_DIR/checkpoint.avg10.pt
-# BLEU4 = 32.35, 64.4/38.9/26.2/18.3 (BP=0.977, ratio=0.977, syslen=60729, reflen=62152)
-# compare to 32.35 in Table 1, which is also for tokenized BLEU
-
-# generally it's better to report (detokenized) sacrebleu:
-bash examples/backtranslation/sacrebleu.sh \
-    wmt17 \
-    en-de \
-    data-bin/wmt18_en_de \
-    data-bin/wmt18_en_de/code \
-    $CHECKPOINT_DIR/checkpoint.avg10.pt
-# BLEU+case.mixed+lang.en-de+numrefs.1+smooth.exp+test.wmt17+tok.13a+version.1.4.3 = 31.5 64.3/38.2/25.6/17.6 (BP = 0.971 ratio = 0.971 hyp_len = 59515 ref_len = 61287)
-```
-
-
-## Citation
-```bibtex
-@inproceedings{edunov2018backtranslation,
-  title = {Understanding Back-Translation at Scale},
-  author = {Edunov, Sergey and Ott, Myle and Auli, Michael and Grangier, David},
-  booktitle = {Conference on Empirical Methods in Natural Language Processing (EMNLP)},
-  year = 2018,
-}
-```
diff --git a/kosmos-g/fairseq/examples/backtranslation/deduplicate_lines.py b/kosmos-g/fairseq/examples/backtranslation/deduplicate_lines.py
deleted file mode 100644
index 50e458328..000000000
--- a/kosmos-g/fairseq/examples/backtranslation/deduplicate_lines.py
+++ /dev/null
@@ -1,41 +0,0 @@
-#!/usr/bin/python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import fileinput
-import hashlib
-import sys
-from multiprocessing import Pool
-
-
-def get_hashes_and_lines(raw_line):
-    hash = hashlib.md5(raw_line).hexdigest()
-    return hash, raw_line
-
-
-def main():
-    parser = argparse.ArgumentParser()
-    parser.add_argument("--workers", type=int, default=10)
-    parser.add_argument("files", nargs="*", help="input files")
-    args = parser.parse_args()
-
-    seen = set()
-    with fileinput.input(args.files, mode="rb") as h:
-        pool = Pool(args.workers)
-        results = pool.imap_unordered(get_hashes_and_lines, h, 1000)
-        for i, (hash, raw_line) in enumerate(results):
-            if hash not in seen:
-                seen.add(hash)
-                sys.stdout.buffer.write(raw_line)
-            if i % 1000000 == 0:
-                print(i, file=sys.stderr, end="", flush=True)
-            elif i % 100000 == 0:
-                print(".", file=sys.stderr, end="", flush=True)
-    print(file=sys.stderr, flush=True)
-
-
-if __name__ == "__main__":
-    main()
diff --git a/kosmos-g/fairseq/examples/backtranslation/extract_bt_data.py b/kosmos-g/fairseq/examples/backtranslation/extract_bt_data.py
deleted file mode 100644
index e766391e8..000000000
--- a/kosmos-g/fairseq/examples/backtranslation/extract_bt_data.py
+++ /dev/null
@@ -1,72 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import fileinput
-
-from tqdm import tqdm
-
-
-def main():
-    parser = argparse.ArgumentParser(
-        description=(
-            "Extract back-translations from the stdout of fairseq-generate. "
-            "If there are multiple hypotheses for a source, we only keep the first one. 
" - ) - ) - parser.add_argument("--output", required=True, help="output prefix") - parser.add_argument( - "--srclang", required=True, help="source language (extracted from H-* lines)" - ) - parser.add_argument( - "--tgtlang", required=True, help="target language (extracted from S-* lines)" - ) - parser.add_argument("--minlen", type=int, help="min length filter") - parser.add_argument("--maxlen", type=int, help="max length filter") - parser.add_argument("--ratio", type=float, help="ratio filter") - parser.add_argument("files", nargs="*", help="input files") - args = parser.parse_args() - - def validate(src, tgt): - srclen = len(src.split(" ")) if src != "" else 0 - tgtlen = len(tgt.split(" ")) if tgt != "" else 0 - if ( - (args.minlen is not None and (srclen < args.minlen or tgtlen < args.minlen)) - or ( - args.maxlen is not None - and (srclen > args.maxlen or tgtlen > args.maxlen) - ) - or ( - args.ratio is not None - and (max(srclen, tgtlen) / float(min(srclen, tgtlen)) > args.ratio) - ) - ): - return False - return True - - def safe_index(toks, index, default): - try: - return toks[index] - except IndexError: - return default - - with open(args.output + "." + args.srclang, "w") as src_h, open( - args.output + "." + args.tgtlang, "w" - ) as tgt_h: - for line in tqdm(fileinput.input(args.files)): - if line.startswith("S-"): - tgt = safe_index(line.rstrip().split("\t"), 1, "") - elif line.startswith("H-"): - if tgt is not None: - src = safe_index(line.rstrip().split("\t"), 2, "") - if validate(src, tgt): - print(src, file=src_h) - print(tgt, file=tgt_h) - tgt = None - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/backtranslation/prepare-de-monolingual.sh b/kosmos-g/fairseq/examples/backtranslation/prepare-de-monolingual.sh deleted file mode 100644 index 5e67b2b3b..000000000 --- a/kosmos-g/fairseq/examples/backtranslation/prepare-de-monolingual.sh +++ /dev/null @@ -1,98 +0,0 @@ -#!/bin/bash - -SCRIPTS=mosesdecoder/scripts -TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl -NORM_PUNC=$SCRIPTS/tokenizer/normalize-punctuation.perl -REM_NON_PRINT_CHAR=$SCRIPTS/tokenizer/remove-non-printing-char.perl -BPEROOT=subword-nmt/subword_nmt - - -BPE_CODE=wmt18_en_de/code -SUBSAMPLE_SIZE=25000000 -LANG=de - - -OUTDIR=wmt18_${LANG}_mono -orig=orig -tmp=$OUTDIR/tmp -mkdir -p $OUTDIR $tmp - - -URLS=( - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2007.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2008.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2009.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2010.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2011.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2012.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2013.de.shuffled.gz" - "http://www.statmt.org/wmt15/training-monolingual-news-crawl-v2/news.2014.de.shuffled.v2.gz" - "http://data.statmt.org/wmt16/translation-task/news.2015.de.shuffled.gz" - "http://data.statmt.org/wmt17/translation-task/news.2016.de.shuffled.gz" - "http://data.statmt.org/wmt18/translation-task/news.2017.de.shuffled.deduped.gz" -) -FILES=( - "news.2007.de.shuffled.gz" - "news.2008.de.shuffled.gz" - "news.2009.de.shuffled.gz" - "news.2010.de.shuffled.gz" - "news.2011.de.shuffled.gz" - "news.2012.de.shuffled.gz" - "news.2013.de.shuffled.gz" - "news.2014.de.shuffled.v2.gz" - 
"news.2015.de.shuffled.gz" - "news.2016.de.shuffled.gz" - "news.2017.de.shuffled.deduped.gz" -) - - -cd $orig -for ((i=0;i<${#URLS[@]};++i)); do - file=${FILES[i]} - if [ -f $file ]; then - echo "$file already exists, skipping download" - else - url=${URLS[i]} - wget "$url" - fi -done -cd .. - - -if [ -f $tmp/monolingual.${SUBSAMPLE_SIZE}.${LANG} ]; then - echo "found monolingual sample, skipping shuffle/sample/tokenize" -else - gzip -c -d -k $(for FILE in "${FILES[@]}"; do echo $orig/$FILE; done) \ - | shuf -n $SUBSAMPLE_SIZE \ - | perl $NORM_PUNC $LANG \ - | perl $REM_NON_PRINT_CHAR \ - | perl $TOKENIZER -threads 8 -a -l $LANG \ - > $tmp/monolingual.${SUBSAMPLE_SIZE}.${LANG} -fi - - -if [ -f $tmp/bpe.monolingual.${SUBSAMPLE_SIZE}.${LANG} ]; then - echo "found BPE monolingual sample, skipping BPE step" -else - python $BPEROOT/apply_bpe.py -c $BPE_CODE \ - < $tmp/monolingual.${SUBSAMPLE_SIZE}.${LANG} \ - > $tmp/bpe.monolingual.${SUBSAMPLE_SIZE}.${LANG} -fi - - -if [ -f $tmp/bpe.monolingual.dedup.${SUBSAMPLE_SIZE}.${LANG} ]; then - echo "found deduplicated monolingual sample, skipping deduplication step" -else - python deduplicate_lines.py $tmp/bpe.monolingual.${SUBSAMPLE_SIZE}.${LANG} \ - > $tmp/bpe.monolingual.dedup.${SUBSAMPLE_SIZE}.${LANG} -fi - - -if [ -f $OUTDIR/bpe.monolingual.dedup.00.de ]; then - echo "found sharded data, skipping sharding step" -else - split --lines 1000000 --numeric-suffixes \ - --additional-suffix .${LANG} \ - $tmp/bpe.monolingual.dedup.${SUBSAMPLE_SIZE}.${LANG} \ - $OUTDIR/bpe.monolingual.dedup. -fi diff --git a/kosmos-g/fairseq/examples/backtranslation/prepare-wmt18en2de.sh b/kosmos-g/fairseq/examples/backtranslation/prepare-wmt18en2de.sh deleted file mode 100644 index f6fd27530..000000000 --- a/kosmos-g/fairseq/examples/backtranslation/prepare-wmt18en2de.sh +++ /dev/null @@ -1,135 +0,0 @@ -#!/bin/bash -# Adapted from https://github.com/facebookresearch/MIXER/blob/master/prepareData.sh - -echo 'Cloning Moses github repository (for tokenization scripts)...' -git clone https://github.com/moses-smt/mosesdecoder.git - -echo 'Cloning Subword NMT repository (for BPE pre-processing)...' -git clone https://github.com/rsennrich/subword-nmt.git - -SCRIPTS=mosesdecoder/scripts -TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl -CLEAN=$SCRIPTS/training/clean-corpus-n.perl -NORM_PUNC=$SCRIPTS/tokenizer/normalize-punctuation.perl -REM_NON_PRINT_CHAR=$SCRIPTS/tokenizer/remove-non-printing-char.perl -BPEROOT=subword-nmt/subword_nmt -BPE_TOKENS=32000 - -URLS=( - "http://statmt.org/wmt13/training-parallel-europarl-v7.tgz" - "http://statmt.org/wmt13/training-parallel-commoncrawl.tgz" - "http://data.statmt.org/wmt18/translation-task/training-parallel-nc-v13.tgz" - "http://data.statmt.org/wmt18/translation-task/rapid2016.tgz" - "http://data.statmt.org/wmt17/translation-task/dev.tgz" - "http://statmt.org/wmt14/test-full.tgz" -) -FILES=( - "training-parallel-europarl-v7.tgz" - "training-parallel-commoncrawl.tgz" - "training-parallel-nc-v13.tgz" - "rapid2016.tgz" - "dev.tgz" - "test-full.tgz" -) -CORPORA=( - "training/europarl-v7.de-en" - "commoncrawl.de-en" - "training-parallel-nc-v13/news-commentary-v13.de-en" - "rapid2016.de-en" -) - -if [ ! -d "$SCRIPTS" ]; then - echo "Please set SCRIPTS variable correctly to point to Moses scripts." 
-    exit 1
-fi
-
-OUTDIR=wmt18_en_de
-
-src=en
-tgt=de
-lang=en-de
-prep=$OUTDIR
-tmp=$prep/tmp
-orig=orig
-
-mkdir -p $orig $tmp $prep
-
-cd $orig
-
-for ((i=0;i<${#URLS[@]};++i)); do
-    file=${FILES[i]}
-    if [ -f $file ]; then
-        echo "$file already exists, skipping download"
-    else
-        url=${URLS[i]}
-        wget "$url"
-        if [ -f $file ]; then
-            echo "$url successfully downloaded."
-        else
-            echo "$url not successfully downloaded."
-            exit 1
-        fi
-        if [ ${file: -4} == ".tgz" ]; then
-            tar zxvf $file
-        elif [ ${file: -4} == ".tar" ]; then
-            tar xvf $file
-        fi
-    fi
-done
-cd ..
-
-echo "pre-processing train data..."
-for l in $src $tgt; do
-    rm $tmp/train.tags.$lang.tok.$l
-    for f in "${CORPORA[@]}"; do
-        cat $orig/$f.$l | \
-            perl $NORM_PUNC $l | \
-            perl $REM_NON_PRINT_CHAR | \
-            perl $TOKENIZER -threads 8 -a -l $l >> $tmp/train.tags.$lang.tok.$l
-    done
-done
-
-echo "pre-processing test data..."
-for l in $src $tgt; do
-    if [ "$l" == "$src" ]; then
-        t="src"
-    else
-        t="ref"
-    fi
-    grep '<seg id' $orig/test-full/newstest2014-deen-$t.$l.sgm | \
-        sed -e 's/<seg id="[0-9]*">\s*//g' | \
-        sed -e 's/\s*<\/seg>\s*//g' | \
-        sed -e "s/\’/\'/g" | \
-    perl $TOKENIZER -threads 8 -a -l $l > $tmp/test.$l
-    echo ""
-done
-
-echo "splitting train and valid..."
-for l in $src $tgt; do
-    awk '{if (NR%100 == 0)  print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/valid.$l
-    awk '{if (NR%100 != 0)  print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/train.$l
-done
-
-TRAIN=$tmp/train.de-en
-BPE_CODE=$prep/code
-rm -f $TRAIN
-for l in $src $tgt; do
-    cat $tmp/train.$l >> $TRAIN
-done
-
-echo "learn_bpe.py on ${TRAIN}..."
-python $BPEROOT/learn_bpe.py -s $BPE_TOKENS < $TRAIN > $BPE_CODE
-
-for L in $src $tgt; do
-    for f in train.$L valid.$L test.$L; do
-        echo "apply_bpe.py to ${f}..."
-        python $BPEROOT/apply_bpe.py -c $BPE_CODE < $tmp/$f > $tmp/bpe.$f
-    done
-done
-
-perl $CLEAN -ratio 1.5 $tmp/bpe.train $src $tgt $prep/train 1 250
-perl $CLEAN -ratio 1.5 $tmp/bpe.valid $src $tgt $prep/valid 1 250
-
-for L in $src $tgt; do
-    cp $tmp/bpe.test.$L $prep/test.$L
-done
diff --git a/kosmos-g/fairseq/examples/backtranslation/sacrebleu.sh b/kosmos-g/fairseq/examples/backtranslation/sacrebleu.sh
deleted file mode 100644
index a70da23f4..000000000
--- a/kosmos-g/fairseq/examples/backtranslation/sacrebleu.sh
+++ /dev/null
@@ -1,37 +0,0 @@
-#!/bin/bash
-
-if [ $# -ne 5 ]; then
-    echo "usage: $0 [dataset=wmt14/full] [langpair=en-de] [databin] [bpecode] [model]"
-    exit
-fi
-
-
-DATASET=$1
-LANGPAIR=$2
-DATABIN=$3
-BPECODE=$4
-MODEL=$5
-
-SRCLANG=$(echo $LANGPAIR | cut -d '-' -f 1)
-TGTLANG=$(echo $LANGPAIR | cut -d '-' -f 2)
-
-
-BPEROOT=examples/backtranslation/subword-nmt/subword_nmt
-if [ ! -e $BPEROOT ]; then
-    BPEROOT=subword-nmt/subword_nmt
-    if [ ! -e $BPEROOT ]; then
-        echo 'Cloning Subword NMT repository (for BPE pre-processing)...'
-        git clone https://github.com/rsennrich/subword-nmt.git
-    fi
-fi
-
-
-sacrebleu -t $DATASET -l $LANGPAIR --echo src \
-| sacremoses tokenize -a -l $SRCLANG -q \
-| python $BPEROOT/apply_bpe.py -c $BPECODE \
-| fairseq-interactive $DATABIN --path $MODEL \
-    -s $SRCLANG -t $TGTLANG \
-    --beam 5 --remove-bpe --buffer-size 1024 --max-tokens 8000 \
-| grep ^H- | cut -f 3- \
-| sacremoses detokenize -l $TGTLANG -q \
-| sacrebleu -t $DATASET -l $LANGPAIR
diff --git a/kosmos-g/fairseq/examples/backtranslation/tokenized_bleu.sh b/kosmos-g/fairseq/examples/backtranslation/tokenized_bleu.sh
deleted file mode 100644
index c6d6aaa19..000000000
--- a/kosmos-g/fairseq/examples/backtranslation/tokenized_bleu.sh
+++ /dev/null
@@ -1,46 +0,0 @@
-#!/bin/bash
-
-if [ $# -ne 5 ]; then
-    echo "usage: $0 [dataset=wmt14/full] [langpair=en-de] [databin] [bpecode] [model]"
-    exit
-fi
-
-
-DATASET=$1
-LANGPAIR=$2
-DATABIN=$3
-BPECODE=$4
-MODEL=$5
-
-SRCLANG=$(echo $LANGPAIR | cut -d '-' -f 1)
-TGTLANG=$(echo $LANGPAIR | cut -d '-' -f 2)
-
-
-BPEROOT=examples/backtranslation/subword-nmt/subword_nmt
-if [ ! -e $BPEROOT ]; then
-    BPEROOT=subword-nmt/subword_nmt
-    if [ ! -e $BPEROOT ]; then
-        echo 'Cloning Subword NMT repository (for BPE pre-processing)...'
-        git clone https://github.com/rsennrich/subword-nmt.git
-    fi
-fi
-
-
-TMP_REF=$(mktemp)
-
-sacrebleu -t $DATASET -l $LANGPAIR --echo ref -q \
-| sacremoses normalize -l $TGTLANG -q \
-| sacremoses tokenize -a -l $TGTLANG -q \
-> $TMP_REF
-
-sacrebleu -t $DATASET -l $LANGPAIR --echo src -q \
-| sacremoses normalize -l $SRCLANG -q \
-| sacremoses tokenize -a -l $SRCLANG -q \
-| python $BPEROOT/apply_bpe.py -c $BPECODE \
-| fairseq-interactive $DATABIN --path $MODEL \
-    -s $SRCLANG -t $TGTLANG \
-    --beam 5 --remove-bpe --buffer-size 1024 --max-tokens 8000 \
-| grep ^H- | cut -f 3- \
-| fairseq-score --ref $TMP_REF
-
-rm -f $TMP_REF
diff --git a/kosmos-g/fairseq/examples/bart/README.glue.md b/kosmos-g/fairseq/examples/bart/README.glue.md
deleted file mode 100644
index a010934e1..000000000
--- a/kosmos-g/fairseq/examples/bart/README.glue.md
+++ /dev/null
@@ -1,99 +0,0 @@
-# Fine-tuning BART on GLUE tasks
-
-### 1) Download the data from the GLUE website (https://gluebenchmark.com/tasks) using the following commands:
-```bash
-wget https://gist.githubusercontent.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e/raw/17b8dd0d724281ed7c3b2aeeda662b92809aadd5/download_glue_data.py
-python download_glue_data.py --data_dir glue_data --tasks all
-```
-
-### 2) Preprocess GLUE task data (same as RoBERTa):
-```bash
-./examples/roberta/preprocess_GLUE_tasks.sh glue_data <glue_task_name>
-```
-`glue_task_name` is one of the following:
-`{ALL, QQP, MNLI, QNLI, MRPC, RTE, STS-B, SST-2, CoLA}`
-Use `ALL` to preprocess all the GLUE tasks.
-
-### 3) Fine-tuning on GLUE task:
-Example fine-tuning command for the `RTE` task:
-```bash
-TOTAL_NUM_UPDATES=2036  # 10 epochs through RTE for bsz 16
-WARMUP_UPDATES=61       # 6 percent of the number of updates
-LR=1e-05                # Peak LR for polynomial LR scheduler.
-NUM_CLASSES=2
-MAX_SENTENCES=16        # Batch size.
-BART_PATH=/path/to/bart/model.pt
-
-CUDA_VISIBLE_DEVICES=0,1 fairseq-train RTE-bin/ \
-    --restore-file $BART_PATH \
-    --batch-size $MAX_SENTENCES \
-    --max-tokens 4400 \
-    --task sentence_prediction \
-    --add-prev-output-tokens \
-    --layernorm-embedding \
-    --share-all-embeddings \
-    --share-decoder-input-output-embed \
-    --reset-optimizer --reset-dataloader --reset-meters \
-    --required-batch-size-multiple 1 \
-    --init-token 0 \
-    --arch bart_large \
-    --criterion sentence_prediction \
-    --num-classes $NUM_CLASSES \
-    --dropout 0.1 --attention-dropout 0.1 \
-    --weight-decay 0.01 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-08 \
-    --clip-norm 0.0 \
-    --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \
-    --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \
-    --max-epoch 10 \
-    --find-unused-parameters \
-    --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric;
-```
-
-For each GLUE task, you will need to use the following command-line arguments:
-
-Model | MNLI | QNLI | QQP | RTE | SST-2 | MRPC | CoLA | STS-B
----|---|---|---|---|---|---|---|---
-`--num-classes` | 3 | 2 | 2 | 2 | 2 | 2 | 2 | 1
-`--lr` | 5e-6 | 1e-5 | 1e-5 | 1e-5 | 5e-6 | 2e-5 | 2e-5 | 2e-5
-`bsz` | 128 | 32 | 32 | 32 | 128 | 64 | 64 | 32
-`--total-num-update` | 30968 | 33112 | 113272 | 1018 | 5233 | 1148 | 1334 | 1799
-`--warmup-updates` | 1858 | 1986 | 6796 | 61 | 314 | 68 | 80 | 107
-
-For `STS-B` additionally add `--regression-target --best-checkpoint-metric loss` and remove `--maximize-best-checkpoint-metric`.
-
-**Note:**
-
-a) `--total-num-update` is used by the `polynomial_decay` scheduler and is calculated for `--max-epoch=10` and `--batch-size=32/64/128` depending on the task.
-
-b) The above command-line arguments and hyperparameters were tested on an Nvidia `V100` GPU with `32gb` of memory for each task. Depending on the GPU memory available to you, you can increase `--update-freq` and reduce `--batch-size`.
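-
-As a quick sanity check on these numbers, the effective batch size fairseq trains with is the per-GPU batch size times `--update-freq` times the number of GPUs. The helper below is our own illustration of that arithmetic (not part of the repo):
-
-```python
-def effective_batch_size(batch_size: int, update_freq: int, num_gpus: int) -> int:
-    """Sentences per optimizer step = per-GPU batch * gradient accumulation * #GPUs."""
-    return batch_size * update_freq * num_gpus
-
-# The RTE command above: 16 sentences/GPU on 2 GPUs with no gradient
-# accumulation, matching the bsz=32 column in the table.
-assert effective_batch_size(16, 1, 2) == 32
-# The same effective batch size on one GPU, with a smaller per-GPU batch:
-assert effective_batch_size(8, 4, 1) == 32
-```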
-
-### Inference on GLUE task
-After training the model as described in the previous step, you can perform inference with checkpoints in the `checkpoints/` directory using the following Python code snippet:
-
-```python
-from fairseq.models.bart import BARTModel
-
-bart = BARTModel.from_pretrained(
-    'checkpoints/',
-    checkpoint_file='checkpoint_best.pt',
-    data_name_or_path='RTE-bin'
-)
-
-label_fn = lambda label: bart.task.label_dictionary.string(
-    [label + bart.task.label_dictionary.nspecial]
-)
-ncorrect, nsamples = 0, 0
-bart.cuda()
-bart.eval()
-with open('glue_data/RTE/dev.tsv') as fin:
-    fin.readline()
-    for index, line in enumerate(fin):
-        tokens = line.strip().split('\t')
-        sent1, sent2, target = tokens[1], tokens[2], tokens[3]
-        tokens = bart.encode(sent1, sent2)
-        prediction = bart.predict('sentence_classification_head', tokens).argmax().item()
-        prediction_label = label_fn(prediction)
-        ncorrect += int(prediction_label == target)
-        nsamples += 1
-print('| Accuracy: ', float(ncorrect)/float(nsamples))
-```
diff --git a/kosmos-g/fairseq/examples/bart/README.md b/kosmos-g/fairseq/examples/bart/README.md
deleted file mode 100644
index 4050a724e..000000000
--- a/kosmos-g/fairseq/examples/bart/README.md
+++ /dev/null
@@ -1,228 +0,0 @@
-# BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
-
-[https://arxiv.org/abs/1910.13461](https://arxiv.org/abs/1910.13461)
-
-## Introduction
-
-BART is a sequence-to-sequence model trained with a denoising pretraining objective. We show that this pretraining objective is more generic and that we can match [RoBERTa](../roberta) results on SQuAD and GLUE and gain state-of-the-art results on summarization (XSum, CNN dataset), long form generative question answering (ELI5) and dialog response generation (ConvAI2). See the associated paper for more details.
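-
-To make the denoising objective concrete, here is a toy sketch of the text-infilling noise described in the paper (span lengths drawn from a Poisson distribution with λ=3, corrupting roughly 30% of tokens, each span replaced by a single mask token). This is an illustration only, not fairseq's actual implementation:
-
-```python
-import numpy as np
-
-rng = np.random.default_rng(0)
-
-def text_infilling(tokens, mask_ratio=0.3, poisson_lambda=3.0, mask="<mask>"):
-    """Toy BART-style noising: sampled spans collapse into a single mask token."""
-    out, i = [], 0
-    budget = int(len(tokens) * mask_ratio)  # total number of tokens to corrupt
-    while i < len(tokens):
-        if budget > 0 and rng.random() < mask_ratio:
-            span = max(1, int(rng.poisson(poisson_lambda)))
-            out.append(mask)  # the whole span becomes one mask token
-            i += span
-            budget -= span
-        else:
-            out.append(tokens[i])
-            i += 1
-    return out
-
-print(text_infilling("the quick brown fox jumps over the lazy dog".split()))
-# e.g. ['the', '<mask>', 'fox', 'jumps', 'over', '<mask>', 'dog']; the
-# seq2seq model is then trained to reconstruct the original, uncorrupted text.
-```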
- -## Pre-trained models - -Model | Description | # params | Download ----|---|---|--- -`bart.base` | BART model with 6 encoder and decoder layers | 140M | [bart.base.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/bart.base.tar.gz) -`bart.large` | BART model with 12 encoder and decoder layers | 400M | [bart.large.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/bart.large.tar.gz) -`bart.large.mnli` | `bart.large` finetuned on `MNLI` | 400M | [bart.large.mnli.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/bart.large.mnli.tar.gz) -`bart.large.cnn` | `bart.large` finetuned on `CNN-DM` | 400M | [bart.large.cnn.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/bart.large.cnn.tar.gz) -`bart.large.xsum` | `bart.large` finetuned on `Xsum` | 400M | [bart.large.xsum.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/bart.large.xsum.tar.gz) - -## Results - -**[GLUE (Wang et al., 2019)](https://gluebenchmark.com/)** -_(dev set, single model, single-task finetuning)_ - -Model | MNLI | QNLI | QQP | RTE | SST-2 | MRPC | CoLA | STS-B ----|---|---|---|---|---|---|---|--- -`roberta.large` | 90.2 | 94.7 | 92.2 | 86.6 | 96.4 | 90.9 | 68.0 | 92.4 -`bart.large` | 89.9 | 94.9 | 92.5 | 87.0 | 96.6 | 90.4 | 62.8 | 91.2 - -**[SQuAD (Rajpurkar et al., 2018)](https://rajpurkar.github.io/SQuAD-explorer/)** -_(dev set, no additional data used)_ - -Model | SQuAD 1.1 EM/F1 | SQuAD 2.0 EM/F1 ----|---|--- -`roberta.large` | 88.9/94.6 | 86.5/89.4 -`bart.large` | 88.8/94.6 | 86.1/89.2 - -**[CNN/Daily Mail](http://nlpprogress.com/english/summarization.html)** -_(test set, no additional data used)_ - -Model | R1 | R2 | RL ----|---|---|--- -`BERTSUMEXTABS` | 42.13 | 19.60 | 39.18 -`bart.large` | 44.16 | 21.28 | 40.90 - -## Example usage - -##### Load BART from torch.hub (PyTorch >= 1.1): -```python -import torch -bart = torch.hub.load('pytorch/fairseq', 'bart.large') -bart.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Load BART (for PyTorch 1.0 or custom models): -```python -# Download bart.large model -wget https://dl.fbaipublicfiles.com/fairseq/models/bart.large.tar.gz -tar -xzvf bart.large.tar.gz - -# Load the model in fairseq -from fairseq.models.bart import BARTModel -bart = BARTModel.from_pretrained('/path/to/bart.large', checkpoint_file='model.pt') -bart.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Apply Byte-Pair Encoding (BPE) to input text: -```python -tokens = bart.encode('Hello world!') -assert tokens.tolist() == [0, 31414, 232, 328, 2] -bart.decode(tokens) # 'Hello world!' 
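-# Note: in fairseq dictionaries index 0 is the BOS token <s> and index 2 is the
-# EOS token </s>, which is why the BPE ids above are bracketed by 0 and 2.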
-```
-
-##### Extract features from BART:
-```python
-# Extract the last layer's features
-last_layer_features = bart.extract_features(tokens)
-assert last_layer_features.size() == torch.Size([1, 5, 1024])
-
-# Extract all layer's features from decoder (layer 0 is the embedding layer)
-all_layers = bart.extract_features(tokens, return_all_hiddens=True)
-assert len(all_layers) == 13
-assert torch.all(all_layers[-1] == last_layer_features)
-```
-
-##### Use BART for sentence-pair classification tasks:
-```python
-# Download BART already finetuned for MNLI
-bart = torch.hub.load('pytorch/fairseq', 'bart.large.mnli')
-bart.eval()  # disable dropout for evaluation
-
-# Encode a pair of sentences and make a prediction
-tokens = bart.encode('BART is a seq2seq model.', 'BART is not sequence to sequence.')
-bart.predict('mnli', tokens).argmax()  # 0: contradiction
-
-# Encode another pair of sentences
-tokens = bart.encode('BART is denoising autoencoder.', 'BART is version of autoencoder.')
-bart.predict('mnli', tokens).argmax()  # 2: entailment
-```
-
-##### Register a new (randomly initialized) classification head:
-```python
-bart.register_classification_head('new_task', num_classes=3)
-logprobs = bart.predict('new_task', tokens)
-```
-
-##### Batched prediction:
-```python
-import torch
-from fairseq.data.data_utils import collate_tokens
-
-bart = torch.hub.load('pytorch/fairseq', 'bart.large.mnli')
-bart.eval()
-
-batch_of_pairs = [
-    ['BART is a seq2seq model.', 'BART is not sequence to sequence.'],
-    ['BART is denoising autoencoder.', 'BART is version of autoencoder.'],
-]
-
-batch = collate_tokens(
-    [bart.encode(pair[0], pair[1]) for pair in batch_of_pairs], pad_idx=1
-)
-
-logprobs = bart.predict('mnli', batch)
-print(logprobs.argmax(dim=1))
-# tensor([0, 2])
-```
-
-##### Using the GPU:
-```python
-bart.cuda()
-bart.predict('new_task', tokens)
-```
-
-#### Filling masks:
-
-BART can be used to fill multiple `<mask>` tokens in the input.
-```python
-bart = torch.hub.load('pytorch/fairseq', 'bart.base')
-bart.eval()
-bart.fill_mask(['The cat <mask> on the <mask>.'], topk=3, beam=10)
-# [[('The cat was on the ground.', tensor(-0.6183)), ('The cat was on the floor.', tensor(-0.6798)), ('The cat sleeps on the couch.', tensor(-0.6830))]]
-```
-
-Note that by default we enforce the output length to match the input length.
-This can be disabled by setting ``match_source_len=False``:
-```
-bart.fill_mask(['The cat <mask> on the <mask>.'], topk=3, beam=10, match_source_len=False)
-# [[('The cat was on the ground.', tensor(-0.6185)), ('The cat was asleep on the couch.', tensor(-0.6276)), ('The cat was on the floor.', tensor(-0.6800))]]
-```
-
-Example code to fill masks for a batch of sentences using the GPU:
-```
-bart.cuda()
-bart.fill_mask(['The cat <mask> on the <mask>.', 'The dog <mask> on the <mask>.'], topk=3, beam=10)
-# [[('The cat was on the ground.', tensor(-0.6183)), ('The cat was on the floor.', tensor(-0.6798)), ('The cat sleeps on the couch.', tensor(-0.6830))], [('The dog was on the ground.', tensor(-0.6190)), ('The dog lay on the ground.', tensor(-0.6711)),
-('The dog was asleep on the couch', tensor(-0.6796))]]
-```
-
-#### Evaluating the `bart.large.mnli` model:
-
-Example Python code snippet to evaluate accuracy on the MNLI `dev_matched` set.
-```python
-label_map = {0: 'contradiction', 1: 'neutral', 2: 'entailment'}
-ncorrect, nsamples = 0, 0
-bart.cuda()
-bart.eval()
-with open('glue_data/MNLI/dev_matched.tsv') as fin:
-    fin.readline()
-    for index, line in enumerate(fin):
-        tokens = line.strip().split('\t')
-        sent1, sent2, target = tokens[8], tokens[9], tokens[-1]
-        tokens = bart.encode(sent1, sent2)
-        prediction = bart.predict('mnli', tokens).argmax().item()
-        prediction_label = label_map[prediction]
-        ncorrect += int(prediction_label == target)
-        nsamples += 1
-    print('| Accuracy: ', float(ncorrect)/float(nsamples))
-# Expected output: 0.9010
-```
-
-#### Evaluating the `bart.large.cnn` model:
-- Follow the instructions [here](https://github.com/abisee/cnn-dailymail) to download and process the data into files such that `test.source` and `test.target` have one line for each non-tokenized sample.
-- For simpler preprocessing, you can also `wget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz`, although there is no guarantee of identical scores
-- `huggingface/transformers` has a simpler interface that supports [single-gpu](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/run_eval.py) and [multi-gpu](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/run_distributed_eval.py) beam search.
-  In `huggingface/transformers`, the BART models' paths are `facebook/bart-large-cnn` and `facebook/bart-large-xsum`.
-
-In `fairseq`, summaries can be generated using:
-
-```bash
-cp data-bin/cnn_dm/dict.source.txt checkpoints/
-python examples/bart/summarize.py \
-    --model-dir pytorch/fairseq \
-    --model-file bart.large.cnn \
-    --src cnn_dm/test.source \
-    --out cnn_dm/test.hypo
-```
-
-For calculating rouge, install `files2rouge` from [here](https://github.com/pltrdy/files2rouge).
-
-```bash
-export CLASSPATH=/path/to/stanford-corenlp-full-2016-10-31/stanford-corenlp-3.7.0.jar
-
-# Tokenize hypothesis and target files.
-cat test.hypo | java edu.stanford.nlp.process.PTBTokenizer -ioFileList -preserveLines > test.hypo.tokenized
-cat test.target | java edu.stanford.nlp.process.PTBTokenizer -ioFileList -preserveLines > test.hypo.target
-files2rouge test.hypo.tokenized test.hypo.target
-# Expected output: (ROUGE-2 Average_F: 0.21238)
-```
-
-
-## Finetuning
-
-- [Finetuning on GLUE](README.glue.md)
-- [Finetuning on CNN-DM](README.summarization.md)
-
-## Citation
-
-```bibtex
-@article{lewis2019bart,
-  title = {BART: Denoising Sequence-to-Sequence Pre-training for Natural
-Language Generation, Translation, and Comprehension},
-  author = {Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and
-  Abdelrahman Mohamed and Omer Levy and Veselin Stoyanov
-  and Luke Zettlemoyer},
-  journal={arXiv preprint arXiv:1910.13461},
-  year = {2019},
-}
-```
diff --git a/kosmos-g/fairseq/examples/bart/README.summarization.md b/kosmos-g/fairseq/examples/bart/README.summarization.md
deleted file mode 100644
index 8727584f2..000000000
--- a/kosmos-g/fairseq/examples/bart/README.summarization.md
+++ /dev/null
@@ -1,102 +0,0 @@
-# Fine-tuning BART on the CNN-Dailymail summarization task
-
-### 1) Download the CNN and Daily Mail data and preprocess it into data files with non-tokenized cased samples.
-
-Follow the instructions [here](https://github.com/abisee/cnn-dailymail) to download the original CNN and Daily Mail datasets. To preprocess the data, refer to the pointers in [this issue](https://github.com/pytorch/fairseq/issues/1391) or check out the code [here](https://github.com/artmatsak/cnn-dailymail).
-
-Follow the instructions [here](https://github.com/EdinburghNLP/XSum) to download the original Extreme Summarization datasets, or check out the code [here](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset). Please keep the dataset raw and make sure no tokenization or BPE is applied to it.
-
-### 2) BPE preprocess:
-
-```bash
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json'
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe'
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt'
-
-TASK=cnn_dm
-for SPLIT in train val
-do
-  for LANG in source target
-  do
-    python -m examples.roberta.multiprocessing_bpe_encoder \
-    --encoder-json encoder.json \
-    --vocab-bpe vocab.bpe \
-    --inputs "$TASK/$SPLIT.$LANG" \
-    --outputs "$TASK/$SPLIT.bpe.$LANG" \
-    --workers 60 \
-    --keep-empty;
-  done
-done
-```
-
-### 3) Binarize dataset:
-```bash
-fairseq-preprocess \
-  --source-lang "source" \
-  --target-lang "target" \
-  --trainpref "${TASK}/train.bpe" \
-  --validpref "${TASK}/val.bpe" \
-  --destdir "${TASK}-bin/" \
-  --workers 60 \
-  --srcdict dict.txt \
-  --tgtdict dict.txt;
-```
-
-### 4) Fine-tuning on the CNN-DM summarization task:
-Example fine-tuning on CNN-DM:
-```bash
-TOTAL_NUM_UPDATES=20000
-WARMUP_UPDATES=500
-LR=3e-05
-MAX_TOKENS=2048
-UPDATE_FREQ=4
-BART_PATH=/path/to/bart/model.pt
-
-CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 fairseq-train cnn_dm-bin \
-    --restore-file $BART_PATH \
-    --max-tokens $MAX_TOKENS \
-    --task translation \
-    --source-lang source --target-lang target \
-    --truncate-source \
-    --layernorm-embedding \
-    --share-all-embeddings \
-    --share-decoder-input-output-embed \
-    --reset-optimizer --reset-dataloader --reset-meters \
-    --required-batch-size-multiple 1 \
-    --arch bart_large \
-    --criterion label_smoothed_cross_entropy \
-    --label-smoothing 0.1 \
-    --dropout 0.1 --attention-dropout 0.1 \
-    --weight-decay 0.01 --optimizer adam --adam-betas "(0.9, 0.999)" --adam-eps 1e-08 \
-    --clip-norm 0.1 \
-    --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \
-    --fp16 --update-freq $UPDATE_FREQ \
-    --skip-invalid-size-inputs-valid-test \
-    --find-unused-parameters;
-```
-The above is expected to run on `1` node with `8 32gb-V100` GPUs.
-Expected training time is about `5 hours`. Training time can be reduced with distributed training on `4` nodes and `--update-freq 1`.
-
-Use `TOTAL_NUM_UPDATES=15000 UPDATE_FREQ=2` for the XSum task.
-
-### Inference on CNN-DM test data using the above trained checkpoint
-After training the model as described in the previous step, you can perform inference with checkpoints in the `checkpoints/` directory using `summarize.py`, for example:
-
-```bash
-cp data-bin/cnn_dm/dict.source.txt checkpoints/
-python examples/bart/summarize.py \
-    --model-dir checkpoints \
-    --model-file checkpoint_best.pt \
-    --src cnn_dm/test.source \
-    --out cnn_dm/test.hypo
-```
-For XSUM, which uses beam=6, lenpen=1.0, max_len_b=60, min_len=10:
-```bash
-cp data-bin/cnn_dm/dict.source.txt checkpoints/
-python examples/bart/summarize.py \
-    --model-dir checkpoints \
-    --model-file checkpoint_best.pt \
-    --src cnn_dm/test.source \
-    --out cnn_dm/test.hypo \
-    --xsum-kwargs
-```
diff --git a/kosmos-g/fairseq/examples/bart/summarize.py b/kosmos-g/fairseq/examples/bart/summarize.py
deleted file mode 100644
index 04435f80e..000000000
--- a/kosmos-g/fairseq/examples/bart/summarize.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-from fairseq.models.bart import BARTModel
-import argparse
-
-XSUM_KWARGS = dict(beam=6, lenpen=1.0, max_len_b=60, min_len=10, no_repeat_ngram_size=3)
-CNN_KWARGS = dict(beam=4, lenpen=2.0, max_len_b=140, min_len=55, no_repeat_ngram_size=3)
-
-
-@torch.no_grad()
-def generate(bart, infile, outfile="bart_hypo.txt", bsz=32, n_obs=None, **eval_kwargs):
-    count = 1
-
-    # if n_obs is not None: bsz = min(bsz, n_obs)
-
-    with open(infile) as source, open(outfile, "w") as fout:
-        sline = source.readline().strip()
-        slines = [sline]
-        for sline in source:
-            if n_obs is not None and count > n_obs:
-                break
-            if count % bsz == 0:
-                hypotheses_batch = bart.sample(slines, **eval_kwargs)
-                for hypothesis in hypotheses_batch:
-                    fout.write(hypothesis + "\n")
-                    fout.flush()
-                slines = []
-
-            slines.append(sline.strip())
-            count += 1
-
-        if slines != []:
-            hypotheses_batch = bart.sample(slines, **eval_kwargs)
-            for hypothesis in hypotheses_batch:
-                fout.write(hypothesis + "\n")
-                fout.flush()
-
-
-def main():
-    """
-    Usage::
-
-        python examples/bart/summarize.py \
-            --model-dir $HOME/bart.large.cnn \
-            --model-file model.pt \
-            --src $HOME/data-bin/cnn_dm/test.source
-    """
-    parser = argparse.ArgumentParser()
-    parser.add_argument(
-        "--model-dir",
-        required=True,
-        type=str,
-        default="bart.large.cnn/",
-        help="path containing model file and src_dict.txt",
-    )
-    parser.add_argument(
-        "--model-file",
-        default="checkpoint_best.pt",
-        help="where in model_dir are weights saved",
-    )
-    parser.add_argument(
-        "--src", default="test.source", help="text to summarize", type=str
-    )
-    parser.add_argument(
-        "--out", default="test.hypo", help="where to save summaries", type=str
-    )
-    parser.add_argument("--bsz", default=32, help="batch size", type=int)
-    parser.add_argument(
-        "--n", default=None, help="how many examples to summarize", type=int
-    )
-    parser.add_argument(
-        "--xsum-kwargs",
-        action="store_true",
-        default=False,
-        help="if true use XSUM_KWARGS else CNN_KWARGS",
-    )
-    args = parser.parse_args()
-    eval_kwargs = XSUM_KWARGS if args.xsum_kwargs else CNN_KWARGS
-    if args.model_dir == "pytorch/fairseq":
-        bart = torch.hub.load("pytorch/fairseq", args.model_file)
-    else:
-        bart = BARTModel.from_pretrained(
-            args.model_dir,
-            checkpoint_file=args.model_file,
-            data_name_or_path=args.model_dir,
-        )
-    bart = bart.eval()
-    if torch.cuda.is_available():
-        bart = bart.cuda().half()
-    generate(
-        bart, args.src, bsz=args.bsz, n_obs=args.n, outfile=args.out, **eval_kwargs
-    )
-
-
-if __name__ == "__main__":
-    main()
diff --git a/kosmos-g/fairseq/examples/byte_level_bpe/README.md b/kosmos-g/fairseq/examples/byte_level_bpe/README.md
deleted file mode 100644
index 657092660..000000000
--- a/kosmos-g/fairseq/examples/byte_level_bpe/README.md
+++ /dev/null
@@ -1,88 +0,0 @@
-# Neural Machine Translation with Byte-Level Subwords
-
-https://arxiv.org/abs/1909.03341
-
-We provide an implementation of byte-level byte-pair encoding (BBPE), taking IWSLT 2017 Fr-En translation as an
-example.
-
-## Data
-Get the data and generate the fairseq binary dataset:
-```bash
-bash ./get_data.sh
-```
-
-## Model Training
-Train a Transformer model with Bi-GRU embedding contextualization (implemented in `gru_transformer.py`):
-```bash
-# VOCAB=bytes
-# VOCAB=chars
-VOCAB=bbpe2048
-# VOCAB=bpe2048
-# VOCAB=bbpe4096
-# VOCAB=bpe4096
-# VOCAB=bpe16384
-```
-```bash
-fairseq-train "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \
-    --arch gru_transformer --encoder-layers 2 --decoder-layers 2 --dropout 0.3 --share-all-embeddings \
-    --optimizer adam --adam-betas '(0.9, 0.98)' \
-    --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
-    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
-    --log-format 'simple' --log-interval 100 --save-dir "checkpoints/${VOCAB}" \
-    --batch-size 100 --max-update 100000 --update-freq 2
-```
-
-## Generation
-`fairseq-generate` requires the bytes (BBPE) decoder to convert the byte-level representation back to characters:
-```bash
-# BPE=--bpe bytes
-# BPE=--bpe characters
-BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe2048.model
-# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe2048.model
-# BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe4096.model
-# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe4096.model
-# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe16384.model
-```
-
-```bash
-fairseq-generate "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \
-    --source-lang fr --gen-subset test --sacrebleu --path "checkpoints/${VOCAB}/checkpoint_last.pt" \
-    --tokenizer moses --moses-target-lang en ${BPE}
-```
-When using `fairseq-interactive`, the bytes (BBPE) encoder/decoder is required to tokenize input data and detokenize model predictions:
-```bash
-fairseq-interactive "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \
-    --path "checkpoints/${VOCAB}/checkpoint_last.pt" --input data/test.fr --tokenizer moses --moses-source-lang fr \
-    --moses-target-lang en ${BPE} --buffer-size 1000 --max-tokens 10000
-```
-
-## Results
-| Vocabulary | Model | BLEU |
-|:-------------:|:-------------:|:-------------:|
-| Joint BPE 16k ([Kudo, 2018](https://arxiv.org/abs/1804.10959)) | 512d LSTM 2+2 | 33.81 |
-| Joint BPE 16k | Transformer base 2+2 (w/ GRU) | 36.64 (36.72) |
-| Joint BPE 4k | Transformer base 2+2 (w/ GRU) | 35.49 (36.10) |
-| Joint BBPE 4k | Transformer base 2+2 (w/ GRU) | 35.61 (35.82) |
-| Joint BPE 2k | Transformer base 2+2 (w/ GRU) | 34.87 (36.13) |
-| Joint BBPE 2k | Transformer base 2+2 (w/ GRU) | 34.98 (35.43) |
-| Characters | Transformer base 2+2 (w/ GRU) | 31.78 (33.30) |
-| Bytes | Transformer base 2+2 (w/ GRU) | 31.57 (33.62) |
-
-
-## Citation
-```
-@misc{wang2019neural,
-    title={Neural Machine Translation with Byte-Level Subwords},
-    author={Changhan Wang and Kyunghyun Cho and Jiatao Gu},
-    year={2019},
-    eprint={1909.03341},
-    archivePrefix={arXiv},
-    primaryClass={cs.CL}
-}
-```
-
-
-## Contact
-Changhan Wang ([changhan@fb.com](mailto:changhan@fb.com)),
-Kyunghyun Cho ([kyunghyuncho@fb.com](mailto:kyunghyuncho@fb.com)),
-Jiatao Gu ([jgu@fb.com](mailto:jgu@fb.com))
diff --git a/kosmos-g/fairseq/examples/byte_level_bpe/get_bitext.py b/kosmos-g/fairseq/examples/byte_level_bpe/get_bitext.py
deleted file mode 100644
index 6ac1eeec1..000000000
--- a/kosmos-g/fairseq/examples/byte_level_bpe/get_bitext.py
+++ /dev/null
@@ -1,254 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-import argparse
-import os
-import os.path as op
-from collections import namedtuple
-from multiprocessing import cpu_count
-from typing import List, Optional
-
-import sentencepiece as sp
-from fairseq.data.encoders.byte_bpe import ByteBPE
-from fairseq.data.encoders.byte_utils import byte_encode
-from fairseq.data.encoders.bytes import Bytes
-from fairseq.data.encoders.characters import Characters
-from fairseq.data.encoders.moses_tokenizer import MosesTokenizer
-from fairseq.data.encoders.sentencepiece_bpe import SentencepieceBPE
-
-
-SPLITS = ["train", "valid", "test"]
-
-
-def _convert_xml(in_path: str, out_path: str):
-    with open(in_path) as f, open(out_path, "w") as f_o:
-        for s in f:
-            ss = s.strip()
-            if not ss.startswith("<seg"):
-                continue
-            ss = ss.replace("</seg>", "").split('">')
-            assert len(ss) == 2
-            f_o.write(ss[1].strip() + "\n")
-
-
-def _convert_train(in_path: str, out_path: str):
-    with open(in_path) as f, open(out_path, "w") as f_o:
-        for s in f:
-            ss = s.strip()
-            if ss.startswith("<"):
-                continue
-            f_o.write(ss.strip() + "\n")
-
-
-def _get_bytes(in_path: str, out_path: str):
-    with open(in_path) as f, open(out_path, "w") as f_o:
-        for s in f:
-            f_o.write(Bytes.encode(s.strip()) + "\n")
-
-
-def _get_chars(in_path: str, out_path: str):
-    with open(in_path) as f, open(out_path, "w") as f_o:
-        for s in f:
-            f_o.write(Characters.encode(s.strip()) + "\n")
-
-
-def pretokenize(in_path: str, out_path: str, src: str, tgt: str):
-    Args = namedtuple(
-        "Args",
-        [
-            "moses_source_lang",
-            "moses_target_lang",
-            "moses_no_dash_splits",
-            "moses_no_escape",
-        ],
-    )
-    args = Args(
-        moses_source_lang=src,
-        moses_target_lang=tgt,
-        moses_no_dash_splits=False,
-        moses_no_escape=False,
-    )
-    pretokenizer = MosesTokenizer(args)
-    with open(in_path) as f, open(out_path, "w") as f_o:
-        for s in f:
-            f_o.write(pretokenizer.encode(s.strip()) + "\n")
-
-
-def _convert_to_bchar(in_path_prefix: str, src: str, tgt: str, out_path: str):
-    with open(out_path, "w") as f_o:
-        for lang in [src, tgt]:
-            with open(f"{in_path_prefix}.{lang}") as f:
-                for s in f:
-                    f_o.write(byte_encode(s.strip()) + "\n")
-
-
-def _get_bpe(in_path: str, model_prefix: str, vocab_size: int):
-    arguments = [
-        f"--input={in_path}",
-        f"--model_prefix={model_prefix}",
-        f"--model_type=bpe",
-        f"--vocab_size={vocab_size}",
-        "--character_coverage=1.0",
-        "--normalization_rule_name=identity",
-        f"--num_threads={cpu_count()}",
-    ]
-    sp.SentencePieceTrainer.Train(" ".join(arguments))
-
-
-def _apply_bbpe(model_path: str, in_path: str, out_path: str):
-    Args = namedtuple("Args", ["sentencepiece_model_path"])
-    args = Args(sentencepiece_model_path=model_path)
-    tokenizer = ByteBPE(args)
-    with open(in_path) as f, open(out_path, "w") as f_o:
-        for s in f:
-            f_o.write(tokenizer.encode(s.strip()) + "\n")
-
-
-def _apply_bpe(model_path: str,
in_path: str, out_path: str): - Args = namedtuple("Args", ["sentencepiece_model"]) - args = Args(sentencepiece_model=model_path) - tokenizer = SentencepieceBPE(args) - with open(in_path) as f, open(out_path, "w") as f_o: - for s in f: - f_o.write(tokenizer.encode(s.strip()) + "\n") - - -def _concat_files(in_paths: List[str], out_path: str): - with open(out_path, "w") as f_o: - for p in in_paths: - with open(p) as f: - for r in f: - f_o.write(r) - - -def preprocess_iwslt17( - root: str, - src: str, - tgt: str, - bpe_size: Optional[int], - need_chars: bool, - bbpe_size: Optional[int], - need_bytes: bool, -): - # extract bitext - in_root = op.join(root, f"{src}-{tgt}") - for lang in [src, tgt]: - _convert_train( - op.join(in_root, f"train.tags.{src}-{tgt}.{lang}"), - op.join(root, f"train.{lang}"), - ) - _convert_xml( - op.join(in_root, f"IWSLT17.TED.dev2010.{src}-{tgt}.{lang}.xml"), - op.join(root, f"valid.{lang}"), - ) - _convert_xml( - op.join(in_root, f"IWSLT17.TED.tst2015.{src}-{tgt}.{lang}.xml"), - op.join(root, f"test.{lang}"), - ) - # pre-tokenize - for lang in [src, tgt]: - for split in SPLITS: - pretokenize( - op.join(root, f"{split}.{lang}"), - op.join(root, f"{split}.moses.{lang}"), - src, - tgt, - ) - # tokenize with BPE vocabulary - if bpe_size is not None: - # learn vocabulary - concated_train_path = op.join(root, "train.all") - _concat_files( - [op.join(root, "train.moses.fr"), op.join(root, "train.moses.en")], - concated_train_path, - ) - bpe_model_prefix = op.join(root, f"spm_bpe{bpe_size}") - _get_bpe(concated_train_path, bpe_model_prefix, bpe_size) - os.remove(concated_train_path) - # apply - for lang in [src, tgt]: - for split in SPLITS: - _apply_bpe( - bpe_model_prefix + ".model", - op.join(root, f"{split}.moses.{lang}"), - op.join(root, f"{split}.moses.bpe{bpe_size}.{lang}"), - ) - # tokenize with bytes vocabulary - if need_bytes: - for lang in [src, tgt]: - for split in SPLITS: - _get_bytes( - op.join(root, f"{split}.moses.{lang}"), - op.join(root, f"{split}.moses.bytes.{lang}"), - ) - # tokenize with characters vocabulary - if need_chars: - for lang in [src, tgt]: - for split in SPLITS: - _get_chars( - op.join(root, f"{split}.moses.{lang}"), - op.join(root, f"{split}.moses.chars.{lang}"), - ) - # tokenize with byte-level BPE vocabulary - if bbpe_size is not None: - # learn vocabulary - bchar_path = op.join(root, "train.bchar") - _convert_to_bchar(op.join(root, "train.moses"), src, tgt, bchar_path) - bbpe_model_prefix = op.join(root, f"spm_bbpe{bbpe_size}") - _get_bpe(bchar_path, bbpe_model_prefix, bbpe_size) - os.remove(bchar_path) - # apply - for lang in [src, tgt]: - for split in SPLITS: - _apply_bbpe( - bbpe_model_prefix + ".model", - op.join(root, f"{split}.moses.{lang}"), - op.join(root, f"{split}.moses.bbpe{bbpe_size}.{lang}"), - ) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--root", type=str, default="data") - parser.add_argument( - "--bpe-vocab", - default=None, - type=int, - help="Generate tokenized bitext with BPE of size K." - "Default to None (disabled).", - ) - parser.add_argument( - "--bbpe-vocab", - default=None, - type=int, - help="Generate tokenized bitext with BBPE of size K." 
- "Default to None (disabled).", - ) - parser.add_argument( - "--byte-vocab", - action="store_true", - help="Generate tokenized bitext with bytes vocabulary", - ) - parser.add_argument( - "--char-vocab", - action="store_true", - help="Generate tokenized bitext with chars vocabulary", - ) - args = parser.parse_args() - - preprocess_iwslt17( - args.root, - "fr", - "en", - args.bpe_vocab, - args.char_vocab, - args.bbpe_vocab, - args.byte_vocab, - ) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/byte_level_bpe/get_data.sh b/kosmos-g/fairseq/examples/byte_level_bpe/get_data.sh deleted file mode 100644 index c3d55d492..000000000 --- a/kosmos-g/fairseq/examples/byte_level_bpe/get_data.sh +++ /dev/null @@ -1,47 +0,0 @@ -#!/bin/bash - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -PY_BIN_ROOT= - -# PyPI dependency -${PY_BIN_ROOT}pip install sentencepiece sacremoses - -# Get data -if [ ! -d "data" ]; then - mkdir data -fi - -if [ ! -f "data/fr-en.tgz" ]; then - wget https://wit3.fbk.eu/archive/2017-01-trnted/texts/fr/en/fr-en.tgz -P data - tar xvf data/fr-en.tgz -C data -fi -${PY_BIN_ROOT}python get_bitext.py --bpe-vocab 16384 --byte-vocab --char-vocab -for VOCAB_SIZE in 2048 4096; do - ${PY_BIN_ROOT}python get_bitext.py --bpe-vocab ${VOCAB_SIZE} --bbpe-vocab ${VOCAB_SIZE} -done -rm -r data/fr-en data/fr-en.tgz - -# Generate binary dataset -${PY_BIN_ROOT}/fairseq-preprocess --source-lang fr --target-lang en --destdir data/bin_bpe16384 --joined-dictionary \ - --workers "$(nproc)" --trainpref data/train.moses.bpe16384 --validpref data/valid.moses.bpe16384 \ - --testpref data/test.moses.bpe16384 - -${PY_BIN_ROOT}/fairseq-preprocess --source-lang fr --target-lang en --destdir data/bin_bytes --joined-dictionary \ - --workers "$(nproc)" --trainpref data/train.moses.bytes --validpref data/valid.moses.bytes \ - --testpref data/test.moses.bytes - -${PY_BIN_ROOT}/fairseq-preprocess --source-lang fr --target-lang en --destdir data/bin_chars --joined-dictionary \ - --workers "$(nproc)" --trainpref data/train.moses.chars --validpref data/valid.moses.chars \ - --testpref data/test.moses.chars - -for VOCAB_SIZE in 2048 4096; do - for TYPE in bbpe bpe; do - ${PY_BIN_ROOT}/fairseq-preprocess --source-lang fr --target-lang en --destdir "data/bin_${TYPE}${VOCAB_SIZE}" \ - --joined-dictionary --workers "$(nproc)" --trainpref "data/train.moses.${TYPE}${VOCAB_SIZE}" \ - --validpref "data/valid.moses.${TYPE}${VOCAB_SIZE}" --testpref "data/test.moses.${TYPE}${VOCAB_SIZE}" - done -done diff --git a/kosmos-g/fairseq/examples/byte_level_bpe/gru_transformer.py b/kosmos-g/fairseq/examples/byte_level_bpe/gru_transformer.py deleted file mode 100644 index d4efa93a4..000000000 --- a/kosmos-g/fairseq/examples/byte_level_bpe/gru_transformer.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch.nn as nn -import torch.nn.functional as F -from fairseq.models import register_model, register_model_architecture -from fairseq.models.transformer import TransformerEncoder, TransformerModel - - -@register_model("gru_transformer") -class GRUTransformerModel(TransformerModel): - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - return GRUTransformerEncoder(args, src_dict, embed_tokens) - - -class GRUTransformerEncoder(TransformerEncoder): - def __init__(self, args, dictionary, embed_tokens): - super().__init__(args, dictionary, embed_tokens) - self.emb_ctx = nn.GRU( - input_size=embed_tokens.embedding_dim, - hidden_size=embed_tokens.embedding_dim // 2, - num_layers=1, - bidirectional=True, - ) - - def forward_embedding(self, src_tokens): - # embed tokens and positions - x = embed = self.embed_scale * self.embed_tokens(src_tokens) - if self.embed_positions is not None: - x = embed + self.embed_positions(src_tokens) - - # contextualize embeddings - x = x.transpose(0, 1) - x = self.dropout_module(x) - x, _ = self.emb_ctx.forward(x) - x = x.transpose(0, 1) - - if self.layernorm_embedding is not None: - x = self.layernorm_embedding(x) - x = self.dropout_module(x) - return x, embed - - -@register_model_architecture("gru_transformer", "gru_transformer") -def gru_transformer_base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.no_cross_attention = getattr(args, "no_cross_attention", False) - args.cross_self_attention = getattr(args, "cross_self_attention", False) - args.layer_wise_attention = getattr(args, "layer_wise_attention", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", 
args.decoder_embed_dim)
-
-    args.no_scale_embedding = getattr(args, "no_scale_embedding", False)
-    args.layernorm_embedding = getattr(args, "layernorm_embedding", False)
-
-
-@register_model_architecture("gru_transformer", "gru_transformer_big")
-def gru_transformer_big(args):
-    args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024)
-    args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096)
-    args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16)
-    args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
-    args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024)
-    args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096)
-    args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16)
-    args.dropout = getattr(args, "dropout", 0.3)
-    gru_transformer_base_architecture(args)
diff --git a/kosmos-g/fairseq/examples/camembert/README.md b/kosmos-g/fairseq/examples/camembert/README.md
deleted file mode 100644
index 5ef4fe3f1..000000000
--- a/kosmos-g/fairseq/examples/camembert/README.md
+++ /dev/null
@@ -1,75 +0,0 @@
-# CamemBERT: a Tasty French Language Model
-
-## Introduction
-
-[CamemBERT](https://arxiv.org/abs/1911.03894) is a RoBERTa-based language model pretrained on 138GB of French text.
-
-It is also available in [github.com/huggingface/transformers](https://github.com/huggingface/transformers/).
-
-## Pre-trained models
-
-| Model | #params | Download | Arch. | Training data |
-|--------------------------------|---------|--------------------------------------------------------------------------------------------------------------------------|-------|-----------------------------------|
-| `camembert` / `camembert-base` | 110M | [camembert-base.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base.tar.gz) | Base | OSCAR (138 GB of text) |
-| `camembert-large` | 335M | [camembert-large.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-large.tar.gz) | Large | CCNet (135 GB of text) |
-| `camembert-base-ccnet` | 110M | [camembert-base-ccnet.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-ccnet.tar.gz) | Base | CCNet (135 GB of text) |
-| `camembert-base-wikipedia-4gb` | 110M | [camembert-base-wikipedia-4gb.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-wikipedia-4gb.tar.gz) | Base | Wikipedia (4 GB of text) |
-| `camembert-base-oscar-4gb` | 110M | [camembert-base-oscar-4gb.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-oscar-4gb.tar.gz) | Base | Subsample of OSCAR (4 GB of text) |
-| `camembert-base-ccnet-4gb` | 110M | [camembert-base-ccnet-4gb.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/camembert-base-ccnet-4gb.tar.gz) | Base | Subsample of CCNet (4 GB of text) |
-
-## Example usage
-
-### fairseq
-##### Load CamemBERT from torch.hub (PyTorch >= 1.1):
-```python
-import torch
-camembert = torch.hub.load('pytorch/fairseq', 'camembert')
-camembert.eval()  # disable dropout (or leave in train mode to finetune)
-```
-
-##### Load CamemBERT (for PyTorch 1.0 or custom models):
-```bash
-# Download the camembert model
-wget https://dl.fbaipublicfiles.com/fairseq/models/camembert-base.tar.gz
-tar -xzvf camembert-base.tar.gz
-```
-```python
-# Load the model in fairseq
-from fairseq.models.roberta import CamembertModel
-camembert = CamembertModel.from_pretrained('/path/to/camembert')
-camembert.eval()  # disable dropout (or leave in train mode to finetune)
-```
-
-##### Filling masks:
-```python
-masked_line = 'Le camembert est <mask> :)'
-camembert.fill_mask(masked_line, topk=3)
-# [('Le camembert est délicieux :)', 0.4909118115901947, ' délicieux'),
-#  ('Le camembert est excellent :)', 0.10556942224502563, ' excellent'),
-#  ('Le camembert est succulent :)', 0.03453322499990463, ' succulent')]
-```
-
-##### Extract features from CamemBERT:
-```python
-# Extract the last layer's features
-line = "J'aime le camembert !"
-tokens = camembert.encode(line)
-last_layer_features = camembert.extract_features(tokens)
-assert last_layer_features.size() == torch.Size([1, 10, 768])
-
-# Extract all layers' features (layer 0 is the embedding layer)
-all_layers = camembert.extract_features(tokens, return_all_hiddens=True)
-assert len(all_layers) == 13
-assert torch.all(all_layers[-1] == last_layer_features)
-```
-
-## Citation
-If you use our work, please cite:
-
-```bibtex
-@inproceedings{martin2020camembert,
-    title={CamemBERT: a Tasty French Language Model},
-    author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
-    booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
-    year={2020}
-}
-```
diff --git a/kosmos-g/fairseq/examples/constrained_decoding/README.md b/kosmos-g/fairseq/examples/constrained_decoding/README.md
deleted file mode 100644
index e04b8b6a0..000000000
--- a/kosmos-g/fairseq/examples/constrained_decoding/README.md
+++ /dev/null
@@ -1,123 +0,0 @@
-# (Vectorized) Lexically constrained decoding with dynamic beam allocation
-
-This page provides instructions for how to use lexically constrained decoding in Fairseq.
-Fairseq implements the code described in the following papers:
-
-* [Fast Lexically Constrained Decoding With Dynamic Beam Allocation](https://www.aclweb.org/anthology/N18-1119/) (Post & Vilar, 2018)
-* [Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting](https://www.aclweb.org/anthology/N19-1090/) (Hu et al., 2019)
-
-## Quick start
-
-Constrained search is enabled by adding the command-line argument `--constraints` to `fairseq-interactive`.
-Constraints are appended to each line of input, separated by tabs. Each constraint (one or more tokens)
-is a separate field.
-
-The following command, using [Fairseq's WMT19 German--English model](https://github.com/pytorch/fairseq/blob/main/examples/wmt19/README.md),
-translates the sentence *Die maschinelle Übersetzung ist schwer zu kontrollieren.* with the constraints
-"hard" and "to influence".
-
-    echo -e "Die maschinelle Übersetzung ist schwer zu kontrollieren.\thard\tto influence" \
-    | normalize.py | tok.py \
-    | fairseq-interactive /path/to/model \
-      --path /path/to/model/model1.pt \
-      --bpe fastbpe \
-      --bpe-codes /path/to/model/bpecodes \
-      --constraints \
-      -s de -t en \
-      --beam 10
-
-(tok.py and normalize.py can be found in the same directory as this README; they are just shortcuts around Fairseq's WMT19 preprocessing).
-This will generate the following output:
-
-    [snip]
-    S-0 Die masch@@ in@@ elle Über@@ setzung ist schwer zu kontrollieren .
-    W-0 1.844 seconds
-    C-0 hard
-    C-0 to influence
-    H-0 -1.5333266258239746 Mach@@ ine trans@@ lation is hard to influence .
-    D-0 -1.5333266258239746 Machine translation is hard to influence .
-    P-0 -0.5434 -0.1423 -0.1930 -0.1415 -0.2346 -1.8031 -0.1701 -11.7727 -0.1815 -0.1511
-
-By default, constraints are generated in the order supplied, with any number (zero or more) of tokens generated
-between constraints. If you wish for the decoder to order the constraints, then use `--constraints unordered`.
-Note that you may want to use a larger beam.
-
-## Implementation details
-
-The heart of the implementation is in `fairseq/search.py`, which adds a `LexicallyConstrainedBeamSearch` instance.
-This instance of beam search tracks the progress of each hypothesis in the beam through the set of constraints
-provided for each input sentence. It does this using one of two classes, both found in `fairseq/token_generation_constraints.py`:
-
-* OrderedConstraintState: assumes the `C` input constraints will be generated in the provided order
-* UnorderedConstraintState: tries to apply the `C` (phrasal) constraints in all `C!` orders
-
-## Differences from Sockeye
-
-There are a number of [differences from Sockeye's implementation](https://awslabs.github.io/sockeye/inference.html#lexical-constraints).
-
-* Generating constraints in the order supplied (the default option here) is not available in Sockeye.
-* Due to an improved beam allocation method, there is no need to prune the beam.
-* Again due to better allocation, beam sizes as low as 10 or even 5 are often sufficient.
-* [The vector extensions described in Hu et al.](https://github.com/edwardjhu/sockeye/tree/trie_constraints) (NAACL 2019) were never merged
-  into the main Sockeye branch.
-
-## Citation
-
-The paper first describing lexical constraints for seq2seq decoding is:
-
-```bibtex
-@inproceedings{hokamp-liu-2017-lexically,
-    title = "Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search",
-    author = "Hokamp, Chris and
-      Liu, Qun",
-    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
-    month = jul,
-    year = "2017",
-    address = "Vancouver, Canada",
-    publisher = "Association for Computational Linguistics",
-    url = "https://www.aclweb.org/anthology/P17-1141",
-    doi = "10.18653/v1/P17-1141",
-    pages = "1535--1546",
-}
-```
-
-The fairseq implementation uses the extensions described in
-
-```bibtex
-@inproceedings{post-vilar-2018-fast,
-    title = "Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation",
-    author = "Post, Matt and
-      Vilar, David",
-    booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)",
-    month = jun,
-    year = "2018",
-    address = "New Orleans, Louisiana",
-    publisher = "Association for Computational Linguistics",
-    url = "https://www.aclweb.org/anthology/N18-1119",
-    doi = "10.18653/v1/N18-1119",
-    pages = "1314--1324",
-}
-```
-
-and
-
-```bibtex
-@inproceedings{hu-etal-2019-improved,
-    title = "Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting",
-    author = "Hu, J. Edward and
-      Khayrallah, Huda and
-      Culkin, Ryan and
-      Xia, Patrick and
-      Chen, Tongfei and
-      Post, Matt and
-      Van Durme, Benjamin",
-    booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
-    month = jun,
-    year = "2019",
-    address = "Minneapolis, Minnesota",
-    publisher = "Association for Computational Linguistics",
-    url = "https://www.aclweb.org/anthology/N19-1090",
-    doi = "10.18653/v1/N19-1090",
-    pages = "839--850",
-}
-```
diff --git a/kosmos-g/fairseq/examples/constrained_decoding/normalize.py b/kosmos-g/fairseq/examples/constrained_decoding/normalize.py
deleted file mode 100644
index 4ae2b5111..000000000
--- a/kosmos-g/fairseq/examples/constrained_decoding/normalize.py
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/usr/bin/env python3
-#
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import sys
-
-from sacremoses.normalize import MosesPunctNormalizer
-
-
-def main(args):
-    normalizer = MosesPunctNormalizer(lang=args.lang, penn=args.penn)
-    for line in sys.stdin:
-        print(normalizer.normalize(line.rstrip()), flush=True)
-
-
-if __name__ == "__main__":
-    import argparse
-
-    parser = argparse.ArgumentParser()
-    parser.add_argument("--lang", "-l", default="en")
-    parser.add_argument("--penn", "-p", action="store_true")
-    args = parser.parse_args()
-
-    main(args)
diff --git a/kosmos-g/fairseq/examples/constrained_decoding/tok.py b/kosmos-g/fairseq/examples/constrained_decoding/tok.py
deleted file mode 100644
index b1f888a8c..000000000
--- a/kosmos-g/fairseq/examples/constrained_decoding/tok.py
+++ /dev/null
@@ -1,34 +0,0 @@
-#!/usr/bin/env python3
-#
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import sys
-
-import sacremoses
-
-
-def main(args):
-    """Tokenizes, preserving tabs"""
-    mt = sacremoses.MosesTokenizer(lang=args.lang)
-
-    def tok(s):
-        return mt.tokenize(s, return_str=True)
-
-    for line in sys.stdin:
-        parts = list(map(tok, line.split("\t")))
-        print(*parts, sep="\t", flush=True)
-
-
-if __name__ == "__main__":
-    import argparse
-
-    parser = argparse.ArgumentParser()
-    parser.add_argument("--lang", "-l", default="en")
-    parser.add_argument("--penn", "-p", action="store_true")
-    parser.add_argument("--fields", "-f", help="fields to tokenize")
-    args = parser.parse_args()
-
-    main(args)
diff --git a/kosmos-g/fairseq/examples/conv_seq2seq/README.md b/kosmos-g/fairseq/examples/conv_seq2seq/README.md
deleted file mode 100644
index 95fe7e790..000000000
--- a/kosmos-g/fairseq/examples/conv_seq2seq/README.md
+++ /dev/null
@@ -1,25 +0,0 @@
-# Convolutional Sequence to Sequence Learning (Gehring et al., 2017)
-
-## Pre-trained models
-
-Description | Dataset | Model | Test set(s)
----|---|---|---
-Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2) | newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2) <br> newstest2012/2013: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.ntst1213.tar.bz2)
-Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-German](http://statmt.org/wmt14/translation-task.html#Download) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-de.fconv-py.tar.bz2) | newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-de.newstest2014.tar.bz2)
-Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT17 English-German](http://statmt.org/wmt17/translation-task.html#Download) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt17.v2.en-de.fconv-py.tar.bz2) | newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.v2.en-de.newstest2014.tar.bz2)
-
-## Example usage
-
-See the [translation README](../translation/README.md) for instructions on reproducing results for WMT'14 En-De and
-WMT'14 En-Fr using the `fconv_wmt_en_de` and `fconv_wmt_en_fr` model architectures.
-
-## Citation
-
-```bibtex
-@inproceedings{gehring2017convs2s,
-  title = {Convolutional Sequence to Sequence Learning},
-  author = {Gehring, Jonas and Auli, Michael and Grangier, David and Yarats, Denis and Dauphin, Yann N},
-  booktitle = {Proc. of ICML},
-  year = 2017,
-}
-```
diff --git a/kosmos-g/fairseq/examples/criss/README.md b/kosmos-g/fairseq/examples/criss/README.md
deleted file mode 100644
index 4689ed7c1..000000000
--- a/kosmos-g/fairseq/examples/criss/README.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# Cross-lingual Retrieval for Iterative Self-Supervised Training
-
-https://arxiv.org/pdf/2006.09526.pdf
-
-## Introduction
-
-CRISS is a multilingual sequence-to-sequence pretraining method in which mining and training are applied iteratively, improving cross-lingual alignment and translation ability at the same time.
-
-## Requirements:
-
-* faiss: https://github.com/facebookresearch/faiss
-* mosesdecoder: https://github.com/moses-smt/mosesdecoder
-* flores: https://github.com/facebookresearch/flores
-* LASER: https://github.com/facebookresearch/LASER
-
-## Unsupervised Machine Translation
-##### 1. Download and decompress CRISS checkpoints
-```
-cd examples/criss
-wget https://dl.fbaipublicfiles.com/criss/criss_3rd_checkpoints.tar.gz
-tar -xf criss_3rd_checkpoints.tar.gz
-```
-##### 2. Download and preprocess the Flores test dataset
-Make sure to run all scripts from the examples/criss directory.
-```
-bash download_and_preprocess_flores_test.sh
-```
-
-##### 3. Run Evaluation on Sinhala-English
-```
-bash unsupervised_mt/eval.sh
-```
-
-## Sentence Retrieval
-##### 1. Download and preprocess the Tatoeba dataset
-```
-bash download_and_preprocess_tatoeba.sh
-```
-
-##### 2. Run Sentence Retrieval on Tatoeba Kazakh-English
-```
-bash sentence_retrieval/sentence_retrieval_tatoeba.sh
-```
-
-## Mining
-##### 1. Install faiss
-Follow the instructions at https://github.com/facebookresearch/faiss/blob/master/INSTALL.md
-##### 2. Mine pseudo-parallel data between Kazakh and English
-```
-bash mining/mine_example.sh
-```
-
-## Citation
-```bibtex
-@article{tran2020cross,
-  title={Cross-lingual retrieval for iterative self-supervised training},
-  author={Tran, Chau and Tang, Yuqing and Li, Xian and Gu, Jiatao},
-  journal={arXiv preprint arXiv:2006.09526},
-  year={2020}
-}
-```
diff --git a/kosmos-g/fairseq/examples/criss/download_and_preprocess_flores_test.sh b/kosmos-g/fairseq/examples/criss/download_and_preprocess_flores_test.sh
deleted file mode 100644
index ed4b390fb..000000000
--- a/kosmos-g/fairseq/examples/criss/download_and_preprocess_flores_test.sh
+++ /dev/null
@@ -1,64 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
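-
-# Downloads the FLORES wikipedia ne-en / si-en test sets, encodes them with
-# the CRISS SentencePiece model, and binarizes them with fairseq-preprocess
-# into data_tmp/{ne_NP,si_LK}-en_XX-flores for unsupervised_mt/eval.sh.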
-
-SPM_ENCODE=flores/scripts/spm_encode.py
-DATA=data_tmp
-SPM_MODEL=criss_checkpoints/sentence.bpe.model
-DICT=criss_checkpoints/dict.txt
-
-download_data() {
-    CORPORA=$1
-    URL=$2
-
-    if [ -f $CORPORA ]; then
-        echo "$CORPORA already exists, skipping download"
-    else
-        echo "Downloading $URL"
-        wget $URL -O $CORPORA --no-check-certificate || rm -f $CORPORA
-        if [ -f $CORPORA ]; then
-            echo "$URL successfully downloaded."
-        else
-            echo "$URL not successfully downloaded."
-            rm -f $CORPORA
-        fi
-    fi
-}
-
-if [[ -d flores ]]; then
-    echo "flores already cloned"
-else
-    git clone https://github.com/facebookresearch/flores
-fi
-
-mkdir -p $DATA
-download_data $DATA/wikipedia_en_ne_si_test_sets.tgz "https://github.com/facebookresearch/flores/raw/master/data/wikipedia_en_ne_si_test_sets.tgz"
-pushd $DATA
-pwd
-tar -vxf wikipedia_en_ne_si_test_sets.tgz
-popd
-
-
-for lang in ne_NP si_LK; do
-    datadir=$DATA/${lang}-en_XX-flores
-    rm -rf $datadir
-    mkdir -p $datadir
-    TEST_PREFIX=$DATA/wikipedia_en_ne_si_test_sets/wikipedia.test
-    python $SPM_ENCODE \
-        --model ${SPM_MODEL} \
-        --output_format=piece \
-        --inputs ${TEST_PREFIX}.${lang:0:2}-en.${lang:0:2} ${TEST_PREFIX}.${lang:0:2}-en.en \
-        --outputs $datadir/test.bpe.${lang}-en_XX.${lang} $datadir/test.bpe.${lang}-en_XX.en_XX
-
-    # binarize data
-    fairseq-preprocess \
-        --source-lang ${lang} --target-lang en_XX \
-        --testpref $datadir/test.bpe.${lang}-en_XX \
-        --destdir $datadir \
-        --srcdict ${DICT} \
-        --joined-dictionary \
-        --workers 4
-done
diff --git a/kosmos-g/fairseq/examples/criss/download_and_preprocess_tatoeba.sh b/kosmos-g/fairseq/examples/criss/download_and_preprocess_tatoeba.sh
deleted file mode 100644
index 7ed64f017..000000000
--- a/kosmos-g/fairseq/examples/criss/download_and_preprocess_tatoeba.sh
+++ /dev/null
@@ -1,46 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
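-
-# Clones flores and LASER, encodes the LASER Tatoeba v1 test sets for 16
-# languages paired with English using the CRISS SentencePiece model, and
-# binarizes them into data_tmp/<lang>-en_XX-tatoeba for the sentence-retrieval
-# and mining scripts.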
-
-SPM_ENCODE=flores/scripts/spm_encode.py
-DATA=data_tmp
-SPM_MODEL=criss_checkpoints/sentence.bpe.model
-DICT=criss_checkpoints/dict.txt
-
-if [[ -d flores ]]; then
-    echo "flores already cloned"
-else
-    git clone https://github.com/facebookresearch/flores
-fi
-if [[ -d LASER ]]; then
-    echo "LASER already cloned"
-else
-    git clone https://github.com/facebookresearch/LASER
-fi
-mkdir -p data_tmp
-declare -A lang_tatoeba_map=( ["ar_AR"]="ara" ["de_DE"]="deu" ["es_XX"]="spa" ["et_EE"]="est" ["fi_FI"]="fin" ["fr_XX"]="fra" ["hi_IN"]="hin" ["it_IT"]="ita" ["ja_XX"]="jpn" ["ko_KR"]="kor" ["kk_KZ"]="kaz" ["nl_XX"]="nld" ["ru_RU"]="rus" ["tr_TR"]="tur" ["vi_VN"]="vie" ["zh_CN"]="cmn")
-for lang in ar_AR de_DE es_XX et_EE fi_FI fr_XX hi_IN it_IT ja_XX kk_KZ ko_KR nl_XX ru_RU tr_TR vi_VN zh_CN; do
-    lang_tatoeba=${lang_tatoeba_map[$lang]}
-    echo $lang_tatoeba
-    datadir=$DATA/${lang}-en_XX-tatoeba
-    rm -rf $datadir
-    mkdir -p $datadir
-    TEST_PREFIX=LASER/data/tatoeba/v1/tatoeba
-    python $SPM_ENCODE \
-        --model ${SPM_MODEL} \
-        --output_format=piece \
-        --inputs ${TEST_PREFIX}.${lang_tatoeba}-eng.${lang_tatoeba} ${TEST_PREFIX}.${lang_tatoeba}-eng.eng \
-        --outputs $datadir/test.bpe.${lang}-en_XX.${lang} $datadir/test.bpe.${lang}-en_XX.en_XX
-
-    # binarize data
-    fairseq-preprocess \
-        --source-lang ${lang} --target-lang en_XX \
-        --testpref $datadir/test.bpe.${lang}-en_XX \
-        --destdir $datadir \
-        --srcdict ${DICT} \
-        --joined-dictionary \
-        --workers 4
-done
diff --git a/kosmos-g/fairseq/examples/criss/mining/mine.py b/kosmos-g/fairseq/examples/criss/mining/mine.py
deleted file mode 100644
index c872da196..000000000
--- a/kosmos-g/fairseq/examples/criss/mining/mine.py
+++ /dev/null
@@ -1,240 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
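-
-# Margin-based bitext mining over the encoder embeddings saved by
-# save_encoder.py: sharded faiss k-NN search is run in both directions
-# (x->y and y->x), each candidate pair is scored with margin(a, b) = a / b
-# where b is the average of its forward and backward neighborhood means, and
-# pairs are kept while score > --threshold (or until --min-count is reached).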
-import argparse -import glob -from subprocess import check_call - -try: - import faiss - - has_faiss = True -except ImportError: - has_faiss = False -import numpy as np - - -GB = 1024 * 1024 * 1024 - - -def call(cmd): - print(cmd) - check_call(cmd, shell=True) - - -def get_batches(directory, lang, prefix="all_avg_pool"): - print(f"Finding in {directory}/{prefix}.{lang}*") - files = glob.glob(f"{directory}/{prefix}.{lang}*") - emb_files = [] - txt_files = [] - for emb_fi in files: - emb_files.append(emb_fi) - txt_fi = emb_fi.replace(prefix, "sentences") - txt_files.append(txt_fi) - return emb_files, txt_files - - -def load_batch(emb_file, dim): - embeddings = np.fromfile(emb_file, dtype=np.float32) - num_rows = int(embeddings.shape[0] / dim) - embeddings = embeddings.reshape((num_rows, dim)) - faiss.normalize_L2(embeddings) - return embeddings - - -def knnGPU_sharded(x_batches_f, y_batches_f, dim, k, direction="x2y"): - if not has_faiss: - raise ImportError("Please install Faiss") - sims = [] - inds = [] - xfrom = 0 - xto = 0 - for x_batch_f in x_batches_f: - yfrom = 0 - yto = 0 - x_batch = load_batch(x_batch_f, dim) - xto = xfrom + x_batch.shape[0] - bsims, binds = [], [] - for y_batch_f in y_batches_f: - y_batch = load_batch(y_batch_f, dim) - neighbor_size = min(k, y_batch.shape[0]) - yto = yfrom + y_batch.shape[0] - print("{}-{} -> {}-{}".format(xfrom, xto, yfrom, yto)) - idx = faiss.IndexFlatIP(dim) - idx = faiss.index_cpu_to_all_gpus(idx) - idx.add(y_batch) - bsim, bind = idx.search(x_batch, neighbor_size) - - bsims.append(bsim) - binds.append(bind + yfrom) - yfrom += y_batch.shape[0] - del idx - del y_batch - bsims = np.concatenate(bsims, axis=1) - binds = np.concatenate(binds, axis=1) - aux = np.argsort(-bsims, axis=1) - sim_batch = np.zeros((x_batch.shape[0], k), dtype=np.float32) - ind_batch = np.zeros((x_batch.shape[0], k), dtype=np.int64) - for i in range(x_batch.shape[0]): - for j in range(k): - sim_batch[i, j] = bsims[i, aux[i, j]] - ind_batch[i, j] = binds[i, aux[i, j]] - sims.append(sim_batch) - inds.append(ind_batch) - xfrom += x_batch.shape[0] - del x_batch - sim = np.concatenate(sims, axis=0) - ind = np.concatenate(inds, axis=0) - return sim, ind - - -def score(sim, fwd_mean, bwd_mean, margin): - return margin(sim, (fwd_mean + bwd_mean) / 2) - - -def score_candidates( - sim_mat, candidate_inds, fwd_mean, bwd_mean, margin, verbose=False -): - print(" - scoring {:d} candidates".format(sim_mat.shape[0])) - scores = np.zeros(candidate_inds.shape) - for i in range(scores.shape[0]): - for j in range(scores.shape[1]): - k = int(candidate_inds[i, j]) - scores[i, j] = score(sim_mat[i, j], fwd_mean[i], bwd_mean[k], margin) - return scores - - -def load_text(files): - all_sentences = [] - for fi in files: - with open(fi) as sentence_fi: - for line in sentence_fi: - all_sentences.append(line.strip()) - print(f"Read {len(all_sentences)} sentences") - return all_sentences - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description="Mine bitext") - parser.add_argument("--src-lang", help="Source language") - parser.add_argument("--tgt-lang", help="Target language") - parser.add_argument( - "--dict-path", help="Path to dictionary file", default="dict.txt" - ) - parser.add_argument( - "--spm-path", help="Path to SPM model file", default="sentence.bpe.model" - ) - parser.add_argument("--dim", type=int, default=1024, help="Embedding dimension") - parser.add_argument("--mem", type=int, default=5, help="Memory in GB") - parser.add_argument("--src-dir", help="Source 
directory") - parser.add_argument("--tgt-dir", help="Target directory") - parser.add_argument("--output", help="Output path") - parser.add_argument( - "--neighborhood", type=int, default=4, help="Embedding dimension" - ) - parser.add_argument( - "--threshold", type=float, default=1.06, help="Threshold on mined bitext" - ) - parser.add_argument( - "--valid-size", - type=int, - default=2000, - help="Number of sentences used for validation set", - ) - parser.add_argument( - "--min-count", - type=int, - default=50000, - help="Min num sentences used for each language", - ) - args = parser.parse_args() - - x_batches_f, x_sents_f = get_batches(args.src_dir, args.src_lang) - y_batches_f, y_sents_f = get_batches(args.tgt_dir, args.tgt_lang) - margin = lambda a, b: a / b - y2x_sim, y2x_ind = knnGPU_sharded( - y_batches_f, x_batches_f, args.dim, args.neighborhood, direction="y2x" - ) - x2y_sim, x2y_ind = knnGPU_sharded( - x_batches_f, y_batches_f, args.dim, args.neighborhood, direction="x2y" - ) - - x2y_mean = x2y_sim.mean(axis=1) - y2x_mean = y2x_sim.mean(axis=1) - fwd_scores = score_candidates(x2y_sim, x2y_ind, x2y_mean, y2x_mean, margin) - bwd_scores = score_candidates(y2x_sim, y2x_ind, y2x_mean, x2y_mean, margin) - fwd_best = x2y_ind[np.arange(x2y_sim.shape[0]), fwd_scores.argmax(axis=1)] - bwd_best = y2x_ind[np.arange(y2x_sim.shape[0]), bwd_scores.argmax(axis=1)] - indices = np.stack( - ( - np.concatenate((np.arange(x2y_ind.shape[0]), bwd_best)), - np.concatenate((fwd_best, np.arange(y2x_ind.shape[0]))), - ), - axis=1, - ) - scores = np.concatenate((fwd_scores.max(axis=1), bwd_scores.max(axis=1))) - - x_sentences = load_text(x_sents_f) - y_sentences = load_text(y_sents_f) - - threshold = args.threshold - min_count = args.min_count - seen_src, seen_trg = set(), set() - directory = args.output - call(f"mkdir -p {directory}") - src_out = open( - f"{directory}/all.{args.src_lang}", - mode="w", - encoding="utf-8", - errors="surrogateescape", - ) - tgt_out = open( - f"{directory}/all.{args.tgt_lang}", - mode="w", - encoding="utf-8", - errors="surrogateescape", - ) - scores_out = open( - f"{directory}/all.scores", mode="w", encoding="utf-8", errors="surrogateescape" - ) - count = 0 - for i in np.argsort(-scores): - src_ind, trg_ind = indices[i] - if src_ind not in seen_src and trg_ind not in seen_trg: - seen_src.add(src_ind) - seen_trg.add(trg_ind) - if scores[i] > threshold or count < min_count: - if x_sentences[src_ind]: - print(scores[i], file=scores_out) - print(x_sentences[src_ind], file=src_out) - print(y_sentences[trg_ind], file=tgt_out) - count += 1 - else: - print(f"Ignoring sentence: {x_sentences[src_ind]}") - src_out.close() - tgt_out.close() - scores_out.close() - - print(f"Found {count} pairs for threshold={threshold}") - with open(f"{directory}/all.{args.src_lang}") as all_s, open( - f"{directory}/all.{args.tgt_lang}" - ) as all_t, open(f"{directory}/valid.{args.src_lang}", "w") as valid_s, open( - f"{directory}/valid.{args.tgt_lang}", "w" - ) as valid_t, open( - f"{directory}/train.{args.src_lang}", "w" - ) as train_s, open( - f"{directory}/train.{args.tgt_lang}", "w" - ) as train_t: - count = 0 - for s_line, t_line in zip(all_s, all_t): - s_line = s_line.split("\t")[1] - t_line = t_line.split("\t")[1] - if count >= args.valid_size: - train_s.write(s_line) - train_t.write(t_line) - else: - valid_s.write(s_line) - valid_t.write(t_line) - count += 1 diff --git a/kosmos-g/fairseq/examples/criss/mining/mine_example.sh b/kosmos-g/fairseq/examples/criss/mining/mine_example.sh deleted file mode 
100644 index ace995ac4..000000000 --- a/kosmos-g/fairseq/examples/criss/mining/mine_example.sh +++ /dev/null @@ -1,103 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# -source_lang=kk_KZ -target_lang=en_XX -MODEL=criss_checkpoints/criss.3rd.pt -SPM=criss_checkpoints/sentence.bpe.model -SPLIT=test -LANG_DICT=criss_checkpoints/lang_dict.txt -SPM_ENCODE=flores/scripts/spm_encode.py -SAVE_ENCODER=save_encoder.py -ENCODER_SAVE_ROOT=sentence_embeddings/$MODEL -DICT=criss_checkpoints/dict.txt -THRESHOLD=1.02 -MIN_COUNT=500 - -DATA_DIR=data_tmp -SAVE_DIR=mining/${source_lang}_${target_lang}_mined -ENCODER_SAVE_DIR=${ENCODER_SAVE_ROOT}/${source_lang}-${target_lang} -INPUT_DIR=$DATA_DIR/${source_lang}-${target_lang}-tatoeba - -mkdir -p $ENCODER_SAVE_DIR/${target_lang} -mkdir -p $ENCODER_SAVE_DIR/${source_lang} -mkdir -p $SAVE_DIR - -## Save encoder outputs - -# Save encoder outputs for source sentences -python $SAVE_ENCODER \ - ${INPUT_DIR} \ - --path ${MODEL} \ - --task translation_multi_simple_epoch \ - --lang-pairs ${source_lang}-${target_lang} \ - --lang-dict ${LANG_DICT} \ - --gen-subset ${SPLIT} \ - --bpe 'sentencepiece' \ - -s ${source_lang} -t ${target_lang} \ - --sentencepiece-model ${SPM} \ - --remove-bpe 'sentencepiece' \ - --beam 1 \ - --lang-tok-style mbart \ - --encoder-save-dir ${ENCODER_SAVE_DIR}/${source_lang} - -## Save encoder outputs for target sentences -python $SAVE_ENCODER \ - ${INPUT_DIR} \ - --path ${MODEL} \ - --lang-pairs ${source_lang}-${target_lang} \ - --lang-dict ${LANG_DICT} \ - --task translation_multi_simple_epoch \ - --gen-subset ${SPLIT} \ - --bpe 'sentencepiece' \ - -t ${source_lang} -s ${target_lang} \ - --sentencepiece-model ${SPM} \ - --remove-bpe 'sentencepiece' \ - --beam 1 \ - --lang-tok-style mbart \ - --encoder-save-dir ${ENCODER_SAVE_DIR}/${target_lang} - -## Mining -python mining/mine.py \ - --src-lang ${source_lang} \ - --tgt-lang ${target_lang} \ - --dim 1024 \ - --mem 10 \ - --neighborhood 4 \ - --src-dir ${ENCODER_SAVE_DIR}/${source_lang} \ - --tgt-dir ${ENCODER_SAVE_DIR}/${target_lang} \ - --output $SAVE_DIR \ - --threshold ${THRESHOLD} \ - --min-count ${MIN_COUNT} \ - --valid-size 100 \ - --dict-path ${DICT} \ - --spm-path ${SPM} \ - - -## Process and binarize mined data -python $SPM_ENCODE \ - --model ${SPM} \ - --output_format=piece \ - --inputs mining/${source_lang}_${target_lang}_mined/train.${source_lang} mining/${source_lang}_${target_lang}_mined/train.${target_lang} \ - --outputs mining/${source_lang}_${target_lang}_mined/train.bpe.${source_lang} mining/${source_lang}_${target_lang}_mined/train.bpe.${target_lang} - -python $SPM_ENCODE \ - --model ${SPM} \ - --output_format=piece \ - --inputs mining/${source_lang}_${target_lang}_mined/valid.${source_lang} mining/${source_lang}_${target_lang}_mined/valid.${target_lang} \ - --outputs mining/${source_lang}_${target_lang}_mined/valid.bpe.${source_lang} mining/${source_lang}_${target_lang}_mined/valid.bpe.${target_lang} - - -fairseq-preprocess \ - --source-lang ${source_lang} \ - --target-lang ${target_lang} \ - --trainpref mining/${source_lang}_${target_lang}_mined/train.bpe \ - --validpref mining/${source_lang}_${target_lang}_mined/valid.bpe \ - --destdir mining/${source_lang}_${target_lang}_mined \ - --srcdict ${DICT} \ - --joined-dictionary \ - --workers 8 diff --git a/kosmos-g/fairseq/examples/criss/save_encoder.py 
b/kosmos-g/fairseq/examples/criss/save_encoder.py
deleted file mode 100644
index 24a842e40..000000000
--- a/kosmos-g/fairseq/examples/criss/save_encoder.py
+++ /dev/null
@@ -1,214 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-Translate pre-processed data with a trained model.
-"""
-
-import numpy as np
-import torch
-from fairseq import checkpoint_utils, options, progress_bar, tasks, utils
-from fairseq.sequence_generator import EnsembleModel
-from fairseq.utils import safe_hasattr
-
-
-def get_avg_pool(
-    models, sample, prefix_tokens, src_dict, remove_bpe, has_langtok=False
-):
-    model = EnsembleModel(models)
-
-    # model.forward normally channels prev_output_tokens into the decoder
-    # separately, but SequenceGenerator directly calls model.encoder
-    encoder_input = {
-        k: v for k, v in sample["net_input"].items() if k != "prev_output_tokens"
-    }
-
-    # compute the encoder output for each beam
-    encoder_outs = model.forward_encoder(encoder_input)
-    np_encoder_outs = encoder_outs[0].encoder_out.cpu().numpy().astype(np.float32)
-    encoder_mask = 1 - encoder_outs[0].encoder_padding_mask.cpu().numpy().astype(
-        np.float32
-    )
-    encoder_mask = np.expand_dims(encoder_mask.T, axis=2)
-    if has_langtok:
-        encoder_mask = encoder_mask[1:, :, :]
-        np_encoder_outs = np_encoder_outs[1:, :, :]
-    masked_encoder_outs = encoder_mask * np_encoder_outs
-    avg_pool = (masked_encoder_outs / encoder_mask.sum(axis=0)).sum(axis=0)
-    return avg_pool
-
-
-def main(args):
-    assert args.path is not None, "--path required for generation!"
-    assert (
-        not args.sampling or args.nbest == args.beam
-    ), "--sampling requires --nbest to be equal to --beam"
-    assert (
-        args.replace_unk is None or args.raw_text
-    ), "--replace-unk requires a raw text dataset (--raw-text)"
-
-    args.beam = 1
-    utils.import_user_module(args)
-
-    if args.max_tokens is None:
-        args.max_tokens = 12000
-    print(args)
-    use_cuda = torch.cuda.is_available() and not args.cpu
-
-    # Load dataset splits
-    task = tasks.setup_task(args)
-    task.load_dataset(args.gen_subset)
-
-    # Set dictionaries
-    try:
-        src_dict = getattr(task, "source_dictionary", None)
-    except NotImplementedError:
-        src_dict = None
-    tgt_dict = task.target_dictionary
-
-    # Load ensemble
-    print("| loading model(s) from {}".format(args.path))
-    models, _model_args = checkpoint_utils.load_model_ensemble(
-        args.path.split(":"),
-        arg_overrides=eval(args.model_overrides),
-        task=task,
-    )
-
-    # Optimize ensemble for generation
-    for model in models:
-        model.make_generation_fast_(
-            beamable_mm_beam_size=None if args.no_beamable_mm else args.beam,
-            need_attn=args.print_alignment,
-        )
-        if args.fp16:
-            model.half()
-        if use_cuda:
-            model.cuda()
-
-    # Load alignment dictionary for unknown word replacement
-    # (None if no unknown word replacement, empty if no path to align dictionary)
-    align_dict = utils.load_align_dict(args.replace_unk)
-
-    # Load dataset (possibly sharded)
-    itr = task.get_batch_iterator(
-        dataset=task.dataset(args.gen_subset),
-        max_tokens=args.max_tokens,
-        max_positions=utils.resolve_max_positions(
-            task.max_positions(),
-        ),
-        ignore_invalid_inputs=args.skip_invalid_size_inputs_valid_test,
-        required_batch_size_multiple=args.required_batch_size_multiple,
-        num_shards=args.num_shards,
-        shard_id=args.shard_id,
-        num_workers=args.num_workers,
-    ).next_epoch_itr(shuffle=False)
-
-    num_sentences = 0
source_sentences = [] - shard_id = 0 - all_avg_pool = None - encoder_has_langtok = ( - safe_hasattr(task.args, "encoder_langtok") - and task.args.encoder_langtok is not None - and safe_hasattr(task.args, "lang_tok_replacing_bos_eos") - and not task.args.lang_tok_replacing_bos_eos - ) - with progress_bar.build_progress_bar(args, itr) as t: - for sample in t: - if sample is None: - print("Skipping None") - continue - sample = utils.move_to_cuda(sample) if use_cuda else sample - if "net_input" not in sample: - continue - - prefix_tokens = None - if args.prefix_size > 0: - prefix_tokens = sample["target"][:, : args.prefix_size] - - with torch.no_grad(): - avg_pool = get_avg_pool( - models, - sample, - prefix_tokens, - src_dict, - args.post_process, - has_langtok=encoder_has_langtok, - ) - if all_avg_pool is not None: - all_avg_pool = np.concatenate((all_avg_pool, avg_pool)) - else: - all_avg_pool = avg_pool - - if not isinstance(sample["id"], list): - sample_ids = sample["id"].tolist() - else: - sample_ids = sample["id"] - for i, sample_id in enumerate(sample_ids): - # Remove padding - src_tokens = utils.strip_pad( - sample["net_input"]["src_tokens"][i, :], tgt_dict.pad() - ) - - # Either retrieve the original sentences or regenerate them from tokens. - if align_dict is not None: - src_str = task.dataset(args.gen_subset).src.get_original_text( - sample_id - ) - else: - if src_dict is not None: - src_str = src_dict.string(src_tokens, args.post_process) - else: - src_str = "" - - if not args.quiet: - if src_dict is not None: - print("S-{}\t{}".format(sample_id, src_str)) - - source_sentences.append(f"{sample_id}\t{src_str}") - - num_sentences += sample["nsentences"] - if all_avg_pool.shape[0] >= 1000000: - with open( - f"{args.encoder_save_dir}/all_avg_pool.{args.source_lang}.{shard_id}", - "w", - ) as avg_pool_file: - all_avg_pool.tofile(avg_pool_file) - with open( - f"{args.encoder_save_dir}/sentences.{args.source_lang}.{shard_id}", - "w", - ) as sentence_file: - sentence_file.writelines(f"{line}\n" for line in source_sentences) - all_avg_pool = None - source_sentences = [] - shard_id += 1 - - if all_avg_pool is not None: - with open( - f"{args.encoder_save_dir}/all_avg_pool.{args.source_lang}.{shard_id}", "w" - ) as avg_pool_file: - all_avg_pool.tofile(avg_pool_file) - with open( - f"{args.encoder_save_dir}/sentences.{args.source_lang}.{shard_id}", "w" - ) as sentence_file: - sentence_file.writelines(f"{line}\n" for line in source_sentences) - return None - - -def cli_main(): - parser = options.get_generation_parser() - parser.add_argument( - "--encoder-save-dir", - default="", - type=str, - metavar="N", - help="directory to save encoder outputs", - ) - args = options.parse_args_and_arch(parser) - main(args) - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/examples/criss/sentence_retrieval/encoder_analysis.py b/kosmos-g/fairseq/examples/criss/sentence_retrieval/encoder_analysis.py deleted file mode 100644 index b41bfbe38..000000000 --- a/kosmos-g/fairseq/examples/criss/sentence_retrieval/encoder_analysis.py +++ /dev/null @@ -1,92 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
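-
-# Loads the average-pooled encoder embeddings and sentence files written by
-# save_encoder.py, computes cosine similarities between all language pairs,
-# and reports top-1 sentence-retrieval accuracy for each pair.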
-import argparse -import glob - -import numpy as np - - -DIM = 1024 - - -def compute_dist(source_embs, target_embs, k=5, return_sim_mat=False): - target_ids = [tid for tid in target_embs] - source_mat = np.stack(source_embs.values(), axis=0) - normalized_source_mat = source_mat / np.linalg.norm( - source_mat, axis=1, keepdims=True - ) - target_mat = np.stack(target_embs.values(), axis=0) - normalized_target_mat = target_mat / np.linalg.norm( - target_mat, axis=1, keepdims=True - ) - sim_mat = normalized_source_mat.dot(normalized_target_mat.T) - if return_sim_mat: - return sim_mat - neighbors_map = {} - for i, sentence_id in enumerate(source_embs): - idx = np.argsort(sim_mat[i, :])[::-1][:k] - neighbors_map[sentence_id] = [target_ids[tid] for tid in idx] - return neighbors_map - - -def load_embeddings(directory, LANGS): - sentence_embeddings = {} - sentence_texts = {} - for lang in LANGS: - sentence_embeddings[lang] = {} - sentence_texts[lang] = {} - lang_dir = f"{directory}/{lang}" - embedding_files = glob.glob(f"{lang_dir}/all_avg_pool.{lang}.*") - for embed_file in embedding_files: - shard_id = embed_file.split(".")[-1] - embeddings = np.fromfile(embed_file, dtype=np.float32) - num_rows = embeddings.shape[0] // DIM - embeddings = embeddings.reshape((num_rows, DIM)) - - with open(f"{lang_dir}/sentences.{lang}.{shard_id}") as sentence_file: - for idx, line in enumerate(sentence_file): - sentence_id, sentence = line.strip().split("\t") - sentence_texts[lang][sentence_id] = sentence - sentence_embeddings[lang][sentence_id] = embeddings[idx, :] - - return sentence_embeddings, sentence_texts - - -def compute_accuracy(directory, LANGS): - sentence_embeddings, sentence_texts = load_embeddings(directory, LANGS) - - top_1_accuracy = {} - - top1_str = " ".join(LANGS) + "\n" - for source_lang in LANGS: - top_1_accuracy[source_lang] = {} - top1_str += f"{source_lang} " - for target_lang in LANGS: - top1 = 0 - top5 = 0 - neighbors_map = compute_dist( - sentence_embeddings[source_lang], sentence_embeddings[target_lang] - ) - for sentence_id, neighbors in neighbors_map.items(): - if sentence_id == neighbors[0]: - top1 += 1 - if sentence_id in neighbors[:5]: - top5 += 1 - n = len(sentence_embeddings[target_lang]) - top1_str += f"{top1/n} " - top1_str += "\n" - - print(top1_str) - print(top1_str, file=open(f"{directory}/accuracy", "w")) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description="Analyze encoder outputs") - parser.add_argument("directory", help="Source language corpus") - parser.add_argument("--langs", help="List of langs") - args = parser.parse_args() - langs = args.langs.split(",") - compute_accuracy(args.directory, langs) diff --git a/kosmos-g/fairseq/examples/criss/sentence_retrieval/sentence_retrieval_tatoeba.sh b/kosmos-g/fairseq/examples/criss/sentence_retrieval/sentence_retrieval_tatoeba.sh deleted file mode 100644 index 0428d8bef..000000000 --- a/kosmos-g/fairseq/examples/criss/sentence_retrieval/sentence_retrieval_tatoeba.sh +++ /dev/null @@ -1,59 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
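-
-# Dumps CRISS encoder outputs for both sides of the Tatoeba kk_KZ-en_XX test
-# set via save_encoder.py, then computes retrieval accuracy with
-# sentence_retrieval/encoder_analysis.py.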
-# -source_lang=kk_KZ -target_lang=en_XX -MODEL=criss_checkpoints/criss.3rd.pt -SPM=criss_checkpoints/sentence.bpe.model -SPLIT=test -LANG_DICT=criss_checkpoints/lang_dict.txt -ENCODER_ANALYSIS=sentence_retrieval/encoder_analysis.py -SAVE_ENCODER=save_encoder.py -ENCODER_SAVE_ROOT=sentence_embeddings/$MODEL - - - -DATA_DIR=data_tmp -INPUT_DIR=$DATA_DIR/${source_lang}-${target_lang}-tatoeba -ENCODER_SAVE_DIR=${ENCODER_SAVE_ROOT}/${source_lang}-${target_lang} -mkdir -p $ENCODER_SAVE_DIR/${target_lang} -mkdir -p $ENCODER_SAVE_DIR/${source_lang} - -# Save encoder outputs for source sentences -python $SAVE_ENCODER \ - ${INPUT_DIR} \ - --path ${MODEL} \ - --task translation_multi_simple_epoch \ - --lang-dict ${LANG_DICT} \ - --gen-subset ${SPLIT} \ - --bpe 'sentencepiece' \ - --lang-pairs ${source_lang}-${target_lang} \ - -s ${source_lang} -t ${target_lang} \ - --sentencepiece-model ${SPM} \ - --remove-bpe 'sentencepiece' \ - --beam 1 \ - --lang-tok-style mbart \ - --encoder-save-dir ${ENCODER_SAVE_DIR}/${source_lang} - -# Save encoder outputs for target sentences -python $SAVE_ENCODER \ - ${INPUT_DIR} \ - --path ${MODEL} \ - --lang-dict ${LANG_DICT} \ - --task translation_multi_simple_epoch \ - --gen-subset ${SPLIT} \ - --bpe 'sentencepiece' \ - --lang-pairs ${target_lang}-${source_lang} \ - -t ${source_lang} -s ${target_lang} \ - --sentencepiece-model ${SPM} \ - --remove-bpe 'sentencepiece' \ - --beam 1 \ - --lang-tok-style mbart \ - --encoder-save-dir ${ENCODER_SAVE_DIR}/${target_lang} - -# Analyze sentence retrieval accuracy -python $ENCODER_ANALYSIS --langs "${source_lang},${target_lang}" ${ENCODER_SAVE_DIR} diff --git a/kosmos-g/fairseq/examples/criss/unsupervised_mt/eval.sh b/kosmos-g/fairseq/examples/criss/unsupervised_mt/eval.sh deleted file mode 100644 index 03b773ed5..000000000 --- a/kosmos-g/fairseq/examples/criss/unsupervised_mt/eval.sh +++ /dev/null @@ -1,37 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# -SRC=si_LK -TGT=en_XX -MODEL=criss_checkpoints/criss.3rd.pt - -MULTIBLEU=mosesdecoder/scripts/generic/multi-bleu.perl -MOSES=mosesdecoder -REPLACE_UNICODE_PUNCT=$MOSES/scripts/tokenizer/replace-unicode-punctuation.perl -NORM_PUNC=$MOSES/scripts/tokenizer/normalize-punctuation.perl -REM_NON_PRINT_CHAR=$MOSES/scripts/tokenizer/remove-non-printing-char.perl -TOKENIZER=$MOSES/scripts/tokenizer/tokenizer.perl -GEN_TMP_DIR=gen_tmp -LANG_DICT=criss_checkpoints/lang_dict.txt - -if [ ! 
-d "mosesdecoder" ]; then - git clone https://github.com/moses-smt/mosesdecoder -fi -mkdir -p $GEN_TMP_DIR -fairseq-generate data_tmp/${SRC}-${TGT}-flores \ - --task translation_multi_simple_epoch \ - --max-tokens 2000 \ - --path ${MODEL} \ - --skip-invalid-size-inputs-valid-test \ - --beam 5 --lenpen 1.0 --gen-subset test \ - --remove-bpe=sentencepiece \ - --source-lang ${SRC} --target-lang ${TGT} \ - --decoder-langtok --lang-pairs 'en_XX-ar_AR,en_XX-de_DE,en_XX-es_XX,en_XX-fr_XX,en_XX-hi_IN,en_XX-it_IT,en_XX-ja_XX,en_XX-ko_KR,en_XX-nl_XX,en_XX-ru_RU,en_XX-zh_CN,en_XX-tr_TR,en_XX-vi_VN,en_XX-ro_RO,en_XX-my_MM,en_XX-ne_NP,en_XX-si_LK,en_XX-cs_CZ,en_XX-lt_LT,en_XX-kk_KZ,en_XX-gu_IN,en_XX-fi_FI,en_XX-et_EE,en_XX-lv_LV,ar_AR-en_XX,cs_CZ-en_XX,de_DE-en_XX,es_XX-en_XX,et_EE-en_XX,fi_FI-en_XX,fr_XX-en_XX,gu_IN-en_XX,hi_IN-en_XX,it_IT-en_XX,ja_XX-en_XX,kk_KZ-en_XX,ko_KR-en_XX,lt_LT-en_XX,lv_LV-en_XX,my_MM-en_XX,ne_NP-en_XX,nl_XX-en_XX,ro_RO-en_XX,ru_RU-en_XX,si_LK-en_XX,tr_TR-en_XX,vi_VN-en_XX,zh_CN-en_XX,ar_AR-es_XX,es_XX-ar_AR,ar_AR-hi_IN,hi_IN-ar_AR,ar_AR-zh_CN,zh_CN-ar_AR,cs_CZ-es_XX,es_XX-cs_CZ,cs_CZ-hi_IN,hi_IN-cs_CZ,cs_CZ-zh_CN,zh_CN-cs_CZ,de_DE-es_XX,es_XX-de_DE,de_DE-hi_IN,hi_IN-de_DE,de_DE-zh_CN,zh_CN-de_DE,es_XX-hi_IN,hi_IN-es_XX,es_XX-zh_CN,zh_CN-es_XX,et_EE-es_XX,es_XX-et_EE,et_EE-hi_IN,hi_IN-et_EE,et_EE-zh_CN,zh_CN-et_EE,fi_FI-es_XX,es_XX-fi_FI,fi_FI-hi_IN,hi_IN-fi_FI,fi_FI-zh_CN,zh_CN-fi_FI,fr_XX-es_XX,es_XX-fr_XX,fr_XX-hi_IN,hi_IN-fr_XX,fr_XX-zh_CN,zh_CN-fr_XX,gu_IN-es_XX,es_XX-gu_IN,gu_IN-hi_IN,hi_IN-gu_IN,gu_IN-zh_CN,zh_CN-gu_IN,hi_IN-zh_CN,zh_CN-hi_IN,it_IT-es_XX,es_XX-it_IT,it_IT-hi_IN,hi_IN-it_IT,it_IT-zh_CN,zh_CN-it_IT,ja_XX-es_XX,es_XX-ja_XX,ja_XX-hi_IN,hi_IN-ja_XX,ja_XX-zh_CN,zh_CN-ja_XX,kk_KZ-es_XX,es_XX-kk_KZ,kk_KZ-hi_IN,hi_IN-kk_KZ,kk_KZ-zh_CN,zh_CN-kk_KZ,ko_KR-es_XX,es_XX-ko_KR,ko_KR-hi_IN,hi_IN-ko_KR,ko_KR-zh_CN,zh_CN-ko_KR,lt_LT-es_XX,es_XX-lt_LT,lt_LT-hi_IN,hi_IN-lt_LT,lt_LT-zh_CN,zh_CN-lt_LT,lv_LV-es_XX,es_XX-lv_LV,lv_LV-hi_IN,hi_IN-lv_LV,lv_LV-zh_CN,zh_CN-lv_LV,my_MM-es_XX,es_XX-my_MM,my_MM-hi_IN,hi_IN-my_MM,my_MM-zh_CN,zh_CN-my_MM,ne_NP-es_XX,es_XX-ne_NP,ne_NP-hi_IN,hi_IN-ne_NP,ne_NP-zh_CN,zh_CN-ne_NP,nl_XX-es_XX,es_XX-nl_XX,nl_XX-hi_IN,hi_IN-nl_XX,nl_XX-zh_CN,zh_CN-nl_XX,ro_RO-es_XX,es_XX-ro_RO,ro_RO-hi_IN,hi_IN-ro_RO,ro_RO-zh_CN,zh_CN-ro_RO,ru_RU-es_XX,es_XX-ru_RU,ru_RU-hi_IN,hi_IN-ru_RU,ru_RU-zh_CN,zh_CN-ru_RU,si_LK-es_XX,es_XX-si_LK,si_LK-hi_IN,hi_IN-si_LK,si_LK-zh_CN,zh_CN-si_LK,tr_TR-es_XX,es_XX-tr_TR,tr_TR-hi_IN,hi_IN-tr_TR,tr_TR-zh_CN,zh_CN-tr_TR,vi_VN-es_XX,es_XX-vi_VN,vi_VN-hi_IN,hi_IN-vi_VN,vi_VN-zh_CN,zh_CN-vi_VN' \ - --lang-dict ${LANG_DICT} --lang-tok-style 'mbart' --sampling-method 'temperature' --sampling-temperature '1.0' > $GEN_TMP_DIR/${SRC}_${TGT}.gen -cat $GEN_TMP_DIR/${SRC}_${TGT}.gen | grep -P "^T-" | cut -f2 | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l ${TGT:0:2} | $REM_NON_PRINT_CHAR | $TOKENIZER -no-escape ${TGT:0:2} > $GEN_TMP_DIR/${SRC}_${TGT}.hyp -cat $GEN_TMP_DIR/${SRC}_${TGT}.gen | grep -P "^H-" | cut -f3 | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l ${TGT:0:2} | $REM_NON_PRINT_CHAR | $TOKENIZER -no-escape ${TGT:0:2} > $GEN_TMP_DIR/${SRC}_${TGT}.ref -${MULTIBLEU} $GEN_TMP_DIR/${SRC}_${TGT}.ref < $GEN_TMP_DIR/${SRC}_${TGT}.hyp diff --git a/kosmos-g/fairseq/examples/cross_lingual_language_model/README.md b/kosmos-g/fairseq/examples/cross_lingual_language_model/README.md deleted file mode 100644 index af9128e39..000000000 --- a/kosmos-g/fairseq/examples/cross_lingual_language_model/README.md +++ /dev/null @@ -1,77 +0,0 @@ -# Cross-Lingual 
-
-Below are some details for training Cross-Lingual Language Models (XLM) - similar to the ones presented in [Lample & Conneau, 2019](https://arxiv.org/pdf/1901.07291.pdf) - in Fairseq. The current implementation only supports the Masked Language Model (MLM) from the paper above.
-
-## Downloading and Tokenizing Monolingual Data
-
-Pointers to the monolingual Wikipedia data used for training the XLM-style MLM model, as well as details on processing it (tokenization and BPE), can be found in the [XLM Github Repository](https://github.com/facebookresearch/XLM#download--preprocess-monolingual-data).
-
-The code snippets in later sections assume the following:
-- Processed data is in the folder: monolingual_data/processed
-- Each language has 3 files, for train, validation and test. For example, English has:
-    train.en, valid.en, test.en
-- We are training a model for 5 languages: Arabic (ar), German (de), English (en), Hindi (hi) and French (fr)
-- The vocabulary file is monolingual_data/processed/vocab_mlm
-
-
-## Fairseq Pre-processing and Binarization
-
-Pre-process and binarize the data with the MaskedLMDictionary and the cross_lingual_lm task:
-
-```bash
-# Ensure the output directory exists
-DATA_DIR=monolingual_data/fairseq_processed
-mkdir -p "$DATA_DIR"
-
-for lg in ar de en hi fr
-do
-
-  fairseq-preprocess \
-  --task cross_lingual_lm \
-  --srcdict monolingual_data/processed/vocab_mlm \
-  --only-source \
-  --trainpref monolingual_data/processed/train \
-  --validpref monolingual_data/processed/valid \
-  --testpref monolingual_data/processed/test \
-  --destdir "$DATA_DIR" \
-  --workers 20 \
-  --source-lang $lg
-
-  # Since we only have a source language, the output file has a None for the
-  # target language. Remove this
-
-  for stage in train test valid
-  do
-    mv "$DATA_DIR/$stage.$lg-None.$lg.bin" "$DATA_DIR/$stage.$lg.bin"
-    mv "$DATA_DIR/$stage.$lg-None.$lg.idx" "$DATA_DIR/$stage.$lg.idx"
-  done
-
-done
-```
-
-## Train a Cross-lingual Language Model similar to the XLM MLM model
-
-Use the following command to train the model on 5 languages.
-
-```
-fairseq-train \
---task cross_lingual_lm monolingual_data/fairseq_processed \
---save-dir checkpoints/mlm \
---max-update 2400000 --save-interval 1 --no-epoch-checkpoints \
---arch xlm_base \
---optimizer adam --lr-scheduler reduce_lr_on_plateau \
---lr-shrink 0.5 --lr 0.0001 --stop-min-lr 1e-09 \
---dropout 0.1 \
---criterion legacy_masked_lm_loss \
---max-tokens 2048 --tokens-per-sample 256 --attention-dropout 0.1 \
---dataset-impl lazy --seed 0 \
---masked-lm-only \
---monolingual-langs 'ar,de,en,hi,fr' --num-segment 5 \
---ddp-backend=legacy_ddp
-```
-
-Some Notes:
-- Using --tokens-per-sample greater than 256 can cause OOM (out-of-memory) issues. Usually, since MLM packs in streams of text, this parameter doesn't need much tuning (a sketch of the packing idea follows these notes).
-- The evaluation workflow for computing MLM perplexity on test data is in progress.
-- Finetuning this model on a downstream task is not currently supported.
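-
-To make the packing note concrete, here is a minimal, self-contained sketch of the idea. It is illustrative only, not fairseq code; the function name `pack_and_mask`, the 15% masking rate, and the `-100` ignore label are assumptions of this sketch rather than details of the implementation above:
-
-```python
-import numpy as np
-
-
-def pack_and_mask(token_stream, block_size=256, mask_idx=3, mask_prob=0.15, seed=0):
-    """Pack a flat token stream into fixed-size blocks and mask a fraction of
-    positions, returning (inputs, labels) for a BERT/XLM-style MLM objective."""
-    rng = np.random.default_rng(seed)
-    stream = np.asarray(token_stream)
-    n_blocks = len(stream) // block_size  # drop the ragged tail for simplicity
-    blocks = stream[: n_blocks * block_size].reshape(n_blocks, block_size)
-    inputs = blocks.copy()
-    labels = np.full(blocks.shape, -100)  # -100 marks positions ignored by the loss
-    mask = rng.random(blocks.shape) < mask_prob
-    labels[mask] = blocks[mask]  # predict the original token ...
-    inputs[mask] = mask_idx      # ... from a mask placeholder
-    return inputs, labels
-
-
-inputs, labels = pack_and_mask(np.arange(10_000), block_size=256)
-print(inputs.shape)  # (39, 256) -- every sample is exactly tokens-per-sample long
-```
-
-Because every sample is a dense block of exactly --tokens-per-sample tokens, memory use is essentially fixed per batch, which is why the flag rarely needs tuning. In the actual command above, packing and masking are handled internally by the cross_lingual_lm task and the legacy_masked_lm_loss criterion.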
diff --git a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/README.md b/kosmos-g/fairseq/examples/discriminative_reranking_nmt/README.md
deleted file mode 100644
index b155e855f..000000000
--- a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/README.md
+++ /dev/null
@@ -1,202 +0,0 @@
-# Discriminative Reranking for Neural Machine Translation
-https://aclanthology.org/2021.acl-long.563/
-
-This folder contains source code for training DrNMT, a discriminatively trained reranker for neural machine translation.
-
-## Data preparation
-1. Follow the instructions under `examples/translation` to build a base MT model. Prepare three files: one with source sentences, one with ground-truth target sentences, and one with hypotheses generated from the base MT model. Each line in a file contains one sentence in raw text (i.e. no sentencepiece, etc.). Below is an example of the files with _N_ hypotheses per source sentence.
-
-```
-# Example of the source sentence file: (The file should contain L lines.)
-
-source_sentence_1
-source_sentence_2
-source_sentence_3
-...
-source_sentence_L
-
-# Example of the target sentence file: (The file should contain L lines.)
-
-target_sentence_1
-target_sentence_2
-target_sentence_3
-...
-target_sentence_L
-
-# Example of the hypotheses file: (The file should contain L*N lines.)
-
-source_sentence_1_hypo_1
-source_sentence_1_hypo_2
-...
-source_sentence_1_hypo_N
-source_sentence_2_hypo_1
-...
-source_sentence_2_hypo_N
-...
-source_sentence_L_hypo_1
-...
-source_sentence_L_hypo_N
-```
-
-2. Download the [XLMR model](https://github.com/fairinternal/fairseq-py/tree/main/examples/xlmr#pre-trained-models).
-```
-wget https://dl.fbaipublicfiles.com/fairseq/models/xlmr.base.tar.gz
-tar zxvf xlmr.base.tar.gz
-
-# The folder should contain dict.txt, model.pt and sentencepiece.bpe.model.
-```
-
-3. Prepare scores and BPE data.
-* `N`: Number of hypotheses per source sentence. We use 50 in the paper.
-* `SPLIT`: Name of the data split, i.e. train, valid, test. Use split_name, split_name1, split_name2, ..., if there are multiple datasets for a split, e.g. train, train1, valid, valid1.
-* `NUM_SHARDS`: Number of shards. Set this to 1 for non-train splits.
-* `METRIC`: The metric for DrNMT to optimize for. We support either `bleu` or `ter`.
-```
-# For each data split, e.g. train, valid, test, etc., run the following:
-
-SOURCE_FILE=/path/to/source_sentence_file
-TARGET_FILE=/path/to/target_sentence_file
-HYPO_FILE=/path/to/hypo_file
-XLMR_DIR=/path/to/xlmr
-OUTPUT_DIR=/path/to/output
-
-python scripts/prep_data.py \
-    --input-source ${SOURCE_FILE} \
-    --input-target ${TARGET_FILE} \
-    --input-hypo ${HYPO_FILE} \
-    --output-dir ${OUTPUT_DIR} \
-    --split $SPLIT \
-    --beam $N \
-    --sentencepiece-model ${XLMR_DIR}/sentencepiece.bpe.model \
-    --metric $METRIC \
-    --num-shards ${NUM_SHARDS}
-
-# The script will create ${OUTPUT_DIR}/$METRIC with ${NUM_SHARDS} splits.
-# Under split*/input_src, split*/input_tgt and split*/$METRIC, there will be $SPLIT.bpe and $SPLIT.$METRIC files, respectively.
-
-```
-
-4. Pre-process the data into fairseq format.
-```
-# use commas to separate multiple train or valid sets
-for suffix in src tgt ; do
-    fairseq-preprocess --only-source \
-        --trainpref ${OUTPUT_DIR}/$METRIC/split1/input_${suffix}/train.bpe \
-        --validpref ${OUTPUT_DIR}/$METRIC/split1/input_${suffix}/valid.bpe \
-        --destdir ${OUTPUT_DIR}/$METRIC/split1/input_${suffix} \
-        --workers 60 \
-        --srcdict ${XLMR_DIR}/dict.txt
-done
-
-for i in `seq 2 ${NUM_SHARDS}`; do
-    for suffix in src tgt ; do
-        fairseq-preprocess --only-source \
-            --trainpref ${OUTPUT_DIR}/$METRIC/split${i}/input_${suffix}/train.bpe \
-            --destdir ${OUTPUT_DIR}/$METRIC/split${i}/input_${suffix} \
-            --workers 60 \
-            --srcdict ${XLMR_DIR}/dict.txt
-
-        ln -s ${OUTPUT_DIR}/$METRIC/split1/input_${suffix}/valid* ${OUTPUT_DIR}/$METRIC/split${i}/input_${suffix}/.
-    done
-
-    ln -s ${OUTPUT_DIR}/$METRIC/split1/$METRIC/valid* ${OUTPUT_DIR}/$METRIC/split${i}/$METRIC/.
-done
-```
-
-## Training
-
-```
-EXP_DIR=/path/to/exp
-
-# An example of training the model with the config for the De-En experiment in the paper.
-# The config uses 16 GPUs and 50 hypotheses.
-# For training with fewer GPUs, set
-# distributed_training.distributed_world_size=k +optimization.update_freq='[x]' where x = 16/k
-# For training with fewer hypotheses, set
-# task.mt_beam=N dataset.batch_size=N dataset.required_batch_size_multiple=N
-
-fairseq-hydra-train -m \
-    --config-dir config/ --config-name deen \
-    task.data=${OUTPUT_DIR}/$METRIC/split1/ \
-    task.num_data_splits=${NUM_SHARDS} \
-    model.pretrained_model=${XLMR_DIR}/model.pt \
-    common.user_dir=${FAIRSEQ_ROOT}/examples/discriminative_reranking_nmt \
-    checkpoint.save_dir=${EXP_DIR}
-```
-
-## Inference & scoring
-Perform DrNMT reranking (fw + reranker score):
-1. Tune weights on valid sets.
-```
-# generate N hypotheses with the base MT model (fw score)
-VALID_SOURCE_FILE=/path/to/source_sentences  # one sentence per line, converted to the sentencepiece used by the base MT model
-VALID_TARGET_FILE=/path/to/target_sentences  # one sentence per line in raw text, i.e. no sentencepiece and tokenization
-MT_MODEL=/path/to/mt_model
-MT_DATA_PATH=/path/to/mt_data
-
-cat ${VALID_SOURCE_FILE} | \
-    fairseq-interactive ${MT_DATA_PATH} \
-    --max-tokens 4000 --buffer-size 16 \
-    --num-workers 32 --path ${MT_MODEL} \
-    --beam $N --nbest $N \
-    --post-process sentencepiece &> valid-hypo.out
-
-# replace "bleu" with "ter" to optimize for TER
-python drnmt_rerank.py \
-    ${OUTPUT_DIR}/$METRIC/split1/ \
-    --path ${EXP_DIR}/checkpoint_best.pt \
-    --in-text valid-hypo.out \
-    --results-path ${EXP_DIR} \
-    --gen-subset valid \
-    --target-text ${VALID_TARGET_FILE} \
-    --user-dir ${FAIRSEQ_ROOT}/examples/discriminative_reranking_nmt \
-    --bpe sentencepiece \
-    --sentencepiece-model ${XLMR_DIR}/sentencepiece.bpe.model \
-    --beam $N \
-    --batch-size $N \
-    --metric bleu \
-    --tune
-```
-
-2. Apply the best weights on test sets.
-```
-# generate N hypotheses with the base MT model (fw score)
-TEST_SOURCE_FILE=/path/to/source_sentences  # one sentence per line, converted to the sentencepiece used by the base MT model
-
-cat ${TEST_SOURCE_FILE} | \
-    fairseq-interactive ${MT_DATA_PATH} \
-    --max-tokens 4000 --buffer-size 16 \
-    --num-workers 32 --path ${MT_MODEL} \
-    --beam $N --nbest $N \
-    --post-process sentencepiece &> test-hypo.out
-
-# replace "bleu" with "ter" to evaluate TER
-# Add --target-text for evaluating BLEU/TER;
-# otherwise the script will only generate the hypotheses with the highest scores.
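-# BEST_FW_WEIGHT and BEST_LENPEN below should be set to the best values found
-# in the tuning step above (they are printed there as fw_weight / lenpen).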
-python drnmt_rerank.py \ - ${OUTPUT_DIR}/$METRIC/split1/ \ - --path ${EXP_DIR}/checkpoint_best.pt \ - --in-text test-hypo.out \ - --results-path ${EXP_DIR} \ - --gen-subset test \ - --user-dir ${FAIRSEQ_ROOT}/examples/discriminative_reranking_nmt \ - --bpe sentencepiece \ - --sentencepiece-model ${XLMR_DIR}/sentencepiece.bpe.model \ - --beam $N \ - --batch-size $N \ - --metric bleu \ - --fw-weight ${BEST_FW_WEIGHT} \ - --lenpen ${BEST_LENPEN} -``` - -## Citation -```bibtex -@inproceedings{lee2021discriminative, - title={Discriminative Reranking for Neural Machine Translation}, - author={Lee, Ann and Auli, Michael and Ranzato, Marc'Aurelio}, - booktitle={ACL}, - year={2021} -} -``` diff --git a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/__init__.py b/kosmos-g/fairseq/examples/discriminative_reranking_nmt/__init__.py deleted file mode 100644 index 0278f6a27..000000000 --- a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from . import criterions, models, tasks # noqa diff --git a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/config/deen.yaml b/kosmos-g/fairseq/examples/discriminative_reranking_nmt/config/deen.yaml deleted file mode 100644 index 3fc2d5fcf..000000000 --- a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/config/deen.yaml +++ /dev/null @@ -1,56 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 50 - seed: 2 - -checkpoint: - no_epoch_checkpoints: true - best_checkpoint_metric: bleu - maximize_best_checkpoint_metric: true - -task: - _name: discriminative_reranking_nmt - data: ??? - num_data_splits: ??? - include_src: true - mt_beam: 50 - eval_target_metric: true - target_metric: bleu - -dataset: - batch_size: 50 - num_workers: 6 - required_batch_size_multiple: 50 - valid_subset: ??? - -criterion: - _name: kl_divergence_rereanking - target_dist_norm: minmax - temperature: 0.5 - -optimization: - max_epoch: 200 - lr: [0.00005] - update_freq: [32] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 8000 - total_num_update: 320000 - -model: - _name: discriminative_nmt_reranker - pretrained_model: ??? - classifier_dropout: 0.2 - -distributed_training: - ddp_backend: no_c10d - distributed_world_size: 16 diff --git a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/criterions/__init__.py b/kosmos-g/fairseq/examples/discriminative_reranking_nmt/criterions/__init__.py deleted file mode 100644 index 7c257c270..000000000 --- a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/criterions/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from .discriminative_reranking_criterion import KLDivergenceRerankingCriterion - - -__all__ = [ - "KLDivergenceRerankingCriterion", -] diff --git a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/criterions/discriminative_reranking_criterion.py b/kosmos-g/fairseq/examples/discriminative_reranking_nmt/criterions/discriminative_reranking_criterion.py deleted file mode 100644 index 0b02ce187..000000000 --- a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/criterions/discriminative_reranking_criterion.py +++ /dev/null @@ -1,138 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
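-#
-# The criterion below scores each beam of `mt_beam` hypotheses with the
-# reranker, turns the per-hypothesis metric scores into a target distribution
-# (temperature softmax over optionally min-max-normalized scores; TER scores
-# are negated at data-loading time so that higher is always better), and
-# minimizes KL(target || model) against the softmax of the reranker outputs.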
- -import math -from dataclasses import dataclass, field - -import torch -import torch.nn.functional as F - -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import ChoiceEnum, FairseqDataclass - - -_EPSILON = torch.finfo(torch.float32).eps -TARGET_DIST_NORM_CHOICES = ChoiceEnum(["none", "minmax"]) - - -@dataclass -class KLDivergenceRerankingCriterionConfig(FairseqDataclass): - target_dist_norm: TARGET_DIST_NORM_CHOICES = field( - default="none", - metadata={"help": "method to normalize the range of target scores"}, - ) - temperature: float = field( - default=1.0, - metadata={"help": "temperature in softmax for target distributions"}, - ) - forward_batch_size: int = field( - default=32, - metadata={ - "help": "number of hypotheses per batch for model forward (set a value smaller than --mt-beam to avoid OOM when training with a large beam size)" - }, - ) - - -@register_criterion( - "kl_divergence_rereanking", dataclass=KLDivergenceRerankingCriterionConfig -) -class KLDivergenceRerankingCriterion(FairseqCriterion): - def __init__( - self, task, target_dist_norm, temperature, forward_batch_size, - ): - super().__init__(task) - self.target_dist_norm = target_dist_norm - self.temperature = temperature - self.forward_batch_size = forward_batch_size - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - - sample_size = sample["id"].numel() - assert sample_size % self.task.cfg.mt_beam == 0, ( - f"sample_size ({sample_size}) cannot be divided by beam size ({self.task.cfg.mt_beam})." - f"Please set --required-batch-size-multiple={self.task.cfg.mt_beam}." 
- ) - - # split into smaller batches for model forward - batch_out = [] - for i in range(0, sample_size, self.forward_batch_size): - j = min(i + self.forward_batch_size, sample_size) - - out = model( - src_tokens=sample["net_input"]["src_tokens"][i:j, :], - src_lengths=sample["net_input"]["src_lengths"][i:j], - ) - - batch_out.append( - model.sentence_forward(out, sample["net_input"]["src_tokens"][i:j, :]) - ) - - batch_out = torch.cat(batch_out, dim=0).view( - self.task.cfg.mt_beam, sample_size // self.task.cfg.mt_beam, -1 - ) # T x B x C - if model.joint_classification == "sent": - batch_out = model.joint_forward(batch_out) - scores = model.classification_forward(batch_out.view(sample_size, 1, -1)).view( - -1, self.task.cfg.mt_beam - ) # input: B x T x C - - loss = self.compute_kl_loss( - scores, sample["target"][:, 0].view(-1, self.task.cfg.mt_beam) - ) - - sample_size = sample_size // self.task.cfg.mt_beam - - logging_output = { - "loss": loss.detach(), - "ntokens": sample["ntokens"], - "nsentences": sample_size * self.task.cfg.mt_beam, - "sample_size": sample_size, - "scores": scores.detach(), - } - - return loss, sample_size, logging_output - - def compute_kl_loss(self, logits, target): - norm_target = target - if self.target_dist_norm == "minmax": - min_v = torch.min(target, 1, keepdim=True).values - max_v = torch.max(target, 1, keepdim=True).values - norm_target = (target - min_v) / (max_v - min_v + _EPSILON) - - target_dist = F.softmax( - norm_target / self.temperature, dim=-1, dtype=torch.float32 - ) - model_dist = F.log_softmax(logits, dim=-1, dtype=torch.float32) - loss = -(target_dist * model_dist - target_dist * target_dist.log()).sum() - return loss - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs)) - - sample_size = utils.item( - sum(log.get("sample_size", 0) for log in logging_outputs) - ) - - loss = loss_sum / sample_size / math.log(2) - metrics.log_scalar("loss", loss, sample_size, round=3) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/drnmt_rerank.py b/kosmos-g/fairseq/examples/discriminative_reranking_nmt/drnmt_rerank.py deleted file mode 100644 index 2e0fc2bd2..000000000 --- a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/drnmt_rerank.py +++ /dev/null @@ -1,364 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Score raw text with a trained model. 
-""" - -from collections import namedtuple -import logging -from multiprocessing import Pool -import sys -import os -import random - -import numpy as np -import sacrebleu -import torch - -from fairseq import checkpoint_utils, options, utils - - -logger = logging.getLogger("fairseq_cli.drnmt_rerank") -logger.setLevel(logging.INFO) - -Batch = namedtuple("Batch", "ids src_tokens src_lengths") - - -pool_init_variables = {} - - -def init_loaded_scores(mt_scores, model_scores, hyp, ref): - global pool_init_variables - pool_init_variables["mt_scores"] = mt_scores - pool_init_variables["model_scores"] = model_scores - pool_init_variables["hyp"] = hyp - pool_init_variables["ref"] = ref - - -def parse_fairseq_gen(filename, task): - source = {} - hypos = {} - scores = {} - with open(filename, "r", encoding="utf-8") as f: - for line in f: - line = line.strip() - if line.startswith("S-"): # source - uid, text = line.split("\t", 1) - uid = int(uid[2:]) - source[uid] = text - elif line.startswith("D-"): # hypo - uid, score, text = line.split("\t", 2) - uid = int(uid[2:]) - if uid not in hypos: - hypos[uid] = [] - scores[uid] = [] - hypos[uid].append(text) - scores[uid].append(float(score)) - else: - continue - - source_out = [source[i] for i in range(len(hypos))] - hypos_out = [h for i in range(len(hypos)) for h in hypos[i]] - scores_out = [s for i in range(len(scores)) for s in scores[i]] - - return source_out, hypos_out, scores_out - - -def read_target(filename): - with open(filename, "r", encoding="utf-8") as f: - output = [line.strip() for line in f] - return output - - -def make_batches(args, src, hyp, task, max_positions, encode_fn): - assert len(src) * args.beam == len( - hyp - ), f"Expect {len(src) * args.beam} hypotheses for {len(src)} source sentences with beam size {args.beam}. Got {len(hyp)} hypotheses intead." 
- hyp_encode = [ - task.source_dictionary.encode_line(encode_fn(h), add_if_not_exist=False).long() - for h in hyp - ] - if task.cfg.include_src: - src_encode = [ - task.source_dictionary.encode_line( - encode_fn(s), add_if_not_exist=False - ).long() - for s in src - ] - tokens = [(src_encode[i // args.beam], h) for i, h in enumerate(hyp_encode)] - lengths = [(t1.numel(), t2.numel()) for t1, t2 in tokens] - else: - tokens = [(h,) for h in hyp_encode] - lengths = [(h.numel(),) for h in hyp_encode] - - itr = task.get_batch_iterator( - dataset=task.build_dataset_for_inference(tokens, lengths), - max_tokens=args.max_tokens, - max_sentences=args.batch_size, - max_positions=max_positions, - ignore_invalid_inputs=args.skip_invalid_size_inputs_valid_test, - ).next_epoch_itr(shuffle=False) - - for batch in itr: - yield Batch( - ids=batch["id"], - src_tokens=batch["net_input"]["src_tokens"], - src_lengths=batch["net_input"]["src_lengths"], - ) - - -def decode_rerank_scores(args): - if args.max_tokens is None and args.batch_size is None: - args.batch_size = 1 - - logger.info(args) - - use_cuda = torch.cuda.is_available() and not args.cpu - - # Load ensemble - logger.info("loading model(s) from {}".format(args.path)) - models, _model_args, task = checkpoint_utils.load_model_ensemble_and_task( - [args.path], arg_overrides=eval(args.model_overrides), - ) - - for model in models: - if args.fp16: - model.half() - if use_cuda: - model.cuda() - - # Initialize generator - generator = task.build_generator(args) - - # Handle tokenization and BPE - tokenizer = task.build_tokenizer(args) - bpe = task.build_bpe(args) - - def encode_fn(x): - if tokenizer is not None: - x = tokenizer.encode(x) - if bpe is not None: - x = bpe.encode(x) - return x - - max_positions = utils.resolve_max_positions( - task.max_positions(), *[model.max_positions() for model in models] - ) - - src, hyp, mt_scores = parse_fairseq_gen(args.in_text, task) - model_scores = {} - logger.info("decode reranker score") - for batch in make_batches(args, src, hyp, task, max_positions, encode_fn): - src_tokens = batch.src_tokens - src_lengths = batch.src_lengths - if use_cuda: - src_tokens = src_tokens.cuda() - src_lengths = src_lengths.cuda() - - sample = { - "net_input": {"src_tokens": src_tokens, "src_lengths": src_lengths}, - } - scores = task.inference_step(generator, models, sample) - - for id, sc in zip(batch.ids.tolist(), scores.tolist()): - model_scores[id] = sc[0] - - model_scores = [model_scores[i] for i in range(len(model_scores))] - - return src, hyp, mt_scores, model_scores - - -def get_score(mt_s, md_s, w1, lp, tgt_len): - return mt_s / (tgt_len ** lp) * w1 + md_s - - -def get_best_hyps(mt_scores, md_scores, hypos, fw_weight, lenpen, beam): - assert len(mt_scores) == len(md_scores) and len(mt_scores) == len(hypos) - hypo_scores = [] - best_hypos = [] - best_scores = [] - offset = 0 - for i in range(len(hypos)): - tgt_len = len(hypos[i].split()) - hypo_scores.append( - get_score(mt_scores[i], md_scores[i], fw_weight, lenpen, tgt_len) - ) - - if (i + 1) % beam == 0: - max_i = np.argmax(hypo_scores) - best_hypos.append(hypos[offset + max_i]) - best_scores.append(hypo_scores[max_i]) - hypo_scores = [] - offset += beam - return best_hypos, best_scores - - -def eval_metric(args, hypos, ref): - if args.metric == "bleu": - score = sacrebleu.corpus_bleu(hypos, [ref]).score - else: - score = sacrebleu.corpus_ter(hypos, [ref]).score - - return score - - -def score_target_hypo(args, fw_weight, lp): - mt_scores = pool_init_variables["mt_scores"] - 
model_scores = pool_init_variables["model_scores"] - hyp = pool_init_variables["hyp"] - ref = pool_init_variables["ref"] - best_hypos, _ = get_best_hyps( - mt_scores, model_scores, hyp, fw_weight, lp, args.beam - ) - rerank_eval = None - if ref: - rerank_eval = eval_metric(args, best_hypos, ref) - print(f"fw_weight {fw_weight}, lenpen {lp}, eval {rerank_eval}") - - return rerank_eval - - -def print_result(best_scores, best_hypos, output_file): - for i, (s, h) in enumerate(zip(best_scores, best_hypos)): - print(f"{i}\t{s}\t{h}", file=output_file) - - -def main(args): - utils.import_user_module(args) - - src, hyp, mt_scores, model_scores = decode_rerank_scores(args) - - assert ( - not args.tune or args.target_text is not None - ), "--target-text has to be set when tuning weights" - if args.target_text: - ref = read_target(args.target_text) - assert len(src) == len( - ref - ), f"different numbers of source and target sentences ({len(src)} vs. {len(ref)})" - - orig_best_hypos = [hyp[i] for i in range(0, len(hyp), args.beam)] - orig_eval = eval_metric(args, orig_best_hypos, ref) - - if args.tune: - logger.info("tune weights for reranking") - - random_params = np.array( - [ - [ - random.uniform( - args.lower_bound_fw_weight, args.upper_bound_fw_weight - ), - random.uniform(args.lower_bound_lenpen, args.upper_bound_lenpen), - ] - for k in range(args.num_trials) - ] - ) - - logger.info("launching pool") - with Pool( - 32, - initializer=init_loaded_scores, - initargs=(mt_scores, model_scores, hyp, ref), - ) as p: - rerank_scores = p.starmap( - score_target_hypo, - [ - (args, random_params[i][0], random_params[i][1],) - for i in range(args.num_trials) - ], - ) - if args.metric == "bleu": - best_index = np.argmax(rerank_scores) - else: - best_index = np.argmin(rerank_scores) - best_fw_weight = random_params[best_index][0] - best_lenpen = random_params[best_index][1] - else: - assert ( - args.lenpen is not None and args.fw_weight is not None - ), "--lenpen and --fw-weight should be set" - best_fw_weight, best_lenpen = args.fw_weight, args.lenpen - - best_hypos, best_scores = get_best_hyps( - mt_scores, model_scores, hyp, best_fw_weight, best_lenpen, args.beam - ) - - if args.results_path is not None: - os.makedirs(args.results_path, exist_ok=True) - output_path = os.path.join( - args.results_path, "generate-{}.txt".format(args.gen_subset), - ) - with open(output_path, "w", buffering=1, encoding="utf-8") as o: - print_result(best_scores, best_hypos, o) - else: - print_result(best_scores, best_hypos, sys.stdout) - - if args.target_text: - rerank_eval = eval_metric(args, best_hypos, ref) - print(f"before reranking, {args.metric.upper()}:", orig_eval) - print( - f"after reranking with fw_weight={best_fw_weight}, lenpen={best_lenpen}, {args.metric.upper()}:", - rerank_eval, - ) - - -def cli_main(): - parser = options.get_generation_parser(interactive=True) - - parser.add_argument( - "--in-text", - default=None, - required=True, - help="text from fairseq-interactive output, containing source sentences and hypotheses", - ) - parser.add_argument("--target-text", default=None, help="reference text") - parser.add_argument("--metric", type=str, choices=["bleu", "ter"], default="bleu") - parser.add_argument( - "--tune", - action="store_true", - help="if set, tune weights on fw scores and lenpen instead of applying fixed weights for reranking", - ) - parser.add_argument( - "--lower-bound-fw-weight", - default=0.0, - type=float, - help="lower bound of search space", - ) - parser.add_argument( - 
"--upper-bound-fw-weight", - default=3, - type=float, - help="upper bound of search space", - ) - parser.add_argument( - "--lower-bound-lenpen", - default=0.0, - type=float, - help="lower bound of search space", - ) - parser.add_argument( - "--upper-bound-lenpen", - default=3, - type=float, - help="upper bound of search space", - ) - parser.add_argument( - "--fw-weight", type=float, default=None, help="weight on the fw model score" - ) - parser.add_argument( - "--num-trials", - default=1000, - type=int, - help="number of trials to do for random search", - ) - - args = options.parse_args_and_arch(parser) - main(args) - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/models/__init__.py b/kosmos-g/fairseq/examples/discriminative_reranking_nmt/models/__init__.py deleted file mode 100644 index c593ea5f1..000000000 --- a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/models/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from .discriminative_reranking_model import DiscriminativeNMTReranker - - -__all__ = [ - "DiscriminativeNMTReranker", -] diff --git a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/models/discriminative_reranking_model.py b/kosmos-g/fairseq/examples/discriminative_reranking_nmt/models/discriminative_reranking_model.py deleted file mode 100644 index e4b5887f8..000000000 --- a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/models/discriminative_reranking_model.py +++ /dev/null @@ -1,365 +0,0 @@ -from dataclasses import dataclass, field -import os - -import torch -import torch.nn as nn - -from fairseq import utils -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import ( - BaseFairseqModel, - register_model, -) - -from fairseq.models.roberta.model import RobertaClassificationHead - -from fairseq.modules import ( - LayerNorm, - TransformerSentenceEncoder, - TransformerSentenceEncoderLayer, -) - - -ACTIVATION_FN_CHOICES = ChoiceEnum(utils.get_available_activation_fns()) -JOINT_CLASSIFICATION_CHOICES = ChoiceEnum(["none", "sent"]) -SENTENCE_REP_CHOICES = ChoiceEnum(["head", "meanpool", "maxpool"]) - - -def update_init_roberta_model_state(state): - """ - update the state_dict of a Roberta model for initializing - weights of the BertRanker - """ - for k in list(state.keys()): - if ".lm_head." in k or "version" in k: - del state[k] - continue - # remove 'encoder/decoder.sentence_encoder.' from the key - assert k.startswith("encoder.sentence_encoder.") or k.startswith( - "decoder.sentence_encoder." 
- ), f"Cannot recognize parameter name {k}" - if "layernorm_embedding" in k: - new_k = k.replace(".layernorm_embedding.", ".emb_layer_norm.") - state[new_k[25:]] = state[k] - else: - state[k[25:]] = state[k] - del state[k] - - -class BaseRanker(nn.Module): - def __init__(self, args, task): - super().__init__() - - self.separator_token = task.dictionary.eos() - self.padding_idx = task.dictionary.pad() - - def forward(self, src_tokens): - raise NotImplementedError - - def get_segment_labels(self, src_tokens): - segment_boundary = (src_tokens == self.separator_token).long() - segment_labels = ( - segment_boundary.cumsum(dim=1) - - segment_boundary - - (src_tokens == self.padding_idx).long() - ) - - return segment_labels - - def get_positions(self, src_tokens, segment_labels): - segment_positions = ( - torch.arange(src_tokens.shape[1]) - .to(src_tokens.device) - .repeat(src_tokens.shape[0], 1) - ) - segment_boundary = (src_tokens == self.separator_token).long() - _, col_idx = (segment_positions * segment_boundary).nonzero(as_tuple=True) - col_idx = torch.cat([torch.zeros(1).type_as(col_idx), col_idx]) - offset = torch.cat( - [ - torch.zeros(1).type_as(segment_boundary), - segment_boundary.sum(dim=1).cumsum(dim=0)[:-1], - ] - ) - segment_positions -= col_idx[segment_labels + offset.unsqueeze(1)] * ( - segment_labels != 0 - ) - - padding_mask = src_tokens.ne(self.padding_idx) - segment_positions = (segment_positions + 1) * padding_mask.type_as( - segment_positions - ) + self.padding_idx - - return segment_positions - - -class BertRanker(BaseRanker): - def __init__(self, args, task): - super(BertRanker, self).__init__(args, task) - - init_model = getattr(args, "pretrained_model", "") - self.joint_layers = nn.ModuleList() - if os.path.isfile(init_model): - print(f"initialize weight from {init_model}") - - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - os.path.dirname(init_model), - checkpoint_file=os.path.basename(init_model), - ) - - in_state_dict = x["models"][0].state_dict() - init_args = x["args"].model - - num_positional_emb = init_args.max_positions + task.dictionary.pad() + 1 - - # follow the setup in roberta - self.model = TransformerSentenceEncoder( - padding_idx=task.dictionary.pad(), - vocab_size=len(task.dictionary), - num_encoder_layers=getattr( - args, "encoder_layers", init_args.encoder_layers - ), - embedding_dim=init_args.encoder_embed_dim, - ffn_embedding_dim=init_args.encoder_ffn_embed_dim, - num_attention_heads=init_args.encoder_attention_heads, - dropout=init_args.dropout, - attention_dropout=init_args.attention_dropout, - activation_dropout=init_args.activation_dropout, - num_segments=2, # add language embeddings - max_seq_len=num_positional_emb, - offset_positions_by_padding=False, - encoder_normalize_before=True, - apply_bert_init=True, - activation_fn=init_args.activation_fn, - freeze_embeddings=args.freeze_embeddings, - n_trans_layers_to_freeze=args.n_trans_layers_to_freeze, - ) - - # still need to learn segment embeddings as we added a second language embedding - if args.freeze_embeddings: - for p in self.model.segment_embeddings.parameters(): - p.requires_grad = False - - update_init_roberta_model_state(in_state_dict) - print("loading weights from the pretrained model") - self.model.load_state_dict( - in_state_dict, strict=False - ) # ignore mismatch in language embeddings - - ffn_embedding_dim = init_args.encoder_ffn_embed_dim - num_attention_heads = init_args.encoder_attention_heads - dropout = init_args.dropout - attention_dropout = 
init_args.attention_dropout - activation_dropout = init_args.activation_dropout - activation_fn = init_args.activation_fn - - classifier_embed_dim = getattr( - args, "embed_dim", init_args.encoder_embed_dim - ) - if classifier_embed_dim != init_args.encoder_embed_dim: - self.transform_layer = nn.Linear( - init_args.encoder_embed_dim, classifier_embed_dim - ) - else: - self.model = TransformerSentenceEncoder( - padding_idx=task.dictionary.pad(), - vocab_size=len(task.dictionary), - num_encoder_layers=args.encoder_layers, - embedding_dim=args.embed_dim, - ffn_embedding_dim=args.ffn_embed_dim, - num_attention_heads=args.attention_heads, - dropout=args.dropout, - attention_dropout=args.attention_dropout, - activation_dropout=args.activation_dropout, - max_seq_len=task.max_positions() - if task.max_positions() - else args.tokens_per_sample, - num_segments=2, - offset_positions_by_padding=False, - encoder_normalize_before=args.encoder_normalize_before, - apply_bert_init=args.apply_bert_init, - activation_fn=args.activation_fn, - ) - - classifier_embed_dim = args.embed_dim - ffn_embedding_dim = args.ffn_embed_dim - num_attention_heads = args.attention_heads - dropout = args.dropout - attention_dropout = args.attention_dropout - activation_dropout = args.activation_dropout - activation_fn = args.activation_fn - - self.joint_classification = args.joint_classification - if args.joint_classification == "sent": - if args.joint_normalize_before: - self.joint_layer_norm = LayerNorm(classifier_embed_dim) - else: - self.joint_layer_norm = None - - self.joint_layers = nn.ModuleList( - [ - TransformerSentenceEncoderLayer( - embedding_dim=classifier_embed_dim, - ffn_embedding_dim=ffn_embedding_dim, - num_attention_heads=num_attention_heads, - dropout=dropout, - attention_dropout=attention_dropout, - activation_dropout=activation_dropout, - activation_fn=activation_fn, - ) - for _ in range(args.num_joint_layers) - ] - ) - - self.classifier = RobertaClassificationHead( - classifier_embed_dim, - classifier_embed_dim, - 1, # num_classes - "tanh", - args.classifier_dropout, - ) - - def forward(self, src_tokens, src_lengths): - segment_labels = self.get_segment_labels(src_tokens) - positions = self.get_positions(src_tokens, segment_labels) - - inner_states, _ = self.model( - tokens=src_tokens, - segment_labels=segment_labels, - last_state_only=True, - positions=positions, - ) - - return inner_states[-1].transpose(0, 1) # T x B x C -> B x T x C - - def sentence_forward(self, encoder_out, src_tokens=None, sentence_rep="head"): - # encoder_out: B x T x C - if sentence_rep == "head": - x = encoder_out[:, :1, :] - else: # 'meanpool', 'maxpool' - assert src_tokens is not None, "meanpool requires src_tokens input" - segment_labels = self.get_segment_labels(src_tokens) - padding_mask = src_tokens.ne(self.padding_idx) - encoder_mask = segment_labels * padding_mask.type_as(segment_labels) - - if sentence_rep == "meanpool": - ntokens = torch.sum(encoder_mask, dim=1, keepdim=True) - x = torch.sum( - encoder_out * encoder_mask.unsqueeze(2), dim=1, keepdim=True - ) / ntokens.unsqueeze(2).type_as(encoder_out) - else: # 'maxpool' - encoder_out[ - (encoder_mask == 0).unsqueeze(2).repeat(1, 1, encoder_out.shape[-1]) - ] = -float("inf") - x, _ = torch.max(encoder_out, dim=1, keepdim=True) - - if hasattr(self, "transform_layer"): - x = self.transform_layer(x) - - return x # B x 1 x C - - def joint_forward(self, x): - # x: T x B x C - if self.joint_layer_norm: - x = self.joint_layer_norm(x.transpose(0, 1)) - x = x.transpose(0, 1) - - 
for layer in self.joint_layers: - x, _ = layer(x, self_attn_padding_mask=None) - return x - - def classification_forward(self, x): - # x: B x T x C - return self.classifier(x) - - -@dataclass -class DiscriminativeNMTRerankerConfig(FairseqDataclass): - pretrained_model: str = field( - default="", metadata={"help": "pretrained model to load"} - ) - sentence_rep: SENTENCE_REP_CHOICES = field( - default="head", - metadata={ - "help": "method to transform the output of the transformer stack to a sentence-level representation" - }, - ) - - dropout: float = field(default=0.1, metadata={"help": "dropout probability"}) - attention_dropout: float = field( - default=0.0, metadata={"help": "dropout probability for attention weights"} - ) - activation_dropout: float = field( - default=0.0, metadata={"help": "dropout probability after activation in FFN"} - ) - classifier_dropout: float = field( - default=0.0, metadata={"help": "classifier dropout probability"} - ) - embed_dim: int = field(default=768, metadata={"help": "embedding dimension"}) - ffn_embed_dim: int = field( - default=2048, metadata={"help": "embedding dimension for FFN"} - ) - encoder_layers: int = field(default=12, metadata={"help": "num encoder layers"}) - attention_heads: int = field(default=8, metadata={"help": "num attention heads"}) - encoder_normalize_before: bool = field( - default=False, metadata={"help": "apply layernorm before each encoder block"} - ) - apply_bert_init: bool = field( - default=False, metadata={"help": "use custom param initialization for BERT"} - ) - activation_fn: ACTIVATION_FN_CHOICES = field( - default="relu", metadata={"help": "activation function to use"} - ) - freeze_embeddings: bool = field( - default=False, metadata={"help": "freeze embeddings in the pretrained model"} - ) - n_trans_layers_to_freeze: int = field( - default=0, - metadata={ - "help": "number of layers to freeze in the pretrained transformer model" - }, - ) - - # joint classfication - joint_classification: JOINT_CLASSIFICATION_CHOICES = field( - default="none", - metadata={"help": "method to compute joint features for classification"}, - ) - num_joint_layers: int = field( - default=1, metadata={"help": "number of joint layers"} - ) - joint_normalize_before: bool = field( - default=False, - metadata={"help": "apply layer norm on the input to the joint layer"}, - ) - - -@register_model( - "discriminative_nmt_reranker", dataclass=DiscriminativeNMTRerankerConfig -) -class DiscriminativeNMTReranker(BaseFairseqModel): - @classmethod - def build_model(cls, args, task): - model = BertRanker(args, task) - return DiscriminativeNMTReranker(args, model) - - def __init__(self, args, model): - super().__init__() - - self.model = model - self.sentence_rep = args.sentence_rep - self.joint_classification = args.joint_classification - - def forward(self, src_tokens, src_lengths, **kwargs): - return self.model(src_tokens, src_lengths) - - def sentence_forward(self, encoder_out, src_tokens): - return self.model.sentence_forward(encoder_out, src_tokens, self.sentence_rep) - - def joint_forward(self, x): - return self.model.joint_forward(x) - - def classification_forward(self, x): - return self.model.classification_forward(x) diff --git a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/scripts/prep_data.py b/kosmos-g/fairseq/examples/discriminative_reranking_nmt/scripts/prep_data.py deleted file mode 100644 index 7aa7d37ed..000000000 --- a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/scripts/prep_data.py +++ /dev/null @@ -1,136 +0,0 @@ 
-#!/usr/bin/env python - -import argparse -from multiprocessing import Pool -from pathlib import Path - -import sacrebleu -import sentencepiece as spm - - -def read_text_file(filename): - with open(filename, "r") as f: - output = [line.strip() for line in f] - - return output - - -def get_bleu(in_sent, target_sent): - bleu = sacrebleu.corpus_bleu([in_sent], [[target_sent]]) - out = " ".join( - map(str, [bleu.score, bleu.sys_len, bleu.ref_len] + bleu.counts + bleu.totals) - ) - return out - - -def get_ter(in_sent, target_sent): - ter = sacrebleu.corpus_ter([in_sent], [[target_sent]]) - out = " ".join(map(str, [ter.score, ter.num_edits, ter.ref_length])) - return out - - -def init(sp_model): - global sp - sp = spm.SentencePieceProcessor() - sp.Load(sp_model) - - -def process(source_sent, target_sent, hypo_sent, metric): - source_bpe = " ".join(sp.EncodeAsPieces(source_sent)) - hypo_bpe = [" ".join(sp.EncodeAsPieces(h)) for h in hypo_sent] - - if metric == "bleu": - score_str = [get_bleu(h, target_sent) for h in hypo_sent] - else: # ter - score_str = [get_ter(h, target_sent) for h in hypo_sent] - - return source_bpe, hypo_bpe, score_str - - -def main(args): - assert ( - args.split.startswith("train") or args.num_shards == 1 - ), "--num-shards should be set to 1 for valid and test sets" - assert ( - args.split.startswith("train") - or args.split.startswith("valid") - or args.split.startswith("test") - ), "--split should be set to train[n]/valid[n]/test[n]" - - source_sents = read_text_file(args.input_source) - target_sents = read_text_file(args.input_target) - - num_sents = len(source_sents) - assert num_sents == len( - target_sents - ), f"{args.input_source} and {args.input_target} should have the same number of sentences." - - hypo_sents = read_text_file(args.input_hypo) - assert ( - len(hypo_sents) % args.beam == 0 - ), f"Number of hypotheses ({len(hypo_sents)}) cannot be divided by beam size ({args.beam})." - - hypo_sents = [ - hypo_sents[i : i + args.beam] for i in range(0, len(hypo_sents), args.beam) - ] - assert num_sents == len( - hypo_sents - ), f"{args.input_hypo} should contain {num_sents * args.beam} hypotheses but only has {len(hypo_sents) * args.beam}. 
(--beam={args.beam})" - - output_dir = args.output_dir / args.metric - for ns in range(args.num_shards): - print(f"processing shard {ns+1}/{args.num_shards}") - shard_output_dir = output_dir / f"split{ns+1}" - source_output_dir = shard_output_dir / "input_src" - hypo_output_dir = shard_output_dir / "input_tgt" - metric_output_dir = shard_output_dir / args.metric - - source_output_dir.mkdir(parents=True, exist_ok=True) - hypo_output_dir.mkdir(parents=True, exist_ok=True) - metric_output_dir.mkdir(parents=True, exist_ok=True) - - if args.n_proc > 1: - with Pool( - args.n_proc, initializer=init, initargs=(args.sentencepiece_model,) - ) as p: - output = p.starmap( - process, - [ - (source_sents[i], target_sents[i], hypo_sents[i], args.metric) - for i in range(ns, num_sents, args.num_shards) - ], - ) - else: - init(args.sentencepiece_model) - output = [ - process(source_sents[i], target_sents[i], hypo_sents[i], args.metric) - for i in range(ns, num_sents, args.num_shards) - ] - - with open(source_output_dir / f"{args.split}.bpe", "w") as s_o, open( - hypo_output_dir / f"{args.split}.bpe", "w" - ) as h_o, open(metric_output_dir / f"{args.split}.{args.metric}", "w") as m_o: - for source_bpe, hypo_bpe, score_str in output: - assert len(hypo_bpe) == len(score_str) - for h, m in zip(hypo_bpe, score_str): - s_o.write(f"{source_bpe}\n") - h_o.write(f"{h}\n") - m_o.write(f"{m}\n") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--input-source", type=Path, required=True) - parser.add_argument("--input-target", type=Path, required=True) - parser.add_argument("--input-hypo", type=Path, required=True) - parser.add_argument("--output-dir", type=Path, required=True) - parser.add_argument("--split", type=str, required=True) - parser.add_argument("--beam", type=int, required=True) - parser.add_argument("--sentencepiece-model", type=str, required=True) - parser.add_argument("--metric", type=str, choices=["bleu", "ter"], default="bleu") - parser.add_argument("--num-shards", type=int, default=1) - parser.add_argument("--n-proc", type=int, default=8) - - args = parser.parse_args() - - main(args) diff --git a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/tasks/__init__.py b/kosmos-g/fairseq/examples/discriminative_reranking_nmt/tasks/__init__.py deleted file mode 100644 index 2d78ca987..000000000 --- a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/tasks/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from .discriminative_reranking_task import DiscriminativeRerankingNMTTask - - -__all__ = [ - "DiscriminativeRerankingNMTTask", -] diff --git a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/tasks/discriminative_reranking_task.py b/kosmos-g/fairseq/examples/discriminative_reranking_nmt/tasks/discriminative_reranking_task.py deleted file mode 100644 index 223f8d429..000000000 --- a/kosmos-g/fairseq/examples/discriminative_reranking_nmt/tasks/discriminative_reranking_task.py +++ /dev/null @@ -1,490 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
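-#
-# The task below expects, for each data shard, binarized `input_src/` and
-# `input_tgt/` folders (built with the XLM-R dictionary; the shared dict is
-# read from `input_src/dict.txt`) plus a `<metric>/` folder with one raw score
-# line per hypothesis, as produced by scripts/prep_data.py and
-# fairseq-preprocess in the README above.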
- -from dataclasses import dataclass, field - -import itertools -import logging -import os - -import numpy as np -import torch - -from fairseq import metrics -from fairseq.data import ( - ConcatDataset, - ConcatSentencesDataset, - data_utils, - Dictionary, - IdDataset, - indexed_dataset, - NestedDictionaryDataset, - NumSamplesDataset, - NumelDataset, - PrependTokenDataset, - RawLabelDataset, - RightPadDataset, - SortDataset, - TruncateDataset, - TokenBlockDataset, -) -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.tasks import FairseqTask, register_task -from omegaconf import II, MISSING - - -EVAL_BLEU_ORDER = 4 -TARGET_METRIC_CHOICES = ChoiceEnum(["bleu", "ter"]) - -logger = logging.getLogger(__name__) - - -@dataclass -class DiscriminativeRerankingNMTConfig(FairseqDataclass): - data: str = field(default=MISSING, metadata={"help": "path to data directory"}) - num_data_splits: int = field( - default=1, metadata={"help": "total number of data splits"} - ) - no_shuffle: bool = field( - default=False, metadata={"help": "do not shuffle training data"} - ) - max_positions: int = field( - default=512, metadata={"help": "number of positional embeddings to learn"} - ) - include_src: bool = field( - default=False, metadata={"help": "include source sentence"} - ) - mt_beam: int = field(default=50, metadata={"help": "beam size of input hypotheses"}) - eval_target_metric: bool = field( - default=False, - metadata={"help": "evaluation with the target metric during validation"}, - ) - target_metric: TARGET_METRIC_CHOICES = field( - default="bleu", metadata={"help": "name of the target metric to optimize for"} - ) - train_subset: str = field( - default=II("dataset.train_subset"), - metadata={"help": "data subset to use for training (e.g. train, valid, test)"}, - ) - seed: int = field( - default=II("common.seed"), - metadata={"help": "pseudo random number generator seed"}, - ) - - -class RerankerScorer(object): - """Scores the target for a given (source (optional), target) input.""" - - def __init__(self, args, mt_beam): - self.mt_beam = mt_beam - - @torch.no_grad() - def generate(self, models, sample, **kwargs): - """Score a batch of translations.""" - net_input = sample["net_input"] - - assert len(models) == 1, "does not support model ensemble" - model = models[0] - - bs = net_input["src_tokens"].shape[0] - assert ( - model.joint_classification == "none" or bs % self.mt_beam == 0 - ), f"invalid batch size ({bs}) for joint classification with beam size ({self.mt_beam})" - - model.eval() - logits = model(**net_input) - - batch_out = model.sentence_forward(logits, net_input["src_tokens"]) - if model.joint_classification == "sent": - batch_out = model.joint_forward( - batch_out.view(self.mt_beam, bs // self.mt_beam, -1) - ) - scores = model.classification_forward( - batch_out.view(bs, 1, -1) - ) # input: B x T x C - - return scores - - -@register_task( - "discriminative_reranking_nmt", dataclass=DiscriminativeRerankingNMTConfig -) -class DiscriminativeRerankingNMTTask(FairseqTask): - """ - Translation rerank task. - The input can be either (src, tgt) sentence pairs or tgt sentence only. 
-    """
-
-    cfg: DiscriminativeRerankingNMTConfig
-
-    def __init__(self, cfg: DiscriminativeRerankingNMTConfig, data_dictionary=None):
-        super().__init__(cfg)
-        self.dictionary = data_dictionary
-        self._max_positions = cfg.max_positions
-        # args.tokens_per_sample = self._max_positions
-        # self.num_classes = 1 # for model
-
-    @classmethod
-    def load_dictionary(cls, cfg, filename):
-        """Load the dictionary from the filename"""
-        dictionary = Dictionary.load(filename)
-        dictionary.add_symbol("<mask>")  # for loading pretrained XLMR model
-
-        return dictionary
-
-    @classmethod
-    def setup_task(cls, cfg: DiscriminativeRerankingNMTConfig, **kwargs):
-        # load data dictionary (assume joint dictionary)
-        data_path = cfg.data
-        data_dict = cls.load_dictionary(
-            cfg, os.path.join(data_path, "input_src/dict.txt")
-        )
-
-        logger.info("[input] src dictionary: {} types".format(len(data_dict)))
-
-        return DiscriminativeRerankingNMTTask(cfg, data_dict)
-
-    def load_dataset(self, split, epoch=0, combine=False, **kwargs):
-        """Load a given dataset split (e.g., train, valid, test)."""
-        if self.cfg.data.endswith("1"):
-            data_shard = (epoch - 1) % self.cfg.num_data_splits + 1
-            data_path = self.cfg.data[:-1] + str(data_shard)
-        else:
-            data_path = self.cfg.data
-
-        def get_path(type, data_split):
-            return os.path.join(data_path, str(type), data_split)
-
-        def make_dataset(type, dictionary, data_split, combine):
-            split_path = get_path(type, data_split)
-
-            dataset = data_utils.load_indexed_dataset(
-                split_path,
-                dictionary,
-                combine=combine,
-            )
-            return dataset
-
-        def load_split(data_split, metric):
-            input_src = None
-            if self.cfg.include_src:
-                input_src = make_dataset(
-                    "input_src", self.dictionary, data_split, combine=False
-                )
-                assert input_src is not None, "could not find dataset: {}".format(
-                    get_path("input_src", data_split)
-                )
-
-            input_tgt = make_dataset(
-                "input_tgt", self.dictionary, data_split, combine=False
-            )
-            assert input_tgt is not None, "could not find dataset: {}".format(
-                get_path("input_tgt", data_split)
-            )
-
-            label_path = f"{get_path(metric, data_split)}.{metric}"
-            assert os.path.exists(label_path), f"could not find dataset: {label_path}"
-
-            np_labels = np.loadtxt(label_path)
-            if self.cfg.target_metric == "ter":
-                np_labels = -np_labels
-            label = RawLabelDataset(np_labels)
-
-            return input_src, input_tgt, label
-
-        src_datasets = []
-        tgt_datasets = []
-        label_datasets = []
-
-        if split == self.cfg.train_subset:
-            for k in itertools.count():
-                split_k = "train" + (str(k) if k > 0 else "")
-                prefix = os.path.join(data_path, "input_tgt", split_k)
-                if not indexed_dataset.dataset_exists(prefix, impl=None):
-                    if k > 0:
-                        break
-                    else:
-                        raise FileNotFoundError(f"Dataset not found: {prefix}")
-                input_src, input_tgt, label = load_split(
-                    split_k, self.cfg.target_metric
-                )
-                src_datasets.append(input_src)
-                tgt_datasets.append(input_tgt)
-                label_datasets.append(label)
-        else:
-            input_src, input_tgt, label = load_split(split, self.cfg.target_metric)
-            src_datasets.append(input_src)
-            tgt_datasets.append(input_tgt)
-            label_datasets.append(label)
-
-        if len(tgt_datasets) == 1:
-            input_tgt, label = tgt_datasets[0], label_datasets[0]
-            if self.cfg.include_src:
-                input_src = src_datasets[0]
-        else:
-            input_tgt = ConcatDataset(tgt_datasets)
-            label = ConcatDataset(label_datasets)
-            if self.cfg.include_src:
-                input_src = ConcatDataset(src_datasets)
-
-        input_tgt = TruncateDataset(input_tgt, self.cfg.max_positions)
-        if self.cfg.include_src:
-            input_src = PrependTokenDataset(input_src, self.dictionary.bos())
-            input_src = TruncateDataset(input_src, self.cfg.max_positions)
-            src_lengths = NumelDataset(input_src, reduce=False)
-            src_tokens = ConcatSentencesDataset(input_src, input_tgt)
-        else:
-            src_tokens = PrependTokenDataset(input_tgt, self.dictionary.bos())
-            src_lengths = NumelDataset(src_tokens, reduce=False)
-
-        dataset = {
-            "id": IdDataset(),
-            "net_input": {
-                "src_tokens": RightPadDataset(
-                    src_tokens,
-                    pad_idx=self.source_dictionary.pad(),
-                ),
-                "src_lengths": src_lengths,
-            },
-            "nsentences": NumSamplesDataset(),
-            "ntokens": NumelDataset(src_tokens, reduce=True),
-            "target": label,
-        }
-
-        dataset = NestedDictionaryDataset(
-            dataset,
-            sizes=[src_tokens.sizes],
-        )
-
-        assert (
-            len(dataset) % self.cfg.mt_beam == 0
-        ), "dataset size (%d) is not a multiple of beam size (%d)" % (
-            len(dataset),
-            self.cfg.mt_beam,
-        )
-
-        # no need to shuffle valid/test sets
-        if not self.cfg.no_shuffle and split == self.cfg.train_subset:
-
-            # need to keep all hypotheses together
-            start_idx = np.arange(0, len(dataset), self.cfg.mt_beam)
-            with data_utils.numpy_seed(self.cfg.seed + epoch):
-                np.random.shuffle(start_idx)
-
-            idx = np.arange(0, self.cfg.mt_beam)
-            shuffle = np.tile(idx, (len(start_idx), 1)).reshape(-1) + np.tile(
-                start_idx, (self.cfg.mt_beam, 1)
-            ).transpose().reshape(-1)
-
-            dataset = SortDataset(
-                dataset,
-                sort_order=[shuffle],
-            )
-
-        logger.info(f"Loaded {split} with #samples: {len(dataset)}")
-
-        self.datasets[split] = dataset
-        return self.datasets[split]
-
-    def build_dataset_for_inference(self, src_tokens, src_lengths, **kwargs):
-        assert not self.cfg.include_src or len(src_tokens[0]) == 2
-        input_src = None
-        if self.cfg.include_src:
-            input_src = TokenBlockDataset(
-                [t[0] for t in src_tokens],
-                [l[0] for l in src_lengths],
-                block_size=None,  # ignored for "eos" break mode
-                pad=self.source_dictionary.pad(),
-                eos=self.source_dictionary.eos(),
-                break_mode="eos",
-            )
-            input_src = PrependTokenDataset(input_src, self.dictionary.bos())
-            input_src = TruncateDataset(input_src, self.cfg.max_positions)
-
-        input_tgt = TokenBlockDataset(
-            [t[-1] for t in src_tokens],
-            [l[-1] for l in src_lengths],
-            block_size=None,  # ignored for "eos" break mode
-            pad=self.source_dictionary.pad(),
-            eos=self.source_dictionary.eos(),
-            break_mode="eos",
-        )
-        input_tgt = TruncateDataset(input_tgt, self.cfg.max_positions)
-        if self.cfg.include_src:
-            src_tokens = ConcatSentencesDataset(input_src, input_tgt)
-            src_lengths = NumelDataset(input_src, reduce=False)
-        else:
-            input_tgt = PrependTokenDataset(input_tgt, self.dictionary.bos())
-            src_tokens = input_tgt
-            src_lengths = NumelDataset(src_tokens, reduce=False)
-
-        dataset = {
-            "id": IdDataset(),
-            "net_input": {
-                "src_tokens": RightPadDataset(
-                    src_tokens,
-                    pad_idx=self.source_dictionary.pad(),
-                ),
-                "src_lengths": src_lengths,
-            },
-            "nsentences": NumSamplesDataset(),
-            "ntokens": NumelDataset(src_tokens, reduce=True),
-        }
-
-        return NestedDictionaryDataset(
-            dataset,
-            sizes=[src_tokens.sizes],
-        )
-
-    def build_model(self, cfg: FairseqDataclass, from_checkpoint: bool = False):
-        return super().build_model(cfg)
-
-    def build_generator(self, args):
-        return RerankerScorer(args, mt_beam=self.cfg.mt_beam)
-
-    def max_positions(self):
-        return self._max_positions
-
-    @property
-    def source_dictionary(self):
-        return self.dictionary
-
-    @property
-    def target_dictionary(self):
-        return self.dictionary
-
-    def create_dummy_batch(self, device):
-        # BLEU targets carry sys/ref lengths plus n-gram counts; TER targets only 3 columns
-        dummy_target = (
-            torch.zeros(self.cfg.mt_beam, EVAL_BLEU_ORDER * 2 + 3).long().to(device)
-            if self.cfg.target_metric != "ter"
-            else torch.zeros(self.cfg.mt_beam, 3).long().to(device)
-        )
-
-        return {
-            "id": torch.zeros(self.cfg.mt_beam, 1).long().to(device),
-            "net_input": {
-                "src_tokens": torch.zeros(self.cfg.mt_beam, 4).long().to(device),
-                "src_lengths": torch.ones(self.cfg.mt_beam, 1).long().to(device),
-            },
-            "nsentences": 0,
-            "ntokens": 0,
-            "target": dummy_target,
-        }
-
-    def train_step(
-        self, sample, model, criterion, optimizer, update_num, ignore_grad=False
-    ):
-        if ignore_grad and sample is None:
-            sample = self.create_dummy_batch(model.device)
-
-        return super().train_step(
-            sample, model, criterion, optimizer, update_num, ignore_grad
-        )
-
-    def valid_step(self, sample, model, criterion):
-        if sample is None:
-            sample = self.create_dummy_batch(model.device)
-
-        loss, sample_size, logging_output = super().valid_step(sample, model, criterion)
-
-        if not self.cfg.eval_target_metric:
-            return loss, sample_size, logging_output
-
-        scores = logging_output["scores"]
-
-        if self.cfg.target_metric == "bleu":
-            assert sample["target"].shape[1] == EVAL_BLEU_ORDER * 2 + 3, (
-                "target does not contain enough information ("
-                + str(sample["target"].shape[1])
-                + ") for evaluating BLEU"
-            )
-
-            max_id = torch.argmax(scores, dim=1)
-            select_id = max_id + torch.arange(
-                0, sample_size * self.cfg.mt_beam, self.cfg.mt_beam
-            ).to(max_id.device)
-            bleu_data = sample["target"][select_id, 1:].sum(0).data
-
-            logging_output["_bleu_sys_len"] = bleu_data[0]
-            logging_output["_bleu_ref_len"] = bleu_data[1]
-
-            for i in range(EVAL_BLEU_ORDER):
-                logging_output["_bleu_counts_" + str(i)] = bleu_data[2 + i]
-                logging_output["_bleu_totals_" + str(i)] = bleu_data[
-                    2 + EVAL_BLEU_ORDER + i
-                ]
-
-        elif self.cfg.target_metric == "ter":
-            assert sample["target"].shape[1] == 3, (
-                "target does not contain enough information ("
-                + str(sample["target"].shape[1])
-                + ") for evaluating TER"
-            )
-
-            max_id = torch.argmax(scores, dim=1)
-            select_id = max_id + torch.arange(
-                0, sample_size * self.cfg.mt_beam, self.cfg.mt_beam
-            ).to(max_id.device)
-            ter_data = sample["target"][select_id, 1:].sum(0).data
-
-            logging_output["_ter_num_edits"] = -ter_data[0]
-            logging_output["_ter_ref_len"] = -ter_data[1]
-
-        return loss, sample_size, logging_output
-
-    def reduce_metrics(self, logging_outputs, criterion):
-        super().reduce_metrics(logging_outputs, criterion)
-
-        if not self.cfg.eval_target_metric:
-            return
-
-        def sum_logs(key):
-            return sum(log.get(key, 0) for log in logging_outputs)
-
-        if self.cfg.target_metric == "bleu":
-            counts, totals = [], []
-            for i in range(EVAL_BLEU_ORDER):
-                counts.append(sum_logs("_bleu_counts_" + str(i)))
-                totals.append(sum_logs("_bleu_totals_" + str(i)))
-
-            if max(totals) > 0:
-                # log counts as numpy arrays -- log_scalar will sum them correctly
-                metrics.log_scalar("_bleu_counts", np.array(counts))
-                metrics.log_scalar("_bleu_totals", np.array(totals))
-                metrics.log_scalar("_bleu_sys_len", sum_logs("_bleu_sys_len"))
-                metrics.log_scalar("_bleu_ref_len", sum_logs("_bleu_ref_len"))
-
-                def compute_bleu(meters):
-                    import inspect
-                    import sacrebleu
-
-                    fn_sig = inspect.getfullargspec(sacrebleu.compute_bleu)[0]
-                    if "smooth_method" in fn_sig:
-                        smooth = {"smooth_method": "exp"}
-                    else:
-                        smooth = {"smooth": "exp"}
-                    bleu = sacrebleu.compute_bleu(
-                        correct=meters["_bleu_counts"].sum,
-                        total=meters["_bleu_totals"].sum,
-                        sys_len=meters["_bleu_sys_len"].sum,
-                        ref_len=meters["_bleu_ref_len"].sum,
-                        **smooth,
-                    )
-                    return round(bleu.score, 2)
-
-                metrics.log_derived("bleu", compute_bleu)
-        elif self.cfg.target_metric == "ter":
-            num_edits = sum_logs("_ter_num_edits")
-            ref_len = sum_logs("_ter_ref_len")
-
-            if ref_len > 0:
-                metrics.log_scalar("_ter_num_edits", num_edits)
-                metrics.log_scalar("_ter_ref_len", ref_len)
-
-                def compute_ter(meters):
-                    score = meters["_ter_num_edits"].sum / meters["_ter_ref_len"].sum
-                    return round(score.item(), 2)
-
-                metrics.log_derived("ter", compute_ter)
diff --git a/kosmos-g/fairseq/examples/fast_noisy_channel/README.md b/kosmos-g/fairseq/examples/fast_noisy_channel/README.md
deleted file mode 100644
index f2631a8c3..000000000
--- a/kosmos-g/fairseq/examples/fast_noisy_channel/README.md
+++ /dev/null
@@ -1,345 +0,0 @@
-# Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling
-
-## Introduction
-- [Yee et al. (2019)](https://www.aclweb.org/anthology/D19-1571.pdf) introduce a simple and effective noisy channel modeling approach for neural machine translation. However, the noisy channel online decoding approach introduced in this paper is too slow to be practical.
-- To address this, [Bhosale et al. (2020)](http://www.statmt.org/wmt20/pdf/2020.wmt-1.68.pdf) introduce 3 simple approximations to make this approach very fast and practical without much loss in accuracy.
-- This README provides instructions on how to run online decoding or generation with the noisy channel modeling approach, including ways to make it very fast without much loss in accuracy.
-
-## Noisy Channel Modeling
-
-[Yee et al. (2019)](https://www.aclweb.org/anthology/D19-1571.pdf) apply Bayes' rule to predict `P(y|x)`, the probability of the target `y` given the source `x`.
-```P(y|x) = P(x|y) * P(y) / P(x)```
-- `P(x|y)` predicts the source `x` given the target `y` and is referred to as the **channel model**
-- `P(y)` is a **language model** over the target `y`
-- `P(x)` is generally not modeled since it is constant for all `y`.
-
-We use Transformer models to parameterize the direct model `P(y|x)`, the channel model `P(x|y)` and the language model `P(y)`.
-
-During online decoding with beam search, we generate the top `K2` candidates per beam and score them with the following linear combination of the channel model, the language model as well as the direct model scores.
-
-```(1 / t) * log(P(y|x)) + (1 / s) * ( λ1 * log(P(x|y)) + λ2 * log(P(y)) )```
-- `t` - Target Prefix Length
-- `s` - Source Length
-- `λ1` - Channel Model Weight
-- `λ2` - Language Model Weight
-
-The top `beam_size` candidates based on the above combined scores are chosen to continue the beams in beam search. In beam search with a direct model alone, the scores from the direct model `P(y|x)` are used to choose the top candidates in beam search.
-
-This framework provides a great way to utilize strong target language models trained on large amounts of unlabeled data. Language models can prefer targets unrelated to the source, so we also need a channel model whose role is to ensure that the target preferred by the language model also translates back to the source.
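-
-As a rough illustration, the sketch below (plain Python; the function and its inputs are hypothetical, standing in for per-candidate log probabilities) computes the combined score above for one candidate, with `ch_wt` and `lm_wt` playing the roles of `λ1` and `λ2`:
-
-```python
-def combined_score(fw_cum_lprob, ch_lprob, lm_lprob, t, s, ch_wt=1.0, lm_wt=1.0):
-    """Combined noisy channel score used to rank a candidate in beam search.
-
-    fw_cum_lprob: cumulative direct model score, log P(y|x), for the prefix
-    ch_lprob:     channel model score, log P(x|y)
-    lm_lprob:     language model score, log P(y)
-    t, s:         target prefix length and source length
-    """
-    return fw_cum_lprob / t + (ch_wt * ch_lprob + lm_wt * lm_lprob) / s
-
-
-# A candidate with strong language model support but a weak channel score:
-print(combined_score(fw_cum_lprob=-4.2, ch_lprob=-9.0, lm_lprob=-2.5, t=3, s=7))  # ~ -3.04
-```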
-
-### Training Translation Models and Language Models
-
-For training Transformer models in fairseq for machine translation, refer to the instructions [here](https://github.com/pytorch/fairseq/tree/main/examples/translation).
-
-For training Transformer models in fairseq for language modeling, refer to the instructions [here](https://github.com/pytorch/fairseq/tree/main/examples/language_model).
-
-### Generation with Language Model for German-English translation with fairseq
-
-Here are instructions to generate using a direct model and a target-side language model.
-
-Note:
-- Download and install fairseq as per instructions [here](https://github.com/pytorch/fairseq)
-- Preprocess and binarize the dataset as per instructions in section [Test Data Preprocessing](#test-data-preprocessing)
-
-```sh
-binarized_data=data_dir/binarized
-direct_model=de_en_seed4.pt
-lm_model=en_lm.pt
-lm_data=lm_data
-wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt -O ${direct_model}
-wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt -O ${lm_model}
-mkdir -p ${lm_data}
-wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/dict.txt -O ${lm_data}/dict.txt
-
-k2=10
-lenpen=0.16
-lm_wt=0.14
-fairseq-generate ${binarized_data} \
-    --user-dir examples/fast_noisy_channel \
-    --beam 5 \
-    --path ${direct_model} \
-    --lm-model ${lm_model} \
-    --lm-data ${lm_data} \
-    --k2 ${k2} \
-    --combine-method lm_only \
-    --task noisy_channel_translation \
-    --lenpen ${lenpen} \
-    --lm-wt ${lm_wt} \
-    --gen-subset valid \
-    --remove-bpe \
-    --fp16 \
-    --batch-size 10
-```
-### Noisy Channel Generation for German-English translation with fairseq
-
-Here are instructions for noisy channel generation with a direct model, channel model and language model as explained in section [Noisy Channel Modeling](#noisy-channel-modeling).
-
-Note:
-- Download and install fairseq as per instructions [here](https://github.com/pytorch/fairseq)
-- Preprocess and binarize the dataset as per instructions in section [Test Data Preprocessing](#test-data-preprocessing)
-
-```sh
-binarized_data=data_dir/binarized
-direct_model=de_en_seed4.pt
-lm_model=en_lm.pt
-lm_data=lm_data
-ch_model=en_de.big.seed4.pt
-wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt -O ${direct_model}
-wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt -O ${lm_model}
-mkdir -p ${lm_data}
-wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/dict.txt -O ${lm_data}/dict.txt
-wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed4.pt -O ${ch_model}
-
-k2=10
-lenpen=0.21
-lm_wt=0.50
-bw_wt=0.30
-fairseq-generate ${binarized_data} \
-    --user-dir examples/fast_noisy_channel \
-    --beam 5 \
-    --path ${direct_model} \
-    --lm-model ${lm_model} \
-    --lm-data ${lm_data} \
-    --channel-model ${ch_model} \
-    --k2 ${k2} \
-    --combine-method noisy_channel \
-    --task noisy_channel_translation \
-    --lenpen ${lenpen} \
-    --lm-wt ${lm_wt} \
-    --ch-wt ${bw_wt} \
-    --gen-subset test \
-    --remove-bpe \
-    --fp16 \
-    --batch-size 1
-```
-## Fast Noisy Channel Modeling
-
-[Bhosale et al. (2020)](http://www.statmt.org/wmt20/pdf/2020.wmt-1.68.pdf) introduce 3 approximations that speed up online noisy channel decoding:
-
-- Smaller channel models (`Transformer Base` with 1 encoder and decoder layer each vs. `Transformer Big`)
-  - This involves training a channel model that is possibly smaller and less accurate in terms of BLEU than a channel model of the same size as the direct model.
-  - Since the role of the channel model is mainly to assign low scores to generations from the language model if they don't translate back to the source, we may not need the most accurate channel model for this purpose.
-- Smaller output vocabulary size for the channel model (~30,000 -> ~1000)
-  - The channel model doesn't need to score the full output vocabulary; it just needs to score the source tokens, which are completely known.
-  - This is specified using the arguments `--channel-scoring-type src_vocab --top-k-vocab 500`
-  - This means that the output vocabulary for the channel model will be the source tokens for all examples in the batch and the top-K most frequent tokens in the vocabulary
-  - This reduces the memory consumption needed to store channel model scores significantly
-- Smaller number of candidates (`k2`) scored per beam
-  - This is specified by reducing the argument `--k2`
-
-
-### Fast Noisy Channel Generation for German-English translation with fairseq
-
-Here are instructions for **fast** noisy channel generation with a direct model, channel model and language model as explained in section [Fast Noisy Channel Modeling](#fast-noisy-channel-modeling). The main differences are that we use a smaller channel model, reduce `--k2`, set `--channel-scoring-type src_vocab --top-k-vocab 500` and increase the `--batch-size`.
-
-Note:
-- Download and install fairseq as per instructions [here](https://github.com/pytorch/fairseq)
-- Preprocess and binarize the dataset as per instructions in section [Test Data Preprocessing](#test-data-preprocessing)
-
-```sh
-binarized_data=data_dir/binarized
-direct_model=de_en_seed4.pt
-lm_model=en_lm.pt
-lm_data=lm_data
-small_ch_model=en_de.base_1_1.seed4.pt
-wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt -O ${direct_model}
-wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt -O ${lm_model}
-mkdir -p ${lm_data}
-wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/dict.txt -O ${lm_data}/dict.txt
-wget https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed4.pt -O ${small_ch_model}
-
-k2=3
-lenpen=0.23
-lm_wt=0.58
-bw_wt=0.26
-fairseq-generate ${binarized_data} \
-    --user-dir examples/fast_noisy_channel \
-    --beam 5 \
-    --path ${direct_model} \
-    --lm-model ${lm_model} \
-    --lm-data ${lm_data} \
-    --channel-model ${small_ch_model} \
-    --k2 ${k2} \
-    --combine-method noisy_channel \
-    --task noisy_channel_translation \
-    --lenpen ${lenpen} \
-    --lm-wt ${lm_wt} \
-    --ch-wt ${bw_wt} \
-    --gen-subset test \
-    --remove-bpe \
-    --fp16 \
-    --batch-size 50 \
-    --channel-scoring-type src_vocab --top-k-vocab 500
-```
-
-## Test Data Preprocessing
-
-For preprocessing and binarizing the test sets for Romanian-English and German-English translation, we use the following script:
-
-
-```sh
-FAIRSEQ=/path/to/fairseq
-cd $FAIRSEQ
-SCRIPTS=$FAIRSEQ/mosesdecoder/scripts
-if [ ! -d "${SCRIPTS}" ]; then
-    echo 'Cloning Moses github repository (for tokenization scripts)...'
-    git clone https://github.com/moses-smt/mosesdecoder.git
-fi
-TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl
-NORMALIZE=$SCRIPTS/tokenizer/normalize-punctuation.perl
-
-s=de
-t=en
-test=wmt18
-
-mkdir -p data_dir
-
-# Tokenization
-if [ $s == "ro" ] ; then
-    # Note: Get normalise-romanian.py and remove-diacritics.py from
-    # https://github.com/rsennrich/wmt16-scripts/tree/master/preprocess
-    sacrebleu -t $test -l $s-$t --echo src | \
-        $NORMALIZE -l $s | \
-        python normalise-romanian.py | \
-        python remove-diacritics.py | \
-        $TOKENIZER -l $s -a -q > data_dir/$test.$s-$t.$s
-else
-    sacrebleu -t $test -l $s-$t --echo src | perl $NORMALIZE -l $s | perl $TOKENIZER -threads 8 -a -l $s > data_dir/$test.$s-$t.$s
-fi
-
-sacrebleu -t $test -l $s-$t --echo ref | perl $NORMALIZE -l $t | perl $TOKENIZER -threads 8 -a -l $t > data_dir/$test.$s-$t.$t
-
-
-# Applying BPE
-src_bpe_code=/path/to/source/language/bpe/code
-tgt_bpe_code=/path/to/target/language/bpe/code
-src_dict=/path/to/source/language/dict
-tgt_dict=/path/to/target/language/dict
-
-FASTBPE=$FAIRSEQ/fastBPE
-if [ ! -d "${FASTBPE}" ] ; then
-    git clone https://github.com/glample/fastBPE.git
-    # Follow compilation instructions at https://github.com/glample/fastBPE
-    g++ -std=c++11 -pthread -O3 fastBPE/main.cc -IfastBPE -o fast
-fi
-
-${FASTBPE}/fast applybpe data_dir/bpe.$test.$s-$t.$s data_dir/$test.$s-$t.$s ${src_bpe_code}
-${FASTBPE}/fast applybpe data_dir/bpe.$test.$s-$t.$t data_dir/$test.$s-$t.$t ${tgt_bpe_code}
-
-fairseq-preprocess -s $s -t $t \
-    --testpref data_dir/bpe.$test.$s-$t \
-    --destdir data_dir/binarized \
-    --srcdict ${src_dict} \
-    --tgtdict ${tgt_dict}
-```
-
-## Calculating BLEU
-
-```sh
-DETOKENIZER=$SCRIPTS/tokenizer/detokenizer.perl
-cat ${generation_output} | grep -P "^H" | sort -V | cut -f 3- | $DETOKENIZER -l $t -q -a | sacrebleu -t $test -l $s-$t
-```
-
-
-## Romanian-English Translation
-
-The direct and channel models are trained using bitext data (WMT16) combined with backtranslated data. The monolingual data used for backtranslation comes from http://data.statmt.org/rsennrich/wmt16_backtranslations/ (Sennrich et al., 2016c).
-
-The backtranslated data is generated using an ensemble of 3 English-Romanian models trained on bitext training data (WMT16) with unrestricted sampling.
-
-### BPE Codes and Dictionary
-
-We learn a joint BPE vocabulary of 18K types on the bitext training data, which is used for both the source and target.
-| | Path |
-|----------|------|
-| BPE Code | [joint_bpe_18k](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/bpe_18k) |
-| Dictionary | [dict](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/dict) |
-
-### Direct Models
-For Ro-En with backtranslation, the direct and channel models use a Transformer-Big architecture.
-
-| Seed | Model |
-|----|----|
-| 2 | [ro_en_seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/direct_models/seed2.pt)
-| 4 | [ro_en_seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/direct_models/seed4.pt)
-| 6 | [ro_en_seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/direct_models/seed6.pt)
-
-### Channel Models
-For channel models, we follow the same steps as for the direct models, but the backtranslated data is generated in the opposite direction using [this Romanian monolingual data](http://data.statmt.org/rsennrich/wmt16_backtranslations/).
-The best lenpen, LM weight and CH weight are obtained by sweeping over the validation set (wmt16/dev) using beam 5.
-| Model Size | Lenpen | LM Weight | CH Weight | Seed 2 | Seed 4 | Seed 6 |
-|----|----|----|----|----|----|----|
-| `big` | 0.84 | 0.64 | 0.56 | [big.seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/big.seed2.pt) | [big.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/big.seed4.pt) | [big.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/big.seed6.pt) |
-| `base_1_1` | 0.63 | 0.40 | 0.37 | [base_1_1.seed2.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/base_1_1.seed2.pt) | [base_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/base_1_1.seed4.pt) | [base_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/channel_models/base_1_1.seed6.pt) |
-
-### Language Model
-The model is trained on de-duplicated English Newscrawl data from 2007-2018 comprising 186 million sentences or 4.5B words after normalization and tokenization.
-| | Path |
-|----|----|
-| `--lm-model` | [transformer_en_lm](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/lm_model/transformer_lm.pt) |
-| `--lm-data` | [lm_data](https://dl.fbaipublicfiles.com/fast_noisy_channel/ro_en/lm_model/lm_dict) |
-
-## German-English Translation
-
-### BPE Codes and Dictionaries
-
-| | Path |
-|----------|------|
-| Source BPE Code | [de_bpe_code_24K](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/de_bpe_code_24K) |
-| Target BPE Code | [en_bpe_code_24K](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/en_bpe_code_24K) |
-| Source Dictionary | [de_dict](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/de_dict) |
-| Target Dictionary | [en_dict](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/en_dict) |
-
-### Direct Models
-We train on WMT’19 training data. Following [Ng et al., 2019](http://statmt.org/wmt19/pdf/53/WMT33.pdf), we apply language identification filtering and remove sentences longer than 250 tokens as well as sentence pairs with a source/target length ratio exceeding 1.5. This results in 26.8M sentence pairs.
-We use the Transformer-Big architecture for the direct model.
-
-| Seed | Model |
-|:----:|----|
-| 4 | [de_en_seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed4.pt)
-| 5 | [de_en_seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed5.pt)
-| 6 | [de_en_seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/direct_models/seed6.pt)
-
-### Channel Models
-
-We train on WMT’19 training data. Following [Ng et al., 2019](http://statmt.org/wmt19/pdf/53/WMT33.pdf), we apply language identification filtering and remove sentences longer than 250 tokens as well as sentence pairs with a source/target length ratio exceeding 1.5. This results in 26.8M sentence pairs.
- -| Model Size | Seed 4 | Seed 5 | Seed 6 | -|----|----|----|----| -| `big` | [big.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed4.pt) | [big.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed5.pt) | [big.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big.seed6.pt) | -| `big_1_1` | [big_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big_1_1.seed4.pt) | [big_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big_1_1.seed5.pt) | [big_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/big_1_1.seed6.pt) | -| `base` | [base.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base.seed4.pt) | [base.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base.seed5.pt) | [base.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base.seed6.pt) | -| `base_1_1` | [base_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed4.pt) | [base_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed5.pt) | [base_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/base_1_1.seed6.pt) | -| `half` | [half.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half.seed4.pt) | [half.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half.seed5.pt) | [half.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half.seed6.pt) | -| `half_1_1` | [half_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half_1_1.seed4.pt) | [half_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half_1_1.seed5.pt) | [half_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/half_1_1.seed6.pt) | -| `quarter` | [quarter.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter.seed4.pt) | [quarter.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter.seed5.pt) | [quarter.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter.seed6.pt) | -| `quarter_1_1` | [quarter_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter_1_1.seed4.pt) | [quarter_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter_1_1.seed5.pt) | [quarter_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/quarter_1_1.seed6.pt) | -| `8th` | [8th.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th.seed4.pt) | [8th.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th.seed5.pt) | [8th.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th.seed6.pt) | -| `8th_1_1` | [8th_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th_1_1.seed4.pt) | [8th_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th_1_1.seed5.pt) | [8th_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/8th_1_1.seed6.pt) | -| `16th` | 
[16th.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th.seed4.pt) | [16th.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th.seed5.pt) | [16th.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th.seed6.pt) | -| `16th_1_1` | [16th_1_1.seed4.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th_1_1.seed4.pt) | [16th_1_1.seed5.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th_1_1.seed5.pt) | [16th_1_1.seed6.pt](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/channel_models/16th_1_1.seed6.pt) | - -### Language Model -The model is trained on de-duplicated English Newscrawl data from 2007-2018 comprising 186 million sentences or 4.5B words after normalization and tokenization. -| | Path | -|----|----| -| `--lm-model` | [transformer_en_lm](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/transformer_lm.pt) | -| `--lm-data` | [lm_data](https://dl.fbaipublicfiles.com/fast_noisy_channel/de_en/lm_model/lm_dict/) - - -## Citation - -```bibtex -@inproceedings{bhosale2020language, - title={Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling}, - author={Shruti Bhosale and Kyra Yee and Sergey Edunov and Michael Auli}, - booktitle={Proceedings of the Fifth Conference on Machine Translation (WMT)}, - year={2020}, -} - -@inproceedings{yee2019simple, - title={Simple and Effective Noisy Channel Modeling for Neural Machine Translation}, - author={Yee, Kyra and Dauphin, Yann and Auli, Michael}, - booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)}, - pages={5700--5705}, - year={2019} -} -``` diff --git a/kosmos-g/fairseq/examples/fast_noisy_channel/__init__.py b/kosmos-g/fairseq/examples/fast_noisy_channel/__init__.py deleted file mode 100644 index 9b248c3a2..000000000 --- a/kosmos-g/fairseq/examples/fast_noisy_channel/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import noisy_channel_translation # noqa -from . import noisy_channel_sequence_generator # noqa -from . import noisy_channel_beam_search # noqa diff --git a/kosmos-g/fairseq/examples/fast_noisy_channel/noisy_channel_beam_search.py b/kosmos-g/fairseq/examples/fast_noisy_channel/noisy_channel_beam_search.py deleted file mode 100644 index 23869ebcd..000000000 --- a/kosmos-g/fairseq/examples/fast_noisy_channel/noisy_channel_beam_search.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
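-
-# NoisyChannelBeamSearch ranks beam candidates by a combination of direct
-# model, channel model and language model scores instead of the direct model
-# score alone. `combine_fw_bw` normalizes the cumulative forward score by the
-# target prefix length before adding the combined channel/LM term (or, for
-# "lm_only", adds the term to the unnormalized forward score), and `step`
-# keeps the top 2 * beam_size candidates per sentence under the combined
-# score, saving the matching forward and LM scores so they can be reused as
-# cumulative prefix scores at the next step.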
- -import torch -from fairseq.search import Search - - -class NoisyChannelBeamSearch(Search): - - def __init__(self, tgt_dict): - super().__init__(tgt_dict) - self.fw_scores_buf = None - self.lm_scores_buf = None - - def _init_buffers(self, t): - # super()._init_buffers(t) - if self.fw_scores_buf is None: - self.scores_buf = t.new() - self.indices_buf = torch.LongTensor().to(device=t.device) - self.beams_buf = torch.LongTensor().to(device=t.device) - self.fw_scores_buf = t.new() - self.lm_scores_buf = t.new() - - def combine_fw_bw(self, combine_method, fw_cum, bw, step): - if combine_method == "noisy_channel": - fw_norm = fw_cum.div(step + 1) - lprobs = bw + fw_norm - elif combine_method == "lm_only": - lprobs = bw + fw_cum - - return lprobs - - def step(self, step, fw_lprobs, scores, bw_lprobs, lm_lprobs, combine_method): - self._init_buffers(fw_lprobs) - bsz, beam_size, vocab_size = fw_lprobs.size() - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam - fw_lprobs = fw_lprobs[:, ::beam_size, :].contiguous() - bw_lprobs = bw_lprobs[:, ::beam_size, :].contiguous() - # nothing to add since we are at the first step - fw_lprobs_cum = fw_lprobs - - else: - # make probs contain cumulative scores for each hypothesis - raw_scores = (scores[:, :, step - 1].unsqueeze(-1)) - fw_lprobs_cum = (fw_lprobs.add(raw_scores)) - - combined_lprobs = self.combine_fw_bw(combine_method, fw_lprobs_cum, bw_lprobs, step) - - # choose the top k according to the combined noisy channel model score - torch.topk( - combined_lprobs.view(bsz, -1), - k=min( - # Take the best 2 x beam_size predictions. We'll choose the first - # beam_size of these which don't predict eos to continue with. - beam_size * 2, - combined_lprobs.view(bsz, -1).size(1) - 1, # -1 so we never select pad - ), - out=(self.scores_buf, self.indices_buf), - ) - # save corresponding fw and lm scores - self.fw_scores_buf = torch.gather(fw_lprobs_cum.view(bsz, -1), 1, self.indices_buf) - self.lm_scores_buf = torch.gather(lm_lprobs.view(bsz, -1), 1, self.indices_buf) - # Project back into relative indices and beams - self.beams_buf = self.indices_buf // vocab_size - self.indices_buf.fmod_(vocab_size) - return self.scores_buf, self.fw_scores_buf, self.lm_scores_buf, self.indices_buf, self.beams_buf diff --git a/kosmos-g/fairseq/examples/fast_noisy_channel/noisy_channel_sequence_generator.py b/kosmos-g/fairseq/examples/fast_noisy_channel/noisy_channel_sequence_generator.py deleted file mode 100644 index ea8fae98e..000000000 --- a/kosmos-g/fairseq/examples/fast_noisy_channel/noisy_channel_sequence_generator.py +++ /dev/null @@ -1,842 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
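-
-# NoisyChannelSequenceGenerator decodes with up to three models: an ensemble
-# direct model P(T|S), an optional channel model P(S|T) and a language model
-# P(T). At each step the direct model proposes the top k2 tokens per beam,
-# which `noisy_channel_rescoring` below rescores with the channel and language
-# models before NoisyChannelBeamSearch selects the beam continuations from the
-# combined scores.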
- -from typing import Dict, List, Optional - -import math -import numpy as np - -import torch -import torch.nn.functional as F -from torch import Tensor - -from .noisy_channel_beam_search import NoisyChannelBeamSearch -from fairseq.sequence_generator import EnsembleModel - - -class NoisyChannelSequenceGenerator(object): - def __init__( - self, - combine_method, - tgt_dict, - src_dict=None, - beam_size=1, - max_len_a=0, - max_len_b=200, - min_len=1, - len_penalty=1.0, - unk_penalty=0.0, - retain_dropout=False, - temperature=1.0, - match_source_len=False, - no_repeat_ngram_size=0, - normalize_scores=True, - channel_models=None, - k2=10, - ch_weight=1.0, - channel_scoring_type='log_norm', - top_k_vocab=0, - lm_models=None, - lm_dict=None, - lm_weight=1.0, - normalize_lm_scores_by_tgt_len=False, - ): - """Generates translations of a given source sentence, - using beam search with noisy channel decoding. - - Args: - combine_method (string, optional): Method to combine direct, LM and - channel model scores (default: None) - tgt_dict (~fairseq.data.Dictionary): target dictionary - src_dict (~fairseq.data.Dictionary): source dictionary - beam_size (int, optional): beam width (default: 1) - max_len_a/b (int, optional): generate sequences of maximum length - ax + b, where x is the source length - min_len (int, optional): the minimum length of the generated output - (not including end-of-sentence) - len_penalty (float, optional): length penalty, where <1.0 favors - shorter, >1.0 favors longer sentences (default: 1.0) - unk_penalty (float, optional): unknown word penalty, where <0 - produces more unks, >0 produces fewer (default: 0.0) - retain_dropout (bool, optional): use dropout when generating - (default: False) - temperature (float, optional): temperature, where values - >1.0 produce more uniform samples and values <1.0 produce - sharper samples (default: 1.0) - match_source_len (bool, optional): outputs should match the source - length (default: False) - no_repeat_ngram_size (int, optional): Size of n-grams that we avoid - repeating in the generation (default: 0) - normalize_scores (bool, optional): normalize scores by the length - of the output (default: True) - channel_models (List[~fairseq.models.FairseqModel]): ensemble of models - translating from the target to the source - k2 (int, optional): Top K2 candidates to score per beam at each step (default:10) - ch_weight (int, optional): Weight associated with the channel model score - assuming that the direct model score has weight 1.0 (default: 1.0) - channel_scoring_type (str, optional): String specifying how to score - the channel model (default: 'log_norm') - top_k_vocab (int, optional): If `channel_scoring_type` is `'src_vocab'` or - `'src_vocab_batched'`, then this parameter specifies the number of - most frequent tokens to include in the channel model output vocabulary, - in addition to the source tokens in the input batch (default: 0) - lm_models (List[~fairseq.models.FairseqModel]): ensemble of models - generating text in the target language - lm_dict (~fairseq.data.Dictionary): LM Model dictionary - lm_weight (int, optional): Weight associated with the LM model score - assuming that the direct model score has weight 1.0 (default: 1.0) - normalize_lm_scores_by_tgt_len (bool, optional): Should we normalize LM scores - by the target length? 
By default, we normalize the combination of - LM and channel model scores by the source length - """ - self.pad = tgt_dict.pad() - self.unk = tgt_dict.unk() - self.eos = tgt_dict.eos() - self.vocab_size = len(tgt_dict) - self.beam_size = beam_size - # the max beam size is the dictionary size - 1, since we never select pad - self.beam_size = min(beam_size, self.vocab_size - 1) - self.max_len_a = max_len_a - self.max_len_b = max_len_b - self.min_len = min_len - self.normalize_scores = normalize_scores - self.len_penalty = len_penalty - self.unk_penalty = unk_penalty - self.retain_dropout = retain_dropout - self.temperature = temperature - self.match_source_len = match_source_len - self.no_repeat_ngram_size = no_repeat_ngram_size - self.channel_models = channel_models - self.src_dict = src_dict - self.tgt_dict = tgt_dict - self.combine_method = combine_method - self.k2 = k2 - self.ch_weight = ch_weight - self.channel_scoring_type = channel_scoring_type - self.top_k_vocab = top_k_vocab - self.lm_models = lm_models - self.lm_dict = lm_dict - self.lm_weight = lm_weight - self.log_softmax_fn = torch.nn.LogSoftmax(dim=1) - self.normalize_lm_scores_by_tgt_len = normalize_lm_scores_by_tgt_len - - self.share_tgt_dict = (self.lm_dict == self.tgt_dict) - self.tgt_to_lm = make_dict2dict(tgt_dict, lm_dict) - - self.ch_scoring_bsz = 3072 - - assert temperature > 0, '--temperature must be greater than 0' - - self.search = NoisyChannelBeamSearch(tgt_dict) - - @torch.no_grad() - def generate( - self, - models, - sample, - prefix_tokens=None, - bos_token=None, - **kwargs - ): - """Generate a batch of translations. - Args: - models (List[~fairseq.models.FairseqModel]): ensemble of models - sample (dict): batch - prefix_tokens (torch.LongTensor, optional): force decoder to begin - with these tokens - """ - model = EnsembleModel(models) - incremental_states = torch.jit.annotate( - List[Dict[str, Dict[str, Optional[Tensor]]]], - [ - torch.jit.annotate(Dict[str, Dict[str, Optional[Tensor]]], {}) - for i in range(model.models_size) - ], - ) - if not self.retain_dropout: - model.eval() - - # model.forward normally channels prev_output_tokens into the decoder - # separately, but SequenceGenerator directly calls model.encoder - encoder_input = { - k: v for k, v in sample['net_input'].items() - if k != 'prev_output_tokens' - } - src_tokens = encoder_input['src_tokens'] - src_lengths_no_eos = (src_tokens.ne(self.eos) & src_tokens.ne(self.pad)).long().sum(dim=1) - input_size = src_tokens.size() - # batch dimension goes first followed by source lengths - bsz = input_size[0] - src_len = input_size[1] - beam_size = self.beam_size - - if self.match_source_len: - max_len = src_lengths_no_eos.max().item() - else: - max_len = min( - int(self.max_len_a * src_len + self.max_len_b), - # exclude the EOS marker - model.max_decoder_positions() - 1, - ) - - # compute the encoder output for each beam - encoder_outs = model.forward_encoder(encoder_input) - new_order = torch.arange(bsz).view(-1, 1).repeat(1, beam_size).view(-1) - new_order = new_order.to(src_tokens.device).long() - encoder_outs = model.reorder_encoder_out(encoder_outs, new_order) - - src_lengths = encoder_input['src_lengths'] - # initialize buffers - scores = src_tokens.new(bsz * beam_size, max_len + 1).float().fill_(0) - lm_prefix_scores = src_tokens.new(bsz * beam_size).float().fill_(0) - - scores_buf = scores.clone() - tokens = src_tokens.new(bsz * beam_size, max_len + 2).long().fill_(self.pad) - tokens_buf = tokens.clone() - tokens[:, 0] = self.eos if bos_token is 
None else bos_token - - # reorder source tokens so they may be used as a reference in generating P(S|T) - src_tokens = reorder_all_tokens(src_tokens, src_lengths, self.src_dict.eos_index) - - src_tokens = src_tokens.repeat(1, beam_size).view(-1, src_len) - src_lengths = src_lengths.view(bsz, -1).repeat(1, beam_size).view(bsz*beam_size, -1) - - attn, attn_buf = None, None - nonpad_idxs = None - - # The cands_to_ignore indicates candidates that should be ignored. - # For example, suppose we're sampling and have already finalized 2/5 - # samples. Then the cands_to_ignore would mark 2 positions as being ignored, - # so that we only finalize the remaining 3 samples. - cands_to_ignore = src_tokens.new_zeros(bsz, beam_size).eq(-1) # forward and backward-compatible False mask - - # list of completed sentences - finalized = [[] for i in range(bsz)] - finished = [False for i in range(bsz)] - num_remaining_sent = bsz - - # number of candidate hypos per step - cand_size = 2 * beam_size # 2 x beam size in case half are EOS - - # offset arrays for converting between different indexing schemes - bbsz_offsets = (torch.arange(0, bsz) * beam_size).unsqueeze(1).type_as(tokens) - cand_offsets = torch.arange(0, cand_size).type_as(tokens) - - # helper function for allocating buffers on the fly - buffers = {} - - def buffer(name, type_of=tokens): # noqa - if name not in buffers: - buffers[name] = type_of.new() - return buffers[name] - - def is_finished(sent, step, unfin_idx): - """ - Check whether we've finished generation for a given sentence, by - comparing the worst score among finalized hypotheses to the best - possible score among unfinalized hypotheses. - """ - assert len(finalized[sent]) <= beam_size - if len(finalized[sent]) == beam_size: - return True - return False - - def finalize_hypos(step, bbsz_idx, eos_scores, combined_noisy_channel_eos_scores): - """ - Finalize the given hypotheses at this step, while keeping the total - number of finalized hypotheses per sentence <= beam_size. - - Note: the input must be in the desired finalization order, so that - hypotheses that appear earlier in the input are preferred to those - that appear later. 
- - Args: - step: current time step - bbsz_idx: A vector of indices in the range [0, bsz*beam_size), - indicating which hypotheses to finalize - eos_scores: A vector of the same size as bbsz_idx containing - fw scores for each hypothesis - combined_noisy_channel_eos_scores: A vector of the same size as bbsz_idx containing - combined noisy channel scores for each hypothesis - """ - assert bbsz_idx.numel() == eos_scores.numel() - - # clone relevant token and attention tensors - tokens_clone = tokens.index_select(0, bbsz_idx) - tokens_clone = tokens_clone[:, 1:step + 2] # skip the first index, which is EOS - assert not tokens_clone.eq(self.eos).any() - tokens_clone[:, step] = self.eos - attn_clone = attn.index_select(0, bbsz_idx)[:, :, 1:step+2] if attn is not None else None - - # compute scores per token position - pos_scores = scores.index_select(0, bbsz_idx)[:, :step+1] - pos_scores[:, step] = eos_scores - # convert from cumulative to per-position scores - pos_scores[:, 1:] = pos_scores[:, 1:] - pos_scores[:, :-1] - - # normalize sentence-level scores - if self.normalize_scores: - combined_noisy_channel_eos_scores /= (step + 1) ** self.len_penalty - - cum_unfin = [] - prev = 0 - for f in finished: - if f: - prev += 1 - else: - cum_unfin.append(prev) - - sents_seen = set() - for i, (idx, score) in enumerate(zip(bbsz_idx.tolist(), combined_noisy_channel_eos_scores.tolist())): - unfin_idx = idx // beam_size - sent = unfin_idx + cum_unfin[unfin_idx] - - sents_seen.add((sent, unfin_idx)) - - if self.match_source_len and step > src_lengths_no_eos[unfin_idx]: - score = -math.inf - - def get_hypo(): - - if attn_clone is not None: - # remove padding tokens from attn scores - hypo_attn = attn_clone[i][nonpad_idxs[sent]] - _, alignment = hypo_attn.max(dim=0) - else: - hypo_attn = None - alignment = None - - return { - 'tokens': tokens_clone[i], - 'score': score, - 'attention': hypo_attn, # src_len x tgt_len - 'alignment': alignment, - 'positional_scores': pos_scores[i], - } - - if len(finalized[sent]) < beam_size: - finalized[sent].append(get_hypo()) - - newly_finished = [] - for sent, unfin_idx in sents_seen: - # check termination conditions for this sentence - if not finished[sent] and is_finished(sent, step, unfin_idx): - finished[sent] = True - newly_finished.append(unfin_idx) - return newly_finished - - def noisy_channel_rescoring(lprobs, beam_size, bsz, src_tokens, tokens, k): - """Rescore the top k hypothesis from each beam using noisy channel modeling - Returns: - new_fw_lprobs: the direct model probabilities after pruning the top k - new_ch_lm_lprobs: the combined channel and language model probabilities - new_lm_lprobs: the language model probabilities after pruning the top k - """ - with torch.no_grad(): - lprobs_size = lprobs.size() - if prefix_tokens is not None and step < prefix_tokens.size(1): - probs_slice = lprobs.view(bsz, -1, lprobs.size(-1))[:, 0, :] - cand_scores = torch.gather( - probs_slice, dim=1, - index=prefix_tokens[:, step].view(-1, 1).data - ).expand(-1, beam_size).contiguous().view(bsz*beam_size, 1) - cand_indices = prefix_tokens[:, step].view(-1, 1).expand(bsz, beam_size).data.contiguous().view(bsz*beam_size, 1) - - # need to calculate and save fw and lm probs for prefix tokens - fw_top_k = cand_scores - fw_top_k_idx = cand_indices - k = 1 - else: - # take the top k best words for every sentence in batch*beam - fw_top_k, fw_top_k_idx = torch.topk(lprobs.view(beam_size*bsz, -1), k=k) - eos_idx = torch.nonzero(fw_top_k_idx.view(bsz*beam_size*k, -1) == self.eos)[:, 0] - 
ch_scores = fw_top_k.new_full((beam_size*bsz*k, ), 0) - src_size = torch.sum(src_tokens[:, :] != self.src_dict.pad_index, dim=1, keepdim=True, dtype=fw_top_k.dtype) - - if self.combine_method != "lm_only": - temp_src_tokens_full = src_tokens[:, :].repeat(1, k).view(bsz*beam_size*k, -1) - not_padding = temp_src_tokens_full[:, 1:] != self.src_dict.pad_index - cur_tgt_size = step+2 - - # add eos to all candidate sentences except those that already end in eos - eos_tokens = tokens[:, 0].repeat(1, k).view(-1, 1) - eos_tokens[eos_idx] = self.tgt_dict.pad_index - - if step == 0: - channel_input = torch.cat((fw_top_k_idx.view(-1, 1), eos_tokens), 1) - else: - # move eos from beginning to end of target sentence - channel_input = torch.cat((tokens[:, 1:step + 1].repeat(1, k).view(-1, step), fw_top_k_idx.view(-1, 1), eos_tokens), 1) - - ch_input_lengths = torch.tensor(np.full(channel_input.size(0), cur_tgt_size)) - ch_input_lengths[eos_idx] = cur_tgt_size-1 - if self.channel_scoring_type == "unnormalized": - ch_encoder_output = channel_model.encoder(channel_input, src_lengths=ch_input_lengths) - ch_decoder_output, _ = channel_model.decoder(temp_src_tokens_full, encoder_out=ch_encoder_output, features_only=True) - del ch_encoder_output - ch_intermed_scores = channel_model.decoder.unnormalized_scores_given_target(ch_decoder_output, target_ids=temp_src_tokens_full[:, 1:]) - ch_intermed_scores = ch_intermed_scores.float() - ch_intermed_scores *= not_padding.float() - ch_scores = torch.sum(ch_intermed_scores, dim=1) - elif self.channel_scoring_type == "k2_separate": - for k_idx in range(k): - k_eos_tokens = eos_tokens[k_idx::k, :] - if step == 0: - k_ch_input = torch.cat((fw_top_k_idx[:, k_idx:k_idx+1], k_eos_tokens), 1) - else: - # move eos from beginning to end of target sentence - k_ch_input = torch.cat((tokens[:, 1:step + 1], fw_top_k_idx[:, k_idx:k_idx+1], k_eos_tokens), 1) - k_ch_input_lengths = ch_input_lengths[k_idx::k] - k_ch_output = channel_model(k_ch_input, k_ch_input_lengths, src_tokens) - k_ch_lprobs = channel_model.get_normalized_probs(k_ch_output, log_probs=True) - k_ch_intermed_scores = torch.gather(k_ch_lprobs[:, :-1, :], 2, src_tokens[:, 1:].unsqueeze(2)).squeeze(2) - k_ch_intermed_scores *= not_padding.float() - ch_scores[k_idx::k] = torch.sum(k_ch_intermed_scores, dim=1) - elif self.channel_scoring_type == "src_vocab": - ch_encoder_output = channel_model.encoder(channel_input, src_lengths=ch_input_lengths) - ch_decoder_output, _ = channel_model.decoder(temp_src_tokens_full, encoder_out=ch_encoder_output, features_only=True) - - del ch_encoder_output - ch_lprobs = normalized_scores_with_batch_vocab( - channel_model.decoder, - ch_decoder_output, src_tokens, k, bsz, beam_size, - self.src_dict.pad_index, top_k=self.top_k_vocab) - ch_scores = torch.sum(ch_lprobs, dim=1) - elif self.channel_scoring_type == "src_vocab_batched": - ch_bsz_size = temp_src_tokens_full.shape[0] - ch_lprobs_list = [None] * len(range(0, ch_bsz_size, self.ch_scoring_bsz)) - for i, start_idx in enumerate(range(0, ch_bsz_size, self.ch_scoring_bsz)): - end_idx = min(start_idx + self.ch_scoring_bsz, ch_bsz_size) - temp_src_tokens_full_batch = temp_src_tokens_full[start_idx:end_idx, :] - channel_input_batch = channel_input[start_idx:end_idx, :] - ch_input_lengths_batch = ch_input_lengths[start_idx:end_idx] - ch_encoder_output_batch = channel_model.encoder(channel_input_batch, src_lengths=ch_input_lengths_batch) - ch_decoder_output_batch, _ = channel_model.decoder(temp_src_tokens_full_batch, 
encoder_out=ch_encoder_output_batch, features_only=True) - ch_lprobs_list[i] = normalized_scores_with_batch_vocab( - channel_model.decoder, - ch_decoder_output_batch, src_tokens, k, bsz, beam_size, - self.src_dict.pad_index, top_k=self.top_k_vocab, - start_idx=start_idx, end_idx=end_idx) - ch_lprobs = torch.cat(ch_lprobs_list, dim=0) - ch_scores = torch.sum(ch_lprobs, dim=1) - else: - ch_output = channel_model(channel_input, ch_input_lengths, temp_src_tokens_full) - ch_lprobs = channel_model.get_normalized_probs(ch_output, log_probs=True) - ch_intermed_scores = torch.gather(ch_lprobs[:, :-1, :], 2, temp_src_tokens_full[:, 1:].unsqueeze(2)).squeeze().view(bsz*beam_size*k, -1) - ch_intermed_scores *= not_padding.float() - ch_scores = torch.sum(ch_intermed_scores, dim=1) - - else: - cur_tgt_size = 0 - ch_scores = ch_scores.view(bsz*beam_size, k) - expanded_lm_prefix_scores = lm_prefix_scores.unsqueeze(1).expand(-1, k).flatten() - - if self.share_tgt_dict: - lm_scores = get_lm_scores(lm, tokens[:, :step + 1].view(-1, step+1), lm_incremental_states, fw_top_k_idx.view(-1, 1), torch.tensor(np.full(tokens.size(0), step+1)), k) - else: - new_lm_input = dict2dict(tokens[:, :step + 1].view(-1, step+1), self.tgt_to_lm) - new_cands = dict2dict(fw_top_k_idx.view(-1, 1), self.tgt_to_lm) - lm_scores = get_lm_scores(lm, new_lm_input, lm_incremental_states, new_cands, torch.tensor(np.full(tokens.size(0), step+1)), k) - - lm_scores.add_(expanded_lm_prefix_scores) - ch_lm_scores = combine_ch_lm(self.combine_method, ch_scores, lm_scores, src_size, cur_tgt_size) - # initialize all as min value - new_fw_lprobs = ch_scores.new(lprobs_size).fill_(-1e17).view(bsz*beam_size, -1) - new_ch_lm_lprobs = ch_scores.new(lprobs_size).fill_(-1e17).view(bsz*beam_size, -1) - new_lm_lprobs = ch_scores.new(lprobs_size).fill_(-1e17).view(bsz*beam_size, -1) - new_fw_lprobs[:, self.pad] = -math.inf - new_ch_lm_lprobs[:, self.pad] = -math.inf - new_lm_lprobs[:, self.pad] = -math.inf - - new_fw_lprobs.scatter_(1, fw_top_k_idx, fw_top_k) - new_ch_lm_lprobs.scatter_(1, fw_top_k_idx, ch_lm_scores) - new_lm_lprobs.scatter_(1, fw_top_k_idx, lm_scores.view(-1, k)) - return new_fw_lprobs, new_ch_lm_lprobs, new_lm_lprobs - - def combine_ch_lm(combine_type, ch_scores, lm_scores1, src_size, tgt_size): - if self.channel_scoring_type == "unnormalized": - ch_scores = self.log_softmax_fn( - ch_scores.view(-1, self.beam_size * self.k2) - ).view(ch_scores.shape) - ch_scores = ch_scores * self.ch_weight - lm_scores1 = lm_scores1 * self.lm_weight - - if combine_type == "lm_only": - # log P(T|S) + log P(T) - ch_scores = lm_scores1.view(ch_scores.size()) - elif combine_type == "noisy_channel": - # 1/t log P(T|S) + 1/s log P(S|T) + 1/t log P(T) - if self.normalize_lm_scores_by_tgt_len: - ch_scores.div_(src_size) - lm_scores_norm = lm_scores1.view(ch_scores.size()).div(tgt_size) - ch_scores.add_(lm_scores_norm) - # 1/t log P(T|S) + 1/s log P(S|T) + 1/s log P(T) - else: - ch_scores.add_(lm_scores1.view(ch_scores.size())) - ch_scores.div_(src_size) - - return ch_scores - - if self.channel_models is not None: - channel_model = self.channel_models[0] # assume only one channel_model model - else: - channel_model = None - - lm = EnsembleModel(self.lm_models) - lm_incremental_states = torch.jit.annotate( - List[Dict[str, Dict[str, Optional[Tensor]]]], - [ - torch.jit.annotate(Dict[str, Dict[str, Optional[Tensor]]], {}) - for i in range(lm.models_size) - ], - ) - - reorder_state = None - batch_idxs = None - for step in range(max_len + 1): # one extra step for EOS 
marker - # reorder decoder internal states based on the prev choice of beams - if reorder_state is not None: - if batch_idxs is not None: - # update beam indices to take into account removed sentences - corr = batch_idxs - torch.arange(batch_idxs.numel()).type_as(batch_idxs) - reorder_state.view(-1, beam_size).add_(corr.unsqueeze(-1) * beam_size) - model.reorder_incremental_state(incremental_states, reorder_state) - encoder_outs = model.reorder_encoder_out(encoder_outs, reorder_state) - - lm.reorder_incremental_state(lm_incremental_states, reorder_state) - - fw_lprobs, avg_attn_scores = model.forward_decoder( - tokens[:, :step + 1], encoder_outs, incremental_states, temperature=self.temperature, - ) - - fw_lprobs[:, self.pad] = -math.inf # never select pad - fw_lprobs[:, self.unk] -= self.unk_penalty # apply unk penalty - fw_lprobs, ch_lm_lprobs, lm_lprobs = noisy_channel_rescoring(fw_lprobs, beam_size, bsz, src_tokens, tokens, self.k2) - - # handle min and max length constraints - if step >= max_len: - fw_lprobs[:, :self.eos] = -math.inf - fw_lprobs[:, self.eos + 1:] = -math.inf - elif step < self.min_len: - fw_lprobs[:, self.eos] = -math.inf - - # handle prefix tokens (possibly with different lengths) - if prefix_tokens is not None and step < prefix_tokens.size(1): - prefix_toks = prefix_tokens[:, step].unsqueeze(-1).repeat(1, beam_size).view(-1) - prefix_mask = prefix_toks.ne(self.pad) - - prefix_fw_lprobs = fw_lprobs.gather(-1, prefix_toks.unsqueeze(-1)) - fw_lprobs[prefix_mask] = -math.inf - fw_lprobs[prefix_mask] = fw_lprobs[prefix_mask].scatter_( - -1, prefix_toks[prefix_mask].unsqueeze(-1), prefix_fw_lprobs - ) - - prefix_ch_lm_lprobs = ch_lm_lprobs.gather(-1, prefix_toks.unsqueeze(-1)) - ch_lm_lprobs[prefix_mask] = -math.inf - ch_lm_lprobs[prefix_mask] = ch_lm_lprobs[prefix_mask].scatter_( - -1, prefix_toks[prefix_mask].unsqueeze(-1), prefix_ch_lm_lprobs - ) - - prefix_lm_lprobs = lm_lprobs.gather(-1, prefix_toks.unsqueeze(-1)) - lm_lprobs[prefix_mask] = -math.inf - lm_lprobs[prefix_mask] = lm_lprobs[prefix_mask].scatter_( - -1, prefix_toks[prefix_mask].unsqueeze(-1), prefix_lm_lprobs - ) - - # if prefix includes eos, then we should make sure tokens and - # scores are the same across all beams - eos_mask = prefix_toks.eq(self.eos) - if eos_mask.any(): - # validate that the first beam matches the prefix - first_beam = tokens[eos_mask].view(-1, beam_size, tokens.size(-1))[:, 0, 1:step + 1] - eos_mask_batch_dim = eos_mask.view(-1, beam_size)[:, 0] - target_prefix = prefix_tokens[eos_mask_batch_dim][:, :step] - assert (first_beam == target_prefix).all() - - def replicate_first_beam(tensor, mask): - tensor = tensor.view(-1, beam_size, tensor.size(-1)) - tensor[mask] = tensor[mask][:, :1, :] - return tensor.view(-1, tensor.size(-1)) - - # copy tokens, scores and lprobs from the first beam to all beams - tokens = replicate_first_beam(tokens, eos_mask_batch_dim) - scores = replicate_first_beam(scores, eos_mask_batch_dim) - - fw_lprobs = replicate_first_beam(fw_lprobs, eos_mask_batch_dim) - ch_lm_lprobs = replicate_first_beam(ch_lm_lprobs, eos_mask_batch_dim) - lm_lprobs = replicate_first_beam(lm_lprobs, eos_mask_batch_dim) - - if self.no_repeat_ngram_size > 0: - # for each beam and batch sentence, generate a list of previous ngrams - gen_ngrams = [{} for bbsz_idx in range(bsz * beam_size)] - for bbsz_idx in range(bsz * beam_size): - gen_tokens = tokens[bbsz_idx].tolist() - for ngram in zip(*[gen_tokens[i:] for i in range(self.no_repeat_ngram_size)]): - 
gen_ngrams[bbsz_idx][tuple(ngram[:-1])] = \ - gen_ngrams[bbsz_idx].get(tuple(ngram[:-1]), []) + [ngram[-1]] - - # Record attention scores - if avg_attn_scores is not None: - if attn is None: - attn = scores.new(bsz * beam_size, src_tokens.size(1), max_len + 2) - attn_buf = attn.clone() - nonpad_idxs = src_tokens.ne(self.pad) - attn[:, :, step + 1].copy_(avg_attn_scores) - - scores = scores.type_as(fw_lprobs) - scores_buf = scores_buf.type_as(fw_lprobs) - - self.search.set_src_lengths(src_lengths_no_eos) - - if self.no_repeat_ngram_size > 0: - def calculate_banned_tokens(bbsz_idx): - # before decoding the next token, prevent decoding of ngrams that have already appeared - ngram_index = tuple(tokens[bbsz_idx, step + 2 - self.no_repeat_ngram_size:step + 1].tolist()) - return gen_ngrams[bbsz_idx].get(ngram_index, []) - - if step + 2 - self.no_repeat_ngram_size >= 0: - # no banned tokens if we haven't generated no_repeat_ngram_size tokens yet - banned_tokens = [calculate_banned_tokens(bbsz_idx) for bbsz_idx in range(bsz * beam_size)] - else: - banned_tokens = [[] for bbsz_idx in range(bsz * beam_size)] - - for bbsz_idx in range(bsz * beam_size): - fw_lprobs[bbsz_idx, banned_tokens[bbsz_idx]] = -math.inf - - combined_noisy_channel_scores, fw_lprobs_top_k, lm_lprobs_top_k, cand_indices, cand_beams = self.search.step( - step, - fw_lprobs.view(bsz, -1, self.vocab_size), - scores.view(bsz, beam_size, -1)[:, :, :step], ch_lm_lprobs.view(bsz, -1, self.vocab_size), - lm_lprobs.view(bsz, -1, self.vocab_size), self.combine_method - ) - - # cand_bbsz_idx contains beam indices for the top candidate - # hypotheses, with a range of values: [0, bsz*beam_size), - # and dimensions: [bsz, cand_size] - cand_bbsz_idx = cand_beams.add(bbsz_offsets) - - # finalize hypotheses that end in eos (except for candidates to be ignored) - eos_mask = cand_indices.eq(self.eos) - eos_mask[:, :beam_size] &= ~cands_to_ignore - - # only consider eos when it's among the top beam_size indices - eos_bbsz_idx = torch.masked_select( - cand_bbsz_idx[:, :beam_size], mask=eos_mask[:, :beam_size] - ) - - finalized_sents = set() - if eos_bbsz_idx.numel() > 0: - eos_scores = torch.masked_select( - fw_lprobs_top_k[:, :beam_size], mask=eos_mask[:, :beam_size] - ) - combined_noisy_channel_eos_scores = torch.masked_select( - combined_noisy_channel_scores[:, :beam_size], - mask=eos_mask[:, :beam_size], - ) - - # finalize hypo using channel model score - finalized_sents = finalize_hypos( - step, eos_bbsz_idx, eos_scores, combined_noisy_channel_eos_scores) - - num_remaining_sent -= len(finalized_sents) - - assert num_remaining_sent >= 0 - if num_remaining_sent == 0: - break - - if len(finalized_sents) > 0: - new_bsz = bsz - len(finalized_sents) - - # construct batch_idxs which holds indices of batches to keep for the next pass - batch_mask = cand_indices.new_ones(bsz) - batch_mask[cand_indices.new(finalized_sents)] = 0 - batch_idxs = torch.nonzero(batch_mask).squeeze(-1) - - eos_mask = eos_mask[batch_idxs] - cand_beams = cand_beams[batch_idxs] - bbsz_offsets.resize_(new_bsz, 1) - cand_bbsz_idx = cand_beams.add(bbsz_offsets) - - lm_lprobs_top_k = lm_lprobs_top_k[batch_idxs] - - fw_lprobs_top_k = fw_lprobs_top_k[batch_idxs] - cand_indices = cand_indices[batch_idxs] - if prefix_tokens is not None: - prefix_tokens = prefix_tokens[batch_idxs] - src_lengths_no_eos = src_lengths_no_eos[batch_idxs] - cands_to_ignore = cands_to_ignore[batch_idxs] - - scores = scores.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, -1) - scores_buf.resize_as_(scores) - 
tokens = tokens.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, -1) - tokens_buf.resize_as_(tokens) - src_tokens = src_tokens.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, -1) - src_lengths = src_lengths.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, -1) - lm_prefix_scores = lm_prefix_scores.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, -1).squeeze() - - if attn is not None: - attn = attn.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, attn.size(1), -1) - attn_buf.resize_as_(attn) - bsz = new_bsz - else: - batch_idxs = None - - # Set active_mask so that values > cand_size indicate eos or - # ignored hypos and values < cand_size indicate candidate - # active hypos. After this, the min values per row are the top - # candidate active hypos. - eos_mask[:, :beam_size] |= cands_to_ignore - active_mask = torch.add( - eos_mask.type_as(cand_offsets) * cand_size, - cand_offsets[: eos_mask.size(1)], - ) - - # get the top beam_size active hypotheses, which are just the hypos - # with the smallest values in active_mask - active_hypos, new_cands_to_ignore = buffer('active_hypos'), buffer('new_cands_to_ignore') - torch.topk( - active_mask, k=beam_size, dim=1, largest=False, - out=(new_cands_to_ignore, active_hypos) - ) - - # update cands_to_ignore to ignore any finalized hypos - cands_to_ignore = new_cands_to_ignore.ge(cand_size)[:, :beam_size] - assert (~cands_to_ignore).any(dim=1).all() - - active_bbsz_idx = buffer('active_bbsz_idx') - torch.gather( - cand_bbsz_idx, dim=1, index=active_hypos, - out=active_bbsz_idx, - ) - active_scores = torch.gather( - fw_lprobs_top_k, dim=1, index=active_hypos, - out=scores[:, step].view(bsz, beam_size), - ) - - active_bbsz_idx = active_bbsz_idx.view(-1) - active_scores = active_scores.view(-1) - - # copy tokens and scores for active hypotheses - torch.index_select( - tokens[:, :step + 1], dim=0, index=active_bbsz_idx, - out=tokens_buf[:, :step + 1], - ) - torch.gather( - cand_indices, dim=1, index=active_hypos, - out=tokens_buf.view(bsz, beam_size, -1)[:, :, step + 1], - ) - if step > 0: - torch.index_select( - scores[:, :step], dim=0, index=active_bbsz_idx, - out=scores_buf[:, :step], - ) - torch.gather( - fw_lprobs_top_k, dim=1, index=active_hypos, - out=scores_buf.view(bsz, beam_size, -1)[:, :, step], - ) - torch.gather( - lm_lprobs_top_k, dim=1, index=active_hypos, - out=lm_prefix_scores.view(bsz, beam_size) - ) - - # copy attention for active hypotheses - if attn is not None: - torch.index_select( - attn[:, :, :step + 2], dim=0, index=active_bbsz_idx, - out=attn_buf[:, :, :step + 2], - ) - - # swap buffers - tokens, tokens_buf = tokens_buf, tokens - scores, scores_buf = scores_buf, scores - if attn is not None: - attn, attn_buf = attn_buf, attn - - # reorder incremental state in decoder - reorder_state = active_bbsz_idx - - # sort by score descending - for sent in range(len(finalized)): - finalized[sent] = sorted(finalized[sent], key=lambda r: r['score'], reverse=True) - - return finalized - - -def get_lm_scores(model, input_tokens, incremental_states, cand_tokens, input_len, k): - with torch.no_grad(): - lm_lprobs, avg_attn_scores = model.forward_decoder( - input_tokens, encoder_outs=None, incremental_states=incremental_states, - ) - - lm_lprobs_size = lm_lprobs.size(0) - probs_next_wrd = torch.gather(lm_lprobs.repeat(1, k).view(lm_lprobs_size*k, -1), 1, cand_tokens).squeeze().view(-1) - - return probs_next_wrd - - -def make_dict2dict(old_dict, new_dict): - dict2dict_map = {} - for sym in old_dict.symbols: - 
dict2dict_map[old_dict.index(sym)] = new_dict.index(sym)
-    return dict2dict_map
-
-
-def dict2dict(tokens, dict2dict_map):
-    if tokens.device == torch.device('cpu'):
-        tokens_tmp = tokens
-    else:
-        tokens_tmp = tokens.cpu()
-    return tokens_tmp.map_(
-        tokens_tmp,
-        lambda _, val, dict2dict_map=dict2dict_map : dict2dict_map[float(val)]
-    ).to(tokens.device)
-
-
-def reorder_tokens(tokens, lengths, eos):
-    # reorder source tokens so they may be used as reference for P(S|T)
-    return torch.cat((tokens.new([eos]), tokens[-lengths:-1], tokens[:-lengths]), 0)
-
-
-def reorder_all_tokens(tokens, lengths, eos):
-    # used to reorder src tokens from [<pad> <w1> <w2> .. <eos>] to [<eos> <w1> <w2> ... <pad>]
-    # so source tokens can be used to predict P(S|T)
-    return torch.stack([reorder_tokens(token, length, eos) for token, length in zip(tokens, lengths)])
-
-
-def normalized_scores_with_batch_vocab(
-        model_decoder, features, target_ids, k, bsz, beam_size,
-        pad_idx, top_k=0, vocab_size_meter=None, start_idx=None,
-        end_idx=None, **kwargs):
-    """
-    Get normalized probabilities (or log probs) from a net's output
-    w.r.t. vocab consisting of target IDs in the batch
-    """
-    if model_decoder.adaptive_softmax is None:
-        weight = model_decoder.output_projection.weight
-        vocab_ids = torch.unique(
-            torch.cat(
-                (torch.unique(target_ids), torch.arange(top_k, device=target_ids.device))
-            )
-        )
-        id_map = dict(zip(vocab_ids.tolist(), range(len(vocab_ids))))
-        mapped_target_ids = target_ids.cpu().apply_(
-            lambda x, id_map=id_map: id_map[x]
-        ).to(target_ids.device)
-        expanded_target_ids = mapped_target_ids[:, :].repeat(1, k).view(bsz*beam_size*k, -1)
-        if start_idx is not None and end_idx is not None:
-            expanded_target_ids = expanded_target_ids[start_idx:end_idx, :]
-        logits = F.linear(features, weight[vocab_ids, :])
-        log_softmax = F.log_softmax(logits, dim=-1, dtype=torch.float32)
-        intermed_scores = torch.gather(
-            log_softmax[:, :-1, :],
-            2,
-            expanded_target_ids[:, 1:].unsqueeze(2),
-        ).squeeze()
-        not_padding = expanded_target_ids[:, 1:] != pad_idx
-        intermed_scores *= not_padding.float()
-        return intermed_scores
-    else:
-        raise ValueError("adaptive softmax doesn't work with " +
-                         "`normalized_scores_with_batch_vocab()`")
diff --git a/kosmos-g/fairseq/examples/fast_noisy_channel/noisy_channel_translation.py b/kosmos-g/fairseq/examples/fast_noisy_channel/noisy_channel_translation.py
deleted file mode 100644
index b74bdfd45..000000000
--- a/kosmos-g/fairseq/examples/fast_noisy_channel/noisy_channel_translation.py
+++ /dev/null
@@ -1,127 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq.tasks.translation import TranslationTask
-from fairseq.tasks.language_modeling import LanguageModelingTask
-from fairseq import checkpoint_utils
-import argparse
-from fairseq.tasks import register_task
-import torch
-
-
-@register_task("noisy_channel_translation")
-class NoisyChannelTranslation(TranslationTask):
-    """
-    Rescore the top k candidates from each beam using noisy channel modeling
-    """
-
-    @staticmethod
-    def add_args(parser):
-        """Add task-specific arguments to the parser."""
-        TranslationTask.add_args(parser)
-        # fmt: off
-        parser.add_argument('--channel-model', metavar='FILE',
-                            help='path to P(S|T) model. 
P(S|T) and P(T|S) must share source and target dictionaries.') - parser.add_argument('--combine-method', default='lm_only', - choices=['lm_only', 'noisy_channel'], - help="""method for combining direct and channel model scores. - lm_only: decode with P(T|S)P(T) - noisy_channel: decode with 1/t P(T|S) + 1/s(P(S|T)P(T))""") - parser.add_argument('--normalize-lm-scores-by-tgt-len', action='store_true', default=False, - help='normalize lm score by target length instead of source length') - parser.add_argument('--channel-scoring-type', default='log_norm', choices=['unnormalized', 'log_norm', 'k2_separate', 'src_vocab', 'src_vocab_batched'], - help="Normalize bw scores with log softmax or return bw scores without log softmax") - parser.add_argument('--top-k-vocab', default=0, type=int, - help='top k vocab IDs to use with `src_vocab` in channel model scoring') - parser.add_argument('--k2', default=50, type=int, - help='the top k2 candidates to rescore with the noisy channel model for each beam') - parser.add_argument('--ch-wt', default=1, type=float, - help='weight for the channel model') - parser.add_argument('--lm-model', metavar='FILE', - help='path to lm model file, to model P(T). P(T) must share the same vocab as the direct model on the target side') - parser.add_argument('--lm-data', metavar='FILE', - help='path to lm model training data for target language, used to properly load LM with correct dictionary') - parser.add_argument('--lm-wt', default=1, type=float, - help='the weight of the lm in joint decoding') - # fmt: on - - def build_generator( - self, models, args, seq_gen_cls=None, extra_gen_cls_kwargs=None - ): - if getattr(args, "score_reference", False): - raise NotImplementedError() - else: - from .noisy_channel_sequence_generator import NoisyChannelSequenceGenerator - use_cuda = torch.cuda.is_available() and not self.args.cpu - assert self.args.lm_model is not None, '--lm-model required for noisy channel generation!' 
- assert self.args.lm_data is not None, '--lm-data required for noisy channel generation to map between LM and bitext vocabs' - if self.args.channel_model is not None: - import copy - ch_args_task = copy.deepcopy(self.args) - tmp = ch_args_task.source_lang - ch_args_task.source_lang = ch_args_task.target_lang - ch_args_task.target_lang = tmp - ch_args_task._name = 'translation' - channel_task = TranslationTask.setup_task(ch_args_task) - - arg_dict = {} - arg_dict['task'] = 'language_modeling' - arg_dict['sample_break_mode'] = 'eos' - arg_dict['data'] = self.args.lm_data - arg_dict['output_dictionary_size'] = -1 - lm_args = argparse.Namespace(**arg_dict) - lm_task = LanguageModelingTask.setup_task(lm_args) - lm_dict = lm_task.output_dictionary - - if self.args.channel_model is not None: - channel_models, _ = checkpoint_utils.load_model_ensemble(self.args.channel_model.split(':'), task=channel_task) - - for model in channel_models: - model.make_generation_fast_( - beamable_mm_beam_size=None if args.no_beamable_mm else args.beam, - need_attn=args.print_alignment, - ) - if self.args.fp16: - model.half() - if use_cuda: - model.cuda() - else: - channel_models = None - - lm_models, _ = checkpoint_utils.load_model_ensemble(self.args.lm_model.split(':'), task=lm_task) - - for model in lm_models: - model.make_generation_fast_( - beamable_mm_beam_size=None if args.no_beamable_mm else args.beam, - need_attn=args.print_alignment, - ) - if self.args.fp16: - model.half() - if use_cuda: - model.cuda() - return NoisyChannelSequenceGenerator( - combine_method=self.args.combine_method, - tgt_dict=self.target_dictionary, - src_dict=self.source_dictionary, - beam_size=getattr(args, 'beam', 5), - max_len_a=getattr(args, 'max_len_a', 0), - max_len_b=getattr(args, 'max_len_b', 200), - min_len=getattr(args, 'min_len', 1), - len_penalty=getattr(args, 'lenpen', 1), - unk_penalty=getattr(args, 'unkpen', 0), - temperature=getattr(args, 'temperature', 1.), - match_source_len=getattr(args, 'match_source_len', False), - no_repeat_ngram_size=getattr(args, 'no_repeat_ngram_size', 0), - normalize_scores=(not getattr(args, 'unnormalized', False)), - channel_models=channel_models, - k2=getattr(self.args, 'k2', 50), - ch_weight=getattr(self.args, 'ch_wt', 1), - channel_scoring_type=self.args.channel_scoring_type, - top_k_vocab=self.args.top_k_vocab, - lm_models=lm_models, - lm_dict=lm_dict, - lm_weight=getattr(self.args, 'lm_wt', 1), - normalize_lm_scores_by_tgt_len=getattr(self.args, 'normalize_lm_scores_by_tgt_len', False), - ) diff --git a/kosmos-g/fairseq/examples/flores101/README.md b/kosmos-g/fairseq/examples/flores101/README.md deleted file mode 100644 index 635c13f40..000000000 --- a/kosmos-g/fairseq/examples/flores101/README.md +++ /dev/null @@ -1,223 +0,0 @@ -

-
-*(figure: FLORES logo, flores_logo.png)*
-
-
-# Flores101: Large-Scale Multilingual Machine Translation
-
-## Introduction
-
-Baseline pretrained models for the small and large tracks of the WMT 21 Large-Scale Multilingual Machine Translation competition.
-
-Flores Task at WMT 21: http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html
-
-Flores announcement blog post: https://ai.facebook.com/blog/flores-researchers-kick-off-multilingual-translation-challenge-at-wmt-and-call-for-compute-grants/
-
-
-## Pretrained models
-
-Model | Num layers | Embed dimension | FFN dimension | Vocab Size | #params | Download
----|---|---|---|---|---|---
-`flores101_mm100_615M` | 12 | 1024 | 4096 | 256,000 | 615M | https://dl.fbaipublicfiles.com/flores101/pretrained_models/flores101_mm100_615M.tar.gz
-`flores101_mm100_175M` | 6 | 512 | 2048 | 256,000 | 175M | https://dl.fbaipublicfiles.com/flores101/pretrained_models/flores101_mm100_175M.tar.gz
-
-
-These models are trained similarly to [M2M-100](https://arxiv.org/abs/2010.11125), with additional support for the languages that are part of the WMT Large-Scale Multilingual Machine Translation track. The full list of languages can be found at the bottom.
-
-
-## Example Generation code
-
-### Download model, sentencepiece vocab
-
-```bash
-fairseq=/path/to/fairseq
-cd $fairseq
-
-# Download 615M param model.
-wget https://dl.fbaipublicfiles.com/flores101/pretrained_models/flores101_mm100_615M.tar.gz
-
-# Extract
-tar -xvzf flores101_mm100_615M.tar.gz
-```
-
-### Encode using our SentencePiece Model
-Note: Install SentencePiece from [here](https://github.com/google/sentencepiece)
-
-```bash
-fairseq=/path/to/fairseq
-cd $fairseq
-
-# Download example dataset from German to French
-sacrebleu --echo src -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.de
-sacrebleu --echo ref -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.fr
-
-for lang in de fr ; do
-    python scripts/spm_encode.py \
-        --model flores101_mm100_615M/sentencepiece.bpe.model \
-        --output_format=piece \
-        --inputs=raw_input.de-fr.${lang} \
-        --outputs=spm.de-fr.${lang}
-done
-```
-
-### Binarization
-
-```bash
-fairseq-preprocess \
-    --source-lang de --target-lang fr \
-    --testpref spm.de-fr \
-    --thresholdsrc 0 --thresholdtgt 0 \
-    --destdir data_bin \
-    --srcdict flores101_mm100_615M/dict.txt --tgtdict flores101_mm100_615M/dict.txt
-```
-
-### Generation
-
-```bash
-fairseq-generate \
-    data_bin \
-    --batch-size 1 \
-    --path flores101_mm100_615M/model.pt \
-    --fixed-dictionary flores101_mm100_615M/dict.txt \
-    -s de -t fr \
-    --remove-bpe 'sentencepiece' \
-    --beam 5 \
-    --task translation_multi_simple_epoch \
-    --lang-pairs flores101_mm100_615M/language_pairs.txt \
-    --decoder-langtok --encoder-langtok src \
-    --gen-subset test \
-    --fp16 \
-    --dataset-impl mmap \
-    --distributed-world-size 1 --distributed-no-spawn
-```
-
-### Supported Languages and lang code
-
-Language | lang code
----|---
-Afrikaans | af
-Amharic | am
-Arabic | ar
-Assamese | as
-Asturian | ast
-Aymara | ay
-Azerbaijani | az
-Bashkir | ba
-Belarusian | be
-Bulgarian | bg
-Bengali | bn
-Breton | br
-Bosnian | bs
-Catalan | ca
-Cebuano | ceb
-Chokwe | cjk
-Czech | cs
-Welsh | cy
-Danish | da
-German | de
-Dyula | dyu
-Greek | el
-English | en
-Spanish | es
-Estonian | et
-Persian | fa
-Fulah | ff
-Finnish | fi
-French | fr
-Western Frisian | fy
-Irish | ga
-Scottish Gaelic | gd
-Galician | gl
-Gujarati | gu
-Hausa | ha
-Hebrew | he
-Hindi | hi
-Croatian | hr
-Haitian Creole | ht
-Hungarian | hu
-Armenian | hy
-Indonesian | id
-Igbo | ig
-Iloko | ilo
-Icelandic | 
is -Italian | it -Japanese | ja -Javanese | jv -Georgian | ka -Kachin | kac -Kamba | kam -Kabuverdianu | kea -Kongo | kg -Kazakh | kk -Central Khmer | km -Kimbundu | kmb -Northern Kurdish | kmr -Kannada | kn -Korean | ko -Kurdish | ku -Kyrgyz | ky -Luxembourgish | lb -Ganda | lg -Lingala | ln -Lao | lo -Lithuanian | lt -Luo | luo -Latvian | lv -Malagasy | mg -Maori | mi -Macedonian | mk -Malayalam | ml -Mongolian | mn -Marathi | mr -Malay | ms -Maltese | mt -Burmese | my -Nepali | ne -Dutch | nl -Norwegian | no -Northern Sotho | ns -Nyanja | ny -Occitan | oc -Oromo | om -Oriya | or -Punjabi | pa -Polish | pl -Pashto | ps -Portuguese | pt -Quechua | qu -Romanian | ro -Russian | ru -Sindhi | sd -Shan | shn -Sinhala | si -Slovak | sk -Slovenian | sl -Shona | sn -Somali | so -Albanian | sq -Serbian | sr -Swati | ss -Sundanese | su -Swedish | sv -Swahili | sw -Tamil | ta -Telugu | te -Tajik | tg -Thai | th -Tigrinya | ti -Tagalog | tl -Tswana | tn -Turkish | tr -Ukrainian | uk -Umbundu | umb -Urdu | ur -Uzbek | uz -Vietnamese | vi -Wolof | wo -Xhosa | xh -Yiddish | yi -Yoruba | yo -Chinese| zh -Zulu | zu diff --git a/kosmos-g/fairseq/examples/flores101/flores_logo.png b/kosmos-g/fairseq/examples/flores101/flores_logo.png deleted file mode 100644 index d4d1455c6..000000000 Binary files a/kosmos-g/fairseq/examples/flores101/flores_logo.png and /dev/null differ diff --git a/kosmos-g/fairseq/examples/fully_sharded_data_parallel/README.md b/kosmos-g/fairseq/examples/fully_sharded_data_parallel/README.md deleted file mode 100644 index b9e44fef4..000000000 --- a/kosmos-g/fairseq/examples/fully_sharded_data_parallel/README.md +++ /dev/null @@ -1,177 +0,0 @@ -# Fully Sharded Data Parallel (FSDP) - -## Overview -Recent work by [Microsoft](https://arxiv.org/abs/1910.02054) and -[Google](https://arxiv.org/abs/2004.13336) has shown that data parallel -training can be made significantly more efficient by sharding the model -parameters and optimizer state across data parallel workers. These ideas are -encapsulated in the new **`FullyShardedDataParallel` (FSDP)** wrapper provided -by [fairscale](https://github.com/facebookresearch/fairscale/). - -Compared to PyTorch DDP: -* FSDP produces identical results as PyTorch DDP (it's still synchronous data parallel training) -* FSDP shards parameters (FP16 + FP32) and optimizer state across data parallel GPUs -* FSDP is faster than PyTorch DDP because the optimizer step is sharded, and the communication can be overlapped with the forward pass -* FSDP enables training 13B parameter models on 8 GPUs and 175B parameter models on 128 GPUs - -FSDP is fully supported in fairseq via the following new arguments: -* `--ddp-backend=fully_sharded`: enables full sharding via FSDP -* `--cpu-offload`: offloads the optimizer state and FP32 model copy to CPU (combine with `--optimizer=cpu_adam`) -* `--no-reshard-after-forward`: increases training speed for large models (1B+ params) and is similar to ZeRO stage 2 -* other popular options (`--fp16`, `--update-freq`, `--checkpoint-activations`, `--offload-activations`, etc.) continue to work as normal - -
### Limitations

-
-FSDP currently has several limitations compared to fairseq's default DDP backend (PyTorch DDP):
-* while FSDP is fully compatible with pointwise optimizers (e.g., Adam, AdamW, Adadelta, Adamax, SGD, etc.), it is not currently compatible with non-pointwise optimizers (e.g., Adagrad, Adafactor, LAMB, etc.)
-* FSDP depends on flattening the parameters, so models that currently require `--fp16-no-flatten-grads` may not be supported
-
-See the [fairscale docs](https://fairscale.readthedocs.io/en/latest/api/nn/fsdp_tips.html) for a more detailed
-explanation of these and other limitations.
-

- -
### How it works

-
-*(figure: "Fully Sharded Data Parallel" animation)*
-
-See the [fairscale docs](https://fairscale.readthedocs.io/en/latest/api/nn/fsdp_tips.html) for a more detailed
-explanation of how FSDP works.
-
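-
-To make the mechanics concrete, here is a minimal sketch (ours, not part of fairseq) that wraps a toy PyTorch module directly with fairscale's `FullyShardedDataParallel`; it assumes one process per GPU with the distributed environment already set up (e.g., via `torchrun`), and the module sizes are illustrative only:
-
-```python
-import torch
-import torch.distributed as dist
-from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP
-
-dist.init_process_group(backend="nccl")  # one process per GPU
-torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
-
-model = torch.nn.Sequential(
-    torch.nn.Linear(1024, 4096),
-    torch.nn.ReLU(),
-    torch.nn.Linear(4096, 1024),
-).cuda()
-
-# Parameters are sharded across workers; full weights are gathered on the fly
-# for each forward/backward pass and freed again afterwards.
-model = FSDP(model)
-optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # optimizer state is sharded too
-
-x = torch.randn(8, 1024, device="cuda")
-loss = model(x).sum()
-loss.backward()   # gradients are reduce-scattered back to their owning shards
-optimizer.step()  # each worker updates only its own parameter shard
-```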

- -## Example usage - -The following examples illustrate how to train a very large language model with -13 billion parameters on 1 GPU by offloading parameters and optimizer states to -CPU, or on 8 GPUs by fully sharding the params and optimizer states across GPUs. - -These examples use the WikiText-103 dataset for demonstration purposes, but -in practice a much larger dataset will be needed to achieve good results. -Follow the [instructions here](https://github.com/pytorch/fairseq/blob/main/examples/roberta/README.pretraining.md#1-preprocess-the-data) -to preprocess the WikiText-103 dataset using the GPT-2/RoBERTa vocabulary. - -### 13B params on 1 V100 GPU (with CPU offloading) - -The following command trains a 13B parameter GPT-3 model on a single V100 GPU -using the `--cpu-offload` feature to offload parameters and optimizer states to -CPU. In this setting, the optimizer step (Adam) happens on CPU. We also use the -`--checkpoint-activations` feature (sometimes called [gradient checkpointing](https://pytorch.org/docs/stable/checkpoint.html)), -which further saves memory in exchange for a small increase in computation. - -**Requirements:** -- Install the latest master version of fairscale: `pip install git+https://github.com/facebookresearch/fairscale.git@master` -- You'll need 32GB of GPU memory and ~256GB of system memory to train the 13B param model. -- If you have less system memory, the 6.7B param model can be trained with ~128GB of system memory, just set `--arch transformer_lm_gpt3_6_7` -- We use the CPU Adam optimizer from [DeepSpeed](https://github.com/microsoft/DeepSpeed), so you'll need to `pip install deepspeed` before running the command. - -**Notes:** -- The command will take ~5 minutes to start training, during which time it will appear to be hung, since randomly initializing 13B weights can be slow. -- The `--cpu-offload` feature requires training in mixed precision (`--fp16`). -- Tune the `OMP_NUM_THREADS` env variable for best performance with CPU offloading. -- The example command below stops training after 10 steps (`--max-update 10`) and does not save checkpoints (`--no-save`). - -```bash -OMP_NUM_THREADS=20 CUDA_VISIBLE_DEVICES=0 \ - fairseq-train data-bin/wikitext-103-roberta-bpe-bin \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 2048 --batch-size 8 \ - --arch transformer_lm_gpt3_13 \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - --max-update 10 --no-save --log-format json --log-interval 1 -``` - -
#### Example output

- -``` -(...) -2021-03-08 12:29:51 | INFO | fairseq_cli.train | num. model params: 13,110,865,920 (num. trained: 13,110,865,920) -(...) -2021-03-08 12:29:51 | INFO | fairseq_cli.train | training on 1 devices (GPUs/TPUs) -2021-03-08 12:29:51 | INFO | fairseq_cli.train | max tokens per GPU = None and batch size per GPU = 8 -(...) -Adam Optimizer #0 is created with AVX2 arithmetic capability. -Config: alpha=0.000100, betas=(0.900000, 0.980000), weight_decay=0.000000, adam_w=1 -(...) -2021-03-08 12:31:36 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "16.475", "ppl": "91120.8", "wps": "0", "ups": "0", "wpb": "16384", "bsz": "8", "num_updates": "1", "lr": "2e-05", "gnorm": "20.751", "loss_scale": "4", "train_wall": "99", "gb_free": "9.3", "wall": "105"} -2021-03-08 12:32:33 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "16.446", "ppl": "89281.6", "wps": "288.7", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "2", "lr": "4e-05", "gnorm": "19.777", "loss_scale": "4", "train_wall": "57", "gb_free": "9.3", "wall": "161"} -2021-03-08 12:33:12 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 2.0 -2021-03-08 12:33:51 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 1.0 -2021-03-08 12:34:45 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "25.22", "ppl": "3.90691e+07", "wps": "123.4", "ups": "0.01", "wpb": "16384", "bsz": "8", "num_updates": "3", "lr": "6e-05", "gnorm": "131.281", "loss_scale": "1", "train_wall": "133", "gb_free": "9.3", "wall": "294"} -2021-03-08 12:35:43 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "18.079", "ppl": "276809", "wps": "285.5", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "4", "lr": "8e-05", "gnorm": "13.776", "loss_scale": "1", "train_wall": "57", "gb_free": "9.3", "wall": "351"} -2021-03-08 12:36:35 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "23.729", "ppl": "1.39088e+07", "wps": "316.7", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "72.774", "loss_scale": "1", "train_wall": "52", "gb_free": "9.3", "wall": "403"} -2021-03-08 12:37:28 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "20.429", "ppl": "1.41203e+06", "wps": "307.6", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "6", "lr": "8e-05", "gnorm": "60.846", "loss_scale": "1", "train_wall": "53", "gb_free": "9.3", "wall": "456"} -2021-03-08 12:38:27 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "18.965", "ppl": "511684", "wps": "279.4", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "7", "lr": "6e-05", "gnorm": "22.687", "loss_scale": "1", "train_wall": "59", "gb_free": "9.3", "wall": "515"} -2021-03-08 12:39:18 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "18.345", "ppl": "332887", "wps": "319.1", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "8", "lr": "4e-05", "gnorm": "8.451", "loss_scale": "1", "train_wall": "51", "gb_free": "9.3", "wall": "566"} -2021-03-08 12:40:11 | INFO | train_inner | {"epoch": 1, "update": 0.002, "loss": "18.262", "ppl": "314336", "wps": "305.9", "ups": "0.02", "wpb": "16384", "bsz": "8", "num_updates": "9", "lr": "2e-05", "gnorm": "6.457", "loss_scale": "1", "train_wall": "54", "gb_free": "9.3", "wall": "620"} -2021-03-08 12:41:04 | INFO | train_inner | {"epoch": 1, "update": 0.002, "loss": "17.556", "ppl": "192686", "wps": "311.8", "ups": "0.02", "wpb": "16384", 
"bsz": "8", "num_updates": "10", "lr": "0", "gnorm": "5.796", "loss_scale": "1", "train_wall": "53", "gb_free": "9.3", "wall": "673"} -2021-03-08 12:41:04 | INFO | fairseq_cli.train | Stopping training due to num_updates: 10 >= max_update: 10 -2021-03-08 12:41:04 | INFO | fairseq_cli.train | begin validation on "valid" subset -2021-03-08 12:43:15 | INFO | valid | {"epoch": 1, "valid_loss": "17.953", "valid_ppl": "253807", "valid_wps": "1868.4", "valid_wpb": "15400.2", "valid_bsz": "7.6", "valid_num_updates": "10"} -2021-03-08 12:43:15 | INFO | fairseq_cli.train | end of epoch 1 (average epoch stats below) -2021-03-08 12:43:15 | INFO | train | {"epoch": 1, "train_loss": "19.351", "train_ppl": "668509", "train_wps": "210.9", "train_ups": "0.01", "train_wpb": "16384", "train_bsz": "8", "train_num_updates": "10", "train_lr": "0", "train_gnorm": "36.26", "train_loss_scale": "1", "train_train_wall": "667", "train_gb_free": "9.3", "train_wall": "804"} -2021-03-08 12:43:15 | INFO | fairseq_cli.train | done training in 798.6 seconds -``` - -

- -### 13B params on 8 V100 GPUs (with full parameter + optimizer state sharding) - -FSDP can also shard the parameters and optimizer states across multiple GPUs, -reducing memory requirements significantly. On 8 x 32GB GPUs, sharding enables -training the same 13B parameter model *without offloading the parameters to -CPU*. However, without CPU offloading we'd only be able to fit a batch size of -1 per GPU, which would cause training speed to suffer. - -We obtain the best performance on 8 GPUs by combining full sharding and CPU -offloading. The following command trains the same 13B parameter GPT-3 model as -before on 8 x 32GB V100 GPUs; training speed increases superlinearly from ~310 -words per second to ~3200 words per second. - -```bash -OMP_NUM_THREADS=20 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \ - fairseq-train data-bin/wikitext-103-roberta-bpe-bin \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 2048 --batch-size 8 \ - --arch transformer_lm_gpt3_13 \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - --max-update 10 --no-save --log-format json --log-interval 1 -``` - -
#### Example output

- -``` -(...) -2021-03-08 18:04:09 | INFO | fairseq_cli.train | num. model params: 13,110,865,920 (num. trained: 13,110,865,920) -(...) -2021-03-08 18:04:09 | INFO | fairseq_cli.train | training on 8 devices (GPUs/TPUs) -2021-03-08 18:04:09 | INFO | fairseq_cli.train | max tokens per GPU = None and batch size per GPU = 8 -(...) -Adam Optimizer #0 is created with AVX2 arithmetic capability. -Config: alpha=0.000100, betas=(0.900000, 0.980000), weight_decay=0.000000, adam_w=1 -(...) -2021-03-08 18:05:06 | INFO | train_inner | {"epoch": 1, "update": 0.001, "loss": "16.408", "ppl": "86945.6", "wps": "0", "ups": "0", "wpb": "131072", "bsz": "64", "num_updates": "1", "lr": "2e-05", "gnorm": "18.27", "loss_scale": "4", "train_wall": "47", "gb_free": "9.3", "wall": "56"} -2021-03-08 18:05:45 | INFO | train_inner | {"epoch": 1, "update": 0.002, "loss": "16.352", "ppl": "83644.3", "wps": "3283.4", "ups": "0.03", "wpb": "131072", "bsz": "64", "num_updates": "2", "lr": "4e-05", "gnorm": "18.411", "loss_scale": "4", "train_wall": "40", "gb_free": "9.3", "wall": "96"} -2021-03-08 18:06:21 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 2.0 -2021-03-08 18:06:56 | INFO | fairseq.trainer | NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 1.0 -2021-03-08 18:07:37 | INFO | train_inner | {"epoch": 1, "update": 0.006, "loss": "23.682", "ppl": "1.34537e+07", "wps": "1176.6", "ups": "0.01", "wpb": "131072", "bsz": "64", "num_updates": "3", "lr": "6e-05", "gnorm": "119.682", "loss_scale": "1", "train_wall": "111", "gb_free": "9.3", "wall": "208"} -2021-03-08 18:08:18 | INFO | train_inner | {"epoch": 1, "update": 0.007, "loss": "18.988", "ppl": "519921", "wps": "3189.1", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "4", "lr": "8e-05", "gnorm": "14.934", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "249"} -2021-03-08 18:08:59 | INFO | train_inner | {"epoch": 1, "update": 0.008, "loss": "20.08", "ppl": "1.10798e+06", "wps": "3223.1", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "5", "lr": "0.0001", "gnorm": "59.92", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "289"} -2021-03-08 18:09:39 | INFO | train_inner | {"epoch": 1, "update": 0.009, "loss": "18.323", "ppl": "327980", "wps": "3256.6", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "6", "lr": "8e-05", "gnorm": "37.425", "loss_scale": "1", "train_wall": "40", "gb_free": "9.3", "wall": "330"} -2021-03-08 18:10:20 | INFO | train_inner | {"epoch": 1, "update": 0.01, "loss": "17.264", "ppl": "157354", "wps": "3188.7", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "7", "lr": "6e-05", "gnorm": "10.824", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "371"} -2021-03-08 18:11:01 | INFO | train_inner | {"epoch": 1, "update": 0.011, "loss": "16.794", "ppl": "113647", "wps": "3230", "ups": "0.02", "wpb": "131072", "bsz": "64", "num_updates": "8", "lr": "4e-05", "gnorm": "5.616", "loss_scale": "1", "train_wall": "41", "gb_free": "9.3", "wall": "411"} -2021-03-08 18:11:39 | INFO | train_inner | {"epoch": 1, "update": 0.012, "loss": "16.706", "ppl": "106938", "wps": "3384", "ups": "0.03", "wpb": "131072", "bsz": "64", "num_updates": "9", "lr": "2e-05", "gnorm": "5.318", "loss_scale": "1", "train_wall": "39", "gb_free": "9.3", "wall": "450"} -2021-03-08 18:12:19 | INFO | train_inner | {"epoch": 1, "update": 0.013, "loss": "16.548", "ppl": "95796.2", "wps": "3274.4", "ups": "0.02", 
"wpb": "131072", "bsz": "64", "num_updates": "10", "lr": "0", "gnorm": "5.22", "loss_scale": "1", "train_wall": "40", "gb_free": "9.3", "wall": "490"} -2021-03-08 18:12:19 | INFO | fairseq_cli.train | Stopping training due to num_updates: 10 >= max_update: 10 -2021-03-08 18:12:19 | INFO | fairseq_cli.train | begin validation on "valid" subset -2021-03-08 18:12:45 | INFO | valid | {"epoch": 1, "valid_loss": "16.624", "valid_ppl": "101000", "valid_wps": "10855.9", "valid_wpb": "123202", "valid_bsz": "60.5", "valid_num_updates": "10"} -2021-03-08 18:12:45 | INFO | fairseq_cli.train | end of epoch 1 (average epoch stats below) -2021-03-08 18:12:45 | INFO | train | {"epoch": 1, "train_loss": "18.114", "train_ppl": "283776", "train_wps": "2567.8", "train_ups": "0.02", "train_wpb": "131072", "train_bsz": "64", "train_num_updates": "10", "train_lr": "0", "train_gnorm": "29.562", "train_loss_scale": "1", "train_train_wall": "480", "train_gb_free": "9.3", "train_wall": "516"} -2021-03-08 18:12:45 | INFO | fairseq_cli.train | done training in 509.9 seconds -``` - -

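-
-As a rough back-of-envelope check (ours, ignoring activations, gradients, and framework overhead): with Adam and `--fp16`, each parameter costs about 2 bytes (FP16 copy) + 4 bytes (FP32 master copy) + 8 bytes (two FP32 Adam moments) = 14 bytes. For 13B parameters that is roughly 13e9 * 14 bytes, or about 182 GB in total, far beyond a single 32GB V100; fully sharded across 8 GPUs it drops to about 182 / 8, or roughly 23 GB per GPU, which is consistent with the 13B model fitting on 8 x 32GB GPUs without CPU offloading.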
diff --git a/kosmos-g/fairseq/examples/gottbert/README.md b/kosmos-g/fairseq/examples/gottbert/README.md
deleted file mode 100644
index 1d58feb27..000000000
--- a/kosmos-g/fairseq/examples/gottbert/README.md
+++ /dev/null
@@ -1,64 +0,0 @@
-# GottBERT: a pure German language model
-
-## Introduction
-
-[GottBERT](http://arxiv.org/abs/2012.02110) is a RoBERTa-based language model pretrained on 145GB of German text.
-
-## Example usage
-
-### fairseq
-##### Load GottBERT from torch.hub (PyTorch >= 1.1):
-```python
-import torch
-gottbert = torch.hub.load('pytorch/fairseq', 'gottbert-base')
-gottbert.eval()  # disable dropout (or leave in train mode to finetune)
-```
-
-##### Load GottBERT (for PyTorch 1.0 or custom models):
-```bash
-# Download gottbert model
-wget https://dl.gottbert.de/fairseq/models/gottbert-base.tar.gz
-tar -xzvf gottbert-base.tar.gz
-```
-```python
-# Load the model in fairseq
-from fairseq.models.roberta import GottbertModel
-gottbert = GottbertModel.from_pretrained('/path/to/gottbert')
-gottbert.eval()  # disable dropout (or leave in train mode to finetune)
-```
-
-##### Filling masks:
-```python
-masked_line = 'Gott ist <mask> ! :)'
-gottbert.fill_mask(masked_line, topk=3)
-# [('Gott ist gut ! :)', 0.3642110526561737, ' gut'),
-#  ('Gott ist überall ! :)', 0.06009674072265625, ' überall'),
-#  ('Gott ist großartig ! :)', 0.0370681993663311, ' großartig')]
-```
-
-##### Extract features from GottBERT
-
-```python
-# Extract the last layer's features
-line = "Der erste Schluck aus dem Becher der Naturwissenschaft macht atheistisch , aber auf dem Grunde des Bechers wartet Gott !"
-tokens = gottbert.encode(line)
-last_layer_features = gottbert.extract_features(tokens)
-assert last_layer_features.size() == torch.Size([1, 27, 768])
-
-# Extract all layers' features (layer 0 is the embedding layer)
-all_layers = gottbert.extract_features(tokens, return_all_hiddens=True)
-assert len(all_layers) == 13
-assert torch.all(all_layers[-1] == last_layer_features)
-```
-## Citation
-If you use our work, please cite:
-
-```bibtex
-@misc{scheible2020gottbert,
-  title={GottBERT: a pure German Language Model},
-  author={Raphael Scheible and Fabian Thomczyk and Patric Tippmann and Victor Jaravine and Martin Boeker},
-  year={2020},
-  eprint={2012.02110},
-  archivePrefix={arXiv},
-  primaryClass={cs.CL}
-}
-```
diff --git a/kosmos-g/fairseq/examples/hubert/README.md b/kosmos-g/fairseq/examples/hubert/README.md
deleted file mode 100644
index f88a34dc3..000000000
--- a/kosmos-g/fairseq/examples/hubert/README.md
+++ /dev/null
@@ -1,116 +0,0 @@
-# HuBERT
-
-## Pre-trained and fine-tuned (ASR) models
-Model | Pretraining Data | Finetuning Dataset | Download
----|---|---|---
-HuBERT Base (~95M params) | [Librispeech](http://www.openslr.org/12) 960 hr | No finetuning (Pretrained Model) | [download](https://dl.fbaipublicfiles.com/hubert/hubert_base_ls960.pt)
-HuBERT Large (~316M params) | [Libri-Light](https://github.com/facebookresearch/libri-light) 60k hr | No finetuning (Pretrained Model) | [download](https://dl.fbaipublicfiles.com/hubert/hubert_large_ll60k.pt)
-HuBERT Extra Large (~1B params) | [Libri-Light](https://github.com/facebookresearch/libri-light) 60k hr | No finetuning (Pretrained Model) | [download](https://dl.fbaipublicfiles.com/hubert/hubert_xtralarge_ll60k.pt)
-HuBERT Large | [Libri-Light](https://github.com/facebookresearch/libri-light) 60k hr | [Librispeech](http://www.openslr.org/12) 960 hr | [download](https://dl.fbaipublicfiles.com/hubert/hubert_large_ll60k_finetune_ls960.pt)
-HuBERT Extra Large 
| [Libri-Light](https://github.com/facebookresearch/libri-light) 60k hr | [Librispeech](http://www.openslr.org/12) 960 hr | [download](https://dl.fbaipublicfiles.com/hubert/hubert_xtralarge_ll60k_finetune_ls960.pt)
-
-## Load a model
-```python
-import fairseq
-
-ckpt_path = "/path/to/the/checkpoint.pt"
-models, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([ckpt_path])
-model = models[0]
-```
-
-## Train a new model
-
-### Data preparation
-
-Follow the steps in `./simple_kmeans` to create:
-- `{train,valid}.tsv` waveform list files
-- `{train,valid}.km` frame-aligned pseudo label files
-- `dict.km.txt` a dummy dictionary
-
-The `label_rate` is the same as the feature frame rate used for clustering,
-which is 100Hz for MFCC features and 50Hz for HuBERT features by default.
-
-### Pre-train a HuBERT model
-
-Suppose `{train,valid}.tsv` are saved at `/path/to/data`, `{train,valid}.km`
-are saved at `/path/to/labels`, and the label rate is 100Hz.
-
-To train a base model (12 layer transformer), run:
-```sh
-$ python fairseq_cli/hydra_train.py \
-    --config-dir /path/to/fairseq-py/examples/hubert/config/pretrain \
-    --config-name hubert_base_librispeech \
-    task.data=/path/to/data task.label_dir=/path/to/labels task.labels='["km"]' model.label_rate=100
-```
-
-### Fine-tune a HuBERT model with a CTC loss
-
-Suppose `{train,valid}.tsv` are saved at `/path/to/data`, and their
-corresponding character transcripts `{train,valid}.ltr` are saved at
-`/path/to/trans`.
-
-To fine-tune a pre-trained HuBERT model at `/path/to/checkpoint`, run:
-```sh
-$ python fairseq_cli/hydra_train.py \
-    --config-dir /path/to/fairseq-py/examples/hubert/config/finetune \
-    --config-name base_10h \
-    task.data=/path/to/data task.label_dir=/path/to/trans \
-    model.w2v_path=/path/to/checkpoint
-```
-
-### Decode a HuBERT model
-
-Suppose `test.tsv` and `test.ltr` are the waveform list and transcripts of
-the split to be decoded, saved at `/path/to/data`, and the fine-tuned model is
-saved at `/path/to/checkpoint`. We support three decoding modes:
-- Viterbi decoding: greedy decoding without a language model
-- KenLM decoding: decoding with an arpa-format KenLM n-gram language model
-- Fairseq-LM decoding: decoding with a Fairseq neural language model
-
-
-#### Viterbi decoding
-
-`task.normalize` needs to be consistent with the value used during fine-tuning.
-Decoding results will be saved at
-`/path/to/experiment/directory/decode/viterbi/test`.
-
-```sh
-$ python examples/speech_recognition/new/infer.py \
-    --config-dir /path/to/fairseq-py/examples/hubert/config/decode \
-    --config-name infer_viterbi \
-    task.data=/path/to/data \
-    task.normalize=[true|false] \
-    decoding.exp_dir=/path/to/experiment/directory \
-    common_eval.path=/path/to/checkpoint \
-    dataset.gen_subset=test
-```
-
-#### KenLM / Fairseq-LM decoding
-
-Suppose the pronunciation lexicon and the n-gram LM are saved at
-`/path/to/lexicon` and `/path/to/arpa`, respectively. Decoding results will be
-saved at `/path/to/experiment/directory/decode/kenlm/test`.
-
-```sh
-$ python examples/speech_recognition/new/infer.py \
-    --config-dir /path/to/fairseq-py/examples/hubert/config/decode \
-    --config-name infer_kenlm \
-    task.data=/path/to/data \
-    task.normalize=[true|false] \
-    decoding.exp_dir=/path/to/experiment/directory \
-    common_eval.path=/path/to/checkpoint \
-    dataset.gen_subset=test \
-    decoding.decoder.lexicon=/path/to/lexicon \
-    decoding.decoder.lmpath=/path/to/arpa
-```
-
-The command above uses the default decoding hyperparameters, which can be found
-in `examples/speech_recognition/hydra/decoder.py`. These parameters can be
-configured from the command line. For example, to search with a beam size of
-500, append `decoding.decoder.beam=500` to the command above.
-Important parameters include:
-- decoding.decoder.beam
-- decoding.decoder.beamthreshold
-- decoding.decoder.lmweight
-- decoding.decoder.wordscore
-- decoding.decoder.silweight
-
-To decode with a Fairseq LM, use `--config-name infer_fsqlm` instead, and
-change the paths of the lexicon and LM accordingly.
diff --git a/kosmos-g/fairseq/examples/hubert/config/decode/ax_sweep/ngram.yaml b/kosmos-g/fairseq/examples/hubert/config/decode/ax_sweep/ngram.yaml
deleted file mode 100644
index 5a02df1f7..000000000
--- a/kosmos-g/fairseq/examples/hubert/config/decode/ax_sweep/ngram.yaml
+++ /dev/null
@@ -1,33 +0,0 @@
-# @package _global_
-
-common_eval:
-  results_path: ${decoding.exp_dir}/decode/${decoding.decoder.name}_ax/${dataset.gen_subset}
-
-hydra:
-  sweeper:
-    ax_config:
-      max_trials: 60
-      early_stop:
-        minimize: true
-        max_epochs_without_improvement: 10
-        epsilon: 0.025
-      experiment:
-        name: ${dataset.gen_subset}
-        objective_name: wer
-        minimize: true
-        parameter_constraints: null
-        outcome_constraints: null
-        status_quo: null
-      client:
-        verbose_logging: false
-        random_seed: null
-      params:
-        decoding.decoder.lmweight:
-          type: range
-          bounds: [0.0, 8.0]
-        decoding.decoder.wordscore:
-          type: range
-          bounds: [-5.0, 5.0]
-        decoding.decoder.silweight:
-          type: range
-          bounds: [-10.0, 0.0]
diff --git a/kosmos-g/fairseq/examples/hubert/config/decode/ax_sweep/transformer.yaml b/kosmos-g/fairseq/examples/hubert/config/decode/ax_sweep/transformer.yaml
deleted file mode 100644
index 85ed3bd1a..000000000
--- a/kosmos-g/fairseq/examples/hubert/config/decode/ax_sweep/transformer.yaml
+++ /dev/null
@@ -1,33 +0,0 @@
-# @package _global_
-
-common_eval:
-  results_path: ${decoding.exp_dir}/decode/${decoding.decoder.name}_ax/${dataset.gen_subset}
-
-hydra:
-  sweeper:
-    ax_config:
-      max_trials: 60
-      early_stop:
-        minimize: true
-        max_epochs_without_improvement: 10
-        epsilon: 0.025
-      experiment:
-        name: ${dataset.gen_subset}
-        objective_name: wer
-        minimize: true
-        parameter_constraints: null
-        outcome_constraints: null
-        status_quo: null
-      client:
-        verbose_logging: false
-        random_seed: null
-      params:
-        decoding.decoder.lmweight:
-          type: range
-          bounds: [0.0, 4.0]
-        decoding.decoder.wordscore:
-          type: range
-          bounds: [-5.0, 5.0]
-        decoding.decoder.silweight:
-          type: range
-          bounds: [-8.0, 0.0]
diff --git a/kosmos-g/fairseq/examples/hubert/config/decode/infer_fsqlm.yaml b/kosmos-g/fairseq/examples/hubert/config/decode/infer_fsqlm.yaml
deleted file mode 100644
index 026ad8db8..000000000
--- a/kosmos-g/fairseq/examples/hubert/config/decode/infer_fsqlm.yaml
+++ /dev/null
@@ -1,36 +0,0 @@
-# @package _group_
-
-defaults:
-  - model: null
-
-hydra:
-  run:
-    dir: 
${common_eval.results_path}/beam${decoding.beam}_th${decoding.beamthreshold}_lmw${decoding.lmweight}_wrd${decoding.wordscore}_sil${decoding.silweight} - sweep: - dir: ${common_eval.results_path} - subdir: beam${decoding.beam}_th${decoding.beamthreshold}_lmw${decoding.lmweight}_wrd${decoding.wordscore}_sil${decoding.silweight} - -task: - _name: hubert_pretraining - single_target: true - fine_tuning: true - data: ??? - normalize: ??? - -decoding: - type: fairseqlm - lexicon: ??? - lmpath: ??? - beamthreshold: 25 - beam: 500 - lmweight: 2 - wordscore: -1 - silweight: 0 - unique_wer_file: true -common_eval: - results_path: ??? - path: ??? - post_process: letter -dataset: - max_tokens: 1100000 - gen_subset: ??? diff --git a/kosmos-g/fairseq/examples/hubert/config/decode/infer_kenlm.yaml b/kosmos-g/fairseq/examples/hubert/config/decode/infer_kenlm.yaml deleted file mode 100644 index 04642aeb6..000000000 --- a/kosmos-g/fairseq/examples/hubert/config/decode/infer_kenlm.yaml +++ /dev/null @@ -1,36 +0,0 @@ -# @package _group_ - -defaults: - - model: null - -hydra: - run: - dir: ${common_eval.results_path}/beam${decoding.beam}_th${decoding.beamthreshold}_lmw${decoding.lmweight}_wrd${decoding.wordscore}_sil${decoding.silweight} - sweep: - dir: ${common_eval.results_path} - subdir: beam${decoding.beam}_th${decoding.beamthreshold}_lmw${decoding.lmweight}_wrd${decoding.wordscore}_sil${decoding.silweight} - -task: - _name: hubert_pretraining - single_target: true - fine_tuning: true - data: ??? - normalize: ??? - -decoding: - type: kenlm - lexicon: ??? - lmpath: ??? - beamthreshold: 100 - beam: 500 - lmweight: 2 - wordscore: -1 - silweight: 0 - unique_wer_file: true -common_eval: - results_path: ??? - path: ??? - post_process: letter -dataset: - max_tokens: 1100000 - gen_subset: ??? diff --git a/kosmos-g/fairseq/examples/hubert/config/decode/infer_viterbi.yaml b/kosmos-g/fairseq/examples/hubert/config/decode/infer_viterbi.yaml deleted file mode 100644 index 4afc74c18..000000000 --- a/kosmos-g/fairseq/examples/hubert/config/decode/infer_viterbi.yaml +++ /dev/null @@ -1,29 +0,0 @@ -# @package _group_ - -defaults: - - model: null - -hydra: - run: - dir: ${common_eval.results_path}/viterbi - sweep: - dir: ${common_eval.results_path} - subdir: viterbi - -task: - _name: hubert_pretraining - single_target: true - fine_tuning: true - data: ??? - normalize: ??? - -decoding: - type: viterbi - unique_wer_file: true -common_eval: - results_path: ??? - path: ??? - post_process: letter -dataset: - max_tokens: 1100000 - gen_subset: ??? 
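-
-(A note on the decode configs above: in hydra/OmegaConf, `???` marks a mandatory value with no default, so a launch fails unless it is supplied on the command line; this is why the decode commands in the HuBERT README above pass values such as `task.data`, `common_eval.path`, and `dataset.gen_subset` explicitly.)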
diff --git a/kosmos-g/fairseq/examples/hubert/config/decode/run/submitit_slurm.yaml b/kosmos-g/fairseq/examples/hubert/config/decode/run/submitit_slurm.yaml deleted file mode 100644 index 0b8065832..000000000 --- a/kosmos-g/fairseq/examples/hubert/config/decode/run/submitit_slurm.yaml +++ /dev/null @@ -1,17 +0,0 @@ -# @package _global_ -hydra: - launcher: - cpus_per_task: ${distributed_training.distributed_world_size} - gpus_per_node: ${distributed_training.distributed_world_size} - tasks_per_node: ${hydra.launcher.gpus_per_node} - nodes: 1 - mem_gb: 200 - timeout_min: 4320 - max_num_timeout: 50 - name: ${hydra.job.config_name} - submitit_folder: ${hydra.sweep.dir}/submitit - -distributed_training: - distributed_world_size: 1 - distributed_no_spawn: true - distributed_port: 29761 diff --git a/kosmos-g/fairseq/examples/hubert/config/decode/run/submitit_slurm_8gpu.yaml b/kosmos-g/fairseq/examples/hubert/config/decode/run/submitit_slurm_8gpu.yaml deleted file mode 100644 index 2f669f376..000000000 --- a/kosmos-g/fairseq/examples/hubert/config/decode/run/submitit_slurm_8gpu.yaml +++ /dev/null @@ -1,17 +0,0 @@ -# @package _global_ -hydra: - launcher: - cpus_per_task: ${distributed_training.distributed_world_size} - gpus_per_node: ${distributed_training.distributed_world_size} - tasks_per_node: ${hydra.launcher.gpus_per_node} - nodes: 1 - mem_gb: 200 - timeout_min: 4320 - max_num_timeout: 50 - name: ${hydra.job.config_name} - submitit_folder: ${hydra.sweep.dir}/submitit - -distributed_training: - distributed_world_size: 8 - distributed_no_spawn: true - distributed_port: 29761 diff --git a/kosmos-g/fairseq/examples/hubert/config/finetune/base_10h.yaml b/kosmos-g/fairseq/examples/hubert/config/finetune/base_10h.yaml deleted file mode 100644 index a22c7c034..000000000 --- a/kosmos-g/fairseq/examples/hubert/config/finetune/base_10h.yaml +++ /dev/null @@ -1,100 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - tensorboard_logdir: tblog - seed: 1337 - -checkpoint: - save_interval: 5 - keep_interval_updates: 1 - no_epoch_checkpoints: true - best_checkpoint_metric: wer - -distributed_training: - ddp_backend: c10d - find_unused_parameters: true - distributed_world_size: 1 - distributed_port: 29671 - nprocs_per_node: 8 - -task: - _name: hubert_pretraining - data: ??? - fine_tuning: true - label_dir: ??? - normalize: false # must be consistent with pre-training - labels: ["ltr"] - single_target: true - -dataset: - num_workers: 0 - max_tokens: 3200000 - validate_after_updates: ${model.freeze_finetune_updates} - validate_interval: 5 - train_subset: train - valid_subset: valid - -criterion: - _name: ctc - zero_infinity: true - -optimization: - max_update: 25000 - lr: [2e-5] - sentence_avg: true - update_freq: [1] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-08 - -lr_scheduler: - _name: tri_stage - warmup_steps: 8000 - hold_steps: 0 - decay_steps: 72000 - final_lr_scale: 0.05 - -model: - _name: hubert_ctc - w2v_path: ??? 
- apply_mask: true - mask_selection: static - mask_length: 10 - mask_other: 0 - mask_prob: 0.75 - mask_channel_selection: static - mask_channel_length: 64 - mask_channel_other: 0 - mask_channel_prob: 0.5 - layerdrop: 0.1 - dropout: 0.0 - activation_dropout: 0.1 - attention_dropout: 0.0 - feature_grad_mult: 0.0 - freeze_finetune_updates: 10000 - -hydra: - job: - config: - override_dirname: - kv_sep: '-' - item_sep: '__' - exclude_keys: - - run - - task.data - - task.label_dir - - model.w2v_path - - dataset.train_subset - - dataset.valid_subset - - criterion.wer_kenlm_model - - criterion.wer_lexicon - run: - dir: ??? - sweep: - dir: ??? - subdir: ${hydra.job.config_name}__${hydra.job.override_dirname} diff --git a/kosmos-g/fairseq/examples/hubert/config/finetune/ckpt/it1.yaml b/kosmos-g/fairseq/examples/hubert/config/finetune/ckpt/it1.yaml deleted file mode 100644 index 2af96b3f7..000000000 --- a/kosmos-g/fairseq/examples/hubert/config/finetune/ckpt/it1.yaml +++ /dev/null @@ -1,7 +0,0 @@ -# @package _global_ - -task: - normalize: false - -model: - w2v_path: /checkpoint/wnhsu/w2v/hubert_final/iter1/hubert.km.randcrop.pmw1_0.puw0_0.grpnorm.ml10.mp0_8.untie.mxsz250000.ufreq1.maxtok1400000.MU400k.s1337.ngpu32/checkpoint_last.pt diff --git a/kosmos-g/fairseq/examples/hubert/config/finetune/lm/ls_4gram.yaml b/kosmos-g/fairseq/examples/hubert/config/finetune/lm/ls_4gram.yaml deleted file mode 100644 index 8c7728ad2..000000000 --- a/kosmos-g/fairseq/examples/hubert/config/finetune/lm/ls_4gram.yaml +++ /dev/null @@ -1,7 +0,0 @@ -# @package _global_ - -criterion: - wer_kenlm_model: /checkpoint/abdo/old_checkpoint02/datasets/librispeech/4-gram.bin - wer_lexicon: /checkpoint/abdo/old_checkpoint02/datasets/librispeech/10h/raw/lexicon_ltr.lst - wer_lm_weight: 2.0 - wer_word_score: -1.0 diff --git a/kosmos-g/fairseq/examples/hubert/config/finetune/run/submitit_reg.yaml b/kosmos-g/fairseq/examples/hubert/config/finetune/run/submitit_reg.yaml deleted file mode 100644 index 27509503e..000000000 --- a/kosmos-g/fairseq/examples/hubert/config/finetune/run/submitit_reg.yaml +++ /dev/null @@ -1,20 +0,0 @@ -# @package _global_ - -hydra: - launcher: - cpus_per_task: 8 - gpus_per_node: 8 - tasks_per_node: ${hydra.launcher.gpus_per_node} - nodes: 1 - comment: null - mem_gb: 384 - timeout_min: 4320 - max_num_timeout: 100 - constraint: volta32gb - name: ${hydra.job.config_name}/${hydra.job.override_dirname} - submitit_folder: ${hydra.sweep.dir}/submitit/%j - -distributed_training: - distributed_world_size: 8 - distributed_port: 29671 - nprocs_per_node: 8 diff --git a/kosmos-g/fairseq/examples/hubert/config/pretrain/data/iter1.yaml b/kosmos-g/fairseq/examples/hubert/config/pretrain/data/iter1.yaml deleted file mode 100644 index 0a1b65d80..000000000 --- a/kosmos-g/fairseq/examples/hubert/config/pretrain/data/iter1.yaml +++ /dev/null @@ -1,8 +0,0 @@ -# @package _global_ - -task: - label_dir: ??? - labels: ["km"] - -model: - label_rate: 100 diff --git a/kosmos-g/fairseq/examples/hubert/config/pretrain/data/iter2.yaml b/kosmos-g/fairseq/examples/hubert/config/pretrain/data/iter2.yaml deleted file mode 100644 index 2d4bfe61c..000000000 --- a/kosmos-g/fairseq/examples/hubert/config/pretrain/data/iter2.yaml +++ /dev/null @@ -1,8 +0,0 @@ -# @package _global_ - -task: - label_dir: ??? 
- labels: ["km"] - -model: - label_rate: 50 diff --git a/kosmos-g/fairseq/examples/hubert/config/pretrain/hubert_base_librispeech.yaml b/kosmos-g/fairseq/examples/hubert/config/pretrain/hubert_base_librispeech.yaml deleted file mode 100644 index bd84461a1..000000000 --- a/kosmos-g/fairseq/examples/hubert/config/pretrain/hubert_base_librispeech.yaml +++ /dev/null @@ -1,97 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - seed: 1337 - tensorboard_logdir: tblog - -checkpoint: - save_interval_updates: 25000 - keep_interval_updates: 1 - no_epoch_checkpoints: true - - -distributed_training: - ddp_backend: no_c10d - distributed_backend: 'nccl' - distributed_world_size: 32 - distributed_port: 29671 - nprocs_per_node: 8 - find_unused_parameters: true - -task: - _name: hubert_pretraining - data: ??? - label_dir: ??? - labels: ??? - label_rate: ${model.label_rate} - sample_rate: 16000 - max_sample_size: 250000 - min_sample_size: 32000 - pad_audio: false - random_crop: true - normalize: false # must be consistent with extractor - -dataset: - num_workers: 6 - max_tokens: 1400000 - skip_invalid_size_inputs_valid_test: true - validate_interval: 5 - validate_interval_updates: 10000 - -criterion: - _name: hubert - pred_masked_weight: 1.0 - pred_nomask_weight: 0.0 - loss_weights: [10,] - -optimization: - max_update: 400000 - lr: [0.0005] - clip_norm: 10.0 - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - weight_decay: 0.01 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 32000 - -model: - _name: hubert - label_rate: ??? - skip_masked: false - skip_nomask: false - mask_prob: 0.80 - extractor_mode: default - conv_feature_layers: '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2' - final_dim: 256 - encoder_layerdrop: 0.05 - dropout_input: 0.1 - dropout_features: 0.1 - dropout: 0.1 - attention_dropout: 0.1 - feature_grad_mult: 0.1 - untie_final_proj: true - activation_dropout: 0.0 - -hydra: - job: - config: - override_dirname: - kv_sep: '-' - item_sep: '__' - exclude_keys: - - run - - task.data - - task.label_dir - run: - dir: ??? - sweep: - dir: ??? - subdir: ${hydra.job.config_name}__${hydra.job.override_dirname} diff --git a/kosmos-g/fairseq/examples/hubert/config/pretrain/hubert_large_librivox.yaml b/kosmos-g/fairseq/examples/hubert/config/pretrain/hubert_large_librivox.yaml deleted file mode 100644 index a5192b5f2..000000000 --- a/kosmos-g/fairseq/examples/hubert/config/pretrain/hubert_large_librivox.yaml +++ /dev/null @@ -1,101 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - seed: 1337 - tensorboard_logdir: tblog - -checkpoint: - save_interval_updates: 25000 - keep_interval_updates: 1 - no_epoch_checkpoints: true - - -distributed_training: - ddp_backend: no_c10d - distributed_backend: 'nccl' - distributed_world_size: 128 - distributed_port: 29671 - nprocs_per_node: 8 - find_unused_parameters: true - -task: - _name: hubert_pretraining - data: ??? - label_dir: ??? - labels: ??? 
- label_rate: ${model.label_rate} - sample_rate: 16000 - max_sample_size: 250000 - min_sample_size: 32000 - pad_audio: false - random_crop: true - normalize: true # must be consistent with extractor - -dataset: - num_workers: 6 - max_tokens: 900000 - skip_invalid_size_inputs_valid_test: true - validate_interval: 5 - validate_interval_updates: 10000 - -criterion: - _name: hubert - pred_masked_weight: 1.0 - pred_nomask_weight: 0.0 - loss_weights: [10,] - -optimization: - max_update: 400000 - lr: [0.0015] - clip_norm: 1.0 - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - weight_decay: 0.01 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 32000 - -model: - _name: hubert - label_rate: ??? - encoder_layers: 24 - encoder_embed_dim: 1024 - encoder_ffn_embed_dim: 4096 - encoder_attention_heads: 16 - final_dim: 768 - skip_masked: false - skip_nomask: false - mask_prob: 0.80 - extractor_mode: layer_norm - conv_feature_layers: '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2' - encoder_layerdrop: 0.0 - dropout_input: 0.0 - dropout_features: 0.0 - dropout: 0.0 - attention_dropout: 0.0 - layer_norm_first: true - feature_grad_mult: 1.0 - untie_final_proj: true - activation_dropout: 0.0 - -hydra: - job: - config: - override_dirname: - kv_sep: '-' - item_sep: '__' - exclude_keys: - - run - - task.data - run: - dir: /checkpoint/wnhsu/w2v/hubert_final/hydra_pt - sweep: - dir: /checkpoint/wnhsu/w2v/hubert_final/hydra_pt - subdir: ${hydra.job.config_name}__${hydra.job.override_dirname} diff --git a/kosmos-g/fairseq/examples/hubert/config/pretrain/hubert_xlarge_librivox.yaml b/kosmos-g/fairseq/examples/hubert/config/pretrain/hubert_xlarge_librivox.yaml deleted file mode 100644 index 34e8f2bfb..000000000 --- a/kosmos-g/fairseq/examples/hubert/config/pretrain/hubert_xlarge_librivox.yaml +++ /dev/null @@ -1,101 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - seed: 1337 - tensorboard_logdir: tblog - -checkpoint: - save_interval_updates: 25000 - keep_interval_updates: 1 - no_epoch_checkpoints: true - - -distributed_training: - ddp_backend: no_c10d - distributed_backend: 'nccl' - distributed_world_size: 256 - distributed_port: 29671 - nprocs_per_node: 8 - find_unused_parameters: true - -task: - _name: hubert_pretraining - data: ??? - label_dir: ??? - labels: ??? - label_rate: ${model.label_rate} - sample_rate: 16000 - max_sample_size: 250000 - min_sample_size: 32000 - pad_audio: false - random_crop: true - normalize: true # must be consistent with extractor - -dataset: - num_workers: 6 - max_tokens: 360000 - skip_invalid_size_inputs_valid_test: true - validate_interval: 5 - validate_interval_updates: 10000 - -criterion: - _name: hubert - pred_masked_weight: 1.0 - pred_nomask_weight: 0.0 - loss_weights: [10,] - -optimization: - max_update: 400000 - lr: [0.003] - clip_norm: 1.0 - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - weight_decay: 0.01 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 32000 - -model: - _name: hubert - label_rate: ??? 
- encoder_layers: 48 - encoder_embed_dim: 1280 - encoder_ffn_embed_dim: 5120 - encoder_attention_heads: 16 - final_dim: 1024 - skip_masked: false - skip_nomask: false - mask_prob: 0.80 - extractor_mode: layer_norm - conv_feature_layers: '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2' - encoder_layerdrop: 0.0 - dropout_input: 0.0 - dropout_features: 0.0 - dropout: 0.0 - attention_dropout: 0.0 - layer_norm_first: true - feature_grad_mult: 1.0 - untie_final_proj: true - activation_dropout: 0.0 - -hydra: - job: - config: - override_dirname: - kv_sep: '-' - item_sep: '__' - exclude_keys: - - run - - task.data - run: - dir: /checkpoint/wnhsu/w2v/hubert_final/hydra_pt - sweep: - dir: /checkpoint/wnhsu/w2v/hubert_final/hydra_pt - subdir: ${hydra.job.config_name}__${hydra.job.override_dirname} diff --git a/kosmos-g/fairseq/examples/hubert/config/pretrain/run/submitit_reg.yaml b/kosmos-g/fairseq/examples/hubert/config/pretrain/run/submitit_reg.yaml deleted file mode 100644 index 46c979cd2..000000000 --- a/kosmos-g/fairseq/examples/hubert/config/pretrain/run/submitit_reg.yaml +++ /dev/null @@ -1,20 +0,0 @@ -# @package _global_ - -hydra: - launcher: - cpus_per_task: 8 - gpus_per_node: 8 - tasks_per_node: ${hydra.launcher.gpus_per_node} - nodes: 4 - comment: null - mem_gb: 384 - timeout_min: 4320 - max_num_timeout: 100 - constraint: volta32gb - name: ${hydra.job.config_name}/${hydra.job.override_dirname} - submitit_folder: ${hydra.sweep.dir}/submitit/%j - -distributed_training: - distributed_world_size: 32 - distributed_port: 29671 - nprocs_per_node: 8 diff --git a/kosmos-g/fairseq/examples/hubert/measure_teacher_quality.py b/kosmos-g/fairseq/examples/hubert/measure_teacher_quality.py deleted file mode 100644 index 92279b221..000000000 --- a/kosmos-g/fairseq/examples/hubert/measure_teacher_quality.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import numpy as np -import os.path as op -import re -from tabulate import tabulate -from collections import Counter - - -def comp_purity(p_xy, axis): - max_p = p_xy.max(axis=axis) - marg_p = p_xy.sum(axis=axis) - indv_pur = max_p / marg_p - aggr_pur = max_p.sum() - return indv_pur, aggr_pur - - -def comp_entropy(p): - return (-p * np.log(p + 1e-8)).sum() - - -def comp_norm_mutual_info(p_xy): - p_x = p_xy.sum(axis=1, keepdims=True) - p_y = p_xy.sum(axis=0, keepdims=True) - pmi = np.log(p_xy / np.matmul(p_x, p_y) + 1e-8) - mi = (p_xy * pmi).sum() - h_x = comp_entropy(p_x) - h_y = comp_entropy(p_y) - return mi, mi / h_x, mi / h_y, h_x, h_y - - -def pad(labs, n): - if n == 0: - return np.array(labs) - return np.concatenate([[labs[0]] * n, labs, [labs[-1]] * n]) - - -def comp_avg_seg_dur(labs_list): - n_frms = 0 - n_segs = 0 - for labs in labs_list: - labs = np.array(labs) - edges = np.zeros(len(labs)).astype(bool) - edges[0] = True - edges[1:] = labs[1:] != labs[:-1] - n_frms += len(edges) - n_segs += edges.astype(int).sum() - return n_frms / n_segs - - -def comp_joint_prob(uid2refs, uid2hyps): - """ - Args: - pad: padding for spliced-feature derived labels - """ - cnts = Counter() - skipped = [] - abs_frmdiff = 0 - for uid in uid2refs: - if uid not in uid2hyps: - skipped.append(uid) - continue - refs = uid2refs[uid] - hyps = uid2hyps[uid] - abs_frmdiff += abs(len(refs) - len(hyps)) - min_len = min(len(refs), len(hyps)) - refs = refs[:min_len] - hyps = hyps[:min_len] - cnts.update(zip(refs, hyps)) - tot = sum(cnts.values()) - - ref_set = sorted({ref for ref, _ in cnts.keys()}) - hyp_set = sorted({hyp for _, hyp in cnts.keys()}) - ref2pid = dict(zip(ref_set, range(len(ref_set)))) - hyp2lid = dict(zip(hyp_set, range(len(hyp_set)))) - # print(hyp_set) - p_xy = np.zeros((len(ref2pid), len(hyp2lid)), dtype=float) - for (ref, hyp), cnt in cnts.items(): - p_xy[ref2pid[ref], hyp2lid[hyp]] = cnt - p_xy /= p_xy.sum() - return p_xy, ref2pid, hyp2lid, tot, abs_frmdiff, skipped - - -def read_phn(tsv_path, rm_stress=True): - uid2phns = {} - with open(tsv_path) as f: - for line in f: - uid, phns = line.rstrip().split("\t") - phns = phns.split(",") - if rm_stress: - phns = [re.sub("[0-9]", "", phn) for phn in phns] - uid2phns[uid] = phns - return uid2phns - - -def read_lab(tsv_path, lab_path, pad_len=0, upsample=1): - """ - tsv is needed to retrieve the uids for the labels - """ - with open(tsv_path) as f: - f.readline() - uids = [op.splitext(op.basename(line.rstrip().split()[0]))[0] for line in f] - with open(lab_path) as f: - labs_list = [pad(line.rstrip().split(), pad_len).repeat(upsample) for line in f] - assert len(uids) == len(labs_list) - return dict(zip(uids, labs_list)) - - -def main_lab_lab( - tsv_dir, - lab_dir, - lab_name, - lab_sets, - ref_dir, - ref_name, - pad_len=0, - upsample=1, - verbose=False, -): - # assume tsv_dir is the same for both the reference and the hypotheses - tsv_dir = lab_dir if tsv_dir is None else tsv_dir - - uid2refs = {} - for s in lab_sets: - uid2refs.update(read_lab(f"{tsv_dir}/{s}.tsv", f"{ref_dir}/{s}.{ref_name}")) - - uid2hyps = {} - for s in lab_sets: - uid2hyps.update( - read_lab( - f"{tsv_dir}/{s}.tsv", f"{lab_dir}/{s}.{lab_name}", pad_len, upsample - ) - ) - _main(uid2refs, uid2hyps, verbose) - - -def main_phn_lab( - tsv_dir, - lab_dir, - lab_name, - lab_sets, - phn_dir, - phn_sets, - pad_len=0, - upsample=1, - verbose=False, -): - uid2refs = {} - for s in phn_sets: - uid2refs.update(read_phn(f"{phn_dir}/{s}.tsv")) - - uid2hyps = {} - tsv_dir = lab_dir if 
tsv_dir is None else tsv_dir
-    for s in lab_sets:
-        uid2hyps.update(
-            read_lab(
-                f"{tsv_dir}/{s}.tsv", f"{lab_dir}/{s}.{lab_name}", pad_len, upsample
-            )
-        )
-    _main(uid2refs, uid2hyps, verbose)
-
-
-def _main(uid2refs, uid2hyps, verbose):
-    (p_xy, ref2pid, hyp2lid, tot, frmdiff, skipped) = comp_joint_prob(
-        uid2refs, uid2hyps
-    )
-    ref_pur_by_hyp, ref_pur = comp_purity(p_xy, axis=0)
-    hyp_pur_by_ref, hyp_pur = comp_purity(p_xy, axis=1)
-    (mi, mi_norm_by_ref, mi_norm_by_hyp, h_ref, h_hyp) = comp_norm_mutual_info(p_xy)
-    outputs = {
-        "ref pur": ref_pur,
-        "hyp pur": hyp_pur,
-        "H(ref)": h_ref,
-        "H(hyp)": h_hyp,
-        "MI": mi,
-        "MI/H(ref)": mi_norm_by_ref,
-        "ref segL": comp_avg_seg_dur(uid2refs.values()),
-        "hyp segL": comp_avg_seg_dur(uid2hyps.values()),
-        "p_xy shape": p_xy.shape,
-        "frm tot": tot,
-        "frm diff": frmdiff,
-        "utt tot": len(uid2refs),
-        "utt miss": len(skipped),
-    }
-    print(tabulate([outputs.values()], outputs.keys(), floatfmt=".4f"))
-
-
-if __name__ == "__main__":
-    """
-    compute the quality of labels with respect to phone labels, or to another
-    label set if one is provided
-    """
-    import argparse
-
-    parser = argparse.ArgumentParser()
-    parser.add_argument("tsv_dir")
-    parser.add_argument("lab_dir")
-    parser.add_argument("lab_name")
-    parser.add_argument("--lab_sets", default=["valid"], type=str, nargs="+")
-    parser.add_argument(
-        "--phn_dir",
-        default="/checkpoint/wnhsu/data/librispeech/960h/fa/raw_phn/phone_frame_align_v1",
-    )
-    parser.add_argument(
-        "--phn_sets", default=["dev-clean", "dev-other"], type=str, nargs="+"
-    )
-    parser.add_argument("--pad_len", default=0, type=int, help="padding for hypotheses")
-    parser.add_argument(
-        "--upsample", default=1, type=int, help="upsample factor for hypotheses"
-    )
-    parser.add_argument("--ref_lab_dir", default="")
-    parser.add_argument("--ref_lab_name", default="")
-    parser.add_argument("--verbose", action="store_true")
-    args = parser.parse_args()
-
-    if args.ref_lab_dir and args.ref_lab_name:
-        main_lab_lab(
-            args.tsv_dir,
-            args.lab_dir,
-            args.lab_name,
-            args.lab_sets,
-            args.ref_lab_dir,
-            args.ref_lab_name,
-            args.pad_len,
-            args.upsample,
-            args.verbose,
-        )
-    else:
-        main_phn_lab(
-            args.tsv_dir,
-            args.lab_dir,
-            args.lab_name,
-            args.lab_sets,
-            args.phn_dir,
-            args.phn_sets,
-            args.pad_len,
-            args.upsample,
-            args.verbose,
-        )
diff --git a/kosmos-g/fairseq/examples/hubert/simple_kmeans/README.md b/kosmos-g/fairseq/examples/hubert/simple_kmeans/README.md
deleted file mode 100644
index 847475c23..000000000
--- a/kosmos-g/fairseq/examples/hubert/simple_kmeans/README.md
+++ /dev/null
@@ -1,80 +0,0 @@
-# Sharded Feature Extraction and K-means Application
-
-This folder contains scripts for preparing HUBERT labels from tsv files. The
-steps are:
-1. feature extraction
-2. k-means clustering
-3. k-means application
-
-
-## Data preparation
-
-Each `*.tsv` file contains a list of audio files, where the first line is the
-root directory and each following line is the subpath of one audio file:
-```
-<root-dir>
-<audio-path-1>
-<audio-path-2>
-...
-```
-
-
-## Feature extraction
-
-### MFCC feature
-Suppose the tsv file is at `${tsv_dir}/${split}.tsv`. To extract 39-D
-mfcc+delta+ddelta features for the 1st iteration HUBERT training, run:
-```sh
-python dump_mfcc_feature.py ${tsv_dir} ${split} ${nshard} ${rank} ${feat_dir}
-```
-This would shard the tsv file into `${nshard}` shards and extract features for the
-`${rank}`-th shard, where rank is an integer in `[0, nshard-1]`. Features would
-be saved at `${feat_dir}/${split}_${rank}_${nshard}.{npy,len}`.
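The `.npy` file holds all frames of a shard concatenated along the first axis, and the `.len` file holds per-utterance frame counts. A minimal sketch of reading one shard back (variable names mirror the commands above; paths are placeholders):

```python
# Sketch: iterate per-utterance features from one dumped shard.
# feat_dir, split, rank, nshard stand in for the values used above.
import numpy as np

feat = np.load(f"{feat_dir}/{split}_{rank}_{nshard}.npy", mmap_mode="r")
with open(f"{feat_dir}/{split}_{rank}_{nshard}.len") as f:
    lengs = [int(line.rstrip()) for line in f]

offset = 0
for leng in lengs:
    utt_feat = feat[offset: offset + leng]  # (n_frames, feat_dim)
    offset += leng
```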
-
-
-### HUBERT feature
-To extract features from the `${layer}`-th transformer layer of a trained
-HUBERT model saved at `${ckpt_path}`, run:
-```sh
-python dump_hubert_feature.py ${tsv_dir} ${split} ${ckpt_path} ${layer} ${nshard} ${rank} ${feat_dir}
-```
-Features would also be saved at `${feat_dir}/${split}_${rank}_${nshard}.{npy,len}`.
-
-- if you run out of memory, decrease the chunk size with `--max_chunk`
-
-
-## K-means clustering
-To fit a k-means model with `${n_clusters}` clusters on 10% of the `${split}` data, run
-```sh
-python learn_kmeans.py ${feat_dir} ${split} ${nshard} ${km_path} ${n_clusters} --percent 0.1
-```
-This saves the k-means model to `${km_path}`.
-
-- set `--percent -1` to use all data
-- more k-means options can be found with the `-h` flag
-
-
-## K-means application
-To apply a trained k-means model `${km_path}` to obtain labels for `${split}`, run
-```sh
-python dump_km_label.py ${feat_dir} ${split} ${km_path} ${nshard} ${rank} ${lab_dir}
-```
-This would extract labels for the `${rank}`-th shard out of `${nshard}` shards
-and dump them to `${lab_dir}/${split}_${rank}_${nshard}.km`.
-
-
-Finally, merge shards for `${split}` by running
-```sh
-for rank in $(seq 0 $((nshard - 1))); do
-  cat $lab_dir/${split}_${rank}_${nshard}.km
-done > $lab_dir/${split}.km
-```
-
-
-## Create a dummy dict
-To create a dummy dictionary, run
-```sh
-for x in $(seq 0 $((n_clusters - 1))); do
-  echo "$x 1"
-done >> $lab_dir/dict.km.txt
-```
diff --git a/kosmos-g/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature.py b/kosmos-g/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature.py
deleted file mode 100644
index 5c7b67f8b..000000000
--- a/kosmos-g/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature.py
+++ /dev/null
@@ -1,93 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
- -import logging -import os -import sys - -import fairseq -import soundfile as sf -import torch -import torch.nn.functional as F - -from feature_utils import get_path_iterator, dump_feature - - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("dump_hubert_feature") - - -class HubertFeatureReader(object): - def __init__(self, ckpt_path, layer, max_chunk=1600000): - ( - model, - cfg, - task, - ) = fairseq.checkpoint_utils.load_model_ensemble_and_task([ckpt_path]) - self.model = model[0].eval().cuda() - self.task = task - self.layer = layer - self.max_chunk = max_chunk - logger.info(f"TASK CONFIG:\n{self.task.cfg}") - logger.info(f" max_chunk = {self.max_chunk}") - - def read_audio(self, path, ref_len=None): - wav, sr = sf.read(path) - assert sr == self.task.cfg.sample_rate, sr - if wav.ndim == 2: - wav = wav.mean(-1) - assert wav.ndim == 1, wav.ndim - if ref_len is not None and abs(ref_len - len(wav)) > 160: - logging.warning(f"ref {ref_len} != read {len(wav)} ({path})") - return wav - - def get_feats(self, path, ref_len=None): - x = self.read_audio(path, ref_len) - with torch.no_grad(): - x = torch.from_numpy(x).float().cuda() - if self.task.cfg.normalize: - x = F.layer_norm(x, x.shape) - x = x.view(1, -1) - - feat = [] - for start in range(0, x.size(1), self.max_chunk): - x_chunk = x[:, start: start + self.max_chunk] - feat_chunk, _ = self.model.extract_features( - source=x_chunk, - padding_mask=None, - mask=False, - output_layer=self.layer, - ) - feat.append(feat_chunk) - return torch.cat(feat, 1).squeeze(0) - - -def main(tsv_dir, split, ckpt_path, layer, nshard, rank, feat_dir, max_chunk): - reader = HubertFeatureReader(ckpt_path, layer, max_chunk) - generator, num = get_path_iterator(f"{tsv_dir}/{split}.tsv", nshard, rank) - dump_feature(reader, generator, num, split, nshard, rank, feat_dir) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("tsv_dir") - parser.add_argument("split") - parser.add_argument("ckpt_path") - parser.add_argument("layer", type=int) - parser.add_argument("nshard", type=int) - parser.add_argument("rank", type=int) - parser.add_argument("feat_dir") - parser.add_argument("--max_chunk", type=int, default=1600000) - args = parser.parse_args() - logger.info(args) - - main(**vars(args)) diff --git a/kosmos-g/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py b/kosmos-g/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py deleted file mode 100644 index 6fff4faf4..000000000 --- a/kosmos-g/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature_s2t.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
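The next file, `dump_hubert_feature_s2t.py`, reads audio segments addressed inside uncompressed zip archives as `<archive>.zip:<byte_offset>:<num_bytes>`. A hedged sketch of that addressing scheme (toy path; the actual reader delegates the byte access to `read_from_uncompressed_zip`):

```python
# Sketch: parse a "path.zip:<offset>:<length>" audio address and read the
# raw byte window, as the S2T feature reader below does. The path is an
# invented placeholder, not a file from the repo.
path = "/data/fbank80.zip:16384:72192"
zip_path, *extra = path.split(":")
assert zip_path.endswith(".zip") and len(extra) == 2
offset, length = int(extra[0]), int(extra[1])
with open(zip_path, "rb") as f:
    f.seek(offset)
    data = f.read(length)  # bytes of one audio file inside the archive
```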
- -import csv -import io -import logging -import os -import os.path as op -import sys - -from dump_hubert_feature import HubertFeatureReader -from feature_utils import get_shard_range, dump_feature -from fairseq.data.audio.audio_utils import get_waveform -from fairseq.data.audio.speech_to_text_dataset import ( - read_from_uncompressed_zip, -) - - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("dump_hubert_feature_s2t") - - -class HubertFeatureReaderS2T(HubertFeatureReader): - def read_audio(self, path, ref_len=None): - path, *extra = path.split(":") - assert len(extra) == 2 - assert path.endswith(".zip") - - data = read_from_uncompressed_zip(path, int(extra[0]), int(extra[1])) - f = io.BytesIO(data) - wav, sr = get_waveform(f) - assert sr == self.task.cfg.sample_rate, sr - if wav.ndim == 2: - wav = wav.mean(-1) - assert wav.ndim == 1, wav.ndim - if ref_len is not None and abs(ref_len - len(wav)) > 160: - logging.warning(f"ref {ref_len} != read {len(wav)} ({path})") - return wav - - -def get_path_iterator(root, tsv, nshard, rank): - with open(tsv) as f: - reader = csv.DictReader( - f, - delimiter="\t", - quotechar=None, - doublequote=False, - lineterminator="\n", - quoting=csv.QUOTE_NONE, - ) - subpaths = [op.join(root, e["audio"]) for e in reader] - start, end = get_shard_range(len(subpaths), nshard, rank) - subpaths = subpaths[start:end] - def iterate(): - for subpath in subpaths: - yield op.join(root, subpath), None - return iterate, len(subpaths) - - -def main( - root, tsv_path, ckpt_path, layer, nshard, rank, feat_dir, split, max_chunk -): - reader = HubertFeatureReaderS2T(ckpt_path, layer, max_chunk) - generator, num = get_path_iterator(root, tsv_path, nshard, rank) - dump_feature(reader, generator, num, split, nshard, rank, feat_dir) - - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("root") - parser.add_argument("tsv_path") - parser.add_argument("ckpt_path") - parser.add_argument("layer", type=int) - parser.add_argument("nshard", type=int) - parser.add_argument("rank", type=int) - parser.add_argument("feat_dir") - parser.add_argument("split") - parser.add_argument("--max_chunk", type=int, default=1600000) - args = parser.parse_args() - logger.info(args) - - main(**vars(args)) diff --git a/kosmos-g/fairseq/examples/hubert/simple_kmeans/dump_km_label.py b/kosmos-g/fairseq/examples/hubert/simple_kmeans/dump_km_label.py deleted file mode 100644 index 887130780..000000000 --- a/kosmos-g/fairseq/examples/hubert/simple_kmeans/dump_km_label.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
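`dump_km_label.py`, which follows, assigns every frame to its nearest centroid using the expansion ||x - c||^2 = ||x||^2 - 2 x.c + ||c||^2, which avoids materializing pairwise differences. A toy numpy check of that identity (illustrative only, random data):

```python
# Verify the expanded-distance assignment used by ApplyKmeans below.
import numpy as np

x = np.random.rand(5, 4)   # 5 frames, 4-dim features (toy data)
C = np.random.rand(4, 3)   # 3 centroids, stored transposed as in ApplyKmeans
Cnorm = (C ** 2).sum(0, keepdims=True)

dist = (x ** 2).sum(1, keepdims=True) - 2 * np.matmul(x, C) + Cnorm
labels = np.argmin(dist, axis=1)

# Same assignment as brute-force pairwise distances.
brute = ((x[:, None, :] - C.T[None, :, :]) ** 2).sum(-1).argmin(1)
assert (labels == brute).all()
```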
- -import logging -import os -import sys - -import numpy as np - -import joblib -import torch -import tqdm - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("dump_km_label") - - -class ApplyKmeans(object): - def __init__(self, km_path): - self.km_model = joblib.load(km_path) - self.C_np = self.km_model.cluster_centers_.transpose() - self.Cnorm_np = (self.C_np ** 2).sum(0, keepdims=True) - - self.C = torch.from_numpy(self.C_np) - self.Cnorm = torch.from_numpy(self.Cnorm_np) - if torch.cuda.is_available(): - self.C = self.C.cuda() - self.Cnorm = self.Cnorm.cuda() - - def __call__(self, x): - if isinstance(x, torch.Tensor): - dist = ( - x.pow(2).sum(1, keepdim=True) - - 2 * torch.matmul(x, self.C) - + self.Cnorm - ) - return dist.argmin(dim=1).cpu().numpy() - else: - dist = ( - (x ** 2).sum(1, keepdims=True) - - 2 * np.matmul(x, self.C_np) - + self.Cnorm_np - ) - return np.argmin(dist, axis=1) - - -def get_feat_iterator(feat_dir, split, nshard, rank): - feat_path = f"{feat_dir}/{split}_{rank}_{nshard}.npy" - leng_path = f"{feat_dir}/{split}_{rank}_{nshard}.len" - with open(leng_path, "r") as f: - lengs = [int(line.rstrip()) for line in f] - offsets = [0] + np.cumsum(lengs[:-1]).tolist() - - def iterate(): - feat = np.load(feat_path, mmap_mode="r") - assert feat.shape[0] == (offsets[-1] + lengs[-1]) - for offset, leng in zip(offsets, lengs): - yield feat[offset: offset + leng] - - return iterate, len(lengs) - - -def dump_label(feat_dir, split, km_path, nshard, rank, lab_dir): - apply_kmeans = ApplyKmeans(km_path) - generator, num = get_feat_iterator(feat_dir, split, nshard, rank) - iterator = generator() - - lab_path = f"{lab_dir}/{split}_{rank}_{nshard}.km" - os.makedirs(lab_dir, exist_ok=True) - with open(lab_path, "w") as f: - for feat in tqdm.tqdm(iterator, total=num): - # feat = torch.from_numpy(feat).cuda() - lab = apply_kmeans(feat).tolist() - f.write(" ".join(map(str, lab)) + "\n") - logger.info("finished successfully") - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("feat_dir") - parser.add_argument("split") - parser.add_argument("km_path") - parser.add_argument("nshard", type=int) - parser.add_argument("rank", type=int) - parser.add_argument("lab_dir") - args = parser.parse_args() - logging.info(str(args)) - - dump_label(**vars(args)) diff --git a/kosmos-g/fairseq/examples/hubert/simple_kmeans/dump_mfcc_feature.py b/kosmos-g/fairseq/examples/hubert/simple_kmeans/dump_mfcc_feature.py deleted file mode 100644 index 70d001666..000000000 --- a/kosmos-g/fairseq/examples/hubert/simple_kmeans/dump_mfcc_feature.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -import os -import sys - -import soundfile as sf -import torch -import torchaudio - -from feature_utils import get_path_iterator, dump_feature - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("dump_mfcc_feature") - - -class MfccFeatureReader(object): - def __init__(self, sample_rate): - self.sample_rate = sample_rate - - def read_audio(self, path, ref_len=None): - wav, sr = sf.read(path) - assert sr == self.sample_rate, sr - if wav.ndim == 2: - wav = wav.mean(-1) - assert wav.ndim == 1, wav.ndim - if ref_len is not None and abs(ref_len - len(wav)) > 160: - logging.warning(f"ref {ref_len} != read {len(wav)} ({path})") - return wav - - def get_feats(self, path, ref_len=None): - x = self.read_audio(path, ref_len) - with torch.no_grad(): - x = torch.from_numpy(x).float() - x = x.view(1, -1) - - mfccs = torchaudio.compliance.kaldi.mfcc( - waveform=x, - sample_frequency=self.sample_rate, - use_energy=False, - ) # (time, freq) - mfccs = mfccs.transpose(0, 1) # (freq, time) - deltas = torchaudio.functional.compute_deltas(mfccs) - ddeltas = torchaudio.functional.compute_deltas(deltas) - concat = torch.cat([mfccs, deltas, ddeltas], dim=0) - concat = concat.transpose(0, 1).contiguous() # (freq, time) - return concat - - -def main(tsv_dir, split, nshard, rank, feat_dir, sample_rate): - reader = MfccFeatureReader(sample_rate) - generator, num = get_path_iterator(f"{tsv_dir}/{split}.tsv", nshard, rank) - dump_feature(reader, generator, num, split, nshard, rank, feat_dir) - - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("tsv_dir") - parser.add_argument("split") - parser.add_argument("nshard", type=int) - parser.add_argument("rank", type=int) - parser.add_argument("feat_dir") - parser.add_argument("--sample_rate", type=int, default=16000) - args = parser.parse_args() - logger.info(args) - - main(**vars(args)) diff --git a/kosmos-g/fairseq/examples/hubert/simple_kmeans/dump_w2v2_feature.py b/kosmos-g/fairseq/examples/hubert/simple_kmeans/dump_w2v2_feature.py deleted file mode 100644 index a1f0d902a..000000000 --- a/kosmos-g/fairseq/examples/hubert/simple_kmeans/dump_w2v2_feature.py +++ /dev/null @@ -1,95 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -import os -import sys - -import fairseq -import soundfile as sf -import torch -import torch.nn.functional as F - -from feature_utils import get_path_iterator, dump_feature - - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("dump_w2v2_feature") - - -class Wav2Vec2FeatureReader(object): - def __init__(self, ckpt_path, layer, max_chunk=1600000): - ( - model, - cfg, - task, - ) = fairseq.checkpoint_utils.load_model_ensemble_and_task([ckpt_path]) - self.model = model[0].eval().cuda() - self.task = task - self.layer = layer # assume this is 1-based like HuBERT - self.max_chunk = max_chunk - logger.info(f"TASK CONFIG:\n{self.task.cfg}") - logger.info(f" max_chunk = {self.max_chunk}") - logger.info(f" model:\n{self.model}") - - def read_audio(self, path, ref_len=None): - wav, sr = sf.read(path) - assert sr == self.task.cfg.sample_rate, sr - if wav.ndim == 2: - wav = wav.mean(-1) - assert wav.ndim == 1, wav.ndim - if ref_len is not None and abs(ref_len - len(wav)) > 160: - logging.warning(f"ref {ref_len} != read {len(wav)} ({path})") - return wav - - def get_feats(self, path, ref_len=None): - x = self.read_audio(path, ref_len) - with torch.no_grad(): - x = torch.from_numpy(x).float().cuda() - if self.task.cfg.normalize: - x = F.layer_norm(x, x.shape) - x = x.view(1, -1) - - feat = [] - for start in range(0, x.size(1), self.max_chunk): - x_chunk = x[:, start: start + self.max_chunk] - res = self.model.extract_features( - source=x_chunk, - padding_mask=None, - mask=False, - layer=self.layer - 1, - ) - feat_chunk = res["x"] - feat.append(feat_chunk) - return torch.cat(feat, 1).squeeze(0) - - -def main(tsv_dir, split, ckpt_path, layer, nshard, rank, feat_dir, max_chunk): - reader = Wav2Vec2FeatureReader(ckpt_path, layer, max_chunk) - generator, num = get_path_iterator(f"{tsv_dir}/{split}.tsv", nshard, rank) - dump_feature(reader, generator, num, split, nshard, rank, feat_dir) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("tsv_dir") - parser.add_argument("split") - parser.add_argument("ckpt_path") - parser.add_argument("layer", type=int) - parser.add_argument("nshard", type=int) - parser.add_argument("rank", type=int) - parser.add_argument("feat_dir") - parser.add_argument("--max_chunk", type=int, default=1600000) - args = parser.parse_args() - logger.info(args) - - main(**vars(args)) diff --git a/kosmos-g/fairseq/examples/hubert/simple_kmeans/feature_utils.py b/kosmos-g/fairseq/examples/hubert/simple_kmeans/feature_utils.py deleted file mode 100644 index f80bc4569..000000000 --- a/kosmos-g/fairseq/examples/hubert/simple_kmeans/feature_utils.py +++ /dev/null @@ -1,66 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
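`feature_utils.py`, deleted next, splits `tot` items across `nshard` workers by rounding fractional boundaries in `get_shard_range`, so shard sizes differ by at most one item. A quick illustration with toy numbers:

```python
# How get_shard_range() below partitions tot=10 items across nshard=3 workers.
tot, nshard = 10, 3
for rank in range(nshard):
    start = round(tot / nshard * rank)
    end = round(tot / nshard * (rank + 1))
    print(rank, (start, end))
# 0 (0, 3)
# 1 (3, 7)
# 2 (7, 10)
```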
- -import logging -import os -import sys - -import tqdm -from npy_append_array import NpyAppendArray - - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("feature_utils") - - -def get_shard_range(tot, nshard, rank): - assert rank < nshard and rank >= 0, f"invaid rank/nshard {rank}/{nshard}" - start = round(tot / nshard * rank) - end = round(tot / nshard * (rank + 1)) - assert start < end, f"start={start}, end={end}" - logger.info( - f"rank {rank} of {nshard}, process {end-start} " - f"({start}-{end}) out of {tot}" - ) - return start, end - - -def get_path_iterator(tsv, nshard, rank): - with open(tsv, "r") as f: - root = f.readline().rstrip() - lines = [line.rstrip() for line in f] - start, end = get_shard_range(len(lines), nshard, rank) - lines = lines[start:end] - def iterate(): - for line in lines: - subpath, nsample = line.split("\t") - yield f"{root}/{subpath}", int(nsample) - return iterate, len(lines) - - -def dump_feature(reader, generator, num, split, nshard, rank, feat_dir): - iterator = generator() - - feat_path = f"{feat_dir}/{split}_{rank}_{nshard}.npy" - leng_path = f"{feat_dir}/{split}_{rank}_{nshard}.len" - - os.makedirs(feat_dir, exist_ok=True) - if os.path.exists(feat_path): - os.remove(feat_path) - - feat_f = NpyAppendArray(feat_path) - with open(leng_path, "w") as leng_f: - for path, nsample in tqdm.tqdm(iterator, total=num): - feat = reader.get_feats(path, nsample) - feat_f.append(feat.cpu().numpy()) - leng_f.write(f"{len(feat)}\n") - logger.info("finished successfully") - - diff --git a/kosmos-g/fairseq/examples/hubert/simple_kmeans/learn_kmeans.py b/kosmos-g/fairseq/examples/hubert/simple_kmeans/learn_kmeans.py deleted file mode 100644 index 113ac655b..000000000 --- a/kosmos-g/fairseq/examples/hubert/simple_kmeans/learn_kmeans.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-
-import logging
-import os
-import sys
-
-import numpy as np
-from sklearn.cluster import MiniBatchKMeans
-
-import joblib
-
-logging.basicConfig(
-    format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
-    datefmt="%Y-%m-%d %H:%M:%S",
-    level=os.environ.get("LOGLEVEL", "INFO").upper(),
-    stream=sys.stdout,
-)
-logger = logging.getLogger("learn_kmeans")
-
-
-def get_km_model(
-    n_clusters,
-    init,
-    max_iter,
-    batch_size,
-    tol,
-    max_no_improvement,
-    n_init,
-    reassignment_ratio,
-):
-    return MiniBatchKMeans(
-        n_clusters=n_clusters,
-        init=init,
-        max_iter=max_iter,
-        batch_size=batch_size,
-        verbose=1,
-        compute_labels=False,
-        tol=tol,
-        max_no_improvement=max_no_improvement,
-        init_size=None,
-        n_init=n_init,
-        reassignment_ratio=reassignment_ratio,
-    )
-
-
-def load_feature_shard(feat_dir, split, nshard, rank, percent):
-    feat_path = f"{feat_dir}/{split}_{rank}_{nshard}.npy"
-    leng_path = f"{feat_dir}/{split}_{rank}_{nshard}.len"
-    with open(leng_path, "r") as f:
-        lengs = [int(line.rstrip()) for line in f]
-        offsets = [0] + np.cumsum(lengs[:-1]).tolist()
-
-    if percent < 0:
-        return np.load(feat_path, mmap_mode="r")
-    else:
-        nsample = int(np.ceil(len(lengs) * percent))
-        indices = np.random.choice(len(lengs), nsample, replace=False)
-        feat = np.load(feat_path, mmap_mode="r")
-        sampled_feat = np.concatenate(
-            [feat[offsets[i]: offsets[i] + lengs[i]] for i in indices], axis=0
-        )
-        logger.info(
-            (
-                f"sampled {nsample} utterances, {len(sampled_feat)} frames "
-                f"from shard {rank}/{nshard}"
-            )
-        )
-        return sampled_feat
-
-
-def load_feature(feat_dir, split, nshard, seed, percent):
-    assert percent <= 1.0
-    feat = np.concatenate(
-        [
-            load_feature_shard(feat_dir, split, nshard, r, percent)
-            for r in range(nshard)
-        ],
-        axis=0,
-    )
-    logging.info(f"loaded feature with dimension {feat.shape}")
-    return feat
-
-
-def learn_kmeans(
-    feat_dir,
-    split,
-    nshard,
-    km_path,
-    n_clusters,
-    seed,
-    percent,
-    init,
-    max_iter,
-    batch_size,
-    tol,
-    n_init,
-    reassignment_ratio,
-    max_no_improvement,
-):
-    np.random.seed(seed)
-    feat = load_feature(feat_dir, split, nshard, seed, percent)
-    km_model = get_km_model(
-        n_clusters,
-        init,
-        max_iter,
-        batch_size,
-        tol,
-        max_no_improvement,
-        n_init,
-        reassignment_ratio,
-    )
-    km_model.fit(feat)
-    joblib.dump(km_model, km_path)
-
-    inertia = -km_model.score(feat) / len(feat)
-    logger.info("total inertia: %.5f", inertia)
-    logger.info("finished successfully")
-
-
-if __name__ == "__main__":
-    import argparse
-
-    parser = argparse.ArgumentParser()
-    parser.add_argument("feat_dir", type=str)
-    parser.add_argument("split", type=str)
-    parser.add_argument("nshard", type=int)
-    parser.add_argument("km_path", type=str)
-    parser.add_argument("n_clusters", type=int)
-    parser.add_argument("--seed", default=0, type=int)
-    parser.add_argument(
-        "--percent", default=-1, type=float, help="sample a subset; -1 for all"
-    )
-    parser.add_argument("--init", default="k-means++")
-    parser.add_argument("--max_iter", default=100, type=int)
-    parser.add_argument("--batch_size", default=10000, type=int)
-    parser.add_argument("--tol", default=0.0, type=float)
-    parser.add_argument("--max_no_improvement", default=100, type=int)
-    parser.add_argument("--n_init", default=20, type=int)
-    parser.add_argument("--reassignment_ratio", default=0.0, type=float)
-    args = parser.parse_args()
-    logging.info(str(args))
-
-    learn_kmeans(**vars(args))
diff --git a/kosmos-g/fairseq/examples/hubert/update_ckpt.py b/kosmos-g/fairseq/examples/hubert/update_ckpt.py
deleted file mode 100644
index 53c9e74ea..000000000
--- a/kosmos-g/fairseq/examples/hubert/update_ckpt.py
+++ /dev/null
@@ -1,22 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-src_ckpt = "/checkpoint/wnhsu/w2v/archived/hubert_base_ls960_it2.pt"
-ref_ckpt = "/checkpoint/wnhsu/w2v/hubert_icassp_oss_v3/iter2_km100-400k-grp-L6/oss.km500_p0_1_s334.pmw1_0.puw0_0.grpnorm.ml10.mp0_8.untie.mxsz250000.ufreq1.maxtok1400000.MU100k.s1337.ngpu32/checkpoint_last.pt"
-new_ckpt = "/checkpoint/wnhsu/w2v/archived/hubert_base_ls960_it2_updated.pt"
-
-
-def update_state(state):
-    state["model"]["label_embs_concat"] = state["model"].pop("label_embs")
-    state["args"].task = "hubert_pretraining"
-    state["args"].labels = f"['{state['args'].labels}']"
-    return state
-
-
-src_state = torch.load(src_ckpt)
-src_state = update_state(src_state)
-torch.save(src_state, new_ckpt)
diff --git a/kosmos-g/fairseq/examples/joint_alignment_translation/README.md b/kosmos-g/fairseq/examples/joint_alignment_translation/README.md
deleted file mode 100644
index cd9c0ea65..000000000
--- a/kosmos-g/fairseq/examples/joint_alignment_translation/README.md
+++ /dev/null
@@ -1,89 +0,0 @@
-# Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)
-
-This page includes instructions for training models described in [Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)](https://arxiv.org/abs/1909.02074).
-
-## Training a joint alignment-translation model on WMT'18 En-De
-
-##### 1. Extract and preprocess the WMT'18 En-De data
-```bash
-./prepare-wmt18en2de_no_norm_no_escape_no_agressive.sh
-```
-
-##### 2. Generate alignments with a statistical alignment toolkit, e.g. Giza++ or FastAlign.
-In this example, we use FastAlign.
-```bash
-git clone git@github.com:clab/fast_align.git
-pushd fast_align
-mkdir build
-cd build
-cmake ..
-make
-popd
-ALIGN=fast_align/build/fast_align
-paste bpe.32k/train.en bpe.32k/train.de | awk -F '\t' '{print $1 " ||| " $2}' > bpe.32k/train.en-de
-$ALIGN -i bpe.32k/train.en-de -d -o -v > bpe.32k/train.align
-```
-
-##### 3. Preprocess the dataset with the alignments generated above.
-```bash
-fairseq-preprocess \
-    --source-lang en --target-lang de \
-    --trainpref bpe.32k/train \
-    --validpref bpe.32k/valid \
-    --testpref bpe.32k/test \
-    --align-suffix align \
-    --destdir binarized/ \
-    --joined-dictionary \
-    --workers 32
-```
-
-##### 4. Train a model
-```bash
-fairseq-train \
-    binarized \
-    --arch transformer_wmt_en_de_big_align --share-all-embeddings \
-    --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --activation-fn relu \
-    --lr 0.0002 --lr-scheduler inverse_sqrt --warmup-updates 4000 --warmup-init-lr 1e-07 \
-    --dropout 0.3 --attention-dropout 0.1 --weight-decay 0.0 \
-    --max-tokens 3500 --label-smoothing 0.1 \
-    --save-dir ./checkpoints --log-interval 1000 --max-update 60000 \
-    --keep-interval-updates -1 --save-interval-updates 0 \
-    --load-alignments --criterion label_smoothed_cross_entropy_with_alignment \
-    --fp16
-```
-
-Note that the `--fp16` flag requires that you have CUDA 9.1 or greater and a Volta GPU or newer.
-
-If you want to train the above model with big batches (assuming your machine has 8 GPUs):
-- add `--update-freq 8` to simulate training on 8x8=64 GPUs
-- increase the learning rate; 0.0007 works well for big batches
-
-##### 5.
Evaluate and generate the alignments (BPE level) -```bash -fairseq-generate \ - binarized --gen-subset test --print-alignment \ - --source-lang en --target-lang de \ - --path checkpoints/checkpoint_best.pt --beam 5 --nbest 1 -``` - -##### 6. Other resources. -The code for: -1. preparing alignment test sets -2. converting BPE level alignments to token level alignments -3. symmetrizing bidirectional alignments -4. evaluating alignments using AER metric -can be found [here](https://github.com/lilt/alignment-scripts) - -## Citation - -```bibtex -@inproceedings{garg2019jointly, - title = {Jointly Learning to Align and Translate with Transformer Models}, - author = {Garg, Sarthak and Peitz, Stephan and Nallasamy, Udhyakumar and Paulik, Matthias}, - booktitle = {Conference on Empirical Methods in Natural Language Processing (EMNLP)}, - address = {Hong Kong}, - month = {November}, - url = {https://arxiv.org/abs/1909.02074}, - year = {2019}, -} -``` diff --git a/kosmos-g/fairseq/examples/joint_alignment_translation/prepare-wmt18en2de_no_norm_no_escape_no_agressive.sh b/kosmos-g/fairseq/examples/joint_alignment_translation/prepare-wmt18en2de_no_norm_no_escape_no_agressive.sh deleted file mode 100644 index e3efeb21d..000000000 --- a/kosmos-g/fairseq/examples/joint_alignment_translation/prepare-wmt18en2de_no_norm_no_escape_no_agressive.sh +++ /dev/null @@ -1,118 +0,0 @@ -#!/bin/bash - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -echo 'Cloning Moses github repository (for tokenization scripts)...' -git clone https://github.com/moses-smt/mosesdecoder.git - -SCRIPTS=mosesdecoder/scripts -TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl -CLEAN=$SCRIPTS/training/clean-corpus-n.perl -REM_NON_PRINT_CHAR=$SCRIPTS/tokenizer/remove-non-printing-char.perl - -URLS=( - "http://statmt.org/wmt13/training-parallel-europarl-v7.tgz" - "http://statmt.org/wmt13/training-parallel-commoncrawl.tgz" - "http://data.statmt.org/wmt18/translation-task/training-parallel-nc-v13.tgz" - "http://data.statmt.org/wmt18/translation-task/rapid2016.tgz" - "http://data.statmt.org/wmt17/translation-task/dev.tgz" - "http://statmt.org/wmt14/test-full.tgz" -) -CORPORA=( - "training/europarl-v7.de-en" - "commoncrawl.de-en" - "training-parallel-nc-v13/news-commentary-v13.de-en" - "rapid2016.de-en" -) - -if [ ! -d "$SCRIPTS" ]; then - echo "Please set SCRIPTS variable correctly to point to Moses scripts." - exit -fi - -src=en -tgt=de -lang=en-de -prep=wmt18_en_de -tmp=$prep/tmp -orig=orig -dev=dev/newstest2012 -codes=32000 -bpe=bpe.32k - -mkdir -p $orig $tmp $prep $bpe - -cd $orig - -for ((i=0;i<${#URLS[@]};++i)); do - url=${URLS[i]} - file=$(basename $url) - if [ -f $file ]; then - echo "$file already exists, skipping download" - else - wget "$url" - if [ -f $file ]; then - echo "$url successfully downloaded." - else - echo "$url not successfully downloaded." - exit 1 - fi - if [ ${file: -4} == ".tgz" ]; then - tar zxvf $file - elif [ ${file: -4} == ".tar" ]; then - tar xvf $file - fi - fi -done -cd .. - -echo "pre-processing train data..." -for l in $src $tgt; do - rm -rf $tmp/train.tags.$lang.tok.$l - for f in "${CORPORA[@]}"; do - cat $orig/$f.$l | \ - perl $REM_NON_PRINT_CHAR | \ - perl $TOKENIZER -threads 8 -l $l -no-escape >> $tmp/train.tags.$lang.tok.$l - done -done - -echo "pre-processing test data..." 
-for l in $src $tgt; do
-    if [ "$l" == "$src" ]; then
-        t="src"
-    else
-        t="ref"
-    fi
-    grep '<seg id' $orig/test-full/newstest2014-deen-$t.$l.sgm | \
-        sed -e 's/<seg id="[0-9]*">\s*//g' | \
-        sed -e 's/\s*<\/seg>\s*//g' | \
-        sed -e "s/\’/\'/g" | \
-    perl $TOKENIZER -threads 8 -l $l -no-escape > $tmp/test.$l
-    echo ""
-done
-
-# apply length filtering before BPE
-perl $CLEAN -ratio 1.5 $tmp/train.tags.$lang.tok $src $tgt $tmp/train 1 100
-
-# use newstest2012 for valid
-echo "pre-processing valid data..."
-for l in $src $tgt; do
-    rm -rf $tmp/valid.$l
-    cat $orig/$dev.$l | \
-        perl $REM_NON_PRINT_CHAR | \
-        perl $TOKENIZER -threads 8 -l $l -no-escape >> $tmp/valid.$l
-done
-
-mkdir output
-mv $tmp/{train,valid,test}.{$src,$tgt} output
-
-#BPE
-git clone https://github.com/glample/fastBPE.git
-pushd fastBPE
-g++ -std=c++11 -pthread -O3 fastBPE/main.cc -IfastBPE -o fast
-popd
-fastBPE/fast learnbpe $codes output/train.$src output/train.$tgt > $bpe/codes
-for split in {train,valid,test}; do for lang in {en,de}; do fastBPE/fast applybpe $bpe/$split.$lang output/$split.$lang $bpe/codes; done; done
diff --git a/kosmos-g/fairseq/examples/language_model/README.adaptive_inputs.md b/kosmos-g/fairseq/examples/language_model/README.adaptive_inputs.md
deleted file mode 100644
index 6650d58f3..000000000
--- a/kosmos-g/fairseq/examples/language_model/README.adaptive_inputs.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# Adaptive Input Representations for Neural Language Modeling (Baevski and Auli, 2018)
-
-## Pre-trained models
-
-Description | Parameters | Dataset | Model and Test set(s)
----|---:|---|---
-Adaptive Inputs <br> ([Baevski and Auli, 2018](https://arxiv.org/abs/1809.10853)) | 1026M | [Google Billion Words](https://github.com/ciprian-chelba/1-billion-word-language-modeling-benchmark) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_gbw_huge.tar.bz2)
-Adaptive Inputs <br> ([Baevski and Auli, 2018](https://arxiv.org/abs/1809.10853)) | 247M | [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_wiki103.v2.tar.bz2)
-
-## Training an LM with adaptive inputs
-
-First, see the general [language modeling README](README.md) for instructions on
-preprocessing the WikiText-103 data.
-
-Then use the following training command to train a model with adaptive inputs
-using the `transformer_lm_wiki103` model architecture:
-```bash
-fairseq-train --task language_modeling \
-    data-bin/wikitext-103 \
-    --save-dir checkpoints/transformer_wikitext-103 \
-    --arch transformer_lm_wiki103 \
-    --max-update 286000 --lr 1.0 --t-mult 2 --lr-period-updates 270000 --lr-scheduler cosine --lr-shrink 0.75 \
-    --warmup-updates 16000 --warmup-init-lr 1e-07 --stop-min-lr 1e-09 --optimizer nag --min-lr 0.0001 --clip-norm 0.1 \
-    --criterion adaptive_loss --max-tokens 3072 --update-freq 3 --tokens-per-sample 3072 --seed 1 \
-    --sample-break-mode none --skip-invalid-size-inputs-valid-test --ddp-backend=legacy_ddp
-```
-
-## Citation
-
-```bibtex
-@inproceedings{
-    baevski2018adaptive,
-    title={Adaptive Input Representations for Neural Language Modeling},
-    author={Alexei Baevski and Michael Auli},
-    booktitle={International Conference on Learning Representations},
-    year={2019},
-    url={https://openreview.net/forum?id=ByxZX20qFQ},
-}
-```
diff --git a/kosmos-g/fairseq/examples/language_model/README.conv.md b/kosmos-g/fairseq/examples/language_model/README.conv.md
deleted file mode 100644
index 1ff863590..000000000
--- a/kosmos-g/fairseq/examples/language_model/README.conv.md
+++ /dev/null
@@ -1,40 +0,0 @@
-# Language Modeling with Gated Convolutional Networks (Dauphin et al., 2017)
-
-## Example usage
-
-First download and preprocess the data following the main [language modeling README](README.md).
-
-Then to train a convolutional LM using the `fconv_lm_dauphin_wikitext103`
-architecture:
-```bash
-fairseq-train --task language_modeling \
-    data-bin/wikitext-103 \
-    --save-dir checkpoints/fconv_wikitext-103 \
-    --arch fconv_lm_dauphin_wikitext103 \
-    --adaptive-softmax-cutoff 10000,20000,200000 \
-    --dropout 0.2 \
-    --criterion adaptive_loss \
-    --optimizer nag --clip-norm 0.1 --weight-decay 5e-06 \
-    --lr 1.0 --lr-scheduler reduce_lr_on_plateau --lr-shrink 0.5 \
-    --max-tokens 1024 --tokens-per-sample 1024 \
-    --ddp-backend legacy_ddp \
-    --max-epoch 35
-```
-
-And evaluate with:
-```bash
-fairseq-eval-lm data-bin/wikitext-103 --path checkpoints/fconv_wikitext-103/checkpoint_best.pt
-```
-
-## Citation
-
-```bibtex
-@inproceedings{dauphin2017language,
-  title={Language Modeling with Gated Convolutional Networks},
-  author={Dauphin, Yann N and Fan, Angela and Auli, Michael and Grangier, David},
-  booktitle={Proceedings of the 34th International Conference on Machine Learning-Volume 70},
-  pages={933--941},
-  year={2017},
-  organization={JMLR}
-}
-```
diff --git a/kosmos-g/fairseq/examples/language_model/README.md b/kosmos-g/fairseq/examples/language_model/README.md
deleted file mode 100644
index e78ea48e0..000000000
--- a/kosmos-g/fairseq/examples/language_model/README.md
+++ /dev/null
@@ -1,123 +0,0 @@
-# Neural Language Modeling
-
-## Pre-trained models
-
-Model | Description | Dataset | Download
----|---|---|---
-`transformer_lm.gbw.adaptive_huge` | Adaptive Inputs <br> ([Baevski and Auli, 2018](https://arxiv.org/abs/1809.10853)) <br> 1026M params | [Google Billion Words](https://github.com/ciprian-chelba/1-billion-word-language-modeling-benchmark) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_gbw_huge.tar.bz2)
-`transformer_lm.wiki103.adaptive` | Adaptive Inputs <br> ([Baevski and Auli, 2018](https://arxiv.org/abs/1809.10853)) <br> 247M params | [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_wiki103.v2.tar.bz2)
-`transformer_lm.wmt19.en` | English LM <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.en.tar.gz)
-`transformer_lm.wmt19.de` | German LM <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.de.tar.gz)
-`transformer_lm.wmt19.ru` | Russian LM <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.ru.tar.gz)
-
-## Example usage
-
-We require a few additional Python dependencies for preprocessing:
-```bash
-pip install fastBPE sacremoses
-```
-
-To sample from a language model using PyTorch Hub:
-```python
-import torch
-
-# List available models
-torch.hub.list('pytorch/fairseq')  # [..., 'transformer_lm.wmt19.en', ...]
-
-# Load an English LM trained on WMT'19 News Crawl data
-en_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt19.en', tokenizer='moses', bpe='fastbpe')
-en_lm.eval()  # disable dropout
-
-# Move model to GPU
-en_lm.cuda()
-
-# Sample from the language model
-en_lm.sample('Barack Obama', beam=1, sampling=True, sampling_topk=10, temperature=0.8)
-# "Barack Obama is coming to Sydney and New Zealand (...)"
-
-# Compute perplexity for a sequence
-en_lm.score('Barack Obama is coming to Sydney and New Zealand')['positional_scores'].mean().neg().exp()
-# tensor(15.1474)
-
-# The same interface can be used with custom models as well
-from fairseq.models.transformer_lm import TransformerLanguageModel
-custom_lm = TransformerLanguageModel.from_pretrained('/path/to/model/dir', 'checkpoint100.pt', tokenizer='moses', bpe='fastbpe')
-custom_lm.sample('Barack Obama', beam=5)
-# "Barack Obama (...)"
-```
-
-## Training a transformer language model with the CLI tools
-
-### 1) Preprocess the data
-
-First download and prepare the [WikiText-103 dataset](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/):
-```bash
-cd examples/language_model/
-bash prepare-wikitext-103.sh
-cd ../..
-```
-
-Next preprocess/binarize the data:
-```bash
-TEXT=examples/language_model/wikitext-103
-fairseq-preprocess \
-    --only-source \
-    --trainpref $TEXT/wiki.train.tokens \
-    --validpref $TEXT/wiki.valid.tokens \
-    --testpref $TEXT/wiki.test.tokens \
-    --destdir data-bin/wikitext-103 \
-    --workers 20
-```
-
-### 2) Train a language model
-
-Next we'll train a basic transformer language model on wikitext-103. For more
-advanced usage, see the [adaptive inputs README](README.adaptive_inputs.md).
-
-To train a basic LM (assumes 2 GPUs):
-```
-$ fairseq-train --task language_modeling \
-    data-bin/wikitext-103 \
-    --save-dir checkpoints/transformer_wikitext-103 \
-    --arch transformer_lm --share-decoder-input-output-embed \
-    --dropout 0.1 \
-    --optimizer adam --adam-betas '(0.9, 0.98)' --weight-decay 0.01 --clip-norm 0.0 \
-    --lr 0.0005 --lr-scheduler inverse_sqrt --warmup-updates 4000 --warmup-init-lr 1e-07 \
-    --tokens-per-sample 512 --sample-break-mode none \
-    --max-tokens 2048 --update-freq 16 \
-    --fp16 \
-    --max-update 50000
-```
-
-If you run out of memory, try reducing `--max-tokens` (max number of tokens per
-batch) or `--tokens-per-sample` (max sequence length). You can also adjust
-`--update-freq` to accumulate gradients and simulate training on a different
-number of GPUs.
-
-### 3) Evaluate
-
-```bash
-fairseq-eval-lm data-bin/wikitext-103 \
-    --path checkpoints/transformer_wikitext-103/checkpoint_best.pt \
-    --batch-size 2 \
-    --tokens-per-sample 512 \
-    --context-window 400
-# | Evaluated 245569 tokens in 56.1s (4379.02 tokens/s)
-# | Loss: 3.4164, Perplexity: 30.46
-```
-
-*Note:* The `--context-window` option controls how much context is provided to
-each token when computing perplexity.
When the window size is 0, the dataset is -chunked into segments of length 512 and perplexity is computed over each segment -normally. However, this results in worse (higher) perplexity since tokens that -appear earlier in each segment have less conditioning. When the maximum window -size is used (511 in this case), then we compute perplexity for each token -fully conditioned on 511 tokens of context. This slows down evaluation -significantly, since we must run a separate forward pass for every token in the -dataset, but results in better (lower) perplexity. - - -## Convolutional language models - -Please see the [convolutional LM README](README.conv.md) for instructions on -training convolutional language models. diff --git a/kosmos-g/fairseq/examples/language_model/prepare-wikitext-103.sh b/kosmos-g/fairseq/examples/language_model/prepare-wikitext-103.sh deleted file mode 100644 index 751302156..000000000 --- a/kosmos-g/fairseq/examples/language_model/prepare-wikitext-103.sh +++ /dev/null @@ -1,33 +0,0 @@ -#!/bin/bash -# Adapted from https://github.com/facebookresearch/MIXER/blob/master/prepareData.sh - -URLS=( - "https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-v1.zip" -) -FILES=( - "wikitext-103-v1.zip" -) - -for ((i=0;i<${#URLS[@]};++i)); do - file=${FILES[i]} - if [ -f $file ]; then - echo "$file already exists, skipping download" - else - url=${URLS[i]} - wget "$url" - if [ -f $file ]; then - echo "$url successfully downloaded." - else - echo "$url not successfully downloaded." - exit -1 - fi - if [ ${file: -4} == ".tgz" ]; then - tar zxvf $file - elif [ ${file: -4} == ".tar" ]; then - tar xvf $file - elif [ ${file: -4} == ".zip" ]; then - unzip $file - fi - fi -done -cd .. diff --git a/kosmos-g/fairseq/examples/laser/README.md b/kosmos-g/fairseq/examples/laser/README.md deleted file mode 100644 index 66acada04..000000000 --- a/kosmos-g/fairseq/examples/laser/README.md +++ /dev/null @@ -1,144 +0,0 @@ -# LASER Language-Agnostic SEntence Representations - -LASER is a library to calculate and use multilingual sentence embeddings. - -You can find more information about LASER and how to use it on the official [LASER repository](https://github.com/facebookresearch/LASER). - -This folder contains source code for training LASER embeddings. - - -## Prepare data and configuration file - -Binarize your data with fairseq, as described [here](https://fairseq.readthedocs.io/en/latest/getting_started.html#data-pre-processing). - -Create a json config file with this format: -``` -{ - "src_vocab": "/path/to/spm.src.cvocab", - "tgt_vocab": "/path/to/spm.tgt.cvocab", - "train": [ - { - "type": "translation", - "id": 0, - "src": "/path/to/srclang1-tgtlang0/train.srclang1", - "tgt": "/path/to/srclang1-tgtlang0/train.tgtlang0" - }, - { - "type": "translation", - "id": 1, - "src": "/path/to/srclang1-tgtlang1/train.srclang1", - "tgt": "/path/to/srclang1-tgtlang1/train.tgtlang1" - }, - { - "type": "translation", - "id": 0, - "src": "/path/to/srclang2-tgtlang0/train.srclang2", - "tgt": "/path/to/srclang2-tgtlang0/train.tgtlang0" - }, - { - "type": "translation", - "id": 1, - "src": "/path/to/srclang2-tgtlang1/train.srclang2", - "tgt": "/path/to/srclang2-tgtlang1/train.tgtlang1" - }, - ... - ], - "valid": [ - { - "type": "translation", - "id": 0, - "src": "/unused", - "tgt": "/unused" - } - ] -} -``` -where paths are paths to binarized indexed fairseq dataset files. -`id` represents the target language id. 
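Such a config is plain JSON, so it can also be generated programmatically. A minimal sketch, with hypothetical dataset paths standing in for your own binarized data:

```python
# Sketch: write a LASER training config like the one described above.
# All paths are hypothetical placeholders.
import json

config = {
    "src_vocab": "/data/spm.src.cvocab",
    "tgt_vocab": "/data/spm.tgt.cvocab",
    "train": [
        {"type": "translation", "id": 0,
         "src": "/data/de-en/train.de", "tgt": "/data/de-en/train.en"},
        {"type": "translation", "id": 1,
         "src": "/data/de-fr/train.de", "tgt": "/data/de-fr/train.fr"},
    ],
    "valid": [
        {"type": "translation", "id": 0, "src": "/unused", "tgt": "/unused"}
    ],
}

with open("laser_config.json", "w") as f:
    json.dump(config, f, indent=2)
```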
-
-
-## Training Command Line Example
-
-```
-fairseq-train \
-  /path/to/configfile_described_above.json \
-  --user-dir examples/laser/laser_src \
-  --log-interval 100 --log-format simple \
-  --task laser --arch laser_lstm \
-  --save-dir . \
-  --optimizer adam \
-  --lr 0.001 \
-  --lr-scheduler inverse_sqrt \
-  --clip-norm 5 \
-  --warmup-updates 90000 \
-  --update-freq 2 \
-  --dropout 0.0 \
-  --encoder-dropout-out 0.1 \
-  --max-tokens 2000 \
-  --max-epoch 50 \
-  --encoder-bidirectional \
-  --encoder-layers 5 \
-  --encoder-hidden-size 512 \
-  --decoder-layers 1 \
-  --decoder-hidden-size 2048 \
-  --encoder-embed-dim 320 \
-  --decoder-embed-dim 320 \
-  --decoder-lang-embed-dim 32 \
-  --warmup-init-lr 0.001 \
-  --disable-validation
-```
-
-
-## Applications
-
-We showcase several applications of multilingual sentence embeddings
-with code to reproduce our results (in the directory "tasks").
-
-* [**Cross-lingual document classification**](https://github.com/facebookresearch/LASER/tree/master/tasks/mldoc) using the
-  [*MLDoc*](https://github.com/facebookresearch/MLDoc) corpus [2,6]
-* [**WikiMatrix**](https://github.com/facebookresearch/LASER/tree/master/tasks/WikiMatrix)
-  Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia [7]
-* [**Bitext mining**](https://github.com/facebookresearch/LASER/tree/master/tasks/bucc) using the
-  [*BUCC*](https://comparable.limsi.fr/bucc2018/bucc2018-task.html) corpus [3,5]
-* [**Cross-lingual NLI**](https://github.com/facebookresearch/LASER/tree/master/tasks/xnli)
-  using the [*XNLI*](https://www.nyu.edu/projects/bowman/xnli/) corpus [4,5,6]
-* [**Multilingual similarity search**](https://github.com/facebookresearch/LASER/tree/master/tasks/similarity) [1,6]
-* [**Sentence embedding of text files**](https://github.com/facebookresearch/LASER/tree/master/tasks/embed) -
-  an example of how to calculate sentence embeddings for arbitrary text files in any of the supported languages.
-
-**For all tasks, we use exactly the same multilingual encoder, without any task-specific optimization or fine-tuning.**
-
-
-
-## References
-
-[1] Holger Schwenk and Matthijs Douze,
-    [*Learning Joint Multilingual Sentence Representations with Neural Machine Translation*](https://aclanthology.info/papers/W17-2619/w17-2619),
-    ACL workshop on Representation Learning for NLP, 2017
-
-[2] Holger Schwenk and Xian Li,
-    [*A Corpus for Multilingual Document Classification in Eight Languages*](http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf),
-    LREC, pages 3548-3551, 2018.
-
-[3] Holger Schwenk,
-    [*Filtering and Mining Parallel Data in a Joint Multilingual Space*](http://aclweb.org/anthology/P18-2037)
-    ACL, July 2018
-
-[4] Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk and Veselin Stoyanov,
-    [*XNLI: Cross-lingual Sentence Understanding through Inference*](https://aclweb.org/anthology/D18-1269),
-    EMNLP, 2018.
-
-[5] Mikel Artetxe and Holger Schwenk,
-    [*Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings*](https://arxiv.org/abs/1811.01136)
-    arXiv, Nov 3 2018.
-
-[6] Mikel Artetxe and Holger Schwenk,
-    [*Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond*](https://arxiv.org/abs/1812.10464)
-    arXiv, Dec 26 2018.
-
-[7] Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman,
-    [*WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia*](https://arxiv.org/abs/1907.05791)
-    arXiv, July 11 2019.
- -[8] Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin - [*CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB*](https://arxiv.org/abs/1911.04944) diff --git a/kosmos-g/fairseq/examples/laser/laser_src/__init__.py b/kosmos-g/fairseq/examples/laser/laser_src/__init__.py deleted file mode 100644 index 9ffbd656d..000000000 --- a/kosmos-g/fairseq/examples/laser/laser_src/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .laser_task import * # noqa -from .laser_lstm import * # noqa -from .laser_transformer import * # noqa diff --git a/kosmos-g/fairseq/examples/laser/laser_src/laser_lstm.py b/kosmos-g/fairseq/examples/laser/laser_src/laser_lstm.py deleted file mode 100644 index 10df90e00..000000000 --- a/kosmos-g/fairseq/examples/laser/laser_src/laser_lstm.py +++ /dev/null @@ -1,585 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from fairseq import options, utils - -from fairseq.models import ( - FairseqEncoder, - FairseqIncrementalDecoder, - FairseqEncoderDecoderModel, - register_model, - register_model_architecture, -) - - -@register_model("laser_lstm") -class LSTMModel(FairseqEncoderDecoderModel): - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens=None, - tgt_tokens=None, - tgt_lengths=None, - target_language_id=None, - dataset_name="", - ): - assert target_language_id is not None - - src_encoder_out = self.encoder(src_tokens, src_lengths, dataset_name) - return self.decoder( - prev_output_tokens, src_encoder_out, lang_id=target_language_id - ) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", - default=0.1, - type=float, - metavar="D", - help="dropout probability", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-embed-path", - default=None, - type=str, - metavar="STR", - help="path to pre-trained encoder embedding", - ) - parser.add_argument( - "--encoder-hidden-size", type=int, metavar="N", help="encoder hidden size" - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="number of encoder layers" - ) - parser.add_argument( - "--encoder-bidirectional", - action="store_true", - help="make all layers of encoder bidirectional", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-embed-path", - default=None, - type=str, - metavar="STR", - help="path to pre-trained decoder embedding", - ) - parser.add_argument( - "--decoder-hidden-size", type=int, metavar="N", help="decoder hidden size" - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="number of decoder layers" - ) - parser.add_argument( - "--decoder-out-embed-dim", - type=int, - metavar="N", - help="decoder output embedding dimension", - ) - parser.add_argument( - "--decoder-zero-init", - type=str, - metavar="BOOL", - help="initialize the decoder hidden/cell state 
to zero", - ) - parser.add_argument( - "--decoder-lang-embed-dim", - type=int, - metavar="N", - help="decoder language embedding dimension", - ) - parser.add_argument( - "--fixed-embeddings", - action="store_true", - help="keep embeddings fixed (ENCODER ONLY)", - ) # TODO Also apply to decoder embeddings? - - # Granular dropout settings (if not specified these default to --dropout) - parser.add_argument( - "--encoder-dropout-in", - type=float, - metavar="D", - help="dropout probability for encoder input embedding", - ) - parser.add_argument( - "--encoder-dropout-out", - type=float, - metavar="D", - help="dropout probability for encoder output", - ) - parser.add_argument( - "--decoder-dropout-in", - type=float, - metavar="D", - help="dropout probability for decoder input embedding", - ) - parser.add_argument( - "--decoder-dropout-out", - type=float, - metavar="D", - help="dropout probability for decoder output", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure that all args are properly defaulted (in case there are any new ones) - base_architecture(args) - - def load_pretrained_embedding_from_file(embed_path, dictionary, embed_dim): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx) - embed_dict = utils.parse_embedding(embed_path) - utils.print_embed_overlap(embed_dict, dictionary) - return utils.load_embedding(embed_dict, dictionary, embed_tokens) - - pretrained_encoder_embed = None - if args.encoder_embed_path: - pretrained_encoder_embed = load_pretrained_embedding_from_file( - args.encoder_embed_path, task.source_dictionary, args.encoder_embed_dim - ) - pretrained_decoder_embed = None - if args.decoder_embed_path: - pretrained_decoder_embed = load_pretrained_embedding_from_file( - args.decoder_embed_path, task.target_dictionary, args.decoder_embed_dim - ) - - num_langs = task.num_tasks if hasattr(task, "num_tasks") else 0 - - encoder = LSTMEncoder( - dictionary=task.source_dictionary, - embed_dim=args.encoder_embed_dim, - hidden_size=args.encoder_hidden_size, - num_layers=args.encoder_layers, - dropout_in=args.encoder_dropout_in, - dropout_out=args.encoder_dropout_out, - bidirectional=args.encoder_bidirectional, - pretrained_embed=pretrained_encoder_embed, - fixed_embeddings=args.fixed_embeddings, - ) - decoder = LSTMDecoder( - dictionary=task.target_dictionary, - embed_dim=args.decoder_embed_dim, - hidden_size=args.decoder_hidden_size, - out_embed_dim=args.decoder_out_embed_dim, - num_layers=args.decoder_layers, - dropout_in=args.decoder_dropout_in, - dropout_out=args.decoder_dropout_out, - zero_init=options.eval_bool(args.decoder_zero_init), - encoder_embed_dim=args.encoder_embed_dim, - encoder_output_units=encoder.output_units, - pretrained_embed=pretrained_decoder_embed, - num_langs=num_langs, - lang_embed_dim=args.decoder_lang_embed_dim, - ) - return cls(encoder, decoder) - - -class LSTMEncoder(FairseqEncoder): - """LSTM encoder.""" - - def __init__( - self, - dictionary, - embed_dim=512, - hidden_size=512, - num_layers=1, - dropout_in=0.1, - dropout_out=0.1, - bidirectional=False, - left_pad=True, - pretrained_embed=None, - padding_value=0.0, - fixed_embeddings=False, - ): - super().__init__(dictionary) - self.num_layers = num_layers - self.dropout_in = dropout_in - self.dropout_out = dropout_out - self.bidirectional = bidirectional - self.hidden_size = hidden_size - - num_embeddings = len(dictionary) - self.padding_idx = dictionary.pad() - 
if pretrained_embed is None: - self.embed_tokens = Embedding(num_embeddings, embed_dim, self.padding_idx) - else: - self.embed_tokens = pretrained_embed - if fixed_embeddings: - self.embed_tokens.weight.requires_grad = False - - self.lstm = LSTM( - input_size=embed_dim, - hidden_size=hidden_size, - num_layers=num_layers, - dropout=self.dropout_out if num_layers > 1 else 0.0, - bidirectional=bidirectional, - ) - self.left_pad = left_pad - self.padding_value = padding_value - - self.output_units = hidden_size - if bidirectional: - self.output_units *= 2 - - def forward(self, src_tokens, src_lengths, dataset_name): - if self.left_pad: - # convert left-padding to right-padding - src_tokens = utils.convert_padding_direction( - src_tokens, - self.padding_idx, - left_to_right=True, - ) - - bsz, seqlen = src_tokens.size() - - # embed tokens - x = self.embed_tokens(src_tokens) - x = F.dropout(x, p=self.dropout_in, training=self.training) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # pack embedded source tokens into a PackedSequence - try: - packed_x = nn.utils.rnn.pack_padded_sequence(x, src_lengths.data.tolist()) - except BaseException: - raise Exception(f"Packing failed in dataset {dataset_name}") - - # apply LSTM - if self.bidirectional: - state_size = 2 * self.num_layers, bsz, self.hidden_size - else: - state_size = self.num_layers, bsz, self.hidden_size - h0 = x.data.new(*state_size).zero_() - c0 = x.data.new(*state_size).zero_() - packed_outs, (final_hiddens, final_cells) = self.lstm(packed_x, (h0, c0)) - - # unpack outputs and apply dropout - x, _ = nn.utils.rnn.pad_packed_sequence( - packed_outs, padding_value=self.padding_value - ) - x = F.dropout(x, p=self.dropout_out, training=self.training) - assert list(x.size()) == [seqlen, bsz, self.output_units] - - if self.bidirectional: - - def combine_bidir(outs): - return torch.cat( - [ - torch.cat([outs[2 * i], outs[2 * i + 1]], dim=0).view( - 1, bsz, self.output_units - ) - for i in range(self.num_layers) - ], - dim=0, - ) - - final_hiddens = combine_bidir(final_hiddens) - final_cells = combine_bidir(final_cells) - - encoder_padding_mask = src_tokens.eq(self.padding_idx).t() - - # Set padded outputs to -inf so they are not selected by max-pooling - padding_mask = src_tokens.eq(self.padding_idx).t().unsqueeze(-1) - if padding_mask.any(): - x = x.float().masked_fill_(padding_mask, float("-inf")).type_as(x) - - # Build the sentence embedding by max-pooling over the encoder outputs - sentemb = x.max(dim=0)[0] - - return { - "sentemb": sentemb, - "encoder_out": (x, final_hiddens, final_cells), - "encoder_padding_mask": encoder_padding_mask - if encoder_padding_mask.any() - else None, - } - - def reorder_encoder_out(self, encoder_out_dict, new_order): - encoder_out_dict["sentemb"] = encoder_out_dict["sentemb"].index_select( - 0, new_order - ) - encoder_out_dict["encoder_out"] = tuple( - eo.index_select(1, new_order) for eo in encoder_out_dict["encoder_out"] - ) - if encoder_out_dict["encoder_padding_mask"] is not None: - encoder_out_dict["encoder_padding_mask"] = encoder_out_dict[ - "encoder_padding_mask" - ].index_select(1, new_order) - return encoder_out_dict - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return int(1e5) # an arbitrary large number - - -class LSTMDecoder(FairseqIncrementalDecoder): - """LSTM decoder.""" - - def __init__( - self, - dictionary, - embed_dim=512, - hidden_size=512, - out_embed_dim=512, - num_layers=1, - dropout_in=0.1, - dropout_out=0.1, - zero_init=False, - 
encoder_embed_dim=512, - encoder_output_units=512, - pretrained_embed=None, - num_langs=1, - lang_embed_dim=0, - ): - super().__init__(dictionary) - self.dropout_in = dropout_in - self.dropout_out = dropout_out - self.hidden_size = hidden_size - - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - if pretrained_embed is None: - self.embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx) - else: - self.embed_tokens = pretrained_embed - - self.layers = nn.ModuleList( - [ - LSTMCell( - input_size=encoder_output_units + embed_dim + lang_embed_dim - if layer == 0 - else hidden_size, - hidden_size=hidden_size, - ) - for layer in range(num_layers) - ] - ) - if hidden_size != out_embed_dim: - self.additional_fc = Linear(hidden_size, out_embed_dim) - self.fc_out = Linear(out_embed_dim, num_embeddings, dropout=dropout_out) - - if zero_init: - self.sentemb2init = None - else: - self.sentemb2init = Linear( - encoder_output_units, 2 * num_layers * hidden_size - ) - - if lang_embed_dim == 0: - self.embed_lang = None - else: - self.embed_lang = nn.Embedding(num_langs, lang_embed_dim) - nn.init.uniform_(self.embed_lang.weight, -0.1, 0.1) - - def forward( - self, prev_output_tokens, encoder_out_dict, incremental_state=None, lang_id=0 - ): - sentemb = encoder_out_dict["sentemb"] - encoder_out = encoder_out_dict["encoder_out"] - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - bsz, seqlen = prev_output_tokens.size() - - # get outputs from encoder - encoder_outs, _, _ = encoder_out[:3] - srclen = encoder_outs.size(0) - - # embed tokens - x = self.embed_tokens(prev_output_tokens) - x = F.dropout(x, p=self.dropout_in, training=self.training) - - # embed language identifier - if self.embed_lang is not None: - lang_ids = prev_output_tokens.data.new_full((bsz,), lang_id) - langemb = self.embed_lang(lang_ids) - # TODO Should we dropout here??? 
- - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # initialize previous states (or get from cache during incremental generation) - cached_state = utils.get_incremental_state( - self, incremental_state, "cached_state" - ) - if cached_state is not None: - prev_hiddens, prev_cells, input_feed = cached_state - else: - num_layers = len(self.layers) - if self.sentemb2init is None: - prev_hiddens = [ - x.data.new(bsz, self.hidden_size).zero_() for i in range(num_layers) - ] - prev_cells = [ - x.data.new(bsz, self.hidden_size).zero_() for i in range(num_layers) - ] - else: - init = self.sentemb2init(sentemb) - prev_hiddens = [ - init[:, (2 * i) * self.hidden_size : (2 * i + 1) * self.hidden_size] - for i in range(num_layers) - ] - prev_cells = [ - init[ - :, - (2 * i + 1) * self.hidden_size : (2 * i + 2) * self.hidden_size, - ] - for i in range(num_layers) - ] - input_feed = x.data.new(bsz, self.hidden_size).zero_() - - attn_scores = x.data.new(srclen, seqlen, bsz).zero_() - outs = [] - for j in range(seqlen): - if self.embed_lang is None: - input = torch.cat((x[j, :, :], sentemb), dim=1) - else: - input = torch.cat((x[j, :, :], sentemb, langemb), dim=1) - - for i, rnn in enumerate(self.layers): - # recurrent cell - hidden, cell = rnn(input, (prev_hiddens[i], prev_cells[i])) - - # hidden state becomes the input to the next layer - input = F.dropout(hidden, p=self.dropout_out, training=self.training) - - # save state for next time step - prev_hiddens[i] = hidden - prev_cells[i] = cell - - out = hidden - out = F.dropout(out, p=self.dropout_out, training=self.training) - - # input feeding - input_feed = out - - # save final output - outs.append(out) - - # cache previous states (no-op except during incremental generation) - utils.set_incremental_state( - self, - incremental_state, - "cached_state", - (prev_hiddens, prev_cells, input_feed), - ) - - # collect outputs across time steps - x = torch.cat(outs, dim=0).view(seqlen, bsz, self.hidden_size) - - # T x B x C -> B x T x C - x = x.transpose(1, 0) - - # srclen x tgtlen x bsz -> bsz x tgtlen x srclen - attn_scores = attn_scores.transpose(0, 2) - - # project back to size of vocabulary - if hasattr(self, "additional_fc"): - x = self.additional_fc(x) - x = F.dropout(x, p=self.dropout_out, training=self.training) - x = self.fc_out(x) - - return x, attn_scores - - def reorder_incremental_state(self, incremental_state, new_order): - super().reorder_incremental_state(incremental_state, new_order) - cached_state = utils.get_incremental_state( - self, incremental_state, "cached_state" - ) - if cached_state is None: - return - - def reorder_state(state): - if isinstance(state, list): - return [reorder_state(state_i) for state_i in state] - return state.index_select(0, new_order) - - new_state = tuple(map(reorder_state, cached_state)) - utils.set_incremental_state(self, incremental_state, "cached_state", new_state) - - def max_positions(self): - """Maximum output length supported by the decoder.""" - return int(1e5) # an arbitrary large number - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.uniform_(m.weight, -0.1, 0.1) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def LSTM(input_size, hidden_size, **kwargs): - m = nn.LSTM(input_size, hidden_size, **kwargs) - for name, param in m.named_parameters(): - if "weight" in name or "bias" in name: - param.data.uniform_(-0.1, 0.1) - return m - - -def LSTMCell(input_size, hidden_size, **kwargs): - m 
= nn.LSTMCell(input_size, hidden_size, **kwargs)
-    for name, param in m.named_parameters():
-        if "weight" in name or "bias" in name:
-            param.data.uniform_(-0.1, 0.1)
-    return m
-
-
-def Linear(in_features, out_features, bias=True, dropout=0):
-    """Uniformly initialized Linear layer (input: N x T x C); `dropout` is unused and kept for interface compatibility."""
-    m = nn.Linear(in_features, out_features, bias=bias)
-    m.weight.data.uniform_(-0.1, 0.1)
-    if bias:
-        m.bias.data.uniform_(-0.1, 0.1)
-    return m
-
-
-@register_model_architecture("laser_lstm", "laser_lstm")
-def base_architecture(args):
-    args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
-    args.encoder_embed_path = getattr(args, "encoder_embed_path", None)
-    args.encoder_hidden_size = getattr(
-        args, "encoder_hidden_size", args.encoder_embed_dim
-    )
-    args.encoder_layers = getattr(args, "encoder_layers", 1)
-    args.encoder_bidirectional = getattr(args, "encoder_bidirectional", False)
-    args.encoder_dropout_in = getattr(args, "encoder_dropout_in", args.dropout)
-    args.encoder_dropout_out = getattr(args, "encoder_dropout_out", args.dropout)
-    args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512)
-    args.decoder_embed_path = getattr(args, "decoder_embed_path", None)
-    args.decoder_hidden_size = getattr(
-        args, "decoder_hidden_size", args.decoder_embed_dim
-    )
-    args.decoder_layers = getattr(args, "decoder_layers", 1)
-    args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 512)
-    args.decoder_dropout_in = getattr(args, "decoder_dropout_in", args.dropout)
-    args.decoder_dropout_out = getattr(args, "decoder_dropout_out", args.dropout)
-    args.decoder_zero_init = getattr(args, "decoder_zero_init", "0")
-    args.decoder_lang_embed_dim = getattr(args, "decoder_lang_embed_dim", 0)
-    args.fixed_embeddings = getattr(args, "fixed_embeddings", False)
diff --git a/kosmos-g/fairseq/examples/laser/laser_src/laser_task.py b/kosmos-g/fairseq/examples/laser/laser_src/laser_task.py
deleted file mode 100644
index 9bf2d7ad8..000000000
--- a/kosmos-g/fairseq/examples/laser/laser_src/laser_task.py
+++ /dev/null
@@ -1,334 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
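The `base_architecture` hook above illustrates fairseq's defaulting pattern: each field is layered onto the argparse namespace with `getattr`, so user-supplied flags survive and later defaults can reference earlier ones. A minimal, self-contained sketch of the pattern (the `my_arch` name and the standalone `Namespace` usage are illustrative, not part of the original code):

```python
from argparse import Namespace


def my_arch(args):
    # getattr(args, name, default) keeps any value already set on the
    # namespace and only fills in the default when the flag is absent.
    args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
    # later defaults may depend on earlier ones (hidden size follows embed dim)
    args.encoder_hidden_size = getattr(
        args, "encoder_hidden_size", args.encoder_embed_dim
    )


args = Namespace(encoder_embed_dim=1024)  # the user overrode one flag
my_arch(args)
print(args.encoder_embed_dim, args.encoder_hidden_size)  # -> 1024 1024
```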
- - -from collections import OrderedDict, defaultdict -import json -import os -import logging -from argparse import ArgumentError - -from fairseq import options, models -from fairseq.data import ( - data_utils, - Dictionary, - LanguagePairDataset, - IndexedDataset, - FairseqDataset, -) -from .multitask_data_utils import ( - MultitaskDatasetWrapper, - MultidatasetEpochBatchIterator, -) - - -from fairseq.tasks import LegacyFairseqTask, register_task - -logger = logging.getLogger(__name__) - - -@register_task("laser") -class LaserTask(LegacyFairseqTask): - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument( - "configfile", metavar="PATH", help="dataset configuration file in json" - ) - parser.add_argument( - "--weighting-alpha", - type=float, - default=None, - help="alpha for automatic weighting", - ) - parser.add_argument( - "--raw-text", action="store_true", help="load raw text dataset" - ) - parser.add_argument( - "--left-pad-source", - default="True", - type=str, - metavar="BOOL", - help="pad the source on the left (default: True)", - ) - parser.add_argument( - "--left-pad-target", - default="False", - type=str, - metavar="BOOL", - help="pad the target on the left (default: False)", - ) - try: - parser.add_argument( - "--max-source-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the source sequence", - ) - parser.add_argument( - "--max-target-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the target sequence", - ) - except ArgumentError: - # this might have already been defined. Once we transition this to hydra it should be fine to add it here. - pass - - def __init__(self, args, config, src_dictionary, tgt_dictionary, num_tasks): - super().__init__(args) - self.config = config - self.src_dictionary = src_dictionary - self.tgt_dictionary = tgt_dictionary - self.num_tasks = num_tasks - - @classmethod - def setup_task(cls, args, **kwargs): - with open(args.configfile, "r") as f: - config = json.load(f) - num_tasks = max(dataset["id"] for dataset in config["train"]) + 1 - - args.left_pad_source = options.eval_bool(args.left_pad_source) - args.left_pad_target = options.eval_bool(args.left_pad_target) - - src_dictionary = Dictionary.load(config["src_vocab"]) - tgt_dictionary = Dictionary.load(config["tgt_vocab"]) - - logger.info( - "| src Dictionary {} : {} types".format( - config["src_vocab"], len(src_dictionary) - ) - ) - logger.info( - "| tgt Dictionary {} : {} types".format( - config["tgt_vocab"], len(tgt_dictionary) - ) - ) - - return cls(args, config, src_dictionary, tgt_dictionary, num_tasks) - - # Experimental overriding for backtranslation - def build_model(self, args, from_checkpoint=False): - model = models.build_model(args, self) - return model - - def dataset(self, split): - if split not in self.datasets: - raise KeyError("Dataset not loaded: " + split) - return self.datasets[split] - - def load_dataset(self, split, epoch=1, **kwargs): - """Load a dataset split.""" - - def indexed_dataset(path, dictionary): - if self.args.raw_text: - raise Exception("Unable to handle raw text.") - dataset = IndexedDataset(path, fix_lua_indexing=True) - - return dataset - - pair_datasets = OrderedDict() - - if split == "valid": - self.datasets[split] = pair_datasets - return - - if split not in self.config: - raise FileNotFoundError( - "Dataset not found in config file: {}".format(split) - ) - - size_by_corpus = defaultdict(int) - size_sum = 0 - 
size_sum_with_subsampling = 0 - init_pair_datasets = {} - - for dataset_config in self.config[split]: - src_path = os.path.dirname(dataset_config["src"]) - corpus_name = src_path.split("/")[-2] - language_pair_name = src_path.split("/")[-1] - pair_datasets_key = corpus_name + "-" + language_pair_name - - logger.info(f"loading... {pair_datasets_key}") - if "src" in dataset_config: - src_dataset = indexed_dataset( - dataset_config["src"], self.src_dictionary - ) - else: - src_dataset = None - - if "tgt" in dataset_config: - tgt_dataset = indexed_dataset( - dataset_config["tgt"], self.tgt_dictionary - ) - else: - tgt_dataset = None - - dataset = LanguagePairDataset( - src_dataset, - src_dataset.sizes, - self.src_dictionary, - tgt_dataset, - tgt_dataset.sizes, - self.tgt_dictionary, - left_pad_source=self.args.left_pad_source, - left_pad_target=self.args.left_pad_target, - ) - - if pair_datasets_key in init_pair_datasets: - logger.warning( - f"Ignoring already added {pair_datasets_key}. " - f"Consider using `sample` key in order to upsample." - ) - else: - init_pair_datasets[pair_datasets_key] = { - "dataset": dataset, - "sample": dataset_config.get("sample", None), - "id": dataset_config.get("id", None), - "len": len(dataset), - } - - length_sum = 0 - weighted_freqs_sum = 0 - freq_per_dataset = {} - vmax = 0 - vmin = 1 - weighted_freq_per_dataset = {} - - if self.args.weighting_alpha: - for key in init_pair_datasets: - if init_pair_datasets[key]["sample"] is None: - length_sum += len(init_pair_datasets[key]["dataset"]) - - for key in init_pair_datasets: - if init_pair_datasets[key]["sample"] is None: - val = float(init_pair_datasets[key]["len"]) / length_sum - freq_per_dataset[key] = val - weighted_freqs_sum += val ** self.args.weighting_alpha - - for key in freq_per_dataset: - val = ( - freq_per_dataset[key] ** self.args.weighting_alpha - / weighted_freqs_sum - ) - vmin = min(vmin, val) - vmax = max(vmax, val) - weighted_freq_per_dataset[key] = val - - for pair_datasets_key in init_pair_datasets: - dataset_config = init_pair_datasets[pair_datasets_key] - dataset = dataset_config["dataset"] - sample = dataset_config["sample"] - if sample is None: - sample = 1.0 - - if pair_datasets_key in weighted_freq_per_dataset: - w = vmax / weighted_freq_per_dataset[pair_datasets_key] - sample = w - - sample = round(sample) - - initial_sample = sample - initial_pair_datasets_key = pair_datasets_key - - while sample >= 1.0: - assert ( - pair_datasets_key not in pair_datasets - ), f"{pair_datasets_key} already in" - size_sum_with_subsampling += len(dataset) - pair_datasets[pair_datasets_key] = MultitaskDatasetWrapper( - dataset, dataset_config.get("id", 0), 1.0, name=pair_datasets_key - ) - size_sum += len(dataset) - sample -= 1.0 - pair_datasets_key += "-up" - - assert sample < 1e-6, f"sample remains > 0 {pair_datasets_key}" - - logger.info( - f"added pair {initial_pair_datasets_key} length {len(dataset)} new_length = {len(dataset)*initial_sample}" - ) - size_by_corpus[corpus_name] += len(dataset) - - self.datasets[split] = pair_datasets - logger.info( - f"Datasets number = {len(self.datasets[split])} size = {size_sum} size_sum_with_subsampling = {size_sum_with_subsampling}" - ) - - @property - def source_dictionary(self): - return self.src_dictionary - - @property - def target_dictionary(self): - return self.tgt_dictionary - - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, 
- num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - data_buffer_size=0, - disable_iterator_cache=False, - grouped_shuffling=False, - update_epoch_batch_itr=False, - **kwargs, - ): - - assert isinstance(dataset, OrderedDict) - assert len(dataset) - assert isinstance(dataset[next(iter(dataset))], FairseqDataset) - - # initialize the dataset with the correct starting epoch - for _, dt in dataset.items(): - dt.set_epoch(epoch) - - indices = OrderedDict() - batch_sampler = OrderedDict() - - with data_utils.numpy_seed(seed + epoch): - for key, dt in dataset.items(): - logger.info(f"\t ordered_indices {key}") - indices[key] = dt.ordered_indices() - - # filter examples that are too large - if max_positions is not None: - for key, dt in dataset.items(): - logger.info(f"\t filter_by_size {key}") - indices[key], ignored = dt.filter_indices_by_size( - indices[key], max_positions - ) - - for key, dt in dataset.items(): - logger.info(f"\t batch_by_size {key}") - batch_sampler[key] = data_utils.batch_by_size( - indices[key], - dt.num_tokens, - max_tokens=max_tokens, - max_sentences=max_sentences, - required_batch_size_multiple=required_batch_size_multiple, - ) - - epoch_iter = MultidatasetEpochBatchIterator( - dataset=dataset, - batch_sampler=batch_sampler, - seed=seed, - num_shards=num_shards, - shard_id=shard_id, - num_workers=num_workers, - epoch=epoch, - ) - - return epoch_iter diff --git a/kosmos-g/fairseq/examples/laser/laser_src/laser_transformer.py b/kosmos-g/fairseq/examples/laser/laser_src/laser_transformer.py deleted file mode 100644 index 0be030994..000000000 --- a/kosmos-g/fairseq/examples/laser/laser_src/laser_transformer.py +++ /dev/null @@ -1,354 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
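`get_batch_iterator` above sorts each dataset's indices and hands them to `data_utils.batch_by_size`, which packs examples into batches under a token budget. A simplified sketch of that style of bucketing; this is an illustration, not fairseq's actual implementation (which also honors `max_sentences` and batch-size multiples):

```python
def batch_by_token_budget(indices, num_tokens_fn, max_tokens):
    """Greedily pack sorted indices into batches whose padded size
    (batch length x longest example) stays within max_tokens."""
    batches, batch, longest = [], [], 0
    for idx in indices:
        longest = max(longest, num_tokens_fn(idx))
        if batch and (len(batch) + 1) * longest > max_tokens:
            batches.append(batch)
            batch, longest = [], num_tokens_fn(idx)
        batch.append(idx)
    if batch:
        batches.append(batch)
    return batches


# toy usage: four examples of lengths 3, 4, 8, 9 and an 18-token budget
lengths = {0: 3, 1: 4, 2: 8, 3: 9}
print(batch_by_token_budget([0, 1, 2, 3], lengths.__getitem__, 18))
# -> [[0, 1], [2, 3]]
```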
-
-import logging
-
-from typing import Any, Dict, List, Optional
-from torch import Tensor
-
-import torch
-import torch.nn as nn
-
-from fairseq.models import (
-    FairseqEncoderDecoderModel,
-    register_model,
-    register_model_architecture,
-)
-from fairseq.models.transformer import (
-    base_architecture,
-    Embedding,
-    TransformerModel,
-    TransformerEncoder,
-    TransformerDecoder,
-)
-from fairseq.modules import (
-    TransformerDecoderLayer,
-)
-
-logger = logging.getLogger(__name__)
-
-
-@register_model("laser_transformer")
-class LaserTransformerModel(FairseqEncoderDecoderModel):
-    """Train a Transformer for the LASER task.
-
-    Requires --task laser
-    """
-
-    def __init__(self, encoder, decoder):
-        super().__init__(encoder, decoder)
-
-    def forward(
-        self,
-        src_tokens,
-        src_lengths,
-        prev_output_tokens=None,
-        tgt_tokens=None,
-        tgt_lengths=None,
-        target_language_id=-1,
-        dataset_name="",
-    ):
-        laser_encoder_out = self.encoder(src_tokens, src_lengths)
-        return self.decoder(
-            prev_output_tokens, laser_encoder_out, lang_id=target_language_id
-        )
-
-    @staticmethod
-    def add_args(parser):
-        """Add model-specific arguments to the parser."""
-        TransformerModel.add_args(parser)
-        parser.add_argument(
-            "--decoder-lang-embed-dim",
-            type=int,
-            metavar="N",
-            help="decoder language embedding dimension",
-        )
-
-    @classmethod
-    def build_model(cls, args, task):
-        base_laser_transformer_architecture(args)
-
-        num_langs = task.num_tasks if hasattr(task, "num_tasks") else 0
-
-        def load_embed_tokens(dictionary, embed_dim):
-            num_embeddings = len(dictionary)
-            padding_idx = dictionary.pad()
-
-            return Embedding(num_embeddings, embed_dim, padding_idx)
-
-        encoder_embed_tokens = load_embed_tokens(
-            task.source_dictionary, args.encoder_embed_dim
-        )
-        decoder_embed_tokens = load_embed_tokens(
-            task.target_dictionary, args.decoder_embed_dim
-        )
-
-        encoder = LaserTransformerEncoder(
-            args, task.source_dictionary, encoder_embed_tokens
-        )
-
-        decoder = LaserTransformerDecoder(
-            args,
-            task.target_dictionary,
-            decoder_embed_tokens,
-            num_langs=num_langs,
-            lang_embed_dim=args.decoder_lang_embed_dim,
-        )
-
-        return cls(encoder, decoder)
-
-
-class LaserTransformerEncoder(TransformerEncoder):
-    def __init__(self, *args, **kwargs):
-        super().__init__(*args, **kwargs)
-
-    def forward(self, src_tokens, *args, **kwargs):
-        encoder_out = super().forward(src_tokens, *args, **kwargs)
-
-        x = encoder_out["encoder_out"][0]  # T x B x C
-        padding_mask = src_tokens.eq(self.padding_idx).t().unsqueeze(-1)
-
-        if padding_mask.any():
-            x = x.float().masked_fill_(padding_mask, float("-inf")).type_as(x)
-
-        # Build the sentence embedding by max-pooling over the encoder outputs
-        sentemb = x.max(dim=0)[0]
-
-        # The PyTorch Mobile lite interpreter does not support returning NamedTuple in
-        # `forward` so we use a dictionary instead.
-        # TorchScript does not support mixed values so the values are all lists.
-        # The empty list is equivalent to None.
- return {"sentemb": [sentemb]} # B x C - - @torch.jit.export - def reorder_encoder_out(self, encoder_out: Dict[str, List[Tensor]], new_order): - """ - Same as the one in transformer.py, with new_sentemb - """ - if len(encoder_out["sentemb"]) == 0: - new_sentemb = [] - else: - new_sentemb = [encoder_out["sentemb"][0].index_select(0, new_order)] - - return { - "sentemb": new_sentemb, # B x C - } - - -class LaserTransformerDecoder(TransformerDecoder): - def __init__(self, args, dictionary, *kargs, **kwargs): - self.num_langs = kwargs.get("num_langs", 1) - self.lang_embed_dim = kwargs.get("lang_embed_dim", 0) - kwargs.pop("num_langs", None) - kwargs.pop("lang_embed_dim", None) - - super().__init__(args, dictionary, *kargs, **kwargs, no_encoder_attn=True) - - if self.lang_embed_dim == 0: - self.embed_lang = None - else: - self.embed_lang = nn.Embedding(self.num_langs, self.lang_embed_dim) - nn.init.uniform_(self.embed_lang.weight, -0.1, 0.1) - - if self.output_projection is not None: - laser_output_embed_dim = ( - self.output_embed_dim + self.lang_embed_dim + args.encoder_embed_dim - ) - self.output_projection = nn.Linear( - laser_output_embed_dim, len(dictionary), bias=False - ) - nn.init.normal_( - self.output_projection.weight, - mean=0, - std=laser_output_embed_dim ** -0.5, - ) - - def build_decoder_layer(self, args, no_encoder_attn=False): - decoder_embed_dim = args.decoder_embed_dim - args.decoder_embed_dim = ( - decoder_embed_dim + self.lang_embed_dim + args.encoder_embed_dim - ) - res = TransformerDecoderLayer(args, no_encoder_attn=True) - args.decoder_embed_dim = decoder_embed_dim - - return res - - def extract_features( - self, - prev_output_tokens, - encoder_out: Optional[Dict[str, List[Tensor]]], - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - full_context_alignment: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - lang_id: Optional[int] = None, - ): - """ - Similar to *forward* but only return features. - - Includes several features from "Jointly Learning to Align and - Translate with Transformer Models" (Garg et al., EMNLP 2019). - - Args: - full_context_alignment (bool, optional): don't apply - auto-regressive mask to self-attention (default: False). - alignment_layer (int, optional): return mean alignment over - heads at this layer (default: last layer). - alignment_heads (int, optional): only average alignment over - this many heads (default: all heads). 
- - Returns: - tuple: - - the decoder's features of shape `(batch, tgt_len, embed_dim)` - - a dictionary with any model-specific outputs - """ - if alignment_layer is None: - alignment_layer = self.num_layers - 1 - - # embed positions - positions = ( - self.embed_positions( - prev_output_tokens, incremental_state=incremental_state - ) - if self.embed_positions is not None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - bsz, seqlen = prev_output_tokens.size() - - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - - if self.quant_noise is not None: - x = self.quant_noise(x) - - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - - if self.layernorm_embedding is not None: - x = self.layernorm_embedding(x) - - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - if self.embed_lang is not None: - lang_ids = prev_output_tokens.data.new_full((bsz,), lang_id) - langemb = self.embed_lang(lang_ids) - langemb = langemb.unsqueeze(0) - repeat_vals = [x.shape[0] // langemb.shape[0]] + [-1] * ( - len(langemb.shape) - 1 - ) - x = torch.cat((x, langemb.expand(*repeat_vals)), dim=-1) - - sentemb = encoder_out["sentemb"][0] - sentemb = sentemb.unsqueeze(0) - - repeat_vals = [x.shape[0] // sentemb.shape[0]] + [-1] * (len(sentemb.shape) - 1) - x = torch.cat((x, sentemb.expand(*repeat_vals)), dim=-1) - - self_attn_padding_mask: Optional[Tensor] = None - if self.cross_self_attention or prev_output_tokens.eq(self.padding_idx).any(): - self_attn_padding_mask = prev_output_tokens.eq(self.padding_idx) - - # decoder layers - attn: Optional[Tensor] = None - inner_states: List[Optional[Tensor]] = [x] - for idx, layer in enumerate(self.layers): - if incremental_state is None and not full_context_alignment: - self_attn_mask = self.buffered_future_mask(x) - else: - self_attn_mask = None - - x, layer_attn, _ = layer( - x, - None, - None, - incremental_state, - self_attn_mask=self_attn_mask, - self_attn_padding_mask=self_attn_padding_mask, - need_attn=bool((idx == alignment_layer)), - need_head_weights=bool((idx == alignment_layer)), - ) - inner_states.append(x) - if layer_attn is not None and idx == alignment_layer: - attn = layer_attn.float().to(x) - - if attn is not None: - if alignment_heads is not None: - attn = attn[:alignment_heads] - - # average probabilities over heads - attn = attn.mean(dim=0) - - if self.layer_norm is not None: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - if self.project_out_dim is not None: - x = self.project_out_dim(x) - - return x, {"attn": [attn], "inner_states": inner_states} - - def forward( - self, - prev_output_tokens, - encoder_out: Optional[Dict[str, List[Tensor]]] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - features_only: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - src_lengths: Optional[Any] = None, - return_all_hiddens: bool = False, - lang_id: Optional[int] = None, - ): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (optional): output from the encoder, used for - encoder-side attention - incremental_state (dict): dictionary used for storing state during - :ref:`Incremental decoding` - features_only (bool, 
optional): only return features without - applying output layer (default: False). - - Returns: - tuple: - - the decoder's output of shape `(batch, tgt_len, vocab)` - - a dictionary with any model-specific outputs - """ - - assert lang_id is not None - - x, extra = self.extract_features( - prev_output_tokens, - encoder_out=encoder_out, - incremental_state=incremental_state, - alignment_layer=alignment_layer, - alignment_heads=alignment_heads, - lang_id=lang_id, - ) - if not features_only: - x = self.output_layer(x) - return x, extra - - -@register_model_architecture("laser_transformer", "laser_transformer") -def base_laser_transformer_architecture(args): - base_architecture(args) - args.decoder_lang_embed_dim = getattr(args, "decoder_lang_embed_dim", 0) diff --git a/kosmos-g/fairseq/examples/laser/laser_src/multitask_data_utils.py b/kosmos-g/fairseq/examples/laser/laser_src/multitask_data_utils.py deleted file mode 100644 index b05caea26..000000000 --- a/kosmos-g/fairseq/examples/laser/laser_src/multitask_data_utils.py +++ /dev/null @@ -1,143 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from collections import OrderedDict - -import numpy as np - -from fairseq.data import BaseWrapperDataset, FairseqDataset, iterators - - -class MultiItr(object): - def __init__(self, itr): - self.itr = itr - self._counts = [0 for x in itr] - - def __len__(self): - return sum(len(itr) for itr in self.itr) - - def __iter__(self): - return self - - def __next__(self): - ratios = [count / len(itr) for count, itr in zip(self._counts, self.itr)] - idx = ratios.index(min(ratios)) - self._counts[idx] += 1 - return next(self.itr[idx]) - - -class MultidatasetEpochBatchIterator(iterators.EpochBatchIterating): - """A wrapper around multiple epoch batch iterators.""" - - def __init__( - self, - dataset, - batch_sampler, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - ): - - assert isinstance(dataset, OrderedDict) - assert len(dataset) - assert isinstance(dataset[next(iter(dataset))], FairseqDataset) - - self.iterators = [] - - self.epoch = epoch - for key, dt in dataset.items(): - epoch_iter = iterators.EpochBatchIterator( - dataset=dt, - collate_fn=dt.collater, - batch_sampler=batch_sampler[key], - seed=seed, - num_shards=num_shards, - shard_id=shard_id, - num_workers=0, - epoch=epoch, - ) - self.iterators.append(epoch_iter) - - def __len__(self): - return sum(len(itr) for itr in self.iterators) - - def next_epoch_itr(self, shuffle=True, fix_batches_to_gpus=False): - # `self.epoch += 1` should be handled by underlying `EpochBatchIterator`s. 
- return MultiItr( - [ - itr.next_epoch_itr( - shuffle=shuffle, fix_batches_to_gpus=fix_batches_to_gpus - ) - for itr in self.iterators - ] - ) - - def end_of_epoch(self): - return all(itr.end_of_epoch() for itr in self.iterators) - - @property - def next_epoch_idx(self): - """Return the epoch index after *next_epoch_itr* is called.""" - - epochs = [itr.next_epoch_idx for itr in self.iterators] - self.epoch = epochs[0] - assert all(epoch == self.epoch for epoch in epochs) - - return self.epoch - - @property - def iterations_in_epoch(self): - return sum(itr.iterations_in_epoch for itr in self.iterators) - - def state_dict(self): - return { - "iterators": [it.state_dict() for it in self.iterators], - "epoch": self.epoch, - } - - def load_state_dict(self, state_dict): - self.epoch = state_dict["epoch"] - for it, d in zip(self.iterators, state_dict["iterators"]): - it.load_state_dict(d) - - -class MultitaskDatasetWrapper(BaseWrapperDataset): - """A wrapper for a multitask dataset.""" - - def __init__(self, dataset, target_language_id, sample=1.0, name=""): - super().__init__(dataset) - self.target_language_id = target_language_id - self.sample = sample - self.name = name - - def collater(self, *args, **kwargs): - ans = self.dataset.collater(*args, **kwargs) - if "net_input" in ans: - ans["net_input"]["target_language_id"] = self.target_language_id - ans["net_input"]["dataset_name"] = self.name - return ans - - def num_tokens(self, *args, **kwargs): - return self.dataset.num_tokens(*args, **kwargs) - - def ordered_indices(self, *args, **kwargs): - indices = self.dataset.ordered_indices(*args, **kwargs) - # Hacky solution for sampling - size = int(self.sample * indices.shape[0]) - - return indices.take(np.sort(np.random.permutation(indices.shape[0])[:size])) - - def size(self, index: int): - return self.dataset.size(index) - - @property - def supports_prefetch(self): - """Whether this dataset supports prefetching.""" - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.dataset.prefetch(indices) diff --git a/kosmos-g/fairseq/examples/latent_depth/README.md b/kosmos-g/fairseq/examples/latent_depth/README.md deleted file mode 100644 index 7774c3330..000000000 --- a/kosmos-g/fairseq/examples/latent_depth/README.md +++ /dev/null @@ -1,77 +0,0 @@ -# Deep Transformers with Latent Depth (Li et al., 2020) - -[https://arxiv.org/abs/2009.13102](https://arxiv.org/abs/2009.13102). - -## Introduction - -We present a probabilistic framework to automatically learn which layer(s) to use by learning the posterior distributions of layer selection. As an extension of this framework, we propose a novel method to train one shared Transformer network for multilingual machine translation with different layer selection posteriors for each language pair. - -## Training a multilingual model with latent depth - -Below is an example of training with latent depth in decoder for one-to-many (O2M) related languages. We use the same preprocessed (numberized and binarized) TED8 dataset as in [Balancing Training for Multilingual Neural Machine Translation (Wang et al., 2020)](https://github.com/cindyxinyiwang/multiDDS), which could be generated by [the script](https://github.com/cindyxinyiwang/multiDDS/blob/multiDDS/util_scripts/prepare_multilingual_data.sh) the author provided. 
-```bash -lang_pairs_str="eng-aze,eng-bel,eng-ces,eng-glg,eng-por,eng-rus,eng-slk,eng-tur" -databin_dir= - -fairseq-train ${databin_dir} \ - --user-dir examples/latent_depth/latent_depth_src \ - --lang-pairs "${lang_pairs_str}" \ - --arch multilingual_transformer_iwslt_de_en \ - --task multilingual_translation_latent_depth \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --share-encoders \ - --share-decoders \ - --decoder-langtok \ - --share-decoder-input-output-embed \ - --dropout 0.3 --attention-dropout 0.3 \ - --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \ - --lr-scheduler inverse_sqrt --stop-min-lr 1e-9 --warmup-init-lr 1e-7 --warmup-updates 8000 \ - --max-tokens 4096 --update-freq 1 \ - --lr 0.0015 \ - --clip-norm 1.0 \ - --seed 2 \ - --ddp-backend=legacy_ddp \ - --encoder-layers 12 \ - --decoder-layers 24 \ - --decoder-latent-layer \ - --sparsity-weight 0.1 \ - --anneal-updates 5000 \ - --soft-update 500 \ - --target-layers 12 \ - --share-weight 0.1 -``` -## Inference command - -```bash -lang_pairs_str="eng-aze,eng-bel,eng-ces,eng-glg,eng-por,eng-rus,eng-slk,eng-tur" -databin_dir= -model_path= -src_lang= -tgt_lang= -gen_data= - -fairseq-generate ${databin_dir} \ - --path ${model_path} \ - --task multilingual_translation_latent_depth \ - --decoder-latent-layer \ - --lang-pairs "${lang_pairs_str}" \ - -s ${src_lang} -t ${tgt_lang} \ - --gen-subset $gen_data \ - --scoring sacrebleu \ - --remove-bpe 'sentencepiece' \ - --lenpen 1.0 \ - --beam 5 \ - --decoder-langtok \ - --max-tokens 4096 -``` - - -## Citation -```bibtex -@article{li2020deep, - title={Deep Transformers with Latent Depth}, - author={Li, Xian and Stickland, Asa Cooper and Tang, Yuqing and Kong, Xiang}, - journal={arXiv preprint arXiv:2009.13102}, - year={2020} -} -``` diff --git a/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/__init__.py b/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/__init__.py deleted file mode 100644 index c5fa76039..000000000 --- a/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import multilingual_translation_latent_depth # noqa -from .loss import latent_depth # noqa -from .models import latent_multilingual_transformer # noqa -from .modules import latent_layers # noqa diff --git a/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/loss/__init__.py b/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/loss/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/loss/latent_depth.py b/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/loss/latent_depth.py deleted file mode 100644 index a3b9535ec..000000000 --- a/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/loss/latent_depth.py +++ /dev/null @@ -1,99 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
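The `--sparsity-weight`, `--soft-update`, and `--anneal-updates` flags in the training command above drive the KL weight used by `LatentLayersKLLoss` below: the weight grows linearly once `update_num` passes `soft_update` and is capped at `sparsity_weight`. A standalone sketch of that schedule, using the flag values from the command (the helper name `kl_weight` is illustrative):

```python
def kl_weight(update_num, sparsity_weight=0.1, soft_update=500, anneal_updates=5000):
    # linear ramp after soft_update, capped at sparsity_weight
    # (mirroring the min(...) expression in LatentLayersKLLoss.forward)
    return min(
        sparsity_weight,
        (update_num - soft_update) * sparsity_weight / anneal_updates,
    )


for u in (500, 1000, 3000, 5500, 10000):
    print(u, round(kl_weight(u), 4))
# -> 0.0, 0.01, 0.05, 0.1, 0.1
```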
-
-import math
-
-import torch
-from torch.nn.modules.loss import _Loss
-
-
-class LatentLayersKLLoss(_Loss):
-    def __init__(self, args):
-        super().__init__()
-        self.args = args
-
-    def forward(self, layer_samples, lang_idx, update_num, sample_size):
-        prior = self.args.prior
-        samples = layer_samples[lang_idx]
-        eps = 1e-7
-        if prior == "uniform":
-            # uniform prior
-            kl_loss = (samples * (torch.log(samples + eps) - math.log(0.5))).sum(-1)
-        elif prior == "agged_posterior":
-            # aggregated posterior
-            y_t = torch.stack([x.detach() for x in layer_samples], dim=0)
-            agged_q = torch.sum(y_t, dim=0)
-            row_norm = agged_q.sum(-1)
-            normed_agg_q = agged_q / row_norm
-            kl_loss = (
-                samples * (torch.log(samples + eps) - torch.log(normed_agg_q + eps))
-            ).sum(-1)
-        else:
-            raise NotImplementedError("The specified prior is not implemented.")
-
-        # normalize by the number of layers
-        kl_loss /= layer_samples[0].size()[0]
-        kl_weight = min(
-            self.args.sparsity_weight,
-            (update_num - self.args.soft_update)
-            * self.args.sparsity_weight
-            / self.args.anneal_updates,
-        )
-        kl_loss *= kl_weight * sample_size
-        return kl_loss
-
-
-class LatentLayersSparsityLoss(_Loss):
-    def __init__(self, args):
-        super().__init__()
-        self.args = args
-
-    def is_valid(self, update_num):
-        if self.args.target_layers <= 0:
-            return False
-        return update_num > (self.args.soft_update + self.args.anneal_updates)
-
-    def forward(self, layer_samples_list, update_num, sample_size):
-        batch_loss = 0
-        share_loss = 0
-        global_sparsity_loss = 0
-        layer_samples = torch.stack(layer_samples_list, dim=0)
-        if (
-            self.args.target_layers > 0 or self.args.share_weight > 0
-        ) and update_num > (self.args.soft_update + self.args.anneal_updates):
-            # anneal sparsity weight
-            if update_num < (self.args.anneal_updates + self.args.soft_update):
-                weight_anneal = 0
-            elif update_num < (2 * self.args.anneal_updates + self.args.soft_update):
-                weight_anneal = (
-                    (update_num - self.args.soft_update - self.args.anneal_updates)
-                    * self.args.share_weight
-                    / self.args.anneal_updates
-                )
-            else:
-                weight_anneal = 1
-            # compute ratio among languages
-            layer_utilization = torch.sum(layer_samples, dim=0)
-            layer_utilization /= layer_samples.size()[0]
-            if self.args.share_weight > 0:
-                # encourage sharing across languages
-                share_loss = sum(
-                    -1.0 * v * math.log(v) for v in layer_utilization if v > 0
-                )
-                batch_loss += (
-                    weight_anneal * self.args.share_weight * sample_size * share_loss
-                )
-            if self.args.target_layers > 0:
-                # compute the expected number of layers selected
-                expected_layers = sum(layer_utilization)
-                # compute l2 loss wrt target number of layers
-                global_sparsity_loss = (expected_layers - self.args.target_layers) ** 2
-                batch_loss += (
-                    weight_anneal
-                    * self.args.share_weight
-                    * sample_size
-                    * global_sparsity_loss
-                )
-        return batch_loss
diff --git a/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/models/__init__.py b/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/models/__init__.py
deleted file mode 100644
index e69de29bb..000000000
diff --git a/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py b/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py
deleted file mode 100644
index 9e7b655fe..000000000
--- a/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py
+++ /dev/null
@@ -1,76 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq.models import register_model, register_model_architecture
-from fairseq.models.multilingual_transformer import MultilingualTransformerModel
-from fairseq.models.transformer import (
-    TransformerDecoder,
-    TransformerEncoder,
-    base_architecture,
-)
-from fairseq.utils import safe_hasattr
-
-from .latent_transformer import LatentTransformerDecoder, LatentTransformerEncoder
-
-
-@register_model("latent_multilingual_transformer")
-class LatentMultilingualTransformerModel(MultilingualTransformerModel):
-    """A variant of the standard multilingual Transformer model whose encoder
-    and/or decoder supports latent depth, as in "Deep Transformer with Latent
-    Depth" (https://arxiv.org/abs/2009.13102).
-    """
-
-    @staticmethod
-    def add_args(parser):
-        """Add model-specific arguments to the parser."""
-        MultilingualTransformerModel.add_args(parser)
-        parser.add_argument(
-            '--soft-select',
-            action='store_true',
-            help='use soft samples in training and inference',
-        )
-        parser.add_argument(
-            '--sampling-tau',
-            type=float,
-            default=5.,
-            help='sampling temperature',
-        )
-
-    @classmethod
-    def _get_module_class(cls, is_encoder, args, lang_dict, embed_tokens, langs):
-        if is_encoder:
-            if safe_hasattr(args, "encoder_latent_layer") and args.encoder_latent_layer:
-                return LatentTransformerEncoder(
-                    args, lang_dict, embed_tokens, num_logits=len(langs)
-                )
-            else:
-                return TransformerEncoder(args, lang_dict, embed_tokens)
-        else:
-            if safe_hasattr(args, "decoder_latent_layer") and args.decoder_latent_layer:
-                return LatentTransformerDecoder(
-                    args, lang_dict, embed_tokens, num_logits=len(langs)
-                )
-            else:
-                return TransformerDecoder(args, lang_dict, embed_tokens)
-
-
-@register_model_architecture(
-    "latent_multilingual_transformer", "latent_multilingual_transformer"
-)
-def latent_multilingual_architecture(args):
-    args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
-    args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024)
-    args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4)
-    args.encoder_layers = getattr(args, "encoder_layers", 12)
-    args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512)
-    args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024)
-    args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4)
-    args.decoder_layers = getattr(args, "decoder_layers", 24)
-    args.share_encoders = getattr(args, "share_encoders", True)
-    args.share_decoders = getattr(args, "share_decoders", True)
-    args.share_encoder_embeddings = getattr(args, "share_encoder_embeddings", True)
-    args.share_decoder_embeddings = getattr(args, "share_decoder_embeddings", True)
-
-    base_architecture(args)
diff --git a/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/models/latent_transformer.py b/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/models/latent_transformer.py
deleted file mode 100644
index 6a825301a..000000000
--- a/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/models/latent_transformer.py
+++ /dev/null
@@ -1,156 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
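`_get_module_class` above swaps in the latent encoder/decoder, whose key behavioral change is the `residual_connection` override below: each block's output is scaled by a per-layer selection weight before being added back to the residual. A self-contained toy version of that gating with hand-picked weights instead of learned Gumbel-Sigmoid samples (`gated_stack` is illustrative, not part of the module):

```python
import torch
import torch.nn as nn


def gated_stack(x, blocks, layer_weights):
    # residual + weight * block(x): a weight near 1 keeps the layer,
    # a weight near 0 reduces it to the identity (the layer is skipped)
    for block, w in zip(blocks, layer_weights):
        x = x + w * block(x)
    return x


torch.manual_seed(0)
blocks = [nn.Linear(4, 4) for _ in range(2)]
x = torch.randn(1, 4)
second_off = gated_stack(x, blocks, [1.0, 0.0])
print(torch.allclose(second_off, x + blocks[0](x)))  # True: layer 2 gated out
```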
-
-from typing import Any, Dict, Optional
-
-import torch.nn as nn
-from fairseq.models.fairseq_encoder import EncoderOut
-from fairseq.models.transformer import TransformerDecoder, TransformerEncoder
-from fairseq.modules import TransformerDecoderLayer, TransformerEncoderLayer
-from torch import Tensor
-
-from ..modules.latent_layers import LayerSelect
-
-
-class LatentTransformerEncoder(TransformerEncoder):
-    """Latent depth (https://arxiv.org/abs/2009.13102) implemented in
-    TransformerEncoder.
-    """
-
-    def __init__(self, args, dictionary, embed_tokens, num_logits=1):
-        self.num_logits = num_logits
-        self.num_layers = args.encoder_layers
-        super().__init__(args, dictionary, embed_tokens)
-        self.layer_select = LayerSelect(
-            num_layers=self.num_layers,
-            num_logits=self.num_logits,
-            soft_select=getattr(args, "soft_select", False),
-            sampling_tau=getattr(args, "sampling_tau", 5.),
-        )
-        self.lang_idx = None
-        self.layers = nn.ModuleList(
-            [self._build_encoder_layer(args, idx) for idx in range(args.encoder_layers)]
-        )
-
-    def set_lang_idx(self, lang_idx):
-        self.lang_idx = lang_idx
-
-    def _build_encoder_layer(self, args, idx=None):
-        return LatentTransformerEncoderLayer(args, idx, layer_select=self.layer_select)
-
-    def forward(self, src_tokens, src_lengths, return_all_hiddens: bool = False):
-        self.layer_select.sample(self.lang_idx)
-        return super().forward(src_tokens, src_lengths, return_all_hiddens)
-
-
-class LatentTransformerEncoderLayer(TransformerEncoderLayer):
-    """Encoder layer with each (non-residual) block weighted by Bernoulli or
-    Gumbel-Sigmoid samples.
-
-    Args:
-        args (argparse.Namespace): parsed command-line arguments from standard
-            TransformerEncoderLayer.
-        idx (int): layer index (used to retrieve samples).
-        layer_select (LayerSelect, optional): instance of LayerSelect module with logits
-            parameters and sampling method.
-    """
-
-    def __init__(self, args, idx, layer_select=None):
-        super().__init__(args)
-        self.idx = idx
-        self.layer_select = layer_select
-
-    def residual_connection(self, x, residual):
-        return residual + x * self.layer_select(self.idx)
-
-
-class LatentTransformerDecoder(TransformerDecoder):
-    """Latent depth (https://arxiv.org/abs/2009.13102) implemented in
-    TransformerDecoder.
-    """
-
-    def __init__(
-        self, args, dictionary, embed_tokens, no_encoder_attn=False, num_logits=1
-    ):
-        self.num_logits = num_logits
-        self.num_layers = args.decoder_layers
-        super().__init__(
-            args, dictionary, embed_tokens, no_encoder_attn=no_encoder_attn
-        )
-        self.layer_select = LayerSelect(
-            num_layers=self.num_layers,
-            num_logits=self.num_logits,
-            soft_select=getattr(args, "soft_select", False),
-            sampling_tau=getattr(args, "sampling_tau", 5.),
-        )
-        self.lang_idx = None
-        self.layers = nn.ModuleList(
-            [
-                self._build_decoder_layer(args, no_encoder_attn, idx)
-                for idx in range(args.decoder_layers)
-            ]
-        )
-
-    def set_lang_idx(self, lang_idx):
-        self.lang_idx = lang_idx
-
-    def _build_decoder_layer(self, args, no_encoder_attn=False, idx=None):
-        return LatentTransformerDecoderLayer(
-            args, idx, layer_select=self.layer_select, no_encoder_attn=no_encoder_attn
-        )
-
-    def forward(
-        self,
-        prev_output_tokens,
-        encoder_out: Optional[EncoderOut] = None,
-        incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None,
-        features_only: bool = False,
-        alignment_layer: Optional[int] = None,
-        alignment_heads: Optional[int] = None,
-        src_lengths: Optional[Any] = None,
-        return_all_hiddens: bool = False,
-    ):
-        self.layer_select.sample(self.lang_idx)
-        return super().forward(
-            prev_output_tokens=prev_output_tokens,
-            encoder_out=encoder_out,
-            incremental_state=incremental_state,
-            features_only=features_only,
-            alignment_layer=alignment_layer,
-            src_lengths=src_lengths,
-            return_all_hiddens=return_all_hiddens,
-        )
-
-
-class LatentTransformerDecoderLayer(TransformerDecoderLayer):
-    """Decoder layer with each (non-residual) block weighted by Bernoulli or
-    Gumbel-Sigmoid samples.
-
-    Args:
-        args (argparse.Namespace): parsed command-line arguments from standard
-            TransformerDecoderLayer.
-        idx (int): layer index (used to retrieve samples).
-        layer_select (LayerSelect, optional): instance of LayerSelect module with logits
-            parameters and sampling method.
-        no_encoder_attn (bool, optional): whether to attend to encoder outputs
-            (default: False).
-
-    """
-
-    def __init__(
-        self,
-        args,
-        idx,
-        layer_select=None,
-        no_encoder_attn=False,
-        add_bias_kv=False,
-        add_zero_attn=False,
-    ):
-        super().__init__(args, no_encoder_attn, add_bias_kv, add_zero_attn)
-        self.idx = idx
-        self.layer_select = layer_select
-
-    def residual_connection(self, x, residual):
-        return residual + x * self.layer_select(self.idx)
diff --git a/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/modules/__init__.py b/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/modules/__init__.py
deleted file mode 100644
index e69de29bb..000000000
diff --git a/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/modules/latent_layers.py b/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/modules/latent_layers.py
deleted file mode 100644
index 2be05d553..000000000
--- a/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/modules/latent_layers.py
+++ /dev/null
@@ -1,75 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-
-
-class LayerSelect(nn.Module):
-    """Compute samples (from a Gumbel-Sigmoid distribution) which are used as
-    either (soft) weighting or (hard) selection of residual connection.
-    https://arxiv.org/abs/2009.13102
-    """
-    def __init__(self, num_layers, num_logits, soft_select=False, sampling_tau=5.):
-        super(LayerSelect, self).__init__()
-        self.layer_logits = torch.nn.Parameter(
-            torch.Tensor(num_logits, num_layers),
-            requires_grad=True,
-        )
-        self.hard_select = not soft_select
-        self.tau = sampling_tau
-        self.detach_grad = False
-        self.layer_samples = [None] * num_logits
-
-    def sample(self, logit_idx):
-        """To leverage the efficiency of distributed training, samples for all
-        layers are computed at once for each logit_idx. Logits are parameters
-        learnt independently of each other.
-
-        Args:
-            logit_idx: The index of logit parameters used for sampling.
-        """
-        assert logit_idx is not None
-        self.samples = self._gumbel_sigmoid(
-            self.layer_logits[logit_idx, :].detach()
-            if self.detach_grad
-            else self.layer_logits[logit_idx, :],
-            dim=-1,
-            tau=self.tau,
-            hard=self.hard_select,
-        )
-        self.layer_samples[logit_idx] = self.samples
-
-    def forward(self, i):
-        sample = self.samples[i]
-        return sample
-
-    def _gumbel_sigmoid(
-        self, logits, tau=1, hard=False, eps=1e-10, dim=-1, threshold=0.5
-    ):
-        # ~Gumbel(0,1)
-        gumbels1 = (
-            -torch.empty_like(logits, memory_format=torch.legacy_contiguous_format)
-            .exponential_()
-            .log()
-        )
-        gumbels2 = (
-            -torch.empty_like(logits, memory_format=torch.legacy_contiguous_format)
-            .exponential_()
-            .log()
-        )
-        # Difference of two gumbels because we apply a sigmoid
-        gumbels1 = (logits + gumbels1 - gumbels2) / tau
-        y_soft = gumbels1.sigmoid()
-        if hard:
-            # Straight through.
-            y_hard = torch.zeros_like(
-                logits, memory_format=torch.legacy_contiguous_format
-            ).masked_fill(y_soft > threshold, 1.0)
-            ret = y_hard - y_soft.detach() + y_soft
-        else:
-            # Reparametrization trick.
-            ret = y_soft
-        return ret
diff --git a/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/multilingual_translation_latent_depth.py b/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/multilingual_translation_latent_depth.py
deleted file mode 100644
index 8cc2a7174..000000000
--- a/kosmos-g/fairseq/examples/latent_depth/latent_depth_src/multilingual_translation_latent_depth.py
+++ /dev/null
@@ -1,195 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq.tasks import register_task
-from fairseq.tasks.multilingual_translation import MultilingualTranslationTask
-from fairseq.utils import safe_hasattr
-
-from .loss.latent_depth import LatentLayersKLLoss, LatentLayersSparsityLoss
-
-
-@register_task("multilingual_translation_latent_depth")
-class MultilingualTranslationTaskLatentDepth(MultilingualTranslationTask):
-    """A task for multilingual translation with latent depth.
-
-    See `"Deep Transformer with Latent Depth"
-    (Li et al., 2020) <https://arxiv.org/abs/2009.13102>`_.
- """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - # fmt: off - MultilingualTranslationTask.add_args(parser) - parser.add_argument('--encoder-latent-layer', action='store_true', help='latent layer selection in encoder') - parser.add_argument('--decoder-latent-layer', action='store_true', help='latent layer selection in decoder') - parser.add_argument('--target-layers', default=-1, type=int, - help='number of effective layers to learn; -1 means no constraint') - parser.add_argument('--sparsity-weight', default=0.0, type=float, - help='weight for sparsity loss') - parser.add_argument('--share-weight', default=0.0, type=float, - help='weight for sharing loss') - parser.add_argument('--soft-update', default=1, type=int, - help='number of updates with soft sampling') - parser.add_argument('--anneal-updates', default=1, type=int, - help='number of updates to anneal the KL loss weight') - parser.add_argument('--prior', default="uniform", type=str, - help='prior used for computing KL loss') - # fmt: on - - def __init__(self, args, dicts, training): - super().__init__(args, dicts, training) - self.src_langs, self.tgt_langs = zip( - *[(lang.split("-")[0], lang.split("-")[1]) for lang in args.lang_pairs] - ) - if self.training and self.encoder_latent_layer: - assert self.args.share_encoders - if self.training and self.decoder_latent_layer: - assert self.args.share_decoders - if training or self.encoder_latent_layer or self.decoder_latent_layer: - self.lang_pairs = args.lang_pairs - else: - self.lang_pairs = ["{}-{}".format(args.source_lang, args.target_lang)] - self.eval_lang_pairs = self.lang_pairs - self.model_lang_pairs = self.lang_pairs - if self.training and (self.encoder_latent_layer or self.decoder_latent_layer): - self.kl_loss = LatentLayersKLLoss(self.args) - self.sparsity_loss = LatentLayersSparsityLoss(self.args) - - def _per_lang_pair_train_loss( - self, lang_pair, model, update_num, criterion, sample, optimizer, ignore_grad - ): - src, tgt = lang_pair.split("-") - if self.encoder_latent_layer: - src_lang_idx = self.src_lang_idx_dict[src] - model.models[lang_pair].encoder.set_lang_idx(src_lang_idx) - model.models[lang_pair].encoder.layer_select.hard_select = ( - update_num > self.args.soft_update - ) - if self.decoder_latent_layer: - tgt_lang_idx = self.tgt_lang_idx_dict[tgt] - model.models[lang_pair].decoder.set_lang_idx(tgt_lang_idx) - model.models[lang_pair].decoder.layer_select.hard_select = ( - update_num > self.args.soft_update - ) - - loss, sample_size, logging_output = criterion( - model.models[lang_pair], sample[lang_pair] - ) - if self.encoder_latent_layer: - none_samples = sum( - 1 if x is None else 0 - for x in model.models[lang_pair].encoder.layer_select.layer_samples - ) - if none_samples == 0 or self.args.prior != "agged_posterior": - loss += self.kl_loss( - model.models[lang_pair].encoder.layer_select.layer_samples, - src_lang_idx, - update_num, - sample_size, - ) - if self.decoder_latent_layer: - none_samples = sum( - 1 if x is None else 0 - for x in model.models[lang_pair].decoder.layer_select.layer_samples - ) - if none_samples == 0 or self.args.prior != "agged_posterior": - loss += self.kl_loss( - model.models[lang_pair].decoder.layer_select.layer_samples, - tgt_lang_idx, - update_num, - sample_size, - ) - if ignore_grad: - loss *= 0 - - if hasattr(self, "sparsity_loss") and self.sparsity_loss.is_valid(update_num): - # need to retain the graph if sparsity loss needs to be added - loss.backward(retain_graph=True) - else: 
-            optimizer.backward(loss)
-
-        return loss, sample_size, logging_output
-
-    def train_step(
-        self, sample, model, criterion, optimizer, update_num, ignore_grad=False
-    ):
-        agg_loss, agg_sample_size, agg_logging_output = super().train_step(
-            sample, model, criterion, optimizer, update_num, ignore_grad
-        )
-        # compute auxiliary loss from layer sparsity, based on all samples from all languages
-        if hasattr(self, "sparsity_loss") and self.sparsity_loss.is_valid(update_num):
-            sparsity_loss = 0
-            if self.encoder_latent_layer:
-                sparsity_loss += self.sparsity_loss(
-                    next(
-                        iter(model.models.values())
-                    ).encoder.layer_select.layer_samples,
-                    update_num,
-                    agg_sample_size,
-                )
-            if self.decoder_latent_layer:
-                sparsity_loss += self.sparsity_loss(
-                    next(
-                        iter(model.models.values())
-                    ).decoder.layer_select.layer_samples,
-                    update_num,
-                    agg_sample_size,
-                )
-            if sparsity_loss > 0:
-                optimizer.backward(sparsity_loss)
-        return agg_loss, agg_sample_size, agg_logging_output
-
-    def _per_lang_pair_valid_loss(self, lang_pair, model, criterion, sample):
-        src, tgt = lang_pair.split("-")
-        if self.encoder_latent_layer:
-            src_lang_idx = self.src_lang_idx_dict[src]
-            model.models[lang_pair].encoder.set_lang_idx(src_lang_idx)
-        if self.decoder_latent_layer:
-            tgt_lang_idx = self.tgt_lang_idx_dict[tgt]
-            model.models[lang_pair].decoder.set_lang_idx(tgt_lang_idx)
-        loss, sample_size, logging_output = criterion(
-            model.models[lang_pair], sample[lang_pair]
-        )
-        return loss, sample_size, logging_output
-
-    def inference_step(
-        self, generator, models, sample, prefix_tokens=None, constraints=None
-    ):
-        if self.encoder_latent_layer or self.decoder_latent_layer:
-            for model in models:
-                if self.encoder_latent_layer:
-                    assert model.encoder.layer_select is not None
-                    src_lang_idx = self.src_lang_idx_dict[self.args.source_lang]
-                    model.encoder.set_lang_idx(src_lang_idx)
-                if self.decoder_latent_layer:
-                    assert model.decoder.layer_select is not None
-                    tgt_lang_idx = self.tgt_lang_idx_dict[self.args.target_lang]
-                    model.decoder.set_lang_idx(tgt_lang_idx)
-        return super().inference_step(
-            generator, models, sample, prefix_tokens, constraints
-        )
-
-    @property
-    def encoder_latent_layer(self):
-        return (
-            safe_hasattr(self.args, "encoder_latent_layer")
-            and self.args.encoder_latent_layer
-        )
-
-    @property
-    def decoder_latent_layer(self):
-        return (
-            safe_hasattr(self.args, "decoder_latent_layer")
-            and self.args.decoder_latent_layer
-        )
-
-    @property
-    def src_lang_idx_dict(self):
-        return {lang: lang_idx for lang_idx, lang in enumerate(self.src_langs)}
-
-    @property
-    def tgt_lang_idx_dict(self):
-        return {lang: lang_idx for lang_idx, lang in enumerate(self.tgt_langs)}
diff --git a/kosmos-g/fairseq/examples/layerdrop/README.md b/kosmos-g/fairseq/examples/layerdrop/README.md
deleted file mode 100644
index 4d48ee961..000000000
--- a/kosmos-g/fairseq/examples/layerdrop/README.md
+++ /dev/null
@@ -1,154 +0,0 @@
-# Reducing Transformer Depth on Demand with Structured Dropout (Fan et al., 2019)
-This page contains information on how to train models with LayerDrop, based on this [paper](https://arxiv.org/abs/1909.11556).
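LayerDrop regularizes training by skipping whole residual layers at random, which also makes the trained network robust to pruning layers at inference time. A minimal sketch of the training-time behavior; fairseq implements this inside its encoder/decoder forward passes, and the `LayerDropStack` wrapper here is only an illustration:

```python
import torch
import torch.nn as nn


class LayerDropStack(nn.Module):
    def __init__(self, layers, p_drop=0.2):
        super().__init__()
        self.layers = nn.ModuleList(layers)
        self.p_drop = p_drop

    def forward(self, x):
        for layer in self.layers:
            # during training, skip the entire layer with probability p_drop;
            # at eval time every (non-pruned) layer runs deterministically
            if self.training and torch.rand(()).item() < self.p_drop:
                continue
            x = layer(x)
        return x


stack = LayerDropStack([nn.Linear(8, 8) for _ in range(6)], p_drop=0.2)
stack.train()
y = stack(torch.randn(2, 8))  # a random subset of the 6 layers is applied
```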
- -## Citation: -If you found this technique useful, please cite our paper: -```bibtex -@article{fan2019reducing, - title={Reducing Transformer Depth on Demand with Structured Dropout}, - author={Fan, Angela and Grave, Edouard and Joulin, Armand}, - journal={arXiv preprint arXiv:1909.11556}, - year={2019} -} -``` - -## Pre-trained models - -Model | Description | Download ----|---|--- -`layerdrop_wmt_en_de_12_6` | Transformer + LayerDrop 0.2 trained on WMT16 en-de with 12 encoder and 6 decoder layers | [layerdrop_wmt_en_de_12_6.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/layerdrop_wmt_en_de_12_6.tar.gz) -`roberta_layerdrop.base` | RoBERTa Base + LayerDrop 0.2 | [roberta_layerdrop.base.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta_layerdrop.base.qnli.tar.gz) -`roberta_layerdrop.large` | RoBERTa Large + LayerDrop 0.2 | [roberta_layerdrop.large.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta_layerdrop.large.tar.gz) -`roberta_layerdrop.large.mnli` | `roberta_layerdrop.large` finetuned on [MNLI](http://www.nyu.edu/projects/bowman/multinli) | [roberta_layerdrop.large.mnli.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta_layerdrop.large.mnli.tar.gz) -`roberta_layerdrop.large.qnli` | `roberta_layerdrop.large` finetuned on [QNLI](https://arxiv.org/abs/1804.07461) | [roberta_layerdrop.large.mnli.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta_layerdrop.large.qnli.tar.gz) - - -Evaluate performance of these pre-trained models: -```bash -# Example for Machine Translation -fairseq-generate /path/to/bped/wmt/data --path nmt_checkpoint.pt \ - --beam 8 --lenpen 0.4 \ - --batch-size 64 \ - --remove-bpe \ - --gen-subset test > wmt16_gen.txt -bash scripts/compound_split_bleu.sh wmt16_gen.txt -# prints BLEU4 = 30.17 -``` - -```python -# Example for RoBERTa + LayerDrop finetuned on MNLI: -from fairseq.models.roberta import RobertaModel - -roberta_layerdrop = RobertaModel.from_pretrained( - '/path/to/MNLI/model', - checkpoint_file='mnli_checkpoint.pt', - data_name_or_path='/path/to/MNLI/data/MNLI-bin' -) -label_map = {0: 'contradiction', 2: 'neutral', 1: 'entailment'} -ncorrect, nsamples = 0, 0 -roberta_layerdrop.cuda() -roberta_layerdrop.eval() -with open('/path/to/MNLI/data/dev_matched.tsv') as fin: - fin.readline() - for index, line in enumerate(fin): - tokens = line.strip().split('\t') - sent1, sent2, target = tokens[8], tokens[9], tokens[-1] - tokens = roberta_layerdrop.encode(sent1, sent2) - prediction = roberta_layerdrop.predict('sentence_classification_head', tokens).argmax().item() - prediction_label = label_map[prediction] - ncorrect += int(prediction_label == target) - nsamples += 1 -print('| Accuracy: ', float(ncorrect)/float(nsamples)) -# prints | Accuracy: 0.9026999490575649 - - -# Example for RoBERTa + LayerDrop finetuned on QNLI: -roberta = RobertaModel.from_pretrained( - '/path/to/QNLI/model', - checkpoint_file='qnli_checkpoint.pt', - data_name_or_path='/path/to/QNLI/data/QNLI-bin' -) - -label_fn = lambda label: roberta.task.label_dictionary.string( - [label + roberta.task.target_dictionary.nspecial] -) -ncorrect, nsamples = 0, 0 -roberta.cuda() -roberta.eval() -with open('/path/to/QNLI/data/dev.tsv') as fin: - fin.readline() - for index, line in enumerate(fin): - tokens = line.strip().split('\t') - sent1, sent2, target = tokens[1], tokens[2], tokens[3] - tokens = roberta.encode(sent1, sent2) - prediction = roberta.predict('sentence_classification_head', tokens).argmax().item() - prediction_label = label_fn(prediction) - ncorrect 
+= int(prediction_label == target) - nsamples += 1 -print('| Accuracy: ', float(ncorrect)/float(nsamples)) -# prints | Accuracy: 0.9480139117700896 -``` - - -## Example usage - -To train a model with LayerDrop, add the following flags. We recommend 0.2, a value that worked well in our experiments. For Language Models that are decoder-only, you need only the decoder flag. For RoBERTa, an encoder, you need only the encoder flag. The encoder and decoder LayerDrop values can be set differently. -``` ---encoder-layerdrop 0.2 --decoder-layerdrop 0.2 -``` - -To prune a model that has been trained with LayerDrop, add the following flags followed by a comma separated list of which layers you would like to keep. -``` ---encoder-layers-to-keep 0,2,4,6,8,10,12,14 --decoder-layers-to-keep 0,2,4,6,8,10,12,14 -``` -Setting these flags should print a message such as: -``` -| Pruning model to specified layer configuration -``` -You should also see a smaller number of parameters in the model, for example the 16-Layer Transformer Language Model prints: -``` -num. model params: 246933504 -``` -while a model pruned to 8 Layers prints: -``` -num. model params: 146163712 -``` - -If you would like to pick up training with a model that has been pruned, simply adding these flags is sufficient. If you would like to use a script that only does evaluation (no training), you may need to pass an override command. A specific example would be for language modeling: -```bash -fairseq-eval-lm /path/to/wikitext-103 \ - --path /path/to/model/checkpoint.pt \ - --model-overrides "{'decoder_layers_to_keep':'0,2,4,6,8,10,12,14'}" -``` -This model override command overrides the training parameters and updates the model arguments so that the pruned model is run instead of the full model. - -## Reproduce Paper Results - -Looking to reproduce the results in the paper? - -1. For Translation on WMT16 en-de, we followed this setting [here](https://github.com/pytorch/fairseq/blob/main/examples/scaling_nmt/README.md) -2. To train RoBERTa, we followed this setting [here](https://github.com/pytorch/fairseq/tree/main/examples/roberta) -3. To train Language Models on Wikitext-103, we followed this setting [here](https://github.com/pytorch/fairseq/tree/main/examples/language_model) - - -## Tips - -1. If you would like to train large models with better performance, LayerDrop should be set to a smaller value such as 0.1 or 0.2. Too much LayerDrop will mean the model has too much regularization, so may not reach the best performance. Since LayerDrop adds regularization, you may achieve the best performance by slightly reducing the amount of standard dropout (for example, reduce by 0.1). - -2. If you would like to train large models to be pruned and made smaller, LayerDrop should be set to a larger value such as 0.5 if you want to prune very aggressively (such as removing half the network or more). If you would like to prune fewer layers away, LayerDrop can be set to a smaller value such as 0.2. Our experiments were conducted with low values of LayerDrop (such as 0.1 and 0.2), for reference. - -3. When pruning layers at inference time, it is best to spread out the layers remaining so they are evenly spaced throughout the network. For example, if you want to remove 50% of the network, keeping every other layer is good. - - -## FAQ - -1. How did the sharing layers experiment work? In an appendix (https://openreview.net/pdf?id=SylO2yStDr) we added an experiment on Wikitext-103 language modeling that combined LayerDrop with Weight Sharing. 
We shared chunks of 2 layers such that every other layer had shared weights. For example, if our network has layers 1 through 6, then layer 1 and 2 are shared, layer 3 and 4 are shared, and layer 5 and 6 are shared. - -2. LayerDrop hasn't been helping in my setting? During training time, LayerDrop can help regularize your network. This is most important if your network is already overfitting - if your network is underfitting, it is possible LayerDrop is adding too much regularization. We recommend using smaller values (such as 0.1 or 0.2) and also decreasing the quantity of standard dropout (for example, reduce by 0.1). - -3. Can you train a model without LayerDrop and finetune with LayerDrop (e.g. for BERT)? In our experiments, we did not see great performance. Models such as RoBERTa have trained for a long time in the pre-training setting, so only finetuning with LayerDrop for a few epochs on a downstream task such as MNLI does not achieve the robustness required for successful pruning. - - -## Having an issue or have a question? - -Please open an issue in this repository with the details of your question. Thanks! diff --git a/kosmos-g/fairseq/examples/linformer/README.md b/kosmos-g/fairseq/examples/linformer/README.md deleted file mode 100644 index f8b36bc69..000000000 --- a/kosmos-g/fairseq/examples/linformer/README.md +++ /dev/null @@ -1,22 +0,0 @@ -# Linformer: Self-Attention with Linear Complexity (Wang et al., 2020) - -This example contains code to train Linformer models as described in our paper -[Linformer: Self-Attention with Linear Complexity](https://arxiv.org/abs/2006.04768). - -## Training a new Linformer RoBERTa model - -You can mostly follow the [RoBERTa pretraining README](/examples/roberta/README.pretraining.md), -updating your training command with `--user-dir examples/linformer/linformer_src --arch linformer_roberta_base`. - -## Citation - -If you use our work, please cite: - -```bibtex -@article{wang2020linformer, - title={Linformer: Self-Attention with Linear Complexity}, - author={Wang, Sinong and Li, Belinda and Khabsa, Madian and Fang, Han and Ma, Hao}, - journal={arXiv preprint arXiv:2006.04768}, - year={2020} -} -``` diff --git a/kosmos-g/fairseq/examples/linformer/linformer_src/__init__.py b/kosmos-g/fairseq/examples/linformer/linformer_src/__init__.py deleted file mode 100644 index 1c52f135e..000000000 --- a/kosmos-g/fairseq/examples/linformer/linformer_src/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .models import linformer_roberta # noqa diff --git a/kosmos-g/fairseq/examples/linformer/linformer_src/models/__init__.py b/kosmos-g/fairseq/examples/linformer/linformer_src/models/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/kosmos-g/fairseq/examples/linformer/linformer_src/models/linformer_roberta.py b/kosmos-g/fairseq/examples/linformer/linformer_src/models/linformer_roberta.py deleted file mode 100644 index b7bdbb110..000000000 --- a/kosmos-g/fairseq/examples/linformer/linformer_src/models/linformer_roberta.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-""" -Linformer: Self-Attention with Linear Complexity -""" - -import logging - -import torch -from fairseq import utils -from fairseq.models import register_model, register_model_architecture -from fairseq.models.roberta import ( - init_bert_params, - roberta_base_architecture, - roberta_large_architecture, - RobertaEncoder, - RobertaModel, -) -from fairseq.utils import safe_hasattr - -from ..modules.linformer_sentence_encoder import LinformerTransformerEncoder - - -logger = logging.getLogger(__name__) - - -@register_model("linformer_roberta") -class LinformerModel(RobertaModel): - @staticmethod - def add_args(parser): - RobertaModel.add_args(parser) - - # add args for Linformer - parser.add_argument( - "--compressed", type=int, help="compressed ratio of sequence length" - ) - parser.add_argument( - "--shared-kv-compressed", - type=int, - help="share compressed matrix between k and v, in each layer", - ) - parser.add_argument( - "--shared-layer-kv-compressed", - type=int, - help="share compressed matrix between k and v and across all layers", - ) - parser.add_argument( - "--freeze-compress", - type=int, - help="freeze the parameters in compressed layer", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present - base_architecture(args) - - if not safe_hasattr(args, "max_positions"): - args.max_positions = args.tokens_per_sample - - encoder = LinformerEncoder(args, task.source_dictionary) - return cls(args, encoder) - - -class LinformerEncoder(RobertaEncoder): - """Linformer encoder.""" - - def __init__(self, args, dictionary): - super().__init__(args, dictionary) - self.register_buffer("version", torch.tensor(2)) - - def build_encoder(self, args, dictionary, embed_tokens): - encoder = LinformerTransformerEncoder(args, dictionary, embed_tokens) - encoder.apply(init_bert_params) - return encoder - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - prefix = name + "." 
if name != "" else "" - - # some old checkpoints had weight sharing implemented incorrectly - # (note: this was correct in the original paper code) - if utils.item(state_dict.get(f"{prefix}version", torch.tensor(1))) < 2: - state_dict[f"{prefix}version"] = torch.tensor(1) - # check if input embeddings and output embeddings were tied - if not torch.allclose( - state_dict[f"{prefix}sentence_encoder.embed_tokens.weight"], - state_dict[f"{prefix}lm_head.weight"], - ): - # they weren't tied, re-init the LM head without weight sharing - self.lm_head = self.build_lm_head( - embed_dim=self.args.encoder_embed_dim, - output_dim=len(self.dictionary), - activation_fn=self.args.activation_fn, - weight=None, # don't share weights - ) - - -@register_model_architecture("linformer_roberta", "linformer_roberta") -def base_architecture(args): - args.compressed = getattr(args, "compressed", 4) - args.shared_kv_compressed = getattr(args, "shared_kv_compressed", 0) - args.shared_layer_kv_compressed = getattr(args, "shared_layer_kv_compressed", 0) - args.freeze_compress = getattr(args, "freeze_compress", 0) - roberta_base_architecture(args) - - -@register_model_architecture("linformer_roberta", "linformer_roberta_base") -def linformer_roberta_base_architecture(args): - base_architecture(args) - - -@register_model_architecture("linformer_roberta", "linformer_roberta_large") -def linformer_roberta_large_architecture(args): - roberta_large_architecture(args) - base_architecture(args) diff --git a/kosmos-g/fairseq/examples/linformer/linformer_src/modules/__init__.py b/kosmos-g/fairseq/examples/linformer/linformer_src/modules/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/kosmos-g/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py b/kosmos-g/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py deleted file mode 100644 index 44f7989bd..000000000 --- a/kosmos-g/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch.nn as nn -from fairseq.models.transformer import TransformerEncoder - -from .linformer_sentence_encoder_layer import LinformerTransformerEncoderLayer - - -class LinformerTransformerEncoder(TransformerEncoder): - """ - Implementation for a Bi-directional Linformer based Sentence Encoder used - in BERT/XLM style pre-trained models. - - This first computes the token embedding using the token embedding matrix, - position embeddings (if specified) and segment embeddings - (if specified). After applying the specified number of - LinformerEncoderLayers, it outputs all the internal states of the - encoder as well as the final representation associated with the first - token (usually CLS token). - - Input: - - tokens: B x T matrix representing sentences - - segment_labels: B x T matrix representing segment label for tokens - - Output: - - a tuple of the following: - - a list of internal model states used to compute the - predictions where each tensor has shape T x B x C - - sentence representation associated with first input token - in format B x C. 
- """ - - def __init__(self, args, dictionary, embed_tokens): - self.compress_layer = None - super().__init__(args, dictionary, embed_tokens) - - def build_encoder_layer(self, args): - if self.args.shared_layer_kv_compressed == 1 and self.compress_layer is None: - compress_layer = nn.Linear( - self.args.max_positions, - self.args.max_positions // self.args.compressed, - ) - # intialize parameters for compressed layer - nn.init.xavier_uniform_(compress_layer.weight, gain=1 / math.sqrt(2)) - if self.args.freeze_compress == 1: - compress_layer.weight.requires_grad = False - self.compress_layer = compress_layer - - return LinformerTransformerEncoderLayer(args, self.compress_layer) diff --git a/kosmos-g/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder_layer.py b/kosmos-g/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder_layer.py deleted file mode 100644 index 7e2caa034..000000000 --- a/kosmos-g/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder_layer.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from fairseq import utils -from fairseq.modules import TransformerEncoderLayer - -from .multihead_linear_attention import MultiheadLinearAttention - - -class LinformerTransformerEncoderLayer(TransformerEncoderLayer): - """ - Implements a Linformer Encoder Layer used in BERT/XLM style pre-trained - models. - """ - - def __init__(self, args, shared_compress_layer): - # wrap in a list so it's not automatically registered by PyTorch - self.shared_compress_layer = [shared_compress_layer] - - super().__init__(args) - - self.register_buffer("version", torch.tensor(2)) - - def build_self_attention(self, embed_dim, args): - return MultiheadLinearAttention( - embed_dim, - args.encoder_attention_heads, - dropout=args.dropout, - self_attention=True, - q_noise=args.quant_noise_pq, - qn_block_size=args.quant_noise_pq_block_size, - compressed=args.compressed, - max_seq_len=args.max_positions, - shared_kv_compressed=args.shared_kv_compressed, - shared_compress_layer=self.shared_compress_layer[0], - freeze_compress=args.freeze_compress, - ) - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - prefix = name + "." 
if name != "" else "" - - # some old checkpoints had weight sharing implemented incorrectly - # (note: this was correct in the original paper code) - if utils.item(state_dict.get(f"{prefix}version", torch.tensor(1))) < 2: - state_dict[f"{prefix}version"] = torch.tensor(1) - # check compression layer sharing - if f"{prefix}shared_compress_layer.weight" in state_dict: - # reinitialize block without sharing compression layer to match - # old behavior - self.shared_compress_layer = [ - torch.nn.Linear( - self.shared_compress_layer[0].weight.size(1), - self.shared_compress_layer[0].weight.size(0), - ) - ] - self.self_attn = self.build_self_attention(self.embed_dim, self.args) - # delete shared_compress_layer, since it's already copied to - # self_attn.compress_k.weight - del state_dict[f"{prefix}shared_compress_layer.weight"] - if f"{prefix}shared_compress_layer.bias" in state_dict: - del state_dict[f"{prefix}shared_compress_layer.bias"] diff --git a/kosmos-g/fairseq/examples/linformer/linformer_src/modules/multihead_linear_attention.py b/kosmos-g/fairseq/examples/linformer/linformer_src/modules/multihead_linear_attention.py deleted file mode 100644 index 6be100727..000000000 --- a/kosmos-g/fairseq/examples/linformer/linformer_src/modules/multihead_linear_attention.py +++ /dev/null @@ -1,481 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Dict, Optional, Tuple - -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.quant_noise import quant_noise -from torch import Tensor, nn -from torch.nn import Parameter - - -@with_incremental_state -class MultiheadLinearAttention(nn.Module): - """Multi-headed linformer attention. - - Projects the key and values down to the compressed dimension, before computing self-attention. - - See "Linformer: Self-Attention with Linear Complexity" for more details. 
- """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - add_bias_kv=False, - add_zero_attn=False, - self_attention=False, - encoder_decoder_attention=False, - q_noise=0.0, - qn_block_size=8, - compressed=1, - max_seq_len=256, - shared_kv_compressed=0, - shared_compress_layer=None, - freeze_compress=0, - ): - super().__init__() - self.embed_dim = embed_dim - self.kdim = kdim if kdim is not None else embed_dim - self.vdim = vdim if vdim is not None else embed_dim - self.qkv_same_dim = self.kdim == embed_dim and self.vdim == embed_dim - - self.num_heads = num_heads - self.dropout = dropout - self.head_dim = embed_dim // num_heads - assert ( - self.head_dim * num_heads == self.embed_dim - ), "embed_dim must be divisible by num_heads" - self.scaling = self.head_dim ** -0.5 - - self.self_attention = self_attention - self.encoder_decoder_attention = encoder_decoder_attention - - assert not self.self_attention or self.qkv_same_dim, ( - "Self-attention requires query, key and " "value to be of the same size" - ) - - self.k_proj = quant_noise( - nn.Linear(self.kdim, embed_dim, bias=bias), q_noise, qn_block_size - ) - self.v_proj = quant_noise( - nn.Linear(self.vdim, embed_dim, bias=bias), q_noise, qn_block_size - ) - self.q_proj = quant_noise( - nn.Linear(embed_dim, embed_dim, bias=bias), q_noise, qn_block_size - ) - - # used for compress sequence to subsequence - if shared_compress_layer is None: - self.compress_seq_len = max_seq_len // compressed - self.compress_k = nn.Linear(max_seq_len, self.compress_seq_len, bias=False) - if shared_kv_compressed == 0: - self.compress_v = nn.Linear( - max_seq_len, self.compress_seq_len, bias=False - ) - self.layerwise_sharing = False - else: - self.compress_k = shared_compress_layer - if shared_kv_compressed == 0: - self.compress_v = shared_compress_layer - self.layerwise_sharing = True - self.shared_kv_compressed = shared_kv_compressed - - self.out_proj = quant_noise( - nn.Linear(embed_dim, embed_dim, bias=bias), q_noise, qn_block_size - ) - - if add_bias_kv: - self.bias_k = Parameter(torch.Tensor(1, 1, embed_dim)) - self.bias_v = Parameter(torch.Tensor(1, 1, embed_dim)) - else: - self.bias_k = self.bias_v = None - - self.add_zero_attn = add_zero_attn - - self.reset_parameters() - - if freeze_compress == 1: - self.compress_k.weight.requires_grad = False - if shared_kv_compressed == 0: - self.compress_v.weight.requires_grad = False - - self.onnx_trace = False - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def reset_parameters(self): - if self.qkv_same_dim: - # Empirically observed the convergence to be much better with - # the scaled initialization - nn.init.xavier_uniform_(self.k_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.v_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.q_proj.weight, gain=1 / math.sqrt(2)) - if ( - not self.layerwise_sharing - ): # otherwise, we already initialize the parameters - nn.init.xavier_uniform_(self.compress_k.weight, gain=1 / math.sqrt(2)) - if self.shared_kv_compressed == 0: - nn.init.xavier_uniform_( - self.compress_v.weight, gain=1 / math.sqrt(2) - ) - else: - nn.init.xavier_uniform_(self.k_proj.weight) - nn.init.xavier_uniform_(self.v_proj.weight) - nn.init.xavier_uniform_(self.q_proj.weight) - if ( - not self.layerwise_sharing - ): # otherwise, we already initialize the parameters - nn.init.xavier_uniform_(self.compress_k.weight) - if self.shared_kv_compressed == 0: - 
nn.init.xavier_uniform_(self.compress_v.weight) - - nn.init.xavier_uniform_(self.out_proj.weight) - if self.out_proj.bias is not None: - nn.init.constant_(self.out_proj.bias, 0.0) - if self.bias_k is not None: - nn.init.xavier_normal_(self.bias_k) - if self.bias_v is not None: - nn.init.xavier_normal_(self.bias_v) - - def forward( - self, - query, - key: Optional[Tensor], - value: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - need_weights: bool = True, - static_kv: bool = False, - attn_mask: Optional[Tensor] = None, - before_softmax: bool = False, - need_head_weights: bool = False, - ) -> Tuple[Tensor, Optional[Tensor]]: - """Input shape: Time x Batch x Channel - - Args: - key_padding_mask (ByteTensor, optional): mask to exclude - keys that are pads, of shape `(batch, src_len)`, where - padding elements are indicated by 1s. - need_weights (bool, optional): return the attention weights, - averaged over heads (default: False). - attn_mask (ByteTensor, optional): typically used to - implement causal attention, where the mask prevents the - attention from looking forward in time (default: None). - before_softmax (bool, optional): return the raw attention - weights and values before the attention softmax. - need_head_weights (bool, optional): return the attention - weights for each head. Implies *need_weights*. Default: - return the average attention weights over all heads. - """ - if need_head_weights: - need_weights = True - - tgt_len, bsz, embed_dim = query.size() - assert embed_dim == self.embed_dim - assert list(query.size()) == [tgt_len, bsz, embed_dim] - - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if saved_state is not None and "prev_key" in saved_state: - # previous time steps are cached - no need to recompute - # key and value if they are static - if static_kv: - assert self.encoder_decoder_attention and not self.self_attention - key = value = None - else: - saved_state = None - - if self.self_attention: - q = self.q_proj(query) - - k_input = query.permute(1, 2, 0).contiguous() # B * C * T - k_input = ( - F.linear(k_input, self.compress_k.weight[:, 0:tgt_len]) - .permute(2, 0, 1) - .contiguous() - ) - k = self.k_proj(k_input) - - v_input = query.permute(1, 2, 0).contiguous() # B * C * T - if self.shared_kv_compressed == 0: - v_input = ( - F.linear(v_input, self.compress_v.weight[:, 0:tgt_len]) - .permute(2, 0, 1) - .contiguous() - ) - if self.shared_kv_compressed == 1: # use shared kv compressed linear layer - v_input = ( - F.linear(v_input, self.compress_k.weight[:, 0:tgt_len]) - .permute(2, 0, 1) - .contiguous() - ) - v = self.v_proj(v_input) - elif self.encoder_decoder_attention: - # encoder-decoder attention - q = self.q_proj(query) - if key is None: - assert value is None - k = v = None - else: - k = self.k_proj(key) - v = self.v_proj(key) - - else: - assert key is not None and value is not None - q = self.q_proj(query) - k = self.k_proj(key) - v = self.v_proj(value) - q *= self.scaling - - if self.bias_k is not None: - assert self.bias_v is not None - k = torch.cat([k, self.bias_k.repeat(1, bsz, 1)]) - v = torch.cat([v, self.bias_v.repeat(1, bsz, 1)]) - if attn_mask is not None: - attn_mask = torch.cat( - [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1 - ) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [ - key_padding_mask, - key_padding_mask.new_zeros(key_padding_mask.size(0), 1), - ], - dim=1, - 
) - - q = ( - q.contiguous() - .view(tgt_len, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - if k is not None: - k = ( - k.contiguous() - .view(-1, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - if v is not None: - v = ( - v.contiguous() - .view(-1, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - if saved_state is not None: - # saved states are stored with shape (bsz, num_heads, seq_len, head_dim) - if "prev_key" in saved_state: - _prev_key = saved_state["prev_key"] - assert _prev_key is not None - prev_key = _prev_key.view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - k = prev_key - else: - assert k is not None - k = torch.cat([prev_key, k], dim=1) - if "prev_value" in saved_state: - _prev_value = saved_state["prev_value"] - assert _prev_value is not None - prev_value = _prev_value.view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - v = prev_value - else: - assert v is not None - v = torch.cat([prev_value, v], dim=1) - prev_key_padding_mask: Optional[Tensor] = None - if "prev_key_padding_mask" in saved_state: - prev_key_padding_mask = saved_state["prev_key_padding_mask"] - assert k is not None and v is not None - key_padding_mask = MultiheadLinearAttention._append_prev_key_padding_mask( - key_padding_mask=key_padding_mask, - prev_key_padding_mask=prev_key_padding_mask, - batch_size=bsz, - src_len=k.size(1), - static_kv=static_kv, - ) - - saved_state["prev_key"] = k.view(bsz, self.num_heads, -1, self.head_dim) - saved_state["prev_value"] = v.view(bsz, self.num_heads, -1, self.head_dim) - saved_state["prev_key_padding_mask"] = key_padding_mask - # In this branch incremental_state is never None - assert incremental_state is not None - incremental_state = self._set_input_buffer(incremental_state, saved_state) - assert k is not None - src_len = k.size(1) - - if self.add_zero_attn: - assert v is not None - src_len += 1 - k = torch.cat([k, k.new_zeros((k.size(0), 1) + k.size()[2:])], dim=1) - v = torch.cat([v, v.new_zeros((v.size(0), 1) + v.size()[2:])], dim=1) - if attn_mask is not None: - attn_mask = torch.cat( - [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1 - ) - - attn_weights = torch.bmm(q, k.transpose(1, 2)) - attn_weights = MultiheadLinearAttention.apply_sparse_mask( - attn_weights, tgt_len, src_len, bsz - ) - - assert list(attn_weights.size()) == [bsz * self.num_heads, tgt_len, src_len] - - if attn_mask is not None: - attn_mask = attn_mask.unsqueeze(0) - if self.onnx_trace: - attn_mask = attn_mask.repeat(attn_weights.size(0), 1, 1) - attn_weights += attn_mask - - if before_softmax: - return attn_weights, v - - attn_weights_float = utils.softmax( - attn_weights, dim=-1, onnx_trace=self.onnx_trace - ) - attn_weights = attn_weights_float.type_as(attn_weights) - attn_probs = F.dropout( - attn_weights, - p=self.dropout, - training=self.training, - ) - assert v is not None - attn = torch.bmm(attn_probs, v) - assert list(attn.size()) == [bsz * self.num_heads, tgt_len, self.head_dim] - if self.onnx_trace and attn.size(1) == 1: - # when ONNX tracing a single decoder step (sequence length == 1) - # the transpose is a no-op copy before view, thus unnecessary - attn = attn.contiguous().view(tgt_len, bsz, embed_dim) - else: - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - attn = self.out_proj(attn) - attn_weights: Optional[Tensor] = None - if need_weights: - attn_weights = attn_weights_float.view( - bsz, self.num_heads, tgt_len, src_len - ).transpose(1, 0) - if not need_head_weights: - # 
average attention weights over heads - attn_weights = attn_weights.mean(dim=0) - - return attn, attn_weights - - @staticmethod - def _append_prev_key_padding_mask( - key_padding_mask: Optional[Tensor], - prev_key_padding_mask: Optional[Tensor], - batch_size: int, - src_len: int, - static_kv: bool, - ) -> Optional[Tensor]: - # saved key padding masks have shape (bsz, seq_len) - if prev_key_padding_mask is not None and static_kv: - new_key_padding_mask = prev_key_padding_mask - elif prev_key_padding_mask is not None and key_padding_mask is not None: - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), key_padding_mask.float()], dim=1 - ) - # During incremental decoding, as the padding token enters and - # leaves the frame, there will be a time when prev or current - # is None - elif prev_key_padding_mask is not None: - filler = torch.zeros( - (batch_size, src_len - prev_key_padding_mask.size(1)), - device=prev_key_padding_mask.device, - ) - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), filler.float()], dim=1 - ) - elif key_padding_mask is not None: - filler = torch.zeros( - (batch_size, src_len - key_padding_mask.size(1)), - device=key_padding_mask.device, - ) - new_key_padding_mask = torch.cat( - [filler.float(), key_padding_mask.float()], dim=1 - ) - else: - new_key_padding_mask = prev_key_padding_mask - return new_key_padding_mask - - @torch.jit.export - def reorder_incremental_state( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - new_order: Tensor, - ): - """Reorder buffered internal state (for incremental generation).""" - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - for k in input_buffer.keys(): - input_buffer_k = input_buffer[k] - if input_buffer_k is not None: - if self.encoder_decoder_attention and input_buffer_k.size( - 0 - ) == new_order.size(0): - break - input_buffer[k] = input_buffer_k.index_select(0, new_order) - incremental_state = self._set_input_buffer(incremental_state, input_buffer) - return incremental_state - - def _get_input_buffer( - self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ) -> Dict[str, Optional[Tensor]]: - result = self.get_incremental_state(incremental_state, "attn_state") - if result is not None: - return result - else: - empty_result: Dict[str, Optional[Tensor]] = {} - return empty_result - - def _set_input_buffer( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - buffer: Dict[str, Optional[Tensor]], - ): - return self.set_incremental_state(incremental_state, "attn_state", buffer) - - def apply_sparse_mask(attn_weights, tgt_len: int, src_len: int, bsz: int): - return attn_weights - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + "." 
if name != "" else "" - items_to_add = {} - keys_to_remove = [] - for k in state_dict.keys(): - if k.endswith(prefix + "in_proj_weight"): - # in_proj_weight used to be q + k + v with same dimensions - dim = int(state_dict[k].shape[0] / 3) - items_to_add[prefix + "q_proj.weight"] = state_dict[k][:dim] - items_to_add[prefix + "k_proj.weight"] = state_dict[k][dim : 2 * dim] - items_to_add[prefix + "v_proj.weight"] = state_dict[k][2 * dim :] - - keys_to_remove.append(k) - - k_bias = prefix + "in_proj_bias" - if k_bias in state_dict.keys(): - dim = int(state_dict[k].shape[0] / 3) - items_to_add[prefix + "q_proj.bias"] = state_dict[k_bias][:dim] - items_to_add[prefix + "k_proj.bias"] = state_dict[k_bias][ - dim : 2 * dim - ] - items_to_add[prefix + "v_proj.bias"] = state_dict[k_bias][2 * dim :] - - keys_to_remove.append(prefix + "in_proj_bias") - - for k in keys_to_remove: - del state_dict[k] - - for key, value in items_to_add.items(): - state_dict[key] = value diff --git a/kosmos-g/fairseq/examples/m2m_100/README.md b/kosmos-g/fairseq/examples/m2m_100/README.md deleted file mode 100644 index 02a68a5f0..000000000 --- a/kosmos-g/fairseq/examples/m2m_100/README.md +++ /dev/null @@ -1,241 +0,0 @@ -# Beyond English-Centric Multilingual Machine Translation - -## Introduction -In this work, we create a true Many-to-Many multilingual translation model that can translate directly between any pair of 100 languages. Our focus on non-English-Centric models brings gains of more than 10 BLEU when directly translating between non-English directions while performing competitively with the best single systems of WMT. - -If you are new to using fairseq, read the following walkthrough. Otherwise, skip to the sections below. - -0. **Generation Data** - -To download the generation data, follow the below commands. Note that all datasets need to be detokenized *before* applying SPM in the data preprocessing step. If you use these evaluation datasets, please cite their associated papers. -```bash -# WMT - use sacrebleu, example here: -sacrebleu -t wmt14 -l fr-en --echo src > wmt.test.fr-en.fr -sacrebleu -t wmt14 -l fr-en --echo ref > wmt.test.fr-en.en - -# WAT -wget http://lotus.kuee.kyoto-u.ac.jp/WAT/my-en-data/wat2020.my-en.zip -unzip wat2020.my-en.zip - -# FLORES -# download from: https://github.com/facebookresearch/flores - -# TED - need to detokenize with Moses! -# from: https://github.com/neulab/word-embeddings-for-nmt -wget http://phontron.com/data/ted_talks.tar.gz - -# Autshumato -# request to download: https://repo.sadilar.org/handle/20.500.12185/397 - -# Tatoeba Challenge -# available here: https://github.com/Helsinki-NLP/Tatoeba-Challenge -``` - -1. **Training Data** - -To produce the training data, we use a combination of [CCMatrix](https://arxiv.org/abs/1911.04944) and [CCAligned](https://arxiv.org/abs/1911.06154). Check out the instructions [here](https://github.com/facebookresearch/LASER/tree/master/tasks/CCMatrix) to download the raw data. - -2. **Preprocess Data** - -After downloading raw data, you will need to postprocess the data, then apply SPM, then binarize. Note that it is very important you run the postprocessing script, because this removes any instance of the evaluation data in the mined training data. 
-
-```bash
-# preprocess data
-
-# remove sentences with more than 50% punctuation
-# (pass the script's required --input, --bitext, --src-lang and --tgt-lang flags)
-python /path/to/fairseq/examples/m2m_100/process_data/remove_too_much_punc.py
-
-# deduplicate training data
-paste /path/to/datadir/train.$src /path/to/datadir/train.$tgt | awk '!x[$0]++' > /path/to/datadir/train.dedup
-echo "keeping $(wc -l /path/to/datadir/train.dedup) bitext out of $(wc -l /path/to/datadir/train.$src)"
-cut -f1 /path/to/datadir/train.dedup > /path/to/datadir/train.$src
-cut -f2 /path/to/datadir/train.dedup > /path/to/datadir/train.$tgt
-
-# remove all instances of evaluation data from the training data
-# (edit the paths at the top of dedup_data.py and pass -s/--start-index and -n/--size)
-python /path/to/fairseq/examples/m2m_100/process_data/dedup_data.py
-
-# frequency cleaning
-wget https://dl.fbaipublicfiles.com/m2m_100/histograms.tar.gz
-tar -xvzf histograms.tar.gz
-python /path/to/fairseq/examples/m2m_100/process_data/clean_histogram.py --src $src --tgt $tgt --src-file /path/to/source/file --tgt-file /path/to/output/file --src-output-file source_output.$src --tgt-output-file target_output.$tgt --histograms /path/to/histograms
-
-# apply SPM
-wget https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model
-python /path/to/fairseq/scripts/spm_encode.py \
-    --model spm.128k.model \
-    --output_format=piece \
-    --inputs=/path/to/input/file/here \
-    --outputs=/path/to/output/file/here
-
-# length ratio cleaning
-perl mosesdecoder/scripts/training/clean-corpus-n.perl --ratio 3 /path/to/training/data/train.spm.$src-$tgt $src $tgt /path/to/output/directory/train.spm.$src-$tgt 1 250
-
-# binarize data
-wget https://dl.fbaipublicfiles.com/m2m_100/data_dict.128k.txt
-fairseq-preprocess \
-    --source-lang $src --target-lang $tgt \
-    --testpref spm.$src.$tgt \
-    --thresholdsrc 0 --thresholdtgt 0 \
-    --destdir data_bin \
-    --srcdict data_dict.128k.txt --tgtdict data_dict.128k.txt
-```
-
-3. **Training Scripts**
-
-To reproduce the training of our models, we train with fairseq-py's multilingual translation [task](https://github.com/pytorch/fairseq/tree/main/examples/multilingual). If you are interested in model parallel training, also check out [fairscale](https://github.com/facebookresearch/fairscale).
-
-4. **Generation**
-
-To generate from our models, follow the commands in the generation section below.
-
-
-If you use any of the resources listed here, please cite:
-```bibtex
-@article{fan2020beyond,
-  title={Beyond English-Centric Multilingual Machine Translation},
-  author={Fan, Angela and Bhosale, Shruti and Schwenk, Holger and Ma, Zhiyi and El-Kishky, Ahmed and Goyal, Siddharth and Baines, Mandeep and Celebi, Onur and Wenzek, Guillaume and Chaudhary, Vishrav and Goyal, Naman and Birch, Tom and Liptchinsky, Vitaliy and Edunov, Sergey and Grave, Edouard and Auli, Michael and Joulin, Armand},
-  journal={arXiv preprint},
-  year={2020}
-}
-
-@article{schwenk2019ccmatrix,
-  title={Ccmatrix: Mining billions of high-quality parallel sentences on the web},
-  author={Schwenk, Holger and Wenzek, Guillaume and Edunov, Sergey and Grave, Edouard and Joulin, Armand},
-  journal={arXiv preprint arXiv:1911.04944},
-  year={2019}
-}
-
-@article{el2019massive,
-  title={A Massive Collection of Cross-Lingual Web-Document Pairs},
-  author={El-Kishky, Ahmed and Chaudhary, Vishrav and Guzman, Francisco and Koehn, Philipp},
-  journal={arXiv preprint arXiv:1911.06154},
-  year={2019}
-}
-```
-
-
-## Trained Models
-
-### 418M and 1.2B Model
-We include the last checkpoint for both of these models.
- -```bash -wget https://dl.fbaipublicfiles.com/m2m_100/model_dict.128k.txt -wget https://dl.fbaipublicfiles.com/m2m_100/language_pairs_small_models.txt - -# 418M parameter model -wget https://dl.fbaipublicfiles.com/m2m_100/418M_last_checkpoint.pt - -# 1.2B parameter model -wget https://dl.fbaipublicfiles.com/m2m_100/1.2B_last_checkpoint.pt - -# Generation: -fairseq-generate $binarized_data_path --batch-size 32 --path $path_to_model --fixed-dictionary model_dict.128k.txt -s en -t fr --remove-bpe 'sentencepiece' --beam 5 --task translation_multi_simple_epoch --lang-pairs language_pairs_small_models.txt --decoder-langtok --encoder-langtok src --gen-subset test > gen_out -``` - -### 12B Model -12B parameter model trained on many-to-many training data for 100 languages. We include the last checkpoint, average of last 5 checkpoints, average of last 10 checkpoints. There isn't a universally best choice out of these three, but all three versions are pretty close in accuracy. You can either sweep over the 3 checkpoints on a dev test and use the best performing checkpoint for final testing. Or the last checkpoint can be a good default choice. - -**Model Download Links** -Configuration | 2 32GB GPUs | 4 16GB GPUs | 6 12GB GPUs | 8 8GB GPUs -:--|:--|:--|:--|:-- -Last Checkpoint | [12b_last_chk_2_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_2_gpus.pt) | [12b_last_chk_4_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_4_gpus.pt) | [12b_last_chk_6_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_6_gpus.pt) | [12b_last_chk_8_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_8_gpus.pt) -Average of last 5 checkpoints | [12b_avg5_chk_2_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_2_gpus.pt) | [12b_avg5_chk_4_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_4_gpus.pt) | [12b_avg5_chk_6_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_6_gpus.pt) | [12b_avg5_chk_8_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg5_chk_8_gpus.pt) -Average of last 10 checkpoints | [12b_avg10_chk_2_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_2_gpus.pt) | [12b_avg10_chk_4_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_4_gpus.pt) | [12b_avg10_chk_6_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_6_gpus.pt) | [12b_avg10_chk_8_gpus.pt](https://dl.fbaipublicfiles.com/m2m_100/12b_avg10_chk_8_gpus.pt) - -**Generation Arguments** -Configuration | 2 32GB GPUs | 4 16GB GPUs | 6 12GB GPUs | 8 8GB GPUs -:--|:--|:--|:--|:-- -`--pipeline-encoder-balance` | `[26]` | `[1,15,10]` | `[1,9,9,7]` | `[1,6,6,6,7]` -`--pipeline-encoder-devices` | `[0]` | `[0,1,0]` | `[0,1,2,0]` | `[0,4,5,1,0]` -`--pipeline-decoder-balance` | `[3,22,1]` | `[3,11,11,1]` | `[3,7,7,8,1]` | `[1,6,6,6,6,1]` -`--pipeline-decoder-devices` | `[0,1,0]` | `[0,2,3,0]` | `[0,3,4,5,0]` | `[0,2,6,7,3,0]` - - -## SentencePiece Model - -```bash -wget https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model -``` - -## Generation with M2M-100 - -### Encode using our SentencePiece Model - -Note: Install SentencePiece from [here](https://github.com/google/sentencepiece) - -```bash -fairseq=/path/to/fairseq -cd $fairseq -sacrebleu --echo src -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.de -sacrebleu --echo ref -l de-fr -t wmt19 | head -n 20 > raw_input.de-fr.fr -wget https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model -for lang in de fr ; do - python scripts/spm_encode.py \ - --model spm.128k.model \ - --output_format=piece \ - 
--inputs=raw_input.de-fr.${lang} \ - --outputs=spm.de-fr.${lang} -done -``` - -### Binarization - -```bash -wget https://dl.fbaipublicfiles.com/m2m_100/data_dict.128k.txt -fairseq-preprocess \ - --source-lang de --target-lang fr \ - --testpref spm.de-fr \ - --thresholdsrc 0 --thresholdtgt 0 \ - --destdir data_bin \ - --srcdict data_dict.128k.txt --tgtdict data_dict.128k.txt -``` - -### Generation for the 12B model - -Note that generation can currently be run using 2 32GB / 4 16GB / 6 12GB / 8 8GB GPUs, and the corresponding model checkpoints and pipeline arguments can be found in the [12B Model Section](#12b-model). -Generation on CPUs will be added in the future. - -```bash -wget https://dl.fbaipublicfiles.com/m2m_100/model_dict.128k.txt -wget https://dl.fbaipublicfiles.com/m2m_100/language_pairs.txt -wget https://dl.fbaipublicfiles.com/m2m_100/12b_last_chk_4_gpus.pt -fairseq-generate \ - data_bin \ - --batch-size 1 \ - --path 12b_last_chk_4_gpus.pt \ - --fixed-dictionary model_dict.128k.txt \ - -s de -t fr \ - --remove-bpe 'sentencepiece' \ - --beam 5 \ - --task translation_multi_simple_epoch \ - --lang-pairs language_pairs.txt \ - --decoder-langtok --encoder-langtok src \ - --gen-subset test \ - --fp16 \ - --dataset-impl mmap \ - --distributed-world-size 1 --distributed-no-spawn \ - --pipeline-model-parallel \ - --pipeline-chunks 1 \ - --pipeline-encoder-balance '[1,15,10]' \ - --pipeline-encoder-devices '[0,1,0]' \ - --pipeline-decoder-balance '[3,11,11,1]' \ - --pipeline-decoder-devices '[0,2,3,0]' > gen_out -``` -## Evaluation with M2M-100 - -### Tokenization - -Note: Refer to tokenizers/README.md for more details on tokenization. - -```bash -cd ${fairseq}/examples/m2m_100 -cat ${fairseq}/gen_out | grep -P "^H" | sort -V | cut -f 3- | sh tok.sh fr > hyp -cat ${fairseq}/raw_input.de-fr.fr | sh tok.sh fr > ref -``` - -### BLEU - -```bash -sacrebleu -tok 'none' ref < hyp -``` diff --git a/kosmos-g/fairseq/examples/m2m_100/install_dependecies.sh b/kosmos-g/fairseq/examples/m2m_100/install_dependecies.sh deleted file mode 100644 index 82a105474..000000000 --- a/kosmos-g/fairseq/examples/m2m_100/install_dependecies.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -CWD=`pwd` -INSTALL_PATH=$CWD/tokenizers/thirdparty - -MOSES=$INSTALL_PATH/mosesdecoder -if [ ! -d $MOSES ]; then - echo 'Cloning Moses github repository (for tokenization scripts)...' - git clone https://github.com/moses-smt/mosesdecoder.git $MOSES - cd $MOSES - # To deal with differences in handling ' vs " - git checkout 03578921cc1a03402 - cd - -fi - -WMT16_SCRIPTS=$INSTALL_PATH/wmt16-scripts -if [ ! -d $WMT16_SCRIPTS ]; then - echo 'Cloning Romanian tokenization scripts' - git clone https://github.com/rsennrich/wmt16-scripts.git $WMT16_SCRIPTS -fi - -KYTEA=$INSTALL_PATH/kytea -if [ ! -f $KYTEA/bin/kytea ]; then - git clone https://github.com/neubig/kytea.git $KYTEA - cd $KYTEA - autoreconf -i - ./configure --prefix=`pwd` - make - make install - cd .. -fi - -export MECAB=$INSTALL_PATH/mecab-0.996-ko-0.9.2 -if [ ! -f $MECAB/bin/mecab ]; then - cd $INSTALL_PATH - curl -LO https://bitbucket.org/eunjeon/mecab-ko/downloads/mecab-0.996-ko-0.9.2.tar.gz - tar zxfv mecab-0.996-ko-0.9.2.tar.gz - cd mecab-0.996-ko-0.9.2/ - ./configure --prefix=`pwd` - make - make install - - cd .. 
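-  # download and build the Korean dictionary (mecab-ko-dic) used by MeCab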
- curl -LO https://bitbucket.org/eunjeon/mecab-ko-dic/downloads/mecab-ko-dic-2.1.1-20180720.tar.gz - tar zxfv mecab-ko-dic-2.1.1-20180720.tar.gz - cd mecab-ko-dic-2.1.1-20180720/ - ./autogen.sh - ./configure --prefix=`pwd` --with-dicdir=$MECAB/lib/mecab/dic/mecab-ko-dic --with-mecab-config=$MECAB/bin/mecab-config - make - sh -c 'echo "dicdir=$MECAB/lib/mecab/dic/mecab-ko-dic" > $MECAB/etc/mecabrc' - make install - cd $CWD -fi - -INDIC_RESOURCES_PATH=$INSTALL_PATH/indic_nlp_resources -if [ ! -d $INDIC_RESOURCES_PATH ]; then - echo 'Cloning indic_nlp_resources' - git clone https://github.com/anoopkunchukuttan/indic_nlp_resources.git $INDIC_RESOURCES_PATH -fi - - -if [ ! -f $INSTALL_PATH/seg_my.py ]; then - cd $INSTALL_PATH - wget http://lotus.kuee.kyoto-u.ac.jp/WAT/my-en-data/wat2020.my-en.zip - unzip wat2020.my-en.zip - # switch to python3 - cat wat2020.my-en/myseg.py |sed 's/^sys.std/###sys.std/g' | sed 's/### sys/sys/g' | sed 's/unichr/chr/g' > seg_my.py - cd $CWD -fi - - -pip install pythainlp sacrebleu indic-nlp-library - diff --git a/kosmos-g/fairseq/examples/m2m_100/process_data/clean_histogram.py b/kosmos-g/fairseq/examples/m2m_100/process_data/clean_histogram.py deleted file mode 100644 index e24e073dc..000000000 --- a/kosmos-g/fairseq/examples/m2m_100/process_data/clean_histogram.py +++ /dev/null @@ -1,52 +0,0 @@ -import argparse - -parser = argparse.ArgumentParser() -parser.add_argument('--src', type=str, help='Source language') -parser.add_argument('--tgt', type=str, help='Target language') -parser.add_argument('--src-file', type=str, help='Input source file') -parser.add_argument('--tgt-file', type=str, help='Input target file') -parser.add_argument('--src-output-file', type=str, help='Output source file') -parser.add_argument('--tgt-output-file', type=str, help='Output target file') -parser.add_argument('--threshold', type=float, default=0.5, help='Threshold') -parser.add_argument('--threshold-character', type=str, default=']', help='Threshold character') -parser.add_argument('--histograms', type=str, help='Path to histograms') - -args = parser.parse_args() - - -def read_hist(f): - ch = [] - for line in f: - c = line[0] - if c == args.threshold_character: - break - ch.append(c) - return ch - - -with(open("{}/{}".format(args.histograms, args.src), 'r', encoding='utf8')) as f: - ch1 = read_hist(f) - -with(open("{}/{}".format(args.histograms, args.tgt), 'r', encoding='utf8')) as f: - ch2 = read_hist(f) - -print("Accepted characters for {}: {}".format(args.src, ch1)) -print("Accepted characters for {}: {}".format(args.tgt, ch2)) - -with open(args.src_file, 'r', encoding='utf8') as fs1, open(args.tgt_file, 'r', encoding='utf8') as fs2, open(args.src_output_file, 'w', encoding='utf8') as fos1, open(args.tgt_output_file, 'w', encoding='utf8') as fos2: - ls1 = fs1.readline() - ls2 = fs2.readline() - - while ls1 or ls2: - cnt1 = len([c for c in ls1.strip() if c in ch1]) - cnt2 = len([c for c in ls2.strip() if c in ch2]) - - if cnt1 / len(ls1) > args.threshold and cnt2 / len(ls2) > args.threshold: - fos1.write(ls1) - fos2.write(ls2) - else: - print("{} {} {} \n{} {} {}".format(args.src, cnt1 / len(ls1), ls1.strip(), args.tgt, cnt2 / len(ls2), ls2.strip())) - - ls1 = fs1.readline() - ls2 = fs2.readline() - \ No newline at end of file diff --git a/kosmos-g/fairseq/examples/m2m_100/process_data/dedup_data.py b/kosmos-g/fairseq/examples/m2m_100/process_data/dedup_data.py deleted file mode 100644 index 58d9ed1cd..000000000 --- a/kosmos-g/fairseq/examples/m2m_100/process_data/dedup_data.py +++ 
/dev/null @@ -1,91 +0,0 @@ -import argparse -from collections import namedtuple -import os - -DATADIR = "/path/to/train_data" -DEDUP_FROM_DIR = "/path/to/eval/data" -OUTPUT_DIR = "/path/to/output/data" - - -def main(args): - languages = set() - for language_directory in os.listdir(DATADIR): - if "_" in language_directory: - src, tgt = language_directory.split("_") - languages.add(LanguagePair(src=src, tgt=tgt)) - - data = existing_data() - train_languages = sorted(languages) - for language_pair in train_languages[args.start_index:args.start_index + args.size]: - print(language_pair) - dedup(language_pair, data) - - -LanguagePair = namedtuple("LanguagePair", ["src", "tgt"]) - - -def existing_data(): - data = set() - for file in os.listdir(DEDUP_FROM_DIR): - with open(os.path.join(DEDUP_FROM_DIR, file)) as f: - data |= set(f.readlines()) - return data - -def dedup(language_pair, data, verbose=True, output=True): - train_filenames = LanguagePair( - src=f"{DATADIR}/{language_pair.src}_{language_pair.tgt}/train.{language_pair.src}", - tgt=f"{DATADIR}/{language_pair.src}_{language_pair.tgt}/train.{language_pair.tgt}", - ) - - output_filenames = LanguagePair( - src=f"{OUTPUT_DIR}/train.dedup.{language_pair.src}-{language_pair.tgt}.{language_pair.src}", - tgt=f"{OUTPUT_DIR}/train.dedup.{language_pair.src}-{language_pair.tgt}.{language_pair.tgt}" - ) - - # If output exists, skip this pair. It has already been done. - if (os.path.exists(output_filenames.src) and - os.path.exists(output_filenames.tgt)): - if verbose: - print(f"{language_pair.src}-{language_pair.tgt} already done.") - return - - if verbose: - print(f"{language_pair.src}-{language_pair.tgt} ready, will check dups.") - - # If there is no output, no need to actually do the loop. - if not output: - return - - if os.path.exists(train_filenames.src) and os.path.exists(train_filenames.tgt): - with open(train_filenames.src) as f: - train_source = f.readlines() - - with open(train_filenames.tgt) as f: - train_target = f.readlines() - - # do dedup - new_train_source = [] - new_train_target = [] - for i, train_line in enumerate(train_source): - if train_line not in data and train_target[i] not in data: - new_train_source.append(train_line) - new_train_target.append(train_target[i]) - - assert len(train_source) == len(train_target) - assert len(new_train_source) == len(new_train_target) - assert len(new_train_source) <= len(train_source) - - with open(output_filenames.src, "w") as o: - for line in new_train_source: - o.write(line) - - with open(output_filenames.tgt, "w") as o: - for line in new_train_target: - o.write(line) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument("-s", "--start-index", required=True, type=int) - parser.add_argument("-n", "--size", required=True, type=int) - main(parser.parse_args()) diff --git a/kosmos-g/fairseq/examples/m2m_100/process_data/remove_too_much_punc.py b/kosmos-g/fairseq/examples/m2m_100/process_data/remove_too_much_punc.py deleted file mode 100644 index 6c280de24..000000000 --- a/kosmos-g/fairseq/examples/m2m_100/process_data/remove_too_much_punc.py +++ /dev/null @@ -1,36 +0,0 @@ -import gzip -import argparse -from string import punctuation - -def len_no_punc(s, punc): - return len([ch for ch in s if ch in punc]) - -def filter_overpunc(len_npunc, len_sen): - return len_npunc < 0.5*len_sen - -def main(args): - punc = punctuation + "—|–" - print('Processing file {}'.format(args.input)) - with gzip.open(args.input, 'rt', encoding=args.encoding) as tsv: - with 
open(args.bitext + '.' + args.src_lang, 'wt', encoding=args.encoding) as fsrc: - with open(args.bitext + '.' + args.tgt_lang, 'wt', encoding=args.encoding) as ftgt: - line = tsv.readline() - fields = line.split('\t') - - src, tgt = fields[1], fields[2] - - nchar_npunc_src = len_no_punc(src, punc) - nchar_npunc_tgt = len_no_punc(tgt, punc) - - if filter_overpunc(nchar_npunc_src, len(src)) and filter_overpunc(nchar_npunc_tgt, len(tgt)): - fsrc.write(src.strip() + '\n') - ftgt.write(tgt.strip() + '\n') - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument("--input", required=True, type=str) - parser.add_argument('--encoding', default='utf-8', help='character encoding for input/output') - parser.add_argument('--bitext', type=str, required=True, help='language direction') - parser.add_argument('--src-lang', type=str, required=True, help='Source language') - parser.add_argument('--tgt-lang', type=str, required=True, help='Target language') - main(parser.parse_args()) diff --git a/kosmos-g/fairseq/examples/m2m_100/tok.sh b/kosmos-g/fairseq/examples/m2m_100/tok.sh deleted file mode 100644 index ba2ec5a2f..000000000 --- a/kosmos-g/fairseq/examples/m2m_100/tok.sh +++ /dev/null @@ -1,83 +0,0 @@ -#!/usr/bin/env bash -# Copyright (c) 2019-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# - -set -e - -TOKENIZERS_SCRIPTS=tokenizers -INSTALL_PATH=$TOKENIZERS_SCRIPTS/thirdparty - -N_THREADS=8 - -lg=$1 - -MOSES=$INSTALL_PATH/mosesdecoder -REPLACE_UNICODE_PUNCT=$MOSES/scripts/tokenizer/replace-unicode-punctuation.perl -NORM_PUNC=$MOSES/scripts/tokenizer/normalize-punctuation.perl -REM_NON_PRINT_CHAR=$MOSES/scripts/tokenizer/remove-non-printing-char.perl -TOKENIZER=$MOSES/scripts/tokenizer/tokenizer.perl - -# special tokenization for Romanian -WMT16_SCRIPTS=$INSTALL_PATH/wmt16-scripts - -NORMALIZE_ROMANIAN=$WMT16_SCRIPTS/preprocess/normalise-romanian.py -REMOVE_DIACRITICS=$WMT16_SCRIPTS/preprocess/remove-diacritics.py - -# Burmese -MY_SEGMENT=$INSTALL_PATH/seg_my.py - -# Arabic -AR_TOKENIZER=$TOKENIZERS_SCRIPTS/tokenizer_ar.sh - -# Korean -KO_SEGMENT=$TOKENIZERS_SCRIPTS/seg_ko.sh - -# Japanese -JA_SEGMENT=$TOKENIZERS_SCRIPTS/seg_ja.sh - -# Indic -IN_TOKENIZER=$TOKENIZERS_SCRIPTS/tokenize_indic.py -INDIC_RESOURCES_PATH=$INSTALL_PATH/indic_nlp_resources - -# Thai -THAI_TOKENIZER=$TOKENIZERS_SCRIPTS/tokenize_thai.py - -# Chinese -CHINESE_TOKENIZER=$TOKENIZERS_SCRIPTS/tokenize_zh.py - -# Chinese -if [ "$lg" = "zh" ]; then - cat - | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l $lg | $REM_NON_PRINT_CHAR | python $CHINESE_TOKENIZER -# Thai -elif [ "$lg" = "th" ]; then - cat - | python $THAI_TOKENIZER -# Japanese -elif [ "$lg" = "ja" ]; then - cat - | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l $lg | $REM_NON_PRINT_CHAR | ${JA_SEGMENT} -# Korean -elif [ "$lg" = "ko" ]; then - cat - | $REM_NON_PRINT_CHAR | ${KO_SEGMENT} -# Romanian -elif [ "$lg" = "ro" ]; then - cat - | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l $lg | $REM_NON_PRINT_CHAR | $NORMALIZE_ROMANIAN | $REMOVE_DIACRITICS | $TOKENIZER -no-escape -threads $N_THREADS -l $lg -# Burmese -elif [ "$lg" = "my" ]; then - cat - | python ${MY_SEGMENT} -# Arabic -elif [ "$lg" = "ar" ]; then - cat - | ${AR_TOKENIZER} -# Indic -elif [ "$lg" = "ne" ]; then - cat - | python ${IN_TOKENIZER} $lg -elif [ "$lg" = "si" ]; then - cat - | python ${IN_TOKENIZER} $lg -elif [ "$lg" = "hi" ]; then - cat - | python ${IN_TOKENIZER} $lg -# other 
-else
-  cat - | $REPLACE_UNICODE_PUNCT | $NORM_PUNC -l $lg | $REM_NON_PRINT_CHAR | $TOKENIZER -no-escape -threads $N_THREADS -l $lg
-fi
diff --git a/kosmos-g/fairseq/examples/m2m_100/tokenizers/README.md b/kosmos-g/fairseq/examples/m2m_100/tokenizers/README.md
deleted file mode 100644
index e116932bc..000000000
--- a/kosmos-g/fairseq/examples/m2m_100/tokenizers/README.md
+++ /dev/null
@@ -1,18 +0,0 @@
-# M2M-100 Tokenization
-
-We apply different tokenization strategies for different languages following the existing literature. Here we provide `tok.sh`, a tokenizer that can be used to reproduce our results.
-
-To reproduce the results, follow these steps:
-
-```
-tgt_lang=...
-reference_translation=...
-cat generation_output | grep -P "^H" | sort -V | cut -f 3- | sh tok.sh $tgt_lang > hyp
-cat $reference_translation | sh tok.sh $tgt_lang > ref
-sacrebleu -tok 'none' ref < hyp
-```
-
-## Installation
-
-Tools needed for all the languages except Arabic can be installed by running `install_dependencies.sh`.
-If you want to evaluate Arabic models, please follow the instructions provided at http://alt.qcri.org/tools/arabic-normalizer/ to install the required tools.
diff --git a/kosmos-g/fairseq/examples/m2m_100/tokenizers/seg_ja.sh b/kosmos-g/fairseq/examples/m2m_100/tokenizers/seg_ja.sh
deleted file mode 100644
index be6f5ca5f..000000000
--- a/kosmos-g/fairseq/examples/m2m_100/tokenizers/seg_ja.sh
+++ /dev/null
@@ -1,11 +0,0 @@
-#!/usr/bin/env bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-SCRIPT=`realpath $0`
-KYTEA=`dirname $SCRIPT`/thirdparty/kytea
-export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$KYTEA/lib:/usr/local/lib
-export PATH=$PATH:"$KYTEA/bin"
-
-cat - | tr -d "[:blank:]" | kytea -notags
diff --git a/kosmos-g/fairseq/examples/m2m_100/tokenizers/seg_ko.sh b/kosmos-g/fairseq/examples/m2m_100/tokenizers/seg_ko.sh
deleted file mode 100644
index c523d9263..000000000
--- a/kosmos-g/fairseq/examples/m2m_100/tokenizers/seg_ko.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/usr/bin/env bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-SCRIPT=`realpath $0`
-MECAB=`dirname $SCRIPT`/thirdparty/mecab-0.996-ko-0.9.2
-
-export PATH=$PATH:"$MECAB/bin":"$MECAB/lib"
-export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:"$MECAB/lib"
-
-cat - | mecab -O wakati
diff --git a/kosmos-g/fairseq/examples/m2m_100/tokenizers/thirdparty/.gitignore b/kosmos-g/fairseq/examples/m2m_100/tokenizers/thirdparty/.gitignore
deleted file mode 100644
index 19eb6a9dd..000000000
--- a/kosmos-g/fairseq/examples/m2m_100/tokenizers/thirdparty/.gitignore
+++ /dev/null
@@ -1,12 +0,0 @@
-seg_my.py
-indic_nlp_library/
-indic_nlp_resources/
-kytea/
-mecab-0.996-ko-0.9.2.tar.gz
-mecab-0.996-ko-0.9.2/
-mosesdecoder/
-wat2020.my-en.zip
-wat2020.my-en/
-wmt16-scripts/
-mecab-ko-dic-2.1.1-20180720/
-mecab-ko-dic-2.1.1-20180720.tar.gz
\ No newline at end of file
diff --git a/kosmos-g/fairseq/examples/m2m_100/tokenizers/tokenize_indic.py b/kosmos-g/fairseq/examples/m2m_100/tokenizers/tokenize_indic.py
deleted file mode 100644
index a44fad07f..000000000
--- a/kosmos-g/fairseq/examples/m2m_100/tokenizers/tokenize_indic.py
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -# Use: echo {text} | python tokenize_indic.py {language} - -import sys - -from indicnlp.normalize.indic_normalize import IndicNormalizerFactory -from indicnlp.tokenize.indic_tokenize import trivial_tokenize - - -factory = IndicNormalizerFactory() -normalizer = factory.get_normalizer( - sys.argv[1], remove_nuktas=False, nasals_mode="do_nothing" -) - -for line in sys.stdin: - normalized_line = normalizer.normalize(line.strip()) - tokenized_line = " ".join(trivial_tokenize(normalized_line, sys.argv[1])) - print(tokenized_line) diff --git a/kosmos-g/fairseq/examples/m2m_100/tokenizers/tokenize_thai.py b/kosmos-g/fairseq/examples/m2m_100/tokenizers/tokenize_thai.py deleted file mode 100644 index 9c72cb890..000000000 --- a/kosmos-g/fairseq/examples/m2m_100/tokenizers/tokenize_thai.py +++ /dev/null @@ -1,13 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import sys - -from pythainlp import word_tokenize - - -for line in sys.stdin: - print(" ".join(word_tokenize(line.strip()))) diff --git a/kosmos-g/fairseq/examples/m2m_100/tokenizers/tokenize_zh.py b/kosmos-g/fairseq/examples/m2m_100/tokenizers/tokenize_zh.py deleted file mode 100644 index 674b5849c..000000000 --- a/kosmos-g/fairseq/examples/m2m_100/tokenizers/tokenize_zh.py +++ /dev/null @@ -1,14 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import fileinput - -import sacrebleu - - -for line in fileinput.input(): - print(sacrebleu.tokenize_zh(line)) diff --git a/kosmos-g/fairseq/examples/m2m_100/tokenizers/tokenizer_ar.sh b/kosmos-g/fairseq/examples/m2m_100/tokenizers/tokenizer_ar.sh deleted file mode 100644 index ad35d7adf..000000000 --- a/kosmos-g/fairseq/examples/m2m_100/tokenizers/tokenizer_ar.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/usr/bin/env sh -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -# -# Please follow the instructions here http://alt.qcri.org/tools/arabic-normalizer/ -# to install tools needed for Arabic - -echo "Please install Arabic tools: http://alt.qcri.org/tools/arabic-normalizer/" -echo "Then update environment variables in tokenizer_ar.sh" -exit 1 - -SVMTOOL=... -GOMOSESGO=... -QCRI_ARABIC_NORMALIZER=... - -export PERL5LIB="$SVMTOOL/lib":"$GOMOSESGO/bin/MADA-3.2":$PERL5LIB - - -tempfile=$(mktemp) -cat - > $tempfile - -cd $QCRI_ARABIC_NORMALIZER - -bash qcri_normalizer_mada3.2_aramorph1.2.1.sh $tempfile -cat $tempfile.mada_norm-aramorph.europarl_tok diff --git a/kosmos-g/fairseq/examples/mbart/README.md b/kosmos-g/fairseq/examples/mbart/README.md deleted file mode 100644 index a45e37243..000000000 --- a/kosmos-g/fairseq/examples/mbart/README.md +++ /dev/null @@ -1,123 +0,0 @@ -# MBART: Multilingual Denoising Pre-training for Neural Machine Translation -[https://arxiv.org/abs/2001.08210] - -## Introduction - -MBART is a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages using the BART objective. 
mBART is one of the first methods for pre-training a complete sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only on the encoder, the decoder, or reconstructing parts of the text.
-
-## Pre-trained models
-
-Model | Description | # params | Download
----|---|---|---
-`mbart.CC25` | mBART model with 12 encoder and decoder layers trained on 25 languages' monolingual corpus | 610M | [mbart.CC25.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/mbart/mbart.cc25.v2.tar.gz)
-`mbart.ft.ro_en` | mBART cc25 model finetuned on the ro-en language pair | 610M | [mbart.cc25.ft.enro.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/mbart/mbart.cc25.ft.enro.tar.gz)
-
-## Results
-
-**[WMT16 EN-RO](https://www.statmt.org/wmt16/translation-task.html)**
-
-_(test set, no additional data used)_
-
-Model | en-ro | ro-en
----|---|---
-`Random` | 34.3 | 34.0
-`mbart.cc25` | 37.7 | 37.8
-`mbart.enro.bilingual` | 38.5 | 38.5
-
-## BPE data
-
-Download the model:
-```bash
-wget https://dl.fbaipublicfiles.com/fairseq/models/mbart/mbart.cc25.v2.tar.gz
-tar -xzvf mbart.cc25.v2.tar.gz
-```
-
-Install SPM [here](https://github.com/google/sentencepiece), then apply BPE to your data:
-```bash
-SPM=/path/to/sentencepiece/build/src/spm_encode
-MODEL=sentence.bpe.model
-${SPM} --model=${MODEL} < ${DATA}/${TRAIN}.${SRC} > ${DATA}/${TRAIN}.spm.${SRC} &
-${SPM} --model=${MODEL} < ${DATA}/${TRAIN}.${TGT} > ${DATA}/${TRAIN}.spm.${TGT} &
-${SPM} --model=${MODEL} < ${DATA}/${VALID}.${SRC} > ${DATA}/${VALID}.spm.${SRC} &
-${SPM} --model=${MODEL} < ${DATA}/${VALID}.${TGT} > ${DATA}/${VALID}.spm.${TGT} &
-${SPM} --model=${MODEL} < ${DATA}/${TEST}.${SRC} > ${DATA}/${TEST}.spm.${SRC} &
-${SPM} --model=${MODEL} < ${DATA}/${TEST}.${TGT} > ${DATA}/${TEST}.spm.${TGT} &
-```
-
-## Preprocess data
-
-```bash
-DICT=dict.txt
-fairseq-preprocess \
-  --source-lang ${SRC} \
-  --target-lang ${TGT} \
-  --trainpref ${DATA}/${TRAIN}.spm \
-  --validpref ${DATA}/${VALID}.spm \
-  --testpref ${DATA}/${TEST}.spm \
-  --destdir ${DEST}/${NAME} \
-  --thresholdtgt 0 \
-  --thresholdsrc 0 \
-  --srcdict ${DICT} \
-  --tgtdict ${DICT} \
-  --workers 70
-```
-
-## Finetune on EN-RO
-Finetune on the mBART cc25 checkpoint:
-
-```bash
-PRETRAIN=mbart.cc25 # fix if you moved the downloaded checkpoint
-langs=ar_AR,cs_CZ,de_DE,en_XX,es_XX,et_EE,fi_FI,fr_XX,gu_IN,hi_IN,it_IT,ja_XX,kk_KZ,ko_KR,lt_LT,lv_LV,my_MM,ne_NP,nl_XX,ro_RO,ru_RU,si_LK,tr_TR,vi_VN,zh_CN
-
-fairseq-train path_2_data \
-  --encoder-normalize-before --decoder-normalize-before \
-  --arch mbart_large --layernorm-embedding \
-  --task translation_from_pretrained_bart \
-  --source-lang en_XX --target-lang ro_RO \
-  --criterion label_smoothed_cross_entropy --label-smoothing 0.2 \
-  --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \
-  --lr-scheduler polynomial_decay --lr 3e-05 --warmup-updates 2500 --total-num-update 40000 \
-  --dropout 0.3 --attention-dropout 0.1 --weight-decay 0.0 \
-  --max-tokens 1024 --update-freq 2 \
-  --save-interval 1 --save-interval-updates 5000 --keep-interval-updates 10 --no-epoch-checkpoints \
-  --seed 222 --log-format simple --log-interval 2 \
-  --restore-file $PRETRAIN \
-  --reset-optimizer --reset-meters --reset-dataloader --reset-lr-scheduler \
-  --langs $langs \
-  --ddp-backend legacy_ddp
-```
-## Generate on EN-RO
-Compute sacreBLEU for the finetuned en-ro model.
-
-Get the tokenizer [here](https://github.com/rsennrich/wmt16-scripts), then download and extract the checkpoint:
-```bash
-wget https://dl.fbaipublicfiles.com/fairseq/models/mbart/mbart.cc25.ft.enro.tar.gz
-tar -xzvf mbart.cc25.ft.enro.tar.gz
-```
-
-```bash -model_dir=MBART_finetuned_enro # fix if you moved the checkpoint - -fairseq-generate path_2_data \ - --path $model_dir/model.pt \ - --task translation_from_pretrained_bart \ - --gen-subset test \ - -t ro_RO -s en_XX \ - --bpe 'sentencepiece' --sentencepiece-model $model_dir/sentence.bpe.model \ - --sacrebleu --remove-bpe 'sentencepiece' \ - --batch-size 32 --langs $langs > en_ro - -cat en_ro | grep -P "^H" |sort -V |cut -f 3- | sed 's/\[ro_RO\]//g' |$TOKENIZER ro > en_ro.hyp -cat en_ro | grep -P "^T" |sort -V |cut -f 2- | sed 's/\[ro_RO\]//g' |$TOKENIZER ro > en_ro.ref -sacrebleu -tok 'none' -s 'none' en_ro.ref < en_ro.hyp -``` - -## Citation - -```bibtex -@article{liu2020multilingual, - title={Multilingual Denoising Pre-training for Neural Machine Translation}, - author={Yinhan Liu and Jiatao Gu and Naman Goyal and Xian Li and Sergey Edunov and Marjan Ghazvininejad and Mike Lewis and Luke Zettlemoyer}, - year={2020}, - eprint={2001.08210}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -``` diff --git a/kosmos-g/fairseq/examples/megatron_11b/README.md b/kosmos-g/fairseq/examples/megatron_11b/README.md deleted file mode 100644 index 945c96c91..000000000 --- a/kosmos-g/fairseq/examples/megatron_11b/README.md +++ /dev/null @@ -1,161 +0,0 @@ -# Megatron-11b - -Megatron-11b is a unidirectional language model with `11B` parameters based on [Megatron-LM](https://arxiv.org/pdf/1909.08053.pdf). Following the original Megatron work, we trained the model using intra-layer model parallelism with each layer's parameters split across 8 GPUs. - -Megatron-11b is trained on the same data and uses the same byte-pair encoding (BPE) as [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf). - -## Pre-trained models - -Model | Description | # params | # filesize | Download ----|---|---|---|--- -`megatron_11b` | megatron_11b unidirectional language model | 11B | 19Gb | [megatron_11b.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/model_parallel/megatron_11b.tar.gz) - -#### Architecture: - -Param | Value ----|--- -embed_dim | 3072 -ffn_dim | 3072 * 6 -layers | 72 -attention heads | 32 - -#### Training details: - -Param | value ----|--- -bsz | 512 -num_updates | 300,000 -peak_lr | 1.5e-04 -lr scheduler | inverse_sqrt -clip norm | 0.0 - - -## Example training command (model parallel) - -Megatron-11b contains too many parameters to train on a single GPU. Following -the original Megatron work, we adopt an intra-layer model parallel training -approach in which each layer's parameters are split across multiple GPUs and -activations and gradients are communicated during the forward/backward pass, -respectively. We similarly split the loss computation using the -`vocab_parallel_cross_entropy` criterion. - -The following training command illustrates how to do model parallel training in -fairseq. We assume that each machine (node) has 8 GPUs among which to split the -model parameters (`--model-parallel-size 8`). If you have access to multiple -nodes, you may combine this with data parallel training by increasing -`--distributed-world-size`. 
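-
-To make the intra-layer split concrete, here is a minimal, illustrative sketch of a column-parallel linear layer (our own simplification, not fairseq's actual implementation). Each rank keeps a slice of the weight matrix, computes its shard of the output, and an all-gather reassembles the full activation; this assumes `torch.distributed` is initialized with one process per GPU:
-
-```python
-# Illustrative sketch of intra-layer (tensor) model parallelism; not fairseq code.
-# Forward pass only: a real implementation must also handle the gradient
-# communication in the backward pass, as described above.
-import torch
-import torch.distributed as dist
-import torch.nn as nn
-
-class ColumnParallelLinear(nn.Module):
-    """Each rank holds out_features // world_size columns of the weight."""
-    def __init__(self, in_features, out_features, world_size):
-        super().__init__()
-        assert out_features % world_size == 0
-        self.world_size = world_size
-        self.local = nn.Linear(in_features, out_features // world_size)
-
-    def forward(self, x):
-        y_local = self.local(x)  # each GPU computes only its shard of the output
-        shards = [torch.empty_like(y_local) for _ in range(self.world_size)]
-        dist.all_gather(shards, y_local)  # communicate activations across ranks
-        return torch.cat(shards, dim=-1)  # behaves like an unsharded linear layer
-```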
-
-To train Megatron-11b on a single node:
-
-```bash
-fairseq-train \
-    --distributed-world-size 8 \
-    --memory-efficient-fp16 \
-    --num-workers 2 \
-    --model-parallel-size 8 \
-    --criterion vocab_parallel_cross_entropy \
-    --task language_modeling \
-    --sample-break-mode none \
-    --tokens-per-sample 1024 \
-    --arch transformer_lm_megatron_11b \
-    --share-decoder-input-output-embed \
-    --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-08 --clip-norm 0.0 \
-    --lr-scheduler inverse_sqrt --lr 0.00015 \
-    --warmup-updates 3000 --weight-decay 0.01 \
-    --dropout 0.1 --attention-dropout 0.1 \
-    --batch-size 2 \
-    --max-update 300000;
-```
-
-Note: the above was tested on a `DGX-1` box with `8xV100-32GB` GPUs.
-
-## Results
-
-**[Wikitext103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/)**
-
-Model | Valid perplexity | Test perplexity
----|---|---
-`megatron_11b` | 10.64 | 10.54
-
-
-## Evaluating `megatron_11b` on Wikitext-103
-
-#### 1. Download Megatron-11b
-```bash
-# WARNING: this file is 19GB
-wget https://dl.fbaipublicfiles.com/fairseq/models/model_parallel/megatron_11b.tar.gz
-tar -xzvf megatron_11b.tar.gz
-```
-
-#### 2. Download Wikitext-103
-```bash
-wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip
-unzip wikitext-103-raw-v1.zip
-```
-
-#### 3. Detokenize test tokens
-Megatron-11b uses a byte-level BPE that expects raw (untokenized) input. Since
-the wikitext-103 dataset comes tokenized, we apply a simple detokenization
-process to restore the untokenized test set:
-
-```bash
-python -m examples.megatron_11b.detok wikitext-103-raw/wiki.test.raw > wikitext-103-raw/wiki.test.detok
-```
-
-#### 4. BPE encoding
-```bash
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json'
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe'
-
-python -m examples.roberta.multiprocessing_bpe_encoder \
-    --encoder-json encoder.json \
-    --vocab-bpe vocab.bpe \
-    --inputs "wikitext-103-raw/wiki.test.detok" \
-    --outputs "wikitext-103-raw/wiki.test.bpe" \
-    --workers 60;
-```
-
-#### 5. Fairseq binarize
-```bash
-fairseq-preprocess \
-    --only-source \
-    --testpref wikitext-103-raw/wiki.test.bpe \
-    --srcdict megatron_11b/dict.txt \
-    --destdir wikitext103-bin;
-```
-
-#### 6. Evaluating perplexity
-We can now evaluate perplexity on the test set. Note that because we've modified
-the test set (via detokenization and BPE), the perplexity reported by
-`fairseq-eval-lm` needs to be renormalized.
-
-Compute the unnormalized perplexity:
-
-```bash
-DATA_PATH=wikitext103-bin/
-fairseq-eval-lm \
-  $DATA_PATH \
-  --path megatron_11b/model.pt \
-  --task language_modeling \
-  --gen-subset test \
-  --batch-size 8 \
-  --criterion cross_entropy \
-  --context-window 992 \
-  --distributed-world-size 8 \
-  --model-parallel-size 8;
-# Expected PPL (unnormalized_ppl): [8.46]
-# Note: the eval command needs to run on 8 GPUs for the released model
-```
-
-To renormalize the perplexity, we must account for the change in token count
-after detokenizing and applying BPE. The formula for this is:
-`2 ^ ( log_2(unnormalized_PPL) * (new_token_cnt / orig_token_cnt))`
-
-For the wikitext-103 test set, the original token count is `245566` and the
-token count after detokenization and applying BPE is `270847`.
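-
-As a quick sanity check, the renormalization can be reproduced numerically with the token counts quoted above:
-
-```python
-# Numerical check of the renormalization formula above.
-import math
-
-unnormalized_ppl = 8.46
-orig_token_cnt = 245566  # tokenized wikitext-103 test set
-new_token_cnt = 270847   # after detokenization and BPE
-
-renormalized = 2 ** (math.log2(unnormalized_ppl) * (new_token_cnt / orig_token_cnt))
-print(round(renormalized, 2))  # ~10.54
-```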
- -The perplexity after renormalization is: -`2 ^ ( log_2(8.46) * (270847 / 245566)) = 10.54` diff --git a/kosmos-g/fairseq/examples/megatron_11b/detok.py b/kosmos-g/fairseq/examples/megatron_11b/detok.py deleted file mode 100644 index 49921b28a..000000000 --- a/kosmos-g/fairseq/examples/megatron_11b/detok.py +++ /dev/null @@ -1,32 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import fileinput - -import sacremoses - - -def main(): - parser = argparse.ArgumentParser(description="") - parser.add_argument("files", nargs="*", help="input files") - args = parser.parse_args() - - detok = sacremoses.MosesDetokenizer() - - for line in fileinput.input(args.files, openhook=fileinput.hook_compressed): - print( - detok.detokenize(line.strip().split(" ")) - .replace(" @", "") - .replace("@ ", "") - .replace(" =", "=") - .replace("= ", "=") - .replace(" – ", "–") - ) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/moe_lm/README.md b/kosmos-g/fairseq/examples/moe_lm/README.md deleted file mode 100644 index 64ee57ec2..000000000 --- a/kosmos-g/fairseq/examples/moe_lm/README.md +++ /dev/null @@ -1,126 +0,0 @@ -# Efficient Large Scale Language Modeling with Mixtures of Experts - -## Introduction - -Mixture of Experts layers (MoEs) enable efficient scaling of language models -through conditional computation. This work empirically compares how -autoregressive MoE language models scale in comparison with dense models in a -wide range of settings: in- and out-of-domain language modeling, zero- and -few-shot priming, and full fine-tuning. See the associated paper for more -details. - -This repo contains instructions for reproducing results from the paper. - -## Pre-trained models - -These models are intended for research purposes only in order to reproduce the -results from the paper, and to enable further research on the capabilities and -limitations of language models. Please see the [model card](model_card.md) for -more details about how the models were trained and evaluated, as well as their -limitations and intended use. - -#### Dense models - -Dense models can be run directly from the `main` branch. - -Model | Layers | Model Dim | Languages | Download ----|---|---|---|--- -`dense_125m` | 12 | 768 | English | [en_dense_lm_125m.tar.gz (0.2GB)](https://dl.fbaipublicfiles.com/fairseq/models/lm/en_dense_lm_125m.tar.gz) -`dense_355m` | 24 | 1024 | English | [en_dense_lm_355m.tar.gz (0.6GB)](https://dl.fbaipublicfiles.com/fairseq/models/lm/en_dense_lm_355m.tar.gz) -`dense_1_3b` | 24 | 2048 | English | [en_dense_lm_1_3b.tar.gz (2.3GB)](https://dl.fbaipublicfiles.com/fairseq/models/lm/en_dense_lm_1_3b.tar.gz) -`dense_2_7b` | 32 | 2560 | English | [en_dense_lm_2_7b.tar.gz (4.6GB)](https://dl.fbaipublicfiles.com/fairseq/models/lm/en_dense_lm_2_7b.tar.gz) -`dense_6_7b` | 32 | 4096 | English | [en_dense_lm_6_7b.tar.gz (12GB)](https://dl.fbaipublicfiles.com/fairseq/models/lm/en_dense_lm_6_7b.tar.gz) -`dense_13b` | 40 | 5120 | English | [en_dense_lm_13b.tar.gz (23GB)](https://dl.fbaipublicfiles.com/fairseq/models/lm/en_dense_lm_13b.tar.gz) - -#### Mixture of expert models - -MoE models must be run from the `moe` branch. Please see the -[MoE README](https://github.com/pytorch/fairseq/tree/moe#evaluating-moe-language-models) -for more details about how to load and evaluate MoE models. 
-
-Model | Layers | Model Dim | Languages | Download
----|---|---|---|---
-`moe_15b` | 12 | 768 | English | [en_moe_lm_15b.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/lm/en_moe_lm_15b.tar.gz)
-`moe_52b` | 24 | 1024 | English | [en_moe_lm_52b.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/lm/en_moe_lm_52b.tar.gz)
-`moe_207b` | 24 | 2048 | English | Available by request
-`moe_1_1t` | 32 | 4096 | English | Available by request
-
-## Evaluation
-
-### Example (COPA)
-
-The following snippet shows how to evaluate our dense models on the [Choice of Plausible Alternatives (COPA)](https://people.ict.usc.edu/~gordon/copa.html) task.
-
-```python
-from fairseq.models.transformer_lm import TransformerLanguageModel
-model_dir = '/path/to/en_dense_lm_125m'
-lm = TransformerLanguageModel.from_pretrained(model_dir, bpe='gpt2')
-lm = lm.eval();  # disable dropout
-lm = lm.half();  # use FP16 for evaluation
-lm = lm.cuda();  # move to GPU
-
-def get_logprobs(prompt):
-    import re
-    prompt = re.sub('\n+', '\n', prompt)  # collapse repeated newlines, which indicate separate documents
-    return lm.score(prompt, replace_newlines_with_eos=True)['positional_scores']
-
-# Zero-shot evaluation for the Choice of Plausible Alternatives (COPA) task.
-# A return value of 1 indicates that the first alternative is more plausible,
-# while 2 indicates that the second alternative is more plausible.
-def COPA_eval(prompt, alternative1, alternative2):
-    lprob1 = get_logprobs(prompt + "\n" + alternative1).sum()
-    lprob2 = get_logprobs(prompt + "\n" + alternative2).sum()
-    return 1 if lprob1 > lprob2 else 2
-
-COPA_eval("The man broke his toe. What was the CAUSE of this?", "He got a hole in his sock.", "He dropped a hammer on his foot.")
-# 2
-COPA_eval("I tipped the bottle. What happened as a RESULT?", "The liquid in the bottle froze.", "The liquid in the bottle poured out.")
-# 2
-COPA_eval("I knocked on my neighbor's door. What happened as a RESULT?", "My neighbor invited me in.", "My neighbor left his house.")
-# 1
-```
-
-### Data format
-
-Few-shot prompting is known to be sensitive to the input formatting, and it is usually best to match the formatting used in pretraining.
-
-During pretraining our models were presented with data in the following format (i.e., one paragraph per line, with a blank line separating documents):
-```
-<paragraph 0 of document 0> ...
-<paragraph 1 of document 0> ...
-
-<paragraph 0 of document 1> ...
-...
-```
-
-#### Newlines
-
-While we use the byte-level BPE from GPT-2/3, fairseq's preprocessing replaces newlines with the end-of-sentence symbol (`</s>`), which corresponds to embedding index `2`.
-Thus **the model never saw newline characters during pretraining** and newlines should not be used during few-shot prompting.
-
-This is more clearly illustrated in the following example, which uses fairseq's Hub Interface to tokenize two documents in the desired format:
-```python
-from fairseq.models.transformer_lm import TransformerLanguageModel
-model_dir = '/path/to/en_dense_lm_125m'
-lm = TransformerLanguageModel.from_pretrained(model_dir, bpe='gpt2')
-
-data = """\
-This is the first paragraph of the first document.
-This is the second paragraph of the first document.
-
-This is the first paragraph of the second document.\
-"""
-
-# The following is wrong, since it will encode newlines present in `data`.
-tokens_bad = lm.score(data)['tokens']
-assert '\n' in lm.decode(tokens_bad)  # oops, we encoded a newline
-
-# Instead pass the replace_newlines_with_eos option to get the correct behavior.
-tokens_good = lm.score(data, replace_newlines_with_eos=True)['tokens']
-assert '\n' not in lm.decode(tokens_good)  # no newlines were encoded
-```
-
-## Citation
-
-Coming soon.
diff --git a/kosmos-g/fairseq/examples/moe_lm/data_card.md b/kosmos-g/fairseq/examples/moe_lm/data_card.md
deleted file mode 100644
index a69cdc974..000000000
--- a/kosmos-g/fairseq/examples/moe_lm/data_card.md
+++ /dev/null
@@ -1,221 +0,0 @@
-# Data card for the paper "Efficient Large Scale Language Modeling with Mixtures of Experts"
-## Version 1.0.0
-
-We follow the recommendations of Gebru et al. (2018) and provide a datacard for the dataset used to train the 1.1T parameter model.
-
-## Motivation
-* **For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description.**
-The pre-training data for training the 1.1T model was created by a union of six English language datasets, including five datasets used by RoBERTa (Liu et al., 2019) and the English subset of CC100. The purpose of creating this dataset was to pre-train the language model.
-
-* **Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?**
-Meta AI.
-
-* **Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number.**
-Meta AI.
-
-* **Any other comments?**
-No.
-
-## Composition
-
-* **What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description.**
-The instances are textual documents. The overall dataset is composed of a union of the following datasets:
-
- * BookCorpus (Zhu et al., 2019) consists of more than 10K unpublished books (4GB);
- * English Wikipedia, excluding lists, tables and headers (12GB);
- * CC-News (Nagel, 2016) contains 63 million English news articles crawled between September 2016 and February 2019 (76GB);
- * OpenWebText (Gokaslan and Cohen, 2019), an open source recreation of the WebText dataset used to train GPT-2 (38GB);
- * CC-Stories (Trinh and Le, 2018) contains a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas (31GB);
- * English CC100 (Wenzek et al., 2020), a dataset extracted from CommonCrawl snapshots between January 2018 and December 2018, filtered to match the style of Wikipedia (292GB).
- -* **How many instances are there in total (of each type, if appropriate)?** -The training data contains 112B tokens corresponding to 453 GB of data. - -* **Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable).** -The English CC100 section of the dataset is a subset of CommonCrawl snapshots extracted between January 2018 to December 2018, filtered to match the style of Wikipedia. The CC-stories dataset contains a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas. - -* **What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description.** -Each instance consists of raw text data. - -* **Is there a label or target associated with each instance? If so, please provide a description.** -No. - -* **Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text.** -No. - -* **Are relationships between individual instances made explicit (e.g., users' movie ratings, social network links)? If so, please describe how these relationships are made explicit.** -There are no explicit relationships between individual instances. - -* **Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them.** -We hold out a random validation set of approximately 150MB from the pretraining data, sampled proportionally to each dataset's size in the pretraining corpus. - -* **Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description.** -N/A - -* **Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)?** -It's self-contained. - -* **Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals' non-public communications)? If so, please provide a description.** -The datasets used are publicly available, and the information in them is not considered confidential. - -* **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why.** -Parts of the dataset are a subset of public Common Crawl data, which could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety. - -* **Does the dataset relate to people? If not, you may skip the remaining questions in this section.** -Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc. - -* **Does the dataset identify any subpopulations (e.g., by age, gender)? 
If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset.** -No. - -* **Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? If so, please describe how** -In addition to individuals who have Wikipedia pages (celebrities, politicians, etc.), it may be possible to identify other individuals by their names, Twitter account names, etc. if that information is present in Common Crawl. - -* **Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)? If so, please provide a description.** -The training dataset is partially derived from Common Crawl, which may contain some sensitive information. - -* **Any other comments?** -No - - -## Collection Process - -* **How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/ derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how.** -N/A. The dataset is a union of six publicly available datasets. - -* **What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated?** -N/A - -* **If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)?** -Please refer to the main document for details. - -* **Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?** -This data is mined, filtered and sampled by machines. - -* **Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created.** -Different parts of the dataset were mined over different time periods. -1. The CC-News dataset contains English news articles crawled between September 2016 and February 2019. -2. The English CC-100 dataset was extracted from CommonCrawl snapshots between January 2018 and December 2018. - -* **Were any ethical review processes conducted (e.g., by an institutional review board)? If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation.** -No. - -* **Does the dataset relate to people? If not, you may skip the remainder of the questions in this section.** -No. - -* **Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)?** -N/A - -* **Were the individuals in question notified about the data collection? 
If so, please describe (or show with screenshots or other information) how notice was provided, and provide a link or other access point to, or otherwise reproduce, the exact language of the notification itself.**
-N/A
-
-* **Did the individuals in question consent to the collection and use of their data? If so, please describe (or show with screenshots or other information) how consent was requested and provided, and provide a link or other access point to, or otherwise reproduce, the exact language to which the individuals consented.**
-N/A
-
-* **If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses? If so, please provide a description, as well as a link or other access point to the mechanism (if appropriate).**
-N/A
-
-* **Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation.**
-Some responsible AI related evaluations were performed. Please refer to the main document and the model card for the paper.
-
-* **Any other comments?**
-No.
-
-
-## Preprocessing/cleaning/labeling
-
-
-* **Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section.**
-The component datasets went through standard cleaning and re-formatting practices, including removing repetitive/non-informative text like "Chapter One" or "This ebook by Project Gutenberg".
-
-* **Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data.**
-The "raw" component datasets are publicly available in their respective locations (more details can be seen in the respective papers linked in the references).
-
-* **Is the software used to preprocess/clean/label the instances available? If so, please provide a link or other access point.**
-The software is proprietary to Meta Platforms and currently unavailable publicly.
-
-* **Any other comments?**
-No.
-
-
-## Uses
-
-* **Has the dataset been used for any tasks already? If so, please provide a description.**
-Yes, this dataset was used to pre-train the models described in the paper.
-
-* **Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point.**
-No.
-
-* **What (other) tasks could the dataset be used for?**
-This data can be used to pretrain English language models, which are foundational to many current and future language tasks.
-
-* **Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks)? If so, please provide a description.
Is there anything a future user could do to mitigate these undesirable harms?**
-The pipeline for creating this dataset paves a way for building a scalable infrastructure for mining datasets to be used for training large-scale models.
-
-* **Are there tasks for which the dataset should not be used? If so, please provide a description.**
-No.
-
-* **Any other comments?**
-No.
-
-## Distribution
-
-
-* **Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description.**
-No.
-
-* **How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)?**
-N/A
-
-* **When will the dataset be distributed?**
-No.
-
-* **Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions.**
-No.
-
-* **Have any third parties imposed IP-based or other restrictions on the data associated with the instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms, as well as any fees associated with these restrictions.**
-No.
-
-* **Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation.**
-N/A
-
-* **Any other comments?**
-No.
-
-## Maintenance
-
-* **Who is supporting/hosting/maintaining the dataset?**
-Meta AI.
-
-* **How can the owner/curator/manager of the dataset be contacted (e.g., email address)?**
-Refer to the main document.
-
-* **Is there an erratum? If so, please provide a link or other access point.**
-N/A
-
-* **Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)?**
-No plan for updating.
-
-* **If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced.**
-N/A
-
-* **Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users.**
-N/A
-
-* **If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/ verified? If so, please describe how. If not, why not? Is there a process for communicating/ distributing these contributions to other users? If so, please provide a description.**
-No.
-
-* **Any other comments?**
-No.
-
-## References
-Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
-
-Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2019. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. arXiv:1506.06724.
-
-Sebastian Nagel. 2016. CC-News. http://web.archive.org/save/http://commoncrawl.org/2016/10/news-dataset-available.
-
-Aaron Gokaslan and Vanya Cohen. 2019. OpenWebText corpus. http://web.archive.org/save/http://Skylion007.github.io/OpenWebTextCorpus
-
-Trieu H Trinh and Quoc V Le. 2018. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847.
-
-Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4003–4012, Marseille, France. European Language Resources Association.
-
diff --git a/kosmos-g/fairseq/examples/moe_lm/model_card.md b/kosmos-g/fairseq/examples/moe_lm/model_card.md
deleted file mode 100644
index 22cd551f8..000000000
--- a/kosmos-g/fairseq/examples/moe_lm/model_card.md
+++ /dev/null
@@ -1,170 +0,0 @@
-# Model card for the paper "Efficient Large Scale Language Modeling with Mixtures of Experts"
-## Version 1.0.0
-
-### Model developer
-Meta AI
-
-### Model type
-An autoregressive English language model trained on a union of six English language datasets. We explore dense and sparse (MoE-based) architectures in the paper.
-* Dense models - Our dense models range from 125M parameters to 13B parameters.
-* Sparse (MoE) models - Our MoE-based models range from 15B parameters to 1.1 Trillion parameters.
-This model card focuses on the 1.1 Trillion parameter model, but the discussion
-applies to all of the models explored in this work.
-
-### Citation details
-Artetxe et al. (2021): Efficient Large Scale Language Modeling with Mixtures of Experts
-
-### Model Feedback Channel
-fairseq
-
-## Intended use
-### Primary intended use
-For research purposes only, e.g. reproducing model evaluation results. Generation is only used in a limited capacity for explanation/justification or for prompting/probing/priming for class labels.
-
-### Out of scope uses
-The primary purpose of the model is not to generate language, although the model is capable of doing that.
-
-## Factors influencing model performance
-This section discusses potential risks associated with using the model.
-
-### Relevant factors
-Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion).
-
-### Evaluation factors
-The 1.1T model was evaluated on the StereoSet and CrowS-Pairs datasets to quantify the bias encoded in the model.
-
-## Metrics
-### Model performance measures
-The 1.1T parameter model was primarily evaluated on:
-1. In-domain and out-of-domain language modeling perplexity.
-2. Zero-shot and few-shot priming.
-3. Fully supervised finetuning.
-
-### Approaches to handle uncertainty
-For few-shot learning, we report the average results across 25 runs, randomly sampling a different set of few-shot examples from the training set each time.
-
-## Evaluation data
-## Zero Shot evaluation
-
-### HellaSwag
-#### Description
-HellaSwag is a dataset for evaluating commonsense reasoning.
-
-### PIQA
-#### Description
-PIQA is a dataset designed to evaluate reasoning about physical commonsense in natural language.
-
-### ReCoRD
-#### Description
-Reading Comprehension with Commonsense Reasoning Dataset (ReCoRD) is a large-scale reading comprehension dataset which requires commonsense reasoning. ReCoRD consists of queries automatically generated from CNN/Daily Mail news articles; the answer to each query is a text span from a summarizing passage of the corresponding news. The goal of ReCoRD is to evaluate a machine's ability to do commonsense reasoning in reading comprehension.
-
-## Few Shot evaluation
-### Winogrande
-#### Description
-Winogrande is a benchmark for commonsense reasoning. The dataset contains pronoun resolution problems originally designed to be unsolvable for statistical models that rely on selectional preferences or word associations.
-
-### StoryCloze
-#### Description
-StoryCloze is a new commonsense reasoning framework for evaluating story understanding, story generation, and script learning. This test requires a system to choose the correct ending to a four-sentence story.
-
-### OpenBookQA
-#### Description
-OpenBookQA is a new kind of question-answering dataset modeled after open book exams for assessing human understanding of a subject. It consists of 5,957 multiple-choice elementary-level science questions (4,957 train, 500 dev, 500 test), which probe the understanding of a small “book” of 1,326 core science facts and the application of these facts to novel situations.
-
-## Fully supervised evaluation
-
-### BoolQ
-#### Description
-BoolQ is a question answering dataset for yes/no questions containing 15,942 examples. These questions are naturally occurring – they are generated in unprompted and unconstrained settings. Each example is a triplet of (question, passage, answer), with the title of the page as optional additional context.
-
-### SST-2
-#### Description
-SST-2 (or SST-binary) is a binary classification dataset where the goal is to differentiate between negative or somewhat negative vs somewhat positive or positive.
-
-### MNLI
-#### Description
-The Multi-Genre Natural Language Inference (MultiNLI) corpus is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The corpus is modeled on the SNLI corpus, but differs in that it covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalization evaluation.
-
-## Responsible AI (RAI) evaluation
-### StereoSet
-#### Description
-A large-scale natural dataset in English to measure stereotypical biases in four domains: gender, profession, race, and religion.
-
-#### Motivation for dataset use
-The motivation for evaluating the 1.1T parameter model on this dataset is to evaluate the model's stereotype bias in gender, profession, race, and religion.
-
-### CrowS
-#### Description
-A challenge dataset for measuring social biases in masked language models.
-
-#### Motivation for dataset use
-The motivation for evaluating the 1.1T parameter model on this dataset is to evaluate the model's bias in the domains of race, religion and age.
-
-----
-
-## Training data
-### BookCorpus
-#### Description
-A dataset consisting of more than 10K unpublished books. 4GB in size. (Zhu et al., 2019)
-
-### English Wikipedia
-#### Description
-Data from English Wikipedia, excluding lists, tables and headers. 12GB in size.
-
-### CC-News
-#### Description
-A dataset containing 63 million English news articles crawled between September 2016 and February 2019. 76GB in size. (Nagel, 2016)
-
-### OpenWebText
-#### Description
-An open source recreation of the WebText dataset used to train GPT-2. 38GB in size. (Gokaslan and Cohen, 2019)
-
-### CC-Stories
-#### Description
-A dataset containing a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas. 31GB in size. (Trinh and Le, 2018)
-
-### English CC100
-#### Description
-A dataset extracted from CommonCrawl snapshots between January 2018 and December 2018, filtered to match the style of Wikipedia following the methodology introduced in CCNet (https://arxiv.org/abs/1911.00359). 292GB in size. (Wenzek et al., 2020)
-
-## Responsible AI (RAI) Dimensions
-### Fairness (Bias and inclusion)
-The 1.1T parameter model was evaluated on the StereoSet and CrowS-Pairs datasets for inherent bias in the model, and bias as a result of the data. On StereoSet, we observe that both the dense and MoE models get worse in terms of the Stereotype Score (SS) with scale.
-
-### Privacy and security
-The 1.1T model did not have any special privacy and security considerations. The training data and evaluation data were both public and went through standard Meta AI privacy and licensing procedures.
-
-### Transparency and control
-In the spirit of transparency and accountability we have created this model card for the 1.1T parameter model and a data card for the training data (referenced in Artetxe et al. (2021)).
-
-### Efficiency (Green AI)
-The 1.1T parameter model is trained as a Mixture of Experts (MoE) model. MoE models are efficient because they leverage sparse computation, i.e., only a small fraction of the parameters are active for any given input. For instance, our 1.1T parameter MoE model requires only 30% more FLOPs than a 6.7B parameter dense model, i.e., a 160x increase in parameters with only a 30% increase in FLOPs. Notably, MoE models achieve much better validation perplexity for a given compute budget compared to dense models.
-
-## References
-Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791–4800, Florence, Italy. Association for Computational Linguistics.
-
-Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2020. PIQA: Reasoning about physical commonsense in natural language. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):7432–7439.
-
-Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. ReCoRD: Bridging the gap between human and machine commonsense reading comprehension. arXiv preprint 1810.12885.
-
-Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. WinoGrande: An adversarial Winograd Schema Challenge at scale. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8732–8740.
-
-Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839–849, San Diego, California. Association for Computational Linguistics.
-
-Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? A new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381–2391, Brussels, Belgium. Association for Computational Linguistics.
-
-Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions.
-
-Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In Association for Computational Linguistics (ACL).
-
-Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-Pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953–1967, Online. Association for Computational Linguistics.
-
-Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2019. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. arXiv:1506.06724.
-
-Sebastian Nagel. 2016. CC-News. http://web.archive.org/save/http://commoncrawl.org/2016/10/news-dataset-available.
-
-Aaron Gokaslan and Vanya Cohen. 2019. OpenWebText corpus. http://web.archive.org/save/http://Skylion007.github.io/OpenWebTextCorpus
-
-Trieu H Trinh and Quoc V Le. 2018. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847.
-
-Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4003–4012, Marseille, France. European Language Resources Association.
diff --git a/kosmos-g/fairseq/examples/multilingual/ML50_langs.txt b/kosmos-g/fairseq/examples/multilingual/ML50_langs.txt
deleted file mode 100644
index 558abbc78..000000000
--- a/kosmos-g/fairseq/examples/multilingual/ML50_langs.txt
+++ /dev/null
@@ -1,52 +0,0 @@
-ar_AR
-cs_CZ
-de_DE
-en_XX
-es_XX
-et_EE
-fi_FI
-fr_XX
-gu_IN
-hi_IN
-it_IT
-ja_XX
-kk_KZ
-ko_KR
-lt_LT
-lv_LV
-my_MM
-ne_NP
-nl_XX
-ro_RO
-ru_RU
-si_LK
-tr_TR
-vi_VN
-zh_CN
-af_ZA
-az_AZ
-bn_IN
-fa_IR
-he_IL
-hr_HR
-id_ID
-ka_GE
-km_KH
-mk_MK
-ml_IN
-mn_MN
-mr_IN
-pl_PL
-ps_AF
-pt_XX
-sv_SE
-sw_KE
-ta_IN
-te_IN
-th_TH
-tl_XX
-uk_UA
-ur_PK
-xh_ZA
-gl_ES
-sl_SI
\ No newline at end of file
diff --git a/kosmos-g/fairseq/examples/multilingual/README.md b/kosmos-g/fairseq/examples/multilingual/README.md
deleted file mode 100644
index 46ff9c351..000000000
--- a/kosmos-g/fairseq/examples/multilingual/README.md
+++ /dev/null
@@ -1,158 +0,0 @@
-# Multilingual Translation
-
-[[Multilingual Translation with Extensible Multilingual Pretraining and Finetuning, https://arxiv.org/abs/2008.00401]](https://arxiv.org/abs/2008.00401)
-
-## Introduction
-
-This work is for training multilingual translation models with multiple bitext datasets. This multilingual translation framework supports (see the [[training section]](#Training) and [[finetuning section]](#Finetuning) for examples; a small numerical sketch of temperature-based sampling follows this list):
-
-* temperature-based sampling over unbalanced datasets of different translation directions
-  - `--sampling-method` with choices=['uniform', 'temperature', 'concat']
-  - `--sampling-temperature`
-* configurable to automatically add source and/or target language tokens to source/target sentences, using data prepared in the same way as for bilingual training
-  - `--encoder-langtok` with choices=['src', 'tgt', None] to specify whether to add source or target language tokens to the source sentences
-  - `--decoder-langtok` (binary option) to specify whether to add target language tokens to the target sentences or not
-* finetuning mBART pretrained models for multilingual translation
-  - `--finetune-from-model` to specify the path from which to load the pretrained model
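-
-As a concrete illustration of what `--sampling-temperature` does, here is a small, self-contained sketch (our own illustration, not fairseq code) of the induced sampling distribution, using hypothetical bitext sizes:
-
-```python
-# Illustrative only, not fairseq code: temperature-based sampling probabilities,
-# p_i proportional to (n_i / N) ** (1 / T). T=1 keeps size-proportional sampling;
-# larger T moves the distribution toward uniform, upweighting low-resource pairs.
-def temperature_sampling_probs(sizes, temperature=1.5):
-    total = sum(sizes)
-    weights = [(n / total) ** (1.0 / temperature) for n in sizes]
-    z = sum(weights)
-    return [w / z for w in weights]
-
-# hypothetical bitext sizes for three translation directions
-print(temperature_sampling_probs([10_000_000, 500_000, 50_000], temperature=1.5))
-```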
-
-## Preprocessing data
-Multilingual training requires a joint BPE vocabulary. Please follow [mBART's preprocessing steps](https://github.com/pytorch/fairseq/tree/main/examples/mbart#bpe-data) to reuse our pretrained sentence-piece model.
-
-You can also train a joint BPE model on your own dataset and then follow the steps in [[link]](https://github.com/pytorch/fairseq/tree/main/examples/translation#multilingual-translation).
-
-## Training
-
-```bash
-lang_pairs=<comma-separated language pairs, e.g. "en-cs,cs-en">
-path_2_data=<set to data path>
-lang_list=<a file which contains a list of languages separated by new lines>
-
-fairseq-train $path_2_data \
-  --encoder-normalize-before --decoder-normalize-before \
-  --arch transformer --layernorm-embedding \
-  --task translation_multi_simple_epoch \
-  --sampling-method "temperature" \
-  --sampling-temperature 1.5 \
-  --encoder-langtok "src" \
-  --decoder-langtok \
-  --lang-dict "$lang_list" \
-  --lang-pairs "$lang_pairs" \
-  --criterion label_smoothed_cross_entropy --label-smoothing 0.2 \
-  --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \
-  --lr-scheduler inverse_sqrt --lr 3e-05 --warmup-updates 2500 --max-update 40000 \
-  --dropout 0.3 --attention-dropout 0.1 --weight-decay 0.0 \
-  --max-tokens 1024 --update-freq 2 \
-  --save-interval 1 --save-interval-updates 5000 --keep-interval-updates 10 --no-epoch-checkpoints \
-  --seed 222 --log-format simple --log-interval 2
-```
-
-## Finetuning
-We can also finetune multilingual models from monolingual pretrained models, e.g. [mBART](https://github.com/pytorch/fairseq/tree/main/examples/mbart).
-```bash
-lang_pairs=<comma-separated language pairs, e.g. "en-cs,cs-en">
-path_2_data=<set to data path>
-lang_list=<a file which contains a list of languages separated by new lines>
-pretrained_model=<path to the pretrained model checkpoint>
-
-fairseq-train $path_2_data \
-  --finetune-from-model $pretrained_model \
-  --encoder-normalize-before --decoder-normalize-before \
-  --arch transformer --layernorm-embedding \
-  --task translation_multi_simple_epoch \
-  --sampling-method "temperature" \
-  --sampling-temperature 1.5 \
-  --encoder-langtok "src" \
-  --decoder-langtok \
-  --lang-dict "$lang_list" \
-  --lang-pairs "$lang_pairs" \
-  --criterion label_smoothed_cross_entropy --label-smoothing 0.2 \
-  --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \
-  --lr-scheduler inverse_sqrt --lr 3e-05 --warmup-updates 2500 --max-update 40000 \
-  --dropout 0.3 --attention-dropout 0.1 --weight-decay 0.0 \
-  --max-tokens 1024 --update-freq 2 \
-  --save-interval 1 --save-interval-updates 5000 --keep-interval-updates 10 --no-epoch-checkpoints \
-  --seed 222 --log-format simple --log-interval 2
-```
-## Generate
-The following command uses the multilingual task (translation_multi_simple_epoch) to generate translations from $source_lang to $target_lang on the test dataset.
-## Generate
-The following command uses the multilingual task (translation_multi_simple_epoch) to generate translations from $source_lang to $target_lang on the test dataset. During generation, the source language tokens are added to the source sentences, and the target language token is used as the starting token when decoding the target sentences. The options --lang-dict and --lang-pairs tell the generation process the ordered list of languages and the translation directions that the trained model is aware of; they need to be consistent with training.
-
-```bash
-model=<path to the trained model checkpoint>
-source_lang=<source language>
-target_lang=<target language>
-
-fairseq-generate $path_2_data \
-  --path $model \
-  --task translation_multi_simple_epoch \
-  --gen-subset test \
-  --source-lang $source_lang \
-  --target-lang $target_lang \
-  --sacrebleu --remove-bpe 'sentencepiece' \
-  --batch-size 32 \
-  --encoder-langtok "src" \
-  --decoder-langtok \
-  --lang-dict "$lang_list" \
-  --lang-pairs "$lang_pairs" > ${source_lang}_${target_lang}.txt
-```
-Fairseq will write the translations into the file ${source_lang}_${target_lang}.txt, with the sacrebleu score at the end.
-
-You can also use a customized tokenizer to compare performance with the literature. For example, you can get a tokenizer [here](https://github.com/rsennrich/wmt16-scripts) and do the following:
-```bash
-TOKENIZER=<path to a customized tokenizer>
-TOK_CMD=<"$TOKENIZER $target_lang" or cat for sacrebleu>
-
-cat ${source_lang}_${target_lang}.txt | grep -P "^H" | sort -V | cut -f 3- | $TOK_CMD > ${source_lang}_${target_lang}.hyp
-cat ${source_lang}_${target_lang}.txt | grep -P "^T" | sort -V | cut -f 2- | $TOK_CMD > ${source_lang}_${target_lang}.ref
-sacrebleu -tok 'none' -s 'none' ${source_lang}_${target_lang}.ref < ${source_lang}_${target_lang}.hyp
-```
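The grep/sort/cut pipeline above relies on the layout of fairseq-generate output, where hypothesis lines look like `H-<id>\t<score>\t<text>` and reference lines like `T-<id>\t<text>`. A small Python equivalent of the extraction step (an illustrative sketch, not part of the repo; it does not apply any extra tokenization):

```python
# Split a fairseq-generate log into hypothesis and reference files,
# sorted by numeric sentence id so the two files stay line-aligned.
def split_generate_output(gen_path, hyp_path, ref_path):
    hyps, refs = {}, {}
    with open(gen_path, encoding="utf-8") as f:
        for line in f:
            if line.startswith("H-"):
                idx, _score, text = line.rstrip("\n").split("\t", 2)
                hyps[int(idx[2:])] = text
            elif line.startswith("T-"):
                idx, text = line.rstrip("\n").split("\t", 1)
                refs[int(idx[2:])] = text
    with open(hyp_path, "w", encoding="utf-8") as out:
        out.writelines(hyps[i] + "\n" for i in sorted(hyps))
    with open(ref_path, "w", encoding="utf-8") as out:
        out.writelines(refs[i] + "\n" for i in sorted(refs))
```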
-# mBART50 models
-
-* [mBART50 pretrained model](https://dl.fbaipublicfiles.com/fairseq/models/mbart50/mbart50.pretrained.tar.gz).
-* [mBART50 finetuned many-to-one](https://dl.fbaipublicfiles.com/fairseq/models/mbart50/mbart50.ft.n1.tar.gz).
-* [mBART50 finetuned one-to-many](https://dl.fbaipublicfiles.com/fairseq/models/mbart50/mbart50.ft.1n.tar.gz).
-* [mBART50 finetuned many-to-many](https://dl.fbaipublicfiles.com/fairseq/models/mbart50/mbart50.ft.nn.tar.gz).
-
-Please download and extract the above tarballs. Each tarball contains:
-* the fairseq model checkpoint: model.pt
-* the list of supported languages: ML50_langs.txt
-* the sentence-piece model: sentence.bpe.model
-* the fairseq dictionary of each language: dict.{lang}.txt (replace lang with a language code listed in ML50_langs.txt)
-
-To use the trained models:
-* use the tool [binarize.py](./data_scripts/binarize.py) to binarize your data using sentence.bpe.model and dict.{lang}.txt, and copy the dictionaries to your data path
-* then run the generation command:
-```bash
-path_2_data=<path to your binarized data>
-model=<path to the extracted tarball>/model.pt
-lang_list=<path to the extracted tarball>/ML50_langs.txt
-source_lang=<source language>
-target_lang=<target language>
-
-fairseq-generate $path_2_data \
-  --path $model \
-  --task translation_multi_simple_epoch \
-  --gen-subset test \
-  --source-lang $source_lang \
-  --target-lang $target_lang \
-  --sacrebleu --remove-bpe 'sentencepiece' \
-  --batch-size 32 \
-  --encoder-langtok "src" \
-  --decoder-langtok \
-  --lang-dict "$lang_list"
-```
-
-## Citation
-
-```bibtex
-@article{tang2020multilingual,
-    title={Multilingual Translation with Extensible Multilingual Pretraining and Finetuning},
-    author={Yuqing Tang and Chau Tran and Xian Li and Peng-Jen Chen and Naman Goyal and Vishrav Chaudhary and Jiatao Gu and Angela Fan},
-    year={2020},
-    eprint={2008.00401},
-    archivePrefix={arXiv},
-    primaryClass={cs.CL}
-}
-```
diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/README.md b/kosmos-g/fairseq/examples/multilingual/data_scripts/README.md
deleted file mode 100644
index cc610c0c9..000000000
--- a/kosmos-g/fairseq/examples/multilingual/data_scripts/README.md
+++ /dev/null
@@ -1,24 +0,0 @@
-
-# Install dependency
-```bash
-pip install -r requirement.txt
-```
-
-# Download the data set
-```bash
-export WORKDIR_ROOT=<a directory to hold all data and intermediate files>
-
-```
-The downloaded data will be at $WORKDIR_ROOT/ML50
-
-# Preprocess the data
-Install SPM [here](https://github.com/google/sentencepiece)
-```bash
-export WORKDIR_ROOT=<a directory to hold all data and intermediate files>
-export SPM_PATH=<path to the installed spm_encode.py>
-```
-* $WORKDIR_ROOT/ML50/raw: extracted raw data
-* $WORKDIR_ROOT/ML50/dedup: deduplicated data
-* $WORKDIR_ROOT/ML50/clean: data with valid and test sentences removed from the dedup data
-
-
diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/binarize.py b/kosmos-g/fairseq/examples/multilingual/data_scripts/binarize.py
deleted file mode 100644
index ee54c6aab..000000000
--- a/kosmos-g/fairseq/examples/multilingual/data_scripts/binarize.py
+++ /dev/null
@@ -1,200 +0,0 @@
-import shutil
-import os, sys
-from subprocess import check_call, check_output
-import glob
-import argparse
-import pathlib
-import itertools
-
-def call_output(cmd):
-    print(f"Executing: {cmd}")
-    ret = check_output(cmd, shell=True)
-    print(ret)
-    return ret
-
-def call(cmd):
-    print(cmd)
-    check_call(cmd, shell=True)
-
-
-WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None)
-
-if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip():
-    print('please specify your working directory root in OS environment variable WORKDIR_ROOT. Exiting...')
-    sys.exit(-1)
-
-SPM_PATH = os.environ.get('SPM_PATH', None)
-
-if SPM_PATH is None or not SPM_PATH.strip():
-    print("Please install sentencepiece from https://github.com/google/sentencepiece and set SPM_PATH pointing to the installed spm_encode.py. 
Exitting...") - sys.exit(-1) - - -SPM_MODEL = f'{WORKDIR_ROOT}/sentence.bpe.model' -SPM_VOCAB = f'{WORKDIR_ROOT}/dict_250k.txt' - -SPM_ENCODE = f'{SPM_PATH}' - -if not os.path.exists(SPM_MODEL): - call(f"wget https://dl.fbaipublicfiles.com/fairseq/models/mbart50/sentence.bpe.model -O {SPM_MODEL}") - - -if not os.path.exists(SPM_VOCAB): - call(f"wget https://dl.fbaipublicfiles.com/fairseq/models/mbart50/dict_250k.txt -O {SPM_VOCAB}") - - - -def get_data_size(raw): - cmd = f'wc -l {raw}' - ret = call_output(cmd) - return int(ret.split()[0]) - -def encode_spm(model, direction, prefix='', splits=['train', 'test', 'valid'], pairs_per_shard=None): - src, tgt = direction.split('-') - - for split in splits: - src_raw, tgt_raw = f'{RAW_DIR}/{split}{prefix}.{direction}.{src}', f'{RAW_DIR}/{split}{prefix}.{direction}.{tgt}' - if os.path.exists(src_raw) and os.path.exists(tgt_raw): - cmd = f"""python {SPM_ENCODE} \ - --model {model}\ - --output_format=piece \ - --inputs {src_raw} {tgt_raw} \ - --outputs {BPE_DIR}/{direction}{prefix}/{split}.bpe.{src} {BPE_DIR}/{direction}{prefix}/{split}.bpe.{tgt} """ - print(cmd) - call(cmd) - - -def binarize_( - bpe_dir, - databin_dir, - direction, spm_vocab=SPM_VOCAB, - splits=['train', 'test', 'valid'], -): - src, tgt = direction.split('-') - - try: - shutil.rmtree(f'{databin_dir}', ignore_errors=True) - os.mkdir(f'{databin_dir}') - except OSError as error: - print(error) - cmds = [ - "fairseq-preprocess", - f"--source-lang {src} --target-lang {tgt}", - f"--destdir {databin_dir}/", - f"--workers 8", - ] - if isinstance(spm_vocab, tuple): - src_vocab, tgt_vocab = spm_vocab - cmds.extend( - [ - f"--srcdict {src_vocab}", - f"--tgtdict {tgt_vocab}", - ] - ) - else: - cmds.extend( - [ - f"--joined-dictionary", - f"--srcdict {spm_vocab}", - ] - ) - input_options = [] - if 'train' in splits and glob.glob(f"{bpe_dir}/train.bpe*"): - input_options.append( - f"--trainpref {bpe_dir}/train.bpe", - ) - if 'valid' in splits and glob.glob(f"{bpe_dir}/valid.bpe*"): - input_options.append(f"--validpref {bpe_dir}/valid.bpe") - if 'test' in splits and glob.glob(f"{bpe_dir}/test.bpe*"): - input_options.append(f"--testpref {bpe_dir}/test.bpe") - if len(input_options) > 0: - cmd = " ".join(cmds + input_options) - print(cmd) - call(cmd) - - -def binarize( - databin_dir, - direction, spm_vocab=SPM_VOCAB, prefix='', - splits=['train', 'test', 'valid'], - pairs_per_shard=None, -): - def move_databin_files(from_folder, to_folder): - for bin_file in glob.glob(f"{from_folder}/*.bin") \ - + glob.glob(f"{from_folder}/*.idx") \ - + glob.glob(f"{from_folder}/dict*"): - try: - shutil.move(bin_file, to_folder) - except OSError as error: - print(error) - bpe_databin_dir = f"{BPE_DIR}/{direction}{prefix}_databin" - bpe_dir = f"{BPE_DIR}/{direction}{prefix}" - if pairs_per_shard is None: - binarize_(bpe_dir, bpe_databin_dir, direction, spm_vocab=spm_vocab, splits=splits) - move_databin_files(bpe_databin_dir, databin_dir) - else: - # binarize valid and test which will not be sharded - binarize_( - bpe_dir, bpe_databin_dir, direction, - spm_vocab=spm_vocab, splits=[s for s in splits if s != "train"]) - for shard_bpe_dir in glob.glob(f"{bpe_dir}/shard*"): - path_strs = os.path.split(shard_bpe_dir) - shard_str = path_strs[-1] - shard_folder = f"{bpe_databin_dir}/{shard_str}" - databin_shard_folder = f"{databin_dir}/{shard_str}" - print(f'working from {shard_folder} to {databin_shard_folder}') - os.makedirs(databin_shard_folder, exist_ok=True) - binarize_( - shard_bpe_dir, shard_folder, direction, - 
spm_vocab=spm_vocab, splits=["train"]) - - for test_data in glob.glob(f"{bpe_databin_dir}/valid.*") + glob.glob(f"{bpe_databin_dir}/test.*"): - filename = os.path.split(test_data)[-1] - try: - os.symlink(test_data, f"{databin_shard_folder}/{filename}") - except OSError as error: - print(error) - move_databin_files(shard_folder, databin_shard_folder) - - -def load_langs(path): - with open(path) as fr: - langs = [l.strip() for l in fr] - return langs - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument("--data_root", default=f"{WORKDIR_ROOT}/ML50") - parser.add_argument("--raw-folder", default='raw') - parser.add_argument("--bpe-folder", default='bpe') - parser.add_argument("--databin-folder", default='databin') - - args = parser.parse_args() - - DATA_PATH = args.data_root #'/private/home/yuqtang/public_data/ML50' - RAW_DIR = f'{DATA_PATH}/{args.raw_folder}' - BPE_DIR = f'{DATA_PATH}/{args.bpe_folder}' - DATABIN_DIR = f'{DATA_PATH}/{args.databin_folder}' - os.makedirs(BPE_DIR, exist_ok=True) - - raw_files = itertools.chain( - glob.glob(f'{RAW_DIR}/train*'), - glob.glob(f'{RAW_DIR}/valid*'), - glob.glob(f'{RAW_DIR}/test*'), - ) - - directions = [os.path.split(file_path)[-1].split('.')[1] for file_path in raw_files] - - for direction in directions: - prefix = "" - splits = ['train', 'valid', 'test'] - try: - shutil.rmtree(f'{BPE_DIR}/{direction}{prefix}', ignore_errors=True) - os.mkdir(f'{BPE_DIR}/{direction}{prefix}') - os.makedirs(DATABIN_DIR, exist_ok=True) - except OSError as error: - print(error) - spm_model, spm_vocab = SPM_MODEL, SPM_VOCAB - encode_spm(spm_model, direction=direction, splits=splits) - binarize(DATABIN_DIR, direction, spm_vocab=spm_vocab, splits=splits) diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/check_iswlt_test_data.py b/kosmos-g/fairseq/examples/multilingual/data_scripts/check_iswlt_test_data.py deleted file mode 100644 index f8e2eb0f1..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/check_iswlt_test_data.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import os, sys -import subprocess -import re -from subprocess import check_call, check_output - -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. Exitting..."') - sys.exit(-1) - - -BLEU_REGEX = re.compile("^BLEU\\S* = (\\S+) ") -def run_eval_bleu(cmd): - output = check_output(cmd, shell=True, stderr=subprocess.STDOUT).decode("utf-8").strip() - print(output) - bleu = -1.0 - for line in output.strip().split('\n'): - m = BLEU_REGEX.search(line) - if m is not None: - bleu = m.groups()[0] - bleu = float(bleu) - break - return bleu - -def check_data_test_bleu(raw_folder, data_lang_pairs): - not_matchings = [] - for sacrebleu_set, src_tgts in data_lang_pairs: - for src_tgt in src_tgts: - print(f'checking test bleus for: {src_tgt} at {sacrebleu_set}') - src, tgt = src_tgt.split('-') - ssrc, stgt = src[:2], tgt[:2] - if os.path.exists(f'{raw_folder}/test.{tgt}-{src}.{src}'): - # reversed direction may have different test set - test_src = f'{raw_folder}/test.{tgt}-{src}.{src}' - else: - test_src = f'{raw_folder}/test.{src}-{tgt}.{src}' - cmd1 = f'cat {test_src} | sacrebleu -t "{sacrebleu_set}" -l {stgt}-{ssrc}; [ $? 
-eq 0 ] || echo ""' - test_tgt = f'{raw_folder}/test.{src}-{tgt}.{tgt}' - cmd2 = f'cat {test_tgt} | sacrebleu -t "{sacrebleu_set}" -l {ssrc}-{stgt}; [ $? -eq 0 ] || echo ""' - bleu1 = run_eval_bleu(cmd1) - if bleu1 != 100.0: - not_matchings.append(f'{sacrebleu_set}:{src_tgt} source side not matching: {test_src}') - bleu2 = run_eval_bleu(cmd2) - if bleu2 != 100.0: - not_matchings.append(f'{sacrebleu_set}:{src_tgt} target side not matching: {test_tgt}') - return not_matchings - -if __name__ == "__main__": - to_data_path = f'{WORKDIR_ROOT}/iwsltv2' - not_matching = check_data_test_bleu( - f'{to_data_path}/raw', - [ - ('iwslt17', ['en_XX-ar_AR', 'en_XX-ko_KR', 'ar_AR-en_XX', 'ko_KR-en_XX']), - ('iwslt17', ['en_XX-it_IT', 'en_XX-nl_XX', 'it_IT-en_XX', 'nl_XX-en_XX']), - ('iwslt17/tst2015', ['en_XX-vi_VN', "vi_VN-en_XX"]), - ] - ) - if len(not_matching) > 0: - print('the following datasets do not have matching test datasets:\n\t', '\n\t'.join(not_matching)) - diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/check_self_overlaps.py b/kosmos-g/fairseq/examples/multilingual/data_scripts/check_self_overlaps.py deleted file mode 100644 index 07b338dcf..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/check_self_overlaps.py +++ /dev/null @@ -1,103 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import os -import glob -import argparse -from utils.dedup import deup -import sys - -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. Exitting..."') - sys.exit(-1) - -def get_directions(folder): - raw_files = glob.glob(f'{folder}/train*') - directions = [os.path.split(file_path)[-1].split('.')[1] for file_path in raw_files] - return directions - -def diff_list(lhs, rhs): - return set(lhs).difference(set(rhs)) - -def check_diff( - from_src_file, from_tgt_file, - to_src_file, to_tgt_file, -): - seen_in_from = set() - seen_src_in_from = set() - seen_tgt_in_from = set() - from_count = 0 - with open(from_src_file, encoding='utf-8') as fsrc, \ - open(from_tgt_file, encoding='utf-8') as ftgt: - for s, t in zip(fsrc, ftgt): - seen_in_from.add((s, t)) - seen_src_in_from.add(s) - seen_tgt_in_from.add(t) - from_count += 1 - common = 0 - common_src = 0 - common_tgt = 0 - to_count = 0 - seen = set() - - with open(to_src_file, encoding='utf-8') as fsrc, \ - open(to_tgt_file, encoding='utf-8') as ftgt: - for s, t in zip(fsrc, ftgt): - to_count += 1 - if (s, t) not in seen: - if (s, t) in seen_in_from: - common += 1 - if s in seen_src_in_from: - common_src += 1 - seen_src_in_from.remove(s) - if t in seen_tgt_in_from: - common_tgt += 1 - seen_tgt_in_from.remove(t) - seen.add((s, t)) - return common, common_src, common_tgt, from_count, to_count - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--folder", type=str, required=True, - help="the data folder ") - parser.add_argument("--split", type=str, default='test', - help="split (valid, test) to check against training data") - parser.add_argument('--directions', type=str, default=None, required=False) - - args = parser.parse_args() - - if args.directions is None: - directions = set(get_directions(args.folder)) - directions = sorted(directions) - else: - directions = args.directions.split(',') - directions = sorted(set(directions)) 
- - results = [] - print(f'checking where {args.split} split data are in training') - print(f'direction\tcommon_count\tsrc common\ttgt common\tfrom_size\tto_size') - - for direction in directions: - src, tgt = direction.split('-') - from_src_file = f'{args.folder}/{args.split}.{src}-{tgt}.{src}' - from_tgt_file = f'{args.folder}/{args.split}.{src}-{tgt}.{tgt}' - if not os.path.exists(from_src_file): - # some test/valid data might in reverse directinos: - from_src_file = f'{args.folder}/{args.split}.{tgt}-{src}.{src}' - from_tgt_file = f'{args.folder}/{args.split}.{tgt}-{src}.{tgt}' - to_src_file = f'{args.folder}/train.{src}-{tgt}.{src}' - to_tgt_file = f'{args.folder}/train.{src}-{tgt}.{tgt}' - if not os.path.exists(to_src_file) or not os.path.exists(from_src_file): - continue - r = check_diff(from_src_file, from_tgt_file, to_src_file, to_tgt_file) - results.append(r) - print(f'{direction}\t', '\t'.join(map(str, r))) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/check_valid_test_overlaps.py b/kosmos-g/fairseq/examples/multilingual/data_scripts/check_valid_test_overlaps.py deleted file mode 100644 index 40fa9aecd..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/check_valid_test_overlaps.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import os -import argparse -import pandas as pd -import sys - - -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. Exitting..."') - sys.exit(-1) - -def load_langs(path): - with open(path) as fr: - langs = [l.strip() for l in fr] - return langs - - - -def load_sentences(raw_data, split, direction): - src, tgt = direction.split('-') - src_path = f"{raw_data}/{split}.{direction}.{src}" - tgt_path = f"{raw_data}/{split}.{direction}.{tgt}" - if os.path.exists(src_path) and os.path.exists(tgt_path): - return [(src, open(src_path).read().splitlines()), (tgt, open(tgt_path).read().splitlines())] - else: - return [] - -def swap_direction(d): - src, tgt = d.split('-') - return f'{tgt}-{src}' - -def get_all_test_data(raw_data, directions, split='test'): - test_data = [ - x - for dd in directions - for d in [dd, swap_direction(dd)] - for x in load_sentences(raw_data, split, d) - ] - # all_test_data = {s for _, d in test_data for s in d} - all_test_data = {} - for lang, d in test_data: - for s in d: - s = s.strip() - lgs = all_test_data.get(s, set()) - lgs.add(lang) - all_test_data[s] = lgs - return all_test_data, test_data - - -def check_train_sentences(src_path, tgt_path, direction, all_test_data, mess_up_train={}): - # src, tgt = direction.split('-') - print(f'check training data for {direction} in {src_path} and {tgt_path}') - size = 0 - overlapped_size_counted_dup = 0 - if not os.path.exists(tgt_path) or not os.path.exists(src_path): - return mess_up_train, size, overlapped_size_counted_dup - - with open(src_path) as f, open(tgt_path) as g: - for src_line, tgt_line in zip(f, g): - s = src_line.strip() - t = tgt_line.strip() - size += 1 - if s in all_test_data: - langs = mess_up_train.get(s, set()) - langs.add(direction) - mess_up_train[s] = langs - overlapped_size_counted_dup += 1 - if t in all_test_data: - langs = mess_up_train.get(t, set()) - langs.add(direction) - 
mess_up_train[t] = langs - overlapped_size_counted_dup += 1 - print(f'{direction}: size={size}, overlapped={overlapped_size_counted_dup}') - return mess_up_train, size, overlapped_size_counted_dup - -def check_train_all(raw_data, directions, all_test_data): - mess_up_train = {} - data_sizes = {} - # raw_data = '~chau/data-bin/MineBART/multilingual_mined_100M/en_XX/et_EE-en_XX/all.{en_XX, et_EE}' - print(f'checking training data againsts # {len(all_test_data)} sentences') - print(f'example test data: ', [s for i, s in enumerate(all_test_data.keys()) if i < 10]) - for direction in directions: - src, tgt = direction.split('-') - path = f'{raw_data}/en_XX/{direction}/all' - src_path = f'{path}.{src}' - tgt_path = f'{path}.{tgt}' - print(f'checking {src_path} {tgt_path}') - _, size, overlapped_size_counted_dup = check_train_sentences(src_path, tgt_path, direction, all_test_data, mess_up_train) - data_sizes[direction] = (size, overlapped_size_counted_dup) - return mess_up_train, data_sizes - - - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--folder", type=str, required=True, - help="the data folder ") - parser.add_argument("--test-data", type=str, required=True, - help="the test data folder ") - parser.add_argument('--directions', type=str, default=None, required=False) - - args = parser.parse_args() - directions = args.directions.split(',') - directions = sorted(set(directions)) - - results = [] - # print(f'checking where {args.split} split data are in training') - # print(f'direction\tcommon_count\tsrc common\ttgt common\tfrom_size\tto_size') - raw_data = args.folder - all_test_data, test_data = get_all_test_data(args.test_data, directions, split='test') - mess_up_train, data_sizes = check_train_all(raw_data, directions, all_test_data) - print(data_sizes) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/dedup_all.py b/kosmos-g/fairseq/examples/multilingual/data_scripts/dedup_all.py deleted file mode 100644 index ef39c05ee..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/dedup_all.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - - -import os -import glob -import argparse -from utils.dedup import deup - -import sys -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. 
Exitting..."') - sys.exit(-1) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--from-folder", type=str, required=True, - help="the data folder to be dedup") - parser.add_argument("--to-folder", type=str, required=True, - help="the data folder to save deduped data") - parser.add_argument('--directions', type=str, default=None, required=False) - - args = parser.parse_args() - - if args.directions is None: - raw_files = glob.glob(f'{args.from_folder}/train*') - - directions = [os.path.split(file_path)[-1].split('.')[1] for file_path in raw_files] - else: - directions = args.directions.split(',') - directions = sorted(set(directions)) - - for direction in directions: - src, tgt = direction.split('-') - src_file = f'{args.from_folder}/train.{src}-{tgt}.{src}' - tgt_file = f'{args.from_folder}/train.{src}-{tgt}.{tgt}' - src_file_out = f'{args.to_folder}/train.{src}-{tgt}.{src}' - tgt_file_out = f'{args.to_folder}/train.{src}-{tgt}.{tgt}' - assert src_file != src_file_out - assert tgt_file != tgt_file_out - print(f'deduping {src_file}, {tgt_file}') - deup(src_file, tgt_file, src_file_out, tgt_file_out) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_ML50_v1.sh b/kosmos-g/fairseq/examples/multilingual/data_scripts/download_ML50_v1.sh deleted file mode 100644 index 99fbc7592..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_ML50_v1.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." - exit -fi - -# first run download_wmt20.sh; it will install a few useful tools for other scripts -# TODO: need to print out instructions on downloading a few files which requires manually authentication from the websites -bash ./download_wmt20.sh - -python ./download_wmt19_and_before.py -bash ./download_wat19_my.sh -python ./download_ted_and_extract.py -bash ./download_lotus.sh -bash ./download_iitb.sh -bash ./download_af_xh.sh - - -# IWSLT downloading URLs have changed in between; TODO: fix them: -bash ./download_iwslt_and_extract.sh - -# TODO: globalvoices URLs changed; need to be fixed -bash ./download_flores_data.sh diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_af_xh.sh b/kosmos-g/fairseq/examples/multilingual/data_scripts/download_af_xh.sh deleted file mode 100644 index a78fbbbbc..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_af_xh.sh +++ /dev/null @@ -1,164 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# set -x -e - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." 
- exit -fi - - -# put intermediate files -TMP_DIR=$WORKDIR_ROOT/temp/af_xhv2 -# output {train,valid,test} files to dest -DEST=${WORKDIR_ROOT}/ML50/raw - - - -ROOT=${WORKDIR_ROOT} -UTILS=$PWD/utils -TMX2CORPUS="${UTILS}/tmx2corpus" -TMX_TOOL="python ${TMX2CORPUS}/tmx2corpus.py" - -mkdir -p $TMP_DIR -mkdir -p $DEST -mkdir -p $UTILS - -function download_opus(){ - src=$1 - tgt=$2 - subset=$3 - ulr=$4 - - mkdir extract_$subset.$src-$tgt - pushd extract_$subset.$src-$tgt - if [ ! -f "$subset.$src-$tgt.tmx.gz" ]; then - wget $url -O "$subset.$src-$tgt.tmx.gz" - gzip -d "$subset.$src-$tgt.tmx.gz" - f=$subset.$src-$tgt.tmx - $TMX_TOOL $f - mv bitext.$src ../$subset.$src-$tgt.$src - mv bitext.$tgt ../$subset.$src-$tgt.$tgt - fi - popd -} - -function concat_subsets(){ - src=$1 - tgt=$2 - subsets=$3 - src_train=raw_train.$src-$tgt.$src - tgt_train=raw_train.$src-$tgt.$tgt - > $src_train - > $tgt_train - for subset in $subsets; do - cat $subset.$src-$tgt.$src >> $src_train - cat $subset.$src-$tgt.$tgt >> $tgt_train - done -} - - - -function get_seeded_random() -{ - seed="$1" - openssl enc -aes-256-ctr -pass pass:"$seed" -nosalt \ - /dev/null -} - -function split_train_valid(){ - src=$1 - tgt=$2 - raw_src_train=raw_train.$src-$tgt.$src - raw_tgt_train=raw_train.$src-$tgt.$tgt - - shuf --random-source=<(get_seeded_random 43) $raw_src_train > shuffled.$src-$tgt.$src - shuf --random-source=<(get_seeded_random 43) $raw_tgt_train > shuffled.$src-$tgt.$tgt - - head -n 1500 shuffled.$src-$tgt.$src > valid.$src-$tgt.$src - head -n 1500 shuffled.$src-$tgt.$tgt > valid.$src-$tgt.$tgt - - tail +1501 shuffled.$src-$tgt.$src > train.$src-$tgt.$src - tail +1501 shuffled.$src-$tgt.$tgt > train.$src-$tgt.$tgt -} - -function copy2dst(){ - lsrc=$1 - ltgt=$2 - src=${lsrc:0:2} - tgt=${ltgt:0:2} - - - cp valid.$src-$tgt.$src $DEST/valid.$lsrc-$ltgt.$lsrc - cp valid.$src-$tgt.$tgt $DEST/valid.$lsrc-$ltgt.$ltgt - - cp train.$src-$tgt.$src $DEST/train.$lsrc-$ltgt.$lsrc - cp train.$src-$tgt.$tgt $DEST/train.$lsrc-$ltgt.$ltgt -} - - - - -#for xh-en -declare -A xh_en_urls -xh_en_urls=( - [Tatoeba]=https://object.pouta.csc.fi/OPUS-Tatoeba/v20190709/tmx/en-xh.tmx.gz - [wikimedia]=https://object.pouta.csc.fi/OPUS-wikimedia/v20190628/tmx/en-xh.tmx.gz - [memat]=https://object.pouta.csc.fi/OPUS-memat/v1/tmx/en-xh.tmx.gz - [uedin]=https://object.pouta.csc.fi/OPUS-bible-uedin/v1/tmx/en-xh.tmx.gz - [GNOME]=https://object.pouta.csc.fi/OPUS-GNOME/v1/tmx/en-xh.tmx.gz - [XhosaNavy]=https://object.pouta.csc.fi/OPUS-XhosaNavy/v1/tmx/en-xh.tmx.gz - [KDE4]=https://object.pouta.csc.fi/OPUS-KDE4/v2/tmx/en-xh.tmx.gz - [Ubuntu]=https://object.pouta.csc.fi/OPUS-Ubuntu/v14.10/tmx/en-xh.tmx.gz -) - -mkdir $TMP_DIR/xh-en -pushd $TMP_DIR/xh-en -for k in "${!xh_en_urls[@]}" -do - name=$k - url=${xh_en_urls[$k]} - echo "$name: $url" - download_opus xh en $name $ulr -done -concat_subsets xh en "${!xh_en_urls[@]}" -split_train_valid xh en -copy2dst xh_ZA en_XX -popd - - -## -#for af-en -declare -A af_en_urls -af_en_urls=( - [Tatoeba]=https://object.pouta.csc.fi/OPUS-Tatoeba/v20190709/tmx/af-en.tmx.gz - [uedin]=https://object.pouta.csc.fi/OPUS-bible-uedin/v1/tmx/af-en.tmx.gz - [GNOME]=https://object.pouta.csc.fi/OPUS-GNOME/v1/tmx/af-en.tmx.gz - [QED]=https://object.pouta.csc.fi/OPUS-QED/v2.0a/tmx/af-en.tmx.gz - [KDE4]=https://object.pouta.csc.fi/OPUS-KDE4/v2/tmx/af-en.tmx.gz - [OpenSubtitles]=https://object.pouta.csc.fi/OPUS-OpenSubtitles/v2018/tmx/af-en.tmx.gz - [SPC]=https://object.pouta.csc.fi/OPUS-SPC/v1/tmx/af-en.tmx.gz - 
[Ubuntu]=https://object.pouta.csc.fi/OPUS-Ubuntu/v14.10/tmx/af-en.tmx.gz -) - -mkdir $TMP_DIR/af-en -pushd $TMP_DIR/af-en -for k in "${!af_en_urls[@]}" -do - name=$k - url=${af_en_urls[$k]} - echo "$name: $url" - download_opus af en $name $ulr -done -concat_subsets af en "${!af_en_urls[@]}" -split_train_valid af en -copy2dst af_ZA en_XX -popd - - diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_flores_data.sh b/kosmos-g/fairseq/examples/multilingual/data_scripts/download_flores_data.sh deleted file mode 100644 index e6175ce0c..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_flores_data.sh +++ /dev/null @@ -1,246 +0,0 @@ -#!/bin/bash - -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." - exit -fi - - -set -e -set -o pipefail - -SRC=en -SI_TGT=si -NE_TGT=ne - -DESTDIR=${WORKDIR_ROOT}/ML50/raw/ - -ROOT=${WORKDIR_ROOT}/tmp -mkdir -p $ROOT -DATA=$ROOT/data -NE_ROOT=$DATA/all-clean-ne -SI_ROOT=$DATA/all-clean-si - -mkdir -p $DATA $NE_ROOT $SI_ROOT - -SI_OPUS_DATASETS=( - "$SI_ROOT/GNOME.en-si" - "$SI_ROOT/Ubuntu.en-si" - "$SI_ROOT/KDE4.en-si" - "$SI_ROOT/OpenSubtitles.en-si" -) - -SI_OPUS_URLS=( - "https://object.pouta.csc.fi/OPUS-GNOME/v1/moses/en-si.txt.zip" - "https://object.pouta.csc.fi/OPUS-Ubuntu/v14.10/moses/en-si.txt.zip" - "https://object.pouta.csc.fi/OPUS-KDE4/v2/moses/en-si.txt.zip" - "https://object.pouta.csc.fi/OPUS-OpenSubtitles/v2018/moses/en-si.txt.zip" -) - -NE_OPUS_DATASETS=( - "$NE_ROOT/GNOME.en-ne" - "$NE_ROOT/Ubuntu.en-ne" - "$NE_ROOT/KDE4.en-ne" -) - -NE_OPUS_URLS=( - "https://object.pouta.csc.fi/OPUS-GNOME/v1/moses/en-ne.txt.zip" - "https://object.pouta.csc.fi/OPUS-Ubuntu/v14.10/moses/en-ne.txt.zip" - "https://object.pouta.csc.fi/OPUS-KDE4/v2/moses/en-ne.txt.zip" -) - -REMOVE_FILE_PATHS=() - -# Download data -download_data() { - CORPORA=$1 - URL=$2 - - if [ -f $CORPORA ]; then - echo "$CORPORA already exists, skipping download" - else - echo "Downloading $URL" - wget $URL -O $CORPORA --no-check-certificate || rm -f $CORPORA - if [ -f $CORPORA ]; then - echo "$URL successfully downloaded." - else - echo "$URL not successfully downloaded." 
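        # the download failed: drop any partial file and abort the script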
- rm -f $CORPORA - exit -1 - fi - fi -} - -# Example: download_opus_data $LANG_ROOT $TGT -download_opus_data() { - LANG_ROOT=$1 - TGT=$2 - - if [ "$TGT" = "si" ]; then - URLS=("${SI_OPUS_URLS[@]}") - DATASETS=("${SI_OPUS_DATASETS[@]}") - else - URLS=("${NE_OPUS_URLS[@]}") - DATASETS=("${NE_OPUS_DATASETS[@]}") - fi - - # Download and extract data - for ((i=0;i<${#URLS[@]};++i)); do - URL=${URLS[i]} - CORPORA=${DATASETS[i]} - - download_data $CORPORA $URL - unzip -o $CORPORA -d $LANG_ROOT - REMOVE_FILE_PATHS+=( $CORPORA $CORPORA.xml $CORPORA.ids $LANG_ROOT/README $LANG_ROOT/LICENSE ) - done - - cat ${DATASETS[0]}.$SRC ${DATASETS[1]}.$SRC ${DATASETS[2]}.$SRC > $LANG_ROOT/GNOMEKDEUbuntu.$SRC-$TGT.$SRC - cat ${DATASETS[0]}.$TGT ${DATASETS[1]}.$TGT ${DATASETS[2]}.$TGT > $LANG_ROOT/GNOMEKDEUbuntu.$SRC-$TGT.$TGT - - REMOVE_FILE_PATHS+=( ${DATASETS[0]}.$SRC ${DATASETS[1]}.$SRC ${DATASETS[2]}.$SRC ) - REMOVE_FILE_PATHS+=( ${DATASETS[0]}.$TGT ${DATASETS[1]}.$TGT ${DATASETS[2]}.$TGT ) -} - -download_opus_data $SI_ROOT $SI_TGT -cp ${SI_OPUS_DATASETS[3]}.$SRC $SI_ROOT/OpenSubtitles2018.$SRC-$SI_TGT.$SRC -cp ${SI_OPUS_DATASETS[3]}.$SI_TGT $SI_ROOT/OpenSubtitles2018.$SRC-$SI_TGT.$SI_TGT -REMOVE_FILE_PATHS+=( ${SI_OPUS_DATASETS[3]}.$SRC ${SI_OPUS_DATASETS[3]}.$SI_TGT ) - -download_opus_data $NE_ROOT $NE_TGT - - -# Download and extract Global Voices data -GLOBAL_VOICES="$NE_ROOT/globalvoices.2018q4.ne-en" -GLOBAL_VOICES_URL="http://www.casmacat.eu/corpus/global-voices/globalvoices.ne-en.xliff.gz" - -download_data $GLOBAL_VOICES.gz $GLOBAL_VOICES_URL -gunzip -Nf $GLOBAL_VOICES.gz - -sed -ne 's?.*\(.*\).*?\1?p' $GLOBAL_VOICES > $GLOBAL_VOICES.$NE_TGT -sed -ne 's?.*]*>\(.*\).*?\1?p' $GLOBAL_VOICES > $GLOBAL_VOICES.$SRC - -REMOVE_FILE_PATHS+=( $GLOBAL_VOICES ) - -# Download and extract the bible dataset -BIBLE_TOOLS=bible-corpus-tools -XML_BIBLES=XML_Bibles -XML_BIBLES_DUP=XML_Bibles_dup - -if [ ! -e $BIBLE_TOOLS ]; then - echo "Cloning bible-corpus-tools repository..." 
- git clone https://github.com/christos-c/bible-corpus-tools.git -fi - -mkdir -p $BIBLE_TOOLS/bin $XML_BIBLES $XML_BIBLES_DUP -javac -cp "$BIBLE_TOOLS/lib/*" -d $BIBLE_TOOLS/bin $BIBLE_TOOLS/src/bible/readers/*.java $BIBLE_TOOLS/src/bible/*.java - -download_data bible.tar.gz "https://github.com/christos-c/bible-corpus/archive/v1.2.1.tar.gz" -tar xvzf bible.tar.gz - -cp bible-corpus-1.2.1/bibles/{Greek.xml,English.xml,Nepali.xml} $XML_BIBLES/ -cp bible-corpus-1.2.1/bibles/{Greek.xml,English-WEB.xml,Nepali.xml} $XML_BIBLES_DUP/ - -java -cp $BIBLE_TOOLS/lib/*:$BIBLE_TOOLS/bin bible.CreateMLBooks $XML_BIBLES -java -cp $BIBLE_TOOLS/lib/*:$BIBLE_TOOLS/bin bible.CreateMLBooks $XML_BIBLES_DUP -java -cp $BIBLE_TOOLS/lib/*:$BIBLE_TOOLS/bin bible.CreateVerseAlignedBooks $XML_BIBLES -java -cp $BIBLE_TOOLS/lib/*:$BIBLE_TOOLS/bin bible.CreateVerseAlignedBooks $XML_BIBLES_DUP - -cat $XML_BIBLES/aligned/*/English.txt > $NE_ROOT/bible.$SRC-$NE_TGT.$SRC -cat $XML_BIBLES/aligned/*/Nepali.txt > $NE_ROOT/bible.$SRC-$NE_TGT.$NE_TGT -cat $XML_BIBLES_DUP/aligned/*/English-WEB.txt > $NE_ROOT/bible_dup.$SRC-$NE_TGT.$SRC -cat $XML_BIBLES_DUP/aligned/*/Nepali.txt > $NE_ROOT/bible_dup.$SRC-$NE_TGT.$NE_TGT -REMOVE_FILE_PATHS+=( bible-corpus-1.2.1 bible.tar.gz $BIBLE_TOOLS $XML_BIBLES $XML_BIBLES_DUP ) - -# Download and extract the Penn Treebank dataset -NE_TAGGED=$ROOT/new_submissions_parallel_corpus_project_Nepal -NE_TAGGED_URL="http://www.cle.org.pk/Downloads/ling_resources/parallelcorpus/NepaliTaggedCorpus.zip" -EN_TAGGED_PATCH_URL="https://dl.fbaipublicfiles.com/fairseq/data/nepali-penn-treebank.en.patch" -NE_TAGGED_PATCH_URL="https://dl.fbaipublicfiles.com/fairseq/data/nepali-penn-treebank.ne.patch" -MOSES=mosesdecoder -MOSES_TOK=$MOSES/scripts/tokenizer -EN_PATCH_REGEX="{s:\\\/:\/:g;s/\*\T\*\-\n+//g;s/\-LCB\-/\{/g;s/\-RCB\-/\}/g; s/\-LSB\-/\[/g; s/\-RSB\-/\]/g;s/\-LRB\-/\(/g; s/\-RRB\-/\)/g; s/\'\'/\"/g; s/\`\`/\"/g; s/\ +\'s\ +/\'s /g; s/\ +\'re\ +/\'re /g; s/\"\ +/\"/g; s/\ +\"/\"/g; s/\ n't([\ \.\"])/n't\1/g; s/\r+(.)/\1/g;}" -NE_PATCH_REGEX="{s:\p{Cf}::g;s:\\\/:\/:g;s/\*\T\*\-\n+//g;s/\-LCB\-/\{/g;s/\-RCB\-/\}/g; s/\-LSB\-/\[/g; s/\-RSB\-/\]/g;s/\-LRB\-/\(/g; s/\-RRB\-/\)/g; s/\'\'/\"/g; s/\`\`/\"/g; s/\ +\'s\ +/\'s /g; s/\ +\'re\ +/\'re /g; s/\"\ +/\"/g; s/\ +\"/\"/g; s/\ n't([\ \.\"])/n't\1/g; s/\r+(.)/\1/g;}" - -download_data $DATA/nepali-penn-treebank.$SRC.patch $EN_TAGGED_PATCH_URL -download_data $DATA/nepali-penn-treebank.$NE_TGT.patch $NE_TAGGED_PATCH_URL -download_data original.zip $NE_TAGGED_URL -unzip -o original.zip -d $ROOT - -cat $NE_TAGGED/00.txt $NE_TAGGED/01.txt $NE_TAGGED/02.txt > $NE_TAGGED/nepali-penn-treebank.$SRC -cat $NE_TAGGED/00ne_revised.txt $NE_TAGGED/01ne_revised.txt $NE_TAGGED/02ne_revised.txt > $NE_TAGGED/nepali-penn-treebank.$NE_TGT - -patch $NE_TAGGED/nepali-penn-treebank.$SRC -i $DATA/nepali-penn-treebank.$SRC.patch -o $NE_TAGGED/nepali-penn-treebank-patched.$SRC -patch $NE_TAGGED/nepali-penn-treebank.$NE_TGT -i $DATA/nepali-penn-treebank.$NE_TGT.patch -o $NE_TAGGED/nepali-penn-treebank-patched.$NE_TGT - -if [ ! -e $MOSES ]; then - echo "Cloning moses repository..." 
- git clone https://github.com/moses-smt/mosesdecoder.git -fi - -cat $NE_TAGGED/nepali-penn-treebank-patched.$SRC | \ - perl -anpe "$EN_PATCH_REGEX" | \ - $MOSES_TOK/tokenizer.perl -l $SRC | \ - $MOSES_TOK/detokenizer.perl -l $SRC > $NE_ROOT/nepali-penn-treebank.$SRC - -cat $NE_TAGGED/nepali-penn-treebank-patched.$NE_TGT | \ - perl -CIO -anpe "$NE_PATCH_REGEX" | \ - $MOSES_TOK/detokenizer.perl -l $SRC > $NE_ROOT/nepali-penn-treebank.$NE_TGT - - -# Download nepali dictionary data -NE_DICT=$NE_ROOT/dictionaries -download_data $NE_DICT "http://www.seas.upenn.edu/~nlp/resources/TACL-data-release/dictionaries.tar.gz" -tar xvzf $NE_DICT -cp dictionaries/dict.ne $NE_ROOT/dictionary.$NE_TGT-$SRC -REMOVE_FILE_PATHS+=( $NE_DICT dictionaries ) - -REMOVE_FILE_PATHS+=( $MOSES $NE_TAGGED original.zip $DATA/nepali-penn-treebank.$SRC.patch $DATA/nepali-penn-treebank.$NE_TGT.patch ) - - -# Remove the temporary files -for ((i=0;i<${#REMOVE_FILE_PATHS[@]};++i)); do - rm -rf ${REMOVE_FILE_PATHS[i]} -done - -# Copy the training data -si=si_LK -ne=ne_NP -en=en_XX -cat $SI_ROOT/GNOMEKDEUbuntu.en-si.si $SI_ROOT/OpenSubtitles2018.en-si.si > $DESTDIR/train.$si-$en.$si -cat $SI_ROOT/GNOMEKDEUbuntu.en-si.en $SI_ROOT/OpenSubtitles2018.en-si.en > $DESTDIR/train.$si-$en.$en - -cat $NE_ROOT/bible_dup.en-ne.ne $NE_ROOT/bible.en-ne.ne $NE_ROOT/globalvoices.2018q4.ne-en.ne $NE_ROOT/GNOMEKDEUbuntu.en-ne.ne $NE_ROOT/nepali-penn-treebank.ne > $DESTDIR/train.$ne-$en.$ne -cat $NE_ROOT/bible_dup.en-ne.en $NE_ROOT/bible.en-ne.en $NE_ROOT/globalvoices.2018q4.ne-en.en $NE_ROOT/GNOMEKDEUbuntu.en-ne.en $NE_ROOT/nepali-penn-treebank.en > $DESTDIR/train.$ne-$en.$en - - -#Download the test sets -wget https://github.com/facebookresearch/flores/raw/master/data/wikipedia_en_ne_si_test_sets.tgz -tar -xvzf wikipedia_en_ne_si_test_sets.tgz - -cp wikipedia_en_ne_si_test_sets/wikipedia.dev.ne-en.ne $DESTDIR/valid.$ne-$en.$ne -cp wikipedia_en_ne_si_test_sets/wikipedia.dev.ne-en.en $DESTDIR/valid.$ne-$en.$en - -cp wikipedia_en_ne_si_test_sets/wikipedia.dev.si-en.si $DESTDIR/valid.$si-$en.$si -cp wikipedia_en_ne_si_test_sets/wikipedia.dev.si-en.en $DESTDIR/valid.$si-$en.$en - -cp wikipedia_en_ne_si_test_sets/wikipedia.devtest.ne-en.ne $DESTDIR/devtest.$ne-$en.$ne -cp wikipedia_en_ne_si_test_sets/wikipedia.devtest.ne-en.en $DESTDIR/devtest.$ne-$en.$en - -cp wikipedia_en_ne_si_test_sets/wikipedia.devtest.si-en.si $DESTDIR/devtest.$si-$en.$si -cp wikipedia_en_ne_si_test_sets/wikipedia.devtest.si-en.en $DESTDIR/devtest.$si-$en.$en - -cp wikipedia_en_ne_si_test_sets/wikipedia.test.ne-en.ne $DESTDIR/test.$ne-$en.$ne -cp wikipedia_en_ne_si_test_sets/wikipedia.test.ne-en.en $DESTDIR/test.$ne-$en.$en - -cp wikipedia_en_ne_si_test_sets/wikipedia.test.si-en.si $DESTDIR/test.$si-$en.$si -cp wikipedia_en_ne_si_test_sets/wikipedia.test.si-en.en $DESTDIR/test.$si-$en.$en - -rm -rf wikipedia_en_ne_si_test_sets.tgz wikipedia_en_ne_si_test_sets diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_iitb.sh b/kosmos-g/fairseq/examples/multilingual/data_scripts/download_iitb.sh deleted file mode 100644 index a884e2083..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_iitb.sh +++ /dev/null @@ -1,35 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
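# This script downloads the IITB English-Hindi parallel corpus and copies its
# train/dev/test splits into the ML50 raw-data layout as hi_IN-en_XX files.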
- - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." - exit -fi - -IITB=$WORKDIR_ROOT/IITB -mkdir -p $IITB -pushd $IITB - -wget http://www.cfilt.iitb.ac.in/~moses/iitb_en_hi_parallel/iitb_corpus_download/parallel.tgz -tar -xvzf parallel.tgz - -wget http://www.cfilt.iitb.ac.in/~moses/iitb_en_hi_parallel/iitb_corpus_download/dev_test.tgz -tar -xvzf dev_test.tgz - -DESTDIR=${WORKDIR_ROOT}/ML50/raw/ - -cp parallel/IITB.en-hi.en $DESTDIR/train.hi_IN-en_XX.en_XX -cp parallel/IITB.en-hi.hi $DESTDIR/train.hi_IN-en_XX.hi_IN - -cp dev_test/dev.en $DESTDIR/valid.hi_IN-en_XX.en_XX -cp dev_test/dev.hi $DESTDIR/valid.hi_IN-en_XX.hi_IN - -cp dev_test/test.en $DESTDIR/test.hi_IN-en_XX.en_XX -cp dev_test/test.hi $DESTDIR/test.hi_IN-en_XX.hi_IN -popd \ No newline at end of file diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_iwslt_and_extract.sh b/kosmos-g/fairseq/examples/multilingual/data_scripts/download_iwslt_and_extract.sh deleted file mode 100644 index ca3591b3d..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_iwslt_and_extract.sh +++ /dev/null @@ -1,225 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -#echo 'Cloning Moses github repository (for tokenization scripts)...' -#git clone https://github.com/moses-smt/mosesdecoder.git - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." - exit -fi - - - -data_root=${WORKDIR_ROOT}/iwsltv2 -DESTDIR=${WORKDIR_ROOT}/ML50/raw - - -langs="ar_AR it_IT nl_XX ko_KR vi_VN" -echo "data_root: $data_root" - -download_path=${data_root}/downloads -raw=${DESTDIR} -tmp=${data_root}/tmp -orig=${data_root}/orig - -mkdir -p $download_path $orig $raw $tmp -####################### -download_iwslt(){ - iwslt_key=$1 - src=$2 - tgt=$3 - save_prefix=$4 - pushd ${download_path} - if [[ ! -f ${save_prefix}$src-$tgt.tgz ]]; then - wget https://wit3.fbk.eu/archive/${iwslt_key}/texts/$src/$tgt/$src-$tgt.tgz -O ${save_prefix}$src-$tgt.tgz - [ $? -eq 0 ] && return 0 - fi - popd -} - -extract_iwslt(){ - src=$1 - tgt=$2 - prefix=$3 - pushd $orig - tar zxvf ${download_path}/${prefix}$src-${tgt}.tgz - popd -} - -generate_train(){ - lsrc=$1 - ltgt=$2 - src=${lsrc:0:2} - tgt=${ltgt:0:2} - for ll in $lsrc $ltgt; do - l=${ll:0:2} - f="$orig/*/train.tags.$src-$tgt.$l" - f_raw=$raw/train.$lsrc-$ltgt.$ll - cat $f \ - | grep -v '' \ - | grep -v '' \ - | grep -v '' \ - | grep -v '' \ - | grep -v '' \ - | sed -e 's///g' \ - | sed -e 's/<\/title>//g' \ - | sed -e 's/<description>//g' \ - | sed -e 's/<\/description>//g' \ - | sed 's/^\s*//g' \ - | sed 's/\s*$//g' \ - > $f_raw - [ $? 
-eq 0 ] && echo "extracted $f to $f_raw" - done - return 0 -} - -convert_valid_test(){ - src=$1 - tgt=$2 - for l in $src $tgt; do - echo "lang: ${l}" - for o in `ls $orig/*/IWSLT*.TED*.$src-$tgt.$l.xml`; do - fname=${o##*/} - f=$tmp/${fname%.*} - echo "$o => $f" - grep '<seg id' $o \ - | sed -e 's/<seg id="[0-9]*">\s*//g' \ - | sed -e 's/\s*<\/seg>\s*//g' \ - | sed -e "s/\’/\'/g" \ - > $f - echo "" - done - done -} - -generate_subset(){ - lsrc=$1 - ltgt=$2 - src=${lsrc:0:2} - tgt=${ltgt:0:2} - subset=$3 - prefix=$4 - for ll in $lsrc $ltgt; do - l=${ll:0:2} - f=$tmp/$prefix.${src}-${tgt}.$l - if [[ -f $f ]]; then - cp $f $raw/$subset.${lsrc}-$ltgt.${ll} - fi - done -} -################# - -echo "downloading iwslt training and dev data" -# using multilingual for it, nl -download_iwslt "2017-01-trnmted" DeEnItNlRo DeEnItNlRo -download_iwslt "2017-01-trnted" ar en -download_iwslt "2017-01-trnted" en ar -download_iwslt "2017-01-trnted" ko en -download_iwslt "2017-01-trnted" en ko -download_iwslt "2015-01" vi en -download_iwslt "2015-01" en vi - -echo "donwloading iwslt test data" -download_iwslt "2017-01-mted-test" it en "test." -download_iwslt "2017-01-mted-test" en it "test." -download_iwslt "2017-01-mted-test" nl en "test." -download_iwslt "2017-01-mted-test" en nl "test." - -download_iwslt "2017-01-ted-test" ar en "test." -download_iwslt "2017-01-ted-test" en ar "test." -download_iwslt "2017-01-ted-test" ko en "test." -download_iwslt "2017-01-ted-test" en ko "test." -download_iwslt "2015-01-test" vi en "test." -download_iwslt "2015-01-test" en vi "test." - -echo "extract training data tar balls" -extract_iwslt DeEnItNlRo DeEnItNlRo -extract_iwslt ar en -extract_iwslt en ar -extract_iwslt ko en -extract_iwslt en ko -extract_iwslt vi en -extract_iwslt en vi - - -echo "extracting iwslt test data" -for lang in $langs; do - l=${lang:0:2} - extract_iwslt $l en "test." - extract_iwslt en $l "test." 
-done - -echo "convert dev and test data" -for lang in $langs; do - s_lang=${lang:0:2} - convert_valid_test $s_lang en - convert_valid_test en $s_lang -done - - - -echo "creating training data into $raw" -for lang in $langs; do - generate_train $lang en_XX - generate_train en_XX $lang -done - -echo "creating iwslt dev data into raw" -generate_subset en_XX vi_VN valid "IWSLT15.TED.tst2013" -generate_subset vi_VN en_XX valid "IWSLT15.TED.tst2013" - -generate_subset en_XX ar_AR valid "IWSLT17.TED.tst2016" -generate_subset ar_AR en_XX valid "IWSLT17.TED.tst2016" -generate_subset en_XX ko_KR valid "IWSLT17.TED.tst2016" -generate_subset ko_KR en_XX valid "IWSLT17.TED.tst2016" - - -generate_subset en_XX it_IT valid "IWSLT17.TED.tst2010" -generate_subset it_IT en_XX valid "IWSLT17.TED.tst2010" -generate_subset en_XX nl_XX valid "IWSLT17.TED.tst2010" -generate_subset nl_XX en_XX valid "IWSLT17.TED.tst2010" - -echo "creating iswslt test data into raw" -generate_subset en_XX vi_VN test "IWSLT15.TED.tst2015" -generate_subset vi_VN en_XX test "IWSLT15.TED.tst2015" - -generate_subset en_XX ar_AR test "IWSLT17.TED.tst2017" -generate_subset ar_AR en_XX test "IWSLT17.TED.tst2017" -generate_subset en_XX ko_KR test "IWSLT17.TED.tst2017" -generate_subset ko_KR en_XX test "IWSLT17.TED.tst2017" - -generate_subset en_XX it_IT test "IWSLT17.TED.tst2017.mltlng" -generate_subset it_IT en_XX test "IWSLT17.TED.tst2017.mltlng" -generate_subset en_XX nl_XX test "IWSLT17.TED.tst2017.mltlng" -generate_subset nl_XX en_XX test "IWSLT17.TED.tst2017.mltlng" - -# normalze iwslt directions into x-en -pushd $raw -for lang in $langs; do - for split in test valid; do - x_en_f1=$split.$lang-en_XX.en_XX - x_en_f2=$split.$lang-en_XX.${lang} - - en_x_f1=$split.en_XX-$lang.en_XX - en_x_f2=$split.en_XX-$lang.${lang} - - if [ -f $en_x_f1 ] && [ ! -f $x_en_f1 ]; then - echo "cp $en_x_f1 $x_en_f1" - cp $en_x_f1 $x_en_f1 - fi - if [ -f $x_en_f2 ] && [ ! -f $x_en_f2 ]; then - echo "cp $en_x_f2 $x_en_f2" - cp $en_x_f2 $x_en_f2 - fi - done -done -popd \ No newline at end of file diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_lotus.sh b/kosmos-g/fairseq/examples/multilingual/data_scripts/download_lotus.sh deleted file mode 100644 index c08c70131..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_lotus.sh +++ /dev/null @@ -1,46 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." 
- exit -fi - - -SRCDIR=$WORKDIR_ROOT/indic_languages_corpus -DESTDIR=${WORKDIR_ROOT}/ML50/raw/ -mkdir -p $SRCDIR -mkdir -p $DESTDIR - -cd $SRCDIR -wget http://lotus.kuee.kyoto-u.ac.jp/WAT/indic-multilingual/indic_languages_corpus.tar.gz -tar -xvzf indic_languages_corpus.tar.gz - -SRC_EXTRACT_DIR=$SRCDIR/indic_languages_corpus/bilingual - -cp $SRC_EXTRACT_DIR/ml-en/train.ml $DESTDIR/train.ml_IN-en_XX.ml_IN -cp $SRC_EXTRACT_DIR/ml-en/train.en $DESTDIR/train.ml_IN-en_XX.en_XX -cp $SRC_EXTRACT_DIR/ml-en/dev.ml $DESTDIR/valid.ml_IN-en_XX.ml_IN -cp $SRC_EXTRACT_DIR/ml-en/dev.en $DESTDIR/valid.ml_IN-en_XX.en_XX -cp $SRC_EXTRACT_DIR/ml-en/test.ml $DESTDIR/test.ml_IN-en_XX.ml_IN -cp $SRC_EXTRACT_DIR/ml-en/test.en $DESTDIR/test.ml_IN-en_XX.en_XX - -cp $SRC_EXTRACT_DIR/ur-en/train.ur $DESTDIR/train.ur_PK-en_XX.ur_PK -cp $SRC_EXTRACT_DIR/ur-en/train.en $DESTDIR/train.ur_PK-en_XX.en_XX -cp $SRC_EXTRACT_DIR/ur-en/dev.ur $DESTDIR/valid.ur_PK-en_XX.ur_PK -cp $SRC_EXTRACT_DIR/ur-en/dev.en $DESTDIR/valid.ur_PK-en_XX.en_XX -cp $SRC_EXTRACT_DIR/ur-en/test.ur $DESTDIR/test.ur_PK-en_XX.ur_PK -cp $SRC_EXTRACT_DIR/ur-en/test.en $DESTDIR/test.ur_PK-en_XX.en_XX - -cp $SRC_EXTRACT_DIR/te-en/train.te $DESTDIR/train.te_IN-en_XX.te_IN -cp $SRC_EXTRACT_DIR/te-en/train.en $DESTDIR/train.te_IN-en_XX.en_XX -cp $SRC_EXTRACT_DIR/te-en/dev.te $DESTDIR/valid.te_IN-en_XX.te_IN -cp $SRC_EXTRACT_DIR/te-en/dev.en $DESTDIR/valid.te_IN-en_XX.en_XX -cp $SRC_EXTRACT_DIR/te-en/test.te $DESTDIR/test.te_IN-en_XX.te_IN -cp $SRC_EXTRACT_DIR/te-en/test.en $DESTDIR/test.te_IN-en_XX.en_XX diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_ted_and_extract.py b/kosmos-g/fairseq/examples/multilingual/data_scripts/download_ted_and_extract.py deleted file mode 100644 index eb756680f..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_ted_and_extract.py +++ /dev/null @@ -1,338 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import itertools -import os -import csv -from collections import defaultdict -from six.moves import zip -import io -import wget -import sys - -from subprocess import check_call, check_output - -# scripts and data locations -CWD = os.getcwd() -UTILS = f"{CWD}/utils" - -MOSES = f"{UTILS}/mosesdecoder" - -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. 
Exitting..."') - sys.exit(-1) - - -# please donwload mosesdecoder here: -detok_cmd = f'{MOSES}/scripts/tokenizer/detokenizer.perl' - - -def call(cmd): - print(f"Executing: {cmd}") - check_call(cmd, shell=True) - -class MultiLingualAlignedCorpusReader(object): - """A class to read TED talk dataset - """ - - def __init__(self, corpus_path, delimiter='\t', - target_token=True, bilingual=True, corpus_type='file', - lang_dict={'source': ['fr'], 'target': ['en']}, - eval_lang_dict=None, zero_shot=False, - detok=True, - ): - - self.empty_line_flag = 'NULL' - self.corpus_path = corpus_path - self.delimiter = delimiter - self.bilingual = bilingual - self.lang_dict = lang_dict - self.lang_set = set() - self.target_token = target_token - self.zero_shot = zero_shot - self.eval_lang_dict = eval_lang_dict - self.corpus_type = corpus_type - self.detok = detok - - for list_ in self.lang_dict.values(): - for lang in list_: - self.lang_set.add(lang) - - self.data = dict() - self.data['train'] = self.read_aligned_corpus(split_type='train') - self.data['test'] = self.read_aligned_corpus(split_type='test') - self.data['dev'] = self.read_aligned_corpus(split_type='dev') - - def read_data(self, file_loc_): - data_list = list() - with io.open(file_loc_, 'r', encoding='utf8') as fp: - for line in fp: - try: - text = line.strip() - except IndexError: - text = self.empty_line_flag - data_list.append(text) - return data_list - - def filter_text(self, dict_): - if self.target_token: - field_index = 1 - else: - field_index = 0 - data_dict = defaultdict(list) - list1 = dict_['source'] - list2 = dict_['target'] - for sent1, sent2 in zip(list1, list2): - try: - src_sent = ' '.join(sent1.split()[field_index: ]) - except IndexError: - src_sent = 'NULL' - - if src_sent.find(self.empty_line_flag) != -1 or len(src_sent) == 0: - continue - - elif sent2.find(self.empty_line_flag) != -1 or len(sent2) == 0: - continue - - else: - data_dict['source'].append(sent1) - data_dict['target'].append(sent2) - return data_dict - - def read_file(self, split_type, data_type): - return self.data[split_type][data_type] - - def save_file(self, path_, split_type, data_type, lang): - tok_file = tok_file_name(path_, lang) - with io.open(tok_file, 'w', encoding='utf8') as fp: - for line in self.data[split_type][data_type]: - fp.write(line + '\n') - if self.detok: - de_tok(tok_file, lang) - - def add_target_token(self, list_, lang_id): - new_list = list() - token = '__' + lang_id + '__' - for sent in list_: - new_list.append(token + ' ' + sent) - return new_list - - def read_from_single_file(self, path_, s_lang, t_lang): - data_dict = defaultdict(list) - with io.open(path_, 'r', encoding='utf8') as fp: - reader = csv.DictReader(fp, delimiter='\t', quoting=csv.QUOTE_NONE) - for row in reader: - data_dict['source'].append(row[s_lang]) - data_dict['target'].append(row[t_lang]) - - if self.target_token: - text = self.add_target_token(data_dict['source'], t_lang) - data_dict['source'] = text - - return data_dict['source'], data_dict['target'] - - def read_aligned_corpus(self, split_type='train'): - data_dict = defaultdict(list) - iterable = [] - s_list = [] - t_list = [] - - if self.zero_shot: - if split_type == "train": - iterable = zip(self.lang_dict['source'], self.lang_dict['target']) - else: - iterable = zip(self.eval_lang_dict['source'], self.eval_lang_dict['target']) - - elif self.bilingual: - iterable = itertools.product(self.lang_dict['source'], self.lang_dict['target']) - - for s_lang, t_lang in iterable: - if s_lang == t_lang: - continue - if 
self.corpus_type == 'file': - split_type_file_path = os.path.join(self.corpus_path, - "all_talks_{}.tsv".format(split_type)) - s_list, t_list = self.read_from_single_file(split_type_file_path, - s_lang=s_lang, - t_lang=t_lang) - data_dict['source'] += s_list - data_dict['target'] += t_list - new_data_dict = self.filter_text(data_dict) - return new_data_dict - - -def read_langs(corpus_path): - split_type_file_path = os.path.join(corpus_path, 'extracted', - "all_talks_dev.tsv") - with io.open(split_type_file_path, 'r', encoding='utf8') as fp: - reader = csv.DictReader(fp, delimiter='\t', quoting=csv.QUOTE_NONE) - header = next(reader) - return [k for k in header.keys() if k != 'talk_name'] - -def extra_english(corpus_path, split): - split_type_file_path = os.path.join(corpus_path, - f"all_talks_{split}.tsv") - output_split_type_file_path = os.path.join(corpus_path, - f"all_talks_{split}.en") - with io.open(split_type_file_path, 'r', encoding='utf8') as fp, io.open(output_split_type_file_path, 'w', encoding='utf8') as fw: - reader = csv.DictReader(fp, delimiter='\t', quoting=csv.QUOTE_NONE) - for row in reader: - line = row['en'] - fw.write(line + '\n') - de_tok(output_split_type_file_path, 'en') - - - -def tok_file_name(filename, lang): - seps = filename.split('.') - seps.insert(-1, 'tok') - tok_file = '.'.join(seps) - return tok_file - -def de_tok(tok_file, lang): - # seps = tok_file.split('.') - # seps.insert(-1, 'detok') - # de_tok_file = '.'.join(seps) - de_tok_file = tok_file.replace('.tok.', '.') - cmd = 'perl {detok_cmd} -l {lang} < {tok_file} > {de_tok_file}'.format( - detok_cmd=detok_cmd, tok_file=tok_file, - de_tok_file=de_tok_file, lang=lang[:2]) - call(cmd) - -def extra_bitex( - ted_data_path, - lsrc_lang, - ltrg_lang, - target_token, - output_data_path, -): - def get_ted_lang(lang): - long_langs = ['pt-br', 'zh-cn', 'zh-tw', 'fr-ca'] - if lang[:5] in long_langs: - return lang[:5] - elif lang[:4] =='calv': - return lang[:5] - elif lang in ['pt_BR', 'zh_CN', 'zh_TW', 'fr_CA']: - return lang.lower().replace('_', '-') - return lang[:2] - src_lang = get_ted_lang(lsrc_lang) - trg_lang = get_ted_lang(ltrg_lang) - train_lang_dict={'source': [src_lang], 'target': [trg_lang]} - eval_lang_dict = {'source': [src_lang], 'target': [trg_lang]} - - obj = MultiLingualAlignedCorpusReader(corpus_path=ted_data_path, - lang_dict=train_lang_dict, - target_token=target_token, - corpus_type='file', - eval_lang_dict=eval_lang_dict, - zero_shot=False, - bilingual=True) - - os.makedirs(output_data_path, exist_ok=True) - lsrc_lang = lsrc_lang.replace('-', '_') - ltrg_lang = ltrg_lang.replace('-', '_') - obj.save_file(output_data_path + f"/train.{lsrc_lang}-{ltrg_lang}.{lsrc_lang}", - split_type='train', data_type='source', lang=src_lang) - obj.save_file(output_data_path + f"/train.{lsrc_lang}-{ltrg_lang}.{ltrg_lang}", - split_type='train', data_type='target', lang=trg_lang) - - obj.save_file(output_data_path + f"/test.{lsrc_lang}-{ltrg_lang}.{lsrc_lang}", - split_type='test', data_type='source', lang=src_lang) - obj.save_file(output_data_path + f"/test.{lsrc_lang}-{ltrg_lang}.{ltrg_lang}", - split_type='test', data_type='target', lang=trg_lang) - - obj.save_file(output_data_path + f"/valid.{lsrc_lang}-{ltrg_lang}.{lsrc_lang}", - split_type='dev', data_type='source', lang=src_lang) - obj.save_file(output_data_path + f"/valid.{lsrc_lang}-{ltrg_lang}.{ltrg_lang}", - split_type='dev', data_type='target', lang=trg_lang) - - -def bar_custom(current, total, width=80): - print("Downloading: %d%% [%d / %d] Ks" % 
(current / total * 100, current / 1000, total / 1000), end='\r') - - -def download_and_extract(download_to, extract_to): - url = 'http://phontron.com/data/ted_talks.tar.gz' - filename = f"{download_to}/ted_talks.tar.gz" - if os.path.exists(filename): - print(f'{filename} has already been downloaded so skip') - else: - filename = wget.download(url, filename, bar=bar_custom) - if os.path.exists(f'{extract_to}/all_talks_train.tsv'): - print(f'Already extracted so skip') - else: - extract_cmd = f'tar xzfv "{filename}" -C "{extract_to}"' - call(extract_cmd) - - -if __name__ == "__main__": - import argparse - parser = argparse.ArgumentParser() - parser.add_argument('--ted_data_path', type=str, default=WORKDIR_ROOT, required=False) - parser.add_argument( - '--direction-list', - type=str, - # default=None, - #for ML50 - default=( - "bn_IN-en_XX,he_IL-en_XX,fa_IR-en_XX,id_ID-en_XX,sv_SE-en_XX,pt_XX-en_XX,ka_GE-en_XX,ka_GE-en_XX,th_TH-en_XX," - "mr_IN-en_XX,hr_HR-en_XX,uk_UA-en_XX,az_AZ-en_XX,mk_MK-en_XX,gl_ES-en_XX,sl_SI-en_XX,mn_MN-en_XX," - #non-english directions - # "fr_XX-de_DE," # replaced with wmt20 - # "ja_XX-ko_KR,es_XX-pt_XX,ru_RU-sv_SE,hi_IN-bn_IN,id_ID-ar_AR,cs_CZ-pl_PL,ar_AR-tr_TR" - ), - required=False) - parser.add_argument('--target-token', action='store_true', default=False) - parser.add_argument('--extract-all-english', action='store_true', default=False) - - args = parser.parse_args() - - import sys - import json - - # TED Talks data directory - ted_data_path = args.ted_data_path - - download_to = f'{ted_data_path}/downloads' - extract_to = f'{ted_data_path}/extracted' - - #DESTDIR=${WORKDIR_ROOT}/ML50/raw/ - output_path = f'{ted_data_path}/ML50/raw' - os.makedirs(download_to, exist_ok=True) - os.makedirs(extract_to, exist_ok=True) - os.makedirs(output_path, exist_ok=True) - download_and_extract(download_to, extract_to) - - - if args.extract_all_english: - for split in ['train', 'dev', 'test']: - extra_english(ted_data_path, split) - exit(0) - if args.direction_list is not None: - directions = args.direction_list.strip().split(',') - directions = [tuple(d.strip().split('-', 1)) for d in directions if d] - else: - langs = read_langs(ted_data_path) - # directions = [ - # '{}.{}'.format(src, tgt) - # for src in langs - # for tgt in langs - # if src < tgt - # ] - directions = [('en', tgt) for tgt in langs if tgt != 'en'] - print(f'num directions={len(directions)}: {directions}') - - for src_lang, trg_lang in directions: - print('--working on {}-{}'.format(src_lang, trg_lang)) - extra_bitex( - extract_to, - src_lang, - trg_lang, - target_token=args.target_token, - output_data_path=output_path - ) diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_wat19_my.sh b/kosmos-g/fairseq/examples/multilingual/data_scripts/download_wat19_my.sh deleted file mode 100644 index c1e2d4728..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_wat19_my.sh +++ /dev/null @@ -1,36 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." 
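-    # Every script in this directory derives its download, extraction and
-    # output paths from $WORKDIR_ROOT (raw bitext lands in
-    # $WORKDIR_ROOT/ML50/raw), so refuse to run without it.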
- exit -fi - - -SRCDIR=$WORKDIR_ROOT/indic_languages_corpus -DESTDIR=$WORKDIR_ROOT/ML50/raw -mkdir -p $SRCDIR -mkdir -p $DESTDIR - -WAT_MY_EN=wat2020.my-en.zip -cd $SRCDIR -# please refer to http://lotus.kuee.kyoto-u.ac.jp/WAT/my-en-data/ for latest URL if the following url expired -#- The data used for WAT2020 are identical to those used in WAT2019. -wget http://lotus.kuee.kyoto-u.ac.jp/WAT/my-en-data/$WAT_MY_EN -unzip $WAT_MY_EN - - -SRC_EXTRACT_DIR=$SRCDIR/wat2020.my-en/alt - -cp $SRC_EXTRACT_DIR/train.alt.en $DESTDIR/train.my_MM-en_XX.en_XX -cp $SRC_EXTRACT_DIR/train.alt.my $DESTDIR/train.my_MM-en_XX.my_MM -cp $SRC_EXTRACT_DIR/dev.alt.en $DESTDIR/valid.my_MM-en_XX.en_XX -cp $SRC_EXTRACT_DIR/dev.alt.my $DESTDIR/valid.my_MM-en_XX.my_MM -cp $SRC_EXTRACT_DIR/test.alt.en $DESTDIR/test.my_MM-en_XX.en_XX -cp $SRC_EXTRACT_DIR/test.alt.my $DESTDIR/test.my_MM-en_XX.my_MM diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_wmt19_and_before.py b/kosmos-g/fairseq/examples/multilingual/data_scripts/download_wmt19_and_before.py deleted file mode 100644 index 3465731eb..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_wmt19_and_before.py +++ /dev/null @@ -1,899 +0,0 @@ -from typing import NamedTuple, List -from urllib.parse import urlparse -import os, sys -import subprocess -from subprocess import check_call, check_output -import glob -import wget -import re -import multiprocessing as mp -from functools import partial -import pathlib -from collections import OrderedDict - -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. Exitting..."') - sys.exit(-1) - -# scripts and data locations -CWD = os.getcwd() -UTILS = f"{CWD}/utils" - -MOSES = f"{UTILS}/mosesdecoder" -SGM_TOOL = f'{MOSES}/scripts/ems/support/input-from-sgm.perl' - -TMX2CORPUS = f"{UTILS}/tmx2corpus" -TMX_TOOL = f'python {TMX2CORPUS}/tmx2corpus.py' - -to_data_path = f'{WORKDIR_ROOT}/wmt' -download_to = f'{to_data_path}/downloads' -manually_downloads = f'{to_data_path}/downloads' -extract_to = f'{to_data_path}/extracted' -#DESTDIR=${WORKDIR_ROOT}/ML50/raw/ -raw_data = f'{WORKDIR_ROOT}/ML50/raw' -#### - -class DLDataset(NamedTuple): - name: str - train_urls: List[str] - valid_urls: List[str] - test_urls: List[str] - train_files_patterns: List[str] = [] - valid_files_patterns: List[str] = [] - test_files_patterns: List[str] = [] - - - -def bar_custom(current, total, width=80): - print("Downloading: %d%% [%d / %d] Ks" % (current / total * 100, current / 1000, total / 1000), end='\r') - -def get_downloaded_file(dl_folder, url): - if isinstance(url, tuple): - url, f = url - else: - url_f = urlparse(url) - # f = os.path.split(url_f.path)[-1] - f = '_'.join(url_f.path.split('/')[1:]) - return url, f"{dl_folder}/{f}" - -def download_parts_and_combine(dl_folder, urls, filename): - parts = [] - for url_record in urls: - url, part_file = get_downloaded_file(dl_folder, url_record) - if os.path.exists(part_file): - print(f'{part_file} has already been downloaded so skip') - else: - part_file = wget.download(url, part_file, bar=bar_custom) - parts.append(part_file) - - def get_combine_cmd(parts): - #default as tar.gz.?? 
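-        # Some corpora (e.g. the UN corpus below) are published as numbered
-        # archive parts such as UNv1.0.en-zh.tar.gz.00/.01; concatenating the
-        # parts in order reconstructs the original archive, so `cat` suffices.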
-        return f'cat {" ".join(parts)} > {filename}'
-
-    combine_cmd = get_combine_cmd(parts)
-    call(combine_cmd, debug=True)
-    return filename
-
-def download_a_url(dl_folder, url):
-    url, filename = get_downloaded_file(dl_folder, url)
-    if os.path.exists(filename):
-        print(f'{filename} has already been downloaded so skip')
-        return filename
-
-    print(f'downloading {url} to {filename}')
-    if isinstance(url, list) or isinstance(url, tuple):
-        download_parts_and_combine(dl_folder, url, filename)
-    else:
-        wget.download(url, filename, bar=bar_custom)
-    print(f'downloaded: {filename}')
-    return filename
-
-def download_files(dl_folder, urls, completed_urls={}):
-    for url_record in urls:
-        url, _ = get_downloaded_file(dl_folder, url_record)
-        filename = download_a_url(dl_folder, url_record)
-        completed_urls[str(url)] = filename
-    return completed_urls
-
-def check_need_manual_downalod(dl_folder, to_manually_download_urls):
-    to_be_manually_dowloaded = []
-    manually_completed_urls = {}
-    for url_record, instruction in to_manually_download_urls:
-        url, filename = get_downloaded_file(dl_folder, url_record)
-        if not os.path.exists(filename):
-            print(f'{url} needs to be downloaded manually; please follow {instruction} and copy it to {filename}')
-            to_be_manually_dowloaded.append((url, filename))
-        else:
-            manually_completed_urls[url] = filename
-    # if len(to_be_manually_dowloaded) > 0:
-    #     raise ValueError('Missing files that need to be downloaded manually; stop the process now.')
-    return to_be_manually_dowloaded
-
-def download_dataset(to_folder, dl_dataset, completed_urls={}):
-    download_files(to_folder, dl_dataset.train_urls, completed_urls)
-    download_files(to_folder, dl_dataset.valid_urls, completed_urls)
-    download_files(to_folder, dl_dataset.test_urls, completed_urls)
-    print('completed downloading')
-    return completed_urls
-
-def call(cmd, debug=False):
-    if debug:
-        print(cmd)
-    check_call(cmd, shell=True)
-
-
-def get_extract_name(file_path):
-    path = os.path.split(file_path)
-    return path[-1] + '_extract' #.split('.')[0]
-
-def extract_file(downloaded_file, extract_folder, get_extract_name=get_extract_name, debug=False):
-    extract_name = get_extract_name(downloaded_file)
-    extract_to = f'{extract_folder}/{extract_name}'
-    os.makedirs(extract_to, exist_ok=True)
-    if os.path.exists(f'{extract_to}/DONE'):
-        print(f'{downloaded_file} has already been extracted to {extract_to} so skip')
-        return extract_to
-    def get_extract_cmd(filename):
-        if filename.endswith('.tgz') or filename.endswith('tar.gz'):
-            return f'tar xzfv {filename} -C {extract_to}'
-        elif filename.endswith('.gz.tar'):
-            return f'tar xfv {filename} -C {extract_to}; (cd {extract_to}; gzip -d *.gz; [ $? 
-eq 0 ] || gzip -d */*.gz)' - elif filename.endswith('.tar'): - return f'tar xfv {filename} -C {extract_to}' - elif filename.endswith('.gz'): - return f'cp {filename} {extract_to}; (cd {extract_to}; gzip -d *.gz)' - elif filename.endswith('.zip'): - return f'unzip {filename} -d {extract_to}' - extract_cmd = get_extract_cmd(downloaded_file) - print(f'extracting {downloaded_file}') - if isinstance(extract_cmd, list): - for c in extract_cmd: - call(c, debug=debug) - else: - call(extract_cmd, debug=debug) - call(f'echo DONE > {extract_to}/DONE') - return extract_to - - -def extract_all_files( - completed_urls, extract_folder, - get_extract_name=get_extract_name, - completed_extraction={}, - debug=False): - extracted_folders = OrderedDict() - for url, downloaded_file in set(completed_urls.items()): - if downloaded_file in completed_extraction: - print(f'{downloaded_file} is already extracted; so skip') - continue - folder = extract_file(downloaded_file, extract_folder, get_extract_name, debug) - extracted_folders[url] = folder - return extracted_folders - - -def my_glob(folder): - for p in [f'{folder}/*', f'{folder}/*/*', f'{folder}/*/*/*']: - for f in glob.glob(p): - yield f - - -def sgm2raw(sgm, debug): - to_file = sgm[0:len(sgm) - len('.sgm')] - if os.path.exists(to_file): - debug and print(f'{sgm} already converted to {to_file}; so skip') - return to_file - cmd = f'{SGM_TOOL} < {sgm} > {to_file}' - call(cmd, debug) - return to_file - -def tmx2raw(tmx, debug): - to_file = tmx[0:len(tmx) - len('.tmx')] - to_folder = os.path.join(*os.path.split(tmx)[:-1]) - if os.path.exists(f'{to_folder}/bitext.en'): - debug and print(f'{tmx} already extracted to {to_file}; so skip') - return to_file - cmd = f'(cd {to_folder}; {TMX_TOOL} {tmx})' - call(cmd, debug) - return to_file - -CZENG16_REGEX = re.compile(r'.*?data.plaintext-format/0[0-9]train$') -WMT19_WIKITITLES_REGEX = re.compile(r'.*?wikititles-v1.(\w\w)-en.tsv.gz') -TSV_REGEX = re.compile(r'.*?(\w\w)-(\w\w).tsv$') - - - -def cut_wikitles(wiki_file, debug): - # different languages have different file names: - if wiki_file.endswith('wiki/fi-en/titles.fi-en'): - to_file1 = f'{wiki_file}.fi' - to_file2 = f'{wiki_file}.en' - BACKSLASH = '\\' - cmd1 = f"cat {wiki_file} | sed 's/|||/{BACKSLASH}t/g' |cut -f1 |awk '{{$1=$1}};1' > {to_file1}" - cmd2 = f"cat {wiki_file} | sed 's/|||/{BACKSLASH}t/g' |cut -f2 |awk '{{$1=$1}};1' > {to_file2}" -# elif WMT19_WIKITITLES_REGEX.match(wiki_file): -# src = WMT19_WIKITITLES_REGEX.match(wiki_file).groups()[0] -# to_file1 = f'{wiki_file}.{src}' -# to_file2 = f'{wiki_file}.en' -# cmd1 = f"cat {wiki_file} | cut -f1 |awk '{{$1=$1}};1' > {to_file1}" -# cmd2 = f"cat {wiki_file} | cut -f2 |awk '{{$1=$1}};1' > {to_file2}" - else: - return None - if os.path.exists(to_file1) and os.path.exists(to_file2): - debug and print(f'{wiki_file} already processed to {to_file1} and {to_file2}; so skip') - return wiki_file - - call(cmd1, debug=debug) - call(cmd2, debug=debug) - return wiki_file - -def cut_tsv(file, debug): - m = TSV_REGEX.match(file) - if m is None: - raise ValueError(f'{file} is not matching tsv pattern') - src = m.groups()[0] - tgt = m.groups()[1] - - to_file1 = f'{file}.{src}' - to_file2 = f'{file}.{tgt}' - cmd1 = f"cat {file} | cut -f1 |awk '{{$1=$1}};1' > {to_file1}" - cmd2 = f"cat {file} | cut -f2 |awk '{{$1=$1}};1' > {to_file2}" - if os.path.exists(to_file1) and os.path.exists(to_file2): - debug and print(f'{file} already processed to {to_file1} and {to_file2}; so skip') - return file - - call(cmd1, debug=debug) - 
call(cmd2, debug=debug) - return file - - -def convert_file_if_needed(file, debug): - if file.endswith('.sgm'): - return sgm2raw(file, debug) - elif file.endswith('.tmx'): - return tmx2raw(file, debug) - elif file.endswith('wiki/fi-en/titles.fi-en'): - return cut_wikitles(file, debug) -# elif WMT19_WIKITITLES_REGEX.match(file): -# return cut_wikitles(file, debug) - elif file.endswith('.tsv'): - return cut_tsv(file, debug) - elif CZENG16_REGEX.match(file): - return convert2czeng17(file, debug) - else: - return file - - -def convert_files_if_needed(extracted_foldrs, my_glob=my_glob, debug=False): - return { - url: list(sorted(set(convert_file_if_needed(f, debug)) for f in sorted(set(my_glob(folder))))) - for url, folder in extracted_foldrs.items() - } - -def match_patt(file_path, file_pattern, src, tgt, lang): - return file_pattern.format(src=src, tgt=tgt, lang=lang) in file_path - -def match_patts(file_path, file_patterns, src, tgt, lang): - for file_pattern in file_patterns: - params = { k: v for k, v in [('src', src), ('tgt', tgt), ('lang', lang)] if k in file_pattern} - matching = file_pattern.format(**params) - - if isinstance(file_pattern, tuple): - pattern, directions = file_pattern - if f'{src}-{tgt}' in directions and matching in file_path: - return True - else: - if matching in file_path: - return True - return False - -def extracted_glob(extracted_folder, file_patterns, src, tgt, lang): - def get_matching_pattern(file_pattern): - params = { - k: v - for k, v in [('src', src), ('tgt', tgt), ('lang', lang)] - if '{' + k + '}' in file_pattern - } - file_pattern = re.sub(r'{src:(.*?)}', r'\1' if lang == src else '', file_pattern) - file_pattern = re.sub(r'{tgt:(.*?)}', r'\1' if lang == tgt else '', file_pattern) - file_pattern = file_pattern.format(**params) - return file_pattern - for file_pattern in file_patterns: - if isinstance(file_pattern, tuple): - file_pattern, lang_pairs = file_pattern - if f'{src}-{tgt}' not in lang_pairs: - continue -# print('working on pattern: ', file_pattern, lang_pairs ) - matching_pattern = get_matching_pattern(file_pattern) - if matching_pattern is None: - continue - glob_patterns = f'{extracted_folder}/{matching_pattern}' -# print('glob_patterns: ', glob_patterns) - for f in glob.glob(glob_patterns): - yield f - -# for debug usage -def all_extracted_files(split, src, tgt, extracted_folders, split_urls): - def get_url(url): - if isinstance(url, tuple): - url, downloaded_file = url - return url - return [ - f - for url in split_urls - for f in my_glob(extracted_folders[str(get_url(url))]) - ] - -def concat_files(split, src, tgt, extracted_folders, split_urls, path_patterns, to_folder, debug=False): -# if debug: -# print('extracted files to be filtered by patterns: ', -# '\n\t'.join(sorted(all_extracted_files(split, src, tgt, extracted_folders, split_urls)))) - for lang in [src, tgt]: - to_file = f'{to_folder}/{split}.{src}-{tgt}.{lang}' - s_src, s_tgt, s_lang = src.split('_')[0], tgt.split('_')[0], lang.split('_')[0] - files = [] - for url in split_urls: - if isinstance(url, tuple): - url, downloaded_file = url - if str(url) not in extracted_folders: - print(f'warning: {url} not in extracted files') - for extracted_file in set( - extracted_glob( - extracted_folders[str(url)], path_patterns, - s_src, s_tgt, s_lang)): - files.append(extracted_file) - if len(files) == 0: - print('warning: ', f'No files found for split {to_file}') - continue - files = sorted(set(files)) - print(f'concating {len(files)} files into {to_file}') - cmd = ['cat'] + [f'"{f}"' for 
f in files] + [f'>{to_file}'] - cmd = " ".join(cmd) - call(cmd, debug=debug) - -UTILS = os.path.join(pathlib.Path(__file__).parent, 'utils') -LID_MODEL = f'{download_to}/lid.176.bin' -LID_MULTI = f'{UTILS}/fasttext_multi_filter.py' - -def lid_filter(split, src, tgt, from_folder, to_folder, debug=False): - if not os.path.exists(LID_MODEL): - call(f'wget -nc https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin -O {LID_MODEL}') - from_prefix = f'{from_folder}/{split}.{src}-{tgt}' - to_prefix = f'{to_folder}/{split}.{src}-{tgt}' - if os.path.exists(f'{from_prefix}.{src}') and os.path.exists(f'{from_prefix}.{tgt}'): - s_src, s_tgt = src.split('_')[0], tgt.split('_')[0] - cmd = ( - f'python {LID_MULTI} --model {LID_MODEL} --inputs {from_prefix}.{src} {from_prefix}.{tgt} ' - f'--langs {s_src} {s_tgt} --outputs {to_prefix}.{src} {to_prefix}.{tgt}' - ) - print(f'filtering {from_prefix}') - call(cmd, debug=debug) - -def concat_into_splits(dl_dataset, src, tgt, extracted_folders, to_folder, debug): - to_folder_tmp = f"{to_folder}_tmp" - os.makedirs(to_folder_tmp, exist_ok=True) - concat_files('train', src, tgt, - extracted_folders, - split_urls=dl_dataset.train_urls, - path_patterns=dl_dataset.train_files_patterns, - to_folder=to_folder_tmp, debug=debug) - lid_filter('train', src, tgt, to_folder_tmp, to_folder, debug) - - concat_files('valid', src, tgt, - extracted_folders, - split_urls=dl_dataset.valid_urls, - path_patterns=dl_dataset.valid_files_patterns, - to_folder=to_folder, debug=debug) - concat_files('test', src, tgt, - extracted_folders, - split_urls=dl_dataset.test_urls, - path_patterns=dl_dataset.test_files_patterns, - to_folder=to_folder, debug=debug) - - -def download_multi(dl_folder, extract_folder, urls, num_processes=8, debug=False): - pool = mp.Pool(processes=num_processes) - download_f = partial(download_a_url, dl_folder) - downloaded_files = pool.imap_unordered(download_f, urls) - pool.close() - pool.join() - -BLEU_REGEX = re.compile("^BLEU\\S* = (\\S+) ") -def run_eval_bleu(cmd): - output = check_output(cmd, shell=True, stderr=subprocess.STDOUT).decode("utf-8").strip() - print(output) - bleu = -1.0 - for line in output.strip().split('\n'): - m = BLEU_REGEX.search(line) - if m is not None: - bleu = m.groups()[0] - bleu = float(bleu) - break - return bleu - -def check_wmt_test_bleu(raw_folder, wmt_lang_pairs): - not_matchings = [] - for wmt, src_tgts in wmt_lang_pairs: - for src_tgt in src_tgts: - print(f'checking test bleus for: {src_tgt} at {wmt}') - src, tgt = src_tgt.split('-') - ssrc, stgt = src[:2], tgt[:2] - if os.path.exists(f'{raw_folder}/test.{tgt}-{src}.{src}'): - # reversed direction may have different test set - test_src = f'{raw_folder}/test.{tgt}-{src}.{src}' - else: - test_src = f'{raw_folder}/test.{src}-{tgt}.{src}' - cmd1 = f'cat {test_src} | sacrebleu -t "{wmt}" -l {stgt}-{ssrc}; [ $? -eq 0 ] || echo ""' - test_tgt = f'{raw_folder}/test.{src}-{tgt}.{tgt}' - cmd2 = f'cat {test_tgt} | sacrebleu -t "{wmt}" -l {ssrc}-{stgt}; [ $? 
-eq 0 ] || echo ""' - bleu1 = run_eval_bleu(cmd1) - if bleu1 != 100.0: - not_matchings.append(f'{wmt}:{src_tgt} source side not matching: {test_src}') - bleu2 = run_eval_bleu(cmd2) - if bleu2 != 100.0: - not_matchings.append(f'{wmt}:{src_tgt} target side not matching: {test_tgt}') - return not_matchings - -def download_and_extract( - to_folder, lang_pairs, dl_dataset, - to_manually_download_urls, - completed_urls={}, completed_extraction={}, - debug=False): - - dl_folder = f'{to_folder}/downloads' - extract_folder = f'{to_folder}/extracted' - raw_folder = f'{to_folder}/raw' - lid_filtered = f'{to_folder}/lid_filtered' - - os.makedirs(extract_folder, exist_ok=True) - os.makedirs(raw_folder, exist_ok=True) - os.makedirs(lid_filtered, exist_ok=True) - - - to_be_manually_dowloaded = check_need_manual_downalod(dl_folder, to_manually_download_urls) - - completed_urls = download_dataset( - dl_folder, dl_dataset, completed_urls) - if debug: - print('completed urls: ', completed_urls) - - - extracted_folders = extract_all_files( - completed_urls, - extract_folder=extract_folder, - completed_extraction=completed_extraction, - debug=debug) - if debug: - print('download files have been extracted to folders: ', extracted_folders) - - converted_files = convert_files_if_needed(extracted_folders, debug=False) - for src_tgt in lang_pairs: - print(f'working on {dl_dataset.name}: {src_tgt}') - src, tgt = src_tgt.split('-') - concat_into_splits(dl_dataset, - src=src, tgt=tgt, - extracted_folders=extracted_folders, - to_folder=raw_folder, debug=debug) - print('completed data into: ', raw_folder) - -def download_czang16(download_to, username=None): - wgets = [ - f'wget --user={username} --password=czeng -P {download_to} http://ufallab.ms.mff.cuni.cz/~bojar/czeng16-data/data-plaintext-format.{i}.tar' - for i in range(10)] - cmds = [] - for i, cmd in enumerate(wgets): - filename = f'{download_to}/data-plaintext-format.{i}.tar' - if os.path.exists(filename): - print(f'{filename} has already been downloaded; so skip') - continue - cmds.append(cmd) - if cmds and username is None: - raise ValueError('No czeng username is given; please register at http://ufal.mff.cuni.cz/czeng/czeng16 to obtain username to download') - for cmd in cmds: - call(cmd) - print('done with downloading czeng1.6') - -def download_czeng17_script(download_to, extract_folder, debug=False): - url = 'http://ufal.mff.cuni.cz/czeng/download.php?f=convert_czeng16_to_17.pl.zip' - filename = f'{download_to}/convert_czeng16_to_17.pl.zip' - extract_to = f'{extract_folder}/{get_extract_name(filename)}' - script_path = f'{extract_to}/convert_czeng16_to_17.pl' - - if not os.path.exists(script_path): - wget.download(url, filename, bar=bar_custom) - extract_to = extract_file(f'{download_to}/convert_czeng16_to_17.pl.zip', extract_folder, get_extract_name=get_extract_name, debug=debug) - return script_path - -czeng17_script_path = "" -def convert2czeng17(file, debug): - en_file = f'{file}.en' - cs_file = f'{file}.cs' - - if not os.path.exists(en_file) or not os.path.exists(cs_file): - cs_cmd = f'cat {file} | perl {czeng17_script_path} | cut -f3 > {cs_file}' - en_cmd = f'cat {file} | perl {czeng17_script_path} | cut -f4 > {en_file}' - call(cs_cmd, debug) - call(en_cmd, debug) - else: - print(f'already extracted: {en_file} and {cs_file}') - return file - -def extract_czeng17(extract_folder, debug=False): - url = 'http://ufal.mff.cuni.cz/czeng/download.php?f=convert_czeng16_to_17.pl.zip' - filename = f'{download_to}/convert_czeng16_to_17.pl.zip' - extract_to = 
f'{extract_folder}/{get_extract_name(filename)}' - script_path = f'{extract_to}/convert_czeng16_to_17.pl' - - if not os.path.exists(script_path): - wget.download(url, filename, bar=bar_custom) - extract_to = extract_file(f'{download_to}/convert_czeng16_to_17.pl.zip', extract_folder, get_extract_name=get_extract_name, debug=debug) - return script_path - -######### -# definitions of wmt data sources -# for es-en -# Punctuation in the official test sets will be encoded with ASCII characters (not complex Unicode characters) as much as possible. You may want to normalize your system's output before submission. You are able able to use a rawer version of the test sets that does not have this normalization. -# script to normalize punctuation: http://www.statmt.org/wmt11/normalize-punctuation.perl -wmt13_es_en = DLDataset( - name='wmt13_es-en', - train_urls=[ - 'http://www.statmt.org/wmt13/training-parallel-europarl-v7.tgz', - 'http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz', - 'http://www.statmt.org/wmt13/training-parallel-un.tgz', - 'http://www.statmt.org/wmt13/training-parallel-nc-v8.tgz', - ], - valid_urls=[ - ('http://www.statmt.org/wmt13/dev.tgz', 'wmt13_dev.tgz') - ], - test_urls=[ - ('http://www.statmt.org/wmt13/test.tgz', 'wmt13_test.tgz') - ], - train_files_patterns=[ - ('*/europarl-v7.{src}-{tgt}.{lang}', ['es-en']), - ('*commoncrawl.{src}-{tgt}.{lang}', ['es-en']), - ('*/news-commentary-v8.{src}-{tgt}.{lang}', ['es-en']), - ('un/*undoc.2000.{src}-{tgt}.{lang}', ['es-en']), - ] , - valid_files_patterns=[ - ('dev/newstest2012.{lang}', ['es-en']) - ], - test_files_patterns=[ - ('test/newstest*.{lang}', ['es-en']) - ], -) - -wmt14_de_fr_en = DLDataset( - name='wmt14_de_fr_en', - train_urls=[ - 'http://www.statmt.org/wmt13/training-parallel-europarl-v7.tgz', - 'http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz', - 'http://www.statmt.org/wmt13/training-parallel-un.tgz', - 'http://www.statmt.org/wmt14/training-parallel-nc-v9.tgz', - ('http://www.statmt.org/wmt10/training-giga-fren.tar', 'training-giga-fren.gz.tar'), #it is actuall a gz.tar - ], - valid_urls=[ - ('http://www.statmt.org/wmt14/dev.tgz', 'wmt14_dev.tgz'), - ], - test_urls=[ - ('http://www.statmt.org/wmt14/test-full.tgz', 'wmt14_test_full.tgz'), # cleaned test sets - ], - train_files_patterns=[ - ('*/europarl-v7.{src}-{tgt}.{lang}', ['fr-en', 'de-en']), - ('*commoncrawl.{src}-{tgt}.{lang}', ['fr-en', 'de-en']), - ('*/*news-commentary-v9.{src}-{tgt}.{lang}', ['fr-en', 'de-en']), - ('un/undoc.2000.{src}-{tgt}.{lang}', ['fr-en']), - ('*giga-{src}{tgt}*{lang}', ['fr-en']) - ], - valid_files_patterns=[ - ('dev/newstest2013.{lang}', ['fr-en', 'de-en']) - ], - test_files_patterns=[ - ('test-full/newstest*{src}{tgt}-{src:src}{tgt:ref}.{lang}', ['en-de', 'de-en', 'fr-en', 'en-fr']), - ], -) - -# pip install git+https://github.com/amake/tmx2corpus.git -wmt16_ro_en = DLDataset( - name='wmt16_ro-en', - train_urls=[ - ('http://data.statmt.org/wmt16/translation-task/training-parallel-ep-v8.tgz', 'wmt16_training-parallel-ep-v8.tgz'), - ('http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz', 'en-ro.tmx.gz'), - ], - valid_urls=[ - ('http://data.statmt.org/wmt16/translation-task/dev-romanian-updated.tgz', 'wmt16_dev.tgz') - ], - test_urls=[ - ('http://data.statmt.org/wmt16/translation-task/test.tgz', 'wmt16_test.tgz') - ], - train_files_patterns=[ - ('*/*europarl-v8.{src}-{tgt}.{lang}', ['ro-en']), - ('bitext.{lang}', ['ro-en']) #setimes from tmux - ] , - valid_files_patterns=[ - 
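-        # Each entry is a glob pattern, optionally paired with the language
-        # directions it applies to; {src}, {tgt} and {lang} are substituted by
-        # extracted_glob(), and {src:X}/{tgt:Y} expand to X or Y only on the
-        # matching side.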
('dev/newsdev2016*{src}{tgt}*.{lang}', ['ro-en', 'ro-en']) - ], - test_files_patterns=[ - ('test/newstest*{src}{tgt}*.{lang}', ['ro-en', 'en-ro']) - ], -) - -cwmt_wmt_instruction = 'cwmt download instruction at: http://nlp.nju.edu.cn/cwmt-wmt' -wmt17_fi_lv_tr_zh_en_manual_downloads = [ - # fake urls to have unique keys for the data - ( ('http://nlp.nju.edu.cn/cwmt-wmt/CASIA2015.zip', 'CASIA2015.zip'), cwmt_wmt_instruction), - ( ('http://nlp.nju.edu.cn/cwmt-wmt/CASICT2011.zip', 'CASICT2011.zip'), cwmt_wmt_instruction), - ( ('http://nlp.nju.edu.cn/cwmt-wmt/CASICT2015.zip', 'CASICT2015.zip'), cwmt_wmt_instruction), - ( ('http://nlp.nju.edu.cn/cwmt-wmt/Datum2015.zip', 'Datum2015.zip'), cwmt_wmt_instruction), - ( ('http://nlp.nju.edu.cn/cwmt-wmt/Datum2017.zip', 'Datum2017.zip'), cwmt_wmt_instruction), - ( ('http://nlp.nju.edu.cn/cwmt-wmt/NEU2017.zip', 'NEU2017.zip'), cwmt_wmt_instruction), -] -wmt17_fi_lv_tr_zh_en = DLDataset( - name='wmt17_fi_lv_tr_zh_en', - train_urls=[ - ('http://data.statmt.org/wmt17/translation-task/training-parallel-ep-v8.tgz', 'wmt17_training-parallel-ep-v8.tgz'), - 'http://data.statmt.org/wmt17/translation-task/training-parallel-nc-v12.tgz', - 'http://www.statmt.org/wmt15/wiki-titles.tgz', - ('http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-tr.tmx.gz', 'en-tr.tmx.gz'), - ('http://data.statmt.org/wmt17/translation-task/rapid2016.tgz', 'wmt17_rapid2016.tgz'), - 'http://data.statmt.org/wmt17/translation-task/leta.v1.tgz', - 'http://data.statmt.org/wmt17/translation-task/dcep.lv-en.v1.tgz', - 'http://data.statmt.org/wmt17/translation-task/books.lv-en.v1.tgz', - (('https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00', - 'https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.01',), 'UNv1.0.en-zh.tar.gz'), - #manually download files: - ('http://nlp.nju.edu.cn/cwmt-wmt/CASIA2015.zip', 'CASIA2015.zip'), - ('http://nlp.nju.edu.cn/cwmt-wmt/CASICT2011.zip', 'CASICT2011.zip'), - ('http://nlp.nju.edu.cn/cwmt-wmt/CASICT2015.zip', 'CASICT2015.zip'), - ('http://nlp.nju.edu.cn/cwmt-wmt/Datum2015.zip', 'Datum2015.zip'), - ('http://nlp.nju.edu.cn/cwmt-wmt/Datum2017.zip', 'Datum2017.zip'), - ('http://nlp.nju.edu.cn/cwmt-wmt/NEU2017.zip', 'NEU2017.zip'), - ], - valid_urls=[ - ('http://data.statmt.org/wmt17/translation-task/dev.tgz', 'wmt17_dev.tgz'), - ], - test_urls=[ - #NEW: Improved translations for zh test sets - ('http://data.statmt.org/wmt17/translation-task/test-update-1.tgz', 'wmt17_test_zh_en.tgz'), - ('http://data.statmt.org/wmt17/translation-task/test.tgz', 'wmt17_test_others.tgz') - ], - train_files_patterns=[ - ('casict*/cas*{src:ch}{tgt:en}.txt', ['zh-en', 'zh-en'] ), - ('casia*/cas*{src:ch}{tgt:en}.txt', ['zh-en', 'zh-en'] ), - ('dataum*/Book*{src:cn}{tgt:en}.txt', ['zh-en', 'zh-en']), - ('neu*/NEU*{src:cn}{tgt:en}.txt', ['zh-en', 'zh-en'] ), - ('*/*UNv1.0.en-zh.{src:zh}{tgt:en}', ['zh-en']), - ('training/*news-commentary-v12.{src}-{tgt}.{lang}', ['zh-en', ]), - - ('*/*europarl-v8.{src}-{tgt}.{lang}', ['fi-en', 'lv-en']), - ('wiki/fi-en/titles.{src}-{tgt}.{lang}', ['fi-en', ]), - ('rapid2016.{tgt}-{src}.{lang}', ['fi-en', 'lv-en']), - ('*/leta.{lang}', ['lv-en']), - ('*/dcep.{lang}', ['lv-en']), - ('*/farewell.{lang}', ['lv-en']), - ('bitext.{lang}', ['tr-en']), - ] , - valid_files_patterns=[ - ('dev/newsdev2017*{src}{tgt}-{src:src}{tgt:ref}.{lang}', - [ - 'fi-en', 'lv-en', 'tr-en', 'zh-en', - 'en-fi', 'en-lv', 'en-tr', 'en-zh' - ]), - ('dev/newstest2016*{src}{tgt}-{src:src}{tgt:ref}.{lang}', - [ - 'fi-en', 'tr-en', - 
'en-fi', 'en-tr', - ]), - ], - test_files_patterns=[ - ('test/newstest2017-{src}{tgt}-{src:src}{tgt:ref}.{lang}', - [ - 'fi-en', 'lv-en', 'tr-en', - 'en-fi', 'en-lv', 'en-tr', - ]), - ('newstest2017-{src}{tgt}-{src:src}{tgt:ref}.{lang}', - [ - 'zh-en', - 'en-zh' - ]), - ], -) - -czeng_instruction = 'download instruction at: http://ufal.mff.cuni.cz/czeng/czeng16' -#alternative: use the prepared data but detokenize it? -wmt18_cs_et_en_manual_downloads = [ -#for cs, need to register and download; Register and download CzEng 1.6. -#Better results can be obtained by using a subset of sentences, released under a new version name CzEng 1.7. - # ((f'http://ufallab.ms.mff.cuni.cz/~bojar/czeng16-data/data-plaintext-format.{i}.tar', - # f'data-plaintext-format.{i}.tar'), czeng_instruction) - # for i in range(10) -] - -wmt18_cs_et_en = DLDataset( - name='wmt18_cs_et_en', - train_urls=[ - 'http://www.statmt.org/wmt13/training-parallel-europarl-v7.tgz', - 'http://data.statmt.org/wmt18/translation-task/training-parallel-ep-v8.tgz', - 'https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-cs.zipporah0-dedup-clean.tgz', - 'https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-et.zipporah0-dedup-clean.tgz', - 'http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz', - 'http://data.statmt.org/wmt18/translation-task/training-parallel-nc-v13.tgz', - ('http://data.statmt.org/wmt18/translation-task/rapid2016.tgz', 'wmt18_rapid2016.tgz'), - # (tuple( - # (f'http://ufallab.ms.mff.cuni.cz/~bojar/czeng16-data/data-plaintext-format.{i}.tar', - # f'data-plaintext-format.{i}.tar') - # for i in range(10) - # ), - # 'czeng16_data_plaintext.gz.tar'), - ], - valid_urls=[ - ('http://data.statmt.org/wmt18/translation-task/dev.tgz', 'wmt18_dev.tgz'), - ], - test_urls=[ - ('http://data.statmt.org/wmt18/translation-task/test.tgz', 'wmt18_test.tgz'), - ], - train_files_patterns=[ - # ('*/*europarl-v7.{src}-{tgt}.{lang}', ['cs-en']), - ('*/*europarl-v8.{src}-{tgt}.{lang}', ['et-en']), - # ('*paracrawl-release1.{tgt}-{src}.zipporah0-dedup-clean.{lang}', ['cs-en', 'et-en']), - ('*paracrawl-release1.{tgt}-{src}.zipporah0-dedup-clean.{lang}', ['et-en']), - # ('*commoncrawl.{src}-{tgt}.{lang}', ['cs-en']), - # ('*/news-commentary-v13.{src}-{tgt}.{lang}', ['cs-en']), - # ('data.plaintext-format/*train.{lang}', ['cs-en']), - ('rapid2016.{tgt}-{src}.{lang}', ['et-en']), - ] , - valid_files_patterns=[ - ('dev/newsdev2018*{src}{tgt}-{src:src}{tgt:ref}.{lang}', ['et-en']), - # ('dev/newstest2017*{src}{tgt}-{src:src}{tgt:ref}.{lang}', ['cs-en']) - ], - test_files_patterns=[ - ('test/newstest2018-{src}{tgt}-{src:src}{tgt:ref}.{lang}', - # ['cs-en', 'et-en']), - ['et-en']), - ] -) - -ru_en_yandex_instruction = 'Yandex Corpus download instruction at: https://translate.yandex.ru/corpus?lang=en' -wmt19_ru_gu_kk_lt_manual_downloads = [ - (('https://translate.yandex.ru/corpus?lang=en', 'wmt19_1mcorpus.zip'), ru_en_yandex_instruction) -] -wmt19_ru_gu_kk_lt = DLDataset( - name='wmt19_ru_gu_kk_lt', - train_urls=[ - 'http://www.statmt.org/europarl/v9/training/europarl-v9.lt-en.tsv.gz', - 'https://s3.amazonaws.com/web-language-models/paracrawl/release3/en-lt.bicleaner07.tmx.gz', - 'https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-ru.zipporah0-dedup-clean.tgz', - 'http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz', - 'http://data.statmt.org/news-commentary/v14/training/news-commentary-v14-wmt19.en-kk.tsv.gz', - 
'http://data.statmt.org/news-commentary/v14/training/news-commentary-v14.en-ru.tsv.gz', - 'http://data.statmt.org/wikititles/v1/wikititles-v1.kk-en.tsv.gz', - 'http://data.statmt.org/wikititles/v1/wikititles-v1.ru-en.tsv.gz', - 'http://data.statmt.org/wikititles/v1/wikititles-v1.kk-en.tsv.gz', - 'http://data.statmt.org/wikititles/v1/wikititles-v1.lt-en.tsv.gz', - 'http://data.statmt.org/wikititles/v1/wikititles-v1.gu-en.tsv.gz', - (('https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.00', - 'https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.01', - 'https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.02',), - 'wmt19_UNv1.0.en-ru.tar.gz'), - 'https://tilde-model.s3-eu-west-1.amazonaws.com/rapid2016.en-lt.tmx.zip', - ('https://translate.yandex.ru/corpus?lang=en', 'wmt19_1mcorpus.zip'), - ], - valid_urls=[ - ('http://data.statmt.org/wmt19/translation-task/dev.tgz', 'wmt19_dev.tgz'), - ], - test_urls=[ - ('http://data.statmt.org/wmt19/translation-task/test.tgz', 'wmt19_test.tgz'), - ], - train_files_patterns=[ - ('*europarl-v9.{src}-{tgt}.tsv.{lang}', ['lt-en']), - #paracrawl - ('*paracrawl-release1.{tgt}-{src}.zipporah0-dedup-clean.{lang}', ['ru-en']), - ('bitext.{lang}', ['lt-en',]), - ('*commoncrawl.{src}-{tgt}.{lang}', ['ru-en',]), - ('*news-commentary-v14-wmt19.{tgt}-{src}.tsv.{lang}', ['kk-en', ]), - ('*news-commentary-v14.{tgt}-{src}.tsv.{lang}', ['ru-en']), - #yandex - ('corpus.{tgt}_{src}.1m.{lang}', ['ru-en']), - ('wikititles_v1_wikititles-v1.{src}-{tgt}.tsv.{lang}', ['ru-en', 'kk-en', 'lt-en', 'gu-en']), - ('*/UNv1.0.{tgt}-{src}.{lang}', ['ru-en']), - #rapid - ('bitext.{lang}', ['lt-en']) - ], - valid_files_patterns=[ - ('dev/newsdev2019*{src}{tgt}-{src:src}{tgt:ref}.{lang}', ['gu-en', 'kk-en', 'lt-en']), - ('dev/newstest2018*{src}{tgt}-{src:src}{tgt:ref}.{lang}', ['ru-en']), - ], - test_files_patterns=[ - ('sgm/newstest2019-{src}{tgt}-{src:src}{tgt:ref}.{lang}', - ['ru-en', 'gu-en', 'kk-en', 'lt-en', 'en-ru', 'en-gu', 'en-kk', 'en-lt']), - ] -) - - -######### - -if __name__ == "__main__": - # speed up the downloads with multiple processing - dl_folder = f'{to_data_path}/downloads' - extract_folder = f'{to_data_path}/extracted' - - urls = [ - url - for dataset in [wmt13_es_en, wmt14_de_fr_en, wmt16_ro_en, wmt18_cs_et_en, wmt19_ru_gu_kk_lt] - for urls in [dataset.train_urls, dataset.valid_urls, dataset.test_urls] - for url in urls - ] - urls = set(urls) - download_multi(dl_folder, extract_folder, urls, num_processes=8, debug=True) - - # check manually downlaods - to_manually_download_urls = ( - wmt17_fi_lv_tr_zh_en_manual_downloads + wmt18_cs_et_en_manual_downloads + wmt19_ru_gu_kk_lt_manual_downloads - ) - to_be_manually_dowloaded = check_need_manual_downalod(dl_folder, to_manually_download_urls) - if len(to_be_manually_dowloaded) > 0: - print('Missing files that need to be downloaded manually; stop the process now.') - exit(-1) - - completed_urls = {} - completed_extraction = {} - def work_on_wmt(directions, wmt_data): - download_and_extract( - to_data_path, - directions, - wmt_data, - to_manually_download_urls=to_manually_download_urls, - completed_urls=completed_urls, completed_extraction=completed_extraction, debug=True) - - work_on_wmt( - ['es_XX-en_XX'], - wmt13_es_en,) - work_on_wmt( - [ - 'fr_XX-en_XX', 'en_XX-fr_XX', - # 'en_XX-de_DE', 'de_DE-en_XX', - ], - wmt14_de_fr_en,) - work_on_wmt( - ['ro_RO-en_XX', 'en_XX-ro_XX'], - wmt16_ro_en,) - work_on_wmt( - [ - # 'zh_CN-en_XX', - 'lv_LV-en_XX', 
'fi_FI-en_XX', 'tr_TR-en_XX', - #in case the reversed directions have different train/valid/test data - # 'en_XX-zh_CN', - 'en_XX-lv_LV', 'en_XX-fi_FI', 'en_XX-tr_TR', - ], - wmt17_fi_lv_tr_zh_en, ) - # czeng17_script_path = download_czeng17_script(download_to, extract_to, debug=False) - # cz_username = None - work_on_wmt( - [ - # 'cs_CZ-en_XX', - 'et_EE-en_XX'], - wmt18_cs_et_en,) - work_on_wmt( - [ - # 'ru_RU-en_XX', 'en_XX-ru_RU', - 'gu_IN-en_XX', 'kk_KZ-en_XX', 'lt_LT-en_XX', - #in case the reversed directions have different train/valid/test data - 'en_XX-gu_IN', 'en_XX-kk_KZ', 'en_XX-lt_LT' - ], - wmt19_ru_gu_kk_lt,) - - not_matching = check_wmt_test_bleu( - f'{to_data_path}/raw', - [ - ('wmt13', ['es_XX-en_XX']), - ('wmt14/full', ['fr_XX-en_XX',]), - ('wmt16', ['ro_RO-en_XX',]), - # ('wmt17/improved', ['zh_CN-en_XX']), - ('wmt17', [ 'lv_LV-en_XX', 'fi_FI-en_XX', 'tr_TR-en_XX']), - ('wmt18', ['cs_CZ-en_XX', 'et_EE-en_XX']), - ('wmt19', ['gu_IN-en_XX', 'kk_KZ-en_XX', 'lt_LT-en_XX']), - #'ru_RU-en_XX', - ] - ) - if len(not_matching) > 0: - print('the following datasets do not have matching test datasets:\n\t', '\n\t'.join(not_matching)) - diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_wmt20.sh b/kosmos-g/fairseq/examples/multilingual/data_scripts/download_wmt20.sh deleted file mode 100644 index 31cd5c76b..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/download_wmt20.sh +++ /dev/null @@ -1,547 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." - exit -fi - - - -set -x -e - -# TODO update the workdir and dest dir name -# put fasttext model -WORKDIR=$WORKDIR_ROOT -# put intermediate files -TMP_DIR=$WORKDIR_ROOT/tmp/tmp_wmt20_lowres_download -# output {train,valid,test} files to dest -DEST=$WORKDIR_ROOT/ML50/raw - -UTILS=$PWD/utils - -# per dataset locations -COMMONCRAWL_DIR=$TMP_DIR/commoncrawl -YANDEX_CORPUS=$WORKDIR_ROOT/wmt20/official/ru/yandex/1mcorpus.zip -# unzipped -CZENG_CORPUS=$WORKDIR_ROOT/wmt20/official/cs/czeng/czeng20-train -CCMT_DIR=$WORKDIR_ROOT/wmt20/official/zh/ccmt/parallel - -download_and_select() { - SUBFOLDER=$1 - URL=$2 - UNCOMPRESS_CMD=$3 - LANG=$4 - INPUT_FILEPATH=$5 - if [[ $# -gt 5 ]]; then - LANG_COL=$6 - EN_COL=$7 - fi - - mkdir -p $SUBFOLDER - cd $SUBFOLDER - wget -nc --content-disposition $URL - $UNCOMPRESS_CMD - - if [[ $# -gt 5 ]]; then - cut -f$LANG_COL $INPUT_FILEPATH > $INPUT_FILEPATH.$LANG - cut -f$EN_COL $INPUT_FILEPATH > $INPUT_FILEPATH.en - fi - cd .. - - ln -sf $SUBFOLDER/$INPUT_FILEPATH.$LANG $SUBFOLDER.$LANG - ln -sf $SUBFOLDER/$INPUT_FILEPATH.en $SUBFOLDER.en -} - -prepare_lid() { - pip install fasttext - - # TODO specify global workdir - MODEL=$WORKDIR/fasttext/lid.176.bin - LID_MULTI=$UTILS/fasttext_multi_filter.py - - if [ ! -f "$MODEL" ]; then - echo "downloading fasttext lid model..." - mkdir -p $WORKDIR/fasttext - wget -nc https://dl.fbaipublicfiles.com/fasttext/supervised-models/lid.176.bin -O $MODEL - fi -} - -prepare_moses() { - pushd $UTILS - echo 'Cloning Moses github repository (for tokenization scripts)...' 
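-    # mosesdecoder supplies the tokenizer/detokenizer perl scripts that these
-    # data scripts rely on.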
- git clone https://github.com/moses-smt/mosesdecoder.git - popd -} - -lid_filter() { - # TODO specify global workdir - MODEL=$WORKDIR/fasttext/lid.176.bin - LID_MULTI=$UTILS/fasttext_multi_filter.py - - prepare_lid - - SRC=$1 - SRC_FILE=$2 - SRC_OUTPUT=$3 - TGT=$4 - TGT_FILE=$5 - TGT_OUTPUT=$6 - python $LID_MULTI --model $MODEL --inputs $SRC_FILE $TGT_FILE --langs $SRC $TGT --outputs $SRC_OUTPUT $TGT_OUTPUT -} - -prepare_ja_ted() { - mkdir -p ted - cd ted - - wget -nc https://wit3.fbk.eu/archive/2017-01-trnted//texts/en/ja/en-ja.tgz - tar -zxvf en-ja.tgz - cat en-ja/train.tags.en-ja.en | grep -v -P "^[ ]*\<" | sed 's/^[ \t]*//g' | sed 's/[ \t]*$//g' > en-ja/train.en-ja.en - cat en-ja/train.tags.en-ja.ja | grep -v -P "^[ ]*\<" | sed 's/^[ \t]*//g' | sed 's/[ \t]*$//g' > en-ja/train.en-ja.ja - - cd .. - ln -sf ted/en-ja/train.en-ja.ja ted.ja - ln -sf ted/en-ja/train.en-ja.en ted.en -} - -prepare_ja() { - OUTPUT_DIR=$TMP_DIR/ja - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - - download_and_select paracrawl "http://www.kecl.ntt.co.jp/icl/lirg/jparacrawl/release/2.0/bitext/en-ja.tar.gz" "tar -zxvf en-ja.tar.gz" ja en-ja/en-ja.bicleaner05.txt 4 3 & - download_and_select newscommentary "http://data.statmt.org/news-commentary/v15/training/news-commentary-v15.en-ja.tsv.gz" "gunzip -f news-commentary-v15.en-ja.tsv.gz" ja news-commentary-v15.en-ja.tsv 2 1 & - download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.ja-en.tsv.gz" "gunzip -f wikititles-v2.ja-en.tsv.gz" ja wikititles-v2.ja-en.tsv 1 2 & - download_and_select wikimatrix "http://data.statmt.org/wmt20/translation-task/WikiMatrix/WikiMatrix.v1.en-ja.langid.tsv.gz" "gunzip -f WikiMatrix.v1.en-ja.langid.tsv.gz" ja WikiMatrix.v1.en-ja.langid.tsv 3 2 & - download_and_select subtitle "https://nlp.stanford.edu/projects/jesc/data/split.tar.gz" "tar -zxvf split.tar.gz" ja split/train 2 1 & - download_and_select kftt "http://www.phontron.com/kftt/download/kftt-data-1.0.tar.gz" "tar -zxvf kftt-data-1.0.tar.gz" ja kftt-data-1.0/data/orig/kyoto-train & - - prepare_ja_ted & - - # ted data needs to - - wait - - # remove previous results - rm -f all.?? 
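-    # Concatenate the per-corpus *.ja/*.en files in the same sorted order on
-    # both sides so the bitext stays line-aligned, then let fastText LID drop
-    # pairs whose detected language does not match.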
- find ./ -maxdepth 1 -name "*.ja" | sort -V | xargs cat > all.ja - find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en - lid_filter ja all.ja $DEST/train.ja_XX-en_XX.ja_XX en all.en $DEST/train.ja_XX-en_XX.en_XX -} - -prepare_ta() { - OUTPUT_DIR=$TMP_DIR/ta - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - - download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.ta-en.tsv.gz" "gunzip -f wikititles-v2.ta-en.tsv.gz" ta wikititles-v2.ta-en.tsv 1 2 & - download_and_select wikimatrix "http://data.statmt.org/wmt20/translation-task/WikiMatrix/WikiMatrix.v1.en-ta.langid.tsv.gz" "gunzip -f WikiMatrix.v1.en-ta.langid.tsv.gz" ta WikiMatrix.v1.en-ta.langid.tsv 3 2 & - download_and_select pmindia "http://data.statmt.org/pmindia/v1/parallel/pmindia.v1.ta-en.tsv" "" ta pmindia.v1.ta-en.tsv 2 1 & - download_and_select tanzil "https://object.pouta.csc.fi/OPUS-Tanzil/v1/moses/en-ta.txt.zip" "unzip en-ta.txt.zip" ta Tanzil.en-ta & - download_and_select pib "http://preon.iiit.ac.in/~jerin/resources/datasets/pib-v0.tar" "tar -xvf pib-v0.tar" ta pib/en-ta/train & - download_and_select mkb "http://preon.iiit.ac.in/~jerin/resources/datasets/mkb-v0.tar" "tar -xvf mkb-v0.tar" ta mkb/en-ta/mkb & - download_and_select ufal "http://ufal.mff.cuni.cz/~ramasamy/parallel/data/v2/en-ta-parallel-v2.tar.gz" "tar -zxvf en-ta-parallel-v2.tar.gz" ta en-ta-parallel-v2/corpus.bcn.train & - - wait - - # need special handling for nlpc - mkdir -p nlpc - cd nlpc - wget -nc https://raw.githubusercontent.com/nlpc-uom/English-Tamil-Parallel-Corpus/master/En-Ta%20Corpus/En-Ta%20English.txt - wget -nc https://github.com/nlpc-uom/English-Tamil-Parallel-Corpus/raw/master/En-Ta%20Corpus/En-Ta%20Tamil.txt - tail -n +4 "En-Ta English.txt" > en-ta.en - tail -n +4 "En-Ta Tamil.txt" > en-ta.ta - cd .. - ln -sf nlpc/en-ta.en nlpc.en - ln -sf nlpc/en-ta.ta nlpc.ta - - # remove previous results - rm -f all.?? - find ./ -maxdepth 1 -name "*.ta" | sort -V | xargs cat > all.ta - find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en - lid_filter ta all.ta $DEST/train.ta_IN-en_XX.ta_IN en all.en $DEST/train.ta_IN-en_XX.en_XX -} - -prepare_iu() { - OUTPUT_DIR=$TMP_DIR/iu - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - - download_and_select nh "https://nrc-digital-repository.canada.ca/eng/view/dataset/?id=c7e34fa7-7629-43c2-bd6d-19b32bf64f60" "tar -zxvf Nunavut-Hansard-Inuktitut-English-Parallel-Corpus-3.0.1.tgz" iu Nunavut-Hansard-Inuktitut-English-Parallel-Corpus-3.0/NunavutHansard > /dev/null & - download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.iu-en.tsv.gz" "gunzip -f wikititles-v2.iu-en.tsv.gz" iu wikititles-v2.iu-en.tsv 1 2 & - - wait - - # remove previous results - rm -f all.?? 
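-    # Inuktitut is handled differently from the other languages: instead of
-    # the fastText LID filter (which presumably does not cover iu), the two
-    # sides are pasted together and empty pairs are dropped with awk below.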
- find ./ -maxdepth 1 -name "*.iu" | sort -V | xargs cat | nh/Nunavut-Hansard-Inuktitut-English-Parallel-Corpus-3.0/scripts/normalize-iu-spelling.pl > all.iu - find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en - paste all.iu all.en | awk -F $'\t' '$1!=""&&$2!=""' > all.iuen - cut -f1 all.iuen > $DEST/train.iu_CA-en_XX.iu_CA - cut -f2 all.iuen > $DEST/train.iu_CA-en_XX.en_XX -} - -prepare_km() { - OUTPUT_DIR=$TMP_DIR/km - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - - download_and_select paracrawl "http://data.statmt.org/wmt20/translation-task/ps-km/wmt20-sent.en-km.xz" "unxz wmt20-sent.en-km.zx" km wmt20-sent.en-km 2 1 & - - # km-parallel has multiple sets, concat all of them together - mkdir -p opus - cd opus - wget -nc "http://data.statmt.org/wmt20/translation-task/ps-km/km-parallel.tgz" - tar -zxvf km-parallel.tgz - find ./km-parallel -maxdepth 1 -name "*.km" | sort -V | xargs cat > opus.km - find ./km-parallel -maxdepth 1 -name "*.en" | sort -V | xargs cat > opus.en - cd .. - ln -sf opus/opus.km . - ln -sf opus/opus.en . - - wait - - # remove previous results - rm -f all.?? - find ./ -maxdepth 1 -name "*.km" | sort -V | xargs cat > all.km - find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en - lid_filter km all.km $DEST/train.km_KH-en_XX.km_KH en all.en $DEST/train.km_KH-en_XX.en_XX -} - -prepare_ps() { - OUTPUT_DIR=$TMP_DIR/ps - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - - download_and_select paracrawl "http://data.statmt.org/wmt20/translation-task/ps-km/wmt20-sent.en-ps.xz" "unxz wmt20-sent.en-ps.xz" ps wmt20-sent.en-ps 2 1 & - download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.ps-en.tsv.gz" "gunzip -f wikititles-v2.ps-en.tsv.gz" ps wikititles-v2.ps-en.tsv 1 2 & - # ps-parallel has multiple sets, concat all of them together - mkdir -p opus - cd opus - wget -nc "http://data.statmt.org/wmt20/translation-task/ps-km/ps-parallel.tgz" - tar -zxvf ps-parallel.tgz - find ./ps-parallel -maxdepth 1 -name "*.ps" | sort -V | xargs cat > opus.ps - find ./ps-parallel -maxdepth 1 -name "*.en" | sort -V | xargs cat > opus.en - cd .. - ln -sf opus/opus.ps opus.ps - ln -sf opus/opus.en opus.en - - wait - - # remove previous results - rm -f all.?? - find ./ -maxdepth 1 -name "*.ps" | sort -V | xargs cat > all.ps - find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en - lid_filter ps all.ps $DEST/train.ps_AF-en_XX.ps_AF en all.en $DEST/train.ps_AF-en_XX.en_XX -} - -download_commoncrawl() { - mkdir -p $COMMONCRAWL_DIR - cd $COMMONCRAWL_DIR - - wget -nc "http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz" - tar -zxvf training-parallel-commoncrawl.tgz -} -link_commoncrawl() { - LANG=$1 - ln -sf $COMMONCRAWL_DIR/commoncrawl.$LANG-en.en commoncrawl.en - ln -sf $COMMONCRAWL_DIR/commoncrawl.$LANG-en.$LANG commoncrawl.$LANG -} - -strip_xlf() { - INPUT_FILE=$1 - SRC=$2 - TGT=$3 - grep '<source xml:lang=' $INPUT_FILE | sed 's/^<[^<>]*>//g' | sed 's/<[^<>]*>$//g' > $INPUT_FILE.$SRC - grep '<target xml:lang=' $INPUT_FILE | sed 's/^<[^<>]*>//g' | sed 's/<[^<>]*>$//g' > $INPUT_FILE.$TGT -} - -download_and_process_tilde() { - URL=$1 - UNCOMPRESS_CMD=$2 - FILENAME=$3 - LANG=$4 - PROCESS_CMD=$5 - - mkdir -p tilde - cd tilde - wget -nc $URL - $UNCOMPRESS_CMD - echo "executing cmd" - echo $PROCESS_CMD - $PROCESS_CMD - cd .. 
- ln -sf tilde/$FILENAME.$LANG tilde.$LANG - ln -sf tilde/$FILENAME.en tilde.en -} - -prepare_cs() { - OUTPUT_DIR=$TMP_DIR/cs - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - - #download_and_select europarl "http://www.statmt.org/europarl/v10/training/europarl-v10.cs-en.tsv.gz" "gunzip europarl-v10.cs-en.tsv.gz" cs europarl-v10.cs-en.tsv 1 2 & - #download_and_select paracrawl "https://s3.amazonaws.com/web-language-models/paracrawl/release5.1/en-cs.txt.gz" "gunzip en-cs.txt.gz" cs en-cs.txt 2 1 & - #link_commoncrawl cs - #download_and_select newscommentary "http://data.statmt.org/news-commentary/v15/training/news-commentary-v15.cs-en.tsv.gz" "gunzip news-commentary-v15.cs-en.tsv.gz" cs news-commentary-v15.cs-en.tsv 1 2 & - #download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.cs-en.tsv.gz" "gunzip wikititles-v2.cs-en.tsv.gz" cs wikititles-v2.cs-en.tsv 1 2 & - #download_and_process_tilde "http://data.statmt.org/wmt20/translation-task/rapid/RAPID_2019.cs-en.xlf.gz" "gunzip RAPID_2019.cs-en.xlf.gz" RAPID_2019.cs-en.xlf cs "strip_xlf RAPID_2019.cs-en.xlf cs en" & - #download_and_select wikimatrix "http://data.statmt.org/wmt20/translation-task/WikiMatrix/WikiMatrix.v1.cs-en.langid.tsv.gz" "gunzip WikiMatrix.v1.cs-en.langid.tsv.gz" cs WikiMatrix.v1.cs-en.langid.tsv 2 3 & - - #wait - - # remove previous results - #rm -f all.?? - #find ./ -maxdepth 1 -name "*.cs" | sort -V | xargs cat > all.cs - #find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en - if [ -z $CZENG_CORPUS ] ; - then - echo "Please download CZENG_CORPUS manually and place them at $CZENG_CORPUS. Exitting..." - exit - fi - cat $CZENG_CORPUS | sed '/^$/d' | cut -f5 > all.cs - cat $CZENG_CORPUS | sed '/^$/d' | cut -f6 > all.en - - lid_filter cs all.cs $DEST/train.cs_CZ-en_XX.cs_CZ en all.en $DEST/train.cs_CZ-en_XX.en_XX -} - -prepare_de() { - OUTPUT_DIR=$TMP_DIR/de - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - - download_and_select europarl "http://www.statmt.org/europarl/v10/training/europarl-v10.de-en.tsv.gz" "gunzip europarl-v10.de-en.tsv.gz" de europarl-v10.de-en.tsv 1 2 & - download_and_select paracrawl "https://s3.amazonaws.com/web-language-models/paracrawl/release5.1/en-de.txt.gz" "gunzip en-de.txt.gz" de en-de.txt 2 1 & - link_commoncrawl de - download_and_select newscommentary "http://data.statmt.org/news-commentary/v15/training/news-commentary-v15.de-en.tsv.gz" "gunzip news-commentary-v15.de-en.tsv.gz" de news-commentary-v15.de-en.tsv 1 2 & - download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.de-en.tsv.gz" "gunzip wikititles-v2.de-en.tsv.gz" de wikititles-v2.de-en.tsv 1 2 & - download_and_process_tilde "http://data.statmt.org/wmt20/translation-task/rapid/RAPID_2019.de-en.xlf.gz" "gunzip RAPID_2019.de-en.xlf.gz" RAPID_2019.de-en.xlf de "strip_xlf RAPID_2019.de-en.xlf de en" & - download_and_select wikimatrix "http://data.statmt.org/wmt20/translation-task/WikiMatrix/WikiMatrix.v1.de-en.langid.tsv.gz" "gunzip WikiMatrix.v1.de-en.langid.tsv.gz" de WikiMatrix.v1.de-en.langid.tsv 2 3 & - - wait - - # remove previous results - rm -f all.?? 
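-    # The download_and_select jobs above ran in the background and `wait` has
-    # joined them, so every per-corpus *.de/*.en pair exists at this point.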
- find ./ -maxdepth 1 -name "*.de" | sort -V | xargs cat > all.de - find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en - lid_filter de all.de $DEST/train.de_DE-en_XX.de_DE en all.en $DEST/train.de_DE-en_XX.en_XX -} - -prepare_tmx() { - TMX_FILE=$1 - git clone https://github.com/amake/TMX2Corpus $UTILS/tmx2corpus - pip install tinysegmenter - - python $UTILS/tmx2corpus/tmx2corpus.py $TMX_FILE -} - -prepare_pl() { - OUTPUT_DIR=$TMP_DIR/pl - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - - # download_and_select europarl "http://www.statmt.org/europarl/v10/training/europarl-v10.pl-en.tsv.gz" "gunzip europarl-v10.pl-en.tsv.gz" pl europarl-v10.pl-en.tsv 1 2 & - # download_and_select paracrawl "https://s3.amazonaws.com/web-language-models/paracrawl/release5.1/en-pl.txt.gz" "gunzip en-pl.txt.gz" pl en-pl.txt 2 1 & - # download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.pl-en.tsv.gz" "gunzip wikititles-v2.pl-en.tsv.gz" pl wikititles-v2.pl-en.tsv 1 2 & - download_and_select tilde "https://tilde-model.s3-eu-west-1.amazonaws.com/rapid2019.en-pl.tmx.zip" "gunzip rapid2019.en-pl.tmx.zip" bitext pl "prepare_tmx RAPID_2019.UNIQUE.en-pl.tmx" & - # download_and_select wikimatrix "http://data.statmt.org/wmt20/translation-task/WikiMatrix/WikiMatrix.v1.en-pl.langid.tsv.gz" "gunzip WikiMatrix.v1.en-pl.langid.tsv.gz" pl WikiMatrix.v1.en-pl.langid.tsv 3 2 & - - wait - - # remove previous results - rm -f all.?? - find ./ -maxdepth 1 -name "*.pl" | sort -V | xargs cat > all.pl - find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en - lid_filter pl all.pl $DEST/train.pl_PL-en_XX.pl_PL en all.en $DEST/train.pl_PL-en_XX.en_XX -} - -prepare_uncorpus() { - $URLS=$1 - $FILES=$2 - - mkdir -p uncorpus - cd uncorpus - - for URL in $URLS; do - wget -nc $URL - done - cat $FILES > uncorpus.tar.gz - tar -zxvf uncorpus.tar.gz - - cd .. - ln -sf uncorpus/en-$LANG/UNv1.0.en-$LANG.$LANG uncorpus.$LANG - ln -sf uncorpus/en-$LANG/UNv1.0.en-$LANG.en uncorpus.en -} - -prepare_yandex() { - mkdir -p yandex - cd yandex - unzip $YANDEX_CORPUS ./ - cd .. - ln -s yandex/corpus.en_ru.1m.en yandex.en - ln -s yandex/corpus.en_ru.1m.ru yandex.ru -} - -prepare_ru() { - OUTPUT_DIR=$TMP_DIR/ru - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - - download_and_select paracrawl "https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-ru.zipporah0-dedup-clean.tgz" "tar -zxvf paracrawl-release1.en-ru.zipporah0-dedup-clean.tgz" ru paracrawl-release1.en-ru.zipporah0-dedup-clean & - link_commoncrawl ru - download_and_select newscommentary "http://data.statmt.org/news-commentary/v15/training/news-commentary-v15.en-ru.tsv.gz" "gunzip news-commentary-v15.en-ru.tsv.gz" ru news-commentary-v15.en-ru.tsv 2 1 & - prepare_yandex & - download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.ru-en.tsv.gz" "gunzip wikititles-v2.ru-en.tsv.gz" ru wikititles-v2.ru-en.tsv 1 2 & - prepare_uncorpus "https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.00 https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.01 https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.02" "UNv1.0.en-ru.tar.gz.00 UNv1.0.en-ru.tar.gz.01 UNv1.0.en-ru.tar.gz.02" & - download_and_select wikimatrix "http://data.statmt.org/wmt20/translation-task/WikiMatrix/WikiMatrix.v1.en-ru.langid.tsv.gz" "gunzip WikiMatrix.v1.en-ru.langid.tsv.gz" ru WikiMatrix.v1.en-ru.langid.tsv 3 2 & - - wait - - # remove previous results - rm -f all.?? 
- find ./ -maxdepth 1 -name "*.ru" | sort -V | xargs cat > all.ru - find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en - lid_filter ru all.ru $DEST/train.ru_RU-en_XX.ru_RU en all.en $DEST/train.ru_RU-en_XX.en_XX -} - -prepare_ccmt() { - mkdir -p ccmt - cd ccmt - # assume ccmt data is already unzipped under CCMT_DIR folder - cat $CCMT_DIR/datum2017/Book*_cn.txt | sed 's/ //g' > datum2017.detok.zh - cat $CCMT_DIR/datum2017/Book*_en.txt > datum2017.detok.en - cat $CCMT_DIR/casict2011/casict-A_ch.txt $CCMT_DIR/casict2011/casict-B_ch.txt $CCMT_DIR/casict2015/casict2015_ch.txt $CCMT_DIR/datum2015/datum_ch.txt $CCMT_DIR/neu2017/NEU_cn.txt datum2017.detok.zh > ccmt.zh - cat $CCMT_DIR/casict2011/casict-A_en.txt $CCMT_DIR/casict2011/casict-B_en.txt $CCMT_DIR/casict2015/casict2015_en.txt $CCMT_DIR/datum2015/datum_en.txt $CCMT_DIR/neu2017/NEU_en.txt datum2017.detok.en > ccmt.en - cd .. - ln -sf ccmt/ccmt.zh ccmt.zh - ln -sf ccmt/ccmt.en ccmt.en -} - -prepare_zh() { - OUTPUT_DIR=$TMP_DIR/zh - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - - download_and_select newscommentary "http://data.statmt.org/news-commentary/v15/training/news-commentary-v15.en-zh.tsv.gz" "gunzip news-commentary-v15.en-zh.tsv.gz" zh news-commentary-v15.en-zh.tsv 2 1 & - download_and_select wikititles "http://data.statmt.org/wikititles/v2/wikititles-v2.zh-en.tsv.gz" "gunzip wikititles-v2.zh-en.tsv.gz" zh wikititles-v2.zh-en.tsv 1 2 & - prepare_uncorpus "https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00 https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.01" "UNv1.0.en-zh.tar.gz.00 UNv1.0.en-zh.tar.gz.01" & - prepare_ccmt & - download_and_select wikimatrix "http://data.statmt.org/wmt20/translation-task/WikiMatrix/WikiMatrix.v1.en-zh.langid.tsv.gz" "gunzip WikiMatrix.v1.en-zh.langid.tsv.gz" zh WikiMatrix.v1.en-zh.langid.tsv 3 2 & - - wait - - # remove previous results - rm -f all.?? 
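-    # prepare_ccmt strips all spaces from the Chinese side (sed 's/ //g') as
-    # a crude detokenization, since written Chinese uses no word separators.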
- find ./ -maxdepth 1 -name "*.zh" | sort -V | xargs cat > all.zh - find ./ -maxdepth 1 -name "*.en" | sort -V | xargs cat > all.en - lid_filter zh all.zh $DEST/train.zh_CN-en_XX.zh_CN en all.en $DEST/train.zh_CN-en_XX.en_XX -} - -prepare_tests() { - OUTPUT_DIR=$TMP_DIR - mkdir -p $OUTPUT_DIR - cd $OUTPUT_DIR - wget -nc http://data.statmt.org/wmt20/translation-task/dev.tgz - tar -zxvf dev.tgz - cd dev - - cat newsdev2020-jaen-src.ja.sgm | $UTILS/strip_sgm.sh > newsdev2020-jaen.ja - cat newsdev2020-jaen-ref.en.sgm | $UTILS/strip_sgm.sh > newsdev2020-jaen.en - split newsdev2020-jaen.ja -a 0 -n r/1/2 > $DEST/valid.ja_XX-en_XX.ja_XX - split newsdev2020-jaen.en -a 0 -n r/1/2 > $DEST/valid.ja_XX-en_XX.en_XX - split newsdev2020-jaen.ja -a 0 -n r/2/2 > $DEST/test.ja_XX-en_XX.ja_XX - split newsdev2020-jaen.en -a 0 -n r/2/2 > $DEST/test.ja_XX-en_XX.en_XX - - cat newsdev2020-iuen-src.iu.sgm | strip_sgm.sh > newsdev2020-iuen.iu - cat newsdev2020-iuen-ref.en.sgm | strip_sgm.sh > newsdev2020-iuen.en - split newsdev2020-iuen.iu -a 0 -n r/1/2 > $DEST/valid.iu_CA-en_XX.iu_CA - split newsdev2020-iuen.en -a 0 -n r/1/2 > $DEST/valid.iu_CA-en_XX.en_XX - split newsdev2020-iuen.iu -a 0 -n r/2/2 > $DEST/test.iu_CA-en_XX.iu_CA - split newsdev2020-iuen.en -a 0 -n r/2/2 > $DEST/test.iu_CA-en_XX.en_XX - - cat newsdev2020-taen-src.ta.sgm | strip_sgm.sh > newsdev2020-taen.ta - cat newsdev2020-taen-ref.en.sgm | strip_sgm.sh > newsdev2020-taen.en - split newsdev2020-taen.ta -a 0 -n r/1/2 > $DEST/valid.ta_IN-en_XX.ta_IN - split newsdev2020-taen.en -a 0 -n r/1/2 > $DEST/valid.ta_IN-en_XX.en_XX - split newsdev2020-taen.ta -a 0 -n r/2/2 > $DEST/test.ta_IN-en_XX.ta_IN - split newsdev2020-taen.en -a 0 -n r/2/2 > $DEST/test.ta_IN-en_XX.en_XX - - cp wikipedia.dev.km-en.km $DEST/valid.km_KH-en_XX.km_KH - cp wikipedia.dev.km-en.en $DEST/valid.km_KH-en_XX.en_XX - cp wikipedia.devtest.km-en.km $DEST/test.km_KH-en_XX.km_KH - cp wikipedia.devtest.km-en.en $DEST/test.km_KH-en_XX.en_XX - - cp wikipedia.dev.ps-en.ps $DEST/valid.ps_AF-en_XX.ps_AF - cp wikipedia.dev.ps-en.en $DEST/valid.ps_AF-en_XX.en_XX - cp wikipedia.devtest.ps-en.ps $DEST/test.ps_AF-en_XX.ps_AF - cp wikipedia.devtest.ps-en.en $DEST/test.ps_AF-en_XX.en_XX - - cat newsdev2020-plen-src.pl.sgm | strip_sgm.sh > newsdev2020-plen.pl - cat newsdev2020-plen-ref.en.sgm | strip_sgm.sh > newsdev2020-plen.en - split newsdev2020-plen.pl -a 0 -n r/1/2 > $DEST/valid.pl_PL-en_XX.pl_PL - split newsdev2020-plen.en -a 0 -n r/1/2 > $DEST/valid.pl_PL-en_XX.en_XX - split newsdev2020-plen.pl -a 0 -n r/2/2 > $DEST/test.pl_PL-en_XX.pl_PL - split newsdev2020-plen.en -a 0 -n r/2/2 > $DEST/test.pl_PL-en_XX.en_XX - - cat newstest2018-encs-src.en.sgm | strip_sgm.sh > $DEST/valid.en_XX-cs_CZ.en_XX - cat newstest2018-encs-ref.cs.sgm | strip_sgm.sh > $DEST/valid.en_XX-cs_CZ.cs_CZ - cat newstest2019-encs-src.en.sgm | strip_sgm.sh > $DEST/test.en_XX-cs_CZ.en_XX - cat newstest2019-encs-ref.cs.sgm | strip_sgm.sh > $DEST/test.en_XX-cs_CZ.cs_CZ - - cat newstest2018-deen-src.de.sgm | strip_sgm.sh > $DEST/valid.de_DE-en_XX.de_DE - cat newstest2018-deen-ref.en.sgm | strip_sgm.sh > $DEST/valid.de_DE-en_XX.en_XX - cat newstest2018-ende-src.en.sgm | strip_sgm.sh > $DEST/valid.en_XX-de_DE.en_XX - cat newstest2018-ende-ref.de.sgm | strip_sgm.sh > $DEST/valid.en_XX-de_DE.de_DE - cat newstest2019-deen-src.de.sgm | strip_sgm.sh > $DEST/test.de_DE-en_XX.de_DE - cat newstest2019-deen-ref.en.sgm | strip_sgm.sh > $DEST/test.de_DE-en_XX.en_XX - cat newstest2019-ende-src.en.sgm | strip_sgm.sh > $DEST/test.en_XX-de_DE.en_XX - 
cat newstest2019-ende-ref.de.sgm | strip_sgm.sh > $DEST/test.en_XX-de_DE.de_DE - - cat newstest2018-ruen-src.ru.sgm | strip_sgm.sh > $DEST/valid.ru_RU-en_XX.ru_RU - cat newstest2018-ruen-ref.en.sgm | strip_sgm.sh > $DEST/valid.ru_RU-en_XX.en_XX - cat newstest2018-enru-src.en.sgm | strip_sgm.sh > $DEST/valid.en_XX-ru_RU.en_XX - cat newstest2018-enru-ref.ru.sgm | strip_sgm.sh > $DEST/valid.en_XX-ru_RU.ru_RU - cat newstest2019-ruen-src.ru.sgm | strip_sgm.sh > $DEST/test.ru_RU-en_XX.ru_RU - cat newstest2019-ruen-ref.en.sgm | strip_sgm.sh > $DEST/test.ru_RU-en_XX.en_XX - cat newstest2019-enru-src.en.sgm | strip_sgm.sh > $DEST/test.en_XX-ru_RU.en_XX - cat newstest2019-enru-ref.ru.sgm | strip_sgm.sh > $DEST/test.en_XX-ru_RU.ru_RU - - cat newstest2018-zhen-src.zh.sgm | strip_sgm.sh > $DEST/valid.zh_CN-en_XX.zh_CN - cat newstest2018-zhen-ref.en.sgm | strip_sgm.sh > $DEST/valid.zh_CN-en_XX.en_XX - cat newstest2018-enzh-src.en.sgm | strip_sgm.sh > $DEST/valid.en_XX-zh_CN.en_XX - cat newstest2018-enzh-ref.zh.sgm | strip_sgm.sh > $DEST/valid.en_XX-zh_CN.zh_CN - cat newstest2019-zhen-src.zh.sgm | strip_sgm.sh > $DEST/test.zh_CN-en_XX.zh_CN - cat newstest2019-zhen-ref.en.sgm | strip_sgm.sh > $DEST/test.zh_CN-en_XX.en_XX - cat newstest2019-enzh-src.en.sgm | strip_sgm.sh > $DEST/test.en_XX-zh_CN.en_XX - cat newstest2019-enzh-ref.zh.sgm | strip_sgm.sh > $DEST/test.en_XX-zh_CN.zh_CN -} - -mkdir -p $DEST - -prepare_lid -prepare_moses -download_commoncrawl - -prepare_ja & -prepare_ta & -prepare_km & -prepare_ps & -prepare_iu & -prepare_cs & -prepare_de & -prepare_pl & -prepare_ru & -prepare_zh & - -# prepare valid/test set -prepare_tests & - -# wait - -# TODO remove intermediate files -# rm -rf $TMP_DIR diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/preprocess_ML50_v1.sh b/kosmos-g/fairseq/examples/multilingual/data_scripts/preprocess_ML50_v1.sh deleted file mode 100644 index 465593614..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/preprocess_ML50_v1.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." - exit -fi - -if [ -z $SPM_PATH ] ; -then - echo "Please install sentence piecence from https://github.com/google/sentencepiece and set SPM_PATH pointing to the installed spm_encode.py. Exitting..." - exit -fi - -ML50=${WORKDIR_ROOT}/ML50 - -mkdir -p $ML50/dedup -mkdir -p $ML50/cleaned_dedup - -python ./dedup_all.py --from-folder $ML50/raw --to-folder $ML50/dedup -python ./remove_valid_test_in_train.py --from-folder $ML50/dedup --to-folder $ML50/clean -python ./binarize.py --raw-folder $ML50/clean \ No newline at end of file diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/remove_valid_test_in_train.py b/kosmos-g/fairseq/examples/multilingual/data_scripts/remove_valid_test_in_train.py deleted file mode 100644 index ef618adef..000000000 --- a/kosmos-g/fairseq/examples/multilingual/data_scripts/remove_valid_test_in_train.py +++ /dev/null @@ -1,290 +0,0 @@ -import os, sys -import glob, itertools -import pandas as pd - -WORKDIR_ROOT = os.environ.get('WORKDIR_ROOT', None) - -if WORKDIR_ROOT is None or not WORKDIR_ROOT.strip(): - print('please specify your working directory root in OS environment variable WORKDIR_ROOT. 
Exitting..."') - sys.exit(-1) - - -def load_langs(path): - with open(path) as fr: - langs = [l.strip() for l in fr] - return langs - - - -def load_sentences(raw_data, split, direction): - src, tgt = direction.split('-') - src_path = f"{raw_data}/{split}.{direction}.{src}" - tgt_path = f"{raw_data}/{split}.{direction}.{tgt}" - if os.path.exists(src_path) and os.path.exists(tgt_path): - return [(src, open(src_path).read().splitlines()), (tgt, open(tgt_path).read().splitlines())] - else: - return [] - -def swap_direction(d): - src, tgt = d.split('-') - return f'{tgt}-{src}' - -def get_all_test_data(raw_data, directions, split='test'): - test_data = [ - x - for dd in directions - for d in [dd, swap_direction(dd)] - for x in load_sentences(raw_data, split, d) - ] - # all_test_data = {s for _, d in test_data for s in d} - all_test_data = {} - for lang, d in test_data: - for s in d: - s = s.strip() - lgs = all_test_data.get(s, set()) - lgs.add(lang) - all_test_data[s] = lgs - return all_test_data, test_data - -def check_train_sentences(raw_data, direction, all_test_data, mess_up_train={}): - src, tgt = direction.split('-') - tgt_path = f"{raw_data}/train.{direction}.{tgt}" - src_path = f"{raw_data}/train.{direction}.{src}" - print(f'check training data in {raw_data}/train.{direction}') - size = 0 - if not os.path.exists(tgt_path) or not os.path.exists(src_path): - return mess_up_train, size - with open(src_path) as f, open(tgt_path) as g: - for src_line, tgt_line in zip(f, g): - s = src_line.strip() - t = tgt_line.strip() - size += 1 - if s in all_test_data: - langs = mess_up_train.get(s, set()) - langs.add(direction) - mess_up_train[s] = langs - if t in all_test_data: - langs = mess_up_train.get(t, set()) - langs.add(direction) - mess_up_train[t] = langs - return mess_up_train, size - -def check_train_all(raw_data, directions, all_test_data): - mess_up_train = {} - data_sizes = {} - for direction in directions: - _, size = check_train_sentences(raw_data, direction, all_test_data, mess_up_train) - data_sizes[direction] = size - return mess_up_train, data_sizes - -def count_train_in_other_set(mess_up_train): - train_in_others = [(direction, s) for s, directions in mess_up_train.items() for direction in directions] - counts = {} - for direction, s in train_in_others: - counts[direction] = counts.get(direction, 0) + 1 - return counts - -def train_size_if_remove_in_otherset(data_sizes, mess_up_train): - counts_in_other = count_train_in_other_set(mess_up_train) - remain_sizes = [] - for direction, count in counts_in_other.items(): - remain_sizes.append((direction, data_sizes[direction] - count, data_sizes[direction], count, 100 * count / data_sizes[direction] )) - return remain_sizes - - -def remove_messed_up_sentences(raw_data, direction, mess_up_train, mess_up_train_pairs, corrected_langs): - split = 'train' - src_lang, tgt_lang = direction.split('-') - - tgt = f"{raw_data}/{split}.{direction}.{tgt_lang}" - src = f"{raw_data}/{split}.{direction}.{src_lang}" - print(f'working on {direction}: ', src, tgt) - if not os.path.exists(tgt) or not os.path.exists(src) : - return - - corrected_tgt = f"{to_folder}/{split}.{direction}.{tgt_lang}" - corrected_src = f"{to_folder}/{split}.{direction}.{src_lang}" - line_num = 0 - keep_num = 0 - with open(src, encoding='utf8',) as fsrc, \ - open(tgt, encoding='utf8',) as ftgt, \ - open(corrected_src, 'w', encoding='utf8') as fsrc_corrected, \ - open(corrected_tgt, 'w', encoding='utf8') as ftgt_corrected: - for s, t in zip(fsrc, ftgt): - s = s.strip() - t = t.strip() 
- if t not in mess_up_train \
- and s not in mess_up_train \
- and (s, t) not in mess_up_train_pairs \
- and (t, s) not in mess_up_train_pairs:
- corrected_langs.add(direction)
- print(s, file=fsrc_corrected)
- print(t, file=ftgt_corrected)
- keep_num += 1
- line_num += 1
- if line_num % 1000 == 0:
- print(f'completed {line_num} lines', end='\r')
- return line_num, keep_num
-
-##########
-
-
-def merge_valid_test_messup(mess_up_train_valid, mess_up_train_test):
- merged_mess = []
- for s in set(list(mess_up_train_valid.keys()) + list(mess_up_train_test.keys())):
- if not s:
- continue
- valid = mess_up_train_valid.get(s, set())
- test = mess_up_train_test.get(s, set())
- merged_mess.append((s, valid | test))
- return dict(merged_mess)
-
-
-
-#########
-def check_train_pairs(raw_data, direction, all_test_data, mess_up_train={}):
- src, tgt = direction.split('-')
- # a hack; TODO: check the reversed directions
- path1 = f"{raw_data}/train.{src}-{tgt}.{src}"
- path2 = f"{raw_data}/train.{src}-{tgt}.{tgt}"
- if not os.path.exists(path1) or not os.path.exists(path2):
- return
-
- with open(path1) as f1, open(path2) as f2:
- for src_line, tgt_line in zip(f1, f2):
- s = src_line.strip()
- t = tgt_line.strip()
- if (s, t) in all_test_data or (t, s) in all_test_data:
- langs = mess_up_train.get((s, t), set())
- langs.add(src)
- langs.add(tgt)
- mess_up_train[(s, t)] = langs
-
-
-def load_pairs(raw_data, split, direction):
- src, tgt = direction.split('-')
- src_f = f"{raw_data}/{split}.{direction}.{src}"
- tgt_f = f"{raw_data}/{split}.{direction}.{tgt}"
- if tgt != 'en_XX':
- src_f, tgt_f = tgt_f, src_f
- if os.path.exists(src_f) and os.path.exists(tgt_f):
- return list(zip(open(src_f).read().splitlines(),
- open(tgt_f).read().splitlines(),
- ))
- else:
- return []
-
-# skip_langs = ['cs_CZ', 'en_XX', 'tl_XX', 'tr_TR']
-def get_messed_up_test_pairs(split, directions):
- test_pairs = [
- (d, load_pairs(raw_data, split, d))
- for d in directions
- ]
- # all_test_data = {s for _, d in test_data for s in d}
- all_test_pairs = {}
- for direction, d in test_pairs:
- src, tgt = direction.split('-')
- for s in d:
- langs = all_test_pairs.get(s, set())
- langs.add(src)
- langs.add(tgt)
- all_test_pairs[s] = langs
- mess_up_train_pairs = {}
- for direction in directions:
- check_train_pairs(raw_data, direction, all_test_pairs, mess_up_train_pairs)
- return all_test_pairs, mess_up_train_pairs
-
-
-
-if __name__ == "__main__":
- #######
- import argparse
- parser = argparse.ArgumentParser()
- parser.add_argument(
- '--from-folder',
- required=True,
- type=str)
- parser.add_argument(
- '--to-folder',
- required=True,
- type=str)
- parser.add_argument(
- '--directions',
- default=None,
- type=str)
-
-
- args = parser.parse_args()
- raw_data = args.from_folder
- to_folder = args.to_folder
- os.makedirs(to_folder, exist_ok=True)
-
- if args.directions:
- directions = args.directions.split(',')
- else:
- raw_files = itertools.chain(
- glob.glob(f'{raw_data}/train*'),
- glob.glob(f'{raw_data}/valid*'),
- glob.glob(f'{raw_data}/test*'),
- )
- directions = [os.path.split(file_path)[-1].split('.')[1] for file_path in raw_files]
- print('working on directions: ', directions)
-
- ##########
-
-
-
-
- all_test_data, test_data = get_all_test_data(raw_data, directions, 'test')
- print('==loaded test data==')
- all_valid_data, valid_data = get_all_test_data(raw_data, directions, 'valid')
- print('==loaded valid data==')
- all_valid_test_data = merge_valid_test_messup(all_test_data, all_valid_data)
- mess_up_train, data_sizes 
= check_train_all(raw_data, directions, all_valid_test_data)
- print('training messing up with valid, test data:', len(mess_up_train))
- data_situation = train_size_if_remove_in_otherset(data_sizes, mess_up_train)
- df = pd.DataFrame(data_situation, columns=['direction', 'train_size_after_remove', 'orig_size', 'num_to_remove', 'remove_percent'])
- df = df.sort_values('remove_percent', ascending=False)
- df.to_csv(f'{raw_data}/clean_summary.tsv', sep='\t')
- print(f'projected data clean summary in: {raw_data}/clean_summary.tsv')
-
- # correct the dataset:
- all_test_pairs, mess_up_test_train_pairs = get_messed_up_test_pairs('test', directions)
- all_valid_pairs, mess_up_valid_train_pairs = get_messed_up_test_pairs('valid', directions)
-
- all_messed_pairs = set(mess_up_test_train_pairs.keys()).union(set(mess_up_valid_train_pairs.keys()))
- corrected_directions = set()
-
- real_data_situation = []
- for direction in directions:
- org_size, new_size = remove_messed_up_sentences(raw_data, direction, mess_up_train, all_messed_pairs, corrected_directions)
- if org_size == 0:
- print(f"{direction} has size 0")
- continue
- real_data_situation.append(
- (direction, new_size, org_size, org_size - new_size, (org_size - new_size) / org_size * 100)
- )
- print('corrected directions: ', corrected_directions)
- df = pd.DataFrame(real_data_situation, columns=['direction', 'train_size_after_remove', 'orig_size', 'num_to_remove', 'remove_percent'])
- df = df.sort_values('remove_percent', ascending=False)
- df.to_csv(f'{raw_data}/actual_clean_summary.tsv', sep='\t')
- print(f'actual data clean summary (which can be different from the projected one because of duplications) in: {raw_data}/actual_clean_summary.tsv')
-
- import shutil
- for direction in directions:
- src_lang, tgt_lang = direction.split('-')
- for split in ['train', 'valid', 'test']:
- # copying valid, test and uncorrected train
- if direction in corrected_directions and split == 'train':
- continue
- tgt = f"{raw_data}/{split}.{direction}.{tgt_lang}"
- src = f"{raw_data}/{split}.{direction}.{src_lang}"
- if not (os.path.exists(src) and os.path.exists(tgt)):
- continue
- corrected_tgt = f"{to_folder}/{split}.{direction}.{tgt_lang}"
- corrected_src = f"{to_folder}/{split}.{direction}.{src_lang}"
- print(f'copying {src} to {corrected_src}')
- shutil.copyfile(src, corrected_src)
- print(f'copying {tgt} to {corrected_tgt}')
- shutil.copyfile(tgt, corrected_tgt)
-
- print('completed')
\ No newline at end of file
diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/requirement.txt b/kosmos-g/fairseq/examples/multilingual/data_scripts/requirement.txt
deleted file mode 100644
index e85d7d540..000000000
--- a/kosmos-g/fairseq/examples/multilingual/data_scripts/requirement.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-wget
-pandas
\ No newline at end of file
diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/utils/dedup.py b/kosmos-g/fairseq/examples/multilingual/data_scripts/utils/dedup.py
deleted file mode 100644
index d6fed8c69..000000000
--- a/kosmos-g/fairseq/examples/multilingual/data_scripts/utils/dedup.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
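-
-# Usage sketch (hypothetical file names; flags taken from the argparse definitions below):
-#   python dedup.py --src-file train.src --tgt-file train.tgt \
-#     --src-file-out train.dedup.src --tgt-file-out train.dedup.tgt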
-
-
-import argparse
-
-def dedup(src_file, tgt_file, src_file_out, tgt_file_out):
- seen = set()
- dup_count = 0
- with open(src_file, encoding='utf-8') as fsrc, \
- open(tgt_file, encoding='utf-8') as ftgt, \
- open(src_file_out, 'w', encoding='utf-8') as fsrc_out, \
- open(tgt_file_out, 'w', encoding='utf-8') as ftgt_out:
- for s, t in zip(fsrc, ftgt):
- if (s, t) not in seen:
- fsrc_out.write(s)
- ftgt_out.write(t)
- seen.add((s, t))
- else:
- dup_count += 1
- print(f'number of duplicates: {dup_count}')
-
-
-def main():
- parser = argparse.ArgumentParser()
- parser.add_argument("--src-file", type=str, required=True,
- help="src file")
- parser.add_argument("--tgt-file", type=str, required=True,
- help="tgt file")
- parser.add_argument("--src-file-out", type=str, required=True,
- help="src output file")
- parser.add_argument("--tgt-file-out", type=str, required=True,
- help="tgt output file")
- args = parser.parse_args()
- dedup(args.src_file, args.tgt_file, args.src_file_out, args.tgt_file_out)
-
-
-if __name__ == "__main__":
- main()
diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/utils/fasttext_multi_filter.py b/kosmos-g/fairseq/examples/multilingual/data_scripts/utils/fasttext_multi_filter.py
deleted file mode 100644
index 41b38ba5b..000000000
--- a/kosmos-g/fairseq/examples/multilingual/data_scripts/utils/fasttext_multi_filter.py
+++ /dev/null
@@ -1,63 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-#!/bin/python
-
-import fasttext
-from multiprocessing import Pool
-import contextlib
-import sys
-import argparse
-from functools import partial
-import io
-
-model = None
-def init(model_path):
- global model
- model = fasttext.load_model(model_path)
-
-def pred(lines):
- # model.predict returns labels like "__label__en"; [9:] strips the "__label__" prefix
- return lines, [model.predict(line.strip())[0][0][9:] for line in lines]
-
-def main():
- parser = argparse.ArgumentParser()
- parser.add_argument("--model", type=str, required=True,
- help="model to load")
- parser.add_argument("--inputs", nargs="+", default=['-'],
- help="input files to filter")
- parser.add_argument("--langs", nargs="+", required=True,
- help="lang ids of each input file")
- parser.add_argument("--outputs", nargs="+", default=['-'],
- help="path to save lid filtered outputs")
- parser.add_argument("--num-workers", type=int, metavar="N", default=10,
- help="number of processes in parallel")
- args = parser.parse_args()
-
- assert len(args.inputs) == len(args.langs) and len(args.inputs) == len(args.outputs)
-
- with contextlib.ExitStack() as stack:
- inputs = [
- stack.enter_context(open(input, "r", encoding="utf-8", newline="\n", errors="replace"))
- if input != "-" else io.TextIOWrapper(sys.stdin.buffer, encoding='utf-8', errors="replace")
- for input in args.inputs
- ]
- outputs = [
- stack.enter_context(open(output, "w", encoding="utf-8", newline="\n"))
- if output != "-" else sys.stdout
- for output in args.outputs
- ]
- with Pool(args.num_workers, initializer=partial(init, args.model)) as p:
- skip_cnt = 0
- for lines, preds in p.imap(pred, list(zip(*inputs)), chunksize=500):
- if not all(a == b for a, b in zip(preds, args.langs)):
- skip_cnt += 1
- continue
- for line, output_h in zip(lines, outputs):
- print(line.strip(), file=output_h)
- print(f"Skipped {skip_cnt} lines.")
-
-if __name__ == "__main__":
- main()
diff --git a/kosmos-g/fairseq/examples/multilingual/data_scripts/utils/strip_sgm.sh 
b/kosmos-g/fairseq/examples/multilingual/data_scripts/utils/strip_sgm.sh
deleted file mode 100644
index 7f4f61d7b..000000000
--- a/kosmos-g/fairseq/examples/multilingual/data_scripts/utils/strip_sgm.sh
+++ /dev/null
@@ -1 +0,0 @@
-grep "seg id" | sed 's/<seg id="[0-9]\+">//g' | sed 's/<\/seg>//g'
diff --git a/kosmos-g/fairseq/examples/multilingual/finetune_multilingual_model.sh b/kosmos-g/fairseq/examples/multilingual/finetune_multilingual_model.sh
deleted file mode 100644
index 25960c5dc..000000000
--- a/kosmos-g/fairseq/examples/multilingual/finetune_multilingual_model.sh
+++ /dev/null
@@ -1,32 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-path_2_data=$1 # <path to data> which contains binarized data for each direction
-lang_list=$2 # <path to a file which contains a list of languages separated by new lines>
-lang_pairs=$3 # a list of language pairs for training multilingual models, e.g. "en-fr,en-cs,fr-en,cs-en"
-# pretrained can be an mBART pretrained model as well
-pretrained_model=$4 # <path to a pretrained model>
-
-
-fairseq-train "$path_2_data" \
- --encoder-normalize-before --decoder-normalize-before \
- --arch transformer --layernorm-embedding \
- --task translation_multi_simple_epoch \
- --finetune-from-model "$pretrained_model" \
- --sampling-method "temperature" \
- --sampling-temperature "1.5" \
- --encoder-langtok "src" \
- --decoder-langtok \
- --lang-dict "$lang_list" \
- --lang-pairs "$lang_pairs" \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.2 \
- --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \
- --lr-scheduler inverse_sqrt --lr 3e-05 --warmup-updates 2500 --max-update 40000 \
- --dropout 0.3 --attention-dropout 0.1 --weight-decay 0.0 \
- --max-tokens 1024 --update-freq 2 \
- --save-interval 1 --save-interval-updates 5000 --keep-interval-updates 10 --no-epoch-checkpoints \
- --seed 222 --log-format simple --log-interval 2
diff --git a/kosmos-g/fairseq/examples/multilingual/multilingual_fairseq_gen.sh b/kosmos-g/fairseq/examples/multilingual/multilingual_fairseq_gen.sh
deleted file mode 100644
index 65aa322d7..000000000
--- a/kosmos-g/fairseq/examples/multilingual/multilingual_fairseq_gen.sh
+++ /dev/null
@@ -1,26 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-lang_pairs="en-fr,en-cs,fr-en,cs-en"
-path_2_data=$1 # <path to data>
-lang_list=$2 # <path to a file which contains a list of languages separated by new lines>
-model=$3 # <path to a trained model>
-source_lang=cs
-target_lang=en
-
-fairseq-generate "$path_2_data" \
- --path "$model" \
- --task translation_multi_simple_epoch \
- --gen-subset test \
- --source-lang "$source_lang" \
- --target-lang "$target_lang" \
- --sacrebleu --remove-bpe 'sentencepiece' \
- --batch-size 32 \
- --encoder-langtok "src" \
- --decoder-langtok \
- --lang-dict "$lang_list" \
- --lang-pairs "$lang_pairs"
diff --git a/kosmos-g/fairseq/examples/multilingual/train_multilingual_model.sh b/kosmos-g/fairseq/examples/multilingual/train_multilingual_model.sh
deleted file mode 100644
index cc050bd3f..000000000
--- a/kosmos-g/fairseq/examples/multilingual/train_multilingual_model.sh
+++ /dev/null
@@ -1,28 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. 
and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-path_2_data=$1 # <path to data> which contains binarized data for each direction
-lang_list=$2 # <path to a file which contains a list of languages separated by new lines>
-lang_pairs=$3 # a list of language pairs for training multilingual models, e.g. "en-fr,en-cs,fr-en,cs-en"
-
-fairseq-train "$path_2_data" \
- --encoder-normalize-before --decoder-normalize-before \
- --arch transformer --layernorm-embedding \
- --task translation_multi_simple_epoch \
- --sampling-method "temperature" \
- --sampling-temperature 1.5 \
- --encoder-langtok "src" \
- --decoder-langtok \
- --lang-dict "$lang_list" \
- --lang-pairs "$lang_pairs" \
- --criterion label_smoothed_cross_entropy --label-smoothing 0.2 \
- --optimizer adam --adam-eps 1e-06 --adam-betas '(0.9, 0.98)' \
- --lr-scheduler inverse_sqrt --lr 3e-05 --warmup-updates 2500 --max-update 40000 \
- --dropout 0.3 --attention-dropout 0.1 --weight-decay 0.0 \
- --max-tokens 1024 --update-freq 2 \
- --save-interval 1 --save-interval-updates 5000 --keep-interval-updates 10 --no-epoch-checkpoints \
- --seed 222 --log-format simple --log-interval 2
diff --git a/kosmos-g/fairseq/examples/noisychannel/README.md b/kosmos-g/fairseq/examples/noisychannel/README.md
deleted file mode 100644
index 9d101aa87..000000000
--- a/kosmos-g/fairseq/examples/noisychannel/README.md
+++ /dev/null
@@ -1,72 +0,0 @@
-# Simple and Effective Noisy Channel Modeling for Neural Machine Translation (Yee et al., 2019)
-This page contains pointers to pre-trained models as well as instructions on how to run the reranking scripts.
-
-## Citation:
-```bibtex
-@inproceedings{yee2019simple,
- title = {Simple and Effective Noisy Channel Modeling for Neural Machine Translation},
- author = {Kyra Yee and Yann Dauphin and Michael Auli},
- booktitle = {Conference on Empirical Methods in Natural Language Processing},
- year = {2019},
-}
-```
-
-## Pre-trained Models:
-
-Model | Description | Download
----|---|---
-`transformer.noisychannel.de-en` | De->En Forward Model | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/forward_de2en.tar.bz2)
-`transformer.noisychannel.en-de` | En->De Channel Model | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/backward_en2de.tar.bz2)
-`transformer_lm.noisychannel.en` | En Language model | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/reranking_en_lm.tar.bz2)
-
-Test Data: [newstest_wmt17](https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/wmt17test.tar.bz2)
-
-## Example usage
-
-```
-mkdir rerank_example
-curl https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/forward_de2en.tar.bz2 | tar xvjf - -C rerank_example
-curl https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/backward_en2de.tar.bz2 | tar xvjf - -C rerank_example
-curl https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/reranking_en_lm.tar.bz2 | tar xvjf - -C rerank_example
-curl https://dl.fbaipublicfiles.com/fairseq/models/noisychannel/wmt17test.tar.bz2 | tar xvjf - -C rerank_example
-
-beam=50
-num_trials=1000
-fw_name=fw_model_ex
-bw_name=bw_model_ex
-lm_name=lm_ex
-data_dir=rerank_example/hyphen-splitting-mixed-case-wmt17test-wmt14bpe
-data_dir_name=wmt17
-lm=rerank_example/lm/checkpoint_best.pt
-lm_bpe_code=rerank_example/lm/bpe32k.code
-lm_dict=rerank_example/lm/dict.txt
-batch_size=32
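-# The tuned weights map onto the noisy channel factors: weight1 scales model1
-# (the backward/channel model P(S|T), assigned to bw below), weight2 scales
-# model2 (the direct model P(T|S), assigned to fw below), and weight3 scales
-# the language model P(T); rerank_utils.get_score combines them.
-# Optional sanity check (not part of the original example): confirm the
-# extracted LM files and test data sit where the variables above point.
-for f in "$lm" "$lm_bpe_code" "$lm_dict"; do [ -e "$f" ] || echo "missing: $f"; done
-[ -d "$data_dir" ] || echo "missing data dir: $data_dir"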
-bw=rerank_example/backward_en2de.pt
-fw=rerank_example/forward_de2en.pt
-
-# reranking with P(T|S), P(S|T) and P(T)
-python examples/noisychannel/rerank_tune.py $data_dir --tune-param lenpen weight1 weight3 \
- --lower-bound 0 0 0 --upper-bound 3 3 3 --data-dir-name $data_dir_name \
- --num-trials $num_trials --source-lang de --target-lang en --gen-model $fw \
- -n $beam --batch-size $batch_size --score-model2 $fw --score-model1 $bw \
- --backwards1 --weight2 1 \
- -lm $lm --lm-dict $lm_dict --lm-name en_newscrawl --lm-bpe-code $lm_bpe_code \
- --model2-name $fw_name --model1-name $bw_name --gen-model-name $fw_name
-
-# reranking with P(T|S) and P(T)
-python examples/noisychannel/rerank_tune.py $data_dir --tune-param lenpen weight3 \
- --lower-bound 0 0 --upper-bound 3 3 --data-dir-name $data_dir_name \
- --num-trials $num_trials --source-lang de --target-lang en --gen-model $fw \
- -n $beam --batch-size $batch_size --score-model1 $fw \
- -lm $lm --lm-dict $lm_dict --lm-name en_newscrawl --lm-bpe-code $lm_bpe_code \
- --model1-name $fw_name --gen-model-name $fw_name
-
-# to run with a preconfigured set of hyperparameters for the lenpen and model weights, use rerank.py instead
-python examples/noisychannel/rerank.py $data_dir \
- --lenpen 0.269 --weight1 1 --weight2 0.929 --weight3 0.831 \
- --data-dir-name $data_dir_name --source-lang de --target-lang en --gen-model $fw \
- -n $beam --batch-size $batch_size --score-model2 $fw --score-model1 $bw --backwards1 \
- -lm $lm --lm-dict $lm_dict --lm-name en_newscrawl --lm-bpe-code $lm_bpe_code \
- --model2-name $fw_name --model1-name $bw_name --gen-model-name $fw_name
-```
-
diff --git a/kosmos-g/fairseq/examples/noisychannel/__init__.py b/kosmos-g/fairseq/examples/noisychannel/__init__.py
deleted file mode 100644
index 89f1aef4f..000000000
--- a/kosmos-g/fairseq/examples/noisychannel/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .rerank_options import * # noqa
diff --git a/kosmos-g/fairseq/examples/noisychannel/rerank.py b/kosmos-g/fairseq/examples/noisychannel/rerank.py
deleted file mode 100644
index bb80d11a6..000000000
--- a/kosmos-g/fairseq/examples/noisychannel/rerank.py
+++ /dev/null
@@ -1,428 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
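-# rerank.py combines, per hypothesis, the score of the forward model with an
-# optional second (e.g. channel) model and an optional language model via
-# rerank_utils.get_score, keeps the best hypothesis for each source sentence,
-# and reports BLEU against the references.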
-
-import math
-from multiprocessing import Pool
-
-import numpy as np
-from fairseq import options
-from fairseq.data import dictionary
-from fairseq.scoring import bleu
-
-from examples.noisychannel import (
- rerank_generate,
- rerank_options,
- rerank_score_bw,
- rerank_score_lm,
- rerank_utils,
-)
-
-
-def score_target_hypo(
- args, a, b, c, lenpen, target_outfile, hypo_outfile, write_hypos, normalize
-):
-
- print("lenpen", lenpen, "weight1", a, "weight2", b, "weight3", c)
- gen_output_lst, bitext1_lst, bitext2_lst, lm_res_lst = load_score_files(args)
- dict = dictionary.Dictionary()
- scorer = bleu.Scorer(
- bleu.BleuConfig(
- pad=dict.pad(),
- eos=dict.eos(),
- unk=dict.unk(),
- )
- )
-
- ordered_hypos = {}
- ordered_targets = {}
-
- for shard_id in range(len(bitext1_lst)):
- bitext1 = bitext1_lst[shard_id]
- bitext2 = bitext2_lst[shard_id]
- gen_output = gen_output_lst[shard_id]
- lm_res = lm_res_lst[shard_id]
-
- total = len(bitext1.rescore_source.keys())
- source_lst = []
- hypo_lst = []
- score_lst = []
- reference_lst = []
- j = 1
- best_score = -math.inf
-
- for i in range(total):
- # length is measured in terms of words, not bpe tokens, since models may not share the same bpe
- target_len = len(bitext1.rescore_hypo[i].split())
-
- if lm_res is not None:
- lm_score = lm_res.score[i]
- else:
- lm_score = 0
-
- if bitext2 is not None:
- bitext2_score = bitext2.rescore_score[i]
- bitext2_backwards = bitext2.backwards
- else:
- bitext2_score = None
- bitext2_backwards = None
-
- score = rerank_utils.get_score(
- a,
- b,
- c,
- target_len,
- bitext1.rescore_score[i],
- bitext2_score,
- lm_score=lm_score,
- lenpen=lenpen,
- src_len=bitext1.source_lengths[i],
- tgt_len=bitext1.target_lengths[i],
- bitext1_backwards=bitext1.backwards,
- bitext2_backwards=bitext2_backwards,
- normalize=normalize,
- )
-
- if score > best_score:
- best_score = score
- best_hypo = bitext1.rescore_hypo[i]
-
- if j == gen_output.num_hypos[i] or j == args.num_rescore:
- j = 1
- hypo_lst.append(best_hypo)
- score_lst.append(best_score)
- source_lst.append(bitext1.rescore_source[i])
- reference_lst.append(bitext1.rescore_target[i])
-
- best_score = -math.inf
- best_hypo = ""
- else:
- j += 1
-
- gen_keys = list(sorted(gen_output.no_bpe_target.keys()))
-
- for key in range(len(gen_keys)):
- if args.prefix_len is None:
- assert hypo_lst[key] in gen_output.no_bpe_hypo[gen_keys[key]], (
- "pred and rescore hypo mismatch: i: "
- + str(key)
- + ", "
- + str(hypo_lst[key])
- + str(gen_keys[key])
- + str(gen_output.no_bpe_hypo[key])
- )
- sys_tok = dict.encode_line(hypo_lst[key])
- ref_tok = dict.encode_line(gen_output.no_bpe_target[gen_keys[key]])
- scorer.add(ref_tok, sys_tok)
-
- else:
- full_hypo = rerank_utils.get_full_from_prefix(
- hypo_lst[key], gen_output.no_bpe_hypo[gen_keys[key]]
- )
- sys_tok = dict.encode_line(full_hypo)
- ref_tok = dict.encode_line(gen_output.no_bpe_target[gen_keys[key]])
- scorer.add(ref_tok, sys_tok)
-
- # if only one set of hyper parameters is provided, write the predictions to a file
- if write_hypos:
- # recover the original ids from n-best list generation
- for key in range(len(gen_output.no_bpe_target)):
- if args.prefix_len is None:
- assert hypo_lst[key] in gen_output.no_bpe_hypo[gen_keys[key]], (
- "pred and rescore hypo mismatch:"
- + "i:"
- + str(key)
- + str(hypo_lst[key])
- + str(gen_output.no_bpe_hypo[key])
- )
- ordered_hypos[gen_keys[key]] = hypo_lst[key]
- ordered_targets[gen_keys[key]] = gen_output.no_bpe_target[
- gen_keys[key]
- ]
-
- else:
- full_hypo = 
rerank_utils.get_full_from_prefix( - hypo_lst[key], gen_output.no_bpe_hypo[gen_keys[key]] - ) - ordered_hypos[gen_keys[key]] = full_hypo - ordered_targets[gen_keys[key]] = gen_output.no_bpe_target[ - gen_keys[key] - ] - - # write the hypos in the original order from nbest list generation - if args.num_shards == (len(bitext1_lst)): - with open(target_outfile, "w") as t: - with open(hypo_outfile, "w") as h: - for key in range(len(ordered_hypos)): - t.write(ordered_targets[key]) - h.write(ordered_hypos[key]) - - res = scorer.result_string(4) - if write_hypos: - print(res) - score = rerank_utils.parse_bleu_scoring(res) - return score - - -def match_target_hypo(args, target_outfile, hypo_outfile): - """combine scores from the LM and bitext models, and write the top scoring hypothesis to a file""" - if len(args.weight1) == 1: - res = score_target_hypo( - args, - args.weight1[0], - args.weight2[0], - args.weight3[0], - args.lenpen[0], - target_outfile, - hypo_outfile, - True, - args.normalize, - ) - rerank_scores = [res] - else: - print("launching pool") - with Pool(32) as p: - rerank_scores = p.starmap( - score_target_hypo, - [ - ( - args, - args.weight1[i], - args.weight2[i], - args.weight3[i], - args.lenpen[i], - target_outfile, - hypo_outfile, - False, - args.normalize, - ) - for i in range(len(args.weight1)) - ], - ) - - if len(rerank_scores) > 1: - best_index = np.argmax(rerank_scores) - best_score = rerank_scores[best_index] - print("best score", best_score) - print("best lenpen", args.lenpen[best_index]) - print("best weight1", args.weight1[best_index]) - print("best weight2", args.weight2[best_index]) - print("best weight3", args.weight3[best_index]) - return ( - args.lenpen[best_index], - args.weight1[best_index], - args.weight2[best_index], - args.weight3[best_index], - best_score, - ) - - else: - return ( - args.lenpen[0], - args.weight1[0], - args.weight2[0], - args.weight3[0], - rerank_scores[0], - ) - - -def load_score_files(args): - if args.all_shards: - shard_ids = list(range(args.num_shards)) - else: - shard_ids = [args.shard_id] - - gen_output_lst = [] - bitext1_lst = [] - bitext2_lst = [] - lm_res1_lst = [] - - for shard_id in shard_ids: - using_nbest = args.nbest_list is not None - ( - pre_gen, - left_to_right_preprocessed_dir, - right_to_left_preprocessed_dir, - backwards_preprocessed_dir, - lm_preprocessed_dir, - ) = rerank_utils.get_directories( - args.data_dir_name, - args.num_rescore, - args.gen_subset, - args.gen_model_name, - shard_id, - args.num_shards, - args.sampling, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - - rerank1_is_gen = ( - args.gen_model == args.score_model1 and args.source_prefix_frac is None - ) - rerank2_is_gen = ( - args.gen_model == args.score_model2 and args.source_prefix_frac is None - ) - - score1_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model1_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards1, - ) - if args.score_model2 is not None: - score2_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model2_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards2, - ) - if args.language_model is not None: - lm_score_file = rerank_utils.rescore_file_name( - pre_gen, args.prefix_len, args.lm_name, lm_file=True - ) - - # get gen output - predictions_bpe_file = pre_gen + "/generate_output_bpe.txt" - if using_nbest: - 
print("Using predefined n-best list from interactive.py") - predictions_bpe_file = args.nbest_list - gen_output = rerank_utils.BitextOutputFromGen( - predictions_bpe_file, - bpe_symbol=args.post_process, - nbest=using_nbest, - prefix_len=args.prefix_len, - target_prefix_frac=args.target_prefix_frac, - ) - - if rerank1_is_gen: - bitext1 = gen_output - else: - bitext1 = rerank_utils.BitextOutput( - score1_file, - args.backwards1, - args.right_to_left1, - args.post_process, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - - if args.score_model2 is not None or args.nbest_list is not None: - if rerank2_is_gen: - bitext2 = gen_output - else: - bitext2 = rerank_utils.BitextOutput( - score2_file, - args.backwards2, - args.right_to_left2, - args.post_process, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - - assert ( - bitext2.source_lengths == bitext1.source_lengths - ), "source lengths for rescoring models do not match" - assert ( - bitext2.target_lengths == bitext1.target_lengths - ), "target lengths for rescoring models do not match" - else: - if args.diff_bpe: - assert args.score_model2 is None - bitext2 = gen_output - else: - bitext2 = None - - if args.language_model is not None: - lm_res1 = rerank_utils.LMOutput( - lm_score_file, - args.lm_dict, - args.prefix_len, - args.post_process, - args.target_prefix_frac, - ) - else: - lm_res1 = None - - gen_output_lst.append(gen_output) - bitext1_lst.append(bitext1) - bitext2_lst.append(bitext2) - lm_res1_lst.append(lm_res1) - return gen_output_lst, bitext1_lst, bitext2_lst, lm_res1_lst - - -def rerank(args): - if type(args.lenpen) is not list: - args.lenpen = [args.lenpen] - if type(args.weight1) is not list: - args.weight1 = [args.weight1] - if type(args.weight2) is not list: - args.weight2 = [args.weight2] - if type(args.weight3) is not list: - args.weight3 = [args.weight3] - if args.all_shards: - shard_ids = list(range(args.num_shards)) - else: - shard_ids = [args.shard_id] - - for shard_id in shard_ids: - ( - pre_gen, - left_to_right_preprocessed_dir, - right_to_left_preprocessed_dir, - backwards_preprocessed_dir, - lm_preprocessed_dir, - ) = rerank_utils.get_directories( - args.data_dir_name, - args.num_rescore, - args.gen_subset, - args.gen_model_name, - shard_id, - args.num_shards, - args.sampling, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - rerank_generate.gen_and_reprocess_nbest(args) - rerank_score_bw.score_bw(args) - rerank_score_lm.score_lm(args) - - if args.write_hypos is None: - write_targets = pre_gen + "/matched_targets" - write_hypos = pre_gen + "/matched_hypos" - else: - write_targets = args.write_hypos + "_targets" + args.gen_subset - write_hypos = args.write_hypos + "_hypos" + args.gen_subset - - if args.all_shards: - write_targets += "_all_shards" - write_hypos += "_all_shards" - - ( - best_lenpen, - best_weight1, - best_weight2, - best_weight3, - best_score, - ) = match_target_hypo(args, write_targets, write_hypos) - - return best_lenpen, best_weight1, best_weight2, best_weight3, best_score - - -def cli_main(): - parser = rerank_options.get_reranking_parser() - args = options.parse_args_and_arch(parser) - rerank(args) - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/examples/noisychannel/rerank_generate.py b/kosmos-g/fairseq/examples/noisychannel/rerank_generate.py deleted file mode 100644 index daeeae059..000000000 --- a/kosmos-g/fairseq/examples/noisychannel/rerank_generate.py +++ /dev/null @@ -1,397 +0,0 
@@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Generate n-best translations using a trained model.
-"""
-
-import os
-import subprocess
-from contextlib import redirect_stdout
-
-from fairseq import options
-from fairseq_cli import generate, preprocess
-
-from examples.noisychannel import rerank_options, rerank_utils
-
-
-def gen_and_reprocess_nbest(args):
- if args.score_dict_dir is None:
- args.score_dict_dir = args.data
- if args.prefix_len is not None:
- assert (
- args.right_to_left1 is False
- ), "prefix length not compatible with right to left models"
- assert (
- args.right_to_left2 is False
- ), "prefix length not compatible with right to left models"
-
- if args.nbest_list is not None:
- assert args.score_model2 is None
-
- if args.backwards1:
- scorer1_src = args.target_lang
- scorer1_tgt = args.source_lang
- else:
- scorer1_src = args.source_lang
- scorer1_tgt = args.target_lang
-
- store_data = (
- os.path.join(os.path.dirname(__file__)) + "/rerank_data/" + args.data_dir_name
- )
- if not os.path.exists(store_data):
- os.makedirs(store_data)
-
- (
- pre_gen,
- left_to_right_preprocessed_dir,
- right_to_left_preprocessed_dir,
- backwards_preprocessed_dir,
- lm_preprocessed_dir,
- ) = rerank_utils.get_directories(
- args.data_dir_name,
- args.num_rescore,
- args.gen_subset,
- args.gen_model_name,
- args.shard_id,
- args.num_shards,
- args.sampling,
- args.prefix_len,
- args.target_prefix_frac,
- args.source_prefix_frac,
- )
- assert not (
- args.right_to_left1 and args.backwards1
- ), "backwards right to left not supported"
- assert not (
- args.right_to_left2 and args.backwards2
- ), "backwards right to left not supported"
- assert not (
- args.prefix_len is not None and args.target_prefix_frac is not None
- ), "target prefix frac and target prefix len incompatible"
-
- # make directory to store generation results
- if not os.path.exists(pre_gen):
- os.makedirs(pre_gen)
-
- rerank1_is_gen = (
- args.gen_model == args.score_model1 and args.source_prefix_frac is None
- )
- rerank2_is_gen = (
- args.gen_model == args.score_model2 and args.source_prefix_frac is None
- )
-
- if args.nbest_list is not None:
- rerank2_is_gen = True
-
- # make directories to store preprocessed nbest list for reranking
- if not os.path.exists(left_to_right_preprocessed_dir):
- os.makedirs(left_to_right_preprocessed_dir)
- if not os.path.exists(right_to_left_preprocessed_dir):
- os.makedirs(right_to_left_preprocessed_dir)
- if not os.path.exists(lm_preprocessed_dir):
- os.makedirs(lm_preprocessed_dir)
- if not os.path.exists(backwards_preprocessed_dir):
- os.makedirs(backwards_preprocessed_dir)
-
- score1_file = rerank_utils.rescore_file_name(
- pre_gen,
- args.prefix_len,
- args.model1_name,
- target_prefix_frac=args.target_prefix_frac,
- source_prefix_frac=args.source_prefix_frac,
- backwards=args.backwards1,
- )
- if args.score_model2 is not None:
- score2_file = rerank_utils.rescore_file_name(
- pre_gen,
- args.prefix_len,
- args.model2_name,
- target_prefix_frac=args.target_prefix_frac,
- source_prefix_frac=args.source_prefix_frac,
- backwards=args.backwards2,
- )
-
- predictions_bpe_file = pre_gen + "/generate_output_bpe.txt"
-
- using_nbest = args.nbest_list is not None
-
- if using_nbest:
- print("Using predefined n-best list from interactive.py")
- predictions_bpe_file = args.nbest_list
-
- else:
- if not 
os.path.isfile(predictions_bpe_file):
- print("STEP 1: generate predictions using the p(T|S) model with bpe")
- print(args.data)
- param1 = [
- args.data,
- "--path",
- args.gen_model,
- "--shard-id",
- str(args.shard_id),
- "--num-shards",
- str(args.num_shards),
- "--nbest",
- str(args.num_rescore),
- "--batch-size",
- str(args.batch_size),
- "--beam",
- str(args.num_rescore),
- "--gen-subset",
- args.gen_subset,
- "--source-lang",
- args.source_lang,
- "--target-lang",
- args.target_lang,
- ]
- if args.sampling:
- param1 += ["--sampling"]
-
- gen_parser = options.get_generation_parser()
- input_args = options.parse_args_and_arch(gen_parser, param1)
-
- print(input_args)
- with open(predictions_bpe_file, "w") as f:
- with redirect_stdout(f):
- generate.main(input_args)
-
- gen_output = rerank_utils.BitextOutputFromGen(
- predictions_bpe_file,
- bpe_symbol=args.post_process,
- nbest=using_nbest,
- prefix_len=args.prefix_len,
- target_prefix_frac=args.target_prefix_frac,
- )
-
- if args.diff_bpe:
- rerank_utils.write_reprocessed(
- gen_output.no_bpe_source,
- gen_output.no_bpe_hypo,
- gen_output.no_bpe_target,
- pre_gen + "/source_gen_bpe." + args.source_lang,
- pre_gen + "/target_gen_bpe." + args.target_lang,
- pre_gen + "/reference_gen_bpe." + args.target_lang,
- )
- bitext_bpe = args.rescore_bpe_code
- bpe_src_param = [
- "-c",
- bitext_bpe,
- "--input",
- pre_gen + "/source_gen_bpe." + args.source_lang,
- "--output",
- pre_gen + "/rescore_data." + args.source_lang,
- ]
- bpe_tgt_param = [
- "-c",
- bitext_bpe,
- "--input",
- pre_gen + "/target_gen_bpe." + args.target_lang,
- "--output",
- pre_gen + "/rescore_data." + args.target_lang,
- ]
-
- subprocess.call(
- [
- "python",
- os.path.join(
- os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py"
- ),
- ]
- + bpe_src_param,
- shell=False,
- )
-
- subprocess.call(
- [
- "python",
- os.path.join(
- os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py"
- ),
- ]
- + bpe_tgt_param,
- shell=False,
- )
-
- if (not os.path.isfile(score1_file) and not rerank1_is_gen) or (
- args.score_model2 is not None
- and not os.path.isfile(score2_file)
- and not rerank2_is_gen
- ):
- print(
- "STEP 2: process the output of generate.py so we have clean text files with the translations"
- )
-
- rescore_file = "/rescore_data"
- if args.prefix_len is not None:
- prefix_len_rescore_file = rescore_file + "prefix" + str(args.prefix_len)
- if args.target_prefix_frac is not None:
- target_prefix_frac_rescore_file = (
- rescore_file + "target_prefix_frac" + str(args.target_prefix_frac)
- )
- if args.source_prefix_frac is not None:
- source_prefix_frac_rescore_file = (
- rescore_file + "source_prefix_frac" + str(args.source_prefix_frac)
- )
-
- if not args.right_to_left1 or not args.right_to_left2:
- if not args.diff_bpe:
- rerank_utils.write_reprocessed(
- gen_output.source,
- gen_output.hypo,
- gen_output.target,
- pre_gen + rescore_file + "." + args.source_lang,
- pre_gen + rescore_file + "." + args.target_lang,
- pre_gen + "/reference_file",
- bpe_symbol=args.post_process,
- )
- if args.prefix_len is not None:
- bw_rescore_file = prefix_len_rescore_file
- rerank_utils.write_reprocessed(
- gen_output.source,
- gen_output.hypo,
- gen_output.target,
- pre_gen + prefix_len_rescore_file + "." + args.source_lang,
- pre_gen + prefix_len_rescore_file + "." 
+ args.target_lang, - pre_gen + "/reference_file", - prefix_len=args.prefix_len, - bpe_symbol=args.post_process, - ) - elif args.target_prefix_frac is not None: - bw_rescore_file = target_prefix_frac_rescore_file - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen - + target_prefix_frac_rescore_file - + "." - + args.source_lang, - pre_gen - + target_prefix_frac_rescore_file - + "." - + args.target_lang, - pre_gen + "/reference_file", - bpe_symbol=args.post_process, - target_prefix_frac=args.target_prefix_frac, - ) - else: - bw_rescore_file = rescore_file - - if args.source_prefix_frac is not None: - fw_rescore_file = source_prefix_frac_rescore_file - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen - + source_prefix_frac_rescore_file - + "." - + args.source_lang, - pre_gen - + source_prefix_frac_rescore_file - + "." - + args.target_lang, - pre_gen + "/reference_file", - bpe_symbol=args.post_process, - source_prefix_frac=args.source_prefix_frac, - ) - else: - fw_rescore_file = rescore_file - - if args.right_to_left1 or args.right_to_left2: - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen + "/right_to_left_rescore_data." + args.source_lang, - pre_gen + "/right_to_left_rescore_data." + args.target_lang, - pre_gen + "/right_to_left_reference_file", - right_to_left=True, - bpe_symbol=args.post_process, - ) - - print("STEP 3: binarize the translations") - if ( - not args.right_to_left1 - or args.score_model2 is not None - and not args.right_to_left2 - or not rerank1_is_gen - ): - - if args.backwards1 or args.backwards2: - if args.backwards_score_dict_dir is not None: - bw_dict = args.backwards_score_dict_dir - else: - bw_dict = args.score_dict_dir - bw_preprocess_param = [ - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - "--trainpref", - pre_gen + bw_rescore_file, - "--srcdict", - bw_dict + "/dict." + scorer1_src + ".txt", - "--tgtdict", - bw_dict + "/dict." + scorer1_tgt + ".txt", - "--destdir", - backwards_preprocessed_dir, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(bw_preprocess_param) - preprocess.main(input_args) - - preprocess_param = [ - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - "--trainpref", - pre_gen + fw_rescore_file, - "--srcdict", - args.score_dict_dir + "/dict." + scorer1_src + ".txt", - "--tgtdict", - args.score_dict_dir + "/dict." + scorer1_tgt + ".txt", - "--destdir", - left_to_right_preprocessed_dir, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(preprocess_param) - preprocess.main(input_args) - - if args.right_to_left1 or args.right_to_left2: - preprocess_param = [ - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - "--trainpref", - pre_gen + "/right_to_left_rescore_data", - "--srcdict", - args.score_dict_dir + "/dict." + scorer1_src + ".txt", - "--tgtdict", - args.score_dict_dir + "/dict." 
+ scorer1_tgt + ".txt",
- "--destdir",
- right_to_left_preprocessed_dir,
- ]
- preprocess_parser = options.get_preprocessing_parser()
- input_args = preprocess_parser.parse_args(preprocess_param)
- preprocess.main(input_args)
-
- return gen_output
-
-
-def cli_main():
- parser = rerank_options.get_reranking_parser()
- args = options.parse_args_and_arch(parser)
- gen_and_reprocess_nbest(args)
-
-
-if __name__ == "__main__":
- cli_main()
diff --git a/kosmos-g/fairseq/examples/noisychannel/rerank_options.py b/kosmos-g/fairseq/examples/noisychannel/rerank_options.py
deleted file mode 100644
index de91939e6..000000000
--- a/kosmos-g/fairseq/examples/noisychannel/rerank_options.py
+++ /dev/null
@@ -1,149 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq import options
-
-
-def get_reranking_parser(default_task="translation"):
- parser = options.get_parser("Generation and reranking", default_task)
- add_reranking_args(parser)
- return parser
-
-
-def get_tuning_parser(default_task="translation"):
- parser = options.get_parser("Reranking tuning", default_task)
- add_reranking_args(parser)
- add_tuning_args(parser)
- return parser
-
-
-def add_reranking_args(parser):
- group = parser.add_argument_group("Reranking")
- # fmt: off
- group.add_argument('--score-model1', '-s1', type=str, metavar='FILE', required=True,
- help='path to first model or ensemble of models for rescoring')
- group.add_argument('--score-model2', '-s2', type=str, metavar='FILE', required=False,
- help='path to second model or ensemble of models for rescoring')
- group.add_argument('--num-rescore', '-n', type=int, metavar='N', default=10,
- help='the number of candidate hypotheses to rescore')
- group.add_argument('-bz', '--batch-size', type=int, metavar='N', default=128,
- help='batch size for generating the nbest list')
- group.add_argument('--gen-subset', default='test', metavar='SET', choices=['test', 'train', 'valid'],
- help='data subset to generate (train, valid, test)')
- group.add_argument('--gen-model', default=None, metavar='FILE',
- help='the model to generate translations')
- group.add_argument('-b1', '--backwards1', action='store_true',
- help='whether or not the first model group is backwards')
- group.add_argument('-b2', '--backwards2', action='store_true',
- help='whether or not the second model group is backwards')
- group.add_argument('-a', '--weight1', default=1, nargs='+', type=float,
- help='the weight(s) of the first model')
- group.add_argument('-b', '--weight2', default=1, nargs='+', type=float,
- help='the weight(s) of the second model, or the gen model if using nbest from interactive.py')
- group.add_argument('-c', '--weight3', default=1, nargs='+', type=float,
- help='the weight(s) of the third model')
-
- # lm arguments
- group.add_argument('-lm', '--language-model', default=None, metavar='FILE',
- help='language model for target language to rescore translations')
- group.add_argument('--lm-dict', default=None, metavar='FILE',
- help='the dict of the language model for the target language')
- group.add_argument('--lm-name', default=None,
- help='the name of the language model for the target language')
- group.add_argument('--lm-bpe-code', default=None, metavar='FILE',
- help='the bpe code for the language model for the target language')
- group.add_argument('--data-dir-name', default=None,
- help='name of data directory')
-
group.add_argument('--lenpen', default=1, nargs='+', type=float,
- help='length penalty: <1.0 favors shorter, >1.0 favors longer sentences')
- group.add_argument('--score-dict-dir', default=None,
- help='the directory with dictionaries for the scoring models')
- group.add_argument('--right-to-left1', action='store_true',
- help='whether the first model group is a right to left model')
- group.add_argument('--right-to-left2', action='store_true',
- help='whether the second model group is a right to left model')
- group.add_argument('--post-process', '--remove-bpe', default='@@ ',
- help='the bpe symbol, used for the bitext and LM')
- group.add_argument('--prefix-len', default=None, type=int,
- help='the length of the target prefix to use in rescoring (in terms of words w/o bpe)')
- group.add_argument('--sampling', action='store_true',
- help='use sampling instead of beam search for generating n best list')
- group.add_argument('--diff-bpe', action='store_true',
- help='bpe for rescoring and nbest list not the same')
- group.add_argument('--rescore-bpe-code', default=None,
- help='bpe code for rescoring models')
- group.add_argument('--nbest-list', default=None,
- help='use predefined nbest list in interactive.py format')
- group.add_argument('--write-hypos', default=None,
- help='filename prefix to write hypos to')
- group.add_argument('--ref-translation', default=None,
- help='reference translation to use with nbest list from interactive.py')
- group.add_argument('--backwards-score-dict-dir', default=None,
- help='the directory with dictionaries for the backwards model, '
- 'if None then it is assumed the fw and backwards models share dictionaries')
-
- # extra scaling args
- group.add_argument('--gen-model-name', default=None,
- help='the name of the models that generated the nbest list')
- group.add_argument('--model1-name', default=None,
- help='the name of the set for model1 group')
- group.add_argument('--model2-name', default=None,
- help='the name of the set for model2 group')
- group.add_argument('--shard-id', default=0, type=int,
- help='the id of the shard to generate')
- group.add_argument('--num-shards', default=1, type=int,
- help='the number of shards to generate across')
- group.add_argument('--all-shards', action='store_true',
- help='use all shards')
- group.add_argument('--target-prefix-frac', default=None, type=float,
- help='the fraction of the target prefix to use in rescoring (in terms of words w/o bpe)')
- group.add_argument('--source-prefix-frac', default=None, type=float,
- help='the fraction of the source prefix to use in rescoring (in terms of words w/o bpe)')
- group.add_argument('--normalize', action='store_true',
- help='whether to normalize by src and target len')
- # fmt: on
- return group
-
-
-def add_tuning_args(parser):
- group = parser.add_argument_group("Tuning")
-
- group.add_argument(
- "--lower-bound",
- default=[-0.7],
- nargs="+",
- type=float,
- help="lower bound of search space",
- )
- group.add_argument(
- "--upper-bound",
- default=[3],
- nargs="+",
- type=float,
- help="upper bound of search space",
- )
- group.add_argument(
- "--tune-param",
- default=["lenpen"],
- nargs="+",
- choices=["lenpen", "weight1", "weight2", "weight3"],
- help="the parameter(s) to tune",
- )
- group.add_argument(
- "--tune-subset",
- default="valid",
- choices=["valid", "test", "train"],
- help="the subset to tune on",
- )
- group.add_argument(
- "--num-trials",
- default=1000,
- type=int,
- help="number of trials to do for random search",
- )
- group.add_argument(
-
"--share-weights", action="store_true", help="share weight2 and weight 3" - ) - return group diff --git a/kosmos-g/fairseq/examples/noisychannel/rerank_score_bw.py b/kosmos-g/fairseq/examples/noisychannel/rerank_score_bw.py deleted file mode 100644 index b0bc91365..000000000 --- a/kosmos-g/fairseq/examples/noisychannel/rerank_score_bw.py +++ /dev/null @@ -1,143 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import os -from contextlib import redirect_stdout - -from fairseq import options -from fairseq_cli import generate - -from examples.noisychannel import rerank_options, rerank_utils - - -def score_bw(args): - if args.backwards1: - scorer1_src = args.target_lang - scorer1_tgt = args.source_lang - else: - scorer1_src = args.source_lang - scorer1_tgt = args.target_lang - - if args.score_model2 is not None: - if args.backwards2: - scorer2_src = args.target_lang - scorer2_tgt = args.source_lang - else: - scorer2_src = args.source_lang - scorer2_tgt = args.target_lang - - rerank1_is_gen = ( - args.gen_model == args.score_model1 and args.source_prefix_frac is None - ) - rerank2_is_gen = ( - args.gen_model == args.score_model2 and args.source_prefix_frac is None - ) - - ( - pre_gen, - left_to_right_preprocessed_dir, - right_to_left_preprocessed_dir, - backwards_preprocessed_dir, - lm_preprocessed_dir, - ) = rerank_utils.get_directories( - args.data_dir_name, - args.num_rescore, - args.gen_subset, - args.gen_model_name, - args.shard_id, - args.num_shards, - args.sampling, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - - score1_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model1_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards1, - ) - - if args.score_model2 is not None: - score2_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model2_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards2, - ) - - if args.right_to_left1: - rerank_data1 = right_to_left_preprocessed_dir - elif args.backwards1: - rerank_data1 = backwards_preprocessed_dir - else: - rerank_data1 = left_to_right_preprocessed_dir - - gen_param = ["--batch-size", str(128), "--score-reference", "--gen-subset", "train"] - if not rerank1_is_gen and not os.path.isfile(score1_file): - print("STEP 4: score the translations for model 1") - - model_param1 = [ - "--path", - args.score_model1, - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - ] - gen_model1_param = [rerank_data1] + gen_param + model_param1 - - gen_parser = options.get_generation_parser() - input_args = options.parse_args_and_arch(gen_parser, gen_model1_param) - - with open(score1_file, "w") as f: - with redirect_stdout(f): - generate.main(input_args) - - if ( - args.score_model2 is not None - and not os.path.isfile(score2_file) - and not rerank2_is_gen - ): - print("STEP 4: score the translations for model 2") - - if args.right_to_left2: - rerank_data2 = right_to_left_preprocessed_dir - elif args.backwards2: - rerank_data2 = backwards_preprocessed_dir - else: - rerank_data2 = left_to_right_preprocessed_dir - - model_param2 = [ - "--path", - args.score_model2, - "--source-lang", - scorer2_src, - "--target-lang", - scorer2_tgt, - ] - gen_model2_param = [rerank_data2] + gen_param + 
model_param2 - - gen_parser = options.get_generation_parser() - input_args = options.parse_args_and_arch(gen_parser, gen_model2_param) - - with open(score2_file, "w") as f: - with redirect_stdout(f): - generate.main(input_args) - - -def cli_main(): - parser = rerank_options.get_reranking_parser() - args = options.parse_args_and_arch(parser) - score_bw(args) - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/examples/noisychannel/rerank_score_lm.py b/kosmos-g/fairseq/examples/noisychannel/rerank_score_lm.py deleted file mode 100644 index e80948d78..000000000 --- a/kosmos-g/fairseq/examples/noisychannel/rerank_score_lm.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import os - -from fairseq import options - -from examples.noisychannel import rerank_options, rerank_utils - - -def score_lm(args): - using_nbest = args.nbest_list is not None - ( - pre_gen, - left_to_right_preprocessed_dir, - right_to_left_preprocessed_dir, - backwards_preprocessed_dir, - lm_preprocessed_dir, - ) = rerank_utils.get_directories( - args.data_dir_name, - args.num_rescore, - args.gen_subset, - args.gen_model_name, - args.shard_id, - args.num_shards, - args.sampling, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - - predictions_bpe_file = pre_gen + "/generate_output_bpe.txt" - if using_nbest: - print("Using predefined n-best list from interactive.py") - predictions_bpe_file = args.nbest_list - - gen_output = rerank_utils.BitextOutputFromGen( - predictions_bpe_file, bpe_symbol=args.post_process, nbest=using_nbest - ) - - if args.language_model is not None: - lm_score_file = rerank_utils.rescore_file_name( - pre_gen, args.prefix_len, args.lm_name, lm_file=True - ) - - if args.language_model is not None and not os.path.isfile(lm_score_file): - print("STEP 4.5: language modeling for P(T)") - if args.lm_bpe_code is None: - bpe_status = "no bpe" - elif args.lm_bpe_code == "shared": - bpe_status = "shared" - else: - bpe_status = "different" - - rerank_utils.lm_scoring( - lm_preprocessed_dir, - bpe_status, - gen_output, - pre_gen, - args.lm_dict, - args.lm_name, - args.language_model, - args.lm_bpe_code, - 128, - lm_score_file, - args.target_lang, - args.source_lang, - prefix_len=args.prefix_len, - ) - - -def cli_main(): - parser = rerank_options.get_reranking_parser() - args = options.parse_args_and_arch(parser) - score_lm(args) - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/examples/noisychannel/rerank_tune.py b/kosmos-g/fairseq/examples/noisychannel/rerank_tune.py deleted file mode 100644 index b2e8b7594..000000000 --- a/kosmos-g/fairseq/examples/noisychannel/rerank_tune.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import random - -import numpy as np -from fairseq import options - -from examples.noisychannel import rerank, rerank_options - - -def random_search(args): - param_values = [] - tuneable_parameters = ["lenpen", "weight1", "weight2", "weight3"] - initial_params = [args.lenpen, args.weight1, args.weight2, args.weight3] - for i, elem in enumerate(initial_params): - if type(elem) is not list: - initial_params[i] = [elem] - else: - initial_params[i] = elem - - tune_parameters = args.tune_param.copy() - for i in range(len(args.tune_param)): - assert args.upper_bound[i] >= args.lower_bound[i] - index = tuneable_parameters.index(args.tune_param[i]) - del tuneable_parameters[index] - del initial_params[index] - - tune_parameters += tuneable_parameters - param_values += initial_params - random.seed(args.seed) - - random_params = np.array( - [ - [ - random.uniform(args.lower_bound[i], args.upper_bound[i]) - for i in range(len(args.tune_param)) - ] - for k in range(args.num_trials) - ] - ) - set_params = np.array( - [ - [initial_params[i][0] for i in range(len(tuneable_parameters))] - for k in range(args.num_trials) - ] - ) - random_params = np.concatenate((random_params, set_params), 1) - - rerank_args = vars(args).copy() - if args.nbest_list: - rerank_args["gen_subset"] = "test" - else: - rerank_args["gen_subset"] = args.tune_subset - - for k in range(len(tune_parameters)): - rerank_args[tune_parameters[k]] = list(random_params[:, k]) - - if args.share_weights: - k = tune_parameters.index("weight2") - rerank_args["weight3"] = list(random_params[:, k]) - - rerank_args = argparse.Namespace(**rerank_args) - best_lenpen, best_weight1, best_weight2, best_weight3, best_score = rerank.rerank( - rerank_args - ) - rerank_args = vars(args).copy() - rerank_args["lenpen"] = [best_lenpen] - rerank_args["weight1"] = [best_weight1] - rerank_args["weight2"] = [best_weight2] - rerank_args["weight3"] = [best_weight3] - - # write the hypothesis from the valid set from the best trial - - if args.gen_subset != "valid": - rerank_args["gen_subset"] = "valid" - rerank_args = argparse.Namespace(**rerank_args) - rerank.rerank(rerank_args) - - # test with the best hyperparameters on gen subset - rerank_args = vars(args).copy() - rerank_args["gen_subset"] = args.gen_subset - rerank_args["lenpen"] = [best_lenpen] - rerank_args["weight1"] = [best_weight1] - rerank_args["weight2"] = [best_weight2] - rerank_args["weight3"] = [best_weight3] - rerank_args = argparse.Namespace(**rerank_args) - rerank.rerank(rerank_args) - - -def cli_main(): - parser = rerank_options.get_tuning_parser() - args = options.parse_args_and_arch(parser) - - random_search(args) - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/examples/noisychannel/rerank_utils.py b/kosmos-g/fairseq/examples/noisychannel/rerank_utils.py deleted file mode 100644 index 2c6bf1b1a..000000000 --- a/kosmos-g/fairseq/examples/noisychannel/rerank_utils.py +++ /dev/null @@ -1,850 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math -import os -import re -import subprocess -from contextlib import redirect_stdout - -from fairseq import options -from fairseq_cli import eval_lm, preprocess - - -def reprocess(fle): - # takes in a file of generate.py translation generate_output - # returns a source dict and hypothesis dict, where keys are the ID num (as a string) - # and values and the corresponding source and translation. There may be several translations - # per source, so the values for hypothesis_dict are lists. - # parses output of generate.py - - with open(fle, "r") as f: - txt = f.read() - - """reprocess generate.py output""" - p = re.compile(r"[STHP][-]\d+\s*") - hp = re.compile(r"(\s*[-]?\d+[.]?\d+\s*)|(\s*(-inf)\s*)") - source_dict = {} - hypothesis_dict = {} - score_dict = {} - target_dict = {} - pos_score_dict = {} - lines = txt.split("\n") - - for line in lines: - line += "\n" - prefix = re.search(p, line) - if prefix is not None: - assert len(prefix.group()) > 2, "prefix id not found" - _, j = prefix.span() - id_num = prefix.group()[2:] - id_num = int(id_num) - line_type = prefix.group()[0] - if line_type == "H": - h_txt = line[j:] - hypo = re.search(hp, h_txt) - assert ( - hypo is not None - ), "regular expression failed to find the hypothesis scoring" - _, i = hypo.span() - score = hypo.group() - if id_num in hypothesis_dict: - hypothesis_dict[id_num].append(h_txt[i:]) - score_dict[id_num].append(float(score)) - else: - hypothesis_dict[id_num] = [h_txt[i:]] - score_dict[id_num] = [float(score)] - - elif line_type == "S": - source_dict[id_num] = line[j:] - elif line_type == "T": - target_dict[id_num] = line[j:] - elif line_type == "P": - pos_scores = (line[j:]).split() - pos_scores = [float(x) for x in pos_scores] - if id_num in pos_score_dict: - pos_score_dict[id_num].append(pos_scores) - else: - pos_score_dict[id_num] = [pos_scores] - - return source_dict, hypothesis_dict, score_dict, target_dict, pos_score_dict - - -def reprocess_nbest(fle): - """reprocess interactive.py output""" - with open(fle, "r") as f: - txt = f.read() - - source_dict = {} - hypothesis_dict = {} - score_dict = {} - target_dict = {} - pos_score_dict = {} - lines = txt.split("\n") - - hp = re.compile(r"[-]?\d+[.]?\d+") - j = -1 - - for _i, line in enumerate(lines): - line += "\n" - line_type = line[0] - - if line_type == "H": - hypo = re.search(hp, line) - _, start_index = hypo.span() - score = hypo.group() - if j in score_dict: - score_dict[j].append(float(score)) - hypothesis_dict[j].append(line[start_index:].strip("\t")) - else: - score_dict[j] = [float(score)] - hypothesis_dict[j] = [line[start_index:].strip("\t")] - elif line_type == "O": - j += 1 - source_dict[j] = line[2:] - # we don't have the targets for interactive.py - target_dict[j] = "filler" - - elif line_type == "P": - pos_scores = [float(pos_score) for pos_score in line.split()[1:]] - if j in pos_score_dict: - pos_score_dict[j].append(pos_scores) - else: - pos_score_dict[j] = [pos_scores] - - assert source_dict.keys() == hypothesis_dict.keys() - assert source_dict.keys() == pos_score_dict.keys() - assert source_dict.keys() == score_dict.keys() - - return source_dict, hypothesis_dict, score_dict, target_dict, pos_score_dict - - -def write_reprocessed( - sources, - hypos, - targets, - source_outfile, - hypo_outfile, - target_outfile, - right_to_left=False, - prefix_len=None, - bpe_symbol=None, - target_prefix_frac=None, - source_prefix_frac=None, -): - - """writes nbest hypothesis for rescoring""" - assert not ( - prefix_len is not None and 
target_prefix_frac is not None - ), "in writing reprocessed, only one type of prefix may be used" - assert not ( - prefix_len is not None and source_prefix_frac is not None - ), "in writing reprocessed, only one type of prefix may be used" - assert not ( - target_prefix_frac is not None and source_prefix_frac is not None - ), "in writing reprocessed, only one type of prefix may be used" - - with open(source_outfile, "w") as source_file, open( - hypo_outfile, "w" - ) as hypo_file, open(target_outfile, "w") as target_file: - - assert len(sources) == len(hypos), "sources and hypos list length mismatch" - if right_to_left: - for i in range(len(sources)): - for j in range(len(hypos[i])): - if prefix_len is None: - hypo_file.write(make_right_to_left(hypos[i][j]) + "\n") - else: - raise NotImplementedError() - source_file.write(make_right_to_left(sources[i]) + "\n") - target_file.write(make_right_to_left(targets[i]) + "\n") - else: - for i in sorted(sources.keys()): - for j in range(len(hypos[i])): - if prefix_len is not None: - shortened = ( - get_prefix_no_bpe(hypos[i][j], bpe_symbol, prefix_len) - + "\n" - ) - hypo_file.write(shortened) - source_file.write(sources[i]) - target_file.write(targets[i]) - elif target_prefix_frac is not None: - num_words, shortened, num_bpe_tokens = calc_length_from_frac( - hypos[i][j], target_prefix_frac, bpe_symbol - ) - shortened += "\n" - hypo_file.write(shortened) - source_file.write(sources[i]) - target_file.write(targets[i]) - elif source_prefix_frac is not None: - num_words, shortened, num_bpe_tokensn = calc_length_from_frac( - sources[i], source_prefix_frac, bpe_symbol - ) - shortened += "\n" - hypo_file.write(hypos[i][j]) - source_file.write(shortened) - target_file.write(targets[i]) - else: - hypo_file.write(hypos[i][j]) - source_file.write(sources[i]) - target_file.write(targets[i]) - - -def calc_length_from_frac(bpe_sentence, prefix_frac, bpe_symbol): - # return number of words, (not bpe tokens) that we want - no_bpe_sen = remove_bpe(bpe_sentence, bpe_symbol) - len_sen = len(no_bpe_sen.split()) - - num_words = math.ceil(len_sen * prefix_frac) - prefix = get_prefix_no_bpe(bpe_sentence, bpe_symbol, num_words) - num_bpe_tokens = len(prefix.split()) - return num_words, prefix, num_bpe_tokens - - -def get_prefix(sentence, prefix_len): - """assuming no bpe, gets the prefix of the sentence with prefix_len words""" - tokens = sentence.strip("\n").split() - if prefix_len >= len(tokens): - return sentence.strip("\n") - else: - return " ".join(tokens[:prefix_len]) - - -def get_prefix_no_bpe(sentence, bpe_symbol, prefix_len): - if bpe_symbol is None: - return get_prefix(sentence, prefix_len) - else: - return " ".join(get_prefix_from_len(sentence.split(), bpe_symbol, prefix_len)) - - -def get_prefix_from_len(sentence, bpe_symbol, prefix_len): - """get the prefix of sentence with bpe, with prefix len in terms of words, not bpe tokens""" - bpe_count = sum([bpe_symbol.strip(" ") in t for t in sentence[:prefix_len]]) - if bpe_count == 0: - return sentence[:prefix_len] - else: - return sentence[:prefix_len] + get_prefix_from_len( - sentence[prefix_len:], bpe_symbol, bpe_count - ) - - -def get_num_bpe_tokens_from_len(sentence, bpe_symbol, prefix_len): - """given a prefix length in terms of words, return the number of bpe tokens""" - prefix = get_prefix_no_bpe(sentence, bpe_symbol, prefix_len) - assert len(remove_bpe(prefix, bpe_symbol).split()) <= prefix_len - return len(prefix.split(" ")) - - -def make_right_to_left(line): - tokens = line.split() - tokens.reverse() - 
new_line = " ".join(tokens) - return new_line - - -def remove_bpe(line, bpe_symbol): - line = line.replace("\n", "") - line = (line + " ").replace(bpe_symbol, "").rstrip() - return line + ("\n") - - -def remove_bpe_dict(pred_dict, bpe_symbol): - new_dict = {} - for i in pred_dict: - if type(pred_dict[i]) == list: - new_list = [remove_bpe(elem, bpe_symbol) for elem in pred_dict[i]] - new_dict[i] = new_list - else: - new_dict[i] = remove_bpe(pred_dict[i], bpe_symbol) - return new_dict - - -def parse_bleu_scoring(line): - p = re.compile(r"(BLEU4 = )\d+[.]\d+") - res = re.search(p, line) - assert res is not None, line - return float(res.group()[8:]) - - -def get_full_from_prefix(hypo_prefix, hypos): - """given a hypo prefix, recover the first hypo from the list of complete hypos beginning with that prefix""" - for hypo in hypos: - hypo_prefix = hypo_prefix.strip("\n") - len_prefix = len(hypo_prefix) - if hypo[:len_prefix] == hypo_prefix: - return hypo - # no match found - raise Exception() - - -def get_score( - a, - b, - c, - target_len, - bitext_score1, - bitext_score2=None, - lm_score=None, - lenpen=None, - src_len=None, - tgt_len=None, - bitext1_backwards=False, - bitext2_backwards=False, - normalize=False, -): - if bitext1_backwards: - bitext1_norm = src_len - else: - bitext1_norm = tgt_len - if bitext_score2 is not None: - if bitext2_backwards: - bitext2_norm = src_len - else: - bitext2_norm = tgt_len - else: - bitext2_norm = 1 - bitext_score2 = 0 - if normalize: - score = ( - a * bitext_score1 / bitext1_norm - + b * bitext_score2 / bitext2_norm - + c * lm_score / src_len - ) - else: - score = a * bitext_score1 + b * bitext_score2 + c * lm_score - - if lenpen is not None: - score /= (target_len) ** float(lenpen) - - return score - - -class BitextOutput(object): - def __init__( - self, - output_file, - backwards, - right_to_left, - bpe_symbol, - prefix_len=None, - target_prefix_frac=None, - source_prefix_frac=None, - ): - """process output from rescoring""" - source, hypo, score, target, pos_score = reprocess(output_file) - if backwards: - self.hypo_fracs = source_prefix_frac - else: - self.hypo_fracs = target_prefix_frac - - # remove length penalty so we can use raw scores - score, num_bpe_tokens = get_score_from_pos( - pos_score, prefix_len, hypo, bpe_symbol, self.hypo_fracs, backwards - ) - source_lengths = {} - target_lengths = {} - - assert hypo.keys() == source.keys(), "key mismatch" - if backwards: - tmp = hypo - hypo = source - source = tmp - for i in source: - # since we are reranking, there should only be one hypo per source sentence - if backwards: - len_src = len(source[i][0].split()) - # record length without <eos> - if len_src == num_bpe_tokens[i][0] - 1: - source_lengths[i] = num_bpe_tokens[i][0] - 1 - else: - source_lengths[i] = num_bpe_tokens[i][0] - - target_lengths[i] = len(hypo[i].split()) - - source[i] = remove_bpe(source[i][0], bpe_symbol) - target[i] = remove_bpe(target[i], bpe_symbol) - hypo[i] = remove_bpe(hypo[i], bpe_symbol) - - score[i] = float(score[i][0]) - pos_score[i] = pos_score[i][0] - - else: - len_tgt = len(hypo[i][0].split()) - # record length without <eos> - if len_tgt == num_bpe_tokens[i][0] - 1: - target_lengths[i] = num_bpe_tokens[i][0] - 1 - else: - target_lengths[i] = num_bpe_tokens[i][0] - - source_lengths[i] = len(source[i].split()) - - if right_to_left: - source[i] = remove_bpe(make_right_to_left(source[i]), bpe_symbol) - target[i] = remove_bpe(make_right_to_left(target[i]), bpe_symbol) - hypo[i] = remove_bpe(make_right_to_left(hypo[i][0]), 
bpe_symbol) - score[i] = float(score[i][0]) - pos_score[i] = pos_score[i][0] - else: - assert ( - len(hypo[i]) == 1 - ), "expected only one hypothesis per source sentence" - source[i] = remove_bpe(source[i], bpe_symbol) - target[i] = remove_bpe(target[i], bpe_symbol) - hypo[i] = remove_bpe(hypo[i][0], bpe_symbol) - score[i] = float(score[i][0]) - pos_score[i] = pos_score[i][0] - - self.rescore_source = source - self.rescore_hypo = hypo - self.rescore_score = score - self.rescore_target = target - self.rescore_pos_score = pos_score - self.backwards = backwards - self.right_to_left = right_to_left - self.target_lengths = target_lengths - self.source_lengths = source_lengths - - -class BitextOutputFromGen(object): - def __init__( - self, - predictions_bpe_file, - bpe_symbol=None, - nbest=False, - prefix_len=None, - target_prefix_frac=None, - ): - if nbest: - ( - pred_source, - pred_hypo, - pred_score, - pred_target, - pred_pos_score, - ) = reprocess_nbest(predictions_bpe_file) - else: - pred_source, pred_hypo, pred_score, pred_target, pred_pos_score = reprocess( - predictions_bpe_file - ) - - assert len(pred_source) == len(pred_hypo) - assert len(pred_source) == len(pred_score) - assert len(pred_source) == len(pred_target) - assert len(pred_source) == len(pred_pos_score) - - # remove length penalty so we can use raw scores - pred_score, num_bpe_tokens = get_score_from_pos( - pred_pos_score, prefix_len, pred_hypo, bpe_symbol, target_prefix_frac, False - ) - - self.source = pred_source - self.target = pred_target - self.score = pred_score - self.pos_score = pred_pos_score - self.hypo = pred_hypo - self.target_lengths = {} - self.source_lengths = {} - - self.no_bpe_source = remove_bpe_dict(pred_source.copy(), bpe_symbol) - self.no_bpe_hypo = remove_bpe_dict(pred_hypo.copy(), bpe_symbol) - self.no_bpe_target = remove_bpe_dict(pred_target.copy(), bpe_symbol) - - # indexes to match those from the rescoring models - self.rescore_source = {} - self.rescore_target = {} - self.rescore_pos_score = {} - self.rescore_hypo = {} - self.rescore_score = {} - self.num_hypos = {} - self.backwards = False - self.right_to_left = False - - index = 0 - - for i in sorted(pred_source.keys()): - for j in range(len(pred_hypo[i])): - - self.target_lengths[index] = len(self.hypo[i][j].split()) - self.source_lengths[index] = len(self.source[i].split()) - - self.rescore_source[index] = self.no_bpe_source[i] - self.rescore_target[index] = self.no_bpe_target[i] - self.rescore_hypo[index] = self.no_bpe_hypo[i][j] - self.rescore_score[index] = float(pred_score[i][j]) - self.rescore_pos_score[index] = pred_pos_score[i][j] - self.num_hypos[index] = len(pred_hypo[i]) - index += 1 - - -def get_score_from_pos( - pos_score_dict, prefix_len, hypo_dict, bpe_symbol, hypo_frac, backwards -): - score_dict = {} - num_bpe_tokens_dict = {} - assert prefix_len is None or hypo_frac is None - for key in pos_score_dict: - score_dict[key] = [] - num_bpe_tokens_dict[key] = [] - for i in range(len(pos_score_dict[key])): - if prefix_len is not None and not backwards: - num_bpe_tokens = get_num_bpe_tokens_from_len( - hypo_dict[key][i], bpe_symbol, prefix_len - ) - score_dict[key].append(sum(pos_score_dict[key][i][:num_bpe_tokens])) - num_bpe_tokens_dict[key].append(num_bpe_tokens) - elif hypo_frac is not None: - num_words, shortened, hypo_prefix_len = calc_length_from_frac( - hypo_dict[key][i], hypo_frac, bpe_symbol - ) - score_dict[key].append(sum(pos_score_dict[key][i][:hypo_prefix_len])) - num_bpe_tokens_dict[key].append(hypo_prefix_len) - else: 
- score_dict[key].append(sum(pos_score_dict[key][i])) - num_bpe_tokens_dict[key].append(len(pos_score_dict[key][i])) - return score_dict, num_bpe_tokens_dict - - -class LMOutput(object): - def __init__( - self, - lm_score_file, - lm_dict=None, - prefix_len=None, - bpe_symbol=None, - target_prefix_frac=None, - ): - ( - lm_sentences, - lm_sen_scores, - lm_sen_pos_scores, - lm_no_bpe_sentences, - lm_bpe_tokens, - ) = parse_lm( - lm_score_file, - prefix_len=prefix_len, - bpe_symbol=bpe_symbol, - target_prefix_frac=target_prefix_frac, - ) - - self.sentences = lm_sentences - self.score = lm_sen_scores - self.pos_score = lm_sen_pos_scores - self.lm_dict = lm_dict - self.no_bpe_sentences = lm_no_bpe_sentences - self.bpe_tokens = lm_bpe_tokens - - -def parse_lm(input_file, prefix_len=None, bpe_symbol=None, target_prefix_frac=None): - """parse output of eval_lm""" - with open(input_file, "r") as f: - text = f.readlines() - text = text[7:] - cleaned_text = text[:-2] - - sentences = {} - sen_scores = {} - sen_pos_scores = {} - no_bpe_sentences = {} - num_bpe_tokens_dict = {} - for _i, line in enumerate(cleaned_text): - tokens = line.split() - if tokens[0].isdigit(): - line_id = int(tokens[0]) - scores = [float(x[1:-1]) for x in tokens[2::2]] - sentences[line_id] = " ".join(tokens[1::2][:-1]) + "\n" - if bpe_symbol is not None: - # exclude <eos> symbol to match output from generate.py - bpe_sen = " ".join(tokens[1::2][:-1]) + "\n" - no_bpe_sen = remove_bpe(bpe_sen, bpe_symbol) - no_bpe_sentences[line_id] = no_bpe_sen - - if prefix_len is not None: - num_bpe_tokens = get_num_bpe_tokens_from_len( - bpe_sen, bpe_symbol, prefix_len - ) - sen_scores[line_id] = sum(scores[:num_bpe_tokens]) - num_bpe_tokens_dict[line_id] = num_bpe_tokens - elif target_prefix_frac is not None: - num_words, shortened, target_prefix_len = calc_length_from_frac( - bpe_sen, target_prefix_frac, bpe_symbol - ) - sen_scores[line_id] = sum(scores[:target_prefix_len]) - num_bpe_tokens_dict[line_id] = target_prefix_len - else: - sen_scores[line_id] = sum(scores) - num_bpe_tokens_dict[line_id] = len(scores) - - sen_pos_scores[line_id] = scores - - return sentences, sen_scores, sen_pos_scores, no_bpe_sentences, num_bpe_tokens_dict - - -def get_directories( - data_dir_name, - num_rescore, - gen_subset, - fw_name, - shard_id, - num_shards, - sampling=False, - prefix_len=None, - target_prefix_frac=None, - source_prefix_frac=None, -): - nbest_file_id = ( - "nbest_" - + str(num_rescore) - + "_subset_" - + gen_subset - + "_fw_name_" - + fw_name - + "_shard_" - + str(shard_id) - + "_of_" - + str(num_shards) - ) - - if sampling: - nbest_file_id += "_sampling" - - # the directory containing all information for this nbest list - pre_gen = ( - os.path.join(os.path.dirname(__file__)) - + "/rerank_data/" - + data_dir_name - + "/" - + nbest_file_id - ) - # the directory to store the preprocessed nbest list, for left to right rescoring - left_to_right_preprocessed_dir = pre_gen + "/left_to_right_preprocessed" - if source_prefix_frac is not None: - left_to_right_preprocessed_dir = ( - left_to_right_preprocessed_dir + "/prefix_frac" + str(source_prefix_frac) - ) - # the directory to store the preprocessed nbest list, for right to left rescoring - right_to_left_preprocessed_dir = pre_gen + "/right_to_left_preprocessed" - # the directory to store the preprocessed nbest list, for backwards rescoring - backwards_preprocessed_dir = pre_gen + "/backwards" - if target_prefix_frac is not None: - backwards_preprocessed_dir = ( - backwards_preprocessed_dir + 
"/prefix_frac" + str(target_prefix_frac) - ) - elif prefix_len is not None: - backwards_preprocessed_dir = ( - backwards_preprocessed_dir + "/prefix_" + str(prefix_len) - ) - - # the directory to store the preprocessed nbest list, for rescoring with P(T) - lm_preprocessed_dir = pre_gen + "/lm_preprocessed" - - return ( - pre_gen, - left_to_right_preprocessed_dir, - right_to_left_preprocessed_dir, - backwards_preprocessed_dir, - lm_preprocessed_dir, - ) - - -def lm_scoring( - preprocess_directory, - bpe_status, - gen_output, - pre_gen, - cur_lm_dict, - cur_lm_name, - cur_language_model, - cur_lm_bpe_code, - batch_size, - lm_score_file, - target_lang, - source_lang, - prefix_len=None, -): - if prefix_len is not None: - assert ( - bpe_status == "different" - ), "bpe status must be different to use prefix len" - if bpe_status == "no bpe": - # run lm on output without bpe - write_reprocessed( - gen_output.no_bpe_source, - gen_output.no_bpe_hypo, - gen_output.no_bpe_target, - pre_gen + "/rescore_data_no_bpe.de", - pre_gen + "/rescore_data_no_bpe.en", - pre_gen + "/reference_file_no_bpe", - ) - - preprocess_lm_param = [ - "--only-source", - "--trainpref", - pre_gen + "/rescore_data_no_bpe." + target_lang, - "--srcdict", - cur_lm_dict, - "--destdir", - preprocess_directory, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(preprocess_lm_param) - preprocess.main(input_args) - - eval_lm_param = [ - preprocess_directory, - "--path", - cur_language_model, - "--output-word-probs", - "--batch-size", - str(batch_size), - "--max-tokens", - "1024", - "--sample-break-mode", - "eos", - "--gen-subset", - "train", - ] - - eval_lm_parser = options.get_eval_lm_parser() - input_args = options.parse_args_and_arch(eval_lm_parser, eval_lm_param) - - with open(lm_score_file, "w") as f: - with redirect_stdout(f): - eval_lm.main(input_args) - - elif bpe_status == "shared": - preprocess_lm_param = [ - "--only-source", - "--trainpref", - pre_gen + "/rescore_data." + target_lang, - "--srcdict", - cur_lm_dict, - "--destdir", - preprocess_directory, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(preprocess_lm_param) - preprocess.main(input_args) - - eval_lm_param = [ - preprocess_directory, - "--path", - cur_language_model, - "--output-word-probs", - "--batch-size", - str(batch_size), - "--sample-break-mode", - "eos", - "--gen-subset", - "train", - ] - - eval_lm_parser = options.get_eval_lm_parser() - input_args = options.parse_args_and_arch(eval_lm_parser, eval_lm_param) - - with open(lm_score_file, "w") as f: - with redirect_stdout(f): - eval_lm.main(input_args) - - elif bpe_status == "different": - rescore_file = pre_gen + "/rescore_data_no_bpe" - rescore_bpe = pre_gen + "/rescore_data_new_bpe" - - rescore_file += "." - rescore_bpe += "." 
- - write_reprocessed( - gen_output.no_bpe_source, - gen_output.no_bpe_hypo, - gen_output.no_bpe_target, - rescore_file + source_lang, - rescore_file + target_lang, - pre_gen + "/reference_file_no_bpe", - bpe_symbol=None, - ) - - # apply LM bpe to nbest list - bpe_src_param = [ - "-c", - cur_lm_bpe_code, - "--input", - rescore_file + target_lang, - "--output", - rescore_bpe + target_lang, - ] - subprocess.call( - [ - "python", - os.path.join( - os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py" - ), - ] - + bpe_src_param, - shell=False, - ) - # uncomment to use fastbpe instead of subword-nmt bpe - # bpe_src_param = [rescore_bpe+target_lang, rescore_file+target_lang, cur_lm_bpe_code] - # subprocess.call(["/private/home/edunov/fastBPE/fast", "applybpe"] + bpe_src_param, shell=False) - - preprocess_dir = preprocess_directory - - preprocess_lm_param = [ - "--only-source", - "--trainpref", - rescore_bpe + target_lang, - "--srcdict", - cur_lm_dict, - "--destdir", - preprocess_dir, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(preprocess_lm_param) - preprocess.main(input_args) - - eval_lm_param = [ - preprocess_dir, - "--path", - cur_language_model, - "--output-word-probs", - "--batch-size", - str(batch_size), - "--max-tokens", - "1024", - "--sample-break-mode", - "eos", - "--gen-subset", - "train", - ] - - eval_lm_parser = options.get_eval_lm_parser() - input_args = options.parse_args_and_arch(eval_lm_parser, eval_lm_param) - - with open(lm_score_file, "w") as f: - with redirect_stdout(f): - eval_lm.main(input_args) - - -def rescore_file_name( - nbest_dir, - prefix_len, - scorer_name, - lm_file=False, - target_prefix_frac=None, - source_prefix_frac=None, - backwards=None, -): - if lm_file: - score_file = nbest_dir + "/lm_score_translations_model_" + scorer_name + ".txt" - else: - score_file = nbest_dir + "/" + scorer_name + "_score_translations.txt" - if backwards: - if prefix_len is not None: - score_file += "prefix_len" + str(prefix_len) - elif target_prefix_frac is not None: - score_file += "target_prefix_frac" + str(target_prefix_frac) - else: - if source_prefix_frac is not None: - score_file += "source_prefix_frac" + str(source_prefix_frac) - return score_file diff --git a/kosmos-g/fairseq/examples/nonautoregressive_translation/README.md b/kosmos-g/fairseq/examples/nonautoregressive_translation/README.md deleted file mode 100644 index 8793e225c..000000000 --- a/kosmos-g/fairseq/examples/nonautoregressive_translation/README.md +++ /dev/null @@ -1,146 +0,0 @@ -# Non-autoregressive Neural Machine Translation (NAT) - -This page mainly includes instructions for reproducing results from the following papers -* [Levenshtein Transformer (Gu et al., 2019)](https://arxiv.org/abs/1905.11006). -* [Understanding Knowledge Distillation in Non-autoregressive Machine Translation (Zhou et al., 2019)](https://arxiv.org/abs/1911.02727). 
-
-We also provide our own implementations of several popular non-autoregressive models for reference:<br>
-* [Non-Autoregressive Neural Machine Translation (Gu et al., 2017)](https://arxiv.org/abs/1711.02281)<br>
-* [Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement (Lee et al., 2018)](https://arxiv.org/abs/1802.06901)<br>
-* [Insertion Transformer: Flexible Sequence Generation via Insertion Operations (Stern et al., 2019)](https://arxiv.org/abs/1902.03249)<br>
-* [Mask-Predict: Parallel Decoding of Conditional Masked Language Models (Ghazvininejad et al., 2019)](https://arxiv.org/abs/1904.09324v2)<br>
-* [Fast Structured Decoding for Sequence Models (Sun et al., 2019)](https://arxiv.org/abs/1910.11555)
-
-## Dataset
-
-First, follow the [instructions to download and preprocess the WMT'14 En-De dataset](../translation#wmt14-english-to-german-convolutional).
-Make sure to learn a joint vocabulary by passing the `--joined-dictionary` option to `fairseq-preprocess`.
-
-### Knowledge Distillation
-Following [Gu et al. 2019](https://arxiv.org/abs/1905.11006), [knowledge distillation](https://arxiv.org/abs/1606.07947) from an autoregressive model can effectively simplify the training data distribution, which is sometimes essential for NAT-based models to learn good translations.
-The easiest way to perform distillation is to follow the [instructions for training a standard transformer model](../translation) on the same data, and then decode the training set to produce a distillation dataset for NAT.
-
-### Download
-We also provide the preprocessed [original](http://dl.fbaipublicfiles.com/nat/original_dataset.zip) and [distillation](http://dl.fbaipublicfiles.com/nat/distill_dataset.zip) datasets. Please build the binarized dataset on your own.
-
-
-## Train a model
-
-Then we can train a non-autoregressive model using the `translation_lev` task and the new criterion `nat_loss`.
-Use the `--noise` flag to specify the input noise applied to the target sentences.
-By default, the task is configured for the *Levenshtein Transformer*, with `--noise='random_delete'`. Full scripts for running other models can be found [here](./scripts.md).
-
-The following command will train a *Levenshtein Transformer* on the binarized dataset.
-
-```bash
-fairseq-train \
-    data-bin/wmt14_en_de_distill \
-    --save-dir checkpoints \
-    --ddp-backend=legacy_ddp \
-    --task translation_lev \
-    --criterion nat_loss \
-    --arch levenshtein_transformer \
-    --noise random_delete \
-    --share-all-embeddings \
-    --optimizer adam --adam-betas '(0.9,0.98)' \
-    --lr 0.0005 --lr-scheduler inverse_sqrt \
-    --stop-min-lr '1e-09' --warmup-updates 10000 \
-    --warmup-init-lr '1e-07' --label-smoothing 0.1 \
-    --dropout 0.3 --weight-decay 0.01 \
-    --decoder-learned-pos \
-    --encoder-learned-pos \
-    --apply-bert-init \
-    --log-format 'simple' --log-interval 100 \
-    --fixed-validation-seed 7 \
-    --max-tokens 8000 \
-    --save-interval-updates 10000 \
-    --max-update 300000
-```
-
-## Translate
-
-Once a model is trained, we can generate translations using the `iterative_refinement_generator`, which starts from the model's initial output and iteratively reads and greedily refines the translation until (1) the model predicts the same translation for two consecutive iterations, or (2) the generator reaches the maximum number of iterations (`--iter-decode-max-iter`). Use `--print-step` to check the actual number of iterations for each sentence.
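To make the two stopping conditions concrete, here is a minimal Python sketch of such a refinement loop. It is purely illustrative: `initialize_output` and `refine` are hypothetical stand-ins for a model's decoding hooks, not fairseq APIs.

```python
import torch

def iterative_refinement_decode(model, src_tokens, max_iter=9):
    """Illustrative refinement loop; not fairseq's IterativeRefinementGenerator."""
    output = model.initialize_output(src_tokens)  # initial one-shot translation
    for step in range(max_iter):
        refined = model.refine(src_tokens, output)  # read and greedily refine
        # Stopping condition (1): the translation no longer changes.
        if torch.equal(refined, output):
            return refined, step + 1
        output = refined
    # Stopping condition (2): the iteration budget is exhausted.
    return output, max_iter
```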
-
-For the *Levenshtein Transformer*, it sometimes helps to apply an `--iter-decode-eos-penalty` (typically 0 to 3) to discourage the model from finishing generation too early and producing overly short translations.
-
-For example, to generate with `--iter-decode-max-iter=9`:
-```bash
-fairseq-generate \
-    data-bin/wmt14_en_de_distill \
-    --gen-subset test \
-    --task translation_lev \
-    --path checkpoints/checkpoint_best.pt \
-    --iter-decode-max-iter 9 \
-    --iter-decode-eos-penalty 0 \
-    --beam 1 --remove-bpe \
-    --print-step \
-    --batch-size 400
-```
-At the end of generation, the tokenized BLEU score for the translations is printed.
-
-## Advanced Decoding Methods
-### Ensemble
-The NAT models use special implementations of [ensembling](https://github.com/fairinternal/fairseq-py/blob/b98d88da52f2f21f1b169bab8c70c1c4ca19a768/fairseq/sequence_generator.py#L522) to support iterative refinement and a variety of parallel operations in different models, while sharing the same API as standard autoregressive models:
-```bash
-fairseq-generate \
-    data-bin/wmt14_en_de_distill \
-    --gen-subset test \
-    --task translation_lev \
-    --path checkpoint_1.pt:checkpoint_2.pt:checkpoint_3.pt \
-    --iter-decode-max-iter 9 \
-    --iter-decode-eos-penalty 0 \
-    --beam 1 --remove-bpe \
-    --print-step \
-    --batch-size 400
-```
-We use ``:`` to separate multiple models. Note that not all NAT models support ensembling for now.
-
-
-### Length-beam
-For models that predict lengths before decoding (e.g. the vanilla NAT and Mask-Predict), it is possible to improve the translation quality by varying the target lengths around the predicted value and translating the same example multiple times in parallel. The best translation can then be selected as the candidate with the highest score under the model's output.
-
-Note that not all models support length beams. For models that dynamically change the output length (e.g. the *Insertion Transformer* and *Levenshtein Transformer*), the same trick does not apply.
-
-### Re-ranking
-If the model generates multiple translations with a length beam, we can also introduce an autoregressive model to rerank the translations, since scoring with an autoregressive model is much faster than decoding from it.
-
-For example, to generate translations with a length beam and reranking:
-```bash
-fairseq-generate \
-    data-bin/wmt14_en_de_distill \
-    --gen-subset test \
-    --task translation_lev \
-    --path checkpoints/checkpoint_best.pt:at_checkpoints/checkpoint_best.pt \
-    --iter-decode-max-iter 9 \
-    --iter-decode-eos-penalty 0 \
-    --iter-decode-with-beam 9 \
-    --iter-decode-with-external-reranker \
-    --beam 1 --remove-bpe \
-    --print-step \
-    --batch-size 100
-```
-Note that the autoregressive reranker must share the same vocabulary as the target non-autoregressive model.
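The length-beam and re-ranking procedures above amount to a simple select-the-best step over candidate translations. The sketch below illustrates that selection under our reading of the text; `at_score_fn` (one teacher-forced forward pass of the autoregressive model per candidate) and the surrounding names are hypothetical, not fairseq's implementation.

```python
def select_best_candidate(candidates, nat_scores, at_score_fn=None):
    """Pick the best translation among length-beam candidates.

    `candidates` are the parallel translations produced with different target
    lengths; `nat_scores` are the NAT model's own scores for them. If an
    autoregressive scorer is supplied, its forward-pass scores (much cheaper
    than decoding with the autoregressive model) are used for re-ranking.
    """
    scores = [at_score_fn(c) for c in candidates] if at_score_fn else nat_scores
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best], scores[best]
```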
-
-
-## Citation
-
-```bibtex
-@incollection{NIPS2019_9297,
-  title = {Levenshtein Transformer},
-  author = {Gu, Jiatao and Wang, Changhan and Zhao, Junbo},
-  booktitle = {Advances in Neural Information Processing Systems 32},
-  editor = {H. Wallach and H. Larochelle and A. Beygelzimer and F. d\textquotesingle Alch\'{e}-Buc and E. Fox and R. Garnett},
-  pages = {11179--11189},
-  year = {2019},
-  publisher = {Curran Associates, Inc.},
-  url = {http://papers.nips.cc/paper/9297-levenshtein-transformer.pdf}
-}
-```
-```bibtex
-@article{zhou2019understanding,
-  title={Understanding Knowledge Distillation in Non-autoregressive Machine Translation},
-  author={Zhou, Chunting and Neubig, Graham and Gu, Jiatao},
-  journal={arXiv preprint arXiv:1911.02727},
-  year={2019}
-}
-```
diff --git a/kosmos-g/fairseq/examples/nonautoregressive_translation/scripts.md b/kosmos-g/fairseq/examples/nonautoregressive_translation/scripts.md
deleted file mode 100644
index 9d3d7b67d..000000000
--- a/kosmos-g/fairseq/examples/nonautoregressive_translation/scripts.md
+++ /dev/null
@@ -1,179 +0,0 @@
-# Examples of Training Scripts for Non-autoregressive Machine Translation Models
-
-### Non-autoregressive Transformer (NAT, Gu et al., 2017)
-Note that the vanilla NAT needs an additional module to perform "length prediction" (weighted by `--length-loss-factor`) before generating the whole sequence.
-```bash
-fairseq-train \
-    data-bin/wmt14_en_de_distill \
-    --save-dir checkpoints \
-    --ddp-backend=legacy_ddp \
-    --task translation_lev \
-    --criterion nat_loss \
-    --arch nonautoregressive_transformer \
-    --noise full_mask \
-    --share-all-embeddings \
-    --optimizer adam --adam-betas '(0.9,0.98)' \
-    --lr 0.0005 --lr-scheduler inverse_sqrt \
-    --stop-min-lr '1e-09' --warmup-updates 10000 \
-    --warmup-init-lr '1e-07' --label-smoothing 0.1 \
-    --dropout 0.3 --weight-decay 0.01 \
-    --decoder-learned-pos \
-    --encoder-learned-pos \
-    --pred-length-offset \
-    --length-loss-factor 0.1 \
-    --apply-bert-init \
-    --log-format 'simple' --log-interval 100 \
-    --fixed-validation-seed 7 \
-    --max-tokens 8000 \
-    --save-interval-updates 10000 \
-    --max-update 300000
-```
-
-### Fast Structured Decoding for Sequence Models (NAT-CRF, Sun et al., 2019)
-Note that we implemented a low-rank approximated CRF model by setting `--crf-lowrank-approx=32` and `--crf-beam-approx=64`, as described in the original paper. All other settings are the same as for the vanilla NAT model.
-```bash
-fairseq-train \
-    data-bin/wmt14_en_de_distill \
-    --save-dir checkpoints \
-    --ddp-backend=legacy_ddp \
-    --task translation_lev \
-    --criterion nat_loss \
-    --arch nacrf_transformer \
-    --noise full_mask \
-    --share-all-embeddings \
-    --optimizer adam --adam-betas '(0.9,0.98)' \
-    --lr 0.0005 --lr-scheduler inverse_sqrt \
-    --stop-min-lr '1e-09' --warmup-updates 10000 \
-    --warmup-init-lr '1e-07' --label-smoothing 0.1 \
-    --dropout 0.3 --weight-decay 0.01 \
-    --decoder-learned-pos \
-    --encoder-learned-pos \
-    --pred-length-offset \
-    --length-loss-factor 0.1 \
-    --word-ins-loss-factor 0.5 \
-    --crf-lowrank-approx 32 \
-    --crf-beam-approx 64 \
-    --apply-bert-init \
-    --log-format 'simple' --log-interval 100 \
-    --fixed-validation-seed 7 \
-    --max-tokens 8000 \
-    --save-interval-updates 10000 \
-    --max-update 300000
-```
-
-
-### Non-autoregressive Transformer with Iterative Refinement (iNAT, Lee et al., 2018)
-Note that `--train-step` sets how many refinement iterations are used during training, and `--dae-ratio` controls the ratio of denoising auto-encoder training described in the original paper; a schematic sketch of this scheme follows, and the full training command appears after it.
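To make the interaction between `--train-step` and `--dae-ratio` concrete, here is a heavily simplified Python sketch of one iNAT-style training step under our reading of the paper. It is illustrative only: `model.decode`, `model.loss`, and `corrupt` are hypothetical stand-ins, not fairseq APIs.

```python
import random

def corrupt(tokens, mask="<mask>", p=0.15):
    # Illustrative corruption: randomly mask reference tokens.
    return [mask if random.random() < p else t for t in tokens]

def inat_train_step(model, src, tgt, train_step=4, dae_ratio=0.5):
    """Schematic iNAT training step with `train_step` refinement iterations."""
    total_loss = 0.0
    prev_output = model.decode(src, None)  # initial, non-refined prediction
    for _ in range(train_step):
        if random.random() < dae_ratio:
            # Denoising auto-encoder training: refine a corrupted reference.
            decoder_input = corrupt(tgt)
        else:
            # Otherwise refine the model's own previous prediction.
            decoder_input = prev_output
        prev_output = model.decode(src, decoder_input)
        total_loss += model.loss(prev_output, tgt)
    return total_loss / train_step
```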
-```bash -fairseq-train \ - data-bin/wmt14_en_de_distill \ - --save-dir checkpoints \ - --ddp-backend=legacy_ddp \ - --task translation_lev \ - --criterion nat_loss \ - --arch iterative_nonautoregressive_transformer \ - --noise full_mask \ - --share-all-embeddings \ - --optimizer adam --adam-betas '(0.9,0.98)' \ - --lr 0.0005 --lr-scheduler inverse_sqrt \ - --stop-min-lr '1e-09' --warmup-updates 10000 \ - --warmup-init-lr '1e-07' --label-smoothing 0.1 \ - --dropout 0.3 --weight-decay 0.01 \ - --decoder-learned-pos \ - --encoder-learned-pos \ - --pred-length-offset \ - --length-loss-factor 0.1 \ - --train-step 4 \ - --dae-ratio 0.5 \ - --stochastic-approx \ - --apply-bert-init \ - --log-format 'simple' --log-interval 100 \ - --fixed-validation-seed 7 \ - --max-tokens 8000 \ - --save-interval-updates 10000 \ - --max-update 300000 -``` - -### Insertion Transformer (InsT, Stern et al., 2019) -Note that we need to specify the "slot-loss" (uniform or balanced tree) described in the original paper. Here we use `--label-tau` to control the temperature. - -```bash -fairseq-train \ - data-bin/wmt14_en_de_distill \ - --save-dir checkpoints \ - --ddp-backend=legacy_ddp \ - --task translation_lev \ - --criterion nat_loss \ - --arch insertion_transformer \ - --noise random_delete \ - --share-all-embeddings \ - --optimizer adam --adam-betas '(0.9,0.98)' \ - --lr 0.0005 --lr-scheduler inverse_sqrt \ - --stop-min-lr '1e-09' --warmup-updates 10000 \ - --warmup-init-lr '1e-07' --label-smoothing 0.1 \ - --dropout 0.3 --weight-decay 0.01 \ - --decoder-learned-pos \ - --encoder-learned-pos \ - --apply-bert-init \ - --log-format 'simple' --log-interval 100 \ - --fixed-validation-seed 7 \ - --max-tokens 8000 \ - --save-interval-updates 10000 \ - --max-update 300000 -``` - - -### Mask Predict (CMLM, Ghazvininejad et al., 2019) -```bash -fairseq-train \ - data-bin/wmt14_en_de_distill \ - --save-dir checkpoints \ - --ddp-backend=legacy_ddp \ - --task translation_lev \ - --criterion nat_loss \ - --arch cmlm_transformer \ - --noise random_mask \ - --share-all-embeddings \ - --optimizer adam --adam-betas '(0.9,0.98)' \ - --lr 0.0005 --lr-scheduler inverse_sqrt \ - --stop-min-lr '1e-09' --warmup-updates 10000 \ - --warmup-init-lr '1e-07' --label-smoothing 0.1 \ - --dropout 0.3 --weight-decay 0.01 \ - --decoder-learned-pos \ - --encoder-learned-pos \ - --apply-bert-init \ - --log-format 'simple' --log-interval 100 \ - --fixed-validation-seed 7 \ - --max-tokens 8000 \ - --save-interval-updates 10000 \ - --max-update 300000 -``` - - - - -### Levenshtein Transformer (LevT, Gu et al., 2019) -```bash -fairseq-train \ - data-bin/wmt14_en_de_distill \ - --save-dir checkpoints \ - --ddp-backend=legacy_ddp \ - --task translation_lev \ - --criterion nat_loss \ - --arch levenshtein_transformer \ - --noise random_delete \ - --share-all-embeddings \ - --optimizer adam --adam-betas '(0.9,0.98)' \ - --lr 0.0005 --lr-scheduler inverse_sqrt \ - --stop-min-lr '1e-09' --warmup-updates 10000 \ - --warmup-init-lr '1e-07' --label-smoothing 0.1 \ - --dropout 0.3 --weight-decay 0.01 \ - --decoder-learned-pos \ - --encoder-learned-pos \ - --apply-bert-init \ - --log-format 'simple' --log-interval 100 \ - --fixed-validation-seed 7 \ - --max-tokens 8000 \ - --save-interval-updates 10000 \ - --max-update 300000 -``` diff --git a/kosmos-g/fairseq/examples/normformer/README.md b/kosmos-g/fairseq/examples/normformer/README.md deleted file mode 100644 index 037b453ff..000000000 --- a/kosmos-g/fairseq/examples/normformer/README.md +++ /dev/null @@ -1,70 
+0,0 @@
-### NormFormer
-This is the code for the paper ["NormFormer: Improved Transformer Pretraining with Extra Normalization"](https://arxiv.org/abs/2110.09456).
-- 2021-10-19: Commands for CLM experiments
-- Coming soon: Commands for MLM experiments
-
-If you have any issues or questions, please post a GitHub issue and tag `@sshleifer`.
-
-
-### Data
-- To preprocess language modeling data, see [here](https://github.com/pytorch/fairseq/blob/d0fbcb0baef6f6ff3425ded62d8daea0e8b12114/examples/language_model/README.md#1-preprocess-the-data).
-- The replication commands below expect `$DATA` to be the path to the binarized data directory.
-- Note that the NormFormer results in Table 2 use a much larger private dataset; to get good results, adapt the pre-processing instructions to your dataset and compare against a baseline trained on the same data, rather than against Table 2.
-- The code uses `FSDP`, which requires `pip install fairscale>=0.4.0`.
-
-
-### Modify an Existing Command
-To modify an existing `fairseq-train` command to use NormFormer, simply add the following flags:
-```bash
-fairseq-train ... \
-    --scale-attn --scale-fc --scale-heads
-```
-- You probably also want to increase your learning rate.
-- If your model is small, you may also want to add `--scale-resids`.
-
-### Exact Training Commands
-
-- The full commands are functions defined in `train_lm.sh`, so to run them you must first `source examples/normformer/train_lm.sh`.
-- We default to `--distributed-world-size 8`. Adjust `--update-freq` and `--batch-size` such that the effective batch size is (1024x1024x0.5) tokens for the 125M and 355M models, and (1024x1024) tokens for the 1.3B-parameter models and above. For small models, `--update-freq` = 256/`global_bs`; for large models, `--update-freq` = 512/`global_bs`, where `global_bs` = `--batch-size` * `--distributed-world-size`. A worked example of this arithmetic is sketched after this list.
-- The small models will all train on as few as 8 GPUs.
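As a quick sanity check of the batch-size arithmetic above, here is a small Python sketch. It assumes `--tokens-per-sample 2048`, as in `train_lm.sh`; the token accounting is our reading of the instructions, not an official fairseq formula.

```python
def required_update_freq(target_tokens, tokens_per_sample, batch_size, world_size):
    """Compute --update-freq for a desired effective batch size in tokens."""
    global_bs = batch_size * world_size            # sequences per GPU step, summed over GPUs
    seqs_per_update = target_tokens // tokens_per_sample
    assert seqs_per_update % global_bs == 0, "choose a batch size that divides evenly"
    return seqs_per_update // global_bs

# 125M/355M target: 0.5 * 1024 * 1024 tokens -> 256 sequences -> update-freq 2
print(required_update_freq(int(0.5 * 1024 * 1024), 2048, batch_size=16, world_size=8))  # 2
# 1.3B and up: 1024 * 1024 tokens -> 512 sequences -> update-freq 16
print(required_update_freq(1024 * 1024, 2048, batch_size=4, world_size=8))              # 16
```

The two results match the `--batch-size`/`--update-freq` defaults in `train_lm.sh` below.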
- -```bash -train_125M --lr 6e-4 # GPT-3 Replicated -train_125M --lr 1e-3 # stronger high-lr baseline -train_125M --lr 3e-3 --scale-attn --scale-fc --scale-heads # No scale-resids -train_125M --lr 3e-3 --scale-attn --scale-fc --scale-heads --scale-resids # Best command -``` - -```bash -train_355M --lr 6e-4 # GPT-3 Replicated -train_355M --lr 1e-3 # stronger high-lr baseline -train_355M --lr 1e-3 --scale-attn --scale-fc --scale-heads # No scale-resids -train_355M --lr 1e-3 --scale-attn --scale-fc --scale-heads --scale-resids # Slightly better -``` - -```bash -train_1.3B --lr 2e-4 # GPT-3 Replicated -train_1.3B --lr 6e-4 # stronger high-lr baseline -train_1.3B --lr 6e-4 --scale-attn --scale-fc --scale-heads # NormFormer -``` - -```bash -train_2.7B --lr 1.6e-4 # GPT-3 Replicated -train_2.7B --lr 1.6e-4 --activation-fn relu_squared # stronger Relu^2 baseline -train_2.7B --lr 6e-4 --activation-fn relu_squared --scale-attn --scale-fc --scale-heads # NormFormer 2.7B -``` - - -### Citation -```bibtex -@misc{shleifer2021normformer, - title={NormFormer: Improved Transformer Pretraining with Extra Normalization}, - author={Sam Shleifer and Jason Weston and Myle Ott}, - year={2021}, - eprint={2110.09456}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -``` diff --git a/kosmos-g/fairseq/examples/normformer/train_lm.sh b/kosmos-g/fairseq/examples/normformer/train_lm.sh deleted file mode 100644 index b081f2ddd..000000000 --- a/kosmos-g/fairseq/examples/normformer/train_lm.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env bash -train_common () { - fairseq-train "$DATA" \ - --combine-val \ - --train-subset train \ - --num-workers 2 \ - --validate-interval-updates 1000 \ - --save-interval-updates 1000 \ - --no-epoch-checkpoints \ - --ddp-backend fully_sharded \ - --memory-efficient-fp16 \ - --fp16-init-scale 4 \ - --checkpoint-activations \ - --arch transformer_lm_gpt \ - --activation-fn gelu \ - --share-decoder-input-output-embed \ - --task language_modeling \ - --sample-break-mode none \ - --tokens-per-sample 2048 \ - --optimizer adam --adam-betas "(0.9, 0.98)" \ - --adam-eps 1e-08 \ - --clip-norm 0.0 \ - --lr-scheduler polynomial_decay \ - --warmup-updates 750 \ - --dropout 0.1 \ - --attention-dropout 0.1 \ - --weight-decay 0.01 \ - --batch-size 16 \ - --update-freq 2 \ - --required-batch-size-multiple 1 \ - --total-num-update 572204 \ - --max-update 572204 \ - --seed 1 \ - --log-format json --log-interval 1 \ - --distributed-world-size 8 --distributed-port 13177 \ - "$@" -} - -train_125M () { - train_common --decoder-layers 12 \ - --decoder-embed-dim 768 \ - --decoder-ffn-embed-dim 3072 \ - --decoder-attention-heads 12 "$@" -} - -train_355M () { - train_common --decoder-layers 24 \ - --decoder-embed-dim 1024\ - --decoder-ffn-embed-dim 4096 \ - --decoder-attention-heads 16 \ - --dropout 0.0 \ - --attention-dropout 0.0 \ - "$@" -} - -train_1.3B () { - train_common --decoder-layers 24 \ - --decoder-embed-dim 2048 \ - --decoder-ffn-embed-dim 8192 \ - --decoder-attention-heads 32 \ - --batch-size 4 \ - --update-freq 16 \ - --total-num-update 286102 \ - --max-update 286102 \ - "$@" -} - -train_2.7B () { - train_common --decoder-layers 32 \ - --decoder-embed-dim 2560 \ - --decoder-ffn-embed-dim 10240 \ - --decoder-attention-heads 32 \ - --batch-size 4 \ - --update-freq 16 \ - --total-num-update 286102 \ - --max-update 286102 \ - "$@" -} diff --git a/kosmos-g/fairseq/examples/operators/alignment_train_cpu.cpp b/kosmos-g/fairseq/examples/operators/alignment_train_cpu.cpp deleted file mode 100644 index 
13c015308..000000000
--- a/kosmos-g/fairseq/examples/operators/alignment_train_cpu.cpp
+++ /dev/null
@@ -1,166 +0,0 @@
-/**
- * Copyright 2017-present, Facebook, Inc.
- * All rights reserved.
- *
- * This source code is licensed under the license found in the
- * LICENSE file in the root directory of this source tree.
- */
-
-#include <torch/extension.h> // @manual=//caffe2:torch_extension
-#include <algorithm>
-
-namespace {
-
-template <typename T>
-void exclusiveCumprod(
-    const T* p_choose,
-    T* cumprod_1mp,
-    uint32_t bsz,
-    uint32_t tgt_len,
-    uint32_t src_len) {
-  // cumprod_1mp = 1 - p_choose
-  for (uint32_t b = 0; b < bsz; b++) {
-    for (uint32_t tgt = 0; tgt < tgt_len; tgt++) {
-      for (uint32_t src = 0; src < src_len; src++) {
-        uint32_t idx = b * tgt_len * src_len + tgt * src_len + src;
-        cumprod_1mp[idx] = 1 - p_choose[idx];
-      }
-    }
-  }
-
-  // Implementing exclusive cumprod in the innermost dimension
-  // cumprod_1mp = cumprod(1 - p_choose)
-  // There is cumprod in pytorch, however there is no exclusive mode.
-  // cumprod(x) = [x1, x1x2, x1x2x3, ..., prod_{i=1}^n x_i]
-  // exclusive means
-  // cumprod(x) = [1, x1, x1x2, x1x2x3, ..., prod_{i=1}^{n-1} x_i]
-  for (uint32_t b = 0; b < bsz; b++) {
-    for (uint32_t tgt = 0; tgt < tgt_len; tgt++) {
-      uint32_t idx_offset = b * tgt_len * src_len + tgt * src_len;
-      T prev = cumprod_1mp[idx_offset];
-      // index [b][tgt][0]
-      cumprod_1mp[idx_offset] = (T)1.0;
-      T curr;
-      for (uint32_t src = 1; src < src_len; src++) {
-        uint32_t idx = idx_offset + src;
-        curr = cumprod_1mp[idx];
-        cumprod_1mp[idx] = cumprod_1mp[idx - 1] * prev;
-        prev = curr;
-      }
-    }
-  }
-}
-
-template <typename T>
-void clamp(
-    const T* cumprod_1mp,
-    T* cumprod_1mp_clamp,
-    uint32_t bsz,
-    uint32_t tgt_len,
-    uint32_t src_len,
-    T min_val,
-    T max_val) {
-  for (uint32_t b = 0; b < bsz; b++) {
-    for (uint32_t tgt = 0; tgt < tgt_len; tgt++) {
-      for (uint32_t src = 0; src < src_len; src++) {
-        uint32_t idx = b * tgt_len * src_len + tgt * src_len + src;
-        if (cumprod_1mp[idx] < min_val) {
-          cumprod_1mp_clamp[idx] = min_val;
-        } else if (cumprod_1mp[idx] > max_val) {
-          cumprod_1mp_clamp[idx] = max_val;
-        } else {
-          cumprod_1mp_clamp[idx] = cumprod_1mp[idx];
-        }
-      }
-    }
-  }
-}
-
-template <typename T>
-void alignmentTrainCPUImpl(
-    const T* p_choose,
-    T* alpha,
-    uint32_t bsz,
-    uint32_t tgt_len,
-    uint32_t src_len,
-    float eps) {
-  // p_choose: bsz , tgt_len, src_len
-  // cumprod_1mp: bsz , tgt_len, src_len
-  // cumprod_1mp_clamp : bsz, tgt_len, src_len
-  // alpha: bsz + 1, tgt_len, src_len
-
-  uint32_t elements = bsz * tgt_len * src_len;
-  T* cumprod_1mp = new T[elements];
-  T* cumprod_1mp_clamp = new T[elements];
-
-  exclusiveCumprod<T>(p_choose, cumprod_1mp, bsz, tgt_len, src_len);
-  clamp<T>(
-      cumprod_1mp, cumprod_1mp_clamp, bsz, tgt_len, src_len, (T)eps, (T)1.0);
-
-  // ai = p_i * cumprod(1 − pi) * cumsum(a_i / cumprod(1 − pi))
-
-  // Initialize alpha [:, 0, 0]
-  for (uint32_t b = 0; b < bsz; b++) {
-    alpha[b * tgt_len * src_len] = 1.0;
-  }
-
-  for (uint32_t tgt = 0; tgt < tgt_len; tgt++) {
-    for (uint32_t b = 0; b < bsz; b++) {
-      uint32_t alpha_idx, inout_idx;
-      T prev_scan = 0, curr_scan, out;
-      for (uint32_t src = 0; src < src_len; src++) {
-        // Apply scan/cumsum
-        if (tgt == 0) {
-          // alpha index is [b][tgt][src]
-          alpha_idx = b * tgt_len * src_len + src;
-        } else {
-          // alpha index is [b][tgt-1][src]
-          alpha_idx = b * tgt_len * src_len + (tgt - 1) * src_len + src;
-        }
-        // input index is [b][tgt][src]
-        inout_idx = b * tgt_len * src_len + tgt * src_len + src;
-        curr_scan = prev_scan + alpha[alpha_idx] / cumprod_1mp_clamp[inout_idx];
-
-        out = curr_scan * p_choose[inout_idx] * cumprod_1mp[inout_idx];
-        alpha[inout_idx] = std::min<T>(std::max<T>(out, 0), 1.0);
-        prev_scan = curr_scan;
-      }
-    }
-  }
-
-  delete[] cumprod_1mp;
-  delete[] cumprod_1mp_clamp;
-}
-
-void alignmentTrainCPU(
-    const torch::Tensor& p_choose,
-    torch::Tensor& alpha,
-    float eps) {
-  uint32_t bsz = p_choose.size(0);
-  uint32_t tgt_len = p_choose.size(1);
-  uint32_t src_len = p_choose.size(2);
-
-  AT_DISPATCH_FLOATING_TYPES_AND2(
-      torch::ScalarType::Half,
-      torch::ScalarType::BFloat16,
-      p_choose.scalar_type(),
-      "alignmentCPUImpl",
-      [&]() {
-        alignmentTrainCPUImpl<scalar_t>(
-            p_choose.data_ptr<scalar_t>(),
-            alpha.data_ptr<scalar_t>(),
-            bsz,
-            tgt_len,
-            src_len,
-            eps);
-      });
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
-  m.def(
-      "alignment_train_cpu",
-      &alignmentTrainCPU,
-      "expected_alignment_from_p_choose (CPU)");
-}
-
-} // namespace
diff --git a/kosmos-g/fairseq/examples/operators/alignment_train_cuda.cpp b/kosmos-g/fairseq/examples/operators/alignment_train_cuda.cpp
deleted file mode 100644
index 430e04813..000000000
--- a/kosmos-g/fairseq/examples/operators/alignment_train_cuda.cpp
+++ /dev/null
@@ -1,31 +0,0 @@
-/**
- * Copyright 2017-present, Facebook, Inc.
- * All rights reserved.
- *
- * This source code is licensed under the license found in the
- * LICENSE file in the root directory of this source tree.
- */
-
-#include "alignment_train_cuda.h"
-#include "utils.h"
-
-namespace {
-
-void alignmentTrainCUDA(
-    const torch::Tensor& p_choose,
-    torch::Tensor& alpha,
-    float eps) {
-  CHECK_INPUT(p_choose);
-  CHECK_INPUT(alpha);
-
-  alignmentTrainCUDAWrapper(p_choose, alpha, eps);
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
-  m.def(
-      "alignment_train_cuda",
-      &alignmentTrainCUDA,
-      "expected_alignment_from_p_choose (CUDA)");
-}
-
-} // namespace
diff --git a/kosmos-g/fairseq/examples/operators/alignment_train_cuda.h b/kosmos-g/fairseq/examples/operators/alignment_train_cuda.h
deleted file mode 100644
index 8289d1a69..000000000
--- a/kosmos-g/fairseq/examples/operators/alignment_train_cuda.h
+++ /dev/null
@@ -1,16 +0,0 @@
-/**
- * Copyright 2017-present, Facebook, Inc.
- * All rights reserved.
- *
- * This source code is licensed under the license found in the
- * LICENSE file in the root directory of this source tree.
- */
-
-#pragma once
-
-#include <torch/extension.h> // @manual=//caffe2:torch_extension
-
-void alignmentTrainCUDAWrapper(
-    const torch::Tensor& p_choose,
-    torch::Tensor& alpha,
-    float eps);
diff --git a/kosmos-g/fairseq/examples/operators/alignment_train_kernel.cu b/kosmos-g/fairseq/examples/operators/alignment_train_kernel.cu
deleted file mode 100644
index efae7cc76..000000000
--- a/kosmos-g/fairseq/examples/operators/alignment_train_kernel.cu
+++ /dev/null
@@ -1,354 +0,0 @@
-/**
- * Copyright 2017-present, Facebook, Inc.
- * All rights reserved.
- *
- * This source code is licensed under the license found in the
- * LICENSE file in the root directory of this source tree.
- */ - -#include <ATen/ATen.h> -#include <ATen/cuda/CUDAContext.h> // @manual=//caffe2/aten:ATen-cu -#include <cuda_runtime.h> -#include <algorithm> // std::min/max -#include <cub/cub.cuh> - -#include "alignment_train_cuda.h" -#include "utils.h" - -namespace { - -// The thread block length in threads along the X dimension -constexpr int BLOCK_DIM_X = 128; -// The thread block length in threads along the Y dimension -constexpr int BLOCK_DIM_Y = 8; -// The thread block length in threads for scan operation -constexpr int SCAN_BLOCK = 512; - -#define gpuErrchk(ans) \ - { gpuAssert((ans), __FILE__, __LINE__); } - -inline void -gpuAssert(cudaError_t code, const char* file, int line, bool abort = true) { - if (code != cudaSuccess) { - fprintf( - stderr, - "\nGPUassert: %s %s %d\n", - cudaGetErrorString(code), - file, - line); - if (abort) - exit(code); - } -} - -template <typename T> -struct Prod { - /// prod operator, returns <tt>a * b</tt> - __host__ __device__ __forceinline__ T - operator()(const T& a, const T& b) const { - return a * b; - } -}; - -template <typename T> -struct BlockPrefixProdCallbackOp { - // Running prefix - T running_total; - - // Constructor - __device__ BlockPrefixProdCallbackOp(T running_total) - : running_total(running_total) {} - - // Callback operator to be entered by the first warp of threads in the block. - // Thread-0 is responsible for returning a value for seeding the block-wide - // scan. - __device__ T operator()(const T block_aggregate) { - T old_prefix = running_total; - running_total *= block_aggregate; - return old_prefix; - } -}; - -template <typename T> -struct BlockPrefixSumCallbackOp { - // Running prefix - T running_total; - - // Constructor - __device__ BlockPrefixSumCallbackOp(T running_total) - : running_total(running_total) {} - - // Callback operator to be entered by the first warp of threads in the block. - // Thread-0 is responsible for returning a value for seeding the block-wide - // scan. - __device__ T operator()(const T block_aggregate) { - T old_prefix = running_total; - running_total += block_aggregate; - return old_prefix; - } -}; - -template <typename T> -__global__ void oneMinusPKernel( - const T* __restrict__ p_choose, - T* __restrict__ cumprod_1mp, - uint32_t bsz, - uint32_t tgt_len, - uint32_t src_len) { - for (uint32_t b = blockIdx.x; b < bsz; b += gridDim.x) { - for (uint32_t tgt = threadIdx.y; tgt < tgt_len; tgt += blockDim.y) { - for (uint32_t src = threadIdx.x; src < src_len; src += blockDim.x) { - uint32_t idx = b * tgt_len * src_len + tgt * src_len + src; - cumprod_1mp[idx] = 1 - p_choose[idx]; - } - } - } -} - -template <typename T, int TPB> -__global__ void innermostScanKernel( - T* __restrict__ cumprod_1mp, - uint32_t bsz, - uint32_t tgt_len, - uint32_t src_len) { - for (uint32_t b = blockIdx.y; b < bsz; b += gridDim.y) { - for (uint32_t tgt = blockIdx.x; tgt < tgt_len; tgt += gridDim.x) { - // Specialize BlockScan for a 1D block of TPB threads on type T - typedef cub::BlockScan<T, TPB> BlockScan; - // Allocate shared memory for BlockScan - __shared__ typename BlockScan::TempStorage temp_storage; - // Initialize running total - BlockPrefixProdCallbackOp<T> prefix_op(1); - - const uint32_t tid = threadIdx.x; - for (uint32_t block_src = 0; block_src < src_len; - block_src += blockDim.x) { - uint32_t src = block_src + tid; - uint32_t idx = b * tgt_len * src_len + tgt * src_len + src; - T thread_data = (src < src_len) ? 
cumprod_1mp[idx] : (T)0; - - // Collectively compute the block-wide inclusive prefix sum - BlockScan(temp_storage) - .ExclusiveScan(thread_data, thread_data, Prod<T>(), prefix_op); - __syncthreads(); - - // write the scanned value to output - if (src < src_len) { - cumprod_1mp[idx] = thread_data; - } - } - } - } -} - -template <typename T> -__global__ void clampKernel( - const T* __restrict__ cumprod_1mp, - T* __restrict__ cumprod_1mp_clamp, - uint32_t bsz, - uint32_t tgt_len, - uint32_t src_len, - T min_val, - T max_val) { - for (uint32_t b = blockIdx.x; b < bsz; b += gridDim.x) { - for (uint32_t tgt = threadIdx.y; tgt < tgt_len; tgt += blockDim.y) { - for (uint32_t src = threadIdx.x; src < src_len; src += blockDim.x) { - uint32_t idx = b * tgt_len * src_len + tgt * src_len + src; - if (cumprod_1mp[idx] < min_val) { - cumprod_1mp_clamp[idx] = min_val; - } else if (cumprod_1mp[idx] > max_val) { - cumprod_1mp_clamp[idx] = max_val; - } else { - cumprod_1mp_clamp[idx] = cumprod_1mp[idx]; - } - } - } - } -} - -template <typename T> -__global__ void initAlphaCUDAKernel( - T* alpha, - uint32_t bsz, - uint32_t tgt_len, - uint32_t src_len) { - // alpha[:, 0, 0] = 1.0 - for (uint32_t b = blockIdx.x; b < bsz; b += gridDim.x) { - alpha[b * tgt_len * src_len] = (T)1.0; - } -} - -template <typename T, int TPB> -__global__ void alignmentTrainCUDAKernel( - const T* __restrict__ p_choose, - const T* __restrict__ cumprod_1mp, - const T* __restrict__ cumprod_1mp_clamp, - T* __restrict__ alpha, - uint32_t bsz, - uint32_t tgt_len, - uint32_t src_len, - uint32_t tgt) { - for (uint32_t b = blockIdx.x; b < bsz; b += gridDim.x) { - // Specialize BlockScan for a 1D block of TPB threads on type T - typedef cub::BlockScan<T, TPB> BlockScan; - - // Allocate shared memory for BlockScan - __shared__ typename BlockScan::TempStorage temp_storage; - // Initialize running total - BlockPrefixSumCallbackOp<T> prefix_op(0); - - uint32_t b_offset = b * tgt_len * src_len; - const uint32_t tid = threadIdx.x; - for (uint32_t block_src = 0; block_src < src_len; block_src += blockDim.x) { - uint32_t src = block_src + tid; - // Obtain a segment of consecutive items that are blocked across threads - uint32_t inout_idx, alpha_idx; - if (tgt == 0) { - // both alpha and other input index is [b][0][src] - alpha_idx = b_offset + src; - } else { - // alpha index is [b][tgt-1][src] - alpha_idx = b_offset + (tgt - 1) * src_len + src; - } - inout_idx = b_offset + tgt * src_len + src; - T thread_data = (T)0; - if (src < src_len) { - thread_data = alpha[alpha_idx] / cumprod_1mp_clamp[inout_idx]; - } - - // Collectively compute the block-wide inclusive prefix sum - BlockScan(temp_storage).InclusiveSum(thread_data, thread_data, prefix_op); - __syncthreads(); - - if (src < src_len) { - T out = thread_data * p_choose[inout_idx] * cumprod_1mp[inout_idx]; - // Clamps all elements into the range [ 0, 1.0 ] - alpha[inout_idx] = std::min<T>(std::max<T>(out, 0), (T)1.0); - } - } - } -} - -template <typename T> -void exclusiveCumprod( - const T* p_choose, - T* cumprod_1mp, - uint32_t bsz, - uint32_t tgt_len, - uint32_t src_len, - uint32_t max_grid_x, - uint32_t max_grid_y, - cudaStream_t& stream) { - // cumprod_1mp = 1 - p_choose - dim3 grid(std::min<T>(max_grid_x, bsz), 1, 1); - dim3 block(BLOCK_DIM_X, BLOCK_DIM_Y, 1); - oneMinusPKernel<T><<<grid, block, 0, stream>>>( - p_choose, cumprod_1mp, bsz, tgt_len, src_len); - gpuErrchk(cudaGetLastError()); - - // scan on the innermost dimension of cumprod_1mp - // cumprod_1mp = cumprod(cumprod_1mp) - dim3 
grid_scan( - std::min<T>(max_grid_x, tgt_len), std::min<T>(max_grid_y, bsz), 1); - innermostScanKernel<T, SCAN_BLOCK><<<grid_scan, SCAN_BLOCK, 0, stream>>>( - cumprod_1mp, bsz, tgt_len, src_len); - gpuErrchk(cudaGetLastError()); -} - -template <typename T> -void alignmentTrainCUDAImpl( - const T* p_choose, - T* alpha, - uint32_t bsz, - uint32_t tgt_len, - uint32_t src_len, - float eps) { - // p_choose: bsz , tgt_len, src_len - // cumprod_1mp: bsz , tgt_len, src_len - // cumprod_1mp_clamp : bsz, tgt_len, src_len - // alpha: bsz, tgt_len, src_len - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - uint32_t max_grid_x = at::cuda::getCurrentDeviceProperties()->maxGridSize[0]; - uint32_t max_grid_y = at::cuda::getCurrentDeviceProperties()->maxGridSize[1]; - - // Implementing exclusive cumprod. - // cumprod_1mp = cumprod(1 - p_choose) - // There is cumprod in pytorch, however there is no exclusive mode. - // cumprod(x) = [x1, x1x2, x2x3x4, ..., prod_{i=1}^n x_i] - // exclusive means - // cumprod(x) = [1, x1, x1x2, x1x2x3, ..., prod_{i=1}^{n-1} x_i] - uint32_t elements = bsz * tgt_len * src_len; - T* cumprod_1mp; - gpuErrchk(cudaMalloc(&cumprod_1mp, elements * sizeof(T))); - exclusiveCumprod<T>( - p_choose, - cumprod_1mp, - bsz, - tgt_len, - src_len, - max_grid_x, - max_grid_y, - stream); - - // clamp cumprod_1mp to the range [eps, 1.0] - T* cumprod_1mp_clamp; - gpuErrchk(cudaMalloc(&cumprod_1mp_clamp, elements * sizeof(T))); - dim3 grid_clamp(std::min<T>(max_grid_x, bsz), 1, 1); - dim3 block_clamp(BLOCK_DIM_X, BLOCK_DIM_Y, 1); - clampKernel<T><<<grid_clamp, block_clamp, 0, stream>>>( - cumprod_1mp, cumprod_1mp_clamp, bsz, tgt_len, src_len, (T)eps, (T)1.0); - gpuErrchk(cudaGetLastError()); - - // ai = p_i * cumprod(1 − pi) * cumsum(a_i / cumprod(1 − pi)) - dim3 grid_init(std::min<int>(max_grid_x, bsz), 1, 1); - initAlphaCUDAKernel<T> - <<<grid_init, 1, 0, stream>>>(alpha, bsz, tgt_len, src_len); - gpuErrchk(cudaGetLastError()); - - const int grid = std::min(bsz, max_grid_x); - - for (uint32_t i = 0; i < tgt_len; i++) { - alignmentTrainCUDAKernel<T, SCAN_BLOCK><<<grid, SCAN_BLOCK, 0, stream>>>( - p_choose, - cumprod_1mp, - cumprod_1mp_clamp, - alpha, - bsz, - tgt_len, - src_len, - i); - gpuErrchk(cudaGetLastError()); - } - - gpuErrchk(cudaFree(cumprod_1mp)); - gpuErrchk(cudaFree(cumprod_1mp_clamp)); -} - -} // namespace - -void alignmentTrainCUDAWrapper( - const torch::Tensor& p_choose, - torch::Tensor& alpha, - float eps) { - // p_choose dimension: bsz, tgt_len, src_len - uint32_t bsz = p_choose.size(0); - uint32_t tgt_len = p_choose.size(1); - uint32_t src_len = p_choose.size(2); - - cudaSetDevice(p_choose.get_device()); - - AT_DISPATCH_FLOATING_TYPES_AND2( - torch::ScalarType::Half, - torch::ScalarType::BFloat16, - p_choose.scalar_type(), - "alignmentTrainCUDAImpl", - [&]() { - alignmentTrainCUDAImpl<scalar_t>( - p_choose.data_ptr<scalar_t>(), - alpha.data_ptr<scalar_t>(), - bsz, - tgt_len, - src_len, - eps); - }); -} diff --git a/kosmos-g/fairseq/examples/operators/utils.h b/kosmos-g/fairseq/examples/operators/utils.h deleted file mode 100644 index 0ef5b4383..000000000 --- a/kosmos-g/fairseq/examples/operators/utils.h +++ /dev/null @@ -1,19 +0,0 @@ -/** - * Copyright 2017-present, Facebook, Inc. - * All rights reserved. - * - * This source code is licensed under the license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -#pragma once - -#include <torch/extension.h> // @manual=//caffe2:torch_extension - -#define CHECK_CUDA(x) \ - TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) \ - TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) \ - CHECK_CUDA(x); \ - CHECK_CONTIGUOUS(x) diff --git a/kosmos-g/fairseq/examples/paraphraser/README.md b/kosmos-g/fairseq/examples/paraphraser/README.md deleted file mode 100644 index 3810311f3..000000000 --- a/kosmos-g/fairseq/examples/paraphraser/README.md +++ /dev/null @@ -1,46 +0,0 @@ -# Paraphrasing with round-trip translation and mixture of experts - -Machine translation models can be used to paraphrase text by translating it to -an intermediate language and back (round-trip translation). - -This example shows how to paraphrase text by first passing it to an -English-French translation model, followed by a French-English [mixture of -experts translation model](/examples/translation_moe). - -##### 0. Setup - -Clone fairseq from source and install necessary dependencies: -```bash -git clone https://github.com/pytorch/fairseq.git -cd fairseq -pip install --editable . -pip install sacremoses sentencepiece -``` - -##### 1. Download models -```bash -wget https://dl.fbaipublicfiles.com/fairseq/models/paraphraser.en-fr.tar.gz -wget https://dl.fbaipublicfiles.com/fairseq/models/paraphraser.fr-en.hMoEup.tar.gz -tar -xzvf paraphraser.en-fr.tar.gz -tar -xzvf paraphraser.fr-en.hMoEup.tar.gz -``` - -##### 2. Paraphrase -```bash -python examples/paraphraser/paraphrase.py \ - --en2fr paraphraser.en-fr \ - --fr2en paraphraser.fr-en.hMoEup -# Example input: -# The new date for the Games, postponed for a year in response to the coronavirus pandemic, gives athletes time to recalibrate their training schedules. -# Example outputs: -# Delayed one year in response to the coronavirus pandemic, the new date of the Games gives athletes time to rebalance their training schedule. -# The new date of the Games, which was rescheduled one year in response to the coronavirus (CV) pandemic, gives athletes time to rebalance their training schedule. -# The new date of the Games, postponed one year in response to the coronavirus pandemic, provides athletes with time to rebalance their training schedule. -# The Games' new date, postponed one year in response to the coronavirus pandemic, gives athletes time to rebalance their training schedule. -# The new Games date, postponed one year in response to the coronavirus pandemic, gives the athletes time to rebalance their training schedule. -# The new date of the Games, which was postponed one year in response to the coronavirus pandemic, gives the athletes time to rebalance their training schedule. -# The new date of the Games, postponed one year in response to the coronavirus pandemic, gives athletes time to rebalance their training schedule. -# The new date of the Games, postponed one year in response to the coronavirus pandemic, gives athletes time to re-balance their training schedule. -# The new date of the Games, postponed one year in response to the coronavirus pandemic, gives the athletes time to rebalance their schedule of training. -# The new date of the Games, postponed one year in response to the pandemic of coronavirus, gives the athletes time to rebalance their training schedule. 
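-#
-# The script reads stdin by default ("-" is the default for the FILES
-# argument), so a whole file can also be paraphrased non-interactively
-# (input.txt here is a hypothetical file with one sentence per line):
-#   python examples/paraphraser/paraphrase.py \
-#       --en2fr paraphraser.en-fr \
-#       --fr2en paraphraser.fr-en.hMoEup \
-#       input.txt > paraphrases.txt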
-``` diff --git a/kosmos-g/fairseq/examples/paraphraser/paraphrase.py b/kosmos-g/fairseq/examples/paraphraser/paraphrase.py deleted file mode 100644 index d3422fb3d..000000000 --- a/kosmos-g/fairseq/examples/paraphraser/paraphrase.py +++ /dev/null @@ -1,85 +0,0 @@ -#!/usr/bin/env python3 -u - -import argparse -import fileinput -import logging -import os -import sys - -from fairseq.models.transformer import TransformerModel - - -logging.getLogger().setLevel(logging.INFO) - - -def main(): - parser = argparse.ArgumentParser(description="") - parser.add_argument("--en2fr", required=True, help="path to en2fr model") - parser.add_argument( - "--fr2en", required=True, help="path to fr2en mixture of experts model" - ) - parser.add_argument( - "--user-dir", help="path to fairseq examples/translation_moe/src directory" - ) - parser.add_argument( - "--num-experts", - type=int, - default=10, - help="(keep at 10 unless using a different model)", - ) - parser.add_argument( - "files", - nargs="*", - default=["-"], - help='input files to paraphrase; "-" for stdin', - ) - args = parser.parse_args() - - if args.user_dir is None: - args.user_dir = os.path.join( - os.path.dirname(os.path.dirname(os.path.abspath(__file__))), # examples/ - "translation_moe", - "src", - ) - if os.path.exists(args.user_dir): - logging.info("found user_dir:" + args.user_dir) - else: - raise RuntimeError( - "cannot find fairseq examples/translation_moe/src " - "(tried looking here: {})".format(args.user_dir) - ) - - logging.info("loading en2fr model from:" + args.en2fr) - en2fr = TransformerModel.from_pretrained( - model_name_or_path=args.en2fr, - tokenizer="moses", - bpe="sentencepiece", - ).eval() - - logging.info("loading fr2en model from:" + args.fr2en) - fr2en = TransformerModel.from_pretrained( - model_name_or_path=args.fr2en, - tokenizer="moses", - bpe="sentencepiece", - user_dir=args.user_dir, - task="translation_moe", - ).eval() - - def gen_paraphrases(en): - fr = en2fr.translate(en) - return [ - fr2en.translate(fr, inference_step_args={"expert": i}) - for i in range(args.num_experts) - ] - - logging.info("Type the input sentence and press return:") - for line in fileinput.input(args.files): - line = line.strip() - if len(line) == 0: - continue - for paraphrase in gen_paraphrases(line): - print(paraphrase) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/pay_less_attention_paper/README.md b/kosmos-g/fairseq/examples/pay_less_attention_paper/README.md deleted file mode 100644 index 5adab11f4..000000000 --- a/kosmos-g/fairseq/examples/pay_less_attention_paper/README.md +++ /dev/null @@ -1,176 +0,0 @@ -# Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019) - -This page contains pointers to pre-trained models as well as instructions on how to train new models for [our paper](https://arxiv.org/abs/1901.10430). - -## Citation: -```bibtex -@inproceedings{wu2018pay, - title = {Pay Less Attention with Lightweight and Dynamic Convolutions}, - author = {Felix Wu and Angela Fan and Alexei Baevski and Yann Dauphin and Michael Auli}, - booktitle = {International Conference on Learning Representations}, - year = {2019}, - url = {https://arxiv.org/abs/1901.10430}, -} -``` - -## Translation - -### Pre-trained models -For some datasets we release models without GLUs which are faster at inference. 
- -Model | Description | Dataset | Download ----|---|---|--- -`lightconv.no_glu.iwslt14.de-en` | LightConv (without GLUs) | [IWSLT14 German-English](https://wit3.fbk.eu/archive/2014-01/texts/de/en/de-en.tgz) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.lightconv.tar.gz) <br> IWSLT14 test: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/iwslt14.de-en.test.tar.bz2) -`dynamicconv.no_glu.iwslt14.de-en` | DynamicConv (without GLUs) | [IWSLT14 German-English](https://wit3.fbk.eu/archive/2014-01/texts/de/en/de-en.tgz) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.dynamicconv.tar.gz) <br> IWSLT14 test: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/iwslt14.de-en.test.tar.bz2) -`lightconv.no_glu.wmt16.en-de` | LightConv (without GLUs) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv.tar.gz) <br> newstest2014 (shared vocab): <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2) -`dynamicconv.no_glu.wmt16.en-de` | DynamicConv (without GLUs) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv.tar.gz) <br> newstest2014 (shared vocab): <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2) -`lightconv.glu.wmt16.en-de` | LightConv | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz) <br> newstest2014 (shared vocab): <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2) -`dynamicconv.glu.wmt16.en-de` | DynamicConv | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz) <br> newstest2014 (shared vocab): <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2) -`lightconv.glu.wmt14.en-fr` | LightConv | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.lightconv-glu.tar.gz) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2) -`dynamicconv.glu.wmt14.en-fr` | DynamicConv | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.dynamicconv-glu.tar.gz) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2) -`lightconv.glu.wmt17.zh-en` | LightConv | [WMT17 Chinese-English](http://statmt.org/wmt17/translation-task.html#Download) | model: 
<br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.lightconv-glu.tar.gz) <br> newstest2017: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.zh-en.newstest2017.tar.bz2)
-`dynamicconv.glu.wmt17.zh-en` | DynamicConv | [WMT17 Chinese-English](http://statmt.org/wmt17/translation-task.html#Download) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.dynamicconv-glu.tar.gz) <br> newstest2017: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.zh-en.newstest2017.tar.bz2)
-
-### Memory-Efficient CUDA Kernels
-
-Since the PyTorch implementations of Light/Dynamic conv are quite memory intensive, we have developed CUDA kernels that implement the light and dynamic convolution operator in a memory-efficient and performant manner. For large sequence lengths, these kernels save about 50% memory compared to the PyTorch equivalent.
-
-To install the kernels, use the commands below. Once installed, they will automatically be used in place of the PyTorch implementations whenever a light or dynamic convolution is used.
-
-```sh
-# to install lightconv
-cd fairseq/modules/lightconv_layer
-python cuda_function_gen.py
-python setup.py install
-
-# to install dynamicconv
-cd fairseq/modules/dynamicconv_layer
-python cuda_function_gen.py
-python setup.py install
-```
-
-### Example usage (torch.hub)
-
-We require a few additional Python dependencies for preprocessing:
-```bash
-pip install sacremoses subword_nmt
-```
-
-Interactive translation via PyTorch Hub:
-```python
-import torch
-import fairseq.models.lightconv  # needed for the isinstance check below
-
-# List available models
-torch.hub.list('pytorch/fairseq')  # [..., 'lightconv.glu.wmt17.zh-en', ... ]
-
-# Load a LightConv model trained on WMT'17 Zh-En
-zh2en = torch.hub.load('pytorch/fairseq', 'lightconv.glu.wmt17.zh-en', tokenizer='moses', bpe='subword_nmt')
-
-# The underlying model is available under the *models* attribute
-assert isinstance(zh2en.models[0], fairseq.models.lightconv.LightConvModel)
-
-# Translate a sentence
-zh2en.translate('你好 世界')
-# 'Hello World'
-```
-
-Loading custom models:
-```python
-from fairseq.models.lightconv import LightConvModel
-en2fr = LightConvModel.from_pretrained(
-    '/path/to/checkpoints',
-    checkpoint_file='checkpoint_best.pt',
-    data_name_or_path='data-bin/wmt14_en_fr',
-    bpe='subword_nmt',
-    bpe_codes='data-bin/wmt14_en_fr/en.code'
-)
-en2fr.translate('Hello world!')
-# 'Bonjour le monde'
-```
-
-### Preprocessing the training datasets
-
-Please follow the instructions in [`examples/translation/README.md`](../translation/README.md) to preprocess the data.
-
-### Training and evaluation options
-To use the model without GLU, please set `--encoder-glu 0 --decoder-glu 0`.
-For LightConv, please use `--encoder-conv-type lightweight --decoder-conv-type lightweight`, otherwise the default is DynamicConv.
-For best BLEU results, the length penalty (`--lenpen`) may need to be tuned manually.
-
-To use the CUDA kernels, first install the PyTorch modules using the commands
-above. Once the CUDA modules are installed, they will automatically be used
-instead of the PyTorch modules.
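-
-For intuition: the core operator is a depthwise 1-D convolution whose kernels are softmax-normalized and shared across groups of channels. The snippet below is a minimal PyTorch sketch of that idea (`lightweight_conv` is an illustrative helper, not the `fairseq.modules` implementation, which additionally supports weight dropout and the GLU variants used above):
-
-```python
-import torch
-import torch.nn.functional as F
-
-def lightweight_conv(x, weight, padding=0):
-    """x: (B, C, T); weight: (H, 1, K). Each of the H heads has one kernel
-    that is shared by the C // H channels in that head (C % H must be 0)."""
-    B, C, T = x.size()
-    H = weight.size(0)
-    w = F.softmax(weight, dim=-1)       # normalize each kernel over its width
-    x = x.contiguous().view(-1, H, T)   # fold channel groups into the batch
-    out = F.conv1d(x, w, padding=padding, groups=H)  # depthwise over heads
-    return out.view(B, C, -1)
-
-y = lightweight_conv(torch.randn(2, 8, 16), torch.randn(4, 1, 3), padding=1)
-print(y.shape)  # torch.Size([2, 8, 16])
-```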
- -### IWSLT14 De-En -Training and evaluating DynamicConv (without GLU) on a GPU: -```sh -# Training -SAVE="save/dynamic_conv_iwslt" -mkdir -p $SAVE -CUDA_VISIBLE_DEVICES=0 $(which fairseq-train) data-bin/iwslt14.tokenized.de-en \ - --clip-norm 0 --optimizer adam --lr 0.0005 \ - --source-lang de --target-lang en --max-tokens 4000 --no-progress-bar \ - --log-interval 100 --stop-min-lr '1e-09' --weight-decay 0.0001 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --lr-scheduler inverse_sqrt \ - --ddp-backend=legacy_ddp \ - --max-update 50000 --warmup-updates 4000 --warmup-init-lr '1e-07' \ - --adam-betas '(0.9, 0.98)' --keep-last-epochs 10 \ - -a lightconv_iwslt_de_en --save-dir $SAVE \ - --dropout 0.3 --attention-dropout 0.1 --weight-dropout 0.1 \ - --encoder-glu 0 --decoder-glu 0 -python scripts/average_checkpoints.py --inputs $SAVE \ - --num-epoch-checkpoints 10 --output "${SAVE}/checkpoint_last10_avg.pt" - -# Evaluation -CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/iwslt14.tokenized.de-en --path "${SAVE}/checkpoint_last10_avg.pt" --batch-size 128 --beam 4 --remove-bpe --lenpen 1 --gen-subset test --quiet -``` - -### WMT16 En-De -Training and evaluating DynamicConv (with GLU) on WMT16 En-De using cosine scheduler on one machine with 8 V100 GPUs: -```sh -# Training -SAVE="save/dynamic_conv_wmt16en2de" -mkdir -p $SAVE -python -m torch.distributed.launch --nproc_per_node 8 $(which fairseq-train) \ - data-bin/wmt16_en_de_bpe32k --fp16 --log-interval 100 --no-progress-bar \ - --max-update 30000 --share-all-embeddings --optimizer adam \ - --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --weight-decay 0.0 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --stop-min-lr 1e-09 --update-freq 16 --attention-dropout 0.1 --keep-last-epochs 10 \ - --ddp-backend=legacy_ddp --max-tokens 3584 \ - --lr-scheduler cosine --warmup-init-lr 1e-7 --warmup-updates 10000 \ - --lr-shrink 1 --lr 0.001 --min-lr 1e-7 --warmup-init-lr 1e-07 \ - --t-mult 1 --lr-period-updates 20000 \ - --arch lightconv_wmt_en_de_big --save-dir $SAVE \ - --dropout 0.3 --attention-dropout 0.1 --weight-dropout 0.1 \ - --encoder-glu 1 --decoder-glu 1 - -# Evaluation -CUDA_VISIBLE_DEVICES=0 fairseq-generate data-bin/wmt16.en-de.joined-dict.newstest2014 --path "${SAVE}/checkpoint_best.pt" --batch-size 128 --beam 5 --remove-bpe --lenpen 0.5 --gen-subset test > wmt16_gen.txt -bash scripts/compound_split_bleu.sh wmt16_gen.txt -``` - -### WMT14 En-Fr -Training DynamicConv (with GLU) on WMT14 En-Fr using cosine scheduler on one machine with 8 V100 GPUs: -```sh -# Training -SAVE="save/dynamic_conv_wmt14en2fr" -mkdir -p $SAVE -python -m torch.distributed.launch --nproc_per_node 8 $(which fairseq-train) \ - data-bin/wmt14_en_fr --fp16 --log-interval 100 --no-progress-bar \ - --max-update 30000 --share-all-embeddings --optimizer adam \ - --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --weight-decay 0.0 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --stop-min-lr 1e-09 --update-freq 16 --attention-dropout 0.1 --keep-last-epochs 10 \ - --ddp-backend=legacy_ddp --max-tokens 3584 \ - --lr-scheduler cosine --warmup-init-lr 1e-7 --warmup-updates 10000 \ - --lr-shrink 1 --lr 0.001 --min-lr 1e-7 --warmup-init-lr 1e-07 \ - --t-mult 1 --lr-period-updates 70000 \ - --arch lightconv_wmt_en_fr_big --save-dir $SAVE \ - --dropout 0.1 --attention-dropout 0.1 --weight-dropout 0.1 \ - --encoder-glu 1 --decoder-glu 1 - -# Evaluation -CUDA_VISIBLE_DEVICES=0 fairseq-generate 
data-bin/wmt14.en-fr.joined-dict.newstest2014 --path "${SAVE}/checkpoint_best.pt" --batch-size 128 --beam 5 --remove-bpe --lenpen 0.9 --gen-subset test -``` diff --git a/kosmos-g/fairseq/examples/pointer_generator/README.md b/kosmos-g/fairseq/examples/pointer_generator/README.md deleted file mode 100644 index 609657082..000000000 --- a/kosmos-g/fairseq/examples/pointer_generator/README.md +++ /dev/null @@ -1,82 +0,0 @@ -# Transformer with Pointer-Generator Network - -This page describes the `transformer_pointer_generator` model that incorporates -a pointing mechanism in the Transformer model that facilitates copying of input -words to the output. This architecture is described in [Enarvi et al. (2020)](https://www.aclweb.org/anthology/2020.nlpmc-1.4/). - -## Background - -The pointer-generator network was introduced in [See et al. (2017)](https://arxiv.org/abs/1704.04368) -for RNN encoder-decoder attention models. A similar mechanism can be -incorporated in a Transformer model by reusing one of the many attention -distributions for pointing. The attention distribution over the input words is -interpolated with the normal output distribution over the vocabulary words. This -allows the model to generate words that appear in the input, even if they don't -appear in the vocabulary, helping especially with small vocabularies. - -## Implementation - -The mechanism for copying out-of-vocabulary words from the input has been -implemented differently to See et al. In their [implementation](https://github.com/abisee/pointer-generator) -they convey the word identities through the model in order to be able to produce -words that appear in the input sequence but not in the vocabulary. A different -approach was taken in the Fairseq implementation to keep it self-contained in -the model file, avoiding any changes to the rest of the code base. Copying -out-of-vocabulary words is possible by pre-processing the input and -post-processing the output. This is described in detail in the next section. - -## Usage - -The training and evaluation procedure is outlined below. You can also find a -more detailed example for the XSum dataset on [this page](README.xsum.md). - -##### 1. Create a vocabulary and extend it with source position markers - -The pointing mechanism is especially helpful with small vocabularies, if we are -able to recover the identities of any out-of-vocabulary words that are copied -from the input. For this purpose, the model allows extending the vocabulary with -special tokens that can be used in place of `<unk>` tokens to identify different -input positions. For example, the user may add `<unk-0>`, `<unk-1>`, `<unk-2>`, -etc. to the end of the vocabulary, after the normal words. Below is an example -of how to create a vocabulary of 10000 most common words and add 1000 input -position markers. - -```bash -vocab_size=10000 -position_markers=1000 -export LC_ALL=C -cat train.src train.tgt | - tr -s '[:space:]' '\n' | - sort | - uniq -c | - sort -k1,1bnr -k2 | - head -n "$((vocab_size - 4))" | - awk '{ print $2 " " $1 }' >dict.pg.txt -python3 -c "[print('<unk-{}> 0'.format(n)) for n in range($position_markers)]" >>dict.pg.txt -``` - -##### 2. Preprocess the text data - -The idea is that any `<unk>` tokens in the text are replaced with `<unk-0>` if -it appears in the first input position, `<unk-1>` if it appears in the second -input position, and so on. This can be achieved using the `preprocess.py` script -that is provided in this directory. - -##### 3. 
Train a model - -The number of these special tokens is given to the model with the -`--source-position-markers` argument—the model simply maps all of these to the -same word embedding as `<unk>`. - -The attention distribution that is used for pointing is selected using the -`--alignment-heads` and `--alignment-layer` command-line arguments in the same -way as with the `transformer_align` model. - -##### 4. Generate text and postprocess it - -When using the model to generate text, you want to preprocess the input text in -the same way that training data was processed, replacing out-of-vocabulary words -with `<unk-N>` tokens. If any of these tokens are copied to the output, the -actual words can be retrieved from the unprocessed input text. Any `<unk-N>` -token should be replaced with the word at position N in the original input -sequence. This can be achieved using the `postprocess.py` script. diff --git a/kosmos-g/fairseq/examples/pointer_generator/README.xsum.md b/kosmos-g/fairseq/examples/pointer_generator/README.xsum.md deleted file mode 100644 index ac3a8c3dd..000000000 --- a/kosmos-g/fairseq/examples/pointer_generator/README.xsum.md +++ /dev/null @@ -1,180 +0,0 @@ -## Training a pointer-generator model on the Extreme Summarization dataset - -##### 1. Download the Extreme Summarization data and preprocess it - -Follow the instructions [here](https://github.com/EdinburghNLP/XSum) to obtain -the original Extreme Summarization dataset. You should have six files, -{train,validation,test}.{document,summary}. - -##### 2. Create a vocabulary and extend it with source position markers - -```bash -vocab_size=10000 -position_markers=1000 -export LC_ALL=C -cat train.document train.summary | - tr -s '[:space:]' '\n' | - sort | - uniq -c | - sort -k1,1bnr -k2 | - head -n "$((vocab_size - 4))" | - awk '{ print $2 " " $1 }' >dict.pg.txt -python3 -c "[print('<unk-{}> 0'.format(n)) for n in range($position_markers)]" >>dict.pg.txt -``` - -This creates the file dict.pg.txt that contains the 10k most frequent words, -followed by 1k source position markers: - -``` -the 4954867 -. 4157552 -, 3439668 -to 2212159 -a 1916857 -of 1916820 -and 1823350 -... -<unk-0> 0 -<unk-1> 0 -<unk-2> 0 -<unk-3> 0 -<unk-4> 0 -... -``` - -##### 2. Preprocess the text data - -```bash -./preprocess.py --source train.document --target train.summary --vocab <(cut -d' ' -f1 dict.pg.txt) --source-out train.pg.src --target-out train.pg.tgt -./preprocess.py --source validation.document --target validation.summary --vocab <(cut -d' ' -f1 dict.pg.txt) --source-out valid.pg.src --target-out valid.pg.tgt -./preprocess.py --source test.document --vocab <(cut -d' ' -f1 dict.pg.txt) --source-out test.pg.src -``` - -The data should now contain `<unk-N>` tokens in place of out-of-vocabulary words. - -##### 3. Binarize the dataset: - -```bash -fairseq-preprocess \ - --source-lang src \ - --target-lang tgt \ - --trainpref train.pg \ - --validpref valid.pg \ - --destdir bin \ - --workers 60 \ - --srcdict dict.pg.txt \ - --joined-dictionary -``` - -##### 3. 
Train a model - -```bash -total_updates=20000 -warmup_updates=500 -lr=0.001 -max_tokens=4096 -update_freq=4 -pointer_layer=-2 - -CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 fairseq-train bin \ - --user-dir examples/pointer_generator/pointer_generator_src \ - --max-tokens "$max_tokens" \ - --task translation \ - --source-lang src --target-lang tgt \ - --truncate-source \ - --layernorm-embedding \ - --share-all-embeddings \ - --encoder-normalize-before \ - --decoder-normalize-before \ - --required-batch-size-multiple 1 \ - --arch transformer_pointer_generator \ - --alignment-layer "$pointer_layer" \ - --alignment-heads 1 \ - --source-position-markers 1000 \ - --criterion label_smoothed_cross_entropy \ - --label-smoothing 0.1 \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.01 --optimizer adam --adam-betas "(0.9, 0.999)" --adam-eps 1e-08 \ - --clip-norm 0.1 \ - --lr-scheduler inverse_sqrt --lr "$lr" --max-update "$total_updates" --warmup-updates "$warmup_updates" \ - --update-freq "$update_freq" \ - --skip-invalid-size-inputs-valid-test -``` - -Above we specify that our dictionary contains 1000 source position markers, and -that we want to use one attention head from the penultimate decoder layer for -pointing. It should run in 5.5 hours on one node with eight 32GB V100 GPUs. The -logged messages confirm that dictionary indices above 10000 will be mapped to -the `<unk>` embedding: - -``` -2020-09-24 20:43:53 | INFO | fairseq.tasks.translation | [src] dictionary: 11000 types -2020-09-24 20:43:53 | INFO | fairseq.tasks.translation | [tgt] dictionary: 11000 types -2020-09-24 20:43:53 | INFO | fairseq.data.data_utils | loaded 11332 examples from: bin/valid.src-tgt.src -2020-09-24 20:43:53 | INFO | fairseq.data.data_utils | loaded 11332 examples from: bin/valid.src-tgt.tgt -2020-09-24 20:43:53 | INFO | fairseq.tasks.translation | bin valid src-tgt 11332 examples -2020-09-24 20:43:53 | INFO | fairseq.models.transformer_pg | dictionary indices from 10000 to 10999 will be mapped to 3 -``` - -##### 4. Summarize the test sequences - -```bash -batch_size=32 -beam_size=6 -max_length=60 -length_penalty=1.0 - -fairseq-interactive bin \ - --user-dir examples/pointer_generator/pointer_generator_src \ - --batch-size "$batch_size" \ - --task translation \ - --source-lang src --target-lang tgt \ - --path checkpoints/checkpoint_last.pt \ - --input test.pg.src \ - --buffer-size 200 \ - --max-len-a 0 \ - --max-len-b "$max_length" \ - --lenpen "$length_penalty" \ - --beam "$beam_size" \ - --skip-invalid-size-inputs-valid-test | - tee generate.out -grep ^H generate.out | cut -f 3- >generate.hyp -``` - -Now you should have the generated sequences in `generate.hyp`. They contain -`<unk-N>` tokens that the model has copied from the source sequence. In order to -retrieve the original words, we need the unprocessed source sequences from -`test.document`. - -##### 5. Process the generated output - -Since we skipped too long inputs when producing `generate.hyp`, we also have to -skip too long sequences now that we read `test.document`. - -```bash -./postprocess.py \ - --source <(awk 'NF<1024' test.document) \ - --target generate.hyp \ - --target-out generate.hyp.processed -``` - -Now you'll find the final sequences from `generate.hyp.processed`, with -`<unk-N>` replaced with the original word from the source sequence. 
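-
-The substitution itself is simple; below is a condensed sketch of what `postprocess.py` does for a single source/hypothesis pair (the real script additionally checks that each position actually exists in the source):
-
-```python
-import re
-
-def restore_oovs(source_line: str, hyp_line: str) -> str:
-    """Replace each <unk-N> with the source token at position N."""
-    src_tokens = source_line.split()
-    return re.sub(
-        r"<unk-([0-9]+)>",
-        lambda m: src_tokens[int(m.group(1))],
-        hyp_line,
-    )
-
-print(restore_oovs("de roon moved to teesside",
-                   "striker de <unk-1> has joined"))
-# striker de roon has joined
-```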
-
-##### An example of a summarized sequence
-
-The original source document in `test.document`:
-
-> de roon moved to teesside in june 2016 for an initial # 8.8 m fee and played 33 premier league games last term . the netherlands international , 26 , scored five goals in 36 league and cup games during his spell at boro . meanwhile , manager garry monk confirmed the championship club 's interest in signing chelsea midfielder lewis baker . `` he 's a target and one of many that we 've had throughout the summer months , '' said monk . find all the latest football transfers on our dedicated page .
-
-The preprocessed source document in `test.pg.src`:
-
-> de \<unk-1> moved to \<unk-4> in june 2016 for an initial # \<unk-12> m fee and played 33 premier league games last term . the netherlands international , 26 , scored five goals in 36 league and cup games during his spell at boro . meanwhile , manager garry monk confirmed the championship club 's interest in signing chelsea midfielder lewis baker . `` he 's a target and one of many that we 've had throughout the summer months , '' said monk . find all the latest football transfers on our dedicated page .
-
-The generated summary in `generate.hyp`:
-
-> middlesbrough striker \<unk> de \<unk-1> has joined spanish side \<unk> on a season-long loan .
-
-The generated summary after postprocessing in `generate.hyp.processed`:
-
-> middlesbrough striker \<unk> de roon has joined spanish side \<unk> on a season-long loan .
diff --git a/kosmos-g/fairseq/examples/pointer_generator/pointer_generator_src/__init__.py b/kosmos-g/fairseq/examples/pointer_generator/pointer_generator_src/__init__.py
deleted file mode 100644
index c361ff6bd..000000000
--- a/kosmos-g/fairseq/examples/pointer_generator/pointer_generator_src/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import transformer_pg  # noqa
diff --git a/kosmos-g/fairseq/examples/pointer_generator/pointer_generator_src/transformer_pg.py b/kosmos-g/fairseq/examples/pointer_generator/pointer_generator_src/transformer_pg.py
deleted file mode 100644
index 4ccf30f4e..000000000
--- a/kosmos-g/fairseq/examples/pointer_generator/pointer_generator_src/transformer_pg.py
+++ /dev/null
@@ -1,518 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from typing import Any, Dict, Optional, List, Tuple
-
-import torch
-import torch.nn as nn
-from fairseq import utils
-from fairseq.models import register_model, register_model_architecture
-from fairseq.models.transformer import (
-    DEFAULT_MAX_SOURCE_POSITIONS,
-    DEFAULT_MAX_TARGET_POSITIONS,
-    TransformerDecoder,
-    TransformerEncoder,
-    TransformerModel,
-    base_architecture,
-)
-from torch import Tensor
-
-
-logger = logging.getLogger(__name__)
-
-
-@register_model("transformer_pointer_generator")
-class TransformerPointerGeneratorModel(TransformerModel):
-    """
-    Transformer model from `"Attention Is All You Need" (Vaswani et al, 2017)
-    <https://arxiv.org/abs/1706.03762>`_, augmented with a pointer-generator
-    network from `"Get To The Point: Summarization with Pointer-Generator
-    Networks" (See et al, 2017) <https://arxiv.org/abs/1704.04368>`_.
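-
-    At generation time the output distribution is a per-token mixture,
-    ``p(w) = p_gen * p_vocab(w) + (1 - p_gen) * attn(w)``, where ``attn(w)``
-    sums the attention weights over every source position that holds ``w``
-    and ``p_gen`` is predicted from the decoder input embedding and decoder
-    output (see ``TransformerPointerGeneratorDecoder.output_layer``).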
- - Args: - encoder (TransformerPointerGeneratorEncoder): the encoder - decoder (TransformerPointerGeneratorDecoder): the decoder - - The Transformer pointer-generator model provides the following named - architectures and command-line arguments: - - .. argparse:: - :ref: fairseq.models.transformer_pointer_generator_parser - :prog: - """ - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # fmt: off - TransformerModel.add_args(parser) - parser.add_argument('--alignment-heads', type=int, metavar='N', - help='number of attention heads to be used for ' - 'pointing') - parser.add_argument('--alignment-layer', type=int, metavar='I', - help='layer number to be used for pointing (0 ' - 'corresponding to the bottommost layer)') - parser.add_argument('--source-position-markers', type=int, metavar='N', - help='dictionary includes N additional items that ' - 'represent an OOV token at a particular input ' - 'position') - parser.add_argument('--force-generation', type=float, metavar='P', - default=None, - help='set the vocabulary distribution weight to P, ' - 'instead of predicting it from the input (1.0 ' - 'corresponding to generation, 0.0 to pointing)') - # fmt: on - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - if args.encoder_layers_to_keep: - args.encoder_layers = len(args.encoder_layers_to_keep.split(",")) - if args.decoder_layers_to_keep: - args.decoder_layers = len(args.decoder_layers_to_keep.split(",")) - - if getattr(args, "max_source_positions", None) is None: - args.max_source_positions = DEFAULT_MAX_SOURCE_POSITIONS - if getattr(args, "max_target_positions", None) is None: - args.max_target_positions = DEFAULT_MAX_TARGET_POSITIONS - if getattr(args, "source_position_markers", None) is None: - args.source_position_markers = args.max_source_positions - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - if src_dict != tgt_dict: - raise ValueError("Pointer-generator requires a joined dictionary") - - def build_embedding(dictionary, embed_dim, path=None): - # The dictionary may include additional items that can be used in - # place of the normal OOV token and that all map to the same - # embedding. Using a different token for each input position allows - # one to restore the word identities from the original source text. 
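-            # For example, with a 10k-word vocabulary extended by 1000
-            # position markers, indices 10000-10999 all share the <unk>
-            # embedding (see the log message below).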
- num_embeddings = len(dictionary) - args.source_position_markers - padding_idx = dictionary.pad() - unk_idx = dictionary.unk() - logger.info( - "dictionary indices from {0} to {1} will be mapped to {2}".format( - num_embeddings, len(dictionary) - 1, unk_idx - ) - ) - emb = Embedding(num_embeddings, embed_dim, padding_idx, unk_idx) - # if provided, load from preloaded dictionaries - if path: - embed_dict = utils.parse_embedding(path) - utils.load_embedding(embed_dict, dictionary, emb) - return emb - - if args.share_all_embeddings: - if args.encoder_embed_dim != args.decoder_embed_dim: - raise ValueError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if args.decoder_embed_path and ( - args.decoder_embed_path != args.encoder_embed_path - ): - raise ValueError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = encoder_embed_tokens - args.share_decoder_input_output_embed = True - else: - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = build_embedding( - tgt_dict, args.decoder_embed_dim, args.decoder_embed_path - ) - - encoder = cls.build_encoder(args, src_dict, encoder_embed_tokens) - decoder = cls.build_decoder(args, tgt_dict, decoder_embed_tokens) - return cls(args, encoder, decoder) - - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - return TransformerPointerGeneratorEncoder(args, src_dict, embed_tokens) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - return TransformerPointerGeneratorDecoder(args, tgt_dict, embed_tokens) - - -class TransformerPointerGeneratorEncoder(TransformerEncoder): - """ - Transformer encoder consisting of *args.encoder_layers* layers. Each layer - is a :class:`TransformerEncoderLayer`. The pointer-generator variant adds - the source tokens to the encoder output as these are otherwise not passed - to the decoder. - """ - - def forward( - self, - src_tokens, - src_lengths: Optional[Tensor] = None, - return_all_hiddens: bool = False, - token_embeddings: Optional[Tensor] = None - ): - """ - Runs the `forward()` method of the parent Transformer class. Then adds - the source tokens into the encoder output tuple. - - While it might be more elegant that the model would pass the source - tokens to the `forward()` method of the decoder too, this would require - changes to `SequenceGenerator`. - - Args: - src_tokens (torch.LongTensor): tokens in the source language of - shape `(batch, src_len)` - src_lengths (torch.LongTensor): lengths of each source sentence of - shape `(batch)` - return_all_hiddens (bool, optional): also return all of the - intermediate hidden states (default: False). - token_embeddings (torch.Tensor, optional): precomputed embeddings - default `None` will recompute embeddings - - Returns: - namedtuple: - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - - **encoder_embedding** (Tensor): the (scaled) embedding lookup - of shape `(batch, src_len, embed_dim)` - - **encoder_states** (List[Tensor]): all intermediate - hidden states of shape `(src_len, batch, embed_dim)`. - Only populated if *return_all_hiddens* is True. 
- - **src_tokens** (Tensor): input token ids of shape - `(batch, src_len)` - """ - encoder_out = self.forward_scriptable(src_tokens, - src_lengths, - return_all_hiddens, - token_embeddings) - - # The Pytorch Mobile lite interpreter does not supports returning NamedTuple in - # `forward` so we use a dictionary instead. - # TorchScript does not support mixed values so the values are all lists. - # The empty list is equivalent to None. - return { - "encoder_out": encoder_out["encoder_out"], # T x B x C - "encoder_padding_mask": encoder_out["encoder_padding_mask"], # B x T - "encoder_embedding": encoder_out["encoder_embedding"], # B x T x C - "encoder_states": encoder_out["encoder_states"], # List[T x B x C] - "src_tokens": [src_tokens], # B x T - "src_lengths": [], - } - - -class TransformerPointerGeneratorDecoder(TransformerDecoder): - """ - Transformer decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`TransformerDecoderLayer`. The pointer-generator variant mixes - the output probabilities with an attention distribution in the output layer. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - """ - - def __init__(self, args, dictionary, embed_tokens): - super().__init__(args, dictionary, embed_tokens, no_encoder_attn=False) - - # In the pointer-generator model these arguments define the decoder - # layer and the number of attention heads that will be averaged to - # create the alignment for pointing. - self.alignment_heads = args.alignment_heads - self.alignment_layer = args.alignment_layer - - input_embed_dim = embed_tokens.embedding_dim - - # Generation probabilities / interpolation coefficients are predicted - # from the current decoder input embedding and the decoder output, which - # is the size of output_embed_dim. - p_gen_input_size = input_embed_dim + self.output_embed_dim - self.project_p_gens = nn.Linear(p_gen_input_size, 1) - nn.init.zeros_(self.project_p_gens.bias) - - # The dictionary may include a separate entry for an OOV token in each - # input position, so that their identity can be restored from the - # original source text. 
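-        # E.g. a 10k-word vocabulary plus 1000 position markers gives
-        # num_types = 11000, num_oov_types = 1000, num_embeddings = 10000.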
- self.num_types = len(dictionary) - self.num_oov_types = args.source_position_markers - self.num_embeddings = self.num_types - self.num_oov_types - self.force_p_gen = args.force_generation - - def forward( - self, - prev_output_tokens, - encoder_out: Optional[Dict[str, List[Tensor]]] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - features_only: bool = False, - alignment_layer: Optional[int] = 0, - alignment_heads: Optional[int] = 1, - src_lengths: Optional[Any] = None, - return_all_hiddens: bool = False, - ): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (optional): output from the encoder, used for - encoder-side attention - incremental_state (dict, optional): dictionary used for storing - state during :ref:`Incremental decoding` - features_only (bool, optional): only return features without - applying output layer (default: False) - alignment_layer (int, optional): 0-based index of the layer to be - used for pointing (default: 0) - alignment_heads (int, optional): number of attention heads to be - used for pointing (default: 1) - - Returns: - tuple: - - the decoder's output of shape `(batch, tgt_len, vocab)` - - a dictionary with any model-specific outputs - """ - # The normal Transformer model doesn't pass the alignment_layer and - # alignment_heads parameters correctly. We use our local variables. - x, extra = self.extract_features( - prev_output_tokens, - encoder_out=encoder_out, - incremental_state=incremental_state, - alignment_layer=self.alignment_layer, - alignment_heads=self.alignment_heads, - ) - if not features_only: - # Embedding the tokens again for generation probability prediction, - # so that we don't have to reimplement the whole extract_features() - # method. - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - prev_output_embed = self.embed_tokens(prev_output_tokens) - prev_output_embed *= self.embed_scale - predictors = torch.cat((prev_output_embed, x), 2) - p_gens = self.project_p_gens(predictors) - p_gens = torch.sigmoid(p_gens.float()) - # Torchscript complains if encoder_out or attn are None because - # `output_layer()` signature expects tensors instead - attn: Optional[Tensor] = extra["attn"][0] - assert encoder_out is not None - assert attn is not None - x = self.output_layer(x, attn, encoder_out["src_tokens"][0], p_gens) - return x, extra - - def output_layer( - self, - features: Tensor, - attn: Tensor, - src_tokens: Tensor, - p_gens: Tensor - ) -> Tensor: - """ - Project features to the vocabulary size and mix with the attention - distributions. - """ - if self.force_p_gen is not None: - p_gens = self.force_p_gen - - # project back to size of vocabulary - if self.adaptive_softmax is None: - logits = self.output_projection(features) - else: - logits = features - - batch_size = logits.shape[0] - output_length = logits.shape[1] - assert logits.shape[2] == self.num_embeddings - assert src_tokens.shape[0] == batch_size - src_length = src_tokens.shape[1] - - # The final output distribution will be a mixture of the normal output - # distribution (softmax of logits) and attention weights. 
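-        # Concretely, for each target position:
-        #   p(w) = p_gen * softmax(logits)[w]
-        #        + (1 - p_gen) * sum_{j : src_tokens[j] == w} attn[j]
-        # The second term is materialized below by scattering the attention
-        # weights over the extended vocabulary with scatter_add_.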
- gen_dists = self.get_normalized_probs_scriptable( - (logits, None), log_probs=False, sample=None - ) - gen_dists = torch.mul(gen_dists, p_gens) - padding_size = (batch_size, output_length, self.num_oov_types) - padding = gen_dists.new_zeros(padding_size) - gen_dists = torch.cat((gen_dists, padding), 2) - assert gen_dists.shape[2] == self.num_types - - # Scatter attention distributions to distributions over the extended - # vocabulary in a tensor of shape [batch_size, output_length, - # vocab_size]. Each attention weight will be written into a location - # that is for other dimensions the same as in the index tensor, but for - # the third dimension it's the value of the index tensor (the token ID). - attn = torch.mul(attn.float(), 1 - p_gens) - index = src_tokens[:, None, :] - index = index.expand(batch_size, output_length, src_length) - attn_dists_size = (batch_size, output_length, self.num_types) - attn_dists = attn.new_zeros(attn_dists_size) - attn_dists.scatter_add_(2, index, attn.float()) - - # Final distributions, [batch_size, output_length, num_types]. - return gen_dists + attn_dists - - def get_normalized_probs( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - """ - Get normalized probabilities (or log probs) from a net's output. - Pointer-generator network output is already normalized. - """ - probs = net_output[0] - # Make sure the probabilities are greater than zero when returning log - # probabilities. - return probs.clamp(1e-10, 1.0).log() if log_probs else probs - - -class Embedding(nn.Embedding): - r"""A simple lookup table that stores embeddings of a fixed dictionary and size. - This module is often used to store word embeddings and retrieve them using indices. - The input to the module is a list of indices, and the output is the corresponding - word embeddings. This subclass differs from the standard PyTorch Embedding class by - allowing additional vocabulary entries that will be mapped to the unknown token - embedding. - Args: - num_embeddings (int): size of the dictionary of embeddings - embedding_dim (int): the size of each embedding vector - padding_idx (int): Pads the output with the embedding vector at :attr:`padding_idx` - (initialized to zeros) whenever it encounters the index. - unk_idx (int): Maps all token indices that are greater than or equal to - num_embeddings to this index. - Attributes: - weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim) - initialized from :math:`\mathcal{N}(0, 1)` - Shape: - - Input: :math:`(*)`, LongTensor of arbitrary shape containing the indices to extract - - Output: :math:`(*, H)`, where `*` is the input shape and :math:`H=\text{embedding\_dim}` - .. note:: - Keep in mind that only a limited number of optimizers support - sparse gradients: currently it's :class:`optim.SGD` (`CUDA` and `CPU`), - :class:`optim.SparseAdam` (`CUDA` and `CPU`) and :class:`optim.Adagrad` (`CPU`) - .. note:: - With :attr:`padding_idx` set, the embedding vector at - :attr:`padding_idx` is initialized to all zeros. However, note that this - vector can be modified afterwards, e.g., using a customized - initialization method, and thus changing the vector used to pad the - output. The gradient for this vector from :class:`~torch.nn.Embedding` - is always zero. 
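-    Example:
-        With ``num_embeddings=10000`` and ``unk_idx=3``, an input index of
-        10500 (a source position marker) is looked up as index 3, i.e. the
-        ``<unk>`` embedding.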
- """ - __constants__ = ["unk_idx"] - - # Torchscript: Inheriting from Embedding class produces an error when exporting to Torchscript - # -> RuntimeError: Unable to cast Python instance to C++ type (compile in debug mode for details - # It's happening because max_norm attribute from nn.Embedding is None by default and it cannot be - # cast to a C++ type - def __init__( - self, - num_embeddings: int, - embedding_dim: int, - padding_idx: Optional[int], - unk_idx: int, - max_norm: Optional[float] = float("inf"), - ): - super().__init__(num_embeddings, embedding_dim, padding_idx=padding_idx, max_norm=max_norm) - self.unk_idx = unk_idx - nn.init.normal_(self.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(self.weight[padding_idx], 0) - - def forward(self, input): - input = torch.where( - input >= self.num_embeddings, torch.ones_like(input) * self.unk_idx, input - ) - return nn.functional.embedding( - input, self.weight, self.padding_idx, self.max_norm, - self.norm_type, self.scale_grad_by_freq, self.sparse - ) - - -@register_model_architecture( - "transformer_pointer_generator", "transformer_pointer_generator" -) -def transformer_pointer_generator(args): - args.alignment_heads = getattr(args, "alignment_heads", 1) - args.alignment_layer = getattr(args, "alignment_layer", -1) - base_architecture(args) - if args.alignment_layer < 0: - args.alignment_layer = args.decoder_layers + args.alignment_layer - - -@register_model_architecture( - "transformer_pointer_generator", "transformer_pointer_generator_iwslt_de_en" -) -def transformer_pointer_generator_iwslt_de_en(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.decoder_layers = getattr(args, "decoder_layers", 6) - transformer_pointer_generator(args) - - -@register_model_architecture( - "transformer_pointer_generator", "transformer_pointer_generator_wmt_en_de" -) -def transformer_pointer_generator_wmt_en_de(args): - transformer_pointer_generator(args) - - -# Transformer pointer-generator with the base Transformer parameters as used in -# the "Attention Is All You Need" paper (Vaswani et al., 2017) -@register_model_architecture( - "transformer_pointer_generator", - "transformer_pointer_generator_vaswani_wmt_en_de_big", -) -def transformer_pointer_generator_vaswani_wmt_en_de_big(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.3) - transformer_pointer_generator(args) - - -@register_model_architecture( - "transformer_pointer_generator", - "transformer_pointer_generator_vaswani_wmt_en_fr_big", -) -def transformer_pointer_generator_vaswani_wmt_en_fr_big(args): - 
args.dropout = getattr(args, "dropout", 0.1) - transformer_pointer_generator_vaswani_wmt_en_de_big(args) - - -@register_model_architecture( - "transformer_pointer_generator", "transformer_pointer_generator_wmt_en_de_big" -) -def transformer_pointer_generator_wmt_en_de_big(args): - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - transformer_pointer_generator_vaswani_wmt_en_de_big(args) - - -# default parameters used in tensor2tensor implementation -@register_model_architecture( - "transformer_pointer_generator", "transformer_pointer_generator_wmt_en_de_big_t2t" -) -def transformer_pointer_generator_wmt_en_de_big_t2t(args): - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.activation_dropout = getattr(args, "activation_dropout", 0.1) - transformer_pointer_generator_vaswani_wmt_en_de_big(args) diff --git a/kosmos-g/fairseq/examples/pointer_generator/postprocess.py b/kosmos-g/fairseq/examples/pointer_generator/postprocess.py deleted file mode 100644 index b213aed80..000000000 --- a/kosmos-g/fairseq/examples/pointer_generator/postprocess.py +++ /dev/null @@ -1,96 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import re -import sys - - -class OOVIndexError(IndexError): - def __init__(self, pos, source_seq, target_seq): - super(OOVIndexError, self).__init__( - "A <unk-N> tag in the target sequence refers to a position that is " - "outside the source sequence. Most likely there was a mismatch in " - "provided source and target sequences. Otherwise this would mean that " - "the pointing mechanism somehow attended to a position that is past " - "the actual sequence end." - ) - self.source_pos = pos - self.source_seq = source_seq - self.target_seq = target_seq - - -def replace_oovs(source_in, target_in, target_out): - """Replaces <unk-N> tokens in the target text with the corresponding word in - the source text. - """ - - oov_re = re.compile("^<unk-([0-9]+)>$") - - for source_seq, target_seq in zip(source_in, target_in): - target_seq_out = [] - - pos_to_word = source_seq.strip().split() - for token in target_seq.strip().split(): - m = oov_re.match(token) - if m: - pos = int(m.group(1)) - if pos >= len(pos_to_word): - raise OOVIndexError(pos, source_seq, target_seq) - token_out = pos_to_word[pos] - else: - token_out = token - target_seq_out.append(token_out) - target_out.write(" ".join(target_seq_out) + "\n") - - -def main(): - parser = argparse.ArgumentParser( - description="Replaces <unk-N> tokens in target sequences with words from " - "the corresponding position in the source sequence." 
-    )
-    parser.add_argument(
-        "--source", type=str, help="text file with source sequences", required=True
-    )
-    parser.add_argument(
-        "--target", type=str, help="text file with target sequences", required=True
-    )
-    parser.add_argument(
-        "--target-out",
-        type=str,
-        help="where to write target sequences without <unk-N> entries",
-        required=True,
-    )
-    args = parser.parse_args()
-
-    with open(args.source, "r", encoding="utf-8") as source_in, open(
-        args.target, "r", encoding="utf-8"
-    ) as target_in, open(args.target_out, "w", encoding="utf-8") as target_out:
-        replace_oovs(source_in, target_in, target_out)
-
-
-if __name__ == "__main__":
-    try:
-        main()
-    except OOVIndexError as e:
-        print(e, file=sys.stderr)
-        print("Source sequence:", e.source_seq.strip(), file=sys.stderr)
-        print("Target sequence:", e.target_seq.strip(), file=sys.stderr)
-        print(
-            "Source sequence length:",
-            len(e.source_seq.strip().split()),
-            file=sys.stderr,
-        )
-        print("The offending tag points to:", e.source_pos, file=sys.stderr)
-        sys.exit(2)
diff --git a/kosmos-g/fairseq/examples/pointer_generator/preprocess.py b/kosmos-g/fairseq/examples/pointer_generator/preprocess.py
deleted file mode 100644
index f72ca7d3d..000000000
--- a/kosmos-g/fairseq/examples/pointer_generator/preprocess.py
+++ /dev/null
@@ -1,102 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-from itertools import zip_longest
-
-
-def replace_oovs(source_in, target_in, vocabulary, source_out, target_out):
-    """Replaces out-of-vocabulary words in source and target text with <unk-N>,
-    where N is the position of the word in the source sequence.
-    """
-
-    def format_unk(pos):
-        return "<unk-{}>".format(pos)
-
-    if target_in is None:
-        target_in = []
-
-    for seq_num, (source_seq, target_seq) in enumerate(
-        zip_longest(source_in, target_in)
-    ):
-        source_seq_out = []
-        target_seq_out = []
-
-        word_to_pos = dict()
-        for position, token in enumerate(source_seq.strip().split()):
-            if token in vocabulary:
-                token_out = token
-            else:
-                if token in word_to_pos:
-                    oov_pos = word_to_pos[token]
-                else:
-                    word_to_pos[token] = position
-                    oov_pos = position
-                token_out = format_unk(oov_pos)
-            source_seq_out.append(token_out)
-        source_out.write(" ".join(source_seq_out) + "\n")
-
-        if target_seq is not None:
-            for token in target_seq.strip().split():
-                if token in word_to_pos:
-                    token_out = format_unk(word_to_pos[token])
-                else:
-                    token_out = token
-                target_seq_out.append(token_out)
-            if target_out is not None:
-                target_out.write(" ".join(target_seq_out) + "\n")
-
-
-def main():
-    parser = argparse.ArgumentParser(
-        description="Replaces out-of-vocabulary words in both source and target "
-        "sequences with tokens that indicate the position of the word "
-        "in the source sequence."
-    )
-    parser.add_argument(
-        "--source", type=str, help="text file with source sequences", required=True
-    )
-    parser.add_argument(
-        "--target", type=str, help="text file with target sequences", default=None
-    )
-    parser.add_argument("--vocab", type=str, help="vocabulary file", required=True)
-    parser.add_argument(
-        "--source-out",
-        type=str,
-        help="where to write source sequences with <unk-N> entries",
-        required=True,
-    )
-    parser.add_argument(
-        "--target-out",
-        type=str,
-        help="where to write target sequences with <unk-N> entries",
-        default=None,
-    )
-    args = parser.parse_args()
-
-    with open(args.vocab, encoding="utf-8") as vocab:
-        vocabulary = vocab.read().splitlines()
-
-    target_in = (
-        open(args.target, "r", encoding="utf-8") if args.target is not None else None
-    )
-    target_out = (
-        open(args.target_out, "w", encoding="utf-8")
-        if args.target_out is not None
-        else None
-    )
-    with open(args.source, "r", encoding="utf-8") as source_in, open(
-        args.source_out, "w", encoding="utf-8"
-    ) as source_out:
-        replace_oovs(source_in, target_in, vocabulary, source_out, target_out)
-    if target_in is not None:
-        target_in.close()
-    if target_out is not None:
-        target_out.close()
-
-
-if __name__ == "__main__":
-    main()
diff --git a/kosmos-g/fairseq/examples/quant_noise/README.md b/kosmos-g/fairseq/examples/quant_noise/README.md
deleted file mode 100644
index a04d7e4e8..000000000
--- a/kosmos-g/fairseq/examples/quant_noise/README.md
+++ /dev/null
@@ -1,298 +0,0 @@
-# Training with Quantization Noise for Extreme Model Compression ({Fan\*, Stock\*} *et al.*, 2020)
-This page contains information on how to train and quantize models with Quantization Noise, for both scalar quantization like `int8` and Iterative Product Quantization.
-Check out our paper [here](https://arxiv.org/abs/2004.07320).
-
-Looking for pretrained models? They will be added shortly.
-Looking for code to train vision models? We are working on open sourcing our code as part of ClassyVision. Please check back, but note that both the Scalar and Iterative Product Quantization counterparts of the `nn.Conv2d` module are already included in this release.
-
-**Contents**:
-- [Walk through of code](#walk-through-the-code)
-- [Reproduce NLP Results](#looking-to-reproduce-the-nlp-results-in-the-paper)
-- [Reproduce Vision Results](#looking-to-reproduce-the-vision-results-in-the-paper)
-
-
-## Citation
-```bibtex
-@article{fan2020training,
-    title={Training with Quantization Noise for Extreme Model Compression},
-    author={Angela Fan* and Pierre Stock* and Benjamin Graham and Edouard Grave and Remi Gribonval and Herve Jegou and Armand Joulin},
-    year={2020},
-    eprint={2004.07320},
-    archivePrefix={arXiv},
-    primaryClass={cs.ML}
-}
-```
-
-## Walk through the code
-
-Training a model with Quant-Noise improves performance under subsequent inference-time quantization by training the model to be robust to quantization. The technique is useful for both scalar and product quantization methods, and across multiple domains. We detail below our approach to training and quantizing models, and how to integrate our code to quantize your favorite models.
-
-### Scalar Quantization
-
-Unlike the section [Iterative Product Quantization](#iterative-product-quantization), which gives state-of-the-art compression, this section showcases the usefulness of our approach for simple scalar quantization baselines such as int8 using on-GPU Fake Quantization.
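-
-To make the idea concrete, here is a minimal sketch of what training-time fake quantization with Quant-Noise amounts to. It is illustrative only, not the fairseq implementation: the function name, the symmetric per-tensor int8 scheme, and the straight-through gradient trick are assumptions made for the sketch.
-
-```python
-import torch
-
-
-def fake_quantize_scalar(w: torch.Tensor, bits: int = 8, p: float = 0.5) -> torch.Tensor:
-    """Quantize-dequantize a random proportion p of the weights (training-time noise)."""
-    qmax = 2 ** (bits - 1) - 1                          # 127 for int8
-    scale = (w.detach().abs().max() / qmax).clamp(min=1e-8)  # symmetric per-tensor scale
-    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
-    mask = torch.rand_like(w) < p                       # a proportion p of weights gets the noise
-    # straight-through estimator: quantized values in the forward pass,
-    # identity gradient in the backward pass
-    noise = (torch.where(mask, w_q, w) - w).detach()
-    return w + noise
-```
-
-At `p=1` every weight goes through the quantize-dequantize round trip, which matches the evaluation behavior described below.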
-
-#### Training
-
-Scalar quantization with Quant-Noise consists of randomly quantizing a proportion `p` of the weights during training. Scalar quantization is implemented [here](https://github.com/pytorch/fairseq/tree/main/fairseq/modules/quantization/scalar) in the form of Fake Quantization, meaning that we emulate int8 on GPU by quantizing and de-quantizing both the weights and the activations. We rely on PyTorch's [quantization primitives](https://github.com/pytorch/pytorch/tree/master/torch/quantization).
-
-To train a model with Quant-Noise, add the following flag:
-```
---quant-noise-scalar 0.5
-```
-Large values of noise make the network easier to quantize but may result in higher non-quantized test and validation perplexities.
-
-#### Quantization
-
-When evaluating a network, all quantized modules and activation hooks automatically switch to `p=1`, so the validation accuracy reported by fairseq is already the quantized accuracy; there is nothing more to do.
-
-
-#### Integration with your own code
-
-Looking to quantize your own models with Quant-Noise + Scalar Quantization?
-- Use the function `quantize_model_` implemented [here](https://github.com/pytorch/fairseq/tree/main/fairseq/modules/quantization/scalar/utils.py) to (1) replace all your modules by their quantized counterparts and (2) add hooks to those modules to quantize the activations.
-- Then, perform your training as usual. Note that in `eval()` mode, the network is always fully quantized (weights and activations) by default (`p=1`).
-
-
-
-### Iterative Product Quantization
-
-
-Iterative Product Quantization with Quant-Noise proceeds in two steps. First, a model must be trained uncompressed with Quant-Noise. Second, the model must be quantized with iPQ. Note that we implement the simplest form of noise here, randomly dropping a proportion `p` of blocks, which worked as well as assigning those blocks to their current centroid.
-
-#### Training
-
-To train a model with Quant-Noise, add the following flags:
-```
---quant-noise-pq 0.1 --quant-noise-pq-block-size 8
-```
-`quant-noise-pq` controls how much dropout is applied to the blocks of the weight matrix. `quant-noise-pq-block-size` controls the size of the weight matrix blocks.
-We recommend a Quant-Noise value of 0.05 to 0.2, a range that worked well in our experiments, and a block size of 8. Note that `input_features` must be a multiple of the block size; see the size checks [here](https://github.com/pytorch/fairseq/tree/main/fairseq/modules/quant_noise.py). Large block sizes result in a higher compression ratio but may induce a loss in accuracy.
-
-We currently support training Transformer based models, such as sequence-to-sequence, language models, and BERT architectures. The `quant_noise` function [here](https://github.com/pytorch/fairseq/tree/main/fairseq/modules/quant_noise.py) wraps a module. It splits a weight matrix into blocks and applies random dropout to these blocks, as sketched below.
-In the Transformer architectures, quant-noise is applied to the input and output embeddings, the attention, and the FFN.
-
-Quant-Noise can also be combined with **LayerDrop** (see [here](https://github.com/pytorch/fairseq/tree/main/examples/layerdrop)) to add its pruning effect to the quantized model and make the model even smaller. We recommend training with LayerDrop 0.1 or 0.2.
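-
-For intuition, the block dropout described above can be written in a few lines. This is a simplified stand-in for fairseq's `quant_noise` wrapper (training-time path only, 2D weights only), not its actual code:
-
-```python
-import torch
-
-
-def block_quant_noise(weight: torch.Tensor, p: float, block_size: int) -> torch.Tensor:
-    """Zero a proportion p of contiguous column blocks and rescale, dropout-style."""
-    out_features, in_features = weight.shape
-    assert in_features % block_size == 0, "input features must be a multiple of the block size"
-    # one Bernoulli draw per block, expanded back over the block's columns
-    block_mask = torch.bernoulli(
-        torch.full((out_features, in_features // block_size), p, device=weight.device)
-    )
-    mask = block_mask.repeat_interleave(block_size, dim=1).bool()
-    return weight.masked_fill(mask, 0) / (1 - p)  # rescale the surviving blocks
-```
-
-The noised weight is used in the forward pass during training only; at inference the unmodified weight is used, mirroring standard dropout.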
-
-#### Quantization
-
-We implement an improved version of product quantization from Stock et al., **iPQ**, described [here](https://arxiv.org/abs/1907.05686); see code with the old API [here](https://github.com/facebookresearch/kill-the-bits). Note that we improved the iPQ API in terms of both compute speed and usability as described below.
-
-For the particular case of PQ, quantization is performed sequentially. We recommend first quantizing the FFNs, then the EMBs, and finally the ATTNs. Quantization is done in two sub-steps:
-- First, perform `n` steps of Product Quantization (generally `n=20` is enough).
-- Then, finetune the obtained centroids.
-
-#### Integration with your own code
-
-Looking to quantize your own models with Quant-Noise + iPQ?
-- First, wrap your modules with the `quant_noise` function [here](https://github.com/pytorch/fairseq/tree/main/fairseq/modules/quant_noise.py), which is module-agnostic, and train your favorite model.
-- Then, quantize your trained model using the code [here](https://github.com/pytorch/fairseq/tree/main/fairseq/modules/quantization/pq). This can be done *without any changes to your training loop*. Below is example code for integration.
-Note that we tried our approach only on Transformers and various Convolutional Models such as EfficientNets.
-
-```python
-import logging
-
-from fairseq.modules.quantization.pq import quantize_model_, SizeTracker
-
-logger = logging.getLogger(__name__)
-
-# get configuration parameters
-n_centroids_config = config["n_centroids"]
-block_sizes_config = config["block_sizes"]
-layers_to_quantize = config["layers_to_quantize"]
-
-# size tracker for keeping track of assignments, centroids and non-compressed sizes
-size_tracker = SizeTracker(model)
-
-# Quantize model by stages
-for step in range(len(layers_to_quantize)):
-
-    # quantize model in-place
-    quantized_layers = quantize_model_(
-        model,
-        size_tracker,
-        layers_to_quantize,
-        block_sizes_config,
-        n_centroids_config,
-        step=step,
-    )
-    logger.info(f"Finetuning stage {step}, quantized layers: {quantized_layers}")
-    logger.info(f"{size_tracker}")
-
-    # Don't forget to re-create/update trainer/optimizer since model parameters have changed
-    optimizer = ...
-
-    # Finetune the centroids with your usual training loop for a few epochs
-    trainer.train_epoch()
-```
-
-
-## Looking to reproduce the NLP results in the paper?
-
-We detail below how to reproduce the state-of-the-art results reported in the paper for Quant-Noise + Iterative Product Quantization.
-
-### Training with Quant-Noise
-
-To **train** RoBERTa + QuantNoise, we followed this setting [here](https://github.com/pytorch/fairseq/tree/main/examples/roberta).
-The following command can be used to train a RoBERTa Base + QuantNoise model: - -```bash -TOTAL_UPDATES=125000 -WARMUP_UPDATES=10000 -PEAK_LR=0.0005 -TOKENS_PER_SAMPLE=512 -MAX_POSITIONS=512 -MAX_SENTENCES=16 -UPDATE_FREQ=2 -DATA_DIR=/path/to/data/here - -fairseq-train $DATA_DIR \ - --task masked_lm --criterion masked_lm --arch roberta_base \ - --sample-break-mode complete \ - --tokens-per-sample $TOKENS_PER_SAMPLE --max-positions $MAX_POSITIONS \ - --optimizer adam --adam-betas '(0.9, 0.98)' --adam-eps 1e-6 \ - --clip-norm 0.0 \ - --lr-scheduler polynomial_decay --lr $PEAK_LR \ - --warmup-updates $WARMUP_UPDATES --total-num-update $TOTAL_UPDATES \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.01 \ - --batch-size $MAX_SENTENCES \ - --update-freq $UPDATE_FREQ --max-update $TOTAL_UPDATES \ - --save-dir checkpoint/roberta \ - --ddp-backend legacy_ddp --encoder-layerdrop 0.2 \ - --quant-noise-pq 0.2 --quant-noise-pq-block-size 8 --untie-weights-roberta -``` - -To **finetune** RoBERTa + QuantNoise, we followed this setting [here](https://github.com/pytorch/fairseq/blob/main/examples/roberta/README.glue.md). -The following command can be used to finetune a RoBERTa Base + QuantNoise model on the RTE dataset: - -```bash -TOTAL_NUM_UPDATES=2036 -WARMUP_UPDATES=122 -LR=2e-05 -NUM_CLASSES=2 -MAX_SENTENCES=16 -ROBERTA_PATH=/path/to/roberta_quantnoise/model.pt - -fairseq-train /path/to/rte/data/ \ - --restore-file $ROBERTA_PATH \ - --max-positions 512 \ - --batch-size $MAX_SENTENCES \ - --max-tokens 4400 \ - --task sentence_prediction \ - --reset-optimizer --reset-dataloader --reset-meters \ - --required-batch-size-multiple 1 \ - --init-token 0 --separator-token 2 \ - --arch roberta_large \ - --criterion sentence_prediction \ - --num-classes $NUM_CLASSES \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \ - --clip-norm 0.0 \ - --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \ - --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \ - --max-epoch 10 \ - --find-unused-parameters \ - --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \ - --ddp-backend legacy_ddp \ - --quant-noise-pq 0.2 --quant-noise-pq-block-size 8 -``` - -To **train** Language Models on Wikitext-103, we followed this setting [here](https://github.com/pytorch/fairseq/tree/main/examples/language_model). 
-The following command can be used to train a Transformer + QuantNoise model on Wikitext-103:
-
-```bash
-fairseq-train --task language_modeling /path/to/wikitext-103/data \
-    --save-dir checkpoints/transformer_wikitext-103 \
-    --adaptive-input --adaptive-input-cutoff 20000,60000 --adaptive-input-factor 4 \
-    --adaptive-softmax-cutoff 20000,60000 --adaptive-softmax-dropout 0.2 --adaptive-softmax-factor 4.0 \
-    --tie-adaptive-proj --tie-adaptive-weights \
-    --arch transformer_lm_gbw \
-    --attention-dropout 0.1 --dropout 0.2 --relu-dropout 0.1 \
-    --clip-norm 0.1 --criterion adaptive_loss \
-    --ddp-backend legacy_ddp \
-    --decoder-attention-heads 8 --decoder-embed-dim 1024 --decoder-ffn-embed-dim 4096 --decoder-input-dim 1024 \
-    --decoder-layers 16 --decoder-normalize-before --decoder-output-dim 1024 \
-    --min-lr 0.0001 --lr-period-updates 270000 --lr-scheduler cosine --lr-shrink 0.75 --lr 1.0 --t-mult 2.0 \
-    --max-tokens 3072 --tokens-per-sample 3072 --momentum 0.99 --optimizer nag \
-    --sample-break-mode none --update-freq 3 \
-    --warmup-init-lr 1e-07 --warmup-updates 16000 \
-    --weight-decay 0 --seed 1 --stop-min-lr 1e-09 \
-    --quant-noise-pq 0.05 --quant-noise-pq-block-size 8
-```
-
-To **evaluate** this model, use the `fairseq-eval-lm` command:
-
-```bash
-fairseq-eval-lm /path/to/wikitext-103/data --path /path/to/model/checkpoint \
-    --sample-break-mode complete \
-    --max-tokens 3072 \
-    --context-window 2560 \
-    --softmax-batch 1024 \
-    --gen-subset valid
-```
-Change `--gen-subset` to `test` if you would like to evaluate on the test set instead.
-
-
-### Iterative Product Quantization
-
-To quantize the finetuned RoBERTa model, we use this command on 1 GPU. This should run in a day.
-```bash
-TOTAL_NUM_UPDATES=6108  # 2036 updates for each iteration
-WARMUP_UPDATES=122
-LR=2e-05
-NUM_CLASSES=2
-MAX_SENTENCES=16
-fairseq-train --task sentence_prediction /path/to/data/ \
-    --restore-file $ROBERTA_PATH \
-    --save-dir checkpoints/roberta_finetuned \
-    --max-positions 512 \
-    --batch-size $MAX_SENTENCES \
-    --max-tokens 4400 \
-    --init-token 0 --separator-token 2 \
-    --arch roberta_large \
-    --criterion sentence_prediction \
-    --num-classes $NUM_CLASSES \
-    --dropout 0.1 --attention-dropout 0.1 \
-    --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \
-    --clip-norm 0.0 --lr-scheduler polynomial_decay \
-    --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \
-    --no-progress-bar --skip-invalid-size-inputs-valid-test --ddp-backend legacy_ddp \
-    --quantization-config-path /path/to/config/yaml
-```
-
-To quantize the trained Language Model, we use this command on 8 V100 23GB GPUs. This should run in a couple of hours.
-```bash
-fairseq-train --task language_modeling /path/to/wikitext-103/data \
-    --save-dir checkpoints/transformer_wikitext-103 \
-    --adaptive-input --adaptive-input-cutoff 20000,60000 --adaptive-input-factor 4 \
-    --adaptive-softmax-cutoff 20000,60000 --adaptive-softmax-dropout 0.2 --adaptive-softmax-factor 4.0 \
-    --arch transformer_lm_gbw \
-    --attention-dropout 0.1 --dropout 0.2 --relu-dropout 0.1 \
-    --bucket-cap-mb 25 --char-embedder-highway-layers 2 --character-embedding-dim 4 \
-    --clip-norm 0.1 --criterion adaptive_loss \
-    --ddp-backend legacy_ddp \
-    --decoder-attention-heads 8 --decoder-embed-dim 1024 --decoder-ffn-embed-dim 4096 --decoder-input-dim 1024 --decoder-layers 16 --decoder-normalize-before --decoder-output-dim 1024 \
-    --fp16 --keep-last-epochs -1 \
-    --min-lr 0.0001 --lr-period-updates 270000 --lr-scheduler cosine --lr-shrink 0.75 --lr 0.05 --stop-min-lr 1e-09 \
-    --max-tokens 2944 --tokens-per-sample 2944 \
-    --momentum 0.99 --no-epoch-checkpoints --no-progress-bar --optimizer nag --required-batch-size-multiple 8 \
-    --sample-break-mode none --t-mult 2.0 --skip-invalid-size-inputs-valid-test \
-    --tie-adaptive-proj --tie-adaptive-weights --update-freq 3 --weight-decay 0 --seed 1 \
-    --log-interval 100 \
-    --restore-file path/to/trained/lm/with/quant/noise \
-    --max-update 13500 --quantization-config-path /path/to/config/yaml
-```
-If you have less GPU memory or if your distributed training freezes, try reducing `--max-tokens` and `--tokens-per-sample` (this may reduce the quantized accuracy a bit).
-
-### Remarks
-
-We try to keep the open-sourced code as readable and as easy to plug in as possible. Therefore, we did not test it for the following cases:
-- Scalar quantization with RoBERTa.
-- Quantization with iPQ and `int8` combined.
-
-If you have trouble adapting it, we will be more than happy to help!
-
-## Looking to reproduce the Vision results in the paper?
-
-We are working on open sourcing our code as part of ClassyVision. Please check back.
-
-
-## Having an issue or have a question?
-
-Please open an issue in this repository with the details of your question. Thanks!
diff --git a/kosmos-g/fairseq/examples/quant_noise/transformer_quantization_config.yaml b/kosmos-g/fairseq/examples/quant_noise/transformer_quantization_config.yaml
deleted file mode 100644
index d4be14a93..000000000
--- a/kosmos-g/fairseq/examples/quant_noise/transformer_quantization_config.yaml
+++ /dev/null
@@ -1,33 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-# This file defines example configuration arguments for quantizing
-# a transformer model with product quantization
-
-# Number of Centroids for Product Quantization, by default 256 (byte-aligned)
-n_centroids:
-    Linear:
-        key: in_features
-        value: {"*": 256}
-    Embedding:
-        key: embedding_dim
-        value: {"*": 256}
-
-# Block Sizes for Product Quantization
-# We suggest: 8 for FFN, 4 for ATTN, 4 for embedding projections, 8 for embeddings
-block_sizes:
-    Linear:
-        key: fuzzy_name
-        value: {fc: 8, attn: 4, emb: 4}
-    Embedding:
-        key: fuzzy_name
-        value: {emb: 8}
-
-# Layers to Quantize Sequentially
-# We suggest: first FFN, then EMB, then ATTN
-layers_to_quantize:
-    - decoder\\.layers\\.\d+\\.fc[12]
-    - decoder\\.embed_tokens\\.embeddings\\.[012]\\.[01]
-    - decoder\\.layers\\.\d+\\.self_attn\\.(k_proj|v_proj|q_proj|out_proj)
diff --git a/kosmos-g/fairseq/examples/roberta/README.custom_classification.md b/kosmos-g/fairseq/examples/roberta/README.custom_classification.md
deleted file mode 100644
index 7254bb7d1..000000000
--- a/kosmos-g/fairseq/examples/roberta/README.custom_classification.md
+++ /dev/null
@@ -1,168 +0,0 @@
-# Finetuning RoBERTa on a custom classification task
-
-This example shows how to finetune RoBERTa on the IMDB dataset, but should illustrate the process for most classification tasks.
-
-### 1) Get the data
-
-```bash
-wget http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
-tar zxvf aclImdb_v1.tar.gz
-```
-
-
-### 2) Format data
-
-The `IMDB` dataset stores one sample per file; the Python snippet below merges them into a single file per split (`train` and `dev`) for easier processing.
-```python
-import argparse
-import os
-import random
-from glob import glob
-
-random.seed(0)
-
-def main(args):
-    for split in ['train', 'test']:
-        samples = []
-        for class_label in ['pos', 'neg']:
-            fnames = glob(os.path.join(args.datadir, split, class_label) + '/*.txt')
-            for fname in fnames:
-                with open(fname) as fin:
-                    line = fin.readline()
-                    samples.append((line, 1 if class_label == 'pos' else 0))
-        random.shuffle(samples)
-        out_fname = 'train' if split == 'train' else 'dev'
-        f1 = open(os.path.join(args.datadir, out_fname + '.input0'), 'w')
-        f2 = open(os.path.join(args.datadir, out_fname + '.label'), 'w')
-        for sample in samples:
-            f1.write(sample[0] + '\n')
-            f2.write(str(sample[1]) + '\n')
-        f1.close()
-        f2.close()
-
-if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
-    parser.add_argument('--datadir', default='aclImdb')
-    args = parser.parse_args()
-    main(args)
-```
-
-
-### 3) BPE encode
-
-Run `multiprocessing_bpe_encoder`; you could also do this in the previous step for each sample, but that would be slower.
-```bash
-# Download encoder.json and vocab.bpe
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json'
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe'
-
-for SPLIT in train dev; do
-    python -m examples.roberta.multiprocessing_bpe_encoder \
-        --encoder-json encoder.json \
-        --vocab-bpe vocab.bpe \
-        --inputs "aclImdb/$SPLIT.input0" \
-        --outputs "aclImdb/$SPLIT.input0.bpe" \
-        --workers 60 \
-        --keep-empty
-done
-```
-
-
-### 4) Preprocess data
-
-```bash
-# Download fairseq dictionary.
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt'
-
-fairseq-preprocess \
-    --only-source \
-    --trainpref "aclImdb/train.input0.bpe" \
-    --validpref "aclImdb/dev.input0.bpe" \
-    --destdir "IMDB-bin/input0" \
-    --workers 60 \
-    --srcdict dict.txt
-
-fairseq-preprocess \
-    --only-source \
-    --trainpref "aclImdb/train.label" \
-    --validpref "aclImdb/dev.label" \
-    --destdir "IMDB-bin/label" \
-    --workers 60
-
-```
-
-
-### 5) Run training
-
-```bash
-TOTAL_NUM_UPDATES=7812  # 10 epochs through IMDB for bsz 32
-WARMUP_UPDATES=469      # 6 percent of the number of updates
-LR=1e-05                # Peak LR for polynomial LR scheduler.
-HEAD_NAME=imdb_head     # Custom name for the classification head.
-NUM_CLASSES=2           # Number of classes for the classification task.
-MAX_SENTENCES=8         # Batch size.
-ROBERTA_PATH=/path/to/roberta.large/model.pt
-
-CUDA_VISIBLE_DEVICES=0 fairseq-train IMDB-bin/ \
-    --restore-file $ROBERTA_PATH \
-    --max-positions 512 \
-    --batch-size $MAX_SENTENCES \
-    --max-tokens 4400 \
-    --task sentence_prediction \
-    --reset-optimizer --reset-dataloader --reset-meters \
-    --required-batch-size-multiple 1 \
-    --init-token 0 --separator-token 2 \
-    --arch roberta_large \
-    --criterion sentence_prediction \
-    --classification-head-name $HEAD_NAME \
-    --num-classes $NUM_CLASSES \
-    --dropout 0.1 --attention-dropout 0.1 \
-    --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \
-    --clip-norm 0.0 \
-    --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \
-    --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \
-    --max-epoch 10 \
-    --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \
-    --shorten-method "truncate" \
-    --find-unused-parameters \
-    --update-freq 4
-```
-
-The above command will finetune RoBERTa-large with an effective batch-size of 32
-sentences (`--batch-size=8 --update-freq=4`). The expected
-`best-validation-accuracy` after 10 epochs is ~96.5%.
-
-If you run out of GPU memory, try decreasing `--batch-size` and increasing
-`--update-freq` to compensate.
-
-
-### 6) Load model using hub interface
-
-Now we can load the trained model checkpoint using the RoBERTa hub interface.
-
-Assuming your checkpoints are stored in `checkpoints/`:
-```python
-from fairseq.models.roberta import RobertaModel
-roberta = RobertaModel.from_pretrained(
-    'checkpoints',
-    checkpoint_file='checkpoint_best.pt',
-    data_name_or_path='IMDB-bin'
-)
-roberta.eval()  # disable dropout
-```
-
-Finally you can make predictions using the `imdb_head` (or whatever you set
-`--classification-head-name` to during training):
-```python
-label_fn = lambda label: roberta.task.label_dictionary.string(
-    [label + roberta.task.label_dictionary.nspecial]
-)
-
-tokens = roberta.encode('Best movie this year')
-pred = label_fn(roberta.predict('imdb_head', tokens).argmax().item())
-assert pred == '1'  # positive
-
-tokens = roberta.encode('Worst movie ever')
-pred = label_fn(roberta.predict('imdb_head', tokens).argmax().item())
-assert pred == '0'  # negative
-```
diff --git a/kosmos-g/fairseq/examples/roberta/README.glue.md b/kosmos-g/fairseq/examples/roberta/README.glue.md
deleted file mode 100644
index 4f596d55a..000000000
--- a/kosmos-g/fairseq/examples/roberta/README.glue.md
+++ /dev/null
@@ -1,64 +0,0 @@
-# Finetuning RoBERTa on GLUE tasks
-
-### 1) Download the data from the GLUE website (https://gluebenchmark.com/tasks) using the following commands:
-```bash
-wget https://gist.githubusercontent.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e/raw/17b8dd0d724281ed7c3b2aeeda662b92809aadd5/download_glue_data.py
-python download_glue_data.py --data_dir glue_data --tasks all
-```
-
-### 2) Preprocess GLUE task data:
-```bash
-./examples/roberta/preprocess_GLUE_tasks.sh glue_data <glue_task_name>
-```
-`glue_task_name` is one of the following:
-`{ALL, QQP, MNLI, QNLI, MRPC, RTE, STS-B, SST-2, CoLA}`
-Use `ALL` for preprocessing all the glue tasks.
-
-### 3) Fine-tuning on GLUE task:
-An example fine-tuning command for the `RTE` task:
-```bash
-ROBERTA_PATH=/path/to/roberta/model.pt
-
-CUDA_VISIBLE_DEVICES=0 fairseq-hydra-train --config-dir examples/roberta/config/finetuning --config-name rte \
-task.data=RTE-bin checkpoint.restore_file=$ROBERTA_PATH
-```
-
-There are additional config files for each of the GLUE tasks in the examples/roberta/config/finetuning directory.
-
-**Note:**
-
-a) The above cmd-args and hyperparams are tested on one Nvidia `V100` GPU with `32gb` of memory for each task. Depending on the GPU memory resources available to you, you can increase `--update-freq` and reduce `--batch-size`; see the override example below.
-
-b) All the settings above are suggested settings based on our hyperparam search within a fixed search space (for careful comparison across models). You might be able to find better metrics with a wider hyperparam search.
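-
-Since fine-tuning goes through `fairseq-hydra-train`, those knobs are set as config overrides rather than raw flags. A hypothetical override for the `RTE` config could look like the following; the `dataset.batch_size` and `optimization.update_freq` keys mirror the pretraining tutorial, so double-check them against the task's YAML:
-
-```bash
-# Halve the per-GPU batch size and accumulate gradients over two steps,
-# keeping the effective batch size unchanged (keys assumed, not verified here).
-CUDA_VISIBLE_DEVICES=0 fairseq-hydra-train --config-dir examples/roberta/config/finetuning --config-name rte \
-task.data=RTE-bin checkpoint.restore_file=$ROBERTA_PATH \
-dataset.batch_size=8 optimization.update_freq='[2]'
-```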
-
-### Inference on GLUE task
-After training the model as described in the previous step, you can perform inference on checkpoints in the `checkpoints/` directory using the following Python snippet:
-
-```python
-from fairseq.models.roberta import RobertaModel
-
-roberta = RobertaModel.from_pretrained(
-    'checkpoints/',
-    checkpoint_file='checkpoint_best.pt',
-    data_name_or_path='RTE-bin'
-)
-
-label_fn = lambda label: roberta.task.label_dictionary.string(
-    [label + roberta.task.label_dictionary.nspecial]
-)
-ncorrect, nsamples = 0, 0
-roberta.cuda()
-roberta.eval()
-with open('glue_data/RTE/dev.tsv') as fin:
-    fin.readline()
-    for index, line in enumerate(fin):
-        tokens = line.strip().split('\t')
-        sent1, sent2, target = tokens[1], tokens[2], tokens[3]
-        tokens = roberta.encode(sent1, sent2)
-        prediction = roberta.predict('sentence_classification_head', tokens).argmax().item()
-        prediction_label = label_fn(prediction)
-        ncorrect += int(prediction_label == target)
-        nsamples += 1
-print('| Accuracy: ', float(ncorrect)/float(nsamples))
-```
diff --git a/kosmos-g/fairseq/examples/roberta/README.md b/kosmos-g/fairseq/examples/roberta/README.md
deleted file mode 100644
index ed4d5df52..000000000
--- a/kosmos-g/fairseq/examples/roberta/README.md
+++ /dev/null
@@ -1,296 +0,0 @@
-# RoBERTa: A Robustly Optimized BERT Pretraining Approach
-
-https://arxiv.org/abs/1907.11692
-
-## Introduction
-
-RoBERTa iterates on BERT's pretraining procedure, including training the model longer, with bigger batches over more data; removing the next sentence prediction objective; training on longer sequences; and dynamically changing the masking pattern applied to the training data. See the associated paper for more details.
-
-### What's New:
-
-- December 2020: German model (GottBERT) is available: [GottBERT](https://github.com/pytorch/fairseq/tree/main/examples/gottbert).
-- January 2020: Italian model (UmBERTo) is available from Musixmatch Research: [UmBERTo](https://github.com/musixmatchresearch/umberto).
-- November 2019: French model (CamemBERT) is available: [CamemBERT](https://github.com/pytorch/fairseq/tree/main/examples/camembert).
-- November 2019: Multilingual encoder (XLM-RoBERTa) is available: [XLM-R](https://github.com/pytorch/fairseq/tree/main/examples/xlmr).
-- September 2019: TensorFlow and TPU support via the [transformers library](https://github.com/huggingface/transformers).
-- August 2019: RoBERTa is now supported in the [pytorch-transformers library](https://github.com/huggingface/pytorch-transformers).
-- August 2019: Added [tutorial for finetuning on WinoGrande](https://github.com/pytorch/fairseq/tree/main/examples/roberta/wsc#roberta-training-on-winogrande-dataset).
-- August 2019: Added [tutorial for pretraining RoBERTa using your own data](README.pretraining.md).
- -## Pre-trained models - -Model | Description | # params | Download ----|---|---|--- -`roberta.base` | RoBERTa using the BERT-base architecture | 125M | [roberta.base.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta.base.tar.gz) -`roberta.large` | RoBERTa using the BERT-large architecture | 355M | [roberta.large.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz) -`roberta.large.mnli` | `roberta.large` finetuned on [MNLI](http://www.nyu.edu/projects/bowman/multinli) | 355M | [roberta.large.mnli.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.mnli.tar.gz) -`roberta.large.wsc` | `roberta.large` finetuned on [WSC](wsc/README.md) | 355M | [roberta.large.wsc.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.wsc.tar.gz) - -## Results - -**[GLUE (Wang et al., 2019)](https://gluebenchmark.com/)** -_(dev set, single model, single-task finetuning)_ - -Model | MNLI | QNLI | QQP | RTE | SST-2 | MRPC | CoLA | STS-B ----|---|---|---|---|---|---|---|--- -`roberta.base` | 87.6 | 92.8 | 91.9 | 78.7 | 94.8 | 90.2 | 63.6 | 91.2 -`roberta.large` | 90.2 | 94.7 | 92.2 | 86.6 | 96.4 | 90.9 | 68.0 | 92.4 -`roberta.large.mnli` | 90.2 | - | - | - | - | - | - | - - -**[SuperGLUE (Wang et al., 2019)](https://super.gluebenchmark.com/)** -_(dev set, single model, single-task finetuning)_ - -Model | BoolQ | CB | COPA | MultiRC | RTE | WiC | WSC ----|---|---|---|---|---|---|--- -`roberta.large` | 86.9 | 98.2 | 94.0 | 85.7 | 89.5 | 75.6 | - -`roberta.large.wsc` | - | - | - | - | - | - | 91.3 - -**[SQuAD (Rajpurkar et al., 2018)](https://rajpurkar.github.io/SQuAD-explorer/)** -_(dev set, no additional data used)_ - -Model | SQuAD 1.1 EM/F1 | SQuAD 2.0 EM/F1 ----|---|--- -`roberta.large` | 88.9/94.6 | 86.5/89.4 - -**[RACE (Lai et al., 2017)](http://www.qizhexie.com/data/RACE_leaderboard.html)** -_(test set)_ - -Model | Accuracy | Middle | High ----|---|---|--- -`roberta.large` | 83.2 | 86.5 | 81.3 - -**[HellaSwag (Zellers et al., 2019)](https://rowanzellers.com/hellaswag/)** -_(test set)_ - -Model | Overall | In-domain | Zero-shot | ActivityNet | WikiHow ----|---|---|---|---|--- -`roberta.large` | 85.2 | 87.3 | 83.1 | 74.6 | 90.9 - -**[Commonsense QA (Talmor et al., 2019)](https://www.tau-nlp.org/commonsenseqa)** -_(test set)_ - -Model | Accuracy ----|--- -`roberta.large` (single model) | 72.1 -`roberta.large` (ensemble) | 72.5 - -**[Winogrande (Sakaguchi et al., 2019)](https://arxiv.org/abs/1907.10641)** -_(test set)_ - -Model | Accuracy ----|--- -`roberta.large` | 78.1 - -**[XNLI (Conneau et al., 2018)](https://arxiv.org/abs/1809.05053)** -_(TRANSLATE-TEST)_ - -Model | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur ----|---|---|---|---|---|---|---|---|---|---|---|---|---|---|--- -`roberta.large.mnli` | 91.3 | 82.91 | 84.27 | 81.24 | 81.74 | 83.13 | 78.28 | 76.79 | 76.64 | 74.17 | 74.05 | 77.5 | 70.9 | 66.65 | 66.81 - -## Example usage - -##### Load RoBERTa from torch.hub (PyTorch >= 1.1): -```python -import torch -roberta = torch.hub.load('pytorch/fairseq', 'roberta.large') -roberta.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Load RoBERTa (for PyTorch 1.0 or custom models): -```python -# Download roberta.large model -wget https://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz -tar -xzvf roberta.large.tar.gz - -# Load the model in fairseq -from fairseq.models.roberta import RobertaModel -roberta = RobertaModel.from_pretrained('/path/to/roberta.large', checkpoint_file='model.pt') 
-roberta.eval() # disable dropout (or leave in train mode to finetune) -``` - -##### Apply Byte-Pair Encoding (BPE) to input text: -```python -tokens = roberta.encode('Hello world!') -assert tokens.tolist() == [0, 31414, 232, 328, 2] -roberta.decode(tokens) # 'Hello world!' -``` - -##### Extract features from RoBERTa: -```python -# Extract the last layer's features -last_layer_features = roberta.extract_features(tokens) -assert last_layer_features.size() == torch.Size([1, 5, 1024]) - -# Extract all layer's features (layer 0 is the embedding layer) -all_layers = roberta.extract_features(tokens, return_all_hiddens=True) -assert len(all_layers) == 25 -assert torch.all(all_layers[-1] == last_layer_features) -``` - -##### Use RoBERTa for sentence-pair classification tasks: -```python -# Download RoBERTa already finetuned for MNLI -roberta = torch.hub.load('pytorch/fairseq', 'roberta.large.mnli') -roberta.eval() # disable dropout for evaluation - -# Encode a pair of sentences and make a prediction -tokens = roberta.encode('Roberta is a heavily optimized version of BERT.', 'Roberta is not very optimized.') -roberta.predict('mnli', tokens).argmax() # 0: contradiction - -# Encode another pair of sentences -tokens = roberta.encode('Roberta is a heavily optimized version of BERT.', 'Roberta is based on BERT.') -roberta.predict('mnli', tokens).argmax() # 2: entailment -``` - -##### Register a new (randomly initialized) classification head: -```python -roberta.register_classification_head('new_task', num_classes=3) -logprobs = roberta.predict('new_task', tokens) # tensor([[-1.1050, -1.0672, -1.1245]], grad_fn=<LogSoftmaxBackward>) -``` - -##### Batched prediction: -```python -import torch -from fairseq.data.data_utils import collate_tokens - -roberta = torch.hub.load('pytorch/fairseq', 'roberta.large.mnli') -roberta.eval() - -batch_of_pairs = [ - ['Roberta is a heavily optimized version of BERT.', 'Roberta is not very optimized.'], - ['Roberta is a heavily optimized version of BERT.', 'Roberta is based on BERT.'], - ['potatoes are awesome.', 'I like to run.'], - ['Mars is very far from earth.', 'Mars is very close.'], -] - -batch = collate_tokens( - [roberta.encode(pair[0], pair[1]) for pair in batch_of_pairs], pad_idx=1 -) - -logprobs = roberta.predict('mnli', batch) -print(logprobs.argmax(dim=1)) -# tensor([0, 2, 1, 0]) -``` - -##### Using the GPU: -```python -roberta.cuda() -roberta.predict('new_task', tokens) # tensor([[-1.1050, -1.0672, -1.1245]], device='cuda:0', grad_fn=<LogSoftmaxBackward>) -``` - -## Advanced usage - -#### Filling masks: - -RoBERTa can be used to fill `<mask>` tokens in the input. 
Some examples from the -[Natural Questions dataset](https://ai.google.com/research/NaturalQuestions/): -```python -roberta.fill_mask('The first Star wars movie came out in <mask>', topk=3) -# [('The first Star wars movie came out in 1977', 0.9504708051681519, ' 1977'), ('The first Star wars movie came out in 1978', 0.009986862540245056, ' 1978'), ('The first Star wars movie came out in 1979', 0.009574787691235542, ' 1979')] - -roberta.fill_mask('Vikram samvat calender is official in <mask>', topk=3) -# [('Vikram samvat calender is official in India', 0.21878819167613983, ' India'), ('Vikram samvat calender is official in Delhi', 0.08547237515449524, ' Delhi'), ('Vikram samvat calender is official in Gujarat', 0.07556215673685074, ' Gujarat')] - -roberta.fill_mask('<mask> is the common currency of the European Union', topk=3) -# [('Euro is the common currency of the European Union', 0.9456493854522705, 'Euro'), ('euro is the common currency of the European Union', 0.025748178362846375, 'euro'), ('€ is the common currency of the European Union', 0.011183084920048714, '€')] -``` - -#### Pronoun disambiguation (Winograd Schema Challenge): - -RoBERTa can be used to disambiguate pronouns. First install spaCy and download the English-language model: -```bash -pip install spacy -python -m spacy download en_core_web_lg -``` - -Next load the `roberta.large.wsc` model and call the `disambiguate_pronoun` -function. The pronoun should be surrounded by square brackets (`[]`) and the -query referent surrounded by underscores (`_`), or left blank to return the -predicted candidate text directly: -```python -roberta = torch.hub.load('pytorch/fairseq', 'roberta.large.wsc', user_dir='examples/roberta/wsc') -roberta.cuda() # use the GPU (optional) - -roberta.disambiguate_pronoun('The _trophy_ would not fit in the brown suitcase because [it] was too big.') -# True -roberta.disambiguate_pronoun('The trophy would not fit in the brown _suitcase_ because [it] was too big.') -# False - -roberta.disambiguate_pronoun('The city councilmen refused the demonstrators a permit because [they] feared violence.') -# 'The city councilmen' -roberta.disambiguate_pronoun('The city councilmen refused the demonstrators a permit because [they] advocated violence.') -# 'demonstrators' -``` - -See the [RoBERTA Winograd Schema Challenge (WSC) README](wsc/README.md) for more details on how to train this model. - -#### Extract features aligned to words: - -By default RoBERTa outputs one feature vector per BPE token. You can instead -realign the features to match [spaCy's word-level tokenization](https://spacy.io/usage/linguistic-features#tokenization) -with the `extract_features_aligned_to_words` method. This will compute a -weighted average of the BPE-level features for each word and expose them in -spaCy's `Token.vector` attribute: -```python -doc = roberta.extract_features_aligned_to_words('I said, "hello RoBERTa."') -assert len(doc) == 10 -for tok in doc: - print('{:10}{} (...)'.format(str(tok), tok.vector[:5])) -# <s> tensor([-0.1316, -0.0386, -0.0832, -0.0477, 0.1943], grad_fn=<SliceBackward>) (...) -# I tensor([ 0.0559, 0.1541, -0.4832, 0.0880, 0.0120], grad_fn=<SliceBackward>) (...) -# said tensor([-0.1565, -0.0069, -0.8915, 0.0501, -0.0647], grad_fn=<SliceBackward>) (...) -# , tensor([-0.1318, -0.0387, -0.0834, -0.0477, 0.1944], grad_fn=<SliceBackward>) (...) -# " tensor([-0.0486, 0.1818, -0.3946, -0.0553, 0.0981], grad_fn=<SliceBackward>) (...) 
-# hello tensor([ 0.0079, 0.1799, -0.6204, -0.0777, -0.0923], grad_fn=<SliceBackward>) (...) -# RoBERTa tensor([-0.2339, -0.1184, -0.7343, -0.0492, 0.5829], grad_fn=<SliceBackward>) (...) -# . tensor([-0.1341, -0.1203, -0.1012, -0.0621, 0.1892], grad_fn=<SliceBackward>) (...) -# " tensor([-0.1341, -0.1203, -0.1012, -0.0621, 0.1892], grad_fn=<SliceBackward>) (...) -# </s> tensor([-0.0930, -0.0392, -0.0821, 0.0158, 0.0649], grad_fn=<SliceBackward>) (...) -``` - -#### Evaluating the `roberta.large.mnli` model: - -Example python code snippet to evaluate accuracy on the MNLI `dev_matched` set. -```python -label_map = {0: 'contradiction', 1: 'neutral', 2: 'entailment'} -ncorrect, nsamples = 0, 0 -roberta.cuda() -roberta.eval() -with open('glue_data/MNLI/dev_matched.tsv') as fin: - fin.readline() - for index, line in enumerate(fin): - tokens = line.strip().split('\t') - sent1, sent2, target = tokens[8], tokens[9], tokens[-1] - tokens = roberta.encode(sent1, sent2) - prediction = roberta.predict('mnli', tokens).argmax().item() - prediction_label = label_map[prediction] - ncorrect += int(prediction_label == target) - nsamples += 1 -print('| Accuracy: ', float(ncorrect)/float(nsamples)) -# Expected output: 0.9060 -``` - -## Finetuning - -- [Finetuning on GLUE](README.glue.md) -- [Finetuning on custom classification tasks (e.g., IMDB)](README.custom_classification.md) -- [Finetuning on Winograd Schema Challenge (WSC)](wsc/README.md) -- [Finetuning on Commonsense QA (CQA)](commonsense_qa/README.md) - -## Pretraining using your own data - -See the [tutorial for pretraining RoBERTa using your own data](README.pretraining.md). - -## Citation - -```bibtex -@article{liu2019roberta, - title = {RoBERTa: A Robustly Optimized BERT Pretraining Approach}, - author = {Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and - Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and - Luke Zettlemoyer and Veselin Stoyanov}, - journal={arXiv preprint arXiv:1907.11692}, - year = {2019}, -} -``` diff --git a/kosmos-g/fairseq/examples/roberta/README.pretraining.md b/kosmos-g/fairseq/examples/roberta/README.pretraining.md deleted file mode 100644 index a4e745352..000000000 --- a/kosmos-g/fairseq/examples/roberta/README.pretraining.md +++ /dev/null @@ -1,84 +0,0 @@ -# Pretraining RoBERTa using your own data - -This tutorial will walk you through pretraining RoBERTa over your own data. - -### 1) Preprocess the data - -Data should be preprocessed following the [language modeling format](/examples/language_model), i.e. each document should be separated by an empty line (only useful with `--sample-break-mode complete_doc`). Lines will be concatenated as a 1D text stream during training. - -We'll use the [WikiText-103 dataset](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/) -to demonstrate how to preprocess raw text data with the GPT-2 BPE. Of course -this dataset is quite small, so the resulting pretrained model will perform -poorly, but it gives the general idea. 
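-
-For reference, the expected raw input looks something like this hypothetical two-document file (the empty line marks the document boundary):
-
-```
-This is the first sentence of document one.
-Another line of the same document.
-
-The second document starts after the empty line.
-```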
-
-First download the dataset:
-```bash
-wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip
-unzip wikitext-103-raw-v1.zip
-```
-
-Next encode it with the GPT-2 BPE:
-```bash
-mkdir -p gpt2_bpe
-wget -O gpt2_bpe/encoder.json https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json
-wget -O gpt2_bpe/vocab.bpe https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe
-for SPLIT in train valid test; do \
-    python -m examples.roberta.multiprocessing_bpe_encoder \
-        --encoder-json gpt2_bpe/encoder.json \
-        --vocab-bpe gpt2_bpe/vocab.bpe \
-        --inputs wikitext-103-raw/wiki.${SPLIT}.raw \
-        --outputs wikitext-103-raw/wiki.${SPLIT}.bpe \
-        --keep-empty \
-        --workers 60; \
-done
-```
-
-Finally preprocess/binarize the data using the GPT-2 fairseq dictionary:
-```bash
-wget -O gpt2_bpe/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt
-fairseq-preprocess \
-    --only-source \
-    --srcdict gpt2_bpe/dict.txt \
-    --trainpref wikitext-103-raw/wiki.train.bpe \
-    --validpref wikitext-103-raw/wiki.valid.bpe \
-    --testpref wikitext-103-raw/wiki.test.bpe \
-    --destdir data-bin/wikitext-103 \
-    --workers 60
-```
-
-### 2) Train RoBERTa base
-```bash
-DATA_DIR=data-bin/wikitext-103
-
-fairseq-hydra-train -m --config-dir examples/roberta/config/pretraining \
---config-name base task.data=$DATA_DIR
-```
-
-**Note:** You can optionally resume training the released RoBERTa base model by
-adding `checkpoint.restore_file=/path/to/roberta.base/model.pt`.
-
-**Note:** The above command assumes training on 8x32GB V100 GPUs. Each GPU uses
-a batch size of 16 sequences (`dataset.batch_size`) and accumulates gradients to
-further increase the batch size by 16x (`optimization.update_freq`), for a total batch size
-of 2048 sequences. If you have fewer GPUs or GPUs with less memory you may need
-to reduce `dataset.batch_size` and increase `optimization.update_freq` to compensate.
-Alternatively if you have more GPUs you can decrease `optimization.update_freq` accordingly
-to increase training speed.
-
-**Note:** The learning rate and batch size are tightly connected and need to be
-adjusted together. We generally recommend increasing the learning rate as you
-increase the batch size according to the following table (although it's also
-dataset dependent, so don't rely on the following values too closely):
-
-batch size | peak learning rate
----|---
-256 | 0.0001
-2048 | 0.0005
-8192 | 0.0007
-
-### 3) Load your pretrained model
-```python
-import torch
-
-from fairseq.models.roberta import RobertaModel
-
-roberta = RobertaModel.from_pretrained('checkpoints', 'checkpoint_best.pt', 'path/to/data')
-assert isinstance(roberta.model, torch.nn.Module)
-```
diff --git a/kosmos-g/fairseq/examples/roberta/README.race.md b/kosmos-g/fairseq/examples/roberta/README.race.md
deleted file mode 100644
index 13c917e8e..000000000
--- a/kosmos-g/fairseq/examples/roberta/README.race.md
+++ /dev/null
@@ -1,68 +0,0 @@
-# Finetuning RoBERTa on RACE tasks
-
-### 1) Download the data from the RACE website (http://www.cs.cmu.edu/~glai1/data/race/)
-
-### 2) Preprocess RACE data:
-```bash
-python ./examples/roberta/preprocess_RACE.py --input-dir <input-dir> --output-dir <extracted-data-dir>
-./examples/roberta/preprocess_RACE.sh <extracted-data-dir> <output-dir>
-```
-
-### 3) Fine-tuning on RACE:
-
-```bash
-MAX_EPOCH=5        # Number of training epochs.
-LR=1e-05           # Peak LR for fixed LR scheduler.
-NUM_CLASSES=4
-MAX_SENTENCES=1    # Batch size per GPU.
-UPDATE_FREQ=8      # Accumulate gradients to simulate training on 8 GPUs.
-DATA_DIR=/path/to/race-output-dir
-ROBERTA_PATH=/path/to/roberta/model.pt
-
-CUDA_VISIBLE_DEVICES=0,1 fairseq-train $DATA_DIR --ddp-backend=legacy_ddp \
-    --restore-file $ROBERTA_PATH \
-    --reset-optimizer --reset-dataloader --reset-meters \
-    --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \
-    --task sentence_ranking \
-    --num-classes $NUM_CLASSES \
-    --init-token 0 --separator-token 2 \
-    --max-option-length 128 \
-    --max-positions 512 \
-    --shorten-method "truncate" \
-    --arch roberta_large \
-    --dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \
-    --criterion sentence_ranking \
-    --optimizer adam --adam-betas '(0.9, 0.98)' --adam-eps 1e-06 \
-    --clip-norm 0.0 \
-    --lr-scheduler fixed --lr $LR \
-    --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \
-    --batch-size $MAX_SENTENCES \
-    --required-batch-size-multiple 1 \
-    --update-freq $UPDATE_FREQ \
-    --max-epoch $MAX_EPOCH
-```
-
-**Note:**
-
-a) As contexts in RACE are relatively long, we use a smaller batch size per GPU while increasing `--update-freq` to achieve a larger effective batch size.
-
-b) The above cmd-args and hyperparams are tested on one Nvidia `V100` GPU with `32gb` of memory for each task. Depending on the GPU memory resources available to you, you can increase `--update-freq` and reduce `--batch-size`.
-
-c) The settings in the above command are based on our hyperparam search within a fixed search space (for careful comparison across models). You might be able to find better metrics with a wider hyperparam search.
-
-### 4) Evaluation:
-
-```
-DATA_DIR=/path/to/race-output-dir       # data directory used during training
-MODEL_PATH=/path/to/checkpoint_best.pt  # path to the finetuned model checkpoint
-PREDS_OUT=preds.tsv                     # output file path to save prediction
-TEST_SPLIT=test                         # can be test (Middle) or test1 (High)
-fairseq-validate \
-    $DATA_DIR \
-    --valid-subset $TEST_SPLIT \
-    --path $MODEL_PATH \
-    --batch-size 1 \
-    --task sentence_ranking \
-    --criterion sentence_ranking \
-    --save-predictions $PREDS_OUT
-```
diff --git a/kosmos-g/fairseq/examples/roberta/commonsense_qa/README.md b/kosmos-g/fairseq/examples/roberta/commonsense_qa/README.md
deleted file mode 100644
index 7f386decd..000000000
--- a/kosmos-g/fairseq/examples/roberta/commonsense_qa/README.md
+++ /dev/null
@@ -1,99 +0,0 @@
-# Finetuning RoBERTa on Commonsense QA
-
-We follow a similar approach to [finetuning RACE](../README.race.md). Specifically
-for each question we construct five inputs, one for each of the five candidate
-answer choices. Each input is constructed by concatenating the question and
-candidate answer. We then encode each input and pass the resulting "[CLS]"
-representations through a fully-connected layer to predict the correct answer.
-We train with a standard cross-entropy loss.
-
-We also found it helpful to prepend a prefix of `Q:` to the question and `A:` to
-the answer. The complete input format is:
-```
-<s> Q: Where would I not want a fox? </s> A: hen house </s>
-```
-
-Our final submission is based on a hyperparameter search over the learning rate
-(1e-5, 2e-5, 3e-5), batch size (8, 16), number of training steps (2000, 3000,
-4000) and random seed. We selected the model with the best performance on the
-development set after 100 trials.
-
-### 1) Download data from the Commonsense QA website (https://www.tau-nlp.org/commonsenseqa)
-```bash
-bash examples/roberta/commonsense_qa/download_cqa_data.sh
-```
-
-### 2) Finetune
-
-```bash
-MAX_UPDATES=3000   # Number of training steps.
-WARMUP_UPDATES=150 # Linearly increase LR over this many steps. -LR=1e-05 # Peak LR for polynomial LR scheduler. -MAX_SENTENCES=16 # Batch size. -SEED=1 # Random seed. -ROBERTA_PATH=/path/to/roberta/model.pt -DATA_DIR=data/CommonsenseQA - -# we use the --user-dir option to load the task from -# the examples/roberta/commonsense_qa directory: -FAIRSEQ_PATH=/path/to/fairseq -FAIRSEQ_USER_DIR=${FAIRSEQ_PATH}/examples/roberta/commonsense_qa - -CUDA_VISIBLE_DEVICES=0 fairseq-train --fp16 --ddp-backend=legacy_ddp \ - $DATA_DIR \ - --user-dir $FAIRSEQ_USER_DIR \ - --restore-file $ROBERTA_PATH \ - --reset-optimizer --reset-dataloader --reset-meters \ - --no-epoch-checkpoints --no-last-checkpoints --no-save-optimizer-state \ - --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \ - --task commonsense_qa --init-token 0 --bpe gpt2 \ - --arch roberta_large --max-positions 512 \ - --dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \ - --criterion sentence_ranking --num-classes 5 \ - --optimizer adam --adam-betas '(0.9, 0.98)' --adam-eps 1e-06 --clip-norm 0.0 \ - --lr-scheduler polynomial_decay --lr $LR \ - --warmup-updates $WARMUP_UPDATES --total-num-update $MAX_UPDATES \ - --batch-size $MAX_SENTENCES \ - --max-update $MAX_UPDATES \ - --log-format simple --log-interval 25 \ - --seed $SEED -``` - -The above command assumes training on 1 GPU with 32GB of RAM. For GPUs with -less memory, decrease `--batch-size` and increase `--update-freq` -accordingly to compensate. - -### 3) Evaluate -```python -import json -import torch -from fairseq.models.roberta import RobertaModel -from examples.roberta import commonsense_qa # load the Commonsense QA task -roberta = RobertaModel.from_pretrained('checkpoints', 'checkpoint_best.pt', 'data/CommonsenseQA') -roberta.eval() # disable dropout -roberta.cuda() # use the GPU (optional) -nsamples, ncorrect = 0, 0 -with open('data/CommonsenseQA/valid.jsonl') as h: - for line in h: - example = json.loads(line) - scores = [] - for choice in example['question']['choices']: - input = roberta.encode( - 'Q: ' + example['question']['stem'], - 'A: ' + choice['text'], - no_separator=True - ) - score = roberta.predict('sentence_classification_head', input, return_logits=True) - scores.append(score) - pred = torch.cat(scores).argmax() - answer = ord(example['answerKey']) - ord('A') - nsamples += 1 - if pred == answer: - ncorrect += 1 - -print('Accuracy: ' + str(ncorrect / float(nsamples))) -# Accuracy: 0.7846027846027847 -``` - -The above snippet is not batched, which makes it quite slow. See [instructions -for batched prediction with RoBERTa](https://github.com/pytorch/fairseq/tree/main/examples/roberta#batched-prediction). diff --git a/kosmos-g/fairseq/examples/roberta/commonsense_qa/__init__.py b/kosmos-g/fairseq/examples/roberta/commonsense_qa/__init__.py deleted file mode 100644 index 42d21f35e..000000000 --- a/kosmos-g/fairseq/examples/roberta/commonsense_qa/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . 
import commonsense_qa_task # noqa diff --git a/kosmos-g/fairseq/examples/roberta/commonsense_qa/commonsense_qa_task.py b/kosmos-g/fairseq/examples/roberta/commonsense_qa/commonsense_qa_task.py deleted file mode 100644 index 7d8f8131b..000000000 --- a/kosmos-g/fairseq/examples/roberta/commonsense_qa/commonsense_qa_task.py +++ /dev/null @@ -1,190 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import json -import os - -import numpy as np -import torch -from fairseq.data import ( - Dictionary, - IdDataset, - ListDataset, - NestedDictionaryDataset, - NumelDataset, - NumSamplesDataset, - RawLabelDataset, - RightPadDataset, - SortDataset, - data_utils, - encoders, -) -from fairseq.tasks import LegacyFairseqTask, register_task - - -@register_task("commonsense_qa") -class CommonsenseQATask(LegacyFairseqTask): - """Task to finetune RoBERTa for Commonsense QA.""" - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument( - "data", metavar="DIR", help="path to data directory; we load <split>.jsonl" - ) - parser.add_argument( - "--init-token", - type=int, - default=None, - help="add token at the beginning of each batch item", - ) - parser.add_argument("--num-classes", type=int, default=5) - - def __init__(self, args, vocab): - super().__init__(args) - self.vocab = vocab - self.mask = vocab.add_symbol("<mask>") - - self.bpe = encoders.build_bpe(args) - - @classmethod - def load_dictionary(cls, filename): - """Load the dictionary from the filename - - Args: - filename (str): the filename - """ - dictionary = Dictionary.load(filename) - dictionary.add_symbol("<mask>") - return dictionary - - @classmethod - def setup_task(cls, args, **kwargs): - assert ( - args.criterion == "sentence_ranking" - ), "Must set --criterion=sentence_ranking" - - # load data and label dictionaries - vocab = cls.load_dictionary(os.path.join(args.data, "dict.txt")) - print("| dictionary: {} types".format(len(vocab))) - - return cls(args, vocab) - - def load_dataset( - self, split, epoch=1, combine=False, data_path=None, return_only=False, **kwargs - ): - """Load a given dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - """ - - def binarize(s, append_bos=False): - if self.bpe is not None: - s = self.bpe.encode(s) - tokens = self.vocab.encode_line( - s, - append_eos=True, - add_if_not_exist=False, - ).long() - if append_bos and self.args.init_token is not None: - tokens = torch.cat([tokens.new([self.args.init_token]), tokens]) - return tokens - - if data_path is None: - data_path = os.path.join(self.args.data, split + ".jsonl") - if not os.path.exists(data_path): - raise FileNotFoundError("Cannot find data: {}".format(data_path)) - - src_tokens = [[] for i in range(self.args.num_classes)] - src_lengths = [[] for i in range(self.args.num_classes)] - labels = [] - - with open(data_path) as h: - for line in h: - example = json.loads(line.strip()) - if "answerKey" in example: - label = ord(example["answerKey"]) - ord("A") - labels.append(label) - question = example["question"]["stem"] - assert len(example["question"]["choices"]) == self.args.num_classes - # format: `<s> Q: Where would I not want a fox? 
</s> A: hen house </s>` - question = "Q: " + question - question_toks = binarize(question, append_bos=True) - for i, choice in enumerate(example["question"]["choices"]): - src = "A: " + choice["text"] - src_bin = torch.cat([question_toks, binarize(src)]) - src_tokens[i].append(src_bin) - src_lengths[i].append(len(src_bin)) - assert all( - len(src_tokens[0]) == len(src_tokens[i]) - for i in range(self.args.num_classes) - ) - assert len(src_tokens[0]) == len(src_lengths[0]) - assert len(labels) == 0 or len(labels) == len(src_tokens[0]) - - for i in range(self.args.num_classes): - src_lengths[i] = np.array(src_lengths[i]) - src_tokens[i] = ListDataset(src_tokens[i], src_lengths[i]) - src_lengths[i] = ListDataset(src_lengths[i]) - - dataset = { - "id": IdDataset(), - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_tokens[0], reduce=True), - } - - for i in range(self.args.num_classes): - dataset.update( - { - "net_input{}".format(i + 1): { - "src_tokens": RightPadDataset( - src_tokens[i], - pad_idx=self.source_dictionary.pad(), - ), - "src_lengths": src_lengths[i], - } - } - ) - - if len(labels) > 0: - dataset.update({"target": RawLabelDataset(labels)}) - - dataset = NestedDictionaryDataset( - dataset, - sizes=[np.maximum.reduce([src_token.sizes for src_token in src_tokens])], - ) - - with data_utils.numpy_seed(self.args.seed): - dataset = SortDataset( - dataset, - # shuffle - sort_order=[np.random.permutation(len(dataset))], - ) - - print("| Loaded {} with {} samples".format(split, len(dataset))) - - self.datasets[split] = dataset - return self.datasets[split] - - def build_model(self, args, from_checkpoint=False): - from fairseq import models - - model = models.build_model(args, self) - - model.register_classification_head( - "sentence_classification_head", - num_classes=1, - ) - - return model - - @property - def source_dictionary(self): - return self.vocab - - @property - def target_dictionary(self): - return self.vocab diff --git a/kosmos-g/fairseq/examples/roberta/commonsense_qa/download_cqa_data.sh b/kosmos-g/fairseq/examples/roberta/commonsense_qa/download_cqa_data.sh deleted file mode 100644 index 5f300093f..000000000 --- a/kosmos-g/fairseq/examples/roberta/commonsense_qa/download_cqa_data.sh +++ /dev/null @@ -1,14 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -OUTDIR=data/CommonsenseQA - -mkdir -p $OUTDIR - -wget -O $OUTDIR/train.jsonl https://s3.amazonaws.com/commensenseqa/train_rand_split.jsonl -wget -O $OUTDIR/valid.jsonl https://s3.amazonaws.com/commensenseqa/dev_rand_split.jsonl -wget -O $OUTDIR/test.jsonl https://s3.amazonaws.com/commensenseqa/test_rand_split_no_answers.jsonl -wget -O $OUTDIR/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt diff --git a/kosmos-g/fairseq/examples/roberta/config/finetuning/cola.yaml b/kosmos-g/fairseq/examples/roberta/config/finetuning/cola.yaml deleted file mode 100644 index ac7661120..000000000 --- a/kosmos-g/fairseq/examples/roberta/config/finetuning/cola.yaml +++ /dev/null @@ -1,59 +0,0 @@ -# @package _group_ - -common: - fp16: true - fp16_init_scale: 4 - threshold_loss_scale: 1 - fp16_scale_window: 128 - log_format: json - log_interval: 200 - -task: - _name: sentence_prediction - data: ??? - init_token: 0 - separator_token: 2 - num_classes: 2 - max_positions: 512 - -checkpoint: - restore_file: ??? 
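-  # (??? marks a mandatory value that must be supplied at launch, e.g.
-  # checkpoint.restore_file=/path/to/model.pt on the Hydra command line)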
- reset_optimizer: true - reset_dataloader: true - reset_meters: true - best_checkpoint_metric: accuracy - maximize_best_checkpoint_metric: true - no_epoch_checkpoints: true - -distributed_training: - find_unused_parameters: true - distributed_world_size: 1 - -criterion: - _name: sentence_prediction - -dataset: - batch_size: 16 - required_batch_size_multiple: 1 - max_tokens: 4400 - -optimizer: - _name: adam - weight_decay: 0.1 - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 320 - -optimization: - clip_norm: 0.0 - lr: [1e-05] - max_update: 5336 - max_epoch: 10 - -model: - _name: roberta - dropout: 0.1 - attention_dropout: 0.1 diff --git a/kosmos-g/fairseq/examples/roberta/config/finetuning/mnli.yaml b/kosmos-g/fairseq/examples/roberta/config/finetuning/mnli.yaml deleted file mode 100644 index 5be10c362..000000000 --- a/kosmos-g/fairseq/examples/roberta/config/finetuning/mnli.yaml +++ /dev/null @@ -1,59 +0,0 @@ -# @package _group_ - -common: - fp16: true - fp16_init_scale: 4 - threshold_loss_scale: 1 - fp16_scale_window: 128 - log_format: json - log_interval: 200 - -task: - _name: sentence_prediction - data: ??? - init_token: 0 - separator_token: 2 - num_classes: 3 - max_positions: 512 - -checkpoint: - restore_file: ??? - reset_optimizer: true - reset_dataloader: true - reset_meters: true - best_checkpoint_metric: accuracy - maximize_best_checkpoint_metric: true - no_epoch_checkpoints: true - -distributed_training: - find_unused_parameters: true - distributed_world_size: 1 - -criterion: - _name: sentence_prediction - -dataset: - batch_size: 32 - required_batch_size_multiple: 1 - max_tokens: 4400 - -optimizer: - _name: adam - weight_decay: 0.1 - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 7432 - -optimization: - clip_norm: 0.0 - lr: [1e-05] - max_update: 123873 - max_epoch: 10 - -model: - _name: roberta - dropout: 0.1 - attention_dropout: 0.1 diff --git a/kosmos-g/fairseq/examples/roberta/config/finetuning/mrpc.yaml b/kosmos-g/fairseq/examples/roberta/config/finetuning/mrpc.yaml deleted file mode 100644 index aa8b7db39..000000000 --- a/kosmos-g/fairseq/examples/roberta/config/finetuning/mrpc.yaml +++ /dev/null @@ -1,59 +0,0 @@ -# @package _group_ - -common: - fp16: true - fp16_init_scale: 4 - threshold_loss_scale: 1 - fp16_scale_window: 128 - log_format: json - log_interval: 200 - -task: - _name: sentence_prediction - data: ??? - init_token: 0 - separator_token: 2 - num_classes: 2 - max_positions: 512 - -checkpoint: - restore_file: ??? 
- reset_optimizer: true - reset_dataloader: true - reset_meters: true - best_checkpoint_metric: accuracy - maximize_best_checkpoint_metric: true - no_epoch_checkpoints: true - -distributed_training: - find_unused_parameters: true - distributed_world_size: 1 - -criterion: - _name: sentence_prediction - -dataset: - batch_size: 16 - required_batch_size_multiple: 1 - max_tokens: 4400 - -optimizer: - _name: adam - weight_decay: 0.1 - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 137 - -optimization: - clip_norm: 0.0 - lr: [1e-05] - max_update: 2296 - max_epoch: 10 - -model: - _name: roberta - dropout: 0.1 - attention_dropout: 0.1 diff --git a/kosmos-g/fairseq/examples/roberta/config/finetuning/qnli.yaml b/kosmos-g/fairseq/examples/roberta/config/finetuning/qnli.yaml deleted file mode 100644 index b4595b090..000000000 --- a/kosmos-g/fairseq/examples/roberta/config/finetuning/qnli.yaml +++ /dev/null @@ -1,59 +0,0 @@ -# @package _group_ - -common: - fp16: true - fp16_init_scale: 4 - threshold_loss_scale: 1 - fp16_scale_window: 128 - log_format: json - log_interval: 200 - -task: - _name: sentence_prediction - data: ??? - init_token: 0 - separator_token: 2 - num_classes: 2 - max_positions: 512 - -checkpoint: - restore_file: ??? - reset_optimizer: true - reset_dataloader: true - reset_meters: true - best_checkpoint_metric: accuracy - maximize_best_checkpoint_metric: true - no_epoch_checkpoints: true - -distributed_training: - find_unused_parameters: true - distributed_world_size: 1 - -criterion: - _name: sentence_prediction - -dataset: - batch_size: 32 - required_batch_size_multiple: 1 - max_tokens: 4400 - -optimizer: - _name: adam - weight_decay: 0.1 - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 1986 - -optimization: - clip_norm: 0.0 - lr: [1e-05] - max_update: 33112 - max_epoch: 10 - -model: - _name: roberta - dropout: 0.1 - attention_dropout: 0.1 diff --git a/kosmos-g/fairseq/examples/roberta/config/finetuning/qqp.yaml b/kosmos-g/fairseq/examples/roberta/config/finetuning/qqp.yaml deleted file mode 100644 index 5a2b2ed74..000000000 --- a/kosmos-g/fairseq/examples/roberta/config/finetuning/qqp.yaml +++ /dev/null @@ -1,59 +0,0 @@ -# @package _group_ - -common: - fp16: true - fp16_init_scale: 4 - threshold_loss_scale: 1 - fp16_scale_window: 128 - log_format: json - log_interval: 200 - -task: - _name: sentence_prediction - data: ??? - init_token: 0 - separator_token: 2 - num_classes: 2 - max_positions: 512 - -checkpoint: - restore_file: ??? 
- reset_optimizer: true - reset_dataloader: true - reset_meters: true - best_checkpoint_metric: accuracy - maximize_best_checkpoint_metric: true - no_epoch_checkpoints: true - -distributed_training: - find_unused_parameters: true - distributed_world_size: 1 - -criterion: - _name: sentence_prediction - -dataset: - batch_size: 32 - required_batch_size_multiple: 1 - max_tokens: 4400 - -optimizer: - _name: adam - weight_decay: 0.1 - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 28318 - -optimization: - clip_norm: 0.0 - lr: [1e-05] - max_update: 113272 - max_epoch: 10 - -model: - _name: roberta - dropout: 0.1 - attention_dropout: 0.1 diff --git a/kosmos-g/fairseq/examples/roberta/config/finetuning/rte.yaml b/kosmos-g/fairseq/examples/roberta/config/finetuning/rte.yaml deleted file mode 100644 index 731846501..000000000 --- a/kosmos-g/fairseq/examples/roberta/config/finetuning/rte.yaml +++ /dev/null @@ -1,59 +0,0 @@ -# @package _group_ - -common: - fp16: true - fp16_init_scale: 4 - threshold_loss_scale: 1 - fp16_scale_window: 128 - log_format: json - log_interval: 200 - -task: - _name: sentence_prediction - data: ??? - init_token: 0 - separator_token: 2 - num_classes: 2 - max_positions: 512 - -checkpoint: - restore_file: ??? - reset_optimizer: true - reset_dataloader: true - reset_meters: true - best_checkpoint_metric: accuracy - maximize_best_checkpoint_metric: true - no_epoch_checkpoints: true - -distributed_training: - find_unused_parameters: true - distributed_world_size: 1 - -criterion: - _name: sentence_prediction - -dataset: - batch_size: 16 - required_batch_size_multiple: 1 - max_tokens: 4400 - -optimizer: - _name: adam - weight_decay: 0.1 - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 122 - -optimization: - clip_norm: 0.0 - lr: [2e-05] - max_update: 2036 - max_epoch: 10 - -model: - _name: roberta - dropout: 0.1 - attention_dropout: 0.1 diff --git a/kosmos-g/fairseq/examples/roberta/config/finetuning/sst_2.yaml b/kosmos-g/fairseq/examples/roberta/config/finetuning/sst_2.yaml deleted file mode 100644 index a93ad2f22..000000000 --- a/kosmos-g/fairseq/examples/roberta/config/finetuning/sst_2.yaml +++ /dev/null @@ -1,59 +0,0 @@ -# @package _group_ - -common: - fp16: true - fp16_init_scale: 4 - threshold_loss_scale: 1 - fp16_scale_window: 128 - log_format: json - log_interval: 200 - -task: - _name: sentence_prediction - data: ??? - init_token: 0 - separator_token: 2 - num_classes: 2 - max_positions: 512 - -checkpoint: - restore_file: ??? 
- reset_optimizer: true - reset_dataloader: true - reset_meters: true - best_checkpoint_metric: accuracy - maximize_best_checkpoint_metric: true - no_epoch_checkpoints: true - -distributed_training: - find_unused_parameters: true - distributed_world_size: 1 - -criterion: - _name: sentence_prediction - -dataset: - batch_size: 32 - required_batch_size_multiple: 1 - max_tokens: 4400 - -optimizer: - _name: adam - weight_decay: 0.1 - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 1256 - -optimization: - clip_norm: 0.0 - lr: [1e-05] - max_update: 20935 - max_epoch: 10 - -model: - _name: roberta - dropout: 0.1 - attention_dropout: 0.1 diff --git a/kosmos-g/fairseq/examples/roberta/config/finetuning/sts_b.yaml b/kosmos-g/fairseq/examples/roberta/config/finetuning/sts_b.yaml deleted file mode 100644 index 2d495221a..000000000 --- a/kosmos-g/fairseq/examples/roberta/config/finetuning/sts_b.yaml +++ /dev/null @@ -1,58 +0,0 @@ -# @package _group_ - -common: - fp16: true - fp16_init_scale: 4 - threshold_loss_scale: 1 - fp16_scale_window: 128 - log_format: json - log_interval: 200 - -task: - _name: sentence_prediction - data: ??? - init_token: 0 - separator_token: 2 - num_classes: 1 - max_positions: 512 - -checkpoint: - restore_file: ??? - reset_optimizer: true - reset_dataloader: true - reset_meters: true - no_epoch_checkpoints: true - -distributed_training: - find_unused_parameters: true - distributed_world_size: 1 - -criterion: - _name: sentence_prediction - regression_target: true - -dataset: - batch_size: 16 - required_batch_size_multiple: 1 - max_tokens: 4400 - -optimizer: - _name: adam - weight_decay: 0.1 - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 214 - -optimization: - clip_norm: 0.0 - lr: [2e-05] - max_update: 3598 - max_epoch: 10 - -model: - _name: roberta - dropout: 0.1 - attention_dropout: 0.1 diff --git a/kosmos-g/fairseq/examples/roberta/config/pretraining/base.yaml b/kosmos-g/fairseq/examples/roberta/config/pretraining/base.yaml deleted file mode 100644 index 97829908f..000000000 --- a/kosmos-g/fairseq/examples/roberta/config/pretraining/base.yaml +++ /dev/null @@ -1,42 +0,0 @@ -# @package _group_ -common: - fp16: true - log_format: json - log_interval: 200 - -checkpoint: - no_epoch_checkpoints: true - -task: - _name: masked_lm - data: ??? - sample_break_mode: complete - tokens_per_sample: 512 - -criterion: masked_lm - -dataset: - batch_size: 16 - ignore_unused_valid_subsets: true - -optimizer: - _name: adam - weight_decay: 0.01 - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 10000 - -optimization: - clip_norm: 0 - lr: [0.0005] - max_update: 125000 - update_freq: [16] - -model: - _name: roberta - max_positions: 512 - dropout: 0.1 - attention_dropout: 0.1 diff --git a/kosmos-g/fairseq/examples/roberta/multiprocessing_bpe_encoder.py b/kosmos-g/fairseq/examples/roberta/multiprocessing_bpe_encoder.py deleted file mode 100644 index 43fe0451b..000000000 --- a/kosmos-g/fairseq/examples/roberta/multiprocessing_bpe_encoder.py +++ /dev/null @@ -1,130 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
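-
-# Overview: reads raw text line by line, encodes each line into GPT-2 BPE
-# token ids with a multiprocessing pool, and writes out space-separated ids.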
- -import argparse -import contextlib -import sys -from collections import Counter -from multiprocessing import Pool - -from fairseq.data.encoders.gpt2_bpe import get_encoder - - -def main(): - """ - Helper script to encode raw text with the GPT-2 BPE using multiple processes. - - The encoder.json and vocab.bpe files can be obtained here: - - https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json - - https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe - """ - parser = argparse.ArgumentParser() - parser.add_argument( - "--encoder-json", - help="path to encoder.json", - ) - parser.add_argument( - "--vocab-bpe", - type=str, - help="path to vocab.bpe", - ) - parser.add_argument( - "--inputs", - nargs="+", - default=["-"], - help="input files to filter/encode", - ) - parser.add_argument( - "--outputs", - nargs="+", - default=["-"], - help="path to save encoded outputs", - ) - parser.add_argument( - "--keep-empty", - action="store_true", - help="keep empty lines", - ) - parser.add_argument("--workers", type=int, default=20) - args = parser.parse_args() - - assert len(args.inputs) == len( - args.outputs - ), "number of input and output paths should match" - - with contextlib.ExitStack() as stack: - inputs = [ - stack.enter_context(open(input, "r", encoding="utf-8")) - if input != "-" - else sys.stdin - for input in args.inputs - ] - outputs = [ - stack.enter_context(open(output, "w", encoding="utf-8")) - if output != "-" - else sys.stdout - for output in args.outputs - ] - - encoder = MultiprocessingEncoder(args) - pool = Pool(args.workers, initializer=encoder.initializer) - encoded_lines = pool.imap(encoder.encode_lines, zip(*inputs), 100) - - stats = Counter() - for i, (filt, enc_lines) in enumerate(encoded_lines, start=1): - if filt == "PASS": - for enc_line, output_h in zip(enc_lines, outputs): - print(enc_line, file=output_h) - else: - stats["num_filtered_" + filt] += 1 - if i % 10000 == 0: - print("processed {} lines".format(i), file=sys.stderr) - - for k, v in stats.most_common(): - print("[{}] filtered {} lines".format(k, v), file=sys.stderr) - - -class MultiprocessingEncoder(object): - def __init__(self, args): - self.args = args - - def initializer(self): - global bpe - bpe = get_encoder(self.args.encoder_json, self.args.vocab_bpe) - - def encode(self, line): - global bpe - ids = bpe.encode(line) - return list(map(str, ids)) - - def decode(self, tokens): - global bpe - return bpe.decode(tokens) - - def encode_lines(self, lines): - """ - Encode a set of lines. All lines will be encoded together. - """ - enc_lines = [] - for line in lines: - line = line.strip() - if len(line) == 0 and not self.args.keep_empty: - return ["EMPTY", None] - tokens = self.encode(line) - enc_lines.append(" ".join(tokens)) - return ["PASS", enc_lines] - - def decode_lines(self, lines): - dec_lines = [] - for line in lines: - tokens = map(int, line.strip().split()) - dec_lines.append(self.decode(tokens)) - return ["PASS", dec_lines] - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/roberta/preprocess_GLUE_tasks.sh b/kosmos-g/fairseq/examples/roberta/preprocess_GLUE_tasks.sh deleted file mode 100644 index 7f215a3b5..000000000 --- a/kosmos-g/fairseq/examples/roberta/preprocess_GLUE_tasks.sh +++ /dev/null @@ -1,185 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
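-
-# Overview: for each GLUE task, strip headers, split each TSV into
-# input0/input1/label columns, BPE-encode the inputs, and binarize everything
-# with fairseq-preprocess into <TASK>-bin.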
-
-
-# raw glue data as downloaded by glue download script (https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e)
-if [[ $# -ne 2 ]]; then
-  echo "Run as follows:"
-  echo "./examples/roberta/preprocess_GLUE_tasks.sh <glue_data_folder> <task_name>"
-  exit 1
-fi
-
-GLUE_DATA_FOLDER=$1
-
-# download bpe encoder.json, vocabulary and fairseq dictionary
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json'
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe'
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt'
-
-TASKS=$2 # QQP
-
-if [ "$TASKS" = "ALL" ]
-then
-  TASKS="QQP MNLI QNLI MRPC RTE STS-B SST-2 CoLA"
-fi
-
-for TASK in $TASKS
-do
-  echo "Preprocessing $TASK"
-
-  TASK_DATA_FOLDER="$GLUE_DATA_FOLDER/$TASK"
-  echo "Raw data as downloaded from glue website: $TASK_DATA_FOLDER"
-
-  SPLITS="train dev test"
-  INPUT_COUNT=2
-  if [ "$TASK" = "QQP" ]
-  then
-    INPUT_COLUMNS=( 4 5 )
-    TEST_INPUT_COLUMNS=( 2 3 )
-    LABEL_COLUMN=6
-  elif [ "$TASK" = "MNLI" ]
-  then
-    SPLITS="train dev_matched dev_mismatched test_matched test_mismatched"
-    INPUT_COLUMNS=( 9 10 )
-    TEST_INPUT_COLUMNS=( 9 10 )
-    DEV_LABEL_COLUMN=16
-    LABEL_COLUMN=12
-  elif [ "$TASK" = "QNLI" ]
-  then
-    INPUT_COLUMNS=( 2 3 )
-    TEST_INPUT_COLUMNS=( 2 3 )
-    LABEL_COLUMN=4
-  elif [ "$TASK" = "MRPC" ]
-  then
-    INPUT_COLUMNS=( 4 5 )
-    TEST_INPUT_COLUMNS=( 4 5 )
-    LABEL_COLUMN=1
-  elif [ "$TASK" = "RTE" ]
-  then
-    INPUT_COLUMNS=( 2 3 )
-    TEST_INPUT_COLUMNS=( 2 3 )
-    LABEL_COLUMN=4
-  elif [ "$TASK" = "STS-B" ]
-  then
-    INPUT_COLUMNS=( 8 9 )
-    TEST_INPUT_COLUMNS=( 8 9 )
-    LABEL_COLUMN=10
-  # Following are single sentence tasks.
-  elif [ "$TASK" = "SST-2" ]
-  then
-    INPUT_COLUMNS=( 1 )
-    TEST_INPUT_COLUMNS=( 2 )
-    LABEL_COLUMN=2
-    INPUT_COUNT=1
-  elif [ "$TASK" = "CoLA" ]
-  then
-    INPUT_COLUMNS=( 4 )
-    TEST_INPUT_COLUMNS=( 2 )
-    LABEL_COLUMN=2
-    INPUT_COUNT=1
-  fi
-
-  # Strip out header and filter lines that don't have expected number of fields.
-  rm -rf "$TASK_DATA_FOLDER/processed"
-  mkdir -p "$TASK_DATA_FOLDER/processed"
-  for SPLIT in $SPLITS
-  do
-    # CoLA train and dev don't have a header.
-    if [[ ( "$TASK" = "CoLA") && ( "$SPLIT" != "test" ) ]]
-    then
-      cp "$TASK_DATA_FOLDER/$SPLIT.tsv" "$TASK_DATA_FOLDER/processed/$SPLIT.tsv.temp";
-    else
-      tail -n +2 "$TASK_DATA_FOLDER/$SPLIT.tsv" > "$TASK_DATA_FOLDER/processed/$SPLIT.tsv.temp";
-    fi
-
-    # Remove unformatted lines from train and dev files for QQP dataset.
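-    # (the awk below keeps only rows with exactly NUM_FIELDS tab-separated
-    # columns; malformed rows are silently dropped)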
- if [[ ( "$TASK" = "QQP") && ( "$SPLIT" != "test" ) ]] - then - awk -F '\t' -v NUM_FIELDS=6 'NF==NUM_FIELDS{print}{}' "$TASK_DATA_FOLDER/processed/$SPLIT.tsv.temp" > "$TASK_DATA_FOLDER/processed/$SPLIT.tsv"; - else - cp "$TASK_DATA_FOLDER/processed/$SPLIT.tsv.temp" "$TASK_DATA_FOLDER/processed/$SPLIT.tsv"; - fi - rm "$TASK_DATA_FOLDER/processed/$SPLIT.tsv.temp"; - done - - # Split into input0, input1 and label - for SPLIT in $SPLITS - do - for INPUT_TYPE in $(seq 0 $((INPUT_COUNT-1))) - do - if [[ "$SPLIT" != test* ]] - then - COLUMN_NUMBER=${INPUT_COLUMNS[$INPUT_TYPE]} - else - COLUMN_NUMBER=${TEST_INPUT_COLUMNS[$INPUT_TYPE]} - fi - cut -f"$COLUMN_NUMBER" "$TASK_DATA_FOLDER/processed/$SPLIT.tsv" > "$TASK_DATA_FOLDER/processed/$SPLIT.raw.input$INPUT_TYPE"; - done - - if [[ "$SPLIT" != test* ]] - then - if [ "$TASK" = "MNLI" ] && [ "$SPLIT" != "train" ] - then - cut -f"$DEV_LABEL_COLUMN" "$TASK_DATA_FOLDER/processed/$SPLIT.tsv" > "$TASK_DATA_FOLDER/processed/$SPLIT.label"; - else - cut -f"$LABEL_COLUMN" "$TASK_DATA_FOLDER/processed/$SPLIT.tsv" > "$TASK_DATA_FOLDER/processed/$SPLIT.label"; - fi - fi - - # BPE encode. - for INPUT_TYPE in $(seq 0 $((INPUT_COUNT-1))) - do - LANG="input$INPUT_TYPE" - echo "BPE encoding $SPLIT/$LANG" - python -m examples.roberta.multiprocessing_bpe_encoder \ - --encoder-json encoder.json \ - --vocab-bpe vocab.bpe \ - --inputs "$TASK_DATA_FOLDER/processed/$SPLIT.raw.$LANG" \ - --outputs "$TASK_DATA_FOLDER/processed/$SPLIT.$LANG" \ - --workers 60 \ - --keep-empty; - done - done - - # Remove output directory. - rm -rf "$TASK-bin" - - DEVPREF="$TASK_DATA_FOLDER/processed/dev.LANG" - TESTPREF="$TASK_DATA_FOLDER/processed/test.LANG" - if [ "$TASK" = "MNLI" ] - then - DEVPREF="$TASK_DATA_FOLDER/processed/dev_matched.LANG,$TASK_DATA_FOLDER/processed/dev_mismatched.LANG" - TESTPREF="$TASK_DATA_FOLDER/processed/test_matched.LANG,$TASK_DATA_FOLDER/processed/test_mismatched.LANG" - fi - - # Run fairseq preprocessing: - for INPUT_TYPE in $(seq 0 $((INPUT_COUNT-1))) - do - LANG="input$INPUT_TYPE" - fairseq-preprocess \ - --only-source \ - --trainpref "$TASK_DATA_FOLDER/processed/train.$LANG" \ - --validpref "${DEVPREF//LANG/$LANG}" \ - --testpref "${TESTPREF//LANG/$LANG}" \ - --destdir "$TASK-bin/$LANG" \ - --workers 60 \ - --srcdict dict.txt; - done - if [[ "$TASK" != "STS-B" ]] - then - fairseq-preprocess \ - --only-source \ - --trainpref "$TASK_DATA_FOLDER/processed/train.label" \ - --validpref "${DEVPREF//LANG/label}" \ - --destdir "$TASK-bin/label" \ - --workers 60; - else - # For STS-B output range is converted to be between: [0.0, 1.0] - mkdir -p "$TASK-bin/label" - awk '{print $1 / 5.0 }' "$TASK_DATA_FOLDER/processed/train.label" > "$TASK-bin/label/train.label" - awk '{print $1 / 5.0 }' "$TASK_DATA_FOLDER/processed/dev.label" > "$TASK-bin/label/valid.label" - fi -done diff --git a/kosmos-g/fairseq/examples/roberta/preprocess_RACE.py b/kosmos-g/fairseq/examples/roberta/preprocess_RACE.py deleted file mode 100644 index cdd660727..000000000 --- a/kosmos-g/fairseq/examples/roberta/preprocess_RACE.py +++ /dev/null @@ -1,102 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
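-
-# Overview: flattens the RACE json files into parallel text files; input0
-# holds the passage, input1-input4 hold the four question+option pairs, and
-# the .label files hold the index of the correct answer.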
- -import argparse -import json -import os -import re - - -class InputExample: - def __init__(self, paragraph, qa_list, label): - self.paragraph = paragraph - self.qa_list = qa_list - self.label = label - - -def get_examples(data_dir, set_type): - """ - Extract paragraph and question-answer list from each json file - """ - examples = [] - - levels = ["middle", "high"] - set_type_c = set_type.split("-") - if len(set_type_c) == 2: - levels = [set_type_c[1]] - set_type = set_type_c[0] - for level in levels: - cur_dir = os.path.join(data_dir, set_type, level) - for filename in os.listdir(cur_dir): - cur_path = os.path.join(cur_dir, filename) - with open(cur_path, "r") as f: - cur_data = json.load(f) - answers = cur_data["answers"] - options = cur_data["options"] - questions = cur_data["questions"] - context = cur_data["article"].replace("\n", " ") - context = re.sub(r"\s+", " ", context) - for i in range(len(answers)): - label = ord(answers[i]) - ord("A") - qa_list = [] - question = questions[i] - for j in range(4): - option = options[i][j] - if "_" in question: - qa_cat = question.replace("_", option) - else: - qa_cat = " ".join([question, option]) - qa_cat = re.sub(r"\s+", " ", qa_cat) - qa_list.append(qa_cat) - examples.append(InputExample(context, qa_list, label)) - - return examples - - -def main(): - """ - Helper script to extract paragraphs questions and answers from RACE datasets. - """ - parser = argparse.ArgumentParser() - parser.add_argument( - "--input-dir", - help="input directory for downloaded RACE dataset", - ) - parser.add_argument( - "--output-dir", - help="output directory for extracted data", - ) - args = parser.parse_args() - - if not os.path.exists(args.output_dir): - os.makedirs(args.output_dir, exist_ok=True) - - for set_type in ["train", "dev", "test-middle", "test-high"]: - examples = get_examples(args.input_dir, set_type) - qa_file_paths = [ - os.path.join(args.output_dir, set_type + ".input" + str(i + 1)) - for i in range(4) - ] - qa_files = [open(qa_file_path, "w") for qa_file_path in qa_file_paths] - outf_context_path = os.path.join(args.output_dir, set_type + ".input0") - outf_label_path = os.path.join(args.output_dir, set_type + ".label") - outf_context = open(outf_context_path, "w") - outf_label = open(outf_label_path, "w") - for example in examples: - outf_context.write(example.paragraph + "\n") - for i in range(4): - qa_files[i].write(example.qa_list[i] + "\n") - outf_label.write(str(example.label) + "\n") - - for f in qa_files: - f.close() - outf_label.close() - outf_context.close() - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/roberta/preprocess_RACE.sh b/kosmos-g/fairseq/examples/roberta/preprocess_RACE.sh deleted file mode 100644 index 932d2ab6e..000000000 --- a/kosmos-g/fairseq/examples/roberta/preprocess_RACE.sh +++ /dev/null @@ -1,59 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
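-
-# Overview: BPE-encode the five text streams produced by preprocess_RACE.py,
-# then binarize them with fairseq-preprocess for the sentence_ranking task.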
-
-
-# data should be downloaded and processed with preprocess_RACE.py
-if [[ $# -ne 2 ]]; then
-  echo "Run as follows:"
-  echo "./examples/roberta/preprocess_RACE.sh <race_data_folder> <output_folder>"
-  exit 1
-fi
-
-RACE_DATA_FOLDER=$1
-OUT_DATA_FOLDER=$2
-
-# download bpe encoder.json, vocabulary and fairseq dictionary
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json'
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe'
-wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt'
-
-SPLITS="train dev test-middle test-high"
-INPUT_TYPES="input0 input1 input2 input3 input4"
-for INPUT_TYPE in $INPUT_TYPES
-do
-  for SPLIT in $SPLITS
-  do
-    echo "BPE encoding $SPLIT/$INPUT_TYPE"
-    python -m examples.roberta.multiprocessing_bpe_encoder \
-      --encoder-json encoder.json \
-      --vocab-bpe vocab.bpe \
-      --inputs "$RACE_DATA_FOLDER/$SPLIT.$INPUT_TYPE" \
-      --outputs "$RACE_DATA_FOLDER/$SPLIT.$INPUT_TYPE.bpe" \
-      --workers 10 \
-      --keep-empty;
-
-  done
-done
-
-for INPUT_TYPE in $INPUT_TYPES
-do
-  fairseq-preprocess \
-    --only-source \
-    --trainpref "$RACE_DATA_FOLDER/train.$INPUT_TYPE.bpe" \
-    --validpref "$RACE_DATA_FOLDER/dev.$INPUT_TYPE.bpe" \
-    --testpref "$RACE_DATA_FOLDER/test-middle.$INPUT_TYPE.bpe,$RACE_DATA_FOLDER/test-high.$INPUT_TYPE.bpe" \
-    --destdir "$OUT_DATA_FOLDER/$INPUT_TYPE" \
-    --workers 10 \
-    --srcdict dict.txt;
-done
-
-rm -rf "$OUT_DATA_FOLDER/label"
-mkdir -p "$OUT_DATA_FOLDER/label"
-cp "$RACE_DATA_FOLDER/train.label" "$OUT_DATA_FOLDER/label/"
-cp "$RACE_DATA_FOLDER/dev.label" "$OUT_DATA_FOLDER/label/valid.label"
-cp "$RACE_DATA_FOLDER/test-middle.label" "$OUT_DATA_FOLDER/label/test.label"
-cp "$RACE_DATA_FOLDER/test-high.label" "$OUT_DATA_FOLDER/label/test1.label"
diff --git a/kosmos-g/fairseq/examples/roberta/wsc/README.md b/kosmos-g/fairseq/examples/roberta/wsc/README.md
deleted file mode 100644
index 21a045d99..000000000
--- a/kosmos-g/fairseq/examples/roberta/wsc/README.md
+++ /dev/null
@@ -1,125 +0,0 @@
-# Finetuning RoBERTa on Winograd Schema Challenge (WSC) data
-
-The following instructions can be used to finetune RoBERTa on the WSC training
-data provided by [SuperGLUE](https://super.gluebenchmark.com/).
-
-Note that there is high variance in the results. For our GLUE/SuperGLUE
-submission we swept over the learning rate (1e-5, 2e-5, 3e-5), batch size (16,
-32, 64) and total number of updates (500, 1000, 2000, 3000), as well as the
-random seed. Out of ~100 runs we chose the best 7 models and ensembled them.
-
-**Approach:** The instructions below use a slightly different loss function than
-what's described in the original RoBERTa arXiv paper. In particular,
-[Kocijan et al. (2019)](https://arxiv.org/abs/1905.06290) introduce a margin
-ranking loss between `(query, candidate)` pairs with tunable hyperparameters
-alpha and beta. This is supported in our code as well with the
-`--wsc-margin-alpha` and `--wsc-margin-beta` arguments. However, we achieved
-slightly better (and more robust) results on the development set by instead
-using a single cross entropy loss term over the log-probabilities for the query
-and all mined candidates. **The candidates are mined using spaCy from each input
-sentence in isolation, so the approach remains strictly pointwise.** This
-reduces the number of hyperparameters, and our best model achieved 92.3%
-development set accuracy, compared to ~90% accuracy for the margin loss. Later
-versions of the RoBERTa arXiv paper will describe this updated formulation.
-
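-For concreteness, here is a minimal sketch of the two loss formulations
-(mirroring `get_loss` in `wsc_criterion.py` below), where `query_lprob` is the
-average masked-LM log-probability of the query span and `cand_lprobs` holds the
-same score for each mined candidate span:
-
-```python
-import torch
-import torch.nn.functional as F
-
-def margin_loss(query_lprob, cand_lprobs, alpha=1.0, beta=0.0):
-    # Kocijan et al. (2019): penalize any candidate whose score comes
-    # within `beta` of the query's score, scaled by `alpha`
-    return (-query_lprob
-            + alpha * (cand_lprobs - query_lprob + beta).clamp(min=0)).sum()
-
-def cross_entropy_loss(query_lprob, cand_lprobs):
-    # the formulation used here: one cross-entropy term over the query and
-    # all mined candidates, with the query (index 0) as the correct class
-    logits = torch.cat([query_lprob.view(1), cand_lprobs]).unsqueeze(0)
-    return F.cross_entropy(logits, logits.new_zeros(1).long())
-```
-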
-### 1) Download the WSC data from the SuperGLUE website:
-```bash
-wget https://dl.fbaipublicfiles.com/glue/superglue/data/v2/WSC.zip
-unzip WSC.zip
-
-# we also need to copy the RoBERTa dictionary into the same directory
-wget -O WSC/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt
-```
-
-### 2) Finetune over the provided training data:
-```bash
-TOTAL_NUM_UPDATES=2000  # Total number of training steps.
-WARMUP_UPDATES=250      # Linearly increase LR over this many steps.
-LR=2e-05                # Peak LR for polynomial LR scheduler.
-MAX_SENTENCES=16        # Batch size per GPU.
-SEED=1                  # Random seed.
-ROBERTA_PATH=/path/to/roberta/model.pt
-
-# we use the --user-dir option to load the task and criterion
-# from the examples/roberta/wsc directory:
-FAIRSEQ_PATH=/path/to/fairseq
-FAIRSEQ_USER_DIR=${FAIRSEQ_PATH}/examples/roberta/wsc
-
-CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train WSC/ \
-    --restore-file $ROBERTA_PATH \
-    --reset-optimizer --reset-dataloader --reset-meters \
-    --no-epoch-checkpoints --no-last-checkpoints --no-save-optimizer-state \
-    --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \
-    --valid-subset val \
-    --fp16 --ddp-backend legacy_ddp \
-    --user-dir $FAIRSEQ_USER_DIR \
-    --task wsc --criterion wsc --wsc-cross-entropy \
-    --arch roberta_large --bpe gpt2 --max-positions 512 \
-    --dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \
-    --optimizer adam --adam-betas '(0.9, 0.98)' --adam-eps 1e-06 \
-    --lr-scheduler polynomial_decay --lr $LR \
-    --warmup-updates $WARMUP_UPDATES --total-num-update $TOTAL_NUM_UPDATES \
-    --batch-size $MAX_SENTENCES \
-    --max-update $TOTAL_NUM_UPDATES \
-    --log-format simple --log-interval 100 \
-    --seed $SEED
-```
-
-The above command assumes training on 4 GPUs, but you can achieve the same
-results on a single GPU by adding `--update-freq=4`.
-
-### 3) Evaluate
-```python
-from fairseq.models.roberta import RobertaModel
-from examples.roberta.wsc import wsc_utils  # also loads WSC task and criterion
-roberta = RobertaModel.from_pretrained('checkpoints', 'checkpoint_best.pt', 'WSC/')
-roberta.cuda()
-nsamples, ncorrect = 0, 0
-for sentence, label in wsc_utils.jsonl_iterator('WSC/val.jsonl', eval=True):
-    pred = roberta.disambiguate_pronoun(sentence)
-    nsamples += 1
-    if pred == label:
-        ncorrect += 1
-print('Accuracy: ' + str(ncorrect / float(nsamples)))
-# Accuracy: 0.9230769230769231
-```
-
-## RoBERTa training on WinoGrande dataset
-We have also provided a `winogrande` task and criterion for finetuning on
-[WinoGrande](https://mosaic.allenai.org/projects/winogrande)-like datasets,
-where there are always exactly two candidates and one is correct. This is a
-more efficient implementation for that special case.
-
-```bash
-TOTAL_NUM_UPDATES=23750  # Total number of training steps.
-WARMUP_UPDATES=2375      # Linearly increase LR over this many steps.
-LR=1e-05                 # Peak LR for polynomial LR scheduler.
-MAX_SENTENCES=32         # Batch size per GPU.
-SEED=1                   # Random seed.
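-# (this recipe uses the margin ranking loss; see the --wsc-margin-alpha
-# and --wsc-margin-beta flags below)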
-ROBERTA_PATH=/path/to/roberta/model.pt - -# we use the --user-dir option to load the task and criterion -# from the examples/roberta/wsc directory: -FAIRSEQ_PATH=/path/to/fairseq -FAIRSEQ_USER_DIR=${FAIRSEQ_PATH}/examples/roberta/wsc - -cd fairseq -CUDA_VISIBLE_DEVICES=0 fairseq-train winogrande_1.0/ \ - --restore-file $ROBERTA_PATH \ - --reset-optimizer --reset-dataloader --reset-meters \ - --no-epoch-checkpoints --no-last-checkpoints --no-save-optimizer-state \ - --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \ - --valid-subset val \ - --fp16 --ddp-backend legacy_ddp \ - --user-dir $FAIRSEQ_USER_DIR \ - --task winogrande --criterion winogrande \ - --wsc-margin-alpha 5.0 --wsc-margin-beta 0.4 \ - --arch roberta_large --bpe gpt2 --max-positions 512 \ - --dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \ - --optimizer adam --adam-betas '(0.9, 0.98)' --adam-eps 1e-06 \ - --lr-scheduler polynomial_decay --lr $LR \ - --warmup-updates $WARMUP_UPDATES --total-num-update $TOTAL_NUM_UPDATES \ - --batch-size $MAX_SENTENCES \ - --max-update $TOTAL_NUM_UPDATES \ - --log-format simple --log-interval 100 -``` diff --git a/kosmos-g/fairseq/examples/roberta/wsc/__init__.py b/kosmos-g/fairseq/examples/roberta/wsc/__init__.py deleted file mode 100644 index 78afa4728..000000000 --- a/kosmos-g/fairseq/examples/roberta/wsc/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import wsc_criterion # noqa -from . import wsc_task # noqa diff --git a/kosmos-g/fairseq/examples/roberta/wsc/wsc_criterion.py b/kosmos-g/fairseq/examples/roberta/wsc/wsc_criterion.py deleted file mode 100644 index ed0251fde..000000000 --- a/kosmos-g/fairseq/examples/roberta/wsc/wsc_criterion.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
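-
-# Overview: scores each (query, candidate) span by masking it out, running the
-# masked LM, and averaging the log-probabilities of the original span tokens
-# (see get_lprobs below); the loss then compares the query's score against the
-# mined candidates' scores.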
- -import math - -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.criterions import LegacyFairseqCriterion, register_criterion -from fairseq.data import encoders - - -@register_criterion("wsc") -class WSCCriterion(LegacyFairseqCriterion): - def __init__(self, args, task): - super().__init__(args, task) - if self.args.save_predictions is not None: - self.prediction_h = open(self.args.save_predictions, "w") - else: - self.prediction_h = None - self.bpe = encoders.build_bpe(args.bpe) - self.tokenizer = encoders.build_tokenizer(args.tokenizer) - - def __del__(self): - if self.prediction_h is not None: - self.prediction_h.close() - - @staticmethod - def add_args(parser): - """Add criterion-specific arguments to the parser.""" - parser.add_argument("--wsc-margin-alpha", type=float, metavar="A", default=1.0) - parser.add_argument("--wsc-margin-beta", type=float, metavar="B", default=0.0) - parser.add_argument( - "--wsc-cross-entropy", - action="store_true", - help="use cross entropy formulation instead of margin loss", - ) - parser.add_argument( - "--save-predictions", metavar="FILE", help="file to save predictions to" - ) - - def get_masked_input(self, tokens, mask): - masked_tokens = tokens.clone() - masked_tokens[mask] = self.task.mask - return masked_tokens - - def get_lprobs(self, model, tokens, mask): - logits, _ = model(src_tokens=self.get_masked_input(tokens, mask)) - lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float) - scores = lprobs.gather(2, tokens.unsqueeze(-1)).squeeze(-1) - mask = mask.type_as(scores) - scores = (scores * mask).sum(dim=-1) / mask.sum(dim=-1) - return scores - - def get_loss(self, query_lprobs, cand_lprobs): - if self.args.wsc_cross_entropy: - return F.cross_entropy( - torch.cat([query_lprobs, cand_lprobs]).unsqueeze(0), - query_lprobs.new([0]).long(), - ) - else: - return ( - -query_lprobs - + self.args.wsc_margin_alpha - * (cand_lprobs - query_lprobs + self.args.wsc_margin_beta).clamp(min=0) - ).sum() - - def forward(self, model, sample, reduce=True): - # compute loss and accuracy - loss, nloss = 0.0, 0 - ncorrect, nqueries = 0, 0 - - for i, label in enumerate(sample["labels"]): - query_lprobs = self.get_lprobs( - model, - sample["query_tokens"][i].unsqueeze(0), - sample["query_masks"][i].unsqueeze(0), - ) - cand_lprobs = self.get_lprobs( - model, - sample["candidate_tokens"][i], - sample["candidate_masks"][i], - ) - - pred = (query_lprobs >= cand_lprobs).all().item() - - if label is not None: - label = 1 if label else 0 - ncorrect += 1 if pred == label else 0 - nqueries += 1 - - if label: - # only compute a loss for positive instances - nloss += 1 - loss += self.get_loss(query_lprobs, cand_lprobs) - - id = sample["id"][i].item() - if self.prediction_h is not None: - print("{}\t{}\t{}".format(id, pred, label), file=self.prediction_h) - - if nloss == 0: - loss = torch.tensor(0.0, requires_grad=True) - - sample_size = nqueries if nqueries > 0 else 1 - logging_output = { - "loss": utils.item(loss.data) if reduce else loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["nsentences"], - "sample_size": sample_size, - "ncorrect": ncorrect, - "nqueries": nqueries, - } - return loss, sample_size, logging_output - - @staticmethod - def aggregate_logging_outputs(logging_outputs): - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for 
log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - agg_output = { - "loss": loss_sum / sample_size / math.log(2), - "ntokens": ntokens, - "nsentences": nsentences, - "sample_size": sample_size, - } - - ncorrect = sum(log.get("ncorrect", 0) for log in logging_outputs) - nqueries = sum(log.get("nqueries", 0) for log in logging_outputs) - if nqueries > 0: - agg_output["accuracy"] = ncorrect / float(nqueries) - - return agg_output - - -@register_criterion("winogrande") -class WinograndeCriterion(WSCCriterion): - def forward(self, model, sample, reduce=True): - # compute loss and accuracy - query_lprobs = self.get_lprobs( - model, - sample["query_tokens"], - sample["query_masks"], - ) - cand_lprobs = self.get_lprobs( - model, - sample["candidate_tokens"], - sample["candidate_masks"], - ) - pred = query_lprobs >= cand_lprobs - loss = self.get_loss(query_lprobs, cand_lprobs) - - sample_size = sample["query_tokens"].size(0) - ncorrect = pred.sum().item() - logging_output = { - "loss": utils.item(loss.data) if reduce else loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["nsentences"], - "sample_size": sample_size, - "ncorrect": ncorrect, - "nqueries": sample_size, - } - return loss, sample_size, logging_output diff --git a/kosmos-g/fairseq/examples/roberta/wsc/wsc_task.py b/kosmos-g/fairseq/examples/roberta/wsc/wsc_task.py deleted file mode 100644 index 602ea737e..000000000 --- a/kosmos-g/fairseq/examples/roberta/wsc/wsc_task.py +++ /dev/null @@ -1,401 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import json -import os -import tempfile - -import numpy as np -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.data import ( - Dictionary, - IdDataset, - ListDataset, - NestedDictionaryDataset, - NumelDataset, - NumSamplesDataset, - PadDataset, - SortDataset, - data_utils, - encoders, -) -from fairseq.tasks import LegacyFairseqTask, register_task - -from . 
import wsc_utils - - -@register_task("wsc") -class WSCTask(LegacyFairseqTask): - """Task to finetune RoBERTa for Winograd Schemas.""" - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument( - "data", metavar="DIR", help="path to data directory; we load <split>.jsonl" - ) - parser.add_argument( - "--init-token", - type=int, - default=None, - help="add token at the beginning of each batch item", - ) - - def __init__(self, args, vocab): - super().__init__(args) - self.vocab = vocab - self.mask = vocab.add_symbol("<mask>") - - self.bpe = encoders.build_bpe(args) - self.tokenizer = encoders.build_tokenizer(args) - - # hack to handle GPT-2 BPE, which includes leading spaces - if args.bpe == "gpt2": - self.leading_space = True - self.trailing_space = False - else: - self.leading_space = False - self.trailing_space = True - - @classmethod - def load_dictionary(cls, filename): - """Load the dictionary from the filename - - Args: - filename (str): the filename - """ - dictionary = Dictionary.load(filename) - dictionary.add_symbol("<mask>") - return dictionary - - @classmethod - def setup_task(cls, args, **kwargs): - assert args.criterion == "wsc", "Must set --criterion=wsc" - - # load data and label dictionaries - vocab = cls.load_dictionary(os.path.join(args.data, "dict.txt")) - print("| dictionary: {} types".format(len(vocab))) - - return cls(args, vocab) - - def binarize(self, s: str, append_eos: bool = False): - if self.tokenizer is not None: - s = self.tokenizer.encode(s) - if self.bpe is not None: - s = self.bpe.encode(s) - tokens = self.vocab.encode_line( - s, - append_eos=append_eos, - add_if_not_exist=False, - ).long() - if self.args.init_token is not None: - tokens = torch.cat([tokens.new([self.args.init_token]), tokens]) - return tokens - - def binarize_with_mask(self, txt, prefix, suffix, leading_space, trailing_space): - toks = self.binarize( - prefix + leading_space + txt + trailing_space + suffix, - append_eos=True, - ) - mask = torch.zeros_like(toks, dtype=torch.bool) - mask_start = len(self.binarize(prefix)) - mask_size = len(self.binarize(leading_space + txt)) - mask[mask_start : mask_start + mask_size] = 1 - return toks, mask - - def load_dataset( - self, split, epoch=1, combine=False, data_path=None, return_only=False, **kwargs - ): - """Load a given dataset split. 
- - Args: - split (str): name of the split (e.g., train, valid, test) - """ - if data_path is None: - data_path = os.path.join(self.args.data, split + ".jsonl") - if not os.path.exists(data_path): - raise FileNotFoundError("Cannot find data: {}".format(data_path)) - - query_tokens = [] - query_masks = [] - query_lengths = [] - candidate_tokens = [] - candidate_masks = [] - candidate_lengths = [] - labels = [] - - for sentence, pronoun_span, query, label in wsc_utils.jsonl_iterator(data_path): - prefix = sentence[: pronoun_span.start].text - suffix = sentence[pronoun_span.end :].text_with_ws - - # spaCy spans include trailing spaces, but we need to know about - # leading spaces for the GPT-2 BPE - leading_space = ( - " " if sentence[: pronoun_span.start].text_with_ws.endswith(" ") else "" - ) - trailing_space = " " if pronoun_span.text_with_ws.endswith(" ") else "" - - # get noun phrases, excluding pronouns and anything overlapping with the query - cand_spans = wsc_utils.filter_noun_chunks( - wsc_utils.extended_noun_chunks(sentence), - exclude_pronouns=True, - exclude_query=query, - exact_match=False, - ) - - if query is not None: - query_toks, query_mask = self.binarize_with_mask( - query, prefix, suffix, leading_space, trailing_space - ) - query_len = len(query_toks) - else: - query_toks, query_mask, query_len = None, None, 0 - - query_tokens.append(query_toks) - query_masks.append(query_mask) - query_lengths.append(query_len) - - cand_toks, cand_masks = [], [] - for cand_span in cand_spans: - toks, mask = self.binarize_with_mask( - cand_span.text, - prefix, - suffix, - leading_space, - trailing_space, - ) - cand_toks.append(toks) - cand_masks.append(mask) - - # collate candidates - cand_toks = data_utils.collate_tokens(cand_toks, pad_idx=self.vocab.pad()) - cand_masks = data_utils.collate_tokens(cand_masks, pad_idx=0) - assert cand_toks.size() == cand_masks.size() - - candidate_tokens.append(cand_toks) - candidate_masks.append(cand_masks) - candidate_lengths.append(cand_toks.size(1)) - - labels.append(label) - - query_lengths = np.array(query_lengths) - query_tokens = ListDataset(query_tokens, query_lengths) - query_masks = ListDataset(query_masks, query_lengths) - - candidate_lengths = np.array(candidate_lengths) - candidate_tokens = ListDataset(candidate_tokens, candidate_lengths) - candidate_masks = ListDataset(candidate_masks, candidate_lengths) - - labels = ListDataset(labels, [1] * len(labels)) - - dataset = { - "id": IdDataset(), - "query_tokens": query_tokens, - "query_masks": query_masks, - "candidate_tokens": candidate_tokens, - "candidate_masks": candidate_masks, - "labels": labels, - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(query_tokens, reduce=True), - } - - nested_dataset = NestedDictionaryDataset( - dataset, - sizes=[query_lengths], - ) - - with data_utils.numpy_seed(self.args.seed): - shuffle = np.random.permutation(len(query_tokens)) - dataset = SortDataset( - nested_dataset, - # shuffle - sort_order=[shuffle], - ) - - if return_only: - return dataset - - self.datasets[split] = dataset - return self.datasets[split] - - def build_dataset_for_inference(self, sample_json): - with tempfile.NamedTemporaryFile(buffering=0) as h: - h.write((json.dumps(sample_json) + "\n").encode("utf-8")) - dataset = self.load_dataset( - "disambiguate_pronoun", - data_path=h.name, - return_only=True, - ) - return dataset - - def disambiguate_pronoun(self, model, sentence, use_cuda=False): - sample_json = wsc_utils.convert_sentence_to_json(sentence) - dataset = 
self.build_dataset_for_inference(sample_json) - sample = dataset.collater([dataset[0]]) - if use_cuda: - sample = utils.move_to_cuda(sample) - - def get_masked_input(tokens, mask): - masked_tokens = tokens.clone() - masked_tokens[mask.bool()] = self.mask - return masked_tokens - - def get_lprobs(tokens, mask): - logits, _ = model(src_tokens=get_masked_input(tokens, mask)) - lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float) - scores = lprobs.gather(2, tokens.unsqueeze(-1)).squeeze(-1) - mask = mask.type_as(scores) - scores = (scores * mask).sum(dim=-1) / mask.sum(dim=-1) - return scores - - cand_lprobs = get_lprobs( - sample["candidate_tokens"][0], - sample["candidate_masks"][0], - ) - if sample["query_tokens"][0] is not None: - query_lprobs = get_lprobs( - sample["query_tokens"][0].unsqueeze(0), - sample["query_masks"][0].unsqueeze(0), - ) - return (query_lprobs >= cand_lprobs).all().item() == 1 - else: - best_idx = cand_lprobs.argmax().item() - full_cand = sample["candidate_tokens"][0][best_idx] - mask = sample["candidate_masks"][0][best_idx] - toks = full_cand[mask.bool()] - return self.bpe.decode(self.source_dictionary.string(toks)).strip() - - @property - def source_dictionary(self): - return self.vocab - - @property - def target_dictionary(self): - return self.vocab - - -@register_task("winogrande") -class WinograndeTask(WSCTask): - """ - Task for WinoGrande dataset. Efficient implementation for Winograd schema - tasks with exactly two candidates, one of which is correct. - """ - - @classmethod - def setup_task(cls, args, **kwargs): - assert args.criterion == "winogrande", "Must set --criterion=winogrande" - - # load data and label dictionaries - vocab = cls.load_dictionary(os.path.join(args.data, "dict.txt")) - print("| dictionary: {} types".format(len(vocab))) - - return cls(args, vocab) - - def load_dataset( - self, split, epoch=1, combine=False, data_path=None, return_only=False, **kwargs - ): - """Load a given dataset split. 
- - Args: - split (str): name of the split (e.g., train, valid, test) - """ - if data_path is None: - data_path = os.path.join(self.args.data, split + ".jsonl") - if not os.path.exists(data_path): - raise FileNotFoundError("Cannot find data: {}".format(data_path)) - - query_tokens = [] - query_masks = [] - query_lengths = [] - candidate_tokens = [] - candidate_masks = [] - candidate_lengths = [] - - itr = wsc_utils.winogrande_jsonl_iterator(data_path, eval=(split == "test")) - - for sample in itr: - sentence, pronoun_span, query, cand_text = sample - prefix = sentence[: pronoun_span[0]].rstrip() - suffix = sentence[pronoun_span[1] :] - - leading_space = " " if sentence[: pronoun_span[0]].endswith(" ") else "" - trailing_space = "" - - if query is not None: - query_toks, query_mask = self.binarize_with_mask( - query, - prefix, - suffix, - leading_space, - trailing_space, - ) - query_len = len(query_toks) - else: - query_toks, query_mask, query_len = None, None, 0 - - query_tokens.append(query_toks) - query_masks.append(query_mask) - query_lengths.append(query_len) - - cand_toks, cand_mask = self.binarize_with_mask( - cand_text, - prefix, - suffix, - leading_space, - trailing_space, - ) - - candidate_tokens.append(cand_toks) - candidate_masks.append(cand_mask) - candidate_lengths.append(cand_toks.size(0)) - - query_lengths = np.array(query_lengths) - - def get_pad_dataset_fn(tokens, length, pad_idx): - return PadDataset( - ListDataset(tokens, length), - pad_idx=pad_idx, - left_pad=False, - ) - - query_tokens = get_pad_dataset_fn(query_tokens, query_lengths, self.vocab.pad()) - query_masks = get_pad_dataset_fn(query_masks, query_lengths, 0) - - candidate_lengths = np.array(candidate_lengths) - candidate_tokens = get_pad_dataset_fn( - candidate_tokens, candidate_lengths, self.vocab.pad() - ) - candidate_masks = get_pad_dataset_fn(candidate_masks, candidate_lengths, 0) - - dataset = { - "id": IdDataset(), - "query_tokens": query_tokens, - "query_masks": query_masks, - "candidate_tokens": candidate_tokens, - "candidate_masks": candidate_masks, - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(query_tokens, reduce=True), - } - - nested_dataset = NestedDictionaryDataset( - dataset, - sizes=[query_lengths], - ) - - with data_utils.numpy_seed(self.args.seed): - shuffle = np.random.permutation(len(query_tokens)) - dataset = SortDataset( - nested_dataset, - # shuffle - sort_order=[shuffle], - ) - - if return_only: - return dataset - - self.datasets[split] = dataset - return self.datasets[split] diff --git a/kosmos-g/fairseq/examples/roberta/wsc/wsc_utils.py b/kosmos-g/fairseq/examples/roberta/wsc/wsc_utils.py deleted file mode 100644 index da6ba7438..000000000 --- a/kosmos-g/fairseq/examples/roberta/wsc/wsc_utils.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
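-
-# Overview: helpers for parsing WSC/WinoGrande jsonl files: converting the
-# bracketed/underscored sentence markup into spans, mining candidate noun
-# chunks with spaCy, and detokenizing text around the pronoun.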
- -import json -from functools import lru_cache - - -def convert_sentence_to_json(sentence): - if "_" in sentence: - prefix, rest = sentence.split("_", 1) - query, rest = rest.split("_", 1) - query_index = len(prefix.rstrip().split(" ")) - else: - query, query_index = None, None - - prefix, rest = sentence.split("[", 1) - pronoun, rest = rest.split("]", 1) - pronoun_index = len(prefix.rstrip().split(" ")) - - sentence = sentence.replace("_", "").replace("[", "").replace("]", "") - - return { - "idx": 0, - "text": sentence, - "target": { - "span1_index": query_index, - "span1_text": query, - "span2_index": pronoun_index, - "span2_text": pronoun, - }, - } - - -def extended_noun_chunks(sentence): - noun_chunks = {(np.start, np.end) for np in sentence.noun_chunks} - np_start, cur_np = 0, "NONE" - for i, token in enumerate(sentence): - np_type = token.pos_ if token.pos_ in {"NOUN", "PROPN"} else "NONE" - if np_type != cur_np: - if cur_np != "NONE": - noun_chunks.add((np_start, i)) - if np_type != "NONE": - np_start = i - cur_np = np_type - if cur_np != "NONE": - noun_chunks.add((np_start, len(sentence))) - return [sentence[s:e] for (s, e) in sorted(noun_chunks)] - - -def find_token(sentence, start_pos): - found_tok = None - for tok in sentence: - if tok.idx == start_pos: - found_tok = tok - break - return found_tok - - -def find_span(sentence, search_text, start=0): - search_text = search_text.lower() - for tok in sentence[start:]: - remainder = sentence[tok.i :].text.lower() - if remainder.startswith(search_text): - len_to_consume = len(search_text) - start_idx = tok.idx - for next_tok in sentence[tok.i :]: - end_idx = next_tok.idx + len(next_tok.text) - if end_idx - start_idx == len_to_consume: - span = sentence[tok.i : next_tok.i + 1] - return span - return None - - -@lru_cache(maxsize=1) -def get_detokenizer(): - from sacremoses import MosesDetokenizer - - detok = MosesDetokenizer(lang="en") - return detok - - -@lru_cache(maxsize=1) -def get_spacy_nlp(): - import en_core_web_lg - - nlp = en_core_web_lg.load() - return nlp - - -def jsonl_iterator(input_fname, positive_only=False, ngram_order=3, eval=False): - detok = get_detokenizer() - nlp = get_spacy_nlp() - - with open(input_fname) as fin: - for line in fin: - sample = json.loads(line.strip()) - - if positive_only and "label" in sample and not sample["label"]: - # only consider examples where the query is correct - continue - - target = sample["target"] - - # clean up the query - query = target["span1_text"] - if query is not None: - if "\n" in query: - continue - if query.endswith(".") or query.endswith(","): - query = query[:-1] - - # split tokens - tokens = sample["text"].split(" ") - - def strip_pronoun(x): - return x.rstrip('.,"') - - # find the pronoun - pronoun_idx = target["span2_index"] - pronoun = strip_pronoun(target["span2_text"]) - if strip_pronoun(tokens[pronoun_idx]) != pronoun: - # hack: sometimes the index is misaligned - if strip_pronoun(tokens[pronoun_idx + 1]) == pronoun: - pronoun_idx += 1 - else: - raise Exception("Misaligned pronoun!") - assert strip_pronoun(tokens[pronoun_idx]) == pronoun - - # split tokens before and after the pronoun - before = tokens[:pronoun_idx] - after = tokens[pronoun_idx + 1 :] - - # the GPT BPE attaches leading spaces to tokens, so we keep track - # of whether we need spaces before or after the pronoun - leading_space = " " if pronoun_idx > 0 else "" - trailing_space = " " if len(after) > 0 else "" - - # detokenize - before = detok.detokenize(before, return_str=True) - pronoun = 
detok.detokenize([pronoun], return_str=True) - after = detok.detokenize(after, return_str=True) - - # hack: when the pronoun ends in a period (or comma), move the - # punctuation to the "after" part - if pronoun.endswith(".") or pronoun.endswith(","): - after = pronoun[-1] + trailing_space + after - pronoun = pronoun[:-1] - - # hack: when the "after" part begins with a comma or period, remove - # the trailing space - if after.startswith(".") or after.startswith(","): - trailing_space = "" - - # parse sentence with spacy - sentence = nlp(before + leading_space + pronoun + trailing_space + after) - - # find pronoun span - start = len(before + leading_space) - first_pronoun_tok = find_token(sentence, start_pos=start) - pronoun_span = find_span(sentence, pronoun, start=first_pronoun_tok.i) - assert pronoun_span.text == pronoun - - if eval: - # convert to format where pronoun is surrounded by "[]" and - # query is surrounded by "_" - query_span = find_span(sentence, query) - query_with_ws = "_{}_{}".format( - query_span.text, - (" " if query_span.text_with_ws.endswith(" ") else ""), - ) - pronoun_with_ws = "[{}]{}".format( - pronoun_span.text, - (" " if pronoun_span.text_with_ws.endswith(" ") else ""), - ) - if query_span.start < pronoun_span.start: - first = (query_span, query_with_ws) - second = (pronoun_span, pronoun_with_ws) - else: - first = (pronoun_span, pronoun_with_ws) - second = (query_span, query_with_ws) - sentence = ( - sentence[: first[0].start].text_with_ws - + first[1] - + sentence[first[0].end : second[0].start].text_with_ws - + second[1] - + sentence[second[0].end :].text - ) - yield sentence, sample.get("label", None) - else: - yield sentence, pronoun_span, query, sample.get("label", None) - - -def winogrande_jsonl_iterator(input_fname, eval=False): - with open(input_fname) as fin: - for line in fin: - sample = json.loads(line.strip()) - sentence, option1, option2 = ( - sample["sentence"], - sample["option1"], - sample["option2"], - ) - - pronoun_span = (sentence.index("_"), sentence.index("_") + 1) - - if eval: - query, cand = option1, option2 - else: - query = option1 if sample["answer"] == "1" else option2 - cand = option2 if sample["answer"] == "1" else option1 - yield sentence, pronoun_span, query, cand - - -def filter_noun_chunks( - chunks, exclude_pronouns=False, exclude_query=None, exact_match=False -): - if exclude_pronouns: - chunks = [ - np - for np in chunks - if (np.lemma_ != "-PRON-" and not all(tok.pos_ == "PRON" for tok in np)) - ] - - if exclude_query is not None: - excl_txt = [exclude_query.lower()] - filtered_chunks = [] - for chunk in chunks: - lower_chunk = chunk.text.lower() - found = False - for excl in excl_txt: - if ( - not exact_match and (lower_chunk in excl or excl in lower_chunk) - ) or lower_chunk == excl: - found = True - break - if not found: - filtered_chunks.append(chunk) - chunks = filtered_chunks - - return chunks diff --git a/kosmos-g/fairseq/examples/rxf/README.md b/kosmos-g/fairseq/examples/rxf/README.md deleted file mode 100644 index 22a1cc47d..000000000 --- a/kosmos-g/fairseq/examples/rxf/README.md +++ /dev/null @@ -1,52 +0,0 @@ -[Better Fine-Tuning by Reducing Representational Collapse](https://arxiv.org/abs/2008.03156) -===================== -This repo contains the code to replicate all experiments from the _Better Fine-Tuning by Reducing Representational Collapse_ paper excluding the probing results. 
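-
-The method's consistency regularizer is a batch-normalized symmetric KL divergence between the model's predictions on clean and on noise-perturbed token embeddings. A minimal standalone sketch of that term, mirroring the `_get_symm_kl` helpers in `rxf_src` below:
-
-```python
-import torch
-import torch.nn.functional as F
-
-
-def symmetric_kl(noised_logits: torch.Tensor, input_logits: torch.Tensor) -> torch.Tensor:
-    # KL(p_clean || p_noised) + KL(p_noised || p_clean), summed over classes and
-    # normalized by batch size; F.kl_div(log_q, p, reduction="sum") is KL(p || q).
-    p_clean = F.softmax(input_logits, dim=-1, dtype=torch.float32)
-    p_noised = F.softmax(noised_logits, dim=-1, dtype=torch.float32)
-    log_clean = F.log_softmax(input_logits, dim=-1, dtype=torch.float32)
-    log_noised = F.log_softmax(noised_logits, dim=-1, dtype=torch.float32)
-    return (
-        F.kl_div(log_noised, p_clean, reduction="sum")
-        + F.kl_div(log_clean, p_noised, reduction="sum")
-    ) / noised_logits.size(0)
-
-
-# Example: a batch of 8 examples over 3 classes; small noise gives a small penalty.
-clean = torch.randn(8, 3)
-print(symmetric_kl(clean + 0.01 * torch.randn(8, 3), clean))
-```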
- -The R3F sentence prediction criterion is registered as `sentence_prediction_r3f` while the label smoothing version of it is implemented as `label_smoothed_cross_entropy_r3f`. The R4F version of the sentence prediction criterion can be achieved by applying spectral norm to the classification head via the `--spectral-norm-classification-head` parameter. - -## Hyper-parameters -Our methods introduce 3 new hyper-parameters; `--eps` which sets the standard deviation or range of the distribution we're sampling from, `--r3f-lambda` which controls the combining of logistic loss and noisy KL loss and `--noise-type` which controls which parametric distribution we use ('normal', 'uniform'). - -For example to run R3F on RTE from GLUE - -``` -TOTAL_NUM_UPDATES=3120 -WARMUP_UPDATES=187 -LR=1e-05 -NUM_CLASSES=2 -MAX_SENTENCES=8 # Batch size. -ROBERTA_PATH=/path/to/roberta/model.pt - -CUDA_VISIBLE_DEVICES=0 fairseq-train RTE-bin \ - --restore-file $ROBERTA_PATH \ - --max-positions 512 \ - --max-sentences $MAX_SENTENCES \ - --max-tokens 4400 \ - --task sentence_prediction \ - --reset-optimizer --reset-dataloader --reset-meters \ - --required-batch-size-multiple 1 \ - --init-token 0 --separator-token 2 \ - --arch roberta_large \ - --criterion sentence_prediction_r3f \ - --num-classes $NUM_CLASSES \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \ - --clip-norm 0.0 \ - --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \ - --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \ - --max-epoch 10 \ - --find-unused-parameters \ - --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric \ - --noise-type uniform --r3f-lambda 0.7 \ - --user-dir examples/rxf/rxf_src -``` - -## Citation -```bibtex -@article{aghajanyan2020better, - title={Better Fine-Tuning by Reducing Representational Collapse}, - author={Aghajanyan, Armen and Shrivastava, Akshat and Gupta, Anchit and Goyal, Naman and Zettlemoyer, Luke and Gupta, Sonal}, - journal={arXiv preprint arXiv:2008.03156}, - year={2020} -} -``` diff --git a/kosmos-g/fairseq/examples/rxf/__init__.py b/kosmos-g/fairseq/examples/rxf/__init__.py deleted file mode 100644 index b24cb6b79..000000000 --- a/kosmos-g/fairseq/examples/rxf/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import rxf_src # noqa diff --git a/kosmos-g/fairseq/examples/rxf/rxf_src/__init__.py b/kosmos-g/fairseq/examples/rxf/rxf_src/__init__.py deleted file mode 100644 index 306e232d6..000000000 --- a/kosmos-g/fairseq/examples/rxf/rxf_src/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import label_smoothed_cross_entropy_r3f, sentence_prediction_r3f # noqa diff --git a/kosmos-g/fairseq/examples/rxf/rxf_src/label_smoothed_cross_entropy_r3f.py b/kosmos-g/fairseq/examples/rxf/rxf_src/label_smoothed_cross_entropy_r3f.py deleted file mode 100644 index 079db13e6..000000000 --- a/kosmos-g/fairseq/examples/rxf/rxf_src/label_smoothed_cross_entropy_r3f.py +++ /dev/null @@ -1,157 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.criterions.label_smoothed_cross_entropy import label_smoothed_nll_loss - - -@register_criterion("label_smoothed_cross_entropy_r3f") -class LabelSmoothedCrossEntropyR3FCriterion(FairseqCriterion): - def __init__( - self, task, sentence_avg, label_smoothing, eps, r3f_lambda, noise_type - ): - super().__init__(task) - self.sentence_avg = sentence_avg - self.label_smoothing = label_smoothing - self.eps = eps - self.r3f_lambda = r3f_lambda - self.noise_type = noise_type - if self.noise_type in {"normal"}: - self.noise_sampler = torch.distributions.normal.Normal( - loc=0.0, scale=self.eps - ) - elif self.noise_type == "uniform": - self.noise_sampler = torch.distributions.uniform.Uniform( - low=-self.eps, high=self.eps - ) - else: - raise Exception(f"unrecognized noise type {self.noise_type}") - - @staticmethod - def add_args(parser): - """Add criterion-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--label-smoothing', default=0., type=float, metavar='D', - help='epsilon for label smoothing, 0 means no label smoothing') - parser.add_argument('--eps', type=float, default=1e-5, - help='noise eps') - parser.add_argument('--r3f-lambda', type=float, default=1.0, - help='lambda for combining logistic loss and noisy KL loss') - parser.add_argument('--noise-type', type=str, default='normal', - choices=['normal', 'uniform'], - help='type of noises') - # fmt: on - - def _get_symm_kl(self, noised_logits, input_logits): - return ( - F.kl_div( - F.log_softmax(noised_logits, dim=-1, dtype=torch.float32), - F.softmax(input_logits, dim=-1, dtype=torch.float32), - None, - None, - "sum", - ) - + F.kl_div( - F.log_softmax(input_logits, dim=-1, dtype=torch.float32), - F.softmax(noised_logits, dim=-1, dtype=torch.float32), - None, - None, - "sum", - ) - ) / noised_logits.size(0) - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - token_embeddings = model.encoder.embed_tokens(sample["net_input"]["src_tokens"]) - input_logits, extra = model(**sample["net_input"]) - loss, nll_loss = self.compute_loss( - model, (input_logits, extra), sample, reduce=reduce - ) - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - - if model.training: - noise = self.noise_sampler.sample(sample_shape=token_embeddings.shape).to( - token_embeddings - ) - noised_embeddings = token_embeddings.clone() + noise - - noised_logits, _ = model( - **sample["net_input"], token_embeddings=noised_embeddings - ) - symm_kl = self._get_symm_kl(noised_logits, input_logits) - - if model.training: - symm_kl = symm_kl * sample_size - loss = loss + self.r3f_lambda * symm_kl - - logging_output = { - "loss": loss.data, - "nll_loss": nll_loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - } - - if model.training: - logging_output.update( - symm_kl=utils.item(symm_kl.data) if reduce else symm_kl.data - ) - - return loss, sample_size, logging_output - - def compute_loss(self, model, net_output, sample, reduce=True): - lprobs = model.get_normalized_probs(net_output, log_probs=True) - lprobs = lprobs.view(-1, lprobs.size(-1)) - target = model.get_targets(sample, net_output).view(-1, 1) - loss, nll_loss = label_smoothed_nll_loss( - lprobs, - target, - self.label_smoothing, - ignore_index=self.padding_idx, - reduce=reduce, - ) - return loss, nll_loss - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - nll_loss_sum = sum(log.get("nll_loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - symm_kl_sum = sum(log.get("symm_kl", 0) for log in logging_outputs) - - metrics.log_scalar("symm_kl", symm_kl_sum / sample_size, sample_size, round=3) - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - metrics.log_scalar( - "nll_loss", nll_loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/kosmos-g/fairseq/examples/rxf/rxf_src/sentence_prediction_r3f.py b/kosmos-g/fairseq/examples/rxf/rxf_src/sentence_prediction_r3f.py deleted file mode 100644 index 6ecffd6b1..000000000 --- a/kosmos-g/fairseq/examples/rxf/rxf_src/sentence_prediction_r3f.py +++ /dev/null @@ -1,171 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
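-
-# The sentence-prediction (GLUE-style classification) variant of R3F: during
-# training, a symmetric KL consistency term between clean and noised
-# classification-head logits is scaled by --r3f-lambda and added to the
-# cross-entropy loss; regression targets fall back to a plain MSE loss.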
- -import math - -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.criterions import FairseqCriterion, register_criterion - - -@register_criterion("sentence_prediction_r3f") -class SentencePredictionR3F(FairseqCriterion): - def __init__( - self, - task, - eps, - r3f_lambda, - noise_type, - classification_head_name, - regression_target, - ): - super().__init__(task) - self.eps = eps - self.r3f_lambda = r3f_lambda - self.noise_type = noise_type - self.classification_head_name = classification_head_name - self.regression_target = regression_target - if self.noise_type in {"normal"}: - self.noise_sampler = torch.distributions.normal.Normal( - loc=0.0, scale=self.eps - ) - elif self.noise_type == "uniform": - self.noise_sampler = torch.distributions.uniform.Uniform( - low=-self.eps, high=self.eps - ) - else: - raise Exception(f"unrecognized noise type {self.noise_type}") - - @staticmethod - def add_args(parser): - # fmt: off - parser.add_argument('--eps', type=float, default=1e-5, - help='noise eps') - parser.add_argument('--r3f-lambda', type=float, default=1.0, - help='lambda for combining logistic loss and noisy KL loss') - parser.add_argument('--noise-type', type=str, default='uniform', - choices=['normal', 'uniform'], - help='type of noises for RXF methods') - parser.add_argument('--classification-head-name', - default='sentence_classification_head', - help='name of the classification head to use') - parser.add_argument('--regression-target', action='store_true') - # fmt: on - - def _get_symm_kl(self, noised_logits, input_logits): - return ( - F.kl_div( - F.log_softmax(noised_logits, dim=-1, dtype=torch.float32), - F.softmax(input_logits, dim=-1, dtype=torch.float32), - None, - None, - "sum", - ) - + F.kl_div( - F.log_softmax(input_logits, dim=-1, dtype=torch.float32), - F.softmax(noised_logits, dim=-1, dtype=torch.float32), - None, - None, - "sum", - ) - ) / noised_logits.size(0) - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - assert ( - hasattr(model, "classification_heads") - and self.classification_head_name in model.classification_heads - ), "model must provide sentence classification head for --criterion=sentence_prediction" - - token_embeddings = model.encoder.sentence_encoder.embed_tokens( - sample["net_input"]["src_tokens"] - ) - input_logits, _ = model( - **sample["net_input"], - features_only=True, - classification_head_name=self.classification_head_name, - token_embeddings=token_embeddings, - ) - if model.training and self.noise_sampler: - noise = self.noise_sampler.sample(sample_shape=token_embeddings.shape).to( - token_embeddings - ) - noised_embeddings = token_embeddings.detach().clone() + noise - - noised_logits, _ = model( - **sample["net_input"], - features_only=True, - classification_head_name=self.classification_head_name, - token_embeddings=noised_embeddings, - ) - symm_kl = self._get_symm_kl(noised_logits, input_logits) - else: - symm_kl = 0 - - targets = model.get_targets(sample, [input_logits]).view(-1) - sample_size = targets.numel() - - if not self.regression_target: - loss = F.nll_loss( - F.log_softmax(input_logits, dim=-1, dtype=torch.float32), - targets, - reduction="sum", - ) - if model.training: - symm_kl = symm_kl * sample_size - loss = loss + self.r3f_lambda * symm_kl - else: - logits = input_logits.squeeze().float() - targets = targets.float() - loss = F.mse_loss(logits, targets, reduction="sum") - - logging_output = { - "loss": utils.item(loss.data) if reduce else loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample_size, - "sample_size": sample_size, - } - - if not self.regression_target: - preds = input_logits.max(dim=1)[1] - logging_output.update(ncorrect=(preds == targets).sum().item()) - - if model.training and self.noise_sampler: - logging_output.update( - symm_kl=utils.item(symm_kl.data) if reduce else symm_kl.data - ) - return loss, sample_size, logging_output - - @staticmethod - def aggregate_logging_outputs(logging_outputs): - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - symm_kl_sum = sum(log.get("symm_kl", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - agg_output = { - "loss": loss_sum / sample_size / math.log(2), - "symm_kl": symm_kl_sum / sample_size, - "ntokens": ntokens, - "nsentences": nsentences, - "sample_size": sample_size, - } - - if len(logging_outputs) > 0 and "ncorrect" in logging_outputs[0]: - ncorrect = sum(log.get("ncorrect", 0) for log in logging_outputs) - agg_output.update(accuracy=ncorrect / nsentences) - - if sample_size != ntokens: - agg_output["nll_loss"] = loss_sum / ntokens / math.log(2) - return agg_output diff --git a/kosmos-g/fairseq/examples/scaling_nmt/README.md b/kosmos-g/fairseq/examples/scaling_nmt/README.md deleted file mode 100644 index 0cc3360c3..000000000 --- a/kosmos-g/fairseq/examples/scaling_nmt/README.md +++ /dev/null @@ -1,114 +0,0 @@ -# Scaling Neural Machine Translation (Ott et al., 2018) - -This page includes instructions for reproducing results from the paper [Scaling Neural Machine Translation (Ott et al., 2018)](https://arxiv.org/abs/1806.00187). 
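-
-The paper's central recipe is large-batch training with fp16; when fewer GPUs are available, the same effective batch size is recovered by accumulating gradients over several mini-batches before each optimizer step, which fairseq exposes as `--update-freq` (see the training notes below). A minimal PyTorch sketch of the idea, with `model`, `loss_fn`, and `batches` as hypothetical placeholders:
-
-```python
-import torch
-
-
-def accumulate_and_step(model, optimizer, batches, loss_fn, update_freq=16):
-    # Accumulate gradients over `update_freq` mini-batches before stepping,
-    # simulating a batch that is `update_freq` times larger.
-    optimizer.zero_grad()
-    for i, (inputs, targets) in enumerate(batches):
-        loss = loss_fn(model(inputs), targets) / update_freq  # keep gradient scale comparable
-        loss.backward()  # gradients sum across calls until zero_grad()
-        if (i + 1) % update_freq == 0:
-            optimizer.step()
-            optimizer.zero_grad()
-```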
- -## Pre-trained models - -Model | Description | Dataset | Download ----|---|---|--- -`transformer.wmt14.en-fr` | Transformer <br> ([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-fr.joined-dict.transformer.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2) -`transformer.wmt16.en-de` | Transformer <br> ([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt16.en-de.joined-dict.transformer.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2) - -## Training a new model on WMT'16 En-De - -First download the [preprocessed WMT'16 En-De data provided by Google](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8). - -Then: - -##### 1. Extract the WMT'16 En-De data -```bash -TEXT=wmt16_en_de_bpe32k -mkdir -p $TEXT -tar -xzvf wmt16_en_de.tar.gz -C $TEXT -``` - -##### 2. Preprocess the dataset with a joined dictionary -```bash -fairseq-preprocess \ - --source-lang en --target-lang de \ - --trainpref $TEXT/train.tok.clean.bpe.32000 \ - --validpref $TEXT/newstest2013.tok.bpe.32000 \ - --testpref $TEXT/newstest2014.tok.bpe.32000 \ - --destdir data-bin/wmt16_en_de_bpe32k \ - --nwordssrc 32768 --nwordstgt 32768 \ - --joined-dictionary \ - --workers 20 -``` - -##### 3. Train a model -```bash -fairseq-train \ - data-bin/wmt16_en_de_bpe32k \ - --arch transformer_vaswani_wmt_en_de_big --share-all-embeddings \ - --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \ - --lr 0.0005 --lr-scheduler inverse_sqrt --warmup-updates 4000 --warmup-init-lr 1e-07 \ - --dropout 0.3 --weight-decay 0.0 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --max-tokens 3584 \ - --fp16 -``` - -Note that the `--fp16` flag requires you have CUDA 9.1 or greater and a Volta GPU or newer. - -***IMPORTANT:*** You will get better performance by training with big batches and -increasing the learning rate. If you want to train the above model with big batches -(assuming your machine has 8 GPUs): -- add `--update-freq 16` to simulate training on 8x16=128 GPUs -- increase the learning rate; 0.001 works well for big batches - -##### 4. Evaluate - -Now we can evaluate our trained model. - -Note that the original [Attention Is All You Need](https://arxiv.org/abs/1706.03762) -paper used a couple tricks to achieve better BLEU scores. We use these same tricks in -the Scaling NMT paper, so it's important to apply them when reproducing our results. - -First, use the [average_checkpoints.py](/scripts/average_checkpoints.py) script to -average the last few checkpoints. 
Averaging the last 5-10 checkpoints is usually
-good, but you may need to adjust this depending on how long you've trained:
-```bash
-python scripts/average_checkpoints.py \
-  --inputs /path/to/checkpoints \
-  --num-epoch-checkpoints 10 \
-  --output checkpoint.avg10.pt
-```
-
-Next, generate translations using a beam width of 4 and length penalty of 0.6:
-```bash
-fairseq-generate \
-    data-bin/wmt16_en_de_bpe32k \
-    --path checkpoint.avg10.pt \
-    --beam 4 --lenpen 0.6 --remove-bpe > gen.out
-```
-
-Finally, we apply the ["compound splitting" script](/scripts/compound_split_bleu.sh) to
-add spaces around dashes. For example, "Café-Liebhaber" becomes three tokens:
-"Café - Liebhaber". This typically results in larger BLEU scores, but it is not
-appropriate to compare these inflated scores to work that does not include this trick.
-This trick was used in the [original AIAYN code](https://github.com/tensorflow/tensor2tensor/blob/fc9335c0203685cbbfe2b30c92db4352d8f60779/tensor2tensor/utils/get_ende_bleu.sh),
-so we used it in the Scaling NMT paper as well. That said, it is strongly advised to
-report [sacrebleu](https://github.com/mjpost/sacrebleu) scores instead.
-
-To compute "compound split" tokenized BLEU (not recommended!):
-```bash
-bash scripts/compound_split_bleu.sh gen.out
-# BLEU4 = 29.29, 60.3/35.0/22.8/15.3 (BP=1.000, ratio=1.004, syslen=64763, reflen=64496)
-```
-
-To compute detokenized BLEU with sacrebleu (preferred):
-```bash
-bash scripts/sacrebleu.sh wmt14/full en de gen.out
-# BLEU+case.mixed+lang.en-de+numrefs.1+smooth.exp+test.wmt14/full+tok.13a+version.1.4.3 = 28.6 59.3/34.3/22.1/14.9 (BP = 1.000 ratio = 1.016 hyp_len = 63666 ref_len = 62688)
-```
-
-## Citation
-
-```bibtex
-@inproceedings{ott2018scaling,
-  title = {Scaling Neural Machine Translation},
-  author = {Ott, Myle and Edunov, Sergey and Grangier, David and Auli, Michael},
-  booktitle = {Proceedings of the Third Conference on Machine Translation (WMT)},
-  year = 2018,
-}
-```
diff --git a/kosmos-g/fairseq/examples/shuffled_word_order/README.finetuning.md b/kosmos-g/fairseq/examples/shuffled_word_order/README.finetuning.md
deleted file mode 100644
index ecbcb6588..000000000
--- a/kosmos-g/fairseq/examples/shuffled_word_order/README.finetuning.md
+++ /dev/null
@@ -1,135 +0,0 @@
-# Fine-tuning details
-
-For each task (GLUE and PAWS), we perform a hyperparameter search for each model and report the mean and standard deviation across 5 seeds of the best configuration. First, get the datasets following the instructions in the [RoBERTa fine-tuning README](../roberta/README.glue.md).
Alternatively, you can use [huggingface datasets](https://huggingface.co/docs/datasets/) to get the task data: - -```python -from datasets import load_dataset -import pandas as pd -from pathlib import Path - -key2file = { -"paws": { - "loc": "paws_data", - "columns": ["id", "sentence1", "sentence2", "label"], - "train": "train.tsv", - "validation": "dev.tsv", - "test": "test.tsv" - } -} - -task_data = load_dataset("paws", "labeled_final") -task_config = key2file["paws"] -save_path = Path(task_config["loc"]) -save_path.mkdir(exist_ok=True, parents=True) -for key, fl in task_config.items(): - if key in ["loc", "columns"]: - continue - print(f"Reading {key}") - columns = task_config["columns"] - df = pd.DataFrame(task_data[key]) - print(df.columns) - df = df[columns] - print(f"Got {len(df)} records") - save_loc = save_path / fl - print(f"Saving to : {save_loc}") - df.to_csv(save_loc, sep="\t", header=None, index=None) - -``` - -- Preprocess using RoBERTa GLUE preprocessing script, while keeping in mind the column numbers for `sentence1`, `sentence2` and `label` (which is 0,1,2 if you save the data according to the above example.) -- Then, fine-tuning is performed similarly to RoBERTa (for example, in case of RTE): - -```bash -TOTAL_NUM_UPDATES=30875 # 10 epochs through RTE for bsz 16 -WARMUP_UPDATES=1852 # 6 percent of the number of updates -LR=2e-05 # Peak LR for polynomial LR scheduler. -NUM_CLASSES=2 -MAX_SENTENCES=16 # Batch size. -SHUFFLED_ROBERTA_PATH=/path/to/shuffled_roberta/model.pt - -CUDA_VISIBLE_DEVICES=0 fairseq-train RTE-bin/ \ - --restore-file $SHUFFLED_ROBERTA_PATH \ - --max-positions 512 \ - --batch-size $MAX_SENTENCES \ - --max-tokens 4400 \ - --task sentence_prediction \ - --reset-optimizer --reset-dataloader --reset-meters \ - --required-batch-size-multiple 1 \ - --init-token 0 --separator-token 2 \ - --arch roberta_large \ - --criterion sentence_prediction \ - --num-classes $NUM_CLASSES \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.1 --optimizer adam --adam-betas "(0.9, 0.98)" --adam-eps 1e-06 \ - --clip-norm 0.0 \ - --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \ - --fp16 --fp16-init-scale 4 --threshold-loss-scale 1 --fp16-scale-window 128 \ - --max-epoch 10 \ - --find-unused-parameters \ - --best-checkpoint-metric accuracy --maximize-best-checkpoint-metric; -``` - -- `TOTAL_NUM_UPDATES` is computed based on the `--batch_size` value and the dataset size. 
-- `WARMUP_UPDATES` is computed as 6% of `TOTAL_NUM_UPDATES` -- Best hyperparam of `--lr` and `--batch_size` is reported below: - -## `--lr` - -| | name | RTE | MRPC | SST-2 | CoLA | QQP | QNLI | MNLI | PAWS | -| --: | :----------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ----: | -| 0 | original | 2e-05 | 2e-05 | 1e-05 | 2e-05 | 1e-05 | 1e-05 | 1e-05 | 2e-05 | -| 1 | n_1 | 2e-05 | 1e-05 | 1e-05 | 1e-05 | 3e-05 | 1e-05 | 2e-05 | 2e-05 | -| 2 | n_2 | 2e-05 | 2e-05 | 1e-05 | 1e-05 | 2e-05 | 1e-05 | 1e-05 | 3e-05 | -| 3 | n_3 | 3e-05 | 1e-05 | 2e-05 | 2e-05 | 3e-05 | 1e-05 | 1e-05 | 2e-05 | -| 4 | n_4 | 3e-05 | 1e-05 | 2e-05 | 2e-05 | 2e-05 | 1e-05 | 1e-05 | 2e-05 | -| 5 | r512 | 1e-05 | 3e-05 | 2e-05 | 2e-05 | 3e-05 | 2e-05 | 3e-05 | 2e-05 | -| 6 | rand_corpus | 2e-05 | 1e-05 | 3e-05 | 1e-05 | 3e-05 | 3e-05 | 3e-05 | 2e-05 | -| 7 | rand_uniform | 2e-05 | 1e-05 | 3e-05 | 2e-05 | 3e-05 | 3e-05 | 3e-05 | 1e-05 | -| 8 | rand_init | 1e-05 | 1e-05 | 3e-05 | 1e-05 | 1e-05 | 1e-05 | 2e-05 | 1e-05 | -| 9 | no_pos | 1e-05 | 3e-05 | 2e-05 | 1e-05 | 1e-05 | 1e-05 | 1e-05 | 1e-05 | - -## `--batch_size` - -| | name | RTE | MRPC | SST-2 | CoLA | QQP | QNLI | MNLI | PAWS | -| --: | :----------- | --: | ---: | ----: | ---: | --: | ---: | ---: | ---: | -| 0 | orig | 16 | 16 | 32 | 16 | 16 | 32 | 32 | 16 | -| 1 | n_1 | 32 | 32 | 16 | 32 | 32 | 16 | 32 | 16 | -| 2 | n_2 | 32 | 16 | 32 | 16 | 32 | 32 | 16 | 32 | -| 3 | n_3 | 32 | 32 | 16 | 32 | 32 | 16 | 32 | 32 | -| 4 | n_4 | 32 | 16 | 32 | 16 | 32 | 32 | 32 | 32 | -| 5 | r512 | 32 | 16 | 16 | 32 | 32 | 16 | 16 | 16 | -| 6 | rand_corpus | 16 | 16 | 16 | 16 | 32 | 16 | 16 | 32 | -| 7 | rand_uniform | 16 | 32 | 16 | 16 | 32 | 16 | 16 | 16 | -| 8 | rand_init | 16 | 16 | 32 | 16 | 16 | 16 | 32 | 16 | -| 9 | no_pos | 16 | 32 | 16 | 16 | 32 | 16 | 16 | 16 | - -- Perform inference similar to RoBERTa as well: - -```python -from fairseq.models.roberta import RobertaModel - -roberta = RobertaModel.from_pretrained( - 'checkpoints/', - checkpoint_file='checkpoint_best.pt', - data_name_or_path='PAWS-bin' -) - -label_fn = lambda label: roberta.task.label_dictionary.string( - [label + roberta.task.label_dictionary.nspecial] -) -ncorrect, nsamples = 0, 0 -roberta.cuda() -roberta.eval() -with open('paws_data/dev.tsv') as fin: - fin.readline() - for index, line in enumerate(fin): - tokens = line.strip().split('\t') - sent1, sent2, target = tokens[0], tokens[1], tokens[2] - tokens = roberta.encode(sent1, sent2) - prediction = roberta.predict('sentence_classification_head', tokens).argmax().item() - prediction_label = label_fn(prediction) - ncorrect += int(prediction_label == target) - nsamples += 1 -print('| Accuracy: ', float(ncorrect)/float(nsamples)) - -``` diff --git a/kosmos-g/fairseq/examples/shuffled_word_order/README.md b/kosmos-g/fairseq/examples/shuffled_word_order/README.md deleted file mode 100644 index 6ce0b3927..000000000 --- a/kosmos-g/fairseq/examples/shuffled_word_order/README.md +++ /dev/null @@ -1,94 +0,0 @@ -# Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little - -[https://arxiv.org/abs/2104.06644](https://arxiv.org/abs/2104.06644) - -## Introduction - -In this work, we pre-train [RoBERTa](../roberta) base on various word shuffled variants of BookWiki corpus (16GB). We observe that a word shuffled pre-trained model achieves surprisingly good scores on GLUE, PAWS and several parametric probing tasks. Please read our paper for more details on the experiments. 
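-
-The n-gram shuffled variants below (`n1`-`n4`) can be pictured as permuting contiguous n-word chunks within each sentence, so larger n preserves more local word order. A minimal sketch of that perturbation, assuming whitespace tokenization (an illustration, not the exact preprocessing code):
-
-```python
-import random
-
-
-def shuffle_ngrams(sentence: str, n: int = 2, seed: int = 0) -> str:
-    # Split the sentence into contiguous n-word chunks and permute the chunks;
-    # word order inside each chunk is preserved. n=1 is a full word shuffle.
-    words = sentence.split()
-    chunks = [words[i : i + n] for i in range(0, len(words), n)]
-    random.Random(seed).shuffle(chunks)
-    return " ".join(word for chunk in chunks for word in chunk)
-
-
-print(shuffle_ngrams("the quick brown fox jumps over the lazy dog", n=2))
-```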
- -## Pre-trained models - -| Model | Description | Download | -| ------------------------------------- | -------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- | -| `roberta.base.orig` | RoBERTa (base) trained on natural corpus | [roberta.base.orig.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.orig.tar.gz) | -| `roberta.base.shuffle.n1` | RoBERTa (base) trained on n=1 gram sentence word shuffled data | [roberta.base.shuffle.n1.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n1.tar.gz) | -| `roberta.base.shuffle.n2` | RoBERTa (base) trained on n=2 gram sentence word shuffled data | [roberta.base.shuffle.n2.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n2.tar.gz) | -| `roberta.base.shuffle.n3` | RoBERTa (base) trained on n=3 gram sentence word shuffled data | [roberta.base.shuffle.n3.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n3.tar.gz) | -| `roberta.base.shuffle.n4` | RoBERTa (base) trained on n=4 gram sentence word shuffled data | [roberta.base.shuffle.n4.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n4.tar.gz) | -| `roberta.base.shuffle.512` | RoBERTa (base) trained on unigram 512 word block shuffled data | [roberta.base.shuffle.512.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.512.tar.gz) | -| `roberta.base.shuffle.corpus` | RoBERTa (base) trained on unigram corpus word shuffled data | [roberta.base.shuffle.corpus.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.corpus.tar.gz) | -| `roberta.base.shuffle.corpus_uniform` | RoBERTa (base) trained on unigram corpus word shuffled data, where all words are uniformly sampled | [roberta.base.shuffle.corpus_uniform.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.corpus_uniform.tar.gz) | -| `roberta.base.nopos` | RoBERTa (base) without positional embeddings, trained on natural corpus | [roberta.base.nopos.tar.gz](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.nopos.tar.gz) | - -## Results - -[GLUE (Wang et al, 2019)](https://gluebenchmark.com/) & [PAWS (Zhang et al, 2019)](https://github.com/google-research-datasets/paws) _(dev set, single model, single-task fine-tuning, median of 5 seeds)_ - -| name | CoLA | MNLI | MRPC | PAWS | QNLI | QQP | RTE | SST-2 | -| :----------------------------------- | ----: | ----: | ----: | ----: | ----: | ----: | ----: | ----: | -| `roberta.base.orig` | 61.4 | 86.11 | 89.19 | 94.46 | 92.53 | 91.26 | 74.64 | 93.92 | -| `roberta.base.shuffle.n1` | 35.15 | 82.64 | 86 | 89.97 | 89.02 | 91.01 | 69.02 | 90.47 | -| `roberta.base.shuffle.n2` | 54.37 | 83.43 | 86.24 | 93.46 | 90.44 | 91.36 | 70.83 | 91.79 | -| `roberta.base.shuffle.n3` | 48.72 | 83.85 | 86.36 | 94.05 | 91.69 | 91.24 | 70.65 | 92.02 | -| `roberta.base.shuffle.n4` | 58.64 | 83.77 | 86.98 | 94.32 | 91.69 | 91.4 | 70.83 | 92.48 | -| `roberta.base.shuffle.512` | 12.76 | 77.52 | 79.61 | 84.77 | 85.19 | 90.2 | 56.52 | 86.34 | -| `roberta.base.shuffle.corpus` | 0 | 71.9 | 70.52 | 58.52 | 71.11 | 85.52 | 53.99 | 83.35 | -| `roberta.base.shuffle.corpus_random` | 9.19 | 72.33 | 70.76 | 58.42 | 77.76 | 85.93 | 53.99 | 84.04 | -| `roberta.base.nopos` | 0 | 63.5 | 72.73 | 57.08 | 77.72 | 87.87 | 
54.35 | 83.24 |
-
-For more results on probing tasks, please refer to [our paper](https://arxiv.org/abs/2104.06644).
-
-## Example Usage
-
-Follow the same usage as in [RoBERTa](https://github.com/pytorch/fairseq/tree/main/examples/roberta) to load and test your models. First download a checkpoint and the GPT-2 BPE dictionary files:
-
-```bash
-# Download the roberta.base.shuffle.n1 model
-wget https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n1.tar.gz
-tar -xzvf roberta.base.shuffle.n1.tar.gz
-# Copy the dictionary files into the extracted directory
-cd roberta.base.shuffle.n1
-wget -O dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt && wget -O encoder.json https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json && wget -O vocab.bpe https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe
-cd ..
-```
-
-Then load the model in fairseq:
-
-```python
-from fairseq.models.roberta import RobertaModel
-roberta = RobertaModel.from_pretrained('/path/to/roberta.base.shuffle.n1', checkpoint_file='model.pt')
-roberta.eval()  # disable dropout (or leave in train mode to finetune)
-```
-
-We also provide a [Google Colab](https://colab.research.google.com/drive/1IJDVfNVWdvRfLjphQKBGzmob84t-OXpm) notebook demonstrating how to load the model. The models were trained on top of fairseq at the following commit: [62cff008ebeeed855093837507d5e6bf52065ee6](https://github.com/pytorch/fairseq/commit/62cff008ebeeed855093837507d5e6bf52065ee6).
-
-**Note**: The model trained without positional embeddings (`roberta.base.nopos`) is a modified `RoBERTa` model in which the positional embeddings are not used. Thus, the typical `from_pretrained` method on the fairseq version of RoBERTa will not be able to load the above model weights. To do so, construct a new `RobertaModel` object by setting the flag `use_positional_embeddings` to `False` (or, [in the latest code](https://github.com/pytorch/fairseq/blob/main/fairseq/models/roberta/model.py#L543), set `no_token_positional_embeddings` to `True`), and then load the individual weights.
-
-## Fine-tuning Evaluation
-
-We provide the fine-tuned MNLI checkpoints for each model above for quick evaluation (1 seed per model). Please refer to the [finetuning details](README.finetuning.md) for the parameters of these models. Follow the [RoBERTa](https://github.com/pytorch/fairseq/tree/main/examples/roberta) instructions to evaluate these models.
-
-| Model | MNLI M Dev Accuracy | Link |
-| :--- | :--- | :--- |
-| `roberta.base.orig.mnli` | 86.14 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.orig.mnli.tar.gz) |
-| `roberta.base.shuffle.n1.mnli` | 82.55 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n1.mnli.tar.gz) |
-| `roberta.base.shuffle.n2.mnli` | 83.21 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n2.mnli.tar.gz) |
-| `roberta.base.shuffle.n3.mnli` | 83.89 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n3.mnli.tar.gz) |
-| `roberta.base.shuffle.n4.mnli` | 84.00 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.n4.mnli.tar.gz) |
-| `roberta.base.shuffle.512.mnli` | 77.22 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.512.mnli.tar.gz) |
-| `roberta.base.shuffle.corpus.mnli` | 71.88 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.corpus.mnli.tar.gz) |
-| `roberta.base.shuffle.corpus_uniform.mnli` | 72.46 | [Download](https://dl.fbaipublicfiles.com/unnatural_pretraining/roberta.base.shuffle.corpus_uniform.mnli.tar.gz) |
-
-## Citation
-
-```bibtex
-@misc{sinha2021masked,
-    title={Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little},
-    author={Koustuv Sinha and Robin Jia and Dieuwke Hupkes and Joelle Pineau and Adina Williams and Douwe Kiela},
-    year={2021},
-    eprint={2104.06644},
-    archivePrefix={arXiv},
-    primaryClass={cs.CL}
-}
-```
-
-## Contact
-
-For questions and comments, please reach out to Koustuv Sinha (koustuv.sinha@mail.mcgill.ca).
diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/README.md b/kosmos-g/fairseq/examples/simultaneous_translation/README.md
deleted file mode 100644
index 62a005e0e..000000000
--- a/kosmos-g/fairseq/examples/simultaneous_translation/README.md
+++ /dev/null
@@ -1,5 +0,0 @@
-# Simultaneous Translation
-Examples of simultaneous translation in fairseq:
-- [English-to-Japanese text-to-text wait-k model](docs/enja-waitk.md)
-- [English-to-German text-to-text monotonic multihead attention model](docs/ende-mma.md)
-- [English-to-German speech-to-text simultaneous translation model](../speech_to_text/docs/simulst_mustc_example.md)
diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/__init__.py b/kosmos-g/fairseq/examples/simultaneous_translation/__init__.py
deleted file mode 100644
index 5835316ba..000000000
--- a/kosmos-g/fairseq/examples/simultaneous_translation/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .
import models  # noqa
diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/docs/ende-mma.md b/kosmos-g/fairseq/examples/simultaneous_translation/docs/ende-mma.md
deleted file mode 100644
index 241d604a3..000000000
--- a/kosmos-g/fairseq/examples/simultaneous_translation/docs/ende-mma.md
+++ /dev/null
@@ -1,74 +0,0 @@
-# Simultaneous Machine Translation
-
-This directory contains the code for the paper [Monotonic Multihead Attention](https://openreview.net/forum?id=Hyg96gBKPS).
-
-## Prepare Data
-
-[Please follow the instructions to download and preprocess the WMT'15 En-De dataset.](https://github.com/pytorch/fairseq/tree/simulastsharedtask/examples/translation#prepare-wmt14en2desh)
-
-Another example of training an English-to-Japanese model can be found [here](enja-waitk.md).
-
-## Training
-
-- MMA-IL
-
-```shell
-fairseq-train \
-    data-bin/wmt15_en_de_32k \
-    --simul-type infinite_lookback \
-    --user-dir $FAIRSEQ/examples/simultaneous_translation \
-    --mass-preservation \
-    --criterion latency_augmented_label_smoothed_cross_entropy \
-    --latency-weight-avg 0.1 \
-    --max-update 50000 \
-    --arch transformer_monotonic_iwslt_de_en \
-    --optimizer adam --adam-betas '(0.9, 0.98)' \
-    --lr-scheduler 'inverse_sqrt' \
-    --warmup-init-lr 1e-7 --warmup-updates 4000 \
-    --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001 \
-    --dropout 0.3 \
-    --label-smoothing 0.1 \
-    --max-tokens 3584
-```
-
-- MMA-H
-
-```shell
-fairseq-train \
-    data-bin/wmt15_en_de_32k \
-    --simul-type hard_aligned \
-    --user-dir $FAIRSEQ/examples/simultaneous_translation \
-    --mass-preservation \
-    --criterion latency_augmented_label_smoothed_cross_entropy \
-    --latency-weight-var 0.1 \
-    --max-update 50000 \
-    --arch transformer_monotonic_iwslt_de_en \
-    --optimizer adam --adam-betas '(0.9, 0.98)' \
-    --lr-scheduler 'inverse_sqrt' \
-    --warmup-init-lr 1e-7 --warmup-updates 4000 \
-    --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001 \
-    --dropout 0.3 \
-    --label-smoothing 0.1 \
-    --max-tokens 3584
-```
-
-- wait-k
-
-```shell
-fairseq-train \
-    data-bin/wmt15_en_de_32k \
-    --simul-type wait-k \
-    --waitk-lagging 3 \
-    --user-dir $FAIRSEQ/examples/simultaneous_translation \
-    --mass-preservation \
-    --criterion latency_augmented_label_smoothed_cross_entropy \
-    --max-update 50000 \
-    --arch transformer_monotonic_iwslt_de_en \
-    --optimizer adam --adam-betas '(0.9, 0.98)' \
-    --lr-scheduler 'inverse_sqrt' \
-    --warmup-init-lr 1e-7 --warmup-updates 4000 \
-    --lr 5e-4 --stop-min-lr 1e-9 --clip-norm 0.0 --weight-decay 0.0001 \
-    --dropout 0.3 \
-    --label-smoothing 0.1 \
-    --max-tokens 3584
-```
diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/docs/enja-waitk.md b/kosmos-g/fairseq/examples/simultaneous_translation/docs/enja-waitk.md
deleted file mode 100644
index fb9d82576..000000000
--- a/kosmos-g/fairseq/examples/simultaneous_translation/docs/enja-waitk.md
+++ /dev/null
@@ -1,106 +0,0 @@
-# An Example of an English-to-Japanese Simultaneous Translation System
-
-This is an example of training and evaluating a transformer *wait-k* English-to-Japanese simultaneous text-to-text translation model.
-
-## Data Preparation
-This section introduces the data preparation for training and evaluation.
-If you only want to evaluate the model, please jump to [Inference & Evaluation](#inference-&-evaluation) - -For illustration, we only use the following subsets of the available data from [WMT20 news translation task](http://www.statmt.org/wmt20/translation-task.html), which results in 7,815,391 sentence pairs. -- News Commentary v16 -- Wiki Titles v3 -- WikiMatrix V1 -- Japanese-English Subtitle Corpus -- The Kyoto Free Translation Task Corpus - -We use WMT20 development data as development set. Training `transformer_vaswani_wmt_en_de_big` model on such amount of data will result in 17.3 BLEU with greedy search and 19.7 with beam (10) search. Notice that a better performance can be achieved with the full WMT training data. - -We use [sentencepiece](https://github.com/google/sentencepiece) toolkit to tokenize the data with a vocabulary size of 32000. -Additionally, we filtered out the sentences longer than 200 words after tokenization. -Assuming the tokenized text data is saved at `${DATA_DIR}`, -we prepare the data binary with the following command. - -```bash -fairseq-preprocess \ - --source-lang en --target-lang ja \ - --trainpref ${DATA_DIR}/train \ - --validpref ${DATA_DIR}/dev \ - --testpref ${DATA_DIR}/test \ - --destdir ${WMT20_ENJA_DATA_BIN} \ - --nwordstgt 32000 --nwordssrc 32000 \ - --workers 20 -``` - -## Simultaneous Translation Model Training -To train a wait-k `(k=10)` model. -```bash -fairseq-train ${WMT20_ENJA_DATA_BIN} \ - --save-dir ${SAVEDIR} - --simul-type waitk \ - --waitk-lagging 10 \ - --max-epoch 70 \ - --arch transformer_monotonic_vaswani_wmt_en_de_big \ - --optimizer adam \ - --adam-betas '(0.9, 0.98)' \ - --lr-scheduler inverse_sqrt \ - --warmup-init-lr 1e-07 \ - --warmup-updates 4000 \ - --lr 0.0005 \ - --stop-min-lr 1e-09 \ - --clip-norm 10.0 \ - --dropout 0.3 \ - --weight-decay 0.0 \ - --criterion label_smoothed_cross_entropy \ - --label-smoothing 0.1 \ - --max-tokens 3584 -``` -This command is for training on 8 GPUs. Equivalently, the model can be trained on one GPU with `--update-freq 8`. - -## Inference & Evaluation -First of all, install [SimulEval](https://github.com/facebookresearch/SimulEval) for evaluation. - -```bash -git clone https://github.com/facebookresearch/SimulEval.git -cd SimulEval -pip install -e . -``` - -The following command is for the evaluation. -Assuming the source and reference files are `${SRC_FILE}` and `${REF_FILE}`, the sentencepiece model file for English is saved at `${SRC_SPM_PATH}` - - -```bash -simuleval \ - --source ${SRC_FILE} \ - --target ${TGT_FILE} \ - --data-bin ${WMT20_ENJA_DATA_BIN} \ - --sacrebleu-tokenizer ja-mecab \ - --eval-latency-unit char \ - --no-space \ - --src-splitter-type sentencepiecemodel \ - --src-splitter-path ${SRC_SPM_PATH} \ - --agent ${FAIRSEQ}/examples/simultaneous_translation/agents/simul_trans_text_agent_enja.py \ - --model-path ${SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --output ${OUTPUT} \ - --scores -``` - -The `--data-bin` should be the same in previous sections if you prepare the data from the scratch. -If only for evaluation, a prepared data directory can be found [here](https://dl.fbaipublicfiles.com/simultaneous_translation/wmt20_enja_medium_databin.tgz) and a pretrained checkpoint (wait-k=10 model) can be downloaded from [here](https://dl.fbaipublicfiles.com/simultaneous_translation/wmt20_enja_medium_wait10_ckpt.pt). 
- -The output should look like this: -```bash -{ - "Quality": { - "BLEU": 11.442253287568398 - }, - "Latency": { - "AL": 8.6587861866951, - "AP": 0.7863304776251316, - "DAL": 9.477850951194764 - } -} -``` -The latency is evaluated by characters (`--eval-latency-unit`) on the target side. The latency is evaluated with `sacrebleu` with `MeCab` tokenizer `--sacrebleu-tokenizer ja-mecab`. `--no-space` indicates that do not add space when merging the predicted words. - -If `--output ${OUTPUT}` option is used, the detailed log and scores will be stored under the `${OUTPUT}` directory. diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/eval/agents/simul_t2t_enja.py b/kosmos-g/fairseq/examples/simultaneous_translation/eval/agents/simul_t2t_enja.py deleted file mode 100644 index 8f3c8703c..000000000 --- a/kosmos-g/fairseq/examples/simultaneous_translation/eval/agents/simul_t2t_enja.py +++ /dev/null @@ -1,226 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import os - -from fairseq import checkpoint_utils, tasks -import sentencepiece as spm -import torch - -try: - from simuleval import READ_ACTION, WRITE_ACTION, DEFAULT_EOS - from simuleval.agents import TextAgent -except ImportError: - print("Please install simuleval 'pip install simuleval'") - - -BOS_PREFIX = "\u2581" - - -class SimulTransTextAgentJA(TextAgent): - """ - Simultaneous Translation - Text agent for Japanese - """ - def __init__(self, args): - - # Whether use gpu - self.gpu = getattr(args, "gpu", False) - - # Max len - self.max_len = args.max_len - - # Load Model - self.load_model_vocab(args) - - # build word splitter - self.build_word_splitter(args) - - self.eos = DEFAULT_EOS - - def initialize_states(self, states): - states.incremental_states = dict() - states.incremental_states["online"] = dict() - - def to_device(self, tensor): - if self.gpu: - return tensor.cuda() - else: - return tensor.cpu() - - def load_model_vocab(self, args): - - filename = args.model_path - if not os.path.exists(filename): - raise IOError("Model file not found: {}".format(filename)) - - state = checkpoint_utils.load_checkpoint_to_cpu(filename) - - task_args = state["cfg"]["task"] - task_args.data = args.data_bin - - task = tasks.setup_task(task_args) - - # build model for ensemble - state["cfg"]["model"].load_pretrained_encoder_from = None - state["cfg"]["model"].load_pretrained_decoder_from = None - - self.model = task.build_model(state["cfg"]["model"]) - self.model.load_state_dict(state["model"], strict=True) - self.model.eval() - self.model.share_memory() - - if self.gpu: - self.model.cuda() - - # Set dictionary - self.dict = {} - self.dict["tgt"] = task.target_dictionary - self.dict["src"] = task.source_dictionary - - @staticmethod - def add_args(parser): - # fmt: off - parser.add_argument('--model-path', type=str, required=True, - help='path to your pretrained model.') - parser.add_argument("--data-bin", type=str, required=True, - help="Path of data binary") - parser.add_argument("--max-len", type=int, default=100, - help="Max length of translation") - parser.add_argument("--tgt-splitter-type", type=str, default="SentencePiece", - help="Subword splitter type for target text.") - parser.add_argument("--tgt-splitter-path", type=str, default=None, - help="Subword splitter model path for target text.") - parser.add_argument("--src-splitter-type", type=str, default="SentencePiece", - help="Subword 
splitter type for source text.") - parser.add_argument("--src-splitter-path", type=str, default=None, - help="Subword splitter model path for source text.") - # fmt: on - return parser - - def build_word_splitter(self, args): - self.spm = {} - for lang in ['src', 'tgt']: - if getattr(args, f'{lang}_splitter_type', None): - path = getattr(args, f'{lang}_splitter_path', None) - if path: - self.spm[lang] = spm.SentencePieceProcessor() - self.spm[lang].Load(path) - - def segment_to_units(self, segment, states): - # Split a full word (segment) into subwords (units) - return self.spm['src'].EncodeAsPieces(segment) - - def update_model_encoder(self, states): - if len(states.units.source) == 0: - return - - src_indices = [ - self.dict['src'].index(x) - for x in states.units.source.value - ] - - if states.finish_read(): - # Append the eos index when the prediction is over - src_indices += [self.dict["tgt"].eos_index] - - src_indices = self.to_device( - torch.LongTensor(src_indices).unsqueeze(0) - ) - src_lengths = self.to_device( - torch.LongTensor([src_indices.size(1)]) - ) - - states.encoder_states = self.model.encoder(src_indices, src_lengths) - - torch.cuda.empty_cache() - - def update_states_read(self, states): - # Happens after a read action. - self.update_model_encoder(states) - - def units_to_segment(self, units, states): - # Merge sub words (units) to full word (segment). - # For Japanese, we can directly send - # the untokenized token to server except the BOS token - # with following option - # --sacrebleu-tokenizer MeCab - # --eval-latency-unit char - # --no-space - token = units.value.pop() - - if ( - token == self.dict["tgt"].eos_word - or len(states.segments.target) > self.max_len - ): - return DEFAULT_EOS - - if BOS_PREFIX == token: - return None - if token[0] == BOS_PREFIX: - return token[1:] - else: - return token - - def policy(self, states): - - if not getattr(states, "encoder_states", None): - # No encoder states, read a token first - return READ_ACTION - - # encode previous predicted target tokens - tgt_indices = self.to_device( - torch.LongTensor( - [self.model.decoder.dictionary.eos()] - + [ - self.dict['tgt'].index(x) - for x in states.units.target.value - if x is not None - ] - ).unsqueeze(0) - ) - - # Current steps - states.incremental_states["steps"] = { - "src": states.encoder_states["encoder_out"][0].size(0), - "tgt": 1 + len(states.units.target), - } - - # Online only means the reading is not finished - states.incremental_states["online"]["only"] = ( - torch.BoolTensor([not states.finish_read()]) - ) - - x, outputs = self.model.decoder.forward( - prev_output_tokens=tgt_indices, - encoder_out=states.encoder_states, - incremental_state=states.incremental_states, - ) - - states.decoder_out = x - - torch.cuda.empty_cache() - - if outputs.action == 0: - return READ_ACTION - else: - return WRITE_ACTION - - def predict(self, states): - # Predict target token from decoder states - decoder_states = states.decoder_out - - lprobs = self.model.get_normalized_probs( - [decoder_states[:, -1:]], log_probs=True - ) - - index = lprobs.argmax(dim=-1)[0, 0].item() - - if index != self.dict['tgt'].eos_index: - token = self.dict['tgt'].string([index]) - else: - token = self.dict['tgt'].eos_word - - return token diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/models/__init__.py b/kosmos-g/fairseq/examples/simultaneous_translation/models/__init__.py deleted file mode 100644 index 257a96593..000000000 --- a/kosmos-g/fairseq/examples/simultaneous_translation/models/__init__.py 
+++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - - -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - model_name = file[: file.find(".py")] - importlib.import_module( - "examples.simultaneous_translation.models." + model_name - ) diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/models/convtransformer_simul_trans.py b/kosmos-g/fairseq/examples/simultaneous_translation/models/convtransformer_simul_trans.py deleted file mode 100644 index 4a26422f6..000000000 --- a/kosmos-g/fairseq/examples/simultaneous_translation/models/convtransformer_simul_trans.py +++ /dev/null @@ -1,204 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. - -from fairseq import checkpoint_utils -from fairseq.models import ( - register_model, - register_model_architecture, -) -from fairseq.models.speech_to_text import ( - ConvTransformerModel, - convtransformer_espnet, - ConvTransformerEncoder, -) -from fairseq.models.speech_to_text.modules.augmented_memory_attention import ( - augmented_memory, - SequenceEncoder, - AugmentedMemoryConvTransformerEncoder, -) - -from torch import nn, Tensor -from typing import Dict, List -from fairseq.models.speech_to_text.modules.emformer import NoSegAugmentedMemoryTransformerEncoderLayer - -@register_model("convtransformer_simul_trans") -class SimulConvTransformerModel(ConvTransformerModel): - """ - Implementation of the paper: - - SimulMT to SimulST: Adapting Simultaneous Text Translation to - End-to-End Simultaneous Speech Translation - - https://www.aclweb.org/anthology/2020.aacl-main.58.pdf - """ - - @staticmethod - def add_args(parser): - super(SimulConvTransformerModel, SimulConvTransformerModel).add_args(parser) - parser.add_argument( - "--train-monotonic-only", - action="store_true", - default=False, - help="Only train monotonic attention", - ) - - @classmethod - def build_decoder(cls, args, task, embed_tokens): - tgt_dict = task.tgt_dict - - from examples.simultaneous_translation.models.transformer_monotonic_attention import ( - TransformerMonotonicDecoder, - ) - - decoder = TransformerMonotonicDecoder(args, tgt_dict, embed_tokens) - - if getattr(args, "load_pretrained_decoder_from", None): - decoder = checkpoint_utils.load_pretrained_component_from_model( - component=decoder, checkpoint=args.load_pretrained_decoder_from - ) - return decoder - - -@register_model_architecture( - "convtransformer_simul_trans", "convtransformer_simul_trans_espnet" -) -def convtransformer_simul_trans_espnet(args): - convtransformer_espnet(args) - - -@register_model("convtransformer_augmented_memory") -@augmented_memory -class AugmentedMemoryConvTransformerModel(SimulConvTransformerModel): - @classmethod - def build_encoder(cls, args): - encoder = SequenceEncoder(args, AugmentedMemoryConvTransformerEncoder(args)) - - if getattr(args, "load_pretrained_encoder_from", None) is not None: - encoder = checkpoint_utils.load_pretrained_component_from_model( - component=encoder, checkpoint=args.load_pretrained_encoder_from - ) - - return encoder - - -@register_model_architecture( - 
"convtransformer_augmented_memory", "convtransformer_augmented_memory" -) -def augmented_memory_convtransformer_espnet(args): - convtransformer_espnet(args) - - -# ============================================================================ # -# Convtransformer -# with monotonic attention decoder -# with emformer encoder -# ============================================================================ # - - -class ConvTransformerEmformerEncoder(ConvTransformerEncoder): - def __init__(self, args): - super().__init__(args) - stride = self.conv_layer_stride(args) - trf_left_context = args.segment_left_context // stride - trf_right_context = args.segment_right_context // stride - context_config = [trf_left_context, trf_right_context] - self.transformer_layers = nn.ModuleList( - [ - NoSegAugmentedMemoryTransformerEncoderLayer( - input_dim=args.encoder_embed_dim, - num_heads=args.encoder_attention_heads, - ffn_dim=args.encoder_ffn_embed_dim, - num_layers=args.encoder_layers, - dropout_in_attn=args.dropout, - dropout_on_attn=args.dropout, - dropout_on_fc1=args.dropout, - dropout_on_fc2=args.dropout, - activation_fn=args.activation_fn, - context_config=context_config, - segment_size=args.segment_length, - max_memory_size=args.max_memory_size, - scaled_init=True, # TODO: use constant for now. - tanh_on_mem=args.amtrf_tanh_on_mem, - ) - ] - ) - self.conv_transformer_encoder = ConvTransformerEncoder(args) - - def forward(self, src_tokens, src_lengths): - encoder_out: Dict[str, List[Tensor]] = self.conv_transformer_encoder(src_tokens, src_lengths.to(src_tokens.device)) - output = encoder_out["encoder_out"][0] - encoder_padding_masks = encoder_out["encoder_padding_mask"] - - return { - "encoder_out": [output], - # This is because that in the original implementation - # the output didn't consider the last segment as right context. 
- "encoder_padding_mask": [encoder_padding_masks[0][:, : output.size(0)]] if len(encoder_padding_masks) > 0 - else [], - "encoder_embedding": [], - "encoder_states": [], - "src_tokens": [], - "src_lengths": [], - } - - @staticmethod - def conv_layer_stride(args): - # TODO: make it configurable from the args - return 4 - - -@register_model("convtransformer_emformer") -class ConvtransformerEmformer(SimulConvTransformerModel): - @staticmethod - def add_args(parser): - super(ConvtransformerEmformer, ConvtransformerEmformer).add_args(parser) - - parser.add_argument( - "--segment-length", - type=int, - metavar="N", - help="length of each segment (not including left context / right context)", - ) - parser.add_argument( - "--segment-left-context", - type=int, - help="length of left context in a segment", - ) - parser.add_argument( - "--segment-right-context", - type=int, - help="length of right context in a segment", - ) - parser.add_argument( - "--max-memory-size", - type=int, - default=-1, - help="Right context for the segment.", - ) - parser.add_argument( - "--amtrf-tanh-on-mem", - default=False, - action="store_true", - help="whether to use tanh on memory vector", - ) - - @classmethod - def build_encoder(cls, args): - encoder = ConvTransformerEmformerEncoder(args) - if getattr(args, "load_pretrained_encoder_from", None): - encoder = checkpoint_utils.load_pretrained_component_from_model( - component=encoder, checkpoint=args.load_pretrained_encoder_from - ) - return encoder - - -@register_model_architecture( - "convtransformer_emformer", - "convtransformer_emformer", -) -def convtransformer_emformer_base(args): - convtransformer_espnet(args) diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/models/transformer_monotonic_attention.py b/kosmos-g/fairseq/examples/simultaneous_translation/models/transformer_monotonic_attention.py deleted file mode 100644 index 7b9414b0e..000000000 --- a/kosmos-g/fairseq/examples/simultaneous_translation/models/transformer_monotonic_attention.py +++ /dev/null @@ -1,302 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from typing import Dict, List, NamedTuple, Optional - -import torch -import torch.nn as nn -from examples.simultaneous_translation.modules.monotonic_transformer_layer import ( - TransformerMonotonicDecoderLayer, - TransformerMonotonicEncoderLayer, -) -from fairseq.models import ( - register_model, - register_model_architecture, -) -from fairseq.models.transformer import ( - TransformerModel, - TransformerEncoder, - TransformerDecoder, - base_architecture, - transformer_iwslt_de_en, - transformer_vaswani_wmt_en_de_big, - tiny_architecture -) -from torch import Tensor - -DEFAULT_MAX_SOURCE_POSITIONS = 1024 -DEFAULT_MAX_TARGET_POSITIONS = 1024 -READ_ACTION = 0 -WRITE_ACTION = 1 - -TransformerMonotonicDecoderOut = NamedTuple( - "TransformerMonotonicDecoderOut", - [ - ("action", int), - ("p_choose", Optional[Tensor]), - ("attn_list", Optional[List[Optional[Dict[str, Tensor]]]]), - ("encoder_out", Optional[Dict[str, List[Tensor]]]), - ("encoder_padding_mask", Optional[Tensor]), - ], -) - - -@register_model("transformer_unidirectional") -class TransformerUnidirectionalModel(TransformerModel): - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - return TransformerMonotonicEncoder(args, src_dict, embed_tokens) - - -@register_model("transformer_monotonic") -class TransformerModelSimulTrans(TransformerModel): - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - return TransformerMonotonicEncoder(args, src_dict, embed_tokens) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - return TransformerMonotonicDecoder(args, tgt_dict, embed_tokens) - - -class TransformerMonotonicEncoder(TransformerEncoder): - def __init__(self, args, dictionary, embed_tokens): - super().__init__(args, dictionary, embed_tokens) - - self.dictionary = dictionary - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - TransformerMonotonicEncoderLayer(args) - for i in range(args.encoder_layers) - ] - ) - - -class TransformerMonotonicDecoder(TransformerDecoder): - """ - Transformer decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`TransformerDecoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). 
- """ - - def __init__(self, args, dictionary, embed_tokens, no_encoder_attn=False): - super().__init__(args, dictionary, embed_tokens, no_encoder_attn=False) - - self.dictionary = dictionary - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - TransformerMonotonicDecoderLayer(args) - for _ in range(args.decoder_layers) - ] - ) - self.policy_criterion = getattr(args, "policy_criterion", "any") - self.num_updates = None - - def set_num_updates(self, num_updates): - self.num_updates = num_updates - - def pre_attention( - self, - prev_output_tokens, - encoder_out_dict: Dict[str, List[Tensor]], - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - ): - positions = ( - self.embed_positions( - prev_output_tokens, - incremental_state=incremental_state, - ) - if self.embed_positions is not None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - encoder_out = encoder_out_dict["encoder_out"][0] - - if "encoder_padding_mask" in encoder_out_dict: - encoder_padding_mask = ( - encoder_out_dict["encoder_padding_mask"][0] - if encoder_out_dict["encoder_padding_mask"] - and len(encoder_out_dict["encoder_padding_mask"]) > 0 - else None - ) - else: - encoder_padding_mask = None - - return x, encoder_out, encoder_padding_mask - - def post_attention(self, x): - if self.layer_norm is not None: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - if self.project_out_dim is not None: - x = self.project_out_dim(x) - - return x - - def clean_cache( - self, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]], - end_id: Optional[int] = None, - ): - """ - Clean cache in the monotonic layers. - The cache is generated because of a forward pass of decoder has run but no prediction, - so that the self attention key value in decoder is written in the incremental state. - end_id is the last idx of the layers - """ - if end_id is None: - end_id = len(self.layers) - - for index, layer in enumerate(self.layers): - if index < end_id: - layer.prune_incremental_state(incremental_state) - - def extract_features( - self, - prev_output_tokens, - encoder_out: Optional[Dict[str, List[Tensor]]], - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - full_context_alignment: bool = False, # unused - alignment_layer: Optional[int] = None, # unused - alignment_heads: Optional[int] = None, # unsed - ): - """ - Similar to *forward* but only return features. 
- - Returns: - tuple: - - the decoder's features of shape `(batch, tgt_len, embed_dim)` - - a dictionary with any model-specific outputs - """ - # incremental_state = None - assert encoder_out is not None - (x, encoder_outs, encoder_padding_mask) = self.pre_attention( - prev_output_tokens, encoder_out, incremental_state - ) - attn = None - inner_states = [x] - attn_list: List[Optional[Dict[str, Tensor]]] = [] - - p_choose = torch.tensor([1.0]) - - for i, layer in enumerate(self.layers): - - x, attn, _ = layer( - x=x, - encoder_out=encoder_outs, - encoder_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - self_attn_mask=self.buffered_future_mask(x) - if incremental_state is None - else None, - ) - - inner_states.append(x) - attn_list.append(attn) - - if incremental_state is not None: - if_online = incremental_state["online"]["only"] - assert if_online is not None - if if_online.to(torch.bool): - # Online indicates that the encoder states are still changing - assert attn is not None - if self.policy_criterion == "any": - # Any head decide to read than read - head_read = layer.encoder_attn._get_monotonic_buffer(incremental_state)["head_read"] - assert head_read is not None - if head_read.any(): - # We need to prune the last self_attn saved_state - # if model decide not to read - # otherwise there will be duplicated saved_state - self.clean_cache(incremental_state, i + 1) - - return x, TransformerMonotonicDecoderOut( - action=0, - p_choose=p_choose, - attn_list=None, - encoder_out=None, - encoder_padding_mask=None, - ) - - x = self.post_attention(x) - - return x, TransformerMonotonicDecoderOut( - action=1, - p_choose=p_choose, - attn_list=attn_list, - encoder_out=encoder_out, - encoder_padding_mask=encoder_padding_mask, - ) - - -@register_model_architecture("transformer_monotonic", "transformer_monotonic") -def base_monotonic_architecture(args): - base_architecture(args) - args.encoder_unidirectional = getattr(args, "encoder_unidirectional", False) - - -@register_model_architecture( - "transformer_monotonic", "transformer_monotonic_iwslt_de_en" -) -def transformer_monotonic_iwslt_de_en(args): - transformer_iwslt_de_en(args) - base_monotonic_architecture(args) - - -# parameters used in the "Attention Is All You Need" paper (Vaswani et al., 2017) -@register_model_architecture( - "transformer_monotonic", "transformer_monotonic_vaswani_wmt_en_de_big" -) -def transformer_monotonic_vaswani_wmt_en_de_big(args): - transformer_vaswani_wmt_en_de_big(args) - - -@register_model_architecture( - "transformer_monotonic", "transformer_monotonic_vaswani_wmt_en_fr_big" -) -def transformer_monotonic_vaswani_wmt_en_fr_big(args): - transformer_monotonic_vaswani_wmt_en_fr_big(args) - - -@register_model_architecture( - "transformer_unidirectional", "transformer_unidirectional_iwslt_de_en" -) -def transformer_unidirectional_iwslt_de_en(args): - transformer_iwslt_de_en(args) - - -@register_model_architecture("transformer_monotonic", "transformer_monotonic_tiny") -def monotonic_tiny_architecture(args): - tiny_architecture(args) - base_monotonic_architecture(args) diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/modules/__init__.py b/kosmos-g/fairseq/examples/simultaneous_translation/modules/__init__.py deleted file mode 100644 index f5ea180f9..000000000 --- a/kosmos-g/fairseq/examples/simultaneous_translation/modules/__init__.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import os -import importlib -from fairseq import registry - -( - build_monotonic_attention, - register_monotonic_attention, - MONOTONIC_ATTENTION_REGISTRY, - _, -) = registry.setup_registry("--simul-type") - -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - model_name = file[: file.find(".py")] - importlib.import_module( - "examples.simultaneous_translation.modules." + model_name - ) diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/modules/fixed_pre_decision.py b/kosmos-g/fairseq/examples/simultaneous_translation/modules/fixed_pre_decision.py deleted file mode 100644 index 3991414ae..000000000 --- a/kosmos-g/fairseq/examples/simultaneous_translation/modules/fixed_pre_decision.py +++ /dev/null @@ -1,190 +0,0 @@ -from functools import partial - -import torch -from torch import Tensor -import math -import torch.nn.functional as F - -from . import register_monotonic_attention -from .monotonic_multihead_attention import ( - MonotonicAttention, - MonotonicInfiniteLookbackAttention, - WaitKAttention -) -from typing import Dict, Optional - - -def fixed_pooling_monotonic_attention(monotonic_attention): - def create_model(monotonic_attention, klass): - class FixedStrideMonotonicAttention(monotonic_attention): - def __init__(self, args): - self.waitk_lagging = 0 - self.num_heads = 0 - self.noise_mean = 0.0 - self.noise_var = 0.0 - super().__init__(args) - self.pre_decision_type = args.fixed_pre_decision_type - self.pre_decision_ratio = args.fixed_pre_decision_ratio - self.pre_decision_pad_threshold = args.fixed_pre_decision_pad_threshold - assert self.pre_decision_ratio > 1 - - if args.fixed_pre_decision_type == "average": - self.pooling_layer = torch.nn.AvgPool1d( - kernel_size=self.pre_decision_ratio, - stride=self.pre_decision_ratio, - ceil_mode=True, - ) - elif args.fixed_pre_decision_type == "last": - - def last(key): - if key.size(2) < self.pre_decision_ratio: - return key - else: - k = key[ - :, - :, - self.pre_decision_ratio - 1:: self.pre_decision_ratio, - ].contiguous() - if key.size(-1) % self.pre_decision_ratio != 0: - k = torch.cat([k, key[:, :, -1:]], dim=-1).contiguous() - return k - - self.pooling_layer = last - else: - raise NotImplementedError - - @staticmethod - def add_args(parser): - super( - FixedStrideMonotonicAttention, FixedStrideMonotonicAttention - ).add_args(parser) - parser.add_argument( - "--fixed-pre-decision-ratio", - type=int, - required=True, - help=( - "Ratio for the fixed pre-decision," - "indicating how many encoder steps will start" - "simultaneous decision making process." 
- ), - ) - parser.add_argument( - "--fixed-pre-decision-type", - default="average", - choices=["average", "last"], - help="Pooling type", - ) - parser.add_argument( - "--fixed-pre-decision-pad-threshold", - type=float, - default=0.3, - help="If a part of the sequence has pad" - ",the threshold the pooled part is a pad.", - ) - - def insert_zeros(self, x): - bsz_num_heads, tgt_len, src_len = x.size() - stride = self.pre_decision_ratio - weight = F.pad(torch.ones(1, 1, 1).to(x), (stride - 1, 0)) - x_upsample = F.conv_transpose1d( - x.view(-1, src_len).unsqueeze(1), - weight, - stride=stride, - padding=0, - ) - return x_upsample.squeeze(1).view(bsz_num_heads, tgt_len, -1) - - def p_choose( - self, - query: Optional[Tensor], - key: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - ): - assert key is not None - assert query is not None - src_len = key.size(0) - tgt_len = query.size(0) - batch_size = query.size(1) - - key_pool = self.pooling_layer(key.transpose(0, 2)).transpose(0, 2) - - if key_padding_mask is not None: - key_padding_mask_pool = ( - self.pooling_layer(key_padding_mask.unsqueeze(0).float()) - .squeeze(0) - .gt(self.pre_decision_pad_threshold) - ) - # Make sure at least one element is not pad - key_padding_mask_pool[:, 0] = 0 - else: - key_padding_mask_pool = None - - if incremental_state is not None: - # The floor instead of ceil is used for inference - # But make sure the length key_pool at least 1 - if ( - max(1, math.floor(key.size(0) / self.pre_decision_ratio)) - ) < key_pool.size(0): - key_pool = key_pool[:-1] - if key_padding_mask_pool is not None: - key_padding_mask_pool = key_padding_mask_pool[:-1] - - p_choose_pooled = self.p_choose_from_qk( - query, - key_pool, - key_padding_mask_pool, - incremental_state=incremental_state, - ) - - # Upsample, interpolate zeros - p_choose = self.insert_zeros(p_choose_pooled) - - if p_choose.size(-1) < src_len: - # Append zeros if the upsampled p_choose is shorter than src_len - p_choose = torch.cat( - [ - p_choose, - torch.zeros( - p_choose.size(0), - tgt_len, - src_len - p_choose.size(-1) - ).to(p_choose) - ], - dim=2 - ) - else: - # can be larger than src_len because we used ceil before - p_choose = p_choose[:, :, :src_len] - p_choose[:, :, -1] = p_choose_pooled[:, :, -1] - - assert list(p_choose.size()) == [ - batch_size * self.num_heads, - tgt_len, - src_len, - ] - - return p_choose - - FixedStrideMonotonicAttention.__name__ = klass.__name__ - return FixedStrideMonotonicAttention - - return partial(create_model, monotonic_attention) - - -@register_monotonic_attention("waitk_fixed_pre_decision") -@fixed_pooling_monotonic_attention(WaitKAttention) -class WaitKAttentionFixedStride: - pass - - -@register_monotonic_attention("hard_aligned_fixed_pre_decision") -@fixed_pooling_monotonic_attention(MonotonicAttention) -class MonotonicAttentionFixedStride: - pass - - -@register_monotonic_attention("infinite_lookback_fixed_pre_decision") -@fixed_pooling_monotonic_attention(MonotonicInfiniteLookbackAttention) -class MonotonicInfiniteLookbackAttentionFixedStride: - pass diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/modules/monotonic_multihead_attention.py b/kosmos-g/fairseq/examples/simultaneous_translation/modules/monotonic_multihead_attention.py deleted file mode 100644 index 11ef60c94..000000000 --- a/kosmos-g/fairseq/examples/simultaneous_translation/modules/monotonic_multihead_attention.py +++ /dev/null @@ -1,519 +0,0 @@ -# 
Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -from torch import Tensor -import torch.nn as nn - -from examples.simultaneous_translation.utils.p_choose_strategy import ( - learnable_p_choose, - waitk_p_choose -) - -from examples.simultaneous_translation.utils.monotonic_attention import ( - expected_alignment_from_p_choose, - expected_soft_attention, - mass_preservation, -) -from fairseq.modules import MultiheadAttention - -from . import register_monotonic_attention -from typing import Dict, Optional - - -@register_monotonic_attention("hard_aligned") -class MonotonicAttention(MultiheadAttention): - """ - Abstract class of monotonic attentions - """ - k_in_proj: Dict[str, nn.Linear] - q_in_proj: Dict[str, nn.Linear] - - def __init__(self, args): - super().__init__( - embed_dim=args.decoder_embed_dim, - num_heads=args.decoder_attention_heads, - kdim=getattr(args, "encoder_embed_dim", None), - vdim=getattr(args, "encoder_embed_dim", None), - dropout=args.attention_dropout, - encoder_decoder_attention=True, - ) - - self.soft_attention = False - - self.eps = getattr(args, "attention_eps", True) - self.mass_preservation = getattr(args, "mass_preservation", True) - - self.noise_type = args.noise_type - self.noise_mean = args.noise_mean - self.noise_var = args.noise_var - - self.energy_bias_init = args.energy_bias_init - self.energy_bias = ( - nn.Parameter(self.energy_bias_init * torch.ones([1])) - if args.energy_bias is True - else 0 - ) - - self.k_in_proj = {"monotonic": self.k_proj} - self.q_in_proj = {"monotonic": self.q_proj} - self.chunk_size = None - - @staticmethod - def add_args(parser): - # fmt: off - parser.add_argument('--no-mass-preservation', action="store_false", - dest="mass_preservation", - help='Do not stay on the last token when decoding') - parser.add_argument('--mass-preservation', action="store_true", - dest="mass_preservation", - help='Stay on the last token when decoding') - parser.set_defaults(mass_preservation=True) - parser.add_argument('--noise-var', type=float, default=1.0, - help='Variance of discretness noise') - parser.add_argument('--noise-mean', type=float, default=0.0, - help='Mean of discretness noise') - parser.add_argument('--noise-type', type=str, default="flat", - help='Type of discretness noise') - parser.add_argument('--energy-bias', action="store_true", - default=False, - help='Bias for energy') - parser.add_argument('--energy-bias-init', type=float, default=-2.0, - help='Initial value of the bias for energy') - parser.add_argument('--attention-eps', type=float, default=1e-6, - help='Epsilon when calculating expected attention') - - def energy_from_qk( - self, - query: Tensor, - key: Tensor, - energy_type: str, - key_padding_mask: Optional[Tensor] = None, - bias: int = 0 - ): - """ - Compute energy from query and key - q_func_value is a tuple looks like - (q_proj_func, q_tensor) - q_tensor size: bsz, tgt_len, emb_dim - k_tensor size: bsz, src_len, emb_dim - key_padding_mask size: bsz, src_len - attn_mask: bsz, src_len - """ - - length, bsz, _ = query.size() - q = self.q_in_proj[energy_type].forward(query) - q = ( - q.contiguous() - .view(length, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - q = q * self.scaling - length, bsz, _ = key.size() - k = self.k_in_proj[energy_type].forward(key) - k = ( - k.contiguous() - .view(length, bsz * self.num_heads, self.head_dim) - .transpose(0, 
1) - ) - - energy = torch.bmm(q, k.transpose(1, 2)) + bias - - if key_padding_mask is not None: - energy = energy.masked_fill( - key_padding_mask.unsqueeze(1).to(torch.bool), - - float("inf") - ) - - return energy - - def p_choose_from_qk(self, query, key, key_padding_mask, incremental_states=None): - monotonic_energy = self.energy_from_qk( - query, - key, - "monotonic", - key_padding_mask=key_padding_mask, - bias=self.energy_bias, - ) - - p_choose = learnable_p_choose( - monotonic_energy, - self.noise_mean, - self.noise_var, - self.training - ) - return p_choose - - def p_choose(self, query, key, key_padding_mask, incremental_states=None): - return self.p_choose_from_qk(self, query, key, key_padding_mask) - - def monotonic_attention_process_infer( - self, - query: Optional[Tensor], - key: Optional[Tensor], - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]], - ): - """ - Monotonic attention at inference time - Notice that this function is designed for simuleval not sequence_generator - """ - assert query is not None - assert key is not None - - if query.size(1) != 1: - raise RuntimeError( - "Simultaneous translation models don't support batch decoding." - ) - # 1. compute stepwise probability - p_choose = self.p_choose( - query, key, None, incremental_state - ).squeeze(1) - - # 2. Compute the alpha - src_len = key.size(0) - # Maximum steps allows in this iteration - max_steps = src_len - 1 if self.mass_preservation else src_len - monotonic_cache = self._get_monotonic_buffer(incremental_state) - # Step for each head - monotonic_step = monotonic_cache.get( - 'head_step', - p_choose.new_zeros(1, self.num_heads).long() - ) - assert monotonic_step is not None - finish_read = monotonic_step.eq(max_steps) - p_choose_i = torch.tensor(1) - - while finish_read.sum().item() < self.num_heads: - # p_choose: self.num_heads, src_len - # only choose the p at monotonic steps - # p_choose_i: 1, self.num_heads - p_choose_i = ( - p_choose.gather( - 1, - monotonic_step - .clamp(0, src_len - 1), - ) - ) - - read_one_step = ( - (p_choose_i < 0.5) - .type_as(monotonic_step) - .masked_fill(finish_read, 0) - ) - # 1 x bsz - # sample actions on unfinished seq - # 0 means stay, finish reading - # 1 means leave, continue reading - - monotonic_step += read_one_step - - finish_read = monotonic_step.eq(max_steps) | (read_one_step == 0) - - # p_choose at last steps - p_choose_i = ( - p_choose.gather( - 1, - monotonic_step - .clamp(0, src_len - 1), - ) - ) - - monotonic_cache["head_step"] = monotonic_step - # Whether a head is looking for new input - monotonic_cache["head_read"] = ( - monotonic_step.eq(max_steps) & (p_choose_i < 0.5) - ) - self._set_monotonic_buffer(incremental_state, monotonic_cache) - - # 2. Update alpha - alpha = ( - p_choose - .new_zeros([self.num_heads, src_len]) - .scatter( - 1, - (monotonic_step) - .view(self.num_heads, 1).clamp(0, src_len - 1), - 1 - ) - ) - - if not self.mass_preservation: - alpha = alpha.masked_fill( - (monotonic_step == max_steps) - .view(self.num_heads, 1), - 0 - ) - - # 4. 
Compute Beta - if self.soft_attention: - monotonic_step = monotonic_step.t() - beta_mask = torch.arange(src_len).expand_as(alpha).gt(monotonic_step).unsqueeze(1) - # If it's soft attention just do softmax on current context - soft_energy = self.energy_from_qk( - query, - key, - "soft" - ) - beta = torch.nn.functional.softmax( - soft_energy.masked_fill(beta_mask, -float("inf")), dim=-1 - ) - # It could happen that a head doesn't move at all - beta = beta.masked_fill(monotonic_step.eq(0).unsqueeze(1), 0) - else: - # If it's hard attention just select the last state - beta = alpha - - return p_choose, alpha, beta - - def monotonic_attention_process_train( - self, - query: Optional[Tensor], - key: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - ): - """ - Calculating monotonic attention process for training - Including: - stepwise probability: p_choose - expected hard alignment: alpha - expected soft attention: beta - """ - assert query is not None - assert key is not None - - # 1. compute stepwise probability - p_choose = self.p_choose_from_qk(query, key, key_padding_mask) - - # 2. compute expected_alignment - alpha = expected_alignment_from_p_choose( - p_choose, - key_padding_mask, - eps=self.eps, - ) - - if self.mass_preservation: - alpha = mass_preservation( - alpha, key_padding_mask - ) - - # 3. compute expected soft attention (soft aligned model only) - if self.soft_attention: - soft_energy = self.energy_from_qk( - query, - key, - "soft", - key_padding_mask=None, - ) - - beta = expected_soft_attention( - alpha, - soft_energy, - padding_mask=key_padding_mask, - chunk_size=self.chunk_size, - eps=self.eps, - ) - else: - beta = alpha - soft_energy = alpha - - return p_choose, alpha, beta, soft_energy - - def forward( - self, - query: Optional[Tensor], - key: Optional[Tensor], - value: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - attn_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - need_weights: bool = True, static_kv: bool = False, need_head_weights: bool = False, - ): - """ - query: tgt_len, bsz, embed_dim - key: src_len, bsz, embed_dim - value: src_len, bsz, embed_dim - """ - - assert attn_mask is None - assert query is not None - assert key is not None - assert value is not None - - tgt_len, bsz, embed_dim = query.size() - src_len = value.size(0) - - if key_padding_mask is not None: - assert not key_padding_mask[:, 0].any(), ( - "Only right padding is supported." 
- ) - key_padding_mask = ( - key_padding_mask - .unsqueeze(1) - .expand([bsz, self.num_heads, src_len]) - .contiguous() - .view(-1, src_len) - ) - - if incremental_state is not None: - # Inference - ( - p_choose, alpha, beta - ) = self.monotonic_attention_process_infer( - query, key, incremental_state - ) - soft_energy = beta - else: - # Train - ( - p_choose, alpha, beta, soft_energy - ) = self.monotonic_attention_process_train( - query, key, key_padding_mask - ) - - v = self.v_proj(value) - length, bsz, _ = v.size() - v = ( - v.contiguous() - .view(length, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - attn = torch.bmm(beta.type_as(v), v) - - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - - attn = self.out_proj(attn) - - p_choose = p_choose.view(bsz, self.num_heads, tgt_len, src_len) - alpha = alpha.view(bsz, self.num_heads, tgt_len, src_len) - beta = beta.view(bsz, self.num_heads, tgt_len, src_len) - - return attn, { - "p_choose": p_choose, - "alpha": alpha, - "beta": beta, - } - - def _get_monotonic_buffer(self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]]): - maybe_incremental_state = self.get_incremental_state( - incremental_state, - 'monotonic', - ) - if maybe_incremental_state is None: - typed_empty_dict: Dict[str, Optional[Tensor]] = {} - return typed_empty_dict - else: - return maybe_incremental_state - - def _set_monotonic_buffer(self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]], buffer: Dict[str, Optional[Tensor]]): - self.set_incremental_state( - incremental_state, - 'monotonic', - buffer, - ) - - -@register_monotonic_attention("infinite_lookback") -class MonotonicInfiniteLookbackAttention( - MonotonicAttention -): - def __init__(self, args): - super().__init__(args) - self.soft_attention = True - self.init_soft_attention() - - def init_soft_attention(self): - self.k_proj_soft = nn.Linear(self.kdim, self.embed_dim, bias=True) - self.q_proj_soft = nn.Linear(self.embed_dim, self.embed_dim, bias=True) - self.k_in_proj["soft"] = self.k_proj_soft - self.q_in_proj["soft"] = self.q_proj_soft - - if self.qkv_same_dim: - # Empirically observed the convergence to be much better with - # the scaled initialization - nn.init.xavier_uniform_( - self.k_in_proj["soft"].weight, gain=1 / math.sqrt(2) - ) - nn.init.xavier_uniform_( - self.q_in_proj["soft"].weight, gain=1 / math.sqrt(2) - ) - else: - nn.init.xavier_uniform_(self.k_in_proj["soft"].weight) - nn.init.xavier_uniform_(self.q_in_proj["soft"].weight) - - -@register_monotonic_attention("waitk") -class WaitKAttention( - MonotonicInfiniteLookbackAttention -): - """ - STACL: Simultaneous Translation with Implicit Anticipation and - Controllable Latency using Prefix-to-Prefix Framework - https://www.aclweb.org/anthology/P19-1289/ - """ - def __init__(self, args): - super().__init__(args) - self.q_in_proj["soft"] = self.q_in_proj["monotonic"] - self.k_in_proj["soft"] = self.k_in_proj["monotonic"] - - self.waitk_lagging = args.waitk_lagging - assert self.waitk_lagging > 0, ( - f"Lagging has to been larger than 0, get {self.waitk_lagging}." 
- ) - - @staticmethod - def add_args(parser): - super( - MonotonicInfiniteLookbackAttention, - MonotonicInfiniteLookbackAttention - ).add_args(parser) - - parser.add_argument( - "--waitk-lagging", type=int, required=True, help="Wait K lagging" - ) - - def p_choose_from_qk( - self, - query: Optional[Tensor], - key: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - ): - assert query is not None - assert key is not None - - p_choose = waitk_p_choose( - tgt_len=query.size(0), - src_len=key.size(0), - bsz=query.size(1) * self.num_heads, - waitk_lagging=self.waitk_lagging, - key_padding_mask=key_padding_mask, - incremental_state=incremental_state, - ) - - return p_choose.to(query) - - -@register_monotonic_attention("chunkwise") -class ChunkwiseAttention( - MonotonicInfiniteLookbackAttention -): - def __init__(self, args): - super().__init__(args) - self.chunk_size = args.mocha_chunk_size - assert self.chunk_size > 1 - - @staticmethod - def add_args(parser): - super( - MonotonicInfiniteLookbackAttention - ).add_args(parser) - - parser.add_argument( - "--mocha-chunk-size", type=int, - required=True, help="Mocha chunk size" - ) diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/modules/monotonic_transformer_layer.py b/kosmos-g/fairseq/examples/simultaneous_translation/modules/monotonic_transformer_layer.py deleted file mode 100644 index 94bd71fb9..000000000 --- a/kosmos-g/fairseq/examples/simultaneous_translation/modules/monotonic_transformer_layer.py +++ /dev/null @@ -1,182 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.modules import TransformerDecoderLayer, TransformerEncoderLayer - -from . import build_monotonic_attention - -from typing import Dict, Optional, List - -from torch import Tensor -import torch - - -class TransformerMonotonicEncoderLayer(TransformerEncoderLayer): - def forward(self, x, encoder_padding_mask): - seq_len, _, _ = x.size() - attn_mask = x.new_ones([seq_len, seq_len]).triu(1) - attn_mask = attn_mask.masked_fill(attn_mask.bool(), float("-inf")) - return super().forward(x, encoder_padding_mask, attn_mask) - - -class TransformerMonotonicDecoderLayer(TransformerDecoderLayer): - def __init__(self, args): - super().__init__(args) - - assert args.simul_type is not None, "A --simul-type is needed." 
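The assignment just below resolves `--simul-type` through the registry set up in `modules/__init__.py` (`registry.setup_registry("--simul-type")`) and instantiates the matching attention class. A minimal, self-contained sketch of that lookup; `REGISTRY`, the stub class, and the `Namespace` arguments are illustrative stand-ins:

```python
# Sketch of how build_monotonic_attention(args) resolves --simul-type.
# register_monotonic_attention and the registered names ("waitk",
# "hard_aligned", "infinite_lookback", ...) come from the surrounding
# files; REGISTRY and WaitKAttentionStub are stand-ins for illustration.
from argparse import Namespace

REGISTRY = {}

def register_monotonic_attention(name):
    def wrapper(cls):
        REGISTRY[name] = cls
        return cls
    return wrapper

def build_monotonic_attention(args):
    # fairseq's registry performs the same lookup keyed on args.simul_type
    return REGISTRY[args.simul_type](args)

@register_monotonic_attention("waitk")
class WaitKAttentionStub:
    def __init__(self, args):
        self.waitk_lagging = args.waitk_lagging

attn = build_monotonic_attention(Namespace(simul_type="waitk", waitk_lagging=3))
assert attn.waitk_lagging == 3
```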
- self.encoder_attn = build_monotonic_attention(args) - - def prune_incremental_state( - self, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ): - input_buffer = self.self_attn._get_input_buffer(incremental_state) - for key in ["prev_key", "prev_value"]: - input_buffer_key = input_buffer[key] - assert input_buffer_key is not None - if input_buffer_key.size(2) > 1: - input_buffer[key] = input_buffer_key[:, :, :-1, :] - else: - typed_empty_dict: Dict[str, Optional[Tensor]] = {} - input_buffer = typed_empty_dict - break - assert incremental_state is not None - self.self_attn._set_input_buffer(incremental_state, input_buffer) - - def forward( - self, - x, - encoder_out: Optional[Tensor] = None, - encoder_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - prev_self_attn_state: Optional[List[Tensor]] = None, - prev_attn_state: Optional[List[Tensor]] = None, - self_attn_mask: Optional[Tensor] = None, - self_attn_padding_mask: Optional[Tensor] = None, - need_attn: bool = False, - need_head_weights: bool = False, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor, optional): binary - ByteTensor of shape `(batch, src_len)` where padding - elements are indicated by ``1``. - need_attn (bool, optional): return attention weights - need_head_weights (bool, optional): return attention weights - for each head (default: return average over heads). - - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - if need_head_weights: - need_attn = True - - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - if prev_self_attn_state is not None: - prev_key, prev_value = prev_self_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_self_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_self_attn_state[2] - assert incremental_state is not None - self.self_attn._set_input_buffer(incremental_state, saved_state) - _self_attn_input_buffer = self.self_attn._get_input_buffer(incremental_state) - if self.cross_self_attention and not ( - incremental_state is not None - and _self_attn_input_buffer is not None - and "prev_key" in _self_attn_input_buffer - ): - if self_attn_mask is not None: - assert encoder_out is not None - self_attn_mask = torch.cat( - (x.new_zeros(x.size(0), encoder_out.size(0)), self_attn_mask), dim=1 - ) - if self_attn_padding_mask is not None: - if encoder_padding_mask is None: - assert encoder_out is not None - encoder_padding_mask = self_attn_padding_mask.new_zeros( - encoder_out.size(1), encoder_out.size(0) - ) - self_attn_padding_mask = torch.cat( - (encoder_padding_mask, self_attn_padding_mask), dim=1 - ) - assert encoder_out is not None - y = torch.cat((encoder_out, x), dim=0) - else: - y = x - - x, attn = self.self_attn( - query=x, - key=y, - value=y, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - need_weights=False, - attn_mask=self_attn_mask, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - assert self.encoder_attn is not None - residual = x - if self.normalize_before: - x = self.encoder_attn_layer_norm(x) - if prev_attn_state is not None: - prev_key, prev_value = prev_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - 
"prev_value": prev_value, - } - if len(prev_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_attn_state[2] - assert incremental_state is not None - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=need_attn or (not self.training and self.need_attn), - need_head_weights=need_head_weights, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.encoder_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.final_layer_norm(x) - if self.onnx_trace and incremental_state is not None: - saved_state = self.self_attn._get_input_buffer(incremental_state) - assert saved_state is not None - if self_attn_padding_mask is not None: - self_attn_state = [ - saved_state["prev_key"], - saved_state["prev_value"], - saved_state["prev_key_padding_mask"], - ] - else: - self_attn_state = [saved_state["prev_key"], saved_state["prev_value"]] - return x, attn, self_attn_state - return x, attn, None diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/utils/__init__.py b/kosmos-g/fairseq/examples/simultaneous_translation/utils/__init__.py deleted file mode 100644 index 1e9ce844f..000000000 --- a/kosmos-g/fairseq/examples/simultaneous_translation/utils/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - - -# automatically import any Python files in the criterions/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - module = file[: file.find(".py")] - importlib.import_module("examples.simultaneous_translation.utils." + module) diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/utils/functions.py b/kosmos-g/fairseq/examples/simultaneous_translation/utils/functions.py deleted file mode 100644 index 590a6c11c..000000000 --- a/kosmos-g/fairseq/examples/simultaneous_translation/utils/functions.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -def prob_check(tensor, eps=1e-10): - assert not torch.isnan(tensor).any(), ( - "Nan in a probability tensor." - ) - # Add the eps here to prevent errors introduced by precision - assert tensor.le(1.0 + eps).all() and tensor.ge(0.0 - eps).all(), ( - "Incorrect values in a probability tensor" - ", 0.0 <= tensor <= 1.0" - ) - - -def exclusive_cumprod(tensor, dim: int, eps: float = 1e-10): - """ - Implementing exclusive cumprod. - There is cumprod in pytorch, however there is no exclusive mode. 
- cumprod(x) = [x1, x1x2, x2x3x4, ..., prod_{i=1}^n x_i] - exclusive means - cumprod(x) = [1, x1, x1x2, x1x2x3, ..., prod_{i=1}^{n-1} x_i] - """ - tensor_size = list(tensor.size()) - tensor_size[dim] = 1 - return_tensor = safe_cumprod( - torch.cat([torch.ones(tensor_size).type_as(tensor), tensor], dim=dim), - dim=dim, - eps=eps, - ) - - if dim == 0: - return return_tensor[:-1] - elif dim == 1: - return return_tensor[:, :-1] - elif dim == 2: - return return_tensor[:, :, :-1] - else: - raise RuntimeError( - "Cumprod on dimension 3 and more is not implemented" - ) - - -def safe_cumprod(tensor, dim: int, eps: float = 1e-10): - """ - An implementation of cumprod to prevent precision issue. - cumprod(x) - = [x1, x1x2, x1x2x3, ....] - = [exp(log(x1)), exp(log(x1) + log(x2)), exp(log(x1) + log(x2) + log(x3)), ...] - = exp(cumsum(log(x))) - """ - - if (tensor + eps < 0).any().item(): - raise RuntimeError( - "Safe cumprod can only take non-negative tensors as input." - "Consider use torch.cumprod if you want to calculate negative values." - ) - - log_tensor = torch.log(tensor + eps) - cumsum_log_tensor = torch.cumsum(log_tensor, dim) - exp_cumsum_log_tensor = torch.exp(cumsum_log_tensor) - return exp_cumsum_log_tensor - - -def moving_sum(x, start_idx: int, end_idx: int): - """ - From MONOTONIC CHUNKWISE ATTENTION - https://arxiv.org/pdf/1712.05382.pdf - Equation (18) - - x = [x_1, x_2, ..., x_N] - MovingSum(x, start_idx, end_idx)_n = Sigma_{m=n−(start_idx−1)}^{n+end_idx-1} x_m - for n in {1, 2, 3, ..., N} - - x : src_len, batch_size - start_idx : start idx - end_idx : end idx - - Example - src_len = 5 - batch_size = 3 - x = - [[ 0, 5, 10], - [ 1, 6, 11], - [ 2, 7, 12], - [ 3, 8, 13], - [ 4, 9, 14]] - - MovingSum(x, 3, 1) = - [[ 0, 5, 10], - [ 1, 11, 21], - [ 3, 18, 33], - [ 6, 21, 36], - [ 9, 24, 39]] - - MovingSum(x, 1, 3) = - [[ 3, 18, 33], - [ 6, 21, 36], - [ 9, 24, 39], - [ 7, 17, 27], - [ 4, 9, 14]] - """ - # TODO: Make dimension configurable - assert start_idx > 0 and end_idx > 0 - batch_size, tgt_len, src_len = x.size() - x = x.view(-1, src_len).unsqueeze(1) - # batch_size, 1, src_len - moving_sum_weight = torch.ones([1, 1, end_idx + start_idx - 1]).type_as(x) - - moving_sum = torch.nn.functional.conv1d( - x, moving_sum_weight, padding=start_idx + end_idx - 1 - ).squeeze(1) - - moving_sum = moving_sum[:, end_idx:-start_idx] - - assert src_len == moving_sum.size(1) - assert batch_size * tgt_len == moving_sum.size(0) - - moving_sum = moving_sum.view(batch_size, tgt_len, src_len) - - return moving_sum diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/utils/monotonic_attention.py b/kosmos-g/fairseq/examples/simultaneous_translation/utils/monotonic_attention.py deleted file mode 100644 index 3b8e0a858..000000000 --- a/kosmos-g/fairseq/examples/simultaneous_translation/utils/monotonic_attention.py +++ /dev/null @@ -1,180 +0,0 @@ -from typing import Optional -import torch -from torch import Tensor - -from examples.simultaneous_translation.utils.functions import ( - exclusive_cumprod, - prob_check, - moving_sum, -) - - -def expected_alignment_from_p_choose( - p_choose: Tensor, - padding_mask: Optional[Tensor] = None, - eps: float = 1e-6 -): - """ - Calculating expected alignment for from stepwise probability - - Reference: - Online and Linear-Time Attention by Enforcing Monotonic Alignments - https://arxiv.org/pdf/1704.00784.pdf - - q_ij = (1 − p_{ij−1})q_{ij−1} + a+{i−1j} - a_ij = p_ij q_ij - - Parallel solution: - ai = p_i * cumprod(1 − pi) * cumsum(a_i / cumprod(1 − pi)) - - 
============================================================ - Expected input size - p_choose: bsz, tgt_len, src_len - """ - prob_check(p_choose) - - # p_choose: bsz, tgt_len, src_len - bsz, tgt_len, src_len = p_choose.size() - dtype = p_choose.dtype - - p_choose = p_choose.float() - - if padding_mask is not None: - p_choose = p_choose.masked_fill(padding_mask.unsqueeze(1), 0.0) - - if p_choose.is_cuda: - p_choose = p_choose.contiguous() - from alignment_train_cuda_binding import alignment_train_cuda as alignment_train - else: - from alignment_train_cpu_binding import alignment_train_cpu as alignment_train - - alpha = p_choose.new_zeros([bsz, tgt_len, src_len]) - alignment_train(p_choose, alpha, eps) - - # Mix precision to prevent overflow for fp16 - alpha = alpha.type(dtype) - - prob_check(alpha) - - return alpha - - -def expected_soft_attention( - alpha: Tensor, - soft_energy: Tensor, - padding_mask: Optional[Tensor] = None, - chunk_size: Optional[int] = None, - eps: float = 1e-10 -): - """ - Function to compute expected soft attention for - monotonic infinite lookback attention from - expected alignment and soft energy. - - Reference: - Monotonic Chunkwise Attention - https://arxiv.org/abs/1712.05382 - - Monotonic Infinite Lookback Attention for Simultaneous Machine Translation - https://arxiv.org/abs/1906.05218 - - alpha: bsz, tgt_len, src_len - soft_energy: bsz, tgt_len, src_len - padding_mask: bsz, src_len - left_padding: bool - """ - if padding_mask is not None: - alpha = alpha.masked_fill(padding_mask.unsqueeze(1), 0.0) - soft_energy = soft_energy.masked_fill( - padding_mask.unsqueeze(1), -float("inf") - ) - - prob_check(alpha) - - dtype = alpha.dtype - - alpha = alpha.float() - soft_energy = soft_energy.float() - - soft_energy = soft_energy - soft_energy.max(dim=2, keepdim=True)[0] - exp_soft_energy = torch.exp(soft_energy) + eps - - if chunk_size is not None: - # Chunkwise - beta = ( - exp_soft_energy - * moving_sum( - alpha / (eps + moving_sum(exp_soft_energy, chunk_size, 1)), - 1, chunk_size - ) - ) - else: - # Infinite lookback - # Notice that infinite lookback is a special case of chunkwise - # where chunksize = inf - inner_items = alpha / (eps + torch.cumsum(exp_soft_energy, dim=2)) - - beta = ( - exp_soft_energy - * torch.cumsum(inner_items.flip(dims=[2]), dim=2) - .flip(dims=[2]) - ) - - if padding_mask is not None: - beta = beta.masked_fill( - padding_mask.unsqueeze(1).to(torch.bool), 0.0) - - # Mix precision to prevent overflow for fp16 - beta = beta.type(dtype) - - beta = beta.clamp(0, 1) - - prob_check(beta) - - return beta - - -def mass_preservation( - alpha: Tensor, - padding_mask: Optional[Tensor] = None, - left_padding: bool = False -): - """ - Function to compute the mass perservation for alpha. - This means that the residual weights of alpha will be assigned - to the last token. - - Reference: - Monotonic Infinite Lookback Attention for Simultaneous Machine Translation - https://arxiv.org/abs/1906.05218 - - alpha: bsz, tgt_len, src_len - padding_mask: bsz, src_len - left_padding: bool - """ - - prob_check(alpha) - - if padding_mask is not None: - if not left_padding: - assert not padding_mask[:, 0].any(), ( - "Find padding on the beginning of the sequence." 
- ) - alpha = alpha.masked_fill(padding_mask.unsqueeze(1), 0.0) - - if left_padding or padding_mask is None: - residuals = 1 - alpha[:, :, :-1].sum(dim=-1).clamp(0, 1) - alpha[:, :, -1] = residuals - else: - # right padding - _, tgt_len, src_len = alpha.size() - residuals = 1 - alpha.sum(dim=-1, keepdim=True).clamp(0, 1) - src_lens = src_len - padding_mask.sum(dim=1, keepdim=True) - src_lens = src_lens.expand(-1, tgt_len).contiguous() - # add back the last value - residuals += alpha.gather(2, src_lens.unsqueeze(2) - 1) - alpha = alpha.scatter(2, src_lens.unsqueeze(2) - 1, residuals) - - prob_check(alpha) - - return alpha diff --git a/kosmos-g/fairseq/examples/simultaneous_translation/utils/p_choose_strategy.py b/kosmos-g/fairseq/examples/simultaneous_translation/utils/p_choose_strategy.py deleted file mode 100644 index 724c6912a..000000000 --- a/kosmos-g/fairseq/examples/simultaneous_translation/utils/p_choose_strategy.py +++ /dev/null @@ -1,126 +0,0 @@ -from typing import Optional, Dict -from torch import Tensor -import torch - - -def waitk_p_choose( - tgt_len: int, - src_len: int, - bsz: int, - waitk_lagging: int, - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None -): - - max_src_len = src_len - if incremental_state is not None: - # Retrieve target length from incremental states - # For inference the length of query is always 1 - max_tgt_len = incremental_state["steps"]["tgt"] - assert max_tgt_len is not None - max_tgt_len = int(max_tgt_len) - else: - max_tgt_len = tgt_len - - if max_src_len < waitk_lagging: - if incremental_state is not None: - max_tgt_len = 1 - return torch.zeros( - bsz, max_tgt_len, max_src_len - ) - - # Assuming the p_choose looks like this for wait k=3 - # src_len = 6, max_tgt_len = 5 - # [0, 0, 1, 0, 0, 0, 0] - # [0, 0, 0, 1, 0, 0, 0] - # [0, 0, 0, 0, 1, 0, 0] - # [0, 0, 0, 0, 0, 1, 0] - # [0, 0, 0, 0, 0, 0, 1] - # linearize the p_choose matrix: - # [0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0...] - # The indices of linearized matrix that equals 1 is - # 2 + 6 * 0 - # 3 + 6 * 1 - # ... 
- # n + src_len * n + k - 1 = n * (src_len + 1) + k - 1 - # n from 0 to max_tgt_len - 1 - # - # First, generate the indices (activate_indices_offset: bsz, max_tgt_len) - # Second, scatter a zeros tensor (bsz, max_tgt_len * src_len) - # with activate_indices_offset - # Third, resize the tensor to (bsz, max_tgt_len, src_len) - - activate_indices_offset = ( - ( - torch.arange(max_tgt_len) * (max_src_len + 1) - + waitk_lagging - 1 - ) - .unsqueeze(0) - .expand(bsz, max_tgt_len) - .long() - ) - - if key_padding_mask is not None: - if key_padding_mask[:, 0].any(): - # Left padding - activate_indices_offset += ( - key_padding_mask.sum(dim=1, keepdim=True) - ) - - # Need to clamp the indices that are too large - activate_indices_offset = ( - activate_indices_offset - .clamp( - 0, - min( - [ - max_tgt_len, - max_src_len - waitk_lagging + 1 - ] - ) * max_src_len - 1 - ) - ) - - p_choose = torch.zeros(bsz, max_tgt_len * max_src_len) - - p_choose = p_choose.scatter( - 1, - activate_indices_offset, - 1.0 - ).view(bsz, max_tgt_len, max_src_len) - - if key_padding_mask is not None: - p_choose = p_choose.to(key_padding_mask) - p_choose = p_choose.masked_fill(key_padding_mask.unsqueeze(1), 0) - - if incremental_state is not None: - p_choose = p_choose[:, -1:] - - return p_choose.float() - - -def learnable_p_choose( - energy, - noise_mean: float = 0.0, - noise_var: float = 0.0, - training: bool = True -): - """ - Calculating step wise prob for reading and writing - 1 to read, 0 to write - energy: bsz, tgt_len, src_len - """ - - noise = 0 - if training: - # add noise here to encourage discretness - noise = ( - torch.normal(noise_mean, noise_var, energy.size()) - .type_as(energy) - .to(energy.device) - ) - - p_choose = torch.sigmoid(energy + noise) - - # p_choose: bsz * self.num_heads, tgt_len, src_len - return p_choose diff --git a/kosmos-g/fairseq/examples/speech_recognition/README.md b/kosmos-g/fairseq/examples/speech_recognition/README.md deleted file mode 100644 index 17030bf0f..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/README.md +++ /dev/null @@ -1,83 +0,0 @@ -### 2021 Update: We are merging this example into the [S2T framework](../speech_to_text), which supports more generic speech-to-text tasks (e.g. speech translation) and more flexible data processing pipelines. Please stay tuned. - -# Speech Recognition -`examples/speech_recognition` is implementing ASR task in Fairseq, along with needed features, datasets, models and loss functions to train and infer model described in [Transformers with convolutional context for ASR (Abdelrahman Mohamed et al., 2019)](https://arxiv.org/abs/1904.11660). - - -## Additional dependencies -On top of main fairseq dependencies there are couple more additional requirements. - -1) Please follow the instructions to install [torchaudio](https://github.com/pytorch/audio). This is required to compute audio fbank features. -2) [Sclite](http://www1.icsi.berkeley.edu/Speech/docs/sctk-1.2/sclite.htm#sclite_name_0) is used to measure WER. Sclite can be downloaded and installed from source from sctk package [here](http://www.openslr.org/4/). Training and inference doesn't require Sclite dependency. -3) [sentencepiece](https://github.com/google/sentencepiece) is required in order to create dataset with word-piece targets. 
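As a concrete illustration of dependency (1), fbank features can be computed with torchaudio's Kaldi-compliant frontend. A minimal sketch; the file path is a placeholder and 80 mel bins is a common choice rather than a value taken from this repo's scripts:

```python
# Sketch: Kaldi-style log-mel fbank features with torchaudio.
import torchaudio

waveform, sample_rate = torchaudio.load("utt1.flac")  # placeholder path
feats = torchaudio.compliance.kaldi.fbank(
    waveform,
    num_mel_bins=80,               # illustrative; see the prepare scripts
    sample_frequency=sample_rate,
)
print(feats.shape)  # (num_frames, 80)
```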
- -## Preparing librispeech data -``` -./examples/speech_recognition/datasets/prepare-librispeech.sh $DIR_TO_SAVE_RAW_DATA $DIR_FOR_PREPROCESSED_DATA -``` - -## Training librispeech data -``` -python train.py $DIR_FOR_PREPROCESSED_DATA --save-dir $MODEL_PATH --max-epoch 80 --task speech_recognition --arch vggtransformer_2 --optimizer adadelta --lr 1.0 --adadelta-eps 1e-8 --adadelta-rho 0.95 --clip-norm 10.0 --max-tokens 5000 --log-format json --log-interval 1 --criterion cross_entropy_acc --user-dir examples/speech_recognition/ -``` - -## Inference for librispeech -`$SET` can be `test_clean` or `test_other` -Any checkpoint in `$MODEL_PATH` can be selected. In this example we are working with `checkpoint_last.pt` -``` -python examples/speech_recognition/infer.py $DIR_FOR_PREPROCESSED_DATA --task speech_recognition --max-tokens 25000 --nbest 1 --path $MODEL_PATH/checkpoint_last.pt --beam 20 --results-path $RES_DIR --batch-size 40 --gen-subset $SET --user-dir examples/speech_recognition/ -``` - -## Inference for librispeech -``` -sclite -r ${RES_DIR}/ref.word-checkpoint_last.pt-${SET}.txt -h ${RES_DIR}/hypo.word-checkpoint_last.pt-${SET}.txt -i rm -o all stdout > $RES_REPORT -``` -`Sum/Avg` row from first table of the report has WER - -## Using flashlight (previously called [wav2letter](https://github.com/facebookresearch/wav2letter)) components -[flashlight](https://github.com/facebookresearch/flashlight) now has integration with fairseq. Currently this includes: - -* AutoSegmentationCriterion (ASG) -* flashlight-style Conv/GLU model -* flashlight's beam search decoder - -To use these, follow the instructions on [this page](https://github.com/facebookresearch/flashlight/tree/master/bindings/python) to install python bindings. - -## Training librispeech data (flashlight style, Conv/GLU + ASG loss) -Training command: -``` -python train.py $DIR_FOR_PREPROCESSED_DATA --save-dir $MODEL_PATH --max-epoch 100 --task speech_recognition --arch w2l_conv_glu_enc --batch-size 4 --optimizer sgd --lr 0.3,0.8 --momentum 0.8 --clip-norm 0.2 --max-tokens 50000 --log-format json --log-interval 100 --num-workers 0 --sentence-avg --criterion asg_loss --asg-transitions-init 5 --max-replabel 2 --linseg-updates 8789 --user-dir examples/speech_recognition -``` - -Note that ASG loss currently doesn't do well with word-pieces. You should prepare a dataset with character targets by setting `nbpe=31` in `prepare-librispeech.sh`. - -## Inference for librispeech (flashlight decoder, n-gram LM) -Inference command: -``` -python examples/speech_recognition/infer.py $DIR_FOR_PREPROCESSED_DATA --task speech_recognition --seed 1 --nbest 1 --path $MODEL_PATH/checkpoint_last.pt --gen-subset $SET --results-path $RES_DIR --w2l-decoder kenlm --kenlm-model $KENLM_MODEL_PATH --lexicon $LEXICON_PATH --beam 200 --beam-threshold 15 --lm-weight 1.5 --word-score 1.5 --sil-weight -0.3 --criterion asg_loss --max-replabel 2 --user-dir examples/speech_recognition -``` - -`$KENLM_MODEL_PATH` should be a standard n-gram language model file. `$LEXICON_PATH` should be a flashlight-style lexicon (list of known words and their spellings). 
For ASG inference, a lexicon line should look like this (note the repetition labels): -``` -doorbell D O 1 R B E L 1 ▁ -``` -For CTC inference with word-pieces, repetition labels are not used and the lexicon should have most common spellings for each word (one can use sentencepiece's `NBestEncodeAsPieces` for this): -``` -doorbell ▁DOOR BE LL -doorbell ▁DOOR B E LL -doorbell ▁DO OR BE LL -doorbell ▁DOOR B EL L -doorbell ▁DOOR BE L L -doorbell ▁DO OR B E LL -doorbell ▁DOOR B E L L -doorbell ▁DO OR B EL L -doorbell ▁DO O R BE LL -doorbell ▁DO OR BE L L -``` -Lowercase vs. uppercase matters: the *word* should match the case of the n-gram language model (i.e. `$KENLM_MODEL_PATH`), while the *spelling* should match the case of the token dictionary (i.e. `$DIR_FOR_PREPROCESSED_DATA/dict.txt`). - -## Inference for librispeech (flashlight decoder, viterbi only) -Inference command: -``` -python examples/speech_recognition/infer.py $DIR_FOR_PREPROCESSED_DATA --task speech_recognition --seed 1 --nbest 1 --path $MODEL_PATH/checkpoint_last.pt --gen-subset $SET --results-path $RES_DIR --w2l-decoder viterbi --criterion asg_loss --max-replabel 2 --user-dir examples/speech_recognition -``` diff --git a/kosmos-g/fairseq/examples/speech_recognition/__init__.py b/kosmos-g/fairseq/examples/speech_recognition/__init__.py deleted file mode 100644 index 0278f6a27..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from . import criterions, models, tasks # noqa diff --git a/kosmos-g/fairseq/examples/speech_recognition/criterions/ASG_loss.py b/kosmos-g/fairseq/examples/speech_recognition/criterions/ASG_loss.py deleted file mode 100644 index 41f50bbd7..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/criterions/ASG_loss.py +++ /dev/null @@ -1,170 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
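The criterion below consumes targets encoded with the repetition labels described in the README above (`pack_replabels` from `examples.speech_recognition.data.replabels`). A minimal sketch of that encoding, under the assumption that a replabel digit counts the extra repeats of the previous token; the function is illustrative, not the fairseq implementation:

```python
# Sketch of ASG-style replabel packing: a run of identical tokens is
# collapsed into the token plus a digit giving the number of extra
# repeats, capped at max_replabel. Illustrative only; the real logic is
# examples.speech_recognition.data.replabels.pack_replabels.
def pack_replabels_sketch(tokens, max_replabel=2):
    out, i = [], 0
    while i < len(tokens):
        run = 1
        while (
            i + run < len(tokens)
            and tokens[i + run] == tokens[i]
            and run - 1 < max_replabel
        ):
            run += 1
        out.append(tokens[i])
        if run > 1:
            out.append(str(run - 1))  # replabel: extra repeats
        i += run
    return out

# "DOORBELL" -> D O 1 R B E L 1, matching the lexicon line above
assert pack_replabels_sketch(list("DOORBELL")) == ["D", "O", "1", "R", "B", "E", "L", "1"]
```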
- -import torch -from examples.speech_recognition.data.replabels import pack_replabels -from fairseq import utils -from fairseq.criterions import FairseqCriterion, register_criterion - - -@register_criterion("asg_loss") -class ASGCriterion(FairseqCriterion): - @staticmethod - def add_args(parser): - group = parser.add_argument_group("ASG Loss") - group.add_argument( - "--asg-transitions-init", - help="initial diagonal value of transition matrix", - type=float, - default=0.0, - ) - group.add_argument( - "--max-replabel", help="maximum # of replabels", type=int, default=2 - ) - group.add_argument( - "--linseg-updates", - help="# of training updates to use LinSeg initialization", - type=int, - default=0, - ) - group.add_argument( - "--hide-linseg-messages", - help="hide messages about LinSeg initialization", - action="store_true", - ) - - def __init__( - self, - task, - silence_token, - asg_transitions_init, - max_replabel, - linseg_updates, - hide_linseg_messages, - ): - from flashlight.lib.sequence.criterion import ASGLoss, CriterionScaleMode - - super().__init__(task) - self.tgt_dict = task.target_dictionary - self.eos = self.tgt_dict.eos() - self.silence = ( - self.tgt_dict.index(silence_token) - if silence_token in self.tgt_dict - else None - ) - self.max_replabel = max_replabel - - num_labels = len(self.tgt_dict) - self.asg = ASGLoss(num_labels, scale_mode=CriterionScaleMode.TARGET_SZ_SQRT) - self.asg.trans = torch.nn.Parameter( - asg_transitions_init * torch.eye(num_labels), requires_grad=True - ) - - self.linseg_progress = torch.nn.Parameter( - torch.tensor([0], dtype=torch.int), requires_grad=False - ) - self.linseg_maximum = linseg_updates - self.linseg_message_state = "none" if hide_linseg_messages else "start" - - @classmethod - def build_criterion(cls, args, task): - return cls( - task, - args.silence_token, - args.asg_transitions_init, - args.max_replabel, - args.linseg_updates, - args.hide_linseg_messages, - ) - - def linseg_step(self): - if not self.training: - return False - if self.linseg_progress.item() < self.linseg_maximum: - if self.linseg_message_state == "start": - print("| using LinSeg to initialize ASG") - self.linseg_message_state = "finish" - self.linseg_progress.add_(1) - return True - elif self.linseg_message_state == "finish": - print("| finished LinSeg initialization") - self.linseg_message_state = "none" - return False - - def replace_eos_with_silence(self, tgt): - if tgt[-1] != self.eos: - return tgt - elif self.silence is None or (len(tgt) > 1 and tgt[-2] == self.silence): - return tgt[:-1] - else: - return tgt[:-1] + [self.silence] - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - - net_output = model(**sample["net_input"]) - emissions = net_output["encoder_out"].transpose(0, 1).contiguous() - B = emissions.size(0) - T = emissions.size(1) - device = emissions.device - - target = torch.IntTensor(B, T) - target_size = torch.IntTensor(B) - using_linseg = self.linseg_step() - - for b in range(B): - initial_target_size = sample["target_lengths"][b].item() - if initial_target_size == 0: - raise ValueError("target size cannot be zero") - - tgt = sample["target"][b, :initial_target_size].tolist() - tgt = self.replace_eos_with_silence(tgt) - tgt = pack_replabels(tgt, self.tgt_dict, self.max_replabel) - tgt = tgt[:T] - - if using_linseg: - tgt = [tgt[t * len(tgt) // T] for t in range(T)] - - target[b][: len(tgt)] = torch.IntTensor(tgt) - target_size[b] = len(tgt) - - loss = self.asg.forward(emissions, target.to(device), target_size.to(device)) - - if reduce: - loss = torch.sum(loss) - - sample_size = ( - sample["target"].size(0) if self.args.sentence_avg else sample["ntokens"] - ) - logging_output = { - "loss": utils.item(loss.data) if reduce else loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - } - return loss, sample_size, logging_output - - @staticmethod - def aggregate_logging_outputs(logging_outputs): - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - agg_output = { - "loss": loss_sum / nsentences, - "ntokens": ntokens, - "nsentences": nsentences, - "sample_size": sample_size, - } - return agg_output diff --git a/kosmos-g/fairseq/examples/speech_recognition/criterions/__init__.py b/kosmos-g/fairseq/examples/speech_recognition/criterions/__init__.py deleted file mode 100644 index 579abd2ac..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/criterions/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -import importlib -import os - - -# ASG loss requires flashlight bindings -files_to_skip = set() -try: - import flashlight.lib.sequence.criterion -except ImportError: - files_to_skip.add("ASG_loss.py") - -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_") and file not in files_to_skip: - criterion_name = file[: file.find(".py")] - importlib.import_module( - "examples.speech_recognition.criterions." + criterion_name - ) diff --git a/kosmos-g/fairseq/examples/speech_recognition/criterions/cross_entropy_acc.py b/kosmos-g/fairseq/examples/speech_recognition/criterions/cross_entropy_acc.py deleted file mode 100644 index 7c4d8ba38..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/criterions/cross_entropy_acc.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
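As a toy illustration (values invented for this note) of the LinSeg resampling used in `ASGCriterion.forward` above, the packed target is stretched uniformly across the `T` emission frames:

```python
tgt = [7, 8, 9]  # packed target indices (toy values)
T = 8            # number of emission frames

# each frame t is assigned the target index at position t * len(tgt) // T
linseg_tgt = [tgt[t * len(tgt) // T] for t in range(T)]
print(linseg_tgt)  # [7, 7, 7, 8, 8, 8, 9, 9]
```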
-
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-import logging
-import math
-
-import torch
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-
-
-@register_criterion("cross_entropy_acc")
-class CrossEntropyWithAccCriterion(FairseqCriterion):
-    def __init__(self, task, sentence_avg):
-        super().__init__(task)
-        self.sentence_avg = sentence_avg
-
-    def compute_loss(self, model, net_output, target, reduction, log_probs):
-        # N, T -> N * T
-        target = target.view(-1)
-        lprobs = model.get_normalized_probs(net_output, log_probs=log_probs)
-        if not hasattr(lprobs, "batch_first"):
-            logging.warning(
-                "ERROR: we need to know whether the net output is batch-first; "
-                "set the batch_first attribute on the return value of "
-                "model.get_normalized_probs. For now we assume it is batch-first, "
-                "but in the future an exception will be raised instead."
-            )
-        batch_first = getattr(lprobs, "batch_first", True)
-        if not batch_first:
-            lprobs = lprobs.transpose(0, 1)
-
-        # N, T, D -> N * T, D
-        lprobs = lprobs.view(-1, lprobs.size(-1))
-        loss = F.nll_loss(
-            lprobs, target, ignore_index=self.padding_idx, reduction=reduction
-        )
-        return lprobs, loss
-
-    def get_logging_output(self, sample, target, lprobs, loss):
-        target = target.view(-1)
-        mask = target != self.padding_idx
-        correct = torch.sum(
-            lprobs.argmax(1).masked_select(mask) == target.masked_select(mask)
-        )
-        total = torch.sum(mask)
-        sample_size = (
-            sample["target"].size(0) if self.sentence_avg else sample["ntokens"]
-        )
-
-        logging_output = {
-            "loss": utils.item(loss.data), # * sample['ntokens'],
-            "ntokens": sample["ntokens"],
-            "nsentences": sample["target"].size(0),
-            "sample_size": sample_size,
-            "correct": utils.item(correct.data),
-            "total": utils.item(total.data),
-            "nframes": torch.sum(sample["net_input"]["src_lengths"]).item(),
-        }
-
-        return sample_size, logging_output
-
-    def forward(self, model, sample, reduction="sum", log_probs=True):
-        """Computes the cross entropy with accuracy metric for the given sample.
-
-        This is similar to CrossEntropyCriterion in fairseq, but it also
-        computes accuracy metrics as part of logging.
-
-        Args:
-            logprobs (torch.Tensor) of shape N, T, D, i.e.
-            batchsize, timesteps, dimensions
-            targets (torch.Tensor) of shape N, T, i.e. batchsize, timesteps
-
-        Returns:
-            tuple: With three elements:
-                1) the loss
-                2) the sample size, which is used as the denominator for the gradient
-                3) logging outputs to display while training
-
-        TODO:
-            * Currently this criterion only works with LSTMEncoderModels or
-            FairseqModels which have a decoder, or models which return a
-            torch.Tensor as net_output.
-            We need to make a change to support all FairseqEncoder models.
- """ - net_output = model(**sample["net_input"]) - target = model.get_targets(sample, net_output) - lprobs, loss = self.compute_loss( - model, net_output, target, reduction, log_probs - ) - sample_size, logging_output = self.get_logging_output( - sample, target, lprobs, loss - ) - return loss, sample_size, logging_output - - @staticmethod - def aggregate_logging_outputs(logging_outputs): - """Aggregate logging outputs from data parallel training.""" - correct_sum = sum(log.get("correct", 0) for log in logging_outputs) - total_sum = sum(log.get("total", 0) for log in logging_outputs) - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - nframes = sum(log.get("nframes", 0) for log in logging_outputs) - agg_output = { - "loss": loss_sum / sample_size / math.log(2) if sample_size > 0 else 0.0, - # if args.sentence_avg, then sample_size is nsentences, then loss - # is per-sentence loss; else sample_size is ntokens, the loss - # becomes per-output token loss - "ntokens": ntokens, - "nsentences": nsentences, - "nframes": nframes, - "sample_size": sample_size, - "acc": correct_sum * 100.0 / total_sum if total_sum > 0 else 0.0, - "correct": correct_sum, - "total": total_sum, - # total is the number of validate tokens - } - if sample_size != ntokens: - agg_output["nll_loss"] = loss_sum / ntokens / math.log(2) - # loss: per output token loss - # nll_loss: per sentence loss - return agg_output diff --git a/kosmos-g/fairseq/examples/speech_recognition/data/__init__.py b/kosmos-g/fairseq/examples/speech_recognition/data/__init__.py deleted file mode 100644 index 47bb6e24d..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/data/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .asr_dataset import AsrDataset - - -__all__ = [ - "AsrDataset", -] diff --git a/kosmos-g/fairseq/examples/speech_recognition/data/asr_dataset.py b/kosmos-g/fairseq/examples/speech_recognition/data/asr_dataset.py deleted file mode 100644 index 63a6fcac8..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/data/asr_dataset.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import os - -import numpy as np -from fairseq.data import FairseqDataset - -from . import data_utils -from .collaters import Seq2SeqCollater - - -class AsrDataset(FairseqDataset): - """ - A dataset representing speech and corresponding transcription. - - Args: - aud_paths: (List[str]): A list of str with paths to audio files. - aud_durations_ms (List[int]): A list of int containing the durations of - audio files. - tgt (List[torch.LongTensor]): A list of LongTensors containing the indices - of target transcriptions. - tgt_dict (~fairseq.data.Dictionary): target vocabulary. - ids (List[str]): A list of utterance IDs. - speakers (List[str]): A list of speakers corresponding to utterances. 
- num_mel_bins (int): Number of triangular mel-frequency bins (default: 80) - frame_length (float): Frame length in milliseconds (default: 25.0) - frame_shift (float): Frame shift in milliseconds (default: 10.0) - """ - - def __init__( - self, - aud_paths, - aud_durations_ms, - tgt, - tgt_dict, - ids, - speakers, - num_mel_bins=80, - frame_length=25.0, - frame_shift=10.0, - ): - assert frame_length > 0 - assert frame_shift > 0 - assert all(x > frame_length for x in aud_durations_ms) - self.frame_sizes = [ - int(1 + (d - frame_length) / frame_shift) for d in aud_durations_ms - ] - - assert len(aud_paths) > 0 - assert len(aud_paths) == len(aud_durations_ms) - assert len(aud_paths) == len(tgt) - assert len(aud_paths) == len(ids) - assert len(aud_paths) == len(speakers) - self.aud_paths = aud_paths - self.tgt_dict = tgt_dict - self.tgt = tgt - self.ids = ids - self.speakers = speakers - self.num_mel_bins = num_mel_bins - self.frame_length = frame_length - self.frame_shift = frame_shift - - self.s2s_collater = Seq2SeqCollater( - 0, - 1, - pad_index=self.tgt_dict.pad(), - eos_index=self.tgt_dict.eos(), - move_eos_to_beginning=True, - ) - - def __getitem__(self, index): - import torchaudio - import torchaudio.compliance.kaldi as kaldi - - tgt_item = self.tgt[index] if self.tgt is not None else None - - path = self.aud_paths[index] - if not os.path.exists(path): - raise FileNotFoundError("Audio file not found: {}".format(path)) - sound, sample_rate = torchaudio.load_wav(path) - output = kaldi.fbank( - sound, - num_mel_bins=self.num_mel_bins, - frame_length=self.frame_length, - frame_shift=self.frame_shift, - ) - output_cmvn = data_utils.apply_mv_norm(output) - - return {"id": index, "data": [output_cmvn.detach(), tgt_item]} - - def __len__(self): - return len(self.aud_paths) - - def collater(self, samples): - """Merge a list of samples to form a mini-batch. - - Args: - samples (List[int]): sample indices to collate - - Returns: - dict: a mini-batch suitable for forwarding with a Model - """ - return self.s2s_collater.collate(samples) - - def num_tokens(self, index): - return self.frame_sizes[index] - - def size(self, index): - """Return an example's size as a float or tuple. This value is used when - filtering a dataset with ``--max-positions``.""" - return ( - self.frame_sizes[index], - len(self.tgt[index]) if self.tgt is not None else 0, - ) - - def ordered_indices(self): - """Return an ordered list of indices. Batches will be constructed based - on this order.""" - return np.arange(len(self)) diff --git a/kosmos-g/fairseq/examples/speech_recognition/data/collaters.py b/kosmos-g/fairseq/examples/speech_recognition/data/collaters.py deleted file mode 100644 index 6acfec876..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/data/collaters.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" - This module contains collection of classes which implement - collate functionalities for various tasks. 
- - Collaters should know what data to expect for each sample - and they should pack / collate them into batches -""" - - -from __future__ import absolute_import, division, print_function, unicode_literals - -import numpy as np -import torch -from fairseq.data import data_utils as fairseq_data_utils - - -class Seq2SeqCollater(object): - """ - Implements collate function mainly for seq2seq tasks - This expects each sample to contain feature (src_tokens) and - targets. - This collator is also used for aligned training task. - """ - - def __init__( - self, - feature_index=0, - label_index=1, - pad_index=1, - eos_index=2, - move_eos_to_beginning=True, - ): - self.feature_index = feature_index - self.label_index = label_index - self.pad_index = pad_index - self.eos_index = eos_index - self.move_eos_to_beginning = move_eos_to_beginning - - def _collate_frames(self, frames): - """Convert a list of 2d frames into a padded 3d tensor - Args: - frames (list): list of 2d frames of size L[i]*f_dim. Where L[i] is - length of i-th frame and f_dim is static dimension of features - Returns: - 3d tensor of size len(frames)*len_max*f_dim where len_max is max of L[i] - """ - len_max = max(frame.size(0) for frame in frames) - f_dim = frames[0].size(1) - res = frames[0].new(len(frames), len_max, f_dim).fill_(0.0) - - for i, v in enumerate(frames): - res[i, : v.size(0)] = v - - return res - - def collate(self, samples): - """ - utility function to collate samples into batch for speech recognition. - """ - if len(samples) == 0: - return {} - - # parse samples into torch tensors - parsed_samples = [] - for s in samples: - # skip invalid samples - if s["data"][self.feature_index] is None: - continue - source = s["data"][self.feature_index] - if isinstance(source, (np.ndarray, np.generic)): - source = torch.from_numpy(source) - target = s["data"][self.label_index] - if isinstance(target, (np.ndarray, np.generic)): - target = torch.from_numpy(target).long() - elif isinstance(target, list): - target = torch.LongTensor(target) - - parsed_sample = {"id": s["id"], "source": source, "target": target} - parsed_samples.append(parsed_sample) - samples = parsed_samples - - id = torch.LongTensor([s["id"] for s in samples]) - frames = self._collate_frames([s["source"] for s in samples]) - # sort samples by descending number of frames - frames_lengths = torch.LongTensor([s["source"].size(0) for s in samples]) - frames_lengths, sort_order = frames_lengths.sort(descending=True) - id = id.index_select(0, sort_order) - frames = frames.index_select(0, sort_order) - - target = None - target_lengths = None - prev_output_tokens = None - if samples[0].get("target", None) is not None: - ntokens = sum(len(s["target"]) for s in samples) - target = fairseq_data_utils.collate_tokens( - [s["target"] for s in samples], - self.pad_index, - self.eos_index, - left_pad=False, - move_eos_to_beginning=False, - ) - target = target.index_select(0, sort_order) - target_lengths = torch.LongTensor( - [s["target"].size(0) for s in samples] - ).index_select(0, sort_order) - prev_output_tokens = fairseq_data_utils.collate_tokens( - [s["target"] for s in samples], - self.pad_index, - self.eos_index, - left_pad=False, - move_eos_to_beginning=self.move_eos_to_beginning, - ) - prev_output_tokens = prev_output_tokens.index_select(0, sort_order) - else: - ntokens = sum(len(s["source"]) for s in samples) - - batch = { - "id": id, - "ntokens": ntokens, - "net_input": {"src_tokens": frames, "src_lengths": frames_lengths}, - "target": target, - "target_lengths": 
target_lengths,
-            "nsentences": len(samples),
-        }
-        if prev_output_tokens is not None:
-            batch["net_input"]["prev_output_tokens"] = prev_output_tokens
-        return batch
diff --git a/kosmos-g/fairseq/examples/speech_recognition/data/data_utils.py b/kosmos-g/fairseq/examples/speech_recognition/data/data_utils.py
deleted file mode 100644
index cc4729e63..000000000
--- a/kosmos-g/fairseq/examples/speech_recognition/data/data_utils.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-
-def calc_mean_invstddev(feature):
-    if len(feature.size()) != 2:
-        raise ValueError("We expect the input feature to be a 2-D tensor")
-    mean = feature.mean(0)
-    var = feature.var(0)
-    # avoid division by ~zero
-    eps = 1e-8
-    if (var < eps).any():
-        return mean, 1.0 / (torch.sqrt(var) + eps)
-    return mean, 1.0 / torch.sqrt(var)
-
-
-def apply_mv_norm(features):
-    # If there are fewer than 2 frames, the variance cannot be computed (it is NaN)
-    # and normalization is not possible, so return the item as it is
-    if features.size(0) < 2:
-        return features
-    mean, invstddev = calc_mean_invstddev(features)
-    res = (features - mean) * invstddev
-    return res
-
-
-def lengths_to_encoder_padding_mask(lengths, batch_first=False):
-    """
-    convert lengths (a 1-D Long/Int tensor) to a 2-D binary tensor
-
-    Args:
-        lengths: a (B, )-shaped tensor
-
-    Return:
-        max_length: maximum length of B sequences
-        encoder_padding_mask: a (max_length, B) binary mask, where
-        [t, b] = 0 for t < lengths[b] and 1 otherwise
-
-    TODO:
-        kernelize this function if benchmarking shows this function is slow
-    """
-    max_lengths = torch.max(lengths).item()
-    bsz = lengths.size(0)
-    # build a (T,)-shaped [0, ..., T-1] tensor on the right device, view it as
-    # (1, T), expand it to (B, T), and compare against the (B, 1)-shaped
-    # lengths expanded to (B, T)
-    encoder_padding_mask = torch.arange(
-        max_lengths
-    ).to(
-        lengths.device
-    ).view(
-        1, max_lengths
-    ).expand(
-        bsz, -1
-    ) >= lengths.view(
-        bsz, 1
-    ).expand(
-        -1, max_lengths
-    )
-    if not batch_first:
-        return encoder_padding_mask.t(), max_lengths
-    else:
-        return encoder_padding_mask, max_lengths
-
-
-def encoder_padding_mask_to_lengths(
-    encoder_padding_mask, max_lengths, batch_size, device
-):
-    """
-    convert encoder_padding_mask (2-D binary tensor) to a 1-D tensor
-
-    Conventionally, encoder output contains an encoder_padding_mask, which is
-    a 2-D mask in a shape (T, B), whose (t, b) element indicates whether
-    encoder_out[t, b] is a valid output (=0) or not (=1). Occasionally, we
-    need to convert this mask tensor to a 1-D tensor in shape (B, ), where
-    [b] denotes the valid length of the b-th sequence
-
-    Args:
-        encoder_padding_mask: a (T, B)-shaped binary tensor or None; if None,
-        all positions are considered valid
-
-        max_lengths: maximum length of all sequences; if encoder_padding_mask
-        is not None, max_lengths must equal encoder_padding_mask.size(0)
-
-        batch_size: batch size; if encoder_padding_mask is not None,
-        batch_size must equal encoder_padding_mask.size(1)
-
-        device: which device to put the result on
-
-    Return:
-        seq_lengths: a (B,)-shaped tensor, where its (b, )-th element is the
-        number of valid elements of the b-th sequence
-    """
-    if encoder_padding_mask is None:
-        return torch.Tensor([max_lengths] * batch_size).to(torch.int32).to(device)
-
-    assert encoder_padding_mask.size(0) == max_lengths, "max_lengths does not match"
-    assert encoder_padding_mask.size(1) == batch_size, "batch_size does not match"
-
-    return max_lengths - torch.sum(encoder_padding_mask, dim=0)
diff --git a/kosmos-g/fairseq/examples/speech_recognition/data/replabels.py b/kosmos-g/fairseq/examples/speech_recognition/data/replabels.py
deleted file mode 100644
index 441f1bd43..000000000
--- a/kosmos-g/fairseq/examples/speech_recognition/data/replabels.py
+++ /dev/null
@@ -1,70 +0,0 @@
-#!/usr/bin/env python3
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Replabel transforms for use with flashlight's ASG criterion.
-"""
-
-
-def replabel_symbol(i):
-    """
-    Replabel symbols used in flashlight, currently just "1", "2", ...
-    This prevents training with numeral tokens, so this might change in the future
-    """
-    return str(i)
-
-
-def pack_replabels(tokens, dictionary, max_reps):
-    """
-    Pack a token sequence so that repeated symbols are replaced by replabels
-    """
-    if len(tokens) == 0 or max_reps <= 0:
-        return tokens
-
-    replabel_value_to_idx = [0] * (max_reps + 1)
-    for i in range(1, max_reps + 1):
-        replabel_value_to_idx[i] = dictionary.index(replabel_symbol(i))
-
-    result = []
-    prev_token = -1
-    num_reps = 0
-    for token in tokens:
-        if token == prev_token and num_reps < max_reps:
-            num_reps += 1
-        else:
-            if num_reps > 0:
-                result.append(replabel_value_to_idx[num_reps])
-                num_reps = 0
-            result.append(token)
-            prev_token = token
-    if num_reps > 0:
-        result.append(replabel_value_to_idx[num_reps])
-    return result
-
-
-def unpack_replabels(tokens, dictionary, max_reps):
-    """
-    Unpack a token sequence so that replabels are replaced by repeated symbols
-    """
-    if len(tokens) == 0 or max_reps <= 0:
-        return tokens
-
-    replabel_idx_to_value = {}
-    for i in range(1, max_reps + 1):
-        replabel_idx_to_value[dictionary.index(replabel_symbol(i))] = i
-
-    result = []
-    prev_token = -1
-    for token in tokens:
-        try:
-            for _ in range(replabel_idx_to_value[token]):
-                result.append(prev_token)
-            prev_token = -1
-        except KeyError:
-            result.append(token)
-            prev_token = token
-    return result
diff --git a/kosmos-g/fairseq/examples/speech_recognition/datasets/asr_prep_json.py b/kosmos-g/fairseq/examples/speech_recognition/datasets/asr_prep_json.py
deleted file mode 100644
index b8db8ff16..000000000
--- a/kosmos-g/fairseq/examples/speech_recognition/datasets/asr_prep_json.py
+++ /dev/null
@@ -1,125 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
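To make the replabel convention above concrete, here is a minimal round-trip sketch (assuming, as the ASG setup requires, a dictionary that contains the replabel symbols "1" and "2" for `max_reps=2`):

```python
from fairseq.data import Dictionary

from examples.speech_recognition.data.replabels import (
    pack_replabels,
    unpack_replabels,
)

d = Dictionary()
for s in ["a", "b", "1", "2"]:  # "1" and "2" are the replabel symbols
    d.add_symbol(s)

# "a a a b b" packs to "a 2 b 1": each run is collapsed to its first
# symbol followed by a repetition count
tokens = [d.index(s) for s in ["a", "a", "a", "b", "b"]]
packed = pack_replabels(tokens, d, max_reps=2)
assert packed == [d.index(s) for s in ["a", "2", "b", "1"]]
assert unpack_replabels(packed, d, max_reps=2) == tokens
```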
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-import argparse
-import concurrent.futures
-import json
-import multiprocessing
-import os
-from collections import namedtuple
-from itertools import chain
-
-import sentencepiece as spm
-from fairseq.data import Dictionary
-
-
-MILLISECONDS_TO_SECONDS = 0.001
-
-
-def process_sample(aud_path, label, utt_id, sp, tgt_dict):
-    import torchaudio
-
-    input = {}
-    output = {}
-    si, ei = torchaudio.info(aud_path)
-    input["length_ms"] = int(
-        si.length / si.channels / si.rate / MILLISECONDS_TO_SECONDS
-    )
-    input["path"] = aud_path
-
-    token = " ".join(sp.EncodeAsPieces(label))
-    ids = tgt_dict.encode_line(token, append_eos=False)
-    output["text"] = label
-    output["token"] = token
-    output["tokenid"] = ", ".join(map(str, [t.tolist() for t in ids]))
-    return {utt_id: {"input": input, "output": output}}
-
-
-def main():
-    parser = argparse.ArgumentParser()
-    parser.add_argument(
-        "--audio-dirs",
-        nargs="+",
-        default=["-"],
-        required=True,
-        help="input directories with audio files",
-    )
-    parser.add_argument(
-        "--labels",
-        required=True,
-        help="aggregated input labels with format <ID LABEL> per line",
-        type=argparse.FileType("r", encoding="UTF-8"),
-    )
-    parser.add_argument(
-        "--spm-model",
-        required=True,
-        help="sentencepiece model to use for encoding",
-        type=argparse.FileType("r", encoding="UTF-8"),
-    )
-    parser.add_argument(
-        "--dictionary",
-        required=True,
-        help="file to load fairseq dictionary from",
-        type=argparse.FileType("r", encoding="UTF-8"),
-    )
-    parser.add_argument("--audio-format", choices=["flac", "wav"], default="wav")
-    parser.add_argument(
-        "--output",
-        required=True,
-        type=argparse.FileType("w"),
-        help="path to save json output",
-    )
-    args = parser.parse_args()
-
-    sp = spm.SentencePieceProcessor()
-    sp.Load(args.spm_model.name)
-
-    tgt_dict = Dictionary.load(args.dictionary)
-
-    labels = {}
-    for line in args.labels:
-        (utt_id, label) = line.split(" ", 1)
-        labels[utt_id] = label
-    if len(labels) == 0:
-        raise Exception("No labels found in ", args.labels.name)
-
-    Sample = namedtuple("Sample", "aud_path utt_id")
-    samples = []
-    for path, _, files in chain.from_iterable(
-        os.walk(path) for path in args.audio_dirs
-    ):
-        for f in files:
-            if f.endswith(args.audio_format):
-                if len(os.path.splitext(f)) != 2:
-                    raise Exception("Expect <utt_id.extension> file name. 
Got: ", f) - utt_id = os.path.splitext(f)[0] - if utt_id not in labels: - continue - samples.append(Sample(os.path.join(path, f), utt_id)) - - utts = {} - num_cpu = multiprocessing.cpu_count() - with concurrent.futures.ThreadPoolExecutor(max_workers=num_cpu) as executor: - future_to_sample = { - executor.submit( - process_sample, s.aud_path, labels[s.utt_id], s.utt_id, sp, tgt_dict - ): s - for s in samples - } - for future in concurrent.futures.as_completed(future_to_sample): - try: - data = future.result() - except Exception as exc: - print("generated an exception: ", exc) - else: - utts.update(data) - json.dump({"utts": utts}, args.output, indent=4) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/speech_recognition/datasets/prepare-librispeech.sh b/kosmos-g/fairseq/examples/speech_recognition/datasets/prepare-librispeech.sh deleted file mode 100644 index 9e9297f08..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/datasets/prepare-librispeech.sh +++ /dev/null @@ -1,88 +0,0 @@ -#!/usr/bin/env bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -# Prepare librispeech dataset - -base_url=www.openslr.org/resources/12 -train_dir=train_960 - -if [ "$#" -ne 2 ]; then - echo "Usage: $0 <download_dir> <out_dir>" - echo "e.g.: $0 /tmp/librispeech_raw/ ~/data/librispeech_final" - exit 1 -fi - -download_dir=${1%/} -out_dir=${2%/} - -fairseq_root=~/fairseq-py/ -mkdir -p ${out_dir} -cd ${out_dir} || exit - -nbpe=5000 -bpemode=unigram - -if [ ! -d "$fairseq_root" ]; then - echo "$0: Please set correct fairseq_root" - exit 1 -fi - -echo "Data Download" -for part in dev-clean test-clean dev-other test-other train-clean-100 train-clean-360 train-other-500; do - url=$base_url/$part.tar.gz - if ! wget -P $download_dir $url; then - echo "$0: wget failed for $url" - exit 1 - fi - if ! 
tar -C $download_dir -xvzf $download_dir/$part.tar.gz; then - echo "$0: error un-tarring archive $download_dir/$part.tar.gz" - exit 1 - fi -done - -echo "Merge all train packs into one" -mkdir -p ${download_dir}/LibriSpeech/${train_dir}/ -for part in train-clean-100 train-clean-360 train-other-500; do - mv ${download_dir}/LibriSpeech/${part}/* $download_dir/LibriSpeech/${train_dir}/ -done -echo "Merge train text" -find ${download_dir}/LibriSpeech/${train_dir}/ -name '*.txt' -exec cat {} \; >> ${download_dir}/LibriSpeech/${train_dir}/text - -# Use combined dev-clean and dev-other as validation set -find ${download_dir}/LibriSpeech/dev-clean/ ${download_dir}/LibriSpeech/dev-other/ -name '*.txt' -exec cat {} \; >> ${download_dir}/LibriSpeech/valid_text -find ${download_dir}/LibriSpeech/test-clean/ -name '*.txt' -exec cat {} \; >> ${download_dir}/LibriSpeech/test-clean/text -find ${download_dir}/LibriSpeech/test-other/ -name '*.txt' -exec cat {} \; >> ${download_dir}/LibriSpeech/test-other/text - - -dict=data/lang_char/${train_dir}_${bpemode}${nbpe}_units.txt -encoded=data/lang_char/${train_dir}_${bpemode}${nbpe}_encoded.txt -fairseq_dict=data/lang_char/${train_dir}_${bpemode}${nbpe}_fairseq_dict.txt -bpemodel=data/lang_char/${train_dir}_${bpemode}${nbpe} -echo "dictionary: ${dict}" -echo "Dictionary preparation" -mkdir -p data/lang_char/ -echo "<unk> 3" > ${dict} -echo "</s> 2" >> ${dict} -echo "<pad> 1" >> ${dict} -cut -f 2- -d" " ${download_dir}/LibriSpeech/${train_dir}/text > data/lang_char/input.txt -spm_train --input=data/lang_char/input.txt --vocab_size=${nbpe} --model_type=${bpemode} --model_prefix=${bpemodel} --input_sentence_size=100000000 --unk_id=3 --eos_id=2 --pad_id=1 --bos_id=-1 --character_coverage=1 -spm_encode --model=${bpemodel}.model --output_format=piece < data/lang_char/input.txt > ${encoded} -cat ${encoded} | tr ' ' '\n' | sort | uniq | awk '{print $0 " " NR+3}' >> ${dict} -cat ${encoded} | tr ' ' '\n' | sort | uniq -c | awk '{print $2 " " $1}' > ${fairseq_dict} -wc -l ${dict} - -echo "Prepare train and test jsons" -for part in train_960 test-other test-clean; do - python ${fairseq_root}/examples/speech_recognition/datasets/asr_prep_json.py --audio-dirs ${download_dir}/LibriSpeech/${part} --labels ${download_dir}/LibriSpeech/${part}/text --spm-model ${bpemodel}.model --audio-format flac --dictionary ${fairseq_dict} --output ${part}.json -done -# fairseq expects to find train.json and valid.json during training -mv train_960.json train.json - -echo "Prepare valid json" -python ${fairseq_root}/examples/speech_recognition/datasets/asr_prep_json.py --audio-dirs ${download_dir}/LibriSpeech/dev-clean ${download_dir}/LibriSpeech/dev-other --labels ${download_dir}/LibriSpeech/valid_text --spm-model ${bpemodel}.model --audio-format flac --dictionary ${fairseq_dict} --output valid.json - -cp ${fairseq_dict} ./dict.txt -cp ${bpemodel}.model ./spm.model diff --git a/kosmos-g/fairseq/examples/speech_recognition/infer.py b/kosmos-g/fairseq/examples/speech_recognition/infer.py deleted file mode 100644 index ce16bf47c..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/infer.py +++ /dev/null @@ -1,436 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Run inference for pre-processed data with a trained model. 
-"""
-
-import argparse
-import ast
-import logging
-import math
-import os
-import sys
-
-import editdistance
-import numpy as np
-import torch
-from fairseq import checkpoint_utils, options, progress_bar, tasks, utils
-from fairseq.data.data_utils import post_process
-from fairseq.logging.meters import StopwatchMeter, TimeMeter
-
-
-logging.basicConfig()
-logging.root.setLevel(logging.INFO)
-logging.basicConfig(level=logging.INFO)
-logger = logging.getLogger(__name__)
-
-
-def add_asr_eval_argument(parser):
-    parser.add_argument("--kspmodel", default=None, help="sentencepiece model")
-    parser.add_argument(
-        "--wfstlm", default=None, help="wfstlm on dictionary output units"
-    )
-    parser.add_argument(
-        "--rnnt_decoding_type",
-        default="greedy",
-        help="decoding type for RNN-T, e.g. greedy",
-    )
-    try:
-        parser.add_argument(
-            "--lm-weight",
-            "--lm_weight",
-            type=float,
-            default=0.2,
-            help="weight for lm while interpolating with neural score",
-        )
-    except argparse.ArgumentError:
-        # the option may already be registered by the generation parser
-        pass
-    parser.add_argument(
-        "--rnnt_len_penalty", default=-0.5, help="rnnt length penalty on word level"
-    )
-    parser.add_argument(
-        "--w2l-decoder",
-        choices=["viterbi", "kenlm", "fairseqlm"],
-        help="use a w2l decoder",
-    )
-    parser.add_argument("--lexicon", help="lexicon for w2l decoder")
-    parser.add_argument("--unit-lm", action="store_true", help="if using a unit lm")
-    parser.add_argument("--kenlm-model", "--lm-model", help="lm model for w2l decoder")
-    parser.add_argument("--beam-threshold", type=float, default=25.0)
-    parser.add_argument("--beam-size-token", type=float, default=100)
-    parser.add_argument("--word-score", type=float, default=1.0)
-    parser.add_argument("--unk-weight", type=float, default=-math.inf)
-    parser.add_argument("--sil-weight", type=float, default=0.0)
-    parser.add_argument(
-        "--dump-emissions",
-        type=str,
-        default=None,
-        help="if present, dumps emissions into this file and exits",
-    )
-    parser.add_argument(
-        "--dump-features",
-        type=str,
-        default=None,
-        help="if present, dumps features into this file and exits",
-    )
-    parser.add_argument(
-        "--load-emissions",
-        type=str,
-        default=None,
-        help="if present, loads emissions from this file",
-    )
-    return parser
-
-
-def check_args(args):
-    # assert args.path is not None, "--path required for generation!"
-    # assert args.results_path is not None, "--results_path required for generation!"
- assert ( - not args.sampling or args.nbest == args.beam - ), "--sampling requires --nbest to be equal to --beam" - assert ( - args.replace_unk is None or args.raw_text - ), "--replace-unk requires a raw text dataset (--raw-text)" - - -def get_dataset_itr(args, task, models): - return task.get_batch_iterator( - dataset=task.dataset(args.gen_subset), - max_tokens=args.max_tokens, - max_sentences=args.batch_size, - max_positions=(sys.maxsize, sys.maxsize), - ignore_invalid_inputs=args.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=args.required_batch_size_multiple, - num_shards=args.num_shards, - shard_id=args.shard_id, - num_workers=args.num_workers, - data_buffer_size=args.data_buffer_size, - ).next_epoch_itr(shuffle=False) - - -def process_predictions( - args, hypos, sp, tgt_dict, target_tokens, res_files, speaker, id -): - for hypo in hypos[: min(len(hypos), args.nbest)]: - hyp_pieces = tgt_dict.string(hypo["tokens"].int().cpu()) - - if "words" in hypo: - hyp_words = " ".join(hypo["words"]) - else: - hyp_words = post_process(hyp_pieces, args.post_process) - - if res_files is not None: - print( - "{} ({}-{})".format(hyp_pieces, speaker, id), - file=res_files["hypo.units"], - ) - print( - "{} ({}-{})".format(hyp_words, speaker, id), - file=res_files["hypo.words"], - ) - - tgt_pieces = tgt_dict.string(target_tokens) - tgt_words = post_process(tgt_pieces, args.post_process) - - if res_files is not None: - print( - "{} ({}-{})".format(tgt_pieces, speaker, id), - file=res_files["ref.units"], - ) - print( - "{} ({}-{})".format(tgt_words, speaker, id), file=res_files["ref.words"] - ) - - if not args.quiet: - logger.info("HYPO:" + hyp_words) - logger.info("TARGET:" + tgt_words) - logger.info("___________________") - - hyp_words = hyp_words.split() - tgt_words = tgt_words.split() - return editdistance.eval(hyp_words, tgt_words), len(tgt_words) - - -def prepare_result_files(args): - def get_res_file(file_prefix): - if args.num_shards > 1: - file_prefix = f"{args.shard_id}_{file_prefix}" - path = os.path.join( - args.results_path, - "{}-{}-{}.txt".format( - file_prefix, os.path.basename(args.path), args.gen_subset - ), - ) - return open(path, "w", buffering=1) - - if not args.results_path: - return None - - return { - "hypo.words": get_res_file("hypo.word"), - "hypo.units": get_res_file("hypo.units"), - "ref.words": get_res_file("ref.word"), - "ref.units": get_res_file("ref.units"), - } - - -def optimize_models(args, use_cuda, models): - """Optimize ensemble for generation""" - for model in models: - model.make_generation_fast_( - beamable_mm_beam_size=None if args.no_beamable_mm else args.beam, - need_attn=args.print_alignment, - ) - if args.fp16: - model.half() - if use_cuda: - model.cuda() - - -def apply_half(t): - if t.dtype is torch.float32: - return t.to(dtype=torch.half) - return t - - -class ExistingEmissionsDecoder(object): - def __init__(self, decoder, emissions): - self.decoder = decoder - self.emissions = emissions - - def generate(self, models, sample, **unused): - ids = sample["id"].cpu().numpy() - try: - emissions = np.stack(self.emissions[ids]) - except: - print([x.shape for x in self.emissions[ids]]) - raise Exception("invalid sizes") - emissions = torch.from_numpy(emissions) - return self.decoder.decode(emissions) - - -def main(args, task=None, model_state=None): - check_args(args) - - use_fp16 = args.fp16 - if args.max_tokens is None and args.batch_size is None: - args.max_tokens = 4000000 - logger.info(args) - - use_cuda = torch.cuda.is_available() and not 
args.cpu - - logger.info("| decoding with criterion {}".format(args.criterion)) - - task = tasks.setup_task(args) - - # Load ensemble - if args.load_emissions: - models, criterions = [], [] - task.load_dataset(args.gen_subset) - else: - logger.info("| loading model(s) from {}".format(args.path)) - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - utils.split_paths(args.path, separator="\\"), - arg_overrides=ast.literal_eval(args.model_overrides), - task=task, - suffix=args.checkpoint_suffix, - strict=(args.checkpoint_shard_count == 1), - num_shards=args.checkpoint_shard_count, - state=model_state, - ) - optimize_models(args, use_cuda, models) - task.load_dataset(args.gen_subset, task_cfg=saved_cfg.task) - - - # Set dictionary - tgt_dict = task.target_dictionary - - logger.info( - "| {} {} {} examples".format( - args.data, args.gen_subset, len(task.dataset(args.gen_subset)) - ) - ) - - # hack to pass transitions to W2lDecoder - if args.criterion == "asg_loss": - raise NotImplementedError("asg_loss is currently not supported") - # trans = criterions[0].asg.trans.data - # args.asg_transitions = torch.flatten(trans).tolist() - - # Load dataset (possibly sharded) - itr = get_dataset_itr(args, task, models) - - # Initialize generator - gen_timer = StopwatchMeter() - - def build_generator(args): - w2l_decoder = getattr(args, "w2l_decoder", None) - if w2l_decoder == "viterbi": - from examples.speech_recognition.w2l_decoder import W2lViterbiDecoder - - return W2lViterbiDecoder(args, task.target_dictionary) - elif w2l_decoder == "kenlm": - from examples.speech_recognition.w2l_decoder import W2lKenLMDecoder - - return W2lKenLMDecoder(args, task.target_dictionary) - elif w2l_decoder == "fairseqlm": - from examples.speech_recognition.w2l_decoder import W2lFairseqLMDecoder - - return W2lFairseqLMDecoder(args, task.target_dictionary) - else: - print( - "only flashlight decoders with (viterbi, kenlm, fairseqlm) options are supported at the moment" - ) - - # please do not touch this unless you test both generate.py and infer.py with audio_pretraining task - generator = build_generator(args) - - if args.load_emissions: - generator = ExistingEmissionsDecoder( - generator, np.load(args.load_emissions, allow_pickle=True) - ) - logger.info("loaded emissions from " + args.load_emissions) - - num_sentences = 0 - - if args.results_path is not None and not os.path.exists(args.results_path): - os.makedirs(args.results_path) - - max_source_pos = ( - utils.resolve_max_positions( - task.max_positions(), *[model.max_positions() for model in models] - ), - ) - - if max_source_pos is not None: - max_source_pos = max_source_pos[0] - if max_source_pos is not None: - max_source_pos = max_source_pos[0] - 1 - - if args.dump_emissions: - emissions = {} - if args.dump_features: - features = {} - models[0].bert.proj = None - else: - res_files = prepare_result_files(args) - errs_t = 0 - lengths_t = 0 - with progress_bar.build_progress_bar(args, itr) as t: - wps_meter = TimeMeter() - for sample in t: - sample = utils.move_to_cuda(sample) if use_cuda else sample - if use_fp16: - sample = utils.apply_to_sample(apply_half, sample) - if "net_input" not in sample: - continue - - prefix_tokens = None - if args.prefix_size > 0: - prefix_tokens = sample["target"][:, : args.prefix_size] - - gen_timer.start() - if args.dump_emissions: - with torch.no_grad(): - encoder_out = models[0](**sample["net_input"]) - emm = models[0].get_normalized_probs(encoder_out, log_probs=True) - emm = emm.transpose(0, 1).cpu().numpy() - 
for i, id in enumerate(sample["id"]): - emissions[id.item()] = emm[i] - continue - elif args.dump_features: - with torch.no_grad(): - encoder_out = models[0](**sample["net_input"]) - feat = encoder_out["encoder_out"].transpose(0, 1).cpu().numpy() - for i, id in enumerate(sample["id"]): - padding = ( - encoder_out["encoder_padding_mask"][i].cpu().numpy() - if encoder_out["encoder_padding_mask"] is not None - else None - ) - features[id.item()] = (feat[i], padding) - continue - hypos = task.inference_step(generator, models, sample, prefix_tokens) - num_generated_tokens = sum(len(h[0]["tokens"]) for h in hypos) - gen_timer.stop(num_generated_tokens) - - for i, sample_id in enumerate(sample["id"].tolist()): - speaker = None - # id = task.dataset(args.gen_subset).ids[int(sample_id)] - id = sample_id - toks = ( - sample["target"][i, :] - if "target_label" not in sample - else sample["target_label"][i, :] - ) - target_tokens = utils.strip_pad(toks, tgt_dict.pad()).int().cpu() - # Process top predictions - errs, length = process_predictions( - args, - hypos[i], - None, - tgt_dict, - target_tokens, - res_files, - speaker, - id, - ) - errs_t += errs - lengths_t += length - - wps_meter.update(num_generated_tokens) - t.log({"wps": round(wps_meter.avg)}) - num_sentences += ( - sample["nsentences"] if "nsentences" in sample else sample["id"].numel() - ) - - wer = None - if args.dump_emissions: - emm_arr = [] - for i in range(len(emissions)): - emm_arr.append(emissions[i]) - np.save(args.dump_emissions, emm_arr) - logger.info(f"saved {len(emissions)} emissions to {args.dump_emissions}") - elif args.dump_features: - feat_arr = [] - for i in range(len(features)): - feat_arr.append(features[i]) - np.save(args.dump_features, feat_arr) - logger.info(f"saved {len(features)} emissions to {args.dump_features}") - else: - if lengths_t > 0: - wer = errs_t * 100.0 / lengths_t - logger.info(f"WER: {wer}") - - logger.info( - "| Processed {} sentences ({} tokens) in {:.1f}s ({:.2f}" - "sentences/s, {:.2f} tokens/s)".format( - num_sentences, - gen_timer.n, - gen_timer.sum, - num_sentences / gen_timer.sum, - 1.0 / gen_timer.avg, - ) - ) - logger.info("| Generate {} with beam={}".format(args.gen_subset, args.beam)) - return task, wer - - -def make_parser(): - parser = options.get_generation_parser() - parser = add_asr_eval_argument(parser) - return parser - - -def cli_main(): - parser = make_parser() - args = options.parse_args_and_arch(parser) - main(args) - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/examples/speech_recognition/kaldi/__init__.py b/kosmos-g/fairseq/examples/speech_recognition/kaldi/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/kosmos-g/fairseq/examples/speech_recognition/kaldi/add-self-loop-simple.cc b/kosmos-g/fairseq/examples/speech_recognition/kaldi/add-self-loop-simple.cc deleted file mode 100644 index e18fb62df..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/kaldi/add-self-loop-simple.cc +++ /dev/null @@ -1,94 +0,0 @@ -/* - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include <iostream> -#include "fstext/fstext-lib.h" // @manual -#include "util/common-utils.h" // @manual - -/* - * This program is to modify a FST without self-loop by: - * for each incoming arc with non-eps input symbol, add a self-loop arc - * with that non-eps symbol as input and eps as output. 
- * - * This is to make sure the resultant FST can do deduplication for repeated - * symbols, which is very common in acoustic model - * - */ -namespace { -int32 AddSelfLoopsSimple(fst::StdVectorFst* fst) { - typedef fst::MutableArcIterator<fst::StdVectorFst> IterType; - - int32 num_states_before = fst->NumStates(); - fst::MakePrecedingInputSymbolsSame(false, fst); - int32 num_states_after = fst->NumStates(); - KALDI_LOG << "There are " << num_states_before - << " states in the original FST; " - << " after MakePrecedingInputSymbolsSame, there are " - << num_states_after << " states " << std::endl; - - auto weight_one = fst::StdArc::Weight::One(); - - int32 num_arc_added = 0; - - fst::StdArc self_loop_arc; - self_loop_arc.weight = weight_one; - - int32 num_states = fst->NumStates(); - std::vector<std::set<int32>> incoming_non_eps_label_per_state(num_states); - - for (int32 state = 0; state < num_states; state++) { - for (IterType aiter(fst, state); !aiter.Done(); aiter.Next()) { - fst::StdArc arc(aiter.Value()); - if (arc.ilabel != 0) { - incoming_non_eps_label_per_state[arc.nextstate].insert(arc.ilabel); - } - } - } - - for (int32 state = 0; state < num_states; state++) { - if (!incoming_non_eps_label_per_state[state].empty()) { - auto& ilabel_set = incoming_non_eps_label_per_state[state]; - for (auto it = ilabel_set.begin(); it != ilabel_set.end(); it++) { - self_loop_arc.ilabel = *it; - self_loop_arc.olabel = 0; - self_loop_arc.nextstate = state; - fst->AddArc(state, self_loop_arc); - num_arc_added++; - } - } - } - return num_arc_added; -} - -void print_usage() { - std::cout << "add-self-loop-simple usage:\n" - "\tadd-self-loop-simple <in-fst> <out-fst> \n"; -} -} // namespace - -int main(int argc, char** argv) { - if (argc != 3) { - print_usage(); - exit(1); - } - - auto input = argv[1]; - auto output = argv[2]; - - auto fst = fst::ReadFstKaldi(input); - auto num_states = fst->NumStates(); - KALDI_LOG << "Loading FST from " << input << " with " << num_states - << " states." << std::endl; - - int32 num_arc_added = AddSelfLoopsSimple(fst); - KALDI_LOG << "Adding " << num_arc_added << " self-loop arcs " << std::endl; - - fst::WriteFstKaldi(*fst, std::string(output)); - KALDI_LOG << "Writing FST to " << output << std::endl; - - delete fst; -} diff --git a/kosmos-g/fairseq/examples/speech_recognition/kaldi/config/kaldi_initializer.yaml b/kosmos-g/fairseq/examples/speech_recognition/kaldi/config/kaldi_initializer.yaml deleted file mode 100644 index be9ba98f5..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/kaldi/config/kaldi_initializer.yaml +++ /dev/null @@ -1,8 +0,0 @@ -# @package _group_ - -data_dir: ??? -fst_dir: ??? -in_labels: ??? -kaldi_root: ??? -lm_arpa: ??? -blank_symbol: <s> diff --git a/kosmos-g/fairseq/examples/speech_recognition/kaldi/kaldi_decoder.py b/kosmos-g/fairseq/examples/speech_recognition/kaldi/kaldi_decoder.py deleted file mode 100644 index 5f62cc58a..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/kaldi/kaldi_decoder.py +++ /dev/null @@ -1,244 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
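A minimal sketch (hypothetical paths; pykaldi must be installed) of driving the `KaldiDecoder` defined just below with a prebuilt HLG decode graph:

```python
from examples.speech_recognition.kaldi.kaldi_decoder import (
    KaldiDecoder,
    KaldiDecoderConfig,
)

cfg = KaldiDecoderConfig(
    hlg_graph_path="graphs/HLG.fst",  # hypothetical prebuilt decode graph
    output_dict="graphs/words.txt",   # word symbol table, one "<word> <id>" per line
    acoustic_scale=0.5,
)
decoder = KaldiDecoder(cfg, beam=15)

# decoder.generate(models, sample) submits one decode per utterance to a
# thread pool and returns futures; each resolves to a list of hypotheses
# of the form {"tokens": ..., "words": ..., "score": ...}
```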
- -from concurrent.futures import ThreadPoolExecutor -import logging -from omegaconf import MISSING -import os -import torch -from typing import Optional -import warnings - - -from dataclasses import dataclass -from fairseq.dataclass import FairseqDataclass -from .kaldi_initializer import KaldiInitializerConfig, initalize_kaldi - - -logger = logging.getLogger(__name__) - - -@dataclass -class KaldiDecoderConfig(FairseqDataclass): - hlg_graph_path: Optional[str] = None - output_dict: str = MISSING - - kaldi_initializer_config: Optional[KaldiInitializerConfig] = None - - acoustic_scale: float = 0.5 - max_active: int = 10000 - beam_delta: float = 0.5 - hash_ratio: float = 2.0 - - is_lattice: bool = False - lattice_beam: float = 10.0 - prune_interval: int = 25 - determinize_lattice: bool = True - prune_scale: float = 0.1 - max_mem: int = 0 - phone_determinize: bool = True - word_determinize: bool = True - minimize: bool = True - - num_threads: int = 1 - - -class KaldiDecoder(object): - def __init__( - self, - cfg: KaldiDecoderConfig, - beam: int, - nbest: int = 1, - ): - try: - from kaldi.asr import FasterRecognizer, LatticeFasterRecognizer - from kaldi.base import set_verbose_level - from kaldi.decoder import ( - FasterDecoder, - FasterDecoderOptions, - LatticeFasterDecoder, - LatticeFasterDecoderOptions, - ) - from kaldi.lat.functions import DeterminizeLatticePhonePrunedOptions - from kaldi.fstext import read_fst_kaldi, SymbolTable - except: - warnings.warn( - "pykaldi is required for this functionality. Please install from https://github.com/pykaldi/pykaldi" - ) - - # set_verbose_level(2) - - self.acoustic_scale = cfg.acoustic_scale - self.nbest = nbest - - if cfg.hlg_graph_path is None: - assert ( - cfg.kaldi_initializer_config is not None - ), "Must provide hlg graph path or kaldi initializer config" - cfg.hlg_graph_path = initalize_kaldi(cfg.kaldi_initializer_config) - - assert os.path.exists(cfg.hlg_graph_path), cfg.hlg_graph_path - - if cfg.is_lattice: - self.dec_cls = LatticeFasterDecoder - opt_cls = LatticeFasterDecoderOptions - self.rec_cls = LatticeFasterRecognizer - else: - assert self.nbest == 1, "nbest > 1 requires lattice decoder" - self.dec_cls = FasterDecoder - opt_cls = FasterDecoderOptions - self.rec_cls = FasterRecognizer - - self.decoder_options = opt_cls() - self.decoder_options.beam = beam - self.decoder_options.max_active = cfg.max_active - self.decoder_options.beam_delta = cfg.beam_delta - self.decoder_options.hash_ratio = cfg.hash_ratio - - if cfg.is_lattice: - self.decoder_options.lattice_beam = cfg.lattice_beam - self.decoder_options.prune_interval = cfg.prune_interval - self.decoder_options.determinize_lattice = cfg.determinize_lattice - self.decoder_options.prune_scale = cfg.prune_scale - det_opts = DeterminizeLatticePhonePrunedOptions() - det_opts.max_mem = cfg.max_mem - det_opts.phone_determinize = cfg.phone_determinize - det_opts.word_determinize = cfg.word_determinize - det_opts.minimize = cfg.minimize - self.decoder_options.det_opts = det_opts - - self.output_symbols = {} - with open(cfg.output_dict, "r") as f: - for line in f: - items = line.rstrip().split() - assert len(items) == 2 - self.output_symbols[int(items[1])] = items[0] - - logger.info(f"Loading FST from {cfg.hlg_graph_path}") - self.fst = read_fst_kaldi(cfg.hlg_graph_path) - self.symbol_table = SymbolTable.read_text(cfg.output_dict) - - self.executor = ThreadPoolExecutor(max_workers=cfg.num_threads) - - def generate(self, models, sample, **unused): - """Generate a batch of inferences.""" - # 
model.forward normally channels prev_output_tokens into the decoder - # separately, but SequenceGenerator directly calls model.encoder - encoder_input = { - k: v for k, v in sample["net_input"].items() if k != "prev_output_tokens" - } - emissions, padding = self.get_emissions(models, encoder_input) - return self.decode(emissions, padding) - - def get_emissions(self, models, encoder_input): - """Run encoder and normalize emissions""" - model = models[0] - - all_encoder_out = [m(**encoder_input) for m in models] - - if len(all_encoder_out) > 1: - - if "encoder_out" in all_encoder_out[0]: - encoder_out = { - "encoder_out": sum(e["encoder_out"] for e in all_encoder_out) - / len(all_encoder_out), - "encoder_padding_mask": all_encoder_out[0]["encoder_padding_mask"], - } - padding = encoder_out["encoder_padding_mask"] - else: - encoder_out = { - "logits": sum(e["logits"] for e in all_encoder_out) - / len(all_encoder_out), - "padding_mask": all_encoder_out[0]["padding_mask"], - } - padding = encoder_out["padding_mask"] - else: - encoder_out = all_encoder_out[0] - padding = ( - encoder_out["padding_mask"] - if "padding_mask" in encoder_out - else encoder_out["encoder_padding_mask"] - ) - - if hasattr(model, "get_logits"): - emissions = model.get_logits(encoder_out, normalize=True) - else: - emissions = model.get_normalized_probs(encoder_out, log_probs=True) - - return ( - emissions.cpu().float().transpose(0, 1), - padding.cpu() if padding is not None and padding.any() else None, - ) - - def decode_one(self, logits, padding): - from kaldi.matrix import Matrix - - decoder = self.dec_cls(self.fst, self.decoder_options) - asr = self.rec_cls( - decoder, self.symbol_table, acoustic_scale=self.acoustic_scale - ) - - if padding is not None: - logits = logits[~padding] - - mat = Matrix(logits.numpy()) - - out = asr.decode(mat) - - if self.nbest > 1: - from kaldi.fstext import shortestpath - from kaldi.fstext.utils import ( - convert_compact_lattice_to_lattice, - convert_lattice_to_std, - convert_nbest_to_list, - get_linear_symbol_sequence, - ) - - lat = out["lattice"] - - sp = shortestpath(lat, nshortest=self.nbest) - - sp = convert_compact_lattice_to_lattice(sp) - sp = convert_lattice_to_std(sp) - seq = convert_nbest_to_list(sp) - - results = [] - for s in seq: - _, o, w = get_linear_symbol_sequence(s) - words = list(self.output_symbols[z] for z in o) - results.append( - { - "tokens": words, - "words": words, - "score": w.value, - "emissions": logits, - } - ) - return results - else: - words = out["text"].split() - return [ - { - "tokens": words, - "words": words, - "score": out["likelihood"], - "emissions": logits, - } - ] - - def decode(self, emissions, padding): - if padding is None: - padding = [None] * len(emissions) - - ret = list( - map( - lambda e, p: self.executor.submit(self.decode_one, e, p), - emissions, - padding, - ) - ) - return ret diff --git a/kosmos-g/fairseq/examples/speech_recognition/kaldi/kaldi_initializer.py b/kosmos-g/fairseq/examples/speech_recognition/kaldi/kaldi_initializer.py deleted file mode 100644 index 6d2a2a4b6..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/kaldi/kaldi_initializer.py +++ /dev/null @@ -1,698 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
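The initializer below builds the Kaldi H, L, and G WFSTs (and their compositions) from an ARPA language model, a lexicon, and the output-unit dictionary. A minimal configuration sketch follows (assumed paths, mirroring the fields of `kaldi_initializer.yaml` above; note the entry point really is spelled `initalize_kaldi` in this codebase, as the import in `kaldi_decoder.py` shows):

```python
from examples.speech_recognition.kaldi.kaldi_initializer import (
    KaldiInitializerConfig,
    initalize_kaldi,  # sic: spelled this way throughout the codebase
)

cfg = KaldiInitializerConfig(
    data_dir="data/preprocessed",  # hypothetical fairseq-preprocessed data dir
    in_labels="ltr",               # hypothetical input token type
    lm_arpa="lm/4gram.arpa",       # hypothetical ARPA language model
    kaldi_root="/opt/kaldi",       # assumes a compiled Kaldi checkout
)

# builds (or reuses) the decode graph and returns the path to the HLG FST
hlg_path = initalize_kaldi(cfg)
```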
- -from dataclasses import dataclass -import hydra -from hydra.core.config_store import ConfigStore -import logging -from omegaconf import MISSING, OmegaConf -import os -import os.path as osp -from pathlib import Path -import subprocess -from typing import Optional - -from fairseq.data.dictionary import Dictionary -from fairseq.dataclass import FairseqDataclass - -script_dir = Path(__file__).resolve().parent -config_path = script_dir / "config" - - -logger = logging.getLogger(__name__) - - -@dataclass -class KaldiInitializerConfig(FairseqDataclass): - data_dir: str = MISSING - fst_dir: Optional[str] = None - in_labels: str = MISSING - out_labels: Optional[str] = None - wav2letter_lexicon: Optional[str] = None - lm_arpa: str = MISSING - kaldi_root: str = MISSING - blank_symbol: str = "<s>" - silence_symbol: Optional[str] = None - - -def create_units(fst_dir: Path, in_labels: str, vocab: Dictionary) -> Path: - in_units_file = fst_dir / f"kaldi_dict.{in_labels}.txt" - if not in_units_file.exists(): - - logger.info(f"Creating {in_units_file}") - - with open(in_units_file, "w") as f: - print("<eps> 0", file=f) - i = 1 - for symb in vocab.symbols[vocab.nspecial :]: - if not symb.startswith("madeupword"): - print(f"{symb} {i}", file=f) - i += 1 - return in_units_file - - -def create_lexicon( - cfg: KaldiInitializerConfig, - fst_dir: Path, - unique_label: str, - in_units_file: Path, - out_words_file: Path, -) -> (Path, Path): - - disambig_in_units_file = fst_dir / f"kaldi_dict.{cfg.in_labels}_disambig.txt" - lexicon_file = fst_dir / f"kaldi_lexicon.{unique_label}.txt" - disambig_lexicon_file = fst_dir / f"kaldi_lexicon.{unique_label}_disambig.txt" - if ( - not lexicon_file.exists() - or not disambig_lexicon_file.exists() - or not disambig_in_units_file.exists() - ): - logger.info(f"Creating {lexicon_file} (in units file: {in_units_file})") - - assert cfg.wav2letter_lexicon is not None or cfg.in_labels == cfg.out_labels - - if cfg.wav2letter_lexicon is not None: - lm_words = set() - with open(out_words_file, "r") as lm_dict_f: - for line in lm_dict_f: - lm_words.add(line.split()[0]) - - num_skipped = 0 - total = 0 - with open(cfg.wav2letter_lexicon, "r") as w2l_lex_f, open( - lexicon_file, "w" - ) as out_f: - for line in w2l_lex_f: - items = line.rstrip().split("\t") - assert len(items) == 2, items - if items[0] in lm_words: - print(items[0], items[1], file=out_f) - else: - num_skipped += 1 - logger.debug( - f"Skipping word {items[0]} as it was not found in LM" - ) - total += 1 - if num_skipped > 0: - logger.warning( - f"Skipped {num_skipped} out of {total} words as they were not found in LM" - ) - else: - with open(in_units_file, "r") as in_f, open(lexicon_file, "w") as out_f: - for line in in_f: - symb = line.split()[0] - if symb != "<eps>" and symb != "<ctc_blank>" and symb != "<SIL>": - print(symb, symb, file=out_f) - - lex_disambig_path = ( - Path(cfg.kaldi_root) / "egs/wsj/s5/utils/add_lex_disambig.pl" - ) - res = subprocess.run( - [lex_disambig_path, lexicon_file, disambig_lexicon_file], - check=True, - capture_output=True, - ) - ndisambig = int(res.stdout) - disamib_path = Path(cfg.kaldi_root) / "egs/wsj/s5/utils/add_disambig.pl" - res = subprocess.run( - [disamib_path, "--include-zero", in_units_file, str(ndisambig)], - check=True, - capture_output=True, - ) - with open(disambig_in_units_file, "wb") as f: - f.write(res.stdout) - - return disambig_lexicon_file, disambig_in_units_file - - -def create_G( - kaldi_root: Path, fst_dir: Path, lm_arpa: Path, arpa_base: str -) -> (Path, Path): - - 
out_words_file = fst_dir / f"kaldi_dict.{arpa_base}.txt" - grammar_graph = fst_dir / f"G_{arpa_base}.fst" - if not grammar_graph.exists() or not out_words_file.exists(): - logger.info(f"Creating {grammar_graph}") - arpa2fst = kaldi_root / "src/lmbin/arpa2fst" - subprocess.run( - [ - arpa2fst, - "--disambig-symbol=#0", - f"--write-symbol-table={out_words_file}", - lm_arpa, - grammar_graph, - ], - check=True, - ) - return grammar_graph, out_words_file - - -def create_L( - kaldi_root: Path, - fst_dir: Path, - unique_label: str, - lexicon_file: Path, - in_units_file: Path, - out_words_file: Path, -) -> Path: - lexicon_graph = fst_dir / f"L.{unique_label}.fst" - - if not lexicon_graph.exists(): - logger.info(f"Creating {lexicon_graph} (in units: {in_units_file})") - make_lex = kaldi_root / "egs/wsj/s5/utils/make_lexicon_fst.pl" - fstcompile = kaldi_root / "tools/openfst-1.6.7/bin/fstcompile" - fstaddselfloops = kaldi_root / "src/fstbin/fstaddselfloops" - fstarcsort = kaldi_root / "tools/openfst-1.6.7/bin/fstarcsort" - - def write_disambig_symbol(file): - with open(file, "r") as f: - for line in f: - items = line.rstrip().split() - if items[0] == "#0": - out_path = str(file) + "_disamig" - with open(out_path, "w") as out_f: - print(items[1], file=out_f) - return out_path - - return None - - in_disambig_sym = write_disambig_symbol(in_units_file) - assert in_disambig_sym is not None - out_disambig_sym = write_disambig_symbol(out_words_file) - assert out_disambig_sym is not None - - try: - with open(lexicon_graph, "wb") as out_f: - res = subprocess.run( - [make_lex, lexicon_file], capture_output=True, check=True - ) - assert len(res.stderr) == 0, res.stderr.decode("utf-8") - res = subprocess.run( - [ - fstcompile, - f"--isymbols={in_units_file}", - f"--osymbols={out_words_file}", - "--keep_isymbols=false", - "--keep_osymbols=false", - ], - input=res.stdout, - capture_output=True, - ) - assert len(res.stderr) == 0, res.stderr.decode("utf-8") - res = subprocess.run( - [fstaddselfloops, in_disambig_sym, out_disambig_sym], - input=res.stdout, - capture_output=True, - check=True, - ) - res = subprocess.run( - [fstarcsort, "--sort_type=olabel"], - input=res.stdout, - capture_output=True, - check=True, - ) - out_f.write(res.stdout) - except subprocess.CalledProcessError as e: - logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}") - os.remove(lexicon_graph) - raise - except AssertionError: - os.remove(lexicon_graph) - raise - - return lexicon_graph - - -def create_LG( - kaldi_root: Path, - fst_dir: Path, - unique_label: str, - lexicon_graph: Path, - grammar_graph: Path, -) -> Path: - lg_graph = fst_dir / f"LG.{unique_label}.fst" - - if not lg_graph.exists(): - logger.info(f"Creating {lg_graph}") - - fsttablecompose = kaldi_root / "src/fstbin/fsttablecompose" - fstdeterminizestar = kaldi_root / "src/fstbin/fstdeterminizestar" - fstminimizeencoded = kaldi_root / "src/fstbin/fstminimizeencoded" - fstpushspecial = kaldi_root / "src/fstbin/fstpushspecial" - fstarcsort = kaldi_root / "tools/openfst-1.6.7/bin/fstarcsort" - - try: - with open(lg_graph, "wb") as out_f: - res = subprocess.run( - [fsttablecompose, lexicon_graph, grammar_graph], - capture_output=True, - check=True, - ) - res = subprocess.run( - [ - fstdeterminizestar, - "--use-log=true", - ], - input=res.stdout, - capture_output=True, - ) - res = subprocess.run( - [fstminimizeencoded], - input=res.stdout, - capture_output=True, - check=True, - ) - res = subprocess.run( - [fstpushspecial], - input=res.stdout, - capture_output=True, - 
check=True, - ) - res = subprocess.run( - [fstarcsort, "--sort_type=ilabel"], - input=res.stdout, - capture_output=True, - check=True, - ) - out_f.write(res.stdout) - except subprocess.CalledProcessError as e: - logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}") - os.remove(lg_graph) - raise - - return lg_graph - - -def create_H( - kaldi_root: Path, - fst_dir: Path, - disambig_out_units_file: Path, - in_labels: str, - vocab: Dictionary, - blk_sym: str, - silence_symbol: Optional[str], -) -> (Path, Path, Path): - h_graph = ( - fst_dir / f"H.{in_labels}{'_' + silence_symbol if silence_symbol else ''}.fst" - ) - h_out_units_file = fst_dir / f"kaldi_dict.h_out.{in_labels}.txt" - disambig_in_units_file_int = Path(str(h_graph) + "isym_disambig.int") - disambig_out_units_file_int = Path(str(disambig_out_units_file) + ".int") - if ( - not h_graph.exists() - or not h_out_units_file.exists() - or not disambig_in_units_file_int.exists() - ): - logger.info(f"Creating {h_graph}") - eps_sym = "<eps>" - - num_disambig = 0 - osymbols = [] - - with open(disambig_out_units_file, "r") as f, open( - disambig_out_units_file_int, "w" - ) as out_f: - for line in f: - symb, id = line.rstrip().split() - if line.startswith("#"): - num_disambig += 1 - print(id, file=out_f) - else: - if len(osymbols) == 0: - assert symb == eps_sym, symb - osymbols.append((symb, id)) - - i_idx = 0 - isymbols = [(eps_sym, 0)] - - imap = {} - - for i, s in enumerate(vocab.symbols): - i_idx += 1 - isymbols.append((s, i_idx)) - imap[s] = i_idx - - fst_str = [] - - node_idx = 0 - root_node = node_idx - - special_symbols = [blk_sym] - if silence_symbol is not None: - special_symbols.append(silence_symbol) - - for ss in special_symbols: - fst_str.append("{} {} {} {}".format(root_node, root_node, ss, eps_sym)) - - for symbol, _ in osymbols: - if symbol == eps_sym or symbol.startswith("#"): - continue - - node_idx += 1 - # 1. from root to emitting state - fst_str.append("{} {} {} {}".format(root_node, node_idx, symbol, symbol)) - # 2. from emitting state back to root - fst_str.append("{} {} {} {}".format(node_idx, root_node, eps_sym, eps_sym)) - # 3. from emitting state to optional blank state - pre_node = node_idx - node_idx += 1 - for ss in special_symbols: - fst_str.append("{} {} {} {}".format(pre_node, node_idx, ss, eps_sym)) - # 4. 
from blank state back to root - fst_str.append("{} {} {} {}".format(node_idx, root_node, eps_sym, eps_sym)) - - fst_str.append("{}".format(root_node)) - - fst_str = "\n".join(fst_str) - h_str = str(h_graph) - isym_file = h_str + ".isym" - - with open(isym_file, "w") as f: - for sym, id in isymbols: - f.write("{} {}\n".format(sym, id)) - - with open(h_out_units_file, "w") as f: - for sym, id in osymbols: - f.write("{} {}\n".format(sym, id)) - - with open(disambig_in_units_file_int, "w") as f: - disam_sym_id = len(isymbols) - for _ in range(num_disambig): - f.write("{}\n".format(disam_sym_id)) - disam_sym_id += 1 - - fstcompile = kaldi_root / "tools/openfst-1.6.7/bin/fstcompile" - fstaddselfloops = kaldi_root / "src/fstbin/fstaddselfloops" - fstarcsort = kaldi_root / "tools/openfst-1.6.7/bin/fstarcsort" - - try: - with open(h_graph, "wb") as out_f: - res = subprocess.run( - [ - fstcompile, - f"--isymbols={isym_file}", - f"--osymbols={h_out_units_file}", - "--keep_isymbols=false", - "--keep_osymbols=false", - ], - input=str.encode(fst_str), - capture_output=True, - check=True, - ) - res = subprocess.run( - [ - fstaddselfloops, - disambig_in_units_file_int, - disambig_out_units_file_int, - ], - input=res.stdout, - capture_output=True, - check=True, - ) - res = subprocess.run( - [fstarcsort, "--sort_type=olabel"], - input=res.stdout, - capture_output=True, - check=True, - ) - out_f.write(res.stdout) - except subprocess.CalledProcessError as e: - logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}") - os.remove(h_graph) - raise - return h_graph, h_out_units_file, disambig_in_units_file_int - - -def create_HLGa( - kaldi_root: Path, - fst_dir: Path, - unique_label: str, - h_graph: Path, - lg_graph: Path, - disambig_in_words_file_int: Path, -) -> Path: - hlga_graph = fst_dir / f"HLGa.{unique_label}.fst" - - if not hlga_graph.exists(): - logger.info(f"Creating {hlga_graph}") - - fsttablecompose = kaldi_root / "src/fstbin/fsttablecompose" - fstdeterminizestar = kaldi_root / "src/fstbin/fstdeterminizestar" - fstrmsymbols = kaldi_root / "src/fstbin/fstrmsymbols" - fstrmepslocal = kaldi_root / "src/fstbin/fstrmepslocal" - fstminimizeencoded = kaldi_root / "src/fstbin/fstminimizeencoded" - - try: - with open(hlga_graph, "wb") as out_f: - res = subprocess.run( - [ - fsttablecompose, - h_graph, - lg_graph, - ], - capture_output=True, - check=True, - ) - res = subprocess.run( - [fstdeterminizestar, "--use-log=true"], - input=res.stdout, - capture_output=True, - check=True, - ) - res = subprocess.run( - [fstrmsymbols, disambig_in_words_file_int], - input=res.stdout, - capture_output=True, - check=True, - ) - res = subprocess.run( - [fstrmepslocal], - input=res.stdout, - capture_output=True, - check=True, - ) - res = subprocess.run( - [fstminimizeencoded], - input=res.stdout, - capture_output=True, - check=True, - ) - out_f.write(res.stdout) - except subprocess.CalledProcessError as e: - logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}") - os.remove(hlga_graph) - raise - - return hlga_graph - - -def create_HLa( - kaldi_root: Path, - fst_dir: Path, - unique_label: str, - h_graph: Path, - l_graph: Path, - disambig_in_words_file_int: Path, -) -> Path: - hla_graph = fst_dir / f"HLa.{unique_label}.fst" - - if not hla_graph.exists(): - logger.info(f"Creating {hla_graph}") - - fsttablecompose = kaldi_root / "src/fstbin/fsttablecompose" - fstdeterminizestar = kaldi_root / "src/fstbin/fstdeterminizestar" - fstrmsymbols = kaldi_root / "src/fstbin/fstrmsymbols" - fstrmepslocal = kaldi_root / 
"src/fstbin/fstrmepslocal" - fstminimizeencoded = kaldi_root / "src/fstbin/fstminimizeencoded" - - try: - with open(hla_graph, "wb") as out_f: - res = subprocess.run( - [ - fsttablecompose, - h_graph, - l_graph, - ], - capture_output=True, - check=True, - ) - res = subprocess.run( - [fstdeterminizestar, "--use-log=true"], - input=res.stdout, - capture_output=True, - check=True, - ) - res = subprocess.run( - [fstrmsymbols, disambig_in_words_file_int], - input=res.stdout, - capture_output=True, - check=True, - ) - res = subprocess.run( - [fstrmepslocal], - input=res.stdout, - capture_output=True, - check=True, - ) - res = subprocess.run( - [fstminimizeencoded], - input=res.stdout, - capture_output=True, - check=True, - ) - out_f.write(res.stdout) - except subprocess.CalledProcessError as e: - logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}") - os.remove(hla_graph) - raise - - return hla_graph - - -def create_HLG( - kaldi_root: Path, - fst_dir: Path, - unique_label: str, - hlga_graph: Path, - prefix: str = "HLG", -) -> Path: - hlg_graph = fst_dir / f"{prefix}.{unique_label}.fst" - - if not hlg_graph.exists(): - logger.info(f"Creating {hlg_graph}") - - add_self_loop = script_dir / "add-self-loop-simple" - kaldi_src = kaldi_root / "src" - kaldi_lib = kaldi_src / "lib" - - try: - if not add_self_loop.exists(): - fst_include = kaldi_root / "tools/openfst-1.6.7/include" - add_self_loop_src = script_dir / "add-self-loop-simple.cc" - - subprocess.run( - [ - "c++", - f"-I{kaldi_src}", - f"-I{fst_include}", - f"-L{kaldi_lib}", - add_self_loop_src, - "-lkaldi-base", - "-lkaldi-fstext", - "-o", - add_self_loop, - ], - check=True, - ) - - my_env = os.environ.copy() - my_env["LD_LIBRARY_PATH"] = f"{kaldi_lib}:{my_env['LD_LIBRARY_PATH']}" - - subprocess.run( - [ - add_self_loop, - hlga_graph, - hlg_graph, - ], - check=True, - capture_output=True, - env=my_env, - ) - except subprocess.CalledProcessError as e: - logger.error(f"cmd: {e.cmd}, err: {e.stderr.decode('utf-8')}") - raise - - return hlg_graph - - -def initalize_kaldi(cfg: KaldiInitializerConfig) -> Path: - if cfg.fst_dir is None: - cfg.fst_dir = osp.join(cfg.data_dir, "kaldi") - if cfg.out_labels is None: - cfg.out_labels = cfg.in_labels - - kaldi_root = Path(cfg.kaldi_root) - data_dir = Path(cfg.data_dir) - fst_dir = Path(cfg.fst_dir) - fst_dir.mkdir(parents=True, exist_ok=True) - - arpa_base = osp.splitext(osp.basename(cfg.lm_arpa))[0] - unique_label = f"{cfg.in_labels}.{arpa_base}" - - with open(data_dir / f"dict.{cfg.in_labels}.txt", "r") as f: - vocab = Dictionary.load(f) - - in_units_file = create_units(fst_dir, cfg.in_labels, vocab) - - grammar_graph, out_words_file = create_G( - kaldi_root, fst_dir, Path(cfg.lm_arpa), arpa_base - ) - - disambig_lexicon_file, disambig_L_in_units_file = create_lexicon( - cfg, fst_dir, unique_label, in_units_file, out_words_file - ) - - h_graph, h_out_units_file, disambig_in_units_file_int = create_H( - kaldi_root, - fst_dir, - disambig_L_in_units_file, - cfg.in_labels, - vocab, - cfg.blank_symbol, - cfg.silence_symbol, - ) - lexicon_graph = create_L( - kaldi_root, - fst_dir, - unique_label, - disambig_lexicon_file, - disambig_L_in_units_file, - out_words_file, - ) - lg_graph = create_LG( - kaldi_root, fst_dir, unique_label, lexicon_graph, grammar_graph - ) - hlga_graph = create_HLGa( - kaldi_root, fst_dir, unique_label, h_graph, lg_graph, disambig_in_units_file_int - ) - hlg_graph = create_HLG(kaldi_root, fst_dir, unique_label, hlga_graph) - - # for debugging - # hla_graph = 
create_HLa(kaldi_root, fst_dir, unique_label, h_graph, lexicon_graph, disambig_in_units_file_int) - # hl_graph = create_HLG(kaldi_root, fst_dir, unique_label, hla_graph, prefix="HL_looped") - # create_HLG(kaldi_root, fst_dir, "phnc", h_graph, prefix="H_looped") - - return hlg_graph - - -@hydra.main(config_path=config_path, config_name="kaldi_initializer") -def cli_main(cfg: KaldiInitializerConfig) -> None: - container = OmegaConf.to_container(cfg, resolve=True, enum_to_str=True) - cfg = OmegaConf.create(container) - OmegaConf.set_struct(cfg, True) - initalize_kaldi(cfg) - - -if __name__ == "__main__": - - logging.root.setLevel(logging.INFO) - logging.basicConfig(level=logging.INFO) - - try: - from hydra._internal.utils import ( - get_args, - ) # pylint: disable=import-outside-toplevel - - cfg_name = get_args().config_name or "kaldi_initializer" - except ImportError: - logger.warning("Failed to get config name from hydra args") - cfg_name = "kaldi_initializer" - - cs = ConfigStore.instance() - cs.store(name=cfg_name, node=KaldiInitializerConfig) - - cli_main() diff --git a/kosmos-g/fairseq/examples/speech_recognition/models/__init__.py b/kosmos-g/fairseq/examples/speech_recognition/models/__init__.py deleted file mode 100644 index 54b5a1c31..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/models/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -import importlib -import os - - -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - model_name = file[: file.find(".py")] - importlib.import_module("examples.speech_recognition.models." + model_name) diff --git a/kosmos-g/fairseq/examples/speech_recognition/models/vggtransformer.py b/kosmos-g/fairseq/examples/speech_recognition/models/vggtransformer.py deleted file mode 100644 index bca0ae59a..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/models/vggtransformer.py +++ /dev/null @@ -1,1020 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
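For reference, the Kaldi initializer above can also be driven without the Hydra CLI. A minimal sketch, assuming the module path `examples.speech_recognition.kaldi.kaldi_initializer` and placeholder paths (none of these values come from this repository):

```python
# Hypothetical driver for initalize_kaldi (spelling follows the source);
# every path below is a placeholder.
from examples.speech_recognition.kaldi.kaldi_initializer import (
    KaldiInitializerConfig,
    initalize_kaldi,
)

cfg = KaldiInitializerConfig(
    data_dir="/data/ltr",      # must contain dict.<in_labels>.txt
    in_labels="ltr",
    lm_arpa="/lm/4gram.arpa",  # ARPA LM that create_G compiles into G.fst
    kaldi_root="/opt/kaldi",
)
hlg = initalize_kaldi(cfg)     # chains G -> L -> H -> HLGa -> HLG
print(hlg)                     # Path to HLG.<in_labels>.<lm>.fst
```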
- -import argparse -import math -from collections.abc import Iterable - -import torch -import torch.nn as nn -from examples.speech_recognition.data.data_utils import lengths_to_encoder_padding_mask -from fairseq import utils -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqEncoderModel, - FairseqIncrementalDecoder, - register_model, - register_model_architecture, -) -from fairseq.modules import ( - LinearizedConvolution, - TransformerDecoderLayer, - TransformerEncoderLayer, - VGGBlock, -) - - -@register_model("asr_vggtransformer") -class VGGTransformerModel(FairseqEncoderDecoderModel): - """ - Transformers with convolutional context for ASR - https://arxiv.org/abs/1904.11660 - """ - - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--input-feat-per-channel", - type=int, - metavar="N", - help="encoder input dimension per input channel", - ) - parser.add_argument( - "--vggblock-enc-config", - type=str, - metavar="EXPR", - help=""" - an array of tuples each containing the configuration of one vggblock: - [(out_channels, - conv_kernel_size, - pooling_kernel_size, - num_conv_layers, - use_layer_norm), ...]) - """, - ) - parser.add_argument( - "--transformer-enc-config", - type=str, - metavar="EXPR", - help="""" - a tuple containing the configuration of the encoder transformer layers - configurations: - [(input_dim, - num_heads, - ffn_dim, - normalize_before, - dropout, - attention_dropout, - relu_dropout), ...]') - """, - ) - parser.add_argument( - "--enc-output-dim", - type=int, - metavar="N", - help=""" - encoder output dimension, can be None. If specified, projecting the - transformer output to the specified dimension""", - ) - parser.add_argument( - "--in-channels", - type=int, - metavar="N", - help="number of encoder input channels", - ) - parser.add_argument( - "--tgt-embed-dim", - type=int, - metavar="N", - help="embedding dimension of the decoder target tokens", - ) - parser.add_argument( - "--transformer-dec-config", - type=str, - metavar="EXPR", - help=""" - a tuple containing the configuration of the decoder transformer layers - configurations: - [(input_dim, - num_heads, - ffn_dim, - normalize_before, - dropout, - attention_dropout, - relu_dropout), ...] 
- """, - ) - parser.add_argument( - "--conv-dec-config", - type=str, - metavar="EXPR", - help=""" - an array of tuples for the decoder 1-D convolution config - [(out_channels, conv_kernel_size, use_layer_norm), ...]""", - ) - - @classmethod - def build_encoder(cls, args, task): - return VGGTransformerEncoder( - input_feat_per_channel=args.input_feat_per_channel, - vggblock_config=eval(args.vggblock_enc_config), - transformer_config=eval(args.transformer_enc_config), - encoder_output_dim=args.enc_output_dim, - in_channels=args.in_channels, - ) - - @classmethod - def build_decoder(cls, args, task): - return TransformerDecoder( - dictionary=task.target_dictionary, - embed_dim=args.tgt_embed_dim, - transformer_config=eval(args.transformer_dec_config), - conv_config=eval(args.conv_dec_config), - encoder_output_dim=args.enc_output_dim, - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure that all args are properly defaulted - # (in case there are any new ones) - base_architecture(args) - - encoder = cls.build_encoder(args, task) - decoder = cls.build_decoder(args, task) - return cls(encoder, decoder) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - # net_output['encoder_out'] is a (B, T, D) tensor - lprobs = super().get_normalized_probs(net_output, log_probs, sample) - lprobs.batch_first = True - return lprobs - - -DEFAULT_ENC_VGGBLOCK_CONFIG = ((32, 3, 2, 2, False),) * 2 -DEFAULT_ENC_TRANSFORMER_CONFIG = ((256, 4, 1024, True, 0.2, 0.2, 0.2),) * 2 -# 256: embedding dimension -# 4: number of heads -# 1024: FFN -# True: apply layerNorm before (dropout + resiaul) instead of after -# 0.2 (dropout): dropout after MultiheadAttention and second FC -# 0.2 (attention_dropout): dropout in MultiheadAttention -# 0.2 (relu_dropout): dropout after ReLu -DEFAULT_DEC_TRANSFORMER_CONFIG = ((256, 2, 1024, True, 0.2, 0.2, 0.2),) * 2 -DEFAULT_DEC_CONV_CONFIG = ((256, 3, True),) * 2 - - -# TODO: repace transformer encoder config from one liner -# to explicit args to get rid of this transformation -def prepare_transformer_encoder_params( - input_dim, - num_heads, - ffn_dim, - normalize_before, - dropout, - attention_dropout, - relu_dropout, -): - args = argparse.Namespace() - args.encoder_embed_dim = input_dim - args.encoder_attention_heads = num_heads - args.attention_dropout = attention_dropout - args.dropout = dropout - args.activation_dropout = relu_dropout - args.encoder_normalize_before = normalize_before - args.encoder_ffn_embed_dim = ffn_dim - return args - - -def prepare_transformer_decoder_params( - input_dim, - num_heads, - ffn_dim, - normalize_before, - dropout, - attention_dropout, - relu_dropout, -): - args = argparse.Namespace() - args.encoder_embed_dim = None - args.decoder_embed_dim = input_dim - args.decoder_attention_heads = num_heads - args.attention_dropout = attention_dropout - args.dropout = dropout - args.activation_dropout = relu_dropout - args.decoder_normalize_before = normalize_before - args.decoder_ffn_embed_dim = ffn_dim - return args - - -class VGGTransformerEncoder(FairseqEncoder): - """VGG + Transformer encoder""" - - def __init__( - self, - input_feat_per_channel, - vggblock_config=DEFAULT_ENC_VGGBLOCK_CONFIG, - transformer_config=DEFAULT_ENC_TRANSFORMER_CONFIG, - encoder_output_dim=512, - in_channels=1, - transformer_context=None, - transformer_sampling=None, - ): - """constructor for VGGTransformerEncoder - - Args: - - input_feat_per_channel: feature dim (not including stacked, - just base feature) - - 
in_channel: # input channels (e.g., if stack 8 feature vector - together, this is 8) - - vggblock_config: configuration of vggblock, see comments on - DEFAULT_ENC_VGGBLOCK_CONFIG - - transformer_config: configuration of transformer layer, see comments - on DEFAULT_ENC_TRANSFORMER_CONFIG - - encoder_output_dim: final transformer output embedding dimension - - transformer_context: (left, right) if set, self-attention will be focused - on (t-left, t+right) - - transformer_sampling: an iterable of int, must match with - len(transformer_config), transformer_sampling[i] indicates sampling - factor for i-th transformer layer, after multihead att and feedfoward - part - """ - super().__init__(None) - - self.num_vggblocks = 0 - if vggblock_config is not None: - if not isinstance(vggblock_config, Iterable): - raise ValueError("vggblock_config is not iterable") - self.num_vggblocks = len(vggblock_config) - - self.conv_layers = nn.ModuleList() - self.in_channels = in_channels - self.input_dim = input_feat_per_channel - self.pooling_kernel_sizes = [] - - if vggblock_config is not None: - for _, config in enumerate(vggblock_config): - ( - out_channels, - conv_kernel_size, - pooling_kernel_size, - num_conv_layers, - layer_norm, - ) = config - self.conv_layers.append( - VGGBlock( - in_channels, - out_channels, - conv_kernel_size, - pooling_kernel_size, - num_conv_layers, - input_dim=input_feat_per_channel, - layer_norm=layer_norm, - ) - ) - self.pooling_kernel_sizes.append(pooling_kernel_size) - in_channels = out_channels - input_feat_per_channel = self.conv_layers[-1].output_dim - - transformer_input_dim = self.infer_conv_output_dim( - self.in_channels, self.input_dim - ) - # transformer_input_dim is the output dimension of VGG part - - self.validate_transformer_config(transformer_config) - self.transformer_context = self.parse_transformer_context(transformer_context) - self.transformer_sampling = self.parse_transformer_sampling( - transformer_sampling, len(transformer_config) - ) - - self.transformer_layers = nn.ModuleList() - - if transformer_input_dim != transformer_config[0][0]: - self.transformer_layers.append( - Linear(transformer_input_dim, transformer_config[0][0]) - ) - self.transformer_layers.append( - TransformerEncoderLayer( - prepare_transformer_encoder_params(*transformer_config[0]) - ) - ) - - for i in range(1, len(transformer_config)): - if transformer_config[i - 1][0] != transformer_config[i][0]: - self.transformer_layers.append( - Linear(transformer_config[i - 1][0], transformer_config[i][0]) - ) - self.transformer_layers.append( - TransformerEncoderLayer( - prepare_transformer_encoder_params(*transformer_config[i]) - ) - ) - - self.encoder_output_dim = encoder_output_dim - self.transformer_layers.extend( - [ - Linear(transformer_config[-1][0], encoder_output_dim), - LayerNorm(encoder_output_dim), - ] - ) - - def forward(self, src_tokens, src_lengths, **kwargs): - """ - src_tokens: padded tensor (B, T, C * feat) - src_lengths: tensor of original lengths of input utterances (B,) - """ - bsz, max_seq_len, _ = src_tokens.size() - x = src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim) - x = x.transpose(1, 2).contiguous() - # (B, C, T, feat) - - for layer_idx in range(len(self.conv_layers)): - x = self.conv_layers[layer_idx](x) - - bsz, _, output_seq_len, _ = x.size() - - # (B, C, T, feat) -> (B, T, C, feat) -> (T, B, C, feat) -> (T, B, C * feat) - x = x.transpose(1, 2).transpose(0, 1) - x = x.contiguous().view(output_seq_len, bsz, -1) - - input_lengths = src_lengths.clone() 
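-        # each VGG pooling stage divides the time axis by its pooling kernel
-        # size, so the true (unpadded) lengths are rescaled the same way below,
-        # e.g. two stages with kernel 2 map 11 frames to ceil(ceil(11/2)/2) = 3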
-        for s in self.pooling_kernel_sizes:
-            input_lengths = (input_lengths.float() / s).ceil().long()
-
-        encoder_padding_mask, _ = lengths_to_encoder_padding_mask(
-            input_lengths, batch_first=True
-        )
-        if not encoder_padding_mask.any():
-            encoder_padding_mask = None
-
-        subsampling_factor = int(max_seq_len * 1.0 / output_seq_len + 0.5)
-        attn_mask = self.lengths_to_attn_mask(input_lengths, subsampling_factor)
-
-        transformer_layer_idx = 0
-
-        for layer_idx in range(len(self.transformer_layers)):
-
-            if isinstance(self.transformer_layers[layer_idx], TransformerEncoderLayer):
-                x = self.transformer_layers[layer_idx](
-                    x, encoder_padding_mask, attn_mask
-                )
-
-                if self.transformer_sampling[transformer_layer_idx] != 1:
-                    sampling_factor = self.transformer_sampling[transformer_layer_idx]
-                    x, encoder_padding_mask, attn_mask = self.slice(
-                        x, encoder_padding_mask, attn_mask, sampling_factor
-                    )
-
-                transformer_layer_idx += 1
-
-            else:
-                x = self.transformer_layers[layer_idx](x)
-
-        # encoder_padding_mask is a (T x B) tensor, its [t, b] elements indicate
-        # whether encoder_output[t, b] is valid or not (valid=0, invalid=1)
-
-        return {
-            "encoder_out": x,  # (T, B, C)
-            "encoder_padding_mask": encoder_padding_mask.t()
-            if encoder_padding_mask is not None
-            else None,
-            # (B, T) --> (T, B)
-        }
-
-    def infer_conv_output_dim(self, in_channels, input_dim):
-        sample_seq_len = 200
-        sample_bsz = 10
-        x = torch.randn(sample_bsz, in_channels, sample_seq_len, input_dim)
-        for i, _ in enumerate(self.conv_layers):
-            x = self.conv_layers[i](x)
-        x = x.transpose(1, 2)
-        mb, seq = x.size()[:2]
-        return x.contiguous().view(mb, seq, -1).size(-1)
-
-    def validate_transformer_config(self, transformer_config):
-        for config in transformer_config:
-            input_dim, num_heads = config[:2]
-            if input_dim % num_heads != 0:
-                msg = (
-                    "ERROR in transformer config {}: ".format(config)
-                    + "input dimension {} ".format(input_dim)
-                    + "not divisible by number of heads {}".format(num_heads)
-                )
-                raise ValueError(msg)
-
-    def parse_transformer_context(self, transformer_context):
-        """
-        transformer_context can be the following:
-        - None; indicates no context is used, i.e.,
-          transformer can access full context
-        - a tuple/list of two int; indicates left and right context,
-          any number <0 indicates infinite context
-            * e.g., (5, 6) indicates that for query at x_t, transformer can
-              access [t-5, t+6] (inclusive)
-            * e.g., (-1, 6) indicates that for query at x_t, transformer can
-              access [0, t+6] (inclusive)
-        """
-        if transformer_context is None:
-            return None
-
-        if not isinstance(transformer_context, Iterable):
-            raise ValueError("transformer context must be Iterable if it is not None")
-
-        if len(transformer_context) != 2:
-            raise ValueError("transformer context must have length 2")
-
-        left_context = transformer_context[0]
-        if left_context < 0:
-            left_context = None
-
-        right_context = transformer_context[1]
-        if right_context < 0:
-            right_context = None
-
-        if left_context is None and right_context is None:
-            return None
-
-        return (left_context, right_context)
-
-    def parse_transformer_sampling(self, transformer_sampling, num_layers):
-        """
-        parsing transformer sampling configuration
-
-        Args:
-        - transformer_sampling, accepted input:
-            * None, indicating no sampling
-            * an Iterable with int (>0) as element
-        - num_layers, expected number of transformer layers, must match with
-          the length of transformer_sampling if it is not None
-
-        Returns:
-        - A tuple with length num_layers
-        """
-        if transformer_sampling is None:
-
return (1,) * num_layers - - if not isinstance(transformer_sampling, Iterable): - raise ValueError( - "transformer_sampling must be an iterable if it is not None" - ) - - if len(transformer_sampling) != num_layers: - raise ValueError( - "transformer_sampling {} does not match with the number " - "of layers {}".format(transformer_sampling, num_layers) - ) - - for layer, value in enumerate(transformer_sampling): - if not isinstance(value, int): - raise ValueError("Invalid value in transformer_sampling: ") - if value < 1: - raise ValueError( - "{} layer's subsampling is {}.".format(layer, value) - + " This is not allowed! " - ) - return transformer_sampling - - def slice(self, embedding, padding_mask, attn_mask, sampling_factor): - """ - embedding is a (T, B, D) tensor - padding_mask is a (B, T) tensor or None - attn_mask is a (T, T) tensor or None - """ - embedding = embedding[::sampling_factor, :, :] - if padding_mask is not None: - padding_mask = padding_mask[:, ::sampling_factor] - if attn_mask is not None: - attn_mask = attn_mask[::sampling_factor, ::sampling_factor] - - return embedding, padding_mask, attn_mask - - def lengths_to_attn_mask(self, input_lengths, subsampling_factor=1): - """ - create attention mask according to sequence lengths and transformer - context - - Args: - - input_lengths: (B, )-shape Int/Long tensor; input_lengths[b] is - the length of b-th sequence - - subsampling_factor: int - * Note that the left_context and right_context is specified in - the input frame-level while input to transformer may already - go through subsampling (e.g., the use of striding in vggblock) - we use subsampling_factor to scale the left/right context - - Return: - - a (T, T) binary tensor or None, where T is max(input_lengths) - * if self.transformer_context is None, None - * if left_context is None, - * attn_mask[t, t + right_context + 1:] = 1 - * others = 0 - * if right_context is None, - * attn_mask[t, 0:t - left_context] = 1 - * others = 0 - * elsif - * attn_mask[t, t - left_context: t + right_context + 1] = 0 - * others = 1 - """ - if self.transformer_context is None: - return None - - maxT = torch.max(input_lengths).item() - attn_mask = torch.zeros(maxT, maxT) - - left_context = self.transformer_context[0] - right_context = self.transformer_context[1] - if left_context is not None: - left_context = math.ceil(self.transformer_context[0] / subsampling_factor) - if right_context is not None: - right_context = math.ceil(self.transformer_context[1] / subsampling_factor) - - for t in range(maxT): - if left_context is not None: - st = 0 - en = max(st, t - left_context) - attn_mask[t, st:en] = 1 - if right_context is not None: - st = t + right_context + 1 - st = min(st, maxT - 1) - attn_mask[t, st:] = 1 - - return attn_mask.to(input_lengths.device) - - def reorder_encoder_out(self, encoder_out, new_order): - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - if encoder_out["encoder_padding_mask"] is not None: - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(1, new_order) - return encoder_out - - -class TransformerDecoder(FairseqIncrementalDecoder): - """ - Transformer decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`TransformerDecoderLayer`. 
- Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - no_encoder_attn (bool, optional): whether to attend to encoder outputs. - Default: ``False`` - left_pad (bool, optional): whether the input is left-padded. Default: - ``False`` - """ - - def __init__( - self, - dictionary, - embed_dim=512, - transformer_config=DEFAULT_ENC_TRANSFORMER_CONFIG, - conv_config=DEFAULT_DEC_CONV_CONFIG, - encoder_output_dim=512, - ): - - super().__init__(dictionary) - vocab_size = len(dictionary) - self.padding_idx = dictionary.pad() - self.embed_tokens = Embedding(vocab_size, embed_dim, self.padding_idx) - - self.conv_layers = nn.ModuleList() - for i in range(len(conv_config)): - out_channels, kernel_size, layer_norm = conv_config[i] - if i == 0: - conv_layer = LinearizedConv1d( - embed_dim, out_channels, kernel_size, padding=kernel_size - 1 - ) - else: - conv_layer = LinearizedConv1d( - conv_config[i - 1][0], - out_channels, - kernel_size, - padding=kernel_size - 1, - ) - self.conv_layers.append(conv_layer) - if layer_norm: - self.conv_layers.append(nn.LayerNorm(out_channels)) - self.conv_layers.append(nn.ReLU()) - - self.layers = nn.ModuleList() - if conv_config[-1][0] != transformer_config[0][0]: - self.layers.append(Linear(conv_config[-1][0], transformer_config[0][0])) - self.layers.append( - TransformerDecoderLayer( - prepare_transformer_decoder_params(*transformer_config[0]) - ) - ) - - for i in range(1, len(transformer_config)): - if transformer_config[i - 1][0] != transformer_config[i][0]: - self.layers.append( - Linear(transformer_config[i - 1][0], transformer_config[i][0]) - ) - self.layers.append( - TransformerDecoderLayer( - prepare_transformer_decoder_params(*transformer_config[i]) - ) - ) - self.fc_out = Linear(transformer_config[-1][0], vocab_size) - - def forward(self, prev_output_tokens, encoder_out=None, incremental_state=None): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for input feeding/teacher forcing - encoder_out (Tensor, optional): output from the encoder, used for - encoder-side attention - incremental_state (dict): dictionary used for storing state during - :ref:`Incremental decoding` - Returns: - tuple: - - the last decoder layer's output of shape `(batch, tgt_len, - vocab)` - - the last decoder layer's attention weights of shape `(batch, - tgt_len, src_len)` - """ - target_padding_mask = ( - (prev_output_tokens == self.padding_idx).to(prev_output_tokens.device) - if incremental_state is None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - - # embed tokens - x = self.embed_tokens(prev_output_tokens) - - # B x T x C -> T x B x C - x = self._transpose_if_training(x, incremental_state) - - for layer in self.conv_layers: - if isinstance(layer, LinearizedConvolution): - x = layer(x, incremental_state) - else: - x = layer(x) - - # B x T x C -> T x B x C - x = self._transpose_if_inference(x, incremental_state) - - # decoder layers - for layer in self.layers: - if isinstance(layer, TransformerDecoderLayer): - x, *_ = layer( - x, - (encoder_out["encoder_out"] if encoder_out is not None else None), - ( - encoder_out["encoder_padding_mask"].t() - if encoder_out["encoder_padding_mask"] is not None - else None - ), - incremental_state, - self_attn_mask=( - self.buffered_future_mask(x) - if incremental_state is None - else None - ), - 
self_attn_padding_mask=( - target_padding_mask if incremental_state is None else None - ), - ) - else: - x = layer(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - x = self.fc_out(x) - - return x, None - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - if self._future_mask.size(0) < dim: - self._future_mask = torch.triu( - utils.fill_with_neg_inf(self._future_mask.resize_(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - def _transpose_if_training(self, x, incremental_state): - if incremental_state is None: - x = x.transpose(0, 1) - return x - - def _transpose_if_inference(self, x, incremental_state): - if incremental_state: - x = x.transpose(0, 1) - return x - - -@register_model("asr_vggtransformer_encoder") -class VGGTransformerEncoderModel(FairseqEncoderModel): - def __init__(self, encoder): - super().__init__(encoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--input-feat-per-channel", - type=int, - metavar="N", - help="encoder input dimension per input channel", - ) - parser.add_argument( - "--vggblock-enc-config", - type=str, - metavar="EXPR", - help=""" - an array of tuples each containing the configuration of one vggblock - [(out_channels, conv_kernel_size, pooling_kernel_size,num_conv_layers), ...] - """, - ) - parser.add_argument( - "--transformer-enc-config", - type=str, - metavar="EXPR", - help=""" - a tuple containing the configuration of the Transformer layers - configurations: - [(input_dim, - num_heads, - ffn_dim, - normalize_before, - dropout, - attention_dropout, - relu_dropout), ]""", - ) - parser.add_argument( - "--enc-output-dim", - type=int, - metavar="N", - help="encoder output dimension, projecting the LSTM output", - ) - parser.add_argument( - "--in-channels", - type=int, - metavar="N", - help="number of encoder input channels", - ) - parser.add_argument( - "--transformer-context", - type=str, - metavar="EXPR", - help=""" - either None or a tuple of two ints, indicating left/right context a - transformer can have access to""", - ) - parser.add_argument( - "--transformer-sampling", - type=str, - metavar="EXPR", - help=""" - either None or a tuple of ints, indicating sampling factor in each layer""", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - base_architecture_enconly(args) - encoder = VGGTransformerEncoderOnly( - vocab_size=len(task.target_dictionary), - input_feat_per_channel=args.input_feat_per_channel, - vggblock_config=eval(args.vggblock_enc_config), - transformer_config=eval(args.transformer_enc_config), - encoder_output_dim=args.enc_output_dim, - in_channels=args.in_channels, - transformer_context=eval(args.transformer_context), - transformer_sampling=eval(args.transformer_sampling), - ) - return cls(encoder) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - # net_output['encoder_out'] is a (T, B, D) tensor - lprobs = super().get_normalized_probs(net_output, log_probs, sample) - # lprobs is a (T, B, D) tensor - # we need to transoose to get (B, T, D) tensor - lprobs = lprobs.transpose(0, 1).contiguous() - lprobs.batch_first = True - return lprobs - - -class VGGTransformerEncoderOnly(VGGTransformerEncoder): - def __init__( - self, - vocab_size, - 
input_feat_per_channel, - vggblock_config=DEFAULT_ENC_VGGBLOCK_CONFIG, - transformer_config=DEFAULT_ENC_TRANSFORMER_CONFIG, - encoder_output_dim=512, - in_channels=1, - transformer_context=None, - transformer_sampling=None, - ): - super().__init__( - input_feat_per_channel=input_feat_per_channel, - vggblock_config=vggblock_config, - transformer_config=transformer_config, - encoder_output_dim=encoder_output_dim, - in_channels=in_channels, - transformer_context=transformer_context, - transformer_sampling=transformer_sampling, - ) - self.fc_out = Linear(self.encoder_output_dim, vocab_size) - - def forward(self, src_tokens, src_lengths, **kwargs): - """ - src_tokens: padded tensor (B, T, C * feat) - src_lengths: tensor of original lengths of input utterances (B,) - """ - - enc_out = super().forward(src_tokens, src_lengths) - x = self.fc_out(enc_out["encoder_out"]) - # x = F.log_softmax(x, dim=-1) - # Note: no need this line, because model.get_normalized_prob will call - # log_softmax - return { - "encoder_out": x, # (T, B, C) - "encoder_padding_mask": enc_out["encoder_padding_mask"], # (T, B) - } - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return (1e6, 1e6) # an arbitrary large number - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - # nn.init.uniform_(m.weight, -0.1, 0.1) - # nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, bias=True, dropout=0): - """Linear layer (input: N x T x C)""" - m = nn.Linear(in_features, out_features, bias=bias) - # m.weight.data.uniform_(-0.1, 0.1) - # if bias: - # m.bias.data.uniform_(-0.1, 0.1) - return m - - -def LinearizedConv1d(in_channels, out_channels, kernel_size, dropout=0, **kwargs): - """Weight-normalized Conv1d layer optimized for decoding""" - m = LinearizedConvolution(in_channels, out_channels, kernel_size, **kwargs) - std = math.sqrt((4 * (1.0 - dropout)) / (m.kernel_size[0] * in_channels)) - nn.init.normal_(m.weight, mean=0, std=std) - nn.init.constant_(m.bias, 0) - return nn.utils.weight_norm(m, dim=2) - - -def LayerNorm(embedding_dim): - m = nn.LayerNorm(embedding_dim) - return m - - -# seq2seq models -def base_architecture(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 40) - args.vggblock_enc_config = getattr( - args, "vggblock_enc_config", DEFAULT_ENC_VGGBLOCK_CONFIG - ) - args.transformer_enc_config = getattr( - args, "transformer_enc_config", DEFAULT_ENC_TRANSFORMER_CONFIG - ) - args.enc_output_dim = getattr(args, "enc_output_dim", 512) - args.in_channels = getattr(args, "in_channels", 1) - args.tgt_embed_dim = getattr(args, "tgt_embed_dim", 128) - args.transformer_dec_config = getattr( - args, "transformer_dec_config", DEFAULT_ENC_TRANSFORMER_CONFIG - ) - args.conv_dec_config = getattr(args, "conv_dec_config", DEFAULT_DEC_CONV_CONFIG) - args.transformer_context = getattr(args, "transformer_context", "None") - - -@register_model_architecture("asr_vggtransformer", "vggtransformer_1") -def vggtransformer_1(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - args.vggblock_enc_config = getattr( - args, "vggblock_enc_config", "[(64, 3, 2, 2, True), (128, 3, 2, 2, True)]" - ) - args.transformer_enc_config = getattr( - args, - "transformer_enc_config", - "((1024, 16, 4096, True, 0.15, 0.15, 0.15),) * 14", - ) - args.enc_output_dim = getattr(args, "enc_output_dim", 1024) - args.tgt_embed_dim = getattr(args, 
"tgt_embed_dim", 128) - args.conv_dec_config = getattr(args, "conv_dec_config", "((256, 3, True),) * 4") - args.transformer_dec_config = getattr( - args, - "transformer_dec_config", - "((1024, 16, 4096, True, 0.15, 0.15, 0.15),) * 4", - ) - - -@register_model_architecture("asr_vggtransformer", "vggtransformer_2") -def vggtransformer_2(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - args.vggblock_enc_config = getattr( - args, "vggblock_enc_config", "[(64, 3, 2, 2, True), (128, 3, 2, 2, True)]" - ) - args.transformer_enc_config = getattr( - args, - "transformer_enc_config", - "((1024, 16, 4096, True, 0.15, 0.15, 0.15),) * 16", - ) - args.enc_output_dim = getattr(args, "enc_output_dim", 1024) - args.tgt_embed_dim = getattr(args, "tgt_embed_dim", 512) - args.conv_dec_config = getattr(args, "conv_dec_config", "((256, 3, True),) * 4") - args.transformer_dec_config = getattr( - args, - "transformer_dec_config", - "((1024, 16, 4096, True, 0.15, 0.15, 0.15),) * 6", - ) - - -@register_model_architecture("asr_vggtransformer", "vggtransformer_base") -def vggtransformer_base(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - args.vggblock_enc_config = getattr( - args, "vggblock_enc_config", "[(64, 3, 2, 2, True), (128, 3, 2, 2, True)]" - ) - args.transformer_enc_config = getattr( - args, "transformer_enc_config", "((512, 8, 2048, True, 0.15, 0.15, 0.15),) * 12" - ) - - args.enc_output_dim = getattr(args, "enc_output_dim", 512) - args.tgt_embed_dim = getattr(args, "tgt_embed_dim", 512) - args.conv_dec_config = getattr(args, "conv_dec_config", "((256, 3, True),) * 4") - args.transformer_dec_config = getattr( - args, "transformer_dec_config", "((512, 8, 2048, True, 0.15, 0.15, 0.15),) * 6" - ) - # Size estimations: - # Encoder: - # - vggblock param: 64*1*3*3 + 64*64*3*3 + 128*64*3*3 + 128*128*3 = 258K - # Transformer: - # - input dimension adapter: 2560 x 512 -> 1.31M - # - transformer_layers (x12) --> 37.74M - # * MultiheadAttention: 512*512*3 (in_proj) + 512*512 (out_proj) = 1.048M - # * FFN weight: 512*2048*2 = 2.097M - # - output dimension adapter: 512 x 512 -> 0.26 M - # Decoder: - # - LinearizedConv1d: 512 * 256 * 3 + 256 * 256 * 3 * 3 - # - transformer_layer: (x6) --> 25.16M - # * MultiheadAttention (self-attention): 512*512*3 + 512*512 = 1.048M - # * MultiheadAttention (encoder-attention): 512*512*3 + 512*512 = 1.048M - # * FFN: 512*2048*2 = 2.097M - # Final FC: - # - FC: 512*5000 = 256K (assuming vocab size 5K) - # In total: - # ~65 M - - -# CTC models -def base_architecture_enconly(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 40) - args.vggblock_enc_config = getattr( - args, "vggblock_enc_config", "[(32, 3, 2, 2, True)] * 2" - ) - args.transformer_enc_config = getattr( - args, "transformer_enc_config", "((256, 4, 1024, True, 0.2, 0.2, 0.2),) * 2" - ) - args.enc_output_dim = getattr(args, "enc_output_dim", 512) - args.in_channels = getattr(args, "in_channels", 1) - args.transformer_context = getattr(args, "transformer_context", "None") - args.transformer_sampling = getattr(args, "transformer_sampling", "None") - - -@register_model_architecture("asr_vggtransformer_encoder", "vggtransformer_enc_1") -def vggtransformer_enc_1(args): - # vggtransformer_1 is the same as vggtransformer_enc_big, except the number - # of layers is increased to 16 - # keep it here for backward compatiablity purpose - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - args.vggblock_enc_config = 
getattr( - args, "vggblock_enc_config", "[(64, 3, 2, 2, True), (128, 3, 2, 2, True)]" - ) - args.transformer_enc_config = getattr( - args, - "transformer_enc_config", - "((1024, 16, 4096, True, 0.15, 0.15, 0.15),) * 16", - ) - args.enc_output_dim = getattr(args, "enc_output_dim", 1024) diff --git a/kosmos-g/fairseq/examples/speech_recognition/models/w2l_conv_glu_enc.py b/kosmos-g/fairseq/examples/speech_recognition/models/w2l_conv_glu_enc.py deleted file mode 100644 index 655a9b0d1..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/models/w2l_conv_glu_enc.py +++ /dev/null @@ -1,177 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderModel, - register_model, - register_model_architecture, -) -from fairseq.modules.fairseq_dropout import FairseqDropout - - -default_conv_enc_config = """[ - (400, 13, 170, 0.2), - (440, 14, 0, 0.214), - (484, 15, 0, 0.22898), - (532, 16, 0, 0.2450086), - (584, 17, 0, 0.262159202), - (642, 18, 0, 0.28051034614), - (706, 19, 0, 0.30014607037), - (776, 20, 0, 0.321156295296), - (852, 21, 0, 0.343637235966), - (936, 22, 0, 0.367691842484), - (1028, 23, 0, 0.393430271458), - (1130, 24, 0, 0.42097039046), - (1242, 25, 0, 0.450438317792), - (1366, 26, 0, 0.481969000038), - (1502, 27, 0, 0.51570683004), - (1652, 28, 0, 0.551806308143), - (1816, 29, 0, 0.590432749713), -]""" - - -@register_model("asr_w2l_conv_glu_encoder") -class W2lConvGluEncoderModel(FairseqEncoderModel): - def __init__(self, encoder): - super().__init__(encoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--input-feat-per-channel", - type=int, - metavar="N", - help="encoder input dimension per input channel", - ) - parser.add_argument( - "--in-channels", - type=int, - metavar="N", - help="number of encoder input channels", - ) - parser.add_argument( - "--conv-enc-config", - type=str, - metavar="EXPR", - help=""" - an array of tuples each containing the configuration of one conv layer - [(out_channels, kernel_size, padding, dropout), ...] 
- """, - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - conv_enc_config = getattr(args, "conv_enc_config", default_conv_enc_config) - encoder = W2lConvGluEncoder( - vocab_size=len(task.target_dictionary), - input_feat_per_channel=args.input_feat_per_channel, - in_channels=args.in_channels, - conv_enc_config=eval(conv_enc_config), - ) - return cls(encoder) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - lprobs = super().get_normalized_probs(net_output, log_probs, sample) - lprobs.batch_first = False - return lprobs - - -class W2lConvGluEncoder(FairseqEncoder): - def __init__( - self, vocab_size, input_feat_per_channel, in_channels, conv_enc_config - ): - super().__init__(None) - - self.input_dim = input_feat_per_channel - if in_channels != 1: - raise ValueError("only 1 input channel is currently supported") - - self.conv_layers = nn.ModuleList() - self.linear_layers = nn.ModuleList() - self.dropouts = [] - cur_channels = input_feat_per_channel - - for out_channels, kernel_size, padding, dropout in conv_enc_config: - layer = nn.Conv1d(cur_channels, out_channels, kernel_size, padding=padding) - layer.weight.data.mul_(math.sqrt(3)) # match wav2letter init - self.conv_layers.append(nn.utils.weight_norm(layer)) - self.dropouts.append( - FairseqDropout(dropout, module_name=self.__class__.__name__) - ) - if out_channels % 2 != 0: - raise ValueError("odd # of out_channels is incompatible with GLU") - cur_channels = out_channels // 2 # halved by GLU - - for out_channels in [2 * cur_channels, vocab_size]: - layer = nn.Linear(cur_channels, out_channels) - layer.weight.data.mul_(math.sqrt(3)) - self.linear_layers.append(nn.utils.weight_norm(layer)) - cur_channels = out_channels // 2 - - def forward(self, src_tokens, src_lengths, **kwargs): - - """ - src_tokens: padded tensor (B, T, C * feat) - src_lengths: tensor of original lengths of input utterances (B,) - """ - B, T, _ = src_tokens.size() - x = src_tokens.transpose(1, 2).contiguous() # (B, feat, T) assuming C == 1 - - for layer_idx in range(len(self.conv_layers)): - x = self.conv_layers[layer_idx](x) - x = F.glu(x, dim=1) - x = self.dropouts[layer_idx](x) - - x = x.transpose(1, 2).contiguous() # (B, T, 908) - x = self.linear_layers[0](x) - x = F.glu(x, dim=2) - x = self.dropouts[-1](x) - x = self.linear_layers[1](x) - - assert x.size(0) == B - assert x.size(1) == T - - encoder_out = x.transpose(0, 1) # (T, B, vocab_size) - - # need to debug this -- find a simpler/elegant way in pytorch APIs - encoder_padding_mask = ( - torch.arange(T).view(1, T).expand(B, -1).to(x.device) - >= src_lengths.view(B, 1).expand(-1, T) - ).t() # (B x T) -> (T x B) - - return { - "encoder_out": encoder_out, # (T, B, vocab_size) - "encoder_padding_mask": encoder_padding_mask, # (T, B) - } - - def reorder_encoder_out(self, encoder_out, new_order): - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(1, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return (1e6, 1e6) # an arbitrary large number - - -@register_model_architecture("asr_w2l_conv_glu_encoder", "w2l_conv_glu_enc") -def w2l_conv_glu_enc(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - args.in_channels = getattr(args, "in_channels", 1) - args.conv_enc_config = getattr(args, "conv_enc_config", 
default_conv_enc_config) diff --git a/kosmos-g/fairseq/examples/speech_recognition/new/README.md b/kosmos-g/fairseq/examples/speech_recognition/new/README.md deleted file mode 100644 index 5fa0e9724..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/new/README.md +++ /dev/null @@ -1,43 +0,0 @@ -# Flashlight Decoder - -This script runs decoding for pre-trained speech recognition models. - -## Usage - -Assuming a few variables: - -```bash -checkpoint=<path-to-checkpoint> -data=<path-to-data-directory> -lm_model=<path-to-language-model> -lexicon=<path-to-lexicon> -``` - -Example usage for decoding a fine-tuned Wav2Vec model: - -```bash -python $FAIRSEQ_ROOT/examples/speech_recognition/new/infer.py --multirun \ - task=audio_pretraining \ - task.data=$data \ - task.labels=ltr \ - common_eval.path=$checkpoint \ - decoding.type=kenlm \ - decoding.lexicon=$lexicon \ - decoding.lmpath=$lm_model \ - dataset.gen_subset=dev_clean,dev_other,test_clean,test_other -``` - -Example usage for using Ax to sweep WER parameters (requires `pip install hydra-ax-sweeper`): - -```bash -python $FAIRSEQ_ROOT/examples/speech_recognition/new/infer.py --multirun \ - hydra/sweeper=ax \ - task=audio_pretraining \ - task.data=$data \ - task.labels=ltr \ - common_eval.path=$checkpoint \ - decoding.type=kenlm \ - decoding.lexicon=$lexicon \ - decoding.lmpath=$lm_model \ - dataset.gen_subset=dev_other -``` diff --git a/kosmos-g/fairseq/examples/speech_recognition/new/__init__.py b/kosmos-g/fairseq/examples/speech_recognition/new/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/kosmos-g/fairseq/examples/speech_recognition/new/conf/hydra/sweeper/ax.yaml b/kosmos-g/fairseq/examples/speech_recognition/new/conf/hydra/sweeper/ax.yaml deleted file mode 100644 index 9a6935b93..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/new/conf/hydra/sweeper/ax.yaml +++ /dev/null @@ -1,29 +0,0 @@ -# @package hydra.sweeper -_target_: hydra_plugins.hydra_ax_sweeper.ax_sweeper.AxSweeper -max_batch_size: null -ax_config: - max_trials: 128 - early_stop: - minimize: true - max_epochs_without_improvement: 10 - epsilon: 0.025 - experiment: - name: ${dataset.gen_subset} - objective_name: wer - minimize: true - parameter_constraints: null - outcome_constraints: null - status_quo: null - client: - verbose_logging: false - random_seed: null - params: - decoding.lmweight: - type: range - bounds: [0.0, 5.0] - decoding.wordscore: - type: range - bounds: [-5.0, 5.0] - decoding.silweight: - type: range - bounds: [ -8.0, 0.0 ] diff --git a/kosmos-g/fairseq/examples/speech_recognition/new/conf/infer.yaml b/kosmos-g/fairseq/examples/speech_recognition/new/conf/infer.yaml deleted file mode 100644 index 21dd19fad..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/new/conf/infer.yaml +++ /dev/null @@ -1,25 +0,0 @@ -# @package _group_ - -defaults: - - task: null - - model: null - -hydra: - run: - dir: ${common_eval.results_path}/${dataset.gen_subset} - sweep: - dir: /checkpoint/${env:USER}/${env:PREFIX}/${common_eval.results_path} - subdir: ${dataset.gen_subset} -common_eval: - results_path: null - path: null - post_process: letter - quiet: true -dataset: - max_tokens: 3000000 - gen_subset: test -distributed_training: - distributed_world_size: 1 -decoding: - beam: 5 - type: viterbi diff --git a/kosmos-g/fairseq/examples/speech_recognition/new/decoders/__init__.py b/kosmos-g/fairseq/examples/speech_recognition/new/decoders/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git 
a/kosmos-g/fairseq/examples/speech_recognition/new/decoders/base_decoder.py b/kosmos-g/fairseq/examples/speech_recognition/new/decoders/base_decoder.py deleted file mode 100644 index a097969b3..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/new/decoders/base_decoder.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import itertools as it -from typing import Any, Dict, List - -import torch -from fairseq.data.dictionary import Dictionary -from fairseq.models.fairseq_model import FairseqModel - - -class BaseDecoder: - def __init__(self, tgt_dict: Dictionary) -> None: - self.tgt_dict = tgt_dict - self.vocab_size = len(tgt_dict) - - self.blank = ( - tgt_dict.index("<ctc_blank>") - if "<ctc_blank>" in tgt_dict.indices - else tgt_dict.bos() - ) - if "<sep>" in tgt_dict.indices: - self.silence = tgt_dict.index("<sep>") - elif "|" in tgt_dict.indices: - self.silence = tgt_dict.index("|") - else: - self.silence = tgt_dict.eos() - - def generate( - self, models: List[FairseqModel], sample: Dict[str, Any], **unused - ) -> List[List[Dict[str, torch.LongTensor]]]: - encoder_input = { - k: v for k, v in sample["net_input"].items() if k != "prev_output_tokens" - } - emissions = self.get_emissions(models, encoder_input) - return self.decode(emissions) - - def get_emissions( - self, - models: List[FairseqModel], - encoder_input: Dict[str, Any], - ) -> torch.FloatTensor: - model = models[0] - encoder_out = model(**encoder_input) - if hasattr(model, "get_logits"): - emissions = model.get_logits(encoder_out) - else: - emissions = model.get_normalized_probs(encoder_out, log_probs=True) - return emissions.transpose(0, 1).float().cpu().contiguous() - - def get_tokens(self, idxs: torch.IntTensor) -> torch.LongTensor: - idxs = (g[0] for g in it.groupby(idxs)) - idxs = filter(lambda x: x != self.blank, idxs) - return torch.LongTensor(list(idxs)) - - def decode( - self, - emissions: torch.FloatTensor, - ) -> List[List[Dict[str, torch.LongTensor]]]: - raise NotImplementedError diff --git a/kosmos-g/fairseq/examples/speech_recognition/new/decoders/decoder.py b/kosmos-g/fairseq/examples/speech_recognition/new/decoders/decoder.py deleted file mode 100644 index b5bec8cf7..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/new/decoders/decoder.py +++ /dev/null @@ -1,32 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
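The collapse in `BaseDecoder.get_tokens` above is the standard CTC post-processing rule: merge consecutive repeated frame predictions first, then drop blanks, so a repeated output symbol survives only when a blank separates its frames. A small self-contained check of that ordering (the token IDs are made up):

```python
import itertools as it

import torch

blank = 0                                        # hypothetical blank ID
frame_idxs = [0, 3, 3, 0, 3, 5, 5, 0]            # per-frame argmax over emissions
merged = (g[0] for g in it.groupby(frame_idxs))  # -> 0, 3, 0, 3, 5, 0
tokens = torch.LongTensor([i for i in merged if i != blank])
assert tokens.tolist() == [3, 3, 5]              # both 3s survive: a blank split them
```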
- -from typing import Union - -from fairseq.data.dictionary import Dictionary - -from .decoder_config import DecoderConfig, FlashlightDecoderConfig -from .base_decoder import BaseDecoder - - -def Decoder( - cfg: Union[DecoderConfig, FlashlightDecoderConfig], tgt_dict: Dictionary -) -> BaseDecoder: - - if cfg.type == "viterbi": - from .viterbi_decoder import ViterbiDecoder - - return ViterbiDecoder(tgt_dict) - if cfg.type == "kenlm": - from .flashlight_decoder import KenLMDecoder - - return KenLMDecoder(cfg, tgt_dict) - if cfg.type == "fairseqlm": - from .flashlight_decoder import FairseqLMDecoder - - return FairseqLMDecoder(cfg, tgt_dict) - raise NotImplementedError(f"Invalid decoder type: {cfg.type}") diff --git a/kosmos-g/fairseq/examples/speech_recognition/new/decoders/decoder_config.py b/kosmos-g/fairseq/examples/speech_recognition/new/decoders/decoder_config.py deleted file mode 100644 index 659eb94a9..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/new/decoders/decoder_config.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from dataclasses import dataclass, field -from typing import Optional - -from fairseq.dataclass.configs import FairseqDataclass -from fairseq.dataclass.constants import ChoiceEnum -from omegaconf import MISSING - - -DECODER_CHOICES = ChoiceEnum(["viterbi", "kenlm", "fairseqlm"]) - - -@dataclass -class DecoderConfig(FairseqDataclass): - type: DECODER_CHOICES = field( - default="viterbi", - metadata={"help": "The type of decoder to use"}, - ) - - -@dataclass -class FlashlightDecoderConfig(FairseqDataclass): - nbest: int = field( - default=1, - metadata={"help": "Number of decodings to return"}, - ) - unitlm: bool = field( - default=False, - metadata={"help": "If set, use unit language model"}, - ) - lmpath: str = field( - default=MISSING, - metadata={"help": "Language model for KenLM decoder"}, - ) - lexicon: Optional[str] = field( - default=None, - metadata={"help": "Lexicon for Flashlight decoder"}, - ) - beam: int = field( - default=50, - metadata={"help": "Number of beams to use for decoding"}, - ) - beamthreshold: float = field( - default=50.0, - metadata={"help": "Threshold for beam search decoding"}, - ) - beamsizetoken: Optional[int] = field( - default=None, metadata={"help": "Beam size to use"} - ) - wordscore: float = field( - default=-1, - metadata={"help": "Word score for KenLM decoder"}, - ) - unkweight: float = field( - default=-math.inf, - metadata={"help": "Unknown weight for KenLM decoder"}, - ) - silweight: float = field( - default=0, - metadata={"help": "Silence weight for KenLM decoder"}, - ) - lmweight: float = field( - default=2, - metadata={"help": "Weight for LM while interpolating score"}, - ) diff --git a/kosmos-g/fairseq/examples/speech_recognition/new/decoders/flashlight_decoder.py b/kosmos-g/fairseq/examples/speech_recognition/new/decoders/flashlight_decoder.py deleted file mode 100644 index 38c7ac492..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/new/decoders/flashlight_decoder.py +++ /dev/null @@ -1,431 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree.
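Taken together with the configs above, `Decoder` is a plain factory keyed on `cfg.type`; only the Flashlight-backed decoders need the LM and lexicon fields. A hedged usage sketch (the dictionary file path is hypothetical):

```python
from fairseq.data.dictionary import Dictionary

from examples.speech_recognition.new.decoders.decoder import Decoder
from examples.speech_recognition.new.decoders.decoder_config import DecoderConfig

tgt_dict = Dictionary.load("dict.ltr.txt")  # hypothetical dictionary file
cfg = DecoderConfig(type="viterbi")  # no LM or lexicon needed for viterbi
decoder = Decoder(cfg, tgt_dict)  # returns a ViterbiDecoder
```

Note that the backend imports happen inside each branch, so the Flashlight bindings are only required when a Flashlight decoder is actually requested.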
- -import gc -import os.path as osp -import warnings -from collections import deque, namedtuple -from typing import Any, Dict, Tuple - -import numpy as np -import torch -from fairseq import tasks -from fairseq.data.dictionary import Dictionary -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.models.fairseq_model import FairseqModel -from fairseq.utils import apply_to_sample -from omegaconf import open_dict, OmegaConf - -from typing import List - -from .decoder_config import FlashlightDecoderConfig -from .base_decoder import BaseDecoder - -try: - from flashlight.lib.text.decoder import ( - LM, - CriterionType, - DecodeResult, - KenLM, - LexiconDecoder, - LexiconDecoderOptions, - LexiconFreeDecoder, - LexiconFreeDecoderOptions, - LMState, - SmearingMode, - Trie, - ) - from flashlight.lib.text.dictionary import create_word_dict, load_words -except ImportError: - warnings.warn( - "flashlight python bindings are required to use this functionality. " - "Please install from " - "https://github.com/facebookresearch/flashlight/tree/master/bindings/python" - ) - LM = object - LMState = object - - -class KenLMDecoder(BaseDecoder): - def __init__(self, cfg: FlashlightDecoderConfig, tgt_dict: Dictionary) -> None: - super().__init__(tgt_dict) - - self.nbest = cfg.nbest - self.unitlm = cfg.unitlm - - if cfg.lexicon: - self.lexicon = load_words(cfg.lexicon) - self.word_dict = create_word_dict(self.lexicon) - self.unk_word = self.word_dict.get_index("<unk>") - - self.lm = KenLM(cfg.lmpath, self.word_dict) - self.trie = Trie(self.vocab_size, self.silence) - - start_state = self.lm.start(False) - for word, spellings in self.lexicon.items(): - word_idx = self.word_dict.get_index(word) - _, score = self.lm.score(start_state, word_idx) - for spelling in spellings: - spelling_idxs = [tgt_dict.index(token) for token in spelling] - assert ( - tgt_dict.unk() not in spelling_idxs - ), f"{word} {spelling} {spelling_idxs}" - self.trie.insert(spelling_idxs, word_idx, score) - self.trie.smear(SmearingMode.MAX) - - self.decoder_opts = LexiconDecoderOptions( - beam_size=cfg.beam, - beam_size_token=cfg.beamsizetoken or len(tgt_dict), - beam_threshold=cfg.beamthreshold, - lm_weight=cfg.lmweight, - word_score=cfg.wordscore, - unk_score=cfg.unkweight, - sil_score=cfg.silweight, - log_add=False, - criterion_type=CriterionType.CTC, - ) - - self.decoder = LexiconDecoder( - self.decoder_opts, - self.trie, - self.lm, - self.silence, - self.blank, - self.unk_word, - [], - self.unitlm, - ) - else: - assert self.unitlm, "Lexicon-free decoding requires unit LM" - - d = {w: [[w]] for w in tgt_dict.symbols} - self.word_dict = create_word_dict(d) - self.lm = KenLM(cfg.lmpath, self.word_dict) - self.decoder_opts = LexiconFreeDecoderOptions( - beam_size=cfg.beam, - beam_size_token=cfg.beamsizetoken or len(tgt_dict), - beam_threshold=cfg.beamthreshold, - lm_weight=cfg.lmweight, - sil_score=cfg.silweight, - log_add=False, - criterion_type=CriterionType.CTC, - ) - self.decoder = LexiconFreeDecoder( - self.decoder_opts, self.lm, self.silence, self.blank, [] - ) - - def get_timesteps(self, token_idxs: List[int]) -> List[int]: - """Returns frame numbers corresponding to every non-blank token. - - Parameters - ---------- - token_idxs : List[int] - IDs of decoded tokens. - - Returns - ------- - List[int] - Frame numbers corresponding to every non-blank token. 
- """ - timesteps = [] - for i, token_idx in enumerate(token_idxs): - if token_idx == self.blank: - continue - if i == 0 or token_idx != token_idxs[i-1]: - timesteps.append(i) - return timesteps - - def decode( - self, - emissions: torch.FloatTensor, - ) -> List[List[Dict[str, torch.LongTensor]]]: - B, T, N = emissions.size() - hypos = [] - for b in range(B): - emissions_ptr = emissions.data_ptr() + 4 * b * emissions.stride(0) - results = self.decoder.decode(emissions_ptr, T, N) - - nbest_results = results[: self.nbest] - hypos.append( - [ - { - "tokens": self.get_tokens(result.tokens), - "score": result.score, - "timesteps": self.get_timesteps(result.tokens), - "words": [ - self.word_dict.get_entry(x) for x in result.words if x >= 0 - ], - } - for result in nbest_results - ] - ) - return hypos - - -FairseqLMState = namedtuple( - "FairseqLMState", - [ - "prefix", - "incremental_state", - "probs", - ], -) - - -class FairseqLM(LM): - def __init__(self, dictionary: Dictionary, model: FairseqModel) -> None: - super().__init__() - - self.dictionary = dictionary - self.model = model - self.unk = self.dictionary.unk() - - self.save_incremental = False # this currently does not work properly - self.max_cache = 20_000 - - if torch.cuda.is_available(): - model.cuda() - model.eval() - model.make_generation_fast_() - - self.states = {} - self.stateq = deque() - - def start(self, start_with_nothing: bool) -> LMState: - state = LMState() - prefix = torch.LongTensor([[self.dictionary.eos()]]) - incremental_state = {} if self.save_incremental else None - with torch.no_grad(): - res = self.model(prefix.cuda(), incremental_state=incremental_state) - probs = self.model.get_normalized_probs(res, log_probs=True, sample=None) - - if incremental_state is not None: - incremental_state = apply_to_sample(lambda x: x.cpu(), incremental_state) - self.states[state] = FairseqLMState( - prefix.numpy(), incremental_state, probs[0, -1].cpu().numpy() - ) - self.stateq.append(state) - - return state - - def score( - self, - state: LMState, - token_index: int, - no_cache: bool = False, - ) -> Tuple[LMState, int]: - """ - Evaluate language model based on the current lm state and new word - Parameters: - ----------- - state: current lm state - token_index: index of the word - (can be lexicon index then you should store inside LM the - mapping between indices of lexicon and lm, or lm index of a word) - Returns: - -------- - (LMState, float): pair of (new state, score for the current word) - """ - curr_state = self.states[state] - - def trim_cache(targ_size: int) -> None: - while len(self.stateq) > targ_size: - rem_k = self.stateq.popleft() - rem_st = self.states[rem_k] - rem_st = FairseqLMState(rem_st.prefix, None, None) - self.states[rem_k] = rem_st - - if curr_state.probs is None: - new_incremental_state = ( - curr_state.incremental_state.copy() - if curr_state.incremental_state is not None - else None - ) - with torch.no_grad(): - if new_incremental_state is not None: - new_incremental_state = apply_to_sample( - lambda x: x.cuda(), new_incremental_state - ) - elif self.save_incremental: - new_incremental_state = {} - - res = self.model( - torch.from_numpy(curr_state.prefix).cuda(), - incremental_state=new_incremental_state, - ) - probs = self.model.get_normalized_probs( - res, log_probs=True, sample=None - ) - - if new_incremental_state is not None: - new_incremental_state = apply_to_sample( - lambda x: x.cpu(), new_incremental_state - ) - - curr_state = FairseqLMState( - curr_state.prefix, new_incremental_state, probs[0, 
-1].cpu().numpy() - ) - - if not no_cache: - self.states[state] = curr_state - self.stateq.append(state) - - score = curr_state.probs[token_index].item() - - trim_cache(self.max_cache) - - outstate = state.child(token_index) - if outstate not in self.states and not no_cache: - prefix = np.concatenate( - [curr_state.prefix, torch.LongTensor([[token_index]])], -1 - ) - incr_state = curr_state.incremental_state - - self.states[outstate] = FairseqLMState(prefix, incr_state, None) - - if token_index == self.unk: - score = float("-inf") - - return outstate, score - - def finish(self, state: LMState) -> Tuple[LMState, int]: - """ - Evaluate eos for language model based on the current lm state - Returns: - -------- - (LMState, float): pair of (new state, score for the current word) - """ - return self.score(state, self.dictionary.eos()) - - def empty_cache(self) -> None: - self.states = {} - self.stateq = deque() - gc.collect() - - -class FairseqLMDecoder(BaseDecoder): - def __init__(self, cfg: FlashlightDecoderConfig, tgt_dict: Dictionary) -> None: - super().__init__(tgt_dict) - - self.nbest = cfg.nbest - self.unitlm = cfg.unitlm - - self.lexicon = load_words(cfg.lexicon) if cfg.lexicon else None - self.idx_to_wrd = {} - - checkpoint = torch.load(cfg.lmpath, map_location="cpu") - - if "cfg" in checkpoint and checkpoint["cfg"] is not None: - lm_args = checkpoint["cfg"] - else: - lm_args = convert_namespace_to_omegaconf(checkpoint["args"]) - - if not OmegaConf.is_dict(lm_args): - lm_args = OmegaConf.create(lm_args) - - with open_dict(lm_args.task): - lm_args.task.data = osp.dirname(cfg.lmpath) - - task = tasks.setup_task(lm_args.task) - model = task.build_model(lm_args.model) - model.load_state_dict(checkpoint["model"], strict=False) - - self.trie = Trie(self.vocab_size, self.silence) - - self.word_dict = task.dictionary - self.unk_word = self.word_dict.unk() - self.lm = FairseqLM(self.word_dict, model) - - if self.lexicon: - start_state = self.lm.start(False) - for i, (word, spellings) in enumerate(self.lexicon.items()): - if self.unitlm: - word_idx = i - self.idx_to_wrd[i] = word - score = 0 - else: - word_idx = self.word_dict.index(word) - _, score = self.lm.score(start_state, word_idx, no_cache=True) - - for spelling in spellings: - spelling_idxs = [tgt_dict.index(token) for token in spelling] - assert ( - tgt_dict.unk() not in spelling_idxs - ), f"{spelling} {spelling_idxs}" - self.trie.insert(spelling_idxs, word_idx, score) - self.trie.smear(SmearingMode.MAX) - - self.decoder_opts = LexiconDecoderOptions( - beam_size=cfg.beam, - beam_size_token=cfg.beamsizetoken or len(tgt_dict), - beam_threshold=cfg.beamthreshold, - lm_weight=cfg.lmweight, - word_score=cfg.wordscore, - unk_score=cfg.unkweight, - sil_score=cfg.silweight, - log_add=False, - criterion_type=CriterionType.CTC, - ) - - self.decoder = LexiconDecoder( - self.decoder_opts, - self.trie, - self.lm, - self.silence, - self.blank, - self.unk_word, - [], - self.unitlm, - ) - else: - assert self.unitlm, "Lexicon-free decoding requires unit LM" - - d = {w: [[w]] for w in tgt_dict.symbols} - self.word_dict = create_word_dict(d) - self.lm = KenLM(cfg.lmpath, self.word_dict) - self.decoder_opts = LexiconFreeDecoderOptions( - beam_size=cfg.beam, - beam_size_token=cfg.beamsizetoken or len(tgt_dict), - beam_threshold=cfg.beamthreshold, - lm_weight=cfg.lmweight, - sil_score=cfg.silweight, - log_add=False, - criterion_type=CriterionType.CTC, - ) - self.decoder = LexiconFreeDecoder( - self.decoder_opts, self.lm, self.silence, self.blank, [] - ) - - 
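    # Note: `decode` below hands Flashlight a raw pointer into the emission
    # tensor; the `4 * b * emissions.stride(0)` offset assumes 4-byte float32
    # elements, which `BaseDecoder.get_emissions` guarantees via `.float()`.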
def decode( - self, - emissions: torch.FloatTensor, - ) -> List[List[Dict[str, torch.LongTensor]]]: - B, T, N = emissions.size() - hypos = [] - - def make_hypo(result: DecodeResult) -> Dict[str, Any]: - hypo = { - "tokens": self.get_tokens(result.tokens), - "score": result.score, - } - if self.lexicon: - hypo["words"] = [ - self.idx_to_wrd[x] if self.unitlm else self.word_dict[x] - for x in result.words - if x >= 0 - ] - return hypo - - for b in range(B): - emissions_ptr = emissions.data_ptr() + 4 * b * emissions.stride(0) - results = self.decoder.decode(emissions_ptr, T, N) - - nbest_results = results[: self.nbest] - hypos.append([make_hypo(result) for result in nbest_results]) - self.lm.empty_cache() - - return hypos diff --git a/kosmos-g/fairseq/examples/speech_recognition/new/decoders/viterbi_decoder.py b/kosmos-g/fairseq/examples/speech_recognition/new/decoders/viterbi_decoder.py deleted file mode 100644 index b1c47868f..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/new/decoders/viterbi_decoder.py +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from typing import List, Dict - -from .base_decoder import BaseDecoder - - -class ViterbiDecoder(BaseDecoder): - def decode( - self, - emissions: torch.FloatTensor, - ) -> List[List[Dict[str, torch.LongTensor]]]: - def get_pred(e): - toks = e.argmax(dim=-1).unique_consecutive() - return toks[toks != self.blank] - - return [[{"tokens": get_pred(x), "score": 0}] for x in emissions] diff --git a/kosmos-g/fairseq/examples/speech_recognition/new/infer.py b/kosmos-g/fairseq/examples/speech_recognition/new/infer.py deleted file mode 100644 index d8d87f20b..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/new/infer.py +++ /dev/null @@ -1,473 +0,0 @@ -#!/usr/bin/env python -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
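For CTC emissions with no transition scores, the `ViterbiDecoder` above reduces to a per-frame argmax followed by the collapse rule. A self-contained sketch with a toy emission tensor (the shapes and blank index are illustrative assumptions, not values from this repo):

```python
import torch


def greedy_ctc_decode(emissions: torch.Tensor, blank: int = 0) -> torch.Tensor:
    """Greedy CTC decoding for one utterance of (T, N) log-probs."""
    toks = emissions.argmax(dim=-1).unique_consecutive()
    return toks[toks != blank]


T, N = 6, 4  # 6 frames, 4 output symbols; index 0 assumed to be blank
emissions = torch.log_softmax(torch.randn(T, N), dim=-1)
print(greedy_ctc_decode(emissions))
```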
- -import ast -import hashlib -import logging -import os -import shutil -import sys -from dataclasses import dataclass, field, is_dataclass -from pathlib import Path -from typing import Any, Dict, List, Optional, Tuple, Union - -import editdistance -import torch -import torch.distributed as dist -from examples.speech_recognition.new.decoders.decoder_config import ( - DecoderConfig, - FlashlightDecoderConfig, -) -from examples.speech_recognition.new.decoders.decoder import Decoder -from fairseq import checkpoint_utils, distributed_utils, progress_bar, tasks, utils -from fairseq.data.data_utils import post_process -from fairseq.dataclass.configs import ( - CheckpointConfig, - CommonConfig, - CommonEvalConfig, - DatasetConfig, - DistributedTrainingConfig, - FairseqDataclass, -) -from fairseq.logging.meters import StopwatchMeter, TimeMeter -from fairseq.logging.progress_bar import BaseProgressBar -from fairseq.models.fairseq_model import FairseqModel -from omegaconf import OmegaConf - -import hydra -from hydra.core.config_store import ConfigStore - -logging.root.setLevel(logging.INFO) -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - -config_path = Path(__file__).resolve().parent / "conf" - - -@dataclass -class DecodingConfig(DecoderConfig, FlashlightDecoderConfig): - unique_wer_file: bool = field( - default=False, - metadata={"help": "If set, use a unique file for storing WER"}, - ) - results_path: Optional[str] = field( - default=None, - metadata={ - "help": "If set, write hypothesis and reference sentences into this directory" - }, - ) - - -@dataclass -class InferConfig(FairseqDataclass): - task: Any = None - decoding: DecodingConfig = DecodingConfig() - common: CommonConfig = CommonConfig() - common_eval: CommonEvalConfig = CommonEvalConfig() - checkpoint: CheckpointConfig = CheckpointConfig() - distributed_training: DistributedTrainingConfig = DistributedTrainingConfig() - dataset: DatasetConfig = DatasetConfig() - is_ax: bool = field( - default=False, - metadata={ - "help": "if true, assumes we are using ax for tuning and returns a tuple for ax to consume" - }, - ) - - -def reset_logging(): - root = logging.getLogger() - for handler in root.handlers: - root.removeHandler(handler) - root.setLevel(os.environ.get("LOGLEVEL", "INFO").upper()) - handler = logging.StreamHandler(sys.stdout) - handler.setFormatter( - logging.Formatter( - fmt="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - ) - ) - root.addHandler(handler) - - -class InferenceProcessor: - cfg: InferConfig - - def __init__(self, cfg: InferConfig) -> None: - self.cfg = cfg - self.task = tasks.setup_task(cfg.task) - - models, saved_cfg = self.load_model_ensemble() - self.models = models - self.saved_cfg = saved_cfg - self.tgt_dict = self.task.target_dictionary - - self.task.load_dataset( - self.cfg.dataset.gen_subset, - task_cfg=saved_cfg.task, - ) - self.generator = Decoder(cfg.decoding, self.tgt_dict) - self.gen_timer = StopwatchMeter() - self.wps_meter = TimeMeter() - self.num_sentences = 0 - self.total_errors = 0 - self.total_length = 0 - - self.hypo_words_file = None - self.hypo_units_file = None - self.ref_words_file = None - self.ref_units_file = None - - self.progress_bar = self.build_progress_bar() - - def __enter__(self) -> "InferenceProcessor": - if self.cfg.decoding.results_path is not None: - self.hypo_words_file = self.get_res_file("hypo.word") - self.hypo_units_file = self.get_res_file("hypo.units") - self.ref_words_file = 
self.get_res_file("ref.word") - self.ref_units_file = self.get_res_file("ref.units") - return self - - def __exit__(self, *exc) -> bool: - if self.cfg.decoding.results_path is not None: - self.hypo_words_file.close() - self.hypo_units_file.close() - self.ref_words_file.close() - self.ref_units_file.close() - return False - - def __iter__(self) -> Any: - for sample in self.progress_bar: - if not self.cfg.common.cpu: - sample = utils.move_to_cuda(sample) - - # Happens on the last batch. - if "net_input" not in sample: - continue - yield sample - - def log(self, *args, **kwargs): - self.progress_bar.log(*args, **kwargs) - - def print(self, *args, **kwargs): - self.progress_bar.print(*args, **kwargs) - - def get_res_file(self, fname: str) -> None: - fname = os.path.join(self.cfg.decoding.results_path, fname) - if self.data_parallel_world_size > 1: - fname = f"{fname}.{self.data_parallel_rank}" - return open(fname, "w", buffering=1) - - def merge_shards(self) -> None: - """Merges all shard files into shard 0, then removes shard suffix.""" - - shard_id = self.data_parallel_rank - num_shards = self.data_parallel_world_size - - if self.data_parallel_world_size > 1: - - def merge_shards_with_root(fname: str) -> None: - fname = os.path.join(self.cfg.decoding.results_path, fname) - logger.info("Merging %s on shard %d", fname, shard_id) - base_fpath = Path(f"{fname}.0") - with open(base_fpath, "a") as out_file: - for s in range(1, num_shards): - shard_fpath = Path(f"{fname}.{s}") - with open(shard_fpath, "r") as in_file: - for line in in_file: - out_file.write(line) - shard_fpath.unlink() - shutil.move(f"{fname}.0", fname) - - dist.barrier() # ensure all shards finished writing - if shard_id == (0 % num_shards): - merge_shards_with_root("hypo.word") - if shard_id == (1 % num_shards): - merge_shards_with_root("hypo.units") - if shard_id == (2 % num_shards): - merge_shards_with_root("ref.word") - if shard_id == (3 % num_shards): - merge_shards_with_root("ref.units") - dist.barrier() - - def optimize_model(self, model: FairseqModel) -> None: - model.make_generation_fast_() - if self.cfg.common.fp16: - model.half() - if not self.cfg.common.cpu: - model.cuda() - - def load_model_ensemble(self) -> Tuple[List[FairseqModel], FairseqDataclass]: - arg_overrides = ast.literal_eval(self.cfg.common_eval.model_overrides) - models, saved_cfg = checkpoint_utils.load_model_ensemble( - utils.split_paths(self.cfg.common_eval.path, separator="\\"), - arg_overrides=arg_overrides, - task=self.task, - suffix=self.cfg.checkpoint.checkpoint_suffix, - strict=(self.cfg.checkpoint.checkpoint_shard_count == 1), - num_shards=self.cfg.checkpoint.checkpoint_shard_count, - ) - for model in models: - self.optimize_model(model) - return models, saved_cfg - - def get_dataset_itr(self, disable_iterator_cache: bool = False) -> None: - return self.task.get_batch_iterator( - dataset=self.task.dataset(self.cfg.dataset.gen_subset), - max_tokens=self.cfg.dataset.max_tokens, - max_sentences=self.cfg.dataset.batch_size, - max_positions=(sys.maxsize, sys.maxsize), - ignore_invalid_inputs=self.cfg.dataset.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple, - seed=self.cfg.common.seed, - num_shards=self.data_parallel_world_size, - shard_id=self.data_parallel_rank, - num_workers=self.cfg.dataset.num_workers, - data_buffer_size=self.cfg.dataset.data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - ).next_epoch_itr(shuffle=False) - - def build_progress_bar( - self, - epoch: 
Optional[int] = None, - prefix: Optional[str] = None, - default_log_format: str = "tqdm", - ) -> BaseProgressBar: - return progress_bar.progress_bar( - iterator=self.get_dataset_itr(), - log_format=self.cfg.common.log_format, - log_interval=self.cfg.common.log_interval, - epoch=epoch, - prefix=prefix, - tensorboard_logdir=self.cfg.common.tensorboard_logdir, - default_log_format=default_log_format, - ) - - @property - def data_parallel_world_size(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 1 - return distributed_utils.get_data_parallel_world_size() - - @property - def data_parallel_rank(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 0 - return distributed_utils.get_data_parallel_rank() - - def process_sentence( - self, - sample: Dict[str, Any], - hypo: Dict[str, Any], - sid: int, - batch_id: int, - ) -> Tuple[int, int]: - speaker = None # Speaker can't be parsed from dataset. - - if "target_label" in sample: - toks = sample["target_label"] - else: - toks = sample["target"] - toks = toks[batch_id, :] - - # Processes hypothesis. - hyp_pieces = self.tgt_dict.string(hypo["tokens"].int().cpu()) - if "words" in hypo: - hyp_words = " ".join(hypo["words"]) - else: - hyp_words = post_process(hyp_pieces, self.cfg.common_eval.post_process) - - # Processes target. - target_tokens = utils.strip_pad(toks, self.tgt_dict.pad()) - tgt_pieces = self.tgt_dict.string(target_tokens.int().cpu()) - tgt_words = post_process(tgt_pieces, self.cfg.common_eval.post_process) - - if self.cfg.decoding.results_path is not None: - print(f"{hyp_pieces} ({speaker}-{sid})", file=self.hypo_units_file) - print(f"{hyp_words} ({speaker}-{sid})", file=self.hypo_words_file) - print(f"{tgt_pieces} ({speaker}-{sid})", file=self.ref_units_file) - print(f"{tgt_words} ({speaker}-{sid})", file=self.ref_words_file) - - if not self.cfg.common_eval.quiet: - logger.info(f"HYPO: {hyp_words}") - logger.info(f"REF: {tgt_words}") - logger.info("---------------------") - - hyp_words, tgt_words = hyp_words.split(), tgt_words.split() - - return editdistance.eval(hyp_words, tgt_words), len(tgt_words) - - def process_sample(self, sample: Dict[str, Any]) -> None: - self.gen_timer.start() - hypos = self.task.inference_step( - generator=self.generator, - models=self.models, - sample=sample, - ) - num_generated_tokens = sum(len(h[0]["tokens"]) for h in hypos) - self.gen_timer.stop(num_generated_tokens) - self.wps_meter.update(num_generated_tokens) - - for batch_id, sample_id in enumerate(sample["id"].tolist()): - errs, length = self.process_sentence( - sample=sample, - sid=sample_id, - batch_id=batch_id, - hypo=hypos[batch_id][0], - ) - self.total_errors += errs - self.total_length += length - - self.log({"wps": round(self.wps_meter.avg)}) - if "nsentences" in sample: - self.num_sentences += sample["nsentences"] - else: - self.num_sentences += sample["id"].numel() - - def log_generation_time(self) -> None: - logger.info( - "Processed %d sentences (%d tokens) in %.1fs %.2f " - "sentences per second, %.2f tokens per second)", - self.num_sentences, - self.gen_timer.n, - self.gen_timer.sum, - self.num_sentences / self.gen_timer.sum, - 1.0 / self.gen_timer.avg, - ) - - -def parse_wer(wer_file: Path) -> float: - with open(wer_file, "r") as f: - return float(f.readline().strip().split(" ")[1]) - - -def get_wer_file(cfg: InferConfig) -> Path: - """Hashes the decoding parameters to a unique file ID.""" - base_path = "wer" - if cfg.decoding.results_path is not None: - base_path = 
os.path.join(cfg.decoding.results_path, base_path) - - if cfg.decoding.unique_wer_file: - yaml_str = OmegaConf.to_yaml(cfg.decoding) - fid = int(hashlib.md5(yaml_str.encode("utf-8")).hexdigest(), 16) - return Path(f"{base_path}.{fid % 1000000}") - else: - return Path(base_path) - - -def main(cfg: InferConfig) -> float: - """Entry point for main processing logic. - - Args: - cfg: The inference configuration to use. - - Returns: - The final WER. - """ - - yaml_str, wer_file = OmegaConf.to_yaml(cfg.decoding), get_wer_file(cfg) - - # Validates the provided configuration. - if cfg.dataset.max_tokens is None and cfg.dataset.batch_size is None: - cfg.dataset.max_tokens = 4000000 - if not cfg.common.cpu and not torch.cuda.is_available(): - raise ValueError("CUDA not found; set `cpu=True` to run without CUDA") - - logger.info(cfg.common_eval.path) - - with InferenceProcessor(cfg) as processor: - for sample in processor: - processor.process_sample(sample) - - processor.log_generation_time() - - if cfg.decoding.results_path is not None: - processor.merge_shards() - - errs_t, leng_t = processor.total_errors, processor.total_length - - if cfg.common.cpu: - logger.warning("Merging WER requires CUDA.") - elif processor.data_parallel_world_size > 1: - stats = torch.LongTensor([errs_t, leng_t]).cuda() - dist.all_reduce(stats, op=dist.ReduceOp.SUM) - errs_t, leng_t = stats[0].item(), stats[1].item() - - wer = errs_t * 100.0 / leng_t - - if distributed_utils.is_master(cfg.distributed_training): - with open(wer_file, "w") as f: - f.write( - ( - f"WER: {wer}\n" - f"err / num_ref_words = {errs_t} / {leng_t}\n\n" - f"{yaml_str}" - ) - ) - - return wer - - -@hydra.main(config_path=config_path, config_name="infer") -def hydra_main(cfg: InferConfig) -> Union[float, Tuple[float, Optional[float]]]: - container = OmegaConf.to_container(cfg, resolve=True, enum_to_str=True) - cfg = OmegaConf.create(container) - OmegaConf.set_struct(cfg, True) - - if cfg.common.reset_logging: - reset_logging() - - # logger.info("Config:\n%s", OmegaConf.to_yaml(cfg)) - wer = float("inf") - - try: - if cfg.common.profile: - with torch.cuda.profiler.profile(): - with torch.autograd.profiler.emit_nvtx(): - distributed_utils.call_main(cfg, main) - else: - distributed_utils.call_main(cfg, main) - - wer = parse_wer(get_wer_file(cfg)) - except BaseException as e: # pylint: disable=broad-except - if not cfg.common.suppress_crashes: - raise - else: - logger.error("Crashed!
%s", str(e)) - - logger.info("Word error rate: %.4f", wer) - if cfg.is_ax: - return wer, None - - return wer - - -def cli_main() -> None: - try: - from hydra._internal.utils import ( - get_args, - ) # pylint: disable=import-outside-toplevel - - cfg_name = get_args().config_name or "infer" - except ImportError: - logger.warning("Failed to get config name from hydra args") - cfg_name = "infer" - - cs = ConfigStore.instance() - cs.store(name=cfg_name, node=InferConfig) - - for k in InferConfig.__dataclass_fields__: - if is_dataclass(InferConfig.__dataclass_fields__[k].type): - v = InferConfig.__dataclass_fields__[k].default - cs.store(name=k, node=v) - - hydra_main() # pylint: disable=no-value-for-parameter - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/examples/speech_recognition/tasks/__init__.py b/kosmos-g/fairseq/examples/speech_recognition/tasks/__init__.py deleted file mode 100644 index 7ac3b8dc6..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/tasks/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -import importlib -import os - - -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - task_name = file[: file.find(".py")] - importlib.import_module("examples.speech_recognition.tasks." + task_name) diff --git a/kosmos-g/fairseq/examples/speech_recognition/tasks/speech_recognition.py b/kosmos-g/fairseq/examples/speech_recognition/tasks/speech_recognition.py deleted file mode 100644 index d9f011d55..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/tasks/speech_recognition.py +++ /dev/null @@ -1,157 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import json -import os -import re -import sys - -import torch -from examples.speech_recognition.data import AsrDataset -from examples.speech_recognition.data.replabels import replabel_symbol -from fairseq.data import Dictionary -from fairseq.tasks import LegacyFairseqTask, register_task - - -def get_asr_dataset_from_json(data_json_path, tgt_dict): - """ - Parse data json and create dataset. - See scripts/asr_prep_json.py which pack json from raw files - - Json example: - { - "utts": { - "4771-29403-0025": { - "input": { - "length_ms": 170, - "path": "/tmp/file1.flac" - }, - "output": { - "text": "HELLO \n", - "token": "HE LLO", - "tokenid": "4815, 861" - } - }, - "1564-142299-0096": { - ... 
- } - } - """ - if not os.path.isfile(data_json_path): - raise FileNotFoundError("Dataset not found: {}".format(data_json_path)) - with open(data_json_path, "rb") as f: - data_samples = json.load(f)["utts"] - assert len(data_samples) != 0 - sorted_samples = sorted( - data_samples.items(), - key=lambda sample: int(sample[1]["input"]["length_ms"]), - reverse=True, - ) - aud_paths = [s[1]["input"]["path"] for s in sorted_samples] - ids = [s[0] for s in sorted_samples] - speakers = [] - for s in sorted_samples: - m = re.search("(.+?)-(.+?)-(.+?)", s[0]) - speakers.append(m.group(1) + "_" + m.group(2)) - frame_sizes = [s[1]["input"]["length_ms"] for s in sorted_samples] - tgt = [ - [int(i) for i in s[1]["output"]["tokenid"].split(", ")] - for s in sorted_samples - ] - # append eos - tgt = [[*t, tgt_dict.eos()] for t in tgt] - return AsrDataset(aud_paths, frame_sizes, tgt, tgt_dict, ids, speakers) - - -@register_task("speech_recognition") -class SpeechRecognitionTask(LegacyFairseqTask): - """ - Task for training speech recognition model. - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument("data", help="path to data directory") - parser.add_argument( - "--silence-token", default="\u2581", help="token for silence (used by w2l)" - ) - parser.add_argument( - "--max-source-positions", - default=sys.maxsize, - type=int, - metavar="N", - help="max number of frames in the source sequence", - ) - parser.add_argument( - "--max-target-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the target sequence", - ) - - def __init__(self, args, tgt_dict): - super().__init__(args) - self.tgt_dict = tgt_dict - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task (e.g., load dictionaries).""" - dict_path = os.path.join(args.data, "dict.txt") - if not os.path.isfile(dict_path): - raise FileNotFoundError("Dict not found: {}".format(dict_path)) - tgt_dict = Dictionary.load(dict_path) - - if args.criterion == "ctc_loss": - tgt_dict.add_symbol("<ctc_blank>") - elif args.criterion == "asg_loss": - for i in range(1, args.max_replabel + 1): - tgt_dict.add_symbol(replabel_symbol(i)) - - print("| dictionary: {} types".format(len(tgt_dict))) - return cls(args, tgt_dict) - - def load_dataset(self, split, combine=False, **kwargs): - """Load a given dataset split. 
- - Args: - split (str): name of the split (e.g., train, valid, test) - """ - data_json_path = os.path.join(self.args.data, "{}.json".format(split)) - self.datasets[split] = get_asr_dataset_from_json(data_json_path, self.tgt_dict) - - def build_generator(self, models, args, **unused): - w2l_decoder = getattr(args, "w2l_decoder", None) - if w2l_decoder == "viterbi": - from examples.speech_recognition.w2l_decoder import W2lViterbiDecoder - - return W2lViterbiDecoder(args, self.target_dictionary) - elif w2l_decoder == "kenlm": - from examples.speech_recognition.w2l_decoder import W2lKenLMDecoder - - return W2lKenLMDecoder(args, self.target_dictionary) - elif w2l_decoder == "fairseqlm": - from examples.speech_recognition.w2l_decoder import W2lFairseqLMDecoder - - return W2lFairseqLMDecoder(args, self.target_dictionary) - else: - return super().build_generator(models, args) - - @property - def target_dictionary(self): - """Return the :class:`~fairseq.data.Dictionary` for the language - model.""" - return self.tgt_dict - - @property - def source_dictionary(self): - """Return the source :class:`~fairseq.data.Dictionary` (if applicable - for this task).""" - return None - - def max_positions(self): - """Return the max speech and sentence length allowed by the task.""" - return (self.args.max_source_positions, self.args.max_target_positions) diff --git a/kosmos-g/fairseq/examples/speech_recognition/utils/wer_utils.py b/kosmos-g/fairseq/examples/speech_recognition/utils/wer_utils.py deleted file mode 100644 index cf6f3d09b..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/utils/wer_utils.py +++ /dev/null @@ -1,381 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from __future__ import absolute_import, division, print_function, unicode_literals - -import re -from collections import deque -from enum import Enum - -import numpy as np - - -""" - Utility modules for computation of Word Error Rate, - Alignments, as well as more granular metrics like - deletion, insertion and substitutions.
-""" - - -class Code(Enum): - match = 1 - substitution = 2 - insertion = 3 - deletion = 4 - - -class Token(object): - def __init__(self, lbl="", st=np.nan, en=np.nan): - if np.isnan(st): - self.label, self.start, self.end = "", 0.0, 0.0 - else: - self.label, self.start, self.end = lbl, st, en - - -class AlignmentResult(object): - def __init__(self, refs, hyps, codes, score): - self.refs = refs # std::deque<int> - self.hyps = hyps # std::deque<int> - self.codes = codes # std::deque<Code> - self.score = score # float - - -def coordinate_to_offset(row, col, ncols): - return int(row * ncols + col) - - -def offset_to_row(offset, ncols): - return int(offset / ncols) - - -def offset_to_col(offset, ncols): - return int(offset % ncols) - - -def trimWhitespace(str): - return re.sub(" +", " ", re.sub(" *$", "", re.sub("^ *", "", str))) - - -def str2toks(str): - pieces = trimWhitespace(str).split(" ") - toks = [] - for p in pieces: - toks.append(Token(p, 0.0, 0.0)) - return toks - - -class EditDistance(object): - def __init__(self, time_mediated): - self.time_mediated_ = time_mediated - self.scores_ = np.nan # Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic> - self.backtraces_ = ( - np.nan - ) # Eigen::Matrix<size_t, Eigen::Dynamic, Eigen::Dynamic> backtraces_; - self.confusion_pairs_ = {} - - def cost(self, ref, hyp, code): - if self.time_mediated_: - if code == Code.match: - return abs(ref.start - hyp.start) + abs(ref.end - hyp.end) - elif code == Code.insertion: - return hyp.end - hyp.start - elif code == Code.deletion: - return ref.end - ref.start - else: # substitution - return abs(ref.start - hyp.start) + abs(ref.end - hyp.end) + 0.1 - else: - if code == Code.match: - return 0 - elif code == Code.insertion or code == Code.deletion: - return 3 - else: # substitution - return 4 - - def get_result(self, refs, hyps): - res = AlignmentResult(refs=deque(), hyps=deque(), codes=deque(), score=np.nan) - - num_rows, num_cols = self.scores_.shape - res.score = self.scores_[num_rows - 1, num_cols - 1] - - curr_offset = coordinate_to_offset(num_rows - 1, num_cols - 1, num_cols) - - while curr_offset != 0: - curr_row = offset_to_row(curr_offset, num_cols) - curr_col = offset_to_col(curr_offset, num_cols) - - prev_offset = self.backtraces_[curr_row, curr_col] - - prev_row = offset_to_row(prev_offset, num_cols) - prev_col = offset_to_col(prev_offset, num_cols) - - res.refs.appendleft(curr_row - 1) # Note: this was .push_front() in C++ - res.hyps.appendleft(curr_col - 1) - if curr_row - 1 == prev_row and curr_col == prev_col: - res.codes.appendleft(Code.deletion) - elif curr_row == prev_row and curr_col - 1 == prev_col: - res.codes.appendleft(Code.insertion) - else: - # assert(curr_row - 1 == prev_row and curr_col - 1 == prev_col) - ref_str = refs[res.refs[0]].label - hyp_str = hyps[res.hyps[0]].label - - if ref_str == hyp_str: - res.codes.appendleft(Code.match) - else: - res.codes.appendleft(Code.substitution) - - confusion_pair = "%s -> %s" % (ref_str, hyp_str) - if confusion_pair not in self.confusion_pairs_: - self.confusion_pairs_[confusion_pair] = 1 - else: - self.confusion_pairs_[confusion_pair] += 1 - - curr_offset = prev_offset - - return res - - def align(self, refs, hyps): - if len(refs) == 0 and len(hyps) == 0: - return np.nan - - # NOTE: we're not resetting the values in these matrices because every value - # will be overridden in the loop below. If this assumption doesn't hold, - # be sure to set all entries in self.scores_ and self.backtraces_ to 0. 
- self.scores_ = np.zeros((len(refs) + 1, len(hyps) + 1)) - self.backtraces_ = np.zeros((len(refs) + 1, len(hyps) + 1)) - - num_rows, num_cols = self.scores_.shape - - for i in range(num_rows): - for j in range(num_cols): - if i == 0 and j == 0: - self.scores_[i, j] = 0.0 - self.backtraces_[i, j] = 0 - continue - - if i == 0: - self.scores_[i, j] = self.scores_[i, j - 1] + self.cost( - None, hyps[j - 1], Code.insertion - ) - self.backtraces_[i, j] = coordinate_to_offset(i, j - 1, num_cols) - continue - - if j == 0: - self.scores_[i, j] = self.scores_[i - 1, j] + self.cost( - refs[i - 1], None, Code.deletion - ) - self.backtraces_[i, j] = coordinate_to_offset(i - 1, j, num_cols) - continue - - # Below here both i and j are greater than 0 - ref = refs[i - 1] - hyp = hyps[j - 1] - best_score = self.scores_[i - 1, j - 1] + ( - self.cost(ref, hyp, Code.match) - if (ref.label == hyp.label) - else self.cost(ref, hyp, Code.substitution) - ) - - prev_row = i - 1 - prev_col = j - 1 - ins = self.scores_[i, j - 1] + self.cost(None, hyp, Code.insertion) - if ins < best_score: - best_score = ins - prev_row = i - prev_col = j - 1 - - delt = self.scores_[i - 1, j] + self.cost(ref, None, Code.deletion) - if delt < best_score: - best_score = delt - prev_row = i - 1 - prev_col = j - - self.scores_[i, j] = best_score - self.backtraces_[i, j] = coordinate_to_offset( - prev_row, prev_col, num_cols - ) - - return self.get_result(refs, hyps) - - -class WERTransformer(object): - def __init__(self, hyp_str, ref_str, verbose=True): - self.ed_ = EditDistance(False) - self.id2oracle_errs_ = {} - self.utts_ = 0 - self.words_ = 0 - self.insertions_ = 0 - self.deletions_ = 0 - self.substitutions_ = 0 - - self.process(["dummy_str", hyp_str, ref_str]) - - if verbose: - print("'%s' vs '%s'" % (hyp_str, ref_str)) - self.report_result() - - def process(self, input): # std::vector<std::string>&& input - if len(input) < 3: - print( - "Input must be of the form <id> ... 
<hypo> <ref> , got ", - len(input), - " inputs:", - ) - return None - - # Align - # std::vector<Token> hyps; - # std::vector<Token> refs; - - hyps = str2toks(input[-2]) - refs = str2toks(input[-1]) - - alignment = self.ed_.align(refs, hyps) - if alignment is None: - print("Alignment is null") - return np.nan - - # Tally errors - ins = 0 - dels = 0 - subs = 0 - for code in alignment.codes: - if code == Code.substitution: - subs += 1 - elif code == Code.insertion: - ins += 1 - elif code == Code.deletion: - dels += 1 - - # Output - row = input - row.append(str(len(refs))) - row.append(str(ins)) - row.append(str(dels)) - row.append(str(subs)) - # print(row) - - # Accumulate - kIdIndex = 0 - kNBestSep = "/" - - pieces = input[kIdIndex].split(kNBestSep) - - if len(pieces) == 0: - print( - "Error splitting ", - input[kIdIndex], - " on '", - kNBestSep, - "', got empty list", - ) - return np.nan - - id = pieces[0] - if id not in self.id2oracle_errs_: - self.utts_ += 1 - self.words_ += len(refs) - self.insertions_ += ins - self.deletions_ += dels - self.substitutions_ += subs - self.id2oracle_errs_[id] = [ins, dels, subs] - else: - curr_err = ins + dels + subs - prev_err = np.sum(self.id2oracle_errs_[id]) - if curr_err < prev_err: - self.id2oracle_errs_[id] = [ins, dels, subs] - - return 0 - - def report_result(self): - # print("---------- Summary ---------------") - if self.words_ == 0: - print("No words counted") - return - - # 1-best - best_wer = ( - 100.0 - * (self.insertions_ + self.deletions_ + self.substitutions_) - / self.words_ - ) - - print( - "\tWER = %0.2f%% (%i utts, %i words, %0.2f%% ins, " - "%0.2f%% dels, %0.2f%% subs)" - % ( - best_wer, - self.utts_, - self.words_, - 100.0 * self.insertions_ / self.words_, - 100.0 * self.deletions_ / self.words_, - 100.0 * self.substitutions_ / self.words_, - ) - ) - - def wer(self): - if self.words_ == 0: - wer = np.nan - else: - wer = ( - 100.0 - * (self.insertions_ + self.deletions_ + self.substitutions_) - / self.words_ - ) - return wer - - def stats(self): - if self.words_ == 0: - stats = {} - else: - wer = ( - 100.0 - * (self.insertions_ + self.deletions_ + self.substitutions_) - / self.words_ - ) - stats = dict( - { - "wer": wer, - "utts": self.utts_, - "numwords": self.words_, - "ins": self.insertions_, - "dels": self.deletions_, - "subs": self.substitutions_, - "confusion_pairs": self.ed_.confusion_pairs_, - } - ) - return stats - - -def calc_wer(hyp_str, ref_str): - t = WERTransformer(hyp_str, ref_str, verbose=0) - return t.wer() - - -def calc_wer_stats(hyp_str, ref_str): - t = WERTransformer(hyp_str, ref_str, verbose=0) - return t.stats() - - -def get_wer_alignment_codes(hyp_str, ref_str): - """ - INPUT: hypothesis string, reference string - OUTPUT: List of alignment codes (intermediate results from WER computation) - """ - t = WERTransformer(hyp_str, ref_str, verbose=0) - return t.ed_.align(str2toks(ref_str), str2toks(hyp_str)).codes - - -def merge_counts(x, y): - # Merge two hashes which have 'counts' as their values - # This can be used for example to merge confusion pair counts - # conf_pairs = merge_counts(conf_pairs, stats['confusion_pairs']) - for k, v in y.items(): - if k not in x: - x[k] = 0 - x[k] += v - return x diff --git a/kosmos-g/fairseq/examples/speech_recognition/w2l_decoder.py b/kosmos-g/fairseq/examples/speech_recognition/w2l_decoder.py deleted file mode 100644 index fbf2d3524..000000000 --- a/kosmos-g/fairseq/examples/speech_recognition/w2l_decoder.py +++ /dev/null @@ -1,486 +0,0 @@ -#!/usr/bin/env python3 - -# 
Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Flashlight decoders. -""" - -import gc -import itertools as it -import os.path as osp -from typing import List -import warnings -from collections import deque, namedtuple - -import numpy as np -import torch -from examples.speech_recognition.data.replabels import unpack_replabels -from fairseq import tasks -from fairseq.utils import apply_to_sample -from omegaconf import open_dict -from fairseq.dataclass.utils import convert_namespace_to_omegaconf - - -try: - from flashlight.lib.text.dictionary import create_word_dict, load_words - from flashlight.lib.sequence.criterion import CpuViterbiPath, get_data_ptr_as_bytes - from flashlight.lib.text.decoder import ( - CriterionType, - LexiconDecoderOptions, - KenLM, - LM, - LMState, - SmearingMode, - Trie, - LexiconDecoder, - ) -except: - warnings.warn( - "flashlight python bindings are required to use this functionality. Please install from https://github.com/facebookresearch/flashlight/tree/master/bindings/python" - ) - LM = object - LMState = object - - -class W2lDecoder(object): - def __init__(self, args, tgt_dict): - self.tgt_dict = tgt_dict - self.vocab_size = len(tgt_dict) - self.nbest = args.nbest - - # criterion-specific init - self.criterion_type = CriterionType.CTC - self.blank = ( - tgt_dict.index("<ctc_blank>") - if "<ctc_blank>" in tgt_dict.indices - else tgt_dict.bos() - ) - if "<sep>" in tgt_dict.indices: - self.silence = tgt_dict.index("<sep>") - elif "|" in tgt_dict.indices: - self.silence = tgt_dict.index("|") - else: - self.silence = tgt_dict.eos() - self.asg_transitions = None - - def generate(self, models, sample, **unused): - """Generate a batch of inferences.""" - # model.forward normally channels prev_output_tokens into the decoder - # separately, but SequenceGenerator directly calls model.encoder - encoder_input = { - k: v for k, v in sample["net_input"].items() if k != "prev_output_tokens" - } - emissions = self.get_emissions(models, encoder_input) - return self.decode(emissions) - - def get_emissions(self, models, encoder_input): - """Run encoder and normalize emissions""" - model = models[0] - encoder_out = model(**encoder_input) - if hasattr(model, "get_logits"): - emissions = model.get_logits(encoder_out) # no need to normalize emissions - else: - emissions = model.get_normalized_probs(encoder_out, log_probs=True) - return emissions.transpose(0, 1).float().cpu().contiguous() - - def get_tokens(self, idxs): - """Normalize tokens by handling CTC blank, ASG replabels, etc.""" - idxs = (g[0] for g in it.groupby(idxs)) - idxs = filter(lambda x: x != self.blank, idxs) - return torch.LongTensor(list(idxs)) - - -class W2lViterbiDecoder(W2lDecoder): - def __init__(self, args, tgt_dict): - super().__init__(args, tgt_dict) - - def decode(self, emissions): - B, T, N = emissions.size() - hypos = [] - if self.asg_transitions is None: - transitions = torch.FloatTensor(N, N).zero_() - else: - transitions = torch.FloatTensor(self.asg_transitions).view(N, N) - viterbi_path = torch.IntTensor(B, T) - workspace = torch.ByteTensor(CpuViterbiPath.get_workspace_size(B, T, N)) - CpuViterbiPath.compute( - B, - T, - N, - get_data_ptr_as_bytes(emissions), - get_data_ptr_as_bytes(transitions), - get_data_ptr_as_bytes(viterbi_path), - get_data_ptr_as_bytes(workspace), - ) - return [ - [{"tokens": self.get_tokens(viterbi_path[b].tolist()), "score": 0}] - for b in 
range(B) - ] - - -class W2lKenLMDecoder(W2lDecoder): - def __init__(self, args, tgt_dict): - super().__init__(args, tgt_dict) - - self.unit_lm = getattr(args, "unit_lm", False) - - if args.lexicon: - self.lexicon = load_words(args.lexicon) - self.word_dict = create_word_dict(self.lexicon) - self.unk_word = self.word_dict.get_index("<unk>") - - self.lm = KenLM(args.kenlm_model, self.word_dict) - self.trie = Trie(self.vocab_size, self.silence) - - start_state = self.lm.start(False) - for i, (word, spellings) in enumerate(self.lexicon.items()): - word_idx = self.word_dict.get_index(word) - _, score = self.lm.score(start_state, word_idx) - for spelling in spellings: - spelling_idxs = [tgt_dict.index(token) for token in spelling] - assert ( - tgt_dict.unk() not in spelling_idxs - ), f"{spelling} {spelling_idxs}" - self.trie.insert(spelling_idxs, word_idx, score) - self.trie.smear(SmearingMode.MAX) - - self.decoder_opts = LexiconDecoderOptions( - beam_size=args.beam, - beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))), - beam_threshold=args.beam_threshold, - lm_weight=args.lm_weight, - word_score=args.word_score, - unk_score=args.unk_weight, - sil_score=args.sil_weight, - log_add=False, - criterion_type=self.criterion_type, - ) - - if self.asg_transitions is None: - N = 768 - # self.asg_transitions = torch.FloatTensor(N, N).zero_() - self.asg_transitions = [] - - self.decoder = LexiconDecoder( - self.decoder_opts, - self.trie, - self.lm, - self.silence, - self.blank, - self.unk_word, - self.asg_transitions, - self.unit_lm, - ) - else: - assert args.unit_lm, "lexicon free decoding can only be done with a unit language model" - from flashlight.lib.text.decoder import LexiconFreeDecoder, LexiconFreeDecoderOptions - - d = {w: [[w]] for w in tgt_dict.symbols} - self.word_dict = create_word_dict(d) - self.lm = KenLM(args.kenlm_model, self.word_dict) - self.decoder_opts = LexiconFreeDecoderOptions( - beam_size=args.beam, - beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))), - beam_threshold=args.beam_threshold, - lm_weight=args.lm_weight, - sil_score=args.sil_weight, - log_add=False, - criterion_type=self.criterion_type, - ) - self.decoder = LexiconFreeDecoder( - self.decoder_opts, self.lm, self.silence, self.blank, [] - ) - - def get_timesteps(self, token_idxs: List[int]) -> List[int]: - """Returns frame numbers corresponding to every non-blank token. - - Parameters - ---------- - token_idxs : List[int] - IDs of decoded tokens. - - Returns - ------- - List[int] - Frame numbers corresponding to every non-blank token. 
- """ - timesteps = [] - for i, token_idx in enumerate(token_idxs): - if token_idx == self.blank: - continue - if i == 0 or token_idx != token_idxs[i-1]: - timesteps.append(i) - return timesteps - - def decode(self, emissions): - B, T, N = emissions.size() - hypos = [] - for b in range(B): - emissions_ptr = emissions.data_ptr() + 4 * b * emissions.stride(0) - results = self.decoder.decode(emissions_ptr, T, N) - - nbest_results = results[: self.nbest] - hypos.append( - [ - { - "tokens": self.get_tokens(result.tokens), - "score": result.score, - "timesteps": self.get_timesteps(result.tokens), - "words": [ - self.word_dict.get_entry(x) for x in result.words if x >= 0 - ], - } - for result in nbest_results - ] - ) - return hypos - - -FairseqLMState = namedtuple("FairseqLMState", ["prefix", "incremental_state", "probs"]) - - -class FairseqLM(LM): - def __init__(self, dictionary, model): - LM.__init__(self) - self.dictionary = dictionary - self.model = model - self.unk = self.dictionary.unk() - - self.save_incremental = False # this currently does not work properly - self.max_cache = 20_000 - - model.cuda() - model.eval() - model.make_generation_fast_() - - self.states = {} - self.stateq = deque() - - def start(self, start_with_nothing): - state = LMState() - prefix = torch.LongTensor([[self.dictionary.eos()]]) - incremental_state = {} if self.save_incremental else None - with torch.no_grad(): - res = self.model(prefix.cuda(), incremental_state=incremental_state) - probs = self.model.get_normalized_probs(res, log_probs=True, sample=None) - - if incremental_state is not None: - incremental_state = apply_to_sample(lambda x: x.cpu(), incremental_state) - self.states[state] = FairseqLMState( - prefix.numpy(), incremental_state, probs[0, -1].cpu().numpy() - ) - self.stateq.append(state) - - return state - - def score(self, state: LMState, token_index: int, no_cache: bool = False): - """ - Evaluate language model based on the current lm state and new word - Parameters: - ----------- - state: current lm state - token_index: index of the word - (can be lexicon index then you should store inside LM the - mapping between indices of lexicon and lm, or lm index of a word) - - Returns: - -------- - (LMState, float): pair of (new state, score for the current word) - """ - curr_state = self.states[state] - - def trim_cache(targ_size): - while len(self.stateq) > targ_size: - rem_k = self.stateq.popleft() - rem_st = self.states[rem_k] - rem_st = FairseqLMState(rem_st.prefix, None, None) - self.states[rem_k] = rem_st - - if curr_state.probs is None: - new_incremental_state = ( - curr_state.incremental_state.copy() - if curr_state.incremental_state is not None - else None - ) - with torch.no_grad(): - if new_incremental_state is not None: - new_incremental_state = apply_to_sample( - lambda x: x.cuda(), new_incremental_state - ) - elif self.save_incremental: - new_incremental_state = {} - - res = self.model( - torch.from_numpy(curr_state.prefix).cuda(), - incremental_state=new_incremental_state, - ) - probs = self.model.get_normalized_probs( - res, log_probs=True, sample=None - ) - - if new_incremental_state is not None: - new_incremental_state = apply_to_sample( - lambda x: x.cpu(), new_incremental_state - ) - - curr_state = FairseqLMState( - curr_state.prefix, new_incremental_state, probs[0, -1].cpu().numpy() - ) - - if not no_cache: - self.states[state] = curr_state - self.stateq.append(state) - - score = curr_state.probs[token_index].item() - - trim_cache(self.max_cache) - - outstate = state.child(token_index) 
- if outstate not in self.states and not no_cache: - prefix = np.concatenate( - [curr_state.prefix, torch.LongTensor([[token_index]])], -1 - ) - incr_state = curr_state.incremental_state - - self.states[outstate] = FairseqLMState(prefix, incr_state, None) - - if token_index == self.unk: - score = float("-inf") - - return outstate, score - - def finish(self, state: LMState): - """ - Evaluate eos for language model based on the current lm state - - Returns: - -------- - (LMState, float): pair of (new state, score for the current word) - """ - return self.score(state, self.dictionary.eos()) - - def empty_cache(self): - self.states = {} - self.stateq = deque() - gc.collect() - - -class W2lFairseqLMDecoder(W2lDecoder): - def __init__(self, args, tgt_dict): - super().__init__(args, tgt_dict) - - self.unit_lm = getattr(args, "unit_lm", False) - - self.lexicon = load_words(args.lexicon) if args.lexicon else None - self.idx_to_wrd = {} - - checkpoint = torch.load(args.kenlm_model, map_location="cpu") - - if "cfg" in checkpoint and checkpoint["cfg"] is not None: - lm_args = checkpoint["cfg"] - else: - lm_args = convert_namespace_to_omegaconf(checkpoint["args"]) - - with open_dict(lm_args.task): - lm_args.task.data = osp.dirname(args.kenlm_model) - - task = tasks.setup_task(lm_args.task) - model = task.build_model(lm_args.model) - model.load_state_dict(checkpoint["model"], strict=False) - - self.trie = Trie(self.vocab_size, self.silence) - - self.word_dict = task.dictionary - self.unk_word = self.word_dict.unk() - self.lm = FairseqLM(self.word_dict, model) - - if self.lexicon: - start_state = self.lm.start(False) - for i, (word, spellings) in enumerate(self.lexicon.items()): - if self.unit_lm: - word_idx = i - self.idx_to_wrd[i] = word - score = 0 - else: - word_idx = self.word_dict.index(word) - _, score = self.lm.score(start_state, word_idx, no_cache=True) - - for spelling in spellings: - spelling_idxs = [tgt_dict.index(token) for token in spelling] - assert ( - tgt_dict.unk() not in spelling_idxs - ), f"{spelling} {spelling_idxs}" - self.trie.insert(spelling_idxs, word_idx, score) - self.trie.smear(SmearingMode.MAX) - - self.decoder_opts = LexiconDecoderOptions( - beam_size=args.beam, - beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))), - beam_threshold=args.beam_threshold, - lm_weight=args.lm_weight, - word_score=args.word_score, - unk_score=args.unk_weight, - sil_score=args.sil_weight, - log_add=False, - criterion_type=self.criterion_type, - ) - - self.decoder = LexiconDecoder( - self.decoder_opts, - self.trie, - self.lm, - self.silence, - self.blank, - self.unk_word, - [], - self.unit_lm, - ) - else: - assert args.unit_lm, "lexicon free decoding can only be done with a unit language model" - from flashlight.lib.text.decoder import LexiconFreeDecoder, LexiconFreeDecoderOptions - - d = {w: [[w]] for w in tgt_dict.symbols} - self.word_dict = create_word_dict(d) - self.lm = KenLM(args.kenlm_model, self.word_dict) - self.decoder_opts = LexiconFreeDecoderOptions( - beam_size=args.beam, - beam_size_token=int(getattr(args, "beam_size_token", len(tgt_dict))), - beam_threshold=args.beam_threshold, - lm_weight=args.lm_weight, - sil_score=args.sil_weight, - log_add=False, - criterion_type=self.criterion_type, - ) - self.decoder = LexiconFreeDecoder( - self.decoder_opts, self.lm, self.silence, self.blank, [] - ) - - def decode(self, emissions): - B, T, N = emissions.size() - hypos = [] - - def idx_to_word(idx): - if self.unit_lm: - return self.idx_to_wrd[idx] - else: - return 
self.word_dict[idx] - - def make_hypo(result): - hypo = {"tokens": self.get_tokens(result.tokens), "score": result.score} - if self.lexicon: - hypo["words"] = [idx_to_word(x) for x in result.words if x >= 0] - return hypo - - for b in range(B): - emissions_ptr = emissions.data_ptr() + 4 * b * emissions.stride(0) - results = self.decoder.decode(emissions_ptr, T, N) - - nbest_results = results[: self.nbest] - hypos.append([make_hypo(result) for result in nbest_results]) - self.lm.empty_cache() - - return hypos diff --git a/kosmos-g/fairseq/examples/speech_synthesis/README.md b/kosmos-g/fairseq/examples/speech_synthesis/README.md deleted file mode 100644 index a31e7f68b..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/README.md +++ /dev/null @@ -1,38 +0,0 @@ -Speech Synthesis (S^2) -=== -[https://arxiv.org/abs/2109.06912](https://arxiv.org/abs/2109.06912) - -Speech synthesis with fairseq. - -## Features - -- Autoregressive and non-autoregressive models -- Multi-speaker synthesis -- Audio preprocessing (denoising, VAD, etc.) for less curated data -- Automatic metrics for model development -- Similar data configuration as [S2T](../speech_to_text/README.md) - - -## Examples -- [Single-speaker synthesis on LJSpeech](docs/ljspeech_example.md) -- [Multi-speaker synthesis on VCTK](docs/vctk_example.md) -- [Multi-speaker synthesis on Common Voice](docs/common_voice_example.md) - - -## Citation -Please cite as: -``` -@article{wang2021fairseqs2, - title={fairseq S\^{} 2: A Scalable and Integrable Speech Synthesis Toolkit}, - author={Wang, Changhan and Hsu, Wei-Ning and Adi, Yossi and Polyak, Adam and Lee, Ann and Chen, Peng-Jen and Gu, Jiatao and Pino, Juan}, - journal={arXiv preprint arXiv:2109.06912}, - year={2021} -} - -@inproceedings{ott2019fairseq, - title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling}, - author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli}, - booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations}, - year = {2019}, -} -``` diff --git a/kosmos-g/fairseq/examples/speech_synthesis/__init__.py b/kosmos-g/fairseq/examples/speech_synthesis/__init__.py deleted file mode 100644 index 626423691..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. diff --git a/kosmos-g/fairseq/examples/speech_synthesis/data_utils.py b/kosmos-g/fairseq/examples/speech_synthesis/data_utils.py deleted file mode 100644 index 3b2d079a9..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/data_utils.py +++ /dev/null @@ -1,344 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
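-
-# Usage sketch for the feature extractors defined below (hypothetical input
-# path; torchaudio is assumed to be installed, as elsewhere in this suite):
-#
-#   import torchaudio
-#   waveform, sample_rate = torchaudio.load("sample.wav")
-#   logmel = extract_logmel_spectrogram(waveform, sample_rate)  # T x n_mels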
- -import io -import os -from pathlib import Path -from typing import Optional, List, Dict -import zipfile -import tempfile -from dataclasses import dataclass -from itertools import groupby - -import torch -import torch.nn.functional as F -import numpy as np -from tqdm import tqdm - -from examples.speech_to_text.data_utils import load_tsv_to_dicts -from fairseq.data.audio.audio_utils import ( - TTSSpectrogram, TTSMelScale, parse_path, read_from_stored_zip, is_npy_data -) - - -def trim_or_pad_to_target_length( - data_1d_or_2d: np.ndarray, target_length: int -) -> np.ndarray: - assert len(data_1d_or_2d.shape) in {1, 2} - delta = data_1d_or_2d.shape[0] - target_length - if delta >= 0: # trim if being longer - data_1d_or_2d = data_1d_or_2d[: target_length] - else: # pad if being shorter - if len(data_1d_or_2d.shape) == 1: - data_1d_or_2d = np.concatenate( - [data_1d_or_2d, np.zeros(-delta)], axis=0 - ) - else: - data_1d_or_2d = np.concatenate( - [data_1d_or_2d, np.zeros((-delta, data_1d_or_2d.shape[1]))], - axis=0 - ) - return data_1d_or_2d - - -def extract_logmel_spectrogram( - waveform: torch.Tensor, sample_rate: int, - output_path: Optional[Path] = None, win_length: int = 1024, - hop_length: int = 256, n_fft: int = 1024, - win_fn: callable = torch.hann_window, n_mels: int = 80, - f_min: float = 0., f_max: float = 8000, eps: float = 1e-5, - overwrite: bool = False, target_length: Optional[int] = None -): - if output_path is not None and output_path.is_file() and not overwrite: - return - - spectrogram_transform = TTSSpectrogram( - n_fft=n_fft, win_length=win_length, hop_length=hop_length, - window_fn=win_fn - ) - mel_scale_transform = TTSMelScale( - n_mels=n_mels, sample_rate=sample_rate, f_min=f_min, f_max=f_max, - n_stft=n_fft // 2 + 1 - ) - spectrogram = spectrogram_transform(waveform) - mel_spec = mel_scale_transform(spectrogram) - logmel_spec = torch.clamp(mel_spec, min=eps).log() - assert len(logmel_spec.shape) == 3 and logmel_spec.shape[0] == 1 - logmel_spec = logmel_spec.squeeze().t() # D x T -> T x D - if target_length is not None: - logmel_spec = trim_or_pad_to_target_length(logmel_spec, target_length) - - if output_path is not None: - np.save(output_path.as_posix(), logmel_spec) - else: - return logmel_spec - - -def extract_pitch( - waveform: torch.Tensor, sample_rate: int, - output_path: Optional[Path] = None, hop_length: int = 256, - log_scale: bool = True, phoneme_durations: Optional[List[int]] = None -): - if output_path is not None and output_path.is_file(): - return - - try: - import pyworld - except ImportError: - raise ImportError("Please install PyWORLD: pip install pyworld") - - _waveform = waveform.squeeze(0).double().numpy() - pitch, t = pyworld.dio( - _waveform, sample_rate, frame_period=hop_length / sample_rate * 1000 - ) - pitch = pyworld.stonemask(_waveform, pitch, t, sample_rate) - - if phoneme_durations is not None: - pitch = trim_or_pad_to_target_length(pitch, sum(phoneme_durations)) - try: - from scipy.interpolate import interp1d - except ImportError: - raise ImportError("Please install SciPy: pip install scipy") - nonzero_ids = np.where(pitch != 0)[0] - if len(nonzero_ids) == 0: - print((f"{output_path} has all empty values in the pitch contour")) - return - elif len(nonzero_ids) == 1: - print((f"{output_path} has only one non-zero values in the pitch contour")) - return - else: - interp_fn = interp1d( - nonzero_ids, - pitch[nonzero_ids], - fill_value=(pitch[nonzero_ids[0]], pitch[nonzero_ids[-1]]), - bounds_error=False, - ) - pitch = interp_fn(np.arange(0, 
len(pitch))) - d_cumsum = np.cumsum(np.concatenate([np.array([0]), phoneme_durations])) - pitch = np.array( - [ - np.mean(pitch[d_cumsum[i-1]: d_cumsum[i]]) - for i in range(1, len(d_cumsum)) - ] - ) - assert len(pitch) == len(phoneme_durations) - - if log_scale: - pitch = np.log(pitch + 1) - - if output_path is not None: - np.save(output_path.as_posix(), pitch) - else: - return pitch - - -def extract_energy( - waveform: torch.Tensor, output_path: Optional[Path] = None, - hop_length: int = 256, n_fft: int = 1024, log_scale: bool = True, - phoneme_durations: Optional[List[int]] = None -): - if output_path is not None and output_path.is_file(): - return - - assert len(waveform.shape) == 2 and waveform.shape[0] == 1 - waveform = waveform.view(1, 1, waveform.shape[1]) - waveform = F.pad( - waveform.unsqueeze(1), [n_fft // 2, n_fft // 2, 0, 0], - mode="reflect" - ) - waveform = waveform.squeeze(1) - - fourier_basis = np.fft.fft(np.eye(n_fft)) - cutoff = int((n_fft / 2 + 1)) - fourier_basis = np.vstack( - [np.real(fourier_basis[:cutoff, :]), - np.imag(fourier_basis[:cutoff, :])] - ) - - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - forward_transform = F.conv1d( - waveform, forward_basis, stride=hop_length, padding=0 - ) - - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - magnitude = torch.sqrt(real_part ** 2 + imag_part ** 2) - energy = torch.norm(magnitude, dim=1).squeeze(0).numpy() - - if phoneme_durations is not None: - energy = trim_or_pad_to_target_length(energy, sum(phoneme_durations)) - d_cumsum = np.cumsum(np.concatenate([np.array([0]), phoneme_durations])) - energy = np.array( - [ - np.mean(energy[d_cumsum[i - 1]: d_cumsum[i]]) - for i in range(1, len(d_cumsum)) - ] - ) - assert len(energy) == len(phoneme_durations) - - if log_scale: - energy = np.log(energy + 1) - - if output_path is not None: - np.save(output_path.as_posix(), energy) - else: - return energy - - -def get_global_cmvn(feature_root: Path, output_path: Optional[Path] = None): - mean_x, mean_x2, n_frames = None, None, 0 - feature_paths = feature_root.glob("*.npy") - for p in tqdm(feature_paths): - with open(p, 'rb') as f: - frames = np.load(f).squeeze() - - n_frames += frames.shape[0] - - cur_mean_x = frames.sum(axis=0) - if mean_x is None: - mean_x = cur_mean_x - else: - mean_x += cur_mean_x - - cur_mean_x2 = (frames ** 2).sum(axis=0) - if mean_x2 is None: - mean_x2 = cur_mean_x2 - else: - mean_x2 += cur_mean_x2 - - mean_x /= n_frames - mean_x2 /= n_frames - var_x = mean_x2 - mean_x ** 2 - std_x = np.sqrt(np.maximum(var_x, 1e-10)) - - if output_path is not None: - with open(output_path, 'wb') as f: - np.savez(f, mean=mean_x, std=std_x) - else: - return {"mean": mean_x, "std": std_x} - - -def ipa_phonemize(text, lang="en-us", use_g2p=False): - if use_g2p: - assert lang == "en-us", "g2pE phonemizer only works for en-us" - try: - from g2p_en import G2p - g2p = G2p() - return " ".join("|" if p == " " else p for p in g2p(text)) - except ImportError: - raise ImportError( - "Please install phonemizer: pip install g2p_en" - ) - else: - try: - from phonemizer import phonemize - from phonemizer.separator import Separator - return phonemize( - text, backend='espeak', language=lang, - separator=Separator(word="| ", phone=" ") - ) - except ImportError: - raise ImportError( - "Please install phonemizer: pip install phonemizer" - ) - - -@dataclass -class ForceAlignmentInfo(object): - tokens: List[str] - frame_durations: List[int] - start_sec: Optional[float] - end_sec: 
Optional[float] - - -def get_mfa_alignment_by_sample_id( - textgrid_zip_path: str, sample_id: str, sample_rate: int, - hop_length: int, silence_phones: List[str] = ("sil", "sp", "spn") -) -> ForceAlignmentInfo: - try: - import tgt - except ImportError: - raise ImportError("Please install TextGridTools: pip install tgt") - - filename = f"{sample_id}.TextGrid" - out_root = Path(tempfile.gettempdir()) - tgt_path = out_root / filename - with zipfile.ZipFile(textgrid_zip_path) as f_zip: - f_zip.extract(filename, path=out_root) - textgrid = tgt.io.read_textgrid(tgt_path.as_posix()) - os.remove(tgt_path) - - phones, frame_durations = [], [] - start_sec, end_sec, end_idx = 0, 0, 0 - for t in textgrid.get_tier_by_name("phones")._objects: - s, e, p = t.start_time, t.end_time, t.text - # Trim leading silences - if len(phones) == 0: - if p in silence_phones: - continue - else: - start_sec = s - phones.append(p) - if p not in silence_phones: - end_sec = e - end_idx = len(phones) - r = sample_rate / hop_length - frame_durations.append(int(np.round(e * r) - np.round(s * r))) - # Trim tailing silences - phones = phones[:end_idx] - frame_durations = frame_durations[:end_idx] - - return ForceAlignmentInfo( - tokens=phones, frame_durations=frame_durations, start_sec=start_sec, - end_sec=end_sec - ) - - -def get_mfa_alignment( - textgrid_zip_path: str, sample_ids: List[str], sample_rate: int, - hop_length: int -) -> Dict[str, ForceAlignmentInfo]: - return { - i: get_mfa_alignment_by_sample_id( - textgrid_zip_path, i, sample_rate, hop_length - ) for i in tqdm(sample_ids) - } - - -def get_unit_alignment( - id_to_unit_tsv_path: str, sample_ids: List[str] -) -> Dict[str, ForceAlignmentInfo]: - id_to_units = { - e["id"]: e["units"] for e in load_tsv_to_dicts(id_to_unit_tsv_path) - } - id_to_units = {i: id_to_units[i].split() for i in sample_ids} - id_to_units_collapsed = { - i: [uu for uu, _ in groupby(u)] for i, u in id_to_units.items() - } - id_to_durations = { - i: [len(list(g)) for _, g in groupby(u)] for i, u in id_to_units.items() - } - - return { - i: ForceAlignmentInfo( - tokens=id_to_units_collapsed[i], frame_durations=id_to_durations[i], - start_sec=None, end_sec=None - ) - for i in sample_ids - } - - -def get_feature_value_min_max(feature_paths: List[str]): - v_min, v_max = 1e-8, -1e-8 - for p in tqdm(feature_paths): - _path, slice_ptr = parse_path(p) - assert len(slice_ptr) == 2 - byte_data = read_from_stored_zip(_path, slice_ptr[0], slice_ptr[1]) - assert is_npy_data(byte_data) - path_or_fp = io.BytesIO(byte_data) - features = np.load(path_or_fp).squeeze() - v_min = min(v_min, features.min().item()) - v_max = max(v_max, features.max().item()) - return v_min, v_max diff --git a/kosmos-g/fairseq/examples/speech_synthesis/docs/common_voice_example.md b/kosmos-g/fairseq/examples/speech_synthesis/docs/common_voice_example.md deleted file mode 100644 index 1c0eef69a..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/docs/common_voice_example.md +++ /dev/null @@ -1,67 +0,0 @@ -[[Back]](..) - -# Common Voice - -[Common Voice](https://commonvoice.mozilla.org/en/datasets) is a public domain speech corpus with 11.2K hours of read -speech in 76 languages (the latest version 7.0). We provide examples for building -[Transformer](https://arxiv.org/abs/1809.08895) models on this dataset. - - -## Data preparation -[Download](https://commonvoice.mozilla.org/en/datasets) and unpack Common Voice v4 to a path `${DATA_ROOT}/${LANG_ID}`. 
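-
-Before running the scripts below, it can help to confirm the unpacked layout; a
-small sketch (the split TSV names are assumptions based on the standard Common
-Voice release format):
-```python
-from pathlib import Path
-
-data_root, lang_id = Path("/path/to/cv4"), "en"  # hypothetical values
-for tsv in ("train.tsv", "dev.tsv", "test.tsv", "validated.tsv"):
-    assert (data_root / lang_id / tsv).is_file(), f"missing {tsv}"
-assert (data_root / lang_id / "clips").is_dir(), "missing clips/ directory"
-```
-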
-Create splits and generate audio manifests with
-```bash
-python -m examples.speech_synthesis.preprocessing.get_common_voice_audio_manifest \
-  --data-root ${DATA_ROOT} \
-  --lang ${LANG_ID} \
-  --output-manifest-root ${AUDIO_MANIFEST_ROOT} --convert-to-wav
-```
-
-To denoise audio and trim leading/trailing silence using signal processing based VAD, run
-```bash
-for SPLIT in dev test train; do
-  python -m examples.speech_synthesis.preprocessing.denoise_and_vad_audio \
-    --audio-manifest ${AUDIO_MANIFEST_ROOT}/${SPLIT}.audio.tsv \
-    --output-dir ${PROCESSED_DATA_ROOT} \
-    --denoise --vad --vad-agg-level 2
-done
-```
-
-which generates a new audio TSV manifest under `${PROCESSED_DATA_ROOT}` with updated paths to the processed audio and
-a new column for SNR.
-
-To do filtering by CER, follow the [Automatic Evaluation](../docs/ljspeech_example.md#automatic-evaluation) section to
-run the ASR model (add `--eval-target` to `get_eval_manifest` for evaluation on the reference audio; add `--err-unit char`
-to `eval_asr` to compute CER instead of WER). The example-level CER is saved to
-`${EVAL_OUTPUT_ROOT}/uer_cer.${SPLIT}.tsv`.
-
-Then, extract log-Mel spectrograms, generate feature manifest and create data configuration YAML with
-```bash
-python -m examples.speech_synthesis.preprocessing.get_feature_manifest \
-  --audio-manifest-root ${AUDIO_MANIFEST_ROOT} \
-  --output-root ${FEATURE_MANIFEST_ROOT} \
-  --ipa-vocab --lang ${LANG_ID} \
-  --snr-threshold 15 \
-  --cer-threshold 0.1 --cer-tsv-path ${EVAL_OUTPUT_ROOT}/uer_cer.${SPLIT}.tsv
-```
-where we use phoneme inputs (`--ipa-vocab`) as an example. For sample filtering, we set the SNR and CER thresholds
-to 15 and 10%, respectively.
-
-
-## Training
-(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#transformer).)
-
-
-## Inference
-(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#inference).)
-
-## Automatic Evaluation
-(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#automatic-evaluation).)
-
-## Results
-
-| Language | Speakers | --arch | Params | Test MCD | Model |
-|---|---|---|---|---|---|
-| English | 200 | tts_transformer | 54M | 3.8 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2/cv4_en200_transformer_phn.tar) |
-
-[[Back]](..)
diff --git a/kosmos-g/fairseq/examples/speech_synthesis/docs/ljspeech_example.md b/kosmos-g/fairseq/examples/speech_synthesis/docs/ljspeech_example.md
deleted file mode 100644
index 90c524fac..000000000
--- a/kosmos-g/fairseq/examples/speech_synthesis/docs/ljspeech_example.md
+++ /dev/null
@@ -1,138 +0,0 @@
-[[Back]](..)
-
-# LJSpeech
-
-[LJSpeech](https://keithito.com/LJ-Speech-Dataset) is a public domain TTS
-corpus with around 24 hours of English speech sampled at 22.05kHz. We provide examples for building
-[Transformer](https://arxiv.org/abs/1809.08895) and [FastSpeech 2](https://arxiv.org/abs/2006.04558)
-models on this dataset.
-
-
-## Data preparation
-
-Download data, create splits and generate audio manifests with
-```bash
-python -m examples.speech_synthesis.preprocessing.get_ljspeech_audio_manifest \
-  --output-data-root ${AUDIO_DATA_ROOT} \
-  --output-manifest-root ${AUDIO_MANIFEST_ROOT}
-```
-
-Then, extract log-Mel spectrograms, generate feature manifest and create data configuration YAML with
-```bash
-python -m examples.speech_synthesis.preprocessing.get_feature_manifest \
-  --audio-manifest-root ${AUDIO_MANIFEST_ROOT} \
-  --output-root ${FEATURE_MANIFEST_ROOT} \
-  --ipa-vocab --use-g2p
-```
-where we use phoneme inputs (`--ipa-vocab --use-g2p`) as an example.
-
-FastSpeech 2 additionally requires frame durations, pitch and energy as auxiliary training targets.
-Add `--add-fastspeech-targets` to include these fields in the feature manifests. We get frame durations either from
-phoneme-level force-alignment or frame-level pseudo-text unit sequences. They should be pre-computed and specified via:
-- `--textgrid-zip ${TEXT_GRID_ZIP_PATH}` for a ZIP file, inside which there is one
-  [TextGrid](https://www.fon.hum.uva.nl/praat/manual/TextGrid.html) file per sample to provide force-alignment info.
-- `--id-to-units-tsv ${ID_TO_UNIT_TSV}` for a TSV file, where there are 2 columns for sample ID and
-  space-delimited pseudo-text unit sequence, respectively.
-
-For your convenience, we provide pre-computed
-[force-alignment](https://dl.fbaipublicfiles.com/fairseq/s2/ljspeech_mfa.zip) from
-[Montreal Forced Aligner](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) and
-[pseudo-text units](s3://dl.fbaipublicfiles.com/fairseq/s2/ljspeech_hubert.tsv) from
-[HuBERT](https://github.com/pytorch/fairseq/tree/main/examples/hubert). You can also generate them yourself using
-different software or models.
-
-
-## Training
-#### Transformer
-```bash
-fairseq-train ${FEATURE_MANIFEST_ROOT} --save-dir ${SAVE_DIR} \
-  --config-yaml config.yaml --train-subset train --valid-subset dev \
-  --num-workers 4 --max-tokens 30000 --max-update 200000 \
-  --task text_to_speech --criterion tacotron2 --arch tts_transformer \
-  --clip-norm 5.0 --n-frames-per-step 4 --bce-pos-weight 5.0 \
-  --dropout 0.1 --attention-dropout 0.1 --activation-dropout 0.1 \
-  --encoder-normalize-before --decoder-normalize-before \
-  --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
-  --seed 1 --update-freq 8 --eval-inference --best-checkpoint-metric mcd_loss
-```
-where `SAVE_DIR` is the checkpoint root path. We set `--update-freq 8` to simulate 8 GPUs with 1 GPU. You may want to
-update it accordingly when using more than 1 GPU (see the sketch below).
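-
-The effective batch size is proportional to `n_gpus * update_freq`; a tiny
-sketch of the adjustment (illustrative only, not part of the recipe):
-```python
-def pick_update_freq(n_gpus: int, simulated_gpus: int = 8) -> int:
-    """Keep the effective batch fixed at `simulated_gpus` GPU-equivalents."""
-    assert simulated_gpus % n_gpus == 0, "choose a GPU count dividing 8"
-    return simulated_gpus // n_gpus
-
-print(pick_update_freq(1))  # 8, matching --update-freq 8 above
-print(pick_update_freq(4))  # 2
-```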
-
-#### FastSpeech2
-```bash
-fairseq-train ${FEATURE_MANIFEST_ROOT} --save-dir ${SAVE_DIR} \
-  --config-yaml config.yaml --train-subset train --valid-subset dev \
-  --num-workers 4 --max-sentences 6 --max-update 200000 \
-  --task text_to_speech --criterion fastspeech2 --arch fastspeech2 \
-  --clip-norm 5.0 --n-frames-per-step 1 \
-  --dropout 0.1 --attention-dropout 0.1 --activation-dropout 0.1 \
-  --encoder-normalize-before --decoder-normalize-before \
-  --optimizer adam --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \
-  --seed 1 --update-freq 8 --eval-inference --best-checkpoint-metric mcd_loss
-```
-
-
-## Inference
-Average the last 5 checkpoints, then generate the test split spectrograms and waveforms using the default Griffin-Lim vocoder:
-```bash
-SPLIT=test
-CHECKPOINT_NAME=avg_last_5
-CHECKPOINT_PATH=${SAVE_DIR}/checkpoint_${CHECKPOINT_NAME}.pt
-python scripts/average_checkpoints.py --inputs ${SAVE_DIR} \
-  --num-epoch-checkpoints 5 \
-  --output ${CHECKPOINT_PATH}
-
-python -m examples.speech_synthesis.generate_waveform ${FEATURE_MANIFEST_ROOT} \
-  --config-yaml config.yaml --gen-subset ${SPLIT} --task text_to_speech \
-  --path ${CHECKPOINT_PATH} --max-tokens 50000 --spec-bwd-max-iter 32 \
-  --dump-waveforms
-```
-which dumps files (waveform, feature, attention plot, etc.) to `${SAVE_DIR}/generate-${CHECKPOINT_NAME}-${SPLIT}`. To
-re-synthesize target waveforms for automatic evaluation, add `--dump-target`.
-
-## Automatic Evaluation
-To start with, generate the manifest for synthetic speech, which will be taken as input by the evaluation scripts.
-```bash
-python -m examples.speech_synthesis.evaluation.get_eval_manifest \
-  --generation-root ${SAVE_DIR}/generate-${CHECKPOINT_NAME}-${SPLIT} \
-  --audio-manifest ${AUDIO_MANIFEST_ROOT}/${SPLIT}.audio.tsv \
-  --output-path ${EVAL_OUTPUT_ROOT}/eval.tsv \
-  --vocoder griffin_lim --sample-rate 22050 --audio-format flac \
-  --use-resynthesized-target
-```
-Speech recognition (ASR) models usually operate at lower sample rates (e.g. 16kHz). For the WER/CER metric,
-you may need to resample the audio accordingly --- add `--output-sample-rate 16000` for `generate_waveform.py` and
-use `--sample-rate 16000` for `get_eval_manifest.py`.
-
-
-#### WER/CER metric
-We use a wav2vec 2.0 ASR model as an example. [Download](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec)
-the model checkpoint and dictionary, then compute WER/CER with
-```bash
-python -m examples.speech_synthesis.evaluation.eval_asr \
-  --audio-header syn --text-header text --err-unit char --split ${SPLIT} \
-  --w2v-ckpt ${WAV2VEC2_CHECKPOINT_PATH} --w2v-dict-dir ${WAV2VEC2_DICT_DIR} \
-  --raw-manifest ${EVAL_OUTPUT_ROOT}/eval_16khz.tsv --asr-dir ${EVAL_OUTPUT_ROOT}/asr
-```
-(A sketch of how `--err-unit` changes the tokenization appears at the end of this page.)
-
-#### MCD/MSD metric
-```bash
-python -m examples.speech_synthesis.evaluation.eval_sp \
-  ${EVAL_OUTPUT_ROOT}/eval.tsv --mcd --msd
-```
-
-#### F0 metrics
-```bash
-python -m examples.speech_synthesis.evaluation.eval_f0 \
-  ${EVAL_OUTPUT_ROOT}/eval.tsv --gpe --vde --ffe
-```
-
-
-## Results
-
-| --arch | Params | Test MCD | Model |
-|---|---|---|---|
-| tts_transformer | 54M | 3.8 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2/ljspeech_transformer_phn.tar) |
-| fastspeech2 | 41M | 3.8 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2/ljspeech_fastspeech2_phn.tar) |
-
-[[Back]](..)
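-
-A side note on `--err-unit` (referenced in the WER/CER section above): the ASR
-evaluation script strips the trailing `(None-<index>)` marker from each
-hypothesis/reference line, and the unit flag only changes how the remaining
-text is tokenized before edit distance. A minimal sketch with a hypothetical
-line:
-```python
-import re
-
-line = "THE CAT SAT (None-0)"
-text = re.sub(r" \(.*\)$", "", line.rstrip())
-word_tokens = text.split()  # WER units: ["THE", "CAT", "SAT"]
-char_tokens = list(text)    # CER units: one entry per character
-```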
diff --git a/kosmos-g/fairseq/examples/speech_synthesis/docs/vctk_example.md b/kosmos-g/fairseq/examples/speech_synthesis/docs/vctk_example.md deleted file mode 100644 index 6808256d4..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/docs/vctk_example.md +++ /dev/null @@ -1,61 +0,0 @@ -[[Back]](..) - -# VCTK - -[VCTK](https://datashare.ed.ac.uk/handle/10283/3443) is an open English speech corpus. We provide examples -for building [Transformer](https://arxiv.org/abs/1809.08895) models on this dataset. - - -## Data preparation -Download data, create splits and generate audio manifests with -```bash -python -m examples.speech_synthesis.preprocessing.get_vctk_audio_manifest \ - --output-data-root ${AUDIO_DATA_ROOT} \ - --output-manifest-root ${AUDIO_MANIFEST_ROOT} -``` - -To denoise audio and trim leading/trailing silence using signal processing based VAD, run -```bash -for SPLIT in dev test train; do - python -m examples.speech_synthesis.preprocessing.denoise_and_vad_audio \ - --audio-manifest ${AUDIO_MANIFEST_ROOT}/${SPLIT}.audio.tsv \ - --output-dir ${PROCESSED_DATA_ROOT} \ - --denoise --vad --vad-agg-level 3 -done -``` -which generates a new audio TSV manifest under `${PROCESSED_DATA_ROOT}` with updated path to the processed audio and -a new column for SNR. - -To do filtering by CER, follow the [Automatic Evaluation](../docs/ljspeech_example.md#automatic-evaluation) section to -run ASR model (add `--eval-target` to `get_eval_manifest` for evaluation on the reference audio; add `--err-unit char` -to `eval_asr` to compute CER instead of WER). The example-level CER is saved to -`${EVAL_OUTPUT_ROOT}/uer_cer.${SPLIT}.tsv`. - -Then, extract log-Mel spectrograms, generate feature manifest and create data configuration YAML with -```bash -python -m examples.speech_synthesis.preprocessing.get_feature_manifest \ - --audio-manifest-root ${PROCESSED_DATA_ROOT} \ - --output-root ${FEATURE_MANIFEST_ROOT} \ - --ipa-vocab --use-g2p \ - --snr-threshold 15 \ - --cer-threshold 0.1 --cer-tsv-path ${EVAL_OUTPUT_ROOT}/uer_cer.${SPLIT}.tsv -``` -where we use phoneme inputs (`--ipa-vocab --use-g2p`) as example. For sample filtering, we set the SNR and CER threshold -to 15 and 10%, respectively. - -## Training -(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#transformer).) - -## Inference -(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#inference).) - -## Automatic Evaluation -(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#automatic-evaluation).) - -## Results - -| --arch | Params | Test MCD | Model | -|---|---|---|---| -| tts_transformer | 54M | 3.4 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2/vctk_transformer_phn.tar) | - -[[Back]](..) diff --git a/kosmos-g/fairseq/examples/speech_synthesis/evaluation/__init__.py b/kosmos-g/fairseq/examples/speech_synthesis/evaluation/__init__.py deleted file mode 100644 index 626423691..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/evaluation/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
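-
-# Worked example of the letter-level normalization performed by
-# preprocess_text() in eval_asr.py, the next file in this diff:
-#   "Hello, world!" -> "H E L L O | W O R L D"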
diff --git a/kosmos-g/fairseq/examples/speech_synthesis/evaluation/eval_asr.py b/kosmos-g/fairseq/examples/speech_synthesis/evaluation/eval_asr.py deleted file mode 100644 index 005a11bfb..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/evaluation/eval_asr.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import editdistance -import re -import shutil -import soundfile as sf -import subprocess -from pathlib import Path - -from examples.speech_to_text.data_utils import load_tsv_to_dicts - - -def preprocess_text(text): - text = "|".join(re.sub(r"[^A-Z' ]", " ", text.upper()).split()) - text = " ".join(text) - return text - - -def prepare_w2v_data( - dict_dir, sample_rate, label, audio_paths, texts, split, data_dir -): - data_dir.mkdir(parents=True, exist_ok=True) - shutil.copyfile( - dict_dir / f"dict.{label}.txt", - data_dir / f"dict.{label}.txt" - ) - with open(data_dir / f"{split}.tsv", "w") as f: - f.write("/\n") - for audio_path in audio_paths: - wav, sr = sf.read(audio_path) - assert sr == sample_rate, f"{sr} != sample_rate" - nsample = len(wav) - f.write(f"{audio_path}\t{nsample}\n") - with open(data_dir / f"{split}.{label}", "w") as f: - for text in texts: - text = preprocess_text(text) - f.write(f"{text}\n") - - -def run_asr(asr_dir, split, w2v_ckpt, w2v_label, res_dir): - """ - results will be saved at - {res_dir}/{ref,hypo}.word-{w2v_ckpt.filename}-{split}.txt - """ - cmd = ["python", "-m", "examples.speech_recognition.infer"] - cmd += [str(asr_dir.resolve())] - cmd += ["--task", "audio_finetuning", "--nbest", "1", "--quiet"] - cmd += ["--w2l-decoder", "viterbi", "--criterion", "ctc"] - cmd += ["--post-process", "letter", "--max-tokens", "4000000"] - cmd += ["--path", str(w2v_ckpt.resolve()), "--labels", w2v_label] - cmd += ["--gen-subset", split, "--results-path", str(res_dir.resolve())] - - print(f"running cmd:\n{' '.join(cmd)}") - subprocess.run(cmd, check=True) - - -def compute_error_rate(hyp_wrd_path, ref_wrd_path, unit="word"): - """each line is "<text> (None-<index>)" """ - tokenize_line = { - "word": lambda x: re.sub(r" \(.*\)$", "", x.rstrip()).split(), - "char": lambda x: list(re.sub(r" \(.*\)$", "", x.rstrip())) - }.get(unit) - if tokenize_line is None: - raise ValueError(f"{unit} not supported") - - inds = [int(re.sub(r"\D*(\d*)\D*", r"\1", line)) - for line in open(hyp_wrd_path)] - hyps = [tokenize_line(line) for line in open(hyp_wrd_path)] - refs = [tokenize_line(line) for line in open(ref_wrd_path)] - assert(len(hyps) == len(refs)) - err_rates = [ - editdistance.eval(hyp, ref) / len(ref) for hyp, ref in zip(hyps, refs) - ] - ind_to_err_rates = {i: e for i, e in zip(inds, err_rates)} - return ind_to_err_rates - - -def main(args): - samples = load_tsv_to_dicts(args.raw_manifest) - ids = [ - sample[args.id_header] if args.id_header else "" for sample in samples - ] - audio_paths = [sample[args.audio_header] for sample in samples] - texts = [sample[args.text_header] for sample in samples] - - prepare_w2v_data( - args.w2v_dict_dir, - args.w2v_sample_rate, - args.w2v_label, - audio_paths, - texts, - args.split, - args.asr_dir - ) - run_asr(args.asr_dir, args.split, args.w2v_ckpt, args.w2v_label, args.asr_dir) - ind_to_err_rates = compute_error_rate( - args.asr_dir / f"hypo.word-{args.w2v_ckpt.name}-{args.split}.txt", - args.asr_dir / 
f"ref.word-{args.w2v_ckpt.name}-{args.split}.txt", - args.err_unit, - ) - - uer_path = args.asr_dir / f"uer_{args.err_unit}.{args.split}.tsv" - with open(uer_path, "w") as f: - f.write("id\taudio\tuer\n") - for ind, (id_, audio_path) in enumerate(zip(ids, audio_paths)): - f.write(f"{id_}\t{audio_path}\t{ind_to_err_rates[ind]:.4f}\n") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--raw-manifest", required=True, type=Path) - parser.add_argument("--asr-dir", required=True, type=Path) - parser.add_argument("--id-header", default="id", type=str) - parser.add_argument("--audio-header", default="audio", type=str) - parser.add_argument("--text-header", default="src_text", type=str) - parser.add_argument("--split", default="raw", type=str) - parser.add_argument("--w2v-ckpt", required=True, type=Path) - parser.add_argument("--w2v-dict-dir", required=True, type=Path) - parser.add_argument("--w2v-sample-rate", default=16000, type=int) - parser.add_argument("--w2v-label", default="ltr", type=str) - parser.add_argument("--err-unit", default="word", type=str) - args = parser.parse_args() - - main(args) diff --git a/kosmos-g/fairseq/examples/speech_synthesis/evaluation/eval_f0.py b/kosmos-g/fairseq/examples/speech_synthesis/evaluation/eval_f0.py deleted file mode 100644 index df721d683..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/evaluation/eval_f0.py +++ /dev/null @@ -1,266 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Signal processing-based evaluation using waveforms -""" -import numpy as np -import os.path as op - -import torchaudio -import tqdm -from tabulate import tabulate - -from examples.speech_synthesis.utils import ( - gross_pitch_error, voicing_decision_error, f0_frame_error -) -from examples.speech_synthesis.evaluation.eval_sp import load_eval_spec - - -def difference_function(x, n, tau_max): - """ - Compute difference function of data x. This solution is implemented directly - with Numpy fft. - - - :param x: audio data - :param n: length of data - :param tau_max: integration window size - :return: difference function - :rtype: list - """ - - x = np.array(x, np.float64) - w = x.size - tau_max = min(tau_max, w) - x_cumsum = np.concatenate((np.array([0.]), (x * x).cumsum())) - size = w + tau_max - p2 = (size // 32).bit_length() - nice_numbers = (16, 18, 20, 24, 25, 27, 30, 32) - size_pad = min(x * 2 ** p2 for x in nice_numbers if x * 2 ** p2 >= size) - fc = np.fft.rfft(x, size_pad) - conv = np.fft.irfft(fc * fc.conjugate())[:tau_max] - return x_cumsum[w:w - tau_max:-1] + x_cumsum[w] - x_cumsum[:tau_max] - \ - 2 * conv - - -def cumulative_mean_normalized_difference_function(df, n): - """ - Compute cumulative mean normalized difference function (CMND). - - :param df: Difference function - :param n: length of data - :return: cumulative mean normalized difference function - :rtype: list - """ - - # scipy method - cmn_df = df[1:] * range(1, n) / np.cumsum(df[1:]).astype(float) - return np.insert(cmn_df, 0, 1) - - -def get_pitch(cmdf, tau_min, tau_max, harmo_th=0.1): - """ - Return fundamental period of a frame based on CMND function. 
-
-    :param cmdf: Cumulative Mean Normalized Difference function
-    :param tau_min: minimum period for speech
-    :param tau_max: maximum period for speech
-    :param harmo_th: harmonicity threshold to determine if it is necessary to
-    compute pitch frequency
-    :return: fundamental period if there are values under the threshold, 0 otherwise
-    :rtype: float
-    """
-    tau = tau_min
-    while tau < tau_max:
-        if cmdf[tau] < harmo_th:
-            while tau + 1 < tau_max and cmdf[tau + 1] < cmdf[tau]:
-                tau += 1
-            return tau
-        tau += 1
-
-    return 0    # if unvoiced
-
-
-def compute_yin(sig, sr, w_len=512, w_step=256, f0_min=100, f0_max=500,
-                harmo_thresh=0.1):
-    """
-
-    Run the YIN algorithm. Return fundamental frequency and harmonic rate.
-
-    https://github.com/NVIDIA/mellotron adaptation of
-    https://github.com/patriceguyot/Yin
-
-    :param sig: Audio signal (list of float)
-    :param sr: sampling rate (int)
-    :param w_len: size of the analysis window (samples)
-    :param w_step: size of the lag between two consecutive windows (samples)
-    :param f0_min: Minimum fundamental frequency that can be detected (hertz)
-    :param f0_max: Maximum fundamental frequency that can be detected (hertz)
-    :param harmo_thresh: Threshold of detection. The algorithm returns the
-    first minimum of the CMND function below this threshold.
-
-    :returns:
-
-    * pitches: list of fundamental frequencies,
-    * harmonic_rates: list of harmonic rate values for each fundamental
-    frequency value (= confidence value)
-    * argmins: minima of the Cumulative Mean Normalized Difference function
-    * times: list of the time of each estimation
-    :rtype: tuple
-    """
-
-    tau_min = int(sr / f0_max)
-    tau_max = int(sr / f0_min)
-
-    # time values for each analysis window
-    time_scale = range(0, len(sig) - w_len, w_step)
-    times = [t/float(sr) for t in time_scale]
-    frames = [sig[t:t + w_len] for t in time_scale]
-
-    pitches = [0.0] * len(time_scale)
-    harmonic_rates = [0.0] * len(time_scale)
-    argmins = [0.0] * len(time_scale)
-
-    for i, frame in enumerate(frames):
-        # Compute YIN
-        df = difference_function(frame, w_len, tau_max)
-        cm_df = cumulative_mean_normalized_difference_function(df, tau_max)
-        p = get_pitch(cm_df, tau_min, tau_max, harmo_thresh)
-
-        # Get results
-        if np.argmin(cm_df) > tau_min:
-            argmins[i] = float(sr / np.argmin(cm_df))
-        if p != 0:  # A pitch was found
-            pitches[i] = float(sr / p)
-            harmonic_rates[i] = cm_df[p]
-        else:  # No pitch, but we compute a value of the harmonic rate
-            harmonic_rates[i] = min(cm_df)
-
-    return pitches, harmonic_rates, argmins, times
-
-
-def extract_f0(samples):
-    f0_samples = []
-    for sample in tqdm.tqdm(samples):
-        if not op.isfile(sample["ref"]) or not op.isfile(sample["syn"]):
-            f0_samples.append(None)
-            continue
-
-        # assume single channel
-        yref, sr = torchaudio.load(sample["ref"])
-        ysyn, _sr = torchaudio.load(sample["syn"])
-        yref, ysyn = yref[0], ysyn[0]
-        assert sr == _sr, f"{sr} != {_sr}"
-
-        yref_f0 = compute_yin(yref, sr)
-        ysyn_f0 = compute_yin(ysyn, sr)
-
-        f0_samples += [
-            {
-                "ref": yref_f0,
-                "syn": ysyn_f0
-            }
-        ]
-
-    return f0_samples
-
-
-def eval_f0_error(samples, distortion_fn):
-    results = []
-    for sample in tqdm.tqdm(samples):
-        if sample is None:
-            results.append(None)
-            continue
-        # assume single channel
-        yref_f, _, _, yref_t = sample["ref"]
-        ysyn_f, _, _, ysyn_t = sample["syn"]
-
-        yref_f = np.array(yref_f)
-        yref_t = np.array(yref_t)
-        ysyn_f = np.array(ysyn_f)
-        ysyn_t = np.array(ysyn_t)
-
-        distortion = distortion_fn(yref_t, yref_f, ysyn_t, ysyn_f)
-        results.append((distortion.item(),
len(yref_f), - len(ysyn_f) - )) - return results - - -def eval_gross_pitch_error(samples): - return eval_f0_error(samples, gross_pitch_error) - - -def eval_voicing_decision_error(samples): - return eval_f0_error(samples, voicing_decision_error) - - -def eval_f0_frame_error(samples): - return eval_f0_error(samples, f0_frame_error) - - -def print_results(results, show_bin): - results = np.array(list(filter(lambda x: x is not None, results))) - - np.set_printoptions(precision=3) - - def _print_result(results): - res = { - "nutt": len(results), - "error": results[:, 0].mean(), - "std": results[:, 0].std(), - "dur_ref": int(results[:, 1].sum()), - "dur_syn": int(results[:, 2].sum()), - } - print(tabulate([res.values()], res.keys(), floatfmt=".4f")) - - print(">>>> ALL") - _print_result(results) - - if show_bin: - edges = [0, 200, 400, 600, 800, 1000, 2000, 4000] - for i in range(1, len(edges)): - mask = np.logical_and(results[:, 1] >= edges[i-1], - results[:, 1] < edges[i]) - if not mask.any(): - continue - bin_results = results[mask] - print(f">>>> ({edges[i-1]}, {edges[i]})") - _print_result(bin_results) - - -def main(eval_f0, gpe, vde, ffe, show_bin): - samples = load_eval_spec(eval_f0) - if gpe or vde or ffe: - f0_samples = extract_f0(samples) - - if gpe: - print("===== Evaluate Gross Pitch Error =====") - results = eval_gross_pitch_error(f0_samples) - print_results(results, show_bin) - if vde: - print("===== Evaluate Voicing Decision Error =====") - results = eval_voicing_decision_error(f0_samples) - print_results(results, show_bin) - if ffe: - print("===== Evaluate F0 Frame Error =====") - results = eval_f0_frame_error(f0_samples) - print_results(results, show_bin) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("eval_f0") - parser.add_argument("--gpe", action="store_true") - parser.add_argument("--vde", action="store_true") - parser.add_argument("--ffe", action="store_true") - parser.add_argument("--show-bin", action="store_true") - args = parser.parse_args() - - main(args.eval_f0, args.gpe, args.vde, args.ffe, args.show_bin) diff --git a/kosmos-g/fairseq/examples/speech_synthesis/evaluation/eval_sp.py b/kosmos-g/fairseq/examples/speech_synthesis/evaluation/eval_sp.py deleted file mode 100644 index 702c49803..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/evaluation/eval_sp.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
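-
-# Note: the eval spec TSV consumed by load_eval_spec() below is produced by
-# get_eval_manifest.py (later in this diff) with the tab-separated columns
-# id, syn, ref, text, speaker; each row becomes one dict.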
- - -""" -Signal processing-based evaluation using waveforms -""" - -import csv -import numpy as np -import os.path as op - -import torch -import tqdm -from tabulate import tabulate -import torchaudio - -from examples.speech_synthesis.utils import batch_mel_spectral_distortion -from fairseq.tasks.text_to_speech import batch_mel_cepstral_distortion - - -def load_eval_spec(path): - with open(path) as f: - reader = csv.DictReader(f, delimiter='\t') - samples = list(reader) - return samples - - -def eval_distortion(samples, distortion_fn, device="cuda"): - nmiss = 0 - results = [] - for sample in tqdm.tqdm(samples): - if not op.isfile(sample["ref"]) or not op.isfile(sample["syn"]): - nmiss += 1 - results.append(None) - continue - # assume single channel - yref, sr = torchaudio.load(sample["ref"]) - ysyn, _sr = torchaudio.load(sample["syn"]) - yref, ysyn = yref[0].to(device), ysyn[0].to(device) - assert sr == _sr, f"{sr} != {_sr}" - - distortion, extra = distortion_fn([yref], [ysyn], sr, None)[0] - _, _, _, _, _, pathmap = extra - nins = torch.sum(pathmap.sum(dim=1) - 1) # extra frames in syn - ndel = torch.sum(pathmap.sum(dim=0) - 1) # missing frames from syn - results.append( - (distortion.item(), # path distortion - pathmap.size(0), # yref num frames - pathmap.size(1), # ysyn num frames - pathmap.sum().item(), # path length - nins.item(), # insertion - ndel.item(), # deletion - ) - ) - return results - - -def eval_mel_cepstral_distortion(samples, device="cuda"): - return eval_distortion(samples, batch_mel_cepstral_distortion, device) - - -def eval_mel_spectral_distortion(samples, device="cuda"): - return eval_distortion(samples, batch_mel_spectral_distortion, device) - - -def print_results(results, show_bin): - results = np.array(list(filter(lambda x: x is not None, results))) - - np.set_printoptions(precision=3) - - def _print_result(results): - dist, dur_ref, dur_syn, dur_ali, nins, ndel = results.sum(axis=0) - res = { - "nutt": len(results), - "dist": dist, - "dur_ref": int(dur_ref), - "dur_syn": int(dur_syn), - "dur_ali": int(dur_ali), - "dist_per_ref_frm": dist/dur_ref, - "dist_per_syn_frm": dist/dur_syn, - "dist_per_ali_frm": dist/dur_ali, - "ins": nins/dur_ref, - "del": ndel/dur_ref, - } - print(tabulate( - [res.values()], - res.keys(), - floatfmt=".4f" - )) - - print(">>>> ALL") - _print_result(results) - - if show_bin: - edges = [0, 200, 400, 600, 800, 1000, 2000, 4000] - for i in range(1, len(edges)): - mask = np.logical_and(results[:, 1] >= edges[i-1], - results[:, 1] < edges[i]) - if not mask.any(): - continue - bin_results = results[mask] - print(f">>>> ({edges[i-1]}, {edges[i]})") - _print_result(bin_results) - - -def main(eval_spec, mcd, msd, show_bin): - samples = load_eval_spec(eval_spec) - device = "cpu" - if mcd: - print("===== Evaluate Mean Cepstral Distortion =====") - results = eval_mel_cepstral_distortion(samples, device) - print_results(results, show_bin) - if msd: - print("===== Evaluate Mean Spectral Distortion =====") - results = eval_mel_spectral_distortion(samples, device) - print_results(results, show_bin) - - -if __name__ == "__main__": - import argparse - parser = argparse.ArgumentParser() - parser.add_argument("eval_spec") - parser.add_argument("--mcd", action="store_true") - parser.add_argument("--msd", action="store_true") - parser.add_argument("--show-bin", action="store_true") - args = parser.parse_args() - - main(args.eval_spec, args.mcd, args.msd, args.show_bin) diff --git a/kosmos-g/fairseq/examples/speech_synthesis/evaluation/get_eval_manifest.py 
b/kosmos-g/fairseq/examples/speech_synthesis/evaluation/get_eval_manifest.py deleted file mode 100644 index 44b3685bb..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/evaluation/get_eval_manifest.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import csv -from pathlib import Path - - -def main(args): - """ - `uid syn ref text` - """ - in_root = Path(args.generation_root).resolve() - ext = args.audio_format - with open(args.audio_manifest) as f, open(args.output_path, "w") as f_out: - reader = csv.DictReader( - f, delimiter="\t", quotechar=None, doublequote=False, - lineterminator="\n", quoting=csv.QUOTE_NONE - ) - header = ["id", "syn", "ref", "text", "speaker"] - f_out.write("\t".join(header) + "\n") - for row in reader: - dir_name = f"{ext}_{args.sample_rate}hz_{args.vocoder}" - id_ = row["id"] - syn = (in_root / dir_name / f"{id_}.{ext}").as_posix() - ref = row["audio"] - if args.use_resynthesized_target: - ref = (in_root / f"{dir_name}_tgt" / f"{id_}.{ext}").as_posix() - if args.eval_target: - syn = row["audio"] - sample = [id_, syn, ref, row["tgt_text"], row["speaker"]] - f_out.write("\t".join(sample) + "\n") - print(f"wrote evaluation file to {args.output_path}") - - -if __name__ == "__main__": - import argparse - parser = argparse.ArgumentParser() - parser.add_argument( - "--generation-root", help="output directory for generate_waveform.py" - ) - parser.add_argument( - "--audio-manifest", - help="used to determine the original utterance ID and text" - ) - parser.add_argument( - "--output-path", help="path to output evaluation spec file" - ) - parser.add_argument( - "--use-resynthesized-target", action="store_true", - help="use resynthesized reference instead of the original audio" - ) - parser.add_argument( - "--eval-target", action="store_true", - help="evaluate reference instead of model prediction" - ) - parser.add_argument("--vocoder", type=str, default="griffin_lim") - parser.add_argument("--sample-rate", type=int, default=22_050) - parser.add_argument("--audio-format", type=str, default="wav") - args = parser.parse_args() - - main(args) diff --git a/kosmos-g/fairseq/examples/speech_synthesis/generate_waveform.py b/kosmos-g/fairseq/examples/speech_synthesis/generate_waveform.py deleted file mode 100644 index 3b56190db..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/generate_waveform.py +++ /dev/null @@ -1,192 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
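-
-# Note: dump_result() below writes per-sample artifacts under --results-path:
-# feat/, feat_tgt/, attn/, eos/, plot/, and {wav,flac}_{rate}hz_{vocoder}
-# directories (plus a "_tgt" variant when --dump-target is set).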
- -import ast -import logging -import matplotlib.pyplot as plt -import numpy as np -from pathlib import Path -import soundfile as sf -import sys -import torch -import torchaudio - -from fairseq import checkpoint_utils, options, tasks, utils -from fairseq.logging import progress_bar -from fairseq.tasks.text_to_speech import plot_tts_output -from fairseq.data.audio.text_to_speech_dataset import TextToSpeechDataset - - -logging.basicConfig() -logging.root.setLevel(logging.INFO) -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - - -def make_parser(): - parser = options.get_speech_generation_parser() - parser.add_argument("--dump-features", action="store_true") - parser.add_argument("--dump-waveforms", action="store_true") - parser.add_argument("--dump-attentions", action="store_true") - parser.add_argument("--dump-eos-probs", action="store_true") - parser.add_argument("--dump-plots", action="store_true") - parser.add_argument("--dump-target", action="store_true") - parser.add_argument("--output-sample-rate", default=22050, type=int) - parser.add_argument("--teacher-forcing", action="store_true") - parser.add_argument( - "--audio-format", type=str, default="wav", choices=["wav", "flac"] - ) - return parser - - -def postprocess_results( - dataset: TextToSpeechDataset, sample, hypos, resample_fn, dump_target -): - def to_np(x): - return None if x is None else x.detach().cpu().numpy() - - sample_ids = [dataset.ids[i] for i in sample["id"].tolist()] - texts = sample["src_texts"] if "src_texts" in sample else [""] * len(hypos) - attns = [to_np(hypo["attn"]) for hypo in hypos] - eos_probs = [to_np(hypo.get("eos_prob", None)) for hypo in hypos] - feat_preds = [to_np(hypo["feature"]) for hypo in hypos] - wave_preds = [to_np(resample_fn(h["waveform"])) for h in hypos] - if dump_target: - feat_targs = [to_np(hypo["targ_feature"]) for hypo in hypos] - wave_targs = [to_np(resample_fn(h["targ_waveform"])) for h in hypos] - else: - feat_targs = [None for _ in hypos] - wave_targs = [None for _ in hypos] - - return zip(sample_ids, texts, attns, eos_probs, feat_preds, wave_preds, - feat_targs, wave_targs) - - -def dump_result( - is_na_model, - args, - vocoder, - sample_id, - text, - attn, - eos_prob, - feat_pred, - wave_pred, - feat_targ, - wave_targ, -): - sample_rate = args.output_sample_rate - out_root = Path(args.results_path) - if args.dump_features: - feat_dir = out_root / "feat" - feat_dir.mkdir(exist_ok=True, parents=True) - np.save(feat_dir / f"{sample_id}.npy", feat_pred) - if args.dump_target: - feat_tgt_dir = out_root / "feat_tgt" - feat_tgt_dir.mkdir(exist_ok=True, parents=True) - np.save(feat_tgt_dir / f"{sample_id}.npy", feat_targ) - if args.dump_attentions: - attn_dir = out_root / "attn" - attn_dir.mkdir(exist_ok=True, parents=True) - np.save(attn_dir / f"{sample_id}.npy", attn.numpy()) - if args.dump_eos_probs and not is_na_model: - eos_dir = out_root / "eos" - eos_dir.mkdir(exist_ok=True, parents=True) - np.save(eos_dir / f"{sample_id}.npy", eos_prob) - - if args.dump_plots: - images = [feat_pred.T] if is_na_model else [feat_pred.T, attn] - names = ["output"] if is_na_model else ["output", "alignment"] - if feat_targ is not None: - images = [feat_targ.T] + images - names = [f"target (idx={sample_id})"] + names - if is_na_model: - plot_tts_output(images, names, attn, "alignment", suptitle=text) - else: - plot_tts_output(images, names, eos_prob, "eos prob", suptitle=text) - plot_dir = out_root / "plot" - plot_dir.mkdir(exist_ok=True, parents=True) - 
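-        # save the figure rendered by plot_tts_output() above, one PNG per sample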
plt.savefig(plot_dir / f"{sample_id}.png") - plt.close() - - if args.dump_waveforms: - ext = args.audio_format - if wave_pred is not None: - wav_dir = out_root / f"{ext}_{sample_rate}hz_{vocoder}" - wav_dir.mkdir(exist_ok=True, parents=True) - sf.write(wav_dir / f"{sample_id}.{ext}", wave_pred, sample_rate) - if args.dump_target and wave_targ is not None: - wav_tgt_dir = out_root / f"{ext}_{sample_rate}hz_{vocoder}_tgt" - wav_tgt_dir.mkdir(exist_ok=True, parents=True) - sf.write(wav_tgt_dir / f"{sample_id}.{ext}", wave_targ, sample_rate) - - -def main(args): - assert(args.dump_features or args.dump_waveforms or args.dump_attentions - or args.dump_eos_probs or args.dump_plots) - if args.max_tokens is None and args.batch_size is None: - args.max_tokens = 8000 - logger.info(args) - - use_cuda = torch.cuda.is_available() and not args.cpu - task = tasks.setup_task(args) - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [args.path], - task=task, - arg_overrides=ast.literal_eval(args.model_overrides), - ) - model = models[0].cuda() if use_cuda else models[0] - # use the original n_frames_per_step - task.args.n_frames_per_step = saved_cfg.task.n_frames_per_step - task.load_dataset(args.gen_subset, task_cfg=saved_cfg.task) - - data_cfg = task.data_cfg - sample_rate = data_cfg.config.get("features", {}).get("sample_rate", 22050) - resample_fn = { - False: lambda x: x, - True: lambda x: torchaudio.sox_effects.apply_effects_tensor( - x.detach().cpu().unsqueeze(0), sample_rate, - [['rate', str(args.output_sample_rate)]] - )[0].squeeze(0) - }.get(args.output_sample_rate != sample_rate) - if args.output_sample_rate != sample_rate: - logger.info(f"resampling to {args.output_sample_rate}Hz") - - generator = task.build_generator([model], args) - itr = task.get_batch_iterator( - dataset=task.dataset(args.gen_subset), - max_tokens=args.max_tokens, - max_sentences=args.batch_size, - max_positions=(sys.maxsize, sys.maxsize), - ignore_invalid_inputs=args.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=args.required_batch_size_multiple, - num_shards=args.num_shards, - shard_id=args.shard_id, - num_workers=args.num_workers, - data_buffer_size=args.data_buffer_size, - ).next_epoch_itr(shuffle=False) - - Path(args.results_path).mkdir(exist_ok=True, parents=True) - is_na_model = getattr(model, "NON_AUTOREGRESSIVE", False) - dataset = task.dataset(args.gen_subset) - vocoder = task.args.vocoder - with progress_bar.build_progress_bar(args, itr) as t: - for sample in t: - sample = utils.move_to_cuda(sample) if use_cuda else sample - hypos = generator.generate(model, sample, has_targ=args.dump_target) - for result in postprocess_results( - dataset, sample, hypos, resample_fn, args.dump_target - ): - dump_result(is_na_model, args, vocoder, *result) - - -def cli_main(): - parser = make_parser() - args = options.parse_args_and_arch(parser) - main(args) - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/__init__.py b/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/__init__.py deleted file mode 100644 index 626423691..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
diff --git a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoise_and_vad_audio.py b/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoise_and_vad_audio.py deleted file mode 100644 index 4e13b38a5..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoise_and_vad_audio.py +++ /dev/null @@ -1,204 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -import os -import csv -import tempfile -from collections import defaultdict -from pathlib import Path - -import torchaudio -try: - import webrtcvad -except ImportError: - raise ImportError("Please install py-webrtcvad: pip install webrtcvad") -import pandas as pd -from tqdm import tqdm - -from examples.speech_synthesis.preprocessing.denoiser.pretrained import master64 -import examples.speech_synthesis.preprocessing.denoiser.utils as utils -from examples.speech_synthesis.preprocessing.vad import ( - frame_generator, vad_collector, read_wave, write_wave, FS_MS, THRESHOLD, - SCALE -) -from examples.speech_to_text.data_utils import save_df_to_tsv - - -log = logging.getLogger(__name__) - -PATHS = ["after_denoise", "after_vad"] -MIN_T = 0.05 - - -def generate_tmp_filename(extension="txt"): - return tempfile._get_default_tempdir() + "/" + \ - next(tempfile._get_candidate_names()) + "." + extension - - -def convert_sr(inpath, sr, output_path=None): - if not output_path: - output_path = generate_tmp_filename("wav") - cmd = f"sox {inpath} -r {sr} {output_path}" - os.system(cmd) - return output_path - - -def apply_vad(vad, inpath): - audio, sample_rate = read_wave(inpath) - frames = frame_generator(FS_MS, audio, sample_rate) - frames = list(frames) - segments = vad_collector(sample_rate, FS_MS, 300, vad, frames) - merge_segments = list() - timestamp_start = 0.0 - timestamp_end = 0.0 - # removing start, end, and long sequences of sils - for i, segment in enumerate(segments): - merge_segments.append(segment[0]) - if i and timestamp_start: - sil_duration = segment[1] - timestamp_end - if sil_duration > THRESHOLD: - merge_segments.append(int(THRESHOLD / SCALE) * (b'\x00')) - else: - merge_segments.append(int((sil_duration / SCALE)) * (b'\x00')) - timestamp_start = segment[1] - timestamp_end = segment[2] - segment = b''.join(merge_segments) - return segment, sample_rate - - -def write(wav, filename, sr=16_000): - # Normalize audio if it prevents clipping - wav = wav / max(wav.abs().max().item(), 1) - torchaudio.save(filename, wav.cpu(), sr, encoding="PCM_S", - bits_per_sample=16) - - -def process(args): - # making sure we are requested either denoise or vad - if not args.denoise and not args.vad: - log.error("No denoise or vad is requested.") - return - - log.info("Creating out directories...") - if args.denoise: - out_denoise = Path(args.output_dir).absolute().joinpath(PATHS[0]) - out_denoise.mkdir(parents=True, exist_ok=True) - if args.vad: - out_vad = Path(args.output_dir).absolute().joinpath(PATHS[1]) - out_vad.mkdir(parents=True, exist_ok=True) - - log.info("Loading pre-trained speech enhancement model...") - model = master64().to(args.device) - - log.info("Building the VAD model...") - vad = webrtcvad.Vad(int(args.vad_agg_level)) - - # preparing the output dict - output_dict = defaultdict(list) - - log.info(f"Parsing input manifest: {args.audio_manifest}") - with open(args.audio_manifest, "r") as f: - manifest_dict = csv.DictReader(f, 
delimiter="\t")
-        for row in tqdm(manifest_dict):
-            filename = str(row["audio"])
-
-            final_output = filename
-            keep_sample = True
-            n_frames = row["n_frames"]
-            snr = -1
-            if args.denoise:
-                output_path_denoise = out_denoise.joinpath(Path(filename).name)
-                # convert to 16kHz in case we use a different sample rate
-                tmp_path = convert_sr(final_output, 16000)
-
-                # load the audio file and generate the enhanced version
-                out, sr = torchaudio.load(tmp_path)
-                out = out.to(args.device)
-                estimate = model(out)
-                estimate = (1 - args.dry_wet) * estimate + args.dry_wet * out
-                write(estimate[0], str(output_path_denoise), sr)
-
-                snr = utils.cal_snr(out, estimate)
-                snr = snr.cpu().detach().numpy()[0][0]
-                final_output = str(output_path_denoise)
-
-            if args.vad:
-                output_path_vad = out_vad.joinpath(Path(filename).name)
-                sr = torchaudio.info(final_output).sample_rate
-                if sr in [16000, 32000, 48000]:
-                    tmp_path = final_output
-                elif sr < 16000:
-                    tmp_path = convert_sr(final_output, 16000)
-                elif sr < 32000:
-                    tmp_path = convert_sr(final_output, 32000)
-                else:
-                    tmp_path = convert_sr(final_output, 48000)
-                # apply VAD
-                segment, sample_rate = apply_vad(vad, tmp_path)
-                if len(segment) < sample_rate * MIN_T:
-                    keep_sample = False
-                    print((
-                        f"WARNING: skip {filename} because it is too short "
-                        f"after VAD ({len(segment) / sample_rate} < {MIN_T})"
-                    ))
-                else:
-                    if sample_rate != sr:
-                        tmp_path = generate_tmp_filename("wav")
-                        write_wave(tmp_path, segment, sample_rate)
-                        convert_sr(tmp_path, sr,
-                                   output_path=str(output_path_vad))
-                    else:
-                        write_wave(str(output_path_vad), segment, sample_rate)
-                    final_output = str(output_path_vad)
-                    segment, _ = torchaudio.load(final_output)
-                    n_frames = segment.size(1)
-
-            if keep_sample:
-                output_dict["id"].append(row["id"])
-                output_dict["audio"].append(final_output)
-                output_dict["n_frames"].append(n_frames)
-                output_dict["tgt_text"].append(row["tgt_text"])
-                output_dict["speaker"].append(row["speaker"])
-                output_dict["src_text"].append(row["src_text"])
-                output_dict["snr"].append(snr)
-
-    out_tsv_path = Path(args.output_dir) / Path(args.audio_manifest).name
-    log.info(f"Saving manifest to {out_tsv_path.as_posix()}")
-    save_df_to_tsv(pd.DataFrame.from_dict(output_dict), out_tsv_path)
-
-
-def main():
-    parser = argparse.ArgumentParser()
-    parser.add_argument("--audio-manifest", "-i", required=True,
-                        type=str, help="path to the input manifest.")
-    parser.add_argument(
-        "--output-dir", "-o", required=True, type=str,
-        help="path to the output dir; it will contain the files produced after"
-             " denoising and VAD"
-    )
-    parser.add_argument("--vad-agg-level", "-a", type=int, default=2,
                        help="the aggressiveness level of the VAD [0-3].")
-    parser.add_argument(
-        "--dry-wet", "-dw", type=float, default=0.01,
-        help="the level of linear interpolation between noisy and enhanced "
-             "files."
-    )
-    parser.add_argument(
-        "--device", "-d", type=str, default="cpu",
-        help="the device to be used for the speech enhancement model: "
-             "cpu | cuda."
- ) - parser.add_argument("--denoise", action="store_true", - help="apply a denoising") - parser.add_argument("--vad", action="store_true", help="apply a VAD") - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoiser/__init__.py b/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoiser/__init__.py deleted file mode 100644 index 626423691..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoiser/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. diff --git a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoiser/demucs.py b/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoiser/demucs.py deleted file mode 100644 index 3f70e73d6..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoiser/demucs.py +++ /dev/null @@ -1,473 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# author: adefossez - -import math -import time - -import torch as th -from torch import nn -from torch.nn import functional as F - -from .resample import downsample2, upsample2 -from .utils import capture_init - - -class BLSTM(nn.Module): - def __init__(self, dim, layers=2, bi=True): - super().__init__() - klass = nn.LSTM - self.lstm = klass( - bidirectional=bi, num_layers=layers, hidden_size=dim, input_size=dim - ) - self.linear = None - if bi: - self.linear = nn.Linear(2 * dim, dim) - - def forward(self, x, hidden=None): - x, hidden = self.lstm(x, hidden) - if self.linear: - x = self.linear(x) - return x, hidden - - -def rescale_conv(conv, reference): - std = conv.weight.std().detach() - scale = (std / reference)**0.5 - conv.weight.data /= scale - if conv.bias is not None: - conv.bias.data /= scale - - -def rescale_module(module, reference): - for sub in module.modules(): - if isinstance(sub, (nn.Conv1d, nn.ConvTranspose1d)): - rescale_conv(sub, reference) - - -class Demucs(nn.Module): - """ - Demucs speech enhancement model. - Args: - - chin (int): number of input channels. - - chout (int): number of output channels. - - hidden (int): number of initial hidden channels. - - depth (int): number of layers. - - kernel_size (int): kernel size for each layer. - - stride (int): stride for each layer. - - causal (bool): if false, uses BiLSTM instead of LSTM. - - resample (int): amount of resampling to apply to the input/output. - Can be one of 1, 2 or 4. - - growth (float): number of channels is multiplied by this for every layer. - - max_hidden (int): maximum number of channels. Can be useful to - control the size/speed of the model. - - normalize (bool): if true, normalize the input. - - glu (bool): if true uses GLU instead of ReLU in 1x1 convolutions. - - rescale (float): controls custom weight initialization. - See https://arxiv.org/abs/1911.13254. - - floor (float): stability flooring when normalizing. 
- - """ - @capture_init - def __init__(self, - chin=1, - chout=1, - hidden=48, - depth=5, - kernel_size=8, - stride=4, - causal=True, - resample=4, - growth=2, - max_hidden=10_000, - normalize=True, - glu=True, - rescale=0.1, - floor=1e-3): - - super().__init__() - if resample not in [1, 2, 4]: - raise ValueError("Resample should be 1, 2 or 4.") - - self.chin = chin - self.chout = chout - self.hidden = hidden - self.depth = depth - self.kernel_size = kernel_size - self.stride = stride - self.causal = causal - self.floor = floor - self.resample = resample - self.normalize = normalize - - self.encoder = nn.ModuleList() - self.decoder = nn.ModuleList() - activation = nn.GLU(1) if glu else nn.ReLU() - ch_scale = 2 if glu else 1 - - for index in range(depth): - encode = [] - encode += [ - nn.Conv1d(chin, hidden, kernel_size, stride), - nn.ReLU(), - nn.Conv1d(hidden, hidden * ch_scale, 1), activation, - ] - self.encoder.append(nn.Sequential(*encode)) - - decode = [] - decode += [ - nn.Conv1d(hidden, ch_scale * hidden, 1), activation, - nn.ConvTranspose1d(hidden, chout, kernel_size, stride), - ] - if index > 0: - decode.append(nn.ReLU()) - self.decoder.insert(0, nn.Sequential(*decode)) - chout = hidden - chin = hidden - hidden = min(int(growth * hidden), max_hidden) - - self.lstm = BLSTM(chin, bi=not causal) - if rescale: - rescale_module(self, reference=rescale) - - def valid_length(self, length): - """ - Return the nearest valid length to use with the model so that - there is no time steps left over in a convolutions, e.g. for all - layers, size of the input - kernel_size % stride = 0. - - If the mixture has a valid length, the estimated sources - will have exactly the same length. - """ - length = math.ceil(length * self.resample) - for _ in range(self.depth): - length = math.ceil((length - self.kernel_size) / self.stride) + 1 - length = max(length, 1) - for _ in range(self.depth): - length = (length - 1) * self.stride + self.kernel_size - length = int(math.ceil(length / self.resample)) - return int(length) - - @property - def total_stride(self): - return self.stride ** self.depth // self.resample - - def forward(self, mix): - if mix.dim() == 2: - mix = mix.unsqueeze(1) - - if self.normalize: - mono = mix.mean(dim=1, keepdim=True) - std = mono.std(dim=-1, keepdim=True) - mix = mix / (self.floor + std) - else: - std = 1 - length = mix.shape[-1] - x = mix - x = F.pad(x, (0, self.valid_length(length) - length)) - if self.resample == 2: - x = upsample2(x) - elif self.resample == 4: - x = upsample2(x) - x = upsample2(x) - skips = [] - for encode in self.encoder: - x = encode(x) - skips.append(x) - x = x.permute(2, 0, 1) - x, _ = self.lstm(x) - x = x.permute(1, 2, 0) - for decode in self.decoder: - skip = skips.pop(-1) - x = x + skip[..., :x.shape[-1]] - x = decode(x) - if self.resample == 2: - x = downsample2(x) - elif self.resample == 4: - x = downsample2(x) - x = downsample2(x) - - x = x[..., :length] - return std * x - - -def fast_conv(conv, x): - """ - Faster convolution evaluation if either kernel size is 1 - or length of sequence is 1. 
- """ - batch, chin, length = x.shape - chout, chin, kernel = conv.weight.shape - assert batch == 1 - if kernel == 1: - x = x.view(chin, length) - out = th.addmm(conv.bias.view(-1, 1), - conv.weight.view(chout, chin), x) - elif length == kernel: - x = x.view(chin * kernel, 1) - out = th.addmm(conv.bias.view(-1, 1), - conv.weight.view(chout, chin * kernel), x) - else: - out = conv(x) - return out.view(batch, chout, -1) - - -class DemucsStreamer: - """ - Streaming implementation for Demucs. It supports being fed with any amount - of audio at a time. You will get back as much audio as possible at that - point. - - Args: - - demucs (Demucs): Demucs model. - - dry (float): amount of dry (e.g. input) signal to keep. 0 is maximum - noise removal, 1 just returns the input signal. Small values > 0 - allows to limit distortions. - - num_frames (int): number of frames to process at once. Higher values - will increase overall latency but improve the real time factor. - - resample_lookahead (int): extra lookahead used for the resampling. - - resample_buffer (int): size of the buffer of previous inputs/outputs - kept for resampling. - """ - def __init__(self, demucs, - dry=0, - num_frames=1, - resample_lookahead=64, - resample_buffer=256): - device = next(iter(demucs.parameters())).device - self.demucs = demucs - self.lstm_state = None - self.conv_state = None - self.dry = dry - self.resample_lookahead = resample_lookahead - resample_buffer = min(demucs.total_stride, resample_buffer) - self.resample_buffer = resample_buffer - self.frame_length = demucs.valid_length(1) + \ - demucs.total_stride * (num_frames - 1) - self.total_length = self.frame_length + self.resample_lookahead - self.stride = demucs.total_stride * num_frames - self.resample_in = th.zeros(demucs.chin, resample_buffer, device=device) - self.resample_out = th.zeros( - demucs.chin, resample_buffer, device=device - ) - - self.frames = 0 - self.total_time = 0 - self.variance = 0 - self.pending = th.zeros(demucs.chin, 0, device=device) - - bias = demucs.decoder[0][2].bias - weight = demucs.decoder[0][2].weight - chin, chout, kernel = weight.shape - self._bias = bias.view(-1, 1).repeat(1, kernel).view(-1, 1) - self._weight = weight.permute(1, 2, 0).contiguous() - - def reset_time_per_frame(self): - self.total_time = 0 - self.frames = 0 - - @property - def time_per_frame(self): - return self.total_time / self.frames - - def flush(self): - """ - Flush remaining audio by padding it with zero. Call this - when you have no more input and want to get back the last chunk of audio. - """ - pending_length = self.pending.shape[1] - padding = th.zeros( - self.demucs.chin, self.total_length, device=self.pending.device - ) - out = self.feed(padding) - return out[:, :pending_length] - - def feed(self, wav): - """ - Apply the model to mix using true real time evaluation. - Normalization is done online as is the resampling. 
- """ - begin = time.time() - demucs = self.demucs - resample_buffer = self.resample_buffer - stride = self.stride - resample = demucs.resample - - if wav.dim() != 2: - raise ValueError("input wav should be two dimensional.") - chin, _ = wav.shape - if chin != demucs.chin: - raise ValueError(f"Expected {demucs.chin} channels, got {chin}") - - self.pending = th.cat([self.pending, wav], dim=1) - outs = [] - while self.pending.shape[1] >= self.total_length: - self.frames += 1 - frame = self.pending[:, :self.total_length] - dry_signal = frame[:, :stride] - if demucs.normalize: - mono = frame.mean(0) - variance = (mono**2).mean() - self.variance = variance / self.frames + \ - (1 - 1 / self.frames) * self.variance - frame = frame / (demucs.floor + math.sqrt(self.variance)) - frame = th.cat([self.resample_in, frame], dim=-1) - self.resample_in[:] = frame[:, stride - resample_buffer:stride] - - if resample == 4: - frame = upsample2(upsample2(frame)) - elif resample == 2: - frame = upsample2(frame) - # remove pre sampling buffer - frame = frame[:, resample * resample_buffer:] - # remove extra samples after window - frame = frame[:, :resample * self.frame_length] - - out, extra = self._separate_frame(frame) - padded_out = th.cat([self.resample_out, out, extra], 1) - self.resample_out[:] = out[:, -resample_buffer:] - if resample == 4: - out = downsample2(downsample2(padded_out)) - elif resample == 2: - out = downsample2(padded_out) - else: - out = padded_out - - out = out[:, resample_buffer // resample:] - out = out[:, :stride] - - if demucs.normalize: - out *= math.sqrt(self.variance) - out = self.dry * dry_signal + (1 - self.dry) * out - outs.append(out) - self.pending = self.pending[:, stride:] - - self.total_time += time.time() - begin - if outs: - out = th.cat(outs, 1) - else: - out = th.zeros(chin, 0, device=wav.device) - return out - - def _separate_frame(self, frame): - demucs = self.demucs - skips = [] - next_state = [] - first = self.conv_state is None - stride = self.stride * demucs.resample - x = frame[None] - for idx, encode in enumerate(demucs.encoder): - stride //= demucs.stride - length = x.shape[2] - if idx == demucs.depth - 1: - # This is sligthly faster for the last conv - x = fast_conv(encode[0], x) - x = encode[1](x) - x = fast_conv(encode[2], x) - x = encode[3](x) - else: - if not first: - prev = self.conv_state.pop(0) - prev = prev[..., stride:] - tgt = (length - demucs.kernel_size) // demucs.stride + 1 - missing = tgt - prev.shape[-1] - offset = length - demucs.kernel_size - \ - demucs.stride * (missing - 1) - x = x[..., offset:] - x = encode[1](encode[0](x)) - x = fast_conv(encode[2], x) - x = encode[3](x) - if not first: - x = th.cat([prev, x], -1) - next_state.append(x) - skips.append(x) - - x = x.permute(2, 0, 1) - x, self.lstm_state = demucs.lstm(x, self.lstm_state) - x = x.permute(1, 2, 0) - # In the following, x contains only correct samples, i.e. the one - # for which each time position is covered by two window of the upper - # layer. extra contains extra samples to the right, and is used only as - # a better padding for the online resampling. 
- extra = None - for idx, decode in enumerate(demucs.decoder): - skip = skips.pop(-1) - x += skip[..., :x.shape[-1]] - x = fast_conv(decode[0], x) - x = decode[1](x) - - if extra is not None: - skip = skip[..., x.shape[-1]:] - extra += skip[..., :extra.shape[-1]] - extra = decode[2](decode[1](decode[0](extra))) - x = decode[2](x) - next_state.append( - x[..., -demucs.stride:] - decode[2].bias.view(-1, 1) - ) - if extra is None: - extra = x[..., -demucs.stride:] - else: - extra[..., :demucs.stride] += next_state[-1] - x = x[..., :-demucs.stride] - - if not first: - prev = self.conv_state.pop(0) - x[..., :demucs.stride] += prev - if idx != demucs.depth - 1: - x = decode[3](x) - extra = decode[3](extra) - self.conv_state = next_state - return x[0], extra[0] - - -def test(): - import argparse - parser = argparse.ArgumentParser( - "denoiser.demucs", - description="Benchmark the streaming Demucs implementation, as well as " - "checking the delta with the offline implementation.") - parser.add_argument("--depth", default=5, type=int) - parser.add_argument("--resample", default=4, type=int) - parser.add_argument("--hidden", default=48, type=int) - parser.add_argument("--sample_rate", default=16000, type=float) - parser.add_argument("--device", default="cpu") - parser.add_argument("-t", "--num_threads", type=int) - parser.add_argument("-f", "--num_frames", type=int, default=1) - args = parser.parse_args() - if args.num_threads: - th.set_num_threads(args.num_threads) - sr = args.sample_rate - sr_ms = sr / 1000 - demucs = Demucs( - depth=args.depth, hidden=args.hidden, resample=args.resample - ).to(args.device) - x = th.randn(1, int(sr * 4)).to(args.device) - out = demucs(x[None])[0] - streamer = DemucsStreamer(demucs, num_frames=args.num_frames) - out_rt = [] - frame_size = streamer.total_length - with th.no_grad(): - while x.shape[1] > 0: - out_rt.append(streamer.feed(x[:, :frame_size])) - x = x[:, frame_size:] - frame_size = streamer.demucs.total_stride - out_rt.append(streamer.flush()) - out_rt = th.cat(out_rt, 1) - model_size = sum(p.numel() for p in demucs.parameters()) * 4 / 2**20 - initial_lag = streamer.total_length / sr_ms - tpf = 1000 * streamer.time_per_frame - print(f"model size: {model_size:.1f}MB, ", end='') - print(f"delta batch/streaming: {th.norm(out - out_rt) / th.norm(out):.2%}") - print(f"initial lag: {initial_lag:.1f}ms, ", end='') - print(f"stride: {streamer.stride * args.num_frames / sr_ms:.1f}ms") - print(f"time per frame: {tpf:.1f}ms, ", end='') - rtf = (1000 * streamer.time_per_frame) / (streamer.stride / sr_ms) - print(f"RTF: {rtf:.2f}") - print(f"Total lag with computation: {initial_lag + tpf:.1f}ms") - - -if __name__ == "__main__": - test() diff --git a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoiser/pretrained.py b/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoiser/pretrained.py deleted file mode 100644 index 2fa846075..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoiser/pretrained.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
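# A minimal usage sketch of the pretrained loaders defined below, assuming
# network access to dl.fbaipublicfiles.com; "noisy_16khz.wav" is an
# illustrative placeholder for a mono 16 kHz recording.
import torch
import torchaudio

from examples.speech_synthesis.preprocessing.denoiser.pretrained import master64

model = master64().eval()  # fetches the Demucs checkpoint via torch.hub
wav, sr = torchaudio.load("noisy_16khz.wav")
with torch.no_grad():
    enhanced = model(wav[None])[0]  # add, then strip, the batch dimension
torchaudio.save("enhanced.wav", enhanced.cpu(), sr)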
-# author: adefossez - -import logging - -import torch.hub - -from .demucs import Demucs -from .utils import deserialize_model - -logger = logging.getLogger(__name__) -ROOT = "https://dl.fbaipublicfiles.com/adiyoss/denoiser/" -DNS_48_URL = ROOT + "dns48-11decc9d8e3f0998.th" -DNS_64_URL = ROOT + "dns64-a7761ff99a7d5bb6.th" -MASTER_64_URL = ROOT + "master64-8a5dfb4bb92753dd.th" - - -def _demucs(pretrained, url, **kwargs): - model = Demucs(**kwargs) - if pretrained: - state_dict = torch.hub.load_state_dict_from_url(url, map_location='cpu') - model.load_state_dict(state_dict) - return model - - -def dns48(pretrained=True): - return _demucs(pretrained, DNS_48_URL, hidden=48) - - -def dns64(pretrained=True): - return _demucs(pretrained, DNS_64_URL, hidden=64) - - -def master64(pretrained=True): - return _demucs(pretrained, MASTER_64_URL, hidden=64) - - -def add_model_flags(parser): - group = parser.add_mutually_exclusive_group(required=False) - group.add_argument( - "-m", "--model_path", help="Path to local trained model." - ) - group.add_argument( - "--dns48", action="store_true", - help="Use pre-trained real time H=48 model trained on DNS." - ) - group.add_argument( - "--dns64", action="store_true", - help="Use pre-trained real time H=64 model trained on DNS." - ) - group.add_argument( - "--master64", action="store_true", - help="Use pre-trained real time H=64 model trained on DNS and Valentini." - ) - - -def get_model(args): - """ - Load local model package or torchhub pre-trained model. - """ - if args.model_path: - logger.info("Loading model from %s", args.model_path) - pkg = torch.load(args.model_path) - model = deserialize_model(pkg) - elif args.dns64: - logger.info("Loading pre-trained real time H=64 model trained on DNS.") - model = dns64() - elif args.master64: - logger.info( - "Loading pre-trained real time H=64 model trained on DNS and Valentini." - ) - model = master64() - else: - logger.info("Loading pre-trained real time H=48 model trained on DNS.") - model = dns48() - logger.debug(model) - return model diff --git a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoiser/resample.py b/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoiser/resample.py deleted file mode 100644 index 1222addc4..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoiser/resample.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# author: adefossez - -import math - -import torch as th -from torch.nn import functional as F - - -def sinc(t): - """sinc. - - :param t: the input tensor - """ - return th.where(t == 0, th.tensor(1., device=t.device, dtype=t.dtype), - th.sin(t) / t) - - -def kernel_upsample2(zeros=56): - """kernel_upsample2. - - """ - win = th.hann_window(4 * zeros + 1, periodic=False) - winodd = win[1::2] - t = th.linspace(-zeros + 0.5, zeros - 0.5, 2 * zeros) - t *= math.pi - kernel = (sinc(t) * winodd).view(1, 1, -1) - return kernel - - -def upsample2(x, zeros=56): - """ - Upsampling the input by 2 using sinc interpolation. - Smith, Julius, and Phil Gossett. "A flexible sampling-rate conversion method." - ICASSP'84. IEEE International Conference on Acoustics, Speech, and Signal Processing. - Vol. 9. IEEE, 1984. 
- """ - *other, time = x.shape - kernel = kernel_upsample2(zeros).to(x) - out = F.conv1d(x.view(-1, 1, time), kernel, padding=zeros)[..., 1:].view( - *other, time - ) - y = th.stack([x, out], dim=-1) - return y.view(*other, -1) - - -def kernel_downsample2(zeros=56): - """kernel_downsample2. - - """ - win = th.hann_window(4 * zeros + 1, periodic=False) - winodd = win[1::2] - t = th.linspace(-zeros + 0.5, zeros - 0.5, 2 * zeros) - t.mul_(math.pi) - kernel = (sinc(t) * winodd).view(1, 1, -1) - return kernel - - -def downsample2(x, zeros=56): - """ - Downsampling the input by 2 using sinc interpolation. - Smith, Julius, and Phil Gossett. "A flexible sampling-rate conversion method." - ICASSP'84. IEEE International Conference on Acoustics, Speech, and Signal Processing. - Vol. 9. IEEE, 1984. - """ - if x.shape[-1] % 2 != 0: - x = F.pad(x, (0, 1)) - xeven = x[..., ::2] - xodd = x[..., 1::2] - *other, time = xodd.shape - kernel = kernel_downsample2(zeros).to(x) - out = xeven + F.conv1d( - xodd.view(-1, 1, time), kernel, padding=zeros - )[..., :-1].view(*other, time) - return out.view(*other, -1).mul(0.5) diff --git a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoiser/utils.py b/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoiser/utils.py deleted file mode 100644 index 734d047f1..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/denoiser/utils.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# author: adefossez - -import functools -import logging -from contextlib import contextmanager -import inspect -import time - -logger = logging.getLogger(__name__) - -EPS = 1e-8 - - -def capture_init(init): - """capture_init. - - Decorate `__init__` with this, and you can then - recover the *args and **kwargs passed to it in `self._init_args_kwargs` - """ - @functools.wraps(init) - def __init__(self, *args, **kwargs): - self._init_args_kwargs = (args, kwargs) - init(self, *args, **kwargs) - - return __init__ - - -def deserialize_model(package, strict=False): - """deserialize_model. - - """ - klass = package['class'] - if strict: - model = klass(*package['args'], **package['kwargs']) - else: - sig = inspect.signature(klass) - kw = package['kwargs'] - for key in list(kw): - if key not in sig.parameters: - logger.warning("Dropping inexistant parameter %s", key) - del kw[key] - model = klass(*package['args'], **kw) - model.load_state_dict(package['state']) - return model - - -def copy_state(state): - return {k: v.cpu().clone() for k, v in state.items()} - - -def serialize_model(model): - args, kwargs = model._init_args_kwargs - state = copy_state(model.state_dict()) - return {"class": model.__class__, "args": args, "kwargs": kwargs, "state": state} - - -@contextmanager -def swap_state(model, state): - """ - Context manager that swaps the state of a model, e.g: - - # model is in old state - with swap_state(model, new_state): - # model in new state - # model back to old state - """ - old_state = copy_state(model.state_dict()) - model.load_state_dict(state) - try: - yield - finally: - model.load_state_dict(old_state) - - -def pull_metric(history, name): - out = [] - for metrics in history: - if name in metrics: - out.append(metrics[name]) - return out - - -class LogProgress: - """ - Sort of like tqdm but using log lines and not as real time. 
- Args: - - logger: logger obtained from `logging.getLogger`, - - iterable: iterable object to wrap - - updates (int): number of lines that will be printed, e.g. - if `updates=5`, log every 1/5th of the total length. - - total (int): length of the iterable, in case it does not support - `len`. - - name (str): prefix to use in the log. - - level: logging level (like `logging.INFO`). - """ - def __init__(self, - logger, - iterable, - updates=5, - total=None, - name="LogProgress", - level=logging.INFO): - self.iterable = iterable - self.total = total or len(iterable) - self.updates = updates - self.name = name - self.logger = logger - self.level = level - - def update(self, **infos): - self._infos = infos - - def __iter__(self): - self._iterator = iter(self.iterable) - self._index = -1 - self._infos = {} - self._begin = time.time() - return self - - def __next__(self): - self._index += 1 - try: - value = next(self._iterator) - except StopIteration: - raise - else: - return value - finally: - log_every = max(1, self.total // self.updates) - # logging is delayed by 1 it, in order to have the metrics from update - if self._index >= 1 and self._index % log_every == 0: - self._log() - - def _log(self): - self._speed = (1 + self._index) / (time.time() - self._begin) - infos = " | ".join(f"{k.capitalize()} {v}" for k, v in self._infos.items()) - if self._speed < 1e-4: - speed = "oo sec/it" - elif self._speed < 0.1: - speed = f"{1/self._speed:.1f} sec/it" - else: - speed = f"{self._speed:.1f} it/sec" - out = f"{self.name} | {self._index}/{self.total} | {speed}" - if infos: - out += " | " + infos - self.logger.log(self.level, out) - - -def colorize(text, color): - """ - Display text with some ANSI color in the terminal. - """ - code = f"\033[{color}m" - restore = "\033[0m" - return "".join([code, text, restore]) - - -def bold(text): - """ - Display text in bold in the terminal. - """ - return colorize(text, "1") - - -def cal_snr(lbl, est): - import torch - y = 10.0 * torch.log10( - torch.sum(lbl**2, dim=-1) / (torch.sum((est-lbl)**2, dim=-1) + EPS) + - EPS - ) - return y diff --git a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/get_common_voice_audio_manifest.py b/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/get_common_voice_audio_manifest.py deleted file mode 100644 index a30254604..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/get_common_voice_audio_manifest.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
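# A minimal sketch of how `cal_snr` from the denoiser utilities above is
# used in the denoise/VAD pipeline: it returns the signal-to-noise ratio in
# dB between a reference signal and an estimate, reduced over the last axis.
import torch

from examples.speech_synthesis.preprocessing.denoiser.utils import cal_snr

clean = torch.randn(1, 16000)
noisy = clean + 0.1 * torch.randn(1, 16000)
snr_db = cal_snr(clean, noisy)  # 10 * log10(signal power / error power)
print(f"SNR: {snr_db.item():.1f} dB")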
- -import argparse -import logging -from pathlib import Path -from collections import defaultdict -from typing import List, Dict, Tuple - -import pandas as pd -import numpy as np -import torchaudio -from tqdm import tqdm - -from examples.speech_to_text.data_utils import load_df_from_tsv, save_df_to_tsv - - -log = logging.getLogger(__name__) - -SPLITS = ["train", "dev", "test"] - - -def get_top_n( - root: Path, n_speakers: int = 10, min_n_tokens: int = 5 -) -> pd.DataFrame: - df = load_df_from_tsv(root / "validated.tsv") - df["n_tokens"] = [len(s.split()) for s in df["sentence"]] - df = df[df["n_tokens"] >= min_n_tokens] - df["n_frames"] = [ - torchaudio.info((root / "clips" / p).as_posix()).num_frames - for p in tqdm(df["path"]) - ] - df["id"] = [Path(p).stem for p in df["path"]] - total_duration_ms = df.groupby("client_id")["n_frames"].agg(["sum"]) - total_duration_ms = total_duration_ms.sort_values("sum", ascending=False) - - top_n_total_duration_ms = total_duration_ms.head(n_speakers) - top_n_client_ids = set(top_n_total_duration_ms.index.tolist()) - df_top_n = df[df["client_id"].isin(top_n_client_ids)] - return df_top_n - - -def get_splits( - df, train_split_ratio=0.99, speaker_in_all_splits=False, rand_seed=0 -) -> Tuple[Dict[str, str], List[str]]: - np.random.seed(rand_seed) - dev_split_ratio = (1. - train_split_ratio) / 3 - grouped = list(df.groupby("client_id")) - id_to_split = {} - for _, cur_df in tqdm(grouped): - cur_n_examples = len(cur_df) - if speaker_in_all_splits and cur_n_examples < 3: - continue - cur_n_train = int(cur_n_examples * train_split_ratio) - cur_n_dev = int(cur_n_examples * dev_split_ratio) - cur_n_test = cur_n_examples - cur_n_dev - cur_n_train - if speaker_in_all_splits and cur_n_dev * cur_n_test == 0: - cur_n_dev, cur_n_test = 1, 1 - cur_n_train = cur_n_examples - cur_n_dev - cur_n_test - cur_indices = cur_df.index.tolist() - cur_shuffled_indices = np.random.permutation(cur_n_examples) - cur_shuffled_indices = [cur_indices[i] for i in cur_shuffled_indices] - cur_indices_by_split = { - "train": cur_shuffled_indices[:cur_n_train], - "dev": cur_shuffled_indices[cur_n_train: cur_n_train + cur_n_dev], - "test": cur_shuffled_indices[cur_n_train + cur_n_dev:] - } - for split in SPLITS: - for i in cur_indices_by_split[split]: - id_ = df["id"].loc[i] - id_to_split[id_] = split - return id_to_split, sorted(df["client_id"].unique()) - - -def convert_to_wav(root: Path, filenames: List[str], target_sr=16_000): - out_root = root / "wav" - out_root.mkdir(exist_ok=True, parents=True) - print("Converting to WAV...") - for n in tqdm(filenames): - in_path = (root / "clips" / n).as_posix() - waveform, sr = torchaudio.load(in_path) - converted, converted_sr = torchaudio.sox_effects.apply_effects_tensor( - waveform, sr, [["rate", str(target_sr)], ["channels", "1"]] - ) - out_path = (out_root / Path(n).with_suffix(".wav").name).as_posix() - torchaudio.save(out_path, converted, converted_sr, encoding="PCM_S", - bits_per_sample=16) - - -def process(args): - data_root = Path(args.data_root).absolute() / args.lang - - # Generate TSV manifest - print("Generating manifest...") - - df_top_n = get_top_n(data_root) - id_to_split, speakers = get_splits(df_top_n) - - if args.convert_to_wav: - convert_to_wav(data_root, df_top_n["path"].tolist()) - - manifest_by_split = {split: defaultdict(list) for split in SPLITS} - for sample in tqdm(df_top_n.to_dict(orient="index").values()): - sample_id = sample["id"] - split = id_to_split[sample_id] - manifest_by_split[split]["id"].append(sample_id) - if 
args.convert_to_wav: - audio_path = data_root / "wav" / f"{sample_id}.wav" - else: - audio_path = data_root / "clips" / f"{sample_id}.mp3" - manifest_by_split[split]["audio"].append(audio_path.as_posix()) - manifest_by_split[split]["n_frames"].append(sample["n_frames"]) - manifest_by_split[split]["tgt_text"].append(sample["sentence"]) - manifest_by_split[split]["speaker"].append(sample["client_id"]) - manifest_by_split[split]["src_text"].append(sample["sentence"]) - - output_root = Path(args.output_manifest_root).absolute() - output_root.mkdir(parents=True, exist_ok=True) - for split in SPLITS: - save_df_to_tsv( - pd.DataFrame.from_dict(manifest_by_split[split]), - output_root / f"{split}.audio.tsv" - ) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--data-root", "-d", required=True, type=str) - parser.add_argument("--output-manifest-root", "-m", required=True, type=str) - parser.add_argument("--lang", "-l", required=True, type=str) - parser.add_argument("--convert-to-wav", action="store_true") - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/get_feature_manifest.py b/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/get_feature_manifest.py deleted file mode 100644 index 4a1e119b3..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/get_feature_manifest.py +++ /dev/null @@ -1,262 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -from pathlib import Path -import shutil -from tempfile import NamedTemporaryFile -from collections import Counter, defaultdict - -import pandas as pd -import torchaudio -from tqdm import tqdm - -from fairseq.data.audio.audio_utils import convert_waveform -from examples.speech_to_text.data_utils import ( - create_zip, - gen_config_yaml, - gen_vocab, - get_zip_manifest, - load_tsv_to_dicts, - save_df_to_tsv -) -from examples.speech_synthesis.data_utils import ( - extract_logmel_spectrogram, extract_pitch, extract_energy, get_global_cmvn, - ipa_phonemize, get_mfa_alignment, get_unit_alignment, - get_feature_value_min_max -) - - -log = logging.getLogger(__name__) - - -def process(args): - assert "train" in args.splits - out_root = Path(args.output_root).absolute() - out_root.mkdir(exist_ok=True) - - print("Fetching data...") - audio_manifest_root = Path(args.audio_manifest_root).absolute() - samples = [] - for s in args.splits: - for e in load_tsv_to_dicts(audio_manifest_root / f"{s}.audio.tsv"): - e["split"] = s - samples.append(e) - sample_ids = [s["id"] for s in samples] - - # Get alignment info - id_to_alignment = None - if args.textgrid_zip is not None: - assert args.id_to_units_tsv is None - id_to_alignment = get_mfa_alignment( - args.textgrid_zip, sample_ids, args.sample_rate, args.hop_length - ) - elif args.id_to_units_tsv is not None: - # assume identical hop length on the unit sequence - id_to_alignment = get_unit_alignment(args.id_to_units_tsv, sample_ids) - - # Extract features and pack features into ZIP - feature_name = "logmelspec80" - zip_path = out_root / f"{feature_name}.zip" - pitch_zip_path = out_root / "pitch.zip" - energy_zip_path = out_root / "energy.zip" - gcmvn_npz_path = out_root / "gcmvn_stats.npz" - if zip_path.exists() and gcmvn_npz_path.exists(): - print(f"{zip_path} and {gcmvn_npz_path} exist.") - else: 
- feature_root = out_root / feature_name - feature_root.mkdir(exist_ok=True) - pitch_root = out_root / "pitch" - energy_root = out_root / "energy" - if args.add_fastspeech_targets: - pitch_root.mkdir(exist_ok=True) - energy_root.mkdir(exist_ok=True) - print("Extracting Mel spectrogram features...") - for sample in tqdm(samples): - waveform, sample_rate = torchaudio.load(sample["audio"]) - waveform, sample_rate = convert_waveform( - waveform, sample_rate, normalize_volume=args.normalize_volume, - to_sample_rate=args.sample_rate - ) - sample_id = sample["id"] - target_length = None - if id_to_alignment is not None: - a = id_to_alignment[sample_id] - target_length = sum(a.frame_durations) - if a.start_sec is not None and a.end_sec is not None: - start_frame = int(a.start_sec * sample_rate) - end_frame = int(a.end_sec * sample_rate) - waveform = waveform[:, start_frame: end_frame] - extract_logmel_spectrogram( - waveform, sample_rate, feature_root / f"{sample_id}.npy", - win_length=args.win_length, hop_length=args.hop_length, - n_fft=args.n_fft, n_mels=args.n_mels, f_min=args.f_min, - f_max=args.f_max, target_length=target_length - ) - if args.add_fastspeech_targets: - assert id_to_alignment is not None - extract_pitch( - waveform, sample_rate, pitch_root / f"{sample_id}.npy", - hop_length=args.hop_length, log_scale=True, - phoneme_durations=id_to_alignment[sample_id].frame_durations - ) - extract_energy( - waveform, energy_root / f"{sample_id}.npy", - hop_length=args.hop_length, n_fft=args.n_fft, - log_scale=True, - phoneme_durations=id_to_alignment[sample_id].frame_durations - ) - print("ZIPing features...") - create_zip(feature_root, zip_path) - get_global_cmvn(feature_root, gcmvn_npz_path) - shutil.rmtree(feature_root) - if args.add_fastspeech_targets: - create_zip(pitch_root, pitch_zip_path) - shutil.rmtree(pitch_root) - create_zip(energy_root, energy_zip_path) - shutil.rmtree(energy_root) - - print("Fetching ZIP manifest...") - audio_paths, audio_lengths = get_zip_manifest(zip_path) - pitch_paths, pitch_lengths, energy_paths, energy_lengths = [None] * 4 - if args.add_fastspeech_targets: - pitch_paths, pitch_lengths = get_zip_manifest(pitch_zip_path) - energy_paths, energy_lengths = get_zip_manifest(energy_zip_path) - # Generate TSV manifest - print("Generating manifest...") - id_to_cer = None - if args.cer_threshold is not None: - assert Path(args.cer_tsv_path).is_file() - id_to_cer = { - x["id"]: x["uer"] for x in load_tsv_to_dicts(args.cer_tsv_path) - } - manifest_by_split = {split: defaultdict(list) for split in args.splits} - for sample in tqdm(samples): - sample_id, split = sample["id"], sample["split"] - - if args.snr_threshold is not None and "snr" in sample \ - and sample["snr"] < args.snr_threshold: - continue - if args.cer_threshold is not None \ - and id_to_cer[sample_id] > args.cer_threhold: - continue - - normalized_utt = sample["tgt_text"] - if id_to_alignment is not None: - normalized_utt = " ".join(id_to_alignment[sample_id].tokens) - elif args.ipa_vocab: - normalized_utt = ipa_phonemize( - normalized_utt, lang=args.lang, use_g2p=args.use_g2p - ) - manifest_by_split[split]["id"].append(sample_id) - manifest_by_split[split]["audio"].append(audio_paths[sample_id]) - manifest_by_split[split]["n_frames"].append(audio_lengths[sample_id]) - manifest_by_split[split]["tgt_text"].append(normalized_utt) - manifest_by_split[split]["speaker"].append(sample["speaker"]) - manifest_by_split[split]["src_text"].append(sample["src_text"]) - if args.add_fastspeech_targets: - assert 
id_to_alignment is not None - duration = " ".join( - str(d) for d in id_to_alignment[sample_id].frame_durations - ) - manifest_by_split[split]["duration"].append(duration) - manifest_by_split[split]["pitch"].append(pitch_paths[sample_id]) - manifest_by_split[split]["energy"].append(energy_paths[sample_id]) - for split in args.splits: - save_df_to_tsv( - pd.DataFrame.from_dict(manifest_by_split[split]), - out_root / f"{split}.tsv" - ) - # Generate vocab - vocab_name, spm_filename = None, None - if id_to_alignment is not None or args.ipa_vocab: - vocab = Counter() - for t in manifest_by_split["train"]["tgt_text"]: - vocab.update(t.split(" ")) - vocab_name = "vocab.txt" - with open(out_root / vocab_name, "w") as f: - for s, c in vocab.most_common(): - f.write(f"{s} {c}\n") - else: - spm_filename_prefix = "spm_char" - spm_filename = f"{spm_filename_prefix}.model" - with NamedTemporaryFile(mode="w") as f: - for t in manifest_by_split["train"]["tgt_text"]: - f.write(t + "\n") - f.flush() # needed to ensure gen_vocab sees dumped text - gen_vocab(Path(f.name), out_root / spm_filename_prefix, "char") - # Generate speaker list - speakers = sorted({sample["speaker"] for sample in samples}) - speakers_path = out_root / "speakers.txt" - with open(speakers_path, "w") as f: - for speaker in speakers: - f.write(f"{speaker}\n") - # Generate config YAML - win_len_t = args.win_length / args.sample_rate - hop_len_t = args.hop_length / args.sample_rate - extra = { - "sample_rate": args.sample_rate, - "features": { - "type": "spectrogram+melscale+log", - "eps": 1e-5, "n_mels": args.n_mels, "n_fft": args.n_fft, - "window_fn": "hann", "win_length": args.win_length, - "hop_length": args.hop_length, "sample_rate": args.sample_rate, - "win_len_t": win_len_t, "hop_len_t": hop_len_t, - "f_min": args.f_min, "f_max": args.f_max, - "n_stft": args.n_fft // 2 + 1 - } - } - if len(speakers) > 1: - extra["speaker_set_filename"] = "speakers.txt" - if args.add_fastspeech_targets: - pitch_min, pitch_max = get_feature_value_min_max( - [(out_root / n).as_posix() for n in pitch_paths.values()] - ) - energy_min, energy_max = get_feature_value_min_max( - [(out_root / n).as_posix() for n in energy_paths.values()] - ) - extra["features"]["pitch_min"] = pitch_min - extra["features"]["pitch_max"] = pitch_max - extra["features"]["energy_min"] = energy_min - extra["features"]["energy_max"] = energy_max - gen_config_yaml( - out_root, spm_filename=spm_filename, vocab_name=vocab_name, - audio_root=out_root.as_posix(), input_channels=None, - input_feat_per_channel=None, specaugment_policy=None, - cmvn_type="global", gcmvn_path=gcmvn_npz_path, extra=extra - ) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--audio-manifest-root", "-m", required=True, type=str) - parser.add_argument("--output-root", "-o", required=True, type=str) - parser.add_argument("--splits", "-s", type=str, nargs="+", - default=["train", "dev", "test"]) - parser.add_argument("--ipa-vocab", action="store_true") - parser.add_argument("--use-g2p", action="store_true") - parser.add_argument("--lang", type=str, default="en-us") - parser.add_argument("--win-length", type=int, default=1024) - parser.add_argument("--hop-length", type=int, default=256) - parser.add_argument("--n-fft", type=int, default=1024) - parser.add_argument("--n-mels", type=int, default=80) - parser.add_argument("--f-min", type=int, default=20) - parser.add_argument("--f-max", type=int, default=8000) - parser.add_argument("--sample-rate", type=int, default=22050) - 
parser.add_argument("--normalize-volume", "-n", action="store_true") - parser.add_argument("--textgrid-zip", type=str, default=None) - parser.add_argument("--id-to-units-tsv", type=str, default=None) - parser.add_argument("--add-fastspeech-targets", action="store_true") - parser.add_argument("--snr-threshold", type=float, default=None) - parser.add_argument("--cer-threshold", type=float, default=None) - parser.add_argument("--cer-tsv-path", type=str, default="") - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/get_ljspeech_audio_manifest.py b/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/get_ljspeech_audio_manifest.py deleted file mode 100644 index 7ec1fb752..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/get_ljspeech_audio_manifest.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -from pathlib import Path -from collections import defaultdict - -import pandas as pd -from torchaudio.datasets import LJSPEECH -from tqdm import tqdm - -from examples.speech_to_text.data_utils import save_df_to_tsv - - -log = logging.getLogger(__name__) - -SPLITS = ["train", "dev", "test"] - - -def process(args): - out_root = Path(args.output_data_root).absolute() - out_root.mkdir(parents=True, exist_ok=True) - - # Generate TSV manifest - print("Generating manifest...") - # following FastSpeech's splits - dataset = LJSPEECH(out_root.as_posix(), download=True) - id_to_split = {} - for x in dataset._flist: - id_ = x[0] - speaker = id_.split("-")[0] - id_to_split[id_] = { - "LJ001": "test", "LJ002": "test", "LJ003": "dev" - }.get(speaker, "train") - manifest_by_split = {split: defaultdict(list) for split in SPLITS} - progress = tqdm(enumerate(dataset), total=len(dataset)) - for i, (waveform, _, utt, normalized_utt) in progress: - sample_id = dataset._flist[i][0] - split = id_to_split[sample_id] - manifest_by_split[split]["id"].append(sample_id) - audio_path = f"{dataset._path}/{sample_id}.wav" - manifest_by_split[split]["audio"].append(audio_path) - manifest_by_split[split]["n_frames"].append(len(waveform[0])) - manifest_by_split[split]["tgt_text"].append(normalized_utt) - manifest_by_split[split]["speaker"].append("ljspeech") - manifest_by_split[split]["src_text"].append(utt) - - manifest_root = Path(args.output_manifest_root).absolute() - manifest_root.mkdir(parents=True, exist_ok=True) - for split in SPLITS: - save_df_to_tsv( - pd.DataFrame.from_dict(manifest_by_split[split]), - manifest_root / f"{split}.audio.tsv" - ) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--output-data-root", "-d", required=True, type=str) - parser.add_argument("--output-manifest-root", "-m", required=True, type=str) - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/get_speaker_embedding.py b/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/get_speaker_embedding.py deleted file mode 100644 index 0e3e4c5cd..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/get_speaker_embedding.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import argparse -from collections import defaultdict -from itertools import chain -from pathlib import Path - -import numpy as np -import torchaudio -import torchaudio.sox_effects as ta_sox -import yaml -from tqdm import tqdm - -from examples.speech_to_text.data_utils import load_tsv_to_dicts -from examples.speech_synthesis.preprocessing.speaker_embedder import SpkrEmbedder - - -def extract_embedding(audio_path, embedder): - wav, sr = torchaudio.load(audio_path) # 2D - if sr != embedder.RATE: - wav, sr = ta_sox.apply_effects_tensor( - wav, sr, [["rate", str(embedder.RATE)]] - ) - try: - emb = embedder([wav[0].cuda().float()]).cpu().numpy() - except RuntimeError: - emb = None - return emb - - -def process(args): - print("Fetching data...") - raw_manifest_root = Path(args.raw_manifest_root).absolute() - samples = [load_tsv_to_dicts(raw_manifest_root / (s + ".tsv")) - for s in args.splits] - samples = list(chain(*samples)) - with open(args.config, "r") as f: - config = yaml.load(f, Loader=yaml.FullLoader) - with open(f"{config['audio_root']}/{config['speaker_set_filename']}") as f: - speaker_to_id = {r.strip(): i for i, r in enumerate(f)} - - embedder = SpkrEmbedder(args.ckpt).cuda() - speaker_to_cnt = defaultdict(float) - speaker_to_emb = defaultdict(float) - for sample in tqdm(samples, desc="extract emb"): - emb = extract_embedding(sample["audio"], embedder) - if emb is not None: - speaker_to_cnt[sample["speaker"]] += 1 - speaker_to_emb[sample["speaker"]] += emb - if len(speaker_to_emb) != len(speaker_to_id): - missed = set(speaker_to_id) - set(speaker_to_emb.keys()) - print( - f"WARNING: missing embeddings for {len(missed)} speaker:\n{missed}" - ) - speaker_emb_mat = np.zeros((len(speaker_to_id), len(emb)), float) - for speaker in speaker_to_emb: - idx = speaker_to_id[speaker] - emb = speaker_to_emb[speaker] - cnt = speaker_to_cnt[speaker] - speaker_emb_mat[idx, :] = emb / cnt - speaker_emb_name = "speaker_emb.npy" - speaker_emb_path = f"{config['audio_root']}/{speaker_emb_name}" - np.save(speaker_emb_path, speaker_emb_mat) - config["speaker_emb_filename"] = speaker_emb_name - - with open(args.new_config, "w") as f: - yaml.dump(config, f) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--raw-manifest-root", "-m", required=True, type=str) - parser.add_argument("--splits", "-s", type=str, nargs="+", - default=["train"]) - parser.add_argument("--config", "-c", required=True, type=str) - parser.add_argument("--new-config", "-n", required=True, type=str) - parser.add_argument("--ckpt", required=True, type=str, - help="speaker embedder checkpoint") - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/get_vctk_audio_manifest.py b/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/get_vctk_audio_manifest.py deleted file mode 100644 index 7afa40fcd..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/get_vctk_audio_manifest.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
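# A minimal sketch of consuming the `speaker_emb.npy` matrix written by the
# script above: row i holds the averaged d-vector of the speaker on line i
# of speakers.txt. "p225" is an illustrative VCTK speaker id.
import numpy as np

speaker_emb_mat = np.load("speaker_emb.npy")  # (n_speakers, emb_dim)
with open("speakers.txt") as f:
    speaker_to_id = {line.strip(): i for i, line in enumerate(f)}
dvec = speaker_emb_mat[speaker_to_id["p225"]]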
- -import argparse -import logging -import numpy as np -import re -from pathlib import Path -from collections import defaultdict - -import pandas as pd -from torchaudio.datasets import VCTK -from tqdm import tqdm - -from examples.speech_to_text.data_utils import save_df_to_tsv - - -log = logging.getLogger(__name__) - -SPLITS = ["train", "dev", "test"] - - -def normalize_text(text): - return re.sub(r"[^a-zA-Z.?!,'\- ]", '', text) - - -def process(args): - out_root = Path(args.output_data_root).absolute() - out_root.mkdir(parents=True, exist_ok=True) - - # Generate TSV manifest - print("Generating manifest...") - dataset = VCTK(out_root.as_posix(), download=False) - ids = list(dataset._walker) - np.random.seed(args.seed) - np.random.shuffle(ids) - n_train = len(ids) - args.n_dev - args.n_test - _split = ["train"] * n_train + ["dev"] * args.n_dev + ["test"] * args.n_test - id_to_split = dict(zip(ids, _split)) - manifest_by_split = {split: defaultdict(list) for split in SPLITS} - progress = tqdm(enumerate(dataset), total=len(dataset)) - for i, (waveform, _, text, speaker_id, _) in progress: - sample_id = dataset._walker[i] - _split = id_to_split[sample_id] - audio_dir = Path(dataset._path) / dataset._folder_audio / speaker_id - audio_path = audio_dir / f"{sample_id}.wav" - text = normalize_text(text) - manifest_by_split[_split]["id"].append(sample_id) - manifest_by_split[_split]["audio"].append(audio_path.as_posix()) - manifest_by_split[_split]["n_frames"].append(len(waveform[0])) - manifest_by_split[_split]["tgt_text"].append(text) - manifest_by_split[_split]["speaker"].append(speaker_id) - manifest_by_split[_split]["src_text"].append(text) - - manifest_root = Path(args.output_manifest_root).absolute() - manifest_root.mkdir(parents=True, exist_ok=True) - for _split in SPLITS: - save_df_to_tsv( - pd.DataFrame.from_dict(manifest_by_split[_split]), - manifest_root / f"{_split}.audio.tsv" - ) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--output-data-root", "-d", required=True, type=str) - parser.add_argument("--output-manifest-root", "-m", required=True, type=str) - parser.add_argument("--n-dev", default=50, type=int) - parser.add_argument("--n-test", default=100, type=int) - parser.add_argument("--seed", "-s", default=1234, type=int) - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/speaker_embedder/__init__.py b/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/speaker_embedder/__init__.py deleted file mode 100644 index 3b178676b..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/speaker_embedder/__init__.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
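# A minimal sketch of the deterministic split strategy used by the VCTK
# manifest script above: a seeded shuffle followed by slicing, so reruns
# with the same seed reproduce the same train/dev/test assignment.
import numpy as np

ids = [f"utt{i:03d}" for i in range(10)]
np.random.seed(1234)
np.random.shuffle(ids)
n_dev, n_test = 2, 3
splits = ["train"] * (len(ids) - n_dev - n_test) + ["dev"] * n_dev + ["test"] * n_test
id_to_split = dict(zip(ids, splits))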
- - -import librosa -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.data -import torchaudio - - -EMBEDDER_PARAMS = { - 'num_mels': 40, - 'n_fft': 512, - 'emb_dim': 256, - 'lstm_hidden': 768, - 'lstm_layers': 3, - 'window': 80, - 'stride': 40, -} - - -def set_requires_grad(nets, requires_grad=False): - """Set requies_grad=Fasle for all the networks to avoid unnecessary - computations - Parameters: - nets (network list) -- a list of networks - requires_grad (bool) -- whether the networks require gradients or not - """ - if not isinstance(nets, list): - nets = [nets] - for net in nets: - if net is not None: - for param in net.parameters(): - param.requires_grad = requires_grad - - -class LinearNorm(nn.Module): - def __init__(self, hp): - super(LinearNorm, self).__init__() - self.linear_layer = nn.Linear(hp["lstm_hidden"], hp["emb_dim"]) - - def forward(self, x): - return self.linear_layer(x) - - -class SpeechEmbedder(nn.Module): - def __init__(self, hp): - super(SpeechEmbedder, self).__init__() - self.lstm = nn.LSTM(hp["num_mels"], - hp["lstm_hidden"], - num_layers=hp["lstm_layers"], - batch_first=True) - self.proj = LinearNorm(hp) - self.hp = hp - - def forward(self, mel): - # (num_mels, T) -> (num_mels, T', window) - mels = mel.unfold(1, self.hp["window"], self.hp["stride"]) - mels = mels.permute(1, 2, 0) # (T', window, num_mels) - x, _ = self.lstm(mels) # (T', window, lstm_hidden) - x = x[:, -1, :] # (T', lstm_hidden), use last frame only - x = self.proj(x) # (T', emb_dim) - x = x / torch.norm(x, p=2, dim=1, keepdim=True) # (T', emb_dim) - - x = x.mean(dim=0) - if x.norm(p=2) != 0: - x = x / x.norm(p=2) - return x - - -class SpkrEmbedder(nn.Module): - RATE = 16000 - - def __init__( - self, - embedder_path, - embedder_params=EMBEDDER_PARAMS, - rate=16000, - hop_length=160, - win_length=400, - pad=False, - ): - super(SpkrEmbedder, self).__init__() - embedder_pt = torch.load(embedder_path, map_location="cpu") - self.embedder = SpeechEmbedder(embedder_params) - self.embedder.load_state_dict(embedder_pt) - self.embedder.eval() - set_requires_grad(self.embedder, requires_grad=False) - self.embedder_params = embedder_params - - self.register_buffer('mel_basis', torch.from_numpy( - librosa.filters.mel( - sr=self.RATE, - n_fft=self.embedder_params["n_fft"], - n_mels=self.embedder_params["num_mels"]) - ) - ) - - self.resample = None - if rate != self.RATE: - self.resample = torchaudio.transforms.Resample(rate, self.RATE) - self.hop_length = hop_length - self.win_length = win_length - self.pad = pad - - def get_mel(self, y): - if self.pad and y.shape[-1] < 14000: - y = F.pad(y, (0, 14000 - y.shape[-1])) - - window = torch.hann_window(self.win_length).to(y) - y = torch.stft(y, n_fft=self.embedder_params["n_fft"], - hop_length=self.hop_length, - win_length=self.win_length, - window=window) - magnitudes = torch.norm(y, dim=-1, p=2) ** 2 - mel = torch.log10(self.mel_basis @ magnitudes + 1e-6) - return mel - - def forward(self, inputs): - dvecs = [] - for wav in inputs: - mel = self.get_mel(wav) - if mel.dim() == 3: - mel = mel.squeeze(0) - dvecs += [self.embedder(mel)] - dvecs = torch.stack(dvecs) - - dvec = torch.mean(dvecs, dim=0) - dvec = dvec / torch.norm(dvec) - - return dvec diff --git a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/vad/__init__.py b/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/vad/__init__.py deleted file mode 100644 index 9cf121081..000000000 --- 
a/kosmos-g/fairseq/examples/speech_synthesis/preprocessing/vad/__init__.py +++ /dev/null @@ -1,192 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import collections -import contextlib -import wave - -try: - import webrtcvad -except ImportError: - raise ImportError("Please install py-webrtcvad: pip install webrtcvad") -import argparse -import os -import logging -from tqdm import tqdm - -AUDIO_SUFFIX = '.wav' -FS_MS = 30 -SCALE = 6e-5 -THRESHOLD = 0.3 - - -def read_wave(path): - """Reads a .wav file. - Takes the path, and returns (PCM audio data, sample rate). - """ - with contextlib.closing(wave.open(path, 'rb')) as wf: - num_channels = wf.getnchannels() - assert num_channels == 1 - sample_width = wf.getsampwidth() - assert sample_width == 2 - sample_rate = wf.getframerate() - assert sample_rate in (8000, 16000, 32000, 48000) - pcm_data = wf.readframes(wf.getnframes()) - return pcm_data, sample_rate - - -def write_wave(path, audio, sample_rate): - """Writes a .wav file. - Takes path, PCM audio data, and sample rate. - """ - with contextlib.closing(wave.open(path, 'wb')) as wf: - wf.setnchannels(1) - wf.setsampwidth(2) - wf.setframerate(sample_rate) - wf.writeframes(audio) - - -class Frame(object): - """Represents a "frame" of audio data.""" - def __init__(self, bytes, timestamp, duration): - self.bytes = bytes - self.timestamp = timestamp - self.duration = duration - - -def frame_generator(frame_duration_ms, audio, sample_rate): - """Generates audio frames from PCM audio data. - Takes the desired frame duration in milliseconds, the PCM data, and - the sample rate. - Yields Frames of the requested duration. - """ - n = int(sample_rate * (frame_duration_ms / 1000.0) * 2) - offset = 0 - timestamp = 0.0 - duration = (float(n) / sample_rate) / 2.0 - while offset + n < len(audio): - yield Frame(audio[offset:offset + n], timestamp, duration) - timestamp += duration - offset += n - - -def vad_collector(sample_rate, frame_duration_ms, - padding_duration_ms, vad, frames): - """Filters out non-voiced audio frames. - Given a webrtcvad.Vad and a source of audio frames, yields only - the voiced audio. - Uses a padded, sliding window algorithm over the audio frames. - When more than 90% of the frames in the window are voiced (as - reported by the VAD), the collector triggers and begins yielding - audio frames. Then the collector waits until 90% of the frames in - the window are unvoiced to detrigger. - The window is padded at the front and back to provide a small - amount of silence or the beginnings/endings of speech around the - voiced frames. - Arguments: - sample_rate - The audio sample rate, in Hz. - frame_duration_ms - The frame duration in milliseconds. - padding_duration_ms - The amount to pad the window, in milliseconds. - vad - An instance of webrtcvad.Vad. - frames - a source of audio frames (sequence or generator). - Returns: A generator that yields PCM audio data. - """ - num_padding_frames = int(padding_duration_ms / frame_duration_ms) - # We use a deque for our sliding window/ring buffer. - ring_buffer = collections.deque(maxlen=num_padding_frames) - # We have two states: TRIGGERED and NOTTRIGGERED. We start in the - # NOTTRIGGERED state. 
- triggered = False - - voiced_frames = [] - for frame in frames: - is_speech = vad.is_speech(frame.bytes, sample_rate) - - # sys.stdout.write('1' if is_speech else '0') - if not triggered: - ring_buffer.append((frame, is_speech)) - num_voiced = len([f for f, speech in ring_buffer if speech]) - # If we're NOTTRIGGERED and more than 90% of the frames in - # the ring buffer are voiced frames, then enter the - # TRIGGERED state. - if num_voiced > 0.9 * ring_buffer.maxlen: - triggered = True - # We want to yield all the audio we see from now until - # we are NOTTRIGGERED, but we have to start with the - # audio that's already in the ring buffer. - for f, _ in ring_buffer: - voiced_frames.append(f) - ring_buffer.clear() - else: - # We're in the TRIGGERED state, so collect the audio data - # and add it to the ring buffer. - voiced_frames.append(frame) - ring_buffer.append((frame, is_speech)) - num_unvoiced = len([f for f, speech in ring_buffer if not speech]) - # If more than 90% of the frames in the ring buffer are - # unvoiced, then enter NOTTRIGGERED and yield whatever - # audio we've collected. - if num_unvoiced > 0.9 * ring_buffer.maxlen: - triggered = False - yield [b''.join([f.bytes for f in voiced_frames]), - voiced_frames[0].timestamp, voiced_frames[-1].timestamp] - ring_buffer.clear() - voiced_frames = [] - # If we have any leftover voiced audio when we run out of input, - # yield it. - if voiced_frames: - yield [b''.join([f.bytes for f in voiced_frames]), - voiced_frames[0].timestamp, voiced_frames[-1].timestamp] - - -def main(args): - # create output folder - try: - cmd = f"mkdir -p {args.out_path}" - os.system(cmd) - except Exception: - logging.error("Can not create output folder") - exit(-1) - - # build vad object - vad = webrtcvad.Vad(int(args.agg)) - # iterating over wavs in dir - for file in tqdm(os.listdir(args.in_path)): - if file.endswith(AUDIO_SUFFIX): - audio_inpath = os.path.join(args.in_path, file) - audio_outpath = os.path.join(args.out_path, file) - audio, sample_rate = read_wave(audio_inpath) - frames = frame_generator(FS_MS, audio, sample_rate) - frames = list(frames) - segments = vad_collector(sample_rate, FS_MS, 300, vad, frames) - merge_segments = list() - timestamp_start = 0.0 - timestamp_end = 0.0 - # removing start, end, and long sequences of sils - for i, segment in enumerate(segments): - merge_segments.append(segment[0]) - if i and timestamp_start: - sil_duration = segment[1] - timestamp_end - if sil_duration > THRESHOLD: - merge_segments.append(int(THRESHOLD / SCALE)*(b'\x00')) - else: - merge_segments.append(int((sil_duration / SCALE))*(b'\x00')) - timestamp_start = segment[1] - timestamp_end = segment[2] - segment = b''.join(merge_segments) - write_wave(audio_outpath, segment, sample_rate) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description='Apply vad to a file of fils.') - parser.add_argument('in_path', type=str, help='Path to the input files') - parser.add_argument('out_path', type=str, - help='Path to save the processed files') - parser.add_argument('--agg', type=int, default=3, - help='The level of aggressiveness of the VAD: [0-3]') - args = parser.parse_args() - - main(args) diff --git a/kosmos-g/fairseq/examples/speech_synthesis/utils.py b/kosmos-g/fairseq/examples/speech_synthesis/utils.py deleted file mode 100644 index 2c7b03733..000000000 --- a/kosmos-g/fairseq/examples/speech_synthesis/utils.py +++ /dev/null @@ -1,101 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
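# A minimal end-to-end sketch of the VAD helpers above, assuming a mono,
# 16-bit PCM WAV at one of the supported sample rates ("input.wav" is a
# placeholder); each yielded segment is (pcm_bytes, start_ts, end_ts).
import webrtcvad

from examples.speech_synthesis.preprocessing.vad import (
    FS_MS, frame_generator, read_wave, vad_collector, write_wave,
)

audio, sample_rate = read_wave("input.wav")
vad = webrtcvad.Vad(3)  # 3 = most aggressive filtering
frames = list(frame_generator(FS_MS, audio, sample_rate))
segments = vad_collector(sample_rate, FS_MS, 300, vad, frames)
for i, (pcm, start, end) in enumerate(segments):
    write_wave(f"chunk-{i:02d}.wav", pcm, sample_rate)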
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from scipy.interpolate import interp1d -import torchaudio - -from fairseq.tasks.text_to_speech import ( - batch_compute_distortion, compute_rms_dist -) - - -def batch_mel_spectral_distortion( - y1, y2, sr, normalize_type="path", mel_fn=None -): - """ - https://arxiv.org/pdf/2011.03568.pdf - - Same as Mel Cepstral Distortion, but computed on log-mel spectrograms. - """ - if mel_fn is None or mel_fn.sample_rate != sr: - mel_fn = torchaudio.transforms.MelSpectrogram( - sr, n_fft=int(0.05 * sr), win_length=int(0.05 * sr), - hop_length=int(0.0125 * sr), f_min=20, n_mels=80, - window_fn=torch.hann_window - ).to(y1[0].device) - offset = 1e-6 - return batch_compute_distortion( - y1, y2, sr, lambda y: torch.log(mel_fn(y) + offset).transpose(-1, -2), - compute_rms_dist, normalize_type - ) - - -# This code is based on -# "https://github.com/bastibe/MAPS-Scripts/blob/master/helper.py" -def _same_t_in_true_and_est(func): - def new_func(true_t, true_f, est_t, est_f): - assert type(true_t) is np.ndarray - assert type(true_f) is np.ndarray - assert type(est_t) is np.ndarray - assert type(est_f) is np.ndarray - - interpolated_f = interp1d( - est_t, est_f, bounds_error=False, kind='nearest', fill_value=0 - )(true_t) - return func(true_t, true_f, true_t, interpolated_f) - - return new_func - - -@_same_t_in_true_and_est -def gross_pitch_error(true_t, true_f, est_t, est_f): - """The relative frequency in percent of pitch estimates that are - outside a threshold around the true pitch. Only frames that are - considered pitched by both the ground truth and the estimator (if - applicable) are considered. - """ - - correct_frames = _true_voiced_frames(true_t, true_f, est_t, est_f) - gross_pitch_error_frames = _gross_pitch_error_frames( - true_t, true_f, est_t, est_f - ) - return np.sum(gross_pitch_error_frames) / np.sum(correct_frames) - - -def _gross_pitch_error_frames(true_t, true_f, est_t, est_f, eps=1e-8): - voiced_frames = _true_voiced_frames(true_t, true_f, est_t, est_f) - true_f_p_eps = [x + eps for x in true_f] - pitch_error_frames = np.abs(est_f / true_f_p_eps - 1) > 0.2 - return voiced_frames & pitch_error_frames - - -def _true_voiced_frames(true_t, true_f, est_t, est_f): - return (est_f != 0) & (true_f != 0) - - -def _voicing_decision_error_frames(true_t, true_f, est_t, est_f): - return (est_f != 0) != (true_f != 0) - - -@_same_t_in_true_and_est -def f0_frame_error(true_t, true_f, est_t, est_f): - gross_pitch_error_frames = _gross_pitch_error_frames( - true_t, true_f, est_t, est_f - ) - voicing_decision_error_frames = _voicing_decision_error_frames( - true_t, true_f, est_t, est_f - ) - return (np.sum(gross_pitch_error_frames) + - np.sum(voicing_decision_error_frames)) / (len(true_t)) - - -@_same_t_in_true_and_est -def voicing_decision_error(true_t, true_f, est_t, est_f): - voicing_decision_error_frames = _voicing_decision_error_frames( - true_t, true_f, est_t, est_f - ) - return np.sum(voicing_decision_error_frames) / (len(true_t)) diff --git a/kosmos-g/fairseq/examples/speech_text_joint_to_text/README.md b/kosmos-g/fairseq/examples/speech_text_joint_to_text/README.md deleted file mode 100644 index e071d241e..000000000 --- a/kosmos-g/fairseq/examples/speech_text_joint_to_text/README.md +++ /dev/null @@ -1,46 +0,0 @@ -# Joint Speech Text training in Fairseq -An extension of Fairseq s2t project with the speech to text task enhanced by the 
co-trained text to text mapping task. More details about Fairseq s2t can be found [here](../speech_to_text/README.md) - -## Examples -Examples of speech text joint training in fairseq -- [English-to-German MuST-C model](docs/ende-mustc.md) -- [IWSLT 2021 Multilingual Speech Translation](docs/iwslt2021.md) - -## Citation -Please cite as: -``` -@inproceedings{Tang2021AGM, - title={A General Multi-Task Learning Framework to Leverage Text Data for Speech to Text Tasks}, - author={Yun Tang and J. Pino and Changhan Wang and Xutai Ma and Dmitriy Genzel}, - booktitle={ICASSP}, - year={2021} -} - -@inproceedings{Tang2021IST, - title = {Improving Speech Translation by Understanding and Learning from the Auxiliary Text Translation Task}, - author = {Yun Tang and Juan Pino and Xian Li and Changhan Wang and Dmitriy Genzel}, - booktitle = {ACL}, - year = {2021}, -} - -@inproceedings{Tang2021FST, - title = {FST: the FAIR Speech Translation System for the IWSLT21 Multilingual Shared Task}, - author = {Yun Tang and Hongyu Gong and Xian Li and Changhan Wang and Juan Pino and Holger Schwenk and Naman Goyal}, - booktitle = {IWSLT}, - year = {2021}, -} - -@inproceedings{wang2020fairseqs2t, - title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq}, - author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino}, - booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations}, - year = {2020}, -} - -@inproceedings{ott2019fairseq, - title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling}, - author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli}, - booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations}, - year = {2019}, -} -``` diff --git a/kosmos-g/fairseq/examples/speech_text_joint_to_text/__init__.py b/kosmos-g/fairseq/examples/speech_text_joint_to_text/__init__.py deleted file mode 100644 index 239d2e69f..000000000 --- a/kosmos-g/fairseq/examples/speech_text_joint_to_text/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import tasks, criterions, models # noqa diff --git a/kosmos-g/fairseq/examples/speech_text_joint_to_text/configs/mustc_noise.list b/kosmos-g/fairseq/examples/speech_text_joint_to_text/configs/mustc_noise.list deleted file mode 100644 index 02eeac4e0..000000000 --- a/kosmos-g/fairseq/examples/speech_text_joint_to_text/configs/mustc_noise.list +++ /dev/null @@ -1,49 +0,0 @@ -"(Applause) NOISE -"(Laughter) VOICE -"(Laughter)" VOICE -(Applause) NOISE -(Applause). NOISE -(Audience) VOICE -(Audio) NOISE -(Beat) NOISE -(Beatboxing) VOICE -(Beep) NOISE -(Beeps) NOISE -(Cheering) VOICE -(Cheers) VOICE -(Claps) NOISE -(Clicking) NOISE -(Clunk) NOISE -(Coughs) NOISE -(Drums) NOISE -(Explosion) NOISE -(Gasps) VOICE -(Guitar) NOISE -(Honk) NOISE -(Laugher) VOICE -(Laughing) VOICE -(Laughs) VOICE -(Laughter) VOICE -(Laughter). VOICE -(Laughter)... 
VOICE
-(Mumbling) VOICE
-(Music) NOISE
-(Noise) NOISE
-(Recording) VOICE
-(Ringing) NOISE
-(Shouts) VOICE
-(Sigh) VOICE
-(Sighs) VOICE
-(Silence) NOISE
-(Singing) VOICE
-(Sings) VOICE
-(Spanish) VOICE
-(Static) NOISE
-(Tones) NOISE
-(Trumpet) NOISE
-(Video) NOISE
-(Video): NOISE
-(Voice-over) NOISE
-(Whistle) NOISE
-(Whistling) NOISE
-(video): NOISE diff --git a/kosmos-g/fairseq/examples/speech_text_joint_to_text/criterions/__init__.py b/kosmos-g/fairseq/examples/speech_text_joint_to_text/criterions/__init__.py deleted file mode 100644 index 7faae7311..000000000 --- a/kosmos-g/fairseq/examples/speech_text_joint_to_text/criterions/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import importlib
-import os
-
-
-for file in os.listdir(os.path.dirname(__file__)):
- if file.endswith(".py") and not file.startswith("_"):
- criterion_name = file[: file.find(".py")]
- importlib.import_module(
- "examples.speech_text_joint_to_text.criterions." + criterion_name
- ) diff --git a/kosmos-g/fairseq/examples/speech_text_joint_to_text/criterions/text_guide_cross_entropy_acc.py b/kosmos-g/fairseq/examples/speech_text_joint_to_text/criterions/text_guide_cross_entropy_acc.py deleted file mode 100644 index 0d356e5a1..000000000 --- a/kosmos-g/fairseq/examples/speech_text_joint_to_text/criterions/text_guide_cross_entropy_acc.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-import math
-
-import torch
-import torch.nn.functional as F
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.criterions.label_smoothed_cross_entropy import label_smoothed_nll_loss
-from fairseq import metrics, utils
-
-
-@register_criterion("guided_label_smoothed_cross_entropy_with_accuracy")
-class GuidedCrossEntAccCriterion(FairseqCriterion):
- def __init__(
- self,
- task,
- sentence_avg,
- guide_alpha,
- text_input_cost_ratio,
- label_smoothing,
- disable_text_guide_update_num=0,
- attentive_cost_regularization=0,
- ):
- """
- guide_alpha: alpha to interpolate nll and kd loss
- text_input_cost_ratio: loss ratio for text only input data
- label_smoothing: label smoothing ratio
- disable_text_guide_update_num: only use nll loss for the first N updates
- attentive_cost_regularization: ratio of attentive cost
- """
- super().__init__(task)
- self.alpha = guide_alpha
- self.attn_beta = attentive_cost_regularization
- self.sentence_avg = sentence_avg
- self.eps = label_smoothing
- self.text_input_cost_ratio = text_input_cost_ratio
- self.disable_update_num = disable_text_guide_update_num
- assert self.alpha >= 0 and self.alpha <= 1.0
-
- @staticmethod
- def add_args(parser):
- """Add criterion-specific arguments to the parser."""
- # fmt: off
- parser.add_argument('--label-smoothing', default=0., type=float, metavar='D',
- help='epsilon for label smoothing, 0 means no label smoothing')
- parser.add_argument('--guide-alpha', default=0., type=float, metavar='D',
- help='alpha to merge kd cost from text to speech input with ce loss')
- parser.add_argument('--disable-text-guide-update-num', default=0, type=int, metavar='D',
- help='disable guided target from text for the first N updates.')
- parser.add_argument("--attentive-cost-regularization",
default=0.0, type=float, metavar='D', - help="use encoder attentive loss regularization with cost ratio D") - parser.add_argument("--attentive-cost-without-normalize", action='store_true', - help="Don't do normalization during attentive cost computation") - - def forward(self, model, sample, reduce=True): - reduction = 'sum' if reduce else 'none' - net_input = sample["net_input"] - net_output = model(**net_input) - attn_cost = None - lprobs = model.get_normalized_probs(net_output, log_probs=True) - is_dual_input = True if net_input['src_tokens'] is not None and net_input.get('src_txt_tokens') is not None else False - target = model.get_targets(sample, net_output) - src_token_num = 0 - if is_dual_input: - # lprobs_spch from speech encoder and lprobs_text from text encoder - lprobs_spch, lprobs_text = torch.chunk(lprobs, 2) - lprobs_spch.batch_first = lprobs.batch_first - lprobs_text.batch_first = lprobs.batch_first - - speech_loss, speech_nll_loss, speech_correct, speech_total = \ - self.guide_loss_and_acc(model, lprobs_spch, lprobs_text, target, reduce=(reduction == 'sum')) - text_loss, text_nll_loss, text_correct, text_total = self.compute_loss_and_acc(model, lprobs_text, target, reduction=reduction) - loss = (speech_loss + text_loss) - nll_loss = (speech_nll_loss + text_nll_loss) - correct = speech_correct + text_correct - total = speech_total + text_total - - attn_cost = net_output[1].get('attn_cost') - if attn_cost is not None: - # attn_cost is batch_first and padding tokens have been masked already - src_token_num = attn_cost.ne(0).sum() - attn_cost = attn_cost.sum() - loss = loss + attn_cost * self.attn_beta - else: - attn_cost = 0 - else: - loss, nll_loss, correct, total = self.compute_loss_and_acc(model, lprobs, target, reduction=reduction) - if sample["net_input"]['src_tokens'] is None: # text input only - loss = loss * self.text_input_cost_ratio - speech_loss = None - speech_nll_loss = None - - sample_size, logging_output = self.get_logging_output( - sample, loss, nll_loss, correct, total, src_token_num, speech_loss, speech_nll_loss, attn_cost, is_dual_input - ) - return loss, sample_size, logging_output - - def compute_loss_and_acc(self, model, lprobs, target, reduction='sum'): - if not lprobs.batch_first: - lprobs = lprobs.transpose(0, 1) - lprobs = lprobs.view(-1, lprobs.size(-1)) # -> (B x T) x C - target = target.view(-1) - loss, nll_loss = label_smoothed_nll_loss( - lprobs, target, self.eps, ignore_index=self.padding_idx, reduce=(reduction == 'sum'), - ) - - mask = target.ne(self.padding_idx) - correct = torch.sum(lprobs.argmax(1).masked_select(mask).eq(target.masked_select(mask))) - total = torch.sum(mask) - return loss, nll_loss, correct, total - - def guide_loss_and_acc(self, model, lprobs, lprobs_teacher, target, reduce=True): - """ lprobs_teacher is used as guide for lprobs """ - if self.alpha == 0.0 or model.num_updates < self.disable_update_num: - return self.compute_loss_and_acc(model, lprobs, target, reduction=('sum' if reduce else 'none')) - if not lprobs.batch_first: - lprobs = lprobs.transpose(0, 1) - lprobs_teacher = lprobs_teacher.transpose(0, 1) - - lprobs = lprobs.view(-1, lprobs.size(-1)).float() # -> (B x T) x C - lprobs_teacher = lprobs_teacher.view(-1, lprobs_teacher.size(-1)).float() # -> (B x T) x C - target = target.view(-1) - loss = F.nll_loss(lprobs, target, ignore_index=self.padding_idx, reduction='sum' if reduce else 'none') - nll_loss = loss - probs_teacher = lprobs_teacher.exp().masked_fill_(target.unsqueeze(-1).eq(self.padding_idx), 0) - 
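# probs_teacher now holds the text (teacher) branch's probabilities with
# padding positions zeroed out; detaching it below keeps gradients from
# flowing into the teacher, so the guide term -(probs_teacher * lprobs)
# acts as a one-way distillation target that is interpolated with the
# standard NLL loss via self.alpha.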
probs_teacher = probs_teacher.detach() - guide_loss = -(probs_teacher*lprobs).sum() if reduce else -(probs_teacher*lprobs).sum(-1, keepdim=True) - loss = self.alpha*guide_loss + (1.0 - self.alpha)*loss - - mask = target.ne(self.padding_idx) - correct = torch.sum(lprobs.argmax(1).masked_select(mask).eq(target.masked_select(mask))) - total = torch.sum(mask) - return loss, nll_loss, correct, total - - def get_logging_output( - self, - sample, - loss, - nll_loss, - correct, - total, - src_token_num=0, - speech_loss=None, - speech_nll_loss=None, - attn_cost=None, - is_dual_input=False, - ): - - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - mul_size = 2 if is_dual_input else 1 - - logging_output = { - "loss": utils.item(loss.data), # * sample['ntokens'], - "nll_loss": utils.item(nll_loss.data), # * sample['ntokens'], - "ntokens": sample["ntokens"]*mul_size, - "nsentences": sample["target"].size(0)*mul_size, - "sample_size": sample_size*mul_size, - "correct": utils.item(correct.data), - "total": utils.item(total.data), - "src_token_num": utils.item(src_token_num.data) if src_token_num > 0 else 0, - "nframes": torch.sum(sample["net_input"]["src_lengths"]).item(), - } - - if speech_loss is not None: - logging_output["speech_loss"] = utils.item(speech_loss.data) - logging_output["speech_nll_loss"] = utils.item(speech_nll_loss.data) - logging_output["sample_size_speech_cost"] = sample_size - logging_output["speech_attn_loss"] = attn_cost - - return sample_size*mul_size, logging_output - - @staticmethod - def aggregate_logging_outputs(logging_outputs): - """Aggregate logging outputs from data parallel training.""" - correct_sum = sum(log.get("correct", 0) for log in logging_outputs) - total_sum = sum(log.get("total", 0) for log in logging_outputs) - src_token_sum = sum(log.get("src_token_num", 0) for log in logging_outputs) - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - nll_loss_sum = sum(log.get("nll_loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - nframes = sum(log.get("nframes", 0) for log in logging_outputs) - speech_loss_sum = sum(log.get("speech_loss", 0) for log in logging_outputs) - speech_nll_loss_sum = sum(log.get("speech_nll_loss", 0) for log in logging_outputs) - speech_attn_loss_sum = sum(log.get("speech_attn_loss", 0) for log in logging_outputs) - sample_size_speech = sum(log.get("sample_size_speech_cost", 0) for log in logging_outputs) - - agg_output = { - "loss": loss_sum / sample_size / math.log(2) if sample_size > 0 else 0.0, - "nll_loss": nll_loss_sum / sample_size / math.log(2) if sample_size > 0 else 0.0, - # if args.sentence_avg, then sample_size is nsentences, and loss - # is per-sentence loss; else sample_size is ntokens, and the loss - # becomes per-output token loss - "speech_loss": speech_loss_sum / sample_size_speech / math.log(2) if sample_size_speech > 0 else 0.0, - "speech_nll_loss": speech_nll_loss_sum / sample_size_speech / math.log(2) if sample_size_speech > 0 else 0.0, - "speech_attn_loss": speech_attn_loss_sum / src_token_sum / math.log(2) if src_token_sum > 0 else 0.0, - "ntokens": ntokens, - "nsentences": nsentences, - "nframes": nframes, - "sample_size": sample_size, - "acc": correct_sum * 100.0 / total_sum if total_sum > 0 else 0.0, - "correct": correct_sum, - "total": total_sum, - "src_token_num": 
src_token_sum,
- # total is the number of non-padding target tokens
- }
- return agg_output
-
- @classmethod
- def reduce_metrics(cls, logging_outputs):
- """Aggregate logging outputs from data parallel training."""
- agg_logging_outputs = cls.aggregate_logging_outputs(logging_outputs)
- for k, v in agg_logging_outputs.items():
- if k in {'nsentences', 'ntokens', 'sample_size'}:
- continue
- metrics.log_scalar(k, v, round=3) diff --git a/kosmos-g/fairseq/examples/speech_text_joint_to_text/docs/ende-mustc.md b/kosmos-g/fairseq/examples/speech_text_joint_to_text/docs/ende-mustc.md deleted file mode 100644 index 86b8fa91f..000000000 --- a/kosmos-g/fairseq/examples/speech_text_joint_to_text/docs/ende-mustc.md +++ /dev/null @@ -1,112 +0,0 @@ -[[Back]](..)
-
-# Joint Speech Text Training for the MuST-C English to German Speech Translation Task
-
-Joint Training Baseline: based on the paper ["A general multi-task learning framework to leverage text data for speech to text tasks"](https://arxiv.org/pdf/2010.11338.pdf)
-
-Enhanced Joint Training: the joint training is enhanced with pre-trained models, cross-attentive regularization and online knowledge distillation, based on the paper ["Improving Speech Translation by Understanding and Learning from the Auxiliary Text Translation Task"](https://research.fb.com/publications/improving-speech-translation-by-understanding-and-learning-from-the-auxiliary-text-translation-task)
-
-## Prepare Data
-#### Download files
-- SentencePiece model [spm.model](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_de/spm.model)
-- Dictionary [dict.txt](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_de/dict.txt)
-- Config [config.yaml](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_de/config.yaml)
-#### Prepare MuST-C data set
-- Please follow the data preparation in the [S2T example](https://github.com/pytorch/fairseq/blob/main/examples/speech_to_text/docs/mustc_example.md)
-- Convert the source text under the "src_text" column in the tsv file into its phoneme representation.
-```bash
- python examples/speech_text_joint_to_text/scripts/g2p_encode.py \
- --lower-case --do-filter --use-word-start --no-punc \
- --reserve-word examples/speech_text_joint_to_text/configs/mustc_noise.list \
- --data-path ${must_c_en_de_src_text} \
- --out-path ${must_c_en_de_src_text_pho}
-```
-- Replace the source text under the "src_text" column in the tsv file with the corresponding phoneme representation generated in the step above.
-- Prepare the phoneme dictionary and save it to $MANIFEST_ROOT as [src_dict.txt](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_de/src_dict.txt)
-#### Prepare WMT text data
-- [Download wmt data](https://github.com/pytorch/fairseq/blob/main/examples/translation/prepare-wmt14en2de.sh)
-- Convert the source text (English) into its phoneme representation as above
-- Generate binary parallel files for training (as in the translation example) and save the data in $parallel_text_data
-
-## Training
-The model is trained on 8 V100 GPUs.
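With fewer GPUs, the effective batch size can be kept roughly constant by raising `--update-freq`, since fairseq's effective batch scales with `num_GPUs x max_tokens x update_freq`. A minimal sketch, assuming 2 GPUs instead of 8 (not a complete command; the full flag set is in the scripts below):

```bash
# Sketch: 8 GPUs with --update-freq 4 give an effective factor of 32;
# on 2 GPUs, --update-freq 16 preserves that effective batch size.
# All remaining flags are unchanged from the full training scripts below.
CUDA_VISIBLE_DEVICES=0,1 python train.py ${MANIFEST_ROOT} --update-freq 16
```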
- -#### Download pretrained models -- [pretrain_encoder](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_multilingual_asr_transformer_m.pt) -- [pretrain_nmt](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_de/checkpoint_mt.pt) - -#### Training scripts -- Jointly trained model from scratch -```bash -python train.py ${MANIFEST_ROOT} \ - --save-dir ${save_dir} \ - --num-workers 8 \ - --task speech_text_joint_to_text \ - --arch dualinputs2ttransformer_s \ - --user-dir examples/speech_text_joint_to_text \ - --max-epoch 100 --update-mix-data \ - --optimizer adam --lr-scheduler inverse_sqrt \ - --lr 0.001 --update-freq 4 --clip-norm 10.0 \ - --criterion guided_label_smoothed_cross_entropy_with_accuracy \ - --label-smoothing 0.1 --max-tokens 10000 --max-tokens-text 10000 \ - --max-positions-text 400 --seed 2 --speech-encoder-layers 12 \ - --text-encoder-layers 6 --encoder-shared-layers 6 --decoder-layers 6 \ - --dropout 0.1 --warmup-updates 20000 \ - --text-sample-ratio 0.25 --parallel-text-data ${parallel_text_data} \ - --text-input-cost-ratio 0.5 --enc-grad-mult 2.0 --add-speech-eos \ - --log-format json --langpairs en-de --noise-token '"'"'▁NOISE'"'"' \ - --mask-text-ratio 0.0 --max-tokens-valid 20000 --ddp-backend no_c10d \ - --log-interval 100 --data-buffer-size 50 --config-yaml config.yaml \ - --keep-last-epochs 10 -``` -- Jointly trained model with good initialization, cross attentive loss and online knowledge distillation -```bash -python train.py ${MANIFEST_ROOT} \ - --save-dir ${save_dir} \ - --num-workers 8 \ - --task speech_text_joint_to_text \ - --arch dualinputs2ttransformer_m \ - --user-dir examples/speech_text_joint_to_text \ - --max-epoch 100 --update-mix-data \ - --optimizer adam --lr-scheduler inverse_sqrt \ - --lr 0.002 --update-freq 4 --clip-norm 10.0 \ - --criterion guided_label_smoothed_cross_entropy_with_accuracy \ - --guide-alpha 0.8 --disable-text-guide-update-num 5000 \ - --label-smoothing 0.1 --max-tokens 10000 --max-tokens-text 10000 \ - --max-positions-text 400 --seed 2 --speech-encoder-layers 12 \ - --text-encoder-layers 6 --encoder-shared-layers 6 --decoder-layers 6 \ - --dropout 0.1 --warmup-updates 20000 --attentive-cost-regularization 0.02 \ - --text-sample-ratio 0.25 --parallel-text-data ${parallel_text_data} \ - --text-input-cost-ratio 0.5 --enc-grad-mult 2.0 --add-speech-eos \ - --log-format json --langpairs en-de --noise-token '"'"'▁NOISE'"'"' \ - --mask-text-ratio 0.0 --max-tokens-valid 20000 --ddp-backend no_c10d \ - --log-interval 100 --data-buffer-size 50 --config-yaml config.yaml \ - --load-pretrain-speech-encoder ${pretrain_encoder} \ - --load-pretrain-decoder ${pretrain_nmt} \ - --load-pretrain-text-encoder-last ${pretrain_nmt} \ - --keep-last-epochs 10 -``` - -## Evaluation -```bash -python ./fairseq_cli/generate.py \ - ${MANIFEST_ROOT} \ - --task speech_text_joint_to_text \ - --max-tokens 25000 \ - --nbest 1 \ - --results-path ${infer_results} \ - --batch-size 512 \ - --path ${model} \ - --gen-subset tst-COMMON_st \ - --config-yaml config.yaml \ - --scoring sacrebleu \ - --beam 5 --lenpen 1.0 \ - --user-dir examples/speech_text_joint_to_text \ - --load-speech-only -``` - -## Results (Joint training with initialization + CAR + online KD) -|Direction|En-De | En-Es | En-Fr | -|---|---|---|---| -|BLEU|27.4| 31.2 | 37.6 | -|checkpoint | [link](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_de/checkpoint_ave_10.pt) 
|[link](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_es/checkpoint_ave_10.pt)|[link](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/must_c/en_fr/checkpoint_ave_10.pt)| diff --git a/kosmos-g/fairseq/examples/speech_text_joint_to_text/docs/iwslt2021.md b/kosmos-g/fairseq/examples/speech_text_joint_to_text/docs/iwslt2021.md deleted file mode 100644 index 0af0fbff1..000000000 --- a/kosmos-g/fairseq/examples/speech_text_joint_to_text/docs/iwslt2021.md +++ /dev/null @@ -1,76 +0,0 @@ -[[Back]](..) - -# Joint Speech Text Training for the 2021 IWSLT multilingual speech translation - -This directory contains the code from paper ["FST: the FAIR Speech Translation System for the IWSLT21 Multilingual Shared Task"](https://arxiv.org/pdf/2107.06959.pdf). - -## Prepare Data -#### Download files -- Sentence piece model [spm.model](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/iwslt/iwslt_data/spm.model) -- Dictionary [tgt_dict.txt](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/iwslt/iwslt_data/dict.txt) -- Config [config.yaml](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/iwslt/iwslt_data/config.yaml) - -#### Prepare -- Please follow the data preparation in [speech-to-text](https://github.com/pytorch/fairseq/blob/main/examples/speech_to_text/docs/mtedx_example.md) with option "--use-audio-input" for raw audio tsv files. -- Prepare tsv files with phoneme based source text (under column 'src_text') as [MuST-C](ende-mustc.md) example. - - -## Training - -#### Download pretrained models -- [Pretrained mbart model](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/iwslt/iwslt_data/mbart.pt) -- [Pretrained w2v model](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/iwslt/iwslt_data/xlsr_53_56k.pt) - - -#### Training scripts - -```bash -python train.py ${MANIFEST_ROOT} \ - --save-dir ${save_dir} \ - --user-dir examples/speech_text_joint_to_text \ - --train-subset train_es_en_tedx,train_es_es_tedx,train_fr_en_tedx,train_fr_es_tedx,train_fr_fr_tedx,train_it_it_tedx,train_pt_en_tedx,train_pt_pt_tedx \ - --valid-subset valid_es_en_tedx,valid_es_es_tedx,valid_es_fr_tedx,valid_es_it_tedx,valid_es_pt_tedx,valid_fr_en_tedx,valid_fr_es_tedx,valid_fr_fr_tedx,valid_fr_pt_tedx,valid_it_en_tedx,valid_it_es_tedx,valid_it_it_tedx,valid_pt_en_tedx,valid_pt_es_tedx,valid_pt_pt_tedx \ - --config-yaml config.yaml --ddp-backend no_c10d \ - --num-workers 2 --task speech_text_joint_to_text \ - --criterion guided_label_smoothed_cross_entropy_with_accuracy \ - --label-smoothing 0.3 --guide-alpha 0.8 \ - --disable-text-guide-update-num 5000 --arch dualinputxmtransformer_base \ - --max-tokens 500000 --max-sentences 3 --max-tokens-valid 800000 \ - --max-source-positions 800000 --enc-grad-mult 2.0 \ - --attentive-cost-regularization 0.02 --optimizer adam \ - --clip-norm 1.0 --log-format simple --log-interval 200 \ - --keep-last-epochs 5 --seed 1 \ - --w2v-path ${w2v_path} \ - --load-pretrained-mbart-from ${mbart_path} \ - --max-update 1000000 --update-freq 4 \ - --skip-invalid-size-inputs-valid-test \ - --skip-encoder-projection --save-interval 1 \ - --attention-dropout 0.3 --mbart-dropout 0.3 \ - --finetune-w2v-params all --finetune-mbart-decoder-params all \ - --finetune-mbart-encoder-params all --stack-w2v-mbart-encoder \ - --drop-w2v-layers 12 --normalize \ - --lr 5e-05 --lr-scheduler inverse_sqrt --warmup-updates 5000 -``` - -## Evaluation -```bash -python ./fairseq_cli/generate.py - ${MANIFEST_ROOT} \ - --task speech_text_joint_to_text \ - --user-dir 
./examples/speech_text_joint_to_text \ - --load-speech-only --gen-subset test_es_en_tedx \ - --path ${model} \ - --max-source-positions 800000 \ - --skip-invalid-size-inputs-valid-test \ - --config-yaml config.yaml \ - --infer-target-lang en \ - --max-tokens 800000 \ - --beam 5 \ - --results-path ${RESULTS_DIR} \ - --scoring sacrebleu -``` -The trained model can be downloaded [here](https://dl.fbaipublicfiles.com/joint_speech_text_4_s2t/iwslt/iwslt_data/checkpoint17.pt) - -|direction|es_en|fr_en|pt_en|it_en|fr_es|pt_es|it_es|es_es|fr_fr|pt_pt|it_it| -|---|---|---|---|---|---|---|---|---|---|---|---| -|BLEU|31.62|36.93|35.07|27.12|38.87|35.57|34.13|74.59|74.64|70.84|69.76| diff --git a/kosmos-g/fairseq/examples/speech_text_joint_to_text/models/__init__.py b/kosmos-g/fairseq/examples/speech_text_joint_to_text/models/__init__.py deleted file mode 100644 index 5fc5d9e21..000000000 --- a/kosmos-g/fairseq/examples/speech_text_joint_to_text/models/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - diff --git a/kosmos-g/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputtransformer.py b/kosmos-g/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputtransformer.py deleted file mode 100644 index c4ec41bda..000000000 --- a/kosmos-g/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputtransformer.py +++ /dev/null @@ -1,1093 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from collections import namedtuple - -import torch -import torch.nn as nn -from fairseq import checkpoint_utils -from fairseq import utils -from fairseq.models import ( - FairseqEncoder, - FairseqDecoder, - FairseqEncoderDecoderModel, - register_model, - register_model_architecture, -) -from fairseq.models.fairseq_encoder import EncoderOut -from fairseq.models.speech_to_text import ( - TransformerDecoder, - S2TTransformerEncoder, -) -from fairseq.models.transformer import TransformerEncoder -from fairseq.modules import ( - TransformerEncoderLayer, - GradMultiply, - LayerNorm, -) - -logger = logging.getLogger(__name__) - - -class SpeechEoSEncoder(FairseqEncoder): - def __init__(self, encoder, eos_num, feat_dim, adapter_type="None", adapter_dim=0): - super().__init__(None) - self.encoder = encoder - self.eos_num = eos_num # downsampling rate for speech input feature - self.eos_emb = ( - nn.Parameter(torch.zeros(1, feat_dim), requires_grad=True) - if eos_num > 0 - else None - ) - self.adapter = self.add_adapter(adapter_type, adapter_dim) - - def add_adapter(self, adapter_type, adapter_dim): - def _make_identity(linear, eps=1e-5): - assert isinstance(linear, nn.Linear) - linear.weight.data.mul_(eps) - linear.weight.data.fill_diagonal_(1.0) - if linear.bias is not None: - linear.bias.data.mul_(eps) - - adapter = None - if adapter_type == "Linear": - assert adapter_dim > 0 - adapter = nn.Sequential( - nn.Linear(adapter_dim, adapter_dim), LayerNorm(adapter_dim) - ) - # initialize the adapter as identity matrix first - _make_identity(adapter[0]) - - elif adapter_type == "MLP": - assert adapter_dim > 0 - # assume the model is pre-norm model - adapter = nn.Sequential( - nn.Linear(adapter_dim, 2 * adapter_dim), - nn.ReLU(), - nn.Linear(2 * adapter_dim, 
adapter_dim), - LayerNorm(adapter_dim), - ) - _make_identity(adapter[0]) - _make_identity(adapter[2]) - return adapter - - def add_eos(self, src_tokens, src_lengths): - bsz, max_seq_len, fdim = src_tokens.size() - if self.eos_num > 0: - src_token_eos = torch.zeros( - [bsz, max_seq_len + self.eos_num, fdim], - dtype=src_tokens.dtype, - device=src_tokens.device, - ) - src_token_eos[:, :max_seq_len] = src_tokens - for bi in range(bsz): - src_token_eos[bi][ - src_lengths[bi] : src_lengths[bi] + self.eos_num - ] = self.eos_emb.expand(self.eos_num, fdim) - src_lengths = src_lengths + self.eos_num - src_tokens = src_token_eos - return src_tokens, src_lengths - - def apply_adapter(self, enc_out): - if self.adapter is None: - return enc_out - rst = self.adapter(enc_out.encoder_out) - if enc_out.encoder_padding_mask is not None: - rst.masked_fill_( - enc_out.encoder_padding_mask.transpose(0, 1).unsqueeze(-1), 0 - ) - return EncoderOut( - encoder_out=rst, - encoder_padding_mask=enc_out.encoder_padding_mask, - encoder_embedding=enc_out.encoder_embedding, - encoder_states=enc_out.encoder_states, - src_tokens=enc_out.src_tokens, - src_lengths=enc_out.src_lengths, - ) - - def forward(self, src_tokens, src_lengths=None, return_all_hiddens=False, **kwargs): - """ - src_tokens: padded tensor (B, T, C * feat) - src_lengths: tensor of original lengths of input utterances (B,) - """ - src_tokens, src_lengths = self.add_eos(src_tokens, src_lengths) - enc_out = self.encoder(src_tokens, src_lengths, return_all_hiddens) - enc_out = self.apply_adapter(enc_out) - return enc_out - - def reorder_encoder_out(self, encoder_out, new_order): - return self.encoder.reorder_encoder_out(encoder_out, new_order) - - -class DualInputEncoder(FairseqEncoder): - def __init__( - self, - args, - spch_encoder, - text_encoder, - dictionary, - cross_attentive_loss_before_last_layer=-1, - ): - super().__init__(dictionary) - - self.spch_encoder = spch_encoder - self.text_encoder = text_encoder - self.enc_grad_mult = args.enc_grad_mult - self.cross_attentive_loss_before_last_layer = ( - cross_attentive_loss_before_last_layer - ) - self.use_cross_attentive_loss = ( - False if cross_attentive_loss_before_last_layer <= -1 else True - ) - self.enc2_along_grad_mult = args.enc2_along_grad_mult - - @classmethod - def set_shared_layer(cls, share_level, src_layer, tgt_layer): - """ - share parameters from tgt_layer to src_layer - share_level: - 0: share everything - 1: share everything but different model - 2: share weight but not bias, layernorm - """ - if share_level == 0: - return tgt_layer - if isinstance(src_layer, nn.Linear): - return tgt_layer - if isinstance(src_layer, TransformerEncoderLayer): - assert src_layer.embed_dim == tgt_layer.embed_dim - assert src_layer.normalize_before == tgt_layer.normalize_before - if share_level == 1: - src_layer.fc1 = tgt_layer.fc1 - src_layer.fc2 = tgt_layer.fc2 - src_layer.self_attn = tgt_layer.self_attn - src_layer.final_layer_norm = tgt_layer.final_layer_norm - src_layer.self_attn_layer_norm = tgt_layer.self_attn_layer_norm - src_layer.layernorm_embedding = tgt_layer.layernorm_embedding - else: - src_layer.fc1.weight = tgt_layer.fc1.weight - src_layer.fc2.weight = tgt_layer.fc2.weight - src_layer.self_attn.k_proj.weight = tgt_layer.self_attn.k_proj.weight - src_layer.self_attn.v_proj.weight = tgt_layer.self_attn.v_proj.weight - src_layer.self_attn.q_proj.weight = tgt_layer.self_attn.q_proj.weight - src_layer.self_attn.out_proj.weight = ( - tgt_layer.self_attn.out_proj.weight - ) - else: - if share_level 
== 1: - return tgt_layer - return src_layer - - @classmethod - def build_spch_encoder(cls, args): - cfg = { - "input_feat_per_channel": args.input_feat_per_channel, - "input_channels": args.input_channels, - "conv_kernel_sizes": args.conv_kernel_sizes, - "conv_channels": args.conv_channels, - "encoder_embed_dim": args.encoder_embed_dim, - "encoder_ffn_embed_dim": args.encoder_ffn_embed_dim, - "encoder_layers": args.speech_encoder_layers, - "encoder_layerdrop": args.encoder_layerdrop, - "encoder_attention_heads": args.encoder_attention_heads, - "max_source_positions": args.max_source_positions, - "dropout": args.dropout, - "encoder_normalize_before": args.encoder_normalize_before, - "activation_dropout": args.activation_dropout, - "attention_dropout": args.attention_dropout, - "activation_fn": args.activation_fn, - "layernorm_embedding": args.layernorm_embedding, - "no_token_positional_embeddings": args.no_token_positional_embeddings, - "no_scale_embedding": args.no_scale_embedding, - "quant_noise_pq": args.quant_noise_pq, - "encoder_freezing_updates": 0, - } - model_args = namedtuple("args", cfg.keys())(*cfg.values()) - spch_encoder = S2TTransformerEncoder(model_args) - if args.add_speech_eos: - spch_encoder = SpeechEoSEncoder( - spch_encoder, - 2 * len(args.conv_kernel_sizes.split(",")), - args.input_feat_per_channel, - adapter_type=getattr(args, "speech_encoder_adapter_type", "None"), - adapter_dim=args.encoder_embed_dim, - ) - return spch_encoder - - @classmethod - def build_text_encoder(cls, args, src_dictionary, spch_encoder): - if args.encoder_shared_layers > 0: - mx_shared_layers = ( - args.speech_encoder_layers - if args.speech_encoder_layers < args.text_encoder_layers - else args.text_encoder_layers - ) - args.encoder_shared_layers = ( - args.encoder_shared_layers - if args.encoder_shared_layers <= mx_shared_layers - else mx_shared_layers - ) - cfg = { - "encoder_embed_dim": args.encoder_text_embed_dim, - "encoder_ffn_embed_dim": args.encoder_ffn_embed_dim, - "encoder_layers": args.text_encoder_layers, - "encoder_layerdrop": args.encoder_layerdrop, - "encoder_attention_heads": args.encoder_attention_heads, - "encoder_learned_pos": args.encoder_learned_pos, - "max_source_positions": args.max_source_positions, - "dropout": args.dropout, - "encoder_normalize_before": args.encoder_normalize_before, - "activation_dropout": args.activation_dropout, - "attention_dropout": args.attention_dropout, - "activation_fn": args.activation_fn, - "adaptive_input": args.adaptive_input, - "no_token_positional_embeddings": args.no_token_positional_embeddings, - "no_scale_embedding": args.no_scale_embedding, - "quant_noise_pq": args.quant_noise_pq, - } - model_args = namedtuple("args", cfg.keys())(*cfg.values()) - enc_emb = nn.Embedding( - len(src_dictionary), model_args.encoder_embed_dim, src_dictionary.pad() - ) - text_encoder = TransformerEncoder(model_args, src_dictionary, enc_emb) - if args.add_speech_eos: - spch_encoder = spch_encoder.encoder - if args.encoder_shared_layers > 0: - text_encoder.layer_norm = cls.set_shared_layer( - args.encoder_shared_layer_level, - text_encoder.layer_norm, - spch_encoder.layer_norm, - ) - for i, ly in enumerate( - spch_encoder.transformer_layers[-args.encoder_shared_layers :] - ): - ly_id = i + args.text_encoder_layers - args.encoder_shared_layers - if not isinstance(text_encoder.layers[ly_id], type(ly)): - if text_encoder.layers[ly_id]._get_name() not in ('TransformerEncoderLayerBase', 'TransformerEncoderLayer'): - raise ValueError("The shared layers are expected 
from the same class") - text_encoder.layers[ly_id] = cls.set_shared_layer( - args.encoder_shared_layer_level, - text_encoder.layers[ly_id], - ly, - ) - return text_encoder - - def mult_rst_grad(self, rst, ratio): - assert isinstance(rst, dict) # instead of EncoderOut - assert len(rst["encoder_out"]) == 1 - rst["encoder_out"][0] = GradMultiply.apply(rst["encoder_out"][0], ratio) - return rst - - def process_attentive_loss_states(self, rst, interstates): - assert isinstance(rst, dict) # instead of EncoderOut - rst["encoder_states"] = interstates - return rst - - def forward( - self, - src_tokens, - src_lengths=None, - src_txt_tokens=None, - src_txt_lengths=None, - **kwargs - ): - """ - Args: - src_tokens: padded tensor (B, T, C * feat) - src_lengths: tensor of original lengths of input utterances (speech) (B,) - src_txt_tokens: padded tensor (B, T) - src_txt_lengths: tensor of original lengths of input utterances (text) (B,) - """ - # src_tokens only: inference - # src_tokens, src_lengths: speech only training - # src_txt_tokens, src_txt_lengths: text only training - # all valid: speech + text training - - if src_tokens is None and src_txt_tokens is None: - raise ValueError( - "src_tokens and src_txt_tokens cannot be None at the same time" - ) - ret1 = None - ret2 = None - return_all_hiddens = False - if src_tokens is not None: - if ( - self.use_cross_attentive_loss and src_txt_tokens is not None - ): # remove self.training so we can get attn score during validation step - return_all_hiddens = True - ret1 = self.spch_encoder( - src_tokens, src_lengths, return_all_hiddens=return_all_hiddens - ) - - if self.use_cross_attentive_loss and src_txt_tokens is not None: - assert self.cross_attentive_loss_before_last_layer < len( - ret1["encoder_states"] - ) - ret1 = self.process_attentive_loss_states( - ret1, - ret1["encoder_states"][ - -self.cross_attentive_loss_before_last_layer - 1 - ], - ) - - if src_txt_tokens is not None: - ret2 = self.text_encoder( - src_txt_tokens, src_txt_lengths, return_all_hiddens=return_all_hiddens - ) - if return_all_hiddens: - if self.cross_attentive_loss_before_last_layer == len( - self.text_encoder.layers - ): - text_embedding, _ = self.text_encoder.forward_embedding( - src_txt_tokens - ) - text_embedding = text_embedding.transpose(0, 1) - ret2 = self.process_attentive_loss_states(ret2, text_embedding) - else: - assert self.cross_attentive_loss_before_last_layer < len( - self.text_encoder.layers - ) - ret2 = self.process_attentive_loss_states( - ret2, - ret2["encoder_states"][ - -self.cross_attentive_loss_before_last_layer - 1 - ], - ) - - def merge_output(rst1, rst2): - if rst1 is None: - if not (self.enc2_along_grad_mult == 1.0 or self.training): - rst2 = self.mult_rst_grad(rst2, self.enc2_along_grad_mult) - return rst2 - if rst2 is None: - return rst1 - if self.enc_grad_mult != 1.0 and self.training: - rst1 = self.mult_rst_grad(rst1, self.enc_grad_mult) - rst2 = self.mult_rst_grad(rst2, self.enc_grad_mult) - rst = (rst1, rst2) - return rst - - return merge_output(ret1, ret2) - - def reorder_encoder_out(self, encoder_out, new_order): - assert self.training is False # used for inference only - return self.spch_encoder.reorder_encoder_out(encoder_out, new_order) - - -# TransformerMultiInputDecoder: take one or two encoder inputs -class TransformerMultiInputDecoder(FairseqDecoder): - def __init__( - self, - dictionary, - spch_decoder, - text_decoder, - compute_cross_attentive_loss=False, - cross_attentive_loss_with_norm=True, - cross_attentive_loss_reverse=False, - ): 
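# One decoder per encoder branch: spch_decoder consumes the speech
# encoder's output and text_decoder the text encoder's output (their
# parameters can be partially shared via share_spchdecoder below). When
# compute_cross_attentive_loss is set, forward() additionally returns the
# cross-attentive regularization cost between the two encoder branches.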
- - super().__init__(dictionary) - self.spch_decoder = spch_decoder - self.text_decoder = text_decoder - self.compute_cross_attentive_loss = compute_cross_attentive_loss - self.cross_attentive_loss_with_norm = cross_attentive_loss_with_norm - self.cross_attentive_loss_reverse = cross_attentive_loss_reverse - - @classmethod - def share_spchdecoder(cls, task_args, text_decoder, spch_decoder): - if task_args.decoder_shared_layer_level == 0: - return text_decoder - assert text_decoder.embed_tokens == spch_decoder.embed_tokens - spch_decoder.project_in_dim = text_decoder.project_in_dim - spch_decoder.embed_positions = text_decoder.embed_positions - spch_decoder.layernorm_embedding = text_decoder.layernorm_embedding - spch_decoder.project_out_dim = text_decoder.project_out_dim - spch_decoder.adaptive_softmax = text_decoder.adaptive_softmax - if task_args.decoder_shared_layer_level == 1: - spch_decoder.output_projection = text_decoder.output_projection - spch_decoder.layer_norm = text_decoder.layer_norm - else: # 2 - spch_decoder.output_projection.weight = ( - text_decoder.output_projection.weight - ) - for i, ly in enumerate(text_decoder.layers): - sly = spch_decoder.layers[i] - sly.self_attn = ly.self_attn - sly.self_attn_layer_norm = ly.self_attn_layer_norm - # sly.encoder_attn = ly.encoder_attn - if ( - task_args.decoder_shared_layer_level == 1 - ): # share everything, but under different models - sly.encoder_attn = ly.encoder_attn - sly.encoder_attn_layer_norm = ly.encoder_attn_layer_norm - sly.fc1 = ly.fc1 - sly.fc2 = ly.fc2 - sly.final_layer_norm = ly.final_layer_norm - else: # task_args.decoder_shared_layer_level == 2: #separated encoder_attn_layer_norm and bias - sly.encoder_attn.k_proj.weight = ly.encoder_attn.k_proj.weight - sly.encoder_attn.v_proj.weight = ly.encoder_attn.v_proj.weight - sly.encoder_attn.q_proj.weight = ly.encoder_attn.q_proj.weight - sly.encoder_attn.out_proj.weight = ly.encoder_attn.out_proj.weight - sly.fc1.weight = ly.fc1.weight - sly.fc2.weight = ly.fc2.weight - - return spch_decoder - - def cross_attentive_loss( - self, teacher_states, student_states, teacher_masking, student_masking, eps=1e-6 - ): - x = teacher_states.transpose(0, 1) # from T X B X D to B X T X D - y = student_states.transpose(0, 1) - if self.cross_attentive_loss_with_norm: - x = x / (x.norm(dim=2, keepdim=True) + eps) - y = y / (y.norm(dim=2, keepdim=True) + eps) - dim = x.size(-1) - # lengths: batch X seqLen - sim_scores_xy = torch.bmm(x, y.transpose(1, 2)) # batch X lenx X leny ] - if y.dtype == torch.float16: - sim_scores_xy = sim_scores_xy.float() - y = y.float() - x = x.float() - if teacher_masking != []: - assert len(teacher_masking) == 1 - sim_scores_xy = sim_scores_xy.masked_fill( - teacher_masking[0].unsqueeze(-1), float("-inf") - ) - if student_masking != []: - sim_scores_xy = sim_scores_xy.masked_fill( - student_masking[0].unsqueeze(1), float("-inf") - ) - # do masking - y_weights = utils.softmax(sim_scores_xy, dim=-1) - if teacher_masking != []: - y_weights = y_weights.masked_fill(teacher_masking[0].unsqueeze(-1), 0) - x_reconstruct_from_y = torch.bmm(y_weights, y) - - sim_scores_xx = torch.bmm(x, x.transpose(1, 2)) # batch X lenx X lenx ] - x_weights = utils.softmax(sim_scores_xx, dim=-1) - if teacher_masking != []: - x_weights = x_weights.masked_fill(teacher_masking[0].unsqueeze(-1), 0) - - # no gradient for teacher state - x_reconstruct_from_x = torch.bmm(x_weights, x).detach() - cost = (x_reconstruct_from_x - x_reconstruct_from_y).norm(dim=2) - if teacher_masking != []: - cost 
= cost.masked_fill(teacher_masking[0], 0)
-
- if not self.cross_attentive_loss_with_norm:
- cost = cost / dim
- return cost
-
- def forward(
- self,
- prev_output_tokens,
- encoder_out,
- incremental_state=None,
- has_txt_input=False,
- **kwargs
- ):
- """
- Args:
- prev_output_tokens (LongTensor): previous decoder outputs of shape
- `(batch, tgt_len)`, for input feeding/teacher forcing. If there are
- two or more inputs during training, they will share the same prev_output_tokens
- encoder_out (tuple[Tensor]): output from the encoder, used for
- encoder-side attention. It will be a tuple if there are multiple
- inputs, but a single encoder output if there is only one input
- incremental_state ([dict]): dictionary used for storing state during
- :ref:`Incremental decoding`. It is only valid for inference, and only
- with a single input
- Returns:
- tuple:
- - the last decoder layer's output of shape `(batch, tgt_len,
- vocab)`. If there are N inputs, batch will be N times bigger than with a single input
- - the last decoder layer's attention weights of shape `(batch,
- tgt_len, src_len)`
- """
- assert not isinstance(encoder_out, EncoderOut)
- if isinstance(encoder_out, tuple): # training with multiple inputs
- rst = []
- assert len(encoder_out) == 2
- for i, eo in enumerate(encoder_out):
- assert incremental_state is None
- if i == 0:
- rst.append(
- self.spch_decoder(prev_output_tokens, eo, incremental_state)
- )
- else:
- rst.append(
- self.text_decoder(prev_output_tokens, eo, incremental_state)
- )
- dec_out = torch.cat([r[0] for r in rst], dim=0)
- attn_cost = None
- if self.compute_cross_attentive_loss:
- assert isinstance(encoder_out[0], dict)
- if self.cross_attentive_loss_reverse:
- attn_cost = self.cross_attentive_loss(
- teacher_states=encoder_out[1]["encoder_states"], # text_states
- student_states=encoder_out[0]["encoder_states"], # spch_states
- teacher_masking=encoder_out[1]["encoder_padding_mask"],
- student_masking=encoder_out[0]["encoder_padding_mask"],
- )
- else:
- attn_cost = self.cross_attentive_loss(
- teacher_states=encoder_out[0]["encoder_states"], # spch_states
- student_states=encoder_out[1]["encoder_states"], # text_states
- teacher_masking=encoder_out[0]["encoder_padding_mask"],
- student_masking=encoder_out[1]["encoder_padding_mask"],
- )
-
- return (dec_out, {"attn_cost": attn_cost})
- else: # inference or training with one input
- if has_txt_input:
- return self.text_decoder(
- prev_output_tokens, encoder_out, incremental_state
- )
- return self.spch_decoder(prev_output_tokens, encoder_out, incremental_state)
-
-
-# Note:
-# dual input transformer:
-# encoder: S2TTransformerEncoder for speech + TransformerEncoder for text
-# decoder: TransformerDecoder for text
-@register_model("dual_input_s2t_transformer")
-class DualInputS2TTransformerModel(FairseqEncoderDecoderModel):
- def __init__(self, encoder, decoder):
- super().__init__(encoder, decoder)
- self.num_updates = 0
-
- def max_positions(self):
- return None # it is provided in the task
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- # encoder 1: S2TTransformerEncoder for speech
- parser.add_argument(
- "--conv-kernel-sizes",
- type=str,
- metavar="N",
- help="kernel sizes of Conv1d subsampling layers",
- )
- parser.add_argument(
- "--conv-channels",
- type=int,
- metavar="N",
- help="# of channels in Conv1d subsampling layers",
- )
- parser.add_argument(
- "--enc-output-dim",
- type=int,
- metavar="N",
- help="""
- encoder output dimension, can be None.
If specified, projecting the - transformer output to the specified dimension""", - ) - # standard Transformer - parser.add_argument( - "--activation-fn", - type=str, - default="relu", - choices=utils.get_available_activation_fns(), - help="activation function to use", - ) - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--activation-dropout", - "--relu-dropout", - type=float, - metavar="D", - help="dropout probability after activation in FFN.", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-text-embed-dim", - type=int, - metavar="N", - help="encoder text embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads", - ) - parser.add_argument( - "--layernorm-embedding", - action="store_true", - help="add layernorm to embedding", - ) - parser.add_argument( - "--no-scale-embedding", - action="store_true", - help="if True, dont scale embeddings", - ) - # non-standard transformer parameters - parser.add_argument( - "--speech-encoder-layers", - type=int, - metavar="N", - help="num speech encoder layers", - ) - parser.add_argument( - "--text-encoder-layers", - type=int, - metavar="N", - help="num text encoder layers", - ) - parser.add_argument( - "--encoder-shared-layers", - type=int, - metavar="N", - help="num shared encoder layers", - ) - parser.add_argument( - "--encoder-shared-layer-level", - type=int, - metavar="N", - default=0, - choices=[0, 1, 2], - help="share layer level 0: all share 1: all share with separate model 2: share weight but not bias and layernorm", - ) - - parser.add_argument( - "--decoder-shared-layer-level", - default=0, - choices=[0, 1, 2], - type=int, - metavar="N", - help="0: share everything; 1: share everything with different model 2: no share layer_norm and bias", - ) - ### - parser.add_argument( - "--text-input-cost-ratio", - type=float, - default=1.0, - metavar="V", - help="text input cost ratio relative to speech input cost", - ) - parser.add_argument( - "--init-scale", - type=float, - default=1.0, - metavar="V", - help="scale the initial weight by given factor", - ) - parser.add_argument( - "--enc-grad-mult", - type=float, - metavar="V", - default=1.0, - help="multiply enc1 and enc2 gradient by V", - ) - parser.add_argument( - "--enc2-along-grad-mult", - type=float, - metavar="V", - default=1.0, - help="multiply enc2 gradient by V if only enc2 is used", - ) - parser.add_argument( - "--load-pretrain-encoder", - type=str, - default="", - metavar="EXPR", - help=""" path to the pretrained encoder """, - ) - parser.add_argument( - "--load-pretrain-speech-encoder", - type=str, - default="", - 
metavar="EXPR", - help=""" path to the pretrained speech encoder """, - ) - parser.add_argument( - "--load-pretrain-text-encoder", - type=str, - default="", - metavar="EXPR", - help=""" path to the pretrained text encoder """, - ) - parser.add_argument( - "--load-pretrain-text-encoder-last", - type=str, - default="", - metavar="EXPR", - help=""" path to the pretrained text encoder """, - ) - parser.add_argument( - "--load-pretrain-decoder", - type=str, - metavar="EXPR", - default="", - help=""" path to the pretrained encoder """, - ) - parser.add_argument( - "--add-speech-eos", - action="store_true", - help="add eos token at the end of input feature", - ) - parser.add_argument( - "--speech-encoder-adapter-type", - type=str, - metavar="EXPR", - default="None", - choices=["None", "Linear", "MLP"], - help="add speech encoder adapter", - ) - - @classmethod - def build_encoder(cls, args, task): - spch_encoder = DualInputEncoder.build_spch_encoder(args) - text_encoder = DualInputEncoder.build_text_encoder( - args, task.src_dict, spch_encoder - ) - cross_attentive_loss_before_last_layer = ( - 0 if getattr(args, "attentive_cost_regularization", 0.0) > 0.0 else -1 - ) - encoder = DualInputEncoder( - args, - spch_encoder, - text_encoder, - task.src_dict, - cross_attentive_loss_before_last_layer, - ) - if args.init_scale != 1.0: - with torch.no_grad(): - for param in encoder.parameters(): - param.data.mul_(args.init_scale) - if args.load_pretrain_text_encoder != "": - checkpoint_utils.load_pretrained_component_from_model( - text_encoder, args.load_pretrain_text_encoder - ) - if args.load_pretrain_speech_encoder != "": - if hasattr(spch_encoder, "encoder"): - checkpoint_utils.load_pretrained_component_from_model( - spch_encoder.encoder, args.load_pretrain_speech_encoder - ) - else: - checkpoint_utils.load_pretrained_component_from_model( - spch_encoder, args.load_pretrain_speech_encoder - ) - if ( - args.load_pretrain_text_encoder_last != "" - ): # if share encoder, speech encoder parameters will be used. 
- # It provides a chance to use pre-trained mt encoder instead - checkpoint_utils.load_pretrained_component_from_model( - text_encoder, args.load_pretrain_text_encoder_last - ) - - if args.load_pretrain_encoder != "": - checkpoint_utils.load_pretrained_component_from_model( - encoder, args.load_pretrain_encoder - ) - return encoder - - @classmethod - def build_decoder(cls, args, task): - dec_cfg = { - "decoder_layerdrop": args.decoder_layerdrop, - "share_decoder_input_output_embed": args.share_decoder_input_output_embed, - "decoder_embed_dim": args.decoder_embed_dim, - "max_target_positions": args.max_target_positions, - "dropout": args.dropout, - "encoder_learned_pos": args.encoder_learned_pos, - "decoder_learned_pos": args.decoder_learned_pos, - "layernorm_embedding": args.layernorm_embedding, - "decoder_normalize_before": args.decoder_normalize_before, - "activation_dropout": args.activation_dropout, - "attention_dropout": args.attention_dropout, - "decoder_ffn_embed_dim": args.decoder_ffn_embed_dim, - "decoder_layers": args.decoder_layers, - "decoder_attention_heads": args.decoder_attention_heads, - "decoder_output_dim": args.decoder_embed_dim, - "no_scale_embedding": args.no_scale_embedding, - "adaptive_input": args.adaptive_input, - "quant_noise_pq": args.quant_noise_pq, - "adaptive_softmax_cutoff": args.adaptive_softmax_cutoff, - "tie_adaptive_weights": args.tie_adaptive_weights, - "no_token_positional_embeddings": args.no_token_positional_embeddings, - "encoder": {"embed_dim":args.encoder_embed_dim} - } - dec_cfg = namedtuple("args", dec_cfg.keys())(*dec_cfg.values()) - dec_emb = nn.Embedding( - len(task.target_dictionary), - args.decoder_embed_dim, - task.target_dictionary.pad(), - ) - compute_cross_attentive_loss = ( - True if getattr(args, "attentive_cost_regularization", 0.0) > 0.0 else False - ) - cross_attentive_loss_without_norm = getattr( - args, "attentive_cost_without_normalize", False - ) - cross_attentive_loss_reverse = ( - False # getattr(args, "attentive_cost_reverse", False) - ) - - text_decoder = TransformerDecoder(dec_cfg, task.target_dictionary, dec_emb) - spch_decoder = TransformerDecoder(dec_cfg, task.target_dictionary, dec_emb) - spch_decoder = TransformerMultiInputDecoder.share_spchdecoder( - args, text_decoder, spch_decoder - ) - decoder = TransformerMultiInputDecoder( - dictionary=task.target_dictionary, - spch_decoder=spch_decoder, - text_decoder=text_decoder, - compute_cross_attentive_loss=compute_cross_attentive_loss, - cross_attentive_loss_with_norm=True - if not cross_attentive_loss_without_norm - else False, - cross_attentive_loss_reverse=cross_attentive_loss_reverse, - ) - if args.init_scale != 1.0: - with torch.no_grad(): - for param in decoder.parameters(): - param.data.mul_(args.init_scale) - if args.load_pretrain_decoder != "": - try: - checkpoint_utils.load_pretrained_component_from_model( - decoder, args.load_pretrain_decoder - ) - except RuntimeError: - checkpoint_utils.load_pretrained_component_from_model( - decoder.text_decoder, args.load_pretrain_decoder - ) - if args.decoder_shared_layer_level > 0: - checkpoint_utils.load_pretrained_component_from_model( - decoder.spch_decoder, args.load_pretrain_decoder - ) - - return decoder - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure that all args are properly defaulted - # (in case there are any new ones) - dualinputs2ttransformer_base(args) - - encoder = cls.build_encoder(args, task) - decoder = cls.build_decoder(args, task) - return 
cls(encoder, decoder) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - # net_output['encoder_out'] is a (B, T, D) tensor - lprobs = super().get_normalized_probs(net_output, log_probs, sample) - lprobs.batch_first = True - return lprobs - - def set_num_updates(self, num_updates): - """Set the number of parameters updates.""" - super().set_num_updates(num_updates) - self.num_updates = num_updates - - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens, - use_encoder_outputs=False, - src_txt_tokens=None, - src_txt_lengths=None, - mode="sup_speech", - **kwargs - ): - """ - Run the forward pass for an encoder-decoder model. - - First feed a batch of source tokens through the encoder. Then, feed the - encoder output and previous decoder outputs (i.e., teacher forcing) to - the decoder to produce the next outputs:: - - encoder_out = self.encoder(src_tokens, src_lengths) - return self.decoder(prev_output_tokens, encoder_out) - - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - src_lengths (LongTensor): source sentence lengths of shape `(batch)` - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - mode = 'sup_speech' or 'text' - - Returns: - tuple: - - the decoder's output of shape `(batch, tgt_len, vocab)` - - a dictionary with any model-specific outputs - """ - if mode == "text": - assert src_txt_tokens is None - src_txt_tokens = src_tokens - src_txt_lengths = src_lengths - src_tokens = None - src_lengths = None - encoder_out = self.encoder( - src_tokens, - src_lengths=src_lengths, - src_txt_tokens=src_txt_tokens, - src_txt_lengths=src_txt_lengths, - **kwargs - ) - has_txt_input = True if src_txt_tokens is not None else False - decoder_out = self.decoder( - prev_output_tokens, - encoder_out=encoder_out, - has_txt_input=has_txt_input, - **kwargs - ) - if use_encoder_outputs: - return decoder_out, encoder_out - return decoder_out - - -@register_model_architecture( - "dual_input_s2t_transformer", "dualinputs2ttransformer_base" -) -def dualinputs2ttransformer_base(args): - args.encoder_freezing_updates = getattr(args, "encoder_freezing_updates", 0) - # Convolutional subsampler - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - args.conv_kernel_sizes = getattr(args, "conv_kernel_sizes", "5,5") - args.conv_channels = getattr(args, "conv_channels", 1024) - # Transformer - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_text_embed_dim = getattr( - args, "encoder_text_embed_dim", args.encoder_embed_dim - ) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.encoder_layerdrop = getattr(args, "encoder_layerdrop", 0) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 
args.dropout) - args.activation_dropout = getattr(args, "activation_dropout", args.dropout) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0) - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.layernorm_embedding = getattr(args, "layernorm_embedding", False) - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0) - - args.speech_encoder_layers = getattr(args, "speech_encoder_layers", 10) - args.text_encoder_layers = getattr(args, "text_encoder_layers", 6) - args.encoder_shared_layers = getattr(args, "encoder_shared_layers", 0) - args.decoder_layers = getattr(args, "decoder_layers", 6) - - args.add_speech_eos = getattr(args, "add_speech_eos", False) - - -@register_model_architecture("dual_input_s2t_transformer", "dualinputs2ttransformer_s") -def dualinputs2ttransformer_s(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 256 * 4) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.dropout = getattr(args, "dropout", 0.1) - args.speech_encoder_layers = getattr(args, "speech_encoder_layers", 7) - args.text_encoder_layers = getattr(args, "text_encoder_layers", 7) - args.decoder_layers = getattr(args, "decoder_layers", 7) - dualinputs2ttransformer_base(args) - - -@register_model_architecture("dual_input_s2t_transformer", "dualinputs2ttransformer_m") -def dualinputs2ttransformer_m(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 512 * 4) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.dropout = getattr(args, "dropout", 0.15) - args.speech_encoder_layers = getattr(args, "speech_encoder_layers", 10) - args.text_encoder_layers = getattr(args, "text_encoder_layers", 6) - args.decoder_layers = getattr(args, "decoder_layers", 6) - dualinputs2ttransformer_base(args) - - -@register_model_architecture("dual_input_s2t_transformer", "dualinputs2ttransformer_b") -def dualinputs2ttransformer_b(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 768) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 768 * 4) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 12) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 12) - args.dropout = getattr(args, "dropout", 0.15) - args.speech_encoder_layers = getattr(args, "speech_encoder_layers", 12) - args.text_encoder_layers = getattr(args, "text_encoder_layers", 6) - args.decoder_layers = getattr(args, "decoder_layers", 6) - dualinputs2ttransformer_base(args) - - 
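-# Note: each size variant above only pins the hyperparameters that differ and
-# then calls dualinputs2ttransformer_base, which fills in every remaining field
-# via getattr defaults. A hypothetical extra-small variant (illustrative only,
-# not part of this file) would follow the same pattern:
-#
-# @register_model_architecture("dual_input_s2t_transformer", "dualinputs2ttransformer_xs")
-# def dualinputs2ttransformer_xs(args):
-#     args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 128)
-#     args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 128 * 4)
-#     args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4)
-#     args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4)
-#     dualinputs2ttransformer_base(args)
-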
-@register_model_architecture("dual_input_s2t_transformer", "dualinputs2ttransformer_l") -def dualinputs2ttransformer_l(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024 * 4) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.2) - args.speech_encoder_layers = getattr(args, "speech_encoder_layers", 12) - args.text_encoder_layers = getattr(args, "text_encoder_layers", 6) - args.decoder_layers = getattr(args, "decoder_layers", 6) - dualinputs2ttransformer_base(args) diff --git a/kosmos-g/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputxmtransformer.py b/kosmos-g/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputxmtransformer.py deleted file mode 100644 index 7b4cbb0aa..000000000 --- a/kosmos-g/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputxmtransformer.py +++ /dev/null @@ -1,584 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import copy - -import torch.nn as nn -from fairseq import checkpoint_utils -from fairseq import utils -from fairseq.data.data_utils import lengths_to_padding_mask -from fairseq.models import ( - register_model, - register_model_architecture, - FairseqEncoder, -) -from fairseq.models.speech_to_text import Wav2VecEncoderWithAdaptor -from fairseq.models.speech_to_text.xm_transformer import ( - set_default_adaptor_args, - set_default_w2v_encoder_args, - need_finetuning -) -from fairseq.models.transformer import TransformerEncoder, TransformerDecoder -from fairseq.models.wav2vec import TransformerSentenceEncoderLayer -from fairseq.utils import safe_hasattr - -from .s2t_dualinputtransformer import ( - DualInputS2TTransformerModel, - TransformerMultiInputDecoder, - DualInputEncoder, -) - - -class TransformerSentenceEncoderLayerStd(TransformerSentenceEncoderLayer): - def __init__(self, sent_enc_layer): - super(TransformerSentenceEncoderLayer, self).__init__() - self.embedding_dim = sent_enc_layer.embedding_dim - self.dropout = sent_enc_layer.dropout - self.activation_dropout = sent_enc_layer.activation_dropout - - # Initialize blocks - self.activation_fn = sent_enc_layer.activation_fn - self.self_attn = sent_enc_layer.self_attn - - self.dropout1 = sent_enc_layer.dropout1 - self.dropout2 = sent_enc_layer.dropout2 - self.dropout3 = sent_enc_layer.dropout3 - - self.layer_norm_first = sent_enc_layer.layer_norm_first - - # layer norm associated with the self attention layer - self.self_attn_layer_norm = sent_enc_layer.self_attn_layer_norm - self.fc1 = sent_enc_layer.fc1 - self.fc2 = sent_enc_layer.fc2 - - # layer norm associated with the position wise feed-forward NN - self.final_layer_norm = sent_enc_layer.final_layer_norm - - def forward( - self, - x, - self_attn_mask=None, - self_attn_padding_mask=None, - need_weights=None, - att_args=None, - ): - x, attn = super().forward( - x, self_attn_mask, self_attn_padding_mask, need_weights, att_args - ) - return x - - -# TODO retire SharedEncoder -class SharedEncoder(FairseqEncoder): - def __init__(self, wav2vec_enc, mbart_enc, adaptor, shared_layers): - super().__init__(None) - self.w2v_encoder = wav2vec_enc - self.shared_layers = self.w2v_encoder.w2v_model.encoder.layers[-shared_layers:] - 
self.w2v_encoder.w2v_model.encoder.layers = ( - self.w2v_encoder.w2v_model.encoder.layers[:-shared_layers] - ) - self.adaptor = adaptor - if self.shared_layers[-1].layer_norm_first: - self.final_layer_norm = mbart_enc.layer_norm - else: - mbart_enc.layer_norm = None - self.final_layer_norm = None - shared_layer_from = len(mbart_enc.layers) - shared_layers - if shared_layer_from < 0: - shared_layer_from = 0 - for layer_id, layer in enumerate(self.shared_layers): - mbart_enc.layers[ - shared_layer_from + layer_id - ] = TransformerSentenceEncoderLayerStd(layer) - - def forward(self, src_tokens, src_lengths=None, **kwargs): - padding_mask = lengths_to_padding_mask(src_lengths) - if not padding_mask.any(): - padding_mask = None - - out = self.w2v_encoder.forward(src_tokens, padding_mask, tbc=True) - x = out["encoder_out"] - enc_padding_mask = None - if out["encoder_padding_mask"] is not None: - enc_padding_mask = out["encoder_padding_mask"].transpose( - 0, 1 - ) # T X B --> B X T - - x, enc_padding_mask = self.adaptor(x, enc_padding_mask) - for layer in self.shared_layers: - x, _ = layer(x, enc_padding_mask) - if self.final_layer_norm is not None: - x = self.final_layer_norm(x) - - return { - "encoder_out": [x], # T x B x C - "encoder_padding_mask": [enc_padding_mask] - if enc_padding_mask is not None - else [], # B x T - "encoder_embedding": [], # B x T x C - "encoder_states": [], # List[T x B x C] - "src_tokens": [], - "src_lengths": [], - } - - -class StackedWav2VecEncoderWithAdaptor(FairseqEncoder): - def __init__( - self, - wav2vec_enc, - mbart_enc_layers, - mbart_layer_norm, - adaptor, - drop_w2v_layers=0, - ): - super().__init__(None) - self.w2v_encoder = wav2vec_enc - self.adaptor = adaptor - self.mbart_encoder_layers = mbart_enc_layers - self.final_layer_norm = mbart_layer_norm - if drop_w2v_layers > 0: - self.w2v_encoder.w2v_model.encoder.layers = ( - self.w2v_encoder.w2v_model.encoder.layers[:-drop_w2v_layers] - ) - - def forward(self, src_tokens, src_lengths=None, return_all_hiddens=False, **kwargs): - padding_mask = lengths_to_padding_mask(src_lengths) - if not padding_mask.any(): - padding_mask = None - - out = self.w2v_encoder.forward(src_tokens, padding_mask, tbc=True) - x = out["encoder_out"] - enc_padding_mask = None - if out["padding_mask"] is not None: - enc_padding_mask = out["padding_mask"] # B X T - - x, enc_padding_mask = self.adaptor(x, enc_padding_mask) - encoder_states = [] - for layer in self.mbart_encoder_layers: - x = layer(x, enc_padding_mask) - if return_all_hiddens: - encoder_states.append(x) - if self.final_layer_norm is not None: - x = self.final_layer_norm(x) - - return { - "encoder_out": [x], # T x B x C - "encoder_padding_mask": [enc_padding_mask] - if enc_padding_mask is not None - else [], # B x T - "encoder_embedding": [], # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": [], - "src_lengths": [], - } - - def reorder_encoder_out(self, encoder_out, new_order): - new_encoder_out = ( - [] - if len(encoder_out["encoder_out"]) == 0 - else [x.index_select(1, new_order) for x in encoder_out["encoder_out"]] - ) - - new_encoder_padding_mask = ( - [] - if len(encoder_out["encoder_padding_mask"]) == 0 - else [ - x.index_select(0, new_order) - for x in encoder_out["encoder_padding_mask"] - ] - ) - - new_encoder_embedding = ( - [] - if len(encoder_out["encoder_embedding"]) == 0 - else [ - x.index_select(0, new_order) for x in encoder_out["encoder_embedding"] - ] - ) - - encoder_states = encoder_out["encoder_states"] - if 
len(encoder_states) > 0: - for idx, state in enumerate(encoder_states): - encoder_states[idx] = state.index_select(1, new_order) - - return { - "encoder_out": new_encoder_out, # T x B x C - "encoder_padding_mask": new_encoder_padding_mask, # B x T - "encoder_embedding": new_encoder_embedding, # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": [], # B x T - "src_lengths": [], # B x 1 - } - - -# Note: -# dual input transformer: -# encoder: wav2vec for speech + mbart encoder for text -# decoder: mbart decoder for text -@register_model("dual_input_xm_transformer") -class DualInputXMTransformerModel(DualInputS2TTransformerModel): - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # wav2vec encoder - Wav2VecEncoderWithAdaptor.add_args(parser) - # add_decoder_args(parser) - # mbart Transformer - parser.add_argument( - "--activation-fn", - type=str, - default="relu", - choices=utils.get_available_activation_fns(), - help="activation function to use", - ) - - parser.add_argument( - "--mbart-dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--mbart-attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--mbart-activation-dropout", - type=float, - metavar="D", - help="dropout probability after activation in FFN.", - ) - - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads", - ) - parser.add_argument( - "--decoder-normalize-before", - action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--layernorm-embedding", - action="store_true", - help="add layernorm to embedding", - ) - parser.add_argument( - "--no-scale-embedding", - action="store_true", - help="if True, dont scale embeddings", - ) - parser.add_argument( - "--load-pretrained-mbart-from", - type=str, - metavar="STR", - help="model to take text encoder decoder weights from (for initialization)", - ) - # parser.add_argument("--finetune-w2v-params", type=str, metavar="STR", - # help="comma-separated param strings to finetune.") - parser.add_argument( - "--finetune-mbart-decoder-params", - type=str, - metavar="STR", - help="comma-separated param strings to finetune.", - ) - parser.add_argument( - "--finetune-mbart-encoder-params", - type=str, - metavar="STR", - help="comma-separated param strings to finetune.", - ) - parser.add_argument( - 
"--skip-encoder-projection", - action="store_true", - help="skip the projection layer in encoder", - ) - - parser.add_argument( - "--enc-grad-mult", - type=float, - metavar="V", - default=1.0, - help="multiply enc1 and enc2 gradient by V", - ) - parser.add_argument( - "--enc2-along-grad-mult", - type=float, - metavar="V", - default=1.0, - help="multiply enc2 gradient by V if only enc2 is used", - ) - parser.add_argument( - "--text-input-cost-ratio", - type=float, - default=1.0, - metavar="V", - help="text input cost ratio relative to speech input cost", - ) - parser.add_argument( - "--stack-w2v-mbart-encoder", - action="store_true", - help="stack w2v and mbart encoder", - ) - parser.add_argument( - "--stack-w2v-mbart-nonorm-encoder", - action="store_true", - help="stack w2v and mbart encoder", - ) - parser.add_argument( - "--no-final-norm-decoder", action="store_true", help="no layer norm" - ) - parser.add_argument( - "--drop-w2v-layers", - type=int, - default=0, - metavar="N", - help="drop w2v encoder layers", - ) - - parser.add_argument( - "--share-w2v-text-encoder", - action="store_true", - help="share w2v encoder layers with text encoder", - ) - parser.add_argument( - "--shared-w2v-layers", - type=int, - default=0, - metavar="N", - help="shared encoder layers from w2v encoder", - ) - - @classmethod - def build_encoder(cls, args, task): - _args = copy.deepcopy(args) - _args.dropout = args.mbart_dropout - _args.attention_dropout = args.mbart_attention_dropout - _args.activation_dropout = args.mbart_activation_dropout - _args.max_source_positions = 1024 - enc_emb = nn.Embedding( - len(task.src_dict), _args.encoder_embed_dim, task.src_dict.pad() - ) - text_encoder = TransformerEncoder(_args, task.src_dict, enc_emb) - spch_encoder = Wav2VecEncoderWithAdaptor(args) - if getattr(args, "load_pretrained_mbart_from", None): - text_encoder = checkpoint_utils.load_pretrained_component_from_model( - component=text_encoder, checkpoint=args.load_pretrained_mbart_from - ) - if getattr(args, "stack_w2v_mbart_encoder", False): - assert getattr(args, "share_w2v_text_encoder", False) is False - spch_encoder = StackedWav2VecEncoderWithAdaptor( - spch_encoder.w2v_encoder, - text_encoder.layers, - text_encoder.layer_norm, - spch_encoder.adaptor, - args.drop_w2v_layers, - ) - elif getattr(args, "stack_w2v_mbart_nonorm_encoder", False): - text_encoder.layer_norm = None - spch_encoder = StackedWav2VecEncoderWithAdaptor( - spch_encoder.w2v_encoder, - text_encoder.layers, - text_encoder.layer_norm, - spch_encoder.adaptor, - args.drop_w2v_layers, - ) - elif getattr(args, "share_w2v_text_encoder", False): - spch_encoder = SharedEncoder( - spch_encoder.w2v_encoder, - text_encoder, - spch_encoder.adaptor, - args.shared_w2v_layers, - ) - - for k, p in spch_encoder.named_parameters(): - # Freeze pretrained models by default - if safe_hasattr( - args, "finetune_w2v_params" - ) and need_finetuning(args.finetune_w2v_params, k): - p.requires_grad = True - else: - p.requires_grad = False - for k, p in text_encoder.named_parameters(): - # Freeze pretrained models by default - if safe_hasattr( - args, "finetune_mbart_encoder_params" - ) and need_finetuning( - args.finetune_mbart_encoder_params, k - ): - p.requires_grad = True - else: - p.requires_grad = False - cross_attentive_loss_before_last_layer = ( - 0 if getattr(args, "attentive_cost_regularization", 0.0) > 0.0 else -1 - ) - encoder = DualInputEncoder( - args, - spch_encoder, - text_encoder, - task.src_dict, - cross_attentive_loss_before_last_layer, - ) - return encoder 
- - @classmethod - def build_decoder(cls, args, task): - _args = copy.deepcopy(args) - _args.dropout = args.mbart_dropout - _args.attention_dropout = args.mbart_attention_dropout - _args.activation_dropout = args.mbart_activation_dropout - _args.max_target_positions = 1024 - dec_emb = nn.Embedding( - len(task.tgt_dict), _args.encoder_embed_dim, task.tgt_dict.pad() - ) - decoder = TransformerDecoder(_args, task.tgt_dict, dec_emb) - if getattr(args, "load_pretrained_mbart_from", None): - decoder = checkpoint_utils.load_pretrained_component_from_model( - component=decoder, checkpoint=args.load_pretrained_mbart_from - ) - if getattr(args, "no_final_norm_decoder", False): - decoder.layer_norm = None - for k, p in decoder.named_parameters(): - # Freeze pretrained models by default - if safe_hasattr( - args, "finetune_mbart_decoder_params" - ) and need_finetuning( - args.finetune_mbart_decoder_params, k - ): - p.requires_grad = True - else: - p.requires_grad = False - - compute_cross_attentive_loss = ( - True if getattr(args, "attentive_cost_regularization", 0.0) > 0.0 else False - ) - cross_attentive_loss_without_norm = getattr( - args, "attentive_cost_without_normalize", False - ) - cross_attentive_loss_reverse = ( - False # getattr(args, "attentive_cost_reverse", False) - ) - decoder = TransformerMultiInputDecoder( - dictionary=task.target_dictionary, - spch_decoder=decoder, - text_decoder=decoder, - compute_cross_attentive_loss=compute_cross_attentive_loss, - cross_attentive_loss_with_norm=True - if not cross_attentive_loss_without_norm - else False, - cross_attentive_loss_reverse=cross_attentive_loss_reverse, - ) - return decoder - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure that all args are properly defaulted - # (in case there are any new ones) - dualinputxmtransformer_base(args) - - encoder = cls.build_encoder(args, task) - decoder = cls.build_decoder(args, task) - return cls(encoder, decoder) - - -@register_model_architecture("dual_input_xm_transformer", "dualinputxmtransformer_base") -def dualinputxmtransformer_base(args): - # wav2vec encoder - set_default_w2v_encoder_args(args) - set_default_adaptor_args(args) - - # mbart model - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr( - args, "encoder_ffn_embed_dim", 4 * args.encoder_embed_dim - ) - args.encoder_layers = getattr(args, "encoder_layers", 12) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.encoder_layerdrop = getattr(args, "encoder_layerdrop", 0) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True) - - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4 * 1024) - args.decoder_layers = getattr(args, "decoder_layers", 12) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", True) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0) - - args.adaptive_input = getattr(args, "adaptive_input", False) - - args.mbart_attention_dropout = getattr(args, "mbart_attention_dropout", 0.0) - args.mbart_activation_dropout = getattr(args, 
"mbart_activation_dropout", 0.0) - args.mbart_dropout = getattr(args, "mbart_dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", True - ) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0) - args.layernorm_embedding = getattr(args, "layernorm_embedding", True) - - args.activation_fn = getattr(args, "activation_fn", "gelu") - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - args.pooler_dropout = getattr(args, "pooler_dropout", 0.0) diff --git a/kosmos-g/fairseq/examples/speech_text_joint_to_text/scripts/g2p_encode.py b/kosmos-g/fairseq/examples/speech_text_joint_to_text/scripts/g2p_encode.py deleted file mode 100644 index 9db779396..000000000 --- a/kosmos-g/fairseq/examples/speech_text_joint_to_text/scripts/g2p_encode.py +++ /dev/null @@ -1,191 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import itertools -import logging -import re -import time - -from g2p_en import G2p - -logger = logging.getLogger(__name__) - -FAIL_SENT = "FAILED_SENTENCE" - - -def parse(): - parser = argparse.ArgumentParser() - parser.add_argument("--data-path", type=str, required=True) - parser.add_argument("--out-path", type=str, required=True) - parser.add_argument("--lower-case", action="store_true") - parser.add_argument("--do-filter", action="store_true") - parser.add_argument("--use-word-start", action="store_true") - parser.add_argument("--dup-vowel", default=1, type=int) - parser.add_argument("--dup-consonant", default=1, type=int) - parser.add_argument("--no-punc", action="store_true") - parser.add_argument("--reserve-word", type=str, default="") - parser.add_argument( - "--reserve-first-column", - action="store_true", - help="first column is sentence id", - ) - ### - parser.add_argument("--parallel-process-num", default=1, type=int) - parser.add_argument("--logdir", default="") - args = parser.parse_args() - return args - - -def process_sent(sent, g2p, res_wrds, args): - sents = pre_process_sent(sent, args.do_filter, args.lower_case, res_wrds) - pho_seqs = [do_g2p(g2p, s, res_wrds, i == 0) for i, s in enumerate(sents)] - pho_seq = ( - [FAIL_SENT] - if [FAIL_SENT] in pho_seqs - else list(itertools.chain.from_iterable(pho_seqs)) - ) - if args.no_punc: - pho_seq = remove_punc(pho_seq) - if args.dup_vowel > 1 or args.dup_consonant > 1: - pho_seq = dup_pho(pho_seq, args.dup_vowel, args.dup_consonant) - if args.use_word_start: - pho_seq = add_word_start(pho_seq) - return " ".join(pho_seq) - - -def remove_punc(sent): - ns = [] - regex = re.compile("[^a-zA-Z0-9 ]") - for p in sent: - if (not regex.search(p)) or p == FAIL_SENT: - if p == " " and (len(ns) == 0 or ns[-1] == " "): - continue - ns.append(p) - return ns - - -def do_g2p(g2p, sent, res_wrds, is_first_sent): - if sent in res_wrds: - pho_seq = [res_wrds[sent]] - else: - pho_seq = g2p(sent) - if not 
is_first_sent: - pho_seq = [" "] + pho_seq # add space to separate - return pho_seq - - -def pre_process_sent(sent, do_filter, lower_case, res_wrds): - if do_filter: - sent = re.sub("-", " ", sent) - sent = re.sub("—", " ", sent) - if len(res_wrds) > 0: - wrds = sent.split() - wrds = ["SPLIT_ME " + w + " SPLIT_ME" if w in res_wrds else w for w in wrds] - sents = [x.strip() for x in " ".join(wrds).split("SPLIT_ME") if x.strip() != ""] - else: - sents = [sent] - if lower_case: - sents = [s.lower() if s not in res_wrds else s for s in sents] - return sents - - -def dup_pho(sent, dup_v_num, dup_c_num): - """ - duplicate phoneme defined as cmudict - http://www.speech.cs.cmu.edu/cgi-bin/cmudict - """ - if dup_v_num == 1 and dup_c_num == 1: - return sent - ns = [] - for p in sent: - ns.append(p) - if re.search(r"\d$", p): - for i in range(1, dup_v_num): - ns.append(f"{p}-{i}P") - elif re.search(r"\w", p): - for i in range(1, dup_c_num): - ns.append(f"{p}-{i}P") - return ns - - -def add_word_start(sent): - ns = [] - do_add = True - ws = "▁" - for p in sent: - if do_add: - p = ws + p - do_add = False - if p == " ": - do_add = True - else: - ns.append(p) - return ns - - -def load_reserve_word(reserve_word): - if reserve_word == "": - return [] - with open(reserve_word, "r") as fp: - res_wrds = [x.strip().split() for x in fp.readlines() if x.strip() != ""] - assert sum([0 if len(x) == 2 else 1 for x in res_wrds]) == 0 - res_wrds = dict(res_wrds) - return res_wrds - - -def process_sents(sents, args): - g2p = G2p() - out_sents = [] - res_wrds = load_reserve_word(args.reserve_word) - for sent in sents: - col1 = "" - if args.reserve_first_column: - col1, sent = sent.split(None, 1) - sent = process_sent(sent, g2p, res_wrds, args) - if args.reserve_first_column and col1 != "": - sent = f"{col1} {sent}" - out_sents.append(sent) - return out_sents - - -def main(): - args = parse() - out_sents = [] - with open(args.data_path, "r") as fp: - sent_list = [x.strip() for x in fp.readlines()] - if args.parallel_process_num > 1: - try: - import submitit - except ImportError: - logger.warn( - "submitit is not found and only one job is used to process the data" - ) - submitit = None - - if args.parallel_process_num == 1 or submitit is None: - out_sents = process_sents(sent_list, args) - else: - # process sentences with parallel computation - lsize = len(sent_list) // args.parallel_process_num + 1 - executor = submitit.AutoExecutor(folder=args.logdir) - executor.update_parameters(timeout_min=1000, cpus_per_task=4) - jobs = [] - for i in range(args.parallel_process_num): - job = executor.submit( - process_sents, sent_list[lsize * i : lsize * (i + 1)], args - ) - jobs.append(job) - is_running = True - while is_running: - time.sleep(5) - is_running = sum([job.done() for job in jobs]) < len(jobs) - out_sents = list(itertools.chain.from_iterable([job.result() for job in jobs])) - with open(args.out_path, "w") as fp: - fp.write("\n".join(out_sents) + "\n") - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/speech_text_joint_to_text/tasks/__init__.py b/kosmos-g/fairseq/examples/speech_text_joint_to_text/tasks/__init__.py deleted file mode 100644 index 5fc5d9e21..000000000 --- a/kosmos-g/fairseq/examples/speech_text_joint_to_text/tasks/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-
-import importlib
-import os
-
diff --git a/kosmos-g/fairseq/examples/speech_text_joint_to_text/tasks/speech_text_joint.py b/kosmos-g/fairseq/examples/speech_text_joint_to_text/tasks/speech_text_joint.py
deleted file mode 100644
index bb04f14f1..000000000
--- a/kosmos-g/fairseq/examples/speech_text_joint_to_text/tasks/speech_text_joint.py
+++ /dev/null
@@ -1,377 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-import logging
-import os
-from argparse import Namespace
-from pathlib import Path
-
-import torch
-from fairseq.data import (
-    encoders,
-    Dictionary,
-    ResamplingDataset,
-    TransformEosLangPairDataset,
-    ConcatDataset,
-)
-from fairseq.data.iterators import GroupedEpochBatchIterator
-from fairseq.data.audio.multi_modality_dataset import (
-    MultiModalityDataset,
-    LangPairMaskDataset,
-    ModalityDatasetItem,
-)
-from fairseq.data.audio.speech_to_text_dataset import (
-    SpeechToTextDataset,
-    SpeechToTextDatasetCreator,
-)
-from fairseq.data.audio.speech_to_text_joint_dataset import (
-    S2TJointDataConfig,
-    SpeechToTextJointDatasetCreator,
-)
-from fairseq.tasks import register_task
-from fairseq.tasks.speech_to_text import SpeechToTextTask
-from fairseq.tasks.translation import load_langpair_dataset
-
-logger = logging.getLogger(__name__)
-LANG_TAG_TEMPLATE = "<lang:{}>"
-
-
-@register_task("speech_text_joint_to_text")
-class SpeechTextJointToTextTask(SpeechToTextTask):
-    """
-    Task for jointly training speech-to-text and text-to-text models.
-    """
-
-    @classmethod
-    def add_args(cls, parser):
-        """Add task-specific arguments to the parser."""
-        super(SpeechTextJointToTextTask, cls).add_args(parser)
-        ###
-        parser.add_argument(
-            "--parallel-text-data",
-            default="",
-            help="path to the parallel text data directory",
-        )
-        parser.add_argument(
-            "--max-tokens-text",
-            type=int,
-            metavar="N",
-            help="maximum tokens for encoder text input",
-        )
-        parser.add_argument(
-            "--max-positions-text",
-            type=int,
-            metavar="N",
-            default=400,
-            help="maximum tokens per encoder text input",
-        )
-        parser.add_argument(
-            "--langpairs",
-            default=None,
-            metavar="S",
-            help='language pairs for text training, separated with ","',
-        )
-        parser.add_argument(
-            "--speech-sample-ratio",
-            default=1,
-            type=float,
-            metavar="N",
-            help="sampling ratio multiplier for the speech dataset with transcripts",
-        )
-        parser.add_argument(
-            "--text-sample-ratio",
-            default=1,
-            type=float,
-            metavar="N",
-            help="sampling ratio multiplier for the text dataset",
-        )
-        parser.add_argument(
-            "--update-mix-data",
-            action="store_true",
-            help="use mixed data in one update when update-freq > 1",
-        )
-        parser.add_argument(
-            "--load-speech-only",
-            action="store_true",
-            help="load speech data only",
-        )
-        parser.add_argument(
-            "--mask-text-ratio",
-            type=float,
-            metavar="V",
-            default=0.0,
-            help="mask V source tokens for text only mode",
-        )
-        parser.add_argument(
-            "--mask-text-type",
-            default="random",
-            choices=["random", "tail"],
-            help="mask text type",
-        )
-        parser.add_argument(
-            "--noise-token",
-            default="",
-            help="noise token for masking src text tokens if mask-text-ratio > 0",
-        )
-        parser.add_argument(
-            "--infer-target-lang",
-            default="",
-            metavar="S",
-            help="target language for inference",
-        )
-
-    def __init__(self, args, src_dict, tgt_dict, infer_tgt_lang_id=None):
-        super().__init__(args, tgt_dict)
-        self.src_dict = src_dict
-        self.data_cfg = S2TJointDataConfig(Path(args.data) / args.config_yaml)
-
assert self.tgt_dict.pad() == self.src_dict.pad() - assert self.tgt_dict.eos() == self.src_dict.eos() - self.speech_only = args.load_speech_only - self._infer_tgt_lang_id = infer_tgt_lang_id - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task (e.g., load dictionaries).""" - data_cfg = S2TJointDataConfig(Path(args.data) / args.config_yaml) - tgt_dict_path = Path(args.data) / data_cfg.vocab_filename - src_dict_path = Path(args.data) / data_cfg.src_vocab_filename - if (not os.path.isfile(src_dict_path)) or (not os.path.isfile(tgt_dict_path)): - raise FileNotFoundError("Dict not found: {}".format(args.data)) - src_dict = Dictionary.load(src_dict_path.as_posix()) - tgt_dict = Dictionary.load(tgt_dict_path.as_posix()) - - print("| src dictionary: {} types".format(len(src_dict))) - print("| tgt dictionary: {} types".format(len(tgt_dict))) - - if args.parallel_text_data != "": - if not os.path.isabs(args.parallel_text_data): - args.parallel_text_data = os.path.join( - args.data, args.parallel_text_data - ) - - if args.langpairs is None: - raise Exception( - "Could not infer language pair, please provide it explicitly" - ) - infer_tgt_lang_id = None - if args.infer_target_lang != "" and data_cfg.prepend_tgt_lang_tag_no_change: - tgt_lang_tag = SpeechToTextDataset.LANG_TAG_TEMPLATE.format( - args.infer_target_lang - ) - infer_tgt_lang_id = tgt_dict.index(tgt_lang_tag) - assert infer_tgt_lang_id != tgt_dict.unk() - return cls(args, src_dict, tgt_dict, infer_tgt_lang_id=infer_tgt_lang_id) - - def load_langpair_dataset( - self, prepend_tgt_lang_tag=False, sampling_alpha=1.0, epoch=0 - ): - lang_pairs = [] - text_dataset = None - split = "train" - for lp in self.args.langpairs.split(","): - src, tgt = lp.split("-") - text_dataset = load_langpair_dataset( - self.args.parallel_text_data, - split, - src, - self.src_dict, - tgt, - self.tgt_dict, - combine=True, - dataset_impl=None, - upsample_primary=1, - left_pad_source=False, - left_pad_target=False, - max_source_positions=self.args.max_positions_text, - max_target_positions=self.args.max_target_positions, - load_alignments=False, - truncate_source=False, - ) - if prepend_tgt_lang_tag: - # TODO - text_dataset = TransformEosLangPairDataset( - text_dataset, - src_eos=self.src_dict.eos(), - tgt_bos=self.tgt_dict.eos(), # 'prev_output_tokens' starts with eos - new_tgt_bos=self.tgt_dict.index(LANG_TAG_TEMPLATE.format(tgt)), - ) - lang_pairs.append(text_dataset) - if len(lang_pairs) > 1: - if sampling_alpha != 1.0: - size_ratios = SpeechToTextDatasetCreator.get_size_ratios( - self.args.langpairs.split(","), - [len(s) for s in lang_pairs], - alpha=sampling_alpha, - ) - lang_pairs = [ - ResamplingDataset(d, size_ratio=r, epoch=epoch, replace=(r >= 1.0)) - for d, r in zip(lang_pairs, size_ratios) - ] - return ConcatDataset(lang_pairs) - return text_dataset - - def inference_step( - self, generator, models, sample, prefix_tokens=None, constraints=None - ): - with torch.no_grad(): - return generator.generate( - models, - sample, - prefix_tokens=prefix_tokens, - constraints=constraints, - bos_token=self._infer_tgt_lang_id, - ) - - def build_src_tokenizer(self, args): - logger.info(f"src-pre-tokenizer: {self.data_cfg.src_pre_tokenizer}") - return encoders.build_tokenizer(Namespace(**self.data_cfg.src_pre_tokenizer)) - - def build_src_bpe(self, args): - logger.info(f"tokenizer: {self.data_cfg.src_bpe_tokenizer}") - return encoders.build_bpe(Namespace(**self.data_cfg.src_bpe_tokenizer)) - - def load_dataset(self, split, epoch=1, combine=False, 
**kwargs): - """Load a given dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - """ - is_train_split = split.startswith("train") - pre_tokenizer = self.build_tokenizer(self.args) - bpe_tokenizer = self.build_bpe(self.args) - src_pre_tokenizer = self.build_src_tokenizer(self.args) - src_bpe_tokenizer = self.build_src_bpe(self.args) - ast_dataset = SpeechToTextJointDatasetCreator.from_tsv( - self.args.data, - self.data_cfg, - split, - self.tgt_dict, - src_dict=None if self.speech_only else self.src_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - src_pre_tokenizer=src_pre_tokenizer, - src_bpe_tokenizer=src_bpe_tokenizer, - is_train_split=is_train_split, - epoch=epoch, - seed=self.args.seed, - ) - noise_token_id = -1 - text_dataset = None - if self.args.parallel_text_data != "" and is_train_split: - text_dataset = self.load_langpair_dataset( - self.data_cfg.prepend_tgt_lang_tag_no_change, 1.0, epoch=epoch, - ) - if self.args.mask_text_ratio > 0: - # add mask - noise_token_id = ( - self.src_dict.unk() - if self.args.noise_token == "" - else self.src_dict.index(self.args.noise_token) - ) - text_dataset = LangPairMaskDataset( - text_dataset, - src_bos=self.src_dict.bos(), - src_eos=self.src_dict.eos(), - noise_id=noise_token_id, - mask_ratio=self.args.mask_text_ratio, - mask_type=self.args.mask_text_type, - ) - - if text_dataset is not None: - mdsets = [ - ModalityDatasetItem( - "sup_speech", - ast_dataset, - (self.args.max_source_positions, self.args.max_target_positions), - self.args.max_tokens, - self.args.batch_size, - ), - ModalityDatasetItem( - "text", - text_dataset, - (self.args.max_positions_text, self.args.max_target_positions), - self.args.max_tokens_text - if self.args.max_tokens_text is not None - else self.args.max_tokens, - self.args.batch_size, - ), - ] - ast_dataset = MultiModalityDataset(mdsets) - self.datasets[split] = ast_dataset - - @property - def target_dictionary(self): - """Return the :class:`~fairseq.data.Dictionary` for the language - model.""" - return self.tgt_dict - - @property - def source_dictionary(self): - """Return the source :class:`~fairseq.data.Dictionary` (if applicable - for this task).""" - return None if self.speech_only else self.src_dict - - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=0, - data_buffer_size=0, - disable_iterator_cache=False, - skip_remainder_batch=False, - grouped_shuffling=False, - update_epoch_batch_itr=False, - ): - - if not isinstance(dataset, MultiModalityDataset): - return super(SpeechTextJointToTextTask, self).get_batch_iterator( - dataset, - max_tokens, - max_sentences, - max_positions, - ignore_invalid_inputs, - required_batch_size_multiple, - seed, - num_shards, - shard_id, - num_workers, - epoch, - data_buffer_size, - disable_iterator_cache, - skip_remainder_batch=skip_remainder_batch, - update_epoch_batch_itr=update_epoch_batch_itr, - ) - - mult_ratio = [self.args.speech_sample_ratio, self.args.text_sample_ratio] - assert len(dataset.datasets) == 2 - - # initialize the dataset with the correct starting epoch - dataset.set_epoch(epoch) - - batch_samplers = dataset.get_batch_samplers( - mult_ratio, required_batch_size_multiple, seed - ) - - # return a reusable, sharded iterator - epoch_iter = GroupedEpochBatchIterator( - dataset=dataset, - collate_fn=dataset.collater, - 
batch_samplers=batch_samplers,
-            seed=seed,
-            num_shards=num_shards,
-            shard_id=shard_id,
-            num_workers=num_workers,
-            epoch=epoch,
-            mult_rate=1 if self.args.update_mix_data else max(self.args.update_freq),
-            buffer_size=data_buffer_size,
-            skip_remainder_batch=skip_remainder_batch,
-        )
-        self.dataset_to_epoch_iter[dataset] = {}  # refresh it every epoch
-        return epoch_iter
diff --git a/kosmos-g/fairseq/examples/speech_to_speech/README.md b/kosmos-g/fairseq/examples/speech_to_speech/README.md
deleted file mode 100644
index 2a055a605..000000000
--- a/kosmos-g/fairseq/examples/speech_to_speech/README.md
+++ /dev/null
@@ -1,181 +0,0 @@
-# Speech to speech translation (S2ST)
-
-We provide the implementation for speech-to-unit translation (S2UT) proposed in "[Direct speech-to-speech translation with discrete units (Lee et al. 2021)](https://arxiv.org/abs/2107.05604)" and also the transformer-based implementation of the speech-to-spectrogram translation (S2SPECT, or transformer-based [Translatotron](https://arxiv.org/abs/1904.06037)) baseline in the paper.
-
-## Pretrained Models
-
-### Unit-based HiFi-GAN Vocoder
-Unit config | Unit size | Vocoder dataset | Model
-|---|---|---|---
-[HuBERT Base, Librispeech](https://github.com/pytorch/fairseq/tree/main/examples/hubert), layer 6 | 100 | [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) | [ckpt](https://dl.fbaipublicfiles.com/fairseq/speech_to_speech/vocoder/code_hifigan/hubert_base_100_lj/g_00500000), [config](https://dl.fbaipublicfiles.com/fairseq/speech_to_speech/vocoder/code_hifigan/hubert_base_100_lj/config.json)
-
-
-## Data preparation
-### Target speech
-0. (optional) To prepare S2S data from a speech-to-text translation (ST) dataset, see [fairseq-S^2](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis) for pre-trained TTS models and instructions on how to train and decode TTS models.
-1. Prepare two folders, `$SRC_AUDIO` and `$TGT_AUDIO`, with `${SPLIT}/${SAMPLE_ID}.wav` for source and target speech under each folder, separately. Note that for S2UT experiments, the target audio sampling rate should be 16,000 Hz, and for S2SPECT experiments, a target audio sampling rate of 22,050 Hz is recommended.
-2. To prepare target discrete units for S2UT model training, see [Generative Spoken Language Modeling (speech2unit)](https://github.com/pytorch/fairseq/tree/main/examples/textless_nlp/gslm/speech2unit) for pre-trained k-means models, checkpoints, and instructions on how to decode units from speech. Set the output target unit files (`--out_quantized_file_path`) as `${TGT_AUDIO}/${SPLIT}.txt`. In [Lee et al. 2021](https://arxiv.org/abs/2107.05604), we use 100 units from the sixth layer (`--layer 6`) of the HuBERT Base model.
-
-### Formatting data
-**Speech-to-speech data**
-
-_S2UT_
-  * Set `--reduce-unit` for training the S2UT _reduced_ model
-  * Pre-trained vocoder and config (`$VOCODER_CKPT`, `$VOCODER_CFG`) can be downloaded from the **Pretrained Models** section. They are not required if `--eval-inference` is not going to be set during model training.
-```
-# $SPLIT1, $SPLIT2, etc. are split names such as train, dev, test, etc.
-
-python examples/speech_to_speech/preprocessing/prep_s2ut_data.py \
-  --source-dir $SRC_AUDIO --target-dir $TGT_AUDIO --data-split $SPLIT1 $SPLIT2 \
-  --output-root $DATA_ROOT --reduce-unit \
-  --vocoder-checkpoint $VOCODER_CKPT --vocoder-cfg $VOCODER_CFG
-```
-
-_S2SPECT_
-```
-# $SPLIT1, $SPLIT2, etc. are split names such as train, dev, test, etc.
- -python examples/speech_to_speech/preprocessing/prep_s2spect_data.py \ - --source-dir $SRC_AUDIO --target-dir $TGT_AUDIO --data-split $SPLIT1 $SPLIT2 \ - --output-root $DATA_ROOT -``` - -**Multitask data** - * For each multitask `$TASK_NAME`, prepare `${DATA_ROOT}/${TASK_NAME}/${SPLIT}.tsv` files for each split following the format below: (Two tab separated columns. The sample_ids should match with the sample_ids for the speech-to-speech data in `${DATA_ROOT}/${SPLIT}.tsv`.) -``` -id tgt_text -sample_id_0 token1 token2 token3 ... -sample_id_1 token1 token2 token3 ... -... -``` - * For each multitask `$TASK_NAME`, prepare `${DATA_ROOT}/${TASK_NAME}/dict.txt`, a dictionary in fairseq format with all tokens for the targets for `$TASK_NAME`. - * Create `config_multitask.yaml`. Below is an example of the config used for S2UT _reduced_ with Fisher experiments including two encoder multitasks (`source_letter`, `target_letter`) and one decoder CTC task (`decoder_target_ctc`). -``` -source_letter: # $TASK_NAME - decoder_type: transformer - dict: ${DATA_ROOT}/source_letter/dict.txt - data: ${DATA_ROOT}/source_letter - encoder_layer: 6 - loss_weight: 8.0 -target_letter: - decoder_type: transformer - dict: ${DATA_ROOT}/target_letter/dict.txt - data: ${DATA_ROOT}/target_letter - encoder_layer: 8 - loss_weight: 8.0 -decoder_target_ctc: - decoder_type: ctc - dict: ${DATA_ROOT}/decoder_target_ctc/dict.txt - data: ${DATA_ROOT}/decoder_target_ctc - decoder_layer: 3 - loss_weight: 1.6 -``` - - -## Training - -**Speech-to-unit translation (S2UT)** - -Here's an example for training Fisher S2UT models with 100 discrete units as target: -``` -fairseq-train $DATA_ROOT \ - --config-yaml config.yaml --multitask-config-yaml config_multitask.yaml \ - --task speech_to_speech --target-is-code --target-code-size 100 --vocoder code_hifigan \ - --criterion speech_to_unit --label-smoothing 0.2 \ - --arch s2ut_transformer_fisher --share-decoder-input-output-embed \ - --dropout 0.1 --attention-dropout 0.1 --relu-dropout 0.1 \ - --train-subset train --valid-subset dev \ - --save-dir ${MODEL_DIR} \ - --lr 0.0005 --lr-scheduler inverse_sqrt --warmup-init-lr 1e-7 --warmup-updates 10000 \ - --optimizer adam --adam-betas "(0.9,0.98)" --clip-norm 10.0 \ - --max-update 400000 --max-tokens 20000 --max-target-positions 3000 --update-freq 4 \ - --seed 1 --fp16 --num-workers 8 -``` -* Adjust `--update-freq` accordingly for different #GPUs. In the above we set `--update-freq 4` to simulate training with 4 GPUs. -* Set `--n-frames-per-step 5` to train an S2UT _stacked_ system with reduction ratio r=5. (Use `$DATA_ROOT` prepared without `--reduce-unit`.) -* (optional) one can turn on tracking MCD loss during training for checkpoint selection by setting `--eval-inference --eval-args '{"beam": 1, "max_len_a": 1}' --best-checkpoint-metric mcd_loss`. It is recommended to sample a smaller subset as the validation set as MCD loss computation is time-consuming. 
-
-**Speech-to-spectrogram translation (S2SPECT)**
-
-Here's an example for training Fisher S2SPECT models with reduction ratio r=5:
-```
-fairseq-train $DATA_ROOT \
-  --config-yaml config.yaml --multitask-config-yaml config_multitask.yaml \
-  --task speech_to_speech --n-frames-per-step 5 \
-  --criterion speech_to_spectrogram \
-  --arch s2spect_transformer_fisher --decoder-normalize-before \
-  --dropout 0.1 --attention-dropout 0.1 --relu-dropout 0.1 \
-  --train-subset train --valid-subset dev \
-  --save-dir ${MODEL_DIR} \
-  --eval-inference --best-checkpoint-metric mcd_loss \
-  --lr 0.0005 --lr-scheduler inverse_sqrt --warmup-init-lr 1e-7 --warmup-updates 10000 \
-  --optimizer adam --adam-betas "(0.9,0.98)" --clip-norm 10.0 --weight-decay 1e-6 \
-  --max-update 400000 --max-tokens 80000 --max-tokens-valid 30000 --required-batch-size-multiple 1 \
-  --max-target-positions 3000 --update-freq 16 \
-  --seed 1 --fp16 --num-workers 8
-```
-* Adjust `--update-freq` according to the number of GPUs; in the above we set `--update-freq 16` to simulate training with 16 GPUs.
-* We recommend tracking MCD loss during training for best-checkpoint selection.
-
-**Unit-based HiFi-GAN vocoder**
-
-Coming soon.
-
-## Inference
-
-**Speech-to-unit translation (S2UT)**
-
-1. Follow the same inference process as in [fairseq-S2T](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text) to generate unit sequences (`${RESULTS_PATH}/generate-${GEN_SUBSET}.txt`).
-```
-fairseq-generate $DATA_ROOT \
-  --config-yaml config.yaml --multitask-config-yaml config_multitask.yaml \
-  --task speech_to_speech --target-is-code --target-code-size 100 --vocoder code_hifigan \
-  --path $MODEL_DIR/checkpoint_best.pt --gen-subset $GEN_SUBSET \
-  --max-tokens 50000 \
-  --beam 10 --max-len-a 1 \
-  --results-path ${RESULTS_PATH}
-```
-  * Set `--beam 1 --n-frames-per-step $r` for decoding with S2UT _stacked_ models.
-
-2. Convert unit sequences to waveform.
-```
-grep "^D\-" ${RESULTS_PATH}/generate-${GEN_SUBSET}.txt | \
-  sed 's/^D-//ig' | sort -nk1 | cut -f3 \
-  > ${RESULTS_PATH}/generate-${GEN_SUBSET}.unit
-
-python examples/speech_to_speech/generate_waveform_from_code.py \
-  --in-code-file ${RESULTS_PATH}/generate-${GEN_SUBSET}.unit \
-  --vocoder $VOCODER_CKPT --vocoder-cfg $VOCODER_CFG \
-  --results-path ${RESULTS_PATH} --dur-prediction
-```
-  * Set `--dur-prediction` to generate audio for S2UT _reduced_ models.
-
-
-**Speech-to-spectrogram translation (S2SPECT)**
-
-Follow the same inference process as in [fairseq-S^2](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis) to generate waveforms.
-
-```
-# assume using a default Griffin-Lim vocoder
-
-python examples/speech_synthesis/generate_waveform.py $DATA_ROOT \
-  --config-yaml config.yaml --multitask-config-yaml config_multitask.yaml \
-  --task speech_to_speech --n-frames-per-step 5 \
-  --path $MODEL_DIR/checkpoint_best.pt --gen-subset $GEN_SUBSET \
-  --max-tokens 50000 \
-  --results-path ${RESULTS_PATH} --dump-waveforms --output-sample-rate 16000
-```
-
-In addition to using the default Griffin-Lim vocoder, one can also finetune a HiFi-GAN vocoder for the S2SPECT model by following the instructions in the [HiFi-GAN repo](https://github.com/jik876/hifi-gan).
-
-**Multitask decoding**
-
-Coming soon.
-
-## Evaluation
-
-To evaluate speech translation output, we first apply ASR on the speech output and then compute the BLEU score between the ASR-decoded text and the references using sacreBLEU.
-
-**En**
-* ASR: We use the "[Wav2Vec 2.0 Large (LV-60) + Self Training / 960 hours / Libri-Light + Librispeech](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_960h_pl.pt)" En ASR model open-sourced by the [wav2vec](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec) project. See [instructions](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#evaluating-a-ctc-model) on how to run inference with a wav2vec-based ASR model. The model is also available on [Hugging Face](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self).
-* Text normalization: We use the text cleaner at [https://github.com/keithito/tacotron](https://github.com/keithito/tacotron) for pre-processing reference English text for ASR BLEU evaluation.
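-
-A minimal sketch of the scoring step with the sacreBLEU Python API (illustrative only; `asr_decoded` and `refs` are placeholder lists of sentence strings, with the same text normalization applied to both sides):
-```
-import sacrebleu
-
-# one hypothesis per ASR-decoded utterance, one reference stream for the corpus
-bleu = sacrebleu.corpus_bleu(asr_decoded, [refs])
-print(bleu.score)
-```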
diff --git a/kosmos-g/fairseq/examples/speech_to_speech/__init__.py b/kosmos-g/fairseq/examples/speech_to_speech/__init__.py
deleted file mode 100644
index 626423691..000000000
--- a/kosmos-g/fairseq/examples/speech_to_speech/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/README.md b/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/README.md
deleted file mode 100644
index c62fe1296..000000000
--- a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/README.md
+++ /dev/null
@@ -1,31 +0,0 @@
-# Benchmarking
-
-## Overview
-
-The goal of this framework is to support benchmarking of various speech-to-speech translation (S2ST) models in terms of runtime, maximum memory consumption, and total number of floating point operations (FLOPS). It is a generic framework and can easily be extended to support any fairseq model. To benchmark performance accurately, the core inference modules are re-implemented based on fairseq_cli/generate.py (core.py/Processing) and examples/speech_to_text/generate_waveform.py (core.py/SpeechGeneration). To ensure that end-to-end and cascaded models are compared fairly, for cascaded models we only consider the performance metrics of model inference at each stage, ignoring any intermediate data and I/O processing cost. All benchmarking runs are performed on CPU, as it is generally used in production environments and because benchmarking library support for GPUs is limited.
-
-1. Runtime: Average time in seconds to run model inference on an example from a given dataset. We use the [timeit](https://docs.python.org/3/library/timeit.html) library to measure the runtime.
-2. Max memory: Maximum memory in MiB, averaged over model inference on all examples from the given dataset. We use the [memory_profiler](https://pypi.org/project/memory-profiler/) library to gather memory footprints for a code snippet and take their maximum to get the peak memory used by the code. For cascaded models, we take the maximum over all stages to get the overall max-memory footprint.
-3. FLOPS: We compute the average number of floating point operations needed to run model inference for an example from the given dataset. We use the [PAPI library](http://www.bnikolic.co.uk/blog/python/flops/2019/10/01/pytorch-count-flops.html) to count the number of flops.
-
-## CLI Commands
-
-```{python}
-CUBLAS_WORKSPACE_CONFIG=:4096:8 python examples/speech_to_speech/benchmarking/get_metrics.py '' --config $config
-```
-
-
-## Note:
-
-1. The npy dataset is a list of samples saved as a .npy file. Each sample is a dictionary with id, net_input.
-2. The raw dataset is a list of raw audio paths, similar to a wav2vec2 input tsv file.
-
-```{python}
-sample: {
-    "id": xx,
-    "net_input": {
-        "src_tokens": torch.tensor([]),
-        "src_lengths": torch.tensor([])
-    }
-}
-```
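-
-For reference, a minimal sketch of the runtime and memory measurement pattern described in the overview (illustrative only; `model_forward` and `sample` stand in for a real model call and input):
-
-```{python}
-import timeit
-from memory_profiler import memory_usage
-
-# average runtime in seconds over 10 repeated forward passes
-runtime = timeit.Timer(lambda: model_forward(sample)).timeit(10) / 10
-
-# peak memory (MiB) observed while running one forward pass
-peak_mem = max(memory_usage((model_forward, (sample,), {})))
-```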
diff --git a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/configs/2StageS2ST.yaml b/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/configs/2StageS2ST.yaml
deleted file mode 100644
index 11deb42e7..000000000
--- a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/configs/2StageS2ST.yaml
+++ /dev/null
@@ -1,19 +0,0 @@
-general:
-  dataset_path: $npy_dataset
-  cpu: True
-  model_type: 2StageS2ST
-  dataset_size: 1
-
-stage1:
-  data: $data_bin_stage1
-  task: speech_to_text
-  path: $checkpoint_stage1
-  config_yaml: config.yaml
-  max_len_a: 2
-  max_len_b: 500
-
-stage2:
-  data: $data_bin_stage2
-  task: text_to_speech
-  path: $checkpoint_stage2
-  config_yaml: config.yaml
diff --git a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/configs/3StageS2ST.yaml b/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/configs/3StageS2ST.yaml
deleted file mode 100644
index 963813615..000000000
--- a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/configs/3StageS2ST.yaml
+++ /dev/null
@@ -1,28 +0,0 @@
-general:
-  dataset_path: $npy_dataset
-  cpu: True
-  model_type: 3StageS2ST
-  max_len_a: 2
-  max_len_b: 500
-  dataset_size: 1
-
-stage1:
-  data: $data_bin_stage1
-  task: speech_to_text
-  path: $checkpoint_stage1
-  config_yaml: config.yaml
-  max_len_a: 2
-  max_len_b: 500
-
-stage2:
-  data: $data_bin_stage2
-  task: translation
-  path: $checkpoint_stage2
-  config_yaml: config.yaml
-
-
-stage3:
-  data: $data_bin_stage3
-  task: text_to_speech
-  path: $checkpoint_stage3
-  config_yaml: config.yaml
diff --git a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/configs/DirectS2U.yaml b/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/configs/DirectS2U.yaml
deleted file mode 100644
index 96264cec6..000000000
--- a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/configs/DirectS2U.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
-general:
-  dataset_path: $npy_dataset_path
-  cpu: True
-  model_type: S2UT
-  dataset_size: 5
-  dump_speech_waveforms_dir: $dump_waveforms_dir_path
-
-stage1:
-  data: $data_bin
-  task: speech_to_speech
-  path: $checkpoint
-  config_yaml: config.yaml
-  max_len_b: 100000
-  beam: 10
-  target_is_code: True
-  max_target_positions: 3000
-  target_code_size: 100
-
-stage2:
-  vocoder: $vocoder_path
-  vocoder_cfg: $vocoder_cfg_json
-  dur_prediction: True
diff --git a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/configs/S2T.yaml b/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/configs/S2T.yaml
deleted file mode 100644
index 3a106a044..000000000
--- a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/configs/S2T.yaml
+++ /dev/null
@@ -1,13 +0,0 @@
-general:
-  dataset_path: $npy_dataset
-  cpu: True
-  model_type: S2T
-  dataset_size: 1
-
-stage1:
-  data: $data_bin
-  task: speech_to_text
-  path: $checkpoint
-  config_yaml: config.yaml
-  max_len_a: 2
-  max_len_b: 500
diff --git a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/core.py b/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/core.py
deleted file mode 100644
index da22a34ec..000000000
--- a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/core.py
+++ /dev/null
@@ -1,487 +0,0 @@
-import timeit
-import logging
-import torch
-from pypapi import events, papi_high as high
-from memory_profiler import memory_usage
-from torch import nn
-from argparse import Namespace
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-from fairseq.data import data_utils as fairseq_data_utils
-from fairseq import checkpoint_utils, tasks, utils
-from fairseq.models.text_to_speech.vocoder import CodeHiFiGANVocoder
-from examples.hubert.simple_kmeans.dump_hubert_feature import HubertFeatureReader
-from examples.hubert.simple_kmeans.dump_km_label import ApplyKmeans
-from fairseq_cli.generate import get_symbols_to_strip_from_output
-import soundfile as sf
-import ast
-import json
-
-logging.basicConfig(level=logging.INFO)
-logger = logging.getLogger(__name__)
-
-
-torch.manual_seed(1)
-# torch.set_deterministic() was deprecated and later removed; use the
-# replacement API from newer PyTorch releases
-torch.use_deterministic_algorithms(True)
-
-
-class BenchmarkingBase(nn.Module):
-    def __init__(self):
-        nn.Module.__init__(self)
-        self.s2x_task = None
-
-    def warm_up(self, sample, repeat):
-        """Warm up the model"""
-        for _i in range(repeat):
-            self.forward(sample)
-        logger.info(f"Model warmed up by running inference {repeat} times")
-
-    def benchmark_run_time(self, dataset, repeat):
-        """Benchmark average runtime for the model by calling benchmark_run_time_single_sample on every sample"""
-        logger.info("Starting run time benchmarking")
-        time_elapsed = 0
-        for i, sample in enumerate(dataset):
-            time_elapsed += self.benchmark_run_time_single_sample(sample, repeat=repeat)
-            if i % 100 == 0:
-                logger.info(f"Benchmarked run time for {i}/{len(dataset)} samples")
-        avg_time_elapsed = time_elapsed / len(dataset)
-        return avg_time_elapsed
-
-    def benchmark_run_time_single_sample(self, sample, repeat):
-        """Benchmark average runtime for a single sample using the timeit library. Units are seconds"""
-        timer = timeit.Timer(lambda: self.forward(sample))
-        time_elapsed = timer.timeit(repeat)
-        return time_elapsed / repeat
-
-    def count_flops(
-        self,
-        dataset,
-        repeat,
-    ):
-        """Use the PyPAPI library to count average FLOPs for model inference.
-        Note: It only works if the model is being run on cpu"""
-        logger.info("Starting flop counter")
-        high.start_counters([events.PAPI_DP_OPS])
-        for i, sample in enumerate(dataset):
-            for _r in range(repeat):
-                self.forward(sample)
-            if i % 100 == 0:
-                logger.info(f"Counted flops for {i}/{len(dataset)} samples")
-        flops = high.stop_counters()
-        flops = round(flops[0] / (repeat * len(dataset)))
-        return flops
-    def max_memory(self, dataset, repeat):
-        """Compute average max memory consumed by model inference. Units are MiB"""
-        logger.info("Starting memory benchmarking")
-        total_memory = 0
-        for i, sample in enumerate(dataset):
-            for _r in range(repeat):
-                total_memory += max(memory_usage((self.forward, (sample,), {})))
-            if i % 100 == 0:
-                logger.info(f"Benchmarked memory for {i}/{len(dataset)} samples")
-        total_memory = total_memory / (repeat * len(dataset))
-        return total_memory
-
-    def gather_all_metrics(self, dataset, repeat):
-        run_time = self.benchmark_run_time(dataset, repeat)
-        max_memory = self.max_memory(dataset, repeat)
-        flops = self.count_flops(dataset, repeat)
-
-        return run_time, max_memory, flops
-
-    def dump_final_speech_output(
-        self, dataset, output_dir, resample_fn, sample_rate, prefix=None
-    ):
-
-        def to_np(x):
-            return x.detach().cpu().numpy()
-
-        for i, sample in enumerate(dataset):
-            hypo = self.forward(sample)[0]
-
-            try:
-                wave_preds = to_np(resample_fn(hypo["waveform"]))
-                sf.write(
-                    f"{output_dir}/{prefix}_{i}_pred.wav",
-                    wave_preds,
-                    sample_rate,
-                )
-            except Exception as e:
-                raise Exception(
-                    f"Encountered {e} - Invalid waveform. Make sure the model outputs a waveform"
-                )
-
-
-class Processing(BenchmarkingBase):
-    """Class similar to fairseq_cli/generate.py. Supports ASR, MT and ST model inference"""
-
-    def __init__(self, args):
-        super().__init__()
-        self.use_cuda = not getattr(args, "cpu", False)
-        self.setUp(args)
-        self.training = False
-        self.s2x_task = self.task
-
-    def setUp(self, cfg):
-        if isinstance(cfg, Namespace):
-            cfg = convert_namespace_to_omegaconf(cfg)
-
-        self.task = tasks.setup_task(cfg.task)
-        self.tgt_dict = self.task.target_dictionary
-
-        # Load ensemble
-        logger.info("loading model(s) from {}".format(cfg.common_eval.path))
-        models, _ = checkpoint_utils.load_model_ensemble(
-            utils.split_paths(cfg.common_eval.path),
-            arg_overrides={},
-            task=self.task,
-            suffix=cfg.checkpoint.checkpoint_suffix,
-            strict=False,
-            num_shards=cfg.checkpoint.checkpoint_shard_count,
-        )
-        if len(models) > 1:
-            raise Exception("Currently loading multiple models is not supported")
-        self.model = models[0]
-
-        # Optimize model for generation
-        if cfg.common.fp16:
-            self.model.half()
-        if self.use_cuda:
-            self.model.cuda()
-        self.model.prepare_for_inference_(cfg)
-
-        self.generator = self.task.build_generator(
-            [self.model],
-            cfg.generation,
-            extra_gen_cls_kwargs={},
-        )
-        # Handle tokenization and BPE
-        self.tokenizer = self.task.build_tokenizer(cfg.tokenizer)
-        self.bpe = self.task.build_bpe(cfg.bpe)
-        self.remove_bpe = cfg.common_eval.post_process
-
-    def encode_source(self, src):
-        """Method to generate source tokens from a string"""
-        if self.tokenizer is not None:
-            src = self.tokenizer.encode(src)
-        if self.bpe is not None:
-            src = self.bpe.encode(src)
-        src_tokens = self.task.source_dictionary.encode_line(src).long()
-        src_lens = src_tokens.size(0)
-        return {
-            "net_input": {
-                "src_tokens": src_tokens.view(1, src_lens),
-                "src_lengths": torch.tensor([src_lens]),
-            }
-        }
-
-    def decode_target(self, hypos):
-        """Method to decode the target string from tokens"""
-        hypo_str = self.tgt_dict.string(
-            hypos[0][0]["tokens"].int().cpu(),
-            self.remove_bpe,
-            get_symbols_to_strip_from_output(self.generator),
-        )
-        if self.bpe is not None:
-            hypo_str = self.bpe.decode(hypo_str)
-        if self.tokenizer is not None:
-            hypo_str = self.tokenizer.decode(hypo_str)
-        return hypo_str
-
-    def forward(self, sample):
-        hypos = self.task.inference_step(
-            self.generator,
-            [self.model],
-            sample,
-            prefix_tokens=None,
-            constraints=None,
-        )
-        return hypos
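Since the diff removes the framework wholesale, a minimal usage sketch may help future readers: subclass `BenchmarkingBase`, implement `forward(sample)`, and feed it samples in the documented format. The toy model and dataset below are hypothetical stand-ins, not part of the original file:

```python
import torch
from torch import nn

from examples.speech_to_speech.benchmarking.core import BenchmarkingBase


class ToyBenchmark(BenchmarkingBase):
    """Hypothetical benchmark target: any nn.Module wrapped in forward()."""

    def __init__(self):
        super().__init__()
        self.model = nn.Linear(80, 80)  # stand-in for a real fairseq model

    def forward(self, sample):
        src = sample["net_input"]["src_tokens"]  # (B, T, D) features
        with torch.no_grad():
            return self.model(src)


dataset = [
    {"net_input": {"src_tokens": torch.randn(1, t, 80),
                   "src_lengths": torch.tensor([t])}}
    for t in (100, 400)
]
bench = ToyBenchmark()
bench.warm_up(dataset[0], repeat=2)
# Average seconds per sample; count_flops / max_memory additionally require
# PAPI and memory_profiler to be installed.
print(bench.benchmark_run_time(dataset, repeat=3))
```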
-class GenerateWaveformFromCode(BenchmarkingBase):
-    """Class to support waveform generation from code. Currently, the vocoder only supports a single speaker"""
-
-    def __init__(self, args):
-        super().__init__()
-        with open(args.vocoder_cfg) as f:
-            vocoder_cfg = json.load(f)
-        self.dur_prediction = args.dur_prediction
-        self.vocoder = CodeHiFiGANVocoder(args.vocoder, vocoder_cfg)
-
-    def format_units(self, input):
-        code = torch.LongTensor(list(map(int, input.strip().split()))).view(1, -1)
-        return {"code": code}
-
-    def generate_vocoder_input(self, dataset):
-        return [self.format_units(sample) for sample in dataset]
-
-    def forward(self, sample):
-        return [{"waveform": self.vocoder(sample, self.dur_prediction)}]
-
-
-class HubertUnitExtractor(BenchmarkingBase):
-    def __init__(self, args):
-        super().__init__()
-        self.feature_reader = HubertFeatureReader(
-            args.hubert_ckpt_path, args.hubert_layer
-        )
-        self.kmeans = ApplyKmeans(args.hubert_km_path)
-
-    def forward(self, sample):
-        with torch.no_grad():
-            feat = []
-            max_chunk = self.feature_reader.max_chunk
-            for start in range(0, sample.size(1), max_chunk):
-                x_chunk = sample[:, start : start + max_chunk]
-                feat_chunk, _ = self.feature_reader.model.extract_features(
-                    source=x_chunk,
-                    padding_mask=None,
-                    mask=False,
-                    output_layer=self.feature_reader.layer,
-                )
-                feat.append(feat_chunk)
-            feat = torch.cat(feat, 1).squeeze(0)
-        return self.kmeans(feat).tolist()
-
-
-class SpeechGeneration(BenchmarkingBase):
-    """Class similar to examples/text_to_speech/generate_waveform.py.
-    Supports models with speech generation as the end goal (TTS, direct S2ST models, etc.)"""
-
-    def __init__(self, args):
-        super().__init__()
-        self.use_cuda = not getattr(args, "cpu", False)
-        self.setUp(args)
-        self.s2x_task = self.task
-
-    def setUp(self, args):
-        if args.task == "speech_to_speech":
-            args.normalize_waveform = False
-        self.task = tasks.setup_task(args)
-        self.pre_tokenizer = self.task.build_tokenizer(args)
-        self.bpe_tokenizer = self.task.build_bpe(args)
-        try:
-            self.src_dict = self.task.src_dict
-        except Exception:
-            self.src_dict = None
-        ensemble, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
-            [args.path],
-            arg_overrides=ast.literal_eval(args.model_overrides),
-            task=self.task,
-            strict=False,
-        )
-        self.model = ensemble[0]
-        if self.use_cuda:
-            self.model.cuda()
-        self.model.eval()
-        self.generator = self.task.build_generator(
-            [self.model],
-            args,
-        )
-
-    def processTextInput(self, text):
-        """Generate source tokens from text input"""
-        if self.pre_tokenizer is not None:
-            text = self.pre_tokenizer.encode(text)
-        if self.bpe_tokenizer is not None:
-            text = self.bpe_tokenizer.encode(text)
-        target = self.src_dict.encode_line(
-            text, add_if_not_exist=False, append_eos=True
-        ).long()
-        target = fairseq_data_utils.collate_tokens(
-            [target],
-            self.src_dict.pad(),
-            self.src_dict.eos(),
-            left_pad=False,
-            move_eos_to_beginning=False,
-        )
-        src_lengths = torch.tensor([target.size(1)], dtype=torch.long)
-        prev_output_tokens = None
-        sample = {
-            "net_input": {
-                "src_tokens": target,
-                "src_lengths": src_lengths,
-                "prev_output_tokens": prev_output_tokens,
-            }
-        }
-        sample = utils.move_to_cuda(sample) if self.use_cuda else sample
-        return sample
-
-    def forward(self, sample):
-        sample["speaker"] = None
-        output = self.generator.generate(self.model, sample)  # , has_targ=False
-        return output
-
-
-class S2UT(BenchmarkingBase):
-    """Class to support S2UT models.
Also supports generating waveforms from the units predicted""" - - def __init__(self, s2u_args, vocoder_args=None): - super().__init__() - self.s2u = Processing(s2u_args) - self.vocoder = None - if vocoder_args: - self.vocoder = GenerateWaveformFromCode(vocoder_args) - self.vocoder_input = None - - def forward(self, sample): - s2u_hypos = self.s2u(sample) - s2u_output = self.s2u.decode_target(s2u_hypos) - if not self.vocoder: - return s2u_output - units = self.vocoder.format_units(s2u_output) - vocoder_output = self.vocoder(units) - return vocoder_output - - def generate_s2u_outputs(self, dataset): - return [self.s2u.decode_target(self.s2u(sample)) for sample in dataset] - - def compute_metrics(self, metric_type, dataset, repeat=None): - """Generic function to compute metrics ignoring the io processing time""" - if self.vocoder and not self.vocoder_input: - self.s2u_output = self.generate_s2u_outputs(dataset) - self.vocoder_input = self.vocoder.generate_vocoder_input(self.s2u_output) - - s2u_metrics = getattr(self.s2u, metric_type)( - dataset, - repeat, - ) - vocoder_metrics = 0 - if self.vocoder: - vocoder_metrics = getattr(self.vocoder, metric_type)( - self.vocoder_input, - repeat, - ) - print( - f"metric_type = {metric_type} s2u_metrics = {s2u_metrics} \t vocoder_metrics = {vocoder_metrics}" - ) - if metric_type == "max_memory": - return max(s2u_metrics, vocoder_metrics) - else: - return s2u_metrics + vocoder_metrics - - def benchmark_run_time(self, dataset, repeat): - return self.compute_metrics("benchmark_run_time", dataset, repeat) - - def count_flops(self, dataset, repeat): - return self.compute_metrics("count_flops", dataset, repeat) - - def max_memory(self, dataset, repeat): - return self.compute_metrics("max_memory", dataset, repeat) - - -class Cascaded2StageS2ST(BenchmarkingBase): - """ST + TTS""" - - def __init__(self, s2t_args, tts_args): - super().__init__() - self.s2t = Processing(s2t_args) - self.s2x_task = self.s2t.task - self.tts = SpeechGeneration(tts_args) if tts_args else None - self.training = False - self.tts_inputs = None - - def forward(self, sample): - if not self.tts: - raise Exception( - "Forward function is not callable without tts. 
Reinitialize the class with tts_args" - ) - s2t_hypos = self.s2t(sample) - s2t_output = self.s2t.decode_target(s2t_hypos) - tts_input = self.tts.processTextInput(s2t_output) - tts_output = self.tts(tts_input) - return tts_output - - def generate_s2t_outputs(self, dataset): - """Process dataset and generate s2t outputs""" - return [self.s2t.decode_target(self.s2t(sample)) for sample in dataset] - - def generate_tts_inputs(self, dataset): - """Process dataset and generate tts inputs""" - return [self.tts.processTextInput(sample) for sample in dataset] - - def compute_metrics(self, metric_type, dataset, repeat=None): - """Generic function to compute metrics ignoring the io processing time""" - if not self.tts_inputs: - s2t_outputs = self.generate_s2t_outputs(dataset) - self.tts_inputs = self.generate_tts_inputs(s2t_outputs) - - s2t_metrics = getattr(self.s2t, metric_type)( - dataset, - repeat, - ) - - tts_metrics = getattr(self.tts, metric_type)( - self.tts_inputs, - repeat, - ) - print( - f"metric_type = {metric_type} s2t_metrics = {s2t_metrics} \t tts_metrics = {tts_metrics}" - ) - if metric_type == "max_memory": - return max(s2t_metrics, tts_metrics) - else: - return s2t_metrics + tts_metrics - - def benchmark_run_time(self, dataset, repeat): - return self.compute_metrics("benchmark_run_time", dataset, repeat) - - def count_flops(self, dataset, repeat): - return self.compute_metrics("count_flops", dataset, repeat) - - def max_memory(self, dataset, repeat): - return self.compute_metrics("max_memory", dataset, repeat) - - -class Cascaded3StageS2ST(Cascaded2StageS2ST): - """ASR + MT + TTS""" - - def __init__(self, s2t_args, tts_args, mt_args): - super().__init__(s2t_args, tts_args) - self.mt = Processing(mt_args) - self.mt_inputs = [] - - def forward(self, sample): - s2t_hypos = self.s2t(sample) - s2t_output = self.s2t.decode_target(s2t_hypos) - mt_input = self.mt.encode_source(s2t_output) - mt_hypos = self.mt(mt_input) - mt_output = self.mt.decode_target(mt_hypos) - tts_input = self.tts.processTextInput(mt_output) - tts_output = self.tts(tts_input) - return tts_output - - def generate_mt_inputs(self, dataset): - """Process dataset to generate mt model inputs""" - return [self.mt.encode_source(sample) for sample in dataset] - - def generate_mt_outputs(self, dataset): - """Process dataset to generate mt model outputs""" - return [self.mt.decode_target(self.mt(sample)) for sample in dataset] - - def compute_metrics(self, metric_type, dataset, repeat=None): - """Generic function to compute metrics ignoring the io processing time""" - if not self.tts_inputs: - s2t_outputs = self.generate_s2t_outputs(dataset) - self.mt_inputs = self.generate_mt_inputs(s2t_outputs) - mt_outputs = self.generate_mt_outputs(self.mt_inputs) - self.tts_inputs = self.generate_tts_inputs(mt_outputs) - - s2t_metrics = getattr(self.s2t, metric_type)( - dataset, - repeat, - ) - mt_metrics = getattr(self.mt, metric_type)(self.mt_inputs, repeat) - tts_metrics = getattr(self.tts, metric_type)( - self.tts_inputs, - repeat, - ) - print( - f"metric_type = {metric_type} s2t_metrics = {s2t_metrics} \t mt_metrics = {mt_metrics} \t tts_metrics = {tts_metrics}" - ) - if metric_type == "max_memory": - return max(s2t_metrics, mt_metrics, tts_metrics) - else: - return s2t_metrics + mt_metrics + tts_metrics diff --git a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/data_utils.py b/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/data_utils.py deleted file mode 100644 index c73a59951..000000000 --- 
a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/data_utils.py +++ /dev/null @@ -1,264 +0,0 @@ -from fairseq import tasks -import numpy as np -import logging -import random -from fairseq import options -import torch -import os -import soundfile as sf - -from fairseq.data.audio.audio_utils import ( - get_waveform, - parse_path, -) - -logging.basicConfig() -logging.root.setLevel(logging.INFO) -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - -random.seed(1) -np.random.seed(1) -random_number_generator = np.random.RandomState(30) - - -def generate_random_data_sample(T, B=1, D=80): - """Generate random data sample given the T, B, D values""" - net_input = { - "src_tokens": torch.tensor(random_number_generator.randn(B, T, D)).float(), - "src_lengths": torch.tensor([T]), - } - return {"net_input": net_input} - - -def generate_random_dataset(T_range_min, T_range_max, B=1, D=80, dataset_size=100): - """Generate random dataset with T values within a given range, B, D""" - T_values = [random.randint(T_range_min, T_range_max) for i in range(dataset_size)] - dataset = [] - for t in T_values: - dataset.append(generate_random_data_sample(t, B, D)) - return dataset, sum(T_values) / dataset_size - - -def load_dataset_npy(file_name, dataset_size=None): - """Load dataset from a .npy file.""" - data = np.load(file_name, allow_pickle=True) - if dataset_size: - data = data[:dataset_size] - return data - - -def load_dataset_raw_to_waveforms( - file_name, - dataset_size=None, - need_waveform=True, - sample_rate=16000, - read_using_soundfile=False, -): - """Load raw dataset from w2v tsv file. Optionally get waveforms""" - data = [] - with open(file_name, "r") as fp: - lines = fp.readlines() - data = [ - os.path.join(lines[0].strip(), line.strip().split("\t")[0]) - for line in lines[1:] - ] - - if dataset_size: - data = data[:dataset_size] - - if not need_waveform: - return data - - features = [] - if read_using_soundfile: - for _i, d in enumerate(data): - wav = sf.read(d)[0] - if wav.ndim == 2: - wav = wav.mean(-1) - features.append(torch.from_numpy(wav).float().view(1, -1)) - else: - for i, d in enumerate(data): - _path, slice_ptr = parse_path(d) - if len(slice_ptr) == 0: - feat = get_waveform( - _path, always_2d=True, output_sample_rate=sample_rate - )[0] - features.append( - { - "id": i, - "net_input": { - "src_tokens": torch.tensor(feat), - "src_lengths": torch.tensor([feat.shape[1]]), - }, - } - ) - else: - raise Exception("Currently unsupported data format") - return features - - -def load_dataset_task( - args, - batch_size=1, - limit_size=None, - ref_dataset=None, -): - """Loads dataset based on args by creating a task""" - if not args.data or not args.subset or not args.task: - raise Exception( - "Please provide necessary arguments to load the dataset - data, subset and task" - ) - task = tasks.setup_task(args) - - task.load_dataset(args.subset) - if not limit_size: - limit_size = len(task.dataset(args.subset)) - - iter = task.get_batch_iterator( - dataset=task.dataset(args.subset), max_sentences=batch_size - ).next_epoch_itr(shuffle=False) - dataset = [] - for i, sample in enumerate(iter): - sample = { - "id": task.datasets[args.subset].ids[sample["id"].item()], - "net_input": { - "src_tokens": sample["net_input"]["src_tokens"], - "src_lengths": sample["net_input"]["src_lengths"], - }, - } - dataset.append(sample) - if i == limit_size - 1: - break - - if ref_dataset: - try: - ids = get_ids_from_dataset(ref_dataset) - except Exception as e: - raise Exception(f"{e} - 
Cannot extract ids from reference dataset")
-
-        filtered_dataset = []
-        for sample in dataset:
-            if (
-                sample["id"] in ids
-                or sample["id"][5:] in ids
-                or f"dev_{sample['id']}" in ids
-            ):
-                filtered_dataset.append(sample)
-        dataset = filtered_dataset
-
-    max_len, min_len, avg_len = get_dataset_stats(dataset)
-    print(
-        f"{args.subset} dataset stats : num_samples={len(dataset)} max_len = {max_len} min_len = {min_len} avg_len = {avg_len}"
-    )
-
-    return dataset
-
-
-def randomly_sample_subset(dataset, size=500):
-    """Randomly sample a subset from a dataset"""
-    random_indices = [random.randint(0, len(dataset) - 1) for i in range(size)]
-    return [dataset[i] for i in random_indices]
-
-
-def get_short_data_subset(dataset, size=500):
-    """Get a subset of desired size by sorting based on src_lengths (ascending)"""
-    return sort_dataset(dataset)[:size]
-
-
-def get_long_data_subset(dataset, size=500):
-    """Get a subset of desired size by sorting based on src_lengths (descending)"""
-    return sort_dataset(dataset, reverse=True)[:size]
-
-
-def sort_dataset(dataset, reverse=False):
-    return sorted(
-        dataset, key=lambda x: x["net_input"]["src_lengths"].item(), reverse=reverse
-    )
-
-
-def save_dataset_npy(dataset, file_name):
-    """Save a dataset as a .npy file"""
-    np.save(file_name, dataset)
-
-
-def get_dataset_stats(dataset):
-    """Get stats about a dataset based on the src_lengths of its samples"""
-    max_len = 0
-    min_len = 100000
-    avg_len = 0
-    for d in dataset:
-        max_len = max(max_len, d["net_input"]["src_lengths"].item())
-        min_len = min(min_len, d["net_input"]["src_lengths"].item())
-        avg_len += d["net_input"]["src_lengths"].item()
-
-    return max_len, min_len, avg_len / len(dataset)
-
-
-def make_parser():
-    """
-    Additional args:
-    1. Provide the dataset dir path using --data.
-    2. Loading the dataset doesn't require a config; provide --config-yaml to apply additional feature transforms.
-    """
-    parser = options.get_speech_generation_parser()
-    parser.add_argument(
-        "--subset",
-        default=None,
-        type=str,
-        required=True,
-        help="Subset to use for dataset generation",
-    )
-    parser.add_argument(
-        "--dataset-save-dir",
-        default=None,
-        type=str,
-        required=False,
-        help="Dir path in which the datasets are to be saved",
-    )
-    parser.add_argument(
-        "--ref-dataset",
-        default=None,
-        type=str,
-        required=False,
-        help="If provided, the ids in the reference dataset will be used to filter the new dataset generated.",
-    )
-    parser.add_argument("--dataset-save-token", default="", type=str, required=False)
-
-    options.add_generation_args(parser)
-    return parser
-
-
-def get_ids_from_dataset(dataset):
-    return {sample["id"]: 1 for sample in dataset}
-
-
-def cli_main():
-    parser = make_parser()
-    args = options.parse_args_and_arch(parser)
-    dataset = load_dataset_task(args)
-
-    random_dataset = randomly_sample_subset(dataset)
-    short_dataset = get_short_data_subset(dataset)
-    long_dataset = get_long_data_subset(dataset)
-
-    if args.dataset_save_token:
-        args.dataset_save_token = f"_{args.dataset_save_token}_"
-
-    if args.dataset_save_dir:
-        save_dataset_npy(
-            random_dataset,
-            f"{args.dataset_save_dir}/random_dataset{args.dataset_save_token}w_ids.npy",
-        )
-        save_dataset_npy(
-            short_dataset,
-            f"{args.dataset_save_dir}/short_dataset{args.dataset_save_token}w_ids.npy",
-        )
-        save_dataset_npy(
-            long_dataset,
-            f"{args.dataset_save_dir}/long_dataset{args.dataset_save_token}w_ids.npy",
-        )
-
-
-if __name__ == "__main__":
-    cli_main()
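The entry point in the next file (`get_metrics.py`) merges a `general` section into the CLI args and then applies `stage1`..`stage3` overrides in order. As a sketch (all paths are placeholders, not real checkpoints), a single-stage config in the shape of the `S2T.yaml` shown earlier could be generated programmatically:

```python
# Hypothetical: write a minimal single-stage benchmarking config.
import yaml

config = {
    "general": {
        "dataset_path": "/path/to/dataset.npy",  # placeholder
        "cpu": True,
        "model_type": "S2T",
        "dataset_size": 1,
    },
    "stage1": {
        "data": "/path/to/data_bin",        # placeholder
        "task": "speech_to_text",
        "path": "/path/to/checkpoint.pt",   # placeholder
        "config_yaml": "config.yaml",
        "max_len_a": 2,
        "max_len_b": 500,
    },
}
with open("my_s2t_benchmark.yaml", "w") as f:
    yaml.dump(config, f)
```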
diff --git a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/get_metrics.py b/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/get_metrics.py
deleted file mode 100644
index 773257f5d..000000000
--- a/kosmos-g/fairseq/examples/speech_to_speech/benchmarking/get_metrics.py
+++ /dev/null
@@ -1,162 +0,0 @@
-import copy
-import torch
-import logging
-from argparse import Namespace
-import yaml
-from fairseq import options
-from examples.speech_to_speech.benchmarking.core import (
-    Processing,
-    SpeechGeneration,
-    Cascaded2StageS2ST,
-    Cascaded3StageS2ST,
-    S2UT,
-)
-from examples.speech_to_speech.benchmarking.data_utils import (
-    load_dataset_npy,
-    load_dataset_raw_to_waveforms,
-)
-
-
-logging.basicConfig(level=logging.INFO)
-logger = logging.getLogger(__name__)
-
-torch.manual_seed(1)
-# see core.py: use the current deterministic-mode API
-torch.use_deterministic_algorithms(True)
-
-
-def make_parser():
-    """Note: As the names indicate, use s2x_args (e.g. ST, ASR) for models with speech input,
-    x2s_args for models with speech output (e.g. TTS), and mt_args for translation models (e.g. MT, T2U).
-    For direct S2ST models, use x2s_args to provide model details.
-    """
-    parser = options.get_speech_generation_parser()
-    parser.add_argument("--target-is-code", action="store_true", default=False)
-    parser.add_argument("--config", type=str)
-    parser.add_argument(
-        "--model-type",
-        default="S2U",
-        choices=["S2S", "TTS", "S2UT", "MT", "S2T", "2StageS2ST", "3StageS2ST"],
-        help="Choose one of the models. For model inference implementation, refer to core.py",
-    )
-    parser.add_argument(
-        "--dataset-path",
-        type=str,
-        help="""File to load dataset from. Assumes dataset is a list of samples.
-        Each sample is a dict of format {'net_input':{'src_tokens':torch.tensor(),'src_lengths':torch.tensor()}}""",
-    )
-    parser.add_argument(
-        "--dataset-type",
-        type=str,
-        default="npy",
-        choices=["npy", "raw"],
-        help="""Type of input dataset file""",
-    )
-    parser.add_argument(
-        "--read-using-sf",
-        action="store_true",
-        help="""If soundfile should be used to read the raw dataset""",
-    )
-    parser.add_argument(
-        "--dataset-size",
-        default=None,
-        type=int,
-        help="Dataset size to use for benchmarking",
-    )
-    parser.add_argument(
-        "--dump-speech-waveforms-dir",
-        default=None,
-        type=str,
-        help="Directory to dump the speech waveforms computed on the dataset.",
-    )
-    parser.add_argument(
-        "--dump-waveform-file-prefix",
-        default="",
-        type=str,
-        help="File name prefix for the saved speech waveforms",
-    )
-    parser.add_argument(
-        "--feat-dim", default=80, type=int, help="Input feature dimension"
-    )
-    parser.add_argument(
-        "--target-sr",
-        default=16000,
-        type=int,
-        help="Target sample rate for dumping waveforms",
-    )
-
-    options.add_generation_args(parser)
-    options.get_interactive_generation_parser(parser)
-    return parser
-
-
-def cli_main():
-    parser = make_parser()
-    args = options.parse_args_and_arch(parser)
-
-    with open(
-        args.config,
-        "r",
-    ) as f:
-        config = yaml.load(f, Loader=yaml.FullLoader)
-    dict_args = vars(args)
-    dict_args.update(config["general"])
-    args = Namespace(**dict_args)
-
-    i = 1
-    stage_args = []
-    while i <= 3:
-        var = f"stage{i}"
-        tmp_args = copy.deepcopy(dict_args)
-        if var in config:
-            tmp_args.update(config[var])
-            stage_args.append(Namespace(**tmp_args))
-            i += 1
-        else:
-            break
-
-    if args.model_type == "S2S" or args.model_type == "TTS":
-        model = SpeechGeneration(stage_args[0])
-    elif args.model_type == "S2UT":
-        model = S2UT(stage_args[0], stage_args[1] if len(stage_args) > 1 else None)
-    elif args.model_type == "MT" or args.model_type == "S2T":
-        model = Processing(stage_args[0])
-    elif args.model_type == "2StageS2ST":
-        model = Cascaded2StageS2ST(stage_args[0], stage_args[1])
-    elif args.model_type == "3StageS2ST":
-        model = Cascaded3StageS2ST(stage_args[0], stage_args[2], stage_args[1])
-    else:
-        raise Exception(f"Currently unsupported model type {args.model_type}")
-
-    print(f"Evaluating on dataset - {args.dataset_path}\n")
-
-    if args.dataset_type == "npy":
-        dataset = load_dataset_npy(args.dataset_path, dataset_size=args.dataset_size)
-    elif args.dataset_type == "raw":
-        dataset = load_dataset_raw_to_waveforms(
-            args.dataset_path,
-            dataset_size=args.dataset_size,
-            read_using_soundfile=args.read_using_sf,
-        )
-    else:
-        raise Exception(f"Invalid dataset type {args.dataset_type}")
-
-    model.warm_up(sample=dataset[0], repeat=2)
-
-    run_time, memory, flops = model.gather_all_metrics(dataset, repeat=1)
-    print(f"run_time = {run_time}sec \tmemory = {memory}MiB \tflops = {flops}")
-
-    if args.dump_speech_waveforms_dir:
-        model.dump_final_speech_output(
-            dataset,
-            args.dump_speech_waveforms_dir,
-            lambda x: x,
-            args.target_sr,
-            prefix=args.dump_waveform_file_prefix,
-        )
-
-
-if __name__ == "__main__":
-    cli_main()
diff --git a/kosmos-g/fairseq/examples/speech_to_speech/generate_waveform_from_code.py b/kosmos-g/fairseq/examples/speech_to_speech/generate_waveform_from_code.py
deleted file mode 100644
index 82aa7acfb..000000000
--- a/kosmos-g/fairseq/examples/speech_to_speech/generate_waveform_from_code.py
+++ /dev/null
@@ -1,116 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import json -import logging -from pathlib import Path -import random -import soundfile as sf -import torch - -from tqdm import tqdm - -from fairseq import utils -from fairseq.models.text_to_speech.vocoder import CodeHiFiGANVocoder - - -logging.basicConfig() -logging.root.setLevel(logging.INFO) -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - - -def dump_result(args, sample_id, pred_wav, suffix=""): - sf.write( - f"{args.results_path}/{sample_id}{suffix}_pred.wav", - pred_wav.detach().cpu().numpy(), - 16000, - ) - - -def load_code(in_file): - with open(in_file) as f: - out = [list(map(int, line.strip().split())) for line in f] - return out - - -def main(args): - logger.info(args) - - use_cuda = torch.cuda.is_available() and not args.cpu - - with open(args.vocoder_cfg) as f: - vocoder_cfg = json.load(f) - vocoder = CodeHiFiGANVocoder(args.vocoder, vocoder_cfg) - if use_cuda: - vocoder = vocoder.cuda() - - multispkr = vocoder.model.multispkr - if multispkr: - logger.info("multi-speaker vocoder") - num_speakers = vocoder_cfg.get( - "num_speakers", 200 - ) # following the default in codehifigan to set to 200 - assert ( - args.speaker_id < num_speakers - ), f"invalid --speaker-id ({args.speaker_id}) with total #speakers = {num_speakers}" - - data = load_code(args.in_code_file) - Path(args.results_path).mkdir(exist_ok=True, parents=True) - for i, d in tqdm(enumerate(data), total=len(data)): - x = { - "code": torch.LongTensor(d).view(1, -1), - } - suffix = "" - if multispkr: - spk = ( - random.randint(0, num_speakers - 1) - if args.speaker_id == -1 - else args.speaker_id - ) - suffix = f"_spk{spk}" - x["spkr"] = torch.LongTensor([spk]).view(1, 1) - - x = utils.move_to_cuda(x) if use_cuda else x - wav = vocoder(x, args.dur_prediction) - dump_result(args, i, wav, suffix=suffix) - - -def cli_main(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--in-code-file", type=str, required=True, help="one unit sequence per line" - ) - parser.add_argument( - "--vocoder", type=str, required=True, help="path to the CodeHiFiGAN vocoder" - ) - parser.add_argument( - "--vocoder-cfg", - type=str, - required=True, - help="path to the CodeHiFiGAN vocoder config", - ) - parser.add_argument("--results-path", type=str, required=True) - parser.add_argument( - "--dur-prediction", - action="store_true", - help="enable duration prediction (for reduced/unique code sequences)", - ) - parser.add_argument( - "--speaker-id", - type=int, - default=-1, - help="Speaker id (for vocoder that supports multispeaker). Set to -1 to randomly sample speakers.", - ) - parser.add_argument("--cpu", action="store_true", help="run on CPU") - - args = parser.parse_args() - - main(args) - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/examples/speech_to_speech/preprocessing/__init__.py b/kosmos-g/fairseq/examples/speech_to_speech/preprocessing/__init__.py deleted file mode 100644 index 626423691..000000000 --- a/kosmos-g/fairseq/examples/speech_to_speech/preprocessing/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
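The `generate_waveform_from_code.py` script above expects `--in-code-file` to contain one space-separated unit sequence per line (see `load_code`). A small, hypothetical sketch of preparing such a file:

```python
# Hypothetical: write two unit sequences in the format load_code expects
# (space-separated integer units, one sequence per line).
unit_sequences = [
    [12, 12, 85, 3, 3, 41],
    [7, 99, 99, 14],
]
with open("codes.txt", "w") as f:
    for seq in unit_sequences:
        f.write(" ".join(map(str, seq)) + "\n")

# Then (paths are placeholders):
#   python examples/speech_to_speech/generate_waveform_from_code.py \
#     --in-code-file codes.txt --vocoder $VOCODER_CKPT \
#     --vocoder-cfg $VOCODER_CFG --results-path out/ --dur-prediction
```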
diff --git a/kosmos-g/fairseq/examples/speech_to_speech/preprocessing/data_utils.py b/kosmos-g/fairseq/examples/speech_to_speech/preprocessing/data_utils.py deleted file mode 100644 index 48fe57b74..000000000 --- a/kosmos-g/fairseq/examples/speech_to_speech/preprocessing/data_utils.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from pathlib import Path -from typing import List, Optional - -from examples.speech_to_text.data_utils import S2TDataConfigWriter - - -def gen_config_yaml( - manifest_root: Path, - yaml_filename: str = "config.yaml", - specaugment_policy: Optional[str] = "lb", - feature_transform: Optional[List[str]] = None, - input_channels: Optional[int] = 1, - input_feat_per_channel: Optional[int] = 80, - audio_root: str = "", - vocoder_type: Optional[str] = None, - vocoder_checkpoint: Optional[str] = None, - vocoder_cfg: Optional[str] = None, - extra=None, -): - manifest_root = manifest_root.absolute() - writer = S2TDataConfigWriter(manifest_root / yaml_filename) - - if input_channels is not None: - writer.set_input_channels(input_channels) - if input_feat_per_channel is not None: - writer.set_input_feat_per_channel(input_feat_per_channel) - specaugment_setters = { - "lb": writer.set_specaugment_lb_policy, - "ld": writer.set_specaugment_ld_policy, - "sm": writer.set_specaugment_sm_policy, - "ss": writer.set_specaugment_ss_policy, - } - specaugment_setter = specaugment_setters.get(specaugment_policy, None) - if specaugment_setter is not None: - specaugment_setter() - - if feature_transform is None: - feature_transform = [] - else: - writer.set_feature_transforms("*", feature_transform) - - if specaugment_policy is not None: - writer.set_feature_transforms("_train", feature_transform + ["specaugment"]) - - if len(audio_root) > 0: - writer.set_audio_root(audio_root) - - if ( - vocoder_type is not None - and vocoder_checkpoint is not None - and vocoder_cfg is not None - ): - writer.set_extra( - { - "vocoder": { - "type": vocoder_type, - "config": vocoder_cfg, - "checkpoint": vocoder_checkpoint, - } - } - ) - - if extra is not None: - writer.set_extra(extra) - writer.flush() diff --git a/kosmos-g/fairseq/examples/speech_to_speech/preprocessing/prep_s2spect_data.py b/kosmos-g/fairseq/examples/speech_to_speech/preprocessing/prep_s2spect_data.py deleted file mode 100644 index 2748b37ae..000000000 --- a/kosmos-g/fairseq/examples/speech_to_speech/preprocessing/prep_s2spect_data.py +++ /dev/null @@ -1,169 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import logging -import os -from pathlib import Path -import shutil -import torchaudio - -import soundfile as sf -from tqdm import tqdm -import pandas as pd - -from examples.speech_synthesis.data_utils import extract_logmel_spectrogram -from examples.speech_to_speech.preprocessing.data_utils import gen_config_yaml -from examples.speech_to_text.data_utils import create_zip, get_zip_manifest, save_df_to_tsv -from fairseq.data.audio.audio_utils import convert_waveform - - -logger = logging.getLogger(__name__) - -MANIFEST_COLUMNS = ["id", "src_audio", "src_n_frames", "tgt_audio", "tgt_n_frames"] - - -def prepare_target_data(args, tgt_audios): - feature_name = "logmelspec80" - zip_path = args.output_root / f"{feature_name}.zip" - if zip_path.exists(): - print(f"{zip_path} exists.") - return zip_path - - feature_root = args.output_root / feature_name - feature_root.mkdir(exist_ok=True) - - print("Extracting Mel spectrogram features...") - for tgt_audio in tqdm(tgt_audios): - sample_id = tgt_audio.stem - waveform, sample_rate = torchaudio.load(tgt_audio.as_posix()) - waveform, sample_rate = convert_waveform( - waveform, sample_rate, normalize_volume=args.normalize_volume, - to_sample_rate=args.sample_rate - ) - extract_logmel_spectrogram( - waveform, sample_rate, feature_root / f"{sample_id}.npy", - win_length=args.win_length, hop_length=args.hop_length, - n_fft=args.n_fft, n_mels=args.n_mels, f_min=args.f_min, - f_max=args.f_max - ) - print("ZIPing features...") - create_zip(feature_root, zip_path) - shutil.rmtree(feature_root) - - return zip_path - - -def process(args): - os.makedirs(args.output_root, exist_ok=True) - - manifest = {} - tgt_audios = [] - for split in args.data_split: - print(f"Processing {split}...") - - manifest[split] = {c: [] for c in MANIFEST_COLUMNS} - missing_tgt_audios = [] - src_audios = list(args.source_dir.glob(f"{split}/*.wav")) - for src_audio in tqdm(src_audios): - sample_id = src_audio.stem - - tgt_audio = args.target_dir / split / f"{sample_id}.wav" - if not tgt_audio.is_file(): - missing_tgt_audios.append(sample_id) - continue - - tgt_audios.append(tgt_audio) - - src_n_frames = sf.info(src_audio.as_posix()).frames - manifest[split]["id"].append(sample_id) - manifest[split]["src_audio"].append(src_audio.as_posix()) - manifest[split]["src_n_frames"].append( - src_n_frames // 160 - ) # estimation of 10-ms frame for 16kHz audio - - print(f"Processed {len(manifest[split]['id'])} samples") - if len(missing_tgt_audios) > 0: - print( - f"{len(missing_tgt_audios)} with missing target data (first 3 examples: {', '.join(missing_tgt_audios[:3])})" - ) - - # Extract features and pack features into ZIP - zip_path = prepare_target_data(args, tgt_audios) - - print("Fetching ZIP manifest...") - tgt_audio_paths, tgt_audio_lengths = get_zip_manifest(zip_path) - - print("Generating manifest...") - for split in args.data_split: - print(f"Processing {split}...") - - for sample_id in tqdm(manifest[split]["id"]): - manifest[split]["tgt_audio"].append(tgt_audio_paths[sample_id]) - manifest[split]["tgt_n_frames"].append(tgt_audio_lengths[sample_id]) - - out_manifest = args.output_root / f"{split}.tsv" - print(f"Writing manifest to {out_manifest}...") - save_df_to_tsv(pd.DataFrame.from_dict(manifest[split]), out_manifest) - - # Generate config YAML - win_len_t = args.win_length / args.sample_rate - hop_len_t = args.hop_length / args.sample_rate - extra = { - "features": { - "type": "spectrogram+melscale+log", - "sample_rate": args.sample_rate, - "eps": 1e-5, "n_mels": 
args.n_mels, "n_fft": args.n_fft, - "window_fn": "hann", "win_length": args.win_length, - "hop_length": args.hop_length, - "win_len_t": win_len_t, "hop_len_t": hop_len_t, - "f_min": args.f_min, "f_max": args.f_max, - "n_stft": args.n_fft // 2 + 1 - } - } - gen_config_yaml( - args.output_root, - audio_root=args.output_root.as_posix(), - specaugment_policy="lb", - feature_transform=["utterance_cmvn", "delta_deltas"], - extra=extra, - ) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--source-dir", required=True, type=Path, help="source audio directory" - ) - parser.add_argument( - "--target-dir", required=True, type=Path, help="target audio directory" - ) - parser.add_argument( - "--data-split", - default=["train", "valid", "test"], - nargs="+", - help="data split names", - ) - parser.add_argument( - "--output-root", required=True, type=Path, help="output directory" - ) - # target feature related - parser.add_argument("--win-length", type=int, default=1024) - parser.add_argument("--hop-length", type=int, default=256) - parser.add_argument("--n-fft", type=int, default=1024) - parser.add_argument("--n-mels", type=int, default=80) - parser.add_argument("--f-min", type=int, default=20) - parser.add_argument("--f-max", type=int, default=8000) - parser.add_argument("--sample-rate", type=int, default=22050) - parser.add_argument("--normalize-volume", "-n", action="store_true") - - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/speech_to_speech/preprocessing/prep_s2ut_data.py b/kosmos-g/fairseq/examples/speech_to_speech/preprocessing/prep_s2ut_data.py deleted file mode 100644 index beeffaa0e..000000000 --- a/kosmos-g/fairseq/examples/speech_to_speech/preprocessing/prep_s2ut_data.py +++ /dev/null @@ -1,128 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import logging -from pathlib import Path - -import soundfile as sf -from tqdm import tqdm -import pandas as pd - -from examples.speech_to_speech.preprocessing.data_utils import gen_config_yaml -from examples.speech_to_text.data_utils import save_df_to_tsv - -logger = logging.getLogger(__name__) - -MANIFEST_COLUMNS = ["id", "src_audio", "src_n_frames", "tgt_audio", "tgt_n_frames"] - - -def load_units(in_file): - out = {} - with open(in_file) as f: - for line in f: - sample_id, units = line.strip().split("|", 1) - out[sample_id] = units.split() - - return out - - -def process_units(units, reduce=False): - if not reduce: - return units - - out = [u for i, u in enumerate(units) if i == 0 or u != units[i - 1]] - return out - - -def process(args): - args.output_root.mkdir(exist_ok=True) - - print("Generating manifest...") - for split in args.data_split: - print(f"Processing {split}") - - # load target units - target_unit_data = load_units(args.target_dir / f"{split}.txt") - - manifest = {c: [] for c in MANIFEST_COLUMNS} - missing_tgt_audios = [] - src_audios = list(args.source_dir.glob(f"{split}/*.wav")) - for src_audio in tqdm(src_audios): - sample_id = src_audio.stem - - if sample_id not in target_unit_data: - missing_tgt_audios.append(sample_id) - continue - - src_n_frames = sf.info(src_audio.as_posix()).frames - manifest["id"].append(sample_id) - manifest["src_audio"].append(src_audio.as_posix()) - manifest["src_n_frames"].append( - src_n_frames // 160 - ) # estimation of 10-ms frame for 16kHz audio - - target_units = process_units(target_unit_data[sample_id], args.reduce_unit) - manifest["tgt_audio"].append(" ".join(target_units)) - manifest["tgt_n_frames"].append(len(target_units)) - - print(f"Processed {len(manifest['id'])} samples") - if len(missing_tgt_audios) > 0: - print( - f"{len(missing_tgt_audios)} with missing target data (first 3 examples: {', '.join(missing_tgt_audios[:3])})" - ) - - out_manifest = args.output_root / f"{split}.tsv" - print(f"Writing manifest to {out_manifest}...") - save_df_to_tsv(pd.DataFrame.from_dict(manifest), out_manifest) - - # Generate config YAML - gen_config_yaml( - args.output_root, - specaugment_policy="lb", - feature_transform=["utterance_cmvn"], - vocoder_type="code_hifigan", - vocoder_checkpoint=args.vocoder_checkpoint, - vocoder_cfg=args.vocoder_cfg, - ) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--source-dir", required=True, type=Path, help="source audio directory" - ) - parser.add_argument( - "--target-dir", required=True, type=Path, help="target audio directory" - ) - parser.add_argument( - "--data-split", - default=["train", "valid", "test"], - nargs="+", - help="data split names", - ) - parser.add_argument( - "--output-root", required=True, type=Path, help="output directory" - ) - parser.add_argument( - "--reduce-unit", - action="store_true", - help="reduce a target unit sequence to a unique unit sequence, i.e. 
'1 1 1 2 2' -> '1 2'",
-    )
-    parser.add_argument(
-        "--vocoder-checkpoint", default=None, type=str, help="vocoder checkpoint"
-    )
-    parser.add_argument(
-        "--vocoder-cfg", default=None, type=str, help="vocoder config file"
-    )
-
-    args = parser.parse_args()
-
-    process(args)
-
-
-if __name__ == "__main__":
-    main()
diff --git a/kosmos-g/fairseq/examples/speech_to_text/README.md b/kosmos-g/fairseq/examples/speech_to_text/README.md
deleted file mode 100644
index f639d300d..000000000
--- a/kosmos-g/fairseq/examples/speech_to_text/README.md
+++ /dev/null
@@ -1,77 +0,0 @@
-# Speech-to-Text (S2T) Modeling
-
-[https://www.aclweb.org/anthology/2020.aacl-demo.6](https://www.aclweb.org/anthology/2020.aacl-demo.6.pdf)
-
-Speech recognition (ASR) and speech-to-text translation (ST) with fairseq.
-
-## Data Preparation
-S2T modeling data consists of source speech features, target text, and other optional information
-(source text, speaker id, etc.). Fairseq S2T uses per-dataset-split TSV manifest files
-to store this information. Each data field is represented by a column in the TSV file.
-
-Unlike text token embeddings, speech features (e.g. log mel-scale filter banks) are usually fixed
-during model training and can be pre-computed. The manifest file contains the path to
-either the feature file in NumPy format or the WAV/FLAC audio file. For the latter,
-features will be extracted on-the-fly by fairseq S2T. Optionally, feature/audio files can be packed
-into uncompressed ZIP files (then accessed via byte offset and length) to improve I/O performance.
-
-Fairseq S2T also employs a YAML file for data-related configurations: tokenizer type and dictionary path
-for the target text, feature transforms such as CMVN (cepstral mean and variance normalization) and SpecAugment,
-temperature-based resampling, etc.
-
-## Model Training
-Fairseq S2T uses the unified `fairseq-train` interface for model training. It requires the arguments `--task speech_to_text`,
- `--arch <model architecture in fairseq.models.speech_to_text.*>` and `--config-yaml <config YAML filename>`.
-
-## Inference & Evaluation
-Fairseq S2T uses the unified `fairseq-generate`/`fairseq-interactive` interface for inference and evaluation. It
-requires the arguments `--task speech_to_text` and `--config-yaml <config YAML filename>`. The interactive console takes
-audio paths (one per line) as input.
-
-
-## Examples
-- [Speech Recognition (ASR) on LibriSpeech](docs/librispeech_example.md)
-- [Speech-to-Text Translation (ST) on MuST-C](docs/mustc_example.md)
-- [Speech-to-Text Translation (ST) on CoVoST 2](docs/covost_example.md)
-- [Speech-to-Text Translation (ST) on Multilingual TEDx](docs/mtedx_example.md)
-- [Simultaneous Speech-to-Text Translation (SimulST) on MuST-C](docs/simulst_mustc_example.md)
-
-## Updates
-- 02/04/2021: Added interactive decoding (`fairseq-interactive`) support. Examples:
-  [ASR (LibriSpeech)](docs/librispeech_example.md#interactive-decoding)
-  and [ST (CoVoST 2)](docs/covost_example.md#interactive-decoding).
-- 01/08/2021: Several fixes for the S2T Transformer model, inference-time de-tokenization, scorer configuration and data
-  preparation scripts. We also add pre-trained models to the examples and revise the instructions.
-  Breaking changes: the data preparation scripts now extract filterbank features without CMVN. CMVN is instead applied
-  on-the-fly (defined in the config YAML).
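As an illustration of the per-split TSV manifests described under Data Preparation, a toy manifest could be assembled with pandas. The column names here follow the ASR examples (`id`, `audio`, `n_frames`, `tgt_text`, `speaker`) and the paths are placeholders; real manifests are produced by the `prep_*_data.py` scripts:

```python
# Hypothetical two-utterance manifest, one data field per TSV column.
import pandas as pd

manifest = pd.DataFrame(
    {
        "id": ["utt0", "utt1"],
        "audio": ["data/utt0.wav", "data/utt1.wav"],  # or .npy features / zip offsets
        "n_frames": [1234, 2345],
        "tgt_text": ["hello world", "good morning"],
        "speaker": ["spk0", "spk1"],
    }
)
manifest.to_csv("train.tsv", sep="\t", index=False)
```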
- -## What's Next -- We are migrating the old fairseq [ASR example](../speech_recognition) into this S2T framework and - merging the features from both sides. -- The following papers also base their experiments on fairseq S2T. We are adding more examples for replication. - - [Improving Cross-Lingual Transfer Learning for End-to-End Speech Recognition with Speech Translation (Wang et al., 2020)](https://arxiv.org/abs/2006.05474) - - [Self-Supervised Representations Improve End-to-End Speech Translation (Wu et al., 2020)](https://arxiv.org/abs/2006.12124) - - [Self-Training for End-to-End Speech Translation (Pino et al., 2020)](https://arxiv.org/abs/2006.02490) - - [CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus (Wang et al., 2020)](https://arxiv.org/abs/2002.01320) - - [Harnessing Indirect Training Data for End-to-End Automatic Speech Translation: Tricks of the Trade (Pino et al., 2019)](https://arxiv.org/abs/1909.06515) - -## Citation -Please cite as: -``` -@inproceedings{wang2020fairseqs2t, - title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq}, - author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino}, - booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations}, - year = {2020}, -} - -@inproceedings{ott2019fairseq, - title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling}, - author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli}, - booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations}, - year = {2019}, -} -``` diff --git a/kosmos-g/fairseq/examples/speech_to_text/data_utils.py b/kosmos-g/fairseq/examples/speech_to_text/data_utils.py deleted file mode 100644 index c9c69e2b4..000000000 --- a/kosmos-g/fairseq/examples/speech_to_text/data_utils.py +++ /dev/null @@ -1,383 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import csv -from pathlib import Path -import zipfile -from functools import reduce -from multiprocessing import cpu_count -from typing import Any, Dict, List, Optional, Union -import io - -import numpy as np -import pandas as pd -import sentencepiece as sp -from fairseq.data.audio.audio_utils import ( - convert_waveform, _get_kaldi_fbank, _get_torchaudio_fbank, is_npy_data, - is_sf_audio_data -) -import torch -import soundfile as sf -from tqdm import tqdm - - -UNK_TOKEN, UNK_TOKEN_ID = "<unk>", 3 -BOS_TOKEN, BOS_TOKEN_ID = "<s>", 0 -EOS_TOKEN, EOS_TOKEN_ID = "</s>", 2 -PAD_TOKEN, PAD_TOKEN_ID = "<pad>", 1 - - -def gen_vocab( - input_path: Path, output_path_prefix: Path, model_type="bpe", - vocab_size=1000, special_symbols: Optional[List[str]] = None -): - # Train SentencePiece Model - arguments = [ - f"--input={input_path.as_posix()}", - f"--model_prefix={output_path_prefix.as_posix()}", - f"--model_type={model_type}", - f"--vocab_size={vocab_size}", - "--character_coverage=1.0", - f"--num_threads={cpu_count()}", - f"--unk_id={UNK_TOKEN_ID}", - f"--bos_id={BOS_TOKEN_ID}", - f"--eos_id={EOS_TOKEN_ID}", - f"--pad_id={PAD_TOKEN_ID}", - ] - if special_symbols is not None: - _special_symbols = ",".join(special_symbols) - arguments.append(f"--user_defined_symbols={_special_symbols}") - sp.SentencePieceTrainer.Train(" ".join(arguments)) - # Export fairseq dictionary - spm = sp.SentencePieceProcessor() - spm.Load(output_path_prefix.as_posix() + ".model") - vocab = {i: spm.IdToPiece(i) for i in range(spm.GetPieceSize())} - assert ( - vocab.get(UNK_TOKEN_ID) == UNK_TOKEN - and vocab.get(PAD_TOKEN_ID) == PAD_TOKEN - and vocab.get(BOS_TOKEN_ID) == BOS_TOKEN - and vocab.get(EOS_TOKEN_ID) == EOS_TOKEN - ) - vocab = { - i: s - for i, s in vocab.items() - if s not in {UNK_TOKEN, BOS_TOKEN, EOS_TOKEN, PAD_TOKEN} - } - with open(output_path_prefix.as_posix() + ".txt", "w") as f_out: - for _, s in sorted(vocab.items(), key=lambda x: x[0]): - f_out.write(f"{s} 1\n") - - -def extract_fbank_features( - waveform: torch.FloatTensor, - sample_rate: int, - output_path: Optional[Path] = None, - n_mel_bins: int = 80, - overwrite: bool = False, -): - if output_path is not None and output_path.is_file() and not overwrite: - return - - _waveform, _ = convert_waveform(waveform, sample_rate, to_mono=True) - # Kaldi compliance: 16-bit signed integers - _waveform = _waveform * (2 ** 15) - _waveform = _waveform[0].numpy() - - features = _get_kaldi_fbank(_waveform, sample_rate, n_mel_bins) - if features is None: - features = _get_torchaudio_fbank(_waveform, sample_rate, n_mel_bins) - if features is None: - raise ImportError( - "Please install pyKaldi or torchaudio to enable fbank feature extraction" - ) - - if output_path is not None: - np.save(output_path.as_posix(), features) - return features - - -def create_zip(data_root: Path, zip_path: Path): - paths = list(data_root.glob("*.npy")) - paths.extend(data_root.glob("*.flac")) - with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_STORED) as f: - for path in tqdm(paths): - f.write(path, arcname=path.name) - - -def get_zip_manifest( - zip_path: Path, zip_root: Optional[Path] = None, is_audio=False -): - _zip_path = Path.joinpath(zip_root or Path(""), zip_path) - with zipfile.ZipFile(_zip_path, mode="r") as f: - info = f.infolist() - paths, lengths = {}, {} - for i in tqdm(info): - utt_id = Path(i.filename).stem - offset, file_size = i.header_offset + 30 + len(i.filename), i.file_size - paths[utt_id] = f"{zip_path.as_posix()}:{offset}:{file_size}" - with open(_zip_path, "rb") as 
f: - f.seek(offset) - byte_data = f.read(file_size) - assert len(byte_data) > 1 - if is_audio: - assert is_sf_audio_data(byte_data), i - else: - assert is_npy_data(byte_data), i - byte_data_fp = io.BytesIO(byte_data) - if is_audio: - lengths[utt_id] = sf.info(byte_data_fp).frames - else: - lengths[utt_id] = np.load(byte_data_fp).shape[0] - return paths, lengths - - -def gen_config_yaml( - manifest_root: Path, - spm_filename: Optional[str] = None, - vocab_name: Optional[str] = None, - yaml_filename: str = "config.yaml", - specaugment_policy: Optional[str] = "lb", - prepend_tgt_lang_tag: bool = False, - sampling_alpha: Optional[float] = None, - input_channels: Optional[int] = 1, - input_feat_per_channel: Optional[int] = 80, - audio_root: str = "", - cmvn_type: str = "utterance", - gcmvn_path: Optional[Path] = None, - extra=None -): - manifest_root = manifest_root.absolute() - writer = S2TDataConfigWriter(manifest_root / yaml_filename) - assert spm_filename is not None or vocab_name is not None - vocab_name = spm_filename.replace(".model", ".txt") if vocab_name is None \ - else vocab_name - writer.set_vocab_filename(vocab_name) - if input_channels is not None: - writer.set_input_channels(input_channels) - if input_feat_per_channel is not None: - writer.set_input_feat_per_channel(input_feat_per_channel) - specaugment_setters = { - "lb": writer.set_specaugment_lb_policy, - "ld": writer.set_specaugment_ld_policy, - "sm": writer.set_specaugment_sm_policy, - "ss": writer.set_specaugment_ss_policy, - } - specaugment_setter = specaugment_setters.get(specaugment_policy, None) - if specaugment_setter is not None: - specaugment_setter() - if spm_filename is not None: - writer.set_bpe_tokenizer( - { - "bpe": "sentencepiece", - "sentencepiece_model": (manifest_root / spm_filename).as_posix(), - } - ) - if prepend_tgt_lang_tag: - writer.set_prepend_tgt_lang_tag(True) - if sampling_alpha is not None: - writer.set_sampling_alpha(sampling_alpha) - - if cmvn_type not in ["global", "utterance"]: - raise NotImplementedError - - if specaugment_policy is not None: - writer.set_feature_transforms( - "_train", [f"{cmvn_type}_cmvn", "specaugment"] - ) - writer.set_feature_transforms("*", [f"{cmvn_type}_cmvn"]) - - if cmvn_type == "global": - if gcmvn_path is None: - raise ValueError("Please provide path of global cmvn file.") - else: - writer.set_global_cmvn(gcmvn_path.as_posix()) - - if len(audio_root) > 0: - writer.set_audio_root(audio_root) - - if extra is not None: - writer.set_extra(extra) - writer.flush() - - -def load_df_from_tsv(path: Union[str, Path]) -> pd.DataFrame: - _path = path if isinstance(path, str) else path.as_posix() - return pd.read_csv( - _path, - sep="\t", - header=0, - encoding="utf-8", - escapechar="\\", - quoting=csv.QUOTE_NONE, - na_filter=False, - ) - - -def save_df_to_tsv(dataframe, path: Union[str, Path]): - _path = path if isinstance(path, str) else path.as_posix() - dataframe.to_csv( - _path, - sep="\t", - header=True, - index=False, - encoding="utf-8", - escapechar="\\", - quoting=csv.QUOTE_NONE, - ) - - -def load_tsv_to_dicts(path: Union[str, Path]) -> List[dict]: - with open(path, "r") as f: - reader = csv.DictReader( - f, - delimiter="\t", - quotechar=None, - doublequote=False, - lineterminator="\n", - quoting=csv.QUOTE_NONE, - ) - rows = [dict(e) for e in reader] - return rows - - -def filter_manifest_df( - df, is_train_split=False, extra_filters=None, min_n_frames=5, max_n_frames=3000 -): - filters = { - "no speech": df["audio"] == "", - f"short speech (<{min_n_frames} frames)": 
df["n_frames"] < min_n_frames, - "empty sentence": df["tgt_text"] == "", - } - if is_train_split: - filters[f"long speech (>{max_n_frames} frames)"] = df["n_frames"] > max_n_frames - if extra_filters is not None: - filters.update(extra_filters) - invalid = reduce(lambda x, y: x | y, filters.values()) - valid = ~invalid - print( - "| " - + ", ".join(f"{n}: {f.sum()}" for n, f in filters.items()) - + f", total {invalid.sum()} filtered, {valid.sum()} remained." - ) - return df[valid] - - -def cal_gcmvn_stats(features_list): - features = np.concatenate(features_list) - square_sums = (features ** 2).sum(axis=0) - mean = features.mean(axis=0) - features = np.subtract(features, mean) - var = square_sums / features.shape[0] - mean ** 2 - std = np.sqrt(np.maximum(var, 1e-8)) - return {"mean": mean.astype("float32"), "std": std.astype("float32")} - - -class S2TDataConfigWriter(object): - DEFAULT_VOCAB_FILENAME = "dict.txt" - DEFAULT_INPUT_FEAT_PER_CHANNEL = 80 - DEFAULT_INPUT_CHANNELS = 1 - - def __init__(self, yaml_path: Path): - try: - import yaml - except ImportError: - print("Please install PyYAML for S2T data config YAML files") - self.yaml = yaml - self.yaml_path = yaml_path - self.config = {} - - def flush(self): - with open(self.yaml_path, "w") as f: - self.yaml.dump(self.config, f) - - def set_audio_root(self, audio_root=""): - self.config["audio_root"] = audio_root - - def set_vocab_filename(self, vocab_filename: str = "dict.txt"): - self.config["vocab_filename"] = vocab_filename - - def set_specaugment( - self, - time_wrap_w: int, - freq_mask_n: int, - freq_mask_f: int, - time_mask_n: int, - time_mask_t: int, - time_mask_p: float, - ): - self.config["specaugment"] = { - "time_wrap_W": time_wrap_w, - "freq_mask_N": freq_mask_n, - "freq_mask_F": freq_mask_f, - "time_mask_N": time_mask_n, - "time_mask_T": time_mask_t, - "time_mask_p": time_mask_p, - } - - def set_specaugment_lb_policy(self): - self.set_specaugment( - time_wrap_w=0, - freq_mask_n=1, - freq_mask_f=27, - time_mask_n=1, - time_mask_t=100, - time_mask_p=1.0, - ) - - def set_specaugment_ld_policy(self): - self.set_specaugment( - time_wrap_w=0, - freq_mask_n=2, - freq_mask_f=27, - time_mask_n=2, - time_mask_t=100, - time_mask_p=1.0, - ) - - def set_specaugment_sm_policy(self): - self.set_specaugment( - time_wrap_w=0, - freq_mask_n=2, - freq_mask_f=15, - time_mask_n=2, - time_mask_t=70, - time_mask_p=0.2, - ) - - def set_specaugment_ss_policy(self): - self.set_specaugment( - time_wrap_w=0, - freq_mask_n=2, - freq_mask_f=27, - time_mask_n=2, - time_mask_t=70, - time_mask_p=0.2, - ) - - def set_input_channels(self, input_channels: int = 1): - self.config["input_channels"] = input_channels - - def set_input_feat_per_channel(self, input_feat_per_channel: int = 80): - self.config["input_feat_per_channel"] = input_feat_per_channel - - def set_bpe_tokenizer(self, bpe_tokenizer: Dict[str, Any]): - self.config["bpe_tokenizer"] = bpe_tokenizer - - def set_global_cmvn(self, stats_npz_path: str): - self.config["global_cmvn"] = {"stats_npz_path": stats_npz_path} - - def set_feature_transforms(self, split: str, transforms: List[str]): - if "transforms" not in self.config: - self.config["transforms"] = {} - self.config["transforms"][split] = transforms - - def set_prepend_tgt_lang_tag(self, flag: bool = True): - self.config["prepend_tgt_lang_tag"] = flag - - def set_sampling_alpha(self, sampling_alpha: float = 1.0): - self.config["sampling_alpha"] = sampling_alpha - - def set_extra(self, data): - self.config.update(data) diff --git 
a/kosmos-g/fairseq/examples/speech_to_text/docs/covost_example.md b/kosmos-g/fairseq/examples/speech_to_text/docs/covost_example.md deleted file mode 100644 index 16447f041..000000000 --- a/kosmos-g/fairseq/examples/speech_to_text/docs/covost_example.md +++ /dev/null @@ -1,102 +0,0 @@ -[[Back]](..) - -# S2T Example: ST on CoVoST -We replicate the experiments in -[CoVoST 2 and Massively Multilingual Speech-to-Text Translation (Wang et al., 2020)](https://arxiv.org/abs/2007.10310). - -## Data Preparation -[Download](https://commonvoice.mozilla.org/en/datasets) and unpack Common Voice v4 to a path -`${COVOST_ROOT}/${SOURCE_LANG_ID}`, then preprocess it with -```bash -# additional Python packages for S2T data processing/model training -pip install pandas torchaudio sentencepiece - -# En ASR -python examples/speech_to_text/prep_covost_data.py \ - --data-root ${COVOST_ROOT} --vocab-type char --src-lang en -# ST -python examples/speech_to_text/prep_covost_data.py \ - --data-root ${COVOST_ROOT} --vocab-type char \ - --src-lang fr --tgt-lang en -``` -The generated files (manifest, features, vocabulary and data configuration) will be added to -`${COVOST_ROOT}/${SOURCE_LANG_ID}`. - -Download our vocabulary files if you want to use our pre-trained models: -- ASR: [En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_asr_vocab_char.zip) -- ST: [Fr-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_fr_en_st_vocab_char.zip), [De-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_de_en_st_vocab_char.zip), [Es-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_es_en_st_vocab_char.zip), [Ca-En](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_ca_en_st_vocab_char.zip), [En-De](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_de_st_vocab_char.zip), [En-Ca](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_ca_st_vocab_char.zip), [En-Fa](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_fa_st_vocab_char.zip), [En-Et](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_et_st_vocab_char.zip) - -## ASR -#### Training -We train an En ASR model for encoder pre-training of all ST models: -```bash -fairseq-train ${COVOST_ROOT}/en \ - --config-yaml config_asr_en.yaml --train-subset train_asr_en --valid-subset dev_asr_en \ - --save-dir ${ASR_SAVE_DIR} --num-workers 4 --max-tokens 50000 --max-update 60000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --report-accuracy --arch s2t_transformer_s --dropout 0.15 --optimizer adam --lr 2e-3 \ - --lr-scheduler inverse_sqrt --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 -``` -where `ASR_SAVE_DIR` is the checkpoint root path. We set `--update-freq 8` to simulate 8 GPUs with 1 GPU. -You may want to update it accordingly when using more than 1 GPU. 
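To make the `--update-freq` arithmetic concrete: gradients are accumulated over `update_freq` batches before each optimizer step, so the effective batch size is roughly `max_tokens x n_gpus x update_freq` tokens. A back-of-the-envelope illustration (not part of the recipe itself):

```python
# Effective tokens per optimizer step when simulating 8 GPUs on 1 GPU
# via gradient accumulation; illustrative arithmetic only.
max_tokens = 50000   # per-GPU batch size in tokens (--max-tokens)
n_gpus = 1           # physical GPUs
update_freq = 8      # gradient accumulation steps (--update-freq)

print(max_tokens * n_gpus * update_freq)  # 400000, same as 8 GPUs with --update-freq 1
```

With 4 GPUs, for example, `--update-freq 2` preserves the same effective batch size.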
- -#### Inference & Evaluation -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ASR_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}" -fairseq-generate ${COVOST_ROOT}/en \ - --config-yaml config_asr_en.yaml --gen-subset test_asr_en --task speech_to_text \ - --path ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \ - --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct -``` -#### Results -| --arch | Params | En | Model | -|---|---|---|---| -| s2t_transformer_s | 31M | 25.6 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_asr_transformer_s.pt) | - -## ST -#### Training -Fr-En as an example (for En-* directions, use `--max-tokens 50000`): -```bash -fairseq-train ${COVOST_ROOT}/fr \ - --config-yaml config_st_fr_en.yaml --train-subset train_st_fr_en --valid-subset dev_st_fr_en \ - --save-dir ${ST_SAVE_DIR} --num-workers 4 --max-update 30000 --max-tokens 40000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --encoder-freezing-updates 1000 --optimizer adam --lr 2e-3 \ - --lr-scheduler inverse_sqrt --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 \ - --load-pretrained-encoder-from ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} -``` -where `ST_SAVE_DIR` is the checkpoint root path. The ST encoder is pre-trained by En ASR for faster training and better -performance: `--load-pretrained-encoder-from <ASR checkpoint path>`. We set `--update-freq 8` to simulate 8 GPUs with 1 GPU. -You may want to update it accordingly when using more than 1 GPU. - -#### Inference & Evaluation -Average the last 10 checkpoints and evaluate on the test split: -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ST_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ST_SAVE_DIR}/${CHECKPOINT_FILENAME}" -fairseq-generate ${COVOST_ROOT}/fr \ - --config-yaml config_st_fr_en.yaml --gen-subset test_st_fr_en --task speech_to_text \ - --path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 --scoring sacrebleu -``` - -## Interactive Decoding -Launch the interactive console via -```bash -fairseq-interactive ${COVOST_ROOT}/fr --config-yaml config_st_fr_en.yaml \ - --task speech_to_text --path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 -``` -Type in WAV/FLAC/OGG audio paths (one per line) after the prompt. - -#### Results -| --arch | Params | Fr-En | De-En | Es-En | Ca-En | En-De | En-Ca | En-Fa | En-Et | Model | -|---|---|---|---|---|---|---|---|---|---|---| -| s2t_transformer_s | 31M | [27.2](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_fr_en_st_transformer_s.pt) | [17.7](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_de_en_st_transformer_s.pt) | [23.1](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_es_en_st_transformer_s.pt) | [19.3](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_ca_en_st_transformer_s.pt) | [16.1](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_de_st_transformer_s.pt) | [21.6](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_ca_st_transformer_s.pt) | [12.9](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_fa_st_transformer_s.pt) | [12.8](https://dl.fbaipublicfiles.com/fairseq/s2t/covost2_en_et_st_transformer_s.pt) | (<-Download) | - -[[Back]](..) 
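For reference, checkpoint averaging amounts to an element-wise mean over the saved `model` state dicts. Below is a minimal sketch of that idea, not the actual `scripts/average_checkpoints.py` implementation (which also handles non-float parameters and other corner cases):

```python
# Minimal sketch of checkpoint averaging; use scripts/average_checkpoints.py
# for real runs.
import torch

def average_checkpoints(paths):
    avg = None
    for p in paths:
        state = torch.load(p, map_location="cpu")["model"]
        if avg is None:
            avg = {k: v.float().clone() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}
```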
diff --git a/kosmos-g/fairseq/examples/speech_to_text/docs/librispeech_example.md b/kosmos-g/fairseq/examples/speech_to_text/docs/librispeech_example.md deleted file mode 100644 index 4040fda94..000000000 --- a/kosmos-g/fairseq/examples/speech_to_text/docs/librispeech_example.md +++ /dev/null @@ -1,69 +0,0 @@ -[[Back]](..) - -# S2T Example: Speech Recognition (ASR) on LibriSpeech -[LibriSpeech](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf) is a de-facto standard English ASR -benchmark. We provide competitive -vanilla [Transformer](https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf) baselines. - -## Data preparation -Download and preprocess LibriSpeech data with -```bash -# additional Python packages for S2T data processing/model training -pip install pandas torchaudio sentencepiece - -python examples/speech_to_text/prep_librispeech_data.py \ - --output-root ${LS_ROOT} --vocab-type unigram --vocab-size 10000 -``` -where `LS_ROOT` is the root path for downloaded data as well as generated files (manifest, features, vocabulary and -data configuration). - -[Download](https://dl.fbaipublicfiles.com/fairseq/s2t/librispeech_vocab_unigram10000.zip) our vocabulary files -if you want to use our pre-trained models. - -## Training -```bash -fairseq-train ${LS_ROOT} --save-dir ${SAVE_DIR} \ - --config-yaml config.yaml --train-subset train-clean-100,train-clean-360,train-other-500 --valid-subset dev-clean,dev-other \ - --num-workers 4 --max-tokens 40000 --max-update 300000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --share-decoder-input-output-embed \ - --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt --warmup-updates 10000 \ - --clip-norm 10.0 --seed 1 --update-freq 8 -``` -where `SAVE_DIR` is the checkpoint root path. Here we use `--arch s2t_transformer_s` (31M parameters) as an example. -For better performance, you may switch to `s2t_transformer_m` (71M, with `--lr 1e-3`) or `s2t_transformer_l` -(268M, with `--lr 5e-4`). We set `--update-freq 8` to simulate 8 GPUs with 1 GPU. You may want to update it accordingly -when using more than 1 GPU. - -## Inference & Evaluation -Average the last 10 checkpoints and evaluate on the 4 splits -(`dev-clean`, `dev-other`, `test-clean` and `test-other`): -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py --inputs ${SAVE_DIR} \ - --num-epoch-checkpoints 10 \ - --output "${SAVE_DIR}/${CHECKPOINT_FILENAME}" -for SUBSET in dev-clean dev-other test-clean test-other; do - fairseq-generate ${LS_ROOT} --config-yaml config.yaml --gen-subset ${SUBSET} \ - --task speech_to_text --path ${SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 --scoring wer -done -``` - -## Interactive Decoding -Launch the interactive console via -```bash -fairseq-interactive ${LS_ROOT} --config-yaml config.yaml --task speech_to_text \ - --path ${SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 -``` -Type in WAV/FLAC/OGG audio paths (one per line) after the prompt. 
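The generated manifests are plain tab-separated files, so they are easy to sanity-check. An illustrative peek with pandas (mirroring the CSV options used by the prep scripts; the filename is an example):

```python
import csv

import pandas as pd

# Peek at a generated manifest, e.g. ${LS_ROOT}/train-clean-100.tsv
df = pd.read_csv(
    "train-clean-100.tsv", sep="\t", header=0, encoding="utf-8",
    escapechar="\\", quoting=csv.QUOTE_NONE, na_filter=False,
)
print(df.columns.tolist())  # ['id', 'audio', 'n_frames', 'tgt_text', 'speaker']
print(df.head(3))
```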
- -## Results - -| --arch | Params | dev-clean | dev-other | test-clean | test-other | Model | -|---|---|---|---|---|---|---| -| s2t_transformer_s | 30M | 3.8 | 8.9 | 4.4 | 9.0 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/librispeech_transformer_s.pt) | -| s2t_transformer_m | 71M | 3.2 | 8.0 | 3.4 | 7.9 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/librispeech_transformer_m.pt) | -| s2t_transformer_l | 268M | 3.0 | 7.5 | 3.2 | 7.5 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/librispeech_transformer_l.pt) | - -[[Back]](..) diff --git a/kosmos-g/fairseq/examples/speech_to_text/docs/mtedx_example.md b/kosmos-g/fairseq/examples/speech_to_text/docs/mtedx_example.md deleted file mode 100644 index 7e3d75955..000000000 --- a/kosmos-g/fairseq/examples/speech_to_text/docs/mtedx_example.md +++ /dev/null @@ -1,201 +0,0 @@ -[[Back]](..) - -# S2T Example: Speech Translation (ST) on Multilingual TEDx - -[Multilingual TEDx](https://arxiv.org/abs/2102.01757) is a multilingual corpus for speech recognition and -speech translation. The data is derived from TEDx talks in 8 source languages -with translations to a subset of 5 target languages. - -## Data Preparation -[Download](http://openslr.org/100/) and unpack Multilingual TEDx data to a path -`${MTEDX_ROOT}/${LANG_PAIR}`, then preprocess it with -```bash -# additional Python packages for S2T data processing/model training -pip install pandas torchaudio soundfile sentencepiece - -# Generate TSV manifests, features, vocabulary -# and configuration for each language -python examples/speech_to_text/prep_mtedx_data.py \ - --data-root ${MTEDX_ROOT} --task asr \ - --vocab-type unigram --vocab-size 1000 -python examples/speech_to_text/prep_mtedx_data.py \ - --data-root ${MTEDX_ROOT} --task st \ - --vocab-type unigram --vocab-size 1000 - -# Add vocabulary and configuration for joint data -# (based on the manifests and features generated above) -python examples/speech_to_text/prep_mtedx_data.py \ - --data-root ${MTEDX_ROOT} --task asr --joint \ - --vocab-type unigram --vocab-size 8000 -python examples/speech_to_text/prep_mtedx_data.py \ - --data-root ${MTEDX_ROOT} --task st --joint \ - --vocab-type unigram --vocab-size 8000 -``` -The generated files (manifest, features, vocabulary and data configuration) will be added to -`${MTEDX_ROOT}/${LANG_PAIR}` (per-language data) and `MTEDX_ROOT` (joint data). 
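The generated `config_{asr,st}.yaml` is an ordinary YAML file (written by the `S2TDataConfigWriter` shown earlier), so you can verify what it points at before training. An illustrative check, assuming PyYAML is installed and the path is adapted to your setup:

```python
import yaml

# Inspect a generated data config, e.g. ${MTEDX_ROOT}/es-es/config_asr.yaml
with open("config_asr.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["vocab_filename"])     # e.g. spm_unigram1000_asr.txt
print(cfg.get("bpe_tokenizer"))  # sentencepiece model path
print(cfg.get("transforms"))     # e.g. {'*': ['utterance_cmvn'], '_train': [...]}
```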
- - -## ASR -#### Training -Spanish as example: -```bash -fairseq-train ${MTEDX_ROOT}/es-es \ - --config-yaml config_asr.yaml --train-subset train_asr --valid-subset valid_asr \ - --save-dir ${ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-epoch 200 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --report-accuracy \ - --arch s2t_transformer_xs --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --dropout 0.3 --label-smoothing 0.1 \ - --load-pretrained-encoder-from ${PRETRAINED_ENCODER} \ - --skip-invalid-size-inputs-valid-test \ - --keep-last-epochs 10 --update-freq 8 --patience 10 -``` -For joint model (using ASR data from all 8 languages): -```bash -fairseq-train ${MTEDX_ROOT} \ - --config-yaml config_asr.yaml \ - --train-subset train_es-es_asr,train_fr-fr_asr,train_pt-pt_asr,train_it-it_asr,train_ru-ru_asr,train_el-el_asr,train_ar-ar_asr,train_de-de_asr \ - --valid-subset valid_es-es_asr,valid_fr-fr_asr,valid_pt-pt_asr,valid_it-it_asr,valid_ru-ru_asr,valid_el-el_asr,valid_ar-ar_asr,valid_de-de_asr \ - --save-dir ${MULTILINGUAL_ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-epoch 200 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --report-accuracy \ - --arch s2t_transformer_s --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --dropout 0.3 --label-smoothing 0.1 \ - --skip-invalid-size-inputs-valid-test \ - --keep-last-epochs 10 --update-freq 8 --patience 10 \ - --ignore-prefix-size 1 -``` -where `MULTILINGUAL_ASR_SAVE_DIR` is the checkpoint root path. We set `--update-freq 8` to simulate 8 GPUs -with 1 GPU. You may want to update it accordingly when using more than 1 GPU. -For multilingual models, we prepend target language ID token as target BOS, which should be excluded from -the training loss via `--ignore-prefix-size 1`. 
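Mechanically, `--ignore-prefix-size 1` makes the criterion drop the first target position (the language tag) before computing the loss. A toy illustration of the slicing, not the fairseq criterion itself:

```python
import torch
import torch.nn.functional as F

batch, tgt_len, vocab = 2, 5, 100
logits = torch.randn(batch, tgt_len, vocab)         # toy decoder outputs
target = torch.randint(0, vocab, (batch, tgt_len))  # target[:, 0] is the <lang:xx> tag

prefix = 1  # --ignore-prefix-size 1
loss = F.cross_entropy(
    logits[:, prefix:, :].reshape(-1, vocab),  # skip the prediction at the tag position
    target[:, prefix:].reshape(-1),            # skip the tag token itself
)
```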
- -#### Inference & Evaluation -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ASR_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}" - -fairseq-generate ${MTEDX_ROOT}/es-es \ - --config-yaml config_asr.yaml --gen-subset test --task speech_to_text \ - --path ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \ - --skip-invalid-size-inputs-valid-test \ - --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct --remove-bpe - -# For models trained on joint data -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${MULTILINGUAL_ASR_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${MULTILINGUAL_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}" - -for LANG in es fr pt it ru el ar de; do - fairseq-generate ${MTEDX_ROOT} \ - --config-yaml config_asr.yaml --gen-subset test_${LANG}-${LANG}_asr --task speech_to_text \ - --prefix-size 1 --path ${MULTILINGUAL_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 40000 --beam 5 \ - --skip-invalid-size-inputs-valid-test \ - --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct --remove-bpe -done -``` -#### Results -| Data | --arch | Params | Es | Fr | Pt | It | Ru | El | Ar | De | -|--------------|--------------------|--------|------|------|------|------|------|-------|-------|-------| -| Monolingual | s2t_transformer_xs | 10M | 46.4 | 45.6 | 54.8 | 48.0 | 74.7 | 109.5 | 104.4 | 111.1 | - - -## ST -#### Training -Es-En as example: -```bash -fairseq-train ${MTEDX_ROOT}/es-en \ - --config-yaml config_st.yaml --train-subset train_st --valid-subset valid_st \ - --save-dir ${ST_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-epoch 200 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --report-accuracy \ - --arch s2t_transformer_xs --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --dropout 0.3 --label-smoothing 0.1 \ - --load-pretrained-encoder-from ${PRETRAINED_ENCODER} \ - --skip-invalid-size-inputs-valid-test \ - --keep-last-epochs 10 --update-freq 8 --patience 10 -``` -For multilingual model (all 12 directions): -```bash -fairseq-train ${MTEDX_ROOT} \ - --config-yaml config_st.yaml \ - --train-subset train_el-en_st,train_es-en_st,train_es-fr_st,train_es-it_st,train_es-pt_st,train_fr-en_st,train_fr-es_st,train_fr-pt_st,train_it-en_st,train_it-es_st,train_pt-en_st,train_pt-es_st,train_ru-en_st \ - --valid-subset valid_el-en_st,valid_es-en_st,valid_es-fr_st,valid_es-it_st,valid_es-pt_st,valid_fr-en_st,valid_fr-es_st,valid_fr-pt_st,valid_it-en_st,valid_it-es_st,valid_pt-en_st,valid_pt-es_st,valid_ru-en_st \ - --save-dir ${MULTILINGUAL_ST_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-epoch 200 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --report-accuracy \ - --arch s2t_transformer_s --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --dropout 0.3 --label-smoothing 0.1 \ - --skip-invalid-size-inputs-valid-test \ - --keep-last-epochs 10 --update-freq 8 --patience 10 \ - --ignore-prefix-size 1 \ - --load-pretrained-encoder-from ${PRETRAINED_ENCODER} -``` -where `ST_SAVE_DIR` (`MULTILINGUAL_ST_SAVE_DIR`) is the checkpoint root path. The ST encoder is pre-trained by ASR -for faster training and better performance: `--load-pretrained-encoder-from <(JOINT_)ASR checkpoint path>`. 
We set -`--update-freq 8` to simulate 8 GPUs with 1 GPU. You may want to update it accordingly when using more than 1 GPU. -For multilingual models, we prepend target language ID token as target BOS, which should be excluded from -the training loss via `--ignore-prefix-size 1`. - -#### Inference & Evaluation -Average the last 10 checkpoints and evaluate on the `test` split: -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ST_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ST_SAVE_DIR}/${CHECKPOINT_FILENAME}" - -fairseq-generate ${MTEDX_ROOT}/es-en \ - --config-yaml config_st.yaml --gen-subset test --task speech_to_text \ - --path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 --scoring sacrebleu --remove-bpe - -# For multilingual models -python scripts/average_checkpoints.py \ - --inputs ${MULTILINGUAL_ST_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${MULTILINGUAL_ST_SAVE_DIR}/${CHECKPOINT_FILENAME}" - -for LANGPAIR in es-en es-fr es-pt fr-en fr-es fr-pt pt-en pt-es it-en it-es ru-en el-en; do - fairseq-generate ${MTEDX_ROOT} \ - --config-yaml config_st.yaml --gen-subset test_${LANGPAIR}_st --task speech_to_text \ - --prefix-size 1 --path ${MULTILINGUAL_ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 40000 --beam 5 \ - --skip-invalid-size-inputs-valid-test \ - --scoring sacrebleu --remove-bpe -done -``` -For multilingual models, we force decoding from the target language ID token (as BOS) via `--prefix-size 1`. - -#### Results -| Data | --arch | Params | Es-En | Es-Pt | Es-Fr | Fr-En | Fr-Es | Fr-Pt | Pt-En | Pt-Es | It-En | It-Es | Ru-En | El-En | -|--------------|--------------------|-----|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------| -| Bilingual | s2t_transformer_xs | 10M | 7.0 | 12.2 | 1.7 | 8.9 | 10.6 | 7.9 | 8.1 | 8.7 | 6.4 | 1.0 | 0.7 | 0.6 | -| Multilingual | s2t_transformer_s | 31M | 12.3 | 17.4 | 6.1 | 12.0 | 13.6 | 13.2 | 12.0 | 13.7 | 10.7 | 13.1 | 0.6 | 0.8 | - - -## Citation -Please cite as: -``` -@inproceedings{salesky2021mtedx, - title={Multilingual TEDx Corpus for Speech Recognition and Translation}, - author={Elizabeth Salesky and Matthew Wiesner and Jacob Bremerman and Roldano Cattoni and Matteo Negri and Marco Turchi and Douglas W. Oard and Matt Post}, - booktitle={Proceedings of Interspeech}, - year={2021}, -} - -@inproceedings{wang2020fairseqs2t, - title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq}, - author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino}, - booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations}, - year = {2020}, -} - -@inproceedings{ott2019fairseq, - title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling}, - author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli}, - booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations}, - year = {2019}, -} -``` - -[[Back]](..) diff --git a/kosmos-g/fairseq/examples/speech_to_text/docs/mustc_example.md b/kosmos-g/fairseq/examples/speech_to_text/docs/mustc_example.md deleted file mode 100644 index c95ef3e15..000000000 --- a/kosmos-g/fairseq/examples/speech_to_text/docs/mustc_example.md +++ /dev/null @@ -1,155 +0,0 @@ -[[Back]](..) 
- -# S2T Example: Speech Translation (ST) on MuST-C - -[MuST-C](https://www.aclweb.org/anthology/N19-1202) is a multilingual speech-to-text translation corpus with -8-language translations on English TED talks. We match the state-of-the-art performance in -[ESPNet-ST](https://arxiv.org/pdf/2004.10234.pdf) with a simpler model training pipeline. - -## Data Preparation -[Download](https://ict.fbk.eu/must-c) and unpack MuST-C data to a path -`${MUSTC_ROOT}/en-${TARGET_LANG_ID}`, then preprocess it with -```bash -# additional Python packages for S2T data processing/model training -pip install pandas torchaudio soundfile sentencepiece - -# Generate TSV manifests, features, vocabulary -# and configuration for each language -python examples/speech_to_text/prep_mustc_data.py \ - --data-root ${MUSTC_ROOT} --task asr \ - --vocab-type unigram --vocab-size 5000 -python examples/speech_to_text/prep_mustc_data.py \ - --data-root ${MUSTC_ROOT} --task st \ - --vocab-type unigram --vocab-size 8000 - -# Add vocabulary and configuration for joint data -# (based on the manifests and features generated above) -python examples/speech_to_text/prep_mustc_data.py \ - --data-root ${MUSTC_ROOT} --task asr --joint \ - --vocab-type unigram --vocab-size 10000 -python examples/speech_to_text/prep_mustc_data.py \ - --data-root ${MUSTC_ROOT} --task st --joint \ - --vocab-type unigram --vocab-size 10000 -``` -The generated files (manifest, features, vocabulary and data configuration) will be added to -`${MUSTC_ROOT}/en-${TARGET_LANG_ID}` (per-language data) and `MUSTC_ROOT` (joint data). - -Download our vocabulary files if you want to use our pre-trained models: -- ASR: [En-De](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_asr_vocab_unigram5000.zip), [En-Nl](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_asr_vocab_unigram5000.zip), [En-Es](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_asr_vocab_unigram5000.zip), [En-Fr](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_asr_vocab_unigram5000.zip), [En-It](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_asr_vocab_unigram5000.zip), [En-Pt](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_asr_vocab_unigram5000.zip), [En-Ro](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_asr_vocab_unigram5000.zip), [En-Ru](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_asr_vocab_unigram5000.zip), [Joint](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_joint_asr_vocab_unigram10000.zip) -- ST: [En-De](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_st_vocab_unigram8000.zip), [En-Nl](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_st_vocab_unigram8000.zip), [En-Es](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_st_vocab_unigram8000.zip), [En-Fr](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_st_vocab_unigram8000.zip), [En-It](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_st_vocab_unigram8000.zip), [En-Pt](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_st_vocab_unigram8000.zip), [En-Ro](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_st_vocab_unigram8000.zip), [En-Ru](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_st_vocab_unigram8000.zip), [Multilingual](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_multilingual_st_vocab_unigram10000.zip) - -## ASR -#### Training -En-De as an example: -```bash -fairseq-train ${MUSTC_ROOT}/en-de \ - --config-yaml config_asr.yaml --train-subset train_asr --valid-subset dev_asr \ - --save-dir ${ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \ - --task speech_to_text 
--criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --optimizer adam --lr 1e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 -``` -For joint model (using ASR data from all 8 directions): -```bash -fairseq-train ${MUSTC_ROOT} \ - --config-yaml config_asr.yaml \ - --train-subset train_de_asr,train_nl_asr,train_es_asr,train_fr_asr,train_it_asr,train_pt_asr,train_ro_asr,train_ru_asr \ - --valid-subset dev_de_asr,dev_nl_asr,dev_es_asr,dev_fr_asr,dev_it_asr,dev_pt_asr,dev_ro_asr,dev_ru_asr \ - --save-dir ${JOINT_ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --optimizer adam --lr 1e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 -``` -where `ASR_SAVE_DIR` (`JOINT_ASR_SAVE_DIR`) is the checkpoint root path. We set `--update-freq 8` to simulate 8 GPUs -with 1 GPU. You may want to update it accordingly when using more than 1 GPU. - -#### Inference & Evaluation -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ASR_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}" -fairseq-generate ${MUSTC_ROOT}/en-de \ - --config-yaml config_asr.yaml --gen-subset tst-COMMON_asr --task speech_to_text \ - --path ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \ - --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct - -# For models trained on joint data -python scripts/average_checkpoints.py \ - --inputs ${JOINT_ASR_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${JOINT_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME}" -for LANG in de nl es fr it pt ro ru; do - fairseq-generate ${MUSTC_ROOT} \ - --config-yaml config_asr.yaml --gen-subset tst-COMMON_${LANG}_asr --task speech_to_text \ - --path ${JOINT_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} --max-tokens 50000 --beam 5 \ - --scoring wer --wer-tokenizer 13a --wer-lowercase --wer-remove-punct -done -``` -#### Results -| Data | --arch | Params | En-De | En-Nl | En-Es | En-Fr | En-It | En-Pt | En-Ro | En-Ru | Model | -|---|---|---|---|---|---|---|---|---|---|---|---| -| Single | s2t_transformer_s | 31M | [18.2](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_asr_transformer_s.pt) | [17.6](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_asr_transformer_s.pt) | [17.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_asr_transformer_s.pt) | [17.2](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_asr_transformer_s.pt) | [17.9](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_asr_transformer_s.pt) | [19.1](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_asr_transformer_s.pt) | [18.1](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_asr_transformer_s.pt) | [17.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_asr_transformer_s.pt) | (<-Download) | -| Joint | s2t_transformer_m | 76M | 16.8 | 16.7 | 16.9 | 16.9 | 17.0 | 17.4 | 17.0 | 16.9 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_joint_asr_transformer_m.pt) | - -## ST -#### Training -En-De as an example: -```bash -fairseq-train ${MUSTC_ROOT}/en-de \ - --config-yaml config_st.yaml --train-subset train_st --valid-subset dev_st \ - --save-dir ${ST_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \ - --task speech_to_text --criterion 
label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 \ - --load-pretrained-encoder-from ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} -``` -For multilingual model (all 8 directions): -```bash -fairseq-train ${MUSTC_ROOT} \ - --config-yaml config_st.yaml \ - --train-subset train_de_st,train_nl_st,train_es_st,train_fr_st,train_it_st,train_pt_st,train_ro_st,train_ru_st \ - --valid-subset dev_de_st,dev_nl_st,dev_es_st,dev_fr_st,dev_it_st,dev_pt_st,dev_ro_st,dev_ru_st \ - --save-dir ${MULTILINGUAL_ST_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --label-smoothing 0.1 --report-accuracy \ - --arch s2t_transformer_s --ignore-prefix-size 1 --optimizer adam --lr 2e-3 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 \ - --load-pretrained-encoder-from ${JOINT_ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} -``` -where `ST_SAVE_DIR` (`MULTILINGUAL_ST_SAVE_DIR`) is the checkpoint root path. The ST encoder is pre-trained by ASR -for faster training and better performance: `--load-pretrained-encoder-from <(JOINT_)ASR checkpoint path>`. We set -`--update-freq 8` to simulate 8 GPUs with 1 GPU. You may want to update it accordingly when using more than 1 GPU. -For multilingual models, we prepend target language ID token as target BOS, which should be excluded from -the training loss via `--ignore-prefix-size 1`. - -#### Inference & Evaluation -Average the last 10 checkpoints and evaluate on the `tst-COMMON` split: -```bash -CHECKPOINT_FILENAME=avg_last_10_checkpoint.pt -python scripts/average_checkpoints.py \ - --inputs ${ST_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${ST_SAVE_DIR}/${CHECKPOINT_FILENAME}" -fairseq-generate ${MUSTC_ROOT}/en-de \ - --config-yaml config_st.yaml --gen-subset tst-COMMON_st --task speech_to_text \ - --path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 --scoring sacrebleu - -# For multilingual models -python scripts/average_checkpoints.py \ - --inputs ${MULTILINGUAL_ST_SAVE_DIR} --num-epoch-checkpoints 10 \ - --output "${MULTILINGUAL_ST_SAVE_DIR}/${CHECKPOINT_FILENAME}" -for LANG in de nl es fr it pt ro ru; do - fairseq-generate ${MUSTC_ROOT} \ - --config-yaml config_st.yaml --gen-subset tst-COMMON_${LANG}_st --task speech_to_text \ - --prefix-size 1 --path ${MULTILINGUAL_ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --max-tokens 50000 --beam 5 --scoring sacrebleu -done -``` -For multilingual models, we force decoding from the target language ID token (as BOS) via `--prefix-size 1`. 
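In other words, with `--prefix-size 1` the first output token is copied from the reference target (the `<lang:xx>` tag) rather than predicted, and free decoding starts afterwards. A toy greedy sketch of the idea (illustrative only; fairseq implements this inside its sequence generator, where each step's logits also depend on previously emitted tokens):

```python
import torch

def greedy_with_forced_prefix(step_logits, prefix_tokens, prefix_size=1):
    # step_logits: iterable of per-step logit vectors (vocab,);
    # prefix_tokens: reference target ids, prefix_tokens[0] is the language tag.
    out = []
    for t, logits in enumerate(step_logits):
        if t < prefix_size:
            out.append(int(prefix_tokens[t]))      # force the language tag
        else:
            out.append(int(torch.argmax(logits)))  # free greedy decoding
    return out
```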
- -#### Results -| Data | --arch | Params | En-De | En-Nl | En-Es | En-Fr | En-It | En-Pt | En-Ro | En-Ru | Model | -|---|---|---|---|---|---|---|---|---|---|---|---| -| Bilingual | s2t_transformer_s | 31M | [22.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_de_st_transformer_s.pt) | [27.3](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_nl_st_transformer_s.pt) | [27.2](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_es_st_transformer_s.pt) | [32.9](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_fr_st_transformer_s.pt) | [22.7](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_it_st_transformer_s.pt) | [28.1](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_pt_st_transformer_s.pt) | [21.9](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ro_st_transformer_s.pt) | [15.3](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_ru_st_transformer_s.pt) | (<-Download) | -| Multilingual | s2t_transformer_m | 76M | 24.5 | 28.6 | 28.2 | 34.9 | 24.6 | 31.1 | 23.8 | 16.0 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2t/mustc_multilingual_st_transformer_m.pt) | - -[[Back]](..) diff --git a/kosmos-g/fairseq/examples/speech_to_text/docs/simulst_mustc_example.md b/kosmos-g/fairseq/examples/speech_to_text/docs/simulst_mustc_example.md deleted file mode 100644 index f3b5a413a..000000000 --- a/kosmos-g/fairseq/examples/speech_to_text/docs/simulst_mustc_example.md +++ /dev/null @@ -1,190 +0,0 @@ -# Simultaneous Speech Translation (SimulST) on MuST-C - -This is a tutorial on training and evaluating a Transformer *wait-k* simultaneous model on the MuST-C English-German dataset, from [SimulMT to SimulST: Adapting Simultaneous Text Translation to End-to-End Simultaneous Speech Translation](https://www.aclweb.org/anthology/2020.aacl-main.58.pdf). - -[MuST-C](https://www.aclweb.org/anthology/N19-1202) is a multilingual speech-to-text translation corpus with 8-language translations on English TED talks. - -## Data Preparation -This section introduces the data preparation for training and evaluation. -If you only want to evaluate the model, please jump to [Inference & Evaluation](#inference--evaluation). - -[Download](https://ict.fbk.eu/must-c) and unpack MuST-C data to a path -`${MUSTC_ROOT}/en-${TARGET_LANG_ID}`, then preprocess it with -```bash -# Additional Python packages for S2T data processing/model training -pip install pandas torchaudio sentencepiece - -# Generate TSV manifests, features, vocabulary, -# global cepstral mean and variance estimation, -# and configuration for each language -cd fairseq - -python examples/speech_to_text/prep_mustc_data.py \ - --data-root ${MUSTC_ROOT} --task asr \ - --vocab-type unigram --vocab-size 10000 \ - --cmvn-type global - -python examples/speech_to_text/prep_mustc_data.py \ - --data-root ${MUSTC_ROOT} --task st \ - --vocab-type unigram --vocab-size 10000 \ - --cmvn-type global -``` - -## ASR Pretraining -We need a pretrained offline ASR model; assume the save directory of the ASR model is `${ASR_SAVE_DIR}`. -The following command (and the subsequent training commands in this tutorial) assume training on 1 GPU (you can also train on 8 GPUs and remove the `--update-freq 8` option). 
-``` -fairseq-train ${MUSTC_ROOT}/en-de \ - --config-yaml config_asr.yaml --train-subset train_asr --valid-subset dev_asr \ - --save-dir ${ASR_SAVE_DIR} --num-workers 4 --max-tokens 40000 --max-update 100000 \ - --task speech_to_text --criterion label_smoothed_cross_entropy --report-accuracy \ - --arch convtransformer_espnet --optimizer adam --lr 0.0005 --lr-scheduler inverse_sqrt \ - --warmup-updates 10000 --clip-norm 10.0 --seed 1 --update-freq 8 -``` -A pretrained ASR checkpoint can be downloaded [here](https://dl.fbaipublicfiles.com/simultaneous_translation/must_c_v1_en_de_pretrained_asr). - -## Simultaneous Speech Translation Training - -### Wait-K with fixed pre-decision module -Fixed pre-decision indicates that the model operates the simultaneous policy on the boundaries of fixed chunks. -Here is an example with a fixed pre-decision ratio of 7 (the simultaneous decision is made every 7 encoder states) and -a wait-3 policy. Assuming the save directory is `${ST_SAVE_DIR}`: -```bash - fairseq-train ${MUSTC_ROOT}/en-de \ - --config-yaml config_st.yaml --train-subset train_st --valid-subset dev_st \ - --save-dir ${ST_SAVE_DIR} --num-workers 8 \ - --optimizer adam --lr 0.0001 --lr-scheduler inverse_sqrt --clip-norm 10.0 \ - --criterion label_smoothed_cross_entropy \ - --warmup-updates 4000 --max-update 100000 --max-tokens 40000 --seed 2 \ - --load-pretrained-encoder-from ${ASR_SAVE_DIR}/checkpoint_best.pt \ - --task speech_to_text \ - --arch convtransformer_simul_trans_espnet \ - --simul-type waitk_fixed_pre_decision \ - --waitk-lagging 3 \ - --fixed-pre-decision-ratio 7 \ - --update-freq 8 -``` -### Monotonic multihead attention with fixed pre-decision module -``` - fairseq-train ${MUSTC_ROOT}/en-de \ - --config-yaml config_st.yaml --train-subset train_st --valid-subset dev_st \ - --save-dir ${ST_SAVE_DIR} --num-workers 8 \ - --optimizer adam --lr 0.0001 --lr-scheduler inverse_sqrt --clip-norm 10.0 \ - --warmup-updates 4000 --max-update 100000 --max-tokens 40000 --seed 2 \ - --load-pretrained-encoder-from ${ASR_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --task speech_to_text \ - --criterion latency_augmented_label_smoothed_cross_entropy \ - --latency-weight-avg 0.1 \ - --arch convtransformer_simul_trans_espnet \ - --simul-type infinite_lookback_fixed_pre_decision \ - --fixed-pre-decision-ratio 7 \ - --update-freq 8 -``` -## Inference & Evaluation -[SimulEval](https://github.com/facebookresearch/SimulEval) is used for evaluation. Install it and run the following command: - -``` -git clone https://github.com/facebookresearch/SimulEval.git -cd SimulEval -pip install -e . - -simuleval \ - --agent ${FAIRSEQ}/examples/speech_to_text/simultaneous_translation/agents/fairseq_simul_st_agent.py \ - --source ${SRC_LIST_OF_AUDIO} \ - --target ${TGT_FILE} \ - --data-bin ${MUSTC_ROOT}/en-de \ - --config config_st.yaml \ - --model-path ${ST_SAVE_DIR}/${CHECKPOINT_FILENAME} \ - --output ${OUTPUT} \ - --scores -``` - -The source file `${SRC_LIST_OF_AUDIO}` is a list of paths of audio files. Assuming your audio files are stored at `/home/user/data`, -it should look like this: - -```bash -/home/user/data/audio-1.wav -/home/user/data/audio-2.wav -``` - -Each line of the target file `${TGT_FILE}` is the translation for the corresponding audio file input. -```bash -Translation_1 -Translation_2 -``` -The evaluation runs on the original MuST-C segmentation. -The following command will generate the wav list and text file for an evaluation set `${SPLIT}` (chosen from `dev`, `tst-COMMON` and `tst-HE`) in MuST-C to `${EVAL_DATA}`. 
-```bash -python ${FAIRSEQ}/examples/speech_to_text/seg_mustc_data.py \ - --data-root ${MUSTC_ROOT} --lang de \ - --split ${SPLIT} --task st \ - --output ${EVAL_DATA} -``` - -The `--data-bin` and `--config` should be the same as in the previous section if you prepared the data from scratch. -If you only need evaluation, a prepared data directory can be found [here](https://dl.fbaipublicfiles.com/simultaneous_translation/must_c_v1.0_en_de_databin.tgz). It contains: -- `spm_unigram10000_st.model`: a sentencepiece model binary. -- `spm_unigram10000_st.txt`: the dictionary file generated by the sentencepiece model. -- `gcmvn.npz`: the binary for global cepstral mean and variance. -- `config_st.yaml`: the config YAML file; it looks like this. -You will need to set the absolute paths for `sentencepiece_model` and `stats_npz_path` if the data directory is downloaded. -```yaml -bpe_tokenizer: - bpe: sentencepiece - sentencepiece_model: ABS_PATH_TO_SENTENCEPIECE_MODEL -global_cmvn: - stats_npz_path: ABS_PATH_TO_GCMVN_FILE -input_channels: 1 -input_feat_per_channel: 80 -sampling_alpha: 1.0 -specaugment: - freq_mask_F: 27 - freq_mask_N: 1 - time_mask_N: 1 - time_mask_T: 100 - time_mask_p: 1.0 - time_wrap_W: 0 -transforms: - '*': - - global_cmvn - _train: - - global_cmvn - - specaugment -vocab_filename: spm_unigram10000_st.txt -``` - -Notice that once a `--data-bin` is set, the `--config` is the base name of the config YAML, not the full path. - -Set `--model-path` to the model checkpoint. -A pretrained checkpoint can be downloaded from [here](https://dl.fbaipublicfiles.com/simultaneous_translation/convtransformer_wait5_pre7), which is a wait-5 model with a pre-decision of 280 ms. - -The result of this model on `tst-COMMON` is: -```bash -{ - "Quality": { - "BLEU": 13.94974229366959 - }, - "Latency": { - "AL": 1751.8031870037803, - "AL_CA": 2338.5911762796536, - "AP": 0.7931395378788959, - "AP_CA": 0.9405103863210942, - "DAL": 1987.7811616943081, - "DAL_CA": 2425.2751560926167 - } -} -``` - -If the `--output ${OUTPUT}` option is used, the detailed log and scores will be stored under the `${OUTPUT}` directory. - -The quality is measured by detokenized BLEU, so make sure that the predicted words sent to the server are detokenized. - -The latency metrics are -* Average Proportion -* Average Lagging -* Differentiable Average Lagging - -Again, they are also evaluated on detokenized text. diff --git a/kosmos-g/fairseq/examples/speech_to_text/prep_covost_data.py b/kosmos-g/fairseq/examples/speech_to_text/prep_covost_data.py deleted file mode 100644 index 411e9b551..000000000 --- a/kosmos-g/fairseq/examples/speech_to_text/prep_covost_data.py +++ /dev/null @@ -1,279 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import logging -from pathlib import Path -import shutil -from tempfile import NamedTemporaryFile -from typing import Optional, Tuple - -import pandas as pd -import torchaudio -from examples.speech_to_text.data_utils import ( - create_zip, - extract_fbank_features, - filter_manifest_df, - gen_config_yaml, - gen_vocab, - get_zip_manifest, - load_df_from_tsv, - save_df_to_tsv, -) -from torch import Tensor -from torch.utils.data import Dataset -from torchaudio.datasets.utils import download_url, extract_archive -from tqdm import tqdm - - -log = logging.getLogger(__name__) - - -MANIFEST_COLUMNS = ["id", "audio", "n_frames", "tgt_text", "speaker"] - - -class CoVoST(Dataset): - """Create a Dataset for CoVoST (https://github.com/facebookresearch/covost). - - Args: - root (str): root path to the dataset and generated manifests/features - source_language (str): source (audio) language - target_language (str, optional): target (text) language, - None for no translation (default: None) - version (int, optional): CoVoST version. (default: 2) - download (bool, optional): Whether to download the dataset if it is not - found at root path. (default: ``False``). - """ - - COVOST_URL_TEMPLATE = ( - "https://dl.fbaipublicfiles.com/covost/" - "covost_v2.{src_lang}_{tgt_lang}.tsv.tar.gz" - ) - - VERSIONS = {2} - SPLITS = ["train", "dev", "test"] - - XX_EN_LANGUAGES = { - 1: ["fr", "de", "nl", "ru", "es", "it", "tr", "fa", "sv-SE", "mn", "zh-CN"], - 2: [ - "fr", - "de", - "es", - "ca", - "it", - "ru", - "zh-CN", - "pt", - "fa", - "et", - "mn", - "nl", - "tr", - "ar", - "sv-SE", - "lv", - "sl", - "ta", - "ja", - "id", - "cy", - ], - } - EN_XX_LANGUAGES = { - 1: [], - 2: [ - "de", - "tr", - "fa", - "sv-SE", - "mn", - "zh-CN", - "cy", - "ca", - "sl", - "et", - "id", - "ar", - "ta", - "lv", - "ja", - ], - } - - def __init__( - self, - root: str, - split: str, - source_language: str, - target_language: Optional[str] = None, - version: int = 2, - ) -> None: - assert version in self.VERSIONS and split in self.SPLITS - assert source_language is not None - self.no_translation = target_language is None - if not self.no_translation: - assert "en" in {source_language, target_language} - if source_language == "en": - assert target_language in self.EN_XX_LANGUAGES[version] - else: - assert source_language in self.XX_EN_LANGUAGES[version] - else: - # Hack here so that we can get "split" column from CoVoST TSV. - # Note that we use CoVoST train split for ASR which is an extension - # to Common Voice train split. 
- target_language = "de" if source_language == "en" else "en" - - self.root: Path = Path(root) - - cv_tsv_path = self.root / "validated.tsv" - assert cv_tsv_path.is_file() - - covost_url = self.COVOST_URL_TEMPLATE.format( - src_lang=source_language, tgt_lang=target_language - ) - covost_archive = self.root / Path(covost_url).name - if not covost_archive.is_file(): - download_url(covost_url, self.root.as_posix(), hash_value=None) - extract_archive(covost_archive.as_posix()) - - cv_tsv = load_df_from_tsv(cv_tsv_path) - covost_tsv = load_df_from_tsv( - self.root / Path(covost_url).name.replace(".tar.gz", "") - ) - df = pd.merge( - left=cv_tsv[["path", "sentence", "client_id"]], - right=covost_tsv[["path", "translation", "split"]], - how="inner", - on="path", - ) - if split == "train": - df = df[(df["split"] == split) | (df["split"] == f"{split}_covost")] - else: - df = df[df["split"] == split] - data = df.to_dict(orient="index").items() - data = [v for k, v in sorted(data, key=lambda x: x[0])] - self.data = [] - for e in data: - try: - path = self.root / "clips" / e["path"] - _ = torchaudio.info(path.as_posix()) - self.data.append(e) - except RuntimeError: - pass - - def __getitem__( - self, n: int - ) -> Tuple[Tensor, int, str, str, Optional[str], str, str]: - """Load the n-th sample from the dataset. - - Args: - n (int): The index of the sample to be loaded - - Returns: - tuple: ``(waveform, sample_rate, sentence, translation, speaker_id, - sample_id)`` - """ - data = self.data[n] - path = self.root / "clips" / data["path"] - waveform, sample_rate = torchaudio.load(path) - sentence = data["sentence"] - translation = None if self.no_translation else data["translation"] - speaker_id = data["client_id"] - _id = data["path"].replace(".mp3", "") - return waveform, sample_rate, sentence, translation, speaker_id, _id - - def __len__(self) -> int: - return len(self.data) - - -def process(args): - root = Path(args.data_root).absolute() / args.src_lang - if not root.is_dir(): - raise NotADirectoryError(f"{root} does not exist") - # Extract features - feature_root = root / "fbank80" - feature_root.mkdir(exist_ok=True) - for split in CoVoST.SPLITS: - print(f"Fetching split {split}...") - dataset = CoVoST(root, split, args.src_lang, args.tgt_lang) - print("Extracting log mel filter bank features...") - for waveform, sample_rate, _, _, _, utt_id in tqdm(dataset): - extract_fbank_features( - waveform, sample_rate, feature_root / f"{utt_id}.npy" - ) - # Pack features into ZIP - zip_path = root / "fbank80.zip" - print("ZIPing features...") - create_zip(feature_root, zip_path) - print("Fetching ZIP manifest...") - audio_paths, audio_lengths = get_zip_manifest(zip_path) - # Generate TSV manifest - print("Generating manifest...") - train_text = [] - task = f"asr_{args.src_lang}" - if args.tgt_lang is not None: - task = f"st_{args.src_lang}_{args.tgt_lang}" - for split in CoVoST.SPLITS: - manifest = {c: [] for c in MANIFEST_COLUMNS} - dataset = CoVoST(root, split, args.src_lang, args.tgt_lang) - for _, _, src_utt, tgt_utt, speaker_id, utt_id in tqdm(dataset): - manifest["id"].append(utt_id) - manifest["audio"].append(audio_paths[utt_id]) - manifest["n_frames"].append(audio_lengths[utt_id]) - manifest["tgt_text"].append(src_utt if args.tgt_lang is None else tgt_utt) - manifest["speaker"].append(speaker_id) - is_train_split = split.startswith("train") - if is_train_split: - train_text.extend(manifest["tgt_text"]) - df = pd.DataFrame.from_dict(manifest) - df = filter_manifest_df(df, is_train_split=is_train_split) 
- save_df_to_tsv(df, root / f"{split}_{task}.tsv") - # Generate vocab - vocab_size_str = "" if args.vocab_type == "char" else str(args.vocab_size) - spm_filename_prefix = f"spm_{args.vocab_type}{vocab_size_str}_{task}" - with NamedTemporaryFile(mode="w") as f: - for t in train_text: - f.write(t + "\n") - gen_vocab( - Path(f.name), - root / spm_filename_prefix, - args.vocab_type, - args.vocab_size - ) - # Generate config YAML - gen_config_yaml( - root, - spm_filename=spm_filename_prefix + ".model", - yaml_filename=f"config_{task}.yaml", - specaugment_policy="lb", - ) - # Clean up - shutil.rmtree(feature_root) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--data-root", "-d", required=True, type=str, - help="data root with sub-folders for each language <root>/<src_lang>" - ) - parser.add_argument( - "--vocab-type", - default="unigram", - required=True, - type=str, - choices=["bpe", "unigram", "char"], - ), - parser.add_argument("--vocab-size", default=1000, type=int) - parser.add_argument("--src-lang", "-s", required=True, type=str) - parser.add_argument("--tgt-lang", "-t", type=str) - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/speech_to_text/prep_librispeech_data.py b/kosmos-g/fairseq/examples/speech_to_text/prep_librispeech_data.py deleted file mode 100644 index f379fa7bf..000000000 --- a/kosmos-g/fairseq/examples/speech_to_text/prep_librispeech_data.py +++ /dev/null @@ -1,119 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -from pathlib import Path -import shutil -from tempfile import NamedTemporaryFile - -import pandas as pd -from examples.speech_to_text.data_utils import ( - create_zip, - extract_fbank_features, - gen_config_yaml, - gen_vocab, - get_zip_manifest, - save_df_to_tsv, -) -from torchaudio.datasets import LIBRISPEECH -from tqdm import tqdm - - -log = logging.getLogger(__name__) - -SPLITS = [ - "train-clean-100", - "train-clean-360", - "train-other-500", - "dev-clean", - "dev-other", - "test-clean", - "test-other", -] - -MANIFEST_COLUMNS = ["id", "audio", "n_frames", "tgt_text", "speaker"] - - -def process(args): - out_root = Path(args.output_root).absolute() - out_root.mkdir(exist_ok=True) - # Extract features - feature_root = out_root / "fbank80" - feature_root.mkdir(exist_ok=True) - for split in SPLITS: - print(f"Fetching split {split}...") - dataset = LIBRISPEECH(out_root.as_posix(), url=split, download=True) - print("Extracting log mel filter bank features...") - for wav, sample_rate, _, spk_id, chapter_no, utt_no in tqdm(dataset): - sample_id = f"{spk_id}-{chapter_no}-{utt_no}" - extract_fbank_features( - wav, sample_rate, feature_root / f"{sample_id}.npy" - ) - # Pack features into ZIP - zip_path = out_root / "fbank80.zip" - print("ZIPing features...") - create_zip(feature_root, zip_path) - print("Fetching ZIP manifest...") - audio_paths, audio_lengths = get_zip_manifest(zip_path) - # Generate TSV manifest - print("Generating manifest...") - train_text = [] - for split in SPLITS: - manifest = {c: [] for c in MANIFEST_COLUMNS} - dataset = LIBRISPEECH(out_root.as_posix(), url=split) - for _, _, utt, spk_id, chapter_no, utt_no in tqdm(dataset): - sample_id = f"{spk_id}-{chapter_no}-{utt_no}" - manifest["id"].append(sample_id) - 
manifest["audio"].append(audio_paths[sample_id]) - manifest["n_frames"].append(audio_lengths[sample_id]) - manifest["tgt_text"].append(utt.lower()) - manifest["speaker"].append(spk_id) - save_df_to_tsv( - pd.DataFrame.from_dict(manifest), out_root / f"{split}.tsv" - ) - if split.startswith("train"): - train_text.extend(manifest["tgt_text"]) - # Generate vocab - vocab_size = "" if args.vocab_type == "char" else str(args.vocab_size) - spm_filename_prefix = f"spm_{args.vocab_type}{vocab_size}" - with NamedTemporaryFile(mode="w") as f: - for t in train_text: - f.write(t + "\n") - gen_vocab( - Path(f.name), - out_root / spm_filename_prefix, - args.vocab_type, - args.vocab_size, - ) - # Generate config YAML - gen_config_yaml( - out_root, - spm_filename=spm_filename_prefix + ".model", - specaugment_policy="ld" - ) - # Clean up - shutil.rmtree(feature_root) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--output-root", "-o", required=True, type=str) - parser.add_argument( - "--vocab-type", - default="unigram", - required=True, - type=str, - choices=["bpe", "unigram", "char"], - ), - parser.add_argument("--vocab-size", default=10000, type=int) - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/speech_to_text/prep_mtedx_data.py b/kosmos-g/fairseq/examples/speech_to_text/prep_mtedx_data.py deleted file mode 100644 index 2dfd63176..000000000 --- a/kosmos-g/fairseq/examples/speech_to_text/prep_mtedx_data.py +++ /dev/null @@ -1,271 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -import os -from pathlib import Path -import shutil -from itertools import groupby -from tempfile import NamedTemporaryFile -from typing import Tuple - -import pandas as pd -import soundfile as sf -from examples.speech_to_text.data_utils import ( - create_zip, - extract_fbank_features, - filter_manifest_df, - gen_config_yaml, - gen_vocab, - get_zip_manifest, - load_df_from_tsv, - save_df_to_tsv, -) -import torch -from torch.utils.data import Dataset -from tqdm import tqdm - -from fairseq.data.audio.audio_utils import get_waveform, convert_waveform - - -log = logging.getLogger(__name__) - - -MANIFEST_COLUMNS = [ - "id", "audio", "n_frames", "tgt_text", "speaker", "tgt_lang" -] - - -class mTEDx(Dataset): - """ - Create a Dataset for Multilingual TEDx. 
- Each item is a tuple of the form: waveform, sample_rate, source utterance, - target utterance, speaker_id, utterance_id - """ - - SPLITS = ["train", "valid", "test"] - LANGPAIRS = ["es-es", "fr-fr", "pt-pt", "it-it", "ru-ru", "el-el", "ar-ar", - "de-de", "es-en", "es-fr", "es-pt", "es-it", "fr-en", "fr-es", - "fr-pt", "pt-en", "pt-es", "it-en", "it-es", "ru-en", "el-en"] - - def __init__(self, root: str, lang: str, split: str) -> None: - assert split in self.SPLITS and lang in self.LANGPAIRS - _root = Path(root) / f"{lang}" / "data" / split - wav_root, txt_root = _root / "wav", _root / "txt" - assert _root.is_dir() and wav_root.is_dir() and txt_root.is_dir() - # Load audio segments - try: - import yaml - except ImportError: - print( - "Please install PyYAML to load the Multilingual TEDx YAML files" - ) - with open(txt_root / f"{split}.yaml") as f: - segments = yaml.load(f, Loader=yaml.BaseLoader) - # Load source and target utterances - src, tgt = lang.split("-") - for _lang in [src, tgt]: - with open(txt_root / f"{split}.{_lang}") as f: - utterances = [r.strip() for r in f] - assert len(segments) == len(utterances) - for i, u in enumerate(utterances): - segments[i][_lang] = u - # Gather info - self.data = [] - for wav_filename, _seg_group in groupby(segments, lambda x: x["wav"]): - wav_filename = wav_filename.replace(".wav", ".flac") - wav_path = wav_root / wav_filename - sample_rate = sf.info(wav_path.as_posix()).samplerate - seg_group = sorted(_seg_group, key=lambda x: float(x["offset"])) - for i, segment in enumerate(seg_group): - offset = int(float(segment["offset"]) * sample_rate) - n_frames = int(float(segment["duration"]) * sample_rate) - _id = f"{wav_path.stem}_{i}" - self.data.append( - ( - wav_path.as_posix(), - offset, - n_frames, - sample_rate, - segment[src], - segment[tgt], - segment["speaker_id"], - tgt, - _id, - ) - ) - - def __getitem__( - self, n: int - ) -> Tuple[torch.Tensor, int, str, str, str, str, str]: - wav_path, offset, n_frames, sr, src_utt, tgt_utt, spk_id, tgt_lang, \ - utt_id = self.data[n] - waveform, _ = get_waveform(wav_path, frames=n_frames, start=offset) - waveform = torch.from_numpy(waveform) - return waveform, sr, src_utt, tgt_utt, spk_id, tgt_lang, utt_id - - def __len__(self) -> int: - return len(self.data) - - -def process(args): - root = Path(args.data_root).absolute() - for lang in mTEDx.LANGPAIRS: - cur_root = root / f"{lang}" - if not cur_root.is_dir(): - print(f"{cur_root.as_posix()} does not exist. 
Skipped.") - continue - # Extract features - audio_root = cur_root / ("flac" if args.use_audio_input else "fbank80") - audio_root.mkdir(exist_ok=True) - for split in mTEDx.SPLITS: - print(f"Fetching split {split}...") - dataset = mTEDx(root.as_posix(), lang, split) - if args.use_audio_input: - print("Converting audios...") - for waveform, sample_rate, _, _, _, utt_id in tqdm(dataset): - tgt_sample_rate = 16_000 - _wavform, _ = convert_waveform( - waveform, sample_rate, to_mono=True, - to_sample_rate=tgt_sample_rate - ) - sf.write( - (audio_root / f"{utt_id}.flac").as_posix(), - _wavform.numpy(), tgt_sample_rate - ) - else: - print("Extracting log mel filter bank features...") - for waveform, sample_rate, _, _, _, _, utt_id in tqdm(dataset): - extract_fbank_features( - waveform, sample_rate, audio_root / f"{utt_id}.npy" - ) - # Pack features into ZIP - zip_path = cur_root / f"{audio_root.name}.zip" - print("ZIPing audios/features...") - create_zip(audio_root, zip_path) - print("Fetching ZIP manifest...") - audio_paths, audio_lengths = get_zip_manifest(zip_path) - # Generate TSV manifest - print("Generating manifest...") - train_text = [] - for split in mTEDx.SPLITS: - is_train_split = split.startswith("train") - manifest = {c: [] for c in MANIFEST_COLUMNS} - ds = mTEDx(args.data_root, lang, split) - for _, _, src_utt, tgt_utt, spk_id, tgt_lang, utt_id in tqdm(ds): - manifest["id"].append(utt_id) - manifest["audio"].append(audio_paths[utt_id]) - manifest["n_frames"].append(audio_lengths[utt_id]) - manifest["tgt_text"].append( - src_utt if args.task == "asr" else tgt_utt - ) - manifest["speaker"].append(spk_id) - manifest["tgt_lang"].append(tgt_lang) - if is_train_split: - train_text.extend(manifest["tgt_text"]) - df = pd.DataFrame.from_dict(manifest) - df = filter_manifest_df(df, is_train_split=is_train_split) - save_df_to_tsv(df, cur_root / f"{split}_{args.task}.tsv") - # Generate vocab - v_size_str = "" if args.vocab_type == "char" else str(args.vocab_size) - spm_filename_prefix = f"spm_{args.vocab_type}{v_size_str}_{args.task}" - with NamedTemporaryFile(mode="w") as f: - for t in train_text: - f.write(t + "\n") - gen_vocab( - Path(f.name), - cur_root / spm_filename_prefix, - args.vocab_type, - args.vocab_size, - ) - # Generate config YAML - if args.use_audio_input: - gen_config_yaml( - cur_root, - spm_filename=spm_filename_prefix + ".model", - yaml_filename=f"config_{args.task}.yaml", - specaugment_policy=None, - extra={"use_audio_input": True} - ) - else: - gen_config_yaml( - cur_root, - spm_filename=spm_filename_prefix + ".model", - yaml_filename=f"config_{args.task}.yaml", - specaugment_policy="lb", - ) - # Clean up - shutil.rmtree(audio_root) - - -def process_joint(args): - cur_root = Path(args.data_root) - assert all((cur_root / f"{lang}").is_dir() for lang in mTEDx.LANGPAIRS), \ - "do not have downloaded data available for all languages" - # Generate vocab - vocab_size_str = "" if args.vocab_type == "char" else str(args.vocab_size) - spm_filename_prefix = f"spm_{args.vocab_type}{vocab_size_str}_{args.task}" - with NamedTemporaryFile(mode="w") as f: - for lang in mTEDx.LANGPAIRS: - tsv_path = cur_root / f"{lang}" / f"train_{args.task}.tsv" - df = load_df_from_tsv(tsv_path) - for t in df["tgt_text"]: - f.write(t + "\n") - special_symbols = None - if args.joint: - # Add tgt_lang tags to dict - special_symbols = list( - {f'<lang:{lang.split("-")[1]}>' for lang in mTEDx.LANGPAIRS} - ) - gen_vocab( - Path(f.name), - cur_root / spm_filename_prefix, - args.vocab_type, - args.vocab_size, - 
special_symbols=special_symbols - ) - # Generate config YAML - gen_config_yaml( - cur_root, - spm_filename=spm_filename_prefix + ".model", - yaml_filename=f"config_{args.task}.yaml", - specaugment_policy="ld", - prepend_tgt_lang_tag=(args.joint), - ) - # Make symbolic links to manifests - for lang in mTEDx.LANGPAIRS: - for split in mTEDx.SPLITS: - src_path = cur_root / f"{lang}" / f"{split}_{args.task}.tsv" - desc_path = cur_root / f"{split}_{lang}_{args.task}.tsv" - if not desc_path.is_symlink(): - os.symlink(src_path, desc_path) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--data-root", "-d", required=True, type=str) - parser.add_argument( - "--vocab-type", - default="unigram", - required=True, - type=str, - choices=["bpe", "unigram", "char"], - ), - parser.add_argument("--vocab-size", default=8000, type=int) - parser.add_argument("--task", type=str, choices=["asr", "st"]) - parser.add_argument("--joint", action="store_true", help="") - parser.add_argument("--use-audio-input", action="store_true") - args = parser.parse_args() - - if args.joint: - process_joint(args) - else: - process(args) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/speech_to_text/prep_mustc_data.py b/kosmos-g/fairseq/examples/speech_to_text/prep_mustc_data.py deleted file mode 100644 index c2362f76f..000000000 --- a/kosmos-g/fairseq/examples/speech_to_text/prep_mustc_data.py +++ /dev/null @@ -1,294 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -import os -from pathlib import Path -import shutil -from itertools import groupby -from tempfile import NamedTemporaryFile -from typing import Tuple - -import numpy as np -import pandas as pd -import soundfile as sf -from examples.speech_to_text.data_utils import ( - create_zip, - extract_fbank_features, - filter_manifest_df, - gen_config_yaml, - gen_vocab, - get_zip_manifest, - load_df_from_tsv, - save_df_to_tsv, - cal_gcmvn_stats, -) -import torch -from torch.utils.data import Dataset -from tqdm import tqdm - -from fairseq.data.audio.audio_utils import get_waveform, convert_waveform - - -log = logging.getLogger(__name__) - - -MANIFEST_COLUMNS = ["id", "audio", "n_frames", "tgt_text", "speaker"] - - -class MUSTC(Dataset): - """ - Create a Dataset for MuST-C. 
Each item is a tuple of the form: - waveform, sample_rate, source utterance, target utterance, speaker_id, - utterance_id - """ - - SPLITS = ["train", "dev", "tst-COMMON", "tst-HE"] - LANGUAGES = ["de", "es", "fr", "it", "nl", "pt", "ro", "ru"] - - def __init__(self, root: str, lang: str, split: str) -> None: - assert split in self.SPLITS and lang in self.LANGUAGES - _root = Path(root) / f"en-{lang}" / "data" / split - wav_root, txt_root = _root / "wav", _root / "txt" - assert _root.is_dir() and wav_root.is_dir() and txt_root.is_dir() - # Load audio segments - try: - import yaml - except ImportError: - print("Please install PyYAML to load the MuST-C YAML files") - with open(txt_root / f"{split}.yaml") as f: - segments = yaml.load(f, Loader=yaml.BaseLoader) - # Load source and target utterances - for _lang in ["en", lang]: - with open(txt_root / f"{split}.{_lang}") as f: - utterances = [r.strip() for r in f] - assert len(segments) == len(utterances) - for i, u in enumerate(utterances): - segments[i][_lang] = u - # Gather info - self.data = [] - for wav_filename, _seg_group in groupby(segments, lambda x: x["wav"]): - wav_path = wav_root / wav_filename - sample_rate = sf.info(wav_path.as_posix()).samplerate - seg_group = sorted(_seg_group, key=lambda x: x["offset"]) - for i, segment in enumerate(seg_group): - offset = int(float(segment["offset"]) * sample_rate) - n_frames = int(float(segment["duration"]) * sample_rate) - _id = f"{wav_path.stem}_{i}" - self.data.append( - ( - wav_path.as_posix(), - offset, - n_frames, - sample_rate, - segment["en"], - segment[lang], - segment["speaker_id"], - _id, - ) - ) - - def __getitem__( - self, n: int - ) -> Tuple[torch.Tensor, int, str, str, str, str]: - wav_path, offset, n_frames, sr, src_utt, tgt_utt, spk_id, \ - utt_id = self.data[n] - waveform, _ = get_waveform(wav_path, frames=n_frames, start=offset) - waveform = torch.from_numpy(waveform) - return waveform, sr, src_utt, tgt_utt, spk_id, utt_id - - def __len__(self) -> int: - return len(self.data) - - -def process(args): - root = Path(args.data_root).absolute() - for lang in MUSTC.LANGUAGES: - cur_root = root / f"en-{lang}" - if not cur_root.is_dir(): - print(f"{cur_root.as_posix()} does not exist. 
Skipped.") - continue - # Extract features - audio_root = cur_root / ("flac" if args.use_audio_input else "fbank80") - audio_root.mkdir(exist_ok=True) - - for split in MUSTC.SPLITS: - print(f"Fetching split {split}...") - dataset = MUSTC(root.as_posix(), lang, split) - if args.use_audio_input: - print("Converting audios...") - for waveform, sample_rate, _, _, _, utt_id in tqdm(dataset): - tgt_sample_rate = 16_000 - _wavform, _ = convert_waveform( - waveform, sample_rate, to_mono=True, - to_sample_rate=tgt_sample_rate - ) - sf.write( - (audio_root / f"{utt_id}.flac").as_posix(), - _wavform.T.numpy(), tgt_sample_rate - ) - else: - print("Extracting log mel filter bank features...") - gcmvn_feature_list = [] - if split == 'train' and args.cmvn_type == "global": - print("And estimating cepstral mean and variance stats...") - - for waveform, sample_rate, _, _, _, utt_id in tqdm(dataset): - features = extract_fbank_features( - waveform, sample_rate, audio_root / f"{utt_id}.npy" - ) - if split == 'train' and args.cmvn_type == "global": - if len(gcmvn_feature_list) < args.gcmvn_max_num: - gcmvn_feature_list.append(features) - - if split == 'train' and args.cmvn_type == "global": - # Estimate and save cmv - stats = cal_gcmvn_stats(gcmvn_feature_list) - with open(cur_root / "gcmvn.npz", "wb") as f: - np.savez(f, mean=stats["mean"], std=stats["std"]) - - # Pack features into ZIP - zip_path = cur_root / f"{audio_root.name}.zip" - print("ZIPing audios/features...") - create_zip(audio_root, zip_path) - print("Fetching ZIP manifest...") - audio_paths, audio_lengths = get_zip_manifest( - zip_path, - is_audio=args.use_audio_input, - ) - # Generate TSV manifest - print("Generating manifest...") - train_text = [] - for split in MUSTC.SPLITS: - is_train_split = split.startswith("train") - manifest = {c: [] for c in MANIFEST_COLUMNS} - dataset = MUSTC(args.data_root, lang, split) - for _, _, src_utt, tgt_utt, speaker_id, utt_id in tqdm(dataset): - manifest["id"].append(utt_id) - manifest["audio"].append(audio_paths[utt_id]) - manifest["n_frames"].append(audio_lengths[utt_id]) - manifest["tgt_text"].append( - src_utt if args.task == "asr" else tgt_utt - ) - manifest["speaker"].append(speaker_id) - if is_train_split: - train_text.extend(manifest["tgt_text"]) - df = pd.DataFrame.from_dict(manifest) - df = filter_manifest_df(df, is_train_split=is_train_split) - save_df_to_tsv(df, cur_root / f"{split}_{args.task}.tsv") - # Generate vocab - v_size_str = "" if args.vocab_type == "char" else str(args.vocab_size) - spm_filename_prefix = f"spm_{args.vocab_type}{v_size_str}_{args.task}" - with NamedTemporaryFile(mode="w") as f: - for t in train_text: - f.write(t + "\n") - gen_vocab( - Path(f.name), - cur_root / spm_filename_prefix, - args.vocab_type, - args.vocab_size, - ) - # Generate config YAML - if args.use_audio_input: - gen_config_yaml( - cur_root, - spm_filename=spm_filename_prefix + ".model", - yaml_filename=f"config_{args.task}.yaml", - specaugment_policy=None, - extra={"use_audio_input": True} - ) - else: - gen_config_yaml( - cur_root, - spm_filename=spm_filename_prefix + ".model", - yaml_filename=f"config_{args.task}.yaml", - specaugment_policy="lb", - cmvn_type=args.cmvn_type, - gcmvn_path=( - cur_root / "gcmvn.npz" if args.cmvn_type == "global" - else None - ), - ) - # Clean up - shutil.rmtree(audio_root) - - -def process_joint(args): - cur_root = Path(args.data_root) - assert all( - (cur_root / f"en-{lang}").is_dir() for lang in MUSTC.LANGUAGES - ), "do not have downloaded data available for all 8 
languages" - # Generate vocab - vocab_size_str = "" if args.vocab_type == "char" else str(args.vocab_size) - spm_filename_prefix = f"spm_{args.vocab_type}{vocab_size_str}_{args.task}" - with NamedTemporaryFile(mode="w") as f: - for lang in MUSTC.LANGUAGES: - tsv_path = cur_root / f"en-{lang}" / f"train_{args.task}.tsv" - df = load_df_from_tsv(tsv_path) - for t in df["tgt_text"]: - f.write(t + "\n") - special_symbols = None - if args.task == 'st': - special_symbols = [f'<lang:{lang}>' for lang in MUSTC.LANGUAGES] - gen_vocab( - Path(f.name), - cur_root / spm_filename_prefix, - args.vocab_type, - args.vocab_size, - special_symbols=special_symbols - ) - # Generate config YAML - gen_config_yaml( - cur_root, - spm_filename=spm_filename_prefix + ".model", - yaml_filename=f"config_{args.task}.yaml", - specaugment_policy="ld", - prepend_tgt_lang_tag=(args.task == "st"), - ) - # Make symbolic links to manifests - for lang in MUSTC.LANGUAGES: - for split in MUSTC.SPLITS: - src_path = cur_root / f"en-{lang}" / f"{split}_{args.task}.tsv" - desc_path = cur_root / f"{split}_{lang}_{args.task}.tsv" - if not desc_path.is_symlink(): - os.symlink(src_path, desc_path) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--data-root", "-d", required=True, type=str) - parser.add_argument( - "--vocab-type", - default="unigram", - required=True, - type=str, - choices=["bpe", "unigram", "char"], - ), - parser.add_argument("--vocab-size", default=8000, type=int) - parser.add_argument("--task", type=str, choices=["asr", "st"]) - parser.add_argument("--joint", action="store_true", help="") - parser.add_argument( - "--cmvn-type", default="utterance", - choices=["global", "utterance"], - help="The type of cepstral mean and variance normalization" - ) - parser.add_argument( - "--gcmvn-max-num", default=150000, type=int, - help="Maximum number of sentences to use to estimate global mean and " - "variance" - ) - parser.add_argument("--use-audio-input", action="store_true") - args = parser.parse_args() - - if args.joint: - process_joint(args) - else: - process(args) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/speech_to_text/seg_mustc_data.py b/kosmos-g/fairseq/examples/speech_to_text/seg_mustc_data.py deleted file mode 100644 index 1ee665d63..000000000 --- a/kosmos-g/fairseq/examples/speech_to_text/seg_mustc_data.py +++ /dev/null @@ -1,54 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -from pathlib import Path -import soundfile as sf -from examples.speech_to_text.prep_mustc_data import ( - MUSTC -) - -from tqdm import tqdm - -log = logging.getLogger(__name__) - - -def main(args): - root = Path(args.data_root).absolute() - lang = args.lang - split = args.split - - cur_root = root / f"en-{lang}" - assert cur_root.is_dir(), ( - f"{cur_root.as_posix()} does not exist. Skipped." 
- ) - - dataset = MUSTC(root.as_posix(), lang, split) - output = Path(args.output).absolute() - output.mkdir(exist_ok=True) - f_text = open(output / f"{split}.{lang}", "w") - f_wav_list = open(output / f"{split}.wav_list", "w") - for waveform, sample_rate, _, text, _, utt_id in tqdm(dataset): - sf.write( - output / f"{utt_id}.wav", - waveform.squeeze(0).numpy(), - samplerate=int(sample_rate) - ) - f_text.write(text + "\n") - f_wav_list.write(str(output / f"{utt_id}.wav") + "\n") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--data-root", "-d", required=True, type=str) - parser.add_argument("--task", required=True, type=str, choices=["asr", "st"]) - parser.add_argument("--lang", required=True, type=str) - parser.add_argument("--output", required=True, type=str) - parser.add_argument("--split", required=True, choices=MUSTC.SPLITS) - args = parser.parse_args() - - main(args) diff --git a/kosmos-g/fairseq/examples/speech_to_text/simultaneous_translation/agents/fairseq_simul_st_agent.py b/kosmos-g/fairseq/examples/speech_to_text/simultaneous_translation/agents/fairseq_simul_st_agent.py deleted file mode 100644 index 61617a173..000000000 --- a/kosmos-g/fairseq/examples/speech_to_text/simultaneous_translation/agents/fairseq_simul_st_agent.py +++ /dev/null @@ -1,363 +0,0 @@ -import math -import os -import json -import numpy as np -import torch -import torchaudio.compliance.kaldi as kaldi -import yaml -from fairseq import checkpoint_utils, tasks -from fairseq.file_io import PathManager - -try: - from simuleval import READ_ACTION, WRITE_ACTION, DEFAULT_EOS - from simuleval.agents import SpeechAgent - from simuleval.states import ListEntry, SpeechStates -except ImportError: - print("Please install simuleval 'pip install simuleval'") - -SHIFT_SIZE = 10 -WINDOW_SIZE = 25 -SAMPLE_RATE = 16000 -FEATURE_DIM = 80 -BOW_PREFIX = "\u2581" - - -class OnlineFeatureExtractor: - """ - Extract speech feature on the fly. 
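-    Maintains a buffer of residual samples between calls so that successive
-    fbank analysis windows stay aligned across incoming segments.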
-    """
-
-    def __init__(self, args):
-        self.shift_size = args.shift_size
-        self.window_size = args.window_size
-        assert self.window_size >= self.shift_size
-
-        self.sample_rate = args.sample_rate
-        self.feature_dim = args.feature_dim
-        self.num_samples_per_shift = int(self.shift_size * self.sample_rate / 1000)
-        self.num_samples_per_window = int(self.window_size * self.sample_rate / 1000)
-        self.len_ms_to_samples = lambda x: x * self.sample_rate / 1000
-        self.previous_residual_samples = []
-        self.global_cmvn = args.global_cmvn
-
-    def clear_cache(self):
-        self.previous_residual_samples = []
-
-    def __call__(self, new_samples):
-        samples = self.previous_residual_samples + new_samples
-        if len(samples) < self.num_samples_per_window:
-            self.previous_residual_samples = samples
-            return
-
-        # num_frames is the number of frames from the new segment
-        num_frames = math.floor(
-            (len(samples) - self.len_ms_to_samples(self.window_size - self.shift_size))
-            / self.num_samples_per_shift
-        )
-
-        # the number of frames used for feature extraction,
-        # including some part of the previous segment
-        effective_num_samples = int(
-            num_frames * self.len_ms_to_samples(self.shift_size)
-            + self.len_ms_to_samples(self.window_size - self.shift_size)
-        )
-
-        input_samples = samples[:effective_num_samples]
-        self.previous_residual_samples = samples[
-            num_frames * self.num_samples_per_shift:
-        ]
-
-        torch.manual_seed(1)
-        output = kaldi.fbank(
-            torch.FloatTensor(input_samples).unsqueeze(0),
-            num_mel_bins=self.feature_dim,
-            frame_length=self.window_size,
-            frame_shift=self.shift_size,
-        ).numpy()
-
-        output = self.transform(output)
-
-        return torch.from_numpy(output)
-
-    def transform(self, input):
-        if self.global_cmvn is None:
-            return input
-
-        mean = self.global_cmvn["mean"]
-        std = self.global_cmvn["std"]
-
-        x = np.subtract(input, mean)
-        x = np.divide(x, std)
-        return x
-
-
-class TensorListEntry(ListEntry):
-    """
-    Data structure to store a list of tensors.
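-    Appended tensors are concatenated along dim 0 rather than stored as
-    separate list items.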
- """ - - def append(self, value): - - if len(self.value) == 0: - self.value = value - return - - self.value = torch.cat([self.value] + [value], dim=0) - - def info(self): - return { - "type": str(self.new_value_type), - "length": self.__len__(), - "value": "" if type(self.value) is list else self.value.size(), - } - - -class FairseqSimulSTAgent(SpeechAgent): - - speech_segment_size = 40 # in ms, 4 pooling ratio * 10 ms step size - - def __init__(self, args): - super().__init__(args) - - self.eos = DEFAULT_EOS - - self.gpu = getattr(args, "gpu", False) - - self.args = args - - self.load_model_vocab(args) - - if getattr( - self.model.decoder.layers[0].encoder_attn, - 'pre_decision_ratio', - None - ) is not None: - self.speech_segment_size *= ( - self.model.decoder.layers[0].encoder_attn.pre_decision_ratio - ) - - args.global_cmvn = None - if args.config: - with open(os.path.join(args.data_bin, args.config), "r") as f: - config = yaml.load(f, Loader=yaml.BaseLoader) - - if "global_cmvn" in config: - args.global_cmvn = np.load(config["global_cmvn"]["stats_npz_path"]) - - if args.global_stats: - with PathManager.open(args.global_stats, "r") as f: - global_cmvn = json.loads(f.read()) - self.global_cmvn = {"mean": global_cmvn["mean"], "std": global_cmvn["stddev"]} - - self.feature_extractor = OnlineFeatureExtractor(args) - - self.max_len = args.max_len - - self.force_finish = args.force_finish - - torch.set_grad_enabled(False) - - def build_states(self, args, client, sentence_id): - # Initialize states here, for example add customized entry to states - # This function will be called at beginning of every new sentence - states = SpeechStates(args, client, sentence_id, self) - self.initialize_states(states) - return states - - def to_device(self, tensor): - if self.gpu: - return tensor.cuda() - else: - return tensor.cpu() - - @staticmethod - def add_args(parser): - # fmt: off - parser.add_argument('--model-path', type=str, required=True, - help='path to your pretrained model.') - parser.add_argument("--data-bin", type=str, required=True, - help="Path of data binary") - parser.add_argument("--config", type=str, default=None, - help="Path to config yaml file") - parser.add_argument("--global-stats", type=str, default=None, - help="Path to json file containing cmvn stats") - parser.add_argument("--tgt-splitter-type", type=str, default="SentencePiece", - help="Subword splitter type for target text") - parser.add_argument("--tgt-splitter-path", type=str, default=None, - help="Subword splitter model path for target text") - parser.add_argument("--user-dir", type=str, default="examples/simultaneous_translation", - help="User directory for simultaneous translation") - parser.add_argument("--max-len", type=int, default=200, - help="Max length of translation") - parser.add_argument("--force-finish", default=False, action="store_true", - help="Force the model to finish the hypothsis if the source is not finished") - parser.add_argument("--shift-size", type=int, default=SHIFT_SIZE, - help="Shift size of feature extraction window.") - parser.add_argument("--window-size", type=int, default=WINDOW_SIZE, - help="Window size of feature extraction window.") - parser.add_argument("--sample-rate", type=int, default=SAMPLE_RATE, - help="Sample rate") - parser.add_argument("--feature-dim", type=int, default=FEATURE_DIM, - help="Acoustic feature dimension.") - - # fmt: on - return parser - - def load_model_vocab(self, args): - - filename = args.model_path - if not os.path.exists(filename): - raise IOError("Model file 
not found: {}".format(filename)) - - state = checkpoint_utils.load_checkpoint_to_cpu(filename) - - task_args = state["cfg"]["task"] - task_args.data = args.data_bin - - if args.config is not None: - task_args.config_yaml = args.config - - task = tasks.setup_task(task_args) - - # build model for ensemble - state["cfg"]["model"].load_pretrained_encoder_from = None - state["cfg"]["model"].load_pretrained_decoder_from = None - self.model = task.build_model(state["cfg"]["model"]) - self.model.load_state_dict(state["model"], strict=True) - self.model.eval() - self.model.share_memory() - - if self.gpu: - self.model.cuda() - - # Set dictionary - self.dict = {} - self.dict["tgt"] = task.target_dictionary - - def initialize_states(self, states): - self.feature_extractor.clear_cache() - states.units.source = TensorListEntry() - states.units.target = ListEntry() - states.incremental_states = dict() - - def segment_to_units(self, segment, states): - # Convert speech samples to features - features = self.feature_extractor(segment) - if features is not None: - return [features] - else: - return [] - - def units_to_segment(self, units, states): - # Merge sub word to full word. - if self.model.decoder.dictionary.eos() == units[0]: - return DEFAULT_EOS - - segment = [] - if None in units.value: - units.value.remove(None) - - for index in units: - if index is None: - units.pop() - token = self.model.decoder.dictionary.string([index]) - if token.startswith(BOW_PREFIX): - if len(segment) == 0: - segment += [token.replace(BOW_PREFIX, "")] - else: - for j in range(len(segment)): - units.pop() - - string_to_return = ["".join(segment)] - - if self.model.decoder.dictionary.eos() == units[0]: - string_to_return += [DEFAULT_EOS] - - return string_to_return - else: - segment += [token.replace(BOW_PREFIX, "")] - - if ( - len(units) > 0 - and self.model.decoder.dictionary.eos() == units[-1] - or len(states.units.target) > self.max_len - ): - tokens = [self.model.decoder.dictionary.string([unit]) for unit in units] - return ["".join(tokens).replace(BOW_PREFIX, "")] + [DEFAULT_EOS] - - return None - - def update_model_encoder(self, states): - if len(states.units.source) == 0: - return - src_indices = self.to_device( - states.units.source.value.unsqueeze(0) - ) - src_lengths = self.to_device( - torch.LongTensor([states.units.source.value.size(0)]) - ) - - states.encoder_states = self.model.encoder(src_indices, src_lengths) - torch.cuda.empty_cache() - - def update_states_read(self, states): - # Happens after a read action. 
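        # Re-encode the full source prefix seen so far so that the policy
-        # acts on up-to-date encoder states.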
- self.update_model_encoder(states) - - def policy(self, states): - if not getattr(states, "encoder_states", None): - return READ_ACTION - - tgt_indices = self.to_device( - torch.LongTensor( - [self.model.decoder.dictionary.eos()] - + [x for x in states.units.target.value if x is not None] - ).unsqueeze(0) - ) - - states.incremental_states["steps"] = { - "src": states.encoder_states["encoder_out"][0].size(0), - "tgt": 1 + len(states.units.target), - } - - states.incremental_states["online"] = {"only": torch.tensor(not states.finish_read())} - - x, outputs = self.model.decoder.forward( - prev_output_tokens=tgt_indices, - encoder_out=states.encoder_states, - incremental_state=states.incremental_states, - ) - - states.decoder_out = x - - states.decoder_out_extra = outputs - - torch.cuda.empty_cache() - - if outputs.action == 0: - return READ_ACTION - else: - return WRITE_ACTION - - def predict(self, states): - decoder_states = states.decoder_out - - lprobs = self.model.get_normalized_probs( - [decoder_states[:, -1:]], log_probs=True - ) - - index = lprobs.argmax(dim=-1) - - index = index[0, 0].item() - - if ( - self.force_finish - and index == self.model.decoder.dictionary.eos() - and not states.finish_read() - ): - # If we want to force finish the translation - # (don't stop before finish reading), return a None - # self.model.decoder.clear_cache(states.incremental_states) - index = None - - return index diff --git a/kosmos-g/fairseq/examples/stories/README.md b/kosmos-g/fairseq/examples/stories/README.md deleted file mode 100644 index 588941edd..000000000 --- a/kosmos-g/fairseq/examples/stories/README.md +++ /dev/null @@ -1,66 +0,0 @@ -# Hierarchical Neural Story Generation (Fan et al., 2018) - -The following commands provide an example of pre-processing data, training a model, and generating text for story generation with the WritingPrompts dataset. - -## Pre-trained models - -Description | Dataset | Model | Test set(s) ----|---|---|--- -Stories with Convolutional Model <br> ([Fan et al., 2018](https://arxiv.org/abs/1805.04833)) | [WritingPrompts](https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/stories_checkpoint.tar.bz2) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/stories_test.tar.bz2) - -We provide sample stories generated by the [convolutional seq2seq model](https://dl.fbaipublicfiles.com/fairseq/data/seq2seq_stories.txt) and [fusion model](https://dl.fbaipublicfiles.com/fairseq/data/fusion_stories.txt) from [Fan et al., 2018](https://arxiv.org/abs/1805.04833). The corresponding prompts for the fusion model can be found [here](https://dl.fbaipublicfiles.com/fairseq/data/fusion_prompts.txt). Note that there are unk in the file, as we modeled a small full vocabulary (no BPE or pre-training). We did not use these unk prompts for human evaluation. - -## Dataset - -The dataset can be downloaded like this: - -```bash -cd examples/stories -curl https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz | tar xvzf - -``` - -and contains a train, test, and valid split. The dataset is described here: https://arxiv.org/abs/1805.04833. We model only the first 1000 words of each story, including one newLine token. - -## Example usage - -First we will preprocess the dataset. Note that the dataset release is the full data, but the paper models the first 1000 words of each story. 
Here is example code that trims the dataset to the first 1000 words of each story:
-```python
-data = ["train", "test", "valid"]
-for name in data:
-    with open(name + ".wp_target") as f:
-        stories = f.readlines()
-    stories = [" ".join(i.split()[0:1000]) for i in stories]
-    with open(name + ".wp_target", "w") as o:
-        for line in stories:
-            o.write(line.strip() + "\n")
-```
-
-Once we've trimmed the data we can binarize it and train our model:
-```bash
-# Binarize the dataset:
-export TEXT=examples/stories/writingPrompts
-fairseq-preprocess --source-lang wp_source --target-lang wp_target \
-  --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \
-  --destdir data-bin/writingPrompts --padding-factor 1 --thresholdtgt 10 --thresholdsrc 10
-
-# Train the model:
-fairseq-train data-bin/writingPrompts -a fconv_self_att_wp --lr 0.25 --optimizer nag --clip-norm 0.1 --max-tokens 1500 --lr-scheduler reduce_lr_on_plateau --decoder-attention True --encoder-attention False --criterion label_smoothed_cross_entropy --weight-decay .0000001 --label-smoothing 0 --source-lang wp_source --target-lang wp_target --gated-attention True --self-attention True --project-input True --pretrained False
-
-# Train a fusion model:
-# add the arguments: --pretrained True --pretrained-checkpoint path/to/checkpoint
-
-# Generate:
-# Note: to load the pretrained model at generation time, you need to pass a model-override argument telling the fusion model where you have placed the pretrained checkpoint. By default, it will load the exact path of the fusion model's pretrained model from training time. You should use model-override if you have moved the pretrained model (or are using our provided models). If you are generating from a non-fusion model, the model-override argument is not necessary.
-
-fairseq-generate data-bin/writingPrompts --path /path/to/trained/model/checkpoint_best.pt --batch-size 32 --beam 1 --sampling --sampling-topk 10 --temperature 0.8 --nbest 1 --model-overrides "{'pretrained_checkpoint':'/path/to/pretrained/model/checkpoint'}"
-```
-
-## Citation
-```bibtex
-@inproceedings{fan2018hierarchical,
-  title = {Hierarchical Neural Story Generation},
-  author = {Fan, Angela and Lewis, Mike and Dauphin, Yann},
-  booktitle = {Conference of the Association for Computational Linguistics (ACL)},
-  year = 2018,
-}
-```
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/README.md b/kosmos-g/fairseq/examples/textless_nlp/gslm/README.md deleted file mode 100644 index 7a76ffd57..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/README.md +++ /dev/null @@ -1,21 +0,0 @@
-# Generative Spoken Language Modeling
-
-* [Paper](https://arxiv.org/abs/2102.01192)
-* [Demo](https://speechbot.github.io/gslm/index.html)
-
-We build and evaluate generative speech2speech systems using [Log Mel Filterbank](https://pytorch.org/audio/stable/compliance.kaldi.html#fbank), [Modified CPC](https://github.com/facebookresearch/CPC_audio), [HuBERT Base](https://github.com/pytorch/fairseq/tree/main/examples/hubert) and [Wav2Vec 2.0 Large](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec). Our system is composed of three components, namely, *speech2unit*, *ulm* and *unit2speech*. We describe the models and usage of these components in their respective sub-directories. See the links below.
-
-## Speech to Unit Model (speech2unit)
-The speech-to-unit model quantizes raw speech into learned discrete speech units. [More details](speech2unit)
-
-## Unit Language Model (ulm)
-The unit language model is a generative language model trained on discrete speech units. [More details](ulm)
-
-## Unit to Speech Model (unit2speech)
-The unit-to-speech model synthesizes speech from discrete speech units. [More details](unit2speech)
-
-## Metrics
-We show how to compute ASR-based metrics as well as the zero-shot metrics proposed in our paper [here](metrics).
-
-## Tools
-We share two tools to resynthesize a given spoken utterance, and to generate novel spoken language given a spoken prompt. [More details](tools)
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/README.md b/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/README.md deleted file mode 100644 index 0a63e2f0d..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/README.md +++ /dev/null @@ -1,10 +0,0 @@
-# GSLM Metrics
-
-## ASR Metrics
-The suite of metrics here uses an ASR model to transcribe the synthesized speech into text, and then applies text-based metrics. We also use the word error rate of the ASR transcription itself as one of the metrics. [More details](asr_metrics)
-
-## ABX Metrics
-We use [ABX](https://www.semanticscholar.org/paper/ABX-Discriminability-Measures-and-Applications-Schatz/13d3537228f728c1063cc83743cb118bba3367a0) to evaluate how well separated phonetic categories are under the quantized representations. [More details](abx_metrics)
-
-## sWUGGY and sBLIMP
-We refer to the [ZeroSpeech challenge](https://www.zerospeech.com/2021/track_s.html#scoring-based-metrics) for details on the sWUGGY and sBLIMP metrics.
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/README.md b/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/README.md deleted file mode 100644 index aa2560f04..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/README.md +++ /dev/null @@ -1,77 +0,0 @@
-# ABX-based evaluation
-
-ABX is used to evaluate the quality of the obtained discrete units.
-
-The life cycle of the ABX-based evaluation for the speech-to-unit model consists of the following steps:
-1. Training an acoustic model (or using an existing one) ([description](./../..))
-2. Quantizing speech by learning a K-means clustering model ([description](./../..))
-3. Computing discrete features for ABX computation using the learned clusters
-4. Computing the ABX score over the discrete features using [libri-light's ABX evaluation script][ll-abx]
-
-Here we assume that you have already gone through the first two steps and focus solely on extracting features and computing ABX scores.
-
-## Libri-light setup
-
-Follow [libri-light's instructions][ll-instructions] for installation and [ABX evaluation setup][ll-abx] (including the download of the data items required for ABX computation).
-
-## Computing ABX
-
-### Dumping quantized features
-
-The first step of the ABX computation is to dump the quantized representations corresponding to the test files.
-
-```shell
-TYPE="hubert"
-LAYER=6
-CKPT_PATH="<PATH_TO_HUBERT_MODEL_CHECKPOINT_FILE>"
-KM_MODEL_PATH="<PATH_TO_PRETRAINED_KM_MODEL_FILE>"
-
-SUBSET="dev-clean"
-MANIFEST="<PATH_TO_MANIFEST_FOR_LS_DEV-CLEAN>"
-DATA_DIR="<PATH_TO_DIR_TO_STORE_FEATURES>/$SUBSET"
-
-PYTHONPATH=. python examples/textless_nlp/gslm/metrics/abx_metrics/dump_abx_feats.py \
-    --feature_type $TYPE \
-    --kmeans_model_path $KM_MODEL_PATH \
-    --checkpoint_path $CKPT_PATH \
-    --layer $LAYER \
-    --manifest_path $MANIFEST \
-    --out_dir_path $DATA_DIR \
-    --extension ".flac"
-```
-
-Again, the manifest file follows the same structure as elsewhere in the codebase.
-
-### Compute ABX with Libri-light
-
-Use libri-light's `eval_ABX.py` script (within the appropriate environment) as follows:
-
-```shell
-LIBRILIGHT_ROOT="<PATH_TO_LIBRILIGHT>"
-
-SUBSET="dev-clean"
-DATA_DIR="<PATH_TO_DIR_TO_STORE_FEATURES>/$SUBSET"
-ITEM_FILE_PATH="$LIBRILIGHT_ROOT/eval/ABX_data/$SUBSET.item"
-OUT_DIR="<PATH_TO_DIR_TO_STORE_ABX_SCORES>/$SUBSET"
-
-FILE_EXTENSION=".npy"
-FEATURE_SIZE=0.02 # depends on the model used
-
-PYTHONPATH=$LIBRILIGHT_ROOT \
-    python $LIBRILIGHT_ROOT/eval/eval_ABX.py \
-    $DATA_DIR \
-    $ITEM_FILE_PATH \
-    --file_extension $FILE_EXTENSION \
-    --feature_size $FEATURE_SIZE \
-    --out $OUT_DIR \
-    --mode "all"
-```
-
-Note that `FEATURE_SIZE` depends on the model type used to extract the acoustic features:
-* For HuBERT and Wav2Vec 2.0, use `FEATURE_SIZE=0.02`
-* For CPC and Log Mel, use `FEATURE_SIZE=0.01`
-
-If you have a GPU available, make sure you add the `--cuda` flag for faster computation.
-
-[ll-instructions]: https://github.com/facebookresearch/libri-light
-[ll-abx]: https://github.com/facebookresearch/libri-light/tree/master/eval#abx
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/dump_abx_feats.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/dump_abx_feats.py deleted file mode 100644 index 41cf55897..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/abx_metrics/dump_abx_feats.py +++ /dev/null @@ -1,107 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import logging
-import os
-
-import joblib
-import numpy as np
-
-from examples.textless_nlp.gslm.speech2unit.clustering.utils import get_audio_files
-from examples.textless_nlp.gslm.speech2unit.pretrained.utils import get_features
-
-def get_logger():
-    log_format = "[%(asctime)s] [%(levelname)s]: %(message)s"
-    logging.basicConfig(format=log_format, level=logging.INFO)
-    logger = logging.getLogger(__name__)
-    return logger
-
-def get_parser():
-    parser = argparse.ArgumentParser(
-        description="Quantize using K-means clustering over acoustic features."
- ) - parser.add_argument( - "--feature_type", - type=str, - choices=["logmel", "hubert", "w2v2", "cpc"], - default=None, - required=True, - help="Acoustic feature type", - ) - parser.add_argument( - "--kmeans_model_path", - type=str, - required=True, - help="K-means model file path to use for inference", - ) - parser.add_argument( - "--manifest_path", - type=str, - default=None, - help="Manifest file containing the root dir and file names", - ) - parser.add_argument( - "--checkpoint_path", - type=str, - help="Pretrained model checkpoint", - ) - parser.add_argument( - "--layer", - type=int, - help="The layer of the pretrained model to extract features from", - default=-1, - ) - parser.add_argument( - "--out_dir_path", - required=True, - type=str, - help="File path of quantized output.", - ) - parser.add_argument( - "--extension", type=str, default=".flac", help="Features file path" - ) - return parser - - -def one_hot(feat, n_clusters): - return np.eye(n_clusters)[feat] - -def main(args, logger): - # Feature extraction - logger.info(f"Extracting {args.feature_type} acoustic features...") - features_batch = get_features( - feature_type=args.feature_type, - checkpoint_path=args.checkpoint_path, - layer=args.layer, - manifest_path=args.manifest_path, - sample_pct=1.0, - flatten=False, - ) - logger.info(f"Features extracted for {len(features_batch)} utterances.\n") - logger.info(f"Dimensionality of representation = {features_batch[0].shape[1]}") - - logger.info(f"Loading K-means model from {args.kmeans_model_path} ...") - kmeans_model = joblib.load(open(args.kmeans_model_path, "rb")) - kmeans_model.verbose = False - - _, fnames, _ = get_audio_files(args.manifest_path) - - os.makedirs(args.out_dir_path, exist_ok=True) - logger.info(f"Writing quantized features to {args.out_dir_path}") - for i, feats in enumerate(features_batch): - pred = kmeans_model.predict(feats) - emb = one_hot(pred, kmeans_model.n_clusters) - base_fname = os.path.basename(fnames[i]).rstrip(args.extension) - output_path = os.path.join(args.out_dir_path, f"{base_fname}.npy") - with open(output_path, "wb") as f: - np.save(f, emb) - -if __name__ == "__main__": - parser = get_parser() - args = parser.parse_args() - logger = get_logger() - logger.info(args) - main(args, logger) diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/README.md b/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/README.md deleted file mode 100644 index 90741f42b..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/README.md +++ /dev/null @@ -1,87 +0,0 @@ -# ASR-based evaluation - -Overall, the life cycle of the ASR-based evaluation for an ULM contains the following steps: - 1. Training an ULM and sampling from it [[description]](./../../ulm) - 2. Running UTS on the sampled unit sequences [[description]](./../../unit2speech) - 3. Pre-processing for the ASR (down-sampling to 16 KHz, aligning length of the generated audio with ground-truth utterances) - 4. Running ASR - 5. Calculation of the post-ASR evaluation metrics - -Here we assume that you have already went throught the first two steps and focus on the rest. - -## Preprocessing -### Down-sampling to 16KHz -The bulk conversion can be done by running -```bash - python $FAIRSEQ_ROOT/examples/textless_nlp/gslm/unit2speech/convert_to_16k.py $UTS_OUTPUT $UTS_OUTPUT_DOWNSAMPLE - ``` - where `$UTS_OUTPUT` specifies the directory with the generated audio and `$UTS_OUTPUT_DOWNSAMPLE` is the directory where downsampled audio would be saved. 
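-
-For illustration only, the core of such a conversion could look like the sketch below (`downsample_dir` is a hypothetical helper, assuming `torchaudio` is available; `convert_to_16k.py` remains the supported entry point):
-```python
-import pathlib
-
-import torchaudio
-import torchaudio.functional as F
-
-
-def downsample_dir(src_dir: str, tgt_dir: str, target_sr: int = 16_000) -> None:
-    """Resample every .wav under src_dir to target_sr and write it to tgt_dir."""
-    out = pathlib.Path(tgt_dir)
-    out.mkdir(parents=True, exist_ok=True)
-    for wav in pathlib.Path(src_dir).glob("*.wav"):
-        x, sr = torchaudio.load(str(wav))
-        if sr != target_sr:
-            x = F.resample(x, orig_freq=sr, new_freq=target_sr)
-        torchaudio.save(str(out / wav.name), x, target_sr)
-```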
-
-### Matching by length
-This step is somewhat optional. However, if you want to compare the fluency and diversity of a generated speech utterance to those of the ground-truth speech with the same prefix, it is a good idea to force them to be of the same length.
-```bash
-python $FAIRSEQ_ROOT/examples/textless_nlp/gslm/metrics/asr_metrics/misc/cut_as.py \
-    --samples_dir=$UTS_OUTPUT_DOWNSAMPLE --out_dir=$UTS_OUTPUT_DOWNSAMPLE_CUT \
-    --prompts_description=data/ground_truth_continuation_dev.json
-```
-
-Here `ground_truth_continuation_dev.json` is a JSON file with ground-truth text from LibriSpeech dev-clean, associated with some meta-data (assuming the evaluation is done on dev-clean). This file can be downloaded [[here]](https://dl.fbaipublicfiles.com/textless_nlp/gslm/eval_data/ground_truth_continuation_dev.json). A similar file for test-clean is [[here]](https://dl.fbaipublicfiles.com/textless_nlp/gslm/eval_data/ground_truth_continuation_test.json). These files are used for the evaluation and contain texts for audio sequences that are at least 6 s long.
-
-## Running ASR
-We use a pre-trained wav2vec model to run the ASR step. We first need to prepare manifest files which, roughly, tell the ASR system which files we want to transcribe. You can find more details and download the `960h_scratch.pt` checkpoint
-[[here]](https://github.com/pytorch/fairseq/blob/main/examples/wav2vec/README.md). To run ASR, you would also need to
-install KenLM, the Flashlight decoder, and download the KenLM 4-gram English language model.
-
-```bash
-python $FAIRSEQ_ROOT/examples/wav2vec/wav2vec_manifest.py \
-    $UTS_OUTPUT_DOWNSAMPLE_CUT --valid-percent 0.0 --dest $MANIFEST_DIR --ext wav
-```
-where `$UTS_OUTPUT_DOWNSAMPLE_CUT` specifies the directory with the preprocessed UTS outputs and `$MANIFEST_DIR` is the output directory.
-
-We will be running an out-of-the-box evaluation script which requires ground-truth transcripts to measure quality metrics. We are only
-interested in the transcripts (and we have no ground truth for what our ULM generated!), hence we just generate
-some dummy transcripts instead:
-```bash
-cp $FAIRSEQ_ROOT/examples/textless_nlp/gslm/metrics/asr_metrics/misc/dict.ltr.txt $MANIFEST_DIR
-python $FAIRSEQ_ROOT/examples/textless_nlp/gslm/metrics/asr_metrics/misc/dummy_asr_data.py --tsv=$MANIFEST_DIR/train.tsv \
-    --output-dir=$MANIFEST_DIR
-```
-
-Now we are ready to run ASR:
-```bash
-mkdir -p asr
-python $FAIRSEQ_ROOT/examples/speech_recognition/infer.py \
-    $MANIFEST_DIR \
-    --task audio_pretraining --nbest 1 --path 960h_scratch.pt \
-    --gen-subset=train --results-path $PATH_TO_ASR_OUTPUT \
-    --w2l-decoder kenlm --lm-model 4-gram.bin \
-    --lexicon librispeech/lexicon_ltr.lst --word-score -1 \
-    --sil-weight 0 --lm-weight 2 --criterion ctc --labels ltr --max-tokens 300000 --remove-bpe letter
-```
-where `lexicon_ltr.lst` is the LibriSpeech lexicon (it can be downloaded [[here]](https://dl.fbaipublicfiles.com/textless_nlp/gslm/eval_data/lexicon_ltr.lst)) and `$PATH_TO_ASR_OUTPUT` is the output directory.
-
-## Evaluation metrics
-We run evaluation on the 1,000 shortest sequences that are at least 6 s long. To filter those from the ASR transcript, we additionally provide each metric script with the paths to the manifest and `ground_truth_continuation_*` files.
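-
-For reference, the shared selection logic is roughly the following (a condensed sketch of `get_target_sequences` from `ppx.py` and `self_auto_bleu.py`; the function name `shortest_sequences` is ours):
-```python
-import json
-
-
-def shortest_sequences(prompts_description: str, to_take: int = 1000) -> set:
-    """Return the names of the `to_take` shortest ground-truth continuations."""
-    with open(prompts_description) as fin:
-        continuations = json.load(fin)
-    # Each entry maps a sequence name to [duration in seconds, ground-truth text];
-    # all durations are at least 6 seconds.
-    by_length = sorted(continuations.items(), key=lambda kv: float(kv[1][0]))
-    return {name for name, _ in by_length[:to_take]}
-```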
- -### Perplexity (PPX) -To get a PPX metric estimate on an ASR transcript, you need to run the following command: -```bash -python ppx.py $PATH_TO_ASR_OUTPUT/hypo.word-960h_scratch.pt-train.txt --cut-tail\ - --manifest=$MANIFEST_DIR/train.tsv --prompts-description=data/ground_truth_continuation_dev.json -``` -where `--cut-tail` tells the script to ignore the last token on each line (ASR puts the sequence ID there). - -### Self- and Auto-BLEU -```bash -python self_bleu.py $PATH_TO_ASR_OUTPUT/hypo.word-960h_scratch.pt-train.txt --cut-tail \ - --manifest=$MANIFEST_DIR/train.tsv --prompts-description=data/ground_truth_continuation_dev.json -``` - -### Continuation-BLEU -```bash -python continuation_eval.py --asr-transcript $PATH_TO_ASR_OUTPUT/hypo.word-960h_scratch.pt-train.txt \ - --manifest=$MANIFEST_DIR/train.tsv --prompts-description=data/ground_truth_continuation_dev.json -``` - -### AUC -Based on the metrics calculated above, we can estimate the AUC of the perplexity/diversity trade-off. We provide an illustration in a [Colab notebook](https://colab.research.google.com/drive/1pVPfOVax_PU3MkYdHRSsa-SI8GBUldNt?usp=sharing). diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/continuation_eval.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/continuation_eval.py deleted file mode 100644 index 72b92a341..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/continuation_eval.py +++ /dev/null @@ -1,99 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from collections import defaultdict -import numpy as np -from misc.bleu_utils import sentence_bleu -import json -import warnings - - -def get_args(): - import argparse - - parser = argparse.ArgumentParser("Tool to calculate Continuation-BLEU2") - parser.add_argument('--asr-transcript', type=str, - help='Path to the transcript file.') - parser.add_argument('--prompts-description', type=str, - help='Path to the ground-truth continuation') - parser.add_argument('--manifest', type=str, required=True) - parser.add_argument('--take-shortest', type=int, default=1000) - - args = parser.parse_args() - - return args - - -def main(): - # NLTK produces warnings - warnings.filterwarnings("ignore") - - args = get_args() - - with open(args.prompts_description, 'r') as fin: - original_continuations = json.loads(fin.read()) - - sequence2length = [(k, v[0]) for k, v in original_continuations.items()] - assert all(float(v) >= 6.0 for (_, v) in sequence2length) # 6 seconds - - sequence2length.sort(key=lambda x: x[1]) - to_take = set(v[0] for v in sequence2length[:args.take_shortest]) - - with open(args.manifest, 'r') as fin: - fin.readline() - - linenum2file = dict([ - (i, l.split("__")[0]) for (i, l) in enumerate(fin) - ]) - - max_files = max(linenum2file.keys()) - continuations = defaultdict(list) - - mean_length_after = 0 - n_examples = 0 - - with open(args.asr_transcript, 'r') as fin: - for line in fin: - n_examples += 1 - line = line.split() - sequence_id = int(line[-1].split('-')[1][:-1]) - - assert sequence_id <= max_files - - sequence_name = linenum2file[sequence_id] - - continuations[sequence_name].append(line[:-1]) - mean_length_after += len(line) - - mean_length_after /= n_examples - print(f'Mean length of continuations, in words: {mean_length_after}') - metric_values = [] - - mean_ground_truth_words = 0 - n_examples = 0 - n_candidates = 0 - - 
for k, candidates in continuations.items(): - if k not in to_take: - continue - - n_examples += 1 - - ground_truth = original_continuations[k][1].split() - n_candidates += len(candidates) - bleu = sentence_bleu(candidates, ground_truth, weights=( - 0.5, 0.5), no_length_penalty=True, averaging_mode="geometric") - mean_ground_truth_words += len(ground_truth) - - metric_values.append(bleu) - - n = len(metric_values) - print( - f'Median BLEU over {n} examples: {np.median(metric_values)} +- {np.std(metric_values) / np.sqrt(n)}') - - -if __name__ == '__main__': - main() diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/bleu_utils.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/bleu_utils.py deleted file mode 100644 index 75cc5272d..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/bleu_utils.py +++ /dev/null @@ -1,166 +0,0 @@ -""" - -TODO: the code is take from Apache-2 Licensed NLTK: make sure we do this properly! - - -Copied over from nltk.tranlate.bleu_score. This code has two major changes: - - allows to turn off length/brevity penalty --- it has no sense for self-bleu, - - allows to use arithmetic instead of geometric mean -""" - -import math -import sys -from fractions import Fraction -import warnings -from collections import Counter -from nltk.translate.bleu_score import modified_precision, closest_ref_length, brevity_penalty, SmoothingFunction - - -def corpus_bleu( - list_of_references, - hypotheses, - weights=(0.25, 0.25, 0.25, 0.25), - smoothing_function=None, - auto_reweigh=False, - averaging_mode="geometric", - no_length_penalty=False -): - """ - Calculate a single corpus-level BLEU score (aka. system-level BLEU) for all - the hypotheses and their respective references. - - Instead of averaging the sentence level BLEU scores (i.e. marco-average - precision), the original BLEU metric (Papineni et al. 2002) accounts for - the micro-average precision (i.e. summing the numerators and denominators - for each hypothesis-reference(s) pairs before the division). - - >>> hyp1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'which', - ... 'ensures', 'that', 'the', 'military', 'always', - ... 'obeys', 'the', 'commands', 'of', 'the', 'party'] - >>> ref1a = ['It', 'is', 'a', 'guide', 'to', 'action', 'that', - ... 'ensures', 'that', 'the', 'military', 'will', 'forever', - ... 'heed', 'Party', 'commands'] - >>> ref1b = ['It', 'is', 'the', 'guiding', 'principle', 'which', - ... 'guarantees', 'the', 'military', 'forces', 'always', - ... 'being', 'under', 'the', 'command', 'of', 'the', 'Party'] - >>> ref1c = ['It', 'is', 'the', 'practical', 'guide', 'for', 'the', - ... 'army', 'always', 'to', 'heed', 'the', 'directions', - ... 'of', 'the', 'party'] - - >>> hyp2 = ['he', 'read', 'the', 'book', 'because', 'he', 'was', - ... 'interested', 'in', 'world', 'history'] - >>> ref2a = ['he', 'was', 'interested', 'in', 'world', 'history', - ... 'because', 'he', 'read', 'the', 'book'] - - >>> list_of_references = [[ref1a, ref1b, ref1c], [ref2a]] - >>> hypotheses = [hyp1, hyp2] - >>> corpus_bleu(list_of_references, hypotheses) # doctest: +ELLIPSIS - 0.5920... - - The example below show that corpus_bleu() is different from averaging - sentence_bleu() for hypotheses - - >>> score1 = sentence_bleu([ref1a, ref1b, ref1c], hyp1) - >>> score2 = sentence_bleu([ref2a], hyp2) - >>> (score1 + score2) / 2 # doctest: +ELLIPSIS - 0.6223... - - :param list_of_references: a corpus of lists of reference sentences, w.r.t. 
hypotheses
-    :type list_of_references: list(list(list(str)))
-    :param hypotheses: a list of hypothesis sentences
-    :type hypotheses: list(list(str))
-    :param weights: weights for unigrams, bigrams, trigrams and so on
-    :type weights: list(float)
-    :param smoothing_function:
-    :type smoothing_function: SmoothingFunction
-    :param auto_reweigh: Option to re-normalize the weights uniformly.
-    :type auto_reweigh: bool
-    :return: The corpus-level BLEU score.
-    :rtype: float
-    """
-    # Before proceeding to compute BLEU, perform sanity checks.
-
-    p_numerators = Counter()  # Key = ngram order, and value = no. of ngram matches.
-    p_denominators = Counter()  # Key = ngram order, and value = no. of ngram in ref.
-    hyp_lengths, ref_lengths = 0, 0
-
-    assert len(list_of_references) == len(hypotheses), (
-        "The number of hypotheses and their reference(s) should be the " "same "
-    )
-
-    # Iterate through each hypothesis and its corresponding references.
-    for references, hypothesis in zip(list_of_references, hypotheses):
-        # For each order of ngram, calculate the numerator and
-        # denominator for the corpus-level modified precision.
-        for i, _ in enumerate(weights, start=1):
-            p_i = modified_precision(references, hypothesis, i)
-            p_numerators[i] += p_i.numerator
-            p_denominators[i] += p_i.denominator
-
-        # Calculate the hypothesis length and the closest reference length.
-        # Add them to the corpus-level hypothesis and reference counts.
-        hyp_len = len(hypothesis)
-        hyp_lengths += hyp_len
-        ref_lengths += closest_ref_length(references, hyp_len)
-
-    # Calculate the corpus-level brevity penalty.
-    if no_length_penalty and averaging_mode == 'geometric':
-        bp = 1.0
-    elif no_length_penalty and averaging_mode == 'arithmetic':
-        bp = 0.0
-    else:
-        assert not no_length_penalty
-        assert averaging_mode != 'arithmetic', 'Length penalty is not defined for the arithmetic averaging mode'
-        bp = brevity_penalty(ref_lengths, hyp_lengths)
-
-    # Uniformly re-weight based on maximum hypothesis length if the largest
-    # order of n-grams < 4 and weights are set at default.
-    if auto_reweigh:
-        if hyp_lengths < 4 and weights == (0.25, 0.25, 0.25, 0.25):
-            weights = (1 / hyp_lengths,) * hyp_lengths
-
-    # Collect the various precision values for the different ngram orders.
-    p_n = [
-        Fraction(p_numerators[i], p_denominators[i], _normalize=False)
-        for i, _ in enumerate(weights, start=1)
-    ]
-
-    # Return 0 if there are no matching n-grams.
-    # We only need to check for p_numerators[1] == 0, since if there are
-    # no unigrams, there won't be any higher-order ngrams.
-    if p_numerators[1] == 0:
-        return 0
-
-    # If there's no smoothing, use method0 from the SmoothingFunction class.
-    if not smoothing_function:
-        smoothing_function = SmoothingFunction().method0
-    # Smooth the modified precision.
-    # Note: smoothing_function() may convert values into floats;
-    #       it tries to retain the Fraction object as much as the
-    #       smoothing method allows.
- p_n = smoothing_function( - p_n, references=references, hypothesis=hypothesis, hyp_len=hyp_lengths - ) - - if averaging_mode == "geometric": - s = (w_i * math.log(p_i) for w_i, p_i in zip(weights, p_n)) - s = bp * math.exp(math.fsum(s)) - elif averaging_mode == "arithmetic": - s = (w_i * p_i for w_i, p_i in zip(weights, p_n)) - s = math.fsum(s) - - return s - - -def sentence_bleu( - references, - hypothesis, - weights=(0.25, 0.25, 0.25, 0.25), - smoothing_function=None, - auto_reweigh=False, - averaging_mode="geometric", - no_length_penalty=False -): - return corpus_bleu( - [references], [hypothesis], weights, smoothing_function, auto_reweigh, averaging_mode, no_length_penalty - ) \ No newline at end of file diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/cut_as.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/cut_as.py deleted file mode 100644 index 5b7e1e968..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/cut_as.py +++ /dev/null @@ -1,69 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import torchaudio -import argparse -import json -import pathlib - - -def get_args(): - parser = argparse.ArgumentParser( - "Assuring generated audio have the same length as ground-truth audio") - parser.add_argument('--samples_dir', required=True, type=str) - parser.add_argument('--out_dir', required=True, type=str) - parser.add_argument('--prompts_description', required=True, type=str) - return parser.parse_args() - - -def cut(src, tgt, l): - x, sr = torchaudio.load(str(src)) - assert sr == 16_000 - - x = x.squeeze() - target_frames = int(l * sr) - - flag = 0 - if target_frames <= x.size(0): - x = x[:target_frames] - flag = 1 - else: - flag = 0 - torchaudio.save(str(tgt), x.unsqueeze(0), sr) - return flag - - -def main(): - args = get_args() - tgt_dir = pathlib.Path(args.out_dir) - tgt_dir.mkdir(exist_ok=True, parents=True) - - total_files, sufficiently_long = 0, 0 - - with open(args.prompts_description, 'r') as f: - description = json.loads(f.read()) - - for src_f in pathlib.Path(args.samples_dir).glob('*.wav'): - name_prompt = src_f.with_suffix('').name.split('__')[0] - - assert name_prompt in description, f'Cannot find {name_prompt}!' 
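        # prompts_description maps prompt name -> [target duration in seconds, ground-truth text].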
- - target_length = description[name_prompt][0] - tgt_f = tgt_dir / (src_f.name) - - is_long_enough = cut(src_f, tgt_f, target_length) - sufficiently_long += is_long_enough - if not is_long_enough: - print(f'{src_f} is not long enough') - - total_files += 1 - - print( - f'Total files: {total_files}; sufficiently long: {sufficiently_long}') - - -if __name__ == '__main__': - main() diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/dict.ltr.txt b/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/dict.ltr.txt deleted file mode 100644 index 69929e166..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/misc/dict.ltr.txt +++ /dev/null @@ -1,28 +0,0 @@ -| 94802 -E 51860 -T 38431 -A 33152 -O 31495 -N 28855 -I 28794 -H 27187 -S 26071 -R 23546 -D 18289 -L 16308 -U 12400 -M 10685 -W 10317 -C 9844 -F 9062 -G 8924 -Y 8226 -P 6890 -B 6339 -V 3936 -K 3456 -' 1023 -X 636 -J 598 -Q 437 -Z 213 diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/ppx.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/ppx.py deleted file mode 100644 index d6a40e4d3..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/ppx.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import torch -import numpy as np -import warnings - - -def get_target_sequences(manifest, ground_truth, to_take=1000): - import json - import pathlib - - with open(ground_truth, 'r') as fin: - original_continuations = json.loads(fin.read()) - - sequence2length = [(k, v[0]) for k, v in original_continuations.items()] - assert all(float(v) >= 6.0 for (_, v) in sequence2length) # 6 seconds - - sequence2length.sort(key=lambda x: x[1]) - to_take_sequences = set(v[0] for v in sequence2length[:to_take]) - to_take_ids = [] - - with open(manifest, 'r') as f: - f.readline() - - for i, line in enumerate(f.readlines()): - seq_id = line.split()[0] - seq_id = pathlib.Path(seq_id).name.split('__')[0] - - if seq_id in to_take_sequences: - to_take_ids.append(i) - - print(f'Took {len(to_take_ids)} ids') - return set(to_take_ids) - - -def get_args(): - import argparse - - parser = argparse.ArgumentParser("Evaluate PPX metric of a transcript.") - parser.add_argument('--asr-transcript', type=str, - help='Path to the transcript file.') - parser.add_argument('--cut-id', action='store_true', - help='Whether cut the first token (typically a seq id)') - parser.add_argument('--cut-tail', action='store_true', - help='Whether cut the last token (typically a speaker id)') - - parser.add_argument('--manifest', type=str, default=None) - parser.add_argument('--prompts-description', type=str, default=None) - - args = parser.parse_args() - - return args - - -def main(): - args = get_args() - - lm = torch.hub.load( - 'pytorch/fairseq', 'transformer_lm.wmt19.en', tokenizer='moses', bpe='fastbpe') - - lm.eval().cuda() # disable dropout - - if args.manifest is None and args.prompts_description is None: - target_ids = None - else: - target_ids = get_target_sequences( - args.manifest, args.prompts_description) - - with open(args.asr_transcript, 'r') as fin: - lines = fin.readlines() - - if target_ids is not None: - filtered = [] - for line in lines: - line_id = line.split()[-1] - line_id = int(line_id.split('-')[1][:-1]) - if line_id in target_ids: - filtered.append(line) - lines = 
filtered - else: - pass - - if args.cut_id: - lines = [' '.join(x.split()[1:]) for x in lines] - if args.cut_tail: - lines = [' '.join(x.split()[:-1]) for x in lines] - lines = [x.strip().lower() for x in lines] - - def get_logprob(sent): return \ - lm.score(sent)['positional_scores'].mean().neg().item() - - logprobs = [get_logprob(l) for l in lines] - - filtered = [x for x in logprobs if not np.isnan(x)] - if len(filtered) != len(logprobs): - warnings.warn("NaNs detected!") - logprobs = filtered - - perplexities = [np.exp(l) for l in logprobs] - - for name, stats in [('logprob', logprobs), ('perplexity', perplexities)]: - mean = np.mean(stats) - sem = np.std(stats) / np.sqrt(len(stats)) - - median = np.median(stats) - interval = list(np.percentile(stats, [10, 90])) - - mean, sem, median, percentile10, percentile90 = [ - round(x, 2) for x in [mean, sem, median] + interval] - - print(name) - print(f"\tMean {mean} +- {sem}") - print( - f"\tMedian {median}, 90% confidence interval {percentile10}...{percentile90}") - - -if __name__ == '__main__': - main() diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/self_auto_bleu.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/self_auto_bleu.py deleted file mode 100644 index 062bb82f6..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/metrics/asr_metrics/self_auto_bleu.py +++ /dev/null @@ -1,201 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import nltk -from misc.bleu_utils import sentence_bleu -import warnings - - -def get_target_sequences(manifest, ground_truth, to_take=1000): - import json - import pathlib - - with open(ground_truth, 'r') as fin: - original_continuations = json.loads(fin.read()) - - sequence2length = [(k, v[0]) for k, v in original_continuations.items()] - assert all(float(v) >= 6.0 for (_, v) in sequence2length) # 6 seconds - - sequence2length.sort(key=lambda x: x[1]) - to_take_sequences = set(v[0] for v in sequence2length[:to_take]) - to_take_ids = [] - - with open(manifest, 'r') as f: - f.readline() - - for i, line in enumerate(f.readlines()): - seq_id = line.split()[0] - seq_id = pathlib.Path(seq_id).name.split('__')[0] - - if seq_id in to_take_sequences: - to_take_ids.append(i) - - print(f'Took {len(to_take_ids)} ids') - return set(to_take_ids) - - -def get_args(): - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument('--asr-transcript', type=str, - help='Path to the transcript file.') - - parser.add_argument('--manifest', required=True) - parser.add_argument('--prompts-description', required=True) - - parser.add_argument('--cut-id', action='store_true', - help='Whether cut the first token (typically a seq id)') - parser.add_argument('--cut-tail', action='store_true', - help='Whether cut the last token (typically a speaker id)') - parser.add_argument('--debug', action='store_true') - - args = parser.parse_args() - - return args - - -def get_self_bleu(utterances, averaging_mode, weights): - self_bleu = [] - - for i in range(len(utterances)): - hypo = utterances[i] - rest = utterances[:i] + utterances[i+1:] - - self_bleu.append(sentence_bleu(rest, hypo, weights, - no_length_penalty=True, averaging_mode=averaging_mode)) - - return self_bleu - - -def get_self_bleu2_arithmetic(utterances): - weights = (0.5, 0.5) # equal weight for unigrams and bigrams - return get_self_bleu(utterances, 
averaging_mode='arithmetic', weights=weights) - - -def get_self_bleu2_geometric(utterances): - weights = (0.5, 0.5) - return get_self_bleu(utterances, averaging_mode='geometric', weights=weights) - - -def get_auto_bleu2_arithmetic(utterances): - weights = (0.5, 0.5) - return [auto_bleu(u, mean_mode='arithmetic', weights=weights) for u in utterances] - - -def get_auto_bleu2_geometric(utterances): - weights = (0.5, 0.5) - return [auto_bleu(u, mean_mode='geometric', weights=weights) for u in utterances] - - -def get_auto_bleu3_geometric(utterances): - weights = (1./3, 1./3, 1./3) - return [auto_bleu(u, mean_mode='geometric', weights=weights) for u in utterances] - - -def get_auto_bleu3_arithmetic(utterances): - weights = (1./3, 1./3, 1./3) - return [auto_bleu(u, mean_mode='arithmetic', weights=weights) for u in utterances] - - -def get_self_bleu3_arithmetic(utterances): - weights = (1./3, 1./3, 1./3) - return get_self_bleu(utterances, averaging_mode='arithmetic', weights=weights) - - -def get_self_bleu3_geometric(utterances): - weights = (1./3, 1./3, 1./3) - return get_self_bleu(utterances, averaging_mode='geometric', weights=weights) - - -def auto_bleu(sentence, weights, mean_mode='arithmetic'): - if len(sentence) <= 1: - return 0 - - N = len(weights) - - bleu_n = np.zeros([N]) - for n in range(N): - targ_ngrams = list(nltk.ngrams(sentence, n+1)) - for p in range(len(targ_ngrams)): - left = sentence[:p] - right = sentence[(p+n+1):] - rest_ngrams = list(nltk.ngrams(left, n+1)) + \ - list(nltk.ngrams(right, n+1)) - # compute the nb of matching ngrams - bleu_n[n] += targ_ngrams[p] in rest_ngrams - bleu_n[n] /= len(targ_ngrams) # average them to get a proportion - - weights = np.array(weights) - if mean_mode == 'arithmetic': - return (bleu_n * weights).sum() - elif mean_mode == 'geometric': - return (bleu_n ** weights).prod() - else: - raise ValueError(f'Unknown agggregation mode {mean_mode}') - - -def main(): - from multiprocessing import Pool - - args = get_args() - target_ids = get_target_sequences(args.manifest, args.prompts_description) - - with open(args.asr_transcript, 'r') as fin: - lines = fin.readlines() - - terms = [x.strip().split() for x in lines] - filtered = [] - for term in terms: - line_id = int(term[-1].split('-')[1][:-1]) - if line_id in target_ids: - filtered.append(term) - terms = filtered - - if args.cut_id: - terms = [x[1:] for x in terms] - if args.cut_tail: - terms = [x[:-1] for x in terms] - - if args.debug: - terms = terms[:10] - - tasks = [ - ('Self-BLEU2-arithmetic', get_self_bleu2_arithmetic), - ('Self-BLEU2-geometric', get_self_bleu2_geometric), - ('Auto-BLEU2-arithmetic', get_auto_bleu2_arithmetic), - ('Auto-BLEU2-geometric', get_auto_bleu2_geometric), - - ('Self-BLEU3-arithmetic', get_self_bleu3_arithmetic), - ('Self-BLEU3-geometric', get_self_bleu3_geometric), - ('Auto-BLEU3-arithmetic', get_auto_bleu3_arithmetic), - ('Auto-BLEU3-geometric', get_auto_bleu3_geometric), - ] - - n_processes = min(16, len(tasks)) - with Pool(n_processes) as pool: - metrics = pool.map(run_f, [(t[1], terms) for t in tasks]) - - for (metric_name, _), metric in zip(tasks, metrics): - metric, sem = np.mean(metric), np.std(metric) / np.sqrt(len(metric)) - - metric, sem = [ - round(100 * x, 2) for x in [metric, sem] - ] - - print(f'{metric_name} {metric} +- {sem}') - - -def run_f(task_params): - f, terms = task_params - return f(terms) - - -if __name__ == '__main__': - # NLTK produces warnings - warnings.filterwarnings("ignore") - - main() diff --git 
a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/README.md b/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/README.md deleted file mode 100644 index 9dff9d33a..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/README.md +++ /dev/null @@ -1,68 +0,0 @@ -# Speech to Unit Model (speech2unit) - -## Acoustic Model -For quantizing speech we learn a K-means clustering over acoustic representations for which we either use Log-Mel Filterbank or pretrained acoustic representation models. For using pretrained models, please download from their respective locations linked below. -* [Modified CPC](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/cpc_big_ll6kh_top_ctc.pt) -* [HuBERT-Base](https://dl.fbaipublicfiles.com/hubert/hubert_base_ls960.pt) -* [Wav2Vec 2.0-Base](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_new.pt) - -## Quantization Model -You can download pretrained quantized model from the list below. - -K-Means Model | Download Link -|-|- -Log Mel Filterbank + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/km50/km.bin) -Log Mel Filterbank + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/km100/km.bin) -Log Mel Filterbank + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/km200/km.bin) -Modified CPC + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/km50/km.bin) -Modified CPC + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/km100/km.bin) -Modified CPC + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/km200/km.bin) -HuBERT Base + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/km50/km.bin) -HuBERT Base + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/km100/km.bin) -HuBERT Base + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/km200/km.bin) -wav2vec 2.0 Large + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/km50/km.bin) -wav2vec 2.0 Large + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/km100/km.bin) -wav2vec 2.0 Large + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/km200/km.bin) - -### Quantization -For quantizing speech with a given acoustic representation, please follow the steps below. -1. Learn K-means clustering model -``` -N_CLUSTERS=<number_of_clusters_used_for_kmeans> -TYPE=<one_of_logmel/cpc/hubert/w2v2> -CKPT_PATH=<path_of_pretrained_acoustic_model> -LAYER=<layer_of_acoustic_model_to_extract_features_from> -MANIFEST=<tab_separated_manifest_of_audio_files_for_training_kmeans> -KM_MODEL_PATH=<output_path_of_the_kmeans_model> - -PYTHONPATH=. python examples/textless_nlp/gslm/speech2unit/clustering/cluster_kmeans.py \ - --num_clusters $N_CLUSTERS \ - --feature_type $TYPE \ - --checkpoint_path $CKPT_PATH \ - --layer $LAYER \ - --manifest_path $MANIFEST \ - --out_kmeans_model_path $KM_MODEL_PATH -``` -2. 
Quantize using the learned clusters
-```
-MANIFEST=<tab_separated_manifest_of_audio_files_to_quantize>
-OUT_QUANTIZED_FILE=<output_quantized_audio_file_path>
-
-python examples/textless_nlp/gslm/speech2unit/clustering/quantize_with_kmeans.py \
-    --feature_type $TYPE \
-    --kmeans_model_path $KM_MODEL_PATH \
-    --acoustic_model_path $CKPT_PATH \
-    --layer $LAYER \
-    --manifest_path $MANIFEST \
-    --out_quantized_file_path $OUT_QUANTIZED_FILE \
-    --extension ".flac"
-```
-
-Note: the manifest is a tab-separated file listing the input audio files and their lengths. Its first line is the root directory containing the audio; every following line gives a relative path and a frame count:
-```
-<path_of_root_directory_containing_audio_files>
-<relative_path_of_audio_file_1>\t<number_of_frames_1>
-<relative_path_of_audio_file_2>\t<number_of_frames_2>
-...
-```
-
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/__init__.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/__init__.py
deleted file mode 100644
index e69de29bb..000000000
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/__init__.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/__init__.py
deleted file mode 100644
index e69de29bb..000000000
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/cluster_kmeans.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/cluster_kmeans.py
deleted file mode 100644
index 7cf844a95..000000000
--- a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/cluster_kmeans.py
+++ /dev/null
@@ -1,212 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import logging
-import os
-import time
-
-import numpy as np
-from sklearn.cluster import MiniBatchKMeans
-
-import joblib
-from examples.textless_nlp.gslm.speech2unit.pretrained.utils import (
-    get_and_dump_features,
-    get_features,
-)
-
-
-def get_logger():
-    log_format = "[%(asctime)s] [%(levelname)s]: %(message)s"
-    logging.basicConfig(format=log_format, level=logging.INFO)
-    logger = logging.getLogger(__name__)
-    return logger
-
-
-def get_parser():
-    parser = argparse.ArgumentParser(
-        description="Learn K-means clustering over acoustic features."
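-        # --manifest_path below expects the format described in the README above:
-        # first line is the root directory; each following line is
-        # "<relative_path>\t<number_of_frames>".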
- ) - - # Features arguments - parser.add_argument( - "--in_features_path", type=str, default=None, help="Features file path" - ) - parser.add_argument( - "--feature_type", - type=str, - choices=["logmel", "hubert", "w2v2", "cpc"], - default=None, - help="Acoustic feature type", - ) - parser.add_argument( - "--manifest_path", - type=str, - default=None, - help="Manifest file containing the root dir and file names", - ) - parser.add_argument( - "--out_features_path", - type=str, - default=None, - help="Features file path to write to", - ) - parser.add_argument( - "--checkpoint_path", - type=str, - help="Pretrained acoustic model checkpoint", - ) - parser.add_argument( - "--layer", - type=int, - help="The layer of the pretrained model to extract features from", - default=-1, - ) - parser.add_argument( - "--sample_pct", - type=float, - help="Percent data to use for K-means training", - default=0.1, - ) - - # K-means arguments - parser.add_argument( - "--num_clusters", type=int, help="Nubmer of clusters", default=50 - ) - parser.add_argument("--init", default="k-means++") - parser.add_argument( - "--max_iter", - type=int, - help="Maximum number of iterations for K-means training", - default=150, - ) - parser.add_argument( - "--batch_size", - type=int, - help="Batch size for K-means training", - default=10000, - ) - parser.add_argument("--tol", default=0.0, type=float) - parser.add_argument("--max_no_improvement", default=100, type=int) - parser.add_argument("--n_init", default=20, type=int) - parser.add_argument("--reassignment_ratio", default=0.5, type=float) - parser.add_argument( - "--out_kmeans_model_path", - type=str, - required=True, - help="Path to save K-means model", - ) - - # Leftovers - parser.add_argument( - "--seed", - type=int, - help="Random seed to use for K-means training", - default=1369, - ) - - return parser - - -def get_kmeans_model( - n_clusters, - init, - max_iter, - batch_size, - tol, - max_no_improvement, - n_init, - reassignment_ratio, - random_state, -): - return MiniBatchKMeans( - n_clusters=n_clusters, - init=init, - max_iter=max_iter, - batch_size=batch_size, - tol=tol, - max_no_improvement=max_no_improvement, - n_init=n_init, - reassignment_ratio=reassignment_ratio, - random_state=random_state, - verbose=1, - compute_labels=True, - init_size=None, - ) - - -def train_kmeans(kmeans_model, features_batch): - start_time = time.time() - kmeans_model.fit(features_batch) - time_taken = round((time.time() - start_time) // 60, 2) - return kmeans_model, time_taken - - -def main(args, logger): - # Features loading/extraction for K-means - if args.in_features_path: - # Feature loading - logger.info(f"Loading features from {args.in_features_path}...") - features_batch = np.load(args.in_features_path, allow_pickle=True) - else: - # Feature extraction - logger.info(f"Extracting {args.feature_type} acoustic features...") - features_batch = ( - get_features( - feature_type=args.feature_type, - checkpoint_path=args.checkpoint_path, - layer=args.layer, - manifest_path=args.manifest_path, - sample_pct=args.sample_pct, - flatten=True, - ) - if not args.out_features_path - else get_and_dump_features( - feature_type=args.feature_type, - checkpoint_path=args.checkpoint_path, - layer=args.layer, - manifest_path=args.manifest_path, - sample_pct=args.sample_pct, - flatten=True, - out_features_path=args.out_features_path, - ) - ) - if args.out_features_path: - logger.info( - f"Saved extracted features at {args.out_features_path}" - ) - logger.info(f"Features shape = 
{features_batch.shape}\n")
-
-    # Learn and save K-means model
-    kmeans_model = get_kmeans_model(
-        n_clusters=args.num_clusters,
-        init=args.init,
-        max_iter=args.max_iter,
-        batch_size=args.batch_size,
-        tol=args.tol,
-        max_no_improvement=args.max_no_improvement,
-        n_init=args.n_init,
-        reassignment_ratio=args.reassignment_ratio,
-        random_state=args.seed,
-    )
-    logger.info("Starting k-means training...")
-    kmeans_model, time_taken = train_kmeans(
-        kmeans_model=kmeans_model, features_batch=features_batch
-    )
-    logger.info(f"...done k-means training in {time_taken} minutes")
-    inertia = -kmeans_model.score(features_batch) / len(features_batch)
-    logger.info(f"Total inertia: {round(inertia, 2)}\n")
-
-    logger.info(f"Saving k-means model to {args.out_kmeans_model_path}")
-    os.makedirs(os.path.dirname(args.out_kmeans_model_path), exist_ok=True)
-    joblib.dump(kmeans_model, open(args.out_kmeans_model_path, "wb"))
-
-
-if __name__ == "__main__":
-    parser = get_parser()
-    args = parser.parse_args()
-    logger = get_logger()
-    logger.info(args)
-    main(args, logger)
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/dump_feats.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/dump_feats.py
deleted file mode 100644
index 031567c6d..000000000
--- a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/dump_feats.py
+++ /dev/null
@@ -1,91 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import logging
-
-from examples.textless_nlp.gslm.speech2unit.pretrained.utils import (
-    get_and_dump_features,
-)
-
-
-def get_parser():
-    parser = argparse.ArgumentParser(
-        description="Compute and dump log mel fbank features."
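-        # The dumped .npy file can be passed back to cluster_kmeans.py via
-        # --in_features_path to avoid re-extracting features.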
-    )
-    parser.add_argument(
-        "--feature_type",
-        type=str,
-        choices=["logmel", "hubert", "w2v2", "cpc"],
-        default=None,
-        help="Acoustic feature type",
-    )
-    parser.add_argument(
-        "--manifest_path",
-        type=str,
-        default=None,
-        help="Manifest file containing the root dir and file names",
-    )
-    parser.add_argument(
-        "--out_features_path",
-        type=str,
-        default=None,
-        help="Features file path to write to",
-    )
-    parser.add_argument(
-        "--checkpoint_path",
-        type=str,
-        help="Pretrained acoustic model checkpoint",
-    )
-    parser.add_argument(
-        "--layer",
-        type=int,
-        help="The layer of the pretrained model to extract features from",
-        default=-1,
-    )
-    parser.add_argument(
-        "--sample_pct",
-        type=float,
-        help="Percent data to use for K-means training",
-        default=0.1,
-    )
-    return parser
-
-
-def get_logger():
-    log_format = "[%(asctime)s] [%(levelname)s]: %(message)s"
-    logging.basicConfig(format=log_format, level=logging.INFO)
-    logger = logging.getLogger(__name__)
-    return logger
-
-
-if __name__ == "__main__":
-    """
-    Example command:
-    python ~/speechbot/clustering/dump_logmelfank_feats.py \
-    --manifest_path /checkpoint/kushall/data/LJSpeech-1.1/asr_input_wavs_16k/train.tsv
-    --out_features_path /checkpoint/kushall/experiments/speechbot/logmelfbank/features/ljspeech/train.npy
-    """
-    parser = get_parser()
-    args = parser.parse_args()
-    logger = get_logger()
-    logger.info(args)
-
-    logger.info(f"Extracting {args.feature_type} acoustic features...")
-    get_and_dump_features(
-        feature_type=args.feature_type,
-        checkpoint_path=args.checkpoint_path,
-        layer=args.layer,
-        manifest_path=args.manifest_path,
-        sample_pct=args.sample_pct,
-        flatten=True,
-        out_features_path=args.out_features_path,
-    )
-    logger.info(f"Saved extracted features at {args.out_features_path}")
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/quantize_with_kmeans.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/quantize_with_kmeans.py
deleted file mode 100644
index 2c87445d8..000000000
--- a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/quantize_with_kmeans.py
+++ /dev/null
@@ -1,125 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import logging
-import os
-
-import numpy as np
-
-import joblib
-from examples.textless_nlp.gslm.speech2unit.clustering.utils import (
-    get_audio_files,
-)
-from examples.textless_nlp.gslm.speech2unit.pretrained.utils import (
-    get_features,
-)
-
-
-def get_logger():
-    log_format = "[%(asctime)s] [%(levelname)s]: %(message)s"
-    logging.basicConfig(format=log_format, level=logging.INFO)
-    logger = logging.getLogger(__name__)
-    return logger
-
-
-def get_parser():
-    parser = argparse.ArgumentParser(
-        description="Quantize using K-means clustering over acoustic features."
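-        # main() below writes one line per utterance to --out_quantized_file_path:
-        # "<audio basename, extension stripped>|<space-separated cluster ids>".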
- ) - parser.add_argument( - "--feature_type", - type=str, - choices=["logmel", "hubert", "w2v2", "cpc"], - default=None, - required=True, - help="Acoustic feature type", - ) - parser.add_argument( - "--acoustic_model_path", - type=str, - help="Pretrained acoustic model checkpoint" - ) - parser.add_argument( - "--layer", - type=int, - help="The layer of the pretrained model to extract features from", - default=-1, - ) - parser.add_argument( - "--kmeans_model_path", - type=str, - required=True, - help="K-means model file path to use for inference", - ) - parser.add_argument( - "--features_path", - type=str, - default=None, - help="Features file path. You don't need to enter acoustic model details if you have dumped features", - ) - parser.add_argument( - "--manifest_path", - type=str, - default=None, - help="Manifest file containing the root dir and file names", - ) - parser.add_argument( - "--out_quantized_file_path", - required=True, - type=str, - help="File path of quantized output.", - ) - parser.add_argument( - "--extension", type=str, default=".flac", help="Features file path" - ) - return parser - - -def main(args, logger): - # Feature extraction - if args.features_path is not None: - logger.info(f"Loading acoustic features from {args.features_path}...") - features_batch = np.load(args.features_path) - else: - logger.info(f"Extracting {args.feature_type} acoustic features...") - features_batch = get_features( - feature_type=args.feature_type, - checkpoint_path=args.acoustic_model_path, - layer=args.layer, - manifest_path=args.manifest_path, - sample_pct=1.0, - flatten=False, - ) - logger.info( - f"Features extracted for {len(features_batch)} utterances.\n" - ) - logger.info( - f"Dimensionality of representation = {features_batch[0].shape[1]}" - ) - - # K-means model - logger.info(f"Loading K-means model from {args.kmeans_model_path} ...") - kmeans_model = joblib.load(open(args.kmeans_model_path, "rb")) - kmeans_model.verbose = False - - _, fnames, _ = get_audio_files(args.manifest_path) - - os.makedirs(os.path.dirname(args.out_quantized_file_path), exist_ok=True) - print(f"Writing quantized predictions to {args.out_quantized_file_path}") - with open(args.out_quantized_file_path, "w") as fout: - for i, feats in enumerate(features_batch): - pred = kmeans_model.predict(feats) - pred_str = " ".join(str(p) for p in pred) - base_fname = os.path.basename(fnames[i]).rstrip(args.extension) - fout.write(f"{base_fname}|{pred_str}\n") - - -if __name__ == "__main__": - parser = get_parser() - args = parser.parse_args() - logger = get_logger() - logger.info(args) - main(args, logger) diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/utils.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/utils.py deleted file mode 100644 index cf08d1fe4..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/clustering/utils.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import List, Tuple - - -def get_audio_files(manifest_path: str) -> Tuple[str, List[str], List[int]]: - fnames, sizes = [], [] - with open(manifest_path, "r") as f: - root_dir = f.readline().strip() - for line in f: - items = line.strip().split("\t") - assert ( - len(items) == 2 - ), f"File must have two columns separated by tab. 
Got {line}" - fnames.append(items[0]) - sizes.append(int(items[1])) - return root_dir, fnames, sizes diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/cpc_feature_reader.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/cpc_feature_reader.py deleted file mode 100644 index c613f52d3..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/cpc_feature_reader.py +++ /dev/null @@ -1,192 +0,0 @@ -import soundfile as sf -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class CpcFeatureReader: - """ - Wrapper class to run inference on CPC model. - Helps extract features for a given audio file. - """ - - def __init__( - self, - checkpoint_path, - layer, - use_encoder_layer=False, - norm_features=False, - sample_rate=16000, - max_chunk=64000, - ): - self.model = load_cpc_model(checkpoint_path, layer).eval().cuda() - self.sample_rate = sample_rate - self.max_chunk = max_chunk - self.norm_features = norm_features - self.use_encoder_layer = use_encoder_layer - - def read_audio(self, path, ref_len=None): - wav, sr = sf.read(path) - if wav.ndim == 2: - wav = wav.mean(-1) - assert wav.ndim == 1, wav.ndim - assert sr == self.sample_rate, sr - if ref_len is not None and abs(ref_len - len(wav)) > 160: - print(f"ref {ref_len} != read {len(wav)} ({path})") - return wav - - def get_feats(self, file_path, ref_len=None): - x = self.read_audio(file_path, ref_len) - # Inspired from CPC_audio feature_loader.py - with torch.no_grad(): - x = torch.from_numpy(x).float().cuda() - x = x.view(1, 1, -1) - size = x.size(2) - feat = [] - start = 0 - while start < size: - if start + self.max_chunk > size: - break - x_chunk = x[..., start : start + self.max_chunk] - feat_chunk = self.model.extract_features( - source=x_chunk, - get_encoded=self.use_encoder_layer, - norm_output=self.norm_features, - ) - feat.append(feat_chunk) - start += self.max_chunk - - if start < size: - x_chunk = x[:, -self.max_chunk :] - feat_chunk = self.model.extract_features( - source=x_chunk, - get_encoded=self.use_encoder_layer, - norm_output=self.norm_features, - ) - df = x_chunk.size(2) // feat_chunk.size(1) - delta = (size - start) // df - feat.append(feat_chunk[:, -delta:]) - return torch.cat(feat, 1).squeeze(0) - - -def load_cpc_model(checkpoint_path, layer=None): - state_dict = torch.load(checkpoint_path) - weights = state_dict["weights"] - config = state_dict["config"] - if layer is not None: - config["nLevelsGRU"] = layer - - encoder = CPCEncoder(config["hiddenEncoder"]) - ar_net = CPCAR( - config["hiddenEncoder"], config["hiddenGar"], False, config["nLevelsGRU"] - ) - - model = CPCModel(encoder, ar_net) - model.load_state_dict(weights, strict=False) - model.config = config - - return model - - -class ChannelNorm(nn.Module): - def __init__(self, num_features, epsilon=1e-05, affine=True): - super(ChannelNorm, self).__init__() - if affine: - self.weight = nn.parameter.Parameter(torch.Tensor(1, num_features, 1)) - self.bias = nn.parameter.Parameter(torch.Tensor(1, num_features, 1)) - else: - self.weight = None - self.bias = None - self.epsilon = epsilon - self.p = 0 - self.affine = affine - self.reset_parameters() - - def reset_parameters(self): - if self.affine: - torch.nn.init.ones_(self.weight) - torch.nn.init.zeros_(self.bias) - - def forward(self, x): - cum_mean = x.mean(dim=1, keepdim=True) - cum_var = x.var(dim=1, keepdim=True) - x = (x - cum_mean) * torch.rsqrt(cum_var + self.epsilon) - if self.weight is not None: - x = x * self.weight 
+ self.bias - return x - - -class CPCEncoder(nn.Module): - def __init__(self, hidden_dim=512): - super(CPCEncoder, self).__init__() - self.conv0 = nn.Conv1d(1, hidden_dim, 10, stride=5, padding=3) - self.batchNorm0 = ChannelNorm(hidden_dim) - self.conv1 = nn.Conv1d(hidden_dim, hidden_dim, 8, stride=4, padding=2) - self.batchNorm1 = ChannelNorm(hidden_dim) - self.conv2 = nn.Conv1d(hidden_dim, hidden_dim, 4, stride=2, padding=1) - self.batchNorm2 = ChannelNorm(hidden_dim) - self.conv3 = nn.Conv1d(hidden_dim, hidden_dim, 4, stride=2, padding=1) - self.batchNorm3 = ChannelNorm(hidden_dim) - self.conv4 = nn.Conv1d(hidden_dim, hidden_dim, 4, stride=2, padding=1) - self.batchNorm4 = ChannelNorm(hidden_dim) - self.DOWNSAMPLING = 160 - - def get_output_dim(self): - return self.conv4.out_channels - - def forward(self, x): - x = F.relu(self.batchNorm0(self.conv0(x))) - x = F.relu(self.batchNorm1(self.conv1(x))) - x = F.relu(self.batchNorm2(self.conv2(x))) - x = F.relu(self.batchNorm3(self.conv3(x))) - x = F.relu(self.batchNorm4(self.conv4(x))) - return x - - -class CPCAR(nn.Module): - def __init__(self, dim_encoded, dim_output, keep_hidden, num_layers): - super(CPCAR, self).__init__() - self.baseNet = nn.LSTM( - dim_encoded, dim_output, num_layers=num_layers, batch_first=True - ) - self.hidden = None - self.keep_hidden = keep_hidden - - def get_output_dim(self): - return self.baseNet.hidden_size - - def forward(self, x): - try: - self.baseNet.flatten_parameters() - except RuntimeError: - pass - x, h = self.baseNet(x, self.hidden) - if self.keep_hidden: - if isinstance(h, tuple): - self.hidden = tuple(x.detach() for x in h) - else: - self.hidden = h.detach() - return x - - -class CPCModel(nn.Module): - def __init__(self, encoder, ar_net): - super(CPCModel, self).__init__() - self.gEncoder = encoder - self.gAR = ar_net - self.config = None - - def forward(self, x, label): - encoded = self.gEncoder(x).permute(0, 2, 1) - cpc_feature = self.gAR(encoded) - return cpc_feature, encoded, label - - def extract_features(self, source, get_encoded=False, norm_output=False): - cpc_feature, encoded, _ = self.forward(source, None) - if get_encoded: - cpc_feature = encoded - if norm_output: - mean = cpc_feature.mean(dim=1, keepdim=True) - var = cpc_feature.var(dim=1, keepdim=True) - cpc_feature = (cpc_feature - mean) / torch.sqrt(var + 1e-08) - return cpc_feature diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/hubert_feature_reader.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/hubert_feature_reader.py deleted file mode 100644 index 09442206e..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/hubert_feature_reader.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import fairseq -import soundfile as sf -import torch.nn.functional as F - - -class HubertFeatureReader: - """ - Wrapper class to run inference on HuBERT model. - Helps extract features for a given audio file. 
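-    Audio longer than `max_chunk` samples is processed in chunks and the
-    per-chunk features are concatenated along the time axis (see get_feats).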
- """ - - def __init__(self, checkpoint_path, layer, max_chunk=1600000): - ( - model, - cfg, - task, - ) = fairseq.checkpoint_utils.load_model_ensemble_and_task( - [checkpoint_path] - ) - self.model = model[0].eval().cuda() - self.task = task - self.layer = layer - self.max_chunk = max_chunk - - def read_audio(self, path, ref_len=None): - wav, sr = sf.read(path) - if wav.ndim == 2: - wav = wav.mean(-1) - assert wav.ndim == 1, wav.ndim - assert sr == self.task.cfg.sample_rate, sr - if ref_len is not None and abs(ref_len - len(wav)) > 160: - print(f"ref {ref_len} != read {len(wav)} ({path})") - return wav - - def get_feats(self, file_path, ref_len=None): - x = self.read_audio(file_path, ref_len) - with torch.no_grad(): - x = torch.from_numpy(x).float().cuda() - if self.task.cfg.normalize: - x = F.layer_norm(x, x.shape) - x = x.view(1, -1) - - feat = [] - for start in range(0, x.size(1), self.max_chunk): - x_chunk = x[:, start: start + self.max_chunk] - feat_chunk, _ = self.model.extract_features( - source=x_chunk, - padding_mask=None, - mask=False, - output_layer=self.layer, - ) - feat.append(feat_chunk) - return torch.cat(feat, 1).squeeze(0) diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/logmel_feature_reader.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/logmel_feature_reader.py deleted file mode 100644 index 106f50247..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/logmel_feature_reader.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import soundfile as sf -import torch -import torchaudio.compliance.kaldi as kaldi - - -class LogMelFeatureReader: - """ - Wrapper class to run inference on HuBERT model. - Helps extract features for a given audio file. - """ - - def __init__(self, *args, **kwargs): - self.num_mel_bins = kwargs.get("num_mel_bins", 80) - self.frame_length = kwargs.get("frame_length", 25.0) - - def get_feats(self, file_path): - wav, sr = sf.read(file_path) - feats = torch.from_numpy(wav).float() - feats = kaldi.fbank( - feats.unsqueeze(0), - num_mel_bins=self.num_mel_bins, - frame_length=self.frame_length, - sample_frequency=sr, - ) - return feats diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/utils.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/utils.py deleted file mode 100644 index 5aaddf642..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/utils.py +++ /dev/null @@ -1,126 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import gc -import os -import random -import shutil -import numpy as np - -import torch -import tqdm -from examples.textless_nlp.gslm.speech2unit.pretrained.cpc_feature_reader import ( - CpcFeatureReader, -) -from examples.textless_nlp.gslm.speech2unit.pretrained.hubert_feature_reader import ( - HubertFeatureReader, -) -from examples.textless_nlp.gslm.speech2unit.pretrained.logmel_feature_reader import ( - LogMelFeatureReader, -) -from examples.textless_nlp.gslm.speech2unit.pretrained.w2v2_feature_reader import ( - Wav2VecFeatureReader, -) - - -def get_feature_reader(feature_type): - if feature_type == "logmel": - return LogMelFeatureReader - elif feature_type == "hubert": - return HubertFeatureReader - elif feature_type == "w2v2": - return Wav2VecFeatureReader - elif feature_type == "cpc": - return CpcFeatureReader - else: - raise NotImplementedError(f"{feature_type} is not supported.") - - -def get_feature_iterator( - feature_type, checkpoint_path, layer, manifest_path, sample_pct -): - feature_reader_cls = get_feature_reader(feature_type) - with open(manifest_path, "r") as fp: - lines = fp.read().split("\n") - root = lines.pop(0).strip() - file_path_list = [ - os.path.join(root, line.split("\t")[0]) - for line in lines - if len(line) > 0 - ] - if sample_pct < 1.0: - file_path_list = random.sample( - file_path_list, int(sample_pct * len(file_path_list)) - ) - num_files = len(file_path_list) - reader = feature_reader_cls( - checkpoint_path=checkpoint_path, layer=layer - ) - - def iterate(): - for file_path in file_path_list: - feats = reader.get_feats(file_path) - yield feats.cpu().numpy() - - return iterate, num_files - - -def get_features( - feature_type, checkpoint_path, layer, manifest_path, sample_pct, flatten -): - generator, num_files = get_feature_iterator( - feature_type=feature_type, - checkpoint_path=checkpoint_path, - layer=layer, - manifest_path=manifest_path, - sample_pct=sample_pct, - ) - iterator = generator() - - features_list = [] - for features in tqdm.tqdm(iterator, total=num_files): - features_list.append(features) - - # Explicit clean up - del iterator - del generator - gc.collect() - torch.cuda.empty_cache() - - if flatten: - return np.concatenate(features_list) - - return features_list - - -def get_and_dump_features( - feature_type, - checkpoint_path, - layer, - manifest_path, - sample_pct, - flatten, - out_features_path, -): - # Feature extraction - features_batch = get_features( - feature_type=feature_type, - checkpoint_path=checkpoint_path, - layer=layer, - manifest_path=manifest_path, - sample_pct=sample_pct, - flatten=flatten, - ) - - # Save features - out_dir_path = os.path.dirname(out_features_path) - os.makedirs(out_dir_path, exist_ok=True) - shutil.copyfile( - manifest_path, - os.path.join(out_dir_path, os.path.basename(manifest_path)), - ) - np.save(out_features_path, features_batch) - - return features_batch diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/w2v2_feature_reader.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/w2v2_feature_reader.py deleted file mode 100644 index b878321e4..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/speech2unit/pretrained/w2v2_feature_reader.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
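-
-# Rough usage sketch (checkpoint path and layer are hypothetical):
-#
-#     reader = Wav2VecFeatureReader("/path/to/w2v2_checkpoint.pt", layer=14)
-#     feats = reader.get_feats("/path/to/audio.flac")  # CUDA tensor, one row per frame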
- -import torch -import fairseq -import soundfile as sf - - -class Wav2VecFeatureReader: - """ - Wrapper class to run inference on Wav2Vec 2.0 model. - Helps extract features for a given audio file. - """ - - def __init__(self, checkpoint_path, layer): - state = fairseq.checkpoint_utils.load_checkpoint_to_cpu( - checkpoint_path - ) - - w2v_args = state["args"] - self.task = fairseq.tasks.setup_task(w2v_args) - model = self.task.build_model(w2v_args) - model.load_state_dict(state["model"], strict=True) - model.eval() - model.cuda() - self.model = model - self.layer = layer - - def read_audio(self, fname): - wav, sr = sf.read(fname) - if wav.ndim == 2: - wav = wav.mean(-1) - assert wav.ndim == 1, wav.ndim - assert sr == self.task.cfg.sample_rate, sr - return wav - - def get_feats(self, file_path): - x = self.read_audio(file_path) - with torch.no_grad(): - source = torch.from_numpy(x).view(1, -1).float().cuda() - res = self.model( - source=source, mask=False, features_only=True, layer=self.layer - ) - return res["layer_results"][self.layer][0].squeeze(1) diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/tools/README.md b/kosmos-g/fairseq/examples/textless_nlp/gslm/tools/README.md deleted file mode 100644 index 385834841..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/tools/README.md +++ /dev/null @@ -1,25 +0,0 @@ -# GSLM Tools - -## Resynthesis -You can use the command line tool below to input an audio file and get the resynthesized audio. This tool implements the unsupervised method for resynthesis described in the paper. The way to invoke the command line tool is shown below. -``` -FAIRSEQ_ROOT=<path_to_your_fairseq_repo_root> -TYPE=<one_of_logmel/cpc/hubert/w2v2> -ACOUSTIC_MODEL_PATH=<path_of_pretrained_acoustic_model> -LAYER=<layer_of_acoustic_model_to_extract_features_from> -KM_MODEL_PATH=<output_path_of_the_kmeans_model> -TTS_MODEL_PATH=<unit2speech_model_file_path> -# A text file containing the codes, one per line -CODE_DICT_PATH=<unit2speech_code_dict_path> -WAVEGLOW_PATH=<path_where_you_have_downloaded_waveglow_checkpoint> - -PYTHONPATH=${FAIRSEQ_ROOT}:${FAIRSEQ_ROOT}/examples/textless_nlp/gslm/unit2speech python ${FAIRSEQ_ROOT}/examples/textless_nlp/gslm/tools/resynthesize_speech.py \ - --feature_type $TYPE \ - --acoustic_model_path $ACOUSTIC_MODEL_PATH \ - --layer $LAYER \ - --kmeans_model_path $KM_MODEL_PATH \ - --tts_model_path $TTS_MODEL_PATH \ - --code_dict_path $CODE_DICT_PATH \ - --waveglow_path $WAVEGLOW_PATH \ - --max_decoder_steps 2000 -``` \ No newline at end of file diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/tools/resynthesize_speech.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/tools/resynthesize_speech.py deleted file mode 100644 index 309877212..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/tools/resynthesize_speech.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
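-
-# Pipeline implemented below:
-#   input wav -> pretrained acoustic features -> K-means unit sequence
-#   -> Tacotron2 mel-spectrogram -> WaveGlow vocoding (+ denoising) -> output wav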
-
-import argparse
-import gc
-import logging
-import os
-
-import joblib
-import soundfile as sf
-import torch
-from examples.textless_nlp.gslm.speech2unit.pretrained.utils import get_feature_reader
-from examples.textless_nlp.gslm.unit2speech.tts_data import TacotronInputDataset
-from examples.textless_nlp.gslm.unit2speech.utils import (
-    load_tacotron,
-    load_waveglow,
-    synthesize_audio,
-)
-
-
-def get_logger():
-    log_format = "[%(asctime)s] [%(levelname)s]: %(message)s"
-    logging.basicConfig(format=log_format, level=logging.INFO)
-    logger = logging.getLogger(__name__)
-    return logger
-
-
-def get_parser():
-    parser = argparse.ArgumentParser(description="GSLM U2S tool")
-    parser.add_argument(
-        "--feature_type",
-        type=str,
-        choices=["logmel", "hubert", "w2v2", "cpc"],
-        default=None,
-        required=True,
-        help="Acoustic feature type",
-    )
-    parser.add_argument(
-        "--acoustic_model_path",
-        type=str,
-        help="Pretrained acoustic model checkpoint",
-    )
-    parser.add_argument("--layer", type=int, help="Layer of acoustic model")
-    parser.add_argument(
-        "--kmeans_model_path",
-        type=str,
-        required=True,
-        help="K-means model file path to use for inference",
-    )
-    parser.add_argument(
-        "--tts_model_path",
-        type=str,
-        help="TTS model file path to use for inference",
-    )
-    parser.add_argument(
-        "--code_dict_path",
-        type=str,
-        help="Code dict file path to use for inference",
-    )
-    parser.add_argument(
-        "--waveglow_path",
-        type=str,
-        help="Waveglow (vocoder) model file path to use for inference",
-    )
-    parser.add_argument("--max_decoder_steps", type=int, default=2000)
-    parser.add_argument("--denoiser_strength", type=float, default=0.1)
-    return parser
-
-
-################################################
-def main(args, logger):
-    # Acoustic Model
-    logger.info(f"Loading acoustic model from {args.acoustic_model_path}...")
-    feature_reader_cls = get_feature_reader(args.feature_type)
-    reader = feature_reader_cls(
-        checkpoint_path=args.acoustic_model_path, layer=args.layer
-    )
-
-    # K-means Model
-    logger.info(f"Loading K-means model from {args.kmeans_model_path} ...")
-    kmeans_model = joblib.load(open(args.kmeans_model_path, "rb"))
-    kmeans_model.verbose = False
-
-    # TTS Model
-    logger.info(f"Loading TTS model from {args.tts_model_path}...")
-    tacotron_model, sample_rate, hparams = load_tacotron(
-        tacotron_model_path=args.tts_model_path,
-        max_decoder_steps=args.max_decoder_steps,
-    )
-
-    # Waveglow Model
-    logger.info(f"Loading Waveglow model from {args.waveglow_path}...")
-    waveglow, denoiser = load_waveglow(waveglow_path=args.waveglow_path)
-
-    # Dataset
-    if not os.path.exists(hparams.code_dict):
-        hparams.code_dict = args.code_dict_path
-    tts_dataset = TacotronInputDataset(hparams)
-
-    iters = 0
-    while True:
-        in_file_path = input("Input: Enter the full file path of audio file...\n")
-        out_file_path = input("Output: Enter the full file path of audio file...\n")
-        feats = reader.get_feats(in_file_path).cpu().numpy()
-        iters += 1
-        if iters % 1000 == 0:
-            gc.collect()
-            torch.cuda.empty_cache()
-
-        quantized_units = kmeans_model.predict(feats)
-        quantized_units_str = " ".join(map(str, quantized_units))
-
-        tts_input = tts_dataset.get_tensor(quantized_units_str)
-        mel, aud, aud_dn, has_eos = synthesize_audio(
-            tacotron_model,
-            waveglow,
-            denoiser,
-            tts_input.unsqueeze(0),
-            strength=args.denoiser_strength,
-        )
-        sf.write(out_file_path, aud_dn[0].cpu().float().numpy(), sample_rate)
-        logger.info("Resynthesis done!\n")
-
-
-if __name__ == "__main__":
-    parser = get_parser()
-    args 
= parser.parse_args() - logger = get_logger() - logger.info(args) - main(args, logger) diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/ulm/README.md b/kosmos-g/fairseq/examples/textless_nlp/gslm/ulm/README.md deleted file mode 100644 index 01459121c..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/ulm/README.md +++ /dev/null @@ -1,72 +0,0 @@ -# Unit Language Model (ULM) - -Here you can find links to the pre-trained ULMs and instructions on training new models using fairseq. At the end of the page, we also share how to run sampling for those models and provide pointers to the transcribed prompts we used. - -## Pre-trained models - -Using the links below, you can download pre-trained models for various unit types and vocabulary sizes: - -| | 50 | 100 | 200 -|-|-|-|- -| LogMel Filterbank | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/lm_km50/logmel50_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/lm_km100/logmel100_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/lm_km200/logmel200_lm.tgz) -| Modified CPC | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/lm_km50/cpc50_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/lm_km100/cpc100_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/lm_km200/cpc200_lm.tgz) -| HuBERT | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/lm_km50/hubert50_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/lm_km100/hubert100_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/lm_km200/hubert200_lm.tgz) -| Wav2Vec 2.0 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/lm_km50/w2v2_50_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/lm_km100/w2v2_100_lm.tgz) | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/lm_km200/w2v2_200_lm.tgz) - - -## Preprocessing data -Assuming that unit-transcribed train, valid, and test sets are located in `data/train.txt`, `data/valid.txt`, and `data/test.txt`, respectively, -we run the following command to get a preprocessed version of the datast in `data-bin`: - -```bash -fairseq-preprocess --only-source \ - --trainpref data/train.txt --validpref data/valid.txt --testpref data/test.txt \ - --destdir data-bin/ --workers 40 -``` -As a result, the `data-bin` directory should appear. - -## Fitting a Unit Language Model (ULM) -As an ULM, we train a standard fairseq Transformer LM. Assuming 8 GPUs used for training, a good starting point for an ULM training would be: -```bash - fairseq-train data-bin/ \ - --task=language_modeling \ - --arch=transformer_lm_big \ - --share-decoder-input-output-embed \ - --dropout=0.1 \ - --attention-dropout=0.1 \ - --optimizer=adam \ - --adam-betas='(0.9, 0.98)' \ - --clip-norm=1.0 \ - --lr=0.0005 \ - --lr-scheduler=inverse_sqrt \ - --warmup-updates=4000 \ - --warmup-init-lr=1e-07 \ - --tokens-per-sample=3072 \ - --update-freq=16 \ - --max-tokens=4096 \ - --num-workers=4 \ - --skip-invalid-size-inputs-valid-test \ - --max-update=500000 \ - --log-interval=10 \ - --seed=100501 \ - --fp16 \ - --sample-break-mode=eos -``` -This command will train a Transformer-large model (12 layers). You can train other standard LM models provided by fairseq, e.g. specify `--arch=transformer_lm` to train a smaller (6-layer) Transformer model. 
When training with a different number of GPUs, it might be a good idea to adjust the `update-freq` parameter. To save GPU memory at the expense of additional computation, it can be useful to enable activation checkpointing with `--checkpoint-activations`.
-
-## Sampling from an ULM
-Once an ULM is trained, we can use it to generate new utterances. Suppose that the prompts are given in a file named `prompts.txt`. Then we can sample continuations by running the following command:
-
-```bash
- python sample.py data-bin/ \
-        --path=checkpoints/checkpoint_best.pt --task=language_modeling --sampling --temperature=0.7 \
-        --seed=1 --prompts=prompts.txt --output=samples.txt --max-len-a=0 --max-len-b=500 \
-        --prefix-size=-1 --batch-size=16 --fp16 --samples-per-prompt=10
-```
-Here, `--prefix-size` controls the number of tokens that are used to prime the ULM. When set to a positive value, the sampling script will take the first `prefix-size` tokens to prompt the ULM; with `0` it runs unconditional sampling, and with `-1` the entire prompt is used.
-`--samples-per-prompt` specifies how many utterances are generated with every prompt, which can be useful when generating multiple prompt continuations. In this command, `--max-len-a` and `--max-len-b` control the number of generated tokens.
-
-When using a pretrained model from above, `data-bin` should point to the unpacked directory (the one containing the `dict.txt` file).
-
-To generate evaluation prompts, we used utterances from LibriSpeech dev-clean and test-clean that are longer than 6s. We took the first 3s of each utterance as a prompt. Unit transcripts of those prompts can be downloaded here: [[dev]](https://dl.fbaipublicfiles.com/textless_nlp/gslm/eval_data/dev_prompts.tgz) [[test]](https://dl.fbaipublicfiles.com/textless_nlp/gslm/eval_data/test_prompts.tgz)
-
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/ulm/sample.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/ulm/sample.py
deleted file mode 100644
index 77302a689..000000000
--- a/kosmos-g/fairseq/examples/textless_nlp/gslm/ulm/sample.py
+++ /dev/null
@@ -1,174 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
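-
-# The --prompts file is expected to hold one "<seq_id>|<unit sequence>" pair per
-# line (see the split('|', 1) below); samples are written to --output as
-# "<seq_id>__<sample_idx>|<generated units>".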
-""" -Sample from a trained LM; hacked fairseq-interactive -""" -from collections import namedtuple -import os -import ast -import numpy as np - -from fairseq import checkpoint_utils, options, tasks, utils - -import tqdm - -Batch = namedtuple('Batch', 'ids src_tokens src_lengths') -Translation = namedtuple('Translation', 'src_str hypos pos_scores alignments') - - -def make_batches(lines, args, task, max_positions): - tokens = [ - task.source_dictionary.encode_line( - src_str, add_if_not_exist=False - ).long() - for src_str in lines - ] - lengths = [t.numel() for t in tokens] - itr = task.get_batch_iterator( - dataset=task.build_dataset_for_inference(tokens, lengths), - max_tokens=args.dataset.max_tokens, - max_sentences=args.dataset.batch_size, - max_positions=max_positions, - ignore_invalid_inputs=args.dataset.skip_invalid_size_inputs_valid_test - ).next_epoch_itr(shuffle=False) - for batch in itr: - yield Batch( - ids=batch['id'], - src_tokens=batch['net_input']['src_tokens'], src_lengths=batch['net_input']['src_lengths'], - ) - - -def main(args): - arg_prompts = args.prompts - arg_output = args.output - arg_debug = args.debug - arg_sample_size = args.samples_per_prompt - - try: - from fairseq.dataclass.utils import convert_namespace_to_omegaconf - args = convert_namespace_to_omegaconf(args) - except: - pass - - # if args.max_tokens is None and args.max_sentences is None: - if args.common.seed is not None: - np.random.seed(args.common.seed) - utils.set_torch_seed(args.common.seed) - - if args.generation.sampling: - args.generation.nbest = args.generation.beam = arg_sample_size - - task = tasks.setup_task(args.task) - - overrides = ast.literal_eval(args.common_eval.model_overrides) - - models, _model_args = checkpoint_utils.load_model_ensemble( - args.common_eval.path.split(os.pathsep), - arg_overrides=overrides, - task=task, - suffix=getattr(args, "checkpoint_suffix", ""), - ) - - # Set dictionaries - src_dict = task.source_dictionary - tgt_dict = task.target_dictionary - - # Optimize ensemble for generation - for model in models: - model.prepare_for_inference_(args) - model.cuda() - - # Load alignment dictionary for unknown word replacement - # (None if no unknown word replacement, empty if no path to align dictionary) - align_dict = utils.load_align_dict(args.generation.replace_unk) - - max_positions = utils.resolve_max_positions( - task.max_positions(), - *[model.max_positions() for model in models] - ) - - output_file = open(arg_output, 'w') - - with open(arg_prompts, 'r') as fin: - lines = fin.readlines() - - split = [x.split('|', 1) for x in lines] - seq_id = [x[0] for x in split] - prompts = [x[1] for x in split] - - if args.generation.prefix_size >= 0: - prompts = [' '.join(l.split()[:args.generation.prefix_size]) - for l in prompts] - - if arg_debug: - prompts = prompts[:10] - - generator = task.build_generator(models, args.generation) - - start_id = 0 - pbar = tqdm.tqdm(total=len(prompts)) - for batch in make_batches(prompts, args, task, max_positions): - src_tokens = batch.src_tokens - src_lengths = batch.src_lengths - src_tokens = src_tokens.cuda() - src_lengths = src_lengths.cuda() - - sample = { - 'net_input': { - 'src_tokens': src_tokens, - 'src_lengths': src_lengths, - }, - } - - results = [] - translations = task.inference_step(generator, models, sample) - for i, (id, hypos) in enumerate(zip(batch.ids.tolist(), translations)): - src_tokens_i = utils.strip_pad(src_tokens[i], tgt_dict.pad()) - results.append((i + start_id, src_tokens_i, hypos)) - - # sort output to match 
input order - for id, src_tokens, hypos in sorted(results, key=lambda x: x[0]): - if src_dict is not None: - src_str = src_dict.string( - src_tokens, args.common_eval.post_process) - - # Process top predictions - for hypo_id, hypo in enumerate(hypos): - _hypo_tokens, hypo_str, _alignment = utils.post_process_prediction( - hypo_tokens=hypo['tokens'].int().cpu(), - src_str=src_str, - alignment=hypo['alignment'], - align_dict=align_dict, - tgt_dict=tgt_dict, - remove_bpe=args.common_eval.post_process, - ) - - detok_hypo_str = hypo_str - utterance = detok_hypo_str - print(f'{seq_id[id]}__{hypo_id}|{utterance}', file=output_file) - pbar.update(1) - start_id += len(results) - - # output_file.close() - - -def cli_main(): - parser = options.get_interactive_generation_parser() - parser.add_argument('--prompts', type=str, default=None, required=True) - parser.add_argument('--output', type=str, default=None, required=True) - parser.add_argument('--debug', action='store_true') - parser.add_argument('--samples-per-prompt', type=int, default=1) - - args = options.parse_args_and_arch(parser) - - np.random.seed(args.seed) - utils.set_torch_seed(args.seed) - - main(args) - - -if __name__ == '__main__': - cli_main() diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/README.md b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/README.md deleted file mode 100644 index e61601392..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/README.md +++ /dev/null @@ -1,40 +0,0 @@ -# Unit to Speech Model (unit2speech) - -Unit to speech model is modified Tacotron2 model that learns to synthesize speech from discrete speech units. All models are trained on quantized [LJSpeech](https://keithito.com/LJ-Speech-Dataset/). - -Upstream Units | Download Links | model md5 -|-|-|- -Log Mel Filterbank + KM50 | [model](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/tts_km50/tts_checkpoint_best.pt) - [code_dict](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/tts_km50/code_dict) | 932b3b8527c0125f5f964b57762eba49 -Log Mel Filterbank + KM100 | [model](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/tts_km100/tts_checkpoint_best.pt) - [code_dict](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/tts_km100/code_dict) | cde0b0d278a39011d0acbd5df27abdf4 -Log Mel Filterbank + KM200 | [model](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/tts_km200/tts_checkpoint_best.pt) - [code_dict](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/tts_km200/code_dict) | dba0f1d4de64bc7976718834010b23e7 -Modified CPC + KM50 | [model](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/tts_km50/tts_checkpoint_best.pt) - [code_dict](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/tts_km50/code_dict) | a585e8dd8890ea56164f17635dd8e613 -Modified CPC + KM100 | [model](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/tts_km100/tts_checkpoint_best.pt) - [code_dict](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/tts_km100/code_dict) | 5c0ee2869b4f483d17f37f1a41a548e0 -Modified CPC + KM200 | [model](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/tts_km200/tts_checkpoint_best.pt) - [code_dict](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/tts_km200/code_dict) | 2f0c9951cf37020d9464514bff48bc5d -HuBERT Base + KM50 | [model](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/tts_km50/tts_checkpoint_best.pt) - [code_dict](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/tts_km50/code_dict) | 85ffce8baec5aa90035ab696fe676fce -HuBERT Base 
+ KM100 | [model](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/tts_km100/tts_checkpoint_best.pt) - [code_dict](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/tts_km100/code_dict) | df4a9c6ffd1bb00c91405432c234aba3
-HuBERT Base + KM200 | [model](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/tts_km200/tts_checkpoint_best.pt) - [code_dict](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/tts_km200/code_dict) | ac72f2c0c563589819bec116c7f8d274
-wav2vec 2.0 Large + KM50 | [model](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/tts_km50/tts_checkpoint_best.pt) - [code_dict](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/tts_km50/code_dict) | e3503d0ad822b2c24b89f68b857fedff
-wav2vec 2.0 Large + KM100 | [model](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/tts_km100/tts_checkpoint_best.pt) - [code_dict](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/tts_km100/code_dict) | eb3666e456ae4c96bf2a1eec825c13ed
-wav2vec 2.0 Large + KM200 | [model](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/tts_km200/tts_checkpoint_best.pt) - [code_dict](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/tts_km200/code_dict) | 777d343e963c4d64f04d78eef032f4e8
-
-## Run inference using a unit2speech model
-* Install librosa, unidecode and inflect using `pip install librosa unidecode inflect`
-* Download [Waveglow checkpoint](https://dl.fbaipublicfiles.com/textless_nlp/gslm/waveglow_256channels_new.pt). This is the vocoder.
-
-Sample command to run inference with a trained unit2speech model. Note that the quantized audio to be synthesized must use the same units as the unit2speech model was trained with.
-```
-FAIRSEQ_ROOT=<path_to_your_fairseq_repo_root>
-TTS_MODEL_PATH=<unit2speech_model_file_path>
-QUANTIZED_UNIT_PATH=<quantized_audio_file_path>
-OUT_DIR=<dir_to_dump_synthesized_audio_files>
-WAVEGLOW_PATH=<path_where_you_have_downloaded_waveglow_checkpoint>
-CODE_DICT_PATH=<unit2speech_code_dict_path>
-
-PYTHONPATH=${FAIRSEQ_ROOT}:${FAIRSEQ_ROOT}/examples/textless_nlp/gslm/unit2speech python ${FAIRSEQ_ROOT}/examples/textless_nlp/gslm/unit2speech/synthesize_audio_from_units.py \
-    --tts_model_path $TTS_MODEL_PATH \
-    --quantized_unit_path $QUANTIZED_UNIT_PATH \
-    --out_audio_dir $OUT_DIR \
-    --waveglow_path $WAVEGLOW_PATH \
-    --code_dict_path $CODE_DICT_PATH \
-    --max_decoder_steps 2000
-```
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/convert_to_16k.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/convert_to_16k.py
deleted file mode 100644
index 2be848fce..000000000
--- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/convert_to_16k.py
+++ /dev/null
@@ -1,56 +0,0 @@
-import os
-import shlex
-import subprocess
-import progressbar
-from time import time
-from pathlib import Path
-
-def find_all_files(path_dir, extension):
-    out = []
-    for root, dirs, filenames in os.walk(path_dir):
-        for f in filenames:
-            if f.endswith(extension):
-                out.append(((str(Path(f).stem)), os.path.join(root, f)))
-    return out
-
-def convert16k(inputfile, outputfile16k):
-    command = ('sox -c 1 -b 16 {} -t wav {} rate 16k'.format(inputfile, outputfile16k))
-    subprocess.call(shlex.split(command))
-
-if __name__ == "__main__":
-    import argparse
-
-    parser = argparse.ArgumentParser(description='Convert to wav 16k audio using sox.')
-    parser.add_argument('input_dir', type=str,
-                        help='Path to the input dir.')
-    parser.add_argument('output_dir', type=str,
-                        help='Path to the output dir.')
-    
-    parser.add_argument('--extension', type=str, default='wav',
-                        help='Audio file extension in the input. Default: wav')
-    args = parser.parse_args()
-
-    # Find all audio files
-    print(f"Finding all audio files with extension '{args.extension}' from {args.input_dir}...")
-    audio_files = find_all_files(args.input_dir, args.extension)
-    print(f"Done! Found {len(audio_files)} files.")
-
-    # Convert to relative path
-    audio_files = [os.path.relpath(file[-1], start=args.input_dir) for file in audio_files]
-
-    # Create all the directories needed
-    rel_dirs_set = set([os.path.dirname(file) for file in audio_files])
-    for rel_dir in rel_dirs_set:
-        Path(os.path.join(args.output_dir, rel_dir)).mkdir(parents=True, exist_ok=True)
-
-    # Converting wav files
-    print("Converting the audio to wav files...")
-    bar = progressbar.ProgressBar(maxval=len(audio_files))
-    bar.start()
-    start_time = time()
-    for index, file in enumerate(audio_files):
-        bar.update(index)
-        input_file = os.path.join(args.input_dir, file)
-        output_file = os.path.join(args.output_dir, os.path.splitext(file)[0]+".wav")
-        convert16k(input_file, output_file)
-    bar.finish()
-    print(f"...done {len(audio_files)} files in {time()-start_time} seconds.")
\ No newline at end of file
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py
deleted file mode 100644
index 7a7696403..000000000
--- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py
+++ /dev/null
@@ -1,311 +0,0 @@
-# *****************************************************************************
-# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-#     * Redistributions of source code must retain the above copyright
-#       notice, this list of conditions and the following disclaimer.
-#     * Redistributions in binary form must reproduce the above copyright
-#       notice, this list of conditions and the following disclaimer in the
-#       documentation and/or other materials provided with the distribution.
-#     * Neither the name of the NVIDIA CORPORATION nor the
-#       names of its contributors may be used to endorse or promote products
-#       derived from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
-# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-# -# ***************************************************************************** -import copy -import torch -from torch.autograd import Variable -import torch.nn.functional as F - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a+input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -class WaveGlowLoss(torch.nn.Module): - def __init__(self, sigma=1.0): - super(WaveGlowLoss, self).__init__() - self.sigma = sigma - - def forward(self, model_output): - z, log_s_list, log_det_W_list = model_output - for i, log_s in enumerate(log_s_list): - if i == 0: - log_s_total = torch.sum(log_s) - log_det_W_total = log_det_W_list[i] - else: - log_s_total = log_s_total + torch.sum(log_s) - log_det_W_total += log_det_W_list[i] - - loss = torch.sum(z*z)/(2*self.sigma*self.sigma) - log_s_total - log_det_W_total - return loss/(z.size(0)*z.size(1)*z.size(2)) - - -class Invertible1x1Conv(torch.nn.Module): - """ - The layer outputs both the convolution, and the log determinant - of its weight matrix. If reverse=True it does convolution with - inverse - """ - def __init__(self, c): - super(Invertible1x1Conv, self).__init__() - self.conv = torch.nn.Conv1d(c, c, kernel_size=1, stride=1, padding=0, - bias=False) - - # Sample a random orthonormal matrix to initialize weights - W = torch.qr(torch.FloatTensor(c, c).normal_())[0] - - # Ensure determinant is 1.0 not -1.0 - if torch.det(W) < 0: - W[:,0] = -1*W[:,0] - W = W.view(c, c, 1) - self.conv.weight.data = W - - def forward(self, z, reverse=False): - # shape - batch_size, group_size, n_of_groups = z.size() - - W = self.conv.weight.squeeze() - - if reverse: - if not hasattr(self, 'W_inverse'): - # Reverse computation - W_inverse = W.float().inverse() - W_inverse = Variable(W_inverse[..., None]) - if z.type() == 'torch.cuda.HalfTensor': - W_inverse = W_inverse.half() - self.W_inverse = W_inverse - z = F.conv1d(z, self.W_inverse, bias=None, stride=1, padding=0) - return z - else: - # Forward computation - log_det_W = batch_size * n_of_groups * torch.logdet(W) - z = self.conv(z) - return z, log_det_W - - -class WN(torch.nn.Module): - """ - This is the WaveNet like layer for the affine coupling. The primary difference - from WaveNet is the convolutions need not be causal. There is also no dilation - size reset. The dilation only doubles on each layer - """ - def __init__(self, n_in_channels, n_mel_channels, n_layers, n_channels, - kernel_size): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - assert(n_channels % 2 == 0) - self.n_layers = n_layers - self.n_channels = n_channels - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - - start = torch.nn.Conv1d(n_in_channels, n_channels, 1) - start = torch.nn.utils.weight_norm(start, name='weight') - self.start = start - - # Initializing last layer to 0 makes the affine coupling layers - # do nothing at first. 
This helps with training stability - end = torch.nn.Conv1d(n_channels, 2*n_in_channels, 1) - end.weight.data.zero_() - end.bias.data.zero_() - self.end = end - - cond_layer = torch.nn.Conv1d(n_mel_channels, 2*n_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = 2 ** i - padding = int((kernel_size*dilation - dilation)/2) - in_layer = torch.nn.Conv1d(n_channels, 2*n_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2*n_channels - else: - res_skip_channels = n_channels - res_skip_layer = torch.nn.Conv1d(n_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, forward_input): - audio, spect = forward_input - audio = self.start(audio) - output = torch.zeros_like(audio) - n_channels_tensor = torch.IntTensor([self.n_channels]) - - spect = self.cond_layer(spect) - - for i in range(self.n_layers): - spect_offset = i*2*self.n_channels - acts = fused_add_tanh_sigmoid_multiply( - self.in_layers[i](audio), - spect[:,spect_offset:spect_offset+2*self.n_channels,:], - n_channels_tensor) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - audio = audio + res_skip_acts[:,:self.n_channels,:] - output = output + res_skip_acts[:,self.n_channels:,:] - else: - output = output + res_skip_acts - - return self.end(output) - - -class WaveGlow(torch.nn.Module): - def __init__(self, n_mel_channels, n_flows, n_group, n_early_every, - n_early_size, WN_config): - super(WaveGlow, self).__init__() - - self.upsample = torch.nn.ConvTranspose1d(n_mel_channels, - n_mel_channels, - 1024, stride=256) - assert(n_group % 2 == 0) - self.n_flows = n_flows - self.n_group = n_group - self.n_early_every = n_early_every - self.n_early_size = n_early_size - self.WN = torch.nn.ModuleList() - self.convinv = torch.nn.ModuleList() - - n_half = int(n_group/2) - - # Set up layers with the right sizes based on how many dimensions - # have been output already - n_remaining_channels = n_group - for k in range(n_flows): - if k % self.n_early_every == 0 and k > 0: - n_half = n_half - int(self.n_early_size/2) - n_remaining_channels = n_remaining_channels - self.n_early_size - self.convinv.append(Invertible1x1Conv(n_remaining_channels)) - self.WN.append(WN(n_half, n_mel_channels*n_group, **WN_config)) - self.n_remaining_channels = n_remaining_channels # Useful during inference - - def forward(self, forward_input): - """ - forward_input[0] = mel_spectrogram: batch x n_mel_channels x frames - forward_input[1] = audio: batch x time - """ - spect, audio = forward_input - - # Upsample spectrogram to size of audio - spect = self.upsample(spect) - assert(spect.size(2) >= audio.size(1)) - if spect.size(2) > audio.size(1): - spect = spect[:, :, :audio.size(1)] - - spect = spect.unfold(2, self.n_group, self.n_group).permute(0, 2, 1, 3) - spect = spect.contiguous().view(spect.size(0), spect.size(1), -1).permute(0, 2, 1) - - audio = audio.unfold(1, self.n_group, self.n_group).permute(0, 2, 1) - output_audio = [] - log_s_list = [] - log_det_W_list = [] - - for k in range(self.n_flows): - if k % self.n_early_every == 0 and k > 0: - output_audio.append(audio[:,:self.n_early_size,:]) - audio = audio[:,self.n_early_size:,:] - - audio, log_det_W = 
self.convinv[k](audio)
-            log_det_W_list.append(log_det_W)
-
-            n_half = int(audio.size(1)/2)
-            audio_0 = audio[:,:n_half,:]
-            audio_1 = audio[:,n_half:,:]
-
-            output = self.WN[k]((audio_0, spect))
-            log_s = output[:, n_half:, :]
-            b = output[:, :n_half, :]
-            audio_1 = torch.exp(log_s)*audio_1 + b
-            log_s_list.append(log_s)
-
-            audio = torch.cat([audio_0, audio_1],1)
-
-        output_audio.append(audio)
-        return torch.cat(output_audio,1), log_s_list, log_det_W_list
-
-    def infer(self, spect, sigma=1.0):
-        spect = self.upsample(spect)
-        # trim conv artifacts. maybe pad spec to kernel multiple
-        time_cutoff = self.upsample.kernel_size[0] - self.upsample.stride[0]
-        spect = spect[:, :, :-time_cutoff]
-
-        spect = spect.unfold(2, self.n_group, self.n_group).permute(0, 2, 1, 3)
-        spect = spect.contiguous().view(spect.size(0), spect.size(1), -1).permute(0, 2, 1)
-
-        if spect.type() == 'torch.cuda.HalfTensor':
-            audio = torch.cuda.HalfTensor(spect.size(0),
-                                          self.n_remaining_channels,
-                                          spect.size(2)).normal_()
-        else:
-            audio = torch.cuda.FloatTensor(spect.size(0),
-                                           self.n_remaining_channels,
-                                           spect.size(2)).normal_()
-
-        audio = torch.autograd.Variable(sigma*audio)
-
-        for k in reversed(range(self.n_flows)):
-            n_half = int(audio.size(1)/2)
-            audio_0 = audio[:,:n_half,:]
-            audio_1 = audio[:,n_half:,:]
-
-            output = self.WN[k]((audio_0, spect))
-
-            s = output[:, n_half:, :]
-            b = output[:, :n_half, :]
-            audio_1 = (audio_1 - b)/torch.exp(s)
-            audio = torch.cat([audio_0, audio_1],1)
-
-            audio = self.convinv[k](audio, reverse=True)
-
-            if k % self.n_early_every == 0 and k > 0:
-                if spect.type() == 'torch.cuda.HalfTensor':
-                    z = torch.cuda.HalfTensor(spect.size(0), self.n_early_size, spect.size(2)).normal_()
-                else:
-                    z = torch.cuda.FloatTensor(spect.size(0), self.n_early_size, spect.size(2)).normal_()
-                audio = torch.cat((sigma*z, audio),1)
-
-        audio = audio.permute(0,2,1).contiguous().view(audio.size(0), -1).data
-        return audio
-
-    @staticmethod
-    def remove_weightnorm(model):
-        waveglow = model
-        for WN in waveglow.WN:
-            WN.start = torch.nn.utils.remove_weight_norm(WN.start)
-            WN.in_layers = remove(WN.in_layers)
-            WN.cond_layer = torch.nn.utils.remove_weight_norm(WN.cond_layer)
-            WN.res_skip_layers = remove(WN.res_skip_layers)
-        return waveglow
-
-
-def remove(conv_list):
-    new_conv_list = torch.nn.ModuleList()
-    for old_conv in conv_list:
-        old_conv = torch.nn.utils.remove_weight_norm(old_conv)
-        new_conv_list.append(old_conv)
-    return new_conv_list
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/multiproc.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/multiproc.py
deleted file mode 100644
index 2a287a4e9..000000000
--- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/multiproc.py
+++ /dev/null
@@ -1,27 +0,0 @@
-import os
-import time
-import torch
-import sys
-import subprocess
-
-argslist = list(sys.argv)[1:]
-log_dir = argslist[-1]
-num_gpus = torch.cuda.device_count()
-argslist.append('--n_gpus={}'.format(num_gpus))
-workers = []
-job_id = time.strftime("%Y_%m_%d-%H%M%S")
-argslist.append("--group_name=group_{}".format(job_id))
-
-print("GPU log directory is {}".format(log_dir))
-os.makedirs(log_dir, exist_ok=True)
-for i in range(num_gpus):
-    argslist.append('--rank={}'.format(i))
-    stdout = None if i == 0 else open("{}/{}_GPU_{}.log".format(log_dir, job_id, i),
-                                      "w")
-    print(argslist)
-    p = subprocess.Popen([str(sys.executable)]+argslist, stdout=stdout)
-    workers.append(p)
-    argslist = argslist[:-1]
-
-for p in workers:
-    p.wait()
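
A quick numeric check of the log-determinant bookkeeping in glow.py above: `Invertible1x1Conv` starts from a random orthonormal weight with its determinant flipped to +1, so the `batch_size * n_of_groups * torch.logdet(W)` term starts near zero. A toy sketch under those assumptions (illustration only, not repo code; the channel count and shapes are made up):

```python
import torch

# Toy illustration of the Invertible1x1Conv log-det term in glow.py.
c = 8                                       # channels per group (assumed)
W = torch.linalg.qr(torch.randn(c, c)).Q    # random orthonormal matrix
if torch.det(W) < 0:
    W[:, 0] = -W[:, 0]                      # force det to +1.0, as glow.py does

z = torch.randn(4, c, 100)                  # (batch_size, group, n_of_groups)
log_det = z.size(0) * z.size(2) * torch.logdet(W)
print(float(torch.det(W)), float(log_det))  # ~1.0 and ~0.0 at initialization
```

diff --git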
a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/synthesize_audio_from_units.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/synthesize_audio_from_units.py
deleted file mode 100644
index 80730843b..000000000
--- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/synthesize_audio_from_units.py
+++ /dev/null
@@ -1,105 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import logging
-import os
-
-import soundfile as sf
-from examples.textless_nlp.gslm.unit2speech.tts_data import (
-    TacotronInputDataset,
-)
-from examples.textless_nlp.gslm.unit2speech.utils import (
-    load_quantized_audio_from_file,
-    load_tacotron,
-    load_waveglow,
-    synthesize_audio,
-)
-
-
-def get_logger():
-    log_format = "[%(asctime)s] [%(levelname)s]: %(message)s"
-    logging.basicConfig(format=log_format, level=logging.INFO)
-    logger = logging.getLogger(__name__)
-    return logger
-
-
-def get_parser():
-    parser = argparse.ArgumentParser(
-        description="Unit to speech (unit2speech) generator."
-    )
-    parser.add_argument(
-        "--quantized_unit_path",
-        type=str,
-        help="Quantized unit file path to use for inference",
-    )
-    parser.add_argument(
-        "--tts_model_path",
-        type=str,
-        help="TTS model file path to use for inference",
-    )
-    parser.add_argument(
-        "--waveglow_path",
-        type=str,
-        help="Path to the waveglow checkpoint (vocoder).",
-    )
-    parser.add_argument(
-        "--code_dict_path",
-        type=str,
-        help="Code dict file path to use for inference",
-    )
-    parser.add_argument("--max_decoder_steps", type=int, default=2000)
-    parser.add_argument("--denoiser_strength", type=float, default=0.1)
-    parser.add_argument(
-        "--out_audio_dir",
-        type=str,
-        help="Output directory to dump audio files",
-    )
-
-    return parser
-
-
-def main(args, logger):
-    # Load quantized audio
-    logger.info(f"Loading quantized audio from {args.quantized_unit_path}...")
-    names_batch, quantized_units_batch = load_quantized_audio_from_file(
-        file_path=args.quantized_unit_path
-    )
-
-    logger.info(f"Loading TTS model from {args.tts_model_path}...")
-    tacotron_model, sample_rate, hparams = load_tacotron(
-        tacotron_model_path=args.tts_model_path,
-        max_decoder_steps=args.max_decoder_steps,
-    )
-
-    logger.info(f"Loading Waveglow model from {args.waveglow_path}...")
-    waveglow, denoiser = load_waveglow(waveglow_path=args.waveglow_path)
-
-    if not os.path.exists(hparams.code_dict):
-        hparams.code_dict = args.code_dict_path
-    tts_dataset = TacotronInputDataset(hparams)
-
-    for name, quantized_units in zip(names_batch, quantized_units_batch):
-        quantized_units_str = " ".join(map(str, quantized_units))
-        tts_input = tts_dataset.get_tensor(quantized_units_str)
-        mel, aud, aud_dn, has_eos = synthesize_audio(
-            tacotron_model,
-            waveglow,
-            denoiser,
-            tts_input.unsqueeze(0),
-            strength=args.denoiser_strength,
-        )
-        out_file_path = os.path.join(args.out_audio_dir, f"{name}.wav")
-        sf.write(
-            f"{out_file_path}", aud_dn[0].cpu().float().numpy(), sample_rate
-        )
-
-
-if __name__ == "__main__":
-    parser = get_parser()
-    args = parser.parse_args()
-    logger = get_logger()
-    logger.info(args)
-    main(args, logger)
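
For reference, a hedged sketch of preparing the `--quantized_unit_path` input for the script above. The one-utterance-per-line `name|u1 u2 ...` layout is an assumption inferred from the loader's name; verify it against `load_quantized_audio_from_file` in `unit2speech/utils.py` before relying on it:

```python
# Hypothetical helper: write quantized units in a "name|u1 u2 ..." layout.
# The exact file format is an assumption, not confirmed from the repo.
units = {"sample_0": [12, 12, 41, 7, 30], "sample_1": [3, 3, 19, 19]}
with open("quantized_units.txt", "w") as f:
    for name, codes in units.items():
        f.write(f"{name}|{' '.join(str(u) for u in codes)}\n")
```

diff --git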
a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/audio_processing.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/audio_processing.py deleted file mode 100644 index b5af7f723..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/audio_processing.py +++ /dev/null @@ -1,93 +0,0 @@ -import torch -import numpy as np -from scipy.signal import get_window -import librosa.util as librosa_util - - -def window_sumsquare(window, n_frames, hop_length=200, win_length=800, - n_fft=800, dtype=np.float32, norm=None): - """ - # from librosa 0.6 - Compute the sum-square envelope of a window function at a given hop length. - - This is used to estimate modulation effects induced by windowing - observations in short-time fourier transforms. - - Parameters - ---------- - window : string, tuple, number, callable, or list-like - Window specification, as in `get_window` - - n_frames : int > 0 - The number of analysis frames - - hop_length : int > 0 - The number of samples to advance between frames - - win_length : [optional] - The length of the window function. By default, this matches `n_fft`. - - n_fft : int > 0 - The length of each analysis frame. - - dtype : np.dtype - The data type of the output - - Returns - ------- - wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))` - The sum-squared envelope of the window function - """ - if win_length is None: - win_length = n_fft - - n = n_fft + hop_length * (n_frames - 1) - x = np.zeros(n, dtype=dtype) - - # Compute the squared window at the desired length - win_sq = get_window(window, win_length, fftbins=True) - win_sq = librosa_util.normalize(win_sq, norm=norm)**2 - win_sq = librosa_util.pad_center(win_sq, n_fft) - - # Fill the envelope - for i in range(n_frames): - sample = i * hop_length - x[sample:min(n, sample + n_fft)] += win_sq[:max(0, min(n_fft, n - sample))] - return x - - -def griffin_lim(magnitudes, stft_fn, n_iters=30): - """ - PARAMS - ------ - magnitudes: spectrogram magnitudes - stft_fn: STFT class with transform (STFT) and inverse (ISTFT) methods - """ - - angles = np.angle(np.exp(2j * np.pi * np.random.rand(*magnitudes.size()))) - angles = angles.astype(np.float32) - angles = torch.autograd.Variable(torch.from_numpy(angles)) - signal = stft_fn.inverse(magnitudes, angles).squeeze(1) - - for i in range(n_iters): - _, angles = stft_fn.transform(signal) - signal = stft_fn.inverse(magnitudes, angles).squeeze(1) - return signal - - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/cleaners.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/cleaners.py deleted file mode 100644 index e2e35c1a8..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/cleaners.py +++ /dev/null @@ -1,90 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. 
"transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - -import re -from unidecode import unidecode -from .numbers import normalize_numbers - - -# Regular expression matching whitespace: -_whitespace_re = re.compile(r'\s+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def expand_numbers(text): - return normalize_numbers(text) - - -def lowercase(text): - return text.lower() - - -def collapse_whitespace(text): - return re.sub(_whitespace_re, ' ', text) - - -def convert_to_ascii(text): - return unidecode(text) - - -def basic_cleaners(text): - '''Basic pipeline that lowercases and collapses whitespace without transliteration.''' - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def transliteration_cleaners(text): - '''Pipeline for non-English text that transliterates to ASCII.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = collapse_whitespace(text) - return text - - -def english_cleaners(text): - '''Pipeline for English text, including number and abbreviation expansion.''' - text = convert_to_ascii(text) - text = lowercase(text) - text = expand_numbers(text) - text = expand_abbreviations(text) - text = collapse_whitespace(text) - return text diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/cmudict.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/cmudict.py deleted file mode 100644 index 62bfef745..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/cmudict.py +++ /dev/null @@ -1,65 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -import re - - -valid_symbols = [ - 'AA', 'AA0', 'AA1', 'AA2', 'AE', 'AE0', 'AE1', 'AE2', 'AH', 'AH0', 'AH1', 'AH2', - 'AO', 'AO0', 'AO1', 'AO2', 'AW', 'AW0', 'AW1', 'AW2', 'AY', 'AY0', 'AY1', 'AY2', - 'B', 'CH', 'D', 'DH', 'EH', 'EH0', 'EH1', 'EH2', 'ER', 'ER0', 'ER1', 'ER2', 'EY', - 'EY0', 'EY1', 'EY2', 'F', 'G', 'HH', 'IH', 'IH0', 'IH1', 'IH2', 'IY', 'IY0', 'IY1', - 'IY2', 'JH', 'K', 'L', 'M', 'N', 'NG', 'OW', 'OW0', 'OW1', 'OW2', 'OY', 'OY0', - 'OY1', 'OY2', 'P', 'R', 'S', 'SH', 'T', 'TH', 'UH', 'UH0', 'UH1', 'UH2', 'UW', - 'UW0', 'UW1', 'UW2', 'V', 'W', 'Y', 'Z', 'ZH' -] - -_valid_symbol_set = set(valid_symbols) - - -class CMUDict: - '''Thin wrapper around CMUDict data. 
http://www.speech.cs.cmu.edu/cgi-bin/cmudict''' - def __init__(self, file_or_path, keep_ambiguous=True): - if isinstance(file_or_path, str): - with open(file_or_path, encoding='latin-1') as f: - entries = _parse_cmudict(f) - else: - entries = _parse_cmudict(file_or_path) - if not keep_ambiguous: - entries = {word: pron for word, pron in entries.items() if len(pron) == 1} - self._entries = entries - - - def __len__(self): - return len(self._entries) - - - def lookup(self, word): - '''Returns list of ARPAbet pronunciations of the given word.''' - return self._entries.get(word.upper()) - - - -_alt_re = re.compile(r'\([0-9]+\)') - - -def _parse_cmudict(file): - cmudict = {} - for line in file: - if len(line) and (line[0] >= 'A' and line[0] <= 'Z' or line[0] == "'"): - parts = line.split(' ') - word = re.sub(_alt_re, '', parts[0]) - pronunciation = _get_pronunciation(parts[1]) - if pronunciation: - if word in cmudict: - cmudict[word].append(pronunciation) - else: - cmudict[word] = [pronunciation] - return cmudict - - -def _get_pronunciation(s): - parts = s.strip().split(' ') - for part in parts: - if part not in _valid_symbol_set: - return None - return ' '.join(parts) diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/layers.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/layers.py deleted file mode 100644 index f10d557ff..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/layers.py +++ /dev/null @@ -1,103 +0,0 @@ -import torch -from librosa.filters import mel as librosa_mel_fn -from .audio_processing import dynamic_range_compression -from .audio_processing import dynamic_range_decompression -from .stft import STFT -from .utils import get_mask_from_lengths - - -class LinearNorm(torch.nn.Module): - def __init__(self, in_dim, out_dim, bias=True, w_init_gain='linear'): - super(LinearNorm, self).__init__() - self.linear_layer = torch.nn.Linear(in_dim, out_dim, bias=bias) - - torch.nn.init.xavier_uniform_( - self.linear_layer.weight, - gain=torch.nn.init.calculate_gain(w_init_gain)) - - def forward(self, x): - return self.linear_layer(x) - - -class ConvNorm(torch.nn.Module): - def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, - padding=None, dilation=1, bias=True, w_init_gain='linear'): - super(ConvNorm, self).__init__() - if padding is None: - assert(kernel_size % 2 == 1) - padding = int(dilation * (kernel_size - 1) / 2) - - self.conv = torch.nn.Conv1d(in_channels, out_channels, - kernel_size=kernel_size, stride=stride, - padding=padding, dilation=dilation, - bias=bias) - - torch.nn.init.xavier_uniform_( - self.conv.weight, gain=torch.nn.init.calculate_gain(w_init_gain)) - - def forward(self, signal): - conv_signal = self.conv(signal) - return conv_signal - - -class GlobalAvgPool(torch.nn.Module): - def __init__(self): - super(GlobalAvgPool, self).__init__() - - def forward(self, x, lengths=None): - """Average pooling across time steps (dim=1) with optionally lengths. - Args: - x: torch.Tensor of shape (N, T, ...) 
- lengths: None or torch.Tensor of shape (N,) - dim: dimension to pool - """ - if lengths is None: - return x.mean(dim=1, keepdim=False) - else: - mask = get_mask_from_lengths(lengths).type(x.type()).to(x.device) - mask_shape = list(mask.size()) + [1 for _ in range(x.ndimension()-2)] - mask = mask.reshape(*mask_shape) - numer = (x * mask).sum(dim=1, keepdim=False) - denom = mask.sum(dim=1, keepdim=False) - return numer / denom - - -class TacotronSTFT(torch.nn.Module): - def __init__(self, filter_length=1024, hop_length=256, win_length=1024, - n_mel_channels=80, sampling_rate=22050, mel_fmin=0.0, - mel_fmax=8000.0): - super(TacotronSTFT, self).__init__() - self.n_mel_channels = n_mel_channels - self.sampling_rate = sampling_rate - self.stft_fn = STFT(filter_length, hop_length, win_length) - mel_basis = librosa_mel_fn( - sampling_rate, filter_length, n_mel_channels, mel_fmin, mel_fmax) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer('mel_basis', mel_basis) - - def spectral_normalize(self, magnitudes): - output = dynamic_range_compression(magnitudes) - return output - - def spectral_de_normalize(self, magnitudes): - output = dynamic_range_decompression(magnitudes) - return output - - def mel_spectrogram(self, y): - """Computes mel-spectrograms from a batch of waves - PARAMS - ------ - y: Variable(torch.FloatTensor) with shape (B, T) in range [-1, 1] - - RETURNS - ------- - mel_output: torch.FloatTensor of shape (B, n_mel_channels, T) - """ - assert(torch.min(y.data) >= -1) - assert(torch.max(y.data) <= 1) - - magnitudes, phases = self.stft_fn.transform(y) - magnitudes = magnitudes.data - mel_output = torch.matmul(self.mel_basis, magnitudes) - mel_output = self.spectral_normalize(mel_output) - return mel_output diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/model.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/model.py deleted file mode 100644 index ccf132b15..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/model.py +++ /dev/null @@ -1,669 +0,0 @@ -from math import sqrt -import torch -import torch.distributions as distr -from torch.autograd import Variable -from torch import nn -from torch.nn import functional as F -from .layers import ConvNorm, LinearNorm, GlobalAvgPool -from .utils import to_gpu, get_mask_from_lengths - - -class LocationLayer(nn.Module): - def __init__(self, attention_n_filters, attention_kernel_size, - attention_dim): - super(LocationLayer, self).__init__() - padding = int((attention_kernel_size - 1) / 2) - self.location_conv = ConvNorm(2, attention_n_filters, - kernel_size=attention_kernel_size, - padding=padding, bias=False, stride=1, - dilation=1) - self.location_dense = LinearNorm(attention_n_filters, attention_dim, - bias=False, w_init_gain='tanh') - - def forward(self, attention_weights_cat): - processed_attention = self.location_conv(attention_weights_cat) - processed_attention = processed_attention.transpose(1, 2) - processed_attention = self.location_dense(processed_attention) - return processed_attention - - -class Attention(nn.Module): - def __init__(self, attention_rnn_dim, embedding_dim, attention_dim, - attention_location_n_filters, attention_location_kernel_size): - super(Attention, self).__init__() - self.query_layer = LinearNorm(attention_rnn_dim, attention_dim, - bias=False, w_init_gain='tanh') - self.memory_layer = LinearNorm(embedding_dim, attention_dim, bias=False, - w_init_gain='tanh') - self.v = LinearNorm(attention_dim, 1, bias=False) - 
self.location_layer = LocationLayer(attention_location_n_filters, - attention_location_kernel_size, - attention_dim) - self.score_mask_value = -float("inf") - - def get_alignment_energies(self, query, processed_memory, - attention_weights_cat): - """ - PARAMS - ------ - query: decoder output (batch, n_mel_channels * n_frames_per_step) - processed_memory: processed encoder outputs (B, T_in, attention_dim) - attention_weights_cat: cumulative and prev. att weights (B, 2, max_time) - - RETURNS - ------- - alignment (batch, max_time) - """ - - processed_query = self.query_layer(query.unsqueeze(1)) - processed_attention_weights = self.location_layer(attention_weights_cat) - energies = self.v(torch.tanh( - processed_query + processed_attention_weights + processed_memory)) - - energies = energies.squeeze(-1) - return energies - - def forward(self, attention_hidden_state, memory, processed_memory, - attention_weights_cat, mask): - """ - PARAMS - ------ - attention_hidden_state: attention rnn last output - memory: encoder outputs - processed_memory: processed encoder outputs - attention_weights_cat: previous and cummulative attention weights - mask: binary mask for padded data - """ - alignment = self.get_alignment_energies( - attention_hidden_state, processed_memory, attention_weights_cat) - - if mask is not None: - alignment.data.masked_fill_(mask, self.score_mask_value) - - attention_weights = F.softmax(alignment, dim=1) - attention_context = torch.bmm(attention_weights.unsqueeze(1), memory) - attention_context = attention_context.squeeze(1) - - return attention_context, attention_weights - - -class Prenet(nn.Module): - def __init__(self, in_dim, sizes): - super(Prenet, self).__init__() - in_sizes = [in_dim] + sizes[:-1] - self.layers = nn.ModuleList( - [LinearNorm(in_size, out_size, bias=False) - for (in_size, out_size) in zip(in_sizes, sizes)]) - - def forward(self, x): - for linear in self.layers: - x = F.dropout(F.relu(linear(x)), p=0.5, training=True) - return x - - -class Postnet(nn.Module): - """Postnet - - Five 1-d convolution with 512 channels and kernel size 5 - """ - - def __init__(self, hparams): - super(Postnet, self).__init__() - self.convolutions = nn.ModuleList() - - self.convolutions.append( - nn.Sequential( - ConvNorm(hparams.n_mel_channels, hparams.postnet_embedding_dim, - kernel_size=hparams.postnet_kernel_size, stride=1, - padding=int((hparams.postnet_kernel_size - 1) / 2), - dilation=1, w_init_gain='tanh'), - nn.BatchNorm1d(hparams.postnet_embedding_dim)) - ) - - for i in range(1, hparams.postnet_n_convolutions - 1): - self.convolutions.append( - nn.Sequential( - ConvNorm(hparams.postnet_embedding_dim, - hparams.postnet_embedding_dim, - kernel_size=hparams.postnet_kernel_size, stride=1, - padding=int((hparams.postnet_kernel_size - 1) / 2), - dilation=1, w_init_gain='tanh'), - nn.BatchNorm1d(hparams.postnet_embedding_dim)) - ) - - self.convolutions.append( - nn.Sequential( - ConvNorm(hparams.postnet_embedding_dim, hparams.n_mel_channels, - kernel_size=hparams.postnet_kernel_size, stride=1, - padding=int((hparams.postnet_kernel_size - 1) / 2), - dilation=1, w_init_gain='linear'), - nn.BatchNorm1d(hparams.n_mel_channels)) - ) - - def forward(self, x): - for i in range(len(self.convolutions) - 1): - x = F.dropout(torch.tanh(self.convolutions[i](x)), 0.5, self.training) - x = F.dropout(self.convolutions[-1](x), 0.5, self.training) - - return x - - -class Encoder(nn.Module): - """Encoder module: - - Three 1-d convolution banks - - Bidirectional LSTM - """ - def __init__(self, 
hparams): - super(Encoder, self).__init__() - - convolutions = [] - for _ in range(hparams.encoder_n_convolutions): - conv_layer = nn.Sequential( - ConvNorm(hparams.encoder_embedding_dim, - hparams.encoder_embedding_dim, - kernel_size=hparams.encoder_kernel_size, stride=1, - padding=int((hparams.encoder_kernel_size - 1) / 2), - dilation=1, w_init_gain='relu'), - nn.BatchNorm1d(hparams.encoder_embedding_dim)) - convolutions.append(conv_layer) - self.convolutions = nn.ModuleList(convolutions) - - self.lstm = nn.LSTM(hparams.encoder_embedding_dim, - int(hparams.encoder_embedding_dim / 2), 1, - batch_first=True, bidirectional=True) - - def forward(self, x, input_lengths): - for conv in self.convolutions: - x = F.dropout(F.relu(conv(x)), 0.5, self.training) - - x = x.transpose(1, 2) - - # pytorch tensor are not reversible, hence the conversion - input_lengths = input_lengths.cpu().numpy() - x = nn.utils.rnn.pack_padded_sequence( - x, input_lengths, batch_first=True) - - self.lstm.flatten_parameters() - outputs, _ = self.lstm(x) - - outputs, _ = nn.utils.rnn.pad_packed_sequence( - outputs, batch_first=True) - - return outputs - - def inference(self, x): - for conv in self.convolutions: - x = F.dropout(F.relu(conv(x)), 0.5, self.training) - - x = x.transpose(1, 2) - - self.lstm.flatten_parameters() - outputs, _ = self.lstm(x) - - return outputs - - -class AudioEncoder(nn.Module): - def __init__(self, hparams): - super(AudioEncoder, self).__init__() - - assert hparams.lat_dim > 0 - - convolutions = [] - inp_dim = hparams.n_mel_channels - for _ in range(hparams.lat_n_convolutions): - conv_layer = nn.Sequential( - ConvNorm(inp_dim, hparams.lat_n_filters, - kernel_size=hparams.lat_kernel_size, stride=1, - padding=int((hparams.lat_kernel_size - 1) / 2), - dilation=1, w_init_gain='tanh'), - nn.BatchNorm1d(hparams.lat_n_filters)) - inp_dim = hparams.lat_n_filters - convolutions.append(conv_layer) - self.convolutions = nn.ModuleList(convolutions) - - self.lstm = nn.LSTM(hparams.lat_n_filters, - int(hparams.lat_n_filters / 2), - hparams.lat_n_blstms, batch_first=True, - bidirectional=True) - self.pool = GlobalAvgPool() - - self.mu_proj = LinearNorm(hparams.lat_n_filters, hparams.lat_dim) - self.logvar_proj = LinearNorm(hparams.lat_n_filters, hparams.lat_dim) - self.lat_dim = hparams.lat_dim - - def forward(self, x, lengths): - """ - Args: - x (torch.Tensor): (B, F, T) - """ - - for conv in self.convolutions: - x = F.dropout(F.tanh(conv(x)), 0.5, self.training) - - x = x.transpose(1, 2) # (B, T, D) - - # x may not be sorted by length. 
Sort->process->unsort - max_len = x.size(1) - assert max_len == torch.max(lengths).item() - - lengths, perm_idx = lengths.sort(0, descending=True) - x = x[perm_idx] - x = nn.utils.rnn.pack_padded_sequence(x, lengths, batch_first=True) - - self.lstm.flatten_parameters() - outputs, _ = self.lstm(x) - outputs, _ = nn.utils.rnn.pad_packed_sequence(outputs, batch_first=True) - - _, unperm_idx = perm_idx.sort(0) - outputs = outputs[unperm_idx] # (B, T, D) - lengths = lengths[unperm_idx] # (B, T, D) - - outputs = self.pool(outputs, lengths) # (B, D) - - mu = self.mu_proj(outputs) - logvar = self.logvar_proj(outputs) - z = distr.Normal(mu, logvar).rsample() - return z, mu, logvar - - -class Decoder(nn.Module): - def __init__(self, hparams): - super(Decoder, self).__init__() - self.n_mel_channels = hparams.n_mel_channels - self.n_frames_per_step = hparams.n_frames_per_step - self.encoder_embedding_dim = hparams.encoder_embedding_dim - self.obs_dim = hparams.obs_dim - self.lat_dim = hparams.lat_dim - self.attention_rnn_dim = hparams.attention_rnn_dim - self.decoder_rnn_dim = hparams.decoder_rnn_dim - self.prenet_dim = hparams.prenet_dim - self.max_decoder_steps = hparams.max_decoder_steps - self.gate_threshold = hparams.gate_threshold - self.p_attention_dropout = hparams.p_attention_dropout - self.p_decoder_dropout = hparams.p_decoder_dropout - - self.prenet = Prenet( - hparams.n_mel_channels * hparams.n_frames_per_step, - [hparams.prenet_dim, hparams.prenet_dim]) - - self.attention_rnn = nn.LSTMCell( - hparams.prenet_dim + hparams.encoder_embedding_dim, - hparams.attention_rnn_dim) - - self.attention_layer = Attention( - hparams.attention_rnn_dim, hparams.encoder_embedding_dim, - hparams.attention_dim, hparams.attention_location_n_filters, - hparams.attention_location_kernel_size) - - encoder_tot_dim = (hparams.encoder_embedding_dim + \ - hparams.lat_dim + hparams.obs_dim) - self.decoder_rnn = nn.LSTMCell( - hparams.attention_rnn_dim + encoder_tot_dim, - hparams.decoder_rnn_dim, 1) - - self.linear_projection = LinearNorm( - hparams.decoder_rnn_dim + encoder_tot_dim, - hparams.n_mel_channels * hparams.n_frames_per_step) - - self.gate_layer = LinearNorm( - hparams.decoder_rnn_dim + encoder_tot_dim, 1, - bias=True, w_init_gain='sigmoid') - - def get_go_frame(self, memory): - """ Gets all zeros frames to use as first decoder input - PARAMS - ------ - memory: decoder outputs - - RETURNS - ------- - decoder_input: all zeros frames - """ - B = memory.size(0) - decoder_input = Variable(memory.data.new( - B, self.n_mel_channels * self.n_frames_per_step).zero_()) - return decoder_input - - def initialize_decoder_states(self, memory, obs_and_lat, mask): - """ Initializes attention rnn states, decoder rnn states, attention - weights, attention cumulative weights, attention context, stores memory - and stores processed memory - PARAMS - ------ - memory: Encoder outputs - obs_and_lat: Observed and latent attribute embeddings - mask: Mask for padded data if training, expects None for inference - """ - B = memory.size(0) - MAX_TIME = memory.size(1) - - self.attention_hidden = Variable(memory.data.new( - B, self.attention_rnn_dim).zero_()) - self.attention_cell = Variable(memory.data.new( - B, self.attention_rnn_dim).zero_()) - - self.decoder_hidden = Variable(memory.data.new( - B, self.decoder_rnn_dim).zero_()) - self.decoder_cell = Variable(memory.data.new( - B, self.decoder_rnn_dim).zero_()) - - self.attention_weights = Variable(memory.data.new( - B, MAX_TIME).zero_()) - self.attention_weights_cum = 
Variable(memory.data.new( - B, MAX_TIME).zero_()) - self.attention_context = Variable(memory.data.new( - B, self.encoder_embedding_dim).zero_()) - - self.memory = memory - self.processed_memory = self.attention_layer.memory_layer(memory) - self.obs_and_lat = obs_and_lat - self.mask = mask - - def parse_decoder_inputs(self, decoder_inputs): - """ Prepares decoder inputs, i.e. mel outputs - PARAMS - ------ - decoder_inputs: inputs used for teacher-forced training, i.e. mel-specs - - RETURNS - ------- - inputs: processed decoder inputs - - """ - # (B, n_mel_channels, T_out) -> (B, T_out, n_mel_channels) - decoder_inputs = decoder_inputs.transpose(1, 2) - decoder_inputs = decoder_inputs.view( - decoder_inputs.size(0), - int(decoder_inputs.size(1)/self.n_frames_per_step), -1) - # (B, T_out, n_mel_channels) -> (T_out, B, n_mel_channels) - decoder_inputs = decoder_inputs.transpose(0, 1) - return decoder_inputs - - def parse_decoder_outputs(self, mel_outputs, gate_outputs, alignments): - """ Prepares decoder outputs for output - PARAMS - ------ - mel_outputs: - gate_outputs: gate output energies - alignments: - - RETURNS - ------- - mel_outputs: - gate_outpust: gate output energies - alignments: - """ - # (T_out, B) -> (B, T_out) - alignments = torch.stack(alignments).transpose(0, 1) - # (T_out, B) -> (B, T_out) - gate_outputs = torch.stack(gate_outputs).transpose(0, 1) - gate_outputs = gate_outputs.contiguous() - # (T_out, B, n_mel_channels) -> (B, T_out, n_mel_channels) - mel_outputs = torch.stack(mel_outputs).transpose(0, 1).contiguous() - # decouple frames per step - mel_outputs = mel_outputs.view( - mel_outputs.size(0), -1, self.n_mel_channels) - # (B, T_out, n_mel_channels) -> (B, n_mel_channels, T_out) - mel_outputs = mel_outputs.transpose(1, 2) - - return mel_outputs, gate_outputs, alignments - - def decode(self, decoder_input): - """ Decoder step using stored states, attention and memory - PARAMS - ------ - decoder_input: previous mel output - - RETURNS - ------- - mel_output: - gate_output: gate output energies - attention_weights: - """ - cell_input = torch.cat((decoder_input, self.attention_context), -1) - self.attention_hidden, self.attention_cell = self.attention_rnn( - cell_input, (self.attention_hidden, self.attention_cell)) - self.attention_hidden = F.dropout( - self.attention_hidden, self.p_attention_dropout, self.training) - - attention_weights_cat = torch.cat( - (self.attention_weights.unsqueeze(1), - self.attention_weights_cum.unsqueeze(1)), dim=1) - self.attention_context, self.attention_weights = self.attention_layer( - self.attention_hidden, self.memory, self.processed_memory, - attention_weights_cat, self.mask) - - self.attention_weights_cum += self.attention_weights - decoder_input = torch.cat( - (self.attention_hidden, self.attention_context), -1) - if self.obs_and_lat is not None: - decoder_input = torch.cat((decoder_input, self.obs_and_lat), -1) - self.decoder_hidden, self.decoder_cell = self.decoder_rnn( - decoder_input, (self.decoder_hidden, self.decoder_cell)) - self.decoder_hidden = F.dropout( - self.decoder_hidden, self.p_decoder_dropout, self.training) - - decoder_hidden_attention_context = torch.cat( - (self.decoder_hidden, self.attention_context), dim=1) - if self.obs_and_lat is not None: - decoder_hidden_attention_context = torch.cat( - (decoder_hidden_attention_context, self.obs_and_lat), dim=1) - decoder_output = self.linear_projection( - decoder_hidden_attention_context) - - gate_prediction = self.gate_layer(decoder_hidden_attention_context) - return 
decoder_output, gate_prediction, self.attention_weights - - def forward(self, memory, obs_and_lat, decoder_inputs, memory_lengths): - """ Decoder forward pass for training - PARAMS - ------ - memory: Encoder outputs - obs_and_lat: Observed and latent attribute embeddings - decoder_inputs: Decoder inputs for teacher forcing. i.e. mel-specs - memory_lengths: Encoder output lengths for attention masking. - - RETURNS - ------- - mel_outputs: mel outputs from the decoder - gate_outputs: gate outputs from the decoder - alignments: sequence of attention weights from the decoder - """ - - decoder_input = self.get_go_frame(memory).unsqueeze(0) - decoder_inputs = self.parse_decoder_inputs(decoder_inputs) - decoder_inputs = torch.cat((decoder_input, decoder_inputs), dim=0) - decoder_inputs = self.prenet(decoder_inputs) - - self.initialize_decoder_states( - memory, obs_and_lat, mask=~get_mask_from_lengths(memory_lengths)) - - mel_outputs, gate_outputs, alignments = [], [], [] - while len(mel_outputs) < decoder_inputs.size(0) - 1: - decoder_input = decoder_inputs[len(mel_outputs)] - mel_output, gate_output, attention_weights = self.decode( - decoder_input) - mel_outputs += [mel_output.squeeze(1)] - gate_outputs += [gate_output.squeeze()] - alignments += [attention_weights] - - mel_outputs, gate_outputs, alignments = self.parse_decoder_outputs( - mel_outputs, gate_outputs, alignments) - - return mel_outputs, gate_outputs, alignments - - def inference(self, memory, obs_and_lat, ret_has_eos=False): - """ Decoder inference - PARAMS - ------ - memory: Encoder outputs - obs_and_lat: Observed and latent attribute embeddings - - RETURNS - ------- - mel_outputs: mel outputs from the decoder - gate_outputs: gate outputs from the decoder - alignments: sequence of attention weights from the decoder - """ - decoder_input = self.get_go_frame(memory) - - self.initialize_decoder_states(memory, obs_and_lat, mask=None) - - mel_outputs, gate_outputs, alignments = [], [], [] - has_eos = False - while True: - decoder_input = self.prenet(decoder_input) - mel_output, gate_output, alignment = self.decode(decoder_input) - - mel_outputs += [mel_output.squeeze(1)] - gate_outputs += [gate_output] - alignments += [alignment] - - if torch.sigmoid(gate_output.data) > self.gate_threshold: - has_eos = True - break - elif len(mel_outputs) == self.max_decoder_steps: - # print("Warning! 
Reached max decoder steps") - break - - decoder_input = mel_output - - mel_outputs, gate_outputs, alignments = self.parse_decoder_outputs( - mel_outputs, gate_outputs, alignments) - - if ret_has_eos: - return mel_outputs, gate_outputs, alignments, has_eos - else: - return mel_outputs, gate_outputs, alignments - - -class Tacotron2(nn.Module): - def __init__(self, hparams): - super(Tacotron2, self).__init__() - self.mask_padding = hparams.mask_padding - self.fp16_run = hparams.fp16_run - self.n_mel_channels = hparams.n_mel_channels - self.n_frames_per_step = hparams.n_frames_per_step - - # initialize text encoder embedding - self.embedding = nn.Embedding( - hparams.n_symbols, hparams.symbols_embedding_dim) - std = sqrt(2.0 / (hparams.n_symbols + hparams.symbols_embedding_dim)) - val = sqrt(3.0) * std # uniform bounds for std - self.embedding.weight.data.uniform_(-val, val) - - # initialize observed attribute embedding - self.obs_embedding = None - if hparams.obs_dim > 0: - self.obs_embedding = nn.Embedding( - hparams.obs_n_class, hparams.obs_dim) - std = sqrt(2.0 / (hparams.obs_n_class + hparams.obs_dim)) - val = sqrt(3.0) * std # uniform bounds for std - self.obs_embedding.weight.data.uniform_(-val, val) - - self.encoder = Encoder(hparams) - self.decoder = Decoder(hparams) - self.postnet = Postnet(hparams) - - self.lat_encoder = None - if hparams.lat_dim > 0: - self.lat_encoder = AudioEncoder(hparams) - - def parse_batch(self, batch): - (text_padded, input_lengths, obs_labels, - mel_padded, gate_padded, output_lengths) = batch - text_padded = to_gpu(text_padded).long() - input_lengths = to_gpu(input_lengths).long() - obs_labels = to_gpu(obs_labels).long() - max_len = torch.max(input_lengths.data).item() - mel_padded = to_gpu(mel_padded).float() - gate_padded = to_gpu(gate_padded).float() - output_lengths = to_gpu(output_lengths).long() - - return ( - (text_padded, input_lengths, obs_labels, - mel_padded, max_len, output_lengths), - (mel_padded, gate_padded)) - - def parse_output(self, outputs, output_lengths=None): - if self.mask_padding and output_lengths is not None: - mask = ~get_mask_from_lengths(output_lengths) - mask = mask.expand(self.n_mel_channels, mask.size(0), mask.size(1)) - mask = mask.permute(1, 0, 2) - - outputs[0].data.masked_fill_(mask, 0.0) - outputs[1].data.masked_fill_(mask, 0.0) - outputs[2].data.masked_fill_(mask[:, 0, :], 1e3) # gate energies - - return outputs - - def forward(self, inputs): - (text_inputs, text_lengths, obs_labels, - mels, max_len, output_lengths) = inputs - text_lengths, output_lengths = text_lengths.data, output_lengths.data - - embedded_inputs = self.embedding(text_inputs).transpose(1, 2) - - encoder_outputs = self.encoder(embedded_inputs, text_lengths) - - obs = None - if self.obs_embedding is not None: - obs = self.obs_embedding(obs_labels) - - lat, lat_mu, lat_logvar = None, None, None - if self.lat_encoder is not None: - (lat, lat_mu, lat_logvar) = self.lat_encoder(mels, output_lengths) - - obs_and_lat = [x for x in [obs, lat] if x is not None] - if bool(obs_and_lat): - obs_and_lat = torch.cat(obs_and_lat, dim=-1) - else: - obs_and_lat = None - - mel_outputs, gate_outputs, alignments = self.decoder( - encoder_outputs, obs_and_lat, mels, memory_lengths=text_lengths) - - mel_outputs_postnet = self.postnet(mel_outputs) - mel_outputs_postnet = mel_outputs + mel_outputs_postnet - - return self.parse_output( - [mel_outputs, mel_outputs_postnet, gate_outputs, alignments, - lat_mu, lat_logvar], - output_lengths) - - def inference(self, inputs, 
obs_labels=None, lat=None, ret_has_eos=False): - embedded_inputs = self.embedding(inputs).transpose(1, 2) - encoder_outputs = self.encoder.inference(embedded_inputs) - - if obs_labels is None: - obs_labels = torch.LongTensor(len(inputs)) - obs_labels = obs_labels.to(inputs.device).zero_() - - obs = None - if self.obs_embedding is not None: - obs = self.obs_embedding(obs_labels) - - if self.lat_encoder is not None: - if lat is None: - lat = torch.FloatTensor(len(inputs), self.lat_encoder.lat_dim) - lat = lat.to(inputs.device).zero_().type(encoder_outputs.type()) - - obs_and_lat = [x for x in [obs, lat] if x is not None] - if bool(obs_and_lat): - obs_and_lat = torch.cat(obs_and_lat, dim=-1) - else: - obs_and_lat = None - - mel_outputs, gate_outputs, alignments, has_eos = self.decoder.inference( - encoder_outputs, obs_and_lat, ret_has_eos=True) - - mel_outputs_postnet = self.postnet(mel_outputs) - mel_outputs_postnet = mel_outputs + mel_outputs_postnet - - outputs = self.parse_output( - [mel_outputs, mel_outputs_postnet, gate_outputs, alignments]) - - if ret_has_eos: - return outputs + [has_eos] - else: - return outputs diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/numbers.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/numbers.py deleted file mode 100644 index 0d5f7fa81..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/numbers.py +++ /dev/null @@ -1,71 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -import inflect -import re - - -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, 
text) - text = re.sub(_number_re, _expand_number, text) - return text diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/stft.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/stft.py deleted file mode 100644 index 63fcd431e..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/stft.py +++ /dev/null @@ -1,141 +0,0 @@ -""" -BSD 3-Clause License - -Copyright (c) 2017, Prem Seetharaman -All rights reserved. - -* Redistribution and use in source and binary forms, with or without - modification, are permitted provided that the following conditions are met: - -* Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - -* Redistributions in binary form must reproduce the above copyright notice, this - list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - -* Neither the name of the copyright holder nor the names of its - contributors may be used to endorse or promote products derived from this - software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR -ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON -ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-""" - -import torch -import numpy as np -import torch.nn.functional as F -from torch.autograd import Variable -from scipy.signal import get_window -from librosa.util import pad_center, tiny -from .audio_processing import window_sumsquare - - -class STFT(torch.nn.Module): - """adapted from Prem Seetharaman's https://github.com/pseeth/pytorch-stft""" - def __init__(self, filter_length=800, hop_length=200, win_length=800, - window='hann'): - super(STFT, self).__init__() - self.filter_length = filter_length - self.hop_length = hop_length - self.win_length = win_length - self.window = window - self.forward_transform = None - scale = self.filter_length / self.hop_length - fourier_basis = np.fft.fft(np.eye(self.filter_length)) - - cutoff = int((self.filter_length / 2 + 1)) - fourier_basis = np.vstack([np.real(fourier_basis[:cutoff, :]), - np.imag(fourier_basis[:cutoff, :])]) - - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - inverse_basis = torch.FloatTensor( - np.linalg.pinv(scale * fourier_basis).T[:, None, :]) - - if window is not None: - assert(filter_length >= win_length) - # get window and zero center pad it to filter_length - fft_window = get_window(window, win_length, fftbins=True) - fft_window = pad_center(fft_window, filter_length) - fft_window = torch.from_numpy(fft_window).float() - - # window the bases - forward_basis *= fft_window - inverse_basis *= fft_window - - self.register_buffer('forward_basis', forward_basis.float()) - self.register_buffer('inverse_basis', inverse_basis.float()) - - def transform(self, input_data): - num_batches = input_data.size(0) - num_samples = input_data.size(1) - - self.num_samples = num_samples - - # similar to librosa, reflect-pad the input - input_data = input_data.view(num_batches, 1, num_samples) - input_data = F.pad( - input_data.unsqueeze(1), - (int(self.filter_length / 2), int(self.filter_length / 2), 0, 0), - mode='reflect') - input_data = input_data.squeeze(1) - - forward_transform = F.conv1d( - input_data, - Variable(self.forward_basis, requires_grad=False), - stride=self.hop_length, - padding=0) - - cutoff = int((self.filter_length / 2) + 1) - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - - magnitude = torch.sqrt(real_part**2 + imag_part**2) - phase = torch.autograd.Variable( - torch.atan2(imag_part.data, real_part.data)) - - return magnitude, phase - - def inverse(self, magnitude, phase): - recombine_magnitude_phase = torch.cat( - [magnitude*torch.cos(phase), magnitude*torch.sin(phase)], dim=1) - - inverse_transform = F.conv_transpose1d( - recombine_magnitude_phase, - Variable(self.inverse_basis, requires_grad=False), - stride=self.hop_length, - padding=0) - - if self.window is not None: - window_sum = window_sumsquare( - self.window, magnitude.size(-1), hop_length=self.hop_length, - win_length=self.win_length, n_fft=self.filter_length, - dtype=np.float32) - # remove modulation effects - approx_nonzero_indices = torch.from_numpy( - np.where(window_sum > tiny(window_sum))[0]) - window_sum = torch.autograd.Variable( - torch.from_numpy(window_sum), requires_grad=False) - window_sum = window_sum.cuda() if magnitude.is_cuda else window_sum - inverse_transform[:, :, approx_nonzero_indices] /= window_sum[approx_nonzero_indices] - - # scale by hop ratio - inverse_transform *= float(self.filter_length) / self.hop_length - - inverse_transform = inverse_transform[:, :, int(self.filter_length/2):] - inverse_transform = inverse_transform[:, :, :-int(self.filter_length/2):] - - return 
inverse_transform - - def forward(self, input_data): - self.magnitude, self.phase = self.transform(input_data) - reconstruction = self.inverse(self.magnitude, self.phase) - return reconstruction diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/symbols.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/symbols.py deleted file mode 100644 index 5f0d70fda..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/symbols.py +++ /dev/null @@ -1,18 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Defines the set of symbols used in text input to the model. - -The default is a set of ASCII characters that works well for English or text that has been run through Unidecode. For other data, you can modify _characters. See TRAINING_DATA.md for details. ''' -from . import cmudict - -_pad = '_' -_punctuation = '!\'(),.:;? ' -_special = '-' -_letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' - -# Prepend "@" to ARPAbet symbols to ensure uniqueness (some are the same as uppercase letters): -_arpabet = ['@' + s for s in cmudict.valid_symbols] - -# Export all symbols: -symbols = [_pad] + list(_special) + list(_punctuation) + list(_letters) + _arpabet diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/text.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/text.py deleted file mode 100644 index 49e2ca498..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/text.py +++ /dev/null @@ -1,107 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -import numpy as np -import re -from . import cleaners -from .symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - -# Regular expression matching text enclosed in curly braces: -_curly_re = re.compile(r'(.*?)\{(.+?)\}(.*)') - -# Special symbols -SOS_TOK = '<s>' -EOS_TOK = '</s>' - -def text_to_sequence(text, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - - The text can optionally have ARPAbet sequences enclosed in curly braces embedded - in it. For example, "Turn left on {HH AW1 S S T AH0 N} Street." 
-
-    Args:
-      text: string to convert to a sequence
-      cleaner_names: names of the cleaner functions to run the text through
-
-    Returns:
-      List of integers corresponding to the symbols in the text
-  '''
-  sequence = []
-
-  # Check for curly braces and treat their contents as ARPAbet:
-  while len(text):
-    m = _curly_re.match(text)
-    if not m:
-      sequence += _symbols_to_sequence(_clean_text(text, cleaner_names))
-      break
-    sequence += _symbols_to_sequence(_clean_text(m.group(1), cleaner_names))
-    sequence += _arpabet_to_sequence(m.group(2))
-    text = m.group(3)
-
-  return sequence
-
-
-def sample_code_chunk(code, size):
-  assert(size > 0 and size <= len(code))
-  start = np.random.randint(len(code) - size + 1)
-  end = start + size
-  return code[start:end], start, end
-
-
-def code_to_sequence(code, code_dict, collapse_code):
-  if collapse_code:
-    prev_c = None
-    sequence = []
-    for c in code:
-      if c in code_dict and c != prev_c:
-        sequence.append(code_dict[c])
-        prev_c = c
-  else:
-    sequence = [code_dict[c] for c in code if c in code_dict]
-    if len(sequence) < 0.95 * len(code):
-      print('WARNING: over 5% of codes are OOV')
-
-  return sequence
-
-
-def sequence_to_text(sequence):
-  '''Converts a sequence of IDs back to a string'''
-  result = ''
-  for symbol_id in sequence:
-    if symbol_id in _id_to_symbol:
-      s = _id_to_symbol[symbol_id]
-      # Enclose ARPAbet back in curly braces:
-      if len(s) > 1 and s[0] == '@':
-        s = '{%s}' % s[1:]
-      result += s
-  return result.replace('}{', ' ')
-
-
-def sequence_to_code(sequence, code_dict):
-  '''Analogous to sequence_to_text'''
-  id_to_code = {i: c for c, i in code_dict.items()}
-  return ' '.join([id_to_code[i] for i in sequence])
-
-
-def _clean_text(text, cleaner_names):
-  for name in cleaner_names:
-    cleaner = getattr(cleaners, name)
-    if not cleaner:
-      raise Exception('Unknown cleaner: %s' % name)
-    text = cleaner(text)
-  return text
-
-
-def _symbols_to_sequence(symbols):
-  return [_symbol_to_id[s] for s in symbols if _should_keep_symbol(s)]
-
-
-def _arpabet_to_sequence(text):
-  return _symbols_to_sequence(['@' + s for s in text.split()])
-
-
-def _should_keep_symbol(s):
-  return s in _symbol_to_id and s != '_' and s != '~'
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/utils.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/utils.py
deleted file mode 100644
index b72ae0e35..000000000
--- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/utils.py
+++ /dev/null
@@ -1,171 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
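The curly-brace ARPAbet convention implemented in `text.py` above is easiest to see end to end. A minimal sketch, assuming the `unit2speech` package and its bundled `cmudict` data are importable:

```python
from examples.textless_nlp.gslm.unit2speech.tacotron2.text import (
    sequence_to_text,
    text_to_sequence,
)

# Braced spans are treated as ARPAbet and mapped to '@'-prefixed symbols;
# everything else goes through the named cleaners (which lowercase English).
ids = text_to_sequence("Turn left on {HH AW1 S S T AH0 N} Street.", ["english_cleaners"])
print(sequence_to_text(ids))  # turn left on {HH AW1 S S T AH0 N} street.
```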
- -import collections -import io -import json -import librosa -import numpy as np -import soundfile as sf -import time -import torch -from scipy.io.wavfile import read -from .text import SOS_TOK, EOS_TOK - - -def get_mask_from_lengths(lengths): - max_len = torch.max(lengths).item() - ids = torch.arange(0, max_len, out=torch.cuda.LongTensor(max_len)) - mask = (ids < lengths.unsqueeze(1)) - return mask - - -def load_wav_to_torch(full_path, sr=None): - data, sr = librosa.load(full_path, sr=sr) - data = np.clip(data, -1, 1) # potentially out of [-1, 1] due to resampling - data = data * 32768.0 # match values loaded by scipy - return torch.FloatTensor(data.astype(np.float32)), sr - - -def read_binary_audio(bin_data, tar_sr=None): - """ - read binary audio (`bytes` or `uint8` `numpy.ndarray`) to `float32` - `numpy.ndarray` - - RETURNS: - data (np.ndarray) : audio of shape (n,) or (2, n) - tar_sr (int) : sample rate - """ - data, ori_sr = sf.read(io.BytesIO(bin_data), dtype='float32') - data = data.T - if (tar_sr is not None) and (ori_sr != tar_sr): - data = librosa.resample(data, ori_sr, tar_sr) - else: - tar_sr = ori_sr - data = np.clip(data, -1, 1) - data = data * 32768.0 - return torch.FloatTensor(data.astype(np.float32)), tar_sr - - -def load_filepaths_and_text(filename): - with open(filename, encoding='utf-8') as f: - data = [json.loads(line.rstrip()) for line in f] - return data - - -def to_gpu(x): - x = x.contiguous() - - if torch.cuda.is_available(): - x = x.cuda(non_blocking=True) - return torch.autograd.Variable(x) - - -def load_code_dict(path, add_sos=False, add_eos=False): - if not path: - return {} - - with open(path, 'r') as f: - codes = ['_'] + [line.rstrip() for line in f] # '_' for pad - code_dict = {c: i for i, c in enumerate(codes)} - - if add_sos: - code_dict[SOS_TOK] = len(code_dict) - if add_eos: - code_dict[EOS_TOK] = len(code_dict) - assert(set(code_dict.values()) == set(range(len(code_dict)))) - - return code_dict - - -def load_obs_label_dict(path): - if not path: - return {} - with open(path, 'r') as f: - obs_labels = [line.rstrip() for line in f] - return {c: i for i, c in enumerate(obs_labels)} - - -# A simple timer class inspired from `tnt.TimeMeter` -class CudaTimer: - def __init__(self, keys): - self.keys = keys - self.reset() - - def start(self, key): - s = torch.cuda.Event(enable_timing=True) - s.record() - self.start_events[key].append(s) - return self - - def stop(self, key): - e = torch.cuda.Event(enable_timing=True) - e.record() - self.end_events[key].append(e) - return self - - def reset(self): - self.start_events = collections.defaultdict(list) - self.end_events = collections.defaultdict(list) - self.running_times = collections.defaultdict(float) - self.n = collections.defaultdict(int) - return self - - def value(self): - self._synchronize() - return {k: self.running_times[k] / self.n[k] for k in self.keys} - - def _synchronize(self): - torch.cuda.synchronize() - for k in self.keys: - starts = self.start_events[k] - ends = self.end_events[k] - if len(starts) == 0: - raise ValueError("Trying to divide by zero in TimeMeter") - if len(ends) != len(starts): - raise ValueError("Call stop before checking value!") - time = 0 - for start, end in zip(starts, ends): - time += start.elapsed_time(end) - self.running_times[k] += time * 1e-3 - self.n[k] += len(starts) - self.start_events = collections.defaultdict(list) - self.end_events = collections.defaultdict(list) - - -# Used to measure the time taken for multiple events -class Timer: - def __init__(self, keys): - 
        self.keys = keys
-        self.n = {}
-        self.running_time = {}
-        self.total_time = {}
-        self.reset()
-
-    def start(self, key):
-        self.running_time[key] = time.time()
-        return self
-
-    def stop(self, key):
-        self.total_time[key] = time.time() - self.running_time[key]
-        self.n[key] += 1
-        self.running_time[key] = None
-        return self
-
-    def reset(self):
-        for k in self.keys:
-            self.total_time[k] = 0
-            self.running_time[k] = None
-            self.n[k] = 0
-        return self
-
-    def value(self):
-        vals = {}
-        for k in self.keys:
-            if self.n[k] == 0:
-                raise ValueError("Trying to divide by zero in TimeMeter")
-            else:
-                vals[k] = self.total_time[k] / self.n[k]
-        return vals
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/waveglow_denoiser.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/waveglow_denoiser.py
deleted file mode 100644
index 6a6585e8b..000000000
--- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/waveglow_denoiser.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# import sys
-# sys.path.append('tacotron2')
-import torch
-from .layers import STFT
-
-
-class Denoiser(torch.nn.Module):
-    """ Removes model bias from audio produced with waveglow """
-
-    def __init__(self, waveglow, filter_length=1024, n_overlap=4,
-                 win_length=1024, mode='zeros'):
-        super(Denoiser, self).__init__()
-        self.stft = STFT(filter_length=filter_length,
-                         hop_length=int(filter_length/n_overlap),
-                         win_length=win_length).cuda()
-        if mode == 'zeros':
-            mel_input = torch.zeros(
-                (1, 80, 88),
-                dtype=waveglow.upsample.weight.dtype,
-                device=waveglow.upsample.weight.device)
-        elif mode == 'normal':
-            mel_input = torch.randn(
-                (1, 80, 88),
-                dtype=waveglow.upsample.weight.dtype,
-                device=waveglow.upsample.weight.device)
-        else:
-            raise Exception("Mode {} is not supported".format(mode))
-
-        with torch.no_grad():
-            bias_audio = waveglow.infer(mel_input, sigma=0.0).float()
-            bias_spec, _ = self.stft.transform(bias_audio)
-
-        self.register_buffer('bias_spec', bias_spec[:, :, 0][:, :, None])
-
-    def forward(self, audio, strength=0.1):
-        audio_spec, audio_angles = self.stft.transform(audio.cuda().float())
-        audio_spec_denoised = audio_spec - self.bias_spec * strength
-        audio_spec_denoised = torch.clamp(audio_spec_denoised, 0.0)
-        audio_denoised = self.stft.inverse(audio_spec_denoised, audio_angles)
-        return audio_denoised
diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tts_data.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tts_data.py
deleted file mode 100644
index d2b04c0fe..000000000
--- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/tts_data.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
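The `Timer` class above keeps a per-key running average over start/stop pairs (`CudaTimer` does the same with CUDA events and a final synchronize). A small usage sketch, with a made-up key name:

```python
import time

from examples.textless_nlp.gslm.unit2speech.tacotron2.utils import Timer

# Each start()/stop() pair adds one sample; value() returns mean seconds per key.
timer = Timer(["synthesize"])
for _ in range(3):
    timer.start("synthesize")
    time.sleep(0.01)  # stand-in for real work
    timer.stop("synthesize")
print(timer.value())  # {'synthesize': ~0.01}
```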
- - -import torch -import numpy as np -from examples.textless_nlp.gslm.unit2speech.tacotron2.text import ( - EOS_TOK, - SOS_TOK, - code_to_sequence, - text_to_sequence, -) -from examples.textless_nlp.gslm.unit2speech.tacotron2.utils import ( - load_code_dict, -) - - -class TacotronInputDataset: - def __init__(self, hparams, append_str=""): - self.is_text = getattr(hparams, "text_or_code", "text") == "text" - if not self.is_text: - self.code_dict = load_code_dict( - hparams.code_dict, hparams.add_sos, hparams.add_eos - ) - self.code_key = hparams.code_key - self.add_sos = hparams.add_sos - self.add_eos = hparams.add_eos - self.collapse_code = hparams.collapse_code - self.append_str = append_str - - def process_code(self, inp_str): - inp_toks = inp_str.split() - if self.add_sos: - inp_toks = [SOS_TOK] + inp_toks - if self.add_eos: - inp_toks = inp_toks + [EOS_TOK] - return code_to_sequence(inp_toks, self.code_dict, self.collapse_code) - - def process_text(self, inp_str): - return text_to_sequence(inp_str, ["english_cleaners"]) - - def get_tensor(self, inp_str): - # uid, txt, inp_str = self._get_data(idx) - inp_str = inp_str + self.append_str - if self.is_text: - inp_toks = self.process_text(inp_str) - else: - inp_toks = self.process_code(inp_str) - return torch.from_numpy(np.array(inp_toks)).long() - - def __len__(self): - return len(self.data) diff --git a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/utils.py b/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/utils.py deleted file mode 100644 index 7aced08d3..000000000 --- a/kosmos-g/fairseq/examples/textless_nlp/gslm/unit2speech/utils.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-
-
-import torch
-from examples.textless_nlp.gslm.unit2speech.tacotron2.model import Tacotron2
-from examples.textless_nlp.gslm.unit2speech.tacotron2.waveglow_denoiser import (
-    Denoiser,
-)
-
-
-def load_quantized_audio_from_file(file_path):
-    base_fname_batch, quantized_units_batch = [], []
-    with open(file_path) as f:
-        for line in f:
-            base_fname, quantized_units_str = line.rstrip().split("|")
-            quantized_units = [int(q) for q in quantized_units_str.split(" ")]
-            base_fname_batch.append(base_fname)
-            quantized_units_batch.append(quantized_units)
-    return base_fname_batch, quantized_units_batch
-
-
-def synthesize_audio(model, waveglow, denoiser, inp, lab=None, strength=0.0):
-    assert inp.size(0) == 1
-    inp = inp.cuda()
-    if lab is not None:
-        lab = torch.LongTensor(1).cuda().fill_(lab)
-
-    with torch.no_grad():
-        _, mel, _, ali, has_eos = model.inference(inp, lab, ret_has_eos=True)
-        aud = waveglow.infer(mel, sigma=0.666)
-        aud_dn = denoiser(aud, strength=strength).squeeze(1)
-    return mel, aud, aud_dn, has_eos
-
-
-def load_tacotron(tacotron_model_path, max_decoder_steps):
-    ckpt_dict = torch.load(tacotron_model_path)
-    hparams = ckpt_dict["hparams"]
-    hparams.max_decoder_steps = max_decoder_steps
-    sr = hparams.sampling_rate
-    model = Tacotron2(hparams)
-    model.load_state_dict(ckpt_dict["model_dict"])
-    model = model.cuda().eval().half()
-    return model, sr, hparams
-
-
-def load_waveglow(waveglow_path):
-    waveglow = torch.load(waveglow_path)["model"]
-    waveglow = waveglow.cuda().eval().half()
-    for k in waveglow.convinv:
-        k.float()
-    denoiser = Denoiser(waveglow)
-    return waveglow, denoiser
diff --git a/kosmos-g/fairseq/examples/textless_nlp/speech-resynth/README.md b/kosmos-g/fairseq/examples/textless_nlp/speech-resynth/README.md
deleted file mode 100644
index a099682cd..000000000
--- a/kosmos-g/fairseq/examples/textless_nlp/speech-resynth/README.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-# Speech Resynthesis from Discrete Disentangled Self-Supervised Representations
-Landing page with useful resources for the [Speech Resynthesis from Discrete Disentangled Self-Supervised Representations](https://arxiv.org/abs/2104.00355) paper.
-
-<p align="center"><img width="70%" src="img/fig.png" /></p>
-
-__Abstract__: We propose using self-supervised discrete representations for the task of speech resynthesis. To generate disentangled representations, we separately extract low-bitrate representations for speech content, prosodic information, and speaker identity. This allows us to synthesize speech in a controllable manner. We analyze various state-of-the-art, self-supervised representation learning methods and shed light on the advantages of each method while considering reconstruction quality and disentanglement properties. Specifically, we evaluate the F0 reconstruction, speaker identification performance (for both resynthesis and voice conversion), recordings' intelligibility, and overall quality using subjective human evaluation. Lastly, we demonstrate how these representations can be used for an ultra-lightweight speech codec. Using the obtained representations, we can achieve a rate of 365 bits per second while providing better speech quality than the baseline methods.
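The 365 bps figure can be sanity-checked with back-of-envelope arithmetic; the unit rate and codebook sizes below are illustrative assumptions, not numbers taken from this page:

```python
import math

# Illustrative only: assume 50 Hz content units over a 100-symbol codebook,
# plus a low-rate quantized F0 stream; the paper's exact breakdown may differ.
content_bps = 50 * math.log2(100)   # ~332 bps for content units
prosody_bps = 6.25 * math.log2(32)  # ~31 bps at a much lower frame rate
print(round(content_bps + prosody_bps))  # ~363, in the ballpark of 365 bps
```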
- - -## Quick Links -- [Paper](https://arxiv.org/pdf/2104.00355.pdf) -- [Samples](https://speechbot.github.io/resynthesis/index.html) -- [Code](https://github.com/facebookresearch/speech-resynthesis) - -The codebase for the [Speech Resynthesis from Discrete Disentangled Self-Supervised Representations](https://arxiv.org/abs/2104.00355) paper can be found under the following [repository](https://github.com/facebookresearch/speech-resynthesis). - - -## Citation -``` -@inproceedings{polyak21_interspeech, - author={Adam Polyak and Yossi Adi and Jade Copet and - Eugene Kharitonov and Kushal Lakhotia and - Wei-Ning Hsu and Abdelrahman Mohamed and Emmanuel Dupoux}, - title={{Speech Resynthesis from Discrete Disentangled Self-Supervised Representations}}, - year=2021, - booktitle={Proc. Interspeech 2021}, -} -``` diff --git a/kosmos-g/fairseq/examples/textless_nlp/speech-resynth/img/fig.png b/kosmos-g/fairseq/examples/textless_nlp/speech-resynth/img/fig.png deleted file mode 100644 index 585bbbce1..000000000 Binary files a/kosmos-g/fairseq/examples/textless_nlp/speech-resynth/img/fig.png and /dev/null differ diff --git a/kosmos-g/fairseq/examples/translation/README.md b/kosmos-g/fairseq/examples/translation/README.md deleted file mode 100644 index 2941f5eb8..000000000 --- a/kosmos-g/fairseq/examples/translation/README.md +++ /dev/null @@ -1,301 +0,0 @@ -# Neural Machine Translation - -This README contains instructions for [using pretrained translation models](#example-usage-torchhub) -as well as [training new models](#training-a-new-model). - -## Pre-trained models - -Model | Description | Dataset | Download ----|---|---|--- -`conv.wmt14.en-fr` | Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2) <br> newstest2012/2013: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.ntst1213.tar.bz2) -`conv.wmt14.en-de` | Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-German](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-de.fconv-py.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-de.newstest2014.tar.bz2) -`conv.wmt17.en-de` | Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT17 English-German](http://statmt.org/wmt17/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt17.v2.en-de.fconv-py.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.v2.en-de.newstest2014.tar.bz2) -`transformer.wmt14.en-fr` | Transformer <br> ([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-fr.joined-dict.transformer.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2) -`transformer.wmt16.en-de` | Transformer <br> ([Ott et al., 
2018](https://arxiv.org/abs/1806.00187)) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt16.en-de.joined-dict.transformer.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2) -`transformer.wmt18.en-de` | Transformer <br> ([Edunov et al., 2018](https://arxiv.org/abs/1808.09381)) <br> WMT'18 winner | [WMT'18 English-German](http://www.statmt.org/wmt18/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt18.en-de.ensemble.tar.gz) <br> See NOTE in the archive -`transformer.wmt19.en-de` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 English-German](http://www.statmt.org/wmt19/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.joined-dict.ensemble.tar.gz) -`transformer.wmt19.de-en` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 German-English](http://www.statmt.org/wmt19/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.joined-dict.ensemble.tar.gz) -`transformer.wmt19.en-ru` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 English-Russian](http://www.statmt.org/wmt19/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.ensemble.tar.gz) -`transformer.wmt19.ru-en` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 Russian-English](http://www.statmt.org/wmt19/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.ensemble.tar.gz) - -## Example usage (torch.hub) - -We require a few additional Python dependencies for preprocessing: -```bash -pip install fastBPE sacremoses subword_nmt -``` - -Interactive translation via PyTorch Hub: -```python -import torch - -# List available models -torch.hub.list('pytorch/fairseq') # [..., 'transformer.wmt16.en-de', ... ] - -# Load a transformer trained on WMT'16 En-De -# Note: WMT'19 models use fastBPE instead of subword_nmt, see instructions below -en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt16.en-de', - tokenizer='moses', bpe='subword_nmt') -en2de.eval() # disable dropout - -# The underlying model is available under the *models* attribute -assert isinstance(en2de.models[0], fairseq.models.transformer.TransformerModel) - -# Move model to GPU for faster translation -en2de.cuda() - -# Translate a sentence -en2de.translate('Hello world!') -# 'Hallo Welt!' 
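# translate() also forwards generation keyword arguments to the decoder;
# `beam` and `verbose` below are part of the standard fairseq hub interface
# (verify any other kwargs against your fairseq version before relying on them).
en2de.translate('Hello world!', beam=10, verbose=True)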
-
-# Batched translation
-en2de.translate(['Hello world!', 'The cat sat on the mat.'])
-# ['Hallo Welt!', 'Die Katze saß auf der Matte.']
-```
-
-Loading custom models:
-```python
-from fairseq.models.transformer import TransformerModel
-zh2en = TransformerModel.from_pretrained(
-    '/path/to/checkpoints',
-    checkpoint_file='checkpoint_best.pt',
-    data_name_or_path='data-bin/wmt17_zh_en_full',
-    bpe='subword_nmt',
-    bpe_codes='data-bin/wmt17_zh_en_full/zh.code'
-)
-zh2en.translate('你好 世界')
-# 'Hello World'
-```
-
-If you are using one of the `transformer.wmt19` models, you will need to set the `bpe`
-argument to `'fastbpe'` and (optionally) load the 4-model ensemble:
-```python
-en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de',
-                       checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt',
-                       tokenizer='moses', bpe='fastbpe')
-en2de.eval()  # disable dropout
-```
-
-## Example usage (CLI tools)
-
-Generation with the binarized test sets can be run in batch mode as follows, e.g. for WMT 2014 English-French on a GTX-1080ti:
-```bash
-mkdir -p data-bin
-curl https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2 | tar xvjf - -C data-bin
-curl https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2 | tar xvjf - -C data-bin
-fairseq-generate data-bin/wmt14.en-fr.newstest2014 \
-    --path data-bin/wmt14.en-fr.fconv-py/model.pt \
-    --beam 5 --batch-size 128 --remove-bpe | tee /tmp/gen.out
-# ...
-# | Translated 3003 sentences (96311 tokens) in 166.0s (580.04 tokens/s)
-# | Generate test with beam=5: BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787)
-
-# Compute BLEU score
-grep ^H /tmp/gen.out | cut -f3- > /tmp/gen.out.sys
-grep ^T /tmp/gen.out | cut -f2- > /tmp/gen.out.ref
-fairseq-score --sys /tmp/gen.out.sys --ref /tmp/gen.out.ref
-# BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787)
-```
-
-## Training a new model
-
-### IWSLT'14 German to English (Transformer)
-
-The following instructions can be used to train a Transformer model on the [IWSLT'14 German to English dataset](http://workshop2014.iwslt.org/downloads/proceeding.pdf).
-
-First download and preprocess the data:
-```bash
-# Download and prepare the data
-cd examples/translation/
-bash prepare-iwslt14.sh
-cd ../..
- -# Preprocess/binarize the data -TEXT=examples/translation/iwslt14.tokenized.de-en -fairseq-preprocess --source-lang de --target-lang en \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/iwslt14.tokenized.de-en \ - --workers 20 -``` - -Next we'll train a Transformer translation model over this data: -```bash -CUDA_VISIBLE_DEVICES=0 fairseq-train \ - data-bin/iwslt14.tokenized.de-en \ - --arch transformer_iwslt_de_en --share-decoder-input-output-embed \ - --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \ - --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --dropout 0.3 --weight-decay 0.0001 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --max-tokens 4096 \ - --eval-bleu \ - --eval-bleu-args '{"beam": 5, "max_len_a": 1.2, "max_len_b": 10}' \ - --eval-bleu-detok moses \ - --eval-bleu-remove-bpe \ - --eval-bleu-print-samples \ - --best-checkpoint-metric bleu --maximize-best-checkpoint-metric -``` - -Finally we can evaluate our trained model: -```bash -fairseq-generate data-bin/iwslt14.tokenized.de-en \ - --path checkpoints/checkpoint_best.pt \ - --batch-size 128 --beam 5 --remove-bpe -``` - -### WMT'14 English to German (Convolutional) - -The following instructions can be used to train a Convolutional translation model on the WMT English to German dataset. -See the [Scaling NMT README](../scaling_nmt/README.md) for instructions to train a Transformer translation model on this data. - -The WMT English to German dataset can be preprocessed using the `prepare-wmt14en2de.sh` script. -By default it will produce a dataset that was modeled after [Attention Is All You Need (Vaswani et al., 2017)](https://arxiv.org/abs/1706.03762), but with additional news-commentary-v12 data from WMT'17. - -To use only data available in WMT'14 or to replicate results obtained in the original [Convolutional Sequence to Sequence Learning (Gehring et al., 2017)](https://arxiv.org/abs/1705.03122) paper, please use the `--icml17` option. - -```bash -# Download and prepare the data -cd examples/translation/ -# WMT'17 data: -bash prepare-wmt14en2de.sh -# or to use WMT'14 data: -# bash prepare-wmt14en2de.sh --icml17 -cd ../.. - -# Binarize the dataset -TEXT=examples/translation/wmt17_en_de -fairseq-preprocess \ - --source-lang en --target-lang de \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/wmt17_en_de --thresholdtgt 0 --thresholdsrc 0 \ - --workers 20 - -# Train the model -mkdir -p checkpoints/fconv_wmt_en_de -fairseq-train \ - data-bin/wmt17_en_de \ - --arch fconv_wmt_en_de \ - --dropout 0.2 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --optimizer nag --clip-norm 0.1 \ - --lr 0.5 --lr-scheduler fixed --force-anneal 50 \ - --max-tokens 4000 \ - --save-dir checkpoints/fconv_wmt_en_de - -# Evaluate -fairseq-generate data-bin/wmt17_en_de \ - --path checkpoints/fconv_wmt_en_de/checkpoint_best.pt \ - --beam 5 --remove-bpe -``` - -### WMT'14 English to French -```bash -# Download and prepare the data -cd examples/translation/ -bash prepare-wmt14en2fr.sh -cd ../.. 
- -# Binarize the dataset -TEXT=examples/translation/wmt14_en_fr -fairseq-preprocess \ - --source-lang en --target-lang fr \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/wmt14_en_fr --thresholdtgt 0 --thresholdsrc 0 \ - --workers 60 - -# Train the model -mkdir -p checkpoints/fconv_wmt_en_fr -fairseq-train \ - data-bin/wmt14_en_fr \ - --arch fconv_wmt_en_fr \ - --dropout 0.1 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --optimizer nag --clip-norm 0.1 \ - --lr 0.5 --lr-scheduler fixed --force-anneal 50 \ - --max-tokens 3000 \ - --save-dir checkpoints/fconv_wmt_en_fr - -# Evaluate -fairseq-generate \ - data-bin/fconv_wmt_en_fr \ - --path checkpoints/fconv_wmt_en_fr/checkpoint_best.pt \ - --beam 5 --remove-bpe -``` - -## Multilingual Translation - -We also support training multilingual translation models. In this example we'll -train a multilingual `{de,fr}-en` translation model using the IWSLT'17 datasets. - -Note that we use slightly different preprocessing here than for the IWSLT'14 -En-De data above. In particular we learn a joint BPE code for all three -languages and use fairseq-interactive and sacrebleu for scoring the test set. - -```bash -# First install sacrebleu and sentencepiece -pip install sacrebleu sentencepiece - -# Then download and preprocess the data -cd examples/translation/ -bash prepare-iwslt17-multilingual.sh -cd ../.. - -# Binarize the de-en dataset -TEXT=examples/translation/iwslt17.de_fr.en.bpe16k -fairseq-preprocess --source-lang de --target-lang en \ - --trainpref $TEXT/train.bpe.de-en \ - --validpref $TEXT/valid0.bpe.de-en,$TEXT/valid1.bpe.de-en,$TEXT/valid2.bpe.de-en,$TEXT/valid3.bpe.de-en,$TEXT/valid4.bpe.de-en,$TEXT/valid5.bpe.de-en \ - --destdir data-bin/iwslt17.de_fr.en.bpe16k \ - --workers 10 - -# Binarize the fr-en dataset -# NOTE: it's important to reuse the en dictionary from the previous step -fairseq-preprocess --source-lang fr --target-lang en \ - --trainpref $TEXT/train.bpe.fr-en \ - --validpref $TEXT/valid0.bpe.fr-en,$TEXT/valid1.bpe.fr-en,$TEXT/valid2.bpe.fr-en,$TEXT/valid3.bpe.fr-en,$TEXT/valid4.bpe.fr-en,$TEXT/valid5.bpe.fr-en \ - --tgtdict data-bin/iwslt17.de_fr.en.bpe16k/dict.en.txt \ - --destdir data-bin/iwslt17.de_fr.en.bpe16k \ - --workers 10 - -# Train a multilingual transformer model -# NOTE: the command below assumes 1 GPU, but accumulates gradients from -# 8 fwd/bwd passes to simulate training on 8 GPUs -mkdir -p checkpoints/multilingual_transformer -CUDA_VISIBLE_DEVICES=0 fairseq-train data-bin/iwslt17.de_fr.en.bpe16k/ \ - --max-epoch 50 \ - --ddp-backend=legacy_ddp \ - --task multilingual_translation --lang-pairs de-en,fr-en \ - --arch multilingual_transformer_iwslt_de_en \ - --share-decoders --share-decoder-input-output-embed \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr 0.0005 --lr-scheduler inverse_sqrt \ - --warmup-updates 4000 --warmup-init-lr '1e-07' \ - --label-smoothing 0.1 --criterion label_smoothed_cross_entropy \ - --dropout 0.3 --weight-decay 0.0001 \ - --save-dir checkpoints/multilingual_transformer \ - --max-tokens 4000 \ - --update-freq 8 - -# Generate and score the test set with sacrebleu -SRC=de -sacrebleu --test-set iwslt17 --language-pair ${SRC}-en --echo src \ - | python scripts/spm_encode.py --model examples/translation/iwslt17.de_fr.en.bpe16k/sentencepiece.bpe.model \ - > iwslt17.test.${SRC}-en.${SRC}.bpe -cat iwslt17.test.${SRC}-en.${SRC}.bpe \ - | fairseq-interactive data-bin/iwslt17.de_fr.en.bpe16k/ \ - --task 
multilingual_translation --lang-pairs de-en,fr-en \
-    --source-lang ${SRC} --target-lang en \
-    --path checkpoints/multilingual_transformer/checkpoint_best.pt \
-    --buffer-size 2000 --batch-size 128 \
-    --beam 5 --remove-bpe=sentencepiece \
-    > iwslt17.test.${SRC}-en.en.sys
-grep ^H iwslt17.test.${SRC}-en.en.sys | cut -f3 \
-    | sacrebleu --test-set iwslt17 --language-pair ${SRC}-en
-```
-
-##### Argument format during inference
-
-During inference it is required to specify a single `--source-lang` and
-`--target-lang`, which indicates the inference language direction.
-`--lang-pairs`, `--encoder-langtok`, `--decoder-langtok` have to be set to
-the same value as training.
diff --git a/kosmos-g/fairseq/examples/translation/prepare-iwslt14.sh b/kosmos-g/fairseq/examples/translation/prepare-iwslt14.sh
deleted file mode 100644
index 2fb6643fb..000000000
--- a/kosmos-g/fairseq/examples/translation/prepare-iwslt14.sh
+++ /dev/null
@@ -1,115 +0,0 @@
-#!/usr/bin/env bash
-#
-# Adapted from https://github.com/facebookresearch/MIXER/blob/master/prepareData.sh
-
-echo 'Cloning Moses github repository (for tokenization scripts)...'
-git clone https://github.com/moses-smt/mosesdecoder.git
-
-echo 'Cloning Subword NMT repository (for BPE pre-processing)...'
-git clone https://github.com/rsennrich/subword-nmt.git
-
-SCRIPTS=mosesdecoder/scripts
-TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl
-LC=$SCRIPTS/tokenizer/lowercase.perl
-CLEAN=$SCRIPTS/training/clean-corpus-n.perl
-BPEROOT=subword-nmt/subword_nmt
-BPE_TOKENS=10000
-
-URL="http://dl.fbaipublicfiles.com/fairseq/data/iwslt14/de-en.tgz"
-GZ=de-en.tgz
-
-if [ ! -d "$SCRIPTS" ]; then
-    echo "Please set SCRIPTS variable correctly to point to Moses scripts."
-    exit
-fi
-
-src=de
-tgt=en
-lang=de-en
-prep=iwslt14.tokenized.de-en
-tmp=$prep/tmp
-orig=orig
-
-mkdir -p $orig $tmp $prep
-
-echo "Downloading data from ${URL}..."
-cd $orig
-wget "$URL"
-
-if [ -f $GZ ]; then
-    echo "Data successfully downloaded."
-else
-    echo "Data not successfully downloaded."
-    exit
-fi
-
-tar zxvf $GZ
-cd ..
-
-echo "pre-processing train data..."
-for l in $src $tgt; do
-    f=train.tags.$lang.$l
-    tok=train.tags.$lang.tok.$l
-
-    cat $orig/$lang/$f | \
-    grep -v '<url>' | \
-    grep -v '<talkid>' | \
-    grep -v '<keywords>' | \
-    sed -e 's/<title>//g' | \
-    sed -e 's/<\/title>//g' | \
-    sed -e 's/<description>//g' | \
-    sed -e 's/<\/description>//g' | \
-    perl $TOKENIZER -threads 8 -l $l > $tmp/$tok
-    echo ""
-done
-perl $CLEAN -ratio 1.5 $tmp/train.tags.$lang.tok $src $tgt $tmp/train.tags.$lang.clean 1 175
-for l in $src $tgt; do
-    perl $LC < $tmp/train.tags.$lang.clean.$l > $tmp/train.tags.$lang.$l
-done
-
-echo "pre-processing valid/test data..."
-for l in $src $tgt; do
-    for o in `ls $orig/$lang/IWSLT14.TED*.$l.xml`; do
-    fname=${o##*/}
-    f=$tmp/${fname%.*}
-    echo $o $f
-    grep '<seg id' $o | \
-        sed -e 's/<seg id="[0-9]*">\s*//g' | \
-        sed -e 's/\s*<\/seg>\s*//g' | \
-        sed -e "s/\’/\'/g" | \
-        perl $TOKENIZER -threads 8 -l $l | \
-        perl $LC > $f
-    echo ""
-    done
-done
-
-
-echo "creating train, valid, test..."
-for l in $src $tgt; do - awk '{if (NR%23 == 0) print $0; }' $tmp/train.tags.de-en.$l > $tmp/valid.$l - awk '{if (NR%23 != 0) print $0; }' $tmp/train.tags.de-en.$l > $tmp/train.$l - - cat $tmp/IWSLT14.TED.dev2010.de-en.$l \ - $tmp/IWSLT14.TEDX.dev2012.de-en.$l \ - $tmp/IWSLT14.TED.tst2010.de-en.$l \ - $tmp/IWSLT14.TED.tst2011.de-en.$l \ - $tmp/IWSLT14.TED.tst2012.de-en.$l \ - > $tmp/test.$l -done - -TRAIN=$tmp/train.en-de -BPE_CODE=$prep/code -rm -f $TRAIN -for l in $src $tgt; do - cat $tmp/train.$l >> $TRAIN -done - -echo "learn_bpe.py on ${TRAIN}..." -python $BPEROOT/learn_bpe.py -s $BPE_TOKENS < $TRAIN > $BPE_CODE - -for L in $src $tgt; do - for f in train.$L valid.$L test.$L; do - echo "apply_bpe.py to ${f}..." - python $BPEROOT/apply_bpe.py -c $BPE_CODE < $tmp/$f > $prep/$f - done -done diff --git a/kosmos-g/fairseq/examples/translation/prepare-iwslt17-multilingual.sh b/kosmos-g/fairseq/examples/translation/prepare-iwslt17-multilingual.sh deleted file mode 100644 index 23be87555..000000000 --- a/kosmos-g/fairseq/examples/translation/prepare-iwslt17-multilingual.sh +++ /dev/null @@ -1,133 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -SRCS=( - "de" - "fr" -) -TGT=en - -ROOT=$(dirname "$0") -SCRIPTS=$ROOT/../../scripts -SPM_TRAIN=$SCRIPTS/spm_train.py -SPM_ENCODE=$SCRIPTS/spm_encode.py - -BPESIZE=16384 -ORIG=$ROOT/iwslt17_orig -DATA=$ROOT/iwslt17.de_fr.en.bpe16k -mkdir -p "$ORIG" "$DATA" - -TRAIN_MINLEN=1 # remove sentences with <1 BPE token -TRAIN_MAXLEN=250 # remove sentences with >250 BPE tokens - -URLS=( - "https://wit3.fbk.eu/archive/2017-01-trnted/texts/de/en/de-en.tgz" - "https://wit3.fbk.eu/archive/2017-01-trnted/texts/fr/en/fr-en.tgz" -) -ARCHIVES=( - "de-en.tgz" - "fr-en.tgz" -) -VALID_SETS=( - "IWSLT17.TED.dev2010.de-en IWSLT17.TED.tst2010.de-en IWSLT17.TED.tst2011.de-en IWSLT17.TED.tst2012.de-en IWSLT17.TED.tst2013.de-en IWSLT17.TED.tst2014.de-en IWSLT17.TED.tst2015.de-en" - "IWSLT17.TED.dev2010.fr-en IWSLT17.TED.tst2010.fr-en IWSLT17.TED.tst2011.fr-en IWSLT17.TED.tst2012.fr-en IWSLT17.TED.tst2013.fr-en IWSLT17.TED.tst2014.fr-en IWSLT17.TED.tst2015.fr-en" -) - -# download and extract data -for ((i=0;i<${#URLS[@]};++i)); do - ARCHIVE=$ORIG/${ARCHIVES[i]} - if [ -f "$ARCHIVE" ]; then - echo "$ARCHIVE already exists, skipping download" - else - URL=${URLS[i]} - wget -P "$ORIG" "$URL" - if [ -f "$ARCHIVE" ]; then - echo "$URL successfully downloaded." - else - echo "$URL not successfully downloaded." - exit 1 - fi - fi - FILE=${ARCHIVE: -4} - if [ -e "$FILE" ]; then - echo "$FILE already exists, skipping extraction" - else - tar -C "$ORIG" -xzvf "$ARCHIVE" - fi -done - -echo "pre-processing train data..." -for SRC in "${SRCS[@]}"; do - for LANG in "${SRC}" "${TGT}"; do - cat "$ORIG/${SRC}-${TGT}/train.tags.${SRC}-${TGT}.${LANG}" \ - | grep -v '<url>' \ - | grep -v '<talkid>' \ - | grep -v '<keywords>' \ - | grep -v '<speaker>' \ - | grep -v '<reviewer' \ - | grep -v '<translator' \ - | grep -v '<doc' \ - | grep -v '</doc>' \ - | sed -e 's/<title>//g' \ - | sed -e 's/<\/title>//g' \ - | sed -e 's/<description>//g' \ - | sed -e 's/<\/description>//g' \ - | sed 's/^\s*//g' \ - | sed 's/\s*$//g' \ - > "$DATA/train.${SRC}-${TGT}.${LANG}" - done -done - -echo "pre-processing valid data..." 
-for ((i=0;i<${#SRCS[@]};++i)); do - SRC=${SRCS[i]} - VALID_SET=(${VALID_SETS[i]}) - for ((j=0;j<${#VALID_SET[@]};++j)); do - FILE=${VALID_SET[j]} - for LANG in "$SRC" "$TGT"; do - grep '<seg id' "$ORIG/${SRC}-${TGT}/${FILE}.${LANG}.xml" \ - | sed -e 's/<seg id="[0-9]*">\s*//g' \ - | sed -e 's/\s*<\/seg>\s*//g' \ - | sed -e "s/\’/\'/g" \ - > "$DATA/valid${j}.${SRC}-${TGT}.${LANG}" - done - done -done - -# learn BPE with sentencepiece -TRAIN_FILES=$(for SRC in "${SRCS[@]}"; do echo $DATA/train.${SRC}-${TGT}.${SRC}; echo $DATA/train.${SRC}-${TGT}.${TGT}; done | tr "\n" ",") -echo "learning joint BPE over ${TRAIN_FILES}..." -python "$SPM_TRAIN" \ - --input=$TRAIN_FILES \ - --model_prefix=$DATA/sentencepiece.bpe \ - --vocab_size=$BPESIZE \ - --character_coverage=1.0 \ - --model_type=bpe - -# encode train/valid -echo "encoding train with learned BPE..." -for SRC in "${SRCS[@]}"; do - python "$SPM_ENCODE" \ - --model "$DATA/sentencepiece.bpe.model" \ - --output_format=piece \ - --inputs $DATA/train.${SRC}-${TGT}.${SRC} $DATA/train.${SRC}-${TGT}.${TGT} \ - --outputs $DATA/train.bpe.${SRC}-${TGT}.${SRC} $DATA/train.bpe.${SRC}-${TGT}.${TGT} \ - --min-len $TRAIN_MINLEN --max-len $TRAIN_MAXLEN -done - -echo "encoding valid with learned BPE..." -for ((i=0;i<${#SRCS[@]};++i)); do - SRC=${SRCS[i]} - VALID_SET=(${VALID_SETS[i]}) - for ((j=0;j<${#VALID_SET[@]};++j)); do - python "$SPM_ENCODE" \ - --model "$DATA/sentencepiece.bpe.model" \ - --output_format=piece \ - --inputs $DATA/valid${j}.${SRC}-${TGT}.${SRC} $DATA/valid${j}.${SRC}-${TGT}.${TGT} \ - --outputs $DATA/valid${j}.bpe.${SRC}-${TGT}.${SRC} $DATA/valid${j}.bpe.${SRC}-${TGT}.${TGT} - done -done diff --git a/kosmos-g/fairseq/examples/translation/prepare-wmt14en2de.sh b/kosmos-g/fairseq/examples/translation/prepare-wmt14en2de.sh deleted file mode 100644 index 6702c88b5..000000000 --- a/kosmos-g/fairseq/examples/translation/prepare-wmt14en2de.sh +++ /dev/null @@ -1,142 +0,0 @@ -#!/bin/bash -# Adapted from https://github.com/facebookresearch/MIXER/blob/master/prepareData.sh - -echo 'Cloning Moses github repository (for tokenization scripts)...' -git clone https://github.com/moses-smt/mosesdecoder.git - -echo 'Cloning Subword NMT repository (for BPE pre-processing)...' 
-git clone https://github.com/rsennrich/subword-nmt.git - -SCRIPTS=mosesdecoder/scripts -TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl -CLEAN=$SCRIPTS/training/clean-corpus-n.perl -NORM_PUNC=$SCRIPTS/tokenizer/normalize-punctuation.perl -REM_NON_PRINT_CHAR=$SCRIPTS/tokenizer/remove-non-printing-char.perl -BPEROOT=subword-nmt/subword_nmt -BPE_TOKENS=40000 - -URLS=( - "http://statmt.org/wmt13/training-parallel-europarl-v7.tgz" - "http://statmt.org/wmt13/training-parallel-commoncrawl.tgz" - "http://data.statmt.org/wmt17/translation-task/training-parallel-nc-v12.tgz" - "http://data.statmt.org/wmt17/translation-task/dev.tgz" - "http://statmt.org/wmt14/test-full.tgz" -) -FILES=( - "training-parallel-europarl-v7.tgz" - "training-parallel-commoncrawl.tgz" - "training-parallel-nc-v12.tgz" - "dev.tgz" - "test-full.tgz" -) -CORPORA=( - "training/europarl-v7.de-en" - "commoncrawl.de-en" - "training/news-commentary-v12.de-en" -) - -# This will make the dataset compatible to the one used in "Convolutional Sequence to Sequence Learning" -# https://arxiv.org/abs/1705.03122 -if [ "$1" == "--icml17" ]; then - URLS[2]="http://statmt.org/wmt14/training-parallel-nc-v9.tgz" - FILES[2]="training-parallel-nc-v9.tgz" - CORPORA[2]="training/news-commentary-v9.de-en" - OUTDIR=wmt14_en_de -else - OUTDIR=wmt17_en_de -fi - -if [ ! -d "$SCRIPTS" ]; then - echo "Please set SCRIPTS variable correctly to point to Moses scripts." - exit -fi - -src=en -tgt=de -lang=en-de -prep=$OUTDIR -tmp=$prep/tmp -orig=orig -dev=dev/newstest2013 - -mkdir -p $orig $tmp $prep - -cd $orig - -for ((i=0;i<${#URLS[@]};++i)); do - file=${FILES[i]} - if [ -f $file ]; then - echo "$file already exists, skipping download" - else - url=${URLS[i]} - wget "$url" - if [ -f $file ]; then - echo "$url successfully downloaded." - else - echo "$url not successfully downloaded." - exit -1 - fi - if [ ${file: -4} == ".tgz" ]; then - tar zxvf $file - elif [ ${file: -4} == ".tar" ]; then - tar xvf $file - fi - fi -done -cd .. - -echo "pre-processing train data..." -for l in $src $tgt; do - rm $tmp/train.tags.$lang.tok.$l - for f in "${CORPORA[@]}"; do - cat $orig/$f.$l | \ - perl $NORM_PUNC $l | \ - perl $REM_NON_PRINT_CHAR | \ - perl $TOKENIZER -threads 8 -a -l $l >> $tmp/train.tags.$lang.tok.$l - done -done - -echo "pre-processing test data..." -for l in $src $tgt; do - if [ "$l" == "$src" ]; then - t="src" - else - t="ref" - fi - grep '<seg id' $orig/test-full/newstest2014-deen-$t.$l.sgm | \ - sed -e 's/<seg id="[0-9]*">\s*//g' | \ - sed -e 's/\s*<\/seg>\s*//g' | \ - sed -e "s/\’/\'/g" | \ - perl $TOKENIZER -threads 8 -a -l $l > $tmp/test.$l - echo "" -done - -echo "splitting train and valid..." -for l in $src $tgt; do - awk '{if (NR%100 == 0) print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/valid.$l - awk '{if (NR%100 != 0) print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/train.$l -done - -TRAIN=$tmp/train.de-en -BPE_CODE=$prep/code -rm -f $TRAIN -for l in $src $tgt; do - cat $tmp/train.$l >> $TRAIN -done - -echo "learn_bpe.py on ${TRAIN}..." -python $BPEROOT/learn_bpe.py -s $BPE_TOKENS < $TRAIN > $BPE_CODE - -for L in $src $tgt; do - for f in train.$L valid.$L test.$L; do - echo "apply_bpe.py to ${f}..." 
- python $BPEROOT/apply_bpe.py -c $BPE_CODE < $tmp/$f > $tmp/bpe.$f - done -done - -perl $CLEAN -ratio 1.5 $tmp/bpe.train $src $tgt $prep/train 1 250 -perl $CLEAN -ratio 1.5 $tmp/bpe.valid $src $tgt $prep/valid 1 250 - -for L in $src $tgt; do - cp $tmp/bpe.test.$L $prep/test.$L -done diff --git a/kosmos-g/fairseq/examples/translation/prepare-wmt14en2fr.sh b/kosmos-g/fairseq/examples/translation/prepare-wmt14en2fr.sh deleted file mode 100644 index 2ac97a5b7..000000000 --- a/kosmos-g/fairseq/examples/translation/prepare-wmt14en2fr.sh +++ /dev/null @@ -1,136 +0,0 @@ -#!/bin/bash -# Adapted from https://github.com/facebookresearch/MIXER/blob/master/prepareData.sh - -echo 'Cloning Moses github repository (for tokenization scripts)...' -git clone https://github.com/moses-smt/mosesdecoder.git - -echo 'Cloning Subword NMT repository (for BPE pre-processing)...' -git clone https://github.com/rsennrich/subword-nmt.git - -SCRIPTS=mosesdecoder/scripts -TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl -CLEAN=$SCRIPTS/training/clean-corpus-n.perl -NORM_PUNC=$SCRIPTS/tokenizer/normalize-punctuation.perl -REM_NON_PRINT_CHAR=$SCRIPTS/tokenizer/remove-non-printing-char.perl -BPEROOT=subword-nmt/subword_nmt -BPE_TOKENS=40000 - -URLS=( - "http://statmt.org/wmt13/training-parallel-europarl-v7.tgz" - "http://statmt.org/wmt13/training-parallel-commoncrawl.tgz" - "http://statmt.org/wmt13/training-parallel-un.tgz" - "http://statmt.org/wmt14/training-parallel-nc-v9.tgz" - "http://statmt.org/wmt10/training-giga-fren.tar" - "http://statmt.org/wmt14/test-full.tgz" -) -FILES=( - "training-parallel-europarl-v7.tgz" - "training-parallel-commoncrawl.tgz" - "training-parallel-un.tgz" - "training-parallel-nc-v9.tgz" - "training-giga-fren.tar" - "test-full.tgz" -) -CORPORA=( - "training/europarl-v7.fr-en" - "commoncrawl.fr-en" - "un/undoc.2000.fr-en" - "training/news-commentary-v9.fr-en" - "giga-fren.release2.fixed" -) - -if [ ! -d "$SCRIPTS" ]; then - echo "Please set SCRIPTS variable correctly to point to Moses scripts." - exit -fi - -src=en -tgt=fr -lang=en-fr -prep=wmt14_en_fr -tmp=$prep/tmp -orig=orig - -mkdir -p $orig $tmp $prep - -cd $orig - -for ((i=0;i<${#URLS[@]};++i)); do - file=${FILES[i]} - if [ -f $file ]; then - echo "$file already exists, skipping download" - else - url=${URLS[i]} - wget "$url" - if [ -f $file ]; then - echo "$url successfully downloaded." - else - echo "$url not successfully downloaded." - exit -1 - fi - if [ ${file: -4} == ".tgz" ]; then - tar zxvf $file - elif [ ${file: -4} == ".tar" ]; then - tar xvf $file - fi - fi -done - -gunzip giga-fren.release2.fixed.*.gz -cd .. - -echo "pre-processing train data..." -for l in $src $tgt; do - rm $tmp/train.tags.$lang.tok.$l - for f in "${CORPORA[@]}"; do - cat $orig/$f.$l | \ - perl $NORM_PUNC $l | \ - perl $REM_NON_PRINT_CHAR | \ - perl $TOKENIZER -threads 8 -a -l $l >> $tmp/train.tags.$lang.tok.$l - done -done - -echo "pre-processing test data..." -for l in $src $tgt; do - if [ "$l" == "$src" ]; then - t="src" - else - t="ref" - fi - grep '<seg id' $orig/test-full/newstest2014-fren-$t.$l.sgm | \ - sed -e 's/<seg id="[0-9]*">\s*//g' | \ - sed -e 's/\s*<\/seg>\s*//g' | \ - sed -e "s/\’/\'/g" | \ - perl $TOKENIZER -threads 8 -a -l $l > $tmp/test.$l - echo "" -done - -echo "splitting train and valid..." 
-
-for l in $src $tgt; do
-    awk '{if (NR%1333 == 0)  print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/valid.$l
-    awk '{if (NR%1333 != 0)  print $0; }' $tmp/train.tags.$lang.tok.$l > $tmp/train.$l
-done
-
-TRAIN=$tmp/train.fr-en
-BPE_CODE=$prep/code
-rm -f $TRAIN
-for l in $src $tgt; do
-    cat $tmp/train.$l >> $TRAIN
-done
-
-echo "learn_bpe.py on ${TRAIN}..."
-python $BPEROOT/learn_bpe.py -s $BPE_TOKENS < $TRAIN > $BPE_CODE
-
-for L in $src $tgt; do
-    for f in train.$L valid.$L test.$L; do
-        echo "apply_bpe.py to ${f}..."
-        python $BPEROOT/apply_bpe.py -c $BPE_CODE < $tmp/$f > $tmp/bpe.$f
-    done
-done
-
-perl $CLEAN -ratio 1.5 $tmp/bpe.train $src $tgt $prep/train 1 250
-perl $CLEAN -ratio 1.5 $tmp/bpe.valid $src $tgt $prep/valid 1 250
-
-for L in $src $tgt; do
-    cp $tmp/bpe.test.$L $prep/test.$L
-done
diff --git a/kosmos-g/fairseq/examples/translation_moe/README.md b/kosmos-g/fairseq/examples/translation_moe/README.md
deleted file mode 100644
index 2e5c8af61..000000000
--- a/kosmos-g/fairseq/examples/translation_moe/README.md
+++ /dev/null
@@ -1,89 +0,0 @@
-# Mixture Models for Diverse Machine Translation: Tricks of the Trade (Shen et al., 2019)
-
-This page includes instructions for reproducing results from the paper [Mixture Models for Diverse Machine Translation: Tricks of the Trade (Shen et al., 2019)](https://arxiv.org/abs/1902.07816).
-
-## Download data
-
-First, follow the [instructions to download and preprocess the WMT'17 En-De dataset](../translation#prepare-wmt14en2desh).
-Make sure to learn a joint vocabulary by passing the `--joined-dictionary` option to `fairseq-preprocess`.
-
-## Train a model
-
-Then we can train a mixture of experts model using the `translation_moe` task.
-Use the `--method` flag to choose the MoE variant; we support hard mixtures with a learned or uniform prior (`--method hMoElp` and `hMoEup`, respectively) and soft mixtures (`--method sMoElp` and `sMoEup`).
-The model is trained with online responsibility assignment and shared parameterization.
-
-The following command will train a `hMoElp` model with `3` experts:
-```bash
-fairseq-train --ddp-backend='legacy_ddp' \
-    data-bin/wmt17_en_de \
-    --max-update 100000 \
-    --task translation_moe --user-dir examples/translation_moe/translation_moe_src \
-    --method hMoElp --mean-pool-gating-network \
-    --num-experts 3 \
-    --arch transformer_wmt_en_de --share-all-embeddings \
-    --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \
-    --lr-scheduler inverse_sqrt --warmup-init-lr 1e-07 --warmup-updates 4000 \
-    --lr 0.0007 \
-    --dropout 0.1 --weight-decay 0.0 --criterion cross_entropy \
-    --max-tokens 3584
-```
-
-## Translate
-
-Once a model is trained, we can generate translations from different experts using the `--gen-expert` option.
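Both families share the same marginal likelihood; the soft variants optimize it directly with a log-sum-exp over experts, while the hard variants commit to a single expert per sentence. A minimal sketch of that quantity with made-up scores (this mirrors, but is not, the `LogSumExpMoE` helper shipped with this example):

```python
import torch

# Toy setup: 2 sentences, 3 experts. logp_y_given_z[b, z] is the log-likelihood
# of sentence b under expert z; log_prior is the gating network's log p(z|x)
# for the learned-prior variants, or uniform otherwise. Values are made up.
logp_y_given_z = torch.randn(2, 3)
log_prior = torch.log_softmax(torch.randn(2, 3), dim=-1)

# log p(y|x) = logsumexp_z [ log p(z|x) + log p(y|z, x) ]
marginal = torch.logsumexp(log_prior + logp_y_given_z, dim=-1)
loss = -marginal.sum()

# Decoding instead conditions on a single expert index, which is what
# --gen-expert selects at generation time.
chosen = (log_prior + logp_y_given_z).argmax(dim=-1)
```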
-For example, to generate from expert 0:
-```bash
-fairseq-generate data-bin/wmt17_en_de \
-    --path checkpoints/checkpoint_best.pt \
-    --beam 1 --remove-bpe \
-    --task translation_moe --user-dir examples/translation_moe/translation_moe_src \
-    --method hMoElp --mean-pool-gating-network \
-    --num-experts 3 \
-    --gen-expert 0
-```
-
-## Evaluate
-
-First download a tokenized version of the WMT'14 En-De test set with multiple references:
-```bash
-wget dl.fbaipublicfiles.com/fairseq/data/wmt14-en-de.extra_refs.tok
-```
-
-Next apply BPE on the fly and run generation for each expert:
-```bash
-BPE_CODE=examples/translation/wmt17_en_de/code
-for EXPERT in $(seq 0 2); do \
-    cat wmt14-en-de.extra_refs.tok \
-    | grep ^S | cut -f 2 \
-    | fairseq-interactive data-bin/wmt17_en_de \
-        --path checkpoints/checkpoint_best.pt \
-        --beam 1 \
-        --bpe subword_nmt --bpe-codes $BPE_CODE \
-        --buffer-size 500 --max-tokens 6000 \
-        --task translation_moe --user-dir examples/translation_moe/translation_moe_src \
-        --method hMoElp --mean-pool-gating-network \
-        --num-experts 3 \
-        --gen-expert $EXPERT ; \
-done > wmt14-en-de.extra_refs.tok.gen.3experts
-```
-
-Finally, use `score.py` to compute pairwise BLEU and average oracle BLEU:
-```bash
-python examples/translation_moe/score.py --sys wmt14-en-de.extra_refs.tok.gen.3experts --ref wmt14-en-de.extra_refs.tok
-# pairwise BLEU: 48.26
-# #refs covered: 2.11
-# multi-reference BLEU (leave-one-out): 59.46
-```
-This matches row 3 from Table 7 in the paper.
-
-## Citation
-
-```bibtex
-@article{shen2019mixture,
-  title = {Mixture Models for Diverse Machine Translation: Tricks of the Trade},
-  author = {Tianxiao Shen and Myle Ott and Michael Auli and Marc'Aurelio Ranzato},
-  journal = {International Conference on Machine Learning},
-  year = 2019,
-}
-```
diff --git a/kosmos-g/fairseq/examples/translation_moe/score.py b/kosmos-g/fairseq/examples/translation_moe/score.py
deleted file mode 100644
index 9a529a985..000000000
--- a/kosmos-g/fairseq/examples/translation_moe/score.py
+++ /dev/null
@@ -1,197 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-Scoring script for computing pairwise BLEU and multi-ref BLEU over a set of
-candidate hypotheses.
-
-See `"Mixture Models for Diverse Machine Translation: Tricks of the Trade"
-(Shen et al., 2019) <https://arxiv.org/abs/1902.07816>`_.
-""" - -import argparse -import random -import sys -from itertools import chain - -import numpy as np -from sacrebleu import compute_bleu, corpus_bleu as _corpus_bleu - - -def main(): - parser = argparse.ArgumentParser(sys.argv[0]) - parser.add_argument( - "--sys", nargs="*", default="", metavar="FILE", help="path to system output" - ) - parser.add_argument("--ref", default="", metavar="FILE", help="path to references") - parser.add_argument( - "--output", - default="", - metavar="FILE", - help="print outputs into a pretty format", - ) - args = parser.parse_args() - - if args.sys: - src, tgt, hypos, log_probs = load_sys(args.sys) - print("pairwise BLEU: %.2f" % pairwise(hypos)) - if args.output: - merge(src, tgt, hypos, log_probs, args.output) - - if args.ref: - _, _, refs = load_ref(args.ref) - if args.sys: - multi_ref(refs, hypos) - else: - intra_ref(refs) - - -def dictolist(d): - a = sorted(d.items(), key=lambda i: i[0]) - return [i[1] for i in a] - - -def load_sys(paths): - src, tgt, hypos, log_probs = {}, {}, {}, {} - for path in paths: - with open(path) as f: - for line in f: - line = line.rstrip() - # S: source - # T: target - # D: detokenized system output - if line.startswith(("S-", "T-", "D-")): - i = int(line[line.find("-") + 1 : line.find("\t")]) - if line.startswith("S-"): - src[i] = line.split("\t")[1] - if line.startswith("T-"): - tgt[i] = line.split("\t")[1] - if line.startswith("D-"): - if i not in hypos: - hypos[i] = [] - log_probs[i] = [] - hypos[i].append(line.split("\t")[2]) - log_probs[i].append(float(line.split("\t")[1])) - return dictolist(src), dictolist(tgt), dictolist(hypos), dictolist(log_probs) - - -def load_ref(path): - with open(path) as f: - lines = f.readlines() - src, tgt, refs = [], [], [] - i = 0 - while i < len(lines): - if lines[i].startswith("S-"): - src.append(lines[i].split("\t")[1].rstrip()) - i += 1 - elif lines[i].startswith("T-"): - tgt.append(lines[i].split("\t")[1].rstrip()) - i += 1 - else: - a = [] - while i < len(lines) and lines[i].startswith("R"): - a.append(lines[i].split("\t")[1].rstrip()) - i += 1 - refs.append(a) - return src, tgt, refs - - -def merge(src, tgt, hypos, log_probs, path): - with open(path, "w") as f: - for s, t, hs, lps in zip(src, tgt, hypos, log_probs): - f.write(s + "\n") - f.write(t + "\n") - f.write("\n") - for h, lp in zip(hs, lps): - f.write("\t%f\t%s\n" % (lp, h.strip())) - f.write("------------------------------------------------------\n") - - -def corpus_bleu(sys_stream, ref_streams): - bleu = _corpus_bleu(sys_stream, ref_streams, tokenize="none") - return bleu.score - - -def sentence_bleu(hypothesis, reference): - bleu = _corpus_bleu(hypothesis, reference) - for i in range(1, 4): - bleu.counts[i] += 1 - bleu.totals[i] += 1 - bleu = compute_bleu( - bleu.counts, - bleu.totals, - bleu.sys_len, - bleu.ref_len, - smooth_method="exp", - ) - return bleu.score - - -def pairwise(sents): - _ref, _hypo = [], [] - for s in sents: - for i in range(len(s)): - for j in range(len(s)): - if i != j: - _ref.append(s[i]) - _hypo.append(s[j]) - return corpus_bleu(_hypo, [_ref]) - - -def multi_ref(refs, hypos): - _ref, _hypo = [], [] - ref_cnt = 0 - assert len(refs) == len(hypos) - - # count number of refs covered - for rs, hs in zip(refs, hypos): - a = set() - for h in hs: - s = [sentence_bleu(h, r) for r in rs] - j = np.argmax(s) - _ref.append(rs[j]) - _hypo.append(h) - best = [k for k in range(len(rs)) if s[k] == s[j]] - a.add(random.choice(best)) - ref_cnt += len(a) - print("#refs covered: %.2f" % (ref_cnt / len(refs))) - - # 
transpose refs and hypos - refs = list(zip(*refs)) - hypos = list(zip(*hypos)) - - # compute multi-ref corpus BLEU (leave-one-out to be comparable to intra_ref) - k = len(hypos) - m = len(refs) - flat_hypos = [hypos[j][i] for i in range(len(hypos[0])) for j in range(k)] - duplicated_refs = [[ref for ref in refs_i for _ in range(k)] for refs_i in refs] - loo_bleus = [] - for held_out_ref in range(m): - remaining_refs = ( - duplicated_refs[:held_out_ref] + duplicated_refs[held_out_ref + 1 :] - ) - assert len(remaining_refs) == m - 1 - loo_bleus.append(corpus_bleu(flat_hypos, remaining_refs)) - print("average multi-reference BLEU (leave-one-out): %.2f" % np.mean(loo_bleus)) - - -def intra_ref(refs): - print("ref pairwise BLEU: %.2f" % pairwise(refs)) - refs = list(zip(*refs)) - m = len(refs) - concat_h = [] - concat_rest = [[] for j in range(m - 1)] - for i, h in enumerate(refs): - rest = refs[:i] + refs[i + 1 :] - concat_h.append(h) - for j in range(m - 1): - concat_rest[j].extend(rest[j]) - concat_h = list(chain.from_iterable(concat_h)) - bleu = corpus_bleu(concat_h, concat_rest) - print("multi-reference BLEU (leave-one-out): %.2f" % bleu) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/translation_moe/translation_moe_src/__init__.py b/kosmos-g/fairseq/examples/translation_moe/translation_moe_src/__init__.py deleted file mode 100644 index c0abe53e9..000000000 --- a/kosmos-g/fairseq/examples/translation_moe/translation_moe_src/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import translation_moe # noqa diff --git a/kosmos-g/fairseq/examples/translation_moe/translation_moe_src/logsumexp_moe.py b/kosmos-g/fairseq/examples/translation_moe/translation_moe_src/logsumexp_moe.py deleted file mode 100644 index fb299daec..000000000 --- a/kosmos-g/fairseq/examples/translation_moe/translation_moe_src/logsumexp_moe.py +++ /dev/null @@ -1,26 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -class LogSumExpMoE(torch.autograd.Function): - """Standard LogSumExp forward pass, but use *posterior* for the backward. - - See `"Mixture Models for Diverse Machine Translation: Tricks of the Trade" - (Shen et al., 2019) <https://arxiv.org/abs/1902.07816>`_. - """ - - @staticmethod - def forward(ctx, logp, posterior, dim=-1): - ctx.save_for_backward(posterior) - ctx.dim = dim - return torch.logsumexp(logp, dim=dim) - - @staticmethod - def backward(ctx, grad_output): - (posterior,) = ctx.saved_tensors - grad_logp = grad_output.unsqueeze(ctx.dim) * posterior - return grad_logp, None, None diff --git a/kosmos-g/fairseq/examples/translation_moe/translation_moe_src/mean_pool_gating_network.py b/kosmos-g/fairseq/examples/translation_moe/translation_moe_src/mean_pool_gating_network.py deleted file mode 100644 index efc7ae40b..000000000 --- a/kosmos-g/fairseq/examples/translation_moe/translation_moe_src/mean_pool_gating_network.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
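The custom backward in `LogSumExpMoE` above routes the gradient through the supplied posterior instead of the softmax that `logsumexp` would imply. A quick check of that contract, with random stand-in values:

```python
import torch

from examples.translation_moe.translation_moe_src.logsumexp_moe import LogSumExpMoE

# 2 sentences, 3 experts; values are random stand-ins.
logp = torch.randn(2, 3, requires_grad=True)
# Responsibilities p(z|x, y), treated as constants in the backward pass.
posterior = torch.softmax(logp.detach(), dim=-1)

out = LogSumExpMoE.apply(logp, posterior, -1)  # values match torch.logsumexp
out.sum().backward()
print(torch.allclose(logp.grad, posterior))  # True: the gradient is the posterior
```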
-
-import torch
-import torch.nn.functional as F
-
-
-class MeanPoolGatingNetwork(torch.nn.Module):
-    """A simple mean-pooling gating network for selecting experts.
-
-    This module applies mean pooling over an encoder's output and returns
-    responsibilities for each expert. The encoder format is expected to match
-    :class:`fairseq.models.transformer.TransformerEncoder`.
-    """
-
-    def __init__(self, embed_dim, num_experts, dropout=None):
-        super().__init__()
-        self.embed_dim = embed_dim
-        self.num_experts = num_experts
-
-        self.fc1 = torch.nn.Linear(embed_dim, embed_dim)
-        self.dropout = torch.nn.Dropout(dropout) if dropout is not None else None
-        self.fc2 = torch.nn.Linear(embed_dim, num_experts)
-
-    def forward(self, encoder_out):
-        if not (
-            "encoder_out" in encoder_out
-            and "encoder_padding_mask" in encoder_out
-            and encoder_out["encoder_out"][0].size(2) == self.embed_dim
-        ):
-            raise ValueError("Unexpected format for encoder_out")
-
-        # mean pooling over time
-        encoder_padding_mask = encoder_out["encoder_padding_mask"][0]  # B x T
-        encoder_out = encoder_out["encoder_out"][0].transpose(0, 1)  # B x T x C
-        if encoder_padding_mask is not None:
-            encoder_out = encoder_out.clone()  # required because of transpose above
-            encoder_out[encoder_padding_mask] = 0
-            ntokens = torch.sum(~encoder_padding_mask, dim=1, keepdim=True)
-            x = torch.sum(encoder_out, dim=1) / ntokens.type_as(encoder_out)
-        else:
-            x = torch.mean(encoder_out, dim=1)
-
-        x = torch.tanh(self.fc1(x))
-        if self.dropout is not None:
-            x = self.dropout(x)
-        x = self.fc2(x)
-        return F.log_softmax(x, dim=-1, dtype=torch.float32).type_as(x)
diff --git a/kosmos-g/fairseq/examples/translation_moe/translation_moe_src/translation_moe.py b/kosmos-g/fairseq/examples/translation_moe/translation_moe_src/translation_moe.py
deleted file mode 100644
index 1ee9d1b72..000000000
--- a/kosmos-g/fairseq/examples/translation_moe/translation_moe_src/translation_moe.py
+++ /dev/null
@@ -1,258 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
- -from dataclasses import dataclass, field -import torch -from omegaconf import II - -from fairseq import metrics, utils -from fairseq.dataclass import ChoiceEnum -from fairseq.tasks import register_task -from fairseq.tasks.translation import TranslationConfig, TranslationTask - -from .logsumexp_moe import LogSumExpMoE -from .mean_pool_gating_network import MeanPoolGatingNetwork - - -METHOD_CHOICES = ChoiceEnum(["sMoElp", "sMoEup", "hMoElp", "hMoEup"]) - - -@dataclass -class TranslationMoEConfig(TranslationConfig): - method: METHOD_CHOICES = field( - default="hMoEup", - metadata={"help": "MoE method"}, - ) - num_experts: int = field( - default=3, - metadata={"help": "number of experts"}, - ) - mean_pool_gating_network: bool = field( - default=False, - metadata={"help": "use a simple mean-pooling gating network"}, - ) - mean_pool_gating_network_dropout: float = field( - default=0, - metadata={"help": "dropout for mean-pooling gating network"}, - ) - mean_pool_gating_network_encoder_dim: int = field( - default=0, - metadata={"help": "encoder output dim for mean-pooling gating network"}, - ) - gen_expert: int = field( - default=0, - metadata={"help": "which expert to use for generation"}, - ) - sentence_avg: bool = II("optimization.sentence_avg") - - -@register_task("translation_moe", dataclass=TranslationMoEConfig) -class TranslationMoETask(TranslationTask): - """ - Translation task for Mixture of Experts (MoE) models. - - See `"Mixture Models for Diverse Machine Translation: Tricks of the Trade" - (Shen et al., 2019) <https://arxiv.org/abs/1902.07816>`_. - - Args: - src_dict (~fairseq.data.Dictionary): dictionary for the source language - tgt_dict (~fairseq.data.Dictionary): dictionary for the target language - - .. note:: - - The translation task is compatible with :mod:`fairseq-train`, - :mod:`fairseq-generate` and :mod:`fairseq-interactive`. - - The translation task provides the following additional command-line - arguments: - - .. 
argparse:: - :ref: fairseq.tasks.translation_parser - :prog: - """ - - cfg: TranslationMoEConfig - - def __init__(self, cfg: TranslationMoEConfig, src_dict, tgt_dict): - if cfg.method == "sMoElp": - # soft MoE with learned prior - self.uniform_prior = False - self.hard_selection = False - elif cfg.method == "sMoEup": - # soft MoE with uniform prior - self.uniform_prior = True - self.hard_selection = False - elif cfg.method == "hMoElp": - # hard MoE with learned prior - self.uniform_prior = False - self.hard_selection = True - elif cfg.method == "hMoEup": - # hard MoE with uniform prior - self.uniform_prior = True - self.hard_selection = True - - # add indicator tokens for each expert - for i in range(cfg.num_experts): - # add to both dictionaries in case we're sharing embeddings - src_dict.add_symbol("<expert_{}>".format(i)) - tgt_dict.add_symbol("<expert_{}>".format(i)) - - super().__init__(cfg, src_dict, tgt_dict) - - def build_model(self, cfg, from_checkpoint=False): - from fairseq import models - - model = models.build_model(cfg, self) - if not self.uniform_prior and not hasattr(model, "gating_network"): - if self.cfg.mean_pool_gating_network: - if self.cfg.mean_pool_gating_network_encoder_dim > 0: - encoder_dim = self.cfg.mean_pool_gating_network_encoder_dim - elif getattr(cfg, "encoder_embed_dim", None): - # assume that encoder_embed_dim is the encoder's output dimension - encoder_dim = cfg.encoder_embed_dim - else: - raise ValueError( - "Must specify --mean-pool-gating-network-encoder-dim" - ) - - if self.cfg.mean_pool_gating_network_dropout > 0: - dropout = self.cfg.mean_pool_gating_network_dropout - elif getattr(cfg, "dropout", None): - dropout = cfg.dropout - else: - raise ValueError("Must specify task.mean_pool_gating_network_dropout") - - model.gating_network = MeanPoolGatingNetwork( - encoder_dim, - self.cfg.num_experts, - dropout, - ) - else: - raise ValueError( - "translation_moe task with learned prior requires the model to " - "have a gating network; try using --mean-pool-gating-network" - ) - return model - - def expert_index(self, i): - return i + self.tgt_dict.index("<expert_0>") - - def _get_loss(self, sample, model, criterion): - assert hasattr( - criterion, "compute_loss" - ), "translation_moe task requires the criterion to implement the compute_loss() method" - - k = self.cfg.num_experts - bsz = sample["target"].size(0) - - def get_lprob_y(encoder_out, prev_output_tokens_k): - net_output = model.decoder( - prev_output_tokens=prev_output_tokens_k, - encoder_out=encoder_out, - ) - loss, _ = criterion.compute_loss(model, net_output, sample, reduce=False) - loss = loss.view(bsz, -1) - return -loss.sum(dim=1, keepdim=True) # -> B x 1 - - def get_lprob_yz(winners=None): - encoder_out = model.encoder( - src_tokens=sample["net_input"]["src_tokens"], - src_lengths=sample["net_input"]["src_lengths"], - ) - - if winners is None: - lprob_y = [] - for i in range(k): - prev_output_tokens_k = sample["net_input"][ - "prev_output_tokens" - ].clone() - assert not prev_output_tokens_k.requires_grad - prev_output_tokens_k[:, 0] = self.expert_index(i) - lprob_y.append(get_lprob_y(encoder_out, prev_output_tokens_k)) - lprob_y = torch.cat(lprob_y, dim=1) # -> B x K - else: - prev_output_tokens_k = sample["net_input"]["prev_output_tokens"].clone() - prev_output_tokens_k[:, 0] = self.expert_index(winners) - lprob_y = get_lprob_y(encoder_out, prev_output_tokens_k) # -> B - - if self.uniform_prior: - lprob_yz = lprob_y - else: - lprob_z = model.gating_network(encoder_out) # B x K - if winners 
is not None:
-                    lprob_z = lprob_z.gather(dim=1, index=winners.unsqueeze(-1))
-                lprob_yz = lprob_y + lprob_z.type_as(lprob_y)  # B x K
-
-            return lprob_yz
-
-        # compute responsibilities without dropout
-        with utils.model_eval(model):  # disable dropout
-            with torch.no_grad():  # disable autograd
-                lprob_yz = get_lprob_yz()  # B x K
-                prob_z_xy = torch.nn.functional.softmax(lprob_yz, dim=1)
-                assert not prob_z_xy.requires_grad
-
-        # compute loss with dropout
-        if self.hard_selection:
-            winners = prob_z_xy.max(dim=1)[1]
-            loss = -get_lprob_yz(winners)
-        else:
-            lprob_yz = get_lprob_yz()  # B x K
-            loss = -LogSumExpMoE.apply(lprob_yz, prob_z_xy, 1)
-
-        loss = loss.sum()
-        sample_size = (
-            sample["target"].size(0) if self.cfg.sentence_avg else sample["ntokens"]
-        )
-        logging_output = {
-            "loss": utils.item(loss.data),
-            "ntokens": sample["ntokens"],
-            "nsentences": bsz,
-            "sample_size": sample_size,
-            "posterior": prob_z_xy.float().sum(dim=0).cpu(),
-        }
-        return loss, sample_size, logging_output
-
-    def train_step(
-        self, sample, model, criterion, optimizer, update_num, ignore_grad=False
-    ):
-        model.train()
-        loss, sample_size, logging_output = self._get_loss(sample, model, criterion)
-        if ignore_grad:
-            loss *= 0
-        optimizer.backward(loss)
-        return loss, sample_size, logging_output
-
-    def valid_step(self, sample, model, criterion):
-        model.eval()
-        with torch.no_grad():
-            loss, sample_size, logging_output = self._get_loss(sample, model, criterion)
-        return loss, sample_size, logging_output
-
-    def inference_step(
-        self,
-        generator,
-        models,
-        sample,
-        prefix_tokens=None,
-        expert=None,
-        constraints=None,
-    ):
-        expert = expert or self.cfg.gen_expert
-        with torch.no_grad():
-            return generator.generate(
-                models,
-                sample,
-                prefix_tokens=prefix_tokens,
-                constraints=constraints,
-                bos_token=self.expert_index(expert),
-            )
-
-    def reduce_metrics(self, logging_outputs, criterion):
-        super().reduce_metrics(logging_outputs, criterion)
-        metrics.log_scalar(
-            "posterior",
-            sum(log["posterior"] for log in logging_outputs if "posterior" in log),
-        )
diff --git a/kosmos-g/fairseq/examples/truncated_bptt/README.md b/kosmos-g/fairseq/examples/truncated_bptt/README.md
deleted file mode 100644
index 86518c9d5..000000000
--- a/kosmos-g/fairseq/examples/truncated_bptt/README.md
+++ /dev/null
@@ -1,70 +0,0 @@
-# Truncated Backpropagation Through Time (BPTT)
-
-Truncated BPTT is a useful technique for training language models on very long
-sequences. Typically, a long sequence is split into chunks and a language model
-is trained over the chunks sequentially. The LM may condition on previous
-chunks, but gradients only flow through the current chunk. This technique was
-the basis for the paper: [Transformer-XL: Attentive Language Models Beyond a
-Fixed-Length Context](https://arxiv.org/abs/1901.02860), which achieved
-state-of-the-art language modeling results at the time of publication.
-
-It is slightly tricky to implement Truncated BPTT efficiently in fairseq, since
-we need to iterate over the data sequentially and disable any batch shuffling
-logic. The code provided in this example illustrates how to implement Truncated
-BPTT in fairseq by overriding ``FairseqTask::get_batch_iterator`` to iterate
-over the data sequentially. Crucially, this example supports batching and
-multi-GPU (data parallel) training.
-
-##### 0. Setup
-
-First, see the general [language modeling README](README.md) for instructions on
-preprocessing the WikiText-103 data.
-
-##### 1. 
Train a Transformer-XL model on WikiText-103 - -We will train a 16-layer Transformer-XL model following the [hyperparameters -used in the original -paper](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/run_wt103_base.sh). - -The following command assumes 4 GPUs, so that the total batch size is 60 -sequences (15 x 4). Training should take ~24 hours on 4 V100 GPUs: -```bash -CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train \ - --user-dir examples/truncated_bptt \ - data-bin/wikitext-103/ \ - --task truncated_bptt_lm --tokens-per-sample 150 \ - --batch-size 15 --max-update 200000 \ - --arch transformer_xl --n-layer 16 --d-model 410 --n-head 10 \ - --d-head 41 --d-inner 2100 --dropout 0.1 --dropatt 0.0 --mem-len 150 \ - --optimizer adam --clip-norm 0.25 \ - --lr-scheduler cosine --warmup-updates 0 --min-lr 0.0 --lr 0.00025 \ - --log-format json --log-interval 25 \ - --fp16 -``` - -If training on a single GPU, set `--update-freq=4` to accumulate 4x gradients -and simulate training on 4 GPUs. - -##### 2. Evaluate - -```bash -fairseq-eval-lm data-bin/wikitext-103/ \ - --path checkpoints/checkpoint_best.pt \ - --user-dir examples/truncated_bptt/ \ - --task truncated_bptt_lm \ - --batch-size 1 --required-batch-size-multiple 1 \ - --model-overrides '{"mem_len":640,"clamp_len":400,"same_length":True}' \ - --tokens-per-sample 64 -# ... | INFO | fairseq_cli.eval_lm | num. model params: 151123537 -# ... | INFO | fairseq_cli.eval_lm | Evaluated 245569 tokens in 83.1s (2956.82 tokens/s) -# ... | INFO | fairseq_cli.eval_lm | Loss (base 2): 4.5668, Perplexity: 23.70 -# Compare to 24.0 test perplexity from the paper -``` - -*Note:* During training the model saw 150 tokens of context -(``--tokens-per-sample=150``) and 150 extra memory tokens (``--mem-len=150``). -During evaluation we measure perplexity on sequences of 64 tokens -(``--tokens-per-sample=64``) and increase the memory length -(``--model-overrides='{"mem_len":640}'``). These settings match the evaluation -settings from [the original -paper](https://github.com/kimiyoung/transformer-xl/blob/master/pytorch/run_wt103_base.sh). diff --git a/kosmos-g/fairseq/examples/truncated_bptt/__init__.py b/kosmos-g/fairseq/examples/truncated_bptt/__init__.py deleted file mode 100644 index eee484d42..000000000 --- a/kosmos-g/fairseq/examples/truncated_bptt/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import transformer_xl_model, truncated_bptt_lm_task # noqa diff --git a/kosmos-g/fairseq/examples/truncated_bptt/transformer_xl_model.py b/kosmos-g/fairseq/examples/truncated_bptt/transformer_xl_model.py deleted file mode 100644 index a6c8b25a0..000000000 --- a/kosmos-g/fairseq/examples/truncated_bptt/transformer_xl_model.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -from dataclasses import dataclass, field -from typing import Dict, List, Optional - -import torch -from fairseq.dataclass import FairseqDataclass -from fairseq.models import ( - FairseqIncrementalDecoder, - FairseqLanguageModel, - register_model, -) -from fairseq.modules.checkpoint_activations import checkpoint_wrapper -from omegaconf import II - - -logger = logging.getLogger(__name__) - - -@dataclass -class TransformerXLConfig(FairseqDataclass): - # defaults come from the original Transformer-XL code - cutoffs: List[int] = field(default_factory=lambda: [20000, 40000, 200000]) - d_model: int = 500 - n_head: int = 10 - d_head: int = 50 - d_inner: int = 1000 - div_val: int = 1 - n_layer: int = 12 - mem_len: int = 0 - clamp_len: int = -1 - same_length: bool = False - dropout: float = 0.0 - dropatt: float = 0.0 - checkpoint_activations: bool = False - offload_activations: bool = False - max_target_positions: int = II("task.max_target_positions") - - -@register_model("transformer_xl", dataclass=TransformerXLConfig) -class TransformerXLLanguageModel(FairseqLanguageModel): - @classmethod - def build_model(cls, cfg: TransformerXLConfig, task): - return cls(TransformerXLDecoder(cfg, task)) - - -class TransformerXLDecoder(FairseqIncrementalDecoder): - def __init__(self, cfg, task): - try: - from transformers.models.transfo_xl import ( - TransfoXLConfig, - TransfoXLLMHeadModel, - ) - except ImportError: - from transformers.configuration_transfo_xl import TransfoXLConfig - from transformers.modeling_transfo_xl import TransfoXLLMHeadModel - - super().__init__(task.target_dictionary) - self.cfg = cfg - - # remove any cutoffs larger than the vocab size - cutoffs = [ - cutoff for cutoff in cfg.cutoffs if cutoff < len(task.target_dictionary) - ] - - config = TransfoXLConfig( - vocab_size=len(task.target_dictionary), - cutoffs=cutoffs, - d_model=cfg.d_model, - d_embed=cfg.d_model, - n_head=cfg.n_head, - d_head=cfg.d_head, - d_inner=cfg.d_inner, - div_val=cfg.div_val, - n_layer=cfg.n_layer, - mem_len=cfg.mem_len, - clamp_len=cfg.clamp_len, - same_length=cfg.same_length, - dropout=cfg.dropout, - dropatt=cfg.dropatt, - ) - logger.info(config) - self.model = TransfoXLLMHeadModel(config) - - # Workaround a bug in huggingface's ``ProjectedAdaptiveLogSoftmax`` - # which adds ``None`` values to an ``nn.ParameterList``, which is not - # supported in PyTorch. Instead we can replace this with an - # ``nn.ModuleList``, which does support ``None`` values. 
- try: - if all(p is None for p in self.model.crit.out_projs._parameters.values()): - self.model.crit.out_projs = torch.nn.ModuleList( - [None] * len(self.model.crit.out_projs._parameters) - ) - except Exception: - pass - - if cfg.checkpoint_activations or cfg.offload_activations: - for i in range(len(self.model.transformer.layers)): - self.model.transformer.layers[i] = checkpoint_wrapper( - self.model.transformer.layers[i], - offload_to_cpu=cfg.offload_activations, - ) - # TODO: may save mem to wrap(layer.pos_ff.CoreNet[3]) - - self._mems = None - - def forward( - self, - src_tokens, - src_lengths=None, # unused - incremental_state: Optional[Dict[str, List[torch.Tensor]]] = None, - encoder_out=None, - ): - if incremental_state is not None: # used during inference - mems = self.get_incremental_state(incremental_state, "mems") - src_tokens = src_tokens[:, -1:] # only keep the most recent token - else: - mems = self._mems - - output = self.model( - input_ids=src_tokens, - mems=mems, - return_dict=False, - ) - - if len(output) >= 2: - if incremental_state is not None: - self.set_incremental_state(incremental_state, "mems", output[1]) - else: - self._mems = output[1] - - return (output[0],) - - def max_positions(self): - return self.cfg.max_target_positions - - def reorder_incremental_state( - self, - incremental_state: Dict[str, Dict[str, Optional[torch.Tensor]]], - new_order: torch.Tensor, - ): - """Reorder incremental state. - - This will be called when the order of the input has changed from the - previous time step. A typical use case is beam search, where the input - order changes between time steps based on the selection of beams. - """ - mems = self.get_incremental_state(incremental_state, "mems") - if mems is not None: - new_mems = [mems_i.index_select(1, new_order) for mems_i in mems] - self.set_incremental_state(incremental_state, "mems", new_mems) diff --git a/kosmos-g/fairseq/examples/truncated_bptt/truncated_bptt_lm_task.py b/kosmos-g/fairseq/examples/truncated_bptt/truncated_bptt_lm_task.py deleted file mode 100644 index 9978481b6..000000000 --- a/kosmos-g/fairseq/examples/truncated_bptt/truncated_bptt_lm_task.py +++ /dev/null @@ -1,285 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -from dataclasses import dataclass, field -from typing import List, Optional, Tuple - -import torch -from fairseq import utils -from fairseq.data import ( - Dictionary, - TokenBlockDataset, - data_utils, - iterators, -) -from fairseq.dataclass import FairseqDataclass -from fairseq.distributed import utils as dist_utils -from fairseq.tasks import FairseqTask, register_task -from omegaconf import II - - -logger = logging.getLogger(__name__) - - -@dataclass -class TruncatedBPTTLMConfig(FairseqDataclass): - data: str = field(default="???", metadata={"help": "path to data directory"}) - tokens_per_sample: int = field( - default=1024, metadata={"help": "max number of tokens per sequence"}, - ) - batch_size: int = II("dataset.batch_size") - # Some models use *max_target_positions* to know how many positional - # embeddings to learn. We use II(...) to make it default to - # *tokens_per_sample*, but in principle there could be more positional - # embeddings than tokens in a single batch. This may also be irrelevant for - # custom model implementations. 
- max_target_positions: int = II("task.tokens_per_sample") - # these will be populated automatically if not provided - data_parallel_rank: Optional[int] = None - data_parallel_size: Optional[int] = None - - -@register_task("truncated_bptt_lm", dataclass=TruncatedBPTTLMConfig) -class TruncatedBPTTLMTask(FairseqTask): - def __init__(self, cfg: TruncatedBPTTLMConfig): - super().__init__(cfg) - - if cfg.data_parallel_rank is None or cfg.data_parallel_size is None: - if torch.distributed.is_initialized(): - cfg.data_parallel_rank = dist_utils.get_data_parallel_rank() - cfg.data_parallel_size = dist_utils.get_data_parallel_world_size() - else: - cfg.data_parallel_rank = 0 - cfg.data_parallel_size = 1 - - # load the dictionary - paths = utils.split_paths(cfg.data) - assert len(paths) > 0 - self.dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt")) - logger.info("dictionary: {} types".format(len(self.dictionary))) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split (e.g., train, valid, test)""" - - # support sharded datasets - paths = utils.split_paths(self.cfg.data) - assert len(paths) > 0 - data_path = paths[(epoch - 1) % len(paths)] - split_path = os.path.join(data_path, split) - - # each element of *data* will be a tensorized line from the original - # text dataset, similar to ``open(split_path).readlines()`` - data = data_utils.load_indexed_dataset( - split_path, self.dictionary, combine=combine - ) - if data is None: - raise FileNotFoundError( - "Dataset not found: {} ({})".format(split, split_path) - ) - - # this is similar to ``data.view(-1).split(tokens_per_sample)`` - data = TokenBlockDataset( - data, - data.sizes, - block_size=self.cfg.tokens_per_sample, - pad=None, # unused - eos=None, # unused - break_mode="none", - ) - - self.datasets[split] = TruncatedBPTTDataset( - data=data, - bsz_per_shard=self.cfg.batch_size, - shard_id=self.cfg.data_parallel_rank, - num_shards=self.cfg.data_parallel_size, - ) - - def dataset(self, split): - return self.datasets[split] - - def get_batch_iterator( - self, - dataset, - num_workers=0, - epoch=1, - data_buffer_size=0, - skip_remainder_batch=False, - **kwargs - ): - return iterators.EpochBatchIterator( - dataset=dataset, - collate_fn=self._collate_fn, - num_workers=num_workers, - epoch=epoch, - buffer_size=data_buffer_size, - # we don't use the batching functionality from EpochBatchIterator; - # instead every item in *dataset* is a whole batch - batch_sampler=[[i] for i in range(len(dataset))], - disable_shuffling=True, - skip_remainder_batch=skip_remainder_batch, - ) - - def _collate_fn(self, items: List[List[torch.Tensor]]): - # we don't use fairseq's batching functionality, so we expect a single - # Tensor of type List[torch.Tensor] - assert len(items) == 1 - - # item will have shape B x T (the last batch may have length < T) - id, item = items[0] - item = data_utils.collate_tokens(item, pad_idx=self.source_dictionary.pad()) - B, T = item.size() - - # shift item one position over and append a padding token for the target - target = torch.nn.functional.pad( - item[:, 1:], (0, 1, 0, 0), value=self.target_dictionary.pad() - ) - - # fairseq expects batches to have the following structure - return { - "id": torch.tensor([id] * item.size(0)), - "net_input": {"src_tokens": item,}, - "target": target, - "nsentences": item.size(0), - "ntokens": item.numel(), - } - - def build_dataset_for_inference( - self, src_tokens: List[torch.Tensor], src_lengths: List[int], **kwargs - ) -> 
torch.utils.data.Dataset: - eos = self.source_dictionary.eos() - dataset = TokenBlockDataset( - src_tokens, - src_lengths, - block_size=None, # ignored for "eos" break mode - pad=self.source_dictionary.pad(), - eos=eos, - break_mode="eos", - ) - - class Dataset(torch.utils.data.Dataset): - def __getitem__(self, i): - item = dataset[i] - if item[-1] == eos: - # remove eos to support generating with a prefix - item = item[:-1] - return (i, [item]) - - def __len__(self): - return len(dataset) - - return Dataset() - - def inference_step( - self, generator, models, sample, prefix_tokens=None, constraints=None - ): - with torch.no_grad(): - if constraints is not None: - raise NotImplementedError - - # SequenceGenerator doesn't use *src_tokens* directly, we need to - # pass the *prefix_tokens* argument instead. - if prefix_tokens is None and sample["net_input"]["src_tokens"].nelement(): - prefix_tokens = sample["net_input"]["src_tokens"] - - # begin generation with the end-of-sentence token - bos_token = self.source_dictionary.eos() - - return generator.generate( - models, sample, prefix_tokens=prefix_tokens, bos_token=bos_token - ) - - def eval_lm_dataloader( - self, - dataset, - max_tokens: Optional[int] = 36000, - batch_size: Optional[int] = None, - max_positions: Optional[int] = None, - num_shards: int = 1, - shard_id: int = 0, - num_workers: int = 1, - data_buffer_size: int = 10, - context_window: int = 0, - ): - if context_window > 0: - raise NotImplementedError( - "Transformer-XL doesn't need --context-window, try " - "--model-overrides '{\"mem_len\":42}' instead " - ) - return self.get_batch_iterator( - dataset=dataset, - max_tokens=max_tokens, - max_sentences=batch_size, - max_positions=max_positions, - ignore_invalid_inputs=True, - num_shards=num_shards, - shard_id=shard_id, - num_workers=num_workers, - data_buffer_size=data_buffer_size, - ).next_epoch_itr(shuffle=False) - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary - - -class TruncatedBPTTDataset(torch.utils.data.Dataset): - def __init__( - self, - data: List[torch.Tensor], # ordered list of items - bsz_per_shard, # number of items processed per GPUs per forward - shard_id, # current GPU ID - num_shards, # number of GPUs - ): - super().__init__() - self.data = data - - def batchify(data, bsz): - # Work out how cleanly we can divide the dataset into bsz parts. - nbatch = data.size(0) // bsz - # Trim off any extra elements that wouldn't cleanly fit (remainders). - data = data.narrow(0, 0, nbatch * bsz) - # Evenly divide the data across the bsz batches. 
- data = data.view(bsz, -1).contiguous() - return data - - # total number of sequences processed by all GPUs in each forward pass - global_batch_size = bsz_per_shard * num_shards - - """ - With a 16 item dataset, bsz_per_shard=2 and num_shards=3, - *indices* might look like: - - indices = [[0, 1], - [2, 3], - [4, 5], - [6, 7], - [8, 9], - [10, 11]] - - The size of the TruncatedBPTTDataset instance will be 2, - and shard 1 will see items: - - [(0, [data[4], data[6]]), - (1, [data[5], data[7]])] - """ - indices = batchify(torch.arange(len(data)), global_batch_size) - assert indices.size(0) == global_batch_size - - self.my_indices = indices[ - shard_id * bsz_per_shard : (shard_id + 1) * bsz_per_shard - ] - assert self.my_indices.size(0) == bsz_per_shard - - def __len__(self): - return self.my_indices.size(1) - - def __getitem__(self, i) -> Tuple[int, List[torch.Tensor]]: - return (i, [self.data[idx] for idx in self.my_indices[:, i]]) diff --git a/kosmos-g/fairseq/examples/unsupervised_quality_estimation/README.md b/kosmos-g/fairseq/examples/unsupervised_quality_estimation/README.md deleted file mode 100644 index e86a0d13b..000000000 --- a/kosmos-g/fairseq/examples/unsupervised_quality_estimation/README.md +++ /dev/null @@ -1,126 +0,0 @@ -# Unsupervised Quality Estimation for Neural Machine Translation (Fomicheva et al., 2020) - -This page includes instructions for reproducing results from the paper [Unsupervised Quality Estimation for Neural -Machine Translation (Fomicheva et al., 2020)](https://arxiv.org/abs/2005.10608) - -## Requirements: - -* mosesdecoder: https://github.com/moses-smt/mosesdecoder -* subword-nmt: https://github.com/rsennrich/subword-nmt -* flores: https://github.com/facebookresearch/flores - -## Download Models and Test Data - -Download translation models and test data from [MLQE dataset repository](https://github.com/facebookresearch/mlqe). - -## Set up: - -Given a testset consisting of source sentences and reference translations: - -* `SRC_LANG`: source language -* `TGT_LANG`: target language -* `INPUT`: input prefix, such that the file `$INPUT.$SRC_LANG` contains source sentences and `$INPUT.$TGT_LANG` -contains the reference sentences -* `OUTPUT_DIR`: output path to store results -* `MOSES_DECODER`: path to mosesdecoder installation -* `BPE_ROOT`: path to subword-nmt installation -* `BPE`: path to BPE model -* `MODEL_DIR`: directory containing the NMT model `.pt` file as well as the source and target vocabularies. -* `TMP`: directory for intermediate temporary files -* `GPU`: if translating with GPU, id of the GPU to use for inference -* `DROPOUT_N`: number of stochastic forward passes - -`$DROPOUT_N` is set to 30 in the experiments reported in the paper. However, we observed that increasing it beyond 10 -does not bring substantial improvements. 
-
-## Translate the data using standard decoding
-
-Preprocess the input data:
-```
-for LANG in $SRC_LANG $TGT_LANG; do
-  perl $MOSES_DECODER/scripts/tokenizer/tokenizer.perl -threads 80 -a -l $LANG < $INPUT.$LANG > $TMP/preprocessed.tok.$LANG
-  python $BPE_ROOT/apply_bpe.py -c ${BPE} < $TMP/preprocessed.tok.$LANG > $TMP/preprocessed.tok.bpe.$LANG
-done
-```
-
-Binarize the data for faster translation:
-
-```
-fairseq-preprocess --srcdict $MODEL_DIR/dict.$SRC_LANG.txt --tgtdict $MODEL_DIR/dict.$TGT_LANG.txt
---source-lang ${SRC_LANG} --target-lang ${TGT_LANG} --testpref $TMP/preprocessed.tok.bpe --destdir $TMP/bin --workers 4
-```
-
-Translate:
-
-```
-CUDA_VISIBLE_DEVICES=$GPU fairseq-generate $TMP/bin --path ${MODEL_DIR}/${SRC_LANG}-${TGT_LANG}.pt --beam 5
---source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --unkpen 5 > $TMP/fairseq.out
-grep ^H $TMP/fairseq.out | cut -d- -f2- | sort -n | cut -f3- > $TMP/mt.out
-```
-
-Post-process:
-
-```
-sed -r 's/(@@ )| (@@ ?$)//g' < $TMP/mt.out | perl $MOSES_DECODER/scripts/tokenizer/detokenizer.perl
--l $TGT_LANG > $OUTPUT_DIR/mt.out
-```
-
-## Produce uncertainty estimates
-
-### Scoring
-
-Make temporary files to store the translations repeated N times.
-
-```
-python ${SCRIPTS}/scripts/uncertainty/repeat_lines.py -i $TMP/preprocessed.tok.bpe.$SRC_LANG -n $DROPOUT_N
--o $TMP/repeated.$SRC_LANG
-python ${SCRIPTS}/scripts/uncertainty/repeat_lines.py -i $TMP/mt.out -n $DROPOUT_N -o $TMP/repeated.$TGT_LANG
-
-fairseq-preprocess --srcdict ${MODEL_DIR}/dict.${SRC_LANG}.txt $TGT_DIC --source-lang ${SRC_LANG}
---target-lang ${TGT_LANG} --testpref ${TMP}/repeated --destdir ${TMP}/bin-repeated
-```
-
-Produce model scores for the generated translations using the `--retain-dropout` option to apply dropout at inference time:
-
-```
-CUDA_VISIBLE_DEVICES=${GPU} fairseq-generate ${TMP}/bin-repeated --path ${MODEL_DIR}/${LP}.pt --beam 5
- --source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --unkpen 5 --score-reference --retain-dropout
- --retain-dropout-modules '["TransformerModel","TransformerEncoder","TransformerDecoder","TransformerEncoderLayer","TransformerDecoderLayer"]'
- --seed 46 > $TMP/dropout.scoring.out
-
-grep ^H $TMP/dropout.scoring.out | cut -d- -f2- | sort -n | cut -f2 > $TMP/dropout.scores
-
-```
-
-Use `--retain-dropout-modules` to specify the modules. By default, dropout is applied in the same places
-as for training.
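-
-Conceptually, the aggregation step below reduces to a few lines of NumPy. This is only an illustrative sketch (the function name is ours, and it assumes one score per line with `$DROPOUT_N` consecutive lines per segment, which is the order `repeat_lines.py` produces):
-
-```python
-import numpy as np
-
-def aggregate_dropout_scores(path, n_repeats):
-    """Return per-segment mean and std over the n_repeats stochastic passes."""
-    scores = np.loadtxt(path)               # one log-probability score per line
-    scores = scores.reshape(-1, n_repeats)  # one row per source segment
-    return scores.mean(axis=1), scores.std(axis=1)
-```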
- -Compute the mean of the resulting output distribution: - -``` -python $SCRIPTS/scripts/uncertainty/aggregate_scores.py -i $TMP/dropout.scores -o $OUTPUT_DIR/dropout.scores.mean --n $DROPOUT_N -``` - -### Generation - -Produce multiple translation hypotheses for the same source using `--retain-dropout` option: - -``` -CUDA_VISIBLE_DEVICES=${GPU} fairseq-generate ${TMP}/bin-repeated --path ${MODEL_DIR}/${LP}.pt - --beam 5 --source-lang $SRC_LANG --target-lang $TGT_LANG --no-progress-bar --retain-dropout - --unkpen 5 --retain-dropout-modules TransformerModel TransformerEncoder TransformerDecoder -TransformerEncoderLayer TransformerDecoderLayer --seed 46 > $TMP/dropout.generation.out - -grep ^H $TMP/dropout.generation.out | cut -d- -f2- | sort -n | cut -f3- > $TMP/dropout.hypotheses_ - -sed -r 's/(@@ )| (@@ ?$)//g' < $TMP/dropout.hypotheses_ | perl $MOSES_DECODER/scripts/tokenizer/detokenizer.perl --l $TGT_LANG > $TMP/dropout.hypotheses -``` - -Compute similarity between multiple hypotheses corresponding to the same source sentence using Meteor -evaluation metric: -``` -python meteor.py -i $TMP/dropout.hypotheses -m <path_to_meteor_installation> -n $DROPOUT_N -o -$OUTPUT_DIR/dropout.gen.sim.meteor -``` diff --git a/kosmos-g/fairseq/examples/unsupervised_quality_estimation/aggregate_scores.py b/kosmos-g/fairseq/examples/unsupervised_quality_estimation/aggregate_scores.py deleted file mode 100644 index 66d50d07f..000000000 --- a/kosmos-g/fairseq/examples/unsupervised_quality_estimation/aggregate_scores.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import sys - -import numpy as np - - -aggregate_funcs = { - "std": np.std, - "var": np.var, - "median": np.median, - "mean": np.mean, - "min": np.min, - "max": np.max, -} - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("-i", "--input_file", required=True, type=str) - parser.add_argument("-n", "--repeat_times", required=True, type=int) - parser.add_argument("-o", "--output_file", required=False) - parser.add_argument("-f", "--func", required=False, default="mean") - args = parser.parse_args() - - stream = open(args.output_file, "w") if args.output_file else sys.stdout - - segment_scores = [] - for line in open(args.input_file): - segment_scores.append(float(line.strip())) - if len(segment_scores) == args.repeat_times: - stream.write("{}\n".format(aggregate_funcs[args.func](segment_scores))) - segment_scores = [] - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/unsupervised_quality_estimation/meteor.py b/kosmos-g/fairseq/examples/unsupervised_quality_estimation/meteor.py deleted file mode 100644 index 2ee0448cf..000000000 --- a/kosmos-g/fairseq/examples/unsupervised_quality_estimation/meteor.py +++ /dev/null @@ -1,109 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import math -import os -import subprocess -import sys -import tempfile -from collections import defaultdict -from itertools import combinations - - -def read_translations(path, n_repeats): - segment_counter = 0 - segment_translations = [] - translations = defaultdict(list) - for line in open(path): - segment_translations.append(" ".join(line.split())) - if len(segment_translations) == n_repeats: - translations[segment_counter] = segment_translations - segment_translations = [] - segment_counter += 1 - return translations - - -def generate_input(translations, n_repeats): - _, ref_path = tempfile.mkstemp() - _, mt_path = tempfile.mkstemp() - ref_fh = open(ref_path, "w") - mt_fh = open(mt_path, "w") - for segid in sorted(translations.keys()): - assert len(translations[segid]) == n_repeats - indexes = combinations(range(n_repeats), 2) - for idx1, idx2 in indexes: - mt_fh.write(translations[segid][idx1].strip() + "\n") - ref_fh.write(translations[segid][idx2].strip() + "\n") - sys.stderr.write("\nSaved translations to %s and %s" % (ref_path, mt_path)) - return ref_path, mt_path - - -def run_meteor(ref_path, mt_path, metric_path, lang="en"): - _, out_path = tempfile.mkstemp() - subprocess.call( - [ - "java", - "-Xmx2G", - "-jar", - metric_path, - mt_path, - ref_path, - "-p", - "0.5 0.2 0.6 0.75", # default parameters, only changed alpha to give equal weight to P and R - "-norm", - "-l", - lang, - ], - stdout=open(out_path, "w"), - ) - os.remove(ref_path) - os.remove(mt_path) - sys.stderr.write("\nSaved Meteor output to %s" % out_path) - return out_path - - -def read_output(meteor_output_path, n_repeats): - n_combinations = math.factorial(n_repeats) / ( - math.factorial(2) * math.factorial(n_repeats - 2) - ) - raw_scores = [] - average_scores = [] - for line in open(meteor_output_path): - if not line.startswith("Segment "): - continue - score = float(line.strip().split("\t")[1]) - raw_scores.append(score) - if len(raw_scores) == n_combinations: - average_scores.append(sum(raw_scores) / n_combinations) - raw_scores = [] - os.remove(meteor_output_path) - return average_scores - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("-i", "--infile") - parser.add_argument("-n", "--repeat_times", type=int) - parser.add_argument("-m", "--meteor") - parser.add_argument("-o", "--output") - args = parser.parse_args() - - translations = read_translations(args.infile, args.repeat_times) - sys.stderr.write("\nGenerating input for Meteor...") - ref_path, mt_path = generate_input(translations, args.repeat_times) - sys.stderr.write("\nRunning Meteor...") - out_path = run_meteor(ref_path, mt_path, args.meteor) - sys.stderr.write("\nReading output...") - scores = read_output(out_path, args.repeat_times) - sys.stderr.write("\nWriting results...") - with open(args.output, "w") as o: - for scr in scores: - o.write("{}\n".format(scr)) - o.close() - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/unsupervised_quality_estimation/repeat_lines.py b/kosmos-g/fairseq/examples/unsupervised_quality_estimation/repeat_lines.py deleted file mode 100644 index 5a04851a7..000000000 --- a/kosmos-g/fairseq/examples/unsupervised_quality_estimation/repeat_lines.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import sys - - -def _normalize_spaces(line): - return " ".join(line.split()) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("-i", "--input_file", required=True, type=str) - parser.add_argument("-n", "--repeat_times", required=True, type=int) - parser.add_argument("-o", "--output_file", required=False, type=str) - args = parser.parse_args() - stream = open(args.output_file, "w") if args.output_file else sys.stdout - - for line in open(args.input_file): - for _ in range(args.repeat_times): - stream.write(_normalize_spaces(line) + "\n") - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/README.md b/kosmos-g/fairseq/examples/wav2vec/README.md deleted file mode 100644 index db363f3f1..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/README.md +++ /dev/null @@ -1,389 +0,0 @@ -# wav2vec 2.0 - -wav2vec 2.0 learns speech representations on unlabeled data as described in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations (Baevski et al., 2020)](https://arxiv.org/abs/2006.11477). - -We learned speech representations in multiple languages as well in [Unsupervised Cross-lingual Representation Learning for Speech Recognition (Conneau et al., 2020)](https://arxiv.org/abs/2006.13979). - -We also combined wav2vec 2.0 with self-training in [Self-training and Pre-training are Complementary for Speech Recognition (Xu et al., 2020)](https://arxiv.org/abs/2010.11430). - -We combined speech data from multiple domains in [Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training (Hsu, et al., 2021)](https://arxiv.org/abs/2104.01027). - -We finetuned XLSR-53 on multiple languages to transcribe unseen languages in [Simple and Effective Zero-shot Cross-lingual Phoneme Recognition (Xu et al., 2021)](https://arxiv.org/abs/2109.11680). 
- - -## Pre-trained models - -Model | Finetuning split | Dataset | Model -|---|---|---|--- -Wav2Vec 2.0 Base | No finetuning | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt) -Wav2Vec 2.0 Base | 10 minutes | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small_10m.pt) -Wav2Vec 2.0 Base | 100 hours | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small_100h.pt) -Wav2Vec 2.0 Base | 960 hours | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small_960h.pt) -Wav2Vec 2.0 Large | No finetuning | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/libri960_big.pt) -Wav2Vec 2.0 Large | 10 minutes | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_big_10m.pt) -Wav2Vec 2.0 Large | 100 hours | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_big_100h.pt) -Wav2Vec 2.0 Large | 960 hours | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_big_960h.pt) -Wav2Vec 2.0 Large (LV-60)* | No finetuning | [Libri-Light](https://github.com/facebookresearch/libri-light) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_new.pt) -Wav2Vec 2.0 Large (LV-60)* | 10 minutes | [Libri-Light](https://github.com/facebookresearch/libri-light) + [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_10m_new.pt) -Wav2Vec 2.0 Large (LV-60)* | 100 hours | [Libri-Light](https://github.com/facebookresearch/libri-light) + [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_100h_new.pt) -Wav2Vec 2.0 Large (LV-60)* | 960 hours | [Libri-Light](https://github.com/facebookresearch/libri-light) + [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec2_vox_960h_new.pt) -Wav2Vec 2.0 Large (LV-60) + Self Training * | 10 minutes | [Libri-Light](https://github.com/facebookresearch/libri-light) + [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_10m_pl.pt) -Wav2Vec 2.0 Large (LV-60) + Self Training * | 100 hours | [Libri-Light](https://github.com/facebookresearch/libri-light) + [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_100h_pl.pt) -Wav2Vec 2.0 Large (LV-60) + Self Training * | 960 hours | [Libri-Light](https://github.com/facebookresearch/libri-light) + [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_960h_pl.pt) -Wav2Vec 2.0 Large (LV-60 + CV + SWBD + FSH) ** | No finetuning | [Libri-Light](https://github.com/facebookresearch/libri-light) + [CommonVoice](https://commonvoice.mozilla.org/en/languages) + [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62) + [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/w2v_large_lv_fsh_swbd_cv.pt) -Wav2Vec 2.0 Large (LV-60 + CV + SWBD + FSH) ** | 960 hours Librispeech | [Libri-Light](https://github.com/facebookresearch/libri-light) + 
[CommonVoice](https://commonvoice.mozilla.org/en/languages) + [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62) + [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/w2v_large_lv_fsh_swbd_cv_ftls960_updated.pt)
-Wav2Vec 2.0 Large (LV-60 + CV + SWBD + FSH) ** | 300 hours Switchboard | [Libri-Light](https://github.com/facebookresearch/libri-light) + [CommonVoice](https://commonvoice.mozilla.org/en/languages) + [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62) + [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/w2v_large_lv_fsh_swbd_cv_ftsb300_updated.pt)
-
-\* updated (Oct. 24, 2020)\
-** updated (Nov. 13, 2021)
-
-We also release multilingual pre-trained wav2vec 2.0 (XLSR) models:
-
-Model | Architecture | Hours | Languages | Datasets | Model
-|---|---|---|---|---|---
-XLSR-53 | Large | 56k | 53 | MLS, CommonVoice, BABEL | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr_53_56k.pt)
-
-The XLSR model uses the following datasets for multilingual pretraining:
-
-* **[MLS: Multilingual LibriSpeech](https://indico2.conference4me.psnc.pl/event/35/contributions/3585/attachments/1060/1101/Wed-2-6-10.pdf)** (8 languages, 50.7k hours): *Dutch, English, French, German, Italian, Polish, Portuguese, Spanish*
-
-* **[CommonVoice](https://commonvoice.mozilla.org/en/languages)** (36 languages, 3.6k hours): *Arabic, Basque, Breton, Chinese (CN), Chinese (HK), Chinese (TW), Chuvash, Dhivehi, Dutch, English, Esperanto, Estonian, French, German, Hakh-Chin, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kinyarwanda, Kyrgyz, Latvian, Mongolian, Persian, Portuguese, Russian, Sakha, Slovenian, Spanish, Swedish, Tamil, Tatar, Turkish, Welsh* (see also [finetuning splits](https://dl.fbaipublicfiles.com/cpc_audio/common_voices_splits.tar.gz) from [this paper](https://arxiv.org/abs/2002.02848)).
-
-* **[Babel](https://catalog.ldc.upenn.edu/byyear)** (17 languages, 1.7k hours): *Assamese, Bengali, Cantonese, Cebuano, Georgian, Haitian, Kazakh, Kurmanji, Lao, Pashto, Swahili, Tagalog, Tamil, Tok, Turkish, Vietnamese, Zulu*
-
-We also finetuned several models on languages from [CommonVoice](https://commonvoice.mozilla.org/en/languages) (version 6.1) and [Babel](https://catalog.ldc.upenn.edu/byyear). Please refer to [our paper](https://arxiv.org/abs/2109.11680) for details about which languages are used.
-
-Pretrained Model | Finetune Dataset | # Languages | Phonemizer | Model | Dictionary
-|---|---|---|---|---|---
-LV-60 | CommonVoice | 26 | [Espeak](https://github.com/espeak-ng/espeak-ng/blob/master/docs/languages.md) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/zero_shot/espeak_en_26lang_m10.pt) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/zero_shot/espeak_dict.txt)
-XLSR-53 | CommonVoice | 26 | [Espeak](https://github.com/espeak-ng/espeak-ng/blob/master/docs/languages.md) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/zero_shot/espeak_26lang_m10.pt) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/zero_shot/espeak_dict.txt)
-XLSR-53 | CommonVoice | 21 | [Phonetisaurus](https://github.com/AdolfVonKleist/Phonetisaurus) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/zero_shot/phonetisaurus_21lang_m10.pt) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/zero_shot/phonetisaurus_dict.txt)
-XLSR-53 | CommonVoice, BABEL | 21, 19 | [Phonetisaurus](https://github.com/AdolfVonKleist/Phonetisaurus) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/zero_shot/phonetisaurus_40lang_m10.pt) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/zero_shot/phonetisaurus_40lang.dict.txt)
-
-We release 2 models that are finetuned on data from 2 different phonemizers. Although the phonemes are all [IPA](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) symbols, there are still subtle differences between the phonemized transcriptions from the 2 phonemizers. Thus, it's better to use the corresponding model if your data is phonemized by either phonemizer above.
-
-## Training a new model with the CLI tools
-
-Given a directory containing wav files to be used for pretraining (we recommend splitting each file into separate files 10 to 30 seconds in length):
-
-### Prepare training data manifest:
-
-First, install the `soundfile` library:
-```shell script
-pip install soundfile
-```
-
-Next, run:
-
-```shell script
-$ python examples/wav2vec/wav2vec_manifest.py /path/to/waves --dest /manifest/path --ext $ext --valid-percent $valid
-```
-
-$ext should be set to flac, wav, or whatever format your dataset happens to use that soundfile can read.
-
-$valid should be set to some reasonable percentage (like 0.01) of training data to use for validation.
-To use a pre-defined validation set (like dev-other from librispeech), set it to 0 and then overwrite valid.tsv with a
-separately pre-processed manifest file.
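-
-For reference, the resulting train.tsv/valid.tsv manifests are plain TSV files: the first line holds the root directory, and every following line holds an audio path relative to that root plus its length in samples (the file names below are invented purely for illustration):
-
-```
-/path/to/waves
-clip_0001.wav	147840
-clip_0002.wav	233472
-```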
-
-### Train a wav2vec 2.0 base model:
-
-This configuration was used for the base model trained on the Librispeech dataset in the wav2vec 2.0 paper.
-
-Note that the input is expected to be single channel, sampled at 16 kHz.
-
-```shell script
-$ fairseq-hydra-train \
-    task.data=/path/to/data \
-    --config-dir /path/to/fairseq-py/examples/wav2vec/config/pretraining \
-    --config-name wav2vec2_base_librispeech
-```
-
-Note: you can simulate 64 GPUs by using k GPUs and adding command line parameters (before `--config-dir`)
-`distributed_training.distributed_world_size=k` `+optimization.update_freq='[x]'` where x = 64/k
-
-### Train a wav2vec 2.0 large model:
-
-This configuration was used for the large model trained on the Libri-light dataset in the wav2vec 2.0 paper.
-
-```shell script
-$ fairseq-hydra-train \
-    task.data=/path/to/data \
-    --config-dir /path/to/fairseq-py/examples/wav2vec/config/pretraining \
-    --config-name wav2vec2_large_librivox
-```
-
-Note: you can simulate 128 GPUs by using k GPUs and adding command line parameters (before `--config-dir`)
-`distributed_training.distributed_world_size=k` `+optimization.update_freq='[x]'` where x = 128/k
-
-### Fine-tune a pre-trained model with CTC:
-
-Fine-tuning a model requires parallel audio and label files, as well as a vocabulary file in fairseq format.
-A letter vocabulary can be downloaded [here](https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt).
-An example [script](libri_labels.py) that generates labels for the Librispeech dataset from the tsv file produced by wav2vec_manifest.py can be used as follows:
-
-```shell script
-split=train
-$ python libri_labels.py /path/to/tsv --output-dir /output/dir --output-name $split
-```
-
-Fine-tuning on 100h of Librispeech with letter targets:
-```shell script
-$ fairseq-hydra-train \
-    distributed_training.distributed_port=$PORT \
-    task.data=/path/to/data \
-    model.w2v_path=/path/to/model.pt \
-    --config-dir /path/to/fairseq-py/examples/wav2vec/config/finetuning \
-    --config-name base_100h
-```
-
-There are other config files in the config/finetuning directory that can be used to fine-tune on other splits.
-You can specify the right config via the `--config-name` parameter.
-
-Note: you can simulate 24 GPUs by using k GPUs and adding command line parameters (before `--config-dir`)
-`distributed_training.distributed_world_size=k` `+optimization.update_freq='[x]'` where x = 24/k
-
-Decoding with a language model during training requires flashlight [python bindings](https://github.com/facebookresearch/flashlight/tree/master/bindings/python) (previously called [wav2letter](https://github.com/facebookresearch/wav2letter)).
-If you want to use a language model, add `+criterion.wer_args='[/path/to/kenlm, /path/to/lexicon, 2, -1]'` to the command line.
-
-### Evaluating a CTC model:
-
-Evaluating a CTC model with a language model requires [flashlight python bindings](https://github.com/facebookresearch/flashlight/tree/master/bindings/python) (previously called [wav2letter](https://github.com/facebookresearch/wav2letter)) to be installed.
-
-The fairseq transformer language model used in the wav2vec 2.0 paper can be obtained from the [wav2letter model repository](https://github.com/facebookresearch/wav2letter/tree/master/recipes/sota/2019).
-Be sure to upper-case the language model vocab after downloading it.
-
-Letter dictionary for pre-trained models can be found [here](https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt).
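-
-For reference, the label files written by libri_labels.py spell each utterance out as space-separated letters, with `|` marking word boundaries; the transcript below is invented purely for illustration:
-
-```
-# train.wrd
-A MAN SAID TO THE UNIVERSE
-# train.ltr
-A | M A N | S A I D | T O | T H E | U N I V E R S E |
-```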
-
-Next, run the evaluation command:
-
-```shell script
-subset=dev_other
-python examples/speech_recognition/infer.py /checkpoint/abaevski/data/speech/libri/10h/wav2vec/raw --task audio_finetuning \
---nbest 1 --path /path/to/model --gen-subset $subset --results-path /path/to/save/results/for/sclite --w2l-decoder kenlm \
---lm-model /path/to/kenlm.bin --lm-weight 2 --word-score -1 --sil-weight 0 --criterion ctc --labels ltr --max-tokens 4000000 \
---post-process letter
-```
-
-To get raw numbers, use `--w2l-decoder viterbi` and omit the lexicon. To use the transformer language model, use `--w2l-decoder fairseqlm`.
-
-## Use wav2vec 2.0 with 🤗Transformers:
-
-Wav2Vec2 is also available in the [🤗Transformers library](https://github.com/huggingface/transformers) since version 4.4.
-
-Pretrained Models can be found on the [hub](https://huggingface.co/models?filter=wav2vec2)
-and documentation can be found [here](https://huggingface.co/transformers/master/model_doc/wav2vec2.html).
-
-Usage example:
-
-```python
-# !pip install transformers
-# !pip install datasets
-import soundfile as sf
-import torch
-from datasets import load_dataset
-from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
-
-# load pretrained model
-processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
-model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
-
-
-librispeech_samples_ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
-
-# load audio
-audio_input, sample_rate = sf.read(librispeech_samples_ds[0]["file"])
-
-# pad input values and return pt tensor
-input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values
-
-# INFERENCE
-
-# retrieve logits & take argmax
-logits = model(input_values).logits
-predicted_ids = torch.argmax(logits, dim=-1)
-
-# transcribe
-transcription = processor.decode(predicted_ids[0])
-
-# FINE-TUNE
-
-target_transcription = "A MAN SAID TO THE UNIVERSE I EXIST"
-
-# encode labels
-with processor.as_target_processor():
-    labels = processor(target_transcription, return_tensors="pt").input_ids
-
-# compute loss by passing labels
-loss = model(input_values, labels=labels).loss
-loss.backward()
-```
-
-# wav2vec
-
-Example to train a wav2vec model as described in [wav2vec: Unsupervised Pre-training for Speech Recognition (Schneider et al., 2019)](https://arxiv.org/abs/1904.05862).
-
-## Pre-trained models
-
-Description | Dataset | Model
----|---|---
-Wav2Vec large | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_large.pt)
-
-#### Example usage:
-```python
-import torch
-import fairseq
-
-cp_path = '/path/to/wav2vec.pt'
-model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([cp_path])
-model = model[0]
-model.eval()
-
-wav_input_16khz = torch.randn(1,10000)
-z = model.feature_extractor(wav_input_16khz)
-c = model.feature_aggregator(z)
-```
-
-## Training a new model with the CLI tools
-
-Given a directory containing wav files to be used for pretraining (we recommend splitting each file into separate files 10 to 30 seconds in length):
-
-### Prepare training data manifest:
-
-```
-$ python examples/wav2vec/wav2vec_manifest.py /path/to/waves --dest /manifest/path --ext wav
-```
-
-### Train a wav2vec model:
-
-```
-$ python train.py /manifest/path --save-dir /model/path --num-workers 6 --fp16 --max-update 400000 --save-interval 1 --no-epoch-checkpoints \
---arch wav2vec --task audio_pretraining --min-lr 1e-06 --stop-min-lr 1e-09 --optimizer adam --lr 0.005 --lr-scheduler cosine \
---conv-feature-layers [(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1)] \
---conv-aggregator-layers [(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)] \
---skip-connections-agg --residual-scale 0.5 --log-compression --warmup-updates 500 --warmup-init-lr 1e-07 --criterion wav2vec --num-negatives 10 \
---max-sample-size 150000 --max-tokens 1500000 --skip-invalid-size-inputs-valid-test
-```
-
-### Run wav2vec2 pre-training on Google Cloud TPUs:
-
-Wav2Vec2 is now supported on TPUs! Currently only pre-training is supported.
-
-#### Using hydra on a v3-8:
-
-```
-$ OMP_NUM_THREADS=1 fairseq-hydra-train \
-  task.data=/manifest/path \
-  --config-dir /PATH/TO/FAIRSEQ/examples/wav2vec/config/pretraining \
-  --config-name wav2vec2_large_librivox_tpu.yaml
-```
-
-#### Using command line arguments on a v3-8:
-Note: running via command-line arguments currently has a [known problem](https://github.com/pytorch/fairseq/issues/3741).
-
-```
-$ OMP_NUM_THREADS=1 python train.py /manifest/path --save-dir /model/path --num-workers 6 --fp16 --max-update 400000 --save-interval 1 --no-epoch-checkpoints \
---arch wav2vec2 --task audio_pretraining --min-lr 1e-06 --stop-min-lr 1e-09 --optimizer adam --lr 0.005 --lr-scheduler cosine \
---conv-feature-layers [(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1)] \
---conv-aggregator-layers [(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)] \
---skip-connections-agg --residual-scale 0.5 --log-compression --warmup-updates 500 --warmup-init-lr 1e-07 --criterion wav2vec --num-negatives 10 \
---max-sample-size 150000 --max-tokens 1500000 --skip-invalid-size-inputs-valid-test \
---tpu --distributed-world-size 8 --num-batch-buckets 3 --enable-padding \
---encoder-layerdrop 0 --mask-channel-prob 0.1
-```
-
-#### Using hydra on a pod slice (v3-N with N > 8):
-
-```
-$ OMP_NUM_THREADS=1 fairseq-hydra-train \
-  task.data=/manifest/path \
-  --config-dir /PATH/TO/FAIRSEQ/examples/wav2vec/config/pretraining \
-  --config-name wav2vec2_large_librivox_tpu-pod.yaml # edit distributed-world-size accordingly
-```
-
-#### Using command line arguments on a pod slice (v3-N with N > 8):
-Note: running via command-line arguments currently has a [known problem](https://github.com/pytorch/fairseq/issues/3741).
-
-```
-$ python -m torch_xla.distributed.xla_dist \
-  --tpu ${TPUNAME} --conda-env=torch-xla-${TORCH_XLA_VERSION} --env OMP_NUM_THREADS=1 \
-  -- \
-python train.py /manifest/path --save-dir /model/path --num-workers 6 --fp16 --max-update 400000 --save-interval 1 --no-epoch-checkpoints \
---arch wav2vec2 --task audio_pretraining --min-lr 1e-06 --stop-min-lr 1e-09 --optimizer adam --lr 0.005 --lr-scheduler cosine \
---conv-feature-layers [(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1)] \
---conv-aggregator-layers [(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)] \
---skip-connections-agg --residual-scale 0.5 --log-compression --warmup-updates 500 --warmup-init-lr 1e-07 --criterion wav2vec --num-negatives 10 \
---max-sample-size 150000 --max-tokens 1500000 --skip-invalid-size-inputs-valid-test \
---tpu --distributed-world-size ${WORLD_SIZE} --num-batch-buckets 3 --enable-padding \
---encoder-layerdrop 0 --mask-channel-prob 0.1
-```
-
-### Extract embeddings from the downstream task data:
-
-```
-$ PYTHONPATH=/path/to/fairseq python examples/wav2vec/wav2vec_featurize.py --input /path/to/task/waves --output /path/to/output \
---model /model/path/checkpoint_best.pt --split train valid test
-```
-
-# vq-wav2vec
-
-Example to train a vq-wav2vec model as described in [vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations (Baevski et al., 2019)](https://arxiv.org/abs/1910.05453).
-
-These models are also used in [Effectiveness of self-supervised pre-training for speech recognition (Baevski et al., 2019)](https://arxiv.org/abs/1911.03912).
- -## Pre-trained models - -Description | Dataset | Model ----|---|--- -vq-wav2vec Gumbel | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/vq-wav2vec.pt) -vq-wav2vec K-means | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/vq-wav2vec_kmeans.pt) -RoBERTa on K-means codes | [Librispeech](http://www.openslr.org/12) | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/bert_kmeans.tar) - -#### Example usage: -```python -import torch -import fairseq - -cp_path = '/path/to/vq-wav2vec.pt' -model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([cp_path]) -model = model[0] -model.eval() - -wav_input_16khz = torch.randn(1,10000) -z = model.feature_extractor(wav_input_16khz) -_, idxs = model.vector_quantizer.forward_idx(z) -print(idxs.shape) # output: torch.Size([1, 60, 2]), 60 timesteps with 2 indices corresponding to 2 groups in the model -``` - -## Training a new model with the CLI tools - -Given a directory containing wav files to be used for pretraining (we recommend splitting each file into separate files 10 to 30 seconds in length): - -### Prepare training data manifest: - -``` -$ python examples/wav2vec/wav2vec_manifest.py /path/to/waves --dest /manifest/path --ext wav -``` - -### Train a Gumbel vq-wav2vec model: - -``` -$ python train.py /manifest/path --save-dir /model/path --num-workers 6 --fp16 --max-update 400000 \ ---save-interval 1 --no-epoch-checkpoints --arch wav2vec --task audio_pretraining --min-lr 1e-06 --stop-min-lr 1e-09 \ ---optimizer adam --lr 1e-05 --lr-scheduler cosine \ ---conv-feature-layers [(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1), (512, 1, 1)] \ ---conv-aggregator-layers [(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)] \ ---activation gelu --offset auto --skip-connections-agg --residual-scale 0.5 \ ---log-keys ["prob_perplexity","code_perplexity","temp"] --vq-type gumbel --vq-groups 2 --vq-depth 2 \ ---combine-groups --vq-vars 320 --vq-temp (2,0.5,0.999995) --prediction-steps 12 --warmup-updates 1000 \ ---warmup-init-lr 1e-07 --criterion wav2vec --num-negatives 10 --max-sample-size 150000 \ ---max-tokens 300000 --cross-sample-negatives 0 --update-freq 1 --seed 2 --skip-invalid-size-inputs-valid-test -``` - -For k-means training, set `--vq-type kmeans` and add the `--loss-weights [1]` argument. Pre-trained models were trained on 16 GPUs. - -### Tokenize audio data (e.g. for BERT training): - -``` -$ PYTHONPATH=/path/to/fairseq python examples/wav2vec/vq-wav2vec_featurize.py --data-dir /manifest/path --output-dir /path/to/output \ ---checkpoint /model/path/checkpoint_best.pt --split train valid test --extension tsv -``` diff --git a/kosmos-g/fairseq/examples/wav2vec/__init__.py b/kosmos-g/fairseq/examples/wav2vec/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/base_100h.yaml b/kosmos-g/fairseq/examples/wav2vec/config/finetuning/base_100h.yaml deleted file mode 100644 index 153b5df17..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/base_100h.yaml +++ /dev/null @@ -1,58 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - -checkpoint: - no_epoch_checkpoints: true - best_checkpoint_metric: wer - -task: - _name: audio_finetuning - data: ???
- normalize: false - labels: ltr - -dataset: - num_workers: 6 - max_tokens: 3200000 - skip_invalid_size_inputs_valid_test: true - valid_subset: dev_other - -distributed_training: - ddp_backend: legacy_ddp - distributed_world_size: 2 - -criterion: - _name: ctc - zero_infinity: true - -optimization: - max_update: 80000 - lr: [0.00003] - sentence_avg: true - update_freq: [4] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-08 - -lr_scheduler: - _name: tri_stage - phase_ratio: [0.1, 0.4, 0.5] - final_lr_scale: 0.05 - -model: - _name: wav2vec_ctc - w2v_path: ??? - apply_mask: true - mask_prob: 0.65 - mask_channel_prob: 0.5 - mask_channel_length: 64 - layerdrop: 0.1 - activation_dropout: 0.1 - feature_grad_mult: 0.0 - freeze_finetune_updates: 0 diff --git a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/base_10h.yaml b/kosmos-g/fairseq/examples/wav2vec/config/finetuning/base_10h.yaml deleted file mode 100644 index 504451802..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/base_10h.yaml +++ /dev/null @@ -1,63 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - -checkpoint: - save_interval: 50 - save_interval_updates: 10000 - keep_interval_updates: 1 - no_epoch_checkpoints: true - best_checkpoint_metric: wer - -task: - _name: audio_finetuning - data: ??? - normalize: false - labels: ltr - -dataset: - num_workers: 6 - max_tokens: 3200000 - skip_invalid_size_inputs_valid_test: true - validate_after_updates: 10000 - validate_interval: 50 - valid_subset: dev_other - -distributed_training: - ddp_backend: legacy_ddp - distributed_world_size: 2 - -criterion: - _name: ctc - zero_infinity: true - -optimization: - max_update: 20000 - lr: [0.00005] - sentence_avg: true - update_freq: [4] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-08 - -lr_scheduler: - _name: tri_stage - phase_ratio: [0.1, 0.4, 0.5] - final_lr_scale: 0.05 - -model: - _name: wav2vec_ctc - w2v_path: ??? - apply_mask: true - mask_prob: 0.65 - mask_channel_prob: 0.5 - mask_channel_length: 64 - layerdrop: 0.05 - activation_dropout: 0.1 - feature_grad_mult: 0.0 - freeze_finetune_updates: 10000 diff --git a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/base_10m.yaml b/kosmos-g/fairseq/examples/wav2vec/config/finetuning/base_10m.yaml deleted file mode 100644 index 14abc013b..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/base_10m.yaml +++ /dev/null @@ -1,63 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - -checkpoint: - save_interval: 1000 - save_interval_updates: 50 - keep_interval_updates: 1 - no_epoch_checkpoints: true - best_checkpoint_metric: wer - -task: - _name: audio_finetuning - data: ??? - normalize: false - labels: ltr - -dataset: - num_workers: 6 - max_tokens: 3200000 - skip_invalid_size_inputs_valid_test: true - validate_after_updates: 10000 - validate_interval: 1000 - valid_subset: dev_other - -distributed_training: - ddp_backend: legacy_ddp - distributed_world_size: 2 - -criterion: - _name: ctc - zero_infinity: true - -optimization: - max_update: 13000 - lr: [0.00005] - sentence_avg: true - update_freq: [4] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-08 - -lr_scheduler: - _name: tri_stage - phase_ratio: [0.1, 0.4, 0.5] - final_lr_scale: 0.05 - -model: - _name: wav2vec_ctc - w2v_path: ??? 
- apply_mask: true - mask_prob: 0.65 - mask_channel_prob: 0.25 - mask_channel_length: 64 - layerdrop: 0.1 - activation_dropout: 0.1 - feature_grad_mult: 0.0 - freeze_finetune_updates: 10000 diff --git a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/base_1h.yaml b/kosmos-g/fairseq/examples/wav2vec/config/finetuning/base_1h.yaml deleted file mode 100644 index 14abc013b..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/base_1h.yaml +++ /dev/null @@ -1,63 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - -checkpoint: - save_interval: 1000 - save_interval_updates: 50 - keep_interval_updates: 1 - no_epoch_checkpoints: true - best_checkpoint_metric: wer - -task: - _name: audio_finetuning - data: ??? - normalize: false - labels: ltr - -dataset: - num_workers: 6 - max_tokens: 3200000 - skip_invalid_size_inputs_valid_test: true - validate_after_updates: 10000 - validate_interval: 1000 - valid_subset: dev_other - -distributed_training: - ddp_backend: legacy_ddp - distributed_world_size: 2 - -criterion: - _name: ctc - zero_infinity: true - -optimization: - max_update: 13000 - lr: [0.00005] - sentence_avg: true - update_freq: [4] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-08 - -lr_scheduler: - _name: tri_stage - phase_ratio: [0.1, 0.4, 0.5] - final_lr_scale: 0.05 - -model: - _name: wav2vec_ctc - w2v_path: ??? - apply_mask: true - mask_prob: 0.65 - mask_channel_prob: 0.25 - mask_channel_length: 64 - layerdrop: 0.1 - activation_dropout: 0.1 - feature_grad_mult: 0.0 - freeze_finetune_updates: 10000 diff --git a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/base_960h.yaml b/kosmos-g/fairseq/examples/wav2vec/config/finetuning/base_960h.yaml deleted file mode 100644 index 3eadc36b3..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/base_960h.yaml +++ /dev/null @@ -1,57 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - -checkpoint: - no_epoch_checkpoints: true - best_checkpoint_metric: wer - -task: - _name: audio_finetuning - data: ??? - normalize: false - labels: ltr - -dataset: - num_workers: 6 - max_tokens: 3200000 - skip_invalid_size_inputs_valid_test: true - valid_subset: dev_other - -distributed_training: - ddp_backend: legacy_ddp - distributed_world_size: 8 - -criterion: - _name: ctc - zero_infinity: true - -optimization: - max_update: 320000 - lr: [0.0001] - sentence_avg: true - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-08 - -lr_scheduler: - _name: tri_stage - phase_ratio: [0.1, 0.4, 0.5] - final_lr_scale: 0.05 - -model: - _name: wav2vec_ctc - w2v_path: ??? - apply_mask: true - mask_prob: 0.5 - mask_channel_prob: 0.1 - mask_channel_length: 64 - layerdrop: 0.1 - activation_dropout: 0.1 - feature_grad_mult: 0.0 - freeze_finetune_updates: 0 diff --git a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/vox_100h.yaml b/kosmos-g/fairseq/examples/wav2vec/config/finetuning/vox_100h.yaml deleted file mode 100644 index b8f81e5e1..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/vox_100h.yaml +++ /dev/null @@ -1,58 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - -checkpoint: - no_epoch_checkpoints: true - best_checkpoint_metric: wer - -task: - _name: audio_finetuning - data: ??? 
- normalize: true - labels: ltr - -dataset: - num_workers: 6 - max_tokens: 1280000 - skip_invalid_size_inputs_valid_test: true - valid_subset: dev_other - -distributed_training: - ddp_backend: legacy_ddp - distributed_world_size: 4 - -criterion: - _name: ctc - zero_infinity: true - -optimization: - max_update: 80000 - lr: [0.00003] - sentence_avg: true - update_freq: [5] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-08 - -lr_scheduler: - _name: tri_stage - phase_ratio: [0.1, 0.4, 0.5] - final_lr_scale: 0.05 - -model: - _name: wav2vec_ctc - w2v_path: ??? - apply_mask: true - mask_prob: 0.5 - mask_channel_prob: 0.5 - mask_channel_length: 64 - layerdrop: 0.1 - activation_dropout: 0.1 - feature_grad_mult: 0.0 - freeze_finetune_updates: 10000 diff --git a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/vox_10h.yaml b/kosmos-g/fairseq/examples/wav2vec/config/finetuning/vox_10h.yaml deleted file mode 100644 index 8f1ca71ee..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/vox_10h.yaml +++ /dev/null @@ -1,63 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - -checkpoint: - save_interval: 50 - save_interval_updates: 10000 - keep_interval_updates: 1 - no_epoch_checkpoints: true - best_checkpoint_metric: wer - -task: - _name: audio_finetuning - data: ??? - normalize: true - labels: ltr - -dataset: - num_workers: 6 - max_tokens: 1280000 - skip_invalid_size_inputs_valid_test: true - validate_after_updates: 10000 - validate_interval: 50 - valid_subset: dev_other - -distributed_training: - ddp_backend: legacy_ddp - distributed_world_size: 4 - -criterion: - _name: ctc - zero_infinity: true - -optimization: - max_update: 20000 - lr: [0.0001] - sentence_avg: true - update_freq: [5] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-08 - -lr_scheduler: - _name: tri_stage - phase_ratio: [0.1, 0.4, 0.5] - final_lr_scale: 0.05 - -model: - _name: wav2vec_ctc - w2v_path: ??? - apply_mask: true - mask_prob: 0.75 - mask_channel_prob: 0.25 - mask_channel_length: 64 - layerdrop: 0.1 - activation_dropout: 0.1 - feature_grad_mult: 0.0 - freeze_finetune_updates: 10000 diff --git a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/vox_10m.yaml b/kosmos-g/fairseq/examples/wav2vec/config/finetuning/vox_10m.yaml deleted file mode 100644 index 07e327fe7..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/vox_10m.yaml +++ /dev/null @@ -1,63 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - -checkpoint: - save_interval: 1000 - save_interval_updates: 50 - keep_interval_updates: 1 - no_epoch_checkpoints: true - best_checkpoint_metric: wer - -task: - _name: audio_finetuning - data: ??? - normalize: true - labels: ltr - -dataset: - num_workers: 6 - max_tokens: 1280000 - skip_invalid_size_inputs_valid_test: true - validate_after_updates: 10000 - validate_interval: 1000 - valid_subset: dev_other - -distributed_training: - ddp_backend: legacy_ddp - distributed_world_size: 4 - -criterion: - _name: ctc - zero_infinity: true - -optimization: - max_update: 13000 - lr: [0.0001] - sentence_avg: true - update_freq: [5] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-08 - -lr_scheduler: - _name: tri_stage - phase_ratio: [0.1, 0.4, 0.5] - final_lr_scale: 0.05 - -model: - _name: wav2vec_ctc - w2v_path: ??? 
- apply_mask: true - mask_prob: 0.65 - mask_channel_prob: 0.25 - mask_channel_length: 64 - layerdrop: 0.1 - activation_dropout: 0.1 - feature_grad_mult: 0.0 - freeze_finetune_updates: 10000 diff --git a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/vox_1h.yaml b/kosmos-g/fairseq/examples/wav2vec/config/finetuning/vox_1h.yaml deleted file mode 100644 index fac1bbb32..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/vox_1h.yaml +++ /dev/null @@ -1,63 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - -checkpoint: - save_interval: 1000 - save_interval_updates: 50 - keep_interval_updates: 1 - no_epoch_checkpoints: true - best_checkpoint_metric: wer - -task: - _name: audio_finetuning - data: ??? - normalize: true - labels: ltr - -dataset: - num_workers: 6 - max_tokens: 1280000 - skip_invalid_size_inputs_valid_test: true - validate_after_updates: 10000 - validate_interval: 1000 - valid_subset: dev_other - -distributed_training: - ddp_backend: legacy_ddp - distributed_world_size: 4 - -criterion: - _name: ctc - zero_infinity: true - -optimization: - max_update: 13000 - lr: [0.0003] - sentence_avg: true - update_freq: [5] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-08 - -lr_scheduler: - _name: tri_stage - phase_ratio: [0.1, 0.4, 0.5] - final_lr_scale: 0.05 - -model: - _name: wav2vec_ctc - w2v_path: ??? - apply_mask: true - mask_prob: 0.75 - mask_channel_prob: 0.25 - mask_channel_length: 64 - layerdrop: 0.1 - activation_dropout: 0.1 - feature_grad_mult: 0.0 - freeze_finetune_updates: 10000 diff --git a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/vox_960h.yaml b/kosmos-g/fairseq/examples/wav2vec/config/finetuning/vox_960h.yaml deleted file mode 100644 index 9d72404fa..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/config/finetuning/vox_960h.yaml +++ /dev/null @@ -1,57 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - -checkpoint: - no_epoch_checkpoints: true - best_checkpoint_metric: wer - -task: - _name: audio_finetuning - data: ??? - normalize: true - labels: ltr - -dataset: - num_workers: 6 - max_tokens: 1280000 - skip_invalid_size_inputs_valid_test: true - valid_subset: dev_other - -distributed_training: - ddp_backend: legacy_ddp - distributed_world_size: 24 - -criterion: - _name: ctc - zero_infinity: true - -optimization: - max_update: 320000 - lr: [0.00003] - sentence_avg: true - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-08 - -lr_scheduler: - _name: tri_stage - phase_ratio: [0.1, 0.4, 0.5] - final_lr_scale: 0.05 - -model: - _name: wav2vec_ctc - w2v_path: ??? - apply_mask: true - mask_prob: 0.5 - mask_channel_prob: 0.25 - mask_channel_length: 64 - layerdrop: 0.1 - activation_dropout: 0.1 - feature_grad_mult: 0.0 - freeze_finetune_updates: 10000 diff --git a/kosmos-g/fairseq/examples/wav2vec/config/pretraining/wav2vec2_base_librispeech.yaml b/kosmos-g/fairseq/examples/wav2vec/config/pretraining/wav2vec2_base_librispeech.yaml deleted file mode 100644 index b686e21ab..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/config/pretraining/wav2vec2_base_librispeech.yaml +++ /dev/null @@ -1,57 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - -checkpoint: - save_interval_updates: 25000 - keep_interval_updates: 1 - no_epoch_checkpoints: true - -task: - _name: audio_pretraining - data: ??? 
- max_sample_size: 250000 - min_sample_size: 32000 - normalize: false - -dataset: - num_workers: 6 - max_tokens: 1400000 - skip_invalid_size_inputs_valid_test: true - -distributed_training: - distributed_world_size: 64 - ddp_backend: legacy_ddp - -criterion: - _name: wav2vec - infonce: true - log_keys: ["prob_perplexity","code_perplexity","temp"] - loss_weights: [0.1, 10] - -optimization: - max_update: 400000 - lr: [0.0005] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - weight_decay: 0.01 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 32000 - -model: - _name: wav2vec2 - quantize_targets: true - final_dim: 256 - encoder_layerdrop: 0.05 - dropout_input: 0.1 - dropout_features: 0.1 - feature_grad_mult: 0.1 - encoder_embed_dim: 768 diff --git a/kosmos-g/fairseq/examples/wav2vec/config/pretraining/wav2vec2_large_librivox.yaml b/kosmos-g/fairseq/examples/wav2vec/config/pretraining/wav2vec2_large_librivox.yaml deleted file mode 100644 index 3192ce4cb..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/config/pretraining/wav2vec2_large_librivox.yaml +++ /dev/null @@ -1,70 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - -checkpoint: - save_interval_updates: 25000 - keep_interval_updates: 1 - no_epoch_checkpoints: true - -task: - _name: audio_pretraining - data: ??? - max_sample_size: 320000 - min_sample_size: 32000 - normalize: true - -dataset: - batch_size: 4 - num_workers: 6 - max_tokens: 1200000 - skip_invalid_size_inputs_valid_test: true - -distributed_training: - distributed_world_size: 128 - ddp_backend: legacy_ddp - -criterion: - _name: wav2vec - infonce: true - log_keys: ["prob_perplexity","code_perplexity","temp"] - loss_weights: [0.1, 0] - -optimization: - max_update: 1000000 - lr: [0.005] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - weight_decay: 0.01 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 32000 - -model: - _name: wav2vec2 - quantize_targets: true - extractor_mode: layer_norm - layer_norm_first: true - final_dim: 768 - latent_temp: [2.0,0.1,0.999995] - encoder_layerdrop: 0.00 - dropout_input: 0.0 - dropout_features: 0.0 - dropout: 0.0 - attention_dropout: 0.0 - conv_bias: true - - encoder_layers: 24 - encoder_embed_dim: 1024 - encoder_ffn_embed_dim: 4096 - encoder_attention_heads: 16 - - feature_grad_mult: 1.0 - diff --git a/kosmos-g/fairseq/examples/wav2vec/config/pretraining/wav2vec2_large_librivox_tpu-pod.yaml b/kosmos-g/fairseq/examples/wav2vec/config/pretraining/wav2vec2_large_librivox_tpu-pod.yaml deleted file mode 100644 index ff35a95b6..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/config/pretraining/wav2vec2_large_librivox_tpu-pod.yaml +++ /dev/null @@ -1,72 +0,0 @@ -# @package _group_ - -common: - tpu: true - fp16: false - log_format: json - log_interval: 10 - -checkpoint: - save_interval_updates: 25000 - keep_interval_updates: 1 - no_epoch_checkpoints: true - -task: - _name: audio_pretraining - data: ??? 
- max_sample_size: 250000 - min_sample_size: 32000 - normalize: true - num_batch_buckets: 3 - precompute_mask_indices: true - enable_padding: true - -dataset: - num_workers: 6 - max_tokens: 1200000 - skip_invalid_size_inputs_valid_test: true - -distributed_training: - distributed_world_size: 128 - ddp_backend: legacy_ddp - -criterion: - _name: wav2vec - infonce: true - log_keys: ["prob_perplexity","code_perplexity","temp"] - loss_weights: [0.1, 0] - -optimization: - max_update: 1000000 - lr: [0.005] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - weight_decay: 0.01 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 32000 - -model: - _name: wav2vec2 - quantize_targets: true - extractor_mode: layer_norm - layer_norm_first: true - final_dim: 768 - latent_temp: [2.0,0.1,0.999995] - encoder_layerdrop: 0.00 - dropout_input: 0.0 - dropout_features: 0.0 - dropout: 0.0 - attention_dropout: 0.0 - conv_bias: true - - encoder_layers: 24 - encoder_embed_dim: 1024 - encoder_ffn_embed_dim: 4096 - encoder_attention_heads: 16 - - feature_grad_mult: 1.0 diff --git a/kosmos-g/fairseq/examples/wav2vec/config/pretraining/wav2vec2_large_librivox_tpu.yaml b/kosmos-g/fairseq/examples/wav2vec/config/pretraining/wav2vec2_large_librivox_tpu.yaml deleted file mode 100644 index ee55bdab7..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/config/pretraining/wav2vec2_large_librivox_tpu.yaml +++ /dev/null @@ -1,77 +0,0 @@ -# @package _group_ - -common: - tpu: true - fp16: false - log_format: json - log_interval: 10 - -checkpoint: - save_interval_updates: 25000 - keep_interval_updates: 1 - no_epoch_checkpoints: true - -task: - _name: audio_pretraining - data: ??? - max_sample_size: 250000 - min_sample_size: 32000 - normalize: true - num_batch_buckets: 3 - precompute_mask_indices: true - enable_padding: true - inferred_w2v_config: - mask_prob: 0.65 - mask_selection: 'static' - mask_other: 0 - mask_channel_prob: 0.1 - -dataset: - num_workers: 6 - max_tokens: 1200000 - skip_invalid_size_inputs_valid_test: true - -distributed_training: - distributed_world_size: 8 - ddp_backend: legacy_ddp - -criterion: - _name: wav2vec - infonce: true - log_keys: ["prob_perplexity","code_perplexity","temp"] - loss_weights: [0.1, 0] - -optimization: - max_update: 1000000 - lr: [0.005] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-06 - weight_decay: 0.01 - -lr_scheduler: - _name: polynomial_decay - warmup_updates: 32000 - -model: - _name: wav2vec2 - quantize_targets: true - extractor_mode: layer_norm - layer_norm_first: true - final_dim: 768 - latent_temp: [2.0,0.1,0.999995] - encoder_layerdrop: 0.00 - dropout_input: 0.0 - dropout_features: 0.0 - dropout: 0.0 - attention_dropout: 0.0 - conv_bias: true - - encoder_layers: 24 - encoder_embed_dim: 1024 - encoder_ffn_embed_dim: 4096 - encoder_attention_heads: 16 - - feature_grad_mult: 1.0 diff --git a/kosmos-g/fairseq/examples/wav2vec/libri_labels.py b/kosmos-g/fairseq/examples/wav2vec/libri_labels.py deleted file mode 100644 index 694a20260..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/libri_labels.py +++ /dev/null @@ -1,56 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -""" -Helper script to pre-compute embeddings for a flashlight (previously called wav2letter++) dataset -""" - -import argparse -import os - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("tsv") - parser.add_argument("--output-dir", required=True) - parser.add_argument("--output-name", required=True) - args = parser.parse_args() - - os.makedirs(args.output_dir, exist_ok=True) - - transcriptions = {} - - with open(args.tsv, "r") as tsv, open( - os.path.join(args.output_dir, args.output_name + ".ltr"), "w" - ) as ltr_out, open( - os.path.join(args.output_dir, args.output_name + ".wrd"), "w" - ) as wrd_out: - root = next(tsv).strip() - for line in tsv: - line = line.strip() - dir = os.path.dirname(line) - if dir not in transcriptions: - parts = dir.split(os.path.sep) - trans_path = f"{parts[-2]}-{parts[-1]}.trans.txt" - path = os.path.join(root, dir, trans_path) - assert os.path.exists(path) - texts = {} - with open(path, "r") as trans_f: - for tline in trans_f: - items = tline.strip().split() - texts[items[0]] = " ".join(items[1:]) - transcriptions[dir] = texts - part = os.path.basename(line).split(".")[0] - assert part in transcriptions[dir] - print(transcriptions[dir][part], file=wrd_out) - print( - " ".join(list(transcriptions[dir][part].replace(" ", "|"))) + " |", - file=ltr_out, - ) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/scripts/binarize_manifest.sh b/kosmos-g/fairseq/examples/wav2vec/scripts/binarize_manifest.sh deleted file mode 100644 index 6f201bdb5..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/scripts/binarize_manifest.sh +++ /dev/null @@ -1,33 +0,0 @@ -#!/usr/bin/env bash - -# usage: bash binarize_manifest <dest_dir> <train_split> <valid_split> - -DEST_DIR=$1 -TRAIN_SPLIT=$2 -VALID_SPLIT=$3 -FAIRSEQ_ROOT=$4 - -mkdir -p $DEST_DIR - -# split file path and lengths into separate files -cut -f1 $TRAIN_SPLIT.tsv > $DEST_DIR/train_fnames.txt -cut -f1 $VALID_SPLIT.tsv > $DEST_DIR/valid_fnames.txt -cut -f2 $TRAIN_SPLIT.tsv > $DEST_DIR/train.lengths -cut -f2 $VALID_SPLIT.tsv > $DEST_DIR/valid.lengths - -# copy root directory -head -1 $TRAIN_SPLIT.tsv > $DEST_DIR/train.root -head -1 $VALID_SPLIT.tsv > $DEST_DIR/valid.root - -# remove root directory -sed -i '1d' $DEST_DIR/train_fnames.txt -sed -i '1d' $DEST_DIR/valid_fnames.txt -sed -i '1d' $DEST_DIR/train.lengths -sed -i '1d' $DEST_DIR/valid.lengths - -# insert spaces between characters -sed -i -e 's/\(.\)/\1 /g' $DEST_DIR/train_fnames.txt -sed -i -e 's/\(.\)/\1 /g' $DEST_DIR/valid_fnames.txt - -# run preprocessor -PYTHONPATH=$FAIRSEQ_ROOT python $FAIRSEQ_ROOT/fairseq_cli/preprocess.py --dataset-impl mmap --trainpref $DEST_DIR/train_fnames.txt --validpref $DEST_DIR/valid_fnames.txt --workers 60 --only-source --destdir $DEST_DIR diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/README.md b/kosmos-g/fairseq/examples/wav2vec/unsupervised/README.md deleted file mode 100644 index 0b213fd20..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/README.md +++ /dev/null @@ -1,103 +0,0 @@ -# wav2vec Unsupervised (wav2vec-U) - -Wav2vec Unsupervised (wav2vec-U) is a framework for building speech recognition systems without any labeled training data as described in [Unsupervised Speech Recognition (Baevski et al., 2021)](https://ai.facebook.com/research/publications/unsupervised-speech-recognition). 
The model takes as input wav2vec 2.0 or XLSR representations (see [pretrained models](https://github.com/pytorch/fairseq/blob/main/examples/wav2vec)) as well as unlabeled speech and text data. - - The wav2vec-U training procedure consists of three consecutive main steps: -* Preparation of speech representations and text data -* Generative adversarial training (GAN) -* Iterative self-training + Kaldi LM-decoding - -## Preparation of speech and text data -Similar to [wav2vec 2.0](https://github.com/pytorch/fairseq/blob/main/examples/wav2vec/README.md), data folders contain {train,valid,test}.{tsv,wrd,phn} files, where audio paths are stored in tsv files, and word, letter or phoneme transcriptions are stored in .{wrd,ltr,phn}. - -In **/path/to/data/with_silence** you need a *train.tsv* file as well as (optionally) *{valid,test}.{tsv,wrd,phn}*. It is also helpful to have *10h.{tsv,phn}* files there for reproducing the ablation study on layer selection. In **/path/to/data/without_silence** you have the same files, except the *.tsv* files contain audio with silence removed using rVAD. - -Pre-requisites: -* set the FAIRSEQ_ROOT environment variable to your fairseq installation -* set the RVAD_ROOT environment variable to a checkout of [rVADfast](https://github.com/zhenghuatan/rVADfast) -* set the KENLM_ROOT environment variable to the location of [KenLM](https://github.com/kpu/kenlm) binaries -* install [PyKaldi](https://github.com/pykaldi/pykaldi) and set the KALDI_ROOT environment variable to the location of your Kaldi installation. To use the version bundled with PyKaldi, you can use /path/to/pykaldi/tools/kaldi - -Create new audio files without silences: -```shell -# create a manifest file for the original set of audio files -python $FAIRSEQ_ROOT/examples/wav2vec/wav2vec_manifest.py /dir/to/save/audio/files --ext wav --dest /path/to/new/train.tsv --valid-percent 0 - -python scripts/vads.py -r $RVAD_ROOT < /path/to/train.tsv > train.vads - -python scripts/remove_silence.py --tsv /path/to/train.tsv --vads train.vads --out /dir/to/save/audio/files - -python $FAIRSEQ_ROOT/examples/wav2vec/wav2vec_manifest.py /dir/to/save/audio/files --ext wav --dest /path/to/new/train.tsv --valid-percent 0.01 -``` - -Next, we need to preprocess the audio data to better match the phonemized text data: - -```shell -zsh scripts/prepare_audio.sh /dir/with/{train,test,valid}.tsv /output/dir /path/to/wav2vec2/model.pt 512 14 -``` -Note that if you have splits other than train/valid/test, you will need to modify this script. The last two arguments are the PCA dimensionality and the 0-based index of the layer from which to extract representations. - -Now we need to prepare the text data: -```shell -zsh scripts/prepare_text.sh language /path/to/text/file /output/dir 1000 espeak /path/to/fasttext/lid/model -``` - -The fourth argument is the minimum number of phone observations to keep. If your text corpus is small, you might want to reduce this number. - -The fifth argument is which phonemizer to use. Supported values are [espeak](http://espeak.sourceforge.net/), [espeak-ng](https://github.com/espeak-ng/espeak-ng), and [G2P](https://github.com/Kyubyong/g2p) (English only). - -Pre-trained fastText LID models can be downloaded [here](https://fasttext.cc/docs/en/language-identification.html). - -### Prepare TIMIT data -TIMIT transcripts include silence. Therefore VAD is not used for audio preprocessing, and we do not wrap transcripts with silences or insert random silence in between words.
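- -For reference, TIMIT *.PHN* transcripts are time-aligned phone files in which each line gives a start sample, an end sample, and a phone symbol, with silence marked explicitly as h# at the utterance boundaries. A truncated excerpt (sample offsets invented for illustration): -``` -0 9640 h# -9640 11240 sh -11240 12783 iy -```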
- -To prepare TIMIT data for both the matched and unmatched setups: -```shell -bash scripts/prepare_timit.sh /dir/to/timit/raw/data /output/dir /path/to/wav2vec2/model.pt -``` - -Note that we assume the TIMIT distribution with capitalized directories and filenames is used (e.g., `TRAIN/DR1/FCJF0/SA1.PHN`). - -## Generative adversarial training (GAN) - -We then use a GAN model to build a first unsupervised ASR model. The data preparation above, covering both speech features and text data, is what enables the generator to match speech to text in an unsupervised way. - -GAN training on top of the preprocessed features with default hyperparameters can be launched with: - -``` -PREFIX=w2v_unsup_gan_xp -TASK_DATA=/path/to/features/precompute_unfiltered_pca512_cls128_mean_pooled -TEXT_DATA=/path/to/data/phones # path to fairseq-preprocessed GAN data (phones dir) -KENLM_PATH=/path/to/data/phones/kenlm.phn.o4.bin # KenLM 4-gram phoneme language model (LM data = GAN data here) - -PYTHONPATH=$FAIRSEQ_ROOT PREFIX=$PREFIX fairseq-hydra-train \ - -m --config-dir config/gan \ - --config-name w2vu \ - task.data=${TASK_DATA} \ - task.text_data=${TEXT_DATA} \ - task.kenlm_path=${KENLM_PATH} \ - common.user_dir=${FAIRSEQ_ROOT}/examples/wav2vec/unsupervised \ - model.code_penalty=2,4 model.gradient_penalty=1.5,2.0 \ - model.smoothness_weight=0.5,0.75,1.0 'common.seed=range(0,5)' -``` - - -Once we find the best checkpoint (chosen using an unsupervised metric that combines language model perplexity and vocabulary usage), we can use it to generate phone labels (or word labels with an appropriate Kaldi WFST): - -```shell -python w2vu_generate.py --config-dir config/generate --config-name viterbi \ -fairseq.common.user_dir=${FAIRSEQ_ROOT}/examples/wav2vec/unsupervised \ -fairseq.task.data=/path/to/dir/with/features \ -fairseq.common_eval.path=/path/to/gan/checkpoint \ -fairseq.dataset.gen_subset=valid results_path=/where/to/save/transcriptions -``` - -Decoding without an LM works best on the same adjacent-mean-pooled features that the GAN was trained on, while decoding with an LM works better on features before the adjacent-timestep mean-pooling step (without the "_pooled" suffix). - -## Iterative self-training + Kaldi LM-decoding -After the GAN training provides a first unsupervised model, we can progressively refine the quality of transcriptions using several iterations of semi-supervised learning. We perform two iterations: first, we pseudo-label the training data with the unsupervised GAN model and train an HMM on the pseudo-labels; second, we relabel the training data with the HMM and then fine-tune the original wav2vec 2.0 model on the HMM pseudo-labels with a CTC loss. Note that the HMM models use phonemes as output, while wav2vec 2.0 uses letters. Both are decoded into words using WFST decoders. - - -Please see [this README](kaldi_self_train/README.md) for more instructions on how to do iterative self-training + Kaldi LM-decoding, sketched schematically below.
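- -As a high-level illustration only (the concrete training scripts and their arguments live in the README above; the output directory names and the reuse of `w2vu_generate.py` with `gen_subset=train` are assumptions made for this sketch, not prescribed commands), the two iterations look roughly like: -```shell -# iteration 1: pseudo-label the train split with the GAN, then fit an HMM on those phone pseudo-labels -python w2vu_generate.py --config-dir config/generate --config-name viterbi \ -fairseq.common.user_dir=${FAIRSEQ_ROOT}/examples/wav2vec/unsupervised \ -fairseq.task.data=/path/to/dir/with/features \ -fairseq.common_eval.path=/path/to/gan/checkpoint \ -fairseq.dataset.gen_subset=train results_path=/path/to/pseudo_labels_iter1 -# then train an HMM on the transcriptions in /path/to/pseudo_labels_iter1 (see kaldi_self_train/README.md) - -# iteration 2: relabel the train split with the HMM, then fine-tune wav2vec 2.0 with CTC on the -# resulting letter labels, e.g. starting from the w2v_finetune.yaml config under config/finetuning -```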
- -*** Note: these instructions are a work in progress and will be updated over the next few days diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/__init__.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/finetuning/w2v_finetune.yaml b/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/finetuning/w2v_finetune.yaml deleted file mode 100644 index 19a3ef348..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/finetuning/w2v_finetune.yaml +++ /dev/null @@ -1,62 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - tensorboard_logdir: tb - -checkpoint: - no_epoch_checkpoints: true - save_interval_updates: 20000 - -task: - _name: audio_finetuning - data: ??? - normalize: true - labels: ltr - -dataset: - num_workers: 6 - max_tokens: 800000 - skip_invalid_size_inputs_valid_test: true - train_subset: train - valid_subset: valid - -distributed_training: - ddp_backend: legacy_ddp - distributed_world_size: 8 - find_unused_parameters: True - -criterion: - _name: ctc - zero_infinity: true - post_process: letter - -optimization: - max_update: 80000 - lr: [0.00003] - sentence_avg: true - update_freq: [1] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-08 - -lr_scheduler: - _name: tri_stage - phase_ratio: [0.1, 0.4, 0.5] - final_lr_scale: 0.05 - -model: - _name: wav2vec_ctc - w2v_path: ??? - apply_mask: true - mask_prob: 0.25 - mask_channel_prob: 0.1 - mask_channel_length: 64 - layerdrop: 0.1 - activation_dropout: 0.1 - feature_grad_mult: 0.0 - freeze_finetune_updates: 0 diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/gan/w2vu.yaml b/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/gan/w2vu.yaml deleted file mode 100644 index 74f1829d1..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/gan/w2vu.yaml +++ /dev/null @@ -1,115 +0,0 @@ -# @package _group_ - -common: - fp16: false - fp16_no_flatten_grads: true - log_format: json - log_interval: 100 - tensorboard_logdir: tb - reset_logging: false - suppress_crashes: false - -checkpoint: - save_interval: 1000 - save_interval_updates: 1000 - no_epoch_checkpoints: true - best_checkpoint_metric: weighted_lm_ppl - save_dir: . - -distributed_training: - distributed_world_size: 1 - -task: - _name: unpaired_audio_text - data: ??? - text_data: ??? - labels: phn - sort_by_length: false - unfiltered: false - max_length: null - append_eos: false - kenlm_path: ??? 
- -dataset: - num_workers: 6 - batch_size: 160 - skip_invalid_size_inputs_valid_test: true - valid_subset: valid - validate_interval: 1000 - validate_interval_updates: 1000 - -criterion: - _name: model - log_keys: - - accuracy_dense - - accuracy_token - - temp - - code_ppl - -optimization: - max_update: 150000 - clip_norm: 5.0 - lr: [0] - -optimizer: - _name: composite - groups: - generator: - lr: [0.0004] - lr_float: null - optimizer: - _name: adam - adam_betas: [0.5,0.98] - adam_eps: 1e-06 - weight_decay: 0 - amsgrad: false - lr_scheduler: - _name: fixed - warmup_updates: 0 - discriminator: - lr: [ 0.0005 ] - lr_float: null - optimizer: - _name: adam - adam_betas: [0.5,0.98] - adam_eps: 1e-06 - weight_decay: 0.0001 - amsgrad: false - lr_scheduler: - _name: fixed - warmup_updates: 0 - -lr_scheduler: pass_through - -model: - _name: wav2vec_u - - discriminator_dim: 384 - discriminator_depth: 2 - discriminator_kernel: 6 - discriminator_linear_emb: false - discriminator_causal: true - discriminator_max_pool: false - discriminator_act_after_linear: false - discriminator_dropout: 0.0 - discriminator_weight_norm: false - - generator_stride: 1 - generator_kernel: 4 - generator_bias: false - generator_dropout: 0.1 - - smoothness_weight: 0.5 - smoothing: 0 - smoothing_one_sided: false - gumbel: false - hard_gumbel: false - gradient_penalty: 1.5 - code_penalty: 4.0 - temp: [ 2,0.1,0.99995 ] - input_dim: 512 - - segmentation: - type: JOIN - mean_pool_join: false - remove_zeros: false diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/generate/viterbi.yaml b/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/generate/viterbi.yaml deleted file mode 100644 index 9c88beebc..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/generate/viterbi.yaml +++ /dev/null @@ -1,21 +0,0 @@ -# @package _group_ - -fairseq: - task: - _name: unpaired_audio_text - labels: phn - data: ??? - sort_by_length: false - shuffle: false - text_data: '' - - common_eval: - path: ??? 
- quiet: true - - dataset: - gen_subset: valid - batch_size: 1 - -w2l_decoder: VITERBI -post_process: silence diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_matched/test.uid b/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_matched/test.uid deleted file mode 100644 index 401008246..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_matched/test.uid +++ /dev/null @@ -1,192 +0,0 @@ -FDHC0_SI1559 -FDHC0_SI2189 -FDHC0_SI929 -FDHC0_SX119 -FDHC0_SX209 -FDHC0_SX29 -FDHC0_SX299 -FDHC0_SX389 -FELC0_SI1386 -FELC0_SI2016 -FELC0_SI756 -FELC0_SX126 -FELC0_SX216 -FELC0_SX306 -FELC0_SX36 -FELC0_SX396 -FJLM0_SI1043 -FJLM0_SI1673 -FJLM0_SI2303 -FJLM0_SX143 -FJLM0_SX233 -FJLM0_SX323 -FJLM0_SX413 -FJLM0_SX53 -FMGD0_SI1564 -FMGD0_SI2194 -FMGD0_SI934 -FMGD0_SX124 -FMGD0_SX214 -FMGD0_SX304 -FMGD0_SX34 -FMGD0_SX394 -FMLD0_SI2185 -FMLD0_SI822 -FMLD0_SI925 -FMLD0_SX115 -FMLD0_SX205 -FMLD0_SX25 -FMLD0_SX295 -FMLD0_SX385 -FNLP0_SI1308 -FNLP0_SI1938 -FNLP0_SI678 -FNLP0_SX138 -FNLP0_SX228 -FNLP0_SX318 -FNLP0_SX408 -FNLP0_SX48 -FPAS0_SI1272 -FPAS0_SI2204 -FPAS0_SI944 -FPAS0_SX134 -FPAS0_SX224 -FPAS0_SX314 -FPAS0_SX404 -FPAS0_SX44 -FPKT0_SI1538 -FPKT0_SI2168 -FPKT0_SI908 -FPKT0_SX188 -FPKT0_SX278 -FPKT0_SX368 -FPKT0_SX8 -FPKT0_SX98 -MBPM0_SI1577 -MBPM0_SI1584 -MBPM0_SI947 -MBPM0_SX137 -MBPM0_SX227 -MBPM0_SX317 -MBPM0_SX407 -MBPM0_SX47 -MCMJ0_SI1094 -MCMJ0_SI464 -MCMJ0_SI602 -MCMJ0_SX104 -MCMJ0_SX14 -MCMJ0_SX194 -MCMJ0_SX284 -MCMJ0_SX374 -MDAB0_SI1039 -MDAB0_SI1669 -MDAB0_SI2299 -MDAB0_SX139 -MDAB0_SX229 -MDAB0_SX319 -MDAB0_SX409 -MDAB0_SX49 -MGRT0_SI1450 -MGRT0_SI2080 -MGRT0_SI820 -MGRT0_SX10 -MGRT0_SX100 -MGRT0_SX190 -MGRT0_SX280 -MGRT0_SX370 -MJDH0_SI1354 -MJDH0_SI1984 -MJDH0_SI724 -MJDH0_SX184 -MJDH0_SX274 -MJDH0_SX364 -MJDH0_SX4 -MJDH0_SX94 -MJLN0_SI1449 -MJLN0_SI2079 -MJLN0_SI819 -MJLN0_SX189 -MJLN0_SX279 -MJLN0_SX369 -MJLN0_SX9 -MJLN0_SX99 -MJMP0_SI1535 -MJMP0_SI1791 -MJMP0_SI905 -MJMP0_SX185 -MJMP0_SX275 -MJMP0_SX365 -MJMP0_SX5 -MJMP0_SX95 -MKLT0_SI1213 -MKLT0_SI1843 -MKLT0_SI583 -MKLT0_SX133 -MKLT0_SX223 -MKLT0_SX313 -MKLT0_SX403 -MKLT0_SX43 -MLLL0_SI1363 -MLLL0_SI1993 -MLLL0_SI733 -MLLL0_SX103 -MLLL0_SX13 -MLLL0_SX193 -MLLL0_SX283 -MLLL0_SX373 -MLNT0_SI1574 -MLNT0_SI1902 -MLNT0_SI642 -MLNT0_SX102 -MLNT0_SX12 -MLNT0_SX192 -MLNT0_SX282 -MLNT0_SX372 -MNJM0_SI1580 -MNJM0_SI2210 -MNJM0_SI950 -MNJM0_SX140 -MNJM0_SX230 -MNJM0_SX320 -MNJM0_SX410 -MNJM0_SX50 -MPAM0_SI1189 -MPAM0_SI1819 -MPAM0_SI1961 -MPAM0_SX109 -MPAM0_SX19 -MPAM0_SX199 -MPAM0_SX289 -MPAM0_SX379 -MTAS1_SI1473 -MTAS1_SI2098 -MTAS1_SI838 -MTAS1_SX118 -MTAS1_SX208 -MTAS1_SX28 -MTAS1_SX298 -MTAS1_SX388 -MTLS0_SI1370 -MTLS0_SI2000 -MTLS0_SI740 -MTLS0_SX110 -MTLS0_SX20 -MTLS0_SX200 -MTLS0_SX290 -MTLS0_SX380 -MWBT0_SI1553 -MWBT0_SI2183 -MWBT0_SI923 -MWBT0_SX113 -MWBT0_SX203 -MWBT0_SX23 -MWBT0_SX293 -MWBT0_SX383 -MWEW0_SI1361 -MWEW0_SI1991 -MWEW0_SI731 -MWEW0_SX101 -MWEW0_SX11 -MWEW0_SX191 -MWEW0_SX281 -MWEW0_SX371 diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_matched/train.uid b/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_matched/train.uid deleted file mode 100644 index c39fd0b91..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_matched/train.uid +++ /dev/null @@ -1,3696 +0,0 @@ -FAEM0_SI1392 -FAEM0_SI2022 -FAEM0_SI762 -FAEM0_SX132 -FAEM0_SX222 -FAEM0_SX312 -FAEM0_SX402 -FAEM0_SX42 -FAJW0_SI1263 -FAJW0_SI1893 -FAJW0_SI633 -FAJW0_SX183 -FAJW0_SX273 -FAJW0_SX3 -FAJW0_SX363 -FAJW0_SX93 -FALK0_SI1086 -FALK0_SI456 -FALK0_SI658 
-FALK0_SX186 -FALK0_SX276 -FALK0_SX366 -FALK0_SX6 -FALK0_SX96 -FALR0_SI1325 -FALR0_SI1955 -FALR0_SI695 -FALR0_SX155 -FALR0_SX245 -FALR0_SX335 -FALR0_SX425 -FALR0_SX65 -FAPB0_SI1063 -FAPB0_SI1693 -FAPB0_SI2323 -FAPB0_SX163 -FAPB0_SX253 -FAPB0_SX343 -FAPB0_SX433 -FAPB0_SX73 -FBAS0_SI1387 -FBAS0_SI1472 -FBAS0_SI2066 -FBAS0_SX127 -FBAS0_SX217 -FBAS0_SX307 -FBAS0_SX37 -FBAS0_SX397 -FBCG1_SI1612 -FBCG1_SI2242 -FBCG1_SI982 -FBCG1_SX172 -FBCG1_SX262 -FBCG1_SX352 -FBCG1_SX442 -FBCG1_SX82 -FBCH0_SI1586 -FBCH0_SI956 -FBCH0_SI959 -FBCH0_SX146 -FBCH0_SX236 -FBCH0_SX326 -FBCH0_SX416 -FBCH0_SX56 -FBJL0_SI1552 -FBJL0_SI2182 -FBJL0_SI922 -FBJL0_SX112 -FBJL0_SX202 -FBJL0_SX22 -FBJL0_SX292 -FBJL0_SX382 -FBLV0_SI1058 -FBLV0_SI1688 -FBLV0_SI2318 -FBLV0_SX158 -FBLV0_SX248 -FBLV0_SX338 -FBLV0_SX428 -FBLV0_SX68 -FBMH0_SI1136 -FBMH0_SI1766 -FBMH0_SI970 -FBMH0_SX146 -FBMH0_SX236 -FBMH0_SX326 -FBMH0_SX416 -FBMH0_SX56 -FBMJ0_SI1776 -FBMJ0_SI516 -FBMJ0_SI815 -FBMJ0_SX156 -FBMJ0_SX246 -FBMJ0_SX336 -FBMJ0_SX426 -FBMJ0_SX66 -FCAG0_SI1503 -FCAG0_SI1641 -FCAG0_SI2133 -FCAG0_SX153 -FCAG0_SX243 -FCAG0_SX333 -FCAG0_SX423 -FCAG0_SX63 -FCAJ0_SI1479 -FCAJ0_SI1804 -FCAJ0_SI849 -FCAJ0_SX129 -FCAJ0_SX219 -FCAJ0_SX309 -FCAJ0_SX39 -FCAJ0_SX399 -FCDR1_SI1186 -FCDR1_SI1816 -FCDR1_SI556 -FCDR1_SX106 -FCDR1_SX16 -FCDR1_SX196 -FCDR1_SX286 -FCDR1_SX376 -FCEG0_SI1248 -FCEG0_SI1878 -FCEG0_SI618 -FCEG0_SX168 -FCEG0_SX258 -FCEG0_SX348 -FCEG0_SX438 -FCEG0_SX78 -FCJF0_SI1027 -FCJF0_SI1657 -FCJF0_SI648 -FCJF0_SX127 -FCJF0_SX217 -FCJF0_SX307 -FCJF0_SX37 -FCJF0_SX397 -FCJS0_SI1607 -FCJS0_SI2237 -FCJS0_SI977 -FCJS0_SX167 -FCJS0_SX257 -FCJS0_SX347 -FCJS0_SX437 -FCJS0_SX77 -FCKE0_SI1111 -FCKE0_SI1741 -FCKE0_SI481 -FCKE0_SX121 -FCKE0_SX211 -FCKE0_SX301 -FCKE0_SX31 -FCKE0_SX391 -FCLT0_SI1438 -FCLT0_SI2068 -FCLT0_SI808 -FCLT0_SX178 -FCLT0_SX268 -FCLT0_SX358 -FCLT0_SX448 -FCLT0_SX88 -FCMG0_SI1142 -FCMG0_SI1242 -FCMG0_SI1872 -FCMG0_SX162 -FCMG0_SX252 -FCMG0_SX342 -FCMG0_SX432 -FCMG0_SX72 -FCMM0_SI1083 -FCMM0_SI1957 -FCMM0_SI453 -FCMM0_SX183 -FCMM0_SX273 -FCMM0_SX363 -FCMM0_SX420 -FCMM0_SX93 -FCRZ0_SI1913 -FCRZ0_SI2053 -FCRZ0_SI793 -FCRZ0_SX163 -FCRZ0_SX253 -FCRZ0_SX343 -FCRZ0_SX433 -FCRZ0_SX73 -FCYL0_SI1297 -FCYL0_SI1927 -FCYL0_SI667 -FCYL0_SX127 -FCYL0_SX217 -FCYL0_SX349 -FCYL0_SX37 -FCYL0_SX397 -FDAS1_SI1461 -FDAS1_SI2091 -FDAS1_SI831 -FDAS1_SX111 -FDAS1_SX201 -FDAS1_SX21 -FDAS1_SX291 -FDAS1_SX381 -FDAW0_SI1271 -FDAW0_SI1406 -FDAW0_SI2036 -FDAW0_SX146 -FDAW0_SX236 -FDAW0_SX326 -FDAW0_SX416 -FDAW0_SX56 -FDFB0_SI1318 -FDFB0_SI1948 -FDFB0_SI2010 -FDFB0_SX148 -FDFB0_SX238 -FDFB0_SX328 -FDFB0_SX418 -FDFB0_SX58 -FDJH0_SI1565 -FDJH0_SI2195 -FDJH0_SI935 -FDJH0_SX125 -FDJH0_SX215 -FDJH0_SX305 -FDJH0_SX35 -FDJH0_SX395 -FDKN0_SI1081 -FDKN0_SI1202 -FDKN0_SI1711 -FDKN0_SX181 -FDKN0_SX271 -FDKN0_SX361 -FDKN0_SX451 -FDKN0_SX91 -FDML0_SI1149 -FDML0_SI1779 -FDML0_SI2075 -FDML0_SX159 -FDML0_SX249 -FDML0_SX339 -FDML0_SX429 -FDML0_SX69 -FDMY0_SI1197 -FDMY0_SI567 -FDMY0_SI714 -FDMY0_SX117 -FDMY0_SX207 -FDMY0_SX27 -FDMY0_SX297 -FDMY0_SX387 -FDNC0_SI1278 -FDNC0_SI1908 -FDNC0_SI2287 -FDNC0_SX108 -FDNC0_SX18 -FDNC0_SX198 -FDNC0_SX288 -FDNC0_SX378 -FDTD0_SI1561 -FDTD0_SI2191 -FDTD0_SI931 -FDTD0_SX121 -FDTD0_SX211 -FDTD0_SX301 -FDTD0_SX321 -FDTD0_SX391 -FDXW0_SI1511 -FDXW0_SI2141 -FDXW0_SI881 -FDXW0_SX161 -FDXW0_SX251 -FDXW0_SX341 -FDXW0_SX431 -FDXW0_SX71 -FEAC0_SI1245 -FEAC0_SI1875 -FEAC0_SI615 -FEAC0_SX165 -FEAC0_SX255 -FEAC0_SX345 -FEAC0_SX435 -FEAC0_SX75 -FEAR0_SI1252 -FEAR0_SI1882 -FEAR0_SI622 -FEAR0_SX172 -FEAR0_SX262 -FEAR0_SX352 -FEAR0_SX442 -FEAR0_SX82 -FECD0_SI1418 
-FECD0_SI2048 -FECD0_SI788 -FECD0_SX158 -FECD0_SX248 -FECD0_SX338 -FECD0_SX428 -FECD0_SX68 -FEEH0_SI1112 -FEEH0_SI1742 -FEEH0_SI471 -FEEH0_SX122 -FEEH0_SX212 -FEEH0_SX302 -FEEH0_SX32 -FEEH0_SX392 -FEME0_SI1505 -FEME0_SI2135 -FEME0_SI875 -FEME0_SX155 -FEME0_SX245 -FEME0_SX335 -FEME0_SX425 -FEME0_SX65 -FETB0_SI1148 -FETB0_SI1778 -FETB0_SI518 -FETB0_SX158 -FETB0_SX248 -FETB0_SX338 -FETB0_SX428 -FETB0_SX68 -FEXM0_SI1101 -FEXM0_SI1731 -FEXM0_SI482 -FEXM0_SX111 -FEXM0_SX201 -FEXM0_SX291 -FEXM0_SX366 -FEXM0_SX381 -FGCS0_SI1486 -FGCS0_SI2116 -FGCS0_SI856 -FGCS0_SX136 -FGCS0_SX226 -FGCS0_SX316 -FGCS0_SX406 -FGCS0_SX46 -FGDP0_SI1618 -FGDP0_SI2248 -FGDP0_SI988 -FGDP0_SX178 -FGDP0_SX268 -FGDP0_SX358 -FGDP0_SX448 -FGDP0_SX88 -FGMB0_SI1145 -FGMB0_SI1775 -FGMB0_SI515 -FGMB0_SX155 -FGMB0_SX245 -FGMB0_SX335 -FGMB0_SX425 -FGMB0_SX65 -FGRW0_SI1152 -FGRW0_SI1782 -FGRW0_SI1990 -FGRW0_SX162 -FGRW0_SX252 -FGRW0_SX342 -FGRW0_SX432 -FGRW0_SX72 -FHLM0_SI1560 -FHLM0_SI2190 -FHLM0_SI930 -FHLM0_SX120 -FHLM0_SX210 -FHLM0_SX300 -FHLM0_SX349 -FHLM0_SX390 -FHXS0_SI1075 -FHXS0_SI2302 -FHXS0_SI2335 -FHXS0_SX175 -FHXS0_SX265 -FHXS0_SX355 -FHXS0_SX445 -FHXS0_SX85 -FJDM2_SI1582 -FJDM2_SI1964 -FJDM2_SI2212 -FJDM2_SX142 -FJDM2_SX232 -FJDM2_SX322 -FJDM2_SX412 -FJDM2_SX52 -FJEN0_SI1047 -FJEN0_SI1677 -FJEN0_SI2307 -FJEN0_SX147 -FJEN0_SX237 -FJEN0_SX327 -FJEN0_SX417 -FJEN0_SX57 -FJHK0_SI1022 -FJHK0_SI1652 -FJHK0_SI2282 -FJHK0_SX122 -FJHK0_SX212 -FJHK0_SX302 -FJHK0_SX32 -FJHK0_SX392 -FJKL0_SI1562 -FJKL0_SI2192 -FJKL0_SI932 -FJKL0_SX122 -FJKL0_SX212 -FJKL0_SX302 -FJKL0_SX32 -FJKL0_SX392 -FJLG0_SI1506 -FJLG0_SI1889 -FJLG0_SI2306 -FJLG0_SX179 -FJLG0_SX269 -FJLG0_SX359 -FJLG0_SX449 -FJLG0_SX89 -FJLR0_SI1231 -FJLR0_SI1861 -FJLR0_SI601 -FJLR0_SX151 -FJLR0_SX241 -FJLR0_SX331 -FJLR0_SX421 -FJLR0_SX61 -FJRB0_SI1302 -FJRB0_SI1932 -FJRB0_SI672 -FJRB0_SX132 -FJRB0_SX222 -FJRB0_SX312 -FJRB0_SX402 -FJRB0_SX42 -FJRP1_SI1432 -FJRP1_SI2062 -FJRP1_SI802 -FJRP1_SX172 -FJRP1_SX262 -FJRP1_SX352 -FJRP1_SX442 -FJRP1_SX82 -FJSK0_SI1052 -FJSK0_SI1682 -FJSK0_SI2312 -FJSK0_SX152 -FJSK0_SX242 -FJSK0_SX332 -FJSK0_SX422 -FJSK0_SX62 -FJSP0_SI1434 -FJSP0_SI1763 -FJSP0_SI804 -FJSP0_SX174 -FJSP0_SX264 -FJSP0_SX354 -FJSP0_SX444 -FJSP0_SX84 -FJWB1_SI2055 -FJWB1_SI748 -FJWB1_SI795 -FJWB1_SX165 -FJWB1_SX255 -FJWB1_SX345 -FJWB1_SX435 -FJWB1_SX75 -FJXM0_SI1211 -FJXM0_SI1971 -FJXM0_SI581 -FJXM0_SX131 -FJXM0_SX221 -FJXM0_SX311 -FJXM0_SX401 -FJXM0_SX41 -FJXP0_SI1122 -FJXP0_SI1752 -FJXP0_SI492 -FJXP0_SX132 -FJXP0_SX222 -FJXP0_SX312 -FJXP0_SX402 -FJXP0_SX42 -FKAA0_SI1208 -FKAA0_SI1838 -FKAA0_SI578 -FKAA0_SX128 -FKAA0_SX218 -FKAA0_SX308 -FKAA0_SX38 -FKAA0_SX398 -FKDE0_SI1141 -FKDE0_SI1771 -FKDE0_SI2221 -FKDE0_SX151 -FKDE0_SX241 -FKDE0_SX331 -FKDE0_SX421 -FKDE0_SX61 -FKDW0_SI1207 -FKDW0_SI1891 -FKDW0_SI577 -FKDW0_SX127 -FKDW0_SX217 -FKDW0_SX307 -FKDW0_SX37 -FKDW0_SX397 -FKFB0_SI1608 -FKFB0_SI2238 -FKFB0_SI978 -FKFB0_SX168 -FKFB0_SX258 -FKFB0_SX348 -FKFB0_SX438 -FKFB0_SX78 -FKKH0_SI1290 -FKKH0_SI1920 -FKKH0_SI660 -FKKH0_SX120 -FKKH0_SX210 -FKKH0_SX30 -FKKH0_SX300 -FKKH0_SX390 -FKLC0_SI1615 -FKLC0_SI2245 -FKLC0_SI985 -FKLC0_SX175 -FKLC0_SX265 -FKLC0_SX355 -FKLC0_SX445 -FKLC0_SX85 -FKLC1_SI1048 -FKLC1_SI1678 -FKLC1_SI2308 -FKLC1_SX148 -FKLC1_SX238 -FKLC1_SX328 -FKLC1_SX418 -FKLC1_SX58 -FKLH0_SI1257 -FKLH0_SI1887 -FKLH0_SI627 -FKLH0_SX177 -FKLH0_SX267 -FKLH0_SX357 -FKLH0_SX447 -FKLH0_SX87 -FKSR0_SI1117 -FKSR0_SI1747 -FKSR0_SI487 -FKSR0_SX161 -FKSR0_SX217 -FKSR0_SX366 -FKSR0_SX37 -FKSR0_SX397 -FLAC0_SI1339 -FLAC0_SI2161 -FLAC0_SI901 -FLAC0_SX181 -FLAC0_SX271 -FLAC0_SX361 -FLAC0_SX451 
-FLAC0_SX91 -FLAG0_SI1464 -FLAG0_SI2094 -FLAG0_SI834 -FLAG0_SX114 -FLAG0_SX204 -FLAG0_SX24 -FLAG0_SX294 -FLAG0_SX384 -FLEH0_SI1051 -FLEH0_SI1681 -FLEH0_SI2311 -FLEH0_SX151 -FLEH0_SX241 -FLEH0_SX331 -FLEH0_SX421 -FLEH0_SX61 -FLET0_SI1137 -FLET0_SI1767 -FLET0_SI507 -FLET0_SX147 -FLET0_SX237 -FLET0_SX277 -FLET0_SX417 -FLET0_SX57 -FLHD0_SI1344 -FLHD0_SI1827 -FLHD0_SI1974 -FLHD0_SX174 -FLHD0_SX264 -FLHD0_SX354 -FLHD0_SX444 -FLHD0_SX84 -FLJA0_SI1078 -FLJA0_SI1708 -FLJA0_SI2338 -FLJA0_SX178 -FLJA0_SX268 -FLJA0_SX358 -FLJA0_SX448 -FLJA0_SX88 -FLJD0_SI1516 -FLJD0_SI2146 -FLJD0_SI886 -FLJD0_SX166 -FLJD0_SX256 -FLJD0_SX346 -FLJD0_SX436 -FLJD0_SX76 -FLJG0_SI1611 -FLJG0_SI2241 -FLJG0_SI981 -FLJG0_SX171 -FLJG0_SX261 -FLJG0_SX351 -FLJG0_SX441 -FLJG0_SX81 -FLKM0_SI1880 -FLKM0_SI620 -FLKM0_SI686 -FLKM0_SX116 -FLKM0_SX260 -FLKM0_SX350 -FLKM0_SX440 -FLKM0_SX80 -FLMA0_SI1243 -FLMA0_SI1873 -FLMA0_SI613 -FLMA0_SX163 -FLMA0_SX253 -FLMA0_SX343 -FLMA0_SX433 -FLMA0_SX73 -FLMC0_SI1372 -FLMC0_SI2002 -FLMC0_SI742 -FLMC0_SX112 -FLMC0_SX22 -FLMC0_SX292 -FLMC0_SX336 -FLMC0_SX382 -FLMK0_SI1035 -FLMK0_SI1229 -FLMK0_SI2295 -FLMK0_SX135 -FLMK0_SX225 -FLMK0_SX315 -FLMK0_SX405 -FLMK0_SX45 -FLOD0_SI1287 -FLOD0_SI1917 -FLOD0_SI657 -FLOD0_SX117 -FLOD0_SX171 -FLOD0_SX207 -FLOD0_SX297 -FLOD0_SX387 -FLTM0_SI1070 -FLTM0_SI1700 -FLTM0_SI2330 -FLTM0_SX170 -FLTM0_SX260 -FLTM0_SX350 -FLTM0_SX440 -FLTM0_SX80 -FMAH1_SI1509 -FMAH1_SI2139 -FMAH1_SI879 -FMAH1_SX159 -FMAH1_SX249 -FMAH1_SX339 -FMAH1_SX429 -FMAH1_SX69 -FMBG0_SI1160 -FMBG0_SI1790 -FMBG0_SI2264 -FMBG0_SX260 -FMBG0_SX3 -FMBG0_SX350 -FMBG0_SX440 -FMBG0_SX80 -FMEM0_SI1377 -FMEM0_SI2007 -FMEM0_SI747 -FMEM0_SX117 -FMEM0_SX207 -FMEM0_SX297 -FMEM0_SX333 -FMEM0_SX387 -FMJB0_SI1177 -FMJB0_SI1807 -FMJB0_SI547 -FMJB0_SX187 -FMJB0_SX277 -FMJB0_SX367 -FMJB0_SX7 -FMJB0_SX97 -FMJF0_SI1254 -FMJF0_SI1884 -FMJF0_SI624 -FMJF0_SX174 -FMJF0_SX264 -FMJF0_SX354 -FMJF0_SX444 -FMJF0_SX84 -FMJU0_SI1389 -FMJU0_SI2019 -FMJU0_SI759 -FMJU0_SX129 -FMJU0_SX219 -FMJU0_SX309 -FMJU0_SX39 -FMJU0_SX399 -FMKC0_SI1041 -FMKC0_SI1072 -FMKC0_SI1702 -FMKC0_SX172 -FMKC0_SX262 -FMKC0_SX352 -FMKC0_SX442 -FMKC0_SX82 -FMKF0_SI1018 -FMKF0_SI1536 -FMKF0_SI906 -FMKF0_SX186 -FMKF0_SX276 -FMKF0_SX366 -FMKF0_SX6 -FMKF0_SX96 -FMMH0_SI1537 -FMMH0_SI2167 -FMMH0_SI907 -FMMH0_SX187 -FMMH0_SX367 -FMMH0_SX420 -FMMH0_SX7 -FMMH0_SX97 -FMPG0_SI1602 -FMPG0_SI2232 -FMPG0_SI972 -FMPG0_SX162 -FMPG0_SX252 -FMPG0_SX342 -FMPG0_SX432 -FMPG0_SX72 -FNKL0_SI1522 -FNKL0_SI2152 -FNKL0_SI892 -FNKL0_SX172 -FNKL0_SX196 -FNKL0_SX262 -FNKL0_SX442 -FNKL0_SX82 -FNTB0_SI1203 -FNTB0_SI573 -FNTB0_SI679 -FNTB0_SX123 -FNTB0_SX213 -FNTB0_SX303 -FNTB0_SX33 -FNTB0_SX393 -FPAB1_SI1471 -FPAB1_SI2101 -FPAB1_SI841 -FPAB1_SX121 -FPAB1_SX211 -FPAB1_SX301 -FPAB1_SX31 -FPAB1_SX391 -FPAC0_SI1921 -FPAC0_SI2011 -FPAC0_SI661 -FPAC0_SX121 -FPAC0_SX211 -FPAC0_SX301 -FPAC0_SX31 -FPAC0_SX391 -FPAD0_SI1346 -FPAD0_SI1976 -FPAD0_SI716 -FPAD0_SX176 -FPAD0_SX266 -FPAD0_SX356 -FPAD0_SX446 -FPAD0_SX86 -FPAF0_SI1054 -FPAF0_SI1684 -FPAF0_SI2314 -FPAF0_SX154 -FPAF0_SX244 -FPAF0_SX334 -FPAF0_SX424 -FPAF0_SX64 -FPAZ0_SI1593 -FPAZ0_SI2223 -FPAZ0_SI963 -FPAZ0_SX153 -FPAZ0_SX243 -FPAZ0_SX27 -FPAZ0_SX423 -FPAZ0_SX63 -FPJF0_SI1046 -FPJF0_SI1259 -FPJF0_SI1676 -FPJF0_SX146 -FPJF0_SX236 -FPJF0_SX326 -FPJF0_SX352 -FPJF0_SX56 -FPLS0_SI1590 -FPLS0_SI2220 -FPLS0_SI960 -FPLS0_SX150 -FPLS0_SX240 -FPLS0_SX3 -FPLS0_SX330 -FPLS0_SX60 -FPMY0_SI1153 -FPMY0_SI1783 -FPMY0_SI523 -FPMY0_SX163 -FPMY0_SX196 -FPMY0_SX253 -FPMY0_SX343 -FPMY0_SX73 -FREH0_SI1315 -FREH0_SI1945 -FREH0_SI685 -FREH0_SX145 -FREH0_SX235 -FREH0_SX325 
-FREH0_SX415 -FREH0_SX55 -FRJB0_SI1427 -FRJB0_SI1470 -FRJB0_SI1794 -FRJB0_SX167 -FRJB0_SX257 -FRJB0_SX347 -FRJB0_SX437 -FRJB0_SX77 -FRLL0_SI1514 -FRLL0_SI805 -FRLL0_SI884 -FRLL0_SX164 -FRLL0_SX254 -FRLL0_SX344 -FRLL0_SX434 -FRLL0_SX74 -FSAG0_SI1323 -FSAG0_SI1953 -FSAG0_SI693 -FSAG0_SX153 -FSAG0_SX243 -FSAG0_SX333 -FSAG0_SX423 -FSAG0_SX63 -FSAH0_SI1244 -FSAH0_SI1874 -FSAH0_SI614 -FSAH0_SX164 -FSAH0_SX327 -FSAH0_SX344 -FSAH0_SX434 -FSAH0_SX74 -FSAK0_SI1300 -FSAK0_SI1930 -FSAK0_SI670 -FSAK0_SX130 -FSAK0_SX220 -FSAK0_SX310 -FSAK0_SX40 -FSAK0_SX400 -FSBK0_SI1069 -FSBK0_SI1699 -FSBK0_SI2329 -FSBK0_SX169 -FSBK0_SX259 -FSBK0_SX349 -FSBK0_SX439 -FSBK0_SX79 -FSCN0_SI1886 -FSCN0_SI626 -FSCN0_SI705 -FSCN0_SX176 -FSCN0_SX266 -FSCN0_SX356 -FSCN0_SX446 -FSCN0_SX86 -FSDC0_SI1312 -FSDC0_SI1942 -FSDC0_SI2234 -FSDC0_SX142 -FSDC0_SX232 -FSDC0_SX322 -FSDC0_SX412 -FSDC0_SX52 -FSDJ0_SI1115 -FSDJ0_SI1745 -FSDJ0_SI485 -FSDJ0_SX125 -FSDJ0_SX215 -FSDJ0_SX305 -FSDJ0_SX35 -FSDJ0_SX395 -FSGF0_SI1557 -FSGF0_SI2187 -FSGF0_SI927 -FSGF0_SX117 -FSGF0_SX207 -FSGF0_SX27 -FSGF0_SX297 -FSGF0_SX387 -FSJG0_SI1570 -FSJG0_SI2200 -FSJG0_SI940 -FSJG0_SX130 -FSJG0_SX220 -FSJG0_SX310 -FSJG0_SX40 -FSJG0_SX400 -FSJK1_SI1025 -FSJK1_SI2285 -FSJK1_SI696 -FSJK1_SX125 -FSJK1_SX215 -FSJK1_SX305 -FSJK1_SX35 -FSJK1_SX395 -FSJS0_SI1171 -FSJS0_SI1801 -FSJS0_SI541 -FSJS0_SX181 -FSJS0_SX271 -FSJS0_SX361 -FSJS0_SX451 -FSJS0_SX91 -FSJW0_SI1333 -FSJW0_SI1963 -FSJW0_SI703 -FSJW0_SX163 -FSJW0_SX253 -FSJW0_SX343 -FSJW0_SX433 -FSJW0_SX73 -FSKC0_SI1416 -FSKC0_SI2046 -FSKC0_SI786 -FSKC0_SX156 -FSKC0_SX246 -FSKC0_SX336 -FSKC0_SX426 -FSKC0_SX66 -FSKL0_SI1529 -FSKL0_SI2159 -FSKL0_SI899 -FSKL0_SX179 -FSKL0_SX269 -FSKL0_SX359 -FSKL0_SX449 -FSKL0_SX89 -FSKP0_SI1098 -FSKP0_SI1728 -FSKP0_SI468 -FSKP0_SX108 -FSKP0_SX18 -FSKP0_SX198 -FSKP0_SX288 -FSKP0_SX378 -FSLS0_SI1056 -FSLS0_SI1686 -FSLS0_SI2316 -FSLS0_SX156 -FSLS0_SX202 -FSLS0_SX246 -FSLS0_SX426 -FSLS0_SX66 -FSMA0_SI1621 -FSMA0_SI2251 -FSMA0_SI991 -FSMA0_SX181 -FSMA0_SX271 -FSMA0_SX361 -FSMA0_SX451 -FSMA0_SX91 -FSMM0_SI1314 -FSMM0_SI1944 -FSMM0_SI684 -FSMM0_SX144 -FSMM0_SX234 -FSMM0_SX324 -FSMM0_SX414 -FSMM0_SX54 -FSMS1_SI1504 -FSMS1_SI2134 -FSMS1_SI874 -FSMS1_SX154 -FSMS1_SX244 -FSMS1_SX334 -FSMS1_SX347 -FSMS1_SX64 -FSPM0_SI1241 -FSPM0_SI1871 -FSPM0_SI611 -FSPM0_SX161 -FSPM0_SX251 -FSPM0_SX341 -FSPM0_SX431 -FSPM0_SX71 -FSRH0_SI1719 -FSRH0_SI1931 -FSRH0_SI671 -FSRH0_SX131 -FSRH0_SX221 -FSRH0_SX311 -FSRH0_SX401 -FSRH0_SX41 -FSSB0_SI1082 -FSSB0_SI1712 -FSSB0_SI2342 -FSSB0_SX182 -FSSB0_SX272 -FSSB0_SX362 -FSSB0_SX452 -FSSB0_SX92 -FTAJ0_SI1329 -FTAJ0_SI474 -FTAJ0_SI699 -FTAJ0_SX159 -FTAJ0_SX249 -FTAJ0_SX339 -FTAJ0_SX429 -FTAJ0_SX69 -FTBR0_SI1402 -FTBR0_SI2181 -FTBR0_SI921 -FTBR0_SX111 -FTBR0_SX201 -FTBR0_SX21 -FTBR0_SX291 -FTBR0_SX381 -FTBW0_SI1345 -FTBW0_SI1975 -FTBW0_SI715 -FTBW0_SX175 -FTBW0_SX265 -FTBW0_SX355 -FTBW0_SX445 -FTBW0_SX85 -FTLG0_SI1743 -FTLG0_SI483 -FTLG0_SI840 -FTLG0_SX123 -FTLG0_SX213 -FTLG0_SX303 -FTLG0_SX33 -FTLG0_SX393 -FTMG0_SI1532 -FTMG0_SI2162 -FTMG0_SI902 -FTMG0_SX182 -FTMG0_SX272 -FTMG0_SX362 -FTMG0_SX452 -FTMG0_SX92 -FVFB0_SI1032 -FVFB0_SI1510 -FVFB0_SI2292 -FVFB0_SX132 -FVFB0_SX222 -FVFB0_SX312 -FVFB0_SX402 -FVFB0_SX42 -FVKB0_SI1159 -FVKB0_SI1789 -FVKB0_SI529 -FVKB0_SX169 -FVKB0_SX259 -FVKB0_SX349 -FVKB0_SX439 -FVKB0_SX79 -FVMH0_SI1466 -FVMH0_SI2096 -FVMH0_SI836 -FVMH0_SX116 -FVMH0_SX206 -FVMH0_SX26 -FVMH0_SX296 -FVMH0_SX386 -MABC0_SI1620 -MABC0_SI2041 -MABC0_SI781 -MABC0_SX151 -MABC0_SX241 -MABC0_SX331 -MABC0_SX421 -MABC0_SX61 -MADC0_SI1367 -MADC0_SI1997 -MADC0_SI737 -MADC0_SX107 
-MADC0_SX17 -MADC0_SX197 -MADC0_SX287 -MADC0_SX377 -MADD0_SI1295 -MADD0_SI1798 -MADD0_SI538 -MADD0_SX178 -MADD0_SX268 -MADD0_SX358 -MADD0_SX448 -MADD0_SX88 -MAEB0_SI1411 -MAEB0_SI2250 -MAEB0_SI990 -MAEB0_SX180 -MAEB0_SX270 -MAEB0_SX360 -MAEB0_SX450 -MAEB0_SX90 -MAEO0_SI1326 -MAEO0_SI1655 -MAEO0_SI1956 -MAEO0_SX156 -MAEO0_SX246 -MAEO0_SX336 -MAEO0_SX426 -MAEO0_SX66 -MAFM0_SI1569 -MAFM0_SI2199 -MAFM0_SI939 -MAFM0_SX129 -MAFM0_SX219 -MAFM0_SX309 -MAFM0_SX39 -MAFM0_SX399 -MAJP0_SI1074 -MAJP0_SI1704 -MAJP0_SI2334 -MAJP0_SX174 -MAJP0_SX264 -MAJP0_SX354 -MAJP0_SX444 -MAJP0_SX84 -MAKB0_SI1016 -MAKB0_SI1646 -MAKB0_SI2276 -MAKB0_SX116 -MAKB0_SX206 -MAKB0_SX26 -MAKB0_SX296 -MAKB0_SX386 -MAKR0_SI1352 -MAKR0_SI1982 -MAKR0_SI722 -MAKR0_SX182 -MAKR0_SX272 -MAKR0_SX362 -MAKR0_SX452 -MAKR0_SX92 -MAPV0_SI1293 -MAPV0_SI1923 -MAPV0_SI663 -MAPV0_SX123 -MAPV0_SX213 -MAPV0_SX303 -MAPV0_SX33 -MAPV0_SX393 -MARC0_SI1188 -MARC0_SI1818 -MARC0_SI558 -MARC0_SX108 -MARC0_SX18 -MARC0_SX198 -MARC0_SX288 -MARC0_SX378 -MARW0_SI1276 -MARW0_SI1906 -MARW0_SI646 -MARW0_SX106 -MARW0_SX16 -MARW0_SX286 -MARW0_SX349 -MARW0_SX376 -MBAR0_SI1319 -MBAR0_SI1949 -MBAR0_SI689 -MBAR0_SX149 -MBAR0_SX239 -MBAR0_SX329 -MBAR0_SX419 -MBAR0_SX59 -MBBR0_SI1055 -MBBR0_SI1685 -MBBR0_SI2315 -MBBR0_SX155 -MBBR0_SX245 -MBBR0_SX335 -MBBR0_SX425 -MBBR0_SX65 -MBCG0_SI2217 -MBCG0_SI486 -MBCG0_SI957 -MBCG0_SX147 -MBCG0_SX237 -MBCG0_SX327 -MBCG0_SX417 -MBCG0_SX57 -MBEF0_SI1281 -MBEF0_SI1911 -MBEF0_SI651 -MBEF0_SX111 -MBEF0_SX201 -MBEF0_SX21 -MBEF0_SX291 -MBEF0_SX381 -MBGT0_SI1341 -MBGT0_SI1841 -MBGT0_SI711 -MBGT0_SX171 -MBGT0_SX261 -MBGT0_SX351 -MBGT0_SX441 -MBGT0_SX81 -MBJV0_SI1247 -MBJV0_SI1877 -MBJV0_SI617 -MBJV0_SX167 -MBJV0_SX257 -MBJV0_SX347 -MBJV0_SX437 -MBJV0_SX77 -MBMA0_SI1222 -MBMA0_SI1852 -MBMA0_SI592 -MBMA0_SX142 -MBMA0_SX232 -MBMA0_SX322 -MBMA0_SX412 -MBMA0_SX52 -MBMA1_SI2207 -MBMA1_SI2214 -MBMA1_SI954 -MBMA1_SX144 -MBMA1_SX234 -MBMA1_SX324 -MBMA1_SX414 -MBMA1_SX54 -MBML0_SI1169 -MBML0_SI1799 -MBML0_SI539 -MBML0_SX179 -MBML0_SX269 -MBML0_SX359 -MBML0_SX449 -MBML0_SX89 -MBOM0_SI1014 -MBOM0_SI1644 -MBOM0_SI2274 -MBOM0_SX114 -MBOM0_SX204 -MBOM0_SX294 -MBOM0_SX311 -MBOM0_SX384 -MBSB0_SI1353 -MBSB0_SI1983 -MBSB0_SI723 -MBSB0_SX183 -MBSB0_SX273 -MBSB0_SX3 -MBSB0_SX363 -MBSB0_SX93 -MBTH0_SI2102 -MBTH0_SI505 -MBTH0_SI757 -MBTH0_SX122 -MBTH0_SX212 -MBTH0_SX302 -MBTH0_SX32 -MBTH0_SX392 -MBWP0_SI1531 -MBWP0_SI1969 -MBWP0_SI709 -MBWP0_SX169 -MBWP0_SX259 -MBWP0_SX349 -MBWP0_SX439 -MBWP0_SX79 -MCAE0_SI1447 -MCAE0_SI2077 -MCAE0_SI817 -MCAE0_SX187 -MCAE0_SX277 -MCAE0_SX367 -MCAE0_SX7 -MCAE0_SX97 -MCAL0_SI1138 -MCAL0_SI1768 -MCAL0_SI508 -MCAL0_SX148 -MCAL0_SX238 -MCAL0_SX328 -MCAL0_SX418 -MCAL0_SX58 -MCDC0_SI1292 -MCDC0_SI1922 -MCDC0_SI662 -MCDC0_SX122 -MCDC0_SX212 -MCDC0_SX302 -MCDC0_SX32 -MCDC0_SX392 -MCDD0_SI1513 -MCDD0_SI2143 -MCDD0_SI883 -MCDD0_SX163 -MCDD0_SX253 -MCDD0_SX343 -MCDD0_SX433 -MCDD0_SX73 -MCDR0_SI1154 -MCDR0_SI1784 -MCDR0_SI524 -MCDR0_SX164 -MCDR0_SX254 -MCDR0_SX344 -MCDR0_SX434 -MCDR0_SX74 -MCEF0_SI1135 -MCEF0_SI1765 -MCEF0_SI842 -MCEF0_SX145 -MCEF0_SX235 -MCEF0_SX325 -MCEF0_SX415 -MCEF0_SX55 -MCEW0_SI1442 -MCEW0_SI2072 -MCEW0_SI812 -MCEW0_SX182 -MCEW0_SX272 -MCEW0_SX362 -MCEW0_SX452 -MCEW0_SX92 -MCHL0_SI1347 -MCHL0_SI1404 -MCHL0_SI1977 -MCHL0_SX177 -MCHL0_SX267 -MCHL0_SX357 -MCHL0_SX447 -MCHL0_SX87 -MCLK0_SI1660 -MCLK0_SI2290 -MCLK0_SI650 -MCLK0_SX130 -MCLK0_SX220 -MCLK0_SX310 -MCLK0_SX40 -MCLK0_SX400 -MCLM0_SI1456 -MCLM0_SI2086 -MCLM0_SI826 -MCLM0_SX106 -MCLM0_SX16 -MCLM0_SX196 -MCLM0_SX286 -MCLM0_SX376 -MCPM0_SI1194 -MCPM0_SI1824 
-MCPM0_SI564 -MCPM0_SX114 -MCPM0_SX204 -MCPM0_SX24 -MCPM0_SX294 -MCPM0_SX384 -MCRE0_SI1121 -MCRE0_SI1725 -MCRE0_SI1751 -MCRE0_SX131 -MCRE0_SX221 -MCRE0_SX24 -MCRE0_SX401 -MCRE0_SX41 -MCSS0_SI1380 -MCSS0_SI688 -MCSS0_SI750 -MCSS0_SX120 -MCSS0_SX210 -MCSS0_SX30 -MCSS0_SX300 -MCSS0_SX390 -MCTH0_SI1209 -MCTH0_SI1839 -MCTH0_SI579 -MCTH0_SX129 -MCTH0_SX219 -MCTH0_SX309 -MCTH0_SX39 -MCTH0_SX399 -MCTM0_SI1350 -MCTM0_SI1980 -MCTM0_SI720 -MCTM0_SX180 -MCTM0_SX270 -MCTM0_SX360 -MCTM0_SX450 -MCTM0_SX90 -MCXM0_SI1351 -MCXM0_SI1981 -MCXM0_SI721 -MCXM0_SX181 -MCXM0_SX271 -MCXM0_SX361 -MCXM0_SX451 -MCXM0_SX91 -MDAC0_SI1261 -MDAC0_SI1837 -MDAC0_SI631 -MDAC0_SX181 -MDAC0_SX271 -MDAC0_SX361 -MDAC0_SX451 -MDAC0_SX91 -MDAS0_SI1266 -MDAS0_SI1896 -MDAS0_SI636 -MDAS0_SX186 -MDAS0_SX21 -MDAS0_SX276 -MDAS0_SX6 -MDAS0_SX96 -MDBB1_SI1006 -MDBB1_SI1636 -MDBB1_SI2056 -MDBB1_SX106 -MDBB1_SX16 -MDBB1_SX196 -MDBB1_SX286 -MDBB1_SX376 -MDBP0_SI1158 -MDBP0_SI1788 -MDBP0_SI528 -MDBP0_SX168 -MDBP0_SX258 -MDBP0_SX348 -MDBP0_SX438 -MDBP0_SX78 -MDCD0_SI1415 -MDCD0_SI2045 -MDCD0_SI785 -MDCD0_SX155 -MDCD0_SX245 -MDCD0_SX335 -MDCD0_SX425 -MDCD0_SX65 -MDCM0_SI1480 -MDCM0_SI2110 -MDCM0_SI850 -MDCM0_SX130 -MDCM0_SX220 -MDCM0_SX310 -MDCM0_SX40 -MDCM0_SX400 -MDDC0_SI1419 -MDDC0_SI2049 -MDDC0_SI789 -MDDC0_SX159 -MDDC0_SX249 -MDDC0_SX339 -MDDC0_SX429 -MDDC0_SX69 -MDED0_SI1170 -MDED0_SI1800 -MDED0_SI540 -MDED0_SX180 -MDED0_SX270 -MDED0_SX360 -MDED0_SX450 -MDED0_SX90 -MDEF0_SI1123 -MDEF0_SI1563 -MDEF0_SI2193 -MDEF0_SX123 -MDEF0_SX213 -MDEF0_SX303 -MDEF0_SX33 -MDEF0_SX393 -MDEM0_SI1868 -MDEM0_SI608 -MDEM0_SI800 -MDEM0_SX158 -MDEM0_SX248 -MDEM0_SX338 -MDEM0_SX428 -MDEM0_SX68 -MDHL0_SI1439 -MDHL0_SI2069 -MDHL0_SI809 -MDHL0_SX179 -MDHL0_SX269 -MDHL0_SX359 -MDHL0_SX449 -MDHL0_SX89 -MDHS0_SI1530 -MDHS0_SI2160 -MDHS0_SI900 -MDHS0_SX180 -MDHS0_SX270 -MDHS0_SX360 -MDHS0_SX450 -MDHS0_SX90 -MDJM0_SI1455 -MDJM0_SI2085 -MDJM0_SI825 -MDJM0_SX105 -MDJM0_SX15 -MDJM0_SX195 -MDJM0_SX285 -MDJM0_SX375 -MDKS0_SI1066 -MDKS0_SI1696 -MDKS0_SI2326 -MDKS0_SX166 -MDKS0_SX256 -MDKS0_SX346 -MDKS0_SX436 -MDKS0_SX76 -MDLB0_SI1306 -MDLB0_SI1936 -MDLB0_SI676 -MDLB0_SX136 -MDLB0_SX226 -MDLB0_SX316 -MDLB0_SX406 -MDLB0_SX46 -MDLC0_SI1395 -MDLC0_SI2025 -MDLC0_SI765 -MDLC0_SX135 -MDLC0_SX225 -MDLC0_SX315 -MDLC0_SX405 -MDLC0_SX45 -MDLC1_SI1435 -MDLC1_SI2065 -MDLC1_SI2144 -MDLC1_SX175 -MDLC1_SX265 -MDLC1_SX355 -MDLC1_SX445 -MDLC1_SX85 -MDLC2_SI1614 -MDLC2_SI2244 -MDLC2_SI984 -MDLC2_SX174 -MDLC2_SX264 -MDLC2_SX354 -MDLC2_SX444 -MDLC2_SX84 -MDLH0_SI1960 -MDLH0_SI574 -MDLH0_SI700 -MDLH0_SX160 -MDLH0_SX250 -MDLH0_SX340 -MDLH0_SX430 -MDLH0_SX70 -MDLM0_SI1234 -MDLM0_SI1864 -MDLM0_SI604 -MDLM0_SX154 -MDLM0_SX244 -MDLM0_SX334 -MDLM0_SX424 -MDLM0_SX64 -MDLR0_SI1233 -MDLR0_SI1863 -MDLR0_SI603 -MDLR0_SX153 -MDLR0_SX243 -MDLR0_SX333 -MDLR0_SX423 -MDLR0_SX63 -MDLR1_SI1299 -MDLR1_SI1929 -MDLR1_SI669 -MDLR1_SX129 -MDLR1_SX219 -MDLR1_SX309 -MDLR1_SX39 -MDLR1_SX399 -MDMA0_SI1238 -MDMA0_SI1430 -MDMA0_SI2060 -MDMA0_SX170 -MDMA0_SX260 -MDMA0_SX350 -MDMA0_SX440 -MDMA0_SX80 -MDMT0_SI1832 -MDMT0_SI2341 -MDMT0_SI572 -MDMT0_SX122 -MDMT0_SX212 -MDMT0_SX302 -MDMT0_SX32 -MDMT0_SX392 -MDNS0_SI1011 -MDNS0_SI2271 -MDNS0_SI873 -MDNS0_SX111 -MDNS0_SX201 -MDNS0_SX21 -MDNS0_SX291 -MDNS0_SX381 -MDPB0_SI1760 -MDPB0_SI2126 -MDPB0_SI866 -MDPB0_SX146 -MDPB0_SX236 -MDPB0_SX326 -MDPB0_SX416 -MDPB0_SX56 -MDPK0_SI1053 -MDPK0_SI1683 -MDPK0_SI552 -MDPK0_SX153 -MDPK0_SX243 -MDPK0_SX333 -MDPK0_SX423 -MDPK0_SX63 -MDPS0_SI1651 -MDPS0_SI1979 -MDPS0_SI719 -MDPS0_SX179 -MDPS0_SX269 -MDPS0_SX359 -MDPS0_SX449 -MDPS0_SX89 -MDRD0_SI1382 
-MDRD0_SI2012 -MDRD0_SI752 -MDRD0_SX122 -MDRD0_SX212 -MDRD0_SX302 -MDRD0_SX32 -MDRD0_SX392 -MDSJ0_SI1462 -MDSJ0_SI2092 -MDSJ0_SI832 -MDSJ0_SX112 -MDSJ0_SX22 -MDSJ0_SX292 -MDSJ0_SX382 -MDSJ0_SX438 -MDSS0_SI1881 -MDSS0_SI2087 -MDSS0_SI621 -MDSS0_SX171 -MDSS0_SX261 -MDSS0_SX351 -MDSS0_SX441 -MDSS0_SX81 -MDSS1_SI1327 -MDSS1_SI1713 -MDSS1_SI697 -MDSS1_SX157 -MDSS1_SX247 -MDSS1_SX337 -MDSS1_SX427 -MDSS1_SX67 -MDTB0_SI1200 -MDTB0_SI1830 -MDTB0_SI570 -MDTB0_SX120 -MDTB0_SX210 -MDTB0_SX300 -MDTB0_SX321 -MDTB0_SX390 -MDWD0_SI1260 -MDWD0_SI1890 -MDWD0_SI557 -MDWD0_SX180 -MDWD0_SX270 -MDWD0_SX360 -MDWD0_SX450 -MDWD0_SX90 -MDWH0_SI1168 -MDWH0_SI1925 -MDWH0_SI665 -MDWH0_SX125 -MDWH0_SX215 -MDWH0_SX305 -MDWH0_SX35 -MDWH0_SX395 -MDWM0_SI1546 -MDWM0_SI2176 -MDWM0_SI916 -MDWM0_SX106 -MDWM0_SX16 -MDWM0_SX286 -MDWM0_SX376 -MDWM0_SX433 -MEAL0_SI1547 -MEAL0_SI2177 -MEAL0_SI917 -MEAL0_SX107 -MEAL0_SX197 -MEAL0_SX287 -MEAL0_SX347 -MEAL0_SX377 -MEDR0_SI1374 -MEDR0_SI2004 -MEDR0_SI744 -MEDR0_SX114 -MEDR0_SX204 -MEDR0_SX24 -MEDR0_SX294 -MEDR0_SX384 -MEFG0_SI465 -MEFG0_SI491 -MEFG0_SI598 -MEFG0_SX105 -MEFG0_SX15 -MEFG0_SX195 -MEFG0_SX285 -MEFG0_SX375 -MEGJ0_SI1337 -MEGJ0_SI1967 -MEGJ0_SI707 -MEGJ0_SX167 -MEGJ0_SX257 -MEGJ0_SX3 -MEGJ0_SX437 -MEGJ0_SX77 -MEJL0_SI1592 -MEJL0_SI1654 -MEJL0_SI962 -MEJL0_SX152 -MEJL0_SX242 -MEJL0_SX332 -MEJL0_SX422 -MEJL0_SX62 -MEJS0_SI1240 -MEJS0_SI1870 -MEJS0_SI610 -MEJS0_SX160 -MEJS0_SX250 -MEJS0_SX340 -MEJS0_SX430 -MEJS0_SX70 -MESG0_SI1332 -MESG0_SI1962 -MESG0_SI702 -MESG0_SX162 -MESG0_SX252 -MESG0_SX342 -MESG0_SX432 -MESG0_SX72 -MESJ0_SI2039 -MESJ0_SI2257 -MESJ0_SI997 -MESJ0_SX187 -MESJ0_SX277 -MESJ0_SX367 -MESJ0_SX7 -MESJ0_SX97 -MEWM0_SI1348 -MEWM0_SI1978 -MEWM0_SI718 -MEWM0_SX178 -MEWM0_SX268 -MEWM0_SX358 -MEWM0_SX448 -MEWM0_SX88 -MFER0_SI1492 -MFER0_SI2122 -MFER0_SI862 -MFER0_SX142 -MFER0_SX232 -MFER0_SX322 -MFER0_SX412 -MFER0_SX52 -MFMC0_SI1132 -MFMC0_SI1762 -MFMC0_SI502 -MFMC0_SX142 -MFMC0_SX232 -MFMC0_SX322 -MFMC0_SX412 -MFMC0_SX52 -MFRM0_SI1155 -MFRM0_SI1717 -MFRM0_SI1785 -MFRM0_SX165 -MFRM0_SX255 -MFRM0_SX345 -MFRM0_SX435 -MFRM0_SX75 -MFWK0_SI1249 -MFWK0_SI1879 -MFWK0_SI619 -MFWK0_SX169 -MFWK0_SX259 -MFWK0_SX349 -MFWK0_SX439 -MFWK0_SX79 -MFXS0_SI1674 -MFXS0_SI2225 -MFXS0_SI2304 -MFXS0_SX144 -MFXS0_SX234 -MFXS0_SX324 -MFXS0_SX414 -MFXS0_SX54 -MFXV0_SI1005 -MFXV0_SI1342 -MFXV0_SI1635 -MFXV0_SX105 -MFXV0_SX15 -MFXV0_SX195 -MFXV0_SX285 -MFXV0_SX375 -MGAF0_SI1282 -MGAF0_SI1912 -MGAF0_SI652 -MGAF0_SX112 -MGAF0_SX202 -MGAF0_SX22 -MGAF0_SX292 -MGAF0_SX382 -MGAG0_SI1321 -MGAG0_SI645 -MGAG0_SI691 -MGAG0_SX151 -MGAG0_SX241 -MGAG0_SX331 -MGAG0_SX421 -MGAG0_SX61 -MGAK0_SI1036 -MGAK0_SI1666 -MGAK0_SI2296 -MGAK0_SX136 -MGAK0_SX226 -MGAK0_SX316 -MGAK0_SX406 -MGAK0_SX46 -MGAR0_SI1212 -MGAR0_SI1694 -MGAR0_SI1842 -MGAR0_SX132 -MGAR0_SX222 -MGAR0_SX312 -MGAR0_SX402 -MGAR0_SX42 -MGAW0_SI1165 -MGAW0_SI1802 -MGAW0_SI535 -MGAW0_SX175 -MGAW0_SX265 -MGAW0_SX355 -MGAW0_SX445 -MGAW0_SX85 -MGES0_SI1481 -MGES0_SI2111 -MGES0_SI851 -MGES0_SX131 -MGES0_SX221 -MGES0_SX311 -MGES0_SX401 -MGES0_SX41 -MGJC0_SI1256 -MGJC0_SI1335 -MGJC0_SI1965 -MGJC0_SX165 -MGJC0_SX255 -MGJC0_SX345 -MGJC0_SX435 -MGJC0_SX75 -MGRL0_SI1497 -MGRL0_SI2127 -MGRL0_SI867 -MGRL0_SX147 -MGRL0_SX237 -MGRL0_SX327 -MGRL0_SX417 -MGRL0_SX57 -MGRP0_SI1317 -MGRP0_SI1947 -MGRP0_SI687 -MGRP0_SX147 -MGRP0_SX237 -MGRP0_SX327 -MGRP0_SX417 -MGRP0_SX57 -MGSH0_SI1176 -MGSH0_SI1806 -MGSH0_SI546 -MGSH0_SX127 -MGSH0_SX186 -MGSH0_SX276 -MGSH0_SX6 -MGSH0_SX96 -MGSL0_SI1164 -MGSL0_SI534 -MGSL0_SI797 -MGSL0_SX174 -MGSL0_SX264 -MGSL0_SX354 -MGSL0_SX444 -MGSL0_SX84 
-MGXP0_SI1087 -MGXP0_SI457 -MGXP0_SI525 -MGXP0_SX187 -MGXP0_SX277 -MGXP0_SX367 -MGXP0_SX7 -MGXP0_SX97 -MHBS0_SI1575 -MHBS0_SI2205 -MHBS0_SI945 -MHBS0_SX135 -MHBS0_SX225 -MHBS0_SX315 -MHBS0_SX405 -MHBS0_SX45 -MHIT0_SI1613 -MHIT0_SI2243 -MHIT0_SI983 -MHIT0_SX173 -MHIT0_SX263 -MHIT0_SX353 -MHIT0_SX443 -MHIT0_SX83 -MHJB0_SI1017 -MHJB0_SI1647 -MHJB0_SI2277 -MHJB0_SX117 -MHJB0_SX207 -MHJB0_SX27 -MHJB0_SX297 -MHJB0_SX387 -MHMG0_SI1365 -MHMG0_SI1995 -MHMG0_SI735 -MHMG0_SX105 -MHMG0_SX15 -MHMG0_SX195 -MHMG0_SX285 -MHMG0_SX375 -MHMR0_SI1119 -MHMR0_SI1692 -MHMR0_SI489 -MHMR0_SX129 -MHMR0_SX219 -MHMR0_SX309 -MHMR0_SX39 -MHMR0_SX399 -MHRM0_SI1475 -MHRM0_SI2218 -MHRM0_SI958 -MHRM0_SX148 -MHRM0_SX238 -MHRM0_SX328 -MHRM0_SX418 -MHRM0_SX58 -MHXL0_SI1772 -MHXL0_SI512 -MHXL0_SI612 -MHXL0_SX152 -MHXL0_SX242 -MHXL0_SX332 -MHXL0_SX422 -MHXL0_SX62 -MILB0_SI2163 -MILB0_SI807 -MILB0_SI903 -MILB0_SX183 -MILB0_SX273 -MILB0_SX3 -MILB0_SX363 -MILB0_SX93 -MJAC0_SI1331 -MJAC0_SI2148 -MJAC0_SI701 -MJAC0_SX251 -MJAC0_SX307 -MJAC0_SX341 -MJAC0_SX431 -MJAC0_SX71 -MJAE0_SI1524 -MJAE0_SI1999 -MJAE0_SI2154 -MJAE0_SX174 -MJAE0_SX264 -MJAE0_SX354 -MJAE0_SX444 -MJAE0_SX84 -MJAI0_SI1604 -MJAI0_SI682 -MJAI0_SI710 -MJAI0_SX164 -MJAI0_SX254 -MJAI0_SX344 -MJAI0_SX434 -MJAI0_SX74 -MJBG0_SI1232 -MJBG0_SI1724 -MJBG0_SI1862 -MJBG0_SX152 -MJBG0_SX242 -MJBG0_SX332 -MJBG0_SX422 -MJBG0_SX62 -MJDA0_SI1031 -MJDA0_SI1661 -MJDA0_SI2291 -MJDA0_SX131 -MJDA0_SX221 -MJDA0_SX311 -MJDA0_SX401 -MJDA0_SX41 -MJDC0_SI1161 -MJDC0_SI2165 -MJDC0_SI531 -MJDC0_SX171 -MJDC0_SX261 -MJDC0_SX351 -MJDC0_SX441 -MJDC0_SX81 -MJDE0_SI1120 -MJDE0_SI463 -MJDE0_SI490 -MJDE0_SX130 -MJDE0_SX220 -MJDE0_SX310 -MJDE0_SX40 -MJDE0_SX400 -MJDG0_SI1042 -MJDG0_SI1672 -MJDG0_SI1705 -MJDG0_SX142 -MJDG0_SX232 -MJDG0_SX322 -MJDG0_SX412 -MJDG0_SX52 -MJDM0_SI1340 -MJDM0_SI1937 -MJDM0_SI974 -MJDM0_SX170 -MJDM0_SX260 -MJDM0_SX350 -MJDM0_SX440 -MJDM0_SX80 -MJEB0_SI1286 -MJEB0_SI1916 -MJEB0_SI656 -MJEB0_SX170 -MJEB0_SX206 -MJEB0_SX26 -MJEB0_SX296 -MJEB0_SX386 -MJEB1_SI1467 -MJEB1_SI2097 -MJEB1_SI837 -MJEB1_SX117 -MJEB1_SX207 -MJEB1_SX27 -MJEB1_SX297 -MJEB1_SX387 -MJEE0_SI1237 -MJEE0_SI1867 -MJEE0_SI607 -MJEE0_SX157 -MJEE0_SX247 -MJEE0_SX337 -MJEE0_SX427 -MJEE0_SX67 -MJFH0_SI1107 -MJFH0_SI1737 -MJFH0_SI477 -MJFH0_SX117 -MJFH0_SX207 -MJFH0_SX27 -MJFH0_SX297 -MJFH0_SX387 -MJFR0_SI1605 -MJFR0_SI2235 -MJFR0_SI975 -MJFR0_SX165 -MJFR0_SX255 -MJFR0_SX345 -MJFR0_SX435 -MJFR0_SX75 -MJHI0_SI1328 -MJHI0_SI555 -MJHI0_SI698 -MJHI0_SX158 -MJHI0_SX248 -MJHI0_SX338 -MJHI0_SX428 -MJHI0_SX68 -MJJB0_SI1139 -MJJB0_SI1277 -MJJB0_SI1769 -MJJB0_SX149 -MJJB0_SX239 -MJJB0_SX329 -MJJB0_SX419 -MJJB0_SX59 -MJJJ0_SI1163 -MJJJ0_SI1793 -MJJJ0_SI533 -MJJJ0_SX173 -MJJJ0_SX263 -MJJJ0_SX353 -MJJJ0_SX443 -MJJJ0_SX83 -MJJM0_SI1251 -MJJM0_SI1457 -MJJM0_SI827 -MJJM0_SX107 -MJJM0_SX17 -MJJM0_SX197 -MJJM0_SX287 -MJJM0_SX377 -MJKR0_SI1201 -MJKR0_SI1831 -MJKR0_SI571 -MJKR0_SX121 -MJKR0_SX211 -MJKR0_SX301 -MJKR0_SX31 -MJKR0_SX391 -MJLB0_SI1616 -MJLB0_SI2246 -MJLB0_SI986 -MJLB0_SX176 -MJLB0_SX266 -MJLB0_SX356 -MJLB0_SX446 -MJLB0_SX86 -MJLG1_SI1012 -MJLG1_SI1642 -MJLG1_SI2272 -MJLG1_SX112 -MJLG1_SX202 -MJLG1_SX22 -MJLG1_SX292 -MJLG1_SX382 -MJLS0_SI1096 -MJLS0_SI1726 -MJLS0_SI466 -MJLS0_SX106 -MJLS0_SX16 -MJLS0_SX196 -MJLS0_SX286 -MJLS0_SX376 -MJMA0_SI1495 -MJMA0_SI2125 -MJMA0_SI865 -MJMA0_SX145 -MJMA0_SX235 -MJMA0_SX325 -MJMA0_SX415 -MJMA0_SX55 -MJMD0_SI1028 -MJMD0_SI1658 -MJMD0_SI2288 -MJMD0_SX128 -MJMD0_SX218 -MJMD0_SX308 -MJMD0_SX38 -MJMD0_SX398 -MJMM0_SI1255 -MJMM0_SI1885 -MJMM0_SI625 -MJMM0_SX175 -MJMM0_SX265 -MJMM0_SX355 
-MJMM0_SX445 -MJMM0_SX85 -MJPG0_SI1191 -MJPG0_SI1821 -MJPG0_SI561 -MJPG0_SX111 -MJPG0_SX201 -MJPG0_SX21 -MJPG0_SX291 -MJPG0_SX381 -MJPM0_SI1368 -MJPM0_SI1998 -MJPM0_SI738 -MJPM0_SX108 -MJPM0_SX18 -MJPM0_SX198 -MJPM0_SX288 -MJPM0_SX378 -MJPM1_SI1897 -MJPM1_SI2280 -MJPM1_SI761 -MJPM1_SX131 -MJPM1_SX221 -MJPM1_SX311 -MJPM1_SX401 -MJPM1_SX41 -MJRA0_SI1236 -MJRA0_SI1866 -MJRA0_SI606 -MJRA0_SX156 -MJRA0_SX246 -MJRA0_SX336 -MJRA0_SX426 -MJRA0_SX66 -MJRG0_SI1366 -MJRG0_SI1996 -MJRG0_SI736 -MJRG0_SX106 -MJRG0_SX16 -MJRG0_SX286 -MJRG0_SX352 -MJRG0_SX376 -MJRH0_SI1125 -MJRH0_SI1755 -MJRH0_SI1840 -MJRH0_SX135 -MJRH0_SX225 -MJRH0_SX315 -MJRH0_SX405 -MJRH0_SX45 -MJRH1_SI1558 -MJRH1_SI1774 -MJRH1_SI514 -MJRH1_SX154 -MJRH1_SX244 -MJRH1_SX334 -MJRH1_SX424 -MJRH1_SX64 -MJRK0_SI1662 -MJRK0_SI2103 -MJRK0_SI880 -MJRK0_SX160 -MJRK0_SX250 -MJRK0_SX340 -MJRK0_SX430 -MJRK0_SX70 -MJRP0_SI1835 -MJRP0_SI1845 -MJRP0_SI585 -MJRP0_SX135 -MJRP0_SX225 -MJRP0_SX315 -MJRP0_SX405 -MJRP0_SX45 -MJSR0_SI1424 -MJSR0_SI2054 -MJSR0_SI794 -MJSR0_SX164 -MJSR0_SX254 -MJSR0_SX344 -MJSR0_SX434 -MJSR0_SX74 -MJWG0_SI2155 -MJWG0_SI813 -MJWG0_SI895 -MJWG0_SX175 -MJWG0_SX265 -MJWG0_SX355 -MJWG0_SX445 -MJWG0_SX85 -MJWS0_SI1143 -MJWS0_SI1773 -MJWS0_SI513 -MJWS0_SX153 -MJWS0_SX243 -MJWS0_SX333 -MJWS0_SX423 -MJWS0_SX63 -MJWT0_SI1291 -MJWT0_SI1381 -MJWT0_SI751 -MJWT0_SX121 -MJWT0_SX211 -MJWT0_SX301 -MJWT0_SX31 -MJWT0_SX391 -MJXA0_SI1507 -MJXA0_SI2137 -MJXA0_SI877 -MJXA0_SX157 -MJXA0_SX247 -MJXA0_SX337 -MJXA0_SX427 -MJXA0_SX67 -MJXL0_SI1172 -MJXL0_SI1795 -MJXL0_SI542 -MJXL0_SX182 -MJXL0_SX272 -MJXL0_SX362 -MJXL0_SX452 -MJXL0_SX92 -MKAG0_SI1609 -MKAG0_SI2239 -MKAG0_SI979 -MKAG0_SX169 -MKAG0_SX259 -MKAG0_SX30 -MKAG0_SX439 -MKAG0_SX79 -MKAH0_SI1528 -MKAH0_SI2158 -MKAH0_SI898 -MKAH0_SX178 -MKAH0_SX268 -MKAH0_SX358 -MKAH0_SX448 -MKAH0_SX88 -MKAJ0_SI1414 -MKAJ0_SI2044 -MKAJ0_SI784 -MKAJ0_SX154 -MKAJ0_SX244 -MKAJ0_SX334 -MKAJ0_SX424 -MKAJ0_SX64 -MKAM0_SI1250 -MKAM0_SI1316 -MKAM0_SI1465 -MKAM0_SX146 -MKAM0_SX236 -MKAM0_SX326 -MKAM0_SX416 -MKAM0_SX56 -MKDB0_SI2132 -MKDB0_SI588 -MKDB0_SI872 -MKDB0_SX152 -MKDB0_SX242 -MKDB0_SX332 -MKDB0_SX422 -MKDB0_SX62 -MKDD0_SI1567 -MKDD0_SI2197 -MKDD0_SI937 -MKDD0_SX127 -MKDD0_SX217 -MKDD0_SX307 -MKDD0_SX37 -MKDD0_SX397 -MKDT0_SI2153 -MKDT0_SI814 -MKDT0_SI893 -MKDT0_SX173 -MKDT0_SX263 -MKDT0_SX353 -MKDT0_SX443 -MKDT0_SX83 -MKES0_SI1253 -MKES0_SI1883 -MKES0_SI623 -MKES0_SX173 -MKES0_SX263 -MKES0_SX353 -MKES0_SX443 -MKES0_SX83 -MKJO0_SI1517 -MKJO0_SI2147 -MKJO0_SI887 -MKJO0_SX167 -MKJO0_SX257 -MKJO0_SX424 -MKJO0_SX437 -MKJO0_SX77 -MKLN0_SI1598 -MKLN0_SI2228 -MKLN0_SI968 -MKLN0_SX158 -MKLN0_SX248 -MKLN0_SX338 -MKLN0_SX428 -MKLN0_SX68 -MKLR0_SI1059 -MKLR0_SI1689 -MKLR0_SI2319 -MKLR0_SX159 -MKLR0_SX249 -MKLR0_SX339 -MKLR0_SX429 -MKLR0_SX69 -MKLS0_SI1437 -MKLS0_SI1533 -MKLS0_SI2067 -MKLS0_SX177 -MKLS0_SX267 -MKLS0_SX357 -MKLS0_SX447 -MKLS0_SX87 -MKLS1_SI1545 -MKLS1_SI2175 -MKLS1_SI915 -MKLS1_SX105 -MKLS1_SX15 -MKLS1_SX195 -MKLS1_SX285 -MKLS1_SX375 -MKLW0_SI1571 -MKLW0_SI1844 -MKLW0_SI2201 -MKLW0_SX131 -MKLW0_SX221 -MKLW0_SX311 -MKLW0_SX401 -MKLW0_SX41 -MKRG0_SI1491 -MKRG0_SI2121 -MKRG0_SI861 -MKRG0_SX141 -MKRG0_SX231 -MKRG0_SX31 -MKRG0_SX411 -MKRG0_SX51 -MKXL0_SI1185 -MKXL0_SI1815 -MKXL0_SI1958 -MKXL0_SX105 -MKXL0_SX15 -MKXL0_SX195 -MKXL0_SX285 -MKXL0_SX375 -MLBC0_SI1239 -MLBC0_SI1869 -MLBC0_SI609 -MLBC0_SX159 -MLBC0_SX249 -MLBC0_SX339 -MLBC0_SX429 -MLBC0_SX69 -MLEL0_SI1246 -MLEL0_SI1876 -MLEL0_SI616 -MLEL0_SX166 -MLEL0_SX256 -MLEL0_SX346 -MLEL0_SX436 -MLEL0_SX76 -MLJC0_SI1225 -MLJC0_SI1855 -MLJC0_SI595 -MLJC0_SX145 
-MLJC0_SX235 -MLJC0_SX325 -MLJC0_SX415 -MLJC0_SX55 -MLJH0_SI1324 -MLJH0_SI1422 -MLJH0_SI694 -MLJH0_SX154 -MLJH0_SX244 -MLJH0_SX334 -MLJH0_SX424 -MLJH0_SX64 -MLNS0_SI1407 -MLNS0_SI2037 -MLNS0_SI777 -MLNS0_SX147 -MLNS0_SX237 -MLNS0_SX327 -MLNS0_SX417 -MLNS0_SX57 -MLSH0_SI1417 -MLSH0_SI2047 -MLSH0_SI787 -MLSH0_SX157 -MLSH0_SX247 -MLSH0_SX337 -MLSH0_SX427 -MLSH0_SX67 -MMAA0_SI1588 -MMAA0_SI2105 -MMAA0_SI845 -MMAA0_SX125 -MMAA0_SX215 -MMAA0_SX305 -MMAA0_SX35 -MMAA0_SX395 -MMAB1_SI1494 -MMAB1_SI2124 -MMAB1_SI864 -MMAB1_SX144 -MMAB1_SX234 -MMAB1_SX324 -MMAB1_SX414 -MMAB1_SX54 -MMAG0_SI1126 -MMAG0_SI1756 -MMAG0_SI496 -MMAG0_SX136 -MMAG0_SX226 -MMAG0_SX316 -MMAG0_SX406 -MMAG0_SX46 -MMAM0_SI1597 -MMAM0_SI1668 -MMAM0_SI2227 -MMAM0_SX157 -MMAM0_SX247 -MMAM0_SX337 -MMAM0_SX427 -MMAM0_SX67 -MMAR0_SI1336 -MMAR0_SI1966 -MMAR0_SI706 -MMAR0_SX166 -MMAR0_SX256 -MMAR0_SX346 -MMAR0_SX436 -MMAR0_SX76 -MMBS0_SI1151 -MMBS0_SI1781 -MMBS0_SI521 -MMBS0_SX161 -MMBS0_SX251 -MMBS0_SX341 -MMBS0_SX431 -MMBS0_SX71 -MMCC0_SI1338 -MMCC0_SI1968 -MMCC0_SI708 -MMCC0_SX168 -MMCC0_SX258 -MMCC0_SX348 -MMCC0_SX438 -MMCC0_SX78 -MMDB0_SI1358 -MMDB0_SI1617 -MMDB0_SI987 -MMDB0_SX177 -MMDB0_SX267 -MMDB0_SX357 -MMDB0_SX447 -MMDB0_SX87 -MMDG0_SI1780 -MMDG0_SI2035 -MMDG0_SI520 -MMDG0_SX160 -MMDG0_SX250 -MMDG0_SX340 -MMDG0_SX430 -MMDG0_SX70 -MMDM0_SI1311 -MMDM0_SI1941 -MMDM0_SI681 -MMDM0_SX141 -MMDM0_SX231 -MMDM0_SX321 -MMDM0_SX411 -MMDM0_SX51 -MMDM1_SI1650 -MMDM1_SI2043 -MMDM1_SI783 -MMDM1_SX153 -MMDM1_SX243 -MMDM1_SX333 -MMDM1_SX423 -MMDM1_SX63 -MMDS0_SI1343 -MMDS0_SI1973 -MMDS0_SI713 -MMDS0_SX173 -MMDS0_SX263 -MMDS0_SX353 -MMDS0_SX443 -MMDS0_SX83 -MMEA0_SI1388 -MMEA0_SI2018 -MMEA0_SI758 -MMEA0_SX128 -MMEA0_SX218 -MMEA0_SX308 -MMEA0_SX38 -MMEA0_SX398 -MMEB0_SI1357 -MMEB0_SI1987 -MMEB0_SI727 -MMEB0_SX187 -MMEB0_SX327 -MMEB0_SX367 -MMEB0_SX7 -MMEB0_SX97 -MMGC0_SI1305 -MMGC0_SI1935 -MMGC0_SI2184 -MMGC0_SX135 -MMGC0_SX225 -MMGC0_SX315 -MMGC0_SX405 -MMGC0_SX45 -MMGG0_SI1079 -MMGG0_SI1709 -MMGG0_SI2339 -MMGG0_SX179 -MMGG0_SX269 -MMGG0_SX359 -MMGG0_SX449 -MMGG0_SX89 -MMGK0_SI1322 -MMGK0_SI1952 -MMGK0_SI692 -MMGK0_SX152 -MMGK0_SX242 -MMGK0_SX332 -MMGK0_SX422 -MMGK0_SX62 -MMJB1_SI1408 -MMJB1_SI2038 -MMJB1_SI778 -MMJB1_SX148 -MMJB1_SX238 -MMJB1_SX328 -MMJB1_SX418 -MMJB1_SX58 -MMLM0_SI1527 -MMLM0_SI2150 -MMLM0_SI897 -MMLM0_SX177 -MMLM0_SX267 -MMLM0_SX357 -MMLM0_SX447 -MMLM0_SX87 -MMPM0_SI1061 -MMPM0_SI1691 -MMPM0_SI2321 -MMPM0_SX161 -MMPM0_SX251 -MMPM0_SX341 -MMPM0_SX431 -MMPM0_SX71 -MMRP0_SI2034 -MMRP0_SI717 -MMRP0_SI774 -MMRP0_SX144 -MMRP0_SX234 -MMRP0_SX324 -MMRP0_SX414 -MMRP0_SX54 -MMSM0_SI1106 -MMSM0_SI1736 -MMSM0_SI476 -MMSM0_SX116 -MMSM0_SX206 -MMSM0_SX26 -MMSM0_SX296 -MMSM0_SX386 -MMVP0_SI1284 -MMVP0_SI1914 -MMVP0_SI654 -MMVP0_SX114 -MMVP0_SX204 -MMVP0_SX294 -MMVP0_SX347 -MMVP0_SX384 -MMWB0_SI1619 -MMWB0_SI2249 -MMWB0_SI989 -MMWB0_SX179 -MMWB0_SX269 -MMWB0_SX359 -MMWB0_SX449 -MMWB0_SX89 -MMWS0_SI1518 -MMWS0_SI559 -MMWS0_SI888 -MMWS0_SX168 -MMWS0_SX258 -MMWS0_SX348 -MMWS0_SX438 -MMWS0_SX78 -MMWS1_SI1071 -MMWS1_SI1701 -MMWS1_SI2331 -MMWS1_SX261 -MMWS1_SX27 -MMWS1_SX351 -MMWS1_SX441 -MMWS1_SX81 -MMXS0_SI2136 -MMXS0_SI629 -MMXS0_SI876 -MMXS0_SX156 -MMXS0_SX246 -MMXS0_SX336 -MMXS0_SX426 -MMXS0_SX66 -MNET0_SI1446 -MNET0_SI2076 -MNET0_SI816 -MNET0_SX186 -MNET0_SX276 -MNET0_SX366 -MNET0_SX6 -MNET0_SX96 -MNTW0_SI1068 -MNTW0_SI1698 -MNTW0_SI2328 -MNTW0_SX168 -MNTW0_SX202 -MNTW0_SX258 -MNTW0_SX348 -MNTW0_SX78 -MPAR0_SI1576 -MPAR0_SI2206 -MPAR0_SI946 -MPAR0_SX136 -MPAR0_SX226 -MPAR0_SX316 -MPAR0_SX406 -MPAR0_SX46 -MPEB0_SI1034 -MPEB0_SI1860 
-MPEB0_SI600 -MPEB0_SX150 -MPEB0_SX240 -MPEB0_SX330 -MPEB0_SX420 -MPEB0_SX60 -MPFU0_SI1258 -MPFU0_SI1888 -MPFU0_SI628 -MPFU0_SX178 -MPFU0_SX268 -MPFU0_SX358 -MPFU0_SX448 -MPFU0_SX88 -MPGH0_SI1554 -MPGH0_SI675 -MPGH0_SI924 -MPGH0_SX114 -MPGH0_SX204 -MPGH0_SX24 -MPGH0_SX294 -MPGH0_SX384 -MPGR0_SI1410 -MPGR0_SI2040 -MPGR0_SI780 -MPGR0_SX150 -MPGR0_SX240 -MPGR0_SX330 -MPGR0_SX420 -MPGR0_SX60 -MPGR1_SI1269 -MPGR1_SI1499 -MPGR1_SI2129 -MPGR1_SX149 -MPGR1_SX239 -MPGR1_SX329 -MPGR1_SX419 -MPGR1_SX59 -MPMB0_SI1501 -MPMB0_SI2131 -MPMB0_SI871 -MPMB0_SX151 -MPMB0_SX241 -MPMB0_SX331 -MPMB0_SX421 -MPMB0_SX61 -MPPC0_SI1412 -MPPC0_SI2042 -MPPC0_SI782 -MPPC0_SX152 -MPPC0_SX242 -MPPC0_SX332 -MPPC0_SX422 -MPPC0_SX62 -MPRB0_SI1205 -MPRB0_SI1215 -MPRB0_SI575 -MPRB0_SX125 -MPRB0_SX215 -MPRB0_SX305 -MPRB0_SX35 -MPRB0_SX395 -MPRD0_SI1431 -MPRD0_SI2061 -MPRD0_SI801 -MPRD0_SX171 -MPRD0_SX261 -MPRD0_SX351 -MPRD0_SX441 -MPRD0_SX81 -MPRK0_SI1097 -MPRK0_SI1727 -MPRK0_SI467 -MPRK0_SX107 -MPRK0_SX17 -MPRK0_SX197 -MPRK0_SX287 -MPRK0_SX377 -MPRT0_SI1210 -MPRT0_SI495 -MPRT0_SI580 -MPRT0_SX130 -MPRT0_SX220 -MPRT0_SX310 -MPRT0_SX40 -MPRT0_SX400 -MPSW0_SI1067 -MPSW0_SI1697 -MPSW0_SI2327 -MPSW0_SX167 -MPSW0_SX24 -MPSW0_SX257 -MPSW0_SX437 -MPSW0_SX77 -MRAB0_SI1224 -MRAB0_SI1854 -MRAB0_SI594 -MRAB0_SX144 -MRAB0_SX234 -MRAB0_SX324 -MRAB0_SX414 -MRAB0_SX54 -MRAB1_SI1478 -MRAB1_SI2108 -MRAB1_SI848 -MRAB1_SX128 -MRAB1_SX218 -MRAB1_SX308 -MRAB1_SX38 -MRAB1_SX398 -MRAI0_SI1954 -MRAI0_SI2052 -MRAI0_SI792 -MRAI0_SX162 -MRAI0_SX252 -MRAI0_SX342 -MRAI0_SX432 -MRAI0_SX72 -MRAM0_SI1275 -MRAM0_SI1905 -MRAM0_SI1951 -MRAM0_SX105 -MRAM0_SX15 -MRAM0_SX195 -MRAM0_SX285 -MRAM0_SX375 -MRAV0_SI1008 -MRAV0_SI1638 -MRAV0_SI2268 -MRAV0_SX108 -MRAV0_SX18 -MRAV0_SX198 -MRAV0_SX288 -MRAV0_SX378 -MRBC0_SI1665 -MRBC0_SI1859 -MRBC0_SI599 -MRBC0_SX149 -MRBC0_SX239 -MRBC0_SX329 -MRBC0_SX419 -MRBC0_SX59 -MRCG0_SI1428 -MRCG0_SI2058 -MRCG0_SI798 -MRCG0_SX168 -MRCG0_SX258 -MRCG0_SX348 -MRCG0_SX438 -MRCG0_SX78 -MRCW0_SI1371 -MRCW0_SI2001 -MRCW0_SI741 -MRCW0_SX111 -MRCW0_SX201 -MRCW0_SX21 -MRCW0_SX291 -MRCW0_SX381 -MRDD0_SI1050 -MRDD0_SI1680 -MRDD0_SI2310 -MRDD0_SX150 -MRDD0_SX240 -MRDD0_SX277 -MRDD0_SX330 -MRDD0_SX60 -MRDM0_SI1044 -MRDM0_SI1595 -MRDM0_SI965 -MRDM0_SX155 -MRDM0_SX245 -MRDM0_SX335 -MRDM0_SX425 -MRDM0_SX65 -MRDS0_SI1167 -MRDS0_SI1797 -MRDS0_SI537 -MRDS0_SX177 -MRDS0_SX267 -MRDS0_SX357 -MRDS0_SX447 -MRDS0_SX87 -MREE0_SI1104 -MREE0_SI1734 -MREE0_SI1959 -MREE0_SX114 -MREE0_SX204 -MREE0_SX24 -MREE0_SX294 -MREE0_SX384 -MREH1_SI1599 -MREH1_SI2229 -MREH1_SI969 -MREH1_SX159 -MREH1_SX249 -MREH1_SX339 -MREH1_SX429 -MREH1_SX69 -MREM0_SI1591 -MREM0_SI511 -MREM0_SI961 -MREM0_SX151 -MREM0_SX241 -MREM0_SX331 -MREM0_SX421 -MREM0_SX61 -MREW1_SI1500 -MREW1_SI2130 -MREW1_SI870 -MREW1_SX150 -MREW1_SX240 -MREW1_SX330 -MREW1_SX420 -MREW1_SX60 -MRFK0_SI1076 -MRFK0_SI1706 -MRFK0_SI2336 -MRFK0_SX176 -MRFK0_SX266 -MRFK0_SX356 -MRFK0_SX446 -MRFK0_SX86 -MRFL0_SI1156 -MRFL0_SI1786 -MRFL0_SI526 -MRFL0_SX166 -MRFL0_SX256 -MRFL0_SX346 -MRFL0_SX436 -MRFL0_SX76 -MRGM0_SI1162 -MRGM0_SI1792 -MRGM0_SI532 -MRGM0_SX172 -MRGM0_SX262 -MRGM0_SX416 -MRGM0_SX442 -MRGM0_SX82 -MRGS0_SI1356 -MRGS0_SI1986 -MRGS0_SI726 -MRGS0_SX186 -MRGS0_SX276 -MRGS0_SX366 -MRGS0_SX6 -MRGS0_SX96 -MRHL0_SI1515 -MRHL0_SI2145 -MRHL0_SI885 -MRHL0_SX165 -MRHL0_SX255 -MRHL0_SX345 -MRHL0_SX435 -MRHL0_SX75 -MRJB1_SI1020 -MRJB1_SI1413 -MRJB1_SI2021 -MRJB1_SX120 -MRJB1_SX210 -MRJB1_SX30 -MRJB1_SX300 -MRJB1_SX390 -MRJH0_SI1519 -MRJH0_SI889 -MRJH0_SI914 -MRJH0_SX169 -MRJH0_SX259 -MRJH0_SX307 -MRJH0_SX439 -MRJH0_SX79 
-MRJM0_SI1095 -MRJM0_SI1228 -MRJM0_SI1858 -MRJM0_SX148 -MRJM0_SX238 -MRJM0_SX328 -MRJM0_SX418 -MRJM0_SX58 -MRJM1_SI1298 -MRJM1_SI1928 -MRJM1_SI668 -MRJM1_SX128 -MRJM1_SX218 -MRJM1_SX308 -MRJM1_SX38 -MRJM1_SX398 -MRJT0_SI1498 -MRJT0_SI1805 -MRJT0_SI868 -MRJT0_SX148 -MRJT0_SX238 -MRJT0_SX328 -MRJT0_SX418 -MRJT0_SX58 -MRKM0_SI1267 -MRKM0_SI1391 -MRKM0_SI637 -MRKM0_SX187 -MRKM0_SX277 -MRKM0_SX367 -MRKM0_SX7 -MRKM0_SX97 -MRLD0_SI1594 -MRLD0_SI2224 -MRLD0_SI964 -MRLD0_SX154 -MRLD0_SX244 -MRLD0_SX334 -MRLD0_SX424 -MRLD0_SX64 -MRLJ0_SI1420 -MRLJ0_SI2050 -MRLJ0_SI790 -MRLJ0_SX160 -MRLJ0_SX250 -MRLJ0_SX340 -MRLJ0_SX430 -MRLJ0_SX70 -MRLJ1_SI1671 -MRLJ1_SI2301 -MRLJ1_SI2332 -MRLJ1_SX141 -MRLJ1_SX231 -MRLJ1_SX321 -MRLJ1_SX411 -MRLJ1_SX51 -MRLK0_SI1468 -MRLK0_SI2140 -MRLK0_SI843 -MRLK0_SX123 -MRLK0_SX213 -MRLK0_SX303 -MRLK0_SX33 -MRLK0_SX393 -MRLR0_SI1196 -MRLR0_SI1826 -MRLR0_SI566 -MRLR0_SX116 -MRLR0_SX206 -MRLR0_SX26 -MRLR0_SX296 -MRLR0_SX386 -MRMB0_SI1581 -MRMB0_SI2211 -MRMB0_SI951 -MRMB0_SX141 -MRMB0_SX231 -MRMB0_SX321 -MRMB0_SX411 -MRMB0_SX51 -MRMG0_SI1080 -MRMG0_SI1710 -MRMG0_SI2340 -MRMG0_SX180 -MRMG0_SX270 -MRMG0_SX360 -MRMG0_SX450 -MRMG0_SX90 -MRMH0_SI1021 -MRMH0_SI1349 -MRMH0_SI2281 -MRMH0_SX121 -MRMH0_SX211 -MRMH0_SX301 -MRMH0_SX31 -MRMH0_SX391 -MRML0_SI1421 -MRML0_SI2051 -MRML0_SI791 -MRML0_SX161 -MRML0_SX251 -MRML0_SX341 -MRML0_SX431 -MRML0_SX71 -MRMS0_SI1113 -MRMS0_SI2057 -MRMS0_SI2100 -MRMS0_SX120 -MRMS0_SX210 -MRMS0_SX30 -MRMS0_SX300 -MRMS0_SX390 -MRPC1_SI1482 -MRPC1_SI2026 -MRPC1_SI2112 -MRPC1_SX132 -MRPC1_SX222 -MRPC1_SX312 -MRPC1_SX402 -MRPC1_SX42 -MRRE0_SI1334 -MRRE0_SI704 -MRRE0_SI952 -MRRE0_SX164 -MRRE0_SX254 -MRRE0_SX344 -MRRE0_SX434 -MRRE0_SX74 -MRSO0_SI1206 -MRSO0_SI1659 -MRSO0_SI2289 -MRSO0_SX129 -MRSO0_SX219 -MRSO0_SX309 -MRSO0_SX39 -MRSO0_SX399 -MRSP0_SI1429 -MRSP0_SI2059 -MRSP0_SI799 -MRSP0_SX169 -MRSP0_SX196 -MRSP0_SX259 -MRSP0_SX439 -MRSP0_SX79 -MRTC0_SI1458 -MRTC0_SI2088 -MRTC0_SI828 -MRTC0_SX108 -MRTC0_SX18 -MRTC0_SX198 -MRTC0_SX288 -MRTC0_SX378 -MRTJ0_SI1551 -MRTJ0_SI2032 -MRTJ0_SI772 -MRTJ0_SX142 -MRTJ0_SX232 -MRTJ0_SX322 -MRTJ0_SX412 -MRTJ0_SX52 -MRVG0_SI1140 -MRVG0_SI1770 -MRVG0_SI510 -MRVG0_SX150 -MRVG0_SX240 -MRVG0_SX330 -MRVG0_SX420 -MRVG0_SX60 -MRWA0_SI1603 -MRWA0_SI2233 -MRWA0_SI973 -MRWA0_SX163 -MRWA0_SX253 -MRWA0_SX343 -MRWA0_SX433 -MRWA0_SX73 -MRWS0_SI1102 -MRWS0_SI1732 -MRWS0_SI472 -MRWS0_SX112 -MRWS0_SX202 -MRWS0_SX22 -MRWS0_SX292 -MRWS0_SX382 -MRXB0_SI1585 -MRXB0_SI2215 -MRXB0_SI955 -MRXB0_SX145 -MRXB0_SX235 -MRXB0_SX325 -MRXB0_SX415 -MRXB0_SX55 -MSAH1_SI1049 -MSAH1_SI1679 -MSAH1_SI2309 -MSAH1_SX149 -MSAH1_SX239 -MSAH1_SX329 -MSAH1_SX419 -MSAH1_SX59 -MSAS0_SI1376 -MSAS0_SI2006 -MSAS0_SI746 -MSAS0_SX116 -MSAS0_SX206 -MSAS0_SX26 -MSAS0_SX296 -MSAS0_SX386 -MSAT0_SI1526 -MSAT0_SI2156 -MSAT0_SI896 -MSAT0_SX176 -MSAT0_SX266 -MSAT0_SX356 -MSAT0_SX446 -MSAT0_SX86 -MSAT1_SI1073 -MSAT1_SI1703 -MSAT1_SI2333 -MSAT1_SX173 -MSAT1_SX263 -MSAT1_SX353 -MSAT1_SX443 -MSAT1_SX83 -MSDB0_SI1007 -MSDB0_SI1637 -MSDB0_SI2267 -MSDB0_SX107 -MSDB0_SX17 -MSDB0_SX197 -MSDB0_SX287 -MSDB0_SX377 -MSDH0_SI2113 -MSDH0_SI2240 -MSDH0_SI980 -MSDH0_SX170 -MSDH0_SX260 -MSDH0_SX350 -MSDH0_SX440 -MSDH0_SX80 -MSDS0_SI1077 -MSDS0_SI1707 -MSDS0_SI2337 -MSDS0_SX177 -MSDS0_SX267 -MSDS0_SX357 -MSDS0_SX447 -MSDS0_SX87 -MSEM1_SI1440 -MSEM1_SI2070 -MSEM1_SI810 -MSEM1_SX180 -MSEM1_SX270 -MSEM1_SX360 -MSEM1_SX450 -MSEM1_SX90 -MSES0_SI1589 -MSES0_SI2216 -MSES0_SI2219 -MSES0_SX149 -MSES0_SX239 -MSES0_SX329 -MSES0_SX419 -MSES0_SX59 -MSFH0_SI1216 -MSFH0_SI1738 -MSFH0_SI586 -MSFH0_SX136 -MSFH0_SX226 -MSFH0_SX316 
-MSFH0_SX406 -MSFH0_SX46 -MSFV0_SI1262 -MSFV0_SI1892 -MSFV0_SI632 -MSFV0_SX182 -MSFV0_SX272 -MSFV0_SX362 -MSFV0_SX452 -MSFV0_SX92 -MSJK0_SI1596 -MSJK0_SI2226 -MSJK0_SI966 -MSJK0_SX156 -MSJK0_SX246 -MSJK0_SX336 -MSJK0_SX426 -MSJK0_SX66 -MSMC0_SI1907 -MSMC0_SI509 -MSMC0_SI647 -MSMC0_SX107 -MSMC0_SX17 -MSMC0_SX197 -MSMC0_SX287 -MSMC0_SX377 -MSMR0_SI1150 -MSMR0_SI1405 -MSMR0_SI775 -MSMR0_SX145 -MSMR0_SX235 -MSMR0_SX325 -MSMR0_SX415 -MSMR0_SX55 -MSMS0_SI1433 -MSMS0_SI2063 -MSMS0_SI803 -MSMS0_SX173 -MSMS0_SX263 -MSMS0_SX353 -MSMS0_SX443 -MSMS0_SX83 -MSRG0_SI1221 -MSRG0_SI1851 -MSRG0_SI591 -MSRG0_SX141 -MSRG0_SX231 -MSRG0_SX321 -MSRG0_SX411 -MSRG0_SX51 -MSRR0_SI1131 -MSRR0_SI1761 -MSRR0_SI501 -MSRR0_SX141 -MSRR0_SX231 -MSRR0_SX30 -MSRR0_SX411 -MSRR0_SX51 -MSTF0_SI1396 -MSTF0_SI766 -MSTF0_SI852 -MSTF0_SX136 -MSTF0_SX226 -MSTF0_SX316 -MSTF0_SX406 -MSTF0_SX46 -MSVS0_SI1568 -MSVS0_SI2198 -MSVS0_SI938 -MSVS0_SX128 -MSVS0_SX218 -MSVS0_SX308 -MSVS0_SX38 -MSVS0_SX398 -MTAB0_SI1572 -MTAB0_SI2202 -MTAB0_SI942 -MTAB0_SX132 -MTAB0_SX222 -MTAB0_SX312 -MTAB0_SX402 -MTAB0_SX42 -MTAS0_SI1385 -MTAS0_SI2015 -MTAS0_SI755 -MTAS0_SX125 -MTAS0_SX215 -MTAS0_SX305 -MTAS0_SX35 -MTAS0_SX395 -MTAT0_SI1110 -MTAT0_SI1740 -MTAT0_SI811 -MTAT0_SX120 -MTAT0_SX210 -MTAT0_SX30 -MTAT0_SX300 -MTAT0_SX390 -MTAT1_SI1409 -MTAT1_SI1627 -MTAT1_SI779 -MTAT1_SX149 -MTAT1_SX239 -MTAT1_SX329 -MTAT1_SX419 -MTAT1_SX59 -MTBC0_SI1173 -MTBC0_SI1803 -MTBC0_SI543 -MTBC0_SX183 -MTBC0_SX273 -MTBC0_SX347 -MTBC0_SX363 -MTBC0_SX93 -MTCS0_SI1972 -MTCS0_SI2265 -MTCS0_SI712 -MTCS0_SX172 -MTCS0_SX262 -MTCS0_SX352 -MTCS0_SX442 -MTCS0_SX82 -MTDB0_SI1401 -MTDB0_SI2031 -MTDB0_SI771 -MTDB0_SX141 -MTDB0_SX231 -MTDB0_SX321 -MTDB0_SX411 -MTDB0_SX51 -MTDP0_SI1274 -MTDP0_SI1521 -MTDP0_SI2151 -MTDP0_SX171 -MTDP0_SX261 -MTDP0_SX351 -MTDP0_SX441 -MTDP0_SX81 -MTER0_SI1157 -MTER0_SI1787 -MTER0_SI527 -MTER0_SX167 -MTER0_SX17 -MTER0_SX257 -MTER0_SX437 -MTER0_SX77 -MTJG0_SI1520 -MTJG0_SI2157 -MTJG0_SI890 -MTJG0_SX170 -MTJG0_SX260 -MTJG0_SX350 -MTJG0_SX440 -MTJG0_SX80 -MTJM0_SI1226 -MTJM0_SI1856 -MTJM0_SI655 -MTJM0_SX146 -MTJM0_SX236 -MTJM0_SX326 -MTJM0_SX416 -MTJM0_SX56 -MTJS0_SI1192 -MTJS0_SI1822 -MTJS0_SI562 -MTJS0_SX112 -MTJS0_SX202 -MTJS0_SX22 -MTJS0_SX292 -MTJS0_SX382 -MTJU0_SI2020 -MTJU0_SI2269 -MTJU0_SI760 -MTJU0_SX130 -MTJU0_SX220 -MTJU0_SX310 -MTJU0_SX40 -MTJU0_SX400 -MTKD0_SI1187 -MTKD0_SI1817 -MTKD0_SI630 -MTKD0_SX107 -MTKD0_SX17 -MTKD0_SX197 -MTKD0_SX287 -MTKD0_SX377 -MTKP0_SI1023 -MTKP0_SI2283 -MTKP0_SI454 -MTKP0_SX123 -MTKP0_SX213 -MTKP0_SX303 -MTKP0_SX33 -MTKP0_SX393 -MTLB0_SI1134 -MTLB0_SI1764 -MTLB0_SI504 -MTLB0_SX144 -MTLB0_SX234 -MTLB0_SX324 -MTLB0_SX414 -MTLB0_SX54 -MTLC0_SI1313 -MTLC0_SI1477 -MTLC0_SI847 -MTLC0_SX127 -MTLC0_SX217 -MTLC0_SX307 -MTLC0_SX37 -MTLC0_SX397 -MTML0_SI1065 -MTML0_SI1695 -MTML0_SI2325 -MTML0_SX165 -MTML0_SX255 -MTML0_SX345 -MTML0_SX435 -MTML0_SX75 -MTMN0_SI1064 -MTMN0_SI2324 -MTMN0_SI582 -MTMN0_SX164 -MTMN0_SX254 -MTMN0_SX344 -MTMN0_SX434 -MTMN0_SX74 -MTMT0_SI1118 -MTMT0_SI1748 -MTMT0_SI488 -MTMT0_SX128 -MTMT0_SX218 -MTMT0_SX308 -MTMT0_SX38 -MTMT0_SX398 -MTPF0_SI1235 -MTPF0_SI1865 -MTPF0_SI605 -MTPF0_SX155 -MTPF0_SX245 -MTPF0_SX335 -MTPF0_SX425 -MTPF0_SX65 -MTPG0_SI1383 -MTPG0_SI2013 -MTPG0_SI753 -MTPG0_SX123 -MTPG0_SX213 -MTPG0_SX303 -MTPG0_SX33 -MTPG0_SX393 -MTPP0_SI1508 -MTPP0_SI2138 -MTPP0_SI878 -MTPP0_SX158 -MTPP0_SX248 -MTPP0_SX338 -MTPP0_SX428 -MTPP0_SX68 -MTPR0_SI1600 -MTPR0_SI2230 -MTPR0_SI506 -MTPR0_SX160 -MTPR0_SX250 -MTPR0_SX340 -MTPR0_SX430 -MTPR0_SX70 -MTQC0_SI1441 -MTQC0_SI2071 -MTQC0_SI480 -MTQC0_SX181 -MTQC0_SX271 
-MTQC0_SX361 -MTQC0_SX451 -MTQC0_SX91 -MTRC0_SI1623 -MTRC0_SI589 -MTRC0_SI993 -MTRC0_SX170 -MTRC0_SX183 -MTRC0_SX273 -MTRC0_SX363 -MTRC0_SX93 -MTRR0_SI1548 -MTRR0_SI2178 -MTRR0_SI918 -MTRR0_SX108 -MTRR0_SX18 -MTRR0_SX198 -MTRR0_SX288 -MTRR0_SX378 -MTRT0_SI1227 -MTRT0_SI1857 -MTRT0_SI597 -MTRT0_SX147 -MTRT0_SX237 -MTRT0_SX254 -MTRT0_SX417 -MTRT0_SX57 -MTWH1_SI1512 -MTWH1_SI2142 -MTWH1_SI882 -MTWH1_SX162 -MTWH1_SX252 -MTWH1_SX342 -MTWH1_SX432 -MTWH1_SX72 -MTXS0_SI1060 -MTXS0_SI1690 -MTXS0_SI2320 -MTXS0_SX160 -MTXS0_SX250 -MTXS0_SX340 -MTXS0_SX430 -MTXS0_SX70 -MVJH0_SI1556 -MVJH0_SI2186 -MVJH0_SI926 -MVJH0_SX116 -MVJH0_SX206 -MVJH0_SX26 -MVJH0_SX296 -MVJH0_SX386 -MVLO0_SI1147 -MVLO0_SI1777 -MVLO0_SI517 -MVLO0_SX157 -MVLO0_SX247 -MVLO0_SX337 -MVLO0_SX427 -MVLO0_SX67 -MVRW0_SI1485 -MVRW0_SI2115 -MVRW0_SI855 -MVRW0_SX135 -MVRW0_SX225 -MVRW0_SX315 -MVRW0_SX405 -MVRW0_SX45 -MWAC0_SI1601 -MWAC0_SI2231 -MWAC0_SI971 -MWAC0_SX161 -MWAC0_SX251 -MWAC0_SX341 -MWAC0_SX431 -MWAC0_SX71 -MWAD0_SI1062 -MWAD0_SI1749 -MWAD0_SI2322 -MWAD0_SX162 -MWAD0_SX252 -MWAD0_SX342 -MWAD0_SX432 -MWAD0_SX72 -MWAR0_SI1045 -MWAR0_SI1675 -MWAR0_SI2305 -MWAR0_SX145 -MWAR0_SX235 -MWAR0_SX325 -MWAR0_SX415 -MWAR0_SX55 -MWCH0_SI1622 -MWCH0_SI1895 -MWCH0_SI2252 -MWCH0_SX182 -MWCH0_SX272 -MWCH0_SX362 -MWCH0_SX452 -MWCH0_SX92 -MWDK0_SI1436 -MWDK0_SI2017 -MWDK0_SI806 -MWDK0_SX176 -MWDK0_SX266 -MWDK0_SX356 -MWDK0_SX446 -MWDK0_SX86 -MWEM0_SI1320 -MWEM0_SI1393 -MWEM0_SI1950 -MWEM0_SX150 -MWEM0_SX240 -MWEM0_SX330 -MWEM0_SX420 -MWEM0_SX60 -MWGR0_SI1606 -MWGR0_SI2236 -MWGR0_SI976 -MWGR0_SX166 -MWGR0_SX256 -MWGR0_SX346 -MWGR0_SX436 -MWGR0_SX76 -MWRE0_SI1057 -MWRE0_SI1687 -MWRE0_SI2317 -MWRE0_SX157 -MWRE0_SX247 -MWRE0_SX337 -MWRE0_SX427 -MWRE0_SX67 -MWRP0_SI1443 -MWRP0_SI1525 -MWRP0_SI2073 -MWRP0_SX183 -MWRP0_SX273 -MWRP0_SX3 -MWRP0_SX363 -MWRP0_SX93 -MWSB0_SI1626 -MWSB0_SI2256 -MWSB0_SI996 -MWSB0_SX186 -MWSB0_SX276 -MWSB0_SX366 -MWSB0_SX6 -MWSB0_SX96 -MWSH0_SI1426 -MWSH0_SI2266 -MWSH0_SI796 -MWSH0_SX166 -MWSH0_SX256 -MWSH0_SX346 -MWSH0_SX436 -MWSH0_SX76 -MZMB0_SI1166 -MZMB0_SI1796 -MZMB0_SI536 -MZMB0_SX176 -MZMB0_SX266 -MZMB0_SX356 -MZMB0_SX446 -MZMB0_SX86
diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_matched/train_text.uid b/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_matched/train_text.uid
deleted file mode 100644
index c39fd0b91..000000000
--- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_matched/train_text.uid
+++ /dev/null
@@ -1,3696 +0,0 @@
-FAEM0_SI1392 -FAEM0_SI2022 -FAEM0_SI762 -FAEM0_SX132 -FAEM0_SX222 -FAEM0_SX312 -FAEM0_SX402 -FAEM0_SX42 -FAJW0_SI1263 -FAJW0_SI1893 -FAJW0_SI633 -FAJW0_SX183 -FAJW0_SX273 -FAJW0_SX3 -FAJW0_SX363 -FAJW0_SX93 -FALK0_SI1086 -FALK0_SI456 -FALK0_SI658 -FALK0_SX186 -FALK0_SX276 -FALK0_SX366 -FALK0_SX6 -FALK0_SX96 -FALR0_SI1325 -FALR0_SI1955 -FALR0_SI695 -FALR0_SX155 -FALR0_SX245 -FALR0_SX335 -FALR0_SX425 -FALR0_SX65 -FAPB0_SI1063 -FAPB0_SI1693 -FAPB0_SI2323 -FAPB0_SX163 -FAPB0_SX253 -FAPB0_SX343 -FAPB0_SX433 -FAPB0_SX73 -FBAS0_SI1387 -FBAS0_SI1472 -FBAS0_SI2066 -FBAS0_SX127 -FBAS0_SX217 -FBAS0_SX307 -FBAS0_SX37 -FBAS0_SX397 -FBCG1_SI1612 -FBCG1_SI2242 -FBCG1_SI982 -FBCG1_SX172 -FBCG1_SX262 -FBCG1_SX352 -FBCG1_SX442 -FBCG1_SX82 -FBCH0_SI1586 -FBCH0_SI956 -FBCH0_SI959 -FBCH0_SX146 -FBCH0_SX236 -FBCH0_SX326 -FBCH0_SX416 -FBCH0_SX56 -FBJL0_SI1552 -FBJL0_SI2182 -FBJL0_SI922 -FBJL0_SX112 -FBJL0_SX202 -FBJL0_SX22 -FBJL0_SX292 -FBJL0_SX382 -FBLV0_SI1058 -FBLV0_SI1688 -FBLV0_SI2318 -FBLV0_SX158 -FBLV0_SX248 -FBLV0_SX338 -FBLV0_SX428 -FBLV0_SX68
-FBMH0_SI1136 -FBMH0_SI1766 -FBMH0_SI970 -FBMH0_SX146 -FBMH0_SX236 -FBMH0_SX326 -FBMH0_SX416 -FBMH0_SX56 -FBMJ0_SI1776 -FBMJ0_SI516 -FBMJ0_SI815 -FBMJ0_SX156 -FBMJ0_SX246 -FBMJ0_SX336 -FBMJ0_SX426 -FBMJ0_SX66 -FCAG0_SI1503 -FCAG0_SI1641 -FCAG0_SI2133 -FCAG0_SX153 -FCAG0_SX243 -FCAG0_SX333 -FCAG0_SX423 -FCAG0_SX63 -FCAJ0_SI1479 -FCAJ0_SI1804 -FCAJ0_SI849 -FCAJ0_SX129 -FCAJ0_SX219 -FCAJ0_SX309 -FCAJ0_SX39 -FCAJ0_SX399 -FCDR1_SI1186 -FCDR1_SI1816 -FCDR1_SI556 -FCDR1_SX106 -FCDR1_SX16 -FCDR1_SX196 -FCDR1_SX286 -FCDR1_SX376 -FCEG0_SI1248 -FCEG0_SI1878 -FCEG0_SI618 -FCEG0_SX168 -FCEG0_SX258 -FCEG0_SX348 -FCEG0_SX438 -FCEG0_SX78 -FCJF0_SI1027 -FCJF0_SI1657 -FCJF0_SI648 -FCJF0_SX127 -FCJF0_SX217 -FCJF0_SX307 -FCJF0_SX37 -FCJF0_SX397 -FCJS0_SI1607 -FCJS0_SI2237 -FCJS0_SI977 -FCJS0_SX167 -FCJS0_SX257 -FCJS0_SX347 -FCJS0_SX437 -FCJS0_SX77 -FCKE0_SI1111 -FCKE0_SI1741 -FCKE0_SI481 -FCKE0_SX121 -FCKE0_SX211 -FCKE0_SX301 -FCKE0_SX31 -FCKE0_SX391 -FCLT0_SI1438 -FCLT0_SI2068 -FCLT0_SI808 -FCLT0_SX178 -FCLT0_SX268 -FCLT0_SX358 -FCLT0_SX448 -FCLT0_SX88 -FCMG0_SI1142 -FCMG0_SI1242 -FCMG0_SI1872 -FCMG0_SX162 -FCMG0_SX252 -FCMG0_SX342 -FCMG0_SX432 -FCMG0_SX72 -FCMM0_SI1083 -FCMM0_SI1957 -FCMM0_SI453 -FCMM0_SX183 -FCMM0_SX273 -FCMM0_SX363 -FCMM0_SX420 -FCMM0_SX93 -FCRZ0_SI1913 -FCRZ0_SI2053 -FCRZ0_SI793 -FCRZ0_SX163 -FCRZ0_SX253 -FCRZ0_SX343 -FCRZ0_SX433 -FCRZ0_SX73 -FCYL0_SI1297 -FCYL0_SI1927 -FCYL0_SI667 -FCYL0_SX127 -FCYL0_SX217 -FCYL0_SX349 -FCYL0_SX37 -FCYL0_SX397 -FDAS1_SI1461 -FDAS1_SI2091 -FDAS1_SI831 -FDAS1_SX111 -FDAS1_SX201 -FDAS1_SX21 -FDAS1_SX291 -FDAS1_SX381 -FDAW0_SI1271 -FDAW0_SI1406 -FDAW0_SI2036 -FDAW0_SX146 -FDAW0_SX236 -FDAW0_SX326 -FDAW0_SX416 -FDAW0_SX56 -FDFB0_SI1318 -FDFB0_SI1948 -FDFB0_SI2010 -FDFB0_SX148 -FDFB0_SX238 -FDFB0_SX328 -FDFB0_SX418 -FDFB0_SX58 -FDJH0_SI1565 -FDJH0_SI2195 -FDJH0_SI935 -FDJH0_SX125 -FDJH0_SX215 -FDJH0_SX305 -FDJH0_SX35 -FDJH0_SX395 -FDKN0_SI1081 -FDKN0_SI1202 -FDKN0_SI1711 -FDKN0_SX181 -FDKN0_SX271 -FDKN0_SX361 -FDKN0_SX451 -FDKN0_SX91 -FDML0_SI1149 -FDML0_SI1779 -FDML0_SI2075 -FDML0_SX159 -FDML0_SX249 -FDML0_SX339 -FDML0_SX429 -FDML0_SX69 -FDMY0_SI1197 -FDMY0_SI567 -FDMY0_SI714 -FDMY0_SX117 -FDMY0_SX207 -FDMY0_SX27 -FDMY0_SX297 -FDMY0_SX387 -FDNC0_SI1278 -FDNC0_SI1908 -FDNC0_SI2287 -FDNC0_SX108 -FDNC0_SX18 -FDNC0_SX198 -FDNC0_SX288 -FDNC0_SX378 -FDTD0_SI1561 -FDTD0_SI2191 -FDTD0_SI931 -FDTD0_SX121 -FDTD0_SX211 -FDTD0_SX301 -FDTD0_SX321 -FDTD0_SX391 -FDXW0_SI1511 -FDXW0_SI2141 -FDXW0_SI881 -FDXW0_SX161 -FDXW0_SX251 -FDXW0_SX341 -FDXW0_SX431 -FDXW0_SX71 -FEAC0_SI1245 -FEAC0_SI1875 -FEAC0_SI615 -FEAC0_SX165 -FEAC0_SX255 -FEAC0_SX345 -FEAC0_SX435 -FEAC0_SX75 -FEAR0_SI1252 -FEAR0_SI1882 -FEAR0_SI622 -FEAR0_SX172 -FEAR0_SX262 -FEAR0_SX352 -FEAR0_SX442 -FEAR0_SX82 -FECD0_SI1418 -FECD0_SI2048 -FECD0_SI788 -FECD0_SX158 -FECD0_SX248 -FECD0_SX338 -FECD0_SX428 -FECD0_SX68 -FEEH0_SI1112 -FEEH0_SI1742 -FEEH0_SI471 -FEEH0_SX122 -FEEH0_SX212 -FEEH0_SX302 -FEEH0_SX32 -FEEH0_SX392 -FEME0_SI1505 -FEME0_SI2135 -FEME0_SI875 -FEME0_SX155 -FEME0_SX245 -FEME0_SX335 -FEME0_SX425 -FEME0_SX65 -FETB0_SI1148 -FETB0_SI1778 -FETB0_SI518 -FETB0_SX158 -FETB0_SX248 -FETB0_SX338 -FETB0_SX428 -FETB0_SX68 -FEXM0_SI1101 -FEXM0_SI1731 -FEXM0_SI482 -FEXM0_SX111 -FEXM0_SX201 -FEXM0_SX291 -FEXM0_SX366 -FEXM0_SX381 -FGCS0_SI1486 -FGCS0_SI2116 -FGCS0_SI856 -FGCS0_SX136 -FGCS0_SX226 -FGCS0_SX316 -FGCS0_SX406 -FGCS0_SX46 -FGDP0_SI1618 -FGDP0_SI2248 -FGDP0_SI988 -FGDP0_SX178 -FGDP0_SX268 -FGDP0_SX358 -FGDP0_SX448 -FGDP0_SX88 -FGMB0_SI1145 -FGMB0_SI1775 -FGMB0_SI515 -FGMB0_SX155 -FGMB0_SX245 -FGMB0_SX335 
-FGMB0_SX425 -FGMB0_SX65 -FGRW0_SI1152 -FGRW0_SI1782 -FGRW0_SI1990 -FGRW0_SX162 -FGRW0_SX252 -FGRW0_SX342 -FGRW0_SX432 -FGRW0_SX72 -FHLM0_SI1560 -FHLM0_SI2190 -FHLM0_SI930 -FHLM0_SX120 -FHLM0_SX210 -FHLM0_SX300 -FHLM0_SX349 -FHLM0_SX390 -FHXS0_SI1075 -FHXS0_SI2302 -FHXS0_SI2335 -FHXS0_SX175 -FHXS0_SX265 -FHXS0_SX355 -FHXS0_SX445 -FHXS0_SX85 -FJDM2_SI1582 -FJDM2_SI1964 -FJDM2_SI2212 -FJDM2_SX142 -FJDM2_SX232 -FJDM2_SX322 -FJDM2_SX412 -FJDM2_SX52 -FJEN0_SI1047 -FJEN0_SI1677 -FJEN0_SI2307 -FJEN0_SX147 -FJEN0_SX237 -FJEN0_SX327 -FJEN0_SX417 -FJEN0_SX57 -FJHK0_SI1022 -FJHK0_SI1652 -FJHK0_SI2282 -FJHK0_SX122 -FJHK0_SX212 -FJHK0_SX302 -FJHK0_SX32 -FJHK0_SX392 -FJKL0_SI1562 -FJKL0_SI2192 -FJKL0_SI932 -FJKL0_SX122 -FJKL0_SX212 -FJKL0_SX302 -FJKL0_SX32 -FJKL0_SX392 -FJLG0_SI1506 -FJLG0_SI1889 -FJLG0_SI2306 -FJLG0_SX179 -FJLG0_SX269 -FJLG0_SX359 -FJLG0_SX449 -FJLG0_SX89 -FJLR0_SI1231 -FJLR0_SI1861 -FJLR0_SI601 -FJLR0_SX151 -FJLR0_SX241 -FJLR0_SX331 -FJLR0_SX421 -FJLR0_SX61 -FJRB0_SI1302 -FJRB0_SI1932 -FJRB0_SI672 -FJRB0_SX132 -FJRB0_SX222 -FJRB0_SX312 -FJRB0_SX402 -FJRB0_SX42 -FJRP1_SI1432 -FJRP1_SI2062 -FJRP1_SI802 -FJRP1_SX172 -FJRP1_SX262 -FJRP1_SX352 -FJRP1_SX442 -FJRP1_SX82 -FJSK0_SI1052 -FJSK0_SI1682 -FJSK0_SI2312 -FJSK0_SX152 -FJSK0_SX242 -FJSK0_SX332 -FJSK0_SX422 -FJSK0_SX62 -FJSP0_SI1434 -FJSP0_SI1763 -FJSP0_SI804 -FJSP0_SX174 -FJSP0_SX264 -FJSP0_SX354 -FJSP0_SX444 -FJSP0_SX84 -FJWB1_SI2055 -FJWB1_SI748 -FJWB1_SI795 -FJWB1_SX165 -FJWB1_SX255 -FJWB1_SX345 -FJWB1_SX435 -FJWB1_SX75 -FJXM0_SI1211 -FJXM0_SI1971 -FJXM0_SI581 -FJXM0_SX131 -FJXM0_SX221 -FJXM0_SX311 -FJXM0_SX401 -FJXM0_SX41 -FJXP0_SI1122 -FJXP0_SI1752 -FJXP0_SI492 -FJXP0_SX132 -FJXP0_SX222 -FJXP0_SX312 -FJXP0_SX402 -FJXP0_SX42 -FKAA0_SI1208 -FKAA0_SI1838 -FKAA0_SI578 -FKAA0_SX128 -FKAA0_SX218 -FKAA0_SX308 -FKAA0_SX38 -FKAA0_SX398 -FKDE0_SI1141 -FKDE0_SI1771 -FKDE0_SI2221 -FKDE0_SX151 -FKDE0_SX241 -FKDE0_SX331 -FKDE0_SX421 -FKDE0_SX61 -FKDW0_SI1207 -FKDW0_SI1891 -FKDW0_SI577 -FKDW0_SX127 -FKDW0_SX217 -FKDW0_SX307 -FKDW0_SX37 -FKDW0_SX397 -FKFB0_SI1608 -FKFB0_SI2238 -FKFB0_SI978 -FKFB0_SX168 -FKFB0_SX258 -FKFB0_SX348 -FKFB0_SX438 -FKFB0_SX78 -FKKH0_SI1290 -FKKH0_SI1920 -FKKH0_SI660 -FKKH0_SX120 -FKKH0_SX210 -FKKH0_SX30 -FKKH0_SX300 -FKKH0_SX390 -FKLC0_SI1615 -FKLC0_SI2245 -FKLC0_SI985 -FKLC0_SX175 -FKLC0_SX265 -FKLC0_SX355 -FKLC0_SX445 -FKLC0_SX85 -FKLC1_SI1048 -FKLC1_SI1678 -FKLC1_SI2308 -FKLC1_SX148 -FKLC1_SX238 -FKLC1_SX328 -FKLC1_SX418 -FKLC1_SX58 -FKLH0_SI1257 -FKLH0_SI1887 -FKLH0_SI627 -FKLH0_SX177 -FKLH0_SX267 -FKLH0_SX357 -FKLH0_SX447 -FKLH0_SX87 -FKSR0_SI1117 -FKSR0_SI1747 -FKSR0_SI487 -FKSR0_SX161 -FKSR0_SX217 -FKSR0_SX366 -FKSR0_SX37 -FKSR0_SX397 -FLAC0_SI1339 -FLAC0_SI2161 -FLAC0_SI901 -FLAC0_SX181 -FLAC0_SX271 -FLAC0_SX361 -FLAC0_SX451 -FLAC0_SX91 -FLAG0_SI1464 -FLAG0_SI2094 -FLAG0_SI834 -FLAG0_SX114 -FLAG0_SX204 -FLAG0_SX24 -FLAG0_SX294 -FLAG0_SX384 -FLEH0_SI1051 -FLEH0_SI1681 -FLEH0_SI2311 -FLEH0_SX151 -FLEH0_SX241 -FLEH0_SX331 -FLEH0_SX421 -FLEH0_SX61 -FLET0_SI1137 -FLET0_SI1767 -FLET0_SI507 -FLET0_SX147 -FLET0_SX237 -FLET0_SX277 -FLET0_SX417 -FLET0_SX57 -FLHD0_SI1344 -FLHD0_SI1827 -FLHD0_SI1974 -FLHD0_SX174 -FLHD0_SX264 -FLHD0_SX354 -FLHD0_SX444 -FLHD0_SX84 -FLJA0_SI1078 -FLJA0_SI1708 -FLJA0_SI2338 -FLJA0_SX178 -FLJA0_SX268 -FLJA0_SX358 -FLJA0_SX448 -FLJA0_SX88 -FLJD0_SI1516 -FLJD0_SI2146 -FLJD0_SI886 -FLJD0_SX166 -FLJD0_SX256 -FLJD0_SX346 -FLJD0_SX436 -FLJD0_SX76 -FLJG0_SI1611 -FLJG0_SI2241 -FLJG0_SI981 -FLJG0_SX171 -FLJG0_SX261 -FLJG0_SX351 -FLJG0_SX441 -FLJG0_SX81 -FLKM0_SI1880 -FLKM0_SI620 -FLKM0_SI686 -FLKM0_SX116 
-FLKM0_SX260 -FLKM0_SX350 -FLKM0_SX440 -FLKM0_SX80 -FLMA0_SI1243 -FLMA0_SI1873 -FLMA0_SI613 -FLMA0_SX163 -FLMA0_SX253 -FLMA0_SX343 -FLMA0_SX433 -FLMA0_SX73 -FLMC0_SI1372 -FLMC0_SI2002 -FLMC0_SI742 -FLMC0_SX112 -FLMC0_SX22 -FLMC0_SX292 -FLMC0_SX336 -FLMC0_SX382 -FLMK0_SI1035 -FLMK0_SI1229 -FLMK0_SI2295 -FLMK0_SX135 -FLMK0_SX225 -FLMK0_SX315 -FLMK0_SX405 -FLMK0_SX45 -FLOD0_SI1287 -FLOD0_SI1917 -FLOD0_SI657 -FLOD0_SX117 -FLOD0_SX171 -FLOD0_SX207 -FLOD0_SX297 -FLOD0_SX387 -FLTM0_SI1070 -FLTM0_SI1700 -FLTM0_SI2330 -FLTM0_SX170 -FLTM0_SX260 -FLTM0_SX350 -FLTM0_SX440 -FLTM0_SX80 -FMAH1_SI1509 -FMAH1_SI2139 -FMAH1_SI879 -FMAH1_SX159 -FMAH1_SX249 -FMAH1_SX339 -FMAH1_SX429 -FMAH1_SX69 -FMBG0_SI1160 -FMBG0_SI1790 -FMBG0_SI2264 -FMBG0_SX260 -FMBG0_SX3 -FMBG0_SX350 -FMBG0_SX440 -FMBG0_SX80 -FMEM0_SI1377 -FMEM0_SI2007 -FMEM0_SI747 -FMEM0_SX117 -FMEM0_SX207 -FMEM0_SX297 -FMEM0_SX333 -FMEM0_SX387 -FMJB0_SI1177 -FMJB0_SI1807 -FMJB0_SI547 -FMJB0_SX187 -FMJB0_SX277 -FMJB0_SX367 -FMJB0_SX7 -FMJB0_SX97 -FMJF0_SI1254 -FMJF0_SI1884 -FMJF0_SI624 -FMJF0_SX174 -FMJF0_SX264 -FMJF0_SX354 -FMJF0_SX444 -FMJF0_SX84 -FMJU0_SI1389 -FMJU0_SI2019 -FMJU0_SI759 -FMJU0_SX129 -FMJU0_SX219 -FMJU0_SX309 -FMJU0_SX39 -FMJU0_SX399 -FMKC0_SI1041 -FMKC0_SI1072 -FMKC0_SI1702 -FMKC0_SX172 -FMKC0_SX262 -FMKC0_SX352 -FMKC0_SX442 -FMKC0_SX82 -FMKF0_SI1018 -FMKF0_SI1536 -FMKF0_SI906 -FMKF0_SX186 -FMKF0_SX276 -FMKF0_SX366 -FMKF0_SX6 -FMKF0_SX96 -FMMH0_SI1537 -FMMH0_SI2167 -FMMH0_SI907 -FMMH0_SX187 -FMMH0_SX367 -FMMH0_SX420 -FMMH0_SX7 -FMMH0_SX97 -FMPG0_SI1602 -FMPG0_SI2232 -FMPG0_SI972 -FMPG0_SX162 -FMPG0_SX252 -FMPG0_SX342 -FMPG0_SX432 -FMPG0_SX72 -FNKL0_SI1522 -FNKL0_SI2152 -FNKL0_SI892 -FNKL0_SX172 -FNKL0_SX196 -FNKL0_SX262 -FNKL0_SX442 -FNKL0_SX82 -FNTB0_SI1203 -FNTB0_SI573 -FNTB0_SI679 -FNTB0_SX123 -FNTB0_SX213 -FNTB0_SX303 -FNTB0_SX33 -FNTB0_SX393 -FPAB1_SI1471 -FPAB1_SI2101 -FPAB1_SI841 -FPAB1_SX121 -FPAB1_SX211 -FPAB1_SX301 -FPAB1_SX31 -FPAB1_SX391 -FPAC0_SI1921 -FPAC0_SI2011 -FPAC0_SI661 -FPAC0_SX121 -FPAC0_SX211 -FPAC0_SX301 -FPAC0_SX31 -FPAC0_SX391 -FPAD0_SI1346 -FPAD0_SI1976 -FPAD0_SI716 -FPAD0_SX176 -FPAD0_SX266 -FPAD0_SX356 -FPAD0_SX446 -FPAD0_SX86 -FPAF0_SI1054 -FPAF0_SI1684 -FPAF0_SI2314 -FPAF0_SX154 -FPAF0_SX244 -FPAF0_SX334 -FPAF0_SX424 -FPAF0_SX64 -FPAZ0_SI1593 -FPAZ0_SI2223 -FPAZ0_SI963 -FPAZ0_SX153 -FPAZ0_SX243 -FPAZ0_SX27 -FPAZ0_SX423 -FPAZ0_SX63 -FPJF0_SI1046 -FPJF0_SI1259 -FPJF0_SI1676 -FPJF0_SX146 -FPJF0_SX236 -FPJF0_SX326 -FPJF0_SX352 -FPJF0_SX56 -FPLS0_SI1590 -FPLS0_SI2220 -FPLS0_SI960 -FPLS0_SX150 -FPLS0_SX240 -FPLS0_SX3 -FPLS0_SX330 -FPLS0_SX60 -FPMY0_SI1153 -FPMY0_SI1783 -FPMY0_SI523 -FPMY0_SX163 -FPMY0_SX196 -FPMY0_SX253 -FPMY0_SX343 -FPMY0_SX73 -FREH0_SI1315 -FREH0_SI1945 -FREH0_SI685 -FREH0_SX145 -FREH0_SX235 -FREH0_SX325 -FREH0_SX415 -FREH0_SX55 -FRJB0_SI1427 -FRJB0_SI1470 -FRJB0_SI1794 -FRJB0_SX167 -FRJB0_SX257 -FRJB0_SX347 -FRJB0_SX437 -FRJB0_SX77 -FRLL0_SI1514 -FRLL0_SI805 -FRLL0_SI884 -FRLL0_SX164 -FRLL0_SX254 -FRLL0_SX344 -FRLL0_SX434 -FRLL0_SX74 -FSAG0_SI1323 -FSAG0_SI1953 -FSAG0_SI693 -FSAG0_SX153 -FSAG0_SX243 -FSAG0_SX333 -FSAG0_SX423 -FSAG0_SX63 -FSAH0_SI1244 -FSAH0_SI1874 -FSAH0_SI614 -FSAH0_SX164 -FSAH0_SX327 -FSAH0_SX344 -FSAH0_SX434 -FSAH0_SX74 -FSAK0_SI1300 -FSAK0_SI1930 -FSAK0_SI670 -FSAK0_SX130 -FSAK0_SX220 -FSAK0_SX310 -FSAK0_SX40 -FSAK0_SX400 -FSBK0_SI1069 -FSBK0_SI1699 -FSBK0_SI2329 -FSBK0_SX169 -FSBK0_SX259 -FSBK0_SX349 -FSBK0_SX439 -FSBK0_SX79 -FSCN0_SI1886 -FSCN0_SI626 -FSCN0_SI705 -FSCN0_SX176 -FSCN0_SX266 -FSCN0_SX356 -FSCN0_SX446 -FSCN0_SX86 -FSDC0_SI1312 -FSDC0_SI1942 -FSDC0_SI2234 
-FSDC0_SX142 -FSDC0_SX232 -FSDC0_SX322 -FSDC0_SX412 -FSDC0_SX52 -FSDJ0_SI1115 -FSDJ0_SI1745 -FSDJ0_SI485 -FSDJ0_SX125 -FSDJ0_SX215 -FSDJ0_SX305 -FSDJ0_SX35 -FSDJ0_SX395 -FSGF0_SI1557 -FSGF0_SI2187 -FSGF0_SI927 -FSGF0_SX117 -FSGF0_SX207 -FSGF0_SX27 -FSGF0_SX297 -FSGF0_SX387 -FSJG0_SI1570 -FSJG0_SI2200 -FSJG0_SI940 -FSJG0_SX130 -FSJG0_SX220 -FSJG0_SX310 -FSJG0_SX40 -FSJG0_SX400 -FSJK1_SI1025 -FSJK1_SI2285 -FSJK1_SI696 -FSJK1_SX125 -FSJK1_SX215 -FSJK1_SX305 -FSJK1_SX35 -FSJK1_SX395 -FSJS0_SI1171 -FSJS0_SI1801 -FSJS0_SI541 -FSJS0_SX181 -FSJS0_SX271 -FSJS0_SX361 -FSJS0_SX451 -FSJS0_SX91 -FSJW0_SI1333 -FSJW0_SI1963 -FSJW0_SI703 -FSJW0_SX163 -FSJW0_SX253 -FSJW0_SX343 -FSJW0_SX433 -FSJW0_SX73 -FSKC0_SI1416 -FSKC0_SI2046 -FSKC0_SI786 -FSKC0_SX156 -FSKC0_SX246 -FSKC0_SX336 -FSKC0_SX426 -FSKC0_SX66 -FSKL0_SI1529 -FSKL0_SI2159 -FSKL0_SI899 -FSKL0_SX179 -FSKL0_SX269 -FSKL0_SX359 -FSKL0_SX449 -FSKL0_SX89 -FSKP0_SI1098 -FSKP0_SI1728 -FSKP0_SI468 -FSKP0_SX108 -FSKP0_SX18 -FSKP0_SX198 -FSKP0_SX288 -FSKP0_SX378 -FSLS0_SI1056 -FSLS0_SI1686 -FSLS0_SI2316 -FSLS0_SX156 -FSLS0_SX202 -FSLS0_SX246 -FSLS0_SX426 -FSLS0_SX66 -FSMA0_SI1621 -FSMA0_SI2251 -FSMA0_SI991 -FSMA0_SX181 -FSMA0_SX271 -FSMA0_SX361 -FSMA0_SX451 -FSMA0_SX91 -FSMM0_SI1314 -FSMM0_SI1944 -FSMM0_SI684 -FSMM0_SX144 -FSMM0_SX234 -FSMM0_SX324 -FSMM0_SX414 -FSMM0_SX54 -FSMS1_SI1504 -FSMS1_SI2134 -FSMS1_SI874 -FSMS1_SX154 -FSMS1_SX244 -FSMS1_SX334 -FSMS1_SX347 -FSMS1_SX64 -FSPM0_SI1241 -FSPM0_SI1871 -FSPM0_SI611 -FSPM0_SX161 -FSPM0_SX251 -FSPM0_SX341 -FSPM0_SX431 -FSPM0_SX71 -FSRH0_SI1719 -FSRH0_SI1931 -FSRH0_SI671 -FSRH0_SX131 -FSRH0_SX221 -FSRH0_SX311 -FSRH0_SX401 -FSRH0_SX41 -FSSB0_SI1082 -FSSB0_SI1712 -FSSB0_SI2342 -FSSB0_SX182 -FSSB0_SX272 -FSSB0_SX362 -FSSB0_SX452 -FSSB0_SX92 -FTAJ0_SI1329 -FTAJ0_SI474 -FTAJ0_SI699 -FTAJ0_SX159 -FTAJ0_SX249 -FTAJ0_SX339 -FTAJ0_SX429 -FTAJ0_SX69 -FTBR0_SI1402 -FTBR0_SI2181 -FTBR0_SI921 -FTBR0_SX111 -FTBR0_SX201 -FTBR0_SX21 -FTBR0_SX291 -FTBR0_SX381 -FTBW0_SI1345 -FTBW0_SI1975 -FTBW0_SI715 -FTBW0_SX175 -FTBW0_SX265 -FTBW0_SX355 -FTBW0_SX445 -FTBW0_SX85 -FTLG0_SI1743 -FTLG0_SI483 -FTLG0_SI840 -FTLG0_SX123 -FTLG0_SX213 -FTLG0_SX303 -FTLG0_SX33 -FTLG0_SX393 -FTMG0_SI1532 -FTMG0_SI2162 -FTMG0_SI902 -FTMG0_SX182 -FTMG0_SX272 -FTMG0_SX362 -FTMG0_SX452 -FTMG0_SX92 -FVFB0_SI1032 -FVFB0_SI1510 -FVFB0_SI2292 -FVFB0_SX132 -FVFB0_SX222 -FVFB0_SX312 -FVFB0_SX402 -FVFB0_SX42 -FVKB0_SI1159 -FVKB0_SI1789 -FVKB0_SI529 -FVKB0_SX169 -FVKB0_SX259 -FVKB0_SX349 -FVKB0_SX439 -FVKB0_SX79 -FVMH0_SI1466 -FVMH0_SI2096 -FVMH0_SI836 -FVMH0_SX116 -FVMH0_SX206 -FVMH0_SX26 -FVMH0_SX296 -FVMH0_SX386 -MABC0_SI1620 -MABC0_SI2041 -MABC0_SI781 -MABC0_SX151 -MABC0_SX241 -MABC0_SX331 -MABC0_SX421 -MABC0_SX61 -MADC0_SI1367 -MADC0_SI1997 -MADC0_SI737 -MADC0_SX107 -MADC0_SX17 -MADC0_SX197 -MADC0_SX287 -MADC0_SX377 -MADD0_SI1295 -MADD0_SI1798 -MADD0_SI538 -MADD0_SX178 -MADD0_SX268 -MADD0_SX358 -MADD0_SX448 -MADD0_SX88 -MAEB0_SI1411 -MAEB0_SI2250 -MAEB0_SI990 -MAEB0_SX180 -MAEB0_SX270 -MAEB0_SX360 -MAEB0_SX450 -MAEB0_SX90 -MAEO0_SI1326 -MAEO0_SI1655 -MAEO0_SI1956 -MAEO0_SX156 -MAEO0_SX246 -MAEO0_SX336 -MAEO0_SX426 -MAEO0_SX66 -MAFM0_SI1569 -MAFM0_SI2199 -MAFM0_SI939 -MAFM0_SX129 -MAFM0_SX219 -MAFM0_SX309 -MAFM0_SX39 -MAFM0_SX399 -MAJP0_SI1074 -MAJP0_SI1704 -MAJP0_SI2334 -MAJP0_SX174 -MAJP0_SX264 -MAJP0_SX354 -MAJP0_SX444 -MAJP0_SX84 -MAKB0_SI1016 -MAKB0_SI1646 -MAKB0_SI2276 -MAKB0_SX116 -MAKB0_SX206 -MAKB0_SX26 -MAKB0_SX296 -MAKB0_SX386 -MAKR0_SI1352 -MAKR0_SI1982 -MAKR0_SI722 -MAKR0_SX182 -MAKR0_SX272 -MAKR0_SX362 -MAKR0_SX452 -MAKR0_SX92 -MAPV0_SI1293 
-MAPV0_SI1923 -MAPV0_SI663 -MAPV0_SX123 -MAPV0_SX213 -MAPV0_SX303 -MAPV0_SX33 -MAPV0_SX393 -MARC0_SI1188 -MARC0_SI1818 -MARC0_SI558 -MARC0_SX108 -MARC0_SX18 -MARC0_SX198 -MARC0_SX288 -MARC0_SX378 -MARW0_SI1276 -MARW0_SI1906 -MARW0_SI646 -MARW0_SX106 -MARW0_SX16 -MARW0_SX286 -MARW0_SX349 -MARW0_SX376 -MBAR0_SI1319 -MBAR0_SI1949 -MBAR0_SI689 -MBAR0_SX149 -MBAR0_SX239 -MBAR0_SX329 -MBAR0_SX419 -MBAR0_SX59 -MBBR0_SI1055 -MBBR0_SI1685 -MBBR0_SI2315 -MBBR0_SX155 -MBBR0_SX245 -MBBR0_SX335 -MBBR0_SX425 -MBBR0_SX65 -MBCG0_SI2217 -MBCG0_SI486 -MBCG0_SI957 -MBCG0_SX147 -MBCG0_SX237 -MBCG0_SX327 -MBCG0_SX417 -MBCG0_SX57 -MBEF0_SI1281 -MBEF0_SI1911 -MBEF0_SI651 -MBEF0_SX111 -MBEF0_SX201 -MBEF0_SX21 -MBEF0_SX291 -MBEF0_SX381 -MBGT0_SI1341 -MBGT0_SI1841 -MBGT0_SI711 -MBGT0_SX171 -MBGT0_SX261 -MBGT0_SX351 -MBGT0_SX441 -MBGT0_SX81 -MBJV0_SI1247 -MBJV0_SI1877 -MBJV0_SI617 -MBJV0_SX167 -MBJV0_SX257 -MBJV0_SX347 -MBJV0_SX437 -MBJV0_SX77 -MBMA0_SI1222 -MBMA0_SI1852 -MBMA0_SI592 -MBMA0_SX142 -MBMA0_SX232 -MBMA0_SX322 -MBMA0_SX412 -MBMA0_SX52 -MBMA1_SI2207 -MBMA1_SI2214 -MBMA1_SI954 -MBMA1_SX144 -MBMA1_SX234 -MBMA1_SX324 -MBMA1_SX414 -MBMA1_SX54 -MBML0_SI1169 -MBML0_SI1799 -MBML0_SI539 -MBML0_SX179 -MBML0_SX269 -MBML0_SX359 -MBML0_SX449 -MBML0_SX89 -MBOM0_SI1014 -MBOM0_SI1644 -MBOM0_SI2274 -MBOM0_SX114 -MBOM0_SX204 -MBOM0_SX294 -MBOM0_SX311 -MBOM0_SX384 -MBSB0_SI1353 -MBSB0_SI1983 -MBSB0_SI723 -MBSB0_SX183 -MBSB0_SX273 -MBSB0_SX3 -MBSB0_SX363 -MBSB0_SX93 -MBTH0_SI2102 -MBTH0_SI505 -MBTH0_SI757 -MBTH0_SX122 -MBTH0_SX212 -MBTH0_SX302 -MBTH0_SX32 -MBTH0_SX392 -MBWP0_SI1531 -MBWP0_SI1969 -MBWP0_SI709 -MBWP0_SX169 -MBWP0_SX259 -MBWP0_SX349 -MBWP0_SX439 -MBWP0_SX79 -MCAE0_SI1447 -MCAE0_SI2077 -MCAE0_SI817 -MCAE0_SX187 -MCAE0_SX277 -MCAE0_SX367 -MCAE0_SX7 -MCAE0_SX97 -MCAL0_SI1138 -MCAL0_SI1768 -MCAL0_SI508 -MCAL0_SX148 -MCAL0_SX238 -MCAL0_SX328 -MCAL0_SX418 -MCAL0_SX58 -MCDC0_SI1292 -MCDC0_SI1922 -MCDC0_SI662 -MCDC0_SX122 -MCDC0_SX212 -MCDC0_SX302 -MCDC0_SX32 -MCDC0_SX392 -MCDD0_SI1513 -MCDD0_SI2143 -MCDD0_SI883 -MCDD0_SX163 -MCDD0_SX253 -MCDD0_SX343 -MCDD0_SX433 -MCDD0_SX73 -MCDR0_SI1154 -MCDR0_SI1784 -MCDR0_SI524 -MCDR0_SX164 -MCDR0_SX254 -MCDR0_SX344 -MCDR0_SX434 -MCDR0_SX74 -MCEF0_SI1135 -MCEF0_SI1765 -MCEF0_SI842 -MCEF0_SX145 -MCEF0_SX235 -MCEF0_SX325 -MCEF0_SX415 -MCEF0_SX55 -MCEW0_SI1442 -MCEW0_SI2072 -MCEW0_SI812 -MCEW0_SX182 -MCEW0_SX272 -MCEW0_SX362 -MCEW0_SX452 -MCEW0_SX92 -MCHL0_SI1347 -MCHL0_SI1404 -MCHL0_SI1977 -MCHL0_SX177 -MCHL0_SX267 -MCHL0_SX357 -MCHL0_SX447 -MCHL0_SX87 -MCLK0_SI1660 -MCLK0_SI2290 -MCLK0_SI650 -MCLK0_SX130 -MCLK0_SX220 -MCLK0_SX310 -MCLK0_SX40 -MCLK0_SX400 -MCLM0_SI1456 -MCLM0_SI2086 -MCLM0_SI826 -MCLM0_SX106 -MCLM0_SX16 -MCLM0_SX196 -MCLM0_SX286 -MCLM0_SX376 -MCPM0_SI1194 -MCPM0_SI1824 -MCPM0_SI564 -MCPM0_SX114 -MCPM0_SX204 -MCPM0_SX24 -MCPM0_SX294 -MCPM0_SX384 -MCRE0_SI1121 -MCRE0_SI1725 -MCRE0_SI1751 -MCRE0_SX131 -MCRE0_SX221 -MCRE0_SX24 -MCRE0_SX401 -MCRE0_SX41 -MCSS0_SI1380 -MCSS0_SI688 -MCSS0_SI750 -MCSS0_SX120 -MCSS0_SX210 -MCSS0_SX30 -MCSS0_SX300 -MCSS0_SX390 -MCTH0_SI1209 -MCTH0_SI1839 -MCTH0_SI579 -MCTH0_SX129 -MCTH0_SX219 -MCTH0_SX309 -MCTH0_SX39 -MCTH0_SX399 -MCTM0_SI1350 -MCTM0_SI1980 -MCTM0_SI720 -MCTM0_SX180 -MCTM0_SX270 -MCTM0_SX360 -MCTM0_SX450 -MCTM0_SX90 -MCXM0_SI1351 -MCXM0_SI1981 -MCXM0_SI721 -MCXM0_SX181 -MCXM0_SX271 -MCXM0_SX361 -MCXM0_SX451 -MCXM0_SX91 -MDAC0_SI1261 -MDAC0_SI1837 -MDAC0_SI631 -MDAC0_SX181 -MDAC0_SX271 -MDAC0_SX361 -MDAC0_SX451 -MDAC0_SX91 -MDAS0_SI1266 -MDAS0_SI1896 -MDAS0_SI636 -MDAS0_SX186 -MDAS0_SX21 -MDAS0_SX276 -MDAS0_SX6 -MDAS0_SX96 
-MDBB1_SI1006 -MDBB1_SI1636 -MDBB1_SI2056 -MDBB1_SX106 -MDBB1_SX16 -MDBB1_SX196 -MDBB1_SX286 -MDBB1_SX376 -MDBP0_SI1158 -MDBP0_SI1788 -MDBP0_SI528 -MDBP0_SX168 -MDBP0_SX258 -MDBP0_SX348 -MDBP0_SX438 -MDBP0_SX78 -MDCD0_SI1415 -MDCD0_SI2045 -MDCD0_SI785 -MDCD0_SX155 -MDCD0_SX245 -MDCD0_SX335 -MDCD0_SX425 -MDCD0_SX65 -MDCM0_SI1480 -MDCM0_SI2110 -MDCM0_SI850 -MDCM0_SX130 -MDCM0_SX220 -MDCM0_SX310 -MDCM0_SX40 -MDCM0_SX400 -MDDC0_SI1419 -MDDC0_SI2049 -MDDC0_SI789 -MDDC0_SX159 -MDDC0_SX249 -MDDC0_SX339 -MDDC0_SX429 -MDDC0_SX69 -MDED0_SI1170 -MDED0_SI1800 -MDED0_SI540 -MDED0_SX180 -MDED0_SX270 -MDED0_SX360 -MDED0_SX450 -MDED0_SX90 -MDEF0_SI1123 -MDEF0_SI1563 -MDEF0_SI2193 -MDEF0_SX123 -MDEF0_SX213 -MDEF0_SX303 -MDEF0_SX33 -MDEF0_SX393 -MDEM0_SI1868 -MDEM0_SI608 -MDEM0_SI800 -MDEM0_SX158 -MDEM0_SX248 -MDEM0_SX338 -MDEM0_SX428 -MDEM0_SX68 -MDHL0_SI1439 -MDHL0_SI2069 -MDHL0_SI809 -MDHL0_SX179 -MDHL0_SX269 -MDHL0_SX359 -MDHL0_SX449 -MDHL0_SX89 -MDHS0_SI1530 -MDHS0_SI2160 -MDHS0_SI900 -MDHS0_SX180 -MDHS0_SX270 -MDHS0_SX360 -MDHS0_SX450 -MDHS0_SX90 -MDJM0_SI1455 -MDJM0_SI2085 -MDJM0_SI825 -MDJM0_SX105 -MDJM0_SX15 -MDJM0_SX195 -MDJM0_SX285 -MDJM0_SX375 -MDKS0_SI1066 -MDKS0_SI1696 -MDKS0_SI2326 -MDKS0_SX166 -MDKS0_SX256 -MDKS0_SX346 -MDKS0_SX436 -MDKS0_SX76 -MDLB0_SI1306 -MDLB0_SI1936 -MDLB0_SI676 -MDLB0_SX136 -MDLB0_SX226 -MDLB0_SX316 -MDLB0_SX406 -MDLB0_SX46 -MDLC0_SI1395 -MDLC0_SI2025 -MDLC0_SI765 -MDLC0_SX135 -MDLC0_SX225 -MDLC0_SX315 -MDLC0_SX405 -MDLC0_SX45 -MDLC1_SI1435 -MDLC1_SI2065 -MDLC1_SI2144 -MDLC1_SX175 -MDLC1_SX265 -MDLC1_SX355 -MDLC1_SX445 -MDLC1_SX85 -MDLC2_SI1614 -MDLC2_SI2244 -MDLC2_SI984 -MDLC2_SX174 -MDLC2_SX264 -MDLC2_SX354 -MDLC2_SX444 -MDLC2_SX84 -MDLH0_SI1960 -MDLH0_SI574 -MDLH0_SI700 -MDLH0_SX160 -MDLH0_SX250 -MDLH0_SX340 -MDLH0_SX430 -MDLH0_SX70 -MDLM0_SI1234 -MDLM0_SI1864 -MDLM0_SI604 -MDLM0_SX154 -MDLM0_SX244 -MDLM0_SX334 -MDLM0_SX424 -MDLM0_SX64 -MDLR0_SI1233 -MDLR0_SI1863 -MDLR0_SI603 -MDLR0_SX153 -MDLR0_SX243 -MDLR0_SX333 -MDLR0_SX423 -MDLR0_SX63 -MDLR1_SI1299 -MDLR1_SI1929 -MDLR1_SI669 -MDLR1_SX129 -MDLR1_SX219 -MDLR1_SX309 -MDLR1_SX39 -MDLR1_SX399 -MDMA0_SI1238 -MDMA0_SI1430 -MDMA0_SI2060 -MDMA0_SX170 -MDMA0_SX260 -MDMA0_SX350 -MDMA0_SX440 -MDMA0_SX80 -MDMT0_SI1832 -MDMT0_SI2341 -MDMT0_SI572 -MDMT0_SX122 -MDMT0_SX212 -MDMT0_SX302 -MDMT0_SX32 -MDMT0_SX392 -MDNS0_SI1011 -MDNS0_SI2271 -MDNS0_SI873 -MDNS0_SX111 -MDNS0_SX201 -MDNS0_SX21 -MDNS0_SX291 -MDNS0_SX381 -MDPB0_SI1760 -MDPB0_SI2126 -MDPB0_SI866 -MDPB0_SX146 -MDPB0_SX236 -MDPB0_SX326 -MDPB0_SX416 -MDPB0_SX56 -MDPK0_SI1053 -MDPK0_SI1683 -MDPK0_SI552 -MDPK0_SX153 -MDPK0_SX243 -MDPK0_SX333 -MDPK0_SX423 -MDPK0_SX63 -MDPS0_SI1651 -MDPS0_SI1979 -MDPS0_SI719 -MDPS0_SX179 -MDPS0_SX269 -MDPS0_SX359 -MDPS0_SX449 -MDPS0_SX89 -MDRD0_SI1382 -MDRD0_SI2012 -MDRD0_SI752 -MDRD0_SX122 -MDRD0_SX212 -MDRD0_SX302 -MDRD0_SX32 -MDRD0_SX392 -MDSJ0_SI1462 -MDSJ0_SI2092 -MDSJ0_SI832 -MDSJ0_SX112 -MDSJ0_SX22 -MDSJ0_SX292 -MDSJ0_SX382 -MDSJ0_SX438 -MDSS0_SI1881 -MDSS0_SI2087 -MDSS0_SI621 -MDSS0_SX171 -MDSS0_SX261 -MDSS0_SX351 -MDSS0_SX441 -MDSS0_SX81 -MDSS1_SI1327 -MDSS1_SI1713 -MDSS1_SI697 -MDSS1_SX157 -MDSS1_SX247 -MDSS1_SX337 -MDSS1_SX427 -MDSS1_SX67 -MDTB0_SI1200 -MDTB0_SI1830 -MDTB0_SI570 -MDTB0_SX120 -MDTB0_SX210 -MDTB0_SX300 -MDTB0_SX321 -MDTB0_SX390 -MDWD0_SI1260 -MDWD0_SI1890 -MDWD0_SI557 -MDWD0_SX180 -MDWD0_SX270 -MDWD0_SX360 -MDWD0_SX450 -MDWD0_SX90 -MDWH0_SI1168 -MDWH0_SI1925 -MDWH0_SI665 -MDWH0_SX125 -MDWH0_SX215 -MDWH0_SX305 -MDWH0_SX35 -MDWH0_SX395 -MDWM0_SI1546 -MDWM0_SI2176 -MDWM0_SI916 -MDWM0_SX106 -MDWM0_SX16 -MDWM0_SX286 
-MDWM0_SX376 -MDWM0_SX433 -MEAL0_SI1547 -MEAL0_SI2177 -MEAL0_SI917 -MEAL0_SX107 -MEAL0_SX197 -MEAL0_SX287 -MEAL0_SX347 -MEAL0_SX377 -MEDR0_SI1374 -MEDR0_SI2004 -MEDR0_SI744 -MEDR0_SX114 -MEDR0_SX204 -MEDR0_SX24 -MEDR0_SX294 -MEDR0_SX384 -MEFG0_SI465 -MEFG0_SI491 -MEFG0_SI598 -MEFG0_SX105 -MEFG0_SX15 -MEFG0_SX195 -MEFG0_SX285 -MEFG0_SX375 -MEGJ0_SI1337 -MEGJ0_SI1967 -MEGJ0_SI707 -MEGJ0_SX167 -MEGJ0_SX257 -MEGJ0_SX3 -MEGJ0_SX437 -MEGJ0_SX77 -MEJL0_SI1592 -MEJL0_SI1654 -MEJL0_SI962 -MEJL0_SX152 -MEJL0_SX242 -MEJL0_SX332 -MEJL0_SX422 -MEJL0_SX62 -MEJS0_SI1240 -MEJS0_SI1870 -MEJS0_SI610 -MEJS0_SX160 -MEJS0_SX250 -MEJS0_SX340 -MEJS0_SX430 -MEJS0_SX70 -MESG0_SI1332 -MESG0_SI1962 -MESG0_SI702 -MESG0_SX162 -MESG0_SX252 -MESG0_SX342 -MESG0_SX432 -MESG0_SX72 -MESJ0_SI2039 -MESJ0_SI2257 -MESJ0_SI997 -MESJ0_SX187 -MESJ0_SX277 -MESJ0_SX367 -MESJ0_SX7 -MESJ0_SX97 -MEWM0_SI1348 -MEWM0_SI1978 -MEWM0_SI718 -MEWM0_SX178 -MEWM0_SX268 -MEWM0_SX358 -MEWM0_SX448 -MEWM0_SX88 -MFER0_SI1492 -MFER0_SI2122 -MFER0_SI862 -MFER0_SX142 -MFER0_SX232 -MFER0_SX322 -MFER0_SX412 -MFER0_SX52 -MFMC0_SI1132 -MFMC0_SI1762 -MFMC0_SI502 -MFMC0_SX142 -MFMC0_SX232 -MFMC0_SX322 -MFMC0_SX412 -MFMC0_SX52 -MFRM0_SI1155 -MFRM0_SI1717 -MFRM0_SI1785 -MFRM0_SX165 -MFRM0_SX255 -MFRM0_SX345 -MFRM0_SX435 -MFRM0_SX75 -MFWK0_SI1249 -MFWK0_SI1879 -MFWK0_SI619 -MFWK0_SX169 -MFWK0_SX259 -MFWK0_SX349 -MFWK0_SX439 -MFWK0_SX79 -MFXS0_SI1674 -MFXS0_SI2225 -MFXS0_SI2304 -MFXS0_SX144 -MFXS0_SX234 -MFXS0_SX324 -MFXS0_SX414 -MFXS0_SX54 -MFXV0_SI1005 -MFXV0_SI1342 -MFXV0_SI1635 -MFXV0_SX105 -MFXV0_SX15 -MFXV0_SX195 -MFXV0_SX285 -MFXV0_SX375 -MGAF0_SI1282 -MGAF0_SI1912 -MGAF0_SI652 -MGAF0_SX112 -MGAF0_SX202 -MGAF0_SX22 -MGAF0_SX292 -MGAF0_SX382 -MGAG0_SI1321 -MGAG0_SI645 -MGAG0_SI691 -MGAG0_SX151 -MGAG0_SX241 -MGAG0_SX331 -MGAG0_SX421 -MGAG0_SX61 -MGAK0_SI1036 -MGAK0_SI1666 -MGAK0_SI2296 -MGAK0_SX136 -MGAK0_SX226 -MGAK0_SX316 -MGAK0_SX406 -MGAK0_SX46 -MGAR0_SI1212 -MGAR0_SI1694 -MGAR0_SI1842 -MGAR0_SX132 -MGAR0_SX222 -MGAR0_SX312 -MGAR0_SX402 -MGAR0_SX42 -MGAW0_SI1165 -MGAW0_SI1802 -MGAW0_SI535 -MGAW0_SX175 -MGAW0_SX265 -MGAW0_SX355 -MGAW0_SX445 -MGAW0_SX85 -MGES0_SI1481 -MGES0_SI2111 -MGES0_SI851 -MGES0_SX131 -MGES0_SX221 -MGES0_SX311 -MGES0_SX401 -MGES0_SX41 -MGJC0_SI1256 -MGJC0_SI1335 -MGJC0_SI1965 -MGJC0_SX165 -MGJC0_SX255 -MGJC0_SX345 -MGJC0_SX435 -MGJC0_SX75 -MGRL0_SI1497 -MGRL0_SI2127 -MGRL0_SI867 -MGRL0_SX147 -MGRL0_SX237 -MGRL0_SX327 -MGRL0_SX417 -MGRL0_SX57 -MGRP0_SI1317 -MGRP0_SI1947 -MGRP0_SI687 -MGRP0_SX147 -MGRP0_SX237 -MGRP0_SX327 -MGRP0_SX417 -MGRP0_SX57 -MGSH0_SI1176 -MGSH0_SI1806 -MGSH0_SI546 -MGSH0_SX127 -MGSH0_SX186 -MGSH0_SX276 -MGSH0_SX6 -MGSH0_SX96 -MGSL0_SI1164 -MGSL0_SI534 -MGSL0_SI797 -MGSL0_SX174 -MGSL0_SX264 -MGSL0_SX354 -MGSL0_SX444 -MGSL0_SX84 -MGXP0_SI1087 -MGXP0_SI457 -MGXP0_SI525 -MGXP0_SX187 -MGXP0_SX277 -MGXP0_SX367 -MGXP0_SX7 -MGXP0_SX97 -MHBS0_SI1575 -MHBS0_SI2205 -MHBS0_SI945 -MHBS0_SX135 -MHBS0_SX225 -MHBS0_SX315 -MHBS0_SX405 -MHBS0_SX45 -MHIT0_SI1613 -MHIT0_SI2243 -MHIT0_SI983 -MHIT0_SX173 -MHIT0_SX263 -MHIT0_SX353 -MHIT0_SX443 -MHIT0_SX83 -MHJB0_SI1017 -MHJB0_SI1647 -MHJB0_SI2277 -MHJB0_SX117 -MHJB0_SX207 -MHJB0_SX27 -MHJB0_SX297 -MHJB0_SX387 -MHMG0_SI1365 -MHMG0_SI1995 -MHMG0_SI735 -MHMG0_SX105 -MHMG0_SX15 -MHMG0_SX195 -MHMG0_SX285 -MHMG0_SX375 -MHMR0_SI1119 -MHMR0_SI1692 -MHMR0_SI489 -MHMR0_SX129 -MHMR0_SX219 -MHMR0_SX309 -MHMR0_SX39 -MHMR0_SX399 -MHRM0_SI1475 -MHRM0_SI2218 -MHRM0_SI958 -MHRM0_SX148 -MHRM0_SX238 -MHRM0_SX328 -MHRM0_SX418 -MHRM0_SX58 -MHXL0_SI1772 -MHXL0_SI512 -MHXL0_SI612 -MHXL0_SX152 -MHXL0_SX242 
-MHXL0_SX332 -MHXL0_SX422 -MHXL0_SX62 -MILB0_SI2163 -MILB0_SI807 -MILB0_SI903 -MILB0_SX183 -MILB0_SX273 -MILB0_SX3 -MILB0_SX363 -MILB0_SX93 -MJAC0_SI1331 -MJAC0_SI2148 -MJAC0_SI701 -MJAC0_SX251 -MJAC0_SX307 -MJAC0_SX341 -MJAC0_SX431 -MJAC0_SX71 -MJAE0_SI1524 -MJAE0_SI1999 -MJAE0_SI2154 -MJAE0_SX174 -MJAE0_SX264 -MJAE0_SX354 -MJAE0_SX444 -MJAE0_SX84 -MJAI0_SI1604 -MJAI0_SI682 -MJAI0_SI710 -MJAI0_SX164 -MJAI0_SX254 -MJAI0_SX344 -MJAI0_SX434 -MJAI0_SX74 -MJBG0_SI1232 -MJBG0_SI1724 -MJBG0_SI1862 -MJBG0_SX152 -MJBG0_SX242 -MJBG0_SX332 -MJBG0_SX422 -MJBG0_SX62 -MJDA0_SI1031 -MJDA0_SI1661 -MJDA0_SI2291 -MJDA0_SX131 -MJDA0_SX221 -MJDA0_SX311 -MJDA0_SX401 -MJDA0_SX41 -MJDC0_SI1161 -MJDC0_SI2165 -MJDC0_SI531 -MJDC0_SX171 -MJDC0_SX261 -MJDC0_SX351 -MJDC0_SX441 -MJDC0_SX81 -MJDE0_SI1120 -MJDE0_SI463 -MJDE0_SI490 -MJDE0_SX130 -MJDE0_SX220 -MJDE0_SX310 -MJDE0_SX40 -MJDE0_SX400 -MJDG0_SI1042 -MJDG0_SI1672 -MJDG0_SI1705 -MJDG0_SX142 -MJDG0_SX232 -MJDG0_SX322 -MJDG0_SX412 -MJDG0_SX52 -MJDM0_SI1340 -MJDM0_SI1937 -MJDM0_SI974 -MJDM0_SX170 -MJDM0_SX260 -MJDM0_SX350 -MJDM0_SX440 -MJDM0_SX80 -MJEB0_SI1286 -MJEB0_SI1916 -MJEB0_SI656 -MJEB0_SX170 -MJEB0_SX206 -MJEB0_SX26 -MJEB0_SX296 -MJEB0_SX386 -MJEB1_SI1467 -MJEB1_SI2097 -MJEB1_SI837 -MJEB1_SX117 -MJEB1_SX207 -MJEB1_SX27 -MJEB1_SX297 -MJEB1_SX387 -MJEE0_SI1237 -MJEE0_SI1867 -MJEE0_SI607 -MJEE0_SX157 -MJEE0_SX247 -MJEE0_SX337 -MJEE0_SX427 -MJEE0_SX67 -MJFH0_SI1107 -MJFH0_SI1737 -MJFH0_SI477 -MJFH0_SX117 -MJFH0_SX207 -MJFH0_SX27 -MJFH0_SX297 -MJFH0_SX387 -MJFR0_SI1605 -MJFR0_SI2235 -MJFR0_SI975 -MJFR0_SX165 -MJFR0_SX255 -MJFR0_SX345 -MJFR0_SX435 -MJFR0_SX75 -MJHI0_SI1328 -MJHI0_SI555 -MJHI0_SI698 -MJHI0_SX158 -MJHI0_SX248 -MJHI0_SX338 -MJHI0_SX428 -MJHI0_SX68 -MJJB0_SI1139 -MJJB0_SI1277 -MJJB0_SI1769 -MJJB0_SX149 -MJJB0_SX239 -MJJB0_SX329 -MJJB0_SX419 -MJJB0_SX59 -MJJJ0_SI1163 -MJJJ0_SI1793 -MJJJ0_SI533 -MJJJ0_SX173 -MJJJ0_SX263 -MJJJ0_SX353 -MJJJ0_SX443 -MJJJ0_SX83 -MJJM0_SI1251 -MJJM0_SI1457 -MJJM0_SI827 -MJJM0_SX107 -MJJM0_SX17 -MJJM0_SX197 -MJJM0_SX287 -MJJM0_SX377 -MJKR0_SI1201 -MJKR0_SI1831 -MJKR0_SI571 -MJKR0_SX121 -MJKR0_SX211 -MJKR0_SX301 -MJKR0_SX31 -MJKR0_SX391 -MJLB0_SI1616 -MJLB0_SI2246 -MJLB0_SI986 -MJLB0_SX176 -MJLB0_SX266 -MJLB0_SX356 -MJLB0_SX446 -MJLB0_SX86 -MJLG1_SI1012 -MJLG1_SI1642 -MJLG1_SI2272 -MJLG1_SX112 -MJLG1_SX202 -MJLG1_SX22 -MJLG1_SX292 -MJLG1_SX382 -MJLS0_SI1096 -MJLS0_SI1726 -MJLS0_SI466 -MJLS0_SX106 -MJLS0_SX16 -MJLS0_SX196 -MJLS0_SX286 -MJLS0_SX376 -MJMA0_SI1495 -MJMA0_SI2125 -MJMA0_SI865 -MJMA0_SX145 -MJMA0_SX235 -MJMA0_SX325 -MJMA0_SX415 -MJMA0_SX55 -MJMD0_SI1028 -MJMD0_SI1658 -MJMD0_SI2288 -MJMD0_SX128 -MJMD0_SX218 -MJMD0_SX308 -MJMD0_SX38 -MJMD0_SX398 -MJMM0_SI1255 -MJMM0_SI1885 -MJMM0_SI625 -MJMM0_SX175 -MJMM0_SX265 -MJMM0_SX355 -MJMM0_SX445 -MJMM0_SX85 -MJPG0_SI1191 -MJPG0_SI1821 -MJPG0_SI561 -MJPG0_SX111 -MJPG0_SX201 -MJPG0_SX21 -MJPG0_SX291 -MJPG0_SX381 -MJPM0_SI1368 -MJPM0_SI1998 -MJPM0_SI738 -MJPM0_SX108 -MJPM0_SX18 -MJPM0_SX198 -MJPM0_SX288 -MJPM0_SX378 -MJPM1_SI1897 -MJPM1_SI2280 -MJPM1_SI761 -MJPM1_SX131 -MJPM1_SX221 -MJPM1_SX311 -MJPM1_SX401 -MJPM1_SX41 -MJRA0_SI1236 -MJRA0_SI1866 -MJRA0_SI606 -MJRA0_SX156 -MJRA0_SX246 -MJRA0_SX336 -MJRA0_SX426 -MJRA0_SX66 -MJRG0_SI1366 -MJRG0_SI1996 -MJRG0_SI736 -MJRG0_SX106 -MJRG0_SX16 -MJRG0_SX286 -MJRG0_SX352 -MJRG0_SX376 -MJRH0_SI1125 -MJRH0_SI1755 -MJRH0_SI1840 -MJRH0_SX135 -MJRH0_SX225 -MJRH0_SX315 -MJRH0_SX405 -MJRH0_SX45 -MJRH1_SI1558 -MJRH1_SI1774 -MJRH1_SI514 -MJRH1_SX154 -MJRH1_SX244 -MJRH1_SX334 -MJRH1_SX424 -MJRH1_SX64 -MJRK0_SI1662 -MJRK0_SI2103 -MJRK0_SI880 
-MJRK0_SX160 -MJRK0_SX250 -MJRK0_SX340 -MJRK0_SX430 -MJRK0_SX70 -MJRP0_SI1835 -MJRP0_SI1845 -MJRP0_SI585 -MJRP0_SX135 -MJRP0_SX225 -MJRP0_SX315 -MJRP0_SX405 -MJRP0_SX45 -MJSR0_SI1424 -MJSR0_SI2054 -MJSR0_SI794 -MJSR0_SX164 -MJSR0_SX254 -MJSR0_SX344 -MJSR0_SX434 -MJSR0_SX74 -MJWG0_SI2155 -MJWG0_SI813 -MJWG0_SI895 -MJWG0_SX175 -MJWG0_SX265 -MJWG0_SX355 -MJWG0_SX445 -MJWG0_SX85 -MJWS0_SI1143 -MJWS0_SI1773 -MJWS0_SI513 -MJWS0_SX153 -MJWS0_SX243 -MJWS0_SX333 -MJWS0_SX423 -MJWS0_SX63 -MJWT0_SI1291 -MJWT0_SI1381 -MJWT0_SI751 -MJWT0_SX121 -MJWT0_SX211 -MJWT0_SX301 -MJWT0_SX31 -MJWT0_SX391 -MJXA0_SI1507 -MJXA0_SI2137 -MJXA0_SI877 -MJXA0_SX157 -MJXA0_SX247 -MJXA0_SX337 -MJXA0_SX427 -MJXA0_SX67 -MJXL0_SI1172 -MJXL0_SI1795 -MJXL0_SI542 -MJXL0_SX182 -MJXL0_SX272 -MJXL0_SX362 -MJXL0_SX452 -MJXL0_SX92 -MKAG0_SI1609 -MKAG0_SI2239 -MKAG0_SI979 -MKAG0_SX169 -MKAG0_SX259 -MKAG0_SX30 -MKAG0_SX439 -MKAG0_SX79 -MKAH0_SI1528 -MKAH0_SI2158 -MKAH0_SI898 -MKAH0_SX178 -MKAH0_SX268 -MKAH0_SX358 -MKAH0_SX448 -MKAH0_SX88 -MKAJ0_SI1414 -MKAJ0_SI2044 -MKAJ0_SI784 -MKAJ0_SX154 -MKAJ0_SX244 -MKAJ0_SX334 -MKAJ0_SX424 -MKAJ0_SX64 -MKAM0_SI1250 -MKAM0_SI1316 -MKAM0_SI1465 -MKAM0_SX146 -MKAM0_SX236 -MKAM0_SX326 -MKAM0_SX416 -MKAM0_SX56 -MKDB0_SI2132 -MKDB0_SI588 -MKDB0_SI872 -MKDB0_SX152 -MKDB0_SX242 -MKDB0_SX332 -MKDB0_SX422 -MKDB0_SX62 -MKDD0_SI1567 -MKDD0_SI2197 -MKDD0_SI937 -MKDD0_SX127 -MKDD0_SX217 -MKDD0_SX307 -MKDD0_SX37 -MKDD0_SX397 -MKDT0_SI2153 -MKDT0_SI814 -MKDT0_SI893 -MKDT0_SX173 -MKDT0_SX263 -MKDT0_SX353 -MKDT0_SX443 -MKDT0_SX83 -MKES0_SI1253 -MKES0_SI1883 -MKES0_SI623 -MKES0_SX173 -MKES0_SX263 -MKES0_SX353 -MKES0_SX443 -MKES0_SX83 -MKJO0_SI1517 -MKJO0_SI2147 -MKJO0_SI887 -MKJO0_SX167 -MKJO0_SX257 -MKJO0_SX424 -MKJO0_SX437 -MKJO0_SX77 -MKLN0_SI1598 -MKLN0_SI2228 -MKLN0_SI968 -MKLN0_SX158 -MKLN0_SX248 -MKLN0_SX338 -MKLN0_SX428 -MKLN0_SX68 -MKLR0_SI1059 -MKLR0_SI1689 -MKLR0_SI2319 -MKLR0_SX159 -MKLR0_SX249 -MKLR0_SX339 -MKLR0_SX429 -MKLR0_SX69 -MKLS0_SI1437 -MKLS0_SI1533 -MKLS0_SI2067 -MKLS0_SX177 -MKLS0_SX267 -MKLS0_SX357 -MKLS0_SX447 -MKLS0_SX87 -MKLS1_SI1545 -MKLS1_SI2175 -MKLS1_SI915 -MKLS1_SX105 -MKLS1_SX15 -MKLS1_SX195 -MKLS1_SX285 -MKLS1_SX375 -MKLW0_SI1571 -MKLW0_SI1844 -MKLW0_SI2201 -MKLW0_SX131 -MKLW0_SX221 -MKLW0_SX311 -MKLW0_SX401 -MKLW0_SX41 -MKRG0_SI1491 -MKRG0_SI2121 -MKRG0_SI861 -MKRG0_SX141 -MKRG0_SX231 -MKRG0_SX31 -MKRG0_SX411 -MKRG0_SX51 -MKXL0_SI1185 -MKXL0_SI1815 -MKXL0_SI1958 -MKXL0_SX105 -MKXL0_SX15 -MKXL0_SX195 -MKXL0_SX285 -MKXL0_SX375 -MLBC0_SI1239 -MLBC0_SI1869 -MLBC0_SI609 -MLBC0_SX159 -MLBC0_SX249 -MLBC0_SX339 -MLBC0_SX429 -MLBC0_SX69 -MLEL0_SI1246 -MLEL0_SI1876 -MLEL0_SI616 -MLEL0_SX166 -MLEL0_SX256 -MLEL0_SX346 -MLEL0_SX436 -MLEL0_SX76 -MLJC0_SI1225 -MLJC0_SI1855 -MLJC0_SI595 -MLJC0_SX145 -MLJC0_SX235 -MLJC0_SX325 -MLJC0_SX415 -MLJC0_SX55 -MLJH0_SI1324 -MLJH0_SI1422 -MLJH0_SI694 -MLJH0_SX154 -MLJH0_SX244 -MLJH0_SX334 -MLJH0_SX424 -MLJH0_SX64 -MLNS0_SI1407 -MLNS0_SI2037 -MLNS0_SI777 -MLNS0_SX147 -MLNS0_SX237 -MLNS0_SX327 -MLNS0_SX417 -MLNS0_SX57 -MLSH0_SI1417 -MLSH0_SI2047 -MLSH0_SI787 -MLSH0_SX157 -MLSH0_SX247 -MLSH0_SX337 -MLSH0_SX427 -MLSH0_SX67 -MMAA0_SI1588 -MMAA0_SI2105 -MMAA0_SI845 -MMAA0_SX125 -MMAA0_SX215 -MMAA0_SX305 -MMAA0_SX35 -MMAA0_SX395 -MMAB1_SI1494 -MMAB1_SI2124 -MMAB1_SI864 -MMAB1_SX144 -MMAB1_SX234 -MMAB1_SX324 -MMAB1_SX414 -MMAB1_SX54 -MMAG0_SI1126 -MMAG0_SI1756 -MMAG0_SI496 -MMAG0_SX136 -MMAG0_SX226 -MMAG0_SX316 -MMAG0_SX406 -MMAG0_SX46 -MMAM0_SI1597 -MMAM0_SI1668 -MMAM0_SI2227 -MMAM0_SX157 -MMAM0_SX247 -MMAM0_SX337 -MMAM0_SX427 -MMAM0_SX67 -MMAR0_SI1336 
-MMAR0_SI1966 -MMAR0_SI706 -MMAR0_SX166 -MMAR0_SX256 -MMAR0_SX346 -MMAR0_SX436 -MMAR0_SX76 -MMBS0_SI1151 -MMBS0_SI1781 -MMBS0_SI521 -MMBS0_SX161 -MMBS0_SX251 -MMBS0_SX341 -MMBS0_SX431 -MMBS0_SX71 -MMCC0_SI1338 -MMCC0_SI1968 -MMCC0_SI708 -MMCC0_SX168 -MMCC0_SX258 -MMCC0_SX348 -MMCC0_SX438 -MMCC0_SX78 -MMDB0_SI1358 -MMDB0_SI1617 -MMDB0_SI987 -MMDB0_SX177 -MMDB0_SX267 -MMDB0_SX357 -MMDB0_SX447 -MMDB0_SX87 -MMDG0_SI1780 -MMDG0_SI2035 -MMDG0_SI520 -MMDG0_SX160 -MMDG0_SX250 -MMDG0_SX340 -MMDG0_SX430 -MMDG0_SX70 -MMDM0_SI1311 -MMDM0_SI1941 -MMDM0_SI681 -MMDM0_SX141 -MMDM0_SX231 -MMDM0_SX321 -MMDM0_SX411 -MMDM0_SX51 -MMDM1_SI1650 -MMDM1_SI2043 -MMDM1_SI783 -MMDM1_SX153 -MMDM1_SX243 -MMDM1_SX333 -MMDM1_SX423 -MMDM1_SX63 -MMDS0_SI1343 -MMDS0_SI1973 -MMDS0_SI713 -MMDS0_SX173 -MMDS0_SX263 -MMDS0_SX353 -MMDS0_SX443 -MMDS0_SX83 -MMEA0_SI1388 -MMEA0_SI2018 -MMEA0_SI758 -MMEA0_SX128 -MMEA0_SX218 -MMEA0_SX308 -MMEA0_SX38 -MMEA0_SX398 -MMEB0_SI1357 -MMEB0_SI1987 -MMEB0_SI727 -MMEB0_SX187 -MMEB0_SX327 -MMEB0_SX367 -MMEB0_SX7 -MMEB0_SX97 -MMGC0_SI1305 -MMGC0_SI1935 -MMGC0_SI2184 -MMGC0_SX135 -MMGC0_SX225 -MMGC0_SX315 -MMGC0_SX405 -MMGC0_SX45 -MMGG0_SI1079 -MMGG0_SI1709 -MMGG0_SI2339 -MMGG0_SX179 -MMGG0_SX269 -MMGG0_SX359 -MMGG0_SX449 -MMGG0_SX89 -MMGK0_SI1322 -MMGK0_SI1952 -MMGK0_SI692 -MMGK0_SX152 -MMGK0_SX242 -MMGK0_SX332 -MMGK0_SX422 -MMGK0_SX62 -MMJB1_SI1408 -MMJB1_SI2038 -MMJB1_SI778 -MMJB1_SX148 -MMJB1_SX238 -MMJB1_SX328 -MMJB1_SX418 -MMJB1_SX58 -MMLM0_SI1527 -MMLM0_SI2150 -MMLM0_SI897 -MMLM0_SX177 -MMLM0_SX267 -MMLM0_SX357 -MMLM0_SX447 -MMLM0_SX87 -MMPM0_SI1061 -MMPM0_SI1691 -MMPM0_SI2321 -MMPM0_SX161 -MMPM0_SX251 -MMPM0_SX341 -MMPM0_SX431 -MMPM0_SX71 -MMRP0_SI2034 -MMRP0_SI717 -MMRP0_SI774 -MMRP0_SX144 -MMRP0_SX234 -MMRP0_SX324 -MMRP0_SX414 -MMRP0_SX54 -MMSM0_SI1106 -MMSM0_SI1736 -MMSM0_SI476 -MMSM0_SX116 -MMSM0_SX206 -MMSM0_SX26 -MMSM0_SX296 -MMSM0_SX386 -MMVP0_SI1284 -MMVP0_SI1914 -MMVP0_SI654 -MMVP0_SX114 -MMVP0_SX204 -MMVP0_SX294 -MMVP0_SX347 -MMVP0_SX384 -MMWB0_SI1619 -MMWB0_SI2249 -MMWB0_SI989 -MMWB0_SX179 -MMWB0_SX269 -MMWB0_SX359 -MMWB0_SX449 -MMWB0_SX89 -MMWS0_SI1518 -MMWS0_SI559 -MMWS0_SI888 -MMWS0_SX168 -MMWS0_SX258 -MMWS0_SX348 -MMWS0_SX438 -MMWS0_SX78 -MMWS1_SI1071 -MMWS1_SI1701 -MMWS1_SI2331 -MMWS1_SX261 -MMWS1_SX27 -MMWS1_SX351 -MMWS1_SX441 -MMWS1_SX81 -MMXS0_SI2136 -MMXS0_SI629 -MMXS0_SI876 -MMXS0_SX156 -MMXS0_SX246 -MMXS0_SX336 -MMXS0_SX426 -MMXS0_SX66 -MNET0_SI1446 -MNET0_SI2076 -MNET0_SI816 -MNET0_SX186 -MNET0_SX276 -MNET0_SX366 -MNET0_SX6 -MNET0_SX96 -MNTW0_SI1068 -MNTW0_SI1698 -MNTW0_SI2328 -MNTW0_SX168 -MNTW0_SX202 -MNTW0_SX258 -MNTW0_SX348 -MNTW0_SX78 -MPAR0_SI1576 -MPAR0_SI2206 -MPAR0_SI946 -MPAR0_SX136 -MPAR0_SX226 -MPAR0_SX316 -MPAR0_SX406 -MPAR0_SX46 -MPEB0_SI1034 -MPEB0_SI1860 -MPEB0_SI600 -MPEB0_SX150 -MPEB0_SX240 -MPEB0_SX330 -MPEB0_SX420 -MPEB0_SX60 -MPFU0_SI1258 -MPFU0_SI1888 -MPFU0_SI628 -MPFU0_SX178 -MPFU0_SX268 -MPFU0_SX358 -MPFU0_SX448 -MPFU0_SX88 -MPGH0_SI1554 -MPGH0_SI675 -MPGH0_SI924 -MPGH0_SX114 -MPGH0_SX204 -MPGH0_SX24 -MPGH0_SX294 -MPGH0_SX384 -MPGR0_SI1410 -MPGR0_SI2040 -MPGR0_SI780 -MPGR0_SX150 -MPGR0_SX240 -MPGR0_SX330 -MPGR0_SX420 -MPGR0_SX60 -MPGR1_SI1269 -MPGR1_SI1499 -MPGR1_SI2129 -MPGR1_SX149 -MPGR1_SX239 -MPGR1_SX329 -MPGR1_SX419 -MPGR1_SX59 -MPMB0_SI1501 -MPMB0_SI2131 -MPMB0_SI871 -MPMB0_SX151 -MPMB0_SX241 -MPMB0_SX331 -MPMB0_SX421 -MPMB0_SX61 -MPPC0_SI1412 -MPPC0_SI2042 -MPPC0_SI782 -MPPC0_SX152 -MPPC0_SX242 -MPPC0_SX332 -MPPC0_SX422 -MPPC0_SX62 -MPRB0_SI1205 -MPRB0_SI1215 -MPRB0_SI575 -MPRB0_SX125 -MPRB0_SX215 -MPRB0_SX305 -MPRB0_SX35 -MPRB0_SX395 
-MPRD0_SI1431 -MPRD0_SI2061 -MPRD0_SI801 -MPRD0_SX171 -MPRD0_SX261 -MPRD0_SX351 -MPRD0_SX441 -MPRD0_SX81 -MPRK0_SI1097 -MPRK0_SI1727 -MPRK0_SI467 -MPRK0_SX107 -MPRK0_SX17 -MPRK0_SX197 -MPRK0_SX287 -MPRK0_SX377 -MPRT0_SI1210 -MPRT0_SI495 -MPRT0_SI580 -MPRT0_SX130 -MPRT0_SX220 -MPRT0_SX310 -MPRT0_SX40 -MPRT0_SX400 -MPSW0_SI1067 -MPSW0_SI1697 -MPSW0_SI2327 -MPSW0_SX167 -MPSW0_SX24 -MPSW0_SX257 -MPSW0_SX437 -MPSW0_SX77 -MRAB0_SI1224 -MRAB0_SI1854 -MRAB0_SI594 -MRAB0_SX144 -MRAB0_SX234 -MRAB0_SX324 -MRAB0_SX414 -MRAB0_SX54 -MRAB1_SI1478 -MRAB1_SI2108 -MRAB1_SI848 -MRAB1_SX128 -MRAB1_SX218 -MRAB1_SX308 -MRAB1_SX38 -MRAB1_SX398 -MRAI0_SI1954 -MRAI0_SI2052 -MRAI0_SI792 -MRAI0_SX162 -MRAI0_SX252 -MRAI0_SX342 -MRAI0_SX432 -MRAI0_SX72 -MRAM0_SI1275 -MRAM0_SI1905 -MRAM0_SI1951 -MRAM0_SX105 -MRAM0_SX15 -MRAM0_SX195 -MRAM0_SX285 -MRAM0_SX375 -MRAV0_SI1008 -MRAV0_SI1638 -MRAV0_SI2268 -MRAV0_SX108 -MRAV0_SX18 -MRAV0_SX198 -MRAV0_SX288 -MRAV0_SX378 -MRBC0_SI1665 -MRBC0_SI1859 -MRBC0_SI599 -MRBC0_SX149 -MRBC0_SX239 -MRBC0_SX329 -MRBC0_SX419 -MRBC0_SX59 -MRCG0_SI1428 -MRCG0_SI2058 -MRCG0_SI798 -MRCG0_SX168 -MRCG0_SX258 -MRCG0_SX348 -MRCG0_SX438 -MRCG0_SX78 -MRCW0_SI1371 -MRCW0_SI2001 -MRCW0_SI741 -MRCW0_SX111 -MRCW0_SX201 -MRCW0_SX21 -MRCW0_SX291 -MRCW0_SX381 -MRDD0_SI1050 -MRDD0_SI1680 -MRDD0_SI2310 -MRDD0_SX150 -MRDD0_SX240 -MRDD0_SX277 -MRDD0_SX330 -MRDD0_SX60 -MRDM0_SI1044 -MRDM0_SI1595 -MRDM0_SI965 -MRDM0_SX155 -MRDM0_SX245 -MRDM0_SX335 -MRDM0_SX425 -MRDM0_SX65 -MRDS0_SI1167 -MRDS0_SI1797 -MRDS0_SI537 -MRDS0_SX177 -MRDS0_SX267 -MRDS0_SX357 -MRDS0_SX447 -MRDS0_SX87 -MREE0_SI1104 -MREE0_SI1734 -MREE0_SI1959 -MREE0_SX114 -MREE0_SX204 -MREE0_SX24 -MREE0_SX294 -MREE0_SX384 -MREH1_SI1599 -MREH1_SI2229 -MREH1_SI969 -MREH1_SX159 -MREH1_SX249 -MREH1_SX339 -MREH1_SX429 -MREH1_SX69 -MREM0_SI1591 -MREM0_SI511 -MREM0_SI961 -MREM0_SX151 -MREM0_SX241 -MREM0_SX331 -MREM0_SX421 -MREM0_SX61 -MREW1_SI1500 -MREW1_SI2130 -MREW1_SI870 -MREW1_SX150 -MREW1_SX240 -MREW1_SX330 -MREW1_SX420 -MREW1_SX60 -MRFK0_SI1076 -MRFK0_SI1706 -MRFK0_SI2336 -MRFK0_SX176 -MRFK0_SX266 -MRFK0_SX356 -MRFK0_SX446 -MRFK0_SX86 -MRFL0_SI1156 -MRFL0_SI1786 -MRFL0_SI526 -MRFL0_SX166 -MRFL0_SX256 -MRFL0_SX346 -MRFL0_SX436 -MRFL0_SX76 -MRGM0_SI1162 -MRGM0_SI1792 -MRGM0_SI532 -MRGM0_SX172 -MRGM0_SX262 -MRGM0_SX416 -MRGM0_SX442 -MRGM0_SX82 -MRGS0_SI1356 -MRGS0_SI1986 -MRGS0_SI726 -MRGS0_SX186 -MRGS0_SX276 -MRGS0_SX366 -MRGS0_SX6 -MRGS0_SX96 -MRHL0_SI1515 -MRHL0_SI2145 -MRHL0_SI885 -MRHL0_SX165 -MRHL0_SX255 -MRHL0_SX345 -MRHL0_SX435 -MRHL0_SX75 -MRJB1_SI1020 -MRJB1_SI1413 -MRJB1_SI2021 -MRJB1_SX120 -MRJB1_SX210 -MRJB1_SX30 -MRJB1_SX300 -MRJB1_SX390 -MRJH0_SI1519 -MRJH0_SI889 -MRJH0_SI914 -MRJH0_SX169 -MRJH0_SX259 -MRJH0_SX307 -MRJH0_SX439 -MRJH0_SX79 -MRJM0_SI1095 -MRJM0_SI1228 -MRJM0_SI1858 -MRJM0_SX148 -MRJM0_SX238 -MRJM0_SX328 -MRJM0_SX418 -MRJM0_SX58 -MRJM1_SI1298 -MRJM1_SI1928 -MRJM1_SI668 -MRJM1_SX128 -MRJM1_SX218 -MRJM1_SX308 -MRJM1_SX38 -MRJM1_SX398 -MRJT0_SI1498 -MRJT0_SI1805 -MRJT0_SI868 -MRJT0_SX148 -MRJT0_SX238 -MRJT0_SX328 -MRJT0_SX418 -MRJT0_SX58 -MRKM0_SI1267 -MRKM0_SI1391 -MRKM0_SI637 -MRKM0_SX187 -MRKM0_SX277 -MRKM0_SX367 -MRKM0_SX7 -MRKM0_SX97 -MRLD0_SI1594 -MRLD0_SI2224 -MRLD0_SI964 -MRLD0_SX154 -MRLD0_SX244 -MRLD0_SX334 -MRLD0_SX424 -MRLD0_SX64 -MRLJ0_SI1420 -MRLJ0_SI2050 -MRLJ0_SI790 -MRLJ0_SX160 -MRLJ0_SX250 -MRLJ0_SX340 -MRLJ0_SX430 -MRLJ0_SX70 -MRLJ1_SI1671 -MRLJ1_SI2301 -MRLJ1_SI2332 -MRLJ1_SX141 -MRLJ1_SX231 -MRLJ1_SX321 -MRLJ1_SX411 -MRLJ1_SX51 -MRLK0_SI1468 -MRLK0_SI2140 -MRLK0_SI843 -MRLK0_SX123 -MRLK0_SX213 -MRLK0_SX303 
-MRLK0_SX33 -MRLK0_SX393 -MRLR0_SI1196 -MRLR0_SI1826 -MRLR0_SI566 -MRLR0_SX116 -MRLR0_SX206 -MRLR0_SX26 -MRLR0_SX296 -MRLR0_SX386 -MRMB0_SI1581 -MRMB0_SI2211 -MRMB0_SI951 -MRMB0_SX141 -MRMB0_SX231 -MRMB0_SX321 -MRMB0_SX411 -MRMB0_SX51 -MRMG0_SI1080 -MRMG0_SI1710 -MRMG0_SI2340 -MRMG0_SX180 -MRMG0_SX270 -MRMG0_SX360 -MRMG0_SX450 -MRMG0_SX90 -MRMH0_SI1021 -MRMH0_SI1349 -MRMH0_SI2281 -MRMH0_SX121 -MRMH0_SX211 -MRMH0_SX301 -MRMH0_SX31 -MRMH0_SX391 -MRML0_SI1421 -MRML0_SI2051 -MRML0_SI791 -MRML0_SX161 -MRML0_SX251 -MRML0_SX341 -MRML0_SX431 -MRML0_SX71 -MRMS0_SI1113 -MRMS0_SI2057 -MRMS0_SI2100 -MRMS0_SX120 -MRMS0_SX210 -MRMS0_SX30 -MRMS0_SX300 -MRMS0_SX390 -MRPC1_SI1482 -MRPC1_SI2026 -MRPC1_SI2112 -MRPC1_SX132 -MRPC1_SX222 -MRPC1_SX312 -MRPC1_SX402 -MRPC1_SX42 -MRRE0_SI1334 -MRRE0_SI704 -MRRE0_SI952 -MRRE0_SX164 -MRRE0_SX254 -MRRE0_SX344 -MRRE0_SX434 -MRRE0_SX74 -MRSO0_SI1206 -MRSO0_SI1659 -MRSO0_SI2289 -MRSO0_SX129 -MRSO0_SX219 -MRSO0_SX309 -MRSO0_SX39 -MRSO0_SX399 -MRSP0_SI1429 -MRSP0_SI2059 -MRSP0_SI799 -MRSP0_SX169 -MRSP0_SX196 -MRSP0_SX259 -MRSP0_SX439 -MRSP0_SX79 -MRTC0_SI1458 -MRTC0_SI2088 -MRTC0_SI828 -MRTC0_SX108 -MRTC0_SX18 -MRTC0_SX198 -MRTC0_SX288 -MRTC0_SX378 -MRTJ0_SI1551 -MRTJ0_SI2032 -MRTJ0_SI772 -MRTJ0_SX142 -MRTJ0_SX232 -MRTJ0_SX322 -MRTJ0_SX412 -MRTJ0_SX52 -MRVG0_SI1140 -MRVG0_SI1770 -MRVG0_SI510 -MRVG0_SX150 -MRVG0_SX240 -MRVG0_SX330 -MRVG0_SX420 -MRVG0_SX60 -MRWA0_SI1603 -MRWA0_SI2233 -MRWA0_SI973 -MRWA0_SX163 -MRWA0_SX253 -MRWA0_SX343 -MRWA0_SX433 -MRWA0_SX73 -MRWS0_SI1102 -MRWS0_SI1732 -MRWS0_SI472 -MRWS0_SX112 -MRWS0_SX202 -MRWS0_SX22 -MRWS0_SX292 -MRWS0_SX382 -MRXB0_SI1585 -MRXB0_SI2215 -MRXB0_SI955 -MRXB0_SX145 -MRXB0_SX235 -MRXB0_SX325 -MRXB0_SX415 -MRXB0_SX55 -MSAH1_SI1049 -MSAH1_SI1679 -MSAH1_SI2309 -MSAH1_SX149 -MSAH1_SX239 -MSAH1_SX329 -MSAH1_SX419 -MSAH1_SX59 -MSAS0_SI1376 -MSAS0_SI2006 -MSAS0_SI746 -MSAS0_SX116 -MSAS0_SX206 -MSAS0_SX26 -MSAS0_SX296 -MSAS0_SX386 -MSAT0_SI1526 -MSAT0_SI2156 -MSAT0_SI896 -MSAT0_SX176 -MSAT0_SX266 -MSAT0_SX356 -MSAT0_SX446 -MSAT0_SX86 -MSAT1_SI1073 -MSAT1_SI1703 -MSAT1_SI2333 -MSAT1_SX173 -MSAT1_SX263 -MSAT1_SX353 -MSAT1_SX443 -MSAT1_SX83 -MSDB0_SI1007 -MSDB0_SI1637 -MSDB0_SI2267 -MSDB0_SX107 -MSDB0_SX17 -MSDB0_SX197 -MSDB0_SX287 -MSDB0_SX377 -MSDH0_SI2113 -MSDH0_SI2240 -MSDH0_SI980 -MSDH0_SX170 -MSDH0_SX260 -MSDH0_SX350 -MSDH0_SX440 -MSDH0_SX80 -MSDS0_SI1077 -MSDS0_SI1707 -MSDS0_SI2337 -MSDS0_SX177 -MSDS0_SX267 -MSDS0_SX357 -MSDS0_SX447 -MSDS0_SX87 -MSEM1_SI1440 -MSEM1_SI2070 -MSEM1_SI810 -MSEM1_SX180 -MSEM1_SX270 -MSEM1_SX360 -MSEM1_SX450 -MSEM1_SX90 -MSES0_SI1589 -MSES0_SI2216 -MSES0_SI2219 -MSES0_SX149 -MSES0_SX239 -MSES0_SX329 -MSES0_SX419 -MSES0_SX59 -MSFH0_SI1216 -MSFH0_SI1738 -MSFH0_SI586 -MSFH0_SX136 -MSFH0_SX226 -MSFH0_SX316 -MSFH0_SX406 -MSFH0_SX46 -MSFV0_SI1262 -MSFV0_SI1892 -MSFV0_SI632 -MSFV0_SX182 -MSFV0_SX272 -MSFV0_SX362 -MSFV0_SX452 -MSFV0_SX92 -MSJK0_SI1596 -MSJK0_SI2226 -MSJK0_SI966 -MSJK0_SX156 -MSJK0_SX246 -MSJK0_SX336 -MSJK0_SX426 -MSJK0_SX66 -MSMC0_SI1907 -MSMC0_SI509 -MSMC0_SI647 -MSMC0_SX107 -MSMC0_SX17 -MSMC0_SX197 -MSMC0_SX287 -MSMC0_SX377 -MSMR0_SI1150 -MSMR0_SI1405 -MSMR0_SI775 -MSMR0_SX145 -MSMR0_SX235 -MSMR0_SX325 -MSMR0_SX415 -MSMR0_SX55 -MSMS0_SI1433 -MSMS0_SI2063 -MSMS0_SI803 -MSMS0_SX173 -MSMS0_SX263 -MSMS0_SX353 -MSMS0_SX443 -MSMS0_SX83 -MSRG0_SI1221 -MSRG0_SI1851 -MSRG0_SI591 -MSRG0_SX141 -MSRG0_SX231 -MSRG0_SX321 -MSRG0_SX411 -MSRG0_SX51 -MSRR0_SI1131 -MSRR0_SI1761 -MSRR0_SI501 -MSRR0_SX141 -MSRR0_SX231 -MSRR0_SX30 -MSRR0_SX411 -MSRR0_SX51 -MSTF0_SI1396 -MSTF0_SI766 -MSTF0_SI852 -MSTF0_SX136 
-MSTF0_SX226 -MSTF0_SX316 -MSTF0_SX406 -MSTF0_SX46 -MSVS0_SI1568 -MSVS0_SI2198 -MSVS0_SI938 -MSVS0_SX128 -MSVS0_SX218 -MSVS0_SX308 -MSVS0_SX38 -MSVS0_SX398 -MTAB0_SI1572 -MTAB0_SI2202 -MTAB0_SI942 -MTAB0_SX132 -MTAB0_SX222 -MTAB0_SX312 -MTAB0_SX402 -MTAB0_SX42 -MTAS0_SI1385 -MTAS0_SI2015 -MTAS0_SI755 -MTAS0_SX125 -MTAS0_SX215 -MTAS0_SX305 -MTAS0_SX35 -MTAS0_SX395 -MTAT0_SI1110 -MTAT0_SI1740 -MTAT0_SI811 -MTAT0_SX120 -MTAT0_SX210 -MTAT0_SX30 -MTAT0_SX300 -MTAT0_SX390 -MTAT1_SI1409 -MTAT1_SI1627 -MTAT1_SI779 -MTAT1_SX149 -MTAT1_SX239 -MTAT1_SX329 -MTAT1_SX419 -MTAT1_SX59 -MTBC0_SI1173 -MTBC0_SI1803 -MTBC0_SI543 -MTBC0_SX183 -MTBC0_SX273 -MTBC0_SX347 -MTBC0_SX363 -MTBC0_SX93 -MTCS0_SI1972 -MTCS0_SI2265 -MTCS0_SI712 -MTCS0_SX172 -MTCS0_SX262 -MTCS0_SX352 -MTCS0_SX442 -MTCS0_SX82 -MTDB0_SI1401 -MTDB0_SI2031 -MTDB0_SI771 -MTDB0_SX141 -MTDB0_SX231 -MTDB0_SX321 -MTDB0_SX411 -MTDB0_SX51 -MTDP0_SI1274 -MTDP0_SI1521 -MTDP0_SI2151 -MTDP0_SX171 -MTDP0_SX261 -MTDP0_SX351 -MTDP0_SX441 -MTDP0_SX81 -MTER0_SI1157 -MTER0_SI1787 -MTER0_SI527 -MTER0_SX167 -MTER0_SX17 -MTER0_SX257 -MTER0_SX437 -MTER0_SX77 -MTJG0_SI1520 -MTJG0_SI2157 -MTJG0_SI890 -MTJG0_SX170 -MTJG0_SX260 -MTJG0_SX350 -MTJG0_SX440 -MTJG0_SX80 -MTJM0_SI1226 -MTJM0_SI1856 -MTJM0_SI655 -MTJM0_SX146 -MTJM0_SX236 -MTJM0_SX326 -MTJM0_SX416 -MTJM0_SX56 -MTJS0_SI1192 -MTJS0_SI1822 -MTJS0_SI562 -MTJS0_SX112 -MTJS0_SX202 -MTJS0_SX22 -MTJS0_SX292 -MTJS0_SX382 -MTJU0_SI2020 -MTJU0_SI2269 -MTJU0_SI760 -MTJU0_SX130 -MTJU0_SX220 -MTJU0_SX310 -MTJU0_SX40 -MTJU0_SX400 -MTKD0_SI1187 -MTKD0_SI1817 -MTKD0_SI630 -MTKD0_SX107 -MTKD0_SX17 -MTKD0_SX197 -MTKD0_SX287 -MTKD0_SX377 -MTKP0_SI1023 -MTKP0_SI2283 -MTKP0_SI454 -MTKP0_SX123 -MTKP0_SX213 -MTKP0_SX303 -MTKP0_SX33 -MTKP0_SX393 -MTLB0_SI1134 -MTLB0_SI1764 -MTLB0_SI504 -MTLB0_SX144 -MTLB0_SX234 -MTLB0_SX324 -MTLB0_SX414 -MTLB0_SX54 -MTLC0_SI1313 -MTLC0_SI1477 -MTLC0_SI847 -MTLC0_SX127 -MTLC0_SX217 -MTLC0_SX307 -MTLC0_SX37 -MTLC0_SX397 -MTML0_SI1065 -MTML0_SI1695 -MTML0_SI2325 -MTML0_SX165 -MTML0_SX255 -MTML0_SX345 -MTML0_SX435 -MTML0_SX75 -MTMN0_SI1064 -MTMN0_SI2324 -MTMN0_SI582 -MTMN0_SX164 -MTMN0_SX254 -MTMN0_SX344 -MTMN0_SX434 -MTMN0_SX74 -MTMT0_SI1118 -MTMT0_SI1748 -MTMT0_SI488 -MTMT0_SX128 -MTMT0_SX218 -MTMT0_SX308 -MTMT0_SX38 -MTMT0_SX398 -MTPF0_SI1235 -MTPF0_SI1865 -MTPF0_SI605 -MTPF0_SX155 -MTPF0_SX245 -MTPF0_SX335 -MTPF0_SX425 -MTPF0_SX65 -MTPG0_SI1383 -MTPG0_SI2013 -MTPG0_SI753 -MTPG0_SX123 -MTPG0_SX213 -MTPG0_SX303 -MTPG0_SX33 -MTPG0_SX393 -MTPP0_SI1508 -MTPP0_SI2138 -MTPP0_SI878 -MTPP0_SX158 -MTPP0_SX248 -MTPP0_SX338 -MTPP0_SX428 -MTPP0_SX68 -MTPR0_SI1600 -MTPR0_SI2230 -MTPR0_SI506 -MTPR0_SX160 -MTPR0_SX250 -MTPR0_SX340 -MTPR0_SX430 -MTPR0_SX70 -MTQC0_SI1441 -MTQC0_SI2071 -MTQC0_SI480 -MTQC0_SX181 -MTQC0_SX271 -MTQC0_SX361 -MTQC0_SX451 -MTQC0_SX91 -MTRC0_SI1623 -MTRC0_SI589 -MTRC0_SI993 -MTRC0_SX170 -MTRC0_SX183 -MTRC0_SX273 -MTRC0_SX363 -MTRC0_SX93 -MTRR0_SI1548 -MTRR0_SI2178 -MTRR0_SI918 -MTRR0_SX108 -MTRR0_SX18 -MTRR0_SX198 -MTRR0_SX288 -MTRR0_SX378 -MTRT0_SI1227 -MTRT0_SI1857 -MTRT0_SI597 -MTRT0_SX147 -MTRT0_SX237 -MTRT0_SX254 -MTRT0_SX417 -MTRT0_SX57 -MTWH1_SI1512 -MTWH1_SI2142 -MTWH1_SI882 -MTWH1_SX162 -MTWH1_SX252 -MTWH1_SX342 -MTWH1_SX432 -MTWH1_SX72 -MTXS0_SI1060 -MTXS0_SI1690 -MTXS0_SI2320 -MTXS0_SX160 -MTXS0_SX250 -MTXS0_SX340 -MTXS0_SX430 -MTXS0_SX70 -MVJH0_SI1556 -MVJH0_SI2186 -MVJH0_SI926 -MVJH0_SX116 -MVJH0_SX206 -MVJH0_SX26 -MVJH0_SX296 -MVJH0_SX386 -MVLO0_SI1147 -MVLO0_SI1777 -MVLO0_SI517 -MVLO0_SX157 -MVLO0_SX247 -MVLO0_SX337 -MVLO0_SX427 -MVLO0_SX67 -MVRW0_SI1485 -MVRW0_SI2115 
-MVRW0_SI855 -MVRW0_SX135 -MVRW0_SX225 -MVRW0_SX315 -MVRW0_SX405 -MVRW0_SX45 -MWAC0_SI1601 -MWAC0_SI2231 -MWAC0_SI971 -MWAC0_SX161 -MWAC0_SX251 -MWAC0_SX341 -MWAC0_SX431 -MWAC0_SX71 -MWAD0_SI1062 -MWAD0_SI1749 -MWAD0_SI2322 -MWAD0_SX162 -MWAD0_SX252 -MWAD0_SX342 -MWAD0_SX432 -MWAD0_SX72 -MWAR0_SI1045 -MWAR0_SI1675 -MWAR0_SI2305 -MWAR0_SX145 -MWAR0_SX235 -MWAR0_SX325 -MWAR0_SX415 -MWAR0_SX55 -MWCH0_SI1622 -MWCH0_SI1895 -MWCH0_SI2252 -MWCH0_SX182 -MWCH0_SX272 -MWCH0_SX362 -MWCH0_SX452 -MWCH0_SX92 -MWDK0_SI1436 -MWDK0_SI2017 -MWDK0_SI806 -MWDK0_SX176 -MWDK0_SX266 -MWDK0_SX356 -MWDK0_SX446 -MWDK0_SX86 -MWEM0_SI1320 -MWEM0_SI1393 -MWEM0_SI1950 -MWEM0_SX150 -MWEM0_SX240 -MWEM0_SX330 -MWEM0_SX420 -MWEM0_SX60 -MWGR0_SI1606 -MWGR0_SI2236 -MWGR0_SI976 -MWGR0_SX166 -MWGR0_SX256 -MWGR0_SX346 -MWGR0_SX436 -MWGR0_SX76 -MWRE0_SI1057 -MWRE0_SI1687 -MWRE0_SI2317 -MWRE0_SX157 -MWRE0_SX247 -MWRE0_SX337 -MWRE0_SX427 -MWRE0_SX67 -MWRP0_SI1443 -MWRP0_SI1525 -MWRP0_SI2073 -MWRP0_SX183 -MWRP0_SX273 -MWRP0_SX3 -MWRP0_SX363 -MWRP0_SX93 -MWSB0_SI1626 -MWSB0_SI2256 -MWSB0_SI996 -MWSB0_SX186 -MWSB0_SX276 -MWSB0_SX366 -MWSB0_SX6 -MWSB0_SX96 -MWSH0_SI1426 -MWSH0_SI2266 -MWSH0_SI796 -MWSH0_SX166 -MWSH0_SX256 -MWSH0_SX346 -MWSH0_SX436 -MWSH0_SX76 -MZMB0_SI1166 -MZMB0_SI1796 -MZMB0_SI536 -MZMB0_SX176 -MZMB0_SX266 -MZMB0_SX356 -MZMB0_SX446 -MZMB0_SX86 diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_matched/valid.uid b/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_matched/valid.uid deleted file mode 100644 index ab5ef381a..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_matched/valid.uid +++ /dev/null @@ -1,400 +0,0 @@ -FADG0_SI1279 -FADG0_SI1909 -FADG0_SI649 -FADG0_SX109 -FADG0_SX19 -FADG0_SX199 -FADG0_SX289 -FADG0_SX379 -FAKS0_SI1573 -FAKS0_SI2203 -FAKS0_SI943 -FAKS0_SX133 -FAKS0_SX223 -FAKS0_SX313 -FAKS0_SX403 -FAKS0_SX43 -FCAL1_SI1403 -FCAL1_SI2033 -FCAL1_SI773 -FCAL1_SX143 -FCAL1_SX233 -FCAL1_SX323 -FCAL1_SX413 -FCAL1_SX53 -FCMH0_SI1454 -FCMH0_SI2084 -FCMH0_SI824 -FCMH0_SX104 -FCMH0_SX14 -FCMH0_SX194 -FCMH0_SX284 -FCMH0_SX374 -FDAC1_SI1474 -FDAC1_SI2104 -FDAC1_SI844 -FDAC1_SX124 -FDAC1_SX214 -FDAC1_SX304 -FDAC1_SX34 -FDAC1_SX394 -FDMS0_SI1218 -FDMS0_SI1502 -FDMS0_SI1848 -FDMS0_SX138 -FDMS0_SX228 -FDMS0_SX318 -FDMS0_SX408 -FDMS0_SX48 -FDRW0_SI1283 -FDRW0_SI1423 -FDRW0_SI653 -FDRW0_SX113 -FDRW0_SX203 -FDRW0_SX23 -FDRW0_SX293 -FDRW0_SX383 -FEDW0_SI1084 -FEDW0_SI1653 -FEDW0_SI1714 -FEDW0_SX184 -FEDW0_SX274 -FEDW0_SX364 -FEDW0_SX4 -FEDW0_SX94 -FGJD0_SI1179 -FGJD0_SI549 -FGJD0_SI818 -FGJD0_SX189 -FGJD0_SX279 -FGJD0_SX369 -FGJD0_SX9 -FGJD0_SX99 -FJEM0_SI1264 -FJEM0_SI1894 -FJEM0_SI634 -FJEM0_SX184 -FJEM0_SX274 -FJEM0_SX364 -FJEM0_SX4 -FJEM0_SX94 -FJMG0_SI1181 -FJMG0_SI1811 -FJMG0_SI551 -FJMG0_SX101 -FJMG0_SX11 -FJMG0_SX191 -FJMG0_SX281 -FJMG0_SX371 -FJSJ0_SI1484 -FJSJ0_SI2114 -FJSJ0_SI854 -FJSJ0_SX134 -FJSJ0_SX224 -FJSJ0_SX314 -FJSJ0_SX404 -FJSJ0_SX44 -FKMS0_SI1490 -FKMS0_SI2120 -FKMS0_SI860 -FKMS0_SX140 -FKMS0_SX230 -FKMS0_SX320 -FKMS0_SX410 -FKMS0_SX50 -FMAH0_SI1289 -FMAH0_SI1919 -FMAH0_SI659 -FMAH0_SX119 -FMAH0_SX209 -FMAH0_SX29 -FMAH0_SX299 -FMAH0_SX389 -FMML0_SI1040 -FMML0_SI1670 -FMML0_SI2300 -FMML0_SX140 -FMML0_SX230 -FMML0_SX320 -FMML0_SX410 -FMML0_SX50 -FNMR0_SI1399 -FNMR0_SI2029 -FNMR0_SI769 -FNMR0_SX139 -FNMR0_SX229 -FNMR0_SX319 -FNMR0_SX409 -FNMR0_SX49 -FREW0_SI1030 -FREW0_SI1280 -FREW0_SI1910 -FREW0_SX110 -FREW0_SX20 -FREW0_SX200 -FREW0_SX290 -FREW0_SX380 -FSEM0_SI1198 -FSEM0_SI1828 -FSEM0_SI568 -FSEM0_SX118 -FSEM0_SX208 -FSEM0_SX28 -FSEM0_SX298 
-FSEM0_SX388 -MAJC0_SI1946 -MAJC0_SI2095 -MAJC0_SI835 -MAJC0_SX115 -MAJC0_SX205 -MAJC0_SX25 -MAJC0_SX295 -MAJC0_SX385 -MBDG0_SI1463 -MBDG0_SI2093 -MBDG0_SI833 -MBDG0_SX113 -MBDG0_SX203 -MBDG0_SX23 -MBDG0_SX293 -MBDG0_SX383 -MBNS0_SI1220 -MBNS0_SI1850 -MBNS0_SI590 -MBNS0_SX140 -MBNS0_SX230 -MBNS0_SX320 -MBNS0_SX410 -MBNS0_SX50 -MBWM0_SI1304 -MBWM0_SI1934 -MBWM0_SI674 -MBWM0_SX134 -MBWM0_SX224 -MBWM0_SX314 -MBWM0_SX404 -MBWM0_SX44 -MCSH0_SI1549 -MCSH0_SI2179 -MCSH0_SI919 -MCSH0_SX109 -MCSH0_SX19 -MCSH0_SX199 -MCSH0_SX289 -MCSH0_SX379 -MDLF0_SI1583 -MDLF0_SI2213 -MDLF0_SI953 -MDLF0_SX143 -MDLF0_SX233 -MDLF0_SX323 -MDLF0_SX413 -MDLF0_SX53 -MDLS0_SI1628 -MDLS0_SI2258 -MDLS0_SI998 -MDLS0_SX188 -MDLS0_SX278 -MDLS0_SX368 -MDLS0_SX8 -MDLS0_SX98 -MDVC0_SI2174 -MDVC0_SI2196 -MDVC0_SI936 -MDVC0_SX126 -MDVC0_SX216 -MDVC0_SX306 -MDVC0_SX36 -MDVC0_SX396 -MERS0_SI1019 -MERS0_SI1649 -MERS0_SI497 -MERS0_SX119 -MERS0_SX209 -MERS0_SX29 -MERS0_SX299 -MERS0_SX389 -MGJF0_SI1901 -MGJF0_SI641 -MGJF0_SI776 -MGJF0_SX101 -MGJF0_SX11 -MGJF0_SX191 -MGJF0_SX281 -MGJF0_SX371 -MGLB0_SI1534 -MGLB0_SI2164 -MGLB0_SI904 -MGLB0_SX184 -MGLB0_SX274 -MGLB0_SX364 -MGLB0_SX4 -MGLB0_SX94 -MGWT0_SI1539 -MGWT0_SI2169 -MGWT0_SI909 -MGWT0_SX189 -MGWT0_SX279 -MGWT0_SX369 -MGWT0_SX9 -MGWT0_SX99 -MJAR0_SI1988 -MJAR0_SI2247 -MJAR0_SI728 -MJAR0_SX188 -MJAR0_SX278 -MJAR0_SX368 -MJAR0_SX8 -MJAR0_SX98 -MJFC0_SI1033 -MJFC0_SI1663 -MJFC0_SI2293 -MJFC0_SX133 -MJFC0_SX223 -MJFC0_SX313 -MJFC0_SX403 -MJFC0_SX43 -MJSW0_SI1010 -MJSW0_SI1640 -MJSW0_SI2270 -MJSW0_SX110 -MJSW0_SX20 -MJSW0_SX200 -MJSW0_SX290 -MJSW0_SX380 -MMDB1_SI1625 -MMDB1_SI2255 -MMDB1_SI995 -MMDB1_SX185 -MMDB1_SX275 -MMDB1_SX365 -MMDB1_SX5 -MMDB1_SX95 -MMDM2_SI1452 -MMDM2_SI1555 -MMDM2_SI2082 -MMDM2_SX102 -MMDM2_SX12 -MMDM2_SX192 -MMDM2_SX282 -MMDM2_SX372 -MMJR0_SI1648 -MMJR0_SI2166 -MMJR0_SI2278 -MMJR0_SX118 -MMJR0_SX208 -MMJR0_SX28 -MMJR0_SX298 -MMJR0_SX388 -MMWH0_SI1089 -MMWH0_SI1301 -MMWH0_SI459 -MMWH0_SX189 -MMWH0_SX279 -MMWH0_SX369 -MMWH0_SX9 -MMWH0_SX99 -MPDF0_SI1542 -MPDF0_SI2172 -MPDF0_SI912 -MPDF0_SX102 -MPDF0_SX12 -MPDF0_SX192 -MPDF0_SX282 -MPDF0_SX372 -MRCS0_SI1223 -MRCS0_SI1853 -MRCS0_SI593 -MRCS0_SX143 -MRCS0_SX233 -MRCS0_SX323 -MRCS0_SX413 -MRCS0_SX53 -MREB0_SI1375 -MREB0_SI2005 -MREB0_SI745 -MREB0_SX115 -MREB0_SX205 -MREB0_SX25 -MREB0_SX295 -MREB0_SX385 -MRJM4_SI1489 -MRJM4_SI2119 -MRJM4_SI859 -MRJM4_SX139 -MRJM4_SX229 -MRJM4_SX319 -MRJM4_SX409 -MRJM4_SX49 -MRJR0_SI1182 -MRJR0_SI1812 -MRJR0_SI2313 -MRJR0_SX102 -MRJR0_SX12 -MRJR0_SX192 -MRJR0_SX282 -MRJR0_SX372 -MROA0_SI1307 -MROA0_SI1970 -MROA0_SI677 -MROA0_SX137 -MROA0_SX227 -MROA0_SX317 -MROA0_SX407 -MROA0_SX47 -MRTK0_SI1093 -MRTK0_SI1723 -MRTK0_SI1750 -MRTK0_SX103 -MRTK0_SX13 -MRTK0_SX193 -MRTK0_SX283 -MRTK0_SX373 -MRWS1_SI1130 -MRWS1_SI1496 -MRWS1_SI500 -MRWS1_SX140 -MRWS1_SX230 -MRWS1_SX320 -MRWS1_SX410 -MRWS1_SX50 -MTAA0_SI1285 -MTAA0_SI1915 -MTAA0_SI596 -MTAA0_SX115 -MTAA0_SX205 -MTAA0_SX25 -MTAA0_SX295 -MTAA0_SX385 -MTDT0_SI1994 -MTDT0_SI2254 -MTDT0_SI994 -MTDT0_SX184 -MTDT0_SX274 -MTDT0_SX364 -MTDT0_SX4 -MTDT0_SX94 -MTEB0_SI1133 -MTEB0_SI2064 -MTEB0_SI503 -MTEB0_SX143 -MTEB0_SX233 -MTEB0_SX323 -MTEB0_SX413 -MTEB0_SX53 -MTHC0_SI1015 -MTHC0_SI1645 -MTHC0_SI2275 -MTHC0_SX115 -MTHC0_SX205 -MTHC0_SX25 -MTHC0_SX295 -MTHC0_SX385 -MWJG0_SI1124 -MWJG0_SI1754 -MWJG0_SI494 -MWJG0_SX134 -MWJG0_SX224 -MWJG0_SX314 -MWJG0_SX404 -MWJG0_SX44 diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_unmatched/test.uid b/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_unmatched/test.uid deleted file 
mode 100644 index e3967e424..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_unmatched/test.uid +++ /dev/null @@ -1,1680 +0,0 @@ -FADG0_SA1 -FADG0_SA2 -FADG0_SI1279 -FADG0_SI1909 -FADG0_SI649 -FADG0_SX109 -FADG0_SX19 -FADG0_SX199 -FADG0_SX289 -FADG0_SX379 -FAKS0_SA1 -FAKS0_SA2 -FAKS0_SI1573 -FAKS0_SI2203 -FAKS0_SI943 -FAKS0_SX133 -FAKS0_SX223 -FAKS0_SX313 -FAKS0_SX403 -FAKS0_SX43 -FASW0_SA1 -FASW0_SA2 -FASW0_SI1550 -FASW0_SI2180 -FASW0_SI920 -FASW0_SX110 -FASW0_SX20 -FASW0_SX200 -FASW0_SX290 -FASW0_SX380 -FAWF0_SA1 -FAWF0_SA2 -FAWF0_SI1000 -FAWF0_SI1630 -FAWF0_SI2260 -FAWF0_SX10 -FAWF0_SX100 -FAWF0_SX190 -FAWF0_SX280 -FAWF0_SX370 -FCAL1_SA1 -FCAL1_SA2 -FCAL1_SI1403 -FCAL1_SI2033 -FCAL1_SI773 -FCAL1_SX143 -FCAL1_SX233 -FCAL1_SX323 -FCAL1_SX413 -FCAL1_SX53 -FCAU0_SA1 -FCAU0_SA2 -FCAU0_SI1037 -FCAU0_SI1667 -FCAU0_SI2297 -FCAU0_SX137 -FCAU0_SX227 -FCAU0_SX317 -FCAU0_SX407 -FCAU0_SX47 -FCFT0_SA1 -FCFT0_SA2 -FCFT0_SI1178 -FCFT0_SI1808 -FCFT0_SI548 -FCFT0_SX188 -FCFT0_SX278 -FCFT0_SX368 -FCFT0_SX8 -FCFT0_SX98 -FCMH0_SA1 -FCMH0_SA2 -FCMH0_SI1454 -FCMH0_SI2084 -FCMH0_SI824 -FCMH0_SX104 -FCMH0_SX14 -FCMH0_SX194 -FCMH0_SX284 -FCMH0_SX374 -FCMH1_SA1 -FCMH1_SA2 -FCMH1_SI1493 -FCMH1_SI2123 -FCMH1_SI863 -FCMH1_SX143 -FCMH1_SX233 -FCMH1_SX323 -FCMH1_SX413 -FCMH1_SX53 -FCMR0_SA1 -FCMR0_SA2 -FCMR0_SI1105 -FCMR0_SI1735 -FCMR0_SI475 -FCMR0_SX115 -FCMR0_SX205 -FCMR0_SX25 -FCMR0_SX295 -FCMR0_SX385 -FCRH0_SA1 -FCRH0_SA2 -FCRH0_SI1088 -FCRH0_SI1718 -FCRH0_SI458 -FCRH0_SX188 -FCRH0_SX278 -FCRH0_SX368 -FCRH0_SX8 -FCRH0_SX98 -FDAC1_SA1 -FDAC1_SA2 -FDAC1_SI1474 -FDAC1_SI2104 -FDAC1_SI844 -FDAC1_SX124 -FDAC1_SX214 -FDAC1_SX304 -FDAC1_SX34 -FDAC1_SX394 -FDHC0_SA1 -FDHC0_SA2 -FDHC0_SI1559 -FDHC0_SI2189 -FDHC0_SI929 -FDHC0_SX119 -FDHC0_SX209 -FDHC0_SX29 -FDHC0_SX299 -FDHC0_SX389 -FDMS0_SA1 -FDMS0_SA2 -FDMS0_SI1218 -FDMS0_SI1502 -FDMS0_SI1848 -FDMS0_SX138 -FDMS0_SX228 -FDMS0_SX318 -FDMS0_SX408 -FDMS0_SX48 -FDRD1_SA1 -FDRD1_SA2 -FDRD1_SI1544 -FDRD1_SI1566 -FDRD1_SI2149 -FDRD1_SX104 -FDRD1_SX14 -FDRD1_SX194 -FDRD1_SX284 -FDRD1_SX374 -FDRW0_SA1 -FDRW0_SA2 -FDRW0_SI1283 -FDRW0_SI1423 -FDRW0_SI653 -FDRW0_SX113 -FDRW0_SX203 -FDRW0_SX23 -FDRW0_SX293 -FDRW0_SX383 -FEDW0_SA1 -FEDW0_SA2 -FEDW0_SI1084 -FEDW0_SI1653 -FEDW0_SI1714 -FEDW0_SX184 -FEDW0_SX274 -FEDW0_SX364 -FEDW0_SX4 -FEDW0_SX94 -FELC0_SA1 -FELC0_SA2 -FELC0_SI1386 -FELC0_SI2016 -FELC0_SI756 -FELC0_SX126 -FELC0_SX216 -FELC0_SX306 -FELC0_SX36 -FELC0_SX396 -FGJD0_SA1 -FGJD0_SA2 -FGJD0_SI1179 -FGJD0_SI549 -FGJD0_SI818 -FGJD0_SX189 -FGJD0_SX279 -FGJD0_SX369 -FGJD0_SX9 -FGJD0_SX99 -FGMD0_SA1 -FGMD0_SA2 -FGMD0_SI1943 -FGMD0_SI2107 -FGMD0_SI683 -FGMD0_SX143 -FGMD0_SX233 -FGMD0_SX323 -FGMD0_SX413 -FGMD0_SX53 -FGWR0_SA1 -FGWR0_SA2 -FGWR0_SI1578 -FGWR0_SI2208 -FGWR0_SI948 -FGWR0_SX138 -FGWR0_SX228 -FGWR0_SX318 -FGWR0_SX408 -FGWR0_SX48 -FHES0_SA1 -FHES0_SA2 -FHES0_SI1109 -FHES0_SI1739 -FHES0_SI479 -FHES0_SX119 -FHES0_SX209 -FHES0_SX29 -FHES0_SX299 -FHES0_SX389 -FHEW0_SA1 -FHEW0_SA2 -FHEW0_SI2023 -FHEW0_SI690 -FHEW0_SI763 -FHEW0_SX133 -FHEW0_SX223 -FHEW0_SX313 -FHEW0_SX403 -FHEW0_SX43 -FISB0_SA1 -FISB0_SA2 -FISB0_SI1579 -FISB0_SI2209 -FISB0_SI949 -FISB0_SX139 -FISB0_SX229 -FISB0_SX319 -FISB0_SX409 -FISB0_SX49 -FJAS0_SA1 -FJAS0_SA2 -FJAS0_SI1400 -FJAS0_SI2030 -FJAS0_SI770 -FJAS0_SX140 -FJAS0_SX230 -FJAS0_SX320 -FJAS0_SX410 -FJAS0_SX50 -FJCS0_SA1 -FJCS0_SA2 -FJCS0_SI1309 -FJCS0_SI1833 -FJCS0_SI1939 -FJCS0_SX139 -FJCS0_SX229 -FJCS0_SX319 -FJCS0_SX409 -FJCS0_SX49 -FJEM0_SA1 -FJEM0_SA2 -FJEM0_SI1264 -FJEM0_SI1894 -FJEM0_SI634 -FJEM0_SX184 -FJEM0_SX274 
-FJEM0_SX364 -FJEM0_SX4 -FJEM0_SX94 -FJLM0_SA1 -FJLM0_SA2 -FJLM0_SI1043 -FJLM0_SI1673 -FJLM0_SI2303 -FJLM0_SX143 -FJLM0_SX233 -FJLM0_SX323 -FJLM0_SX413 -FJLM0_SX53 -FJMG0_SA1 -FJMG0_SA2 -FJMG0_SI1181 -FJMG0_SI1811 -FJMG0_SI551 -FJMG0_SX101 -FJMG0_SX11 -FJMG0_SX191 -FJMG0_SX281 -FJMG0_SX371 -FJRE0_SA1 -FJRE0_SA2 -FJRE0_SI1116 -FJRE0_SI1587 -FJRE0_SI1746 -FJRE0_SX126 -FJRE0_SX216 -FJRE0_SX306 -FJRE0_SX36 -FJRE0_SX396 -FJSA0_SA1 -FJSA0_SA2 -FJSA0_SI1379 -FJSA0_SI2009 -FJSA0_SI749 -FJSA0_SX119 -FJSA0_SX209 -FJSA0_SX29 -FJSA0_SX299 -FJSA0_SX389 -FJSJ0_SA1 -FJSJ0_SA2 -FJSJ0_SI1484 -FJSJ0_SI2114 -FJSJ0_SI854 -FJSJ0_SX134 -FJSJ0_SX224 -FJSJ0_SX314 -FJSJ0_SX404 -FJSJ0_SX44 -FJWB0_SA1 -FJWB0_SA2 -FJWB0_SI1265 -FJWB0_SI635 -FJWB0_SI992 -FJWB0_SX185 -FJWB0_SX275 -FJWB0_SX365 -FJWB0_SX5 -FJWB0_SX95 -FKMS0_SA1 -FKMS0_SA2 -FKMS0_SI1490 -FKMS0_SI2120 -FKMS0_SI860 -FKMS0_SX140 -FKMS0_SX230 -FKMS0_SX320 -FKMS0_SX410 -FKMS0_SX50 -FLAS0_SA1 -FLAS0_SA2 -FLAS0_SI1026 -FLAS0_SI1488 -FLAS0_SI858 -FLAS0_SX138 -FLAS0_SX228 -FLAS0_SX318 -FLAS0_SX408 -FLAS0_SX48 -FLBW0_SA1 -FLBW0_SA2 -FLBW0_SI1219 -FLBW0_SI1849 -FLBW0_SI2253 -FLBW0_SX139 -FLBW0_SX229 -FLBW0_SX319 -FLBW0_SX409 -FLBW0_SX49 -FLKD0_SA1 -FLKD0_SA2 -FLKD0_SI1369 -FLKD0_SI739 -FLKD0_SI894 -FLKD0_SX109 -FLKD0_SX19 -FLKD0_SX199 -FLKD0_SX289 -FLKD0_SX379 -FLNH0_SA1 -FLNH0_SA2 -FLNH0_SI1214 -FLNH0_SI584 -FLNH0_SI941 -FLNH0_SX134 -FLNH0_SX224 -FLNH0_SX314 -FLNH0_SX404 -FLNH0_SX44 -FMAF0_SA1 -FMAF0_SA2 -FMAF0_SI1459 -FMAF0_SI2089 -FMAF0_SI829 -FMAF0_SX109 -FMAF0_SX19 -FMAF0_SX199 -FMAF0_SX289 -FMAF0_SX379 -FMAH0_SA1 -FMAH0_SA2 -FMAH0_SI1289 -FMAH0_SI1919 -FMAH0_SI659 -FMAH0_SX119 -FMAH0_SX209 -FMAH0_SX29 -FMAH0_SX299 -FMAH0_SX389 -FMCM0_SA1 -FMCM0_SA2 -FMCM0_SI1180 -FMCM0_SI1810 -FMCM0_SI550 -FMCM0_SX10 -FMCM0_SX100 -FMCM0_SX190 -FMCM0_SX280 -FMCM0_SX370 -FMGD0_SA1 -FMGD0_SA2 -FMGD0_SI1564 -FMGD0_SI2194 -FMGD0_SI934 -FMGD0_SX124 -FMGD0_SX214 -FMGD0_SX304 -FMGD0_SX34 -FMGD0_SX394 -FMLD0_SA1 -FMLD0_SA2 -FMLD0_SI2185 -FMLD0_SI822 -FMLD0_SI925 -FMLD0_SX115 -FMLD0_SX205 -FMLD0_SX25 -FMLD0_SX295 -FMLD0_SX385 -FMML0_SA1 -FMML0_SA2 -FMML0_SI1040 -FMML0_SI1670 -FMML0_SI2300 -FMML0_SX140 -FMML0_SX230 -FMML0_SX320 -FMML0_SX410 -FMML0_SX50 -FNLP0_SA1 -FNLP0_SA2 -FNLP0_SI1308 -FNLP0_SI1938 -FNLP0_SI678 -FNLP0_SX138 -FNLP0_SX228 -FNLP0_SX318 -FNLP0_SX408 -FNLP0_SX48 -FNMR0_SA1 -FNMR0_SA2 -FNMR0_SI1399 -FNMR0_SI2029 -FNMR0_SI769 -FNMR0_SX139 -FNMR0_SX229 -FNMR0_SX319 -FNMR0_SX409 -FNMR0_SX49 -FPAS0_SA1 -FPAS0_SA2 -FPAS0_SI1272 -FPAS0_SI2204 -FPAS0_SI944 -FPAS0_SX134 -FPAS0_SX224 -FPAS0_SX314 -FPAS0_SX404 -FPAS0_SX44 -FPKT0_SA1 -FPKT0_SA2 -FPKT0_SI1538 -FPKT0_SI2168 -FPKT0_SI908 -FPKT0_SX188 -FPKT0_SX278 -FPKT0_SX368 -FPKT0_SX8 -FPKT0_SX98 -FRAM1_SA1 -FRAM1_SA2 -FRAM1_SI1360 -FRAM1_SI522 -FRAM1_SI730 -FRAM1_SX10 -FRAM1_SX100 -FRAM1_SX190 -FRAM1_SX280 -FRAM1_SX370 -FREW0_SA1 -FREW0_SA2 -FREW0_SI1030 -FREW0_SI1280 -FREW0_SI1910 -FREW0_SX110 -FREW0_SX20 -FREW0_SX200 -FREW0_SX290 -FREW0_SX380 -FRNG0_SA1 -FRNG0_SA2 -FRNG0_SI1355 -FRNG0_SI1985 -FRNG0_SI725 -FRNG0_SX185 -FRNG0_SX275 -FRNG0_SX365 -FRNG0_SX5 -FRNG0_SX95 -FSEM0_SA1 -FSEM0_SA2 -FSEM0_SI1198 -FSEM0_SI1828 -FSEM0_SI568 -FSEM0_SX118 -FSEM0_SX208 -FSEM0_SX28 -FSEM0_SX298 -FSEM0_SX388 -FSLB1_SA1 -FSLB1_SA2 -FSLB1_SI1904 -FSLB1_SI644 -FSLB1_SI891 -FSLB1_SX104 -FSLB1_SX14 -FSLB1_SX194 -FSLB1_SX284 -FSLB1_SX374 -FSXA0_SA1 -FSXA0_SA2 -FSXA0_SI1108 -FSXA0_SI1846 -FSXA0_SI478 -FSXA0_SX118 -FSXA0_SX208 -FSXA0_SX28 -FSXA0_SX298 -FSXA0_SX388 -FTLH0_SA1 -FTLH0_SA2 -FTLH0_SI1009 -FTLH0_SI1390 -FTLH0_SI1639 -FTLH0_SX109 -FTLH0_SX19 
-FTLH0_SX199 -FTLH0_SX289 -FTLH0_SX379 -FUTB0_SA1 -FUTB0_SA2 -FUTB0_SI1204 -FUTB0_SI1330 -FUTB0_SI1834 -FUTB0_SX124 -FUTB0_SX214 -FUTB0_SX304 -FUTB0_SX34 -FUTB0_SX394 -MABW0_SA1 -MABW0_SA2 -MABW0_SI1230 -MABW0_SI1664 -MABW0_SI2294 -MABW0_SX134 -MABW0_SX224 -MABW0_SX314 -MABW0_SX404 -MABW0_SX44 -MAHH0_SA1 -MAHH0_SA2 -MAHH0_SI1294 -MAHH0_SI1924 -MAHH0_SI664 -MAHH0_SX124 -MAHH0_SX214 -MAHH0_SX304 -MAHH0_SX34 -MAHH0_SX394 -MAJC0_SA1 -MAJC0_SA2 -MAJC0_SI1946 -MAJC0_SI2095 -MAJC0_SI835 -MAJC0_SX115 -MAJC0_SX205 -MAJC0_SX25 -MAJC0_SX295 -MAJC0_SX385 -MBDG0_SA1 -MBDG0_SA2 -MBDG0_SI1463 -MBDG0_SI2093 -MBDG0_SI833 -MBDG0_SX113 -MBDG0_SX203 -MBDG0_SX23 -MBDG0_SX293 -MBDG0_SX383 -MBJK0_SA1 -MBJK0_SA2 -MBJK0_SI1175 -MBJK0_SI2128 -MBJK0_SI545 -MBJK0_SX185 -MBJK0_SX275 -MBJK0_SX365 -MBJK0_SX5 -MBJK0_SX95 -MBNS0_SA1 -MBNS0_SA2 -MBNS0_SI1220 -MBNS0_SI1850 -MBNS0_SI590 -MBNS0_SX140 -MBNS0_SX230 -MBNS0_SX320 -MBNS0_SX410 -MBNS0_SX50 -MBPM0_SA1 -MBPM0_SA2 -MBPM0_SI1577 -MBPM0_SI1584 -MBPM0_SI947 -MBPM0_SX137 -MBPM0_SX227 -MBPM0_SX317 -MBPM0_SX407 -MBPM0_SX47 -MBWM0_SA1 -MBWM0_SA2 -MBWM0_SI1304 -MBWM0_SI1934 -MBWM0_SI674 -MBWM0_SX134 -MBWM0_SX224 -MBWM0_SX314 -MBWM0_SX404 -MBWM0_SX44 -MCCS0_SA1 -MCCS0_SA2 -MCCS0_SI1469 -MCCS0_SI2099 -MCCS0_SI839 -MCCS0_SX119 -MCCS0_SX209 -MCCS0_SX29 -MCCS0_SX299 -MCCS0_SX389 -MCEM0_SA1 -MCEM0_SA2 -MCEM0_SI1398 -MCEM0_SI2028 -MCEM0_SI768 -MCEM0_SX138 -MCEM0_SX228 -MCEM0_SX318 -MCEM0_SX408 -MCEM0_SX48 -MCHH0_SA1 -MCHH0_SA2 -MCHH0_SI1004 -MCHH0_SI1634 -MCHH0_SI530 -MCHH0_SX104 -MCHH0_SX14 -MCHH0_SX194 -MCHH0_SX284 -MCHH0_SX374 -MCMB0_SA1 -MCMB0_SA2 -MCMB0_SI1268 -MCMB0_SI1898 -MCMB0_SI638 -MCMB0_SX188 -MCMB0_SX278 -MCMB0_SX368 -MCMB0_SX8 -MCMB0_SX98 -MCMJ0_SA1 -MCMJ0_SA2 -MCMJ0_SI1094 -MCMJ0_SI464 -MCMJ0_SI602 -MCMJ0_SX104 -MCMJ0_SX14 -MCMJ0_SX194 -MCMJ0_SX284 -MCMJ0_SX374 -MCRC0_SA1 -MCRC0_SA2 -MCRC0_SI1092 -MCRC0_SI1722 -MCRC0_SI462 -MCRC0_SX102 -MCRC0_SX12 -MCRC0_SX192 -MCRC0_SX282 -MCRC0_SX372 -MCSH0_SA1 -MCSH0_SA2 -MCSH0_SI1549 -MCSH0_SI2179 -MCSH0_SI919 -MCSH0_SX109 -MCSH0_SX19 -MCSH0_SX199 -MCSH0_SX289 -MCSH0_SX379 -MCTT0_SA1 -MCTT0_SA2 -MCTT0_SI1144 -MCTT0_SI2188 -MCTT0_SI928 -MCTT0_SX118 -MCTT0_SX208 -MCTT0_SX28 -MCTT0_SX298 -MCTT0_SX388 -MCTW0_SA1 -MCTW0_SA2 -MCTW0_SI1373 -MCTW0_SI2003 -MCTW0_SI743 -MCTW0_SX113 -MCTW0_SX203 -MCTW0_SX23 -MCTW0_SX293 -MCTW0_SX383 -MDAB0_SA1 -MDAB0_SA2 -MDAB0_SI1039 -MDAB0_SI1669 -MDAB0_SI2299 -MDAB0_SX139 -MDAB0_SX229 -MDAB0_SX319 -MDAB0_SX409 -MDAB0_SX49 -MDAC2_SA1 -MDAC2_SA2 -MDAC2_SI2259 -MDAC2_SI560 -MDAC2_SI999 -MDAC2_SX189 -MDAC2_SX279 -MDAC2_SX369 -MDAC2_SX9 -MDAC2_SX99 -MDAW1_SA1 -MDAW1_SA2 -MDAW1_SI1453 -MDAW1_SI2083 -MDAW1_SI823 -MDAW1_SX103 -MDAW1_SX13 -MDAW1_SX193 -MDAW1_SX283 -MDAW1_SX373 -MDBB0_SA1 -MDBB0_SA2 -MDBB0_SI1195 -MDBB0_SI1825 -MDBB0_SI565 -MDBB0_SX115 -MDBB0_SX205 -MDBB0_SX25 -MDBB0_SX295 -MDBB0_SX385 -MDLD0_SA1 -MDLD0_SA2 -MDLD0_SI1543 -MDLD0_SI2173 -MDLD0_SI913 -MDLD0_SX103 -MDLD0_SX13 -MDLD0_SX193 -MDLD0_SX283 -MDLD0_SX373 -MDLF0_SA1 -MDLF0_SA2 -MDLF0_SI1583 -MDLF0_SI2213 -MDLF0_SI953 -MDLF0_SX143 -MDLF0_SX233 -MDLF0_SX323 -MDLF0_SX413 -MDLF0_SX53 -MDLS0_SA1 -MDLS0_SA2 -MDLS0_SI1628 -MDLS0_SI2258 -MDLS0_SI998 -MDLS0_SX188 -MDLS0_SX278 -MDLS0_SX368 -MDLS0_SX8 -MDLS0_SX98 -MDRB0_SA1 -MDRB0_SA2 -MDRB0_SI1174 -MDRB0_SI2109 -MDRB0_SI544 -MDRB0_SX184 -MDRB0_SX274 -MDRB0_SX364 -MDRB0_SX4 -MDRB0_SX94 -MDRM0_SA1 -MDRM0_SA2 -MDRM0_SI1013 -MDRM0_SI1643 -MDRM0_SI2273 -MDRM0_SX113 -MDRM0_SX203 -MDRM0_SX23 -MDRM0_SX293 -MDRM0_SX383 -MDSC0_SA1 -MDSC0_SA2 -MDSC0_SI1038 -MDSC0_SI2298 -MDSC0_SI967 -MDSC0_SX138 -MDSC0_SX228 
-MDSC0_SX318 -MDSC0_SX408 -MDSC0_SX48 -MDVC0_SA1 -MDVC0_SA2 -MDVC0_SI2174 -MDVC0_SI2196 -MDVC0_SI936 -MDVC0_SX126 -MDVC0_SX216 -MDVC0_SX306 -MDVC0_SX36 -MDVC0_SX396 -MDWA0_SA1 -MDWA0_SA2 -MDWA0_SI1146 -MDWA0_SI1445 -MDWA0_SI519 -MDWA0_SX185 -MDWA0_SX275 -MDWA0_SX365 -MDWA0_SX5 -MDWA0_SX95 -MDWK0_SA1 -MDWK0_SA2 -MDWK0_SI1540 -MDWK0_SI2170 -MDWK0_SI910 -MDWK0_SX10 -MDWK0_SX100 -MDWK0_SX190 -MDWK0_SX280 -MDWK0_SX370 -MERS0_SA1 -MERS0_SA2 -MERS0_SI1019 -MERS0_SI1649 -MERS0_SI497 -MERS0_SX119 -MERS0_SX209 -MERS0_SX29 -MERS0_SX299 -MERS0_SX389 -MESD0_SA1 -MESD0_SA2 -MESD0_SI1002 -MESD0_SI1632 -MESD0_SI2262 -MESD0_SX102 -MESD0_SX12 -MESD0_SX192 -MESD0_SX282 -MESD0_SX372 -MFGK0_SA1 -MFGK0_SA2 -MFGK0_SI1451 -MFGK0_SI1744 -MFGK0_SI484 -MFGK0_SX124 -MFGK0_SX214 -MFGK0_SX304 -MFGK0_SX34 -MFGK0_SX394 -MGJF0_SA1 -MGJF0_SA2 -MGJF0_SI1901 -MGJF0_SI641 -MGJF0_SI776 -MGJF0_SX101 -MGJF0_SX11 -MGJF0_SX191 -MGJF0_SX281 -MGJF0_SX371 -MGLB0_SA1 -MGLB0_SA2 -MGLB0_SI1534 -MGLB0_SI2164 -MGLB0_SI904 -MGLB0_SX184 -MGLB0_SX274 -MGLB0_SX364 -MGLB0_SX4 -MGLB0_SX94 -MGMM0_SA1 -MGMM0_SA2 -MGMM0_SI1129 -MGMM0_SI1759 -MGMM0_SI499 -MGMM0_SX139 -MGMM0_SX229 -MGMM0_SX319 -MGMM0_SX409 -MGMM0_SX49 -MGRT0_SA1 -MGRT0_SA2 -MGRT0_SI1450 -MGRT0_SI2080 -MGRT0_SI820 -MGRT0_SX10 -MGRT0_SX100 -MGRT0_SX190 -MGRT0_SX280 -MGRT0_SX370 -MGWT0_SA1 -MGWT0_SA2 -MGWT0_SI1539 -MGWT0_SI2169 -MGWT0_SI909 -MGWT0_SX189 -MGWT0_SX279 -MGWT0_SX369 -MGWT0_SX9 -MGWT0_SX99 -MHPG0_SA1 -MHPG0_SA2 -MHPG0_SI1090 -MHPG0_SI1720 -MHPG0_SI460 -MHPG0_SX10 -MHPG0_SX100 -MHPG0_SX190 -MHPG0_SX280 -MHPG0_SX370 -MJAR0_SA1 -MJAR0_SA2 -MJAR0_SI1988 -MJAR0_SI2247 -MJAR0_SI728 -MJAR0_SX188 -MJAR0_SX278 -MJAR0_SX368 -MJAR0_SX8 -MJAR0_SX98 -MJBR0_SA1 -MJBR0_SA2 -MJBR0_SI1001 -MJBR0_SI1631 -MJBR0_SI2261 -MJBR0_SX101 -MJBR0_SX11 -MJBR0_SX191 -MJBR0_SX281 -MJBR0_SX371 -MJDH0_SA1 -MJDH0_SA2 -MJDH0_SI1354 -MJDH0_SI1984 -MJDH0_SI724 -MJDH0_SX184 -MJDH0_SX274 -MJDH0_SX364 -MJDH0_SX4 -MJDH0_SX94 -MJDM1_SA1 -MJDM1_SA2 -MJDM1_SI1085 -MJDM1_SI1715 -MJDM1_SI455 -MJDM1_SX185 -MJDM1_SX275 -MJDM1_SX365 -MJDM1_SX5 -MJDM1_SX95 -MJES0_SA1 -MJES0_SA2 -MJES0_SI1384 -MJES0_SI2014 -MJES0_SI754 -MJES0_SX124 -MJES0_SX214 -MJES0_SX304 -MJES0_SX34 -MJES0_SX394 -MJFC0_SA1 -MJFC0_SA2 -MJFC0_SI1033 -MJFC0_SI1663 -MJFC0_SI2293 -MJFC0_SX133 -MJFC0_SX223 -MJFC0_SX313 -MJFC0_SX403 -MJFC0_SX43 -MJJG0_SA1 -MJJG0_SA2 -MJJG0_SI1003 -MJJG0_SI1633 -MJJG0_SI2263 -MJJG0_SX103 -MJJG0_SX13 -MJJG0_SX193 -MJJG0_SX283 -MJJG0_SX373 -MJLN0_SA1 -MJLN0_SA2 -MJLN0_SI1449 -MJLN0_SI2079 -MJLN0_SI819 -MJLN0_SX189 -MJLN0_SX279 -MJLN0_SX369 -MJLN0_SX9 -MJLN0_SX99 -MJMP0_SA1 -MJMP0_SA2 -MJMP0_SI1535 -MJMP0_SI1791 -MJMP0_SI905 -MJMP0_SX185 -MJMP0_SX275 -MJMP0_SX365 -MJMP0_SX5 -MJMP0_SX95 -MJRF0_SA1 -MJRF0_SA2 -MJRF0_SI1114 -MJRF0_SI2081 -MJRF0_SI821 -MJRF0_SX101 -MJRF0_SX11 -MJRF0_SX191 -MJRF0_SX281 -MJRF0_SX371 -MJSW0_SA1 -MJSW0_SA2 -MJSW0_SI1010 -MJSW0_SI1640 -MJSW0_SI2270 -MJSW0_SX110 -MJSW0_SX20 -MJSW0_SX200 -MJSW0_SX290 -MJSW0_SX380 -MJTC0_SA1 -MJTC0_SA2 -MJTC0_SI1460 -MJTC0_SI2090 -MJTC0_SI830 -MJTC0_SX110 -MJTC0_SX20 -MJTC0_SX200 -MJTC0_SX290 -MJTC0_SX380 -MJTH0_SA1 -MJTH0_SA2 -MJTH0_SI1296 -MJTH0_SI1926 -MJTH0_SI666 -MJTH0_SX126 -MJTH0_SX216 -MJTH0_SX306 -MJTH0_SX36 -MJTH0_SX396 -MJVW0_SA1 -MJVW0_SA2 -MJVW0_SI1733 -MJVW0_SI1758 -MJVW0_SI473 -MJVW0_SX113 -MJVW0_SX203 -MJVW0_SX23 -MJVW0_SX293 -MJVW0_SX383 -MKCH0_SA1 -MKCH0_SA2 -MKCH0_SI1378 -MKCH0_SI1425 -MKCH0_SI2008 -MKCH0_SX118 -MKCH0_SX208 -MKCH0_SX28 -MKCH0_SX298 -MKCH0_SX388 -MKCL0_SA1 -MKCL0_SA2 -MKCL0_SI1091 -MKCL0_SI1721 -MKCL0_SI461 -MKCL0_SX101 -MKCL0_SX11 
-MKCL0_SX191 -MKCL0_SX281 -MKCL0_SX371 -MKDR0_SA1 -MKDR0_SA2 -MKDR0_SI1273 -MKDR0_SI1903 -MKDR0_SI643 -MKDR0_SX103 -MKDR0_SX13 -MKDR0_SX193 -MKDR0_SX283 -MKDR0_SX373 -MKJL0_SA1 -MKJL0_SA2 -MKJL0_SI1100 -MKJL0_SI1730 -MKJL0_SI470 -MKJL0_SX110 -MKJL0_SX20 -MKJL0_SX200 -MKJL0_SX290 -MKJL0_SX380 -MKLT0_SA1 -MKLT0_SA2 -MKLT0_SI1213 -MKLT0_SI1843 -MKLT0_SI583 -MKLT0_SX133 -MKLT0_SX223 -MKLT0_SX313 -MKLT0_SX403 -MKLT0_SX43 -MLIH0_SA1 -MLIH0_SA2 -MLIH0_SI1183 -MLIH0_SI1813 -MLIH0_SI553 -MLIH0_SX103 -MLIH0_SX13 -MLIH0_SX193 -MLIH0_SX283 -MLIH0_SX373 -MLJB0_SA1 -MLJB0_SA2 -MLJB0_SI1310 -MLJB0_SI1940 -MLJB0_SI680 -MLJB0_SX140 -MLJB0_SX230 -MLJB0_SX320 -MLJB0_SX410 -MLJB0_SX50 -MLLL0_SA1 -MLLL0_SA2 -MLLL0_SI1363 -MLLL0_SI1993 -MLLL0_SI733 -MLLL0_SX103 -MLLL0_SX13 -MLLL0_SX193 -MLLL0_SX283 -MLLL0_SX373 -MLNT0_SA1 -MLNT0_SA2 -MLNT0_SI1574 -MLNT0_SI1902 -MLNT0_SI642 -MLNT0_SX102 -MLNT0_SX12 -MLNT0_SX192 -MLNT0_SX282 -MLNT0_SX372 -MMAB0_SA1 -MMAB0_SA2 -MMAB0_SI1362 -MMAB0_SI1992 -MMAB0_SI732 -MMAB0_SX102 -MMAB0_SX12 -MMAB0_SX192 -MMAB0_SX282 -MMAB0_SX372 -MMDB1_SA1 -MMDB1_SA2 -MMDB1_SI1625 -MMDB1_SI2255 -MMDB1_SI995 -MMDB1_SX185 -MMDB1_SX275 -MMDB1_SX365 -MMDB1_SX5 -MMDB1_SX95 -MMDH0_SA1 -MMDH0_SA2 -MMDH0_SI1656 -MMDH0_SI2118 -MMDH0_SI2286 -MMDH0_SX126 -MMDH0_SX216 -MMDH0_SX306 -MMDH0_SX36 -MMDH0_SX396 -MMDM2_SA1 -MMDM2_SA2 -MMDM2_SI1452 -MMDM2_SI1555 -MMDM2_SI2082 -MMDM2_SX102 -MMDM2_SX12 -MMDM2_SX192 -MMDM2_SX282 -MMDM2_SX372 -MMJR0_SA1 -MMJR0_SA2 -MMJR0_SI1648 -MMJR0_SI2166 -MMJR0_SI2278 -MMJR0_SX118 -MMJR0_SX208 -MMJR0_SX28 -MMJR0_SX298 -MMJR0_SX388 -MMWH0_SA1 -MMWH0_SA2 -MMWH0_SI1089 -MMWH0_SI1301 -MMWH0_SI459 -MMWH0_SX189 -MMWH0_SX279 -MMWH0_SX369 -MMWH0_SX9 -MMWH0_SX99 -MNJM0_SA1 -MNJM0_SA2 -MNJM0_SI1580 -MNJM0_SI2210 -MNJM0_SI950 -MNJM0_SX140 -MNJM0_SX230 -MNJM0_SX320 -MNJM0_SX410 -MNJM0_SX50 -MNLS0_SA1 -MNLS0_SA2 -MNLS0_SI1483 -MNLS0_SI1610 -MNLS0_SI853 -MNLS0_SX133 -MNLS0_SX223 -MNLS0_SX313 -MNLS0_SX403 -MNLS0_SX43 -MPAB0_SA1 -MPAB0_SA2 -MPAB0_SI1103 -MPAB0_SI1128 -MPAB0_SI498 -MPAB0_SX138 -MPAB0_SX228 -MPAB0_SX318 -MPAB0_SX408 -MPAB0_SX48 -MPAM0_SA1 -MPAM0_SA2 -MPAM0_SI1189 -MPAM0_SI1819 -MPAM0_SI1961 -MPAM0_SX109 -MPAM0_SX19 -MPAM0_SX199 -MPAM0_SX289 -MPAM0_SX379 -MPAM1_SA1 -MPAM1_SA2 -MPAM1_SI1029 -MPAM1_SI1836 -MPAM1_SI576 -MPAM1_SX126 -MPAM1_SX216 -MPAM1_SX306 -MPAM1_SX36 -MPAM1_SX396 -MPCS0_SA1 -MPCS0_SA2 -MPCS0_SI1359 -MPCS0_SI1989 -MPCS0_SI729 -MPCS0_SX189 -MPCS0_SX279 -MPCS0_SX369 -MPCS0_SX9 -MPCS0_SX99 -MPDF0_SA1 -MPDF0_SA2 -MPDF0_SI1542 -MPDF0_SI2172 -MPDF0_SI912 -MPDF0_SX102 -MPDF0_SX12 -MPDF0_SX192 -MPDF0_SX282 -MPDF0_SX372 -MPGL0_SA1 -MPGL0_SA2 -MPGL0_SI1099 -MPGL0_SI1729 -MPGL0_SI469 -MPGL0_SX109 -MPGL0_SX19 -MPGL0_SX199 -MPGL0_SX289 -MPGL0_SX379 -MPLB0_SA1 -MPLB0_SA2 -MPLB0_SI1394 -MPLB0_SI2024 -MPLB0_SI764 -MPLB0_SX134 -MPLB0_SX224 -MPLB0_SX314 -MPLB0_SX404 -MPLB0_SX44 -MPWM0_SA1 -MPWM0_SA2 -MPWM0_SI1127 -MPWM0_SI1757 -MPWM0_SI2279 -MPWM0_SX137 -MPWM0_SX227 -MPWM0_SX317 -MPWM0_SX407 -MPWM0_SX47 -MRCS0_SA1 -MRCS0_SA2 -MRCS0_SI1223 -MRCS0_SI1853 -MRCS0_SI593 -MRCS0_SX143 -MRCS0_SX233 -MRCS0_SX323 -MRCS0_SX413 -MRCS0_SX53 -MRCZ0_SA1 -MRCZ0_SA2 -MRCZ0_SI1541 -MRCZ0_SI2171 -MRCZ0_SI911 -MRCZ0_SX101 -MRCZ0_SX11 -MRCZ0_SX191 -MRCZ0_SX281 -MRCZ0_SX371 -MREB0_SA1 -MREB0_SA2 -MREB0_SI1375 -MREB0_SI2005 -MREB0_SI745 -MREB0_SX115 -MREB0_SX205 -MREB0_SX25 -MREB0_SX295 -MREB0_SX385 -MRES0_SA1 -MRES0_SA2 -MRES0_SI1217 -MRES0_SI1847 -MRES0_SI587 -MRES0_SX137 -MRES0_SX227 -MRES0_SX317 -MRES0_SX407 -MRES0_SX47 -MRGG0_SA1 -MRGG0_SA2 -MRGG0_SI1199 -MRGG0_SI1829 -MRGG0_SI569 -MRGG0_SX119 
-MRGG0_SX209 -MRGG0_SX29 -MRGG0_SX299 -MRGG0_SX389 -MRJM3_SA1 -MRJM3_SA2 -MRJM3_SI1448 -MRJM3_SI1809 -MRJM3_SI2078 -MRJM3_SX188 -MRJM3_SX278 -MRJM3_SX368 -MRJM3_SX8 -MRJM3_SX98 -MRJM4_SA1 -MRJM4_SA2 -MRJM4_SI1489 -MRJM4_SI2119 -MRJM4_SI859 -MRJM4_SX139 -MRJM4_SX229 -MRJM4_SX319 -MRJM4_SX409 -MRJM4_SX49 -MRJO0_SA1 -MRJO0_SA2 -MRJO0_SI1364 -MRJO0_SI1624 -MRJO0_SI734 -MRJO0_SX104 -MRJO0_SX14 -MRJO0_SX194 -MRJO0_SX284 -MRJO0_SX374 -MRJR0_SA1 -MRJR0_SA2 -MRJR0_SI1182 -MRJR0_SI1812 -MRJR0_SI2313 -MRJR0_SX102 -MRJR0_SX12 -MRJR0_SX192 -MRJR0_SX282 -MRJR0_SX372 -MRJS0_SA1 -MRJS0_SA2 -MRJS0_SI1444 -MRJS0_SI1523 -MRJS0_SI2074 -MRJS0_SX184 -MRJS0_SX274 -MRJS0_SX364 -MRJS0_SX4 -MRJS0_SX94 -MRKO0_SA1 -MRKO0_SA2 -MRKO0_SI1397 -MRKO0_SI2027 -MRKO0_SI767 -MRKO0_SX137 -MRKO0_SX227 -MRKO0_SX317 -MRKO0_SX407 -MRKO0_SX47 -MRMS1_SA1 -MRMS1_SA2 -MRMS1_SI1487 -MRMS1_SI2117 -MRMS1_SI857 -MRMS1_SX137 -MRMS1_SX227 -MRMS1_SX317 -MRMS1_SX407 -MRMS1_SX47 -MROA0_SA1 -MROA0_SA2 -MROA0_SI1307 -MROA0_SI1970 -MROA0_SI677 -MROA0_SX137 -MROA0_SX227 -MROA0_SX317 -MROA0_SX407 -MROA0_SX47 -MRPC0_SA1 -MRPC0_SA2 -MRPC0_SI1753 -MRPC0_SI493 -MRPC0_SI933 -MRPC0_SX133 -MRPC0_SX223 -MRPC0_SX313 -MRPC0_SX403 -MRPC0_SX43 -MRPP0_SA1 -MRPP0_SA2 -MRPP0_SI1184 -MRPP0_SI1814 -MRPP0_SI554 -MRPP0_SX104 -MRPP0_SX14 -MRPP0_SX194 -MRPP0_SX284 -MRPP0_SX374 -MRRK0_SA1 -MRRK0_SA2 -MRRK0_SI1288 -MRRK0_SI1716 -MRRK0_SI1918 -MRRK0_SX118 -MRRK0_SX208 -MRRK0_SX28 -MRRK0_SX298 -MRRK0_SX388 -MRTK0_SA1 -MRTK0_SA2 -MRTK0_SI1093 -MRTK0_SI1723 -MRTK0_SI1750 -MRTK0_SX103 -MRTK0_SX13 -MRTK0_SX193 -MRTK0_SX283 -MRTK0_SX373 -MRWS1_SA1 -MRWS1_SA2 -MRWS1_SI1130 -MRWS1_SI1496 -MRWS1_SI500 -MRWS1_SX140 -MRWS1_SX230 -MRWS1_SX320 -MRWS1_SX410 -MRWS1_SX50 -MSFH1_SA1 -MSFH1_SA2 -MSFH1_SI1270 -MSFH1_SI1900 -MSFH1_SI640 -MSFH1_SX10 -MSFH1_SX100 -MSFH1_SX190 -MSFH1_SX280 -MSFH1_SX370 -MSJS1_SA1 -MSJS1_SA2 -MSJS1_SI1899 -MSJS1_SI639 -MSJS1_SI869 -MSJS1_SX189 -MSJS1_SX279 -MSJS1_SX369 -MSJS1_SX9 -MSJS1_SX99 -MSLB0_SA1 -MSLB0_SA2 -MSLB0_SI1193 -MSLB0_SI1823 -MSLB0_SI563 -MSLB0_SX113 -MSLB0_SX203 -MSLB0_SX23 -MSLB0_SX293 -MSLB0_SX383 -MSTK0_SA1 -MSTK0_SA2 -MSTK0_SI1024 -MSTK0_SI2222 -MSTK0_SI2284 -MSTK0_SX124 -MSTK0_SX214 -MSTK0_SX304 -MSTK0_SX34 -MSTK0_SX394 -MTAA0_SA1 -MTAA0_SA2 -MTAA0_SI1285 -MTAA0_SI1915 -MTAA0_SI596 -MTAA0_SX115 -MTAA0_SX205 -MTAA0_SX25 -MTAA0_SX295 -MTAA0_SX385 -MTAS1_SA1 -MTAS1_SA2 -MTAS1_SI1473 -MTAS1_SI2098 -MTAS1_SI838 -MTAS1_SX118 -MTAS1_SX208 -MTAS1_SX28 -MTAS1_SX298 -MTAS1_SX388 -MTDT0_SA1 -MTDT0_SA2 -MTDT0_SI1994 -MTDT0_SI2254 -MTDT0_SI994 -MTDT0_SX184 -MTDT0_SX274 -MTDT0_SX364 -MTDT0_SX4 -MTDT0_SX94 -MTEB0_SA1 -MTEB0_SA2 -MTEB0_SI1133 -MTEB0_SI2064 -MTEB0_SI503 -MTEB0_SX143 -MTEB0_SX233 -MTEB0_SX323 -MTEB0_SX413 -MTEB0_SX53 -MTHC0_SA1 -MTHC0_SA2 -MTHC0_SI1015 -MTHC0_SI1645 -MTHC0_SI2275 -MTHC0_SX115 -MTHC0_SX205 -MTHC0_SX25 -MTHC0_SX295 -MTHC0_SX385 -MTLS0_SA1 -MTLS0_SA2 -MTLS0_SI1370 -MTLS0_SI2000 -MTLS0_SI740 -MTLS0_SX110 -MTLS0_SX20 -MTLS0_SX200 -MTLS0_SX290 -MTLS0_SX380 -MTMR0_SA1 -MTMR0_SA2 -MTMR0_SI1303 -MTMR0_SI1933 -MTMR0_SI673 -MTMR0_SX133 -MTMR0_SX223 -MTMR0_SX313 -MTMR0_SX403 -MTMR0_SX43 -MTWH0_SA1 -MTWH0_SA2 -MTWH0_SI1190 -MTWH0_SI1629 -MTWH0_SI1820 -MTWH0_SX110 -MTWH0_SX20 -MTWH0_SX200 -MTWH0_SX290 -MTWH0_SX380 -MWBT0_SA1 -MWBT0_SA2 -MWBT0_SI1553 -MWBT0_SI2183 -MWBT0_SI923 -MWBT0_SX113 -MWBT0_SX203 -MWBT0_SX23 -MWBT0_SX293 -MWBT0_SX383 -MWEW0_SA1 -MWEW0_SA2 -MWEW0_SI1361 -MWEW0_SI1991 -MWEW0_SI731 -MWEW0_SX101 -MWEW0_SX11 -MWEW0_SX191 -MWEW0_SX281 -MWEW0_SX371 -MWJG0_SA1 -MWJG0_SA2 -MWJG0_SI1124 -MWJG0_SI1754 -MWJG0_SI494 -MWJG0_SX134 
-MWJG0_SX224 -MWJG0_SX314 -MWJG0_SX404 -MWJG0_SX44 -MWVW0_SA1 -MWVW0_SA2 -MWVW0_SI1476 -MWVW0_SI2106 -MWVW0_SI846 -MWVW0_SX126 -MWVW0_SX216 -MWVW0_SX306 -MWVW0_SX36 -MWVW0_SX396 diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_unmatched/train.uid b/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_unmatched/train.uid deleted file mode 100644 index 35b02e7f8..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_unmatched/train.uid +++ /dev/null @@ -1,3000 +0,0 @@ -FAEM0_SA1 -FAEM0_SA2 -FAEM0_SI2022 -FAEM0_SX132 -FAEM0_SX222 -FAEM0_SX312 -FAEM0_SX402 -FAJW0_SA2 -FAJW0_SI1893 -FAJW0_SX183 -FAJW0_SX273 -FAJW0_SX363 -FALK0_SA1 -FALK0_SA2 -FALK0_SI1086 -FALK0_SI456 -FALK0_SX276 -FALK0_SX366 -FALK0_SX96 -FALR0_SA1 -FALR0_SA2 -FALR0_SI1955 -FALR0_SI695 -FALR0_SX155 -FALR0_SX245 -FALR0_SX425 -FALR0_SX65 -FAPB0_SA1 -FAPB0_SA2 -FAPB0_SI1693 -FAPB0_SX163 -FAPB0_SX253 -FAPB0_SX343 -FAPB0_SX73 -FBAS0_SA2 -FBAS0_SI1387 -FBAS0_SX127 -FBAS0_SX307 -FBAS0_SX37 -FBAS0_SX397 -FBCG1_SA2 -FBCG1_SI1612 -FBCG1_SI2242 -FBCG1_SI982 -FBCG1_SX262 -FBCG1_SX82 -FBCH0_SA1 -FBCH0_SA2 -FBCH0_SI1586 -FBCH0_SI956 -FBCH0_SX146 -FBCH0_SX326 -FBCH0_SX56 -FBJL0_SA1 -FBJL0_SA2 -FBJL0_SI1552 -FBJL0_SI2182 -FBJL0_SX112 -FBJL0_SX202 -FBJL0_SX22 -FBJL0_SX292 -FBJL0_SX382 -FBLV0_SA2 -FBLV0_SI2318 -FBLV0_SX158 -FBLV0_SX248 -FBLV0_SX428 -FBMH0_SA2 -FBMH0_SI1766 -FBMH0_SX146 -FBMH0_SX236 -FBMH0_SX326 -FBMH0_SX416 -FBMH0_SX56 -FBMJ0_SA2 -FBMJ0_SX156 -FBMJ0_SX246 -FBMJ0_SX426 -FBMJ0_SX66 -FCAG0_SA2 -FCAG0_SI1503 -FCAG0_SI1641 -FCAG0_SI2133 -FCAG0_SX333 -FCAG0_SX423 -FCAG0_SX63 -FCAJ0_SA1 -FCAJ0_SA2 -FCAJ0_SI1804 -FCAJ0_SI849 -FCAJ0_SX129 -FCAJ0_SX219 -FCAJ0_SX39 -FCAJ0_SX399 -FCDR1_SA1 -FCDR1_SA2 -FCDR1_SX16 -FCDR1_SX376 -FCEG0_SA1 -FCEG0_SI1248 -FCEG0_SI1878 -FCEG0_SI618 -FCEG0_SX168 -FCEG0_SX258 -FCEG0_SX348 -FCEG0_SX438 -FCEG0_SX78 -FCJF0_SA2 -FCJF0_SI1027 -FCJF0_SI1657 -FCJF0_SI648 -FCJF0_SX217 -FCJF0_SX307 -FCJF0_SX37 -FCJF0_SX397 -FCJS0_SA1 -FCJS0_SA2 -FCJS0_SI977 -FCJS0_SX167 -FCJS0_SX347 -FCJS0_SX437 -FCJS0_SX77 -FCKE0_SA1 -FCKE0_SI1111 -FCKE0_SX211 -FCKE0_SX301 -FCKE0_SX31 -FCKE0_SX391 -FCLT0_SA1 -FCLT0_SA2 -FCLT0_SI1438 -FCLT0_SX178 -FCLT0_SX268 -FCLT0_SX358 -FCMG0_SA1 -FCMG0_SI1242 -FCMG0_SX162 -FCMG0_SX252 -FCMG0_SX342 -FCMM0_SI1083 -FCMM0_SI453 -FCMM0_SX273 -FCMM0_SX363 -FCMM0_SX93 -FCRZ0_SA1 -FCRZ0_SA2 -FCRZ0_SI1913 -FCRZ0_SI793 -FCRZ0_SX163 -FCRZ0_SX253 -FCRZ0_SX343 -FCRZ0_SX73 -FCYL0_SA2 -FCYL0_SI1297 -FCYL0_SI1927 -FCYL0_SX127 -FCYL0_SX217 -FCYL0_SX397 -FDAS1_SA1 -FDAS1_SA2 -FDAS1_SX111 -FDAS1_SX21 -FDAS1_SX291 -FDAW0_SA1 -FDAW0_SA2 -FDAW0_SX146 -FDAW0_SX236 -FDAW0_SX326 -FDAW0_SX416 -FDAW0_SX56 -FDFB0_SI1318 -FDFB0_SI1948 -FDFB0_SX148 -FDFB0_SX238 -FDFB0_SX328 -FDFB0_SX418 -FDJH0_SA1 -FDJH0_SA2 -FDJH0_SI1565 -FDJH0_SI2195 -FDJH0_SX125 -FDJH0_SX215 -FDJH0_SX35 -FDJH0_SX395 -FDKN0_SA1 -FDKN0_SA2 -FDKN0_SI1081 -FDKN0_SI1711 -FDKN0_SX271 -FDKN0_SX361 -FDKN0_SX91 -FDML0_SA1 -FDML0_SI1149 -FDML0_SI1779 -FDML0_SI2075 -FDML0_SX339 -FDML0_SX69 -FDMY0_SI1197 -FDMY0_SX117 -FDMY0_SX207 -FDMY0_SX297 -FDNC0_SA1 -FDNC0_SA2 -FDNC0_SI2287 -FDNC0_SX108 -FDNC0_SX18 -FDNC0_SX378 -FDTD0_SA2 -FDTD0_SI1561 -FDTD0_SI2191 -FDTD0_SI931 -FDTD0_SX121 -FDTD0_SX301 -FDTD0_SX391 -FDXW0_SA2 -FDXW0_SI1511 -FDXW0_SI2141 -FDXW0_SI881 -FDXW0_SX161 -FDXW0_SX431 -FEAC0_SA1 -FEAC0_SA2 -FEAC0_SI1245 -FEAC0_SI1875 -FEAC0_SX255 -FEAC0_SX345 -FEAC0_SX435 -FEAR0_SA1 -FEAR0_SA2 -FEAR0_SI1252 -FEAR0_SI1882 -FEAR0_SX172 -FEAR0_SX262 -FEAR0_SX442 -FEAR0_SX82 -FECD0_SA2 -FECD0_SI2048 -FECD0_SX158 -FECD0_SX248 
-FECD0_SX338 -FECD0_SX428 -FEEH0_SA2 -FEEH0_SI1112 -FEEH0_SX212 -FEEH0_SX302 -FEEH0_SX32 -FEEH0_SX392 -FEME0_SA2 -FEME0_SI1505 -FEME0_SI2135 -FEME0_SX245 -FEME0_SX425 -FETB0_SA2 -FETB0_SI1778 -FETB0_SI518 -FETB0_SX248 -FETB0_SX338 -FETB0_SX428 -FETB0_SX68 -FEXM0_SA2 -FEXM0_SI1731 -FEXM0_SX111 -FEXM0_SX201 -FEXM0_SX291 -FEXM0_SX381 -FGCS0_SA1 -FGCS0_SA2 -FGCS0_SI1486 -FGCS0_SI2116 -FGCS0_SI856 -FGCS0_SX46 -FGDP0_SA2 -FGDP0_SI1618 -FGDP0_SI2248 -FGDP0_SX178 -FGDP0_SX268 -FGDP0_SX358 -FGDP0_SX448 -FGMB0_SA1 -FGMB0_SA2 -FGMB0_SI515 -FGMB0_SX155 -FGMB0_SX425 -FGMB0_SX65 -FGRW0_SA2 -FGRW0_SI1782 -FGRW0_SI1990 -FGRW0_SX252 -FGRW0_SX342 -FGRW0_SX72 -FHLM0_SA1 -FHLM0_SA2 -FHLM0_SI1560 -FHLM0_SI2190 -FHLM0_SI930 -FHLM0_SX210 -FHLM0_SX300 -FHXS0_SI2335 -FHXS0_SX265 -FHXS0_SX355 -FHXS0_SX85 -FJDM2_SI1582 -FJDM2_SI1964 -FJDM2_SI2212 -FJDM2_SX322 -FJDM2_SX412 -FJEN0_SA2 -FJEN0_SI1047 -FJEN0_SI1677 -FJEN0_SI2307 -FJEN0_SX147 -FJEN0_SX237 -FJEN0_SX57 -FJHK0_SA1 -FJHK0_SA2 -FJHK0_SI1022 -FJHK0_SI1652 -FJHK0_SX122 -FJHK0_SX212 -FJHK0_SX32 -FJHK0_SX392 -FJKL0_SA1 -FJKL0_SA2 -FJKL0_SI1562 -FJKL0_SI2192 -FJKL0_SX122 -FJKL0_SX302 -FJKL0_SX32 -FJLG0_SA1 -FJLG0_SA2 -FJLG0_SI1506 -FJLG0_SX179 -FJLG0_SX269 -FJLG0_SX359 -FJLG0_SX449 -FJLG0_SX89 -FJLR0_SA2 -FJLR0_SI1861 -FJLR0_SI601 -FJLR0_SX151 -FJLR0_SX241 -FJLR0_SX331 -FJLR0_SX421 -FJLR0_SX61 -FJRB0_SA1 -FJRB0_SA2 -FJRB0_SI1302 -FJRB0_SI1932 -FJRB0_SI672 -FJRB0_SX132 -FJRB0_SX222 -FJRB0_SX312 -FJRB0_SX42 -FJRP1_SA2 -FJRP1_SI802 -FJRP1_SX172 -FJRP1_SX442 -FJSK0_SA2 -FJSK0_SI1682 -FJSK0_SI2312 -FJSK0_SX152 -FJSK0_SX242 -FJSK0_SX332 -FJSK0_SX422 -FJSK0_SX62 -FJSP0_SA1 -FJSP0_SA2 -FJSP0_SI1763 -FJSP0_SI804 -FJSP0_SX174 -FJSP0_SX84 -FJWB1_SA2 -FJWB1_SI2055 -FJWB1_SI795 -FJWB1_SX165 -FJWB1_SX255 -FJWB1_SX75 -FJXM0_SA2 -FJXM0_SI1211 -FJXM0_SI1971 -FJXM0_SX131 -FJXM0_SX221 -FJXP0_SA2 -FJXP0_SI492 -FJXP0_SX222 -FJXP0_SX312 -FJXP0_SX402 -FJXP0_SX42 -FKAA0_SA2 -FKAA0_SI1208 -FKAA0_SI1838 -FKAA0_SI578 -FKAA0_SX218 -FKAA0_SX308 -FKAA0_SX38 -FKDE0_SA2 -FKDE0_SI2221 -FKDE0_SX331 -FKDW0_SA1 -FKDW0_SA2 -FKDW0_SI577 -FKDW0_SX127 -FKDW0_SX217 -FKDW0_SX307 -FKDW0_SX37 -FKFB0_SA1 -FKFB0_SI2238 -FKFB0_SI978 -FKFB0_SX168 -FKFB0_SX258 -FKKH0_SI660 -FKKH0_SX210 -FKKH0_SX30 -FKKH0_SX300 -FKLC0_SA1 -FKLC0_SA2 -FKLC0_SI1615 -FKLC0_SI2245 -FKLC0_SX265 -FKLC0_SX445 -FKLC0_SX85 -FKLC1_SA1 -FKLC1_SA2 -FKLC1_SI1678 -FKLC1_SX148 -FKLC1_SX58 -FKLH0_SA1 -FKLH0_SI1887 -FKLH0_SI627 -FKLH0_SX267 -FKLH0_SX357 -FKLH0_SX447 -FKLH0_SX87 -FKSR0_SI1117 -FKSR0_SX161 -FKSR0_SX37 -FKSR0_SX397 -FLAC0_SA1 -FLAC0_SA2 -FLAC0_SI2161 -FLAC0_SI901 -FLAC0_SX181 -FLAC0_SX271 -FLAC0_SX361 -FLAC0_SX91 -FLAG0_SA1 -FLAG0_SI2094 -FLAG0_SX294 -FLEH0_SA1 -FLEH0_SA2 -FLEH0_SX151 -FLEH0_SX241 -FLEH0_SX421 -FLEH0_SX61 -FLET0_SA2 -FLET0_SI1137 -FLET0_SI1767 -FLET0_SX147 -FLET0_SX237 -FLET0_SX277 -FLET0_SX417 -FLET0_SX57 -FLHD0_SA1 -FLHD0_SA2 -FLHD0_SI1344 -FLHD0_SI1974 -FLHD0_SX174 -FLHD0_SX264 -FLHD0_SX444 -FLHD0_SX84 -FLJA0_SA2 -FLJA0_SI1708 -FLJA0_SX268 -FLJA0_SX358 -FLJA0_SX448 -FLJA0_SX88 -FLJD0_SA1 -FLJD0_SA2 -FLJD0_SI2146 -FLJD0_SX166 -FLJD0_SX256 -FLJD0_SX346 -FLJD0_SX436 -FLJG0_SA1 -FLJG0_SI1611 -FLJG0_SI2241 -FLJG0_SX261 -FLJG0_SX441 -FLJG0_SX81 -FLKM0_SI1880 -FLKM0_SX116 -FLMA0_SA2 -FLMA0_SI1243 -FLMA0_SI1873 -FLMA0_SX163 -FLMA0_SX253 -FLMA0_SX343 -FLMC0_SA1 -FLMC0_SA2 -FLMC0_SI2002 -FLMC0_SI742 -FLMC0_SX112 -FLMC0_SX292 -FLMC0_SX336 -FLMC0_SX382 -FLMK0_SA2 -FLMK0_SI2295 -FLMK0_SX135 -FLMK0_SX225 -FLMK0_SX45 -FLOD0_SA1 -FLOD0_SA2 -FLOD0_SI1287 -FLOD0_SI657 -FLOD0_SX207 -FLOD0_SX387 -FLTM0_SA2 -FLTM0_SI1700 -FLTM0_SX260 
-FLTM0_SX80 -FMAH1_SA1 -FMAH1_SI1509 -FMAH1_SI2139 -FMAH1_SX249 -FMAH1_SX339 -FMAH1_SX429 -FMAH1_SX69 -FMBG0_SA1 -FMBG0_SI1790 -FMBG0_SX260 -FMBG0_SX3 -FMBG0_SX350 -FMBG0_SX440 -FMBG0_SX80 -FMEM0_SA2 -FMEM0_SI1377 -FMEM0_SI2007 -FMEM0_SX117 -FMEM0_SX207 -FMEM0_SX297 -FMJB0_SA1 -FMJB0_SA2 -FMJB0_SI1807 -FMJB0_SX187 -FMJB0_SX277 -FMJB0_SX367 -FMJB0_SX7 -FMJF0_SA1 -FMJF0_SI1254 -FMJF0_SI1884 -FMJF0_SX264 -FMJF0_SX354 -FMJF0_SX444 -FMJU0_SA1 -FMJU0_SA2 -FMJU0_SI2019 -FMJU0_SI759 -FMJU0_SX129 -FMJU0_SX219 -FMJU0_SX39 -FMKC0_SA1 -FMKC0_SA2 -FMKC0_SI1072 -FMKC0_SX172 -FMKC0_SX262 -FMKC0_SX352 -FMKF0_SA1 -FMKF0_SA2 -FMKF0_SI1536 -FMKF0_SI906 -FMKF0_SX276 -FMKF0_SX366 -FMKF0_SX6 -FMKF0_SX96 -FMMH0_SA1 -FMMH0_SA2 -FMMH0_SI1537 -FMMH0_SI2167 -FMMH0_SI907 -FMMH0_SX187 -FMMH0_SX367 -FMMH0_SX420 -FMMH0_SX7 -FMMH0_SX97 -FMPG0_SI1602 -FMPG0_SI2232 -FMPG0_SX252 -FMPG0_SX72 -FNKL0_SA1 -FNKL0_SA2 -FNKL0_SI2152 -FNKL0_SX172 -FNKL0_SX196 -FNKL0_SX262 -FNKL0_SX442 -FNKL0_SX82 -FNTB0_SA1 -FNTB0_SA2 -FNTB0_SX123 -FNTB0_SX213 -FNTB0_SX33 -FNTB0_SX393 -FPAB1_SA2 -FPAB1_SX121 -FPAB1_SX301 -FPAB1_SX31 -FPAB1_SX391 -FPAC0_SA1 -FPAC0_SI2011 -FPAC0_SX121 -FPAC0_SX211 -FPAC0_SX301 -FPAC0_SX31 -FPAC0_SX391 -FPAD0_SA1 -FPAD0_SI1346 -FPAD0_SI1976 -FPAD0_SX266 -FPAD0_SX446 -FPAF0_SI1684 -FPAF0_SI2314 -FPAF0_SX244 -FPAF0_SX334 -FPAF0_SX424 -FPAF0_SX64 -FPAZ0_SI1593 -FPAZ0_SX153 -FPAZ0_SX27 -FPAZ0_SX423 -FPAZ0_SX63 -FPJF0_SA2 -FPJF0_SI1046 -FPJF0_SI1676 -FPJF0_SX236 -FPJF0_SX326 -FPLS0_SA1 -FPLS0_SA2 -FPLS0_SI2220 -FPLS0_SX150 -FPLS0_SX240 -FPLS0_SX3 -FPLS0_SX60 -FPMY0_SA2 -FPMY0_SI1783 -FPMY0_SX163 -FPMY0_SX196 -FPMY0_SX253 -FPMY0_SX73 -FREH0_SI1315 -FREH0_SI685 -FREH0_SX145 -FREH0_SX235 -FREH0_SX325 -FREH0_SX55 -FRJB0_SA1 -FRJB0_SA2 -FRJB0_SI1427 -FRJB0_SI1470 -FRJB0_SI1794 -FRJB0_SX167 -FRJB0_SX257 -FRJB0_SX437 -FRJB0_SX77 -FRLL0_SA1 -FRLL0_SA2 -FRLL0_SI1514 -FRLL0_SI884 -FRLL0_SX164 -FRLL0_SX254 -FRLL0_SX344 -FRLL0_SX74 -FSAG0_SA2 -FSAG0_SI1953 -FSAG0_SI693 -FSAG0_SX63 -FSAH0_SI1244 -FSAH0_SI1874 -FSAH0_SX344 -FSAH0_SX74 -FSAK0_SA1 -FSAK0_SA2 -FSAK0_SI1930 -FSAK0_SI670 -FSAK0_SX130 -FSAK0_SX220 -FSAK0_SX310 -FSAK0_SX40 -FSAK0_SX400 -FSBK0_SA1 -FSBK0_SI1699 -FSBK0_SI2329 -FSBK0_SX259 -FSBK0_SX439 -FSBK0_SX79 -FSCN0_SI1886 -FSCN0_SX356 -FSDC0_SA1 -FSDC0_SI1942 -FSDC0_SI2234 -FSDC0_SX232 -FSDC0_SX412 -FSDJ0_SA1 -FSDJ0_SA2 -FSDJ0_SI1745 -FSDJ0_SX125 -FSDJ0_SX35 -FSGF0_SA1 -FSGF0_SA2 -FSGF0_SI1557 -FSGF0_SX207 -FSGF0_SX27 -FSGF0_SX297 -FSGF0_SX387 -FSJG0_SI1570 -FSJG0_SI2200 -FSJG0_SX310 -FSJK1_SA1 -FSJK1_SI1025 -FSJK1_SI2285 -FSJK1_SI696 -FSJK1_SX215 -FSJK1_SX305 -FSJK1_SX395 -FSJS0_SA2 -FSJS0_SI1171 -FSJS0_SI1801 -FSJS0_SI541 -FSJS0_SX271 -FSJS0_SX361 -FSJS0_SX91 -FSJW0_SA1 -FSJW0_SA2 -FSJW0_SI703 -FSJW0_SX163 -FSJW0_SX253 -FSJW0_SX343 -FSJW0_SX73 -FSKC0_SA1 -FSKC0_SA2 -FSKC0_SI2046 -FSKC0_SX156 -FSKC0_SX336 -FSKC0_SX426 -FSKC0_SX66 -FSKL0_SA1 -FSKL0_SA2 -FSKL0_SI2159 -FSKL0_SI899 -FSKL0_SX179 -FSKL0_SX269 -FSKL0_SX359 -FSKL0_SX89 -FSKP0_SA1 -FSKP0_SI1728 -FSKP0_SI468 -FSKP0_SX108 -FSKP0_SX18 -FSKP0_SX198 -FSKP0_SX288 -FSKP0_SX378 -FSLS0_SA1 -FSLS0_SA2 -FSLS0_SI1056 -FSLS0_SI1686 -FSLS0_SI2316 -FSLS0_SX202 -FSLS0_SX246 -FSLS0_SX66 -FSMA0_SA1 -FSMA0_SI1621 -FSMA0_SI2251 -FSMA0_SX271 -FSMA0_SX361 -FSMA0_SX91 -FSMM0_SA1 -FSMM0_SA2 -FSMM0_SI1314 -FSMM0_SI1944 -FSMM0_SI684 -FSMM0_SX414 -FSMM0_SX54 -FSMS1_SA1 -FSMS1_SA2 -FSMS1_SI1504 -FSMS1_SI2134 -FSMS1_SI874 -FSMS1_SX154 -FSMS1_SX334 -FSMS1_SX64 -FSPM0_SA1 -FSPM0_SI1871 -FSPM0_SI611 -FSPM0_SX341 -FSPM0_SX431 -FSRH0_SA1 -FSRH0_SA2 -FSRH0_SI1719 -FSRH0_SX131 -FSRH0_SX41 -FSSB0_SA1 
-FSSB0_SA2 -FSSB0_SI1082 -FSSB0_SI2342 -FSSB0_SX182 -FSSB0_SX272 -FSSB0_SX452 -FSSB0_SX92 -FTAJ0_SA1 -FTAJ0_SA2 -FTAJ0_SI1329 -FTAJ0_SI474 -FTAJ0_SX339 -FTAJ0_SX69 -FTBR0_SA1 -FTBR0_SA2 -FTBR0_SI2181 -FTBR0_SX111 -FTBR0_SX201 -FTBR0_SX291 -FTBR0_SX381 -FTBW0_SA2 -FTBW0_SI1345 -FTBW0_SI1975 -FTBW0_SX265 -FTBW0_SX355 -FTBW0_SX445 -FTBW0_SX85 -FTLG0_SA1 -FTLG0_SA2 -FTLG0_SI840 -FTLG0_SX123 -FTLG0_SX213 -FTLG0_SX303 -FTLG0_SX33 -FTLG0_SX393 -FTMG0_SA1 -FTMG0_SA2 -FTMG0_SX182 -FTMG0_SX272 -FTMG0_SX362 -FTMG0_SX92 -FVFB0_SA1 -FVFB0_SI1032 -FVFB0_SI2292 -FVFB0_SX222 -FVFB0_SX312 -FVFB0_SX402 -FVKB0_SA2 -FVKB0_SI1159 -FVKB0_SI1789 -FVKB0_SI529 -FVKB0_SX169 -FVKB0_SX259 -FVKB0_SX439 -FVKB0_SX79 -FVMH0_SA1 -FVMH0_SI2096 -FVMH0_SX206 -FVMH0_SX296 -FVMH0_SX386 -MABC0_SA1 -MABC0_SA2 -MABC0_SX151 -MABC0_SX241 -MABC0_SX331 -MABC0_SX421 -MABC0_SX61 -MADC0_SA1 -MADC0_SA2 -MADC0_SI1997 -MADC0_SX17 -MADC0_SX197 -MADC0_SX287 -MADD0_SA1 -MADD0_SI1798 -MADD0_SI538 -MADD0_SX358 -MADD0_SX448 -MAEB0_SA1 -MAEB0_SA2 -MAEB0_SI2250 -MAEB0_SI990 -MAEB0_SX180 -MAEB0_SX270 -MAEB0_SX360 -MAEB0_SX90 -MAEO0_SA2 -MAEO0_SI1655 -MAEO0_SI1956 -MAEO0_SX156 -MAEO0_SX246 -MAEO0_SX336 -MAEO0_SX426 -MAEO0_SX66 -MAFM0_SA1 -MAFM0_SA2 -MAFM0_SI1569 -MAFM0_SI2199 -MAFM0_SX219 -MAFM0_SX39 -MAFM0_SX399 -MAJP0_SA1 -MAJP0_SI1074 -MAJP0_SI2334 -MAJP0_SX264 -MAJP0_SX354 -MAJP0_SX444 -MAJP0_SX84 -MAKB0_SA1 -MAKB0_SX206 -MAKB0_SX296 -MAKR0_SA1 -MAKR0_SA2 -MAKR0_SI1352 -MAKR0_SI1982 -MAKR0_SI722 -MAKR0_SX182 -MAKR0_SX272 -MAKR0_SX452 -MAPV0_SA1 -MAPV0_SA2 -MAPV0_SI1923 -MAPV0_SX123 -MAPV0_SX303 -MAPV0_SX33 -MAPV0_SX393 -MARC0_SA1 -MARC0_SI1188 -MARC0_SI1818 -MARC0_SI558 -MARC0_SX288 -MARC0_SX378 -MARW0_SA1 -MARW0_SA2 -MARW0_SI1276 -MARW0_SI646 -MARW0_SX106 -MARW0_SX16 -MARW0_SX376 -MBAR0_SA2 -MBAR0_SI1319 -MBAR0_SI1949 -MBAR0_SI689 -MBAR0_SX149 -MBAR0_SX239 -MBAR0_SX329 -MBBR0_SA1 -MBBR0_SA2 -MBBR0_SI1685 -MBBR0_SX155 -MBBR0_SX245 -MBBR0_SX425 -MBCG0_SA2 -MBCG0_SI2217 -MBCG0_SX147 -MBCG0_SX237 -MBCG0_SX417 -MBCG0_SX57 -MBEF0_SA1 -MBEF0_SA2 -MBEF0_SX111 -MBEF0_SX201 -MBEF0_SX291 -MBGT0_SA1 -MBGT0_SI1341 -MBGT0_SI711 -MBGT0_SX81 -MBJV0_SA2 -MBJV0_SI1247 -MBJV0_SI1877 -MBJV0_SX167 -MBJV0_SX257 -MBJV0_SX437 -MBJV0_SX77 -MBMA0_SA1 -MBMA0_SA2 -MBMA0_SI1852 -MBMA0_SX142 -MBMA0_SX322 -MBMA0_SX412 -MBMA1_SA1 -MBMA1_SA2 -MBMA1_SI2207 -MBMA1_SX144 -MBMA1_SX234 -MBMA1_SX414 -MBML0_SA1 -MBML0_SI1799 -MBML0_SI539 -MBML0_SX179 -MBML0_SX269 -MBML0_SX359 -MBML0_SX449 -MBOM0_SA1 -MBOM0_SI1014 -MBOM0_SI1644 -MBOM0_SX114 -MBOM0_SX204 -MBOM0_SX311 -MBOM0_SX384 -MBSB0_SA2 -MBSB0_SI1353 -MBSB0_SI1983 -MBSB0_SI723 -MBSB0_SX183 -MBSB0_SX273 -MBSB0_SX363 -MBSB0_SX93 -MBTH0_SA1 -MBTH0_SI505 -MBTH0_SI757 -MBTH0_SX212 -MBTH0_SX302 -MBTH0_SX392 -MBWP0_SA1 -MBWP0_SA2 -MBWP0_SI1531 -MBWP0_SI1969 -MBWP0_SI709 -MBWP0_SX169 -MBWP0_SX259 -MBWP0_SX439 -MBWP0_SX79 -MCAE0_SA1 -MCAE0_SA2 -MCAE0_SX187 -MCAE0_SX367 -MCAE0_SX7 -MCAE0_SX97 -MCAL0_SA1 -MCAL0_SI508 -MCAL0_SX148 -MCAL0_SX238 -MCAL0_SX328 -MCAL0_SX418 -MCAL0_SX58 -MCDC0_SA2 -MCDC0_SI1292 -MCDC0_SI1922 -MCDC0_SI662 -MCDC0_SX122 -MCDC0_SX302 -MCDC0_SX32 -MCDC0_SX392 -MCDD0_SA1 -MCDD0_SI1513 -MCDD0_SI2143 -MCDD0_SX163 -MCDD0_SX343 -MCDD0_SX73 -MCDR0_SA1 -MCDR0_SA2 -MCDR0_SX164 -MCDR0_SX254 -MCDR0_SX344 -MCDR0_SX434 -MCDR0_SX74 -MCEF0_SA1 -MCEF0_SA2 -MCEF0_SI1135 -MCEF0_SI1765 -MCEF0_SX145 -MCEF0_SX325 -MCEF0_SX55 -MCEW0_SI1442 -MCEW0_SX182 -MCEW0_SX272 -MCEW0_SX92 -MCHL0_SA1 -MCHL0_SA2 -MCHL0_SI1977 -MCHL0_SX177 -MCHL0_SX267 -MCHL0_SX357 -MCHL0_SX447 -MCLK0_SA1 -MCLK0_SA2 -MCLK0_SI1660 -MCLK0_SX130 -MCLK0_SX220 -MCLK0_SX40 
-MCLK0_SX400 -MCLM0_SA2 -MCLM0_SI1456 -MCLM0_SX106 -MCLM0_SX16 -MCLM0_SX196 -MCLM0_SX286 -MCLM0_SX376 -MCPM0_SA2 -MCPM0_SI1194 -MCPM0_SI564 -MCPM0_SX204 -MCPM0_SX24 -MCRE0_SA1 -MCRE0_SA2 -MCRE0_SI1121 -MCRE0_SI1725 -MCRE0_SI1751 -MCRE0_SX131 -MCRE0_SX221 -MCRE0_SX24 -MCRE0_SX401 -MCRE0_SX41 -MCSS0_SA1 -MCSS0_SA2 -MCSS0_SX120 -MCSS0_SX210 -MCSS0_SX30 -MCSS0_SX300 -MCSS0_SX390 -MCTH0_SA2 -MCTH0_SI1209 -MCTH0_SI1839 -MCTH0_SI579 -MCTH0_SX129 -MCTH0_SX219 -MCTH0_SX309 -MCTH0_SX399 -MCTM0_SA1 -MCTM0_SA2 -MCTM0_SI720 -MCTM0_SX180 -MCTM0_SX270 -MCTM0_SX360 -MCTM0_SX450 -MCTM0_SX90 -MCXM0_SA1 -MCXM0_SA2 -MCXM0_SI1351 -MCXM0_SI1981 -MCXM0_SI721 -MCXM0_SX181 -MCXM0_SX271 -MCXM0_SX361 -MCXM0_SX451 -MDAC0_SA2 -MDAC0_SI1261 -MDAC0_SI1837 -MDAC0_SX271 -MDAC0_SX451 -MDAC0_SX91 -MDAS0_SA1 -MDAS0_SA2 -MDAS0_SI1266 -MDAS0_SX186 -MDAS0_SX21 -MDAS0_SX276 -MDAS0_SX96 -MDBB1_SA1 -MDBB1_SA2 -MDBB1_SI1006 -MDBB1_SI1636 -MDBB1_SI2056 -MDBB1_SX196 -MDBB1_SX286 -MDBP0_SA1 -MDBP0_SA2 -MDBP0_SI1158 -MDBP0_SI1788 -MDBP0_SX258 -MDBP0_SX348 -MDBP0_SX78 -MDCD0_SA1 -MDCD0_SA2 -MDCD0_SI2045 -MDCD0_SX155 -MDCD0_SX65 -MDCM0_SA1 -MDCM0_SA2 -MDCM0_SI2110 -MDCM0_SI850 -MDCM0_SX130 -MDCM0_SX220 -MDCM0_SX310 -MDDC0_SA1 -MDDC0_SA2 -MDDC0_SX249 -MDDC0_SX339 -MDDC0_SX429 -MDED0_SI1170 -MDED0_SI1800 -MDED0_SX180 -MDED0_SX270 -MDED0_SX360 -MDED0_SX450 -MDED0_SX90 -MDEF0_SA1 -MDEF0_SA2 -MDEF0_SI1563 -MDEF0_SI2193 -MDEF0_SX213 -MDEF0_SX33 -MDEF0_SX393 -MDEM0_SA2 -MDEM0_SI1868 -MDEM0_SX158 -MDEM0_SX248 -MDEM0_SX338 -MDEM0_SX68 -MDHL0_SA1 -MDHL0_SA2 -MDHL0_SI2069 -MDHL0_SI809 -MDHL0_SX179 -MDHL0_SX359 -MDHL0_SX89 -MDHS0_SX180 -MDHS0_SX270 -MDHS0_SX360 -MDHS0_SX450 -MDHS0_SX90 -MDJM0_SA1 -MDJM0_SA2 -MDJM0_SI2085 -MDJM0_SI825 -MDJM0_SX195 -MDJM0_SX285 -MDJM0_SX375 -MDKS0_SA1 -MDKS0_SA2 -MDKS0_SI1066 -MDKS0_SI1696 -MDKS0_SI2326 -MDKS0_SX256 -MDKS0_SX76 -MDLB0_SA1 -MDLB0_SI1936 -MDLB0_SI676 -MDLB0_SX226 -MDLB0_SX316 -MDLB0_SX46 -MDLC0_SA1 -MDLC0_SA2 -MDLC0_SI765 -MDLC0_SX135 -MDLC0_SX225 -MDLC0_SX315 -MDLC0_SX45 -MDLC1_SA1 -MDLC1_SX175 -MDLC1_SX265 -MDLC1_SX355 -MDLC1_SX85 -MDLC2_SA1 -MDLC2_SA2 -MDLC2_SI1614 -MDLC2_SI984 -MDLC2_SX174 -MDLC2_SX264 -MDLC2_SX444 -MDLC2_SX84 -MDLH0_SA1 -MDLH0_SI1960 -MDLH0_SI574 -MDLH0_SI700 -MDLH0_SX250 -MDLH0_SX340 -MDLH0_SX70 -MDLM0_SA1 -MDLM0_SA2 -MDLM0_SX244 -MDLM0_SX334 -MDLM0_SX64 -MDLR0_SI1233 -MDLR0_SX243 -MDLR0_SX423 -MDLR0_SX63 -MDLR1_SI1299 -MDLR1_SI1929 -MDLR1_SX129 -MDLR1_SX219 -MDLR1_SX309 -MDLR1_SX39 -MDLR1_SX399 -MDMA0_SA1 -MDMA0_SA2 -MDMA0_SI1238 -MDMA0_SI2060 -MDMT0_SI2341 -MDMT0_SI572 -MDMT0_SX212 -MDMT0_SX302 -MDMT0_SX392 -MDNS0_SA1 -MDNS0_SX111 -MDNS0_SX291 -MDNS0_SX381 -MDPB0_SA1 -MDPB0_SA2 -MDPB0_SI2126 -MDPB0_SX146 -MDPB0_SX236 -MDPB0_SX326 -MDPB0_SX56 -MDPK0_SA1 -MDPK0_SA2 -MDPK0_SI1683 -MDPK0_SI552 -MDPK0_SX153 -MDPK0_SX243 -MDPK0_SX63 -MDPS0_SA1 -MDPS0_SA2 -MDPS0_SI1651 -MDPS0_SI1979 -MDPS0_SX179 -MDPS0_SX269 -MDPS0_SX449 -MDPS0_SX89 -MDRD0_SA2 -MDRD0_SI1382 -MDRD0_SI2012 -MDRD0_SX122 -MDRD0_SX212 -MDRD0_SX302 -MDRD0_SX392 -MDSJ0_SA1 -MDSJ0_SA2 -MDSJ0_SI832 -MDSJ0_SX112 -MDSJ0_SX22 -MDSJ0_SX292 -MDSJ0_SX382 -MDSS0_SA1 -MDSS0_SI1881 -MDSS0_SI2087 -MDSS0_SI621 -MDSS0_SX171 -MDSS0_SX261 -MDSS0_SX351 -MDSS0_SX81 -MDSS1_SA2 -MDSS1_SI1713 -MDSS1_SX247 -MDSS1_SX337 -MDSS1_SX427 -MDTB0_SA1 -MDTB0_SA2 -MDTB0_SI570 -MDTB0_SX210 -MDTB0_SX300 -MDTB0_SX321 -MDTB0_SX390 -MDWD0_SA1 -MDWD0_SI1890 -MDWD0_SI557 -MDWD0_SX180 -MDWD0_SX360 -MDWD0_SX450 -MDWH0_SA2 -MDWH0_SI1925 -MDWH0_SX125 -MDWH0_SX35 -MDWH0_SX395 -MDWM0_SI1546 -MDWM0_SI2176 -MDWM0_SX106 -MDWM0_SX376 -MDWM0_SX433 -MEAL0_SA1 -MEAL0_SI1547 
-MEAL0_SI917 -MEAL0_SX197 -MEAL0_SX287 -MEAL0_SX377 -MEDR0_SI744 -MEDR0_SX114 -MEDR0_SX204 -MEDR0_SX24 -MEDR0_SX294 -MEDR0_SX384 -MEFG0_SA2 -MEFG0_SI465 -MEFG0_SX105 -MEFG0_SX15 -MEFG0_SX195 -MEFG0_SX285 -MEFG0_SX375 -MEGJ0_SI1967 -MEGJ0_SX437 -MEGJ0_SX77 -MEJL0_SA2 -MEJL0_SI1592 -MEJL0_SI1654 -MEJL0_SI962 -MEJL0_SX332 -MEJL0_SX422 -MEJL0_SX62 -MEJS0_SA1 -MEJS0_SA2 -MEJS0_SI1870 -MEJS0_SX250 -MEJS0_SX430 -MEJS0_SX70 -MESG0_SA1 -MESG0_SA2 -MESG0_SI1332 -MESG0_SI1962 -MESG0_SX162 -MESG0_SX252 -MESG0_SX342 -MESG0_SX72 -MESJ0_SA1 -MESJ0_SA2 -MESJ0_SI2257 -MESJ0_SI997 -MESJ0_SX277 -MESJ0_SX367 -MESJ0_SX7 -MEWM0_SA1 -MEWM0_SA2 -MEWM0_SI1348 -MEWM0_SI1978 -MEWM0_SX268 -MEWM0_SX358 -MEWM0_SX448 -MFER0_SA1 -MFER0_SA2 -MFER0_SI1492 -MFER0_SI2122 -MFER0_SX232 -MFER0_SX322 -MFER0_SX412 -MFER0_SX52 -MFMC0_SA1 -MFMC0_SA2 -MFMC0_SI1132 -MFMC0_SI1762 -MFMC0_SI502 -MFMC0_SX142 -MFMC0_SX232 -MFMC0_SX322 -MFMC0_SX412 -MFMC0_SX52 -MFRM0_SA1 -MFRM0_SA2 -MFRM0_SI1155 -MFRM0_SI1717 -MFRM0_SI1785 -MFRM0_SX165 -MFRM0_SX255 -MFRM0_SX75 -MFWK0_SA1 -MFWK0_SA2 -MFWK0_SI1249 -MFWK0_SI619 -MFWK0_SX259 -MFWK0_SX439 -MFWK0_SX79 -MFXS0_SA1 -MFXS0_SA2 -MFXS0_SI1674 -MFXS0_SI2225 -MFXS0_SI2304 -MFXS0_SX144 -MFXS0_SX234 -MFXS0_SX414 -MFXV0_SA1 -MFXV0_SI1635 -MFXV0_SX15 -MFXV0_SX195 -MFXV0_SX285 -MFXV0_SX375 -MGAF0_SA2 -MGAF0_SI1912 -MGAF0_SI652 -MGAF0_SX112 -MGAF0_SX202 -MGAF0_SX292 -MGAG0_SA1 -MGAG0_SI1321 -MGAG0_SI645 -MGAG0_SX151 -MGAG0_SX241 -MGAG0_SX331 -MGAG0_SX421 -MGAG0_SX61 -MGAK0_SA1 -MGAK0_SA2 -MGAK0_SI1666 -MGAK0_SI2296 -MGAK0_SX316 -MGAK0_SX406 -MGAR0_SA1 -MGAR0_SA2 -MGAR0_SI1212 -MGAR0_SI1694 -MGAR0_SI1842 -MGAR0_SX222 -MGAR0_SX402 -MGAR0_SX42 -MGAW0_SA1 -MGAW0_SA2 -MGAW0_SI1802 -MGAW0_SX265 -MGAW0_SX355 -MGAW0_SX445 -MGAW0_SX85 -MGES0_SA2 -MGES0_SI1481 -MGES0_SX131 -MGES0_SX221 -MGES0_SX401 -MGES0_SX41 -MGJC0_SA1 -MGJC0_SI1256 -MGJC0_SI1335 -MGJC0_SI1965 -MGJC0_SX165 -MGJC0_SX255 -MGJC0_SX345 -MGRL0_SA1 -MGRL0_SA2 -MGRL0_SI1497 -MGRL0_SX237 -MGRL0_SX417 -MGRL0_SX57 -MGRP0_SA1 -MGRP0_SI1947 -MGRP0_SI687 -MGRP0_SX147 -MGRP0_SX237 -MGRP0_SX417 -MGRP0_SX57 -MGSH0_SA1 -MGSH0_SX186 -MGSH0_SX96 -MGSL0_SA2 -MGSL0_SI1164 -MGSL0_SX174 -MGSL0_SX354 -MGSL0_SX444 -MGSL0_SX84 -MGXP0_SA1 -MGXP0_SA2 -MGXP0_SI457 -MGXP0_SX277 -MGXP0_SX367 -MGXP0_SX97 -MHBS0_SA1 -MHBS0_SA2 -MHBS0_SI1575 -MHBS0_SI2205 -MHBS0_SX135 -MHBS0_SX225 -MHBS0_SX405 -MHIT0_SA2 -MHIT0_SI1613 -MHIT0_SI2243 -MHIT0_SX173 -MHIT0_SX263 -MHIT0_SX353 -MHIT0_SX443 -MHIT0_SX83 -MHJB0_SA2 -MHJB0_SI1647 -MHJB0_SI2277 -MHJB0_SX117 -MHJB0_SX207 -MHJB0_SX27 -MHJB0_SX297 -MHJB0_SX387 -MHMG0_SA1 -MHMG0_SA2 -MHMG0_SI1365 -MHMG0_SI1995 -MHMG0_SX105 -MHMG0_SX15 -MHMG0_SX285 -MHMG0_SX375 -MHMR0_SA2 -MHMR0_SI1119 -MHMR0_SX129 -MHMR0_SX219 -MHMR0_SX309 -MHMR0_SX39 -MHMR0_SX399 -MHRM0_SA2 -MHRM0_SI1475 -MHRM0_SI2218 -MHRM0_SX238 -MHRM0_SX328 -MHRM0_SX418 -MHXL0_SA1 -MHXL0_SA2 -MHXL0_SI512 -MHXL0_SI612 -MHXL0_SX152 -MHXL0_SX332 -MHXL0_SX422 -MHXL0_SX62 -MILB0_SA1 -MILB0_SI2163 -MILB0_SI807 -MILB0_SX183 -MILB0_SX273 -MILB0_SX3 -MILB0_SX363 -MILB0_SX93 -MJAC0_SA1 -MJAC0_SA2 -MJAC0_SI1331 -MJAC0_SI2148 -MJAC0_SX341 -MJAC0_SX431 -MJAE0_SA1 -MJAE0_SA2 -MJAE0_SI1524 -MJAE0_SI1999 -MJAE0_SI2154 -MJAE0_SX264 -MJAE0_SX354 -MJAE0_SX444 -MJAI0_SI1604 -MJAI0_SX164 -MJAI0_SX254 -MJAI0_SX344 -MJAI0_SX434 -MJAI0_SX74 -MJBG0_SA1 -MJBG0_SA2 -MJBG0_SI1232 -MJBG0_SI1724 -MJBG0_SI1862 -MJBG0_SX152 -MJBG0_SX242 -MJBG0_SX332 -MJBG0_SX422 -MJDA0_SA1 -MJDA0_SA2 -MJDA0_SI1661 -MJDA0_SI2291 -MJDA0_SX131 -MJDA0_SX221 -MJDA0_SX401 -MJDA0_SX41 -MJDC0_SA1 -MJDC0_SA2 -MJDC0_SI1161 -MJDC0_SI2165 -MJDC0_SX171 
-MJDC0_SX261 -MJDC0_SX351 -MJDC0_SX441 -MJDC0_SX81 -MJDE0_SA2 -MJDE0_SX130 -MJDE0_SX310 -MJDE0_SX40 -MJDE0_SX400 -MJDG0_SA1 -MJDG0_SI1672 -MJDG0_SX142 -MJDG0_SX232 -MJDG0_SX322 -MJDG0_SX412 -MJDG0_SX52 -MJDM0_SA2 -MJDM0_SI1937 -MJDM0_SX260 -MJDM0_SX440 -MJDM0_SX80 -MJEB0_SA1 -MJEB0_SA2 -MJEB0_SI1286 -MJEB0_SI1916 -MJEB0_SX206 -MJEB0_SX26 -MJEB0_SX386 -MJEB1_SA1 -MJEB1_SI2097 -MJEB1_SX117 -MJEB1_SX27 -MJEB1_SX297 -MJEE0_SA2 -MJEE0_SI1237 -MJEE0_SI1867 -MJEE0_SI607 -MJEE0_SX157 -MJEE0_SX427 -MJEE0_SX67 -MJFH0_SA1 -MJFH0_SI1737 -MJFH0_SI477 -MJFH0_SX117 -MJFH0_SX207 -MJFH0_SX27 -MJFH0_SX297 -MJFH0_SX387 -MJFR0_SA2 -MJFR0_SI1605 -MJFR0_SI2235 -MJFR0_SI975 -MJFR0_SX165 -MJFR0_SX255 -MJFR0_SX345 -MJHI0_SA2 -MJHI0_SI555 -MJHI0_SI698 -MJHI0_SX248 -MJHI0_SX338 -MJHI0_SX428 -MJHI0_SX68 -MJJB0_SA2 -MJJB0_SI1139 -MJJB0_SI1277 -MJJB0_SI1769 -MJJB0_SX149 -MJJB0_SX329 -MJJB0_SX419 -MJJB0_SX59 -MJJJ0_SA1 -MJJJ0_SA2 -MJJJ0_SI1793 -MJJJ0_SI533 -MJJJ0_SX173 -MJJJ0_SX263 -MJJJ0_SX353 -MJJJ0_SX83 -MJJM0_SA1 -MJJM0_SI1457 -MJJM0_SX17 -MJJM0_SX197 -MJJM0_SX287 -MJJM0_SX377 -MJKR0_SA2 -MJKR0_SI1201 -MJKR0_SI1831 -MJKR0_SX121 -MJKR0_SX211 -MJKR0_SX301 -MJKR0_SX31 -MJKR0_SX391 -MJLB0_SA1 -MJLB0_SA2 -MJLB0_SI2246 -MJLB0_SI986 -MJLB0_SX266 -MJLB0_SX356 -MJLB0_SX446 -MJLB0_SX86 -MJLG1_SA1 -MJLG1_SA2 -MJLG1_SI1012 -MJLG1_SI1642 -MJLG1_SI2272 -MJLG1_SX112 -MJLG1_SX202 -MJLG1_SX22 -MJLG1_SX382 -MJLS0_SA1 -MJLS0_SA2 -MJLS0_SI1096 -MJLS0_SI466 -MJLS0_SX16 -MJLS0_SX196 -MJLS0_SX286 -MJLS0_SX376 -MJMA0_SI1495 -MJMA0_SI865 -MJMA0_SX145 -MJMA0_SX235 -MJMA0_SX325 -MJMA0_SX415 -MJMA0_SX55 -MJMD0_SA1 -MJMD0_SI1028 -MJMD0_SI1658 -MJMD0_SX128 -MJMD0_SX218 -MJMD0_SX398 -MJMM0_SA1 -MJMM0_SA2 -MJMM0_SI1885 -MJMM0_SI625 -MJMM0_SX265 -MJMM0_SX355 -MJMM0_SX445 -MJPG0_SA1 -MJPG0_SA2 -MJPG0_SI561 -MJPG0_SX291 -MJPG0_SX381 -MJPM0_SA1 -MJPM0_SI1998 -MJPM0_SI738 -MJPM0_SX108 -MJPM0_SX18 -MJPM0_SX198 -MJPM0_SX288 -MJPM1_SA1 -MJPM1_SA2 -MJPM1_SI1897 -MJPM1_SI761 -MJPM1_SX131 -MJPM1_SX221 -MJPM1_SX41 -MJRA0_SI606 -MJRA0_SX156 -MJRA0_SX246 -MJRA0_SX66 -MJRG0_SA1 -MJRG0_SA2 -MJRG0_SX106 -MJRG0_SX16 -MJRG0_SX286 -MJRH0_SA1 -MJRH0_SA2 -MJRH0_SI1125 -MJRH0_SI1755 -MJRH0_SX135 -MJRH0_SX315 -MJRH0_SX405 -MJRH0_SX45 -MJRH1_SA2 -MJRH1_SI1774 -MJRH1_SX334 -MJRH1_SX64 -MJRK0_SI2103 -MJRK0_SX340 -MJRK0_SX70 -MJRP0_SI1835 -MJRP0_SI585 -MJRP0_SX135 -MJRP0_SX315 -MJRP0_SX405 -MJRP0_SX45 -MJSR0_SA2 -MJSR0_SX164 -MJSR0_SX254 -MJSR0_SX434 -MJSR0_SX74 -MJWG0_SA2 -MJWG0_SI2155 -MJWG0_SX355 -MJWG0_SX445 -MJWG0_SX85 -MJWS0_SA1 -MJWS0_SA2 -MJWS0_SI1143 -MJWS0_SI1773 -MJWS0_SX243 -MJWS0_SX423 -MJWT0_SA2 -MJWT0_SI751 -MJXA0_SA1 -MJXA0_SA2 -MJXA0_SI1507 -MJXA0_SI2137 -MJXA0_SI877 -MJXA0_SX157 -MJXA0_SX247 -MJXA0_SX337 -MJXA0_SX67 -MJXL0_SA1 -MJXL0_SA2 -MJXL0_SI1795 -MJXL0_SX182 -MJXL0_SX272 -MJXL0_SX362 -MJXL0_SX452 -MJXL0_SX92 -MKAG0_SA2 -MKAG0_SI1609 -MKAG0_SI2239 -MKAG0_SX169 -MKAG0_SX30 -MKAG0_SX439 -MKAG0_SX79 -MKAH0_SA1 -MKAH0_SA2 -MKAH0_SI1528 -MKAH0_SI2158 -MKAH0_SI898 -MKAH0_SX268 -MKAH0_SX358 -MKAH0_SX448 -MKAH0_SX88 -MKAJ0_SA1 -MKAJ0_SI1414 -MKAJ0_SI2044 -MKAJ0_SI784 -MKAJ0_SX244 -MKAJ0_SX334 -MKAJ0_SX424 -MKAJ0_SX64 -MKAM0_SA2 -MKAM0_SI1316 -MKAM0_SX236 -MKAM0_SX416 -MKDB0_SI2132 -MKDB0_SI588 -MKDB0_SI872 -MKDB0_SX242 -MKDB0_SX332 -MKDB0_SX422 -MKDB0_SX62 -MKDD0_SA1 -MKDD0_SX127 -MKDD0_SX217 -MKDD0_SX307 -MKDD0_SX37 -MKDD0_SX397 -MKDT0_SA1 -MKDT0_SA2 -MKDT0_SI2153 -MKDT0_SI893 -MKDT0_SX173 -MKDT0_SX263 -MKDT0_SX353 -MKDT0_SX443 -MKDT0_SX83 -MKES0_SA2 -MKES0_SX263 -MKES0_SX353 -MKES0_SX443 -MKES0_SX83 -MKJO0_SA1 -MKJO0_SA2 -MKJO0_SI2147 -MKJO0_SX167 
-MKJO0_SX257 -MKJO0_SX424 -MKJO0_SX77 -MKLN0_SA1 -MKLN0_SA2 -MKLN0_SI1598 -MKLN0_SI2228 -MKLN0_SX158 -MKLN0_SX338 -MKLN0_SX428 -MKLN0_SX68 -MKLR0_SA1 -MKLR0_SI1059 -MKLR0_SI2319 -MKLR0_SX159 -MKLR0_SX249 -MKLR0_SX339 -MKLR0_SX429 -MKLR0_SX69 -MKLS0_SA2 -MKLS0_SI1533 -MKLS0_SX177 -MKLS0_SX267 -MKLS0_SX447 -MKLS1_SI1545 -MKLS1_SI2175 -MKLS1_SX105 -MKLS1_SX15 -MKLS1_SX195 -MKLS1_SX285 -MKLW0_SA2 -MKLW0_SI1844 -MKLW0_SI2201 -MKLW0_SX131 -MKLW0_SX221 -MKLW0_SX401 -MKLW0_SX41 -MKRG0_SA1 -MKRG0_SA2 -MKRG0_SI1491 -MKRG0_SI2121 -MKRG0_SX141 -MKRG0_SX231 -MKRG0_SX31 -MKRG0_SX51 -MKXL0_SA1 -MKXL0_SI1185 -MKXL0_SX105 -MKXL0_SX195 -MKXL0_SX285 -MLBC0_SA2 -MLBC0_SI609 -MLBC0_SX159 -MLBC0_SX339 -MLBC0_SX429 -MLBC0_SX69 -MLEL0_SI1876 -MLEL0_SX346 -MLEL0_SX76 -MLJC0_SA1 -MLJC0_SA2 -MLJC0_SI1855 -MLJC0_SI595 -MLJC0_SX235 -MLJC0_SX325 -MLJC0_SX55 -MLJH0_SI1324 -MLJH0_SX154 -MLJH0_SX334 -MLJH0_SX424 -MLNS0_SA1 -MLNS0_SA2 -MLNS0_SI1407 -MLNS0_SI777 -MLNS0_SX147 -MLNS0_SX237 -MLNS0_SX327 -MLNS0_SX417 -MLNS0_SX57 -MLSH0_SA1 -MLSH0_SA2 -MLSH0_SI2047 -MLSH0_SI787 -MLSH0_SX157 -MLSH0_SX337 -MLSH0_SX427 -MLSH0_SX67 -MMAA0_SI2105 -MMAA0_SX125 -MMAA0_SX215 -MMAA0_SX305 -MMAA0_SX395 -MMAB1_SA1 -MMAB1_SA2 -MMAB1_SI2124 -MMAB1_SX144 -MMAB1_SX414 -MMAB1_SX54 -MMAG0_SI496 -MMAG0_SX226 -MMAG0_SX406 -MMAG0_SX46 -MMAM0_SA1 -MMAM0_SA2 -MMAM0_SI1597 -MMAM0_SI1668 -MMAM0_SX247 -MMAM0_SX337 -MMAM0_SX67 -MMAR0_SA1 -MMAR0_SA2 -MMAR0_SI1336 -MMAR0_SI706 -MMAR0_SX436 -MMAR0_SX76 -MMBS0_SA1 -MMBS0_SA2 -MMBS0_SI1151 -MMBS0_SX251 -MMBS0_SX341 -MMBS0_SX431 -MMBS0_SX71 -MMCC0_SA1 -MMCC0_SI1968 -MMCC0_SI708 -MMCC0_SX168 -MMCC0_SX258 -MMCC0_SX348 -MMCC0_SX438 -MMCC0_SX78 -MMDB0_SA1 -MMDB0_SA2 -MMDB0_SI1358 -MMDB0_SI1617 -MMDB0_SX267 -MMDB0_SX357 -MMDB0_SX447 -MMDB0_SX87 -MMDG0_SI2035 -MMDG0_SX340 -MMDG0_SX430 -MMDG0_SX70 -MMDM0_SA1 -MMDM0_SA2 -MMDM0_SX231 -MMDM0_SX321 -MMDM0_SX411 -MMDM0_SX51 -MMDM1_SA1 -MMDM1_SI1650 -MMDM1_SI783 -MMDM1_SX243 -MMDS0_SA2 -MMDS0_SI1343 -MMDS0_SI1973 -MMDS0_SI713 -MMDS0_SX173 -MMDS0_SX263 -MMDS0_SX353 -MMDS0_SX443 -MMDS0_SX83 -MMEA0_SA2 -MMEA0_SI1388 -MMEA0_SI2018 -MMEA0_SI758 -MMEA0_SX218 -MMEA0_SX308 -MMEA0_SX38 -MMEB0_SA1 -MMEB0_SI1357 -MMEB0_SI1987 -MMEB0_SI727 -MMEB0_SX7 -MMEB0_SX97 -MMGC0_SA1 -MMGC0_SI1935 -MMGC0_SI2184 -MMGC0_SX315 -MMGC0_SX405 -MMGC0_SX45 -MMGG0_SA1 -MMGG0_SA2 -MMGG0_SI1709 -MMGG0_SI2339 -MMGG0_SX179 -MMGG0_SX359 -MMGG0_SX89 -MMGK0_SA1 -MMGK0_SA2 -MMGK0_SI1322 -MMGK0_SI1952 -MMGK0_SI692 -MMGK0_SX152 -MMGK0_SX242 -MMGK0_SX422 -MMJB1_SA1 -MMJB1_SI1408 -MMJB1_SI2038 -MMJB1_SI778 -MMJB1_SX148 -MMJB1_SX238 -MMJB1_SX328 -MMJB1_SX418 -MMJB1_SX58 -MMLM0_SA1 -MMLM0_SA2 -MMLM0_SI1527 -MMLM0_SI897 -MMLM0_SX177 -MMLM0_SX267 -MMLM0_SX357 -MMLM0_SX447 -MMLM0_SX87 -MMPM0_SA1 -MMPM0_SA2 -MMPM0_SI1061 -MMPM0_SI1691 -MMPM0_SI2321 -MMPM0_SX251 -MMPM0_SX341 -MMPM0_SX431 -MMPM0_SX71 -MMRP0_SA1 -MMRP0_SI2034 -MMRP0_SI717 -MMRP0_SI774 -MMRP0_SX234 -MMRP0_SX414 -MMRP0_SX54 -MMSM0_SA1 -MMSM0_SA2 -MMSM0_SI1736 -MMSM0_SX26 -MMSM0_SX296 -MMSM0_SX386 -MMVP0_SI1284 -MMVP0_SI1914 -MMVP0_SX114 -MMVP0_SX204 -MMVP0_SX294 -MMVP0_SX384 -MMWB0_SA2 -MMWB0_SI1619 -MMWB0_SX179 -MMWB0_SX269 -MMWS0_SA1 -MMWS0_SI1518 -MMWS0_SI559 -MMWS0_SI888 -MMWS0_SX258 -MMWS0_SX78 -MMWS1_SA1 -MMWS1_SA2 -MMWS1_SI1071 -MMWS1_SI2331 -MMWS1_SX261 -MMWS1_SX27 -MMWS1_SX351 -MMWS1_SX441 -MMWS1_SX81 -MMXS0_SA1 -MMXS0_SA2 -MMXS0_SI629 -MMXS0_SI876 -MMXS0_SX156 -MMXS0_SX336 -MMXS0_SX66 -MNET0_SA1 -MNET0_SA2 -MNET0_SI1446 -MNET0_SI2076 -MNET0_SX186 -MNET0_SX276 -MNET0_SX366 -MNET0_SX96 -MNTW0_SA1 -MNTW0_SI2328 -MNTW0_SX202 -MNTW0_SX258 -MNTW0_SX348 
-MPAR0_SA1 -MPAR0_SA2 -MPAR0_SI1576 -MPAR0_SX226 -MPAR0_SX406 -MPAR0_SX46 -MPEB0_SA1 -MPEB0_SA2 -MPEB0_SX150 -MPEB0_SX420 -MPEB0_SX60 -MPFU0_SA1 -MPFU0_SA2 -MPFU0_SI1888 -MPFU0_SX178 -MPFU0_SX268 -MPFU0_SX358 -MPFU0_SX88 -MPGH0_SA1 -MPGH0_SA2 -MPGH0_SI1554 -MPGH0_SI924 -MPGH0_SX204 -MPGH0_SX294 -MPGH0_SX384 -MPGR0_SA1 -MPGR0_SA2 -MPGR0_SI2040 -MPGR0_SI780 -MPGR0_SX150 -MPGR0_SX420 -MPGR0_SX60 -MPGR1_SA1 -MPGR1_SA2 -MPGR1_SI1269 -MPGR1_SI2129 -MPGR1_SX239 -MPGR1_SX329 -MPGR1_SX419 -MPGR1_SX59 -MPMB0_SX241 -MPPC0_SA2 -MPPC0_SI2042 -MPPC0_SI782 -MPPC0_SX152 -MPPC0_SX242 -MPPC0_SX332 -MPPC0_SX422 -MPPC0_SX62 -MPRB0_SA1 -MPRB0_SA2 -MPRB0_SI1205 -MPRB0_SX125 -MPRB0_SX215 -MPRB0_SX305 -MPRB0_SX35 -MPRB0_SX395 -MPRD0_SA2 -MPRD0_SI1431 -MPRD0_SI2061 -MPRK0_SA2 -MPRK0_SX17 -MPRK0_SX197 -MPRT0_SA2 -MPRT0_SI1210 -MPRT0_SI495 -MPRT0_SI580 -MPRT0_SX130 -MPRT0_SX220 -MPRT0_SX40 -MPRT0_SX400 -MPSW0_SA1 -MPSW0_SA2 -MPSW0_SI1697 -MPSW0_SI2327 -MPSW0_SX24 -MPSW0_SX257 -MPSW0_SX77 -MRAB0_SA1 -MRAB0_SA2 -MRAB0_SI1224 -MRAB0_SI594 -MRAB0_SX144 -MRAB0_SX234 -MRAB0_SX324 -MRAB0_SX414 -MRAB0_SX54 -MRAB1_SA1 -MRAB1_SA2 -MRAB1_SI1478 -MRAB1_SI2108 -MRAB1_SX218 -MRAB1_SX38 -MRAB1_SX398 -MRAI0_SI1954 -MRAI0_SX162 -MRAI0_SX252 -MRAI0_SX342 -MRAM0_SI1275 -MRAM0_SI1905 -MRAM0_SX105 -MRAM0_SX195 -MRAM0_SX285 -MRAM0_SX375 -MRAV0_SA1 -MRAV0_SA2 -MRAV0_SI1008 -MRAV0_SI1638 -MRAV0_SI2268 -MRAV0_SX108 -MRAV0_SX18 -MRAV0_SX198 -MRAV0_SX288 -MRAV0_SX378 -MRBC0_SA1 -MRBC0_SA2 -MRBC0_SI1665 -MRBC0_SI599 -MRBC0_SX149 -MRBC0_SX239 -MRBC0_SX59 -MRCG0_SA1 -MRCG0_SI2058 -MRCG0_SX258 -MRCG0_SX78 -MRCW0_SA2 -MRCW0_SI1371 -MRCW0_SI2001 -MRCW0_SX111 -MRCW0_SX201 -MRCW0_SX21 -MRCW0_SX381 -MRDD0_SA1 -MRDD0_SA2 -MRDD0_SI1050 -MRDD0_SI2310 -MRDD0_SX240 -MRDD0_SX330 -MRDM0_SA1 -MRDM0_SA2 -MRDM0_SI965 -MRDM0_SX155 -MRDM0_SX245 -MRDM0_SX425 -MRDS0_SA2 -MRDS0_SI1167 -MRDS0_SI1797 -MRDS0_SI537 -MRDS0_SX177 -MRDS0_SX267 -MRDS0_SX357 -MRDS0_SX447 -MRDS0_SX87 -MREE0_SA1 -MREE0_SA2 -MREE0_SI1734 -MREE0_SX114 -MREE0_SX204 -MREE0_SX294 -MREE0_SX384 -MREH1_SA2 -MREH1_SI2229 -MREH1_SX159 -MREH1_SX339 -MREH1_SX429 -MREM0_SA1 -MREM0_SI1591 -MREM0_SI961 -MREM0_SX151 -MREM0_SX241 -MREM0_SX331 -MREM0_SX421 -MREM0_SX61 -MREW1_SA1 -MREW1_SA2 -MREW1_SI1500 -MREW1_SI2130 -MREW1_SX150 -MREW1_SX240 -MREW1_SX330 -MREW1_SX420 -MREW1_SX60 -MRFK0_SA1 -MRFK0_SA2 -MRFK0_SI1706 -MRFK0_SI2336 -MRFK0_SX176 -MRFK0_SX266 -MRFK0_SX356 -MRFK0_SX86 -MRFL0_SA2 -MRFL0_SI1786 -MRFL0_SX346 -MRGM0_SA1 -MRGM0_SI1162 -MRGM0_SI1792 -MRGM0_SX416 -MRGM0_SX82 -MRGS0_SA1 -MRGS0_SI1986 -MRGS0_SX276 -MRGS0_SX366 -MRGS0_SX96 -MRHL0_SA1 -MRHL0_SA2 -MRHL0_SI1515 -MRHL0_SI2145 -MRHL0_SX165 -MRHL0_SX255 -MRHL0_SX75 -MRJB1_SI1020 -MRJB1_SX300 -MRJH0_SA1 -MRJH0_SI914 -MRJH0_SX259 -MRJH0_SX439 -MRJM0_SA1 -MRJM0_SA2 -MRJM0_SI1095 -MRJM0_SI1228 -MRJM0_SI1858 -MRJM0_SX238 -MRJM0_SX328 -MRJM0_SX418 -MRJM0_SX58 -MRJM1_SA1 -MRJM1_SI668 -MRJM1_SX218 -MRJM1_SX308 -MRJM1_SX38 -MRJM1_SX398 -MRJT0_SA1 -MRJT0_SI1805 -MRJT0_SX148 -MRJT0_SX238 -MRKM0_SA1 -MRKM0_SX187 -MRKM0_SX277 -MRKM0_SX7 -MRKM0_SX97 -MRLD0_SA1 -MRLD0_SI1594 -MRLD0_SI964 -MRLD0_SX244 -MRLD0_SX334 -MRLD0_SX64 -MRLJ0_SA2 -MRLJ0_SI1420 -MRLJ0_SI2050 -MRLJ0_SX160 -MRLJ0_SX430 -MRLJ0_SX70 -MRLJ1_SI1671 -MRLJ1_SI2332 -MRLJ1_SX141 -MRLJ1_SX231 -MRLJ1_SX411 -MRLJ1_SX51 -MRLK0_SA1 -MRLK0_SA2 -MRLK0_SI2140 -MRLK0_SX303 -MRLK0_SX33 -MRLK0_SX393 -MRLR0_SA1 -MRLR0_SA2 -MRLR0_SI1826 -MRLR0_SI566 -MRLR0_SX116 -MRLR0_SX206 -MRLR0_SX26 -MRLR0_SX296 -MRLR0_SX386 -MRMB0_SA1 -MRMB0_SI2211 -MRMB0_SI951 -MRMB0_SX141 -MRMB0_SX231 -MRMB0_SX321 -MRMB0_SX51 -MRMG0_SA2 
-MRMG0_SI1710 -MRMG0_SI2340 -MRMG0_SX180 -MRMG0_SX270 -MRMG0_SX360 -MRMG0_SX90 -MRMH0_SA1 -MRMH0_SA2 -MRMH0_SI1021 -MRMH0_SX211 -MRMH0_SX301 -MRMH0_SX31 -MRMH0_SX391 -MRML0_SI2051 -MRML0_SI791 -MRML0_SX431 -MRML0_SX71 -MRMS0_SA1 -MRMS0_SA2 -MRMS0_SI1113 -MRMS0_SI2100 -MRMS0_SX120 -MRMS0_SX210 -MRMS0_SX30 -MRMS0_SX300 -MRMS0_SX390 -MRPC1_SA1 -MRPC1_SA2 -MRPC1_SI1482 -MRPC1_SI2026 -MRPC1_SX132 -MRPC1_SX222 -MRPC1_SX312 -MRPC1_SX402 -MRPC1_SX42 -MRRE0_SI704 -MRRE0_SX254 -MRRE0_SX434 -MRSO0_SA1 -MRSO0_SA2 -MRSO0_SI1659 -MRSO0_SI2289 -MRSO0_SX219 -MRSO0_SX309 -MRSO0_SX399 -MRSP0_SA1 -MRSP0_SA2 -MRSP0_SI2059 -MRSP0_SI799 -MRSP0_SX169 -MRSP0_SX196 -MRSP0_SX439 -MRSP0_SX79 -MRTC0_SA1 -MRTC0_SA2 -MRTC0_SI2088 -MRTC0_SI828 -MRTC0_SX108 -MRTC0_SX18 -MRTC0_SX198 -MRTC0_SX288 -MRTJ0_SA2 -MRTJ0_SI1551 -MRTJ0_SI2032 -MRTJ0_SX322 -MRTJ0_SX412 -MRVG0_SA1 -MRVG0_SA2 -MRVG0_SI1770 -MRVG0_SI510 -MRVG0_SX150 -MRVG0_SX330 -MRVG0_SX420 -MRVG0_SX60 -MRWA0_SA1 -MRWA0_SA2 -MRWA0_SI1603 -MRWA0_SI2233 -MRWA0_SX253 -MRWA0_SX343 -MRWA0_SX433 -MRWS0_SA1 -MRWS0_SA2 -MRWS0_SX112 -MRWS0_SX202 -MRWS0_SX292 -MRXB0_SA1 -MRXB0_SI1585 -MRXB0_SX145 -MRXB0_SX235 -MRXB0_SX325 -MRXB0_SX55 -MSAH1_SA1 -MSAH1_SA2 -MSAH1_SI1049 -MSAH1_SI2309 -MSAH1_SX149 -MSAH1_SX239 -MSAH1_SX329 -MSAH1_SX419 -MSAH1_SX59 -MSAS0_SA1 -MSAS0_SA2 -MSAS0_SI2006 -MSAS0_SX26 -MSAS0_SX296 -MSAT0_SA2 -MSAT0_SI1526 -MSAT0_SI2156 -MSAT0_SI896 -MSAT0_SX176 -MSAT0_SX266 -MSAT0_SX356 -MSAT0_SX446 -MSAT0_SX86 -MSAT1_SA1 -MSAT1_SA2 -MSAT1_SI1073 -MSAT1_SI1703 -MSAT1_SI2333 -MSAT1_SX173 -MSAT1_SX353 -MSDB0_SA1 -MSDB0_SA2 -MSDB0_SI1007 -MSDB0_SI1637 -MSDB0_SI2267 -MSDB0_SX107 -MSDB0_SX17 -MSDH0_SA1 -MSDH0_SA2 -MSDH0_SI2113 -MSDH0_SX260 -MSDH0_SX350 -MSDS0_SA2 -MSDS0_SI1707 -MSDS0_SI2337 -MSDS0_SX177 -MSDS0_SX447 -MSDS0_SX87 -MSEM1_SA1 -MSEM1_SA2 -MSEM1_SX360 -MSEM1_SX450 -MSEM1_SX90 -MSES0_SA1 -MSES0_SA2 -MSES0_SI2216 -MSES0_SI2219 -MSES0_SX149 -MSES0_SX329 -MSES0_SX59 -MSFH0_SA2 -MSFH0_SI1216 -MSFH0_SI586 -MSFH0_SX226 -MSFH0_SX46 -MSFV0_SA1 -MSFV0_SA2 -MSFV0_SI1262 -MSFV0_SX182 -MSFV0_SX272 -MSFV0_SX452 -MSJK0_SA1 -MSJK0_SA2 -MSJK0_SI2226 -MSJK0_SI966 -MSJK0_SX156 -MSJK0_SX246 -MSJK0_SX426 -MSJK0_SX66 -MSMC0_SA1 -MSMC0_SA2 -MSMC0_SI1907 -MSMC0_SI647 -MSMC0_SX107 -MSMC0_SX17 -MSMC0_SX197 -MSMC0_SX287 -MSMC0_SX377 -MSMR0_SA1 -MSMR0_SA2 -MSMR0_SI1405 -MSMR0_SI775 -MSMR0_SX145 -MSMR0_SX235 -MSMR0_SX325 -MSMR0_SX55 -MSMS0_SA2 -MSMS0_SI2063 -MSMS0_SI803 -MSMS0_SX263 -MSMS0_SX353 -MSMS0_SX443 -MSRG0_SA2 -MSRG0_SI1851 -MSRG0_SI591 -MSRG0_SX141 -MSRG0_SX231 -MSRG0_SX321 -MSRG0_SX411 -MSRG0_SX51 -MSRR0_SA1 -MSRR0_SA2 -MSRR0_SI1131 -MSRR0_SX141 -MSRR0_SX231 -MSRR0_SX30 -MSRR0_SX411 -MSRR0_SX51 -MSTF0_SA1 -MSTF0_SA2 -MSTF0_SI1396 -MSTF0_SX136 -MSTF0_SX226 -MSTF0_SX406 -MSVS0_SA1 -MSVS0_SI1568 -MSVS0_SX128 -MSVS0_SX218 -MSVS0_SX38 -MTAB0_SA1 -MTAB0_SA2 -MTAB0_SI2202 -MTAB0_SI942 -MTAB0_SX132 -MTAB0_SX222 -MTAB0_SX402 -MTAB0_SX42 -MTAS0_SA1 -MTAS0_SA2 -MTAS0_SI1385 -MTAS0_SI2015 -MTAS0_SI755 -MTAS0_SX125 -MTAS0_SX305 -MTAT0_SA2 -MTAT0_SI1740 -MTAT0_SX120 -MTAT0_SX210 -MTAT0_SX30 -MTAT0_SX300 -MTAT1_SA1 -MTAT1_SA2 -MTAT1_SI1409 -MTAT1_SI1627 -MTAT1_SX239 -MTAT1_SX419 -MTBC0_SA1 -MTBC0_SA2 -MTBC0_SI1173 -MTBC0_SX183 -MTBC0_SX273 -MTBC0_SX347 -MTBC0_SX363 -MTBC0_SX93 -MTCS0_SA1 -MTCS0_SI1972 -MTCS0_SX172 -MTCS0_SX262 -MTCS0_SX352 -MTCS0_SX442 -MTDB0_SA1 -MTDB0_SA2 -MTDB0_SI2031 -MTDB0_SX141 -MTDB0_SX231 -MTDB0_SX321 -MTDB0_SX411 -MTDB0_SX51 -MTDP0_SI1274 -MTDP0_SI2151 -MTDP0_SX261 -MTDP0_SX441 -MTDP0_SX81 -MTER0_SI527 -MTER0_SX167 -MTER0_SX17 -MTER0_SX257 -MTER0_SX77 -MTJG0_SA2 
-MTJG0_SI1520 -MTJG0_SI890 -MTJG0_SX350 -MTJG0_SX440 -MTJG0_SX80 -MTJM0_SA1 -MTJM0_SA2 -MTJM0_SI1226 -MTJM0_SI655 -MTJM0_SX236 -MTJM0_SX326 -MTJM0_SX416 -MTJM0_SX56 -MTJS0_SA1 -MTJS0_SI1192 -MTJS0_SX112 -MTJS0_SX202 -MTJS0_SX22 -MTJS0_SX292 -MTJU0_SA1 -MTJU0_SA2 -MTJU0_SI2269 -MTJU0_SI760 -MTJU0_SX220 -MTJU0_SX310 -MTJU0_SX40 -MTKD0_SA1 -MTKD0_SA2 -MTKD0_SI1187 -MTKD0_SI1817 -MTKD0_SX17 -MTKD0_SX197 -MTKD0_SX377 -MTKP0_SA1 -MTKP0_SA2 -MTKP0_SX123 -MTKP0_SX213 -MTKP0_SX303 -MTKP0_SX33 -MTKP0_SX393 -MTLB0_SA2 -MTLB0_SI1764 -MTLB0_SI504 -MTLB0_SX144 -MTLB0_SX414 -MTLB0_SX54 -MTLC0_SA2 -MTLC0_SI847 -MTLC0_SX127 -MTLC0_SX217 -MTLC0_SX307 -MTLC0_SX37 -MTLC0_SX397 -MTML0_SA1 -MTML0_SA2 -MTML0_SI1065 -MTML0_SI1695 -MTML0_SX255 -MTML0_SX345 -MTML0_SX75 -MTMN0_SA1 -MTMN0_SX164 -MTMN0_SX254 -MTMN0_SX344 -MTMN0_SX74 -MTMT0_SA1 -MTMT0_SI1118 -MTMT0_SX128 -MTMT0_SX218 -MTMT0_SX308 -MTMT0_SX38 -MTMT0_SX398 -MTPF0_SA1 -MTPF0_SA2 -MTPF0_SI1235 -MTPF0_SI1865 -MTPF0_SI605 -MTPF0_SX155 -MTPF0_SX245 -MTPF0_SX335 -MTPF0_SX425 -MTPG0_SA1 -MTPG0_SA2 -MTPG0_SI2013 -MTPG0_SX123 -MTPG0_SX213 -MTPG0_SX33 -MTPG0_SX393 -MTPP0_SA1 -MTPP0_SA2 -MTPP0_SI2138 -MTPP0_SI878 -MTPP0_SX158 -MTPP0_SX248 -MTPP0_SX428 -MTPP0_SX68 -MTPR0_SA1 -MTPR0_SA2 -MTPR0_SI1600 -MTPR0_SI506 -MTPR0_SX250 -MTPR0_SX70 -MTQC0_SA2 -MTQC0_SI2071 -MTQC0_SX271 -MTQC0_SX361 -MTRC0_SA1 -MTRC0_SA2 -MTRC0_SI1623 -MTRC0_SI993 -MTRC0_SX170 -MTRC0_SX183 -MTRC0_SX273 -MTRC0_SX363 -MTRC0_SX93 -MTRR0_SA1 -MTRR0_SA2 -MTRR0_SI1548 -MTRR0_SI2178 -MTRR0_SX108 -MTRR0_SX18 -MTRR0_SX378 -MTRT0_SA1 -MTRT0_SI1857 -MTRT0_SI597 -MTRT0_SX147 -MTRT0_SX237 -MTRT0_SX417 -MTWH1_SA1 -MTWH1_SA2 -MTWH1_SI1512 -MTWH1_SI2142 -MTWH1_SI882 -MTWH1_SX162 -MTWH1_SX252 -MTWH1_SX342 -MTWH1_SX432 -MTXS0_SI1690 -MTXS0_SX250 -MTXS0_SX340 -MTXS0_SX70 -MVJH0_SA1 -MVJH0_SA2 -MVJH0_SI2186 -MVJH0_SX116 -MVJH0_SX26 -MVJH0_SX386 -MVLO0_SA2 -MVLO0_SI1147 -MVLO0_SI1777 -MVLO0_SX157 -MVLO0_SX247 -MVLO0_SX337 -MVLO0_SX427 -MVLO0_SX67 -MVRW0_SA1 -MVRW0_SI1485 -MVRW0_SI2115 -MVRW0_SI855 -MVRW0_SX315 -MVRW0_SX405 -MVRW0_SX45 -MWAC0_SA1 -MWAC0_SI2231 -MWAC0_SI971 -MWAC0_SX71 -MWAD0_SA1 -MWAD0_SA2 -MWAD0_SI1062 -MWAD0_SI1749 -MWAD0_SI2322 -MWAD0_SX162 -MWAD0_SX252 -MWAD0_SX342 -MWAR0_SA2 -MWAR0_SI2305 -MWAR0_SX145 -MWAR0_SX235 -MWAR0_SX325 -MWAR0_SX415 -MWAR0_SX55 -MWCH0_SA1 -MWCH0_SA2 -MWCH0_SI1622 -MWCH0_SX272 -MWCH0_SX362 -MWCH0_SX92 -MWDK0_SX266 -MWDK0_SX356 -MWDK0_SX446 -MWEM0_SA1 -MWEM0_SI1950 -MWEM0_SX240 -MWEM0_SX330 -MWEM0_SX60 -MWGR0_SA1 -MWGR0_SA2 -MWGR0_SI1606 -MWGR0_SI2236 -MWGR0_SI976 -MWGR0_SX166 -MWGR0_SX256 -MWGR0_SX436 -MWGR0_SX76 -MWRE0_SA1 -MWRE0_SI1687 -MWRE0_SI2317 -MWRE0_SX157 -MWRP0_SA2 -MWRP0_SI1525 -MWRP0_SI2073 -MWRP0_SX183 -MWRP0_SX3 -MWRP0_SX93 -MWSB0_SA1 -MWSB0_SA2 -MWSB0_SI1626 -MWSB0_SI2256 -MWSB0_SX186 -MWSB0_SX366 -MWSB0_SX6 -MWSB0_SX96 -MWSH0_SA1 -MWSH0_SA2 -MWSH0_SI2266 -MWSH0_SX346 -MWSH0_SX436 -MZMB0_SA2 -MZMB0_SI1166 -MZMB0_SI1796 -MZMB0_SI536 -MZMB0_SX176 -MZMB0_SX266 -MZMB0_SX356 -MZMB0_SX446 -MZMB0_SX86 diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_unmatched/train_text.uid b/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_unmatched/train_text.uid deleted file mode 100644 index 0e0c2517c..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_unmatched/train_text.uid +++ /dev/null @@ -1,1000 +0,0 @@ -FAEM0_SI762 -FAEM0_SX42 -FAJW0_SA1 -FAJW0_SX3 -FAJW0_SX93 -FALK0_SX186 -FALK0_SX6 -FALR0_SI1325 -FBAS0_SA1 -FBAS0_SX217 -FBCG1_SA1 -FBCG1_SX172 -FBCG1_SX442 -FBCH0_SX236 -FBCH0_SX416 -FBLV0_SA1 -FBLV0_SI1058 
-FBLV0_SX338 -FBLV0_SX68 -FBMH0_SA1 -FBMJ0_SI815 -FCAG0_SA1 -FCAG0_SX153 -FCAG0_SX243 -FCAJ0_SI1479 -FCAJ0_SX309 -FCDR1_SX106 -FCDR1_SX196 -FCEG0_SA2 -FCJF0_SA1 -FCJF0_SX127 -FCJS0_SI1607 -FCJS0_SI2237 -FCJS0_SX257 -FCKE0_SA2 -FCKE0_SX121 -FCLT0_SI2068 -FCLT0_SX448 -FCLT0_SX88 -FCMG0_SA2 -FCMG0_SI1872 -FCMG0_SX72 -FCMM0_SA1 -FCMM0_SA2 -FCMM0_SX183 -FCRZ0_SI2053 -FCRZ0_SX433 -FCYL0_SA1 -FCYL0_SX37 -FDAS1_SI2091 -FDAS1_SX201 -FDAS1_SX381 -FDAW0_SI1406 -FDFB0_SA1 -FDFB0_SA2 -FDFB0_SI2010 -FDFB0_SX58 -FDJH0_SX305 -FDML0_SA2 -FDML0_SX159 -FDML0_SX249 -FDML0_SX429 -FDMY0_SA2 -FDMY0_SX27 -FDNC0_SX198 -FDNC0_SX288 -FDTD0_SX211 -FDXW0_SA1 -FDXW0_SX251 -FDXW0_SX341 -FDXW0_SX71 -FEAC0_SX165 -FEAC0_SX75 -FEAR0_SI622 -FECD0_SX68 -FEEH0_SA1 -FEEH0_SI1742 -FEEH0_SI471 -FEEH0_SX122 -FEME0_SA1 -FEME0_SX155 -FEME0_SX65 -FETB0_SA1 -FETB0_SI1148 -FETB0_SX158 -FEXM0_SI1101 -FGCS0_SX136 -FGCS0_SX226 -FGCS0_SX316 -FGCS0_SX406 -FGDP0_SA1 -FGMB0_SI1775 -FGMB0_SX245 -FHLM0_SX390 -FHXS0_SA2 -FHXS0_SX445 -FJDM2_SA1 -FJDM2_SX232 -FJDM2_SX52 -FJHK0_SX302 -FJKL0_SX212 -FJKL0_SX392 -FJLG0_SI2306 -FJLR0_SA1 -FJRP1_SI2062 -FJRP1_SX82 -FJSK0_SA1 -FJSP0_SX264 -FJSP0_SX354 -FJSP0_SX444 -FJWB1_SA1 -FJWB1_SX345 -FJWB1_SX435 -FJXM0_SA1 -FJXM0_SI581 -FJXM0_SX401 -FJXP0_SA1 -FJXP0_SI1122 -FJXP0_SX132 -FKAA0_SX128 -FKAA0_SX398 -FKDE0_SA1 -FKDE0_SX151 -FKDE0_SX241 -FKDE0_SX421 -FKDE0_SX61 -FKDW0_SX397 -FKFB0_SA2 -FKFB0_SX348 -FKFB0_SX78 -FKKH0_SA1 -FKKH0_SA2 -FKKH0_SX120 -FKKH0_SX390 -FKLC0_SX355 -FKLC1_SI2308 -FKLC1_SX238 -FKLC1_SX328 -FKLC1_SX418 -FKLH0_SA2 -FKLH0_SX177 -FKSR0_SA1 -FKSR0_SA2 -FKSR0_SI1747 -FKSR0_SI487 -FKSR0_SX217 -FLAC0_SX451 -FLAG0_SA2 -FLAG0_SX114 -FLAG0_SX204 -FLAG0_SX24 -FLAG0_SX384 -FLEH0_SI1681 -FLEH0_SI2311 -FLEH0_SX331 -FLET0_SA1 -FLHD0_SI1827 -FLHD0_SX354 -FLJA0_SA1 -FLJA0_SI2338 -FLJD0_SI886 -FLJD0_SX76 -FLJG0_SA2 -FLKM0_SA2 -FLKM0_SI686 -FLKM0_SX260 -FLKM0_SX80 -FLMA0_SA1 -FLMA0_SI613 -FLMA0_SX433 -FLMA0_SX73 -FLMC0_SX22 -FLMK0_SI1035 -FLMK0_SX315 -FLMK0_SX405 -FLOD0_SI1917 -FLOD0_SX117 -FLOD0_SX171 -FLOD0_SX297 -FLTM0_SA1 -FLTM0_SI1070 -FLTM0_SI2330 -FMAH1_SA2 -FMAH1_SX159 -FMBG0_SA2 -FMBG0_SI2264 -FMEM0_SI747 -FMEM0_SX387 -FMJB0_SI547 -FMJB0_SX97 -FMJF0_SA2 -FMJU0_SX309 -FMJU0_SX399 -FMKC0_SI1702 -FMKC0_SX442 -FMKC0_SX82 -FMKF0_SX186 -FMPG0_SA2 -FNKL0_SI1522 -FNTB0_SI1203 -FNTB0_SI573 -FNTB0_SX303 -FPAB1_SI1471 -FPAB1_SX211 -FPAC0_SA2 -FPAD0_SA2 -FPAD0_SX356 -FPAD0_SX86 -FPAF0_SA2 -FPAF0_SX154 -FPAZ0_SA1 -FPAZ0_SA2 -FPAZ0_SX243 -FPJF0_SA1 -FPJF0_SX146 -FPJF0_SX56 -FPLS0_SI1590 -FPLS0_SX330 -FPMY0_SA1 -FPMY0_SX343 -FREH0_SA1 -FREH0_SA2 -FREH0_SX415 -FRJB0_SX347 -FRLL0_SX434 -FSAG0_SA1 -FSAG0_SX243 -FSAH0_SA1 -FSAH0_SA2 -FSAH0_SX164 -FSAH0_SX434 -FSBK0_SA2 -FSBK0_SI1069 -FSBK0_SX169 -FSCN0_SA2 -FSCN0_SI626 -FSCN0_SX266 -FSCN0_SX446 -FSCN0_SX86 -FSDC0_SA2 -FSDC0_SX142 -FSDC0_SX322 -FSDC0_SX52 -FSDJ0_SI485 -FSDJ0_SX215 -FSDJ0_SX305 -FSDJ0_SX395 -FSGF0_SX117 -FSJG0_SX130 -FSJK1_SA2 -FSJK1_SX125 -FSJK1_SX35 -FSJS0_SX181 -FSJW0_SI1963 -FSJW0_SX433 -FSKC0_SI1416 -FSKC0_SI786 -FSKC0_SX246 -FSKL0_SI1529 -FSKL0_SX449 -FSKP0_SA2 -FSLS0_SX156 -FSLS0_SX426 -FSMA0_SA2 -FSMA0_SX181 -FSMM0_SX144 -FSMM0_SX234 -FSMS1_SX244 -FSMS1_SX347 -FSPM0_SA2 -FSPM0_SX161 -FSPM0_SX71 -FSRH0_SI1931 -FSRH0_SI671 -FSRH0_SX221 -FSRH0_SX401 -FTAJ0_SI699 -FTAJ0_SX159 -FTAJ0_SX249 -FTAJ0_SX429 -FTBR0_SX21 -FTBW0_SA1 -FTMG0_SI1532 -FTMG0_SI2162 -FTMG0_SX452 -FVFB0_SA2 -FVFB0_SX132 -FVFB0_SX42 -FVKB0_SA1 -FVMH0_SA2 -FVMH0_SX116 -FVMH0_SX26 -MABC0_SI1620 -MABC0_SI2041 -MABC0_SI781 -MADC0_SX107 -MADC0_SX377 -MADD0_SA2 -MADD0_SI1295 
-MADD0_SX178 -MADD0_SX268 -MADD0_SX88 -MAEB0_SX450 -MAEO0_SA1 -MAFM0_SI939 -MAFM0_SX129 -MAFM0_SX309 -MAJP0_SA2 -MAKB0_SI1646 -MAKB0_SX26 -MAKB0_SX386 -MAKR0_SX362 -MAKR0_SX92 -MAPV0_SX213 -MARC0_SA2 -MARC0_SX108 -MARC0_SX18 -MARC0_SX198 -MARW0_SI1906 -MBAR0_SA1 -MBAR0_SX419 -MBAR0_SX59 -MBBR0_SI2315 -MBBR0_SX65 -MBCG0_SA1 -MBCG0_SI486 -MBEF0_SI1281 -MBEF0_SI1911 -MBEF0_SI651 -MBEF0_SX21 -MBEF0_SX381 -MBGT0_SA2 -MBGT0_SX261 -MBGT0_SX351 -MBGT0_SX441 -MBJV0_SA1 -MBJV0_SI617 -MBJV0_SX347 -MBMA0_SI592 -MBMA0_SX232 -MBMA0_SX52 -MBMA1_SI2214 -MBMA1_SX54 -MBML0_SA2 -MBML0_SI1169 -MBML0_SX89 -MBOM0_SA2 -MBOM0_SI2274 -MBOM0_SX294 -MBSB0_SA1 -MBSB0_SX3 -MBTH0_SA2 -MBTH0_SX122 -MBTH0_SX32 -MCAE0_SX277 -MCAL0_SA2 -MCAL0_SI1768 -MCDC0_SA1 -MCDC0_SX212 -MCDD0_SA2 -MCDD0_SI883 -MCDD0_SX253 -MCDD0_SX433 -MCDR0_SI1154 -MCEF0_SX235 -MCEF0_SX415 -MCEW0_SA2 -MCHL0_SX87 -MCLK0_SX310 -MCLM0_SA1 -MCLM0_SI2086 -MCLM0_SI826 -MCPM0_SA1 -MCPM0_SX114 -MCPM0_SX294 -MCPM0_SX384 -MCSS0_SI750 -MCTH0_SA1 -MCTH0_SX39 -MCXM0_SX91 -MDAC0_SA1 -MDAC0_SX181 -MDAC0_SX361 -MDAS0_SX6 -MDBB1_SX106 -MDBB1_SX16 -MDBB1_SX376 -MDBP0_SX168 -MDCD0_SI1415 -MDCD0_SX245 -MDCD0_SX425 -MDCM0_SX40 -MDCM0_SX400 -MDDC0_SI2049 -MDDC0_SI789 -MDDC0_SX159 -MDDC0_SX69 -MDED0_SA1 -MDED0_SA2 -MDEF0_SX123 -MDEF0_SX303 -MDHL0_SI1439 -MDHL0_SX269 -MDHL0_SX449 -MDHS0_SA1 -MDHS0_SA2 -MDHS0_SI1530 -MDHS0_SI2160 -MDJM0_SX105 -MDJM0_SX15 -MDKS0_SX436 -MDLB0_SA2 -MDLC0_SX405 -MDLC1_SA2 -MDLC1_SI2065 -MDLC1_SI2144 -MDLC1_SX445 -MDLC2_SI2244 -MDLC2_SX354 -MDLH0_SA2 -MDLM0_SI1234 -MDLM0_SI1864 -MDLM0_SX154 -MDLM0_SX424 -MDLR0_SA1 -MDLR0_SA2 -MDLR0_SI1863 -MDLR0_SI603 -MDLR0_SX153 -MDLR1_SA1 -MDLR1_SA2 -MDMA0_SI1430 -MDMA0_SX260 -MDMA0_SX80 -MDMT0_SA1 -MDMT0_SA2 -MDMT0_SI1832 -MDMT0_SX122 -MDMT0_SX32 -MDNS0_SA2 -MDNS0_SI2271 -MDNS0_SX201 -MDNS0_SX21 -MDPB0_SX416 -MDPK0_SI1053 -MDPK0_SX333 -MDPK0_SX423 -MDPS0_SI719 -MDPS0_SX359 -MDRD0_SA1 -MDRD0_SX32 -MDSJ0_SI2092 -MDSS0_SA2 -MDSS0_SX441 -MDSS1_SA1 -MDSS1_SI1327 -MDSS1_SI697 -MDSS1_SX157 -MDSS1_SX67 -MDTB0_SI1200 -MDTB0_SI1830 -MDTB0_SX120 -MDWD0_SA2 -MDWD0_SX270 -MDWD0_SX90 -MDWH0_SX215 -MDWH0_SX305 -MDWM0_SA1 -MDWM0_SA2 -MDWM0_SX16 -MDWM0_SX286 -MEAL0_SA2 -MEAL0_SI2177 -MEAL0_SX107 -MEAL0_SX347 -MEDR0_SA1 -MEDR0_SA2 -MEDR0_SI1374 -MEFG0_SA1 -MEGJ0_SA2 -MEGJ0_SX257 -MEGJ0_SX3 -MEJL0_SA1 -MEJL0_SX152 -MEJL0_SX242 -MEJS0_SI610 -MEJS0_SX160 -MEJS0_SX340 -MESG0_SX432 -MESJ0_SX187 -MESJ0_SX97 -MEWM0_SI718 -MEWM0_SX178 -MEWM0_SX88 -MFER0_SI862 -MFER0_SX142 -MFRM0_SX345 -MFRM0_SX435 -MFWK0_SI1879 -MFWK0_SX169 -MFXS0_SX54 -MFXV0_SA2 -MFXV0_SX105 -MGAF0_SA1 -MGAF0_SX22 -MGAF0_SX382 -MGAG0_SA2 -MGAK0_SX226 -MGAK0_SX46 -MGAR0_SX132 -MGAW0_SI535 -MGAW0_SX175 -MGES0_SA1 -MGES0_SI2111 -MGES0_SI851 -MGJC0_SA2 -MGJC0_SX75 -MGRL0_SI2127 -MGRL0_SI867 -MGRL0_SX147 -MGRP0_SA2 -MGSH0_SA2 -MGSH0_SI1806 -MGSH0_SX127 -MGSH0_SX276 -MGSH0_SX6 -MGSL0_SA1 -MGSL0_SI534 -MGSL0_SX264 -MGXP0_SX187 -MGXP0_SX7 -MHBS0_SX315 -MHBS0_SX45 -MHIT0_SA1 -MHJB0_SA1 -MHJB0_SI1017 -MHMG0_SX195 -MHMR0_SA1 -MHMR0_SI489 -MHRM0_SA1 -MHRM0_SI958 -MHRM0_SX148 -MHRM0_SX58 -MHXL0_SI1772 -MHXL0_SX242 -MILB0_SA2 -MJAC0_SX307 -MJAC0_SX71 -MJAE0_SX174 -MJAI0_SA1 -MJAI0_SA2 -MJBG0_SX62 -MJDA0_SI1031 -MJDA0_SX311 -MJDE0_SI463 -MJDG0_SA2 -MJDG0_SI1042 -MJDG0_SI1705 -MJDM0_SA1 -MJDM0_SI974 -MJEB0_SI656 -MJEB0_SX296 -MJEB1_SA2 -MJEB1_SX207 -MJEB1_SX387 -MJEE0_SA1 -MJEE0_SX247 -MJEE0_SX337 -MJFH0_SA2 -MJFH0_SI1107 -MJFR0_SX75 -MJHI0_SA1 -MJHI0_SX158 -MJJB0_SA1 -MJJB0_SX239 -MJJJ0_SX443 -MJJM0_SA2 -MJJM0_SI827 -MJJM0_SX107 -MJKR0_SA1 -MJKR0_SI571 -MJLB0_SX176 -MJLG1_SX292 
-MJLS0_SX106 -MJMA0_SA1 -MJMA0_SA2 -MJMD0_SA2 -MJMD0_SX308 -MJMD0_SX38 -MJMM0_SX85 -MJPG0_SI1191 -MJPG0_SX111 -MJPG0_SX201 -MJPG0_SX21 -MJPM0_SA2 -MJPM0_SX378 -MJPM1_SI2280 -MJPM1_SX401 -MJRA0_SA1 -MJRA0_SA2 -MJRA0_SI1236 -MJRA0_SI1866 -MJRA0_SX426 -MJRG0_SI1366 -MJRG0_SI1996 -MJRG0_SX376 -MJRH0_SX225 -MJRH1_SA1 -MJRH1_SI514 -MJRH1_SX154 -MJRH1_SX244 -MJRH1_SX424 -MJRK0_SA1 -MJRK0_SA2 -MJRK0_SI1662 -MJRK0_SX160 -MJRK0_SX250 -MJRK0_SX430 -MJRP0_SA1 -MJRP0_SA2 -MJRP0_SX225 -MJSR0_SA1 -MJSR0_SI1424 -MJSR0_SX344 -MJWG0_SA1 -MJWG0_SX265 -MJWS0_SI513 -MJWS0_SX153 -MJWS0_SX63 -MJWT0_SA1 -MJWT0_SX121 -MJWT0_SX211 -MJWT0_SX301 -MJWT0_SX31 -MJWT0_SX391 -MJXA0_SX427 -MJXL0_SI542 -MKAG0_SA1 -MKAG0_SX259 -MKAJ0_SA2 -MKAJ0_SX154 -MKAM0_SA1 -MKAM0_SX146 -MKAM0_SX326 -MKAM0_SX56 -MKDB0_SA1 -MKDB0_SA2 -MKDB0_SX152 -MKDD0_SA2 -MKES0_SA1 -MKES0_SI1253 -MKES0_SI1883 -MKES0_SX173 -MKJO0_SI1517 -MKJO0_SI887 -MKJO0_SX437 -MKLN0_SI968 -MKLN0_SX248 -MKLR0_SA2 -MKLR0_SI1689 -MKLS0_SA1 -MKLS0_SX357 -MKLS0_SX87 -MKLS1_SA1 -MKLS1_SA2 -MKLS1_SX375 -MKLW0_SA1 -MKRG0_SX411 -MKXL0_SA2 -MKXL0_SX15 -MKXL0_SX375 -MLBC0_SA1 -MLBC0_SI1869 -MLBC0_SX249 -MLEL0_SA1 -MLEL0_SA2 -MLEL0_SI1246 -MLEL0_SX256 -MLEL0_SX436 -MLJC0_SX145 -MLJC0_SX415 -MLJH0_SX64 -MLNS0_SI2037 -MMAA0_SA1 -MMAA0_SA2 -MMAA0_SX35 -MMAB1_SI1494 -MMAB1_SX234 -MMAG0_SA2 -MMAG0_SI1126 -MMAG0_SX316 -MMAM0_SI2227 -MMAM0_SX157 -MMAM0_SX427 -MMAR0_SX256 -MMBS0_SI1781 -MMCC0_SA2 -MMDB0_SX177 -MMDG0_SA1 -MMDG0_SA2 -MMDG0_SI520 -MMDG0_SX160 -MMDG0_SX250 -MMDM0_SI1941 -MMDM0_SI681 -MMDM0_SX141 -MMDM1_SA2 -MMDM1_SI2043 -MMDM1_SX423 -MMDM1_SX63 -MMDS0_SA1 -MMEA0_SA1 -MMEA0_SX128 -MMEA0_SX398 -MMEB0_SA2 -MMEB0_SX187 -MMEB0_SX367 -MMGC0_SA2 -MMGC0_SX135 -MMGC0_SX225 -MMGG0_SX269 -MMGK0_SX332 -MMGK0_SX62 -MMJB1_SA2 -MMRP0_SA2 -MMRP0_SX144 -MMSM0_SX116 -MMSM0_SX206 -MMVP0_SA1 -MMVP0_SA2 -MMWB0_SI989 -MMWB0_SX89 -MMWS0_SA2 -MMWS0_SX168 -MMWS0_SX348 -MMWS0_SX438 -MMWS1_SI1701 -MMXS0_SI2136 -MMXS0_SX246 -MMXS0_SX426 -MNET0_SI816 -MNET0_SX6 -MNTW0_SA2 -MNTW0_SX168 -MNTW0_SX78 -MPAR0_SI2206 -MPAR0_SI946 -MPAR0_SX136 -MPAR0_SX316 -MPEB0_SI1034 -MPEB0_SI1860 -MPEB0_SX240 -MPEB0_SX330 -MPFU0_SI628 -MPFU0_SX448 -MPGH0_SX114 -MPGH0_SX24 -MPGR0_SX240 -MPGR0_SX330 -MPGR1_SX149 -MPPC0_SA1 -MPRD0_SA1 -MPRD0_SX261 -MPRD0_SX351 -MPRD0_SX441 -MPRD0_SX81 -MPRK0_SI1727 -MPRK0_SX107 -MPRK0_SX377 -MPRT0_SA1 -MPRT0_SX310 -MPSW0_SI1067 -MPSW0_SX167 -MPSW0_SX437 -MRAB1_SX128 -MRAB1_SX308 -MRAI0_SA1 -MRAI0_SA2 -MRAI0_SX72 -MRAM0_SA1 -MRAM0_SA2 -MRAM0_SX15 -MRBC0_SI1859 -MRBC0_SX329 -MRBC0_SX419 -MRCG0_SI798 -MRCG0_SX168 -MRCW0_SA1 -MRCW0_SX291 -MRDD0_SI1680 -MRDD0_SX150 -MRDD0_SX277 -MRDD0_SX60 -MRDM0_SI1595 -MRDM0_SX65 -MRDS0_SA1 -MREE0_SX24 -MREH1_SX249 -MREH1_SX69 -MREM0_SA2 -MREW1_SI870 -MRFK0_SX446 -MRFL0_SA1 -MRFL0_SX256 -MRFL0_SX436 -MRFL0_SX76 -MRGM0_SA2 -MRGM0_SX262 -MRGS0_SA2 -MRGS0_SX186 -MRHL0_SI885 -MRHL0_SX345 -MRHL0_SX435 -MRJB1_SA1 -MRJB1_SA2 -MRJB1_SX210 -MRJB1_SX30 -MRJB1_SX390 -MRJH0_SA2 -MRJH0_SX307 -MRJH0_SX79 -MRJM0_SX148 -MRJM1_SA2 -MRJM1_SI1298 -MRJM1_SI1928 -MRJM1_SX128 -MRJT0_SA2 -MRJT0_SI1498 -MRJT0_SX328 -MRJT0_SX418 -MRKM0_SA2 -MRKM0_SX367 -MRLD0_SA2 -MRLD0_SI2224 -MRLD0_SX154 -MRLD0_SX424 -MRLJ0_SA1 -MRLJ0_SX250 -MRLJ0_SX340 -MRLJ1_SA1 -MRLJ1_SA2 -MRLJ1_SX321 -MRLK0_SI843 -MRLK0_SX123 -MRLK0_SX213 -MRMB0_SA2 -MRMB0_SI1581 -MRMB0_SX411 -MRMG0_SA1 -MRMG0_SI1080 -MRMG0_SX450 -MRMH0_SI1349 -MRMH0_SI2281 -MRMH0_SX121 -MRML0_SA2 -MRML0_SX341 -MRPC1_SI2112 -MRRE0_SA2 -MRRE0_SX164 -MRRE0_SX344 -MRRE0_SX74 -MRSO0_SX129 -MRSO0_SX39 -MRSP0_SX259 -MRTC0_SX378 -MRVG0_SI1140 
-MRVG0_SX240 -MRWA0_SI973 -MRWA0_SX163 -MRWA0_SX73 -MRWS0_SI1732 -MRWS0_SI472 -MRWS0_SX22 -MRWS0_SX382 -MRXB0_SA2 -MRXB0_SX415 -MSAH1_SI1679 -MSAS0_SX116 -MSAS0_SX206 -MSAS0_SX386 -MSAT0_SA1 -MSAT1_SX263 -MSAT1_SX443 -MSAT1_SX83 -MSDB0_SX197 -MSDB0_SX287 -MSDB0_SX377 -MSDH0_SI2240 -MSDH0_SX440 -MSDH0_SX80 -MSDS0_SA1 -MSEM1_SI1440 -MSEM1_SX180 -MSEM1_SX270 -MSES0_SI1589 -MSES0_SX239 -MSES0_SX419 -MSFH0_SX316 -MSFV0_SI1892 -MSFV0_SX362 -MSFV0_SX92 -MSMR0_SX415 -MSMS0_SA1 -MSMS0_SX173 -MSMS0_SX83 -MSRG0_SA1 -MSRG0_SI1221 -MSTF0_SI766 -MSTF0_SX316 -MSTF0_SX46 -MSVS0_SA2 -MSVS0_SX308 -MTAS0_SX215 -MTAS0_SX35 -MTAS0_SX395 -MTAT0_SX390 -MTAT1_SX59 -MTBC0_SI1803 -MTCS0_SA2 -MTCS0_SI2265 -MTCS0_SX82 -MTDP0_SA2 -MTER0_SA2 -MTER0_SI1787 -MTJG0_SA1 -MTJG0_SI2157 -MTJG0_SX260 -MTJM0_SI1856 -MTJM0_SX146 -MTJU0_SX130 -MTJU0_SX400 -MTKD0_SX107 -MTKD0_SX287 -MTKP0_SI1023 -MTLB0_SA1 -MTLB0_SX234 -MTLC0_SA1 -MTML0_SI2325 -MTML0_SX165 -MTMN0_SA2 -MTMN0_SI1064 -MTMN0_SI2324 -MTMN0_SX434 -MTMT0_SA2 -MTMT0_SI1748 -MTPF0_SX65 -MTPG0_SI1383 -MTPG0_SI753 -MTPG0_SX303 -MTPP0_SX338 -MTPR0_SX340 -MTQC0_SI480 -MTQC0_SX91 -MTRR0_SX198 -MTRR0_SX288 -MTRT0_SA2 -MTRT0_SX254 -MTRT0_SX57 -MTWH1_SX72 -MTXS0_SA1 -MTXS0_SA2 -MVJH0_SI926 -MVJH0_SX206 -MVJH0_SX296 -MVLO0_SA1 -MVRW0_SA2 -MVRW0_SX135 -MVRW0_SX225 -MWAC0_SA2 -MWAC0_SX341 -MWAC0_SX431 -MWAD0_SX432 -MWAD0_SX72 -MWAR0_SA1 -MWAR0_SI1675 -MWCH0_SI1895 -MWCH0_SI2252 -MWCH0_SX182 -MWCH0_SX452 -MWDK0_SA1 -MWDK0_SA2 -MWDK0_SI2017 -MWDK0_SI806 -MWDK0_SX176 -MWDK0_SX86 -MWEM0_SA2 -MWEM0_SI1320 -MWEM0_SI1393 -MWEM0_SX150 -MWGR0_SX346 -MWRE0_SX247 -MWRE0_SX337 -MWRE0_SX427 -MWRP0_SA1 -MWRP0_SX273 -MWRP0_SX363 -MWSB0_SX276 -MWSH0_SX256 -MWSH0_SX76 -MZMB0_SA1 diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_unmatched/valid.uid b/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_unmatched/valid.uid deleted file mode 100644 index e99edfe93..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/config/timit_unmatched/valid.uid +++ /dev/null @@ -1,620 +0,0 @@ -FAEM0_SI1392 -FAJW0_SI1263 -FAJW0_SI633 -FALK0_SI658 -FALR0_SX335 -FAPB0_SI1063 -FAPB0_SI2323 -FAPB0_SX433 -FBAS0_SI1472 -FBAS0_SI2066 -FBCG1_SX352 -FBCH0_SI959 -FBJL0_SI922 -FBLV0_SI1688 -FBMH0_SI1136 -FBMH0_SI970 -FBMJ0_SA1 -FBMJ0_SI1776 -FBMJ0_SI516 -FBMJ0_SX336 -FCDR1_SI1186 -FCDR1_SI1816 -FCDR1_SI556 -FCDR1_SX286 -FCKE0_SI1741 -FCKE0_SI481 -FCLT0_SI808 -FCMG0_SI1142 -FCMG0_SX432 -FCMM0_SI1957 -FCMM0_SX420 -FCYL0_SI667 -FCYL0_SX349 -FDAS1_SI1461 -FDAS1_SI831 -FDAW0_SI1271 -FDAW0_SI2036 -FDJH0_SI935 -FDKN0_SI1202 -FDKN0_SX181 -FDKN0_SX451 -FDMY0_SA1 -FDMY0_SI567 -FDMY0_SI714 -FDMY0_SX387 -FDNC0_SI1278 -FDNC0_SI1908 -FDTD0_SA1 -FDTD0_SX321 -FEAC0_SI615 -FEAR0_SX352 -FECD0_SA1 -FECD0_SI1418 -FECD0_SI788 -FEME0_SI875 -FEME0_SX335 -FEXM0_SA1 -FEXM0_SI482 -FEXM0_SX366 -FGDP0_SI988 -FGDP0_SX88 -FGMB0_SI1145 -FGMB0_SX335 -FGRW0_SA1 -FGRW0_SI1152 -FGRW0_SX162 -FGRW0_SX432 -FHLM0_SX120 -FHLM0_SX349 -FHXS0_SA1 -FHXS0_SI1075 -FHXS0_SI2302 -FHXS0_SX175 -FJDM2_SA2 -FJDM2_SX142 -FJEN0_SA1 -FJEN0_SX327 -FJEN0_SX417 -FJHK0_SI2282 -FJKL0_SI932 -FJLG0_SI1889 -FJLR0_SI1231 -FJRB0_SX402 -FJRP1_SA1 -FJRP1_SI1432 -FJRP1_SX262 -FJRP1_SX352 -FJSK0_SI1052 -FJSP0_SI1434 -FJWB1_SI748 -FJXM0_SX311 -FJXM0_SX41 -FJXP0_SI1752 -FKAA0_SA1 -FKDE0_SI1141 -FKDE0_SI1771 -FKDW0_SI1207 -FKDW0_SI1891 -FKFB0_SI1608 -FKFB0_SX438 -FKKH0_SI1290 -FKKH0_SI1920 -FKLC0_SI985 -FKLC0_SX175 -FKLC1_SI1048 -FKLH0_SI1257 -FKSR0_SX366 -FLAC0_SI1339 -FLAG0_SI1464 -FLAG0_SI834 -FLEH0_SI1051 -FLET0_SI507 -FLJA0_SI1078 -FLJA0_SX178 
-FLJD0_SI1516 -FLJG0_SI981 -FLJG0_SX171 -FLJG0_SX351 -FLKM0_SA1 -FLKM0_SI620 -FLKM0_SX350 -FLKM0_SX440 -FLMC0_SI1372 -FLMK0_SA1 -FLMK0_SI1229 -FLTM0_SX170 -FLTM0_SX350 -FLTM0_SX440 -FMAH1_SI879 -FMBG0_SI1160 -FMEM0_SA1 -FMEM0_SX333 -FMJB0_SI1177 -FMJF0_SI624 -FMJF0_SX174 -FMJF0_SX84 -FMJU0_SI1389 -FMKC0_SI1041 -FMKF0_SI1018 -FMPG0_SA1 -FMPG0_SI972 -FMPG0_SX162 -FMPG0_SX342 -FMPG0_SX432 -FNKL0_SI892 -FNTB0_SI679 -FPAB1_SA1 -FPAB1_SI2101 -FPAB1_SI841 -FPAC0_SI1921 -FPAC0_SI661 -FPAD0_SI716 -FPAD0_SX176 -FPAF0_SA1 -FPAF0_SI1054 -FPAZ0_SI2223 -FPAZ0_SI963 -FPJF0_SI1259 -FPJF0_SX352 -FPLS0_SI960 -FPMY0_SI1153 -FPMY0_SI523 -FREH0_SI1945 -FRLL0_SI805 -FSAG0_SI1323 -FSAG0_SX153 -FSAG0_SX333 -FSAG0_SX423 -FSAH0_SI614 -FSAH0_SX327 -FSAK0_SI1300 -FSBK0_SX349 -FSCN0_SA1 -FSCN0_SI705 -FSCN0_SX176 -FSDC0_SI1312 -FSDJ0_SI1115 -FSGF0_SI2187 -FSGF0_SI927 -FSJG0_SA1 -FSJG0_SA2 -FSJG0_SI940 -FSJG0_SX220 -FSJG0_SX40 -FSJG0_SX400 -FSJS0_SA1 -FSJS0_SX451 -FSJW0_SI1333 -FSKP0_SI1098 -FSMA0_SI991 -FSMA0_SX451 -FSMM0_SX324 -FSPM0_SI1241 -FSPM0_SX251 -FSRH0_SX311 -FSSB0_SI1712 -FSSB0_SX362 -FTBR0_SI1402 -FTBR0_SI921 -FTBW0_SI715 -FTBW0_SX175 -FTLG0_SI1743 -FTLG0_SI483 -FTMG0_SI902 -FVFB0_SI1510 -FVKB0_SX349 -FVMH0_SI1466 -FVMH0_SI836 -MADC0_SI1367 -MADC0_SI737 -MAEB0_SI1411 -MAEO0_SI1326 -MAJP0_SI1704 -MAJP0_SX174 -MAKB0_SA2 -MAKB0_SI1016 -MAKB0_SI2276 -MAKB0_SX116 -MAPV0_SI1293 -MAPV0_SI663 -MARW0_SX286 -MARW0_SX349 -MBBR0_SI1055 -MBBR0_SX335 -MBCG0_SI957 -MBCG0_SX327 -MBGT0_SI1841 -MBGT0_SX171 -MBMA0_SI1222 -MBMA1_SI954 -MBMA1_SX324 -MBTH0_SI2102 -MBWP0_SX349 -MCAE0_SI1447 -MCAE0_SI2077 -MCAE0_SI817 -MCAL0_SI1138 -MCDR0_SI1784 -MCDR0_SI524 -MCEF0_SI842 -MCEW0_SA1 -MCEW0_SI2072 -MCEW0_SI812 -MCEW0_SX362 -MCEW0_SX452 -MCHL0_SI1347 -MCHL0_SI1404 -MCLK0_SI2290 -MCLK0_SI650 -MCPM0_SI1824 -MCSS0_SI1380 -MCSS0_SI688 -MCTM0_SI1350 -MCTM0_SI1980 -MDAC0_SI631 -MDAS0_SI1896 -MDAS0_SI636 -MDBP0_SI528 -MDBP0_SX438 -MDCD0_SI785 -MDCD0_SX335 -MDCM0_SI1480 -MDDC0_SI1419 -MDED0_SI540 -MDEF0_SI1123 -MDEM0_SA1 -MDEM0_SI608 -MDEM0_SI800 -MDEM0_SX428 -MDHS0_SI900 -MDJM0_SI1455 -MDKS0_SX166 -MDKS0_SX346 -MDLB0_SI1306 -MDLB0_SX136 -MDLB0_SX406 -MDLC0_SI1395 -MDLC0_SI2025 -MDLC1_SI1435 -MDLH0_SX160 -MDLH0_SX430 -MDLM0_SI604 -MDLR0_SX333 -MDLR1_SI669 -MDMA0_SX170 -MDMA0_SX350 -MDMA0_SX440 -MDNS0_SI1011 -MDNS0_SI873 -MDPB0_SI1760 -MDPB0_SI866 -MDRD0_SI752 -MDSJ0_SI1462 -MDSJ0_SX438 -MDWD0_SI1260 -MDWH0_SA1 -MDWH0_SI1168 -MDWH0_SI665 -MDWM0_SI916 -MEDR0_SI2004 -MEFG0_SI491 -MEFG0_SI598 -MEGJ0_SA1 -MEGJ0_SI1337 -MEGJ0_SI707 -MEGJ0_SX167 -MEJS0_SI1240 -MESG0_SI702 -MESJ0_SI2039 -MFWK0_SX349 -MFXS0_SX324 -MFXV0_SI1005 -MFXV0_SI1342 -MGAF0_SI1282 -MGAG0_SI691 -MGAK0_SI1036 -MGAK0_SX136 -MGAR0_SX312 -MGAW0_SI1165 -MGES0_SX311 -MGJC0_SX435 -MGRL0_SX327 -MGRP0_SI1317 -MGRP0_SX327 -MGSH0_SI1176 -MGSH0_SI546 -MGSL0_SI797 -MGXP0_SI1087 -MGXP0_SI525 -MHBS0_SI945 -MHIT0_SI983 -MHMG0_SI735 -MHMR0_SI1692 -MILB0_SI903 -MJAC0_SI701 -MJAC0_SX251 -MJAE0_SX84 -MJAI0_SI682 -MJAI0_SI710 -MJDC0_SI531 -MJDE0_SA1 -MJDE0_SI1120 -MJDE0_SI490 -MJDE0_SX220 -MJDM0_SI1340 -MJDM0_SX170 -MJDM0_SX350 -MJEB0_SX170 -MJEB1_SI1467 -MJEB1_SI837 -MJFR0_SA1 -MJFR0_SX435 -MJHI0_SI1328 -MJJJ0_SI1163 -MJJM0_SI1251 -MJLB0_SI1616 -MJLS0_SI1726 -MJMA0_SI2125 -MJMD0_SI2288 -MJMM0_SI1255 -MJMM0_SX175 -MJPG0_SI1821 -MJPM0_SI1368 -MJPM1_SX311 -MJRA0_SX336 -MJRG0_SI736 -MJRG0_SX352 -MJRH0_SI1840 -MJRH1_SI1558 -MJRK0_SI880 -MJRP0_SI1845 -MJSR0_SI2054 -MJSR0_SI794 -MJWG0_SI813 -MJWG0_SI895 -MJWG0_SX175 -MJWS0_SX333 -MJWT0_SI1291 -MJWT0_SI1381 -MJXL0_SI1172 -MKAG0_SI979 -MKAH0_SX178 
-MKAM0_SI1250 -MKAM0_SI1465 -MKDD0_SI1567 -MKDD0_SI2197 -MKDD0_SI937 -MKDT0_SI814 -MKES0_SI623 -MKLS0_SI1437 -MKLS0_SI2067 -MKLS1_SI915 -MKLW0_SI1571 -MKLW0_SX311 -MKRG0_SI861 -MKXL0_SI1815 -MKXL0_SI1958 -MLBC0_SI1239 -MLEL0_SI616 -MLEL0_SX166 -MLJC0_SI1225 -MLJH0_SA1 -MLJH0_SA2 -MLJH0_SI1422 -MLJH0_SI694 -MLJH0_SX244 -MLSH0_SI1417 -MLSH0_SX247 -MMAA0_SI1588 -MMAA0_SI845 -MMAB1_SI864 -MMAB1_SX324 -MMAG0_SA1 -MMAG0_SI1756 -MMAG0_SX136 -MMAR0_SI1966 -MMAR0_SX166 -MMAR0_SX346 -MMBS0_SI521 -MMBS0_SX161 -MMCC0_SI1338 -MMDB0_SI987 -MMDG0_SI1780 -MMDM0_SI1311 -MMDM1_SX153 -MMDM1_SX333 -MMEB0_SX327 -MMGC0_SI1305 -MMGG0_SI1079 -MMGG0_SX449 -MMLM0_SI2150 -MMPM0_SX161 -MMRP0_SX324 -MMSM0_SI1106 -MMSM0_SI476 -MMVP0_SI654 -MMVP0_SX347 -MMWB0_SA1 -MMWB0_SI2249 -MMWB0_SX359 -MMWB0_SX449 -MNTW0_SI1068 -MNTW0_SI1698 -MPEB0_SI600 -MPFU0_SI1258 -MPGH0_SI675 -MPGR0_SI1410 -MPGR1_SI1499 -MPMB0_SA1 -MPMB0_SA2 -MPMB0_SI1501 -MPMB0_SI2131 -MPMB0_SI871 -MPMB0_SX151 -MPMB0_SX331 -MPMB0_SX421 -MPMB0_SX61 -MPPC0_SI1412 -MPRB0_SI1215 -MPRB0_SI575 -MPRD0_SI801 -MPRD0_SX171 -MPRK0_SA1 -MPRK0_SI1097 -MPRK0_SI467 -MPRK0_SX287 -MRAB0_SI1854 -MRAB1_SI848 -MRAI0_SI2052 -MRAI0_SI792 -MRAI0_SX432 -MRAM0_SI1951 -MRCG0_SA2 -MRCG0_SI1428 -MRCG0_SX348 -MRCG0_SX438 -MRCW0_SI741 -MRDM0_SI1044 -MRDM0_SX335 -MREE0_SI1104 -MREE0_SI1959 -MREH1_SA1 -MREH1_SI1599 -MREH1_SI969 -MREM0_SI511 -MRFK0_SI1076 -MRFL0_SI1156 -MRFL0_SI526 -MRFL0_SX166 -MRGM0_SI532 -MRGM0_SX172 -MRGM0_SX442 -MRGS0_SI1356 -MRGS0_SI726 -MRGS0_SX6 -MRJB1_SI1413 -MRJB1_SI2021 -MRJB1_SX120 -MRJH0_SI1519 -MRJH0_SI889 -MRJH0_SX169 -MRJT0_SI868 -MRJT0_SX58 -MRKM0_SI1267 -MRKM0_SI1391 -MRKM0_SI637 -MRLJ0_SI790 -MRLJ1_SI2301 -MRLK0_SI1468 -MRLR0_SI1196 -MRML0_SA1 -MRML0_SI1421 -MRML0_SX161 -MRML0_SX251 -MRMS0_SI2057 -MRRE0_SA1 -MRRE0_SI1334 -MRRE0_SI952 -MRSO0_SI1206 -MRSP0_SI1429 -MRTC0_SI1458 -MRTJ0_SA1 -MRTJ0_SI772 -MRTJ0_SX142 -MRTJ0_SX232 -MRTJ0_SX52 -MRWS0_SI1102 -MRXB0_SI2215 -MRXB0_SI955 -MSAS0_SI1376 -MSAS0_SI746 -MSDH0_SI980 -MSDH0_SX170 -MSDS0_SI1077 -MSDS0_SX267 -MSDS0_SX357 -MSEM1_SI2070 -MSEM1_SI810 -MSFH0_SA1 -MSFH0_SI1738 -MSFH0_SX136 -MSFH0_SX406 -MSFV0_SI632 -MSJK0_SI1596 -MSJK0_SX336 -MSMC0_SI509 -MSMR0_SI1150 -MSMS0_SI1433 -MSRR0_SI1761 -MSRR0_SI501 -MSTF0_SI852 -MSVS0_SI2198 -MSVS0_SI938 -MSVS0_SX398 -MTAB0_SI1572 -MTAB0_SX312 -MTAT0_SA1 -MTAT0_SI1110 -MTAT0_SI811 -MTAT1_SI779 -MTAT1_SX149 -MTAT1_SX329 -MTBC0_SI543 -MTCS0_SI712 -MTDB0_SI1401 -MTDB0_SI771 -MTDP0_SA1 -MTDP0_SI1521 -MTDP0_SX171 -MTDP0_SX351 -MTER0_SA1 -MTER0_SI1157 -MTER0_SX437 -MTJG0_SX170 -MTJS0_SA2 -MTJS0_SI1822 -MTJS0_SI562 -MTJS0_SX382 -MTJU0_SI2020 -MTKD0_SI630 -MTKP0_SI2283 -MTKP0_SI454 -MTLB0_SI1134 -MTLB0_SX324 -MTLC0_SI1313 -MTLC0_SI1477 -MTML0_SX435 -MTMN0_SI582 -MTMT0_SI488 -MTPP0_SI1508 -MTPR0_SI2230 -MTPR0_SX160 -MTPR0_SX430 -MTQC0_SA1 -MTQC0_SI1441 -MTQC0_SX181 -MTQC0_SX451 -MTRC0_SI589 -MTRR0_SI918 -MTRT0_SI1227 -MTXS0_SI1060 -MTXS0_SI2320 -MTXS0_SX160 -MTXS0_SX430 -MVJH0_SI1556 -MVLO0_SI517 -MWAC0_SI1601 -MWAC0_SX161 -MWAC0_SX251 -MWAR0_SI1045 -MWDK0_SI1436 -MWEM0_SX420 -MWRE0_SA2 -MWRE0_SI1057 -MWRE0_SX67 -MWRP0_SI1443 -MWSB0_SI996 -MWSH0_SI1426 -MWSH0_SI796 -MWSH0_SX166 diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/data/__init__.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/data/__init__.py deleted file mode 100644 index d0545627e..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/data/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .extracted_features_dataset import ExtractedFeaturesDataset -from .random_input_dataset import RandomInputDataset - - -__all__ = [ - "ExtractedFeaturesDataset", - "RandomInputDataset", -] diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/data/extracted_features_dataset.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/data/extracted_features_dataset.py deleted file mode 100644 index d6ee9c4a3..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/data/extracted_features_dataset.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import logging -import os -import contextlib - -import numpy as np -import torch - -from fairseq.data import FairseqDataset, data_utils - - -logger = logging.getLogger(__name__) - - -class ExtractedFeaturesDataset(FairseqDataset): - def __init__( - self, - path, - split, - min_length=3, - max_length=None, - labels=None, - label_dict=None, - shuffle=True, - sort_by_length=True, - ): - super().__init__() - - self.min_length = min_length - self.max_length = max_length - self.shuffle = shuffle - self.sort_by_length = sort_by_length - self.label_dict = label_dict - - if labels is not None: - assert label_dict is not None - - self.sizes = [] - self.offsets = [] - self.labels = [] - - path = os.path.join(path, split) - data_path = path - self.data = np.load(data_path + ".npy", mmap_mode="r") - - offset = 0 - skipped = 0 - - if not os.path.exists(path + f".{labels}"): - labels = None - - with open(data_path + ".lengths", "r") as len_f, open( - path + f".{labels}", "r" - ) if labels is not None else contextlib.ExitStack() as lbl_f: - for line in len_f: - length = int(line.rstrip()) - lbl = None if labels is None else next(lbl_f).rstrip().split() - if length >= min_length and ( - max_length is None or length <= max_length - ): - self.sizes.append(length) - self.offsets.append(offset) - if lbl is not None: - self.labels.append(lbl) - else: - # count filtered-out samples so the log line below is accurate - skipped += 1 - offset += length - - self.sizes = np.asarray(self.sizes) - self.offsets = np.asarray(self.offsets) - - logger.info(f"loaded {len(self.offsets)}, skipped {skipped} samples") - - def __getitem__(self, index): - offset = self.offsets[index] - end = self.sizes[index] + offset - feats = torch.from_numpy(self.data[offset:end].copy()).float() - - res = {"id": index, "features": feats} - if len(self.labels) > 0: - res["target"] = self.label_dict.encode_line( - self.labels[index], - line_tokenizer=lambda x: x, - append_eos=False, - ) - - return res - - def __len__(self): - return len(self.sizes) - - def collater(self, samples): - if len(samples) == 0: - return {} - - features = [s["features"] for s in samples] - sizes = [len(s) for s in features] - - target_size = max(sizes) - - collated_features = features[0].new_zeros( - len(features), target_size, features[0].size(-1) - ) - padding_mask = torch.BoolTensor(collated_features.shape[:-1]).fill_(False) - for i, (f, size) in enumerate(zip(features, sizes)): - collated_features[i, :size] = f - padding_mask[i, size:] = True - - res = { - "id": torch.LongTensor([s["id"] for s in samples]), - "net_input": {"features": collated_features, "padding_mask": padding_mask}, - } - - if len(self.labels) > 0: - target = data_utils.collate_tokens( - [s["target"] for s in samples], -
pad_idx=self.label_dict.pad(), - left_pad=False, - ) - res["target"] = target - return res - - def num_tokens(self, index): - return self.size(index) - - def size(self, index): - return self.sizes[index] - - def ordered_indices(self): - """Return an ordered list of indices. Batches will be constructed based - on this order.""" - if self.shuffle: - order = [np.random.permutation(len(self))] - else: - order = [np.arange(len(self))] - - if self.sort_by_length: - order.append(self.sizes) - return np.lexsort(order)[::-1] - else: - return order[0] diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/data/random_input_dataset.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/data/random_input_dataset.py deleted file mode 100644 index 886505616..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/data/random_input_dataset.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import random -from typing import List - -from fairseq.data import BaseWrapperDataset, data_utils - - -class RandomInputDataset(BaseWrapperDataset): - def __init__( - self, - dataset, - random_input_dataset, - input_key_path: List[str], - add_to_input, - pad_idx, - ): - super().__init__(dataset) - self.random_input_dataset = random_input_dataset - if isinstance(input_key_path, str): - input_key_path = [input_key_path] - assert len(input_key_path) > 0 - self.input_key_path = input_key_path - self.add_to_input = add_to_input - self.pad_idx = pad_idx - - def get_target(self, item): - target_loc = item - for p in self.input_key_path[:-1]: - target_loc = target_loc[p] - return self.input_key_path[-1], target_loc - - def get_target_value(self, item): - k, target_loc = self.get_target(item) - return target_loc[k] - - def __getitem__(self, index): - item = self.dataset[index] - k, target_loc = self.get_target(item) - target_loc[k] = random.choice(self.random_input_dataset) - return item - - def collater(self, samples): - collated = self.dataset.collater(samples) - if len(collated) == 0: - return collated - indices = set(collated["id"].tolist()) - - random_inputs = data_utils.collate_tokens( - [self.get_target_value(s) for s in samples if s["id"] in indices], - pad_idx=self.pad_idx, - left_pad=False, - ) - k, target_loc = self.get_target( - collated if not self.add_to_input else collated["net_input"] - ) - target_loc[k] = random_inputs - - return collated diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/README.md b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/README.md deleted file mode 100644 index 314984fcb..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/README.md +++ /dev/null @@ -1,56 +0,0 @@ -# Self-Training with Kaldi HMM Models -This folder contains recipes for self-training on pseudo phone transcripts and -decoding into phones or words with [kaldi](https://github.com/kaldi-asr/kaldi). - -To start, download and install kaldi following its instructions, and place this -folder in `path/to/kaldi/egs`.
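As a minimal sketch, that setup step could look like the following; the clone location and the `w2vu_selftrain` directory name are illustrative assumptions, not fixed by the recipe:

```bash
# Fetch kaldi and build it following its own INSTALL instructions.
git clone https://github.com/kaldi-asr/kaldi.git

# Place this folder alongside the stock kaldi recipes; the target
# name "w2vu_selftrain" is an arbitrary choice for illustration.
cp -r fairseq/examples/wav2vec/unsupervised/kaldi_self_train \
      kaldi/egs/w2vu_selftrain

# train.sh, decode_phone.sh, and decode_word_step{1,2}.sh live in st/.
cd kaldi/egs/w2vu_selftrain/st
```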
- -## Training -Assuming the following has been prepared: -- `w2v_dir`: contains features `{train,valid}.{npy,lengths}`, real transcripts `{train,valid}.${label}`, and dict `dict.${label}.txt` -- `lab_dir`: contains pseudo labels `{train,valid}.txt` -- `arpa_lm`: Arpa-format n-gram phone LM for decoding -- `arpa_lm_bin`: Arpa-format n-gram phone LM for unsupervised model selection to be used with KenLM - -Set these variables in `train.sh`, as well as `out_dir`, the output directory, -and then run it. - -The output will be: -``` -==== WER w.r.t. real transcript (select based on unsupervised metric) -INFO:root:./out/exp/mono/decode_valid/scoring/14.0.0.tra.txt: score 0.9178 wer 28.71% lm_ppl 24.4500 gt_wer 25.57% -INFO:root:./out/exp/tri1/decode_valid/scoring/17.1.0.tra.txt: score 0.9257 wer 26.99% lm_ppl 30.8494 gt_wer 21.90% -INFO:root:./out/exp/tri2b/decode_valid/scoring/8.0.0.tra.txt: score 0.7506 wer 23.15% lm_ppl 25.5944 gt_wer 15.78% -``` -where `wer` is the word error rate with respect to the pseudo label, `gt_wer` to -the ground truth label, `lm_ppl` the language model perplexity of HMM predicted -transcripts, and `score` is the unsupervised metric for model selection. We -choose the model and LM parameter with the lowest score. In the -example above, it is `tri2b`, `8.0.0`. - - -## Decoding into Phones -In `decode_phone.sh`, set `out_dir` the same as used in `train.sh`, set -`dec_exp` and `dec_lmparam` to the selected model and LM parameter (e.g. -`tri2b` and `8.0.0` in the above example). `dec_script` needs to be set -according to `dec_exp`: for mono/tri1/tri2b, use `decode.sh`; for tri3b, use -`decode_fmllr.sh`. - -The output will be saved at `out_dir/dec_data`. - - -## Decoding into Words -`decode_word_step1.sh` prepares WFSTs for word decoding. Besides the variables -mentioned above, set -- `wrd_arpa_lm`: Arpa-format n-gram word LM for decoding -- `wrd_arpa_lm_bin`: Arpa-format n-gram word LM for unsupervised model selection - -`decode_word_step1.sh` decodes the `train` and `valid` splits into words and runs -unsupervised model selection using the `valid` split. The output looks like: -``` -INFO:root:./out/exp/tri2b/decodeword_valid/scoring/17.0.0.tra.txt: score 1.8693 wer 24.97% lm_ppl 1785.5333 gt_wer 31.45% -``` - -After determining the LM parameter (`17.0.0` in the example above), set it in -`decode_word_step2.sh` and run it. The output will be saved at -`out_dir/dec_data_word`. diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/cmd.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/cmd.sh deleted file mode 100644 index e74953194..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/cmd.sh +++ /dev/null @@ -1,15 +0,0 @@ -# you can change cmd.sh depending on what type of queue you are using. -# If you have no queueing system and want to run on a local machine, you -# can change all instances 'queue.pl' to run.pl (but be careful and run -# commands one by one: most recipes will exhaust the memory on your -# machine). queue.pl works with GridEngine (qsub). slurm.pl works -# with slurm. Different queues are configured differently, with different -# queue names and different ways of specifying things like memory; -# to account for these differences you can create and edit the file -# conf/queue.conf to match your queue's configuration.
Search for -# conf/queue.conf in http://kaldi-asr.org/doc/queue.html for more information, -# or search for the string 'default_config' in utils/queue.pl or utils/slurm.pl. - -export train_cmd="run.pl --mem 2G" -export decode_cmd="run.pl --mem 4G" -export mkgraph_cmd="run.pl --mem 8G" diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_phone.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_phone.sh deleted file mode 100644 index 947342a0b..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_phone.sh +++ /dev/null @@ -1,33 +0,0 @@ -#!/bin/bash - -# decode into phones (and prepare a new data directory for HMM outputs) - -. ./path.sh - -set -eu - -out_dir= # same as in train.sh -dec_lmparam= # LM hyperparameters (e.g., 7.0.0) -dec_exp= -dec_script= -dec_splits="train valid" -dec_data_dir=$out_dir/dec_data # where to write HMM output - -data_dir=${out_dir}/data - -local/decode.sh --nj 40 --graph_name graph \ - --val_sets "$dec_splits" --decode_script $dec_script \ - $out_dir/exp/$dec_exp $data_dir $data_dir/lang_test - -if [ ! -z $dec_lmparam ]; then - for x in $dec_splits; do - mkdir -p $dec_data_dir/$x - cp $data_dir/$x/{feats.scp,cmvn.scp,utt2spk,spk2utt} $dec_data_dir/$x/ - - tra=$out_dir/exp/$dec_exp/decode_${x}/scoring/${dec_lmparam}.tra - cat $tra | utils/int2sym.pl -f 2- $data_dir/lang/words.txt | \ - sed 's:<UNK>::g' | sed 's:<SIL>::g' > $dec_data_dir/${x}/text - utils/fix_data_dir.sh $dec_data_dir/${x} - echo "WER on ${x} is" $(compute-wer ark:$data_dir/${x}_gt/text ark:$dec_data_dir/$x/text | cut -d" " -f2-) - done -fi diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_word_step1.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_word_step1.sh deleted file mode 100644 index c1276bbe4..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_word_step1.sh +++ /dev/null @@ -1,46 +0,0 @@ -#!/bin/bash - -# prepare word WFSTs, reference data, and decode - -set -eu - -w2v_dir= # same as in train.sh -out_dir= # same as in train.sh -lexicon= # word to phone mapping -wrd_arpa_lm= # word LM -wrd_arpa_lm_bin= # word LM for KenLM, used in unsupervised selection - -dec_exp= # what HMM stage to decode (e.g., tri3b) -dec_script= # what decoding script to use (e.g., steps/decode_fmllr.sh) -phn_label=phnc -wrd_label=wrd -dec_suffix=word -dec_splits="train valid" -valid_split="valid" - -data_dir=$out_dir/data -wrd_data_dir=$out_dir/data_word - -lexicon_clean=$(mktemp) -cat $lexicon | sort | uniq > $lexicon_clean -local/prepare_lang_word.sh $w2v_dir/dict.${phn_label}.txt $data_dir $lexicon_clean && rm $lexicon_clean -local/prepare_lm.sh --langdir $data_dir/lang_word --lmdir $data_dir/lang_test_word $wrd_arpa_lm $data_dir - -for x in $dec_splits; do - x_gt=${x}_gt - mkdir -p $wrd_data_dir/$x_gt - cp $data_dir/$x_gt/{feats.scp,cmvn.scp,utt2spk,spk2utt} $wrd_data_dir/$x_gt/ - python local/copy_aligned_text.py < $w2v_dir/$x.$wrd_label > $wrd_data_dir/$x_gt/text -done - -local/decode.sh --nj 40 --graph_name graph${dec_suffix} --decode_suffix $dec_suffix \ - --val_sets "$dec_splits" --decode_script $dec_script \ - $out_dir/exp/$dec_exp $data_dir $data_dir/lang_test_word - -local/unsup_select_decode_word.sh \ - --split $valid_split --kenlm_path $wrd_arpa_lm_bin \ - --ref_txt $wrd_data_dir/${valid_split}_gt/text \ - --psd_txt $data_dir/${valid_split}/text \ - --dec_name decode${dec_suffix} --graph_name 
graph${dec_suffix} \ - --phonemize_lexicon $data_dir/local/dict_word/lexicon.txt \ - $out_dir/exp diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_word_step2.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_word_step2.sh deleted file mode 100644 index 59a6cbb12..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/decode_word_step2.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash - -# prepare a new data directory of HMM word output - -. ./path.sh - -set -eu - -out_dir= # same as in train.sh -dec_lmparam= # LM hyperparameters (e.g., 7.0.0) - -dec_exp=tri3b # what HMM stage to decode (e.g., tri3b) -dec_suffix=word -dec_splits="train valid" -dec_data_dir=$out_dir/dec_data_word # where to write HMM output - -data_dir=$out_dir/data -wrd_data_dir=$out_dir/data_word - -for x in $dec_splits; do - mkdir -p $dec_data_dir/$x - cp $data_dir/$x/{feats.scp,cmvn.scp,utt2spk,spk2utt} $dec_data_dir/$x/ - - tra=$out_dir/exp/$dec_exp/decode${dec_suffix}_${x}/scoring/${dec_lmparam}.tra - cat $tra | utils/int2sym.pl -f 2- $data_dir/lang_word/words.txt | \ - sed 's:<UNK>::g' | sed 's:<SIL>::g' > $dec_data_dir/$x/text - utils/fix_data_dir.sh $dec_data_dir/$x - echo "WER on $x is" $(compute-wer ark:$wrd_data_dir/${x}_gt/text ark:$dec_data_dir/$x/text | cut -d" " -f2-) -done - diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/copy_aligned_text.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/copy_aligned_text.py deleted file mode 100644 index 5f4faa992..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/copy_aligned_text.py +++ /dev/null @@ -1,4 +0,0 @@ -import sys - -for idx, line in enumerate(sys.stdin): - print(f"utt{idx:010d} {line}", end='') \ No newline at end of file diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/decode.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/decode.sh deleted file mode 100644 index 811cb63c8..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/decode.sh +++ /dev/null @@ -1,38 +0,0 @@ -#!/bin/bash - -set -u - -val_sets="dev_other" -graph_name=graph -decode_suffix="" -decode_script="steps/decode_fmllr.sh" -decode_args="" -nj=60 - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -set -x -exp_dir=$1 -data_root=$2 -lang_test=$3 - -graph=$exp_dir/$graph_name - -if [ ! -d $graph ]; then - utils/mkgraph.sh $lang_test $exp_dir $graph -fi - -for part in $val_sets; do - dec_dir=$exp_dir/decode${decode_suffix}_${part} - if [ ! -d $dec_dir ]; then - echo "decoding $part for $exp_dir" - $decode_script --nj $nj --cmd "$decode_cmd" $decode_args \ - $graph $data_root/$part $dec_dir & - else - echo "$dec_dir exists. 
skip" - fi -done - -wait diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_data_from_w2v.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_data_from_w2v.py deleted file mode 100644 index 66954ea5c..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_data_from_w2v.py +++ /dev/null @@ -1,56 +0,0 @@ -import kaldi_io -import numpy as np -import os - - -def get_parser(): - import argparse - parser = argparse.ArgumentParser() - parser.add_argument("w2v_dir", help="wav2vec feature and text directory") - parser.add_argument("tar_root", help="output data directory in kaldi's format") - parser.add_argument("split", help="name of the subset") - parser.add_argument("--label", default="", help="if specified, copy labels too") - return parser - -def main(): - parser = get_parser() - args = parser.parse_args() - - tar_dir = os.path.join(args.tar_root, args.split) - os.makedirs(tar_dir, exist_ok=True) - - lengths_path = os.path.join(args.w2v_dir, f"{args.split}.lengths") - with open(lengths_path) as f: - lengths = [int(line.rstrip()) for line in f] - offsets = [0] + np.cumsum(lengths[:-1]).tolist() - feats = np.load( - os.path.join(args.w2v_dir, f"{args.split}.npy"), - mmap_mode="r" - ) - assert feats.shape[0] == sum(lengths), \ - f"lengths mismatch {feats.shape[0]} != {sum(lengths)}" - - ark_path = os.path.join(tar_dir, "feats.ark") - scp_path = os.path.join(tar_dir, "feats.scp") - wspec = f"ark:| copy-feats --compress=true ark:- ark,scp:{ark_path},{scp_path}" - with kaldi_io.open_or_fd(wspec, "wb") as f: - for idx, (offset, length) in enumerate(zip(offsets, lengths)): - feat = feats[offset:offset+length] - kaldi_io.write_mat(f, feat, key=f"utt{idx:010d}") - - u2s_path = os.path.join(tar_dir, "utt2spk") - s2u_path = os.path.join(tar_dir, "spk2utt") - with open(u2s_path, "w") as f_u2s, open(s2u_path, "w") as f_s2u: - for idx in range(len(lengths)): - f_u2s.write(f"utt{idx:010d} utt{idx:010d}\n") - f_s2u.write(f"utt{idx:010d} utt{idx:010d}\n") - - if bool(args.label): - lab_path = os.path.join(args.w2v_dir, f"{args.split}.{args.label}") - txt_path = os.path.join(tar_dir, "text") - with open(lab_path) as f_lab, open(txt_path, "w") as f_txt: - for idx, line in enumerate(f_lab): - f_txt.write(f"utt{idx:010d} {line}") - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang.sh deleted file mode 100644 index e9a80001e..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang.sh +++ /dev/null @@ -1,37 +0,0 @@ -#!/bin/bash - -sil_prob=0.5 -num_sil_states=3 -num_nonsil_states=1 - -. ./cmd.sh -. ./path.sh -. 
parse_options.sh - -set -eux - -dict=$1 -data_dir=$2 - -dict_dir=$data_dir/local/dict -tmplm_dir=$data_dir/local/lang_tmp -lm_dir=$data_dir/lang - -mkdir -p $dict_dir $tmplm_dir $lm_dir - -# prepare dict -echo "SIL" > $dict_dir/silence_phones.txt -echo "SIL" > $dict_dir/optional_silence.txt -awk '{print $1}' $dict > $dict_dir/nonsilence_phones.txt - -echo "SIL SIL" > $dict_dir/lexicon.txt -echo "<UNK> SIL" >> $dict_dir/lexicon.txt -awk '{print $1" "$1}' $dict >> $dict_dir/lexicon.txt - -echo "SIL" > $dict_dir/extra_questions.txt -awk '{printf $1" "} END {printf "\n"}' $dict >> $dict_dir/extra_questions.txt - -# prepare lang -utils/prepare_lang.sh --sil-prob $sil_prob --position-dependent-phones false \ - --num_sil_states $num_sil_states --num_nonsil_states $num_nonsil_states \ - $dict_dir "<UNK>" $tmplm_dir $lm_dir diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang_word.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang_word.sh deleted file mode 100644 index a7ea3877b..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lang_word.sh +++ /dev/null @@ -1,35 +0,0 @@ -#!/bin/bash - -num_sil_states=3 -num_nonsil_states=1 - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -set -eux - -dict=$1 -data_dir=$2 -lexicon=$3 - -dict_dir=$data_dir/local/dict_word -tmplm_dir=$data_dir/local/lang_tmp_word -lm_dir=$data_dir/lang_word - -mkdir -p $dict_dir $tmplm_dir $lm_dir - -# prepare dict -echo "SIL" > $dict_dir/silence_phones.txt -echo "SIL" > $dict_dir/optional_silence.txt -awk '{print $1}' $dict > $dict_dir/nonsilence_phones.txt - -(echo "!SIL SIL"; echo "<UNK> SIL";) | cat - $lexicon > $dict_dir/lexicon.txt - -echo "SIL" > $dict_dir/extra_questions.txt -awk '{printf $1" "} END {printf "\n"}' $dict >> $dict_dir/extra_questions.txt - -# prepare lang -utils/prepare_lang.sh --position-dependent-phones false \ - --num_sil_states $num_sil_states --num_nonsil_states $num_nonsil_states \ - $dict_dir "<UNK>" $tmplm_dir $lm_dir diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lm.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lm.sh deleted file mode 100644 index c2edcefed..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/prepare_lm.sh +++ /dev/null @@ -1,35 +0,0 @@ -#!/usr/bin/env bash - -langdir="" -lmdir="" - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -arpa_lm=$1 -data=$2 - -if [ -z $langdir ]; then - langdir=$data/lang -fi -if [ -z $lmdir ]; then - lmdir=$data/lang_test -fi - -if [ ! -d $langdir ]; then - echo "$langdir not found. 
run local/prepare_lang.sh first" && exit 1 -fi - -mkdir -p $lmdir -cp -r $langdir/* $lmdir - -if [[ "$arpa_lm" == *.gz ]]; then - gunzip -c $arpa_lm | arpa2fst --disambig-symbol=#0 --read-symbol-table=$lmdir/words.txt - $lmdir/G.fst -else - arpa2fst --disambig-symbol=#0 --read-symbol-table=$lmdir/words.txt $arpa_lm $lmdir/G.fst -fi -fstisstochastic $lmdir/G.fst -utils/validate_lang.pl $lmdir || exit 1 - -echo "done preparing lm ($lmdir)" diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/score.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/score.sh deleted file mode 100644 index cb5bbb727..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/score.sh +++ /dev/null @@ -1,63 +0,0 @@ -#!/usr/bin/env bash -# Copyright 2012 Johns Hopkins University (Author: Daniel Povey) -# 2014 Guoguo Chen -# Apache 2.0 - -[ -f ./path.sh ] && . ./path.sh - -# begin configuration section. -cmd=run.pl -stage=0 -decode_mbr=true -word_ins_penalty=0.0,0.5,1.0 -min_lmwt=7 -max_lmwt=17 -iter=final -#end configuration section. - -[ -f ./path.sh ] && . ./path.sh -. parse_options.sh || exit 1; - -if [ $# -ne 3 ]; then - echo "Usage: local/score.sh [--cmd (run.pl|queue.pl...)] <data-dir> <lang-dir|graph-dir> <decode-dir>" - echo " Options:" - echo " --cmd (run.pl|queue.pl...) # specify how to run the sub-processes." - echo " --stage (0|1|2) # start scoring script from part-way through." - echo " --decode_mbr (true/false) # maximum bayes risk decoding (confusion network)." - echo " --min_lmwt <int> # minumum LM-weight for lattice rescoring " - echo " --max_lmwt <int> # maximum LM-weight for lattice rescoring " - exit 1; -fi - -data=$1 -lang_or_graph=$2 -dir=$3 - -symtab=$lang_or_graph/words.txt - -for f in $symtab $dir/lat.1.gz $data/text; do - [ ! -f $f ] && echo "score.sh: no such file $f" && exit 1; -done - -mkdir -p $dir/scoring/log - -cat $data/text | sed 's:<NOISE>::g' | sed 's:<SPOKEN_NOISE>::g' > $dir/scoring/test_filt.txt - -for wip in $(echo $word_ins_penalty | sed 's/,/ /g'); do - $cmd LMWT=$min_lmwt:$max_lmwt $dir/scoring/log/best_path.LMWT.$wip.log \ - lattice-scale --inv-acoustic-scale=LMWT "ark:gunzip -c $dir/lat.*.gz|" ark:- \| \ - lattice-add-penalty --word-ins-penalty=$wip ark:- ark:- \| \ - lattice-best-path --word-symbol-table=$symtab \ - ark:- ark,t:$dir/scoring/LMWT.$wip.tra || exit 1; -done - -# Note: the double level of quoting for the sed command -for wip in $(echo $word_ins_penalty | sed 's/,/ /g'); do - $cmd LMWT=$min_lmwt:$max_lmwt $dir/scoring/log/score.LMWT.$wip.log \ - cat $dir/scoring/LMWT.$wip.tra \| \ - utils/int2sym.pl -f 2- $symtab \| sed 's:\<UNK\>::g' \| \ - compute-wer --text --mode=present \ - ark:$dir/scoring/test_filt.txt ark,p:- ">&" $dir/wer_LMWT_$wip || exit 1; -done - -exit 0; diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/show_wer.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/show_wer.sh deleted file mode 100644 index 9ecf1690c..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/show_wer.sh +++ /dev/null @@ -1,52 +0,0 @@ -#!/bin/bash - -split="dev_other" -ref_data="" -get_best_wer=true -dec_name="decode" -graph_name="graph" - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -exp_root=$1 - -set -eu - -echo "==== WER w.r.t. pseudo transcript" -for x in $exp_root/*/${dec_name}_${split}*; do grep WER $x/wer_* 2>/dev/null | utils/best_wer.sh; done - - -if [ ! 
-z $ref_data ]; then - echo "==== WER w.r.t. real transcript (select based on pseudo WER)" - ref_txt=$ref_data/$split/text - for x in $exp_root/*/${dec_name}_${split}*; do - lang=$(dirname $x)/$graph_name - - lmwt=$( - grep WER $x/wer_* 2>/dev/null | utils/best_wer.sh | - sed 's/.*wer_\(.*\)$/\1/g' | sed 's/_/./g' - ) - tra=$x/scoring/$lmwt.tra - cat $tra | utils/int2sym.pl -f 2- $lang/words.txt | sed 's:<UNK>::g' | sed 's:<SIL>::g' | \ - compute-wer --text --mode=present \ - ark:$ref_txt ark,p:- 2> /dev/null | grep WER | xargs -I{} echo {} $tra - done -fi - -if [ ! -z $ref_data ] && $get_best_wer; then - echo "==== WER w.r.t. real transcript (select based on true WER)" - ref_txt=$ref_data/$split/text - for x in $exp_root/*/${dec_name}_${split}*; do - lang=$(dirname $x)/$graph_name - - for tra in $x/scoring/*.tra; do - cat $tra | utils/int2sym.pl -f 2- $lang/words.txt | sed 's:<UNK>::g' | sed 's:<SIL>::g' | \ - compute-wer --text --mode=present \ - ark:$ref_txt ark,p:- 2> /dev/null | grep WER | xargs -I{} echo {} $tra - done | sort -k2n | head -n1 - done -fi - -exit 0; diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/train_subset_lgbeam.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/train_subset_lgbeam.sh deleted file mode 100644 index 913c1d8e4..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/train_subset_lgbeam.sh +++ /dev/null @@ -1,129 +0,0 @@ -#!/usr/bin/env bash - -out_root=/tmp -out_name=train_${RANDOM} -num_nonsil_states=1 - -valid="dev_other" -train="train" -mono_size="-1" # 2000 -tri1_size="-1" # 5000 -tri2b_size="-1" # 10000 -tri3b_size="-1" # 10000 - -# Acoustic model parameters -numLeavesTri1=2000 -numGaussTri1=10000 -numLeavesMLLT=2500 -numGaussMLLT=15000 -numLeavesSAT=2500 -numGaussSAT=15000 - -stage=1 -max_stage=1 - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -data=$1 -lang=$2 -lang_test=$3 - -exp_root=$out_root/$out_name - -# you might not want to do this for interactive shells. -set -e - - -if [ $stage -le 1 ] && [ $max_stage -ge 1 ]; then - # train a monophone system - if [ ! $mono_size -eq -1 ]; then - utils/subset_data_dir.sh $data/$train $mono_size $data/${train}_${mono_size} - mono_train=${train}_${mono_size} - else - mono_train=${train} - fi - - steps/train_mono.sh --boost-silence 1.25 --nj 20 --cmd "$train_cmd" \ - --initial-beam 40 --regular-beam 60 --retry-beam 120 \ - $data/$mono_train $lang $exp_root/mono - - utils/mkgraph.sh $lang_test $exp_root/mono $exp_root/mono/graph - steps/decode.sh --nj 20 --cmd "$decode_cmd" \ - $exp_root/mono/graph $data/$valid $exp_root/mono/decode_$valid & -fi - - -if [ $stage -le 2 ] && [ $max_stage -ge 2 ]; then - # train a first delta + delta-delta triphone system on a subset of 5000 utterances - if [ ! 
$tri1_size -eq -1 ]; then - utils/subset_data_dir.sh $data/$train $tri1_size $data/${train}_${tri1_size} - tri1_train=${train}_${tri1_size} - else - tri1_train=${train} - fi - - steps/align_si.sh --boost-silence 1.25 --nj 10 --cmd "$train_cmd" \ - $data/$tri1_train $lang \ - $exp_root/mono $exp_root/mono_ali_${tri1_train} - - steps_gan/train_deltas.sh --boost-silence 1.25 --cmd "$train_cmd" \ - --num_nonsil_states $num_nonsil_states $numLeavesTri1 $numGaussTri1 \ - $data/$tri1_train $lang \ - $exp_root/mono_ali_${tri1_train} $exp_root/tri1 - - utils/mkgraph.sh $lang_test $exp_root/tri1 $exp_root/tri1/graph - steps/decode.sh --nj 20 --cmd "$decode_cmd" \ - $exp_root/tri1/graph $data/$valid $exp_root/tri1/decode_$valid & -fi - -if [ $stage -le 3 ] && [ $max_stage -ge 3 ]; then - # train an LDA+MLLT system. - if [ ! $tri2b_size -eq -1 ]; then - utils/subset_data_dir.sh $data/$train $tri2b_size $data/${train}_${tri2b_size} - tri2b_train=${train}_${tri2b_size} - else - tri2b_train=${train} - fi - - steps/align_si.sh --nj 10 --cmd "$train_cmd" \ - $data/$tri2b_train $lang \ - $exp_root/tri1 $exp_root/tri1_ali_${tri2b_train} - - steps_gan/train_lda_mllt.sh --cmd "$train_cmd" \ - --num_nonsil_states $num_nonsil_states \ - --splice-opts "--left-context=3 --right-context=3" $numLeavesMLLT $numGaussMLLT \ - $data/$tri2b_train $lang \ - $exp_root/tri1_ali_${tri2b_train} $exp_root/tri2b - - utils/mkgraph.sh $lang_test $exp_root/tri2b $exp_root/tri2b/graph - steps/decode.sh --nj 20 --cmd "$decode_cmd" \ - $exp_root/tri2b/graph $data/$valid $exp_root/tri2b/decode_$valid & -fi - - -if [ $stage -le 4 ] && [ $max_stage -ge 4 ]; then - # Train tri3b, which is LDA+MLLT+SAT on 10k utts - if [ ! $tri3b_size -eq -1 ]; then - utils/subset_data_dir.sh $data/$train $tri3b_size $data/${train}_${tri3b_size} - tri3b_train=${train}_${tri3b_size} - else - tri3b_train=${train} - fi - - steps/align_si.sh --nj 10 --cmd "$train_cmd" --use-graphs true \ - $data/$tri3b_train $lang \ - $exp_root/tri2b $exp_root/tri2b_ali_${tri2b_train} - - steps_gan/train_sat.sh --cmd "$train_cmd" \ - --num_nonsil_states $num_nonsil_states $numLeavesSAT $numGaussSAT \ - $data/$tri3b_train $lang \ - $exp_root/tri2b_ali_${tri2b_train} $exp_root/tri3b - - utils/mkgraph.sh $lang_test $exp_root/tri3b $exp_root/tri3b/graph - steps/decode_fmllr.sh --nj 20 --cmd "$decode_cmd" \ - $exp_root/tri3b/graph $data/$valid $exp_root/tri3b/decode_$valid & -fi - -wait diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select.py deleted file mode 100644 index 1122c88c1..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select.py +++ /dev/null @@ -1,135 +0,0 @@ -""" -Implement unsupervised metric for decoding hyperparameter selection: - $$ alpha * LM_PPL + ViterbitUER(%) * 100 $$ -""" -import argparse -import logging -import math -import sys - -import kenlm -import editdistance -from g2p_en import G2p - -logging.root.setLevel(logging.INFO) -logging.basicConfig(stream=sys.stdout, level=logging.INFO) -logger = logging.getLogger(__name__) - - -def get_parser(): - parser = argparse.ArgumentParser() - parser.add_argument("ref_tra", help="reference pseudo labels") - parser.add_argument("hyp_tra", help="decoded pseudo labels to be assess") - parser.add_argument("--kenlm_path", default="/checkpoint/abaevski/data/speech/libri/librispeech_lm_novox.phnc_o5.bin", help="") - 
parser.add_argument("--uppercase", action="store_true", help="") - parser.add_argument("--skipwords", default="", help="") - parser.add_argument("--gt_tra", default="", help="ground truth pseudo labels for computing oracle WER") - parser.add_argument("--min_vt_uer", default=0.0, type=float) - parser.add_argument("--phonemize", action="store_true", help="phonemize word hypotheses, used when reference is phone transcript") - parser.add_argument("--phonemize_lexicon", default="", type=str, help="use a lexicon for phonemizing") - return parser - -def load_tra(tra_path): - with open(tra_path, "r") as f: - uid_to_tra = {} - for line in f: - toks = line.rstrip().split() - uid, tra = toks[0], " ".join(toks[1:]) - uid_to_tra[uid] = tra - logger.debug(f"loaded {len(uid_to_tra)} utterances from {tra_path}") - return uid_to_tra - -def load_lex(lex_path): - with open(lex_path, "r") as f: - w2p = {} - for line in f: - w, p = line.rstrip().split(None, 1) - w2p[w] = p.split() - return w2p - -def compute_wer(ref_uid_to_tra, hyp_uid_to_tra, g2p, g2p_dict): - d_cnt = 0 - w_cnt = 0 - w_cnt_h = 0 - for uid in hyp_uid_to_tra: - ref = ref_uid_to_tra[uid].split() - if g2p_dict is not None: - hyp = [] - for word in hyp_uid_to_tra[uid].split(): - if word in g2p_dict: - hyp = hyp + g2p_dict[word] - else: - logger.warning(f"{word} not in g2p_dict") - elif g2p is not None: - hyp = g2p(hyp_uid_to_tra[uid]) - hyp = [p for p in hyp if p != "'" and p != " "] - hyp = [p[:-1] if p[-1].isnumeric() else p for p in hyp] - else: - hyp = hyp_uid_to_tra[uid].split() - logger.debug(( - f"======================\n" - f"HYP: {' '.join(hyp)}\n" - f"REF: {' '.join(ref)}" - )) - d_cnt += editdistance.eval(ref, hyp) - w_cnt += len(ref) - w_cnt_h += len(hyp) - wer = float(d_cnt) / w_cnt - logger.debug(( - f"wer = {wer*100:.2f}%; num. of ref words = {w_cnt}; " - f"num. of hyp words = {w_cnt_h}; num. of sentences = {len(ref_uid_to_tra)}" - )) - return wer - -def compute_lm_ppl(hyp_uid_to_tra, score_fn): - lm_score = 0. - w_cnt = 0 - for hyp in hyp_uid_to_tra.values(): - cur_score = score_fn(hyp) - cur_cnt = len(hyp.split()) + 1 # plus one for </s> - lm_score += cur_score - w_cnt += cur_cnt - logger.debug(( - f"======================\n" - f"score sum/avg = {cur_score:.2f}/{cur_score/cur_cnt:.2f}\n" - f"hyp = {hyp}" - )) - lm_ppl = math.pow(10, -lm_score / w_cnt) - logger.debug(f"lm ppl = {lm_ppl:.2f}; num. 
of words = {w_cnt}") - return lm_ppl - -def main(): - args = get_parser().parse_args() - logger.debug(f"Args: {args}") - - ref_uid_to_tra = load_tra(args.ref_tra) - hyp_uid_to_tra = load_tra(args.hyp_tra) - assert not bool(set(hyp_uid_to_tra.keys()) - set(ref_uid_to_tra.keys())) - - lm = kenlm.Model(args.kenlm_path) - skipwords = set(args.skipwords.split(",")) - def compute_lm_score(s): - s = " ".join(w for w in s.split() if w not in skipwords) - s = s.upper() if args.uppercase else s - return lm.score(s) - - g2p, g2p_dict = None, None - if args.phonemize: - if args.phonemize_lexicon: - g2p_dict = load_lex(args.phonemize_lexicon) - else: - g2p = G2p() - - wer = compute_wer(ref_uid_to_tra, hyp_uid_to_tra, g2p, g2p_dict) - lm_ppl = compute_lm_ppl(hyp_uid_to_tra, compute_lm_score) - - gt_wer = -math.inf - if args.gt_tra: - gt_uid_to_tra = load_tra(args.gt_tra) - gt_wer = compute_wer(gt_uid_to_tra, hyp_uid_to_tra, None, None) - - score = math.log(lm_ppl) * max(wer, args.min_vt_uer) - logging.info(f"{args.hyp_tra}: score={score:.4f}; wer={wer*100:.2f}%; lm_ppl={lm_ppl:.4f}; gt_wer={gt_wer*100:.2f}%") - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select_decode.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select_decode.sh deleted file mode 100644 index b34c5b6e0..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select_decode.sh +++ /dev/null @@ -1,37 +0,0 @@ -#!/bin/bash - -split="dev_other" -ref_txt="" # ground truth transcript path -psd_txt="" # pseudo transcript path -get_best_wer=true -dec_name="decode" -graph_name="graph" -kenlm_path=/checkpoint/abaevski/data/speech/libri/librispeech_lm_novox.phnc_o6.bin - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -exp_root=$1 -unsup_args="" -if [ $# -ge 2 ]; then - unsup_args=$2 -fi - -set -eu - -if [ ! -z $ref_txt ] && $get_best_wer; then - echo "==== WER w.r.t. real transcript (select based on unsupervised metric)" - for x in $exp_root/*/${dec_name}_${split}*; do - lang=$(dirname $x)/$graph_name - - ( - for tra in $x/scoring/*.tra; do - cat $tra | utils/int2sym.pl -f 2- $lang/words.txt | sed 's:<UNK>::g' | sed 's:<SIL>::g' > $tra.txt - python local/unsup_select.py $psd_txt $tra.txt --kenlm_path $kenlm_path --gt_tra $ref_txt $unsup_args - done 2>/dev/null | grep "score=" | sed 's/=/ /g' | sed 's/;//g' | sort -k3n | head -n1 - ) & - done -fi -wait - diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select_decode_word.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select_decode_word.sh deleted file mode 100644 index c10a6b880..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/unsup_select_decode_word.sh +++ /dev/null @@ -1,35 +0,0 @@ -#!/bin/bash - -split="dev_other" -ref_txt="" # ground truth transcript path -psd_txt="" # pseudo transcript path -get_best_wer=true -dec_name="decode" -graph_name="graph" -kenlm_path=/checkpoint/abaevski/data/speech/libri/librispeech_lm_novox.phnc_o6.bin -phonemize_lexicon="" - -. ./cmd.sh -. ./path.sh -. parse_options.sh -. /private/home/wnhsu/unsup_asr/fairseq-py-unsup/env.sh - -exp_root=$1 - -set -eu - -if [ ! -z $ref_txt ] && $get_best_wer; then - echo "==== WER w.r.t. 
real transcript (select based on unsupervised metric)" - for x in $exp_root/*/${dec_name}_${split}*; do - lang=$(dirname $x)/$graph_name - - for tra in $x/scoring/*.tra; do - cat $tra | utils/int2sym.pl -f 2- $lang/words.txt | sed 's:\<UNK\>::g' > $tra.txt - python local/unsup_select.py $psd_txt $tra.txt \ - --kenlm_path $kenlm_path --gt_tra $ref_txt --phonemize \ - --phonemize_lexicon "$phonemize_lexicon" - done | grep "score=" | sed 's/=/ /g' | sed 's/;//g' | sort -k3n | head -n1 - done -fi - - diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/path.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/path.sh deleted file mode 100644 index 1a6fb5f89..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/path.sh +++ /dev/null @@ -1,5 +0,0 @@ -export KALDI_ROOT=`pwd`/../../.. -export PATH=$PWD/utils/:$KALDI_ROOT/tools/openfst/bin:$PWD:$PATH -[ ! -f $KALDI_ROOT/tools/config/common_path.sh ] && echo >&2 "The standard file $KALDI_ROOT/tools/config/common_path.sh is not present -> Exit!" && exit 1 -. $KALDI_ROOT/tools/config/common_path.sh -export LC_ALL=C diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps deleted file mode 100644 index 6e99bf5b5..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps +++ /dev/null @@ -1 +0,0 @@ -../../wsj/s5/steps \ No newline at end of file diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_deltas.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_deltas.sh deleted file mode 100644 index af68715ab..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_deltas.sh +++ /dev/null @@ -1,175 +0,0 @@ -#!/usr/bin/env bash - -# Copyright 2012 Johns Hopkins University (Author: Daniel Povey) -# Apache 2.0 - -# Begin configuration. -stage=-4 # This allows restarting after partway, when something when wrong. -config= -cmd=run.pl -scale_opts="--transition-scale=1.0 --acoustic-scale=0.1 --self-loop-scale=0.1" -realign_iters="10 20 30"; -num_iters=35 # Number of iterations of training -max_iter_inc=25 # Last iter to increase #Gauss on. -beam=10 -careful=false -retry_beam=40 -boost_silence=1.0 # Factor by which to boost silence likelihoods in alignment -power=0.25 # Exponent for number of gaussians according to occurrence counts -cluster_thresh=-1 # for build-tree control final bottom-up clustering of leaves -norm_vars=false # deprecated. Prefer --cmvn-opts "--norm-vars=true" - # use the option --cmvn-opts "--norm-means=false" -cmvn_opts= -delta_opts= -context_opts= # use"--context-width=5 --central-position=2" for quinphone -num_nonsil_states=3 -# End configuration. - -echo "$0 $@" # Print the command line for logging - -[ -f path.sh ] && . ./path.sh; -. parse_options.sh || exit 1; - -if [ $# != 6 ]; then - echo "Usage: steps/train_deltas.sh <num-leaves> <tot-gauss> <data-dir> <lang-dir> <alignment-dir> <exp-dir>" - echo "e.g.: steps/train_deltas.sh 2000 10000 data/train_si84_half data/lang exp/mono_ali exp/tri1" - echo "main options (for others, see top of script file)" - echo " --cmd (utils/run.pl|utils/queue.pl <queue opts>) # how to run jobs." - echo " --config <config-file> # config containing options" - echo " --stage <stage> # stage to do partial re-run from." 
- exit 1; -fi - -numleaves=$1 -totgauss=$2 -data=$3 -lang=$4 -alidir=$5 -dir=$6 - -for f in $alidir/final.mdl $alidir/ali.1.gz $data/feats.scp $lang/phones.txt; do - [ ! -f $f ] && echo "train_deltas.sh: no such file $f" && exit 1; -done - -numgauss=$numleaves -incgauss=$[($totgauss-$numgauss)/$max_iter_inc] # per-iter increment for #Gauss -oov=`cat $lang/oov.int` || exit 1; -ciphonelist=`cat $lang/phones/context_indep.csl` || exit 1; -nj=`cat $alidir/num_jobs` || exit 1; -mkdir -p $dir/log -echo $nj > $dir/num_jobs - -utils/lang/check_phones_compatible.sh $lang/phones.txt $alidir/phones.txt || exit 1; -cp $lang/phones.txt $dir || exit 1; - -sdata=$data/split$nj; -split_data.sh $data $nj || exit 1; - - -[ $(cat $alidir/cmvn_opts 2>/dev/null | wc -c) -gt 1 ] && [ -z "$cmvn_opts" ] && \ - echo "$0: warning: ignoring CMVN options from source directory $alidir" -$norm_vars && cmvn_opts="--norm-vars=true $cmvn_opts" -echo $cmvn_opts > $dir/cmvn_opts # keep track of options to CMVN. -[ ! -z $delta_opts ] && echo $delta_opts > $dir/delta_opts - -feats="ark,s,cs:apply-cmvn $cmvn_opts --utt2spk=ark:$sdata/JOB/utt2spk scp:$sdata/JOB/cmvn.scp scp:$sdata/JOB/feats.scp ark:- | add-deltas $delta_opts ark:- ark:- |" - -rm $dir/.error 2>/dev/null - -if [ $stage -le -3 ]; then - echo "$0: accumulating tree stats" - $cmd JOB=1:$nj $dir/log/acc_tree.JOB.log \ - acc-tree-stats $context_opts \ - --ci-phones=$ciphonelist $alidir/final.mdl "$feats" \ - "ark:gunzip -c $alidir/ali.JOB.gz|" $dir/JOB.treeacc || exit 1; - sum-tree-stats $dir/treeacc $dir/*.treeacc 2>$dir/log/sum_tree_acc.log || exit 1; - rm $dir/*.treeacc -fi - -if [ $stage -le -2 ]; then - echo "$0: getting questions for tree-building, via clustering" - # preparing questions, roots file... - cluster-phones --pdf-class-list=$(($num_nonsil_states / 2)) $context_opts \ - $dir/treeacc $lang/phones/sets.int \ - $dir/questions.int 2> $dir/log/questions.log || exit 1; - cat $lang/phones/extra_questions.int >> $dir/questions.int - compile-questions $context_opts $lang/topo $dir/questions.int \ - $dir/questions.qst 2>$dir/log/compile_questions.log || exit 1; - - echo "$0: building the tree" - $cmd $dir/log/build_tree.log \ - build-tree $context_opts --verbose=1 --max-leaves=$numleaves \ - --cluster-thresh=$cluster_thresh $dir/treeacc $lang/phones/roots.int \ - $dir/questions.qst $lang/topo $dir/tree || exit 1; - - $cmd $dir/log/init_model.log \ - gmm-init-model --write-occs=$dir/1.occs \ - $dir/tree $dir/treeacc $lang/topo $dir/1.mdl || exit 1; - if grep 'no stats' $dir/log/init_model.log; then - echo "** The warnings above about 'no stats' generally mean you have phones **" - echo "** (or groups of phones) in your phone set that had no corresponding data. **" - echo "** You should probably figure out whether something went wrong, **" - echo "** or whether your data just doesn't happen to have examples of those **" - echo "** phones. **" - fi - - gmm-mixup --mix-up=$numgauss $dir/1.mdl $dir/1.occs $dir/1.mdl 2>$dir/log/mixup.log || exit 1; - rm $dir/treeacc -fi - -if [ $stage -le -1 ]; then - # Convert the alignments. 
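  # The alignments in $alidir were produced with the previous model's tree;
  # convert-ali remaps them to the tree just built above, so training can
  # resume from pass 1 without an initial realignment pass.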
- echo "$0: converting alignments from $alidir to use current tree" - $cmd JOB=1:$nj $dir/log/convert.JOB.log \ - convert-ali $alidir/final.mdl $dir/1.mdl $dir/tree \ - "ark:gunzip -c $alidir/ali.JOB.gz|" "ark:|gzip -c >$dir/ali.JOB.gz" || exit 1; -fi - -if [ $stage -le 0 ]; then - echo "$0: compiling graphs of transcripts" - $cmd JOB=1:$nj $dir/log/compile_graphs.JOB.log \ - compile-train-graphs --read-disambig-syms=$lang/phones/disambig.int $dir/tree $dir/1.mdl $lang/L.fst \ - "ark:utils/sym2int.pl --map-oov $oov -f 2- $lang/words.txt < $sdata/JOB/text |" \ - "ark:|gzip -c >$dir/fsts.JOB.gz" || exit 1; -fi - -x=1 -while [ $x -lt $num_iters ]; do - echo "$0: training pass $x" - if [ $stage -le $x ]; then - if echo $realign_iters | grep -w $x >/dev/null; then - echo "$0: aligning data" - mdl="gmm-boost-silence --boost=$boost_silence `cat $lang/phones/optional_silence.csl` $dir/$x.mdl - |" - $cmd JOB=1:$nj $dir/log/align.$x.JOB.log \ - gmm-align-compiled $scale_opts --beam=$beam --retry-beam=$retry_beam --careful=$careful "$mdl" \ - "ark:gunzip -c $dir/fsts.JOB.gz|" "$feats" \ - "ark:|gzip -c >$dir/ali.JOB.gz" || exit 1; - fi - $cmd JOB=1:$nj $dir/log/acc.$x.JOB.log \ - gmm-acc-stats-ali $dir/$x.mdl "$feats" \ - "ark,s,cs:gunzip -c $dir/ali.JOB.gz|" $dir/$x.JOB.acc || exit 1; - $cmd $dir/log/update.$x.log \ - gmm-est --mix-up=$numgauss --power=$power \ - --write-occs=$dir/$[$x+1].occs $dir/$x.mdl \ - "gmm-sum-accs - $dir/$x.*.acc |" $dir/$[$x+1].mdl || exit 1; - rm $dir/$x.mdl $dir/$x.*.acc - rm $dir/$x.occs - fi - [ $x -le $max_iter_inc ] && numgauss=$[$numgauss+$incgauss]; - x=$[$x+1]; -done - -rm $dir/final.mdl $dir/final.occs 2>/dev/null -ln -s $x.mdl $dir/final.mdl -ln -s $x.occs $dir/final.occs - -steps/diagnostic/analyze_alignments.sh --cmd "$cmd" $lang $dir - -# Summarize warning messages... -utils/summarize_warnings.pl $dir/log - -steps/info/gmm_dir_info.pl $dir - -echo "$0: Done training system with delta+delta-delta features in $dir" - -exit 0 diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_lda_mllt.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_lda_mllt.sh deleted file mode 100644 index 9d8c319ce..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_lda_mllt.sh +++ /dev/null @@ -1,239 +0,0 @@ -#!/usr/bin/env bash - -# Copyright 2012 Johns Hopkins University (Author: Daniel Povey) -# -# LDA+MLLT refers to the way we transform the features after computing -# the MFCCs: we splice across several frames, reduce the dimension (to 40 -# by default) using Linear Discriminant Analysis), and then later estimate, -# over multiple iterations, a diagonalizing transform known as MLLT or STC. -# See http://kaldi-asr.org/doc/transform.html for more explanation. -# -# Apache 2.0. - -# Begin configuration. -cmd=run.pl -config= -stage=-5 -scale_opts="--transition-scale=1.0 --acoustic-scale=0.1 --self-loop-scale=0.1" -realign_iters="10 20 30"; -mllt_iters="2 4 6 12"; -num_iters=35 # Number of iterations of training -max_iter_inc=25 # Last iter to increase #Gauss on. -dim=40 -beam=10 -retry_beam=40 -careful=false -boost_silence=1.0 # Factor by which to boost silence likelihoods in alignment -power=0.25 # Exponent for number of gaussians according to occurrence counts -randprune=4.0 # This is approximately the ratio by which we will speed up the - # LDA and MLLT calculations via randomized pruning. 
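To spell out the header comment above (a sketch only; the paths are illustrative and not part of the recipe): each frame is mean/variance-normalized, spliced with its left/right context, and multiplied by the current LDA(+MLLT) matrix, so the models always see the reduced `dim`-dimensional vectors. For example, with the `--left-context=3 --right-context=3` splice options used elsewhere in this recipe:

```bash
# Illustrative only: the spliced + transformed feature pipeline, written out
# for a made-up data directory and the LDA matrix 0.mat estimated below.
apply-cmvn --utt2spk=ark:data/train/utt2spk scp:data/train/cmvn.scp \
    scp:data/train/feats.scp ark:- \
  | splice-feats --left-context=3 --right-context=3 ark:- ark:- \
  | transform-feats exp/tri2b/0.mat ark:- ark:- \
  | feat-to-dim ark:- -   # prints the post-LDA dimension (40 by default)
```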
-splice_opts= -cluster_thresh=-1 # for build-tree control final bottom-up clustering of leaves -norm_vars=false # deprecated. Prefer --cmvn-opts "--norm-vars=false" -cmvn_opts= -context_opts= # use "--context-width=5 --central-position=2" for quinphone. -# End configuration. -train_tree=true # if false, don't actually train the tree. -use_lda_mat= # If supplied, use this LDA[+MLLT] matrix. -num_nonsil_states=3 - -echo "$0 $@" # Print the command line for logging - -[ -f path.sh ] && . ./path.sh -. parse_options.sh || exit 1; - -if [ $# != 6 ]; then - echo "Usage: steps/train_lda_mllt.sh [options] <#leaves> <#gauss> <data> <lang> <alignments> <dir>" - echo " e.g.: steps/train_lda_mllt.sh 2500 15000 data/train_si84 data/lang exp/tri1_ali_si84 exp/tri2b" - echo "Main options (for others, see top of script file)" - echo " --cmd (utils/run.pl|utils/queue.pl <queue opts>) # how to run jobs." - echo " --config <config-file> # config containing options" - echo " --stage <stage> # stage to do partial re-run from." - exit 1; -fi - -numleaves=$1 -totgauss=$2 -data=$3 -lang=$4 -alidir=$5 -dir=$6 - -for f in $alidir/final.mdl $alidir/ali.1.gz $data/feats.scp $lang/phones.txt; do - [ ! -f $f ] && echo "train_lda_mllt.sh: no such file $f" && exit 1; -done - -numgauss=$numleaves -incgauss=$[($totgauss-$numgauss)/$max_iter_inc] # per-iter #gauss increment -oov=`cat $lang/oov.int` || exit 1; -nj=`cat $alidir/num_jobs` || exit 1; -silphonelist=`cat $lang/phones/silence.csl` || exit 1; -ciphonelist=`cat $lang/phones/context_indep.csl` || exit 1; - -mkdir -p $dir/log - -utils/lang/check_phones_compatible.sh $lang/phones.txt $alidir/phones.txt || exit 1; -cp $lang/phones.txt $dir || exit 1; - -echo $nj >$dir/num_jobs -echo "$splice_opts" >$dir/splice_opts # keep track of frame-splicing options - # so that later stages of system building can know what they were. - - -[ $(cat $alidir/cmvn_opts 2>/dev/null | wc -c) -gt 1 ] && [ -z "$cmvn_opts" ] && \ - echo "$0: warning: ignoring CMVN options from source directory $alidir" -$norm_vars && cmvn_opts="--norm-vars=true $cmvn_opts" -echo $cmvn_opts > $dir/cmvn_opts # keep track of options to CMVN. - -sdata=$data/split$nj; -split_data.sh $data $nj || exit 1; - -splicedfeats="ark,s,cs:apply-cmvn $cmvn_opts --utt2spk=ark:$sdata/JOB/utt2spk scp:$sdata/JOB/cmvn.scp scp:$sdata/JOB/feats.scp ark:- | splice-feats $splice_opts ark:- ark:- |" -# Note: $feats gets overwritten later in the script. -feats="$splicedfeats transform-feats $dir/0.mat ark:- ark:- |" - - - -if [ $stage -le -5 ]; then - if [ -z "$use_lda_mat" ]; then - echo "$0: Accumulating LDA statistics." - rm $dir/lda.*.acc 2>/dev/null - $cmd JOB=1:$nj $dir/log/lda_acc.JOB.log \ - ali-to-post "ark:gunzip -c $alidir/ali.JOB.gz|" ark:- \| \ - weight-silence-post 0.0 $silphonelist $alidir/final.mdl ark:- ark:- \| \ - acc-lda --rand-prune=$randprune $alidir/final.mdl "$splicedfeats" ark,s,cs:- \ - $dir/lda.JOB.acc || exit 1; - est-lda --write-full-matrix=$dir/full.mat --dim=$dim $dir/0.mat $dir/lda.*.acc \ - 2>$dir/log/lda_est.log || exit 1; - rm $dir/lda.*.acc - else - echo "$0: Using supplied LDA matrix $use_lda_mat" - cp $use_lda_mat $dir/0.mat || exit 1; - [ ! 
-z "$mllt_iters" ] && \ - echo "$0: Warning: using supplied LDA matrix $use_lda_mat but we will do MLLT," && \ - echo " which you might not want; to disable MLLT, specify --mllt-iters ''" && \ - sleep 5 - fi -fi - -cur_lda_iter=0 - -if [ $stage -le -4 ] && $train_tree; then - echo "$0: Accumulating tree stats" - $cmd JOB=1:$nj $dir/log/acc_tree.JOB.log \ - acc-tree-stats $context_opts \ - --ci-phones=$ciphonelist $alidir/final.mdl "$feats" \ - "ark:gunzip -c $alidir/ali.JOB.gz|" $dir/JOB.treeacc || exit 1; - [ `ls $dir/*.treeacc | wc -w` -ne "$nj" ] && echo "$0: Wrong #tree-accs" && exit 1; - $cmd $dir/log/sum_tree_acc.log \ - sum-tree-stats $dir/treeacc $dir/*.treeacc || exit 1; - rm $dir/*.treeacc -fi - - -if [ $stage -le -3 ] && $train_tree; then - echo "$0: Getting questions for tree clustering." - # preparing questions, roots file... - cluster-phones --pdf-class-list=$(($num_nonsil_states / 2)) $context_opts $dir/treeacc $lang/phones/sets.int \ - $dir/questions.int 2> $dir/log/questions.log || exit 1; - cat $lang/phones/extra_questions.int >> $dir/questions.int - compile-questions $context_opts $lang/topo $dir/questions.int \ - $dir/questions.qst 2>$dir/log/compile_questions.log || exit 1; - - echo "$0: Building the tree" - $cmd $dir/log/build_tree.log \ - build-tree $context_opts --verbose=1 --max-leaves=$numleaves \ - --cluster-thresh=$cluster_thresh $dir/treeacc $lang/phones/roots.int \ - $dir/questions.qst $lang/topo $dir/tree || exit 1; -fi - -if [ $stage -le -2 ]; then - echo "$0: Initializing the model" - if $train_tree; then - gmm-init-model --write-occs=$dir/1.occs \ - $dir/tree $dir/treeacc $lang/topo $dir/1.mdl 2> $dir/log/init_model.log || exit 1; - grep 'no stats' $dir/log/init_model.log && echo "This is a bad warning."; - rm $dir/treeacc - else - cp $alidir/tree $dir/ || exit 1; - $cmd JOB=1 $dir/log/init_model.log \ - gmm-init-model-flat $dir/tree $lang/topo $dir/1.mdl \ - "$feats subset-feats ark:- ark:-|" || exit 1; - fi -fi - - -if [ $stage -le -1 ]; then - # Convert the alignments. 
- echo "$0: Converting alignments from $alidir to use current tree" - $cmd JOB=1:$nj $dir/log/convert.JOB.log \ - convert-ali $alidir/final.mdl $dir/1.mdl $dir/tree \ - "ark:gunzip -c $alidir/ali.JOB.gz|" "ark:|gzip -c >$dir/ali.JOB.gz" || exit 1; -fi - -if [ $stage -le 0 ] && [ "$realign_iters" != "" ]; then - echo "$0: Compiling graphs of transcripts" - $cmd JOB=1:$nj $dir/log/compile_graphs.JOB.log \ - compile-train-graphs --read-disambig-syms=$lang/phones/disambig.int $dir/tree $dir/1.mdl $lang/L.fst \ - "ark:utils/sym2int.pl --map-oov $oov -f 2- $lang/words.txt < $data/split$nj/JOB/text |" \ - "ark:|gzip -c >$dir/fsts.JOB.gz" || exit 1; -fi - - -x=1 -while [ $x -lt $num_iters ]; do - echo Training pass $x - if echo $realign_iters | grep -w $x >/dev/null && [ $stage -le $x ]; then - echo Aligning data - mdl="gmm-boost-silence --boost=$boost_silence `cat $lang/phones/optional_silence.csl` $dir/$x.mdl - |" - $cmd JOB=1:$nj $dir/log/align.$x.JOB.log \ - gmm-align-compiled $scale_opts --beam=$beam --retry-beam=$retry_beam --careful=$careful "$mdl" \ - "ark:gunzip -c $dir/fsts.JOB.gz|" "$feats" \ - "ark:|gzip -c >$dir/ali.JOB.gz" || exit 1; - fi - if echo $mllt_iters | grep -w $x >/dev/null; then - if [ $stage -le $x ]; then - echo "$0: Estimating MLLT" - $cmd JOB=1:$nj $dir/log/macc.$x.JOB.log \ - ali-to-post "ark:gunzip -c $dir/ali.JOB.gz|" ark:- \| \ - weight-silence-post 0.0 $silphonelist $dir/$x.mdl ark:- ark:- \| \ - gmm-acc-mllt --rand-prune=$randprune $dir/$x.mdl "$feats" ark:- $dir/$x.JOB.macc \ - || exit 1; - est-mllt $dir/$x.mat.new $dir/$x.*.macc 2> $dir/log/mupdate.$x.log || exit 1; - gmm-transform-means $dir/$x.mat.new $dir/$x.mdl $dir/$x.mdl \ - 2> $dir/log/transform_means.$x.log || exit 1; - compose-transforms --print-args=false $dir/$x.mat.new $dir/$cur_lda_iter.mat $dir/$x.mat || exit 1; - rm $dir/$x.*.macc - fi - feats="$splicedfeats transform-feats $dir/$x.mat ark:- ark:- |" - cur_lda_iter=$x - fi - - if [ $stage -le $x ]; then - $cmd JOB=1:$nj $dir/log/acc.$x.JOB.log \ - gmm-acc-stats-ali $dir/$x.mdl "$feats" \ - "ark,s,cs:gunzip -c $dir/ali.JOB.gz|" $dir/$x.JOB.acc || exit 1; - $cmd $dir/log/update.$x.log \ - gmm-est --write-occs=$dir/$[$x+1].occs --mix-up=$numgauss --power=$power \ - $dir/$x.mdl "gmm-sum-accs - $dir/$x.*.acc |" $dir/$[$x+1].mdl || exit 1; - rm $dir/$x.mdl $dir/$x.*.acc $dir/$x.occs - fi - [ $x -le $max_iter_inc ] && numgauss=$[$numgauss+$incgauss]; - x=$[$x+1]; -done - -rm $dir/final.{mdl,mat,occs} 2>/dev/null -ln -s $x.mdl $dir/final.mdl -ln -s $x.occs $dir/final.occs -ln -s $cur_lda_iter.mat $dir/final.mat - -steps/diagnostic/analyze_alignments.sh --cmd "$cmd" $lang $dir - -# Summarize warning messages... -utils/summarize_warnings.pl $dir/log - -steps/info/gmm_dir_info.pl $dir - -echo "$0: Done training system with LDA+MLLT features in $dir" - -exit 0 diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_sat.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_sat.sh deleted file mode 100644 index f75afafb1..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/steps_gan/train_sat.sh +++ /dev/null @@ -1,281 +0,0 @@ -#!/usr/bin/env bash -# Copyright 2012 Johns Hopkins University (Author: Daniel Povey). Apache 2.0. - - -# This does Speaker Adapted Training (SAT), i.e. train on -# fMLLR-adapted features. It can be done on top of either LDA+MLLT, or -# delta and delta-delta features. 
If there are no transforms supplied -# in the alignment directory, it will estimate transforms itself before -# building the tree (and in any case, it estimates transforms a number -# of times during training). - - -# Begin configuration section. -stage=-5 -exit_stage=-100 # you can use this to require it to exit at the - # beginning of a specific stage. Not all values are - # supported. -fmllr_update_type=full -cmd=run.pl -scale_opts="--transition-scale=1.0 --acoustic-scale=0.1 --self-loop-scale=0.1" -beam=10 -retry_beam=40 -careful=false -boost_silence=1.0 # Factor by which to boost silence likelihoods in alignment -context_opts= # e.g. set this to "--context-width 5 --central-position 2" for quinphone. -realign_iters="10 20 30"; -fmllr_iters="2 4 6 12"; -silence_weight=0.0 # Weight on silence in fMLLR estimation. -num_iters=35 # Number of iterations of training -max_iter_inc=25 # Last iter to increase #Gauss on. -power=0.2 # Exponent for number of gaussians according to occurrence counts -cluster_thresh=-1 # for build-tree control final bottom-up clustering of leaves -phone_map= -train_tree=true -tree_stats_opts= -cluster_phones_opts= -compile_questions_opts= -# End configuration section. -num_nonsil_states=3 - -echo "$0 $@" # Print the command line for logging - -[ -f path.sh ] && . ./path.sh -. parse_options.sh || exit 1; - -if [ $# != 6 ]; then - echo "Usage: steps/train_sat.sh <#leaves> <#gauss> <data> <lang> <ali-dir> <exp-dir>" - echo " e.g.: steps/train_sat.sh 2500 15000 data/train_si84 data/lang exp/tri2b_ali_si84 exp/tri3b" - echo "Main options (for others, see top of script file)" - echo " --cmd (utils/run.pl|utils/queue.pl <queue opts>) # how to run jobs." - echo " --config <config-file> # config containing options" - echo " --stage <stage> # stage to do partial re-run from." - exit 1; -fi - -numleaves=$1 -totgauss=$2 -data=$3 -lang=$4 -alidir=$5 -dir=$6 - -for f in $data/feats.scp $lang/phones.txt $alidir/final.mdl $alidir/ali.1.gz; do - [ ! -f $f ] && echo "train_sat.sh: no such file $f" && exit 1; -done - -numgauss=$numleaves -incgauss=$[($totgauss-$numgauss)/$max_iter_inc] # per-iter #gauss increment -oov=`cat $lang/oov.int` -nj=`cat $alidir/num_jobs` || exit 1; -silphonelist=`cat $lang/phones/silence.csl` -ciphonelist=`cat $lang/phones/context_indep.csl` || exit 1; -sdata=$data/split$nj; -splice_opts=`cat $alidir/splice_opts 2>/dev/null` # frame-splicing options. -cmvn_opts=`cat $alidir/cmvn_opts 2>/dev/null` -delta_opts=`cat $alidir/delta_opts 2>/dev/null` -phone_map_opt= -[ ! -z "$phone_map" ] && phone_map_opt="--phone-map='$phone_map'" - -mkdir -p $dir/log -cp $alidir/splice_opts $dir 2>/dev/null # frame-splicing options. -cp $alidir/cmvn_opts $dir 2>/dev/null # cmn/cmvn option. -cp $alidir/delta_opts $dir 2>/dev/null # delta option. - -utils/lang/check_phones_compatible.sh $lang/phones.txt $alidir/phones.txt || exit 1; -cp $lang/phones.txt $dir || exit 1; - -echo $nj >$dir/num_jobs -[[ -d $sdata && $data/feats.scp -ot $sdata ]] || split_data.sh $data $nj || exit 1; - -# Set up features. - -if [ -f $alidir/final.mat ]; then feat_type=lda; else feat_type=delta; fi -echo "$0: feature type is $feat_type" - -## Set up speaker-independent features. 
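# The "speaker-independent" (SI) features are the un-adapted front end:
# CMVN plus deltas, or CMVN plus splicing and an LDA+MLLT transform,
# depending on how the alignment model was trained (see $feat_type below).
# Per-speaker fMLLR transforms are later applied on top of $sifeats to
# produce the adapted $feats that SAT is trained on.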
-case $feat_type in - delta) sifeats="ark,s,cs:apply-cmvn $cmvn_opts --utt2spk=ark:$sdata/JOB/utt2spk scp:$sdata/JOB/cmvn.scp scp:$sdata/JOB/feats.scp ark:- | add-deltas $delta_opts ark:- ark:- |";; - lda) sifeats="ark,s,cs:apply-cmvn $cmvn_opts --utt2spk=ark:$sdata/JOB/utt2spk scp:$sdata/JOB/cmvn.scp scp:$sdata/JOB/feats.scp ark:- | splice-feats $splice_opts ark:- ark:- | transform-feats $alidir/final.mat ark:- ark:- |" - cp $alidir/final.mat $dir - cp $alidir/full.mat $dir 2>/dev/null - ;; - *) echo "$0: invalid feature type $feat_type" && exit 1; -esac - -## Get initial fMLLR transforms (possibly from alignment dir) -if [ -f $alidir/trans.1 ]; then - echo "$0: Using transforms from $alidir" - feats="$sifeats transform-feats --utt2spk=ark:$sdata/JOB/utt2spk ark,s,cs:$alidir/trans.JOB ark:- ark:- |" - cur_trans_dir=$alidir -else - if [ $stage -le -5 ]; then - echo "$0: obtaining initial fMLLR transforms since not present in $alidir" - # The next line is necessary because of $silphonelist otherwise being incorrect; would require - # old $lang dir which would require another option. Not needed anyway. - [ ! -z "$phone_map" ] && \ - echo "$0: error: you must provide transforms if you use the --phone-map option." && exit 1; - $cmd JOB=1:$nj $dir/log/fmllr.0.JOB.log \ - ali-to-post "ark:gunzip -c $alidir/ali.JOB.gz|" ark:- \| \ - weight-silence-post $silence_weight $silphonelist $alidir/final.mdl ark:- ark:- \| \ - gmm-est-fmllr --fmllr-update-type=$fmllr_update_type \ - --spk2utt=ark:$sdata/JOB/spk2utt $alidir/final.mdl "$sifeats" \ - ark:- ark:$dir/trans.JOB || exit 1; - fi - feats="$sifeats transform-feats --utt2spk=ark:$sdata/JOB/utt2spk ark,s,cs:$dir/trans.JOB ark:- ark:- |" - cur_trans_dir=$dir -fi - -if [ $stage -le -4 ] && $train_tree; then - # Get tree stats. - echo "$0: Accumulating tree stats" - $cmd JOB=1:$nj $dir/log/acc_tree.JOB.log \ - acc-tree-stats $context_opts $tree_stats_opts $phone_map_opt --ci-phones=$ciphonelist $alidir/final.mdl "$feats" \ - "ark:gunzip -c $alidir/ali.JOB.gz|" $dir/JOB.treeacc || exit 1; - [ "`ls $dir/*.treeacc | wc -w`" -ne "$nj" ] && echo "$0: Wrong #tree-accs" && exit 1; - $cmd $dir/log/sum_tree_acc.log \ - sum-tree-stats $dir/treeacc $dir/*.treeacc || exit 1; - rm $dir/*.treeacc -fi - -if [ $stage -le -3 ] && $train_tree; then - echo "$0: Getting questions for tree clustering." - # preparing questions, roots file... 
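  # cluster-phones derives data-driven "questions" (sets of acoustically
  # similar phones) from the accumulated tree stats; compile-questions then
  # compiles them, together with the HMM topology, into the form that
  # build-tree uses when splitting states.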
- cluster-phones --pdf-class-list=$(($num_nonsil_states / 2)) \ - $cluster_phones_opts $context_opts \ - $dir/treeacc $lang/phones/sets.int $dir/questions.int 2>$dir/log/questions.log || exit 1; - cat $lang/phones/extra_questions.int >> $dir/questions.int - compile-questions $context_opts $compile_questions_opts $lang/topo $dir/questions.int $dir/questions.qst 2>$dir/log/compile_questions.log || exit 1; - - echo "$0: Building the tree" - $cmd $dir/log/build_tree.log \ - build-tree $context_opts --verbose=1 --max-leaves=$numleaves \ - --cluster-thresh=$cluster_thresh $dir/treeacc $lang/phones/roots.int \ - $dir/questions.qst $lang/topo $dir/tree || exit 1; -fi - -if [ $stage -le -2 ]; then - echo "$0: Initializing the model" - if $train_tree; then - gmm-init-model --write-occs=$dir/1.occs \ - $dir/tree $dir/treeacc $lang/topo $dir/1.mdl 2> $dir/log/init_model.log || exit 1; - grep 'no stats' $dir/log/init_model.log && echo "This is a bad warning."; - rm $dir/treeacc - else - cp $alidir/tree $dir/ || exit 1; - $cmd JOB=1 $dir/log/init_model.log \ - gmm-init-model-flat $dir/tree $lang/topo $dir/1.mdl \ - "$feats subset-feats ark:- ark:-|" || exit 1; - fi -fi - -if [ $stage -le -1 ]; then - # Convert the alignments. - echo "$0: Converting alignments from $alidir to use current tree" - $cmd JOB=1:$nj $dir/log/convert.JOB.log \ - convert-ali $phone_map_opt $alidir/final.mdl $dir/1.mdl $dir/tree \ - "ark:gunzip -c $alidir/ali.JOB.gz|" "ark:|gzip -c >$dir/ali.JOB.gz" || exit 1; -fi - -[ "$exit_stage" -eq 0 ] && echo "$0: Exiting early: --exit-stage $exit_stage" && exit 0; - -if [ $stage -le 0 ] && [ "$realign_iters" != "" ]; then - echo "$0: Compiling graphs of transcripts" - $cmd JOB=1:$nj $dir/log/compile_graphs.JOB.log \ - compile-train-graphs --read-disambig-syms=$lang/phones/disambig.int $dir/tree $dir/1.mdl $lang/L.fst \ - "ark:utils/sym2int.pl --map-oov $oov -f 2- $lang/words.txt < $sdata/JOB/text |" \ - "ark:|gzip -c >$dir/fsts.JOB.gz" || exit 1; -fi - -x=1 -while [ $x -lt $num_iters ]; do - echo Pass $x - if echo $realign_iters | grep -w $x >/dev/null && [ $stage -le $x ]; then - echo Aligning data - mdl="gmm-boost-silence --boost=$boost_silence `cat $lang/phones/optional_silence.csl` $dir/$x.mdl - |" - $cmd JOB=1:$nj $dir/log/align.$x.JOB.log \ - gmm-align-compiled $scale_opts --beam=$beam --retry-beam=$retry_beam --careful=$careful "$mdl" \ - "ark:gunzip -c $dir/fsts.JOB.gz|" "$feats" \ - "ark:|gzip -c >$dir/ali.JOB.gz" || exit 1; - fi - - if echo $fmllr_iters | grep -w $x >/dev/null; then - if [ $stage -le $x ]; then - echo Estimating fMLLR transforms - # We estimate a transform that's additional to the previous transform; - # we'll compose them. - $cmd JOB=1:$nj $dir/log/fmllr.$x.JOB.log \ - ali-to-post "ark:gunzip -c $dir/ali.JOB.gz|" ark:- \| \ - weight-silence-post $silence_weight $silphonelist $dir/$x.mdl ark:- ark:- \| \ - gmm-est-fmllr --fmllr-update-type=$fmllr_update_type \ - --spk2utt=ark:$sdata/JOB/spk2utt $dir/$x.mdl \ - "$feats" ark:- ark:$dir/tmp_trans.JOB || exit 1; - for n in `seq $nj`; do - ! 
( compose-transforms --b-is-affine=true \ - ark:$dir/tmp_trans.$n ark:$cur_trans_dir/trans.$n ark:$dir/composed_trans.$n \ - && mv $dir/composed_trans.$n $dir/trans.$n && \ - rm $dir/tmp_trans.$n ) 2>$dir/log/compose_transforms.$x.log \ - && echo "$0: Error composing transforms" && exit 1; - done - fi - feats="$sifeats transform-feats --utt2spk=ark:$sdata/JOB/utt2spk ark:$dir/trans.JOB ark:- ark:- |" - cur_trans_dir=$dir - fi - - if [ $stage -le $x ]; then - $cmd JOB=1:$nj $dir/log/acc.$x.JOB.log \ - gmm-acc-stats-ali $dir/$x.mdl "$feats" \ - "ark,s,cs:gunzip -c $dir/ali.JOB.gz|" $dir/$x.JOB.acc || exit 1; - [ `ls $dir/$x.*.acc | wc -w` -ne "$nj" ] && echo "$0: Wrong #accs" && exit 1; - $cmd $dir/log/update.$x.log \ - gmm-est --power=$power --write-occs=$dir/$[$x+1].occs --mix-up=$numgauss $dir/$x.mdl \ - "gmm-sum-accs - $dir/$x.*.acc |" $dir/$[$x+1].mdl || exit 1; - rm $dir/$x.mdl $dir/$x.*.acc - rm $dir/$x.occs - fi - [ $x -le $max_iter_inc ] && numgauss=$[$numgauss+$incgauss]; - x=$[$x+1]; -done - - -if [ $stage -le $x ]; then - # Accumulate stats for "alignment model"-- this model is - # computed with the speaker-independent features, but matches Gaussian-for-Gaussian - # with the final speaker-adapted model. - $cmd JOB=1:$nj $dir/log/acc_alimdl.JOB.log \ - ali-to-post "ark:gunzip -c $dir/ali.JOB.gz|" ark:- \| \ - gmm-acc-stats-twofeats $dir/$x.mdl "$feats" "$sifeats" \ - ark,s,cs:- $dir/$x.JOB.acc || exit 1; - [ `ls $dir/$x.*.acc | wc -w` -ne "$nj" ] && echo "$0: Wrong #accs" && exit 1; - # Update model. - $cmd $dir/log/est_alimdl.log \ - gmm-est --power=$power --remove-low-count-gaussians=false $dir/$x.mdl \ - "gmm-sum-accs - $dir/$x.*.acc|" $dir/$x.alimdl || exit 1; - rm $dir/$x.*.acc -fi - -rm $dir/final.{mdl,alimdl,occs} 2>/dev/null -ln -s $x.mdl $dir/final.mdl -ln -s $x.occs $dir/final.occs -ln -s $x.alimdl $dir/final.alimdl - - -steps/diagnostic/analyze_alignments.sh --cmd "$cmd" $lang $dir - -utils/summarize_warnings.pl $dir/log -( - echo "$0: Likelihood evolution:" - for x in `seq $[$num_iters-1]`; do - tail -n 30 $dir/log/acc.$x.*.log | awk '/Overall avg like/{l += $(NF-3)*$(NF-1); t += $(NF-1); } - /Overall average logdet/{d += $(NF-3)*$(NF-1); t2 += $(NF-1);} - END{ d /= t2; l /= t; printf("%s ", d+l); } ' - done - echo -) | tee $dir/log/summary.log - - -steps/info/gmm_dir_info.pl $dir - -echo "$0: done training SAT system in $dir" - -exit 0 diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/train.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/train.sh deleted file mode 100644 index f3a3d3fc7..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/train.sh +++ /dev/null @@ -1,43 +0,0 @@ -#!/bin/bash - -set -eu - -w2v_dir= # contains features `{train,valid}.{npy,lengths}`, real transcripts `{train,valid}.${label}`, and dict `dict.${label}.txt` -lab_dir= # contains pseudo labels `{train,valid}.txt` -out_dir= # output root -arpa_lm= # phone LM -arpa_lm_bin= # (binary) phone LM for KenLM, used in unsupervised selection - -label=phnc -train_name="train" -valid_name="valid" -data_dir=${out_dir}/data - -mkdir -p ${out_dir}/exp -local/prepare_lang.sh $w2v_dir/dict.${label}.txt $data_dir -local/prepare_lm.sh $arpa_lm $data_dir - -for x in $train_name $valid_name; do - x_gt=${x}_gt - - # prepare pseudo data - python local/prepare_data_from_w2v.py $w2v_dir $data_dir $x - steps/compute_cmvn_stats.sh $data_dir/$x $out_dir/exp/make_feat/$x $out_dir/feats/$x - python local/copy_aligned_text.py < 
$lab_dir/$x.txt > $data_dir/$x/text - - # prepare ground truth data - mkdir $data_dir/$x_gt - cp $data_dir/$x/{feats.scp,cmvn.scp,utt2spk,spk2utt} $data_dir/$x_gt/ - python local/copy_aligned_text.py < $w2v_dir/$x.$label > $data_dir/$x_gt/text -done - -local/train_subset_lgbeam.sh \ - --out_root ${out_dir} --out_name exp --train $train_name --valid $valid_name \ - --mono_size 2000 --tri1_size 5000 --tri2b_size -1 --tri3b_size -1 \ - --stage 1 --max_stage 3 $data_dir $data_dir/lang $data_dir/lang_test - -local/unsup_select_decode.sh \ - --split $valid_name --kenlm_path $arpa_lm_bin \ - --ref_txt $data_dir/${valid_name}_gt/text \ - --psd_txt $data_dir/${valid_name}/text \ - $out_dir/exp diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/utils b/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/utils deleted file mode 100644 index b24088521..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/utils +++ /dev/null @@ -1 +0,0 @@ -../../wsj/s5/utils \ No newline at end of file diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/models/__init__.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/models/__init__.py deleted file mode 100644 index 3e3039b70..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/models/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .wav2vec_u import Wav2vec_U - - -__all__ = [ - "Wav2vec_U", -] diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py deleted file mode 100644 index 27792ebda..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/models/wav2vec_u.py +++ /dev/null @@ -1,637 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
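# Module overview (annotation): this file defines the pieces of the wav2vec-U
# model used below -- a SegmentationType enum with matching Segmenter variants
# that subsample dense features (pre_segment) or merge consecutive identical
# predictions (logit_segment), the Wav2vec_UConfig dataclass of generator and
# discriminator hyperparameters, and the convolutional Discriminator used for
# adversarial training.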
- -from dataclasses import dataclass -from enum import Enum, auto -import math -import numpy as np -from typing import Tuple, List, Optional, Dict - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch import autograd - -from fairseq import checkpoint_utils, utils -from fairseq.dataclass import FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.modules import ( - SamePad, - TransposeLast, -) - - -class SegmentationType(Enum): - NONE = auto() - RANDOM = auto() - UNIFORM_RANDOM = auto() - UNIFORM_RANDOM_JOIN = auto() - JOIN = auto() - - -@dataclass -class SegmentationConfig(FairseqDataclass): - type: SegmentationType = SegmentationType.NONE - subsample_rate: float = 0.25 - mean_pool: bool = True - mean_pool_join: bool = False - remove_zeros: bool = False - - -@dataclass -class Wav2vec_UConfig(FairseqDataclass): - - discriminator_kernel: int = 3 - discriminator_dilation: int = 1 - discriminator_dim: int = 256 - discriminator_causal: bool = True - discriminator_linear_emb: bool = False - discriminator_depth: int = 1 - discriminator_max_pool: bool = False - discriminator_act_after_linear: bool = False - discriminator_dropout: float = 0.0 - discriminator_spectral_norm: bool = False - discriminator_weight_norm: bool = False - - generator_kernel: int = 4 - generator_dilation: int = 1 - generator_stride: int = 1 - generator_bias: bool = False - generator_dropout: float = 0.0 - - blank_weight: float = 0 - blank_mode: str = "add" - blank_is_sil: bool = False - no_softmax: bool = False - - smoothness_weight: float = 0.0 - smoothing: float = 0.0 - smoothing_one_sided: bool = False - gradient_penalty: float = 0.0 - probabilistic_grad_penalty_slicing: bool = False - code_penalty: float = 0.0 - gumbel: bool = False - hard_gumbel: bool = True - temp: Tuple[float, float, float] = (2, 0.1, 0.99995) - input_dim: int = 128 - - segmentation: SegmentationConfig = SegmentationConfig() - - -class Segmenter(nn.Module): - cfg: SegmentationConfig - - def __init__(self, cfg: SegmentationConfig): - super().__init__() - self.cfg = cfg - self.subsample_rate = cfg.subsample_rate - - def pre_segment(self, dense_x, dense_padding_mask): - return dense_x, dense_padding_mask - - def logit_segment(self, logits, padding_mask): - return logits, padding_mask - - -class RandomSegmenter(Segmenter): - def pre_segment(self, dense_x, dense_padding_mask): - target_num = math.ceil(dense_x.size(1) * self.subsample_rate) - ones = torch.ones(dense_x.shape[:-1], device=dense_x.device) - indices, _ = ones.multinomial(target_num).sort(dim=-1) - indices_ld = indices.unsqueeze(-1).expand(-1, -1, dense_x.size(-1)) - dense_x = dense_x.gather(1, indices_ld) - dense_padding_mask = dense_padding_mask.gather(1, index=indices) - return dense_x, dense_padding_mask - - -class UniformRandomSegmenter(Segmenter): - def pre_segment(self, dense_x, dense_padding_mask): - bsz, tsz, fsz = dense_x.shape - - target_num = math.ceil(tsz * self.subsample_rate) - - rem = tsz % target_num - - if rem > 0: - dense_x = F.pad(dense_x, [0, 0, 0, target_num - rem]) - dense_padding_mask = F.pad( - dense_padding_mask, [0, target_num - rem], value=True - ) - - dense_x = dense_x.view(bsz, target_num, -1, fsz) - dense_padding_mask = dense_padding_mask.view(bsz, target_num, -1) - - if self.cfg.mean_pool: - dense_x = dense_x.mean(dim=-2) - dense_padding_mask = dense_padding_mask.all(dim=-1) - else: - ones = torch.ones((bsz, dense_x.size(2)), device=dense_x.device) - indices = ones.multinomial(1) - indices = 
indices.unsqueeze(-1).expand(-1, target_num, -1) - indices_ld = indices.unsqueeze(-1).expand(-1, -1, -1, fsz) - dense_x = dense_x.gather(2, indices_ld).reshape(bsz, -1, fsz) - dense_padding_mask = dense_padding_mask.gather(2, index=indices).reshape( - bsz, -1 - ) - return dense_x, dense_padding_mask - - -class JoinSegmenter(Segmenter): - def logit_segment(self, logits, padding_mask): - preds = logits.argmax(dim=-1) - - if padding_mask.any(): - preds[padding_mask] = -1 # mark pad - uniques = [] - - bsz, tsz, csz = logits.shape - - for p in preds: - uniques.append( - p.cpu().unique_consecutive(return_inverse=True, return_counts=True) - ) - - new_tsz = max(u[0].numel() for u in uniques) - new_logits = logits.new_zeros(bsz, new_tsz, csz) - new_pad = padding_mask.new_zeros(bsz, new_tsz) - - for b in range(bsz): - u, idx, c = uniques[b] - keep = u != -1 - - if self.cfg.remove_zeros: - keep.logical_and_(u != 0) - - if self.training and not self.cfg.mean_pool_join: - u[0] = 0 - u[1:] = c.cumsum(0)[:-1] - m = c > 1 - r = torch.rand(m.sum()) - o = (c[m] * r).long() - u[m] += o - new_logits[b, : u.numel()] = logits[b, u] - else: - new_logits[b].index_add_( - dim=0, index=idx.to(new_logits.device), source=logits[b] - ) - new_logits[b, : c.numel()] /= c.unsqueeze(-1).to(new_logits.device) - - new_sz = keep.sum() - if not keep.all(): - kept_logits = new_logits[b, : c.numel()][keep] - new_logits[b, :new_sz] = kept_logits - - if new_sz < new_tsz: - pad = new_tsz - new_sz - new_logits[b, -pad:] = 0 - new_pad[b, -pad:] = True - - return new_logits, new_pad - - -class UniformRandomJoinSegmenter(UniformRandomSegmenter, JoinSegmenter): - pass - - -SEGMENT_FACTORY = { - SegmentationType.NONE: Segmenter, - SegmentationType.RANDOM: RandomSegmenter, - SegmentationType.UNIFORM_RANDOM: UniformRandomSegmenter, - SegmentationType.UNIFORM_RANDOM_JOIN: UniformRandomJoinSegmenter, - SegmentationType.JOIN: JoinSegmenter, -} - - -class Discriminator(nn.Module): - def __init__(self, dim, cfg: Wav2vec_UConfig): - super().__init__() - - inner_dim = cfg.discriminator_dim - kernel = cfg.discriminator_kernel - dilation = cfg.discriminator_dilation - self.max_pool = cfg.discriminator_max_pool - - if cfg.discriminator_causal: - padding = kernel - 1 - else: - padding = kernel // 2 - - def make_conv(in_d, out_d, k, p=0, has_dilation=True): - conv = nn.Conv1d( - in_d, - out_d, - kernel_size=k, - padding=p, - dilation=dilation if has_dilation else 1, - ) - if cfg.discriminator_spectral_norm: - conv = nn.utils.spectral_norm(conv) - elif cfg.discriminator_weight_norm: - conv = nn.utils.weight_norm(conv) - return conv - - inner_net = [ - nn.Sequential( - make_conv(inner_dim, inner_dim, kernel, padding), - SamePad(kernel_size=kernel, causal=cfg.discriminator_causal), - nn.Dropout(cfg.discriminator_dropout), - nn.GELU(), - ) - for _ in range(cfg.discriminator_depth - 1) - ] + [ - make_conv(inner_dim, 1, kernel, padding, has_dilation=False), - SamePad(kernel_size=kernel, causal=cfg.discriminator_causal), - ] - - if cfg.discriminator_linear_emb: - emb_net = [make_conv(dim, inner_dim, 1)] - else: - emb_net = [ - make_conv(dim, inner_dim, kernel, padding), - SamePad(kernel_size=kernel, causal=cfg.discriminator_causal), - ] - - if cfg.discriminator_act_after_linear: - emb_net.append(nn.GELU()) - - self.net = nn.Sequential( - *emb_net, - nn.Dropout(cfg.discriminator_dropout), - *inner_net, - ) - - def forward(self, x, padding_mask): - x = x.transpose(1, 2) # BTC -> BCT - x = self.net(x) - x = x.transpose(1, 2) - x_sz = x.size(1) - if 
padding_mask is not None and padding_mask.any() and padding_mask.dim() > 1: - padding_mask = padding_mask[:, : x.size(1)] - x[padding_mask] = float("-inf") if self.max_pool else 0 - x_sz = x_sz - padding_mask.sum(dim=-1) - x = x.squeeze(-1) - if self.max_pool: - x, _ = x.max(dim=-1) - else: - x = x.sum(dim=-1) - x = x / x_sz - return x - - -class Generator(nn.Module): - def __init__(self, input_dim, output_dim, cfg: Wav2vec_UConfig): - super().__init__() - - self.cfg = cfg - self.output_dim = output_dim - self.stride = cfg.generator_stride - self.dropout = nn.Dropout(cfg.generator_dropout) - - padding = cfg.generator_kernel // 2 - self.proj = nn.Sequential( - TransposeLast(), - nn.Conv1d( - input_dim, - output_dim, - kernel_size=cfg.generator_kernel, - stride=cfg.generator_stride, - dilation=cfg.generator_dilation, - padding=padding, - bias=cfg.generator_bias, - ), - TransposeLast(), - ) - - def forward(self, dense_x, tokens, dense_padding_mask): - dense_x = self.dropout(dense_x) - - dense_x = self.proj(dense_x) - if self.stride > 1: - dense_padding_mask = dense_padding_mask[:, :: self.stride] - - if dense_padding_mask.size(1) != dense_x.size(1): - new_padding = dense_padding_mask.new_zeros(dense_x.shape[:-1]) - diff = new_padding.size(1) - dense_padding_mask.size(1) - assert ( - diff > 0 - ), f"{new_padding.shape}, {dense_padding_mask.shape}, {dense_x.shape}, {diff}" - if diff > 0: - new_padding[:, diff:] = dense_padding_mask - else: - assert diff < 0 - new_padding = dense_padding_mask[:, :diff] - - dense_padding_mask = new_padding - - result = {} - - token_x = None - if tokens is not None: - token_x = dense_x.new_zeros(tokens.numel(), self.output_dim) - token_x.scatter_(1, tokens.view(-1, 1).long(), 1) - token_x = token_x.view(tokens.shape + (self.output_dim,)) - - result["dense_x"] = dense_x - result["token_x"] = token_x - result["dense_padding_mask"] = dense_padding_mask - - return result - - -@register_model("wav2vec_u", dataclass=Wav2vec_UConfig) -class Wav2vec_U(BaseFairseqModel): - def calc_gradient_penalty(self, real_data, fake_data): - - b_size = min(real_data.size(0), fake_data.size(0)) - t_size = min(real_data.size(1), fake_data.size(1)) - - if self.cfg.probabilistic_grad_penalty_slicing: - - def get_slice(data, dim, target_size): - - size = data.size(dim) - diff = size - target_size - if diff <= 0: - return data - - start = np.random.randint(0, diff + 1) - return data.narrow(dim=dim, start=start, length=target_size) - - real_data = get_slice(real_data, 0, b_size) - real_data = get_slice(real_data, 1, t_size) - fake_data = get_slice(fake_data, 0, b_size) - fake_data = get_slice(fake_data, 1, t_size) - - else: - real_data = real_data[:b_size, :t_size] - fake_data = fake_data[:b_size, :t_size] - - alpha = torch.rand(real_data.size(0), 1, 1) - alpha = alpha.expand(real_data.size()) - alpha = alpha.to(real_data.device) - - interpolates = alpha * real_data + ((1 - alpha) * fake_data) - - disc_interpolates = self.discriminator(interpolates, None) - - gradients = autograd.grad( - outputs=disc_interpolates, - inputs=interpolates, - grad_outputs=torch.ones(disc_interpolates.size(), device=real_data.device), - create_graph=True, - retain_graph=True, - only_inputs=True, - )[0] - - gradient_penalty = (gradients.norm(2, dim=1) - 1) ** 2 - return gradient_penalty - - def set_num_updates(self, num_updates): - super().set_num_updates(num_updates) - self.update_num = num_updates - self.curr_temp = max( - self.max_temp * self.temp_decay ** num_updates, self.min_temp - ) - - def 
discrim_step(self, num_updates): - return num_updates % 2 == 1 - - def get_groups_for_update(self, num_updates): - return "discriminator" if self.discrim_step(num_updates) else "generator" - - def __init__(self, cfg: Wav2vec_UConfig, target_dict): - super().__init__() - - self.cfg = cfg - self.zero_index = target_dict.index("<SIL>") if "<SIL>" in target_dict else 0 - self.smoothness_weight = cfg.smoothness_weight - - output_size = len(target_dict) - self.pad = target_dict.pad() - self.eos = target_dict.eos() - self.smoothing = cfg.smoothing - self.smoothing_one_sided = cfg.smoothing_one_sided - self.no_softmax = cfg.no_softmax - self.gumbel = cfg.gumbel - self.hard_gumbel = cfg.hard_gumbel - self.last_acc = None - - self.gradient_penalty = cfg.gradient_penalty - self.code_penalty = cfg.code_penalty - self.blank_weight = cfg.blank_weight - self.blank_mode = cfg.blank_mode - self.blank_index = target_dict.index("<SIL>") if cfg.blank_is_sil else 0 - assert self.blank_index != target_dict.unk() - - self.discriminator = Discriminator(output_size, cfg) - for p in self.discriminator.parameters(): - p.param_group = "discriminator" - - self.pca_A = self.pca_b = None - d = cfg.input_dim - - self.segmenter = SEGMENT_FACTORY[cfg.segmentation.type](cfg.segmentation) - - self.generator = Generator(d, output_size, cfg) - - for p in self.generator.parameters(): - p.param_group = "generator" - - for p in self.segmenter.parameters(): - p.param_group = "generator" - - self.max_temp, self.min_temp, self.temp_decay = cfg.temp - self.curr_temp = self.max_temp - self.update_num = 0 - - @classmethod - def build_model(cls, cfg, task): - return cls(cfg, task.target_dictionary) - - def get_logits( - self, - net_output: Optional[Dict[str, List[Optional[torch.Tensor]]]], - normalize: bool = False, - ): - logits = net_output["logits"] - - if self.blank_weight != 0: - if self.blank_mode == "add": - logits[..., self.blank_index] += self.blank_weight - elif self.blank_mode == "set": - logits[..., self.blank_index] = self.blank_weight - else: - raise Exception(f"invalid blank mode {self.blank_mode}") - - padding = net_output["padding_mask"] - if padding.any(): - logits[padding] = float("-inf") - logits[padding][..., self.blank_index] = float("inf") - - if normalize: - logits = utils.log_softmax(logits.float(), dim=-1) - - return logits.transpose(0, 1) - - def get_normalized_probs( - self, - net_output: Tuple[ - torch.Tensor, Optional[Dict[str, List[Optional[torch.Tensor]]]] - ], - log_probs: bool, - sample: Optional[Dict[str, torch.Tensor]] = None, - ): - logits = self.get_logits(net_output) - - probs = super().get_normalized_probs(logits, log_probs, sample) - # BTC -> TBC for ctc - probs = probs.transpose(0, 1) - return probs - - def normalize(self, dense_x): - - bsz, tsz, csz = dense_x.shape - - if dense_x.numel() == 0: - raise Exception(dense_x.shape) - _, k = dense_x.max(-1) - hard_x = ( - dense_x.new_zeros(bsz * tsz, csz) - .scatter_(-1, k.view(-1, 1), 1.0) - .view(-1, csz) - ) - hard_probs = torch.mean(hard_x.float(), dim=0) - code_perplexity = torch.exp( - -torch.sum(hard_probs * torch.log(hard_probs + 1e-7), dim=-1) - ) - - avg_probs = torch.softmax(dense_x.reshape(-1, csz).float(), dim=-1).mean(dim=0) - prob_perplexity = torch.exp( - -torch.sum(avg_probs * torch.log(avg_probs + 1e-7), dim=-1) - ) - - if not self.no_softmax: - if self.training and self.gumbel: - dense_x = F.gumbel_softmax( - dense_x.float(), tau=self.curr_temp, hard=self.hard_gumbel - ).type_as(dense_x) - else: - dense_x = dense_x.softmax(-1) - - 
return dense_x, code_perplexity, prob_perplexity - - def forward( - self, - features, - padding_mask, - random_label=None, - dense_x_only=False, - segment=True, - ): - if segment: - features, padding_mask = self.segmenter.pre_segment(features, padding_mask) - - orig_size = features.size(0) * features.size(1) - padding_mask.sum() - - gen_result = self.generator(features, random_label, padding_mask) - - orig_dense_x, token_x = gen_result["dense_x"], gen_result["token_x"] - orig_dense_padding_mask = gen_result["dense_padding_mask"] - - if segment: - dense_x, dense_padding_mask = self.segmenter.logit_segment( - orig_dense_x, orig_dense_padding_mask - ) - else: - dense_x = orig_dense_x - dense_padding_mask = orig_dense_padding_mask - - dense_logits = dense_x - prob_perplexity = None - code_perplexity = None - - if not (self.no_softmax and dense_x_only): - dense_x, code_perplexity, prob_perplexity = self.normalize(dense_logits) - - if dense_x_only or self.discriminator is None: - return { - "logits": dense_x, - "padding_mask": dense_padding_mask, - } - - token_padding_mask = random_label == self.pad - - dense_y = self.discriminator(dense_x, dense_padding_mask) - token_y = self.discriminator(token_x, token_padding_mask) - - sample_size = features.size(0) - - d_step = self.discrim_step(self.update_num) - - fake_smooth = self.smoothing - real_smooth = self.smoothing - if self.smoothing_one_sided: - fake_smooth = 0 - - zero_loss = None - smoothness_loss = None - code_pen = None - - if d_step: - loss_dense = F.binary_cross_entropy_with_logits( - dense_y, - dense_y.new_ones(dense_y.shape) - fake_smooth, - reduction="sum", - ) - loss_token = F.binary_cross_entropy_with_logits( - token_y, - token_y.new_zeros(token_y.shape) + real_smooth, - reduction="sum", - ) - if self.training and self.gradient_penalty > 0: - grad_pen = self.calc_gradient_penalty(token_x, dense_x) - grad_pen = grad_pen.sum() * self.gradient_penalty - else: - grad_pen = None - else: - grad_pen = None - loss_token = None - loss_dense = F.binary_cross_entropy_with_logits( - dense_y, - dense_y.new_zeros(dense_y.shape) + fake_smooth, - reduction="sum", - ) - num_vars = dense_x.size(-1) - if prob_perplexity is not None: - code_pen = (num_vars - prob_perplexity) / num_vars - code_pen = code_pen * sample_size * self.code_penalty - - if self.smoothness_weight > 0: - smoothness_loss = F.mse_loss( - dense_logits[:, :-1], dense_logits[:, 1:], reduction="none" - ) - smoothness_loss[dense_padding_mask[:, 1:]] = 0 - smoothness_loss = ( - smoothness_loss.mean() * sample_size * self.smoothness_weight - ) - - result = { - "losses": { - "grad_pen": grad_pen, - "code_pen": code_pen, - "smoothness": smoothness_loss, - }, - "temp": self.curr_temp, - "code_ppl": code_perplexity, - "prob_ppl": prob_perplexity, - "d_steps": int(d_step), - "sample_size": sample_size, - } - - suff = "_d" if d_step else "_g" - result["losses"]["dense" + suff] = loss_dense - result["losses"]["token" + suff] = loss_token - - return result diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/apply_pca.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/apply_pca.py deleted file mode 100644 index 10ad6ce47..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/apply_pca.py +++ /dev/null @@ -1,76 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
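The `forward()` above returns separate discriminator and generator losses depending on `discrim_step()`. As a rough sketch of how the alternating GAN update could be driven (the optimizer and batch wiring here are placeholders, not fairseq's actual trainer API):

```python
# Sketch of the alternating update implied by discrim_step() and
# get_groups_for_update(): odd-numbered updates train the discriminator,
# even-numbered updates train the generator. `model` is a Wav2vec_U
# instance; opt_disc/opt_gen are assumed to cover the parameters whose
# param_group matches, and `batch` holds the forward() keyword arguments.
def gan_step(model, batch, opt_disc, opt_gen, num_updates):
    model.set_num_updates(num_updates)  # also anneals the gumbel temperature
    out = model(**batch)                # builds the "losses" dict seen above
    loss = sum(v.sum() for v in out["losses"].values() if v is not None)
    opt = opt_disc if num_updates % 2 == 1 else opt_gen  # mirrors discrim_step()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.detach()
```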
- -import argparse -import os -import os.path as osp -import math -import numpy as np -import tqdm -import torch -from shutil import copyfile - -from npy_append_array import NpyAppendArray - - -def get_parser(): - parser = argparse.ArgumentParser( - description="transforms features via a given pca and stores them in the target dir" - ) - # fmt: off - parser.add_argument('source', help='directory with features') - parser.add_argument('--split', help='which split to read', required=True) - parser.add_argument('--save-dir', help='where to save the output', required=True) - parser.add_argument('--pca-path', type=str, help='pca location. will append _A.npy and _b.npy', required=True) - parser.add_argument('--batch-size', type=int, default=2048000, help='batch size') - parser.add_argument('--unfiltered', action='store_true', help='process the unfiltered version') - # fmt: on - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - source_path = osp.join(args.source, args.split) - data_path = source_path + "_unfiltered" if args.unfiltered else source_path - - print(f"data path: {data_path}") - - features = np.load(data_path + ".npy", mmap_mode="r") - pca_A = torch.from_numpy(np.load(args.pca_path + "_A.npy")).cuda() - pca_b = torch.from_numpy(np.load(args.pca_path + "_b.npy")).cuda() - - os.makedirs(args.save_dir, exist_ok=True) - save_path = osp.join(args.save_dir, args.split) - - copyfile(source_path + ".tsv", save_path + ".tsv") - copyfile(data_path + ".lengths", save_path + ".lengths") - - if osp.exists(source_path + ".phn"): - copyfile(source_path + ".phn", save_path + ".phn") - - if osp.exists(source_path + ".wrd"): - copyfile(source_path + ".wrd", save_path + ".wrd") - - if osp.exists(save_path + ".npy"): - os.remove(save_path + ".npy") - npaa = NpyAppendArray(save_path + ".npy") - - batches = math.ceil(features.shape[0] / args.batch_size) - - with torch.no_grad(): - for b in tqdm.trange(batches): - start = b * args.batch_size - end = start + args.batch_size - x = torch.from_numpy(features[start:end]).cuda() - x = torch.matmul(x, pca_A) + pca_b - npaa.append(x.cpu().numpy()) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/copy_labels.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/copy_labels.py deleted file mode 100644 index 989868388..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/copy_labels.py +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import sys - -for idx, line in enumerate(sys.stdin): - print(f"utt{idx:010d} {line}", end="") diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/filter_lexicon.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/filter_lexicon.py deleted file mode 100644 index 5bf3e51e7..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/filter_lexicon.py +++ /dev/null @@ -1,40 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree.
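`apply_pca.py` above is just a batched affine map run on the GPU. A NumPy-only equivalent, assuming the `<dim>_pca_A.npy` / `<dim>_pca_b.npy` naming that `pca.py` (later in this diff) produces and that `prepare_audio.sh` passes via `--pca-path`:

```python
# CPU sketch of the transform apply_pca.py performs in CUDA batches:
# project features as x @ A + b. The file names below assume --dim 512,
# matching the "${dim}_pca" prefix used by prepare_audio.sh.
import numpy as np

A = np.load("pca/512_pca_A.npy")  # (d_in, d_out); pca.py saves A transposed
b = np.load("pca/512_pca_b.npy")  # (d_out,)
x = np.random.rand(16, A.shape[0]).astype(np.float32)  # stand-in features
print((x @ A + b).shape)          # (16, d_out)
```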
- -import argparse -import sys - -from fairseq.data import Dictionary - - -def get_parser(): - parser = argparse.ArgumentParser( - description="filters a lexicon given a unit dictionary" - ) - parser.add_argument("-d", "--unit-dict", help="unit dictionary", required=True) - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - d = Dictionary.load(args.unit_dict) - symbols = set(d.symbols) - - for line in sys.stdin: - items = line.rstrip().split() - skip = len(items) < 2 - for x in items[1:]: - if x not in symbols: - skip = True - break - if not skip: - print(line, end="") - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/filter_tsv.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/filter_tsv.py deleted file mode 100644 index a09d79acf..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/filter_tsv.py +++ /dev/null @@ -1,37 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import os -import argparse -import sys - - -parser = argparse.ArgumentParser() -parser.add_argument("--tsv", required=True, type=str) -parser.add_argument("--no-skip", action="store_true") -parser.add_argument("--keep", action="store_true") -params = parser.parse_args() - - -def get_fname(line): - p = os.path.basename(line.split("\t")[0]) - p = os.path.splitext(p)[0] - return p - - -# filenames to exclude -seen = set() -with open(params.tsv) as f: - if not params.no_skip: - root = next(f).rstrip() - for line in f: - seen.add(get_fname(line)) - -for i, line in enumerate(sys.stdin): - exists = get_fname(line) in seen - keep = (exists and params.keep) or (not exists and not params.keep) - if i == 0 or keep: - print(line, end="") diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/g2p_wrd_to_phn.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/g2p_wrd_to_phn.py deleted file mode 100644 index 2e31c307b..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/g2p_wrd_to_phn.py +++ /dev/null @@ -1,45 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import sys - -from g2p_en import G2p - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--compact", - action="store_true", - help="if set, compacts phones", - ) - args = parser.parse_args() - - compact = args.compact - - wrd_to_phn = {} - g2p = G2p() - for line in sys.stdin: - words = line.strip().split() - phones = [] - for w in words: - if w not in wrd_to_phn: - wrd_to_phn[w] = g2p(w) - if compact: - wrd_to_phn[w] = [ - p[:-1] if p[-1].isnumeric() else p for p in wrd_to_phn[w] - ] - phones.extend(wrd_to_phn[w]) - try: - print(" ".join(phones)) - except: - print(wrd_to_phn, words, phones, file=sys.stderr) - raise - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/ltr_to_wrd.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/ltr_to_wrd.py deleted file mode 100644 index 36c85d1e2..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/ltr_to_wrd.py +++ /dev/null @@ -1,16 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import sys - - -def main(): - for line in sys.stdin: - print(line.replace(" ", "").replace("|", " ").strip()) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/mean_pool.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/mean_pool.py deleted file mode 100644 index 4eea048ef..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/mean_pool.py +++ /dev/null @@ -1,99 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import os -import os.path as osp -import math -import numpy as np -import tqdm -import torch -import torch.nn.functional as F -from shutil import copyfile - -from npy_append_array import NpyAppendArray - - -def get_parser(): - parser = argparse.ArgumentParser( - description="mean pools representations by compressing uniform splits of the data" - ) - # fmt: off - parser.add_argument('source', help='directory with features') - parser.add_argument('--split', help='which split to read', required=True) - parser.add_argument('--save-dir', help='where to save the output', required=True) - parser.add_argument('--subsample-rate', type=float, default=0.5, help='size to subsample data to') - - parser.add_argument('--remove-extra', action='store_true', help='if true, removes extra states that cannot be pooled, otherwise pads by repeating the last frame') - # fmt: on - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - source_path = osp.join(args.source, args.split) - - print(f"data path: {source_path}") - - features = np.load(source_path + ".npy", mmap_mode="r") - - os.makedirs(args.save_dir, exist_ok=True) - save_path = osp.join(args.save_dir, args.split) - - copyfile(source_path + ".tsv", save_path + ".tsv") - - if os.path.exists(source_path + ".phn"): - copyfile(source_path + ".phn", save_path + ".phn") - if os.path.exists(source_path + ".wrd"): - copyfile(source_path + ".wrd", save_path + ".wrd") - - if os.path.exists(osp.join(args.source, "dict.phn.txt")): - copyfile( - osp.join(args.source, "dict.phn.txt"), - osp.join(args.save_dir, "dict.phn.txt"), - ) - - if osp.exists(save_path + ".npy"): - os.remove(save_path + ".npy") - npaa = NpyAppendArray(save_path + ".npy") - - with open(source_path + ".lengths", "r") as lf: - lengths = lf.readlines() - - fsz = features.shape[-1] - start = 0 - with torch.no_grad(): - with open(save_path + ".lengths", "w") as lengths_out: - for length in tqdm.tqdm(lengths): - length = int(length) - end = start + length - feats = features[start:end] - start += length - x = torch.from_numpy(feats).cuda() - target_num = math.ceil(length * args.subsample_rate) - rem = length % target_num - - if rem > 0: - if args.remove_extra: - to_rem = target_num - rem - target_num -= 1 - x = x[:-to_rem] - else: - to_add = target_num - rem - x = F.pad(x, [0, 0, 0, to_add]) - x[-to_add:] = x[-to_add - 1] - - x = x.view(target_num, -1, fsz) - x = x.mean(dim=-2) - print(target_num, file=lengths_out) - npaa.append(x.cpu().numpy()) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/merge_clusters.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/merge_clusters.py deleted file mode 100644 index
2780f9d97..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/merge_clusters.py +++ /dev/null @@ -1,114 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import os -import os.path as osp -import numpy as np -import tqdm -import torch -import random -from shutil import copyfile - -from npy_append_array import NpyAppendArray - - -def get_parser(): - parser = argparse.ArgumentParser( - description="merges adjacent frames that share a cluster assignment and stores the pooled features in the target dir" - ) - # fmt: off - parser.add_argument('source', help='directory with features') - parser.add_argument('--split', help='which split to read', required=True) - parser.add_argument('--save-dir', help='where to save the output', required=True) - parser.add_argument('--cluster-dir', help='where the clusters are') - parser.add_argument('--pooling', type=str, default='mean', choices=['mean', 'sample'], help='how to pool') - # fmt: on - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - source_path = osp.join(args.source, args.split) - cluster_path = osp.join(args.cluster_dir, args.split + ".src") - print(f"data path: {source_path}") - - features = np.load(source_path + ".npy", mmap_mode="r") - sizes = [] - offsets = [] - offset = 0 - with open(source_path + ".lengths", "r") as len_f: - for line in len_f: - length = int(line.rstrip()) - sizes.append(length) - offsets.append(offset) - offset += length - - clusters = [] - with open(cluster_path, "r") as cf: - for line in cf: - line = line.rstrip() - items = line.split() - items = list(map(int, items)) - clusters.append(items) - - os.makedirs(args.save_dir, exist_ok=True) - save_path = osp.join(args.save_dir, args.split) - - copyfile(source_path + ".tsv", save_path + ".tsv") - - if os.path.exists(source_path + ".phn"): - copyfile(source_path + ".phn", save_path + ".phn") - if os.path.exists(osp.join(args.source, "dict.phn.txt")): - copyfile( - osp.join(args.source, "dict.phn.txt"), - osp.join(args.save_dir, "dict.phn.txt"), - ) - if os.path.exists(source_path + ".wrd"): - copyfile(source_path + ".wrd", save_path + ".wrd") - - if osp.exists(save_path + ".npy"): - os.remove(save_path + ".npy") - npaa = NpyAppendArray(save_path + ".npy") - - def merge(feats, clust): - feats = torch.from_numpy(feats.copy()) - clust = torch.LongTensor(clust) - _, counts = clust.unique_consecutive(return_counts=True) - curr = 0 - - merged = [] - for c in counts: - c = c.item() - start = curr - end = curr + c - curr += c - if args.pooling == "mean": - new_x = feats[start:end].mean(dim=0) - elif args.pooling == "sample": - new_x = feats[start + int(random.random() * c)] - else: - raise NotImplementedError() - merged.append(new_x) - - return torch.stack(merged, dim=0).numpy() - - with open(save_path + ".lengths", "w") as l_f: - for size, offset, clust in tqdm.tqdm( - zip(sizes, offsets, clusters), total=len(sizes) - ): - end = size + offset - feats = features[offset:end] - feats = merge(feats, clust) - print(len(feats), file=l_f) - npaa.append(feats) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/normalize_and_filter_text.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/normalize_and_filter_text.py deleted file mode 100644 index c2bd16efb..000000000 ---
a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/normalize_and_filter_text.py +++ /dev/null @@ -1,72 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import fasttext as ft -import os -import regex -import sys - - -def get_parser(): - parser = argparse.ArgumentParser( - description="reads text from stdin and outputs normalized, lid-filtered version to stdout" - ) - parser.add_argument( - "--fasttext-model", - help="path to fasttext model", - default="lid.187.bin", - ) - parser.add_argument("--lang", help="language id", required=True) - parser.add_argument( - "--lid-threshold", - type=float, - help="threshold for this lang id probability", - default=0.4, - ) - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - filter_r = regex.compile(r"[^\p{L}\p{N}\p{M}\' \-]") - - lg = args.lang.lower() - lg_label = f"__label__{lg}" - thresh = args.lid_threshold - - if os.path.exists(args.fasttext_model): - model = ft.load_model(args.fasttext_model) - else: - print( - f"fasttext language id model {args.fasttext_model} not found. Proceeding without language filtering. " - f"To enable language filtering, please download the latest language id model " - f"from https://fasttext.cc/docs/en/language-identification.html", - file=sys.stderr, - ) - model = None - - for line in sys.stdin: - line = line.strip() - line = filter_r.sub(" ", line) - line = " ".join(line.split()) - - if model is not None: - lid, prob = model.predict(line, k=100) - try: - target_idx = lid.index(lg_label) - except ValueError: - continue - if target_idx == 0 or prob[target_idx] >= thresh: - print(line) - else: - print(line) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/normalize_text.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/normalize_text.py deleted file mode 100644 index 9d0ffeb27..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/normalize_text.py +++ /dev/null @@ -1,22 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import regex -import sys - - -def main(): - filter_r = regex.compile(r"[^\p{L}\p{N}\p{M}\' \-]") - - for line in sys.stdin: - line = line.strip() - line = filter_r.sub(" ", line) - line = " ".join(line.split()) - print(line) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/pca.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/pca.py deleted file mode 100644 index 948cf5319..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/pca.py +++ /dev/null @@ -1,53 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
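Both text-normalization scripts above share the same character filter. A quick illustration of what survives it (this needs the third-party `regex` package, since the stdlib `re` module lacks `\p{...}` classes):

```python
# What the normalization in normalize_text.py / normalize_and_filter_text.py
# keeps: letters, digits, combining marks, apostrophes, spaces and hyphens;
# everything else becomes a space, then whitespace is squeezed.
import regex

filter_r = regex.compile(r"[^\p{L}\p{N}\p{M}\' \-]")
line = "Hello, wav2vec-U 2.0! (unsupervised)"
print(" ".join(filter_r.sub(" ", line).split()))
# -> Hello wav2vec-U 2 0 unsupervised
```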
- -import argparse -import os -import os.path as osp -import numpy as np - -import faiss - - - -def get_parser(): - parser = argparse.ArgumentParser( - description="compute a pca matrix given an array of numpy features" - ) - # fmt: off - parser.add_argument('data', help='numpy file containing features') - parser.add_argument('--output', help='where to save the pca matrix', required=True) - parser.add_argument('--dim', type=int, help='dim for pca reduction', required=True) - parser.add_argument('--eigen-power', type=float, default=0, help='eigen power, -0.5 for whitening') - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - print("Reading features") - x = np.load(args.data, mmap_mode="r") - - print("Computing PCA") - pca = faiss.PCAMatrix(x.shape[-1], args.dim, args.eigen_power) - pca.train(x) - b = faiss.vector_to_array(pca.b) - A = faiss.vector_to_array(pca.A).reshape(pca.d_out, pca.d_in) - - os.makedirs(args.output, exist_ok=True) - - prefix = str(args.dim) - if args.eigen_power != 0: - prefix += f"_{args.eigen_power}" - - np.save(osp.join(args.output, f"{prefix}_pca_A"), A.T) - np.save(osp.join(args.output, f"{prefix}_pca_b"), b) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/phonemize_with_sil.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/phonemize_with_sil.py deleted file mode 100644 index c6512d732..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/phonemize_with_sil.py +++ /dev/null @@ -1,83 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import numpy as np -import sys - - -def get_parser(): - parser = argparse.ArgumentParser( - description="converts words to phones adding optional silences around in between words" - ) - parser.add_argument( - "--sil-prob", - "-s", - type=float, - default=0, - help="probability of inserting silence between each word", - ) - parser.add_argument( - "--surround", - action="store_true", - help="if set, surrounds each example with silence", - ) - parser.add_argument( - "--lexicon", - help="lexicon to convert to phones", - required=True, - ) - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - sil_prob = args.sil_prob - surround = args.surround - sil = "<SIL>" - - wrd_to_phn = {} - - with open(args.lexicon, "r") as lf: - for line in lf: - items = line.rstrip().split() - assert len(items) > 1, line - assert items[0] not in wrd_to_phn, items - wrd_to_phn[items[0]] = items[1:] - - for line in sys.stdin: - words = line.strip().split() - - if not all(w in wrd_to_phn for w in words): - continue - - phones = [] - if surround: - phones.append(sil) - - sample_sil_probs = None - if sil_prob > 0 and len(words) > 1: - sample_sil_probs = np.random.random(len(words) - 1) - - for i, w in enumerate(words): - phones.extend(wrd_to_phn[w]) - if ( - sample_sil_probs is not None - and i < len(sample_sil_probs) - and sample_sil_probs[i] < sil_prob - ): - phones.append(sil) - - if surround: - phones.append(sil) - print(" ".join(phones)) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/prepare_audio.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/prepare_audio.sh deleted file mode 100644 index 013f7a9b0..000000000 --- 
a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/prepare_audio.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env zsh -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -source_dir=$1 -tgt_dir=$2 -model=$3 - -if [ -z "$4" ] - then - dim=512 - else - dim=$4 -fi - -echo "using $dim dim for PCA" - -if [ -z "$5" ] - then - layer=14 - else - layer=$5 -fi - -echo "extracting from layer $layer" - -train_split=train -valid_split=valid -test_split=test - -all_splits=($train_split) - -if [[ -f "$source_dir/valid.tsv" ]]; then - all_splits+=('valid') -fi - -if [[ -f "$source_dir/test.tsv" ]]; then - all_splits+=('test') -fi - -echo "processing splits: $all_splits" - -mkdir -p $tgt_dir - -cp $source_dir/*.tsv $tgt_dir -cp $source_dir/*.wrd $tgt_dir -cp $source_dir/*.ltr $tgt_dir -cp $source_dir/*.phn $tgt_dir -cp $source_dir/dict* $tgt_dir - -setopt shwordsplit - -for split in $all_splits; do - python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/wav2vec_extract_features.py $source_dir --split $split \ - --save-dir $tgt_dir --checkpoint $model --layer $layer -done - -python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/wav2vec_cluster_faiss.py $tgt_dir/${train_split}.tsv \ ---checkpoint $model --save-dir $tgt_dir -f "CLUS128" --sample-pct 1.0 - -for split in $all_splits; do - python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/wav2vec_apply_cluster_faiss.py $tgt_dir \ - --checkpoint $model --path $tgt_dir/CLUS128 --split $split -done - -python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/pca.py $tgt_dir/${train_split}.npy --output $tgt_dir/pca --dim $dim - -for split in $all_splits; do - python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/apply_pca.py $tgt_dir --split $split --save-dir $tgt_dir/precompute_pca$dim --pca-path $tgt_dir/pca/${dim}_pca --batch-size 1048000 - - python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/merge_clusters.py $tgt_dir/precompute_pca$dim --cluster-dir $tgt_dir/CLUS128 \ - --split $split --save-dir $tgt_dir/precompute_pca${dim}_cls128_mean --pooling mean - - python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/mean_pool.py $tgt_dir/precompute_pca${dim}_cls128_mean \ - --save-dir $tgt_dir/precompute_pca${dim}_cls128_mean_pooled --split $split -done diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/prepare_text.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/prepare_text.sh deleted file mode 100644 index 1caf13cb6..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/prepare_text.sh +++ /dev/null @@ -1,82 +0,0 @@ -#!/usr/bin/env zsh -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -lg=$1 -text_path=$2 -target_dir=$3 -min_phones=$4 -phonemizer=$5 -lid_path=$6 - -if [ -z "$lid_path" ]; then - lid_path="lid.187.bin" -fi - -ph_lg=${lg:l} -if test "$lg" = 'fr'; then - ph_lg='fr-fr' -elif test "$lg" = 'en'; then - ph_lg='en-us' -elif test "$lg" = 'pt'; then - ph_lg='pt-br' -fi - -ESPEAK_PATH='' -if test "$phonemizer" = 'espeak'; then - ESPEAK_PATH=$(which espeak) -elif test "$phonemizer" = 'espeak-ng'; then - ESPEAK_PATH=$(which espeak-ng) -elif test "$phonemizer" = 'G2P'; then - ESPEAK_PATH='' -else - echo "Unknown phonemizer $phonemizer. 
Valid options are espeak, espeak-ng and G2P" - exit 1 -fi - -echo $lg -echo $ph_lg -echo $text_path -echo $target_dir -echo "min phone seen threshold is $min_phones" - -mkdir -p $target_dir -python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/normalize_and_filter_text.py --lang $lg --fasttext-model $lid_path < $text_path | grep -v '\-\-\-' >! $target_dir/lm.upper.lid.txt -python $FAIRSEQ_ROOT/fairseq_cli/preprocess.py --dataset-impl mmap --trainpref $target_dir/lm.upper.lid.txt --only-source --destdir $target_dir --thresholdsrc 2 --padding-factor 1 --dict-only -cut -f1 -d' ' $target_dir/dict.txt | grep -v -x '[[:punct:]]*' | grep -Pv '\d\d\d\d\d+' >! $target_dir/words.txt - - -if [ -z "$ESPEAK_PATH" ]; then - python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/g2p_wrd_to_phn.py --compact < $target_dir/words.txt > $target_dir/phones.txt -else - # appending "1" to every word (and stripping its phonemization afterwards) keeps the lexicon and phone lists aligned even if the phonemizer fails on a word - one=$(echo "1" | PHONEMIZER_ESPEAK_PATH=$ESPEAK_PATH phonemize -p ' ' -w '' -l $ph_lg --language-switch remove-flags) - sed 's/$/ 1/' $target_dir/words.txt | PHONEMIZER_ESPEAK_PATH=$ESPEAK_PATH phonemize -o $target_dir/phones.txt -p ' ' -w '' -l $ph_lg -j 70 --language-switch remove-flags - echo "one is ${one}" - sed -i "s/${one}$//" $target_dir/phones.txt -fi - -paste $target_dir/words.txt $target_dir/phones.txt >! $target_dir/lexicon.lst - -python $FAIRSEQ_ROOT/fairseq_cli/preprocess.py --dataset-impl mmap --trainpref $target_dir/phones.txt --only-source --destdir $target_dir/phones --thresholdsrc $min_phones --padding-factor 1 --dict-only - -python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/filter_lexicon.py -d $target_dir/phones/dict.txt < $target_dir/lexicon.lst >! $target_dir/lexicon_filtered.lst -python $FAIRSEQ_ROOT/examples/wav2vec/unsupervised/scripts/phonemize_with_sil.py -s 0.25 --surround --lexicon $target_dir/lexicon_filtered.lst < $target_dir/lm.upper.lid.txt >! $target_dir/phones/lm.phones.filtered.txt -cp $target_dir/phones/dict.txt $target_dir/phones/dict.phn.txt -echo "<SIL> 0" >> $target_dir/phones/dict.phn.txt -python $FAIRSEQ_ROOT/fairseq_cli/preprocess.py --dataset-impl mmap --trainpref $target_dir/phones/lm.phones.filtered.txt --workers 70 --only-source --destdir $target_dir/phones --srcdict $target_dir/phones/dict.phn.txt - -$KENLM_ROOT/lmplz -o 4 < $target_dir/lm.upper.lid.txt --discount_fallback --prune 0 0 0 3 >! $target_dir/kenlm.wrd.o40003.arpa -$KENLM_ROOT/build_binary $target_dir/kenlm.wrd.o40003.arpa $target_dir/kenlm.wrd.o40003.bin - -lg=$lg python $FAIRSEQ_ROOT/examples/speech_recognition/kaldi/kaldi_initializer.py kaldi_root=$KALDI_ROOT fst_dir=$target_dir/fst/phn_to_words_sil lm_arpa=$target_dir/kenlm.wrd.o40003.arpa wav2letter_lexicon=$target_dir/lexicon_filtered.lst data_dir=$target_dir/phones in_labels=phn "blank_symbol='<SIL>'" -lg=$lg python $FAIRSEQ_ROOT/examples/speech_recognition/kaldi/kaldi_initializer.py kaldi_root=$KALDI_ROOT fst_dir=$target_dir/fst/phn_to_words lm_arpa=$target_dir/kenlm.wrd.o40003.arpa wav2letter_lexicon=$target_dir/lexicon_filtered.lst data_dir=$target_dir/phones in_labels=phn - -$KENLM_ROOT/lmplz -o 4 < $target_dir/phones/lm.phones.filtered.txt --discount_fallback >! $target_dir/phones/lm.phones.filtered.04.arpa -$KENLM_ROOT/build_binary $target_dir/phones/lm.phones.filtered.04.arpa $target_dir/phones/lm.phones.filtered.04.bin -$KENLM_ROOT/lmplz -o 6 < $target_dir/phones/lm.phones.filtered.txt --discount_fallback >!
$target_dir/phones/lm.phones.filtered.06.arpa -$KENLM_ROOT/build_binary $target_dir/phones/lm.phones.filtered.06.arpa $target_dir/phones/lm.phones.filtered.06.bin - -lg=$lg python $FAIRSEQ_ROOT/examples/speech_recognition/kaldi/kaldi_initializer.py kaldi_root=$KALDI_ROOT fst_dir=$target_dir/fst/phn_to_phn_sil lm_arpa=$target_dir/phones/lm.phones.filtered.06.arpa data_dir=$target_dir/phones in_labels=phn "blank_symbol='<SIL>'" diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/prepare_timit.sh b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/prepare_timit.sh deleted file mode 100644 index d8f5d596b..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/prepare_timit.sh +++ /dev/null @@ -1,79 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -timit_root=$1 # assume it is the upper-cased version -tgt_dir=$2 -model=$3 - -set -eu - -setups="matched unmatched" -splits="test valid train train_text" - -tgt_dir=$(realpath $tgt_dir) -sph2wav=$KALDI_ROOT/tools/sph2pipe_v2.5/sph2pipe -wav_dir=$tgt_dir/wav - - -mkdir -p $tgt_dir $wav_dir -find $timit_root/{TRAIN,TEST} -iname "*.WAV" > $tgt_dir/all_sph.flist -cat $tgt_dir/all_sph.flist | sed -e 's#//*#/#g' -e 's#.*/\([^/]*\)/\([^/]*\).WAV#\1_\2#g' > $tgt_dir/all.uid -paste -d' ' $tgt_dir/{all_sph.flist,all.uid} | \ - awk -v sph2wav=$sph2wav -v wav_dir=$wav_dir '{print sph2wav " -f wav " $1 " > " wav_dir "/" $2 ".wav"}' \ - > $tgt_dir/sph2wav.sh -bash $tgt_dir/sph2wav.sh -cat $tgt_dir/all.uid | awk -v wav_dir=$(pwd)/$wav_dir '{print $1" "wav_dir"/"$1".wav"}' | sort > $tgt_dir/all_wav.scp -cut -d' ' -f2 $tgt_dir/all_wav.scp | xargs -I{} soxi -s {} > $tgt_dir/all.dur -paste -d' ' $tgt_dir/{all_wav.scp,all.dur} > $tgt_dir/all_wav_dur.scp -rm $tgt_dir/{all.uid,all_sph.flist,sph2wav.sh} - -find $timit_root/{TRAIN,TEST} -iname "*.PHN" > $tgt_dir/all_phn60.flist -while read line; do - if [ ! 
-f $line ]; then - >&2 echo "Cannot find transcription file '$line'" && exit 1; - fi - cut -f3 -d' ' "$line" | tr '\n' ' ' | perl -ape 's: *$:\n:;' -done < $tgt_dir/all_phn60.flist > $tgt_dir/all.phn60 -cat $tgt_dir/all_phn60.flist | sed -e 's#//*#/#g' -e 's#.*/\([^/]*\)/\([^/]*\).PHN#\1_\2#g' | \ - paste -d' ' - $tgt_dir/all.phn60 | \ - $KALDI_ROOT/egs/timit/s5/local/timit_norm_trans.pl -i - -m $KALDI_ROOT/egs/timit/s5/conf/phones.60-48-39.map -to 39 | \ - sort > $tgt_dir/all.phn -echo "done preparing wav and 39-phone transcripts" - - -for s in $setups; do - mkdir -p $tgt_dir/$s - for x in $splits; do - uid_path=config/timit_${s}/${x}.uid - grep -w -f $uid_path $tgt_dir/all.phn | cut -d' ' -f2- > $tgt_dir/$s/$x.phn - ln -sf $(realpath $tgt_dir/$s/$x.phn) $tgt_dir/$s/$x.wrd - - echo "/" > $tgt_dir/$s/$x.tsv && grep -w -f $uid_path $tgt_dir/all_wav_dur.scp | cut -d' ' -f2- | sed 's# #\t#' >> $tgt_dir/$s/$x.tsv - done - - for x in $splits; do - cat $tgt_dir/$s/$x.phn - done | tr ' ' '\n' | sort -u | awk '{print $1" "1}' > $tgt_dir/$s/dict.phn.txt - ln -sf $(realpath $tgt_dir/$s/dict.phn.txt) $tgt_dir/$s/dict.wrd.txt -done -echo "done preparing unmatched and matched setups for TIMIT" - - -for s in $setups; do - zsh scripts/prepare_audio.sh $tgt_dir/$s $tgt_dir/$s/feat $model - - lm_dir=$tgt_dir/$s/phones - fst_dir=$tgt_dir/$s/fst/phn_to_phn - - python $FAIRSEQ_ROOT/fairseq_cli/preprocess.py --dataset-impl mmap --trainpref $tgt_dir/$s/train_text.phn --workers 10 --only-source --destdir $lm_dir --srcdict $tgt_dir/$s/dict.phn.txt - $KENLM_ROOT/lmplz -o 3 < $tgt_dir/$s/train_text.phn --discount_fallback >$lm_dir/train_text_phn.03.arpa - $KENLM_ROOT/build_binary $lm_dir/train_text_phn.03.arpa $lm_dir/train_text_phn.03.bin - $KENLM_ROOT/lmplz -o 4 < $tgt_dir/$s/train_text.phn --discount_fallback >$lm_dir/train_text_phn.04.arpa - $KENLM_ROOT/build_binary $lm_dir/train_text_phn.04.arpa $lm_dir/train_text_phn.04.bin - - python $FAIRSEQ_ROOT/examples/speech_recognition/kaldi/kaldi_initializer.py kaldi_root=$KALDI_ROOT fst_dir=$fst_dir lm_arpa=$lm_dir/train_text_phn.03.arpa data_dir=$tgt_dir/$s in_labels=phn -done -echo "done preprocessing audio and text for wav2vec-U" diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/remove_silence.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/remove_silence.py deleted file mode 100644 index fac88b989..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/remove_silence.py +++ /dev/null @@ -1,63 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
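`prepare_timit.sh` above writes the same `.tsv` manifest format the feature and VAD scripts below consume: a root directory on the first line, then one `relative-path<TAB>sample-count` entry per utterance. A minimal reader, mirroring the loops in `remove_silence.py` and `vads.py`:

```python
# Minimal reader for the wav2vec-style .tsv manifests used throughout these
# scripts: first line is the root directory, each following line is
# "<relative path>\t<num samples>".
import os
import sys

def read_manifest(tsv_path):
    with open(tsv_path) as f:
        root = next(f).rstrip()
        for line in f:
            rel, n_samples = line.rstrip().split("\t")
            yield os.path.join(root, rel), int(n_samples)

for path, n_samples in read_manifest(sys.argv[1]):
    print(path, n_samples)
```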
- -""" -get intervals from .vads file, specify output data, and this script removes silences and saves the audio data in out path folder -paths=shards/train.tsv -vads=shards/train.vads -python remove_silence.py --paths $paths --vads $vads -""" - -import os -import argparse -import torch -import torchaudio -import tqdm - - -parser = argparse.ArgumentParser() -parser.add_argument("--tsv", default="", type=str) -parser.add_argument("--vads", default="", type=str) -parser.add_argument("--out", type=str) -params = parser.parse_args() - -# load paths -paths = [] -with open(params.tsv) as f: - root = next(f).rstrip() - for line in f: - paths.append(os.path.join(root, line.rstrip().split("\t")[0])) - -# load vads -list_intervals = [] -with open(params.vads) as f: - for line in f: - interval = [ - [int(w.split(":")[0]), int(w.split(":")[1])] for w in line.rstrip().split() - ] - list_intervals.append(interval) - - -# load audio and keep only intervals (i.e. remove silences) -for i in tqdm.trange(len(paths)): - data, _ = torchaudio.load(paths[i]) - if len(list_intervals[i]) > 0: - data_filtered = torch.cat( - [data[0][int(it[0]) : int(it[1])] for it in list_intervals[i]] - ).unsqueeze(0) - else: - data_filtered = data - - # YOU MAY NEED TO MODIFY THIS TO GET THE RIGHT SUBPATH - # outpath = params.out + '/'.join(paths[i].split('/')[-1]) - outpath = params.out + "/" + "/".join(paths[i].split("/")[-2:]) - - if not os.path.isdir("/".join(outpath.split("/")[:-1])): - os.makedirs("/".join(outpath.split("/")[:-1])) - if not os.path.exists(outpath): - torchaudio.save(outpath, data_filtered, sample_rate=16000) - else: - print(outpath, "exists!") diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/vads.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/vads.py deleted file mode 100644 index 2398da97d..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/vads.py +++ /dev/null @@ -1,98 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import sys - -from copy import deepcopy -from scipy.signal import lfilter - -import numpy as np -from tqdm import tqdm -import soundfile as sf -import os.path as osp - - -def get_parser(): - parser = argparse.ArgumentParser(description="compute vad segments") - parser.add_argument( - "--rvad-home", - "-r", - help="path to rvad home (see https://github.com/zhenghuatan/rVADfast)", - required=True, - ) - - return parser - - -def rvad(speechproc, path): - winlen, ovrlen, pre_coef, nfilter, nftt = 0.025, 0.01, 0.97, 20, 512 - ftThres = 0.5 - vadThres = 0.4 - opts = 1 - - data, fs = sf.read(path) - assert fs == 16_000, "sample rate must be 16khz" - ft, flen, fsh10, nfr10 = speechproc.sflux(data, fs, winlen, ovrlen, nftt) - - # --spectral flatness -- - pv01 = np.zeros(ft.shape[0]) - pv01[np.less_equal(ft, ftThres)] = 1 - pitch = deepcopy(ft) - - pvblk = speechproc.pitchblockdetect(pv01, pitch, nfr10, opts) - - # --filtering-- - ENERGYFLOOR = np.exp(-50) - b = np.array([0.9770, -0.9770]) - a = np.array([1.0000, -0.9540]) - fdata = lfilter(b, a, data, axis=0) - - # --pass 1-- - noise_samp, noise_seg, n_noise_samp = speechproc.snre_highenergy( - fdata, nfr10, flen, fsh10, ENERGYFLOOR, pv01, pvblk - ) - - # sets noisy segments to zero - for j in range(n_noise_samp): - fdata[range(int(noise_samp[j, 0]), int(noise_samp[j, 1]) + 1)] = 0 - - vad_seg = speechproc.snre_vad( - fdata, nfr10, flen, fsh10, ENERGYFLOOR, pv01, pvblk, vadThres - ) - return vad_seg, data - - -def main(): - parser = get_parser() - args = parser.parse_args() - - sys.path.append(args.rvad_home) - import speechproc - - stride = 160 - lines = sys.stdin.readlines() - root = lines[0].rstrip() - for fpath in tqdm(lines[1:]): - path = osp.join(root, fpath.split()[0]) - vads, wav = rvad(speechproc, path) - - start = None - vad_segs = [] - for i, v in enumerate(vads): - if start is None and v == 1: - start = i * stride - elif start is not None and v == 0: - vad_segs.append((start, i * stride)) - start = None - if start is not None: - vad_segs.append((start, len(wav))) - - print(" ".join(f"{v[0]}:{v[1]}" for v in vad_segs)) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_apply_cluster_faiss.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_apply_cluster_faiss.py deleted file mode 100644 index a5dd7ae6c..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_apply_cluster_faiss.py +++ /dev/null @@ -1,128 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
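`wav2vec_apply_cluster_faiss.py` below boils down to a nearest-centroid search over precomputed k-means centroids. A CPU-only toy version of that core step (shapes are illustrative, not the real feature dimensions):

```python
# Nearest-centroid assignment as in wav2vec_apply_cluster_faiss.py, reduced
# to CPU: index the centroids with a flat L2 index and take the top-1
# neighbour of every frame.
import faiss
import numpy as np

centroids = np.random.rand(128, 512).astype(np.float32)  # e.g. CLUS128 over PCA-512 feats
frames = np.random.rand(1000, 512).astype(np.float32)    # stand-in features

index = faiss.IndexFlatL2(centroids.shape[1])
index.add(centroids)
_, assignments = index.search(frames, 1)  # one cluster id per frame
print(assignments[:10].ravel())
```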
- -import argparse -import os -import os.path as osp -import numpy as np -import tqdm -import torch -import sys - -import faiss -import torch.nn.functional as F - -from wav2vec_cluster_faiss import parse_faiss_specs, Wav2VecFeatureReader - - -def get_parser(): - parser = argparse.ArgumentParser(description="apply clusters") - # fmt: off - parser.add_argument('data', help='location of tsv files') - parser.add_argument('--split', help='split to process', required=True) - parser.add_argument('--labels', help='split to process', default="phn") - parser.add_argument('--path', help='path to pca and centroids', required=True) - parser.add_argument('--checkpoint', type=str, help='checkpoint for wav2vec model (if using wav2vec features)', required=True) - parser.add_argument('--layer', '-l', type=int, help='which layer to read', default=14) - parser.add_argument('--max-tsz', type=int, help='batch kmeans up to this much', default=14) - # fmt: on - - return parser - - -def get_iterator(args): - label_path = osp.join(args.data, f"{args.split}.{args.labels}") - if osp.exists(label_path): - lp = open(label_path, "r") - else: - lp = None - - with open(osp.join(args.data, f"{args.split}.tsv"), "r") as fp: - lines = fp.read().split("\n") - root = lines.pop(0).strip() - files = [line.rstrip() for line in lines if len(line) > 0] - - if lp is not None: - lbls = [line.rstrip() for line in lp] - else: - lbls = [None] * len(files) - - num = len(files) - reader = Wav2VecFeatureReader(args.checkpoint, args.layer) - - def iterate(): - for fname, lbl in zip(files, lbls): - file = osp.join(root, fname.split("\t")[0]) - feats = reader.get_feats(file) - yield feats.data, fname, lbl - - return iterate, num, root - - -def main(): - parser = get_parser() - args = parser.parse_args() - - spec = osp.basename(args.path) - - try: - faiss_spec = parse_faiss_specs(spec.rstrip("/"))[0] - except: - print(spec) - raise - - print("Faiss Spec:", faiss_spec, file=sys.stderr) - - if faiss_spec.pca: - A = torch.from_numpy(np.load(osp.join(args.path, "pca_A.npy"))).cuda() - b = torch.from_numpy(np.load(osp.join(args.path, "pca_b.npy"))).cuda() - print("Loaded PCA", file=sys.stderr) - - centroids = np.load(osp.join(args.path, "centroids.npy")) - print("Loaded centroids", centroids.shape, file=sys.stderr) - - res = faiss.StandardGpuResources() - index_flat = ( - faiss.IndexFlatL2(centroids.shape[1]) - if not faiss_spec.sphere - else faiss.IndexFlatIP(centroids.shape[1]) - ) - faiss_index = faiss.index_cpu_to_gpu(res, 0, index_flat) - faiss_index.add(centroids) - - generator, num, root = get_iterator(args) - iterator = generator() - - had_labels = False - label_path = osp.join(args.path, f"{args.split}.{args.labels}") - - with torch.no_grad(): - with open(osp.join(args.path, f"{args.split}.src"), "w") as fp, open( - osp.join(args.path, f"{args.split}.tsv"), "w" - ) as pp, open(label_path, "w") as lp: - print(root, file=pp) - for f, fname, lbl in tqdm.tqdm(iterator, total=num): - if faiss_spec.pca: - f = torch.mm(f, A) + b - if faiss_spec.norm: - f = F.normalize(f, p=2, dim=-1) - - f = f.cpu().numpy() - - _, z = faiss_index.search(f, 1) - - print(" ".join(str(x.item()) for x in z), file=fp) - print(fname, file=pp) - - if lbl is not None: - print(lbl, file=lp) - had_labels = True - if not had_labels: - os.remove(label_path) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_cluster_faiss.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_cluster_faiss.py 
deleted file mode 100644 index 632a69e9f..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_cluster_faiss.py +++ /dev/null @@ -1,210 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import gc -import os -import os.path as osp -import random -import numpy as np -import tqdm -import torch - -from collections import namedtuple - -import faiss - -import fairseq -import soundfile as sf - - -def get_parser(): - parser = argparse.ArgumentParser( - description="compute kmeans codebook from kaldi-computed feats" - ) - # fmt: off - parser.add_argument('data', help='location of tsv files') - parser.add_argument('--save-dir', help='where to save the output', required=True) - parser.add_argument('--checkpoint', type=str, help='checkpoint for wav2vec model (if using wav2vec features)', required=True) - parser.add_argument('--sample-pct', '-r', type=float, help='percentage of timesteps to sample', default=0) - parser.add_argument('--layer', '-l', type=int, help='which layer to read', default=14) - parser.add_argument('--faiss-specs', '-f', type=str, - help='faiss index specs; separated by space ' - 'format is: PCAx_NORM_CLUSx_SPHERICAL -> ' - 'PCAx if exists first apply PCA ' - 'NORM if exists, normalize the vector by L2 norm ' - 'CLUSx must exist, cluster to x clusters ' - 'SPEHRICAL if exists, apply spherical kmeans', - default='l2') - # fmt: on - - return parser - - -faiss_spec = namedtuple("faiss_spec", ["pca", "norm", "n_clus", "sphere", "spec_str"]) - - -def parse_faiss_specs(specs_str): - specs = [] - for ss in specs_str.split(): - comps = ss.split("_") - pca = 0 - norm = False - n_clus = 0 - sphere = False - for c in comps: - if c.startswith("PCA"): - pca = int(c[3:]) - elif c == "NORM": - norm = True - elif c.startswith("CLUS"): - n_clus = int(c[4:]) - elif c == "SPHERICAL": - sphere = True - assert n_clus > 0 - specs.append( - faiss_spec(pca=pca, norm=norm, n_clus=n_clus, sphere=sphere, spec_str=ss) - ) - return specs - - -class Wav2VecFeatureReader(object): - def __init__(self, cp_file, layer): - state = fairseq.checkpoint_utils.load_checkpoint_to_cpu(cp_file) - - self.layer = layer - - if "cfg" in state: - w2v_args = state["cfg"] - task = fairseq.tasks.setup_task(w2v_args.task) - model = task.build_model(w2v_args.model) - else: - w2v_args = state["args"] - task = fairseq.tasks.setup_task(w2v_args) - model = task.build_model(w2v_args) - model.load_state_dict(state["model"], strict=True) - model.eval() - model.cuda() - self.model = model - - def read_audio(self, fname): - """Load an audio file and return PCM along with the sample rate""" - wav, sr = sf.read(fname) - assert sr == 16e3 - - return wav - - def get_feats(self, loc): - x = self.read_audio(loc) - with torch.no_grad(): - source = torch.from_numpy(x).view(1, -1).float().cuda() - res = self.model( - source=source, mask=False, features_only=True, layer=self.layer - ) - return res["layer_results"][self.layer][0].squeeze(1) - - -def get_iterator(args): - with open(args.data, "r") as fp: - lines = fp.read().split("\n") - root = lines.pop(0).strip() - files = [osp.join(root, line.split("\t")[0]) for line in lines if len(line) > 0] - - if getattr(args, "sample_pct", 0) > 0: - files = random.sample(files, int(args.sample_pct * len(files))) - num = len(files) - reader = Wav2VecFeatureReader(args.checkpoint, args.layer) - - def 
iterate(): - for fname in files: - feats = reader.get_feats(fname) - yield feats.cpu().numpy() - - return iterate, num - - -def main(): - parser = get_parser() - args = parser.parse_args() - - faiss_specs = parse_faiss_specs(args.faiss_specs) - print("Faiss Specs:", faiss_specs) - - feat_path = osp.join(args.save_dir, "features") - if osp.exists(feat_path + ".npy"): - feats = np.load(feat_path + ".npy") - else: - generator, num = get_iterator(args) - iterator = generator() - - feats = [] - for f in tqdm.tqdm(iterator, total=num): - feats.append(f) - - del iterator - del generator - - feats = np.concatenate(feats) - - print(feats.shape) - - os.makedirs(args.save_dir, exist_ok=True) - # np.save(feat_path, feats) - - gc.collect() - torch.cuda.empty_cache() - - reload = False - for spec in faiss_specs: - print("Processing spec", spec) - - if reload: - print("Reloading...") - del feats - gc.collect() - feats = np.load(feat_path + ".npy") - - save_path = osp.join(args.save_dir, spec.spec_str) - os.makedirs(save_path, exist_ok=True) - d = feats.shape[-1] - x = feats - if spec.pca > 0: - print("Computing PCA") - pca = faiss.PCAMatrix(d, spec.pca) - pca.train(x) - d = spec.pca - b = faiss.vector_to_array(pca.b) - A = faiss.vector_to_array(pca.A).reshape(pca.d_out, pca.d_in) - np.save(osp.join(save_path, "pca_A"), A.T) - np.save(osp.join(save_path, "pca_b"), b) - print("Applying PCA") - x = pca.apply_py(x) - - if spec.norm: - reload = spec.pca <= 0 - print("Normalizing") - faiss.normalize_L2(x) - - print("Computing kmeans") - kmeans = faiss.Kmeans( - d, - spec.n_clus, - niter=50, - verbose=True, - spherical=spec.sphere, - max_points_per_centroid=feats.shape[0], - gpu=True, - nredo=3, - ) - kmeans.train(x) - np.save(osp.join(save_path, "centroids"), kmeans.centroids) - del kmeans - del x - gc.collect() - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_extract_features.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_extract_features.py deleted file mode 100644 index b07e274d2..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/wav2vec_extract_features.py +++ /dev/null @@ -1,119 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
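The `--faiss-specs` strings parsed by `wav2vec_cluster_faiss.py` above pack PCA dimension, L2 normalization, cluster count, and spherical k-means into one underscore-separated token; multiple specs can be given, separated by spaces. A small sketch of what `parse_faiss_specs` returns for typical inputs (the values follow directly from the parsing code above):

```python
# Sketch: inspect how faiss spec strings are decoded into faiss_spec tuples.
from wav2vec_cluster_faiss import parse_faiss_specs

specs = parse_faiss_specs("CLUS128 PCA512_NORM_CLUS64_SPHERICAL")
# faiss_spec(pca=0, norm=False, n_clus=128, sphere=False, spec_str='CLUS128')
# faiss_spec(pca=512, norm=True, n_clus=64, sphere=True,
#            spec_str='PCA512_NORM_CLUS64_SPHERICAL')
for spec in specs:
    print(spec)
```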
- -import argparse -import os -import os.path as osp -import tqdm -import torch -import torch.nn.functional as F -from shutil import copyfile - -from npy_append_array import NpyAppendArray - -import fairseq -import soundfile as sf - - -def get_parser(): - parser = argparse.ArgumentParser( - description="compute kmeans codebook from kaldi-computed feats" - ) - # fmt: off - parser.add_argument('data', help='location of tsv files') - parser.add_argument('--split', help='which split to read', required=True) - parser.add_argument('--save-dir', help='where to save the output', required=True) - parser.add_argument('--checkpoint', type=str, help='checkpoint for wav2vec ctc model', required=True) - parser.add_argument('--layer', type=int, default=14, help='which layer to use') - # fmt: on - - return parser - - -class Wav2VecFeatureReader(object): - def __init__(self, cp_file, layer): - model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task( - [cp_file] - ) - model = model[0] - model.eval() - model.cuda() - self.model = model - self.task = task - self.layer = layer - - def read_audio(self, fname): - """Load an audio file and return PCM along with the sample rate""" - wav, sr = sf.read(fname) - assert sr == 16e3 - - return wav - - def get_feats(self, loc): - x = self.read_audio(loc) - with torch.no_grad(): - source = torch.from_numpy(x).float().cuda() - if self.task.cfg.normalize: - assert source.dim() == 1, source.dim() - with torch.no_grad(): - source = F.layer_norm(source, source.shape) - source = source.view(1, -1) - - m_res = self.model(source=source, mask=False, features_only=True, layer=self.layer) - return m_res["x"].squeeze(0).cpu() - - -def get_iterator(args): - with open(osp.join(args.data, args.split) + ".tsv", "r") as fp: - lines = fp.read().split("\n") - root = lines.pop(0).strip() - files = [osp.join(root, line.split("\t")[0]) for line in lines if len(line) > 0] - - num = len(files) - reader = Wav2VecFeatureReader(args.checkpoint, args.layer) - - def iterate(): - for fname in files: - w2v_feats = reader.get_feats(fname) - yield w2v_feats - - return iterate, num - - -def main(): - parser = get_parser() - args = parser.parse_args() - - os.makedirs(args.save_dir, exist_ok=True) - - def create_files(dest): - copyfile(osp.join(args.data, args.split) + ".tsv", dest + ".tsv") - if osp.exists(osp.join(args.data, args.split) + ".wrd"): - copyfile(osp.join(args.data, args.split) + ".wrd", dest + ".wrd") - if osp.exists(osp.join(args.data, args.split) + ".phn"): - copyfile(osp.join(args.data, args.split) + ".phn", dest + ".phn") - - if osp.exists(dest + ".npy"): - os.remove(dest + ".npy") - npaa = NpyAppendArray(dest + ".npy") - return npaa - - save_path = osp.join(args.save_dir, args.split) - npaa = create_files(save_path) - - generator, num = get_iterator(args) - iterator = generator() - - with open(save_path + ".lengths", "w") as l_f: - for w2v_feats in tqdm.tqdm(iterator, total=num): - print(len(w2v_feats), file=l_f) - - if len(w2v_feats) > 0: - npaa.append(w2v_feats.numpy()) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/wer.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/wer.py deleted file mode 100644 index 613ab50d3..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/wer.py +++ /dev/null @@ -1,82 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. 
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Implement unsupervised metric for decoding hyperparameter selection:
-    $$ alpha * LM_PPL + ViterbiUER(%) * 100 $$
-"""
-import argparse
-import logging
-import sys

-import editdistance
-
-logging.root.setLevel(logging.INFO)
-logging.basicConfig(stream=sys.stdout, level=logging.INFO)
-logger = logging.getLogger(__name__)
-
-
-def get_parser():
-    parser = argparse.ArgumentParser()
-    parser.add_argument("-s", "--hypo", help="hypo transcription", required=True)
-    parser.add_argument(
-        "-r", "--reference", help="reference transcription", required=True
-    )
-    return parser
-
-
-def load_tra(tra_path):
-    with open(tra_path, "r") as f:
-        uid_to_tra = {}
-        for line in f:
-            uid, tra = line.split(None, 1)
-            uid_to_tra[uid] = tra
-    logger.debug(f"loaded {len(uid_to_tra)} utterances from {tra_path}")
-    return uid_to_tra
-
-
-def compute_wer(ref_uid_to_tra, hyp_uid_to_tra, g2p):
-    d_cnt = 0
-    w_cnt = 0
-    w_cnt_h = 0
-    for uid in hyp_uid_to_tra:
-        ref = ref_uid_to_tra[uid].split()
-        if g2p is not None:
-            hyp = g2p(hyp_uid_to_tra[uid])
-            hyp = [p for p in hyp if p != "'" and p != " "]
-            hyp = [p[:-1] if p[-1].isnumeric() else p for p in hyp]
-        else:
-            hyp = hyp_uid_to_tra[uid].split()
-        d_cnt += editdistance.eval(ref, hyp)
-        w_cnt += len(ref)
-        w_cnt_h += len(hyp)
-    wer = float(d_cnt) / w_cnt
-    logger.debug(
-        (
-            f"wer = {wer * 100:.2f}%; num. of ref words = {w_cnt}; "
-            f"num. of hyp words = {w_cnt_h}; num. of sentences = {len(ref_uid_to_tra)}"
-        )
-    )
-    return wer
-
-
-def main():
-    args = get_parser().parse_args()
-
-    errs = 0
-    count = 0
-    with open(args.hypo, "r") as hf, open(args.reference, "r") as rf:
-        for h, r in zip(hf, rf):
-            h = h.rstrip().split()
-            r = r.rstrip().split()
-            errs += editdistance.eval(r, h)
-            count += len(r)
-
-    logger.info(f"UER: {errs / count * 100:.2f}%")
-
-
-if __name__ == "__main__":
-    main()
diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/wrd_to_ltr.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/wrd_to_ltr.py
deleted file mode 100644
index f83471409..000000000
--- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/scripts/wrd_to_ltr.py
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import sys
-
-
-def main():
-    for line in sys.stdin:
-        print(" ".join(list(line.strip().replace(" ", "|"))) + " |")
-
-
-if __name__ == "__main__":
-    main()
diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/tasks/__init__.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/tasks/__init__.py
deleted file mode 100644
index 6d7dd625e..000000000
--- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/tasks/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
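The docstring of `wer.py` above defines the unsupervised model-selection metric as a weighted sum of language-model perplexity and Viterbi unit error rate; as a worked sketch (the `alpha` weight and the example numbers are illustrative):

```python
# Sketch of the selection metric from the wer.py docstring:
#     alpha * LM_PPL + ViterbiUER(%) * 100
# Lower is better; alpha and the inputs below are illustrative.
def selection_score(lm_ppl: float, viterbi_uer: float, alpha: float = 1.0) -> float:
    return alpha * lm_ppl + viterbi_uer * 100

print(selection_score(lm_ppl=35.2, viterbi_uer=0.18))  # 35.2 + 18.0 = 53.2
```

For reference, `wrd_to_ltr.py` above maps a word transcript such as `HELLO WORLD` to the letter sequence `H E L L O | W O R L D |`.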
- -from .unpaired_audio_text import UnpairedAudioText - - -__all__ = [ - "UnpairedAudioText", -] diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/tasks/unpaired_audio_text.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/tasks/unpaired_audio_text.py deleted file mode 100644 index 1e2dc55c2..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/tasks/unpaired_audio_text.py +++ /dev/null @@ -1,447 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. - -from dataclasses import dataclass, field -import logging -import math -import os -from typing import Optional -import torch - -from fairseq.logging import metrics -from fairseq.tasks import FairseqTask, register_task -from ..data import ExtractedFeaturesDataset, RandomInputDataset - -from fairseq.data import ( - Dictionary, - data_utils, - StripTokenDataset, -) -from fairseq.dataclass import FairseqDataclass -from fairseq.distributed.utils import get_data_parallel_world_size -from omegaconf import MISSING - -from examples.speech_recognition.kaldi.kaldi_decoder import ( - KaldiDecoder, - KaldiDecoderConfig, -) - - -logger = logging.getLogger(__name__) - - -@dataclass -class DecodingConfig(FairseqDataclass): - kenlm_path: Optional[str] = None - lm_weight: float = 0 - blank_weight: float = 0 - - -@dataclass -class UnpairedAudioTextConfig(FairseqDataclass): - data: str = field( - default=MISSING, metadata={"help": "path to data directory containing audio"} - ) - text_data: str = field( - default=MISSING, metadata={"help": "path to data directory containing text"} - ) - max_length: Optional[int] = None - labels: Optional[str] = field( - default=None, - metadata={"help": "extension of the label file to load, used for fine-tuning"}, - ) - unfiltered: bool = field( - default=False, metadata={"help": "load data with _unfiltered suffix"} - ) - ctc_eval: bool = field( - default=False, metadata={"help": "eval UER as if computed by CTC"} - ) - sort_by_length: bool = field( - default=True, metadata={"help": "sort examples by length of audio timesteps"} - ) - shuffle: bool = field(default=True, metadata={"help": "shuffle examples"}) - append_eos: bool = field(default=False, metadata={"help": "append eos"}) - uppercase: Optional[bool] = field( - default=False, metadata={"help": "uppercase for LM score computation"} - ) - skipwords: Optional[str] = field( - default="", - metadata={ - "help": "comma-separated words to be removed for LM score computation" - }, - ) - kenlm_path: Optional[str] = None - vocab_usage_power: float = 2 - - word_decoder_config: Optional[KaldiDecoderConfig] = None - word_kenlm_path: Optional[str] = None - - decoding_config: DecodingConfig = DecodingConfig() - - -@register_task("unpaired_audio_text", dataclass=UnpairedAudioTextConfig) -class UnpairedAudioText(FairseqTask): - """ """ - - cfg: UnpairedAudioTextConfig - - def __init__( - self, - cfg: UnpairedAudioTextConfig, - source_dictionary=None, - target_dictionary=None, - ): - super().__init__(cfg) - - self._target_dictionary = target_dictionary - self._source_dictionary = source_dictionary - self.num_symbols = ( - len([s for s in target_dictionary.symbols if not s.startswith("madeup")]) - - target_dictionary.nspecial - ) - self.sil_id = ( - target_dictionary.index("<SIL>") if "<SIL>" in target_dictionary else -1 - ) - 
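-        # Optional external language models, used only in valid_step: kenlm
-        # scores phone/character transcripts, while word_kenlm scores the word
-        # sequences produced by the kaldi word decoder.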
self.kenlm = None - if cfg.kenlm_path is not None: - import kenlm - - self.kenlm = kenlm.Model(cfg.kenlm_path) - - self.word_kenlm = None - if cfg.word_kenlm_path is not None: - import kenlm - - self.word_kenlm = kenlm.Model(cfg.word_kenlm_path) - - self.uppercase = cfg.uppercase - self.skipwords = set(cfg.skipwords.split(",")) - - def str_postprocess(s): - s = " ".join(w for w in s.split() if w not in self.skipwords) - s = s.upper() if self.uppercase else s - return s - - self.str_postprocess = str_postprocess - self.compute_lm_score = lambda s: self.kenlm.score(self.str_postprocess(s)) - - self.compute_word_score = None - if cfg.word_decoder_config is not None: - self.kaldi_decoder = KaldiDecoder(cfg.word_decoder_config, beam=10) - - def compute_word_score(logits, padding): - res = self.kaldi_decoder.decode(logits, padding) - for r in res: - r = r.result() - assert len(r) == 1 - r = r[0] - yield r["score"], r["words"] - - self.compute_word_score = compute_word_score - - @classmethod - def setup_task(cls, cfg: UnpairedAudioTextConfig, **kwargs): - """Setup the task (e.g., load dictionaries). - - Args: - cfg (AudioPretrainingConfig): configuration of this task - """ - - dict_path = os.path.join(cfg.text_data, "dict.txt") - if os.path.exists(dict_path): - target_dictionary = Dictionary.load(dict_path) - else: - dict_path = os.path.join(cfg.data, f"dict.{cfg.labels}.txt") - target_dictionary = Dictionary.load(dict_path) - - return cls(cfg, target_dictionary=target_dictionary) - - def optimizer_step(self, optimizer, model, update_num): - if hasattr(model, "get_groups_for_update"): - groups = model.get_groups_for_update(update_num) - optimizer.step(groups={groups}) - else: - optimizer.step() - - def valid_step(self, sample, model, criterion): - res = model( - **sample["net_input"], - dense_x_only=True, - ) - - dense_x = res["logits"] - padding_mask = res["padding_mask"] - - word_scores = None - if self.compute_word_score is not None: - word_scores = self.compute_word_score(dense_x.cpu(), padding_mask.cpu()) - - z = dense_x.argmax(-1) - z[padding_mask] = self.target_dictionary.pad() - - vocab_seen = torch.zeros(self.num_symbols, dtype=torch.bool) - - import editdistance - - c_err = 0 - c_len = 0 - pred_c_len = 0 - lm_score_sum = 0 - for i, (x, t, id) in enumerate( - zip( - z, - sample["target"] if "target" in sample else [None] * len(z), - sample["id"], - ) - ): - - if t is not None: - t = t[(t >= self.target_dictionary.nspecial)] - x = x[ - (x >= self.target_dictionary.nspecial) - & (x < (self.num_symbols + self.target_dictionary.nspecial)) - ] - if self.sil_id >= 0: - x = x[x != self.sil_id] - - vocab_seen[x - self.target_dictionary.nspecial] = True - - pred_units_arr = x - if self.cfg.ctc_eval: - pred_units_arr = pred_units_arr.unique_consecutive() - pred_units_arr = pred_units_arr[pred_units_arr != 0] - - if id == 0: - if t is not None: - logger.info(f"REF: {self.target_dictionary.string(t)}") - logger.info(f"HYP: {self.target_dictionary.string(pred_units_arr)}") - - if self.kenlm is not None: - if t is not None: - ref_lm_s = self.compute_lm_score( - self.target_dictionary.string(t) - ) - logger.info( - f"LM [REF]: {ref_lm_s}, {math.pow(10, -ref_lm_s / (len(t) + 1))}" - ) - - hyp_lm_s = self.compute_lm_score( - self.target_dictionary.string(pred_units_arr) - ) - logger.info( - f"LM [HYP]: {hyp_lm_s}, {math.pow(10, -hyp_lm_s / (len(pred_units_arr) + 1))}" - ) - - pred_units_arr = pred_units_arr.tolist() - - pred_c_len += len(pred_units_arr) - - if t is not None: - t = t.tolist() - c_err += 
editdistance.eval(pred_units_arr, t)
-                c_len += len(t)
-            else:
-                c_len = pred_c_len
-
-            if self.kenlm is not None:
-                pred_str = self.target_dictionary.string(pred_units_arr)
-                lm_score = self.compute_lm_score(pred_str)
-                lm_score_sum += lm_score
-
-        kaldi_score_sum = 0
-        word_lm_sum = 0
-        num_words = 0
-        if word_scores is not None:
-            for score, words in word_scores:
-                kaldi_score_sum += score
-                num_words += len(words)
-                if self.word_kenlm is not None:
-                    # score decoded words with the word-level LM
-                    word_lm_sum += self.word_kenlm.score(" ".join(words))
-
-        try:
-            world_size = get_data_parallel_world_size()
-        except Exception:
-            world_size = 1
-
-        logging_output = {
-            "loss": c_err,
-            "_num_char_errors": c_err,
-            "_num_chars": c_len,
-            "_num_pred_chars": pred_c_len,
-            "ntokens": c_len,
-            "nsentences": z.size(0),
-            "sample_size": c_len,
-            "_world_size": world_size,
-            "_lm_score_sum": lm_score_sum,
-            "_kaldi_score_sum": kaldi_score_sum,
-            "_word_lm_sum": word_lm_sum,
-            "_num_words": num_words,
-            "_vocab_seen": vocab_seen,
-        }
-
-        return c_err, c_len, logging_output
-
-    def load_dataset(self, split: str, task_cfg: FairseqDataclass = None, **kwargs):
-        data_path = self.cfg.data
-        task_cfg = task_cfg or self.cfg
-
-        has_unpaired_text = os.path.exists(
-            os.path.join(self.cfg.text_data, f"{split}.idx")
-        )
-
-        self.datasets[split] = ExtractedFeaturesDataset(
-            path=data_path,
-            split=split,
-            min_length=3,
-            max_length=task_cfg.max_length,
-            labels=None if has_unpaired_text else task_cfg.labels,
-            label_dict=self.target_dictionary,
-            shuffle=getattr(task_cfg, "shuffle", True),
-            sort_by_length=task_cfg.sort_by_length,
-        )
-
-        logger.info(f"split {split} has unpaired text? {has_unpaired_text}")
-        if has_unpaired_text:
-            text_dataset = data_utils.load_indexed_dataset(
-                os.path.join(self.cfg.text_data, split), self.target_dictionary
-            )
-            text_dataset = StripTokenDataset(text_dataset, self.target_dictionary.eos())
-            self.datasets[split] = RandomInputDataset(
-                self.datasets[split],
-                text_dataset,
-                ["random_label"],
-                add_to_input=True,
-                pad_idx=self.target_dictionary.pad(),
-            )
-
-    @property
-    def source_dictionary(self):
-        return self._source_dictionary
-
-    @property
-    def target_dictionary(self):
-        """Return the :class:`~fairseq.data.Dictionary` for the language
-        model."""
-        return self._target_dictionary
-
-    def max_positions(self):
-        """Maximum input length supported by the encoder."""
-        return None
-
-    def reduce_metrics(self, logging_outputs, criterion):
-        super().reduce_metrics(logging_outputs, criterion)
-
-        zero = torch.scalar_tensor(0.0)
-        num_char_errors = sum(
-            log.get("_num_char_errors", zero) for log in logging_outputs
-        )
-        num_chars = sum(log.get("_num_chars", zero) for log in logging_outputs)
-        num_word_errors = sum(
-            log.get("_num_word_errors", zero) for log in logging_outputs
-        )
-        num_words = sum(log.get("_num_words", zero) for log in logging_outputs)
-        num_pred_chars = sum(
-            log.get("_num_pred_chars", zero) for log in logging_outputs
-        )
-
-        lm_score_sum = sum(log.get("_lm_score_sum", zero) for log in logging_outputs)
-        vocab_seen = (
-            sum(log.get("_vocab_seen", zero) for log in logging_outputs)
-            .bool()
-            .sum()
-            .item()
-        )
-        kaldi_score_sum = sum(
-            log.get("_kaldi_score_sum", zero) for log in logging_outputs
-        )
-        word_lm_sum = sum(log.get("_word_lm_sum", zero) for log in logging_outputs)
-
-        metrics.log_scalar_sum("_num_char_errors", num_char_errors)
-        metrics.log_scalar_sum("_num_chars", num_chars)
-        metrics.log_scalar_sum("_num_word_errors", num_word_errors)
-        metrics.log_scalar_sum("_num_words", num_words)
-
-        
metrics.log_scalar_sum("lm_score_sum", lm_score_sum) - metrics.log_scalar_sum("num_pred_chars", num_pred_chars) - - if self.cfg.word_kenlm_path is not None: - metrics.log_scalar_sum("kaldi_score_sum", kaldi_score_sum) - metrics.log_scalar_sum("word_lm_sum", word_lm_sum) - - if num_chars > 0: - metrics.log_derived( - "uer", - lambda meters: meters["_num_char_errors"].sum - * 100.0 - / meters["_num_chars"].sum - if meters["_num_chars"].sum > 0 - else float("nan"), - ) - - if lm_score_sum < 0 and vocab_seen > 0: - metrics.log_scalar("vocab_seen_pct", vocab_seen / self.num_symbols) - - metrics.log_derived( - "weighted_lm_ppl", - lambda meters: math.pow( - 10, - -meters["lm_score_sum"].sum - / ( - meters["num_pred_chars"].sum + meters["nsentences"].sum - ), # account for </s> - ) - / meters["vocab_seen_pct"].avg ** self.cfg.vocab_usage_power, - ) - - metrics.log_derived( - "lm_ppl", - lambda meters: math.pow( - 10, - -meters["lm_score_sum"].sum - / ( - meters["num_pred_chars"].sum + meters["nsentences"].sum - ), # account for </s> - ), - ) - else: - metrics.log_derived("weighted_lm_ppl", lambda meters: float("inf")) - - if num_words > 0: - if word_lm_sum != 0: - metrics.log_derived( - "word_lm_ppl", - lambda meters: math.pow( - 10, - -meters["word_lm_sum"].sum - / ( - meters["_num_words"].sum + meters["nsentences"].sum - ), # account for </s> - ), - ) - metrics.log_derived( - "weighted_word_lm_ppl", - lambda meters: math.pow( - 10, - -meters["word_lm_sum"].sum - / ( - meters["_num_words"].sum + meters["nsentences"].sum - ), # account for </s> - ) - / meters["vocab_seen_pct"].avg ** self.cfg.vocab_usage_power, - ) - - if self.cfg.word_kenlm_path is not None: - metrics.log_derived( - "kaldi_score", - lambda meters: meters["kaldi_score_sum"].sum - / meters["nsentences"].sum, - ) - - def build_model(self, cfg: FairseqDataclass, from_checkpoint=False): - model = super().build_model(cfg) - - return model diff --git a/kosmos-g/fairseq/examples/wav2vec/unsupervised/w2vu_generate.py b/kosmos-g/fairseq/examples/wav2vec/unsupervised/w2vu_generate.py deleted file mode 100644 index fca0c96f3..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/unsupervised/w2vu_generate.py +++ /dev/null @@ -1,707 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Run inference for pre-processed data with a trained model. 
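-
-When reference targets are available, this reports the error rate of the
-decoded output; with unsupervised_tuning enabled it instead returns an
-LM-perplexity-based score used for unsupervised decoding hyperparameter
-selection.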
-""" - -import ast -from collections import namedtuple -from dataclasses import dataclass, field -from enum import Enum, auto -import hydra -from hydra.core.config_store import ConfigStore -import logging -import math -import os -from omegaconf import OmegaConf -from typing import Optional -import sys - -import editdistance -import torch - -from hydra.core.hydra_config import HydraConfig - -from fairseq import checkpoint_utils, progress_bar, tasks, utils -from fairseq.data.data_utils import post_process -from fairseq.dataclass.configs import FairseqDataclass, FairseqConfig -from fairseq.logging.meters import StopwatchMeter -from omegaconf import open_dict - -from examples.speech_recognition.kaldi.kaldi_decoder import KaldiDecoderConfig - -logging.root.setLevel(logging.INFO) -logging.basicConfig(stream=sys.stdout, level=logging.INFO) -logger = logging.getLogger(__name__) - - -class DecoderType(Enum): - VITERBI = auto() - KENLM = auto() - FAIRSEQ = auto() - KALDI = auto() - - -@dataclass -class UnsupGenerateConfig(FairseqDataclass): - fairseq: FairseqConfig = FairseqConfig() - lm_weight: float = field( - default=2.0, - metadata={"help": "language model weight"}, - ) - w2l_decoder: DecoderType = field( - default=DecoderType.VITERBI, - metadata={"help": "type of decoder to use"}, - ) - kaldi_decoder_config: Optional[KaldiDecoderConfig] = None - lexicon: Optional[str] = field( - default=None, - metadata={ - "help": "path to lexicon. This is also used to 'phonemize' for unsupvised param tuning" - }, - ) - lm_model: Optional[str] = field( - default=None, - metadata={"help": "path to language model (kenlm or fairseq)"}, - ) - unit_lm: bool = field( - default=False, - metadata={"help": "whether to use unit lm"}, - ) - beam_threshold: float = field( - default=50.0, - metadata={"help": "beam score threshold"}, - ) - beam_size_token: float = field( - default=100.0, - metadata={"help": "max tokens per beam"}, - ) - beam: int = field( - default=5, - metadata={"help": "decoder beam size"}, - ) - nbest: int = field( - default=1, - metadata={"help": "number of results to return"}, - ) - word_score: float = field( - default=1.0, - metadata={"help": "word score to add at end of word"}, - ) - unk_weight: float = field( - default=-math.inf, - metadata={"help": "unknown token weight"}, - ) - sil_weight: float = field( - default=0.0, - metadata={"help": "silence token weight"}, - ) - targets: Optional[str] = field( - default=None, - metadata={"help": "extension of ground truth labels to compute UER"}, - ) - results_path: Optional[str] = field( - default=None, - metadata={"help": "where to store results"}, - ) - post_process: Optional[str] = field( - default=None, - metadata={"help": "how to post process results"}, - ) - vocab_usage_power: float = field( - default=2, - metadata={"help": "for unsupervised param tuning"}, - ) - - viterbi_transcript: Optional[str] = field( - default=None, - metadata={"help": "for unsupervised param tuning"}, - ) - min_lm_ppl: float = field( - default=0, - metadata={"help": "for unsupervised param tuning"}, - ) - min_vt_uer: float = field( - default=0, - metadata={"help": "for unsupervised param tuning"}, - ) - - blank_weight: float = field( - default=0, - metadata={"help": "value to add or set for blank emission"}, - ) - blank_mode: str = field( - default="set", - metadata={ - "help": "can be add or set, how to modify blank emission with blank weight" - }, - ) - sil_is_blank: bool = field( - default=False, - metadata={"help": "if true, <SIL> token is same as blank token"}, - ) - - 
unsupervised_tuning: bool = field( - default=False, - metadata={ - "help": "if true, returns a score based on unsupervised param selection metric instead of UER" - }, - ) - is_ax: bool = field( - default=False, - metadata={ - "help": "if true, assumes we are using ax for tuning and returns a tuple for ax to consume" - }, - ) - - -def get_dataset_itr(cfg, task): - return task.get_batch_iterator( - dataset=task.dataset(cfg.fairseq.dataset.gen_subset), - max_tokens=cfg.fairseq.dataset.max_tokens, - max_sentences=cfg.fairseq.dataset.batch_size, - max_positions=(sys.maxsize, sys.maxsize), - ignore_invalid_inputs=cfg.fairseq.dataset.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=cfg.fairseq.dataset.required_batch_size_multiple, - num_shards=cfg.fairseq.dataset.num_shards, - shard_id=cfg.fairseq.dataset.shard_id, - num_workers=cfg.fairseq.dataset.num_workers, - data_buffer_size=cfg.fairseq.dataset.data_buffer_size, - ).next_epoch_itr(shuffle=False) - - -def process_predictions( - cfg: UnsupGenerateConfig, - hypos, - tgt_dict, - target_tokens, - res_files, -): - retval = [] - word_preds = [] - transcriptions = [] - dec_scores = [] - - for i, hypo in enumerate(hypos[: min(len(hypos), cfg.nbest)]): - if torch.is_tensor(hypo["tokens"]): - tokens = hypo["tokens"].int().cpu() - tokens = tokens[tokens >= tgt_dict.nspecial] - hyp_pieces = tgt_dict.string(tokens) - else: - hyp_pieces = " ".join(hypo["tokens"]) - - if "words" in hypo and len(hypo["words"]) > 0: - hyp_words = " ".join(hypo["words"]) - else: - hyp_words = post_process(hyp_pieces, cfg.post_process) - - to_write = {} - if res_files is not None: - to_write[res_files["hypo.units"]] = hyp_pieces - to_write[res_files["hypo.words"]] = hyp_words - - tgt_words = "" - if target_tokens is not None: - if isinstance(target_tokens, str): - tgt_pieces = tgt_words = target_tokens - else: - tgt_pieces = tgt_dict.string(target_tokens) - tgt_words = post_process(tgt_pieces, cfg.post_process) - - if res_files is not None: - to_write[res_files["ref.units"]] = tgt_pieces - to_write[res_files["ref.words"]] = tgt_words - - if not cfg.fairseq.common_eval.quiet: - logger.info(f"HYPO {i}:" + hyp_words) - if tgt_words: - logger.info("TARGET:" + tgt_words) - - if "am_score" in hypo and "lm_score" in hypo: - logger.info( - f"DECODER AM SCORE: {hypo['am_score']}, DECODER LM SCORE: {hypo['lm_score']}, DECODER SCORE: {hypo['score']}" - ) - elif "score" in hypo: - logger.info(f"DECODER SCORE: {hypo['score']}") - - logger.info("___________________") - - hyp_words_arr = hyp_words.split() - tgt_words_arr = tgt_words.split() - - retval.append( - ( - editdistance.eval(hyp_words_arr, tgt_words_arr), - len(hyp_words_arr), - len(tgt_words_arr), - hyp_pieces, - hyp_words, - ) - ) - word_preds.append(hyp_words_arr) - transcriptions.append(to_write) - dec_scores.append(-hypo.get("score", 0)) # negate cuz kaldi returns NLL - - if len(retval) > 1: - best = None - for r, t in zip(retval, transcriptions): - if best is None or r[0] < best[0][0]: - best = r, t - for dest, tran in best[1].items(): - print(tran, file=dest) - dest.flush() - return best[0] - - assert len(transcriptions) == 1 - for dest, tran in transcriptions[0].items(): - print(tran, file=dest) - - return retval[0] - - -def prepare_result_files(cfg: UnsupGenerateConfig): - def get_res_file(file_prefix): - if cfg.fairseq.dataset.num_shards > 1: - file_prefix = f"{cfg.fairseq.dataset.shard_id}_{file_prefix}" - path = os.path.join( - cfg.results_path, - "{}{}.txt".format( - cfg.fairseq.dataset.gen_subset, - 
file_prefix, - ), - ) - return open(path, "w", buffering=1) - - if not cfg.results_path: - return None - - return { - "hypo.words": get_res_file(""), - "hypo.units": get_res_file("_units"), - "ref.words": get_res_file("_ref"), - "ref.units": get_res_file("_ref_units"), - "hypo.nbest.words": get_res_file("_nbest_words"), - } - - -def optimize_models(cfg: UnsupGenerateConfig, use_cuda, models): - """Optimize ensemble for generation""" - for model in models: - model.eval() - if cfg.fairseq.common.fp16: - model.half() - if use_cuda: - model.cuda() - - -GenResult = namedtuple( - "GenResult", - [ - "count", - "errs_t", - "gen_timer", - "lengths_hyp_unit_t", - "lengths_hyp_t", - "lengths_t", - "lm_score_t", - "num_feats", - "num_sentences", - "num_symbols", - "vt_err_t", - "vt_length_t", - ], -) - - -def generate(cfg: UnsupGenerateConfig, models, saved_cfg, use_cuda): - task = tasks.setup_task(cfg.fairseq.task) - saved_cfg.task.labels = cfg.fairseq.task.labels - task.load_dataset(cfg.fairseq.dataset.gen_subset, task_cfg=saved_cfg.task) - # Set dictionary - tgt_dict = task.target_dictionary - logger.info( - "| {} {} {} examples".format( - cfg.fairseq.task.data, - cfg.fairseq.dataset.gen_subset, - len(task.dataset(cfg.fairseq.dataset.gen_subset)), - ) - ) - # Load dataset (possibly sharded) - itr = get_dataset_itr(cfg, task) - # Initialize generator - gen_timer = StopwatchMeter() - - def build_generator(cfg: UnsupGenerateConfig): - w2l_decoder = cfg.w2l_decoder - if w2l_decoder == DecoderType.VITERBI: - from examples.speech_recognition.w2l_decoder import W2lViterbiDecoder - - return W2lViterbiDecoder(cfg, task.target_dictionary) - elif w2l_decoder == DecoderType.KENLM: - from examples.speech_recognition.w2l_decoder import W2lKenLMDecoder - - return W2lKenLMDecoder(cfg, task.target_dictionary) - elif w2l_decoder == DecoderType.FAIRSEQ: - from examples.speech_recognition.w2l_decoder import W2lFairseqLMDecoder - - return W2lFairseqLMDecoder(cfg, task.target_dictionary) - elif w2l_decoder == DecoderType.KALDI: - from examples.speech_recognition.kaldi.kaldi_decoder import KaldiDecoder - - assert cfg.kaldi_decoder_config is not None - - return KaldiDecoder( - cfg.kaldi_decoder_config, - cfg.beam, - ) - else: - raise NotImplementedError( - "only wav2letter decoders with (viterbi, kenlm, fairseqlm) options are supported at the moment but found " - + str(w2l_decoder) - ) - - generator = build_generator(cfg) - - kenlm = None - fairseq_lm = None - if cfg.lm_model is not None: - import kenlm - - kenlm = kenlm.Model(cfg.lm_model) - - num_sentences = 0 - if cfg.results_path is not None and not os.path.exists(cfg.results_path): - os.makedirs(cfg.results_path) - - res_files = prepare_result_files(cfg) - errs_t = 0 - lengths_hyp_t = 0 - lengths_hyp_unit_t = 0 - lengths_t = 0 - count = 0 - num_feats = 0 - all_hyp_pieces = [] - all_hyp_words = [] - - num_symbols = ( - len([s for s in tgt_dict.symbols if not s.startswith("madeup")]) - - tgt_dict.nspecial - ) - targets = None - if cfg.targets is not None: - tgt_path = os.path.join( - cfg.fairseq.task.data, cfg.fairseq.dataset.gen_subset + "." 
+ cfg.targets - ) - if os.path.exists(tgt_path): - with open(tgt_path, "r") as f: - targets = f.read().splitlines() - viterbi_transcript = None - if cfg.viterbi_transcript is not None and len(cfg.viterbi_transcript) > 0: - logger.info(f"loading viterbi transcript from {cfg.viterbi_transcript}") - with open(cfg.viterbi_transcript, "r") as vf: - viterbi_transcript = vf.readlines() - viterbi_transcript = [v.rstrip().split() for v in viterbi_transcript] - - gen_timer.start() - - start = 0 - end = len(itr) - - hypo_futures = None - if cfg.w2l_decoder == DecoderType.KALDI: - logger.info("Extracting features") - hypo_futures = [] - samples = [] - with progress_bar.build_progress_bar(cfg.fairseq.common, itr) as t: - for i, sample in enumerate(t): - if "net_input" not in sample or i < start or i >= end: - continue - if "padding_mask" not in sample["net_input"]: - sample["net_input"]["padding_mask"] = None - - hypos, num_feats = gen_hypos( - generator, models, num_feats, sample, task, use_cuda - ) - hypo_futures.append(hypos) - samples.append(sample) - itr = list(zip(hypo_futures, samples)) - start = 0 - end = len(itr) - logger.info("Finished extracting features") - - with progress_bar.build_progress_bar(cfg.fairseq.common, itr) as t: - for i, sample in enumerate(t): - if i < start or i >= end: - continue - - if hypo_futures is not None: - hypos, sample = sample - hypos = [h.result() for h in hypos] - else: - if "net_input" not in sample: - continue - - hypos, num_feats = gen_hypos( - generator, models, num_feats, sample, task, use_cuda - ) - - for i, sample_id in enumerate(sample["id"].tolist()): - if targets is not None: - target_tokens = targets[sample_id] - elif "target" in sample or "target_label" in sample: - toks = ( - sample["target"][i, :] - if "target_label" not in sample - else sample["target_label"][i, :] - ) - - target_tokens = utils.strip_pad(toks, tgt_dict.pad()).int().cpu() - else: - target_tokens = None - - # Process top predictions - ( - errs, - length_hyp, - length, - hyp_pieces, - hyp_words, - ) = process_predictions( - cfg, - hypos[i], - tgt_dict, - target_tokens, - res_files, - ) - errs_t += errs - lengths_hyp_t += length_hyp - lengths_hyp_unit_t += ( - len(hyp_pieces) if len(hyp_pieces) > 0 else len(hyp_words) - ) - lengths_t += length - count += 1 - all_hyp_pieces.append(hyp_pieces) - all_hyp_words.append(hyp_words) - - num_sentences += ( - sample["nsentences"] if "nsentences" in sample else sample["id"].numel() - ) - - lm_score_sum = 0 - if kenlm is not None: - - if cfg.unit_lm: - lm_score_sum = sum(kenlm.score(w) for w in all_hyp_pieces) - else: - lm_score_sum = sum(kenlm.score(w) for w in all_hyp_words) - elif fairseq_lm is not None: - lm_score_sum = sum(fairseq_lm.score([h.split() for h in all_hyp_words])[0]) - - vt_err_t = 0 - vt_length_t = 0 - if viterbi_transcript is not None: - unit_hyps = [] - if cfg.targets is not None and cfg.lexicon is not None: - lex = {} - with open(cfg.lexicon, "r") as lf: - for line in lf: - items = line.rstrip().split() - lex[items[0]] = items[1:] - for h in all_hyp_pieces: - hyp_ws = [] - for w in h.split(): - assert w in lex, w - hyp_ws.extend(lex[w]) - unit_hyps.append(hyp_ws) - - else: - unit_hyps.extend([h.split() for h in all_hyp_words]) - - vt_err_t = sum( - editdistance.eval(vt, h) for vt, h in zip(viterbi_transcript, unit_hyps) - ) - - vt_length_t = sum(len(h) for h in viterbi_transcript) - - if res_files is not None: - for r in res_files.values(): - r.close() - - gen_timer.stop(lengths_hyp_t) - - return GenResult( - count, - errs_t, 
- gen_timer, - lengths_hyp_unit_t, - lengths_hyp_t, - lengths_t, - lm_score_sum, - num_feats, - num_sentences, - num_symbols, - vt_err_t, - vt_length_t, - ) - - -def gen_hypos(generator, models, num_feats, sample, task, use_cuda): - sample = utils.move_to_cuda(sample) if use_cuda else sample - - if "features" in sample["net_input"]: - sample["net_input"]["dense_x_only"] = True - num_feats += ( - sample["net_input"]["features"].shape[0] - * sample["net_input"]["features"].shape[1] - ) - hypos = task.inference_step(generator, models, sample, None) - return hypos, num_feats - - -def main(cfg: UnsupGenerateConfig, model=None): - if ( - cfg.fairseq.dataset.max_tokens is None - and cfg.fairseq.dataset.batch_size is None - ): - cfg.fairseq.dataset.max_tokens = 1024000 - - use_cuda = torch.cuda.is_available() and not cfg.fairseq.common.cpu - - task = tasks.setup_task(cfg.fairseq.task) - - overrides = ast.literal_eval(cfg.fairseq.common_eval.model_overrides) - - if cfg.fairseq.task._name == "unpaired_audio_text": - overrides["model"] = { - "blank_weight": cfg.blank_weight, - "blank_mode": cfg.blank_mode, - "blank_is_sil": cfg.sil_is_blank, - "no_softmax": True, - "segmentation": { - "type": "NONE", - }, - } - else: - overrides["model"] = { - "blank_weight": cfg.blank_weight, - "blank_mode": cfg.blank_mode, - } - - if model is None: - # Load ensemble - logger.info("| loading model(s) from {}".format(cfg.fairseq.common_eval.path)) - models, saved_cfg = checkpoint_utils.load_model_ensemble( - cfg.fairseq.common_eval.path.split("\\"), - arg_overrides=overrides, - task=task, - suffix=cfg.fairseq.checkpoint.checkpoint_suffix, - strict=(cfg.fairseq.checkpoint.checkpoint_shard_count == 1), - num_shards=cfg.fairseq.checkpoint.checkpoint_shard_count, - ) - optimize_models(cfg, use_cuda, models) - else: - models = [model] - saved_cfg = cfg.fairseq - - with open_dict(saved_cfg.task): - saved_cfg.task.shuffle = False - saved_cfg.task.sort_by_length = False - - gen_result = generate(cfg, models, saved_cfg, use_cuda) - - wer = None - if gen_result.lengths_t > 0: - wer = gen_result.errs_t * 100.0 / gen_result.lengths_t - logger.info(f"WER: {wer}") - - lm_ppl = float("inf") - - if gen_result.lm_score_t != 0 and gen_result.lengths_hyp_t > 0: - hyp_len = gen_result.lengths_hyp_t - lm_ppl = math.pow( - 10, -gen_result.lm_score_t / (hyp_len + gen_result.num_sentences) - ) - logger.info(f"LM PPL: {lm_ppl}") - - logger.info( - "| Processed {} sentences ({} tokens) in {:.1f}s ({:.2f}" - " sentences/s, {:.2f} tokens/s)".format( - gen_result.num_sentences, - gen_result.gen_timer.n, - gen_result.gen_timer.sum, - gen_result.num_sentences / gen_result.gen_timer.sum, - 1.0 / gen_result.gen_timer.avg, - ) - ) - - vt_diff = None - if gen_result.vt_length_t > 0: - vt_diff = gen_result.vt_err_t / gen_result.vt_length_t - vt_diff = max(cfg.min_vt_uer, vt_diff) - - lm_ppl = max(cfg.min_lm_ppl, lm_ppl) - - if not cfg.unsupervised_tuning: - weighted_score = wer - else: - weighted_score = math.log(lm_ppl) * (vt_diff or 1.0) - - res = ( - f"| Generate {cfg.fairseq.dataset.gen_subset} with beam={cfg.beam}, " - f"lm_weight={cfg.kaldi_decoder_config.acoustic_scale if cfg.kaldi_decoder_config else cfg.lm_weight}, " - f"word_score={cfg.word_score}, sil_weight={cfg.sil_weight}, blank_weight={cfg.blank_weight}, " - f"WER: {wer}, LM_PPL: {lm_ppl}, num feats: {gen_result.num_feats}, " - f"length: {gen_result.lengths_hyp_t}, UER to viterbi: {(vt_diff or 0) * 100}, score: {weighted_score}" - ) - - logger.info(res) - # print(res) - - return task, 
weighted_score - - -@hydra.main( - config_path=os.path.join("../../..", "fairseq", "config"), config_name="config" -) -def hydra_main(cfg): - with open_dict(cfg): - # make hydra logging work with ddp (see # see https://github.com/facebookresearch/hydra/issues/1126) - cfg.job_logging_cfg = OmegaConf.to_container( - HydraConfig.get().job_logging, resolve=True - ) - - cfg = OmegaConf.create( - OmegaConf.to_container(cfg, resolve=False, enum_to_str=False) - ) - OmegaConf.set_struct(cfg, True) - logger.info(cfg) - - utils.import_user_module(cfg.fairseq.common) - - _, score = main(cfg) - - if cfg.is_ax: - return score, None - return score - - -def cli_main(): - try: - from hydra._internal.utils import get_args - - cfg_name = get_args().config_name or "config" - except: - logger.warning("Failed to get config name from hydra args") - cfg_name = "config" - - cs = ConfigStore.instance() - cs.store(name=cfg_name, node=UnsupGenerateConfig) - hydra_main() - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/examples/wav2vec/vq-wav2vec_featurize.py b/kosmos-g/fairseq/examples/wav2vec/vq-wav2vec_featurize.py deleted file mode 100644 index 627072ee1..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/vq-wav2vec_featurize.py +++ /dev/null @@ -1,250 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Helper script to pre-compute embeddings for a flashlight (previously called wav2letter++) dataset -""" - -import argparse -import glob -import os -import os.path as osp -import pprint - -import soundfile as sf -import torch -import fairseq -from torch import nn -from torch.utils.data import DataLoader - - -try: - import tqdm -except: - print("Install tqdm to use --log-format=tqdm") - - -class FilesDataset: - def __init__(self, files, labels): - self.files = files - if labels and osp.exists(labels): - with open(labels, "r") as lbl_f: - self.labels = [line.rstrip() for line in lbl_f] - else: - self.labels = labels - - def __len__(self): - return len(self.files) - - def __getitem__(self, index): - fname = self.files[index] - - wav, sr = sf.read(fname) - assert sr == 16000 - - wav = torch.from_numpy(wav).float() - lbls = None - if self.labels: - if isinstance(self.labels, str): - lbl_file = osp.splitext(fname)[0] + "." 
+ self.labels - with open(lbl_file, "r") as lblf: - lbls = lblf.readline() - assert lbls is not None - else: - lbls = self.labels[index] - return wav, lbls - - def collate(self, batch): - return batch - - -class ArgTypes: - @staticmethod - def existing_path(arg): - arg = str(arg) - assert osp.exists(arg), f"File {arg} does not exist" - return arg - - @staticmethod - def mkdir(arg): - arg = str(arg) - os.makedirs(arg, exist_ok=True) - return arg - - -class DatasetWriter: - def __init__(self): - - self.args = self.load_config() - pprint.pprint(self.args.__dict__) - - self.model = self.load_model() - - def __getattr__(self, attr): - return getattr(self.args, attr) - - def read_manifest(self, fname): - - with open(fname, "r") as fp: - lines = fp.read().split("\n") - root = lines.pop(0).strip() - fnames = [ - osp.join(root, line.split("\t")[0]) for line in lines if len(line) > 0 - ] - - return fnames - - def process_splits(self): - - if self.args.shard is not None or self.args.num_shards is not None: - assert self.args.shard is not None and self.args.num_shards is not None - - for split in self.splits: - print(split) - - if self.extension == "tsv": - datadir = osp.join(self.data_dir, f"{split}.{self.extension}") - print("Reading manifest file: ", datadir) - files = self.read_manifest(datadir) - else: - datadir = osp.join(self.data_dir, split, f"**/*.{self.extension}") - files = glob.glob(datadir, recursive=True) - - assert len(files) > 0 - - if self.args.shard is not None: - files = files[self.args.shard :: self.args.num_shards] - - lbls = [] - with open(self.data_file(split), "w") as srcf: - for line, lbl in self.iterate(files): - print(line, file=srcf) - if self.args.labels: - lbls.append(lbl + "\n") - - if self.args.labels: - assert all(a is not None for a in lbls) - with open(self.lbl_file(split), "w") as lblf: - lblf.writelines(lbls) - - def iterate(self, files): - - data = self.load_data(files) - for samples in tqdm.tqdm(data, total=len(files) // 32): - - for wav, lbl in samples: - x = wav.unsqueeze(0).float().cuda() - - div = 1 - while x.size(-1) // div > self.args.max_size: - div += 1 - - xs = x.chunk(div, dim=-1) - - result = [] - for x in xs: - torch.cuda.empty_cache() - x = self.model.feature_extractor(x) - if self.quantize_location == "encoder": - with torch.no_grad(): - _, idx = self.model.vector_quantizer.forward_idx(x) - idx = idx.squeeze(0).cpu() - else: - with torch.no_grad(): - z = self.model.feature_aggregator(x) - _, idx = self.model.vector_quantizer.forward_idx(z) - idx = idx.squeeze(0).cpu() - result.append(idx) - - idx = torch.cat(result, dim=0) - yield " ".join("-".join(map(str, a.tolist())) for a in idx), lbl - - def lbl_file(self, name): - shard_part = "" if self.args.shard is None else f".{self.args.shard}" - return osp.join(self.output_dir, f"{name}.lbl{shard_part}") - - def data_file(self, name): - shard_part = "" if self.args.shard is None else f".{self.args.shard}" - return osp.join(self.output_dir, f"{name}.src{shard_part}") - - def var_file(self): - return osp.join(self.output_dir, f"vars.pt") - - def load_config(self): - - parser = argparse.ArgumentParser("Vector Quantized wav2vec features") - - # Model Arguments - parser.add_argument("--checkpoint", type=ArgTypes.existing_path, required=True) - parser.add_argument("--data-parallel", action="store_true") - - # Output Arguments - parser.add_argument("--output-dir", type=ArgTypes.mkdir, required=True) - - # Data Arguments - parser.add_argument("--data-dir", type=ArgTypes.existing_path, required=True) - 
parser.add_argument("--splits", type=str, nargs="+", required=True) - parser.add_argument("--extension", type=str, required=True) - parser.add_argument("--labels", type=str, required=False) - - parser.add_argument("--shard", type=int, default=None) - parser.add_argument("--num-shards", type=int, default=None) - parser.add_argument("--max-size", type=int, default=1300000) - - # Logger Arguments - parser.add_argument( - "--log-format", type=str, choices=["none", "simple", "tqdm"] - ) - - return parser.parse_args() - - def load_data(self, fnames): - - dataset = FilesDataset(fnames, self.args.labels) - loader = DataLoader( - dataset, batch_size=32, collate_fn=dataset.collate, num_workers=8 - ) - return loader - - def load_model(self): - model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([self.checkpoint]) - model = model[0] - - self.quantize_location = getattr(cfg.model, "vq", "encoder") - - model.eval().float() - model.cuda() - - if self.data_parallel: - model = nn.DataParallel(model) - - return model - - def __call__(self): - - self.process_splits() - - if hasattr(self.model.feature_extractor, "vars") and ( - self.args.shard is None or self.args.shard == 0 - ): - vars = ( - self.model.feature_extractor.vars.view( - self.model.feature_extractor.banks, - self.model.feature_extractor.num_vars, - -1, - ) - .cpu() - .detach() - ) - print("writing learned latent variable embeddings: ", vars.shape) - torch.save(vars, self.var_file()) - - -if __name__ == "__main__": - write_data = DatasetWriter() - - write_data() - print("Done.") diff --git a/kosmos-g/fairseq/examples/wav2vec/wav2vec_featurize.py b/kosmos-g/fairseq/examples/wav2vec/wav2vec_featurize.py deleted file mode 100644 index 588268b70..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/wav2vec_featurize.py +++ /dev/null @@ -1,249 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
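Each line of the `.src` files written by the vq-wav2vec featurizer above encodes one utterance as space-separated timesteps, with the indices of the codebook groups joined by `-` inside each timestep. A sketch of reading one such line back (the example values are illustrative):

```python
# Sketch: invert the " ".join("-".join(...)) encoding used in
# DatasetWriter.iterate above.
import numpy as np

line = "102-7 102-7 45-311"  # one utterance, 3 timesteps, 2 codebook groups
idx = np.array([[int(v) for v in tok.split("-")] for tok in line.split()])
print(idx.shape)  # (3, 2)
```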
- -""" -Helper script to pre-compute embeddings for a flashlight (previously called wav2letter++) dataset -""" - -import argparse -import glob -import os -from shutil import copy - -import h5py -import numpy as np -import soundfile as sf -import torch -import tqdm -import fairseq -from torch import nn - - -def read_audio(fname): - """ Load an audio file and return PCM along with the sample rate """ - - wav, sr = sf.read(fname) - assert sr == 16e3 - - return wav, 16e3 - - -class PretrainedWav2VecModel(nn.Module): - def __init__(self, fname): - super().__init__() - - model, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([fname]) - model = model[0] - model.eval() - - self.model = model - - def forward(self, x): - with torch.no_grad(): - z = self.model.feature_extractor(x) - if isinstance(z, tuple): - z = z[0] - c = self.model.feature_aggregator(z) - return z, c - - -class EmbeddingWriterConfig(argparse.ArgumentParser): - def __init__(self): - super().__init__("Pre-compute embeddings for flashlight datasets") - - kwargs = {"action": "store", "type": str, "required": True} - - self.add_argument("--input", "-i", help="Input Directory", **kwargs) - self.add_argument("--output", "-o", help="Output Directory", **kwargs) - self.add_argument("--model", help="Path to model checkpoint", **kwargs) - self.add_argument("--split", help="Dataset Splits", nargs="+", **kwargs) - self.add_argument( - "--ext", default="wav", required=False, help="Audio file extension" - ) - - self.add_argument( - "--no-copy-labels", - action="store_true", - help="Do not copy label files. Useful for large datasets, use --targetdir in flashlight then.", - ) - self.add_argument( - "--use-feat", - action="store_true", - help="Use the feature vector ('z') instead of context vector ('c') for features", - ) - self.add_argument("--gpu", help="GPU to use", default=0, type=int) - - -class Prediction: - """ Lightweight wrapper around a fairspeech embedding model """ - - def __init__(self, fname, gpu=0): - self.gpu = gpu - self.model = PretrainedWav2VecModel(fname).cuda(gpu) - - def __call__(self, x): - x = torch.from_numpy(x).float().cuda(self.gpu) - with torch.no_grad(): - z, c = self.model(x.unsqueeze(0)) - - return z.squeeze(0).cpu().numpy(), c.squeeze(0).cpu().numpy() - - -class H5Writer: - """ Write features as hdf5 file in flashlight compatible format """ - - def __init__(self, fname): - self.fname = fname - os.makedirs(os.path.dirname(self.fname), exist_ok=True) - - def write(self, data): - channel, T = data.shape - - with h5py.File(self.fname, "w") as out_ds: - data = data.T.flatten() - out_ds["features"] = data - out_ds["info"] = np.array([16e3 // 160, T, channel]) - - -class EmbeddingDatasetWriter(object): - """Given a model and a flashlight dataset, pre-compute and store embeddings - - Args: - input_root, str : - Path to the flashlight dataset - output_root, str : - Desired output directory. 
Will be created if non-existent - split, str : - Dataset split - """ - - def __init__( - self, - input_root, - output_root, - split, - model_fname, - extension="wav", - gpu=0, - verbose=False, - use_feat=False, - ): - - assert os.path.exists(model_fname) - - self.model_fname = model_fname - self.model = Prediction(self.model_fname, gpu) - - self.input_root = input_root - self.output_root = output_root - self.split = split - self.verbose = verbose - self.extension = extension - self.use_feat = use_feat - - assert os.path.exists(self.input_path), "Input path '{}' does not exist".format( - self.input_path - ) - - def _progress(self, iterable, **kwargs): - if self.verbose: - return tqdm.tqdm(iterable, **kwargs) - return iterable - - def require_output_path(self, fname=None): - path = self.get_output_path(fname) - os.makedirs(path, exist_ok=True) - - @property - def input_path(self): - return self.get_input_path() - - @property - def output_path(self): - return self.get_output_path() - - def get_input_path(self, fname=None): - if fname is None: - return os.path.join(self.input_root, self.split) - return os.path.join(self.get_input_path(), fname) - - def get_output_path(self, fname=None): - if fname is None: - return os.path.join(self.output_root, self.split) - return os.path.join(self.get_output_path(), fname) - - def copy_labels(self): - self.require_output_path() - - labels = list( - filter( - lambda x: self.extension not in x, glob.glob(self.get_input_path("*")) - ) - ) - for fname in tqdm.tqdm(labels): - copy(fname, self.output_path) - - @property - def input_fnames(self): - return sorted(glob.glob(self.get_input_path("*.{}".format(self.extension)))) - - def __len__(self): - return len(self.input_fnames) - - def write_features(self): - - paths = self.input_fnames - - fnames_context = map( - lambda x: os.path.join( - self.output_path, x.replace("." + self.extension, ".h5context") - ), - map(os.path.basename, paths), - ) - - for name, target_fname in self._progress( - zip(paths, fnames_context), total=len(self) - ): - wav, sr = read_audio(name) - z, c = self.model(wav) - feat = z if self.use_feat else c - writer = H5Writer(target_fname) - writer.write(feat) - - def __repr__(self): - - return "EmbeddingDatasetWriter ({n_files} files)\n\tinput:\t{input_root}\n\toutput:\t{output_root}\n\tsplit:\t{split})".format( - n_files=len(self), **self.__dict__ - ) - - -if __name__ == "__main__": - - args = EmbeddingWriterConfig().parse_args() - - for split in args.split: - - writer = EmbeddingDatasetWriter( - input_root=args.input, - output_root=args.output, - split=split, - model_fname=args.model, - gpu=args.gpu, - extension=args.ext, - use_feat=args.use_feat, - ) - - print(writer) - writer.require_output_path() - - print("Writing Features...") - writer.write_features() - print("Done.") - - if not args.no_copy_labels: - print("Copying label data...") - writer.copy_labels() - print("Done.") diff --git a/kosmos-g/fairseq/examples/wav2vec/wav2vec_manifest.py b/kosmos-g/fairseq/examples/wav2vec/wav2vec_manifest.py deleted file mode 100644 index 9b8aa180e..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/wav2vec_manifest.py +++ /dev/null @@ -1,87 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Data pre-processing: build vocabularies and binarize training data. 
-
-"""
-
-import argparse
-import glob
-import os
-import random
-
-import soundfile
-
-
-def get_parser():
-    parser = argparse.ArgumentParser()
-    parser.add_argument(
-        "root", metavar="DIR", help="root directory containing flac files to index"
-    )
-    parser.add_argument(
-        "--valid-percent",
-        default=0.01,
-        type=float,
-        metavar="D",
-        help="percentage of data to use as validation set (between 0 and 1)",
-    )
-    parser.add_argument(
-        "--dest", default=".", type=str, metavar="DIR", help="output directory"
-    )
-    parser.add_argument(
-        "--ext", default="flac", type=str, metavar="EXT", help="extension to look for"
-    )
-    parser.add_argument("--seed", default=42, type=int, metavar="N", help="random seed")
-    parser.add_argument(
-        "--path-must-contain",
-        default=None,
-        type=str,
-        metavar="FRAG",
-        help="if set, path must contain this substring for a file to be included in the manifest",
-    )
-    return parser
-
-
-def main(args):
-    assert args.valid_percent >= 0 and args.valid_percent <= 1.0
-
-    if not os.path.exists(args.dest):
-        os.makedirs(args.dest)
-
-    dir_path = os.path.realpath(args.root)
-    search_path = os.path.join(dir_path, "**/*." + args.ext)
-    rand = random.Random(args.seed)
-
-    valid_f = (
-        open(os.path.join(args.dest, "valid.tsv"), "w")
-        if args.valid_percent > 0
-        else None
-    )
-
-    with open(os.path.join(args.dest, "train.tsv"), "w") as train_f:
-        print(dir_path, file=train_f)
-
-        if valid_f is not None:
-            print(dir_path, file=valid_f)
-
-        for fname in glob.iglob(search_path, recursive=True):
-            file_path = os.path.realpath(fname)
-
-            if args.path_must_contain and args.path_must_contain not in file_path:
-                continue
-
-            frames = soundfile.info(fname).frames
-            dest = train_f if rand.random() > args.valid_percent else valid_f
-            print(
-                "{}\t{}".format(os.path.relpath(file_path, dir_path), frames), file=dest
-            )
-    if valid_f is not None:
-        valid_f.close()
-
-
-if __name__ == "__main__":
-    parser = get_parser()
-    args = parser.parse_args()
-    main(args)
diff --git a/kosmos-g/fairseq/examples/wav2vec/xlsr/README.md b/kosmos-g/fairseq/examples/wav2vec/xlsr/README.md
deleted file mode 100644
index 84e9de36a..000000000
--- a/kosmos-g/fairseq/examples/wav2vec/xlsr/README.md
+++ /dev/null
@@ -1,68 +0,0 @@
-# XLS-R
-
-XLS-R is a set of large-scale models for self-supervised cross-lingual speech representation learning based on wav2vec 2.0. It was pretrained on 128 languages and approximately 436K hours of unlabeled speech data. With finetuning, these models achieve state-of-the-art performance in speech translation, speech recognition and language identification. We evaluate the model across multiple benchmarks, such as CoVoST-2 for speech translation, BABEL / MLS / CommonVoice / VoxPopuli for automatic speech recognition, VoxLingua107 for language identification, as well as VoxCeleb1 for speaker identification. More details about this work can be found in our [paper](https://arxiv.org/pdf/2111.09296.pdf) and download links can be found below.
-
-Model | Link
-|------|------
-XLS-R 300M | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr2_300m.pt)
-XLS-R 1B | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr2_960m_1000k.pt)
-XLS-R 2B | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/xlsr2_2B_1000k.pt)
-
-You can also download these models [here](https://huggingface.co/models?other=xls_r) and read more about them in the [blogpost](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) from Hugging Face.
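-
-As a quick sanity check after downloading, a checkpoint can be loaded with fairseq and used to extract speech representations. The sketch below is an assumption, not part of the official instructions: the local path `xlsr2_300m.pt` and the dummy 16 kHz mono input are placeholders, and the exact loading API may differ across fairseq versions.
-
-```python
-import torch
-import fairseq
-
-# hypothetical local path to a downloaded checkpoint
-ckpt_path = "xlsr2_300m.pt"
-models, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([ckpt_path])
-model = models[0]
-model.eval()
-
-# wav2vec 2.0 expects raw 16 kHz audio as a float tensor of shape (batch, samples)
-wav = torch.randn(1, 16000)  # one second of dummy audio
-with torch.no_grad():
-    features = model(wav, features_only=True, mask=False)["x"]
-print(features.shape)  # (batch, frames, hidden_dim)
-```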
-
-## Speech Translation Finetuned Models
-
-We multilingually finetune XLS-R models on [CoVoST 2](https://github.com/facebookresearch/covost), which has 21
-into-English and 15 out-of-English directions.
-
-Model | Directions | Link
-|------|------|------
-XLS-R 300M | 21 langs → En | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/xls_r_300m_21_en.pt)
-XLS-R 300M | En → 15 langs | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/xls_r_300m_en_15.pt)
-XLS-R 1B | 21 langs → En | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/xls_r_1b_21_en.pt)
-XLS-R 1B | En → 15 langs | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/xls_r_1b_en_15.pt)
-XLS-R 2B | 21 langs → En | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/xls_r_2b_21_en.pt)
-XLS-R 2B | En → 15 langs | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/xls_r_2b_en_15.pt)
-XLS-R 2B | 21 langs → En + En → 15 langs | [download](https://dl.fbaipublicfiles.com/fairseq/wav2vec/xls_r_2b_22_16.pt)
-
-## ASR Finetuning
-
-You can refer to the original wav2vec documentation for detailed instructions on finetuning a pretrained model with CTC [here](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#fine-tune-a-pre-trained-model-with-ctc). Below is an example command; the hyperparameter values used to reproduce the results in our paper are listed in the tables that follow.
-
-```shell script
-$ fairseq-hydra-train \
-    distributed_training.distributed_port=$PORT \
-    task.data=/path/to/data \
-    model.w2v_path=/path/to/model.pt \
-    --config-dir /path/to/fairseq-py/examples/wav2vec/xlsr/config \
-    --config-name finetune
-```
-
-For finetuning the 300M and 1B models, we use the same hyperparameter settings defined in `finetune.yaml`. We vary `optimization.max_update` as described in the table below, and `optimization.lr` is picked from the interval [2e-5, 3e-4] based on dev word error rate.
-
-Benchmark | Total Number of Updates
-|------|------
-Babel | 26000
-Common Voice | 13000
-VoxPopuli | 50000
-MLS 10h | 20000
-
-For finetuning the 2B model, we make some additional changes to `finetune.yaml`. We use the `fully_sharded` `distributed_training.ddp_backend` provided by the [fairscale](https://github.com/facebookresearch/fairscale) library and set `model.checkpoint_activations` to true. We also increase `dataset.max_tokens` to 2560000 and use a total effective batch size of 2560000*24. We sweep for the best `optimization.lr` within the interval [3e-6, 3e-5] using dev error rate. For the Common Voice dataset, we pick `model.mask_prob` for each language from {0.30, 0.40} based on best dev error rate.
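-
-For reference, the 2B-model changes described above can be expressed as Hydra overrides on the same command. This is only a sketch: the paths and `$PORT` are placeholders, the learning rate is one arbitrary point from the sweep interval above, and the update count is the Common Voice value from the table.
-
-```shell script
-$ fairseq-hydra-train \
-    distributed_training.distributed_port=$PORT \
-    distributed_training.ddp_backend=fully_sharded \
-    task.data=/path/to/data \
-    model.w2v_path=/path/to/xlsr2_2B_1000k.pt \
-    model.checkpoint_activations=true \
-    dataset.max_tokens=2560000 \
-    optimization.max_update=13000 \
-    'optimization.lr=[1e-5]' \
-    --config-dir /path/to/fairseq-py/examples/wav2vec/xlsr/config \
-    --config-name finetune
-```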
- - - -## Citation - -Please cite as: - -``` bibtex -@article{babu2021xlsr, - title={XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale}, - author={Arun Babu and Changhan Wang and Andros Tjandra and Kushal Lakhotia and Qiantong Xu and Naman Goyal and Kritika Singh and Patrick von Platen and Yatharth Saraf and Juan Pino and Alexei Baevski and Alexis Conneau and Michael Auli}, - year={2021}, - volume={abs/2111.09296}, - journal={arXiv}, -} -``` - - diff --git a/kosmos-g/fairseq/examples/wav2vec/xlsr/config/finetune.yaml b/kosmos-g/fairseq/examples/wav2vec/xlsr/config/finetune.yaml deleted file mode 100644 index 8736e101c..000000000 --- a/kosmos-g/fairseq/examples/wav2vec/xlsr/config/finetune.yaml +++ /dev/null @@ -1,66 +0,0 @@ -# @package _group_ - -common: - fp16: true - log_format: json - log_interval: 200 - tensorboard_logdir: tb - -checkpoint: - save_interval: 1000 - save_interval_updates: 1000 - keep_interval_updates: 1 - no_epoch_checkpoints: true - best_checkpoint_metric: wer - -task: - _name: audio_finetuning - data: ??? - normalize: true - labels: ltr - -dataset: - num_workers: 6 - max_tokens: 1280000 - skip_invalid_size_inputs_valid_test: true - validate_after_updates: 10000 - validate_interval_updates: 1000 - valid_subset: valid - -distributed_training: - ddp_backend: legacy_ddp - distributed_world_size: 4 - -criterion: - _name: ctc - zero_infinity: true - -optimization: - max_update: ??? - lr: [0.0003] - sentence_avg: true - update_freq: [5] - -optimizer: - _name: adam - adam_betas: (0.9,0.98) - adam_eps: 1e-08 - -lr_scheduler: - _name: tri_stage - phase_ratio: [0.1, 0.4, 0.5] - final_lr_scale: 0.05 - -model: - _name: wav2vec_ctc - w2v_path: ??? - apply_mask: true - mask_prob: 0.75 - mask_channel_prob: 0.25 - mask_channel_length: 64 - layerdrop: 0.1 - activation_dropout: 0.1 - feature_grad_mult: 0.0 - freeze_finetune_updates: 10000 - - checkpoint_activations: false diff --git a/kosmos-g/fairseq/examples/wmt19/README.md b/kosmos-g/fairseq/examples/wmt19/README.md deleted file mode 100644 index 5c90d0e6c..000000000 --- a/kosmos-g/fairseq/examples/wmt19/README.md +++ /dev/null @@ -1,85 +0,0 @@ -# WMT 19 - -This page provides pointers to the models of Facebook-FAIR's WMT'19 news translation task submission [(Ng et al., 2019)](https://arxiv.org/abs/1907.06616). 
- -## Pre-trained models - -Model | Description | Download ----|---|--- -`transformer.wmt19.en-de` | En->De Ensemble | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.joined-dict.ensemble.tar.gz) -`transformer.wmt19.de-en` | De->En Ensemble | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.joined-dict.ensemble.tar.gz) -`transformer.wmt19.en-ru` | En->Ru Ensemble | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.ensemble.tar.gz) -`transformer.wmt19.ru-en` | Ru->En Ensemble | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.ensemble.tar.gz) -`transformer_lm.wmt19.en` | En Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.en.tar.gz) -`transformer_lm.wmt19.de` | De Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.de.tar.gz) -`transformer_lm.wmt19.ru` | Ru Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.ru.tar.gz) - -## Pre-trained single models before finetuning - -Model | Description | Download ----|---|--- -`transformer.wmt19.en-de` | En->De Single, no finetuning | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.ffn8192.tar.gz) -`transformer.wmt19.de-en` | De->En Single, no finetuning | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.ffn8192.tar.gz) -`transformer.wmt19.en-ru` | En->Ru Single, no finetuning | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.ffn8192.tar.gz) -`transformer.wmt19.ru-en` | Ru->En Single, no finetuning | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.ffn8192.tar.gz) - -## Example usage (torch.hub) - -#### Requirements - -We require a few additional Python dependencies for preprocessing: -```bash -pip install fastBPE sacremoses -``` - -#### Translation - -```python -import torch - -# English to German translation -en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de', checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt', - tokenizer='moses', bpe='fastbpe') -en2de.translate("Machine learning is great!") # 'Maschinelles Lernen ist großartig!' - -# German to English translation -de2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.de-en', checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt', - tokenizer='moses', bpe='fastbpe') -de2en.translate("Maschinelles Lernen ist großartig!") # 'Machine learning is great!' - -# English to Russian translation -en2ru = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-ru', checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt', - tokenizer='moses', bpe='fastbpe') -en2ru.translate("Machine learning is great!") # 'Машинное обучение - это здорово!' - -# Russian to English translation -ru2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.ru-en', checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt', - tokenizer='moses', bpe='fastbpe') -ru2en.translate("Машинное обучение - это здорово!") # 'Machine learning is great!' -``` - -#### Language Modeling - -```python -# Sample from the English LM -en_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt19.en', tokenizer='moses', bpe='fastbpe') -en_lm.sample("Machine learning is") # 'Machine learning is the future of computing, says Microsoft boss Satya Nadella ...' 
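-
-# The hub interface can also score text under the LM (a sketch, assuming the
-# standard fairseq LM hub API; check your fairseq version for the exact names).
-# Exponentiating the mean negative log-probability gives a per-token perplexity:
-en_lm.score("Machine learning is great!")['positional_scores'].mean().neg().exp()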
-
-# Sample from the German LM
-de_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt19.de', tokenizer='moses', bpe='fastbpe')
-de_lm.sample("Maschinelles lernen ist") # 'Maschinelles lernen ist das A und O (neues-deutschland.de) Die Arbeitsbedingungen für Lehrerinnen und Lehrer sind seit Jahren verbesserungswürdig ...'
-
-# Sample from the Russian LM
-ru_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt19.ru', tokenizer='moses', bpe='fastbpe')
-ru_lm.sample("машинное обучение это") # 'машинное обучение это то, что мы называем "искусственным интеллектом".'
-```
-
-## Citation
-```bibtex
-@inproceedings{ng2019facebook,
-  title = {Facebook FAIR's WMT19 News Translation Task Submission},
-  author = {Ng, Nathan and Yee, Kyra and Baevski, Alexei and Ott, Myle and Auli, Michael and Edunov, Sergey},
-  booktitle = {Proc. of WMT},
-  year = 2019,
-}
-```
diff --git a/kosmos-g/fairseq/examples/wmt20/README.md b/kosmos-g/fairseq/examples/wmt20/README.md
deleted file mode 100644
index b4f287465..000000000
--- a/kosmos-g/fairseq/examples/wmt20/README.md
+++ /dev/null
@@ -1,72 +0,0 @@
-# WMT 20
-
-This page provides pointers to the models of Facebook-FAIR's WMT'20 news translation task submission [(Chen et al., 2020)](https://arxiv.org/abs/2011.08298).
-
-## Single best MT models (after finetuning on part of WMT20 news dev set)
-
-Model | Description | Download
----|---|---
-`transformer.wmt20.ta-en` | Ta->En | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.ta-en.single.tar.gz)
-`transformer.wmt20.en-ta` | En->Ta | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-ta.single.tar.gz)
-`transformer.wmt20.iu-en.news` | Iu->En (News domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu-en.news.single.tar.gz)
-`transformer.wmt20.en-iu.news` | En->Iu (News domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-iu.news.single.tar.gz)
-`transformer.wmt20.iu-en.nh` | Iu->En (Nunavut Hansard domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu-en.nh.single.tar.gz)
-`transformer.wmt20.en-iu.nh` | En->Iu (Nunavut Hansard domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-iu.nh.single.tar.gz)
-
-## Language models
-Model | Description | Download
----|---|---
-`transformer_lm.wmt20.en` | En Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en.tar.gz)
-`transformer_lm.wmt20.ta` | Ta Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.ta.tar.gz)
-`transformer_lm.wmt20.iu.news` | Iu Language Model (News domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu.news.tar.gz)
-`transformer_lm.wmt20.iu.nh` | Iu Language Model (Nunavut Hansard domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu.nh.tar.gz)
-
-## Example usage (torch.hub)
-
-#### Translation
-
-```python
-import torch
-
-# English to Tamil translation
-en2ta = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.en-ta')
-en2ta.translate("Machine learning is great!") # 'இயந்திரக் கற்றல் அருமை!'
-
-# Tamil to English translation
-ta2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.ta-en')
-ta2en.translate("இயந்திரக் கற்றல் அருமை!") # 'Machine learning is great!'
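-
-# Decoding options such as beam size can be passed through to the generator
-# (a sketch, assuming the standard fairseq hub interface):
-ta2en.translate("இயந்திரக் கற்றல் அருமை!", beam=10)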
-
-# English to Inuktitut translation
-en2iu = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.en-iu.news')
-en2iu.translate("machine learning is great!") # 'ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ ᐱᐅᔪᒻᒪᕆᒃ!'
-
-# Inuktitut to English translation
-iu2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.iu-en.news')
-iu2en.translate("ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ ᐱᐅᔪᒻᒪᕆᒃ!") # 'Machine learning excellence!'
-```
-
-#### Language Modeling
-
-```python
-# Sample from the English LM
-en_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt20.en')
-en_lm.sample("Machine learning is") # 'Machine learning is a type of artificial intelligence that uses machine learning to learn from data and make predictions.'
-
-# Sample from the Tamil LM
-ta_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt20.ta')
-ta_lm.sample("இயந்திரக் கற்றல் என்பது செயற்கை நுண்ணறிவின்") # 'இயந்திரக் கற்றல் என்பது செயற்கை நுண்ணறிவின் ஒரு பகுதியாகும்.'
-
-# Sample from the Inuktitut LM
-iu_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt20.iu.news')
-iu_lm.sample("ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ") # 'ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ, ᐊᒻᒪᓗ ᓯᓚᐅᑉ ᐊᓯᙳᖅᐸᓪᓕᐊᓂᖓᓄᑦ ᖃᓄᐃᓕᐅᕈᑎᒃᓴᑦ, ᐃᓚᖃᖅᖢᑎᒃ ᐅᑯᓂᖓ:'
-```
-
-## Citation
-```bibtex
-@inproceedings{chen2020facebook,
-  title={Facebook AI's WMT20 News Translation Task Submission},
-  author={Peng-Jen Chen and Ann Lee and Changhan Wang and Naman Goyal and Angela Fan and Mary Williamson and Jiatao Gu},
-  booktitle={Proc. of WMT},
-  year={2020},
-}
-```
diff --git a/kosmos-g/fairseq/examples/wmt21/README.md b/kosmos-g/fairseq/examples/wmt21/README.md
deleted file mode 100644
index 524fffb72..000000000
--- a/kosmos-g/fairseq/examples/wmt21/README.md
+++ /dev/null
@@ -1,25 +0,0 @@
-# WMT 21
-
-This page provides pointers to the models of Facebook AI's WMT'21 news translation task submission [(Tran et al., 2021)](https://arxiv.org/abs/2108.03265).
-
-## Single best dense models
-
-Model | Description | Download
----|---|---
-`wmt21.dense-24-wide.X-En` | X-En | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt21.dense-24-wide.X-En.tar.gz)
-`wmt21.dense-24-wide.En-X` | En-X | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt21.dense-24-wide.En-X.tar.gz)
-
-## Example usage
-
-See [eval.sh](eval.sh) for an end-to-end generation and BLEU-scoring example.
-
-## Citation
-```bibtex
-@inproceedings{tran2021facebook,
-  title={Facebook AI's WMT21 News Translation Task Submission},
-  author={Chau Tran and Shruti Bhosale and James Cross and Philipp Koehn and Sergey Edunov and Angela Fan},
-  booktitle={Proc. of WMT},
-  year={2021},
-}
-```
diff --git a/kosmos-g/fairseq/examples/wmt21/eval.sh b/kosmos-g/fairseq/examples/wmt21/eval.sh
deleted file mode 100644
index b36d934c5..000000000
--- a/kosmos-g/fairseq/examples/wmt21/eval.sh
+++ /dev/null
@@ -1,49 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-SRC=en
-TGT=is
-MODEL_NAME=wmt21.dense-24-wide.En-X
-
-PATH_TO_FAIRSEQ_PY=.
-TMP_DIR=generation_tmp
-mkdir -p $TMP_DIR
-
-REPLACE_UNICODE_PUNCT=$PATH_TO_FAIRSEQ_PY/examples/wmt21/scripts/replace-unicode-punctuation.perl
-NORM_PUNCT=$PATH_TO_FAIRSEQ_PY/examples/wmt21/scripts/normalize-punctuation.perl
-if [ ! -d "${TMP_DIR}/${MODEL_NAME}" ]; then
-    wget https://dl.fbaipublicfiles.com/fairseq/models/${MODEL_NAME}.tar.gz -P $TMP_DIR/
-    tar -xvf $TMP_DIR/${MODEL_NAME}.tar.gz -C $TMP_DIR
-fi
-MODEL_DIR=$TMP_DIR/${MODEL_NAME}
-if [ !
-d "${TMP_DIR}/wmt21-news-systems" ]; then - git clone https://github.com/wmt-conference/wmt21-news-systems $TMP_DIR/wmt21-news-systems -fi - -DOMAIN_TAG="wmtdata newsdomain" -INPUT_FILE=$TMP_DIR/wmt21-news-systems/txt/sources/newstest2021.${SRC}-${TGT}.src.${SRC} -REF_FILE=$TMP_DIR/wmt21-news-systems/txt/references/newstest2021.${SRC}-${TGT}.ref.A.${TGT} - -# Translate -cat ${INPUT_FILE} | sed "s/^/${DOMAIN_TAG} /" | $REPLACE_UNICODE_PUNCT | $NORM_PUNCT -l ${SRC} | python $PATH_TO_FAIRSEQ_PY/fairseq_cli/interactive.py $MODEL_DIR \ - --path ${MODEL_DIR}/checkpoint.pt \ - --task translation_multi_simple_epoch \ - --langs "en,ha,is,ja,cs,ru,zh,de" \ - --lang-pairs $SRC-$TGT \ - --bpe "sentencepiece" \ - --sentencepiece-model ${MODEL_DIR}/sentencepiece.model \ - --buffer-size 1024 \ - --batch-size 10 -s $SRC -t $TGT \ - --decoder-langtok \ - --encoder-langtok src \ - --beam 5 \ - --lenpen 1.0 \ - --fp16 > $TMP_DIR/${SRC}-${TGT}.gen_log - -cat $TMP_DIR/$SRC-$TGT.gen_log | grep -P "^D-" | cut -f3 > $TMP_DIR/$SRC-$TGT.hyp - -# Calculate BLEU score -sacrebleu -l $SRC-$TGT $REF_FILE < $TMP_DIR/$SRC-$TGT.hyp diff --git a/kosmos-g/fairseq/examples/wmt21/scripts/normalize-punctuation.perl b/kosmos-g/fairseq/examples/wmt21/scripts/normalize-punctuation.perl deleted file mode 100644 index a7c0750f5..000000000 --- a/kosmos-g/fairseq/examples/wmt21/scripts/normalize-punctuation.perl +++ /dev/null @@ -1,90 +0,0 @@ -#!/usr/bin/env perl -# -# This file is part of moses. Its use is licensed under the GNU Lesser General -# Public License version 2.1 or, at your option, any later version. - -use warnings; -use strict; - -my $language = "en"; -my $PENN = 0; - -while (@ARGV) { - $_ = shift; - /^-b$/ && ($| = 1, next); # not buffered (flush each line) - /^-l$/ && ($language = shift, next); - /^[^\-]/ && ($language = $_, next); - /^-penn$/ && ($PENN = 1, next); -} - -while(<STDIN>) { - s/\r//g; - # remove extra spaces - s/\(/ \(/g; - s/\)/\) /g; s/ +/ /g; - s/\) ([\.\!\:\?\;\,])/\)$1/g; - s/\( /\(/g; - s/ \)/\)/g; - s/(\d) \%/$1\%/g; - s/ :/:/g; - s/ ;/;/g; - # normalize unicode punctuation - if ($PENN == 0) { - s/\`/\'/g; - s/\'\'/ \" /g; - } - - s/„/\"/g; - s/“/\"/g; - s/”/\"/g; - s/–/-/g; - s/—/ - /g; s/ +/ /g; - s/´/\'/g; - s/([a-z])‘([a-z])/$1\'$2/gi; - s/([a-z])’([a-z])/$1\'$2/gi; - s/‘/\'/g; - s/‚/\'/g; - s/’/\"/g; - s/''/\"/g; - s/´´/\"/g; - s/…/.../g; - # French quotes - s/ « / \"/g; - s/« /\"/g; - s/«/\"/g; - s/ » /\" /g; - s/ »/\"/g; - s/»/\"/g; - # handle pseudo-spaces - s/ \%/\%/g; - s/nº /nº /g; - s/ :/:/g; - s/ ºC/ ºC/g; - s/ cm/ cm/g; - s/ \?/\?/g; - s/ \!/\!/g; - s/ ;/;/g; - s/, /, /g; s/ +/ /g; - - # English "quotation," followed by comma, style - if ($language eq "en") { - s/\"([,\.]+)/$1\"/g; - } - # Czech is confused - elsif ($language eq "cs" || $language eq "cz") { - } - # German/Spanish/French "quotation", followed by comma, style - else { - s/,\"/\",/g; - s/(\.+)\"(\s*[^<])/\"$1$2/g; # don't fix period at end of sentence - } - - - if ($language eq "de" || $language eq "es" || $language eq "cz" || $language eq "cs" || $language eq "fr") { - s/(\d) (\d)/$1,$2/g; - } - else { - s/(\d) (\d)/$1.$2/g; - } - print $_; -} diff --git a/kosmos-g/fairseq/examples/wmt21/scripts/replace-unicode-punctuation.perl b/kosmos-g/fairseq/examples/wmt21/scripts/replace-unicode-punctuation.perl deleted file mode 100644 index faed2cd9d..000000000 --- a/kosmos-g/fairseq/examples/wmt21/scripts/replace-unicode-punctuation.perl +++ /dev/null @@ -1,55 +0,0 @@ -#!/usr/bin/env perl -# -# This file is part of moses. 
Its use is licensed under the GNU Lesser General -# Public License version 2.1 or, at your option, any later version. - -use warnings; -use strict; - -while (@ARGV) { - $_ = shift; - /^-b$/ && ($| = 1, next); # not buffered (flush each line) -} - -#binmode(STDIN, ":utf8"); -#binmode(STDOUT, ":utf8"); - -while(<STDIN>) { - s/,/,/g; - s/。 */. /g; - s/、/,/g; - s/”/"/g; - s/“/"/g; - s/∶/:/g; - s/:/:/g; - s/?/\?/g; - s/《/"/g; - s/》/"/g; - s/)/\)/g; - s/!/\!/g; - s/(/\(/g; - s/;/;/g; - s/1/1/g; - s/」/"/g; - s/「/"/g; - s/0/0/g; - s/3/3/g; - s/2/2/g; - s/5/5/g; - s/6/6/g; - s/9/9/g; - s/7/7/g; - s/8/8/g; - s/4/4/g; - s/. */. /g; - s/~/\~/g; - s/’/\'/g; - s/…/\.\.\./g; - s/━/\-/g; - s/〈/\</g; - s/〉/\>/g; - s/【/\[/g; - s/】/\]/g; - s/%/\%/g; - print $_; -} diff --git a/kosmos-g/fairseq/examples/xglm/README.md b/kosmos-g/fairseq/examples/xglm/README.md deleted file mode 100644 index a4fc755d0..000000000 --- a/kosmos-g/fairseq/examples/xglm/README.md +++ /dev/null @@ -1,24 +0,0 @@ -# Few-shot Learning with Multilingual Language Models - -## Introduction - -In this work, we train multilingual generative language models, dubbed XGLM, on a balanced corpus covering a diverse set of languages, and study their few- and zero-shot learning capabilities in a wide range of tasks. Our largest model with 7.5 billion parameters sets new state of the art in few-shot learning on more than 20 representative languages, outperforming GPT-3 of comparable size in multilingual commonsense reasoning (+7.4 accuracy points for 0-shot, +9.4 for 4-shot) and natural language inference (+5.4 for 0-shot, +5.4 for 4-shot). We have included a [model card](model_card.md) of XGLM for transparency and accountability. - -## Data and Languages -XGLM models are trained on a new multilingual corpus extracted from CommonCrawl (CC100-XL), a significantly larger multilingual dataset covering 68 Common Crawl (CC) snapshots (from [Summer 2013](http://commoncrawl.org/2013/11/new-crawl-data-available/) to [March/April 2020](https://commoncrawl.org/2020/04/march-april-2020-crawl-archive-now-available/) consisting of 134 languages. The detailed languages and data statistics are reported in the paper (Table A.1). - -## Pre-trained models - -Model | Layers | Model Dim | Languages | Download ----|---|---|---|--- -`XGLM 564M` | 24 | 1024 | trained on 30 languages| [xglm.564M.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xglm/xglm.564M.tar.gz) -`XGLM 1.7B` | 24 | 2048 | trained on 30 languages| [xglm.1.7B.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xglm/xglm.1.7B.tar.gz) -`XGLM 2.9B` | 48 | 2048 | trained on 30 languages| [xglm.2.9B.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xglm/xglm.2.9B.tar.gz) -`XGLM 7.5B` | 32 | 4096 | trained on 30 languages| [xglm.7.5B.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xglm/xglm.7.5B.tar.gz) -`XGLM 4.5B` | 48 | 2048 | trained on 134 languages| [xglm.4.5B.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xglm/xglm.4.5B.tar.gz) - -## Evaluation -Coming soon. - -## Citation -Coming soon. diff --git a/kosmos-g/fairseq/examples/xglm/model_card.md b/kosmos-g/fairseq/examples/xglm/model_card.md deleted file mode 100644 index 75ddb1708..000000000 --- a/kosmos-g/fairseq/examples/xglm/model_card.md +++ /dev/null @@ -1,121 +0,0 @@ -# XGLM multilingual model -## Version 1.0.0 - -### Model developer -Meta AI - -### Model type -A multilingual autoregressive language model trained on a balanced corpus of a diverse set of languages. The largest trained model has 7.5 billion parameters. 
The language model can learn tasks from natural language descriptions and a few examples. - -### References -Coming soon. - -### Citation details -Coming soon. - -### Model Feedback Channel -https://github.com/pytorch/fairseq - -## Intended use -### Primary intended use -For research purposes only, e.g. reproducing model evaluation results. Generation is only used in a limited capacity for explanation/justification or for prompting/probing/priming for class labels. - -### Out of scope uses -The primary purpose of the model is not to generate language, although the model is capable of doing that. - -## Potential risks -This section lists the potential risks associated with using the model. - -### Relevant factors -Based on known problems with NLP technology, potential relevant factors include output correctness, robustness, bias (gender, profession, race and religion), etc. - -### Evaluation factors -The model was evaluated on hate speech detection and occupation identification. -* Hate speech detection (Huang et al. (2020)) - A safety task to test language models’ ability to identify hateful and offensive text. -* Occupation identification (De-Arteaga et al., 2019), (Zhao et al., 2020) - A bias task to study language models’ performance divergence between different gender groups on the task of occupation identification. - -## Metrics -### Model performance measures -The XGLM model was primarily evaluated on -1. Zero shot and few shot learning by looking at per-language performance on tasks spanning commonsense reasoning (XCOPA, XWinograd), natural language inference (XNLI) and paraphrasing (PAWS-X). The model is also evaluated on XStoryCloze, a new dataset created by Meta AI. -2. Cross lingual transfer through templates and few-shot examples. -3. Knowledge probing - Evaluate to what extent the XGLM model can effectively store factual knowledge in different languages using the mLAMA benchmark. -4. Translation - We report machine translation results on WMT benchmarks and a subset of FLORES-101 in the main paper. - -The model was also evaluated on hate speech datasets introduced by Huang et al. (2020) and an occupation identification dataset by De-Arteaga et al. 2019 to identify bias in the model. - -### Approaches to handle uncertainty -Report confidence intervals, variance metrics for the model performance metrics. Few-shot evaluation was conducted with different sampling with 5 seeds. We reported statistical significance. - -## Evaluation data -## Zero Shot and Few Shot evaluation - -### XNLI (Conneau et al., 2018) -#### Description -The Cross-lingual Natural Language Inference (XNLI) corpus is the extension of the Multi-Genre NLI (MultiNLI) corpus to 15 languages. The dataset was created by manually translating the validation and test sets of MultiNLI into each of those 15 languages. - -### XStoryCloze -#### Description -A new dataset created by Meta AI by translating the validation split of the English StoryCloze dataset (Mostafazadeh et al., 2016) (Spring 2016 version) to 10 other typologically diverse languages (ru, zh Simplified, es Latin America, ar, hi, id, te, sw, eu, my). - -### XCOPA (Ponti et al., 2020) -#### Description -The Cross-lingual Choice of Plausible Alternatives (XCOPA) dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across languages. The dataset is the translation and reannotation of the English COPA (Roemmele et al. 2011) and covers 11 languages from 11 families and several areas around the globe. 
- -### XWinograd (Tikhonov and Ryabinin, 2021) -#### Description -XWinograd is a multilingual collection of Winograd Schemas in six languages that can be used for evaluation of cross-lingual commonsense reasoning capabilities. - -### PAWS-X (Yang et al., 2019) -#### Description -PAWS-X contains 23,659 human translated PAWS evaluation pairs and 296,406 machine translated training pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean. All translated pairs are sourced from examples in PAWS-Wiki. - -## Responsible AI (RAI) evaluation -### Hate speech -Hate speech datasets introduced by Huang et al. (2020). This dataset is a multilingual Twitter corpus for the task of hate speech detection with inferred four author demographic factors: age, country, gender and race/ethnicity. The corpus covers five languages: English, Italian, Polish, Portuguese and Spanish. - -### Bias dataset -This dataset by De-Arteaga et al. 2019 is a bias detection dataset, where the aim is to study gender bias based on identifying a person’s occupation from their bios. - ----- - -## Training data -### CC100-XL -#### Description -Following the recent success of multilingual self-supervised pre-training (Devlin et al., 2019; Lample and Conneau, 2019; Con; Xue et al., 2020; Goyal et al., 2021a; Liu et al., 2020), we train our language models on a mixture of monolingual text of different languages. We extended the pipeline used for mining the CC100 corpus to generate CC100-XL, a significantly larger multilingual dataset covering 68 Common Crawl snapshots (from Summer 2013 to March/April 2020) and 134 languages. - -More details on the CC100-XL dataset can be found in the Appendix section of the paper. - -## RAI Dimensions -### Fairness (Bias and inclusion) -The XGLM model was evaluated on Hate speech and bias identification datasets. For hate speech, we observe that across the 5 languages in the dataset, in context learning results are only slightly better than random (50%). Another interesting observation is that most few shot results are worse than zero-shot, which indicates that the model is not able to utilize examples using the templates described in the paper. For bias identification, the XGLM (6.7B) English only model achieves the best performance on English and Spanish, while the GPT-3 model of comparable size (6.7B) model achieves the best in French. On certain occupations (e.g. model and teacher), XGLM 6.7B En only model and GPT-3 (6.7B) have very significant bias while XGLM 7.5B is much less biased. - -### Privacy and security -The XGLM model did not have any special Privacy and Security considerations. The training data and evaluation data were both public and went through standard Meta AI Privacy and licensing procedures. - -### Transparency and control -In the spirit of transparency and accountability we have created this model card and a data card for the CC100-XL which can be found in the Appendix section of the paper. - -### Efficiency (Green AI) -From an engineering perspective, XGLM pertains to a family of models that represent single unified models catering to many languages which have wide application across many applications. Such a unified single model saves on carbon footprint as well as energy consumption (comparing to the alternative: separate models for different languages) leading to more energy efficiency. 
A single model, despite having the risk of being a single point of failure, has the powerful incentive of being easier to maintain, access, distribute, and track. - -## References -Edoardo Maria Ponti, Goran Glavas, Olga Majewska, Qianchu Liu, Ivan Vulic, and Anna Korhonen. 2020. XCOPA: A multilingual dataset for causal commonsense reasoning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 2362–2376. Association for Computational Linguistics. -XCOPA Dataset | Papers With Code - -Alexey Tikhonov and Max Ryabinin. 2021. It’s all in the heads: Using attention heads as a baseline for cross-lingual transfer in commonsense reasoning. In Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 3534–3546. Association for Computational Linguistics. -XWINO Dataset | Papers With Code (XWinograd) - -Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A cross-lingual adversarial dataset for paraphrase identification. CoRR, abs/1908.11828. -PAWS-X Dataset | Papers With Code - -Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: evaluating cross-lingual sentence representations. CoRR, abs/1809.05053. -XNLI Dataset | Papers With Code - -Xiaolei Huang, Linzi Xing, Franck Dernoncourt, and Michael Paul. 2020. Multilingual twitter corpus and baselines for evaluating demographic bias in hate speech recognition. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 1440–1448. - -Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In proceedings of the Conference on Fairness, Accountability, and Transparency, pages 120–128. - -Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, James F. Allen. A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories. CoRR abs/1604.01696. - -Jieyu Zhao, Subhabrata Mukherjee, Saghar Hosseini, Kai-Wei Chang, and Ahmed Hassan Awadallah. 2020. Gender bias in multilingual embeddings and crosslingual transfer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2896–2907. diff --git a/kosmos-g/fairseq/examples/xlmr/README.md b/kosmos-g/fairseq/examples/xlmr/README.md deleted file mode 100644 index bba7910e3..000000000 --- a/kosmos-g/fairseq/examples/xlmr/README.md +++ /dev/null @@ -1,144 +0,0 @@ -# Unsupervised Cross-lingual Representation Learning at Scale (XLM-RoBERTa) -https://arxiv.org/pdf/1911.02116.pdf - -# Larger-Scale Transformers for Multilingual Masked Language Modeling -https://arxiv.org/pdf/2105.00572.pdf - - -## What's New: -- June 2021: `XLMR-XL` AND `XLMR-XXL` models released. - -## Introduction - -`XLM-R` (`XLM-RoBERTa`) is a generic cross lingual sentence encoder that obtains state-of-the-art results on many cross-lingual understanding (XLU) benchmarks. It is trained on `2.5T` of filtered CommonCrawl data in 100 languages (list below). 
-
-Language | Language | Language | Language | Language
----|---|---|---|---
-Afrikaans | Albanian | Amharic | Arabic | Armenian
-Assamese | Azerbaijani | Basque | Belarusian | Bengali
-Bengali Romanize | Bosnian | Breton | Bulgarian | Burmese
-Burmese zawgyi font | Catalan | Chinese (Simplified) | Chinese (Traditional) | Croatian
-Czech | Danish | Dutch | English | Esperanto
-Estonian | Filipino | Finnish | French | Galician
-Georgian | German | Greek | Gujarati | Hausa
-Hebrew | Hindi | Hindi Romanize | Hungarian | Icelandic
-Indonesian | Irish | Italian | Japanese | Javanese
-Kannada | Kazakh | Khmer | Korean | Kurdish (Kurmanji)
-Kyrgyz | Lao | Latin | Latvian | Lithuanian
-Macedonian | Malagasy | Malay | Malayalam | Marathi
-Mongolian | Nepali | Norwegian | Oriya | Oromo
-Pashto | Persian | Polish | Portuguese | Punjabi
-Romanian | Russian | Sanskrit | Scottish Gaelic | Serbian
-Sindhi | Sinhala | Slovak | Slovenian | Somali
-Spanish | Sundanese | Swahili | Swedish | Tamil
-Tamil Romanize | Telugu | Telugu Romanize | Thai | Turkish
-Ukrainian | Urdu | Urdu Romanize | Uyghur | Uzbek
-Vietnamese | Welsh | Western Frisian | Xhosa | Yiddish
-
-## Pre-trained models
-
-Model | Description | #params | vocab size | Download
----|---|---|---|---
-`xlmr.base` | XLM-R using the BERT-base architecture | 250M | 250k | [xlm.base.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xlmr.base.tar.gz)
-`xlmr.large` | XLM-R using the BERT-large architecture | 560M | 250k | [xlm.large.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xlmr.large.tar.gz)
-`xlmr.xl` | XLM-R (`layers=36, model_dim=2560`) | 3.5B | 250k | [xlm.xl.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xlmr/xlmr.xl.tar.gz)
-`xlmr.xxl` | XLM-R (`layers=48, model_dim=4096`) | 10.7B | 250k | [xlm.xxl.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/xlmr/xlmr.xxl.tar.gz)
-
-## Results
-
-**[XNLI (Conneau et al., 2018)](https://arxiv.org/abs/1809.05053)**
-
-Model | average | en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur
----|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
-`roberta.large.mnli` _(TRANSLATE-TEST)_ | 77.8 | 91.3 | 82.9 | 84.3 | 81.2 | 81.7 | 83.1 | 78.3 | 76.8 | 76.6 | 74.2 | 74.1 | 77.5 | 70.9 | 66.7 | 66.8
-`xlmr.large` _(TRANSLATE-TRAIN-ALL)_ | 83.6 | 89.1 | 85.1 | 86.6 | 85.7 | 85.3 | 85.9 | 83.5 | 83.2 | 83.1 | 83.7 | 81.5 | 83.7 | 81.6 | 78.0 | 78.1
-`xlmr.xl` _(TRANSLATE-TRAIN-ALL)_ | 85.4 | 91.1 | 87.2 | 88.1 | 87.0 | 87.4 | 87.8 | 85.3 | 85.2 | 85.3 | 86.2 | 83.8 | 85.3 | 83.1 | 79.8 | 78.2
-`xlmr.xxl` _(TRANSLATE-TRAIN-ALL)_ | 86.0 | 91.5 | 87.6 | 88.7 | 87.8 | 87.4 | 88.2 | 85.6 | 85.1 | 85.8 | 86.3 | 83.9 | 85.6 | 84.6 | 81.7 | 80.6
-
-**[MLQA (Lewis et al., 2018)](https://arxiv.org/abs/1910.07475)**
-
-Model | average | en | es | de | ar | hi | vi | zh
----|---|---|---|---|---|---|---|---
-`BERT-large` | - | 80.2/67.4 | - | - | - | - | - | -
-`mBERT` | 57.7 / 41.6 | 77.7 / 65.2 | 64.3 / 46.6 | 57.9 / 44.3 | 45.7 / 29.8 | 43.8 / 29.7 | 57.1 / 38.6 | 57.5 / 37.3
-`xlmr.large` | 70.7 / 52.7 | 80.6 / 67.8 | 74.1 / 56.0 | 68.5 / 53.6 | 63.1 / 43.5 | 69.2 / 51.6 | 71.3 / 50.9 | 68.0 / 45.4
-`xlmr.xl` | 73.4 / 55.3 | 85.1 / 72.6 | 66.7 / 46.2 | 70.5 / 55.5 | 74.3 / 56.9 | 72.2 / 54.7 | 74.4 / 52.9 | 70.9 / 48.5
-`xlmr.xxl` | 74.8 / 56.6 | 85.5 / 72.4 | 68.6 / 48.4 | 72.7 / 57.8 | 75.4 / 57.6 | 73.7 / 55.8 | 76.0 / 55.0 | 71.7 / 48.9
-
-
-## Example usage
-
-##### Load XLM-R from torch.hub (PyTorch >= 1.1):
-```python
-import torch
-xlmr = torch.hub.load('pytorch/fairseq:main', 'xlmr.large')
-xlmr.eval()  # disable dropout (or leave in train mode to finetune)
-```
-
-##### Load XLM-R (for PyTorch 1.0 or custom models):
-```bash
-# Download xlmr.large model
-wget https://dl.fbaipublicfiles.com/fairseq/models/xlmr.large.tar.gz
-tar -xzvf xlmr.large.tar.gz
-```
-
-```python
-# Load the model in fairseq
-from fairseq.models.roberta import XLMRModel
-xlmr = XLMRModel.from_pretrained('/path/to/xlmr.large', checkpoint_file='model.pt')
-xlmr.eval()  # disable dropout (or leave in train mode to finetune)
-```
-
-##### Apply sentence-piece-model (SPM) encoding to input text:
-```python
-en_tokens = xlmr.encode('Hello world!')
-assert en_tokens.tolist() == [0, 35378, 8999, 38, 2]
-xlmr.decode(en_tokens)  # 'Hello world!'
-
-zh_tokens = xlmr.encode('你好,世界')
-assert zh_tokens.tolist() == [0, 6, 124084, 4, 3221, 2]
-xlmr.decode(zh_tokens)  # '你好,世界'
-
-hi_tokens = xlmr.encode('नमस्ते दुनिया')
-assert hi_tokens.tolist() == [0, 68700, 97883, 29405, 2]
-xlmr.decode(hi_tokens)  # 'नमस्ते दुनिया'
-
-ar_tokens = xlmr.encode('مرحبا بالعالم')
-assert ar_tokens.tolist() == [0, 665, 193478, 258, 1705, 77796, 2]
-xlmr.decode(ar_tokens)  # 'مرحبا بالعالم'
-
-fr_tokens = xlmr.encode('Bonjour le monde')
-assert fr_tokens.tolist() == [0, 84602, 95, 11146, 2]
-xlmr.decode(fr_tokens)  # 'Bonjour le monde'
-```
-
-##### Extract features from XLM-R:
-```python
-# Extract the last layer's features
-last_layer_features = xlmr.extract_features(zh_tokens)
-assert last_layer_features.size() == torch.Size([1, 6, 1024])
-
-# Extract all layers' features (layer 0 is the embedding layer)
-all_layers = xlmr.extract_features(zh_tokens, return_all_hiddens=True)
-assert len(all_layers) == 25
-assert torch.all(all_layers[-1] == last_layer_features)
-```
-
-## Citation
-
-```bibtex
-@article{conneau2019unsupervised,
-  title={Unsupervised Cross-lingual Representation Learning at Scale},
-  author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
-  journal={arXiv preprint arXiv:1911.02116},
-  year={2019}
-}
-```
-
-```bibtex
-@article{goyal2021larger,
-  title={Larger-Scale Transformers for Multilingual Masked Language Modeling},
-  author={Goyal, Naman and Du, Jingfei and Ott, Myle and Anantharaman, Giri and Conneau, Alexis},
-  journal={arXiv preprint arXiv:2105.00572},
-  year={2021}
-}
-```
diff --git a/kosmos-g/fairseq/fairseq/__init__.py b/kosmos-g/fairseq/fairseq/__init__.py
deleted file mode 100644
index 080c988b2..000000000
--- a/kosmos-g/fairseq/fairseq/__init__.py
+++ /dev/null
@@ -1,45 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
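-#
-# Top-level fairseq package: resolves __version__, installs backwards-compatible
-# module aliases, initializes the hydra config system, and imports submodules so
-# that models, tasks, and criterions register themselves.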
-"""isort:skip_file""" - -import os -import sys - -try: - from .version import __version__ # noqa -except ImportError: - version_txt = os.path.join(os.path.dirname(__file__), "version.txt") - with open(version_txt) as f: - __version__ = f.read().strip() - -__all__ = ["pdb"] - -# backwards compatibility to support `from fairseq.X import Y` -from fairseq.distributed import utils as distributed_utils -from fairseq.logging import meters, metrics, progress_bar # noqa - -sys.modules["fairseq.distributed_utils"] = distributed_utils -sys.modules["fairseq.meters"] = meters -sys.modules["fairseq.metrics"] = metrics -sys.modules["fairseq.progress_bar"] = progress_bar - -# initialize hydra -from fairseq.dataclass.initialize import hydra_init - -hydra_init() - -import fairseq.criterions # noqa -import fairseq.distributed # noqa -import fairseq.models # noqa -import fairseq.modules # noqa -import fairseq.optim # noqa -import fairseq.optim.lr_scheduler # noqa -import fairseq.pdb # noqa -import fairseq.scoring # noqa -import fairseq.tasks # noqa -import fairseq.token_generation_constraints # noqa - -import fairseq.benchmark # noqa -import fairseq.model_parallel # noqa diff --git a/kosmos-g/fairseq/fairseq/benchmark/__init__.py b/kosmos-g/fairseq/fairseq/benchmark/__init__.py deleted file mode 100644 index 0317d5c62..000000000 --- a/kosmos-g/fairseq/fairseq/benchmark/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -# import models/tasks to register them -from . import dummy_dataset, dummy_lm, dummy_masked_lm, dummy_model, dummy_mt # noqa diff --git a/kosmos-g/fairseq/fairseq/benchmark/dummy_dataset.py b/kosmos-g/fairseq/fairseq/benchmark/dummy_dataset.py deleted file mode 100644 index 2f051754a..000000000 --- a/kosmos-g/fairseq/fairseq/benchmark/dummy_dataset.py +++ /dev/null @@ -1,36 +0,0 @@ -import numpy as np -from fairseq.data import FairseqDataset - - -class DummyDataset(FairseqDataset): - def __init__(self, batch, num_items, item_size): - super().__init__() - self.batch = batch - self.num_items = num_items - self.item_size = item_size - - def __getitem__(self, index): - return index - - def __len__(self): - return self.num_items - - def collater(self, samples): - return self.batch - - @property - def sizes(self): - return np.array([self.item_size] * self.num_items) - - def num_tokens(self, index): - return self.item_size - - def size(self, index): - return self.item_size - - def ordered_indices(self): - return np.arange(self.num_items) - - @property - def supports_prefetch(self): - return False diff --git a/kosmos-g/fairseq/fairseq/benchmark/dummy_lm.py b/kosmos-g/fairseq/fairseq/benchmark/dummy_lm.py deleted file mode 100644 index c6246a0c0..000000000 --- a/kosmos-g/fairseq/fairseq/benchmark/dummy_lm.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
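-#
-# Benchmark causal-LM task: serves one fixed synthetic next-token-prediction
-# batch over and over, so training throughput can be measured without real data.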
- -import logging -from dataclasses import dataclass, field -from typing import Optional - -import torch -from .dummy_dataset import DummyDataset -from fairseq.data import Dictionary -from fairseq.dataclass import FairseqDataclass -from fairseq.tasks import FairseqTask, register_task -from omegaconf import II - - -logger = logging.getLogger(__name__) - - -@dataclass -class DummyLMConfig(FairseqDataclass): - dict_size: int = 49996 - dataset_size: int = 100000 - tokens_per_sample: int = field( - default=512, metadata={"help": "max sequence length"} - ) - add_bos_token: bool = False - batch_size: Optional[int] = II("dataset.batch_size") - max_tokens: Optional[int] = II("dataset.max_tokens") - max_target_positions: int = II("task.tokens_per_sample") - - -@register_task("dummy_lm", dataclass=DummyLMConfig) -class DummyLMTask(FairseqTask): - def __init__(self, cfg: DummyLMConfig): - super().__init__(cfg) - - # load dictionary - self.dictionary = Dictionary() - for i in range(cfg.dict_size): - self.dictionary.add_symbol("word{}".format(i)) - self.dictionary.pad_to_multiple_(8) # often faster if divisible by 8 - logger.info("dictionary: {} types".format(len(self.dictionary))) - - seq = torch.arange(cfg.tokens_per_sample + 1) + self.dictionary.pad() + 1 - - self.dummy_src = seq[:-1] - self.dummy_tgt = seq[1:] - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. - Args: - split (str): name of the split (e.g., train, valid, test) - """ - if self.cfg.batch_size is not None: - bsz = self.cfg.batch_size - else: - bsz = max(1, self.cfg.max_tokens // self.cfg.tokens_per_sample) - self.datasets[split] = DummyDataset( - { - "id": 1, - "net_input": { - "src_tokens": torch.stack([self.dummy_src for _ in range(bsz)]), - "src_lengths": torch.full( - (bsz,), self.cfg.tokens_per_sample, dtype=torch.long - ), - }, - "target": torch.stack([self.dummy_tgt for _ in range(bsz)]), - "nsentences": bsz, - "ntokens": bsz * self.cfg.tokens_per_sample, - }, - num_items=self.cfg.dataset_size, - item_size=self.cfg.tokens_per_sample, - ) - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary diff --git a/kosmos-g/fairseq/fairseq/benchmark/dummy_masked_lm.py b/kosmos-g/fairseq/fairseq/benchmark/dummy_masked_lm.py deleted file mode 100644 index 12b9c5d0f..000000000 --- a/kosmos-g/fairseq/fairseq/benchmark/dummy_masked_lm.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
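-#
-# Benchmark masked-LM task: a fixed synthetic batch where roughly 15% of the
-# positions are masked (every 7th token), served repeatedly to measure
-# training throughput.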
- -import logging -from dataclasses import dataclass, field -from typing import Optional - -import torch -from omegaconf import II - -from .dummy_dataset import DummyDataset -from fairseq.data import Dictionary -from fairseq.dataclass import FairseqDataclass -from fairseq.tasks import FairseqTask, register_task - -logger = logging.getLogger(__name__) - - -@dataclass -class DummyMaskedLMConfig(FairseqDataclass): - dict_size: int = 49996 - dataset_size: int = 100000 - tokens_per_sample: int = field( - default=512, - metadata={ - "help": "max number of total tokens over all" - " segments per sample for BERT dataset" - }, - ) - batch_size: Optional[int] = II("dataset.batch_size") - max_tokens: Optional[int] = II("dataset.max_tokens") - max_target_positions: int = II("task.tokens_per_sample") - - -@register_task("dummy_masked_lm", dataclass=DummyMaskedLMConfig) -class DummyMaskedLMTask(FairseqTask): - def __init__(self, cfg: DummyMaskedLMConfig): - super().__init__(cfg) - - self.dictionary = Dictionary() - for i in range(cfg.dict_size): - self.dictionary.add_symbol("word{}".format(i)) - logger.info("dictionary: {} types".format(len(self.dictionary))) - # add mask token - self.mask_idx = self.dictionary.add_symbol("<mask>") - self.dictionary.pad_to_multiple_(8) # often faster if divisible by 8 - - mask_idx = 0 - pad_idx = 1 - seq = torch.arange(cfg.tokens_per_sample) + pad_idx + 1 - mask = torch.arange(2, cfg.tokens_per_sample, 7) # ~15% - src = seq.clone() - src[mask] = mask_idx - tgt = torch.full_like(seq, pad_idx) - tgt[mask] = seq[mask] - - self.dummy_src = src - self.dummy_tgt = tgt - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. - Args: - split (str): name of the split (e.g., train, valid, test) - """ - if self.cfg.batch_size is not None: - bsz = self.cfg.batch_size - else: - bsz = max(1, self.cfg.max_tokens // self.cfg.tokens_per_sample) - self.datasets[split] = DummyDataset( - { - "id": 1, - "net_input": { - "src_tokens": torch.stack([self.dummy_src for _ in range(bsz)]), - "src_lengths": torch.full( - (bsz,), self.cfg.tokens_per_sample, dtype=torch.long - ), - }, - "target": torch.stack([self.dummy_tgt for _ in range(bsz)]), - "nsentences": bsz, - "ntokens": bsz * self.cfg.tokens_per_sample, - }, - num_items=self.cfg.dataset_size, - item_size=self.cfg.tokens_per_sample, - ) - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary diff --git a/kosmos-g/fairseq/fairseq/benchmark/dummy_model.py b/kosmos-g/fairseq/fairseq/benchmark/dummy_model.py deleted file mode 100644 index ff26e4fe6..000000000 --- a/kosmos-g/fairseq/fairseq/benchmark/dummy_model.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
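-#
-# Toy language model for benchmarking: each "transformer layer" is replaced by
-# plain linear projections whose parameter shapes mimic the attention and FFN
-# blocks, but no actual attention is computed.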
- -import torch.nn as nn -import torch.nn.functional as F -from fairseq.data import Dictionary -from fairseq.models import ( - FairseqDecoder, - FairseqLanguageModel, - register_model, - register_model_architecture, -) - - -@register_model("dummy_model") -class DummyModel(FairseqLanguageModel): - def __init__(self, args, encoder): - super().__init__(encoder) - self.args = args - - @staticmethod - def add_args(parser): - parser.add_argument("--num-layers", type=int, default=24) - parser.add_argument("--embed-dim", type=int, default=1024) - - @classmethod - def build_model(cls, args, task): - encoder = DummyEncoder( - num_embed=len(task.target_dictionary), - embed_dim=args.embed_dim, - num_layers=args.num_layers, - ) - return cls(args, encoder) - - def forward(self, src_tokens, masked_tokens=None, **kwargs): - return self.decoder(src_tokens, masked_tokens=masked_tokens) - - -class DummyEncoder(FairseqDecoder): - def __init__(self, num_embed=50000, embed_dim=1024, num_layers=24): - super().__init__(Dictionary()) - self.embed = nn.Embedding( - num_embeddings=num_embed, embedding_dim=embed_dim, padding_idx=0 - ) - self.layers_a = nn.ModuleList( - [ - nn.Sequential( - nn.LayerNorm(embed_dim), - nn.Linear(embed_dim, 3 * embed_dim), # q, k, v input projection - nn.Linear(3 * embed_dim, embed_dim), # skip self-attention - nn.Linear(embed_dim, embed_dim), # output projection - nn.Dropout(), - ) - for i in range(num_layers) - ] - ) - self.layers_b = nn.ModuleList( - [ - nn.Sequential( - nn.LayerNorm(embed_dim), - nn.Linear(embed_dim, 4 * embed_dim), # FFN - nn.ReLU(), - nn.Linear(4 * embed_dim, embed_dim), # FFN - nn.Dropout(0.1), - ) - for i in range(num_layers) - ] - ) - self.out_proj = nn.Linear(embed_dim, num_embed) - - def forward(self, tokens, masked_tokens=None): - x = self.embed(tokens) - for layer_a, layer_b in zip(self.layers_a, self.layers_b): - x = x + layer_a(x) - x = x + layer_b(x) - x = self.out_proj(x) - if masked_tokens is not None: - x = x[masked_tokens] - return (x,) - - def max_positions(self): - return 1024 - - def get_normalized_probs(self, net_output, log_probs, sample=None): - logits = net_output[0].float() - if log_probs: - return F.log_softmax(logits, dim=-1) - else: - return F.softmax(logits, dim=-1) - - -@register_model_architecture("dummy_model", "dummy_model") -def base_architecture(args): - pass diff --git a/kosmos-g/fairseq/fairseq/benchmark/dummy_mt.py b/kosmos-g/fairseq/fairseq/benchmark/dummy_mt.py deleted file mode 100644 index 28d78cffd..000000000 --- a/kosmos-g/fairseq/fairseq/benchmark/dummy_mt.py +++ /dev/null @@ -1,119 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
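-#
-# Benchmark translation task: serves a fixed synthetic source/target batch that
-# mirrors the real translation task's sample format (src_tokens, src_lengths,
-# prev_output_tokens, target).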
- -import logging - -import numpy as np -import torch - -from fairseq.data import Dictionary, FairseqDataset -from fairseq.tasks import LegacyFairseqTask, register_task - -logger = logging.getLogger(__name__) - - -@register_task("dummy_mt") -class DummyMTTask(LegacyFairseqTask): - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument("--dict-size", default=49996, type=int) - parser.add_argument("--dataset-size", default=100000, type=int) - parser.add_argument("--src-len", default=30, type=int) - parser.add_argument("--tgt-len", default=30, type=int) - - def __init__(self, args, dictionary): - super().__init__(args) - self.dictionary = dictionary - self.seed = args.seed - - dictionary.pad_to_multiple_(8) # often faster if divisible by 8 - - self.dummy_src = torch.arange(args.src_len + 1) + dictionary.pad() + 1 - self.dummy_tgt = torch.arange(args.tgt_len + 1) + dictionary.pad() + 1 - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task.""" - dictionary = Dictionary() - for i in range(args.dict_size): - dictionary.add_symbol("word{}".format(i)) - logger.info("dictionary: {} types".format(len(dictionary))) - - args.max_source_positions = args.src_len + dictionary.pad() + 2 - args.max_target_positions = args.tgt_len + dictionary.pad() + 2 - - return cls(args, dictionary) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. - Args: - split (str): name of the split (e.g., train, valid, test) - """ - item_size = max(self.args.src_len, self.args.tgt_len) - if self.args.batch_size is not None: - bsz = self.args.batch_size - else: - bsz = max(1, self.args.max_tokens // item_size) - tgt = torch.stack([self.dummy_tgt for _ in range(bsz)]) - self.datasets[split] = DummyDataset( - { - "id": 1, - "net_input": { - "src_tokens": torch.stack([self.dummy_src for _ in range(bsz)]), - "src_lengths": torch.full( - (bsz,), self.args.src_len, dtype=torch.long - ), - "prev_output_tokens": tgt.clone(), - }, - "target": tgt, - "nsentences": bsz, - "ntokens": bsz * self.args.tgt_len, - }, - num_items=self.args.dataset_size, - item_size=item_size, - ) - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary - - -class DummyDataset(FairseqDataset): - def __init__(self, batch, num_items, item_size): - super().__init__() - self.batch = batch - self.num_items = num_items - self.item_size = item_size - - def __getitem__(self, index): - return index - - def __len__(self): - return self.num_items - - def collater(self, samples): - return self.batch - - @property - def sizes(self): - return np.array([self.item_size] * self.num_items) - - def num_tokens(self, index): - return self.item_size - - def size(self, index): - return self.item_size - - def ordered_indices(self): - return np.arange(self.num_items) - - @property - def supports_prefetch(self): - return False diff --git a/kosmos-g/fairseq/fairseq/binarizer.py b/kosmos-g/fairseq/fairseq/binarizer.py deleted file mode 100644 index 6f03d7a2c..000000000 --- a/kosmos-g/fairseq/fairseq/binarizer.py +++ /dev/null @@ -1,381 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
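-#
-# Utilities for turning text files into binarized fairseq datasets, with
-# optional multiprocessing over byte-offset chunks of the input file.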
-
-import logging
-import os
-import typing as tp
-from abc import ABC, abstractmethod
-from collections import Counter
-from dataclasses import dataclass
-from multiprocessing import Pool
-
-import torch
-
-from fairseq.data import Dictionary, indexed_dataset
-from fairseq.file_chunker_utils import Chunker, find_offsets
-from fairseq.file_io import PathManager
-from fairseq.tokenizer import tokenize_line
-
-logger = logging.getLogger("binarizer")
-
-
-@dataclass
-class BinarizeSummary:
-    """
-    Keep track of what's going on in the binarizer
-    """
-
-    num_seq: int = 0
-    replaced: tp.Optional[Counter] = None
-    num_tok: int = 0
-
-    @property
-    def num_replaced(self) -> int:
-        if self.replaced is None:
-            return 0
-        return sum(self.replaced.values())
-
-    @property
-    def replaced_percent(self) -> float:
-        return 100 * self.num_replaced / self.num_tok
-
-    def __str__(self) -> str:
-        base = f"{self.num_seq} sents, {self.num_tok} tokens"
-        if self.replaced is None:
-            return base
-
-        return f"{base}, {self.replaced_percent:.3}% replaced"
-
-    def merge(self, other: "BinarizeSummary"):
-        replaced = None
-        if self.replaced is not None:
-            replaced = self.replaced
-        if other.replaced is not None:
-            if replaced is None:
-                replaced = other.replaced
-            else:
-                replaced += other.replaced
-        self.replaced = replaced
-        self.num_seq += other.num_seq
-        self.num_tok += other.num_tok
-
-
-class Binarizer(ABC):
-    """
-    a binarizer describes how to take a string and build a tensor out of it
-    """
-
-    @abstractmethod
-    def binarize_line(
-        self,
-        line: str,
-        summary: BinarizeSummary,
-    ) -> torch.IntTensor:
-        ...
-
-
-def _worker_prefix(output_prefix: str, worker_id: int):
-    return f"{output_prefix}.pt{worker_id}"
-
-
-class FileBinarizer:
-    """
-    A file binarizer takes a file, tokenizes it, and binarizes each line into a tensor
-    """
-
-    @classmethod
-    def multiprocess_dataset(
-        cls,
-        input_file: str,
-        dataset_impl: str,
-        binarizer: Binarizer,
-        output_prefix: str,
-        vocab_size=None,
-        num_workers=1,
-    ) -> BinarizeSummary:
-        final_summary = BinarizeSummary()
-
-        offsets = find_offsets(input_file, num_workers)
-        # find_offsets returns a list of positions [pos1, pos2, pos3, pos4] but we want pairs:
-        # [(pos1, pos2), (pos2, pos3), (pos3, pos4)] to process the chunks with start/end info
-        # we zip the list with itself shifted by one to get all the pairs.
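The zip idiom described in the comment above pairs each offset with its successor; with hypothetical byte offsets it looks like this:

```python
offsets = [0, 1024, 2048, 4096]  # made-up offsets, as find_offsets might return
pairs = list(zip(offsets, offsets[1:]))  # [(0, 1024), (1024, 2048), (2048, 4096)]
first_chunk, *more_chunks = pairs  # chunk 0 stays on this process, the rest go to workers
```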
-        (first_chunk, *more_chunks) = zip(offsets, offsets[1:])
-        pool = None
-        if num_workers > 1:
-            pool = Pool(processes=num_workers - 1)
-            worker_results = [
-                pool.apply_async(
-                    cls._binarize_chunk_and_finalize,
-                    args=(
-                        binarizer,
-                        input_file,
-                        start_offset,
-                        end_offset,
-                        _worker_prefix(
-                            output_prefix,
-                            worker_id,
-                        ),
-                        dataset_impl,
-                    ),
-                    kwds={
-                        "vocab_size": vocab_size,
-                    }
-                    if vocab_size is not None
-                    else {},
-                )
-                for worker_id, (start_offset, end_offset) in enumerate(
-                    more_chunks, start=1
-                )
-            ]
-
-            pool.close()
-            pool.join()
-            for r in worker_results:
-                summ = r.get()
-                final_summary.merge(summ)
-
-        # do not close the bin file as we need to merge the worker results in
-        final_ds, summ = cls._binarize_file_chunk(
-            binarizer,
-            input_file,
-            offset_start=first_chunk[0],
-            offset_end=first_chunk[1],
-            output_prefix=output_prefix,
-            dataset_impl=dataset_impl,
-            vocab_size=vocab_size if vocab_size is not None else None,
-        )
-        final_summary.merge(summ)
-
-        if num_workers > 1:
-            for worker_id in range(1, num_workers):
-                # merge the worker outputs
-                worker_output_prefix = _worker_prefix(
-                    output_prefix,
-                    worker_id,
-                )
-                final_ds.merge_file_(worker_output_prefix)
-                try:
-                    os.remove(indexed_dataset.data_file_path(worker_output_prefix))
-                    os.remove(indexed_dataset.index_file_path(worker_output_prefix))
-                except Exception as e:
-                    logger.error(
-                        f"couldn't remove {worker_output_prefix}.*", exc_info=e
-                    )
-
-        # now we can close the file
-        idx_file = indexed_dataset.index_file_path(output_prefix)
-        final_ds.finalize(idx_file)
-        return final_summary
-
-    @staticmethod
-    def _binarize_file_chunk(
-        binarizer: Binarizer,
-        filename: str,
-        offset_start: int,
-        offset_end: int,
-        output_prefix: str,
-        dataset_impl: str,
-        vocab_size=None,
-    ) -> tp.Tuple[tp.Any, BinarizeSummary]:  # (dataset builder, BinarizeSummary)
-        """
-        creates a dataset builder and appends binarized items to it. This function does not
-        finalize the builder; this is useful if you want to do other things with your bin file
-        like appending/merging other files
-        """
-        bin_file = indexed_dataset.data_file_path(output_prefix)
-        ds = indexed_dataset.make_builder(
-            bin_file,
-            impl=dataset_impl,
-            vocab_size=vocab_size,
-        )
-        summary = BinarizeSummary()
-
-        with Chunker(
-            PathManager.get_local_path(filename), offset_start, offset_end
-        ) as line_iterator:
-            for line in line_iterator:
-                ds.add_item(binarizer.binarize_line(line, summary))
-
-        return ds, summary
-
-    @classmethod
-    def _binarize_chunk_and_finalize(
-        cls,
-        binarizer: Binarizer,
-        filename: str,
-        offset_start: int,
-        offset_end: int,
-        output_prefix: str,
-        dataset_impl: str,
-        vocab_size=None,
-    ):
-        """
-        same as above, but also finalizes the builder
-        """
-        ds, summ = cls._binarize_file_chunk(
-            binarizer,
-            filename,
-            offset_start,
-            offset_end,
-            output_prefix,
-            dataset_impl,
-            vocab_size=vocab_size,
-        )
-
-        idx_file = indexed_dataset.index_file_path(output_prefix)
-        ds.finalize(idx_file)
-
-        return summ
-
-
-class VocabularyDatasetBinarizer(Binarizer):
-    """
-    Takes a Dictionary/Vocabulary, assigns ids to each
-    token using the dictionary encode_line function.
- """ - - def __init__( - self, - dict: Dictionary, - tokenize: tp.Callable[[str], tp.List[str]] = tokenize_line, - append_eos: bool = True, - reverse_order: bool = False, - already_numberized: bool = False, - ) -> None: - self.dict = dict - self.tokenize = tokenize - self.append_eos = append_eos - self.reverse_order = reverse_order - self.already_numberized = already_numberized - super().__init__() - - def binarize_line( - self, - line: str, - summary: BinarizeSummary, - ): - if summary.replaced is None: - summary.replaced = Counter() - - def replaced_consumer(word, idx): - if idx == self.dict.unk_index and word != self.dict.unk_word: - summary.replaced.update([word]) - - if self.already_numberized: - id_strings = line.strip().split() - id_list = [int(id_string) for id_string in id_strings] - if self.reverse_order: - id_list.reverse() - if self.append_eos: - id_list.append(self.dict.eos()) - ids = torch.IntTensor(id_list) - else: - ids = self.dict.encode_line( - line=line, - line_tokenizer=self.tokenize, - add_if_not_exist=False, - consumer=replaced_consumer, - append_eos=self.append_eos, - reverse_order=self.reverse_order, - ) - - summary.num_seq += 1 - summary.num_tok += len(ids) - return ids - - -class AlignmentDatasetBinarizer(Binarizer): - """ - binarize by parsing a set of alignments and packing - them in a tensor (see utils.parse_alignment) - """ - - def __init__( - self, - alignment_parser: tp.Callable[[str], torch.IntTensor], - ) -> None: - super().__init__() - self.alignment_parser = alignment_parser - - def binarize_line( - self, - line: str, - summary: BinarizeSummary, - ): - ids = self.alignment_parser(line) - summary.num_seq += 1 - summary.num_tok += len(ids) - return ids - - -class LegacyBinarizer: - @classmethod - def binarize( - cls, - filename: str, - dico: Dictionary, - consumer: tp.Callable[[torch.IntTensor], None], - tokenize: tp.Callable[[str], tp.List[str]] = tokenize_line, - append_eos: bool = True, - reverse_order: bool = False, - offset: int = 0, - end: int = -1, - already_numberized: bool = False, - ) -> tp.Dict[str, int]: - binarizer = VocabularyDatasetBinarizer( - dict=dico, - tokenize=tokenize, - append_eos=append_eos, - reverse_order=reverse_order, - already_numberized=already_numberized, - ) - return cls._consume_file( - filename, - binarizer, - consumer, - offset_start=offset, - offset_end=end, - ) - - @classmethod - def binarize_alignments( - cls, - filename: str, - alignment_parser: tp.Callable[[str], torch.IntTensor], - consumer: tp.Callable[[torch.IntTensor], None], - offset: int = 0, - end: int = -1, - ) -> tp.Dict[str, int]: - binarizer = AlignmentDatasetBinarizer(alignment_parser) - return cls._consume_file( - filename, - binarizer, - consumer, - offset_start=offset, - offset_end=end, - ) - - @staticmethod - def _consume_file( - filename: str, - binarizer: Binarizer, - consumer: tp.Callable[[torch.IntTensor], None], - offset_start: int, - offset_end: int, - ) -> tp.Dict[str, int]: - summary = BinarizeSummary() - - with Chunker( - PathManager.get_local_path(filename), offset_start, offset_end - ) as line_iterator: - for line in line_iterator: - consumer(binarizer.binarize_line(line, summary)) - - return { - "nseq": summary.num_seq, - "nunk": summary.num_replaced, - "ntok": summary.num_tok, - "replaced": summary.replaced, - } diff --git a/kosmos-g/fairseq/fairseq/checkpoint_utils.py b/kosmos-g/fairseq/fairseq/checkpoint_utils.py deleted file mode 100644 index 6927c4118..000000000 --- a/kosmos-g/fairseq/fairseq/checkpoint_utils.py +++ /dev/null @@ 
-1,912 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import ast -import collections -import contextlib -import logging -import numpy as np -import os -import re -import time -import traceback -from collections import OrderedDict -from typing import Any, Dict, Optional, Union - -import torch -from fairseq.data import data_utils -from fairseq.dataclass.configs import CheckpointConfig -from fairseq.dataclass.utils import ( - convert_namespace_to_omegaconf, - overwrite_args_by_name, -) -from fairseq.distributed.fully_sharded_data_parallel import FSDP, has_FSDP -from fairseq.distributed import utils as distributed_utils -from fairseq.file_io import PathManager -from fairseq.models import FairseqDecoder, FairseqEncoder -from omegaconf import DictConfig, open_dict, OmegaConf -from fairseq.ds_trainer import DeepSpeedTrainer - -logger = logging.getLogger(__name__) - - -def save_checkpoint(cfg: CheckpointConfig, trainer, epoch_itr, val_loss): - from fairseq import meters - - # only one worker should attempt to create the required dir - if trainer.data_parallel_rank == 0: - os.makedirs(cfg.save_dir, exist_ok=True) - - prev_best = getattr(save_checkpoint, "best", val_loss) - if val_loss is not None: - best_function = max if cfg.maximize_best_checkpoint_metric else min - save_checkpoint.best = best_function(val_loss, prev_best) - - if cfg.no_save: - return - - trainer.consolidate_optimizer() # TODO(SS): do we need this if no_save_optimizer_state - - extra_state = {"train_iterator": epoch_itr.state_dict(), "val_loss": val_loss} - if hasattr(save_checkpoint, "best"): - extra_state.update({"best": save_checkpoint.best}) - - if getattr(epoch_itr, "sharded_checkpoint", False): - local_state_dict = extra_state["train_iterator"] - all_state_dicts = distributed_utils.all_gather_list( - local_state_dict, - max_size=getattr(trainer.cfg.common, "all_gather_list_size", 16384), - group=trainer.data_parallel_process_group, - ) - extra_state["train_iterator"] = all_state_dicts - - if not trainer.should_save_checkpoint_on_current_rank: - if trainer.always_call_state_dict_during_save_checkpoint: - trainer.state_dict() - return - - write_timer = meters.StopwatchMeter() - write_timer.start() - - epoch = epoch_itr.epoch - end_of_epoch = epoch_itr.end_of_epoch() - updates = trainer.get_num_updates() - - logger.info(f"Preparing to save checkpoint for epoch {epoch} @ {updates} updates") - - def is_better(a, b): - return a >= b if cfg.maximize_best_checkpoint_metric else a <= b - - suffix = trainer.checkpoint_suffix - checkpoint_conds = collections.OrderedDict() - if isinstance(trainer, DeepSpeedTrainer): - checkpoint_conds["checkpoints"] = ( - not end_of_epoch - and cfg.save_interval_updates > 0 - and updates % cfg.save_interval_updates == 0 - ) - else: - checkpoint_conds["checkpoint{}{}.pt".format(epoch, suffix)] = ( - end_of_epoch and not cfg.no_epoch_checkpoints and epoch % cfg.save_interval == 0 - ) - checkpoint_conds["checkpoint_{}_{}{}.pt".format(epoch, updates, suffix)] = ( - not end_of_epoch - and cfg.save_interval_updates > 0 - and updates % cfg.save_interval_updates == 0 - ) - checkpoint_conds["checkpoint_best{}.pt".format(suffix)] = val_loss is not None and ( - not hasattr(save_checkpoint, "best") - or is_better(val_loss, save_checkpoint.best) - ) - if val_loss is not None and cfg.keep_best_checkpoints > 0: - worst_best = getattr(save_checkpoint, "best", None) - chkpts = 
checkpoint_paths( - cfg.save_dir, - pattern=r"checkpoint\.best_{}_(\d+\.?\d*){}\.pt".format( - cfg.best_checkpoint_metric, suffix - ), - ) - if len(chkpts) > 0: - p = chkpts[-1] if cfg.maximize_best_checkpoint_metric else chkpts[0] - worst_best = float(p.rsplit("_")[-1].replace("{}.pt".format(suffix), "")) - # add random digits to resolve ties - with data_utils.numpy_seed(epoch, updates, val_loss): - rand_sfx = np.random.randint(0, cfg.keep_best_checkpoints) - - checkpoint_conds[ - "checkpoint.best_{}_{:.3f}{}{}.pt".format( - cfg.best_checkpoint_metric, - val_loss, - rand_sfx, - suffix - ) - ] = worst_best is None or is_better(val_loss, worst_best) - checkpoint_conds[ - "checkpoint_last{}.pt".format(suffix) - ] = not cfg.no_last_checkpoints - - checkpoints = [ - os.path.join(cfg.save_dir, fn) for fn, cond in checkpoint_conds.items() if cond - ] - if len(checkpoints) > 0: - trainer.save_checkpoint(checkpoints[0], extra_state) - if not isinstance(trainer, DeepSpeedTrainer): - for cp in checkpoints[1:]: - if cfg.write_checkpoints_asynchronously: - # TODO[ioPath]: Need to implement a delayed asynchronous - # file copying/moving feature. - logger.warning( - f"ioPath is not copying {checkpoints[0]} to {cp} " - "since async write mode is on." - ) - else: - assert PathManager.copy( - checkpoints[0], cp, overwrite=True - ), f"Failed to copy {checkpoints[0]} to {cp}" - - write_timer.stop() - logger.info( - "Saved checkpoint {} (epoch {} @ {} updates, score {}) (writing took {} seconds)".format( - checkpoints[0], epoch, updates, val_loss, write_timer.sum - ) - ) - - if not end_of_epoch and cfg.keep_interval_updates > 0: - # remove old checkpoints; checkpoints are sorted in descending order - if cfg.keep_interval_updates_pattern == -1: - checkpoints = checkpoint_paths( - cfg.save_dir, pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix) - ) - else: - checkpoints = checkpoint_paths( - cfg.save_dir, - pattern=r"checkpoint_\d+_(\d+){}\.pt".format(suffix), - keep_match=True, - ) - checkpoints = [ - x[0] - for x in checkpoints - if x[1] % cfg.keep_interval_updates_pattern != 0 - ] - - for old_chk in checkpoints[cfg.keep_interval_updates:]: - if os.path.lexists(old_chk): - os.remove(old_chk) - elif PathManager.exists(old_chk): - PathManager.rm(old_chk) - - if cfg.keep_last_epochs > 0: - # remove old epoch checkpoints; checkpoints are sorted in descending order - checkpoints = checkpoint_paths( - cfg.save_dir, pattern=r"checkpoint(\d+){}\.pt".format(suffix) - ) - for old_chk in checkpoints[cfg.keep_last_epochs:]: - if os.path.lexists(old_chk): - os.remove(old_chk) - elif PathManager.exists(old_chk): - PathManager.rm(old_chk) - - if cfg.keep_best_checkpoints > 0: - # only keep the best N checkpoints according to validation metric - checkpoints = checkpoint_paths( - cfg.save_dir, - pattern=r"checkpoint\.best_{}_(\d+\.?\d*){}\.pt".format( - cfg.best_checkpoint_metric, suffix - ), - ) - if not cfg.maximize_best_checkpoint_metric: - checkpoints = checkpoints[::-1] - for old_chk in checkpoints[cfg.keep_best_checkpoints:]: - if os.path.lexists(old_chk): - os.remove(old_chk) - elif PathManager.exists(old_chk): - PathManager.rm(old_chk) - - -def load_checkpoint(cfg: CheckpointConfig, trainer, **passthrough_args): - """ - Load a checkpoint and restore the training iterator. - - *passthrough_args* will be passed through to - ``trainer.get_train_iterator``. 
-    """
-
-    reset_optimizer = cfg.reset_optimizer
-    reset_lr_scheduler = cfg.reset_lr_scheduler
-    optimizer_overrides = ast.literal_eval(cfg.optimizer_overrides)
-    reset_meters = cfg.reset_meters
-    reset_dataloader = cfg.reset_dataloader
-
-    if cfg.finetune_from_model is not None and (
-        reset_optimizer or reset_lr_scheduler or reset_meters or reset_dataloader
-    ):
-        raise ValueError(
-            "--finetune-from-model cannot be set together with --reset-optimizer, "
-            "--reset-lr-scheduler, --reset-meters, or --reset-dataloader"
-        )
-
-    suffix = trainer.checkpoint_suffix
-    if isinstance(trainer, DeepSpeedTrainer):
-        checkpoint_path = os.path.join(cfg.save_dir, "checkpoints/")
-    else:
-        if (
-            cfg.restore_file == "checkpoint_last.pt"
-        ):  # default value of restore_file is 'checkpoint_last.pt'
-            checkpoint_path = os.path.join(
-                cfg.save_dir, "checkpoint_last{}.pt".format(suffix)
-            )
-            first_launch = not PathManager.exists(checkpoint_path)
-            if cfg.finetune_from_model is not None and first_launch:
-                # if there is no last checkpoint to restore, start the finetune from pretrained model
-                # otherwise use the usual logic, e.g. restart from the last checkpoint
-                if PathManager.exists(cfg.finetune_from_model):
-                    checkpoint_path = cfg.finetune_from_model
-                    reset_optimizer = True
-                    reset_lr_scheduler = True
-                    reset_meters = True
-                    reset_dataloader = True
-                    logger.info(
-                        f"loading pretrained model from {checkpoint_path}: "
-                        "optimizer, lr scheduler, meters, dataloader will be reset"
-                    )
-                else:
-                    raise ValueError(
-                        f"--finetune-from-model {cfg.finetune_from_model} does not exist"
-                    )
-        elif suffix is not None:
-            checkpoint_path = cfg.restore_file.replace(".pt", suffix + ".pt")
-        else:
-            checkpoint_path = cfg.restore_file
-
-    if cfg.restore_file != "checkpoint_last.pt" and cfg.finetune_from_model:
-        raise ValueError(
-            "--finetune-from-model and --restore-file (non-default value) "
-            "cannot be specified together: " + str(cfg)
-        )
-
-    extra_state = trainer.load_checkpoint(
-        checkpoint_path,
-        reset_optimizer,
-        reset_lr_scheduler,
-        optimizer_overrides,
-        reset_meters=reset_meters,
-    )
-
-    if (
-        extra_state is not None
-        and "best" in extra_state
-        and not reset_optimizer
-        and not reset_meters
-    ):
-        save_checkpoint.best = extra_state["best"]
-
-    if extra_state is not None and not reset_dataloader:
-        # restore iterator from checkpoint
-        itr_state = extra_state["train_iterator"]
-        epoch_itr = trainer.get_train_iterator(
-            epoch=itr_state.get("epoch", 1), load_dataset=True, **passthrough_args
-        )
-        epoch_itr.load_state_dict(itr_state)
-    else:
-        epoch_itr = trainer.get_train_iterator(
-            epoch=1, load_dataset=True, **passthrough_args
-        )
-
-    trainer.lr_step(epoch_itr.epoch)
-
-    return extra_state, epoch_itr
-
-
-def load_checkpoint_to_cpu_(path, arg_overrides=None, load_on_all_ranks=False):
-    """Loads the raw checkpoint to CPU without applying arg overrides or
-    upgrading the state dict (cf. load_checkpoint_to_cpu below)."""
-    local_path = PathManager.get_local_path(path)
-    if local_path != path and PathManager.path_requires_pathmanager(path):
-        try:
-            os.remove(local_path)
-        except FileNotFoundError:
-            pass
-        if load_on_all_ranks:
-            torch.distributed.barrier()
-        local_path = PathManager.get_local_path(path)
-
-    with open(local_path, "rb") as f:
-        state = torch.load(f, map_location=torch.device("cpu"))
-
-    return state
-
-
-def load_checkpoint_to_cpu(path, arg_overrides=None, load_on_all_ranks=False):
-    """Loads a checkpoint to CPU (with upgrading for backward compatibility).
- - If doing single-GPU training or if the checkpoint is only being loaded by at - most one process on each node (current default behavior is for only rank 0 - to read the checkpoint from disk), load_on_all_ranks should be False to - avoid errors from torch.distributed not having been initialized or - torch.distributed.barrier() hanging. - - If all processes on each node may be loading the checkpoint - simultaneously, load_on_all_ranks should be set to True to avoid I/O - conflicts. - - There's currently no support for > 1 but < all processes loading the - checkpoint on each node. - """ - local_path = PathManager.get_local_path(path) - # The locally cached file returned by get_local_path() may be stale for - # remote files that are periodically updated/overwritten (ex: - # checkpoint_last.pt) - so we remove the local copy, sync across processes - # (if needed), and then download a fresh copy. - if local_path != path and PathManager.path_requires_pathmanager(path): - try: - os.remove(local_path) - except FileNotFoundError: - # With potentially multiple processes removing the same file, the - # file being missing is benign (missing_ok isn't available until - # Python 3.8). - pass - if load_on_all_ranks: - torch.distributed.barrier() - local_path = PathManager.get_local_path(path) - - with open(local_path, "rb") as f: - state = torch.load(f, map_location=torch.device("cpu")) - - if "args" in state and state["args"] is not None and arg_overrides is not None: - args = state["args"] - for arg_name, arg_val in arg_overrides.items(): - setattr(args, arg_name, arg_val) - - if "cfg" in state and state["cfg"] is not None: - - # hack to be able to set Namespace in dict config. this should be removed when we update to newer - # omegaconf version that supports object flags, or when we migrate all existing models - from omegaconf import _utils - - old_primitive = _utils.is_primitive_type - _utils.is_primitive_type = lambda _: True - - state["cfg"] = OmegaConf.create(state["cfg"]) - - _utils.is_primitive_type = old_primitive - OmegaConf.set_struct(state["cfg"], True) - - if arg_overrides is not None: - overwrite_args_by_name(state["cfg"], arg_overrides) - - state = _upgrade_state_dict(state) - return state - - -def load_model_ensemble( - filenames, - arg_overrides: Optional[Dict[str, Any]] = None, - task=None, - strict=True, - suffix="", - num_shards=1, - state=None, -): - """Loads an ensemble of models. 
- - Args: - filenames (List[str]): checkpoint files to load - arg_overrides (Dict[str,Any], optional): override model args that - were used during model training - task (fairseq.tasks.FairseqTask, optional): task to use for loading - """ - assert not ( - strict and num_shards > 1 - ), "Cannot load state dict with strict=True and checkpoint shards > 1" - ensemble, args, _task = load_model_ensemble_and_task( - filenames, - arg_overrides, - task, - strict, - suffix, - num_shards, - state, - ) - return ensemble, args - - -def get_maybe_sharded_checkpoint_filename( - filename: str, suffix: str, shard_idx: int, num_shards: int -) -> str: - orig_filename = filename - filename = filename.replace(".pt", suffix + ".pt") - fsdp_filename = filename[:-3] + f"-shard{shard_idx}.pt" - model_parallel_filename = orig_filename[:-3] + f"_part{shard_idx}.pt" - if PathManager.exists(fsdp_filename): - return fsdp_filename - elif num_shards > 1: - return model_parallel_filename - else: - return filename - - -def load_model_ensemble_and_task( - filenames, - arg_overrides: Optional[Dict[str, Any]] = None, - task=None, - strict=True, - suffix="", - num_shards=1, - state=None, -): - assert state is None or len(filenames) == 1 - - from fairseq import tasks - - assert not ( - strict and num_shards > 1 - ), "Cannot load state dict with strict=True and checkpoint shards > 1" - ensemble = [] - cfg = None - for filename in filenames: - orig_filename = filename - model_shard_state = {"shard_weights": [], "shard_metadata": []} - assert num_shards > 0 - st = time.time() - for shard_idx in range(num_shards): - filename = get_maybe_sharded_checkpoint_filename( - orig_filename, suffix, shard_idx, num_shards - ) - - if not PathManager.exists(filename): - raise IOError("Model file not found: {}".format(filename)) - if state is None: - state = load_checkpoint_to_cpu(filename, arg_overrides) - if "args" in state and state["args"] is not None: - cfg = convert_namespace_to_omegaconf(state["args"]) - elif "cfg" in state and state["cfg"] is not None: - cfg = state["cfg"] - else: - raise RuntimeError( - f"Neither args nor cfg exist in state keys = {state.keys()}" - ) - - if task is None: - task = tasks.setup_task(cfg.task) - - if "task_state" in state: - task.load_state_dict(state["task_state"]) - - if "fsdp_metadata" in state and num_shards > 1: - model_shard_state["shard_weights"].append(state["model"]) - model_shard_state["shard_metadata"].append(state["fsdp_metadata"]) - # check FSDP import before the code goes too far - if not has_FSDP: - raise ImportError( - "Cannot find FullyShardedDataParallel. 
" - "Please install fairscale with: pip install fairscale" - ) - if shard_idx == num_shards - 1: - consolidated_model_state = FSDP.consolidate_shard_weights( - shard_weights=model_shard_state["shard_weights"], - shard_metadata=model_shard_state["shard_metadata"], - ) - model = task.build_model(cfg.model) - if ( - "optimizer_history" in state - and len(state["optimizer_history"]) > 0 - and "num_updates" in state["optimizer_history"][-1] - ): - model.set_num_updates( - state["optimizer_history"][-1]["num_updates"] - ) - model.load_state_dict( - consolidated_model_state, strict=strict, model_cfg=cfg.model - ) - else: - # model parallel checkpoint or unsharded checkpoint - model = task.build_model(cfg.model) - if ( - "optimizer_history" in state - and len(state["optimizer_history"]) > 0 - and "num_updates" in state["optimizer_history"][-1] - ): - model.set_num_updates( - state["optimizer_history"][-1]["num_updates"] - ) - model.load_state_dict( - state["model"], strict=strict, model_cfg=cfg.model - ) - - # reset state so it gets loaded for the next model in ensemble - state = None - if shard_idx % 10 == 0 and shard_idx > 0: - elapsed = time.time() - st - logger.info( - f"Loaded {shard_idx} shards in {elapsed:.2f}s, {elapsed / (shard_idx + 1):.2f}s/shard" - ) - - # build model for ensemble - ensemble.append(model) - return ensemble, cfg, task - - -def checkpoint_paths(path, pattern=r"checkpoint(\d+)\.pt", keep_match=False): - """Retrieves all checkpoints found in `path` directory. - - Checkpoints are identified by matching filename to the specified pattern. If - the pattern contains groups, the result will be sorted by the first group in - descending order. - """ - pt_regexp = re.compile(pattern) - files = PathManager.ls(path) - - entries = [] - for i, f in enumerate(files): - m = pt_regexp.fullmatch(f) - if m is not None: - idx = float(m.group(1)) if len(m.groups()) > 0 else i - entries.append((idx, m.group(0))) - if keep_match: - return [(os.path.join(path, x[1]), x[0]) for x in sorted(entries, reverse=True)] - else: - return [os.path.join(path, x[1]) for x in sorted(entries, reverse=True)] - - -def torch_persistent_save(obj, filename, async_write: bool = False): - if async_write: - with PathManager.opena(filename, "wb") as f: - _torch_persistent_save(obj, f) - else: - if PathManager.supports_rename(filename): - # do atomic save - with PathManager.open(filename + ".tmp", "wb") as f: - _torch_persistent_save(obj, f) - PathManager.rename(filename + ".tmp", filename) - else: - # fallback to non-atomic save - with PathManager.open(filename, "wb") as f: - _torch_persistent_save(obj, f) - - -def _torch_persistent_save(obj, f): - if isinstance(f, str): - with PathManager.open(f, "wb") as h: - torch_persistent_save(obj, h) - return - for i in range(3): - try: - return torch.save(obj, f) - except Exception: - if i == 2: - logger.error(traceback.format_exc()) - raise - - -def _upgrade_state_dict(state): - """Helper for upgrading old model checkpoints.""" - - # add optimizer_history - if "optimizer_history" not in state: - state["optimizer_history"] = [ - {"criterion_name": "CrossEntropyCriterion", "best_loss": state["best_loss"]} - ] - state["last_optimizer_state"] = state["optimizer"] - del state["optimizer"] - del state["best_loss"] - # move extra_state into sub-dictionary - if "epoch" in state and "extra_state" not in state: - state["extra_state"] = { - "epoch": state["epoch"], - "batch_offset": state["batch_offset"], - "val_loss": state["val_loss"], - } - del state["epoch"] - del 
state["batch_offset"] - del state["val_loss"] - # reduce optimizer history's memory usage (only keep the last state) - if "optimizer" in state["optimizer_history"][-1]: - state["last_optimizer_state"] = state["optimizer_history"][-1]["optimizer"] - for optim_hist in state["optimizer_history"]: - del optim_hist["optimizer"] - # record the optimizer class name - if "optimizer_name" not in state["optimizer_history"][-1]: - state["optimizer_history"][-1]["optimizer_name"] = "FairseqNAG" - # move best_loss into lr_scheduler_state - if "lr_scheduler_state" not in state["optimizer_history"][-1]: - state["optimizer_history"][-1]["lr_scheduler_state"] = { - "best": state["optimizer_history"][-1]["best_loss"] - } - del state["optimizer_history"][-1]["best_loss"] - # keep track of number of updates - if "num_updates" not in state["optimizer_history"][-1]: - state["optimizer_history"][-1]["num_updates"] = 0 - # old model checkpoints may not have separate source/target positions - if ( - "args" in state - and hasattr(state["args"], "max_positions") - and not hasattr(state["args"], "max_source_positions") - ): - state["args"].max_source_positions = state["args"].max_positions - state["args"].max_target_positions = state["args"].max_positions - # use stateful training data iterator - if "train_iterator" not in state["extra_state"]: - state["extra_state"]["train_iterator"] = { - "epoch": state["extra_state"]["epoch"], - "iterations_in_epoch": state["extra_state"].get("batch_offset", 0), - } - - # backward compatibility, cfg updates - if "args" in state and state["args"] is not None: - # default to translation task - if not hasattr(state["args"], "task"): - state["args"].task = "translation" - # --raw-text and --lazy-load are deprecated - if getattr(state["args"], "raw_text", False): - state["args"].dataset_impl = "raw" - elif getattr(state["args"], "lazy_load", False): - state["args"].dataset_impl = "lazy" - # epochs start at 1 - if state["extra_state"]["train_iterator"] is not None: - state["extra_state"]["train_iterator"]["epoch"] = max( - state["extra_state"]["train_iterator"].get("epoch", 1), 1 - ) - # --remove-bpe ==> --postprocess - if hasattr(state["args"], "remove_bpe"): - state["args"].post_process = state["args"].remove_bpe - # --min-lr ==> --stop-min-lr - if hasattr(state["args"], "min_lr"): - state["args"].stop_min_lr = state["args"].min_lr - del state["args"].min_lr - # binary_cross_entropy / kd_binary_cross_entropy => wav2vec criterion - if ( - hasattr(state["args"], "criterion") - and state["args"].criterion in [ - "binary_cross_entropy", - "kd_binary_cross_entropy", - ] - ): - state["args"].criterion = "wav2vec" - # remove log_keys if it's None (criteria will supply a default value of []) - if hasattr(state["args"], "log_keys") and state["args"].log_keys is None: - delattr(state["args"], "log_keys") - # speech_pretraining => audio pretraining - if ( - hasattr(state["args"], "task") - and state["args"].task == "speech_pretraining" - ): - state["args"].task = "audio_pretraining" - # audio_cpc => wav2vec - if hasattr(state["args"], "arch") and state["args"].arch == "audio_cpc": - state["args"].arch = "wav2vec" - # convert legacy float learning rate to List[float] - if hasattr(state["args"], "lr") and isinstance(state["args"].lr, float): - state["args"].lr = [state["args"].lr] - # convert task data arg to a string instead of List[string] - if ( - hasattr(state["args"], "data") - and isinstance(state["args"].data, list) - and len(state["args"].data) > 0 - ): - state["args"].data = 
state["args"].data[0] - # remove keys in state["args"] related to teacher-student learning - for key in [ - "static_teachers", - "static_teacher_weights", - "dynamic_teachers", - "dynamic_teacher_weights", - ]: - if key in state["args"]: - delattr(state["args"], key) - - state["cfg"] = convert_namespace_to_omegaconf(state["args"]) - - if "cfg" in state and state["cfg"] is not None: - cfg = state["cfg"] - with open_dict(cfg): - # any upgrades for Hydra-based configs - if ( - "task" in cfg - and "eval_wer_config" in cfg.task - and isinstance(cfg.task.eval_wer_config.print_alignment, bool) - ): - cfg.task.eval_wer_config.print_alignment = "hard" - if "generation" in cfg and isinstance(cfg.generation.print_alignment, bool): - cfg.generation.print_alignment = "hard" if cfg.generation.print_alignment else None - if ( - "model" in cfg - and "w2v_args" in cfg.model - and cfg.model.w2v_args is not None - and ( - hasattr(cfg.model.w2v_args, "task") or "task" in cfg.model.w2v_args - ) - and hasattr(cfg.model.w2v_args.task, "eval_wer_config") - and cfg.model.w2v_args.task.eval_wer_config is not None - and isinstance( - cfg.model.w2v_args.task.eval_wer_config.print_alignment, bool - ) - ): - cfg.model.w2v_args.task.eval_wer_config.print_alignment = "hard" - - return state - - -def prune_state_dict(state_dict, model_cfg: Optional[DictConfig]): - """Prune the given state_dict if desired for LayerDrop - (https://arxiv.org/abs/1909.11556). - - Training with LayerDrop allows models to be robust to pruning at inference - time. This function prunes state_dict to allow smaller models to be loaded - from a larger model and re-maps the existing state_dict for this to occur. - - It's called by functions that load models from checkpoints and does not - need to be called directly. - """ - arch = None - if model_cfg is not None: - arch = ( - model_cfg._name - if isinstance(model_cfg, DictConfig) - else getattr(model_cfg, "arch", None) - ) - - if not model_cfg or arch is None or arch == "ptt_transformer": - # args should not be none, but don't crash if it is. - return state_dict - - encoder_layers_to_keep = getattr(model_cfg, "encoder_layers_to_keep", None) - decoder_layers_to_keep = getattr(model_cfg, "decoder_layers_to_keep", None) - - if not encoder_layers_to_keep and not decoder_layers_to_keep: - return state_dict - - # apply pruning - logger.info( - "Pruning model to specified layer configuration - this works best if the model was trained with LayerDrop" - ) - - def create_pruning_pass(layers_to_keep, layer_name): - keep_layers = sorted( - int(layer_string) for layer_string in layers_to_keep.split(",") - ) - mapping_dict = {} - for i in range(len(keep_layers)): - mapping_dict[str(keep_layers[i])] = str(i) - - regex = re.compile(r"^{layer}.*\.layers\.(\d+)".format(layer=layer_name)) - return {"substitution_regex": regex, "mapping_dict": mapping_dict} - - pruning_passes = [] - if encoder_layers_to_keep: - pruning_passes.append(create_pruning_pass(encoder_layers_to_keep, "encoder")) - if decoder_layers_to_keep: - pruning_passes.append(create_pruning_pass(decoder_layers_to_keep, "decoder")) - - new_state_dict = {} - for layer_name in state_dict.keys(): - match = re.search(r"\.layers\.(\d+)\.", layer_name) - # if layer has no number in it, it is a supporting layer, such as an - # embedding - if not match: - new_state_dict[layer_name] = state_dict[layer_name] - continue - - # otherwise, layer should be pruned. 
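For intuition on the renumbering performed in the loop that follows: with `decoder_layers_to_keep="0,2"` the mapping dict becomes `{"0": "0", "2": "1"}`, so a checkpoint key for original layer 2 is rewritten to layer 1. A small self-contained sketch of that substitution (the key and mapping are made up for illustration):

```python
import re

mapping = {"0": "0", "2": "1"}  # keep layers 0 and 2, renumber them densely
regex = re.compile(r"^decoder.*\.layers\.(\d+)")

key = "decoder.layers.2.self_attn.k_proj.weight"
m = regex.search(key)
if m and m.group(1) in mapping:
    new_key = key[: m.start(1)] + mapping[m.group(1)] + key[m.end(1):]
    print(new_key)  # decoder.layers.1.self_attn.k_proj.weight
```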
-        original_layer_number = match.group(1)
-        # figure out which mapping dict to replace from
-        for pruning_pass in pruning_passes:
-            if original_layer_number in pruning_pass["mapping_dict"] and pruning_pass[
-                "substitution_regex"
-            ].search(layer_name):
-                new_layer_number = pruning_pass["mapping_dict"][original_layer_number]
-                substitution_match = pruning_pass["substitution_regex"].search(
-                    layer_name
-                )
-                new_state_key = (
-                    layer_name[: substitution_match.start(1)]
-                    + new_layer_number
-                    + layer_name[substitution_match.end(1):]
-                )
-                new_state_dict[new_state_key] = state_dict[layer_name]
-
-    # Since layers are now pruned, *_layers_to_keep are no longer needed.
-    # This is more of a "make it work" fix than a proper one.
-    if isinstance(model_cfg, DictConfig):
-        context = open_dict(model_cfg)
-    else:
-        context = contextlib.ExitStack()
-    with context:
-        if hasattr(model_cfg, "encoder_layers_to_keep"):
-            model_cfg.encoder_layers_to_keep = None
-        if hasattr(model_cfg, "decoder_layers_to_keep"):
-            model_cfg.decoder_layers_to_keep = None
-
-    return new_state_dict
-
-
-def load_pretrained_component_from_model(
-    component: Union[FairseqEncoder, FairseqDecoder], checkpoint: str
-):
-    """
-    Load a pretrained FairseqEncoder or FairseqDecoder from checkpoint into the
-    provided `component` object. If state_dict fails to load, there may be a
-    mismatch in the architecture of the corresponding `component` found in the
-    `checkpoint` file.
-    """
-    if not PathManager.exists(checkpoint):
-        raise IOError("Model file not found: {}".format(checkpoint))
-    state = load_checkpoint_to_cpu(checkpoint)
-    if isinstance(component, FairseqEncoder):
-        component_type = "encoder"
-    elif isinstance(component, FairseqDecoder):
-        component_type = "decoder"
-    else:
-        raise ValueError(
-            "component to load must be either a FairseqEncoder or "
-            "FairseqDecoder. Loading other component types is not supported."
-        )
-    component_state_dict = OrderedDict()
-    for key in state["model"].keys():
-        if key.startswith(component_type):
-            # encoder.input_layers.0.0.weight --> input_layers.0.0.weight
-            component_subkey = key[len(component_type) + 1:]
-            component_state_dict[component_subkey] = state["model"][key]
-    component.load_state_dict(component_state_dict, strict=True)
-    return component
-
-
-def verify_checkpoint_directory(save_dir: str, rank: int) -> None:
-    if not os.path.exists(save_dir):
-        os.makedirs(save_dir, exist_ok=True)
-    temp_file_path = os.path.join(save_dir, f"dummy-{rank}")
-    try:
-        with open(temp_file_path, "w"):
-            pass
-    except OSError as e:
-        logger.warning(
-            "Unable to access checkpoint save directory: {}".format(save_dir)
-        )
-        raise e
-    else:
-        os.remove(temp_file_path)
-
-
-def load_ema_from_checkpoint(fpath):
-    """Loads exponential moving averaged (EMA) checkpoint from input and
-    returns a model with ema weights.
-
-    Args:
-        fpath: A string path of checkpoint to load from.
-
-    Returns:
-        A dict of string keys mapping to various values. The 'model' key
-        from the returned dict should correspond to an OrderedDict mapping
-        string parameter names to torch Tensors.
-    """
-    params_dict = collections.OrderedDict()
-    new_state = None
-
-    with PathManager.open(fpath, 'rb') as f:
-        new_state = torch.load(
-            f,
-            map_location=(
-                lambda s, _: torch.serialization.default_restore_location(s, 'cpu')
-            ),
-        )
-
-    # EMA model is stored in a separate "extra state"
-    model_params = new_state['extra_state']['ema']
-
-    for key in list(model_params.keys()):
-        p = model_params[key]
-        if isinstance(p, torch.HalfTensor):
-            p = p.float()
-        if key not in params_dict:
-            params_dict[key] = p.clone()
-            # NOTE: clone() is needed in case p is a shared parameter
-        else:
-            raise ValueError("Key {} is repeated in EMA model params.".format(key))
-
-    if len(params_dict) == 0:
-        raise ValueError(
-            f"Input checkpoint path '{fpath}' does not contain "
-            "ema model weights, was this model trained with EMA?"
-        )
-
-    new_state['model'] = params_dict
-    return new_state
diff --git a/kosmos-g/fairseq/fairseq/clib/cuda/ngram_repeat_block_cuda.cpp b/kosmos-g/fairseq/fairseq/clib/cuda/ngram_repeat_block_cuda.cpp
deleted file mode 100644
index 707219105..000000000
--- a/kosmos-g/fairseq/fairseq/clib/cuda/ngram_repeat_block_cuda.cpp
+++ /dev/null
@@ -1,55 +0,0 @@
-/*
-Copyright (c) Microsoft Corporation.
-Licensed under the MIT License.
-*/
-
-#include <torch/extension.h>
-#include <vector>
-
-/*
-CPP Binding for CUDA OP
-*/
-
-// CUDA forward declarations
-torch::Tensor ngram_repeat_block_cuda_forward(
-    torch::Tensor tokens,
-    torch::Tensor lprobs,
-    int bsz,
-    int step,
-    int beam_size,
-    int no_repeat_ngram_size);
-
-#define CHECK_CUDA(x) \
-  TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
-#define CHECK_CONTIGUOUS(x) \
-  TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
-#define CHECK_INPUT(x) \
-  CHECK_CUDA(x);       \
-  CHECK_CONTIGUOUS(x)
-
-// Input check and call to CUDA OP
-// Backward method not required
-torch::Tensor ngram_repeat_block_forward(
-    torch::Tensor tokens,
-    torch::Tensor lprobs,
-    int bsz,
-    int step,
-    int beam_size,
-    int no_repeat_ngram_size) {
-  CHECK_INPUT(tokens);
-  CHECK_INPUT(lprobs);
-  assert(bsz > 0);
-  assert(step >= 0);
-  assert(beam_size > 0);
-  assert(no_repeat_ngram_size > 0);
-
-  return ngram_repeat_block_cuda_forward(
-      tokens, lprobs, bsz, step, beam_size, no_repeat_ngram_size);
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
-  m.def(
-      "forward",
-      &ngram_repeat_block_forward,
-      "No Repeat Ngram Block forward (CUDA)");
-}
diff --git a/kosmos-g/fairseq/fairseq/clib/cuda/ngram_repeat_block_cuda_kernel.cu b/kosmos-g/fairseq/fairseq/clib/cuda/ngram_repeat_block_cuda_kernel.cu
deleted file mode 100644
index bd6106cba..000000000
--- a/kosmos-g/fairseq/fairseq/clib/cuda/ngram_repeat_block_cuda_kernel.cu
+++ /dev/null
@@ -1,82 +0,0 @@
-/*
-Copyright (c) Microsoft Corporation.
-Licensed under the MIT License.
-*/
-
-/*
-Kernel implementation for blocking repeated n-grams.
-*/
-
-#include <cuda.h>
-#include <cuda_runtime.h>
-#include <math.h>
-#include <torch/extension.h>
-#include <vector>
-
-// Ban repeated ngrams of length = 'no_repeat_ngram_size'
-__global__ void banRepeatedTokens(
-    long* __restrict__ tokens,
-    float* __restrict__ lprobs,
-    int max_predict_len,
-    int vocab_size,
-    int no_repeat_ngram_size) {
-  auto row = blockIdx.x;
-  auto col = threadIdx.x;
-  auto start = row * (max_predict_len) + col;
-  // Each thread compares the ngram starting at its thread index
-  // with the final ngram, which starts at
-  // step - no_repeat_ngram_size + 2
-  auto check_start_pos = blockDim.x;
-  auto lprob_start = row * vocab_size;
-  bool is_banned = true;
-  extern __shared__ long tokens_shm[];
-  tokens_shm[col] = tokens[start];
-  if (col == blockDim.x - 1) {
-    for (int i = 1; i < no_repeat_ngram_size; i++) {
-      if (col + i < max_predict_len) {
-        tokens_shm[col + i] = tokens[start + i];
-      }
-    }
-  }
-  __syncthreads();
-
-  for (int k = 0; k < no_repeat_ngram_size - 1; k++) {
-    if (tokens_shm[col + k] != tokens_shm[check_start_pos + k]) {
-      is_banned = false;
-    }
-  }
-  if (is_banned == true) {
-    auto token_to_be_banned = tokens_shm[col + no_repeat_ngram_size - 1];
-    lprobs[lprob_start + token_to_be_banned] = -INFINITY;
-  }
-}
-
-// Allocate blocks and threads based on
-// batch size and sequence length and launch
-// kernel
-torch::Tensor ngram_repeat_block_cuda_forward(
-    const torch::Tensor tokens,
-    torch::Tensor lprobs,
-    int bsz,
-    int step,
-    int beam_size,
-    int no_repeat_ngram_size) {
-  int threads = step - no_repeat_ngram_size + 2;
-  if (threads <= 0)
-    return lprobs;
-  int max_predict_len = tokens.size(1);
-  int vocab_size = lprobs.size(1);
-  auto token_ptr = tokens.data_ptr<long>();
-  auto lprob_ptr = lprobs.data_ptr<float>();
-  int blocks = bsz * beam_size;
-  int shared_mem_size = (step + 1) * sizeof(long);
-
-  // Launching N blocks where N is number of samples in a batch (beams*bsz)
-  // Launching T threads where T is number of previous ngrams in a sample
-  // Allocating shared mem per block for faster access of input tokens since
-  // each token will be accessed N times to compare with current Ngram where
-  // N is Ngram size.
-  banRepeatedTokens<<<blocks, threads, shared_mem_size>>>(
-      token_ptr, lprob_ptr, max_predict_len, vocab_size, no_repeat_ngram_size);
-  return lprobs;
-}
diff --git a/kosmos-g/fairseq/fairseq/clib/libbase/balanced_assignment.cpp b/kosmos-g/fairseq/fairseq/clib/libbase/balanced_assignment.cpp
deleted file mode 100644
index 1a5a1061f..000000000
--- a/kosmos-g/fairseq/fairseq/clib/libbase/balanced_assignment.cpp
+++ /dev/null
@@ -1,109 +0,0 @@
-/**
- * Copyright 2017-present, Facebook, Inc.
- * All rights reserved.
- *
- * This source code is licensed under the license found in the
- * LICENSE file in the root directory of this source tree.
- */
-
-/*
-C++ code for solving the linear assignment problem.
-Based on the Auction Algorithm from
-https://dspace.mit.edu/bitstream/handle/1721.1/3265/P-2108-26912652.pdf and the
-implementation from: https://github.com/bkj/auction-lap Adapted to be more
-efficient when each worker is looking for k jobs instead of 1.
-*/ -#include <torch/extension.h> -#include <iostream> -using namespace torch::indexing; -torch::Tensor balanced_assignment(torch::Tensor job_and_worker_to_score) { - int max_iterations = 100; - torch::Tensor epsilon = - (job_and_worker_to_score.max() - job_and_worker_to_score.min()) / 50; - epsilon.clamp_min_(1e-04); - torch::Tensor worker_and_job_to_score = - job_and_worker_to_score.detach().transpose(0, 1).contiguous(); - int num_workers = worker_and_job_to_score.size(0); - int num_jobs = worker_and_job_to_score.size(1); - auto device = worker_and_job_to_score.device(); - int jobs_per_worker = num_jobs / num_workers; - torch::Tensor value = worker_and_job_to_score.clone(); - int counter = 0; - torch::Tensor max_value = worker_and_job_to_score.max(); - - torch::Tensor bid_indices; - torch::Tensor cost = worker_and_job_to_score.new_zeros({1, num_jobs}); - torch::Tensor bids = - worker_and_job_to_score.new_empty({num_workers, num_jobs}); - torch::Tensor bid_increments = - worker_and_job_to_score.new_empty({num_workers, jobs_per_worker}); - torch::Tensor top_values = - worker_and_job_to_score.new_empty({num_workers, jobs_per_worker + 1}); - torch::Tensor high_bids = worker_and_job_to_score.new_empty({num_jobs}); - - torch::Tensor top_index = top_values.to(torch::kLong); - torch::Tensor high_bidders = top_index.new_empty({num_jobs}); - torch::Tensor have_bids = high_bidders.to(torch::kBool); - torch::Tensor jobs_indices = - torch::arange({num_jobs}, torch::dtype(torch::kLong).device(device)); - torch::Tensor true_tensor = - torch::ones({1}, torch::dtype(torch::kBool).device(device)); - - while (true) { - bids.zero_(); - torch::topk_out(top_values, top_index, value, jobs_per_worker + 1, 1); - - // Each worker bids the difference in value between that job and the k+1th - // job - torch::sub_out( - bid_increments, - top_values.index({Slice(None, None), Slice(0, jobs_per_worker)}), - top_values.index({Slice(None, None), jobs_per_worker}).unsqueeze(1)); - - bid_increments.add_(epsilon); - bids.scatter_( - 1, - top_index.index({Slice(None, None), Slice(0, jobs_per_worker)}), - bid_increments); - - if (counter < max_iterations && counter > 0) { - // Put in a minimal bid to retain items from the last round if no-one else - // bids for them this round - bids.view(-1).index_put_({bid_indices}, epsilon); - } - - // Find the highest bidding worker per job - torch::max_out(high_bids, high_bidders, bids, 0); - torch::gt_out(have_bids, high_bids, 0); - - if (have_bids.all().item<bool>()) { - // All jobs were bid for - break; - } - - // Make popular items more expensive - cost.add_(high_bids); - torch::sub_out(value, worker_and_job_to_score, cost); - - bid_indices = ((high_bidders * num_jobs) + jobs_indices).index({have_bids}); - - if (counter < max_iterations) { - // Make sure that this item will be in the winning worker's top-k next - // time. 
- value.view(-1).index_put_({bid_indices}, max_value); - } else { - // Suboptimal approximation that converges quickly from current solution - value.view(-1).index_put_( - {bid_indices}, worker_and_job_to_score.view(-1).index({bid_indices})); - } - - counter += 1; - } - - return top_index.index({Slice(None, None), Slice(0, jobs_per_worker)}) - .reshape(-1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("balanced_assignment", &balanced_assignment, "Balanced Assignment"); -} diff --git a/kosmos-g/fairseq/fairseq/clib/libbleu/libbleu.cpp b/kosmos-g/fairseq/fairseq/clib/libbleu/libbleu.cpp deleted file mode 100644 index 939d9e117..000000000 --- a/kosmos-g/fairseq/fairseq/clib/libbleu/libbleu.cpp +++ /dev/null @@ -1,157 +0,0 @@ -/** - * Copyright 2017-present, Facebook, Inc. - * All rights reserved. - * - * This source code is licensed under the license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include <array> -#include <cstdio> -#include <cstring> -#include <map> - -// NOLINTNEXTLINE -typedef struct { - size_t reflen; - size_t predlen; - size_t match1; - size_t count1; - size_t match2; - size_t count2; - size_t match3; - size_t count3; - size_t match4; - size_t count4; -} bleu_stat; - -// left trim (remove pad) -void bleu_ltrim(size_t* len, int** sent, int pad) { - size_t start = 0; - while (start < *len) { - if (*(*sent + start) != pad) { - break; - } - start++; - } - *sent += start; - *len -= start; -} - -// right trim remove (eos) -void bleu_rtrim(size_t* len, int** sent, int pad, int eos) { - size_t end = *len - 1; - while (end > 0) { - if (*(*sent + end) != eos && *(*sent + end) != pad) { - break; - } - end--; - } - *len = end + 1; -} - -// left and right trim -void bleu_trim(size_t* len, int** sent, int pad, int eos) { - bleu_ltrim(len, sent, pad); - bleu_rtrim(len, sent, pad, eos); -} - -size_t bleu_hash(int len, int* data) { - size_t h = 14695981039346656037ul; - size_t prime = 0x100000001b3; - char* b = (char*)data; - size_t blen = sizeof(int) * len; - - while (blen-- > 0) { - h ^= *b++; - h *= prime; - } - - return h; -} - -void bleu_addngram( - size_t* ntotal, - size_t* nmatch, - size_t n, - size_t reflen, - int* ref, - size_t predlen, - int* pred) { - if (predlen < n) { - return; - } - - predlen = predlen - n + 1; - (*ntotal) += predlen; - - if (reflen < n) { - return; - } - - reflen = reflen - n + 1; - - std::map<size_t, size_t> count; - while (predlen > 0) { - size_t w = bleu_hash(n, pred++); - count[w]++; - predlen--; - } - - while (reflen > 0) { - size_t w = bleu_hash(n, ref++); - if (count[w] > 0) { - (*nmatch)++; - count[w] -= 1; - } - reflen--; - } -} - -extern "C" { - -#ifdef _WIN64 -__declspec(dllexport) -#endif - void bleu_zero_init(bleu_stat* stat) { - std::memset(stat, 0, sizeof(bleu_stat)); -} - -#ifdef _WIN64 -__declspec(dllexport) -#endif - void bleu_one_init(bleu_stat* stat) { - bleu_zero_init(stat); - stat->count1 = 0; - stat->count2 = 1; - stat->count3 = 1; - stat->count4 = 1; - stat->match1 = 0; - stat->match2 = 1; - stat->match3 = 1; - stat->match4 = 1; -} - -#ifdef _WIN64 -__declspec(dllexport) -#endif - void bleu_add( - bleu_stat* stat, - size_t reflen, - int* ref, - size_t predlen, - int* pred, - int pad, - int eos) { - - bleu_trim(&reflen, &ref, pad, eos); - bleu_trim(&predlen, &pred, pad, eos); - stat->reflen += reflen; - stat->predlen += predlen; - - bleu_addngram(&stat->count1, &stat->match1, 1, reflen, ref, predlen, pred); - bleu_addngram(&stat->count2, &stat->match2, 2, reflen, ref, predlen, pred); - 
bleu_addngram(&stat->count3, &stat->match3, 3, reflen, ref, predlen, pred); - bleu_addngram(&stat->count4, &stat->match4, 4, reflen, ref, predlen, pred); -} -} diff --git a/kosmos-g/fairseq/fairseq/clib/libbleu/module.cpp b/kosmos-g/fairseq/fairseq/clib/libbleu/module.cpp deleted file mode 100644 index 35288b317..000000000 --- a/kosmos-g/fairseq/fairseq/clib/libbleu/module.cpp +++ /dev/null @@ -1,33 +0,0 @@ -/** - * Copyright 2017-present, Facebook, Inc. - * All rights reserved. - * - * This source code is licensed under the license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include <Python.h> - -static PyMethodDef method_def[] = {{NULL, NULL, 0, NULL}}; // NOLINT - -static struct PyModuleDef module_def = { - PyModuleDef_HEAD_INIT, - "libbleu", /* name of module */ - // NOLINTNEXTLINE - NULL, /* module documentation, may be NULL */ - -1, /* size of per-interpreter state of the module, - or -1 if the module keeps state in global variables. */ - method_def}; // NOLINT - -#if PY_MAJOR_VERSION == 2 -PyMODINIT_FUNC init_libbleu() -#else -PyMODINIT_FUNC PyInit_libbleu() -#endif -{ - PyObject* m = PyModule_Create(&module_def); - if (!m) { - return NULL; - } - return m; -} diff --git a/kosmos-g/fairseq/fairseq/clib/libnat/edit_dist.cpp b/kosmos-g/fairseq/fairseq/clib/libnat/edit_dist.cpp deleted file mode 100644 index 9ffb60569..000000000 --- a/kosmos-g/fairseq/fairseq/clib/libnat/edit_dist.cpp +++ /dev/null @@ -1,231 +0,0 @@ -/** - * Copyright 2017-present, Facebook, Inc. - * All rights reserved. - * - * This source code is licensed under the license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include <pybind11/detail/common.h> -#include <pybind11/pybind11.h> -#include <torch/torch.h> // @manual=//caffe2:torch_extension -#include <algorithm> -#include <cstdint> -#include <iosfwd> -#include <memory> -#include <new> -#include <string> -#include <utility> -#include <vector> - -using namespace ::std; - -vector<vector<uint32_t>> edit_distance2_with_dp( - vector<uint32_t>& x, - vector<uint32_t>& y) { - uint32_t lx = x.size(); - uint32_t ly = y.size(); - vector<vector<uint32_t>> d(lx + 1, vector<uint32_t>(ly + 1)); - for (uint32_t i = 0; i < lx + 1; i++) { - d[i][0] = i; - } - for (uint32_t j = 0; j < ly + 1; j++) { - d[0][j] = j; - } - for (uint32_t i = 1; i < lx + 1; i++) { - for (uint32_t j = 1; j < ly + 1; j++) { - d[i][j] = - min(min(d[i - 1][j], d[i][j - 1]) + 1, - d[i - 1][j - 1] + 2 * (x.at(i - 1) == y.at(j - 1) ? 
0 : 1)); - } - } - return d; -} - -vector<vector<uint32_t>> edit_distance2_backtracking( - vector<vector<uint32_t>>& d, - vector<uint32_t>& x, - vector<uint32_t>& y, - uint32_t terminal_symbol) { - vector<uint32_t> seq; - vector<vector<uint32_t>> edit_seqs(x.size() + 2, vector<uint32_t>()); - /* - edit_seqs: - 0~x.size() cell is the insertion sequences - last cell is the delete sequence - */ - - if (x.size() == 0) { - edit_seqs.at(0) = y; - return edit_seqs; - } - - uint32_t i = d.size() - 1; - uint32_t j = d.at(0).size() - 1; - - while ((i >= 0) && (j >= 0)) { - if ((i == 0) && (j == 0)) { - break; - } - - if ((j > 0) && (d.at(i).at(j - 1) < d.at(i).at(j))) { - seq.push_back(1); // insert - seq.push_back(y.at(j - 1)); - j--; - } else if ((i > 0) && (d.at(i - 1).at(j) < d.at(i).at(j))) { - seq.push_back(2); // delete - seq.push_back(x.at(i - 1)); - i--; - } else { - seq.push_back(3); // keep - seq.push_back(x.at(i - 1)); - i--; - j--; - } - } - - uint32_t prev_op, op, s, word; - prev_op = 0, s = 0; - for (uint32_t k = 0; k < seq.size() / 2; k++) { - op = seq.at(seq.size() - 2 * k - 2); - word = seq.at(seq.size() - 2 * k - 1); - if (prev_op != 1) { - s++; - } - if (op == 1) // insert - { - edit_seqs.at(s - 1).push_back(word); - } else if (op == 2) // delete - { - edit_seqs.at(x.size() + 1).push_back(1); - } else { - edit_seqs.at(x.size() + 1).push_back(0); - } - - prev_op = op; - } - - for (uint32_t k = 0; k < edit_seqs.size(); k++) { - if (edit_seqs[k].size() == 0) { - edit_seqs[k].push_back(terminal_symbol); - } - } - return edit_seqs; -} - -vector<vector<uint32_t>> edit_distance2_backtracking_with_delete( - vector<vector<uint32_t>>& d, - vector<uint32_t>& x, - vector<uint32_t>& y, - uint32_t terminal_symbol, - uint32_t deletion_symbol) { - vector<uint32_t> seq; - vector<vector<uint32_t>> edit_seqs(x.size() + 1, vector<uint32_t>()); - /* - edit_seqs: - 0~x.size() cell is the insertion sequences - last cell is the delete sequence - */ - - if (x.size() == 0) { - edit_seqs.at(0) = y; - return edit_seqs; - } - - uint32_t i = d.size() - 1; - uint32_t j = d.at(0).size() - 1; - - while ((i >= 0) && (j >= 0)) { - if ((i == 0) && (j == 0)) { - break; - } - - if ((j > 0) && (d.at(i).at(j - 1) < d.at(i).at(j))) { - seq.push_back(1); // insert - seq.push_back(y.at(j - 1)); - j--; - } else if ((i > 0) && (d.at(i - 1).at(j) < d.at(i).at(j))) { - seq.push_back(2); // delete - seq.push_back(x.at(i - 1)); - i--; - } else { - seq.push_back(3); // keep - seq.push_back(x.at(i - 1)); - i--; - j--; - } - } - - uint32_t prev_op, op, s, word; - prev_op = 0, s = 0; - for (uint32_t k = 0; k < seq.size() / 2; k++) { - op = seq.at(seq.size() - 2 * k - 2); - word = seq.at(seq.size() - 2 * k - 1); - if (prev_op != 1) { - s++; - } - if (op == 1) // insert - { - edit_seqs.at(s - 1).push_back(word); - } else if (op == 2) // delete - { - edit_seqs.at(s - 1).push_back(deletion_symbol); - } - - prev_op = op; - } - - for (uint32_t k = 0; k < edit_seqs.size(); k++) { - if (edit_seqs.at(k).size() == 0) { - edit_seqs.at(k).push_back(terminal_symbol); - } - } - return edit_seqs; -} - -vector<uint32_t> compute_ed2( - vector<vector<uint32_t>>& xs, - vector<vector<uint32_t>>& ys) { - vector<uint32_t> distances(xs.size()); - for (uint32_t i = 0; i < xs.size(); i++) { - vector<vector<uint32_t>> d = edit_distance2_with_dp(xs.at(i), ys.at(i)); - distances.at(i) = d.at(xs.at(i).size()).at(ys.at(i).size()); - } - return distances; -} - -vector<vector<vector<uint32_t>>> suggested_ed2_path( - vector<vector<uint32_t>>& xs, - 
vector<vector<uint32_t>>& ys,
-    uint32_t terminal_symbol) {
-  vector<vector<vector<uint32_t>>> seq(xs.size());
-  for (uint32_t i = 0; i < xs.size(); i++) {
-    vector<vector<uint32_t>> d = edit_distance2_with_dp(xs.at(i), ys.at(i));
-    seq.at(i) =
-        edit_distance2_backtracking(d, xs.at(i), ys.at(i), terminal_symbol);
-  }
-  return seq;
-}
-
-vector<vector<vector<uint32_t>>> suggested_ed2_path_with_delete(
-    vector<vector<uint32_t>>& xs,
-    vector<vector<uint32_t>>& ys,
-    uint32_t terminal_symbol,
-    uint32_t deletion_symbol) {
-  vector<vector<vector<uint32_t>>> seq(xs.size());
-  for (uint32_t i = 0; i < xs.size(); i++) {
-    vector<vector<uint32_t>> d = edit_distance2_with_dp(xs.at(i), ys.at(i));
-    seq.at(i) = edit_distance2_backtracking_with_delete(
-        d, xs.at(i), ys.at(i), terminal_symbol, deletion_symbol);
-  }
-  return seq;
-}
-
-PYBIND11_MODULE(libnat, m) {
-  m.def("compute_ed2", &compute_ed2, "compute_ed2");
-  m.def("suggested_ed2_path", &suggested_ed2_path, "suggested_ed2_path");
-  m.def(
-      "suggested_ed2_path_with_delete",
-      &suggested_ed2_path_with_delete,
-      "suggested_ed2_path_with_delete");
-}
diff --git a/kosmos-g/fairseq/fairseq/clib/libnat_cuda/binding.cpp b/kosmos-g/fairseq/fairseq/clib/libnat_cuda/binding.cpp
deleted file mode 100644
index ced91c0d0..000000000
--- a/kosmos-g/fairseq/fairseq/clib/libnat_cuda/binding.cpp
+++ /dev/null
@@ -1,67 +0,0 @@
-/**
- * Copyright 2017-present, Facebook, Inc.
- * All rights reserved.
- *
- * This source code is licensed under the license found in the
- * LICENSE file in the root directory of this source tree.
- */
-
-/*
-  This code is partially adapted from
-  https://github.com/1ytic/pytorch-edit-distance
- */
-
-#include <torch/types.h>
-#include "edit_dist.h"
-
-#ifndef TORCH_CHECK
-#define TORCH_CHECK AT_CHECK
-#endif
-
-#define CHECK_CUDA(x) \
-  TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
-#define CHECK_CONTIGUOUS(x) \
-  TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
-#define CHECK_INPUT(x) \
-  CHECK_CUDA(x);       \
-  CHECK_CONTIGUOUS(x)
-
-torch::Tensor LevenshteinDistance(
-    torch::Tensor source,
-    torch::Tensor target,
-    torch::Tensor source_length,
-    torch::Tensor target_length) {
-  CHECK_INPUT(source);
-  CHECK_INPUT(target);
-  CHECK_INPUT(source_length);
-  CHECK_INPUT(target_length);
-  return LevenshteinDistanceCuda(source, target, source_length, target_length);
-}
-
-torch::Tensor GenerateDeletionLabel(
-    torch::Tensor source,
-    torch::Tensor operations) {
-  CHECK_INPUT(source);
-  CHECK_INPUT(operations);
-  return GenerateDeletionLabelCuda(source, operations);
-}
-
-std::pair<torch::Tensor, torch::Tensor> GenerateInsertionLabel(
-    torch::Tensor target,
-    torch::Tensor operations) {
-  CHECK_INPUT(target);
-  CHECK_INPUT(operations);
-  return GenerateInsertionLabelCuda(target, operations);
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
-  m.def("levenshtein_distance", &LevenshteinDistance, "Levenshtein distance");
-  m.def(
-      "generate_deletion_labels",
-      &GenerateDeletionLabel,
-      "Generate Deletion Label");
-  m.def(
-      "generate_insertion_labels",
-      &GenerateInsertionLabel,
-      "Generate Insertion Label");
-}
diff --git a/kosmos-g/fairseq/fairseq/clib/libnat_cuda/edit_dist.cu b/kosmos-g/fairseq/fairseq/clib/libnat_cuda/edit_dist.cu
deleted file mode 100644
index 96569d46c..000000000
--- a/kosmos-g/fairseq/fairseq/clib/libnat_cuda/edit_dist.cu
+++ /dev/null
@@ -1,344 +0,0 @@
-/**
- * Copyright 2017-present, Facebook, Inc.
- * All rights reserved.
- * - * This source code is licensed under the license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include "edit_dist.h" - -#include <THC/THC.h> -#include <cuda.h> -#include <cuda_runtime.h> -#include <device_launch_parameters.h> -#include <utility> // std::pair - -template <typename scalar_t> -__global__ void generate_deletion_label_kernel( - const scalar_t* __restrict__ source, - const size_t source_size, - const size_t operation_size, - int* __restrict__ operations, - int* __restrict__ labels) { - const int index = blockIdx.x; - const int offset = index * operation_size; - const int offset_label = index * source_size; - - for (int i = 0; i < source_size; i++) { - labels[offset_label + i] = 0; - } - - int k = 0; - for (int i = 0; i < operation_size; i++) { - if (operations[offset + i] == 0) { - break; - } else if (operations[offset + i] == 1) { - continue; - } else { - labels[offset_label + k] = 3 - operations[offset + i]; - k++; - } - } -} - -template <typename scalar_t> -__global__ void generate_insertion_label_kernel( - const scalar_t* __restrict__ target, - const size_t target_size, - const size_t operation_size, - int* __restrict__ operations, - int* __restrict__ labels, - int* __restrict__ masks) { - const int index = blockIdx.x; - const int offset = index * operation_size; - const int offset_label = index * target_size; - - int k = 0; - int u = 0; - int m = 0; - - for (int i = 0; i < target_size; i++) { - labels[offset_label + i] = 0; - masks[offset_label + i] = 0; - } - - for (int i = 0; i < operation_size - 1; i++) { - if (operations[offset + i] == 0) { - break; - } else if (operations[offset + i] == 2) { - continue; - } else if (operations[offset + i] == 1) { - masks[offset_label + m] = 1; - u++; - m++; - } else { - labels[offset_label + k] = u; - masks[offset_label + m] = 0; - k++; - m++; - u = 0; - } - } -} - -template <typename scalar_t> -__global__ void levenshtein_distance_kernel( - const scalar_t* __restrict__ source, - const scalar_t* __restrict__ target, - const int* __restrict__ source_length, - const int* __restrict__ target_length, - const size_t source_size, - const size_t target_size, - int* __restrict__ operations, - int* __restrict__ errors_curr) { - const int index = blockIdx.x; - const int offset = index * (source_size + target_size); - const int d = index * (source_size + 1) * (target_size + 1); - const int t = target_size + 1; - - auto err_idx = [d, t](int i, int j) { return d + i * t + j; }; - auto opt_idx = [offset](int k) { return offset + k; }; - - const int hyp_len = source_length[index]; - const int ref_len = target_length[index]; - const scalar_t* hyp_begin = source + index * source_size; - const scalar_t* ref_begin = target + index * target_size; - - // dynamic programming - for (int i = 0; i <= hyp_len; i++) { - errors_curr[err_idx(i, 0)] = i; - } - for (int j = 0; j <= ref_len; j++) { - errors_curr[err_idx(0, j)] = j; - } - for (int i = 1; i <= hyp_len; i++) { - for (int j = 1; j <= ref_len; j++) { - errors_curr[err_idx(i, j)] = min( - min(errors_curr[err_idx(i - 1, j)], errors_curr[err_idx(i, j - 1)]) + - 1, - errors_curr[err_idx(i - 1, j - 1)] + - 2 * (*(hyp_begin + i - 1) == *(ref_begin + j - 1) ? 
0 : 1)); - } - } - - // back-tracing - int i = hyp_len; - int j = ref_len; - int o = hyp_len + ref_len; - - for (int k = 0; k < source_size + target_size; k++) { - operations[opt_idx(k)] = 0; - } - - while ((i >= 0) && (j >= 0)) { - if ((i == 0) && (j == 0)) { - break; - } - - if ((j > 0) && - (errors_curr[err_idx(i, j - 1)] < errors_curr[err_idx(i, j)])) { - o--; - operations[opt_idx(o)] = 1; - j--; // insertion - } else if ( - (i > 0) && - (errors_curr[err_idx(i - 1, j)] < errors_curr[err_idx(i, j)])) { - o--; - operations[opt_idx(o)] = 2; - i--; // deletion - } else { - o--; - operations[opt_idx(o)] = 3; - i--; - j--; // do nothing - } - } - - // moving to the left - for (int k = 0; k < hyp_len + ref_len; k++) { - if (k + o < hyp_len + ref_len) { - operations[opt_idx(k)] = operations[opt_idx(k + o)]; - } else { - operations[opt_idx(k)] = 0; // padding - } - } -} - -template <typename scalar_t> -__global__ void faster_levenshtein_distance_kernel( - const scalar_t* __restrict__ source, - const scalar_t* __restrict__ target, - const int* __restrict__ source_length, - const int* __restrict__ target_length, - const size_t source_size, - const size_t target_size, - int* __restrict__ operations) { - extern __shared__ short errors[]; - auto errors_curr = errors; - - const int index = blockIdx.x; - const int offset = index * (source_size + target_size); - const int t = target_size + 1; - - auto err_idx = [t](int i, int j) { return i * t + j; }; - auto opt_idx = [offset](int k) { return offset + k; }; - - const int hyp_len = source_length[index]; - const int ref_len = target_length[index]; - const scalar_t* hyp_begin = source + index * source_size; - const scalar_t* ref_begin = target + index * target_size; - - // dynamic programming - for (int i = 0; i <= hyp_len; i++) { - errors_curr[err_idx(i, 0)] = i; - } - for (int j = 0; j <= ref_len; j++) { - errors_curr[err_idx(0, j)] = j; - } - for (int i = 1; i <= hyp_len; i++) { - for (int j = 1; j <= ref_len; j++) { - errors_curr[err_idx(i, j)] = min( - min(errors_curr[err_idx(i - 1, j)], errors_curr[err_idx(i, j - 1)]) + - 1, - errors_curr[err_idx(i - 1, j - 1)] + - 2 * (*(hyp_begin + i - 1) == *(ref_begin + j - 1) ? 
0 : 1)); - } - } - - // back-tracing - int i = hyp_len; - int j = ref_len; - int o = hyp_len + ref_len; - - for (int k = 0; k < source_size + target_size; k++) { - operations[opt_idx(k)] = 0; - } - - while ((i >= 0) && (j >= 0)) { - if ((i == 0) && (j == 0)) { - break; - } - - if ((j > 0) && - (errors_curr[err_idx(i, j - 1)] < errors_curr[err_idx(i, j)])) { - o--; - operations[opt_idx(o)] = 1; - j--; // insertion - } else if ( - (i > 0) && - (errors_curr[err_idx(i - 1, j)] < errors_curr[err_idx(i, j)])) { - o--; - operations[opt_idx(o)] = 2; - i--; // deletion - } else { - o--; - operations[opt_idx(o)] = 3; - i--; - j--; // do nothing - } - } - - // moving to the left - for (int k = 0; k < hyp_len + ref_len; k++) { - if (k + o < hyp_len + ref_len) { - operations[opt_idx(k)] = operations[opt_idx(k + o)]; - } else { - operations[opt_idx(k)] = 0; // padding - } - } -} - -torch::Tensor GenerateDeletionLabelCuda( - torch::Tensor source, - torch::Tensor operations) { - const auto batch_size = source.size(0); - at::TensorOptions options(source.device()); - options = options.dtype(at::ScalarType::Int); - auto labels = torch::empty({batch_size, source.size(1)}, options); - auto stream = at::cuda::getCurrentCUDAStream(source.device().index()); - - AT_DISPATCH_ALL_TYPES(source.scalar_type(), "generate_deletion_labels", ([&] { - generate_deletion_label_kernel<scalar_t> - <<<batch_size, 1, 0, stream>>>( - source.data_ptr<scalar_t>(), - source.size(1), - operations.size(1), - operations.data_ptr<int>(), - labels.data_ptr<int>()); - })); - - return labels; -} - -std::pair<torch::Tensor, torch::Tensor> GenerateInsertionLabelCuda( - torch::Tensor target, - torch::Tensor operations) { - const auto batch_size = target.size(0); - at::TensorOptions options(target.device()); - options = options.dtype(at::ScalarType::Int); - auto labels = torch::empty({batch_size, target.size(1)}, options); - auto masks = torch::empty({batch_size, target.size(1)}, options); - auto stream = at::cuda::getCurrentCUDAStream(target.device().index()); - - AT_DISPATCH_ALL_TYPES( - target.scalar_type(), "generate_insertion_labels", ([&] { - generate_insertion_label_kernel<scalar_t><<<batch_size, 1, 0, stream>>>( - target.data_ptr<scalar_t>(), - target.size(1), - operations.size(1), - operations.data_ptr<int>(), - labels.data_ptr<int>(), - masks.data_ptr<int>()); - })); - - return std::make_pair(labels, masks); -} - -torch::Tensor LevenshteinDistanceCuda( - torch::Tensor source, - torch::Tensor target, - torch::Tensor source_length, - torch::Tensor target_length) { - const auto batch_size = source.size(0); - const auto shared_size = - (source.size(1) + 1) * (target.size(1) + 1) * sizeof(short); - - at::TensorOptions options(source.device()); - options = options.dtype(at::ScalarType::Int); - auto operations = - torch::empty({batch_size, source.size(1) + target.size(1)}, options); - auto stream = at::cuda::getCurrentCUDAStream(source.device().index()); - - if (shared_size > 40000) { - auto distances = torch::empty( - {batch_size, (source.size(1) + 1) * (target.size(1) + 1)}, options); - AT_DISPATCH_ALL_TYPES(source.scalar_type(), "levenshtein_distance", ([&] { - levenshtein_distance_kernel<scalar_t> - <<<batch_size, 1, 0, stream>>>( - source.data_ptr<scalar_t>(), - target.data_ptr<scalar_t>(), - source_length.data_ptr<int>(), - target_length.data_ptr<int>(), - source.size(1), - target.size(1), - operations.data_ptr<int>(), - distances.data_ptr<int>()); - })); - } else { - AT_DISPATCH_ALL_TYPES( - source.scalar_type(), 
"faster_levenshtein_distance", ([&] { - faster_levenshtein_distance_kernel<scalar_t> - <<<batch_size, 1, shared_size, stream>>>( - source.data_ptr<scalar_t>(), - target.data_ptr<scalar_t>(), - source_length.data_ptr<int>(), - target_length.data_ptr<int>(), - source.size(1), - target.size(1), - operations.data_ptr<int>()); - })); - } - - return operations; -} diff --git a/kosmos-g/fairseq/fairseq/clib/libnat_cuda/edit_dist.h b/kosmos-g/fairseq/fairseq/clib/libnat_cuda/edit_dist.h deleted file mode 100644 index 5220c52fd..000000000 --- a/kosmos-g/fairseq/fairseq/clib/libnat_cuda/edit_dist.h +++ /dev/null @@ -1,25 +0,0 @@ -/** - * Copyright 2017-present, Facebook, Inc. - * All rights reserved. - * - * This source code is licensed under the license found in the - * LICENSE file in the root directory of this source tree. - */ - -#pragma once - -#include <torch/extension.h> - -torch::Tensor LevenshteinDistanceCuda( - torch::Tensor source, - torch::Tensor target, - torch::Tensor source_length, - torch::Tensor target_length); - -torch::Tensor GenerateDeletionLabelCuda( - torch::Tensor source, - torch::Tensor operations); - -std::pair<torch::Tensor, torch::Tensor> GenerateInsertionLabelCuda( - torch::Tensor source, - torch::Tensor operations); diff --git a/kosmos-g/fairseq/fairseq/config/__init__.py b/kosmos-g/fairseq/fairseq/config/__init__.py deleted file mode 100644 index 626423691..000000000 --- a/kosmos-g/fairseq/fairseq/config/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. diff --git a/kosmos-g/fairseq/fairseq/config/config.yaml b/kosmos-g/fairseq/fairseq/config/config.yaml deleted file mode 100644 index 2ed7168cb..000000000 --- a/kosmos-g/fairseq/fairseq/config/config.yaml +++ /dev/null @@ -1,19 +0,0 @@ -# @package _group_ - -hydra: - run: - dir: . 
- -defaults: - - _self_ - - task: null - - model: null - - criterion: cross_entropy - - optimizer: null - - lr_scheduler: fixed - - bpe: null - - tokenizer: null - - scoring: null - - generation: null - - common_eval: null - - eval_lm: null diff --git a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_baevski_gbw.yaml b/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_baevski_gbw.yaml deleted file mode 100644 index 30b1a4f1e..000000000 --- a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_baevski_gbw.yaml +++ /dev/null @@ -1,36 +0,0 @@ -# @package _group_ -activation_fn: "relu" -dropout: 0.1 -attention_dropout: 0.1 -activation_dropout: 0.0 -relu_dropout: 0.0 -decoder_embed_dim: 512 -decoder_output_dim: 512 -decoder_input_dim: 512 -decoder_ffn_embed_dim: 4096 -decoder_layers: 12 -decoder_attention_heads: 16 -decoder_normalize_before: true -no_decoder_final_norm: true -adaptive_softmax_cutoff: null -adaptive_softmax_dropout: 0 -adaptive_softmax_factor: 4 -no_token_positional_embeddings: false -share_decoder_input_output_embed: false -character_embeddings: false -character_filters: "[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]" -character_embedding_dim: 4 -char_embedder_highway_layers: 2 -adaptive_input: false -adaptive_input_factor: 4 -adaptive_input_cutoff: null -tie_adaptive_weights: false -tie_adaptive_proj: false -decoder_learned_pos: false -decoder_layerdrop: 0 -decoder_layers_to_keep: null -layernorm_embedding: false -no_scale_embedding: false -quant_noise_pq: 0 -quant_noise_pq_block_size: 8 -quant_noise_scalar: 0 diff --git a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml b/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml deleted file mode 100644 index 1154cfa66..000000000 --- a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_baevski_wiki103.yaml +++ /dev/null @@ -1,36 +0,0 @@ -# @package _group_ -activation_fn: "relu" -dropout: 0.3 -attention_dropout: 0.1 -activation_dropout: 0.1 -relu_dropout: 0.1 -decoder_embed_dim: 1024 -decoder_output_dim: 1024 -decoder_input_dim: 1024 -decoder_ffn_embed_dim: 4096 -decoder_layers: 16 -decoder_attention_heads: 8 -decoder_normalize_before: true -no_decoder_final_norm: true -adaptive_softmax_cutoff: "20000,60000" -adaptive_softmax_dropout: 0.2 -adaptive_softmax_factor: 4 -no_token_positional_embeddings: false -share_decoder_input_output_embed: false -character_embeddings: false -character_filters: "[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]" -character_embedding_dim: 4 -char_embedder_highway_layers: 2 -adaptive_input: true -adaptive_input_factor: 4 -adaptive_input_cutoff: "20000,60000" -tie_adaptive_weights: true -tie_adaptive_proj: true -decoder_learned_pos: false -decoder_layerdrop: 0 -decoder_layers_to_keep: null -layernorm_embedding: false -no_scale_embedding: false -quant_noise_pq: 0 -quant_noise_pq_block_size: 8 -quant_noise_scalar: 0 diff --git a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_big.yaml b/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_big.yaml deleted file mode 100644 index 309575310..000000000 --- a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_big.yaml +++ /dev/null @@ -1,36 +0,0 @@ -# @package _group_ -activation_fn: "relu" -dropout: 0.1 -attention_dropout: 0.0 -activation_dropout: 0.0 -relu_dropout: 0.0 -decoder_embed_dim: 1024 -decoder_output_dim: 1024 
-decoder_input_dim: 1024 -decoder_ffn_embed_dim: 4096 -decoder_layers: 12 -decoder_attention_heads: 16 -decoder_normalize_before: true -no_decoder_final_norm: false -adaptive_softmax_cutoff: null -adaptive_softmax_dropout: 0 -adaptive_softmax_factor: 4 -no_token_positional_embeddings: false -share_decoder_input_output_embed: false -character_embeddings: false -character_filters: "[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]" -character_embedding_dim: 4 -char_embedder_highway_layers: 2 -adaptive_input: false -adaptive_input_factor: 4 -adaptive_input_cutoff: null -tie_adaptive_weights: false -tie_adaptive_proj: false -decoder_learned_pos: false -decoder_layerdrop: 0 -decoder_layers_to_keep: null -layernorm_embedding: false -no_scale_embedding: false -quant_noise_pq: 0 -quant_noise_pq_block_size: 8 -quant_noise_scalar: 0 diff --git a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_gbw.yaml b/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_gbw.yaml deleted file mode 100644 index 30b1a4f1e..000000000 --- a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_gbw.yaml +++ /dev/null @@ -1,36 +0,0 @@ -# @package _group_ -activation_fn: "relu" -dropout: 0.1 -attention_dropout: 0.1 -activation_dropout: 0.0 -relu_dropout: 0.0 -decoder_embed_dim: 512 -decoder_output_dim: 512 -decoder_input_dim: 512 -decoder_ffn_embed_dim: 4096 -decoder_layers: 12 -decoder_attention_heads: 16 -decoder_normalize_before: true -no_decoder_final_norm: true -adaptive_softmax_cutoff: null -adaptive_softmax_dropout: 0 -adaptive_softmax_factor: 4 -no_token_positional_embeddings: false -share_decoder_input_output_embed: false -character_embeddings: false -character_filters: "[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]" -character_embedding_dim: 4 -char_embedder_highway_layers: 2 -adaptive_input: false -adaptive_input_factor: 4 -adaptive_input_cutoff: null -tie_adaptive_weights: false -tie_adaptive_proj: false -decoder_learned_pos: false -decoder_layerdrop: 0 -decoder_layers_to_keep: null -layernorm_embedding: false -no_scale_embedding: false -quant_noise_pq: 0 -quant_noise_pq_block_size: 8 -quant_noise_scalar: 0 diff --git a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_gpt.yaml b/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_gpt.yaml deleted file mode 100644 index 2c6cb7be3..000000000 --- a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_gpt.yaml +++ /dev/null @@ -1,36 +0,0 @@ -# @package _group_ -activation_fn: "gelu" -dropout: 0.1 -attention_dropout: 0.1 -activation_dropout: 0.0 -relu_dropout: 0.0 -decoder_embed_dim: 768 -decoder_output_dim: 768 -decoder_input_dim: 768 -decoder_ffn_embed_dim: 3072 -decoder_layers: 12 -decoder_attention_heads: 12 -decoder_normalize_before: true -no_decoder_final_norm: false -adaptive_softmax_cutoff: null -adaptive_softmax_dropout: 0 -adaptive_softmax_factor: 4 -no_token_positional_embeddings: false -share_decoder_input_output_embed: false -character_embeddings: false -character_filters: "[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]" -character_embedding_dim: 4 -char_embedder_highway_layers: 2 -adaptive_input: false -adaptive_input_factor: 4 -adaptive_input_cutoff: null -tie_adaptive_weights: false -tie_adaptive_proj: false -decoder_learned_pos: false -decoder_layerdrop: 0 -decoder_layers_to_keep: null -layernorm_embedding: false -no_scale_embedding: false -quant_noise_pq: 0 -quant_noise_pq_block_size: 
8 -quant_noise_scalar: 0 diff --git a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_gpt2_big.yaml b/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_gpt2_big.yaml deleted file mode 100644 index a08769a17..000000000 --- a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_gpt2_big.yaml +++ /dev/null @@ -1,36 +0,0 @@ -# @package _group_ -activation_fn: "gelu" -dropout: 0.1 -attention_dropout: 0.1 -activation_dropout: 0.0 -relu_dropout: 0.0 -decoder_embed_dim: 1600 -decoder_output_dim: 1600 -decoder_input_dim: 1600 -decoder_ffn_embed_dim: 6400 -decoder_layers: 48 -decoder_attention_heads: 25 -decoder_normalize_before: true -no_decoder_final_norm: false -adaptive_softmax_cutoff: null -adaptive_softmax_dropout: 0 -adaptive_softmax_factor: 4 -no_token_positional_embeddings: false -share_decoder_input_output_embed: false -character_embeddings: false -character_filters: "[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]" -character_embedding_dim: 4 -char_embedder_highway_layers: 2 -adaptive_input: false -adaptive_input_factor: 4 -adaptive_input_cutoff: null -tie_adaptive_weights: false -tie_adaptive_proj: false -decoder_learned_pos: false -decoder_layerdrop: 0 -decoder_layers_to_keep: null -layernorm_embedding: false -no_scale_embedding: false -quant_noise_pq: 0 -quant_noise_pq_block_size: 8 -quant_noise_scalar: 0 diff --git a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_gpt2_medium.yaml b/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_gpt2_medium.yaml deleted file mode 100644 index 64261d793..000000000 --- a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_gpt2_medium.yaml +++ /dev/null @@ -1,36 +0,0 @@ -# @package _group_ -activation_fn: "gelu" -dropout: 0.1 -attention_dropout: 0.1 -activation_dropout: 0.0 -relu_dropout: 0.0 -decoder_embed_dim: 1280 -decoder_output_dim: 1280 -decoder_input_dim: 1280 -decoder_ffn_embed_dim: 5120 -decoder_layers: 36 -decoder_attention_heads: 20 -decoder_normalize_before: true -no_decoder_final_norm: false -adaptive_softmax_cutoff: null -adaptive_softmax_dropout: 0 -adaptive_softmax_factor: 4 -no_token_positional_embeddings: false -share_decoder_input_output_embed: false -character_embeddings: false -character_filters: "[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]" -character_embedding_dim: 4 -char_embedder_highway_layers: 2 -adaptive_input: false -adaptive_input_factor: 4 -adaptive_input_cutoff: null -tie_adaptive_weights: false -tie_adaptive_proj: false -decoder_learned_pos: false -decoder_layerdrop: 0 -decoder_layers_to_keep: null -layernorm_embedding: false -no_scale_embedding: false -quant_noise_pq: 0 -quant_noise_pq_block_size: 8 -quant_noise_scalar: 0 diff --git a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_gpt2_small.yaml b/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_gpt2_small.yaml deleted file mode 100644 index 702e81f46..000000000 --- a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_gpt2_small.yaml +++ /dev/null @@ -1,36 +0,0 @@ -# @package _group_ -activation_fn: "gelu" -dropout: 0.1 -attention_dropout: 0.1 -activation_dropout: 0.0 -relu_dropout: 0.0 -decoder_embed_dim: 1024 -decoder_output_dim: 1024 -decoder_input_dim: 1024 -decoder_ffn_embed_dim: 4096 -decoder_layers: 24 -decoder_attention_heads: 16 -decoder_normalize_before: true -no_decoder_final_norm: false -adaptive_softmax_cutoff: null -adaptive_softmax_dropout: 0 
-adaptive_softmax_factor: 4 -no_token_positional_embeddings: false -share_decoder_input_output_embed: false -character_embeddings: false -character_filters: "[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]" -character_embedding_dim: 4 -char_embedder_highway_layers: 2 -adaptive_input: false -adaptive_input_factor: 4 -adaptive_input_cutoff: null -tie_adaptive_weights: false -tie_adaptive_proj: false -decoder_learned_pos: false -decoder_layerdrop: 0 -decoder_layers_to_keep: null -layernorm_embedding: false -no_scale_embedding: false -quant_noise_pq: 0 -quant_noise_pq_block_size: 8 -quant_noise_scalar: 0 diff --git a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml b/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml deleted file mode 100644 index 1154cfa66..000000000 --- a/kosmos-g/fairseq/fairseq/config/model/transformer_lm/transformer_lm_wiki103.yaml +++ /dev/null @@ -1,36 +0,0 @@ -# @package _group_ -activation_fn: "relu" -dropout: 0.3 -attention_dropout: 0.1 -activation_dropout: 0.1 -relu_dropout: 0.1 -decoder_embed_dim: 1024 -decoder_output_dim: 1024 -decoder_input_dim: 1024 -decoder_ffn_embed_dim: 4096 -decoder_layers: 16 -decoder_attention_heads: 8 -decoder_normalize_before: true -no_decoder_final_norm: true -adaptive_softmax_cutoff: "20000,60000" -adaptive_softmax_dropout: 0.2 -adaptive_softmax_factor: 4 -no_token_positional_embeddings: false -share_decoder_input_output_embed: false -character_embeddings: false -character_filters: "[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]" -character_embedding_dim: 4 -char_embedder_highway_layers: 2 -adaptive_input: true -adaptive_input_factor: 4 -adaptive_input_cutoff: "20000,60000" -tie_adaptive_weights: true -tie_adaptive_proj: true -decoder_learned_pos: false -decoder_layerdrop: 0 -decoder_layers_to_keep: null -layernorm_embedding: false -no_scale_embedding: false -quant_noise_pq: 0 -quant_noise_pq_block_size: 8 -quant_noise_scalar: 0 diff --git a/kosmos-g/fairseq/fairseq/config/model/wav2vec/vq_wav2vec_gumbel.yaml b/kosmos-g/fairseq/fairseq/config/model/wav2vec/vq_wav2vec_gumbel.yaml deleted file mode 100644 index ee1329bf4..000000000 --- a/kosmos-g/fairseq/fairseq/config/model/wav2vec/vq_wav2vec_gumbel.yaml +++ /dev/null @@ -1,5 +0,0 @@ -# @package _group_ -activation: gelu -vq_type: gumbel -vq_depth: 2 -combine_groups: true diff --git a/kosmos-g/fairseq/fairseq/config/model/wav2vec2/wav2vec2_base.yaml b/kosmos-g/fairseq/fairseq/config/model/wav2vec2/wav2vec2_base.yaml deleted file mode 100644 index ce65499b8..000000000 --- a/kosmos-g/fairseq/fairseq/config/model/wav2vec2/wav2vec2_base.yaml +++ /dev/null @@ -1,8 +0,0 @@ -# @package _group_ - -quantize_targets: true -final_dim: 256 -encoder_layerdrop: 0.05 -dropout_input: 0.1 -dropout_features: 0.1 -feature_grad_mult: 0.1 diff --git a/kosmos-g/fairseq/fairseq/config/model/wav2vec2/wav2vec2_large.yaml b/kosmos-g/fairseq/fairseq/config/model/wav2vec2/wav2vec2_large.yaml deleted file mode 100644 index 5846f7524..000000000 --- a/kosmos-g/fairseq/fairseq/config/model/wav2vec2/wav2vec2_large.yaml +++ /dev/null @@ -1,20 +0,0 @@ -# @package _group_ - -quantize_targets: true -extractor_mode: layer_norm -layer_norm_first: true -final_dim: 768 -latent_temp: [2.0,0.1,0.999995] -encoder_layerdrop: 0.0 -dropout_input: 0.0 -dropout_features: 0.0 -dropout: 0.0 -attention_dropout: 0.0 -conv_bias: true - -encoder_layers: 24 -encoder_embed_dim: 1024 -encoder_ffn_embed_dim: 4096 -encoder_attention_heads: 
16
-
-feature_grad_mult: 1.0
diff --git a/kosmos-g/fairseq/fairseq/criterions/__init__.py b/kosmos-g/fairseq/fairseq/criterions/__init__.py
deleted file mode 100644
index 4dbf46a1c..000000000
--- a/kosmos-g/fairseq/fairseq/criterions/__init__.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""isort:skip_file"""
-
-import importlib
-import os
-
-from fairseq import registry
-from fairseq.criterions.fairseq_criterion import (  # noqa
-    FairseqCriterion,
-    LegacyFairseqCriterion,
-)
-from omegaconf import DictConfig
-
-
-(
-    build_criterion_,
-    register_criterion,
-    CRITERION_REGISTRY,
-    CRITERION_DATACLASS_REGISTRY,
-) = registry.setup_registry(
-    "--criterion", base_class=FairseqCriterion, default="cross_entropy"
-)
-
-
-def build_criterion(cfg: DictConfig, task):
-    return build_criterion_(cfg, task)
-
-
-# automatically import any Python files in the criterions/ directory
-for file in sorted(os.listdir(os.path.dirname(__file__))):
-    if file.endswith(".py") and not file.startswith("_"):
-        file_name = file[: file.find(".py")]
-        importlib.import_module("fairseq.criterions." + file_name)
diff --git a/kosmos-g/fairseq/fairseq/criterions/adaptive_loss.py b/kosmos-g/fairseq/fairseq/criterions/adaptive_loss.py
deleted file mode 100644
index 6209ceaed..000000000
--- a/kosmos-g/fairseq/fairseq/criterions/adaptive_loss.py
+++ /dev/null
@@ -1,123 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from dataclasses import dataclass
-
-import torch.nn.functional as F
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.dataclass import FairseqDataclass
-from fairseq.dataclass.constants import DDP_BACKEND_CHOICES
-from omegaconf import II
-
-
-@dataclass
-class AdaptiveLossConfig(FairseqDataclass):
-    sentence_avg: bool = II("optimization.sentence_avg")
-    ddp_backend: DDP_BACKEND_CHOICES = II("distributed_training.ddp_backend")
-
-
-@register_criterion("adaptive_loss", dataclass=AdaptiveLossConfig)
-class AdaptiveLoss(FairseqCriterion):
-    """This is an implementation of the loss function accompanying the adaptive softmax approximation for
-    graphics processing units (GPUs), described in the paper "Efficient softmax approximation for GPUs"
-    (http://arxiv.org/abs/1609.04309)."""
-
-    def __init__(self, task, sentence_avg):
-        super().__init__(task)
-        self.sentence_avg = sentence_avg
-
-    @classmethod
-    def build_criterion(cls, cfg: AdaptiveLossConfig, task):
-        if cfg.ddp_backend in {"c10d", "pytorch_ddp"}:
-            raise Exception(
-                "AdaptiveLoss is not compatible with the PyTorch "
-                "version of DistributedDataParallel. Please use "
-                "`--ddp-backend=legacy_ddp` instead."
-            )
-        return cls(task, cfg.sentence_avg)
-
-    def forward(self, model, sample, reduce=True):
-        """Compute the loss for the given sample.
-
-        Returns a tuple with three elements:
-        1) the loss
-        2) the sample size, which is used as the denominator for the gradient
-        3) logging outputs to display while training
-        """
-
-        assert (
-            hasattr(model.decoder, "adaptive_softmax")
-            and model.decoder.adaptive_softmax is not None
-        )
-        adaptive_softmax = model.decoder.adaptive_softmax
-
-        net_output = model(**sample["net_input"])
-        orig_target = model.get_targets(sample, net_output)
-
-        nsentences = orig_target.size(0)
-        orig_target = orig_target.view(-1)
-
-        bsz = orig_target.size(0)
-
-        logits, target = adaptive_softmax(net_output[0], orig_target)
-        assert len(target) == len(logits)
-
-        loss = net_output[0].new(1 if reduce else bsz).zero_()
-
-        for i in range(len(target)):
-            if target[i] is not None:
-                assert target[i].min() >= 0 and target[i].max() <= logits[i].size(1)
-                loss += F.cross_entropy(
-                    logits[i],
-                    target[i],
-                    ignore_index=self.padding_idx,
-                    reduction="sum" if reduce else "none",
-                )
-
-        orig = utils.strip_pad(orig_target, self.padding_idx)
-        ntokens = orig.numel()
-        sample_size = sample["target"].size(0) if self.sentence_avg else ntokens
-        logging_output = {
-            "loss": loss.data,
-            "ntokens": ntokens,
-            "nsentences": nsentences,
-            "sample_size": sample_size,
-        }
-        return loss, sample_size, logging_output
-
-    @staticmethod
-    def reduce_metrics(logging_outputs) -> None:
-        """Aggregate logging outputs from data parallel training."""
-        loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs))
-        ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs))
-        sample_size = utils.item(
-            sum(log.get("sample_size", 0) for log in logging_outputs)
-        )
-
-        metrics.log_scalar(
-            "loss", loss_sum / sample_size / math.log(2), sample_size, round=3
-        )
-        if sample_size != ntokens:
-            metrics.log_scalar(
-                "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3
-            )
-            metrics.log_derived(
-                "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg)
-            )
-        else:
-            metrics.log_derived(
-                "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg)
-            )
-
-    @staticmethod
-    def logging_outputs_can_be_summed() -> bool:
-        """
-        Whether the logging outputs returned by `forward` can be summed
-        across workers prior to calling `reduce_metrics`. Setting this
-        to True will improve distributed training speed.
-        """
-        return True
diff --git a/kosmos-g/fairseq/fairseq/criterions/composite_loss.py b/kosmos-g/fairseq/fairseq/criterions/composite_loss.py
deleted file mode 100644
index 98e835fa6..000000000
--- a/kosmos-g/fairseq/fairseq/criterions/composite_loss.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
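For context on the `AdaptiveLoss` criterion deleted above: it relies on the model's adaptive softmax splitting the vocabulary into a small head cluster plus tail clusters, and sums one cross-entropy term per cluster. A minimal sketch of that idea, using PyTorch's built-in `torch.nn.AdaptiveLogSoftmaxWithLoss` as a stand-in for fairseq's `AdaptiveSoftmax`, with made-up sizes:

```python
# Rough sketch of the cluster-based loss AdaptiveLoss computes; the module
# and sizes here are illustrative, not fairseq's AdaptiveSoftmax or configs.
import torch
import torch.nn as nn

hidden_dim, vocab_size = 512, 50000
asm = nn.AdaptiveLogSoftmaxWithLoss(
    in_features=hidden_dim,
    n_classes=vocab_size,
    cutoffs=[4000, 20000],  # head cluster plus two progressively smaller tails
)

features = torch.randn(8, hidden_dim)         # decoder states for 8 tokens
targets = torch.randint(0, vocab_size, (8,))  # gold token ids
out = asm(features, targets)
print(out.loss)  # mean NLL over the batch
```

Frequent tokens are scored by the small head projection alone, so most steps never materialize the full `hidden_dim x vocab_size` logits, which is the speedup described in the paper cited by the docstring.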
- -from fairseq import utils -from fairseq.criterions import LegacyFairseqCriterion, register_criterion -from torch import nn - - -@register_criterion("composite_loss") -class CompositeLoss(LegacyFairseqCriterion): - """This is a composite loss that, given a list of model outputs and a list of targets, - computes an average of losses for each output-target pair""" - - def __init__(self, args, task): - super().__init__(args, task) - self.underlying_criterion = args.underlying_criterion - - @staticmethod - def add_args(parser): - """Add criterion-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--underlying-criterion', type=str, metavar='VAL', required=True, - help='underlying criterion to use for the composite loss') - # fmt: on - - @staticmethod - def build_underlying_criterion(args, task): - saved_criterion = args.criterion - args.criterion = args.underlying_criterion - assert saved_criterion != args.underlying_criterion - underlying_criterion = task.build_criterion(args) - args.criterion = saved_criterion - return underlying_criterion - - @classmethod - def build_criterion(cls, args, task): - underlying_criterion = CompositeLoss.build_underlying_criterion(args, task) - - class FakeModel(nn.Module): - def __init__(self, model, net_out, target): - super().__init__() - self.model = model - self.net_out = net_out - self.target = target - - def forward(self, **unused): - return self.net_out - - def get_normalized_probs(self, net_output, log_probs, sample=None): - return self.model.get_normalized_probs( - net_output, log_probs, sample=sample - ) - - def get_targets(self, *unused): - return self.target - - @property - def decoder(self): - return self.model.decoder - - class _CompositeLoss(LegacyFairseqCriterion): - def __init__(self, args, task, underlying_criterion): - super().__init__(args, task) - self.underlying_criterion = underlying_criterion - - def forward(self, model, sample, reduce=True): - net_outputs = model(**sample["net_input"]) - targets = sample["target"] - - bsz = targets[0].size(0) - loss = net_outputs[0][0].new(1 if reduce else bsz).float().zero_() - - sample_size = 0 - logging_output = {} - for o, t in zip(net_outputs[0], targets): - m = FakeModel(model, (o, net_outputs[1]), t) - sample["target"] = t - l, ss, logging_output = self.underlying_criterion(m, sample, reduce) - loss += l - sample_size += ss - - loss.div_(len(targets)) - sample_size /= len(targets) - - logging_output["loss"] = utils.item(loss.data) if reduce else loss.data - return loss, sample_size, logging_output - - @staticmethod - def aggregate_logging_outputs(logging_outputs): - return underlying_criterion.__class__.aggregate_logging_outputs( - logging_outputs - ) - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - underlying_criterion.__class__.reduce_metrics(logging_outputs) - - return _CompositeLoss(args, task, underlying_criterion) diff --git a/kosmos-g/fairseq/fairseq/criterions/cross_entropy.py b/kosmos-g/fairseq/fairseq/criterions/cross_entropy.py deleted file mode 100644 index fe4610647..000000000 --- a/kosmos-g/fairseq/fairseq/criterions/cross_entropy.py +++ /dev/null @@ -1,90 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
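`CompositeLoss` above boils down to applying one wrapped criterion to each (output, target) pair and averaging; the `FakeModel` shim exists only so the underlying criterion can keep calling `model.get_targets()` and `get_normalized_probs()`. A stripped-down sketch of that averaging, with `pair_loss` as a hypothetical stand-in for the wrapped criterion:

```python
# Illustrative only: `pair_loss` stands in for the underlying criterion that
# CompositeLoss wraps; the averaging mirrors loss.div_(len(targets)) above.
import torch
import torch.nn.functional as F

def pair_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    return F.cross_entropy(logits, target, reduction="sum")

def composite_loss(outputs, targets):
    total = sum(pair_loss(o, t) for o, t in zip(outputs, targets))
    return total / len(targets)  # average over output-target pairs

outputs = [torch.randn(4, 10) for _ in range(3)]        # three model outputs
targets = [torch.randint(0, 10, (4,)) for _ in range(3)]
print(composite_loss(outputs, targets))
```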
-
-import math
-from dataclasses import dataclass
-
-import torch.nn.functional as F
-from fairseq import metrics, utils
-from fairseq.criterions import FairseqCriterion, register_criterion
-from fairseq.dataclass import FairseqDataclass
-from omegaconf import II
-
-
-@dataclass
-class CrossEntropyCriterionConfig(FairseqDataclass):
-    sentence_avg: bool = II("optimization.sentence_avg")
-
-
-@register_criterion("cross_entropy", dataclass=CrossEntropyCriterionConfig)
-class CrossEntropyCriterion(FairseqCriterion):
-    def __init__(self, task, sentence_avg):
-        super().__init__(task)
-        self.sentence_avg = sentence_avg
-
-    def forward(self, model, sample, reduce=True):
-        """Compute the loss for the given sample.
-
-        Returns a tuple with three elements:
-        1) the loss
-        2) the sample size, which is used as the denominator for the gradient
-        3) logging outputs to display while training
-        """
-        net_output = model(**sample["net_input"])
-        loss, _ = self.compute_loss(model, net_output, sample, reduce=reduce)
-        sample_size = (
-            sample["target"].size(0) if self.sentence_avg else sample["ntokens"]
-        )
-        logging_output = {
-            "loss": loss.data,
-            "ntokens": sample["ntokens"],
-            "nsentences": sample["target"].size(0),
-            "sample_size": sample_size,
-        }
-        return loss, sample_size, logging_output
-
-    def compute_loss(self, model, net_output, sample, reduce=True):
-        lprobs = model.get_normalized_probs(net_output, log_probs=True)
-        lprobs = lprobs.view(-1, lprobs.size(-1))
-        target = model.get_targets(sample, net_output).view(-1)
-        loss = F.nll_loss(
-            lprobs,
-            target,
-            ignore_index=self.padding_idx,
-            reduction="sum" if reduce else "none",
-        )
-        return loss, loss
-
-    @staticmethod
-    def reduce_metrics(logging_outputs) -> None:
-        """Aggregate logging outputs from data parallel training."""
-        loss_sum = sum(log.get("loss", 0) for log in logging_outputs)
-        ntokens = sum(log.get("ntokens", 0) for log in logging_outputs)
-        sample_size = sum(log.get("sample_size", 0) for log in logging_outputs)
-
-        # we divide by log(2) to convert the loss from base e to base 2
-        metrics.log_scalar(
-            "loss", loss_sum / sample_size / math.log(2), sample_size, round=3
-        )
-        if sample_size != ntokens:
-            metrics.log_scalar(
-                "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3
-            )
-            metrics.log_derived(
-                "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg)
-            )
-        else:
-            metrics.log_derived(
-                "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg)
-            )
-
-    @staticmethod
-    def logging_outputs_can_be_summed() -> bool:
-        """
-        Whether the logging outputs returned by `forward` can be summed
-        across workers prior to calling `reduce_metrics`. Setting this
-        to True will improve distributed training speed.
-        """
-        return True
diff --git a/kosmos-g/fairseq/fairseq/criterions/ctc.py b/kosmos-g/fairseq/fairseq/criterions/ctc.py
deleted file mode 100644
index e966e47cf..000000000
--- a/kosmos-g/fairseq/fairseq/criterions/ctc.py
+++ /dev/null
@@ -1,295 +0,0 @@
-# All rights reserved.
-#
-# This source code is licensed under the license found in the LICENSE file in
-# the root directory of this source tree. An additional grant of patent rights
-# can be found in the PATENTS file in the same directory.
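The `reduce_metrics` above divides the summed loss by `math.log(2)` because `F.nll_loss` returns nats, while fairseq logs per-token loss in bits so that perplexity is simply `2 ** loss`. A toy check of that identity, with invented numbers:

```python
# Per-token loss in bits and the matching perplexity; 2 ** (x / ln 2) == e ** x,
# so this equals exp(nats_per_token).
import math

loss_sum_nats = 240.0  # summed NLL over a batch, base e
ntokens = 100

loss_bits = loss_sum_nats / ntokens / math.log(2)
ppl = 2 ** loss_bits
assert abs(ppl - math.exp(loss_sum_nats / ntokens)) < 1e-6
print(round(loss_bits, 3), round(ppl, 3))
```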
- -import math -from argparse import Namespace -from dataclasses import dataclass, field -from omegaconf import II -from typing import Optional - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from fairseq.data.data_utils import post_process -from fairseq.tasks import FairseqTask -from fairseq.logging.meters import safe_round - - -@dataclass -class CtcCriterionConfig(FairseqDataclass): - zero_infinity: bool = field( - default=False, - metadata={"help": "zero inf loss when source length <= target length"}, - ) - sentence_avg: bool = II("optimization.sentence_avg") - post_process: str = field( - default="letter", - metadata={ - "help": "how to post process predictions into words. can be letter, " - "wordpiece, BPE symbols, etc. " - "See fairseq.data.data_utils.post_process() for full list of options" - }, - ) - wer_kenlm_model: Optional[str] = field( - default=None, - metadata={ - "help": "if this is provided, use kenlm to compute wer (along with other wer_* args)" - }, - ) - wer_lexicon: Optional[str] = field( - default=None, - metadata={"help": "lexicon to use with wer_kenlm_model"}, - ) - wer_lm_weight: float = field( - default=2.0, - metadata={"help": "lm weight to use with wer_kenlm_model"}, - ) - wer_word_score: float = field( - default=-1.0, - metadata={"help": "lm word score to use with wer_kenlm_model"}, - ) - - wer_args: Optional[str] = field( - default=None, - metadata={ - "help": "DEPRECATED: tuple of (wer_kenlm_model, wer_lexicon, wer_lm_weight, wer_word_score)" - }, - ) - - -@register_criterion("ctc", dataclass=CtcCriterionConfig) -class CtcCriterion(FairseqCriterion): - def __init__(self, cfg: CtcCriterionConfig, task: FairseqTask): - super().__init__(task) - self.blank_idx = ( - task.target_dictionary.index(task.blank_symbol) - if hasattr(task, "blank_symbol") - else 0 - ) - self.pad_idx = task.target_dictionary.pad() - self.eos_idx = task.target_dictionary.eos() - self.post_process = cfg.post_process - - if cfg.wer_args is not None: - ( - cfg.wer_kenlm_model, - cfg.wer_lexicon, - cfg.wer_lm_weight, - cfg.wer_word_score, - ) = eval(cfg.wer_args) - - if cfg.wer_kenlm_model is not None and cfg.wer_kenlm_model != "": - from examples.speech_recognition.w2l_decoder import W2lKenLMDecoder - - dec_args = Namespace() - dec_args.nbest = 1 - dec_args.criterion = "ctc" - dec_args.kenlm_model = cfg.wer_kenlm_model - dec_args.lexicon = cfg.wer_lexicon - dec_args.beam = 50 - dec_args.beam_size_token = min(50, len(task.target_dictionary)) - dec_args.beam_threshold = min(50, len(task.target_dictionary)) - dec_args.lm_weight = cfg.wer_lm_weight - dec_args.word_score = cfg.wer_word_score - dec_args.unk_weight = -math.inf - dec_args.sil_weight = 0 - - self.w2l_decoder = W2lKenLMDecoder(dec_args, task.target_dictionary) - else: - self.w2l_decoder = None - - self.zero_infinity = cfg.zero_infinity - self.sentence_avg = cfg.sentence_avg - - def forward(self, model, sample, reduce=True): - net_output = model(**sample["net_input"]) - lprobs = model.get_normalized_probs( - net_output, log_probs=True - ).contiguous() # (T, B, C) from the encoder - - if "src_lengths" in sample["net_input"]: - input_lengths = sample["net_input"]["src_lengths"] - else: - if net_output["padding_mask"] is not None: - non_padding_mask = ~net_output["padding_mask"] - input_lengths = non_padding_mask.long().sum(-1) - else: - input_lengths = lprobs.new_full( - 
(lprobs.size(1),), lprobs.size(0), dtype=torch.long - ) - - pad_mask = (sample["target"] != self.pad_idx) & ( - sample["target"] != self.eos_idx - ) - targets_flat = sample["target"].masked_select(pad_mask) - if "target_lengths" in sample: - target_lengths = sample["target_lengths"] - else: - target_lengths = pad_mask.sum(-1) - - with torch.backends.cudnn.flags(enabled=False): - loss = F.ctc_loss( - lprobs, - targets_flat, - input_lengths, - target_lengths, - blank=self.blank_idx, - reduction="sum", - zero_infinity=self.zero_infinity, - ) - - ntokens = ( - sample["ntokens"] if "ntokens" in sample else target_lengths.sum().item() - ) - - sample_size = sample["target"].size(0) if self.sentence_avg else ntokens - logging_output = { - "loss": utils.item(loss.data), # * sample['ntokens'], - "ntokens": ntokens, - "nsentences": sample["id"].numel(), - "sample_size": sample_size, - } - - if not model.training: - import editdistance - - with torch.no_grad(): - lprobs_t = lprobs.transpose(0, 1).float().contiguous().cpu() - - c_err = 0 - c_len = 0 - w_errs = 0 - w_len = 0 - wv_errs = 0 - for lp, t, inp_l in zip( - lprobs_t, - sample["target_label"] - if "target_label" in sample - else sample["target"], - input_lengths, - ): - lp = lp[:inp_l].unsqueeze(0) - - decoded = None - if self.w2l_decoder is not None: - decoded = self.w2l_decoder.decode(lp) - if len(decoded) < 1: - decoded = None - else: - decoded = decoded[0] - if len(decoded) < 1: - decoded = None - else: - decoded = decoded[0] - - p = (t != self.task.target_dictionary.pad()) & ( - t != self.task.target_dictionary.eos() - ) - targ = t[p] - targ_units = self.task.target_dictionary.string(targ) - targ_units_arr = targ.tolist() - - toks = lp.argmax(dim=-1).unique_consecutive() - pred_units_arr = toks[toks != self.blank_idx].tolist() - - c_err += editdistance.eval(pred_units_arr, targ_units_arr) - c_len += len(targ_units_arr) - - targ_words = post_process(targ_units, self.post_process).split() - - pred_units = self.task.target_dictionary.string(pred_units_arr) - pred_words_raw = post_process(pred_units, self.post_process).split() - - if decoded is not None and "words" in decoded: - pred_words = decoded["words"] - w_errs += editdistance.eval(pred_words, targ_words) - wv_errs += editdistance.eval(pred_words_raw, targ_words) - else: - dist = editdistance.eval(pred_words_raw, targ_words) - w_errs += dist - wv_errs += dist - - w_len += len(targ_words) - - logging_output["wv_errors"] = wv_errs - logging_output["w_errors"] = w_errs - logging_output["w_total"] = w_len - logging_output["c_errors"] = c_err - logging_output["c_total"] = c_len - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - - loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs)) - ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs)) - nsentences = utils.item( - sum(log.get("nsentences", 0) for log in logging_outputs) - ) - sample_size = utils.item( - sum(log.get("sample_size", 0) for log in logging_outputs) - ) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - metrics.log_scalar("ntokens", ntokens) - metrics.log_scalar("nsentences", nsentences) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - - c_errors = sum(log.get("c_errors", 0) for log in logging_outputs) - metrics.log_scalar("_c_errors", c_errors) - 
c_total = sum(log.get("c_total", 0) for log in logging_outputs)
-        metrics.log_scalar("_c_total", c_total)
-        w_errors = sum(log.get("w_errors", 0) for log in logging_outputs)
-        metrics.log_scalar("_w_errors", w_errors)
-        wv_errors = sum(log.get("wv_errors", 0) for log in logging_outputs)
-        metrics.log_scalar("_wv_errors", wv_errors)
-        w_total = sum(log.get("w_total", 0) for log in logging_outputs)
-        metrics.log_scalar("_w_total", w_total)
-
-        if c_total > 0:
-            metrics.log_derived(
-                "uer",
-                lambda meters: safe_round(
-                    meters["_c_errors"].sum * 100.0 / meters["_c_total"].sum, 3
-                )
-                if meters["_c_total"].sum > 0
-                else float("nan"),
-            )
-        if w_total > 0:
-            metrics.log_derived(
-                "wer",
-                lambda meters: safe_round(
-                    meters["_w_errors"].sum * 100.0 / meters["_w_total"].sum, 3
-                )
-                if meters["_w_total"].sum > 0
-                else float("nan"),
-            )
-            metrics.log_derived(
-                "raw_wer",
-                lambda meters: safe_round(
-                    meters["_wv_errors"].sum * 100.0 / meters["_w_total"].sum, 3
-                )
-                if meters["_w_total"].sum > 0
-                else float("nan"),
-            )
-
-    @staticmethod
-    def logging_outputs_can_be_summed() -> bool:
-        """
-        Whether the logging outputs returned by `forward` can be summed
-        across workers prior to calling `reduce_metrics`. Setting this
-        to True will improve distributed training speed.
-        """
-        return True
diff --git a/kosmos-g/fairseq/fairseq/criterions/fairseq_criterion.py b/kosmos-g/fairseq/fairseq/criterions/fairseq_criterion.py
deleted file mode 100644
index ff4beb025..000000000
--- a/kosmos-g/fairseq/fairseq/criterions/fairseq_criterion.py
+++ /dev/null
@@ -1,120 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import inspect
-from typing import Any, Dict, List
-
-from fairseq import metrics, utils
-from fairseq.dataclass import FairseqDataclass
-from fairseq.dataclass.utils import gen_parser_from_dataclass
-from torch.nn.modules.loss import _Loss
-
-
-class FairseqCriterion(_Loss):
-    def __init__(self, task):
-        super().__init__()
-        self.task = task
-        if hasattr(task, "target_dictionary"):
-            tgt_dict = task.target_dictionary
-            self.padding_idx = tgt_dict.pad() if tgt_dict is not None else -100
-
-    @classmethod
-    def add_args(cls, parser):
-        """Add criterion-specific arguments to the parser."""
-        dc = getattr(cls, "__dataclass", None)
-        if dc is not None:
-            gen_parser_from_dataclass(parser, dc())
-
-    @classmethod
-    def build_criterion(cls, cfg: FairseqDataclass, task):
-        """Construct a criterion from command-line args."""
-        # arguments in the __init__.
-        init_args = {}
-        for p in inspect.signature(cls).parameters.values():
-            if (
-                p.kind == p.POSITIONAL_ONLY
-                or p.kind == p.VAR_POSITIONAL
-                or p.kind == p.VAR_KEYWORD
-            ):
-                # we haven't implemented inference for these argument types,
-                # but PRs welcome :)
-                raise NotImplementedError("{} not supported".format(p.kind))
-
-            assert p.kind in {p.POSITIONAL_OR_KEYWORD, p.KEYWORD_ONLY}
-
-            if p.name == "task":
-                init_args["task"] = task
-            elif p.name == "cfg":
-                init_args["cfg"] = cfg
-            elif hasattr(cfg, p.name):
-                init_args[p.name] = getattr(cfg, p.name)
-            elif p.default != p.empty:
-                pass  # we'll use the default value
-            else:
-                raise NotImplementedError(
-                    "Unable to infer Criterion arguments, please implement "
-                    "{}.build_criterion".format(cls.__name__)
-                )
-        return cls(**init_args)
-
-    def forward(self, model, sample, reduce=True):
-        """Compute the loss for the given sample.
-
-        Returns a tuple with three elements:
-        1) the loss
-        2) the sample size, which is used as the denominator for the gradient
-        3) logging outputs to display while training
-        """
-        raise NotImplementedError
-
-    @staticmethod
-    def aggregate_logging_outputs(
-        logging_outputs: List[Dict[str, Any]]
-    ) -> Dict[str, Any]:
-        """Aggregate logging outputs from data parallel training."""
-        utils.deprecation_warning(
-            "The aggregate_logging_outputs API is deprecated. "
-            "Please use the reduce_metrics API instead."
-        )
-        raise NotImplementedError
-
-    @classmethod
-    def reduce_metrics(cls, logging_outputs: List[Dict[str, Any]]) -> None:
-        """Aggregate logging outputs from data parallel training."""
-        utils.deprecation_warning(
-            "Criterions should implement the reduce_metrics API. "
-            "Falling back to deprecated aggregate_logging_outputs API."
-        )
-        agg_logging_outputs = cls.aggregate_logging_outputs(logging_outputs)
-        for k, v in agg_logging_outputs.items():
-            if k in {"nsentences", "ntokens", "sample_size"}:
-                continue
-            metrics.log_scalar(k, v)
-
-    @staticmethod
-    def logging_outputs_can_be_summed() -> bool:
-        """
-        Whether the logging outputs returned by `forward` can be summed
-        across workers prior to calling `reduce_metrics`. Setting this
-        to True will improve distributed training speed.
-        """
-        return False
-
-
-class LegacyFairseqCriterion(FairseqCriterion):
-    def __init__(self, args, task):
-        super().__init__(task=task)
-        self.args = args
-
-        utils.deprecation_warning(
-            "Criterions should take explicit arguments instead of an "
-            "argparse.Namespace object, please update your criterion by "
-            "extending FairseqCriterion instead of LegacyFairseqCriterion."
-        )
-
-    @classmethod
-    def build_criterion(cls, args, task):
-        """Construct a criterion from command-line args."""
-        return cls(args, task)
diff --git a/kosmos-g/fairseq/fairseq/criterions/fastspeech2_loss.py b/kosmos-g/fairseq/fairseq/criterions/fastspeech2_loss.py
deleted file mode 100644
index b317409e2..000000000
--- a/kosmos-g/fairseq/fairseq/criterions/fastspeech2_loss.py
+++ /dev/null
@@ -1,136 +0,0 @@
-# Copyright (c) 2017-present, Facebook, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the LICENSE file in
-# the root directory of this source tree. An additional grant of patent rights
-# can be found in the PATENTS file in the same directory.
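`FairseqCriterion.build_criterion` above wires a config dataclass to a criterion without per-class boilerplate: it walks the `__init__` signature and pulls each parameter off `cfg` by name. A condensed sketch of that mechanism; `DummyCriterion` and the config object here are invented for illustration:

```python
# Mirrors the signature introspection in build_criterion above, minus the
# error handling for positional-only / *args / **kwargs parameters.
import inspect
from types import SimpleNamespace

class DummyCriterion:
    def __init__(self, task, sentence_avg, label_smoothing=0.0):
        self.task = task
        self.sentence_avg = sentence_avg
        self.eps = label_smoothing

def build(cls, cfg, task):
    kwargs = {}
    for p in inspect.signature(cls).parameters.values():
        if p.name == "task":
            kwargs["task"] = task
        elif hasattr(cfg, p.name):
            kwargs[p.name] = getattr(cfg, p.name)  # filled from the config
        # otherwise the parameter's own default applies, as in the real code
    return cls(**kwargs)

cfg = SimpleNamespace(sentence_avg=True, label_smoothing=0.1)
crit = build(DummyCriterion, cfg, task="dummy-task")
print(crit.sentence_avg, crit.eps)  # True 0.1
```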
- -from typing import List, Dict, Any -from dataclasses import dataclass, field - -import torch -import torch.nn.functional as F - -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from fairseq.data.data_utils import lengths_to_mask -from fairseq.models.fairseq_model import FairseqEncoderModel - - -@dataclass -class FastSpeech2CriterionConfig(FairseqDataclass): - ctc_weight: float = field(default=0.0, metadata={"help": "weight for CTC loss"}) - - -@register_criterion("fastspeech2", dataclass=FastSpeech2CriterionConfig) -class FastSpeech2Loss(FairseqCriterion): - def __init__(self, task, ctc_weight): - super().__init__(task) - self.ctc_weight = ctc_weight - - def forward(self, model: FairseqEncoderModel, sample, reduction="mean"): - src_tokens = sample["net_input"]["src_tokens"] - src_lens = sample["net_input"]["src_lengths"] - tgt_lens = sample["target_lengths"] - _feat_out, _feat_out_post, _, log_dur_out, pitch_out, energy_out = model( - src_tokens=src_tokens, - src_lengths=src_lens, - prev_output_tokens=sample["net_input"]["prev_output_tokens"], - incremental_state=None, - target_lengths=tgt_lens, - speaker=sample["speaker"], - durations=sample["durations"], - pitches=sample["pitches"], - energies=sample["energies"], - ) - - src_mask = lengths_to_mask(sample["net_input"]["src_lengths"]) - tgt_mask = lengths_to_mask(sample["target_lengths"]) - - pitches, energies = sample["pitches"], sample["energies"] - pitch_out, pitches = pitch_out[src_mask], pitches[src_mask] - energy_out, energies = energy_out[src_mask], energies[src_mask] - - feat_out, feat = _feat_out[tgt_mask], sample["target"][tgt_mask] - l1_loss = F.l1_loss(feat_out, feat, reduction=reduction) - if _feat_out_post is not None: - l1_loss += F.l1_loss(_feat_out_post[tgt_mask], feat, reduction=reduction) - - pitch_loss = F.mse_loss(pitch_out, pitches, reduction=reduction) - energy_loss = F.mse_loss(energy_out, energies, reduction=reduction) - - log_dur_out = log_dur_out[src_mask] - dur = sample["durations"].float() - dur = dur.half() if log_dur_out.type().endswith(".HalfTensor") else dur - log_dur = torch.log(dur + 1)[src_mask] - dur_loss = F.mse_loss(log_dur_out, log_dur, reduction=reduction) - - ctc_loss = torch.tensor(0.0).type_as(l1_loss) - if self.ctc_weight > 0.0: - lprobs = model.get_normalized_probs((_feat_out,), log_probs=True) - lprobs = lprobs.transpose(0, 1) # T x B x C - src_mask = lengths_to_mask(src_lens) - src_tokens_flat = src_tokens.masked_select(src_mask) - ctc_loss = ( - F.ctc_loss( - lprobs, - src_tokens_flat, - tgt_lens, - src_lens, - reduction=reduction, - zero_infinity=True, - ) - * self.ctc_weight - ) - - loss = l1_loss + dur_loss + pitch_loss + energy_loss + ctc_loss - - sample_size = sample["nsentences"] - logging_output = { - "loss": utils.item(loss.data), - "ntokens": sample["ntokens"], - "nsentences": sample["nsentences"], - "sample_size": sample_size, - "l1_loss": utils.item(l1_loss.data), - "dur_loss": utils.item(dur_loss.data), - "pitch_loss": utils.item(pitch_loss.data), - "energy_loss": utils.item(energy_loss.data), - "ctc_loss": utils.item(ctc_loss.data), - } - return loss, sample_size, logging_output - - @classmethod - def reduce_metrics(cls, logging_outputs: List[Dict[str, Any]]) -> None: - ns = [log.get("sample_size", 0) for log in logging_outputs] - ntot = sum(ns) - ws = [n / (ntot + 1e-8) for n in ns] - for key in [ - "loss", - "l1_loss", - "dur_loss", - "pitch_loss", - "energy_loss", - 
"ctc_loss", - ]: - vals = [log.get(key, 0) for log in logging_outputs] - val = sum(val * w for val, w in zip(vals, ws)) - metrics.log_scalar(key, val, ntot, round=3) - metrics.log_scalar("sample_size", ntot, len(logging_outputs)) - - # inference metrics - if "targ_frames" not in logging_outputs[0]: - return - n = sum(log.get("targ_frames", 0) for log in logging_outputs) - for key, new_key in [ - ("mcd_loss", "mcd_loss"), - ("pred_frames", "pred_ratio"), - ("nins", "ins_rate"), - ("ndel", "del_rate"), - ]: - val = sum(log.get(key, 0) for log in logging_outputs) - metrics.log_scalar(new_key, val / n, n, round=3) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - return False diff --git a/kosmos-g/fairseq/fairseq/criterions/hubert_criterion.py b/kosmos-g/fairseq/fairseq/criterions/hubert_criterion.py deleted file mode 100644 index 83b514aee..000000000 --- a/kosmos-g/fairseq/fairseq/criterions/hubert_criterion.py +++ /dev/null @@ -1,194 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -import re -from dataclasses import dataclass, field -from typing import List, Optional - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class HubertCriterionConfig(FairseqDataclass): - pred_masked_weight: float = field( - default=1.0, - metadata={"help": "weight for predictive loss for masked frames"}, - ) - pred_nomask_weight: float = field( - default=0.0, - metadata={"help": "weight for predictive loss for unmasked frames"}, - ) - loss_weights: Optional[List[float]] = field( - default=None, - metadata={"help": "weights for additional loss terms (not first one)"}, - ) - log_keys: List[str] = field( - default_factory=lambda: [], - metadata={"help": "output keys to log"}, - ) - - -@register_criterion("hubert", dataclass=HubertCriterionConfig) -class HubertCriterion(FairseqCriterion): - def __init__( - self, - task, - pred_masked_weight, - pred_nomask_weight, - loss_weights=None, - log_keys=None, - ): - super().__init__(task) - self.pred_masked_weight = pred_masked_weight - self.pred_nomask_weight = pred_nomask_weight - self.loss_weights = loss_weights - self.log_keys = [] if log_keys is None else log_keys - - def forward(self, model, sample, reduce=True, log_pred=False): - """Compute the loss for the given sample. 
- Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - net_output = model(target_list=sample["target_list"], **sample["net_input"]) - loss = 0.0 - sample_size = 0 - logging_output = {} - reduction = "sum" if reduce else "none" - - loss_m_list = [] - logp_m_list = model.get_logits(net_output, True) - targ_m_list = model.get_targets(net_output, True) - assert self.pred_masked_weight == 0 or len(logp_m_list) > 0 - for i, (logp_m, targ_m) in enumerate(zip(logp_m_list, targ_m_list)): - loss_m = F.cross_entropy(logp_m, targ_m, reduction=reduction) - loss_m_list.append(loss_m) - logging_output[f"loss_m_{i}"] = loss_m.detach().item() - if self.pred_masked_weight > 0: - loss += self.pred_masked_weight * sum(loss_m_list) - sample_size += targ_m_list[0].numel() - - loss_u_list = [] - logp_u_list = model.get_logits(net_output, False) - targ_u_list = model.get_targets(net_output, False) - assert self.pred_nomask_weight == 0 or len(logp_u_list) > 0 - for i, (logp_u, targ_u) in enumerate(zip(logp_u_list, targ_u_list)): - loss_u = F.cross_entropy(logp_u, targ_u, reduction=reduction) - loss_u_list.append(loss_u) - logging_output[f"loss_u_{i}"] = loss_u.detach().item() - if self.pred_nomask_weight > 0: - loss += self.pred_nomask_weight * sum(loss_u_list) - sample_size += targ_u_list[0].numel() - - if self.loss_weights is not None: - assert hasattr(model, "get_extra_losses") - extra_losses, names = model.get_extra_losses(net_output) - if torch.is_tensor(extra_losses): - extra_losses = [extra_losses] - names = [names] - if len(self.loss_weights) == 1 and len(extra_losses) != 1: - self.loss_weights = [self.loss_weights[0]] * len(extra_losses) - assert len(extra_losses) == len( - self.loss_weights - ), f"{len(extra_losses)}, {len(self.loss_weights)}" - for p, n, coef in zip(extra_losses, names, self.loss_weights): - if coef != 0 and p is not None: - p = coef * p.float() * sample_size - loss += p - logging_output[f"loss_{n}"] = p.item() - - logging_output = { - "loss": loss.item() if reduce else loss, - "ntokens": sample_size, - "nsentences": sample["id"].numel(), - "sample_size": sample_size, - **logging_output, - } - - for lk in self.log_keys: - if lk in net_output: - logging_output[lk] = float((net_output[lk])) - - def compute_correct(logits): - if logits.numel() == 0: - return 0, 0 - else: - assert logits.dim() > 1, logits.shape - max = logits.argmax(-1) == 0 - min = logits.argmin(-1) == 0 - both = max & min - corr = max.long().sum().item() - both.long().sum().item() - count = max.numel() - return corr, count - - with torch.no_grad(): - for i, logp_m in enumerate(logp_m_list): - corr_m, count_m = compute_correct(logp_m) - logging_output[f"correct_m_{i}"] = corr_m - logging_output[f"count_m_{i}"] = count_m - - for i, logp_u in enumerate(logp_u_list): - corr_u, count_u = compute_correct(logp_u) - logging_output[f"correct_u_{i}"] = corr_u - logging_output[f"count_u_{i}"] = count_u - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training (copied from normal cross entropy).""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - if 
sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - else: - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg) - ) - - counts = {} - for lk in logging_outputs[0].keys(): - if lk.startswith("count_"): - val = sum(log[lk] for log in logging_outputs) - metrics.log_scalar(lk, val) - counts[lk] = val - - for lk in logging_outputs[0].keys(): - if lk.startswith("loss_"): - val = sum(log[lk] for log in logging_outputs) - metrics.log_scalar(lk, val / sample_size / math.log(2), round=3) - elif lk.startswith("correct_"): - val = sum(log[lk] for log in logging_outputs) - metrics.log_scalar(lk, val / counts[re.sub("correct", "count", lk)]) - - @staticmethod - def aggregate_logging_outputs(logging_outputs): - """Aggregate logging outputs from data parallel training.""" - raise NotImplementedError() - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return False diff --git a/kosmos-g/fairseq/fairseq/criterions/label_smoothed_cross_entropy.py b/kosmos-g/fairseq/fairseq/criterions/label_smoothed_cross_entropy.py deleted file mode 100644 index cb43be0ca..000000000 --- a/kosmos-g/fairseq/fairseq/criterions/label_smoothed_cross_entropy.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from dataclasses import dataclass, field - -import torch -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from omegaconf import II - - -@dataclass -class LabelSmoothedCrossEntropyCriterionConfig(FairseqDataclass): - label_smoothing: float = field( - default=0.0, - metadata={"help": "epsilon for label smoothing, 0 means no label smoothing"}, - ) - report_accuracy: bool = field( - default=False, - metadata={"help": "report accuracy metric"}, - ) - ignore_prefix_size: int = field( - default=0, - metadata={"help": "Ignore first N tokens"}, - ) - sentence_avg: bool = II("optimization.sentence_avg") - - -def label_smoothed_nll_loss(lprobs, target, epsilon, ignore_index=None, reduce=True): - if target.dim() == lprobs.dim() - 1: - target = target.unsqueeze(-1) - nll_loss = -lprobs.gather(dim=-1, index=target) - smooth_loss = -lprobs.sum(dim=-1, keepdim=True) - if ignore_index is not None: - pad_mask = target.eq(ignore_index) - nll_loss.masked_fill_(pad_mask, 0.0) - smooth_loss.masked_fill_(pad_mask, 0.0) - else: - nll_loss = nll_loss.squeeze(-1) - smooth_loss = smooth_loss.squeeze(-1) - if reduce: - nll_loss = nll_loss.sum() - smooth_loss = smooth_loss.sum() - eps_i = epsilon / (lprobs.size(-1) - 1) - loss = (1.0 - epsilon - eps_i) * nll_loss + eps_i * smooth_loss - return loss, nll_loss - - -@register_criterion( - "label_smoothed_cross_entropy", dataclass=LabelSmoothedCrossEntropyCriterionConfig -) -class LabelSmoothedCrossEntropyCriterion(FairseqCriterion): - def __init__( - self, - task, - sentence_avg, - label_smoothing, - ignore_prefix_size=0, - report_accuracy=False, - ): - super().__init__(task) - self.sentence_avg = sentence_avg - self.eps = 
label_smoothing - self.ignore_prefix_size = ignore_prefix_size - self.report_accuracy = report_accuracy - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - net_output = model(**sample["net_input"]) - loss, nll_loss = self.compute_loss(model, net_output, sample, reduce=reduce) - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - logging_output = { - "loss": loss.data, - "nll_loss": nll_loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - } - if self.report_accuracy: - n_correct, total = self.compute_accuracy(model, net_output, sample) - logging_output["n_correct"] = utils.item(n_correct.data) - logging_output["total"] = utils.item(total.data) - return loss, sample_size, logging_output - - def get_lprobs_and_target(self, model, net_output, sample): - lprobs = model.get_normalized_probs(net_output, log_probs=True) - target = model.get_targets(sample, net_output) - if self.ignore_prefix_size > 0: - # lprobs: B x T x C - lprobs = lprobs[:, self.ignore_prefix_size :, :].contiguous() - target = target[:, self.ignore_prefix_size :].contiguous() - return lprobs.view(-1, lprobs.size(-1)), target.view(-1) - - def compute_loss(self, model, net_output, sample, reduce=True): - lprobs, target = self.get_lprobs_and_target(model, net_output, sample) - loss, nll_loss = label_smoothed_nll_loss( - lprobs, - target, - self.eps, - ignore_index=self.padding_idx, - reduce=reduce, - ) - return loss, nll_loss - - def compute_accuracy(self, model, net_output, sample): - lprobs, target = self.get_lprobs_and_target(model, net_output, sample) - mask = target.ne(self.padding_idx) - n_correct = torch.sum( - lprobs.argmax(1).masked_select(mask).eq(target.masked_select(mask)) - ) - total = torch.sum(mask) - return n_correct, total - - @classmethod - def reduce_metrics(cls, logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - nll_loss_sum = sum(log.get("nll_loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - metrics.log_scalar( - "nll_loss", nll_loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - - total = utils.item(sum(log.get("total", 0) for log in logging_outputs)) - if total > 0: - metrics.log_scalar("total", total) - n_correct = utils.item( - sum(log.get("n_correct", 0) for log in logging_outputs) - ) - metrics.log_scalar("n_correct", n_correct) - metrics.log_derived( - "accuracy", - lambda meters: round( - meters["n_correct"].sum * 100.0 / meters["total"].sum, 3 - ) - if meters["total"].sum > 0 - else float("nan"), - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. 
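As a sanity check on `label_smoothed_nll_loss` above: with the `eps_i = epsilon / (K - 1)` correction it reproduces cross entropy against an explicitly smoothed one-hot distribution. A minimal standalone sketch (toy shapes, no padding handling):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
B, K, eps = 4, 7, 0.1
lprobs = F.log_softmax(torch.randn(B, K), dim=-1)
target = torch.randint(K, (B,))

# Formula used by the criterion: eps_i mass on each non-target class.
nll = -lprobs.gather(-1, target.unsqueeze(-1)).squeeze(-1)
smooth = -lprobs.sum(-1)
eps_i = eps / (K - 1)
loss = ((1.0 - eps - eps_i) * nll + eps_i * smooth).sum()

# Explicit smoothed distribution: 1 - eps on the target, eps/(K-1) elsewhere.
dist = torch.full((B, K), eps_i)
dist.scatter_(1, target.unsqueeze(-1), 1.0 - eps)
ref = -(dist * lprobs).sum()

assert torch.allclose(loss, ref, atol=1e-6)
```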
- """ - return True diff --git a/kosmos-g/fairseq/fairseq/criterions/label_smoothed_cross_entropy_latency_augmented.py b/kosmos-g/fairseq/fairseq/criterions/label_smoothed_cross_entropy_latency_augmented.py deleted file mode 100644 index d5fb390f8..000000000 --- a/kosmos-g/fairseq/fairseq/criterions/label_smoothed_cross_entropy_latency_augmented.py +++ /dev/null @@ -1,220 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field -import torch -from fairseq import metrics, utils -from fairseq.criterions import register_criterion -from fairseq.criterions.label_smoothed_cross_entropy import ( - LabelSmoothedCrossEntropyCriterion, - LabelSmoothedCrossEntropyCriterionConfig, -) - -try: - from simuleval.metrics.latency import ( - AverageLagging, - AverageProportion, - DifferentiableAverageLagging, - ) - - LATENCY_METRICS = { - "average_lagging": AverageLagging, - "average_proportion": AverageProportion, - "differentiable_average_lagging": DifferentiableAverageLagging, - } -except ImportError: - LATENCY_METRICS = None - - -@dataclass -class LabelSmoothedCrossEntropyCriterionLatencyAugmentConfig( - LabelSmoothedCrossEntropyCriterionConfig -): - latency_avg_weight: float = field( - default=0.0, - metadata={"help": "weight fot average latency loss."}, - ) - latency_var_weight: float = field( - default=0.0, - metadata={"help": "weight fot variance latency loss."}, - ) - latency_avg_type: str = field( - default="differentiable_average_lagging", - metadata={"help": "latency type for average loss"}, - ) - latency_var_type: str = field( - default="variance_delay", - metadata={"help": "latency typ for variance loss"}, - ) - latency_gather_method: str = field( - default="weighted_average", - metadata={"help": "method to gather latency loss for all heads"}, - ) - latency_update_after: int = field( - default=0, - metadata={"help": "Add latency loss after certain steps"}, - ) - - -@register_criterion( - "latency_augmented_label_smoothed_cross_entropy", - dataclass=LabelSmoothedCrossEntropyCriterionLatencyAugmentConfig, -) -class LatencyAugmentedLabelSmoothedCrossEntropyCriterion( - LabelSmoothedCrossEntropyCriterion -): - def __init__( - self, - task, - sentence_avg, - label_smoothing, - ignore_prefix_size, - report_accuracy, - latency_avg_weight, - latency_var_weight, - latency_avg_type, - latency_var_type, - latency_gather_method, - latency_update_after, - ): - super().__init__( - task, sentence_avg, label_smoothing, ignore_prefix_size, report_accuracy - ) - assert LATENCY_METRICS is not None, "Please make sure SimulEval is installed." - - self.latency_avg_weight = latency_avg_weight - self.latency_var_weight = latency_var_weight - self.latency_avg_type = latency_avg_type - self.latency_var_type = latency_var_type - self.latency_gather_method = latency_gather_method - self.latency_update_after = latency_update_after - - def forward(self, model, sample, reduce=True): - net_output = model(**sample["net_input"]) - # 1. Compute cross entropy loss - loss, nll_loss = self.compute_loss(model, net_output, sample, reduce=reduce) - - # 2. 
Compute latency loss
- latency_loss, expected_latency, expected_delays_var = self.compute_latency_loss(
- model, sample, net_output
- )
-
- if self.latency_update_after > 0:
- num_updates = getattr(model.decoder, "num_updates", None)
- assert (
- num_updates is not None
- ), "model.decoder doesn't have attribute 'num_updates'"
- if num_updates <= self.latency_update_after:
- latency_loss = 0
-
- loss += latency_loss
-
- sample_size = (
- sample["target"].size(0) if self.sentence_avg else sample["ntokens"]
- )
-
- logging_output = {
- "loss": loss.data,
- "nll_loss": nll_loss.data,
- "ntokens": sample["ntokens"],
- "nsentences": sample["target"].size(0),
- "sample_size": sample_size,
- "latency": expected_latency,
- "delays_var": expected_delays_var,
- "latency_loss": latency_loss,
- }
-
- if self.report_accuracy:
- n_correct, total = self.compute_accuracy(model, net_output, sample)
- logging_output["n_correct"] = utils.item(n_correct.data)
- logging_output["total"] = utils.item(total.data)
- return loss, sample_size, logging_output
-
- def compute_latency_loss(self, model, sample, net_output):
- assert (
- net_output[-1].encoder_padding_mask is None
- or not net_output[-1].encoder_padding_mask[:, 0].any()
- ), "Only right padding on source is supported."
- # 1. Obtain the expected alignment
- alpha_list = [item["alpha"] for item in net_output[1].attn_list]
- num_layers = len(alpha_list)
- bsz, num_heads, tgt_len, src_len = alpha_list[0].size()
-
- # bsz * num_layers * num_heads, tgt_len, src_len
- alpha_all = torch.cat(alpha_list, dim=1).view(-1, tgt_len, src_len)
-
- # 2. Compute expected delays
- # bsz * num_heads * num_layers, tgt_len, src_len for MMA
- steps = (
- torch.arange(1, 1 + src_len)
- .unsqueeze(0)
- .unsqueeze(1)
- .expand_as(alpha_all)
- .type_as(alpha_all)
- )
-
- expected_delays = torch.sum(steps * alpha_all, dim=-1)
-
- target_padding_mask = (
- model.get_targets(sample, net_output)
- .eq(self.padding_idx)
- .unsqueeze(1)
- .expand(bsz, num_layers * num_heads, tgt_len)
- .contiguous()
- .view(-1, tgt_len)
- )
-
- src_lengths = (
- sample["net_input"]["src_lengths"]
- .unsqueeze(1)
- .expand(bsz, num_layers * num_heads)
- .contiguous()
- .view(-1)
- )
- expected_latency = LATENCY_METRICS[self.latency_avg_type](
- expected_delays, src_lengths, None, target_padding_mask=target_padding_mask
- )
-
- # 2.1 average expected latency of heads
- # bsz, num_layers * num_heads
- expected_latency = expected_latency.view(bsz, -1)
- if self.latency_gather_method == "average":
- # bsz * tgt_len
- expected_latency = expected_delays.mean(dim=1)
- elif self.latency_gather_method == "weighted_average":
- weights = torch.nn.functional.softmax(expected_latency, dim=1)
- expected_latency = torch.sum(expected_latency * weights, dim=1)
- elif self.latency_gather_method == "max":
- expected_latency = expected_latency.max(dim=1)[0]
- else:
- raise NotImplementedError
-
- expected_latency = expected_latency.sum()
- avg_loss = self.latency_avg_weight * expected_latency
-
- # 2.2 variance of expected delays
- expected_delays_var = (
- expected_delays.view(bsz, -1, tgt_len).var(dim=1).mean(dim=1)
- )
- expected_delays_var = expected_delays_var.sum()
- var_loss = self.latency_var_weight * expected_delays_var
-
- # 3. 
Final loss - latency_loss = avg_loss + var_loss - - return latency_loss, expected_latency, expected_delays_var - - @classmethod - def reduce_metrics(cls, logging_outputs) -> None: - super().reduce_metrics(logging_outputs) - latency = sum(log.get("latency", 0) for log in logging_outputs) - delays_var = sum(log.get("delays_var", 0) for log in logging_outputs) - latency_loss = sum(log.get("latency_loss", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - metrics.log_scalar("latency", latency.float() / nsentences, nsentences, round=3) - metrics.log_scalar("delays_var", delays_var / nsentences, nsentences, round=3) - metrics.log_scalar( - "latency_loss", latency_loss / nsentences, nsentences, round=3 - ) diff --git a/kosmos-g/fairseq/fairseq/criterions/label_smoothed_cross_entropy_with_alignment.py b/kosmos-g/fairseq/fairseq/criterions/label_smoothed_cross_entropy_with_alignment.py deleted file mode 100644 index 2ea37c16b..000000000 --- a/kosmos-g/fairseq/fairseq/criterions/label_smoothed_cross_entropy_with_alignment.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -from fairseq import metrics, utils -from fairseq.criterions import register_criterion - -from .label_smoothed_cross_entropy import ( - LabelSmoothedCrossEntropyCriterion, - LabelSmoothedCrossEntropyCriterionConfig, -) - -from dataclasses import dataclass, field - - -@dataclass -class LabelSmoothedCrossEntropyCriterionWithAlignmentConfig( - LabelSmoothedCrossEntropyCriterionConfig -): - alignment_lambda: float = field( - default=0.05, metadata={"help": "weight for the alignment loss"} - ) - - -@register_criterion( - "label_smoothed_cross_entropy_with_alignment", - dataclass=LabelSmoothedCrossEntropyCriterionWithAlignmentConfig, -) -class LabelSmoothedCrossEntropyCriterionWithAlignment( - LabelSmoothedCrossEntropyCriterion -): - def __init__(self, task, sentence_avg, label_smoothing, alignment_lambda): - super().__init__(task, sentence_avg, label_smoothing) - self.alignment_lambda = alignment_lambda - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - net_output = model(**sample["net_input"]) - loss, nll_loss = self.compute_loss(model, net_output, sample, reduce=reduce) - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - logging_output = { - "loss": utils.item(loss.data) if reduce else loss.data, - "nll_loss": utils.item(nll_loss.data) if reduce else nll_loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - } - - alignment_loss = None - - # Compute alignment loss only for training set and non dummy batches. 
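The gather that `compute_alignment_loss` performs on the flattened attention matrix can be previewed in isolation. A minimal sketch with a hypothetical one-sentence batch; the `(src, tgt)` index pairs and weights are made up:

```python
import torch

# One sentence, tgt_len=3, src_len=4; alignments given as (src, tgt) pairs
# and align_weights holding 1/frequency of each target position.
bsz, tgt_sz, src_sz = 1, 3, 4
attn_prob = torch.softmax(torch.randn(bsz, tgt_sz, src_sz), dim=-1)
attn = attn_prob.view(bsz * tgt_sz, src_sz)

align = torch.tensor([[0, 0], [1, 1], [3, 2]])  # columns: (src, tgt)
align_weights = torch.ones(align.size(0))

# Negative log-likelihood of the attention mass on the aligned pairs,
# the same gather the criterion's compute_alignment_loss performs.
loss = -(
    attn[align[:, 1][:, None], align[:, 0][:, None]].log()
    * align_weights[:, None]
).sum()
print(float(loss))
```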
- if "alignments" in sample and sample["alignments"] is not None: - alignment_loss = self.compute_alignment_loss(sample, net_output) - - if alignment_loss is not None: - logging_output["alignment_loss"] = utils.item(alignment_loss.data) - loss += self.alignment_lambda * alignment_loss - - return loss, sample_size, logging_output - - def compute_alignment_loss(self, sample, net_output): - attn_prob = net_output[1]["attn"][0] - bsz, tgt_sz, src_sz = attn_prob.shape - attn = attn_prob.view(bsz * tgt_sz, src_sz) - - align = sample["alignments"] - align_weights = sample["align_weights"].float() - - if len(align) > 0: - # Alignment loss computation. align (shape [:, 2]) contains the src-tgt index pairs corresponding to - # the alignments. align_weights (shape [:]) contains the 1 / frequency of a tgt index for normalizing. - loss = -( - (attn[align[:, 1][:, None], align[:, 0][:, None]]).log() - * align_weights[:, None] - ).sum() - else: - return None - - return loss - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs)) - nll_loss_sum = utils.item( - sum(log.get("nll_loss", 0) for log in logging_outputs) - ) - alignment_loss_sum = utils.item( - sum(log.get("alignment_loss", 0) for log in logging_outputs) - ) - ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs)) - sample_size = utils.item( - sum(log.get("sample_size", 0) for log in logging_outputs) - ) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - metrics.log_scalar( - "nll_loss", nll_loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - metrics.log_scalar( - "alignment_loss", - alignment_loss_sum / sample_size / math.log(2), - sample_size, - round=3, - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/kosmos-g/fairseq/fairseq/criterions/legacy_masked_lm.py b/kosmos-g/fairseq/fairseq/criterions/legacy_masked_lm.py deleted file mode 100644 index c70608c5a..000000000 --- a/kosmos-g/fairseq/fairseq/criterions/legacy_masked_lm.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion - - -def compute_cross_entropy_loss(logits, targets, ignore_index=-100): - """ - Function to compute the cross entropy loss. The default value of - ignore_index is the same as the default value for F.cross_entropy in - pytorch. - """ - assert logits.size(0) == targets.size( - -1 - ), "Logits and Targets tensor shapes don't match up" - - loss = F.nll_loss( - F.log_softmax(logits, -1, dtype=torch.float32), - targets, - reduction="sum", - ignore_index=ignore_index, - ) - return loss - - -@register_criterion("legacy_masked_lm_loss") -class LegacyMaskedLmLoss(FairseqCriterion): - """ - Implementation for the loss used in masked language model (MLM) training. 
- This optionally also computes the next sentence prediction (NSP) loss and - adds it to the overall loss based on the specified args. There are three - cases to consider: - 1) Generic MLM training without NSP loss. In this case sentence_targets - and sentence_logits are both None. - 2) BERT training without NSP loss. In this case sentence_targets is - not None but sentence_logits is None and we should not be computing - a sentence level loss. - 3) BERT training with NSP loss. In this case both sentence_targets and - sentence_logits are not None and we should be computing a sentence - level loss. The weight of the sentence level loss is specified as - an argument. - """ - - def __init__(self, task, masked_lm_only, nsp_loss_weight): - super().__init__(task) - self.masked_lm_only = masked_lm_only - self.nsp_loss_weight = nsp_loss_weight - - @staticmethod - def add_args(parser): - """Args for MaskedLM Loss""" - # Default for masked_lm_only is False so as to not break BERT training - parser.add_argument( - "--masked-lm-only", - default=False, - action="store_true", - help="compute MLM loss only", - ) - parser.add_argument( - "--nsp-loss-weight", - default=1.0, - type=float, - help="weight for next sentence prediction" " loss (default 1)", - ) - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - lm_logits, output_metadata = model(**sample["net_input"]) - - # reshape lm_logits from (N,T,C) to (N*T,C) - lm_logits = lm_logits.view(-1, lm_logits.size(-1)) - lm_targets = sample["lm_target"].view(-1) - lm_loss = compute_cross_entropy_loss(lm_logits, lm_targets, self.padding_idx) - - # compute the number of tokens for which loss is computed. This is used - # to normalize the loss - ntokens = utils.strip_pad(lm_targets, self.padding_idx).numel() - loss = lm_loss / ntokens - nsentences = sample["nsentences"] - # nsentences = 0 - - # Compute sentence loss if masked_lm_only is False - sentence_loss = None - if not self.masked_lm_only: - sentence_logits = output_metadata["sentence_logits"] - sentence_targets = sample["sentence_target"].view(-1) - # This needs to be recomputed due to some differences between - # TokenBlock and BlockPair dataset. This can be resolved with a - # refactor of BERTModel which we will do in the future. - # TODO: Remove this after refactor of BERTModel - nsentences = sentence_targets.size(0) - - # Check for logits being none which can happen when remove_heads - # is set to true in the BERT model. Ideally we should set - # masked_lm_only to true in this case, but that requires some - # refactor in the BERT model. 
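How the two terms are combined below (per-token MLM loss plus weighted per-sentence NSP loss) can be shown with toy tensors. A minimal sketch, assuming the default `nsp_loss_weight` and a padding index of 1; all shapes are illustrative:

```python
import torch
import torch.nn.functional as F

padding_idx, nsp_loss_weight = 1, 1.0
lm_logits = torch.randn(10, 50)           # (N*T, vocab), toy values
lm_targets = torch.randint(2, 50, (10,))  # padding_idx excluded here
sentence_logits = torch.randn(2, 2)       # (nsentences, 2) for NSP
sentence_targets = torch.randint(2, (2,))

lm_loss = F.nll_loss(
    F.log_softmax(lm_logits, -1, dtype=torch.float32),
    lm_targets, reduction="sum", ignore_index=padding_idx,
)
ntokens = lm_targets.ne(padding_idx).sum()
nsp_loss = F.nll_loss(
    F.log_softmax(sentence_logits, -1, dtype=torch.float32),
    sentence_targets, reduction="sum",
)
# Per-token MLM term plus weighted per-sentence NSP term.
loss = lm_loss / ntokens + nsp_loss_weight * (nsp_loss / sentence_targets.numel())
```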
- if sentence_logits is not None: - sentence_loss = compute_cross_entropy_loss( - sentence_logits, sentence_targets - ) - - loss += self.nsp_loss_weight * (sentence_loss / nsentences) - - # NOTE: as we are summing up per token mlm loss and per sentence nsp loss - # we don't need to use sample_size as denominator for the gradient - # here sample_size is just used for logging - sample_size = 1 - logging_output = { - "loss": utils.item(loss.data) if reduce else loss.data, - "lm_loss": utils.item(lm_loss.data) if reduce else lm_loss.data, - # sentence loss is not always computed - "sentence_loss": ( - (utils.item(sentence_loss.data) if reduce else sentence_loss.data) - if sentence_loss is not None - else 0.0 - ), - "ntokens": ntokens, - "nsentences": nsentences, - "sample_size": sample_size, - } - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - lm_loss_sum = sum(log.get("lm_loss", 0) for log in logging_outputs) - sentence_loss_sum = sum(log.get("sentence_loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - agg_loss = sum(log.get("loss", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", - agg_loss / sample_size / math.log(2) if sample_size > 0 else 0.0, - sample_size, - round=3, - ) - metrics.log_scalar( - "lm_loss", - lm_loss_sum / ntokens / math.log(2) if ntokens > 0 else 0.0, - ntokens, - round=3, - ) - metrics.log_scalar( - "sentence_loss", - sentence_loss_sum / nsentences / math.log(2) if nsentences > 0 else 0.0, - nsentences, - round=3, - ) - metrics.log_scalar( - "nll_loss", - lm_loss_sum / ntokens / math.log(2) if ntokens > 0 else 0.0, - ntokens, - round=3, - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/kosmos-g/fairseq/fairseq/criterions/masked_lm.py b/kosmos-g/fairseq/fairseq/criterions/masked_lm.py deleted file mode 100644 index 279458f31..000000000 --- a/kosmos-g/fairseq/fairseq/criterions/masked_lm.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass -import math -from omegaconf import II - -import torch -from fairseq import metrics, modules, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class MaskedLmConfig(FairseqDataclass): - tpu: bool = II("common.tpu") - - -@register_criterion("masked_lm", dataclass=MaskedLmConfig) -class MaskedLmLoss(FairseqCriterion): - """ - Implementation for the loss used in masked language model (MLM) training. - """ - - def __init__(self, cfg: MaskedLmConfig, task): - super().__init__(task) - self.tpu = cfg.tpu - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. 
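The masked-position bookkeeping below reduces to a boolean index on the target. A standalone sketch with toy shapes; plain `F.cross_entropy` stands in for fairseq's `modules.cross_entropy`:

```python
import torch
import torch.nn.functional as F

# padding_idx marks non-masked positions in the MLM target.
padding_idx = 1
target = torch.tensor([[5, 1, 7], [1, 1, 9]])   # (B, T); 1 = not masked
logits = torch.randn(2, 3, 16)                  # (B, T, vocab), toy values

masked_tokens = target.ne(padding_idx)
sample_size = masked_tokens.int().sum()

# Project only masked positions, then a summed cross entropy over them.
loss = F.cross_entropy(
    logits[masked_tokens], target[masked_tokens], reduction="sum"
)
print(sample_size.item(), float(loss))
```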
- - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - masked_tokens = sample["target"].ne(self.padding_idx) - sample_size = masked_tokens.int().sum() - - # Rare: when all tokens are masked, project all tokens. - # We use torch.where to avoid device-to-host transfers, - # except on CPU where torch.where is not well supported - # (see github.com/pytorch/pytorch/issues/26247). - if self.tpu: - masked_tokens = None # always project all tokens on TPU - elif masked_tokens.device == torch.device("cpu"): - if not masked_tokens.any(): - masked_tokens = None - else: - masked_tokens = torch.where( - masked_tokens.any(), - masked_tokens, - masked_tokens.new([True]), - ) - - logits = model(**sample["net_input"], masked_tokens=masked_tokens)[0] - targets = model.get_targets(sample, [logits]) - if masked_tokens is not None: - targets = targets[masked_tokens] - - loss = modules.cross_entropy( - logits.view(-1, logits.size(-1)), - targets.view(-1), - reduction="sum", - ignore_index=self.padding_idx, - ) - - logging_output = { - "loss": loss if self.tpu else loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["nsentences"], - "sample_size": sample_size, - } - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg) - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/kosmos-g/fairseq/fairseq/criterions/model_criterion.py b/kosmos-g/fairseq/fairseq/criterions/model_criterion.py deleted file mode 100644 index f9a810d83..000000000 --- a/kosmos-g/fairseq/fairseq/criterions/model_criterion.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from dataclasses import dataclass, field -from typing import Dict, List - -import torch - -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass - - -logger = logging.getLogger(__name__) - - -@dataclass -class ModelCriterionConfig(FairseqDataclass): - loss_weights: Dict[str, float] = field( - default_factory=dict, - metadata={"help": "weights for the loss terms"}, - ) - log_keys: List[str] = field( - default_factory=list, - metadata={"help": "additional output keys to log"}, - ) - - -@register_criterion("model", dataclass=ModelCriterionConfig) -class ModelCriterion(FairseqCriterion): - """ - This criterion relies on the model to supply losses. - The losses should be a dictionary of name -> scalar returned by - the model either by including it in the net_output dict or by - implementing a get_losses(net_output, sample) method. 
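The contract is easiest to see with a toy `net_output`. A simplified sketch; it uses `dict.get` with a default of 1.0 rather than the criterion's strict `KeyError` handling, and the loss names are made up:

```python
import torch

# The model returns a "losses" dict; each term is scaled by loss_weights
# (weight 1.0 when no weight is configured) and the terms are summed.
net_output = {
    "losses": {"recon": torch.tensor(2.0), "kl": torch.tensor(0.5)},
    "sample_size": 4,
}
loss_weights = {"recon": 1.0, "kl": 0.1}

scaled = {
    name: loss_weights.get(name, 1.0) * val.float()
    for name, val in net_output["losses"].items()
    if val is not None
}
loss = sum(scaled.values())
print(float(loss))  # 2.0 + 0.05
```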
The final loss is - a scaled sum of all losses according to weights in loss_weights. - If no weights are provided, then all losses are scaled by 1.0. - - The losses will be automatically logged. Additional keys from - net_output dict can be logged via the log_keys parameter. - """ - - def __init__(self, task, loss_weights=None, log_keys=None): - super().__init__(task) - self.loss_weights = loss_weights - self.log_keys = log_keys - - def forward(self, model, sample, reduce=True): - net_output = model(**sample["net_input"]) - - scaled_losses = {} - - if hasattr(model, "get_losses"): - losses = model.get_losses(net_output, sample) - elif isinstance(net_output, dict) and "losses" in net_output: - losses = net_output["losses"] - else: - raise Exception("Could not retrieve losses") - - for lk, p in losses.items(): - try: - coef = 1.0 if len(self.loss_weights) == 0 else self.loss_weights[lk] - except KeyError: - logger.error( - f"weight for loss {lk} is not in loss_weights ({self.loss_weights})" - ) - raise - if coef != 0 and p is not None: - scaled_losses[lk] = coef * p.float() - - loss = sum(scaled_losses.values()) - - if "sample_size" in net_output: - sample_size = net_output["sample_size"] - else: - sample_size = loss.numel() - - if reduce and loss.numel() > 1: - loss = loss.sum() - - logging_output = { - "loss": loss.data, - "ntokens": sample_size, - "nsentences": sample["id"].numel(), - "sample_size": sample_size, - "_world_size": 1, - } - - for lk in self.log_keys: - if lk in net_output and net_output[lk] is not None: - if not torch.is_tensor(net_output[lk]) or net_output[lk].numel() == 1: - logging_output[lk] = float(net_output[lk]) - else: - for i, v in enumerate(net_output[lk]): - logging_output[f"{lk}_{i}"] = float(v) - - if len(scaled_losses) > 1: - for lk, l in scaled_losses.items(): - if l.numel() > 1: - l = l.sum() - logging_output[f"loss_{lk}"] = l.item() - - if "logs" in net_output: - for lgw in net_output["logs"]: - logging_output[lgw] = net_output["logs"][lgw] - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs)) - ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs)) - nsentences = utils.item( - sum(log.get("nsentences", 0) for log in logging_outputs) - ) - sample_size = utils.item( - sum(log.get("sample_size", 0) for log in logging_outputs) - ) - - metrics.log_scalar("loss", loss_sum / sample_size, sample_size, round=3) - metrics.log_scalar("ntokens", ntokens) - metrics.log_scalar("nsentences", nsentences) - - builtin_keys = { - "loss", - "ntokens", - "nsentences", - "sample_size", - "_world_size", - } - - world_size = utils.item( - sum(log.get("_world_size", 0) for log in logging_outputs) - ) - - for k in logging_outputs[0]: - if k not in builtin_keys: - val = sum(log.get(k, 0) for log in logging_outputs) - if k.startswith("loss_"): - metrics.log_scalar(k, val / sample_size, sample_size, round=3) - else: - metrics.log_scalar(k, val / world_size, round=3) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. 
- """ - return True diff --git a/kosmos-g/fairseq/fairseq/criterions/nat_loss.py b/kosmos-g/fairseq/fairseq/criterions/nat_loss.py deleted file mode 100644 index 7dac32fba..000000000 --- a/kosmos-g/fairseq/fairseq/criterions/nat_loss.py +++ /dev/null @@ -1,180 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from torch import Tensor - -from dataclasses import dataclass, field - - -@dataclass -class LabelSmoothedDualImitationCriterionConfig(FairseqDataclass): - label_smoothing: float = field( - default=0.0, - metadata={"help": "epsilon for label smoothing, 0 means no label smoothing"}, - ) - - -@register_criterion("nat_loss", dataclass=LabelSmoothedDualImitationCriterionConfig) -class LabelSmoothedDualImitationCriterion(FairseqCriterion): - def __init__(self, task, label_smoothing): - super().__init__(task) - self.label_smoothing = label_smoothing - - def _compute_loss( - self, outputs, targets, masks=None, label_smoothing=0.0, name="loss", factor=1.0 - ): - """ - outputs: batch x len x d_model - targets: batch x len - masks: batch x len - - policy_logprob: if there is some policy - depends on the likelihood score as rewards. - """ - - def mean_ds(x: Tensor, dim=None) -> Tensor: - return ( - x.float().mean().type_as(x) - if dim is None - else x.float().mean(dim).type_as(x) - ) - - if masks is not None: - outputs, targets = outputs[masks], targets[masks] - - if masks is not None and not masks.any(): - nll_loss = torch.tensor(0) - loss = nll_loss - else: - logits = F.log_softmax(outputs, dim=-1) - if targets.dim() == 1: - losses = F.nll_loss(logits, targets.to(logits.device), reduction="none") - - else: # soft-labels - losses = F.kl_div(logits, targets.to(logits.device), reduction="none") - losses = losses.sum(-1) - - nll_loss = mean_ds(losses) - if label_smoothing > 0: - loss = ( - nll_loss * (1 - label_smoothing) - mean_ds(logits) * label_smoothing - ) - else: - loss = nll_loss - - loss = loss * factor - return {"name": name, "loss": loss, "nll_loss": nll_loss, "factor": factor} - - def _custom_loss(self, loss, name="loss", factor=1.0): - return {"name": name, "loss": loss, "factor": factor} - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. 
- Returns a tuple with three elements:
- 1) the loss
- 2) the sample size, which is used as the denominator for the gradient
- 3) logging outputs to display while training
- """
- nsentences, ntokens = sample["nsentences"], sample["ntokens"]
-
- # B x T
- src_tokens, src_lengths = (
- sample["net_input"]["src_tokens"],
- sample["net_input"]["src_lengths"],
- )
- tgt_tokens, prev_output_tokens = sample["target"], sample["prev_target"]
-
- outputs = model(src_tokens, src_lengths, prev_output_tokens, tgt_tokens)
- losses, nll_loss = [], []
-
- for obj in outputs:
- if outputs[obj].get("loss", None) is None:
- _losses = self._compute_loss(
- outputs[obj].get("out"),
- outputs[obj].get("tgt"),
- outputs[obj].get("mask", None),
- outputs[obj].get("ls", 0.0),
- name=obj + "-loss",
- factor=outputs[obj].get("factor", 1.0),
- )
- else:
- _losses = self._custom_loss(
- outputs[obj].get("loss"),
- name=obj + "-loss",
- factor=outputs[obj].get("factor", 1.0),
- )
-
- losses += [_losses]
- if outputs[obj].get("nll_loss", False):
- nll_loss += [_losses.get("nll_loss", 0.0)]
-
- loss = sum(l["loss"] for l in losses)
- nll_loss = sum(l for l in nll_loss) if len(nll_loss) > 0 else loss.new_tensor(0)
-
- # NOTE:
- # we don't need to use sample_size as denominator for the gradient
- # here sample_size is just used for logging
- sample_size = 1
- logging_output = {
- "loss": loss.data,
- "nll_loss": nll_loss.data,
- "ntokens": ntokens,
- "nsentences": nsentences,
- "sample_size": sample_size,
- }
-
- for l in losses:
- logging_output[l["name"]] = (
- utils.item(l["loss"].data / l["factor"])
- if reduce
- else l["loss"].data / l["factor"]
- )
-
- return loss, sample_size, logging_output
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- """Aggregate logging outputs from data parallel training."""
- sample_size = utils.item(
- sum(log.get("sample_size", 0) for log in logging_outputs)
- )
- loss = utils.item(sum(log.get("loss", 0) for log in logging_outputs))
- nll_loss = utils.item(sum(log.get("nll_loss", 0) for log in logging_outputs))
-
- metrics.log_scalar(
- "loss", loss / sample_size / math.log(2), sample_size, round=3
- )
- metrics.log_scalar(
- "nll_loss", nll_loss / sample_size / math.log(2), sample_size, round=3
- )
- metrics.log_derived(
- "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg)
- )
-
- for key in logging_outputs[0]:
- if key[-5:] == "-loss":
- val = sum(log.get(key, 0) for log in logging_outputs)
- metrics.log_scalar(
- key[:-5],
- val / sample_size / math.log(2) if sample_size > 0 else 0.0,
- sample_size,
- round=3,
- )
-
- @staticmethod
- def logging_outputs_can_be_summed() -> bool:
- """
- Whether the logging outputs returned by `forward` can be summed
- across workers prior to calling `reduce_metrics`. Setting this
- to True will improve distributed training speed.
- """
- return True
diff --git a/kosmos-g/fairseq/fairseq/criterions/sentence_prediction.py b/kosmos-g/fairseq/fairseq/criterions/sentence_prediction.py
deleted file mode 100644
index b402d7603..000000000
--- a/kosmos-g/fairseq/fairseq/criterions/sentence_prediction.py
+++ /dev/null
@@ -1,141 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree. 
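The two branches of the criterion below, summed NLL for classification heads and summed MSE when `regression_target` is set, in isolation. A minimal sketch with toy tensors:

```python
import torch
import torch.nn.functional as F

# Classification branch: log-softmax in float32, summed NLL.
logits = torch.randn(4, 3)            # (nsentences, num_classes), toy
targets = torch.randint(3, (4,))
lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)
cls_loss = F.nll_loss(lprobs, targets, reduction="sum")

# Regression branch (regression_target=True): summed MSE on flat floats.
reg_logits = torch.randn(4)
reg_targets = torch.rand(4)
reg_loss = F.mse_loss(reg_logits.float(), reg_targets.float(), reduction="sum")
```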
- -import math -from dataclasses import dataclass, field - -import torch -import torch.nn.functional as F - -from fairseq import metrics -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class SentencePredictionConfig(FairseqDataclass): - classification_head_name: str = field( - default="sentence_classification_head", - metadata={"help": "name of the classification head to use"}, - ) - regression_target: bool = field( - default=False, - ) - - -@register_criterion("sentence_prediction", dataclass=SentencePredictionConfig) -class SentencePredictionCriterion(FairseqCriterion): - def __init__(self, cfg: SentencePredictionConfig, task): - super().__init__(task) - self.classification_head_name = cfg.classification_head_name - self.regression_target = cfg.regression_target - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - assert ( - hasattr(model, "classification_heads") - and self.classification_head_name in model.classification_heads - ), "model must provide sentence classification head for --criterion=sentence_prediction" - - logits, _ = model( - **sample["net_input"], - features_only=True, - classification_head_name=self.classification_head_name, - ) - targets = model.get_targets(sample, [logits]).view(-1) - sample_size = targets.numel() - - if not self.regression_target: - lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32) - task_loss = F.nll_loss(lprobs, targets, reduction="sum") - else: - logits = logits.view(-1).float() - targets = targets.float() - task_loss = F.mse_loss(logits, targets, reduction="sum") - - logging_output = {} - loss = task_loss - # mha & ffn regularization update - if ( - hasattr(model.args, "mha_reg_scale_factor") - and model.args.mha_reg_scale_factor != 0.0 - ): - mha_reg_loss = model._get_adaptive_head_loss() - loss += mha_reg_loss - logging_output.update({"mha_reg_loss": mha_reg_loss}) - if ( - hasattr(model.args, "ffn_reg_scale_factor") - and model.args.ffn_reg_scale_factor != 0.0 - ): - ffn_reg_loss = model._get_adaptive_ffn_loss() - loss += ffn_reg_loss - logging_output.update({"ffn_reg_loss": ffn_reg_loss}) - - logging_output.update( - { - "loss": loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample_size, - "sample_size": sample_size, - } - ) - if not self.regression_target: - preds = logits.argmax(dim=1) - logging_output["ncorrect"] = (preds == targets).sum() - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - mha_reg_loss_sum = sum(log.get("mha_reg_loss", 0) for log in logging_outputs) - ffn_reg_loss_sum = sum(log.get("ffn_reg_loss", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - if mha_reg_loss_sum: - metrics.log_scalar( - "mha_reg_loss", - mha_reg_loss_sum / sample_size / math.log(2), - sample_size, - round=3, - ) - if ffn_reg_loss_sum: - metrics.log_scalar( 
- "ffn_reg_loss", - ffn_reg_loss_sum / sample_size / math.log(2), - sample_size, - round=3, - ) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - - if len(logging_outputs) > 0 and "ncorrect" in logging_outputs[0]: - ncorrect = sum(log.get("ncorrect", 0) for log in logging_outputs) - metrics.log_scalar( - "accuracy", 100.0 * ncorrect / nsentences, nsentences, round=1 - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/kosmos-g/fairseq/fairseq/criterions/sentence_ranking.py b/kosmos-g/fairseq/fairseq/criterions/sentence_ranking.py deleted file mode 100644 index d4c76341d..000000000 --- a/kosmos-g/fairseq/fairseq/criterions/sentence_ranking.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion - - -@register_criterion("sentence_ranking") -class SentenceRankingCriterion(FairseqCriterion): - def __init__(self, task, ranking_head_name, save_predictions, num_classes): - super().__init__(task) - self.ranking_head_name = ranking_head_name - if save_predictions is not None: - self.prediction_h = open(save_predictions, "w") - else: - self.prediction_h = None - self.num_classes = num_classes - - def __del__(self): - if self.prediction_h is not None: - self.prediction_h.close() - - @staticmethod - def add_args(parser): - # fmt: off - parser.add_argument('--save-predictions', metavar='FILE', - help='file to save predictions to') - parser.add_argument('--ranking-head-name', - default='sentence_classification_head', - help='name of the ranking head to use') - # fmt: on - - def forward(self, model, sample, reduce=True): - """Compute ranking loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - assert ( - hasattr(model, "classification_heads") - and self.ranking_head_name in model.classification_heads - ), "model must provide sentence ranking head for --criterion=sentence_ranking" - - scores = [] - for idx in range(self.num_classes): - score, _ = model( - **sample["net_input{idx}".format(idx=idx + 1)], - classification_head_name=self.ranking_head_name, - ) - scores.append(score) - - logits = torch.cat(scores, dim=1) - sample_size = logits.size(0) - - if "target" in sample: - targets = model.get_targets(sample, [logits]).view(-1) - lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32) - loss = F.nll_loss(lprobs, targets, reduction="sum") - else: - targets = None - loss = torch.tensor(0.0, requires_grad=True) - - if self.prediction_h is not None: - preds = logits.argmax(dim=1) - for i, (id, pred) in enumerate(zip(sample["id"].tolist(), preds.tolist())): - if targets is not None: - label = targets[i].item() - print("{}\t{}\t{}".format(id, pred, label), file=self.prediction_h) - else: - print("{}\t{}".format(id, pred), file=self.prediction_h) - - logging_output = { - "loss": loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample_size, - "sample_size": sample_size, - } - if targets is not None: - logging_output["ncorrect"] = (logits.argmax(dim=1) == targets).sum() - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - - if len(logging_outputs) > 0 and "ncorrect" in logging_outputs[0]: - ncorrect = sum(log.get("ncorrect", 0) for log in logging_outputs) - metrics.log_scalar( - "accuracy", 100.0 * ncorrect / nsentences, nsentences, round=1 - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/kosmos-g/fairseq/fairseq/criterions/speech_to_speech_criterion.py b/kosmos-g/fairseq/fairseq/criterions/speech_to_speech_criterion.py deleted file mode 100644 index 7fba673d2..000000000 --- a/kosmos-g/fairseq/fairseq/criterions/speech_to_speech_criterion.py +++ /dev/null @@ -1,310 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
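The bookkeeping that `MultitaskCriterion` below adds is a weighted sum over per-task losses with per-task nested logging. A pure-Python sketch with made-up task names and values:

```python
# Each auxiliary task contributes weight * task_loss, and its logging
# output is nested under the task name (values here are illustrative).
task_losses = {"ctc_phone": 3.2, "decoder_char": 1.1}
loss_weights = {"ctc_phone": 0.5, "decoder_char": 1.0}

loss, logging_output = 0.0, {}
for name, task_loss in task_losses.items():
    loss += loss_weights[name] * task_loss
    logging_output[name] = {"loss": task_loss, "loss_weight": loss_weights[name]}
```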
- -import math -import torch - -from fairseq import metrics, utils -from fairseq.criterions import register_criterion -from fairseq.criterions.ctc import CtcCriterion -from fairseq.criterions.label_smoothed_cross_entropy import ( - LabelSmoothedCrossEntropyCriterion, - LabelSmoothedCrossEntropyCriterionConfig, -) -from fairseq.criterions.tacotron2_loss import ( - Tacotron2Criterion, - Tacotron2CriterionConfig, -) - - -class MultitaskCriterion: - def __init__(self, multitask_tasks): - self.multitask_criterion = {} - self.multitask_loss_weight = {} - for task_name, task_obj in multitask_tasks.items(): - if task_obj.args.decoder_type == "ctc": - self.multitask_criterion[task_name] = CtcCriterion( - task_obj.args.criterion_cfg, task_obj - ) - else: - self.multitask_criterion[ - task_name - ] = LabelSmoothedCrossEntropyCriterion( - task_obj, - task_obj.args.criterion_cfg.sentence_avg, - label_smoothing=task_obj.args.criterion_cfg.label_smoothing, - ) - - def set_multitask_loss_weight(self, task_name, weight=0.0): - self.multitask_loss_weight[task_name] = weight - - def get_multitask_loss(self, model, sample, model_out): - logging_output = {} - loss = 0.0 - for task_name, task_criterion in self.multitask_criterion.items(): - layer_id = task_criterion.task.args.input_layer - if isinstance(task_criterion, CtcCriterion): - if task_criterion.task.args.input_from == "encoder": - non_padding_mask = ~model_out["encoder_padding_mask"][0] - input_lengths = non_padding_mask.long().sum(-1) - task_sample = { - "net_input": { - "src_tokens": model_out["encoder_states"][ - layer_id - ], # check batch idx - "src_lengths": input_lengths, - }, - "id": sample["id"], - } - else: - task_sample = { - "net_input": { - "src_tokens": model_out["inner_states"][layer_id], - "src_lengths": sample["target_lengths"], - }, - "id": sample["id"], - } - else: - task_sample = { - "net_input": { - "src_tokens": sample["multitask"][task_name]["net_input"][ - "prev_output_tokens" - ], - "encoder_out": { - "encoder_out": [model_out["encoder_states"][layer_id]], - "encoder_padding_mask": model_out["encoder_padding_mask"], - }, - } - } - - for key in ["target", "target_lengths", "ntokens"]: - task_sample[key] = sample["multitask"][task_name][key] - - task_loss, task_sample_size, task_logging_output = task_criterion( - model.multitask_decoders[task_name], task_sample - ) - - loss = loss + self.multitask_loss_weight[task_name] * task_loss - task_logging_output["loss_weight"] = self.multitask_loss_weight[task_name] - logging_output[task_name] = task_logging_output - return loss, logging_output - - @classmethod - def reduce_metrics(cls, logging_outputs) -> None: - for task_name in logging_outputs[0]["multitask"].keys(): - # different criterion may return different logging - # currently only reduce on loss, the most common one - # ideally the way that losses are reduced should also depend on the task type - loss_sum = sum( - log["multitask"][task_name].get("loss", 0) for log in logging_outputs - ) - sample_size = sum( - log["multitask"][task_name].get("sample_size", 0) - for log in logging_outputs - ) - - metrics.log_scalar( - f"multitask_{task_name}_loss", - loss_sum / sample_size / math.log(2), - sample_size, - round=3, - ) - - loss_weight = logging_outputs[0]["multitask"][task_name].get( - "loss_weight", 0 - ) - metrics.log_scalar( - f"multitask_{task_name}_loss_weight", - loss_weight, - weight=0, - priority=250, - ) - - -@register_criterion( - "speech_to_unit", dataclass=LabelSmoothedCrossEntropyCriterionConfig -) -class 
SpeechToUnitMultitaskTaskCriterion( - LabelSmoothedCrossEntropyCriterion, MultitaskCriterion -): - def __init__( - self, - task, - sentence_avg, - label_smoothing, - ignore_prefix_size=0, - report_accuracy=False, - ): - super().__init__( - task, sentence_avg, label_smoothing, ignore_prefix_size, report_accuracy - ) - MultitaskCriterion.__init__(self, task.multitask_tasks) - - def forward(self, model, sample, reduce=True): - net_output, extra = model( - src_tokens=sample["net_input"]["src_tokens"], - src_lengths=sample["net_input"]["src_lengths"], - prev_output_tokens=sample["net_input"]["prev_output_tokens"], - tgt_speaker=sample["net_input"]["tgt_speaker"], - return_all_hiddens=True, - ) - - loss, nll_loss = self.compute_loss(model, [net_output], sample, reduce=reduce) - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - logging_output = { - "loss": loss.data, - "nll_loss": nll_loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - } - if self.report_accuracy: - n_correct, total = self.compute_accuracy(model, [net_output], sample) - logging_output["n_correct"] = utils.item(n_correct.data) - logging_output["total"] = utils.item(total.data) - - if len(self.multitask_criterion) == 0: - return loss, sample_size, logging_output - - # multitask - multitask_loss, multitask_log = self.get_multitask_loss(model, sample, extra) - loss += multitask_loss - logging_output["multitask"] = multitask_log - - return loss, sample_size, logging_output - - @classmethod - def reduce_metrics(cls, logging_outputs) -> None: - super().reduce_metrics(logging_outputs) - - # inference metrics - if "targ_frames" in logging_outputs[0]: - n = sum(log.get("norm_frames", 0) for log in logging_outputs) - for key, new_key in [ - ("mcd_loss", "mcd_loss"), - ("pred_frames", "pred_ratio"), - ("nins", "ins_rate"), - ("ndel", "del_rate"), - ]: - val = sum(log.get(key, 0) for log in logging_outputs) - metrics.log_scalar(new_key, val / n, n, round=3) - - if "multitask" not in logging_outputs[0]: - return - - MultitaskCriterion.reduce_metrics(logging_outputs) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. 
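Both spectrogram criteria below build per-frame EOS targets the same way: a 0/1 matrix that fires on the last valid frame of each utterance. A minimal sketch with toy lengths:

```python
import torch

target_lengths = torch.tensor([3, 5])
bsz, max_len = target_lengths.numel(), int(target_lengths.max())

# Broadcast lengths against frame indices; 1.0 exactly at the last frame.
feat_len = target_lengths.view(bsz, 1).expand(-1, max_len)
eos_tgt = torch.arange(max_len).view(1, max_len).expand(bsz, -1)
eos_tgt = (eos_tgt == (feat_len - 1)).float()
# tensor([[0., 0., 1., 0., 0.],
#         [0., 0., 0., 0., 1.]])
```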
- """ - return False - - -@register_criterion("speech_to_spectrogram", dataclass=Tacotron2CriterionConfig) -class SpeechToSpectrogramMultitaskTaskCriterion(Tacotron2Criterion, MultitaskCriterion): - def __init__( - self, - task, - sentence_avg, - use_guided_attention_loss, - guided_attention_loss_sigma, - bce_pos_weight, - ctc_weight, - ): - super().__init__( - task, - sentence_avg, - use_guided_attention_loss, - guided_attention_loss_sigma, - bce_pos_weight, - ctc_weight, - ) - MultitaskCriterion.__init__(self, task.multitask_tasks) - - def forward(self, model, sample, reduction="mean"): - bsz, max_len, _ = sample["target"].size() - feat_tgt = sample["target"] - feat_len = sample["target_lengths"].view(bsz, 1).expand(-1, max_len) - eos_tgt = torch.arange(max_len).to(sample["target"].device) - eos_tgt = eos_tgt.view(1, max_len).expand(bsz, -1) - eos_tgt = (eos_tgt == (feat_len - 1)).float() - - feat_out, eos_out, extra = model( - src_tokens=sample["net_input"]["src_tokens"], - src_lengths=sample["net_input"]["src_lengths"], - prev_output_tokens=sample["net_input"]["prev_output_tokens"], - tgt_speaker=sample["net_input"]["tgt_speaker"], - target_lengths=sample["target_lengths"], - return_all_hiddens=True, - ) - - l1_loss, mse_loss, eos_loss = self.compute_loss( - extra["feature_out"], - feat_out, - eos_out, - feat_tgt, - eos_tgt, - sample["target_lengths"], - reduction, - ) - attn_loss = torch.tensor(0.0).type_as(l1_loss) - if self.guided_attn is not None: - attn_loss = self.guided_attn( - extra["attn"], - sample["net_input"]["src_lengths"], - sample["target_lengths"], - reduction, - ) - loss = ( - l1_loss + mse_loss + eos_loss + attn_loss - ) # do not include ctc loss as there's no text target - - sample_size = sample["nsentences"] if self.sentence_avg else sample["ntokens"] - logging_output = { - "loss": utils.item(loss.data), - "ntokens": sample["ntokens"], - "nsentences": sample["nsentences"], - "sample_size": sample_size, - "l1_loss": utils.item(l1_loss.data), - "mse_loss": utils.item(mse_loss.data), - "eos_loss": utils.item(eos_loss.data), - "attn_loss": utils.item(attn_loss.data), - } - - if len(self.multitask_criterion) == 0: - return loss, sample_size, logging_output - - # multitask - multitask_loss, multitask_log = self.get_multitask_loss(model, sample, extra) - loss += multitask_loss - logging_output["multitask"] = multitask_log - return loss, sample_size, logging_output - - @classmethod - def reduce_metrics(cls, logging_outputs) -> None: - super().reduce_metrics(logging_outputs) - - # inference metrics - if "targ_frames" in logging_outputs[0]: - n = sum(log.get("norm_frames", 0) for log in logging_outputs) - for key, new_key in [ - ("mcd_loss", "mcd_loss"), - ("pred_frames", "pred_ratio"), - ("nins", "ins_rate"), - ("ndel", "del_rate"), - ]: - val = sum(log.get(key, 0) for log in logging_outputs) - metrics.log_scalar(new_key, val / n, n, round=3) - - if "multitask" not in logging_outputs[0]: - return - - MultitaskCriterion.reduce_metrics(logging_outputs) diff --git a/kosmos-g/fairseq/fairseq/criterions/tacotron2_loss.py b/kosmos-g/fairseq/fairseq/criterions/tacotron2_loss.py deleted file mode 100644 index 04747bf1a..000000000 --- a/kosmos-g/fairseq/fairseq/criterions/tacotron2_loss.py +++ /dev/null @@ -1,227 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. 
An additional grant of patent rights -# can be found in the PATENTS file in the same directory. - -import logging -from typing import Any, Dict, List -from functools import lru_cache -from dataclasses import dataclass, field - -import torch -from omegaconf import II - -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from fairseq.data.data_utils import lengths_to_mask -import torch.nn.functional as F - - -logger = logging.getLogger(__name__) - - -@dataclass -class Tacotron2CriterionConfig(FairseqDataclass): - bce_pos_weight: float = field( - default=1.0, - metadata={"help": "weight of positive examples for BCE loss"}, - ) - use_guided_attention_loss: bool = field( - default=False, - metadata={"help": "use guided attention loss"}, - ) - guided_attention_loss_sigma: float = field( - default=0.4, - metadata={"help": "sigma for the guided attention loss"}, - ) - ctc_weight: float = field(default=0.0, metadata={"help": "weight for CTC loss"}) - sentence_avg: bool = II("optimization.sentence_avg") - - -class GuidedAttentionLoss(torch.nn.Module): - """ - Efficiently Trainable Text-to-Speech System Based on Deep Convolutional - Networks with Guided Attention (https://arxiv.org/abs/1710.08969) - """ - - def __init__(self, sigma): - super().__init__() - self.sigma = sigma - - @staticmethod - @lru_cache(maxsize=8) - def _get_weight(s_len, t_len, sigma): - grid_x, grid_y = torch.meshgrid(torch.arange(t_len), torch.arange(s_len)) - grid_x = grid_x.to(s_len.device) - grid_y = grid_y.to(s_len.device) - w = (grid_y.float() / s_len - grid_x.float() / t_len) ** 2 - return 1.0 - torch.exp(-w / (2 * (sigma ** 2))) - - def _get_weights(self, src_lens, tgt_lens): - bsz, max_s_len, max_t_len = len(src_lens), max(src_lens), max(tgt_lens) - weights = torch.zeros((bsz, max_t_len, max_s_len)) - for i, (s_len, t_len) in enumerate(zip(src_lens, tgt_lens)): - weights[i, :t_len, :s_len] = self._get_weight(s_len, t_len, self.sigma) - return weights - - @staticmethod - def _get_masks(src_lens, tgt_lens): - in_masks = lengths_to_mask(src_lens) - out_masks = lengths_to_mask(tgt_lens) - return out_masks.unsqueeze(2) & in_masks.unsqueeze(1) - - def forward(self, attn, src_lens, tgt_lens, reduction="mean"): - weights = self._get_weights(src_lens, tgt_lens).to(attn.device) - masks = self._get_masks(src_lens, tgt_lens).to(attn.device) - loss = (weights * attn.transpose(1, 2)).masked_select(masks) - loss = torch.sum(loss) if reduction == "sum" else torch.mean(loss) - return loss - - -@register_criterion("tacotron2", dataclass=Tacotron2CriterionConfig) -class Tacotron2Criterion(FairseqCriterion): - def __init__( - self, - task, - sentence_avg, - use_guided_attention_loss, - guided_attention_loss_sigma, - bce_pos_weight, - ctc_weight, - ): - super().__init__(task) - self.sentence_avg = sentence_avg - self.bce_pos_weight = bce_pos_weight - - self.guided_attn = None - if use_guided_attention_loss: - self.guided_attn = GuidedAttentionLoss(guided_attention_loss_sigma) - self.ctc_weight = ctc_weight - - def forward(self, model, sample, reduction="mean"): - bsz, max_len, _ = sample["target"].size() - feat_tgt = sample["target"] - feat_len = sample["target_lengths"].view(bsz, 1).expand(-1, max_len) - eos_tgt = torch.arange(max_len).to(sample["target"].device) - eos_tgt = eos_tgt.view(1, max_len).expand(bsz, -1) - eos_tgt = (eos_tgt == (feat_len - 1)).float() - src_tokens = sample["net_input"]["src_tokens"] - src_lens =
sample["net_input"]["src_lengths"] - tgt_lens = sample["target_lengths"] - - feat_out, eos_out, extra = model( - src_tokens=src_tokens, - src_lengths=src_lens, - prev_output_tokens=sample["net_input"]["prev_output_tokens"], - incremental_state=None, - target_lengths=tgt_lens, - speaker=sample["speaker"], - ) - - l1_loss, mse_loss, eos_loss = self.compute_loss( - extra["feature_out"], - feat_out, - eos_out, - feat_tgt, - eos_tgt, - tgt_lens, - reduction, - ) - attn_loss = torch.tensor(0.0).type_as(l1_loss) - if self.guided_attn is not None: - attn_loss = self.guided_attn(extra["attn"], src_lens, tgt_lens, reduction) - ctc_loss = torch.tensor(0.0).type_as(l1_loss) - if self.ctc_weight > 0.0: - net_output = (feat_out, eos_out, extra) - lprobs = model.get_normalized_probs(net_output, log_probs=True) - lprobs = lprobs.transpose(0, 1) # T x B x C - src_mask = lengths_to_mask(src_lens) - src_tokens_flat = src_tokens.masked_select(src_mask) - ctc_loss = ( - F.ctc_loss( - lprobs, - src_tokens_flat, - tgt_lens, - src_lens, - reduction=reduction, - zero_infinity=True, - ) - * self.ctc_weight - ) - loss = l1_loss + mse_loss + eos_loss + attn_loss + ctc_loss - - sample_size = sample["nsentences"] if self.sentence_avg else sample["ntokens"] - logging_output = { - "loss": utils.item(loss.data), - "ntokens": sample["ntokens"], - "nsentences": sample["nsentences"], - "sample_size": sample_size, - "l1_loss": utils.item(l1_loss.data), - "mse_loss": utils.item(mse_loss.data), - "eos_loss": utils.item(eos_loss.data), - "attn_loss": utils.item(attn_loss.data), - "ctc_loss": utils.item(ctc_loss.data), - } - return loss, sample_size, logging_output - - def compute_loss( - self, - feat_out, - feat_out_post, - eos_out, - feat_tgt, - eos_tgt, - tgt_lens, - reduction="mean", - ): - mask = lengths_to_mask(tgt_lens) - _eos_out = eos_out[mask].squeeze() - _eos_tgt = eos_tgt[mask] - _feat_tgt = feat_tgt[mask] - _feat_out = feat_out[mask] - _feat_out_post = feat_out_post[mask] - - l1_loss = F.l1_loss(_feat_out, _feat_tgt, reduction=reduction) + F.l1_loss( - _feat_out_post, _feat_tgt, reduction=reduction - ) - mse_loss = F.mse_loss(_feat_out, _feat_tgt, reduction=reduction) + F.mse_loss( - _feat_out_post, _feat_tgt, reduction=reduction - ) - eos_loss = F.binary_cross_entropy_with_logits( - _eos_out, - _eos_tgt, - pos_weight=torch.tensor(self.bce_pos_weight), - reduction=reduction, - ) - return l1_loss, mse_loss, eos_loss - - @classmethod - def reduce_metrics(cls, logging_outputs: List[Dict[str, Any]]) -> None: - ns = [log.get("sample_size", 0) for log in logging_outputs] - ntot = sum(ns) - ws = [n / (ntot + 1e-8) for n in ns] - for key in ["loss", "l1_loss", "mse_loss", "eos_loss", "attn_loss", "ctc_loss"]: - vals = [log.get(key, 0) for log in logging_outputs] - val = sum(val * w for val, w in zip(vals, ws)) - metrics.log_scalar(key, val, ntot, round=3) - metrics.log_scalar("sample_size", ntot, len(logging_outputs)) - - # inference metrics - if "targ_frames" not in logging_outputs[0]: - return - n = sum(log.get("targ_frames", 0) for log in logging_outputs) - for key, new_key in [ - ("mcd_loss", "mcd_loss"), - ("pred_frames", "pred_ratio"), - ("nins", "ins_rate"), - ("ndel", "del_rate"), - ]: - val = sum(log.get(key, 0) for log in logging_outputs) - metrics.log_scalar(new_key, val / n, n, round=3) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - return False diff --git a/kosmos-g/fairseq/fairseq/criterions/wav2vec_criterion.py b/kosmos-g/fairseq/fairseq/criterions/wav2vec_criterion.py deleted file mode 
100644 index e37274d5a..000000000 --- a/kosmos-g/fairseq/fairseq/criterions/wav2vec_criterion.py +++ /dev/null @@ -1,230 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from dataclasses import dataclass, field -from typing import List, Optional - -import torch -import torch.nn.functional as F -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass -from fairseq.logging.meters import safe_round -from fairseq.utils import is_xla_tensor - - -@dataclass -class Wav2VecCriterionConfig(FairseqDataclass): - infonce: bool = field( - default=False, - metadata={ - "help": "if set, uses cross entropy instead of binary cross entropy (i.e. InfoNCE loss)" - }, - ) - loss_weights: Optional[List[float]] = field( - default=None, - metadata={"help": "weights for additional loss terms (not first one)"}, - ) - log_keys: List[str] = field( - default_factory=lambda: [], - metadata={"help": "output keys to log"}, - ) - - -@register_criterion("wav2vec", dataclass=Wav2VecCriterionConfig) -class Wav2vecCriterion(FairseqCriterion): - def __init__(self, task, infonce=False, loss_weights=None, log_keys=None): - super().__init__(task) - self.infonce = infonce - self.loss_weights = loss_weights - self.log_keys = [] if log_keys is None else log_keys - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - net_output = model(**sample["net_input"]) - logits = model.get_logits(net_output).float() - target = model.get_targets(sample, net_output) - self.xla = is_xla_tensor(logits) - - # XXX: handle weights on xla. - weights = None - if hasattr(model, "get_target_weights") and not self.infonce: - weights = model.get_target_weights(target, net_output) - if torch.is_tensor(weights): - weights = weights.float() - - losses = [] - - reduction = "none" if ((not reduce) or self.xla) else "sum" - if self.infonce: - loss = F.cross_entropy(logits, target, reduction=reduction) - else: - loss = F.binary_cross_entropy_with_logits( - logits, target.float(), weights, reduction=reduction - ) - - if self.xla: - # tpu-comment: since dynamic shapes lead to recompilations on xla, - # we don't shrink tensors using mask_indices. - # Instead, we use mask indices to adjust loss. 
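The tpu-comment above motivates a trick worth seeing in isolation. A minimal, hedged sketch (toy shapes and names, not part of the original file): multiplying the unreduced loss by a 0/1 mask yields the same sum as boolean indexing, but keeps tensor shapes fixed and so avoids XLA recompilation.

```python
# Sketch: mask-weighted loss (shape-static) vs. boolean indexing (dynamic).
import torch
import torch.nn.functional as F

logits = torch.randn(6, 4)                         # (positions, classes)
target = torch.zeros(6, dtype=torch.long)          # positive class at index 0
mi = torch.tensor([1.0, 1.0, 0.0, 1.0, 0.0, 1.0])  # 0/1 mask over positions

per_pos = F.cross_entropy(logits, target, reduction="none")
static_loss = (per_pos * mi).sum()       # shapes never change: XLA-friendly
dynamic_loss = per_pos[mi.bool()].sum()  # same value, data-dependent shape
assert torch.allclose(static_loss, dynamic_loss)
```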
- mi = ( - sample["net_input"]["mask_indices"] - .transpose(0, 1) # logits are transposed in `model.get_logits` - .reshape(logits.size(0)) - ) - loss = (loss * mi).sum() if reduce else (loss * mi) - - if "sample_size" in sample: - sample_size = sample["sample_size"] - elif "mask_indices" in sample["net_input"]: - sample_size = sample["net_input"]["mask_indices"].sum() - else: - sample_size = target.numel() if self.infonce else target.long().sum().item() - losses.append(loss.detach().clone()) - - if self.loss_weights is not None: - assert hasattr(model, "get_extra_losses") - extra_losses = model.get_extra_losses(net_output) - if torch.is_tensor(extra_losses): - extra_losses = [extra_losses] - if len(self.loss_weights) == 1 and len(extra_losses) != 1: - self.loss_weights = [self.loss_weights[0]] * len(extra_losses) - assert len(extra_losses) == len( - self.loss_weights - ), f"{len(extra_losses)}, {len(self.loss_weights)}" - for p, coef in zip(extra_losses, self.loss_weights): - if coef != 0 and p is not None: - p = coef * p.float() * sample_size - loss += p - losses.append(p) - - logging_output = { - "loss": loss.item() if (reduce and not self.xla) else loss.detach(), - "ntokens": sample_size, - "nsentences": sample["id"].numel(), - "sample_size": sample_size, - } - - for lk in self.log_keys: - # Only store "logits" and "target" for computing MAP and MAUC - # during validation - if lk == "logits": - if not self.training: - logging_output["logits"] = logits.cpu().numpy() - elif lk == "target": - if not self.training: - # If the targets have been mixed with the predictions of - # teacher models, find the original targets - if hasattr(model, "get_original_targets"): - original_target = model.get_original_targets(sample, net_output) - else: - original_target = target - logging_output["target"] = original_target.cpu().numpy() - elif lk in net_output: - value = net_output[lk] - if not is_xla_tensor(value): - value = float(value) - logging_output[lk] = value - - if len(losses) > 1: - for i, l in enumerate(losses): - logging_output[f"loss_{i}"] = l.item() if not self.xla else l.detach() - - if self.infonce: - with torch.no_grad(): - if logits.numel() == 0: - corr = 0 - count = 0 - else: - assert logits.dim() > 1, logits.shape - max = logits.argmax(-1) == 0 - min = logits.argmin(-1) == 0 - if is_xla_tensor(logits): - max, min = max * mi, min * mi - both = max & min - corr = max.long().sum() - both.long().sum() - count = mi.sum() - else: - both = max & min - corr = max.long().sum().item() - both.long().sum().item() - count = float(max.numel()) - - logging_output["correct"] = corr - logging_output["count"] = count - - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs)) - ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs)) - nsentences = utils.item( - sum(log.get("nsentences", 0) for log in logging_outputs) - ) - sample_size = utils.item( - sum(log.get("sample_size", 0) for log in logging_outputs) - ) - - metrics.log_scalar( - "loss", loss_sum / (sample_size or 1) / math.log(2), sample_size, round=3 - ) - metrics.log_scalar("ntokens", ntokens) - metrics.log_scalar("nsentences", nsentences) - - correct = sum(log.get("correct", 0) for log in logging_outputs) - metrics.log_scalar("_correct", correct) - - total = sum(log.get("count", 0) for log in logging_outputs) - metrics.log_scalar("_total", 
total) - - if total > 0: - metrics.log_derived( - "accuracy", - lambda meters: safe_round( - meters["_correct"].sum / meters["_total"].sum, 5 - ) - if meters["_total"].sum > 0 - else float("nan"), - ) - - builtin_keys = { - "loss", - "ntokens", - "nsentences", - "sample_size", - "correct", - "count", - } - - for k in logging_outputs[0]: - if k not in builtin_keys: - val = sum(log.get(k, 0) for log in logging_outputs) - if k.startswith("loss"): - metrics.log_scalar( - k, val / (sample_size or 1) / math.log(2), sample_size, round=3 - ) - else: - metrics.log_scalar(k, val / len(logging_outputs), round=3) - - # FIXME: revert when gather based xla reduction is implemented - # @staticmethod - # def logging_outputs_can_be_summed() -> bool: - def logging_outputs_can_be_summed(self) -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improve distributed training speed. - """ - # XXX: Gather based reduction not implemented for xla yet. - # So we fall back to sum-based reduction for xla. - return self.xla diff --git a/kosmos-g/fairseq/fairseq/data/__init__.py b/kosmos-g/fairseq/fairseq/data/__init__.py deleted file mode 100644 index 8acf2ca17..000000000 --- a/kosmos-g/fairseq/fairseq/data/__init__.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -from .dictionary import Dictionary, TruncatedDictionary - -from .fairseq_dataset import FairseqDataset, FairseqIterableDataset - -from .base_wrapper_dataset import BaseWrapperDataset - -from .add_target_dataset import AddTargetDataset -from .append_token_dataset import AppendTokenDataset -from .audio.raw_audio_dataset import BinarizedAudioDataset, FileAudioDataset -from .audio.hubert_dataset import HubertDataset -from .backtranslation_dataset import BacktranslationDataset -from .bucket_pad_length_dataset import BucketPadLengthDataset -from .colorize_dataset import ColorizeDataset -from .concat_dataset import ConcatDataset -from .concat_sentences_dataset import ConcatSentencesDataset -from .denoising_dataset import DenoisingDataset -from .id_dataset import IdDataset -from .indexed_dataset import ( - IndexedCachedDataset, - IndexedDataset, - IndexedRawTextDataset, - MMapIndexedDataset, -) -from .language_pair_dataset import LanguagePairDataset -from .list_dataset import ListDataset -from .lm_context_window_dataset import LMContextWindowDataset -from .lru_cache_dataset import LRUCacheDataset -from .mask_tokens_dataset import MaskTokensDataset -from .monolingual_dataset import MonolingualDataset -from .multi_corpus_sampled_dataset import MultiCorpusSampledDataset -from .nested_dictionary_dataset import NestedDictionaryDataset -from .noising import NoisingDataset -from .numel_dataset import NumelDataset -from .num_samples_dataset import NumSamplesDataset -from .offset_tokens_dataset import OffsetTokensDataset -from .pad_dataset import LeftPadDataset, PadDataset, RightPadDataset -from .prepend_dataset import PrependDataset -from .prepend_token_dataset import PrependTokenDataset -from .raw_label_dataset import RawLabelDataset -from .replace_dataset import ReplaceDataset -from .resampling_dataset import ResamplingDataset -from .roll_dataset import RollDataset -from .round_robin_zip_datasets import RoundRobinZipDatasets -from .sort_dataset import SortDataset -from
.strip_token_dataset import StripTokenDataset -from .subsample_dataset import SubsampleDataset -from .token_block_dataset import TokenBlockDataset -from .transform_eos_dataset import TransformEosDataset -from .transform_eos_lang_pair_dataset import TransformEosLangPairDataset -from .shorten_dataset import TruncateDataset, RandomCropDataset -from .multilingual.sampled_multi_dataset import SampledMultiDataset -from .multilingual.sampled_multi_epoch_dataset import SampledMultiEpochDataset -from .fasta_dataset import FastaDataset, EncodedFastaDataset -from .transform_eos_concat_langpair_dataset import TransformEosConcatLangPairDataset - -from .iterators import ( - CountingIterator, - EpochBatchIterator, - GroupedIterator, - ShardedIterator, -) - -__all__ = [ - "AddTargetDataset", - "AppendTokenDataset", - "BacktranslationDataset", - "BaseWrapperDataset", - "BinarizedAudioDataset", - "BucketPadLengthDataset", - "ColorizeDataset", - "ConcatDataset", - "ConcatSentencesDataset", - "CountingIterator", - "DenoisingDataset", - "Dictionary", - "EncodedFastaDataset", - "EpochBatchIterator", - "FairseqDataset", - "FairseqIterableDataset", - "FastaDataset", - "FileAudioDataset", - "GroupedIterator", - "HubertDataset", - "IdDataset", - "IndexedCachedDataset", - "IndexedDataset", - "IndexedRawTextDataset", - "LanguagePairDataset", - "LeftPadDataset", - "ListDataset", - "LMContextWindowDataset", - "LRUCacheDataset", - "MaskTokensDataset", - "MMapIndexedDataset", - "MonolingualDataset", - "MultiCorpusSampledDataset", - "NestedDictionaryDataset", - "NoisingDataset", - "NumelDataset", - "NumSamplesDataset", - "OffsetTokensDataset", - "PadDataset", - "PrependDataset", - "PrependTokenDataset", - "RandomCropDataset", - "RawLabelDataset", - "ResamplingDataset", - "ReplaceDataset", - "RightPadDataset", - "RollDataset", - "RoundRobinZipDatasets", - "SampledMultiDataset", - "SampledMultiEpochDataset", - "ShardedIterator", - "SortDataset", - "StripTokenDataset", - "SubsampleDataset", - "TokenBlockDataset", - "TransformEosDataset", - "TransformEosLangPairDataset", - "TransformEosConcatLangPairDataset", - "TruncateDataset", - "TruncatedDictionary", -] diff --git a/kosmos-g/fairseq/fairseq/data/add_target_dataset.py b/kosmos-g/fairseq/fairseq/data/add_target_dataset.py deleted file mode 100644 index bf89f2565..000000000 --- a/kosmos-g/fairseq/fairseq/data/add_target_dataset.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from . 
import BaseWrapperDataset, data_utils -from fairseq.data.text_compressor import TextCompressor, TextCompressionLevel - - -class AddTargetDataset(BaseWrapperDataset): - def __init__( - self, - dataset, - labels, - pad, - eos, - batch_targets, - process_label=None, - label_len_fn=None, - add_to_input=False, - text_compression_level=TextCompressionLevel.none, - ): - super().__init__(dataset) - self.labels = labels - self.batch_targets = batch_targets - self.pad = pad - self.eos = eos - self.process_label = process_label - self.label_len_fn = label_len_fn - self.add_to_input = add_to_input - self.text_compressor = TextCompressor(level=text_compression_level) - - def get_label(self, index, process_fn=None): - lbl = self.labels[index] - lbl = self.text_compressor.decompress(lbl) - return lbl if process_fn is None else process_fn(lbl) - - def __getitem__(self, index): - item = self.dataset[index] - item["label"] = self.get_label(index, process_fn=self.process_label) - return item - - def size(self, index): - sz = self.dataset.size(index) - own_sz = self.label_len_fn(self.get_label(index)) - return sz, own_sz - - def collater(self, samples): - collated = self.dataset.collater(samples) - if len(collated) == 0: - return collated - indices = set(collated["id"].tolist()) - target = [s["label"] for s in samples if s["id"] in indices] - - if self.batch_targets: - collated["target_lengths"] = torch.LongTensor([len(t) for t in target]) - target = data_utils.collate_tokens(target, pad_idx=self.pad, left_pad=False) - collated["ntokens"] = collated["target_lengths"].sum().item() - else: - collated["ntokens"] = sum([len(t) for t in target]) - - collated["target"] = target - - if self.add_to_input: - eos = target.new_full((target.size(0), 1), self.eos) - collated["target"] = torch.cat([target, eos], dim=-1).long() - collated["net_input"]["prev_output_tokens"] = torch.cat( - [eos, target], dim=-1 - ).long() - collated["ntokens"] += target.size(0) - return collated - - def filter_indices_by_size(self, indices, max_sizes): - indices, ignored = data_utils._filter_by_size_dynamic( - indices, self.size, max_sizes - ) - return indices, ignored diff --git a/kosmos-g/fairseq/fairseq/data/append_token_dataset.py b/kosmos-g/fairseq/fairseq/data/append_token_dataset.py deleted file mode 100644 index 87695bd0f..000000000 --- a/kosmos-g/fairseq/fairseq/data/append_token_dataset.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -from . 
import BaseWrapperDataset - - -class AppendTokenDataset(BaseWrapperDataset): - def __init__(self, dataset, token=None): - super().__init__(dataset) - self.token = token - if token is not None: - self._sizes = np.array(dataset.sizes) + 1 - else: - self._sizes = dataset.sizes - - def __getitem__(self, idx): - item = self.dataset[idx] - if self.token is not None: - item = torch.cat([item, item.new([self.token])]) - return item - - @property - def sizes(self): - return self._sizes - - def num_tokens(self, index): - n = self.dataset.num_tokens(index) - if self.token is not None: - n += 1 - return n - - def size(self, index): - n = self.dataset.size(index) - if self.token is not None: - n += 1 - return n diff --git a/kosmos-g/fairseq/fairseq/data/audio/__init__.py b/kosmos-g/fairseq/fairseq/data/audio/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/kosmos-g/fairseq/fairseq/data/audio/audio_utils.py b/kosmos-g/fairseq/fairseq/data/audio/audio_utils.py deleted file mode 100644 index ac6b13d81..000000000 --- a/kosmos-g/fairseq/fairseq/data/audio/audio_utils.py +++ /dev/null @@ -1,294 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from pathlib import Path -from typing import BinaryIO, Optional, Tuple, Union, List -import mmap - -import numpy as np -import torch -import torch.nn.functional as F - - -SF_AUDIO_FILE_EXTENSIONS = {".wav", ".flac", ".ogg"} -FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS = {".npy", ".wav", ".flac", ".ogg"} - - -def convert_waveform( - waveform: Union[np.ndarray, torch.Tensor], - sample_rate: int, - normalize_volume: bool = False, - to_mono: bool = False, - to_sample_rate: Optional[int] = None, -) -> Tuple[Union[np.ndarray, torch.Tensor], int]: - """convert a waveform: - - to a target sample rate - - from multi-channel to mono channel - - volume normalization - - Args: - waveform (numpy.ndarray or torch.Tensor): 2D original waveform - (channels x length) - sample_rate (int): original sample rate - normalize_volume (bool): perform volume normalization - to_mono (bool): convert to mono channel if having multiple channels - to_sample_rate (Optional[int]): target sample rate - Returns: - waveform (numpy.ndarray): converted 2D waveform (channels x length) - sample_rate (float): target sample rate - """ - try: - import torchaudio.sox_effects as ta_sox - except ImportError: - raise ImportError("Please install torchaudio: pip install torchaudio") - - effects = [] - if normalize_volume: - effects.append(["gain", "-n"]) - if to_sample_rate is not None and to_sample_rate != sample_rate: - effects.append(["rate", f"{to_sample_rate}"]) - if to_mono and waveform.shape[0] > 1: - effects.append(["channels", "1"]) - if len(effects) > 0: - is_np_input = isinstance(waveform, np.ndarray) - _waveform = torch.from_numpy(waveform) if is_np_input else waveform - converted, converted_sample_rate = ta_sox.apply_effects_tensor( - _waveform, sample_rate, effects - ) - if is_np_input: - converted = converted.numpy() - return converted, converted_sample_rate - return waveform, sample_rate - - -def get_waveform( - path_or_fp: Union[str, BinaryIO], - normalization: bool = True, - mono: bool = True, - frames: int = -1, - start: int = 0, - always_2d: bool = True, - output_sample_rate: Optional[int] = None, - normalize_volume: bool = False, -) -> Tuple[np.ndarray, int]: - """Get the waveform and sample rate of a 16-bit WAV/FLAC/OGG Vorbis audio. 
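As a usage sketch for this helper (the file name and rate are illustrative, not from the original): with `mono=True` and `always_2d=False` it yields a 1-D float32 waveform in [-1, 1] together with the sample rate.

```python
# Hypothetical call to get_waveform; "utt1.wav" is a made-up path.
from fairseq.data.audio.audio_utils import get_waveform

waveform, sample_rate = get_waveform(
    "utt1.wav", mono=True, always_2d=False, output_sample_rate=16000
)
assert waveform.ndim == 1 and sample_rate == 16000  # mono, resampled to 16 kHz
```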
- - Args: - path_or_fp (str or BinaryIO): the path or file-like object - normalization (bool): normalize values to [-1, 1] (Default: True) - mono (bool): convert multi-channel audio to mono-channel one - frames (int): the number of frames to read. (-1 for reading all) - start (int): Where to start reading. A negative value counts from the end. - always_2d (bool): always return 2D array even for mono-channel audios - output_sample_rate (Optional[int]): output sample rate - normalize_volume (bool): normalize volume - Returns: - waveform (numpy.ndarray): 1D or 2D waveform (channels x length) - sample_rate (float): sample rate - """ - if isinstance(path_or_fp, str): - ext = Path(path_or_fp).suffix - if ext not in SF_AUDIO_FILE_EXTENSIONS: - raise ValueError(f"Unsupported audio format: {ext}") - - try: - import soundfile as sf - except ImportError: - raise ImportError("Please install soundfile: pip install soundfile") - - waveform, sample_rate = sf.read( - path_or_fp, dtype="float32", always_2d=True, frames=frames, start=start - ) - waveform = waveform.T # T x C -> C x T - waveform, sample_rate = convert_waveform( - waveform, - sample_rate, - normalize_volume=normalize_volume, - to_mono=mono, - to_sample_rate=output_sample_rate, - ) - - if not normalization: - waveform *= 2 ** 15 # denormalized to 16-bit signed integers - if not always_2d: - waveform = waveform.squeeze(axis=0) - return waveform, sample_rate - - -def _get_kaldi_fbank( - waveform: np.ndarray, sample_rate: int, n_bins=80 -) -> Optional[np.ndarray]: - """Get mel-filter bank features via PyKaldi.""" - try: - from kaldi.feat.fbank import FbankOptions, Fbank - from kaldi.feat.mel import MelBanksOptions - from kaldi.feat.window import FrameExtractionOptions - from kaldi.matrix import Vector - - mel_opts = MelBanksOptions() - mel_opts.num_bins = n_bins - frame_opts = FrameExtractionOptions() - frame_opts.samp_freq = sample_rate - opts = FbankOptions() - opts.mel_opts = mel_opts - opts.frame_opts = frame_opts - fbank = Fbank(opts=opts) - features = fbank.compute(Vector(waveform.squeeze()), 1.0).numpy() - return features - except ImportError: - return None - - -def _get_torchaudio_fbank( - waveform: np.ndarray, sample_rate, n_bins=80 -) -> Optional[np.ndarray]: - """Get mel-filter bank features via TorchAudio.""" - try: - import torchaudio.compliance.kaldi as ta_kaldi - - waveform = torch.from_numpy(waveform) - features = ta_kaldi.fbank( - waveform, num_mel_bins=n_bins, sample_frequency=sample_rate - ) - return features.numpy() - except ImportError: - return None - - -def get_fbank(path_or_fp: Union[str, BinaryIO], n_bins=80) -> np.ndarray: - """Get mel-filter bank features via PyKaldi or TorchAudio. Prefer PyKaldi - (faster CPP implementation) to TorchAudio (Python implementation). 
Note that - Kaldi/TorchAudio requires 16-bit signed integers as inputs and hence the - waveform should not be normalized.""" - waveform, sample_rate = get_waveform(path_or_fp, normalization=False) - - features = _get_kaldi_fbank(waveform, sample_rate, n_bins) - if features is None: - features = _get_torchaudio_fbank(waveform, sample_rate, n_bins) - if features is None: - raise ImportError( - "Please install pyKaldi or torchaudio to enable " - "online filterbank feature extraction" - ) - - return features - - -def is_npy_data(data: bytes) -> bool: - return data[0] == 147 and data[1] == 78 - - -def is_sf_audio_data(data: bytes) -> bool: - is_wav = data[0] == 82 and data[1] == 73 and data[2] == 70 - is_flac = data[0] == 102 and data[1] == 76 and data[2] == 97 - is_ogg = data[0] == 79 and data[1] == 103 and data[2] == 103 - return is_wav or is_flac or is_ogg - - -def mmap_read(path: str, offset: int, length: int) -> bytes: - with open(path, "rb") as f: - with mmap.mmap(f.fileno(), length=0, access=mmap.ACCESS_READ) as mmap_o: - data = mmap_o[offset : offset + length] - return data - - -def read_from_stored_zip(zip_path: str, offset: int, length: int) -> bytes: - return mmap_read(zip_path, offset, length) - - -def parse_path(path: str) -> Tuple[str, List[int]]: - """Parse data path which is either a path to - 1. a .npy/.wav/.flac/.ogg file - 2. a stored ZIP file with slicing info: "[zip_path]:[offset]:[length]" - - Args: - path (str): the data path to parse - - Returns: - file_path (str): the file path - slice_ptr (list of int): empty in case 1; - byte offset and length for the slice in case 2 - """ - - if Path(path).suffix in FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS: - _path, slice_ptr = path, [] - else: - _path, *slice_ptr = path.split(":") - if not Path(_path).is_file(): - raise FileNotFoundError(f"File not found: {_path}") - assert len(slice_ptr) in {0, 2}, f"Invalid path: {path}" - slice_ptr = [int(i) for i in slice_ptr] - return _path, slice_ptr - - -def get_window(window_fn: callable, n_fft: int, win_length: int) -> torch.Tensor: - padding = n_fft - win_length - assert padding >= 0 - return F.pad(window_fn(win_length), (padding // 2, padding - padding // 2)) - - -def get_fourier_basis(n_fft: int) -> torch.Tensor: - basis = np.fft.fft(np.eye(n_fft)) - basis = np.vstack( - [np.real(basis[: n_fft // 2 + 1, :]), np.imag(basis[: n_fft // 2 + 1, :])] - ) - return torch.from_numpy(basis).float() - - -def get_mel_filters( - sample_rate: int, n_fft: int, n_mels: int, f_min: float, f_max: float -) -> torch.Tensor: - try: - import librosa - except ImportError: - raise ImportError("Please install librosa: pip install librosa") - basis = librosa.filters.mel(sample_rate, n_fft, n_mels, f_min, f_max) - return torch.from_numpy(basis).float() - - -class TTSSpectrogram(torch.nn.Module): - def __init__( - self, - n_fft: int, - win_length: int, - hop_length: int, - window_fn: callable = torch.hann_window, - return_phase: bool = False, - ) -> None: - super(TTSSpectrogram, self).__init__() - self.n_fft = n_fft - self.hop_length = hop_length - self.return_phase = return_phase - - basis = get_fourier_basis(n_fft).unsqueeze(1) - basis *= get_window(window_fn, n_fft, win_length) - self.register_buffer("basis", basis) - - def forward( - self, waveform: torch.Tensor - ) -> Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor]]: - padding = (self.n_fft // 2, self.n_fft // 2) - x = F.pad(waveform.unsqueeze(1), padding, mode="reflect") - x = F.conv1d(x, self.basis, stride=self.hop_length) - real_part = x[:, : self.n_fft // 
2 + 1, :] - imag_part = x[:, self.n_fft // 2 + 1 :, :] - magnitude = torch.sqrt(real_part ** 2 + imag_part ** 2) - if self.return_phase: - phase = torch.atan2(imag_part, real_part) - return magnitude, phase - return magnitude - - -class TTSMelScale(torch.nn.Module): - def __init__( - self, n_mels: int, sample_rate: int, f_min: float, f_max: float, n_stft: int - ) -> None: - super(TTSMelScale, self).__init__() - basis = get_mel_filters(sample_rate, (n_stft - 1) * 2, n_mels, f_min, f_max) - self.register_buffer("basis", basis) - - def forward(self, specgram: torch.Tensor) -> torch.Tensor: - return torch.matmul(self.basis, specgram) diff --git a/kosmos-g/fairseq/fairseq/data/audio/data_cfg.py b/kosmos-g/fairseq/fairseq/data/audio/data_cfg.py deleted file mode 100644 index 18fcf416f..000000000 --- a/kosmos-g/fairseq/fairseq/data/audio/data_cfg.py +++ /dev/null @@ -1,285 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -from argparse import Namespace -from pathlib import Path -from typing import Dict, Optional - -from fairseq.data import Dictionary - - -def get_config_from_yaml(yaml_path: Path): - try: - import yaml - except ImportError: - raise ImportError("Please install PyYAML: pip install PyYAML") - config = {} - if yaml_path.is_file(): - try: - with open(yaml_path) as f: - config = yaml.load(f, Loader=yaml.FullLoader) - except Exception as e: - raise Exception(f"Failed to load config from {yaml_path.as_posix()}: {e}") - else: - raise FileNotFoundError(f"{yaml_path.as_posix()} not found") - - return config - - -class S2TDataConfig(object): - """Wrapper class for data config YAML""" - - def __init__(self, yaml_path: Path): - self.config = get_config_from_yaml(yaml_path) - self.root = yaml_path.parent - - def _auto_convert_to_abs_path(self, x): - if isinstance(x, str): - if not Path(x).exists() and (self.root / x).exists(): - return (self.root / x).as_posix() - elif isinstance(x, dict): - return {k: self._auto_convert_to_abs_path(v) for k, v in x.items()} - return x - - @property - def vocab_filename(self): - """fairseq vocabulary file under data root""" - return self.config.get("vocab_filename", "dict.txt") - - @property - def speaker_set_filename(self): - """speaker set file under data root""" - return self.config.get("speaker_set_filename", None) - - @property - def shuffle(self) -> bool: - """Shuffle dataset samples before batching""" - return self.config.get("shuffle", False) - - @property - def pre_tokenizer(self) -> Dict: - """Pre-tokenizer to apply before subword tokenization. Returning - a dictionary with `tokenizer` providing the tokenizer name and - the other items providing the tokenizer-specific arguments. - Tokenizers are defined in `fairseq.data.encoders.*`""" - tokenizer = self.config.get("pre_tokenizer", {"tokenizer": None}) - return self._auto_convert_to_abs_path(tokenizer) - - @property - def bpe_tokenizer(self) -> Dict: - """Subword tokenizer to apply after pre-tokenization. Returning - a dictionary with `bpe` providing the tokenizer name and - the other items providing the tokenizer-specific arguments. - Tokenizers are defined in `fairseq.data.encoders.*`""" - tokenizer = self.config.get("bpe_tokenizer", {"bpe": None}) - return self._auto_convert_to_abs_path(tokenizer) - - @property - def prepend_tgt_lang_tag(self) -> bool: - """Prepend target lang ID token as the target BOS (e.g. for to-many - multilingual setting).
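To make these accessors concrete, here is a hedged sketch of the parsed dict such a YAML might produce (all values invented; the keys mirror the properties this class reads):

```python
# Illustrative dict shape behind S2TDataConfig; values are made up.
config = {
    "vocab_filename": "dict.txt",
    "shuffle": True,
    "pre_tokenizer": {"tokenizer": "moses"},
    "bpe_tokenizer": {"bpe": "sentencepiece", "sentencepiece_model": "spm.model"},
    "input_feat_per_channel": 80,
    "input_channels": 1,
    "sample_rate": 16000,
    "audio_root": "",
}
```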
During inference, this requires `--prefix-size 1` - to force BOS to be lang ID token.""" - return self.config.get("prepend_tgt_lang_tag", False) - - @property - def input_feat_per_channel(self): - """The dimension of input features (per audio channel)""" - return self.config.get("input_feat_per_channel", 80) - - @property - def input_channels(self): - """The number of channels in the input audio""" - return self.config.get("input_channels", 1) - - @property - def sample_rate(self): - return self.config.get("sample_rate", 16_000) - - @property - def sampling_alpha(self): - """Hyper-parameter alpha = 1/T for temperature-based resampling. - (alpha = 1 for no resampling)""" - return self.config.get("sampling_alpha", 1.0) - - @property - def use_audio_input(self): - """Needed by the dataset loader to see if the model requires - raw audio as inputs.""" - return self.config.get("use_audio_input", False) - - @property - def use_sample_rate(self): - """Needed by the dataset loader to see if the model requires - raw audio with specific sample rate as inputs.""" - return self.config.get("use_sample_rate", 16000) - - @property - def audio_root(self): - """Audio paths in the manifest TSV can be relative and this provides - the root path. Set this to empty string when using absolute paths.""" - return self.config.get("audio_root", "") - - def get_feature_transforms(self, split, is_train): - """Split-specific feature transforms. Allowing train set - wildcard `_train`, evaluation set wildcard `_eval` and general - wildcard `*` for matching.""" - from copy import deepcopy - - cfg = deepcopy(self.config) - _cur = cfg.get("transforms", {}) - cur = _cur.get(split) - cur = _cur.get("_train") if cur is None and is_train else cur - cur = _cur.get("_eval") if cur is None and not is_train else cur - cur = _cur.get("*") if cur is None else cur - cfg["transforms"] = cur - return cfg - - @property - def global_cmvn_stats_npz(self) -> Optional[str]: - path = self.config.get("global_cmvn", {}).get("stats_npz_path", None) - return self._auto_convert_to_abs_path(path) - - @property - def vocoder(self) -> Dict[str, str]: - vocoder = self.config.get("vocoder", {"type": "griffin_lim"}) - return self._auto_convert_to_abs_path(vocoder) - - @property - def hub(self) -> Dict[str, str]: - return self.config.get("hub", {}) - - -class S2SDataConfig(S2TDataConfig): - """Wrapper class for data config YAML""" - - @property - def vocab_filename(self): - return None - - @property - def pre_tokenizer(self) -> Dict: - return None - - @property - def bpe_tokenizer(self) -> Dict: - return None - - @property - def input_transformed_channels(self): - """The number of channels in the audio after feature transforms""" - # TODO: move this into individual transforms - _cur = self.config.get("transforms", {}) - cur = _cur.get("_train", []) - - _channels = self.input_channels - if "delta_deltas" in cur: - _channels *= 3 - - return _channels - - @property - def output_sample_rate(self): - """The audio sample rate of output target speech""" - return self.config.get("output_sample_rate", 22050) - - @property - def target_speaker_embed(self): - """Target speaker embedding file (one line per target audio sample)""" - return self.config.get("target_speaker_embed", None) - - -class MultitaskConfig(object): - """Wrapper class for data config YAML""" - - def __init__(self, yaml_path: Path): - config = get_config_from_yaml(yaml_path) - self.config = {} - for k, v in config.items(): - self.config[k] = SingleTaskConfig(k, v) - - def get_all_tasks(self): - 
return self.config - - def get_single_task(self, name): - assert name in self.config, f"multitask '{name}' does not exist!" - return self.config[name] - - -class SingleTaskConfig(object): - def __init__(self, name, config): - self.task_name = name - self.config = config - dict_path = config.get("dict", "") - self.tgt_dict = Dictionary.load(dict_path) if Path(dict_path).exists() else None - - @property - def data(self): - return self.config.get("data", "") - - @property - def decoder_type(self): - return self.config.get("decoder_type", "transformer") - - @property - def decoder_args(self): - """Decoder arch related args""" - args = self.config.get("decoder_args", {}) - return Namespace(**args) - - @property - def criterion_cfg(self): - """cfg for the multitask criterion""" - if self.decoder_type == "ctc": - from fairseq.criterions.ctc import CtcCriterionConfig - - cfg = CtcCriterionConfig - cfg.zero_infinity = self.config.get("zero_infinity", True) - else: - from fairseq.criterions.label_smoothed_cross_entropy import ( - LabelSmoothedCrossEntropyCriterionConfig, - ) - - cfg = LabelSmoothedCrossEntropyCriterionConfig - cfg.label_smoothing = self.config.get("label_smoothing", 0.2) - return cfg - - @property - def input_from(self): - """Condition on encoder/decoder of the main model""" - return "decoder" if "decoder_layer" in self.config else "encoder" - - @property - def input_layer(self): - if self.input_from == "decoder": - return self.config["decoder_layer"] - 1 - else: - # default using the output from the last encoder layer (-1) - return self.config.get("encoder_layer", 0) - 1 - - @property - def loss_weight_schedule(self): - return ( - "decay" - if "loss_weight_max" in self.config - and "loss_weight_decay_steps" in self.config - else "fixed" - ) - - def get_loss_weight(self, num_updates): - if self.loss_weight_schedule == "fixed": - weight = self.config.get("loss_weight", 1.0) - else: # "decay" - assert ( - self.config.get("loss_weight_decay_steps", 0) > 0 - ), "loss_weight_decay_steps must be greater than 0 for a decay schedule" - loss_weight_min = self.config.get("loss_weight_min", 0.0001) - loss_weight_decay_stepsize = ( - self.config["loss_weight_max"] - loss_weight_min - ) / self.config["loss_weight_decay_steps"] - weight = max( - self.config["loss_weight_max"] - - loss_weight_decay_stepsize * num_updates, - loss_weight_min, - ) - return weight diff --git a/kosmos-g/fairseq/fairseq/data/audio/feature_transforms/__init__.py b/kosmos-g/fairseq/fairseq/data/audio/feature_transforms/__init__.py deleted file mode 100644 index 359fa0697..000000000 --- a/kosmos-g/fairseq/fairseq/data/audio/feature_transforms/__init__.py +++ /dev/null @@ -1,82 +0,0 @@ -import importlib -import os -from abc import ABC, abstractmethod -from typing import Dict, Optional - - -class AudioFeatureTransform(ABC): - @classmethod - @abstractmethod - def from_config_dict(cls, config: Optional[Dict] = None): - pass - - -AUDIO_FEATURE_TRANSFORM_REGISTRY = {} -AUDIO_FEATURE_TRANSFORM_CLASS_NAMES = set() - - -def register_audio_feature_transform(name): - def register_audio_feature_transform_cls(cls): - if name in AUDIO_FEATURE_TRANSFORM_REGISTRY: - raise ValueError(f"Cannot register duplicate transform ({name})") - if not issubclass(cls, AudioFeatureTransform): - raise ValueError( - f"Transform ({name}: {cls.__name__}) must extend " - "AudioFeatureTransform" - ) - if cls.__name__ in AUDIO_FEATURE_TRANSFORM_CLASS_NAMES: - raise ValueError( - f"Cannot register audio feature transform with duplicate " - f"class name 
({cls.__name__})" - ) - AUDIO_FEATURE_TRANSFORM_REGISTRY[name] = cls - AUDIO_FEATURE_TRANSFORM_CLASS_NAMES.add(cls.__name__) - return cls - - return register_audio_feature_transform_cls - - -def get_audio_feature_transform(name): - return AUDIO_FEATURE_TRANSFORM_REGISTRY[name] - - -transforms_dir = os.path.dirname(__file__) -for file in os.listdir(transforms_dir): - path = os.path.join(transforms_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - name = file[: file.find(".py")] if file.endswith(".py") else file - importlib.import_module("fairseq.data.audio.feature_transforms." + name) - - -class CompositeAudioFeatureTransform(AudioFeatureTransform): - @classmethod - def from_config_dict(cls, config=None): - _config = {} if config is None else config - _transforms = _config.get("transforms") - if _transforms is None: - return None - transforms = [ - get_audio_feature_transform(_t).from_config_dict(_config.get(_t)) - for _t in _transforms - ] - return CompositeAudioFeatureTransform(transforms) - - def __init__(self, transforms): - self.transforms = [t for t in transforms if t is not None] - - def __call__(self, x): - for t in self.transforms: - x = t(x) - return x - - def __repr__(self): - format_string = ( - [self.__class__.__name__ + "("] - + [f" {t.__repr__()}" for t in self.transforms] - + [")"] - ) - return "\n".join(format_string) diff --git a/kosmos-g/fairseq/fairseq/data/audio/feature_transforms/delta_deltas.py b/kosmos-g/fairseq/fairseq/data/audio/feature_transforms/delta_deltas.py deleted file mode 100644 index 49d090b11..000000000 --- a/kosmos-g/fairseq/fairseq/data/audio/feature_transforms/delta_deltas.py +++ /dev/null @@ -1,37 +0,0 @@ -import numpy as np -import torch -from fairseq.data.audio.feature_transforms import ( - AudioFeatureTransform, - register_audio_feature_transform, -) - - -@register_audio_feature_transform("delta_deltas") -class DeltaDeltas(AudioFeatureTransform): - """Expand delta-deltas features from spectrum.""" - - @classmethod - def from_config_dict(cls, config=None): - _config = {} if config is None else config - return DeltaDeltas(_config.get("win_length", 5)) - - def __init__(self, win_length=5): - self.win_length = win_length - - def __repr__(self): - return self.__class__.__name__ - - def __call__(self, spectrogram): - from torchaudio.functional import compute_deltas - - assert len(spectrogram.shape) == 2, "spectrogram must be a 2-D tensor." - # spectrogram is T x F, while compute_deltas takes (…, F, T) - spectrogram = torch.from_numpy(spectrogram).transpose(0, 1) - delta = compute_deltas(spectrogram) - delta_delta = compute_deltas(delta) - - out_feat = np.concatenate( - [spectrogram, delta.numpy(), delta_delta.numpy()], axis=0 - ) - out_feat = np.transpose(out_feat) - return out_feat diff --git a/kosmos-g/fairseq/fairseq/data/audio/feature_transforms/global_cmvn.py b/kosmos-g/fairseq/fairseq/data/audio/feature_transforms/global_cmvn.py deleted file mode 100644 index e457ff176..000000000 --- a/kosmos-g/fairseq/fairseq/data/audio/feature_transforms/global_cmvn.py +++ /dev/null @@ -1,29 +0,0 @@ -import numpy as np -from fairseq.data.audio.feature_transforms import ( - AudioFeatureTransform, - register_audio_feature_transform, -) - - -@register_audio_feature_transform("global_cmvn") -class GlobalCMVN(AudioFeatureTransform): - """Global CMVN (cepstral mean and variance normalization). 
The global mean - and variance need to be pre-computed and stored in NumPy format (.npz).""" - - @classmethod - def from_config_dict(cls, config=None): - _config = {} if config is None else config - return GlobalCMVN(_config.get("stats_npz_path")) - - def __init__(self, stats_npz_path): - self.stats_npz_path = stats_npz_path - stats = np.load(stats_npz_path) - self.mean, self.std = stats["mean"], stats["std"] - - def __repr__(self): - return self.__class__.__name__ + f'(stats_npz_path="{self.stats_npz_path}")' - - def __call__(self, x): - x = np.subtract(x, self.mean) - x = np.divide(x, self.std) - return x diff --git a/kosmos-g/fairseq/fairseq/data/audio/feature_transforms/specaugment.py b/kosmos-g/fairseq/fairseq/data/audio/feature_transforms/specaugment.py deleted file mode 100644 index ce5802b41..000000000 --- a/kosmos-g/fairseq/fairseq/data/audio/feature_transforms/specaugment.py +++ /dev/null @@ -1,131 +0,0 @@ -import math -import numbers -from typing import Optional - -import numpy as np -from fairseq.data.audio.feature_transforms import ( - AudioFeatureTransform, - register_audio_feature_transform, -) - - -@register_audio_feature_transform("specaugment") -class SpecAugmentTransform(AudioFeatureTransform): - """SpecAugment (https://arxiv.org/abs/1904.08779)""" - - @classmethod - def from_config_dict(cls, config=None): - _config = {} if config is None else config - return SpecAugmentTransform( - _config.get("time_warp_W", 0), - _config.get("freq_mask_N", 0), - _config.get("freq_mask_F", 0), - _config.get("time_mask_N", 0), - _config.get("time_mask_T", 0), - _config.get("time_mask_p", 0.0), - _config.get("mask_value", None), - ) - - def __init__( - self, - time_warp_w: int = 0, - freq_mask_n: int = 0, - freq_mask_f: int = 0, - time_mask_n: int = 0, - time_mask_t: int = 0, - time_mask_p: float = 0.0, - mask_value: Optional[float] = 0.0, - ): - # Sanity checks - assert mask_value is None or isinstance( - mask_value, numbers.Number - ), f"mask_value (type: {type(mask_value)}) must be None or a number" - if freq_mask_n > 0: - assert freq_mask_f > 0, ( - f"freq_mask_F ({freq_mask_f}) " - f"must be larger than 0 when doing freq masking." - ) - if time_mask_n > 0: - assert time_mask_t > 0, ( - f"time_mask_T ({time_mask_t}) must be larger than 0 when " - f"doing time masking." - ) - - self.time_warp_w = time_warp_w - self.freq_mask_n = freq_mask_n - self.freq_mask_f = freq_mask_f - self.time_mask_n = time_mask_n - self.time_mask_t = time_mask_t - self.time_mask_p = time_mask_p - self.mask_value = mask_value - - def __repr__(self): - return ( - self.__class__.__name__ - + "(" - + ", ".join( - [ - f"time_warp_w={self.time_warp_w}", - f"freq_mask_n={self.freq_mask_n}", - f"freq_mask_f={self.freq_mask_f}", - f"time_mask_n={self.time_mask_n}", - f"time_mask_t={self.time_mask_t}", - f"time_mask_p={self.time_mask_p}", - ] - ) - + ")" - ) - - def __call__(self, spectrogram): - assert len(spectrogram.shape) == 2, "spectrogram must be a 2-D tensor." - - distorted = spectrogram.copy() # make a copy of input spectrogram. - num_frames = spectrogram.shape[0] # or 'tau' in the paper. - num_freqs = spectrogram.shape[1] # or 'miu' in the paper. - mask_value = self.mask_value - - if mask_value is None: # if no value was specified, use local mean. 
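Before the masking steps below, a hedged, self-contained sketch of the frequency masking this transform applies (toy sizes; the real implementation also does time warping and time masking):

```python
# Toy SpecAugment-style frequency mask on a (T x F) spectrogram.
import numpy as np

spec = np.random.randn(100, 80)               # 100 frames, 80 frequency bins
freq_mask_f = 27                              # maximum mask width
f = np.random.randint(0, freq_mask_f)         # sampled mask width
f0 = np.random.randint(0, spec.shape[1] - f)  # mask start bin
spec[:, f0:f0 + f] = spec.mean()              # fill with the local mean
```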
- mask_value = spectrogram.mean() - - if num_frames == 0: - return spectrogram - - if num_freqs < self.freq_mask_f: - return spectrogram - - if self.time_warp_w > 0: - if 2 * self.time_warp_w < num_frames: - import cv2 - - w0 = np.random.randint(self.time_warp_w, num_frames - self.time_warp_w) - w = np.random.randint(-self.time_warp_w + 1, self.time_warp_w) - upper, lower = distorted[:w0, :], distorted[w0:, :] - upper = cv2.resize( - upper, dsize=(num_freqs, w0 + w), interpolation=cv2.INTER_LINEAR - ) - lower = cv2.resize( - lower, - dsize=(num_freqs, num_frames - w0 - w), - interpolation=cv2.INTER_LINEAR, - ) - distorted = np.concatenate((upper, lower), axis=0) - - for _i in range(self.freq_mask_n): - f = np.random.randint(0, self.freq_mask_f) - f0 = np.random.randint(0, num_freqs - f) - if f != 0: - distorted[:, f0 : f0 + f] = mask_value - - max_time_mask_t = min( - self.time_mask_t, math.floor(num_frames * self.time_mask_p) - ) - if max_time_mask_t < 1: - return distorted - - for _i in range(self.time_mask_n): - t = np.random.randint(0, max_time_mask_t) - t0 = np.random.randint(0, num_frames - t) - if t != 0: - distorted[t0 : t0 + t, :] = mask_value - - return distorted diff --git a/kosmos-g/fairseq/fairseq/data/audio/feature_transforms/utterance_cmvn.py b/kosmos-g/fairseq/fairseq/data/audio/feature_transforms/utterance_cmvn.py deleted file mode 100644 index 6bbd0ae82..000000000 --- a/kosmos-g/fairseq/fairseq/data/audio/feature_transforms/utterance_cmvn.py +++ /dev/null @@ -1,40 +0,0 @@ -import numpy as np -from fairseq.data.audio.feature_transforms import ( - AudioFeatureTransform, - register_audio_feature_transform, -) - - -@register_audio_feature_transform("utterance_cmvn") -class UtteranceCMVN(AudioFeatureTransform): - """Utterance-level CMVN (cepstral mean and variance normalization)""" - - @classmethod - def from_config_dict(cls, config=None): - _config = {} if config is None else config - return UtteranceCMVN( - _config.get("norm_means", True), - _config.get("norm_vars", True), - ) - - def __init__(self, norm_means=True, norm_vars=True): - self.norm_means, self.norm_vars = norm_means, norm_vars - - def __repr__(self): - return ( - self.__class__.__name__ - + f"(norm_means={self.norm_means}, norm_vars={self.norm_vars})" - ) - - def __call__(self, x): - mean = x.mean(axis=0) - square_sums = (x ** 2).sum(axis=0) - - if self.norm_means: - x = np.subtract(x, mean) - if self.norm_vars: - var = square_sums / x.shape[0] - mean ** 2 - std = np.sqrt(np.maximum(var, 1e-10)) - x = np.divide(x, std) - - return x diff --git a/kosmos-g/fairseq/fairseq/data/audio/frm_text_to_speech_dataset.py b/kosmos-g/fairseq/fairseq/data/audio/frm_text_to_speech_dataset.py deleted file mode 100644 index b54654d49..000000000 --- a/kosmos-g/fairseq/fairseq/data/audio/frm_text_to_speech_dataset.py +++ /dev/null @@ -1,205 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. 
An additional grant of patent rights -# can be found in the PATENTS file in the same directory. - -import csv -import logging -import os.path as op -from typing import List, Optional - -import numpy as np -import torch -from fairseq.data import Dictionary -from fairseq.data.audio.speech_to_text_dataset import S2TDataConfig -from fairseq.data.audio.text_to_speech_dataset import ( - TextToSpeechDataset, - TextToSpeechDatasetCreator, -) - -logger = logging.getLogger(__name__) - - -class FrmTextToSpeechDataset(TextToSpeechDataset): - def __init__( - self, - split: str, - is_train_split: bool, - data_cfg: S2TDataConfig, - audio_paths: List[str], - n_frames: List[int], - src_texts: Optional[List[str]] = None, - tgt_texts: Optional[List[str]] = None, - speakers: Optional[List[str]] = None, - src_langs: Optional[List[str]] = None, - tgt_langs: Optional[List[str]] = None, - ids: Optional[List[str]] = None, - tgt_dict: Optional[Dictionary] = None, - pre_tokenizer=None, - bpe_tokenizer=None, - n_frames_per_step=1, - speaker_to_id=None, - do_chunk=False, - chunk_bound=-1, - chunk_init=50, - chunk_incr=5, - add_eos=True, - dedup=True, - ref_fpu=-1, - ): - # It assumes texts are encoded at a fixed frame-rate - super().__init__( - split=split, - is_train_split=is_train_split, - data_cfg=data_cfg, - audio_paths=audio_paths, - n_frames=n_frames, - src_texts=src_texts, - tgt_texts=tgt_texts, - speakers=speakers, - src_langs=src_langs, - tgt_langs=tgt_langs, - ids=ids, - tgt_dict=tgt_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - n_frames_per_step=n_frames_per_step, - speaker_to_id=speaker_to_id, - ) - - self.do_chunk = do_chunk - self.chunk_bound = chunk_bound - self.chunk_init = chunk_init - self.chunk_incr = chunk_incr - self.add_eos = add_eos - self.dedup = dedup - self.ref_fpu = ref_fpu - - self.chunk_size = -1 - - if do_chunk: - assert self.chunk_incr >= 0 - assert self.pre_tokenizer is None - - def __getitem__(self, index): - index, source, target, speaker_id, _, _, _ = super().__getitem__(index) - if target[-1].item() == self.tgt_dict.eos_index: - target = target[:-1] - - fpu = source.size(0) / target.size(0) # frame-per-unit - fps = self.n_frames_per_step - assert ( - self.ref_fpu == -1 or abs((fpu * fps - self.ref_fpu) / self.ref_fpu) < 0.1 - ), f"{fpu*fps} != {self.ref_fpu}" - - # only chunk training split - if self.is_train_split and self.do_chunk and self.chunk_size > 0: - lang = target[: int(self.data_cfg.prepend_tgt_lang_tag)] - text = target[int(self.data_cfg.prepend_tgt_lang_tag) :] - size = len(text) - chunk_size = min(self.chunk_size, size) - chunk_start = np.random.randint(size - chunk_size + 1) - text = text[chunk_start : chunk_start + chunk_size] - target = torch.cat((lang, text), 0) - - f_size = int(np.floor(chunk_size * fpu)) - f_start = int(np.floor(chunk_start * fpu)) - assert f_size > 0 - source = source[f_start : f_start + f_size, :] - - if self.dedup: - target = torch.unique_consecutive(target) - - if self.add_eos: - eos_idx = self.tgt_dict.eos_index - target = torch.cat((target, torch.LongTensor([eos_idx])), 0) - - return index, source, target, speaker_id - - def set_epoch(self, epoch): - if self.is_train_split and self.do_chunk: - old = self.chunk_size - self.chunk_size = self.chunk_init + epoch * self.chunk_incr - if self.chunk_bound > 0: - self.chunk_size = min(self.chunk_size, self.chunk_bound) - logger.info( - ( - f"{self.split}: setting chunk size " - f"from {old} to {self.chunk_size}" - ) - ) - - -class
FrmTextToSpeechDatasetCreator(TextToSpeechDatasetCreator): - # inherit for key names - @classmethod - def from_tsv( - cls, - root: str, - data_cfg: S2TDataConfig, - split: str, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - is_train_split: bool, - n_frames_per_step: int, - speaker_to_id, - do_chunk: bool = False, - chunk_bound: int = -1, - chunk_init: int = 50, - chunk_incr: int = 5, - add_eos: bool = True, - dedup: bool = True, - ref_fpu: float = -1, - ) -> FrmTextToSpeechDataset: - tsv_path = op.join(root, f"{split}.tsv") - if not op.isfile(tsv_path): - raise FileNotFoundError(f"Dataset not found: {tsv_path}") - with open(tsv_path) as f: - reader = csv.DictReader( - f, - delimiter="\t", - quotechar=None, - doublequote=False, - lineterminator="\n", - quoting=csv.QUOTE_NONE, - ) - s = [dict(e) for e in reader] - assert len(s) > 0 - - ids = [ss[cls.KEY_ID] for ss in s] - audio_paths = [op.join(data_cfg.audio_root, ss[cls.KEY_AUDIO]) for ss in s] - n_frames = [int(ss[cls.KEY_N_FRAMES]) for ss in s] - tgt_texts = [ss[cls.KEY_TGT_TEXT] for ss in s] - src_texts = [ss.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for ss in s] - speakers = [ss.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for ss in s] - src_langs = [ss.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for ss in s] - tgt_langs = [ss.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for ss in s] - - return FrmTextToSpeechDataset( - split=split, - is_train_split=is_train_split, - data_cfg=data_cfg, - audio_paths=audio_paths, - n_frames=n_frames, - src_texts=src_texts, - tgt_texts=tgt_texts, - speakers=speakers, - src_langs=src_langs, - tgt_langs=tgt_langs, - ids=ids, - tgt_dict=tgt_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - n_frames_per_step=n_frames_per_step, - speaker_to_id=speaker_to_id, - do_chunk=do_chunk, - chunk_bound=chunk_bound, - chunk_init=chunk_init, - chunk_incr=chunk_incr, - add_eos=add_eos, - dedup=dedup, - ref_fpu=ref_fpu, - ) diff --git a/kosmos-g/fairseq/fairseq/data/audio/hubert_dataset.py b/kosmos-g/fairseq/fairseq/data/audio/hubert_dataset.py deleted file mode 100644 index 1c0267bbc..000000000 --- a/kosmos-g/fairseq/fairseq/data/audio/hubert_dataset.py +++ /dev/null @@ -1,344 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
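The loader below consumes a tab-separated manifest whose first line is the audio root; a hedged sketch of that format (paths and sizes invented):

```python
# Parse a HuBERT-style manifest: root line, then "<path>\t<num_samples>".
from io import StringIO

manifest = StringIO("/data/audio\nclip1.wav\t16000\nclip2.wav\t48000\n")
root = manifest.readline().strip()
names, sizes = [], []
for line in manifest:
    path, num_samples = line.strip().split("\t")
    names.append(path)
    sizes.append(int(num_samples))
assert root == "/data/audio" and sizes == [16000, 48000]
```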
- -import itertools -import logging -import os -import sys -from typing import Any, List, Optional, Union - -import numpy as np - -import torch -import torch.nn.functional as F -from fairseq.data import data_utils -from fairseq.data.fairseq_dataset import FairseqDataset - -logger = logging.getLogger(__name__) - - -def load_audio(manifest_path, max_keep, min_keep): - n_long, n_short = 0, 0 - names, inds, sizes = [], [], [] - with open(manifest_path) as f: - root = f.readline().strip() - for ind, line in enumerate(f): - items = line.strip().split("\t") - assert len(items) == 2, line - sz = int(items[1]) - if min_keep is not None and sz < min_keep: - n_short += 1 - elif max_keep is not None and sz > max_keep: - n_long += 1 - else: - names.append(items[0]) - inds.append(ind) - sizes.append(sz) - tot = ind + 1 - logger.info( - ( - f"max_keep={max_keep}, min_keep={min_keep}, " - f"loaded {len(names)}, skipped {n_short} short and {n_long} long, " - f"longest-loaded={max(sizes)}, shortest-loaded={min(sizes)}" - ) - ) - return root, names, inds, tot, sizes - - -def load_label(label_path, inds, tot): - with open(label_path) as f: - labels = [line.rstrip() for line in f] - assert ( - len(labels) == tot - ), f"number of labels does not match ({len(labels)} != {tot})" - labels = [labels[i] for i in inds] - return labels - - -def load_label_offset(label_path, inds, tot): - with open(label_path) as f: - code_lengths = [len(line.encode("utf-8")) for line in f] - assert ( - len(code_lengths) == tot - ), f"number of labels does not match ({len(code_lengths)} != {tot})" - offsets = list(itertools.accumulate([0] + code_lengths)) - offsets = [(offsets[i], offsets[i + 1]) for i in inds] - return offsets - - -def verify_label_lengths( - audio_sizes, - audio_rate, - label_path, - label_rate, - inds, - tot, - tol=0.1, # tolerance in seconds -): - if label_rate < 0: - logger.info(f"{label_path} is sequence label. skipped") - return - - with open(label_path) as f: - lengths = [len(line.rstrip().split()) for line in f] - assert len(lengths) == tot - lengths = [lengths[i] for i in inds] - num_invalid = 0 - for i, ind in enumerate(inds): - dur_from_audio = audio_sizes[i] / audio_rate - dur_from_label = lengths[i] / label_rate - if abs(dur_from_audio - dur_from_label) > tol: - logger.warning( - ( - f"audio and label duration differ too much " - f"(|{dur_from_audio} - {dur_from_label}| > {tol}) " - f"in line {ind+1} of {label_path}. Check if `label_rate` " - f"is correctly set (currently {label_rate}). " - f"num. 
of samples = {audio_sizes[i]}; " - f"label length = {lengths[i]}" - ) - ) - num_invalid += 1 - if num_invalid > 0: - logger.warning( - f"total {num_invalid} (audio, label) pairs with mismatched lengths" - ) - - -class HubertDataset(FairseqDataset): - def __init__( - self, - manifest_path: str, - sample_rate: float, - label_paths: List[str], - label_rates: Union[List[float], float], # -1 for sequence labels - pad_list: List[str], - eos_list: List[str], - label_processors: Optional[List[Any]] = None, - max_keep_sample_size: Optional[int] = None, - min_keep_sample_size: Optional[int] = None, - max_sample_size: Optional[int] = None, - shuffle: bool = True, - pad_audio: bool = False, - normalize: bool = False, - store_labels: bool = True, - random_crop: bool = False, - single_target: bool = False, - ): - self.audio_root, self.audio_names, inds, tot, self.sizes = load_audio( - manifest_path, max_keep_sample_size, min_keep_sample_size - ) - self.sample_rate = sample_rate - self.shuffle = shuffle - self.random_crop = random_crop - - self.num_labels = len(label_paths) - self.pad_list = pad_list - self.eos_list = eos_list - self.label_processors = label_processors - self.single_target = single_target - self.label_rates = ( - [label_rates for _ in range(len(label_paths))] - if isinstance(label_rates, int) - else label_rates - ) - self.store_labels = store_labels - if store_labels: - self.label_list = [load_label(p, inds, tot) for p in label_paths] - else: - self.label_paths = label_paths - self.label_offsets_list = [ - load_label_offset(p, inds, tot) for p in label_paths - ] - assert label_processors is None or len(label_processors) == self.num_labels - for label_path, label_rate in zip(label_paths, self.label_rates): - verify_label_lengths( - self.sizes, sample_rate, label_path, label_rate, inds, tot - ) - - self.max_sample_size = ( - max_sample_size if max_sample_size is not None else sys.maxsize - ) - self.pad_audio = pad_audio - self.normalize = normalize - logger.info( - f"pad_audio={pad_audio}, random_crop={random_crop}, " - f"normalize={normalize}, max_sample_size={self.max_sample_size}" - ) - - def get_audio(self, index): - import soundfile as sf - - wav_path = os.path.join(self.audio_root, self.audio_names[index]) - wav, cur_sample_rate = sf.read(wav_path) - wav = torch.from_numpy(wav).float() - wav = self.postprocess(wav, cur_sample_rate) - return wav - - def get_label(self, index, label_idx): - if self.store_labels: - label = self.label_list[label_idx][index] - else: - with open(self.label_paths[label_idx]) as f: - offset_s, offset_e = self.label_offsets_list[label_idx][index] - f.seek(offset_s) - label = f.read(offset_e - offset_s) - - if self.label_processors is not None: - label = self.label_processors[label_idx](label) - return label - - def get_labels(self, index): - return [self.get_label(index, i) for i in range(self.num_labels)] - - def __getitem__(self, index): - wav = self.get_audio(index) - labels = self.get_labels(index) - return {"id": index, "source": wav, "label_list": labels} - - def __len__(self): - return len(self.sizes) - - def crop_to_max_size(self, wav, target_size): - size = len(wav) - diff = size - target_size - if diff <= 0: - return wav, 0 - - start, end = 0, target_size - if self.random_crop: - start = np.random.randint(0, diff + 1) - end = size - diff + start - return wav[start:end], start - - def collater(self, samples): - # target = max(sizes) -> random_crop not used - # target = max_sample_size -> random_crop used for long - samples = [s for s in samples if 
s["source"] is not None] - if len(samples) == 0: - return {} - - audios = [s["source"] for s in samples] - audio_sizes = [len(s) for s in audios] - if self.pad_audio: - audio_size = min(max(audio_sizes), self.max_sample_size) - else: - audio_size = min(min(audio_sizes), self.max_sample_size) - collated_audios, padding_mask, audio_starts = self.collater_audio( - audios, audio_size - ) - - targets_by_label = [ - [s["label_list"][i] for s in samples] for i in range(self.num_labels) - ] - targets_list, lengths_list, ntokens_list = self.collater_label( - targets_by_label, audio_size, audio_starts - ) - - net_input = {"source": collated_audios, "padding_mask": padding_mask} - batch = { - "id": torch.LongTensor([s["id"] for s in samples]), - "net_input": net_input, - } - - if self.single_target: - batch["target_lengths"] = lengths_list[0] - batch["ntokens"] = ntokens_list[0] - batch["target"] = targets_list[0] - else: - batch["target_lengths_list"] = lengths_list - batch["ntokens_list"] = ntokens_list - batch["target_list"] = targets_list - return batch - - def collater_audio(self, audios, audio_size): - collated_audios = audios[0].new_zeros(len(audios), audio_size) - padding_mask = ( - torch.BoolTensor(collated_audios.shape).fill_(False) - # if self.pad_audio else None - ) - audio_starts = [0 for _ in audios] - for i, audio in enumerate(audios): - diff = len(audio) - audio_size - if diff == 0: - collated_audios[i] = audio - elif diff < 0: - assert self.pad_audio - collated_audios[i] = torch.cat([audio, audio.new_full((-diff,), 0.0)]) - padding_mask[i, diff:] = True - else: - collated_audios[i], audio_starts[i] = self.crop_to_max_size( - audio, audio_size - ) - return collated_audios, padding_mask, audio_starts - - def collater_frm_label(self, targets, audio_size, audio_starts, label_rate, pad): - assert label_rate > 0 - s2f = label_rate / self.sample_rate - frm_starts = [int(round(s * s2f)) for s in audio_starts] - frm_size = int(round(audio_size * s2f)) - if not self.pad_audio: - rem_size = [len(t) - s for t, s in zip(targets, frm_starts)] - frm_size = min(frm_size, *rem_size) - targets = [t[s : s + frm_size] for t, s in zip(targets, frm_starts)] - logger.debug(f"audio_starts={audio_starts}") - logger.debug(f"frame_starts={frm_starts}") - logger.debug(f"frame_size={frm_size}") - - lengths = torch.LongTensor([len(t) for t in targets]) - ntokens = lengths.sum().item() - targets = data_utils.collate_tokens(targets, pad_idx=pad, left_pad=False) - return targets, lengths, ntokens - - def collater_seq_label(self, targets, pad): - lengths = torch.LongTensor([len(t) for t in targets]) - ntokens = lengths.sum().item() - targets = data_utils.collate_tokens(targets, pad_idx=pad, left_pad=False) - return targets, lengths, ntokens - - def collater_label(self, targets_by_label, audio_size, audio_starts): - targets_list, lengths_list, ntokens_list = [], [], [] - itr = zip(targets_by_label, self.label_rates, self.pad_list) - for targets, label_rate, pad in itr: - if label_rate == -1: - targets, lengths, ntokens = self.collater_seq_label(targets, pad) - else: - targets, lengths, ntokens = self.collater_frm_label( - targets, audio_size, audio_starts, label_rate, pad - ) - targets_list.append(targets) - lengths_list.append(lengths) - ntokens_list.append(ntokens) - return targets_list, lengths_list, ntokens_list - - def num_tokens(self, index): - return self.size(index) - - def size(self, index): - if self.pad_audio: - return self.sizes[index] - return min(self.sizes[index], self.max_sample_size) - - def 
ordered_indices(self): - if self.shuffle: - order = [np.random.permutation(len(self))] - else: - order = [np.arange(len(self))] - - order.append(self.sizes) - return np.lexsort(order)[::-1] - - def postprocess(self, wav, cur_sample_rate): - if wav.dim() == 2: - wav = wav.mean(-1) - assert wav.dim() == 1, wav.dim() - - if cur_sample_rate != self.sample_rate: - raise Exception(f"sr {cur_sample_rate} != {self.sample_rate}") - - if self.normalize: - with torch.no_grad(): - wav = F.layer_norm(wav, wav.shape) - return wav diff --git a/kosmos-g/fairseq/fairseq/data/audio/multi_modality_dataset.py b/kosmos-g/fairseq/fairseq/data/audio/multi_modality_dataset.py deleted file mode 100644 index 625a16ec9..000000000 --- a/kosmos-g/fairseq/fairseq/data/audio/multi_modality_dataset.py +++ /dev/null @@ -1,264 +0,0 @@ -# Copyright (c) 2021-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. - -import logging -import math -from typing import List, Optional, NamedTuple - -import numpy as np -import torch -from fairseq.data import ( - ConcatDataset, - LanguagePairDataset, - FileAudioDataset, - data_utils, -) -from fairseq.data import FairseqDataset - -logger = logging.getLogger(__name__) - - -class ModalityDatasetItem(NamedTuple): - datasetname: str - dataset: any - max_positions: List[int] - max_tokens: Optional[int] = None - max_sentences: Optional[int] = None - - -# MultiModalityDataset: it concate multiple datasets with different modalities. -# Compared with ConcatDataset it can 1) sample data given the ratios for different datasets -# 2) it adds mode to indicate what type of the data samples come from. 
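-# (Concretely, get_batch_samplers below applies a ratio r to a dataset's batch
-# list by repeating it floor(r) times and drawing the fractional remainder at
-# random, reshuffled per epoch, so r > 1 upsamples and r < 1 downsamples.)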
-# It will be used with GroupedEpochBatchIterator together to generate mini-batch with samples -# from the same type of dataset -# If only one dataset is used, it will perform like the original dataset with mode added -class MultiModalityDataset(ConcatDataset): - def __init__(self, datasets: List[ModalityDatasetItem]): - id_to_mode = [] - dsets = [] - max_tokens = [] - max_sentences = [] - max_positions = [] - for dset in datasets: - id_to_mode.append(dset.datasetname) - dsets.append(dset.dataset) - max_tokens.append(dset.max_tokens) - max_positions.append(dset.max_positions) - max_sentences.append(dset.max_sentences) - weights = [1.0 for s in dsets] - super().__init__(dsets, weights) - self.max_tokens = max_tokens - self.max_positions = max_positions - self.max_sentences = max_sentences - self.id_to_mode = id_to_mode - self.raw_sub_batch_samplers = [] - self._cur_epoch = 0 - - def set_epoch(self, epoch): - super().set_epoch(epoch) - self._cur_epoch = epoch - - def __getitem__(self, idx): - dataset_idx, sample_idx = self._get_dataset_and_sample_index(idx) - sample = self.datasets[dataset_idx][sample_idx] - return (dataset_idx, sample) - - def collater(self, samples): - if len(samples) == 0: - return {} - dataset_idx = samples[0][0] - # make sure all samples in samples are from same dataset - assert sum([0 if dataset_idx == s[0] else 1 for s in samples]) == 0 - samples = self.datasets[dataset_idx].collater([x[1] for x in samples]) - # add mode - samples["net_input"]["mode"] = self.id_to_mode[dataset_idx] - - return samples - - def size(self, index: int): - if len(self.datasets) == 1: - return self.datasets[0].size(index) - return super().size(index) - - @property - def sizes(self): - if len(self.datasets) == 1: - return self.datasets[0].sizes - super().sizes - - def ordered_indices(self): - """ - Returns indices sorted by length. So less padding is needed. - """ - if len(self.datasets) == 1: - return self.datasets[0].ordered_indices() - indices_group = [] - for d_idx, ds in enumerate(self.datasets): - sample_num = self.cumulative_sizes[d_idx] - if d_idx > 0: - sample_num = sample_num - self.cumulative_sizes[d_idx - 1] - assert sample_num == len(ds) - indices_group.append(ds.ordered_indices()) - return indices_group - - def get_raw_batch_samplers(self, required_batch_size_multiple, seed): - if len(self.raw_sub_batch_samplers) > 0: - logger.info(" raw_sub_batch_samplers exists. 
No action is taken") - return - with data_utils.numpy_seed(seed): - indices = self.ordered_indices() - for i, ds in enumerate(self.datasets): - indices[i] = ds.filter_indices_by_size( - indices[i], - self.max_positions[i], - )[0] - sub_batch_sampler = ds.batch_by_size( - indices[i], - max_tokens=self.max_tokens[i], - max_sentences=self.max_sentences[i], - required_batch_size_multiple=required_batch_size_multiple, - ) - self.raw_sub_batch_samplers.append(sub_batch_sampler) - - def get_batch_samplers(self, mult_ratios, required_batch_size_multiple, seed): - self.get_raw_batch_samplers(required_batch_size_multiple, seed) - batch_samplers = [] - for i, _ in enumerate(self.datasets): - if i > 0: - sub_batch_sampler = [ - [y + self.cumulative_sizes[i - 1] for y in x] - for x in self.raw_sub_batch_samplers[i] - ] - else: - sub_batch_sampler = list(self.raw_sub_batch_samplers[i]) - smp_r = mult_ratios[i] - if smp_r != 1: - is_increase = "increased" if smp_r > 1 else "decreased" - logger.info( - "number of batch for the dataset {} is {} from {} to {}".format( - self.id_to_mode[i], - is_increase, - len(sub_batch_sampler), - int(len(sub_batch_sampler) * smp_r), - ) - ) - mul_samplers = [] - for _ in range(math.floor(smp_r)): - mul_samplers = mul_samplers + sub_batch_sampler - if math.floor(smp_r) != smp_r: - with data_utils.numpy_seed(seed + self._cur_epoch): - np.random.shuffle(sub_batch_sampler) - smp_num = int( - (smp_r - math.floor(smp_r)) * len(sub_batch_sampler) - ) - mul_samplers = mul_samplers + sub_batch_sampler[:smp_num] - sub_batch_sampler = mul_samplers - else: - logger.info( - "dataset {} batch number is {} ".format( - self.id_to_mode[i], len(sub_batch_sampler) - ) - ) - batch_samplers.append(sub_batch_sampler) - - return batch_samplers - - -class LangPairMaskDataset(FairseqDataset): - def __init__( - self, - dataset: LanguagePairDataset, - src_eos: int, - src_bos: Optional[int] = None, - noise_id: Optional[int] = -1, - mask_ratio: Optional[float] = 0, - mask_type: Optional[str] = "random", - ): - self.dataset = dataset - self.src_eos = src_eos - self.src_bos = src_bos - self.noise_id = noise_id - self.mask_ratio = mask_ratio - self.mask_type = mask_type - assert mask_type in ("random", "tail") - - @property - def src_sizes(self): - return self.dataset.src_sizes - - @property - def tgt_sizes(self): - return self.dataset.tgt_sizes - - @property - def sizes(self): - # dataset.sizes can be a dynamically computed sizes: - return self.dataset.sizes - - def get_batch_shapes(self): - return self.dataset.buckets - - def num_tokens_vec(self, indices): - return self.dataset.num_tokens_vec(indices) - - def __len__(self): - return len(self.dataset) - - def num_tokens(self, index): - return self.dataset.num_tokens(index) - - def size(self, index): - return self.dataset.size(index) - - def ordered_indices(self): - return self.dataset.ordered_indices() - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.dataset.prefetch(indices) - - def mask_src_tokens(self, sample): - src_item = sample["source"] - mask = None - if self.mask_type == "random": - mask = torch.rand(len(src_item)).le(self.mask_ratio) - else: - mask = torch.ones(len(src_item)) - mask[: int(len(src_item) * (1 - self.mask_ratio))] = 0 - mask = mask.eq(1) - if src_item[0] == self.src_bos: - mask[0] = False - if src_item[-1] == self.src_eos: - mask[-1] = False - mask_src_item = src_item.masked_fill(mask, self.noise_id) - smp = {"id": sample["id"], 
"source": mask_src_item, "target": sample["target"]} - return smp - - def __getitem__(self, index): - sample = self.dataset[index] - if self.mask_ratio > 0: - sample = self.mask_src_tokens(sample) - return sample - - def collater(self, samples, pad_to_length=None): - return self.dataset.collater(samples, pad_to_length) - - -class FileAudioDatasetWrapper(FileAudioDataset): - def collater(self, samples): - samples = super().collater(samples) - if len(samples) == 0: - return {} - samples["net_input"]["src_tokens"] = samples["net_input"]["source"] - samples["net_input"]["prev_output_tokens"] = None - del samples["net_input"]["source"] - samples["net_input"]["src_lengths"] = None - samples["net_input"]["alignment"] = None - return samples diff --git a/kosmos-g/fairseq/fairseq/data/audio/raw_audio_dataset.py b/kosmos-g/fairseq/fairseq/data/audio/raw_audio_dataset.py deleted file mode 100644 index 181e2bbc9..000000000 --- a/kosmos-g/fairseq/fairseq/data/audio/raw_audio_dataset.py +++ /dev/null @@ -1,393 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import logging -import os -import sys -import io - -import numpy as np -import torch -import torch.nn.functional as F - -from .. import FairseqDataset -from ..data_utils import compute_mask_indices, get_buckets, get_bucketed_sizes -from fairseq.data.audio.audio_utils import ( - parse_path, - read_from_stored_zip, - is_sf_audio_data, -) -from fairseq.data.text_compressor import TextCompressor, TextCompressionLevel - - -logger = logging.getLogger(__name__) - - -class RawAudioDataset(FairseqDataset): - def __init__( - self, - sample_rate, - max_sample_size=None, - min_sample_size=0, - shuffle=True, - pad=False, - normalize=False, - compute_mask_indices=False, - **mask_compute_kwargs, - ): - super().__init__() - - self.sample_rate = sample_rate - self.sizes = [] - self.max_sample_size = ( - max_sample_size if max_sample_size is not None else sys.maxsize - ) - self.min_sample_size = min_sample_size - self.pad = pad - self.shuffle = shuffle - self.normalize = normalize - self.compute_mask_indices = compute_mask_indices - if self.compute_mask_indices: - self.mask_compute_kwargs = mask_compute_kwargs - self._features_size_map = {} - self._C = mask_compute_kwargs["encoder_embed_dim"] - self._conv_feature_layers = eval(mask_compute_kwargs["conv_feature_layers"]) - - def __getitem__(self, index): - raise NotImplementedError() - - def __len__(self): - return len(self.sizes) - - def postprocess(self, feats, curr_sample_rate): - if feats.dim() == 2: - feats = feats.mean(-1) - - if curr_sample_rate != self.sample_rate: - raise Exception(f"sample rate: {curr_sample_rate}, need {self.sample_rate}") - - assert feats.dim() == 1, feats.dim() - - if self.normalize: - with torch.no_grad(): - feats = F.layer_norm(feats, feats.shape) - return feats - - def crop_to_max_size(self, wav, target_size): - size = len(wav) - diff = size - target_size - if diff <= 0: - return wav - - start = np.random.randint(0, diff + 1) - end = size - diff + start - return wav[start:end] - - def _compute_mask_indices(self, dims, padding_mask): - B, T, C = dims - mask_indices, mask_channel_indices = None, None - if self.mask_compute_kwargs["mask_prob"] > 0: - mask_indices = compute_mask_indices( - (B, T), - padding_mask, - self.mask_compute_kwargs["mask_prob"], - self.mask_compute_kwargs["mask_length"], - self.mask_compute_kwargs["mask_selection"], - 
self.mask_compute_kwargs["mask_other"], - min_masks=2, - no_overlap=self.mask_compute_kwargs["no_mask_overlap"], - min_space=self.mask_compute_kwargs["mask_min_space"], - ) - mask_indices = torch.from_numpy(mask_indices) - if self.mask_compute_kwargs["mask_channel_prob"] > 0: - mask_channel_indices = compute_mask_indices( - (B, C), - None, - self.mask_compute_kwargs["mask_channel_prob"], - self.mask_compute_kwargs["mask_channel_length"], - self.mask_compute_kwargs["mask_channel_selection"], - self.mask_compute_kwargs["mask_channel_other"], - no_overlap=self.mask_compute_kwargs["no_mask_channel_overlap"], - min_space=self.mask_compute_kwargs["mask_channel_min_space"], - ) - mask_channel_indices = ( - torch.from_numpy(mask_channel_indices).unsqueeze(1).expand(-1, T, -1) - ) - - return mask_indices, mask_channel_indices - - @staticmethod - def _bucket_tensor(tensor, num_pad, value): - return F.pad(tensor, (0, num_pad), value=value) - - def collater(self, samples): - samples = [s for s in samples if s["source"] is not None] - if len(samples) == 0: - return {} - - sources = [s["source"] for s in samples] - sizes = [len(s) for s in sources] - - if self.pad: - target_size = min(max(sizes), self.max_sample_size) - else: - target_size = min(min(sizes), self.max_sample_size) - - collated_sources = sources[0].new_zeros(len(sources), target_size) - padding_mask = ( - torch.BoolTensor(collated_sources.shape).fill_(False) if self.pad else None - ) - for i, (source, size) in enumerate(zip(sources, sizes)): - diff = size - target_size - if diff == 0: - collated_sources[i] = source - elif diff < 0: - assert self.pad - collated_sources[i] = torch.cat( - [source, source.new_full((-diff,), 0.0)] - ) - padding_mask[i, diff:] = True - else: - collated_sources[i] = self.crop_to_max_size(source, target_size) - - input = {"source": collated_sources} - out = {"id": torch.LongTensor([s["id"] for s in samples])} - if self.pad: - input["padding_mask"] = padding_mask - - if hasattr(self, "num_buckets") and self.num_buckets > 0: - assert self.pad, "Cannot bucket without padding first." 
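- # Bucketing pads the batch further, up to the largest precomputed bucket
- # size among its samples, so the number of distinct batch shapes stays
- # bounded by num_buckets; that can help backends which specialize kernels
- # per tensor shape, and it presupposes pad=True (asserted above).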
- bucket = max(self._bucketed_sizes[s["id"]] for s in samples) - num_pad = bucket - collated_sources.size(-1) - if num_pad: - input["source"] = self._bucket_tensor(collated_sources, num_pad, 0) - input["padding_mask"] = self._bucket_tensor(padding_mask, num_pad, True) - - if self.compute_mask_indices: - B = input["source"].size(0) - T = self._get_mask_indices_dims(input["source"].size(-1)) - padding_mask_reshaped = input["padding_mask"].clone() - extra = padding_mask_reshaped.size(1) % T - if extra > 0: - padding_mask_reshaped = padding_mask_reshaped[:, :-extra] - padding_mask_reshaped = padding_mask_reshaped.view( - padding_mask_reshaped.size(0), T, -1 - ) - padding_mask_reshaped = padding_mask_reshaped.all(-1) - input["padding_count"] = padding_mask_reshaped.sum(-1).max().item() - mask_indices, mask_channel_indices = self._compute_mask_indices( - (B, T, self._C), - padding_mask_reshaped, - ) - input["mask_indices"] = mask_indices - input["mask_channel_indices"] = mask_channel_indices - out["sample_size"] = mask_indices.sum().item() - - out["net_input"] = input - return out - - def _get_mask_indices_dims(self, size, padding=0, dilation=1): - if size not in self._features_size_map: - L_in = size - for (_, kernel_size, stride) in self._conv_feature_layers: - L_out = L_in + 2 * padding - dilation * (kernel_size - 1) - 1 - L_out = 1 + L_out // stride - L_in = L_out - self._features_size_map[size] = L_out - return self._features_size_map[size] - - def num_tokens(self, index): - return self.size(index) - - def size(self, index): - """Return an example's size as a float or tuple. This value is used when - filtering a dataset with ``--max-positions``.""" - if self.pad: - return self.sizes[index] - return min(self.sizes[index], self.max_sample_size) - - def ordered_indices(self): - """Return an ordered list of indices. 
Batches will be constructed based - on this order.""" - - if self.shuffle: - order = [np.random.permutation(len(self))] - order.append( - np.minimum( - np.array(self.sizes), - self.max_sample_size, - ) - ) - return np.lexsort(order)[::-1] - else: - return np.arange(len(self)) - - def set_bucket_info(self, num_buckets): - self.num_buckets = num_buckets - if self.num_buckets > 0: - self._collated_sizes = np.minimum( - np.array(self.sizes), - self.max_sample_size, - ) - self.buckets = get_buckets( - self._collated_sizes, - self.num_buckets, - ) - self._bucketed_sizes = get_bucketed_sizes( - self._collated_sizes, self.buckets - ) - logger.info( - f"{len(self.buckets)} bucket(s) for the audio dataset: " - f"{self.buckets}" - ) - - -class FileAudioDataset(RawAudioDataset): - def __init__( - self, - manifest_path, - sample_rate, - max_sample_size=None, - min_sample_size=0, - shuffle=True, - pad=False, - normalize=False, - num_buckets=0, - compute_mask_indices=False, - text_compression_level=TextCompressionLevel.none, - **mask_compute_kwargs, - ): - super().__init__( - sample_rate=sample_rate, - max_sample_size=max_sample_size, - min_sample_size=min_sample_size, - shuffle=shuffle, - pad=pad, - normalize=normalize, - compute_mask_indices=compute_mask_indices, - **mask_compute_kwargs, - ) - - self.text_compressor = TextCompressor(level=text_compression_level) - - skipped = 0 - self.fnames = [] - sizes = [] - self.skipped_indices = set() - - with open(manifest_path, "r") as f: - self.root_dir = f.readline().strip() - for i, line in enumerate(f): - items = line.strip().split("\t") - assert len(items) == 2, line - sz = int(items[1]) - if min_sample_size is not None and sz < min_sample_size: - skipped += 1 - self.skipped_indices.add(i) - continue - self.fnames.append(self.text_compressor.compress(items[0])) - sizes.append(sz) - logger.info(f"loaded {len(self.fnames)}, skipped {skipped} samples") - - self.sizes = np.array(sizes, dtype=np.int64) - - try: - import pyarrow - - self.fnames = pyarrow.array(self.fnames) - except: - logger.debug( - "Could not create a pyarrow array. 
Please install pyarrow for better performance" - ) - pass - - self.set_bucket_info(num_buckets) - - def __getitem__(self, index): - import soundfile as sf - - fn = self.fnames[index] - fn = fn if isinstance(self.fnames, list) else fn.as_py() - fn = self.text_compressor.decompress(fn) - path_or_fp = os.path.join(self.root_dir, fn) - _path, slice_ptr = parse_path(path_or_fp) - if len(slice_ptr) == 2: - byte_data = read_from_stored_zip(_path, slice_ptr[0], slice_ptr[1]) - assert is_sf_audio_data(byte_data) - path_or_fp = io.BytesIO(byte_data) - - wav, curr_sample_rate = sf.read(path_or_fp, dtype="float32") - - feats = torch.from_numpy(wav).float() - feats = self.postprocess(feats, curr_sample_rate) - return {"id": index, "source": feats} - - -class BinarizedAudioDataset(RawAudioDataset): - def __init__( - self, - data_dir, - split, - sample_rate, - max_sample_size=None, - min_sample_size=0, - shuffle=True, - pad=False, - normalize=False, - num_buckets=0, - compute_mask_indices=False, - **mask_compute_kwargs, - ): - super().__init__( - sample_rate=sample_rate, - max_sample_size=max_sample_size, - min_sample_size=min_sample_size, - shuffle=shuffle, - pad=pad, - normalize=normalize, - compute_mask_indices=compute_mask_indices, - **mask_compute_kwargs, - ) - - from fairseq.data import data_utils, Dictionary - - self.fnames_dict = Dictionary.load(os.path.join(data_dir, "dict.txt")) - - root_path = os.path.join(data_dir, f"{split}.root") - if os.path.exists(root_path): - with open(root_path, "r") as f: - self.root_dir = next(f).strip() - else: - self.root_dir = None - - fnames_path = os.path.join(data_dir, split) - self.fnames = data_utils.load_indexed_dataset(fnames_path, self.fnames_dict) - lengths_path = os.path.join(data_dir, f"{split}.lengths") - - with open(lengths_path, "r") as f: - for line in f: - sz = int(line.rstrip()) - assert ( - sz >= min_sample_size - ), f"Min sample size is not supported for binarized dataset, but found a sample with size {sz}" - self.sizes.append(sz) - - self.sizes = np.array(self.sizes, dtype=np.int64) - - self.set_bucket_info(num_buckets) - logger.info(f"loaded {len(self.fnames)} samples") - - def __getitem__(self, index): - import soundfile as sf - - fname = self.fnames_dict.string(self.fnames[index], separator="") - if self.root_dir: - fname = os.path.join(self.root_dir, fname) - - wav, curr_sample_rate = sf.read(fname) - feats = torch.from_numpy(wav).float() - feats = self.postprocess(feats, curr_sample_rate) - return {"id": index, "source": feats} diff --git a/kosmos-g/fairseq/fairseq/data/audio/speech_to_speech_dataset.py b/kosmos-g/fairseq/fairseq/data/audio/speech_to_speech_dataset.py deleted file mode 100644 index 3fed09837..000000000 --- a/kosmos-g/fairseq/fairseq/data/audio/speech_to_speech_dataset.py +++ /dev/null @@ -1,406 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
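The least obvious piece of the file below is `SpeechToSpeechDataset.pack_units`, which folds `n_frames_per_step` consecutive target units into one id by base-`V` positional encoding, where `V` is the dictionary size minus the 4 special symbols (`<bos>`, `<pad>`, `<eos>`, `<unk>`). A worked sketch of just that arithmetic (the sizes and unit values are made up):

```python
import torch

# Pack n consecutive units u_0..u_{n-1} into sum_i (u_i - 4) * V**(n-1-i) + 4,
# with V = len(tgt_dict) - 4; the trailing <eos> (index 2) stays unpacked.
n, offset, V = 2, 4, 100                       # illustrative: V=100 real units
units = torch.LongTensor([7, 23, 55, 80, 2])   # two steps of 2 units + <eos>

stacked = units[:-1].view(-1, n) - offset               # [[3, 19], [51, 76]]
scale = torch.LongTensor([V ** (n - 1 - i) for i in range(n)])  # [100, 1]
packed = (stacked * scale).sum(dim=1) + offset          # [323, 5180]
res = torch.cat([packed, units[-1:]])
print(res)  # tensor([ 323, 5180,    2])
```

Note that only `input[:-1]` is packed, so a sequence must hold a whole number of steps plus the final `<eos>`; `__getitem__` trims targets to a multiple of `n_frames_per_step` before re-appending `<eos>` for exactly this reason.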
- -from dataclasses import dataclass -import logging -from pathlib import Path -from typing import Dict, List, Optional, Tuple - -import torch -from fairseq.data import ( - ConcatDataset, - data_utils as fairseq_data_utils, - Dictionary, -) -from fairseq.data.audio.data_cfg import S2SDataConfig -from fairseq.data.audio.speech_to_text_dataset import ( - _collate_frames, - get_features_or_waveform, - SpeechToTextDataset, - SpeechToTextDatasetCreator, -) - - -logger = logging.getLogger(__name__) - - -@dataclass -class SpeechToSpeechDatasetItem(object): - index: int - source: torch.Tensor - target: Optional[torch.Tensor] = None - target_speaker: Optional[torch.Tensor] = None - - -class SpeechToSpeechDataset(SpeechToTextDataset): - def __init__( - self, - split: str, - is_train_split: bool, - data_cfg: S2SDataConfig, - src_audio_paths: List[str], - src_n_frames: List[int], - tgt_audio_paths: List[str], - tgt_n_frames: List[int], - ids: Optional[List[str]] = None, - target_is_code: bool = False, - tgt_dict: Dictionary = None, - n_frames_per_step: int = 1, - ): - tgt_texts = tgt_audio_paths if target_is_code else None - super().__init__( - split, - is_train_split, - data_cfg, - src_audio_paths, - src_n_frames, - ids=ids, - tgt_dict=tgt_dict, - tgt_texts=tgt_texts, - n_frames_per_step=n_frames_per_step, - ) - - self.tgt_audio_paths = tgt_audio_paths - self.tgt_lens = [t // self.n_frames_per_step for t in tgt_n_frames] - - assert not target_is_code or tgt_dict is not None - self.target_is_code = target_is_code - - assert len(tgt_audio_paths) == self.n_samples - assert len(tgt_n_frames) == self.n_samples - - self.tgt_speakers = None - if self.cfg.target_speaker_embed: - samples = SpeechToTextDatasetCreator._load_samples_from_tsv( - self.cfg.target_speaker_embed, split - ) - spk_emb_dict = {s["id"]: s["speaker_embed"] for s in samples} - self.tgt_speakers = [spk_emb_dict[id] for id in self.ids] - assert len(self.tgt_speakers) == self.n_samples - - logger.info(self.__repr__()) - - def pack_units(self, input: torch.Tensor) -> torch.Tensor: - if self.n_frames_per_step <= 1: - return input - - offset = 4 - vocab_size = ( - len(self.tgt_dict) - offset - ) # remove offset from <bos>, <pad>, <eos>, <unk>, which is specific to fairseq dictionary - - assert input.dim() == 1 - stacked_input = ( - input[:-1].view(-1, self.n_frames_per_step) - offset - ) # remove <eos> - scale = [ - pow(vocab_size, self.n_frames_per_step - 1 - i) - for i in range(self.n_frames_per_step) - ] - scale = torch.LongTensor(scale).squeeze(0) - res = input.new((len(input) - 1) // self.n_frames_per_step + 1).fill_(input[-1]) - res[:-1] = (stacked_input * scale).sum(dim=1) + offset - - return res - - def __getitem__(self, index: int) -> SpeechToSpeechDatasetItem: - source = self._get_source_audio(index) - - if not self.target_is_code: - target = get_features_or_waveform(self.tgt_audio_paths[index]) - target = torch.from_numpy(target).float() - target = self.pack_frames(target) - else: - target = self.tgt_dict.encode_line( - self.tgt_audio_paths[index], - add_if_not_exist=False, - append_eos=True, - ).long() - if self.n_frames_per_step > 1: - n_tgt_frame = target.size(0) - 1 # exclude <eos> - keep_n_tgt_frame = n_tgt_frame - n_tgt_frame % self.n_frames_per_step - target = torch.cat( - ( - target[:keep_n_tgt_frame], - target.new_full((1,), self.tgt_dict.eos()), - ), - dim=0, - ) - - if self.tgt_speakers: - tgt_spk = get_features_or_waveform(self.tgt_speakers[index]) - tgt_spk = torch.from_numpy(tgt_spk).float() - else: - tgt_spk = 
torch.FloatTensor([]) - - return SpeechToSpeechDatasetItem( - index=index, source=source, target=target, target_speaker=tgt_spk - ) - - def _collate_target(self, samples: List[SpeechToSpeechDatasetItem]) -> torch.Tensor: - if self.target_is_code: - target = fairseq_data_utils.collate_tokens( - [x.target for x in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=False, - ) - # convert stacked units to a single id - pack_targets = [self.pack_units(x.target) for x in samples] - prev_output_tokens = fairseq_data_utils.collate_tokens( - pack_targets, - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=True, - ) - target_lengths = torch.tensor( - [x.size(0) for x in pack_targets], dtype=torch.long - ) - else: - target = _collate_frames([x.target for x in samples], is_audio_input=False) - bsz, _, d = target.size() - prev_output_tokens = torch.cat( - (target.new_full((bsz, 1, d), 0.0), target[:, :-1, :]), dim=1 - ) - target_lengths = torch.tensor( - [x.target.size(0) for x in samples], dtype=torch.long - ) - - return target, prev_output_tokens, target_lengths - - def collater( - self, samples: List[SpeechToSpeechDatasetItem], return_order: bool = False - ) -> Dict: - if len(samples) == 0: - return {} - indices = torch.tensor([x.index for x in samples], dtype=torch.long) - frames = _collate_frames([x.source for x in samples], self.cfg.use_audio_input) - # sort samples by descending number of frames - n_frames = torch.tensor([x.source.size(0) for x in samples], dtype=torch.long) - n_frames, order = n_frames.sort(descending=True) - indices = indices.index_select(0, order) - frames = frames.index_select(0, order) - - target, prev_output_tokens, target_lengths = self._collate_target(samples) - target = target.index_select(0, order) - target_lengths = target_lengths.index_select(0, order) - prev_output_tokens = prev_output_tokens.index_select(0, order) - ntokens = sum(x.target.size(0) for x in samples) - - tgt_speakers = None - if self.cfg.target_speaker_embed: - tgt_speakers = _collate_frames( - [x.target_speaker for x in samples], is_audio_input=True - ).index_select(0, order) - - net_input = { - "src_tokens": frames, - "src_lengths": n_frames, - "prev_output_tokens": prev_output_tokens, - "tgt_speaker": tgt_speakers, # TODO: unify "speaker" and "tgt_speaker" - } - out = { - "id": indices, - "net_input": net_input, - "speaker": tgt_speakers, # to support Tacotron2 loss for speech-to-spectrogram model - "target": target, - "target_lengths": target_lengths, - "ntokens": ntokens, - "nsentences": len(samples), - } - if return_order: - out["order"] = order - return out - - -class TextTargetMultitaskData(object): - # mandatory columns - KEY_ID, KEY_TEXT = "id", "tgt_text" - - def __init__(self, args, split, tgt_dict): - samples = SpeechToTextDatasetCreator._load_samples_from_tsv(args.data, split) - self.data = {s[self.KEY_ID]: s[self.KEY_TEXT] for s in samples} - self.dict = tgt_dict - self.append_eos = args.decoder_type != "ctc" - - def get(self, sample_id): - if sample_id in self.data: - return self.dict.encode_line( - self.data[sample_id], - add_if_not_exist=False, - append_eos=self.append_eos, - ) - else: - logger.warning(f"no target for {sample_id}") - return torch.IntTensor([]) - - def collater(self, samples: List[torch.Tensor]) -> torch.Tensor: - out = fairseq_data_utils.collate_tokens( - samples, - self.dict.pad(), - self.dict.eos(), - left_pad=False, - move_eos_to_beginning=False, - ).long() - - prev_out = 
fairseq_data_utils.collate_tokens( - samples, - self.dict.pad(), - self.dict.eos(), - left_pad=False, - move_eos_to_beginning=True, - ).long() - - target_lengths = torch.tensor([t.size(0) for t in samples], dtype=torch.long) - ntokens = sum(t.size(0) for t in samples) - - output = { - "prev_output_tokens": prev_out, - "target": out, - "target_lengths": target_lengths, - "ntokens": ntokens, - } - - return output - - -class SpeechToSpeechMultitaskDataset(SpeechToSpeechDataset): - def __init__(self, *argv): - super().__init__(*argv) - self.multitask_data = {} - - def add_multitask_dataset(self, task_name, task_data): - self.multitask_data[task_name] = task_data - - def __getitem__( - self, index: int - ) -> Tuple[SpeechToSpeechDatasetItem, Dict[str, torch.Tensor]]: - s2s_data = super().__getitem__(index) - - multitask_target = {} - sample_id = self.ids[index] - for task_name, task_dataset in self.multitask_data.items(): - multitask_target[task_name] = task_dataset.get(sample_id) - - return s2s_data, multitask_target - - def collater( - self, samples: List[Tuple[SpeechToSpeechDatasetItem, Dict[str, torch.Tensor]]] - ) -> Dict: - if len(samples) == 0: - return {} - - out = super().collater([s for s, _ in samples], return_order=True) - order = out["order"] - del out["order"] - - for task_name, task_dataset in self.multitask_data.items(): - if "multitask" not in out: - out["multitask"] = {} - d = [s[task_name] for _, s in samples] - task_target = task_dataset.collater(d) - out["multitask"][task_name] = { - "target": task_target["target"].index_select(0, order), - "target_lengths": task_target["target_lengths"].index_select(0, order), - "ntokens": task_target["ntokens"], - } - out["multitask"][task_name]["net_input"] = { - "prev_output_tokens": task_target["prev_output_tokens"].index_select( - 0, order - ), - } - - return out - - -class SpeechToSpeechDatasetCreator(object): - # mandatory columns - KEY_ID, KEY_SRC_AUDIO, KEY_SRC_N_FRAMES = "id", "src_audio", "src_n_frames" - KEY_TGT_AUDIO, KEY_TGT_N_FRAMES = "tgt_audio", "tgt_n_frames" - - @classmethod - def _from_list( - cls, - split_name: str, - is_train_split, - samples: List[Dict], - data_cfg: S2SDataConfig, - target_is_code: bool = False, - target_dictionary: Dictionary = None, - n_frames_per_step: int = 1, - multitask: Optional[Dict] = None, - ) -> SpeechToSpeechDataset: - audio_root = Path(data_cfg.audio_root) - ids = [s[cls.KEY_ID] for s in samples] - src_audio_paths = [ - (audio_root / s[cls.KEY_SRC_AUDIO]).as_posix() for s in samples - ] - tgt_audio_paths = [ - s[cls.KEY_TGT_AUDIO] - if target_is_code - else (audio_root / s[cls.KEY_TGT_AUDIO]).as_posix() - for s in samples - ] - src_n_frames = [int(s[cls.KEY_SRC_N_FRAMES]) for s in samples] - tgt_n_frames = [int(s[cls.KEY_TGT_N_FRAMES]) for s in samples] - - has_multitask = len(multitask) > 0 - dataset_cls = ( - SpeechToSpeechMultitaskDataset if has_multitask else SpeechToSpeechDataset - ) - - ds = dataset_cls( - split_name, - is_train_split, - data_cfg, - src_audio_paths, - src_n_frames, - tgt_audio_paths, - tgt_n_frames, - ids, - target_is_code, - target_dictionary, - n_frames_per_step, - ) - - if has_multitask: - for task_name, task_obj in multitask.items(): - task_data = TextTargetMultitaskData( - task_obj.args, split_name, task_obj.target_dictionary - ) - ds.add_multitask_dataset(task_name, task_data) - return ds - - @classmethod - def from_tsv( - cls, - root: str, - data_cfg: S2SDataConfig, - splits: str, - is_train_split: bool, - epoch: int, - seed: int, - target_is_code: bool = 
False, - target_dictionary: Dictionary = None, - n_frames_per_step: int = 1, - multitask: Optional[Dict] = None, - ) -> SpeechToSpeechDataset: - datasets = [] - for split in splits.split(","): - samples = SpeechToTextDatasetCreator._load_samples_from_tsv(root, split) - ds = cls._from_list( - split, - is_train_split, - samples, - data_cfg, - target_is_code, - target_dictionary, - n_frames_per_step, - multitask, - ) - datasets.append(ds) - return ConcatDataset(datasets) if len(datasets) > 1 else datasets[0] diff --git a/kosmos-g/fairseq/fairseq/data/audio/speech_to_text_dataset.py b/kosmos-g/fairseq/fairseq/data/audio/speech_to_text_dataset.py deleted file mode 100644 index 7bcc88b6b..000000000 --- a/kosmos-g/fairseq/fairseq/data/audio/speech_to_text_dataset.py +++ /dev/null @@ -1,553 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import csv -import io -import logging -import re -from collections import defaultdict -from pathlib import Path -from typing import Dict, List, Optional -from dataclasses import dataclass - -import numpy as np -import torch -from fairseq.data import ( - ConcatDataset, - Dictionary, - FairseqDataset, - ResamplingDataset, - data_utils as fairseq_data_utils, -) -from fairseq.data.audio.audio_utils import ( - get_fbank, - get_waveform, - read_from_stored_zip, - is_npy_data, - is_sf_audio_data, - parse_path, - FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS, -) -from fairseq.data.audio.feature_transforms import CompositeAudioFeatureTransform -from fairseq.data.audio.data_cfg import S2TDataConfig - - -logger = logging.getLogger(__name__) - - -def get_features_from_npy_or_audio(path): - ext = Path(path).suffix - if ext not in FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS: - raise ValueError(f'Unsupported file format for "{path}"') - return np.load(path) if ext == ".npy" else get_fbank(path) - - -def get_features_or_waveform_from_stored_zip( - path, - byte_offset, - byte_size, - need_waveform=False, - use_sample_rate=None, -): - assert path.endswith(".zip") - data = read_from_stored_zip(path, byte_offset, byte_size) - f = io.BytesIO(data) - if is_npy_data(data): - features_or_waveform = np.load(f) - elif is_sf_audio_data(data): - features_or_waveform = ( - get_waveform(f, always_2d=False, output_sample_rate=use_sample_rate)[0] - if need_waveform - else get_fbank(f) - ) - else: - raise ValueError(f'Unknown file format for "{path}"') - return features_or_waveform - - -def get_features_or_waveform(path: str, need_waveform=False, use_sample_rate=None): - """Get speech features from .npy file or waveform from .wav/.flac file. - The file may be inside an uncompressed ZIP file and is accessed via byte - offset and length. - - Args: - path (str): File path in the format of "<.npy/.wav/.flac path>" or - "<zip path>:<byte offset>:<byte length>". - need_waveform (bool): return waveform instead of features. - use_sample_rate (int): change sample rate for the input wave file - - Returns: - features_or_waveform (numpy.ndarray): speech features or waveform. 
- """ - _path, slice_ptr = parse_path(path) - if len(slice_ptr) == 0: - if need_waveform: - return get_waveform( - _path, always_2d=False, output_sample_rate=use_sample_rate - )[0] - return get_features_from_npy_or_audio(_path) - elif len(slice_ptr) == 2: - features_or_waveform = get_features_or_waveform_from_stored_zip( - _path, - slice_ptr[0], - slice_ptr[1], - need_waveform=need_waveform, - use_sample_rate=use_sample_rate, - ) - else: - raise ValueError(f"Invalid path: {path}") - - return features_or_waveform - - -def _collate_frames( - frames: List[torch.Tensor], is_audio_input: bool = False -) -> torch.Tensor: - """ - Convert a list of 2D frames into a padded 3D tensor - Args: - frames (list): list of 2D frames of size L[i]*f_dim. Where L[i] is - length of i-th frame and f_dim is static dimension of features - Returns: - 3D tensor of size len(frames)*len_max*f_dim where len_max is max of L[i] - """ - max_len = max(frame.size(0) for frame in frames) - if is_audio_input: - out = frames[0].new_zeros((len(frames), max_len)) - else: - out = frames[0].new_zeros((len(frames), max_len, frames[0].size(1))) - for i, v in enumerate(frames): - out[i, : v.size(0)] = v - return out - - -@dataclass -class SpeechToTextDatasetItem(object): - index: int - source: torch.Tensor - target: Optional[torch.Tensor] = None - speaker_id: Optional[int] = None - - -class SpeechToTextDataset(FairseqDataset): - LANG_TAG_TEMPLATE = "<lang:{}>" - - def __init__( - self, - split: str, - is_train_split: bool, - cfg: S2TDataConfig, - audio_paths: List[str], - n_frames: List[int], - src_texts: Optional[List[str]] = None, - tgt_texts: Optional[List[str]] = None, - speakers: Optional[List[str]] = None, - src_langs: Optional[List[str]] = None, - tgt_langs: Optional[List[str]] = None, - ids: Optional[List[str]] = None, - tgt_dict: Optional[Dictionary] = None, - pre_tokenizer=None, - bpe_tokenizer=None, - n_frames_per_step=1, - speaker_to_id=None, - append_eos=True, - ): - self.split, self.is_train_split = split, is_train_split - self.cfg = cfg - self.audio_paths, self.n_frames = audio_paths, n_frames - self.n_samples = len(audio_paths) - assert len(n_frames) == self.n_samples > 0 - assert src_texts is None or len(src_texts) == self.n_samples - assert tgt_texts is None or len(tgt_texts) == self.n_samples - assert speakers is None or len(speakers) == self.n_samples - assert src_langs is None or len(src_langs) == self.n_samples - assert tgt_langs is None or len(tgt_langs) == self.n_samples - assert ids is None or len(ids) == self.n_samples - assert (tgt_dict is None and tgt_texts is None) or ( - tgt_dict is not None and tgt_texts is not None - ) - self.src_texts, self.tgt_texts = src_texts, tgt_texts - self.src_langs, self.tgt_langs = src_langs, tgt_langs - self.speakers = speakers - self.tgt_dict = tgt_dict - self.check_tgt_lang_tag() - self.ids = ids - self.shuffle = cfg.shuffle if is_train_split else False - - self.feature_transforms = CompositeAudioFeatureTransform.from_config_dict( - self.cfg.get_feature_transforms(split, is_train_split) - ) - - self.pre_tokenizer = pre_tokenizer - self.bpe_tokenizer = bpe_tokenizer - self.n_frames_per_step = n_frames_per_step - self.speaker_to_id = speaker_to_id - - self.tgt_lens = self.get_tgt_lens_and_check_oov() - self.append_eos = append_eos - - logger.info(self.__repr__()) - - def get_tgt_lens_and_check_oov(self): - if self.tgt_texts is None: - return [0 for _ in range(self.n_samples)] - tgt_lens = [] - n_tokens, n_oov_tokens = 0, 0 - for i in range(self.n_samples): - tokenized = 
self.get_tokenized_tgt_text(i).split(" ") - oov_tokens = [ - t - for t in tokenized - if self.tgt_dict.index(t) == self.tgt_dict.unk_index - ] - n_tokens += len(tokenized) - n_oov_tokens += len(oov_tokens) - tgt_lens.append(len(tokenized)) - logger.info(f"'{self.split}' has {n_oov_tokens / n_tokens * 100:.2f}% OOV") - return tgt_lens - - def __repr__(self): - return ( - self.__class__.__name__ - + f'(split="{self.split}", n_samples={self.n_samples:_}, ' - f"prepend_tgt_lang_tag={self.cfg.prepend_tgt_lang_tag}, " - f"shuffle={self.shuffle}, transforms={self.feature_transforms}, " - f"n_frames_per_step={self.n_frames_per_step}" - ) - - @classmethod - def is_lang_tag(cls, token): - pattern = cls.LANG_TAG_TEMPLATE.replace("{}", "(.*)") - return re.match(pattern, token) - - def check_tgt_lang_tag(self): - if self.cfg.prepend_tgt_lang_tag: - assert self.tgt_langs is not None and self.tgt_dict is not None - tgt_lang_tags = [ - self.LANG_TAG_TEMPLATE.format(t) for t in set(self.tgt_langs) - ] - assert all(t in self.tgt_dict for t in tgt_lang_tags) - - @classmethod - def tokenize(cls, tokenizer, text: str): - return text if tokenizer is None else tokenizer.encode(text) - - def get_tokenized_tgt_text(self, index: int): - text = self.tokenize(self.pre_tokenizer, self.tgt_texts[index]) - text = self.tokenize(self.bpe_tokenizer, text) - return text - - def pack_frames(self, feature: torch.Tensor): - if self.n_frames_per_step == 1: - return feature - n_packed_frames = feature.shape[0] // self.n_frames_per_step - feature = feature[: self.n_frames_per_step * n_packed_frames] - return feature.reshape(n_packed_frames, -1) - - @classmethod - def get_lang_tag_idx(cls, lang: str, dictionary: Dictionary): - lang_tag_idx = dictionary.index(cls.LANG_TAG_TEMPLATE.format(lang)) - assert lang_tag_idx != dictionary.unk() - return lang_tag_idx - - def _get_source_audio(self, index: int) -> torch.Tensor: - source = get_features_or_waveform( - self.audio_paths[index], - need_waveform=self.cfg.use_audio_input, - use_sample_rate=self.cfg.use_sample_rate, - ) - if self.feature_transforms is not None: - assert not self.cfg.use_audio_input - source = self.feature_transforms(source) - source = torch.from_numpy(source).float() - return source - - def __getitem__(self, index: int) -> SpeechToTextDatasetItem: - source = self._get_source_audio(index) - source = self.pack_frames(source) - - target = None - if self.tgt_texts is not None: - tokenized = self.get_tokenized_tgt_text(index) - target = self.tgt_dict.encode_line( - tokenized, add_if_not_exist=False, append_eos=self.append_eos - ).long() - if self.cfg.prepend_tgt_lang_tag: - lang_tag_idx = self.get_lang_tag_idx( - self.tgt_langs[index], self.tgt_dict - ) - target = torch.cat((torch.LongTensor([lang_tag_idx]), target), 0) - - speaker_id = None - if self.speaker_to_id is not None: - speaker_id = self.speaker_to_id[self.speakers[index]] - return SpeechToTextDatasetItem( - index=index, source=source, target=target, speaker_id=speaker_id - ) - - def __len__(self): - return self.n_samples - - def collater( - self, samples: List[SpeechToTextDatasetItem], return_order: bool = False - ) -> Dict: - if len(samples) == 0: - return {} - indices = torch.tensor([x.index for x in samples], dtype=torch.long) - frames = _collate_frames([x.source for x in samples], self.cfg.use_audio_input) - # sort samples by descending number of frames - n_frames = torch.tensor([x.source.size(0) for x in samples], dtype=torch.long) - n_frames, order = n_frames.sort(descending=True) - indices = 
indices.index_select(0, order) - frames = frames.index_select(0, order) - - target, target_lengths = None, None - prev_output_tokens = None - ntokens = None - if self.tgt_texts is not None: - target = fairseq_data_utils.collate_tokens( - [x.target for x in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=False, - ) - target = target.index_select(0, order) - target_lengths = torch.tensor( - [x.target.size(0) for x in samples], dtype=torch.long - ).index_select(0, order) - prev_output_tokens = fairseq_data_utils.collate_tokens( - [x.target for x in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=True, - ) - prev_output_tokens = prev_output_tokens.index_select(0, order) - ntokens = sum(x.target.size(0) for x in samples) - - speaker = None - if self.speaker_to_id is not None: - speaker = ( - torch.tensor([s.speaker_id for s in samples], dtype=torch.long) - .index_select(0, order) - .view(-1, 1) - ) - - net_input = { - "src_tokens": frames, - "src_lengths": n_frames, - "prev_output_tokens": prev_output_tokens, - } - out = { - "id": indices, - "net_input": net_input, - "speaker": speaker, - "target": target, - "target_lengths": target_lengths, - "ntokens": ntokens, - "nsentences": len(samples), - } - if return_order: - out["order"] = order - return out - - def num_tokens(self, index): - return self.n_frames[index] - - def size(self, index): - return self.n_frames[index], self.tgt_lens[index] - - @property - def sizes(self): - return np.array(self.n_frames) - - @property - def can_reuse_epoch_itr_across_epochs(self): - return True - - def ordered_indices(self): - if self.shuffle: - order = [np.random.permutation(len(self))] - else: - order = [np.arange(len(self))] - # first by descending order of # of frames then by original/random order - order.append([-n for n in self.n_frames]) - return np.lexsort(order) - - def prefetch(self, indices): - raise False - - -class SpeechToTextDatasetCreator(object): - # mandatory columns - KEY_ID, KEY_AUDIO, KEY_N_FRAMES = "id", "audio", "n_frames" - KEY_TGT_TEXT = "tgt_text" - # optional columns - KEY_SPEAKER, KEY_SRC_TEXT = "speaker", "src_text" - KEY_SRC_LANG, KEY_TGT_LANG = "src_lang", "tgt_lang" - # default values - DEFAULT_SPEAKER = DEFAULT_SRC_TEXT = DEFAULT_LANG = "" - - @classmethod - def _from_list( - cls, - split_name: str, - is_train_split, - samples: List[Dict], - cfg: S2TDataConfig, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id, - ) -> SpeechToTextDataset: - audio_root = Path(cfg.audio_root) - ids = [s[cls.KEY_ID] for s in samples] - audio_paths = [(audio_root / s[cls.KEY_AUDIO]).as_posix() for s in samples] - n_frames = [int(s[cls.KEY_N_FRAMES]) for s in samples] - tgt_texts = [s[cls.KEY_TGT_TEXT] for s in samples] - src_texts = [s.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for s in samples] - speakers = [s.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for s in samples] - src_langs = [s.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for s in samples] - tgt_langs = [s.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for s in samples] - return SpeechToTextDataset( - split_name, - is_train_split, - cfg, - audio_paths, - n_frames, - src_texts=src_texts, - tgt_texts=tgt_texts, - speakers=speakers, - src_langs=src_langs, - tgt_langs=tgt_langs, - ids=ids, - tgt_dict=tgt_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - n_frames_per_step=n_frames_per_step, - speaker_to_id=speaker_to_id, - ) - - @classmethod - def 
get_size_ratios( - cls, datasets: List[SpeechToTextDataset], alpha: float = 1.0 - ) -> List[float]: - """Size ratios for temperature-based sampling - (https://arxiv.org/abs/1907.05019)""" - - id_to_lp, lp_to_sz = {}, defaultdict(int) - for ds in datasets: - lang_pairs = {f"{s}->{t}" for s, t in zip(ds.src_langs, ds.tgt_langs)} - assert len(lang_pairs) == 1 - lang_pair = list(lang_pairs)[0] - id_to_lp[ds.split] = lang_pair - lp_to_sz[lang_pair] += sum(ds.n_frames) - - sz_sum = sum(v for v in lp_to_sz.values()) - lp_to_prob = {k: v / sz_sum for k, v in lp_to_sz.items()} - lp_to_tgt_prob = {k: v ** alpha for k, v in lp_to_prob.items()} - prob_sum = sum(v for v in lp_to_tgt_prob.values()) - lp_to_tgt_prob = {k: v / prob_sum for k, v in lp_to_tgt_prob.items()} - lp_to_sz_ratio = { - k: (lp_to_tgt_prob[k] * sz_sum) / v for k, v in lp_to_sz.items() - } - size_ratio = [lp_to_sz_ratio[id_to_lp[ds.split]] for ds in datasets] - - p_formatted = { - k: f"{lp_to_prob[k]:.3f}->{lp_to_tgt_prob[k]:.3f}" for k in lp_to_sz - } - logger.info(f"sampling probability balancing: {p_formatted}") - sr_formatted = {ds.split: f"{r:.3f}" for ds, r in zip(datasets, size_ratio)} - logger.info(f"balanced sampling size ratio: {sr_formatted}") - return size_ratio - - @classmethod - def _load_samples_from_tsv(cls, root: str, split: str): - tsv_path = Path(root) / f"{split}.tsv" - if not tsv_path.is_file(): - raise FileNotFoundError(f"Dataset not found: {tsv_path}") - with open(tsv_path) as f: - reader = csv.DictReader( - f, - delimiter="\t", - quotechar=None, - doublequote=False, - lineterminator="\n", - quoting=csv.QUOTE_NONE, - ) - samples = [dict(e) for e in reader] - if len(samples) == 0: - raise ValueError(f"Empty manifest: {tsv_path}") - return samples - - @classmethod - def _from_tsv( - cls, - root: str, - cfg: S2TDataConfig, - split: str, - tgt_dict, - is_train_split: bool, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id, - ) -> SpeechToTextDataset: - samples = cls._load_samples_from_tsv(root, split) - return cls._from_list( - split, - is_train_split, - samples, - cfg, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id, - ) - - @classmethod - def from_tsv( - cls, - root: str, - cfg: S2TDataConfig, - splits: str, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - is_train_split: bool, - epoch: int, - seed: int, - n_frames_per_step: int = 1, - speaker_to_id=None, - ) -> SpeechToTextDataset: - datasets = [ - cls._from_tsv( - root, - cfg, - split, - tgt_dict, - is_train_split, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id, - ) - for split in splits.split(",") - ] - - if is_train_split and len(datasets) > 1 and cfg.sampling_alpha != 1.0: - # temperature-based sampling - size_ratios = cls.get_size_ratios(datasets, alpha=cfg.sampling_alpha) - datasets = [ - ResamplingDataset( - d, size_ratio=r, seed=seed, epoch=epoch, replace=(r >= 1.0) - ) - for r, d in zip(size_ratios, datasets) - ] - - return ConcatDataset(datasets) if len(datasets) > 1 else datasets[0] diff --git a/kosmos-g/fairseq/fairseq/data/audio/speech_to_text_joint_dataset.py b/kosmos-g/fairseq/fairseq/data/audio/speech_to_text_joint_dataset.py deleted file mode 100644 index 505ee81f3..000000000 --- a/kosmos-g/fairseq/fairseq/data/audio/speech_to_text_joint_dataset.py +++ /dev/null @@ -1,359 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
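To close out `speech_to_text_dataset.py`: the temperature-based sampling in `get_size_ratios` above maps per-language-pair sizes `s_k` to probabilities `p_k = s_k / sum(s)`, flattens them to `q_k` proportional to `p_k ** alpha`, and resamples each dataset by `r_k = q_k * sum(s) / s_k`. A worked example with made-up sizes (not from any real corpus):

```python
# Three hypothetical language pairs; alpha < 1 upsamples the low-resource one.
sizes = {"en->de": 1_000_000, "en->fr": 500_000, "en->lv": 50_000}
alpha = 0.5

total = sum(sizes.values())
prob = {k: v / total for k, v in sizes.items()}           # p_k
tgt = {k: p ** alpha for k, p in prob.items()}
norm = sum(tgt.values())
tgt = {k: v / norm for k, v in tgt.items()}               # q_k
ratio = {k: tgt[k] * total / v for k, v in sizes.items()}
print({k: round(r, 2) for k, r in ratio.items()})
# {'en->de': 0.8, 'en->fr': 1.14, 'en->lv': 3.59}
```

`from_tsv` then wraps each split in a `ResamplingDataset` with `size_ratio=r` and `replace=(r >= 1.0)`, re-drawing the sample each epoch from the given seed.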
- -import logging -from pathlib import Path -from typing import Dict, List, Optional, NamedTuple - -import torch -from fairseq.data import ( - ConcatDataset, - Dictionary, - ResamplingDataset, - data_utils as fairseq_data_utils, -) -from fairseq.data.audio.speech_to_text_dataset import ( - SpeechToTextDataset, - S2TDataConfig, - SpeechToTextDatasetCreator, -) - - -logger = logging.getLogger(__name__) - - -class S2TJointDataConfig(S2TDataConfig): - """Wrapper class for data config YAML""" - - @property - def src_vocab_filename(self): - """fairseq vocabulary file under data root""" - return self.config.get("src_vocab_filename", "src_dict.txt") - - @property - def src_pre_tokenizer(self) -> Dict: - """Pre-tokenizer to apply before subword tokenization. Returning - a dictionary with `tokenizer` providing the tokenizer name and - the other items providing the tokenizer-specific arguments. - Tokenizers are defined in `fairseq.data.encoders.*`""" - return self.config.get("src_pre_tokenizer", {"tokenizer": None}) - - @property - def src_bpe_tokenizer(self) -> Dict: - """Subword tokenizer to apply on source text after pre-tokenization. - Returning a dictionary with `bpe` providing the tokenizer name and - the other items providing the tokenizer-specific arguments. - Tokenizers are defined in `fairseq.data.encoders.*`""" - return self.config.get("src_bpe_tokenizer", {"bpe": None}) - - @property - def prepend_tgt_lang_tag_no_change(self) -> bool: - """Prepend target lang ID token as the prev_output_tokens BOS (e.g. for - to-many multilingual setting). No change needed during inference. - """ - return self.config.get("prepend_tgt_lang_tag_no_change", False) - - @property - def sampling_text_alpha(self): - """Hyper-parameter alpha = 1/T for temperature-based resampling. 
(text - input only) (alpha = 1 for no resampling)""" - return self.config.get("sampling_text_alpha", 1.0) - - -class SpeechToTextJointDatasetItem(NamedTuple): - index: int - source: torch.Tensor - target: Optional[torch.Tensor] = None - src_txt_tokens: Optional[torch.Tensor] = None - tgt_lang_tag: Optional[int] = None - src_lang_tag: Optional[int] = None - tgt_alignment: Optional[torch.Tensor] = None - - -# use_src_lang_id: -# 0: don't use src_lang_id -# 1: attach src_lang_id to the src_txt_tokens as eos -class SpeechToTextJointDataset(SpeechToTextDataset): - def __init__( - self, - split: str, - is_train_split: bool, - cfg: S2TJointDataConfig, - audio_paths: List[str], - n_frames: List[int], - src_texts: Optional[List[str]] = None, - tgt_texts: Optional[List[str]] = None, - speakers: Optional[List[str]] = None, - src_langs: Optional[List[str]] = None, - tgt_langs: Optional[List[str]] = None, - ids: Optional[List[str]] = None, - tgt_dict: Optional[Dictionary] = None, - src_dict: Optional[Dictionary] = None, - pre_tokenizer=None, - bpe_tokenizer=None, - src_pre_tokenizer=None, - src_bpe_tokenizer=None, - append_eos: Optional[bool] = True, - alignment: Optional[List[str]] = None, - use_src_lang_id: Optional[int] = 0, - ): - super().__init__( - split, - is_train_split, - cfg, - audio_paths, - n_frames, - src_texts=src_texts, - tgt_texts=tgt_texts, - speakers=speakers, - src_langs=src_langs, - tgt_langs=tgt_langs, - ids=ids, - tgt_dict=tgt_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - append_eos=append_eos, - ) - - self.src_dict = src_dict - self.src_pre_tokenizer = src_pre_tokenizer - self.src_bpe_tokenizer = src_bpe_tokenizer - self.alignment = None - self.use_src_lang_id = use_src_lang_id - if alignment is not None: - self.alignment = [ - [float(s) for s in sample.split()] for sample in alignment - ] - - def get_tokenized_src_text(self, index: int): - text = self.tokenize(self.src_pre_tokenizer, self.src_texts[index]) - text = self.tokenize(self.src_bpe_tokenizer, text) - return text - - def __getitem__(self, index: int) -> SpeechToTextJointDatasetItem: - s2t_dataset_item = super().__getitem__(index) - src_tokens = None - src_lang_tag = None - if self.src_texts is not None and self.src_dict is not None: - src_tokens = self.get_tokenized_src_text(index) - src_tokens = self.src_dict.encode_line( - src_tokens, add_if_not_exist=False, append_eos=True - ).long() - if self.use_src_lang_id > 0: - src_lang_tag = self.get_lang_tag_idx( - self.src_langs[index], self.src_dict - ) - tgt_lang_tag = None - if self.cfg.prepend_tgt_lang_tag_no_change: - # prepend_tgt_lang_tag_no_change: modify prev_output_tokens instead - tgt_lang_tag = self.get_lang_tag_idx(self.tgt_langs[index], self.tgt_dict) - ali = None - if self.alignment is not None: - ali = torch.Tensor(self.alignment[index]).float() - - return SpeechToTextJointDatasetItem( - index=index, - source=s2t_dataset_item.source, - target=s2t_dataset_item.target, - src_txt_tokens=src_tokens, - tgt_lang_tag=tgt_lang_tag, - src_lang_tag=src_lang_tag, - tgt_alignment=ali, - ) - - def __len__(self): - return self.n_samples - - def collater(self, samples: List[SpeechToTextJointDatasetItem]) -> Dict: - s2t_out = super().collater(samples, return_order=True) - if s2t_out == {}: - return s2t_out - net_input, order = s2t_out["net_input"], s2t_out["order"] - - if self.src_texts is not None and self.src_dict is not None: - src_txt_tokens = fairseq_data_utils.collate_tokens( - [x.src_txt_tokens for x in samples], - self.src_dict.pad(), - 
self.src_dict.eos(), - left_pad=False, - move_eos_to_beginning=False, - ) - src_txt_lengths = torch.tensor( - [x.src_txt_tokens.size()[0] for x in samples], dtype=torch.long - ) - if self.use_src_lang_id > 0: - src_lang_idxs = torch.tensor( - [s.src_lang_tag for s in samples], dtype=src_txt_tokens.dtype - ) - if self.use_src_lang_id == 1: # replace eos with lang_id - eos_idx = src_txt_lengths - 1 - src_txt_tokens.scatter_( - 1, eos_idx.view(-1, 1), src_lang_idxs.view(-1, 1) - ) - else: - raise NotImplementedError("Implementation is required") - - src_txt_tokens = src_txt_tokens.index_select(0, order) - src_txt_lengths = src_txt_lengths.index_select(0, order) - net_input["src_txt_tokens"] = src_txt_tokens - net_input["src_txt_lengths"] = src_txt_lengths - - net_input["alignment"] = None - if self.alignment is not None: - max_len = max([s.tgt_alignment.size(0) for s in samples]) - alignment = torch.ones(len(samples), max_len).float() - for i, s in enumerate(samples): - cur_len = s.tgt_alignment.size(0) - alignment[i][:cur_len].copy_(s.tgt_alignment) - net_input["alignment"] = alignment.index_select(0, order) - - if self.tgt_texts is not None and samples[0].tgt_lang_tag is not None: - for i in range(len(samples)): - net_input["prev_output_tokens"][i][0] = samples[order[i]].tgt_lang_tag - - out = { - "id": s2t_out["id"], - "net_input": net_input, - "target": s2t_out["target"], - "target_lengths": s2t_out["target_lengths"], - "ntokens": s2t_out["ntokens"], - "nsentences": len(samples), - } - return out - - -class SpeechToTextJointDatasetCreator(SpeechToTextDatasetCreator): - KEY_ALIGN = "align" - - @classmethod - def _from_list( - cls, - split_name: str, - is_train_split, - samples: List[Dict], - cfg: S2TJointDataConfig, - tgt_dict, - src_dict, - pre_tokenizer, - bpe_tokenizer, - src_pre_tokenizer, - src_bpe_tokenizer, - append_eos, - use_src_lang_id, - ) -> SpeechToTextJointDataset: - audio_root = Path(cfg.audio_root) - ids = [s[cls.KEY_ID] for s in samples] - audio_paths = [(audio_root / s[cls.KEY_AUDIO]).as_posix() for s in samples] - n_frames = [int(s[cls.KEY_N_FRAMES]) for s in samples] - tgt_texts = [s[cls.KEY_TGT_TEXT] for s in samples] - src_texts = [s.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for s in samples] - speakers = [s.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for s in samples] - src_langs = [s.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for s in samples] - tgt_langs = [s.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for s in samples] - tgt_alignment = None - if cls.KEY_ALIGN in samples[0].keys(): - tgt_alignment = [s[cls.KEY_ALIGN] for s in samples] - return SpeechToTextJointDataset( - split_name, - is_train_split, - cfg, - audio_paths, - n_frames, - src_texts=src_texts, - tgt_texts=tgt_texts, - speakers=speakers, - src_langs=src_langs, - tgt_langs=tgt_langs, - ids=ids, - tgt_dict=tgt_dict, - src_dict=src_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - src_pre_tokenizer=src_pre_tokenizer, - src_bpe_tokenizer=src_bpe_tokenizer, - append_eos=append_eos, - alignment=tgt_alignment, - use_src_lang_id=use_src_lang_id, - ) - - @classmethod - def _from_tsv( - cls, - root: str, - cfg: S2TJointDataConfig, - split: str, - tgt_dict, - src_dict, - is_train_split: bool, - pre_tokenizer, - bpe_tokenizer, - src_pre_tokenizer, - src_bpe_tokenizer, - append_eos: bool, - use_src_lang_id: int, - ) -> SpeechToTextJointDataset: - samples = cls._load_samples_from_tsv(root, split) - return cls._from_list( - split, - is_train_split, - samples, - cfg, - tgt_dict, - src_dict, - 
pre_tokenizer, - bpe_tokenizer, - src_pre_tokenizer, - src_bpe_tokenizer, - append_eos, - use_src_lang_id, - ) - - @classmethod - def from_tsv( - cls, - root: str, - cfg: S2TJointDataConfig, - splits: str, - tgt_dict, - src_dict, - pre_tokenizer, - bpe_tokenizer, - src_pre_tokenizer, - src_bpe_tokenizer, - is_train_split: bool, - epoch: int, - seed: int, - append_eos: Optional[bool] = True, - use_src_lang_id: Optional[int] = 0, - ) -> SpeechToTextJointDataset: - datasets = [ - cls._from_tsv( - root, - cfg, - split, - tgt_dict, - src_dict, - is_train_split, - pre_tokenizer, - bpe_tokenizer, - src_pre_tokenizer, - src_bpe_tokenizer, - append_eos=append_eos, - use_src_lang_id=use_src_lang_id, - ) - for split in splits.split(",") - ] - - if is_train_split and len(datasets) > 1 and cfg.sampling_alpha != 1.0: - # temperature-based sampling - size_ratios = cls.get_size_ratios(datasets, alpha=cfg.sampling_alpha) - datasets = [ - ResamplingDataset( - d, size_ratio=r, seed=seed, epoch=epoch, replace=(r >= 1.0) - ) - for r, d in zip(size_ratios, datasets) - ] - - return ConcatDataset(datasets) if len(datasets) > 1 else datasets[0] diff --git a/kosmos-g/fairseq/fairseq/data/audio/text_to_speech_dataset.py b/kosmos-g/fairseq/fairseq/data/audio/text_to_speech_dataset.py deleted file mode 100644 index 0e1489ae8..000000000 --- a/kosmos-g/fairseq/fairseq/data/audio/text_to_speech_dataset.py +++ /dev/null @@ -1,248 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory.abs - -from pathlib import Path -from typing import List, Dict, Optional, Any -from dataclasses import dataclass - -import numpy as np -import torch - -from fairseq.data.audio.speech_to_text_dataset import ( - SpeechToTextDataset, - SpeechToTextDatasetCreator, - S2TDataConfig, - _collate_frames, - get_features_or_waveform, -) -from fairseq.data import Dictionary, data_utils as fairseq_data_utils - - -@dataclass -class TextToSpeechDatasetItem(object): - index: int - source: torch.Tensor - target: Optional[torch.Tensor] = None - speaker_id: Optional[int] = None - duration: Optional[torch.Tensor] = None - pitch: Optional[torch.Tensor] = None - energy: Optional[torch.Tensor] = None - - -class TextToSpeechDataset(SpeechToTextDataset): - def __init__( - self, - split: str, - is_train_split: bool, - cfg: S2TDataConfig, - audio_paths: List[str], - n_frames: List[int], - src_texts: Optional[List[str]] = None, - tgt_texts: Optional[List[str]] = None, - speakers: Optional[List[str]] = None, - src_langs: Optional[List[str]] = None, - tgt_langs: Optional[List[str]] = None, - ids: Optional[List[str]] = None, - tgt_dict: Optional[Dictionary] = None, - pre_tokenizer=None, - bpe_tokenizer=None, - n_frames_per_step=1, - speaker_to_id=None, - durations: Optional[List[List[int]]] = None, - pitches: Optional[List[str]] = None, - energies: Optional[List[str]] = None, - ): - super(TextToSpeechDataset, self).__init__( - split, - is_train_split, - cfg, - audio_paths, - n_frames, - src_texts=src_texts, - tgt_texts=tgt_texts, - speakers=speakers, - src_langs=src_langs, - tgt_langs=tgt_langs, - ids=ids, - tgt_dict=tgt_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - n_frames_per_step=n_frames_per_step, - speaker_to_id=speaker_to_id, - ) - self.durations = durations - self.pitches = pitches - self.energies = 
energies - - def __getitem__(self, index: int) -> TextToSpeechDatasetItem: - s2t_item = super().__getitem__(index) - - duration, pitch, energy = None, None, None - if self.durations is not None: - duration = torch.tensor( - self.durations[index] + [0], dtype=torch.long # pad 0 for EOS - ) - if self.pitches is not None: - pitch = get_features_or_waveform(self.pitches[index]) - pitch = torch.from_numpy( - np.concatenate((pitch, [0])) # pad 0 for EOS - ).float() - if self.energies is not None: - energy = get_features_or_waveform(self.energies[index]) - energy = torch.from_numpy( - np.concatenate((energy, [0])) # pad 0 for EOS - ).float() - return TextToSpeechDatasetItem( - index=index, - source=s2t_item.source, - target=s2t_item.target, - speaker_id=s2t_item.speaker_id, - duration=duration, - pitch=pitch, - energy=energy, - ) - - def collater(self, samples: List[TextToSpeechDatasetItem]) -> Dict[str, Any]: - if len(samples) == 0: - return {} - - src_lengths, order = torch.tensor( - [s.target.shape[0] for s in samples], dtype=torch.long - ).sort(descending=True) - id_ = torch.tensor([s.index for s in samples], dtype=torch.long).index_select( - 0, order - ) - feat = _collate_frames( - [s.source for s in samples], self.cfg.use_audio_input - ).index_select(0, order) - target_lengths = torch.tensor( - [s.source.shape[0] for s in samples], dtype=torch.long - ).index_select(0, order) - - src_tokens = fairseq_data_utils.collate_tokens( - [s.target for s in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=False, - ).index_select(0, order) - - speaker = None - if self.speaker_to_id is not None: - speaker = ( - torch.tensor([s.speaker_id for s in samples], dtype=torch.long) - .index_select(0, order) - .view(-1, 1) - ) - - bsz, _, d = feat.size() - prev_output_tokens = torch.cat( - (feat.new_zeros((bsz, 1, d)), feat[:, :-1, :]), dim=1 - ) - - durations, pitches, energies = None, None, None - if self.durations is not None: - durations = fairseq_data_utils.collate_tokens( - [s.duration for s in samples], 0 - ).index_select(0, order) - assert src_tokens.shape[1] == durations.shape[1] - if self.pitches is not None: - pitches = _collate_frames([s.pitch for s in samples], True) - pitches = pitches.index_select(0, order) - assert src_tokens.shape[1] == pitches.shape[1] - if self.energies is not None: - energies = _collate_frames([s.energy for s in samples], True) - energies = energies.index_select(0, order) - assert src_tokens.shape[1] == energies.shape[1] - src_texts = [self.tgt_dict.string(samples[i].target) for i in order] - - return { - "id": id_, - "net_input": { - "src_tokens": src_tokens, - "src_lengths": src_lengths, - "prev_output_tokens": prev_output_tokens, - }, - "speaker": speaker, - "target": feat, - "durations": durations, - "pitches": pitches, - "energies": energies, - "target_lengths": target_lengths, - "ntokens": sum(target_lengths).item(), - "nsentences": len(samples), - "src_texts": src_texts, - } - - -class TextToSpeechDatasetCreator(SpeechToTextDatasetCreator): - KEY_DURATION = "duration" - KEY_PITCH = "pitch" - KEY_ENERGY = "energy" - - @classmethod - def _from_list( - cls, - split_name: str, - is_train_split, - samples: List[Dict], - cfg: S2TDataConfig, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id, - ) -> TextToSpeechDataset: - audio_root = Path(cfg.audio_root) - ids = [s[cls.KEY_ID] for s in samples] - audio_paths = [(audio_root / s[cls.KEY_AUDIO]).as_posix() for s in samples] - n_frames = 
[int(s[cls.KEY_N_FRAMES]) for s in samples] - tgt_texts = [s[cls.KEY_TGT_TEXT] for s in samples] - src_texts = [s.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for s in samples] - speakers = [s.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for s in samples] - src_langs = [s.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for s in samples] - tgt_langs = [s.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for s in samples] - - durations = [s.get(cls.KEY_DURATION, None) for s in samples] - durations = [ - None if dd is None else [int(d) for d in dd.split(" ")] for dd in durations - ] - durations = None if any(dd is None for dd in durations) else durations - - pitches = [s.get(cls.KEY_PITCH, None) for s in samples] - pitches = [ - None if pp is None else (audio_root / pp).as_posix() for pp in pitches - ] - pitches = None if any(pp is None for pp in pitches) else pitches - - energies = [s.get(cls.KEY_ENERGY, None) for s in samples] - energies = [ - None if ee is None else (audio_root / ee).as_posix() for ee in energies - ] - energies = None if any(ee is None for ee in energies) else energies - - return TextToSpeechDataset( - split_name, - is_train_split, - cfg, - audio_paths, - n_frames, - src_texts, - tgt_texts, - speakers, - src_langs, - tgt_langs, - ids, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id, - durations, - pitches, - energies, - ) diff --git a/kosmos-g/fairseq/fairseq/data/backtranslation_dataset.py b/kosmos-g/fairseq/fairseq/data/backtranslation_dataset.py deleted file mode 100644 index 8f70c90df..000000000 --- a/kosmos-g/fairseq/fairseq/data/backtranslation_dataset.py +++ /dev/null @@ -1,165 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from fairseq import utils - -from . import FairseqDataset - - -def backtranslate_samples(samples, collate_fn, generate_fn, cuda=True): - """Backtranslate a list of samples. - - Given an input (*samples*) of the form: - - [{'id': 1, 'source': 'hallo welt'}] - - this will return: - - [{'id': 1, 'source': 'hello world', 'target': 'hallo welt'}] - - Args: - samples (List[dict]): samples to backtranslate. Individual samples are - expected to have a 'source' key, which will become the 'target' - after backtranslation. - collate_fn (callable): function to collate samples into a mini-batch - generate_fn (callable): function to generate backtranslations - cuda (bool): use GPU for generation (default: ``True``) - - Returns: - List[dict]: an updated list of samples with a backtranslated source - """ - collated_samples = collate_fn(samples) - s = utils.move_to_cuda(collated_samples) if cuda else collated_samples - generated_sources = generate_fn(s) - - id_to_src = {sample["id"]: sample["source"] for sample in samples} - - # Go through each tgt sentence in batch and its corresponding best - # generated hypothesis and create a backtranslation data pair - # {id: id, source: generated backtranslation, target: original tgt} - return [ - { - "id": id.item(), - "target": id_to_src[id.item()], - "source": hypos[0]["tokens"].cpu(), - } - for id, hypos in zip(collated_samples["id"], generated_sources) - ] - - -class BacktranslationDataset(FairseqDataset): - """ - Sets up a backtranslation dataset which takes a tgt batch, generates - a src using a tgt-src backtranslation function (*backtranslation_fn*), - and returns the corresponding `{generated src, input tgt}` batch. 
- - Args: - tgt_dataset (~fairseq.data.FairseqDataset): the dataset to be - backtranslated. Only the source side of this dataset will be used. - After backtranslation, the source sentences in this dataset will be - returned as the targets. - src_dict (~fairseq.data.Dictionary): the dictionary of backtranslated - sentences. - tgt_dict (~fairseq.data.Dictionary, optional): the dictionary of - sentences to be backtranslated. - backtranslation_fn (callable, optional): function to call to generate - backtranslations. This is typically the `generate` method of a - :class:`~fairseq.sequence_generator.SequenceGenerator` object. - Pass in None when it is not available at initialization time, and - use set_backtranslation_fn function to set it when available. - output_collater (callable, optional): function to call on the - backtranslated samples to create the final batch - (default: ``tgt_dataset.collater``). - cuda: use GPU for generation - """ - - def __init__( - self, - tgt_dataset, - src_dict, - tgt_dict=None, - backtranslation_fn=None, - output_collater=None, - cuda=True, - **kwargs - ): - self.tgt_dataset = tgt_dataset - self.backtranslation_fn = backtranslation_fn - self.output_collater = ( - output_collater if output_collater is not None else tgt_dataset.collater - ) - self.cuda = cuda if torch.cuda.is_available() else False - self.src_dict = src_dict - self.tgt_dict = tgt_dict - - def __getitem__(self, index): - """ - Returns a single sample from *tgt_dataset*. Note that backtranslation is - not applied in this step; use :func:`collater` instead to backtranslate - a batch of samples. - """ - return self.tgt_dataset[index] - - def __len__(self): - return len(self.tgt_dataset) - - def set_backtranslation_fn(self, backtranslation_fn): - self.backtranslation_fn = backtranslation_fn - - def collater(self, samples): - """Merge and backtranslate a list of samples to form a mini-batch. - - Using the samples from *tgt_dataset*, load a collated target sample to - feed to the backtranslation model. Then take the backtranslation with - the best score as the source and the original input as the target. - - Note: we expect *tgt_dataset* to provide a function `collater()` that - will collate samples into the format expected by *backtranslation_fn*. - After backtranslation, we will feed the new list of samples (i.e., the - `(backtranslated source, original source)` pairs) to *output_collater* - and return the result. - - Args: - samples (List[dict]): samples to backtranslate and collate - - Returns: - dict: a mini-batch with keys coming from *output_collater* - """ - if samples[0].get("is_dummy", False): - return samples - samples = backtranslate_samples( - samples=samples, - collate_fn=self.tgt_dataset.collater, - generate_fn=(lambda net_input: self.backtranslation_fn(net_input)), - cuda=self.cuda, - ) - return self.output_collater(samples) - - def num_tokens(self, index): - """Just use the tgt dataset num_tokens""" - return self.tgt_dataset.num_tokens(index) - - def ordered_indices(self): - """Just use the tgt dataset ordered_indices""" - return self.tgt_dataset.ordered_indices() - - def size(self, index): - """Return an example's size as a float or tuple. This value is used - when filtering a dataset with ``--max-positions``. - - Note: we use *tgt_dataset* to approximate the length of the source - sentence, since we do not know the actual length until after - backtranslation. 
- """ - tgt_size = self.tgt_dataset.size(index)[0] - return (tgt_size, tgt_size) - - @property - def supports_prefetch(self): - return getattr(self.tgt_dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.tgt_dataset.prefetch(indices) diff --git a/kosmos-g/fairseq/fairseq/data/base_wrapper_dataset.py b/kosmos-g/fairseq/fairseq/data/base_wrapper_dataset.py deleted file mode 100644 index 134d398b4..000000000 --- a/kosmos-g/fairseq/fairseq/data/base_wrapper_dataset.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from torch.utils.data.dataloader import default_collate - -from . import FairseqDataset - - -class BaseWrapperDataset(FairseqDataset): - def __init__(self, dataset): - super().__init__() - self.dataset = dataset - - def __getitem__(self, index): - return self.dataset[index] - - def __len__(self): - return len(self.dataset) - - def collater(self, samples): - if hasattr(self.dataset, "collater"): - return self.dataset.collater(samples) - else: - return default_collate(samples) - - @property - def sizes(self): - return self.dataset.sizes - - def num_tokens(self, index): - return self.dataset.num_tokens(index) - - def size(self, index): - return self.dataset.size(index) - - def ordered_indices(self): - return self.dataset.ordered_indices() - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def attr(self, attr: str, index: int): - return self.dataset.attr(attr, index) - - def prefetch(self, indices): - self.dataset.prefetch(indices) - - def get_batch_shapes(self): - return self.dataset.get_batch_shapes() - - def batch_by_size( - self, - indices, - max_tokens=None, - max_sentences=None, - required_batch_size_multiple=1, - ): - return self.dataset.batch_by_size( - indices, - max_tokens=max_tokens, - max_sentences=max_sentences, - required_batch_size_multiple=required_batch_size_multiple, - ) - - def filter_indices_by_size(self, indices, max_sizes): - return self.dataset.filter_indices_by_size(indices, max_sizes) - - @property - def can_reuse_epoch_itr_across_epochs(self): - return self.dataset.can_reuse_epoch_itr_across_epochs - - def set_epoch(self, epoch): - super().set_epoch(epoch) - if hasattr(self.dataset, "set_epoch"): - self.dataset.set_epoch(epoch) diff --git a/kosmos-g/fairseq/fairseq/data/bucket_pad_length_dataset.py b/kosmos-g/fairseq/fairseq/data/bucket_pad_length_dataset.py deleted file mode 100644 index 0f9410014..000000000 --- a/kosmos-g/fairseq/fairseq/data/bucket_pad_length_dataset.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch.nn.functional as F -from fairseq.data import BaseWrapperDataset -from fairseq.data.data_utils import get_buckets, get_bucketed_sizes - - -class BucketPadLengthDataset(BaseWrapperDataset): - """ - Bucket and pad item lengths to the nearest bucket size. This can be used to - reduce the number of unique batch shapes, which is important on TPUs since - each new batch shape requires a recompilation. 
- - Args: - dataset (FairseqDatset): dataset to bucket - sizes (List[int]): all item sizes - num_buckets (int): number of buckets to create - pad_idx (int): padding symbol - left_pad (bool): if True, pad on the left; otherwise right pad - """ - - def __init__( - self, - dataset, - sizes, - num_buckets, - pad_idx, - left_pad, - tensor_key=None, - ): - super().__init__(dataset) - self.pad_idx = pad_idx - self.left_pad = left_pad - - assert num_buckets > 0 - self.buckets = get_buckets(sizes, num_buckets) - self._bucketed_sizes = get_bucketed_sizes(sizes, self.buckets) - self._tensor_key = tensor_key - - def _set_tensor(self, item, val): - if self._tensor_key is None: - return val - item[self._tensor_key] = val - return item - - def _get_tensor(self, item): - if self._tensor_key is None: - return item - return item[self._tensor_key] - - def _pad(self, tensor, bucket_size, dim=-1): - num_pad = bucket_size - tensor.size(dim) - return F.pad( - tensor, - (num_pad if self.left_pad else 0, 0 if self.left_pad else num_pad), - value=self.pad_idx, - ) - - def __getitem__(self, index): - item = self.dataset[index] - bucket_size = self._bucketed_sizes[index] - tensor = self._get_tensor(item) - padded = self._pad(tensor, bucket_size) - return self._set_tensor(item, padded) - - @property - def sizes(self): - return self._bucketed_sizes - - def num_tokens(self, index): - return self._bucketed_sizes[index] - - def size(self, index): - return self._bucketed_sizes[index] diff --git a/kosmos-g/fairseq/fairseq/data/colorize_dataset.py b/kosmos-g/fairseq/fairseq/data/colorize_dataset.py deleted file mode 100644 index 7a6d27137..000000000 --- a/kosmos-g/fairseq/fairseq/data/colorize_dataset.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from . import BaseWrapperDataset - - -class ColorizeDataset(BaseWrapperDataset): - """Adds 'colors' property to net input that is obtained from the provided color getter for use by models""" - - def __init__(self, dataset, color_getter): - super().__init__(dataset) - self.color_getter = color_getter - - def collater(self, samples): - base_collate = super().collater(samples) - if len(base_collate) > 0: - base_collate["net_input"]["colors"] = torch.tensor( - list(self.color_getter(self.dataset, s["id"]) for s in samples), - dtype=torch.long, - ) - return base_collate diff --git a/kosmos-g/fairseq/fairseq/data/concat_dataset.py b/kosmos-g/fairseq/fairseq/data/concat_dataset.py deleted file mode 100644 index 01a4078bb..000000000 --- a/kosmos-g/fairseq/fairseq/data/concat_dataset.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import bisect - -import numpy as np -from torch.utils.data.dataloader import default_collate - -from . 
import FairseqDataset - - -class ConcatDataset(FairseqDataset): - @staticmethod - def cumsum(sequence, sample_ratios): - r, s = [], 0 - for e, ratio in zip(sequence, sample_ratios): - curr_len = int(ratio * len(e)) - r.append(curr_len + s) - s += curr_len - return r - - def __init__(self, datasets, sample_ratios=1): - super(ConcatDataset, self).__init__() - assert len(datasets) > 0, "datasets should not be an empty iterable" - self.datasets = list(datasets) - if isinstance(sample_ratios, int): - sample_ratios = [sample_ratios] * len(self.datasets) - self.sample_ratios = sample_ratios - self.cumulative_sizes = self.cumsum(self.datasets, sample_ratios) - self.real_sizes = [len(d) for d in self.datasets] - - def __len__(self): - return self.cumulative_sizes[-1] - - def __getitem__(self, idx): - dataset_idx, sample_idx = self._get_dataset_and_sample_index(idx) - return self.datasets[dataset_idx][sample_idx] - - def _get_dataset_and_sample_index(self, idx: int): - dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx) - if dataset_idx == 0: - sample_idx = idx - else: - sample_idx = idx - self.cumulative_sizes[dataset_idx - 1] - sample_idx = sample_idx % self.real_sizes[dataset_idx] - return dataset_idx, sample_idx - - def collater(self, samples, **extra_args): - # For now only supports datasets with same underlying collater implementations - if hasattr(self.datasets[0], "collater"): - return self.datasets[0].collater(samples, **extra_args) - else: - return default_collate(samples, **extra_args) - - def size(self, idx: int): - """ - Return an example's size as a float or tuple. - """ - dataset_idx, sample_idx = self._get_dataset_and_sample_index(idx) - return self.datasets[dataset_idx].size(sample_idx) - - def num_tokens(self, index: int): - return np.max(self.size(index)) - - def attr(self, attr: str, index: int): - dataset_idx = bisect.bisect_right(self.cumulative_sizes, index) - return getattr(self.datasets[dataset_idx], attr, None) - - @property - def sizes(self): - _dataset_sizes = [] - for ds, sr in zip(self.datasets, self.sample_ratios): - if isinstance(ds.sizes, np.ndarray): - _dataset_sizes.append(np.tile(ds.sizes, sr)) - else: - # Only support underlying dataset with single size array. - assert isinstance(ds.sizes, list) - _dataset_sizes.append(np.tile(ds.sizes[0], sr)) - return np.concatenate(_dataset_sizes) - - @property - def supports_prefetch(self): - return all(d.supports_prefetch for d in self.datasets) - - def ordered_indices(self): - """ - Returns indices sorted by length. So less padding is needed. 
- """ - if isinstance(self.sizes, np.ndarray) and len(self.sizes.shape) > 1: - # special handling for concatenating lang_pair_datasets - indices = np.arange(len(self)) - sizes = self.sizes - tgt_sizes = ( - sizes[:, 1] if len(sizes.shape) > 0 and sizes.shape[1] > 1 else None - ) - src_sizes = ( - sizes[:, 0] if len(sizes.shape) > 0 and sizes.shape[1] > 1 else sizes - ) - # sort by target length, then source length - if tgt_sizes is not None: - indices = indices[np.argsort(tgt_sizes[indices], kind="mergesort")] - return indices[np.argsort(src_sizes[indices], kind="mergesort")] - else: - return np.argsort(self.sizes) - - def prefetch(self, indices): - frm = 0 - for to, ds in zip(self.cumulative_sizes, self.datasets): - real_size = len(ds) - if getattr(ds, "supports_prefetch", False): - ds.prefetch([(i - frm) % real_size for i in indices if frm <= i < to]) - frm = to - - @property - def can_reuse_epoch_itr_across_epochs(self): - return all(d.can_reuse_epoch_itr_across_epochs for d in self.datasets) - - def set_epoch(self, epoch): - super().set_epoch(epoch) - for ds in self.datasets: - if hasattr(ds, "set_epoch"): - ds.set_epoch(epoch) diff --git a/kosmos-g/fairseq/fairseq/data/concat_sentences_dataset.py b/kosmos-g/fairseq/fairseq/data/concat_sentences_dataset.py deleted file mode 100644 index 625a29370..000000000 --- a/kosmos-g/fairseq/fairseq/data/concat_sentences_dataset.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from . import FairseqDataset - - -class ConcatSentencesDataset(FairseqDataset): - def __init__(self, *datasets): - super().__init__() - self.datasets = datasets - assert all( - len(ds) == len(datasets[0]) for ds in datasets - ), "datasets must have the same length" - - def __getitem__(self, index): - return torch.cat([ds[index] for ds in self.datasets]) - - def __len__(self): - return len(self.datasets[0]) - - def collater(self, samples): - return self.datasets[0].collater(samples) - - @property - def sizes(self): - return sum(ds.sizes for ds in self.datasets) - - def num_tokens(self, index): - return sum(ds.num_tokens(index) for ds in self.datasets) - - def size(self, index): - return sum(ds.size(index) for ds in self.datasets) - - def ordered_indices(self): - return self.datasets[0].ordered_indices() - - @property - def supports_prefetch(self): - return any(getattr(ds, "supports_prefetch", False) for ds in self.datasets) - - def prefetch(self, indices): - for ds in self.datasets: - if getattr(ds, "supports_prefetch", False): - ds.prefetch(indices) - - def set_epoch(self, epoch): - super().set_epoch(epoch) - for ds in self.datasets: - if hasattr(ds, "set_epoch"): - ds.set_epoch(epoch) diff --git a/kosmos-g/fairseq/fairseq/data/data_utils.py b/kosmos-g/fairseq/fairseq/data/data_utils.py deleted file mode 100644 index 9433e56d0..000000000 --- a/kosmos-g/fairseq/fairseq/data/data_utils.py +++ /dev/null @@ -1,602 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -try: - from collections.abc import Iterable -except ImportError: - from collections import Iterable -import contextlib -import itertools -import logging -import re -import warnings -from typing import Optional, Tuple - -import numpy as np -import torch - -from fairseq.file_io import PathManager -from fairseq import utils -import os - -logger = logging.getLogger(__name__) - - -def infer_language_pair(path): - """Infer language pair from filename: <split>.<lang1>-<lang2>.(...).idx""" - src, dst = None, None - for filename in PathManager.ls(path): - parts = filename.split(".") - if len(parts) >= 3 and len(parts[1].split("-")) == 2: - return parts[1].split("-") - return src, dst - - -def collate_tokens( - values, - pad_idx, - eos_idx=None, - left_pad=False, - move_eos_to_beginning=False, - pad_to_length=None, - pad_to_multiple=1, - pad_to_bsz=None, -): - """Convert a list of 1d tensors into a padded 2d tensor.""" - size = max(v.size(0) for v in values) - size = size if pad_to_length is None else max(size, pad_to_length) - if pad_to_multiple != 1 and size % pad_to_multiple != 0: - size = int(((size - 0.1) // pad_to_multiple + 1) * pad_to_multiple) - - batch_size = len(values) if pad_to_bsz is None else max(len(values), pad_to_bsz) - res = values[0].new(batch_size, size).fill_(pad_idx) - - def copy_tensor(src, dst): - assert dst.numel() == src.numel() - if move_eos_to_beginning: - if eos_idx is None: - # if no eos_idx is specified, then use the last token in src - dst[0] = src[-1] - else: - dst[0] = eos_idx - dst[1:] = src[:-1] - else: - dst.copy_(src) - - for i, v in enumerate(values): - copy_tensor(v, res[i][size - len(v) :] if left_pad else res[i][: len(v)]) - return res - - -def load_indexed_dataset( - path, dictionary=None, dataset_impl=None, combine=False, default="cached" -): - """A helper function for loading indexed datasets. - - Args: - path (str): path to indexed dataset (e.g., 'data-bin/train') - dictionary (~fairseq.data.Dictionary): data dictionary - dataset_impl (str, optional): which dataset implementation to use. If - not provided, it will be inferred automatically. For legacy indexed - data we use the 'cached' implementation by default. - combine (bool, optional): automatically load and combine multiple - datasets. For example, if *path* is 'data-bin/train', then we will - combine 'data-bin/train', 'data-bin/train1', ... and return a - single ConcatDataset instance. 
- """ - import fairseq.data.indexed_dataset as indexed_dataset - from fairseq.data.concat_dataset import ConcatDataset - - datasets = [] - for k in itertools.count(): - path_k = path + (str(k) if k > 0 else "") - try: - path_k = indexed_dataset.get_indexed_dataset_to_local(path_k) - except Exception as e: - if "StorageException: [404] Path not found" in str(e): - logger.warning(f"path_k: {e} not found") - else: - raise e - - dataset_impl_k = dataset_impl - if dataset_impl_k is None: - dataset_impl_k = indexed_dataset.infer_dataset_impl(path_k) - dataset = indexed_dataset.make_dataset( - path_k, - impl=dataset_impl_k or default, - fix_lua_indexing=True, - dictionary=dictionary, - ) - if dataset is None: - break - logger.info("loaded {:,} examples from: {}".format(len(dataset), path_k)) - datasets.append(dataset) - if not combine: - break - if len(datasets) == 0: - return None - elif len(datasets) == 1: - return datasets[0] - else: - return ConcatDataset(datasets) - - -@contextlib.contextmanager -def numpy_seed(seed, *addl_seeds): - """Context manager which seeds the NumPy PRNG with the specified seed and - restores the state afterward""" - if seed is None: - yield - return - if len(addl_seeds) > 0: - seed = int(hash((seed, *addl_seeds)) % 1e6) - state = np.random.get_state() - np.random.seed(seed) - try: - yield - finally: - np.random.set_state(state) - - -def collect_filtered(function, iterable, filtered): - """ - Similar to :func:`filter` but collects filtered elements in ``filtered``. - - Args: - function (callable): function that returns ``False`` for elements that - should be filtered - iterable (iterable): iterable to filter - filtered (list): list to store filtered elements - """ - for el in iterable: - if function(el): - yield el - else: - filtered.append(el) - - -def _filter_by_size_dynamic(indices, size_fn, max_positions, raise_exception=False): - def compare_leq(a, b): - return a <= b if not isinstance(a, tuple) else max(a) <= b - - def check_size(idx): - if isinstance(max_positions, float) or isinstance(max_positions, int): - return size_fn(idx) <= max_positions - elif isinstance(max_positions, dict): - idx_size = size_fn(idx) - assert isinstance(idx_size, dict) - intersect_keys = set(max_positions.keys()) & set(idx_size.keys()) - return all( - all( - a is None or b is None or a <= b - for a, b in zip(idx_size[key], max_positions[key]) - ) - for key in intersect_keys - ) - else: - # For MultiCorpusSampledDataset, will generalize it later - if not isinstance(size_fn(idx), Iterable): - return all(size_fn(idx) <= b for b in max_positions) - return all( - a is None or b is None or a <= b - for a, b in zip(size_fn(idx), max_positions) - ) - - ignored = [] - itr = collect_filtered(check_size, indices, ignored) - indices = np.fromiter(itr, dtype=np.int64, count=-1) - return indices, ignored - - -def filter_by_size(indices, dataset, max_positions, raise_exception=False): - """ - [deprecated] Filter indices based on their size. - Use `FairseqDataset::filter_indices_by_size` instead. - - Args: - indices (List[int]): ordered list of dataset indices - dataset (FairseqDataset): fairseq dataset instance - max_positions (tuple): filter elements larger than this size. - Comparisons are done component-wise. - raise_exception (bool, optional): if ``True``, raise an exception if - any elements are filtered (default: False). - """ - warnings.warn( - "data_utils.filter_by_size is deprecated. 
" - "Use `FairseqDataset::filter_indices_by_size` instead.", - stacklevel=2, - ) - if isinstance(max_positions, float) or isinstance(max_positions, int): - if hasattr(dataset, "sizes") and isinstance(dataset.sizes, np.ndarray): - ignored = indices[dataset.sizes[indices] > max_positions].tolist() - indices = indices[dataset.sizes[indices] <= max_positions] - elif ( - hasattr(dataset, "sizes") - and isinstance(dataset.sizes, list) - and len(dataset.sizes) == 1 - ): - ignored = indices[dataset.sizes[0][indices] > max_positions].tolist() - indices = indices[dataset.sizes[0][indices] <= max_positions] - else: - indices, ignored = _filter_by_size_dynamic( - indices, dataset.size, max_positions - ) - else: - indices, ignored = _filter_by_size_dynamic(indices, dataset.size, max_positions) - - if len(ignored) > 0 and raise_exception: - raise Exception( - ( - "Size of sample #{} is invalid (={}) since max_positions={}, " - "skip this example with --skip-invalid-size-inputs-valid-test" - ).format(ignored[0], dataset.size(ignored[0]), max_positions) - ) - if len(ignored) > 0: - logger.warning( - ( - "{} samples have invalid sizes and will be skipped, " - "max_positions={}, first few sample ids={}" - ).format(len(ignored), max_positions, ignored[:10]) - ) - return indices - - -def filter_paired_dataset_indices_by_size(src_sizes, tgt_sizes, indices, max_sizes): - """Filter a list of sample indices. Remove those that are longer - than specified in max_sizes. - - Args: - indices (np.array): original array of sample indices - max_sizes (int or list[int] or tuple[int]): max sample size, - can be defined separately for src and tgt (then list or tuple) - - Returns: - np.array: filtered sample array - list: list of removed indices - """ - if max_sizes is None: - return indices, [] - if type(max_sizes) in (int, float): - max_src_size, max_tgt_size = max_sizes, max_sizes - else: - max_src_size, max_tgt_size = max_sizes - if tgt_sizes is None: - ignored = indices[src_sizes[indices] > max_src_size] - else: - ignored = indices[ - (src_sizes[indices] > max_src_size) | (tgt_sizes[indices] > max_tgt_size) - ] - if len(ignored) > 0: - if tgt_sizes is None: - indices = indices[src_sizes[indices] <= max_src_size] - else: - indices = indices[ - (src_sizes[indices] <= max_src_size) - & (tgt_sizes[indices] <= max_tgt_size) - ] - return indices, ignored.tolist() - - -def batch_by_size( - indices, - num_tokens_fn, - num_tokens_vec=None, - max_tokens=None, - max_sentences=None, - required_batch_size_multiple=1, - fixed_shapes=None, -): - """ - Yield mini-batches of indices bucketed by size. Batches may contain - sequences of different lengths. - - Args: - indices (List[int]): ordered list of dataset indices - num_tokens_fn (callable): function that returns the number of tokens at - a given index - num_tokens_vec (List[int], optional): precomputed vector of the number - of tokens for each index in indices (to enable faster batch generation) - max_tokens (int, optional): max number of tokens in each batch - (default: None). - max_sentences (int, optional): max number of sentences in each - batch (default: None). - required_batch_size_multiple (int, optional): require batch size to - be less than N or a multiple of N (default: 1). - fixed_shapes (List[Tuple[int, int]], optional): if given, batches will - only be created with the given shapes. *max_sentences* and - *required_batch_size_multiple* will be ignored (default: None). 
- """ - try: - from fairseq.data.data_utils_fast import ( - batch_by_size_fn, - batch_by_size_vec, - batch_fixed_shapes_fast, - ) - except ImportError: - raise ImportError( - "Please build Cython components with: " - "`python setup.py build_ext --inplace`" - ) - except ValueError: - raise ValueError( - "Please build (or rebuild) Cython components with `python setup.py build_ext --inplace`." - ) - - # added int() to avoid TypeError: an integer is required - max_tokens = int(max_tokens) if max_tokens is not None else -1 - max_sentences = max_sentences if max_sentences is not None else -1 - bsz_mult = required_batch_size_multiple - - if not isinstance(indices, np.ndarray): - indices = np.fromiter(indices, dtype=np.int64, count=-1) - - if num_tokens_vec is not None and not isinstance(num_tokens_vec, np.ndarray): - num_tokens_vec = np.fromiter(num_tokens_vec, dtype=np.int64, count=-1) - - if fixed_shapes is None: - if num_tokens_vec is None: - return batch_by_size_fn( - indices, - num_tokens_fn, - max_tokens, - max_sentences, - bsz_mult, - ) - else: - return batch_by_size_vec( - indices, - num_tokens_vec, - max_tokens, - max_sentences, - bsz_mult, - ) - - else: - fixed_shapes = np.array(fixed_shapes, dtype=np.int64) - sort_order = np.lexsort( - [ - fixed_shapes[:, 1].argsort(), # length - fixed_shapes[:, 0].argsort(), # bsz - ] - ) - fixed_shapes_sorted = fixed_shapes[sort_order] - return batch_fixed_shapes_fast(indices, num_tokens_fn, fixed_shapes_sorted) - - -def post_process(sentence: str, symbol: str): - if symbol == "sentencepiece": - sentence = sentence.replace(" ", "").replace("\u2581", " ").strip() - elif symbol == "wordpiece": - sentence = sentence.replace(" ", "").replace("_", " ").strip() - elif symbol == "letter": - sentence = sentence.replace(" ", "").replace("|", " ").strip() - elif symbol == "silence": - import re - - sentence = sentence.replace("<SIL>", "") - sentence = re.sub(" +", " ", sentence).strip() - elif symbol == "_EOW": - sentence = sentence.replace(" ", "").replace("_EOW", " ").strip() - elif symbol in {"subword_nmt", "@@ ", "@@"}: - if symbol == "subword_nmt": - symbol = "@@ " - sentence = (sentence + " ").replace(symbol, "").rstrip() - elif symbol == "none": - pass - elif symbol is not None: - raise NotImplementedError(f"Unknown post_process option: {symbol}") - return sentence - - -def compute_mask_indices( - shape: Tuple[int, int], - padding_mask: Optional[torch.Tensor], - mask_prob: float, - mask_length: int, - mask_type: str = "static", - mask_other: float = 0.0, - min_masks: int = 0, - no_overlap: bool = False, - min_space: int = 0, - require_same_masks: bool = True, - pct_holes: float = 0.0, -) -> np.ndarray: - """ - Computes random mask spans for a given shape - - Args: - shape: the the shape for which to compute masks. - should be of size 2 where first element is batch size and 2nd is timesteps - padding_mask: optional padding mask of the same size as shape, which will prevent masking padded elements - mask_prob: probability for each token to be chosen as start of the span to be masked. this will be multiplied by - number of timesteps divided by length of mask span to mask approximately this percentage of all elements. - however due to overlaps, the actual number will be smaller (unless no_overlap is True) - mask_type: how to compute mask lengths - static = fixed size - uniform = sample from uniform distribution [mask_other, mask_length*2] - normal = sample from normal distribution with mean mask_length and stdev mask_other. 
mask is min 1 element
- poisson = sample from poisson distribution with lambda = mask length
- min_masks: minimum number of masked spans
- no_overlap: if true, will switch to an alternative recursive algorithm that prevents spans from overlapping
- min_space: only used if no_overlap is True, this is how many elements to keep unmasked between spans
- """
-
- bsz, all_sz = shape
- mask = np.full((bsz, all_sz), False)
-
- all_num_mask = int(
- # add a random number for probabilistic rounding
- mask_prob * all_sz / float(mask_length)
- + np.random.rand()
- )
-
- all_num_mask = max(min_masks, all_num_mask)
-
- mask_idcs = []
- for i in range(bsz):
- if padding_mask is not None:
- sz = all_sz - padding_mask[i].long().sum().item()
- num_mask = int(
- # add a random number for probabilistic rounding
- mask_prob * sz / float(mask_length)
- + np.random.rand()
- )
- num_mask = max(min_masks, num_mask)
- else:
- sz = all_sz
- num_mask = all_num_mask
-
- if mask_type == "static":
- lengths = np.full(num_mask, mask_length)
- elif mask_type == "uniform":
- lengths = np.random.randint(mask_other, mask_length * 2 + 1, size=num_mask)
- elif mask_type == "normal":
- lengths = np.random.normal(mask_length, mask_other, size=num_mask)
- lengths = [max(1, int(round(x))) for x in lengths]
- elif mask_type == "poisson":
- lengths = np.random.poisson(mask_length, size=num_mask)
- lengths = [int(round(x)) for x in lengths]
- else:
- raise Exception("unknown mask selection " + mask_type)
-
- if sum(lengths) == 0:
- lengths[0] = min(mask_length, sz - 1)
-
- if no_overlap:
- mask_idc = []
-
- def arrange(s, e, length, keep_length):
- span_start = np.random.randint(s, e - length)
- mask_idc.extend(span_start + i for i in range(length))
-
- new_parts = []
- if span_start - s - min_space >= keep_length:
- new_parts.append((s, span_start - min_space + 1))
- if e - span_start - keep_length - min_space > keep_length:
- new_parts.append((span_start + length + min_space, e))
- return new_parts
-
- parts = [(0, sz)]
- min_length = min(lengths)
- for length in sorted(lengths, reverse=True):
- lens = np.fromiter(
- (e - s if e - s >= length + min_space else 0 for s, e in parts),
- np.int64,
- )
- l_sum = np.sum(lens)
- if l_sum == 0:
- break
- probs = lens / np.sum(lens)
- c = np.random.choice(len(parts), p=probs)
- s, e = parts.pop(c)
- parts.extend(arrange(s, e, length, min_length))
- mask_idc = np.asarray(mask_idc)
- else:
- min_len = min(lengths)
- if sz - min_len <= num_mask:
- min_len = sz - num_mask - 1
-
- mask_idc = np.random.choice(sz - min_len, num_mask, replace=False)
-
- mask_idc = np.asarray(
- [
- mask_idc[j] + offset
- for j in range(len(mask_idc))
- for offset in range(lengths[j])
- ]
- )
-
- mask_idcs.append(np.unique(mask_idc[mask_idc < sz]))
-
- min_len = min([len(m) for m in mask_idcs])
- for i, mask_idc in enumerate(mask_idcs):
- if len(mask_idc) > min_len and require_same_masks:
- mask_idc = np.random.choice(mask_idc, min_len, replace=False)
- if pct_holes > 0:
- num_holes = np.rint(len(mask_idc) * pct_holes).astype(int)
- mask_idc = np.random.choice(
- mask_idc, len(mask_idc) - num_holes, replace=False
- )
-
- mask[i, mask_idc] = True
-
- return mask
-
-
-def get_mem_usage():
- try:
- import psutil
-
- mb = 1024 * 1024
- return f"used={psutil.virtual_memory().used / mb}Mb; avail={psutil.virtual_memory().available / mb}Mb"
- except ImportError:
- return "N/A"
-
-
-# lens: torch.LongTensor
-# returns: torch.BoolTensor
-def lengths_to_padding_mask(lens):
- bsz, max_lens = lens.size(0), torch.max(lens).item()
-
mask = torch.arange(max_lens).to(lens.device).view(1, max_lens) - mask = mask.expand(bsz, -1) >= lens.view(bsz, 1).expand(-1, max_lens) - return mask - - -# lens: torch.LongTensor -# returns: torch.BoolTensor -def lengths_to_mask(lens): - return ~lengths_to_padding_mask(lens) - - -def get_buckets(sizes, num_buckets): - buckets = np.unique( - np.percentile( - sizes, - np.linspace(0, 100, num_buckets + 1), - interpolation="lower", - )[1:] - ) - return buckets - - -def get_bucketed_sizes(orig_sizes, buckets): - sizes = np.copy(orig_sizes) - assert np.min(sizes) >= 0 - start_val = -1 - for end_val in buckets: - mask = (sizes > start_val) & (sizes <= end_val) - sizes[mask] = end_val - start_val = end_val - return sizes - - -def _find_extra_valid_paths(dataset_path: str) -> set: - paths = utils.split_paths(dataset_path) - all_valid_paths = set() - for sub_dir in paths: - contents = PathManager.ls(sub_dir) - valid_paths = [c for c in contents if re.match("valid*[0-9].*", c) is not None] - all_valid_paths |= {os.path.basename(p) for p in valid_paths} - # Remove .bin, .idx etc - roots = {os.path.splitext(p)[0] for p in all_valid_paths} - return roots - - -def raise_if_valid_subsets_unintentionally_ignored(train_cfg) -> None: - """Raises if there are paths matching 'valid*[0-9].*' which are not combined or ignored.""" - if ( - train_cfg.dataset.ignore_unused_valid_subsets - or train_cfg.dataset.combine_valid_subsets - or train_cfg.dataset.disable_validation - or not hasattr(train_cfg.task, "data") - ): - return - other_paths = _find_extra_valid_paths(train_cfg.task.data) - specified_subsets = train_cfg.dataset.valid_subset.split(",") - ignored_paths = [p for p in other_paths if p not in specified_subsets] - if ignored_paths: - advice = "Set --combine-val to combine them or --ignore-unused-valid-subsets to ignore them." - msg = f"Valid paths {ignored_paths} will be ignored. {advice}" - raise ValueError(msg) diff --git a/kosmos-g/fairseq/fairseq/data/data_utils_fast.pyx b/kosmos-g/fairseq/fairseq/data/data_utils_fast.pyx deleted file mode 100644 index c61f31d6b..000000000 --- a/kosmos-g/fairseq/fairseq/data/data_utils_fast.pyx +++ /dev/null @@ -1,178 +0,0 @@ -# cython: language_level=3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
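
The `lengths_to_padding_mask` helper just above broadcasts a position index against the batch of lengths, so position j of row i is True exactly when j falls beyond sequence i. Restated standalone with a concrete check (same logic as above):

```python
import torch

def lengths_to_padding_mask(lens: torch.Tensor) -> torch.Tensor:
    # True marks padding positions, i.e. indices >= the sequence length.
    bsz, max_len = lens.size(0), int(torch.max(lens).item())
    pos = torch.arange(max_len, device=lens.device).view(1, max_len)
    return pos.expand(bsz, -1) >= lens.view(bsz, 1).expand(-1, max_len)

print(lengths_to_padding_mask(torch.tensor([3, 1])))
# tensor([[False, False, False],
#         [False,  True,  True]])
```
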
- -import numpy as np - -cimport cython -cimport numpy as np - -from libc.stdint cimport int32_t, int64_t -from libcpp cimport bool as bool_t - -ctypedef int64_t DTYPE_t - -@cython.cdivision(True) -@cython.boundscheck(False) -@cython.wraparound(False) -cpdef list batch_by_size_vec( - np.ndarray[int64_t, ndim=1] indices, - np.ndarray[int64_t, ndim=1] num_tokens_vec, - int64_t max_tokens, - int64_t max_sentences, - int32_t bsz_mult, -): - if indices.shape[0] == 0: - return [] - - assert max_tokens <= 0 or np.max(num_tokens_vec) <= max_tokens, ( - f"Sentences lengths should not exceed max_tokens={max_tokens}" - ) - - cdef int32_t indices_len = indices.shape[0] - cdef np.ndarray[int32_t, ndim=1] batches_ends = \ - np.zeros(indices_len, dtype=np.int32) - cdef int32_t[:] batches_ends_view = batches_ends - cdef int64_t[:] num_tokens_view = num_tokens_vec - - cdef int32_t pos = 0 - cdef int32_t new_batch_end = 0 - - cdef int64_t new_batch_max_tokens = 0 - cdef int32_t new_batch_sentences = 0 - cdef int64_t new_batch_num_tokens = 0 - - cdef bool_t overflow = False - cdef bool_t size_matches_with_bsz_mult = False - - cdef int32_t batches_count = 0 - cdef int32_t batch_start = 0 - cdef int64_t tail_max_tokens = 0 - cdef int64_t batch_max_tokens = 0 - - for pos in range(indices_len): - # At every pos we keep stats about the last complete batch [batch_start:batch_end), - # and tail [batch_end:pos]. - # 1) Every time when (batch + tail) forms a valid batch - # (according to max_tokens, max_sentences and bsz_mult) we append tail to batch. - # 2) When (batch+tail) violates max_tokens or max_sentences constraints - # we finalize running batch, and tail becomes a new batch. - # 3) There is a corner case when tail also violates constraints. - # In that situation [batch_end:pos-1] (tail without the current pos) - # gets added to the finalized batches, while [pos:pos] becomes a new tail. - # - # Important: For the sake of performance try to avoid using function calls within this loop. 
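
In plain Python, the two tests the loop below applies when deciding whether a batch must be finalized come down to the following (a readability sketch of the Cython conditions with illustrative helper names, not a drop-in replacement):

```python
def overflows(n_sentences: int, batch_max_tokens: int,
              max_tokens: int, max_sentences: int) -> bool:
    # Padded cost of a batch is sentences * longest-sentence length; either
    # budget only applies when it is set to a positive value.
    return (0 < max_sentences < n_sentences) or \
           (0 < max_tokens < n_sentences * batch_max_tokens)

def size_ok(n_sentences: int, bsz_mult: int) -> bool:
    # A finalized batch must be smaller than bsz_mult or a multiple of it.
    return n_sentences < bsz_mult or n_sentences % bsz_mult == 0
```
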
- - tail_max_tokens = tail_max_tokens \ - if tail_max_tokens > num_tokens_view[pos] \ - else num_tokens_view[pos] - new_batch_end = pos + 1 - new_batch_max_tokens = batch_max_tokens \ - if batch_max_tokens > tail_max_tokens \ - else tail_max_tokens - new_batch_sentences = new_batch_end - batch_start - new_batch_num_tokens = new_batch_sentences * new_batch_max_tokens - - overflow = (new_batch_sentences > max_sentences > 0 or - new_batch_num_tokens > max_tokens > 0) - size_matches_with_bsz_mult = (new_batch_sentences < bsz_mult or - new_batch_sentences % bsz_mult == 0) - - if overflow: - tail_num_tokens = tail_max_tokens * \ - (new_batch_end - batches_ends_view[batches_count]) - tail_overflow = tail_num_tokens > max_tokens > 0 - # In case of a tail overflow finalize two batches - if tail_overflow: - batches_count += 1 - batches_ends_view[batches_count] = pos - tail_max_tokens = num_tokens_view[pos] - batch_start = batches_ends_view[batches_count] - batches_count += 1 - new_batch_max_tokens = tail_max_tokens - - if overflow or size_matches_with_bsz_mult: - batches_ends_view[batches_count] = new_batch_end - batch_max_tokens = new_batch_max_tokens - tail_max_tokens = 0 - if batches_ends_view[batches_count] != indices_len: - batches_count += 1 - # Memory and time-efficient split - return np.split(indices, batches_ends[:batches_count]) - - -@cython.boundscheck(False) -@cython.wraparound(False) -cpdef list batch_by_size_fn( - np.ndarray[DTYPE_t, ndim=1] indices, - num_tokens_fn, - int64_t max_tokens, - int64_t max_sentences, - int32_t bsz_mult, -): - cdef int32_t indices_len = indices.shape[0] - cdef np.ndarray[int64_t, ndim=1] num_tokens_vec = np.zeros(indices_len, - dtype=np.int64) - cdef DTYPE_t[:] indices_view = indices - cdef DTYPE_t[:] num_tokens_vec_view = num_tokens_vec - cdef int64_t pos - for pos in range(indices_len): - num_tokens_vec[pos] = num_tokens_fn(indices_view[pos]) - return batch_by_size_vec(indices, num_tokens_vec, max_tokens, - max_sentences, bsz_mult,) - - -cdef _find_valid_shape( - DTYPE_t[:, :] shapes_view, - int64_t num_sentences, - int64_t num_tokens, -): - """Return index of first valid shape of -1 if none is found.""" - for i in range(shapes_view.shape[0]): - if num_sentences <= shapes_view[i][0] and num_tokens <= shapes_view[i][1]: - return i - return -1 - - -@cython.cdivision(True) -cpdef list batch_fixed_shapes_fast( - np.ndarray[DTYPE_t, ndim=1] indices, - num_tokens_fn, - np.ndarray[DTYPE_t, ndim=2] fixed_shapes_sorted, -): - cdef int64_t sample_len = 0 - cdef list sample_lens = [] - cdef list batch = [] - cdef list batches = [] - cdef int64_t mod_len - cdef int64_t i - cdef int64_t idx - cdef int64_t num_tokens - cdef DTYPE_t[:] indices_view = indices - cdef DTYPE_t[:, :] shapes_view = fixed_shapes_sorted - - for i in range(len(indices_view)): - idx = indices_view[i] - num_tokens = num_tokens_fn(idx) - sample_lens.append(num_tokens) - sample_len = max(sample_len, num_tokens) - - shape_idx = _find_valid_shape(shapes_view, len(batch) + 1, sample_len) - if shape_idx == -1: - batches.append(batch) - batch = [] - sample_lens = [] - sample_len = 0 - shapes_view = fixed_shapes_sorted - elif shape_idx > 0: - # small optimization for the next call to _find_valid_shape - shapes_view = shapes_view[shape_idx:] - - batch.append(idx) - - if len(batch) > 0: - batches.append(batch) - - return batches diff --git a/kosmos-g/fairseq/fairseq/data/denoising_dataset.py b/kosmos-g/fairseq/fairseq/data/denoising_dataset.py deleted file mode 100644 index bdb62c8d5..000000000 --- 
a/kosmos-g/fairseq/fairseq/data/denoising_dataset.py +++ /dev/null @@ -1,436 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import numpy as np -import torch - -from . import FairseqDataset, data_utils - - -def collate( - samples, - pad_idx, - eos_idx, - vocab, - left_pad_source=False, - left_pad_target=False, - input_feeding=True, - pad_to_length=None, -): - assert input_feeding - if len(samples) == 0: - return {} - - def merge(key, left_pad, move_eos_to_beginning=False, pad_to_length=None): - return data_utils.collate_tokens( - [s[key] for s in samples], - pad_idx, - eos_idx=None, # use eos_idx of each sample instead of vocab.eos() - left_pad=left_pad, - move_eos_to_beginning=move_eos_to_beginning, - pad_to_length=pad_to_length, - ) - - id = torch.LongTensor([s["id"] for s in samples]) - src_tokens = merge( - "source", - left_pad=left_pad_source, - pad_to_length=pad_to_length["source"] if pad_to_length is not None else None, - ) - # sort by descending source length - src_lengths = torch.LongTensor([s["source"].numel() for s in samples]) - src_lengths, sort_order = src_lengths.sort(descending=True) - id = id.index_select(0, sort_order) - src_tokens = src_tokens.index_select(0, sort_order) - - prev_output_tokens = None - target = None - if samples[0].get("target", None) is not None: - target = merge( - "target", - left_pad=left_pad_target, - pad_to_length=pad_to_length["target"] - if pad_to_length is not None - else None, - ) - target = target.index_select(0, sort_order) - ntokens = sum(len(s["target"]) for s in samples) - - if input_feeding: - # we create a shifted version of targets for feeding the - # previous output token(s) into the next decoder step - prev_output_tokens = merge( - "target", - left_pad=left_pad_target, - move_eos_to_beginning=True, - pad_to_length=pad_to_length["target"] - if pad_to_length is not None - else None, - ) - prev_output_tokens = prev_output_tokens.index_select(0, sort_order) - else: - ntokens = sum(len(s["source"]) for s in samples) - - batch = { - "id": id, - "ntokens": ntokens, - "net_input": { - "src_tokens": src_tokens, - "src_lengths": src_lengths, - }, - "target": target, - "nsentences": samples[0]["source"].size(0), - "sort_order": sort_order, - } - if prev_output_tokens is not None: - batch["net_input"]["prev_output_tokens"] = prev_output_tokens - - return batch - - -class DenoisingDataset(FairseqDataset): - """ - A wrapper around TokenBlockDataset for BART dataset. - - Args: - dataset (TokenBlockDataset): dataset to wrap - sizes (List[int]): sentence lengths - vocab (~fairseq.data.Dictionary): vocabulary - mask_idx (int): dictionary index used for masked token - mask_whole_words: only mask whole words. This should be a byte mask - over vocab indices, indicating whether it is the beginning of a - word. We will extend any mask to encompass the whole word. - shuffle (bool, optional): shuffle the elements before batching. - Default: ``True`` - seed: Seed for random number generator for reproducibility. - args: argparse arguments. 
- """ - - def __init__( - self, - dataset, - sizes, - vocab, - mask_idx, - mask_whole_words, - shuffle, - seed, - args, - eos=None, - item_transform_func=None, - ): - self.dataset = dataset - - self.sizes = sizes - - self.vocab = vocab - self.shuffle = shuffle - self.seed = seed - self.mask_idx = mask_idx - self.mask_whole_word = mask_whole_words - self.mask_ratio = args.mask - self.random_ratio = args.mask_random - self.insert_ratio = args.insert - self.rotate_ratio = args.rotate - self.permute_sentence_ratio = args.permute_sentences - self.eos = eos if eos is not None else vocab.eos() - self.item_transform_func = item_transform_func - - if args.bpe != "gpt2": - self.full_stop_index = self.vocab.eos() - else: - assert args.bpe == "gpt2" - self.full_stop_index = self.vocab.index("13") - - self.replace_length = args.replace_length - if self.replace_length not in [-1, 0, 1]: - raise ValueError(f"invalid arg: replace_length={self.replace_length}") - if args.mask_length not in ["subword", "word", "span-poisson"]: - raise ValueError(f"invalid arg: mask-length={args.mask_length}") - if args.mask_length == "subword" and args.replace_length not in [0, 1]: - raise ValueError(f"if using subwords, use replace-length=1 or 0") - - self.mask_span_distribution = None - if args.mask_length == "span-poisson": - _lambda = args.poisson_lambda - - lambda_to_the_k = 1 - e_to_the_minus_lambda = math.exp(-_lambda) - k_factorial = 1 - ps = [] - for k in range(0, 128): - ps.append(e_to_the_minus_lambda * lambda_to_the_k / k_factorial) - lambda_to_the_k *= _lambda - k_factorial *= k + 1 - if ps[-1] < 0.0000001: - break - ps = torch.FloatTensor(ps) - self.mask_span_distribution = torch.distributions.Categorical(ps) - - self.epoch = 0 - - @property - def can_reuse_epoch_itr_across_epochs(self): - return True # only the noise changes, not item sizes - - def set_epoch(self, epoch, **unused): - self.epoch = epoch - - def __getitem__(self, index): - with data_utils.numpy_seed(self.seed, self.epoch, index): - tokens = self.dataset[index] - assert tokens[-1] == self.eos - source, target = tokens, tokens.clone() - - if self.permute_sentence_ratio > 0.0: - source = self.permute_sentences(source, self.permute_sentence_ratio) - - if self.mask_ratio > 0: - source = self.add_whole_word_mask(source, self.mask_ratio) - - if self.insert_ratio > 0: - source = self.add_insertion_noise(source, self.insert_ratio) - - if self.rotate_ratio > 0.0 and np.random.random() < self.rotate_ratio: - source = self.add_rolling_noise(source) - # there can additional changes to make: - if self.item_transform_func is not None: - source, target = self.item_transform_func(source, target) - - assert (source >= 0).all() - assert (source[1:-1] >= 1).all() - assert (source <= len(self.vocab)).all() - assert source[0] == self.vocab.bos() - assert source[-1] == self.eos - return { - "id": index, - "source": source, - "target": target, - } - - def __len__(self): - return len(self.dataset) - - def permute_sentences(self, source, p=1.0): - full_stops = source == self.full_stop_index - # Pretend it ends with a full stop so last span is a sentence - full_stops[-2] = 1 - - # Tokens that are full stops, where the previous token is not - sentence_ends = (full_stops[1:] * ~full_stops[:-1]).nonzero(as_tuple=False) + 2 - result = source.clone() - - num_sentences = sentence_ends.size(0) - num_to_permute = math.ceil((num_sentences * 2 * p) / 2.0) - substitutions = torch.randperm(num_sentences)[:num_to_permute] - ordering = torch.arange(0, num_sentences) - 
ordering[substitutions] = substitutions[torch.randperm(num_to_permute)] - - # Ignore <bos> at start - index = 1 - for i in ordering: - sentence = source[(sentence_ends[i - 1] if i > 0 else 1) : sentence_ends[i]] - result[index : index + sentence.size(0)] = sentence - index += sentence.size(0) - return result - - def word_starts(self, source): - if self.mask_whole_word is not None: - is_word_start = self.mask_whole_word.gather(0, source) - else: - is_word_start = torch.ones(source.size()) - is_word_start[0] = 0 - is_word_start[-1] = 0 - return is_word_start - - def add_whole_word_mask(self, source, p): - is_word_start = self.word_starts(source) - num_to_mask = int(math.ceil(is_word_start.float().sum() * p)) - num_inserts = 0 - if num_to_mask == 0: - return source - - if self.mask_span_distribution is not None: - lengths = self.mask_span_distribution.sample(sample_shape=(num_to_mask,)) - - # Make sure we have enough to mask - cum_length = torch.cumsum(lengths, 0) - while cum_length[-1] < num_to_mask: - lengths = torch.cat( - [ - lengths, - self.mask_span_distribution.sample(sample_shape=(num_to_mask,)), - ], - dim=0, - ) - cum_length = torch.cumsum(lengths, 0) - - # Trim to masking budget - i = 0 - while cum_length[i] < num_to_mask: - i += 1 - lengths[i] = num_to_mask - (0 if i == 0 else cum_length[i - 1]) - num_to_mask = i + 1 - lengths = lengths[:num_to_mask] - - # Handle 0-length mask (inserts) separately - lengths = lengths[lengths > 0] - num_inserts = num_to_mask - lengths.size(0) - num_to_mask -= num_inserts - if num_to_mask == 0: - return self.add_insertion_noise(source, num_inserts / source.size(0)) - - assert (lengths > 0).all() - else: - lengths = torch.ones((num_to_mask,)).long() - assert is_word_start[-1] == 0 - word_starts = is_word_start.nonzero(as_tuple=False) - indices = word_starts[ - torch.randperm(word_starts.size(0))[:num_to_mask] - ].squeeze(1) - mask_random = torch.FloatTensor(num_to_mask).uniform_() < self.random_ratio - - source_length = source.size(0) - assert source_length - 1 not in indices - to_keep = torch.ones(source_length, dtype=torch.bool) - is_word_start[ - -1 - ] = 255 # acts as a long length, so spans don't go over the end of doc - if self.replace_length == 0: - to_keep[indices] = 0 - else: - # keep index, but replace it with [MASK] - source[indices] = self.mask_idx - source[indices[mask_random]] = torch.randint( - 1, len(self.vocab), size=(mask_random.sum(),) - ) - - if self.mask_span_distribution is not None: - assert len(lengths.size()) == 1 - assert lengths.size() == indices.size() - lengths -= 1 - while indices.size(0) > 0: - assert lengths.size() == indices.size() - lengths -= is_word_start[indices + 1].long() - uncompleted = lengths >= 0 - indices = indices[uncompleted] + 1 - mask_random = mask_random[uncompleted] - lengths = lengths[uncompleted] - if self.replace_length != -1: - # delete token - to_keep[indices] = 0 - else: - # keep index, but replace it with [MASK] - source[indices] = self.mask_idx - source[indices[mask_random]] = torch.randint( - 1, len(self.vocab), size=(mask_random.sum(),) - ) - else: - # A bit faster when all lengths are 1 - while indices.size(0) > 0: - uncompleted = is_word_start[indices + 1] == 0 - indices = indices[uncompleted] + 1 - mask_random = mask_random[uncompleted] - if self.replace_length != -1: - # delete token - to_keep[indices] = 0 - else: - # keep index, but replace it with [MASK] - source[indices] = self.mask_idx - source[indices[mask_random]] = torch.randint( - 1, len(self.vocab), size=(mask_random.sum(),) - ) 
- - assert source_length - 1 not in indices - - source = source[to_keep] - - if num_inserts > 0: - source = self.add_insertion_noise(source, num_inserts / source.size(0)) - - return source - - def add_permuted_noise(self, tokens, p): - num_words = len(tokens) - num_to_permute = math.ceil(((num_words * 2) * p) / 2.0) - substitutions = torch.randperm(num_words - 2)[:num_to_permute] + 1 - tokens[substitutions] = tokens[substitutions[torch.randperm(num_to_permute)]] - return tokens - - def add_rolling_noise(self, tokens): - offset = np.random.randint(1, max(1, tokens.size(-1) - 1) + 1) - tokens = torch.cat( - (tokens[0:1], tokens[offset:-1], tokens[1:offset], tokens[-1:]), - dim=0, - ) - return tokens - - def add_insertion_noise(self, tokens, p): - if p == 0.0: - return tokens - - num_tokens = len(tokens) - n = int(math.ceil(num_tokens * p)) - - noise_indices = torch.randperm(num_tokens + n - 2)[:n] + 1 - noise_mask = torch.zeros(size=(num_tokens + n,), dtype=torch.bool) - noise_mask[noise_indices] = 1 - result = torch.LongTensor(n + len(tokens)).fill_(-1) - - num_random = int(math.ceil(n * self.random_ratio)) - result[noise_indices[num_random:]] = self.mask_idx - result[noise_indices[:num_random]] = torch.randint( - low=1, high=len(self.vocab), size=(num_random,) - ) - - result[~noise_mask] = tokens - - assert (result >= 0).all() - return result - - def collater(self, samples, pad_to_length=None): - """Merge a list of samples to form a mini-batch. - Args: - samples (List[dict]): samples to collate - Returns: - dict: a mini-batch of data - """ - return collate( - samples, self.vocab.pad(), self.eos, self.vocab, pad_to_length=pad_to_length - ) - - def num_tokens(self, index): - """Return the number of tokens in a sample. This value is used to - enforce ``--max-tokens`` during batching.""" - return self.sizes[index] - - def size(self, index): - """Return an example's size as a float or tuple. This value is used when - filtering a dataset with ``--max-positions``.""" - return self.sizes[index] - - def ordered_indices(self): - """Return an ordered list of indices. Batches will be constructed based - on this order.""" - if self.shuffle: - indices = np.random.permutation(len(self)) - else: - indices = np.arange(len(self)) - return indices[np.argsort(self.sizes[indices], kind="mergesort")] - - def prefetch(self, indices): - self.src.prefetch(indices) - self.tgt.prefetch(indices) - - @property - def supports_prefetch(self): - return ( - hasattr(self.src, "supports_prefetch") - and self.src.supports_prefetch - and hasattr(self.tgt, "supports_prefetch") - and self.tgt.supports_prefetch - ) diff --git a/kosmos-g/fairseq/fairseq/data/dictionary.py b/kosmos-g/fairseq/fairseq/data/dictionary.py deleted file mode 100644 index d6495389f..000000000 --- a/kosmos-g/fairseq/fairseq/data/dictionary.py +++ /dev/null @@ -1,401 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
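The noise transforms above are plain tensor surgery on a 1-D token sequence. As a concrete example, the rolling noise keeps `<bos>` and `<eos>` pinned in place and rotates everything in between; the sketch below makes the random offset an explicit argument so the effect is deterministic (the token ids are made up for illustration).

```python
import torch

def roll_interior(tokens: torch.Tensor, offset: int) -> torch.Tensor:
    # Same concatenation as add_rolling_noise above, with the offset
    # passed in instead of sampled from np.random.randint.
    return torch.cat(
        (tokens[0:1], tokens[offset:-1], tokens[1:offset], tokens[-1:]), dim=0
    )

toks = torch.tensor([0, 10, 11, 12, 13, 2])  # 0 = <bos>, 2 = <eos> (assumed ids)
print(roll_interior(toks, offset=3))         # tensor([ 0, 12, 13, 10, 11,  2])
```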
- -import os -from collections import Counter -from multiprocessing import Pool - -import torch -from fairseq import utils -from fairseq.data import data_utils -from fairseq.file_chunker_utils import Chunker, find_offsets -from fairseq.file_io import PathManager -from fairseq.tokenizer import tokenize_line - - -class Dictionary: - """A mapping from symbols to consecutive integers""" - - def __init__( - self, - *, # begin keyword-only arguments - bos="<s>", - pad="<pad>", - eos="</s>", - unk="<unk>", - extra_special_symbols=None, - ): - self.bos_word, self.unk_word, self.pad_word, self.eos_word = bos, unk, pad, eos - self.symbols = [] - self.count = [] - self.indices = {} - self.bos_index = self.add_symbol(bos) - self.pad_index = self.add_symbol(pad) - self.eos_index = self.add_symbol(eos) - self.unk_index = self.add_symbol(unk) - if extra_special_symbols: - for s in extra_special_symbols: - self.add_symbol(s) - self.nspecial = len(self.symbols) - - def __eq__(self, other): - return self.indices == other.indices - - def __getitem__(self, idx): - if idx < len(self.symbols): - return self.symbols[idx] - return self.unk_word - - def get_count(self, idx): - return self.count[idx] - - def __len__(self): - """Returns the number of symbols in the dictionary""" - return len(self.symbols) - - def __contains__(self, sym): - return sym in self.indices - - def index(self, sym): - """Returns the index of the specified symbol""" - assert isinstance(sym, str) - if sym in self.indices: - return self.indices[sym] - return self.unk_index - - def string( - self, - tensor, - bpe_symbol=None, - escape_unk=False, - extra_symbols_to_ignore=None, - unk_string=None, - include_eos=False, - separator=" ", - ): - """Helper for converting a tensor of token indices to a string. - - Can optionally remove BPE symbols or escape <unk> words. 
- """ - if torch.is_tensor(tensor) and tensor.dim() == 2: - return "\n".join( - self.string( - t, - bpe_symbol, - escape_unk, - extra_symbols_to_ignore, - include_eos=include_eos, - ) - for t in tensor - ) - - extra_symbols_to_ignore = set(extra_symbols_to_ignore or []) - if not include_eos: - extra_symbols_to_ignore.add(self.eos()) - - def token_string(i): - if i == self.unk(): - if unk_string is not None: - return unk_string - else: - return self.unk_string(escape_unk) - else: - return self[i] - - if hasattr(self, "bos_index"): - extra_symbols_to_ignore.add(self.bos()) - - sent = separator.join( - token_string(i) - for i in tensor - if utils.item(i) not in extra_symbols_to_ignore - ) - - return data_utils.post_process(sent, bpe_symbol) - - def unk_string(self, escape=False): - """Return unknown string, optionally escaped as: <<unk>>""" - if escape: - return "<{}>".format(self.unk_word) - else: - return self.unk_word - - def add_symbol(self, word, n=1, overwrite=False): - """Adds a word to the dictionary""" - if word in self.indices and not overwrite: - idx = self.indices[word] - self.count[idx] = self.count[idx] + n - return idx - else: - idx = len(self.symbols) - self.indices[word] = idx - self.symbols.append(word) - self.count.append(n) - return idx - - def update(self, new_dict): - """Updates counts from new dictionary.""" - for word in new_dict.symbols: - idx2 = new_dict.indices[word] - if word in self.indices: - idx = self.indices[word] - self.count[idx] = self.count[idx] + new_dict.count[idx2] - else: - idx = len(self.symbols) - self.indices[word] = idx - self.symbols.append(word) - self.count.append(new_dict.count[idx2]) - - def finalize(self, threshold=-1, nwords=-1, padding_factor=8): - """Sort symbols by frequency in descending order, ignoring special ones. - - Args: - - threshold defines the minimum word count - - nwords defines the total number of words in the final dictionary, - including special symbols - - padding_factor can be used to pad the dictionary size to be a - multiple of 8, which is important on some hardware (e.g., Nvidia - Tensor Cores). 
- """ - if nwords <= 0: - nwords = len(self) - - new_indices = dict(zip(self.symbols[: self.nspecial], range(self.nspecial))) - new_symbols = self.symbols[: self.nspecial] - new_count = self.count[: self.nspecial] - - c = Counter( - dict( - sorted(zip(self.symbols[self.nspecial :], self.count[self.nspecial :])) - ) - ) - for symbol, count in c.most_common(nwords - self.nspecial): - if count >= threshold: - new_indices[symbol] = len(new_symbols) - new_symbols.append(symbol) - new_count.append(count) - else: - break - - assert len(new_symbols) == len(new_indices) - - self.count = list(new_count) - self.symbols = list(new_symbols) - self.indices = new_indices - - self.pad_to_multiple_(padding_factor) - - def pad_to_multiple_(self, padding_factor): - """Pad Dictionary size to be a multiple of *padding_factor*.""" - if padding_factor > 1: - i = 0 - while len(self) % padding_factor != 0: - symbol = "madeupword{:04d}".format(i) - self.add_symbol(symbol, n=0) - i += 1 - - def bos(self): - """Helper to get index of beginning-of-sentence symbol""" - return self.bos_index - - def pad(self): - """Helper to get index of pad symbol""" - return self.pad_index - - def eos(self): - """Helper to get index of end-of-sentence symbol""" - return self.eos_index - - def unk(self): - """Helper to get index of unk symbol""" - return self.unk_index - - @classmethod - def load(cls, f): - """Loads the dictionary from a text file with the format: - - ``` - <symbol0> <count0> - <symbol1> <count1> - ... - ``` - """ - d = cls() - d.add_from_file(f) - return d - - def add_from_file(self, f): - """ - Loads a pre-existing dictionary from a text file and adds its symbols - to this instance. - """ - if isinstance(f, str): - try: - with open(PathManager.get_local_path(f), "r", encoding="utf-8") as fd: - self.add_from_file(fd) - except FileNotFoundError as fnfe: - raise fnfe - except UnicodeError: - raise Exception( - "Incorrect encoding detected in {}, please " - "rebuild the dataset".format(f) - ) - return - - lines = f.readlines() - indices_start_line = self._load_meta(lines) - - for line in lines[indices_start_line:]: - try: - line, field = line.rstrip().rsplit(" ", 1) - if field == "#fairseq:overwrite": - overwrite = True - line, field = line.rsplit(" ", 1) - else: - overwrite = False - count = int(field) - word = line - if word in self and not overwrite: - raise RuntimeError( - "Duplicate word found when loading Dictionary: '{}'. " - "Duplicate words can overwrite earlier ones by adding the " - "#fairseq:overwrite flag at the end of the corresponding row " - "in the dictionary file. 
If using the Camembert model, please " - "download an updated copy of the model file.".format(word) - ) - self.add_symbol(word, n=count, overwrite=overwrite) - except ValueError: - raise ValueError( - f"Incorrect dictionary format, expected '<token> <cnt> [flags]': \"{line}\"" - ) - - def _save(self, f, kv_iterator): - if isinstance(f, str): - PathManager.mkdirs(os.path.dirname(f)) - with PathManager.open(f, "w", encoding="utf-8") as fd: - return self.save(fd) - for k, v in kv_iterator: - print("{} {}".format(k, v), file=f) - - def _get_meta(self): - return [], [] - - def _load_meta(self, lines): - return 0 - - def save(self, f): - """Stores dictionary into a text file""" - ex_keys, ex_vals = self._get_meta() - self._save( - f, - zip( - ex_keys + self.symbols[self.nspecial :], - ex_vals + self.count[self.nspecial :], - ), - ) - - def dummy_sentence(self, length): - t = torch.Tensor(length).uniform_(self.nspecial + 1, len(self)).long() - t[-1] = self.eos() - return t - - def encode_line( - self, - line, - line_tokenizer=tokenize_line, - add_if_not_exist=True, - consumer=None, - append_eos=True, - reverse_order=False, - ) -> torch.IntTensor: - words = line_tokenizer(line) - if reverse_order: - words = list(reversed(words)) - nwords = len(words) - ids = torch.IntTensor(nwords + 1 if append_eos else nwords) - - for i, word in enumerate(words): - if add_if_not_exist: - idx = self.add_symbol(word) - else: - idx = self.index(word) - if consumer is not None: - consumer(word, idx) - ids[i] = idx - if append_eos: - ids[nwords] = self.eos_index - return ids - - @staticmethod - def _add_file_to_dictionary_single_worker( - filename, - tokenize, - eos_word, - start_offset, - end_offset, - ): - counter = Counter() - with Chunker(filename, start_offset, end_offset) as line_iterator: - for line in line_iterator: - for word in tokenize(line): - counter.update([word]) - counter.update([eos_word]) - return counter - - @staticmethod - def add_file_to_dictionary(filename, dict, tokenize, num_workers): - def merge_result(counter): - for w, c in sorted(counter.items()): - dict.add_symbol(w, c) - - local_file = PathManager.get_local_path(filename) - offsets = find_offsets(local_file, num_workers) - if num_workers > 1: - chunks = zip(offsets, offsets[1:]) - pool = Pool(processes=num_workers) - results = [] - for (start_offset, end_offset) in chunks: - results.append( - pool.apply_async( - Dictionary._add_file_to_dictionary_single_worker, - ( - local_file, - tokenize, - dict.eos_word, - start_offset, - end_offset, - ), - ) - ) - pool.close() - pool.join() - for r in results: - merge_result(r.get()) - else: - merge_result( - Dictionary._add_file_to_dictionary_single_worker( - local_file, tokenize, dict.eos_word, offsets[0], offsets[1] - ) - ) - - -class TruncatedDictionary(object): - def __init__(self, wrapped_dict, length): - self.__class__ = type( - wrapped_dict.__class__.__name__, - (self.__class__, wrapped_dict.__class__), - {}, - ) - self.__dict__ = wrapped_dict.__dict__ - self.wrapped_dict = wrapped_dict - self.length = min(len(self.wrapped_dict), length) - - def __len__(self): - return self.length - - def __getitem__(self, i): - if i < self.length: - return self.wrapped_dict[i] - return self.wrapped_dict.unk() diff --git a/kosmos-g/fairseq/fairseq/data/encoders/__init__.py b/kosmos-g/fairseq/fairseq/data/encoders/__init__.py deleted file mode 100644 index 7cbe00a10..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) Facebook, Inc. 
and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import importlib -import os - -from fairseq import registry - - -build_tokenizer, register_tokenizer, TOKENIZER_REGISTRY, _ = registry.setup_registry( - "--tokenizer", - default=None, -) - - -build_bpe, register_bpe, BPE_REGISTRY, _ = registry.setup_registry( - "--bpe", - default=None, -) - - -# automatically import any Python files in the encoders/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - module = file[: file.find(".py")] - importlib.import_module("fairseq.data.encoders." + module) diff --git a/kosmos-g/fairseq/fairseq/data/encoders/byte_bpe.py b/kosmos-g/fairseq/fairseq/data/encoders/byte_bpe.py deleted file mode 100644 index 31e3a0627..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/byte_bpe.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from dataclasses import dataclass, field - -from fairseq import file_utils -from fairseq.data.encoders import register_bpe -from fairseq.data.encoders.byte_utils import ( - SPACE, - SPACE_ESCAPE, - byte_encode, - smart_byte_decode, -) -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class ByteBpeConfig(FairseqDataclass): - sentencepiece_model_path: str = field( - default="???", metadata={"help": "path to sentencepiece model"} - ) - - -@register_bpe("byte_bpe", dataclass=ByteBpeConfig) -class ByteBPE(object): - def __init__(self, cfg): - vocab = file_utils.cached_path(cfg.sentencepiece_model_path) - try: - import sentencepiece as spm - - self.sp = spm.SentencePieceProcessor() - self.sp.Load(vocab) - except ImportError: - raise ImportError( - "Please install sentencepiece with: pip install sentencepiece" - ) - - def encode(self, x: str) -> str: - byte_encoded = byte_encode(x) - return SPACE.join(self.sp.EncodeAsPieces(byte_encoded)) - - @staticmethod - def decode(x: str) -> str: - unescaped = x.replace(SPACE, "").replace(SPACE_ESCAPE, SPACE) - return smart_byte_decode(unescaped) diff --git a/kosmos-g/fairseq/fairseq/data/encoders/byte_utils.py b/kosmos-g/fairseq/fairseq/data/encoders/byte_utils.py deleted file mode 100644 index a305c0809..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/byte_utils.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
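Every encoder in this package hooks into fairseq through the decorator registries created above. Here is a minimal standalone sketch of the same pattern (illustrative only; the real `registry.setup_registry` additionally wires dataclass configs into the `--bpe`/`--tokenizer` CLI flags):

```python
_BPE_REGISTRY = {}

def register_bpe(name):
    """Class decorator that records a BPE scheme under a string key."""
    def wrapper(cls):
        if name in _BPE_REGISTRY:
            raise ValueError(f"duplicate BPE scheme: {name}")
        _BPE_REGISTRY[name] = cls
        return cls
    return wrapper

def build_bpe(name, *args, **kwargs):
    return _BPE_REGISTRY[name](*args, **kwargs)

@register_bpe("identity")
class IdentityBPE:
    def encode(self, x: str) -> str:
        return x
    def decode(self, x: str) -> str:
        return x

print(build_bpe("identity").encode("hello"))  # hello
```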
- -import re - - -WHITESPACE_NORMALIZER = re.compile(r"\s+") -SPACE = chr(32) -SPACE_ESCAPE = chr(9601) -# excluding non-breaking space (160) here -PRINTABLE_LATIN = set( - list(range(32, 126 + 1)) + list(range(161, 172 + 1)) + list(range(174, 255 + 1)) -) -BYTE_TO_BCHAR = { - b: chr(b) if b in PRINTABLE_LATIN else chr(256 + b) for b in range(256) -} -BCHAR_TO_BYTE = {bc: b for b, bc in BYTE_TO_BCHAR.items()} - - -def byte_encode(x: str) -> str: - normalized = WHITESPACE_NORMALIZER.sub(SPACE, x) - return "".join([BYTE_TO_BCHAR[b] for b in normalized.encode("utf-8")]) - - -def byte_decode(x: str) -> str: - try: - return bytes([BCHAR_TO_BYTE[bc] for bc in x]).decode("utf-8") - except ValueError: - return "" - - -def smart_byte_decode(x: str) -> str: - output = byte_decode(x) - if output == "": - # DP the best recovery (max valid chars) if it's broken - n_bytes = len(x) - f = [0 for _ in range(n_bytes + 1)] - pt = [0 for _ in range(n_bytes + 1)] - for i in range(1, n_bytes + 1): - f[i], pt[i] = f[i - 1], i - 1 - for j in range(1, min(4, i) + 1): - if f[i - j] + 1 > f[i] and len(byte_decode(x[i - j : i])) > 0: - f[i], pt[i] = f[i - j] + 1, i - j - cur_pt = n_bytes - while cur_pt > 0: - if f[cur_pt] == f[pt[cur_pt]] + 1: - output = byte_decode(x[pt[cur_pt] : cur_pt]) + output - cur_pt = pt[cur_pt] - return output diff --git a/kosmos-g/fairseq/fairseq/data/encoders/bytes.py b/kosmos-g/fairseq/fairseq/data/encoders/bytes.py deleted file mode 100644 index f88f8f692..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/bytes.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from fairseq.data.encoders import register_bpe -from fairseq.data.encoders.byte_utils import ( - SPACE, - SPACE_ESCAPE, - byte_encode, - smart_byte_decode, -) - - -@register_bpe("bytes") -class Bytes(object): - def __init__(self, *unused): - pass - - @staticmethod - def add_args(parser): - pass - - @staticmethod - def encode(x: str) -> str: - encoded = byte_encode(x) - escaped = encoded.replace(SPACE, SPACE_ESCAPE) - return SPACE.join(list(escaped)) - - @staticmethod - def decode(x: str) -> str: - unescaped = x.replace(SPACE, "").replace(SPACE_ESCAPE, SPACE) - return smart_byte_decode(unescaped) diff --git a/kosmos-g/fairseq/fairseq/data/encoders/characters.py b/kosmos-g/fairseq/fairseq/data/encoders/characters.py deleted file mode 100644 index 494ea2193..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/characters.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
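The byte-level scheme above maps every UTF-8 byte to a single printable character, so multi-byte code points expand into several characters but the transform stays exactly invertible (unless the byte string is truncated, which is what `smart_byte_decode` recovers from). A condensed, self-contained round trip using the same tables as `byte_utils.py`:

```python
import re

WHITESPACE_NORMALIZER = re.compile(r"\s+")
SPACE = chr(32)
PRINTABLE_LATIN = set(
    list(range(32, 126 + 1)) + list(range(161, 172 + 1)) + list(range(174, 255 + 1))
)
BYTE_TO_BCHAR = {b: chr(b) if b in PRINTABLE_LATIN else chr(256 + b) for b in range(256)}
BCHAR_TO_BYTE = {bc: b for b, bc in BYTE_TO_BCHAR.items()}

def byte_encode(x: str) -> str:
    normalized = WHITESPACE_NORMALIZER.sub(SPACE, x)
    return "".join(BYTE_TO_BCHAR[b] for b in normalized.encode("utf-8"))

def byte_decode(x: str) -> str:
    return bytes(BCHAR_TO_BYTE[bc] for bc in x).decode("utf-8")

s = "héllo"
enc = byte_encode(s)
assert byte_decode(enc) == s
print(enc, len(enc))  # 'é' occupies two UTF-8 bytes, so 5 code points -> 6 chars
```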
- - -from fairseq.data.encoders import register_bpe - - -SPACE = chr(32) -SPACE_ESCAPE = chr(9601) - - -@register_bpe("characters") -class Characters(object): - def __init__(self, *unused): - pass - - @staticmethod - def add_args(parser): - pass - - @staticmethod - def encode(x: str) -> str: - escaped = x.replace(SPACE, SPACE_ESCAPE) - return SPACE.join(list(escaped)) - - @staticmethod - def decode(x: str) -> str: - return x.replace(SPACE, "").replace(SPACE_ESCAPE, SPACE) diff --git a/kosmos-g/fairseq/fairseq/data/encoders/fastbpe.py b/kosmos-g/fairseq/fairseq/data/encoders/fastbpe.py deleted file mode 100644 index f7c210395..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/fastbpe.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field - -from fairseq import file_utils -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class fastBPEConfig(FairseqDataclass): - bpe_codes: str = field(default="???", metadata={"help": "path to fastBPE BPE"}) - - -@register_bpe("fastbpe", dataclass=fastBPEConfig) -class fastBPE(object): - def __init__(self, cfg): - if cfg.bpe_codes is None: - raise ValueError("--bpe-codes is required for --bpe=fastbpe") - codes = file_utils.cached_path(cfg.bpe_codes) - try: - import fastBPE - - self.bpe = fastBPE.fastBPE(codes) - self.bpe_symbol = "@@ " - except ImportError: - raise ImportError("Please install fastBPE with: pip install fastBPE") - - def encode(self, x: str) -> str: - return self.bpe.apply([x])[0] - - def decode(self, x: str) -> str: - return (x + " ").replace(self.bpe_symbol, "").rstrip() diff --git a/kosmos-g/fairseq/fairseq/data/encoders/gpt2_bpe.py b/kosmos-g/fairseq/fairseq/data/encoders/gpt2_bpe.py deleted file mode 100644 index e661426a7..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/gpt2_bpe.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
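The character-level encoder above is the simplest of these schemes: escape real spaces with U+2581, then emit one token per character. A round trip, assuming nothing beyond the two constants from `characters.py`:

```python
SPACE = chr(32)
SPACE_ESCAPE = chr(9601)  # '▁'

def encode(x: str) -> str:
    # escape spaces, then separate every character with a real space
    return SPACE.join(list(x.replace(SPACE, SPACE_ESCAPE)))

def decode(x: str) -> str:
    return x.replace(SPACE, "").replace(SPACE_ESCAPE, SPACE)

s = "ab c"
assert decode(encode(s)) == s
print(encode(s))  # a b ▁ c
```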
- -from dataclasses import dataclass, field - -from fairseq import file_utils -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass - -from .gpt2_bpe_utils import get_encoder - - -DEFAULT_ENCODER_JSON = "https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json" -DEFAULT_VOCAB_BPE = "https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe" - - -@dataclass -class GPT2BPEConfig(FairseqDataclass): - gpt2_encoder_json: str = field( - default=DEFAULT_ENCODER_JSON, metadata={"help": "path to encoder.json"} - ) - gpt2_vocab_bpe: str = field( - default=DEFAULT_VOCAB_BPE, metadata={"help": "path to vocab.bpe"} - ) - - -@register_bpe("gpt2", dataclass=GPT2BPEConfig) -class GPT2BPE(object): - def __init__(self, cfg): - encoder_json = file_utils.cached_path(cfg.gpt2_encoder_json) - vocab_bpe = file_utils.cached_path(cfg.gpt2_vocab_bpe) - self.bpe = get_encoder(encoder_json, vocab_bpe) - - def encode(self, x: str) -> str: - return " ".join(map(str, self.bpe.encode(x))) - - def decode(self, x: str) -> str: - return self.bpe.decode( - [int(tok) if tok not in {"<unk>", "<mask>"} else tok for tok in x.split()] - ) - - def is_beginning_of_word(self, x: str) -> bool: - return self.decode(x).startswith(" ") diff --git a/kosmos-g/fairseq/fairseq/data/encoders/gpt2_bpe_utils.py b/kosmos-g/fairseq/fairseq/data/encoders/gpt2_bpe_utils.py deleted file mode 100644 index 688d4e36e..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/gpt2_bpe_utils.py +++ /dev/null @@ -1,140 +0,0 @@ -""" -Byte pair encoding utilities from GPT-2. - -Original source: https://github.com/openai/gpt-2/blob/master/src/encoder.py -Original license: MIT -""" - -import json -from functools import lru_cache - - -@lru_cache() -def bytes_to_unicode(): - """ - Returns list of utf-8 byte and a corresponding list of unicode strings. - The reversible bpe codes work on unicode strings. - This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. - When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. - This is a signficant percentage of your normal, say, 32K bpe vocab. - To avoid that, we want lookup tables between utf-8 bytes and unicode strings. - And avoids mapping to whitespace/control characters the bpe code barfs on. - """ - bs = ( - list(range(ord("!"), ord("~") + 1)) - + list(range(ord("¡"), ord("¬") + 1)) - + list(range(ord("®"), ord("ÿ") + 1)) - ) - cs = bs[:] - n = 0 - for b in range(2 ** 8): - if b not in bs: - bs.append(b) - cs.append(2 ** 8 + n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -def get_pairs(word): - """Return set of symbol pairs in a word. - Word is represented as tuple of symbols (symbols being variable-length strings). 
- """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -class Encoder: - def __init__(self, encoder, bpe_merges, errors="replace"): - self.encoder = encoder - self.decoder = {v: k for k, v in self.encoder.items()} - self.errors = errors # how to handle errors in decoding - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges)))) - self.cache = {} - - try: - import regex as re - - self.re = re - except ImportError: - raise ImportError("Please install regex with: pip install regex") - - # Should haved added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions - self.pat = self.re.compile( - r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""" - ) - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token) - pairs = get_pairs(word) - - if not pairs: - return token - - while True: - bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - new_word.extend(word[i:j]) - i = j - except: - new_word.extend(word[i:]) - break - - if word[i] == first and i < len(word) - 1 and word[i + 1] == second: - new_word.append(first + second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = " ".join(word) - self.cache[token] = word - return word - - def encode(self, text): - bpe_tokens = [] - for token in self.re.findall(self.pat, text): - token = "".join(self.byte_encoder[b] for b in token.encode("utf-8")) - bpe_tokens.extend( - self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" ") - ) - return bpe_tokens - - def decode(self, tokens): - text = "".join([self.decoder.get(token, token) for token in tokens]) - text = bytearray([self.byte_decoder[c] for c in text]).decode( - "utf-8", errors=self.errors - ) - return text - - -def get_encoder(encoder_json_path, vocab_bpe_path): - with open(encoder_json_path, "r") as f: - encoder = json.load(f) - with open(vocab_bpe_path, "r", encoding="utf-8") as f: - bpe_data = f.read() - bpe_merges = [tuple(merge_str.split()) for merge_str in bpe_data.split("\n")[1:-1]] - return Encoder( - encoder=encoder, - bpe_merges=bpe_merges, - ) diff --git a/kosmos-g/fairseq/fairseq/data/encoders/hf_bert_bpe.py b/kosmos-g/fairseq/fairseq/data/encoders/hf_bert_bpe.py deleted file mode 100644 index a41c05934..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/hf_bert_bpe.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
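The heart of `Encoder.bpe` above is the merge loop: repeatedly find the adjacent symbol pair with the lowest merge rank and fuse it, until no ranked pair remains. A compact re-implementation of that loop with a toy, made-up merge table (the real ranks come from `vocab.bpe`):

```python
def get_pairs(word):
    return {(a, b) for a, b in zip(word, word[1:])}

def apply_bpe(token, bpe_ranks):
    word = tuple(token)
    while len(word) > 1:
        pairs = get_pairs(word)
        bigram = min(pairs, key=lambda p: bpe_ranks.get(p, float("inf")))
        if bigram not in bpe_ranks:
            break  # no mergeable pair left
        first, second = bigram
        new_word, i = [], 0
        while i < len(word):
            if i < len(word) - 1 and word[i] == first and word[i + 1] == second:
                new_word.append(first + second)  # fuse the ranked pair
                i += 2
            else:
                new_word.append(word[i])
                i += 1
        word = tuple(new_word)
    return " ".join(word)

ranks = {("l", "o"): 0, ("lo", "w"): 1, ("h", "e"): 2}  # assumed toy merges
print(apply_bpe("hello", ranks))  # he l lo
```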
- -from dataclasses import dataclass, field -from typing import Optional - -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class BertBPEConfig(FairseqDataclass): - bpe_cased: bool = field(default=False, metadata={"help": "set for cased BPE"}) - bpe_vocab_file: Optional[str] = field( - default=None, metadata={"help": "bpe vocab file"} - ) - - -@register_bpe("bert", dataclass=BertBPEConfig) -class BertBPE(object): - def __init__(self, cfg): - try: - from transformers import BertTokenizer - except ImportError: - raise ImportError( - "Please install transformers with: pip install transformers" - ) - - if cfg.bpe_vocab_file: - self.bert_tokenizer = BertTokenizer( - cfg.bpe_vocab_file, do_lower_case=not cfg.bpe_cased - ) - else: - vocab_file_name = ( - "bert-base-cased" if cfg.bpe_cased else "bert-base-uncased" - ) - self.bert_tokenizer = BertTokenizer.from_pretrained(vocab_file_name) - - def encode(self, x: str) -> str: - return " ".join(self.bert_tokenizer.tokenize(x)) - - def decode(self, x: str) -> str: - return self.bert_tokenizer.clean_up_tokenization( - self.bert_tokenizer.convert_tokens_to_string(x.split(" ")) - ) - - def is_beginning_of_word(self, x: str) -> bool: - return not x.startswith("##") diff --git a/kosmos-g/fairseq/fairseq/data/encoders/hf_byte_bpe.py b/kosmos-g/fairseq/fairseq/data/encoders/hf_byte_bpe.py deleted file mode 100644 index c508578d4..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/hf_byte_bpe.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field - -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass -from fairseq import file_utils - - -@dataclass -class HuggingFaceByteLevelBPEConfig(FairseqDataclass): - bpe_merges: str = field(default="???", metadata={"help": "path to merges.txt"}) - bpe_vocab: str = field(default="???", metadata={"help": "path to vocab.json"}) - bpe_add_prefix_space: bool = field( - default=False, metadata={"help": "add prefix space before encoding"} - ) - - -@register_bpe("hf_byte_bpe", dataclass=HuggingFaceByteLevelBPEConfig) -class HuggingFaceByteLevelBPE(object): - def __init__(self, cfg): - try: - from tokenizers import ByteLevelBPETokenizer - except ImportError: - raise ImportError( - "Please install huggingface/tokenizers with: " "pip install tokenizers" - ) - - bpe_vocab = file_utils.cached_path(cfg.bpe_vocab) - bpe_merges = file_utils.cached_path(cfg.bpe_merges) - - self.bpe = ByteLevelBPETokenizer( - bpe_vocab, - bpe_merges, - add_prefix_space=cfg.bpe_add_prefix_space, - ) - - def encode(self, x: str) -> str: - return " ".join(map(str, self.bpe.encode(x).ids)) - - def decode(self, x: str) -> str: - return self.bpe.decode( - [int(tok) if tok not in {"<unk>", "<mask>"} else tok for tok in x.split()] - ) - - def is_beginning_of_word(self, x: str) -> bool: - return self.decode(x).startswith(" ") diff --git a/kosmos-g/fairseq/fairseq/data/encoders/moses_tokenizer.py b/kosmos-g/fairseq/fairseq/data/encoders/moses_tokenizer.py deleted file mode 100644 index e236dad16..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/moses_tokenizer.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field - -from fairseq.data.encoders import register_tokenizer -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class MosesTokenizerConfig(FairseqDataclass): - source_lang: str = field(default="en", metadata={"help": "source language"}) - target_lang: str = field(default="en", metadata={"help": "target language"}) - moses_no_dash_splits: bool = field( - default=False, metadata={"help": "don't apply dash split rules"} - ) - moses_no_escape: bool = field( - default=False, - metadata={"help": "don't perform HTML escaping on apostrophe, quotes, etc."}, - ) - - -@register_tokenizer("moses", dataclass=MosesTokenizerConfig) -class MosesTokenizer(object): - def __init__(self, cfg: MosesTokenizerConfig): - self.cfg = cfg - - try: - from sacremoses import MosesTokenizer, MosesDetokenizer - - self.tok = MosesTokenizer(cfg.source_lang) - self.detok = MosesDetokenizer(cfg.target_lang) - except ImportError: - raise ImportError( - "Please install Moses tokenizer with: pip install sacremoses" - ) - - def encode(self, x: str) -> str: - return self.tok.tokenize( - x, - aggressive_dash_splits=(not self.cfg.moses_no_dash_splits), - return_str=True, - escape=(not self.cfg.moses_no_escape), - ) - - def decode(self, x: str) -> str: - return self.detok.detokenize(x.split()) diff --git a/kosmos-g/fairseq/fairseq/data/encoders/nltk_tokenizer.py b/kosmos-g/fairseq/fairseq/data/encoders/nltk_tokenizer.py deleted file mode 100644 index 0ab92377b..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/nltk_tokenizer.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.data.encoders import register_tokenizer -from fairseq.dataclass import FairseqDataclass - - -@register_tokenizer("nltk", dataclass=FairseqDataclass) -class NLTKTokenizer(object): - def __init__(self, *unused): - try: - from nltk.tokenize import word_tokenize - - self.word_tokenize = word_tokenize - except ImportError: - raise ImportError("Please install nltk with: pip install nltk") - - def encode(self, x: str) -> str: - return " ".join(self.word_tokenize(x)) - - def decode(self, x: str) -> str: - return x diff --git a/kosmos-g/fairseq/fairseq/data/encoders/sentencepiece_bpe.py b/kosmos-g/fairseq/fairseq/data/encoders/sentencepiece_bpe.py deleted file mode 100644 index 0aa6cd768..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/sentencepiece_bpe.py +++ /dev/null @@ -1,65 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
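Despite wrapping very different libraries (transformers, tokenizers, sacremoses, nltk), every class above exposes the same `encode(str) -> str` / `decode(str) -> str` interface, which is what lets fairseq chain a word tokenizer with a subword scheme and invert the chain in reverse order. A sketch with trivial stand-in components (the real wrappers need their respective packages installed):

```python
class WhitespaceTokenizer:
    def encode(self, x: str) -> str:
        return " ".join(x.split())
    def decode(self, x: str) -> str:
        return x

class MarkerBPE:
    """Stand-in 'subword' scheme: tags each token so the step is invertible."""
    def encode(self, x: str) -> str:
        return " ".join("@" + t for t in x.split())
    def decode(self, x: str) -> str:
        return " ".join(t.lstrip("@") for t in x.split())

tok, bpe = WhitespaceTokenizer(), MarkerBPE()
s = "hello world"
encoded = bpe.encode(tok.encode(s))  # tokenize, then subword-encode
assert tok.decode(bpe.decode(encoded)) == s
print(encoded)  # @hello @world
```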
- -from dataclasses import dataclass, field -from typing import Optional - -from fairseq import file_utils -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class SentencepieceConfig(FairseqDataclass): - sentencepiece_model: str = field( - default="???", metadata={"help": "path to sentencepiece model"} - ) - sentencepiece_enable_sampling: bool = field( - default=False, metadata={"help": "enable sampling"} - ) - sentencepiece_alpha: Optional[float] = field( - default=None, - metadata={ - "help": "soothing parameter for unigram sampling, " - "and merge probability for BPE-dropout" - }, - ) - - -@register_bpe("sentencepiece", dataclass=SentencepieceConfig) -class SentencepieceBPE(object): - def __init__(self, cfg): - self.enable_sampling = cfg.sentencepiece_enable_sampling - self.alpha = cfg.sentencepiece_alpha - sentencepiece_model = file_utils.cached_path(cfg.sentencepiece_model) - try: - import sentencepiece as spm - - self.sp = spm.SentencePieceProcessor() - self.sp.Load(sentencepiece_model) - except ImportError: - raise ImportError( - "Please install sentencepiece with: pip install sentencepiece" - ) - - def encode(self, x: str) -> str: - return " ".join( - self.sp.Encode( - x, out_type=str, enable_sampling=self.enable_sampling, alpha=self.alpha - ) - ) - - def decode(self, x: str) -> str: - return x.replace(" ", "").replace("\u2581", " ").strip() - - def is_beginning_of_word(self, x: str) -> bool: - if x in ["<unk>", "<s>", "</s>", "<pad>"]: - # special elements are always considered beginnings - # HACK: this logic is already present in fairseq/tasks/masked_lm.py - # but these special tokens are also contained in the sentencepiece - # vocabulary which causes duplicate special tokens. This hack makes - # sure that they are all taken into account. - return True - return x.startswith("\u2581") diff --git a/kosmos-g/fairseq/fairseq/data/encoders/space_tokenizer.py b/kosmos-g/fairseq/fairseq/data/encoders/space_tokenizer.py deleted file mode 100644 index 925ad41b7..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/space_tokenizer.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import re - -from fairseq.data.encoders import register_tokenizer -from fairseq.dataclass import FairseqDataclass - - -@register_tokenizer("space", dataclass=FairseqDataclass) -class SpaceTokenizer(object): - def __init__(self, *unused): - self.space_tok = re.compile(r"\s+") - - def encode(self, x: str) -> str: - return self.space_tok.sub(" ", x) - - def decode(self, x: str) -> str: - return x diff --git a/kosmos-g/fairseq/fairseq/data/encoders/subword_nmt_bpe.py b/kosmos-g/fairseq/fairseq/data/encoders/subword_nmt_bpe.py deleted file mode 100644 index 5d724d273..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/subword_nmt_bpe.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
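Decoding sentencepiece output above is a pure string transform: drop the spaces that separate pieces and turn the U+2581 word-boundary markers back into spaces. The same marker also drives `is_beginning_of_word`, which whole-word masking relies on. A short self-contained check:

```python
pieces = ["\u2581Hel", "lo", "\u2581wor", "ld"]

# equivalent to: " ".join(pieces).replace(" ", "").replace("\u2581", " ").strip()
text = "".join(pieces).replace("\u2581", " ").strip()
print(text)  # Hello world

def is_beginning_of_word(piece: str) -> bool:
    if piece in ["<unk>", "<s>", "</s>", "<pad>"]:
        return True  # special symbols always count as word starts
    return piece.startswith("\u2581")

print([is_beginning_of_word(p) for p in pieces])  # [True, False, True, False]
```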
- -from dataclasses import dataclass, field - -from fairseq import file_utils -from fairseq.data.encoders import register_bpe -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class SubwordNMTBPEConfig(FairseqDataclass): - bpe_codes: str = field(default="???", metadata={"help": "path to subword NMT BPE"}) - bpe_separator: str = field(default="@@", metadata={"help": "BPE separator"}) - - -@register_bpe("subword_nmt", dataclass=SubwordNMTBPEConfig) -class SubwordNMTBPE(object): - def __init__(self, cfg): - if cfg.bpe_codes is None: - raise ValueError("--bpe-codes is required for --bpe=subword_nmt") - codes = file_utils.cached_path(cfg.bpe_codes) - try: - from subword_nmt import apply_bpe - - bpe_parser = apply_bpe.create_parser() - bpe_args = bpe_parser.parse_args( - [ - "--codes", - codes, - "--separator", - cfg.bpe_separator, - ] - ) - self.bpe = apply_bpe.BPE( - bpe_args.codes, - bpe_args.merges, - bpe_args.separator, - None, - bpe_args.glossaries, - ) - self.bpe_symbol = bpe_args.separator + " " - except ImportError: - raise ImportError( - "Please install subword_nmt with: pip install subword-nmt" - ) - - def encode(self, x: str) -> str: - return self.bpe.process_line(x) - - def decode(self, x: str) -> str: - return (x + " ").replace(self.bpe_symbol, "").rstrip() diff --git a/kosmos-g/fairseq/fairseq/data/encoders/utils.py b/kosmos-g/fairseq/fairseq/data/encoders/utils.py deleted file mode 100644 index d93eb532e..000000000 --- a/kosmos-g/fairseq/fairseq/data/encoders/utils.py +++ /dev/null @@ -1,30 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from fairseq.data import encoders - - -def get_whole_word_mask(args, dictionary): - bpe = encoders.build_bpe(args) - if bpe is not None: - - def is_beginning_of_word(i): - if i < dictionary.nspecial: - # special elements are always considered beginnings - return True - tok = dictionary[i] - if tok.startswith("madeupword"): - return True - try: - return bpe.is_beginning_of_word(tok) - except ValueError: - return True - - mask_whole_words = torch.ByteTensor( - list(map(is_beginning_of_word, range(len(dictionary)))) - ) - return mask_whole_words - return None diff --git a/kosmos-g/fairseq/fairseq/data/fairseq_dataset.py b/kosmos-g/fairseq/fairseq/data/fairseq_dataset.py deleted file mode 100644 index 2bde7fc57..000000000 --- a/kosmos-g/fairseq/fairseq/data/fairseq_dataset.py +++ /dev/null @@ -1,205 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import numpy as np -import torch.utils.data -from fairseq.data import data_utils - -logger = logging.getLogger(__name__) - - -class EpochListening: - """Mixin for receiving updates whenever the epoch increments.""" - - @property - def can_reuse_epoch_itr_across_epochs(self): - """ - Whether we can reuse the :class:`fairseq.data.EpochBatchIterator` for - this dataset across epochs. - - This needs to return ``False`` if the sample sizes can change across - epochs, in which case we may need to regenerate batches at each epoch. - If your dataset relies in ``set_epoch`` then you should consider setting - this to ``False``. 
- """ - return True - - def set_epoch(self, epoch): - """Will receive the updated epoch number at the beginning of the epoch.""" - pass - - -class FairseqDataset(torch.utils.data.Dataset, EpochListening): - """A dataset that provides helpers for batching.""" - - def __getitem__(self, index): - raise NotImplementedError - - def __len__(self): - raise NotImplementedError - - def collater(self, samples): - """Merge a list of samples to form a mini-batch. - - Args: - samples (List[dict]): samples to collate - - Returns: - dict: a mini-batch suitable for forwarding with a Model - """ - raise NotImplementedError - - def num_tokens(self, index): - """Return the number of tokens in a sample. This value is used to - enforce ``--max-tokens`` during batching.""" - raise NotImplementedError - - def num_tokens_vec(self, indices): - """Return the number of tokens for a set of positions defined by indices. - This value is used to enforce ``--max-tokens`` during batching.""" - raise NotImplementedError - - def size(self, index): - """Return an example's size as a float or tuple. This value is used when - filtering a dataset with ``--max-positions``.""" - raise NotImplementedError - - def ordered_indices(self): - """Return an ordered list of indices. Batches will be constructed based - on this order.""" - return np.arange(len(self), dtype=np.int64) - - @property - def supports_prefetch(self): - """Whether this dataset supports prefetching.""" - return False - - def attr(self, attr: str, index: int): - return getattr(self, attr, None) - - def prefetch(self, indices): - """Prefetch the data required for this epoch.""" - raise NotImplementedError - - def get_batch_shapes(self): - """ - Return a list of valid batch shapes, for example:: - - [(8, 512), (16, 256), (32, 128)] - - The first dimension of each tuple is the batch size and can be ``None`` - to automatically infer the max batch size based on ``--max-tokens``. - The second dimension of each tuple is the max supported length as given - by :func:`fairseq.data.FairseqDataset.num_tokens`. - - This will be used by :func:`fairseq.data.FairseqDataset.batch_by_size` - to restrict batch shapes. This is useful on TPUs to avoid too many - dynamic shapes (and recompilations). - """ - return None - - def batch_by_size( - self, - indices, - max_tokens=None, - max_sentences=None, - required_batch_size_multiple=1, - ): - """ - Given an ordered set of indices, return batches according to - *max_tokens*, *max_sentences* and *required_batch_size_multiple*. 
- """ - from fairseq.data import data_utils - - fixed_shapes = self.get_batch_shapes() - if fixed_shapes is not None: - - def adjust_bsz(bsz, num_tokens): - if bsz is None: - assert max_tokens is not None, "Must specify --max-tokens" - bsz = max_tokens // num_tokens - if max_sentences is not None: - bsz = min(bsz, max_sentences) - elif ( - bsz >= required_batch_size_multiple - and bsz % required_batch_size_multiple != 0 - ): - bsz -= bsz % required_batch_size_multiple - return bsz - - fixed_shapes = np.array( - [ - [adjust_bsz(bsz, num_tokens), num_tokens] - for (bsz, num_tokens) in fixed_shapes - ] - ) - - try: - num_tokens_vec = self.num_tokens_vec(indices).astype("int64") - except NotImplementedError: - num_tokens_vec = None - - return data_utils.batch_by_size( - indices, - num_tokens_fn=self.num_tokens, - num_tokens_vec=num_tokens_vec, - max_tokens=max_tokens, - max_sentences=max_sentences, - required_batch_size_multiple=required_batch_size_multiple, - fixed_shapes=fixed_shapes, - ) - - def filter_indices_by_size(self, indices, max_sizes): - """ - Filter a list of sample indices. Remove those that are longer than - specified in *max_sizes*. - - WARNING: don't update, override method in child classes - - Args: - indices (np.array): original array of sample indices - max_sizes (int or list[int] or tuple[int]): max sample size, - can be defined separately for src and tgt (then list or tuple) - - Returns: - np.array: filtered sample array - list: list of removed indices - """ - if isinstance(max_sizes, float) or isinstance(max_sizes, int): - if hasattr(self, "sizes") and isinstance(self.sizes, np.ndarray): - ignored = indices[self.sizes[indices] > max_sizes].tolist() - indices = indices[self.sizes[indices] <= max_sizes] - elif ( - hasattr(self, "sizes") - and isinstance(self.sizes, list) - and len(self.sizes) == 1 - ): - ignored = indices[self.sizes[0][indices] > max_sizes].tolist() - indices = indices[self.sizes[0][indices] <= max_sizes] - else: - indices, ignored = data_utils._filter_by_size_dynamic( - indices, self.size, max_sizes - ) - else: - indices, ignored = data_utils._filter_by_size_dynamic( - indices, self.size, max_sizes - ) - return indices, ignored - - @property - def supports_fetch_outside_dataloader(self): - """Whether this dataset supports fetching outside the workers of the dataloader.""" - return True - - -class FairseqIterableDataset(torch.utils.data.IterableDataset, EpochListening): - """ - For datasets that need to be read sequentially, usually because the data is - being streamed or otherwise can't be manipulated on a single machine. - """ - - def __iter__(self): - raise NotImplementedError diff --git a/kosmos-g/fairseq/fairseq/data/fasta_dataset.py b/kosmos-g/fairseq/fairseq/data/fasta_dataset.py deleted file mode 100644 index 007011974..000000000 --- a/kosmos-g/fairseq/fairseq/data/fasta_dataset.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
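In the common case where the dataset exposes a `sizes` numpy array, the length filtering in `filter_indices_by_size` above reduces to two vectorized masks. A pure-numpy illustration:

```python
import numpy as np

sizes = np.array([5, 12, 7, 30, 9])            # tokens per sample
indices = np.argsort(sizes, kind="mergesort")  # length-sorted, as in ordered_indices
max_size = 10

ignored = indices[sizes[indices] > max_size].tolist()
kept = indices[sizes[indices] <= max_size]
print(kept.tolist(), ignored)  # [0, 2, 4] [1, 3]
```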
- -import os -import subprocess -import threading -from pathlib import Path - -import numpy as np -import torch - - -def fasta_file_path(prefix_path): - return prefix_path + ".fasta" - - -class FastaDataset(torch.utils.data.Dataset): - """ - For loading protein sequence datasets in the common FASTA data format - """ - - def __init__(self, path: str, cache_indices=False): - self.fn = fasta_file_path(path) - self.threadlocal = threading.local() - self.cache = Path(f"{path}.fasta.idx.npy") - if cache_indices: - if self.cache.exists(): - self.offsets, self.sizes = np.load(self.cache) - else: - self.offsets, self.sizes = self._build_index(path) - np.save(self.cache, np.stack([self.offsets, self.sizes])) - else: - self.offsets, self.sizes = self._build_index(path) - - def _get_file(self): - if not hasattr(self.threadlocal, "f"): - self.threadlocal.f = open(self.fn, "r") - return self.threadlocal.f - - def __getitem__(self, idx): - f = self._get_file() - f.seek(self.offsets[idx]) - desc = f.readline().strip() - line = f.readline() - seq = "" - while line != "" and line[0] != ">": - seq += line.strip() - line = f.readline() - return desc, seq - - def __len__(self): - return self.offsets.size - - def _build_index(self, path: str): - # Use grep and awk to get 100M/s on local SSD. - # Should process your enormous 100G fasta in ~10 min single core... - path = fasta_file_path(path) - bytes_offsets = subprocess.check_output( - f"cat {path} | tqdm --bytes --total $(wc -c < {path})" - "| grep --byte-offset '^>' -o | cut -d: -f1", - shell=True, - ) - fasta_lengths = subprocess.check_output( - f"cat {path} | tqdm --bytes --total $(wc -c < {path})" - "| awk '/^>/ {print \"\";next;} { printf(\"%s\",$0);}' | tail -n+2 | awk '{print length($1)}'", - shell=True, - ) - bytes_np = np.fromstring(bytes_offsets, dtype=np.int64, sep=" ") - sizes_np = np.fromstring(fasta_lengths, dtype=np.int64, sep=" ") - return bytes_np, sizes_np - - def __setstate__(self, state): - self.__dict__ = state - self.threadlocal = threading.local() - - def __getstate__(self): - d = {} - for i, v in self.__dict__.items(): - if i != "threadlocal": - d[i] = v - return d - - def __del__(self): - if hasattr(self.threadlocal, "f"): - self.threadlocal.f.close() - del self.threadlocal.f - - @staticmethod - def exists(path): - return os.path.exists(fasta_file_path(path)) - - -class EncodedFastaDataset(FastaDataset): - """ - The FastaDataset returns raw sequences - this allows us to return - indices with a dictionary instead. - """ - - def __init__(self, path, dictionary): - super().__init__(path, cache_indices=True) - self.dictionary = dictionary - - def __getitem__(self, idx): - desc, seq = super().__getitem__(idx) - return self.dictionary.encode_line(seq, line_tokenizer=list).long() diff --git a/kosmos-g/fairseq/fairseq/data/huffman/__init__.py b/kosmos-g/fairseq/fairseq/data/huffman/__init__.py deleted file mode 100644 index 9b61fafad..000000000 --- a/kosmos-g/fairseq/fairseq/data/huffman/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
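`_build_index` above shells out to grep/awk for speed; the same index (the byte offset of each `>` header plus the length of the sequence that follows it) can be built in a few lines of dependency-free Python, shown here as a slow reference sketch:

```python
import os
import tempfile

def build_fasta_index(path):
    offsets, sizes = [], []
    with open(path, "rb") as f:
        pos = 0
        for line in f:
            if line.startswith(b">"):
                offsets.append(pos)             # byte offset of the record header
                sizes.append(0)
            elif sizes:
                sizes[-1] += len(line.strip())  # residues on this line
            pos += len(line)
    return offsets, sizes

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b">seq1\nACGT\nAC\n>seq2\nGGGG\n")
offsets, sizes = build_fasta_index(tmp.name)
print(offsets, sizes)  # [0, 14] [6, 4]
os.unlink(tmp.name)
```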
- -from .huffman_coder import HuffmanCodeBuilder, HuffmanCoder -from .huffman_mmap_indexed_dataset import ( - HuffmanMMapIndex, - HuffmanMMapIndexedDataset, - HuffmanMMapIndexedDatasetBuilder, - vocab_file_path, -) - -__all__ = [ - "HuffmanCoder", - "HuffmanCodeBuilder", - "HuffmanMMapIndexedDatasetBuilder", - "HuffmanMMapIndexedDataset", - "HuffmanMMapIndex", - "vocab_file_path", -] diff --git a/kosmos-g/fairseq/fairseq/data/huffman/huffman_coder.py b/kosmos-g/fairseq/fairseq/data/huffman/huffman_coder.py deleted file mode 100644 index c04f84564..000000000 --- a/kosmos-g/fairseq/fairseq/data/huffman/huffman_coder.py +++ /dev/null @@ -1,267 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import re -import typing as tp -from collections import Counter, deque -from dataclasses import dataclass - -from bitarray import bitarray, util -from fairseq.data import Dictionary - -# basically we have to write to addressable bytes for the memory mapped -# dataset loader. Sentences that get encoded to a length that is not a -# multiple of BLOCKSIZE (a byte) will be padded to fit. (see _pad in the coder) -BLOCKSIZE = 8 - - -class HuffmanCoder: - def __init__( - self, root: "HuffmanNode", bos="<s>", pad="<pad>", eos="</s>", unk="<unk>" - ): - self.root = root - self.table = root.code_table() - self.bos_word, self.unk_word, self.pad_word, self.eos_word = bos, unk, pad, eos - - def _pad(self, a: bitarray) -> bitarray: - """ - bitpadding, 1 then 0. - - If the array is already a multiple of blocksize, we add a full block. - """ - pad_len = BLOCKSIZE - (len(a) % BLOCKSIZE) - 1 - padding = bitarray("1" + "0" * pad_len) - return a + padding - - def _unpad(self, a: bitarray) -> bitarray: - """ - remove the bitpadding. - - There will be a set of 0s preceded by a 1 at the end of the bitarray, we remove that - """ - # count the 0 padding at the end until we find the first 1 - # we want to remove the one too - remove_cnt = util.rindex(a, 1) - return a[:remove_cnt] - - def encode(self, iter: tp.List[str]) -> bytes: - """ - encode a list of tokens a return bytes. We use bitpadding to make sure the encoded bits fit in bytes. - """ - a = bitarray() - for token in iter: - code = self.get_code(token) - if code is None: - if self.unk_word is None: - raise Exception(f"unknown token {token} cannot be encoded.") - else: - token = self.unk_word - a = a + self.get_code(token) - return self._pad(a).tobytes() - - def decode(self, bits: bytes) -> tp.Iterator["HuffmanNode"]: - """ - take bitpadded bytes and decode it to a set of leaves. 
You can then use each node to find the symbol/id.
-        """
-        a = bitarray()
-        a.frombytes(bits)
-        return self.root.decode(self._unpad(a))
-
-    def get_code(self, symbol: str) -> tp.Optional[bitarray]:
-        node = self.get_node(symbol)
-        return None if node is None else node.code
-
-    def get_node(self, symbol: str) -> "HuffmanNode":
-        return self.table.get(symbol)
-
-    @classmethod
-    def from_file(
-        cls,
-        filename: str,
-        bos="<s>",
-        pad="<pad>",
-        eos="</s>",
-        unk="<unk>",
-    ) -> "HuffmanCoder":
-        builder = HuffmanCodeBuilder.from_file(filename)
-        return builder.build_code(bos=bos, pad=pad, eos=eos, unk=unk)
-
-    def to_file(self, filename, sep="\t"):
-        nodes = list(self.table.values())
-        nodes.sort(key=lambda n: n.id)
-        with open(filename, "w", encoding="utf-8") as output:
-            for n in nodes:
-                output.write(f"{n.symbol}{sep}{n.count}\n")
-
-    def __iter__(self):
-        for n in self.table.values():
-            yield n
-
-    def merge(self, other_coder: "HuffmanCoder") -> "HuffmanCoder":
-        builder = HuffmanCodeBuilder()
-        for n in self:
-            builder.increment(n.symbol, n.count)
-        for n in other_coder:
-            builder.increment(n.symbol, n.count)
-        return builder.build_code()
-
-    def __eq__(self, other: "HuffmanCoder") -> bool:
-        return self.table == other.table
-
-    def __len__(self) -> int:
-        return len(self.table)
-
-    def __contains__(self, sym: str) -> bool:
-        return sym in self.table
-
-    def to_dictionary(self) -> Dictionary:
-        # the special symbols are stored as *_word attributes in __init__
-        dictionary = Dictionary(
-            bos=self.bos_word, unk=self.unk_word, pad=self.pad_word, eos=self.eos_word
-        )
-        for n in self:
-            dictionary.add_symbol(n.symbol, n=n.count)
-        dictionary.finalize()
-        return dictionary
-
-
-@dataclass
-class HuffmanNode:
-    """
-    a node in a Huffman tree
-    """
-
-    id: int
-    count: int
-    symbol: tp.Optional[str] = None
-    left: tp.Optional["HuffmanNode"] = None
-    right: tp.Optional["HuffmanNode"] = None
-    code: tp.Optional[bitarray] = None
-
-    def is_leaf(self) -> bool:
-        return self.left is None and self.right is None
-
-    def code_table(
-        self, prefix: tp.Optional[bitarray] = None
-    ) -> tp.Dict[str, "HuffmanNode"]:
-        defaulted_prefix = prefix if prefix is not None else bitarray()
-        if self.is_leaf():
-            self.code = (
-                defaulted_prefix if len(defaulted_prefix) > 0 else bitarray("0")
-            )  # leaf could be the root if there is only one symbol
-            return {self.symbol: self}
-
-        codes_right = self.right.code_table(defaulted_prefix + bitarray([0]))
-        codes_left = self.left.code_table(defaulted_prefix + bitarray([1]))
-        return {**codes_left, **codes_right}
-
-    def decode(self, bits: bitarray) -> tp.Iterator["HuffmanNode"]:
-        current_node = self
-        for bit in bits:
-            if bit == 0:  # go right
-                current_node = current_node.right
-            else:  # go left
-                current_node = current_node.left
-            if current_node is None:
-                # we shouldn't be on a leaf here
-                raise Exception("fell off a leaf")
-            if current_node.is_leaf():
-                yield current_node
-                current_node = self
-        if current_node != self:
-            raise Exception("couldn't decode all the bits")
-
-
-class HuffmanCodeBuilder:
-    """
-    build a dictionary with occurrence counts and then build the Huffman code for it.
- """ - - def __init__(self): - self.symbols = Counter() - - def add_symbols(self, *syms) -> None: - self.symbols.update(syms) - - def increment(self, symbol: str, cnt: int) -> None: - self.symbols[symbol] += cnt - - @classmethod - def from_file(cls, filename): - c = cls() - with open(filename, "r", encoding="utf-8") as input: - for line in input: - split = re.split(r"[\s]+", line) - c.increment(split[0], int(split[1])) - return c - - def to_file(self, filename, sep="\t"): - with open(filename, "w", encoding="utf-8") as output: - for (tok, cnt) in self.symbols.most_common(): - output.write(f"{tok}{sep}{cnt}\n") - - def _smallest(self, q1: deque, q2: deque) -> HuffmanNode: - if len(q1) == 0: - return q2.pop() - - if len(q2) == 0: - return q1.pop() - - if q1[-1].count < q2[-1].count: - return q1.pop() - - return q2.pop() - - def __add__(self, c: "HuffmanCodeBuilder") -> "HuffmanCodeBuilder": - new_c = self.symbols + c.symbols - new_b = HuffmanCodeBuilder() - new_b.symbols = new_c - return new_b - - def build_code( - self, - bos="<s>", - pad="<pad>", - eos="</s>", - unk="<unk>", - ) -> HuffmanCoder: - assert len(self.symbols) > 0, "cannot build code from empty list of symbols" - - if self.symbols[bos] == 0: - self.add_symbols(bos) - if self.symbols[pad] == 0: - self.add_symbols(pad) - if self.symbols[eos] == 0: - self.add_symbols(eos) - if self.symbols[unk] == 0: - self.add_symbols(unk) - - node_id = 0 - leaves_queue = deque( - [ - HuffmanNode(symbol=symbol, count=count, id=idx) - for idx, (symbol, count) in enumerate(self.symbols.most_common()) - ] - ) # left are the most common, right are the least common - - if len(leaves_queue) == 1: - root = leaves_queue.pop() - root.id = 0 - return HuffmanCoder(root) - - nodes_queue = deque() - - while len(leaves_queue) > 0 or len(nodes_queue) != 1: - # get the lowest two nodes at the head of each queue - node1 = self._smallest(leaves_queue, nodes_queue) - node2 = self._smallest(leaves_queue, nodes_queue) - - # add new node - nodes_queue.appendleft( - HuffmanNode( - count=node1.count + node2.count, left=node1, right=node2, id=node_id - ) - ) - node_id += 1 - - # we are left with the root - return HuffmanCoder(nodes_queue.pop(), bos=bos, pad=pad, eos=eos, unk=unk) diff --git a/kosmos-g/fairseq/fairseq/data/huffman/huffman_mmap_indexed_dataset.py b/kosmos-g/fairseq/fairseq/data/huffman/huffman_mmap_indexed_dataset.py deleted file mode 100644 index 3279dae89..000000000 --- a/kosmos-g/fairseq/fairseq/data/huffman/huffman_mmap_indexed_dataset.py +++ /dev/null @@ -1,287 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import mmap -import os -import shutil -import struct -import typing as tp -from functools import lru_cache - -import numpy as np -import torch -from fairseq.data import indexed_dataset -from fairseq.data.huffman import HuffmanCoder -from fairseq.file_io import PathManager - - -class HuffmanMMapIndex: - """ - keep an index of the offsets in the huffman binary file. - First a header, then the list of sizes (num tokens) for each instance and finally - the addresses of each instance. 
-    """
-
-    _HDR_MAGIC = b"HUFFIDX\x00\x00"
-    _VERSION = 1
-
-    @classmethod
-    def writer(cls, path: str, data_len: int):
-        class _Writer:
-            def __enter__(self):
-                self._file = open(path, "wb")
-
-                # write header (magic + version)
-                self._file.write(cls._HDR_MAGIC)
-                self._file.write(struct.pack("<Q", cls._VERSION))
-                self._file.write(struct.pack("<Q", data_len))
-
-                return self
-
-            def write(self, sizes, pointers):
-                # add number of items in the index to the header
-                self._file.write(struct.pack("<Q", len(sizes)))
-
-                # write sizes
-                sizes = np.array(sizes, dtype=np.int32)
-                self._file.write(sizes.tobytes(order="C"))
-                del sizes
-
-                # write address pointers
-                pointers = np.array(pointers, dtype=np.int64)
-                self._file.write(pointers.tobytes(order="C"))
-                del pointers
-
-            def __exit__(self, exc_type, exc_val, exc_tb):
-                self._file.close()
-
-        return _Writer()
-
-    def __init__(self, path):
-        with open(path, "rb") as stream:
-            # read headers
-            magic_test = stream.read(9)
-            assert self._HDR_MAGIC == magic_test, (
-                "Index file doesn't match expected format. "
-                "Make sure that --dataset-impl is configured properly."
-            )
-            (version,) = struct.unpack("<Q", stream.read(8))
-            assert (
-                self._VERSION == version
-            ), f"Unexpected file version {version} != code version {self._VERSION}"
-
-            # read length of data file
-            (self._data_len,) = struct.unpack("<Q", stream.read(8))
-            # read number of items in data file/index
-            (self._len,) = struct.unpack("<Q", stream.read(8))
-            offset = stream.tell()
-
-        indexed_dataset._warmup_mmap_file(path)
-
-        self._bin_buffer_mmap = np.memmap(path, mode="r", order="C")
-        self._bin_buffer = memoryview(self._bin_buffer_mmap)
-        self._sizes = np.frombuffer(
-            self._bin_buffer, dtype=np.int32, count=self._len, offset=offset
-        )
-        self._pointers = np.frombuffer(
-            self._bin_buffer,
-            dtype=np.int64,
-            count=self._len,
-            offset=offset + self._sizes.nbytes,
-        )
-
-    def __del__(self):
-        self._bin_buffer_mmap._mmap.close()
-        del self._bin_buffer_mmap
-
-    def __iter__(self):
-        for i in range(self._len):
-            yield self[i]
-
-    @property
-    def data_len(self):
-        return self._data_len
-
-    @property
-    def sizes(self):
-        return self._sizes
-
-    @lru_cache(maxsize=8)
-    def __getitem__(self, i):
-        return self._pointers[i], self._sizes[i]
-
-    def __len__(self):
-        return self._len
-
-
-def vocab_file_path(prefix_path):
-    return prefix_path + ".vocab"
-
-
-class HuffmanMMapIndexedDataset(torch.utils.data.Dataset):
-    """
-    an indexed dataset that uses mmap and memoryview to access data from disk
-    that was compressed with a HuffmanCoder.
- """ - - def __init__(self, prefix_path): - super().__init__() - - self._prefix_path = None - self._index = None - self._bin_buffer = None - self._coder = None - self._file = None - - self._bin_buffer_mmap = None - - self._do_init(prefix_path) - - def __getstate__(self): - return self._prefix_path - - def __setstate__(self, state): - self._do_init(state) - - def _do_init(self, prefix_path): - self._prefix_path = prefix_path - self._index = HuffmanMMapIndex( - indexed_dataset.index_file_path(self._prefix_path) - ) - self._coder = HuffmanCoder.from_file(vocab_file_path(self._prefix_path)) - - indexed_dataset._warmup_mmap_file( - indexed_dataset.data_file_path(self._prefix_path) - ) - self._file = os.open( - indexed_dataset.data_file_path(self._prefix_path), os.O_RDONLY - ) - self._bin_buffer_mmap = mmap.mmap( - self._file, - self._index.data_len, - access=mmap.ACCESS_READ, - ) - self._bin_buffer = memoryview(self._bin_buffer_mmap) - - def __del__(self): - del self._bin_buffer - if self._file: - os.close(self._file) - del self._index - - def __len__(self): - return len(self._index) - - def _decode(self, i): - ptr, _ = self._index[i] - if i == 0: - raw_bytes = self._bin_buffer[:ptr] - else: - (prev_ptr, _) = self._index[i - 1] - raw_bytes = self._bin_buffer[prev_ptr:ptr] - - return self._coder.decode(raw_bytes.tobytes()) - - @lru_cache(maxsize=8) - def __getitem__(self, i): - nodes = self._decode(i) - return torch.tensor([n.id for n in nodes], dtype=torch.int64) - - def __iter__(self): - for idx in range(len(self)): - yield self[idx] - - def get_symbols(self, i): - nodes = self._decode(i) - for n in nodes: - yield n.symbol - - @property - def sizes(self): - return self._index.sizes - - @property - def supports_prefetch(self): - return False - - @property - def coder(self): - return self._coder - - @staticmethod - def exists(prefix_path): - return ( - PathManager.exists(indexed_dataset.index_file_path(prefix_path)) - and PathManager.exists(indexed_dataset.data_file_path(prefix_path)) - and PathManager.exists(vocab_file_path(prefix_path)) - ) - - -class HuffmanMMapIndexedDatasetBuilder: - """ - Helper to build a memory mapped datasets with a huffman encoder. - You can either open/close this manually or use it as a ContextManager. - Provide your own coder, it will then be stored alongside the dataset. - The builder will first write the vocab file, then open the binary file so you can stream - into it, finally the index will be written when the builder is closed (your index should fit in memory). - """ - - def __init__(self, path_prefix: str, coder: HuffmanCoder) -> None: - self._path_prefix = path_prefix - self._coder = coder - self._sizes = [] - self._ptrs = [] - self._data_len = 0 - - def open(self): - self._coder.to_file(vocab_file_path(self._path_prefix)) - self._data_file = open(indexed_dataset.data_file_path(self._path_prefix), "wb") - - def __enter__(self) -> "HuffmanMMapIndexedDatasetBuilder": - self.open() - return self - - def add_item(self, tokens: tp.List[str]) -> None: - """ - add a list of tokens to the dataset, they will compressed with the - provided coder before being written to file. - """ - encoded = self._coder.encode(tokens) - code_len = len(encoded) - last_ptr = 0 - if len(self._ptrs) > 0: - last_ptr = self._ptrs[-1] - self._sizes.append(len(tokens)) - self._ptrs.append(last_ptr + code_len) - self._data_len += code_len - self._data_file.write(encoded) - - def append(self, other_dataset_path_prefix: str) -> None: - """ - append an existing dataset. 
- Beware, if it wasn't built with the same coder, you are in trouble. - """ - other_index = HuffmanMMapIndex( - indexed_dataset.index_file_path(other_dataset_path_prefix) - ) - for (ptr, size) in other_index: - self._ptrs.append(ptr + self._data_len) - self._sizes.append(size) - - # Concatenate data - with open(indexed_dataset.data_file_path(other_dataset_path_prefix), "rb") as f: - shutil.copyfileobj(f, self._data_file) - - self._data_len += other_index.data_len - - def close(self): - self._data_file.close() - with HuffmanMMapIndex.writer( - indexed_dataset.index_file_path(self._path_prefix), self._data_len - ) as index: - index.write(self._sizes, self._ptrs) - - def __exit__(self, exc_type, exc_val, exc_tb) -> None: - self.close() diff --git a/kosmos-g/fairseq/fairseq/data/id_dataset.py b/kosmos-g/fairseq/fairseq/data/id_dataset.py deleted file mode 100644 index 3e4d7969c..000000000 --- a/kosmos-g/fairseq/fairseq/data/id_dataset.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from . import FairseqDataset - - -class IdDataset(FairseqDataset): - def __getitem__(self, index): - return index - - def __len__(self): - return 0 - - def collater(self, samples): - return torch.tensor(samples) diff --git a/kosmos-g/fairseq/fairseq/data/indexed_dataset.py b/kosmos-g/fairseq/fairseq/data/indexed_dataset.py deleted file mode 100644 index d0843926a..000000000 --- a/kosmos-g/fairseq/fairseq/data/indexed_dataset.py +++ /dev/null @@ -1,587 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import shutil -import struct -from functools import lru_cache - -import numpy as np -import torch -from fairseq.dataclass.constants import DATASET_IMPL_CHOICES -from fairseq.data.fasta_dataset import FastaDataset -from fairseq.file_io import PathManager -from fairseq.data.huffman import HuffmanMMapIndexedDataset, HuffmanMMapIndex - -from . 
import FairseqDataset - -from typing import Union - - -def best_fitting_int_dtype( - max_int_to_represent, -) -> Union[np.uint16, np.uint32, np.int64]: - - if max_int_to_represent is None: - return np.uint32 # Safe guess - elif max_int_to_represent < 65500: - return np.uint16 - elif max_int_to_represent < 4294967295: - return np.uint32 - else: - return np.int64 - # we avoid np.uint64 because it doesn't save space and its type promotion behaves unexpectedly - # https://github.com/numpy/numpy/issues/5745 - - -def get_available_dataset_impl(): - return list(map(str, DATASET_IMPL_CHOICES)) - - -def infer_dataset_impl(path): - if IndexedRawTextDataset.exists(path): - return "raw" - elif IndexedDataset.exists(path): - with open(index_file_path(path), "rb") as f: - magic = f.read(8) - if magic == IndexedDataset._HDR_MAGIC: - return "cached" - elif magic == MMapIndexedDataset.Index._HDR_MAGIC[:8]: - return "mmap" - elif magic == HuffmanMMapIndex._HDR_MAGIC[:8]: - return "huffman" - else: - return None - elif FastaDataset.exists(path): - return "fasta" - else: - return None - - -def make_builder(out_file, impl, vocab_size=None): - if impl == "mmap": - return MMapIndexedDatasetBuilder( - out_file, dtype=best_fitting_int_dtype(vocab_size) - ) - elif impl == "fasta": - raise NotImplementedError - elif impl == "huffman": - raise ValueError( - "Use HuffmanCodeBuilder directly as it has a different interface." - ) - else: - return IndexedDatasetBuilder(out_file) - - -def make_dataset(path, impl, fix_lua_indexing=False, dictionary=None): - if impl == "raw" and IndexedRawTextDataset.exists(path): - assert dictionary is not None - return IndexedRawTextDataset(path, dictionary) - elif impl == "lazy" and IndexedDataset.exists(path): - return IndexedDataset(path, fix_lua_indexing=fix_lua_indexing) - elif impl == "cached" and IndexedDataset.exists(path): - return IndexedCachedDataset(path, fix_lua_indexing=fix_lua_indexing) - elif impl == "mmap" and MMapIndexedDataset.exists(path): - return MMapIndexedDataset(path) - elif impl == "fasta" and FastaDataset.exists(path): - from fairseq.data.fasta_dataset import EncodedFastaDataset - - return EncodedFastaDataset(path, dictionary) - elif impl == "huffman" and HuffmanMMapIndexedDataset.exists(path): - return HuffmanMMapIndexedDataset(path) - return None - - -def dataset_exists(path, impl): - if impl == "raw": - return IndexedRawTextDataset.exists(path) - elif impl == "mmap": - return MMapIndexedDataset.exists(path) - elif impl == "huffman": - return HuffmanMMapIndexedDataset.exists(path) - else: - return IndexedDataset.exists(path) - - -def read_longs(f, n): - a = np.empty(n, dtype=np.int64) - f.readinto(a) - return a - - -def write_longs(f, a): - f.write(np.array(a, dtype=np.int64)) - - -_code_to_dtype = { - 1: np.uint8, - 2: np.int8, - 3: np.int16, - 4: np.int32, - 5: np.int64, - 6: np.float, - 7: np.double, - 8: np.uint16, - 9: np.uint32, - 10: np.uint64, -} - - -def _dtype_header_code(dtype) -> int: - for k in _code_to_dtype.keys(): - if _code_to_dtype[k] == dtype: - return k - raise ValueError(dtype) - - -def index_file_path(prefix_path): - return prefix_path + ".idx" - - -def data_file_path(prefix_path): - return prefix_path + ".bin" - - -class IndexedDataset(FairseqDataset): - """Loader for TorchNet IndexedDataset""" - - _HDR_MAGIC = b"TNTIDX\x00\x00" - - def __init__(self, path, fix_lua_indexing=False): - super().__init__() - self.path = path - self.fix_lua_indexing = fix_lua_indexing - self.data_file = None - self.read_index(path) - - def read_index(self, 
path): - with open(index_file_path(path), "rb") as f: - magic = f.read(8) - assert magic == self._HDR_MAGIC, ( - "Index file doesn't match expected format. " - "Make sure that --dataset-impl is configured properly." - ) - version = f.read(8) - assert struct.unpack("<Q", version) == (1,) - code, self.element_size = struct.unpack("<QQ", f.read(16)) - self.dtype = _code_to_dtype[code] - self._len, self.s = struct.unpack("<QQ", f.read(16)) - self.dim_offsets = read_longs(f, self._len + 1) - self.data_offsets = read_longs(f, self._len + 1) - self.sizes = read_longs(f, self.s) - - def read_data(self, path): - self.data_file = open(data_file_path(path), "rb", buffering=0) - - def check_index(self, i): - if i < 0 or i >= self._len: - raise IndexError("index out of range") - - def __del__(self): - if self.data_file: - self.data_file.close() - - @lru_cache(maxsize=8) - def __getitem__(self, i) -> torch.Tensor: - if not self.data_file: - self.read_data(self.path) - self.check_index(i) - tensor_size = self.sizes[self.dim_offsets[i] : self.dim_offsets[i + 1]] - a = np.empty(tensor_size, dtype=self.dtype) - self.data_file.seek(self.data_offsets[i] * self.element_size) - self.data_file.readinto(a) - item = torch.from_numpy(a).long() - if self.fix_lua_indexing: - item -= 1 # subtract 1 for 0-based indexing - return item - - def __len__(self): - return self._len - - def num_tokens(self, index): - return self.sizes[index] - - def size(self, index): - return self.sizes[index] - - @staticmethod - def exists(path): - return PathManager.exists(index_file_path(path)) and PathManager.exists( - data_file_path(path) - ) - - @property - def supports_prefetch(self): - return False # avoid prefetching to save memory - - -class IndexedCachedDataset(IndexedDataset): - def __init__(self, path, fix_lua_indexing=False): - super().__init__(path, fix_lua_indexing=fix_lua_indexing) - self.cache = None - self.cache_index = {} - - @property - def supports_prefetch(self): - return True - - def prefetch(self, indices): - if all(i in self.cache_index for i in indices): - return - if not self.data_file: - self.read_data(self.path) - indices = sorted(set(indices)) - total_size = 0 - for i in indices: - total_size += self.data_offsets[i + 1] - self.data_offsets[i] - self.cache = np.empty(total_size, dtype=self.dtype) - ptx = 0 - self.cache_index.clear() - for i in indices: - self.cache_index[i] = ptx - size = self.data_offsets[i + 1] - self.data_offsets[i] - a = self.cache[ptx : ptx + size] - self.data_file.seek(self.data_offsets[i] * self.element_size) - self.data_file.readinto(a) - ptx += size - if self.data_file: - # close and delete data file after prefetch so we can pickle - self.data_file.close() - self.data_file = None - - @lru_cache(maxsize=8) - def __getitem__(self, i): - self.check_index(i) - tensor_size = self.sizes[self.dim_offsets[i] : self.dim_offsets[i + 1]] - a = np.empty(tensor_size, dtype=self.dtype) - ptx = self.cache_index[i] - np.copyto(a, self.cache[ptx : ptx + a.size]) - item = torch.from_numpy(a).long() - if self.fix_lua_indexing: - item -= 1 # subtract 1 for 0-based indexing - return item - - -class IndexedRawTextDataset(FairseqDataset): - """Takes a text file as input and binarizes it in memory at instantiation. 
- Original lines are also kept in memory""" - - def __init__(self, path, dictionary, append_eos=True, reverse_order=False): - self.tokens_list = [] - self.lines = [] - self.sizes = [] - self.append_eos = append_eos - self.reverse_order = reverse_order - self.read_data(path, dictionary) - self.size = len(self.tokens_list) - - def read_data(self, path, dictionary): - with open(path, "r", encoding="utf-8") as f: - for line in f: - self.lines.append(line.strip("\n")) - tokens = dictionary.encode_line( - line, - add_if_not_exist=False, - append_eos=self.append_eos, - reverse_order=self.reverse_order, - ).long() - self.tokens_list.append(tokens) - self.sizes.append(len(tokens)) - self.sizes = np.array(self.sizes) - - def check_index(self, i): - if i < 0 or i >= self.size: - raise IndexError("index out of range") - - @lru_cache(maxsize=8) - def __getitem__(self, i): - self.check_index(i) - return self.tokens_list[i] - - def get_original_text(self, i): - self.check_index(i) - return self.lines[i] - - def __del__(self): - pass - - def __len__(self): - return self.size - - def num_tokens(self, index): - return self.sizes[index] - - def size(self, index): - return self.sizes[index] - - @staticmethod - def exists(path): - return PathManager.exists(path) - - -class IndexedDatasetBuilder: - element_sizes = { - np.uint8: 1, - np.int8: 1, - np.int16: 2, - np.int32: 4, - np.int64: 8, - np.float: 4, - np.double: 8, - } - - def __init__(self, out_file, dtype=np.int32): - self.out_file = open(out_file, "wb") - self.dtype = dtype - self.data_offsets = [0] - self.dim_offsets = [0] - self.sizes = [] - self.element_size = self.element_sizes[self.dtype] - - def add_item(self, tensor): - # +1 for Lua compatibility - bytes = self.out_file.write(np.array(tensor.numpy() + 1, dtype=self.dtype)) - self.data_offsets.append(self.data_offsets[-1] + bytes / self.element_size) - for s in tensor.size(): - self.sizes.append(s) - self.dim_offsets.append(self.dim_offsets[-1] + len(tensor.size())) - - def merge_file_(self, another_file): - index = IndexedDataset(another_file) - assert index.dtype == self.dtype - - begin = self.data_offsets[-1] - for offset in index.data_offsets[1:]: - self.data_offsets.append(begin + offset) - self.sizes.extend(index.sizes) - begin = self.dim_offsets[-1] - for dim_offset in index.dim_offsets[1:]: - self.dim_offsets.append(begin + dim_offset) - - with open(data_file_path(another_file), "rb") as f: - while True: - data = f.read(1024) - if data: - self.out_file.write(data) - else: - break - - def finalize(self, index_file): - self.out_file.close() - index = open(index_file, "wb") - index.write(b"TNTIDX\x00\x00") - index.write(struct.pack("<Q", 1)) - index.write( - struct.pack("<QQ", _dtype_header_code(self.dtype), self.element_size) - ) - index.write(struct.pack("<QQ", len(self.data_offsets) - 1, len(self.sizes))) - write_longs(index, self.dim_offsets) - write_longs(index, self.data_offsets) - write_longs(index, self.sizes) - index.close() - - -def _warmup_mmap_file(path): - with open(path, "rb") as stream: - while stream.read(100 * 1024 * 1024): - pass - - -class MMapIndexedDataset(torch.utils.data.Dataset): - class Index: - _HDR_MAGIC = b"MMIDIDX\x00\x00" - - @classmethod - def writer(cls, path, dtype): - class _Writer: - def __enter__(self): - self._file = open(path, "wb") - - self._file.write(cls._HDR_MAGIC) - self._file.write(struct.pack("<Q", 1)) - self._file.write(struct.pack("<B", _dtype_header_code(dtype))) - - return self - - @staticmethod - def _get_pointers(sizes): - dtype_size = 
dtype().itemsize - address = 0 - pointers = [] - - for size in sizes: - pointers.append(address) - address += size * dtype_size - - return pointers - - def write(self, sizes): - pointers = self._get_pointers(sizes) - - self._file.write(struct.pack("<Q", len(sizes))) - - sizes = np.array(sizes, dtype=np.int32) - self._file.write(sizes.tobytes(order="C")) - del sizes - - pointers = np.array(pointers, dtype=np.int64) - self._file.write(pointers.tobytes(order="C")) - del pointers - - def __exit__(self, exc_type, exc_val, exc_tb): - self._file.close() - - return _Writer() - - def __init__(self, path): - with open(path, "rb") as stream: - magic_test = stream.read(9) - assert self._HDR_MAGIC == magic_test, ( - "Index file doesn't match expected format. " - "Make sure that --dataset-impl is configured properly." - ) - version = struct.unpack("<Q", stream.read(8)) - assert (1,) == version - - (dtype_code,) = struct.unpack("<B", stream.read(1)) - self._dtype = _code_to_dtype[dtype_code] - self._dtype_size = self._dtype().itemsize - - self._len = struct.unpack("<Q", stream.read(8))[0] - offset = stream.tell() - - _warmup_mmap_file(path) - - self._bin_buffer_mmap = np.memmap(path, mode="r", order="C") - self._bin_buffer = memoryview(self._bin_buffer_mmap) - self._sizes = np.frombuffer( - self._bin_buffer, dtype=np.int32, count=self._len, offset=offset - ) - self._pointers = np.frombuffer( - self._bin_buffer, - dtype=np.int64, - count=self._len, - offset=offset + self._sizes.nbytes, - ) - - def __del__(self): - self._bin_buffer_mmap._mmap.close() - del self._bin_buffer_mmap - - @property - def dtype(self): - return self._dtype - - @property - def sizes(self): - return self._sizes - - @lru_cache(maxsize=8) - def __getitem__(self, i): - return self._pointers[i], self._sizes[i] - - def __len__(self): - return self._len - - def __init__(self, path): - super().__init__() - - self._path = None - self._index = None - self._bin_buffer = None - - self._do_init(path) - - def __getstate__(self): - return self._path - - def __setstate__(self, state): - self._do_init(state) - - def _do_init(self, path): - self._path = path - self._index = self.Index(index_file_path(self._path)) - - _warmup_mmap_file(data_file_path(self._path)) - self._bin_buffer_mmap = np.memmap( - data_file_path(self._path), mode="r", order="C" - ) - self._bin_buffer = memoryview(self._bin_buffer_mmap) - - def __del__(self): - self._bin_buffer_mmap._mmap.close() - del self._bin_buffer_mmap - del self._index - - def __len__(self): - return len(self._index) - - @lru_cache(maxsize=8) - def __getitem__(self, i): - ptr, size = self._index[i] - np_array = np.frombuffer( - self._bin_buffer, dtype=self._index.dtype, count=size, offset=ptr - ) - if self._index.dtype != np.int64: - np_array = np_array.astype(np.int64) - - return torch.from_numpy(np_array) - - @property - def sizes(self): - return self._index.sizes - - @property - def supports_prefetch(self): - return False - - @staticmethod - def exists(path): - return PathManager.exists(index_file_path(path)) and PathManager.exists( - data_file_path(path) - ) - - -def get_indexed_dataset_to_local(path) -> str: - local_index_path = PathManager.get_local_path(index_file_path(path)) - local_data_path = PathManager.get_local_path(data_file_path(path)) - - assert local_index_path.endswith(".idx") and local_data_path.endswith(".bin"), ( - "PathManager.get_local_path does not return files with expected patterns: " - f"{local_index_path} and {local_data_path}" - ) - - local_path = local_data_path[:-4] # stripping 
suffix ".bin"
-    assert local_path == local_index_path[:-4]  # stripping suffix ".idx"
-    return local_path
-
-
-class MMapIndexedDatasetBuilder:
-    def __init__(self, out_file, dtype=np.int64):
-        self._data_file = open(out_file, "wb")
-        self._dtype = dtype
-        self._sizes = []
-
-    def add_item(self, tensor):
-        np_array = np.array(tensor.numpy(), dtype=self._dtype)
-        self._data_file.write(np_array.tobytes(order="C"))
-        self._sizes.append(np_array.size)
-
-    def merge_file_(self, another_file):
-        # Concatenate index
-        index = MMapIndexedDataset.Index(index_file_path(another_file))
-        assert index.dtype == self._dtype
-
-        for size in index.sizes:
-            self._sizes.append(size)
-
-        # Concatenate data
-        with open(data_file_path(another_file), "rb") as f:
-            shutil.copyfileobj(f, self._data_file)
-
-    def finalize(self, index_file):
-        self._data_file.close()
-
-        with MMapIndexedDataset.Index.writer(index_file, self._dtype) as index:
-            index.write(self._sizes)
diff --git a/kosmos-g/fairseq/fairseq/data/iterators.py b/kosmos-g/fairseq/fairseq/data/iterators.py
deleted file mode 100644
index 1d8e179f8..000000000
--- a/kosmos-g/fairseq/fairseq/data/iterators.py
+++ /dev/null
@@ -1,821 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import itertools
-import logging
-import math
-import operator
-import os
-import queue
-import time
-from threading import Thread
-
-import numpy as np
-import torch
-from fairseq.data import data_utils
-
-
-logger = logging.getLogger(__name__)
-
-# Object used by _background_consumer to signal the source is exhausted
-# to the main thread.
-_sentinel = object()
-
-
-class CountingIterator(object):
-    """Wrapper around an iterable that maintains the iteration count.
-
-    Args:
-        iterable (iterable): iterable to wrap
-        start (int): starting iteration count. Note that this doesn't
-            actually advance the iterator.
-        total (int): override the iterator length returned by ``__len__``.
-            This can be used to truncate *iterator*.
-
-    Attributes:
-        n (int): number of elements consumed from this iterator
-    """
-
-    def __init__(self, iterable, start=None, total=None):
-        self._itr = iter(iterable)
-        self.n = start or getattr(iterable, "n", 0)
-        self.total = total if total is not None else self.n + len(iterable)
-
-    def __len__(self):
-        return self.total
-
-    def __iter__(self):
-        return self
-
-    def __next__(self):
-        if not self.has_next():
-            raise StopIteration
-        try:
-            x = next(self._itr)
-        except StopIteration:
-            raise IndexError(
-                f"Iterator expected to have length {self.total}, "
-                f"but exhausted at position {self.n}."
-            )
-        self.n += 1
-        return x
-
-    def has_next(self):
-        """Whether the iterator has more elements left to yield."""
-        return self.n < self.total
-
-    def skip(self, n):
-        """Fast-forward the iterator by skipping n elements."""
-        for _ in range(n):
-            next(self)
-        return self
-
-    def take(self, n):
-        """Truncate the iterator to n elements at most."""
-        self.total = min(self.total, n)
-        # Propagate this change to the underlying iterator
-        if hasattr(self._itr, "take"):
-            self._itr.take(max(n - self.n, 0))
-        return self
-
-
-class EpochBatchIterating(object):
-    def __len__(self) -> int:
-        raise NotImplementedError
-
-    @property
-    def next_epoch_idx(self):
-        raise NotImplementedError
-
-    def next_epoch_itr(
-        self, shuffle=True, fix_batches_to_gpus=False, set_dataset_epoch=True
-    ):
-        """Return a new iterator over the dataset.
-
-        Args:
-            shuffle (bool, optional): shuffle batches before returning the
-                iterator (default: True).
-            fix_batches_to_gpus (bool, optional): ensure that batches are always
-                allocated to the same shards across epochs. Requires
-                that :attr:`dataset` supports prefetching (default: False).
-            set_dataset_epoch (bool, optional): update the wrapped Dataset with
-                the new epoch number (default: True).
-        """
-        raise NotImplementedError
-
-    def end_of_epoch(self) -> bool:
-        """Returns whether the most recent epoch iterator has been exhausted"""
-        raise NotImplementedError
-
-    @property
-    def iterations_in_epoch(self) -> int:
-        """The number of consumed batches in the current epoch."""
-        raise NotImplementedError
-
-    def state_dict(self):
-        """Returns a dictionary containing a whole state of the iterator."""
-        raise NotImplementedError
-
-    def load_state_dict(self, state_dict):
-        """Copies the state of the iterator from the given *state_dict*."""
-        raise NotImplementedError
-
-    @property
-    def first_batch(self):
-        return "DUMMY"
-
-
-class StreamingEpochBatchIterator(EpochBatchIterating):
-    """A streaming-style iterator over a :class:`torch.utils.data.IterableDataset`.
-
-    Args:
-        dataset (~torch.utils.data.Dataset): dataset from which to load the data
-        max_sentences: batch size
-        collate_fn (callable): merges a list of samples to form a mini-batch
-        num_workers (int, optional): how many subprocesses to use for data
-            loading. 0 means the data will be loaded in the main process
-            (default: 0).
-        epoch (int, optional): the epoch to start the iterator from
-            (default: 1).
-        buffer_size (int, optional): the number of batches to keep ready in the
-            queue. Helps speed up data loading. When buffer_size is zero, the
-            default torch.utils.data.DataLoader preloading is used.
-        timeout (int, optional): if positive, the timeout value for collecting a batch
-            from workers. Should always be non-negative (default: ``0``).
-    """
-
-    def __init__(
-        self,
-        dataset,
-        max_sentences=1,
-        collate_fn=None,
-        epoch=1,
-        num_workers=0,
-        buffer_size=0,
-        timeout=0,
-    ):
-        assert isinstance(dataset, torch.utils.data.IterableDataset)
-        self.dataset = dataset
-        self.max_sentences = max_sentences
-        self.collate_fn = collate_fn
-        self.epoch = max(epoch, 1)  # we use 1-based indexing for epochs
-        self.num_workers = num_workers
-        # This upper limit here is to prevent people from abusing this feature
-        # in a shared computing environment.
- self.buffer_size = min(buffer_size, 20) - self.timeout = timeout - - self._current_epoch_iterator = None - - @property - def next_epoch_idx(self): - """Return the epoch index after *next_epoch_itr* is called.""" - if self._current_epoch_iterator is not None and self.end_of_epoch(): - return self.epoch + 1 - else: - return self.epoch - - def next_epoch_itr( - self, shuffle=True, fix_batches_to_gpus=False, set_dataset_epoch=True - ): - self.epoch = self.next_epoch_idx - if set_dataset_epoch and hasattr(self.dataset, "set_epoch"): - self.dataset.set_epoch(self.epoch) - self._current_epoch_iterator = self._get_iterator_for_epoch(self.epoch, shuffle) - return self._current_epoch_iterator - - def end_of_epoch(self) -> bool: - return not self._current_epoch_iterator.has_next() - - @property - def iterations_in_epoch(self) -> int: - if self._current_epoch_iterator is not None: - return self._current_epoch_iterator.n - return 0 - - def state_dict(self): - return { - "epoch": self.epoch, - } - - def load_state_dict(self, state_dict): - self.epoch = state_dict["epoch"] - - def _get_iterator_for_epoch(self, epoch, shuffle, offset=0): - if self.num_workers > 0: - os.environ["PYTHONWARNINGS"] = "ignore:semaphore_tracker:UserWarning" - - # Create data loader - worker_init_fn = getattr(self.dataset, "worker_init_fn", None) - itr = torch.utils.data.DataLoader( - self.dataset, - batch_size=self.max_sentences, - collate_fn=self.collate_fn, - num_workers=self.num_workers, - timeout=self.timeout, - worker_init_fn=worker_init_fn, - pin_memory=True, - ) - - # Wrap with a BufferedIterator if needed - if self.buffer_size > 0: - itr = BufferedIterator(self.buffer_size, itr) - - # Wrap with CountingIterator - itr = CountingIterator(itr, start=offset) - - return itr - - -class EpochBatchIterator(EpochBatchIterating): - """A multi-epoch iterator over a :class:`torch.utils.data.Dataset`. - - Compared to :class:`torch.utils.data.DataLoader`, this iterator: - - - can be reused across multiple epochs with the :func:`next_epoch_itr` - method (optionally shuffled between epochs) - - can be serialized/deserialized with the :func:`state_dict` and - :func:`load_state_dict` methods - - supports sharding with the *num_shards* and *shard_id* arguments - - Args: - dataset (~torch.utils.data.Dataset): dataset from which to load the data - collate_fn (callable): merges a list of samples to form a mini-batch - batch_sampler (~torch.utils.data.Sampler or a callable): an iterator over batches of - indices, or a callable to create such an iterator (~torch.utils.data.Sampler). - A callable batch_sampler will be called for each epoch to enable per epoch dynamic - batch iterators defined by this callable batch_sampler. - seed (int, optional): seed for random number generator for - reproducibility (default: 1). - num_shards (int, optional): shard the data iterator into N - shards (default: 1). - shard_id (int, optional): which shard of the data iterator to - return (default: 0). - num_workers (int, optional): how many subprocesses to use for data - loading. 0 means the data will be loaded in the main process - (default: 0). - epoch (int, optional): the epoch to start the iterator from - (default: 1). - buffer_size (int, optional): the number of batches to keep ready in the - queue. Helps speeding up dataloading. When buffer_size is zero, the - default torch.utils.data.DataLoader preloading is used. - timeout (int, optional): if positive, the timeout value for collecting a batch - from workers. 
Should always be non-negative (default: ``0``). - disable_shuffling (bool, optional): force disable shuffling - (default: ``False``). - skip_remainder_batch (bool, optional): if set, discard the last batch in an epoch - for the sake of training stability, as the last batch is usually smaller than - local_batch_size * distributed_word_size (default: ``False``). - grouped_shuffling (bool, optional): enable shuffling batches in groups - of num_shards. Ensures that each GPU receives similar length sequences when - batches are sorted by length. - """ - - def __init__( - self, - dataset, - collate_fn, - batch_sampler, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - buffer_size=0, - timeout=0, - disable_shuffling=False, - skip_remainder_batch=False, - grouped_shuffling=False, - ): - assert isinstance(dataset, torch.utils.data.Dataset) - self.dataset = dataset - self.collate_fn = collate_fn - self.batch_sampler = batch_sampler - self._frozen_batches = ( - tuple(batch_sampler) if not callable(batch_sampler) else None - ) - self.seed = seed - self.num_shards = num_shards - self.shard_id = shard_id - self.num_workers = num_workers - # This upper limit here is to prevent people from abusing this feature - # in a shared computing environment. - self.buffer_size = min(buffer_size, 20) - self.timeout = timeout - self.disable_shuffling = disable_shuffling - self.skip_remainder_batch = skip_remainder_batch - self.grouped_shuffling = grouped_shuffling - - self.epoch = max(epoch, 1) # we use 1-based indexing for epochs - self.shuffle = not disable_shuffling - self._cur_epoch_itr = None - self._next_epoch_itr = None - self._supports_prefetch = getattr(dataset, "supports_prefetch", False) - - @property - def frozen_batches(self): - if self._frozen_batches is None: - self._frozen_batches = tuple(self.batch_sampler(self.dataset, self.epoch)) - return self._frozen_batches - - @property - def first_batch(self): - if len(self.frozen_batches) == 0: - raise Exception( - "The dataset is empty. This could indicate " - "that all elements in the dataset have been skipped. " - "Try increasing the max number of allowed tokens or using " - "a larger dataset." - ) - - if getattr(self.dataset, "supports_fetch_outside_dataloader", True): - return self.collate_fn([self.dataset[i] for i in self.frozen_batches[0]]) - else: - return "DUMMY" - - def __len__(self): - return int(math.ceil(len(self.frozen_batches) / float(self.num_shards))) - - @property - def n(self): - return self.iterations_in_epoch - - @property - def next_epoch_idx(self): - """Return the epoch index after *next_epoch_itr* is called.""" - if self._next_epoch_itr is not None: - return self.epoch - elif self._cur_epoch_itr is not None and self.end_of_epoch(): - return self.epoch + 1 - else: - return self.epoch - - def next_epoch_itr( - self, shuffle=True, fix_batches_to_gpus=False, set_dataset_epoch=True - ): - """Return a new iterator over the dataset. - - Args: - shuffle (bool, optional): shuffle batches before returning the - iterator (default: True). - fix_batches_to_gpus (bool, optional): ensure that batches are always - allocated to the same shards across epochs. Requires - that :attr:`dataset` supports prefetching (default: False). - set_dataset_epoch (bool, optional): update the wrapped Dataset with - the new epoch number (default: True). 
- """ - if self.disable_shuffling: - shuffle = False - prev_epoch = self.epoch - self.epoch = self.next_epoch_idx - if set_dataset_epoch and hasattr(self.dataset, "set_epoch"): - self.dataset.set_epoch(self.epoch) - if self._next_epoch_itr is not None: - self._cur_epoch_itr = self._next_epoch_itr - self._next_epoch_itr = None - else: - if callable(self.batch_sampler) and prev_epoch != self.epoch: - # reset _frozen_batches to refresh the next epoch - self._frozen_batches = None - self._cur_epoch_itr = self._get_iterator_for_epoch( - self.epoch, - shuffle, - fix_batches_to_gpus=fix_batches_to_gpus, - ) - self.shuffle = shuffle - return self._cur_epoch_itr - - def end_of_epoch(self) -> bool: - """Returns whether the most recent epoch iterator has been exhausted""" - return not self._cur_epoch_itr.has_next() - - @property - def iterations_in_epoch(self): - """The number of consumed batches in the current epoch.""" - if self._cur_epoch_itr is not None: - return self._cur_epoch_itr.n - elif self._next_epoch_itr is not None: - return self._next_epoch_itr.n - return 0 - - def state_dict(self): - """Returns a dictionary containing a whole state of the iterator.""" - if self.end_of_epoch(): - epoch = self.epoch + 1 - iter_in_epoch = 0 - else: - epoch = self.epoch - iter_in_epoch = self.iterations_in_epoch - return { - "version": 2, - "epoch": epoch, - "iterations_in_epoch": iter_in_epoch, - "shuffle": self.shuffle, - } - - def load_state_dict(self, state_dict): - """Copies the state of the iterator from the given *state_dict*.""" - self.epoch = state_dict["epoch"] - itr_pos = state_dict.get("iterations_in_epoch", 0) - version = state_dict.get("version", 1) - if itr_pos > 0: - # fast-forward epoch iterator - self._next_epoch_itr = self._get_iterator_for_epoch( - self.epoch, - shuffle=state_dict.get("shuffle", True), - offset=itr_pos, - ) - if self._next_epoch_itr is None: - if version == 1: - # legacy behavior: we finished the epoch, increment epoch counter - self.epoch += 1 - else: - raise RuntimeError( - "Cannot resume training due to dataloader mismatch, please " - "report this to the fairseq developers. You can relaunch " - "training with `--reset-dataloader` and it should work." 
- ) - else: - self._next_epoch_itr = None - - def _get_iterator_for_epoch( - self, epoch, shuffle, fix_batches_to_gpus=False, offset=0 - ): - def shuffle_batches(batches, seed): - with data_utils.numpy_seed(seed): - - if self.grouped_shuffling: - grouped_batches = [ - batches[(i * self.num_shards) : ((i + 1) * self.num_shards)] - for i in range((len(batches) // self.num_shards)) - ] - np.random.shuffle(grouped_batches) - batches = list(itertools.chain(*grouped_batches)) - else: - np.random.shuffle(batches) - - return batches - - if self._supports_prefetch: - batches = self.frozen_batches - - if shuffle and not fix_batches_to_gpus: - batches = shuffle_batches(list(batches), self.seed + epoch) - - batches = list( - ShardedIterator(batches, self.num_shards, self.shard_id, fill_value=[]) - ) - self.dataset.prefetch([i for s in batches for i in s]) - - if shuffle and fix_batches_to_gpus: - batches = shuffle_batches(batches, self.seed + epoch + self.shard_id) - else: - if shuffle: - batches = shuffle_batches(list(self.frozen_batches), self.seed + epoch) - else: - batches = self.frozen_batches - batches = list( - ShardedIterator(batches, self.num_shards, self.shard_id, fill_value=[]) - ) - - if offset > 0 and offset >= len(batches): - return None - - if self.num_workers > 0: - os.environ["PYTHONWARNINGS"] = "ignore:semaphore_tracker:UserWarning" - - # Create data loader - itr = torch.utils.data.DataLoader( - self.dataset, - collate_fn=self.collate_fn, - batch_sampler=batches[offset:], - num_workers=self.num_workers, - timeout=self.timeout, - pin_memory=True, - ) - - # Wrap with a BufferedIterator if needed - if self.buffer_size > 0: - itr = BufferedIterator(self.buffer_size, itr) - - # Wrap with CountingIterator - itr = CountingIterator(itr, start=offset) - - if self.skip_remainder_batch: - # TODO: Below is a lazy implementation which discard the final batch regardless - # of whether it is a full batch or not. - total_num_itrs = len(batches) - 1 - itr.take(total_num_itrs) - logger.info(f"skip final residual batch, total_num_itrs = {total_num_itrs}") - - return itr - - -class GroupedIterator(CountingIterator): - """Wrapper around an iterable that returns groups (chunks) of items. - - Args: - iterable (iterable): iterable to wrap - chunk_size (int): size of each chunk - skip_remainder_batch (bool, optional): if set, discard the last grouped batch in - each training epoch, as the last grouped batch is usually smaller than - local_batch_size * distributed_word_size * chunk_size (default: ``False``). - Attributes: - n (int): number of elements consumed from this iterator - """ - - def __init__(self, iterable, chunk_size, skip_remainder_batch=False): - if skip_remainder_batch: - total_num_itrs = int(math.floor(len(iterable) / float(chunk_size))) - logger.info( - f"skip final residual batch, grouped total_num_itrs = {total_num_itrs}" - ) - else: - total_num_itrs = int(math.ceil(len(iterable) / float(chunk_size))) - logger.info(f"grouped total_num_itrs = {total_num_itrs}") - - itr = _chunk_iterator(iterable, chunk_size, skip_remainder_batch) - super().__init__( - itr, - start=int(math.ceil(getattr(iterable, "n", 0) / float(chunk_size))), - total=total_num_itrs, - ) - self.chunk_size = chunk_size - - if skip_remainder_batch: - self.take(total_num_itrs) - # TODO: [Hack] Here the grouped iterator modifies the base iterator size so that - # training can move into the next epoch once the grouped iterator is exhausted. - # Double-check this implementation in case unexpected behavior occurs. 
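-            # Truncating the wrapped iterable to a multiple of chunk_size keeps it
-            # in lockstep with this grouped iterator: both run out at the same step.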
- iterable.take(total_num_itrs * chunk_size) - - -def _chunk_iterator(itr, chunk_size, skip_remainder_batch=False): - chunk = [] - for x in itr: - chunk.append(x) - if len(chunk) == chunk_size: - yield chunk - chunk = [] - if not skip_remainder_batch and len(chunk) > 0: - yield chunk - - -class ShardedIterator(CountingIterator): - """A sharded wrapper around an iterable, padded to length. - - Args: - iterable (iterable): iterable to wrap - num_shards (int): number of shards to split the iterable into - shard_id (int): which shard to iterator over - fill_value (Any, optional): padding value when the iterable doesn't - evenly divide *num_shards* (default: None). - - Attributes: - n (int): number of elements consumed from this iterator - """ - - def __init__( - self, iterable, num_shards, shard_id, fill_value=None, skip_remainder_batch=None - ): - """ - Args: - skip_remainder_batch: ignored""" - if shard_id < 0 or shard_id >= num_shards: - raise ValueError("shard_id must be between 0 and num_shards") - sharded_len = int(math.ceil(len(iterable) / float(num_shards))) - itr = map( - operator.itemgetter(1), - itertools.zip_longest( - range(sharded_len), - itertools.islice(iterable, shard_id, len(iterable), num_shards), - fillvalue=fill_value, - ), - ) - super().__init__( - itr, - start=int(math.ceil(getattr(iterable, "n", 0) / float(num_shards))), - total=sharded_len, - ) - - -class BackgroundConsumer(Thread): - def __init__(self, queue, source, max_len, cuda_device): - Thread.__init__(self) - - self._queue = queue - self._source = source - self._max_len = max_len - self.count = 0 - self.cuda_device = cuda_device - - def run(self): - # set_device to avoid creation of GPU0 context when using pin_memory - if self.cuda_device is not None: - torch.cuda.set_device(self.cuda_device) - - try: - for item in self._source: - self._queue.put(item) - - # Stop if we reached the maximum length - self.count += 1 - if self._max_len is not None and self.count >= self._max_len: - break - - # Signal the consumer we are done. - self._queue.put(_sentinel) - except Exception as e: - self._queue.put(e) - - -class BufferedIterator(object): - def __init__(self, size, iterable): - self._queue = queue.Queue(size) - self._iterable = iterable - self._consumer = None - - self.start_time = time.time() - self.warning_time = None - - self.total = len(iterable) - - def _create_consumer(self): - self._consumer = BackgroundConsumer( - self._queue, - self._iterable, - self.total, - torch.cuda.current_device() if torch.cuda.is_available() else None, - ) - self._consumer.daemon = True - self._consumer.start() - - def __iter__(self): - return self - - def __len__(self): - return self.total - - def take(self, n): - self.total = min(self.total, n) - # Propagate this change to the underlying iterator - if hasattr(self._iterable, "take"): - self._iterable.take(n) - return self - - def __next__(self): - # Create consumer if not created yet - if self._consumer is None: - self._create_consumer() - - # Notify the user if there is a data loading bottleneck - if self._queue.qsize() < min(2, max(1, self._queue.maxsize // 2)): - if time.time() - self.start_time > 5 * 60: - if ( - self.warning_time is None - or time.time() - self.warning_time > 15 * 60 - ): - logger.debug( - "Data loading buffer is empty or nearly empty. This may " - "indicate a data loading bottleneck, and increasing the " - "number of workers (--num-workers) may help." 
-                    )
-                    self.warning_time = time.time()
-
-        # Get next example
-        item = self._queue.get(True)
-        if isinstance(item, Exception):
-            raise item
-        if item is _sentinel:
-            raise StopIteration()
-        return item
-
-
-class GroupedEpochBatchIterator(EpochBatchIterator):
-    """Grouped version of EpochBatchIterator.
-
-    It takes several batch samplers, one per dataset. Each epoch, the
-    dataset-wise samplers are shuffled individually with different random
-    seeds; those sub-samplers are then combined into one big sampler with a
-    deterministic permutation that mixes batches from the different
-    datasets. It acts like EpochBatchIterator, but additionally guarantees
-    that 1) each mini-batch comes from a single dataset, and 2) different
-    workers fetch batches in the same order, so at any given step they all
-    draw from the same dataset.
-    mult_rate is used for the update_freq > 1 case, where we want to make
-    sure all update_freq consecutive mini-batches come from the same source.
-    """
-
-    def __init__(
-        self,
-        dataset,
-        collate_fn,
-        batch_samplers,
-        seed=1,
-        num_shards=1,
-        shard_id=0,
-        num_workers=0,
-        epoch=0,
-        mult_rate=1,
-        buffer_size=0,
-        skip_remainder_batch=False,
-    ):
-        super().__init__(
-            dataset,
-            collate_fn,
-            batch_samplers,
-            seed,
-            num_shards,
-            shard_id,
-            num_workers,
-            epoch,
-            buffer_size,
-            skip_remainder_batch=skip_remainder_batch,
-        )
-        # level 0: sub-samplers 1: batch_idx 2: batches
-        self._frozen_batches = tuple([tuple(sub_batch) for sub_batch in batch_samplers])
-        self.step_size = mult_rate * num_shards
-
-        self.lengths = [
-            (len(x) // self.step_size) * self.step_size for x in self.frozen_batches
-        ]
-
-    def __len__(self):
-        return sum(self.lengths)
-
-    @property
-    def first_batch(self):
-        if len(self.frozen_batches) == 0:
-            raise Exception(
-                "The dataset is empty. This could indicate "
-                "that all elements in the dataset have been skipped. "
-                "Try increasing the max number of allowed tokens or using "
-                "a larger dataset."
- ) - - if self.dataset.supports_fetch_outside_dataloader: - return self.collate_fn([self.dataset[i] for i in self.frozen_batches[0][0]]) - else: - return "DUMMY" - - def _get_iterator_for_epoch( - self, epoch, shuffle, fix_batches_to_gpus=False, offset=0 - ): - def shuffle_batches(batches, seed): - with data_utils.numpy_seed(seed): - np.random.shuffle(batches) - return batches - - def return_full_batches(batch_sets, seed, shuffle): - if shuffle: - batch_sets = [shuffle_batches(list(x), seed) for x in batch_sets] - - batch_sets = [ - batch_sets[i][: self.lengths[i]] for i in range(len(batch_sets)) - ] - batches = list(itertools.chain.from_iterable(batch_sets)) - - if shuffle: - with data_utils.numpy_seed(seed): - idx = np.random.permutation(len(batches) // self.step_size) - if len(idx) * self.step_size != len(batches): - raise ValueError( - "ERROR: %d %d %d %d" - % (len(idx), self.step_size, len(batches), self.shard_id), - ":".join(["%d" % x for x in self.lengths]), - ) - mini_shards = [ - batches[i * self.step_size : (i + 1) * self.step_size] - for i in idx - ] - batches = list(itertools.chain.from_iterable(mini_shards)) - - return batches - - if self._supports_prefetch: - raise NotImplementedError("To be implemented") - else: - batches = return_full_batches( - self.frozen_batches, self.seed + epoch, shuffle - ) - batches = list( - ShardedIterator(batches, self.num_shards, self.shard_id, fill_value=[]) - ) - - if offset > 0 and offset >= len(batches): - return None - - if self.num_workers > 0: - os.environ["PYTHONWARNINGS"] = "ignore:semaphore_tracker:UserWarning" - - itr = torch.utils.data.DataLoader( - self.dataset, - collate_fn=self.collate_fn, - batch_sampler=batches[offset:], - num_workers=self.num_workers, - ) - if self.buffer_size > 0: - itr = BufferedIterator(self.buffer_size, itr) - - return CountingIterator(itr, start=offset) diff --git a/kosmos-g/fairseq/fairseq/data/language_pair_dataset.py b/kosmos-g/fairseq/fairseq/data/language_pair_dataset.py deleted file mode 100644 index fd356ddd0..000000000 --- a/kosmos-g/fairseq/fairseq/data/language_pair_dataset.py +++ /dev/null @@ -1,477 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import numpy as np -import torch -from fairseq.data import FairseqDataset, data_utils - - -logger = logging.getLogger(__name__) - - -def collate( - samples, - pad_idx, - eos_idx, - left_pad_source=True, - left_pad_target=False, - input_feeding=True, - pad_to_length=None, - pad_to_multiple=1, -): - if len(samples) == 0: - return {} - - def merge(key, left_pad, move_eos_to_beginning=False, pad_to_length=None): - return data_utils.collate_tokens( - [s[key] for s in samples], - pad_idx, - eos_idx, - left_pad, - move_eos_to_beginning, - pad_to_length=pad_to_length, - pad_to_multiple=pad_to_multiple, - ) - - def check_alignment(alignment, src_len, tgt_len): - if alignment is None or len(alignment) == 0: - return False - if ( - alignment[:, 0].max().item() >= src_len - 1 - or alignment[:, 1].max().item() >= tgt_len - 1 - ): - logger.warning("alignment size mismatch found, skipping alignment!") - return False - return True - - def compute_alignment_weights(alignments): - """ - Given a tensor of shape [:, 2] containing the source-target indices - corresponding to the alignments, a weight vector containing the - inverse frequency of each target index is computed. - For e.g. 
if alignments = [[5, 7], [2, 3], [1, 3], [4, 2]], then - a tensor containing [1., 0.5, 0.5, 1] should be returned (since target - index 3 is repeated twice) - """ - align_tgt = alignments[:, 1] - _, align_tgt_i, align_tgt_c = torch.unique( - align_tgt, return_inverse=True, return_counts=True - ) - align_weights = align_tgt_c[align_tgt_i[np.arange(len(align_tgt))]] - return 1.0 / align_weights.float() - - id = torch.LongTensor([s["id"] for s in samples]) - src_tokens = merge( - "source", - left_pad=left_pad_source, - pad_to_length=pad_to_length["source"] if pad_to_length is not None else None, - ) - # sort by descending source length - src_lengths = torch.LongTensor( - [s["source"].ne(pad_idx).long().sum() for s in samples] - ) - src_lengths, sort_order = src_lengths.sort(descending=True) - id = id.index_select(0, sort_order) - src_tokens = src_tokens.index_select(0, sort_order) - - prev_output_tokens = None - target = None - if samples[0].get("target", None) is not None: - target = merge( - "target", - left_pad=left_pad_target, - pad_to_length=pad_to_length["target"] - if pad_to_length is not None - else None, - ) - target = target.index_select(0, sort_order) - tgt_lengths = torch.LongTensor( - [s["target"].ne(pad_idx).long().sum() for s in samples] - ).index_select(0, sort_order) - ntokens = tgt_lengths.sum().item() - - if samples[0].get("prev_output_tokens", None) is not None: - prev_output_tokens = merge("prev_output_tokens", left_pad=left_pad_target) - elif input_feeding: - # we create a shifted version of targets for feeding the - # previous output token(s) into the next decoder step - prev_output_tokens = merge( - "target", - left_pad=left_pad_target, - move_eos_to_beginning=True, - pad_to_length=pad_to_length["target"] - if pad_to_length is not None - else None, - ) - else: - ntokens = src_lengths.sum().item() - - batch = { - "id": id, - "nsentences": len(samples), - "ntokens": ntokens, - "net_input": { - "src_tokens": src_tokens, - "src_lengths": src_lengths, - }, - "target": target, - } - if prev_output_tokens is not None: - batch["net_input"]["prev_output_tokens"] = prev_output_tokens.index_select( - 0, sort_order - ) - - if samples[0].get("alignment", None) is not None: - bsz, tgt_sz = batch["target"].shape - src_sz = batch["net_input"]["src_tokens"].shape[1] - - offsets = torch.zeros((len(sort_order), 2), dtype=torch.long) - offsets[:, 1] += torch.arange(len(sort_order), dtype=torch.long) * tgt_sz - if left_pad_source: - offsets[:, 0] += src_sz - src_lengths - if left_pad_target: - offsets[:, 1] += tgt_sz - tgt_lengths - - alignments = [ - alignment + offset - for align_idx, offset, src_len, tgt_len in zip( - sort_order, offsets, src_lengths, tgt_lengths - ) - for alignment in [samples[align_idx]["alignment"].view(-1, 2)] - if check_alignment(alignment, src_len, tgt_len) - ] - - if len(alignments) > 0: - alignments = torch.cat(alignments, dim=0) - align_weights = compute_alignment_weights(alignments) - - batch["alignments"] = alignments - batch["align_weights"] = align_weights - - if samples[0].get("constraints", None) is not None: - # Collate the packed constraints across the samples, padding to - # the length of the longest sample. 
- lens = [sample.get("constraints").size(0) for sample in samples] - max_len = max(lens) - constraints = torch.zeros((len(samples), max(lens))).long() - for i, sample in enumerate(samples): - constraints[i, 0 : lens[i]] = samples[i].get("constraints") - batch["constraints"] = constraints.index_select(0, sort_order) - - return batch - - -class LanguagePairDataset(FairseqDataset): - """ - A pair of torch.utils.data.Datasets. - - Args: - src (torch.utils.data.Dataset): source dataset to wrap - src_sizes (List[int]): source sentence lengths - src_dict (~fairseq.data.Dictionary): source vocabulary - tgt (torch.utils.data.Dataset, optional): target dataset to wrap - tgt_sizes (List[int], optional): target sentence lengths - tgt_dict (~fairseq.data.Dictionary, optional): target vocabulary - left_pad_source (bool, optional): pad source tensors on the left side - (default: True). - left_pad_target (bool, optional): pad target tensors on the left side - (default: False). - shuffle (bool, optional): shuffle dataset elements before batching - (default: True). - input_feeding (bool, optional): create a shifted version of the targets - to be passed into the model for teacher forcing (default: True). - remove_eos_from_source (bool, optional): if set, removes eos from end - of source if it's present (default: False). - append_eos_to_target (bool, optional): if set, appends eos to end of - target if it's absent (default: False). - align_dataset (torch.utils.data.Dataset, optional): dataset - containing alignments. - constraints (Tensor, optional): 2d tensor with a concatenated, zero- - delimited list of constraints for each sentence. - append_bos (bool, optional): if set, appends bos to the beginning of - source/target sentence. - num_buckets (int, optional): if set to a value greater than 0, then - batches will be bucketed into the given number of batch shapes. - src_lang_id (int, optional): source language ID, if set, the collated batch - will contain a field 'src_lang_id' in 'net_input' which indicates the - source language of the samples. - tgt_lang_id (int, optional): target language ID, if set, the collated batch - will contain a field 'tgt_lang_id' which indicates the target language - of the samples. 
- """ - - def __init__( - self, - src, - src_sizes, - src_dict, - tgt=None, - tgt_sizes=None, - tgt_dict=None, - left_pad_source=True, - left_pad_target=False, - shuffle=True, - input_feeding=True, - remove_eos_from_source=False, - append_eos_to_target=False, - align_dataset=None, - constraints=None, - append_bos=False, - eos=None, - num_buckets=0, - src_lang_id=None, - tgt_lang_id=None, - pad_to_multiple=1, - ): - if tgt_dict is not None: - assert src_dict.pad() == tgt_dict.pad() - assert src_dict.eos() == tgt_dict.eos() - assert src_dict.unk() == tgt_dict.unk() - if tgt is not None: - assert len(src) == len( - tgt - ), "Source and target must contain the same number of examples" - self.src = src - self.tgt = tgt - self.src_sizes = np.array(src_sizes) - self.tgt_sizes = np.array(tgt_sizes) if tgt_sizes is not None else None - self.sizes = ( - np.vstack((self.src_sizes, self.tgt_sizes)).T - if self.tgt_sizes is not None - else self.src_sizes - ) - self.src_dict = src_dict - self.tgt_dict = tgt_dict - self.left_pad_source = left_pad_source - self.left_pad_target = left_pad_target - self.shuffle = shuffle - self.input_feeding = input_feeding - self.remove_eos_from_source = remove_eos_from_source - self.append_eos_to_target = append_eos_to_target - self.align_dataset = align_dataset - if self.align_dataset is not None: - assert ( - self.tgt_sizes is not None - ), "Both source and target needed when alignments are provided" - self.constraints = constraints - self.append_bos = append_bos - self.eos = eos if eos is not None else src_dict.eos() - self.src_lang_id = src_lang_id - self.tgt_lang_id = tgt_lang_id - if num_buckets > 0: - from fairseq.data import BucketPadLengthDataset - - self.src = BucketPadLengthDataset( - self.src, - sizes=self.src_sizes, - num_buckets=num_buckets, - pad_idx=self.src_dict.pad(), - left_pad=self.left_pad_source, - ) - self.src_sizes = self.src.sizes - logger.info("bucketing source lengths: {}".format(list(self.src.buckets))) - if self.tgt is not None: - self.tgt = BucketPadLengthDataset( - self.tgt, - sizes=self.tgt_sizes, - num_buckets=num_buckets, - pad_idx=self.tgt_dict.pad(), - left_pad=self.left_pad_target, - ) - self.tgt_sizes = self.tgt.sizes - logger.info( - "bucketing target lengths: {}".format(list(self.tgt.buckets)) - ) - - # determine bucket sizes using self.num_tokens, which will return - # the padded lengths (thanks to BucketPadLengthDataset) - num_tokens = np.vectorize(self.num_tokens, otypes=[np.compat.long]) - self.bucketed_num_tokens = num_tokens(np.arange(len(self.src))) - self.buckets = [ - (None, num_tokens) for num_tokens in np.unique(self.bucketed_num_tokens) - ] - else: - self.buckets = None - self.pad_to_multiple = pad_to_multiple - - def get_batch_shapes(self): - return self.buckets - - def __getitem__(self, index): - tgt_item = self.tgt[index] if self.tgt is not None else None - src_item = self.src[index] - # Append EOS to end of tgt sentence if it does not have an EOS and remove - # EOS from end of src sentence if it exists. 
This is useful when we use - # use existing datasets for opposite directions i.e., when we want to - # use tgt_dataset as src_dataset and vice versa - if self.append_eos_to_target: - eos = self.tgt_dict.eos() if self.tgt_dict else self.src_dict.eos() - if self.tgt and self.tgt[index][-1] != eos: - tgt_item = torch.cat([self.tgt[index], torch.LongTensor([eos])]) - - if self.append_bos: - bos = self.tgt_dict.bos() if self.tgt_dict else self.src_dict.bos() - if self.tgt and self.tgt[index][0] != bos: - tgt_item = torch.cat([torch.LongTensor([bos]), self.tgt[index]]) - - bos = self.src_dict.bos() - if self.src[index][0] != bos: - src_item = torch.cat([torch.LongTensor([bos]), self.src[index]]) - - if self.remove_eos_from_source: - eos = self.src_dict.eos() - if self.src[index][-1] == eos: - src_item = self.src[index][:-1] - - example = { - "id": index, - "source": src_item, - "target": tgt_item, - } - if self.align_dataset is not None: - example["alignment"] = self.align_dataset[index] - if self.constraints is not None: - example["constraints"] = self.constraints[index] - return example - - def __len__(self): - return len(self.src) - - def collater(self, samples, pad_to_length=None): - """Merge a list of samples to form a mini-batch. - - Args: - samples (List[dict]): samples to collate - pad_to_length (dict, optional): a dictionary of - {'source': source_pad_to_length, 'target': target_pad_to_length} - to indicate the max length to pad to in source and target respectively. - - Returns: - dict: a mini-batch with the following keys: - - - `id` (LongTensor): example IDs in the original input order - - `ntokens` (int): total number of tokens in the batch - - `net_input` (dict): the input to the Model, containing keys: - - - `src_tokens` (LongTensor): a padded 2D Tensor of tokens in - the source sentence of shape `(bsz, src_len)`. Padding will - appear on the left if *left_pad_source* is ``True``. - - `src_lengths` (LongTensor): 1D Tensor of the unpadded - lengths of each source sentence of shape `(bsz)` - - `prev_output_tokens` (LongTensor): a padded 2D Tensor of - tokens in the target sentence, shifted right by one - position for teacher forcing, of shape `(bsz, tgt_len)`. - This key will not be present if *input_feeding* is - ``False``. Padding will appear on the left if - *left_pad_target* is ``True``. - - `src_lang_id` (LongTensor): a long Tensor which contains source - language IDs of each sample in the batch - - - `target` (LongTensor): a padded 2D Tensor of tokens in the - target sentence of shape `(bsz, tgt_len)`. Padding will appear - on the left if *left_pad_target* is ``True``. - - `tgt_lang_id` (LongTensor): a long Tensor which contains target language - IDs of each sample in the batch - """ - res = collate( - samples, - pad_idx=self.src_dict.pad(), - eos_idx=self.eos, - left_pad_source=self.left_pad_source, - left_pad_target=self.left_pad_target, - input_feeding=self.input_feeding, - pad_to_length=pad_to_length, - pad_to_multiple=self.pad_to_multiple, - ) - if self.src_lang_id is not None or self.tgt_lang_id is not None: - src_tokens = res["net_input"]["src_tokens"] - bsz = src_tokens.size(0) - if self.src_lang_id is not None: - res["net_input"]["src_lang_id"] = ( - torch.LongTensor([[self.src_lang_id]]).expand(bsz, 1).to(src_tokens) - ) - if self.tgt_lang_id is not None: - res["tgt_lang_id"] = ( - torch.LongTensor([[self.tgt_lang_id]]).expand(bsz, 1).to(src_tokens) - ) - return res - - def num_tokens(self, index): - """Return the number of tokens in a sample. 
This value is used to - enforce ``--max-tokens`` during batching.""" - return max( - self.src_sizes[index], - self.tgt_sizes[index] if self.tgt_sizes is not None else 0, - ) - - def num_tokens_vec(self, indices): - """Return the number of tokens for a set of positions defined by indices. - This value is used to enforce ``--max-tokens`` during batching.""" - sizes = self.src_sizes[indices] - if self.tgt_sizes is not None: - sizes = np.maximum(sizes, self.tgt_sizes[indices]) - return sizes - - def size(self, index): - """Return an example's size as a float or tuple. This value is used when - filtering a dataset with ``--max-positions``.""" - return ( - self.src_sizes[index], - self.tgt_sizes[index] if self.tgt_sizes is not None else 0, - ) - - def ordered_indices(self): - """Return an ordered list of indices. Batches will be constructed based - on this order.""" - if self.shuffle: - indices = np.random.permutation(len(self)).astype(np.int64) - else: - indices = np.arange(len(self), dtype=np.int64) - if self.buckets is None: - # sort by target length, then source length - if self.tgt_sizes is not None: - indices = indices[np.argsort(self.tgt_sizes[indices], kind="mergesort")] - return indices[np.argsort(self.src_sizes[indices], kind="mergesort")] - else: - # sort by bucketed_num_tokens, which is: - # max(padded_src_len, padded_tgt_len) - return indices[ - np.argsort(self.bucketed_num_tokens[indices], kind="mergesort") - ] - - @property - def supports_prefetch(self): - return getattr(self.src, "supports_prefetch", False) and ( - getattr(self.tgt, "supports_prefetch", False) or self.tgt is None - ) - - def prefetch(self, indices): - self.src.prefetch(indices) - if self.tgt is not None: - self.tgt.prefetch(indices) - if self.align_dataset is not None: - self.align_dataset.prefetch(indices) - - def filter_indices_by_size(self, indices, max_sizes): - """Filter a list of sample indices. Remove those that are longer - than specified in max_sizes. - - Args: - indices (np.array): original array of sample indices - max_sizes (int or list[int] or tuple[int]): max sample size, - can be defined separately for src and tgt (then list or tuple) - - Returns: - np.array: filtered sample array - list: list of removed indices - """ - return data_utils.filter_paired_dataset_indices_by_size( - self.src_sizes, - self.tgt_sizes, - indices, - max_sizes, - )
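One detail of `ordered_indices` above that is easy to miss: sorting twice with a stable `mergesort` makes source length the primary key and target length the tie-breaker. A tiny self-contained sketch with made-up sizes (not fairseq code):

```python
import numpy as np

src = np.array([5, 5, 3, 5])
tgt = np.array([9, 2, 7, 4])
idx = np.arange(len(src), dtype=np.int64)
# (starting from a random permutation would yield the same final order)
idx = idx[np.argsort(tgt[idx], kind="mergesort")]  # secondary key first
idx = idx[np.argsort(src[idx], kind="mergesort")]  # stable sort keeps ties
print(idx)  # [2 1 3 0]: ascending src, ties broken by ascending tgt
```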
diff --git a/kosmos-g/fairseq/fairseq/data/legacy/__init__.py b/kosmos-g/fairseq/fairseq/data/legacy/__init__.py deleted file mode 100644 index 9bd5c72b5..000000000 --- a/kosmos-g/fairseq/fairseq/data/legacy/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .block_pair_dataset import BlockPairDataset -from .masked_lm_dataset import MaskedLMDataset -from .masked_lm_dictionary import BertDictionary, MaskedLMDictionary - - -__all__ = [ - "BertDictionary", - "BlockPairDataset", - "MaskedLMDataset", - "MaskedLMDictionary", -] diff --git a/kosmos-g/fairseq/fairseq/data/legacy/block_pair_dataset.py b/kosmos-g/fairseq/fairseq/data/legacy/block_pair_dataset.py deleted file mode 100644 index ba069b460..000000000 --- a/kosmos-g/fairseq/fairseq/data/legacy/block_pair_dataset.py +++ /dev/null @@ -1,311 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import numpy as np -import torch -from fairseq.data import FairseqDataset - - -class BlockPairDataset(FairseqDataset): - """Break a Dataset of tokens into sentence pair blocks for next sentence - prediction as well as masked language modeling. - - The high-level logic is: - 1. break the input tensor into tensor blocks - 2. pair the blocks with 50% next sentence and 50% random sentence - 3. return the paired blocks as well as the related segment labels - - Args: - dataset (~torch.utils.data.Dataset): dataset to break into blocks - sizes: array of sentence lengths - dictionary: dictionary for the task - block_size: maximum block size - break_mode: mode for breaking the corpus into block pairs. Currently we - support 2 modes - doc: respect document boundaries; each part of the pair should belong to one document - none: don't respect any boundary and cut tokens evenly - short_seq_prob: probability for generating shorter block pairs - doc_break_size: Size for empty line separating documents. Typically 1 if - the sentences have eos, 0 otherwise. - """ - - def __init__( - self, - dataset, - dictionary, - sizes, - block_size, - break_mode="doc", - short_seq_prob=0.1, - doc_break_size=1, - ): - super().__init__() - self.dataset = dataset - self.pad = dictionary.pad() - self.eos = dictionary.eos() - self.cls = dictionary.cls() - self.mask = dictionary.mask() - self.sep = dictionary.sep() - self.break_mode = break_mode - self.dictionary = dictionary - self.short_seq_prob = short_seq_prob - self.block_indices = [] - - assert len(dataset) == len(sizes) - - if break_mode == "doc": - cur_doc = [] - for sent_id, sz in enumerate(sizes): - assert doc_break_size == 0 or sz != 0, ( - "when doc_break_size is non-zero, we expect documents to be" - "separated by a blank line with a single eos." - ) - # empty line as document separator - if sz == doc_break_size: - if len(cur_doc) == 0: - continue - self.block_indices.append(cur_doc) - cur_doc = [] - else: - cur_doc.append(sent_id) - max_num_tokens = block_size - 3 # Account for [CLS], [SEP], [SEP] - self.sent_pairs = [] - self.sizes = [] - for doc_id, doc in enumerate(self.block_indices): - self._generate_sentence_pair(doc, doc_id, max_num_tokens, sizes) - elif break_mode is None or break_mode == "none": - # each block should have half of the block size since we are constructing block pair - sent_length = (block_size - 3) // 2 - total_len = sum(dataset.sizes) - length = math.ceil(total_len / sent_length) - - def block_at(i): - start = i * sent_length - end = min(start + sent_length, total_len) - return (start, end) - - sent_indices = np.array([block_at(i) for i in range(length)]) - sent_sizes = np.array([e - s for s, e in sent_indices]) - dataset_index = self._sent_to_dataset_index(sent_sizes) - - # pair sentences - self._pair_sentences(dataset_index) - else: - raise ValueError("Invalid break_mode: " + break_mode) - - def _pair_sentences(self, dataset_index): - """ - Given a list of evenly cut blocks/sentences, pair each sentence with its - consecutive sentence 50% of the time and with a random sentence the other - 50% of the time. This is used for the 'none' break mode. - """ - # pair sentences - for sent_id, sent in enumerate(dataset_index): - next_sent_label = ( - 1 if np.random.rand() > 0.5 and sent_id != len(dataset_index) - 1 else 0 - ) - if next_sent_label: - next_sent = dataset_index[sent_id + 1] - else: - next_sent = dataset_index[ - self._skip_sampling(len(dataset_index), [sent_id, sent_id + 1]) - ] - self.sent_pairs.append((sent, next_sent, next_sent_label)) - - # The current blocks don't include the special tokens but the - # sizes already account for this - self.sizes.append(3 + sent[3] + next_sent[3])
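A simplified standalone sketch of the 50/50 pairing rule in `_pair_sentences` (the original draws the random partner via `_skip_sampling`, which excludes the block itself and its successor; a plain `randint` is used here only to keep the sketch short):

```python
import numpy as np

def pair_blocks(num_blocks):
    pairs = []
    for i in range(num_blocks):
        # pair with the successor half the time (never for the last block)
        is_next = np.random.rand() > 0.5 and i != num_blocks - 1
        partner = i + 1 if is_next else np.random.randint(num_blocks)
        pairs.append((i, partner, int(is_next)))
    return pairs

print(pair_blocks(5))  # e.g. [(0, 1, 1), (1, 3, 0), (2, 3, 1), (3, 0, 0), (4, 2, 0)]
```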
- - def _sent_to_dataset_index(self, sent_sizes): - """ - Build an index mapping block indices to the underlying dataset indices - """ - dataset_index = [] - ds_idx, ds_remaining = -1, 0 - for to_consume in sent_sizes: - sent_size = to_consume - if ds_remaining == 0: - ds_idx += 1 - ds_remaining = sent_sizes[ds_idx] - start_ds_idx = ds_idx - start_offset = sent_sizes[ds_idx] - ds_remaining - while to_consume > ds_remaining: - to_consume -= ds_remaining - ds_idx += 1 - ds_remaining = sent_sizes[ds_idx] - ds_remaining -= to_consume - dataset_index.append( - ( - start_ds_idx, # starting index in dataset - start_offset, # starting offset within starting index - ds_idx, # ending index in dataset - sent_size, # sentence length - ) - ) - assert ds_remaining == 0 - assert ds_idx == len(self.dataset) - 1 - return dataset_index - - def _generate_sentence_pair(self, doc, doc_id, max_num_tokens, sizes): - """ - Go through a single document and generate sentence pairs from it - """ - current_chunk = [] - current_length = 0 - curr = 0 - # To provide more randomness, we decrease target seq length for parts of - # samples (10% by default). Note that max_num_tokens is the hard threshold - # for batching and will never be changed.
- target_seq_length = max_num_tokens - if np.random.random() < self.short_seq_prob: - target_seq_length = np.random.randint(2, max_num_tokens) - # loop through all sentences in document - while curr < len(doc): - sent_id = doc[curr] - current_chunk.append(sent_id) - current_length = sum(sizes[current_chunk]) - # split chunk and generate pair when exceed target_seq_length or - # finish the loop - if curr == len(doc) - 1 or current_length >= target_seq_length: - # split the chunk into 2 parts - a_end = 1 - if len(current_chunk) > 2: - a_end = np.random.randint(1, len(current_chunk) - 1) - sent_a = current_chunk[:a_end] - len_a = sum(sizes[sent_a]) - # generate next sentence label, note that if there is only 1 sentence - # in current chunk, label is always 0 - next_sent_label = ( - 1 if np.random.rand() > 0.5 and len(current_chunk) != 1 else 0 - ) - if not next_sent_label: - # if next sentence label is 0, sample sent_b from a random doc - target_b_length = target_seq_length - len_a - rand_doc_id = self._skip_sampling(len(self.block_indices), [doc_id]) - random_doc = self.block_indices[rand_doc_id] - random_start = np.random.randint(0, len(random_doc)) - sent_b = [] - len_b = 0 - for j in range(random_start, len(random_doc)): - sent_b.append(random_doc[j]) - len_b = sum(sizes[sent_b]) - if len_b >= target_b_length: - break - # put the unused second part of the chunk back so it is processed again - num_unused_segments = len(current_chunk) - a_end - curr -= num_unused_segments - else: - # if next sentence label is 1, use the second part of the chunk as sent_b - sent_b = current_chunk[a_end:] - len_b = sum(sizes[sent_b]) - # currently sent_a and sent_b may be longer than max_num_tokens, - # truncate them and return block idx and offsets for them - sent_a, sent_b = self._truncate_sentences( - sent_a, sent_b, max_num_tokens - ) - self.sent_pairs.append((sent_a, sent_b, next_sent_label)) - self.sizes.append(3 + sent_a[3] + sent_b[3]) - current_chunk = [] - curr += 1 - - def _skip_sampling(self, total, skip_ids): - """ - Generate a random integer which is not in skip_ids. Sample range is [0, total) - TODO: ids in skip_ids should be consecutive, we can extend it to a more generic version later - """ - rand_id = np.random.randint(total - len(skip_ids)) - return rand_id if rand_id < min(skip_ids) else rand_id + len(skip_ids)
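The rejection-free trick in `_skip_sampling` is worth a standalone illustration (a sketch, not the class method): draw from a range shrunk by `len(skip_ids)`, then shift any draw at or above the skipped block past it, which assumes the skipped ids are consecutive.

```python
import numpy as np

def skip_sampling(total, skip_ids):
    rand_id = np.random.randint(total - len(skip_ids))
    return rand_id if rand_id < min(skip_ids) else rand_id + len(skip_ids)

# With total=10 and skip_ids=[3, 4], draws land uniformly on
# {0, 1, 2, 5, 6, 7, 8, 9} and never on 3 or 4.
assert all(skip_sampling(10, [3, 4]) not in (3, 4) for _ in range(1000))
```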
- - def _truncate_sentences(self, sent_a, sent_b, max_num_tokens): - """ - Truncate a pair of sentences to keep the total length under max_num_tokens. - Logic: - 1. Truncate the longer sentence. - 2. Tokens to be truncated could be at the beginning or the end of the sentence. - Returns: - Truncated sentences represented by dataset idx - """ - len_a, len_b = sum(self.dataset.sizes[sent_a]), sum(self.dataset.sizes[sent_b]) - front_cut_a = front_cut_b = end_cut_a = end_cut_b = 0 - - while True: - total_length = ( - len_a + len_b - front_cut_a - front_cut_b - end_cut_a - end_cut_b - ) - if total_length <= max_num_tokens: - break - - if len_a - front_cut_a - end_cut_a > len_b - front_cut_b - end_cut_b: - if np.random.rand() < 0.5: - front_cut_a += 1 - else: - end_cut_a += 1 - else: - if np.random.rand() < 0.5: - front_cut_b += 1 - else: - end_cut_b += 1 - - # calculate ds indices as well as offsets and return - truncated_sent_a = self._cut_sentence(sent_a, front_cut_a, end_cut_a) - truncated_sent_b = self._cut_sentence(sent_b, front_cut_b, end_cut_b) - return truncated_sent_a, truncated_sent_b - - def _cut_sentence(self, sent, front_cut, end_cut): - """ - Cut a sentence based on the number of tokens to be cut from the beginning and end - Represent the sentence as dataset idx and return - """ - start_ds_idx, end_ds_idx, offset = sent[0], sent[-1], 0 - target_len = sum(self.dataset.sizes[sent]) - front_cut - end_cut - while front_cut > 0: - if self.dataset.sizes[start_ds_idx] > front_cut: - offset += front_cut - break - else: - front_cut -= self.dataset.sizes[start_ds_idx] - start_ds_idx += 1 - while end_cut > 0: - if self.dataset.sizes[end_ds_idx] > end_cut: - break - else: - end_cut -= self.dataset.sizes[end_ds_idx] - end_ds_idx -= 1 - return start_ds_idx, offset, end_ds_idx, target_len - - def _fetch_block(self, start_ds_idx, offset, end_ds_idx, length): - """ - Fetch a block of tokens based on its dataset idx - """ - buffer = torch.cat( - [self.dataset[idx] for idx in range(start_ds_idx, end_ds_idx + 1)] - ) - s, e = offset, offset + length - return buffer[s:e] - - def __getitem__(self, index): - block1, block2, next_sent_label = self.sent_pairs[index] - block1 = self._fetch_block(*block1) - block2 = self._fetch_block(*block2) - return block1, block2, next_sent_label - - def __len__(self): - return len(self.sizes) - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - prefetch_idx = set() - for index in indices: - for block1, block2, _ in [self.sent_pairs[index]]: - for ds_idx in range(block1[0], block1[2] + 1): - prefetch_idx.add(ds_idx) - for ds_idx in range(block2[0], block2[2] + 1): - prefetch_idx.add(ds_idx) - self.dataset.prefetch(prefetch_idx) diff --git a/kosmos-g/fairseq/fairseq/data/legacy/masked_lm_dataset.py b/kosmos-g/fairseq/fairseq/data/legacy/masked_lm_dataset.py deleted file mode 100644 index dd8ea2c60..000000000 --- a/kosmos-g/fairseq/fairseq/data/legacy/masked_lm_dataset.py +++ /dev/null @@ -1,303 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Dict, List, Tuple - -import numpy as np -import torch -from fairseq.data import Dictionary, FairseqDataset, data_utils -from fairseq.data.concat_dataset import ConcatDataset -from fairseq.data.legacy.block_pair_dataset import BlockPairDataset -from fairseq.data.token_block_dataset import TokenBlockDataset - - -class MaskedLMDataset(FairseqDataset): - """ - A wrapper Dataset for masked language modelling.
The dataset - wraps around TokenBlockDataset or BlockedPairDataset and creates a batch - where the input blocks are masked according to the specified masking - probability. Additionally the batch can also contain sentence level targets - if this is specified. - - Args: - dataset: Dataset which generates blocks of data. Only BlockPairDataset - and TokenBlockDataset are supported. - sizes: Sentence lengths - vocab: Dictionary with the vocabulary and special tokens. - pad_idx: Id of padding token in dictionary - mask_idx: Id of mask token in dictionary - classif_token_idx: Id of classification token in dictionary. This is the - token associated with the sentence embedding (Eg: CLS for BERT) - sep_token_idx: Id of separator token in dictionary - (Eg: SEP in BERT) - seed: Seed for random number generator for reproducibility. - shuffle: Shuffle the elements before batching. - has_pairs: Specifies whether the underlying dataset - generates a pair of blocks along with a sentence_target or not. - Setting it to True assumes that the underlying dataset generates a - label for the pair of sentences which is surfaced as - sentence_target. The default value assumes a single block with no - sentence target. - segment_id: An optional segment id for filling in the segment labels - when we are in the single block setting (Eg: XLM). Default is 0. - masking_ratio: specifies what percentage of the blocks should be masked. - masking_prob: specifies the probability of a given token being - replaced with the "MASK" token. - random_token_prob: specifies the probability of a given token being - replaced by a random token from the vocabulary. - """ - - def __init__( - self, - dataset: FairseqDataset, - sizes: np.ndarray, - vocab: Dictionary, - pad_idx: int, - mask_idx: int, - classif_token_idx: int, - sep_token_idx: int, - seed: int = 1, - shuffle: bool = True, - has_pairs: bool = True, - segment_id: int = 0, - masking_ratio: float = 0.15, - masking_prob: float = 0.8, - random_token_prob: float = 0.1, - ): - # Make sure the input datasets are the ones supported - assert ( - isinstance(dataset, TokenBlockDataset) - or isinstance(dataset, BlockPairDataset) - or isinstance(dataset, ConcatDataset) - ), ( - "MaskedLMDataset only wraps TokenBlockDataset or BlockPairDataset or " - "ConcatDataset" - ) - - self.dataset = dataset - self.sizes = np.array(sizes) - self.vocab = vocab - self.pad_idx = pad_idx - self.mask_idx = mask_idx - self.classif_token_idx = classif_token_idx - self.sep_token_idx = sep_token_idx - self.shuffle = shuffle - self.seed = seed - self.has_pairs = has_pairs - self.segment_id = segment_id - self.masking_ratio = masking_ratio - self.masking_prob = masking_prob - self.random_token_prob = random_token_prob - - # If we have only one block then sizes needs to be updated to include - # the classification token - if not has_pairs: - self.sizes = self.sizes + 1 - - def __getitem__(self, index: int): - # if has_pairs, then expect 2 blocks and a sentence target - if self.has_pairs: - (block_one, block_two, sentence_target) = self.dataset[index] - else: - block_one = self.dataset[index] - - return { - "id": index, - "block_one": block_one, - "block_two": block_two if self.has_pairs else None, - "sentence_target": sentence_target if self.has_pairs else None, - } - - def __len__(self): - return len(self.dataset) - - def _mask_block( - self, - sentence: np.ndarray, - mask_idx: int, - pad_idx: int, - dictionary_token_range: Tuple, - ): - """ - Mask tokens for Masked Language Model training - Samples mask_ratio 
tokens that will be predicted by LM. - - Note: this function may not be efficient, since it does multiple - conversions between np and torch; we could replace them with torch - operators later. - - Args: - sentence: 1d tensor to be masked - mask_idx: index to use for masking the sentence - pad_idx: index to use for masking the target for tokens we aren't - predicting - dictionary_token_range: range of indices in dictionary which can - be used for random word replacement - (e.g. without special characters) - Return: - masked_sent: masked sentence - target: target with words which we are not predicting replaced - by pad_idx - """ - masked_sent = np.copy(sentence) - sent_length = len(sentence) - mask_num = math.ceil(sent_length * self.masking_ratio) - mask = np.random.choice(sent_length, mask_num, replace=False) - target = np.copy(sentence) - - for i in range(sent_length): - if i in mask: - rand = np.random.random() - - # replace with mask if probability is less than masking_prob - # (Eg: 0.8) - if rand < self.masking_prob: - masked_sent[i] = mask_idx - - # replace with random token if probability is less than - # masking_prob + random_token_prob (Eg: 0.9) - elif rand < (self.masking_prob + self.random_token_prob): - # sample random token from dictionary - masked_sent[i] = np.random.randint( - dictionary_token_range[0], dictionary_token_range[1] - ) - else: - target[i] = pad_idx - - return masked_sent, target
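Abstracting the per-position decision out of `_mask_block` gives the familiar BERT-style corruption rule. A standalone sketch (not the class method) with the defaults masking_prob=0.8 and random_token_prob=0.1: a sampled position becomes the mask token ~80% of the time, a random vocabulary token ~10% of the time, and keeps its original token otherwise.

```python
import numpy as np

def corrupt(token, mask_idx, token_range,
            masking_prob=0.8, random_token_prob=0.1):
    rand = np.random.random()
    if rand < masking_prob:
        return mask_idx                         # ~80%: replace with <mask>
    if rand < masking_prob + random_token_prob:
        return np.random.randint(*token_range)  # ~10%: random vocab token
    return token                                # ~10%: keep the original token

draws = [corrupt(42, mask_idx=3, token_range=(4, 100)) for _ in range(10000)]
print(sum(d == 3 for d in draws) / len(draws))  # ~0.8
```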
- - def _collate(self, samples: List[Dict], pad_idx: int, eos_idx: int): - """ - Does the heavy lifting for creating a batch from the input list of - examples. The logic is as follows: - 1. Mask the input blocks. In case has_pair is True then we have 2 - blocks to mask. - 2. Prepend the first masked block tensor with the special token - used as sentence embedding. Eg: CLS in BERT. This happens - irrespective of the value of has_pair. - 3. If has_pair is True, then append the first masked block with the - special separator token (eg: SEP for BERT) and compute segment - label accordingly. In this case, also append the second masked - block with this special separator token and compute its segment - label. - 4. For the targets tensor, prepend and append with padding index - accordingly. - 5. Concatenate all tensors. - """ - if len(samples) == 0: - return {} - # To ensure determinism, we reset the state of the PRNG after every - # batch based on the seed and the first id of the batch. This ensures - # that across epochs we get the same mask for the same example. This - # is needed for reproducibility and is how BERT does masking - # TODO: Can we add determinism without this constraint? - with data_utils.numpy_seed(self.seed + samples[0]["id"]): - for s in samples: - - # token range is needed for replacing with random token during - # masking - token_range = (self.vocab.nspecial, len(self.vocab)) - - # mask according to specified probabilities. - masked_blk_one, masked_tgt_one = self._mask_block( - s["block_one"], - self.mask_idx, - self.pad_idx, - token_range, - ) - - tokens = np.concatenate([[self.classif_token_idx], masked_blk_one]) - targets = np.concatenate([[self.pad_idx], masked_tgt_one]) - segments = np.ones(len(tokens)) * self.segment_id - - # if has_pairs is True then we need to add the SEP token to both - # the blocks after masking and re-compute segments based on the new - # lengths. - if self.has_pairs: - tokens_one = np.concatenate([tokens, [self.sep_token_idx]]) - targets_one = np.concatenate([targets, [self.pad_idx]]) - - masked_blk_two, masked_tgt_two = self._mask_block( - s["block_two"], self.mask_idx, self.pad_idx, token_range - ) - tokens_two = np.concatenate([masked_blk_two, [self.sep_token_idx]]) - targets_two = np.concatenate([masked_tgt_two, [self.pad_idx]]) - - # block + 1 sep + 1 special (CLS) - segments_one = np.zeros(len(tokens_one)) - # block + 1 sep - segments_two = np.ones(len(tokens_two)) - - tokens = np.concatenate([tokens_one, tokens_two]) - targets = np.concatenate([targets_one, targets_two]) - segments = np.concatenate([segments_one, segments_two]) - - s["source"] = torch.LongTensor(tokens) - s["segment_labels"] = torch.LongTensor(segments) - s["lm_target"] = torch.LongTensor(targets) - - def merge(key): - return data_utils.collate_tokens( - [s[key] for s in samples], pad_idx, eos_idx, left_pad=False - ) - - return { - "id": torch.LongTensor([s["id"] for s in samples]), - "ntokens": sum(len(s["source"]) for s in samples), - "net_input": { - "src_tokens": merge("source"), - "segment_labels": merge("segment_labels"), - }, - "lm_target": merge("lm_target"), - "sentence_target": torch.LongTensor([s["sentence_target"] for s in samples]) - if self.has_pairs - else None, - "nsentences": len(samples), - } - - def collater(self, samples: List[Dict]): - """Merge a list of samples to form a mini-batch. - - Args: - samples (List[dict]): samples to collate - - Returns: - dict: a mini-batch of data - """ - return self._collate(samples, self.vocab.pad(), self.vocab.eos()) - - def num_tokens(self, index: int): - """ - Return the number of tokens in a sample. This value is used to - enforce max-tokens during batching. - """ - return self.sizes[index] - - def size(self, index: int): - """ - Return an example's size as a float or tuple. This value is used when - filtering a dataset with max-positions. - """ - return self.sizes[index] - - def ordered_indices(self): - """ - Return an ordered list of indices. Batches will be constructed based - on this order. - """ - if self.shuffle: - return np.random.permutation(len(self)) - else: - order = [np.arange(len(self))] - order.append(self.sizes) - return np.lexsort(order) - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - self.dataset.prefetch(indices) diff --git a/kosmos-g/fairseq/fairseq/data/legacy/masked_lm_dictionary.py b/kosmos-g/fairseq/fairseq/data/legacy/masked_lm_dictionary.py deleted file mode 100644 index dee88f7a3..000000000 --- a/kosmos-g/fairseq/fairseq/data/legacy/masked_lm_dictionary.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.data import Dictionary - - -class MaskedLMDictionary(Dictionary): - """ - Dictionary for Masked Language Modelling tasks. This extends Dictionary by - adding the mask symbol. - """ - - def __init__( - self, - pad="<pad>", - eos="</s>", - unk="<unk>", - mask="<mask>", - ): - super().__init__(pad=pad, eos=eos, unk=unk) - self.mask_word = mask - self.mask_index = self.add_symbol(mask) - self.nspecial = len(self.symbols) - - def mask(self): - """Helper to get index of mask symbol""" - return self.mask_index - - -class BertDictionary(MaskedLMDictionary): - """ - Dictionary for BERT task.
This extends MaskedLMDictionary by adding support - for cls and sep symbols. - """ - - def __init__( - self, - pad="<pad>", - eos="</s>", - unk="<unk>", - mask="<mask>", - cls="<cls>", - sep="<sep>", - ): - super().__init__(pad=pad, eos=eos, unk=unk, mask=mask) - self.cls_word = cls - self.sep_word = sep - self.cls_index = self.add_symbol(cls) - self.sep_index = self.add_symbol(sep) - self.nspecial = len(self.symbols) - - def cls(self): - """Helper to get index of cls symbol""" - return self.cls_index - - def sep(self): - """Helper to get index of sep symbol""" - return self.sep_index diff --git a/kosmos-g/fairseq/fairseq/data/list_dataset.py b/kosmos-g/fairseq/fairseq/data/list_dataset.py deleted file mode 100644 index 12f00aa43..000000000 --- a/kosmos-g/fairseq/fairseq/data/list_dataset.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import BaseWrapperDataset - - -class ListDataset(BaseWrapperDataset): - def __init__(self, dataset, sizes=None): - super().__init__(dataset) - self._sizes = sizes - - def __iter__(self): - for x in self.dataset: - yield x - - def collater(self, samples): - return samples - - @property - def sizes(self): - return self._sizes - - def num_tokens(self, index): - return self.sizes[index] - - def size(self, index): - return self.sizes[index] - - def set_epoch(self, epoch): - pass diff --git a/kosmos-g/fairseq/fairseq/data/lm_context_window_dataset.py b/kosmos-g/fairseq/fairseq/data/lm_context_window_dataset.py deleted file mode 100644 index 1a945927c..000000000 --- a/kosmos-g/fairseq/fairseq/data/lm_context_window_dataset.py +++ /dev/null @@ -1,97 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from typing import Dict - -from fairseq.data.monolingual_dataset import MonolingualDataset - -from . import FairseqDataset - - -class LMContextWindowDataset(FairseqDataset): - """ - Wraps a MonolingualDataset and provides more context for evaluation. - - Each item in the new dataset will have a maximum size of - ``tokens_per_sample + context_window``. 
- - Args: - dataset: dataset to wrap - tokens_per_sample (int): the max number of tokens in each dataset item - context_window (int): the number of accumulated tokens to add to each - dataset item - pad_idx (int): padding symbol - """ - - def __init__( - self, - dataset: MonolingualDataset, - tokens_per_sample: int, - context_window: int, - pad_idx: int, - ): - assert context_window > 0 - self.dataset = dataset - self.tokens_per_sample = tokens_per_sample - self.context_window = context_window - self.pad_idx = pad_idx - self.prev_tokens = np.empty([0]) - - def __getitem__(self, index): - return self.dataset[index] - - def __len__(self): - return len(self.dataset) - - def collater(self, samples) -> Dict: - sample = self.dataset.collater(samples) - - pad = self.pad_idx - max_sample_len = self.tokens_per_sample + self.context_window - - bsz, tsz = sample["net_input"]["src_tokens"].shape - start_idxs = [0] * bsz - toks = sample["net_input"]["src_tokens"] - lengths = sample["net_input"]["src_lengths"] - tgt = sample["target"] - new_toks = np.empty([bsz, tsz + self.context_window], dtype=np.int64) - new_tgt = np.full([bsz, tsz + self.context_window], pad, dtype=np.int64) - sample_lens = toks.ne(pad).long().sum(dim=1).cpu() - for i in range(bsz): - sample_len = sample_lens[i] - extra = len(self.prev_tokens) + sample_len - max_sample_len - if extra > 0: - self.prev_tokens = self.prev_tokens[extra:] - pads = np.full(self.context_window - len(self.prev_tokens), pad) - new_toks[i] = np.concatenate([self.prev_tokens, toks[i].numpy(), pads]) - new_tgt[ - i, len(self.prev_tokens) : len(self.prev_tokens) + len(tgt[i]) - ] = tgt[i] - start_idxs[i] = len(self.prev_tokens) - lengths[i] += len(self.prev_tokens) - self.prev_tokens = new_toks[i][new_toks[i] != pad][-self.context_window :] - sample["net_input"]["src_tokens"] = torch.from_numpy(new_toks) - sample["target"] = torch.from_numpy(new_tgt) - sample["start_indices"] = start_idxs - return sample - - def num_tokens(self, index): - return self.dataset.num_tokens(index) - - def size(self, index): - return self.dataset.size(index) - - def ordered_indices(self): - # NOTE we don't shuffle the data to retain access to the previous dataset elements - return np.arange(len(self.dataset)) - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.dataset.prefetch(indices) diff --git a/kosmos-g/fairseq/fairseq/data/lru_cache_dataset.py b/kosmos-g/fairseq/fairseq/data/lru_cache_dataset.py deleted file mode 100644 index a7854ac17..000000000 --- a/kosmos-g/fairseq/fairseq/data/lru_cache_dataset.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from functools import lru_cache - -from . import BaseWrapperDataset - - -class LRUCacheDataset(BaseWrapperDataset): - def __init__(self, dataset, token=None): - super().__init__(dataset) - - @lru_cache(maxsize=8) - def __getitem__(self, index): - return self.dataset[index] - - @lru_cache(maxsize=8) - def collater(self, samples): - return self.dataset.collater(samples) diff --git a/kosmos-g/fairseq/fairseq/data/mask_tokens_dataset.py b/kosmos-g/fairseq/fairseq/data/mask_tokens_dataset.py deleted file mode 100644 index 912323559..000000000 --- a/kosmos-g/fairseq/fairseq/data/mask_tokens_dataset.py +++ /dev/null @@ -1,220 +0,0 @@ -# Copyright (c) Facebook, Inc. 
and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from functools import lru_cache - -import numpy as np -import torch -from fairseq.data import Dictionary, data_utils - -from . import BaseWrapperDataset, LRUCacheDataset - - -class MaskTokensDataset(BaseWrapperDataset): - """ - A wrapper Dataset for masked language modeling. - - Input items are masked according to the specified masking probability. - - Args: - dataset: Dataset to wrap. - sizes: Sentence lengths - vocab: Dictionary with the vocabulary and special tokens. - pad_idx: Id of pad token in vocab - mask_idx: Id of mask token in vocab - return_masked_tokens: controls whether to return the non-masked tokens - (the default) or to return a tensor with the original masked token - IDs (and *pad_idx* elsewhere). The latter is useful as targets for - masked LM training. - seed: Seed for random number generator for reproducibility. - mask_prob: probability of replacing a token with *mask_idx*. - leave_unmasked_prob: probability that a masked token is unmasked. - random_token_prob: probability of replacing a masked token with a - random token from the vocabulary. - freq_weighted_replacement: sample random replacement words based on - word frequencies in the vocab. - mask_whole_words: only mask whole words. This should be a byte mask - over vocab indices, indicating whether it is the beginning of a - word. We will extend any mask to encompass the whole word. - bpe: BPE to use for whole-word masking. - mask_multiple_length : repeat each mask index multiple times. Default - value is 1. - mask_stdev : standard deviation of masks distribution in case of - multiple masking. Default value is 0. - """ - - @classmethod - def apply_mask(cls, dataset: torch.utils.data.Dataset, *args, **kwargs): - """Return the source and target datasets for masked LM training.""" - dataset = LRUCacheDataset(dataset) - return ( - LRUCacheDataset(cls(dataset, *args, **kwargs, return_masked_tokens=False)), - LRUCacheDataset(cls(dataset, *args, **kwargs, return_masked_tokens=True)), - ) - - def __init__( - self, - dataset: torch.utils.data.Dataset, - vocab: Dictionary, - pad_idx: int, - mask_idx: int, - return_masked_tokens: bool = False, - seed: int = 1, - mask_prob: float = 0.15, - leave_unmasked_prob: float = 0.1, - random_token_prob: float = 0.1, - freq_weighted_replacement: bool = False, - mask_whole_words: torch.Tensor = None, - mask_multiple_length: int = 1, - mask_stdev: float = 0.0, - ): - assert 0.0 < mask_prob < 1.0 - assert 0.0 <= random_token_prob <= 1.0 - assert 0.0 <= leave_unmasked_prob <= 1.0 - assert random_token_prob + leave_unmasked_prob <= 1.0 - assert mask_multiple_length >= 1 - assert mask_stdev >= 0.0 - - self.dataset = dataset - self.vocab = vocab - self.pad_idx = pad_idx - self.mask_idx = mask_idx - self.return_masked_tokens = return_masked_tokens - self.seed = seed - self.mask_prob = mask_prob - self.leave_unmasked_prob = leave_unmasked_prob - self.random_token_prob = random_token_prob - self.mask_whole_words = mask_whole_words - self.mask_multiple_length = mask_multiple_length - self.mask_stdev = mask_stdev - - if random_token_prob > 0.0: - if freq_weighted_replacement: - weights = np.array(self.vocab.count) - else: - weights = np.ones(len(self.vocab)) - weights[: self.vocab.nspecial] = 0 - self.weights = weights / weights.sum() - - self.epoch = 0 - - @property - def can_reuse_epoch_itr_across_epochs(self): - return True # only the noise 
changes, not item sizes - - def set_epoch(self, epoch, **unused): - super().set_epoch(epoch) - self.epoch = epoch - - def __getitem__(self, index: int): - return self.__getitem_cached__(self.seed, self.epoch, index) - - @lru_cache(maxsize=8) - def __getitem_cached__(self, seed: int, epoch: int, index: int): - with data_utils.numpy_seed(self.seed, self.epoch, index): - item = self.dataset[index] - sz = len(item) - - assert ( - self.mask_idx not in item - ), "Dataset contains mask_idx (={}), this is not expected!".format( - self.mask_idx, - ) - - if self.mask_whole_words is not None: - word_begins_mask = self.mask_whole_words.gather(0, item) - word_begins_idx = word_begins_mask.nonzero().view(-1) - sz = len(word_begins_idx) - words = np.split(word_begins_mask, word_begins_idx)[1:] - assert len(words) == sz - word_lens = list(map(len, words)) - - # decide elements to mask - mask = np.full(sz, False) - num_mask = int( - # add a random number for probabilistic rounding - self.mask_prob * sz / float(self.mask_multiple_length) - + np.random.rand() - ) - - # multiple masking as described in the vq-wav2vec paper (https://arxiv.org/abs/1910.05453) - mask_idc = np.random.choice(sz, num_mask, replace=False) - if self.mask_stdev > 0.0: - lengths = np.random.normal( - self.mask_multiple_length, self.mask_stdev, size=num_mask - ) - lengths = [max(0, int(round(x))) for x in lengths] - mask_idc = np.asarray( - [ - mask_idc[j] + offset - for j in range(len(mask_idc)) - for offset in range(lengths[j]) - ], - dtype=np.int64, - ) - else: - mask_idc = np.concatenate( - [mask_idc + i for i in range(self.mask_multiple_length)] - ) - mask_idc = mask_idc[mask_idc < len(mask)] - try: - mask[mask_idc] = True - except: # something wrong - print( - "Assigning mask indexes {} to mask {} failed!".format( - mask_idc, mask - ) - ) - raise - - if self.return_masked_tokens: - # exit early if we're just returning the masked tokens - # (i.e., the targets for masked LM training) - if self.mask_whole_words is not None: - mask = np.repeat(mask, word_lens) - new_item = np.full(len(mask), self.pad_idx) - new_item[mask] = item[torch.from_numpy(mask.astype(np.uint8)) == 1] - return torch.from_numpy(new_item) - - # decide unmasking and random replacement - rand_or_unmask_prob = self.random_token_prob + self.leave_unmasked_prob - if rand_or_unmask_prob > 0.0: - rand_or_unmask = mask & (np.random.rand(sz) < rand_or_unmask_prob) - if self.random_token_prob == 0.0: - unmask = rand_or_unmask - rand_mask = None - elif self.leave_unmasked_prob == 0.0: - unmask = None - rand_mask = rand_or_unmask - else: - unmask_prob = self.leave_unmasked_prob / rand_or_unmask_prob - decision = np.random.rand(sz) < unmask_prob - unmask = rand_or_unmask & decision - rand_mask = rand_or_unmask & (~decision) - else: - unmask = rand_mask = None - - if unmask is not None: - mask = mask ^ unmask - - if self.mask_whole_words is not None: - mask = np.repeat(mask, word_lens) - - new_item = np.copy(item) - new_item[mask] = self.mask_idx - if rand_mask is not None: - num_rand = rand_mask.sum() - if num_rand > 0: - if self.mask_whole_words is not None: - rand_mask = np.repeat(rand_mask, word_lens) - num_rand = rand_mask.sum() - - new_item[rand_mask] = np.random.choice( - len(self.vocab), - num_rand, - p=self.weights, - ) - - return torch.from_numpy(new_item) diff --git a/kosmos-g/fairseq/fairseq/data/monolingual_dataset.py b/kosmos-g/fairseq/fairseq/data/monolingual_dataset.py deleted file mode 100644 index 54fd583b6..000000000 --- 
a/kosmos-g/fairseq/fairseq/data/monolingual_dataset.py +++ /dev/null @@ -1,253 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -from . import FairseqDataset, data_utils - - -def collate(samples, pad_idx, eos_idx, fixed_pad_length=None, pad_to_bsz=None): - if len(samples) == 0: - return {} - - def merge(key, is_list=False): - if is_list: - res = [] - for i in range(len(samples[0][key])): - res.append( - data_utils.collate_tokens( - [s[key][i] for s in samples], - pad_idx, - eos_idx, - left_pad=False, - pad_to_length=fixed_pad_length, - pad_to_bsz=pad_to_bsz, - ) - ) - return res - else: - return data_utils.collate_tokens( - [s[key] for s in samples], - pad_idx, - eos_idx, - left_pad=False, - pad_to_length=fixed_pad_length, - pad_to_bsz=pad_to_bsz, - ) - - src_tokens = merge("source") - if samples[0]["target"] is not None: - is_target_list = isinstance(samples[0]["target"], list) - target = merge("target", is_target_list) - else: - target = src_tokens - - return { - "id": torch.LongTensor([s["id"] for s in samples]), - "nsentences": len(samples), - "ntokens": sum(len(s["source"]) for s in samples), - "net_input": { - "src_tokens": src_tokens, - "src_lengths": torch.LongTensor([s["source"].numel() for s in samples]), - }, - "target": target, - } - - -class MonolingualDataset(FairseqDataset): - """ - A wrapper around torch.utils.data.Dataset for monolingual data. - - Args: - dataset (torch.utils.data.Dataset): dataset to wrap - sizes (List[int]): sentence lengths - vocab (~fairseq.data.Dictionary): vocabulary - shuffle (bool, optional): shuffle the elements before batching - (default: True). - """ - - def __init__( - self, - dataset, - sizes, - src_vocab, - tgt_vocab=None, - add_eos_for_other_targets=False, - shuffle=False, - targets=None, - add_bos_token=False, - fixed_pad_length=None, - pad_to_bsz=None, - src_lang_idx=None, - tgt_lang_idx=None, - ): - self.dataset = dataset - self.sizes = np.array(sizes) - self.vocab = src_vocab - self.tgt_vocab = tgt_vocab or src_vocab - self.add_eos_for_other_targets = add_eos_for_other_targets - self.shuffle = shuffle - self.add_bos_token = add_bos_token - self.fixed_pad_length = fixed_pad_length - self.pad_to_bsz = pad_to_bsz - self.src_lang_idx = src_lang_idx - self.tgt_lang_idx = tgt_lang_idx - - assert targets is None or all( - t in {"self", "future", "past"} for t in targets - ), "targets must be none or one of 'self', 'future', 'past'" - if targets is not None and len(targets) == 0: - targets = None - self.targets = targets - - def __getitem__(self, index): - if self.targets is not None: - # *future_target* is the original sentence - # *source* is shifted right by 1 (maybe left-padded with eos) - # *past_target* is shifted right by 2 (left-padded as needed) - # - # Left-to-right language models should condition on *source* and - # predict *future_target*. - # Right-to-left language models should condition on *source* and - # predict *past_target*. 
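- # A concrete reading of the comment above (illustrative tokens, not from the - # original file): for a sentence [w1, w2, w3] with eos </s>, - # source = [</s>, w1, w2, w3], future_target = [w1, w2, w3, </s>], and - # past_target = [<pad>, </s>, w1, w2] (shifted right by 2, left-padded).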
- source, future_target, past_target = self.dataset[index] - source, target = self._make_source_target( - source, future_target, past_target - ) - else: - source = self.dataset[index] - target = None - source, target = self._maybe_add_bos(source, target) - return {"id": index, "source": source, "target": target} - - def __len__(self): - return len(self.dataset) - - def _make_source_target(self, source, future_target, past_target): - if self.targets is not None: - target = [] - - if ( - self.add_eos_for_other_targets - and (("self" in self.targets) or ("past" in self.targets)) - and source[-1] != self.vocab.eos() - ): - # append eos at the end of source - source = torch.cat([source, source.new([self.vocab.eos()])]) - - if "future" in self.targets: - future_target = torch.cat( - [future_target, future_target.new([self.vocab.pad()])] - ) - if "past" in self.targets: - # first token is before the start of sentence which is only used in "none" break mode when - # add_eos_for_other_targets is False - past_target = torch.cat( - [ - past_target.new([self.vocab.pad()]), - past_target[1:], - source[-2, None], - ] - ) - - for t in self.targets: - if t == "self": - target.append(source) - elif t == "future": - target.append(future_target) - elif t == "past": - target.append(past_target) - else: - raise Exception("invalid target " + t) - - if len(target) == 1: - target = target[0] - else: - target = future_target - - return source, self._filter_vocab(target) - - def _maybe_add_bos(self, source, target): - if self.add_bos_token: - source = torch.cat([source.new([self.vocab.bos()]), source]) - if target is not None: - target = torch.cat([target.new([self.tgt_vocab.bos()]), target]) - return source, target - - def num_tokens_vec(self, indices): - """Return the number of tokens for a set of positions defined by indices. - This value is used to enforce ``--max-tokens`` during batching.""" - return self.sizes[indices] - - def _filter_vocab(self, target): - if len(self.tgt_vocab) != len(self.vocab): - - def _filter(target): - mask = target.ge(len(self.tgt_vocab)) - if mask.any(): - target[mask] = self.tgt_vocab.unk() - return target - - if isinstance(target, list): - return [_filter(t) for t in target] - return _filter(target) - return target - - def collater(self, samples): - """Merge a list of samples to form a mini-batch. - - Args: - samples (List[dict]): samples to collate - - Returns: - dict: a mini-batch with the following keys: - - - `id` (LongTensor): example IDs in the original input order - - `ntokens` (int): total number of tokens in the batch - - `net_input` (dict): the input to the Model, containing keys: - - - `src_tokens` (LongTensor): a padded 2D Tensor of tokens in - the source sentence of shape `(bsz, src_len)`. Padding will - appear on the right. - - - `target` (LongTensor): a padded 2D Tensor of tokens in the - target sentence of shape `(bsz, tgt_len)`. Padding will appear - on the right. - """ - return collate( - samples, - self.vocab.pad(), - self.vocab.eos(), - self.fixed_pad_length, - self.pad_to_bsz, - ) - - def num_tokens(self, index): - """Return the number of tokens in a sample. This value is used to - enforce ``--max-tokens`` during batching.""" - return self.sizes[index] - - def size(self, index): - """Return an example's size as a float or tuple. This value is used when - filtering a dataset with ``--max-positions``.""" - return self.sizes[index] - - def ordered_indices(self): - """Return an ordered list of indices. 
Batches will be constructed based - on this order.""" - if self.shuffle: - order = [np.random.permutation(len(self))] - else: - order = [np.arange(len(self))] - order.append(self.sizes) - return np.lexsort(order) - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - self.dataset.prefetch(indices) diff --git a/kosmos-g/fairseq/fairseq/data/multi_corpus_dataset.py b/kosmos-g/fairseq/fairseq/data/multi_corpus_dataset.py deleted file mode 100644 index a3f47c720..000000000 --- a/kosmos-g/fairseq/fairseq/data/multi_corpus_dataset.py +++ /dev/null @@ -1,256 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import time -from collections import OrderedDict -from typing import Dict, List, Optional - -import numpy as np -from fairseq.data import data_utils - -from . import FairseqDataset - -logger = logging.getLogger(__name__) - - -class MultiCorpusDataset(FairseqDataset): - """ - Stores multiple instances of FairseqDataset together. - Unless batch_sample=True, requires each instance - to be the same dataset, as the collate method needs to work on batches with - samples from each dataset. - - Allows specifying a distribution over the datasets to use. Note that unlike - MultiCorpusSampledDataset, this distribution allows sampling for each item, - rather than on a batch level. Note that datasets with a sampling probability - of 0 will be skipped. - - Each time ordered_indices() is called, a new sample is generated with - the specified distribution. - - Args: - datasets: an OrderedDict of FairseqDataset instances. - distribution: a List containing the probability of getting an utterance from - the corresponding dataset - seed: random seed for sampling the datasets - sort_indices: if true, will sort the ordered indices by size - batch_sample: if true, will ensure each batch is from a single dataset - """ - - def __init__( - self, - datasets: Dict[str, FairseqDataset], - distribution: List[float], - seed: int, - sort_indices: bool = False, - batch_sample: bool = False, - distributed_rank: Optional[int] = None, - ): - super().__init__() - assert isinstance(datasets, OrderedDict) - assert len(datasets) == len(distribution) - assert sum(distribution) == 1 - self.datasets = datasets - self.distribution = distribution - self.seed = seed - self.sort_indices = sort_indices - self.batch_sample = batch_sample - self.distributed_rank = distributed_rank - - # Avoid repeated conversions to list later - self.dataset_list = list(datasets.values()) - self.total_num_instances = 0 - - first_dataset = self.dataset_list[0] - - self.num_instances_per_dataset = [] - self.dataset_offsets = [] - for i, dataset in enumerate(self.dataset_list): - assert isinstance(dataset, FairseqDataset) - assert type(dataset) is type(first_dataset) - self.num_instances_per_dataset.append( - 0 if self.distribution[i] == 0 else len(dataset) - ) - self.dataset_offsets.append(self.total_num_instances) - self.total_num_instances += self.num_instances_per_dataset[i] - - def ordered_indices(self): - start = time.time() - with data_utils.numpy_seed(self.seed, self.epoch): - logger.info( - f"sampling new dataset with seed {self.seed} epoch {self.epoch}" - ) - sampled_indices = [] - num_selected_instances = 0 - - # For each dataset i, sample self.distribution[i] * self.total_num_instances - for i, key in enumerate(self.datasets): - if self.distribution[i] == 0: - # skip dataset if sampling probability is 0 - continue - - if i < len(self.datasets) - 1: - num_instances = int(self.distribution[i] * self.total_num_instances) - high = self.dataset_offsets[i + 1] - else: - num_instances = self.total_num_instances - num_selected_instances - high = self.total_num_instances - - logger.info(f"sampling {num_instances} from {key} dataset") - num_selected_instances += num_instances - - # First, add k copies of the dataset where k = num_instances // len(dataset). - # This ensures an equal distribution of the data points as much as possible. - # For the remaining entries randomly sample them - dataset_size = len(self.datasets[key]) - num_copies = num_instances // dataset_size - dataset_indices = ( - np.random.permutation(high - self.dataset_offsets[i]) - + self.dataset_offsets[i] - )[: num_instances - num_copies * dataset_size] - if num_copies > 0: - sampled_indices += list( - np.concatenate( - ( - np.repeat( - np.arange(self.dataset_offsets[i], high), num_copies - ), - dataset_indices, - ) - ) - ) - else: - sampled_indices += list(dataset_indices) - - assert ( - len(sampled_indices) == self.total_num_instances - ), f"{len(sampled_indices)} vs {self.total_num_instances}" - - np.random.shuffle(sampled_indices) - if self.sort_indices: - sampled_indices.sort(key=lambda i: self.num_tokens(i)) - - logger.info( - "multi_corpus_dataset ordered_indices took {}s".format( - time.time() - start - ) - ) - return np.array(sampled_indices, dtype=np.int64) - - def _map_index(self, index: int): - """ - If dataset A has length N and dataset B has length M - then index 1 maps to index 1 of dataset A, and index N + 1 - maps to index 1 of B. - """ - counter = 0 - for num_instances, key in zip(self.num_instances_per_dataset, self.datasets): - if index < counter + num_instances: - return index - counter, key - counter += num_instances - raise ValueError( - "Invalid index: {}, max: {}".format(index, self.total_num_instances) - )
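The flat-to-(dataset, index) mapping that `_map_index` implements can be pinned down with a toy standalone sketch (hypothetical lengths): with dataset A of length 3 and dataset B of length 2, flat indices 0-2 land in A and 3-4 land in B.

```python
def map_index(index, lengths):
    offset = 0
    for key, n in lengths:
        if index < offset + n:
            return index - offset, key
        offset += n
    raise ValueError(f"Invalid index: {index}, max: {offset}")

lengths = [("A", 3), ("B", 2)]
assert map_index(0, lengths) == (0, "A")
assert map_index(3, lengths) == (0, "B")
assert map_index(4, lengths) == (1, "B")
```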
enumerate(self.datasets): - if self.distribution[i] == 0: - # skip dataset if sampling probability is 0 - continue - - if i < len(self.datasets) - 1: - num_instances = int(self.distribution[i] * self.total_num_instances) - high = self.dataset_offsets[i + 1] - else: - num_instances = self.total_num_instances - num_selected_instances - high = self.total_num_instances - - logger.info(f"sampling {num_instances} from {key} dataset") - num_selected_instances += num_instances - - # First, add k copies of the dataset where k = num_instances // len(dataset). - # This ensures an equal distribution of the data points as much as possible. - # For the remaining entries randomly sample them - dataset_size = len(self.datasets[key]) - num_copies = num_instances // dataset_size - dataset_indices = ( - np.random.permutation(high - self.dataset_offsets[i]) - + self.dataset_offsets[i] - )[: num_instances - num_copies * dataset_size] - if num_copies > 0: - sampled_indices += list( - np.concatenate( - ( - np.repeat( - np.arange(self.dataset_offsets[i], high), num_copies - ), - dataset_indices, - ) - ) - ) - else: - sampled_indices += list(dataset_indices) - - assert ( - len(sampled_indices) == self.total_num_instances - ), f"{len(sampled_indices)} vs {self.total_num_instances}" - - np.random.shuffle(sampled_indices) - if self.sort_indices: - sampled_indices.sort(key=lambda i: self.num_tokens(i)) - - logger.info( - "multi_corpus_dataset ordered_indices took {}s".format( - time.time() - start - ) - ) - return np.array(sampled_indices, dtype=np.int64) - - def _map_index(self, index: int): - """ - If dataset A has length N and dataset B has length M - then index 1 maps to index 1 of dataset A, and index N + 1 - maps to index 1 of B. - """ - counter = 0 - for num_instances, key in zip(self.num_instances_per_dataset, self.datasets): - if index < counter + num_instances: - return index - counter, key - counter += num_instances - raise ValueError( - "Invalid index: {}, max: {}".format(index, self.total_num_instances) - ) - - def __len__(self): - """ - Length of this dataset is the sum of individual datasets - """ - return self.total_num_instances - - def __getitem__(self, index): - new_index, key = self._map_index(index) - try: - item = self.datasets[key][new_index] - item["full_id"] = index - return item - except Exception as e: - e.args = (f"Error from {key} dataset", *e.args) - raise - - def collater(self, samples): - """ - If we are doing batch sampling, then pick the right collater to use. - - Otherwise we assume all collaters are the same. 
- """ - if len(samples) == 0: - return None - if "full_id" in samples[0]: - _, key = self._map_index(samples[0]["full_id"]) - try: - batch = self.datasets[key].collater(samples) - except Exception: - print(f"Collating failed for key {key}", flush=True) - raise - return batch - else: - # Subclasses may override __getitem__ to not specify full_id - return list(self.datasets.values())[0].collater(samples) - - def num_tokens(self, index: int): - index, key = self._map_index(index) - return self.datasets[key].num_tokens(index) - - def size(self, index: int): - index, key = self._map_index(index) - return self.datasets[key].size(index) - - @property - def can_reuse_epoch_itr_across_epochs(self): - return False - - def set_epoch(self, epoch, **unused): - super().set_epoch(epoch) - logger.info(f"setting epoch of multi_corpus_dataset to {epoch}") - self.epoch = epoch - - @property - def supports_prefetch(self): - return False - - @property - def supports_fetch_outside_dataloader(self): - return all( - self.datasets[key].supports_fetch_outside_dataloader - for key in self.datasets - ) - - def batch_by_size( - self, - indices, - max_tokens=None, - max_sentences=None, - required_batch_size_multiple=1, - ): - if not self.batch_sample: - return super().batch_by_size( - indices, max_tokens, max_sentences, required_batch_size_multiple - ) - - dataset_indices = {key: [] for key in self.datasets} - for i in indices: - _, key = self._map_index(i) - dataset_indices[key].append(i) - - batches = [] - for key in dataset_indices: - cur_batches = super().batch_by_size( - np.array(dataset_indices[key], dtype=np.int64), - max_tokens, - max_sentences, - required_batch_size_multiple, - ) - logger.info(f"Created {len(cur_batches)} batches for dataset {key}") - batches += cur_batches - - # If this dataset is used in a distributed training setup, - # then shuffle such that the order is seeded by the distributed rank - # as well - if self.distributed_rank is not None: - with data_utils.numpy_seed(self.seed, self.epoch, self.distributed_rank): - np.random.shuffle(batches) - return batches diff --git a/kosmos-g/fairseq/fairseq/data/multi_corpus_sampled_dataset.py b/kosmos-g/fairseq/fairseq/data/multi_corpus_sampled_dataset.py deleted file mode 100644 index e2e9fdf00..000000000 --- a/kosmos-g/fairseq/fairseq/data/multi_corpus_sampled_dataset.py +++ /dev/null @@ -1,152 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from collections import OrderedDict -from typing import Callable, Dict, List - -import numpy as np - -from . import FairseqDataset - - -def uniform_sampler(x): - # Sample from uniform distribution - return np.random.choice(x, 1).item() - - -class MultiCorpusSampledDataset(FairseqDataset): - """ - Stores multiple instances of FairseqDataset together and in every iteration - creates a batch by first sampling a dataset according to a specified - probability distribution and then getting instances from that dataset. - - Args: - datasets: an OrderedDict of FairseqDataset instances. - sampling_func: A function for sampling over list of dataset keys. - The default strategy is to sample uniformly. 
- """ - - def __init__( - self, - datasets: Dict[str, FairseqDataset], - sampling_func: Callable[[List], int] = None, - ): - super().__init__() - assert isinstance(datasets, OrderedDict) - self.datasets = datasets - if sampling_func is None: - sampling_func = uniform_sampler - self.sampling_func = sampling_func - - self.total_num_instances = 0 - for _, dataset in datasets.items(): - assert isinstance(dataset, FairseqDataset) - self.total_num_instances += len(dataset) - - self._ordered_indices = None - - def __len__(self): - """ - Length of this dataset is the sum of individual datasets - """ - return self.total_num_instances - - def ordered_indices(self): - """ - Ordered indices for batching. Here we call the underlying - dataset's ordered_indices() so that we get the same random ordering - as we would have from using the underlying dataset directly. - """ - if self._ordered_indices is None: - self._ordered_indices = OrderedDict( - [ - (key, dataset.ordered_indices()) - for key, dataset in self.datasets.items() - ] - ) - return np.arange(len(self)) - - def _map_index_to_dataset(self, key: int, index: int): - """ - Different underlying datasets have different lengths. In order to ensure - we are not accessing an index outside the range of the current dataset - size, we wrap around. This function should be called after we have - created an ordering for this and all underlying datasets. - """ - assert ( - self._ordered_indices is not None - ), "Must call MultiCorpusSampledDataset.ordered_indices() first" - mapped_index = index % len(self.datasets[key]) - return self._ordered_indices[key][mapped_index] - - def __getitem__(self, index: int): - """ - Get the item associated with index from each underlying dataset. - Since index is in the range of [0, TotalNumInstances], we need to - map the index to the dataset before retrieving the item. - """ - return OrderedDict( - [ - (key, dataset[self._map_index_to_dataset(key, index)]) - for key, dataset in self.datasets.items() - ] - ) - - def collater(self, samples: List[Dict]): - """ - Generate a mini-batch for this dataset. - To convert this into a regular mini-batch we use the following - logic: - 1. Select a dataset using the specified probability distribution. - 2. Call the collater function of the selected dataset. - """ - if len(samples) == 0: - return None - - selected_key = self.sampling_func(list(self.datasets.keys())) - selected_samples = [sample[selected_key] for sample in samples] - return self.datasets[selected_key].collater(selected_samples) - - def num_tokens(self, index: int): - """ - Return an example's length (number of tokens), used for batching. Here - we return the max across all examples at index across all underlying - datasets. - """ - return max( - dataset.num_tokens(self._map_index_to_dataset(key, index)) - for key, dataset in self.datasets.items() - ) - - def size(self, index: int): - """ - Return an example's size as a float or tuple. Here we return the max - across all underlying datasets. This value is used when filtering a - dataset with max-positions. 
- """ - return max( - dataset.size(self._map_index_to_dataset(key, index)) - for key, dataset in self.datasets.items() - ) - - @property - def supports_prefetch(self): - return all( - getattr(dataset, "supports_prefetch", False) - for dataset in self.datasets.values() - ) - - def prefetch(self, indices): - for key, dataset in self.datasets.items(): - dataset.prefetch( - [self._map_index_to_dataset(key, index) for index in indices] - ) - - @property - def supports_fetch_outside_dataloader(self): - return all( - self.datasets[key].supports_fetch_outside_dataloader - for key in self.datasets - ) diff --git a/kosmos-g/fairseq/fairseq/data/multilingual/__init__.py b/kosmos-g/fairseq/fairseq/data/multilingual/__init__.py deleted file mode 100644 index 626423691..000000000 --- a/kosmos-g/fairseq/fairseq/data/multilingual/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. diff --git a/kosmos-g/fairseq/fairseq/data/multilingual/multilingual_data_manager.py b/kosmos-g/fairseq/fairseq/data/multilingual/multilingual_data_manager.py deleted file mode 100644 index 8dae99d99..000000000 --- a/kosmos-g/fairseq/fairseq/data/multilingual/multilingual_data_manager.py +++ /dev/null @@ -1,1156 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import itertools -import json -import logging -import math -import os -from collections import OrderedDict, defaultdict -from argparse import ArgumentError - -from fairseq import utils -from fairseq.data import ( - AppendTokenDataset, - ConcatDataset, - Dictionary, - LanguagePairDataset, - PrependTokenDataset, - SampledMultiDataset, - SampledMultiEpochDataset, - StripTokenDataset, - TransformEosLangPairDataset, - TruncateDataset, - data_utils, - indexed_dataset, -) -from fairseq.data.multilingual.multilingual_utils import ( - EncoderLangtok, - LangTokSpec, - LangTokStyle, - augment_dictionary, - get_lang_tok, -) -from fairseq.data.multilingual.sampled_multi_dataset import CollateFormat -from fairseq.file_io import PathManager -from fairseq.utils import FileContentsAction, csv_str_list, eval_str_dict - - -logger = logging.getLogger(__name__) - -SRC_DICT_NAME = "src" -TGT_DICT_NAME = "tgt" - - -def _lang_id(dic: Dictionary, lang: str): - """Return language ID index.""" - idx = dic.index(lang) - assert idx != dic.unk_index, "cannot find language ID for lang {}".format(lang) - return idx - - -def load_sampling_weights(from_file): - with open(from_file) as f: - weights = json.load(f) - return weights - - -class MultilingualDatasetManager(object): - def __init__(self, args, lang_pairs, langs, dicts, sampling_method): - super().__init__() - self.args = args - self.seed = args.seed - self.lang_pairs = lang_pairs - self.extra_lang_pairs = ( - list({p for _, v in args.extra_lang_pairs.items() for p in v.split(",")}) - if args.extra_lang_pairs - else [] - ) - self.src_langs = { - p.split("-")[0] for p in args.lang_pairs + self.extra_lang_pairs - } - self.tgt_langs = { - p.split("-")[1] for p in args.lang_pairs + self.extra_lang_pairs - } - self.langs = langs - self.dicts = dicts - self.lang_dict = self.create_lang_dictionary(self.langs) - self.sampling_method = sampling_method - self.sampling_scheduler = None - self._has_sharded_data = False - self._num_shards_dict = {} - 
self._training_data_sizes = defaultdict(lambda: {}) - - @classmethod - def setup_data_manager(cls, args, lang_pairs, langs, dicts, sampling_method): - return MultilingualDatasetManager( - args, lang_pairs, langs, dicts, sampling_method - ) - - @staticmethod - def add_args(parser): - parser.add_argument( - "data", - help="colon separated path to data directories list, \ - will be iterated upon during epochs in round-robin manner", - action=FileContentsAction, - ) - parser.add_argument( - "--langs", - default=None, - type=csv_str_list, - help="a list of languages comma sperated languages which can appear in lang-pairs; " - "note that the ordering determines language token IDs", - ) - parser.add_argument( - "--lang-dict", - default=None, - type=str, - help="an external file which contains a list of " - "languages which can appear in lang-pairs; " - "note that the ordering determines language token IDs; " - "--langs and --lang-dict are two exclusive options", - ) - parser.add_argument( - "--source-dict", - default=None, - type=str, - help="path to source dictionary; if specified it will override per language dictionary loading", - ) - parser.add_argument( - "--target-dict", - default=None, - type=str, - help="path to target dictionary; if specified it will override per language dictionary loading", - ) - parser.add_argument( - "--lang-tok-style", - default=LangTokStyle.multilingual.value, - type=str, - choices=[LangTokStyle.multilingual.value, LangTokStyle.mbart.value], - help="language token styles", - ) - - parser.add_argument( - "--load-alignments", - action="store_true", - help="load the binarized alignments", - ) - parser.add_argument( - "--left-pad-source", - default="True", - type=str, - metavar="BOOL", - help="pad the source on the left", - ) - parser.add_argument( - "--left-pad-target", - default="False", - type=str, - metavar="BOOL", - help="pad the target on the left", - ) - try: - parser.add_argument( - "--max-source-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the source sequence", - ) - parser.add_argument( - "--max-target-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the target sequence", - ) - except ArgumentError: - # this might have already been defined. Once we transition this to hydra it should be fine to add it here. - pass - parser.add_argument( - "--upsample-primary", - default=1, - type=int, - help="amount to upsample primary dataset", - ) - parser.add_argument( - "--truncate-source", - action="store_true", - default=False, - help="truncate source to max-source-positions", - ) - parser.add_argument( - "--encoder-langtok", - default=None, - type=str, - choices=[EncoderLangtok.src.value, EncoderLangtok.tgt.value], - metavar="SRCTGT", - help="prepend to the beginning of source sentence the source or target " - "language token. 
(src/tgt)", - ) - parser.add_argument( - "--decoder-langtok", - action="store_true", - help="prepend to the beginning of target sentence the target language token", - ) - parser.add_argument( - "--lang-tok-replacing-bos-eos", action="store_true", default=False - ) - parser.add_argument( - "--enable-lang-ids", - default=False, - action="store_true", - help="whether to include language IDs in samples", - ) - parser.add_argument( - "--enable-reservsed-directions-shared-datasets", - default=False, - action="store_true", - help="whether to allow datasets be used in reversed directions", - ) - - parser.add_argument( - "--extra-data", - help='a dictionary of data name to this path, \ - e.g. {"mined", path_to_mined_data, "denoised": path_to_denoised_data}', - type=lambda uf: eval_str_dict(uf, type=str), - default=None, - ) - parser.add_argument( - "--extra-lang-pairs", - help='a dictionary of data name to the language pairs they serve, \ - e.g. {"mined": comma-separated-lang-pairs, "denoised": comma-separated-lang-pairs}', - type=lambda uf: eval_str_dict(uf, type=str), - default=None, - ) - parser.add_argument( - "--fixed-dictionary", - help="Fixed dictionary to use with model path", - default=None, - type=str, - ) - parser.add_argument( - "--langtoks-specs", - help='a list of comma separated data types that a set of language tokens to be specialized for, \ - e.g. "main,dae,mined". There will be a set of language tokens added to the vocab to \ - distinguish languages in different training data types. If not specified, default language \ - tokens per languages will be added', - default=LangTokSpec.main.value, - type=csv_str_list, - ) - parser.add_argument( - "--langtoks", - help='a dictionary of how to add language tokens, \ - e.g. {"mined": (None, "tgt"), "mono_dae": ("src.dae", "tgt"), "main": \ - ("src", "tgt")}, or {"mined": ("src.mined", "tgt")}', - default=None, - type=lambda uf: eval_str_dict(uf, type=str), - ) - parser.add_argument( - "--sampling-weights-from-file", - help='a file contain a python dictionary of how to sample data sets, \ - e.g. { "main:en_XX-es_XX": 0.2, "mined:en_XX-pt_XX": 0.5, \ - "mono_dae:es_XX-es_XX: 0.3, "main:en_xx-fr_XX": 0.8 }', - default=None, - type=str, - ) - parser.add_argument( - "--sampling-weights", - help='a dictionary of how to sample data sets, \ - e.g. { "main:en_XX-es_XX": 0.2, "mined:en_XX-pt_XX": 0.5, \ - "mono_dae:es_XX-es_XX: 0.3, "main:en_xx-fr_XX": 0.8 }', - default=None, - type=lambda uf: eval_str_dict(uf, type=str), - ) - parser.add_argument( - "--virtual-epoch-size", - default=None, - type=int, - help="virtual epoch size to speed up data loading", - ) - parser.add_argument( - "--virtual-data-size", - default=None, - type=int, - help="virtual data size of the whole joint dataset to speed" - "up data loading and have specific dynamic sampling strategy interval", - ) - - @classmethod - def load_langs(cls, args, **kwargs): - if args.lang_dict and args.langs: - raise ValueError("--langs and --lang-dict can not both be specified") - if args.lang_dict is None and args.langs is None: - logger.warning( - "External language dictionary is not provided; " - "use lang-pairs to infer the set of supported languages. " - "The language ordering is not stable which might cause " - "misalignment in pretraining and finetuning." 
- ) - # infer from lang_pairs as it is - langs = list( - {x for lang_pair in args.lang_pairs for x in lang_pair.split("-")} - ) - langs = sorted(langs) - logger.info(f"inferred language list: {langs}") - elif args.lang_dict: - with open( - PathManager.get_local_path(args.lang_dict), "r", encoding="utf-8" - ) as f: - langs = [lang.strip() for lang in f.readlines() if lang.strip()] - logger.info( - f"loaded language list from {args.lang_dict} as they are ordered in file" - ) - elif args.langs: - langs = args.langs - logger.info( - f"parsed the language list as they are ordered in the option: {langs}" - ) - return langs - - def has_sharded_data(self, split): - return self._has_sharded_data and split == getattr( - self.args, "train_subset", None - ) - - def _shared_collater(self): - return not (self.args.extra_data and "mono_dae" in self.args.extra_data) and ( - not self.args.lang_tok_replacing_bos_eos - ) - - def estimate_global_pass_epoch(self, epoch): - if self.args.virtual_epoch_size is None or self.args.virtual_data_size is None: - return None - # one epoch more for remaining data in each shard - virtual_epochs_per_shard = math.ceil( - self.args.virtual_data_size / self.args.virtual_epoch_size - ) - # note that fairseq epoch / shard_epoch starts from 1 - shard_epoch = (epoch - 1) // virtual_epochs_per_shard + 1 - return shard_epoch - - @classmethod - def prepare(cls, load_dictionary, args, **kargs): - args.left_pad_source = utils.eval_bool(args.left_pad_source) - args.left_pad_target = utils.eval_bool(args.left_pad_target) - - if not hasattr(args, "shuffle_instance"): - args.shuffle_instance = False - if args.langtoks is None: - args.langtoks = {} - if "main" not in args.langtoks: - src_langtok_spec = args.encoder_langtok if args.encoder_langtok else None - tgt_langtok_spec = "tgt" if args.decoder_langtok else None - args.langtoks["main"] = (src_langtok_spec, tgt_langtok_spec) - - def check_langs(langs, pairs): - messages = [] - for src, tgt in pairs: - if src not in langs or tgt not in langs: - messages.append( - f"language pair {src}-{tgt} contains languages " - "that are not in the language dictionary" - ) - if len(messages) > 0: - raise ValueError(" ".join(messages) + f"; langs: {langs}") - - if args.lang_pairs is None: - raise ValueError( - "--lang-pairs is required. List all the language pairs in the training objective." 
- ) - if isinstance(args.lang_pairs, str): - args.lang_pairs = args.lang_pairs.split(",") - if args.source_lang is not None or args.target_lang is not None: - training = False - else: - training = True - language_list = cls.load_langs(args, **kargs) - check_langs( - language_list, - ( - [p.split("-") for p in args.lang_pairs] - if training - else [(args.source_lang, args.target_lang)] - ), - ) - - def load_dictionary_and_postproc(path): - d = load_dictionary(path) - augment_dictionary( - dictionary=d, - language_list=language_list, - lang_tok_style=args.lang_tok_style, - langtoks_specs=args.langtoks_specs, - extra_data=args.extra_data, - ) - return d - - dicts = cls.load_all_dictionaries( - args, language_list, load_dictionary_and_postproc, training - ) - return language_list, dicts, training - - @classmethod - def load_all_dictionaries(cls, args, language_list, load_dictionary, training): - dicts = OrderedDict() - if args.source_dict is not None: - dicts[SRC_DICT_NAME] = load_dictionary(args.source_dict) - if args.target_dict is not None: - dicts[TGT_DICT_NAME] = load_dictionary(args.target_dict) - - if training: - extra_lang_pairs = ( - list( - {p for _, v in args.extra_lang_pairs.items() for p in v.split(",")} - ) - if args.extra_lang_pairs - else [] - ) - src_langs_to_load_dicts = sorted( - {p.split("-")[0] for p in (args.lang_pairs + extra_lang_pairs)} - ) - tgt_langs_to_load_dicts = sorted( - {p.split("-")[1] for p in (args.lang_pairs + extra_lang_pairs)} - ) - else: - src_langs_to_load_dicts = [args.source_lang] - tgt_langs_to_load_dicts = [args.target_lang] - - paths = utils.split_paths(args.data) - assert len(paths) > 0 - - def load_dicts(langs_to_load_dicts): - for lang in langs_to_load_dicts: - dicts[lang] = load_dictionary( - os.path.join(paths[0], "dict.{}.txt".format(lang)) - ) - if len(dicts) > 0: - dict0 = next(iter(dicts.values())) - assert dicts[lang].pad() == dict0.pad() - assert dicts[lang].eos() == dict0.eos() - assert dicts[lang].unk() == dict0.unk() - logger.info("[{}] dictionary: {} types".format(lang, len(dicts[lang]))) - - if args.fixed_dictionary is not None: - fixed_dict = load_dictionary(args.fixed_dictionary) - dicts = { - lang: fixed_dict - for lang in src_langs_to_load_dicts + tgt_langs_to_load_dicts - } - else: - if args.source_dict is None: - load_dicts(src_langs_to_load_dicts) - if args.target_dict is None: - load_dicts(tgt_langs_to_load_dicts) - return dicts - - def get_source_dictionary(self, lang): - if self.args.source_dict is not None: - return self.dicts[SRC_DICT_NAME] - else: - return self.dicts[lang] - - def get_target_dictionary(self, lang): - if self.args.target_dict is not None: - return self.dicts[TGT_DICT_NAME] - else: - return self.dicts[lang] - - @classmethod - def create_lang_dictionary(cls, langs): - unk = "<unk>" - # hack to remove symbols other than unk as they are not needed by lang dict - lang_dict = Dictionary(pad=unk, eos=unk, unk=unk, bos=unk) - for lang in langs: - lang_dict.add_symbol(lang) - return lang_dict - - @classmethod - def get_langtok_index(cls, lang_tok, dic): - idx = dic.index(lang_tok) - assert ( - idx != dic.unk_index - ), "cannot find language token {} in the dictionary".format(lang_tok) - return idx - - def get_encoder_langtok(self, src_lang, tgt_lang, spec=None): - if spec is None: - return None - if spec and spec.startswith("src"): - if src_lang is None: - return None - langtok = get_lang_tok( - lang=src_lang, lang_tok_style=self.args.lang_tok_style, spec=spec - ) - else: - if tgt_lang is None: - return None - 
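The spec handling in `get_encoder_langtok` here boils down to: a spec starting with `"src"` resolves the *source* language's token, anything else (e.g. `"tgt"`) resolves the *target* language's token. A hedged standalone sketch, using the `__lang__` formatting of the "multilingual" token style rather than the real dictionary lookup:

```python
def encoder_langtok(src_lang, tgt_lang, spec):
    # Illustrative only: returns the token string instead of a
    # dictionary index, and hard-codes the "multilingual" style.
    if spec is None:
        return None
    lang = src_lang if spec.startswith("src") else tgt_lang
    return f"__{lang}__"

print(encoder_langtok("en_XX", "de_DE", "src"))  # __en_XX__
print(encoder_langtok("en_XX", "de_DE", "tgt"))  # __de_DE__
```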
langtok = get_lang_tok( - lang=tgt_lang, lang_tok_style=self.args.lang_tok_style, spec=spec - ) - return self.get_langtok_index( - langtok, - self.get_source_dictionary(src_lang) - if src_lang - else self.get_target_dictionary(tgt_lang), - ) - - def get_decoder_langtok(self, tgt_lang, spec=None): - if spec is None: - return None - langtok = get_lang_tok( - lang=tgt_lang, lang_tok_style=self.args.lang_tok_style, spec=spec - ) - return self.get_langtok_index(langtok, self.get_target_dictionary(tgt_lang)) - - @classmethod - def load_data(cls, path, vdict, impl): - dataset = data_utils.load_indexed_dataset(path, vdict, impl) - return dataset - - @classmethod - def split_exists(cls, split, src, tgt, lang, data_path, dataset_impl): - filename = os.path.join(data_path, "{}.{}-{}.{}".format(split, src, tgt, lang)) - return indexed_dataset.dataset_exists(filename, impl=dataset_impl) - - def load_lang_dataset( - self, - data_path, - split, - src, - src_dict, - tgt, - tgt_dict, - combine, - dataset_impl, - upsample_primary, - max_source_positions, - prepend_bos=False, - load_alignments=False, - truncate_source=False, - ): - - src_datasets = [] - tgt_datasets = [] - - for k in itertools.count(): - split_k = split + (str(k) if k > 0 else "") - - # infer langcode - if self.split_exists(split_k, src, tgt, src, data_path, dataset_impl): - prefix = os.path.join(data_path, "{}.{}-{}.".format(split_k, src, tgt)) - elif self.split_exists(split_k, tgt, src, src, data_path, dataset_impl): - prefix = os.path.join(data_path, "{}.{}-{}.".format(split_k, tgt, src)) - else: - if k > 0: - break - else: - logger.error( - f"Dataset not found: {data_path}, {split_k}, {src}, {tgt}" - ) - raise FileNotFoundError( - "Dataset not found: {} ({})".format(split, data_path) - ) - - src_dataset = self.load_data(prefix + src, src_dict, dataset_impl) - if truncate_source: - src_dataset = AppendTokenDataset( - TruncateDataset( - StripTokenDataset(src_dataset, src_dict.eos()), - max_source_positions - 1, - ), - src_dict.eos(), - ) - src_datasets.append(src_dataset) - tgt_datasets.append(self.load_data(prefix + tgt, tgt_dict, dataset_impl)) - - logger.info( - "{} {} {}-{} {} examples".format( - data_path, split_k, src, tgt, len(src_datasets[-1]) - ) - ) - - if not combine: - break - - assert len(src_datasets) == len(tgt_datasets) - - if len(src_datasets) == 1: - src_dataset, tgt_dataset = src_datasets[0], tgt_datasets[0] - else: - sample_ratios = [1] * len(src_datasets) - sample_ratios[0] = upsample_primary - src_dataset = ConcatDataset(src_datasets, sample_ratios) - tgt_dataset = ConcatDataset(tgt_datasets, sample_ratios) - - if prepend_bos: - assert hasattr(src_dict, "bos_index") and hasattr(tgt_dict, "bos_index") - src_dataset = PrependTokenDataset(src_dataset, src_dict.bos()) - tgt_dataset = PrependTokenDataset(tgt_dataset, tgt_dict.bos()) - - align_dataset = None - if load_alignments: - align_path = os.path.join( - data_path, "{}.align.{}-{}".format(split, src, tgt) - ) - if indexed_dataset.dataset_exists(align_path, impl=dataset_impl): - align_dataset = data_utils.load_indexed_dataset( - align_path, None, dataset_impl - ) - - return src_dataset, tgt_dataset, align_dataset - - def load_langpair_dataset( - self, - data_path, - split, - src, - src_dict, - tgt, - tgt_dict, - combine, - dataset_impl, - upsample_primary, - left_pad_source, - left_pad_target, - max_source_positions, - max_target_positions, - prepend_bos=False, - load_alignments=False, - truncate_source=False, - src_dataset_transform_func=lambda dataset: dataset, - 
tgt_dataset_transform_func=lambda dataset: dataset, - src_lang_id=None, - tgt_lang_id=None, - langpairs_sharing_datasets=None, - ): - norm_direction = "-".join(sorted([src, tgt])) - if langpairs_sharing_datasets is not None: - src_dataset = langpairs_sharing_datasets.get( - (data_path, split, norm_direction, src), "NotInCache" - ) - tgt_dataset = langpairs_sharing_datasets.get( - (data_path, split, norm_direction, tgt), "NotInCache" - ) - align_dataset = langpairs_sharing_datasets.get( - (data_path, split, norm_direction, src, tgt), "NotInCache" - ) - - # a hack: any one is not in cache, we need to reload them - if ( - langpairs_sharing_datasets is None - or src_dataset == "NotInCache" - or tgt_dataset == "NotInCache" - or align_dataset == "NotInCache" - or split != getattr(self.args, "train_subset", None) - ): - # source and target datasets can be reused in reversed directions to save memory - # reversed directions of valid and test data will not share source and target datasets - src_dataset, tgt_dataset, align_dataset = self.load_lang_dataset( - data_path, - split, - src, - src_dict, - tgt, - tgt_dict, - combine, - dataset_impl, - upsample_primary, - max_source_positions=max_source_positions, - prepend_bos=prepend_bos, - load_alignments=load_alignments, - truncate_source=truncate_source, - ) - src_dataset = src_dataset_transform_func(src_dataset) - tgt_dataset = tgt_dataset_transform_func(tgt_dataset) - if langpairs_sharing_datasets is not None: - langpairs_sharing_datasets[ - (data_path, split, norm_direction, src) - ] = src_dataset - langpairs_sharing_datasets[ - (data_path, split, norm_direction, tgt) - ] = tgt_dataset - langpairs_sharing_datasets[ - (data_path, split, norm_direction, src, tgt) - ] = align_dataset - if align_dataset is None: - # no align data so flag the reverse direction as well in sharing - langpairs_sharing_datasets[ - (data_path, split, norm_direction, tgt, src) - ] = align_dataset - else: - logger.info( - f"Reusing source and target datasets of [{split}] {tgt}-{src} for reversed direction: " - f"[{split}] {src}-{tgt}: src length={len(src_dataset)}; tgt length={len(tgt_dataset)}" - ) - - return LanguagePairDataset( - src_dataset, - src_dataset.sizes, - src_dict, - tgt_dataset, - tgt_dataset.sizes if tgt_dataset is not None else None, - tgt_dict, - left_pad_source=left_pad_source, - left_pad_target=left_pad_target, - align_dataset=align_dataset, - src_lang_id=src_lang_id, - tgt_lang_id=tgt_lang_id, - ) - - def src_dataset_tranform_func(self, src_lang, tgt_lang, dataset, spec=None): - if self.args.lang_tok_replacing_bos_eos: - # it is handled by self.alter_dataset_langtok - # TODO: Unifiy with alter_dataset_langtok - return dataset - if spec is None: - return dataset - tok = self.get_encoder_langtok(src_lang, tgt_lang, spec) - if tok: - return PrependTokenDataset(dataset, tok) - return dataset - - def tgt_dataset_tranform_func(self, source_lang, target_lang, dataset, spec=None): - if dataset is None: - # note that target dataset can be None during inference time - return None - if self.args.lang_tok_replacing_bos_eos: - # TODO: Unifiy with alter_dataset_langtok - # It is handled by self.alter_dataset_langtok. - # The complication in self.alter_dataset_langtok - # makes a unified framework difficult. 
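The `alter_dataset_langtok` method below delegates the actual token swap to `TransformEosLangPairDataset`, whose implementation is not shown in this diff. The following is therefore only an approximate sketch of the intended effect, on plain lists of token ids rather than tensors: the trailing source eos is replaced by a language token, and the leading target token by the target language token.

```python
def replace_eos_bos(src, tgt, eos, new_src_eos=None, new_tgt_bos=None):
    # Approximate semantics; all token ids below are made up.
    if new_src_eos is not None and src and src[-1] == eos:
        src = src[:-1] + [new_src_eos]
    if new_tgt_bos is not None and tgt:
        tgt = [new_tgt_bos] + tgt[1:]
    return src, tgt

src, tgt = replace_eos_bos([5, 6, 2], [2, 7, 8], eos=2,
                           new_src_eos=250004, new_tgt_bos=250005)
print(src)  # [5, 6, 250004]
print(tgt)  # [250005, 7, 8]
```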
- return dataset - # if not self.args.decoder_langtok: - if not spec: - return dataset - tok = self.get_decoder_langtok(target_lang, spec) - if tok: - return PrependTokenDataset(dataset, tok) - return dataset - - def alter_dataset_langtok( - self, - lang_pair_dataset, - src_eos=None, - src_lang=None, - tgt_eos=None, - tgt_lang=None, - src_langtok_spec=None, - tgt_langtok_spec=None, - ): - if src_langtok_spec is None and tgt_langtok_spec is None: - return lang_pair_dataset - - new_src_eos = None - if ( - src_langtok_spec is not None - and src_eos is not None - and (src_lang is not None or tgt_lang is not None) - ): - new_src_eos = self.get_encoder_langtok(src_lang, tgt_lang, src_langtok_spec) - else: - src_eos = None - - new_tgt_bos = None - if tgt_langtok_spec and tgt_eos is not None and tgt_lang is not None: - new_tgt_bos = self.get_decoder_langtok(tgt_lang, tgt_langtok_spec) - else: - tgt_eos = None - - return TransformEosLangPairDataset( - lang_pair_dataset, - src_eos=src_eos, - new_src_eos=new_src_eos, - tgt_bos=tgt_eos, - new_tgt_bos=new_tgt_bos, - ) - - def load_a_dataset( - self, - split, - data_path, - src, - src_dict, - tgt, - tgt_dict, - combine, - prepend_bos=False, - langpairs_sharing_datasets=None, - data_category=None, - **extra_kwargs, - ): - dataset_impl = self.args.dataset_impl - upsample_primary = self.args.upsample_primary - left_pad_source = self.args.left_pad_source - left_pad_target = self.args.left_pad_target - max_source_positions = self.args.max_source_positions - max_target_positions = self.args.max_target_positions - load_alignments = self.args.load_alignments - truncate_source = self.args.truncate_source - src_dataset_transform_func = self.src_dataset_tranform_func - tgt_dataset_transform_func = self.tgt_dataset_tranform_func - enable_lang_ids = self.args.enable_lang_ids - lang_dictionary = self.lang_dict - src_langtok_spec, tgt_langtok_spec = extra_kwargs["langtok_spec"] - - src_langtok = self.get_encoder_langtok(src, tgt, src_langtok_spec) - tgt_langtok = self.get_decoder_langtok(tgt, tgt_langtok_spec) - logger.info( - f"{data_category}:{src}-{tgt} src_langtok: {src_langtok}; tgt_langtok: {tgt_langtok}" - ) - - langpair_ds = self.load_langpair_dataset( - data_path, - split, - src, - src_dict, - tgt, - tgt_dict, - combine, - dataset_impl, - upsample_primary, - left_pad_source, - left_pad_target, - max_source_positions, - max_target_positions, - prepend_bos, - load_alignments, - truncate_source, - src_dataset_transform_func=lambda dataset: src_dataset_transform_func( - src, tgt, dataset, src_langtok_spec - ), - tgt_dataset_transform_func=lambda dataset: tgt_dataset_transform_func( - src, tgt, dataset, tgt_langtok_spec - ), - src_lang_id=_lang_id(lang_dictionary, src) - if enable_lang_ids and lang_dictionary is not None - else None, - tgt_lang_id=_lang_id(lang_dictionary, tgt) - if enable_lang_ids and lang_dictionary is not None - else None, - langpairs_sharing_datasets=langpairs_sharing_datasets, - ) - # TODO: handle modified lang toks for mined data and dae data - if self.args.lang_tok_replacing_bos_eos: - ds = self.alter_dataset_langtok( - langpair_ds, - src_eos=self.get_source_dictionary(src).eos() - if src - else self.get_target_dictionary(tgt).eos(), - src_lang=src, - tgt_eos=self.get_target_dictionary(tgt).eos(), - tgt_lang=tgt, - src_langtok_spec=src_langtok_spec, - tgt_langtok_spec=tgt_langtok_spec, - ) - else: - ds = langpair_ds - return ds - - def load_split_langpair_datasets(self, split, data_param_list): - datasets = [] - langpairs_sharing_datasets = 
( - {} if self.args.enable_reservsed_directions_shared_datasets else None - ) - for param in data_param_list: - ds = self.load_a_dataset( - split=split, - langpairs_sharing_datasets=langpairs_sharing_datasets, - **param, - ) - datasets.append(ds) - return datasets - - def get_data_paths_and_lang_pairs(self, split): - datapaths = {"main": self.args.data} - lang_pairs = {"main": self.lang_pairs} - if split == getattr(self.args, "train_subset", None): - # only training data can have extra data and extra language pairs - if self.args.extra_data: - extra_datapaths = self.args.extra_data - datapaths.update(extra_datapaths) - if self.args.extra_lang_pairs: - extra_lang_pairs = { - k: v.split(",") for k, v in self.args.extra_lang_pairs.items() - } - lang_pairs.update(extra_lang_pairs) - return datapaths, lang_pairs - - @classmethod - def get_dataset_key(cls, data_category, src, tgt): - return f"{data_category}:{src}-{tgt}" - - @classmethod - def _get_shard_num_dict(cls, split, paths): - shards = defaultdict(int) - for path in paths: - files = PathManager.ls(path) - directions = set() - for f in files: - if f.startswith(split) and f.endswith(".idx"): - # idx files of the form "{split}.{src}-{tgt}.{lang}.idx" - direction = f.split(".")[-3] - directions.add(direction) - for direction in directions: - shards[direction] += 1 - return shards - - def get_split_num_data_shards(self, split): - if split in self._num_shards_dict: - return self._num_shards_dict[split] - num_shards_dict = {} - data_paths, lang_pairs = self.get_data_paths_and_lang_pairs(split) - - for data_category, paths in data_paths.items(): - if data_category not in lang_pairs: - continue - paths = utils.split_paths(paths) - shards_dict = self._get_shard_num_dict(split, paths) - lang_dirs = [ - lang_pair.split("-") for lang_pair in lang_pairs[data_category] - ] - lang_dirs = [x if len(x) > 1 else (x[0], x[0]) for x in lang_dirs] - for src, tgt in lang_dirs: - key = self.get_dataset_key(data_category, src, tgt) - if "mono_" in data_category: - # monolingual data requires tgt only - assert src is None or src == tgt, ( - f"error: src={src}, " - "tgt={tgt} for data_category={data_category}" - ) - num_shards_dict[key] = shards_dict[tgt] - else: - if f"{src}-{tgt}" in shards_dict: - num_shards_dict[key] = shards_dict[f"{src}-{tgt}"] - elif f"{tgt}-{src}" in shards_dict: - # follow the fairseq tradition to use reversed direction data if it is not available - num_shards_dict[key] = shards_dict[f"{tgt}-{src}"] - self._num_shards_dict[split] = num_shards_dict - logger.info(f"[{split}] num of shards: {num_shards_dict}") - return num_shards_dict - - @classmethod - def get_shard_id(cls, num_shards, epoch, shard_epoch=None): - shard = epoch if shard_epoch is None else shard_epoch - shard = (shard - 1) % num_shards - return shard - - def get_split_data_path(self, paths, epoch, shard_epoch, num_shards): - path = paths[self.get_shard_id(num_shards, epoch, shard_epoch)] - return path - - def get_split_data_param_list(self, split, epoch, shard_epoch=None): - # TODO: to extend with extra datasets and keys and loop over different shard data paths - param_list = [] - data_paths, lang_pairs = self.get_data_paths_and_lang_pairs(split) - logger.info(f"langtoks settings: {self.args.langtoks}") - split_num_shards_dict = self.get_split_num_data_shards(split) - for data_category, paths in data_paths.items(): - if data_category not in lang_pairs: - continue - paths = utils.split_paths(paths) - assert len(paths) > 0 - if len(paths) > 1: - self._has_sharded_data = True - 
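The shard rotation implemented by `get_shard_id` above reduces to modular arithmetic: since fairseq epochs start at 1, epoch `e` reads shard `(e - 1) % num_shards`, cycling through the shards round-robin. A self-contained restatement:

```python
def get_shard_id(num_shards, epoch, shard_epoch=None):
    shard = epoch if shard_epoch is None else shard_epoch
    return (shard - 1) % num_shards

# With 3 shards, epochs 1..7 visit shards 0, 1, 2, 0, 1, 2, 0.
print([get_shard_id(3, e) for e in range(1, 8)])
```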
if split != getattr(self.args, "train_subset", None): - # if not training data set, use the first shard for valid and test - paths = paths[:1] - - if data_category in self.args.langtoks: - lang_tok_spec = self.args.langtoks[data_category] - else: - # default to None - lang_tok_spec = (None, None) - - # infer langcode - lang_dirs = [ - lang_pair.split("-") for lang_pair in lang_pairs[data_category] - ] - lang_dirs = [x if len(x) > 1 else (x[0], x[0]) for x in lang_dirs] - for src, tgt in lang_dirs: - assert src is not None or data_category == "mono_dae", ( - f"error: src={src}, " "tgt={tgt} for data_category={data_category}" - ) - # logger.info(f"preparing param for {data_category}: {src} - {tgt}") - key = self.get_dataset_key(data_category, src, tgt) - data_path = self.get_split_data_path( - paths, epoch, shard_epoch, split_num_shards_dict[key] - ) - param_list.append( - { - "key": key, - "data_path": data_path, - "split": split, - "src": src, - "src_dict": self.get_source_dictionary(src) - if src and data_category != "mono_dae" - else None, - "tgt": tgt, - "tgt_dict": self.get_target_dictionary(tgt), - "data_category": data_category, - "langtok_spec": lang_tok_spec, - } - ) - return param_list - - def get_train_dataset_sizes( - self, data_param_list, datasets, epoch, shard_epoch=None - ): - num_shards = [ - self.get_split_num_data_shards(param["split"])[param["key"]] - for param in data_param_list - ] - data_sizes = [] - for (key, d), num_shard in zip(datasets, num_shards): - my_data_sizes = self._training_data_sizes[key] - shard_ind = self.get_shard_id(num_shard, epoch, shard_epoch) - if shard_ind not in my_data_sizes: - my_data_sizes[shard_ind] = len(d) - known_size = max(my_data_sizes.values()) - data_sizes.append( - # If we don't know the data size of the shard yet, - # use the the max known data size to approximate. - # Note that we preprocess shards by a designated shard size - # and put any remaining data at the end into the last shard so - # the max shard size approximation is almost correct before loading - # the last shard; after loading the last shard, it will have the - # exact data sizes of the whole data size. - (key, sum(my_data_sizes.get(i, known_size) for i in range(num_shard))) - ) - logger.info( - f"estimated total data sizes of all shards used in sampling ratios: {data_sizes}. 
" - "Note that if the data a shard has not been loaded yet, use the max known data size to approximate" - ) - return [s for _, s in data_sizes] - - def get_train_sampling_ratios( - self, data_param_list, datasets, epoch=1, shard_epoch=None - ): - data_sizes = self.get_train_dataset_sizes( - data_param_list, datasets, epoch, shard_epoch - ) - sampling_func = self.sampling_method.sampling_method_selector() - sample_ratios = sampling_func(data_sizes) if sampling_func is not None else None - return sample_ratios - - def get_sampling_ratios(self, data_param_list, datasets, epoch, shard_epoch=None): - if self.args.sampling_weights_from_file: - weights = load_sampling_weights(self.args.sampling_weights_from_file) - sample_ratios = [weights[k] for k, _ in datasets] - logger.info( - "| ignoring --sampling-weights when loadding sampling weights " - f"from file {self.args.sampling_weights_from_file}" - ) - elif self.args.sampling_weights: - sample_ratios = [self.args.sampling_weights[k] for k, _ in datasets] - else: - sample_ratios = self.get_train_sampling_ratios( - data_param_list, datasets, epoch, shard_epoch - ) - - if sample_ratios is not None: - logger.info( - "| Upsample ratios: {}".format( - list(zip(map(lambda x: x["key"], data_param_list), sample_ratios)) - ) - ) - assert len(sample_ratios) == len(datasets) - return sample_ratios - - def load_split_datasets( - self, split, training, epoch=1, combine=False, shard_epoch=None, **kwargs - ): - data_param_list = self.get_split_data_param_list( - split, epoch, shard_epoch=shard_epoch - ) - langpairs_sharing_datasets = ( - {} if self.args.enable_reservsed_directions_shared_datasets else None - ) - datasets = [ - ( - param["key"], - self.load_a_dataset( - combine=combine, - langpairs_sharing_datasets=langpairs_sharing_datasets, - **param, - ), - ) - for param in data_param_list - ] - return datasets, data_param_list - - def load_into_concat_dataset(self, split, datasets, data_param_list): - if self.args.lang_tok_replacing_bos_eos: - # TODO: to investigate why TransformEosLangPairDataset doesn't work with ConcatDataset - return SampledMultiDataset( - OrderedDict(datasets), - sampling_ratios=None, - eval_key=None, - collate_format=CollateFormat.single, - virtual_size=None, - split=split, - ) - return ConcatDataset([d for _, d in datasets]) - - def load_sampled_multi_epoch_dataset( - self, split, training, epoch=0, combine=False, shard_epoch=None, **kwargs - ): - datasets, data_param_list = self.load_split_datasets( - split, training, epoch, combine, shard_epoch=shard_epoch, **kwargs - ) - if training and split == getattr(self.args, "train_subset", None): - sample_ratios = self.get_sampling_ratios(data_param_list, datasets, epoch) - return SampledMultiEpochDataset( - OrderedDict(datasets), - epoch=epoch, - shard_epoch=shard_epoch, - # valid and test datasets will be degenerate to concating datasets: - sampling_ratios=sample_ratios, - eval_key=None, - collate_format=CollateFormat.single, - virtual_size=self.args.virtual_data_size, - split=split, - virtual_epoch_size=self.args.virtual_epoch_size, - # if not using lang_tok altering, simplified to use the same collater - shared_collater=self._shared_collater(), - ) - else: - return self.load_into_concat_dataset(split, datasets, data_param_list) - - def load_sampled_multi_dataset( - self, split, training, epoch=0, combine=False, shard_epoch=None, **kwargs - ): - datasets, data_param_list = self.load_split_datasets( - split, training, epoch, combine, shard_epoch=shard_epoch, **kwargs - ) - if training and 
split == getattr(self.args, "train_subset", None): - sample_ratios = self.get_sampling_ratios(data_param_list, datasets, epoch) - return SampledMultiDataset( - OrderedDict(datasets), - epoch=epoch, - # valid and test datasets will be degerate to concating datasets: - sampling_ratios=sample_ratios, - eval_key=None, - collate_format=CollateFormat.single, - virtual_size=self.args.virtual_data_size, - split=split, - # if not using lang_tok altering, simplified to use the same collater - shared_collater=self._shared_collater(), - ) - else: - return self.load_into_concat_dataset(split, datasets, data_param_list) - - def load_dataset( - self, split, training, epoch=0, combine=False, shard_epoch=None, **kwargs - ): - if self.args.virtual_epoch_size is None: - return self.load_sampled_multi_dataset( - split, training, epoch, combine, shard_epoch, **kwargs - ) - else: - return self.load_sampled_multi_epoch_dataset( - split, training, epoch, combine, shard_epoch, **kwargs - ) diff --git a/kosmos-g/fairseq/fairseq/data/multilingual/multilingual_utils.py b/kosmos-g/fairseq/fairseq/data/multilingual/multilingual_utils.py deleted file mode 100644 index b4e0f9828..000000000 --- a/kosmos-g/fairseq/fairseq/data/multilingual/multilingual_utils.py +++ /dev/null @@ -1,63 +0,0 @@ -from enum import Enum -from typing import Dict, List, Optional, Sequence - -import torch -from fairseq.data import Dictionary - - -class EncoderLangtok(Enum): - """ - Prepend to the beginning of source sentence either the - source or target language token. (src/tgt). - """ - - src = "src" - tgt = "tgt" - - -class LangTokSpec(Enum): - main = "main" - mono_dae = "mono_dae" - - -class LangTokStyle(Enum): - multilingual = "multilingual" - mbart = "mbart" - - -@torch.jit.export -def get_lang_tok( - lang: str, lang_tok_style: str, spec: str = LangTokSpec.main.value -) -> str: - # TOKEN_STYLES can't be defined outside this fn since it needs to be - # TorchScriptable. - TOKEN_STYLES: Dict[str, str] = { - LangTokStyle.mbart.value: "[{}]", - LangTokStyle.multilingual.value: "__{}__", - } - - if spec.endswith("dae"): - lang = f"{lang}_dae" - elif spec.endswith("mined"): - lang = f"{lang}_mined" - style = TOKEN_STYLES[lang_tok_style] - return style.format(lang) - - -def augment_dictionary( - dictionary: Dictionary, - language_list: List[str], - lang_tok_style: str, - langtoks_specs: Sequence[str] = (LangTokSpec.main.value,), - extra_data: Optional[Dict[str, str]] = None, -) -> None: - for spec in langtoks_specs: - for language in language_list: - dictionary.add_symbol( - get_lang_tok(lang=language, lang_tok_style=lang_tok_style, spec=spec) - ) - - if lang_tok_style == LangTokStyle.mbart.value or ( - extra_data is not None and LangTokSpec.mono_dae.value in extra_data - ): - dictionary.add_symbol("<mask>") diff --git a/kosmos-g/fairseq/fairseq/data/multilingual/sampled_multi_dataset.py b/kosmos-g/fairseq/fairseq/data/multilingual/sampled_multi_dataset.py deleted file mode 100644 index b0a617424..000000000 --- a/kosmos-g/fairseq/fairseq/data/multilingual/sampled_multi_dataset.py +++ /dev/null @@ -1,467 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
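The token formatting in `get_lang_tok` from `multilingual_utils.py` above can be restated in a few lines of plain Python: the "multilingual" style wraps a language as `__lang__`, the "mbart" style as `[lang]`, and `dae`/`mined` specs suffix the language name first.

```python
TOKEN_STYLES = {"mbart": "[{}]", "multilingual": "__{}__"}

def get_lang_tok(lang, lang_tok_style, spec="main"):
    if spec.endswith("dae"):
        lang = f"{lang}_dae"
    elif spec.endswith("mined"):
        lang = f"{lang}_mined"
    return TOKEN_STYLES[lang_tok_style].format(lang)

print(get_lang_tok("en_XX", "multilingual"))             # __en_XX__
print(get_lang_tok("en_XX", "mbart"))                    # [en_XX]
print(get_lang_tok("es_XX", "multilingual", "mono_dae")) # __es_XX_dae__
```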
- -import datetime -import hashlib -import logging -import time -from bisect import bisect_right -from collections import OrderedDict, defaultdict -from enum import Enum -from typing import List - -import numpy as np -import torch -from fairseq.data import FairseqDataset, data_utils -from fairseq.distributed import utils as distributed_utils - - -def get_time_gap(s, e): - return ( - datetime.datetime.fromtimestamp(e) - datetime.datetime.fromtimestamp(s) - ).__str__() - - -logger = logging.getLogger(__name__) - - -def default_virtual_size_func(datasets, ratios, max_scale_up=1.5): - sizes = [len(d) for d in datasets] - if ratios is None: - return sum(sizes) - largest_idx = np.argmax(sizes) - largest_r = ratios[largest_idx] - largest_s = sizes[largest_idx] - # set virtual sizes relative to the largest dataset - virtual_sizes = [(r / largest_r) * largest_s for r in ratios] - vsize = sum(virtual_sizes) - max_size = sum(sizes) * max_scale_up - return int(vsize if vsize < max_size else max_size) - - -class CollateFormat(Enum): - single = 1 - ordered_dict = 2 - - -class SampledMultiDataset(FairseqDataset): - """Samples from multiple sub-datasets according to given sampling ratios. - Args: - datasets ( - List[~torch.utils.data.Dataset] - or OrderedDict[str, ~torch.utils.data.Dataset] - ): datasets - sampling_ratios (List[float]): list of probability of each dataset to be sampled - (default: None, which corresponds to concatenating all dataset together). - seed (int): RNG seed to use (default: 2). - epoch (int): starting epoch number (default: 1). - eval_key (str, optional): a key used at evaluation time that causes - this instance to pass-through batches from *datasets[eval_key]*. - collate_format (CollateFormat): collater output format, either CollateFormat.ordered_dict or - CollateFormat.single (default: CollateFormat.single) where CollateFormat.single configures - the collater to output batches of data mixed from all sub-datasets, - and CollateFormat.ordered_dict configures the collater to output a dictionary of batches indexed by keys - of sub-datasets. - Note that not all sub-datasets will present in a single batch in both formats. - virtual_size (int, or callable): the expected virtual size of the dataset (default: default_virtual_size_func). - split (str): the split of the data, e.g. 'train', 'valid' or 'test'. - shared_collater (bool): whether or not to all sub-datasets have the same collater. - shuffle (bool): whether or not to shuffle data (default: True). 
- """ - - def __init__( - self, - datasets, - sampling_ratios=None, - seed=2, - epoch=1, - eval_key=None, - collate_format=CollateFormat.single, - virtual_size=default_virtual_size_func, - split="", - shared_collater=False, - shuffle=True, - ): - super().__init__() - self.shared_collater = shared_collater - self.shuffle = shuffle - - if isinstance(datasets, OrderedDict): - self.keys = list(datasets.keys()) - datasets = list(datasets.values()) - elif isinstance(datasets, List): - self.keys = list(range(len(datasets))) - else: - raise AssertionError() - self.datasets = datasets - self.split = split - - self.eval_key = eval_key - if self.eval_key is not None: - self.collate_format = CollateFormat.single - else: - self.collate_format = collate_format - - self.seed = seed - self._cur_epoch = None - - self.cumulated_sizes = None - # self.datasets[k][self._cur_indices[i]] is the data item i in this sampled dataset - # namely, data item i is sampled from the kth sub-dataset self.datasets[k] - # where self.cumulated_sizes[k-1] <= i < self.cumulated_sizes[k] - self._cur_indices = None - - self._sizes = None - self.virtual_size_per_dataset = None - # caching properties - self._reset_cached_properties() - self.setup_sampling(sampling_ratios, virtual_size) - self.set_epoch(epoch) - - def _clean_if_not_none(self, var_list): - for v in var_list: - if v is not None: - del v - - def _reset_cached_properties(self): - self._clean_if_not_none([self._sizes, self._cur_indices]) - self._sizes = None - self._cur_indices = None - - def setup_sampling(self, sample_ratios, virtual_size): - sizes = [len(d) for d in self.datasets] - if sample_ratios is None: - # default back to concating datasets - self.sample_ratios = None - self.virtual_size = sum(sizes) - else: - if not isinstance(sample_ratios, np.ndarray): - sample_ratios = np.array(sample_ratios) - self.sample_ratios = sample_ratios - virtual_size = ( - default_virtual_size_func if virtual_size is None else virtual_size - ) - self.virtual_size = ( - virtual_size(self.datasets, self.sample_ratios) - if callable(virtual_size) - else virtual_size - ) - - def adjust_sampling(self, epoch, sampling_ratios, virtual_size): - if sampling_ratios is not None: - sampling_ratios = self._sync_sample_ratios(sampling_ratios) - self.setup_sampling(sampling_ratios, virtual_size) - - def _sync_sample_ratios(self, ratios): - # in case the ratios are not precisely the same across processes - # also to ensure every procresses update the ratios in the same pace - ratios = torch.DoubleTensor(ratios) - if torch.distributed.is_initialized(): - if torch.cuda.is_available(): - distributed_utils.all_reduce( - ratios.cuda(), group=distributed_utils.get_data_parallel_group() - ) - else: - distributed_utils.all_reduce( - ratios, group=distributed_utils.get_data_parallel_group() - ) - ret = ratios.cpu() - ret = ret.numpy() - return ret - - def random_choice_in_dataset(self, rng, dataset, choice_size): - if hasattr(dataset, "random_choice_in_dataset"): - return dataset.random_choice_in_dataset(rng, choice_size) - dataset_size = len(dataset) - return rng.choice( - dataset_size, choice_size, replace=(choice_size > dataset_size) - ) - - def get_virtual_indices(self, rng, datasets, sample_ratios, virtual_size): - def get_counts(sample_ratios): - counts = np.array([virtual_size * r for r in sample_ratios], dtype=np.int64) - diff = virtual_size - counts.sum() - assert diff >= 0 - # due to round-offs, the size might not match the desired sizes - if diff > 0: - dataset_indices = rng.choice( - 
len(sample_ratios), size=diff, p=sample_ratios - ) - for i in dataset_indices: - counts[i] += 1 - return counts - - def get_in_dataset_indices(datasets, sizes, sample_ratios): - counts = get_counts(sample_ratios) - # uniformally sample desired counts for each dataset - # if the desired counts are large, sample with replacement: - indices = [ - self.random_choice_in_dataset(rng, d, c) - for c, d in zip(counts, datasets) - ] - return indices - - sizes = [len(d) for d in datasets] - if sample_ratios is None: - # default back to concating datasets - in_dataset_indices = [list(range(s)) for s in sizes] - virtual_sizes_per_dataset = sizes - else: - ratios = sample_ratios / sample_ratios.sum() - in_dataset_indices = get_in_dataset_indices(datasets, sizes, ratios) - virtual_sizes_per_dataset = [len(d) for d in in_dataset_indices] - virtual_sizes_per_dataset = np.array(virtual_sizes_per_dataset, np.int64) - cumulative_sizes = np.cumsum(virtual_sizes_per_dataset) - assert sum(virtual_sizes_per_dataset) == virtual_size - assert cumulative_sizes[-1] == virtual_size - if virtual_size < sum(sizes): - logger.warning( - f"virtual data size ({virtual_size}) is less than real data size ({sum(sizes)})." - " If virtual size << real data size, there could be data coverage issue." - ) - in_dataset_indices = np.hstack(in_dataset_indices) - return in_dataset_indices, cumulative_sizes, virtual_sizes_per_dataset - - def _get_dataset_and_index(self, index): - i = bisect_right(self.cumulated_sizes, index) - return i, self._cur_indices[index] - - def __getitem__(self, index): - # self.__getitem__(index) returns self.datasets[k][self._cur_indices[index]] - # where k satisfies self.cumulated_sizes[k - 1] <= k < self.cumulated_sizes[k] - ds_idx, ds_sample_idx = self._get_dataset_and_index(index) - ret = (ds_idx, self.datasets[ds_idx][ds_sample_idx]) - return ret - - def num_tokens(self, index): - return self.sizes[index].max() - - def num_tokens_vec(self, indices): - sizes_vec = self.sizes[np.array(indices)] - # max across all dimensions but first one - return np.amax(sizes_vec, axis=tuple(range(1, len(sizes_vec.shape)))) - - def size(self, index): - return self.sizes[index] - - def __len__(self): - return self.virtual_size - - def collater(self, samples, **extra_args): - """Merge a list of samples to form a mini-batch.""" - if len(samples) == 0: - return None - if self.collate_format == "ordered_dict": - collect_samples = [[] for _ in range(len(self.datasets))] - for (i, sample) in samples: - collect_samples[i].append(sample) - batch = OrderedDict( - [ - (self.keys[i], dataset.collater(collect_samples[i])) - for i, (key, dataset) in enumerate(zip(self.keys, self.datasets)) - if len(collect_samples[i]) > 0 - ] - ) - elif self.shared_collater: - batch = self.datasets[0].collater([s for _, s in samples]) - else: - samples_dict = defaultdict(list) - pad_to_length = ( - defaultdict(int) - if "pad_to_length" not in extra_args - else extra_args["pad_to_length"] - ) - for ds_idx, s in samples: - pad_to_length["source"] = max( - pad_to_length["source"], s["source"].size(0) - ) - if s["target"] is not None: - pad_to_length["target"] = max( - pad_to_length["target"], s["target"].size(0) - ) - samples_dict[ds_idx].append(s) - batches = [ - self.datasets[i].collater(samples_dict[i], pad_to_length=pad_to_length) - for i in range(len(self.datasets)) - if len(samples_dict[i]) > 0 - ] - - def straight_data(tensors): - batch = torch.cat(tensors, dim=0) - return batch - - src_lengths = straight_data( - [b["net_input"]["src_lengths"] for b 
-    def collater(self, samples, **extra_args):
-        """Merge a list of samples to form a mini-batch."""
-        if len(samples) == 0:
-            return None
-        if self.collate_format == "ordered_dict":
-            collect_samples = [[] for _ in range(len(self.datasets))]
-            for (i, sample) in samples:
-                collect_samples[i].append(sample)
-            batch = OrderedDict(
-                [
-                    (self.keys[i], dataset.collater(collect_samples[i]))
-                    for i, (key, dataset) in enumerate(zip(self.keys, self.datasets))
-                    if len(collect_samples[i]) > 0
-                ]
-            )
-        elif self.shared_collater:
-            batch = self.datasets[0].collater([s for _, s in samples])
-        else:
-            samples_dict = defaultdict(list)
-            pad_to_length = (
-                defaultdict(int)
-                if "pad_to_length" not in extra_args
-                else extra_args["pad_to_length"]
-            )
-            for ds_idx, s in samples:
-                pad_to_length["source"] = max(
-                    pad_to_length["source"], s["source"].size(0)
-                )
-                if s["target"] is not None:
-                    pad_to_length["target"] = max(
-                        pad_to_length["target"], s["target"].size(0)
-                    )
-                samples_dict[ds_idx].append(s)
-            batches = [
-                self.datasets[i].collater(samples_dict[i], pad_to_length=pad_to_length)
-                for i in range(len(self.datasets))
-                if len(samples_dict[i]) > 0
-            ]
-
-            def straight_data(tensors):
-                batch = torch.cat(tensors, dim=0)
-                return batch
-
-            src_lengths = straight_data(
-                [b["net_input"]["src_lengths"] for b in batches]
-            )
-            src_lengths, sort_order = src_lengths.sort(descending=True)
-
-            def straight_order(tensors):
-                batch = straight_data(tensors)
-                return batch.index_select(0, sort_order)
-
-            batch = {
-                "id": straight_order([b["id"] for b in batches]),
-                "nsentences": sum(b["nsentences"] for b in batches),
-                "ntokens": sum(b["ntokens"] for b in batches),
-                "net_input": {
-                    "src_tokens": straight_order(
-                        [b["net_input"]["src_tokens"] for b in batches]
-                    ),
-                    "src_lengths": src_lengths,
-                },
-                "target": straight_order([b["target"] for b in batches])
-                if batches[0]["target"] is not None
-                else None,
-            }
-            if "prev_output_tokens" in batches[0]["net_input"]:
-                batch["net_input"]["prev_output_tokens"] = straight_order(
-                    [b["net_input"]["prev_output_tokens"] for b in batches]
-                )
-            if "src_lang_id" in batches[0]["net_input"]:
-                batch["net_input"]["src_lang_id"] = straight_order(
-                    [b["net_input"]["src_lang_id"] for b in batches]
-                )
-            if "tgt_lang_id" in batches[0]:
-                batch["tgt_lang_id"] = straight_order(
-                    [b["tgt_lang_id"] for b in batches]
-                )
-        return batch
-
-    @property
-    def sizes(self):
-        if self._sizes is not None:
-            return self._sizes
-        start_time = time.time()
-        in_sub_dataset_indices = [
-            self._cur_indices[
-                0 if i == 0 else self.cumulated_sizes[i - 1] : self.cumulated_sizes[i]
-            ]
-            for i in range(len(self.datasets))
-        ]
-        sub_dataset_sizes = [
-            d.sizes[indices]
-            for d, indices in zip(self.datasets, in_sub_dataset_indices)
-        ]
-        self._sizes = np.vstack(sub_dataset_sizes)
-        logger.info(f"sizes() calling time: {get_time_gap(start_time, time.time())}")
-        return self._sizes
-
-    def ordered_indices(self):
-        if self.shuffle:
-            indices = np.random.permutation(len(self))
-        else:
-            indices = np.arange(len(self))
-
-        sizes = self.sizes
-        tgt_sizes = sizes[:, 1] if len(sizes.shape) > 0 and sizes.shape[1] > 1 else None
-        src_sizes = (
-            sizes[:, 0] if len(sizes.shape) > 0 and sizes.shape[1] > 1 else sizes
-        )
-
-        # sort by target length, then source length
-        if tgt_sizes is not None:
-            indices = indices[np.argsort(tgt_sizes[indices], kind="mergesort")]
-        sort_indices = indices[np.argsort(src_sizes[indices], kind="mergesort")]
-        return sort_indices
-
-    def prefetch(self, indices):
-        prefetch_indices = [[] for _ in range(len(self.datasets))]
-        for i in indices:
-            ds_idx, ds_sample_idx = self._get_dataset_and_index(i)
-            prefetch_indices[ds_idx].append(ds_sample_idx)
-        for i in range(len(prefetch_indices)):
-            self.datasets[i].prefetch(prefetch_indices[i])
-
-    @property
-    def can_reuse_epoch_itr_across_epochs(self):
-        return False
-
-    def set_epoch(self, epoch):
-        super().set_epoch(epoch)
-        if epoch == self._cur_epoch:
-            # re-enter so return
-            return
-        for d in self.datasets:
-            if hasattr(d, "set_epoch"):
-                d.set_epoch(epoch)
-        self._cur_epoch = epoch
-        self._establish_virtual_datasets()
-
-    def _establish_virtual_datasets(self):
-        if self.sample_ratios is None and self._cur_indices is not None:
-            # not a sampling dataset, no need to resample if indices are already established
-            return
-        self._reset_cached_properties()
-
-        start_time = time.time()
-        # Generate a weighted sample of indices as a function of the
-        # random seed and the current epoch.
-        rng = np.random.RandomState(
-            [
-                int(
-                    hashlib.sha1(
-                        str(self.__class__.__name__).encode("utf-8")
-                    ).hexdigest(),
-                    16,
-                )
-                % (2 ** 32),
-                self.seed % (2 ** 32),  # global seed
-                self._cur_epoch,  # epoch index,
-            ]
-        )
-        self._clean_if_not_none(
-            [self.cumulated_sizes, self.virtual_size_per_dataset, self._sizes]
-        )
-        self._sizes = None
-
-        indices, cumulated_sizes, virtual_size_per_dataset = self.get_virtual_indices(
-            rng, self.datasets, self.sample_ratios, self.virtual_size
-        )
-        self._cur_indices = indices
-        self.cumulated_sizes = cumulated_sizes
-        self.virtual_size_per_dataset = virtual_size_per_dataset
-
-        raw_sizes = [len(d) for d in self.datasets]
-        sampled_sizes = self.virtual_size_per_dataset
-        logger.info(
-            f"[{self.split}] Raw sizes: {str(dict(zip(self.keys, raw_sizes)))}; "
-            f"raw total size: {sum(raw_sizes)}"
-        )
-        logger.info(
-            f"[{self.split}] Resampled sizes: {str(dict(zip(self.keys, sampled_sizes)))}; "
-            f"resampled total size: {sum(sampled_sizes)}"
-        )
-        if self.sample_ratios is not None:
-            logger.info(
-                f"[{self.split}] Upsampling ratios: {str(dict(zip(self.keys, self.sample_ratios)))}"
-            )
-        else:
-            logger.info(f"[{self.split}] A concat dataset")
-        logger.info(
-            f"[{self.split}] virtual dataset established time: {get_time_gap(start_time, time.time())}"
-        )
-
-    def filter_indices_by_size(self, indices, max_sizes):
-        """Filter a list of sample indices. Remove those that are longer
-        than specified in max_sizes.
-
-        Args:
-            indices (np.array): original array of sample indices
-            max_sizes (int or list[int] or tuple[int]): max sample size,
-                can be defined separately for src and tgt (then list or tuple)
-
-        Returns:
-            np.array: filtered sample array
-            list: list of removed indices
-        """
-        sizes = self.sizes
-        tgt_sizes = sizes[:, 1] if len(sizes.shape) > 0 and sizes.shape[1] > 1 else None
-        src_sizes = (
-            sizes[:, 0] if len(sizes.shape) > 0 and sizes.shape[1] > 1 else sizes
-        )
-
-        return data_utils.filter_paired_dataset_indices_by_size(
-            src_sizes, tgt_sizes, indices, max_sizes
-        )
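The `get_counts` helper above turns sampling ratios into concrete per-dataset counts: each dataset receives a deterministic `virtual_size * ratio` floor, and the rounding remainder is spread across datasets by categorical draws. A minimal standalone sketch of that scheme (NumPy only; the function name is illustrative, not part of fairseq):

```python
import numpy as np

def counts_from_ratios(rng, ratios, virtual_size):
    # Deterministic floor: each dataset gets int(virtual_size * ratio) samples.
    counts = np.array([int(virtual_size * r) for r in ratios], dtype=np.int64)
    # Distribute the rounding remainder with one categorical draw per leftover slot.
    diff = virtual_size - counts.sum()
    for i in rng.choice(len(ratios), size=diff, p=ratios):
        counts[i] += 1
    return counts

rng = np.random.RandomState(0)
print(counts_from_ratios(rng, np.array([0.5, 0.3, 0.2]), 10))  # e.g. [5 3 2]
```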
diff --git a/kosmos-g/fairseq/fairseq/data/multilingual/sampled_multi_epoch_dataset.py b/kosmos-g/fairseq/fairseq/data/multilingual/sampled_multi_epoch_dataset.py
deleted file mode 100644
index 17387b2f8..000000000
--- a/kosmos-g/fairseq/fairseq/data/multilingual/sampled_multi_epoch_dataset.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import hashlib
-import logging
-import math
-
-import numpy as np
-from fairseq.data import SampledMultiDataset
-
-from .sampled_multi_dataset import CollateFormat, default_virtual_size_func
-
-
-logger = logging.getLogger(__name__)
-
-
-class SampledMultiEpochDataset(SampledMultiDataset):
-    """Samples from multiple sub-datasets according to sampling ratios
-    using virtual epoch sizes to speed up dataloading.
-    Args:
-        datasets (
-            List[~torch.utils.data.Dataset]
-            or OrderedDict[str, ~torch.utils.data.Dataset]
-        ): datasets
-        sampling_ratios (List[float]): list of probabilities of sampling each dataset
-            (default: None, which corresponds to concatenating all datasets together).
-        seed (int): RNG seed to use (default: 2).
-        epoch (int): starting epoch number (default: 1).
-        eval_key (str, optional): a key used at evaluation time that causes
-            this instance to pass-through batches from *datasets[eval_key]*.
-        collate_format (CollateFormat): collater output format, either CollateFormat.ordered_dict or
-            CollateFormat.single (default: CollateFormat.single) where CollateFormat.single configures
-            the collater to output batches of data mixed from all sub-datasets,
-            and CollateFormat.ordered_dict configures the collater to output a dictionary of batches indexed by keys
-            of sub-datasets.
-            Note that not all sub-datasets will be present in a single batch in both formats.
-        virtual_size (int, or callable): the expected virtual size of the dataset (default: default_virtual_size_func).
-        split (str): the split of the data, e.g. 'train', 'valid' or 'test'.
-        virtual_epoch_size (int): virtual epoch size; the dataset goes through the data in chunks of
-            this virtual epoch size to speed up data loading, e.g. indexing and filtering
-            can be performed whenever a virtual epoch is loaded, without waiting for the whole dataset to be loaded.
-        shared_collater (bool): whether or not all sub-datasets have the same collater.
-        shard_epoch (int): the real epoch number for shard selection.
-        shuffle (bool): whether or not to shuffle data (default: True).
-    """
-
-    def __init__(
-        self,
-        datasets,
-        sampling_ratios=None,
-        seed=2,
-        epoch=1,
-        eval_key=None,
-        collate_format=CollateFormat.single,
-        virtual_size=default_virtual_size_func,
-        split="",
-        virtual_epoch_size=None,
-        shared_collater=False,
-        shard_epoch=1,
-        shuffle=True,
-    ):
-        self.virtual_epoch_size = virtual_epoch_size
-        self._current_epoch_start_index = None
-        self._random_global_indices = None
-        self.shard_epoch = shard_epoch if shard_epoch is not None else 1
-        self.load_next_shard = None
-        self._epoch_sizes = None
-        super().__init__(
-            datasets=datasets,
-            sampling_ratios=sampling_ratios,
-            seed=seed,
-            epoch=epoch,
-            eval_key=eval_key,
-            collate_format=collate_format,
-            virtual_size=virtual_size,
-            split=split,
-            shared_collater=shared_collater,
-            shuffle=shuffle,
-        )
-
-    def _setup(self, epoch):
-        self.virtual_epoch_size = (
-            self.virtual_epoch_size
-            if self.virtual_epoch_size is not None
-            else self.virtual_size
-        )
-        if self.virtual_epoch_size > self.virtual_size:
-            logger.warning(
-                f"virtual epoch size {self.virtual_epoch_size} "
-                f"is greater than virtual dataset size {self.virtual_size}"
-            )
-            self.virtual_epoch_size = self.virtual_size
-        self.num_virtual_epochs = math.ceil(self.virtual_size / self.virtual_epoch_size)
-        self._current_epoch_start_index = self._get_epoch_start_index(epoch)
-        logger.info(
-            f"virtual epoch size {self.virtual_epoch_size}; virtual dataset size {self.virtual_size}"
-        )
-
-    def _map_epoch_index_to_global(self, index):
-        index = self._current_epoch_start_index + index
-        # add randomness
-        return self._random_global_indices[index]
-
-    @property
-    def sizes(self):
-        if self._epoch_sizes is not None:
-            return self._epoch_sizes
-        _sizes = super().sizes
-        indices = self._random_global_indices[
-            self._current_epoch_start_index : self._current_epoch_start_index
-            + len(self)
-        ]
-        self._epoch_sizes = _sizes[indices]
-        # del super()._sizes to save memory
-        del self._sizes
-        self._sizes = None
-        return self._epoch_sizes
-
-    def _get_dataset_and_index(self, index):
-        i = self._map_epoch_index_to_global(index)
-        return super()._get_dataset_and_index(i)
-
-    def __len__(self):
-        return (
-            self.virtual_epoch_size
-            if self._current_epoch_start_index + self.virtual_epoch_size
-            < self.virtual_size
-            else self.virtual_size - self._current_epoch_start_index
-        )
-    def set_epoch(self, epoch):
-        if self._current_epoch_start_index is None:
-            # initializing epoch indices of a virtual dataset
-            self._setup(epoch)
-            self._next_virtual_epoch(epoch)
-        else:
-            # working on already initialized epoch indices
-            if epoch == self._cur_epoch:
-                # re-enter so return
-                return
-            self._next_virtual_epoch(epoch)
-
-    def _get_epoch_start_index(self, epoch):
-        assert epoch >= 1  # fairseq is using 1-based epoch everywhere
-        return ((epoch - 1) % self.num_virtual_epochs) * self.virtual_epoch_size
-
-    def _next_global_indices(self, epoch):
-        rng = np.random.RandomState(
-            [
-                int(
-                    hashlib.sha1(
-                        str(self.__class__.__name__).encode("utf-8")
-                    ).hexdigest(),
-                    16,
-                )
-                % (2 ** 32),
-                self.seed % (2 ** 32),  # global seed
-                epoch,  # epoch index,
-            ]
-        )
-        del self._random_global_indices
-        self._random_global_indices = rng.choice(
-            self.virtual_size, self.virtual_size, replace=False
-        )
-        if self.load_next_shard is None:
-            self.load_next_shard = False
-        else:
-            # increase shard epoch for next loading
-            self.shard_epoch += 1
-            self.load_next_shard = True
-            logger.info(
-                "to load next epoch/shard in next load_dataset: "
-                f"epoch={epoch}/shard_epoch={self.shard_epoch}"
-            )
-
-    def _next_virtual_epoch(self, epoch):
-        index = self._get_epoch_start_index(epoch)
-        if index == 0 or self._random_global_indices is None:
-            # need to start from the beginning,
-            # so call super().set_epoch(epoch) to establish the global virtual indices
-            logger.info(
-                "establishing a new set of global virtual indices for "
-                f"epoch={epoch}/shard_epoch={self.shard_epoch}"
-            )
-            super().set_epoch(epoch)
-            self._next_global_indices(epoch)
-        else:
-            self._cur_epoch = epoch
-
-        # reset cache sizes and ordered_indices for the epoch after moving to a new epoch
-        self._clean_if_not_none(
-            [
-                self._epoch_sizes,
-            ]
-        )
-        self._epoch_sizes = None
-        self._current_epoch_start_index = index
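The virtual-epoch bookkeeping above reduces to two steps: compute where the current epoch starts inside a global permutation of the virtual indices, then read the epoch's slice through that permutation. A compressed sketch of `_get_epoch_start_index` plus `_map_epoch_index_to_global` (illustrative names, not the fairseq API):

```python
import numpy as np

virtual_size, virtual_epoch_size = 10, 4
num_virtual_epochs = -(-virtual_size // virtual_epoch_size)  # ceil division -> 3

rng = np.random.RandomState(0)
perm = rng.permutation(virtual_size)  # plays the role of _random_global_indices

def epoch_slice(epoch):
    # fairseq epochs are 1-based; epochs cycle through the virtual dataset
    start = ((epoch - 1) % num_virtual_epochs) * virtual_epoch_size
    return perm[start : min(start + virtual_epoch_size, virtual_size)]

for e in (1, 2, 3, 4):
    print(e, epoch_slice(e))  # epoch 3 is short (2 items); epoch 4 wraps to epoch 1's slice
```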
diff --git a/kosmos-g/fairseq/fairseq/data/multilingual/sampling_method.py b/kosmos-g/fairseq/fairseq/data/multilingual/sampling_method.py
deleted file mode 100644
index 140c68f01..000000000
--- a/kosmos-g/fairseq/fairseq/data/multilingual/sampling_method.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from typing import List
-
-
-logger = logging.getLogger(__name__)
-
-
-def uniform(dataset_sizes: List[int]):
-    return [1.0] * len(dataset_sizes)
-
-
-def temperature_sampling(dataset_sizes, temp):
-    total_size = sum(dataset_sizes)
-    return [(size / total_size) ** (1.0 / temp) for size in dataset_sizes]
-
-
-def make_temperature_sampling(temp=1.0):
-    def sampling_func(dataset_sizes):
-        return temperature_sampling(dataset_sizes, temp)
-
-    return sampling_func
-
-
-def make_ratio_sampling(ratios):
-    def sampling_func(dataset_sizes):
-        return ratios
-
-    return sampling_func
-
-
-class SamplingMethod:
-    @staticmethod
-    def add_arguments(parser):
-        parser.add_argument(
-            "--sampling-method",
-            choices=[
-                "uniform",
-                "temperature",
-                "concat",
-                "RoundRobin",
-            ],
-            type=str,
-            default="concat",
-            help="The method to sample data per language pair",
-        )
-        parser.add_argument(
-            "--sampling-temperature",
-            default=1.5,
-            type=float,
-            help="only works with --sampling-method temperature",
-        )
-
-    @staticmethod
-    def build_sampler(args, task):
-        return SamplingMethod(args, task)
-
-    def __init__(self, args, task):
-        self.args = args
-        self.task = task
-
-    def is_adaptive(self):
-        return False
-
-    def sampling_method_selector(self):
-        args = self.args
-        logger.info(f"selected sampler: {args.sampling_method}")
-        if args.sampling_method == "uniform":
-            return uniform
-        elif args.sampling_method == "temperature" or self.is_adaptive():
-            return make_temperature_sampling(float(args.sampling_temperature))
-        else:
-            # default to concatenating all datasets together
-            return None
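With `temperature_sampling`, higher temperatures flatten the size imbalance between corpora: at temp=1 the weights stay proportional to dataset size, and as temp grows they approach uniform. A quick worked example of the formula above:

```python
sizes = [900, 90, 10]  # e.g. sentence counts of three language pairs
total = sum(sizes)
for temp in (1.0, 1.5, 5.0):
    weights = [(s / total) ** (1.0 / temp) for s in sizes]
    print(temp, [round(w, 3) for w in weights])
# temp=1.0 -> [0.9, 0.09, 0.01]: proportional to size
# temp=5.0 -> [0.979, 0.618, 0.398]: much closer to uniform (before normalization)
```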
diff --git a/kosmos-g/fairseq/fairseq/data/nested_dictionary_dataset.py b/kosmos-g/fairseq/fairseq/data/nested_dictionary_dataset.py
deleted file mode 100644
index 52e74abdd..000000000
--- a/kosmos-g/fairseq/fairseq/data/nested_dictionary_dataset.py
+++ /dev/null
@@ -1,125 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from collections import OrderedDict
-
-import torch
-from torch.utils.data.dataloader import default_collate
-
-from . import FairseqDataset
-
-
-def _flatten(dico, prefix=None):
-    """Flatten a nested dictionary."""
-    new_dico = OrderedDict()
-    if isinstance(dico, dict):
-        prefix = prefix + "." if prefix is not None else ""
-        for k, v in dico.items():
-            if v is None:
-                continue
-            new_dico.update(_flatten(v, prefix + k))
-    elif isinstance(dico, list):
-        for i, v in enumerate(dico):
-            new_dico.update(_flatten(v, prefix + ".[" + str(i) + "]"))
-    else:
-        new_dico = OrderedDict({prefix: dico})
-    return new_dico
-
-
-def _unflatten(dico):
-    """Unflatten a flattened dictionary into a nested dictionary."""
-    new_dico = OrderedDict()
-    for full_k, v in dico.items():
-        full_k = full_k.split(".")
-        node = new_dico
-        for k in full_k[:-1]:
-            if k.startswith("[") and k.endswith("]"):
-                k = int(k[1:-1])
-            if k not in node:
-                node[k] = OrderedDict()
-            node = node[k]
-        node[full_k[-1]] = v
-    return new_dico
-
-
-class NestedDictionaryDataset(FairseqDataset):
-    def __init__(self, defn, sizes=None):
-        super().__init__()
-        self.defn = _flatten(defn)
-        self.sizes = [sizes] if not isinstance(sizes, (list, tuple)) else sizes
-
-        first = None
-        for v in self.defn.values():
-            if not isinstance(
-                v,
-                (
-                    FairseqDataset,
-                    torch.utils.data.Dataset,
-                ),
-            ):
-                raise ValueError("Expected Dataset but found: {}".format(v.__class__))
-            first = first or v
-            if len(v) > 0:
-                assert len(v) == len(first), "dataset lengths must match"
-
-        self._len = len(first)
-
-    def __getitem__(self, index):
-        return OrderedDict((k, ds[index]) for k, ds in self.defn.items())
-
-    def __len__(self):
-        return self._len
-
-    def collater(self, samples):
-        """Merge a list of samples to form a mini-batch.
-
-        Args:
-            samples (List[dict]): samples to collate
-
-        Returns:
-            dict: a mini-batch suitable for forwarding with a Model
-        """
-        if len(samples) == 0:
-            return {}
-        sample = OrderedDict()
-        for k, ds in self.defn.items():
-            try:
-                sample[k] = ds.collater([s[k] for s in samples])
-            except NotImplementedError:
-                sample[k] = default_collate([s[k] for s in samples])
-        return _unflatten(sample)
-
-    def num_tokens(self, index):
-        """Return the number of tokens in a sample. This value is used to
-        enforce ``--max-tokens`` during batching."""
-        return max(s[index] for s in self.sizes)
-
-    def size(self, index):
-        """Return an example's size as a float or tuple. This value is used when
-        filtering a dataset with ``--max-positions``."""
-        if len(self.sizes) == 1:
-            return self.sizes[0][index]
-        else:
-            return (s[index] for s in self.sizes)
-
-    @property
-    def supports_prefetch(self):
-        """Whether this dataset supports prefetching."""
-        return any(ds.supports_prefetch for ds in self.defn.values())
-
-    def prefetch(self, indices):
-        """Prefetch the data required for this epoch."""
-        for ds in self.defn.values():
-            if getattr(ds, "supports_prefetch", False):
-                ds.prefetch(indices)
-
-    @property
-    def can_reuse_epoch_itr_across_epochs(self):
-        return all(ds.can_reuse_epoch_itr_across_epochs for ds in self.defn.values())
-
-    def set_epoch(self, epoch):
-        super().set_epoch(epoch)
-        for ds in self.defn.values():
-            ds.set_epoch(epoch)
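The invariant behind the two helpers above is that `_unflatten(_flatten(d))` recovers `d`, with dotted keys as the intermediate form. A self-contained, dict-only mirror of that round trip (the real `_flatten` also handles lists and skips `None` values):

```python
from collections import OrderedDict

def flatten(d, prefix=None):
    # dict-only mirror of _flatten above: nested keys become "a.b.c"
    out = OrderedDict()
    if isinstance(d, dict):
        prefix = prefix + "." if prefix is not None else ""
        for k, v in d.items():
            out.update(flatten(v, prefix + k))
    else:
        out[prefix] = d
    return out

def unflatten(d):
    # mirror of _unflatten above: split "a.b.c" back into nesting
    out = OrderedDict()
    for full_k, v in d.items():
        *path, leaf = full_k.split(".")
        node = out
        for k in path:
            node = node.setdefault(k, OrderedDict())
        node[leaf] = v
    return out

nested = OrderedDict([("net_input", OrderedDict([("src_tokens", 1)])), ("target", 2)])
flat = flatten(nested)
assert list(flat) == ["net_input.src_tokens", "target"]
assert unflatten(flat) == nested
```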
diff --git a/kosmos-g/fairseq/fairseq/data/noising.py b/kosmos-g/fairseq/fairseq/data/noising.py
deleted file mode 100644
index e92e83c2c..000000000
--- a/kosmos-g/fairseq/fairseq/data/noising.py
+++ /dev/null
@@ -1,334 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-from fairseq.data import data_utils
-
-
-class WordNoising(object):
-    """Generate a noisy version of a sentence, without changing words themselves."""
-
-    def __init__(self, dictionary, bpe_cont_marker="@@", bpe_end_marker=None):
-        self.dictionary = dictionary
-        self.bpe_end = None
-        if bpe_cont_marker:
-            self.bpe_end = np.array(
-                [
-                    not self.dictionary[i].endswith(bpe_cont_marker)
-                    for i in range(len(self.dictionary))
-                ]
-            )
-        elif bpe_end_marker:
-            self.bpe_end = np.array(
-                [
-                    self.dictionary[i].endswith(bpe_end_marker)
-                    for i in range(len(self.dictionary))
-                ]
-            )
-
-        self.get_word_idx = (
-            self._get_bpe_word_idx if self.bpe_end is not None else self._get_token_idx
-        )
-
-    def noising(self, x, lengths, noising_prob=0.0):
-        raise NotImplementedError()
-
-    def _get_bpe_word_idx(self, x):
-        """
-        Given a list of BPE tokens, for every index in the tokens list,
-        return the index of the word grouping that it belongs to.
-        For example, for input x corresponding to ["how", "are", "y@@", "ou"],
-        return [[0], [1], [2], [2]].
-        """
-        # x: (T x B)
-        bpe_end = self.bpe_end[x]
-
-        if x.size(0) == 1 and x.size(1) == 1:
-            # Special case when we only have one word in x. If x = [[N]],
-            # bpe_end is a scalar (bool) instead of a 2-dim array of bools,
-            # which makes the sum operation below fail.
-            return np.array([[0]])
-
-        # do a reduce front sum to generate word ids
-        word_idx = bpe_end[::-1].cumsum(0)[::-1]
-        word_idx = word_idx.max(0)[None, :] - word_idx
-        return word_idx
-
-    def _get_token_idx(self, x):
-        """
-        This is to extend noising functions to be able to apply to non-bpe
-        tokens, e.g. word or characters.
-        """
-        x = torch.t(x)
-        word_idx = np.array([range(len(x_i)) for x_i in x])
-        return np.transpose(word_idx)
-
-
-class WordDropout(WordNoising):
-    """Randomly drop input words. If not passing blank_idx (default is None),
-    then dropped words will be removed. Otherwise, they will be replaced by the
-    blank_idx."""
-
-    def __init__(
-        self,
-        dictionary,
-        default_dropout_prob=0.1,
-        bpe_cont_marker="@@",
-        bpe_end_marker=None,
-    ):
-        super().__init__(dictionary, bpe_cont_marker, bpe_end_marker)
-        self.default_dropout_prob = default_dropout_prob
-
-    def noising(self, x, lengths, dropout_prob=None, blank_idx=None):
-        if dropout_prob is None:
-            dropout_prob = self.default_dropout_prob
-        # x: (T x B), lengths: B
-        if dropout_prob == 0:
-            return x, lengths
-
-        assert 0 < dropout_prob < 1
-
-        # be sure to drop entire words
-        word_idx = self.get_word_idx(x)
-        sentences = []
-        modified_lengths = []
-        for i in range(lengths.size(0)):
-            # Since dropout probabilities need to apply over non-pad tokens,
-            # it is not trivial to generate the keep mask without considering
-            # input lengths; otherwise, this could be done outside the loop
-
-            # We want to drop whole words based on word_idx grouping
-            num_words = max(word_idx[:, i]) + 1
-
-            # ith example: [x0, x1, ..., eos, pad, ..., pad]
-            # We should only generate keep probs for non-EOS tokens. Thus if the
-            # input sentence ends in EOS, the last word idx is not included in
-            # the dropout mask generation and we append True to always keep EOS.
-            # Otherwise, just generate the dropout mask for all word idx
-            # positions.
-            has_eos = x[lengths[i] - 1, i] == self.dictionary.eos()
-            if has_eos:  # has eos?
-                keep = np.random.rand(num_words - 1) >= dropout_prob
-                keep = np.append(keep, [True])  # keep EOS symbol
-            else:
-                keep = np.random.rand(num_words) >= dropout_prob
-
-            words = x[: lengths[i], i].tolist()
-
-            # TODO: speed up the following loop
-            # drop words from the input according to keep
-            new_s = [
-                w if keep[word_idx[j, i]] else blank_idx for j, w in enumerate(words)
-            ]
-            new_s = [w for w in new_s if w is not None]
-            # we need to have at least one word in the sentence (more than the
-            # start / end sentence symbols)
-            if len(new_s) <= 1:
-                # insert at beginning in case the only token left is EOS
-                # EOS should be at end of list.
-                new_s.insert(0, words[np.random.randint(0, len(words))])
-            assert len(new_s) >= 1 and (
-                not has_eos  # Either don't have EOS at end or last token is EOS
-                or (len(new_s) >= 2 and new_s[-1] == self.dictionary.eos())
-            ), "New sentence is invalid."
-            sentences.append(new_s)
-            modified_lengths.append(len(new_s))
-        # re-construct input
-        modified_lengths = torch.LongTensor(modified_lengths)
-        modified_x = torch.LongTensor(
-            modified_lengths.max(), modified_lengths.size(0)
-        ).fill_(self.dictionary.pad())
-        for i in range(modified_lengths.size(0)):
-            modified_x[: modified_lengths[i], i].copy_(torch.LongTensor(sentences[i]))
-
-        return modified_x, modified_lengths
-
-
-class WordShuffle(WordNoising):
-    """Shuffle words by no more than k positions."""
-
-    def __init__(
-        self,
-        dictionary,
-        default_max_shuffle_distance=3,
-        bpe_cont_marker="@@",
-        bpe_end_marker=None,
-    ):
-        super().__init__(dictionary, bpe_cont_marker, bpe_end_marker)
-        self.default_max_shuffle_distance = default_max_shuffle_distance
-
-    def noising(self, x, lengths, max_shuffle_distance=None):
-        if max_shuffle_distance is None:
-            max_shuffle_distance = self.default_max_shuffle_distance
-        # x: (T x B), lengths: B
-        if max_shuffle_distance == 0:
-            return x, lengths
-
-        # max_shuffle_distance < 1 will return the same sequence
-        assert max_shuffle_distance > 1
-
-        # define noise word scores
-        noise = np.random.uniform(
-            0,
-            max_shuffle_distance,
-            size=(x.size(0), x.size(1)),
-        )
-        noise[0] = -1  # do not move start sentence symbol
-        # be sure to shuffle entire words
-        word_idx = self.get_word_idx(x)
-        x2 = x.clone()
-        for i in range(lengths.size(0)):
-            length_no_eos = lengths[i]
-            if x[lengths[i] - 1, i] == self.dictionary.eos():
-                length_no_eos = lengths[i] - 1
-            # generate a random permutation
-            scores = word_idx[:length_no_eos, i] + noise[word_idx[:length_no_eos, i], i]
-            # ensure no reordering inside a word
-            scores += 1e-6 * np.arange(length_no_eos.item())
-            permutation = scores.argsort()
-            # shuffle words
-            x2[:length_no_eos, i].copy_(
-                x2[:length_no_eos, i][torch.from_numpy(permutation)]
-            )
-        return x2, lengths
-
-
-class UnsupervisedMTNoising(WordNoising):
-    """
-    Implements the default configuration for noising in UnsupervisedMT
-    (github.com/facebookresearch/UnsupervisedMT)
-    """
-
-    def __init__(
-        self,
-        dictionary,
-        max_word_shuffle_distance,
-        word_dropout_prob,
-        word_blanking_prob,
-        bpe_cont_marker="@@",
-        bpe_end_marker=None,
-    ):
-        super().__init__(dictionary)
-        self.max_word_shuffle_distance = max_word_shuffle_distance
-        self.word_dropout_prob = word_dropout_prob
-        self.word_blanking_prob = word_blanking_prob
-
-        self.word_dropout = WordDropout(
-            dictionary=dictionary,
-            bpe_cont_marker=bpe_cont_marker,
-            bpe_end_marker=bpe_end_marker,
-        )
-        self.word_shuffle = WordShuffle(
-            dictionary=dictionary,
-            bpe_cont_marker=bpe_cont_marker,
-            bpe_end_marker=bpe_end_marker,
-        )
-
-    def noising(self, x, lengths):
-        # 1. Word Shuffle
-        noisy_src_tokens, noisy_src_lengths = self.word_shuffle.noising(
-            x=x,
-            lengths=lengths,
-            max_shuffle_distance=self.max_word_shuffle_distance,
-        )
-        # 2. Word Dropout
-        noisy_src_tokens, noisy_src_lengths = self.word_dropout.noising(
-            x=noisy_src_tokens,
-            lengths=noisy_src_lengths,
-            dropout_prob=self.word_dropout_prob,
-        )
-        # 3. Word Blanking
-        noisy_src_tokens, noisy_src_lengths = self.word_dropout.noising(
-            x=noisy_src_tokens,
-            lengths=noisy_src_lengths,
-            dropout_prob=self.word_blanking_prob,
-            blank_idx=self.dictionary.unk(),
-        )
-
-        return noisy_src_tokens
-
-
-class NoisingDataset(torch.utils.data.Dataset):
-    def __init__(
-        self,
-        src_dataset,
-        src_dict,
-        seed,
-        noiser=None,
-        noising_class=UnsupervisedMTNoising,
-        **kwargs
-    ):
-        """
-        Wrap a :class:`~torch.utils.data.Dataset` and apply noise to the
-        samples based on the supplied noising configuration.
-
-        Args:
-            src_dataset (~torch.utils.data.Dataset): dataset to wrap.
-                to build self.src_dataset --
-                a LanguagePairDataset with src dataset as the source dataset and
-                None as the target dataset. Should NOT have padding so that
-                src_lengths are accurately calculated by language_pair_dataset
-                collate function.
-                We use language_pair_dataset here to encapsulate the tgt_dataset
-                so we can re-use the LanguagePairDataset collater to format the
-                batches in the structure that SequenceGenerator expects.
-            src_dict (~fairseq.data.Dictionary): source dictionary
-            seed (int): seed to use when generating random noise
-            noiser (WordNoising): a pre-initialized :class:`WordNoising`
-                instance. If this is None, a new instance will be created using
-                *noising_class* and *kwargs*.
-            noising_class (class, optional): class to use to initialize a
-                default :class:`WordNoising` instance.
-            kwargs (dict, optional): arguments to initialize the default
-                :class:`WordNoising` instance given by *noiser*.
-        """
-        self.src_dataset = src_dataset
-        self.src_dict = src_dict
-        self.seed = seed
-        self.noiser = (
-            noiser
-            if noiser is not None
-            else noising_class(
-                dictionary=src_dict,
-                **kwargs,
-            )
-        )
-        self.sizes = src_dataset.sizes
-
-    def __getitem__(self, index):
-        """
-        Returns a single noisy sample. Multiple samples are fed to the collater
-        to create a noising dataset batch.
-        """
-        src_tokens = self.src_dataset[index]
-        src_lengths = torch.LongTensor([len(src_tokens)])
-        src_tokens = src_tokens.unsqueeze(0)
-
-        # Transpose src tokens to fit expected shape of x in noising function
-        # (batch size, sequence length) -> (sequence length, batch size)
-        src_tokens_t = torch.t(src_tokens)
-
-        with data_utils.numpy_seed(self.seed + index):
-            noisy_src_tokens = self.noiser.noising(src_tokens_t, src_lengths)
-
-        # Transpose back to expected src_tokens format
-        # (sequence length, 1) -> (1, sequence length)
-        noisy_src_tokens = torch.t(noisy_src_tokens)
-        return noisy_src_tokens[0]
-
-    def __len__(self):
-        """
-        The length of the noising dataset is the length of src.
-        """
-        return len(self.src_dataset)
-
-    @property
-    def supports_prefetch(self):
-        return self.src_dataset.supports_prefetch
-
-    def prefetch(self, indices):
-        if self.src_dataset.supports_prefetch:
-            self.src_dataset.prefetch(indices)
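`UnsupervisedMTNoising` chains the primitives above: bounded word shuffle, whole-word dropout, then whole-word blanking with `unk`. A usage sketch on a toy (T x B) batch, as the module could be driven before this deletion (assumes a default fairseq `Dictionary`, whose `@@` continuation markers make "y@@ ou" move and drop as one word):

```python
import torch
from fairseq.data import Dictionary
from fairseq.data.noising import UnsupervisedMTNoising  # the module deleted above

d = Dictionary()  # pad/eos/unk/bos are pre-populated
ids = [d.add_symbol(w) for w in ["how", "are", "y@@", "ou"]]

# One sentence, column-major (T x B) as WordNoising expects, EOS-terminated.
x = torch.LongTensor([ids + [d.eos()]]).t()
lengths = torch.LongTensor([x.size(0)])

noiser = UnsupervisedMTNoising(
    d, max_word_shuffle_distance=3, word_dropout_prob=0.1, word_blanking_prob=0.1
)
noisy = noiser.noising(x, lengths)  # still (T' x 1); EOS stays in place
```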
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import FairseqDataset
-
-
-class NumSamplesDataset(FairseqDataset):
-    def __getitem__(self, index):
-        return 1
-
-    def __len__(self):
-        return 0
-
-    def collater(self, samples):
-        return sum(samples)
diff --git a/kosmos-g/fairseq/fairseq/data/numel_dataset.py b/kosmos-g/fairseq/fairseq/data/numel_dataset.py
deleted file mode 100644
index ac86dfd2f..000000000
--- a/kosmos-g/fairseq/fairseq/data/numel_dataset.py
+++ /dev/null
@@ -1,31 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-
-from . import BaseWrapperDataset
-
-
-class NumelDataset(BaseWrapperDataset):
-    def __init__(self, dataset, reduce=False):
-        super().__init__(dataset)
-        self.reduce = reduce
-
-    def __getitem__(self, index):
-        item = self.dataset[index]
-        if torch.is_tensor(item):
-            return torch.numel(item)
-        else:
-            return np.size(item)
-
-    def __len__(self):
-        return len(self.dataset)
-
-    def collater(self, samples):
-        if self.reduce:
-            return sum(samples)
-        else:
-            return torch.tensor(samples)
diff --git a/kosmos-g/fairseq/fairseq/data/offset_tokens_dataset.py b/kosmos-g/fairseq/fairseq/data/offset_tokens_dataset.py
deleted file mode 100644
index 6fabbdcda..000000000
--- a/kosmos-g/fairseq/fairseq/data/offset_tokens_dataset.py
+++ /dev/null
@@ -1,15 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import BaseWrapperDataset
-
-
-class OffsetTokensDataset(BaseWrapperDataset):
-    def __init__(self, dataset, offset):
-        super().__init__(dataset)
-        self.offset = offset
-
-    def __getitem__(self, idx):
-        return self.dataset[idx] + self.offset
diff --git a/kosmos-g/fairseq/fairseq/data/pad_dataset.py b/kosmos-g/fairseq/fairseq/data/pad_dataset.py
deleted file mode 100644
index 8075bba6a..000000000
--- a/kosmos-g/fairseq/fairseq/data/pad_dataset.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq.data import data_utils
-
-from . import BaseWrapperDataset
-
-
-class PadDataset(BaseWrapperDataset):
-    def __init__(self, dataset, pad_idx, left_pad):
-        super().__init__(dataset)
-        self.pad_idx = pad_idx
-        self.left_pad = left_pad
-
-    def collater(self, samples):
-        return data_utils.collate_tokens(samples, self.pad_idx, left_pad=self.left_pad)
-
-
-class LeftPadDataset(PadDataset):
-    def __init__(self, dataset, pad_idx):
-        super().__init__(dataset, pad_idx, left_pad=True)
-
-
-class RightPadDataset(PadDataset):
-    def __init__(self, dataset, pad_idx):
-        super().__init__(dataset, pad_idx, left_pad=False)
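`PadDataset.collater` defers to `fairseq.data.data_utils.collate_tokens`, which pads 1-D tensors to the batch maximum on the left or the right; left padding is what the `LeftPadDataset` variant above exists for. A small sketch:

```python
import torch
from fairseq.data import data_utils

a, b = torch.LongTensor([5, 6, 7]), torch.LongTensor([8])
right = data_utils.collate_tokens([a, b], pad_idx=1, left_pad=False)
left = data_utils.collate_tokens([a, b], pad_idx=1, left_pad=True)
print(right.tolist())  # [[5, 6, 7], [8, 1, 1]]
print(left.tolist())   # [[5, 6, 7], [1, 1, 8]]
```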
diff --git a/kosmos-g/fairseq/fairseq/data/plasma_utils.py b/kosmos-g/fairseq/fairseq/data/plasma_utils.py
deleted file mode 100644
index b9fab3b73..000000000
--- a/kosmos-g/fairseq/fairseq/data/plasma_utils.py
+++ /dev/null
@@ -1,197 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-import subprocess
-import json
-import tempfile
-import hashlib
-from typing import Hashable
-
-try:
-    import pyarrow.plasma as plasma
-
-    PYARROW_AVAILABLE = True
-except ImportError:
-    plasma = None
-    PYARROW_AVAILABLE = False
-
-
-class PlasmaArray:
-    """
-    Wrapper around numpy arrays that automatically moves the data to shared
-    memory upon serialization. This is particularly helpful when passing numpy
-    arrays through multiprocessing, so that data is not unnecessarily
-    duplicated or pickled.
-    """
-
-    def __init__(self, array):
-        super().__init__()
-        self.array = array
-        self.disable = array.nbytes < 134217728  # disable for arrays <128MB
-        self.object_id = None
-        self.path = None
-
-        # variables with underscores shouldn't be pickled
-        self._client = None
-        self._server = None
-        self._server_tmp = None
-        self._plasma = None
-
-    @property
-    def plasma(self):
-        if self._plasma is None and not self.disable:
-            self._plasma = plasma
-        return self._plasma
-
-    def start_server(self):
-        if self.plasma is None or self._server is not None:
-            return
-        assert self.object_id is None
-        assert self.path is None
-        self._server_tmp = tempfile.NamedTemporaryFile()
-        self.path = self._server_tmp.name
-        self._server = subprocess.Popen(
-            ["plasma_store", "-m", str(int(1.05 * self.array.nbytes)), "-s", self.path]
-        )
-
-    @property
-    def client(self):
-        if self._client is None:
-            assert self.path is not None
-            self._client = self.plasma.connect(self.path, num_retries=200)
-        return self._client
-
-    def __getstate__(self):
-        """Called on pickle save"""
-        if self.plasma is None:
-            return self.__dict__
-        if self.object_id is None:
-            self.start_server()
-            self.object_id = self.client.put(self.array)
-        state = self.__dict__.copy()
-        del state["array"]
-        state["_client"] = None
-        state["_server"] = None
-        state["_server_tmp"] = None
-        state["_plasma"] = None
-        return state
-
-    def __setstate__(self, state):
-        """Called on pickle load"""
-        self.__dict__.update(state)
-        if self.plasma is None:
-            return
-        self.array = self.client.get(self.object_id)
-
-    def __del__(self):
-        if self._server is not None:
-            self._server.kill()
-            self._server = None
-            self._server_tmp.close()
-            self._server_tmp = None
-
-
-DEFAULT_PLASMA_PATH = "/tmp/plasma"
-
-
-class PlasmaView:
-    """Interface to write and read from shared memory. Whereas PlasmaArray writes to plasma on serialization,
-    PlasmaView writes to shared memory on instantiation."""
-
-    def __init__(self, array, split_path: str, hash_data: Hashable, plasma_path=None):
-        """
-        Args:
-            array: numpy array to store. This can be read with ``PlasmaView().array``
-            split_path: the path whence the data was read, used for hashing
-            hash_data: other metadata about the array that can be used to create a unique key.
-                as of writing, the 3 callers in ``TokenBlockDataset`` use::
-
-                    hash_data = ((block_size, document_sep_len, str(break_mode), len(dataset)), 0|1|2)
-
-
-        """
-        assert PYARROW_AVAILABLE
-        assert split_path is not None
-        if plasma_path is None:
-            plasma_path = DEFAULT_PLASMA_PATH
-
-        self.path = plasma_path
-        self.split_path = split_path
-        self._client = None  # Initialize lazily for pickle. plasma clients should not be deep copied or serialized.
-        self._n = None
-
-        self.object_id = self.get_object_id(self.split_path, hash_data)
-        try:
-            self.client.put(array, object_id=self.object_id)
-        except plasma.PlasmaObjectExists:
-            pass
-
-    @property
-    def client(self):
-        if self._client is None:
-            self._client = plasma.connect(self.path, num_retries=200)
-        return self._client
-
-    @property
-    def array(self):
-        """Fetch a read only view of an np.array, stored in plasma."""
-        ret = self.client.get(self.object_id)
-        return ret
-
-    @staticmethod
-    def get_object_id(split_path: str, hash_data: Hashable):
-        """Returns plasma.ObjectID from hashing split_path and object_num."""
-        hash = hashlib.blake2b(bytes(split_path, "utf-8"), digest_size=20)
-        harg = json.dumps(hash_data).encode("utf-8")
-        hash.update(harg)
-        return plasma.ObjectID(hash.digest())
-
-    def __getstate__(self):
-        """Called on pickle save"""
-        self.disconnect()
-        state = self.__dict__.copy()
-        assert state["_client"] is None
-        assert "object_id" in state
-        return state
-
-    def __setstate__(self, state):
-        """Called on pickle load"""
-        self.__dict__.update(state)
-
-    def __del__(self):
-        self.disconnect()
-
-    def disconnect(self):
-        if self._client is not None:
-            self._client.disconnect()
-            self._client = None
-
-    def __len__(self):
-        """Save reads by caching len"""
-        if self._n is None:
-            self._n = len(self.array)
-        return self._n
-
-
-GB100 = (1024 ** 3) * 100
-
-
-class PlasmaStore:
-    def __init__(self, path=DEFAULT_PLASMA_PATH, nbytes: int = GB100):
-
-        self.server = self.start(path, nbytes)
-
-    def __del__(self):
-        self.server.kill()
-
-    @staticmethod
-    def start(path=DEFAULT_PLASMA_PATH, nbytes: int = GB100) -> subprocess.Popen:
-        if not PYARROW_AVAILABLE:
-            raise ImportError("please run pip install pyarrow to use --use_plasma_view")
-        # best practice is to allocate more space than we need. The limitation seems to be the size of /dev/shm
-        _server = subprocess.Popen(["plasma_store", "-m", str(nbytes), "-s", path])
-        plasma.connect(path, num_retries=200)  # If we can't connect we fail immediately
-        return _server
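The two classes differ mainly in when data reaches shared memory: `PlasmaArray` defers the copy until the object is pickled (e.g. shipped to a DataLoader worker), while `PlasmaView` writes at construction time against a shared `plasma_store` server. A pickling round-trip sketch for `PlasmaArray`; this is illustrative only, needs a pyarrow version that still ships plasma plus the `plasma_store` executable, and the shared-memory path only engages for arrays of at least 128MB (smaller ones, or a missing pyarrow, fall back to ordinary pickling):

```python
import pickle
import numpy as np
from fairseq.data.plasma_utils import PlasmaArray

arr = PlasmaArray(np.zeros(17_000_000, dtype=np.int64))  # ~136MB, above the 128MB cutoff
blob = pickle.dumps(arr)       # __getstate__ starts a plasma_store and puts the array
restored = pickle.loads(blob)  # __setstate__ re-attaches via the stored object id
assert restored.array.nbytes == arr.array.nbytes
```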
diff --git a/kosmos-g/fairseq/fairseq/data/prepend_dataset.py b/kosmos-g/fairseq/fairseq/data/prepend_dataset.py
deleted file mode 100644
index ad74784d2..000000000
--- a/kosmos-g/fairseq/fairseq/data/prepend_dataset.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-
-from . import BaseWrapperDataset
-
-
-class PrependDataset(BaseWrapperDataset):
-    def __init__(self, dataset, prepend_getter, ensure_first_token_is=None):
-        super().__init__(dataset)
-        self.prepend_getter = prepend_getter
-        self.ensure_first_token = ensure_first_token_is
-
-    def __getitem__(self, idx):
-        item = self.dataset[idx]
-        is_tuple = isinstance(item, tuple)
-        src = item[0] if is_tuple else item
-
-        assert self.ensure_first_token is None or src[0] == self.ensure_first_token
-        prepend_idx = self.prepend_getter(self.dataset, idx)
-        assert isinstance(prepend_idx, int)
-        src[0] = prepend_idx
-        item = tuple((src,) + item[1:]) if is_tuple else src
-        return item
diff --git a/kosmos-g/fairseq/fairseq/data/prepend_token_dataset.py b/kosmos-g/fairseq/fairseq/data/prepend_token_dataset.py
deleted file mode 100644
index fd1331f4c..000000000
--- a/kosmos-g/fairseq/fairseq/data/prepend_token_dataset.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-
-from . import BaseWrapperDataset
-
-
-class PrependTokenDataset(BaseWrapperDataset):
-    def __init__(self, dataset, token=None):
-        super().__init__(dataset)
-        self.token = token
-        if token is not None:
-            self._sizes = np.array(dataset.sizes) + 1
-        else:
-            self._sizes = dataset.sizes
-
-    def __getitem__(self, idx):
-        item = self.dataset[idx]
-        if self.token is not None:
-            item = torch.cat([item.new([self.token]), item])
-        return item
-
-    @property
-    def sizes(self):
-        return self._sizes
-
-    def num_tokens(self, index):
-        n = self.dataset.num_tokens(index)
-        if self.token is not None:
-            n += 1
-        return n
-
-    def size(self, index):
-        n = self.dataset.size(index)
-        if self.token is not None:
-            n += 1
-        return n
diff --git a/kosmos-g/fairseq/fairseq/data/raw_label_dataset.py b/kosmos-g/fairseq/fairseq/data/raw_label_dataset.py
deleted file mode 100644
index d054904f4..000000000
--- a/kosmos-g/fairseq/fairseq/data/raw_label_dataset.py
+++ /dev/null
@@ -1,23 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from . import FairseqDataset
-
-
-class RawLabelDataset(FairseqDataset):
-    def __init__(self, labels):
-        super().__init__()
-        self.labels = labels
-
-    def __getitem__(self, index):
-        return self.labels[index]
-
-    def __len__(self):
-        return len(self.labels)
-
-    def collater(self, samples):
-        return torch.tensor(samples)
diff --git a/kosmos-g/fairseq/fairseq/data/replace_dataset.py b/kosmos-g/fairseq/fairseq/data/replace_dataset.py
deleted file mode 100644
index 5aac2ba96..000000000
--- a/kosmos-g/fairseq/fairseq/data/replace_dataset.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import BaseWrapperDataset
-
-
-class ReplaceDataset(BaseWrapperDataset):
-    """Replaces tokens found in the dataset by a specified replacement token
-
-    Args:
-        dataset (~torch.utils.data.Dataset): dataset to replace tokens in
-        replace_map (Dictionary[int, int]): map of token to replace -> replacement token
-        offsets (List[int]): do not replace tokens before (from left if pos, right if neg) this offset. Should be
-            as many as the number of objects returned by the underlying dataset's __getitem__ method.
-    """
-
-    def __init__(self, dataset, replace_map, offsets):
-        super().__init__(dataset)
-        assert len(replace_map) > 0
-        self.replace_map = replace_map
-        self.offsets = offsets
-
-    def __getitem__(self, index):
-        item = self.dataset[index]
-        is_tuple = isinstance(item, tuple)
-        srcs = item if is_tuple else [item]
-
-        for offset, src in zip(self.offsets, srcs):
-            for k, v in self.replace_map.items():
-                src_off = src[offset:] if offset >= 0 else src[:offset]
-                src_off.masked_fill_(src_off == k, v)
-
-        item = srcs if is_tuple else srcs[0]
-        return item
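Wrappers such as `PrependTokenDataset` above must keep size bookkeeping consistent with the items they return, because batching reads `sizes` and `num_tokens` without materializing items. A quick check of that contract against a toy base dataset (illustrative; `ToyDataset` is not part of fairseq):

```python
import numpy as np
import torch
from fairseq.data import FairseqDataset, PrependTokenDataset

class ToyDataset(FairseqDataset):
    def __init__(self, items):
        self.items = items
        self.sizes = np.array([len(t) for t in items])
    def __getitem__(self, i):
        return self.items[i]
    def __len__(self):
        return len(self.items)

base = ToyDataset([torch.LongTensor([4, 5]), torch.LongTensor([6])])
wrapped = PrependTokenDataset(base, token=0)  # e.g. prepend a bos id
assert wrapped[0].tolist() == [0, 4, 5]
assert list(wrapped.sizes) == [3, 2]  # every reported size grew by one
```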
diff --git a/kosmos-g/fairseq/fairseq/data/resampling_dataset.py b/kosmos-g/fairseq/fairseq/data/resampling_dataset.py
deleted file mode 100644
index 3d3b99316..000000000
--- a/kosmos-g/fairseq/fairseq/data/resampling_dataset.py
+++ /dev/null
@@ -1,139 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-
-import numpy as np
-from fairseq.data import BaseWrapperDataset, plasma_utils
-
-
-logger = logging.getLogger(__name__)
-
-
-class ResamplingDataset(BaseWrapperDataset):
-    """Randomly samples from a given dataset at each epoch.
-
-    Sampling is done with or without replacement, depending on the "replace"
-    parameter.
-
-    Optionally, the epoch size can be rescaled. This is potentially desirable
-    to increase per-epoch coverage of the base dataset (since sampling with
-    replacement means that many items in the dataset will be left out). In the
-    case of sampling without replacement, size_ratio should be strictly less
-    than 1.
-
-    Args:
-        dataset (~torch.utils.data.Dataset): dataset on which to sample.
-        weights (List[float]): list of probability weights
-            (default: None, which corresponds to uniform sampling).
-        replace (bool): sampling mode; True for "with replacement", or False
-            for "without replacement" (default: True)
-        size_ratio (float): the ratio to subsample to; must be positive
-            (default: 1.0).
-        batch_by_size (bool): whether or not to batch by sequence length
-            (default: True).
-        seed (int): RNG seed to use (default: 0).
-        epoch (int): starting epoch number (default: 1).
-    """
-
-    def __init__(
-        self,
-        dataset,
-        weights=None,
-        replace=True,
-        size_ratio=1.0,
-        batch_by_size=True,
-        seed=0,
-        epoch=1,
-    ):
-        super().__init__(dataset)
-
-        if weights is None:
-            self.weights = None
-
-        else:
-            assert len(weights) == len(dataset)
-            weights_arr = np.array(weights, dtype=np.float64)
-            weights_arr /= weights_arr.sum()
-            self.weights = plasma_utils.PlasmaArray(weights_arr)
-
-        self.replace = replace
-
-        assert size_ratio > 0.0
-        if not self.replace:
-            assert size_ratio < 1.0
-        self.size_ratio = float(size_ratio)
-        self.actual_size = np.ceil(len(dataset) * self.size_ratio).astype(int)
-
-        self.batch_by_size = batch_by_size
-        self.seed = seed
-
-        self._cur_epoch = None
-        self._cur_indices = None
-
-        self.set_epoch(epoch)
-
-    def __getitem__(self, index):
-        return self.dataset[self._cur_indices.array[index]]
-
-    def __len__(self):
-        return self.actual_size
-
-    @property
-    def sizes(self):
-        if isinstance(self.dataset.sizes, list):
-            return [s[self._cur_indices.array] for s in self.dataset.sizes]
-        return self.dataset.sizes[self._cur_indices.array]
-
-    def num_tokens(self, index):
-        return self.dataset.num_tokens(self._cur_indices.array[index])
-
-    def size(self, index):
-        return self.dataset.size(self._cur_indices.array[index])
-
-    def ordered_indices(self):
-        if self.batch_by_size:
-            order = [
-                np.arange(len(self)),
-                self.sizes,
-            ]  # No need to handle `self.shuffle == True`
-            return np.lexsort(order)
-        else:
-            return np.arange(len(self))
-
-    def prefetch(self, indices):
-        self.dataset.prefetch(self._cur_indices.array[indices])
-
-    @property
-    def can_reuse_epoch_itr_across_epochs(self):
-        return False
-
-    def set_epoch(self, epoch):
-        logger.debug("ResamplingDataset.set_epoch: {}".format(epoch))
-        super().set_epoch(epoch)
-
-        if epoch == self._cur_epoch:
-            return
-
-        self._cur_epoch = epoch
-
-        # Generate a weighted sample of indices as a function of the
-        # random seed and the current epoch.
-        rng = np.random.RandomState(
-            [
-                42,  # magic number
-                self.seed % (2 ** 32),  # global seed
-                self._cur_epoch,  # epoch index
-            ]
-        )
-        self._cur_indices = plasma_utils.PlasmaArray(
-            rng.choice(
-                len(self.dataset),
-                self.actual_size,
-                replace=self.replace,
-                p=(None if self.weights is None else self.weights.array),
-            )
-        )
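Because the RNG above is seeded with (magic number, seed, epoch), each epoch's resampled index set is deterministic: advancing the epoch redraws it, and revisiting an epoch reproduces it exactly. A sketch of that behavior (the plain list is a toy stand-in for a real dataset; small index arrays bypass plasma entirely):

```python
from fairseq.data import ResamplingDataset

base = list(range(8))  # toy stand-in; any indexable sized container works here
ds = ResamplingDataset(base, size_ratio=0.5, replace=False, seed=0, epoch=1)
epoch1 = [ds[i] for i in range(len(ds))]  # 4 of the 8 items, drawn for epoch 1

ds.set_epoch(2)  # new epoch -> a fresh deterministic draw
ds.set_epoch(1)  # same (seed, epoch) -> identical draw again
assert [ds[i] for i in range(len(ds))] == epoch1
```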
diff --git a/kosmos-g/fairseq/fairseq/data/roll_dataset.py b/kosmos-g/fairseq/fairseq/data/roll_dataset.py
deleted file mode 100644
index a2915eeb3..000000000
--- a/kosmos-g/fairseq/fairseq/data/roll_dataset.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from . import BaseWrapperDataset
-
-
-class RollDataset(BaseWrapperDataset):
-    def __init__(self, dataset, shifts):
-        super().__init__(dataset)
-        self.shifts = shifts
-
-    def __getitem__(self, index):
-        item = self.dataset[index]
-        return torch.roll(item, self.shifts)
- """ - - def __init__(self, datasets, eval_key=None): - super().__init__() - if isinstance(datasets, dict): - datasets = OrderedDict(datasets) - assert isinstance(datasets, OrderedDict) - assert datasets, "Can't make a RoundRobinZipDatasets out of nothing" - for dataset in datasets.values(): - assert isinstance(dataset, FairseqDataset) - - self.datasets = datasets - self.eval_key = eval_key - - self.longest_dataset_key = max(datasets, key=lambda k: len(datasets[k])) - self.longest_dataset = datasets[self.longest_dataset_key] - self._ordered_indices: Dict[str, Sequence[int]] = None - - def _map_index(self, key, index): - assert ( - self._ordered_indices is not None - ), "Must call RoundRobinZipDatasets.ordered_indices() first" - o = self._ordered_indices[key] - return o[index % len(o)] - - def __getitem__(self, index): - if self.eval_key is None: - return OrderedDict( - [ - (key, dataset[self._map_index(key, index)]) - for key, dataset in self.datasets.items() - ] - ) - else: - # at evaluation time it's useful to pass-through batches from a single key - return self.datasets[self.eval_key][self._map_index(self.eval_key, index)] - - def __len__(self): - if self._ordered_indices is not None: - return len(self._ordered_indices[self.longest_dataset_key]) - return len(self.longest_dataset) - - def collater(self, samples): - """Merge a list of samples to form a mini-batch.""" - if len(samples) == 0: - return None - if self.eval_key is None: - return OrderedDict( - [ - (key, dataset.collater([sample[key] for sample in samples])) - for key, dataset in self.datasets.items() - ] - ) - else: - # at evaluation time it's useful to pass-through batches from a single key - return self.datasets[self.eval_key].collater(samples) - - def num_tokens(self, index): - """Return an example's length (number of tokens), used for batching.""" - # TODO make it configurable whether to use max() or sum() here - return max( - dataset.num_tokens(self._map_index(key, index)) - for key, dataset in self.datasets.items() - ) - - def size(self, index): - """Return an example's size as a float or tuple. This value is used when - filtering a dataset with ``--max-positions``.""" - return { - key: dataset.size(self._map_index(key, index)) - for key, dataset in self.datasets.items() - } - - def ordered_indices(self): - """Ordered indices for batching.""" - if self._ordered_indices is None: - # Call the underlying dataset's ordered_indices() here, so that we - # get the same random ordering as we would have from using the - # underlying sub-datasets directly. - self._ordered_indices = OrderedDict( - [ - (key, dataset.ordered_indices()) - for key, dataset in self.datasets.items() - ] - ) - return np.arange(len(self)) - - def filter_indices_by_size(self, indices, max_positions=None): - """ - Filter each sub-dataset independently, then update the round robin to work - on the filtered sub-datasets. 
- """ - - def _deep_until_language_pair(dataset): - if isinstance(dataset, LanguagePairDataset): - return dataset - if hasattr(dataset, "tgt_dataset"): - return _deep_until_language_pair(dataset.tgt_dataset) - if hasattr(dataset, "dataset"): - return _deep_until_language_pair(dataset.dataset) - raise Exception(f"Don't know how to unwrap this dataset: {dataset}") - - if not isinstance(max_positions, dict): - max_positions = {k: max_positions for k in self.datasets.keys()} - ignored_some = False - for key, dataset in self.datasets.items(): - dataset = _deep_until_language_pair(dataset) - self._ordered_indices[key], ignored = dataset.filter_indices_by_size( - self._ordered_indices[key], max_positions[key] - ) - if len(ignored) > 0: - ignored_some = True - logger.warning( - f"{len(ignored)} samples from {key} have invalid sizes and will be skipped, " - f"max_positions={max_positions[key]}, first few sample ids={ignored[:10]}" - ) - # Since we are modifying in place the _ordered_indices, - # it's not possible anymore to return valid ignored indices. - # Hopefully the extra debug information print above should be enough to debug. - # Ideally we would receive ignore_invalid_inputs so that we could have - # a proper error message. - return (np.arange(len(self)), [0] if ignored_some else []) - - @property - def supports_prefetch(self): - return all( - getattr(dataset, "supports_prefetch", False) - for dataset in self.datasets.values() - ) - - def prefetch(self, indices): - for key, dataset in self.datasets.items(): - dataset.prefetch([self._map_index(key, index) for index in indices]) diff --git a/kosmos-g/fairseq/fairseq/data/shorten_dataset.py b/kosmos-g/fairseq/fairseq/data/shorten_dataset.py deleted file mode 100644 index 6ebb5d88f..000000000 --- a/kosmos-g/fairseq/fairseq/data/shorten_dataset.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -from fairseq.data import data_utils - -from . 
diff --git a/kosmos-g/fairseq/fairseq/data/sort_dataset.py b/kosmos-g/fairseq/fairseq/data/sort_dataset.py
deleted file mode 100644
index b3890e727..000000000
--- a/kosmos-g/fairseq/fairseq/data/sort_dataset.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-
-from . import BaseWrapperDataset
-
-
-class SortDataset(BaseWrapperDataset):
-    def __init__(self, dataset, sort_order):
-        super().__init__(dataset)
-        if not isinstance(sort_order, (list, tuple)):
-            sort_order = [sort_order]
-        self.sort_order = sort_order
-
-        assert all(len(so) == len(dataset) for so in sort_order)
-
-    def ordered_indices(self):
-        return np.lexsort(self.sort_order)
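`np.lexsort` treats the *last* array in `sort_order` as the primary key, so e.g. `SortDataset(ds, [shuffle_order, lengths])` sorts by length first and breaks ties by the shuffle order. A two-key demonstration:

```python
import numpy as np

tiebreak = np.array([2, 0, 1, 3])   # secondary key (listed first)
primary = np.array([1, 1, 0, 0])    # last key in the list is the primary sort key
print(np.lexsort([tiebreak, primary]))  # -> [2 3 1 0]
```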
diff --git a/kosmos-g/fairseq/fairseq/data/strip_token_dataset.py b/kosmos-g/fairseq/fairseq/data/strip_token_dataset.py
deleted file mode 100644
index cae39ba4d..000000000
--- a/kosmos-g/fairseq/fairseq/data/strip_token_dataset.py
+++ /dev/null
@@ -1,20 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import BaseWrapperDataset
-
-
-class StripTokenDataset(BaseWrapperDataset):
-    def __init__(self, dataset, id_to_strip):
-        super().__init__(dataset)
-        self.id_to_strip = id_to_strip
-
-    def __getitem__(self, index):
-        item = self.dataset[index]
-        while len(item) > 0 and item[-1] == self.id_to_strip:
-            item = item[:-1]
-        while len(item) > 0 and item[0] == self.id_to_strip:
-            item = item[1:]
-        return item
diff --git a/kosmos-g/fairseq/fairseq/data/subsample_dataset.py b/kosmos-g/fairseq/fairseq/data/subsample_dataset.py
deleted file mode 100644
index 48feaf883..000000000
--- a/kosmos-g/fairseq/fairseq/data/subsample_dataset.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-
-import numpy as np
-
-from . import BaseWrapperDataset
-
-
-logger = logging.getLogger(__name__)
-
-
-class SubsampleDataset(BaseWrapperDataset):
-    """Subsamples a given dataset by a specified ratio. Subsampling is done on the number of examples
-
-    Args:
-        dataset (~torch.utils.data.Dataset): dataset to subsample
-        size_ratio (float): the ratio to subsample to; must be between 0 and 1 (exclusive)
-    """
-
-    def __init__(self, dataset, size_ratio, shuffle=False):
-        super().__init__(dataset)
-        assert size_ratio < 1
-        self.actual_size = np.ceil(len(dataset) * size_ratio).astype(int)
-        self.indices = np.random.choice(
-            list(range(len(self.dataset))), self.actual_size, replace=False
-        )
-        self.shuffle = shuffle
-        logger.info(
-            "subsampled dataset from {} to {} (ratio={})".format(
-                len(self.dataset), self.actual_size, size_ratio
-            )
-        )
-
-    def __getitem__(self, index):
-        return self.dataset[self.indices[index]]
-
-    def __len__(self):
-        return self.actual_size
-
-    def collater(self, samples):
-        return self.dataset.collater(samples)
-
-    @property
-    def sizes(self):
-        return self.dataset.sizes[self.indices]
-
-    @property
-    def name(self):
-        return self.dataset.name
-
-    def num_tokens(self, index):
-        return self.dataset.num_tokens(self.indices[index])
-
-    def size(self, index):
-        return self.dataset.size(self.indices[index])
-
-    def ordered_indices(self):
-        """Return an ordered list of indices. Batches will be constructed based
-        on this order."""
-        if self.shuffle:
-            order = [np.random.permutation(len(self))]
-        else:
-            order = [np.arange(len(self))]
-        order.append(self.sizes)
-        return np.lexsort(order)
-
-    def prefetch(self, indices):
-        self.dataset.prefetch(self.indices[indices])
- -from enum import Enum - - -class TextCompressionLevel(Enum): - none = 0 - low = 1 - high = 2 - - -class TextCompressor(object): - def __init__( - self, level: TextCompressionLevel, max_input_byte_length: int = 2 ** 16 - ): - self.level = level - self.max_input_length = max_input_byte_length - - def compress(self, text: str) -> bytes: - if self.level == TextCompressionLevel.low: - import zlib - - # zlib: built-in, fast - return zlib.compress(text.encode(), level=0) - elif self.level == TextCompressionLevel.high: - try: - import unishox2 - - # unishox2: optimized for short text but slower - except ImportError: - raise ImportError( - "Please install unishox2 for the text compression feature: " - "pip install unishox2-py3" - ) - assert len(text.encode()) <= self.max_input_length - return unishox2.compress(text)[0] - else: - return text.encode() - - def decompress(self, compressed: bytes) -> str: - if self.level == TextCompressionLevel.low: - import zlib - - return zlib.decompress(compressed).decode() - elif self.level == TextCompressionLevel.high: - try: - import unishox2 - except ImportError: - raise ImportError( - "Please install unishox2 for the text compression feature: " - "pip install unishox2-py3" - ) - return unishox2.decompress(compressed, self.max_input_length) - else: - return compressed.decode() diff --git a/kosmos-g/fairseq/fairseq/data/token_block_dataset.py b/kosmos-g/fairseq/fairseq/data/token_block_dataset.py deleted file mode 100644 index a414e7ef6..000000000 --- a/kosmos-g/fairseq/fairseq/data/token_block_dataset.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from fairseq.data import FairseqDataset, plasma_utils -from fairseq.data.indexed_dataset import best_fitting_int_dtype -from typing import Tuple - - -class TokenBlockDataset(FairseqDataset): - """Break a Dataset of tokens into blocks. - - Args: - dataset (~torch.utils.data.Dataset): dataset to break into blocks - sizes (List[int]): sentence lengths (required for 'complete' and 'eos') - block_size (int): maximum block size (ignored in 'eos' break mode) - break_mode (str, optional): Mode used for breaking tokens. Values can - be one of: - - 'none': break tokens into equally sized blocks (up to block_size) - - 'complete': break tokens into blocks (up to block_size) such that - blocks contains complete sentences, although block_size may be - exceeded if some sentences exceed block_size - - 'complete_doc': similar to 'complete' mode, but do not - cross document boundaries - - 'eos': each block contains one sentence (block_size is ignored) - include_targets (bool, optional): return next tokens as targets - (default: False). - document_sep_len (int, optional): document separator size (required for - 'complete_doc' break mode). Typically 1 if the sentences have eos - and 0 otherwise. 
- """ - - def __init__( - self, - dataset, - sizes, - block_size, - pad, - eos, - break_mode=None, - include_targets=False, - document_sep_len=1, - use_plasma_view=False, - split_path=None, - plasma_path=None, - ): - - super().__init__() - self.dataset = dataset - self.pad = pad - self.eos = eos - self.include_targets = include_targets - - assert len(dataset) > 0 - - assert len(dataset) == len(sizes) - _sizes, block_to_dataset_index, slice_indices = self._build_slice_indices( - sizes, break_mode, document_sep_len, block_size - ) - if use_plasma_view: - plasma_id = (block_size, document_sep_len, str(break_mode), len(dataset)) - self._slice_indices = plasma_utils.PlasmaView( - slice_indices, split_path, (plasma_id, 0), plasma_path=plasma_path - ) - self._sizes = plasma_utils.PlasmaView( - _sizes, split_path, (plasma_id, 1), plasma_path=plasma_path - ) - self._block_to_dataset_index = plasma_utils.PlasmaView( - block_to_dataset_index, - split_path, - (plasma_id, 2), - plasma_path=plasma_path, - ) - else: - self._slice_indices = plasma_utils.PlasmaArray(slice_indices) - self._sizes = plasma_utils.PlasmaArray(_sizes) - self._block_to_dataset_index = plasma_utils.PlasmaArray( - block_to_dataset_index - ) - - @staticmethod - def _build_slice_indices( - sizes, break_mode, document_sep_len, block_size - ) -> Tuple[np.ndarray]: - """Use token_block_utils_fast to build arrays for indexing into self.dataset""" - try: - from fairseq.data.token_block_utils_fast import ( - _get_slice_indices_fast, - _get_block_to_dataset_index_fast, - ) - except ImportError: - raise ImportError( - "Please build Cython components with: `pip install --editable .` " - "or `python setup.py build_ext --inplace`" - ) - - if isinstance(sizes, list): - sizes = np.array(sizes, dtype=np.int64) - else: - if torch.is_tensor(sizes): - sizes = sizes.numpy() - sizes = sizes.astype(np.int64) - - break_mode = break_mode if break_mode is not None else "none" - - # For "eos" break-mode, block_size is not required parameters. 
- if break_mode == "eos" and block_size is None: - block_size = 0 - - slice_indices = _get_slice_indices_fast( - sizes, str(break_mode), block_size, document_sep_len - ) - _sizes = slice_indices[:, 1] - slice_indices[:, 0] - - # build index mapping block indices to the underlying dataset indices - if break_mode == "eos": - # much faster version for eos break mode - block_to_dataset_index = np.stack( - [ - np.arange(len(sizes)), # starting index in dataset - np.zeros( - len(sizes), dtype=np.compat.long - ), # starting offset within starting index - np.arange(len(sizes)), # ending index in dataset - ], - 1, - ) - else: - block_to_dataset_index = _get_block_to_dataset_index_fast( - sizes, - slice_indices, - ) - size_dtype = np.uint16 if block_size < 65535 else np.uint32 - num_tokens = slice_indices[-1].max() - slice_indices_dtype = best_fitting_int_dtype(num_tokens) - slice_indices = slice_indices.astype(slice_indices_dtype) - _sizes = _sizes.astype(size_dtype) - block_to_dataset_index = block_to_dataset_index.astype(slice_indices_dtype) - return _sizes, block_to_dataset_index, slice_indices - - @property - def slice_indices(self): - return self._slice_indices.array - - @property - def sizes(self): - return self._sizes.array - - @property - def block_to_dataset_index(self): - return self._block_to_dataset_index.array - - def attr(self, attr: str, index: int): - start_ds_idx, _, _ = self.block_to_dataset_index[index] - return self.dataset.attr(attr, start_ds_idx) - - def __getitem__(self, index): - start_ds_idx, start_offset, end_ds_idx = self.block_to_dataset_index[index] - - buffer = torch.cat( - [self.dataset[idx] for idx in range(start_ds_idx, end_ds_idx + 1)] - ) - slice_s, slice_e = self.slice_indices[index] - length = slice_e - slice_s - s, e = start_offset, start_offset + length - item = buffer[s:e] - - if self.include_targets: - # *target* is the original sentence (=item) - # *source* is shifted right by 1 (maybe left-padded with eos) - # *past_target* is shifted right by 2 (left-padded as needed) - if s == 0: - source = torch.cat([item.new([self.eos]), buffer[0 : e - 1]]) - past_target = torch.cat( - [item.new([self.pad, self.eos]), buffer[0 : e - 2]] - ) - else: - source = buffer[s - 1 : e - 1] - if s == 1: - past_target = torch.cat([item.new([self.eos]), buffer[0 : e - 2]]) - else: - past_target = buffer[s - 2 : e - 2] - - return source, item, past_target - - return item - - def __len__(self): - return len(self.slice_indices) - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - self.dataset.prefetch( - { - ds_idx - for index in indices - for start_ds_idx, _, end_ds_idx in [self.block_to_dataset_index[index]] - for ds_idx in range(start_ds_idx, end_ds_idx + 1) - } - ) diff --git a/kosmos-g/fairseq/fairseq/data/token_block_utils_fast.pyx b/kosmos-g/fairseq/fairseq/data/token_block_utils_fast.pyx deleted file mode 100644 index 08af4f306..000000000 --- a/kosmos-g/fairseq/fairseq/data/token_block_utils_fast.pyx +++ /dev/null @@ -1,187 +0,0 @@ -# cython: language_level=3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import numpy as np -import torch -from itertools import chain -from libc.math cimport ceil - -cimport cython -cimport numpy as np - -from libc.stdint cimport int32_t, int64_t - -DTYPE = np.int64 -ctypedef int64_t DTYPE_t - - -@cython.boundscheck(False) -@cython.wraparound(False) -@cython.nonecheck(False) -cdef np.ndarray[DTYPE_t, ndim=2] _get_slice_indices_none_mode(np.ndarray[DTYPE_t, ndim=1] sizes, int block_size): - cdef DTYPE_t total_size = sizes.sum() - cdef DTYPE_t length = <DTYPE_t> ceil(total_size / <double> block_size) - cdef np.ndarray[DTYPE_t, ndim=2] slice_indices = np.zeros([length, 2], dtype=DTYPE) - cdef DTYPE_t[:, :] slice_indices_view = slice_indices - cdef DTYPE_t i - cdef DTYPE_t start - cdef DTYPE_t end - for i in range(length): - start = i * block_size - end = min(start + block_size, total_size) - slice_indices_view[i][0] = start - slice_indices_view[i][1] = end - return slice_indices - - -cdef np.ndarray[DTYPE_t, ndim=2] _fast_convert_to_np_array(list list_of_list): - """ - Faster function to convert DTYPE_t list of list. - Only fast when there are huge number of rows and low number of columns. - """ - cdef np.ndarray[DTYPE_t, ndim=1] flat = np.fromiter(chain.from_iterable(list_of_list), DTYPE, -1) - return flat.reshape((len(list_of_list), -1)) - - -@cython.boundscheck(False) -@cython.wraparound(False) -@cython.nonecheck(False) -cpdef np.ndarray[DTYPE_t, ndim=2] _get_slice_indices_fast(np.ndarray[DTYPE_t, ndim=1] sizes, str break_mode, int block_size, int document_sep_len): - cdef DTYPE_t tok_idx = 0 - cdef DTYPE_t sz_idx = 0 - cdef DTYPE_t curr_size = 0 - cdef DTYPE_t i = 0 - cdef DTYPE_t length - cdef DTYPE_t total_size - cdef DTYPE_t[:] sizes_view = sizes - cdef np.ndarray[DTYPE_t, ndim=2] slice_indices - cdef list slice_indices_list = [] - - if break_mode is None or break_mode == 'none': - slice_indices = _get_slice_indices_none_mode(sizes, block_size) - elif break_mode == 'complete': - while sz_idx < len(sizes_view): - if curr_size + sizes_view[sz_idx] <= block_size or curr_size == 0: - curr_size += sizes_view[sz_idx] - sz_idx += 1 - else: - slice_indices_list.append((tok_idx, tok_idx + curr_size)) - tok_idx += curr_size - curr_size = 0 - if curr_size > 0: - slice_indices_list.append((tok_idx, tok_idx + curr_size)) - slice_indices = _fast_convert_to_np_array(slice_indices_list) - elif break_mode == 'complete_doc': - while sz_idx < len(sizes_view): - if ( - (curr_size + sizes_view[sz_idx] <= block_size or curr_size == 0) - # an empty sentence indicates end-of-document: - and sizes_view[sz_idx] != document_sep_len - ): - curr_size += sizes_view[sz_idx] - sz_idx += 1 - else: - # Only keep non-empty documents. 
- if curr_size > 1: - slice_indices_list.append((tok_idx, tok_idx + curr_size)) - tok_idx += curr_size - curr_size = 0 - if sizes_view[sz_idx] == document_sep_len: - tok_idx += sizes_view[sz_idx] - sz_idx += 1 - if curr_size > 1: - slice_indices_list.append((tok_idx, tok_idx + curr_size)) - slice_indices = _fast_convert_to_np_array(slice_indices_list) - elif break_mode == 'eos': - slice_indices = np.zeros((len(sizes), 2), dtype=DTYPE) - cumsum = sizes.cumsum(axis=0) - slice_indices[1:, 0] = cumsum[:cumsum.shape[0] - 1] - slice_indices[:, 1] = cumsum - else: - raise ValueError('Invalid break_mode: ' + break_mode) - return slice_indices - - -@cython.boundscheck(False) -@cython.wraparound(False) -@cython.nonecheck(False) -cpdef np.ndarray[DTYPE_t, ndim=2] _get_block_to_dataset_index_fast(np.ndarray[DTYPE_t, ndim=1] sizes, np.ndarray[DTYPE_t, ndim=2] slice_indices): - cdef DTYPE_t start_ds_idx - cdef DTYPE_t start_offset - cdef DTYPE_t end_ds_idx - cdef DTYPE_t i - cdef DTYPE_t s - cdef DTYPE_t e - cdef DatasetSearcher ds = DatasetSearcher(sizes) - cdef np.ndarray[DTYPE_t, ndim=2] block_to_dataset_index = np.zeros([len(slice_indices), 3], dtype=DTYPE) - cdef DTYPE_t[:, :] block_to_dataset_index_view = block_to_dataset_index - cdef DTYPE_t[:, :] slice_indices_view = slice_indices - cdef Py_ssize_t x_max = slice_indices.shape[0] - - for i in range(x_max): - s = slice_indices_view[i][0] - e = slice_indices_view[i][1] - ds.seek(s) - start_ds_idx = ds.current_index - start_offset = ds.current_offset - if e <= s: - end_ds_idx = start_ds_idx - else: - ds.seek(e - 1) - end_ds_idx = ds.current_index - block_to_dataset_index_view[i][0] = start_ds_idx # starting index in dataset - block_to_dataset_index_view[i][1] = start_offset # starting offset within starting index - block_to_dataset_index_view[i][2] = end_ds_idx # ending index in dataset - return block_to_dataset_index - - -cdef class DatasetSearcher(object): - """Helper for mapping "flat" indices to indices and offsets in an - underlying dataset.""" - cdef DTYPE_t current_i - cdef DTYPE_t current_offset - cdef DTYPE_t current_index - cdef DTYPE_t[:] sizes - - def __init__(self, DTYPE_t[:] sizes): - self.sizes = sizes - self.reset() - - cdef reset(self): - self.current_offset = 0 # offset within current index in underlying dataset - self.current_i = 0 # "flat" index - self.current_index = 0 # index in underlying dataset - - @cython.boundscheck(False) - @cython.wraparound(False) - @cython.nonecheck(False) - cdef int step(self, DTYPE_t i): - cdef DTYPE_t to_consume - cdef DTYPE_t remaining - if i < self.current_i: - self.reset() - if i > self.current_i: - to_consume = i - self.current_i - remaining = self.sizes[self.current_index] - self.current_offset - if remaining > to_consume: - self.current_offset += to_consume - self.current_i += to_consume - else: - assert remaining >= 0 - self.current_i += remaining - self.current_index += 1 - self.current_offset = 0 - return 1 - return 0 - - @cython.boundscheck(False) - @cython.wraparound(False) - @cython.nonecheck(False) - cdef seek(self, DTYPE_t i): - cdef int not_done = 1 - while not_done == 1: - not_done = self.step(i) - assert self.current_i == i diff --git a/kosmos-g/fairseq/fairseq/data/transform_eos_concat_langpair_dataset.py b/kosmos-g/fairseq/fairseq/data/transform_eos_concat_langpair_dataset.py deleted file mode 100644 index 638bd1a3d..000000000 --- a/kosmos-g/fairseq/fairseq/data/transform_eos_concat_langpair_dataset.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) Facebook, Inc. 
and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import torch -from torch.utils.data.dataloader import default_collate - -from fairseq.data import ConcatDataset - -logger = logging.getLogger(__name__) - - -class TransformEosConcatLangPairDataset(ConcatDataset): - """ - It is a combination of TransformEosLangPairDataset and ConcatDataset for multiple LangPairDataset datasets. - Assume all datasets share the same src_eos, tgt_bos, left_pad_source and left_pad_target - """ - - def __init__( - self, - datasets, - src_eos, - tgt_bos, - new_src_eos=None, - new_tgt_bos=None, - ): - super().__init__(datasets) - if new_src_eos is not None: - assert len(new_src_eos) == len(datasets) - else: - new_src_eos = [] - if new_tgt_bos is not None: - assert len(new_tgt_bos) == len(datasets) - else: - new_tgt_bos = [] - self.src_eos = src_eos - self.tgt_bos = tgt_bos - self.new_src_eos = ( - torch.LongTensor(new_src_eos).cpu() if len(new_src_eos) > 0 else [] - ) - self.new_tgt_bos = ( - torch.LongTensor(new_tgt_bos).cpu() if len(new_tgt_bos) > 0 else [] - ) - self.left_pad_source = self.is_left_pad_source(datasets) - self.left_pad_target = self.is_left_pad_target(datasets) - self.pad_idx = self.src_dict_pad() - - def src_dict_pad(self): - if hasattr(self.datasets[0], "src_dict"): - return self.datasets[0].src_dict.pad() - if hasattr(self.datasets[0], "dataset"): - return self.datasets[0].dataset.src_dict.pad() - raise NotImplementedError("No src_dict is found") - - def __getitem__(self, idx): - dataset_idx, sample_idx = self._get_dataset_and_sample_index(idx) - return dataset_idx, self.datasets[dataset_idx][sample_idx] - - def is_left_pad_source(self, datasets): - def _left_pad_source(ds): - if hasattr(ds, "left_pad_source"): - return ds.left_pad_source - if hasattr(ds, "dataset"): - return _left_pad_source(ds.dataset) - logger.warn(f"{type(ds)} has no left_pad_source, using default True") - return True - - left_pad_source = _left_pad_source(datasets[0]) - for ds in datasets: - if left_pad_source != _left_pad_source(ds): - raise ValueError("Different left_pad_source setting detected!") - return left_pad_source - - def is_left_pad_target(self, datasets): - def _left_pad_target(ds): - if hasattr(ds, "left_pad_target"): - return ds.left_pad_target - if hasattr(ds, "dataset"): - return _left_pad_target(ds.dataset) - logger.warn(f"{type(ds)} has no left_pad_target, using default False") - return False - - left_pad_target = _left_pad_target(datasets[0]) - for ds in datasets: - if left_pad_target != _left_pad_target(ds): - raise ValueError("Different left_pad_target setting detected!") - return left_pad_target - - def collater(self, samples, **extra_args): - if len(samples) == 0: - return samples - - dataset_ids = [s[0] for s in samples] - samples = [s[1] for s in samples] - - if hasattr(self.datasets[0], "collater"): - samples = self.datasets[0].collater(samples, **extra_args) - else: - samples = default_collate(samples, **extra_args) - - if len(self.new_src_eos) > 0: - if self.left_pad_source: - assert ( - samples["net_input"]["src_tokens"][:, -1] != self.src_eos - ).sum() == 0 - samples["net_input"]["src_tokens"][:, -1] = self.new_src_eos[ - dataset_ids - ] - - else: - eos_idx = samples["net_input"]["src_lengths"] - 1 - assert ( - samples["net_input"]["src_tokens"][ - torch.arange(eos_idx.size(0)), eos_idx - ] - != self.src_eos - ).sum() == 0 - samples["net_input"]["src_tokens"].scatter_( - 1, 
eos_idx.view(-1, 1), self.new_src_eos[dataset_ids].view(-1, 1) - ) - - if len(self.new_tgt_bos) > 0 and "prev_output_tokens" in samples["net_input"]: - if self.left_pad_target: - # TODO: support different padding direction on target side - raise NotImplementedError( - "TransformEosLangPairDataset does not implement --left-pad-target True option" - ) - else: - assert ( - samples["net_input"]["prev_output_tokens"][:, 0] != self.tgt_bos - ).sum() == 0 - samples["net_input"]["prev_output_tokens"][:, 0] = self.new_tgt_bos[ - dataset_ids - ] - - return samples diff --git a/kosmos-g/fairseq/fairseq/data/transform_eos_dataset.py b/kosmos-g/fairseq/fairseq/data/transform_eos_dataset.py deleted file mode 100644 index fb14ff018..000000000 --- a/kosmos-g/fairseq/fairseq/data/transform_eos_dataset.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from . import FairseqDataset - - -class TransformEosDataset(FairseqDataset): - """A :class:`~fairseq.data.FairseqDataset` wrapper that appends/prepends/strips EOS. - - Note that the transformation is applied in :func:`collater`. - - Args: - dataset (~fairseq.data.FairseqDataset): dataset to wrap - eos (int): index of the end-of-sentence symbol - append_eos_to_src (bool, optional): append EOS to the end of src - remove_eos_from_src (bool, optional): remove EOS from the end of src - append_eos_to_tgt (bool, optional): append EOS to the end of tgt - remove_eos_from_tgt (bool, optional): remove EOS from the end of tgt - """ - - def __init__( - self, - dataset, - eos, - append_eos_to_src=False, - remove_eos_from_src=False, - append_eos_to_tgt=False, - remove_eos_from_tgt=False, - has_target=True, - ): - if not isinstance(dataset, FairseqDataset): - raise ValueError("dataset must be an instance of FairseqDataset") - if append_eos_to_src and remove_eos_from_src: - raise ValueError("cannot combine append_eos_to_src and remove_eos_from_src") - if append_eos_to_tgt and remove_eos_from_tgt: - raise ValueError("cannot combine append_eos_to_tgt and remove_eos_from_tgt") - - self.dataset = dataset - self.eos = torch.LongTensor([eos]) - self.append_eos_to_src = append_eos_to_src - self.remove_eos_from_src = remove_eos_from_src - self.append_eos_to_tgt = append_eos_to_tgt - self.remove_eos_from_tgt = remove_eos_from_tgt - self.has_target = has_target - - # precompute how we should adjust the reported sizes - self._src_delta = 0 - self._src_delta += 1 if append_eos_to_src else 0 - self._src_delta -= 1 if remove_eos_from_src else 0 - self._tgt_delta = 0 - self._tgt_delta += 1 if append_eos_to_tgt else 0 - self._tgt_delta -= 1 if remove_eos_from_tgt else 0 - - self._checked_src = False - self._checked_tgt = False - - def _check_src(self, src, expect_eos): - if not self._checked_src: - assert (src[-1] == self.eos[0]) == expect_eos - self._checked_src = True - - def _check_tgt(self, tgt, expect_eos): - if self.has_target and not self._checked_tgt: - assert (tgt[-1] == self.eos[0]) == expect_eos - self._checked_tgt = True - - def __getitem__(self, index): - return self.dataset[index] - - def __len__(self): - return len(self.dataset) - - def collater(self, samples): - def transform(item): - if self.append_eos_to_src: - self.eos = self.eos.to(device=item["source"].device) - self._check_src(item["source"], expect_eos=False) - item["source"] = torch.cat([item["source"], self.eos]) - if 
self.remove_eos_from_src: - self.eos = self.eos.to(device=item["source"].device) - self._check_src(item["source"], expect_eos=True) - item["source"] = item["source"][:-1] - if self.append_eos_to_tgt: - self.eos = self.eos.to(device=item["target"].device) - self._check_tgt(item["target"], expect_eos=False) - item["target"] = torch.cat([item["target"], self.eos]) - if self.remove_eos_from_tgt: - self.eos = self.eos.to(device=item["target"].device) - self._check_tgt(item["target"], expect_eos=True) - item["target"] = item["target"][:-1] - return item - - samples = list(map(transform, samples)) - return self.dataset.collater(samples) - - def num_tokens(self, index): - return self.dataset.num_tokens(index) - - def size(self, index): - if self.has_target: - src_len, tgt_len = self.dataset.size(index) - return (src_len + self._src_delta, tgt_len + self._tgt_delta) - else: - return self.dataset.size(index) - - def ordered_indices(self): - # NOTE: we assume that the ordering does not change based on the - # addition or removal of eos - return self.dataset.ordered_indices() - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.dataset.prefetch(indices) diff --git a/kosmos-g/fairseq/fairseq/data/transform_eos_lang_pair_dataset.py b/kosmos-g/fairseq/fairseq/data/transform_eos_lang_pair_dataset.py deleted file mode 100644 index d8b210901..000000000 --- a/kosmos-g/fairseq/fairseq/data/transform_eos_lang_pair_dataset.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from typing import Optional - -import torch - -from . import FairseqDataset - - -class TransformEosLangPairDataset(FairseqDataset): - """A :class:`~fairseq.data.FairseqDataset` wrapper that transform bos on - collated samples of language pair dataset. - - Note that the transformation is applied in :func:`collater`. 
- - Args: - dataset (~fairseq.data.FairseqDataset): dataset that collates sample into - LanguagePairDataset schema - src_eos (int): original source end-of-sentence symbol index to be replaced - new_src_eos (int, optional): new end-of-sentence symbol index to replace source eos symbol - tgt_bos (int, optional): original target beginning-of-sentence symbol index to be replaced - new_tgt_bos (int, optional): new beginning-of-sentence symbol index to replace at the - beginning of 'prev_output_tokens' - """ - - def __init__( - self, - dataset: FairseqDataset, - src_eos: int, - new_src_eos: Optional[int] = None, - tgt_bos: Optional[int] = None, - new_tgt_bos: Optional[int] = None, - ): - self.dataset = dataset - self.src_eos = src_eos - self.new_src_eos = new_src_eos - self.tgt_bos = tgt_bos - self.new_tgt_bos = new_tgt_bos - - def __getitem__(self, index): - return self.dataset[index] - - def __len__(self): - return len(self.dataset) - - def collater(self, samples, **extra_args): - samples = self.dataset.collater(samples, **extra_args) - if len(samples) == 0: - return samples - - if "net_input" not in samples: - return samples - - if self.new_src_eos is not None: - if self.dataset.left_pad_source: - assert ( - samples["net_input"]["src_tokens"][:, -1] != self.src_eos - ).sum() == 0 - samples["net_input"]["src_tokens"][:, -1] = self.new_src_eos - else: - eos_idx = samples["net_input"]["src_lengths"] - 1 - assert ( - samples["net_input"]["src_tokens"][ - torch.arange(eos_idx.size(0)), eos_idx - ] - != self.src_eos - ).sum() == 0 - eos_idx = eos_idx.resize_(len(samples["net_input"]["src_lengths"]), 1) - samples["net_input"]["src_tokens"].scatter_( - 1, eos_idx, self.new_src_eos - ) - - if ( - self.new_tgt_bos is not None - and "prev_output_tokens" in samples["net_input"] - ): - if self.dataset.left_pad_target: - # TODO: support different padding direction on target side - raise NotImplementedError( - "TransformEosLangPairDataset does not implement --left-pad-target True option" - ) - else: - assert ( - samples["net_input"]["prev_output_tokens"][:, 0] != self.tgt_bos - ).sum() == 0 - samples["net_input"]["prev_output_tokens"][:, 0] = self.new_tgt_bos - - return samples - - def num_tokens(self, index): - return self.dataset.num_tokens(index) - - def size(self, index): - return self.dataset.size(index) - - @property - def sizes(self): - # dataset.sizes can be a dynamically computed sizes: - return self.dataset.sizes - - def ordered_indices(self): - return self.dataset.ordered_indices() - - @property - def supports_prefetch(self): - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.dataset.prefetch(indices) diff --git a/kosmos-g/fairseq/fairseq/dataclass/__init__.py b/kosmos-g/fairseq/fairseq/dataclass/__init__.py deleted file mode 100644 index 25408d28e..000000000 --- a/kosmos-g/fairseq/fairseq/dataclass/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .configs import FairseqDataclass -from .constants import ChoiceEnum - - -__all__ = [ - "FairseqDataclass", - "ChoiceEnum", -] diff --git a/kosmos-g/fairseq/fairseq/dataclass/configs.py b/kosmos-g/fairseq/fairseq/dataclass/configs.py deleted file mode 100644 index 15b2750b3..000000000 --- a/kosmos-g/fairseq/fairseq/dataclass/configs.py +++ /dev/null @@ -1,1129 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import sys -from dataclasses import _MISSING_TYPE, dataclass, field -from typing import Any, List, Optional - -import torch - -from fairseq.dataclass.constants import ( - DATASET_IMPL_CHOICES, - DDP_BACKEND_CHOICES, - DDP_COMM_HOOK_CHOICES, - GENERATION_CONSTRAINTS_CHOICES, - GENERATION_DECODING_FORMAT_CHOICES, - LOG_FORMAT_CHOICES, - PIPELINE_CHECKPOINT_CHOICES, - PRINT_ALIGNMENT_CHOICES, - ZERO_SHARDING_CHOICES, -) - -from omegaconf import II, MISSING - - -@dataclass -class FairseqDataclass: - """fairseq base dataclass that supported fetching attributes and metas""" - - _name: Optional[str] = None - - @staticmethod - def name(): - return None - - def _get_all_attributes(self) -> List[str]: - return [k for k in self.__dataclass_fields__.keys()] - - def _get_meta( - self, attribute_name: str, meta: str, default: Optional[Any] = None - ) -> Any: - return self.__dataclass_fields__[attribute_name].metadata.get(meta, default) - - def _get_name(self, attribute_name: str) -> str: - return self.__dataclass_fields__[attribute_name].name - - def _get_default(self, attribute_name: str) -> Any: - if hasattr(self, attribute_name): - if str(getattr(self, attribute_name)).startswith("${"): - return str(getattr(self, attribute_name)) - elif str(self.__dataclass_fields__[attribute_name].default).startswith( - "${" - ): - return str(self.__dataclass_fields__[attribute_name].default) - elif ( - getattr(self, attribute_name) - != self.__dataclass_fields__[attribute_name].default - ): - return getattr(self, attribute_name) - - f = self.__dataclass_fields__[attribute_name] - if not isinstance(f.default_factory, _MISSING_TYPE): - return f.default_factory() - return f.default - - def _get_type(self, attribute_name: str) -> Any: - return self.__dataclass_fields__[attribute_name].type - - def _get_help(self, attribute_name: str) -> Any: - return self._get_meta(attribute_name, "help") - - def _get_argparse_const(self, attribute_name: str) -> Any: - return self._get_meta(attribute_name, "argparse_const") - - def _get_argparse_alias(self, attribute_name: str) -> Any: - return self._get_meta(attribute_name, "argparse_alias") - - def _get_choices(self, attribute_name: str) -> Any: - return self._get_meta(attribute_name, "choices") - - @classmethod - def from_namespace(cls, args): - if isinstance(args, cls): - return args - else: - config = cls() - for k in config.__dataclass_fields__.keys(): - if k.startswith("_"): - # private member, skip - continue - if hasattr(args, k): - setattr(config, k, getattr(args, k)) - - return config - - -@dataclass -class CommonConfig(FairseqDataclass): - # This is the core dataclass including common parameters shared by all different jobs. Please append your params to other dataclasses if they were - # used for a particular purpose or task, such as those dedicated for `distributed training`, `optimization`, etc. 
- no_progress_bar: bool = field( - default=False, metadata={"help": "disable progress bar"} - ) - log_interval: int = field( - default=100, - metadata={ - "help": "log progress every N batches (when progress bar is disabled)" - }, - ) - log_format: Optional[LOG_FORMAT_CHOICES] = field( - default=None, metadata={"help": "log format to use"} - ) - log_file: Optional[str] = field( - default=None, metadata={"help": "log file to copy metrics to."} - ) - tensorboard_logdir: Optional[str] = field( - default=None, - metadata={ - "help": "path to save logs for tensorboard, should match --logdir " - "of running tensorboard (default: no tensorboard logging)" - }, - ) - wandb_project: Optional[str] = field( - default=None, - metadata={"help": "Weights and Biases project name to use for logging"}, - ) - azureml_logging: Optional[bool] = field( - default=False, - metadata={"help": "Log scalars to AzureML context"}, - ) - seed: int = field( - default=1, metadata={"help": "pseudo random number generator seed"} - ) - cpu: bool = field(default=False, metadata={"help": "use CPU instead of CUDA"}) - tpu: bool = field(default=False, metadata={"help": "use TPU instead of CUDA"}) - bf16: bool = field(default=False, metadata={"help": "use bfloat16; implies --tpu"}) - memory_efficient_bf16: bool = field( - default=False, - metadata={ - "help": "use a memory-efficient version of BF16 training; implies --bf16" - }, - ) - fp16: bool = field(default=False, metadata={"help": "use FP16"}) - memory_efficient_fp16: bool = field( - default=False, - metadata={ - "help": "use a memory-efficient version of FP16 training; implies --fp16" - }, - ) - fp16_no_flatten_grads: bool = field( - default=False, metadata={"help": "don't flatten FP16 grads tensor"} - ) - fp16_init_scale: int = field( - default=2 ** 7, metadata={"help": "default FP16 loss scale"} - ) - fp16_scale_window: Optional[int] = field( - default=None, - metadata={"help": "number of updates before increasing loss scale"}, - ) - fp16_scale_tolerance: float = field( - default=0.0, - metadata={ - "help": "pct of updates that can overflow before decreasing the loss scale" - }, - ) - on_cpu_convert_precision: bool = field( - default=False, - metadata={ - "help": "if set, the floating point conversion to fp16/bf16 runs on CPU. " - "This reduces bus transfer time and GPU memory usage." 
- }, - ) - min_loss_scale: float = field( - default=1e-4, - metadata={ - "help": "minimum FP16/AMP loss scale, after which training is stopped" - }, - ) - threshold_loss_scale: Optional[float] = field( - default=None, metadata={"help": "threshold FP16 loss scale from below"} - ) - amp: bool = field(default=False, metadata={"help": "use automatic mixed precision"}) - amp_batch_retries: int = field( - default=2, - metadata={ - "help": "number of retries of same batch after reducing loss scale with AMP" - }, - ) - amp_init_scale: int = field( - default=2 ** 7, metadata={"help": "default AMP loss scale"} - ) - amp_scale_window: Optional[int] = field( - default=None, - metadata={"help": "number of updates before increasing AMP loss scale"}, - ) - user_dir: Optional[str] = field( - default=None, - metadata={ - "help": "path to a python module containing custom extensions (tasks and/or architectures)" - }, - ) - empty_cache_freq: int = field( - default=0, - metadata={"help": "how often to clear the PyTorch CUDA cache (0 to disable)"}, - ) - all_gather_list_size: int = field( - default=16384, - metadata={"help": "number of bytes reserved for gathering stats from workers"}, - ) - model_parallel_size: int = field( - default=1, metadata={"help": "total number of GPUs to parallelize model over"} - ) - quantization_config_path: Optional[str] = field( - default=None, metadata={"help": "path to quantization config file"} - ) - profile: bool = field( - default=False, metadata={"help": "enable autograd profiler emit_nvtx"} - ) - reset_logging: bool = field( - default=False, - metadata={ - "help": "when using Hydra, reset the logging at the beginning of training" - }, - ) - suppress_crashes: bool = field( - default=False, - metadata={ - "help": "suppress crashes when training with the hydra_train entry point so that the " - "main method can return a value (useful for sweeps)" - }, - ) - use_plasma_view: bool = field( - default=False, metadata={"help": "Store indices and sizes in shared memory"} - ) - plasma_path: Optional[str] = field( - default="/tmp/plasma", - metadata={ - "help": "path to run plasma_store, defaults to /tmp/plasma. Paths outside /tmp tend to fail." 
- }, - ) - deepspeed: bool = field( - default=False, - metadata={"help": "use deepspeed instead of fairseq for training"}, - ) - zero: int = field( - default=0, - metadata={"help": "use deepspeed zero stage 1 or 2 instead of fairseq for training"}, - ) - exit_interval: int = field( - default=0, - metadata={"help": "exit after this many seconds"}, - ) - - -@dataclass -class DistributedTrainingConfig(FairseqDataclass): - distributed_world_size: int = field( - default=max(1, torch.cuda.device_count()), - metadata={ - "help": "total number of GPUs across all nodes (default: all visible GPUs)" - }, - ) - distributed_num_procs: Optional[int] = field( - default=max(1, torch.cuda.device_count()), - metadata={ - "help": "total number of processes to fork (default: all visible GPUs)" - }, - ) - distributed_rank: Optional[int] = field( - default=0, metadata={"help": "rank of the current worker"} - ) - distributed_backend: str = field( - default="nccl", metadata={"help": "distributed backend"} - ) - distributed_init_method: Optional[str] = field( - default=None, - metadata={ - "help": "typically tcp://hostname:port that will be used to " - "establish initial connetion" - }, - ) - distributed_port: int = field( - default=-1, - metadata={ - "help": "port number (not required if using --distributed-init-method)" - }, - ) - device_id: int = field( - default=0, - metadata={ - "help": "which GPU to use (usually configured automatically)", - "argparse_alias": "--local_rank", - }, - ) - distributed_no_spawn: bool = field( - default=False, - metadata={ - "help": "do not spawn multiple processes even if multiple GPUs are visible" - }, - ) - ddp_backend: DDP_BACKEND_CHOICES = field( - default="pytorch_ddp", metadata={"help": "DistributedDataParallel backend"} - ) - ddp_comm_hook: DDP_COMM_HOOK_CHOICES = field( - default="none", metadata={"help": "communication hook"} - ) - bucket_cap_mb: int = field( - default=25, metadata={"help": "bucket size for reduction"} - ) - fix_batches_to_gpus: bool = field( - default=False, - metadata={ - "help": "don't shuffle batches between GPUs; this reduces overall " - "randomness and may affect precision but avoids the cost of re-reading the data" - }, - ) - find_unused_parameters: bool = field( - default=False, - metadata={ - "help": "disable unused parameter detection (not applicable to " - "--ddp-backend=legacy_ddp)" - }, - ) - gradient_as_bucket_view: bool = field( - default=False, - metadata={ - "help": "when set to True, gradients will be views pointing to different offsets of allreduce communication buckets. This can reduce peak memory usage, where the saved memory size will be equal to the total gradients size. " - "--gradient-as-bucket-view=gradient_as_bucket_view)" - }, - ) - fast_stat_sync: bool = field( - default=False, - metadata={"help": "[deprecated] this is now defined per Criterion"}, - ) - heartbeat_timeout: int = field( - default=-1, - metadata={ - "help": "kill the job if no progress is made in N seconds; " - "set to -1 to disable" - }, - ) - broadcast_buffers: bool = field( - default=False, - metadata={ - "help": "Copy non-trainable parameters between GPUs, such as " - "batchnorm population statistics" - }, - ) - slowmo_momentum: Optional[float] = field( - default=None, - metadata={ - "help": "SlowMo momentum term; by default use 0.0 for 16 GPUs, " - "0.2 for 32 GPUs; 0.5 for 64 GPUs, 0.6 for > 64 GPUs" - }, - ) - slowmo_base_algorithm: str = field( - default="localsgd", - metadata={ - "help": "Base algorithm. Either 'localsgd' or 'sgp'. 
Please refer " - "to the documentation of 'slowmo_base_algorithm' parameter in " - "https://fairscale.readthedocs.io/en/latest/api/experimental/nn/slowmo_ddp.html " - "for more details" - }, - ) - localsgd_frequency: int = field( - default=3, metadata={"help": "Local SGD allreduce frequency"} - ) - nprocs_per_node: int = field( - default=max(1, torch.cuda.device_count()), - metadata={ - "help": "number of GPUs in each node. An allreduce operation across GPUs in " - "a node is very fast. Hence, we do allreduce across GPUs in a node, " - "and gossip across different nodes" - }, - ) - pipeline_model_parallel: bool = field( - default=False, - metadata={"help": "if set, use pipeline model parallelism across GPUs"}, - ) - pipeline_balance: Optional[str] = field( - default=None, - metadata={ - "help": "partition the model into N_K pieces, where each piece " - "contains N_i layers. The sum(args.pipeline_balance) " - "should equal the total number of layers in the model" - }, - ) - pipeline_devices: Optional[str] = field( - default=None, - metadata={ - "help": "a list of device indices indicating which device to place " - "each of the N_K partitions. The length of this list should " - "equal the length of the --pipeline-balance argument" - }, - ) - pipeline_chunks: Optional[int] = field( - default=0, metadata={"help": "microbatch count for pipeline model parallelism"} - ) - pipeline_encoder_balance: Optional[str] = field( - default=None, - metadata={ - "help": "partition the pipeline parallel encoder into N_K pieces, where each piece " - "contains N_i layers. The sum(args.pipeline_encoder_balance) " - "should equal the total number of encoder layers in the model" - }, - ) - pipeline_encoder_devices: Optional[str] = field( - default=None, - metadata={ - "help": "a list of device indices indicating which device to place " - "each of the N_K partitions. The length of this list should " - "equal the length of the --pipeline-encoder-balance argument" - }, - ) - pipeline_decoder_balance: Optional[str] = field( - default=None, - metadata={ - "help": "partition the pipeline parallel decoder into N_K pieces, where each piece " - "contains N_i layers. The sum(args.pipeline_decoder_balance) " - "should equal the total number of decoder layers in the model" - }, - ) - pipeline_decoder_devices: Optional[str] = field( - default=None, - metadata={ - "help": "a list of device indices indicating which device to place " - "each of the N_K partitions. 
The length of this list should " - "equal the length of the --pipeline-decoder-balance argument" - }, - ) - pipeline_checkpoint: PIPELINE_CHECKPOINT_CHOICES = field( - default="never", - metadata={"help": "checkpointing mode for pipeline model parallelism"}, - ) - zero_sharding: ZERO_SHARDING_CHOICES = field( - default="none", metadata={"help": "ZeRO sharding"} - ) - fp16: bool = II("common.fp16") - memory_efficient_fp16: bool = II("common.memory_efficient_fp16") - tpu: bool = II("common.tpu") - # configuration for --ddp-backend=fully_sharded - no_reshard_after_forward: bool = field( - default=False, - metadata={"help": "don't reshard parameters after forward pass"}, - ) - fp32_reduce_scatter: bool = field( - default=False, - metadata={"help": "reduce-scatter grads in FP32"}, - ) - cpu_offload: bool = field( - default=False, metadata={"help": "offload FP32 params to CPU"} - ) - use_sharded_state: bool = field( - default=False, - metadata={"help": "use sharded checkpoint files"}, - ) - not_fsdp_flatten_parameters: bool = field( - default=False, - metadata={"help": "not flatten parameter param for fsdp"}, - ) - - # group.add_argument('--deepspeed', nargs='?', const=True, default=False, - # help="Enable DeepSpeed with auto-generated config with flag and " \ - # "no argument, or pass an argument to a ds_config json to use.") - # group.add_argument("--zero", default=0, type=int, help="enable a specific ZeRO stage") - # group.add_argument('--exit-interval', type=int, default=None, - # help='Exit the program after the iteration is divisible ' - # 'by this value.') - - -@dataclass -class DatasetConfig(FairseqDataclass): - num_workers: int = field( - default=1, metadata={"help": "how many subprocesses to use for data loading"} - ) - skip_invalid_size_inputs_valid_test: bool = field( - default=False, - metadata={"help": "ignore too long or too short lines in valid and test set"}, - ) - max_tokens: Optional[int] = field( - default=None, metadata={"help": "maximum number of tokens in a batch"} - ) - batch_size: Optional[int] = field( - default=None, - metadata={ - "help": "number of examples in a batch", - "argparse_alias": "--max-sentences", - }, - ) - required_batch_size_multiple: int = field( - default=8, metadata={"help": "batch size will be a multiplier of this value"} - ) - required_seq_len_multiple: int = field( - default=1, - metadata={ - "help": "maximum sequence length in batch will be a multiplier of this value" - }, - ) - dataset_impl: Optional[DATASET_IMPL_CHOICES] = field( - default=None, metadata={"help": "output dataset implementation"} - ) - data_buffer_size: int = field( - default=10, metadata={"help": "Number of batches to preload"} - ) - train_subset: str = field( - default="train", - metadata={"help": "data subset to use for training (e.g. train, valid, test)"}, - ) - valid_subset: str = field( - default="valid", - metadata={ - "help": "comma separated list of data subsets to use for validation" - " (e.g. train, valid, test)" - }, - ) - combine_valid_subsets: Optional[bool] = field( - default=None, - metadata={ - "help": "comma separated list of data subsets to use for validation" - " (e.g. 
train, valid, test)", - "argparse_alias": "--combine-val", - }, - ) - ignore_unused_valid_subsets: Optional[bool] = field( - default=False, - metadata={"help": "do not raise error if valid subsets are ignored"}, - ) - - validate_interval: int = field( - default=1, metadata={"help": "validate every N epochs"} - ) - validate_interval_updates: int = field( - default=0, metadata={"help": "validate every N updates"} - ) - validate_after_updates: int = field( - default=0, metadata={"help": "dont validate until reaching this many updates"} - ) - fixed_validation_seed: Optional[int] = field( - default=None, metadata={"help": "specified random seed for validation"} - ) - disable_validation: bool = field( - default=False, metadata={"help": "disable validation"} - ) - max_tokens_valid: Optional[int] = field( - default=II("dataset.max_tokens"), - metadata={ - "help": "maximum number of tokens in a validation batch" - " (defaults to --max-tokens)" - }, - ) - batch_size_valid: Optional[int] = field( - default=II("dataset.batch_size"), - metadata={ - "help": "batch size of the validation batch (defaults to --batch-size)", - "argparse_alias": "--max-sentences-valid", - }, - ) - max_valid_steps: Optional[int] = field( - default=None, - metadata={"help": "How many batches to evaluate", "argparse_alias": "--nval"}, - ) - curriculum: int = field( - default=0, metadata={"help": "don't shuffle batches for first N epochs"} - ) - gen_subset: str = field( - default="test", - metadata={"help": "data subset to generate (train, valid, test)"}, - ) - num_shards: int = field( - default=1, metadata={"help": "shard generation over N shards"} - ) - shard_id: int = field( - default=0, metadata={"help": "id of the shard to generate (id < num_shards)"} - ) - grouped_shuffling: bool = field( - default=False, - metadata={ - "help": "shuffle batches in groups of num_shards to enable similar sequence lengths on each GPU worker when batches are sorted by length", - }, - ) - update_epoch_batch_itr: bool = field( - default=II("dataset.grouped_shuffling"), - metadata={ - "help": "if true then prevents the reuse the epoch batch iterator by setting can_reuse_epoch_itr to false, defaults to --grouped-shuffling )", - }, - ) - update_ordered_indices_seed: bool = field( - default=False, - metadata={ - "help": "if true then increment seed with epoch for getting batch iterators, defautls to False.", - }, - ) - - -@dataclass -class OptimizationConfig(FairseqDataclass): - max_epoch: int = field( - default=0, metadata={"help": "force stop training at specified epoch"} - ) - max_update: int = field( - default=0, metadata={"help": "force stop training at specified update"} - ) - stop_time_hours: float = field( - default=0, - metadata={ - "help": "force stop training after specified cumulative time (if >0)" - }, - ) - clip_norm: float = field( - default=0.0, metadata={"help": "clip threshold of gradients"} - ) - sentence_avg: bool = field( - default=False, - metadata={ - "help": "normalize gradients by the number of sentences in a batch" - " (default is to normalize by number of tokens)" - }, - ) - update_freq: List[int] = field( - default_factory=lambda: [1], - metadata={"help": "update parameters every N_i batches, when in epoch i"}, - ) - lr: List[float] = field( - default_factory=lambda: [0.25], - metadata={ - "help": "learning rate for the first N epochs; all epochs >N using LR_N" - " (note: this may be interpreted differently depending on --lr-scheduler)" - }, - ) - stop_min_lr: float = field( - default=-1.0, - metadata={"help": "stop 
training when the learning rate reaches this minimum"}, - ) - use_bmuf: bool = field( - default=False, - metadata={ - "help": "specify global optimizer for syncing models on different GPUs/shards" - }, - ) - skip_remainder_batch: Optional[bool] = field( - default=False, - metadata={ - "help": "if set, include the last (partial) batch of each epoch in training" - " (default is to skip it)." - }, - ) - - -@dataclass -class CheckpointConfig(FairseqDataclass): - save_dir: str = field( - default="checkpoints", metadata={"help": "path to save checkpoints"} - ) - restore_file: str = field( - default="checkpoint_last.pt", - metadata={ - "help": "filename from which to load checkpoint " - "(default: <save-dir>/checkpoint_last.pt" - }, - ) - continue_once: Optional[str] = field( - default=None, - metadata={ - "help": "continues from this checkpoint, unless a checkpoint indicated in 'restore_file' option is present" - }, - ) - finetune_from_model: Optional[str] = field( - default=None, - metadata={ - "help": "finetune from a pretrained model; note that meters and lr scheduler will be reset" - }, - ) - reset_dataloader: bool = field( - default=False, - metadata={ - "help": "if set, does not reload dataloader state from the checkpoint" - }, - ) - reset_lr_scheduler: bool = field( - default=False, - metadata={ - "help": "if set, does not load lr scheduler state from the checkpoint" - }, - ) - reset_meters: bool = field( - default=False, - metadata={"help": "if set, does not load meters from the checkpoint"}, - ) - reset_optimizer: bool = field( - default=False, - metadata={"help": "if set, does not load optimizer state from the checkpoint"}, - ) - optimizer_overrides: str = field( - default="{}", - metadata={ - "help": "a dictionary used to override optimizer args when loading a checkpoint" - }, - ) - save_interval: int = field( - default=1, metadata={"help": "save a checkpoint every N epochs"} - ) - save_interval_updates: int = field( - default=0, metadata={"help": "save a checkpoint (and validate) every N updates"} - ) - keep_interval_updates: int = field( - default=-1, - metadata={ - "help": "keep the last N checkpoints saved with --save-interval-updates" - }, - ) - keep_interval_updates_pattern: int = field( - default=-1, - metadata={ - "help": "when used with --keep-interval-updates, skips deleting " - "any checkpoints with update X where " - "X %% keep_interval_updates_pattern == 0" - }, - ) - keep_last_epochs: int = field( - default=-1, metadata={"help": "keep last N epoch checkpoints"} - ) - keep_best_checkpoints: int = field( - default=-1, metadata={"help": "keep best N checkpoints based on scores"} - ) - no_save: bool = field( - default=False, metadata={"help": "don't save models or checkpoints"} - ) - no_epoch_checkpoints: bool = field( - default=False, metadata={"help": "only store last and best checkpoints"} - ) - no_last_checkpoints: bool = field( - default=False, metadata={"help": "don't store last checkpoints"} - ) - no_save_optimizer_state: bool = field( - default=False, - metadata={"help": "don't save optimizer-state as part of checkpoint"}, - ) - best_checkpoint_metric: str = field( - default="loss", metadata={"help": 'metric to use for saving "best" checkpoints'} - ) - maximize_best_checkpoint_metric: bool = field( - default=False, - metadata={ - "help": 'select the largest metric value for saving "best" checkpoints' - }, - ) - patience: int = field( - default=-1, - metadata={ - "help": ( - "early stop training if valid performance doesn't " - "improve for N consecutive validation 
runs; note " - "that this is influenced by --validate-interval" - ) - }, - ) - checkpoint_suffix: str = field( - default="", metadata={"help": "suffix to add to the checkpoint file name"} - ) - checkpoint_shard_count: int = field( - default=1, - metadata={ - "help": "Number of shards containing the checkpoint - " - "if the checkpoint is over 300GB, it is preferable " - "to split it into shards to prevent OOM on CPU while loading " - "the checkpoint" - }, - ) - load_checkpoint_on_all_dp_ranks: bool = field( - default=False, - metadata={ - "help": "load checkpoints on all data parallel devices " - "(default: only load on rank 0 and broadcast to other devices)" - }, - ) - write_checkpoints_asynchronously: bool = field( - default=False, - metadata={ - "help": ( - "Write checkpoints asynchronously in a separate " - "thread. NOTE: This feature is currently being tested." - ), - "argparse_alias": "--save-async", - }, - ) - model_parallel_size: int = II("common.model_parallel_size") - - -@dataclass -class FairseqBMUFConfig(FairseqDataclass): - block_lr: float = field( - default=1, metadata={"help": "block learning rate for bmuf"} - ) - block_momentum: float = field( - default=0.875, metadata={"help": "block momentum for bmuf"} - ) - global_sync_iter: int = field( - default=50, metadata={"help": "Iteration for syncing global model"} - ) - warmup_iterations: int = field( - default=500, metadata={"help": "warmup iterations for model to broadcast"} - ) - use_nbm: bool = field( - default=False, - metadata={"help": "Specify whether you want to use classical BM / Nesterov BM"}, - ) - average_sync: bool = field( - default=False, - metadata={ - "help": "Specify whether you want to average the local momentum after each sync" - }, - ) - distributed_world_size: int = II("distributed_training.distributed_world_size") - - -@dataclass -class GenerationConfig(FairseqDataclass): - beam: int = field( - default=5, - metadata={"help": "beam size"}, - ) - nbest: int = field( - default=1, - metadata={"help": "number of hypotheses to output"}, - ) - max_len_a: float = field( - default=0, - metadata={ - "help": "generate sequences of maximum length ax + b, where x is the source length" - }, - ) - max_len_b: int = field( - default=200, - metadata={ - "help": "generate sequences of maximum length ax + b, where x is the source length" - }, - ) - min_len: int = field( - default=1, - metadata={"help": "minimum generation length"}, - ) - match_source_len: bool = field( - default=False, - metadata={"help": "generations should match the source length"}, - ) - unnormalized: bool = field( - default=False, - metadata={"help": "compare unnormalized hypothesis scores"}, - ) - no_early_stop: bool = field( - default=False, - metadata={"help": "deprecated"}, - ) - no_beamable_mm: bool = field( - default=False, - metadata={"help": "don't use BeamableMM in attention layers"}, - ) - lenpen: float = field( - default=1, - metadata={ - "help": "length penalty: <1.0 favors shorter, >1.0 favors longer sentences" - }, - ) - unkpen: float = field( - default=0, - metadata={ - "help": "unknown word penalty: <0 produces more unks, >0 produces fewer" - }, - ) - replace_unk: Optional[str] = field( - default=None, - metadata={ - "help": "perform unknown replacement (optionally with alignment dictionary)", - "argparse_const": "@@ ", - }, - ) - sacrebleu: bool = field( - default=False, - metadata={"help": "score with sacrebleu"}, - ) - score_reference: bool = field( - default=False, - metadata={"help": "just score the reference translation"}, - ) - 
prefix_size: int = field( - default=0, - metadata={"help": "initialize generation by target prefix of given length"}, - ) - no_repeat_ngram_size: int = field( - default=0, - metadata={ - "help": "ngram blocking such that this size ngram cannot be repeated in the generation" - }, - ) - sampling: bool = field( - default=False, - metadata={"help": "sample hypotheses instead of using beam search"}, - ) - sampling_topk: int = field( - default=-1, - metadata={"help": "sample from top K likely next words instead of all words"}, - ) - sampling_topp: float = field( - default=-1.0, - metadata={ - "help": "sample from the smallest set whose cumulative probability mass exceeds p for next words" - }, - ) - constraints: Optional[GENERATION_CONSTRAINTS_CHOICES] = field( - default=None, - metadata={ - "help": "enables lexically constrained decoding", - "argparse_const": "ordered", - }, - ) - temperature: float = field( - default=1.0, - metadata={"help": "temperature for generation"}, - ) - diverse_beam_groups: int = field( - default=-1, - metadata={"help": "number of groups for Diverse Beam Search"}, - ) - diverse_beam_strength: float = field( - default=0.5, - metadata={"help": "strength of diversity penalty for Diverse Beam Search"}, - ) - diversity_rate: float = field( - default=-1.0, - metadata={"help": "strength of diversity penalty for Diverse Siblings Search"}, - ) - print_alignment: Optional[PRINT_ALIGNMENT_CHOICES] = field( - default=None, - metadata={ - "help": "if set, uses attention feedback to compute and print alignment to source tokens " - "(valid options are: hard, soft, otherwise treated as hard alignment)", - "argparse_const": "hard", - }, - ) - print_step: bool = field( - default=False, - metadata={"help": "print steps"}, - ) - lm_path: Optional[str] = field( - default=None, - metadata={"help": "path to lm checkpoint for lm fusion"}, - ) - lm_weight: float = field( - default=0.0, - metadata={"help": "weight for lm probs for lm fusion"}, - ) - - # arguments for iterative refinement generator - iter_decode_eos_penalty: float = field( - default=0.0, - metadata={"help": "if > 0.0, it penalized early-stopping in decoding."}, - ) - iter_decode_max_iter: int = field( - default=10, - metadata={"help": "maximum iterations for iterative refinement."}, - ) - iter_decode_force_max_iter: bool = field( - default=False, - metadata={ - "help": "if set, run exact the maximum number of iterations without early stop" - }, - ) - iter_decode_with_beam: int = field( - default=1, - metadata={ - "help": "if > 1, model will generate translations varying by the lengths." - }, - ) - iter_decode_with_external_reranker: bool = field( - default=False, - metadata={ - "help": "if set, the last checkpoint are assumed to be a reranker to rescore the translations" - }, - ) - retain_iter_history: bool = field( - default=False, - metadata={ - "help": "if set, decoding returns the whole history of iterative refinement" - }, - ) - retain_dropout: bool = field( - default=False, - metadata={"help": "Use dropout at inference time"}, - ) - # temporarily set to Any until https://github.com/facebookresearch/hydra/issues/1117 is fixed - # retain_dropout_modules: Optional[List[str]] = field( - retain_dropout_modules: Any = field( - default=None, - metadata={ - "help": "if set, only retain dropout for the specified modules; " - "if not set, then dropout will be retained for all modules" - }, - ) - # special decoding format for advanced decoding. 
-    decoding_format: Optional[GENERATION_DECODING_FORMAT_CHOICES] = field(
-        default=None,
-        metadata={"help": "special decoding format for advanced decoding."},
-    )
-    no_seed_provided: bool = field(
-        default=False,
-        metadata={"help": "if set, don't use a seed for initializing random generators"},
-    )
-
-
-@dataclass
-class CommonEvalConfig(FairseqDataclass):
-    path: Optional[str] = field(
-        default=None,
-        metadata={"help": "path(s) to model file(s), colon separated"},
-    )
-    post_process: Optional[str] = field(
-        default=None,
-        metadata={
-            "help": (
-                "post-process text by removing BPE, letter segmentation, etc. "
-                "Valid options can be found in fairseq.data.utils.post_process."
-            ),
-            "argparse_const": "subword_nmt",
-            "argparse_alias": "--remove-bpe",
-        },
-    )
-    quiet: bool = field(default=False, metadata={"help": "only print final scores"})
-    model_overrides: str = field(
-        default="{}",
-        metadata={
-            "help": "a dictionary used to override, at generation time, model args that were used during model training"
-        },
-    )
-    results_path: Optional[str] = field(
-        default=None, metadata={"help": "path to save eval results (optional)"}
-    )
-
-
-@dataclass
-class EvalLMConfig(FairseqDataclass):
-    output_word_probs: bool = field(
-        default=False,
-        metadata={
-            "help": "if set, outputs words and their predicted log probabilities to standard output"
-        },
-    )
-    output_word_stats: bool = field(
-        default=False,
-        metadata={
-            "help": "if set, outputs word statistics such as word count, average probability, etc"
-        },
-    )
-    context_window: int = field(
-        default=0,
-        metadata={
-            "help": "ensures that every evaluated token has access to a context of at least this size, if possible"
-        },
-    )
-    softmax_batch: int = field(
-        default=sys.maxsize,
-        metadata={
-            "help": "if BxT is more than this, will batch the softmax over vocab to this many tokens, in order to fit into GPU memory"
-        },
-    )
-
-
-@dataclass
-class InteractiveConfig(FairseqDataclass):
-    buffer_size: int = field(
-        default=0,
-        metadata={
-            "help": "read this many sentences into a buffer before processing them"
-        },
-    )
-    input: str = field(
-        default="-",
-        metadata={"help": "file to read from; use - for stdin"},
-    )
-
-
-@dataclass
-class EMAConfig(FairseqDataclass):
-    store_ema: bool = field(
-        default=False, metadata={"help": "store exponential moving average shadow model"}
-    )
-    ema_decay: float = field(
-        default=0.9999, metadata={"help": "decay for exponential moving average model"}
-    )
-    ema_start_update: int = field(
-        default=0, metadata={"help": "start EMA update after this many model updates"}
-    )
-    ema_seed_model: Optional[str] = field(
-        default=None,
-        metadata={
-            "help": "Seed to load EMA model from. "
-            "Used to load EMA model separately from the actual model."
- }, - ) - ema_update_freq: int = field( - default=1, metadata={"help": "Do EMA update every this many model updates"} - ) - ema_fp32: bool = field( - default=False, - metadata={"help": "If true, store EMA model in fp32 even if model is in fp16"}, - ) - - -@dataclass -class FairseqConfig(FairseqDataclass): - common: CommonConfig = CommonConfig() - common_eval: CommonEvalConfig = CommonEvalConfig() - distributed_training: DistributedTrainingConfig = DistributedTrainingConfig() - dataset: DatasetConfig = DatasetConfig() - optimization: OptimizationConfig = OptimizationConfig() - checkpoint: CheckpointConfig = CheckpointConfig() - bmuf: FairseqBMUFConfig = FairseqBMUFConfig() - generation: GenerationConfig = GenerationConfig() - eval_lm: EvalLMConfig = EvalLMConfig() - interactive: InteractiveConfig = InteractiveConfig() - model: Any = MISSING - task: Any = None - criterion: Any = None - optimizer: Any = None - lr_scheduler: Any = None - scoring: Any = None - bpe: Any = None - tokenizer: Any = None - ema: EMAConfig = EMAConfig() diff --git a/kosmos-g/fairseq/fairseq/dataclass/constants.py b/kosmos-g/fairseq/fairseq/dataclass/constants.py deleted file mode 100644 index 5af92f2b3..000000000 --- a/kosmos-g/fairseq/fairseq/dataclass/constants.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from enum import Enum, EnumMeta -from typing import List - - -class StrEnumMeta(EnumMeta): - # this is workaround for submitit pickling leading to instance checks failing in hydra for StrEnum, see - # https://github.com/facebookresearch/hydra/issues/1156 - @classmethod - def __instancecheck__(cls, other): - return "enum" in str(type(other)) - - -class StrEnum(Enum, metaclass=StrEnumMeta): - def __str__(self): - return self.value - - def __eq__(self, other: str): - return self.value == other - - def __repr__(self): - return self.value - - def __hash__(self): - return hash(str(self)) - - -def ChoiceEnum(choices: List[str]): - """return the Enum class used to enforce list of choices""" - return StrEnum("Choices", {k: k for k in choices}) - - -LOG_FORMAT_CHOICES = ChoiceEnum(["json", "none", "simple", "tqdm"]) -DDP_BACKEND_CHOICES = ChoiceEnum( - [ - "c10d", # alias for pytorch_ddp - "fully_sharded", # FullyShardedDataParallel from fairscale - "legacy_ddp", - "no_c10d", # alias for legacy_ddp - "pytorch_ddp", - "slowmo", - ] -) -DDP_COMM_HOOK_CHOICES = ChoiceEnum(["none", "fp16"]) -DATASET_IMPL_CHOICES = ChoiceEnum(["raw", "lazy", "cached", "mmap", "fasta", "huffman"]) -GENERATION_CONSTRAINTS_CHOICES = ChoiceEnum(["ordered", "unordered"]) -GENERATION_DECODING_FORMAT_CHOICES = ChoiceEnum( - ["unigram", "ensemble", "vote", "dp", "bs"] -) -ZERO_SHARDING_CHOICES = ChoiceEnum(["none", "os"]) -PIPELINE_CHECKPOINT_CHOICES = ChoiceEnum(["always", "never", "except_last"]) -PRINT_ALIGNMENT_CHOICES = ChoiceEnum(["hard", "soft"]) diff --git a/kosmos-g/fairseq/fairseq/dataclass/initialize.py b/kosmos-g/fairseq/fairseq/dataclass/initialize.py deleted file mode 100644 index 5a7784bad..000000000 --- a/kosmos-g/fairseq/fairseq/dataclass/initialize.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
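-
-# hydra_init() below registers FairseqConfig under the name "config" (the
-# default cfg_name) with hydra's ConfigStore, plus one node per top-level
-# field, so hydra-based entry points can compose the structured config.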
-"""isort:skip_file""" - -import logging -from hydra.core.config_store import ConfigStore -from fairseq.dataclass.configs import FairseqConfig -from omegaconf import DictConfig, OmegaConf - - -logger = logging.getLogger(__name__) - - -def hydra_init(cfg_name="config") -> None: - - cs = ConfigStore.instance() - cs.store(name=f"{cfg_name}", node=FairseqConfig) - - for k in FairseqConfig.__dataclass_fields__: - v = FairseqConfig.__dataclass_fields__[k].default - try: - cs.store(name=k, node=v) - except BaseException: - logger.error(f"{k} - {v}") - raise - - -def add_defaults(cfg: DictConfig) -> None: - """This function adds default values that are stored in dataclasses that hydra doesn't know about""" - - from fairseq.registry import REGISTRIES - from fairseq.tasks import TASK_DATACLASS_REGISTRY - from fairseq.models import ARCH_MODEL_NAME_REGISTRY, MODEL_DATACLASS_REGISTRY - from fairseq.dataclass.utils import merge_with_parent - from typing import Any - - OmegaConf.set_struct(cfg, False) - - for k, v in FairseqConfig.__dataclass_fields__.items(): - field_cfg = cfg.get(k) - if field_cfg is not None and v.type == Any: - dc = None - - if isinstance(field_cfg, str): - field_cfg = DictConfig({"_name": field_cfg}) - field_cfg.__dict__["_parent"] = field_cfg.__dict__["_parent"] - - name = getattr(field_cfg, "_name", None) - - if k == "task": - dc = TASK_DATACLASS_REGISTRY.get(name) - elif k == "model": - name = ARCH_MODEL_NAME_REGISTRY.get(name, name) - dc = MODEL_DATACLASS_REGISTRY.get(name) - elif k in REGISTRIES: - dc = REGISTRIES[k]["dataclass_registry"].get(name) - - if dc is not None: - cfg[k] = merge_with_parent(dc, field_cfg) diff --git a/kosmos-g/fairseq/fairseq/dataclass/utils.py b/kosmos-g/fairseq/fairseq/dataclass/utils.py deleted file mode 100644 index f307fe6e5..000000000 --- a/kosmos-g/fairseq/fairseq/dataclass/utils.py +++ /dev/null @@ -1,493 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import ast -import inspect -import logging -import os -import re -from argparse import ArgumentError, ArgumentParser, Namespace -from dataclasses import _MISSING_TYPE, MISSING, is_dataclass -from enum import Enum -from typing import Any, Dict, List, Optional, Tuple, Type - -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.configs import FairseqConfig -from hydra.core.global_hydra import GlobalHydra -from hydra.experimental import compose, initialize -from omegaconf import DictConfig, OmegaConf, open_dict, _utils - -logger = logging.getLogger(__name__) - - -def eval_str_list(x, x_type=float): - if x is None: - return None - if isinstance(x, str): - if len(x) == 0: - return [] - x = ast.literal_eval(x) - try: - return list(map(x_type, x)) - except TypeError: - return [x_type(x)] - - -def interpret_dc_type(field_type): - if isinstance(field_type, str): - raise RuntimeError("field should be a type") - - if field_type == Any: - return str - - typestring = str(field_type) - if re.match( - r"(typing.|^)Union\[(.*), NoneType\]$", typestring - ) or typestring.startswith("typing.Optional"): - return field_type.__args__[0] - return field_type - - -def gen_parser_from_dataclass( - parser: ArgumentParser, - dataclass_instance: FairseqDataclass, - delete_default: bool = False, - with_prefix: Optional[str] = None, -) -> None: - """ - convert a dataclass instance to tailing parser arguments. 
- - If `with_prefix` is provided, prefix all the keys in the resulting parser with it. It means that we are - building a flat namespace from a structured dataclass (see transformer_config.py for example). - """ - - def argparse_name(name: str): - if name == "data" and (with_prefix is None or with_prefix == ""): - # normally data is positional args, so we don't add the -- nor the prefix - return name - if name == "_name": - # private member, skip - return None - full_name = "--" + name.replace("_", "-") - if with_prefix is not None and with_prefix != "": - # if a prefix is specified, construct the prefixed arg name - full_name = with_prefix + "-" + full_name[2:] # strip -- when composing - return full_name - - def get_kwargs_from_dc( - dataclass_instance: FairseqDataclass, k: str - ) -> Dict[str, Any]: - """k: dataclass attributes""" - - kwargs = {} - - field_type = dataclass_instance._get_type(k) - inter_type = interpret_dc_type(field_type) - - field_default = dataclass_instance._get_default(k) - - if isinstance(inter_type, type) and issubclass(inter_type, Enum): - field_choices = [t.value for t in list(inter_type)] - else: - field_choices = None - - field_help = dataclass_instance._get_help(k) - field_const = dataclass_instance._get_argparse_const(k) - - if isinstance(field_default, str) and field_default.startswith("${"): - kwargs["default"] = field_default - else: - if field_default is MISSING: - kwargs["required"] = True - if field_choices is not None: - kwargs["choices"] = field_choices - if ( - isinstance(inter_type, type) - and (issubclass(inter_type, List) or issubclass(inter_type, Tuple)) - ) or ("List" in str(inter_type) or "Tuple" in str(inter_type)): - if "int" in str(inter_type): - kwargs["type"] = lambda x: eval_str_list(x, int) - elif "float" in str(inter_type): - kwargs["type"] = lambda x: eval_str_list(x, float) - elif "str" in str(inter_type): - kwargs["type"] = lambda x: eval_str_list(x, str) - else: - raise NotImplementedError( - "parsing of type " + str(inter_type) + " is not implemented" - ) - if field_default is not MISSING: - kwargs["default"] = ( - ",".join(map(str, field_default)) - if field_default is not None - else None - ) - elif ( - isinstance(inter_type, type) and issubclass(inter_type, Enum) - ) or "Enum" in str(inter_type): - kwargs["type"] = str - if field_default is not MISSING: - if isinstance(field_default, Enum): - kwargs["default"] = field_default.value - else: - kwargs["default"] = field_default - elif inter_type is bool: - kwargs["action"] = ( - "store_false" if field_default is True else "store_true" - ) - kwargs["default"] = field_default - else: - kwargs["type"] = inter_type - if field_default is not MISSING: - kwargs["default"] = field_default - - # build the help with the hierarchical prefix - if with_prefix is not None and with_prefix != "" and field_help is not None: - field_help = with_prefix[2:] + ": " + field_help - - kwargs["help"] = field_help - if field_const is not None: - kwargs["const"] = field_const - kwargs["nargs"] = "?" - - return kwargs - - for k in dataclass_instance._get_all_attributes(): - field_name = argparse_name(dataclass_instance._get_name(k)) - field_type = dataclass_instance._get_type(k) - if field_name is None: - continue - elif inspect.isclass(field_type) and issubclass(field_type, FairseqDataclass): - # for fields that are of type FairseqDataclass, we can recursively - # add their fields to the namespace (so we add the args from model, task, etc. 
to the root namespace) - prefix = None - if with_prefix is not None: - # if a prefix is specified, then we don't want to copy the subfields directly to the root namespace - # but we prefix them with the name of the current field. - prefix = field_name - gen_parser_from_dataclass(parser, field_type(), delete_default, prefix) - continue - - kwargs = get_kwargs_from_dc(dataclass_instance, k) - - field_args = [field_name] - alias = dataclass_instance._get_argparse_alias(k) - if alias is not None: - field_args.append(alias) - - if "default" in kwargs: - if isinstance(kwargs["default"], str) and kwargs["default"].startswith( - "${" - ): - if kwargs["help"] is None: - # this is a field with a name that will be added elsewhere - continue - else: - del kwargs["default"] - if delete_default and "default" in kwargs: - del kwargs["default"] - try: - parser.add_argument(*field_args, **kwargs) - except ArgumentError: - pass - - -def _set_legacy_defaults(args, cls): - """Helper to set default arguments based on *add_args*.""" - if not hasattr(cls, "add_args"): - return - - import argparse - - parser = argparse.ArgumentParser( - argument_default=argparse.SUPPRESS, allow_abbrev=False - ) - cls.add_args(parser) - # copied from argparse.py: - defaults = argparse.Namespace() - for action in parser._actions: - if action.dest is not argparse.SUPPRESS: - if not hasattr(defaults, action.dest): - if action.default is not argparse.SUPPRESS: - setattr(defaults, action.dest, action.default) - for key, default_value in vars(defaults).items(): - if not hasattr(args, key): - setattr(args, key, default_value) - - -def _override_attr( - sub_node: str, data_class: Type[FairseqDataclass], args: Namespace -) -> List[str]: - overrides = [] - - if not inspect.isclass(data_class) or not issubclass(data_class, FairseqDataclass): - return overrides - - def get_default(f): - if not isinstance(f.default_factory, _MISSING_TYPE): - return f.default_factory() - return f.default - - for k, v in data_class.__dataclass_fields__.items(): - if k.startswith("_"): - # private member, skip - continue - - val = get_default(v) if not hasattr(args, k) else getattr(args, k) - - field_type = interpret_dc_type(v.type) - if ( - isinstance(val, str) - and not val.startswith("${") # not interpolation - and field_type != str - and ( - not inspect.isclass(field_type) or not issubclass(field_type, Enum) - ) # not choices enum - ): - # upgrade old models that stored complex parameters as string - val = ast.literal_eval(val) - - if isinstance(val, tuple): - val = list(val) - - v_type = getattr(v.type, "__origin__", None) - if ( - (v_type is List or v_type is list or v_type is Optional) - # skip interpolation - and not (isinstance(val, str) and val.startswith("${")) - ): - # if type is int but val is float, then we will crash later - try to convert here - if hasattr(v.type, "__args__"): - t_args = v.type.__args__ - if len(t_args) == 1 and (t_args[0] is float or t_args[0] is int): - val = list(map(t_args[0], val)) - elif val is not None and ( - field_type is int or field_type is bool or field_type is float - ): - try: - val = field_type(val) - except: - pass # ignore errors here, they are often from interpolation args - - if val is None: - overrides.append("{}.{}=null".format(sub_node, k)) - elif val == "": - overrides.append("{}.{}=''".format(sub_node, k)) - elif isinstance(val, str): - val = val.replace("'", r"\'") - overrides.append("{}.{}='{}'".format(sub_node, k, val)) - elif isinstance(val, FairseqDataclass): - overrides += 
_override_attr(f"{sub_node}.{k}", type(val), args) - elif isinstance(val, Namespace): - sub_overrides, _ = override_module_args(val) - for so in sub_overrides: - overrides.append(f"{sub_node}.{k}.{so}") - else: - overrides.append("{}.{}={}".format(sub_node, k, val)) - - return overrides - - -def migrate_registry( - name, value, registry, args, overrides, deletes, use_name_as_val=False -): - if value in registry: - overrides.append("{}={}".format(name, value)) - overrides.append("{}._name={}".format(name, value)) - overrides.extend(_override_attr(name, registry[value], args)) - elif use_name_as_val and value is not None: - overrides.append("{}={}".format(name, value)) - else: - deletes.append(name) - - -def override_module_args(args: Namespace) -> Tuple[List[str], List[str]]: - """use the field in args to overrides those in cfg""" - overrides = [] - deletes = [] - - for k in FairseqConfig.__dataclass_fields__.keys(): - overrides.extend( - _override_attr(k, FairseqConfig.__dataclass_fields__[k].type, args) - ) - - if args is not None: - if hasattr(args, "task"): - from fairseq.tasks import TASK_DATACLASS_REGISTRY - - migrate_registry( - "task", args.task, TASK_DATACLASS_REGISTRY, args, overrides, deletes - ) - else: - deletes.append("task") - - # these options will be set to "None" if they have not yet been migrated - # so we can populate them with the entire flat args - CORE_REGISTRIES = {"criterion", "optimizer", "lr_scheduler"} - - from fairseq.registry import REGISTRIES - - for k, v in REGISTRIES.items(): - if hasattr(args, k): - migrate_registry( - k, - getattr(args, k), - v["dataclass_registry"], - args, - overrides, - deletes, - use_name_as_val=k not in CORE_REGISTRIES, - ) - else: - deletes.append(k) - - no_dc = True - if hasattr(args, "arch"): - from fairseq.models import ARCH_MODEL_REGISTRY, ARCH_MODEL_NAME_REGISTRY - - if args.arch in ARCH_MODEL_REGISTRY: - m_cls = ARCH_MODEL_REGISTRY[args.arch] - dc = getattr(m_cls, "__dataclass", None) - if dc is not None: - m_name = ARCH_MODEL_NAME_REGISTRY[args.arch] - overrides.append("model={}".format(m_name)) - overrides.append("model._name={}".format(args.arch)) - # override model params with those exist in args - overrides.extend(_override_attr("model", dc, args)) - no_dc = False - if no_dc: - deletes.append("model") - - return overrides, deletes - - -class omegaconf_no_object_check: - def __init__(self): - self.old_is_primitive = _utils.is_primitive_type - - def __enter__(self): - _utils.is_primitive_type = lambda _: True - - def __exit__(self, type, value, traceback): - _utils.is_primitive_type = self.old_is_primitive - - -def convert_namespace_to_omegaconf(args: Namespace) -> DictConfig: - """Convert a flat argparse.Namespace to a structured DictConfig.""" - - # Here we are using field values provided in args to override counterparts inside config object - overrides, deletes = override_module_args(args) - - # configs will be in fairseq/config after installation - config_path = os.path.join("..", "config") - - GlobalHydra.instance().clear() - - with initialize(config_path=config_path): - try: - composed_cfg = compose("config", overrides=overrides, strict=False) - except: - logger.error("Error when composing. Overrides: " + str(overrides)) - raise - - for k in deletes: - composed_cfg[k] = None - - cfg = OmegaConf.create( - OmegaConf.to_container(composed_cfg, resolve=True, enum_to_str=True) - ) - - # hack to be able to set Namespace in dict config. 
this should be removed when we update to a newer
-    # omegaconf version that supports object flags, or when we migrate all existing models
-    from omegaconf import _utils
-
-    with omegaconf_no_object_check():
-        if cfg.task is None and getattr(args, "task", None):
-            cfg.task = Namespace(**vars(args))
-            from fairseq.tasks import TASK_REGISTRY
-
-            _set_legacy_defaults(cfg.task, TASK_REGISTRY[args.task])
-            cfg.task._name = args.task
-        if cfg.model is None and getattr(args, "arch", None):
-            cfg.model = Namespace(**vars(args))
-            from fairseq.models import ARCH_MODEL_REGISTRY
-
-            _set_legacy_defaults(cfg.model, ARCH_MODEL_REGISTRY[args.arch])
-            cfg.model._name = args.arch
-        if cfg.optimizer is None and getattr(args, "optimizer", None):
-            cfg.optimizer = Namespace(**vars(args))
-            from fairseq.optim import OPTIMIZER_REGISTRY
-
-            _set_legacy_defaults(cfg.optimizer, OPTIMIZER_REGISTRY[args.optimizer])
-            cfg.optimizer._name = args.optimizer
-        if cfg.lr_scheduler is None and getattr(args, "lr_scheduler", None):
-            cfg.lr_scheduler = Namespace(**vars(args))
-            from fairseq.optim.lr_scheduler import LR_SCHEDULER_REGISTRY
-
-            _set_legacy_defaults(
-                cfg.lr_scheduler, LR_SCHEDULER_REGISTRY[args.lr_scheduler]
-            )
-            cfg.lr_scheduler._name = args.lr_scheduler
-        if cfg.criterion is None and getattr(args, "criterion", None):
-            cfg.criterion = Namespace(**vars(args))
-            from fairseq.criterions import CRITERION_REGISTRY
-
-            _set_legacy_defaults(cfg.criterion, CRITERION_REGISTRY[args.criterion])
-            cfg.criterion._name = args.criterion
-
-    OmegaConf.set_struct(cfg, True)
-    return cfg
-
-
-def overwrite_args_by_name(cfg: DictConfig, overrides: Dict[str, Any]):
-    # this will be deprecated when we get rid of argparse and model_overrides logic
-
-    from fairseq.registry import REGISTRIES
-
-    with open_dict(cfg):
-        for k in cfg.keys():
-            # "k in cfg" will return false if it's a "mandatory value (e.g. ???)"
-            if k in cfg and isinstance(cfg[k], DictConfig):
-                if k in overrides and isinstance(overrides[k], dict):
-                    for ok, ov in overrides[k].items():
-                        if isinstance(ov, dict) and cfg[k][ok] is not None:
-                            overwrite_args_by_name(cfg[k][ok], ov)
-                        else:
-                            cfg[k][ok] = ov
-                else:
-                    overwrite_args_by_name(cfg[k], overrides)
-            elif k in cfg and isinstance(cfg[k], Namespace):
-                for override_key, val in overrides.items():
-                    setattr(cfg[k], override_key, val)
-            elif k in overrides:
-                if (
-                    k in REGISTRIES
-                    and overrides[k] in REGISTRIES[k]["dataclass_registry"]
-                ):
-                    cfg[k] = DictConfig(
-                        REGISTRIES[k]["dataclass_registry"][overrides[k]]
-                    )
-                    overwrite_args_by_name(cfg[k], overrides)
-                    cfg[k]._name = overrides[k]
-                else:
-                    cfg[k] = overrides[k]
-
-
-def merge_with_parent(dc: FairseqDataclass, cfg: DictConfig, remove_missing=False):
-    if remove_missing:
-
-        if is_dataclass(dc):
-            target_keys = set(dc.__dataclass_fields__.keys())
-        else:
-            target_keys = set(dc.keys())
-
-        with open_dict(cfg):
-            for k in list(cfg.keys()):
-                if k not in target_keys:
-                    del cfg[k]
-
-    merged_cfg = OmegaConf.merge(dc, cfg)
-    merged_cfg.__dict__["_parent"] = cfg.__dict__["_parent"]
-    OmegaConf.set_struct(merged_cfg, True)
-    return merged_cfg
diff --git a/kosmos-g/fairseq/fairseq/distributed/__init__.py b/kosmos-g/fairseq/fairseq/distributed/__init__.py
deleted file mode 100644
index 9130db8f5..000000000
--- a/kosmos-g/fairseq/fairseq/distributed/__init__.py
+++ /dev/null
@@ -1,25 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .distributed_timeout_wrapper import DistributedTimeoutWrapper -from .fully_sharded_data_parallel import ( - fsdp_enable_wrap, - fsdp_wrap, - FullyShardedDataParallel, -) -from .legacy_distributed_data_parallel import LegacyDistributedDataParallel -from .module_proxy_wrapper import ModuleProxyWrapper -from .tpu_distributed_data_parallel import TPUDistributedDataParallel - - -__all__ = [ - "DistributedTimeoutWrapper", - "fsdp_enable_wrap", - "fsdp_wrap", - "FullyShardedDataParallel", - "LegacyDistributedDataParallel", - "ModuleProxyWrapper", - "TPUDistributedDataParallel", -] diff --git a/kosmos-g/fairseq/fairseq/distributed/distributed_timeout_wrapper.py b/kosmos-g/fairseq/fairseq/distributed/distributed_timeout_wrapper.py deleted file mode 100644 index 6e06b4b6d..000000000 --- a/kosmos-g/fairseq/fairseq/distributed/distributed_timeout_wrapper.py +++ /dev/null @@ -1,97 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import signal -import threading - -from torch import nn - - -logger = logging.getLogger(__name__) - - -class DistributedTimeoutWrapper(nn.Module): - """ - A wrapper that kills the process if no progress is made within a given - *timeout*. The timer is reset every time :func:`forward` is called. - - Usage:: - - module = DistributedTimeoutWrapper(module, timeout=30) - x = module(input) - time.sleep(20) # safe - x = module(input) - time.sleep(45) # job will be killed before this returns - - Args: - module (nn.Module): module to wrap - timeout (int): number of seconds before killing the process - (set to a value <= 0 to disable the timeout) - signal (Optional): signal to send once timeout is triggered - """ - - def __init__(self, module: nn.Module, timeout: int, signal=signal.SIGINT): - super().__init__() - self.module = module - self.timeout = timeout - self.signal = signal - - if timeout > 0: - self._heartbeat = threading.Event() - self._heartbeat_thread = threading.Thread( - target=self._check_heartbeat, - args=(os.getpid(),), - daemon=True, - ) - self._heartbeat_thread.start() - self._terminated = False - else: - self._heartbeat = None - self._heartbeat_thread = None - - def __del__(self): - self.stop_timeout() - - def __getattr__(self, name): - """Forward missing attributes to wrapped module.""" - try: - return super().__getattr__(name) # defer to nn.Module's logic - except AttributeError: - return getattr(self.module, name) - - def stop_timeout(self): - if self._heartbeat_thread is not None: - self._terminated = True - self._heartbeat_thread.join() - - def state_dict(self, *args, **kwargs): - return self.module.state_dict(*args, **kwargs) - - def load_state_dict(self, *args, **kwargs): - return self.module.load_state_dict(*args, **kwargs) - - def forward(self, *args, **kwargs): - if self._heartbeat is not None: - self._heartbeat.set() - return self.module(*args, **kwargs) - - def _check_heartbeat(self, parent_pid): - self._heartbeat.wait() # wait for the first forward pass - while True: - self._heartbeat.clear() - success = self._heartbeat.wait(timeout=self.timeout) - if self._terminated: - break - elif not success: - logger.error( - ( - "Killing job for not making progress in {} seconds. " - "Set --heartbeat-timeout=-1 to disable this timeout." 
- ).format(int(self.timeout)) - ) - os.kill(parent_pid, self.signal) - return diff --git a/kosmos-g/fairseq/fairseq/distributed/fully_sharded_data_parallel.py b/kosmos-g/fairseq/fairseq/distributed/fully_sharded_data_parallel.py deleted file mode 100644 index 88dc698b4..000000000 --- a/kosmos-g/fairseq/fairseq/distributed/fully_sharded_data_parallel.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import contextlib -from typing import Optional - -import torch -from fairseq.dataclass.configs import DistributedTrainingConfig -from fairseq.distributed import utils as dist_utils - - -try: - from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP - - has_FSDP = True -except ImportError: - FSDP = torch.nn.Module - has_FSDP = False - - -class FullyShardedDataParallel(FSDP): - """ - A small wrapper around fairscale's FullyShardedDataParallel (FSDP) with some - fairseq-specific checkpoint saving/loading logic. - - Args: - use_sharded_state (bool): if True, then ``state_dict`` will return - ``FSDP.local_state_dict`` and ``load_state_dict`` will call - ``FSDP.load_local_state_dict``. Otherwise, ``state_dict`` will - return the full model weights on data parallel rank 0 (empty on - other ranks) and ``load_state_dict`` will broadcast model weights - from rank 0 to other ranks. - """ - - def __init__(self, *args, use_sharded_state: bool = False, **kwargs): - if not has_FSDP: - raise ImportError( - "Cannot find FullyShardedDataParallel. " - "Please install fairscale with: pip install fairscale" - ) - super().__init__(*args, **kwargs) - self.use_sharded_state = use_sharded_state - - @property - def unwrapped_module(self) -> torch.nn.Module: - if self.flatten_parameters: - return self.module.module - else: - return self.module - - def state_dict(self, destination=None, prefix="", keep_vars=False): - if self.use_sharded_state: - return super().local_state_dict( - destination=destination, prefix=prefix, keep_vars=keep_vars - ) - else: - if self.rank == 0: - return super().state_dict( - destination=destination, prefix=prefix, keep_vars=keep_vars - ) - else: - # We must call state_dict() due to use of communication - # primitives. But we don't use the result. - super().state_dict() - return destination or {} - - def load_state_dict(self, state_dict, strict=True, model_cfg=None): - if self.use_sharded_state: - return super().load_local_state_dict(state_dict, strict=strict) - else: - state_dict = dist_utils.broadcast_object( - state_dict, src_rank=0, group=self.process_group - ) - return super().load_state_dict(state_dict, strict=strict) - - -@contextlib.contextmanager -def fsdp_enable_wrap(cfg: DistributedTrainingConfig): - try: - from fairscale.nn import enable_wrap - except ImportError: - raise ImportError( - "Cannot find FullyShardedDataParallel. 
" - "Please install fairscale with: pip install fairscale" - ) - if cfg.memory_efficient_fp16: - assert cfg.fp16 # memory_efficient_fp16 should imply fp16 - group = dist_utils.get_data_parallel_group() - if group is None and cfg.distributed_world_size == 1: - from fairscale.utils.testing import DummyProcessGroup - - group = DummyProcessGroup(rank=0, size=1) - fsdp_config = { - "process_group": group, - "reshard_after_forward": not cfg.no_reshard_after_forward, - "mixed_precision": cfg.fp16 and not cfg.memory_efficient_fp16, - "fp32_reduce_scatter": cfg.fp32_reduce_scatter, - "flatten_parameters": not cfg.not_fsdp_flatten_parameters, - "cpu_offload": cfg.cpu_offload, - "compute_dtype": torch.float16 if cfg.fp16 else torch.float32, - "bucket_cap_mb": cfg.bucket_cap_mb, - "state_dict_device": torch.device("cpu"), # reduce GPU mem usage - } - with enable_wrap( - wrapper_cls=FullyShardedDataParallel, - use_sharded_state=cfg.use_sharded_state, - **fsdp_config, - ): - yield - - -def fsdp_wrap(module, min_num_params: Optional[int] = None, **kwargs): - """ - Helper to wrap layers/modules in FSDP. This falls back to a no-op if - fairscale is not available. - - Args: - module (nn.Module): module to (maybe) wrap - min_num_params (int, Optional): minimum number of layer params to wrap - """ - try: - from fairscale.nn import wrap - - if min_num_params is not None: - num_params = sum(p.numel() for p in module.parameters()) - if num_params >= min_num_params: - return wrap(module, **kwargs) - else: - return module - else: - return wrap(module, **kwargs) - except ImportError: - return module diff --git a/kosmos-g/fairseq/fairseq/distributed/legacy_distributed_data_parallel.py b/kosmos-g/fairseq/fairseq/distributed/legacy_distributed_data_parallel.py deleted file mode 100644 index 5f89e6c0e..000000000 --- a/kosmos-g/fairseq/fairseq/distributed/legacy_distributed_data_parallel.py +++ /dev/null @@ -1,165 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -A modified version of the legacy DistributedDataParallel module that uses c10d -communication primitives. This version is simpler than the latest PyTorch -version and is useful for debugging. Notably it does not overlap gradient -communication with the backward pass, which makes it slower but more robust -than the PyTorch version. - -This version also supports the *no_sync* context manager, which allows faster -training with `--update-freq`. -""" - -from collections import OrderedDict -from contextlib import contextmanager - -import torch -from torch import nn - -from fairseq.distributed import utils - - -class LegacyDistributedDataParallel(nn.Module): - """Implements distributed data parallelism at the module level. - - A simplified version of :class:`torch.nn.parallel.DistributedDataParallel`. - This version uses a c10d process group for communication and does not - broadcast buffers. - - Args: - module (~torch.nn.Module): module to be parallelized - process_group: the c10d process group to be used for distributed data - parallel all-reduction. - buffer_size (int, optional): number of elements to buffer before - performing all-reduce (default: 256M). 
- """ - - def __init__(self, module, process_group, buffer_size=2 ** 28): - super().__init__() - - self.module = module - self.process_group = process_group - self.world_size = utils.get_world_size(self.process_group) - - # Never use a bigger buffer than the number of model params - self.buffer_size = min(buffer_size, sum(p.numel() for p in module.parameters())) - self.buffer = None - - # We can also forcibly accumulate grads locally and only do the - # all-reduce at some later time - self.accumulate_grads = False - - # make per-device lists of parameters - paramlists = OrderedDict() - for param in self.module.parameters(): - device = param.device - if paramlists.get(device) is None: - paramlists[device] = [] - paramlists[device] += [param] - self.per_device_params = list(paramlists.values()) - - @contextmanager - def no_sync(self): - """A context manager to disable gradient synchronization.""" - old_accumulate_grads = self.accumulate_grads - self.accumulate_grads = True - yield - self.accumulate_grads = old_accumulate_grads - - def forward(self, *inputs, **kwargs): - return self.module(*inputs, **kwargs) - - def all_reduce_grads(self): - """ - This function must be called explicitly after backward to reduce - gradients. There is no automatic hook like c10d. - """ - - def all_reduce_params(params): - buffer = self.buffer - nonzero_buffer = False - if len(params) > 1: - offset = 0 - for p in params: - sz = p.numel() - if p.grad is not None: - buffer[offset : offset + sz].copy_(p.grad.data.view(-1)) - nonzero_buffer = True - else: - buffer[offset : offset + sz].zero_() - offset += sz - else: - # we only have a single grad to all-reduce - p = params[0] - if p.grad is not None: - buffer = p.grad.data - nonzero_buffer = True - elif p.numel() <= self.buffer.numel(): - buffer = buffer[: p.numel()] - buffer.zero_() - else: - buffer = torch.zeros_like(p) - - if nonzero_buffer: - buffer.div_(self.world_size) - - utils.all_reduce(buffer, self.process_group) - - # copy all-reduced grads back into their original place - offset = 0 - for p in params: - sz = p.numel() - if p.grad is not None: - p.grad.data.copy_(buffer[offset : offset + sz].view_as(p)) - else: - p.grad = buffer[offset : offset + sz].view_as(p).clone() - offset += sz - - def reduction_fn(): - # This function only needs to be called once - if self.accumulate_grads: - return - - if self.buffer is None: - self.buffer = next(self.module.parameters()).new(self.buffer_size) - - for params in self.per_device_params: - # All-reduce the gradients in buckets - offset = 0 - buffered_params = [] - for param in params: - if not param.requires_grad: - continue - if param.grad is None: - param.grad = torch.zeros_like(param) - - if hasattr(param, "expert"): - # Skip gradient sync for unshared parameters - continue - - if param.grad.requires_grad: - raise RuntimeError( - "DistributedDataParallel only works " - "with gradients that don't require " - "grad" - ) - sz = param.numel() - if sz > self.buffer.numel(): - # all-reduce big params directly - all_reduce_params([param]) - else: - if offset + sz > self.buffer.numel(): - all_reduce_params(buffered_params) - offset = 0 - buffered_params.clear() - buffered_params.append(param) - offset += sz - - if len(buffered_params) > 0: - all_reduce_params(buffered_params) - - reduction_fn() diff --git a/kosmos-g/fairseq/fairseq/distributed/module_proxy_wrapper.py b/kosmos-g/fairseq/fairseq/distributed/module_proxy_wrapper.py deleted file mode 100644 index 904dc0c20..000000000 --- 
a/kosmos-g/fairseq/fairseq/distributed/module_proxy_wrapper.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from torch import nn - - -class ModuleProxyWrapper(nn.Module): - """ - Wrap a DistributedDataParallel module and forward requests for missing - attributes to the module wrapped by DDP (the twice-wrapped module). - Also forward calls to :func:`state_dict` and :func:`load_state_dict`. - - Usage:: - - module.xyz = "hello world" - wrapped_module = DistributedDataParallel(module, **ddp_args) - wrapped_module = ModuleProxyWrapper(wrapped_module) - assert wrapped_module.xyz == "hello world" - assert wrapped_module.state_dict().keys() == module.state_dict().keys() - - Args: - module (nn.Module): module to wrap - """ - - def __init__(self, module: nn.Module): - super().__init__() - assert hasattr( - module, "module" - ), "ModuleProxyWrapper expects input to wrap another module" - self.module = module - - def __getattr__(self, name): - """Forward missing attributes to twice-wrapped module.""" - try: - # defer to nn.Module's logic - return super().__getattr__(name) - except AttributeError: - try: - # forward to the once-wrapped module - return getattr(self.module, name) - except AttributeError: - # forward to the twice-wrapped module - return getattr(self.module.module, name) - - def state_dict(self, *args, **kwargs): - """Forward to the twice-wrapped module.""" - return self.module.module.state_dict(*args, **kwargs) - - def load_state_dict(self, *args, **kwargs): - """Forward to the twice-wrapped module.""" - return self.module.module.load_state_dict(*args, **kwargs) - - def forward(self, *args, **kwargs): - return self.module(*args, **kwargs) diff --git a/kosmos-g/fairseq/fairseq/distributed/tpu_distributed_data_parallel.py b/kosmos-g/fairseq/fairseq/distributed/tpu_distributed_data_parallel.py deleted file mode 100644 index 3b9e10330..000000000 --- a/kosmos-g/fairseq/fairseq/distributed/tpu_distributed_data_parallel.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from torch import nn - -from fairseq.distributed import utils - - -class TPUDistributedDataParallel(nn.Module): - def __init__(self, module, process_group): - super().__init__() - self.module = module - self.process_group = process_group - self.world_size = utils.get_world_size(self.process_group) - - def forward(self, *inputs, **kwargs): - return self.module(*inputs, **kwargs) - - def all_reduce_grads(self): - gradients = [] - for p in self.parameters(): - if not p.requires_grad: - continue - if p.grad is None: - p.grad = torch.zeros_like(p) - if p.grad.requires_grad: - raise RuntimeError( - "TPUDistributedDataParallel only works with gradients that don't " - "require grad" - ) - gradients.append(p.grad) - - import torch_xla.core.xla_model as xm - - xm.all_reduce( - "sum", - gradients, - scale=1.0 / self.world_size, - groups=self.process_group[1], - ) diff --git a/kosmos-g/fairseq/fairseq/distributed/utils.py b/kosmos-g/fairseq/fairseq/distributed/utils.py deleted file mode 100644 index 3f77dcda6..000000000 --- a/kosmos-g/fairseq/fairseq/distributed/utils.py +++ /dev/null @@ -1,813 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import io -import logging -import os -import pickle -import random -import socket -import struct -import subprocess -import warnings -from argparse import Namespace -from collections import OrderedDict -from dataclasses import dataclass -from typing import Any, Dict, List, Mapping, Optional - -import torch -import torch.distributed as dist -from fairseq.dataclass.configs import DistributedTrainingConfig, FairseqConfig -from omegaconf import open_dict - -try: - import torch_xla.core.xla_model as xm -except ImportError: - xm = None - - -# Flag to indicate if we're using Megatron -# NOTE: this is a temporary hack until we move away from Megatron's model parallel init -_USE_MEGATRON = False - -# Whether to use XLA ops (e.g., on TPUs) instead of CUDA ops. -_USE_XLA = False - - -logger = logging.getLogger(__name__) - - -def is_master(cfg: DistributedTrainingConfig): - return cfg.distributed_rank == 0 - - -def is_local_master(cfg: DistributedTrainingConfig): - # local master rank - return cfg.distributed_rank % cfg.nprocs_per_node == 0 - - -def infer_init_method(cfg: DistributedTrainingConfig, force_distributed=False): - if cfg.distributed_init_method is not None or cfg.tpu: - return - - num_pipelines_per_node = None - if cfg.pipeline_model_parallel: - num_pipeline_devices, num_pipelines_per_node = _pipeline_parallel_pre_init(cfg) - - if all( - key in os.environ - for key in ["MASTER_ADDR", "MASTER_PORT", "WORLD_SIZE", "RANK"] - ): - # support torch.distributed.launch - _infer_torch_distributed_launch_init(cfg) - elif cfg.distributed_port > 0: - # we can determine the init method automatically for Slurm - _infer_slurm_init(cfg, num_pipelines_per_node) - elif cfg.distributed_world_size > 1 or force_distributed: - # fallback for single node with multiple GPUs - _infer_single_node_init(cfg) - - if cfg.pipeline_model_parallel: - _pipeline_parallel_post_init(cfg, num_pipeline_devices, num_pipelines_per_node) - elif not cfg.distributed_no_spawn: - with open_dict(cfg): - cfg.distributed_num_procs = min( - torch.cuda.device_count(), cfg.distributed_world_size - ) - - -def _infer_torch_distributed_launch_init(cfg: DistributedTrainingConfig): - cfg.distributed_init_method = "env://" - cfg.distributed_world_size = int(os.environ["WORLD_SIZE"]) - cfg.distributed_rank = int(os.environ["RANK"]) - # processes are created by torch.distributed.launch - cfg.distributed_no_spawn = True - - -def _infer_slurm_init(cfg: DistributedTrainingConfig, num_pipelines_per_node): - node_list = os.environ.get("SLURM_STEP_NODELIST") - if node_list is None: - node_list = os.environ.get("SLURM_JOB_NODELIST") - if node_list is not None: - try: - hostnames = subprocess.check_output( - ["scontrol", "show", "hostnames", node_list] - ) - cfg.distributed_init_method = "tcp://{host}:{port}".format( - host=hostnames.split()[0].decode("utf-8"), - port=cfg.distributed_port, - ) - nnodes = int(os.environ.get("SLURM_NNODES")) - ntasks_per_node = os.environ.get("SLURM_NTASKS_PER_NODE") - if ntasks_per_node is not None: - ntasks_per_node = int(ntasks_per_node) - else: - ntasks = int(os.environ.get("SLURM_NTASKS")) - nnodes = int(os.environ.get("SLURM_NNODES")) - assert ntasks % nnodes == 0 - ntasks_per_node = int(ntasks / nnodes) - if ntasks_per_node == 1: - gpus_per_node = torch.cuda.device_count() - node_id = int(os.environ.get("SLURM_NODEID")) - cfg.distributed_rank = node_id * gpus_per_node - 
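-                # (one Slurm task per node: that task spawns one process per
-                # GPU, so ranks start at node_id * gpus_per_node and the world
-                # size below is the total GPU count across all nodes)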
-                cfg.distributed_world_size = nnodes * gpus_per_node
-            elif cfg.pipeline_model_parallel:
-                assert ntasks_per_node == num_pipelines_per_node, (
-                    "SLURM --ntasks-per-node must match number of pipelines per "
-                    "node (={})".format(num_pipelines_per_node)
-                )
-                cfg.distributed_no_spawn = True
-                # For 4-way MP on nodes with 8 GPUs, ranks will be [0, 1] on
-                # the first node, [2, 3] on the second node, etc. This
-                # matches torch.distributed.launch.
-                node_id = int(os.environ.get("SLURM_NODEID"))
-                local_id = int(os.environ.get("SLURM_LOCALID"))
-                cfg.distributed_rank = node_id * num_pipelines_per_node + local_id
-                # In the above example, device_id will always be in [0, 1],
-                # which also matches torch.distributed.launch.
-                cfg.device_id = local_id
-                # We also want to set distributed_world_size to be the total
-                # number of pipelines across all nodes.
-                cfg.distributed_world_size = nnodes * num_pipelines_per_node
-            else:
-                assert ntasks_per_node == cfg.distributed_world_size // nnodes
-                cfg.distributed_no_spawn = True
-                cfg.distributed_rank = int(os.environ.get("SLURM_PROCID"))
-                cfg.device_id = int(os.environ.get("SLURM_LOCALID"))
-        except subprocess.CalledProcessError as e:  # scontrol failed
-            raise e
-        except FileNotFoundError:  # Slurm is not installed
-            pass
-
-
-def _infer_single_node_init(cfg: DistributedTrainingConfig):
-    assert (
-        cfg.distributed_world_size <= torch.cuda.device_count()
-    ), f"world size is {cfg.distributed_world_size} but only {torch.cuda.device_count()} devices are available"
-    port = random.randint(10000, 20000)
-    cfg.distributed_init_method = "tcp://localhost:{port}".format(port=port)
-
-
-def _pipeline_parallel_pre_init(cfg: DistributedTrainingConfig):
-    from fairseq import utils
-
-    balance_exists = (
-        cfg.pipeline_balance is not None
-        or cfg.pipeline_encoder_balance is not None
-        or cfg.pipeline_decoder_balance is not None
-    )
-    devices_exist = (
-        cfg.pipeline_devices is not None
-        or cfg.pipeline_encoder_devices is not None
-        or cfg.pipeline_decoder_devices is not None
-    )
-    if not balance_exists:
-        raise ValueError(
-            "--pipeline-balance is currently required for pipeline model parallelism"
-        )
-    if not devices_exist:
-        raise ValueError(
-            "--pipeline-devices is currently required for pipeline model parallelism"
-        )
-
-    cfg.pipeline_balance = utils.eval_str_list(cfg.pipeline_balance, type=int)
-    if cfg.pipeline_devices is not None:
-        cfg.pipeline_devices = utils.eval_str_list(cfg.pipeline_devices, type=int)
-        num_pipeline_devices = len(set(cfg.pipeline_devices))
-    else:
-        cfg.pipeline_encoder_devices = utils.eval_str_list(
-            cfg.pipeline_encoder_devices, type=int
-        )
-        cfg.pipeline_decoder_devices = utils.eval_str_list(
-            cfg.pipeline_decoder_devices, type=int
-        )
-        num_pipeline_devices = len(
-            set(cfg.pipeline_encoder_devices + cfg.pipeline_decoder_devices)
-        )
-    gpus_per_node = torch.cuda.device_count()
-    assert (
-        gpus_per_node >= num_pipeline_devices
-        and gpus_per_node % num_pipeline_devices == 0
-    ), (
-        "the number of unique device IDs in --pipeline-devices must evenly divide "
-        "the number of GPUs per node (multi-node pipelining is not yet supported)"
-    )
-    num_pipelines_per_node = gpus_per_node // num_pipeline_devices
-    return num_pipeline_devices, num_pipelines_per_node
-
-
-def _pipeline_parallel_post_init(
-    cfg: DistributedTrainingConfig, num_pipeline_devices, num_pipelines_per_node
-):
-    if not cfg.distributed_no_spawn:
-        # When distributed_no_spawn is False, we expect distributed_rank and
-        # distributed_world_size to be based on the total
number of GPUs, so - # we need to correct them to be based on the number of pipelines. - assert cfg.distributed_world_size % num_pipeline_devices == 0 - cfg.distributed_world_size = cfg.distributed_world_size // num_pipeline_devices - # In the case of 4-way MP on nodes with 8 GPUs, we want - # distributed_rank to be the starting GPU index for each pipeline - # i.e., 0, 2, ... - gpus_per_node = torch.cuda.device_count() - assert cfg.distributed_rank % gpus_per_node == 0 - assert cfg.distributed_rank % num_pipeline_devices == 0 - - with open_dict(cfg): - cfg.distributed_rank = cfg.distributed_rank // num_pipeline_devices - # launch one process per pipeline - cfg.distributed_num_procs = num_pipelines_per_node - - # if we have 4-way MP on a node with 8 GPUs, we want device_ids to be 0 - # and 4, indicating the starting device IDs for each pipeline - cfg.device_id *= num_pipeline_devices - - if cfg.device_id > 0: - # if there's multiple pipelines on a node (e.g., 4-way MP on an 8 - # GPU node), we need to adjust pipeline_devices accordingly - logger.debug( - "setting CUDA device={} on rank {}".format( - cfg.device_id, cfg.distributed_rank - ) - ) - torch.cuda.set_device(cfg.device_id) - with open_dict(cfg): - cfg.pipeline_devices = [cfg.device_id + d for d in cfg.pipeline_devices] - logger.info( - "setting pipeline_devices={} on rank {}".format( - cfg.pipeline_devices, cfg.distributed_rank - ) - ) - - -def distributed_init(cfg: FairseqConfig): - if isinstance(cfg, Namespace): - from fairseq.dataclass.utils import convert_namespace_to_omegaconf - - cfg = convert_namespace_to_omegaconf(cfg) - - if not cfg.common.tpu: - if torch.distributed.is_available() and torch.distributed.is_initialized(): - warnings.warn( - "Distributed is already initialized, cannot initialize twice!" 
- ) - else: - logger.info( - "distributed init (rank {}): {}".format( - cfg.distributed_training.distributed_rank, - cfg.distributed_training.distributed_init_method, - ) - ) - dist.init_process_group( - backend=cfg.distributed_training.distributed_backend, - init_method=cfg.distributed_training.distributed_init_method, - world_size=cfg.distributed_training.distributed_world_size, - rank=cfg.distributed_training.distributed_rank, - ) - logger.info( - "initialized host {} as rank {}".format( - socket.gethostname(), - cfg.distributed_training.distributed_rank, - ) - ) - - # perform a dummy all-reduce to initialize the NCCL communicator - if torch.cuda.is_available(): - dist.all_reduce(torch.zeros(1).cuda()) - - cfg.distributed_training.distributed_rank = torch.distributed.get_rank() - else: - assert xm.xrt_world_size() == cfg.distributed_training.distributed_world_size - global _USE_XLA - _USE_XLA = True - cfg.distributed_training.device_id = xm.get_local_ordinal() - cfg.distributed_training.distributed_rank = xm.get_ordinal() - xm.rendezvous("distributed_init") # wait for all workers - - if is_master(cfg.distributed_training): - logging.getLogger().setLevel(logging.INFO) - else: - logging.getLogger().setLevel(logging.WARNING) - - if cfg.common.model_parallel_size > 1: - try: - from fairseq.model_parallel.megatron.mpu import ( - initialize_model_parallel, - model_parallel_cuda_manual_seed, - ) - except ImportError: - raise ImportError( - "\n\nPlease install the megatron submodule:" - "\n\n git submodule update --init " - "fairseq/model_parallel/megatron" - ) - global _USE_MEGATRON - _USE_MEGATRON = True - initialize_model_parallel(cfg.common.model_parallel_size) - model_parallel_cuda_manual_seed(cfg.common.seed) - model_part_number = get_model_parallel_rank() - cfg.checkpoint.checkpoint_suffix += "-model_part-{0}".format(model_part_number) - - if hasattr(cfg, "model") and getattr(cfg.model, "base_layers", 0) > 0: - cfg.checkpoint.checkpoint_suffix = ( - f"-rank-{cfg.distributed_training.distributed_rank}" - ) - - return cfg.distributed_training.distributed_rank - - -def distributed_main(i, main, cfg: FairseqConfig, kwargs): - cfg.distributed_training.device_id = i - if torch.cuda.is_available() and not cfg.common.cpu and not cfg.common.tpu: - torch.cuda.set_device(cfg.distributed_training.device_id) - if cfg.distributed_training.distributed_rank is None: # torch.multiprocessing.spawn - cfg.distributed_training.distributed_rank = kwargs.pop("start_rank", 0) + i - - cfg.distributed_training.distributed_rank = distributed_init(cfg) - - after_distributed_init_fn = kwargs.pop("after_distributed_init_fn", None) - if after_distributed_init_fn: - cfg = after_distributed_init_fn(cfg) - - main(cfg, **kwargs) - - if torch.distributed.is_initialized(): - torch.distributed.barrier(get_global_group()) - - -def call_main(cfg: FairseqConfig, main, **kwargs): - if cfg.distributed_training.distributed_init_method is None: - infer_init_method(cfg.distributed_training) - - if cfg.distributed_training.distributed_init_method is not None: - # distributed training - if not cfg.distributed_training.distributed_no_spawn: - start_rank = cfg.distributed_training.distributed_rank - cfg.distributed_training.distributed_rank = None # assign automatically - kwargs["start_rank"] = start_rank - torch.multiprocessing.spawn( - fn=distributed_main, - args=(main, cfg, kwargs), - nprocs=min( - torch.cuda.device_count(), - cfg.distributed_training.distributed_world_size, - ), - join=True, - ) - else: - 
distributed_main(cfg.distributed_training.device_id, main, cfg, kwargs) - elif cfg.common.tpu and cfg.distributed_training.distributed_world_size > 1: - import torch_xla.distributed.xla_multiprocessing as xmp - - torch.multiprocessing.set_sharing_strategy("file_system") - xmp.spawn( - fn=distributed_main, - args=(main, cfg, kwargs), - # tpu-comment: - # 8 devices in one TPU VM, is the max processes to be spawned. - # The rest is driven by xm.distributed.xla_dist - nprocs=min(cfg.distributed_training.distributed_world_size, 8), - ) - else: - # single GPU main - main(cfg, **kwargs) - - -def use_xla(): - global _USE_XLA - return _USE_XLA - - -def new_groups(grouped_ranks: List[List[int]]): - if use_xla(): - return ("tpu", grouped_ranks) - else: - groups = [dist.new_group(g) for g in grouped_ranks] - my_group_idx = _find_my_group_index(grouped_ranks) - return groups[my_group_idx] - - -def _find_my_group_index(grouped_ranks): - my_rank = get_global_rank() - for i, group in enumerate(grouped_ranks): - if my_rank in group: - return i - raise RuntimeError - - -def _find_my_group(grouped_ranks): - index = _find_my_group_index(grouped_ranks) - return grouped_ranks[index] - - -def get_rank(group): - if use_xla(): - assert group[0] == "tpu" - my_group = _find_my_group(group[1]) - return my_group.index(get_global_rank()) - else: - return dist.get_rank(group=group) - - -def get_world_size(group): - if use_xla(): - assert group[0] == "tpu" - my_group = _find_my_group(group[1]) - return len(my_group) - elif torch.distributed.is_initialized(): - return dist.get_world_size(group=group) - else: - return 1 - - -def get_global_group(): - if use_xla(): - return new_groups([list(range(get_global_world_size()))]) - elif torch.distributed.is_initialized(): - if not hasattr(get_global_group, "_global_group"): - # ideally we could use torch.distributed.group.WORLD, but it seems - # to cause random NCCL hangs in some cases - get_global_group._global_group = dist.new_group() - return get_global_group._global_group - else: - return None - - -def get_global_rank(): - if use_xla(): - return xm.get_ordinal() - elif torch.distributed.is_initialized(): - return torch.distributed.get_rank() - else: - return 0 - - -def get_global_world_size(): - if use_xla(): - return xm.xrt_world_size() - elif torch.distributed.is_initialized(): - return torch.distributed.get_world_size() - else: - return 1 - - -def get_data_parallel_group(): - """Get the data parallel group the caller rank belongs to.""" - global _USE_MEGATRON - if _USE_MEGATRON: - from fairseq.model_parallel.megatron import mpu - - return mpu.get_data_parallel_group() - else: - return get_global_group() - - -def get_data_parallel_rank(): - """Return my rank for the data parallel group.""" - return get_rank(get_data_parallel_group()) - - -def get_data_parallel_world_size(): - """Return world size for the data parallel group.""" - return get_world_size(get_data_parallel_group()) - - -def get_model_parallel_group(): - global _USE_MEGATRON - if _USE_MEGATRON: - from fairseq.model_parallel.megatron import mpu - - return mpu.get_model_parallel_group() - else: - return None - - -def get_model_parallel_rank(): - """Return my rank for the model parallel group.""" - return get_rank(get_model_parallel_group()) - - -def get_model_parallel_world_size(): - """Return world size for the model parallel group.""" - return get_world_size(get_model_parallel_group()) - - -def all_reduce(tensor, group, op="sum"): - if use_xla(): - assert isinstance(group, tuple) and group[0] == "tpu" - tensor = 
[tensor] # wrap in a list to make xm.all_reduce in-place - return xm.all_reduce(op, tensor, groups=group[1])[0] - else: - if op == "sum": - op = dist.ReduceOp.SUM - elif op == "max": - op = dist.ReduceOp.MAX - else: - raise NotImplementedError - dist.all_reduce(tensor, op=op, group=group) - return tensor - - -def broadcast(tensor, src, group): - if use_xla(): - # XLA doesn't support broadcast, hack it with all_reduce - if get_rank(group) != src: - tensor.zero_() - all_reduce(tensor, group) - else: - dist.broadcast(tensor, src=src, group=group) - - -def all_to_all(tensor, group): - """Perform an all-to-all operation on a 1D Tensor.""" - assert tensor.dim() == 1 - split_count = get_world_size(group=group) - assert tensor.numel() % split_count == 0 - if use_xla(): - assert isinstance(group, tuple) and group[0] == "tpu" - return xm.all_to_all( - tensor, - split_dimension=0, - concat_dimension=0, - split_count=split_count, - groups=group[1], - ) - else: - output = torch.zeros_like(tensor) - dist.all_to_all_single(output, tensor, group=group) - return output - - -def all_gather(tensor, group, return_tensor=False): - """Perform an all-gather operation.""" - if use_xla(): - result = xm.all_gather(tensor, groups=group[1]) - world_size = get_world_size(group=group) - result = result.view(world_size, *tensor.size()) - if return_tensor: - return result - else: - return [result[i] for i in range(world_size)] - else: - world_size = get_world_size(group=group) - rank = get_rank(group=group) - tensor_list = [ - tensor if i == rank else torch.empty_like(tensor) for i in range(world_size) - ] - dist.all_gather(tensor_list, tensor, group=group) - if return_tensor: - return torch.stack(tensor_list, dim=0) - else: - return tensor_list - - -def all_gather_list(data, group=None, max_size=16384): - """Gathers arbitrary data from all nodes into a list. - - Similar to :func:`~torch.distributed.all_gather` but for arbitrary Python - data. Note that *data* must be picklable and any CUDA tensors will be moved - to CPU and returned on CPU as well. 
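-    For example (sketch): with two workers, where rank 0 passes ``{"n": 1}``
-    and rank 1 passes ``{"n": 2}``, every worker receives the list
-    ``[{"n": 1}, {"n": 2}]``.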
- - Args: - data (Any): data from the local worker to be gathered on other workers - group: group of the collective - max_size (int, optional): maximum size of the data to be gathered - across workers - """ - from fairseq import utils - - if group is None: - group = get_global_group() - rank = get_rank(group=group) - world_size = get_world_size(group=group) - - buffer_size = max_size * world_size - if ( - not hasattr(all_gather_list, "_buffer") - or all_gather_list._buffer.numel() < buffer_size - ): - all_gather_list._buffer = torch.cuda.ByteTensor(buffer_size) - all_gather_list._cpu_buffer = torch.ByteTensor(max_size).pin_memory() - buffer = all_gather_list._buffer - buffer.zero_() - cpu_buffer = all_gather_list._cpu_buffer - - data = utils.move_to_cpu(data) - enc = pickle.dumps(data) - enc_size = len(enc) - header_size = 4 # size of header that contains the length of the encoded data - size = header_size + enc_size - if size > max_size: - raise ValueError( - "encoded data size ({}) exceeds max_size ({})".format(size, max_size) - ) - - header = struct.pack(">I", enc_size) - cpu_buffer[:size] = torch.ByteTensor(list(header + enc)) - start = rank * max_size - buffer[start : start + size].copy_(cpu_buffer[:size]) - - all_reduce(buffer, group=group) - - buffer = buffer.cpu() - try: - result = [] - for i in range(world_size): - out_buffer = buffer[i * max_size : (i + 1) * max_size] - (enc_size,) = struct.unpack(">I", bytes(out_buffer[:header_size].tolist())) - if enc_size > 0: - result.append( - pickle.loads( - bytes(out_buffer[header_size : header_size + enc_size].tolist()) - ) - ) - return result - except pickle.UnpicklingError: - raise Exception( - "Unable to unpickle data from other workers. all_gather_list requires all " - "workers to enter the function together, so this error usually indicates " - "that the workers have fallen out of sync somehow. Workers can fall out of " - "sync if one of them runs out of memory, or if there are other conditions " - "in your training script that can cause one worker to finish an epoch " - "while other workers are still iterating over their portions of the data. " - "Try rerunning with --ddp-backend=legacy_ddp and see if that helps." - ) - - -def all_reduce_dict(data: Mapping[str, Any], device, group) -> Dict[str, Any]: - """ - AllReduce a dictionary of values across workers. We separately - reduce items that are already on the device and items on CPU for - better performance. - - Args: - data (Mapping[str, Any]): dictionary of data to all-reduce, but - cannot be a nested dictionary - device (torch.device): device for the reduction - group: group of the collective - """ - data_keys = list(data.keys()) - - # We want to separately reduce items that are already on the - # device and items on CPU for performance reasons. 
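As an aside on `all_gather_list` above: its per-rank wire format (a 4-byte big-endian length header followed by the pickled payload, zero-padded to a fixed slot) can be checked without any distributed setup. A minimal sketch, with an illustrative payload and slot size:

```python
import pickle
import struct

MAX_SIZE = 16384   # per-rank slot size, as in all_gather_list
HEADER_SIZE = 4    # size of the big-endian length header

def encode_slot(data, max_size=MAX_SIZE):
    # Pickle the payload, prepend its length, then pad to the slot size.
    enc = pickle.dumps(data)
    assert HEADER_SIZE + len(enc) <= max_size, "encoded data exceeds max_size"
    framed = struct.pack(">I", len(enc)) + enc
    return framed.ljust(max_size, b"\x00")

def decode_slot(slot):
    # Read the length header, then unpickle exactly that many bytes.
    (enc_size,) = struct.unpack(">I", slot[:HEADER_SIZE])
    return pickle.loads(slot[HEADER_SIZE : HEADER_SIZE + enc_size])

slot = encode_slot({"loss": 1.25, "ntokens": 512})
assert decode_slot(slot) == {"loss": 1.25, "ntokens": 512}
```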
- cpu_data = OrderedDict() - device_data = OrderedDict() - for k in data_keys: - t = data[k] - if not torch.is_tensor(t): - cpu_data[k] = torch.tensor(t, dtype=torch.double) - elif t.device.type != device.type: - cpu_data[k] = t.to(dtype=torch.double) - else: - device_data[k] = t.to(dtype=torch.double) - - def _all_reduce_dict(data: OrderedDict): - if len(data) == 0: - return data - buf = torch.cat([t.view(-1) for t in data.values()]).to(device=device) - all_reduce(buf, group=group) - split_buf = torch.split(buf.clone(), [t.numel() for t in data.values()]) - reduced_data = [t.view_as(orig) for t, orig in zip(split_buf, data.values())] - return OrderedDict(zip(data.keys(), reduced_data)) - - cpu_data = _all_reduce_dict(cpu_data) - device_data = _all_reduce_dict(device_data) - - def get_from_stack(key): - if key in cpu_data: - return cpu_data[key] - elif key in device_data: - return device_data[key] - raise KeyError - - return OrderedDict([(key, get_from_stack(key)) for key in data_keys]) - - -def broadcast_tensors( - tensors: Optional[List[torch.Tensor]], - src_rank: int, - group: object, - dist_device: Optional[torch.device] = None, -) -> List[torch.Tensor]: - """ - Broadcasts a list of tensors without other (non-src) ranks needing to know - the dtypes/shapes of the tensors. - """ - if dist_device is None: - if torch.distributed.get_backend(group) == "nccl": - dist_device = torch.device("cuda") - else: - dist_device = torch.device("cpu") - - # share metadata first to simplify transfer - is_src_rank = get_rank(group) == src_rank - if is_src_rank: - metadata = [ - {"size": t.size(), "dtype": t.dtype, "device": t.device} for t in tensors - ] - metadata = _broadcast_object_slow(metadata, src_rank, group, dist_device) - else: - metadata = _broadcast_object_slow(None, src_rank, group, dist_device) - - out_tensors = [] - for i, meta in enumerate(metadata): - if is_src_rank: - tensor = tensors[i] - broadcast(tensors[i].to(dist_device), src=src_rank, group=group) - else: - tensor = torch.zeros( - [meta["size"].numel()], dtype=meta["dtype"], device=dist_device - ) - broadcast(tensor, src=src_rank, group=group) - tensor = tensor.view(meta["size"]).to(meta["device"]) - out_tensors.append(tensor) - return out_tensors - - -def broadcast_object( - obj: Any, - src_rank: int, - group: object, - dist_device: Optional[torch.device] = None, -) -> Any: - """Broadcast an arbitrary Python object to other workers.""" - if dist_device is None: - if torch.distributed.get_backend(group) == "nccl": - dist_device = torch.device("cuda") - else: - dist_device = torch.device("cpu") - - if get_rank(group) == src_rank: - # split the tensors from the non-tensors so we can broadcast them - # directly, avoiding unnecessary serialization/deserialization - tensors = [] - obj = _split_tensors_from_obj(obj, tensors) - obj = _broadcast_object_slow(obj, src_rank, group, dist_device) - tensors = broadcast_tensors(tensors, src_rank, group, dist_device) - else: - obj = _broadcast_object_slow(None, src_rank, group, dist_device) - tensors = broadcast_tensors(None, src_rank, group, dist_device) - return _put_tensors_in_obj(obj, tensors) - - -def _broadcast_object_slow( - obj: Any, - src_rank: int, - group: object, - dist_device: torch.device, -) -> Any: - if get_rank(group) == src_rank: - # Emit data - buffer = io.BytesIO() - torch.save(obj, buffer) - buffer = torch.ByteTensor(buffer.getbuffer()).to(dist_device) - length = torch.LongTensor([len(buffer)]).to(dist_device) - broadcast(length, src=src_rank, group=group) - broadcast(buffer, 
src=src_rank, group=group) - else: - # Fetch from the source - length = torch.LongTensor([0]).to(dist_device) - broadcast(length, src=src_rank, group=group) - buffer = torch.ByteTensor(int(length.item())).to(dist_device) - broadcast(buffer, src=src_rank, group=group) - buffer = io.BytesIO(buffer.cpu().numpy()) - obj = torch.load(buffer, map_location="cpu") - return obj - - -@dataclass(frozen=True) -class _TensorPlaceholder: - index: int - - -def _split_tensors_from_obj(obj: Any, tensors: List[torch.Tensor]) -> Any: - if torch.is_tensor(obj): - placeholder = _TensorPlaceholder(index=len(tensors)) - tensors.append(obj) - return placeholder - elif isinstance(obj, dict): - return {k: _split_tensors_from_obj(v, tensors) for k, v in obj.items()} - elif isinstance(obj, list): - return [_split_tensors_from_obj(v, tensors) for v in obj] - elif isinstance(obj, tuple): - return tuple(_split_tensors_from_obj(v, tensors) for v in obj) - elif isinstance(obj, set): - return {_split_tensors_from_obj(v, tensors) for v in obj} - else: - return obj - - -def _put_tensors_in_obj(obj: Any, tensors: List[torch.Tensor]) -> Any: - if isinstance(obj, _TensorPlaceholder): - return tensors[obj.index] - elif isinstance(obj, dict): - return {k: _put_tensors_in_obj(v, tensors) for k, v in obj.items()} - elif isinstance(obj, list): - return [_put_tensors_in_obj(v, tensors) for v in obj] - elif isinstance(obj, tuple): - return tuple(_put_tensors_in_obj(v, tensors) for v in obj) - elif isinstance(obj, set): - return {_put_tensors_in_obj(v, tensors) for v in obj} - else: - return obj diff --git a/kosmos-g/fairseq/fairseq/ds_trainer.py b/kosmos-g/fairseq/fairseq/ds_trainer.py deleted file mode 100644 index 1c267eeb4..000000000 --- a/kosmos-g/fairseq/fairseq/ds_trainer.py +++ /dev/null @@ -1,1111 +0,0 @@ -""" -DeepSpeed trainer -""" - -import os -import sys -import torch -import time -import logging -import deepspeed -import json -import subprocess - -from typing import Any, Dict, List -from itertools import chain -from argparse import Namespace -import torch.distributed as dist - -from fairseq import checkpoint_utils, models, optim, utils -from fairseq.distributed import utils as distributed_utils -from fairseq.dataclass.configs import FairseqConfig -from fairseq.logging import meters, metrics -from fairseq.optim import lr_scheduler -from fairseq.optim.dynamic_loss_scaler import DynamicLossScaler -from fairseq.trainer import Trainer -from fairseq.file_io import PathManager - -from omegaconf import OmegaConf - - -logger = logging.getLogger(__name__) - - -def get_config(config, full_name, fairseq_value): - _config = config - for name in full_name.split(":"): - if name in _config: - _config = _config[name] - else: - _config = fairseq_value - break - assert _config == fairseq_value, f"deepspeed config: {full_name} does not align with fairseq value: {fairseq_value}" - return _config - - -class DeepSpeedTrainer(object): - """Main class for data parallel training w. DeepSpeed. - - Similar to fairseq.Trainer this class supports synchronous distributed - data parallel training. However, in this case we expose DeepSpeed features - like ZeRO stages 1, 2, and 3. - """ - - def __init__(self, cfg: FairseqConfig, task, model, criterion): - if isinstance(cfg, Namespace): - logger.warning( - "argparse.Namespace configuration is deprecated! 
Automatically converting to OmegaConf" - ) - cfg = convert_namespace_to_omegaconf(cfg) - - self.cfg = cfg - self.task = task - - # try: - # subprocess.check_output('which rsync', shell=True) - # except subprocess.CalledProcessError: - # raise RuntimeError('Please install rsync, this is required for model checkpointing') - - ds_config = {} - if isinstance(cfg.common.deepspeed, str): - assert os.path.isfile(cfg.common.deepspeed), f"deepspeed config path is not a file: {cfg.common.deepspeed}" - with open(cfg.common.deepspeed, 'r') as fd: - ds_config = json.load(fd) - - # ds_config['zero_allow_untested_optimizer'] = True - - # gradient accumulation steps - assert len(self.cfg.optimization.update_freq) == 1, "no support for gradient accumulation schedules" - gas = ds_config.get("gradient_accumulation_steps", self.cfg.optimization.update_freq[0]) - ds_config["gradient_accumulation_steps"] = gas - - # train_micro_batch_size_per_gpu - micro_batch_size = get_config(ds_config, "train_micro_batch_size_per_gpu", self.cfg.dataset.batch_size) - # micro_batch_size = get_config(ds_config, "train_micro_batch_size_per_gpu", self.cfg.dataset.max_tokens // self.cfg.task.tokens_per_sample) - ds_config["train_micro_batch_size_per_gpu"] = int(micro_batch_size) - - # enable fp16 - fp16 = get_config(config=ds_config, full_name="fp16:enabled", fairseq_value=self.cfg.common.fp16) - if "fp16" not in ds_config: - ds_config["fp16"] = {} - ds_config["fp16"]["enabled"] = fp16 - - # gradient_clipping self.cfg.optimization.clip_norm - grad_clip = get_config(ds_config, "gradient_clipping", self.cfg.optimization.clip_norm) - ds_config["gradient_clipping"] = grad_clip - - # force zero elastic checkpoint disabled - elastic_ckpt = get_config(ds_config, "zero_optimization:elastic_checkpoint", False) - if "zero_optimization" not in ds_config: - ds_config["zero_optimization"] = {} - ds_config["zero_optimization"]["elastic_checkpoint"] = elastic_ckpt - - zero_stage = get_config(ds_config, "zero_optimization:stage", cfg.common.zero) - ds_config["zero_optimization"]["stage"] = zero_stage - self.zero_stage = int(zero_stage) - - ds_config["zero_optimization"]["contiguous_gradients"] = False - - assert cfg.common.zero != 3, "zero stage 3 is currently untested with this codebase" - - self.ds_config = ds_config - - print(f"****** fairseq generated ds-config: {self.ds_config}") - - # catalog shared parameters - shared_params = _catalog_shared_params(model) - self.tpu = cfg.common.tpu - assert not self.tpu, "deepspeed does not support tpu" - self.cuda = torch.cuda.is_available() - assert self.cuda, "deepspeed assumes cuda devices are available" - self.device = torch.device("cuda") - - self._criterion = criterion - self._model = model - - # assert self.cfg.common.fp16, "only fp16 is supported" - assert self.cfg.distributed_training.ddp_backend in ["c10d", "legacy_ddp", "no_c10d"] - assert cfg.distributed_training.ddp_backend != "fully_sharded" - assert not cfg.common.bf16, "bf16 not yet supported" - assert not cfg.distributed_training.pipeline_model_parallel, "pipeline not yet supported" - assert not self.cfg.optimization.use_bmuf, "bmuf not yet supported" - assert self.cfg.distributed_training.zero_sharding != "os" - assert not self.cfg.common.memory_efficient_fp16, "mem efficient fp16 not yet supported" - assert self.cfg.distributed_training.ddp_backend != "slow_mo" - - self.pipeline_model_parallel = cfg.distributed_training.pipeline_model_parallel - - self.zero_enabled = False - - if cfg.common.fp16: - self._criterion = 
self._criterion.half() - self._model = self._model.half() - - assert not utils.has_parameters(self._criterion), "criterion has params, not supported yet" - - # check that shared parameters are preserved after device transfer - for shared_param in shared_params: - ref = _get_module_by_path(self._model, shared_param[0]) - for path in shared_param[1:]: - logger.info( - "detected shared parameter: {} <- {}".format(shared_param[0], path) - ) - _set_module_by_path(self._model, path, ref) - - self._dummy_batch = None # indicates we don't have a dummy batch at first - self._lr_scheduler = None - self._num_updates = 0 - self._num_xla_compiles = 0 # for TPUs - self._optim_history = None - self._optimizer = None - self._warn_once = set() - self._wrapped_criterion = None - self._wrapped_model = None - - self.train_step_count = 0 - - if self.cuda and self.data_parallel_world_size > 1: - self._grad_norm_buf = torch.cuda.DoubleTensor(self.data_parallel_world_size) - else: - self._grad_norm_buf = None - - self.quantizer = None - - # get detailed cuda environment - if self.cuda: - self.cuda_env = utils.CudaEnvironment() - if self.data_parallel_world_size > 1: - self.cuda_env_arr = distributed_utils.all_gather_list( - self.cuda_env, group=distributed_utils.get_global_group() - ) - else: - self.cuda_env_arr = [self.cuda_env] - if self.data_parallel_rank == 0: - utils.CudaEnvironment.pretty_print_cuda_env_list(self.cuda_env_arr) - else: - self.cuda_env = None - self.cuda_env_arr = None - - metrics.log_start_time("wall", priority=790, round=2) - - self._start_time = time.time() - self._previous_training_time = 0 - self._cumulative_training_time = None - - self._build_optimizer() - - def _build_optimizer(self): - params = list( - filter( - lambda p: p.requires_grad, - chain(self.model.parameters(), self.criterion.parameters()), - ) - ) - - # create simple optimizer, DS will handle fp16 wrappers - optimizer = optim.build_optimizer(self.cfg.optimizer, params) - - print(f"************ built fairseq optimizer: {optimizer}") - - os.environ['LOCAL_RANK'] = str(self.cfg.distributed_training.device_id) - os.environ['OMPI_COMM_WORLD_LOCAL_RANK'] = str(self.cfg.distributed_training.device_id) - self.device = torch.device("cuda", self.cfg.distributed_training.device_id) - self.model.to(device=self.device) - - torch.distributed.barrier() - print(f'done pre-engine rank={dist.get_rank()}') - - engine, optimizer, _, _ = deepspeed.initialize( - model=self.model, - optimizer=optimizer, - config_params=self.ds_config - ) - - self.zero_enabled = engine.zero_optimization_stage() > 0 - - # We should initialize the learning rate scheduler immediately after - # building the optimizer, so that the initial learning rate is set. 
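For reference, the ds_config assembled in `__init__` and passed to `deepspeed.initialize` above ends up with roughly the following shape; every value here is illustrative and would normally be derived from the fairseq config as shown:

```python
# Illustrative only: mirrors the keys DeepSpeedTrainer.__init__ fills in.
ds_config = {
    "gradient_accumulation_steps": 1,      # optimization.update_freq[0]
    "train_micro_batch_size_per_gpu": 8,   # dataset.batch_size
    "fp16": {"enabled": True},             # common.fp16
    "gradient_clipping": 1.0,              # optimization.clip_norm
    "zero_optimization": {
        "stage": 2,                        # common.zero (stage 3 is asserted out)
        "elastic_checkpoint": False,       # forced off above
        "contiguous_gradients": False,     # forced off above
    },
}
```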
- self._lr_scheduler = lr_scheduler.build_lr_scheduler(
- self.cfg.lr_scheduler,
- engine.optimizer,
- )
- self._lr_scheduler.step_update(0)
-
- self._optimizer = optimizer
- self._wrapped_model = engine
- self.device = engine.device
- self._criterion.to(device=self.device)
- print(f"local_rank={torch.distributed.get_rank()}, engine.device={engine.device}") #, engine.module.device={engine.module.device}")
- torch.distributed.barrier()
-
- if getattr(self.cfg.common, "fp16_scale_window", None) is None:
- if len(self.cfg.optimization.update_freq) > 1:
- raise ValueError(
- "--fp16-scale-window must be given explicitly when using a "
- "custom --update-freq schedule"
- )
- data_parallel_size = int(
- self.cfg.distributed_training.distributed_world_size
- / self.cfg.common.model_parallel_size
- )
- scale_window = int(
- 2 ** 14 / data_parallel_size / self.cfg.optimization.update_freq[0]
- )
- else:
- scale_window = self.cfg.common.fp16_scale_window
-
- self.scaler = DynamicLossScaler(
- init_scale=self.cfg.common.fp16_init_scale,
- scale_window=scale_window,
- tolerance=self.cfg.common.fp16_scale_tolerance,
- threshold=self.cfg.common.threshold_loss_scale,
- min_loss_scale=self.cfg.common.min_loss_scale,
- )
-
- @metrics.aggregate("train")
- def train_step(self, samples, raise_oom=False):
- """Do forward, backward and parameter update."""
- self._set_seed()
- self.model.train()
- self.criterion.train()
- self.zero_grad()
-
- metrics.log_start_time("train_wall", priority=800, round=2)
-
- grad_norm = torch.tensor(0.0).cuda()
-
- # let fairseq handle grad accumulation scaling
- self.model.scale_wrt_gas = False
-
- # forward and backward pass
- logging_outputs, sample_size, ooms = [], 0, 0
- # print(f'** samples size: {len(samples)}')
- sample_count = len(samples)
- for i, sample in enumerate(samples): # delayed update loop
- sample, is_dummy_batch = self._prepare_sample(sample)
-
- if self.cfg.common.fp16:
- self.model.optimizer.override_loss_scale(self.scaler.loss_scale)
-
- self.model.set_gradient_accumulation_boundary(is_boundary=False)
- try:
- # forward and backward
- # print(f'i={i}, rank={dist.get_rank()}, pre task.train_step')
- dist.barrier()
- loss, sample_size_i, logging_output = self.task.train_step(
- sample=sample,
- model=self.model,
- criterion=self.criterion,
- optimizer=self.optimizer,
- update_num=self.get_num_updates(),
- ignore_grad=is_dummy_batch,
- )
- self.train_step_count += 1
-
- # increment deepspeed micro step on non-final train step since optimizer.step will increment it for us
- if (i + 1) != sample_count:
- self.model.micro_steps += 1
-
- self.model.set_gradient_accumulation_boundary(is_boundary=True)
- if self.zero_stage <= 2:
- self.model.allreduce_gradients()
-
- # print(f'grads[0]={list([p.grad for p in self.model.optimizer.fp16_groups[0]])}')
- # print(f'train_step={self.train_step_count}, loss_scale={self.model.optimizer.cur_scale}')
- # print(f'i={i}, rank={dist.get_rank()}, loss={loss}')
- if torch.distributed.get_rank() == 0:
- _loss_scale = self.model.optimizer.external_loss_scale if self.cfg.common.fp16 else 0
- print(f"[{torch.distributed.get_rank()}], " \
- f"micro_step={self.model.micro_steps}, " \
- f"gas_boundary={self.model.is_gradient_accumulation_boundary()}, " \
- f"train_step={self.train_step_count}, " \
- f"lr={self.get_lr()}, " \
- f"loss_scale={_loss_scale}, " \
- f"loss={loss}")
-
- if self.cfg.common.exit_interval and self.train_step_count % self.cfg.common.exit_interval == 0:
- if torch.distributed.get_rank() == 0:
- print("exiting 
early...") - sys.exit() - - logging_outputs.append(logging_output) - sample_size += sample_size_i - - # emptying the CUDA cache after the first step can - # reduce the chance of OOM - if self.cuda and self.get_num_updates() == 0: - torch.cuda.empty_cache() - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - if raise_oom: - raise e - logger.warning( - "attempting to recover from OOM in forward/backward pass" - ) - ooms += 1 - self.zero_grad() - if self.cuda: - torch.cuda.empty_cache() - if self.cfg.distributed_training.distributed_world_size == 1: - return None - else: - raise e - - if is_dummy_batch: - if torch.is_tensor(sample_size): - sample_size.zero_() - else: - sample_size *= 0.0 - - if torch.is_tensor(sample_size): - sample_size = sample_size.float() - else: - sample_size = float(sample_size) - - # gather logging outputs from all replicas - if self.data_parallel_world_size > 1: - train_time = self._local_cumulative_training_time() - logging_outputs, ( - sample_size, - ooms, - total_train_time, - ) = self._aggregate_logging_outputs( - logging_outputs, sample_size, ooms, train_time, ignore=is_dummy_batch - ) - self._cumulative_training_time = ( - total_train_time / self.data_parallel_world_size - ) - - # grad_norms = [] - # for param in self.optimizer.fp16_params: - # if param.grad is not None: - # grad_norms.append(param.grad.norm()) - # print(grad_norms) - - # self.model.optimizer.dump_grad_norms() - - ###################### - ### multiply grads ### - ###################### - # numer = ( - # self.data_parallel_world_size - # if not self.cfg.optimization.use_bmuf or self._sync_stats() - # else 1 - # ) - # self.optimizer.multiply_grads(numer / (sample_size or 1.0)) - # grad_norm = self.optimizer.clip_grad_norm(self.cfg.optimization.clip_norm) - - overflow = False - try: - with torch.autograd.profiler.record_function("optimizer"): - # take an optimization step - self.task.optimizer_step( - self.optimizer, model=self.model, update_num=self.get_num_updates() - ) - # pass overflow flag from ds to fairseq - if self.cfg.common.fp16: - overflow = self.model.optimizer.overflow - self.scaler.check_overflow(overflow=overflow) - self.scaler.update() - - _grad_norm = self.model.get_global_grad_norm() - if _grad_norm is not None: - grad_norm = torch.tensor(_grad_norm).to(self.device) - except FloatingPointError: - raise - except OverflowError as e: - logger.info(f"NOTE: gradient overflow detected, ignoring gradient, {str(e)}") - overflow = True - - # import sys; sys.exit() - # except RuntimeError as e: - # raise e - - # except FloatingPointError: - # # re-run the forward and backward pass with hooks attached to print - # # out where it fails - # self.zero_grad() - # with NanDetector(self.get_model()): - # for _, sample in enumerate(samples): - # sample, _ = self._prepare_sample(sample) - # self.task.train_step( - # sample, - # self.model, - # self.criterion, - # self.optimizer, - # self.get_num_updates(), - # ignore_grad=False, - # ) - # raise - # except OverflowError as e: - # overflow = True - # logger.info(f"NOTE: gradient overflow detected, ignoring gradient, {str(e)}") - # grad_norm = torch.tensor(0.0).cuda() - # self.zero_grad() - # except RuntimeError as e: - # if "out of memory" in str(e): - # self._log_oom(e) - # logger.error("OOM during optimization, irrecoverable") - # raise e - - # Some distributed wrappers (e.g., SlowMo) need access to the optimizer - # after the step - # if hasattr(self.model, "perform_additional_optimizer_actions"): - # if 
hasattr(self.optimizer, "fp32_params"):
- # self.model.perform_additional_optimizer_actions(
- # self.optimizer.optimizer, self.optimizer.fp32_params
- # )
- # else:
- # self.model.perform_additional_optimizer_actions(
- # self.optimizer.optimizer
- # )
-
- logging_output = None
- if not overflow or self.cfg.distributed_training.ddp_backend == "slow_mo":
- self.set_num_updates(self.get_num_updates() + 1)
-
- if self.cuda and self.cuda_env is not None:
- # log minimum free memory over the iteration
- gb_used = torch.cuda.max_memory_allocated() / 1024 / 1024 / 1024
- torch.cuda.reset_peak_memory_stats()
- gb_free = self.cuda_env.total_memory_in_GB - gb_used
- metrics.log_scalar(
- "gb_free", gb_free, priority=1500, round=1, weight=0
- )
-
- # log stats
- logging_output = self._reduce_and_log_stats(
- logging_outputs, sample_size, grad_norm
- )
-
- # clear CUDA cache to reduce memory fragmentation
- # if (
- # self.cuda
- # and self.cfg.common.empty_cache_freq > 0
- # and (
- # (self.get_num_updates() + self.cfg.common.empty_cache_freq - 1)
- # % self.cfg.common.empty_cache_freq
- # )
- # == 0
- # ):
- # torch.cuda.empty_cache()
-
- if self.cfg.common.fp16:
- metrics.log_scalar(
- "loss_scale",
- # self.optimizer.loss_scaler.loss_scaler,
- self.optimizer.cur_scale,
- priority=700,
- round=4,
- weight=0,
- )
-
- metrics.log_stop_time("train_wall")
- return logging_output
-
- def _sync_stats(self):
- if self.data_parallel_world_size == 1:
- return False
- else:
- return True
-
- def set_num_updates(self, num_updates):
- """Set the number of parameter updates."""
- self._num_updates = num_updates
- self.lr_step_update()
- metrics.log_scalar("num_updates", self._num_updates, weight=0, priority=200)
-
- def _aggregate_logging_outputs(
- self,
- logging_outputs: List[Dict[str, Any]],
- *extra_stats_to_sum,
- ignore=False,
- ):
- if self.task.__class__.logging_outputs_can_be_summed(self.get_criterion()):
- return self._fast_stat_sync_sum(
- logging_outputs, *extra_stats_to_sum, ignore=ignore
- )
- else:
- return self._all_gather_list_sync(
- logging_outputs, *extra_stats_to_sum, ignore=ignore
- )
-
- def _fast_stat_sync_sum(
- self, logging_outputs: List[Dict[str, Any]], *extra_stats_to_sum, ignore=False,
- ):
- """
- Sync logging outputs across workers. fast_stat_sync_sum is
- faster than all_gather_list_sync, but is only suitable when
- logging outputs are scalars and can be summed. Note that
- *logging_outputs* cannot contain any nested dicts/lists.
- """
- data = {}
- for i, stat in enumerate(extra_stats_to_sum):
- data["extra_stats_" + str(i)] = stat
- if len(logging_outputs) > 0:
- log_keys = list(logging_outputs[0].keys())
- for k in log_keys:
- if not ignore:
- v = sum(log[k] for log in logging_outputs if k in log)
- else:
- v = logging_outputs[0][k]
- v = torch.zeros_like(v) if torch.is_tensor(v) else 0
- data["logging_outputs_" + k] = v
- else:
- log_keys = None
-
- data = distributed_utils.all_reduce_dict(
- data, device=self.device, group=self.data_parallel_process_group
- )
-
- extra_stats_to_sum = [
- data["extra_stats_" + str(i)] for i in range(len(extra_stats_to_sum))
- ]
- if log_keys is not None:
- logging_outputs = [{k: data["logging_outputs_" + k] for k in log_keys}]
- else:
- logging_outputs = []
- return logging_outputs, extra_stats_to_sum
-
- def _all_gather_list_sync(
- self,
- logging_outputs: List[Dict[str, Any]],
- *extra_stats_to_sum,
- ignore=False,
- ):
- """
- Sync logging outputs across workers. all_gather_list_sync is
- suitable when logging outputs are complex types. 
- """ - if self.tpu: - raise NotImplementedError - if ignore: - logging_outputs = [] - results = list( - zip( - *distributed_utils.all_gather_list( - [logging_outputs] + list(extra_stats_to_sum), - max_size=getattr(self.cfg.common, "all_gather_list_size", 16384), - group=self.data_parallel_process_group, - ) - ) - ) - logging_outputs, extra_stats_to_sum = results[0], results[1:] - logging_outputs = list(chain.from_iterable(logging_outputs)) - extra_stats_to_sum = [sum(s) for s in extra_stats_to_sum] - return logging_outputs, extra_stats_to_sum - - def _reduce_and_log_stats(self, logging_outputs, sample_size, grad_norm=None): - if grad_norm is not None and ( - not torch.is_tensor(grad_norm) or torch.isfinite(grad_norm) - ): - metrics.log_speed("ups", 1.0, priority=100, round=2) - metrics.log_scalar("gnorm", grad_norm, priority=400, round=3) - if self.cfg.optimization.clip_norm > 0: - metrics.log_scalar( - "clip", - torch.where( - grad_norm > self.cfg.optimization.clip_norm, - grad_norm.new_tensor(100), - grad_norm.new_tensor(0), - ), - priority=500, - round=1, - ) - - with metrics.aggregate() as agg: - if logging_outputs is not None: - self.task.reduce_metrics(logging_outputs, self.get_criterion()) - del logging_outputs - - # extra warning for criterions that don't properly log a loss value - if "loss" not in agg: - if "loss" not in self._warn_once: - self._warn_once.add("loss") - logger.warning( - "Criterion.reduce_metrics did not log a 'loss' value, " - "which may break some functionality" - ) - metrics.log_scalar("loss", -1) - - # support legacy interface - if self.tpu: - logging_output = {} - else: - logging_output = agg.get_smoothed_values() - logging_output["sample_size"] = sample_size - for key_to_delete in ["ppl", "wps", "wpb", "bsz"]: - if key_to_delete in logging_output: - del logging_output[key_to_delete] - return logging_output - - def cumulative_training_time(self): - if self._cumulative_training_time is None: - # single GPU - return self._local_cumulative_training_time() - else: - return self._cumulative_training_time - - def _local_cumulative_training_time(self): - """Aggregate training time in seconds.""" - return time.time() - self._start_time + self._previous_training_time - - def _prepare_sample(self, sample, is_dummy=False): - #DS: untouched - if sample == "DUMMY": - raise Exception( - "Trying to use an uninitialized 'dummy' batch. This usually indicates " - "that the total number of batches is smaller than the number of " - "participating GPUs. Try reducing the batch size or using fewer GPUs." 
- )
-
- if sample is None or len(sample) == 0:
- assert (
- self._dummy_batch is not None and len(self._dummy_batch) > 0
- ), "Invalid dummy batch: {}".format(self._dummy_batch)
- sample, _ = self._prepare_sample(self._dummy_batch, is_dummy=True)
- return sample, True
-
- if self.cuda:
- if self.pipeline_model_parallel:
- if "target" in sample:
- sample["target"] = utils.move_to_cuda(
- sample["target"], device=self.last_device
- )
- else:
- sample = utils.move_to_cuda(sample)
- elif self.tpu and is_dummy:
- # the dummy batch may not be on the appropriate device
- sample = utils.move_to_cuda(sample, device=self.device)
-
- def apply_half(t):
- if t.dtype is torch.float32:
- return t.half()
- return t
-
- def apply_bfloat16(t):
- if t.dtype is torch.float32:
- return t.to(dtype=torch.bfloat16)
- return t
-
- if self.cfg.common.fp16:
- sample = utils.apply_to_sample(apply_half, sample)
-
- if self.cfg.common.bf16:
- sample = utils.apply_to_sample(apply_bfloat16, sample)
-
- if self._dummy_batch == "DUMMY":
- self._dummy_batch = sample
-
- return sample, False
-
- def lr_step(self, epoch, val_loss=None):
- """Adjust the learning rate at the end of the epoch."""
- self.lr_scheduler.step(epoch, val_loss)
- # prefer updating the LR based on the number of steps
- return self.lr_step_update()
-
- def get_lr(self):
- """Get the current learning rate."""
- return self.model.optimizer.get_lr()
-
- def get_num_updates(self):
- """Get the number of parameter updates."""
- return self._num_updates
-
- def lr_step_update(self):
- """Update the learning rate after each update."""
- new_lr = self.lr_scheduler.step_update(self.get_num_updates())
- if isinstance(new_lr, dict):
- for k, v in new_lr.items():
- metrics.log_scalar(f"lr_{k}", v, weight=0, priority=300)
- new_lr = new_lr.get("default", next(iter(new_lr.values())))
- else:
- metrics.log_scalar("lr", new_lr, weight=0, priority=300)
- return new_lr
-
- def begin_epoch(self, epoch):
- """Called at the beginning of each epoch."""
- logger.info("begin training epoch {}".format(epoch))
-
- self.lr_step_begin_epoch(epoch)
-
- # task specific setup per epoch
- self.task.begin_epoch(epoch, self.get_model())
-
- def lr_step_begin_epoch(self, epoch):
- """Adjust the learning rate at the beginning of the epoch."""
- self.lr_scheduler.step_begin_epoch(epoch)
- # prefer updating the LR based on the number of steps
- return self.lr_step_update()
-
- def get_model(self):
- """Get the (non-wrapped) model instance."""
- return self._model
-
- def get_criterion(self):
- """Get the (non-wrapped) criterion instance."""
- return self._criterion
-
- def _set_seed(self):
- # Set seed based on args.seed and the update number so that we get
- # reproducible results when resuming from checkpoints
- seed = self.cfg.common.seed + self.get_num_updates()
- utils.set_torch_seed(seed)
-
- def consolidate_optimizer(self):
- """ DeepSpeed doesn't require any optimizer consolidation. 
""" - return False - - @metrics.aggregate("valid") - def valid_step(self, sample, raise_oom=False): - """Do forward pass in evaluation mode.""" - - with torch.no_grad(): - self.model.eval() - self.criterion.eval() - - sample, is_dummy_batch = self._prepare_sample(sample) - - try: - _loss, sample_size, logging_output = self.task.valid_step( - sample, self.model, self.criterion - ) - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - if not raise_oom: - logger.warning( - "ran out of memory in validation step, retrying batch" - ) - for p in self.model.parameters(): - if p.grad is not None: - p.grad = None # free some memory - if self.cuda: - torch.cuda.empty_cache() - return self.valid_step(sample, raise_oom=True) - raise e - - logging_outputs = [logging_output] - if is_dummy_batch: - if torch.is_tensor(sample_size): - sample_size.zero_() - else: - sample_size *= 0.0 - - # gather logging outputs from all replicas - if self.data_parallel_world_size > 1: - logging_outputs, (sample_size,) = self._aggregate_logging_outputs( - logging_outputs, - sample_size, - ignore=is_dummy_batch, - ) - - # log validation stats - if self.tpu: - logging_outputs = self._xla_markstep_and_send_to_cpu(logging_outputs) - # don't reduce here, otherwise the metric is wrong - # logging_output = self._reduce_and_log_stats(logging_outputs, sample_size) - - return logging_outputs - - def begin_valid_epoch(self, epoch): - """Called at the beginning of each validation epoch.""" - - # task specific setup per validation epoch - self.task.begin_valid_epoch(epoch, self.get_model()) - - def get_valid_iterator( - self, - subset, - disable_iterator_cache=False, - ): - """Return an EpochBatchIterator over given validation subset for a given epoch.""" - batch_iterator = self.task.get_batch_iterator( - dataset=self.task.dataset(subset), - max_tokens=self.cfg.dataset.max_tokens_valid, - max_sentences=self.cfg.dataset.batch_size_valid, - max_positions=utils.resolve_max_positions( - self.task.max_positions(), - self.model.max_positions(), - ), - ignore_invalid_inputs=self.cfg.dataset.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple, - seed=self.cfg.common.seed, - num_shards=self.data_parallel_world_size, - shard_id=self.data_parallel_rank, - num_workers=self.cfg.dataset.num_workers, - # always pass a fixed "epoch" to keep validation data consistent - # across training epochs - epoch=1, - data_buffer_size=self.cfg.dataset.data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - ) - self.reset_dummy_batch(batch_iterator.first_batch) - return batch_iterator - - def load_checkpoint( - self, - filename, - reset_optimizer=False, - reset_lr_scheduler=False, - optimizer_overrides=None, - reset_meters=False, - ): - """ - Load all training state from a checkpoint file. - rank = 0 will load the checkpoint, and then broadcast it to all - other ranks. 
- """ - extra_state, self._optim_history, last_optim_state = None, [], None - - logger.info(f"Preparing to load checkpoint {filename}") - is_distributed = self.data_parallel_world_size > 1 - bexists = os.path.isdir(filename) #PathManager.isfile(filename) - if not bexists: - logger.info("No existing checkpoint found {}".format(filename)) - return None - - def load_model(src, dst): - if torch.distributed.get_rank() == 0: - print(self.cfg.model) - dst.load_state_dict(src, strict=False, model_cfg=self.cfg.model) - - load_path, client_states = self.model.load_checkpoint(load_dir=filename, load_optimizer_states=not reset_optimizer, model_f=load_model) - - print(f'[{torch.distributed.get_rank()}] ckpt client states={client_states}') - - assert not utils.has_parameters(self.get_criterion()), "criterion w. params not supported yet" - extra_state = client_states["extra_state"] - - if not reset_optimizer and not reset_lr_scheduler: - self.lr_scheduler.load_state_dict(client_states["lr_scheduler_state"]) - - self.set_num_updates(client_states["num_updates"]) - - self.scaler.loss_scale = client_states["loss_scale"] - - if extra_state is not None: - itr_state = extra_state["train_iterator"] - epoch = itr_state.get("epoch", 1) - - if "previous_training_time" in extra_state: - self._previous_training_time = extra_state["previous_training_time"] - self._start_time = time.time() - - self.lr_step(epoch) - - if itr_state.get("version", 1) >= 2 and itr_state["iterations_in_epoch"] == 0: - # reset meters at start of epoch - reset_meters = True - - if "metrics" in extra_state and not reset_meters: - metrics.load_state_dict(extra_state["metrics"]) - - # reset TimeMeters, since their start times don't make sense anymore - for meter in metrics.get_meters("default"): - if isinstance(meter, meters.TimeMeter): - meter.reset() - - logger.info( - "Loaded checkpoint {} (epoch {} @ {} updates)".format( - filename, epoch, self.get_num_updates() - ) - ) - return extra_state - - def get_train_iterator( - self, - epoch, - combine=True, - load_dataset=True, - data_selector=None, - shard_batch_itr=True, - disable_iterator_cache=False, - ): - """Return an EpochBatchIterator over the training set for a given epoch.""" - if load_dataset: - logger.info("loading train data for epoch {}".format(epoch)) - self.task.load_dataset( - self.cfg.dataset.train_subset, - epoch=epoch, - combine=combine, - data_selector=data_selector, - tpu=self.tpu, - ) - - seed = self.cfg.common.seed - if hasattr(self.cfg, 'infinibatch_dataloader'): - print("| If using infinibatch, do reset_dataloader!", flush=True) - assert self.cfg.reset_dataloader - seed += self.get_num_updates() - print("| Set seed {}={}+{}".format(seed, self.cfg.common.seed, self.get_num_updates()), flush=True) - - batch_iterator = self.task.get_batch_iterator( - dataset=self.task.dataset(self.cfg.dataset.train_subset), - max_tokens=self.cfg.dataset.max_tokens, - max_sentences=self.cfg.dataset.batch_size, - max_positions=utils.resolve_max_positions( - self.task.max_positions(), - self.model.max_positions(), - self.cfg.dataset.max_tokens, - ), - ignore_invalid_inputs=True, - required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple, - seed=seed, - num_shards=self.data_parallel_world_size if shard_batch_itr else 1, - shard_id=self.data_parallel_rank if shard_batch_itr else 0, - num_workers=self.cfg.dataset.num_workers, - epoch=epoch, - data_buffer_size=self.cfg.dataset.data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - ) - 
self.reset_dummy_batch(batch_iterator.first_batch) - return batch_iterator - - def reset_dummy_batch(self, batch): - self._dummy_batch = batch - - def zero_grad(self): - self.optimizer.zero_grad() - - @property - def model(self): - if self._wrapped_model is None: - self._wrapped_model = self._model - return self._wrapped_model - - @property - def criterion(self): - if self._wrapped_criterion is None: - self._wrapped_criterion = self._criterion - return self._wrapped_criterion - - @property - def data_parallel_world_size(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 1 - return distributed_utils.get_data_parallel_world_size() - - @property - def lr_scheduler(self): - if self._lr_scheduler is None: - self._build_optimizer() # this will initialize self._lr_scheduler - return self._lr_scheduler - - @property - def optimizer(self): - if self._optimizer is None: - self._build_optimizer() - return self._optimizer - - @property - def data_parallel_process_group(self): - return distributed_utils.get_data_parallel_group() - - @property - def data_parallel_rank(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 0 - return distributed_utils.get_data_parallel_rank() - - @property - def is_data_parallel_master(self): - # NOTE: this returns true for all model parallel replicas with data - # parallel rank 0 - return self.data_parallel_rank == 0 - - @property - def checkpoint_suffix(self) -> str: - """Suffix to add to the checkpoint file name.""" - # if self.cfg.distributed_training.ddp_backend == "fully_sharded": - # return self.cfg.checkpoint.checkpoint_suffix + "-shard{0}".format(self.data_parallel_rank) - # else: - return self.cfg.checkpoint.checkpoint_suffix or "" - - @property - def should_save_checkpoint_on_current_rank(self) -> bool: - """Indicates whether to save checkpoints on the current DDP rank.""" - if self.zero_enabled: - return True - else: - return self.is_data_parallel_master - - def save_checkpoint(self, filename, extra_state): - """Save all training state in a checkpoint file.""" - logger.info(f"Saving checkpoint to {filename}") - - state_dict = self.state_dict(exclude_model_opt=True) - state_dict["extra_state"].update(extra_state) - - self.model.save_checkpoint(save_dir=filename, client_state=state_dict) - - logger.info(f"Finished saving checkpoint to {filename}") - - def state_dict(self, exclude_model_opt=False): - state_dict = { - "args": None, # legacy - "cfg": ( - OmegaConf.to_container(self.cfg) - if OmegaConf.is_config(self.cfg) else self.cfg - ), - "model": None if exclude_model_opt else self.model.state_dict(), - "criterion": ( - self.criterion.state_dict() - if utils.has_parameters(self.criterion) else None - ), - "optimizer_history": (self._optim_history or []) - + [ - { - "criterion_name": self.get_criterion().__class__.__name__, - "optimizer_name": self.optimizer.__class__.__name__, - "lr_scheduler_state": self.lr_scheduler.state_dict(), - "num_updates": self.get_num_updates(), - } - ], - "lr_scheduler_state": self.lr_scheduler.state_dict(), - "num_updates": self.get_num_updates(), - "task_state": self.task.state_dict() if self.task is not None else {}, - "loss_scale": self.scaler.loss_scale, - "extra_state": { - "metrics": metrics.state_dict(), - "previous_training_time": self.cumulative_training_time(), - } - } - if exclude_model_opt and not self.cfg.checkpoint.no_save_optimizer_state: - state_dict["last_optimizer_state"] = self.optimizer.state_dict() - return state_dict - -def _catalog_shared_params(module, 
memo=None, prefix=""):
- if memo is None:
- first_call = True
- memo = {}
- else:
- first_call = False
- for name, param in module._parameters.items():
- param_prefix = prefix + ("." if prefix else "") + name
- if param not in memo:
- memo[param] = []
- memo[param].append(param_prefix)
- for name, m in module._modules.items():
- if m is None:
- continue
- submodule_prefix = prefix + ("." if prefix else "") + name
- _catalog_shared_params(m, memo, submodule_prefix)
- if first_call:
- return [x for x in memo.values() if len(x) > 1]
-
-
-def _get_module_by_path(module, path):
- path = path.split(".")
- for name in path:
- module = getattr(module, name)
- return module
-
-
-def _set_module_by_path(module, path, value):
- path = path.split(".")
- for name in path[:-1]:
- module = getattr(module, name)
- setattr(module, path[-1], value)
diff --git a/kosmos-g/fairseq/fairseq/file_chunker_utils.py b/kosmos-g/fairseq/fairseq/file_chunker_utils.py
deleted file mode 100644
index 443100c61..000000000
--- a/kosmos-g/fairseq/fairseq/file_chunker_utils.py
+++ /dev/null
@@ -1,84 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import os
-import typing as tp
-
-
-def _safe_readline(fd) -> str:
- pos = fd.tell()
- while True:
- try:
- return fd.readline()
- except UnicodeDecodeError:
- pos -= 1
- fd.seek(pos) # search where this character begins
-
-
-def find_offsets(filename: str, num_chunks: int) -> tp.List[int]:
- """
- Given a file and a number of chunks, find the offsets in the file
- to be able to chunk around full lines.
- """
- with open(filename, "r", encoding="utf-8") as f:
- size = os.fstat(f.fileno()).st_size
- chunk_size = size // num_chunks
- offsets = [0 for _ in range(num_chunks + 1)]
- for i in range(1, num_chunks):
- f.seek(chunk_size * i)
- _safe_readline(f)
- offsets[i] = f.tell()
- offsets[-1] = size
- return offsets
-
-
-class ChunkLineIterator:
- """
- Iterator to properly iterate over lines of a file chunk.
- """
-
- def __init__(self, fd, start_offset: int, end_offset: int):
- self._fd = fd
- self._start_offset = start_offset
- self._end_offset = end_offset
-
- def __iter__(self) -> tp.Iterable[str]:
- self._fd.seek(self._start_offset)
- # next(f) breaks f.tell(), hence readline() must be used
- line = _safe_readline(self._fd)
- while line:
- pos = self._fd.tell()
- # f.tell() does not always give the byte position in the file
- # sometimes it skips to a very large number
- # it is unlikely that through a normal read we go from
- # end bytes to end + 2**32 bytes (4 GB) and this makes it unlikely
- # that the procedure breaks by the nondeterministic behavior of
- # f.tell()
- if (
- self._end_offset > 0
- and pos > self._end_offset
- and pos < self._end_offset + 2 ** 32
- ):
- break
- yield line
- line = self._fd.readline()
-
-
-class Chunker:
- """
- Context manager to read a chunk of a file line by line. 
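A minimal usage sketch for these two helpers ("data.txt" is a placeholder path; `Chunker`'s context-manager methods follow below):

```python
from fairseq.file_chunker_utils import Chunker, find_offsets

# Split the file into two line-aligned byte ranges, then read each range.
offsets = find_offsets("data.txt", num_chunks=2)
for start, end in zip(offsets[:-1], offsets[1:]):
    with Chunker("data.txt", start, end) as lines:
        for line in lines:
            pass  # each chunk yields complete lines only
```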
- """ - - def __init__(self, path: str, start_offset: int, end_offset: int): - self.path = path - self.start_offset = start_offset - self.end_offset = end_offset - - def __enter__(self) -> ChunkLineIterator: - self.fd = open(self.path, "r", encoding="utf-8") - return ChunkLineIterator(self.fd, self.start_offset, self.end_offset) - - def __exit__(self, exc_type, exc_val, exc_tb) -> None: - self.fd.close() diff --git a/kosmos-g/fairseq/fairseq/file_io.py b/kosmos-g/fairseq/fairseq/file_io.py deleted file mode 100644 index 8eca70a06..000000000 --- a/kosmos-g/fairseq/fairseq/file_io.py +++ /dev/null @@ -1,196 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import shutil -from typing import List, Optional - - -logger = logging.getLogger(__file__) - - -try: - from iopath.common.file_io import g_pathmgr as IOPathManager - - try: - # [FB only - for now] AWS PathHandler for PathManager - from .fb_pathhandlers import S3PathHandler - - IOPathManager.register_handler(S3PathHandler()) - except KeyError: - logging.warning("S3PathHandler already registered.") - except ImportError: - logging.debug( - "S3PathHandler couldn't be imported. Either missing fb-only files, or boto3 module." - ) - -except ImportError: - IOPathManager = None - - -class PathManager: - """ - Wrapper for insulating OSS I/O (using Python builtin operations) from - iopath's PathManager abstraction (for transparently handling various - internal backends). - """ - - @staticmethod - def open( - path: str, - mode: str = "r", - buffering: int = -1, - encoding: Optional[str] = None, - errors: Optional[str] = None, - newline: Optional[str] = None, - ): - if IOPathManager: - return IOPathManager.open( - path=path, - mode=mode, - buffering=buffering, - encoding=encoding, - errors=errors, - newline=newline, - ) - return open( - path, - mode=mode, - buffering=buffering, - encoding=encoding, - errors=errors, - newline=newline, - ) - - @staticmethod - def copy(src_path: str, dst_path: str, overwrite: bool = False) -> bool: - if IOPathManager: - return IOPathManager.copy( - src_path=src_path, dst_path=dst_path, overwrite=overwrite - ) - return shutil.copyfile(src_path, dst_path) - - @staticmethod - def get_local_path(path: str, **kwargs) -> str: - if IOPathManager: - return IOPathManager.get_local_path(path, **kwargs) - return path - - @staticmethod - def exists(path: str) -> bool: - if IOPathManager: - return IOPathManager.exists(path) - return os.path.exists(path) - - @staticmethod - def isfile(path: str) -> bool: - if IOPathManager: - return IOPathManager.isfile(path) - return os.path.isfile(path) - - @staticmethod - def ls(path: str) -> List[str]: - if IOPathManager: - return IOPathManager.ls(path) - return os.listdir(path) - - @staticmethod - def mkdirs(path: str) -> None: - if IOPathManager: - return IOPathManager.mkdirs(path) - os.makedirs(path, exist_ok=True) - - @staticmethod - def rm(path: str) -> None: - if IOPathManager: - return IOPathManager.rm(path) - os.remove(path) - - @staticmethod - def chmod(path: str, mode: int) -> None: - if not PathManager.path_requires_pathmanager(path): - os.chmod(path, mode) - - @staticmethod - def register_handler(handler) -> None: - if IOPathManager: - return IOPathManager.register_handler(handler=handler) - - @staticmethod - def copy_from_local( - local_path: str, dst_path: str, overwrite: bool = False, 
**kwargs - ) -> None: - if IOPathManager: - return IOPathManager.copy_from_local( - local_path=local_path, dst_path=dst_path, overwrite=overwrite, **kwargs - ) - return shutil.copyfile(local_path, dst_path) - - @staticmethod - def path_requires_pathmanager(path: str) -> bool: - """Do we require PathManager to access given path?""" - if IOPathManager: - for p in IOPathManager._path_handlers.keys(): - if path.startswith(p): - return True - return False - - @staticmethod - def supports_rename(path: str) -> bool: - # PathManager doesn't yet support renames - return not PathManager.path_requires_pathmanager(path) - - @staticmethod - def rename(src: str, dst: str): - os.rename(src, dst) - - """ - ioPath async PathManager methods: - """ - - @staticmethod - def opena( - path: str, - mode: str = "r", - buffering: int = -1, - encoding: Optional[str] = None, - errors: Optional[str] = None, - newline: Optional[str] = None, - ): - """ - Return file descriptor with asynchronous write operations. - """ - global IOPathManager - if not IOPathManager: - logging.info("ioPath is initializing PathManager.") - try: - from iopath.common.file_io import PathManager - - IOPathManager = PathManager() - except Exception: - logging.exception("Failed to initialize ioPath PathManager object.") - return IOPathManager.opena( - path=path, - mode=mode, - buffering=buffering, - encoding=encoding, - errors=errors, - newline=newline, - ) - - @staticmethod - def async_close() -> bool: - """ - Wait for files to be written and clean up asynchronous PathManager. - NOTE: `PathManager.async_close()` must be called at the end of any - script that uses `PathManager.opena(...)`. - """ - global IOPathManager - if IOPathManager: - return IOPathManager.async_close() - return False diff --git a/kosmos-g/fairseq/fairseq/file_utils.py b/kosmos-g/fairseq/fairseq/file_utils.py deleted file mode 100644 index b99da2e8c..000000000 --- a/kosmos-g/fairseq/fairseq/file_utils.py +++ /dev/null @@ -1,370 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utilities for working with the local dataset cache. -This file is adapted from `AllenNLP <https://github.com/allenai/allennlp>`_. -and `huggingface <https://github.com/huggingface>`_. 
-""" - -import fnmatch -import json -import logging -import os -import shutil -import tarfile -import tempfile -from functools import partial, wraps -from hashlib import sha256 -from io import open - - -try: - from torch.hub import _get_torch_home - - torch_cache_home = _get_torch_home() -except ImportError: - torch_cache_home = os.path.expanduser( - os.getenv( - "TORCH_HOME", os.path.join(os.getenv("XDG_CACHE_HOME", "~/.cache"), "torch") - ) - ) -default_cache_path = os.path.join(torch_cache_home, "pytorch_fairseq") - -try: - from urllib.parse import urlparse -except ImportError: - from urlparse import urlparse - -try: - from pathlib import Path - - PYTORCH_FAIRSEQ_CACHE = Path(os.getenv("PYTORCH_FAIRSEQ_CACHE", default_cache_path)) -except (AttributeError, ImportError): - PYTORCH_FAIRSEQ_CACHE = os.getenv("PYTORCH_FAIRSEQ_CACHE", default_cache_path) - -CONFIG_NAME = "config.json" -WEIGHTS_NAME = "pytorch_model.bin" - -logger = logging.getLogger(__name__) # pylint: disable=invalid-name - - -def load_archive_file(archive_file): - # redirect to the cache, if necessary - try: - resolved_archive_file = cached_path(archive_file, cache_dir=None) - except EnvironmentError: - logger.info( - "Archive name '{}' was not found in archive name list. " - "We assumed '{}' was a path or URL but couldn't find any file " - "associated to this path or URL.".format( - archive_file, - archive_file, - ) - ) - return None - - if resolved_archive_file == archive_file: - logger.info("loading archive file {}".format(archive_file)) - else: - logger.info( - "loading archive file {} from cache at {}".format( - archive_file, resolved_archive_file - ) - ) - - # Extract archive to temp dir and replace .tar.bz2 if necessary - tempdir = None - if not os.path.isdir(resolved_archive_file): - tempdir = tempfile.mkdtemp() - logger.info( - "extracting archive file {} to temp dir {}".format( - resolved_archive_file, tempdir - ) - ) - ext = os.path.splitext(archive_file)[1][1:] - with tarfile.open(resolved_archive_file, "r:" + ext) as archive: - top_dir = os.path.commonprefix(archive.getnames()) - archive.extractall(tempdir) - os.remove(resolved_archive_file) - shutil.move(os.path.join(tempdir, top_dir), resolved_archive_file) - shutil.rmtree(tempdir) - - return resolved_archive_file - - -def url_to_filename(url, etag=None): - """ - Convert `url` into a hashed filename in a repeatable way. - If `etag` is specified, append its hash to the URL's, delimited - by a period. - """ - url_bytes = url.encode("utf-8") - url_hash = sha256(url_bytes) - filename = url_hash.hexdigest() - - if etag: - etag_bytes = etag.encode("utf-8") - etag_hash = sha256(etag_bytes) - filename += "." + etag_hash.hexdigest() - - return filename - - -def filename_to_url(filename, cache_dir=None): - """ - Return the url and etag (which may be ``None``) stored for `filename`. - Raise ``EnvironmentError`` if `filename` or its stored metadata do not exist. 
- """ - if cache_dir is None: - cache_dir = PYTORCH_FAIRSEQ_CACHE - if isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - cache_path = os.path.join(cache_dir, filename) - if not os.path.exists(cache_path): - raise EnvironmentError("file {} not found".format(cache_path)) - - meta_path = cache_path + ".json" - if not os.path.exists(meta_path): - raise EnvironmentError("file {} not found".format(meta_path)) - - with open(meta_path, encoding="utf-8") as meta_file: - metadata = json.load(meta_file) - url = metadata["url"] - etag = metadata["etag"] - - return url, etag - - -def cached_path_from_pm(url_or_filename): - """ - Tries to cache the specified URL using PathManager class. - Returns the cached path if success otherwise failure. - """ - try: - from fairseq.file_io import PathManager - - local_path = PathManager.get_local_path(url_or_filename) - return local_path - except Exception: - return None - - -def cached_path(url_or_filename, cache_dir=None): - """ - Given something that might be a URL (or might be a local path), - determine which. If it's a URL, download the file and cache it, and - return the path to the cached file. If it's already a local path, - make sure the file exists and then return the path. - """ - if cache_dir is None: - cache_dir = PYTORCH_FAIRSEQ_CACHE - if isinstance(url_or_filename, Path): - url_or_filename = str(url_or_filename) - if isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - parsed = urlparse(url_or_filename) - - if parsed.scheme in ("http", "https", "s3"): - # URL, so get it from the cache (downloading if necessary) - return get_from_cache(url_or_filename, cache_dir) - elif os.path.exists(url_or_filename): - # File, and it exists. - return url_or_filename - elif parsed.scheme == "": - # File, but it doesn't exist. - raise EnvironmentError("file {} not found".format(url_or_filename)) - else: - cached_path = cached_path_from_pm(url_or_filename) - if cached_path: - return cached_path - # Something unknown - raise ValueError( - "unable to parse {} as a URL or as a local path".format(url_or_filename) - ) - - -def split_s3_path(url): - """Split a full s3 path into the bucket name and path.""" - parsed = urlparse(url) - if not parsed.netloc or not parsed.path: - raise ValueError("bad s3 path {}".format(url)) - bucket_name = parsed.netloc - s3_path = parsed.path - # Remove '/' at beginning of path. - if s3_path.startswith("/"): - s3_path = s3_path[1:] - return bucket_name, s3_path - - -def s3_request(func): - """ - Wrapper function for s3 requests in order to create more helpful error - messages. 
- """ - - @wraps(func) - def wrapper(url, *args, **kwargs): - from botocore.exceptions import ClientError - - try: - return func(url, *args, **kwargs) - except ClientError as exc: - if int(exc.response["Error"]["Code"]) == 404: - raise EnvironmentError("file {} not found".format(url)) - else: - raise - - return wrapper - - -@s3_request -def s3_etag(url): - """Check ETag on S3 object.""" - import boto3 - - s3_resource = boto3.resource("s3") - bucket_name, s3_path = split_s3_path(url) - s3_object = s3_resource.Object(bucket_name, s3_path) - return s3_object.e_tag - - -@s3_request -def s3_get(url, temp_file): - """Pull a file directly from S3.""" - import boto3 - - s3_resource = boto3.resource("s3") - bucket_name, s3_path = split_s3_path(url) - s3_resource.Bucket(bucket_name).download_fileobj(s3_path, temp_file) - - -def request_wrap_timeout(func, url): - import requests - - for attempt, timeout in enumerate([10, 20, 40, 60, 60]): - try: - return func(timeout=timeout) - except requests.exceptions.Timeout as e: - logger.warning( - "Request for %s timed-out (attempt %d). Retrying with a timeout of %d secs", - url, - attempt, - timeout, - exc_info=e, - ) - continue - raise RuntimeError(f"Unable to fetch file {url}") - - -def http_get(url, temp_file): - import requests - from tqdm import tqdm - - req = request_wrap_timeout(partial(requests.get, url, stream=True), url) - content_length = req.headers.get("Content-Length") - total = int(content_length) if content_length is not None else None - progress = tqdm(unit="B", total=total) - for chunk in req.iter_content(chunk_size=1024): - if chunk: # filter out keep-alive new chunks - progress.update(len(chunk)) - temp_file.write(chunk) - progress.close() - - -def get_from_cache(url, cache_dir=None): - """ - Given a URL, look for the corresponding dataset in the local cache. - If it's not there, download it. Then return the path to the cached file. - """ - if cache_dir is None: - cache_dir = PYTORCH_FAIRSEQ_CACHE - if isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - if not os.path.exists(cache_dir): - os.makedirs(cache_dir) - - # Get eTag to add to filename, if it exists. - if url.startswith("s3://"): - etag = s3_etag(url) - else: - try: - import requests - - response = request_wrap_timeout( - partial(requests.head, url, allow_redirects=True), url - ) - if response.status_code != 200: - etag = None - else: - etag = response.headers.get("ETag") - except RuntimeError: - etag = None - - filename = url_to_filename(url, etag) - - # get cache path to put the file - cache_path = os.path.join(cache_dir, filename) - - # If we don't have a connection (etag is None) and can't identify the file - # try to get the last downloaded one - if not os.path.exists(cache_path) and etag is None: - matching_files = fnmatch.filter(os.listdir(cache_dir), filename + ".*") - matching_files = list(filter(lambda s: not s.endswith(".json"), matching_files)) - if matching_files: - cache_path = os.path.join(cache_dir, matching_files[-1]) - - if not os.path.exists(cache_path): - # Download to temporary file, then copy to cache dir once finished. - # Otherwise you get corrupt cache entries if the download gets interrupted. 
- with tempfile.NamedTemporaryFile() as temp_file: - logger.info("%s not found in cache, downloading to %s", url, temp_file.name) - - # GET file object - if url.startswith("s3://"): - s3_get(url, temp_file) - else: - http_get(url, temp_file) - - # we are copying the file before closing it, so flush to avoid truncation - temp_file.flush() - # shutil.copyfileobj() starts at the current position, so go to the start - temp_file.seek(0) - - logger.info("copying %s to cache at %s", temp_file.name, cache_path) - with open(cache_path, "wb") as cache_file: - shutil.copyfileobj(temp_file, cache_file) - - logger.info("creating metadata file for %s", cache_path) - meta = {"url": url, "etag": etag} - meta_path = cache_path + ".json" - with open(meta_path, "w") as meta_file: - output_string = json.dumps(meta) - meta_file.write(output_string) - - logger.info("removing temp file %s", temp_file.name) - - return cache_path - - -def read_set_from_file(filename): - """ - Extract a de-duped collection (set) of text from a file. - Expected file format is one item per line. - """ - collection = set() - with open(filename, "r", encoding="utf-8") as file_: - for line in file_: - collection.add(line.rstrip()) - return collection - - -def get_file_extension(path, dot=True, lower=True): - ext = os.path.splitext(path)[1] - ext = ext if dot else ext[1:] - return ext.lower() if lower else ext diff --git a/kosmos-g/fairseq/fairseq/hub_utils.py b/kosmos-g/fairseq/fairseq/hub_utils.py deleted file mode 100644 index b6fa2cb97..000000000 --- a/kosmos-g/fairseq/fairseq/hub_utils.py +++ /dev/null @@ -1,314 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import copy -import logging -import os -from typing import Any, Dict, Iterator, List - -import torch -from omegaconf import open_dict -from torch import nn - -from fairseq import utils -from fairseq.data import encoders - -logger = logging.getLogger(__name__) - - -def from_pretrained( - model_name_or_path, - checkpoint_file="model.pt", - data_name_or_path=".", - archive_map=None, - **kwargs -): - from fairseq import checkpoint_utils, file_utils - - if archive_map is not None: - if model_name_or_path in archive_map: - model_name_or_path = archive_map[model_name_or_path] - if data_name_or_path is not None and data_name_or_path in archive_map: - data_name_or_path = archive_map[data_name_or_path] - - # allow archive_map to set default arg_overrides (e.g., tokenizer, bpe) - # for each model - if isinstance(model_name_or_path, dict): - for k, v in model_name_or_path.items(): - if k == "checkpoint_file": - checkpoint_file = v - elif ( - k != "path" - # only set kwargs that don't already have overrides - and k not in kwargs - ): - kwargs[k] = v - model_name_or_path = model_name_or_path["path"] - - model_path = file_utils.load_archive_file(model_name_or_path) - - # convenience hack for loading data and BPE codes from model archive - if data_name_or_path.startswith("."): - kwargs["data"] = os.path.abspath(os.path.join(model_path, data_name_or_path)) - else: - kwargs["data"] = file_utils.load_archive_file(data_name_or_path) - for file, arg in { - "code": "bpe_codes", - "bpecodes": "bpe_codes", - "sentencepiece.bpe.model": "sentencepiece_model", - "merges.txt": "bpe_merges", - "vocab.json": "bpe_vocab", - }.items(): - path = os.path.join(model_path, file) - if os.path.exists(path): - kwargs[arg] = path - - if "user_dir" in kwargs: - utils.import_user_module(argparse.Namespace(user_dir=kwargs["user_dir"])) - - models, args, task = checkpoint_utils.load_model_ensemble_and_task( - [os.path.join(model_path, cpt) for cpt in checkpoint_file.split(os.pathsep)], - arg_overrides=kwargs, - ) - - return { - "args": args, - "task": task, - "models": models, - } - - -class GeneratorHubInterface(nn.Module): - """ - PyTorch Hub interface for generating sequences from a pre-trained - translation or language model. 
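In practice this interface is usually obtained through `torch.hub` rather than constructed directly; a typical session looks like the following (the WMT19 model name is one of fairseq's published hub entries, shown here purely for illustration):

```python
import torch

# Resolves the archive, BPE codes, and vocab, and returns a GeneratorHubInterface.
en2de = torch.hub.load("pytorch/fairseq", "transformer.wmt19.en-de.single_model",
                       tokenizer="moses", bpe="fastbpe")
print(en2de.translate("Hello world!", beam=5))  # e.g. "Hallo Welt!"
```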
- """ - - def __init__(self, cfg, task, models): - super().__init__() - self.cfg = cfg - self.task = task - self.models = nn.ModuleList(models) - self.src_dict = task.source_dictionary - self.tgt_dict = task.target_dictionary - - # optimize model for generation - for model in self.models: - model.prepare_for_inference_(cfg) - - # Load alignment dictionary for unknown word replacement - # (None if no unknown word replacement, empty if no path to align dictionary) - self.align_dict = utils.load_align_dict(cfg.generation.replace_unk) - - self.tokenizer = encoders.build_tokenizer(cfg.tokenizer) - self.bpe = encoders.build_bpe(cfg.bpe) - - self.max_positions = utils.resolve_max_positions( - self.task.max_positions(), *[model.max_positions() for model in models] - ) - - # this is useful for determining the device - self.register_buffer("_float_tensor", torch.tensor([0], dtype=torch.float)) - - @property - def device(self): - return self._float_tensor.device - - def translate( - self, sentences: List[str], beam: int = 5, verbose: bool = False, **kwargs - ) -> List[str]: - return self.sample(sentences, beam, verbose, **kwargs) - - def sample( - self, sentences: List[str], beam: int = 1, verbose: bool = False, **kwargs - ) -> List[str]: - if isinstance(sentences, str): - return self.sample([sentences], beam=beam, verbose=verbose, **kwargs)[0] - tokenized_sentences = [self.encode(sentence) for sentence in sentences] - batched_hypos = self.generate(tokenized_sentences, beam, verbose, **kwargs) - return [self.decode(hypos[0]["tokens"]) for hypos in batched_hypos] - - def score( - self, sentences: List[str], replace_newline_with_eos: bool = False, **kwargs - ): - if isinstance(sentences, str): - return self.score( - [sentences], replace_newline_with_eos=replace_newline_with_eos, **kwargs - )[0] - - def encode(sentence): - if replace_newline_with_eos: - return torch.cat([self.encode(line) for line in sentence.splitlines()]) - else: - return self.encode(sentence) - - # NOTE: this doesn't support translation tasks currently - tokenized_sentences = [encode(sentence) for sentence in sentences] - return [ - hypos[0] - for hypos in self.generate( - tokenized_sentences, score_reference=True, **kwargs - ) - ] - - def generate( - self, - tokenized_sentences: List[torch.LongTensor], - beam: int = 5, - verbose: bool = False, - skip_invalid_size_inputs=False, - inference_step_args=None, - prefix_allowed_tokens_fn=None, - **kwargs - ) -> List[List[Dict[str, torch.Tensor]]]: - if torch.is_tensor(tokenized_sentences) and tokenized_sentences.dim() == 1: - return self.generate( - tokenized_sentences.unsqueeze(0), beam=beam, verbose=verbose, **kwargs - )[0] - - # build generator using current args as well as any kwargs - gen_args = copy.deepcopy(self.cfg.generation) - with open_dict(gen_args): - gen_args.beam = beam - for k, v in kwargs.items(): - setattr(gen_args, k, v) - generator = self.task.build_generator( - self.models, - gen_args, - prefix_allowed_tokens_fn=prefix_allowed_tokens_fn, - ) - - inference_step_args = inference_step_args or {} - results = [] - for batch in self._build_batches(tokenized_sentences, skip_invalid_size_inputs): - batch = utils.apply_to_sample(lambda t: t.to(self.device), batch) - translations = self.task.inference_step( - generator, self.models, batch, **inference_step_args - ) - for id, hypos in zip(batch["id"].tolist(), translations): - results.append((id, hypos)) - - # sort output to match input order - outputs = [hypos for _, hypos in sorted(results, key=lambda x: x[0])] - - if verbose: 
- - def getarg(name, default): - return getattr(gen_args, name, getattr(self.cfg, name, default)) - - for source_tokens, target_hypotheses in zip(tokenized_sentences, outputs): - src_str_with_unk = self.string(source_tokens) - logger.info("S\t{}".format(src_str_with_unk)) - for hypo in target_hypotheses: - hypo_str = self.decode(hypo["tokens"]) - logger.info("H\t{}\t{}".format(hypo["score"], hypo_str)) - logger.info( - "P\t{}".format( - " ".join( - map( - lambda x: "{:.4f}".format(x), - hypo["positional_scores"].tolist(), - ) - ) - ) - ) - if hypo["alignment"] is not None and getarg( - "print_alignment", False - ): - logger.info( - "A\t{}".format( - " ".join( - [ - "{}-{}".format(src_idx, tgt_idx) - for src_idx, tgt_idx in hypo["alignment"] - ] - ) - ) - ) - return outputs - - def encode(self, sentence: str) -> torch.LongTensor: - sentence = self.tokenize(sentence) - sentence = self.apply_bpe(sentence) - return self.binarize(sentence) - - def decode(self, tokens: torch.LongTensor) -> str: - sentence = self.string(tokens) - sentence = self.remove_bpe(sentence) - return self.detokenize(sentence) - - def tokenize(self, sentence: str) -> str: - if self.tokenizer is not None: - sentence = self.tokenizer.encode(sentence) - return sentence - - def detokenize(self, sentence: str) -> str: - if self.tokenizer is not None: - sentence = self.tokenizer.decode(sentence) - return sentence - - def apply_bpe(self, sentence: str) -> str: - if self.bpe is not None: - sentence = self.bpe.encode(sentence) - return sentence - - def remove_bpe(self, sentence: str) -> str: - if self.bpe is not None: - sentence = self.bpe.decode(sentence) - return sentence - - def binarize(self, sentence: str) -> torch.LongTensor: - return self.src_dict.encode_line(sentence, add_if_not_exist=False).long() - - def string(self, tokens: torch.LongTensor) -> str: - return self.tgt_dict.string(tokens) - - def _build_batches( - self, tokens: List[List[int]], skip_invalid_size_inputs: bool - ) -> Iterator[Dict[str, Any]]: - lengths = torch.LongTensor([t.numel() for t in tokens]) - batch_iterator = self.task.get_batch_iterator( - dataset=self.task.build_dataset_for_inference(tokens, lengths), - max_tokens=self.cfg.dataset.max_tokens, - max_sentences=self.cfg.dataset.batch_size, - max_positions=self.max_positions, - ignore_invalid_inputs=skip_invalid_size_inputs, - disable_iterator_cache=True, - ).next_epoch_itr(shuffle=False) - return batch_iterator - - -class BPEHubInterface(object): - """PyTorch Hub interface for Byte-Pair Encoding (BPE).""" - - def __init__(self, bpe, **kwargs): - super().__init__() - args = argparse.Namespace(bpe=bpe, **kwargs) - self.bpe = encoders.build_bpe(args) - assert self.bpe is not None - - def encode(self, sentence: str) -> str: - return self.bpe.encode(sentence) - - def decode(self, sentence: str) -> str: - return self.bpe.decode(sentence) - - -class TokenizerHubInterface(object): - """PyTorch Hub interface for tokenization.""" - - def __init__(self, tokenizer, **kwargs): - super().__init__() - args = argparse.Namespace(tokenizer=tokenizer, **kwargs) - self.tokenizer = encoders.build_tokenizer(args) - assert self.tokenizer is not None - - def encode(self, sentence: str) -> str: - return self.tokenizer.encode(sentence) - - def decode(self, sentence: str) -> str: - return self.tokenizer.decode(sentence) diff --git a/kosmos-g/fairseq/fairseq/incremental_decoding_utils.py b/kosmos-g/fairseq/fairseq/incremental_decoding_utils.py deleted file mode 100644 index b26e6cd01..000000000 --- 
a/kosmos-g/fairseq/fairseq/incremental_decoding_utils.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import uuid -from typing import Dict, Optional - -from torch import Tensor - - -class FairseqIncrementalState(object): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.init_incremental_state() - - def init_incremental_state(self): - self._incremental_state_id = str(uuid.uuid4()) - - def _get_full_incremental_state_key(self, key: str) -> str: - return "{}.{}".format(self._incremental_state_id, key) - - def get_incremental_state( - self, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]], - key: str, - ) -> Optional[Dict[str, Optional[Tensor]]]: - """Helper for getting incremental state for an nn.Module.""" - full_key = self._get_full_incremental_state_key(key) - if incremental_state is None or full_key not in incremental_state: - return None - return incremental_state[full_key] - - def set_incremental_state( - self, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]], - key: str, - value: Dict[str, Optional[Tensor]], - ) -> Optional[Dict[str, Dict[str, Optional[Tensor]]]]: - """Helper for setting incremental state for an nn.Module.""" - if incremental_state is not None: - full_key = self._get_full_incremental_state_key(key) - incremental_state[full_key] = value - return incremental_state - - -def with_incremental_state(cls): - cls.__bases__ = (FairseqIncrementalState,) + tuple( - b for b in cls.__bases__ if b != FairseqIncrementalState - ) - return cls diff --git a/kosmos-g/fairseq/fairseq/iterative_refinement_generator.py b/kosmos-g/fairseq/fairseq/iterative_refinement_generator.py deleted file mode 100644 index 4fb0946f4..000000000 --- a/kosmos-g/fairseq/fairseq/iterative_refinement_generator.py +++ /dev/null @@ -1,359 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from collections import namedtuple - -import numpy as np -import torch -from fairseq import utils - - -DecoderOut = namedtuple( - "IterativeRefinementDecoderOut", - ["output_tokens", "output_scores", "attn", "step", "max_step", "history"], -) - - -class IterativeRefinementGenerator(object): - def __init__( - self, - tgt_dict, - models=None, - eos_penalty=0.0, - max_iter=10, - max_ratio=2, - beam_size=1, - decoding_format=None, - retain_dropout=False, - adaptive=True, - retain_history=False, - reranking=False, - ): - """ - Generates translations based on iterative refinement. 
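Before the generator itself, the `with_incremental_state` decorator above deserves a concrete example: every decorated instance prefixes its cache keys with a per-instance UUID, so multiple modules can share one `incremental_state` dict without collisions. A small sketch (the toy class is made up):

```python
import torch
from fairseq.incremental_decoding_utils import with_incremental_state

@with_incremental_state
class ToyAttention(torch.nn.Module):
    def step(self, incremental_state, key):
        self.set_incremental_state(incremental_state, "prev_key", {"k": key})
        return self.get_incremental_state(incremental_state, "prev_key")["k"]

state = {}
a, b = ToyAttention(), ToyAttention()
a.step(state, torch.zeros(1))
b.step(state, torch.ones(1))
print(len(state))  # 2 -- each instance's "prev_key" lives under its own UUID
```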
- - Args: - tgt_dict: target dictionary - eos_penalty: if > 0.0, it penalizes early stopping in decoding - max_iter: maximum number of refinement iterations - max_ratio: generate sequences of maximum length ax, where x is the source length - decoding_format: decoding mode in {'unigram', 'ensemble', 'vote', 'dp', 'bs'} - retain_dropout: retain dropout during inference - adaptive: decode with early stopping - """ - self.bos = tgt_dict.bos() - self.pad = tgt_dict.pad() - self.unk = tgt_dict.unk() - self.eos = tgt_dict.eos() - self.vocab_size = len(tgt_dict) - self.eos_penalty = eos_penalty - self.max_iter = max_iter - self.max_ratio = max_ratio - self.beam_size = beam_size - self.reranking = reranking - self.decoding_format = decoding_format - self.retain_dropout = retain_dropout - self.retain_history = retain_history - self.adaptive = adaptive - self.models = models - - def generate_batched_itr( - self, - data_itr, - maxlen_a=None, - maxlen_b=None, - cuda=False, - timer=None, - prefix_size=0, - ): - """Iterate over a batched dataset and yield individual translations. - - Args: - maxlen_a/b: generate sequences of maximum length ax + b, - where x is the source sentence length. - cuda: use GPU for generation - timer: StopwatchMeter for timing generations. - """ - - for sample in data_itr: - if "net_input" not in sample: - continue - if timer is not None: - timer.start() - with torch.no_grad(): - hypos = self.generate( - self.models, - sample, - prefix_tokens=sample["target"][:, :prefix_size] - if prefix_size > 0 - else None, - ) - if timer is not None: - timer.stop(sample["ntokens"]) - for i, id in enumerate(sample["id"]): - # remove padding - src = utils.strip_pad(sample["net_input"]["src_tokens"][i, :], self.pad) - ref = utils.strip_pad(sample["target"][i, :], self.pad) - yield id, src, ref, hypos[i] - - @torch.no_grad() - def generate(self, models, sample, prefix_tokens=None, constraints=None): - if constraints is not None: - raise NotImplementedError( - "Constrained decoding with the IterativeRefinementGenerator is not supported" - ) - - # TODO: iterative refinement generator does not support ensemble for now. - if not self.retain_dropout: - for model in models: - model.eval() - - model, reranker = models[0], None - if self.reranking: - assert len(models) > 1, "Assuming the last checkpoint is the reranker" - assert ( - self.beam_size > 1 - ), "Reranking requires multiple translations for each example" - - reranker = models[-1] - models = models[:-1] - - if len(models) > 1 and hasattr(model, "enable_ensemble"): - assert model.allow_ensemble, "{} does not support ensembling".format( - model.__class__.__name__ - ) - model.enable_ensemble(models) - - # TODO: better encoder inputs?
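The generation loop that follows is dense; stripped of batching, beams, and reranking, the adaptive refinement logic reduces to roughly this pure-Python sketch (not fairseq API):

```python
def refine(decode_step, tokens, max_iter=10, adaptive=True):
    """Repeat decoding until the output stops changing (a loop) or the budget runs out."""
    for step in range(max_iter + 1):
        new_tokens = decode_step(tokens, step)
        if (adaptive and new_tokens == tokens) or step == max_iter:
            return new_tokens, step
        tokens = new_tokens

# Toy decode_step that converges after one pass of uppercasing:
out, steps = refine(lambda t, s: [w.upper() for w in t], ["a", "b"])
print(out, steps)  # ['A', 'B'] 1
```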
- src_tokens = sample["net_input"]["src_tokens"] - src_lengths = sample["net_input"]["src_lengths"] - bsz, src_len = src_tokens.size() - - # initialize - encoder_out = model.forward_encoder([src_tokens, src_lengths]) - prev_decoder_out = model.initialize_output_tokens(encoder_out, src_tokens) - - if self.beam_size > 1: - assert ( - model.allow_length_beam - ), "{} does not support decoding with length beam.".format( - model.__class__.__name__ - ) - - # regenerate data based on length-beam - length_beam_order = ( - utils.new_arange(src_tokens, self.beam_size, bsz).t().reshape(-1) - ) - encoder_out = model.encoder.reorder_encoder_out( - encoder_out, length_beam_order - ) - prev_decoder_out = model.regenerate_length_beam( - prev_decoder_out, self.beam_size - ) - bsz = bsz * self.beam_size - - sent_idxs = torch.arange(bsz) - prev_output_tokens = prev_decoder_out.output_tokens.clone() - - if self.retain_history: - prev_decoder_out = prev_decoder_out._replace(history=[prev_output_tokens]) - - finalized = [[] for _ in range(bsz)] - - def is_a_loop(x, y, s, a): - b, l_x, l_y = x.size(0), x.size(1), y.size(1) - if l_x > l_y: - y = torch.cat([y, x.new_zeros(b, l_x - l_y).fill_(self.pad)], 1) - s = torch.cat([s, s.new_zeros(b, l_x - l_y)], 1) - if a is not None: - a = torch.cat([a, a.new_zeros(b, l_x - l_y, a.size(2))], 1) - elif l_x < l_y: - x = torch.cat([x, y.new_zeros(b, l_y - l_x).fill_(self.pad)], 1) - return (x == y).all(1), y, s, a - - def finalized_hypos(step, prev_out_token, prev_out_score, prev_out_attn): - cutoff = prev_out_token.ne(self.pad) - tokens = prev_out_token[cutoff] - if prev_out_score is None: - scores, score = None, None - else: - scores = prev_out_score[cutoff] - score = scores.mean() - - if prev_out_attn is None: - hypo_attn, alignment = None, None - else: - hypo_attn = prev_out_attn[cutoff] - alignment = hypo_attn.max(dim=1)[1] - return { - "steps": step, - "tokens": tokens, - "positional_scores": scores, - "score": score, - "hypo_attn": hypo_attn, - "alignment": alignment, - } - - for step in range(self.max_iter + 1): - - decoder_options = { - "eos_penalty": self.eos_penalty, - "max_ratio": self.max_ratio, - "decoding_format": self.decoding_format, - } - prev_decoder_out = prev_decoder_out._replace( - step=step, - max_step=self.max_iter + 1, - ) - - decoder_out = model.forward_decoder( - prev_decoder_out, encoder_out, **decoder_options - ) - - if self.adaptive: - # terminate if there is a loop - terminated, out_tokens, out_scores, out_attn = is_a_loop( - prev_output_tokens, - decoder_out.output_tokens, - decoder_out.output_scores, - decoder_out.attn, - ) - decoder_out = decoder_out._replace( - output_tokens=out_tokens, - output_scores=out_scores, - attn=out_attn, - ) - - else: - terminated = decoder_out.output_tokens.new_zeros( - decoder_out.output_tokens.size(0) - ).bool() - - if step == self.max_iter: # reach last iteration, terminate - terminated.fill_(1) - - # collect finalized sentences - finalized_idxs = sent_idxs[terminated] - finalized_tokens = decoder_out.output_tokens[terminated] - finalized_scores = decoder_out.output_scores[terminated] - finalized_attn = ( - None - if (decoder_out.attn is None or decoder_out.attn.size(0) == 0) - else decoder_out.attn[terminated] - ) - - if self.retain_history: - finalized_history_tokens = [h[terminated] for h in decoder_out.history] - - for i in range(finalized_idxs.size(0)): - finalized[finalized_idxs[i]] = [ - finalized_hypos( - step, - finalized_tokens[i], - finalized_scores[i], - None if finalized_attn is None else 
finalized_attn[i], - ) - ] - - if self.retain_history: - finalized[finalized_idxs[i]][0]["history"] = [] - for j in range(len(finalized_history_tokens)): - finalized[finalized_idxs[i]][0]["history"].append( - finalized_hypos( - step, finalized_history_tokens[j][i], None, None - ) - ) - - # check if all terminated - if terminated.sum() == terminated.size(0): - break - - # for next step - not_terminated = ~terminated - prev_decoder_out = decoder_out._replace( - output_tokens=decoder_out.output_tokens[not_terminated], - output_scores=decoder_out.output_scores[not_terminated], - attn=decoder_out.attn[not_terminated] - if (decoder_out.attn is not None and decoder_out.attn.size(0) > 0) - else None, - history=[h[not_terminated] for h in decoder_out.history] - if decoder_out.history is not None - else None, - ) - encoder_out = model.encoder.reorder_encoder_out( - encoder_out, not_terminated.nonzero(as_tuple=False).squeeze() - ) - sent_idxs = sent_idxs[not_terminated] - prev_output_tokens = prev_decoder_out.output_tokens.clone() - - if self.beam_size > 1: - if reranker is not None: - finalized = self.rerank( - reranker, finalized, [src_tokens, src_lengths], self.beam_size - ) - - # aggregate information from length beam - finalized = [ - finalized[ - np.argmax( - [ - finalized[self.beam_size * i + j][0]["score"] - for j in range(self.beam_size) - ] - ) - + self.beam_size * i - ] - for i in range(len(finalized) // self.beam_size) - ] - - return finalized - - def rerank(self, reranker, finalized, encoder_input, beam_size): - def rebuild_batch(finalized): - finalized_tokens = [f[0]["tokens"] for f in finalized] - finalized_maxlen = max(f.size(0) for f in finalized_tokens) - final_output_tokens = ( - finalized_tokens[0] - .new_zeros(len(finalized_tokens), finalized_maxlen) - .fill_(self.pad) - ) - for i, f in enumerate(finalized_tokens): - final_output_tokens[i, : f.size(0)] = f - return final_output_tokens - - final_output_tokens = rebuild_batch(finalized) - final_output_tokens[ - :, 0 - ] = self.eos # autoregressive model assumes starting with EOS - - reranker_encoder_out = reranker.encoder(*encoder_input) - length_beam_order = ( - utils.new_arange( - final_output_tokens, beam_size, reranker_encoder_out.encoder_out.size(1) - ) - .t() - .reshape(-1) - ) - reranker_encoder_out = reranker.encoder.reorder_encoder_out( - reranker_encoder_out, length_beam_order - ) - reranking_scores = reranker.get_normalized_probs( - reranker.decoder(final_output_tokens[:, :-1], reranker_encoder_out), - True, - None, - ) - reranking_scores = reranking_scores.gather(2, final_output_tokens[:, 1:, None]) - reranking_masks = final_output_tokens[:, 1:].ne(self.pad) - reranking_scores = ( - reranking_scores[:, :, 0].masked_fill_(~reranking_masks, 0).sum(1) - ) - reranking_scores = reranking_scores / reranking_masks.sum(1).type_as( - reranking_scores - ) - - for i in range(len(finalized)): - finalized[i][0]["score"] = reranking_scores[i] - - return finalized diff --git a/kosmos-g/fairseq/fairseq/logging/__init__.py b/kosmos-g/fairseq/fairseq/logging/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/kosmos-g/fairseq/fairseq/logging/meters.py b/kosmos-g/fairseq/fairseq/logging/meters.py deleted file mode 100644 index d5f7c775d..000000000 --- a/kosmos-g/fairseq/fairseq/logging/meters.py +++ /dev/null @@ -1,321 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import bisect -import time -from collections import OrderedDict -from typing import Dict, Optional - -try: - import torch - - def type_as(a, b): - if torch.is_tensor(a) and torch.is_tensor(b): - return a.to(b) - else: - return a - -except ImportError: - torch = None - - def type_as(a, b): - return a - - -try: - import numpy as np -except ImportError: - np = None - - -class Meter(object): - """Base class for Meters.""" - - def __init__(self): - pass - - def state_dict(self): - return {} - - def load_state_dict(self, state_dict): - pass - - def reset(self): - raise NotImplementedError - - @property - def smoothed_value(self) -> float: - """Smoothed value used for logging.""" - raise NotImplementedError - - -def safe_round(number, ndigits): - if hasattr(number, "__round__"): - return round(number, ndigits) - elif torch is not None and torch.is_tensor(number) and number.numel() == 1: - return safe_round(number.item(), ndigits) - elif np is not None and np.ndim(number) == 0 and hasattr(number, "item"): - return safe_round(number.item(), ndigits) - else: - return number - - -class AverageMeter(Meter): - """Computes and stores the average and current value""" - - def __init__(self, round: Optional[int] = None): - self.round = round - self.reset() - - def reset(self): - self.val = None # most recent update - self.sum = 0 # sum from all updates - self.count = 0 # total n from all updates - - def update(self, val, n=1): - if val is not None: - self.val = val - if n > 0: - self.sum = type_as(self.sum, val) + (val * n) - self.count = type_as(self.count, n) + n - - def state_dict(self): - return { - "val": self.val, - "sum": self.sum, - "count": self.count, - "round": self.round, - } - - def load_state_dict(self, state_dict): - self.val = state_dict["val"] - self.sum = state_dict["sum"] - self.count = state_dict["count"] - self.round = state_dict.get("round", None) - - @property - def avg(self): - return self.sum / self.count if self.count > 0 else self.val - - @property - def smoothed_value(self) -> float: - val = self.avg - if self.round is not None and val is not None: - val = safe_round(val, self.round) - return val - - -class SumMeter(Meter): - """Computes and stores the sum""" - - def __init__(self, round: Optional[int] = None): - self.round = round - self.reset() - - def reset(self): - self.sum = 0 # sum from all updates - - def update(self, val): - if val is not None: - self.sum = type_as(self.sum, val) + val - - def state_dict(self): - return { - "sum": self.sum, - "round": self.round, - } - - def load_state_dict(self, state_dict): - self.sum = state_dict["sum"] - self.round = state_dict.get("round", None) - - @property - def smoothed_value(self) -> float: - val = self.sum - if self.round is not None and val is not None: - val = safe_round(val, self.round) - return val - - -class TimeMeter(Meter): - """Computes the average occurrence of some event per second""" - - def __init__( - self, - init: int = 0, - n: int = 0, - round: Optional[int] = None, - ): - self.round = round - self.reset(init, n) - - def reset(self, init=0, n=0): - self.init = init - self.start = time.perf_counter() - self.n = n - self.i = 0 - - def update(self, val=1): - self.n = type_as(self.n, val) + val - self.i += 1 - - def state_dict(self): - return { - "init": self.elapsed_time, - "n": self.n, - "round": self.round, - } - - def load_state_dict(self, state_dict): - if "start" in state_dict: - # backwards compatibility for old state_dicts - self.reset(init=state_dict["init"]) - else: - self.reset(init=state_dict["init"], 
n=state_dict["n"]) - self.round = state_dict.get("round", None) - - @property - def avg(self): - return self.n / self.elapsed_time - - @property - def elapsed_time(self): - return self.init + (time.perf_counter() - self.start) - - @property - def smoothed_value(self) -> float: - val = self.avg - if self.round is not None and val is not None: - val = safe_round(val, self.round) - return val - - -class StopwatchMeter(Meter): - """Computes the sum/avg duration of some event in seconds""" - - def __init__(self, round: Optional[int] = None): - self.round = round - self.sum = 0 - self.n = 0 - self.start_time = None - - def start(self): - self.start_time = time.perf_counter() - - def stop(self, n=1, prehook=None): - if self.start_time is not None: - if prehook is not None: - prehook() - delta = time.perf_counter() - self.start_time - self.sum = self.sum + delta - self.n = type_as(self.n, n) + n - - def reset(self): - self.sum = 0 # cumulative time during which stopwatch was active - self.n = 0 # total n across all start/stop - self.start() - - def state_dict(self): - return { - "sum": self.sum, - "n": self.n, - "round": self.round, - } - - def load_state_dict(self, state_dict): - self.sum = state_dict["sum"] - self.n = state_dict["n"] - self.start_time = None - self.round = state_dict.get("round", None) - - @property - def avg(self): - return self.sum / self.n if self.n > 0 else self.sum - - @property - def elapsed_time(self): - if self.start_time is None: - return 0.0 - return time.perf_counter() - self.start_time - - @property - def smoothed_value(self) -> float: - val = self.avg if self.sum > 0 else self.elapsed_time - if self.round is not None and val is not None: - val = safe_round(val, self.round) - return val - - -class MetersDict(OrderedDict): - """A sorted dictionary of :class:`Meters`. - - Meters are sorted according to a priority that is given when the - meter is first added to the dictionary. 
- """ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.priorities = [] - - def __setitem__(self, key, value): - assert key not in self, "MetersDict doesn't support reassignment" - priority, value = value - bisect.insort(self.priorities, (priority, len(self.priorities), key)) - super().__setitem__(key, value) - for _, _, key in self.priorities: # reorder dict to match priorities - self.move_to_end(key) - - def add_meter(self, key, meter, priority): - self.__setitem__(key, (priority, meter)) - - def state_dict(self): - return [ - (pri, key, self[key].__class__.__name__, self[key].state_dict()) - for pri, _, key in self.priorities - # can't serialize DerivedMeter instances - if not isinstance(self[key], MetersDict._DerivedMeter) - ] - - def load_state_dict(self, state_dict): - self.clear() - self.priorities.clear() - for pri, key, meter_cls, meter_state in state_dict: - meter = globals()[meter_cls]() - meter.load_state_dict(meter_state) - self.add_meter(key, meter, pri) - - def get_smoothed_value(self, key: str) -> float: - """Get a single smoothed value.""" - meter = self[key] - if isinstance(meter, MetersDict._DerivedMeter): - return meter.fn(self) - else: - return meter.smoothed_value - - def get_smoothed_values(self) -> Dict[str, float]: - """Get all smoothed values.""" - return OrderedDict( - [ - (key, self.get_smoothed_value(key)) - for key in self.keys() - if not key.startswith("_") - ] - ) - - def reset(self): - """Reset Meter instances.""" - for meter in self.values(): - if isinstance(meter, MetersDict._DerivedMeter): - continue - meter.reset() - - class _DerivedMeter(Meter): - """A Meter whose values are derived from other Meters.""" - - def __init__(self, fn): - self.fn = fn - - def reset(self): - pass diff --git a/kosmos-g/fairseq/fairseq/logging/metrics.py b/kosmos-g/fairseq/fairseq/logging/metrics.py deleted file mode 100644 index 892b0ea4d..000000000 --- a/kosmos-g/fairseq/fairseq/logging/metrics.py +++ /dev/null @@ -1,316 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -A standalone module for aggregating metrics. - -Metrics can be logged from anywhere using the `log_*` functions defined -in this module. The logged values will be aggregated dynamically based -on the aggregation context in which the logging occurs. See the -:func:`aggregate` context manager for more details. -""" - -import contextlib -import uuid -from collections import defaultdict -from typing import Callable, List, Optional - -from .meters import * - - -# Aggregation contexts are considered "active" when inside the scope -# created by the :func:`aggregate` context manager. -_aggregators = OrderedDict() -_active_aggregators = OrderedDict() -_active_aggregators_cnt = defaultdict(lambda: 0) - - -def reset() -> None: - """Reset all metrics aggregators.""" - _aggregators.clear() - _active_aggregators.clear() - _active_aggregators_cnt.clear() - - # The "default" aggregator observes all logged values. - _aggregators["default"] = MetersDict() - _active_aggregators["default"] = _aggregators["default"] - _active_aggregators_cnt["default"] = 1 - - -reset() - - -@contextlib.contextmanager -def aggregate(name: Optional[str] = None, new_root: bool = False): - """Context manager to aggregate metrics under a given name. - - Aggregations can be nested. 
If *new_root* is ``False``, then logged - metrics will be recorded along the entire stack of nested - aggregators, including a global "default" aggregator. If *new_root* - is ``True``, then this aggregator will be the root of a new - aggregation stack, thus bypassing any parent aggregators. - - Note that aggregation contexts are uniquely identified by their - *name* (e.g., train, valid). Creating a context with an existing - name will reuse the corresponding :class:`MetersDict` instance. - If no name is given, then a temporary aggregator will be created. - - Usage:: - - with metrics.aggregate("train"): - for step, batch in enumerate(epoch): - with metrics.aggregate("train_inner") as agg: - metrics.log_scalar("loss", get_loss(batch)) - if step % log_interval == 0: - print(agg.get_smoothed_value("loss")) - agg.reset() - print(metrics.get_smoothed_values("train")["loss"]) - - Args: - name (str): name of the aggregation. Defaults to a - random/temporary name if not given explicitly. - new_root (bool): make this aggregation the root of a new - aggregation stack. - """ - if name is None: - # generate a temporary name - name = str(uuid.uuid4()) - assert name not in _aggregators - agg = MetersDict() - else: - assert name != "default" - agg = _aggregators.setdefault(name, MetersDict()) - - if new_root: - backup_aggregators = _active_aggregators.copy() - _active_aggregators.clear() - backup_aggregators_cnt = _active_aggregators_cnt.copy() - _active_aggregators_cnt.clear() - - _active_aggregators[name] = agg - _active_aggregators_cnt[name] += 1 - - yield agg - - _active_aggregators_cnt[name] -= 1 - if _active_aggregators_cnt[name] == 0 and name in _active_aggregators: - del _active_aggregators[name] - - if new_root: - _active_aggregators.clear() - _active_aggregators.update(backup_aggregators) - _active_aggregators_cnt.clear() - _active_aggregators_cnt.update(backup_aggregators_cnt) - - -def get_active_aggregators() -> List[MetersDict]: - return list(_active_aggregators.values()) - - -def log_scalar( - key: str, - value: float, - weight: float = 1, - priority: int = 10, - round: Optional[int] = None, -): - """Log a scalar value. - - Args: - key (str): name of the field to log - value (float): value to log - weight (float): weight that this value contributes to the average. - A weight of 0 will always log the latest value. - priority (int): smaller values are logged earlier in the output - round (Optional[int]): number of digits to round to when displaying - """ - for agg in get_active_aggregators(): - if key not in agg: - agg.add_meter(key, AverageMeter(round=round), priority) - agg[key].update(value, weight) - - -def log_scalar_sum( - key: str, - value: float, - priority: int = 10, - round: Optional[int] = None, -): - """Log a scalar value that is summed for reporting. - - Args: - key (str): name of the field to log - value (float): value to log - priority (int): smaller values are logged earlier in the output - round (Optional[int]): number of digits to round to when displaying - """ - for agg in get_active_aggregators(): - if key not in agg: - agg.add_meter(key, SumMeter(round=round), priority) - agg[key].update(value) - - -def log_derived(key: str, fn: Callable[[MetersDict], float], priority: int = 20): - """Log a scalar value derived from other meters. 
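Derived meters are evaluated lazily from the other meters at read time. A representative use, mirroring how fairseq criterions report perplexity from an averaged loss (the values here are made up):

```python
from fairseq.logging import metrics

with metrics.aggregate("valid") as agg:
    metrics.log_scalar("nll_loss", 2.0, round=3)
    metrics.log_derived("ppl", lambda meters: round(2 ** meters["nll_loss"].avg, 2))
    print(agg.get_smoothed_values())  # OrderedDict([('nll_loss', 2.0), ('ppl', 4.0)])
```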
- - Args: - key (str): name of the field to log - fn (Callable[[MetersDict], float]): function that takes a single - argument *meters* and returns the derived value - priority (int): smaller values are logged earlier in the output - """ - for agg in get_active_aggregators(): - if key not in agg: - agg.add_meter(key, MetersDict._DerivedMeter(fn), priority) - - -def log_speed( - key: str, - value: float, - priority: int = 30, - round: Optional[int] = None, -): - """Log the rate of some quantity per second. - - Args: - key (str): name of the field to log - value (float): value to log - priority (int): smaller values are logged earlier in the output - round (Optional[int]): number of digits to round to when displaying - """ - for agg in get_active_aggregators(): - if key not in agg: - agg.add_meter(key, TimeMeter(round=round), priority) - agg[key].reset() # reset meter on the first call - else: - agg[key].update(value) - - -def log_start_time(key: str, priority: int = 40, round: Optional[int] = None): - """Log the duration of some event in seconds. - - The duration will be computed once :func:`log_stop_time` is called. - - Args: - key (str): name of the field to log - priority (int): smaller values are logged earlier in the output - round (Optional[int]): number of digits to round to when displaying - """ - for agg in get_active_aggregators(): - if key not in agg: - agg.add_meter(key, StopwatchMeter(round=round), priority) - agg[key].start() - - -def log_stop_time(key: str, weight: float = 0.0, prehook=None): - """Log the duration of some event in seconds. - - The duration will be computed since :func:`log_start_time` was called. - Set weight > 0 to report the average time instead of the sum. - - Args: - key (str): name of the field to log - weight (float): weight that this time contributes to the average - prehook (function, no arguments): will be called before the timer - is stopped. For example, use prehook=torch.cuda.synchronize to - make sure all gpu operations are done before timer is stopped. - """ - for agg in get_active_aggregators(): - if key in agg: - agg[key].stop(weight, prehook) - - -def log_custom( - new_meter_fn: Callable[[], Meter], - key: str, - *args, - priority: int = 50, - **kwargs, -): - """Log using a custom Meter. - - Any extra *args* or *kwargs* will be passed through to the Meter's - *update* method. - - Args: - new_meter_fn (Callable[[], Meter]): function that returns a new - Meter instance - key (str): name of the field to log - priority (int): smaller values are logged earlier in the output - """ - for agg in get_active_aggregators(): - if key not in agg: - agg.add_meter(key, new_meter_fn(), priority) - agg[key].update(*args, **kwargs) - - -def reset_meter(name: str, key: str) -> None: - """Reset Meter instance aggregated under a given *name* and *key*.""" - meter = get_meter(name, key) - if meter is not None: - meter.reset() - - -def reset_meters(name: str) -> None: - """Reset Meter instances aggregated under a given *name*.""" - meters = get_meters(name) - if meters is not None: - meters.reset() - - -def get_meter(name: str, key: str) -> Meter: - """Get a single Meter instance aggregated under *name* and *key*. - - Returns: - Meter or None if no metrics have been logged under *name* and *key*. - """ - if name not in _aggregators: - return None - return _aggregators[name].get(key, None) - - -def get_meters(name: str) -> MetersDict: - """Get Meter instances aggregated under a given *name*. 
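The start/stop pair above wraps a `StopwatchMeter`; with the default `weight=0.0` in `log_stop_time`, the reported value is the summed duration rather than an average. For example:

```python
import time
from fairseq.logging import metrics

metrics.reset()
with metrics.aggregate("train") as agg:
    metrics.log_start_time("wall", priority=40, round=2)
    time.sleep(0.25)
    metrics.log_stop_time("wall")
    print(agg.get_smoothed_values()["wall"])  # ~0.25 (seconds)
```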
- - Returns: - MetersDict or None if no metrics have been logged under *name*. - """ - return _aggregators.get(name, None) - - -def get_smoothed_value(name: str, key: str) -> float: - """Get a single smoothed value. - - Raises: - KeyError: if no metrics have been logged under *name* and *key*. - """ - return _aggregators[name].get_smoothed_value(key) - - -def get_smoothed_values(name: str) -> Dict[str, float]: - """Get smoothed values aggregated under a given *name*. - - Raises: - KeyError: if no metrics have been logged under *name*. - """ - return _aggregators[name].get_smoothed_values() - - -def state_dict(): - return OrderedDict([(name, agg.state_dict()) for name, agg in _aggregators.items()]) - - -def load_state_dict(state_dict): - for name, agg_state in state_dict.items(): - _aggregators[name] = MetersDict() - _aggregators[name].load_state_dict(agg_state) - - -def xla_metrics_report(): - try: - import torch_xla.debug.metrics as met - - print(met.metrics_report()) - except ImportError: - return diff --git a/kosmos-g/fairseq/fairseq/logging/progress_bar.py b/kosmos-g/fairseq/fairseq/logging/progress_bar.py deleted file mode 100644 index 061082cae..000000000 --- a/kosmos-g/fairseq/fairseq/logging/progress_bar.py +++ /dev/null @@ -1,490 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Wrapper around various loggers and progress bars (e.g., tqdm). -""" - -import atexit -import json -import logging -import os -import sys -from collections import OrderedDict -from contextlib import contextmanager -from numbers import Number -from typing import Optional - -import torch - -from .meters import AverageMeter, StopwatchMeter, TimeMeter - - -logger = logging.getLogger(__name__) - - -def progress_bar( - iterator, - log_format: Optional[str] = None, - log_interval: int = 100, - log_file: Optional[str] = None, - epoch: Optional[int] = None, - prefix: Optional[str] = None, - tensorboard_logdir: Optional[str] = None, - default_log_format: str = "tqdm", - wandb_project: Optional[str] = None, - wandb_run_name: Optional[str] = None, - azureml_logging: Optional[bool] = False, -): - if log_format is None: - log_format = default_log_format - if log_file is not None: - handler = logging.FileHandler(filename=log_file) - logger.addHandler(handler) - - if log_format == "tqdm" and not sys.stderr.isatty(): - log_format = "simple" - - if log_format == "json": - bar = JsonProgressBar(iterator, epoch, prefix, log_interval) - elif log_format == "none": - bar = NoopProgressBar(iterator, epoch, prefix) - elif log_format == "simple": - bar = SimpleProgressBar(iterator, epoch, prefix, log_interval) - elif log_format == "tqdm": - bar = TqdmProgressBar(iterator, epoch, prefix) - else: - raise ValueError("Unknown log format: {}".format(log_format)) - - if tensorboard_logdir: - try: - # [FB only] custom wrapper for TensorBoard - import palaas # noqa - from .fb_tbmf_wrapper import FbTbmfWrapper - - bar = FbTbmfWrapper(bar, log_interval) - except ImportError: - bar = TensorboardProgressBarWrapper(bar, tensorboard_logdir) - - if wandb_project: - bar = WandBProgressBarWrapper(bar, wandb_project, run_name=wandb_run_name) - - if azureml_logging: - bar = AzureMLProgressBarWrapper(bar) - - return bar - - -def build_progress_bar( - args, - iterator, - epoch: Optional[int] = None, - prefix: Optional[str] = None, - default: str = "tqdm", - no_progress_bar: str = "none", -): - """Legacy 
wrapper that takes an argparse.Namespace.""" - if getattr(args, "no_progress_bar", False): - default = no_progress_bar - if getattr(args, "distributed_rank", 0) == 0: - tensorboard_logdir = getattr(args, "tensorboard_logdir", None) - else: - tensorboard_logdir = None - return progress_bar( - iterator, - log_format=args.log_format, - log_interval=args.log_interval, - epoch=epoch, - prefix=prefix, - tensorboard_logdir=tensorboard_logdir, - default_log_format=default, - ) - - -def format_stat(stat): - if isinstance(stat, Number): - stat = "{:g}".format(stat) - elif isinstance(stat, AverageMeter): - stat = "{:.3f}".format(stat.avg) - elif isinstance(stat, TimeMeter): - stat = "{:g}".format(round(stat.avg)) - elif isinstance(stat, StopwatchMeter): - stat = "{:g}".format(round(stat.sum)) - elif torch.is_tensor(stat): - stat = stat.tolist() - return stat - - -class BaseProgressBar(object): - """Abstract class for progress bars.""" - - def __init__(self, iterable, epoch=None, prefix=None): - self.iterable = iterable - self.n = getattr(iterable, "n", 0) - self.epoch = epoch - self.prefix = "" - if epoch is not None: - self.prefix += "epoch {:03d}".format(epoch) - if prefix is not None: - self.prefix += (" | " if self.prefix != "" else "") + prefix - - def __len__(self): - return len(self.iterable) - - def __enter__(self): - return self - - def __exit__(self, *exc): - return False - - def __iter__(self): - raise NotImplementedError - - def log(self, stats, tag=None, step=None): - """Log intermediate stats according to log_interval.""" - raise NotImplementedError - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - raise NotImplementedError - - def update_config(self, config): - """Log latest configuration.""" - pass - - def _str_commas(self, stats): - return ", ".join(key + "=" + stats[key].strip() for key in stats.keys()) - - def _str_pipes(self, stats): - return " | ".join(key + " " + stats[key].strip() for key in stats.keys()) - - def _format_stats(self, stats): - postfix = OrderedDict(stats) - # Preprocess stats according to datatype - for key in postfix.keys(): - postfix[key] = str(format_stat(postfix[key])) - return postfix - - -@contextmanager -def rename_logger(logger, new_name): - old_name = logger.name - if new_name is not None: - logger.name = new_name - yield logger - logger.name = old_name - - -class JsonProgressBar(BaseProgressBar): - """Log output in JSON format.""" - - def __init__(self, iterable, epoch=None, prefix=None, log_interval=1000): - super().__init__(iterable, epoch, prefix) - self.log_interval = log_interval - self.i = None - self.size = None - - def __iter__(self): - self.size = len(self.iterable) - for i, obj in enumerate(self.iterable, start=self.n): - self.i = i - yield obj - - def log(self, stats, tag=None, step=None): - """Log intermediate stats according to log_interval.""" - step = step or self.i or 0 - if step > 0 and self.log_interval is not None and step % self.log_interval == 0: - update = ( - self.epoch - 1 + (self.i + 1) / float(self.size) - if self.epoch is not None - else None - ) - stats = self._format_stats(stats, epoch=self.epoch, update=update) - with rename_logger(logger, tag): - logger.info(json.dumps(stats)) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - self.stats = stats - if tag is not None: - self.stats = OrderedDict( - [(tag + "_" + k, v) for k, v in self.stats.items()] - ) - stats = self._format_stats(self.stats, epoch=self.epoch) - with rename_logger(logger, tag): - 
logger.info(json.dumps(stats)) - - def _format_stats(self, stats, epoch=None, update=None): - postfix = OrderedDict() - if epoch is not None: - postfix["epoch"] = epoch - if update is not None: - postfix["update"] = round(update, 3) - # Preprocess stats according to datatype - for key in stats.keys(): - postfix[key] = format_stat(stats[key]) - return postfix - - -class NoopProgressBar(BaseProgressBar): - """No logging.""" - - def __init__(self, iterable, epoch=None, prefix=None): - super().__init__(iterable, epoch, prefix) - - def __iter__(self): - for obj in self.iterable: - yield obj - - def log(self, stats, tag=None, step=None): - """Log intermediate stats according to log_interval.""" - pass - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - pass - - -class SimpleProgressBar(BaseProgressBar): - """A minimal logger for non-TTY environments.""" - - def __init__(self, iterable, epoch=None, prefix=None, log_interval=1000): - super().__init__(iterable, epoch, prefix) - self.log_interval = log_interval - self.i = None - self.size = None - - def __iter__(self): - self.size = len(self.iterable) - for i, obj in enumerate(self.iterable, start=self.n): - self.i = i - yield obj - - def log(self, stats, tag=None, step=None): - """Log intermediate stats according to log_interval.""" - step = step or self.i or 0 - if step > 0 and self.log_interval is not None and step % self.log_interval == 0: - stats = self._format_stats(stats) - postfix = self._str_commas(stats) - with rename_logger(logger, tag): - logger.info( - "{}: {:5d} / {:d} {}".format( - self.prefix, self.i + 1, self.size, postfix - ) - ) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - postfix = self._str_pipes(self._format_stats(stats)) - with rename_logger(logger, tag): - logger.info("{} | {}".format(self.prefix, postfix)) - - -class TqdmProgressBar(BaseProgressBar): - """Log to tqdm.""" - - def __init__(self, iterable, epoch=None, prefix=None): - super().__init__(iterable, epoch, prefix) - from tqdm import tqdm - - self.tqdm = tqdm( - iterable, - self.prefix, - leave=False, - disable=(logger.getEffectiveLevel() > logging.INFO), - ) - - def __iter__(self): - return iter(self.tqdm) - - def log(self, stats, tag=None, step=None): - """Log intermediate stats according to log_interval.""" - self.tqdm.set_postfix(self._format_stats(stats), refresh=False) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - postfix = self._str_pipes(self._format_stats(stats)) - with rename_logger(logger, tag): - logger.info("{} | {}".format(self.prefix, postfix)) - - -try: - _tensorboard_writers = {} - from torch.utils.tensorboard import SummaryWriter -except ImportError: - try: - from tensorboardX import SummaryWriter - except ImportError: - SummaryWriter = None - - -def _close_writers(): - for w in _tensorboard_writers.values(): - w.close() - - -atexit.register(_close_writers) - - -class TensorboardProgressBarWrapper(BaseProgressBar): - """Log to tensorboard.""" - - def __init__(self, wrapped_bar, tensorboard_logdir): - self.wrapped_bar = wrapped_bar - self.tensorboard_logdir = tensorboard_logdir - - if SummaryWriter is None: - logger.warning( - "tensorboard not found, please install with: pip install tensorboard" - ) - - def _writer(self, key): - if SummaryWriter is None: - return None - _writers = _tensorboard_writers - if key not in _writers: - _writers[key] = SummaryWriter(os.path.join(self.tensorboard_logdir, key)) - _writers[key].add_text("sys.argv", " 
".join(sys.argv)) - return _writers[key] - - def __iter__(self): - return iter(self.wrapped_bar) - - def log(self, stats, tag=None, step=None): - """Log intermediate stats to tensorboard.""" - self._log_to_tensorboard(stats, tag, step) - self.wrapped_bar.log(stats, tag=tag, step=step) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - self._log_to_tensorboard(stats, tag, step) - self.wrapped_bar.print(stats, tag=tag, step=step) - - def update_config(self, config): - """Log latest configuration.""" - # TODO add hparams to Tensorboard - self.wrapped_bar.update_config(config) - - def _log_to_tensorboard(self, stats, tag=None, step=None): - writer = self._writer(tag or "") - if writer is None: - return - if step is None: - step = stats["num_updates"] - for key in stats.keys() - {"num_updates"}: - if isinstance(stats[key], AverageMeter): - writer.add_scalar(key, stats[key].val, step) - elif isinstance(stats[key], Number): - writer.add_scalar(key, stats[key], step) - elif torch.is_tensor(stats[key]) and stats[key].numel() == 1: - writer.add_scalar(key, stats[key].item(), step) - writer.flush() - - -try: - import wandb -except ImportError: - wandb = None - - -class WandBProgressBarWrapper(BaseProgressBar): - """Log to Weights & Biases.""" - - def __init__(self, wrapped_bar, wandb_project, run_name=None): - self.wrapped_bar = wrapped_bar - if wandb is None: - logger.warning("wandb not found, pip install wandb") - return - - # reinit=False to ensure if wandb.init() is called multiple times - # within one process it still references the same run - wandb.init(project=wandb_project, reinit=False, name=run_name) - - def __iter__(self): - return iter(self.wrapped_bar) - - def log(self, stats, tag=None, step=None): - """Log intermediate stats to tensorboard.""" - self._log_to_wandb(stats, tag, step) - self.wrapped_bar.log(stats, tag=tag, step=step) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - self._log_to_wandb(stats, tag, step) - self.wrapped_bar.print(stats, tag=tag, step=step) - - def update_config(self, config): - """Log latest configuration.""" - if wandb is not None: - wandb.config.update(config) - self.wrapped_bar.update_config(config) - - def _log_to_wandb(self, stats, tag=None, step=None): - if wandb is None: - return - if step is None: - step = stats["num_updates"] - - prefix = "" if tag is None else tag + "/" - - for key in stats.keys() - {"num_updates"}: - if isinstance(stats[key], AverageMeter): - wandb.log({prefix + key: stats[key].val}, step=step) - elif isinstance(stats[key], Number): - wandb.log({prefix + key: stats[key]}, step=step) - - -try: - from azureml.core import Run -except ImportError: - Run = None - - -class AzureMLProgressBarWrapper(BaseProgressBar): - """Log to Azure ML""" - - def __init__(self, wrapped_bar): - self.wrapped_bar = wrapped_bar - if Run is None: - logger.warning("azureml.core not found, pip install azureml-core") - return - self.run = Run.get_context() - - def __exit__(self, *exc): - if Run is not None: - self.run.complete() - return False - - def __iter__(self): - return iter(self.wrapped_bar) - - def log(self, stats, tag=None, step=None): - """Log intermediate stats to AzureML""" - self._log_to_azureml(stats, tag, step) - self.wrapped_bar.log(stats, tag=tag, step=step) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats""" - self._log_to_azureml(stats, tag, step) - self.wrapped_bar.print(stats, tag=tag, step=step) - - def update_config(self, config): - """Log 
latest configuration.""" - self.wrapped_bar.update_config(config) - - def _log_to_azureml(self, stats, tag=None, step=None): - if Run is None: - return - if step is None: - step = stats["num_updates"] - - prefix = "" if tag is None else tag + "/" - - for key in stats.keys() - {"num_updates"}: - name = prefix + key - if isinstance(stats[key], AverageMeter): - self.run.log_row(name=name, **{"step": step, key: stats[key].val}) - elif isinstance(stats[key], Number): - self.run.log_row(name=name, **{"step": step, key: stats[key]}) diff --git a/kosmos-g/fairseq/fairseq/model_parallel/__init__.py b/kosmos-g/fairseq/fairseq/model_parallel/__init__.py deleted file mode 100644 index 69f216848..000000000 --- a/kosmos-g/fairseq/fairseq/model_parallel/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import criterions, models, modules # noqa diff --git a/kosmos-g/fairseq/fairseq/model_parallel/criterions/__init__.py b/kosmos-g/fairseq/fairseq/model_parallel/criterions/__init__.py deleted file mode 100644 index 5fae7bd4c..000000000 --- a/kosmos-g/fairseq/fairseq/model_parallel/criterions/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - - -# automatically import any Python files in the criterions/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - module = file[: file.find(".py")] - importlib.import_module("fairseq.model_parallel.criterions." + module) diff --git a/kosmos-g/fairseq/fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py b/kosmos-g/fairseq/fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py deleted file mode 100644 index 35c50ee15..000000000 --- a/kosmos-g/fairseq/fairseq/model_parallel/criterions/vocab_parallel_cross_entropy.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -from fairseq import metrics, utils -from fairseq.criterions import FairseqCriterion, register_criterion - - -try: - from fairseq.model_parallel.megatron.mpu.cross_entropy import ( - vocab_parallel_cross_entropy, - ) - - has_megatron_submodule = True -except (ImportError, ModuleNotFoundError): - has_megatron_submodule = False - - -@register_criterion("vocab_parallel_cross_entropy") -class VocabParallelCrossEntropyCriterion(FairseqCriterion): - def __init__(self, task, sentence_avg): - super().__init__(task) - self.sentence_avg = sentence_avg - if not has_megatron_submodule: - raise ImportError( - "\n\nPlease install the megatron submodule:" - "\n\n git submodule update --init " - "fairseq/model_parallel/megatron" - ) - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - net_output = model(**sample["net_input"]) - target = sample["target"] - - loss = vocab_parallel_cross_entropy(net_output[0].float(), target) - loss = (loss * (target != self.padding_idx)).sum() - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - logging_output = { - "loss": utils.item(loss.data) if reduce else loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - } - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = sum(log.get("loss", 0) for log in logging_outputs) - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - sample_size = sum(log.get("sample_size", 0) for log in logging_outputs) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - if sample_size != ntokens: - metrics.log_scalar( - "nll_loss", loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - else: - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["loss"].avg) - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improve distributed training speed. - """ - return True diff --git a/kosmos-g/fairseq/fairseq/model_parallel/megatron_trainer.py b/kosmos-g/fairseq/fairseq/model_parallel/megatron_trainer.py deleted file mode 100644 index ca4211864..000000000 --- a/kosmos-g/fairseq/fairseq/model_parallel/megatron_trainer.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Train a network across multiple GPUs.
-""" - -from fairseq.dataclass.configs import FairseqConfig -from fairseq.distributed import utils as distributed_utils -from fairseq.trainer import Trainer - -try: - from fairseq.model_parallel.megatron.mpu import ( - get_data_parallel_rank, - get_data_parallel_world_size, - get_model_parallel_src_rank, - get_cuda_rng_tracker, - ) - - has_megatron_submodule = True -except (ImportError, ModuleNotFoundError): - has_megatron_submodule = False - - -class MegatronTrainer(Trainer): - """Main class for model parallel with data parallel training.""" - - def __init__(self, cfg: FairseqConfig, task, model, criterion, **kwargs): - if not has_megatron_submodule: - raise ImportError( - "\n\nPlease install the megatron submodule:" - "\n\n git submodule update --init " - "fairseq/model_parallel/megatron" - ) - super().__init__(cfg, task, model, criterion, **kwargs) - - def clip_grad_norm(self, clip_norm): - def _aggregate_model_parallel_grad_norm(total_norm): - total_norm = total_norm ** 2 - distributed_utils.all_reduce( - total_norm, group=distributed_utils.get_model_parallel_group() - ) - total_norm = total_norm ** 0.5 - return total_norm - - return self.optimizer.clip_grad_norm( - clip_norm, - aggregate_norm_fn=_aggregate_model_parallel_grad_norm, - ) - - def save_checkpoint(self, filename, extra_state): - """Save all training state in a checkpoint file.""" - extra_state["rng_tracker_states"] = get_cuda_rng_tracker().get_states() - super().save_checkpoint(filename, extra_state) - - def load_checkpoint( - self, - filename, - reset_optimizer=False, - reset_lr_scheduler=False, - optimizer_overrides=None, - reset_meters=False, - ): - extra_state = super().load_checkpoint( - filename, - reset_optimizer=reset_optimizer, - reset_lr_scheduler=reset_lr_scheduler, - optimizer_overrides=optimizer_overrides, - reset_meters=reset_meters, - ) - if extra_state is not None and "rng_tracker_states" in extra_state: - get_cuda_rng_tracker().set_states(extra_state["rng_tracker_states"]) - return extra_state diff --git a/kosmos-g/fairseq/fairseq/model_parallel/models/__init__.py b/kosmos-g/fairseq/fairseq/model_parallel/models/__init__.py deleted file mode 100644 index 3532479e5..000000000 --- a/kosmos-g/fairseq/fairseq/model_parallel/models/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - - -# automatically import any Python files in the models/ directory -models_dir = os.path.dirname(__file__) -for file in os.listdir(models_dir): - path = os.path.join(models_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - model_name = file[: file.find(".py")] if file.endswith(".py") else file - module = importlib.import_module("fairseq.model_parallel.models." + model_name) diff --git a/kosmos-g/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/__init__.py b/kosmos-g/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/__init__.py deleted file mode 100644 index 117827c3e..000000000 --- a/kosmos-g/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-
-from .model import *  # noqa
diff --git a/kosmos-g/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py b/kosmos-g/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py
deleted file mode 100644
index c07a027a3..000000000
--- a/kosmos-g/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/layers.py
+++ /dev/null
@@ -1,600 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-from collections import namedtuple
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from fairseq import options, utils
-from fairseq.modules import (
-    AdaptiveSoftmax,
-    LayerNorm,
-    MultiheadAttention,
-    PositionalEmbedding,
-)
-
-EncoderOut = namedtuple(
-    "TransformerEncoderOut",
-    [
-        "encoder_out",  # T x B x C
-        "encoder_padding_mask",  # B x T
-        "encoder_embedding",  # B x T x C
-        "encoder_states",  # List[T x B x C]
-    ],
-)
-
-
-class TransformerEncoderEmbedding(nn.Module):
-    """Encoder Embedding + Positional Embedding"""
-
-    def __init__(self, args, embed_tokens):
-        super().__init__()
-        self.dropout = args.dropout
-        self.max_source_positions = args.max_source_positions
-        self.embed_tokens = embed_tokens
-        if isinstance(embed_tokens, nn.ModuleList):
-            self.padding_idx = embed_tokens[0].padding_idx
-            embed_dim = sum(e.embedding_dim for e in embed_tokens)
-        else:
-            self.padding_idx = embed_tokens.padding_idx
-            embed_dim = embed_tokens.embedding_dim
-        self.embed_scale = math.sqrt(embed_dim)
-        self.embed_positions = (
-            PositionalEmbedding(
-                args.max_source_positions,
-                embed_dim,
-                self.padding_idx,
-                learned=args.encoder_learned_pos,
-            )
-            if not args.no_token_positional_embeddings
-            else None
-        )
-        if getattr(args, "layernorm_embedding", False):
-            self.layernorm_embedding = LayerNorm(embed_dim)
-        else:
-            self.layernorm_embedding = None
-
-    def forward(self, input):
-        # embed tokens and positions
-        src_tokens = input[0]
-        prev_output_tokens = input[2]
-        if isinstance(self.embed_tokens, nn.ModuleList):
-            x_embed_list = []
-            for embed_tokens_part in self.embed_tokens:
-                x_embed_list.append(embed_tokens_part(src_tokens))
-
-            embedded = torch.cat(x_embed_list, dim=-1)
-        else:
-            embedded = self.embed_tokens(src_tokens)
-        x = embed = self.embed_scale * embedded
-        if self.embed_positions is not None:
-            x = embed + self.embed_positions(src_tokens)
-        if self.layernorm_embedding:
-            x = self.layernorm_embedding(x)
-        x = F.dropout(x, p=self.dropout, training=self.training)
-        # B x T x C -> T x B x C
-        x = x.transpose(0, 1)
-
-        # compute padding mask
-        encoder_padding_mask = src_tokens.eq(self.padding_idx)
-        return (x, encoder_padding_mask, prev_output_tokens)
-
-
-class TransformerEncoderLayerNorm(nn.Module):
-    """
-    Layer norm at the end of all encoder layers if
-    args.encoder_normalize_before = True
-    """
-
-    def __init__(self, args, embed_dim):
-        super().__init__()
-        if args.encoder_normalize_before:
-            self.layer_norm = LayerNorm(embed_dim)
-        else:
-            self.layer_norm = None
-
-    def forward(self, input):
-        x = input[0]
-        encoder_padding_mask = input[1]
-        prev_output_tokens = input[2]
-        if self.layer_norm:
-            x = self.layer_norm(x)
-        # keeping track of the incremental_state is not supported yet
-        return (x, encoder_padding_mask, prev_output_tokens)
-
-
-class TransformerDecoderEmbedding(nn.Module):
-    """Decoder Embedding + Positional Embedding"""
-
-    def __init__(self, args,
embed_tokens): - super().__init__() - self.dropout = args.dropout - self.share_input_output_embed = args.share_decoder_input_output_embed - input_embed_dim = ( - sum(e.embedding_dim for e in embed_tokens) - if isinstance(embed_tokens, nn.ModuleList) - else embed_tokens.embedding_dim - ) - embed_dim = args.decoder_embed_dim - self.output_embed_dim = args.decoder_output_dim - - padding_idx = ( - embed_tokens[0].padding_idx - if isinstance(embed_tokens, nn.ModuleList) - else embed_tokens.padding_idx - ) - self.max_target_positions = args.max_target_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim - - self.project_in_dim = ( - Linear(input_embed_dim, embed_dim, bias=False) - if embed_dim != input_embed_dim - else None - ) - - self.embed_positions = ( - PositionalEmbedding( - args.max_target_positions, - embed_dim, - padding_idx, - learned=args.decoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - def forward(self, input): - mt_task = False - if isinstance(input, tuple): - if len(input) == 3: - encoder_out = input[0] - encoder_padding_mask = input[1] - prev_output_tokens = input[2] - incremental_state = None # Hardcoding to avoid passing of None objects - mt_task = True - else: - # HACK for now, need to fix (TODO sidgoyal) - prev_output_tokens = input[0] - # discard "src_lengths" - encoder_out = None - encoder_padding_mask = None - incremental_state = None - - else: - prev_output_tokens = input - encoder_out = None - encoder_padding_mask = None - incremental_state = None - - positions = ( - self.embed_positions( - prev_output_tokens, - incremental_state=incremental_state, - ) - if self.embed_positions is not None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - # embed tokens and positions - - if isinstance(self.embed_tokens, nn.ModuleList): - x_embed_list = [] - for embed_tokens_part in self.embed_tokens: - x_embed_list.append(embed_tokens_part(prev_output_tokens)) - - x = self.embed_scale * torch.cat(x_embed_list, dim=-1) - else: - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - x = F.dropout(x, p=self.dropout, training=self.training) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - if mt_task: - return (x, encoder_out, encoder_padding_mask) - return x - - -class TransformerDecoderOutputLayer(nn.Module): - def __init__(self, args, embed_tokens, dictionary): - super().__init__() - self.share_input_output_embed = args.share_decoder_input_output_embed - self.embed_tokens = embed_tokens - self.output_embed_dim = args.decoder_output_dim - embed_dim = args.decoder_embed_dim - - self.project_out_dim = ( - Linear(embed_dim, self.output_embed_dim, bias=False) - if embed_dim != self.output_embed_dim and not args.tie_adaptive_weights - else None - ) - self.adaptive_softmax = None - if args.adaptive_softmax_cutoff is not None: - assert not isinstance(embed_tokens, nn.ModuleList) - self.adaptive_softmax = AdaptiveSoftmax( - len(dictionary), - self.output_embed_dim, - options.eval_str_list(args.adaptive_softmax_cutoff, type=int), - dropout=args.adaptive_softmax_dropout, - adaptive_inputs=embed_tokens if args.tie_adaptive_weights else None, - factor=args.adaptive_softmax_factor, - tie_proj=args.tie_adaptive_proj, - ) - elif not 
self.share_input_output_embed: - self.embed_tokens = nn.Parameter( - torch.Tensor(len(dictionary), self.output_embed_dim) - ) - nn.init.normal_( - self.embed_tokens, mean=0, std=self.output_embed_dim ** -0.5 - ) - - if args.decoder_normalize_before and not getattr( - args, "no_decoder_final_norm", False - ): - self.layer_norm = LayerNorm(embed_dim) - else: - self.layer_norm = None - - def forward(self, input, apply_final_proj=True): - if isinstance(input, tuple): - x = input[0] - else: - x = input - - if self.layer_norm: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - if self.project_out_dim is not None: - x = self.project_out_dim(x) - if apply_final_proj: - x = self.output_layer(x) - return x - - def output_layer(self, features, **kwargs): - """Project features to the vocabulary size.""" - if self.adaptive_softmax is None: - # project back to size of vocabulary - if self.share_input_output_embed: - if isinstance(self.embed_tokens, nn.ModuleList): - output = None - for i, emb in enumerate(self.embed_tokens): - sidx = i * emb.embedding_dim - eidx = (i + 1) * emb.embedding_dim - if output is None: - output = F.linear(features[:, :, sidx:eidx], emb.weight) - else: - output += F.linear(features[:, :, sidx:eidx], emb.weight) - - return output - else: - return F.linear(features, self.embed_tokens.weight) - else: - return F.linear(features, self.embed_tokens) - else: - return features - - -class TransformerEncoderLayer(nn.Module): - """Encoder layer block. - In the original paper each operation (multi-head attention or FFN) is - postprocessed with: `dropout -> add residual -> layernorm`. In the - tensor2tensor code they suggest that learning is more robust when - preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *args.encoder_normalize_before* to ``True``. 
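
The pre-norm/post-norm distinction described above is easy to lose in the interleaved `maybe_layer_norm` calls that follow, so here is a minimal sketch of the two orderings (hypothetical helper names, not part of this file):

```python
import torch.nn.functional as F

def post_norm_block(x, sublayer, layer_norm, p, training):
    # paper default: sublayer -> dropout -> residual add -> layernorm
    return layer_norm(x + F.dropout(sublayer(x), p=p, training=training))

def pre_norm_block(x, sublayer, layer_norm, p, training):
    # tensor2tensor variant (*_normalize_before=True):
    # layernorm -> sublayer -> dropout -> residual add
    return x + F.dropout(sublayer(layer_norm(x)), p=p, training=training)
```
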
- - Args: - args (argparse.Namespace): parsed command-line arguments - """ - - def __init__(self, args): - super().__init__() - self.embed_dim = args.encoder_embed_dim - self.self_attn = MultiheadAttention( - self.embed_dim, - args.encoder_attention_heads, - dropout=args.attention_dropout, - self_attention=True, - ) - self.self_attn_layer_norm = LayerNorm(self.embed_dim) - self.dropout = args.dropout - self.activation_fn = utils.get_activation_fn( - activation=getattr(args, "activation_fn", "relu") - ) - self.activation_dropout = getattr(args, "activation_dropout", 0) - if self.activation_dropout == 0: - # for backwards compatibility with models that use args.relu_dropout - self.activation_dropout = getattr(args, "relu_dropout", 0) - self.normalize_before = args.encoder_normalize_before - self.fc1 = Linear(self.embed_dim, args.encoder_ffn_embed_dim) - self.fc2 = Linear(args.encoder_ffn_embed_dim, self.embed_dim) - self.final_layer_norm = LayerNorm(self.embed_dim) - - def upgrade_state_dict_named(self, state_dict, name): - """ - Rename layer norm states from `...layer_norms.0.weight` to - `...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to - `...final_layer_norm.weight` - """ - layer_norm_map = {"0": "self_attn_layer_norm", "1": "final_layer_norm"} - for old, new in layer_norm_map.items(): - for m in ("weight", "bias"): - k = "{}.layer_norms.{}.{}".format(name, old, m) - if k in state_dict: - state_dict["{}.{}.{}".format(name, new, m)] = state_dict[k] - del state_dict[k] - - def forward(self, input): - """ - Args: - input (Tuple): - input[0] (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - input[1] (ByteTensor/FloatTensor): encoder padding mask - - binary ByteTensor of shape `(batch, src_len)` where padding elements - are indicated by ``1``. - input[2] (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing) - Returns: - output (Tuple): - output[0] (Tensor): encoded output of shape `(batch, src_len, embed_dim)` - output[1] (ByteTensor/FloatTensor): encoder padding mask - output[2] (LongTensor): previous decoder outputs - """ - x = input[0] - encoder_padding_mask = input[1] - prev_output_tokens = input[2] - residual = x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, before=True) - x, _ = self.self_attn( - query=x, key=x, value=x, key_padding_mask=encoder_padding_mask - ) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, after=True) - - residual = x - x = self.maybe_layer_norm(self.final_layer_norm, x, before=True) - x = self.activation_fn(self.fc1(x)) - x = F.dropout(x, p=self.activation_dropout, training=self.training) - x = self.fc2(x) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.final_layer_norm, x, after=True) - return (x, encoder_padding_mask, prev_output_tokens) - - def maybe_layer_norm(self, layer_norm, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return layer_norm(x) - else: - return x - - -class TransformerDecoderLayer(nn.Module): - """Decoder layer block. - - In the original paper each operation (multi-head attention, encoder - attention or FFN) is postprocessed with: `dropout -> add residual -> - layernorm`. In the tensor2tensor code they suggest that learning is more - robust when preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. 
We default to the approach in the paper, but the
-    tensor2tensor approach can be enabled by setting
-    *args.decoder_normalize_before* to ``True``.
-
-    Args:
-        args (argparse.Namespace): parsed command-line arguments
-        no_encoder_attn (bool, optional): whether to attend to encoder outputs
-            (default: False).
-    """
-
-    def __init__(
-        self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False
-    ):
-        super().__init__()
-        self.embed_dim = args.decoder_embed_dim
-        self.self_attn = MultiheadAttention(
-            embed_dim=self.embed_dim,
-            num_heads=args.decoder_attention_heads,
-            dropout=args.attention_dropout,
-            add_bias_kv=add_bias_kv,
-            add_zero_attn=add_zero_attn,
-            self_attention=True,
-        )
-        self.dropout = args.dropout
-        self.activation_fn = utils.get_activation_fn(
-            activation=getattr(args, "activation_fn", "relu")
-        )
-        self.activation_dropout = getattr(args, "activation_dropout", 0)
-        if self.activation_dropout == 0:
-            # for backwards compatibility with models that use args.relu_dropout
-            self.activation_dropout = getattr(args, "relu_dropout", 0)
-        self.normalize_before = args.decoder_normalize_before
-
-        # use layerNorm rather than FusedLayerNorm for exporting.
-        # char_inputs can be used to determine this.
-        # TODO remove this once we update apex with the fix
-        export = getattr(args, "char_inputs", False)
-        self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=export)
-
-        if no_encoder_attn:
-            self.encoder_attn = None
-            self.encoder_attn_layer_norm = None
-        else:
-            self.encoder_attn = MultiheadAttention(
-                self.embed_dim,
-                args.decoder_attention_heads,
-                kdim=getattr(args, "encoder_embed_dim", None),
-                vdim=getattr(args, "encoder_embed_dim", None),
-                dropout=args.attention_dropout,
-                encoder_decoder_attention=True,
-            )
-            self.encoder_attn_layer_norm = LayerNorm(self.embed_dim, export=export)
-
-        self.fc1 = Linear(self.embed_dim, args.decoder_ffn_embed_dim)
-        self.fc2 = Linear(args.decoder_ffn_embed_dim, self.embed_dim)
-
-        self.final_layer_norm = LayerNorm(self.embed_dim, export=export)
-        self.need_attn = True
-
-        self.onnx_trace = False
-
-    def prepare_for_onnx_export_(self):
-        self.onnx_trace = True
-
-    def forward(self, input):
-        """
-        Args:
-            input (Tuple):
-                input[0] (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)`
-                input[1] (Tensor): encoder output of shape `(batch, src_len, embed_dim)`
-                input[2] (ByteTensor/FloatTensor): encoder padding mask -
-                    binary ByteTensor of shape `(batch, src_len)` where padding elements
-                    are indicated by ``1``.
- Returns: - output (Tuple): - output[0] (Tensor): encoded output of shape `(batch, src_len, embed_dim)` - output[1] (ByteTensor/FloatTensor): encoder padding mask - output[2] (LongTensor): previous decoder outputs - """ - # Note: incremental state is not yet supported - mt_task = False - if isinstance(input, tuple): - x = input[0] - encoder_out = input[1] - encoder_padding_mask = input[2] - incremental_state = None - mt_task = True - else: - x = input - encoder_out = None - encoder_padding_mask = None - incremental_state = None - - if incremental_state is None: - self_attn_mask = self.buffered_future_mask(x) - else: - self_attn_mask = None - - # TODO: add back prev_self_attn_state, prev_attn_state, - # self_attn_padding_mask - prev_self_attn_state = None - prev_attn_state = None - self_attn_padding_mask = None - - residual = x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, before=True) - if prev_self_attn_state is not None: - if incremental_state is None: - incremental_state = {} - prev_key, prev_value = prev_self_attn_state - saved_state = {"prev_key": prev_key, "prev_value": prev_value} - self.self_attn._set_input_buffer(incremental_state, saved_state) - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - need_weights=False, - attn_mask=self_attn_mask, - ) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.self_attn_layer_norm, x, after=True) - - if self.encoder_attn is not None: - residual = x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, before=True) - if prev_attn_state is not None: - if incremental_state is None: - incremental_state = {} - prev_key, prev_value = prev_attn_state - saved_state = {"prev_key": prev_key, "prev_value": prev_value} - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=(not self.training and self.need_attn), - ) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, after=True) - - residual = x - x = self.maybe_layer_norm(self.final_layer_norm, x, before=True) - x = self.activation_fn(self.fc1(x)) - x = F.dropout(x, p=self.activation_dropout, training=self.training) - x = self.fc2(x) - x = F.dropout(x, p=self.dropout, training=self.training) - x = residual + x - x = self.maybe_layer_norm(self.final_layer_norm, x, after=True) - - if mt_task: - return (x, encoder_out, encoder_padding_mask) - return x - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - if self._future_mask.size(0) < dim: - self._future_mask = torch.triu( - utils.fill_with_neg_inf(self._future_mask.resize_(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - def maybe_layer_norm(self, layer_norm, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return layer_norm(x) - else: - return x - - def make_generation_fast_(self, need_attn=False, **kwargs): - self.need_attn = need_attn - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = 
nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m diff --git a/kosmos-g/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/model.py b/kosmos-g/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/model.py deleted file mode 100644 index 6ab82acd6..000000000 --- a/kosmos-g/fairseq/fairseq/model_parallel/models/pipeline_parallel_transformer/model.py +++ /dev/null @@ -1,791 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.model_parallel.models.pipeline_parallel_transformer.layers import ( - Embedding, - TransformerDecoderEmbedding, - TransformerDecoderLayer, - TransformerDecoderOutputLayer, - TransformerEncoderEmbedding, - TransformerEncoderLayer, - TransformerEncoderLayerNorm, -) -from fairseq.models import ( - BaseFairseqModel, - FairseqDecoder, - FairseqEncoder, - register_model, - register_model_architecture, -) -from fairseq.models.fairseq_encoder import EncoderOut -from fairseq.models.transformer import ( - base_architecture, - transformer_iwslt_de_en, - transformer_wmt_en_de_big, -) -from fairseq.modules import SinusoidalPositionalEmbedding -import math -import numpy as np - - -logger = logging.getLogger(__name__) - - -DEFAULT_MAX_SOURCE_POSITIONS = 1024 -DEFAULT_MAX_TARGET_POSITIONS = 1024 -TORCH_PIPE = False -RPC_INIT = False - - -def import_pipe(): - global TORCH_PIPE - global RPC_INIT - try: - from torch.distributed.pipeline.sync import Pipe # noqa - - global Pipe - from torch.distributed.pipeline.sync.utils import partition_model - - global partition_model - from torch.distributed import rpc - import tempfile - - TORCH_PIPE = True - # Initialize single process RPC agent since TORCH_PIPE requires - # RRef. RRef depends on RPC being initialized and as a result we initialize - # RPC with a single node. 
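# (Annotation, not in the original file: `Pipe` wraps its output in an RRef,
#  and RRefs require an initialized RPC framework even in this single-process
#  setting; that is also why the pipe output is unwrapped with
#  `.local_value()` later in this file. The `file://` init_method below gives
#  the lone worker a rendezvous point without any network setup.)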
-        tmpfile = tempfile.NamedTemporaryFile()
-        if not RPC_INIT:
-            rpc.init_rpc(
-                name="worker",
-                rank=0,
-                world_size=1,
-                rpc_backend_options=rpc.TensorPipeRpcBackendOptions(
-                    init_method="file://{}".format(tmpfile.name),
-                ),
-            )
-            RPC_INIT = True
-        logger.info("Using torch pipe")
-    except ImportError:
-        try:
-            from fairscale.nn import Pipe  # noqa
-
-            logger.info("Using fairscale pipe")
-        except ImportError:
-            raise ImportError("Please install fairscale with: pip install fairscale")
-
-
-@register_model("pipeline_parallel_transformer")
-class PipelineParallelTransformerModel(BaseFairseqModel):
-    def __init__(self, encoder, decoder, balance, devices, chunks, checkpoint):
-        import_pipe()
-        super().__init__()
-        assert isinstance(encoder, FairseqEncoder)
-        assert isinstance(decoder, FairseqDecoder)
-        encoder_module_list = (
-            [encoder.embedding_layer]
-            + list(encoder.encoder_layers)
-            + [encoder.final_layer_norm]
-        )
-        self.num_encoder_modules = len(encoder_module_list)
-        decoder_module_list = (
-            [decoder.embedding_layer]
-            + list(decoder.decoder_layers)
-            + [decoder.decoder_output_layer]
-        )
-        self.num_decoder_modules = len(decoder_module_list)
-        module_list = encoder_module_list + decoder_module_list
-        self.devices = devices
-        if TORCH_PIPE:
-            self.model = Pipe(
-                partition_model(nn.Sequential(*module_list), balance, devices),
-                chunks=chunks,
-                checkpoint=checkpoint,
-            )
-        else:
-            self.model = Pipe(
-                nn.Sequential(*module_list),
-                balance=balance,
-                devices=devices,
-                chunks=chunks,
-                checkpoint=checkpoint,
-            )
-        self.encoder_max_positions = self.max_positions_helper(
-            encoder.embedding_layer, "max_source_positions"
-        )
-        self.decoder_max_positions = self.max_positions_helper(
-            decoder.embedding_layer, "max_target_positions"
-        )
-        self.adaptive_softmax = getattr(decoder, "adaptive_softmax", None)
-        # Note: To be populated during inference
-        self.encoder = None
-        self.decoder = None
-
-    def forward(self, src_tokens, src_lengths, prev_output_tokens):
-        if self.training:
-            input_lst = [src_tokens, src_lengths, prev_output_tokens]
-            input = tuple(i.to(self.devices[0], non_blocking=True) for i in input_lst)
-            if TORCH_PIPE:
-                return self.model(input).local_value()
-            else:
-                return self.model(input)
-        else:
-            assert self.encoder is not None and self.decoder is not None, (
-                "encoder and decoder need to be initialized by "
-                + "calling the `prepare_for_inference_()` method"
-            )
-            # run the standalone encoder/decoder copies built by
-            # `prepare_for_inference_()`
-            encoder_output_tuple = self.encoder(src_tokens, src_lengths)
-            return self.decoder(prev_output_tokens, encoder_output_tuple)
-
-    def prepare_for_inference_(self, cfg):
-        if self.encoder is not None and self.decoder is not None:
-            logger.info("Encoder and Decoder already initialized")
-            return
-        encoder_module_list = []
-        decoder_module_list = []
-        module_count = 0
-        for partition in self.model.partitions:
-            for module in partition:
-                if module_count < self.num_encoder_modules:
-                    encoder_module_list.append(module)
-                else:
-                    decoder_module_list.append(module)
-                module_count += 1
-        self.model = None
-        self.encoder = TransformerEncoder(
-            cfg.distributed_training, None, None, encoder_module_list
-        )
-        self.decoder = TransformerDecoder(
-            cfg.distributed_training,
-            None,
-            None,
-            decoder_module_list=decoder_module_list,
-        )
-
-    @staticmethod
-    def add_args(parser):
-        """Add model-specific arguments to the parser."""
-        # fmt: off
-        parser.add_argument('--activation-fn',
-                            choices=utils.get_available_activation_fns(),
-                            help='activation function to use')
-        parser.add_argument('--dropout', type=float, metavar='D',
-                            help='dropout probability')
-
parser.add_argument('--attention-dropout', type=float, metavar='D', - help='dropout probability for attention weights') - parser.add_argument('--activation-dropout', '--relu-dropout', type=float, metavar='D', - help='dropout probability after activation in FFN.') - parser.add_argument('--encoder-embed-path', type=str, metavar='STR', - help='path to pre-trained encoder embedding') - parser.add_argument('--encoder-embed-dim', type=int, metavar='N', - help='encoder embedding dimension') - parser.add_argument('--encoder-ffn-embed-dim', type=int, metavar='N', - help='encoder embedding dimension for FFN') - parser.add_argument('--encoder-layers', type=int, metavar='N', - help='num encoder layers') - parser.add_argument('--encoder-attention-heads', type=int, metavar='N', - help='num encoder attention heads') - parser.add_argument('--encoder-normalize-before', action='store_true', - help='apply layernorm before each encoder block') - parser.add_argument('--encoder-learned-pos', action='store_true', - help='use learned positional embeddings in the encoder') - parser.add_argument('--decoder-embed-path', type=str, metavar='STR', - help='path to pre-trained decoder embedding') - parser.add_argument('--decoder-embed-dim', type=int, metavar='N', - help='decoder embedding dimension') - parser.add_argument('--decoder-ffn-embed-dim', type=int, metavar='N', - help='decoder embedding dimension for FFN') - parser.add_argument('--decoder-layers', type=int, metavar='N', - help='num decoder layers') - parser.add_argument('--decoder-attention-heads', type=int, metavar='N', - help='num decoder attention heads') - parser.add_argument('--decoder-learned-pos', action='store_true', - help='use learned positional embeddings in the decoder') - parser.add_argument('--decoder-normalize-before', action='store_true', - help='apply layernorm before each decoder block') - parser.add_argument('--share-decoder-input-output-embed', action='store_true', - help='share decoder input and output embeddings') - parser.add_argument('--share-all-embeddings', action='store_true', - help='share encoder, decoder and output embeddings' - ' (requires shared dictionary and embed dim)') - parser.add_argument('--no-token-positional-embeddings', default=False, action='store_true', - help='if set, disables positional embeddings (outside self attention)') - parser.add_argument('--adaptive-softmax-cutoff', metavar='EXPR', - help='comma separated list of adaptive softmax cutoff points. 
' - 'Must be used with adaptive_loss criterion'), - parser.add_argument('--adaptive-softmax-dropout', type=float, metavar='D', - help='sets adaptive softmax dropout for the tail projections') - parser.add_argument('--num-embedding-chunks', type=int, metavar='N', default=1, - help='Number of embedding layer chunks (enables more even distribution' - 'of optimizer states across data parallel nodes' - 'when using optimizer state sharding and' - 'a big embedding vocabulary)') - # fmt: on - - @classmethod - def build_model_base(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - if not hasattr(args, "max_source_positions"): - args.max_source_positions = DEFAULT_MAX_SOURCE_POSITIONS - if not hasattr(args, "max_target_positions"): - args.max_target_positions = DEFAULT_MAX_TARGET_POSITIONS - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - def build_embedding(dictionary, embed_dim, path=None, num_embed_chunks=1): - assert embed_dim % num_embed_chunks == 0, ( - f"Number of embedding chunks = {num_embed_chunks} should be " - + f"divisible by the embedding dimension = {embed_dim}" - ) - assert path is None or num_embed_chunks == 1, ( - "Loading embedding from a path with number of embedding chunks > 1" - + " is not yet supported" - ) - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - # if provided, load from preloaded dictionaries - if path: - emb = Embedding(num_embeddings, embed_dim, padding_idx) - embed_dict = utils.parse_embedding(path) - utils.load_embedding(embed_dict, dictionary, emb) - else: - embed_chunk_dim = embed_dim // num_embed_chunks - emb = nn.ModuleList() - for i in range(num_embed_chunks): - emb.append(Embedding(num_embeddings, embed_chunk_dim, padding_idx)) - return emb - - num_embed_chunks = args.num_embedding_chunks - if args.share_all_embeddings: - if src_dict != tgt_dict: - raise ValueError("--share-all-embeddings requires a joined dictionary") - if args.encoder_embed_dim != args.decoder_embed_dim: - raise ValueError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if args.decoder_embed_path and ( - args.decoder_embed_path != args.encoder_embed_path - ): - raise ValueError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - encoder_embed_tokens = build_embedding( - src_dict, - args.encoder_embed_dim, - args.encoder_embed_path, - num_embed_chunks, - ) - decoder_embed_tokens = encoder_embed_tokens - args.share_decoder_input_output_embed = True - else: - assert args.share_decoder_input_output_embed or num_embed_chunks == 1, ( - "Not sharing decoder I/O embeddings is not yet supported with number of " - + "embedding chunks > 1" - ) - encoder_embed_tokens = build_embedding( - src_dict, - args.encoder_embed_dim, - args.encoder_embed_path, - num_embed_chunks, - ) - decoder_embed_tokens = build_embedding( - tgt_dict, - args.decoder_embed_dim, - args.decoder_embed_path, - num_embed_chunks, - ) - - encoder = cls.build_encoder(args, src_dict, encoder_embed_tokens) - decoder = cls.build_decoder(args, tgt_dict, decoder_embed_tokens) - return (encoder, decoder) - - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - return TransformerEncoder(args, src_dict, embed_tokens) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - return TransformerDecoder(args, tgt_dict, embed_tokens) - - @classmethod - def build_model(cls, args, task): - encoder, decoder = 
cls.build_model_base(args, task) - return PipelineParallelTransformerModel( - encoder=encoder, - decoder=decoder, - balance=utils.eval_str_list(args.pipeline_balance, type=int), - devices=utils.eval_str_list(args.pipeline_devices, type=int), - chunks=args.pipeline_chunks, - checkpoint=args.pipeline_checkpoint, - ) - - def output_layer(self, features, **kwargs): - """Project features to the default output size (typically vocabulary size).""" - return self.decoder.output_layer(features, **kwargs) - - def max_positions(self): - """Maximum length supported by the model.""" - return (self.encoder_max_positions, self.decoder_max_positions) - - def max_positions_helper( - self, embedding_layer, max_positions_field="max_source_positions" - ): - """Maximum input length supported by the encoder or decoder.""" - if embedding_layer.embed_positions is None: - return getattr(embedding_layer, max_positions_field) - return min( - getattr(embedding_layer, max_positions_field), - embedding_layer.embed_positions.max_positions, - ) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - """Get normalized probabilities (or log probs) from a net's output.""" - - if hasattr(self, "adaptive_softmax") and self.adaptive_softmax is not None: - if sample is not None: - assert "target" in sample - target = sample["target"] - else: - target = None - out = self.adaptive_softmax.get_log_prob(net_output, target=target) - return out.exp_() if not log_probs else out - - # A Pipe() module returns a tuple of tensors as the output. - # In this case, the tuple has one element - the output tensor of logits - logits = net_output if isinstance(net_output, torch.Tensor) else net_output[0] - if log_probs: - return utils.log_softmax(logits, dim=-1, onnx_trace=False) - else: - return utils.softmax(logits, dim=-1, onnx_trace=False) - - def max_decoder_positions(self): - """Maximum length supported by the decoder.""" - return self.decoder_max_positions - - def load_state_dict(self, state_dict, strict=True, model_cfg=None): - """Copies parameters and buffers from *state_dict* into this module and - its descendants. - - Overrides the method in :class:`nn.Module`. Compared with that method - this additionally "upgrades" *state_dicts* from old checkpoints. 
- """ - self.upgrade_state_dict(state_dict) - is_regular_transformer = not any("model.partitions" in k for k in state_dict) - if is_regular_transformer: - state_dict = self.convert_to_pipeline_parallel_state_dict(state_dict) - return super().load_state_dict(state_dict, strict) - - def convert_to_pipeline_parallel_state_dict(self, state_dict): - new_state_dict = self.state_dict() - encoder_layer_idx = 0 - decoder_layer_idx = 0 - encoder_key_suffixes = [ - "self_attn.k_proj.weight", - "self_attn.k_proj.bias", - "self_attn.v_proj.weight", - "self_attn.v_proj.bias", - "self_attn.q_proj.weight", - "self_attn.q_proj.bias", - "self_attn.out_proj.weight", - "self_attn.out_proj.bias", - "self_attn_layer_norm.weight", - "self_attn_layer_norm.bias", - "fc1.weight", - "fc1.bias", - "fc2.weight", - "fc2.bias", - "final_layer_norm.weight", - "final_layer_norm.bias", - ] - decoder_key_suffixes = [ - "self_attn.k_proj.weight", - "self_attn.k_proj.bias", - "self_attn.v_proj.weight", - "self_attn.v_proj.bias", - "self_attn.q_proj.weight", - "self_attn.q_proj.bias", - "self_attn.out_proj.weight", - "self_attn.out_proj.bias", - "self_attn_layer_norm.weight", - "self_attn_layer_norm.bias", - "encoder_attn.k_proj.weight", - "encoder_attn.k_proj.bias", - "encoder_attn.v_proj.weight", - "encoder_attn.v_proj.bias", - "encoder_attn.q_proj.weight", - "encoder_attn.q_proj.bias", - "encoder_attn.out_proj.weight", - "encoder_attn.out_proj.bias", - "encoder_attn_layer_norm.weight", - "encoder_attn_layer_norm.bias", - "fc1.weight", - "fc1.bias", - "fc2.weight", - "fc2.bias", - "final_layer_norm.weight", - "final_layer_norm.bias", - ] - for pid, partition in enumerate(self.model.partitions): - logger.info(f"Begin Partition {pid}") - for mid, module in enumerate(partition): - # fmt: off - if isinstance(module, TransformerEncoderEmbedding): - new_state_dict[f'model.partitions.{pid}.{mid}.embed_tokens.weight'] = state_dict['encoder.embed_tokens.weight'] - new_state_dict[f'model.partitions.{pid}.{mid}.embed_positions._float_tensor'] = state_dict['encoder.embed_positions._float_tensor'] - if isinstance(module, TransformerEncoderLayer): - for suffix in encoder_key_suffixes: - new_state_dict[f'model.partitions.{pid}.{mid}.{suffix}'] = state_dict[f'encoder.layers.{encoder_layer_idx}.{suffix}'] - encoder_layer_idx += 1 - if isinstance(module, TransformerDecoderLayer): - for suffix in decoder_key_suffixes: - new_state_dict[f'model.partitions.{pid}.{mid}.{suffix}'] = state_dict[f'decoder.layers.{decoder_layer_idx}.{suffix}'] - decoder_layer_idx += 1 - if isinstance(module, TransformerEncoderLayerNorm): - if 'encoder.layer_norm.weight' in state_dict: - new_state_dict[f'model.partitions.{pid}.{mid}.layer_norm.weight'] = state_dict['encoder.layer_norm.weight'] - new_state_dict[f'model.partitions.{pid}.{mid}.layer_norm.bias'] = state_dict['encoder.layer_norm.bias'] - if isinstance(module, TransformerDecoderEmbedding): - new_state_dict[f'model.partitions.{pid}.{mid}.embed_tokens.weight'] = state_dict['decoder.embed_tokens.weight'] - new_state_dict[f'model.partitions.{pid}.{mid}.embed_positions._float_tensor'] = state_dict['decoder.embed_positions._float_tensor'] - if isinstance(module, TransformerDecoderOutputLayer): - new_state_dict[f'model.partitions.{pid}.{mid}.output_projection.weight'] = state_dict['decoder.output_projection.weight'] - # fmt: on - return new_state_dict - - -class TransformerEncoder(FairseqEncoder): - """ - Transformer encoder consisting of *args.encoder_layers* layers. 
Each layer - is a :class:`TransformerEncoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): encoding dictionary - embed_tokens (torch.nn.Embedding): input embedding - """ - - def __init__(self, args, dictionary, embed_tokens, encoder_module_list=None): - super().__init__(dictionary) - self.register_buffer("version", torch.Tensor([3])) - import_pipe() - self.use_pipeline = encoder_module_list is not None - if not self.use_pipeline: - self.embedding_layer = TransformerEncoderEmbedding(args, embed_tokens) - self.encoder_layers = nn.Sequential( - *[TransformerEncoderLayer(args) for i in range(args.encoder_layers)] - ) - if isinstance(embed_tokens, nn.ModuleList): - emb_dim = sum(e.embedding_dim for e in embed_tokens) - else: - emb_dim = embed_tokens.embedding_dim - self.final_layer_norm = TransformerEncoderLayerNorm(args, emb_dim) - else: - encoder_balance = utils.eval_str_list( - args.pipeline_encoder_balance, type=int - ) - encoder_devices = utils.eval_str_list( - args.pipeline_encoder_devices, type=int - ) - assert sum(encoder_balance) == len(encoder_module_list), ( - f"Sum of encoder_balance={encoder_balance} is not equal " - + f"to num_encoder_modules={len(encoder_module_list)}" - ) - if TORCH_PIPE: - self.model = Pipe( - module=partition_model( - nn.Sequential(*encoder_module_list), - encoder_balance, - encoder_devices, - ), - chunks=args.pipeline_chunks, - checkpoint=args.pipeline_checkpoint, - ) - else: - self.model = Pipe( - module=nn.Sequential(*encoder_module_list), - balance=encoder_balance, - devices=encoder_devices, - chunks=args.pipeline_chunks, - checkpoint=args.pipeline_checkpoint, - ) - - def forward(self, src_tokens, src_lengths): - """ - Args: - input_tuple( - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - src_lengths (torch.LongTensor): lengths of each source sentence of - shape `(batch)` - ) - - Returns: - output_tuple( - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - - prev_output_tokens - - **encoder_states** (List[Tensor]): all intermediate - hidden states of shape `(src_len, batch, embed_dim)`. - Only populated if *return_all_hiddens* is True. - ) - """ - dummy_prev_output_tokens = torch.zeros( - 1, dtype=src_tokens.dtype, device=src_tokens.device - ) - input_tuple = (src_tokens, src_lengths, dummy_prev_output_tokens) - if self.use_pipeline: - input_tuple = tuple(i.to(self.model.devices[0]) for i in input_tuple) - if TORCH_PIPE: - encoder_out = self.model(input_tuple).local_value() - else: - encoder_out = self.model(input_tuple) - else: - encoder_embed_output_tuple = self.embedding_layer(input_tuple) - encoder_layers_output = self.encoder_layers(encoder_embed_output_tuple) - encoder_out = self.final_layer_norm(encoder_layers_output) - # first element is the encoder output - # second element is the encoder padding mask - # the remaining elements of EncoderOut are not computed by - # the PipelineParallelTransformer - return EncoderOut(encoder_out[0], encoder_out[1], None, None, None, None) - - def reorder_encoder_out(self, encoder_out, new_order): - """ - Reorder encoder output according to *new_order*. 
- - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - *encoder_out* rearranged according to *new_order* - """ - if encoder_out.encoder_out is not None: - encoder_out = encoder_out._replace( - encoder_out=encoder_out.encoder_out.index_select(1, new_order) - ) - if encoder_out.encoder_padding_mask is not None: - encoder_out = encoder_out._replace( - encoder_padding_mask=encoder_out.encoder_padding_mask.index_select( - 0, new_order - ) - ) - if encoder_out.encoder_embedding is not None: - encoder_out = encoder_out._replace( - encoder_embedding=encoder_out.encoder_embedding.index_select( - 0, new_order - ) - ) - if encoder_out.encoder_states is not None: - for idx, state in enumerate(encoder_out.encoder_states): - encoder_out.encoder_states[idx] = state.index_select(1, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - if self.embedding_layer.embed_positions is None: - return self.embedding_layer.max_source_positions - return min( - self.embedding_layer.max_source_positions, - self.embedding_layer.embed_positions.max_positions, - ) - - -class TransformerDecoder(FairseqDecoder): - """ - Transformer decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`TransformerDecoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). - """ - - def __init__( - self, - args, - dictionary, - embed_tokens, - no_encoder_attn=False, - decoder_module_list=None, - ): - super().__init__(dictionary) - self.register_buffer("version", torch.Tensor([3])) - import_pipe() - self.use_pipeline = decoder_module_list is not None - if not self.use_pipeline: - self.embedding_layer = TransformerDecoderEmbedding(args, embed_tokens) - self.decoder_layers = nn.Sequential( - *[ - TransformerDecoderLayer(args, no_encoder_attn) - for _ in range(args.decoder_layers) - ] - ) - self.decoder_output_layer = TransformerDecoderOutputLayer( - args, embed_tokens, dictionary - ) - else: - decoder_balance = utils.eval_str_list( - args.pipeline_decoder_balance, type=int - ) - decoder_devices = utils.eval_str_list( - args.pipeline_decoder_devices, type=int - ) - assert sum(decoder_balance) == len(decoder_module_list), ( - f"Sum of decoder_balance={decoder_balance} is not equal " - + f"to num_decoder_modules={len(decoder_module_list)}" - ) - if TORCH_PIPE: - self.model = Pipe( - module=partition_model( - nn.Sequential(*decoder_module_list), - decoder_balance, - decoder_devices, - ), - chunks=args.pipeline_chunks, - checkpoint=args.pipeline_checkpoint, - ) - else: - self.model = Pipe( - module=nn.Sequential(*decoder_module_list), - balance=decoder_balance, - devices=decoder_devices, - chunks=args.pipeline_chunks, - checkpoint=args.pipeline_checkpoint, - ) - - def forward( - self, - prev_output_tokens, - encoder_out=None, - ): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (optional): output from the encoder, used for - encoder-side attention - incremental_state (dict): dictionary used for storing state during - :ref:`Incremental decoding` - features_only (bool, optional): only return features without - applying output layer (default: False). 
- - Returns: - tuple: - - the decoder's output of shape `(batch, tgt_len, vocab)` - - a dictionary with any model-specific outputs - """ - input_tuple = ( - encoder_out.encoder_out, - encoder_out.encoder_padding_mask, - prev_output_tokens, - ) - if self.use_pipeline: - input_tuple = tuple(i.to(self.model.devices[0]) for i in input_tuple) - if TORCH_PIPE: - return (self.model(input_tuple).local_value(),) - else: - return (self.model(input_tuple),) - else: - embed_layer_output = self.embedding_layer(input_tuple) - state = self.decoder_layers(embed_layer_output) - return (self.decoder_output_layer(state),) - - def output_layer(self, features, **kwargs): - """Project features to the vocabulary size.""" - if self.adaptive_softmax is None: - # project back to size of vocabulary - if self.share_input_output_embed: - return F.linear(features, self.embed_tokens.weight) - else: - return F.linear(features, self.embed_out) - else: - return features - - def max_positions(self): - """Maximum output length supported by the decoder.""" - if self.embedding_layer.embed_positions is None: - return self.embedding_layer.max_target_positions - return min( - self.embedding_layer.max_target_positions, - self.embedding_layer.embed_positions.max_positions, - ) - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - or self._future_mask.size(0) < dim - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - if isinstance(self.embed_positions, SinusoidalPositionalEmbedding): - weights_key = "{}.embed_positions.weights".format(name) - if weights_key in state_dict: - del state_dict[weights_key] - state_dict[ - "{}.embed_positions._float_tensor".format(name) - ] = torch.FloatTensor(1) - - for i in range(len(self.layers)): - # update layer norms - layer_norm_map = { - "0": "self_attn_layer_norm", - "1": "encoder_attn_layer_norm", - "2": "final_layer_norm", - } - for old, new in layer_norm_map.items(): - for m in ("weight", "bias"): - k = "{}.layers.{}.layer_norms.{}.{}".format(name, i, old, m) - if k in state_dict: - state_dict[ - "{}.layers.{}.{}.{}".format(name, i, new, m) - ] = state_dict[k] - del state_dict[k] - - version_key = "{}.version".format(name) - if utils.item(state_dict.get(version_key, torch.Tensor([1]))[0]) <= 2: - # earlier checkpoints did not normalize after the stack of layers - self.layer_norm = None - self.normalize = False - state_dict[version_key] = torch.Tensor([1]) - - return state_dict - - -@register_model_architecture( - "pipeline_parallel_transformer", "transformer_iwslt_de_en_pipeline_parallel" -) -def transformer_iwslt_de_en_dist(args): - transformer_iwslt_de_en(args) - - -@register_model_architecture( - "pipeline_parallel_transformer", "transformer_wmt_en_de_big_pipeline_parallel" -) -def transformer_wmt_en_de_big_dist(args): - transformer_wmt_en_de_big(args) diff --git a/kosmos-g/fairseq/fairseq/model_parallel/models/roberta/__init__.py b/kosmos-g/fairseq/fairseq/model_parallel/models/roberta/__init__.py deleted file mode 100644 index 117827c3e..000000000 --- a/kosmos-g/fairseq/fairseq/model_parallel/models/roberta/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .model import * # noqa diff --git a/kosmos-g/fairseq/fairseq/model_parallel/models/roberta/model.py b/kosmos-g/fairseq/fairseq/model_parallel/models/roberta/model.py deleted file mode 100644 index 77a80ef72..000000000 --- a/kosmos-g/fairseq/fairseq/model_parallel/models/roberta/model.py +++ /dev/null @@ -1,225 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -RoBERTa: A Robustly Optimized BERT Pretraining Approach. -""" - -import logging - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.model_parallel.models.transformer import ModelParallelTransformerEncoder -from fairseq.models import register_model, register_model_architecture -from fairseq.models.roberta import ( - roberta_base_architecture, - roberta_prenorm_architecture, - RobertaEncoder, - RobertaModel, -) -from fairseq.modules import LayerNorm - - -try: - from fairseq.model_parallel.megatron.mpu import ( - copy_to_model_parallel_region, - gather_from_model_parallel_region, - ColumnParallelLinear, - VocabParallelEmbedding, - ) - - has_megatron_submodule = True -except (ImportError, ModuleNotFoundError): - has_megatron_submodule = False - -logger = logging.getLogger(__name__) - - -@register_model("model_parallel_roberta") -class ModelParallelRobertaModel(RobertaModel): - def __init__(self, args, encoder): - super().__init__(args, encoder) - - self.classification_heads = nn.ModuleDict() - - @staticmethod - def add_args(parser): - RobertaModel.add_args(parser) - parser.add_argument( - "--no-final-layer-norm", - action="store_true", - help=( - "don't add final layernorm (only applicable when " - "--encoder-normalize-before=True" - ), - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present - base_architecture(args) - - task.source_dictionary.pad_to_multiple_(args.model_parallel_size * 8) - task.target_dictionary.pad_to_multiple_(args.model_parallel_size * 8) - - if not hasattr(args, "max_positions"): - args.max_positions = args.tokens_per_sample - - if getattr(args, "untie_weights_roberta", False): - raise NotImplementedError( - "--untie-weights-roberta is not supported in model parallel mode" - ) - - encoder = ModelParallelRobertaEncoder(args, task.source_dictionary) - return cls(args, encoder) - - def forward( - self, - src_tokens, - features_only=False, - return_all_hiddens=False, - classification_head_name=None, - **kwargs - ): - if classification_head_name is not None: - features_only = True - - x, extra = self.encoder(src_tokens, features_only, return_all_hiddens, **kwargs) - - if classification_head_name is not None: - x = self.classification_heads[classification_head_name](x) - return x, extra - - def register_classification_head( - self, name, num_classes=None, inner_dim=None, **kwargs - ): - """Register a classification head.""" - if name in self.classification_heads: - prev_num_classes = self.classification_heads[name].out_proj.out_features - prev_inner_dim = self.classification_heads[name].dense.out_features - if num_classes != prev_num_classes or inner_dim != prev_inner_dim: - logger.warning( - 're-registering head "{}" with num_classes {} (prev: {}) ' - "and inner_dim {} (prev: {})".format( - name, 
num_classes, prev_num_classes, inner_dim, prev_inner_dim - ) - ) - self.classification_heads[name] = ModelParallelRobertaClassificationHead( - self.args.encoder_embed_dim, - inner_dim or self.args.encoder_embed_dim, - num_classes, - self.args.pooler_activation_fn, - self.args.pooler_dropout, - ) - - -class ModelParallelRobertaLMHead(nn.Module): - """Head for masked language modeling.""" - - def __init__(self, embed_dim, output_dim, activation_fn, weight=None): - super().__init__() - self.dense = ColumnParallelLinear(embed_dim, embed_dim, gather_output=True) - self.activation_fn = utils.get_activation_fn(activation_fn) - self.layer_norm = LayerNorm(embed_dim) - - if weight is None: - weight = nn.Linear(embed_dim, output_dim, bias=False).weight - self.weight = weight - self.bias = nn.Parameter(torch.zeros(output_dim)) - - def forward(self, features, masked_tokens=None, **kwargs): - # Only project the unmasked tokens while training, - # saves both memory and computation - if masked_tokens is not None: - features = features[masked_tokens, :] - - x = self.dense(features) - x = self.activation_fn(x) - x = self.layer_norm(x) - - x = copy_to_model_parallel_region(x) - # project back to size of vocabulary with bias - x = F.linear(x, self.weight) - x = gather_from_model_parallel_region(x).contiguous() - x = x + self.bias - return x - - -class ModelParallelRobertaClassificationHead(nn.Module): - """Head for sentence-level classification tasks.""" - - def __init__( - self, input_dim, inner_dim, num_classes, activation_fn, pooler_dropout - ): - super().__init__() - self.dense = ColumnParallelLinear(input_dim, inner_dim, gather_output=True) - self.activation_fn = utils.get_activation_fn(activation_fn) - self.dropout = nn.Dropout(p=pooler_dropout) - self.out_proj = nn.Linear(inner_dim, num_classes) - - def forward(self, features, **kwargs): - x = features[:, 0, :] # take <s> token (equiv. 
to [CLS]) - x = self.dropout(x) - x = self.dense(x) - x = self.activation_fn(x) - x = self.dropout(x) - x = self.out_proj(x) - return x - - -class ModelParallelRobertaEncoder(RobertaEncoder): - """RoBERTa encoder.""" - - def __init__(self, args, dictionary): - super().__init__(args, dictionary) - assert not self.args.untie_weights_roberta - - def build_embedding(self, vocab_size, embedding_dim, padding_idx): - return VocabParallelEmbedding(vocab_size, embedding_dim, padding_idx) - - def build_encoder(self, args, dictionary, embed_tokens): - return ModelParallelTransformerEncoder(args, dictionary, embed_tokens) - - def build_lm_head(self, embed_dim, output_dim, activation_fn, weight): - return ModelParallelRobertaLMHead(embed_dim, output_dim, activation_fn, weight) - - -@register_model_architecture("model_parallel_roberta", "model_parallel_roberta") -def base_architecture(args): - args.no_final_layer_norm = getattr(args, "no_final_layer_norm", False) - # model parallel RoBERTa defaults to "Pre-LN" formulation - roberta_prenorm_architecture(args) - - -# earlier versions of model parallel RoBERTa removed the final layer norm -@register_model_architecture("model_parallel_roberta", "model_parallel_roberta_v1") -def model_parallel_roberta_v1_architecture(args): - args.no_final_layer_norm = getattr(args, "no_final_layer_norm", True) - base_architecture(args) - - -@register_model_architecture( - "model_parallel_roberta", "model_parallel_roberta_postnorm" -) -def model_parallel_roberta_postnorm_architecture(args): - # the original BERT/RoBERTa uses the "Post-LN" formulation - roberta_base_architecture(args) - - -@register_model_architecture("model_parallel_roberta", "model_parallel_roberta_base") -def model_parallel_roberta_base_architecture(args): - base_architecture(args) - - -@register_model_architecture("model_parallel_roberta", "model_parallel_roberta_large") -def model_parallel_roberta_large_architecture(args): - args.encoder_layers = getattr(args, "encoder_layers", 24) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - base_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/model_parallel/models/transformer.py b/kosmos-g/fairseq/fairseq/model_parallel/models/transformer.py deleted file mode 100644 index 6b330ef1b..000000000 --- a/kosmos-g/fairseq/fairseq/model_parallel/models/transformer.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import torch.nn as nn -from fairseq.model_parallel.modules import ( - ModelParallelTransformerDecoderLayer, - ModelParallelTransformerEncoderLayer, -) -from fairseq.models import register_model -from fairseq.models.transformer import ( - TransformerDecoder, - TransformerEncoder, - TransformerModel, -) - - -try: - from fairseq.model_parallel.megatron.mpu import ( - copy_to_model_parallel_region, - gather_from_model_parallel_region, - VocabParallelEmbedding, - ) - - has_megatron_submodule = True -except (ImportError, ModuleNotFoundError): - has_megatron_submodule = False - - -logger = logging.getLogger(__name__) - - -@register_model("model_parallel_transformer") -class ModelParallelTransformerModel(TransformerModel): - """ - Model parallel Transformer model. 
- """ - - @classmethod - def build_embedding(cls, args, dictionary, embed_dim, path=None): - if not has_megatron_submodule: - raise ImportError( - "\n\nPlease install the megatron submodule:" - "\n\n git submodule update --init " - "fairseq/model_parallel/megatron" - ) - dictionary.pad_to_multiple_(args.model_parallel_size * 8) - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - - def _vocab_init(tensor, **kwargs): - nn.init.normal_(tensor, mean=0, std=num_embeddings ** -0.5) - nn.init.constant_(tensor[1], 0) - - emb = VocabParallelEmbedding( - num_embeddings, embed_dim, padding_idx, init_method=_vocab_init - ) - # if provided, load from preloaded dictionaries - if path: - raise NotImplementedError( - "Loading of embedding from path is not supported for model parallel" - ) - return emb - - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - return ModelParallelTransformerEncoder(args, src_dict, embed_tokens) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - return ModelParallelTransformerDecoder( - args, - tgt_dict, - embed_tokens, - no_encoder_attn=getattr(args, "no_cross_attention", False), - ) - - -class ModelParallelTransformerEncoder(TransformerEncoder): - """ - Model parallel Transformer encoder consisting of *args.encoder_layers* layers. Each layer - is a :class:`ModelParallelTransformerEncoderLayer`. - """ - - def __init__(self, args, dictionary, embed_tokens): - super().__init__(args, dictionary, embed_tokens) - - if args.no_final_layer_norm: - self.layer_norm = None - - def build_encoder_layer(self, args): - return ModelParallelTransformerEncoderLayer(args) - - -class ModelParallelTransformerDecoder(TransformerDecoder): - """ - Model Parallel Transformer decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`ModelParallelTransformerDecoderLayer`. - """ - - def build_decoder_layer(self, args, no_encoder_attn=False): - return ModelParallelTransformerDecoderLayer(args, no_encoder_attn) - - def output_layer(self, features, **kwargs): - """Project features to the vocabulary size.""" - if not self.share_input_output_embed: - raise NotImplementedError( - "Model parallel training currently requires --share-decoder-input-output-embed" - ) - - features = copy_to_model_parallel_region(features) - - # project back to size of vocabulary - x = self.output_projection(features) - - if getattr(self.args, "criterion") != "vocab_parallel_cross_entropy": - x = gather_from_model_parallel_region(x).contiguous() - return x diff --git a/kosmos-g/fairseq/fairseq/model_parallel/models/transformer_lm.py b/kosmos-g/fairseq/fairseq/model_parallel/models/transformer_lm.py deleted file mode 100644 index a7ca5c9fe..000000000 --- a/kosmos-g/fairseq/fairseq/model_parallel/models/transformer_lm.py +++ /dev/null @@ -1,169 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
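The `output_layer` above follows the standard Megatron recipe: every rank holds a slice of the (tied) output embedding, computes logits for its own vocabulary shard, and the shards are gathered only when the criterion is not `vocab_parallel_cross_entropy` (which consumes the shards directly). A minimal single-process sketch, with plain tensors standing in for the `mpu` collectives and all sizes invented:

```python
import torch

# Invented sizes; upstream pads the dictionary to a multiple of
# model_parallel_size * 8 so the shards divide evenly.
world_size, embed_dim, vocab_size = 4, 16, 64
features = torch.randn(10, embed_dim)        # decoder output, T x C

weight = torch.randn(vocab_size, embed_dim)  # tied input/output embedding
shards = weight.chunk(world_size, dim=0)     # each rank keeps vocab_size // world_size rows

# copy_to_model_parallel_region: every rank sees identical features,
# then projects onto its own vocabulary shard.
partial_logits = [features @ w.t() for w in shards]

# gather_from_model_parallel_region: reassemble the full-vocabulary logits
# (skipped under vocab_parallel_cross_entropy, which trains on the shards).
logits = torch.cat(partial_logits, dim=-1)
assert torch.allclose(logits, features @ weight.t(), atol=1e-6)
```

Padding the dictionary with `pad_to_multiple_`, as `build_embedding` does above, is what guarantees the shards come out equally sized.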
- -import torch.nn as nn - -from fairseq.model_parallel.models.transformer import ModelParallelTransformerDecoder -from fairseq.models import register_model, register_model_architecture -from fairseq.models.transformer_lm import TransformerLanguageModel - -try: - from fairseq.model_parallel.megatron.mpu import VocabParallelEmbedding - - has_megatron_submodule = True -except (ImportError, ModuleNotFoundError): - has_megatron_submodule = False - - -DEFAULT_MAX_TARGET_POSITIONS = 1024 - - -@register_model("model_parallel_transformer_lm") -class ModelParallelTransformerLanguageModel(TransformerLanguageModel): - @staticmethod - def add_args(parser): - TransformerLanguageModel.add_args(parser) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - if not has_megatron_submodule: - raise ImportError( - "\n\nPlease install the megatron submodule:" - "\n\n git submodule update --init " - "fairseq/model_parallel/megatron" - ) - - # make sure all arguments are present in older models - base_lm_architecture(args) - - task.source_dictionary.pad_to_multiple_(args.model_parallel_size * 8) - task.target_dictionary.pad_to_multiple_(args.model_parallel_size * 8) - - if args.decoder_layers_to_keep: - args.decoder_layers = len(args.decoder_layers_to_keep.split(",")) - - if getattr(args, "max_target_positions", None) is None: - args.max_target_positions = getattr( - args, "tokens_per_sample", DEFAULT_MAX_TARGET_POSITIONS - ) - - if args.character_embeddings: - raise NotImplementedError( - "Character embeddings is not supported for model parallel" - ) - elif args.adaptive_input: - raise NotImplementedError( - "Adaptive input is not supported for model parallel" - ) - else: - embed_tokens = cls.build_embedding( - args, task.source_dictionary, args.decoder_input_dim - ) - - decoder = ModelParallelTransformerDecoder( - args, - task.target_dictionary, - embed_tokens, - no_encoder_attn=True, - ) - return cls(decoder) - - @classmethod - def build_embedding(cls, args, dictionary, embed_dim, path=None): - def _vocab_init(tensor, **kwargs): - nn.init.normal_(tensor, mean=0, std=embed_dim ** -0.5) - nn.init.constant_(tensor[1], 0) - - embed_tokens = VocabParallelEmbedding( - len(dictionary), embed_dim, dictionary.pad(), init_method=_vocab_init - ) - return embed_tokens - - -def base_lm_architecture(args): - # backward compatibility for older model checkpoints - if hasattr(args, "no_tie_adaptive_proj"): - # previous models defined --no-tie-adaptive-proj, so use the existence of - # that option to determine if this is an "old" model checkpoint - args.no_decoder_final_norm = True # old models always set this to True - if args.no_tie_adaptive_proj is False: - args.tie_adaptive_proj = True - if hasattr(args, "decoder_final_norm"): - args.no_decoder_final_norm = not args.decoder_final_norm - - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.relu_dropout = getattr(args, "relu_dropout", 0.0) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 2048) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads 
= getattr(args, "decoder_attention_heads", 8) - # Model training is not stable without this - args.decoder_normalize_before = True - args.no_decoder_final_norm = getattr(args, "no_decoder_final_norm", False) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.adaptive_softmax_factor = getattr(args, "adaptive_softmax_factor", 4) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.character_embeddings = getattr(args, "character_embeddings", False) - args.character_filters = getattr( - args, - "character_filters", - "[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]", - ) - args.character_embedding_dim = getattr(args, "character_embedding_dim", 4) - args.char_embedder_highway_layers = getattr(args, "char_embedder_highway_layers", 2) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.adaptive_input_factor = getattr(args, "adaptive_input_factor", 4) - args.adaptive_input_cutoff = getattr(args, "adaptive_input_cutoff", None) - args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False) - args.tie_adaptive_proj = getattr(args, "tie_adaptive_proj", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0) - args.decoder_layers_to_keep = getattr(args, "decoder_layers_to_keep", None) - args.layernorm_embedding = getattr(args, "layernorm_embedding", False) - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0.0) - args.quant_noise_pq_block_size = getattr(args, "quant_noise_pq_block_size", 8) - args.quant_noise_scalar = getattr(args, "quant_noise_scalar", 0.0) - args.add_bos_token = getattr(args, "add_bos_token", False) - - -@register_model_architecture("model_parallel_transformer_lm", "transformer_lm_megatron") -def transformer_lm_megatron(args): - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 3072) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 3072 * 4) - args.decoder_layers = getattr(args, "decoder_layers", 72) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 32) - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.activation_fn = getattr(args, "activation_fn", "gelu") - base_lm_architecture(args) - - -@register_model_architecture( - "model_parallel_transformer_lm", "transformer_lm_megatron_11b" -) -def transformer_lm_megatron_11b(args): - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 3072) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 3072 * 6) - args.decoder_layers = getattr(args, "decoder_layers", 72) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 32) - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.activation_fn = getattr(args, "activation_fn", "gelu") - base_lm_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/model_parallel/modules/__init__.py b/kosmos-g/fairseq/fairseq/model_parallel/modules/__init__.py deleted file mode 100644 index 11603217a..000000000 --- a/kosmos-g/fairseq/fairseq/model_parallel/modules/__init__.py +++ /dev/null @@ 
-1,17 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -from .multihead_attention import ModelParallelMultiheadAttention -from .transformer_layer import ( - ModelParallelTransformerEncoderLayer, - ModelParallelTransformerDecoderLayer, -) - -__all__ = [ - "ModelParallelMultiheadAttention", - "ModelParallelTransformerEncoderLayer", - "ModelParallelTransformerDecoderLayer", -] diff --git a/kosmos-g/fairseq/fairseq/model_parallel/modules/multihead_attention.py b/kosmos-g/fairseq/fairseq/model_parallel/modules/multihead_attention.py deleted file mode 100644 index 8eb9d09da..000000000 --- a/kosmos-g/fairseq/fairseq/model_parallel/modules/multihead_attention.py +++ /dev/null @@ -1,349 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Dict, Optional, Tuple - -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout -from torch import Tensor, nn - - -try: - from fairseq.model_parallel.megatron.mpu import ( - get_cuda_rng_tracker, - get_model_parallel_world_size, - ColumnParallelLinear, - RowParallelLinear, - ) - - has_megatron_submodule = True -except (ImportError, ModuleNotFoundError): - has_megatron_submodule = False - - -@with_incremental_state -class ModelParallelMultiheadAttention(nn.Module): - """Model parallel Multi-headed attention. - This performs the Multi-headed attention over multiple gpus. - - See "Megatron-LM: https://arxiv.org/pdf/1909.08053.pdf" for more details. 
- """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - self_attention=False, - encoder_decoder_attention=False, - ): - super().__init__() - if not has_megatron_submodule: - raise ImportError( - "\n\nPlease install the megatron submodule:" - "\n\n git submodule update --init " - "fairseq/model_parallel/megatron" - ) - self.embed_dim = embed_dim - self.kdim = kdim if kdim is not None else embed_dim - self.vdim = vdim if vdim is not None else embed_dim - self.qkv_same_dim = self.kdim == embed_dim and self.vdim == embed_dim - - self.model_parallel_size = get_model_parallel_world_size() - - self.num_heads_partition = num_heads // self.model_parallel_size - assert ( - self.num_heads_partition * self.model_parallel_size == num_heads - ), "Number of heads must be divisible by model parallel size" - - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.head_dim = embed_dim // num_heads - assert ( - self.head_dim * num_heads == self.embed_dim - ), "embed_dim must be divisible by num_heads" - self.scaling = self.head_dim ** -0.5 - - self.self_attention = self_attention - self.encoder_decoder_attention = encoder_decoder_attention - - assert ( - not self.self_attention or self.qkv_same_dim - ), "Self-attention requires query, key and value to be of the same size" - - self.k_proj = ColumnParallelLinear( - self.kdim, embed_dim, bias=bias, gather_output=False - ) - self.v_proj = ColumnParallelLinear( - self.vdim, embed_dim, bias=bias, gather_output=False - ) - self.q_proj = ColumnParallelLinear( - embed_dim, embed_dim, bias=bias, gather_output=False - ) - self.out_proj = RowParallelLinear( - embed_dim, embed_dim, bias=bias, input_is_parallel=True - ) - - def forward( - self, - query, - key: Optional[Tensor], - value: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - static_kv: bool = False, - attn_mask: Optional[Tensor] = None, - **unused_kwargs, - ) -> Tuple[Tensor, Optional[Tensor]]: - """Input shape: Time x Batch x Channel - - Args: - key_padding_mask (ByteTensor, optional): mask to exclude - keys that are pads, of shape `(batch, src_len)`, where - padding elements are indicated by 1s. - attn_mask (ByteTensor, optional): typically used to - implement causal attention, where the mask prevents the - attention from looking forward in time (default: None). 
- """ - tgt_len, bsz, embed_dim = query.size() - assert embed_dim == self.embed_dim - assert list(query.size()) == [tgt_len, bsz, embed_dim] - - is_tpu = query.device.type == "xla" - - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if saved_state is not None and "prev_key" in saved_state: - # previous time steps are cached - no need to recompute - # key and value if they are static - if static_kv: - assert self.encoder_decoder_attention and not self.self_attention - key = value = None - else: - saved_state = None - - if self.self_attention: - q = self.q_proj(query) - k = self.k_proj(query) - v = self.v_proj(query) - elif self.encoder_decoder_attention: - # encoder-decoder attention - q = self.q_proj(query) - if key is None: - assert value is None - k = v = None - else: - k = self.k_proj(key) - v = self.v_proj(key) - - else: - assert key is not None and value is not None - q = self.q_proj(query) - k = self.k_proj(key) - v = self.v_proj(value) - q *= self.scaling - - q = ( - q.contiguous() - .view(tgt_len, bsz * self.num_heads_partition, self.head_dim) - .transpose(0, 1) - ) - if k is not None: - k = ( - k.contiguous() - .view(-1, bsz * self.num_heads_partition, self.head_dim) - .transpose(0, 1) - ) - if v is not None: - v = ( - v.contiguous() - .view(-1, bsz * self.num_heads_partition, self.head_dim) - .transpose(0, 1) - ) - - if saved_state is not None: - # saved states are stored with shape (bsz, num_heads_partition, seq_len, head_dim) - if "prev_key" in saved_state: - _prev_key = saved_state["prev_key"] - assert _prev_key is not None - prev_key = _prev_key.view( - bsz * self.num_heads_partition, -1, self.head_dim - ) - if static_kv: - k = prev_key - else: - assert k is not None - k = torch.cat([prev_key, k], dim=1) - if "prev_value" in saved_state: - _prev_value = saved_state["prev_value"] - assert _prev_value is not None - prev_value = _prev_value.view( - bsz * self.num_heads_partition, -1, self.head_dim - ) - if static_kv: - v = prev_value - else: - assert v is not None - v = torch.cat([prev_value, v], dim=1) - prev_key_padding_mask: Optional[Tensor] = None - if "prev_key_padding_mask" in saved_state: - prev_key_padding_mask = saved_state["prev_key_padding_mask"] - assert k is not None and v is not None - key_padding_mask = ( - ModelParallelMultiheadAttention._append_prev_key_padding_mask( - key_padding_mask=key_padding_mask, - prev_key_padding_mask=prev_key_padding_mask, - batch_size=bsz, - src_len=k.size(1), - static_kv=static_kv, - ) - ) - - saved_state["prev_key"] = k.view( - bsz, self.num_heads_partition, -1, self.head_dim - ) - saved_state["prev_value"] = v.view( - bsz, self.num_heads_partition, -1, self.head_dim - ) - saved_state["prev_key_padding_mask"] = key_padding_mask - # In this branch incremental_state is never None - assert incremental_state is not None - incremental_state = self._set_input_buffer(incremental_state, saved_state) - assert k is not None - src_len = k.size(1) - - # This is part of a workaround to get around fork/join parallelism - # not supporting Optional types. 
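The head bookkeeping in the forward pass above is the core of the scheme: each rank keeps `num_heads // world_size` heads, so every intermediate tensor has the single-GPU shape with `num_heads` replaced by the partition, and only `out_proj` (a `RowParallelLinear`) communicates. A toy single-rank sketch with invented sizes:

```python
import torch

embed_dim, num_heads, world_size = 512, 8, 4
head_dim = embed_dim // num_heads              # 64, identical on every rank
num_heads_partition = num_heads // world_size  # 2 heads per rank
assert num_heads_partition * world_size == num_heads

tgt_len, bsz = 7, 3
# Output of ColumnParallelLinear(..., gather_output=False): only this rank's channels.
q = torch.randn(tgt_len, bsz, num_heads_partition * head_dim)
q = q.view(tgt_len, bsz * num_heads_partition, head_dim).transpose(0, 1)
k, v = q.clone(), q.clone()

attn_weights = torch.bmm(q, k.transpose(1, 2))       # (bsz * heads_partition, tgt_len, src_len)
attn = torch.bmm(torch.softmax(attn_weights, dim=-1), v)
attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim // world_size)
# out_proj is a RowParallelLinear: it all-reduces these partial outputs back to embed_dim.
print(attn.shape)  # torch.Size([7, 3, 128])
```

(The `dim() == 0` check that follows is the Optional-type workaround mentioned in the comment above.)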
- if key_padding_mask is not None and key_padding_mask.dim() == 0: - key_padding_mask = None - - if key_padding_mask is not None: - assert key_padding_mask.size(0) == bsz - assert key_padding_mask.size(1) == src_len - - attn_weights = torch.bmm(q, k.transpose(1, 2)) - - assert list(attn_weights.size()) == [ - bsz * self.num_heads_partition, - tgt_len, - src_len, - ] - - if attn_mask is not None: - attn_mask = attn_mask.unsqueeze(0) - attn_weights += attn_mask - - if key_padding_mask is not None: - # don't attend to padding symbols - attn_weights = attn_weights.view( - bsz, self.num_heads_partition, tgt_len, src_len - ) - if not is_tpu: - attn_weights = attn_weights.masked_fill( - key_padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool), - float("-inf"), - ) - else: - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.masked_fill(key_padding_mask, float("-inf")) - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.view( - bsz * self.num_heads_partition, tgt_len, src_len - ) - - attn_weights_float = utils.softmax(attn_weights, dim=-1) - attn_weights = attn_weights_float.type_as(attn_weights) - - with get_cuda_rng_tracker().fork(): - attn_probs = self.dropout_module(attn_weights) - - assert v is not None - attn = torch.bmm(attn_probs, v) - assert list(attn.size()) == [ - bsz * self.num_heads_partition, - tgt_len, - self.head_dim, - ] - embed_dim_partition = embed_dim // self.model_parallel_size - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim_partition) - attn = self.out_proj(attn) - # return attn_weights None to keep the return type same as single gpu multihead attention - # This will be deprecated. - attn_weights: Optional[Tensor] = None - - return attn, attn_weights - - @staticmethod - def _append_prev_key_padding_mask( - key_padding_mask: Optional[Tensor], - prev_key_padding_mask: Optional[Tensor], - batch_size: int, - src_len: int, - static_kv: bool, - ) -> Optional[Tensor]: - # saved key padding masks have shape (bsz, seq_len) - if prev_key_padding_mask is not None and static_kv: - new_key_padding_mask = prev_key_padding_mask - elif prev_key_padding_mask is not None and key_padding_mask is not None: - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), key_padding_mask.float()], dim=1 - ) - # During incremental decoding, as the padding token enters and - # leaves the frame, there will be a time when prev or current - # is None - elif prev_key_padding_mask is not None: - - filler = torch.zeros(batch_size, src_len - prev_key_padding_mask.size(1)) - if prev_key_padding_mask.is_cuda: - filler = filler.cuda() - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), filler.float()], dim=1 - ) - elif key_padding_mask is not None: - filler = torch.zeros(batch_size, src_len - key_padding_mask.size(1)) - if key_padding_mask.is_cuda: - filler = filler.cuda() - new_key_padding_mask = torch.cat( - [filler.float(), key_padding_mask.float()], dim=1 - ) - else: - new_key_padding_mask = prev_key_padding_mask - return new_key_padding_mask - - def reorder_incremental_state( - self, incremental_state: Dict[str, Dict[str, Optional[Tensor]]], new_order - ): - """Reorder buffered internal state (for incremental generation).""" - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - for k in input_buffer.keys(): - if input_buffer[k] is not None: - input_buffer[k] = input_buffer[k].index_select(0, new_order) - incremental_state = self._set_input_buffer(incremental_state, 
input_buffer) - return incremental_state - - def _get_input_buffer( - self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ) -> Dict[str, Optional[Tensor]]: - result = self.get_incremental_state(incremental_state, "attn_state") - if result is not None: - return result - else: - empty_result: Dict[str, Optional[Tensor]] = {} - return empty_result - - def _set_input_buffer( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - buffer: Dict[str, Optional[Tensor]], - ): - return self.set_incremental_state(incremental_state, "attn_state", buffer) diff --git a/kosmos-g/fairseq/fairseq/model_parallel/modules/transformer_layer.py b/kosmos-g/fairseq/fairseq/model_parallel/modules/transformer_layer.py deleted file mode 100644 index 7ab53c6e5..000000000 --- a/kosmos-g/fairseq/fairseq/model_parallel/modules/transformer_layer.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.model_parallel.modules import ModelParallelMultiheadAttention -from fairseq.modules import TransformerDecoderLayer, TransformerEncoderLayer - - -try: - from fairseq.model_parallel.megatron.mpu import ( - ColumnParallelLinear, - RowParallelLinear, - ) - - has_megatron_submodule = True -except (ImportError, ModuleNotFoundError): - has_megatron_submodule = False - - -class ModelParallelTransformerEncoderLayer(TransformerEncoderLayer): - """Encoder layer block over multiple gpus. - - See "Megatron-LM: https://arxiv.org/pdf/1909.08053.pdf" for more details. - """ - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - if q_noise > 0: - raise NotImplementedError - return ColumnParallelLinear(input_dim, output_dim, gather_output=False) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - if q_noise > 0: - raise NotImplementedError - return RowParallelLinear(input_dim, output_dim, input_is_parallel=True) - - def build_self_attention(self, embed_dim, args, **unused_kwargs): - return ModelParallelMultiheadAttention( - embed_dim, - args.encoder_attention_heads, - dropout=args.attention_dropout, - self_attention=True, - ) - - -class ModelParallelTransformerDecoderLayer(TransformerDecoderLayer): - """Decoder layer block. - - See "Megatron-LM: https://arxiv.org/pdf/1909.08053.pdf" for more details. 
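Both the encoder and decoder layers pair a `ColumnParallelLinear` fc1 with a `RowParallelLinear` fc2, splitting the FFN hidden dimension across ranks; because the nonlinearity sits between them and is elementwise, the only communication is the all-reduce hidden inside fc2. A single-process sketch with plain tensors in place of the parallel linears (sizes invented):

```python
import torch

# Toy model-parallel FFN: two ranks, hidden dimension split in half.
embed_dim, ffn_dim, world_size = 8, 32, 2
x = torch.randn(5, embed_dim)

w1 = torch.randn(ffn_dim, embed_dim)   # fc1 weight, split by output rows (column-parallel)
w2 = torch.randn(embed_dim, ffn_dim)   # fc2 weight, split by input columns (row-parallel)
w1_shards = w1.chunk(world_size, dim=0)
w2_shards = w2.chunk(world_size, dim=1)

# Each "rank" computes its partial FFN; summing the fc2 outputs is the all-reduce.
partials = [torch.relu(x @ w1r.t()) @ w2r.t() for w1r, w2r in zip(w1_shards, w2_shards)]
y = sum(partials)

# Matches the unpartitioned computation.
assert torch.allclose(y, torch.relu(x @ w1.t()) @ w2.t(), atol=1e-4)
```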
- """ - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - if q_noise > 0: - raise NotImplementedError - return ColumnParallelLinear(input_dim, output_dim, gather_output=False) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - if q_noise > 0: - raise NotImplementedError - return RowParallelLinear(input_dim, output_dim, input_is_parallel=True) - - def build_self_attention(self, embed_dim, args, **unused_kwargs): - return ModelParallelMultiheadAttention( - embed_dim=embed_dim, - num_heads=args.decoder_attention_heads, - dropout=args.attention_dropout, - self_attention=not getattr(args, "cross_self_attention", False), - ) - - def build_encoder_attention(self, embed_dim, args, **unused_kwargs): - return ModelParallelMultiheadAttention( - embed_dim=embed_dim, - num_heads=args.decoder_attention_heads, - kdim=getattr(args, "encoder_embed_dim", None), - vdim=getattr(args, "encoder_embed_dim", None), - dropout=args.attention_dropout, - encoder_decoder_attention=True, - ) diff --git a/kosmos-g/fairseq/fairseq/models/__init__.py b/kosmos-g/fairseq/fairseq/models/__init__.py deleted file mode 100644 index 616e3051e..000000000 --- a/kosmos-g/fairseq/fairseq/models/__init__.py +++ /dev/null @@ -1,235 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -import argparse -import importlib -import os - -from contextlib import ExitStack - -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.utils import merge_with_parent -from hydra.core.config_store import ConfigStore -from omegaconf import open_dict, OmegaConf - -from .composite_encoder import CompositeEncoder -from .distributed_fairseq_model import DistributedFairseqModel -from .fairseq_decoder import FairseqDecoder -from .fairseq_encoder import FairseqEncoder -from .fairseq_incremental_decoder import FairseqIncrementalDecoder -from .fairseq_model import ( - BaseFairseqModel, - FairseqEncoderDecoderModel, - FairseqEncoderModel, - FairseqLanguageModel, - FairseqModel, - FairseqMultiModel, -) - - -MODEL_REGISTRY = {} -MODEL_DATACLASS_REGISTRY = {} -ARCH_MODEL_REGISTRY = {} -ARCH_MODEL_NAME_REGISTRY = {} -ARCH_MODEL_INV_REGISTRY = {} -ARCH_CONFIG_REGISTRY = {} - - -__all__ = [ - "BaseFairseqModel", - "CompositeEncoder", - "DistributedFairseqModel", - "FairseqDecoder", - "FairseqEncoder", - "FairseqEncoderDecoderModel", - "FairseqEncoderModel", - "FairseqIncrementalDecoder", - "FairseqLanguageModel", - "FairseqModel", - "FairseqMultiModel", -] - - -def build_model(cfg: FairseqDataclass, task, from_checkpoint=False): - - model = None - model_type = getattr(cfg, "_name", None) or getattr(cfg, "arch", None) - - if not model_type and len(cfg) == 1: - # this is hit if config object is nested in directory that is named after model type - - model_type = next(iter(cfg)) - if model_type in MODEL_DATACLASS_REGISTRY: - cfg = cfg[model_type] - else: - raise Exception( - "Could not infer model type from directory. Please add _name field to indicate model type. 
" - "Available models: " - + str(MODEL_DATACLASS_REGISTRY.keys()) - + " Requested model type: " - + model_type - ) - - if model_type in ARCH_MODEL_REGISTRY: - # case 1: legacy models - model = ARCH_MODEL_REGISTRY[model_type] - elif model_type in MODEL_DATACLASS_REGISTRY: - # case 2: config-driven models - model = MODEL_REGISTRY[model_type] - - if model_type in MODEL_DATACLASS_REGISTRY: - # set defaults from dataclass. note that arch name and model name can be the same - dc = MODEL_DATACLASS_REGISTRY[model_type] - - if isinstance(cfg, argparse.Namespace): - cfg = dc.from_namespace(cfg) - else: - cfg = merge_with_parent(dc(), cfg, from_checkpoint) - else: - if model_type in ARCH_CONFIG_REGISTRY: - with open_dict(cfg) if OmegaConf.is_config(cfg) else ExitStack(): - # this calls the different "arch" functions (like base_architecture()) that you indicate - # if you specify --arch on the command line. this is only applicable to the old argparse based models - # hydra models should expose different architectures via different config files - # it will modify the cfg object and default parameters according to the arch - ARCH_CONFIG_REGISTRY[model_type](cfg) - - assert model is not None, ( - f"Could not infer model type from {cfg}. " - "Available models: {}".format(MODEL_DATACLASS_REGISTRY.keys()) - + f" Requested model type: {model_type}" - ) - - return model.build_model(cfg, task) - - -def register_model(name, dataclass=None): - """ - New model types can be added to fairseq with the :func:`register_model` - function decorator. - - For example:: - - @register_model('lstm') - class LSTM(FairseqEncoderDecoderModel): - (...) - - .. note:: All models must implement the :class:`BaseFairseqModel` interface. - Typically you will extend :class:`FairseqEncoderDecoderModel` for - sequence-to-sequence tasks or :class:`FairseqLanguageModel` for - language modeling tasks. - - Args: - name (str): the name of the model - """ - - def register_model_cls(cls): - if name in MODEL_REGISTRY: - raise ValueError("Cannot register duplicate model ({})".format(name)) - if not issubclass(cls, BaseFairseqModel): - raise ValueError( - "Model ({}: {}) must extend BaseFairseqModel".format(name, cls.__name__) - ) - MODEL_REGISTRY[name] = cls - if dataclass is not None and not issubclass(dataclass, FairseqDataclass): - raise ValueError( - "Dataclass {} must extend FairseqDataclass".format(dataclass) - ) - - cls.__dataclass = dataclass - if dataclass is not None: - MODEL_DATACLASS_REGISTRY[name] = dataclass - - cs = ConfigStore.instance() - node = dataclass() - node._name = name - cs.store(name=name, group="model", node=node, provider="fairseq") - - @register_model_architecture(name, name) - def noop(_): - pass - - return cls - - return register_model_cls - - -def register_model_architecture(model_name, arch_name): - """ - New model architectures can be added to fairseq with the - :func:`register_model_architecture` function decorator. After registration, - model architectures can be selected with the ``--arch`` command-line - argument. - - For example:: - - @register_model_architecture('lstm', 'lstm_luong_wmt_en_de') - def lstm_luong_wmt_en_de(cfg): - args.encoder_embed_dim = getattr(cfg.model, 'encoder_embed_dim', 1000) - (...) - - The decorated function should take a single argument *cfg*, which is a - :class:`omegaconf.DictConfig`. The decorated function should modify these - arguments in-place to match the desired architecture. 
- - Args: - model_name (str): the name of the Model (Model must already be - registered) - arch_name (str): the name of the model architecture (``--arch``) - """ - - def register_model_arch_fn(fn): - if model_name not in MODEL_REGISTRY: - raise ValueError( - "Cannot register model architecture for unknown model type ({})".format( - model_name - ) - ) - if arch_name in ARCH_MODEL_REGISTRY: - raise ValueError( - "Cannot register duplicate model architecture ({})".format(arch_name) - ) - if not callable(fn): - raise ValueError( - "Model architecture must be callable ({})".format(arch_name) - ) - ARCH_MODEL_REGISTRY[arch_name] = MODEL_REGISTRY[model_name] - ARCH_MODEL_NAME_REGISTRY[arch_name] = model_name - ARCH_MODEL_INV_REGISTRY.setdefault(model_name, []).append(arch_name) - ARCH_CONFIG_REGISTRY[arch_name] = fn - return fn - - return register_model_arch_fn - - -def import_models(models_dir, namespace): - for file in os.listdir(models_dir): - path = os.path.join(models_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - model_name = file[: file.find(".py")] if file.endswith(".py") else file - importlib.import_module(namespace + "." + model_name) - - # extra `model_parser` for sphinx - if model_name in MODEL_REGISTRY: - parser = argparse.ArgumentParser(add_help=False) - group_archs = parser.add_argument_group("Named architectures") - group_archs.add_argument( - "--arch", choices=ARCH_MODEL_INV_REGISTRY[model_name] - ) - group_args = parser.add_argument_group( - "Additional command-line arguments" - ) - MODEL_REGISTRY[model_name].add_args(group_args) - globals()[model_name + "_parser"] = parser - - -# automatically import any Python files in the models/ directory -models_dir = os.path.dirname(__file__) -import_models(models_dir, "fairseq.models") diff --git a/kosmos-g/fairseq/fairseq/models/bart/__init__.py b/kosmos-g/fairseq/fairseq/models/bart/__init__.py deleted file mode 100644 index a701923f7..000000000 --- a/kosmos-g/fairseq/fairseq/models/bart/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .hub_interface import * # noqa -from .model import * # noqa diff --git a/kosmos-g/fairseq/fairseq/models/bart/hub_interface.py b/kosmos-g/fairseq/fairseq/models/bart/hub_interface.py deleted file mode 100644 index 6b647c964..000000000 --- a/kosmos-g/fairseq/fairseq/models/bart/hub_interface.py +++ /dev/null @@ -1,211 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import copy -import logging -from typing import Dict, List - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.data import encoders -from fairseq.hub_utils import GeneratorHubInterface -from omegaconf import open_dict - - -logger = logging.getLogger(__name__) - - -class BARTHubInterface(GeneratorHubInterface): - """A simple PyTorch Hub interface to BART. 
-
-    Usage: https://github.com/pytorch/fairseq/tree/main/examples/bart
-    """
-
-    def __init__(self, cfg, task, model):
-        super().__init__(cfg, task, [model])
-        self.model = self.models[0]
-
-    def encode(
-        self, sentence: str, *addl_sentences, no_separator=True
-    ) -> torch.LongTensor:
-        """
-        BPE-encode a sentence (or multiple sentences).
-
-        Every sequence begins with a beginning-of-sentence (`<s>`) symbol.
-        Every sentence ends with an end-of-sentence (`</s>`).
-
-        Example (single sentence): `<s> a b c </s>`
-        Example (sentence pair): `<s> d e f </s> 1 2 3 </s>`
-
-        The BPE encoding follows GPT-2. One subtle detail is that the GPT-2 BPE
-        requires leading spaces. For example::
-
-            >>> bart.encode('Hello world').tolist()
-            [0, 31414, 232, 2]
-            >>> bart.encode(' world').tolist()
-            [0, 232, 2]
-            >>> bart.encode('world').tolist()
-            [0, 8331, 2]
-        """
-        tokens = self.bpe.encode(sentence)
-        if len(tokens.split(" ")) > min(self.max_positions) - 2:
-            tokens = " ".join(tokens.split(" ")[: min(self.max_positions) - 2])
-        bpe_sentence = "<s> " + tokens + " </s>"
-        for s in addl_sentences:
-            bpe_sentence += " </s>" if not no_separator else ""
-            bpe_sentence += " " + self.bpe.encode(s) + " </s>"
-        tokens = self.task.source_dictionary.encode_line(bpe_sentence, append_eos=False)
-        return tokens.long()
-
-    def decode(self, tokens: torch.LongTensor):
-        assert tokens.dim() == 1
-        tokens = tokens.cpu().numpy()
-        if tokens[0] == self.task.source_dictionary.bos():
-            tokens = tokens[1:]  # remove <s>
-        eos_mask = tokens == self.task.source_dictionary.eos()
-        doc_mask = eos_mask[1:] & eos_mask[:-1]
-        sentences = np.split(tokens, doc_mask.nonzero()[0] + 1)
-        sentences = [
-            self.bpe.decode(self.task.source_dictionary.string(s)) for s in sentences
-        ]
-        if len(sentences) == 1:
-            return sentences[0]
-        return sentences
-
-    def _build_sample(self, src_tokens: List[torch.LongTensor]):
-        # assert torch.is_tensor(src_tokens)
-        dataset = self.task.build_dataset_for_inference(
-            src_tokens,
-            [x.numel() for x in src_tokens],
-        )
-        sample = dataset.collater(dataset)
-        sample = utils.apply_to_sample(lambda tensor: tensor.to(self.device), sample)
-        return sample
-
-    def generate(
-        self,
-        tokenized_sentences: List[torch.LongTensor],
-        *args,
-        inference_step_args=None,
-        skip_invalid_size_inputs=False,
-        **kwargs
-    ) -> List[List[Dict[str, torch.Tensor]]]:
-        inference_step_args = inference_step_args or {}
-        if "prefix_tokens" in inference_step_args:
-            raise NotImplementedError("prefix generation not implemented for BART")
-        res = []
-        for batch in self._build_batches(tokenized_sentences, skip_invalid_size_inputs):
-            src_tokens = batch["net_input"]["src_tokens"]
-            inference_step_args["prefix_tokens"] = src_tokens.new_full(
-                (src_tokens.size(0), 1), fill_value=self.task.source_dictionary.bos()
-            ).to(device=self.device)
-            results = super().generate(
-                src_tokens,
-                *args,
-                inference_step_args=inference_step_args,
-                skip_invalid_size_inputs=skip_invalid_size_inputs,
-                **kwargs
-            )
-            for id, hypos in zip(batch["id"].tolist(), results):
-                res.append((id, hypos))
-        res = [hypos for _, hypos in sorted(res, key=lambda x: x[0])]
-        return res
-
-    def extract_features(
-        self, tokens: torch.LongTensor, return_all_hiddens: bool = False
-    ) -> torch.Tensor:
-        if tokens.dim() == 1:
-            tokens = tokens.unsqueeze(0)
-        if tokens.size(-1) > min(self.model.max_positions()):
-            raise ValueError(
-                "tokens exceeds maximum length: {} > {}".format(
-                    tokens.size(-1), self.model.max_positions()
-                )
-            )
-        tokens = tokens.to(device=self.device)
prev_output_tokens = tokens.clone() - - prev_output_tokens[:, 0] = tokens.gather( - 1, - (tokens.ne(self.task.source_dictionary.pad()).sum(dim=1) - 1).unsqueeze(-1), - ).squeeze() - - prev_output_tokens[:, 1:] = tokens[:, :-1] - features, extra = self.model( - src_tokens=tokens, - src_lengths=None, - prev_output_tokens=prev_output_tokens, - features_only=True, - return_all_hiddens=return_all_hiddens, - ) - if return_all_hiddens: - # convert from T x B x C -> B x T x C - inner_states = extra["inner_states"] - return [inner_state.transpose(0, 1) for inner_state in inner_states] - else: - return features # just the last layer's features - - def register_classification_head( - self, name: str, num_classes: int = None, embedding_size: int = None, **kwargs - ): - self.model.register_classification_head( - name, num_classes=num_classes, embedding_size=embedding_size, **kwargs - ) - - def predict(self, head: str, tokens: torch.LongTensor, return_logits: bool = False): - if tokens.dim() == 1: - tokens = tokens.unsqueeze(0) - features = self.extract_features(tokens.to(device=self.device)) - sentence_representation = features[ - tokens.eq(self.task.source_dictionary.eos()), : - ].view(features.size(0), -1, features.size(-1))[:, -1, :] - - logits = self.model.classification_heads[head](sentence_representation) - if return_logits: - return logits - return F.log_softmax(logits, dim=-1) - - def fill_mask( - self, - masked_inputs: List[str], - topk: int = 5, - match_source_len: bool = True, - **generate_kwargs - ): - masked_token = "<mask>" - batch_tokens = [] - for masked_input in masked_inputs: - assert ( - masked_token in masked_input - ), "please add one {} token for the input".format(masked_token) - - text_spans = masked_input.split(masked_token) - text_spans_bpe = ( - (" {0} ".format(masked_token)) - .join([self.bpe.encode(text_span.rstrip()) for text_span in text_spans]) - .strip() - ) - tokens = self.task.source_dictionary.encode_line( - "<s> " + text_spans_bpe + " </s>", - append_eos=False, - add_if_not_exist=False, - ).long() - batch_tokens.append(tokens) - - # ensure beam size is at least as big as topk - generate_kwargs["beam"] = max( - topk, - generate_kwargs.get("beam", -1), - ) - generate_kwargs["match_source_len"] = match_source_len - batch_hypos = self.generate(batch_tokens, **generate_kwargs) - - return [ - [(self.decode(hypo["tokens"]), hypo["score"]) for hypo in hypos[:topk]] - for hypos in batch_hypos - ] diff --git a/kosmos-g/fairseq/fairseq/models/bart/model.py b/kosmos-g/fairseq/fairseq/models/bart/model.py deleted file mode 100644 index bdb12b02f..000000000 --- a/kosmos-g/fairseq/fairseq/models/bart/model.py +++ /dev/null @@ -1,384 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
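Before the model definition, a quick sketch of driving the hub interface above. It assumes a locally downloaded `bart.large` checkpoint, so the path is illustrative; the token ids match the `encode()` docstring:

```python
from fairseq.models.bart import BARTModel

# Hypothetical checkpoint location; download bart.large separately.
bart = BARTModel.from_pretrained("checkpoints/bart.large", checkpoint_file="model.pt")
bart.eval()

tokens = bart.encode("Hello world")  # tensor([0, 31414, 232, 2])
assert bart.decode(tokens) == "Hello world"

# fill_mask() pads the beam out to `topk` and returns (decoded text, score)
# pairs for each <mask> input.
print(bart.fill_mask(["The cat <mask> on the mat."], topk=3))
```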
-""" -BART: Denoising Sequence-to-Sequence Pre-training for -Natural Language Generation, Translation, and Comprehension -""" -from typing import Optional - -import logging - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.models import register_model, register_model_architecture -from fairseq.models.transformer import TransformerModel -from fairseq.modules.transformer_sentence_encoder import init_bert_params - -from .hub_interface import BARTHubInterface - - -logger = logging.getLogger(__name__) - - -@register_model("bart") -class BARTModel(TransformerModel): - __jit_unused_properties__ = ["supported_targets"] - - @classmethod - def hub_models(cls): - return { - "bart.base": "http://dl.fbaipublicfiles.com/fairseq/models/bart.base.tar.gz", - "bart.large": "http://dl.fbaipublicfiles.com/fairseq/models/bart.large.tar.gz", - "bart.large.mnli": "http://dl.fbaipublicfiles.com/fairseq/models/bart.large.mnli.tar.gz", - "bart.large.cnn": "http://dl.fbaipublicfiles.com/fairseq/models/bart.large.cnn.tar.gz", - "bart.large.xsum": "http://dl.fbaipublicfiles.com/fairseq/models/bart.large.xsum.tar.gz", - } - - def __init__(self, args, encoder, decoder): - super().__init__(args, encoder, decoder) - - # We follow BERT's random weight initialization - self.apply(init_bert_params) - - self.classification_heads = nn.ModuleDict() - if hasattr(self.encoder, "dictionary"): - self.eos: int = self.encoder.dictionary.eos() - - @staticmethod - def add_args(parser): - super(BARTModel, BARTModel).add_args(parser) - parser.add_argument( - "--pooler-dropout", - type=float, - metavar="D", - help="dropout probability in the masked_lm pooler layers", - ) - parser.add_argument( - "--pooler-activation-fn", - choices=utils.get_available_activation_fns(), - help="activation function to use for pooler layer", - ) - parser.add_argument( - "--spectral-norm-classification-head", - action="store_true", - help="Apply spectral normalization on the classification head", - ) - - @property - def supported_targets(self): - return {"self"} - - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens, - features_only: bool = False, - classification_head_name: Optional[str] = None, - token_embeddings: Optional[torch.Tensor] = None, - return_all_hiddens: bool = True, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - ): - if classification_head_name is not None: - features_only = True - - encoder_out = self.encoder( - src_tokens, - src_lengths=src_lengths, - token_embeddings=token_embeddings, - return_all_hiddens=return_all_hiddens, - ) - x, extra = self.decoder( - prev_output_tokens, - encoder_out=encoder_out, - features_only=features_only, - alignment_layer=alignment_layer, - alignment_heads=alignment_heads, - src_lengths=src_lengths, - return_all_hiddens=return_all_hiddens, - ) - eos: int = self.eos - if classification_head_name is not None: - sentence_representation = x[src_tokens.eq(eos), :].view( - x.size(0), -1, x.size(-1) - )[:, -1, :] - for k, head in self.classification_heads.items(): - # for torch script only supports iteration - if k == classification_head_name: - x = head(sentence_representation) - break - return x, extra - - @classmethod - def from_pretrained( - cls, - model_name_or_path, - checkpoint_file="model.pt", - data_name_or_path=".", - bpe="gpt2", - sample_break_mode="eos", - **kwargs, - ): - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - 
archive_map=cls.hub_models(), - bpe=bpe, - load_checkpoint_heads=True, - sample_break_mode=sample_break_mode, - **kwargs, - ) - return BARTHubInterface(x["args"], x["task"], x["models"][0]) - - def register_classification_head( - self, name, num_classes=None, inner_dim=None, **kwargs - ): - """Register a classification head.""" - logger.info("Registering classification head: {0}".format(name)) - if name in self.classification_heads: - prev_num_classes = self.classification_heads[name].out_proj.out_features - prev_inner_dim = self.classification_heads[name].dense.out_features - if num_classes != prev_num_classes or inner_dim != prev_inner_dim: - logger.warning( - 're-registering head "{}" with num_classes {} (prev: {}) ' - "and inner_dim {} (prev: {})".format( - name, num_classes, prev_num_classes, inner_dim, prev_inner_dim - ) - ) - self.classification_heads[name] = BARTClassificationHead( - input_dim=self.args.encoder_embed_dim, - inner_dim=inner_dim or self.args.encoder_embed_dim, - num_classes=num_classes, - activation_fn=self.args.pooler_activation_fn, - pooler_dropout=self.args.pooler_dropout, - do_spectral_norm=getattr( - self.args, "spectral_norm_classification_head", False - ), - ) - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - - prefix = name + "." if name != "" else "" - current_head_names = ( - [] - if not hasattr(self, "classification_heads") - else self.classification_heads.keys() - ) - - # Handle new classification heads present in the state dict. - keys_to_delete = [] - for k in state_dict.keys(): - if not k.startswith(prefix + "classification_heads."): - continue - - head_name = k[len(prefix + "classification_heads.") :].split(".")[0] - num_classes = state_dict[ - prefix + "classification_heads." + head_name + ".out_proj.weight" - ].size(0) - inner_dim = state_dict[ - prefix + "classification_heads." + head_name + ".dense.weight" - ].size(0) - - if getattr(self.args, "load_checkpoint_heads", False): - if head_name not in current_head_names: - self.register_classification_head(head_name, num_classes, inner_dim) - else: - if head_name not in current_head_names: - logger.warning( - "deleting classification head ({}) from checkpoint " - "not present in current model: {}".format(head_name, k) - ) - keys_to_delete.append(k) - elif ( - num_classes - != self.classification_heads[head_name].out_proj.out_features - or inner_dim - != self.classification_heads[head_name].dense.out_features - ): - logger.warning( - "deleting classification head ({}) from checkpoint " - "with different dimensions than current model: {}".format( - head_name, k - ) - ) - keys_to_delete.append(k) - for k in keys_to_delete: - del state_dict[k] - - def truncate_emb(key): - if key in state_dict: - state_dict[key] = state_dict[key][:-1, :] - - # When finetuning on translation task, remove last row of - # embedding matrix that corresponds to mask_idx token. - loaded_dict_size = state_dict["encoder.embed_tokens.weight"].size(0) - if ( - loaded_dict_size == len(self.encoder.dictionary) + 1 - and "<mask>" not in self.encoder.dictionary - ): - truncate_emb("encoder.embed_tokens.weight") - truncate_emb("decoder.embed_tokens.weight") - truncate_emb("encoder.output_projection.weight") - truncate_emb("decoder.output_projection.weight") - - # When continued pretraining on new set of languages for mbart, - # add extra lang embeddings at the end of embed_tokens. - # Note: newly added languages are assumed to have been added at the end. 
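The `truncate_emb` helper above drops the trailing `<mask>` row when the finetuning dictionary no longer contains that symbol. In miniature, with a plain dict standing in for the checkpoint state and toy sizes:

```python
import torch

dict_size, embed_dim = 8, 4
# Checkpoint embedding has one extra trailing row for <mask>.
state_dict = {"encoder.embed_tokens.weight": torch.randn(dict_size + 1, embed_dim)}

def truncate_emb(key):  # same shape surgery as the helper above
    if key in state_dict:
        state_dict[key] = state_dict[key][:-1, :]

truncate_emb("encoder.embed_tokens.weight")
assert state_dict["encoder.embed_tokens.weight"].shape == (dict_size, embed_dim)
```

(The `multilingual_denoising` branch below then handles the opposite case, growing the embedding table for newly added languages.)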
- if self.args.task == "multilingual_denoising" and loaded_dict_size < len( - self.encoder.dictionary - ): - logger.info( - "Adding extra language embeddings not found in pretrained model for " - "continued pretraining of MBART on new set of languages." - ) - loaded_mask_token_embedding = state_dict["encoder.embed_tokens.weight"][ - -1, : - ] - - num_langids_to_add = len(self.encoder.dictionary) - loaded_dict_size - embed_dim = state_dict["encoder.embed_tokens.weight"].size(1) - - new_lang_embed_to_add = torch.zeros(num_langids_to_add, embed_dim) - nn.init.normal_(new_lang_embed_to_add, mean=0, std=embed_dim ** -0.5) - new_lang_embed_to_add = new_lang_embed_to_add.to( - dtype=state_dict["encoder.embed_tokens.weight"].dtype, - ) - - state_dict["encoder.embed_tokens.weight"] = torch.cat( - [ - state_dict["encoder.embed_tokens.weight"][ - : loaded_dict_size - 1, : - ], - new_lang_embed_to_add, - loaded_mask_token_embedding.unsqueeze(0), - ] - ) - state_dict["decoder.embed_tokens.weight"] = torch.cat( - [ - state_dict["decoder.embed_tokens.weight"][ - : loaded_dict_size - 1, : - ], - new_lang_embed_to_add, - loaded_mask_token_embedding.unsqueeze(0), - ] - ) - - # Copy any newly-added classification heads into the state dict - # with their current weights. - if hasattr(self, "classification_heads"): - cur_state = self.classification_heads.state_dict() - for k, v in cur_state.items(): - if prefix + "classification_heads." + k not in state_dict: - logger.info("Overwriting " + prefix + "classification_heads." + k) - state_dict[prefix + "classification_heads." + k] = v - - -class BARTClassificationHead(nn.Module): - """Head for sentence-level classification tasks.""" - - def __init__( - self, - input_dim, - inner_dim, - num_classes, - activation_fn, - pooler_dropout, - do_spectral_norm=False, - ): - super().__init__() - self.dense = nn.Linear(input_dim, inner_dim) - self.activation_fn = utils.get_activation_fn(activation_fn) - self.dropout = nn.Dropout(p=pooler_dropout) - self.out_proj = nn.Linear(inner_dim, num_classes) - - if do_spectral_norm: - self.out_proj = torch.nn.utils.spectral_norm(self.out_proj) - - def forward(self, features, **kwargs): - x = features - x = self.dropout(x) - x = self.dense(x) - x = self.activation_fn(x) - x = self.dropout(x) - x = self.out_proj(x) - return x - - -@register_model_architecture("bart", "bart_large") -def bart_large_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4 * 1024) - args.encoder_layers = getattr(args, "encoder_layers", 12) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 12) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", True) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - 
args.relu_dropout = getattr(args, "relu_dropout", 0.0) - args.dropout = getattr(args, "dropout", 0.1) - args.max_target_positions = getattr(args, "max_target_positions", 1024) - args.max_source_positions = getattr(args, "max_source_positions", 1024) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", True - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", True) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.no_scale_embedding = getattr(args, "no_scale_embedding", True) - args.layernorm_embedding = getattr(args, "layernorm_embedding", True) - - args.activation_fn = getattr(args, "activation_fn", "gelu") - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - args.pooler_dropout = getattr(args, "pooler_dropout", 0.0) - - -@register_model_architecture("bart", "bart_base") -def bart_base_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 768) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4 * 768) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 12) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 12) - bart_large_architecture(args) - - -@register_model_architecture("bart", "mbart_large") -def mbart_large_architecture(args): - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - bart_large_architecture(args) - - -@register_model_architecture("bart", "mbart_base") -def mbart_base_architecture(args): - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - bart_base_architecture(args) - - -@register_model_architecture("bart", "mbart_base_wmt20") -def mbart_base_wmt20_architecture(args): - args.layernorm_embedding = getattr(args, "layernorm_embedding", False) - mbart_base_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/composite_encoder.py b/kosmos-g/fairseq/fairseq/models/composite_encoder.py deleted file mode 100644 index 4e20fe3a8..000000000 --- a/kosmos-g/fairseq/fairseq/models/composite_encoder.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .fairseq_encoder import FairseqEncoder - - -class CompositeEncoder(FairseqEncoder): - """ - A wrapper around a dictionary of :class:`FairseqEncoder` objects. - - We run forward on each encoder and return a dictionary of outputs. The first - encoder's dictionary is used for initialization. - - Args: - encoders (dict): a dictionary of :class:`FairseqEncoder` objects. 
- """ - - def __init__(self, encoders): - super().__init__(next(iter(encoders.values())).dictionary) - self.encoders = encoders - for key in self.encoders: - self.add_module(key, self.encoders[key]) - - def forward(self, src_tokens, src_lengths): - """ - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - src_lengths (LongTensor): lengths of each source sentence of shape - `(batch)` - - Returns: - dict: - the outputs from each Encoder - """ - encoder_out = {} - for key in self.encoders: - encoder_out[key] = self.encoders[key](src_tokens, src_lengths) - return encoder_out - - def reorder_encoder_out(self, encoder_out, new_order): - """Reorder encoder output according to new_order.""" - for key in self.encoders: - encoder_out[key] = self.encoders[key].reorder_encoder_out( - encoder_out[key], new_order - ) - return encoder_out - - def max_positions(self): - return min(self.encoders[key].max_positions() for key in self.encoders) - - def upgrade_state_dict(self, state_dict): - for key in self.encoders: - self.encoders[key].upgrade_state_dict(state_dict) - return state_dict diff --git a/kosmos-g/fairseq/fairseq/models/distributed_fairseq_model.py b/kosmos-g/fairseq/fairseq/models/distributed_fairseq_model.py deleted file mode 100644 index 7c4ab558c..000000000 --- a/kosmos-g/fairseq/fairseq/models/distributed_fairseq_model.py +++ /dev/null @@ -1,148 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import signal -import threading - -import torch -import torch.nn as nn -from torch.nn.parallel import DistributedDataParallel - -from fairseq.distributed import ( - DistributedTimeoutWrapper, - LegacyDistributedDataParallel, - ModuleProxyWrapper, - TPUDistributedDataParallel, -) - - -logger = logging.getLogger(__name__) - - -_SLOWMO_DDP_DISABLED = False -try: - from fairscale.experimental.nn.data_parallel import ( - SlowMoBaseAlgorithm, - SlowMoDistributedDataParallel, - ) -except ImportError: - _SLOWMO_DDP_DISABLED = True - - -def DistributedFairseqModel(args, model, process_group, device): - """ - Wrap a *model* to support distributed data parallel training. - - This is similar to the built-in DistributedDataParallel, but allows - additional configuration of the DistributedDataParallel class to - use, and also provides easier access to the wrapped model by - forwarding requests for missing attributes to the wrapped model. - - Args: - args (argparse.Namespace): fairseq args - model (BaseFairseqModel): model to wrap - process_group: the c10d process group to be used for distributed data - parallel all-reduction. 
- device: device to move model to - """ - assert isinstance(model, nn.Module) - if args.tpu: - wrapped_model = TPUDistributedDataParallel( - module=model.to(device), - process_group=process_group, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend in {"c10d", "pytorch_ddp"}: - wrapped_model = DistributedDataParallel( - module=model.to(device), - device_ids=[args.device_id], - output_device=args.device_id, - broadcast_buffers=args.broadcast_buffers, - bucket_cap_mb=args.bucket_cap_mb, - process_group=process_group, - find_unused_parameters=args.find_unused_parameters, - gradient_as_bucket_view=args.gradient_as_bucket_view, - ) - if args.ddp_comm_hook == "fp16": - logger.info("enable fp16 communication hook in DDP") - try: - from torch.distributed.algorithms.ddp_comm_hooks import ( - register_ddp_comm_hook, - DDPCommHookType, - ) - except: - logger.error( - "Could not import from torch.distributed.algorithms.ddp_comm_hooks; you may need to update your pytorch version" - ) - raise - - register_ddp_comm_hook(DDPCommHookType.FP16_COMPRESS, wrapped_model) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend in {"no_c10d", "legacy_ddp"}: - wrapped_model = LegacyDistributedDataParallel( - module=model.to(device), - buffer_size=2 ** 28, - process_group=process_group, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend == "slowmo": - if _SLOWMO_DDP_DISABLED: - raise ImportError( - "Cannot find SlowMoDistributedDataParallel. " - "Please install fairscale with: pip install fairscale" - ) - - # The values of slowmo_momentum below were obtained by tuning on the - # En-De 16 dataset by training the transformer_wmt_en_de_large model - if args.slowmo_momentum is None: - if args.distributed_world_size <= 16: - args.slowmo_momentum = 0.0 - elif args.distributed_world_size <= 32: - args.slowmo_momentum = 0.2 - elif args.distributed_world_size <= 64: - args.slowmo_momentum = 0.5 - else: - args.slowmo_momentum = 0.6 - slowmo_base_algorithm = SlowMoBaseAlgorithm[args.slowmo_base_algorithm.upper()] - - wrapped_model = SlowMoDistributedDataParallel( - module=model.to(device), - broadcast_buffers=args.broadcast_buffers, - nprocs_per_node=args.nprocs_per_node, - slowmo_momentum=args.slowmo_momentum, - slowmo_base_algorithm=slowmo_base_algorithm, - localsgd_frequency=args.localsgd_frequency, - ) - # forward missing getattr and state_dict/load_state_dict to orig model - wrapped_model = ModuleProxyWrapper(wrapped_model) - elif args.ddp_backend == "fully_sharded": - try: - from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP - except ImportError: - raise ImportError( - "Cannot find FullyShardedDataParallel. 
" - "Please install fairscale with: pip install fairscale" - ) - assert isinstance(model, FSDP), "expected model to already be wrapped in FSDP" - wrapped_model = model - if args.memory_efficient_fp16: - wrapped_model = wrapped_model.half() - if not args.cpu_offload: - wrapped_model = wrapped_model.to(device=device) - else: - raise ValueError("Unknown --ddp-backend: " + args.ddp_backend) - - # kill hung distributed jobs after a timeout - if getattr(args, "heartbeat_timeout", -1) > 0: - wrapped_model = DistributedTimeoutWrapper( - wrapped_model, timeout=getattr(args, "heartbeat_timeout", -1) - ) - - return wrapped_model diff --git a/kosmos-g/fairseq/fairseq/models/ema/__init__.py b/kosmos-g/fairseq/fairseq/models/ema/__init__.py deleted file mode 100644 index 503ceaa60..000000000 --- a/kosmos-g/fairseq/fairseq/models/ema/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - -from .ema import EMA - - -def build_ema(model, cfg, device): - return EMA(model, cfg, device) - - -# automatically import any Python files in the models/ema/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - file_name = file[: file.find(".py")] - importlib.import_module("fairseq.models.ema." + file_name) diff --git a/kosmos-g/fairseq/fairseq/models/ema/ema.py b/kosmos-g/fairseq/fairseq/models/ema/ema.py deleted file mode 100644 index bc966a9ae..000000000 --- a/kosmos-g/fairseq/fairseq/models/ema/ema.py +++ /dev/null @@ -1,206 +0,0 @@ -#!/usr/bin/env python3 - -""" -This module has the EMA class used to store a copy of the exponentially decayed -model params. - -Typical usage of EMA class involves initializing an object using an existing -model (random or from a seed model) and setting the config like ema_decay, -ema_start_update which determine how the EMA model is updated. After every -update of the model i.e. at the end of the train_step, the EMA should be updated -by passing the new model to the EMA.step function. The EMA model state dict -can be stored in the extra state under the key of "ema" and dumped -into a checkpoint and loaded. The EMA object can be passed to tasks -by setting task.uses_ema property. -EMA is a smoothed/ensemble model which might have better performance -when used for inference or further fine-tuning. EMA class has a -reverse function to load the EMA params into a model and use it -like a regular model. -""" - -import copy -import logging - -import torch - -from fairseq import checkpoint_utils - - -class EMA(object): - """Exponential Moving Average of Fairseq Models - EMA keeps a copy of the exponentially decayed model params. - The set of params should include both gradient-descent and - non-gradient descent params, such as batch mean/var and buffers. - This is a modified implementation of - the open source code in https://github.com/zhawe01/fairseq-gec.git, - and internal source code in - fbcode/mobile-vision/projects/classification_pytorch/lib/utils/model_ema.py. - - Similar to TF EMA. - https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage. - EMA provides a averaged and smoothed set of model weights, and has been shown to - improve vision models. EMA class does all necessary functions to update, reload, - or init EMA methods. - - EMA object is initialized from an arbitrary model. 
By default, it is stored in - the same device (unless device specified at initialization) and with the - same precision as the model (unless ema_fp32 is True). ema_fp32 is recommended. - This stores the EMA parameters in fp32 only for the EMA update step, and - is used at the default precision otherwise. - EMA is usually enabled using EMAConfig with store_ema=True. Some important - parameters to configure EMA are - 1) ema_decay - The decay of EMA - 2) ema_update_freq - EMA is updated every this many model updates. - 3) ema_start_update - Start EMA update after this many model updates [default 0] - - Key methods: - 1) step - One update of EMA using new model - 2) restore - Update EMA from a state dict - 3) reverse - Load EMA into a model - 4) get_decay, _set_decay - Used to get or set the decay. Note _set_decay is - called from step. - 5) build_fp32_params - Used to initialize or update the fp32 copy of EMA params. - Note this is enabled only when ema_fp32=True - """ - - def __init__(self, model, config, device=None, skip_keys=None): - """ - @param model model to initialize the EMA with - @param config EMAConfig object with configuration like - ema_decay, ema_update_freq, ema_fp32 - @param device If provided, copy EMA to this device (e.g. gpu). - Otherwise EMA is in the same device as the model. - """ - - self.decay = config.ema_decay - self.model = copy.deepcopy(model) - self.model.requires_grad_(False) - self.config = config - self.skip_keys = skip_keys or set() - self.fp32_params = {} - - if self.config.ema_seed_model is not None: - state = checkpoint_utils.load_ema_from_checkpoint( - self.config.ema_seed_model - ) - self.model.load_state_dict(state["model"], strict=True) - - if device is not None: - logging.info(f"Copying EMA model to device {device}") - self.model = self.model.to(device=device) - - if self.config.ema_fp32: - self.build_fp32_params() - - self.update_freq_counter = 0 - - def get_model(self): - return self.model - - def build_fp32_params(self, state_dict=None): - """ - Store a copy of the EMA params in fp32. - If state dict is passed, the EMA params is copied from - the provided state dict. Otherwise, it is copied from the - current EMA model parameters. - """ - if not self.config.ema_fp32: - raise RuntimeError( - "build_fp32_params should not be called if ema_fp32=False. " - "Use ema_fp32=True if this is really intended." 
- ) - - if state_dict is None: - state_dict = self.model.state_dict() - - def _to_float(t): - return t.float() if torch.is_floating_point(t) else t - - for param_key in state_dict: - if param_key in self.fp32_params: - self.fp32_params[param_key].copy_(state_dict[param_key]) - else: - self.fp32_params[param_key] = _to_float(state_dict[param_key]) - - def restore(self, state_dict, build_fp32_params=False): - """Load data from a model spec into EMA model""" - self.model.load_state_dict(state_dict, strict=False) - if build_fp32_params: - self.build_fp32_params(state_dict) - - def _set_decay(self, decay): - self.decay = decay - - def get_decay(self): - return self.decay - - def _step_internal(self, new_model, updates=None): - """One update of the EMA model based on new model weights""" - decay = self.decay - - ema_state_dict = {} - ema_params = ( - self.fp32_params if self.config.ema_fp32 else self.model.state_dict() - ) - for key, param in new_model.state_dict().items(): - if isinstance(param, dict): - continue - try: - ema_param = ema_params[key] - except KeyError: - ema_param = ( - param.float().clone() if param.ndim == 1 else copy.deepcopy(param) - ) - - if param.shape != ema_param.shape: - raise ValueError( - "incompatible tensor shapes between model param and ema param" - + "{} vs. {}".format(param.shape, ema_param.shape) - ) - - if "version" in key: - # Do not decay a model.version pytorch param - continue - - if key in self.skip_keys: - ema_param = param.to(dtype=ema_param.dtype).clone() - else: - ema_param.mul_(decay) - ema_param.add_(param.to(dtype=ema_param.dtype), alpha=1 - decay) - ema_state_dict[key] = ema_param - self.restore(ema_state_dict, build_fp32_params=False) - - def step(self, new_model, updates=None): - """ - One update of EMA which is done every self.config.ema_update_freq - updates of the model. - - @param updates The current number of model updates done. - Decay is set of 0 if model updates < ema_start_update, which means - the model will be simply copied over to the EMA. - When model updates >= ema_start_updates, then EMA is updated with - a decay of self.config.ema_decay. - """ - if updates is not None: - self._set_decay( - 0 if updates < self.config.ema_start_update else self.config.ema_decay - ) - if updates is not None and self.config.ema_update_freq > 1: - self.update_freq_counter += 1 - if self.update_freq_counter >= self.config.ema_update_freq: - self._step_internal(new_model, updates) - self.update_freq_counter = 0 - else: - self._step_internal(new_model, updates) - - def reverse(self, model): - """ - Load the model parameters from EMA model. - Useful for inference or fine-tuning from the EMA model. - """ - d = self.model.state_dict() - if "_ema" in d: - del d["_ema"] - - model.load_state_dict(d, strict=False) - return model diff --git a/kosmos-g/fairseq/fairseq/models/fairseq_decoder.py b/kosmos-g/fairseq/fairseq/models/fairseq_decoder.py deleted file mode 100644 index 13b73d639..000000000 --- a/kosmos-g/fairseq/fairseq/models/fairseq_decoder.py +++ /dev/null @@ -1,104 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
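The `_step_internal` logic above boils down to the textbook exponential moving average, `ema = decay * ema + (1 - decay) * param`, applied per state-dict entry (with non-float buffers copied verbatim). A minimal self-contained sketch of that rule, independent of fairseq's `EMA` class and its fp32 shadow-copy and skip-key machinery (the `ema_step` helper below is hypothetical):

```python
import copy

import torch
import torch.nn as nn


@torch.no_grad()
def ema_step(ema_model: nn.Module, model: nn.Module, decay: float = 0.999) -> None:
    """One EMA update: ema = decay * ema + (1 - decay) * param."""
    ema_state = ema_model.state_dict()
    for key, param in model.state_dict().items():
        ema_param = ema_state[key]
        if torch.is_floating_point(ema_param):
            # in-place decayed average, mirroring the mul_/add_ in _step_internal
            ema_param.mul_(decay).add_(param.to(ema_param.dtype), alpha=1 - decay)
        else:
            # non-float buffers (e.g. num_batches_tracked) are copied over
            ema_param.copy_(param)


model = nn.Linear(4, 2)
ema_model = copy.deepcopy(model).requires_grad_(False)
# after every optimizer step:
ema_step(ema_model, model, decay=0.999)
```

Keeping the accumulator in fp32 (the `ema_fp32` option recommended in the docstring) matters because the `(1 - decay) * param` increment is tiny and can round away entirely when the running average is stored in fp16.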
- -from typing import Dict, List, Optional, Tuple - -import torch.nn as nn -from fairseq import utils -from torch import Tensor - - -class FairseqDecoder(nn.Module): - """Base class for decoders.""" - - def __init__(self, dictionary): - super().__init__() - self.dictionary = dictionary - self.onnx_trace = False - self.adaptive_softmax = None - - def forward(self, prev_output_tokens, encoder_out=None, **kwargs): - """ - Args: - prev_output_tokens (LongTensor): shifted output tokens of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (dict, optional): output from the encoder, used for - encoder-side attention - - Returns: - tuple: - - the decoder's output of shape `(batch, tgt_len, vocab)` - - a dictionary with any model-specific outputs - """ - x, extra = self.extract_features( - prev_output_tokens, encoder_out=encoder_out, **kwargs - ) - x = self.output_layer(x) - return x, extra - - def extract_features(self, prev_output_tokens, encoder_out=None, **kwargs): - """ - Returns: - tuple: - - the decoder's features of shape `(batch, tgt_len, embed_dim)` - - a dictionary with any model-specific outputs - """ - raise NotImplementedError - - def output_layer(self, features, **kwargs): - """ - Project features to the default output size, e.g., vocabulary size. - - Args: - features (Tensor): features returned by *extract_features*. - """ - raise NotImplementedError - - def get_normalized_probs( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - """Get normalized probabilities (or log probs) from a net's output.""" - return self.get_normalized_probs_scriptable(net_output, log_probs, sample) - - # TorchScript doesn't support super() method so that the scriptable Subclass - # can't access the base class model in Torchscript. - # Current workaround is to add a helper function with different name and - # call the helper function from scriptable Subclass. - def get_normalized_probs_scriptable( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - """Get normalized probabilities (or log probs) from a net's output.""" - - if hasattr(self, "adaptive_softmax") and self.adaptive_softmax is not None: - if sample is not None: - assert "target" in sample - target = sample["target"] - else: - target = None - out = self.adaptive_softmax.get_log_prob(net_output[0], target=target) - return out.exp_() if not log_probs else out - - logits = net_output[0] - if log_probs: - return utils.log_softmax(logits, dim=-1, onnx_trace=self.onnx_trace) - else: - return utils.softmax(logits, dim=-1, onnx_trace=self.onnx_trace) - - def max_positions(self): - """Maximum input length supported by the decoder.""" - return 1e6 # an arbitrary large number - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade old state dicts to work with newer code.""" - return state_dict - - def prepare_for_onnx_export_(self): - self.onnx_trace = True diff --git a/kosmos-g/fairseq/fairseq/models/fairseq_encoder.py b/kosmos-g/fairseq/fairseq/models/fairseq_encoder.py deleted file mode 100644 index 08cbde15a..000000000 --- a/kosmos-g/fairseq/fairseq/models/fairseq_encoder.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
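To make the `FairseqDecoder` contract above concrete: `forward` is just `extract_features` followed by `output_layer`, returning `(logits, extra)`. A toy sketch with plain torch modules (the hypothetical `ToyDecoder` stands in for a real subclass built around a fairseq `Dictionary`):

```python
import torch
import torch.nn as nn


class ToyDecoder(nn.Module):
    """Sketch of the FairseqDecoder contract: forward() = extract_features()
    followed by output_layer(), returning (logits, extra)."""

    def __init__(self, vocab_size: int = 100, embed_dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.proj = nn.Linear(embed_dim, vocab_size)

    def extract_features(self, prev_output_tokens):
        # (batch, tgt_len) -> (batch, tgt_len, embed_dim), plus an extras dict
        x, _ = self.rnn(self.embed(prev_output_tokens))
        return x, {"inner_states": [x]}

    def output_layer(self, features):
        # (batch, tgt_len, embed_dim) -> (batch, tgt_len, vocab)
        return self.proj(features)

    def forward(self, prev_output_tokens):
        x, extra = self.extract_features(prev_output_tokens)
        return self.output_layer(x), extra


logits, extra = ToyDecoder()(torch.randint(0, 100, (2, 5)))
assert logits.shape == (2, 5, 100)
```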
- -from typing import Dict, List, NamedTuple, Optional - -import torch -import torch.nn as nn -from torch import Tensor - - -EncoderOut = NamedTuple( - "EncoderOut", - [ - ("encoder_out", Tensor), # T x B x C - ("encoder_padding_mask", Optional[Tensor]), # B x T - ("encoder_embedding", Optional[Tensor]), # B x T x C - ("encoder_states", Optional[List[Tensor]]), # List[T x B x C] - ("src_tokens", Optional[Tensor]), # B x T - ("src_lengths", Optional[Tensor]), # B x 1 - ], -) - - -class FairseqEncoder(nn.Module): - """Base class for encoders.""" - - def __init__(self, dictionary): - super().__init__() - self.dictionary = dictionary - - def forward(self, src_tokens, src_lengths=None, **kwargs): - """ - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - src_lengths (LongTensor): lengths of each source sentence of shape - `(batch)` - """ - raise NotImplementedError - - def forward_torchscript(self, net_input: Dict[str, Tensor]): - """A TorchScript-compatible version of forward. - - Encoders which use additional arguments may want to override - this method for TorchScript compatibility. - """ - if torch.jit.is_scripting(): - return self.forward( - src_tokens=net_input["src_tokens"], - src_lengths=net_input["src_lengths"], - ) - else: - return self.forward_non_torchscript(net_input) - - @torch.jit.unused - def forward_non_torchscript(self, net_input: Dict[str, Tensor]): - encoder_input = { - k: v for k, v in net_input.items() if k != "prev_output_tokens" - } - return self.forward(**encoder_input) - - def reorder_encoder_out(self, encoder_out, new_order): - """ - Reorder encoder output according to `new_order`. - - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - `encoder_out` rearranged according to `new_order` - """ - raise NotImplementedError - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return 1e6 # an arbitrary large number - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade old state dicts to work with newer code.""" - return state_dict - - def set_num_updates(self, num_updates): - """State from trainer to pass along to model at every update.""" - - def _apply(m): - if hasattr(m, "set_num_updates") and m != self: - m.set_num_updates(num_updates) - - self.apply(_apply) diff --git a/kosmos-g/fairseq/fairseq/models/fairseq_incremental_decoder.py b/kosmos-g/fairseq/fairseq/models/fairseq_incremental_decoder.py deleted file mode 100644 index cc72a0f8f..000000000 --- a/kosmos-g/fairseq/fairseq/models/fairseq_incremental_decoder.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from typing import Dict, Optional - -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.models import FairseqDecoder -from torch import Tensor - - -logger = logging.getLogger(__name__) - - -@with_incremental_state -class FairseqIncrementalDecoder(FairseqDecoder): - """Base class for incremental decoders. - - Incremental decoding is a special mode at inference time where the Model - only receives a single timestep of input corresponding to the previous - output token (for teacher forcing) and must produce the next output - *incrementally*. 
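`reorder_encoder_out` above exists so beam search can duplicate and shuffle batch entries as beams are selected; for a typical tensor field it amounts to an `index_select` over the batch dimension. A toy illustration (batch-first here for brevity; fairseq's `EncoderOut.encoder_out` is actually `T x B x C`, so the real select is over dim 1):

```python
import torch

# One encoder output field of shape (batch, src_len, dim).
encoder_out = torch.randn(2, 7, 16)

# At the start of beam search each sentence is repeated beam_size times;
# later steps permute entries as beams survive or are pruned.
new_order = torch.tensor([0, 0, 1, 1])  # 2 sentences x beam_size 2
reordered = encoder_out.index_select(0, new_order)
assert reordered.shape == (4, 7, 16)
```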
Thus the model must cache any long-term state that is - needed about the sequence, e.g., hidden states, convolutional states, etc. - - Compared to the standard :class:`FairseqDecoder` interface, the incremental - decoder interface allows :func:`forward` functions to take an extra keyword - argument (*incremental_state*) that can be used to cache state across - time-steps. - - The :class:`FairseqIncrementalDecoder` interface also defines the - :func:`reorder_incremental_state` method, which is used during beam search - to select and reorder the incremental state based on the selection of beams. - - To learn more about how incremental decoding works, refer to `this blog - <http://www.telesens.co/2019/04/21/understanding-incremental-decoding-in-fairseq/>`_. - """ - - def __init__(self, dictionary): - super().__init__(dictionary) - - def forward( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs - ): - """ - Args: - prev_output_tokens (LongTensor): shifted output tokens of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (dict, optional): output from the encoder, used for - encoder-side attention - incremental_state (dict, optional): dictionary used for storing - state during :ref:`Incremental decoding` - - Returns: - tuple: - - the decoder's output of shape `(batch, tgt_len, vocab)` - - a dictionary with any model-specific outputs - """ - raise NotImplementedError - - def extract_features( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs - ): - """ - Returns: - tuple: - - the decoder's features of shape `(batch, tgt_len, embed_dim)` - - a dictionary with any model-specific outputs - """ - raise NotImplementedError - - def reorder_incremental_state( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - new_order: Tensor, - ): - """Reorder incremental state. - - This will be called when the order of the input has changed from the - previous time step. A typical use case is beam search, where the input - order changes between time steps based on the selection of beams. - """ - pass - - def reorder_incremental_state_scripting( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - new_order: Tensor, - ): - """Main entry point for reordering the incremental state. - - Due to limitations in TorchScript, we call this function in - :class:`fairseq.sequence_generator.SequenceGenerator` instead of - calling :func:`reorder_incremental_state` directly. - """ - for module in self.modules(): - if hasattr(module, "reorder_incremental_state"): - result = module.reorder_incremental_state(incremental_state, new_order) - if result is not None: - incremental_state = result - - def set_beam_size(self, beam_size): - """Sets the beam size in the decoder and all children.""" - if getattr(self, "_beam_size", -1) != beam_size: - seen = set() - - def apply_set_beam_size(module): - if ( - module != self - and hasattr(module, "set_beam_size") - and module not in seen - ): - seen.add(module) - module.set_beam_size(beam_size) - - self.apply(apply_set_beam_size) - self._beam_size = beam_size diff --git a/kosmos-g/fairseq/fairseq/models/fairseq_model.py b/kosmos-g/fairseq/fairseq/models/fairseq_model.py deleted file mode 100644 index 42f9134a3..000000000 --- a/kosmos-g/fairseq/fairseq/models/fairseq_model.py +++ /dev/null @@ -1,574 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
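Concretely, incremental decoding means: on every call after the first, feed only the newest token and pull the rest of the context from a cache. A minimal sketch with a hand-rolled `dict` cache (the `IncrementalGRUDecoder` below is hypothetical; fairseq instead threads `incremental_state` through `@with_incremental_state` and `SequenceGenerator`):

```python
import torch
import torch.nn as nn


class IncrementalGRUDecoder(nn.Module):
    """Sketch of incremental decoding: when incremental_state is given,
    consume only the newest token and carry hidden state across steps."""

    def __init__(self, vocab_size: int = 100, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.proj = nn.Linear(dim, vocab_size)

    def forward(self, prev_output_tokens, incremental_state=None):
        if incremental_state is not None:
            prev_output_tokens = prev_output_tokens[:, -1:]   # newest step only
            hidden = incremental_state.get("hidden")          # cached state or None
        else:
            hidden = None
        x, hidden = self.rnn(self.embed(prev_output_tokens), hidden)
        if incremental_state is not None:
            incremental_state["hidden"] = hidden              # cache for next call
        return self.proj(x)


decoder = IncrementalGRUDecoder()
state: dict = {}
tokens = torch.zeros(1, 1, dtype=torch.long)  # start-of-sequence token
for _ in range(5):  # greedy loop: one timestep per forward call
    logits = decoder(tokens, incremental_state=state)
    next_tok = logits[:, -1].argmax(-1, keepdim=True)
    tokens = torch.cat([tokens, next_tok], dim=1)
```

Beam search then reorders that cache between steps, which is exactly what `reorder_incremental_state` implements; for the GRU above it would be `hidden.index_select(1, new_order)`, since GRU hidden state keeps batch in dim 1.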
-""" -Base classes for various fairseq models. -""" - -import logging -from argparse import Namespace -from typing import Dict, List, Optional, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.data import Dictionary -from fairseq.dataclass.utils import ( - convert_namespace_to_omegaconf, - gen_parser_from_dataclass, -) -from fairseq.models import FairseqDecoder, FairseqEncoder -from omegaconf import DictConfig -from torch import Tensor - - -logger = logging.getLogger(__name__) - - -def check_type(module, expected_type): - if hasattr(module, "unwrapped_module"): - assert isinstance( - module.unwrapped_module, expected_type - ), f"{type(module.unwrapped_module)} != {expected_type}" - else: - assert isinstance(module, expected_type), f"{type(module)} != {expected_type}" - - -class BaseFairseqModel(nn.Module): - """Base class for fairseq models.""" - - def __init__(self): - super().__init__() - self._is_generation_fast = False - - @classmethod - def add_args(cls, parser): - """Add model-specific arguments to the parser.""" - dc = getattr(cls, "__dataclass", None) - if dc is not None: - # do not set defaults so that settings defaults from various architectures still works - gen_parser_from_dataclass(parser, dc(), delete_default=True) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - raise NotImplementedError("Model must implement the build_model method") - - def get_targets(self, sample, net_output): - """Get targets from either the sample or the net's output.""" - return sample["target"] - - def get_normalized_probs( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - """Get normalized probabilities (or log probs) from a net's output.""" - return self.get_normalized_probs_scriptable(net_output, log_probs, sample) - - # TorchScript doesn't support super() method so that the scriptable Subclass - # can't access the base class model in Torchscript. - # Current workaround is to add a helper function with different name and - # call the helper function from scriptable Subclass. - def get_normalized_probs_scriptable( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - """Scriptable helper function for get_normalized_probs in ~BaseFairseqModel""" - if hasattr(self, "decoder"): - return self.decoder.get_normalized_probs(net_output, log_probs, sample) - elif torch.is_tensor(net_output): - # syntactic sugar for simple models which don't have a decoder - # (e.g., the classification tutorial) - logits = net_output.float() - if log_probs: - return F.log_softmax(logits, dim=-1) - else: - return F.softmax(logits, dim=-1) - raise NotImplementedError - - def extract_features(self, *args, **kwargs): - """Similar to *forward* but only return features.""" - return self(*args, **kwargs) - - def max_positions(self): - """Maximum length supported by the model.""" - return None - - def load_state_dict( - self, - state_dict, - strict=True, - model_cfg: Optional[DictConfig] = None, - args: Optional[Namespace] = None, - ): - """Copies parameters and buffers from *state_dict* into this module and - its descendants. - - Overrides the method in :class:`nn.Module`. Compared with that method - this additionally "upgrades" *state_dicts* from old checkpoints. 
- """ - - if model_cfg is None and args is not None: - logger.warn( - "using 'args' is deprecated, please update your code to use dataclass config" - ) - model_cfg = convert_namespace_to_omegaconf(args).model - - self.upgrade_state_dict(state_dict) - - from fairseq.checkpoint_utils import prune_state_dict - - new_state_dict = prune_state_dict(state_dict, model_cfg) - return super().load_state_dict(new_state_dict, strict) - - def upgrade_state_dict(self, state_dict): - """Upgrade old state dicts to work with newer code.""" - self.upgrade_state_dict_named(state_dict, "") - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade old state dicts to work with newer code. - - Args: - state_dict (dict): state dictionary to upgrade, in place - name (str): the state dict key corresponding to the current module - """ - assert state_dict is not None - - def do_upgrade(m, prefix): - if len(prefix) > 0: - prefix += "." - - for n, c in m.named_children(): - name = prefix + n - if hasattr(c, "upgrade_state_dict_named"): - c.upgrade_state_dict_named(state_dict, name) - elif hasattr(c, "upgrade_state_dict"): - c.upgrade_state_dict(state_dict) - do_upgrade(c, name) - - do_upgrade(self, name) - - def set_num_updates(self, num_updates): - """State from trainer to pass along to model at every update.""" - for m in self.modules(): - if hasattr(m, "set_num_updates") and m != self: - m.set_num_updates(num_updates) - - def prepare_for_inference_(self, cfg: DictConfig): - """Prepare model for inference.""" - kwargs = {} - kwargs["beamable_mm_beam_size"] = ( - None - if getattr(cfg.generation, "no_beamable_mm", False) - else getattr(cfg.generation, "beam", 5) - ) - kwargs["need_attn"] = getattr(cfg.generation, "print_alignment", False) - if getattr(cfg.generation, "retain_dropout", False): - kwargs["retain_dropout"] = cfg.generation.retain_dropout - kwargs["retain_dropout_modules"] = cfg.generation.retain_dropout_modules - self.make_generation_fast_(**kwargs) - - def make_generation_fast_(self, **kwargs): - """ - Legacy entry point to optimize model for faster generation. - Prefer prepare_for_inference_. - """ - if self._is_generation_fast: - return # only apply once - self._is_generation_fast = True - - # remove weight norm from all modules in the network - def apply_remove_weight_norm(module): - try: - nn.utils.remove_weight_norm(module) - except (AttributeError, ValueError): # this module didn't have weight norm - return - - self.apply(apply_remove_weight_norm) - - def apply_make_generation_fast_(module, prefix): - if len(prefix) > 0: - prefix += "." 
- - base_func = BaseFairseqModel.make_generation_fast_ - for n, m in module.named_modules(): - if ( - m != self - and hasattr(m, "make_generation_fast_") - # don't call this implementation again, e.g., if - # children modules also inherit from BaseFairseqModel - and m.make_generation_fast_.__func__ is not base_func - ): - name = prefix + n - m.make_generation_fast_(name=name, **kwargs) - - apply_make_generation_fast_(self, "") - - def train(mode=True): - if mode: - raise RuntimeError("cannot train after make_generation_fast") - - # this model should no longer be used for training - self.eval() - self.train = train - - def prepare_for_onnx_export_(self, **kwargs): - """Make model exportable via ONNX trace.""" - seen = set() - - def apply_prepare_for_onnx_export_(module): - if ( - module != self - and hasattr(module, "prepare_for_onnx_export_") - and module not in seen - ): - seen.add(module) - module.prepare_for_onnx_export_(**kwargs) - - self.apply(apply_prepare_for_onnx_export_) - - @classmethod - def from_pretrained( - cls, - model_name_or_path, - checkpoint_file="model.pt", - data_name_or_path=".", - **kwargs, - ): - """ - Load a :class:`~fairseq.models.FairseqModel` from a pre-trained model - file. Downloads and caches the pre-trained model file if needed. - - The base implementation returns a - :class:`~fairseq.hub_utils.GeneratorHubInterface`, which can be used to - generate translations or sample from language models. The underlying - :class:`~fairseq.models.FairseqModel` can be accessed via the - *generator.models* attribute. - - Other models may override this to implement custom hub interfaces. - - Args: - model_name_or_path (str): either the name of a pre-trained model to - load or a path/URL to a pre-trained model state dict - checkpoint_file (str, optional): colon-separated list of checkpoint - files in the model archive to ensemble (default: 'model.pt') - data_name_or_path (str, optional): point args.data to the archive - at the given path/URL. Can start with '.' or './' to reuse the - model archive path. - """ - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - archive_map=cls.hub_models(), - **kwargs, - ) - logger.info(x["args"]) - return hub_utils.GeneratorHubInterface(x["args"], x["task"], x["models"]) - - @classmethod - def hub_models(cls): - return {} - - -class FairseqEncoderDecoderModel(BaseFairseqModel): - """Base class for encoder-decoder models. - - Args: - encoder (FairseqEncoder): the encoder - decoder (FairseqDecoder): the decoder - """ - - def __init__(self, encoder, decoder): - super().__init__() - - self.encoder = encoder - self.decoder = decoder - - check_type(self.encoder, FairseqEncoder) - check_type(self.decoder, FairseqDecoder) - - def forward(self, src_tokens, src_lengths, prev_output_tokens, **kwargs): - """ - Run the forward pass for an encoder-decoder model. - - First feed a batch of source tokens through the encoder. 
Then, feed the - encoder output and previous decoder outputs (i.e., teacher forcing) to - the decoder to produce the next outputs:: - - encoder_out = self.encoder(src_tokens, src_lengths) - return self.decoder(prev_output_tokens, encoder_out) - - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - src_lengths (LongTensor): source sentence lengths of shape `(batch)` - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - - Returns: - tuple: - - the decoder's output of shape `(batch, tgt_len, vocab)` - - a dictionary with any model-specific outputs - """ - encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) - decoder_out = self.decoder( - prev_output_tokens, encoder_out=encoder_out, **kwargs - ) - return decoder_out - - def forward_decoder(self, prev_output_tokens, **kwargs): - return self.decoder(prev_output_tokens, **kwargs) - - def extract_features(self, src_tokens, src_lengths, prev_output_tokens, **kwargs): - """ - Similar to *forward* but only return features. - - Returns: - tuple: - - the decoder's features of shape `(batch, tgt_len, embed_dim)` - - a dictionary with any model-specific outputs - """ - encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) - features = self.decoder.extract_features( - prev_output_tokens, encoder_out=encoder_out, **kwargs - ) - return features - - def output_layer(self, features, **kwargs): - """Project features to the default output size (typically vocabulary size).""" - return self.decoder.output_layer(features, **kwargs) - - def max_positions(self): - """Maximum length supported by the model.""" - return (self.encoder.max_positions(), self.decoder.max_positions()) - - def max_decoder_positions(self): - """Maximum length supported by the decoder.""" - return self.decoder.max_positions() - - -class FairseqModel(FairseqEncoderDecoderModel): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - utils.deprecation_warning( - "FairseqModel is deprecated, please use FairseqEncoderDecoderModel " - "or BaseFairseqModel instead", - stacklevel=4, - ) - - -class FairseqMultiModel(BaseFairseqModel): - """Base class for combining multiple encoder-decoder models.""" - - def __init__(self, encoders, decoders): - super().__init__() - assert encoders.keys() == decoders.keys() - self.keys = list(encoders.keys()) - for key in self.keys: - check_type(encoders[key], FairseqEncoder) - check_type(decoders[key], FairseqDecoder) - - self.models = nn.ModuleDict( - { - key: FairseqEncoderDecoderModel(encoders[key], decoders[key]) - for key in self.keys - } - ) - - @staticmethod - def build_shared_embeddings( - dicts: Dict[str, Dictionary], - langs: List[str], - embed_dim: int, - build_embedding: callable, - pretrained_embed_path: Optional[str] = None, - ): - """ - Helper function to build shared embeddings for a set of languages after - checking that all dicts corresponding to those languages are equivalent. 
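The joined-dictionary requirement enforced here exists because a single `nn.Embedding` must index identically for every language that shares it. A toy sketch of the check-then-build flow (`build_embedding` below is a hypothetical stand-in for the callable fairseq passes in):

```python
import torch.nn as nn


def build_embedding(dictionary, embed_dim, pretrained_path=None):
    # Hypothetical stand-in: a real builder would honor dictionary.pad()
    # and optionally load pretrained vectors from pretrained_path.
    return nn.Embedding(len(dictionary), embed_dim, padding_idx=0)


# Toy "dictionaries" as token lists; fairseq's Dictionary defines __eq__,
# so the equality check below mirrors the real one only in spirit.
dicts = {"en": ["<pad>", "hello", "world"], "de": ["<pad>", "hello", "world"]}
langs = ["en", "de"]

shared_dict = dicts[langs[0]]
if any(dicts[lang] != shared_dict for lang in langs):
    raise ValueError("--share-*-embeddings requires a joined dictionary")

shared_embed = build_embedding(shared_dict, embed_dim=8)
# every encoder/decoder in `langs` then reuses `shared_embed`
```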
- - Args: - dicts: Dict of lang_id to its corresponding Dictionary - langs: languages that we want to share embeddings for - embed_dim: embedding dimension - build_embedding: callable function to actually build the embedding - pretrained_embed_path: Optional path to load pretrained embeddings - """ - shared_dict = dicts[langs[0]] - if any(dicts[lang] != shared_dict for lang in langs): - raise ValueError( - "--share-*-embeddings requires a joined dictionary: " - "--share-encoder-embeddings requires a joined source " - "dictionary, --share-decoder-embeddings requires a joined " - "target dictionary, and --share-all-embeddings requires a " - "joint source + target dictionary." - ) - return build_embedding(shared_dict, embed_dim, pretrained_embed_path) - - def forward(self, src_tokens, src_lengths, prev_output_tokens, **kwargs): - raise NotImplementedError - - def max_positions(self): - """Maximum length supported by the model.""" - return { - key: ( - self.models[key].encoder.max_positions(), - self.models[key].decoder.max_positions(), - ) - for key in self.keys - } - - def max_decoder_positions(self): - """Maximum length supported by the decoder.""" - return min(model.decoder.max_positions() for model in self.models.values()) - - @property - def encoder(self): - return self.models[self.keys[0]].encoder - - @property - def decoder(self): - return self.models[self.keys[0]].decoder - - def forward_decoder(self, prev_output_tokens, **kwargs): - return self.decoder(prev_output_tokens, **kwargs) - - def load_state_dict( - self, - state_dict, - strict=True, - model_cfg=None, - args: Optional[Namespace] = None, - ): - """Copies parameters and buffers from *state_dict* into this module and - its descendants. - - Overrides the method in :class:`nn.Module`. Compared with that method - this additionally "upgrades" *state_dicts* from old checkpoints. - """ - - if model_cfg is None and args is not None: - logger.warn( - "using 'args' is deprecated, please update your code to use dataclass config" - ) - model_cfg = convert_namespace_to_omegaconf(args).model - - self.upgrade_state_dict(state_dict) - - from fairseq.checkpoint_utils import prune_state_dict - - new_state_dict = prune_state_dict(state_dict, model_cfg) - return super().load_state_dict(new_state_dict, strict) - - -class FairseqLanguageModel(BaseFairseqModel): - """Base class for decoder-only models. - - Args: - decoder (FairseqDecoder): the decoder - """ - - def __init__(self, decoder): - super().__init__() - self.decoder = decoder - check_type(self.decoder, FairseqDecoder) - - def forward(self, src_tokens, **kwargs): - """ - Run the forward pass for a decoder-only model. - - Feeds a batch of tokens through the decoder to predict the next tokens. - - Args: - src_tokens (LongTensor): tokens on which to condition the decoder, - of shape `(batch, tgt_len)` - src_lengths (LongTensor): source sentence lengths of shape `(batch)` - - Returns: - tuple: - - the decoder's output of shape `(batch, seq_len, vocab)` - - a dictionary with any model-specific outputs - """ - return self.decoder(src_tokens, **kwargs) - - def forward_decoder(self, prev_output_tokens, **kwargs): - return self.decoder(prev_output_tokens, **kwargs) - - def extract_features(self, src_tokens, **kwargs): - """ - Similar to *forward* but only return features. 
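A decoder-only model as defined above is trained against "future" tokens: the target at position *t* is the input token at *t + 1*. A toy illustration of that shifted cross-entropy (random logits stand in for a real model's output):

```python
import torch
import torch.nn.functional as F

vocab = 100
tokens = torch.randint(0, vocab, (2, 8))               # (batch, seq_len)
logits = torch.randn(2, 8, vocab, requires_grad=True)  # toy model output

# Position t predicts token t+1: drop the last prediction and the first token.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab),
    tokens[:, 1:].reshape(-1),
)
loss.backward()
```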
- - Returns: - tuple: - - the decoder's features of shape `(batch, seq_len, embed_dim)` - - a dictionary with any model-specific outputs - """ - return self.decoder.extract_features(src_tokens, **kwargs) - - def output_layer(self, features, **kwargs): - """Project features to the default output size (typically vocabulary size).""" - return self.decoder.output_layer(features, **kwargs) - - def max_positions(self): - """Maximum length supported by the model.""" - return self.decoder.max_positions() - - def max_decoder_positions(self): - """Maximum length supported by the decoder.""" - return self.decoder.max_positions() - - @property - def supported_targets(self): - return {"future"} - - -class FairseqEncoderModel(BaseFairseqModel): - """Base class for encoder-only models. - - Args: - encoder (FairseqEncoder): the encoder - """ - - def __init__(self, encoder): - super().__init__() - self.encoder = encoder - check_type(self.encoder, FairseqEncoder) - - def forward(self, src_tokens, src_lengths, **kwargs): - """ - Run the forward pass for a encoder-only model. - - Feeds a batch of tokens through the encoder to generate features. - - Args: - src_tokens (LongTensor): input tokens of shape `(batch, src_len)` - src_lengths (LongTensor): source sentence lengths of shape `(batch)` - - Returns: - the encoder's output, typically of shape `(batch, src_len, features)` - """ - return self.encoder(src_tokens, src_lengths, **kwargs) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - """Get normalized probabilities (or log probs) from a net's output.""" - encoder_out = net_output["encoder_out"] - if torch.is_tensor(encoder_out): - logits = encoder_out.float() - if log_probs: - return F.log_softmax(logits, dim=-1) - else: - return F.softmax(logits, dim=-1) - raise NotImplementedError - - def max_positions(self): - """Maximum length supported by the model.""" - return self.encoder.max_positions() diff --git a/kosmos-g/fairseq/fairseq/models/fconv.py b/kosmos-g/fairseq/fairseq/models/fconv.py deleted file mode 100644 index c99a21510..000000000 --- a/kosmos-g/fairseq/fairseq/models/fconv.py +++ /dev/null @@ -1,756 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, - register_model, - register_model_architecture, -) -from fairseq.modules import ( - AdaptiveSoftmax, - BeamableMM, - FairseqDropout, - GradMultiply, - LearnedPositionalEmbedding, - LinearizedConvolution, -) - - -@register_model("fconv") -class FConvModel(FairseqEncoderDecoderModel): - """ - A fully convolutional model, i.e. a convolutional encoder and a - convolutional decoder, as described in `"Convolutional Sequence to Sequence - Learning" (Gehring et al., 2017) <https://arxiv.org/abs/1705.03122>`_. - - Args: - encoder (FConvEncoder): the encoder - decoder (FConvDecoder): the decoder - - The Convolutional model provides the following named architectures and - command-line arguments: - - .. 
argparse:: - :ref: fairseq.models.fconv_parser - :prog: - """ - - @classmethod - def hub_models(cls): - def moses_subword(path): - return { - "path": path, - "tokenizer": "moses", - "bpe": "subword_nmt", - } - - return { - "conv.wmt14.en-fr": moses_subword( - "https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2" - ), - "conv.wmt14.en-de": moses_subword( - "https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-de.fconv-py.tar.bz2" - ), - "conv.wmt17.en-de": moses_subword( - "https://dl.fbaipublicfiles.com/fairseq/models/wmt17.v2.en-de.fconv-py.tar.bz2" - ), - } - - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - self.encoder.num_attention_layers = sum( - layer is not None for layer in decoder.attention - ) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--dropout', type=float, metavar='D', - help='dropout probability') - parser.add_argument('--encoder-embed-dim', type=int, metavar='N', - help='encoder embedding dimension') - parser.add_argument('--encoder-embed-path', type=str, metavar='STR', - help='path to pre-trained encoder embedding') - parser.add_argument('--encoder-layers', type=str, metavar='EXPR', - help='encoder layers [(dim, kernel_size), ...]') - parser.add_argument('--decoder-embed-dim', type=int, metavar='N', - help='decoder embedding dimension') - parser.add_argument('--decoder-embed-path', type=str, metavar='STR', - help='path to pre-trained decoder embedding') - parser.add_argument('--decoder-layers', type=str, metavar='EXPR', - help='decoder layers [(dim, kernel_size), ...]') - parser.add_argument('--decoder-out-embed-dim', type=int, metavar='N', - help='decoder output embedding dimension') - parser.add_argument('--decoder-attention', type=str, metavar='EXPR', - help='decoder attention [True, ...]') - parser.add_argument('--share-input-output-embed', action='store_true', - help='share input and output embeddings (requires' - ' --decoder-out-embed-dim and --decoder-embed-dim' - ' to be equal)') - # fmt: on - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure that all args are properly defaulted (in case there are any new ones) - base_architecture(args) - - encoder_embed_dict = None - if args.encoder_embed_path: - encoder_embed_dict = utils.parse_embedding(args.encoder_embed_path) - utils.print_embed_overlap(encoder_embed_dict, task.source_dictionary) - - decoder_embed_dict = None - if args.decoder_embed_path: - decoder_embed_dict = utils.parse_embedding(args.decoder_embed_path) - utils.print_embed_overlap(decoder_embed_dict, task.target_dictionary) - - encoder = FConvEncoder( - dictionary=task.source_dictionary, - embed_dim=args.encoder_embed_dim, - embed_dict=encoder_embed_dict, - convolutions=eval(args.encoder_layers), - dropout=args.dropout, - max_positions=args.max_source_positions, - ) - decoder = FConvDecoder( - dictionary=task.target_dictionary, - embed_dim=args.decoder_embed_dim, - embed_dict=decoder_embed_dict, - convolutions=eval(args.decoder_layers), - out_embed_dim=args.decoder_out_embed_dim, - attention=eval(args.decoder_attention), - dropout=args.dropout, - max_positions=args.max_target_positions, - share_embed=args.share_input_output_embed, - ) - return FConvModel(encoder, decoder) - - -class FConvEncoder(FairseqEncoder): - """ - Convolutional encoder consisting of `len(convolutions)` layers. 
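The layer options above (`--encoder-layers`, `--decoder-layers`) are Python-literal strings that `build_model` expands with `eval`; `extend_conv_spec`, defined further down in this file, then pads any 2-tuple with a default residual distance of 1. A quick sketch of that interpretation (the local `extend_conv_spec` below just mirrors the deleted helper):

```python
# e.g. the fconv_wmt_en_de encoder spec
spec = eval("[(512, 3)] * 9 + [(1024, 3)] * 4 + [(2048, 1)] * 2")


def extend_conv_spec(convolutions):
    # mirror of fconv.extend_conv_spec: (dim, kernel) -> (dim, kernel, residual=1)
    extended = []
    for s in convolutions:
        if len(s) == 3:
            extended.append(s)
        elif len(s) == 2:
            extended.append(s + (1,))
        else:
            raise ValueError(f"invalid convolution spec: {s}")
    return tuple(extended)


print(len(spec))                  # 15 layers
print(extend_conv_spec(spec)[0])  # (512, 3, 1)
```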
- - Args: - dictionary (~fairseq.data.Dictionary): encoding dictionary - embed_dim (int, optional): embedding dimension - embed_dict (str, optional): filename from which to load pre-trained - embeddings - max_positions (int, optional): maximum supported input sequence length - convolutions (list, optional): the convolutional layer structure. Each - list item `i` corresponds to convolutional layer `i`. Layers are - given as ``(out_channels, kernel_width, [residual])``. Residual - connections are added between layers when ``residual=1`` (which is - the default behavior). - dropout (float, optional): dropout to be applied before each conv layer - """ - - def __init__( - self, - dictionary, - embed_dim=512, - embed_dict=None, - max_positions=1024, - convolutions=((512, 3),) * 20, - dropout=0.1, - ): - super().__init__(dictionary) - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.num_attention_layers = None - - num_embeddings = len(dictionary) - self.padding_idx = dictionary.pad() - self.embed_tokens = Embedding(num_embeddings, embed_dim, self.padding_idx) - if embed_dict: - self.embed_tokens = utils.load_embedding( - embed_dict, self.dictionary, self.embed_tokens - ) - - self.embed_positions = PositionalEmbedding( - max_positions, - embed_dim, - self.padding_idx, - ) - - convolutions = extend_conv_spec(convolutions) - in_channels = convolutions[0][0] - self.fc1 = Linear(embed_dim, in_channels, dropout=dropout) - self.projections = nn.ModuleList() - self.convolutions = nn.ModuleList() - self.residuals = [] - - layer_in_channels = [in_channels] - for _, (out_channels, kernel_size, residual) in enumerate(convolutions): - if residual == 0: - residual_dim = out_channels - else: - residual_dim = layer_in_channels[-residual] - self.projections.append( - Linear(residual_dim, out_channels) - if residual_dim != out_channels - else None - ) - if kernel_size % 2 == 1: - padding = kernel_size // 2 - else: - padding = 0 - self.convolutions.append( - ConvTBC( - in_channels, - out_channels * 2, - kernel_size, - dropout=dropout, - padding=padding, - ) - ) - self.residuals.append(residual) - in_channels = out_channels - layer_in_channels.append(out_channels) - self.fc2 = Linear(in_channels, embed_dim) - - def forward(self, src_tokens, src_lengths): - """ - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - src_lengths (LongTensor): lengths of each source sentence of shape - `(batch)` - - Returns: - dict: - - **encoder_out** (tuple): a tuple with two elements, where the - first element is the last encoder layer's output and the - second element is the same quantity summed with the input - embedding (used for attention). The shape of both tensors is - `(batch, src_len, embed_dim)`. 
- - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - """ - # embed tokens and positions - x = self.embed_tokens(src_tokens) + self.embed_positions(src_tokens) - x = self.dropout_module(x) - input_embedding = x - - # project to size of convolution - x = self.fc1(x) - - # used to mask padding in input - encoder_padding_mask = src_tokens.eq(self.padding_idx).t() # -> T x B - if not encoder_padding_mask.any(): - encoder_padding_mask = None - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - residuals = [x] - # temporal convolutions - for proj, conv, res_layer in zip( - self.projections, self.convolutions, self.residuals - ): - if res_layer > 0: - residual = residuals[-res_layer] - residual = residual if proj is None else proj(residual) - else: - residual = None - - if encoder_padding_mask is not None: - x = x.masked_fill(encoder_padding_mask.unsqueeze(-1), 0) - - x = self.dropout_module(x) - if conv.kernel_size[0] % 2 == 1: - # padding is implicit in the conv - x = conv(x) - else: - padding_l = (conv.kernel_size[0] - 1) // 2 - padding_r = conv.kernel_size[0] // 2 - x = F.pad(x, (0, 0, 0, 0, padding_l, padding_r)) - x = conv(x) - x = F.glu(x, dim=2) - - if residual is not None: - x = (x + residual) * math.sqrt(0.5) - residuals.append(x) - - # T x B x C -> B x T x C - x = x.transpose(1, 0) - - # project back to size of embedding - x = self.fc2(x) - - if encoder_padding_mask is not None: - encoder_padding_mask = encoder_padding_mask.t() # -> B x T - x = x.masked_fill(encoder_padding_mask.unsqueeze(-1), 0) - - # scale gradients (this only affects backward, not forward) - x = GradMultiply.apply(x, 1.0 / (2.0 * self.num_attention_layers)) - - # add output to input embedding for attention - y = (x + input_embedding) * math.sqrt(0.5) - - return { - "encoder_out": (x, y), - "encoder_padding_mask": encoder_padding_mask, # B x T - } - - def reorder_encoder_out(self, encoder_out, new_order): - if encoder_out["encoder_out"] is not None: - encoder_out["encoder_out"] = ( - encoder_out["encoder_out"][0].index_select(0, new_order), - encoder_out["encoder_out"][1].index_select(0, new_order), - ) - if encoder_out["encoder_padding_mask"] is not None: - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(0, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return self.embed_positions.max_positions - - -class AttentionLayer(nn.Module): - def __init__(self, conv_channels, embed_dim, bmm=None): - super().__init__() - # projects from output of convolution to embedding dimension - self.in_projection = Linear(conv_channels, embed_dim) - # projects from embedding dimension to convolution size - self.out_projection = Linear(embed_dim, conv_channels) - - self.bmm = bmm if bmm is not None else torch.bmm - - def forward(self, x, target_embedding, encoder_out, encoder_padding_mask): - residual = x - - # attention - x = (self.in_projection(x) + target_embedding) * math.sqrt(0.5) - x = self.bmm(x, encoder_out[0]) - - # don't attend over padding - if encoder_padding_mask is not None: - x = ( - x.float() - .masked_fill(encoder_padding_mask.unsqueeze(1), float("-inf")) - .type_as(x) - ) # FP16 support: cast to float and back - - # softmax over last dim - sz = x.size() - x = F.softmax(x.view(sz[0] * sz[1], sz[2]), dim=1) - x = x.view(sz) - attn_scores = x - - x = self.bmm(x, encoder_out[1]) - - # scale attention output (respecting potentially different lengths) - s 
= encoder_out[1].size(1) - if encoder_padding_mask is None: - x = x * (s * math.sqrt(1.0 / s)) - else: - s = s - encoder_padding_mask.type_as(x).sum( - dim=1, keepdim=True - ) # exclude padding - s = s.unsqueeze(-1) - x = x * (s * s.rsqrt()) - - # project back - x = (self.out_projection(x) + residual) * math.sqrt(0.5) - return x, attn_scores - - def make_generation_fast_(self, beamable_mm_beam_size=None, **kwargs): - """Replace torch.bmm with BeamableMM.""" - if beamable_mm_beam_size is not None: - del self.bmm - self.add_module("bmm", BeamableMM(beamable_mm_beam_size)) - - -class FConvDecoder(FairseqIncrementalDecoder): - """Convolutional decoder""" - - def __init__( - self, - dictionary, - embed_dim=512, - embed_dict=None, - out_embed_dim=256, - max_positions=1024, - convolutions=((512, 3),) * 20, - attention=True, - dropout=0.1, - share_embed=False, - positional_embeddings=True, - adaptive_softmax_cutoff=None, - adaptive_softmax_dropout=0.0, - ): - super().__init__(dictionary) - self.register_buffer("version", torch.Tensor([2])) - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.need_attn = True - - convolutions = extend_conv_spec(convolutions) - in_channels = convolutions[0][0] - if isinstance(attention, bool): - # expand True into [True, True, ...] and do the same with False - attention = [attention] * len(convolutions) - if not isinstance(attention, list) or len(attention) != len(convolutions): - raise ValueError( - "Attention is expected to be a list of booleans of " - "length equal to the number of layers." - ) - - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - self.embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx) - if embed_dict: - self.embed_tokens = utils.load_embedding( - embed_dict, self.dictionary, self.embed_tokens - ) - - self.embed_positions = ( - PositionalEmbedding( - max_positions, - embed_dim, - padding_idx, - ) - if positional_embeddings - else None - ) - - self.fc1 = Linear(embed_dim, in_channels, dropout=dropout) - self.projections = nn.ModuleList() - self.convolutions = nn.ModuleList() - self.attention = nn.ModuleList() - self.residuals = [] - - layer_in_channels = [in_channels] - for i, (out_channels, kernel_size, residual) in enumerate(convolutions): - if residual == 0: - residual_dim = out_channels - else: - residual_dim = layer_in_channels[-residual] - self.projections.append( - Linear(residual_dim, out_channels) - if residual_dim != out_channels - else None - ) - self.convolutions.append( - LinearizedConv1d( - in_channels, - out_channels * 2, - kernel_size, - padding=(kernel_size - 1), - dropout=dropout, - ) - ) - self.attention.append( - AttentionLayer(out_channels, embed_dim) if attention[i] else None - ) - self.residuals.append(residual) - in_channels = out_channels - layer_in_channels.append(out_channels) - - self.adaptive_softmax = None - self.fc2 = self.fc3 = None - - if adaptive_softmax_cutoff is not None: - assert not share_embed - self.adaptive_softmax = AdaptiveSoftmax( - num_embeddings, - in_channels, - adaptive_softmax_cutoff, - dropout=adaptive_softmax_dropout, - ) - else: - self.fc2 = Linear(in_channels, out_embed_dim) - if share_embed: - assert out_embed_dim == embed_dim, ( - "Shared embed weights implies same dimensions " - " out_embed_dim={} vs embed_dim={}".format(out_embed_dim, embed_dim) - ) - self.fc3 = nn.Linear(out_embed_dim, num_embeddings) - self.fc3.weight = self.embed_tokens.weight - else: - self.fc3 = Linear(out_embed_dim, num_embeddings, 
dropout=dropout) - - def forward( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **unused - ): - if encoder_out is not None: - encoder_padding_mask = encoder_out["encoder_padding_mask"] - encoder_out = encoder_out["encoder_out"] - - # split and transpose encoder outputs - encoder_a, encoder_b = self._split_encoder_out( - encoder_out, incremental_state - ) - - if self.embed_positions is not None: - pos_embed = self.embed_positions(prev_output_tokens, incremental_state) - else: - pos_embed = 0 - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - x = self._embed_tokens(prev_output_tokens, incremental_state) - - # embed tokens and combine with positional embeddings - x += pos_embed - x = self.dropout_module(x) - target_embedding = x - - # project to size of convolution - x = self.fc1(x) - - # B x T x C -> T x B x C - x = self._transpose_if_training(x, incremental_state) - - # temporal convolutions - avg_attn_scores = None - num_attn_layers = len(self.attention) - residuals = [x] - for proj, conv, attention, res_layer in zip( - self.projections, self.convolutions, self.attention, self.residuals - ): - if res_layer > 0: - residual = residuals[-res_layer] - residual = residual if proj is None else proj(residual) - else: - residual = None - - x = self.dropout_module(x) - x = conv(x, incremental_state) - x = F.glu(x, dim=2) - - # attention - if attention is not None: - x = self._transpose_if_training(x, incremental_state) - - x, attn_scores = attention( - x, target_embedding, (encoder_a, encoder_b), encoder_padding_mask - ) - - if not self.training and self.need_attn: - attn_scores = attn_scores / num_attn_layers - if avg_attn_scores is None: - avg_attn_scores = attn_scores - else: - avg_attn_scores.add_(attn_scores) - - x = self._transpose_if_training(x, incremental_state) - - # residual - if residual is not None: - x = (x + residual) * math.sqrt(0.5) - residuals.append(x) - - # T x B x C -> B x T x C - x = self._transpose_if_training(x, incremental_state) - - # project back to size of vocabulary if not using adaptive softmax - if self.fc2 is not None and self.fc3 is not None: - x = self.fc2(x) - x = self.dropout_module(x) - x = self.fc3(x) - - return x, avg_attn_scores - - def reorder_incremental_state(self, incremental_state, new_order): - super().reorder_incremental_state(incremental_state, new_order) - encoder_out = utils.get_incremental_state( - self, incremental_state, "encoder_out" - ) - if encoder_out is not None: - encoder_out = tuple(eo.index_select(0, new_order) for eo in encoder_out) - utils.set_incremental_state( - self, incremental_state, "encoder_out", encoder_out - ) - - def max_positions(self): - """Maximum output length supported by the decoder.""" - return ( - self.embed_positions.max_positions - if self.embed_positions is not None - else float("inf") - ) - - def upgrade_state_dict(self, state_dict): - if utils.item(state_dict.get("decoder.version", torch.Tensor([1]))[0]) < 2: - # old models use incorrect weight norm dimension - for i, conv in enumerate(self.convolutions): - # reconfigure weight norm - nn.utils.remove_weight_norm(conv) - self.convolutions[i] = nn.utils.weight_norm(conv, dim=0) - state_dict["decoder.version"] = torch.Tensor([1]) - return state_dict - - def make_generation_fast_(self, need_attn=False, **kwargs): - self.need_attn = need_attn - - def _embed_tokens(self, tokens, incremental_state): - if incremental_state is not None: - # keep only the last token for incremental forward pass - tokens = 
tokens[:, -1:] - return self.embed_tokens(tokens) - - def _split_encoder_out(self, encoder_out, incremental_state): - """Split and transpose encoder outputs. - - This is cached when doing incremental inference. - """ - cached_result = utils.get_incremental_state( - self, incremental_state, "encoder_out" - ) - if cached_result is not None: - return cached_result - - # transpose only once to speed up attention layers - encoder_a, encoder_b = encoder_out - encoder_a = encoder_a.transpose(1, 2).contiguous() - result = (encoder_a, encoder_b) - - if incremental_state is not None: - utils.set_incremental_state(self, incremental_state, "encoder_out", result) - return result - - def _transpose_if_training(self, x, incremental_state): - if incremental_state is None: - x = x.transpose(0, 1) - return x - - -def extend_conv_spec(convolutions): - """ - Extends convolutional spec that is a list of tuples of 2 or 3 parameters - (kernel size, dim size and optionally how many layers behind to look for residual) - to default the residual propagation param if it is not specified - """ - extended = [] - for spec in convolutions: - if len(spec) == 3: - extended.append(spec) - elif len(spec) == 2: - extended.append(spec + (1,)) - else: - raise Exception( - "invalid number of parameters in convolution spec " - + str(spec) - + ". expected 2 or 3" - ) - return tuple(extended) - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, 0, 0.1) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def PositionalEmbedding(num_embeddings, embedding_dim, padding_idx): - m = LearnedPositionalEmbedding(num_embeddings, embedding_dim, padding_idx) - nn.init.normal_(m.weight, 0, 0.1) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, dropout=0.0): - """Weight-normalized Linear layer (input: N x T x C)""" - m = nn.Linear(in_features, out_features) - nn.init.normal_(m.weight, mean=0, std=math.sqrt((1 - dropout) / in_features)) - nn.init.constant_(m.bias, 0) - return nn.utils.weight_norm(m) - - -def LinearizedConv1d(in_channels, out_channels, kernel_size, dropout=0.0, **kwargs): - """Weight-normalized Conv1d layer optimized for decoding""" - m = LinearizedConvolution(in_channels, out_channels, kernel_size, **kwargs) - std = math.sqrt((4 * (1.0 - dropout)) / (m.kernel_size[0] * in_channels)) - nn.init.normal_(m.weight, mean=0, std=std) - nn.init.constant_(m.bias, 0) - return nn.utils.weight_norm(m, dim=2) - - -def ConvTBC(in_channels, out_channels, kernel_size, dropout=0.0, **kwargs): - """Weight-normalized Conv1d layer""" - from fairseq.modules import ConvTBC - - m = ConvTBC(in_channels, out_channels, kernel_size, **kwargs) - std = math.sqrt((4 * (1.0 - dropout)) / (m.kernel_size[0] * in_channels)) - nn.init.normal_(m.weight, mean=0, std=std) - nn.init.constant_(m.bias, 0) - return nn.utils.weight_norm(m, dim=2) - - -@register_model_architecture("fconv", "fconv") -def base_architecture(args): - args.dropout = getattr(args, "dropout", 0.1) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_layers = getattr(args, "encoder_layers", "[(512, 3)] * 20") - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_layers = getattr(args, "decoder_layers", "[(512, 3)] * 20") - 
args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 256) - args.decoder_attention = getattr(args, "decoder_attention", "True") - args.share_input_output_embed = getattr(args, "share_input_output_embed", False) - - -@register_model_architecture("fconv", "fconv_iwslt_de_en") -def fconv_iwslt_de_en(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256) - args.encoder_layers = getattr(args, "encoder_layers", "[(256, 3)] * 4") - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256) - args.decoder_layers = getattr(args, "decoder_layers", "[(256, 3)] * 3") - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 256) - base_architecture(args) - - -@register_model_architecture("fconv", "fconv_wmt_en_ro") -def fconv_wmt_en_ro(args): - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 512) - base_architecture(args) - - -@register_model_architecture("fconv", "fconv_wmt_en_de") -def fconv_wmt_en_de(args): - convs = "[(512, 3)] * 9" # first 9 layers have 512 units - convs += " + [(1024, 3)] * 4" # next 4 layers have 1024 units - convs += " + [(2048, 1)] * 2" # final 2 layers use 1x1 convolutions - - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 768) - args.encoder_layers = getattr(args, "encoder_layers", convs) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 768) - args.decoder_layers = getattr(args, "decoder_layers", convs) - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 512) - base_architecture(args) - - -@register_model_architecture("fconv", "fconv_wmt_en_fr") -def fconv_wmt_en_fr(args): - convs = "[(512, 3)] * 6" # first 6 layers have 512 units - convs += " + [(768, 3)] * 4" # next 4 layers have 768 units - convs += " + [(1024, 3)] * 3" # next 3 layers have 1024 units - convs += " + [(2048, 1)] * 1" # next 1 layer uses 1x1 convolutions - convs += " + [(4096, 1)] * 1" # final 1 layer uses 1x1 convolutions - - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 768) - args.encoder_layers = getattr(args, "encoder_layers", convs) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 768) - args.decoder_layers = getattr(args, "decoder_layers", convs) - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 512) - base_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/fconv_lm.py b/kosmos-g/fairseq/fairseq/models/fconv_lm.py deleted file mode 100644 index 4b243d666..000000000 --- a/kosmos-g/fairseq/fairseq/models/fconv_lm.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
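Before the language-model variant below, one pattern worth spelling out: every `@register_model_architecture` function above sets a default only when the flag was not supplied on the command line, which lets presets such as `fconv_iwslt_de_en` layer on top of `base_architecture`. A toy illustration (`base_arch`/`small_arch` are illustrative stand-ins, not fairseq functions):

```python
from argparse import Namespace

def base_arch(args):
    args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512)
    args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 256)

def small_arch(args):
    # preset: shrink the decoder, then let base_arch fill in the rest
    args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256)
    base_arch(args)

args = Namespace(decoder_out_embed_dim=1024)  # simulated CLI override
small_arch(args)
assert (args.decoder_embed_dim, args.decoder_out_embed_dim) == (256, 1024)
```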
- -from fairseq import utils -from fairseq.models import ( - FairseqLanguageModel, - register_model, - register_model_architecture, -) -from fairseq.models.fconv import FConvDecoder -from fairseq.utils import safe_hasattr - - -@register_model("fconv_lm") -class FConvLanguageModel(FairseqLanguageModel): - def __init__(self, decoder): - super().__init__(decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-layers", - type=str, - metavar="EXPR", - help="decoder layers [(dim, kernel_size), ...]", - ) - parser.add_argument( - "--decoder-out-embed-dim", - type=int, - metavar="N", - help="decoder output embedding dimension", - ) - parser.add_argument( - "--adaptive-softmax-cutoff", - metavar="EXPR", - help="comma separated list of adaptive softmax cutoff points. " - "Must be used with adaptive_loss criterion", - ) - parser.add_argument( - "--adaptive-softmax-dropout", - type=float, - metavar="D", - help="sets adaptive softmax dropout for the tail projections", - ) - parser.add_argument( - "--decoder-attention", - type=str, - metavar="EXPR", - help="decoder attention [True, ...]", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure all arguments are present in older models - base_lm_architecture(args) - - if safe_hasattr(args, "max_target_positions") and not safe_hasattr( - args, "tokens_per_sample" - ): - args.tokens_per_sample = args.max_target_positions - - decoder = FConvDecoder( - dictionary=task.target_dictionary, - embed_dim=args.decoder_embed_dim, - convolutions=eval(args.decoder_layers), - out_embed_dim=args.decoder_embed_dim, - attention=eval(args.decoder_attention), - dropout=args.dropout, - max_positions=args.tokens_per_sample, - share_embed=False, - positional_embeddings=False, - adaptive_softmax_cutoff=( - utils.eval_str_list(args.adaptive_softmax_cutoff, type=int) - if args.criterion == "adaptive_loss" - else None - ), - adaptive_softmax_dropout=args.adaptive_softmax_dropout, - ) - return FConvLanguageModel(decoder) - - -@register_model_architecture("fconv_lm", "fconv_lm") -def base_lm_architecture(args): - args.dropout = getattr(args, "dropout", 0.1) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 128) - args.decoder_layers = getattr(args, "decoder_layers", "[(1268, 4)] * 13") - args.decoder_attention = getattr(args, "decoder_attention", "False") - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - - -@register_model_architecture("fconv_lm", "fconv_lm_dauphin_wikitext103") -def fconv_lm_dauphin_wikitext103(args): - layers = "[(850, 6)] * 3" - layers += " + [(850, 1)] * 1" - layers += " + [(850, 5)] * 4" - layers += " + [(850, 1)] * 1" - layers += " + [(850, 4)] * 3" - layers += " + [(1024, 4)] * 1" - layers += " + [(2048, 4)] * 1" - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 280) - args.decoder_layers = getattr(args, "decoder_layers", layers) - args.decoder_attention = getattr(args, "decoder_attention", "False") - args.adaptive_softmax_cutoff = getattr( - args, "adaptive_softmax_cutoff", "10000,20000,200000" - ) - base_lm_architecture(args) - - -@register_model_architecture("fconv_lm", 
"fconv_lm_dauphin_gbw") -def fconv_lm_dauphin_gbw(args): - layers = "[(512, 5)]" - layers += " + [(128, 1, 0), (128, 5, 0), (512, 1, 3)] * 3" - layers += " + [(512, 1, 0), (512, 5, 0), (1024, 1, 3)] * 3" - layers += " + [(1024, 1, 0), (1024, 5, 0), (2048, 1, 3)] * 6" - layers += " + [(1024, 1, 0), (1024, 5, 0), (4096, 1, 3)]" - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 128) - args.decoder_layers = getattr(args, "decoder_layers", layers) - args.decoder_attention = getattr(args, "decoder_attention", "False") - args.adaptive_softmax_cutoff = getattr( - args, "adaptive_softmax_cutoff", "10000,50000,200000" - ) - base_lm_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/fconv_self_att.py b/kosmos-g/fairseq/fairseq/models/fconv_self_att.py deleted file mode 100644 index 8357ef784..000000000 --- a/kosmos-g/fairseq/fairseq/models/fconv_self_att.py +++ /dev/null @@ -1,674 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import math -import os - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import checkpoint_utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.models import ( - CompositeEncoder, - FairseqDecoder, - FairseqEncoder, - FairseqEncoderDecoderModel, - register_model, - register_model_architecture, -) -from fairseq.modules import ( - DownsampledMultiHeadAttention, - FairseqDropout, - GradMultiply, - LayerNorm, - LearnedPositionalEmbedding, - LinearizedConvolution, -) - - -logger = logging.getLogger(__name__) - - -@register_model("fconv_self_att") -class FConvModelSelfAtt(FairseqEncoderDecoderModel): - @classmethod - def hub_models(cls): - return { - "conv.stories.pretrained": { - "path": "https://dl.fbaipublicfiles.com/fairseq/models/stories_checkpoint.tar.gz", - "checkpoint_file": "pretrained_checkpoint.pt", - "tokenizer": "nltk", - }, - "conv.stories": { - "path": "https://dl.fbaipublicfiles.com/fairseq/models/stories_checkpoint.tar.gz", - "checkpoint_file": "fusion_checkpoint.pt", - "tokenizer": "nltk", - "pretrained": "True", - "pretrained_checkpoint": "./pretrained_checkpoint.pt", - }, - # Test set containing dictionaries - "data.stories": "https://dl.fbaipublicfiles.com/fairseq/data/stories_test.tar.bz2", - } - - def __init__(self, encoder, decoder, pretrained_encoder=None): - super().__init__(encoder, decoder) - self.encoder.num_attention_layers = sum( - layer is not None for layer in decoder.attention - ) - self.pretrained_encoder = pretrained_encoder - if self.pretrained_encoder is None: - encoders = {"encoder": encoder} - else: - encoders = {"encoder": encoder, "pretrained": self.pretrained_encoder} - # for fusion model, CompositeEncoder contains both pretrained and training encoders - # these are forwarded and then combined in the decoder - self.encoder = CompositeEncoder(encoders) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--dropout', type=float, metavar='D', - help='dropout probability') - parser.add_argument('--encoder-embed-dim', type=int, metavar='N', - help='encoder embedding dimension') - parser.add_argument('--encoder-layers', type=str, metavar='EXPR', - help='encoder layers [(dim, kernel_size), ...]') - parser.add_argument('--decoder-embed-dim', type=int, metavar='N', - help='decoder embedding dimension') - 
parser.add_argument('--decoder-layers', type=str, metavar='EXPR', - help='decoder layers [(dim, kernel_size), ...]') - parser.add_argument('--decoder-out-embed-dim', type=int, metavar='N', - help='decoder output embedding dimension') - parser.add_argument('--decoder-attention', type=str, metavar='EXPR', - help='decoder attention [True, ...]') - parser.add_argument('--self-attention', type=str, metavar='EXPR', - help='decoder self-attention layers, ex: [True] + [False]*5') - parser.add_argument('--multihead-attention-nheads', type=int, - help='Number of heads to use in attention') - parser.add_argument('--multihead-self-attention-nheads', type=int, - help='Number of heads to use in self-attention') - parser.add_argument('--encoder-attention', type=str, metavar='EXPR', - help='encoder attention [True, ...]') - parser.add_argument('--encoder-attention-nheads', type=int, - help='Number of heads to use in encoder attention') - parser.add_argument('--project-input', type=str, metavar='EXPR', - help='Use projections in self-attention [True, ...]') - parser.add_argument('--gated-attention', type=str, metavar='EXPR', - help='Use GLU layers in self-attention projections [True, ...]') - parser.add_argument('--downsample', type=str, metavar='EXPR', - help='Use downsampling in self-attention [True, ...]') - parser.add_argument('--pretrained-checkpoint', metavar='DIR', - help='path to load checkpoint from pretrained model') - parser.add_argument('--pretrained', type=str, metavar='EXPR', - help='use pretrained model when training [True, ...]') - # fmt: on - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - trained_encoder, trained_decoder = None, None - pretrained = eval(args.pretrained) - if pretrained: - logger.info("loading pretrained model") - if not os.path.exists(args.pretrained_checkpoint): - new_pretrained_checkpoint = os.path.join( - args.data, args.pretrained_checkpoint - ) - if os.path.exists(new_pretrained_checkpoint): - args.pretrained_checkpoint = new_pretrained_checkpoint - trained_model = checkpoint_utils.load_model_ensemble( - filenames=[args.pretrained_checkpoint], - task=task, - )[0][0] - trained_decoder = list(trained_model.children())[1] - trained_encoder = list(trained_model.children())[0] - - # freeze pretrained model - for param in trained_decoder.parameters(): - param.requires_grad = False - for param in trained_encoder.parameters(): - param.requires_grad = False - - encoder = FConvEncoder( - task.source_dictionary, - embed_dim=args.encoder_embed_dim, - convolutions=eval(args.encoder_layers), - dropout=args.dropout, - max_positions=args.max_source_positions, - attention=eval(args.encoder_attention), - attention_nheads=args.encoder_attention_nheads, - ) - - decoder = FConvDecoder( - task.target_dictionary, - embed_dim=args.decoder_embed_dim, - convolutions=eval(args.decoder_layers), - out_embed_dim=args.decoder_out_embed_dim, - attention=eval(args.decoder_attention), - dropout=args.dropout, - max_positions=args.max_target_positions, - selfattention=eval(args.self_attention), - attention_nheads=args.multihead_attention_nheads, - selfattention_nheads=args.multihead_self_attention_nheads, - project_input=eval(args.project_input), - gated_attention=eval(args.gated_attention), - downsample=eval(args.downsample), - pretrained=pretrained, - trained_decoder=trained_decoder, - ) - model = FConvModelSelfAtt(encoder, decoder, trained_encoder) - - return model - - @property - def pretrained(self): - return self.pretrained_encoder is not None - - -class 
FConvEncoder(FairseqEncoder): - """Convolutional encoder""" - - def __init__( - self, - dictionary, - embed_dim=512, - max_positions=1024, - convolutions=((512, 3),) * 20, - dropout=0.1, - attention=False, - attention_nheads=1, - ): - super().__init__(dictionary) - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.num_attention_layers = None - - num_embeddings = len(dictionary) - self.padding_idx = dictionary.pad() - self.embed_tokens = Embedding(num_embeddings, embed_dim, self.padding_idx) - self.embed_positions = PositionalEmbedding( - max_positions, - embed_dim, - self.padding_idx, - ) - - def expand_bool_array(val): - if isinstance(val, bool): - # expand True into [True, True, ...] and do the same with False - return [val] * len(convolutions) - return val - - attention = expand_bool_array(attention) - - in_channels = convolutions[0][0] - self.fc1 = Linear(embed_dim, in_channels, dropout=dropout) - self.projections = nn.ModuleList() - self.convolutions = nn.ModuleList() - self.attention = nn.ModuleList() - self.attproj = nn.ModuleList() - for i, (out_channels, kernel_size) in enumerate(convolutions): - self.projections.append( - Linear(in_channels, out_channels) - if in_channels != out_channels - else None - ) - self.convolutions.append( - ConvTBC(in_channels, out_channels * 2, kernel_size, dropout=dropout) - ) - - self.attention.append( - SelfAttention(out_channels, embed_dim, attention_nheads) - if attention[i] - else None - ) - in_channels = out_channels - - self.fc2 = Linear(in_channels, embed_dim) - - def forward(self, src_tokens, src_lengths): - # embed tokens and positions - x = self.embed_tokens(src_tokens) + self.embed_positions(src_tokens) - x = self.dropout_module(x) - input_embedding = x.transpose(0, 1) - - # project to size of convolution - x = self.fc1(x) - - encoder_padding_mask = src_tokens.eq(self.padding_idx).t() # -> T x B - if not encoder_padding_mask.any(): - encoder_padding_mask = None - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # temporal convolutions - for proj, conv, attention in zip( - self.projections, self.convolutions, self.attention - ): - residual = x if proj is None else proj(x) - - if encoder_padding_mask is not None: - x = x.masked_fill(encoder_padding_mask.unsqueeze(-1), 0) - - x = self.dropout_module(x) - padding_l = (conv.kernel_size[0] - 1) // 2 - padding_r = conv.kernel_size[0] // 2 - x = F.pad(x, (0, 0, 0, 0, padding_l, padding_r)) - x = conv(x) - x = F.glu(x, dim=2) - if attention is not None: - x = attention(x) - x = (x + residual) * math.sqrt(0.5) - - # T x B x C -> B x T x C - x = x.transpose(1, 0) - - # project back to size of embedding - x = self.fc2(x) - - if encoder_padding_mask is not None: - encoder_padding_mask = encoder_padding_mask.t() # -> B x T - x = x.masked_fill(encoder_padding_mask.unsqueeze(-1), 0) - - # scale gradients (this only affects backward, not forward) - x = GradMultiply.apply(x, 1.0 / (2.0 * self.num_attention_layers)) - - # add output to input embedding for attention - y = (x + input_embedding.transpose(0, 1)) * math.sqrt(0.5) - - return { - "encoder_out": (x, y), - "encoder_padding_mask": encoder_padding_mask, # B x T - } - - def reorder_encoder_out(self, encoder_out, new_order): - encoder_out["encoder_out"] = tuple( - eo.index_select(0, new_order) for eo in encoder_out["encoder_out"] - ) - - if encoder_out["encoder_padding_mask"] is not None: - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(0, new_order) - - if 
"pretrained" in encoder_out: - encoder_out["pretrained"]["encoder_out"] = tuple( - eo.index_select(0, new_order) - for eo in encoder_out["pretrained"]["encoder_out"] - ) - - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return self.embed_positions.max_positions - - -@with_incremental_state -class FConvDecoder(FairseqDecoder): - """Convolutional decoder""" - - def __init__( - self, - dictionary, - embed_dim=512, - out_embed_dim=256, - max_positions=1024, - convolutions=((512, 3),) * 8, - attention=True, - dropout=0.1, - selfattention=False, - attention_nheads=1, - selfattention_nheads=1, - project_input=False, - gated_attention=False, - downsample=False, - pretrained=False, - trained_decoder=None, - ): - super().__init__(dictionary) - self.register_buffer("version", torch.Tensor([2])) - self.pretrained = pretrained - self.pretrained_decoder = trained_decoder - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.need_attn = True - in_channels = convolutions[0][0] - - def expand_bool_array(val): - if isinstance(val, bool): - # expand True into [True, True, ...] and do the same with False - return [val] * len(convolutions) - return val - - attention = expand_bool_array(attention) - selfattention = expand_bool_array(selfattention) - - if not isinstance(attention, list) or len(attention) != len(convolutions): - raise ValueError( - "Attention is expected to be a list of booleans of " - "length equal to the number of layers." - ) - - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - self.embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx) - - self.embed_positions = PositionalEmbedding( - max_positions, - embed_dim, - padding_idx, - ) - - self.fc1 = Linear(embed_dim, in_channels, dropout=dropout) - self.projections = nn.ModuleList() - self.convolutions = nn.ModuleList() - self.attention = nn.ModuleList() - self.selfattention = nn.ModuleList() - self.attproj = nn.ModuleList() - for i, (out_channels, kernel_size) in enumerate(convolutions): - self.projections.append( - Linear(in_channels, out_channels) - if in_channels != out_channels - else None - ) - self.convolutions.append( - LinearizedConv1d( - in_channels, - out_channels * 2, - kernel_size, - padding=(kernel_size - 1), - dropout=dropout, - ) - ) - - self.attention.append( - DownsampledMultiHeadAttention( - out_channels, - embed_dim, - attention_nheads, - project_input=project_input, - gated=False, - downsample=False, - ) - if attention[i] - else None - ) - - self.attproj.append( - Linear(out_channels, embed_dim, dropout=dropout) - if attention[i] - else None - ) - self.selfattention.append( - SelfAttention( - out_channels, - embed_dim, - selfattention_nheads, - project_input=project_input, - gated=gated_attention, - downsample=downsample, - ) - if selfattention[i] - else None - ) - in_channels = out_channels - - self.fc2 = Linear(in_channels, out_embed_dim) - self.fc3 = Linear(out_embed_dim, num_embeddings, dropout=dropout) - - # model fusion - if self.pretrained: - # independent gates are learned from the concatenated input - self.gate1 = nn.Sequential( - Linear(out_embed_dim * 2, out_embed_dim), nn.Sigmoid() - ) - self.gate2 = nn.Sequential( - Linear(out_embed_dim * 2, out_embed_dim), nn.Sigmoid() - ) - # pretrained and trained models are joined - self.joining = nn.Sequential( - Linear(out_embed_dim * 2, out_embed_dim * 2), - LayerNorm(out_embed_dim * 2), - nn.GLU(), - Linear(out_embed_dim, out_embed_dim * 2), 
- LayerNorm(out_embed_dim * 2), - nn.GLU(), - Linear(out_embed_dim, out_embed_dim), - LayerNorm(out_embed_dim), - ) - # pretrained model contains an output layer that is nhid -> vocab size - # but the models are combined in their hidden state - # the hook stores the output of the pretrained model forward - self.pretrained_outputs = {} - - def save_output(): - def hook(a, b, output): - self.pretrained_outputs["out"] = output - - return hook - - self.pretrained_decoder.fc2.register_forward_hook(save_output()) - - def forward(self, prev_output_tokens, encoder_out): - trained_encoder_out = encoder_out["pretrained"] if self.pretrained else None - encoder_out = encoder_out["encoder"]["encoder_out"] - - encoder_a, encoder_b = self._split_encoder_out(encoder_out) - - # embed positions - positions = self.embed_positions(prev_output_tokens) - - # embed tokens and positions - x = self.embed_tokens(prev_output_tokens) + positions - x = self.dropout_module(x) - target_embedding = x.transpose(0, 1) - - # project to size of convolution - x = self.fc1(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # temporal convolutions - avg_attn_scores = None - for proj, conv, attention, selfattention, attproj in zip( - self.projections, - self.convolutions, - self.attention, - self.selfattention, - self.attproj, - ): - residual = x if proj is None else proj(x) - - x = self.dropout_module(x) - x = conv(x) - x = F.glu(x, dim=2) - - # attention - if attention is not None: - r = x - x, attn_scores = attention( - attproj(x) + target_embedding, encoder_a, encoder_b - ) - x = x + r - if not self.training and self.need_attn: - if avg_attn_scores is None: - avg_attn_scores = attn_scores - else: - avg_attn_scores.add_(attn_scores) - - if selfattention is not None: - x = selfattention(x) - - x = (x + residual) * math.sqrt(0.5) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - # project back to size of vocabulary - x = self.fc2(x) - x = self.dropout_module(x) - if not self.pretrained: - x = self.fc3(x) - - # fusion gating - if self.pretrained: - trained_x, _ = self.pretrained_decoder.forward( - prev_output_tokens, trained_encoder_out - ) - y = torch.cat([x, self.pretrained_outputs["out"]], dim=-1) - gate1 = self.gate1(y) - gate2 = self.gate2(y) - gated_x1 = gate1 * x - gated_x2 = gate2 * self.pretrained_outputs["out"] - fusion = torch.cat([gated_x1, gated_x2], dim=-1) - fusion = self.joining(fusion) - fusion_output = self.fc3(fusion) - return fusion_output, avg_attn_scores - else: - return x, avg_attn_scores - - def max_positions(self): - """Maximum output length supported by the decoder.""" - return self.embed_positions.max_positions - - def make_generation_fast_(self, need_attn=False, **kwargs): - self.need_attn = need_attn - - def _split_encoder_out(self, encoder_out): - """Split and transpose encoder outputs.""" - # transpose only once to speed up attention layers - encoder_a, encoder_b = encoder_out - encoder_a = encoder_a.transpose(0, 1).contiguous() - encoder_b = encoder_b.transpose(0, 1).contiguous() - result = (encoder_a, encoder_b) - return result - - -class SelfAttention(nn.Module): - def __init__( - self, - out_channels, - embed_dim, - num_heads, - project_input=False, - gated=False, - downsample=False, - ): - super().__init__() - self.attention = DownsampledMultiHeadAttention( - out_channels, - embed_dim, - num_heads, - dropout=0, - bias=True, - project_input=project_input, - gated=gated, - downsample=downsample, - ) - self.in_proj_q = Linear(out_channels, embed_dim) - self.in_proj_k = 
Linear(out_channels, embed_dim) - self.in_proj_v = Linear(out_channels, embed_dim) - self.ln = LayerNorm(out_channels) - - def forward(self, x): - residual = x - query = self.in_proj_q(x) - key = self.in_proj_k(x) - value = self.in_proj_v(x) - x, _ = self.attention( - query, key, value, mask_future_timesteps=True, use_scalar_bias=True - ) - return self.ln(x + residual) - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - m.weight.data.normal_(0, 0.1) - return m - - -def PositionalEmbedding(num_embeddings, embedding_dim, padding_idx): - m = LearnedPositionalEmbedding(num_embeddings, embedding_dim, padding_idx) - m.weight.data.normal_(0, 0.1) - return m - - -def Linear(in_features, out_features, dropout=0.0): - """Weight-normalized Linear layer (input: N x T x C)""" - m = nn.Linear(in_features, out_features) - m.weight.data.normal_(mean=0, std=math.sqrt((1 - dropout) / in_features)) - m.bias.data.zero_() - return m - - -def LinearizedConv1d(in_channels, out_channels, kernel_size, dropout=0.0, **kwargs): - """Weight-normalized Conv1d layer optimized for decoding""" - m = LinearizedConvolution(in_channels, out_channels, kernel_size, **kwargs) - std = math.sqrt((4 * (1.0 - dropout)) / (m.kernel_size[0] * in_channels)) - m.weight.data.normal_(mean=0, std=std) - m.bias.data.zero_() - return m - - -def ConvTBC(in_channels, out_channels, kernel_size, dropout=0.0, **kwargs): - """Weight-normalized Conv1d layer""" - from fairseq.modules import ConvTBC - - m = ConvTBC(in_channels, out_channels, kernel_size, **kwargs) - std = math.sqrt((4 * (1.0 - dropout)) / (m.kernel_size[0] * in_channels)) - m.weight.data.normal_(mean=0, std=std) - m.bias.data.zero_() - return m - - -@register_model_architecture("fconv_self_att", "fconv_self_att") -def base_architecture(args): - args.dropout = getattr(args, "dropout", 0.1) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_layers = getattr(args, "encoder_layers", "[(512, 3)] * 3") - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_layers = getattr(args, "decoder_layers", "[(512, 3)] * 8") - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 256) - args.decoder_attention = getattr(args, "decoder_attention", "True") - args.self_attention = getattr(args, "self_attention", "False") - args.encoder_attention = getattr(args, "encoder_attention", "False") - args.multihead_attention_nheads = getattr(args, "multihead_attention_nheads", 1) - args.multihead_self_attention_nheads = getattr( - args, "multihead_self_attention_nheads", 1 - ) - args.encoder_attention_nheads = getattr(args, "encoder_attention_nheads", 1) - args.project_input = getattr(args, "project_input", "False") - args.gated_attention = getattr(args, "gated_attention", "False") - args.downsample = getattr(args, "downsample", "False") - args.pretrained_checkpoint = getattr(args, "pretrained_checkpoint", "") - args.pretrained = getattr(args, "pretrained", "False") - - -@register_model_architecture("fconv_self_att", "fconv_self_att_wp") -def fconv_self_att_wp(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256) - args.encoder_layers = getattr( - args, "encoder_layers", "[(128, 3)] * 2 + [(512,3)] * 1" - ) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256) - args.decoder_layers = getattr( - args, "decoder_layers", "[(512, 4)] * 4 + [(768, 4)] * 2 + [(1024, 4)] * 1" - ) - args.decoder_out_embed_dim = getattr(args, 
"decoder_out_embed_dim", 256) - args.self_attention = getattr(args, "self_attention", "True") - args.multihead_self_attention_nheads = getattr( - args, "multihead_self_attention_nheads", 4 - ) - args.project_input = getattr(args, "project_input", "True") - args.gated_attention = getattr(args, "gated_attention", "True") - args.downsample = getattr(args, "downsample", "True") - base_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/hubert/__init__.py b/kosmos-g/fairseq/fairseq/models/hubert/__init__.py deleted file mode 100644 index a1b0eabbd..000000000 --- a/kosmos-g/fairseq/fairseq/models/hubert/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .hubert import * # noqa -from .hubert_asr import * # noqa diff --git a/kosmos-g/fairseq/fairseq/models/hubert/hubert.py b/kosmos-g/fairseq/fairseq/models/hubert/hubert.py deleted file mode 100644 index 40f35be68..000000000 --- a/kosmos-g/fairseq/fairseq/models/hubert/hubert.py +++ /dev/null @@ -1,542 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from dataclasses import dataclass, field -from typing import Dict, List, Optional, Tuple - -import numpy as np -import torch -import torch.nn as nn -from omegaconf import II - -from fairseq import utils -from fairseq.data.data_utils import compute_mask_indices -from fairseq.data.dictionary import Dictionary -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.models.wav2vec.wav2vec2 import ( - ConvFeatureExtractionModel, - TransformerEncoder, -) -from fairseq.modules import GradMultiply, LayerNorm -from fairseq.tasks.hubert_pretraining import ( - HubertPretrainingConfig, - HubertPretrainingTask, -) - -logger = logging.getLogger(__name__) - -EXTRACTOR_MODE_CHOICES = ChoiceEnum(["default", "layer_norm"]) -MASKING_DISTRIBUTION_CHOICES = ChoiceEnum(["static", "uniform", "normal", "poisson"]) - - -@dataclass -class HubertConfig(FairseqDataclass): - label_rate: int = II("task.label_rate") - - extractor_mode: EXTRACTOR_MODE_CHOICES = field( - default="default", - metadata={ - "help": "mode for feature extractor. 
default has a single group " - "norm with d groups in the first conv block, whereas layer_norm " - "has layer norms in every block (meant to use with normalize=True)" - }, - ) - encoder_layers: int = field( - default=12, metadata={"help": "num encoder layers in the transformer"} - ) - encoder_embed_dim: int = field( - default=768, metadata={"help": "encoder embedding dimension"} - ) - encoder_ffn_embed_dim: int = field( - default=3072, metadata={"help": "encoder embedding dimension for FFN"} - ) - encoder_attention_heads: int = field( - default=12, metadata={"help": "num encoder attention heads"} - ) - activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field( - default="gelu", metadata={"help": "activation function to use"} - ) - - # dropouts - dropout: float = field( - default=0.1, - metadata={"help": "dropout probability for the transformer"}, - ) - attention_dropout: float = field( - default=0.1, - metadata={"help": "dropout probability for attention weights"}, - ) - activation_dropout: float = field( - default=0.0, - metadata={"help": "dropout probability after activation in FFN"}, - ) - encoder_layerdrop: float = field( - default=0.0, - metadata={"help": "probability of dropping a transformer layer"}, - ) - dropout_input: float = field( - default=0.0, - metadata={"help": "dropout to apply to the input (after feat extr)"}, - ) - dropout_features: float = field( - default=0.0, - metadata={"help": "dropout to apply to the features (after feat extr)"}, - ) - - final_dim: int = field( - default=0, - metadata={ - "help": "project final representations and targets to this many " - "dimensions. set to encoder_embed_dim if <= 0" - }, - ) - untie_final_proj: bool = field( - default=False, - metadata={"help": "use separate projection for each target"}, - ) - layer_norm_first: bool = field( - default=False, - metadata={"help": "apply layernorm first in the transformer"}, - ) - conv_feature_layers: str = field( - default="[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2", - metadata={ - "help": "string describing convolutional feature extraction " - "layers in form of a python list that contains " - "[(dim, kernel_size, stride), ...]" - }, - ) - conv_bias: bool = field( - default=False, metadata={"help": "include bias in conv encoder"} - ) - logit_temp: float = field( - default=0.1, metadata={"help": "temperature to divide logits by"} - ) - target_glu: bool = field( - default=False, metadata={"help": "adds projection + glu to targets"} - ) - feature_grad_mult: float = field( - default=1.0, - metadata={"help": "multiply feature extractor var grads by this"}, - ) - - # masking - mask_length: int = field(default=10, metadata={"help": "mask length"}) - mask_prob: float = field( - default=0.65, - metadata={"help": "probability of replacing a token with mask"}, - ) - mask_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", metadata={"help": "how to choose mask length"} - ) - mask_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument " - "(used for more complex distributions), " - "see help in compute_mask_indices" - }, - ) - no_mask_overlap: bool = field( - default=False, metadata={"help": "whether to allow masks to overlap"} - ) - mask_min_space: int = field( - default=1, - metadata={"help": "min space between spans (if no overlap is enabled)"}, - ) - - # channel masking - mask_channel_length: int = field( - default=10, - metadata={"help": "length of the mask for features (channels)"}, - ) - mask_channel_prob: float = field( - default=0.0, -
metadata={"help": "probability of replacing a feature with 0"}, - ) - mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", - metadata={"help": "how to choose mask length for channel masking"}, - ) - mask_channel_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument " - "(used for more complex distributions), " - "see help in compute_mask_indicesh" - }, - ) - no_mask_channel_overlap: bool = field( - default=False, - metadata={"help": "whether to allow channel masks to overlap"}, - ) - mask_channel_min_space: int = field( - default=1, - metadata={"help": "min space between spans (if no overlap is enabled)"}, - ) - - # positional embeddings - conv_pos: int = field( - default=128, - metadata={"help": "number of filters for convolutional positional embeddings"}, - ) - conv_pos_groups: int = field( - default=16, - metadata={"help": "number of groups for convolutional positional embedding"}, - ) - - latent_temp: Tuple[float, float, float] = field( - default=(2, 0.5, 0.999995), - metadata={"help": "legacy (to be removed)"}, - ) - - # loss computation - skip_masked: bool = field( - default=False, - metadata={"help": "skip computing losses over masked frames"}, - ) - skip_nomask: bool = field( - default=False, - metadata={"help": "skip computing losses over unmasked frames"}, - ) - - checkpoint_activations: bool = field( - default=False, - metadata={"help": "recompute activations and save memory for extra compute"}, - ) - - -@register_model("hubert", dataclass=HubertConfig) -class HubertModel(BaseFairseqModel): - def __init__( - self, - cfg: HubertConfig, - task_cfg: HubertPretrainingConfig, - dictionaries: List[Dictionary], - ) -> None: - super().__init__() - logger.info(f"HubertModel Config: {cfg}") - - feature_enc_layers = eval(cfg.conv_feature_layers) # noqa - self.embed = feature_enc_layers[-1][0] - - self.feature_extractor = ConvFeatureExtractionModel( - conv_layers=feature_enc_layers, - dropout=0.0, - mode=cfg.extractor_mode, - conv_bias=cfg.conv_bias, - ) - feature_ds_rate = np.prod([s for _, _, s in feature_enc_layers]) - self.feat2tar_ratio = cfg.label_rate * feature_ds_rate / task_cfg.sample_rate - - self.post_extract_proj = ( - nn.Linear(self.embed, cfg.encoder_embed_dim) - if self.embed != cfg.encoder_embed_dim - else None - ) - - self.mask_prob = cfg.mask_prob - self.mask_selection = cfg.mask_selection - self.mask_other = cfg.mask_other - self.mask_length = cfg.mask_length - self.no_mask_overlap = cfg.no_mask_overlap - self.mask_min_space = cfg.mask_min_space - - self.mask_channel_prob = cfg.mask_channel_prob - self.mask_channel_selection = cfg.mask_channel_selection - self.mask_channel_other = cfg.mask_channel_other - self.mask_channel_length = cfg.mask_channel_length - self.no_mask_channel_overlap = cfg.no_mask_channel_overlap - self.mask_channel_min_space = cfg.mask_channel_min_space - - self.dropout_input = nn.Dropout(cfg.dropout_input) - self.dropout_features = nn.Dropout(cfg.dropout_features) - - self.feature_grad_mult = cfg.feature_grad_mult - self.logit_temp = cfg.logit_temp - self.skip_masked = cfg.skip_masked - self.skip_nomask = cfg.skip_nomask - - final_dim = cfg.final_dim if cfg.final_dim > 0 else cfg.encoder_embed_dim - - self.mask_emb = nn.Parameter( - torch.FloatTensor(cfg.encoder_embed_dim).uniform_() - ) - - self.encoder = TransformerEncoder(cfg) - self.layer_norm = LayerNorm(self.embed) - - self.target_glu = None - if cfg.target_glu: - self.target_glu = nn.Sequential( - nn.Linear(final_dim, final_dim * 2), 
nn.GLU() - ) - - self.untie_final_proj = cfg.untie_final_proj - if self.untie_final_proj: - self.final_proj = nn.Linear( - cfg.encoder_embed_dim, final_dim * len(dictionaries) - ) - else: - self.final_proj = nn.Linear(cfg.encoder_embed_dim, final_dim) - - # modules below are not needed during fine-tuning - if any([d is None for d in dictionaries]): - logger.info("cannot find dictionary. assume will be used for fine-tuning") - else: - self.num_classes = [len(d) for d in dictionaries] - self.label_embs_concat = nn.Parameter( - torch.FloatTensor(sum(self.num_classes), final_dim) - ) - nn.init.uniform_(self.label_embs_concat) - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - - super().upgrade_state_dict_named(state_dict, name) - return state_dict - - @classmethod - def build_model(cls, cfg: HubertConfig, task: HubertPretrainingTask): - """Build a new model instance.""" - - model = HubertModel(cfg, task.cfg, task.dictionaries) - return model - - def apply_mask(self, x, padding_mask, target_list): - B, T, C = x.shape - if self.mask_prob > 0: - mask_indices = compute_mask_indices( - (B, T), - padding_mask, - self.mask_prob, - self.mask_length, - self.mask_selection, - self.mask_other, - min_masks=2, - no_overlap=self.no_mask_overlap, - min_space=self.mask_min_space, - ) - mask_indices = torch.from_numpy(mask_indices).to(x.device) - x[mask_indices] = self.mask_emb - else: - mask_indices = None - - if self.mask_channel_prob > 0: - mask_channel_indices = compute_mask_indices( - (B, C), - None, - self.mask_channel_prob, - self.mask_channel_length, - self.mask_channel_selection, - self.mask_channel_other, - no_overlap=self.no_mask_channel_overlap, - min_space=self.mask_channel_min_space, - ) - mask_channel_indices = ( - torch.from_numpy(mask_channel_indices) - .to(x.device) - .unsqueeze(1) - .expand(-1, T, -1) - ) - x[mask_channel_indices] = 0 - - return x, mask_indices - - def compute_nce(self, x, pos, negs): - neg_is_pos = (pos == negs).all(-1) - pos = pos.unsqueeze(0) - targets = torch.cat([pos, negs], dim=0) - - logits = torch.cosine_similarity(x.float(), targets.float(), dim=-1).type_as(x) - logits /= self.logit_temp - if neg_is_pos.any(): - logits[1:][neg_is_pos] = float("-inf") - logits = logits.transpose(0, 1) # (num_x, num_cls+1) - return logits - - def forward_features(self, source: torch.Tensor) -> torch.Tensor: - if self.feature_grad_mult > 0: - features = self.feature_extractor(source) - if self.feature_grad_mult != 1.0: - features = GradMultiply.apply(features, self.feature_grad_mult) - else: - with torch.no_grad(): - features = self.feature_extractor(source) - return features - - def forward_targets( - self, - features: torch.Tensor, - target_list: List[torch.Tensor], - ) -> Tuple[torch.Tensor, torch.Tensor]: - # Trim features to ensure labels exist and then get aligned labels - feat_tsz = features.size(2) - targ_tsz = min([t.size(1) for t in target_list]) - if self.feat2tar_ratio * feat_tsz > targ_tsz: - feat_tsz = int(targ_tsz / self.feat2tar_ratio) - features = features[..., :feat_tsz] - target_inds = torch.arange(feat_tsz).float() * self.feat2tar_ratio - target_list = [t[:, target_inds.long()] for t in target_list] - return features, target_list - - def forward_padding_mask( - self, - features: torch.Tensor, - padding_mask: torch.Tensor, - ) -> torch.Tensor: - extra = padding_mask.size(1) % features.size(1) - if extra > 0: - padding_mask = padding_mask[:, :-extra] - padding_mask = 
padding_mask.view(padding_mask.size(0), features.size(1), -1) - padding_mask = padding_mask.all(-1) - return padding_mask - - def forward( - self, - source: torch.Tensor, - target_list: Optional[List[torch.Tensor]] = None, - padding_mask: Optional[torch.Tensor] = None, - mask: bool = True, - features_only: bool = False, - output_layer: Optional[int] = None, - ) -> Dict[str, torch.Tensor]: - """output layer is 1-based""" - features = self.forward_features(source) - if target_list is not None: - features, target_list = self.forward_targets(features, target_list) - - features_pen = features.float().pow(2).mean() - - features = features.transpose(1, 2) - features = self.layer_norm(features) - unmasked_features = features.clone() - - if padding_mask is not None: - padding_mask = self.forward_padding_mask(features, padding_mask) - - if self.post_extract_proj is not None: - features = self.post_extract_proj(features) - - features = self.dropout_input(features) - unmasked_features = self.dropout_features(unmasked_features) - - if mask: - x, mask_indices = self.apply_mask(features, padding_mask, target_list) - else: - x = features - mask_indices = None - - # feature: (B, T, D), float - # target: (B, T), long - # x: (B, T, D), float - # padding_mask: (B, T), bool - # mask_indices: (B, T), bool - x, _ = self.encoder( - x, - padding_mask=padding_mask, - layer=None if output_layer is None else output_layer - 1, - ) - - if features_only: - return {"x": x, "padding_mask": padding_mask, "features": features} - - def compute_pred(proj_x, target, label_embs): - # compute logits for the i-th label set - y = torch.index_select(label_embs, 0, target.long()) - negs = label_embs.unsqueeze(1).expand(-1, proj_x.size(0), -1) - if self.target_glu: - y = self.target_glu(y) - negs = self.target_glu(negs) - # proj_x: (S, D) - # y: (S, D) - # negs: (Neg, S, D) - return self.compute_nce(proj_x, y, negs) - - label_embs_list = self.label_embs_concat.split(self.num_classes, 0) - - if not self.skip_masked: - masked_indices = torch.logical_and(~padding_mask, mask_indices) - proj_x_m = self.final_proj(x[masked_indices]) - if self.untie_final_proj: - proj_x_m_list = proj_x_m.chunk(len(target_list), dim=-1) - else: - proj_x_m_list = [proj_x_m for _ in range(len(target_list))] - logit_m_list = [ - compute_pred(proj_x_m, t[masked_indices], label_embs_list[i]) - for i, (proj_x_m, t) in enumerate(zip(proj_x_m_list, target_list)) - ] - else: - logit_m_list = [None for _ in target_list] - - if not self.skip_nomask: - nomask_indices = torch.logical_and(~padding_mask, ~mask_indices) - proj_x_u = self.final_proj(x[nomask_indices]) - if self.untie_final_proj: - proj_x_u_list = proj_x_u.chunk(len(target_list), dim=-1) - else: - proj_x_u_list = [proj_x_u for _ in range(len(target_list))] - - logit_u_list = [ - compute_pred(proj_x_u, t[nomask_indices], label_embs_list[i]) - for i, (proj_x_u, t) in enumerate(zip(proj_x_u_list, target_list)) - ] - else: - logit_u_list = [None for _ in target_list] - - result = { - "logit_m_list": logit_m_list, - "logit_u_list": logit_u_list, - "padding_mask": padding_mask, - "features_pen": features_pen, - } - return result - - def extract_features( - self, - source: torch.Tensor, - padding_mask: Optional[torch.Tensor] = None, - mask: bool = False, - ret_conv: bool = False, - output_layer: Optional[int] = None, - ) -> Tuple[torch.Tensor, torch.Tensor]: - res = self.forward( - source, - padding_mask=padding_mask, - mask=mask, - features_only=True, - output_layer=output_layer, - ) - feature = res["features"] if 
ret_conv else res["x"] - return feature, res["padding_mask"] - - def get_logits(self, net_output, is_masked=True): - if is_masked: - logits_list = net_output["logit_m_list"] - else: - logits_list = net_output["logit_u_list"] - logits_list = [x.float() for x in logits_list if x is not None] - return logits_list - - def get_targets(self, net_output, is_masked=True): - logits_list = self.get_logits(net_output, is_masked) - targets_list = [x.new_zeros(x.size(0), dtype=torch.long) for x in logits_list] - return targets_list - - def get_extra_losses(self, net_output): - extra_losses = [] - names = [] - - if "features_pen" in net_output: - extra_losses.append(net_output["features_pen"]) - names.append("features_pen") - - return extra_losses, names - - def remove_pretraining_modules(self): - self.target_glu = None - self.final_proj = None diff --git a/kosmos-g/fairseq/fairseq/models/hubert/hubert_asr.py b/kosmos-g/fairseq/fairseq/models/hubert/hubert_asr.py deleted file mode 100644 index b1d0a89b4..000000000 --- a/kosmos-g/fairseq/fairseq/models/hubert/hubert_asr.py +++ /dev/null @@ -1,361 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import contextlib -from argparse import Namespace -from typing import Any - -import torch -import torch.nn as nn -from dataclasses import dataclass, field -from fairseq import checkpoint_utils, tasks, utils -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.models import BaseFairseqModel, FairseqEncoder, register_model -from fairseq.models.hubert.hubert import MASKING_DISTRIBUTION_CHOICES -from fairseq.tasks import FairseqTask -from omegaconf import II, MISSING - - -@dataclass -class HubertAsrConfig(FairseqDataclass): - w2v_path: str = field(default=MISSING, metadata={"help": "path to hubert model"}) - no_pretrained_weights: bool = field( - default=False, - metadata={"help": "if true, does not load pretrained weights"}, - ) - dropout_input: float = field( - default=0.0, - metadata={"help": "dropout to apply to the input (after feat extr)"}, - ) - final_dropout: float = field( - default=0.0, - metadata={"help": "dropout after transformer and before final projection"}, - ) - dropout: float = field( - default=0.0, - metadata={"help": "dropout probability inside hubert model"}, - ) - attention_dropout: float = field( - default=0.0, - metadata={ - "help": "dropout probability for attention weights " "inside hubert model" - }, - ) - activation_dropout: float = field( - default=0.0, - metadata={ - "help": "dropout probability after activation in FFN " "inside hubert model" - }, - ) - - # masking - apply_mask: bool = field( - default=False, metadata={"help": "apply masking during fine-tuning"} - ) - mask_length: int = field( - default=10, metadata={"help": "repeat the mask indices multiple times"} - ) - mask_prob: float = field( - default=0.5, - metadata={ - "help": "probability of replacing a token with mask " - "(normalized by length)" - }, - ) - mask_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", metadata={"help": "how to choose masks"} - ) - mask_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument " - "(used for more complex distributions), " - "see help in compute_mask_indices" - }, - ) - no_mask_overlap: bool = field( - default=False, metadata={"help": "whether to allow masks to overlap"} - ) - - 
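Stepping back to `compute_nce` and `get_targets` in hubert.py above: the positive example is always concatenated at row 0 of the logits, so the correct class index is 0 for every frame, which is why `get_targets` returns all-zero target vectors. A self-contained sketch of that loss (shapes and `logit_temp=0.1` follow the code above; the random tensors are placeholders for real projections and label embeddings):

```python
import torch
import torch.nn.functional as F

S, D, NEG = 4, 16, 10                  # frames, feature dim, negatives
x = torch.randn(S, D)                  # projected encoder outputs
pos = torch.randn(S, D)                # embedding of the correct label
negs = torch.randn(NEG, S, D)          # embeddings of competing labels

targets = torch.cat([pos.unsqueeze(0), negs], dim=0)        # (1+NEG, S, D)
logits = torch.cosine_similarity(x, targets, dim=-1) / 0.1  # logit_temp
logits = logits.transpose(0, 1)                             # (S, 1+NEG)

# The positive sits at index 0, so InfoNCE reduces to plain cross-entropy
# against an all-zeros target vector -- exactly what get_targets() returns.
loss = F.cross_entropy(logits, torch.zeros(S, dtype=torch.long))
```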
# channel masking - mask_channel_length: int = field( - default=10, - metadata={"help": "length of the mask for features (channels)"}, - ) - mask_channel_prob: float = field( - default=0.0, - metadata={"help": "probability of replacing a feature with 0"}, - ) - mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", - metadata={"help": "how to choose mask length for channel masking"}, - ) - mask_channel_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument " - "(used for more complex distributions), " - "see help in compute_mask_indices" - }, - ) - no_mask_channel_overlap: bool = field( - default=False, - metadata={"help": "whether to allow channel masks to overlap"}, - ) - freeze_finetune_updates: int = field( - default=0, - metadata={"help": "dont finetune hubert for this many updates"}, - ) - feature_grad_mult: float = field( - default=0.0, - metadata={"help": "reset feature grad mult in hubert to this"}, - ) - layerdrop: float = field( - default=0.0, - metadata={"help": "probability of dropping a layer in hubert"}, - ) - normalize: bool = II("task.normalize") - data: str = II("task.data") - - # this holds the loaded hubert args - w2v_args: Any = None - - -@dataclass -class HubertCtcConfig(HubertAsrConfig): - pass - - -@register_model("hubert_ctc", dataclass=HubertCtcConfig) -class HubertCtc(BaseFairseqModel): - def __init__(self, cfg: HubertCtcConfig, w2v_encoder: BaseFairseqModel): - super().__init__() - self.cfg = cfg - self.w2v_encoder = w2v_encoder - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - return state_dict - - @classmethod - def build_model(cls, cfg: HubertCtcConfig, task: FairseqTask): - """Build a new model instance.""" - w2v_encoder = HubertEncoder(cfg, task.target_dictionary) - return cls(cfg, w2v_encoder) - - def get_normalized_probs(self, net_output, log_probs): - """Get normalized probabilities (or log probs) from a net's output.""" - - logits = net_output["encoder_out"] - if log_probs: - return utils.log_softmax(logits.float(), dim=-1) - else: - return utils.softmax(logits.float(), dim=-1) - - def get_logits(self, net_output): - logits = net_output["encoder_out"] - padding = net_output["encoder_padding_mask"] - if padding is not None and padding.any(): - padding = padding.T - logits[padding][..., 0] = 0 - logits[padding][..., 1:] = float("-inf") - - return logits - - def forward(self, **kwargs): - x = self.w2v_encoder(**kwargs) - return x - - -@dataclass -class HubertSeq2SeqConfig(HubertAsrConfig): - decoder_embed_dim: int = field( - default=768, metadata={"help": "decoder embedding dimension"} - ) - decoder_ffn_embed_dim: int = field( - default=3072, metadata={"help": "decoder embedding dimension for FFN"} - ) - decoder_layers: int = field(default=6, metadata={"help": "num of decoder layers"}) - decoder_layerdrop: float = field( - default=0.0, metadata={"help": "decoder layerdrop chance"} - ) - decoder_attention_heads: int = field( - default=4, metadata={"help": "num decoder attention heads"} - ) - decoder_learned_pos: bool = field( - default=False, - metadata={"help": "use learned positional embeddings in the decoder"}, - ) - decoder_normalize_before: bool = field( - default=False, - metadata={"help": "apply layernorm before each decoder block"}, - ) - no_token_positional_embeddings: bool = field( - default=False, - metadata={ - "help": "if set, disables positional embeddings " "(outside self attention)" - }, - ) - decoder_dropout: float = field( 
- default=0.0, metadata={"help": "dropout probability in the decoder"} - ) - decoder_attention_dropout: float = field( - default=0.0, - metadata={ - "help": "dropout probability for attention weights " "inside the decoder" - }, - ) - decoder_activation_dropout: float = field( - default=0.0, - metadata={ - "help": "dropout probability after activation in FFN " "inside the decoder" - }, - ) - max_target_positions: int = field( - default=2048, metadata={"help": "max target positions"} - ) - share_decoder_input_output_embed: bool = field( - default=False, - metadata={"help": "share decoder input and output embeddings"}, - ) - - -class HubertEncoder(FairseqEncoder): - def __init__(self, cfg: HubertAsrConfig, tgt_dict=None): - self.apply_mask = cfg.apply_mask - - arg_overrides = { - "dropout": cfg.dropout, - "activation_dropout": cfg.activation_dropout, - "dropout_input": cfg.dropout_input, - "attention_dropout": cfg.attention_dropout, - "mask_length": cfg.mask_length, - "mask_prob": cfg.mask_prob, - "mask_selection": cfg.mask_selection, - "mask_other": cfg.mask_other, - "no_mask_overlap": cfg.no_mask_overlap, - "mask_channel_length": cfg.mask_channel_length, - "mask_channel_prob": cfg.mask_channel_prob, - "mask_channel_selection": cfg.mask_channel_selection, - "mask_channel_other": cfg.mask_channel_other, - "no_mask_channel_overlap": cfg.no_mask_channel_overlap, - "encoder_layerdrop": cfg.layerdrop, - "feature_grad_mult": cfg.feature_grad_mult, - } - - if cfg.w2v_args is None: - state = checkpoint_utils.load_checkpoint_to_cpu(cfg.w2v_path, arg_overrides) - w2v_args = state.get("cfg", None) - if w2v_args is None: - w2v_args = convert_namespace_to_omegaconf(state["args"]) - cfg.w2v_args = w2v_args - else: - state = None - w2v_args = cfg.w2v_args - if isinstance(w2v_args, Namespace): - cfg.w2v_args = w2v_args = convert_namespace_to_omegaconf(w2v_args) - - assert cfg.normalize == w2v_args.task.normalize, ( - "Fine-tuning works best when data normalization is the same. 
" - "Please check that --normalize is set or unset for " - "both pre-training and here" - ) - - w2v_args.task.data = cfg.data - task = tasks.setup_task(w2v_args.task) - if state is not None and "task_state" in state: - # This will load the stored "dictionaries" object - task.load_state_dict(state["task_state"]) - model = task.build_model(w2v_args.model, from_checkpoint=True) - - if state is not None and not cfg.no_pretrained_weights: - # set strict=False because we omit some modules - model.load_state_dict(state["model"], strict=False) - - model.remove_pretraining_modules() - - super().__init__(task.source_dictionary) - - d = w2v_args.model.encoder_embed_dim - - self.w2v_model = model - - self.final_dropout = nn.Dropout(cfg.final_dropout) - self.freeze_finetune_updates = cfg.freeze_finetune_updates - self.num_updates = 0 - - if tgt_dict is not None: - self.proj = Linear(d, len(tgt_dict)) - elif getattr(cfg, "decoder_embed_dim", d) != d: - self.proj = Linear(d, cfg.decoder_embed_dim) - else: - self.proj = None - - def set_num_updates(self, num_updates): - """Set the number of parameters updates.""" - super().set_num_updates(num_updates) - self.num_updates = num_updates - - def forward(self, source, padding_mask, tbc=True, **kwargs): - - w2v_args = { - "source": source, - "padding_mask": padding_mask, - "mask": self.apply_mask and self.training, - } - - ft = self.freeze_finetune_updates <= self.num_updates - - with torch.no_grad() if not ft else contextlib.ExitStack(): - x, padding_mask = self.w2v_model.extract_features(**w2v_args) - - if tbc: - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - x = self.final_dropout(x) - - if self.proj: - x = self.proj(x) - - return { - "encoder_out": x, # T x B x C - "encoder_padding_mask": padding_mask, # B x T - "padding_mask": padding_mask, - } - - def reorder_encoder_out(self, encoder_out, new_order): - if encoder_out["encoder_out"] is not None: - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - if encoder_out["encoder_padding_mask"] is not None: - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(0, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return None - - def upgrade_state_dict_named(self, state_dict, name): - return state_dict - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m diff --git a/kosmos-g/fairseq/fairseq/models/huggingface/__init__.py b/kosmos-g/fairseq/fairseq/models/huggingface/__init__.py deleted file mode 100644 index f7911c2c8..000000000 --- a/kosmos-g/fairseq/fairseq/models/huggingface/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import importlib -import os - - -# automatically import any Python files in the models/huggingface/ directory -models_dir = os.path.dirname(__file__) -for file in os.listdir(models_dir): - path = os.path.join(models_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - model_name = file[: file.find(".py")] if file.endswith(".py") else file - module = importlib.import_module("fairseq.models.huggingface." + model_name) diff --git a/kosmos-g/fairseq/fairseq/models/huggingface/hf_gpt2.py b/kosmos-g/fairseq/fairseq/models/huggingface/hf_gpt2.py deleted file mode 100644 index 3a8eb7819..000000000 --- a/kosmos-g/fairseq/fairseq/models/huggingface/hf_gpt2.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import sys -from typing import Dict, List, Optional - -import torch -from fairseq.models import ( - FairseqIncrementalDecoder, - FairseqLanguageModel, - register_model, - register_model_architecture, -) - - -logger = logging.getLogger(__name__) - - -DEFAULT_MAX_TARGET_POSITIONS = 1024 - - -@register_model("hf_gpt2") -class HuggingFaceGPT2LanguageModel(FairseqLanguageModel): - def __init__(self, decoder): - super().__init__(decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--embed-dim', type=int, metavar='N', - help='embedding dimension') - parser.add_argument('--num-attention-heads', type=int, metavar='N', - help='num attention heads') - parser.add_argument('--num-layers', type=int, metavar='N', - help='num layers') - parser.add_argument('--dropout', type=float, metavar='D', - help='dropout probability for all fully connected layers ' - 'in the embeddings, encoder, and pooler') - parser.add_argument('--attention-dropout', type=float, metavar='D', - help='dropout probability for attention weights') - # fmt: on - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - default_architecture(args) - return cls(HuggingFaceGPT2Decoder(args, task)) - - -class HuggingFaceGPT2Decoder(FairseqIncrementalDecoder): - def __init__(self, args, task): - try: - from transformers import GPT2Config, GPT2LMHeadModel - except ImportError: - raise ImportError( - "\n\nPlease install huggingface/transformers with:" - "\n\n pip install transformers" - ) - - super().__init__(task.target_dictionary) - - config = GPT2Config( - vocab_size=len(task.target_dictionary), - n_positions=args.max_target_positions + 1, - n_ctx=args.max_target_positions, - n_embd=args.embed_dim, - n_layer=args.num_layers, - n_head=args.num_attention_heads, - resid_pdrop=args.dropout, - embd_pdrop=args.dropout, - attn_pdrop=args.attention_dropout, - layer_norm_epsilon=1e-6, - ) - self.model = GPT2LMHeadModel(config) - - # set zero embedding for padding symbol - self.pad_idx = task.target_dictionary.pad() - self.model.transformer.wte.weight.data[self.pad_idx].zero_() - self.model.transformer.wpe.weight.data[0].zero_() - - def forward( - self, - prev_output_tokens, - src_lengths=None, - incremental_state: Optional[Dict[str, List[torch.Tensor]]] = None, - encoder_out=None, - ): - features = self.extract_features(prev_output_tokens, incremental_state) - lm_logits = self.model.lm_head(features) - return (lm_logits,) - - def extract_features( - self, - 
prev_output_tokens,
-        incremental_state: Optional[Dict[str, List[torch.Tensor]]] = None,
-    ):
-        if incremental_state:
-            # get_incremental_state takes (incremental_state, key), mirroring
-            # the set_incremental_state call below
-            past = self.get_incremental_state(incremental_state, "past")
-        else:
-            past = None
-
-        # don't attend to padding symbols
-        attention_mask = prev_output_tokens.ne(self.pad_idx).int()
-
-        # set position ids to exclude padding symbols
-        position_ids = attention_mask * (
-            torch.arange(1, 1 + prev_output_tokens.size(1))
-            .to(prev_output_tokens)
-            .repeat(prev_output_tokens.size(0), 1)
-        )
-
-        outputs = self.model.transformer(
-            input_ids=prev_output_tokens,
-            # NOTE: later transformers releases renamed this kwarg to
-            # `past_key_values`; `past` is the name in the versions this
-            # code was written against
-            past=past,
-            attention_mask=attention_mask,
-            position_ids=position_ids,
-        )
-        last_hidden_states = outputs[0]
-
-        if incremental_state:
-            self.set_incremental_state(incremental_state, "past", outputs[1])
-
-        return last_hidden_states
-
-    def max_positions(self):
-        return self.model.config.n_positions - 1
-
-
-@register_model_architecture("hf_gpt2", "hf_gpt2")
-def default_architecture(args):
-    if getattr(args, "max_target_positions", None) is None:
-        args.max_target_positions = getattr(
-            args, "tokens_per_sample", DEFAULT_MAX_TARGET_POSITIONS
-        )
-    args.embed_dim = getattr(args, "embed_dim", 768)
-    args.num_attention_heads = getattr(args, "num_attention_heads", 12)
-    args.num_layers = getattr(args, "num_layers", 12)
-    args.dropout = getattr(args, "dropout", 0.1)
-    args.attention_dropout = getattr(args, "attention_dropout", 0.1)
-
-
-@register_model_architecture("hf_gpt2", "hf_gpt2_medium")
-def hf_gpt2_medium(args):
-    args.embed_dim = getattr(args, "embed_dim", 1024)
-    args.num_attention_heads = getattr(args, "num_attention_heads", 16)
-    args.num_layers = getattr(args, "num_layers", 24)
-    default_architecture(args)
-
-
-@register_model_architecture("hf_gpt2", "hf_gpt2_large")
-def hf_gpt2_large(args):
-    args.embed_dim = getattr(args, "embed_dim", 1280)
-    args.num_attention_heads = getattr(args, "num_attention_heads", 20)
-    args.num_layers = getattr(args, "num_layers", 36)
-    default_architecture(args)
-
-
-@register_model_architecture("hf_gpt2", "hf_gpt2_xl")
-def hf_gpt2_xl(args):
-    args.embed_dim = getattr(args, "embed_dim", 1600)
-    args.num_attention_heads = getattr(args, "num_attention_heads", 25)
-    args.num_layers = getattr(args, "num_layers", 48)
-    default_architecture(args)
diff --git a/kosmos-g/fairseq/fairseq/models/lightconv.py b/kosmos-g/fairseq/fairseq/models/lightconv.py
deleted file mode 100644
index 4edfe3593..000000000
--- a/kosmos-g/fairseq/fairseq/models/lightconv.py
+++ /dev/null
@@ -1,1019 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.models import (
-    FairseqEncoder,
-    FairseqEncoderDecoderModel,
-    FairseqIncrementalDecoder,
-    register_model,
-    register_model_architecture,
-)
-from fairseq.modules import (
-    AdaptiveSoftmax,
-    DynamicConv,
-    FairseqDropout,
-    LayerNorm,
-    LightweightConv,
-    MultiheadAttention,
-    PositionalEmbedding,
-)
-from fairseq.utils import safe_hasattr
-
-
-@register_model("lightconv")
-class LightConvModel(FairseqEncoderDecoderModel):
-    """
-    LightConv and DynamicConv model from `"Pay Less Attention with Lightweight and Dynamic Convolutions" (Wu, et al, 2019)
-    <https://openreview.net/pdf?id=SkVhlh09tX>`_.
- To use LightConv please set ``--encoder-conv-type lightweight --decoder-conv-type lightweight`` - To use DynamicConv please set ``--encoder-conv-type dynamic --decoder-conv-type dynamic`` - - Args: - encoder (LightConvEncoder): the encoder - decoder (LightConvDecoder): the decoder - - The LightConv model provides the following named architectures and - command-line arguments: - - .. argparse:: - :ref: fairseq.models.lightconv_parser - :prog: - """ - - @classmethod - def hub_models(cls): - # fmt: off - - def moses_subword(path): - return { - 'path': path, - 'tokenizer': 'moses', - 'bpe': 'subword_nmt', - } - - return { - 'lightconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.lightconv.tar.gz'), - 'dynamicconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.dynamicconv.tar.gz'), - 'lightconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv.tar.gz'), - 'dynamicconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv.tar.gz'), - 'lightconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt17.zh-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt17.zh-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.dynamicconv-glu.tar.gz'), - } - # fmt: on - - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--relu-dropout", - type=float, - metavar="D", - help="dropout probability after ReLU in FFN", - ) - parser.add_argument( - "--input-dropout", - type=float, - metavar="D", - help="dropout probability of the inputs", - ) - parser.add_argument( - "--encoder-embed-path", - type=str, - metavar="STR", - help="path to pre-trained encoder embedding", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-conv-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder 
embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - parser.add_argument( - "--encoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the encoder", - ) - parser.add_argument( - "--decoder-embed-path", - type=str, - metavar="STR", - help="path to pre-trained decoder embedding", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-conv-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--decoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the decoder", - ) - parser.add_argument( - "--decoder-normalize-before", - action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--share-decoder-input-output-embed", - action="store_true", - help="share decoder input and output embeddings", - ) - parser.add_argument( - "--share-all-embeddings", - action="store_true", - help="share encoder, decoder and output embeddings" - " (requires shared dictionary and embed dim)", - ) - parser.add_argument( - "--adaptive-softmax-cutoff", - metavar="EXPR", - help="comma separated list of adaptive softmax cutoff points. 
" - "Must be used with adaptive_loss criterion", - ), - parser.add_argument( - "--adaptive-softmax-dropout", - type=float, - metavar="D", - help="sets adaptive softmax dropout for the tail projections", - ) - - """LightConv and DynamicConv arguments""" - parser.add_argument( - "--encoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31,31]")', - ) - parser.add_argument( - "--decoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31]")', - ) - parser.add_argument( - "--encoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--decoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--encoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument( - "--decoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument("--weight-softmax", default=True, type=utils.eval_bool) - parser.add_argument( - "--weight-dropout", - type=float, - metavar="D", - help="dropout probability for conv weights", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - if not safe_hasattr(args, "max_source_positions"): - args.max_source_positions = 1024 - if not safe_hasattr(args, "max_target_positions"): - args.max_target_positions = 1024 - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - def build_embedding(dictionary, embed_dim, path=None): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - emb = Embedding(num_embeddings, embed_dim, padding_idx) - # if provided, load from preloaded dictionaries - if path: - embed_dict = utils.parse_embedding(path) - utils.load_embedding(embed_dict, dictionary, emb) - return emb - - if args.share_all_embeddings: - if src_dict != tgt_dict: - raise RuntimeError( - "--share-all-embeddings requires a joined dictionary" - ) - if args.encoder_embed_dim != args.decoder_embed_dim: - raise RuntimeError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if args.decoder_embed_path and ( - args.decoder_embed_path != args.encoder_embed_path - ): - raise RuntimeError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = encoder_embed_tokens - args.share_decoder_input_output_embed = True - else: - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = build_embedding( - tgt_dict, args.decoder_embed_dim, args.decoder_embed_path - ) - - encoder = LightConvEncoder(args, src_dict, encoder_embed_tokens) - decoder = LightConvDecoder(args, tgt_dict, decoder_embed_tokens) - return LightConvModel(encoder, decoder) - - -class LightConvEncoder(FairseqEncoder): - """ - LightConv encoder consisting of *args.encoder_layers* layers. Each layer - is a :class:`LightConvEncoderLayer`. 
- - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): encoding dictionary - embed_tokens (torch.nn.Embedding): input embedding - """ - - def __init__(self, args, dictionary, embed_tokens): - super().__init__(dictionary) - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - - embed_dim = embed_tokens.embedding_dim - self.padding_idx = embed_tokens.padding_idx - self.max_source_positions = args.max_source_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) - self.embed_positions = ( - PositionalEmbedding( - args.max_source_positions, - embed_dim, - self.padding_idx, - learned=args.encoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - LightConvEncoderLayer( - args, kernel_size=args.encoder_kernel_size_list[i] - ) - for i in range(args.encoder_layers) - ] - ) - self.register_buffer("version", torch.Tensor([2])) - self.normalize = args.encoder_normalize_before - if self.normalize: - self.layer_norm = LayerNorm(embed_dim) - - def forward(self, src_tokens, **unused): - """ - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - - Returns: - dict: - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - """ - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(src_tokens) - if self.embed_positions is not None: - x += self.embed_positions(src_tokens) - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # compute padding mask - encoder_padding_mask = src_tokens.eq(self.padding_idx) - if not encoder_padding_mask.any(): - encoder_padding_mask = None - - # encoder layers - for layer in self.layers: - x = layer(x, encoder_padding_mask) - - if self.normalize: - x = self.layer_norm(x) - - return { - "encoder_out": x, # T x B x C - "encoder_padding_mask": encoder_padding_mask, # B x T - } - - def reorder_encoder_out(self, encoder_out, new_order): - """ - Reorder encoder output according to *new_order*. - - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - *encoder_out* rearranged according to *new_order* - """ - if encoder_out["encoder_out"] is not None: - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - if encoder_out["encoder_padding_mask"] is not None: - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(0, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - if self.embed_positions is None: - return self.max_source_positions - return min(self.max_source_positions, self.embed_positions.max_positions) - - -class LightConvDecoder(FairseqIncrementalDecoder): - """ - LightConv decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`LightConvDecoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - no_encoder_attn (bool, optional): whether to attend to encoder outputs. 
- Default: ``False`` - """ - - def __init__( - self, args, dictionary, embed_tokens, no_encoder_attn=False, final_norm=True - ): - super().__init__(dictionary) - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.share_input_output_embed = args.share_decoder_input_output_embed - - input_embed_dim = embed_tokens.embedding_dim - embed_dim = args.decoder_embed_dim - output_embed_dim = args.decoder_output_dim - - padding_idx = embed_tokens.padding_idx - self.max_target_positions = args.max_target_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim - - self.project_in_dim = ( - Linear(input_embed_dim, embed_dim, bias=False) - if embed_dim != input_embed_dim - else None - ) - - self.embed_positions = ( - PositionalEmbedding( - args.max_target_positions, - embed_dim, - padding_idx, - learned=args.decoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - LightConvDecoderLayer( - args, no_encoder_attn, kernel_size=args.decoder_kernel_size_list[i] - ) - for i in range(args.decoder_layers) - ] - ) - - self.adaptive_softmax = None - - self.project_out_dim = ( - Linear(embed_dim, output_embed_dim, bias=False) - if embed_dim != output_embed_dim and not args.tie_adaptive_weights - else None - ) - - if args.adaptive_softmax_cutoff is not None: - self.adaptive_softmax = AdaptiveSoftmax( - len(dictionary), - output_embed_dim, - utils.eval_str_list(args.adaptive_softmax_cutoff, type=int), - dropout=args.adaptive_softmax_dropout, - adaptive_inputs=embed_tokens if args.tie_adaptive_weights else None, - factor=args.adaptive_softmax_factor, - tie_proj=args.tie_adaptive_proj, - ) - elif not self.share_input_output_embed: - self.embed_out = nn.Parameter( - torch.Tensor(len(dictionary), output_embed_dim) - ) - nn.init.normal_(self.embed_out, mean=0, std=output_embed_dim ** -0.5) - self.register_buffer("version", torch.Tensor([2])) - self.normalize = args.decoder_normalize_before and final_norm - if self.normalize: - self.layer_norm = LayerNorm(embed_dim) - - def forward( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs - ): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (Tensor, optional): output from the encoder, used for - encoder-side attention - incremental_state (dict): dictionary used for storing state during - :ref:`Incremental decoding` - - Returns: - tuple: - - the last decoder layer's output of shape `(batch, tgt_len, - vocab)` - - the last decoder layer's attention weights of shape `(batch, - tgt_len, src_len)` - """ - # embed positions - positions = ( - self.embed_positions( - prev_output_tokens, - incremental_state=incremental_state, - ) - if self.embed_positions is not None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - attn = None - - inner_states = [x] - - # decoder layers - for layer in self.layers: - x, attn = layer( - x, - encoder_out["encoder_out"] if encoder_out 
is not None else None, - encoder_out["encoder_padding_mask"] - if encoder_out is not None - else None, - incremental_state, - ) - inner_states.append(x) - - if self.normalize: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - if self.project_out_dim is not None: - x = self.project_out_dim(x) - - if self.adaptive_softmax is None: - # project back to size of vocabulary - if self.share_input_output_embed: - x = F.linear(x, self.embed_tokens.weight) - else: - x = F.linear(x, self.embed_out) - - return x, {"attn": attn, "inner_states": inner_states} - - def max_positions(self): - """Maximum output length supported by the decoder.""" - if self.embed_positions is None: - return self.max_target_positions - return min(self.max_target_positions, self.embed_positions.max_positions) - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - if self._future_mask.size(0) < dim: - self._future_mask = torch.triu( - utils.fill_with_neg_inf(self._future_mask.resize_(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - -class LightConvEncoderLayer(nn.Module): - """Encoder layer block. - - Args: - args (argparse.Namespace): parsed command-line arguments - kernel_size: kernel size of the convolution - """ - - def __init__(self, args, kernel_size=0): - super().__init__() - self.embed_dim = args.encoder_embed_dim - self.conv_dim = args.encoder_conv_dim - padding_l = ( - kernel_size // 2 - if kernel_size % 2 == 1 - else ((kernel_size - 1) // 2, kernel_size // 2) - ) - - if args.encoder_glu: - self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim) - self.act = nn.GLU() - else: - self.linear1 = Linear(self.embed_dim, self.conv_dim) - self.act = None - if args.encoder_conv_type == "lightweight": - self.conv = LightweightConv( - self.conv_dim, - kernel_size, - padding_l=padding_l, - weight_softmax=args.weight_softmax, - num_heads=args.encoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - elif args.encoder_conv_type == "dynamic": - self.conv = DynamicConv( - self.conv_dim, - kernel_size, - padding_l=padding_l, - weight_softmax=args.weight_softmax, - num_heads=args.encoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - else: - raise NotImplementedError - self.linear2 = Linear(self.conv_dim, self.embed_dim) - - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.relu_dropout_module = FairseqDropout( - args.relu_dropout, module_name=self.__class__.__name__ - ) - self.input_dropout_module = FairseqDropout( - args.input_dropout, module_name=self.__class__.__name__ - ) - self.normalize_before = args.encoder_normalize_before - self.fc1 = Linear(self.embed_dim, args.encoder_ffn_embed_dim) - self.fc2 = Linear(args.encoder_ffn_embed_dim, self.embed_dim) - self.layer_norms = nn.ModuleList([LayerNorm(self.embed_dim) for _ in range(2)]) - - def forward(self, x, encoder_padding_mask): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, src_len)` where padding elements are indicated by ``1``. 
- - Returns: - encoded output of shape `(batch, src_len, embed_dim)` - """ - residual = x - x = self.maybe_layer_norm(0, x, before=True) - x = self.input_dropout_module(x) - x = self.linear1(x) - if self.act is not None: - x = self.act(x) - if encoder_padding_mask is not None: - x = x.masked_fill(encoder_padding_mask.transpose(0, 1).unsqueeze(2), 0) - x = self.conv(x) - x = self.linear2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(0, x, after=True) - - residual = x - x = self.maybe_layer_norm(1, x, before=True) - x = F.relu(self.fc1(x)) - x = self.relu_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(1, x, after=True) - return x - - def maybe_layer_norm(self, i, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return self.layer_norms[i](x) - else: - return x - - def extra_repr(self): - return ( - "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format( - self.dropout_module.p, - self.relu_dropout_module.p, - self.input_dropout_module.p, - self.normalize_before, - ) - ) - - -class LightConvDecoderLayer(nn.Module): - """Decoder layer block. - - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs. - Default: ``False`` - kernel_size: kernel size of the convolution - """ - - def __init__(self, args, no_encoder_attn=False, kernel_size=0): - super().__init__() - self.embed_dim = args.decoder_embed_dim - self.conv_dim = args.decoder_conv_dim - if args.decoder_glu: - self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim) - self.act = nn.GLU() - else: - self.linear1 = Linear(self.embed_dim, self.conv_dim) - self.act = None - if args.decoder_conv_type == "lightweight": - self.conv = LightweightConv( - self.conv_dim, - kernel_size, - padding_l=kernel_size - 1, - weight_softmax=args.weight_softmax, - num_heads=args.decoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - elif args.decoder_conv_type == "dynamic": - self.conv = DynamicConv( - self.conv_dim, - kernel_size, - padding_l=kernel_size - 1, - weight_softmax=args.weight_softmax, - num_heads=args.decoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - else: - raise NotImplementedError - self.linear2 = Linear(self.conv_dim, self.embed_dim) - - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.relu_dropout_module = FairseqDropout( - args.relu_dropout, module_name=self.__class__.__name__ - ) - self.input_dropout_module = FairseqDropout( - args.input_dropout, module_name=self.__class__.__name__ - ) - self.normalize_before = args.decoder_normalize_before - - self.conv_layer_norm = LayerNorm(self.embed_dim) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = MultiheadAttention( - self.embed_dim, - args.decoder_attention_heads, - dropout=args.attention_dropout, - encoder_decoder_attention=True, - ) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim) - - self.fc1 = Linear(self.embed_dim, args.decoder_ffn_embed_dim) - self.fc2 = Linear(args.decoder_ffn_embed_dim, self.embed_dim) - - self.final_layer_norm = LayerNorm(self.embed_dim) - self.need_attn = True - - def forward( - self, - x, - encoder_out, - encoder_padding_mask, - incremental_state, - prev_conv_state=None, - prev_attn_state=None, - conv_mask=None, - conv_padding_mask=None, - ): - """ - Args: - x 
(Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, src_len)` where padding elements are indicated by ``1``. - - Returns: - encoded output of shape `(batch, src_len, embed_dim)` - """ - residual = x - x = self.maybe_layer_norm(self.conv_layer_norm, x, before=True) - if prev_conv_state is not None: - if incremental_state is None: - incremental_state = {} - self.conv._set_input_buffer(incremental_state, prev_conv_state) - x = self.input_dropout_module(x) - x = self.linear1(x) - if self.act is not None: - x = self.act(x) - x = self.conv(x, incremental_state=incremental_state) - x = self.linear2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.conv_layer_norm, x, after=True) - - attn = None - if self.encoder_attn is not None: - residual = x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, before=True) - if prev_attn_state is not None: - if incremental_state is None: - incremental_state = {} - prev_key, prev_value = prev_attn_state - saved_state = {"prev_key": prev_key, "prev_value": prev_value} - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=(not self.training and self.need_attn), - ) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, after=True) - - residual = x - x = self.maybe_layer_norm(self.final_layer_norm, x, before=True) - x = F.relu(self.fc1(x)) - x = self.relu_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.final_layer_norm, x, after=True) - return x, attn - - def maybe_layer_norm(self, layer_norm, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return layer_norm(x) - else: - return x - - def make_generation_fast_(self, need_attn=False, **kwargs): - self.need_attn = need_attn - - def extra_repr(self): - return ( - "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format( - self.dropout_module.p, - self.relu_dropout_module.p, - self.input_dropout_module.p, - self.normalize_before, - ) - ) - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m - - -@register_model_architecture("lightconv", "lightconv") -def base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 7) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 
args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.relu_dropout = getattr(args, "relu_dropout", 0.0) - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.encoder_conv_dim = getattr(args, "encoder_conv_dim", args.encoder_embed_dim) - args.decoder_conv_dim = getattr(args, "decoder_conv_dim", args.decoder_embed_dim) - - args.encoder_kernel_size_list = getattr( - args, "encoder_kernel_size_list", [3, 7, 15, 31, 31, 31, 31] - ) - args.decoder_kernel_size_list = getattr( - args, "decoder_kernel_size_list", [3, 7, 15, 31, 31, 31] - ) - if len(args.encoder_kernel_size_list) == 1: - args.encoder_kernel_size_list = ( - args.encoder_kernel_size_list * args.encoder_layers - ) - if len(args.decoder_kernel_size_list) == 1: - args.decoder_kernel_size_list = ( - args.decoder_kernel_size_list * args.decoder_layers - ) - assert ( - len(args.encoder_kernel_size_list) == args.encoder_layers - ), "encoder_kernel_size_list doesn't match encoder_layers" - assert ( - len(args.decoder_kernel_size_list) == args.decoder_layers - ), "decoder_kernel_size_list doesn't match decoder_layers" - args.encoder_glu = getattr(args, "encoder_glu", True) - args.decoder_glu = getattr(args, "decoder_glu", True) - args.input_dropout = getattr(args, "input_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", args.attention_dropout) - - -@register_model_architecture("lightconv", "lightconv_iwslt_de_en") -def lightconv_iwslt_de_en(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.encoder_layers = getattr(args, "encoder_layers", 7) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", 0.1) - args.encoder_glu = getattr(args, "encoder_glu", False) - args.decoder_glu = getattr(args, "decoder_glu", False) - args.input_dropout = getattr(args, "input_dropout", 0.0) - base_architecture(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_en_de") -def lightconv_wmt_en_de(args): - base_architecture(args) - - 
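
A note on the pattern used by these `@register_model_architecture` functions: each variant only fills in values the user did not pass on the command line (via `getattr` defaulting) and then delegates to `base_architecture`, so explicit CLI flags always take precedence over both the variant's defaults and the base defaults. A minimal, self-contained sketch of that precedence, using a hypothetical variant rather than one of the registered lightconv architectures:

```python
from argparse import Namespace

def base_arch(args):
    # base defaults: applied only where nothing is set yet
    args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
    args.dropout = getattr(args, "dropout", 0.1)

def big_arch(args):
    # hypothetical variant, mirroring the *_big functions below
    args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024)
    base_arch(args)

args = Namespace(dropout=0.3)  # as if the user passed --dropout 0.3
big_arch(args)
assert args.encoder_embed_dim == 1024  # variant default filled in
assert args.dropout == 0.3             # user value wins over the base default
```
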
-@register_model_architecture("lightconv", "lightconv_wmt_en_de_big") -def lightconv_wmt_en_de_big(args): - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.3) - base_architecture(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_en_fr_big") -def lightconv_wmt_en_fr_big(args): - args.dropout = getattr(args, "dropout", 0.1) - lightconv_wmt_en_de_big(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_zh_en_big") -def lightconv_wmt_zh_en_big(args): - args.dropout = getattr(args, "dropout", 0.2) - args.attention_dropout = getattr(args, "attention_dropout", 0.2) - args.weight_dropout = getattr(args, "weight_dropout", 0.2) - lightconv_wmt_en_de_big(args) diff --git a/kosmos-g/fairseq/fairseq/models/lightconv_lm.py b/kosmos-g/fairseq/fairseq/models/lightconv_lm.py deleted file mode 100644 index 1d9efc4e4..000000000 --- a/kosmos-g/fairseq/fairseq/models/lightconv_lm.py +++ /dev/null @@ -1,306 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq import utils -from fairseq.models import ( - FairseqLanguageModel, - register_model, - register_model_architecture, -) -from fairseq.models.lightconv import Embedding, LightConvDecoder -from fairseq.modules import AdaptiveInput, CharacterTokenEmbedder - - -@register_model("lightconv_lm") -class LightConvLanguageModel(FairseqLanguageModel): - def __init__(self, decoder): - super().__init__(decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", - default=0.1, - type=float, - metavar="D", - help="dropout probability", - ) - parser.add_argument( - "--attention-dropout", - default=0.0, - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--relu-dropout", - default=0.0, - type=float, - metavar="D", - help="dropout probability after ReLU in FFN", - ) - parser.add_argument( - "--input-dropout", - type=float, - metavar="D", - help="dropout probability of the inputs", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-output-dim", - type=int, - metavar="N", - help="decoder output dimension", - ) - parser.add_argument( - "--decoder-input-dim", type=int, metavar="N", help="decoder input dimension" - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--decoder-normalize-before", - default=False, - 
action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--adaptive-softmax-cutoff", - metavar="EXPR", - help="comma separated list of adaptive softmax cutoff points. " - "Must be used with adaptive_loss criterion", - ) - parser.add_argument( - "--adaptive-softmax-dropout", - type=float, - metavar="D", - help="sets adaptive softmax dropout for the tail projections", - ) - parser.add_argument( - "--adaptive-softmax-factor", - type=float, - metavar="N", - help="adaptive input factor", - ) - parser.add_argument( - "--no-token-positional-embeddings", - default=False, - action="store_true", - help="if set, disables positional embeddings (outside self attention)", - ) - parser.add_argument( - "--share-decoder-input-output-embed", - default=False, - action="store_true", - help="share decoder input and output embeddings", - ) - parser.add_argument( - "--character-embeddings", - default=False, - action="store_true", - help="if set, uses character embedding convolutions to produce token embeddings", - ) - parser.add_argument( - "--character-filters", - type=str, - metavar="LIST", - default="[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]", - help="size of character embeddings", - ) - parser.add_argument( - "--character-embedding-dim", - type=int, - metavar="N", - default=4, - help="size of character embeddings", - ) - parser.add_argument( - "--char-embedder-highway-layers", - type=int, - metavar="N", - default=2, - help="number of highway layers for character token embeddder", - ) - parser.add_argument( - "--adaptive-input", - default=False, - action="store_true", - help="if set, uses adaptive input", - ) - parser.add_argument( - "--adaptive-input-factor", - type=float, - metavar="N", - help="adaptive input factor", - ) - parser.add_argument( - "--adaptive-input-cutoff", - metavar="EXPR", - help="comma separated list of adaptive input cutoff points.", - ) - parser.add_argument( - "--tie-adaptive-weights", - action="store_true", - help="if set, ties the weights of adaptive softmax and adaptive input", - ) - parser.add_argument( - "--tie-adaptive-proj", - action="store_true", - help="if set, ties the projection weights of adaptive softmax and adaptive input", - ) - parser.add_argument( - "--decoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the decoder", - ) - - """LightConv and DynamicConv arguments""" - parser.add_argument( - "--decoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31]")', - ) - parser.add_argument( - "--decoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--decoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument("--weight-softmax", default=True, type=utils.eval_bool) - parser.add_argument( - "--weight-dropout", - type=float, - metavar="D", - help="dropout probability for conv weights", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_lm_architecture(args) - - if getattr(args, "max_source_positions", None) is None: - args.max_source_positions = args.tokens_per_sample - if getattr(args, "max_target_positions", None) is None: - args.max_target_positions = args.tokens_per_sample - - if args.character_embeddings: - embed_tokens = CharacterTokenEmbedder( - task.dictionary, - 
eval(args.character_filters), - args.character_embedding_dim, - args.decoder_embed_dim, - args.char_embedder_highway_layers, - ) - elif args.adaptive_input: - embed_tokens = AdaptiveInput( - len(task.dictionary), - task.dictionary.pad(), - args.decoder_input_dim, - args.adaptive_input_factor, - args.decoder_embed_dim, - utils.eval_str_list(args.adaptive_input_cutoff, type=int), - ) - else: - embed_tokens = Embedding( - len(task.dictionary), args.decoder_input_dim, task.dictionary.pad() - ) - - if args.tie_adaptive_weights: - assert args.adaptive_input - assert args.adaptive_input_factor == args.adaptive_softmax_factor - assert ( - args.adaptive_softmax_cutoff == args.adaptive_input_cutoff - ), "{} != {}".format( - args.adaptive_softmax_cutoff, args.adaptive_input_cutoff - ) - assert args.decoder_input_dim == args.decoder_output_dim - - decoder = LightConvDecoder( - args, - task.output_dictionary, - embed_tokens, - no_encoder_attn=True, - final_norm=False, - ) - return LightConvLanguageModel(decoder) - - -@register_model_architecture("lightconv_lm", "lightconv_lm") -def base_lm_architecture(args): - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 2048) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.adaptive_softmax_factor = getattr(args, "adaptive_softmax_factor", 4) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - - args.character_embeddings = getattr(args, "character_embeddings", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - args.decoder_conv_dim = getattr(args, "decoder_conv_dim", args.decoder_embed_dim) - - # The model training is not stable without this - args.decoder_normalize_before = True - - args.adaptive_input = getattr(args, "adaptive_input", False) - args.adaptive_input_factor = getattr(args, "adaptive_input_factor", 4) - args.adaptive_input_cutoff = getattr(args, "adaptive_input_cutoff", None) - - args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False) - args.tie_adaptive_proj = getattr(args, "tie_adaptive_proj", False) - - args.decoder_kernel_size_list = getattr( - args, "decoder_kernel_size_list", [3, 7, 15, 31, 31, 31] - ) - if len(args.decoder_kernel_size_list) == 1: - args.decoder_kernel_size_list = ( - args.decoder_kernel_size_list * args.decoder_layers - ) - assert ( - len(args.decoder_kernel_size_list) == args.decoder_layers - ), "decoder_kernel_size_list doesn't match decoder_layers" - args.decoder_glu = getattr(args, "decoder_glu", True) - args.input_dropout = getattr(args, "input_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", args.attention_dropout) - - -@register_model_architecture("lightconv_lm", "lightconv_lm_gbw") -def lightconv_lm_gbw(args): - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - base_lm_architecture(args) diff --git 
a/kosmos-g/fairseq/fairseq/models/lstm.py b/kosmos-g/fairseq/fairseq/models/lstm.py deleted file mode 100644 index 8a2915627..000000000 --- a/kosmos-g/fairseq/fairseq/models/lstm.py +++ /dev/null @@ -1,755 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Dict, List, Optional, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, - register_model, - register_model_architecture, -) -from fairseq.modules import AdaptiveSoftmax, FairseqDropout -from torch import Tensor - - -DEFAULT_MAX_SOURCE_POSITIONS = 1e5 -DEFAULT_MAX_TARGET_POSITIONS = 1e5 - - -@register_model("lstm") -class LSTMModel(FairseqEncoderDecoderModel): - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--dropout', type=float, metavar='D', - help='dropout probability') - parser.add_argument('--encoder-embed-dim', type=int, metavar='N', - help='encoder embedding dimension') - parser.add_argument('--encoder-embed-path', type=str, metavar='STR', - help='path to pre-trained encoder embedding') - parser.add_argument('--encoder-freeze-embed', action='store_true', - help='freeze encoder embeddings') - parser.add_argument('--encoder-hidden-size', type=int, metavar='N', - help='encoder hidden size') - parser.add_argument('--encoder-layers', type=int, metavar='N', - help='number of encoder layers') - parser.add_argument('--encoder-bidirectional', action='store_true', - help='make all layers of encoder bidirectional') - parser.add_argument('--decoder-embed-dim', type=int, metavar='N', - help='decoder embedding dimension') - parser.add_argument('--decoder-embed-path', type=str, metavar='STR', - help='path to pre-trained decoder embedding') - parser.add_argument('--decoder-freeze-embed', action='store_true', - help='freeze decoder embeddings') - parser.add_argument('--decoder-hidden-size', type=int, metavar='N', - help='decoder hidden size') - parser.add_argument('--decoder-layers', type=int, metavar='N', - help='number of decoder layers') - parser.add_argument('--decoder-out-embed-dim', type=int, metavar='N', - help='decoder output embedding dimension') - parser.add_argument('--decoder-attention', type=str, metavar='BOOL', - help='decoder attention') - parser.add_argument('--adaptive-softmax-cutoff', metavar='EXPR', - help='comma separated list of adaptive softmax cutoff points. 
'
-                                 'Must be used with adaptive_loss criterion')
-        parser.add_argument('--share-decoder-input-output-embed', default=False,
-                            action='store_true',
-                            help='share decoder input and output embeddings')
-        parser.add_argument('--share-all-embeddings', default=False, action='store_true',
-                            help='share encoder, decoder and output embeddings'
-                                 ' (requires shared dictionary and embed dim)')
-
-        # Granular dropout settings (if not specified these default to --dropout)
-        parser.add_argument('--encoder-dropout-in', type=float, metavar='D',
-                            help='dropout probability for encoder input embedding')
-        parser.add_argument('--encoder-dropout-out', type=float, metavar='D',
-                            help='dropout probability for encoder output')
-        parser.add_argument('--decoder-dropout-in', type=float, metavar='D',
-                            help='dropout probability for decoder input embedding')
-        parser.add_argument('--decoder-dropout-out', type=float, metavar='D',
-                            help='dropout probability for decoder output')
-        # fmt: on
-
-    @classmethod
-    def build_model(cls, args, task):
-        """Build a new model instance."""
-        # make sure that all args are properly defaulted (in case there are any new ones)
-        base_architecture(args)
-
-        if args.encoder_layers != args.decoder_layers:
-            raise ValueError("--encoder-layers must match --decoder-layers")
-
-        max_source_positions = getattr(
-            args, "max_source_positions", DEFAULT_MAX_SOURCE_POSITIONS
-        )
-        max_target_positions = getattr(
-            args, "max_target_positions", DEFAULT_MAX_TARGET_POSITIONS
-        )
-
-        def load_pretrained_embedding_from_file(embed_path, dictionary, embed_dim):
-            num_embeddings = len(dictionary)
-            padding_idx = dictionary.pad()
-            embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx)
-            embed_dict = utils.parse_embedding(embed_path)
-            utils.print_embed_overlap(embed_dict, dictionary)
-            return utils.load_embedding(embed_dict, dictionary, embed_tokens)
-
-        if args.encoder_embed_path:
-            pretrained_encoder_embed = load_pretrained_embedding_from_file(
-                args.encoder_embed_path, task.source_dictionary, args.encoder_embed_dim
-            )
-        else:
-            num_embeddings = len(task.source_dictionary)
-            pretrained_encoder_embed = Embedding(
-                num_embeddings, args.encoder_embed_dim, task.source_dictionary.pad()
-            )
-
-        if args.share_all_embeddings:
-            # double check all parameters combinations are valid
-            if task.source_dictionary != task.target_dictionary:
-                raise ValueError("--share-all-embeddings requires a joint dictionary")
-            if args.decoder_embed_path and (
-                args.decoder_embed_path != args.encoder_embed_path
-            ):
-                raise ValueError(
-                    "--share-all-embeddings not compatible with --decoder-embed-path"
-                )
-            if args.encoder_embed_dim != args.decoder_embed_dim:
-                raise ValueError(
-                    "--share-all-embeddings requires --encoder-embed-dim to "
-                    "match --decoder-embed-dim"
-                )
-            pretrained_decoder_embed = pretrained_encoder_embed
-            args.share_decoder_input_output_embed = True
-        else:
-            # separate decoder input embeddings
-            pretrained_decoder_embed = None
-            if args.decoder_embed_path:
-                pretrained_decoder_embed = load_pretrained_embedding_from_file(
-                    args.decoder_embed_path,
-                    task.target_dictionary,
-                    args.decoder_embed_dim,
-                )
-        # one last double check of parameter combinations
-        if args.share_decoder_input_output_embed and (
-            args.decoder_embed_dim != args.decoder_out_embed_dim
-        ):
-            raise ValueError(
-                "--share-decoder-input-output-embed requires "
-                "--decoder-embed-dim to match --decoder-out-embed-dim"
-            )
-
-        if args.encoder_freeze_embed:
-            pretrained_encoder_embed.weight.requires_grad = False
-        if
args.decoder_freeze_embed: - pretrained_decoder_embed.weight.requires_grad = False - - encoder = LSTMEncoder( - dictionary=task.source_dictionary, - embed_dim=args.encoder_embed_dim, - hidden_size=args.encoder_hidden_size, - num_layers=args.encoder_layers, - dropout_in=args.encoder_dropout_in, - dropout_out=args.encoder_dropout_out, - bidirectional=args.encoder_bidirectional, - pretrained_embed=pretrained_encoder_embed, - max_source_positions=max_source_positions, - ) - decoder = LSTMDecoder( - dictionary=task.target_dictionary, - embed_dim=args.decoder_embed_dim, - hidden_size=args.decoder_hidden_size, - out_embed_dim=args.decoder_out_embed_dim, - num_layers=args.decoder_layers, - dropout_in=args.decoder_dropout_in, - dropout_out=args.decoder_dropout_out, - attention=utils.eval_bool(args.decoder_attention), - encoder_output_units=encoder.output_units, - pretrained_embed=pretrained_decoder_embed, - share_input_output_embed=args.share_decoder_input_output_embed, - adaptive_softmax_cutoff=( - utils.eval_str_list(args.adaptive_softmax_cutoff, type=int) - if args.criterion == "adaptive_loss" - else None - ), - max_target_positions=max_target_positions, - residuals=False, - ) - return cls(encoder, decoder) - - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - ): - encoder_out = self.encoder(src_tokens, src_lengths=src_lengths) - decoder_out = self.decoder( - prev_output_tokens, - encoder_out=encoder_out, - incremental_state=incremental_state, - ) - return decoder_out - - -class LSTMEncoder(FairseqEncoder): - """LSTM encoder.""" - - def __init__( - self, - dictionary, - embed_dim=512, - hidden_size=512, - num_layers=1, - dropout_in=0.1, - dropout_out=0.1, - bidirectional=False, - left_pad=True, - pretrained_embed=None, - padding_idx=None, - max_source_positions=DEFAULT_MAX_SOURCE_POSITIONS, - ): - super().__init__(dictionary) - self.num_layers = num_layers - self.dropout_in_module = FairseqDropout( - dropout_in * 1.0, module_name=self.__class__.__name__ - ) - self.dropout_out_module = FairseqDropout( - dropout_out * 1.0, module_name=self.__class__.__name__ - ) - self.bidirectional = bidirectional - self.hidden_size = hidden_size - self.max_source_positions = max_source_positions - - num_embeddings = len(dictionary) - self.padding_idx = padding_idx if padding_idx is not None else dictionary.pad() - if pretrained_embed is None: - self.embed_tokens = Embedding(num_embeddings, embed_dim, self.padding_idx) - else: - self.embed_tokens = pretrained_embed - - self.lstm = LSTM( - input_size=embed_dim, - hidden_size=hidden_size, - num_layers=num_layers, - dropout=self.dropout_out_module.p if num_layers > 1 else 0.0, - bidirectional=bidirectional, - ) - self.left_pad = left_pad - - self.output_units = hidden_size - if bidirectional: - self.output_units *= 2 - - def forward( - self, - src_tokens: Tensor, - src_lengths: Tensor, - enforce_sorted: bool = True, - ): - """ - Args: - src_tokens (LongTensor): tokens in the source language of - shape `(batch, src_len)` - src_lengths (LongTensor): lengths of each source sentence of - shape `(batch)` - enforce_sorted (bool, optional): if True, `src_tokens` is - expected to contain sequences sorted by length in a - decreasing order. If False, this condition is not - required. Default: True. 
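-
-        Note: left-padded batches are converted to right-padding internally
-        before packing, since ``nn.utils.rnn.pack_padded_sequence`` requires
-        right-padded input (see the conversion at the top of the function
-        body).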
- """ - if self.left_pad: - # nn.utils.rnn.pack_padded_sequence requires right-padding; - # convert left-padding to right-padding - src_tokens = utils.convert_padding_direction( - src_tokens, - torch.zeros_like(src_tokens).fill_(self.padding_idx), - left_to_right=True, - ) - - bsz, seqlen = src_tokens.size() - - # embed tokens - x = self.embed_tokens(src_tokens) - x = self.dropout_in_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # pack embedded source tokens into a PackedSequence - packed_x = nn.utils.rnn.pack_padded_sequence( - x, src_lengths.cpu(), enforce_sorted=enforce_sorted - ) - - # apply LSTM - if self.bidirectional: - state_size = 2 * self.num_layers, bsz, self.hidden_size - else: - state_size = self.num_layers, bsz, self.hidden_size - h0 = x.new_zeros(*state_size) - c0 = x.new_zeros(*state_size) - packed_outs, (final_hiddens, final_cells) = self.lstm(packed_x, (h0, c0)) - - # unpack outputs and apply dropout - x, _ = nn.utils.rnn.pad_packed_sequence( - packed_outs, padding_value=self.padding_idx * 1.0 - ) - x = self.dropout_out_module(x) - assert list(x.size()) == [seqlen, bsz, self.output_units] - - if self.bidirectional: - final_hiddens = self.combine_bidir(final_hiddens, bsz) - final_cells = self.combine_bidir(final_cells, bsz) - - encoder_padding_mask = src_tokens.eq(self.padding_idx).t() - - return tuple( - ( - x, # seq_len x batch x hidden - final_hiddens, # num_layers x batch x num_directions*hidden - final_cells, # num_layers x batch x num_directions*hidden - encoder_padding_mask, # seq_len x batch - ) - ) - - def combine_bidir(self, outs, bsz: int): - out = outs.view(self.num_layers, 2, bsz, -1).transpose(1, 2).contiguous() - return out.view(self.num_layers, bsz, -1) - - def reorder_encoder_out( - self, encoder_out: Tuple[Tensor, Tensor, Tensor, Tensor], new_order - ): - return tuple( - ( - encoder_out[0].index_select(1, new_order), - encoder_out[1].index_select(1, new_order), - encoder_out[2].index_select(1, new_order), - encoder_out[3].index_select(1, new_order), - ) - ) - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return self.max_source_positions - - -class AttentionLayer(nn.Module): - def __init__(self, input_embed_dim, source_embed_dim, output_embed_dim, bias=False): - super().__init__() - - self.input_proj = Linear(input_embed_dim, source_embed_dim, bias=bias) - self.output_proj = Linear( - input_embed_dim + source_embed_dim, output_embed_dim, bias=bias - ) - - def forward(self, input, source_hids, encoder_padding_mask): - # input: bsz x input_embed_dim - # source_hids: srclen x bsz x source_embed_dim - - # x: bsz x source_embed_dim - x = self.input_proj(input) - - # compute attention - attn_scores = (source_hids * x.unsqueeze(0)).sum(dim=2) - - # don't attend over padding - if encoder_padding_mask is not None: - attn_scores = ( - attn_scores.float() - .masked_fill_(encoder_padding_mask, float("-inf")) - .type_as(attn_scores) - ) # FP16 support: cast to float and back - - attn_scores = F.softmax(attn_scores, dim=0) # srclen x bsz - - # sum weighted sources - x = (attn_scores.unsqueeze(2) * source_hids).sum(dim=0) - - x = torch.tanh(self.output_proj(torch.cat((x, input), dim=1))) - return x, attn_scores - - -class LSTMDecoder(FairseqIncrementalDecoder): - """LSTM decoder.""" - - def __init__( - self, - dictionary, - embed_dim=512, - hidden_size=512, - out_embed_dim=512, - num_layers=1, - dropout_in=0.1, - dropout_out=0.1, - attention=True, - encoder_output_units=512, - pretrained_embed=None, - 
share_input_output_embed=False, - adaptive_softmax_cutoff=None, - max_target_positions=DEFAULT_MAX_TARGET_POSITIONS, - residuals=False, - ): - super().__init__(dictionary) - self.dropout_in_module = FairseqDropout( - dropout_in * 1.0, module_name=self.__class__.__name__ - ) - self.dropout_out_module = FairseqDropout( - dropout_out * 1.0, module_name=self.__class__.__name__ - ) - self.hidden_size = hidden_size - self.share_input_output_embed = share_input_output_embed - self.need_attn = True - self.max_target_positions = max_target_positions - self.residuals = residuals - self.num_layers = num_layers - - self.adaptive_softmax = None - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - if pretrained_embed is None: - self.embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx) - else: - self.embed_tokens = pretrained_embed - - self.encoder_output_units = encoder_output_units - if encoder_output_units != hidden_size and encoder_output_units != 0: - self.encoder_hidden_proj = Linear(encoder_output_units, hidden_size) - self.encoder_cell_proj = Linear(encoder_output_units, hidden_size) - else: - self.encoder_hidden_proj = self.encoder_cell_proj = None - - # disable input feeding if there is no encoder - # input feeding is described in arxiv.org/abs/1508.04025 - input_feed_size = 0 if encoder_output_units == 0 else hidden_size - self.layers = nn.ModuleList( - [ - LSTMCell( - input_size=input_feed_size + embed_dim - if layer == 0 - else hidden_size, - hidden_size=hidden_size, - ) - for layer in range(num_layers) - ] - ) - - if attention: - # TODO make bias configurable - self.attention = AttentionLayer( - hidden_size, encoder_output_units, hidden_size, bias=False - ) - else: - self.attention = None - - if hidden_size != out_embed_dim: - self.additional_fc = Linear(hidden_size, out_embed_dim) - - if adaptive_softmax_cutoff is not None: - # setting adaptive_softmax dropout to dropout_out for now but can be redefined - self.adaptive_softmax = AdaptiveSoftmax( - num_embeddings, - hidden_size, - adaptive_softmax_cutoff, - dropout=dropout_out, - ) - elif not self.share_input_output_embed: - self.fc_out = Linear(out_embed_dim, num_embeddings, dropout=dropout_out) - - def forward( - self, - prev_output_tokens, - encoder_out: Optional[Tuple[Tensor, Tensor, Tensor, Tensor]] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - src_lengths: Optional[Tensor] = None, - ): - x, attn_scores = self.extract_features( - prev_output_tokens, encoder_out, incremental_state - ) - return self.output_layer(x), attn_scores - - def extract_features( - self, - prev_output_tokens, - encoder_out: Optional[Tuple[Tensor, Tensor, Tensor, Tensor]] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - ): - """ - Similar to *forward* but only return features. 
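- 
- Returns a tuple ``(x, attn_scores)``: ``x`` holds
- ``(batch, tgt_len, hidden_size)`` features (projected to
- ``out_embed_dim`` when an additional output projection is
- configured), and ``attn_scores`` is ``(batch, tgt_len, src_len)``,
- or ``None`` during training or when attention is disabled or not
- requested.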
- """ - # get outputs from encoder - if encoder_out is not None: - encoder_outs = encoder_out[0] - encoder_hiddens = encoder_out[1] - encoder_cells = encoder_out[2] - encoder_padding_mask = encoder_out[3] - else: - encoder_outs = torch.empty(0) - encoder_hiddens = torch.empty(0) - encoder_cells = torch.empty(0) - encoder_padding_mask = torch.empty(0) - srclen = encoder_outs.size(0) - - if incremental_state is not None and len(incremental_state) > 0: - prev_output_tokens = prev_output_tokens[:, -1:] - - bsz, seqlen = prev_output_tokens.size() - - # embed tokens - x = self.embed_tokens(prev_output_tokens) - x = self.dropout_in_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # initialize previous states (or get from cache during incremental generation) - if incremental_state is not None and len(incremental_state) > 0: - prev_hiddens, prev_cells, input_feed = self.get_cached_state( - incremental_state - ) - elif encoder_out is not None: - # setup recurrent cells - prev_hiddens = [encoder_hiddens[i] for i in range(self.num_layers)] - prev_cells = [encoder_cells[i] for i in range(self.num_layers)] - if self.encoder_hidden_proj is not None: - prev_hiddens = [self.encoder_hidden_proj(y) for y in prev_hiddens] - prev_cells = [self.encoder_cell_proj(y) for y in prev_cells] - input_feed = x.new_zeros(bsz, self.hidden_size) - else: - # setup zero cells, since there is no encoder - zero_state = x.new_zeros(bsz, self.hidden_size) - prev_hiddens = [zero_state for i in range(self.num_layers)] - prev_cells = [zero_state for i in range(self.num_layers)] - input_feed = None - - assert ( - srclen > 0 or self.attention is None - ), "attention is not supported if there are no encoder outputs" - attn_scores: Optional[Tensor] = ( - x.new_zeros(srclen, seqlen, bsz) if self.attention is not None else None - ) - outs = [] - for j in range(seqlen): - # input feeding: concatenate context vector from previous time step - if input_feed is not None: - input = torch.cat((x[j, :, :], input_feed), dim=1) - else: - input = x[j] - - for i, rnn in enumerate(self.layers): - # recurrent cell - hidden, cell = rnn(input, (prev_hiddens[i], prev_cells[i])) - - # hidden state becomes the input to the next layer - input = self.dropout_out_module(hidden) - if self.residuals: - input = input + prev_hiddens[i] - - # save state for next time step - prev_hiddens[i] = hidden - prev_cells[i] = cell - - # apply attention using the last layer's hidden state - if self.attention is not None: - assert attn_scores is not None - out, attn_scores[:, j, :] = self.attention( - hidden, encoder_outs, encoder_padding_mask - ) - else: - out = hidden - out = self.dropout_out_module(out) - - # input feeding - if input_feed is not None: - input_feed = out - - # save final output - outs.append(out) - - # Stack all the necessary tensors together and store - prev_hiddens_tensor = torch.stack(prev_hiddens) - prev_cells_tensor = torch.stack(prev_cells) - cache_state = torch.jit.annotate( - Dict[str, Optional[Tensor]], - { - "prev_hiddens": prev_hiddens_tensor, - "prev_cells": prev_cells_tensor, - "input_feed": input_feed, - }, - ) - self.set_incremental_state(incremental_state, "cached_state", cache_state) - - # collect outputs across time steps - x = torch.cat(outs, dim=0).view(seqlen, bsz, self.hidden_size) - - # T x B x C -> B x T x C - x = x.transpose(1, 0) - - if hasattr(self, "additional_fc") and self.adaptive_softmax is None: - x = self.additional_fc(x) - x = self.dropout_out_module(x) - # srclen x tgtlen x bsz -> bsz x tgtlen x srclen - 
if not self.training and self.need_attn and self.attention is not None: - assert attn_scores is not None - attn_scores = attn_scores.transpose(0, 2) - else: - attn_scores = None - return x, attn_scores - - def output_layer(self, x): - """Project features to the vocabulary size.""" - if self.adaptive_softmax is None: - if self.share_input_output_embed: - x = F.linear(x, self.embed_tokens.weight) - else: - x = self.fc_out(x) - return x - - def get_cached_state( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - ) -> Tuple[List[Tensor], List[Tensor], Optional[Tensor]]: - cached_state = self.get_incremental_state(incremental_state, "cached_state") - assert cached_state is not None - prev_hiddens_ = cached_state["prev_hiddens"] - assert prev_hiddens_ is not None - prev_cells_ = cached_state["prev_cells"] - assert prev_cells_ is not None - prev_hiddens = [prev_hiddens_[i] for i in range(self.num_layers)] - prev_cells = [prev_cells_[j] for j in range(self.num_layers)] - input_feed = cached_state[ - "input_feed" - ] # can be None for decoder-only language models - return prev_hiddens, prev_cells, input_feed - - def reorder_incremental_state( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - new_order: Tensor, - ): - if incremental_state is None or len(incremental_state) == 0: - return - prev_hiddens, prev_cells, input_feed = self.get_cached_state(incremental_state) - prev_hiddens = [p.index_select(0, new_order) for p in prev_hiddens] - prev_cells = [p.index_select(0, new_order) for p in prev_cells] - if input_feed is not None: - input_feed = input_feed.index_select(0, new_order) - cached_state_new = torch.jit.annotate( - Dict[str, Optional[Tensor]], - { - "prev_hiddens": torch.stack(prev_hiddens), - "prev_cells": torch.stack(prev_cells), - "input_feed": input_feed, - }, - ) - self.set_incremental_state(incremental_state, "cached_state", cached_state_new), - return - - def max_positions(self): - """Maximum output length supported by the decoder.""" - return self.max_target_positions - - def make_generation_fast_(self, need_attn=False, **kwargs): - self.need_attn = need_attn - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.uniform_(m.weight, -0.1, 0.1) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def LSTM(input_size, hidden_size, **kwargs): - m = nn.LSTM(input_size, hidden_size, **kwargs) - for name, param in m.named_parameters(): - if "weight" in name or "bias" in name: - param.data.uniform_(-0.1, 0.1) - return m - - -def LSTMCell(input_size, hidden_size, **kwargs): - m = nn.LSTMCell(input_size, hidden_size, **kwargs) - for name, param in m.named_parameters(): - if "weight" in name or "bias" in name: - param.data.uniform_(-0.1, 0.1) - return m - - -def Linear(in_features, out_features, bias=True, dropout=0.0): - """Linear layer (input: N x T x C)""" - m = nn.Linear(in_features, out_features, bias=bias) - m.weight.data.uniform_(-0.1, 0.1) - if bias: - m.bias.data.uniform_(-0.1, 0.1) - return m - - -@register_model_architecture("lstm", "lstm") -def base_architecture(args): - args.dropout = getattr(args, "dropout", 0.1) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_freeze_embed = getattr(args, "encoder_freeze_embed", False) - args.encoder_hidden_size = getattr( - args, "encoder_hidden_size", args.encoder_embed_dim - ) - args.encoder_layers = 
getattr(args, "encoder_layers", 1) - args.encoder_bidirectional = getattr(args, "encoder_bidirectional", False) - args.encoder_dropout_in = getattr(args, "encoder_dropout_in", args.dropout) - args.encoder_dropout_out = getattr(args, "encoder_dropout_out", args.dropout) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_freeze_embed = getattr(args, "decoder_freeze_embed", False) - args.decoder_hidden_size = getattr( - args, "decoder_hidden_size", args.decoder_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 1) - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 512) - args.decoder_attention = getattr(args, "decoder_attention", "1") - args.decoder_dropout_in = getattr(args, "decoder_dropout_in", args.dropout) - args.decoder_dropout_out = getattr(args, "decoder_dropout_out", args.dropout) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.adaptive_softmax_cutoff = getattr( - args, "adaptive_softmax_cutoff", "10000,50000,200000" - ) - - -@register_model_architecture("lstm", "lstm_wiseman_iwslt_de_en") -def lstm_wiseman_iwslt_de_en(args): - args.dropout = getattr(args, "dropout", 0.1) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256) - args.encoder_dropout_in = getattr(args, "encoder_dropout_in", 0) - args.encoder_dropout_out = getattr(args, "encoder_dropout_out", 0) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256) - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 256) - args.decoder_dropout_in = getattr(args, "decoder_dropout_in", 0) - args.decoder_dropout_out = getattr(args, "decoder_dropout_out", args.dropout) - base_architecture(args) - - -@register_model_architecture("lstm", "lstm_luong_wmt_en_de") -def lstm_luong_wmt_en_de(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1000) - args.encoder_layers = getattr(args, "encoder_layers", 4) - args.encoder_dropout_out = getattr(args, "encoder_dropout_out", 0) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1000) - args.decoder_layers = getattr(args, "decoder_layers", 4) - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 1000) - args.decoder_dropout_out = getattr(args, "decoder_dropout_out", 0) - base_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/lstm_lm.py b/kosmos-g/fairseq/fairseq/models/lstm_lm.py deleted file mode 100644 index 454f0ac36..000000000 --- a/kosmos-g/fairseq/fairseq/models/lstm_lm.py +++ /dev/null @@ -1,142 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from fairseq import utils -from fairseq.models import ( - FairseqLanguageModel, - register_model, - register_model_architecture, -) -from fairseq.models.lstm import Embedding, LSTMDecoder - - -DEFAULT_MAX_TARGET_POSITIONS = 1e5 - - -@register_model("lstm_lm") -class LSTMLanguageModel(FairseqLanguageModel): - def __init__(self, decoder): - super().__init__(decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--dropout', type=float, metavar='D', - help='dropout probability') - parser.add_argument('--decoder-embed-dim', type=int, metavar='N', - help='decoder embedding dimension') - parser.add_argument('--decoder-embed-path', type=str, metavar='STR', - help='path to pre-trained decoder embedding') - parser.add_argument('--decoder-hidden-size', type=int, metavar='N', - help='decoder hidden size') - parser.add_argument('--decoder-layers', type=int, metavar='N', - help='number of decoder layers') - parser.add_argument('--decoder-out-embed-dim', type=int, metavar='N', - help='decoder output embedding dimension') - parser.add_argument('--decoder-attention', type=str, metavar='BOOL', - help='decoder attention') - parser.add_argument('--adaptive-softmax-cutoff', metavar='EXPR', - help='comma separated list of adaptive softmax cutoff points. ' - 'Must be used with adaptive_loss criterion') - parser.add_argument('--residuals', default=False, - action='store_true', - help='applying residuals between LSTM layers') - - # Granular dropout settings (if not specified these default to --dropout) - parser.add_argument('--decoder-dropout-in', type=float, metavar='D', - help='dropout probability for decoder input embedding') - parser.add_argument('--decoder-dropout-out', type=float, metavar='D', - help='dropout probability for decoder output') - parser.add_argument('--share-decoder-input-output-embed', default=False, - action='store_true', - help='share decoder input and output embeddings') - # fmt: on - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - if getattr(args, "max_target_positions", None) is not None: - max_target_positions = args.max_target_positions - else: - max_target_positions = getattr( - args, "tokens_per_sample", DEFAULT_MAX_TARGET_POSITIONS - ) - - def load_pretrained_embedding_from_file(embed_path, dictionary, embed_dim): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - embed_tokens = Embedding(num_embeddings, embed_dim, padding_idx) - embed_dict = utils.parse_embedding(embed_path) - utils.print_embed_overlap(embed_dict, dictionary) - return utils.load_embedding(embed_dict, dictionary, embed_tokens) - - pretrained_decoder_embed = None - if args.decoder_embed_path: - pretrained_decoder_embed = load_pretrained_embedding_from_file( - args.decoder_embed_path, task.target_dictionary, args.decoder_embed_dim - ) - - if args.share_decoder_input_output_embed: - # double check all parameters combinations are valid - if task.source_dictionary != task.target_dictionary: - raise ValueError( - "--share-decoder-input-output-embeddings requires a joint dictionary" - ) - - if args.decoder_embed_dim != args.decoder_out_embed_dim: - raise ValueError( - "--share-decoder-input-output-embeddings requires " - "--decoder-embed-dim to match --decoder-out-embed-dim" - ) - - decoder = LSTMDecoder( - dictionary=task.dictionary, - embed_dim=args.decoder_embed_dim, - 
hidden_size=args.decoder_hidden_size, - out_embed_dim=args.decoder_out_embed_dim, - num_layers=args.decoder_layers, - dropout_in=args.decoder_dropout_in, - dropout_out=args.decoder_dropout_out, - attention=False, # decoder-only language model doesn't support attention - encoder_output_units=0, - pretrained_embed=pretrained_decoder_embed, - share_input_output_embed=args.share_decoder_input_output_embed, - adaptive_softmax_cutoff=( - utils.eval_str_list(args.adaptive_softmax_cutoff, type=int) - if args.criterion == "adaptive_loss" - else None - ), - max_target_positions=max_target_positions, - residuals=args.residuals, - ) - - return cls(decoder) - - -@register_model_architecture("lstm_lm", "lstm_lm") -def base_architecture(args): - args.dropout = getattr(args, "dropout", 0.1) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_hidden_size = getattr( - args, "decoder_hidden_size", args.decoder_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 1) - args.decoder_out_embed_dim = getattr(args, "decoder_out_embed_dim", 512) - args.decoder_attention = getattr(args, "decoder_attention", "0") - args.decoder_dropout_in = getattr(args, "decoder_dropout_in", args.dropout) - args.decoder_dropout_out = getattr(args, "decoder_dropout_out", args.dropout) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.adaptive_softmax_cutoff = getattr( - args, "adaptive_softmax_cutoff", "10000,50000,200000" - ) - args.residuals = getattr(args, "residuals", False) diff --git a/kosmos-g/fairseq/fairseq/models/masked_lm.py b/kosmos-g/fairseq/fairseq/models/masked_lm.py deleted file mode 100644 index 5cb49dd77..000000000 --- a/kosmos-g/fairseq/fairseq/models/masked_lm.py +++ /dev/null @@ -1,404 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderModel, - register_model, - register_model_architecture, -) -from fairseq.modules import ( - LayerNorm, - SinusoidalPositionalEmbedding, - TransformerSentenceEncoder, -) -from fairseq.modules.transformer_sentence_encoder import init_bert_params -from fairseq.utils import safe_hasattr - - -logger = logging.getLogger(__name__) - - -@register_model("masked_lm") -class MaskedLMModel(FairseqEncoderModel): - """ - Class for training a Masked Language Model. It also supports an - additional sentence level prediction if the sent-loss argument is set. - """ - - def __init__(self, args, encoder): - super().__init__(encoder) - self.args = args - - # if specified then apply bert initialization on the model. 
We need - # to explicitly call this to make sure that the output embeddings - # and projection layers are also correctly initialized - if getattr(args, "apply_bert_init", False): - self.apply(init_bert_params) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # Arguments related to dropout - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for" " attention weights", - ) - parser.add_argument( - "--act-dropout", - type=float, - metavar="D", - help="dropout probability after" " activation in FFN", - ) - - # Arguments related to hidden states and self-attention - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads", - ) - - # Arguments related to input and output embeddings - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--share-encoder-input-output-embed", - action="store_true", - help="share encoder input" " and output embeddings", - ) - parser.add_argument( - "--encoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the encoder", - ) - parser.add_argument( - "--no-token-positional-embeddings", - action="store_true", - help="if set, disables positional embeddings" " (outside self attention)", - ) - parser.add_argument( - "--num-segment", type=int, metavar="N", help="num segment in the input" - ) - parser.add_argument( - "--max-positions", type=int, help="number of positional embeddings to learn" - ) - - # Arguments related to sentence level prediction - parser.add_argument( - "--sentence-class-num", - type=int, - metavar="N", - help="number of classes for sentence task", - ) - parser.add_argument( - "--sent-loss", - action="store_true", - help="if set," " calculate sentence level predictions", - ) - - # Arguments related to parameter initialization - parser.add_argument( - "--apply-bert-init", - action="store_true", - help="use custom param initialization for BERT", - ) - - # misc params - parser.add_argument( - "--activation-fn", - choices=utils.get_available_activation_fns(), - help="activation function to use", - ) - parser.add_argument( - "--pooler-activation-fn", - choices=utils.get_available_activation_fns(), - help="Which activation function to use for pooler layer.", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - - def forward(self, src_tokens, segment_labels=None, **kwargs): - return self.encoder(src_tokens, segment_labels=segment_labels, **kwargs) - - def max_positions(self): - return self.encoder.max_positions - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure all arguments are present in older models - base_architecture(args) - - if not safe_hasattr(args, "max_positions"): - args.max_positions = args.tokens_per_sample - - logger.info(args) - - encoder = MaskedLMEncoder(args, task.dictionary) - return cls(args, encoder) - - -class MaskedLMEncoder(FairseqEncoder): - """ - Encoder for Masked Language Modelling. 
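- 
- Wraps a TransformerSentenceEncoder and adds an LM head (a linear
- transform, activation, and LayerNorm, followed by a projection back
- to the vocabulary) plus a pooler over the sentence representation;
- see ``forward`` for the exact outputs.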
- """ - - def __init__(self, args, dictionary): - super().__init__(dictionary) - - self.padding_idx = dictionary.pad() - self.vocab_size = dictionary.__len__() - self.max_positions = args.max_positions - - self.sentence_encoder = TransformerSentenceEncoder( - padding_idx=self.padding_idx, - vocab_size=self.vocab_size, - num_encoder_layers=args.encoder_layers, - embedding_dim=args.encoder_embed_dim, - ffn_embedding_dim=args.encoder_ffn_embed_dim, - num_attention_heads=args.encoder_attention_heads, - dropout=args.dropout, - attention_dropout=args.attention_dropout, - activation_dropout=args.act_dropout, - max_seq_len=self.max_positions, - num_segments=args.num_segment, - use_position_embeddings=not args.no_token_positional_embeddings, - encoder_normalize_before=args.encoder_normalize_before, - apply_bert_init=args.apply_bert_init, - activation_fn=args.activation_fn, - learned_pos_embedding=args.encoder_learned_pos, - ) - - self.share_input_output_embed = args.share_encoder_input_output_embed - self.embed_out = None - self.sentence_projection_layer = None - self.sentence_out_dim = args.sentence_class_num - self.lm_output_learned_bias = None - - # Remove head is set to true during fine-tuning - self.load_softmax = not getattr(args, "remove_head", False) - - self.masked_lm_pooler = nn.Linear( - args.encoder_embed_dim, args.encoder_embed_dim - ) - self.pooler_activation = utils.get_activation_fn(args.pooler_activation_fn) - - self.lm_head_transform_weight = nn.Linear( - args.encoder_embed_dim, args.encoder_embed_dim - ) - self.activation_fn = utils.get_activation_fn(args.activation_fn) - self.layer_norm = LayerNorm(args.encoder_embed_dim) - - self.lm_output_learned_bias = None - if self.load_softmax: - self.lm_output_learned_bias = nn.Parameter(torch.zeros(self.vocab_size)) - - if not self.share_input_output_embed: - self.embed_out = nn.Linear( - args.encoder_embed_dim, self.vocab_size, bias=False - ) - - if args.sent_loss: - self.sentence_projection_layer = nn.Linear( - args.encoder_embed_dim, self.sentence_out_dim, bias=False - ) - - def forward(self, src_tokens, segment_labels=None, masked_tokens=None, **unused): - """ - Forward pass for Masked LM encoder. This first computes the token - embedding using the token embedding matrix, position embeddings (if - specified) and segment embeddings (if specified). - - Here we assume that the sentence representation corresponds to the - output of the classification_token (see bert_task or cross_lingual_lm - task for more details). - Args: - - src_tokens: B x T matrix representing sentences - - segment_labels: B x T matrix representing segment label for tokens - Returns: - - a tuple of the following: - - logits for predictions in format B x T x C to be used in - softmax afterwards - - a dictionary of additional data, where 'pooled_output' contains - the representation for classification_token and 'inner_states' - is a list of internal model states used to compute the - predictions (similar in ELMO). 'sentence_logits' - is the prediction logit for NSP task and is only computed if - this is specified in the input arguments. 
- """ - - inner_states, sentence_rep = self.sentence_encoder( - src_tokens, - segment_labels=segment_labels, - ) - - x = inner_states[-1].transpose(0, 1) - # project masked tokens only - if masked_tokens is not None: - x = x[masked_tokens, :] - x = self.layer_norm(self.activation_fn(self.lm_head_transform_weight(x))) - - pooled_output = self.pooler_activation(self.masked_lm_pooler(sentence_rep)) - - # project back to size of vocabulary - if self.share_input_output_embed and hasattr( - self.sentence_encoder.embed_tokens, "weight" - ): - x = F.linear(x, self.sentence_encoder.embed_tokens.weight) - elif self.embed_out is not None: - x = self.embed_out(x) - if self.lm_output_learned_bias is not None: - x = x + self.lm_output_learned_bias - sentence_logits = None - if self.sentence_projection_layer: - sentence_logits = self.sentence_projection_layer(pooled_output) - - return x, { - "inner_states": inner_states, - "pooled_output": pooled_output, - "sentence_logits": sentence_logits, - } - - def max_positions(self): - """Maximum output length supported by the encoder.""" - return self.max_positions - - def upgrade_state_dict_named(self, state_dict, name): - if isinstance( - self.sentence_encoder.embed_positions, SinusoidalPositionalEmbedding - ): - state_dict[ - name + ".sentence_encoder.embed_positions._float_tensor" - ] = torch.FloatTensor(1) - if not self.load_softmax: - for k in list(state_dict.keys()): - if ( - "embed_out.weight" in k - or "sentence_projection_layer.weight" in k - or "lm_output_learned_bias" in k - ): - del state_dict[k] - return state_dict - - -@register_model_architecture("masked_lm", "masked_lm") -def base_architecture(args): - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.act_dropout = getattr(args, "act_dropout", 0.0) - - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.share_encoder_input_output_embed = getattr( - args, "share_encoder_input_output_embed", False - ) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.num_segment = getattr(args, "num_segment", 2) - - args.sentence_class_num = getattr(args, "sentence_class_num", 2) - args.sent_loss = getattr(args, "sent_loss", False) - - args.apply_bert_init = getattr(args, "apply_bert_init", False) - - args.activation_fn = getattr(args, "activation_fn", "relu") - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - - -@register_model_architecture("masked_lm", "bert_base") -def bert_base_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 768) - args.share_encoder_input_output_embed = getattr( - args, "share_encoder_input_output_embed", True - ) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True) - args.num_segment = getattr(args, "num_segment", 2) - - args.encoder_layers = getattr(args, "encoder_layers", 12) - - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 12) - args.encoder_ffn_embed_dim = getattr(args, 
"encoder_ffn_embed_dim", 3072) - - args.sentence_class_num = getattr(args, "sentence_class_num", 2) - args.sent_loss = getattr(args, "sent_loss", True) - - args.apply_bert_init = getattr(args, "apply_bert_init", True) - - args.activation_fn = getattr(args, "activation_fn", "gelu") - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - base_architecture(args) - - -@register_model_architecture("masked_lm", "bert_large") -def bert_large_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_layers = getattr(args, "encoder_layers", 24) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - bert_base_architecture(args) - - -@register_model_architecture("masked_lm", "xlm_base") -def xlm_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.share_encoder_input_output_embed = getattr( - args, "share_encoder_input_output_embed", True - ) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True) - args.num_segment = getattr(args, "num_segment", 1) - - args.encoder_layers = getattr(args, "encoder_layers", 6) - - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - - args.sent_loss = getattr(args, "sent_loss", False) - - args.activation_fn = getattr(args, "activation_fn", "gelu") - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - args.apply_bert_init = getattr(args, "apply_bert_init", True) - base_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/model_utils.py b/kosmos-g/fairseq/fairseq/models/model_utils.py deleted file mode 100644 index 732d66b1d..000000000 --- a/kosmos-g/fairseq/fairseq/models/model_utils.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from typing import List, Optional - -import torch -from torch import Tensor - - -@torch.jit.script -def script_skip_tensor_list(x: List[Tensor], mask): - res = [xi[mask] if xi.size(0) == mask.size(0) else xi[:, mask] for xi in x] - outputs = [] - for i, t in enumerate(res): - if t.numel() != 0: - outputs.append(t) - else: - outputs.append(x[i]) - return outputs - - -@torch.jit.script -def script_skip_tensor(x: Tensor, mask): - # None case - if x.size(0) == 0: - return x - res = x[mask] if x.size(0) == mask.size(0) else x[:, mask] - if res.numel() == 0: - return x - else: - return res - - -@torch.jit.script -def expand_2d_or_3d_tensor(x, trg_dim: int, padding_idx: int): - """ - Expand 2D/3D tensor on dim=1 - """ - if x is None: - return None - - assert x.dim() == 2 or x.dim() == 3 - assert trg_dim >= x.size(1), (trg_dim, x.size()) - if trg_dim == x.size(1): - return x - - dims = [x.size(0), trg_dim - x.size(1)] - if x.dim() == 3: - dims.append(x.size(2)) - x = torch.cat([x, torch.zeros(dims).to(x).fill_(padding_idx)], 1) - - return x - - -@torch.jit.script -def coalesce(x: Optional[Tensor], y: Tensor) -> Tensor: - return x if x is not None else y - - -@torch.jit.script -def fill_tensors( - x: Optional[Tensor], mask, y: Optional[Tensor], padding_idx: int -) -> Optional[Tensor]: - """ - Filling tensor x with y at masked positions (dim=0). - """ - if x is None or x.size()[0] == 0 or y is None: - return x - assert x.dim() == y.dim() and mask.size(0) == x.size(0) - assert x.dim() == 2 or (x.dim() == 3 and x.size(2) == y.size(2)) - - n_selected = mask.sum() - if n_selected == 0: - return x - assert n_selected == y.size(0) - if n_selected == x.size(0): - return y - - if x.size(1) < y.size(1): - x = expand_2d_or_3d_tensor(x, y.size(1), padding_idx) - x[mask] = y - elif x.size(1) > y.size(1): - x[mask] = torch.tensor(padding_idx).type_as(x) - if x.dim() == 2: - x[mask, : y.size(1)] = y - else: - x[mask, : y.size(1), :] = y - else: - x[mask] = y - return x diff --git a/kosmos-g/fairseq/fairseq/models/multilingual_transformer.py b/kosmos-g/fairseq/fairseq/models/multilingual_transformer.py deleted file mode 100644 index e722b647e..000000000 --- a/kosmos-g/fairseq/fairseq/models/multilingual_transformer.py +++ /dev/null @@ -1,229 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from collections import OrderedDict - -from fairseq import utils -from fairseq.models import ( - FairseqMultiModel, - register_model, - register_model_architecture, -) -from fairseq.models.transformer import ( - Embedding, - TransformerDecoder, - TransformerEncoder, - TransformerModel, - base_architecture, -) -from fairseq.utils import safe_hasattr - - -@register_model("multilingual_transformer") -class MultilingualTransformerModel(FairseqMultiModel): - """Train Transformer models for multiple language pairs simultaneously. - - Requires `--task multilingual_translation`. - - We inherit all arguments from TransformerModel and assume that all language - pairs use a single Transformer architecture. In addition, we provide several - options that are specific to the multilingual setting. - - Args: - --share-encoder-embeddings: share encoder embeddings across all source languages - --share-decoder-embeddings: share decoder embeddings across all target languages - --share-encoders: share all encoder params (incl. 
embeddings) across all source languages - --share-decoders: share all decoder params (incl. embeddings) across all target languages - """ - - def __init__(self, encoders, decoders): - super().__init__(encoders, decoders) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - TransformerModel.add_args(parser) - parser.add_argument( - "--share-encoder-embeddings", - action="store_true", - help="share encoder embeddings across languages", - ) - parser.add_argument( - "--share-decoder-embeddings", - action="store_true", - help="share decoder embeddings across languages", - ) - parser.add_argument( - "--share-encoders", - action="store_true", - help="share encoders across languages", - ) - parser.add_argument( - "--share-decoders", - action="store_true", - help="share decoders across languages", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - from fairseq.tasks.multilingual_translation import MultilingualTranslationTask - - assert isinstance(task, MultilingualTranslationTask) - - # make sure all arguments are present in older models - base_multilingual_architecture(args) - - if not safe_hasattr(args, "max_source_positions"): - args.max_source_positions = 1024 - if not safe_hasattr(args, "max_target_positions"): - args.max_target_positions = 1024 - - src_langs = [lang_pair.split("-")[0] for lang_pair in task.model_lang_pairs] - tgt_langs = [lang_pair.split("-")[1] for lang_pair in task.model_lang_pairs] - - if args.share_encoders: - args.share_encoder_embeddings = True - if args.share_decoders: - args.share_decoder_embeddings = True - - def build_embedding(dictionary, embed_dim, path=None): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - emb = Embedding(num_embeddings, embed_dim, padding_idx) - # if provided, load from preloaded dictionaries - if path: - embed_dict = utils.parse_embedding(path) - utils.load_embedding(embed_dict, dictionary, emb) - return emb - - # build shared embeddings (if applicable) - shared_encoder_embed_tokens, shared_decoder_embed_tokens = None, None - if args.share_all_embeddings: - if args.encoder_embed_dim != args.decoder_embed_dim: - raise ValueError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if args.decoder_embed_path and ( - args.decoder_embed_path != args.encoder_embed_path - ): - raise ValueError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - shared_encoder_embed_tokens = FairseqMultiModel.build_shared_embeddings( - dicts=task.dicts, - langs=task.langs, - embed_dim=args.encoder_embed_dim, - build_embedding=build_embedding, - pretrained_embed_path=args.encoder_embed_path, - ) - shared_decoder_embed_tokens = shared_encoder_embed_tokens - args.share_decoder_input_output_embed = True - else: - if args.share_encoder_embeddings: - shared_encoder_embed_tokens = FairseqMultiModel.build_shared_embeddings( - dicts=task.dicts, - langs=src_langs, - embed_dim=args.encoder_embed_dim, - build_embedding=build_embedding, - pretrained_embed_path=args.encoder_embed_path, - ) - if args.share_decoder_embeddings: - shared_decoder_embed_tokens = FairseqMultiModel.build_shared_embeddings( - dicts=task.dicts, - langs=tgt_langs, - embed_dim=args.decoder_embed_dim, - build_embedding=build_embedding, - pretrained_embed_path=args.decoder_embed_path, - ) - - # encoders/decoders for each language - lang_encoders, lang_decoders = {}, {} - - def get_encoder(lang): - if lang not in lang_encoders: - if 
shared_encoder_embed_tokens is not None: - encoder_embed_tokens = shared_encoder_embed_tokens - else: - encoder_embed_tokens = build_embedding( - task.dicts[lang], - args.encoder_embed_dim, - args.encoder_embed_path, - ) - lang_encoders[lang] = cls._get_module_class( - True, args, task.dicts[lang], encoder_embed_tokens, src_langs - ) - return lang_encoders[lang] - - def get_decoder(lang): - if lang not in lang_decoders: - if shared_decoder_embed_tokens is not None: - decoder_embed_tokens = shared_decoder_embed_tokens - else: - decoder_embed_tokens = build_embedding( - task.dicts[lang], - args.decoder_embed_dim, - args.decoder_embed_path, - ) - lang_decoders[lang] = cls._get_module_class( - False, args, task.dicts[lang], decoder_embed_tokens, tgt_langs - ) - return lang_decoders[lang] - - # shared encoders/decoders (if applicable) - shared_encoder, shared_decoder = None, None - if args.share_encoders: - shared_encoder = get_encoder(src_langs[0]) - if args.share_decoders: - shared_decoder = get_decoder(tgt_langs[0]) - - encoders, decoders = OrderedDict(), OrderedDict() - for lang_pair, src, tgt in zip(task.model_lang_pairs, src_langs, tgt_langs): - encoders[lang_pair] = ( - shared_encoder if shared_encoder is not None else get_encoder(src) - ) - decoders[lang_pair] = ( - shared_decoder if shared_decoder is not None else get_decoder(tgt) - ) - - return MultilingualTransformerModel(encoders, decoders) - - @classmethod - def _get_module_class(cls, is_encoder, args, lang_dict, embed_tokens, langs): - module_class = TransformerEncoder if is_encoder else TransformerDecoder - return module_class(args, lang_dict, embed_tokens) - - def load_state_dict(self, state_dict, strict=True, model_cfg=None): - state_dict_subset = state_dict.copy() - for k, _ in state_dict.items(): - assert k.startswith("models.") - lang_pair = k.split(".")[1] - if lang_pair not in self.models: - del state_dict_subset[k] - super().load_state_dict(state_dict_subset, strict=strict, model_cfg=model_cfg) - - -@register_model_architecture("multilingual_transformer", "multilingual_transformer") -def base_multilingual_architecture(args): - base_architecture(args) - args.share_encoder_embeddings = getattr(args, "share_encoder_embeddings", False) - args.share_decoder_embeddings = getattr(args, "share_decoder_embeddings", False) - args.share_encoders = getattr(args, "share_encoders", False) - args.share_decoders = getattr(args, "share_decoders", False) - - -@register_model_architecture( - "multilingual_transformer", "multilingual_transformer_iwslt_de_en" -) -def multilingual_transformer_iwslt_de_en(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.decoder_layers = getattr(args, "decoder_layers", 6) - base_multilingual_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/nat/__init__.py b/kosmos-g/fairseq/fairseq/models/nat/__init__.py deleted file mode 100644 index 05fe82248..000000000 --- a/kosmos-g/fairseq/fairseq/models/nat/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -from .fairseq_nat_model import * -from .nonautoregressive_transformer import * -from .nat_crf_transformer import * -from .iterative_nonautoregressive_transformer import * -from .cmlm_transformer import * -from .levenshtein_transformer import * -from .insertion_transformer import * diff --git a/kosmos-g/fairseq/fairseq/models/nat/cmlm_transformer.py b/kosmos-g/fairseq/fairseq/models/nat/cmlm_transformer.py deleted file mode 100644 index c876e9453..000000000 --- a/kosmos-g/fairseq/fairseq/models/nat/cmlm_transformer.py +++ /dev/null @@ -1,162 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -This file implements: -Ghazvininejad, Marjan, et al. -"Constant-time machine translation with conditional masked language models." -arXiv preprint arXiv:1904.09324 (2019). -""" - -from fairseq.models import register_model, register_model_architecture -from fairseq.models.nat import NATransformerModel -from fairseq.utils import new_arange - - -def _skeptical_unmasking(output_scores, output_masks, p): - sorted_index = output_scores.sort(-1)[1] - boundary_len = ( - (output_masks.sum(1, keepdim=True).type_as(output_scores) - 2) * p - ).long() - skeptical_mask = new_arange(output_masks) < boundary_len - return skeptical_mask.scatter(1, sorted_index, skeptical_mask) - - -@register_model("cmlm_transformer") -class CMLMNATransformerModel(NATransformerModel): - @staticmethod - def add_args(parser): - NATransformerModel.add_args(parser) - - def forward( - self, src_tokens, src_lengths, prev_output_tokens, tgt_tokens, **kwargs - ): - assert not self.decoder.src_embedding_copy, "do not support embedding copy." - - # encoding - encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) - # length prediction - length_out = self.decoder.forward_length( - normalize=False, encoder_out=encoder_out - ) - length_tgt = self.decoder.forward_length_prediction( - length_out, encoder_out, tgt_tokens - ) - - # decoding - word_ins_out = self.decoder( - normalize=False, - prev_output_tokens=prev_output_tokens, - encoder_out=encoder_out, - ) - word_ins_mask = prev_output_tokens.eq(self.unk) - - return { - "word_ins": { - "out": word_ins_out, - "tgt": tgt_tokens, - "mask": word_ins_mask, - "ls": self.args.label_smoothing, - "nll_loss": True, - }, - "length": { - "out": length_out, - "tgt": length_tgt, - "factor": self.decoder.length_loss_factor, - }, - } - - def forward_decoder(self, decoder_out, encoder_out, decoding_format=None, **kwargs): - - step = decoder_out.step - max_step = decoder_out.max_step - - output_tokens = decoder_out.output_tokens - output_scores = decoder_out.output_scores - history = decoder_out.history - - # execute the decoder - output_masks = output_tokens.eq(self.unk) - _scores, _tokens = self.decoder( - normalize=True, - prev_output_tokens=output_tokens, - encoder_out=encoder_out, - ).max(-1) - output_tokens.masked_scatter_(output_masks, _tokens[output_masks]) - output_scores.masked_scatter_(output_masks, _scores[output_masks]) - - if history is not None: - history.append(output_tokens.clone()) - - # skeptical decoding (depend on the maximum decoding steps.) 
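- # A rough schedule (illustrative numbers, not from the paper): with
- # max_step=10, after step 0 the lowest-scoring ~90% of non-special
- # positions are re-masked with unk, after step 4 ~50%, and at the last
- # step this branch is skipped, so every current prediction is kept.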
- if (step + 1) < max_step: - skeptical_mask = _skeptical_unmasking( - output_scores, output_tokens.ne(self.pad), 1 - (step + 1) / max_step - ) - - output_tokens.masked_fill_(skeptical_mask, self.unk) - output_scores.masked_fill_(skeptical_mask, 0.0) - - if history is not None: - history.append(output_tokens.clone()) - - return decoder_out._replace( - output_tokens=output_tokens, - output_scores=output_scores, - attn=None, - history=history, - ) - - -@register_model_architecture("cmlm_transformer", "cmlm_transformer") -def cmlm_base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", True) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.apply_bert_init = getattr(args, "apply_bert_init", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - # --- special arguments --- - args.sg_length_pred = getattr(args, "sg_length_pred", False) - args.pred_length_offset = getattr(args, "pred_length_offset", False) - args.length_loss_factor = getattr(args, "length_loss_factor", 0.1) - args.ngram_predictor = getattr(args, "ngram_predictor", 1) - args.src_embedding_copy = getattr(args, "src_embedding_copy", False) - - -@register_model_architecture("cmlm_transformer", "cmlm_transformer_wmt_en_de") -def cmlm_wmt_en_de(args): - cmlm_base_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/nat/fairseq_nat_model.py b/kosmos-g/fairseq/fairseq/models/nat/fairseq_nat_model.py deleted file mode 100644 index a5594a4ed..000000000 --- a/kosmos-g/fairseq/fairseq/models/nat/fairseq_nat_model.py +++ /dev/null @@ -1,172 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -from fairseq.models.transformer import ( - TransformerDecoder, - TransformerEncoder, - TransformerModel, -) -from fairseq.modules.transformer_sentence_encoder import init_bert_params - - -def ensemble_encoder(func): - def wrapper(self, *args, **kwargs): - if self.ensemble_models is None or len(self.ensemble_models) == 1: - return func(self, *args, **kwargs) - encoder_outs = [ - func(model, *args, **kwargs, return_all_hiddens=True) - for model in self.ensemble_models - ] - _encoder_out = encoder_outs[0].copy() - - def stack(key): - outs = [e[key][0] for e in encoder_outs] - return [torch.stack(outs, -1) if outs[0] is not None else None] - - _encoder_out["encoder_out"] = stack("encoder_out") - _encoder_out["encoder_embedding"] = stack("encoder_embedding") - - num_layers = len(_encoder_out["encoder_states"]) - if num_layers > 0: - _encoder_out["encoder_states"] = [ - torch.stack([e["encoder_states"][i] for e in encoder_outs], -1) - for i in range(num_layers) - ] - return _encoder_out - - return wrapper - - -def ensemble_decoder(func): - def wrapper(self, normalize=False, encoder_out=None, *args, **kwargs): - if self.ensemble_models is None or len(self.ensemble_models) == 1: - return func( - self, normalize=normalize, encoder_out=encoder_out, *args, **kwargs - ) - - def _replace(encoder_out, new_val): - new_encoder_out = encoder_out.copy() - new_encoder_out["encoder_out"] = [new_val] - return new_encoder_out - - action_outs = [ - func( - model, - normalize=normalize, - encoder_out=_replace( - encoder_out, encoder_out["encoder_out"][0][:, :, :, i] - ), - *args, - **kwargs - ) - for i, model in enumerate(self.ensemble_models) - ] - - if not isinstance(action_outs[0], tuple): # return multiple values - action_outs = [[a] for a in action_outs] - else: - action_outs = [list(a) for a in action_outs] - - ensembled_outs = [] - for i in range(len(action_outs[0])): - if i == 0 and normalize: - ensembled_outs += [ - torch.logsumexp( - torch.stack([a[i] for a in action_outs], -1), dim=-1 - ) - - math.log(len(self.ensemble_models)) - ] - elif action_outs[0][i] is not None: - ensembled_outs += [torch.stack([a[i] for a in action_outs], -1)] - else: - ensembled_outs += [None] - - if len(ensembled_outs) == 1: - return ensembled_outs[0] - return tuple(ensembled_outs) - - return wrapper - - -class FairseqNATModel(TransformerModel): - """ - Abstract class for all nonautoregressive-based models - """ - - def __init__(self, args, encoder, decoder): - super().__init__(args, encoder, decoder) - self.tgt_dict = decoder.dictionary - self.bos = decoder.dictionary.bos() - self.eos = decoder.dictionary.eos() - self.pad = decoder.dictionary.pad() - self.unk = decoder.dictionary.unk() - - self.ensemble_models = None - - @property - def allow_length_beam(self): - return False - - @property - def allow_ensemble(self): - return True - - def enable_ensemble(self, models): - self.encoder.ensemble_models = [m.encoder for m in models] - self.decoder.ensemble_models = [m.decoder for m in models] - - @staticmethod - def add_args(parser): - TransformerModel.add_args(parser) - parser.add_argument( - "--apply-bert-init", - action="store_true", - help="use custom param initialization for BERT", - ) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - decoder = FairseqNATDecoder(args, tgt_dict, embed_tokens) - if getattr(args, "apply_bert_init", False): - 
decoder.apply(init_bert_params) - return decoder - - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - encoder = FairseqNATEncoder(args, src_dict, embed_tokens) - if getattr(args, "apply_bert_init", False): - encoder.apply(init_bert_params) - return encoder - - def forward_encoder(self, encoder_inputs): - return self.encoder(*encoder_inputs) - - def forward_decoder(self, *args, **kwargs): - raise NotImplementedError - - def initialize_output_tokens(self, *args, **kwargs): - raise NotImplementedError - - def forward(self, *args, **kwargs): - raise NotImplementedError - - -class FairseqNATEncoder(TransformerEncoder): - def __init__(self, args, dictionary, embed_tokens): - super().__init__(args, dictionary, embed_tokens) - self.ensemble_models = None - - @ensemble_encoder - def forward(self, *args, **kwargs): - return super().forward(*args, **kwargs) - - -class FairseqNATDecoder(TransformerDecoder): - def __init__(self, args, dictionary, embed_tokens, no_encoder_attn=False): - super().__init__(args, dictionary, embed_tokens, no_encoder_attn) - self.ensemble_models = None diff --git a/kosmos-g/fairseq/fairseq/models/nat/insertion_transformer.py b/kosmos-g/fairseq/fairseq/models/nat/insertion_transformer.py deleted file mode 100644 index bc28000f5..000000000 --- a/kosmos-g/fairseq/fairseq/models/nat/insertion_transformer.py +++ /dev/null @@ -1,280 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -import torch.nn.functional as F -from fairseq.models import register_model, register_model_architecture -from fairseq.models.nat import ( - FairseqNATModel, - LevenshteinTransformerDecoder, - LevenshteinTransformerModel, - ensemble_decoder, -) -from fairseq.models.transformer import Linear -from fairseq.modules.transformer_sentence_encoder import init_bert_params -from fairseq.utils import new_arange - - -class NegativeDistanceScore(object): - def __init__(self): - - # pre-compute some values - self.scores = {} - - self.scores[0.5] = self.compute_score_full(50, 0.5) - self.scores[1.0] = self.compute_score_full(50, 1.0) - self.scores[2.0] = self.compute_score_full(50, 2.0) - - def __call__(self, i, L, tau): - if (tau is None) or (tau > 1000): - return 1 / L - - if tau in self.scores: - if L < self.scores[tau].shape[0]: - return self.scores[tau][L - 1, i] - return self.compute_score(L, tau)[i] - - def compute_score(self, L, tau): - s = np.array([-abs(L / 2 - i) / tau for i in range(L)]) - s = np.exp(s - s.max()) - return s / s.sum() - - def compute_score_full(self, L, tau): - s = -abs(np.arange(0, L - 1)[:, None] / 2 - np.arange(L)[None, :]) / tau - s = np.tril(s, 0) + np.triu(s - float("inf"), 1) - s = np.exp(s - s.max(1, keepdims=True)) - return s / s.sum(1, keepdims=True) - - -neg_scorer = NegativeDistanceScore() - - -def _get_ins_targets(in_tokens, out_tokens, padding_idx, unk_idx, vocab_size, tau=None): - try: - from fairseq import libnat - except ImportError as e: - import sys - - sys.stderr.write("ERROR: missing libnat. 
run `pip install --editable .`\n") - raise e - - B = in_tokens.size(0) - T = in_tokens.size(1) - V = vocab_size - - with torch.cuda.device_of(in_tokens): - in_tokens_list = [ - [t for t in s if t != padding_idx] for i, s in enumerate(in_tokens.tolist()) - ] - out_tokens_list = [ - [t for t in s if t != padding_idx] - for i, s in enumerate(out_tokens.tolist()) - ] - - full_labels = libnat.suggested_ed2_path( - in_tokens_list, out_tokens_list, padding_idx - ) - insert_labels = [a[:-1] for a in full_labels] - - # numericalize1 - insert_label_tensors = in_tokens.new_zeros(B * (T - 1) * V).float() - insert_index, insert_labels = zip( - *[ - (w + (j + i * (T - 1)) * V, neg_scorer(k, len(label), tau)) - for i, labels in enumerate(insert_labels) - for j, label in enumerate(labels[1:-1]) - for k, w in enumerate(label) - ] - ) # HACK 1:-1 - insert_index, insert_labels = [ - torch.tensor(list(a), device=in_tokens.device) - for a in [insert_index, insert_labels] - ] - insert_label_tensors.scatter_(0, insert_index.long(), insert_labels) - insert_label_tensors = insert_label_tensors.view(B, T - 1, V) - - return insert_label_tensors - - -def _apply_ins_words(in_tokens, in_scores, word_ins_pred, word_ins_scores, padding_idx): - - padding_masks = in_tokens[:, 1:].eq(padding_idx) - word_ins_scores.masked_fill_(padding_masks, 0.0) - word_ins_pred.masked_fill_(padding_masks, padding_idx) - - in_coords = new_arange(in_tokens).type_as(in_scores) - - # shift all padding predictions to infinite - out_coords = (in_coords[:, 1:] - 0.5).masked_fill( - word_ins_pred.eq(padding_idx), float("inf") - ) - out_coords = torch.cat([in_coords, out_coords], 1).sort(-1)[1] - out_tokens = torch.cat([in_tokens, word_ins_pred], 1).gather(1, out_coords) - out_scores = torch.cat([in_scores, word_ins_scores], 1).gather(1, out_coords) - return out_tokens, out_scores - - -@register_model("insertion_transformer") -class InsertionTransformerModel(LevenshteinTransformerModel): - def __init__(self, args, encoder, decoder): - super().__init__(args, encoder, decoder) - - @staticmethod - def add_args(parser): - FairseqNATModel.add_args(parser) - parser.add_argument("--label-tau", default=None, type=float) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - decoder = InsertionTransformerDecoder(args, tgt_dict, embed_tokens) - if getattr(args, "apply_bert_init", False): - decoder.apply(init_bert_params) - return decoder - - def forward( - self, src_tokens, src_lengths, prev_output_tokens, tgt_tokens, **kwargs - ): - - assert tgt_tokens is not None, "forward function only supports training." 
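- # Training sketch: encode the source, score each slot between adjacent
- # tokens of prev_output_tokens, and supervise with the soft insertion
- # labels from _get_ins_targets, which spread probability mass over the
- # words an optimal edit path would insert into each slot (weighted by
- # NegativeDistanceScore when --label-tau is set).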
- - # encoding - encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) - - # generate training labels for insertion - word_ins_out = self.decoder.forward_word_ins( - normalize=False, - prev_output_tokens=prev_output_tokens, - encoder_out=encoder_out, - ) - - word_ins_tgt = _get_ins_targets( - prev_output_tokens, - tgt_tokens, - self.pad, - self.unk, - len(self.tgt_dict), - tau=self.decoder.label_tau, - ).type_as(word_ins_out) - word_ins_masks = prev_output_tokens[:, 1:].ne(self.pad) - - return { - "word_ins": { - "out": word_ins_out, - "tgt": word_ins_tgt, - "mask": word_ins_masks, - "ls": self.args.label_smoothing, - "nll_loss": True, - } - } - - def forward_decoder( - self, decoder_out, encoder_out, eos_penalty=0.0, max_ratio=None, **kwargs - ): - - output_tokens = decoder_out.output_tokens - output_scores = decoder_out.output_scores - history = decoder_out.history - - # TODO: decoding for InsertionTransformer - word_ins_score = self.decoder.forward_word_ins( - normalize=True, prev_output_tokens=output_tokens, encoder_out=encoder_out - ) - - if eos_penalty > 0.0: - word_ins_score[:, :, self.pad] -= eos_penalty - word_ins_score, word_ins_pred = word_ins_score.max(-1) - output_tokens, output_scores = _apply_ins_words( - output_tokens, output_scores, word_ins_pred, word_ins_score, self.pad - ) - - # delete some unnecessary paddings - cut_off = output_tokens.ne(self.pad).sum(1).max() - output_tokens = output_tokens[:, :cut_off] - output_scores = output_scores[:, :cut_off] - - if history is not None: - history.append(output_tokens.clone()) - - return decoder_out._replace( - output_tokens=output_tokens, - output_scores=output_scores, - attn=None, - history=history, - ) - - -class InsertionTransformerDecoder(LevenshteinTransformerDecoder): - def __init__(self, args, dictionary, embed_tokens, no_encoder_attn=False): - # use the TransformerDecoder's __init__ - super(LevenshteinTransformerDecoder, self).__init__( - args, dictionary, embed_tokens, no_encoder_attn=no_encoder_attn - ) - - self.dictionary = dictionary - self.bos = dictionary.bos() - self.unk = dictionary.unk() - self.eos = dictionary.eos() - self.pool_out = Linear(self.output_embed_dim * 2, self.output_embed_dim) - - self.label_tau = getattr(args, "label_tau", None) - - @ensemble_decoder - def forward_word_ins(self, normalize, encoder_out, prev_output_tokens): - features = self.extract_features(prev_output_tokens, encoder_out=encoder_out)[0] - features = self.pool_out( - torch.cat([features[:, :-1, :], features[:, 1:, :]], 2) - ) - decoder_out = self.output_layer(features) - return F.log_softmax(decoder_out, -1) if normalize else decoder_out - - def forward_mask_ins(self, *args, **kwargs): - raise NotImplementedError - - def forward_word_del(self, *args, **kwargs): - raise NotImplementedError - - -@register_model_architecture("insertion_transformer", "insertion_transformer") -def insertion_base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = 
getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.apply_bert_init = getattr(args, "apply_bert_init", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - # special for insertion transformer - args.label_tau = getattr(args, "label_tau", None) diff --git a/kosmos-g/fairseq/fairseq/models/nat/iterative_nonautoregressive_transformer.py b/kosmos-g/fairseq/fairseq/models/nat/iterative_nonautoregressive_transformer.py deleted file mode 100644 index bc3950998..000000000 --- a/kosmos-g/fairseq/fairseq/models/nat/iterative_nonautoregressive_transformer.py +++ /dev/null @@ -1,228 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch -from fairseq.models import register_model, register_model_architecture -from fairseq.models.nat import NATransformerModel - - -def _sequential_poisoning(s, V, beta=0.33, bos=2, eos=3, pad=1): - # s: input batch - # V: vocabulary size - rand_words = torch.randint(low=4, high=V, size=s.size(), device=s.device) - choices = torch.rand(size=s.size(), device=s.device) - choices.masked_fill_((s == pad) | (s == bos) | (s == eos), 1) - - replace = choices < beta / 3 - repeat = (choices >= beta / 3) & (choices < beta * 2 / 3) - swap = (choices >= beta * 2 / 3) & (choices < beta) - safe = choices >= beta - - for i in range(s.size(1) - 1): - rand_word = rand_words[:, i] - next_word = s[:, i + 1] - self_word = s[:, i] - - replace_i = replace[:, i] - swap_i = swap[:, i] & (next_word != 3) - repeat_i = repeat[:, i] & (next_word != 3) - safe_i = safe[:, i] | ((next_word == 3) & (~replace_i)) - - s[:, i] = ( - self_word * (safe_i | repeat_i).long() - + next_word * swap_i.long() - + rand_word * replace_i.long() - ) - s[:, i + 1] = ( - next_word * (safe_i | replace_i).long() - + self_word * (swap_i | repeat_i).long() - ) - return s - - -def gumbel_noise(input, TINY=1e-8): - return ( - input.new_zeros(*input.size()) - .uniform_() - .add_(TINY) - .log_() - .neg_() - .add_(TINY) - .log_() - .neg_() - ) - - -@register_model("iterative_nonautoregressive_transformer") -class IterNATransformerModel(NATransformerModel): - @staticmethod - def add_args(parser): - NATransformerModel.add_args(parser) - parser.add_argument( - "--train-step", - type=int, - help="number of refinement iterations during training", - ) - parser.add_argument( - "--dae-ratio", - type=float, - help="the probability of switching to the denoising auto-encoder loss", - ) - parser.add_argument( - "--stochastic-approx", - action="store_true", - help="sampling from the decoder as the inputs for next iteration", - ) - - @classmethod - def build_model(cls, args, task): - model = super().build_model(args, task) - model.train_step = getattr(args, "train_step", 4) - model.dae_ratio = getattr(args, "dae_ratio", 0.5) - model.stochastic_approx = getattr(args, "stochastic_approx", False) - return model - - def forward( - self, src_tokens, src_lengths, prev_output_tokens, tgt_tokens, **kwargs - ): - - B, T = prev_output_tokens.size() - - # encoding - encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) - - # length prediction - length_out = self.decoder.forward_length( - normalize=False, encoder_out=encoder_out - ) - length_tgt = self.decoder.forward_length_prediction( - length_out, encoder_out, tgt_tokens - ) - - # decoding - word_ins_outs, word_ins_tgts, word_ins_masks = [], [], [] - for t in range(self.train_step): - word_ins_out = self.decoder( - normalize=False, - prev_output_tokens=prev_output_tokens, - encoder_out=encoder_out, - step=t, - ) - word_ins_tgt = tgt_tokens - word_ins_mask = word_ins_tgt.ne(self.pad) - - word_ins_outs.append(word_ins_out) - word_ins_tgts.append(word_ins_tgt) - word_ins_masks.append(word_ins_mask) - - if t < (self.train_step - 1): - # prediction for next iteration - if self.stochastic_approx: - word_ins_prediction = ( - word_ins_out + gumbel_noise(word_ins_out) - ).max(-1)[1] - else: - word_ins_prediction = word_ins_out.max(-1)[1] - - prev_output_tokens = prev_output_tokens.masked_scatter( - word_ins_mask, word_ins_prediction[word_ins_mask] - ) - - if self.dae_ratio > 0: - # we do not perform denoising for the first iteration - corrputed = ( - torch.rand(size=(B,), 
device=prev_output_tokens.device) - < self.dae_ratio - ) - corrputed_tokens = _sequential_poisoning( - tgt_tokens[corrputed], - len(self.tgt_dict), - 0.33, - self.bos, - self.eos, - self.pad, - ) - prev_output_tokens[corrputed] = corrputed_tokens - - # concat everything - word_ins_out = torch.cat(word_ins_outs, 0) - word_ins_tgt = torch.cat(word_ins_tgts, 0) - word_ins_mask = torch.cat(word_ins_masks, 0) - - return { - "word_ins": { - "out": word_ins_out, - "tgt": word_ins_tgt, - "mask": word_ins_mask, - "ls": self.args.label_smoothing, - "nll_loss": True, - }, - "length": { - "out": length_out, - "tgt": length_tgt, - "factor": self.decoder.length_loss_factor, - }, - } - - -@register_model_architecture( - "iterative_nonautoregressive_transformer", "iterative_nonautoregressive_transformer" -) -def inat_base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.apply_bert_init = getattr(args, "apply_bert_init", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - # --- special arguments --- - args.sg_length_pred = getattr(args, "sg_length_pred", False) - args.pred_length_offset = getattr(args, "pred_length_offset", False) - args.length_loss_factor = getattr(args, "length_loss_factor", 0.1) - args.ngram_predictor = getattr(args, "ngram_predictor", 1) - args.src_embedding_copy = getattr(args, "src_embedding_copy", False) - - args.train_step = getattr(args, "train_step", 4) - args.dae_ratio = getattr(args, "dae_ratio", 0.5) - args.stochastic_approx = getattr(args, "stochastic_approx", False) - - -@register_model_architecture( - "iterative_nonautoregressive_transformer", - 
"iterative_nonautoregressive_transformer_wmt_en_de", -) -def iter_nat_wmt_en_de(args): - inat_base_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/nat/levenshtein_transformer.py b/kosmos-g/fairseq/fairseq/models/nat/levenshtein_transformer.py deleted file mode 100644 index d60d3c52d..000000000 --- a/kosmos-g/fairseq/fairseq/models/nat/levenshtein_transformer.py +++ /dev/null @@ -1,510 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq.iterative_refinement_generator import DecoderOut -from fairseq.models import register_model, register_model_architecture -from fairseq.models.nat import FairseqNATDecoder, FairseqNATModel, ensemble_decoder -from fairseq.models.transformer import Embedding -from fairseq.modules import TransformerDecoderLayer -from fairseq.modules.transformer_sentence_encoder import init_bert_params - -from .levenshtein_utils import ( - _apply_del_words, - _apply_ins_masks, - _apply_ins_words, - _fill, - _get_del_targets, - _get_ins_targets, - _skip, - _skip_encoder_out, -) - - -@register_model("levenshtein_transformer") -class LevenshteinTransformerModel(FairseqNATModel): - @property - def allow_length_beam(self): - return False - - @staticmethod - def add_args(parser): - FairseqNATModel.add_args(parser) - parser.add_argument( - "--early-exit", - default="6,6,6", - type=str, - help="number of decoder layers before word_del, mask_ins, word_ins", - ) - parser.add_argument( - "--no-share-discriminator", - action="store_true", - help="separate parameters for discriminator", - ) - parser.add_argument( - "--no-share-maskpredictor", - action="store_true", - help="separate parameters for mask-predictor", - ) - parser.add_argument( - "--share-discriminator-maskpredictor", - action="store_true", - help="share the parameters for both mask-predictor and discriminator", - ) - parser.add_argument( - "--sampling-for-deletion", - action="store_true", - help="instead of argmax, use sampling to predict the tokens", - ) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - decoder = LevenshteinTransformerDecoder(args, tgt_dict, embed_tokens) - if getattr(args, "apply_bert_init", False): - decoder.apply(init_bert_params) - return decoder - - def forward( - self, src_tokens, src_lengths, prev_output_tokens, tgt_tokens, **kwargs - ): - - assert tgt_tokens is not None, "forward function only supports training." 
- - # encoding - encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) - - # generate training labels for insertion - masked_tgt_masks, masked_tgt_tokens, mask_ins_targets = _get_ins_targets( - prev_output_tokens, tgt_tokens, self.pad, self.unk - ) - mask_ins_targets = mask_ins_targets.clamp(min=0, max=255) # for safe prediction - mask_ins_masks = prev_output_tokens[:, 1:].ne(self.pad) - - mask_ins_out, _ = self.decoder.forward_mask_ins( - normalize=False, - prev_output_tokens=prev_output_tokens, - encoder_out=encoder_out, - ) - word_ins_out, _ = self.decoder.forward_word_ins( - normalize=False, - prev_output_tokens=masked_tgt_tokens, - encoder_out=encoder_out, - ) - - # make online prediction - if self.decoder.sampling_for_deletion: - word_predictions = torch.multinomial( - F.softmax(word_ins_out, -1).view(-1, word_ins_out.size(-1)), 1 - ).view(word_ins_out.size(0), -1) - else: - word_predictions = F.log_softmax(word_ins_out, dim=-1).max(2)[1] - - word_predictions.masked_scatter_( - ~masked_tgt_masks, tgt_tokens[~masked_tgt_masks] - ) - - # generate training labels for deletion - word_del_targets = _get_del_targets(word_predictions, tgt_tokens, self.pad) - word_del_out, _ = self.decoder.forward_word_del( - normalize=False, - prev_output_tokens=word_predictions, - encoder_out=encoder_out, - ) - word_del_masks = word_predictions.ne(self.pad) - - return { - "mask_ins": { - "out": mask_ins_out, - "tgt": mask_ins_targets, - "mask": mask_ins_masks, - "ls": 0.01, - }, - "word_ins": { - "out": word_ins_out, - "tgt": tgt_tokens, - "mask": masked_tgt_masks, - "ls": self.args.label_smoothing, - "nll_loss": True, - }, - "word_del": { - "out": word_del_out, - "tgt": word_del_targets, - "mask": word_del_masks, - }, - } - - def forward_decoder( - self, decoder_out, encoder_out, eos_penalty=0.0, max_ratio=None, **kwargs - ): - - output_tokens = decoder_out.output_tokens - output_scores = decoder_out.output_scores - attn = decoder_out.attn - history = decoder_out.history - - bsz = output_tokens.size(0) - if max_ratio is None: - max_lens = torch.zeros_like(output_tokens).fill_(255) - else: - if not encoder_out["encoder_padding_mask"]: - max_src_len = encoder_out["encoder_out"].size(0) - src_lens = encoder_out["encoder_out"].new(bsz).fill_(max_src_len) - else: - src_lens = (~encoder_out["encoder_padding_mask"][0]).sum(1) - max_lens = (src_lens * max_ratio).clamp(min=10).long() - - # delete words - # do not delete tokens if it is <s> </s> - can_del_word = output_tokens.ne(self.pad).sum(1) > 2 - if can_del_word.sum() != 0: # we cannot delete, skip - word_del_score, word_del_attn = self.decoder.forward_word_del( - normalize=True, - prev_output_tokens=_skip(output_tokens, can_del_word), - encoder_out=_skip_encoder_out(self.encoder, encoder_out, can_del_word), - ) - word_del_pred = word_del_score.max(-1)[1].bool() - - _tokens, _scores, _attn = _apply_del_words( - output_tokens[can_del_word], - output_scores[can_del_word], - word_del_attn, - word_del_pred, - self.pad, - self.bos, - self.eos, - ) - output_tokens = _fill(output_tokens, can_del_word, _tokens, self.pad) - output_scores = _fill(output_scores, can_del_word, _scores, 0) - attn = _fill(attn, can_del_word, _attn, 0.0) - - if history is not None: - history.append(output_tokens.clone()) - - # insert placeholders - can_ins_mask = output_tokens.ne(self.pad).sum(1) < max_lens - if can_ins_mask.sum() != 0: - mask_ins_score, _ = self.decoder.forward_mask_ins( - normalize=True, - prev_output_tokens=_skip(output_tokens, can_ins_mask), - 
encoder_out=_skip_encoder_out(self.encoder, encoder_out, can_ins_mask), - ) - if eos_penalty > 0.0: - mask_ins_score[:, :, 0] = mask_ins_score[:, :, 0] - eos_penalty - mask_ins_pred = mask_ins_score.max(-1)[1] - mask_ins_pred = torch.min( - mask_ins_pred, max_lens[can_ins_mask, None].expand_as(mask_ins_pred) - ) - - _tokens, _scores = _apply_ins_masks( - output_tokens[can_ins_mask], - output_scores[can_ins_mask], - mask_ins_pred, - self.pad, - self.unk, - self.eos, - ) - output_tokens = _fill(output_tokens, can_ins_mask, _tokens, self.pad) - output_scores = _fill(output_scores, can_ins_mask, _scores, 0) - - if history is not None: - history.append(output_tokens.clone()) - - # insert words - can_ins_word = output_tokens.eq(self.unk).sum(1) > 0 - if can_ins_word.sum() != 0: - word_ins_score, word_ins_attn = self.decoder.forward_word_ins( - normalize=True, - prev_output_tokens=_skip(output_tokens, can_ins_word), - encoder_out=_skip_encoder_out(self.encoder, encoder_out, can_ins_word), - ) - word_ins_score, word_ins_pred = word_ins_score.max(-1) - _tokens, _scores = _apply_ins_words( - output_tokens[can_ins_word], - output_scores[can_ins_word], - word_ins_pred, - word_ins_score, - self.unk, - ) - - output_tokens = _fill(output_tokens, can_ins_word, _tokens, self.pad) - output_scores = _fill(output_scores, can_ins_word, _scores, 0) - attn = _fill(attn, can_ins_word, word_ins_attn, 0.0) - - if history is not None: - history.append(output_tokens.clone()) - - # delete some unnecessary paddings - cut_off = output_tokens.ne(self.pad).sum(1).max() - output_tokens = output_tokens[:, :cut_off] - output_scores = output_scores[:, :cut_off] - attn = None if attn is None else attn[:, :cut_off, :] - - return decoder_out._replace( - output_tokens=output_tokens, - output_scores=output_scores, - attn=attn, - history=history, - ) - - def initialize_output_tokens(self, encoder_out, src_tokens): - initial_output_tokens = src_tokens.new_zeros(src_tokens.size(0), 2) - initial_output_tokens[:, 0] = self.bos - initial_output_tokens[:, 1] = self.eos - - initial_output_scores = initial_output_tokens.new_zeros( - *initial_output_tokens.size() - ).type_as(encoder_out["encoder_out"][0]) - - return DecoderOut( - output_tokens=initial_output_tokens, - output_scores=initial_output_scores, - attn=None, - step=0, - max_step=0, - history=None, - ) - - -class LevenshteinTransformerDecoder(FairseqNATDecoder): - def __init__(self, args, dictionary, embed_tokens, no_encoder_attn=False): - super().__init__( - args, dictionary, embed_tokens, no_encoder_attn=no_encoder_attn - ) - self.dictionary = dictionary - self.bos = dictionary.bos() - self.unk = dictionary.unk() - self.eos = dictionary.eos() - self.sampling_for_deletion = getattr(args, "sampling_for_deletion", False) - self.embed_mask_ins = Embedding(256, self.output_embed_dim * 2, None) - self.embed_word_del = Embedding(2, self.output_embed_dim, None) - - # del_word, ins_mask, ins_word - self.early_exit = [int(i) for i in args.early_exit.split(",")] - assert len(self.early_exit) == 3 - - # copy layers for mask-predict/deletion - self.layers_msk = None - if getattr(args, "no_share_maskpredictor", False): - self.layers_msk = nn.ModuleList( - [ - TransformerDecoderLayer(args, no_encoder_attn) - for _ in range(self.early_exit[1]) - ] - ) - self.layers_del = None - if getattr(args, "no_share_discriminator", False): - self.layers_del = nn.ModuleList( - [ - TransformerDecoderLayer(args, no_encoder_attn) - for _ in range(self.early_exit[0]) - ] - ) - - if getattr(args, 
"share_discriminator_maskpredictor", False): - assert getattr( - args, "no_share_discriminator", False - ), "must set saperate discriminator" - self.layers_msk = self.layers_del - - def extract_features( - self, - prev_output_tokens, - encoder_out=None, - early_exit=None, - layers=None, - **unused - ): - """ - Similar to *forward* but only return features. - Inputs: - prev_output_tokens: Tensor(B, T) - encoder_out: a dictionary of hidden states and masks - - Returns: - tuple: - - the decoder's features of shape `(batch, tgt_len, embed_dim)` - - a dictionary with any model-specific outputs - the LevenshteinTransformer decoder has full-attention to all generated tokens - """ - # embed positions - positions = ( - self.embed_positions(prev_output_tokens) - if self.embed_positions is not None - else None - ) - - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - attn = None - inner_states = [x] - - # decoder layers - decoder_padding_mask = prev_output_tokens.eq(self.padding_idx) - layers = self.layers if layers is None else layers - early_exit = len(layers) if early_exit is None else early_exit - for _, layer in enumerate(layers[:early_exit]): - x, attn, _ = layer( - x, - encoder_out["encoder_out"][0] - if (encoder_out is not None and len(encoder_out["encoder_out"]) > 0) - else None, - encoder_out["encoder_padding_mask"][0] - if ( - encoder_out is not None - and len(encoder_out["encoder_padding_mask"]) > 0 - ) - else None, - self_attn_mask=None, - self_attn_padding_mask=decoder_padding_mask, - ) - inner_states.append(x) - - if self.layer_norm: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - if self.project_out_dim is not None: - x = self.project_out_dim(x) - - return x, {"attn": attn, "inner_states": inner_states} - - @ensemble_decoder - def forward_mask_ins(self, normalize, encoder_out, prev_output_tokens, **unused): - features, extra = self.extract_features( - prev_output_tokens, - encoder_out=encoder_out, - early_exit=self.early_exit[1], - layers=self.layers_msk, - **unused - ) - features_cat = torch.cat([features[:, :-1, :], features[:, 1:, :]], 2) - decoder_out = F.linear(features_cat, self.embed_mask_ins.weight) - if normalize: - return F.log_softmax(decoder_out, -1), extra["attn"] - return decoder_out, extra["attn"] - - @ensemble_decoder - def forward_word_ins(self, normalize, encoder_out, prev_output_tokens, **unused): - features, extra = self.extract_features( - prev_output_tokens, - encoder_out=encoder_out, - early_exit=self.early_exit[2], - layers=self.layers, - **unused - ) - decoder_out = self.output_layer(features) - if normalize: - return F.log_softmax(decoder_out, -1), extra["attn"] - return decoder_out, extra["attn"] - - @ensemble_decoder - def forward_word_del(self, normalize, encoder_out, prev_output_tokens, **unused): - features, extra = self.extract_features( - prev_output_tokens, - encoder_out=encoder_out, - early_exit=self.early_exit[0], - layers=self.layers_del, - **unused - ) - decoder_out = F.linear(features, self.embed_word_del.weight) - if normalize: - return F.log_softmax(decoder_out, -1), extra["attn"] - return decoder_out, extra["attn"] - - -@register_model_architecture("levenshtein_transformer", "levenshtein_transformer") -def levenshtein_base_architecture(args): - args.encoder_embed_path = 
getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.apply_bert_init = getattr(args, "apply_bert_init", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.sampling_for_deletion = getattr(args, "sampling_for_deletion", False) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - args.early_exit = getattr(args, "early_exit", "6,6,6") - args.no_share_discriminator = getattr(args, "no_share_discriminator", False) - args.no_share_maskpredictor = getattr(args, "no_share_maskpredictor", False) - args.share_discriminator_maskpredictor = getattr( - args, "share_discriminator_maskpredictor", False - ) - args.no_share_last_layer = getattr(args, "no_share_last_layer", False) - - -@register_model_architecture( - "levenshtein_transformer", "levenshtein_transformer_wmt_en_de" -) -def levenshtein_transformer_wmt_en_de(args): - levenshtein_base_architecture(args) - - -# similar parameters used in the "Attention Is All You Need" paper (Vaswani et al., 2017) -@register_model_architecture( - "levenshtein_transformer", "levenshtein_transformer_vaswani_wmt_en_de_big" -) -def levenshtein_transformer_vaswani_wmt_en_de_big(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.3) - 
levenshtein_base_architecture(args) - - -# default parameters used in tensor2tensor implementation -@register_model_architecture( - "levenshtein_transformer", "levenshtein_transformer_wmt_en_de_big" -) -def levenshtein_transformer_wmt_en_de_big_t2t(args): - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.activation_dropout = getattr(args, "activation_dropout", 0.1) - levenshtein_transformer_vaswani_wmt_en_de_big(args) diff --git a/kosmos-g/fairseq/fairseq/models/nat/levenshtein_utils.py b/kosmos-g/fairseq/fairseq/models/nat/levenshtein_utils.py deleted file mode 100644 index 375a98c2e..000000000 --- a/kosmos-g/fairseq/fairseq/models/nat/levenshtein_utils.py +++ /dev/null @@ -1,293 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from fairseq.utils import new_arange - - -# -------------- Helper Functions --------------------------------------------------- # - - -def load_libnat(): - try: - from fairseq import libnat_cuda - - return libnat_cuda, True - - except ImportError as e: - print(str(e) + "... fall back to CPU version") - - try: - from fairseq import libnat - - return libnat, False - - except ImportError as e: - import sys - - sys.stderr.write( - "ERROR: missing libnat_cuda. run `python setup.py build_ext --inplace`\n" - ) - raise e - - -def _get_ins_targets(in_tokens, out_tokens, padding_idx, unk_idx): - libnat, use_cuda = load_libnat() - - def _get_ins_targets_cuda(in_tokens, out_tokens, padding_idx, unk_idx): - in_masks = in_tokens.ne(padding_idx) - out_masks = out_tokens.ne(padding_idx) - mask_ins_targets, masked_tgt_masks = libnat.generate_insertion_labels( - out_tokens.int(), - libnat.levenshtein_distance( - in_tokens.int(), - out_tokens.int(), - in_masks.sum(1).int(), - out_masks.sum(1).int(), - ), - ) - masked_tgt_masks = masked_tgt_masks.bool() & out_masks - mask_ins_targets = mask_ins_targets.type_as(in_tokens)[ - :, 1 : in_masks.size(1) - ].masked_fill_(~in_masks[:, 1:], 0) - masked_tgt_tokens = out_tokens.masked_fill(masked_tgt_masks, unk_idx) - return masked_tgt_masks, masked_tgt_tokens, mask_ins_targets - - def _get_ins_targets_cpu(in_tokens, out_tokens, padding_idx, unk_idx): - in_seq_len, out_seq_len = in_tokens.size(1), out_tokens.size(1) - - in_tokens_list = [ - [t for t in s if t != padding_idx] for i, s in enumerate(in_tokens.tolist()) - ] - out_tokens_list = [ - [t for t in s if t != padding_idx] - for i, s in enumerate(out_tokens.tolist()) - ] - - full_labels = libnat.suggested_ed2_path( - in_tokens_list, out_tokens_list, padding_idx - ) - mask_inputs = [ - [len(c) if c[0] != padding_idx else 0 for c in a[:-1]] for a in full_labels - ] - - # generate labels - masked_tgt_masks = [] - for mask_input in mask_inputs: - mask_label = [] - for beam_size in mask_input[1:-1]: # HACK 1:-1 - mask_label += [0] + [1 for _ in range(beam_size)] - masked_tgt_masks.append( - mask_label + [0 for _ in range(out_seq_len - len(mask_label))] - ) - mask_ins_targets = [ - mask_input[1:-1] - + [0 for _ in range(in_seq_len - 1 - len(mask_input[1:-1]))] - for mask_input in mask_inputs - ] - - # transform to tensor - masked_tgt_masks = torch.tensor( - masked_tgt_masks, device=out_tokens.device - ).bool() - mask_ins_targets = 
torch.tensor(mask_ins_targets, device=in_tokens.device) - masked_tgt_tokens = out_tokens.masked_fill(masked_tgt_masks, unk_idx) - return masked_tgt_masks, masked_tgt_tokens, mask_ins_targets - - if use_cuda: - return _get_ins_targets_cuda(in_tokens, out_tokens, padding_idx, unk_idx) - return _get_ins_targets_cpu(in_tokens, out_tokens, padding_idx, unk_idx) - - -def _get_del_targets(in_tokens, out_tokens, padding_idx): - libnat, use_cuda = load_libnat() - - def _get_del_targets_cuda(in_tokens, out_tokens, padding_idx): - in_masks = in_tokens.ne(padding_idx) - out_masks = out_tokens.ne(padding_idx) - - word_del_targets = libnat.generate_deletion_labels( - in_tokens.int(), - libnat.levenshtein_distance( - in_tokens.int(), - out_tokens.int(), - in_masks.sum(1).int(), - out_masks.sum(1).int(), - ), - ) - word_del_targets = word_del_targets.type_as(in_tokens).masked_fill_( - ~in_masks, 0 - ) - return word_del_targets - - def _get_del_targets_cpu(in_tokens, out_tokens, padding_idx): - out_seq_len = out_tokens.size(1) - with torch.cuda.device_of(in_tokens): - in_tokens_list = [ - [t for t in s if t != padding_idx] - for i, s in enumerate(in_tokens.tolist()) - ] - out_tokens_list = [ - [t for t in s if t != padding_idx] - for i, s in enumerate(out_tokens.tolist()) - ] - - full_labels = libnat.suggested_ed2_path( - in_tokens_list, out_tokens_list, padding_idx - ) - word_del_targets = [b[-1] for b in full_labels] - word_del_targets = [ - labels + [0 for _ in range(out_seq_len - len(labels))] - for labels in word_del_targets - ] - - # transform to tensor - word_del_targets = torch.tensor(word_del_targets, device=out_tokens.device) - return word_del_targets - - if use_cuda: - return _get_del_targets_cuda(in_tokens, out_tokens, padding_idx) - return _get_del_targets_cpu(in_tokens, out_tokens, padding_idx) - - -def _apply_ins_masks( - in_tokens, in_scores, mask_ins_pred, padding_idx, unk_idx, eos_idx -): - - in_masks = in_tokens.ne(padding_idx) - in_lengths = in_masks.sum(1) - - # HACK: hacky way to shift all the paddings to eos first. 
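# --- Editor's note: worked example, not part of the original file. ---
# The rest of _apply_ins_masks stretches the canvas: a cumulative sum of
# (predicted insertions + 1 surviving token) gives every input token its slot
# in the longer output, and the gaps in between stay <unk> placeholders.
import torch

in_tokens = torch.tensor([[0, 5, 6, 2]])        # <s> a b </s>, no padding
mask_ins_pred = torch.tensor([[2, 0, 1]])       # insertions per gap
reordering = (mask_ins_pred + 1).cumsum(1)      # tensor([[3, 4, 6]])
out = torch.full((1, int(reordering.max()) + 1), 3)   # 3 = <unk> here
out[:, 0] = in_tokens[:, 0]
out.scatter_(1, reordering, in_tokens[:, 1:])
print(out)  # tensor([[0, 3, 3, 5, 6, 3, 2]]) -> <s> _ _ a b _ </s>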
- in_tokens.masked_fill_(~in_masks, eos_idx) - mask_ins_pred.masked_fill_(~in_masks[:, 1:], 0) - - out_lengths = in_lengths + mask_ins_pred.sum(1) - out_max_len = out_lengths.max() - out_masks = new_arange(out_lengths, out_max_len)[None, :] < out_lengths[:, None] - - reordering = (mask_ins_pred + in_masks[:, 1:].long()).cumsum(1) - out_tokens = ( - in_tokens.new_zeros(in_tokens.size(0), out_max_len) - .fill_(padding_idx) - .masked_fill_(out_masks, unk_idx) - ) - out_tokens[:, 0] = in_tokens[:, 0] - out_tokens.scatter_(1, reordering, in_tokens[:, 1:]) - - out_scores = None - if in_scores is not None: - in_scores.masked_fill_(~in_masks, 0) - out_scores = in_scores.new_zeros(*out_tokens.size()) - out_scores[:, 0] = in_scores[:, 0] - out_scores.scatter_(1, reordering, in_scores[:, 1:]) - - return out_tokens, out_scores - - -def _apply_ins_words(in_tokens, in_scores, word_ins_pred, word_ins_scores, unk_idx): - word_ins_masks = in_tokens.eq(unk_idx) - out_tokens = in_tokens.masked_scatter(word_ins_masks, word_ins_pred[word_ins_masks]) - - if in_scores is not None: - out_scores = in_scores.masked_scatter( - word_ins_masks, word_ins_scores[word_ins_masks] - ) - else: - out_scores = None - - return out_tokens, out_scores - - -def _apply_del_words( - in_tokens, in_scores, in_attn, word_del_pred, padding_idx, bos_idx, eos_idx -): - # apply deletion to a tensor - in_masks = in_tokens.ne(padding_idx) - bos_eos_masks = in_tokens.eq(bos_idx) | in_tokens.eq(eos_idx) - - max_len = in_tokens.size(1) - word_del_pred.masked_fill_(~in_masks, 1) - word_del_pred.masked_fill_(bos_eos_masks, 0) - - reordering = new_arange(in_tokens).masked_fill_(word_del_pred, max_len).sort(1)[1] - - out_tokens = in_tokens.masked_fill(word_del_pred, padding_idx).gather(1, reordering) - - out_scores = None - if in_scores is not None: - out_scores = in_scores.masked_fill(word_del_pred, 0).gather(1, reordering) - - out_attn = None - if in_attn is not None: - _mask = word_del_pred[:, :, None].expand_as(in_attn) - _reordering = reordering[:, :, None].expand_as(in_attn) - out_attn = in_attn.masked_fill(_mask, 0.0).gather(1, _reordering) - - return out_tokens, out_scores, out_attn - - -def _skip(x, mask): - """ - Getting sliced (dim=0) tensor by mask. Supporting tensor and list/dict of tensors. - """ - if isinstance(x, int): - return x - - if x is None: - return None - - if isinstance(x, torch.Tensor): - if x.size(0) == mask.size(0): - return x[mask] - elif x.size(1) == mask.size(0): - return x[:, mask] - - if isinstance(x, list): - return [_skip(x_i, mask) for x_i in x] - - if isinstance(x, dict): - return {k: _skip(v, mask) for k, v in x.items()} - - raise NotImplementedError - - -def _skip_encoder_out(encoder, encoder_out, mask): - if not mask.any(): - return encoder_out - else: - return encoder.reorder_encoder_out( - encoder_out, mask.nonzero(as_tuple=False).squeeze() - ) - - -def _fill(x, mask, y, padding_idx): - """ - Filling tensor x with y at masked positions (dim=0). 
- """ - if x is None: - return y - assert x.dim() == y.dim() and mask.size(0) == x.size(0) - assert x.dim() == 2 or (x.dim() == 3 and x.size(2) == y.size(2)) - n_selected = mask.sum() - assert n_selected == y.size(0) - - if n_selected == x.size(0): - return y - - if x.size(1) < y.size(1): - dims = [x.size(0), y.size(1) - x.size(1)] - if x.dim() == 3: - dims.append(x.size(2)) - x = torch.cat([x, x.new_zeros(*dims).fill_(padding_idx)], 1) - x[mask] = y - elif x.size(1) > y.size(1): - x[mask] = padding_idx - if x.dim() == 2: - x[mask, : y.size(1)] = y - else: - x[mask, : y.size(1), :] = y - else: - x[mask] = y - return x diff --git a/kosmos-g/fairseq/fairseq/models/nat/nat_crf_transformer.py b/kosmos-g/fairseq/fairseq/models/nat/nat_crf_transformer.py deleted file mode 100644 index d4b3cd931..000000000 --- a/kosmos-g/fairseq/fairseq/models/nat/nat_crf_transformer.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from fairseq.models import register_model, register_model_architecture -from fairseq.models.nat import NATransformerModel, base_architecture -from fairseq.modules import DynamicCRF - - -@register_model("nacrf_transformer") -class NACRFTransformerModel(NATransformerModel): - def __init__(self, args, encoder, decoder): - super().__init__(args, encoder, decoder) - self.crf_layer = DynamicCRF( - num_embedding=len(self.tgt_dict), - low_rank=args.crf_lowrank_approx, - beam_size=args.crf_beam_approx, - ) - - @property - def allow_ensemble(self): - return False - - @staticmethod - def add_args(parser): - NATransformerModel.add_args(parser) - parser.add_argument( - "--crf-lowrank-approx", - type=int, - help="the dimension of low-rank approximation of transition", - ) - parser.add_argument( - "--crf-beam-approx", - type=int, - help="the beam size for apporixmating the normalizing factor", - ) - parser.add_argument( - "--word-ins-loss-factor", - type=float, - help="weights on NAT loss used to co-training with CRF loss.", - ) - - def forward( - self, src_tokens, src_lengths, prev_output_tokens, tgt_tokens, **kwargs - ): - # encoding - encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) - - # length prediction - length_out = self.decoder.forward_length( - normalize=False, encoder_out=encoder_out - ) - length_tgt = self.decoder.forward_length_prediction( - length_out, encoder_out, tgt_tokens - ) - - # decoding - word_ins_out = self.decoder( - normalize=False, - prev_output_tokens=prev_output_tokens, - encoder_out=encoder_out, - ) - word_ins_tgt, word_ins_mask = tgt_tokens, tgt_tokens.ne(self.pad) - - # compute the log-likelihood of CRF - crf_nll = -self.crf_layer(word_ins_out, word_ins_tgt, word_ins_mask) - crf_nll = (crf_nll / word_ins_mask.type_as(crf_nll).sum(-1)).mean() - - return { - "word_ins": { - "out": word_ins_out, - "tgt": word_ins_tgt, - "mask": word_ins_mask, - "ls": self.args.label_smoothing, - "nll_loss": True, - "factor": self.args.word_ins_loss_factor, - }, - "word_crf": {"loss": crf_nll}, - "length": { - "out": length_out, - "tgt": length_tgt, - "factor": self.decoder.length_loss_factor, - }, - } - - def forward_decoder(self, decoder_out, encoder_out, decoding_format=None, **kwargs): - output_tokens = decoder_out.output_tokens - output_scores = decoder_out.output_scores - history = decoder_out.history - - # execute the decoder and get emission scores - output_masks = 
output_tokens.ne(self.pad) - word_ins_out = self.decoder( - normalize=False, prev_output_tokens=output_tokens, encoder_out=encoder_out - ) - - # run viterbi decoding through CRF - _scores, _tokens = self.crf_layer.forward_decoder(word_ins_out, output_masks) - output_tokens.masked_scatter_(output_masks, _tokens[output_masks]) - output_scores.masked_scatter_(output_masks, _scores[output_masks]) - if history is not None: - history.append(output_tokens.clone()) - - return decoder_out._replace( - output_tokens=output_tokens, - output_scores=output_scores, - attn=None, - history=history, - ) - - -@register_model_architecture("nacrf_transformer", "nacrf_transformer") -def nacrf_base_architecture(args): - args.crf_lowrank_approx = getattr(args, "crf_lowrank_approx", 32) - args.crf_beam_approx = getattr(args, "crf_beam_approx", 64) - args.word_ins_loss_factor = getattr(args, "word_ins_loss_factor", 0.5) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - base_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/nat/nonautoregressive_ensembles.py b/kosmos-g/fairseq/fairseq/models/nat/nonautoregressive_ensembles.py deleted file mode 100644 index 0a0221f9c..000000000 --- a/kosmos-g/fairseq/fairseq/models/nat/nonautoregressive_ensembles.py +++ /dev/null @@ -1,254 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn.functional as F -from fairseq.models.nat import ( - _apply_del_words, - _apply_ins_masks, - _apply_ins_words, - _fill, - _skip, - _skip_encoder_out, -) - - -class _EnsembleModelEncoder(object): - def __init__(self, models): - self.models = models - - def reorder_encoder_out(self, encoder_outs, new_order): - encoder_outs = [ - model.encoder.reorder_encoder_out(encoder_out, new_order) - for model, encoder_out in zip(self.models, encoder_outs) - ] - return encoder_outs - - -class BasicEnsembleModel(torch.nn.Module): - """A wrapper around an ensemble of models.""" - - def __init__(self, models): - super().__init__() - self.models = torch.nn.ModuleList(models) - self.bos = self.models[0].decoder.dictionary.bos() - self.eos = self.models[0].decoder.dictionary.eos() - self.pad = self.models[0].decoder.dictionary.pad() - self.unk = self.models[0].decoder.dictionary.unk() - self.encoder = _EnsembleModelEncoder(self.models) - - def has_encoder(self): - return hasattr(self.models[0], "encoder") - - def max_decoder_positions(self): - return min(m.max_decoder_positions() for m in self.models) - - @torch.no_grad() - def forward_encoder(self, encoder_input): - if not self.has_encoder(): - return None - return [model.forward_encoder(encoder_input) for model in self.models] - - @torch.no_grad() - def forward_decoder(self, *inputs): - raise NotImplementedError - - def initialize_output_tokens(self, *inputs): - raise NotImplementedError - - -class EnsembleLevT(BasicEnsembleModel): - """A wrapper around an ensemble of models.""" - - def __init__(self, models): - super().__init__(models) - - @torch.no_grad() - def forward_decoder( - self, decoder_out, encoder_outs, eos_penalty=0.0, max_ratio=None, **kwargs - ): - # LevT ensembling - # A pipeline of three steps: deletion, placeholder, and word insertion. - # We need to average scores in each step in a pipeline way because of dependence. 
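# --- Editor's note: illustrative sketch, not part of the original file. ---
# Each pipeline step below averages the ensemble in probability space; given
# per-model log-probabilities this is computed stably as logsumexp - log(N),
# exactly the pattern used by forward_word_del / forward_mask_ins /
# forward_word_ins further down.
import math
import torch

log_p = [torch.log_softmax(torch.randn(4, 10), -1) for _ in range(3)]
avg_log_p = torch.logsumexp(torch.stack(log_p, 0), 0) - math.log(len(log_p))
assert torch.allclose(avg_log_p.exp().sum(-1), torch.ones(4))  # still a distribution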
- # deletion - output_tokens = decoder_out.output_tokens - output_scores = decoder_out.output_scores - attn = decoder_out.attn - - bsz = output_tokens.size(0) - if max_ratio is None: - max_lens = output_tokens.new().fill_(255) - else: - if not encoder_outs[0]["encoder_padding_mask"]: - src_lens = ( - encoder_outs[0]["encoder_out"][0] - .new(bsz) - .fill_(encoder_outs[0]["encoder_out"][0].size(1)) - ) - else: - src_lens = (~encoder_outs[0]["encoder_padding_mask"][0]).sum(1) - max_lens = (src_lens * max_ratio).clamp(min=10).long() - - # delete words - # do not delete tokens if it is <s> </s> - can_del_word = output_tokens.ne(self.pad).sum(1) > 2 - if can_del_word.sum() != 0: # we cannot delete, skip - output_tokens, output_scores, attn = self.forward_word_del( - encoder_outs, - output_tokens, - output_scores, - attn, - can_del_word, - ) - - # insert placeholders - can_ins_mask = output_tokens.ne(self.pad).sum(1) < max_lens - if can_ins_mask.sum() != 0: - output_tokens, output_scores = self.forward_mask_ins( - encoder_outs, - output_tokens, - output_scores, - can_ins_mask, - eos_penalty, - max_lens, - ) - - # insert words - can_ins_word = output_tokens.eq(self.unk).sum(1) > 0 - if can_ins_word.sum() != 0: - output_tokens, output_scores, attn = self.forward_word_ins( - encoder_outs, - output_tokens, - output_scores, - attn, - can_ins_word, - ) - - # delete some unnecessary paddings - cut_off = output_tokens.ne(self.pad).sum(1).max() - output_tokens = output_tokens[:, :cut_off] - output_scores = output_scores[:, :cut_off] - attn = None if attn is None else attn[:, :cut_off, :] - return decoder_out._replace( - output_tokens=output_tokens, - output_scores=output_scores, - attn=attn, - history=None, - ) - - def forward_word_del( - self, encoder_outs, output_tokens, output_scores, attn, can_del_word - ): - word_del_score_avg = [] - word_del_attn_avg = [] - for model, encoder_out in zip(self.models, encoder_outs): - word_del_out, word_del_attn = model.decoder.forward_word_del( - _skip(output_tokens, can_del_word), - _skip_encoder_out(model.encoder, encoder_out, can_del_word), - ) - word_del_score = F.log_softmax(word_del_out, 2) - word_del_score_avg.append(word_del_score) - word_del_attn_avg.append(word_del_attn) - word_del_score_avg = torch.logsumexp( - torch.stack(word_del_score_avg, dim=0), dim=0 - ) - math.log(len(self.models)) - word_del_pred = word_del_score_avg.max(-1)[1].bool() - if word_del_attn_avg[0] is not None: - word_del_attn_avg = torch.stack(word_del_attn_avg, dim=0) / len(self.models) - else: - word_del_attn_avg = None - - _tokens, _scores, _attn = _apply_del_words( - output_tokens[can_del_word], - output_scores[can_del_word], - word_del_attn_avg, - word_del_pred, - self.pad, - self.bos, - self.eos, - ) - output_tokens = _fill(output_tokens, can_del_word, _tokens, self.pad) - output_scores = _fill(output_scores, can_del_word, _scores, 0) - attn = _fill(attn, can_del_word, _attn, 0.0) - return output_tokens, output_scores, attn - - def forward_mask_ins( - self, - encoder_outs, - output_tokens, - output_scores, - can_ins_mask, - eos_penalty, - max_lens, - ): - mask_ins_score_avg = [] - for model, encoder_out in zip(self.models, encoder_outs): - mask_ins_out, _ = model.decoder.forward_mask_ins( - _skip(output_tokens, can_ins_mask), - _skip_encoder_out(model.encoder, encoder_out, can_ins_mask), - ) - mask_ins_score = F.log_softmax(mask_ins_out, 2) - if eos_penalty > 0.0: - mask_ins_score[:, :, 0] -= eos_penalty - mask_ins_score_avg.append(mask_ins_score) - mask_ins_score_avg = 
torch.logsumexp( - torch.stack(mask_ins_score_avg, dim=0), dim=0 - ) - math.log(len(self.models)) - mask_ins_pred = mask_ins_score_avg.max(-1)[1] - mask_ins_pred = torch.min( - mask_ins_pred, max_lens[can_ins_mask, None].expand_as(mask_ins_pred) - ) - _tokens, _scores = _apply_ins_masks( - output_tokens[can_ins_mask], - output_scores[can_ins_mask], - mask_ins_pred, - self.pad, - self.unk, - self.eos, - ) - output_tokens = _fill(output_tokens, can_ins_mask, _tokens, self.pad) - output_scores = _fill(output_scores, can_ins_mask, _scores, 0) - return output_tokens, output_scores - - def forward_word_ins( - self, encoder_outs, output_tokens, output_scores, attn, can_ins_word - ): - word_ins_score_avg = [] - word_ins_attn_avg = [] - for model, encoder_out in zip(self.models, encoder_outs): - word_ins_out, word_ins_attn = model.decoder.forward_word_ins( - _skip(output_tokens, can_ins_word), - _skip_encoder_out(model.encoder, encoder_out, can_ins_word), - ) - word_ins_score = F.log_softmax(word_ins_out, 2) - word_ins_score_avg.append(word_ins_score) - word_ins_attn_avg.append(word_ins_attn) - word_ins_score_avg = torch.logsumexp( - torch.stack(word_ins_score_avg, dim=0), dim=0 - ) - math.log(len(self.models)) - if word_ins_attn_avg[0] is not None: - word_ins_attn_avg = torch.stack(word_ins_attn_avg, dim=0) / len(self.models) - else: - word_ins_attn_avg = None - word_ins_score_max, word_ins_pred = word_ins_score_avg.max(-1) - - _tokens, _scores = _apply_ins_words( - output_tokens[can_ins_word], - output_scores[can_ins_word], - word_ins_pred, - word_ins_score_max, - self.unk, - ) - - output_tokens = _fill(output_tokens, can_ins_word, _tokens, self.pad) - output_scores = _fill(output_scores, can_ins_word, _scores, 0) - attn = _fill(attn, can_ins_word, word_ins_attn, 0.0) - return output_tokens, output_scores, attn - - def initialize_output_tokens(self, encoder_outs, src_tokens): - # LevT doesn't do length prediction. - return self.models[0].initialize_output_tokens(encoder_outs[0], src_tokens) diff --git a/kosmos-g/fairseq/fairseq/models/nat/nonautoregressive_transformer.py b/kosmos-g/fairseq/fairseq/models/nat/nonautoregressive_transformer.py deleted file mode 100644 index d114202d2..000000000 --- a/kosmos-g/fairseq/fairseq/models/nat/nonautoregressive_transformer.py +++ /dev/null @@ -1,456 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
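# --- Editor's note: illustrative sketch, not part of the original file. ---
# `_uniform_assignment` below copies source embeddings onto the target canvas
# monotonically: target position j reads source index round(j * (S-1)/(T-1)),
# stretching or compressing the source to the predicted target length.
import torch

src_lens, trg_lens = torch.tensor([5.0]), torch.tensor([8.0])
steps = (src_lens - 1) / (trg_lens - 1)  # per-row step size
index_t = torch.round(steps[:, None] * torch.arange(8).float()[None, :]).long()
print(index_t)  # tensor([[0, 1, 1, 2, 2, 3, 3, 4]])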
- -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.iterative_refinement_generator import DecoderOut -from fairseq.models import register_model, register_model_architecture -from fairseq.models.nat import FairseqNATDecoder, FairseqNATModel, ensemble_decoder -from fairseq.models.transformer import Embedding -from fairseq.modules.transformer_sentence_encoder import init_bert_params - - -def _mean_pooling(enc_feats, src_masks): - # enc_feats: T x B x C - # src_masks: B x T or None - if src_masks is None: - enc_feats = enc_feats.mean(0) - else: - src_masks = (~src_masks).transpose(0, 1).type_as(enc_feats) - enc_feats = ( - (enc_feats / src_masks.sum(0)[None, :, None]) * src_masks[:, :, None] - ).sum(0) - return enc_feats - - -def _argmax(x, dim): - return (x == x.max(dim, keepdim=True)[0]).type_as(x) - - -def _uniform_assignment(src_lens, trg_lens): - max_trg_len = trg_lens.max() - steps = (src_lens.float() - 1) / (trg_lens.float() - 1) # step-size - # max_trg_len - index_t = utils.new_arange(trg_lens, max_trg_len).float() - index_t = steps[:, None] * index_t[None, :] # batch_size X max_trg_len - index_t = torch.round(index_t).long().detach() - return index_t - - -@register_model("nonautoregressive_transformer") -class NATransformerModel(FairseqNATModel): - @property - def allow_length_beam(self): - return True - - @staticmethod - def add_args(parser): - FairseqNATModel.add_args(parser) - - # length prediction - parser.add_argument( - "--src-embedding-copy", - action="store_true", - help="copy encoder word embeddings as the initial input of the decoder", - ) - parser.add_argument( - "--pred-length-offset", - action="store_true", - help="predicting the length difference between the target and source sentences", - ) - parser.add_argument( - "--sg-length-pred", - action="store_true", - help="stop the gradients back-propagated from the length predictor", - ) - parser.add_argument( - "--length-loss-factor", - type=float, - help="weights on the length prediction loss", - ) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - decoder = NATransformerDecoder(args, tgt_dict, embed_tokens) - if getattr(args, "apply_bert_init", False): - decoder.apply(init_bert_params) - return decoder - - def forward( - self, src_tokens, src_lengths, prev_output_tokens, tgt_tokens, **kwargs - ): - # encoding - encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) - - # length prediction - length_out = self.decoder.forward_length( - normalize=False, encoder_out=encoder_out - ) - length_tgt = self.decoder.forward_length_prediction( - length_out, encoder_out, tgt_tokens - ) - - # decoding - word_ins_out = self.decoder( - normalize=False, - prev_output_tokens=prev_output_tokens, - encoder_out=encoder_out, - ) - - return { - "word_ins": { - "out": word_ins_out, - "tgt": tgt_tokens, - "mask": tgt_tokens.ne(self.pad), - "ls": self.args.label_smoothing, - "nll_loss": True, - }, - "length": { - "out": length_out, - "tgt": length_tgt, - "factor": self.decoder.length_loss_factor, - }, - } - - def forward_decoder(self, decoder_out, encoder_out, decoding_format=None, **kwargs): - step = decoder_out.step - output_tokens = decoder_out.output_tokens - output_scores = decoder_out.output_scores - history = decoder_out.history - - # execute the decoder - output_masks = output_tokens.ne(self.pad) - _scores, _tokens = self.decoder( - normalize=True, - prev_output_tokens=output_tokens, - encoder_out=encoder_out, - step=step, - ).max(-1) - - 
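# --- Editor's note: illustrative sketch, not part of the original file. ---
# The two masked_scatter_ calls that follow implement one refinement step:
# every non-pad position of the canvas is overwritten in place with the
# decoder's new argmax token and its score, so each pass re-predicts the
# whole sequence. log_p here is a stand-in for the decoder output.
import torch

pad = 1
output_tokens = torch.tensor([[0, 3, 3, 2, pad]])      # <s> _ _ </s> <pad>
output_scores = torch.zeros(1, 5)
log_p = torch.log_softmax(torch.randn(1, 5, 10), -1)
_scores, _tokens = log_p.max(-1)
mask = output_tokens.ne(pad)
output_tokens.masked_scatter_(mask, _tokens[mask])     # pads are left alone
output_scores.masked_scatter_(mask, _scores[mask])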
output_tokens.masked_scatter_(output_masks, _tokens[output_masks]) - output_scores.masked_scatter_(output_masks, _scores[output_masks]) - if history is not None: - history.append(output_tokens.clone()) - - return decoder_out._replace( - output_tokens=output_tokens, - output_scores=output_scores, - attn=None, - history=history, - ) - - def initialize_output_tokens(self, encoder_out, src_tokens): - # length prediction - length_tgt = self.decoder.forward_length_prediction( - self.decoder.forward_length(normalize=True, encoder_out=encoder_out), - encoder_out=encoder_out, - ) - - max_length = length_tgt.clamp_(min=2).max() - idx_length = utils.new_arange(src_tokens, max_length) - - initial_output_tokens = src_tokens.new_zeros( - src_tokens.size(0), max_length - ).fill_(self.pad) - initial_output_tokens.masked_fill_( - idx_length[None, :] < length_tgt[:, None], self.unk - ) - initial_output_tokens[:, 0] = self.bos - initial_output_tokens.scatter_(1, length_tgt[:, None] - 1, self.eos) - - initial_output_scores = initial_output_tokens.new_zeros( - *initial_output_tokens.size() - ).type_as(encoder_out["encoder_out"][0]) - - return DecoderOut( - output_tokens=initial_output_tokens, - output_scores=initial_output_scores, - attn=None, - step=0, - max_step=0, - history=None, - ) - - def regenerate_length_beam(self, decoder_out, beam_size): - output_tokens = decoder_out.output_tokens - length_tgt = output_tokens.ne(self.pad).sum(1) - length_tgt = ( - length_tgt[:, None] - + utils.new_arange(length_tgt, 1, beam_size) - - beam_size // 2 - ) - length_tgt = length_tgt.view(-1).clamp_(min=2) - max_length = length_tgt.max() - idx_length = utils.new_arange(length_tgt, max_length) - - initial_output_tokens = output_tokens.new_zeros( - length_tgt.size(0), max_length - ).fill_(self.pad) - initial_output_tokens.masked_fill_( - idx_length[None, :] < length_tgt[:, None], self.unk - ) - initial_output_tokens[:, 0] = self.bos - initial_output_tokens.scatter_(1, length_tgt[:, None] - 1, self.eos) - - initial_output_scores = initial_output_tokens.new_zeros( - *initial_output_tokens.size() - ).type_as(decoder_out.output_scores) - - return decoder_out._replace( - output_tokens=initial_output_tokens, output_scores=initial_output_scores - ) - - -class NATransformerDecoder(FairseqNATDecoder): - def __init__(self, args, dictionary, embed_tokens, no_encoder_attn=False): - super().__init__( - args, dictionary, embed_tokens, no_encoder_attn=no_encoder_attn - ) - self.dictionary = dictionary - self.bos = dictionary.bos() - self.unk = dictionary.unk() - self.eos = dictionary.eos() - - self.encoder_embed_dim = args.encoder_embed_dim - self.sg_length_pred = getattr(args, "sg_length_pred", False) - self.pred_length_offset = getattr(args, "pred_length_offset", False) - self.length_loss_factor = getattr(args, "length_loss_factor", 0.1) - self.src_embedding_copy = getattr(args, "src_embedding_copy", False) - self.embed_length = Embedding(256, self.encoder_embed_dim, None) - - @ensemble_decoder - def forward(self, normalize, encoder_out, prev_output_tokens, step=0, **unused): - features, _ = self.extract_features( - prev_output_tokens, - encoder_out=encoder_out, - embedding_copy=(step == 0) & self.src_embedding_copy, - ) - decoder_out = self.output_layer(features) - return F.log_softmax(decoder_out, -1) if normalize else decoder_out - - @ensemble_decoder - def forward_length(self, normalize, encoder_out): - enc_feats = encoder_out["encoder_out"][0] # T x B x C - if len(encoder_out["encoder_padding_mask"]) > 0: - src_masks = 
encoder_out["encoder_padding_mask"][0] # B x T - else: - src_masks = None - enc_feats = _mean_pooling(enc_feats, src_masks) - if self.sg_length_pred: - enc_feats = enc_feats.detach() - length_out = F.linear(enc_feats, self.embed_length.weight) - return F.log_softmax(length_out, -1) if normalize else length_out - - def extract_features( - self, - prev_output_tokens, - encoder_out=None, - early_exit=None, - embedding_copy=False, - **unused - ): - """ - Similar to *forward* but only return features. - - Inputs: - prev_output_tokens: Tensor(B, T) - encoder_out: a dictionary of hidden states and masks - - Returns: - tuple: - - the decoder's features of shape `(batch, tgt_len, embed_dim)` - - a dictionary with any model-specific outputs - the LevenshteinTransformer decoder has full-attention to all generated tokens - """ - # embedding - if embedding_copy: - src_embd = encoder_out["encoder_embedding"][0] - if len(encoder_out["encoder_padding_mask"]) > 0: - src_mask = encoder_out["encoder_padding_mask"][0] - else: - src_mask = None - src_mask = ( - ~src_mask - if src_mask is not None - else prev_output_tokens.new_ones(*src_embd.size()[:2]).bool() - ) - - x, decoder_padding_mask = self.forward_embedding( - prev_output_tokens, - self.forward_copying_source( - src_embd, src_mask, prev_output_tokens.ne(self.padding_idx) - ), - ) - - else: - - x, decoder_padding_mask = self.forward_embedding(prev_output_tokens) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - attn = None - inner_states = [x] - - # decoder layers - for i, layer in enumerate(self.layers): - - # early exit from the decoder. - if (early_exit is not None) and (i >= early_exit): - break - - x, attn, _ = layer( - x, - encoder_out["encoder_out"][0] - if (encoder_out is not None and len(encoder_out["encoder_out"]) > 0) - else None, - encoder_out["encoder_padding_mask"][0] - if ( - encoder_out is not None - and len(encoder_out["encoder_padding_mask"]) > 0 - ) - else None, - self_attn_mask=None, - self_attn_padding_mask=decoder_padding_mask, - ) - inner_states.append(x) - - if self.layer_norm: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - if self.project_out_dim is not None: - x = self.project_out_dim(x) - - return x, {"attn": attn, "inner_states": inner_states} - - def forward_embedding(self, prev_output_tokens, states=None): - # embed positions - positions = ( - self.embed_positions(prev_output_tokens) - if self.embed_positions is not None - else None - ) - - # embed tokens and positions - if states is None: - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - if self.project_in_dim is not None: - x = self.project_in_dim(x) - else: - x = states - - if positions is not None: - x += positions - x = self.dropout_module(x) - decoder_padding_mask = prev_output_tokens.eq(self.padding_idx) - return x, decoder_padding_mask - - def forward_copying_source(self, src_embeds, src_masks, tgt_masks): - length_sources = src_masks.sum(1) - length_targets = tgt_masks.sum(1) - mapped_inputs = _uniform_assignment(length_sources, length_targets).masked_fill( - ~tgt_masks, 0 - ) - copied_embedding = torch.gather( - src_embeds, - 1, - mapped_inputs.unsqueeze(-1).expand( - *mapped_inputs.size(), src_embeds.size(-1) - ), - ) - return copied_embedding - - def forward_length_prediction(self, length_out, encoder_out, tgt_tokens=None): - enc_feats = encoder_out["encoder_out"][0] # T x B x C - if len(encoder_out["encoder_padding_mask"]) > 0: - src_masks = encoder_out["encoder_padding_mask"][0] # B x T - else: - src_masks 
= None - if self.pred_length_offset: - if src_masks is None: - src_lengs = enc_feats.new_ones(enc_feats.size(1)).fill_( - enc_feats.size(0) - ) - else: - src_lengs = (~src_masks).transpose(0, 1).type_as(enc_feats).sum(0) - src_lengs = src_lengs.long() - - if tgt_tokens is not None: - # obtain the length target - tgt_lengs = tgt_tokens.ne(self.padding_idx).sum(1).long() - if self.pred_length_offset: - length_tgt = tgt_lengs - src_lengs + 128 - else: - length_tgt = tgt_lengs - length_tgt = length_tgt.clamp(min=0, max=255) - - else: - # predict the length target (greedy for now) - # TODO: implementing length-beam - pred_lengs = length_out.max(-1)[1] - if self.pred_length_offset: - length_tgt = pred_lengs - 128 + src_lengs - else: - length_tgt = pred_lengs - - return length_tgt - - -@register_model_architecture( - "nonautoregressive_transformer", "nonautoregressive_transformer" -) -def base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.apply_bert_init = getattr(args, "apply_bert_init", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - # --- special arguments --- - args.sg_length_pred = getattr(args, "sg_length_pred", False) - args.pred_length_offset = getattr(args, "pred_length_offset", False) - args.length_loss_factor = getattr(args, "length_loss_factor", 0.1) - args.src_embedding_copy = getattr(args, "src_embedding_copy", False) - - -@register_model_architecture( - "nonautoregressive_transformer", "nonautoregressive_transformer_wmt_en_de" -) -def nonautoregressive_transformer_wmt_en_de(args): - base_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/roberta/__init__.py 
b/kosmos-g/fairseq/fairseq/models/roberta/__init__.py deleted file mode 100644 index 4cd723ae9..000000000 --- a/kosmos-g/fairseq/fairseq/models/roberta/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .hub_interface import * # noqa -from .model import * # noqa -from .enc_dec import * # noqa -from .model_camembert import * # noqa -from .model_gottbert import * # noqa -from .model_xlmr import * # noqa diff --git a/kosmos-g/fairseq/fairseq/models/roberta/alignment_utils.py b/kosmos-g/fairseq/fairseq/models/roberta/alignment_utils.py deleted file mode 100644 index ccc7f74cb..000000000 --- a/kosmos-g/fairseq/fairseq/models/roberta/alignment_utils.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from collections import Counter -from typing import List - -import torch - - -def align_bpe_to_words(roberta, bpe_tokens: torch.LongTensor, other_tokens: List[str]): - """ - Helper to align GPT-2 BPE to other tokenization formats (e.g., spaCy). - - Args: - roberta (RobertaHubInterface): RoBERTa instance - bpe_tokens (torch.LongTensor): GPT-2 BPE tokens of shape `(T_bpe)` - other_tokens (List[str]): other tokens of shape `(T_words)` - - Returns: - List[str]: mapping from *other_tokens* to corresponding *bpe_tokens*. - """ - assert bpe_tokens.dim() == 1 - assert bpe_tokens[0] == 0 - - def clean(text): - return text.strip() - - # remove whitespaces to simplify alignment - bpe_tokens = [roberta.task.source_dictionary.string([x]) for x in bpe_tokens] - bpe_tokens = [ - clean(roberta.bpe.decode(x) if x not in {"<s>", ""} else x) for x in bpe_tokens - ] - other_tokens = [clean(str(o)) for o in other_tokens] - - # strip leading <s> - bpe_tokens = bpe_tokens[1:] - assert "".join(bpe_tokens) == "".join(other_tokens) - - # create alignment from every word to a list of BPE tokens - alignment = [] - bpe_toks = filter(lambda item: item[1] != "", enumerate(bpe_tokens, start=1)) - j, bpe_tok = next(bpe_toks) - for other_tok in other_tokens: - bpe_indices = [] - while True: - if other_tok.startswith(bpe_tok): - bpe_indices.append(j) - other_tok = other_tok[len(bpe_tok) :] - try: - j, bpe_tok = next(bpe_toks) - except StopIteration: - j, bpe_tok = None, None - elif bpe_tok.startswith(other_tok): - # other_tok spans multiple BPE tokens - bpe_indices.append(j) - bpe_tok = bpe_tok[len(other_tok) :] - other_tok = "" - else: - raise Exception('Cannot align "{}" and "{}"'.format(other_tok, bpe_tok)) - if other_tok == "": - break - assert len(bpe_indices) > 0 - alignment.append(bpe_indices) - assert len(alignment) == len(other_tokens) - - return alignment - - -def align_features_to_words(roberta, features, alignment): - """ - Align given features to words. - - Args: - roberta (RobertaHubInterface): RoBERTa instance - features (torch.Tensor): features to align of shape `(T_bpe x C)` - alignment: alignment between BPE tokens and words returned by - func:`align_bpe_to_words`. 
- """ - assert features.dim() == 2 - - bpe_counts = Counter(j for bpe_indices in alignment for j in bpe_indices) - assert bpe_counts[0] == 0 # <s> shouldn't be aligned - denom = features.new([bpe_counts.get(j, 1) for j in range(len(features))]) - weighted_features = features / denom.unsqueeze(-1) - - output = [weighted_features[0]] - largest_j = -1 - for bpe_indices in alignment: - output.append(weighted_features[bpe_indices].sum(dim=0)) - largest_j = max(largest_j, *bpe_indices) - for j in range(largest_j + 1, len(features)): - output.append(weighted_features[j]) - output = torch.stack(output) - assert torch.all(torch.abs(output.sum(dim=0) - features.sum(dim=0)) < 1e-4) - return output - - -def spacy_nlp(): - if getattr(spacy_nlp, "_nlp", None) is None: - try: - from spacy.lang.en import English - - spacy_nlp._nlp = English() - except ImportError: - raise ImportError("Please install spacy with: pip install spacy") - return spacy_nlp._nlp - - -def spacy_tokenizer(): - if getattr(spacy_tokenizer, "_tokenizer", None) is None: - try: - nlp = spacy_nlp() - spacy_tokenizer._tokenizer = nlp.Defaults.create_tokenizer(nlp) - except ImportError: - raise ImportError("Please install spacy with: pip install spacy") - return spacy_tokenizer._tokenizer diff --git a/kosmos-g/fairseq/fairseq/models/roberta/enc_dec.py b/kosmos-g/fairseq/fairseq/models/roberta/enc_dec.py deleted file mode 100644 index e538dee0a..000000000 --- a/kosmos-g/fairseq/fairseq/models/roberta/enc_dec.py +++ /dev/null @@ -1,192 +0,0 @@ -import argparse -import logging - -import torch.nn as nn -import fairseq.checkpoint_utils -from fairseq.models import ( - FairseqEncoderDecoderModel, - register_model, - register_model_architecture, -) -from fairseq.models.transformer import TransformerDecoder -from fairseq.models.roberta import model as roberta - -logger = logging.getLogger(__name__) - - -@register_model("roberta_enc_dec") -class RobertaEncDecModel(FairseqEncoderDecoderModel): - @staticmethod - def add_args(parser): - parser.add_argument( - "--pretrained-mlm-checkpoint", - default=None, - type=str, - metavar="PRETRAINED", - help="path to pretrained mlm checkpoint", - ) - parser.add_argument( - "--pretrained-decoder", action="store_true", help="reload decoder" - ) - parser.add_argument( - "--hack-layernorm-embedding", - action="store_true", - help="hack to reload old models trained with encoder-normalize-before=False (no equivalent to encoder-normalize-before=False and layernorm_embedding=False", - ) - parser.add_argument( - "--share-decoder-input-output-embed", - action="store_true", - help="share decoder input and output embeddings", - ) - parser.add_argument( - "--share-all-embeddings", - action="store_true", - help="share encoder, decoder and output embeddings" - " (requires shared dictionary and embed dim)", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present - base_enc_dec_architecture(args) - if args.pretrained_mlm_checkpoint: - arg_overrides = None - if args.hack_layernorm_embedding: - arg_overrides = {"layernorm_embedding": False} - loaded = fairseq.checkpoint_utils.load_model_ensemble_and_task( - [args.pretrained_mlm_checkpoint], arg_overrides=arg_overrides - ) - ([roberta_enc], _cfg, _task) = loaded - else: - # Do we need to edit untie_weights here ? 
-            share_in_out = (
-                args.share_decoder_input_output_embed or args.share_all_embeddings
-            )
-            args.untie_weights_roberta = not share_in_out
-            if args.hack_layernorm_embedding:
-                args.layernorm_embedding = False
-                args.encoder_normalize_before = False
-            roberta_enc = roberta.RobertaModel.build_model(args, task)
-
-        return cls.from_roberta(roberta_enc, args, task.source_dictionary)
-
-    @staticmethod
-    def from_roberta(roberta_enc: roberta.RobertaModel, args, dictionary):
-        encoder = roberta_enc.encoder.sentence_encoder
-        vocab_size, embed_dim = encoder.embed_tokens.weight.shape
-
-        if args.share_all_embeddings:
-            lm_head = roberta_enc.encoder.lm_head
-            assert encoder.embed_tokens.weight is lm_head.weight, (
-                "Can't use --share-all-embeddings with a model "
-                "that was pretrained with --untie-weights-roberta_enc"
-            )
-        else:
-            lm_head = roberta.RobertaLMHead(
-                embed_dim, vocab_size, roberta_enc.args.activation_fn
-            )
-
-        dec_embs = nn.Embedding(vocab_size, embed_dim, dictionary.pad())
-        if args.share_all_embeddings or args.share_decoder_input_output_embed:
-            # Note: I wasn't able to use Embedding _weight parameter to achieve this sharing.
-            dec_embs.weight = lm_head.weight
-
-        decoder = TransformerDecoder(
-            RobertaEncDecModel.read_args_from_roberta(roberta_enc.args),
-            dictionary,
-            dec_embs,
-            no_encoder_attn=False,
-            output_projection=lm_head,
-        )
-        if getattr(args, "pretrained_decoder", False):
-            decoder_dict = encoder.state_dict()
-
-            # TODO: hide setting "encoder_attn" layers behind a flag.
-            for k, w in list(decoder_dict.items()):
-                if ".self_attn" in k:
-                    k_enc_attn = k.replace(".self_attn", ".encoder_attn")
-                    decoder_dict[k_enc_attn] = w.detach().clone()
-
-            for k, w in lm_head.state_dict().items():
-                decoder_dict["output_projection." + k] = w
-
-            missing_keys, unexpected_keys = decoder.load_state_dict(
-                decoder_dict, strict=False
-            )
-            # missing_keys = [m for m in missing_keys if ".encoder_attn" not in m]
-            assert not missing_keys and not unexpected_keys, (
-                "Failed to load state dict. "
-                f"Missing keys: {missing_keys}. "
-                f"Unexpected keys: {unexpected_keys}."
-            )
-
-        if args.share_all_embeddings:
-            assert decoder.output_projection.weight is decoder.embed_tokens.weight
-            assert encoder.embed_tokens.weight is decoder.embed_tokens.weight
-        elif args.share_decoder_input_output_embed:
-            assert decoder.output_projection.weight is decoder.embed_tokens.weight
-            assert encoder.embed_tokens.weight is not decoder.embed_tokens.weight
-        else:
-            assert decoder.output_projection.weight is not decoder.embed_tokens.weight
-            assert encoder.embed_tokens.weight is not decoder.embed_tokens.weight
-
-        return RobertaEncDecModel(encoder, decoder)
-
-    @staticmethod
-    def read_args_from_roberta(roberta_args: argparse.Namespace):
-        # TODO: this would become easier if encoder/decoder were using a similar
-        # TransformerConfig object
-        args = argparse.Namespace(**vars(roberta_args))
-        attr_map = [
-            ("encoder_attention_heads", "decoder_attention_heads"),
-            ("encoder_embed_dim", "decoder_embed_dim"),
-            ("encoder_embed_dim", "decoder_output_dim"),
-            ("encoder_normalize_before", "decoder_normalize_before"),
-            ("encoder_layers_to_keep", "decoder_layers_to_keep"),
-            ("encoder_ffn_embed_dim", "decoder_ffn_embed_dim"),
-            ("encoder_layerdrop", "decoder_layerdrop"),
-            ("encoder_layers", "decoder_layers"),
-            ("encoder_learned_pos", "decoder_learned_pos"),
-            # should this be set from here?
- ("max_positions", "max_target_positions"), - ] - for k1, k2 in attr_map: - setattr(args, k2, getattr(roberta_args, k1)) - - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = not roberta_args.untie_weights_roberta - return args - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + "." if name != "" else "" - super().upgrade_state_dict_named(state_dict, name) - old_keys = list(state_dict.keys()) - - # rename decoder -> encoder before upgrading children modules - for k in old_keys: - if k.startswith(prefix + "encoder.lm_head"): - state_dict.pop(k) - continue - new_k = k - new_k = new_k.replace(".sentence_encoder.", ".") - new_k = new_k.replace("decoder.lm_head.", "decoder.output_projection.") - if k == new_k: - continue - # print(k, "->", new_k) - state_dict[new_k] = state_dict.pop(k) - - -@register_model_architecture("roberta_enc_dec", "roberta_enc_dec") -def base_enc_dec_architecture(args): - args.hack_layernorm_embedding = getattr(args, "hack_layernorm_embedding", False) - args.pretrained_mlm_checkpoint = getattr(args, "pretrained_mlm_checkpoint", None) - args.pretrained_decoder = getattr(args, "pretrained_decoder", None) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - - roberta.base_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/roberta/hub_interface.py b/kosmos-g/fairseq/fairseq/models/roberta/hub_interface.py deleted file mode 100644 index ba298d63b..000000000 --- a/kosmos-g/fairseq/fairseq/models/roberta/hub_interface.py +++ /dev/null @@ -1,235 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.data import encoders - - -class RobertaHubInterface(nn.Module): - """A simple PyTorch Hub interface to RoBERTa. - - Usage: https://github.com/pytorch/fairseq/tree/main/examples/roberta - """ - - def __init__(self, cfg, task, model): - super().__init__() - self.cfg = cfg - self.task = task - self.model = model - - self.bpe = encoders.build_bpe(cfg.bpe) - - # this is useful for determining the device - self.register_buffer("_float_tensor", torch.tensor([0], dtype=torch.float)) - - @property - def device(self): - return self._float_tensor.device - - def encode( - self, sentence: str, *addl_sentences, no_separator=False - ) -> torch.LongTensor: - """ - BPE-encode a sentence (or multiple sentences). - - Every sequence begins with a beginning-of-sentence (`<s>`) symbol. - Every sentence ends with an end-of-sentence (`</s>`) and we use an - extra end-of-sentence (`</s>`) as a separator. - - Example (single sentence): `<s> a b c </s>` - Example (sentence pair): `<s> d e f </s> </s> 1 2 3 </s>` - - The BPE encoding follows GPT-2. One subtle detail is that the GPT-2 BPE - requires leading spaces. 
For example:: - - >>> roberta.encode('Hello world').tolist() - [0, 31414, 232, 2] - >>> roberta.encode(' world').tolist() - [0, 232, 2] - >>> roberta.encode('world').tolist() - [0, 8331, 2] - """ - bpe_sentence = "<s> " + self.bpe.encode(sentence) + " </s>" - for s in addl_sentences: - bpe_sentence += " </s>" if not no_separator else "" - bpe_sentence += " " + self.bpe.encode(s) + " </s>" - tokens = self.task.source_dictionary.encode_line( - bpe_sentence, append_eos=False, add_if_not_exist=False - ) - return tokens.long() - - def decode(self, tokens: torch.LongTensor): - assert tokens.dim() == 1 - tokens = tokens.numpy() - if tokens[0] == self.task.source_dictionary.bos(): - tokens = tokens[1:] # remove <s> - eos_mask = tokens == self.task.source_dictionary.eos() - doc_mask = eos_mask[1:] & eos_mask[:-1] - sentences = np.split(tokens, doc_mask.nonzero()[0] + 1) - sentences = [ - self.bpe.decode(self.task.source_dictionary.string(s)) for s in sentences - ] - if len(sentences) == 1: - return sentences[0] - return sentences - - def extract_features( - self, tokens: torch.LongTensor, return_all_hiddens: bool = False - ) -> torch.Tensor: - if tokens.dim() == 1: - tokens = tokens.unsqueeze(0) - if tokens.size(-1) > self.model.max_positions(): - raise ValueError( - "tokens exceeds maximum length: {} > {}".format( - tokens.size(-1), self.model.max_positions() - ) - ) - features, extra = self.model( - tokens.to(device=self.device), - features_only=True, - return_all_hiddens=return_all_hiddens, - ) - if return_all_hiddens: - # convert from T x B x C -> B x T x C - inner_states = extra["inner_states"] - return [inner_state.transpose(0, 1) for inner_state in inner_states] - else: - return features # just the last layer's features - - def register_classification_head( - self, name: str, num_classes: int = None, embedding_size: int = None, **kwargs - ): - self.model.register_classification_head( - name, num_classes=num_classes, embedding_size=embedding_size, **kwargs - ) - - def predict(self, head: str, tokens: torch.LongTensor, return_logits: bool = False): - features = self.extract_features(tokens.to(device=self.device)) - logits = self.model.classification_heads[head](features) - if return_logits: - return logits - return F.log_softmax(logits, dim=-1) - - def extract_features_aligned_to_words( - self, sentence: str, return_all_hiddens: bool = False - ) -> torch.Tensor: - """Extract RoBERTa features, aligned to spaCy's word-level tokenizer.""" - from fairseq.models.roberta import alignment_utils - from spacy.tokens import Doc - - nlp = alignment_utils.spacy_nlp() - tokenizer = alignment_utils.spacy_tokenizer() - - # tokenize both with GPT-2 BPE and spaCy - bpe_toks = self.encode(sentence) - spacy_toks = tokenizer(sentence) - spacy_toks_ws = [t.text_with_ws for t in tokenizer(sentence)] - alignment = alignment_utils.align_bpe_to_words(self, bpe_toks, spacy_toks_ws) - - # extract features and align them - features = self.extract_features( - bpe_toks, return_all_hiddens=return_all_hiddens - ) - features = features.squeeze(0) - aligned_feats = alignment_utils.align_features_to_words( - self, features, alignment - ) - - # wrap in spaCy Doc - doc = Doc( - nlp.vocab, - words=["<s>"] + [x.text for x in spacy_toks] + ["</s>"], - spaces=[True] - + [x.endswith(" ") for x in spacy_toks_ws[:-1]] - + [True, False], - ) - assert len(doc) == aligned_feats.size(0) - doc.user_token_hooks["vector"] = lambda token: aligned_feats[token.i] - return doc - - def fill_mask(self, masked_input: str, topk: int = 5): - 
masked_token = "<mask>" - assert ( - masked_token in masked_input and masked_input.count(masked_token) == 1 - ), "Please add one {0} token for the input, eg: 'He is a {0} guy'".format( - masked_token - ) - - text_spans = masked_input.split(masked_token) - text_spans_bpe = ( - (" {0} ".format(masked_token)) - .join([self.bpe.encode(text_span.rstrip()) for text_span in text_spans]) - .strip() - ) - tokens = self.task.source_dictionary.encode_line( - "<s> " + text_spans_bpe + " </s>", - append_eos=False, - add_if_not_exist=False, - ) - - masked_index = (tokens == self.task.mask_idx).nonzero(as_tuple=False) - if tokens.dim() == 1: - tokens = tokens.unsqueeze(0) - - with utils.model_eval(self.model): - features, extra = self.model( - tokens.long().to(device=self.device), - features_only=False, - return_all_hiddens=False, - ) - logits = features[0, masked_index, :].squeeze() - prob = logits.softmax(dim=0) - values, index = prob.topk(k=topk, dim=0) - topk_predicted_token_bpe = self.task.source_dictionary.string(index) - - topk_filled_outputs = [] - for index, predicted_token_bpe in enumerate( - topk_predicted_token_bpe.split(" ") - ): - predicted_token = self.bpe.decode(predicted_token_bpe) - # Quick hack to fix https://github.com/pytorch/fairseq/issues/1306 - if predicted_token_bpe.startswith("\u2581"): - predicted_token = " " + predicted_token - if " {0}".format(masked_token) in masked_input: - topk_filled_outputs.append( - ( - masked_input.replace( - " {0}".format(masked_token), predicted_token - ), - values[index].item(), - predicted_token, - ) - ) - else: - topk_filled_outputs.append( - ( - masked_input.replace(masked_token, predicted_token), - values[index].item(), - predicted_token, - ) - ) - return topk_filled_outputs - - def disambiguate_pronoun(self, sentence: str) -> bool: - """ - Usage:: - - >>> disambiguate_pronoun('The _trophy_ would not fit in the brown suitcase because [it] was too big.') - True - - >>> disambiguate_pronoun('The trophy would not fit in the brown suitcase because [it] was too big.') - 'The trophy' - """ - assert hasattr( - self.task, "disambiguate_pronoun" - ), "roberta.disambiguate_pronoun() requires a model trained with the WSC task." - with utils.model_eval(self.model): - return self.task.disambiguate_pronoun( - self.model, sentence, use_cuda=self.device.type == "cuda" - ) diff --git a/kosmos-g/fairseq/fairseq/models/roberta/model.py b/kosmos-g/fairseq/fairseq/models/roberta/model.py deleted file mode 100644 index d27e05cf1..000000000 --- a/kosmos-g/fairseq/fairseq/models/roberta/model.py +++ /dev/null @@ -1,687 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -RoBERTa: A Robustly Optimized BERT Pretraining Approach. 
-""" - -import logging - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from fairseq import utils -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderModel, - register_model, - register_model_architecture, -) -from fairseq.models.transformer import DEFAULT_MIN_PARAMS_TO_WRAP, TransformerEncoder -from fairseq.modules import LayerNorm -from fairseq.modules.quant_noise import quant_noise as apply_quant_noise_ -from fairseq.modules.transformer_sentence_encoder import init_bert_params -from fairseq.utils import safe_getattr, safe_hasattr - -from .hub_interface import RobertaHubInterface - -logger = logging.getLogger(__name__) - - -@register_model("roberta") -class RobertaModel(FairseqEncoderModel): - @classmethod - def hub_models(cls): - return { - "roberta.base": "http://dl.fbaipublicfiles.com/fairseq/models/roberta.base.tar.gz", - "roberta.large": "http://dl.fbaipublicfiles.com/fairseq/models/roberta.large.tar.gz", - "roberta.large.mnli": "http://dl.fbaipublicfiles.com/fairseq/models/roberta.large.mnli.tar.gz", - "roberta.large.wsc": "http://dl.fbaipublicfiles.com/fairseq/models/roberta.large.wsc.tar.gz", - } - - def __init__(self, args, encoder): - super().__init__(encoder) - self.args = args - - # We follow BERT's random weight initialization - self.apply(init_bert_params) - - self.classification_heads = nn.ModuleDict() - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--encoder-layers", type=int, metavar="L", help="num encoder layers" - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="H", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="F", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="A", - help="num encoder attention heads", - ) - parser.add_argument( - "--activation-fn", - choices=utils.get_available_activation_fns(), - help="activation function to use", - ) - parser.add_argument( - "--pooler-activation-fn", - choices=utils.get_available_activation_fns(), - help="activation function to use for pooler layer", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - parser.add_argument( - "--layernorm-embedding", - action="store_true", - help="add layernorm to embedding", - ) - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--activation-dropout", - type=float, - metavar="D", - help="dropout probability after activation in FFN", - ) - parser.add_argument( - "--pooler-dropout", - type=float, - metavar="D", - help="dropout probability in the masked_lm pooler layers", - ) - parser.add_argument( - "--max-positions", type=int, help="number of positional embeddings to learn" - ) - parser.add_argument( - "--load-checkpoint-heads", - action="store_true", - help="(re-)register and load heads when loading checkpoints", - ) - parser.add_argument( - "--untie-weights-roberta", - action="store_true", - help="Untie weights between embeddings and classifiers in RoBERTa", - ) - # args for "Reducing Transformer Depth on Demand with Structured Dropout" (Fan et al., 2019) - parser.add_argument( - "--encoder-layerdrop", - type=float, - metavar="D", - 
default=0,
-            help="LayerDrop probability for encoder",
-        )
-        parser.add_argument(
-            "--encoder-layers-to-keep",
-            default=None,
-            help="which layers to *keep* when pruning as a comma-separated list",
-        )
-        # args for Training with Quantization Noise for Extreme Model Compression ({Fan*, Stock*} et al., 2020)
-        parser.add_argument(
-            "--quant-noise-pq",
-            type=float,
-            metavar="D",
-            default=0,
-            help="iterative PQ quantization noise at training time",
-        )
-        parser.add_argument(
-            "--quant-noise-pq-block-size",
-            type=int,
-            metavar="D",
-            default=8,
-            help="block size of quantization noise at training time",
-        )
-        parser.add_argument(
-            "--quant-noise-scalar",
-            type=float,
-            metavar="D",
-            default=0,
-            help="scalar quantization noise and scalar quantization at training time",
-        )
-        # args for "Better Fine-Tuning by Reducing Representational Collapse" (Aghajanyan et al. 2020)
-        parser.add_argument(
-            "--spectral-norm-classification-head",
-            action="store_true",
-            default=False,
-            help="Apply spectral normalization on the classification head",
-        )
-        # args for Fully Sharded Data Parallel (FSDP) training
-        parser.add_argument(
-            "--min-params-to-wrap",
-            type=int,
-            metavar="D",
-            default=DEFAULT_MIN_PARAMS_TO_WRAP,
-            help=(
-                "minimum number of params for a layer to be wrapped with FSDP() when "
-                "training with --ddp-backend=fully_sharded. Smaller values will "
-                "improve memory efficiency, but may make torch.distributed "
-                "communication less efficient due to smaller input sizes. This option "
-                "is set to 0 (i.e., always wrap) when --checkpoint-activations or "
-                "--offload-activations are passed."
-            ),
-        )
-        # args for AdaPruning
-        # In short, it adds regularization for the multihead attention module and feed forward neural nets
-        # For more details, please refer to the paper https://openreview.net/forum?id=_CMSV7FTzGI
-        parser.add_argument(
-            "--mha-reg-scale-factor",
-            type=float,
-            metavar="D",
-            default=0.0,
-            help="scaling factor for regularization term in adaptive pruning, recommendation is 0.000375",
-        )
-        parser.add_argument(
-            "--ffn-reg-scale-factor",
-            type=float,
-            metavar="D",
-            default=0.0,
-            help="scaling factor for regularization term in adaptive pruning, recommendation is 0.000375",
-        )
-        parser.add_argument(
-            "--mha-heads-to-keep",
-            type=int,
-            metavar="D",
-            default=-1,
-            help="number of heads to keep in each multi-head attention module, -1 means keeping all heads",
-        )
-        parser.add_argument(
-            "--ffn-blocks-to-remove",
-            type=int,
-            metavar="D",
-            default=-1,
-            help="number of feedforward blocks to remove in each transformer layer, -1 means keeping all ffn blocks",
-        )
-
-    @classmethod
-    def build_model(cls, args, task):
-        """Build a new model instance."""
-
-        from omegaconf import OmegaConf
-
-        if OmegaConf.is_config(args):
-            OmegaConf.set_struct(args, False)
-
-        # make sure all arguments are present
-        base_architecture(args)
-
-        if not safe_hasattr(args, "max_positions"):
-            if not safe_hasattr(args, "tokens_per_sample"):
-                args.tokens_per_sample = task.max_positions()
-            args.max_positions = args.tokens_per_sample
-
-        encoder = RobertaEncoder(args, task.source_dictionary)
-
-        if OmegaConf.is_config(args):
-            OmegaConf.set_struct(args, True)
-
-        return cls(args, encoder)
-
-    def forward(
-        self,
-        src_tokens,
-        features_only=False,
-        return_all_hiddens=False,
-        classification_head_name=None,
-        **kwargs,
-    ):
-        if classification_head_name is not None:
-            features_only = True
-
-        x, extra = self.encoder(src_tokens, features_only, return_all_hiddens, **kwargs)
-
-        if
classification_head_name is not None: - x = self.classification_heads[classification_head_name](x) - return x, extra - - def _get_adaptive_head_loss(self): - norm_loss = 0 - scaling = float(self.args.mha_reg_scale_factor) - for layer in self.encoder.sentence_encoder.layers: - norm_loss_layer = 0 - for i in range(layer.self_attn.num_heads): - start_idx = i * layer.self_attn.head_dim - end_idx = (i + 1) * layer.self_attn.head_dim - norm_loss_layer += scaling * ( - torch.sum( - torch.abs( - layer.self_attn.q_proj.weight[ - start_idx:end_idx, - ] - ) - ) - + torch.sum( - torch.abs(layer.self_attn.q_proj.bias[start_idx:end_idx]) - ) - ) - norm_loss_layer += scaling * ( - torch.sum( - torch.abs( - layer.self_attn.k_proj.weight[ - start_idx:end_idx, - ] - ) - ) - + torch.sum( - torch.abs(layer.self_attn.k_proj.bias[start_idx:end_idx]) - ) - ) - norm_loss_layer += scaling * ( - torch.sum( - torch.abs( - layer.self_attn.v_proj.weight[ - start_idx:end_idx, - ] - ) - ) - + torch.sum( - torch.abs(layer.self_attn.v_proj.bias[start_idx:end_idx]) - ) - ) - - norm_loss += norm_loss_layer - return norm_loss - - def _get_adaptive_ffn_loss(self): - ffn_scale_factor = float(self.args.ffn_reg_scale_factor) - filter_loss = 0 - for layer in self.encoder.sentence_encoder.layers: - filter_loss += torch.sum( - torch.abs(layer.fc1.weight * ffn_scale_factor) - ) + torch.sum(torch.abs(layer.fc2.weight * ffn_scale_factor)) - filter_loss += torch.sum( - torch.abs(layer.fc1.bias * ffn_scale_factor) - ) + torch.sum(torch.abs(layer.fc2.bias * ffn_scale_factor)) - return filter_loss - - def get_normalized_probs(self, net_output, log_probs, sample=None): - """Get normalized probabilities (or log probs) from a net's output.""" - logits = net_output[0].float() - if log_probs: - return F.log_softmax(logits, dim=-1) - else: - return F.softmax(logits, dim=-1) - - def register_classification_head( - self, name, num_classes=None, inner_dim=None, **kwargs - ): - """Register a classification head.""" - if name in self.classification_heads: - prev_num_classes = self.classification_heads[name].out_proj.out_features - prev_inner_dim = self.classification_heads[name].dense.out_features - if num_classes != prev_num_classes or inner_dim != prev_inner_dim: - logger.warning( - 're-registering head "{}" with num_classes {} (prev: {}) ' - "and inner_dim {} (prev: {})".format( - name, num_classes, prev_num_classes, inner_dim, prev_inner_dim - ) - ) - self.classification_heads[name] = RobertaClassificationHead( - input_dim=self.args.encoder_embed_dim, - inner_dim=inner_dim or self.args.encoder_embed_dim, - num_classes=num_classes, - activation_fn=self.args.pooler_activation_fn, - pooler_dropout=self.args.pooler_dropout, - q_noise=self.args.quant_noise_pq, - qn_block_size=self.args.quant_noise_pq_block_size, - do_spectral_norm=self.args.spectral_norm_classification_head, - ) - - @property - def supported_targets(self): - return {"self"} - - @classmethod - def from_pretrained( - cls, - model_name_or_path, - checkpoint_file="model.pt", - data_name_or_path=".", - bpe="gpt2", - **kwargs, - ): - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - archive_map=cls.hub_models(), - bpe=bpe, - load_checkpoint_heads=True, - **kwargs, - ) - - logger.info(x["args"]) - return RobertaHubInterface(x["args"], x["task"], x["models"][0]) - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + "." 
if name != "" else "" - - # rename decoder -> encoder before upgrading children modules - for k in list(state_dict.keys()): - if k.startswith(prefix + "decoder"): - new_k = prefix + "encoder" + k[len(prefix + "decoder") :] - state_dict[new_k] = state_dict[k] - del state_dict[k] - - # rename emb_layer_norm -> layernorm_embedding - for k in list(state_dict.keys()): - if ".emb_layer_norm." in k: - new_k = k.replace(".emb_layer_norm.", ".layernorm_embedding.") - state_dict[new_k] = state_dict[k] - del state_dict[k] - - # upgrade children modules - super().upgrade_state_dict_named(state_dict, name) - - # Handle new classification heads present in the state dict. - current_head_names = ( - [] - if not hasattr(self, "classification_heads") - else self.classification_heads.keys() - ) - keys_to_delete = [] - for k in state_dict.keys(): - if not k.startswith(prefix + "classification_heads."): - continue - - head_name = k[len(prefix + "classification_heads.") :].split(".")[0] - num_classes = state_dict[ - prefix + "classification_heads." + head_name + ".out_proj.weight" - ].size(0) - inner_dim = state_dict[ - prefix + "classification_heads." + head_name + ".dense.weight" - ].size(0) - - if getattr(self.args, "load_checkpoint_heads", False): - if head_name not in current_head_names: - self.register_classification_head(head_name, num_classes, inner_dim) - else: - if head_name not in current_head_names: - logger.warning( - "deleting classification head ({}) from checkpoint " - "not present in current model: {}".format(head_name, k) - ) - keys_to_delete.append(k) - elif ( - num_classes - != self.classification_heads[head_name].out_proj.out_features - or inner_dim - != self.classification_heads[head_name].dense.out_features - ): - logger.warning( - "deleting classification head ({}) from checkpoint " - "with different dimensions than current model: {}".format( - head_name, k - ) - ) - keys_to_delete.append(k) - for k in keys_to_delete: - del state_dict[k] - - # Copy any newly-added classification heads into the state dict - # with their current weights. - if hasattr(self, "classification_heads"): - cur_state = self.classification_heads.state_dict() - for k, v in cur_state.items(): - if prefix + "classification_heads." + k not in state_dict: - logger.info("Overwriting " + prefix + "classification_heads." + k) - state_dict[prefix + "classification_heads." 
+ k] = v - - -class RobertaLMHead(nn.Module): - """Head for masked language modeling.""" - - def __init__(self, embed_dim, output_dim, activation_fn, weight=None): - super().__init__() - self.dense = nn.Linear(embed_dim, embed_dim) - self.activation_fn = utils.get_activation_fn(activation_fn) - self.layer_norm = LayerNorm(embed_dim) - - if weight is None: - weight = nn.Linear(embed_dim, output_dim, bias=False).weight - self.weight = weight - self.bias = nn.Parameter(torch.zeros(output_dim)) - - def forward(self, features, masked_tokens=None, **kwargs): - # Only project the masked tokens while training, - # saves both memory and computation - if masked_tokens is not None: - features = features[masked_tokens, :] - - x = self.dense(features) - x = self.activation_fn(x) - x = self.layer_norm(x) - # project back to size of vocabulary with bias - x = F.linear(x, self.weight) + self.bias - return x - - -class RobertaClassificationHead(nn.Module): - """Head for sentence-level classification tasks.""" - - def __init__( - self, - input_dim, - inner_dim, - num_classes, - activation_fn, - pooler_dropout, - q_noise=0, - qn_block_size=8, - do_spectral_norm=False, - ): - super().__init__() - self.dense = nn.Linear(input_dim, inner_dim) - self.activation_fn = utils.get_activation_fn(activation_fn) - self.dropout = nn.Dropout(p=pooler_dropout) - self.out_proj = apply_quant_noise_( - nn.Linear(inner_dim, num_classes), q_noise, qn_block_size - ) - if do_spectral_norm: - if q_noise != 0: - raise NotImplementedError( - "Attempting to use Spectral Normalization with Quant Noise. This is not officially supported" - ) - self.out_proj = torch.nn.utils.spectral_norm(self.out_proj) - - def forward(self, features, **kwargs): - x = features[:, 0, :] # take <s> token (equiv. to [CLS]) - x = self.dropout(x) - x = self.dense(x) - x = self.activation_fn(x) - x = self.dropout(x) - x = self.out_proj(x) - return x - - -class RobertaEncoder(FairseqEncoder): - """RoBERTa encoder.""" - - def __init__(self, args, dictionary): - super().__init__(dictionary) - - # set any missing default values - base_architecture(args) - self.args = args - - if args.encoder_layers_to_keep: - args.encoder_layers = len(args.encoder_layers_to_keep.split(",")) - - embed_tokens = self.build_embedding( - len(dictionary), args.encoder_embed_dim, dictionary.pad() - ) - - self.sentence_encoder = self.build_encoder(args, dictionary, embed_tokens) - - self.lm_head = self.build_lm_head( - embed_dim=args.encoder_embed_dim, - output_dim=len(dictionary), - activation_fn=args.activation_fn, - weight=( - self.sentence_encoder.embed_tokens.weight - if not args.untie_weights_roberta - else None - ), - ) - - def build_embedding(self, vocab_size, embedding_dim, padding_idx): - return nn.Embedding(vocab_size, embedding_dim, padding_idx) - - def build_encoder(self, args, dictionary, embed_tokens): - encoder = TransformerEncoder(args, dictionary, embed_tokens) - encoder.apply(init_bert_params) - return encoder - - def build_lm_head(self, embed_dim, output_dim, activation_fn, weight): - return RobertaLMHead(embed_dim, output_dim, activation_fn, weight) - - def forward( - self, - src_tokens, - features_only=False, - return_all_hiddens=False, - masked_tokens=None, - **unused, - ): - """ - Args: - src_tokens (LongTensor): input tokens of shape `(batch, src_len)` - features_only (bool, optional): skip LM head and just return - features. If True, the output will be of shape - `(batch, src_len, embed_dim)`. 
-            return_all_hiddens (bool, optional): also return all of the
-                intermediate hidden states (default: False).
-
-        Returns:
-            tuple:
-                - the LM output of shape `(batch, src_len, vocab)`
-                - a dictionary of additional data, where 'inner_states'
-                  is a list of hidden states. Note that the hidden
-                  states have shape `(src_len, batch, embed_dim)`.
-        """
-        x, extra = self.extract_features(
-            src_tokens, return_all_hiddens=return_all_hiddens
-        )
-        if not features_only:
-            x = self.output_layer(x, masked_tokens=masked_tokens)
-        return x, extra
-
-    def extract_features(self, src_tokens, return_all_hiddens=False, **kwargs):
-        encoder_out = self.sentence_encoder(
-            src_tokens,
-            return_all_hiddens=return_all_hiddens,
-            token_embeddings=kwargs.get("token_embeddings", None),
-        )
-        # T x B x C -> B x T x C
-        features = encoder_out["encoder_out"][0].transpose(0, 1)
-        inner_states = encoder_out["encoder_states"] if return_all_hiddens else None
-        return features, {"inner_states": inner_states}
-
-    def output_layer(self, features, masked_tokens=None, **unused):
-        return self.lm_head(features, masked_tokens)
-
-    def max_positions(self):
-        """Maximum output length supported by the encoder."""
-        return self.args.max_positions
-
-
-@register_model_architecture("roberta", "roberta")
-def base_architecture(args):
-    args.encoder_layers = safe_getattr(args, "encoder_layers", 12)
-    args.encoder_embed_dim = safe_getattr(args, "encoder_embed_dim", 768)
-    args.encoder_ffn_embed_dim = safe_getattr(args, "encoder_ffn_embed_dim", 3072)
-    args.encoder_attention_heads = safe_getattr(args, "encoder_attention_heads", 12)
-
-    args.dropout = safe_getattr(args, "dropout", 0.1)
-    args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1)
-    args.activation_dropout = safe_getattr(args, "activation_dropout", 0.0)
-    args.pooler_dropout = safe_getattr(args, "pooler_dropout", 0.0)
-
-    args.max_source_positions = safe_getattr(args, "max_positions", 512)
-    args.no_token_positional_embeddings = safe_getattr(
-        args, "no_token_positional_embeddings", False
-    )
-
-    # BERT has a few structural differences compared to the original Transformer
-    args.encoder_learned_pos = safe_getattr(args, "encoder_learned_pos", True)
-    args.layernorm_embedding = safe_getattr(args, "layernorm_embedding", True)
-    args.no_scale_embedding = safe_getattr(args, "no_scale_embedding", True)
-    args.activation_fn = safe_getattr(args, "activation_fn", "gelu")
-    args.encoder_normalize_before = safe_getattr(
-        args, "encoder_normalize_before", False
-    )
-    args.pooler_activation_fn = safe_getattr(args, "pooler_activation_fn", "tanh")
-    args.untie_weights_roberta = safe_getattr(args, "untie_weights_roberta", False)
-
-    # Adaptive input config
-    args.adaptive_input = safe_getattr(args, "adaptive_input", False)
-
-    # LayerDrop config
-    args.encoder_layerdrop = safe_getattr(args, "encoder_layerdrop", 0.0)
-    args.encoder_layers_to_keep = safe_getattr(args, "encoder_layers_to_keep", None)
-
-    # Quantization noise config
-    args.quant_noise_pq = safe_getattr(args, "quant_noise_pq", 0)
-    args.quant_noise_pq_block_size = safe_getattr(args, "quant_noise_pq_block_size", 8)
-    args.quant_noise_scalar = safe_getattr(args, "quant_noise_scalar", 0)
-
-    # R4F config
-    args.spectral_norm_classification_head = safe_getattr(
-        args, "spectral_norm_classification_head", False
-    )
-
-
-@register_model_architecture("roberta", "roberta_prenorm")
-def roberta_prenorm_architecture(args):
-    args.layernorm_embedding = safe_getattr(args, "layernorm_embedding", False)
-
args.encoder_normalize_before = safe_getattr(args, "encoder_normalize_before", True) - base_architecture(args) - - -@register_model_architecture("roberta", "roberta_base") -def roberta_base_architecture(args): - base_architecture(args) - - -@register_model_architecture("roberta", "roberta_large") -def roberta_large_architecture(args): - args.encoder_layers = safe_getattr(args, "encoder_layers", 24) - args.encoder_embed_dim = safe_getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = safe_getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = safe_getattr(args, "encoder_attention_heads", 16) - base_architecture(args) - - -@register_model_architecture("roberta", "xlm") -def xlm_architecture(args): - args.encoder_layers = safe_getattr(args, "encoder_layers", 16) - args.encoder_embed_dim = safe_getattr(args, "encoder_embed_dim", 1280) - args.encoder_ffn_embed_dim = safe_getattr(args, "encoder_ffn_embed_dim", 1280 * 4) - args.encoder_attention_heads = safe_getattr(args, "encoder_attention_heads", 16) - base_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/roberta/model_camembert.py b/kosmos-g/fairseq/fairseq/models/roberta/model_camembert.py deleted file mode 100644 index 46447546f..000000000 --- a/kosmos-g/fairseq/fairseq/models/roberta/model_camembert.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -CamemBERT: a Tasty French Language Model -""" - -from fairseq.models import register_model - -from .hub_interface import RobertaHubInterface -from .model import RobertaModel - - -@register_model("camembert") -class CamembertModel(RobertaModel): - @classmethod - def hub_models(cls): - return { - "camembert": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-base.tar.gz", - "camembert.v0": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-base.tar.gz", - "camembert-base": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-base.tar.gz", - "camembert-large": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-large.tar.gz", - "camembert-base-ccnet": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-base-ccnet.tar.gz", - "camembert-base-ccnet-4gb": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-base-ccnet-4gb.tar.gz", - "camembert-base-wikipedia-4gb": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-base-wikipedia-4gb.tar.gz", - "camembert-base-oscar-4gb": "http://dl.fbaipublicfiles.com/fairseq/models/camembert-base-oscar-4gb.tar.gz", - } - - @classmethod - def from_pretrained( - cls, - model_name_or_path, - checkpoint_file="model.pt", - data_name_or_path=".", - bpe="sentencepiece", - **kwargs - ): - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - archive_map=cls.hub_models(), - bpe=bpe, - load_checkpoint_heads=True, - **kwargs, - ) - return RobertaHubInterface(x["args"], x["task"], x["models"][0]) diff --git a/kosmos-g/fairseq/fairseq/models/roberta/model_gottbert.py b/kosmos-g/fairseq/fairseq/models/roberta/model_gottbert.py deleted file mode 100644 index dc7a019b3..000000000 --- a/kosmos-g/fairseq/fairseq/models/roberta/model_gottbert.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
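The hub wrappers above are easiest to understand from a usage sketch. A minimal example, assuming fairseq is installed and a `roberta.base` checkpoint has been downloaded locally (the path and checkpoint name below are illustrative, not prescribed by the code):
```python
# Illustrative sketch only; ./roberta.base is a hypothetical local download.
from fairseq.models.roberta import RobertaModel

roberta = RobertaModel.from_pretrained("./roberta.base", checkpoint_file="model.pt")
roberta.eval()  # disable dropout

tokens = roberta.encode("Hello world")       # tensor([0, 31414, 232, 2]), per the encode() docstring above
features = roberta.extract_features(tokens)  # (1, T, C) last-layer features

# fill_mask expects exactly one <mask> token and returns (filled_text, prob, token) triples
print(roberta.fill_mask("The capital of France is <mask>.", topk=3))
```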
-""" -GottBERT: a pure German Language Model -""" - -from fairseq.models import register_model - -from .hub_interface import RobertaHubInterface -from .model import RobertaModel - - -@register_model("gottbert") -class GottbertModel(RobertaModel): - @classmethod - def hub_models(cls): - return { - "gottbert-base": "https://dl.gottbert.de/fairseq/models/gottbert-base.tar.gz", - } - - @classmethod - def from_pretrained( - cls, - model_name_or_path, - checkpoint_file="model.pt", - data_name_or_path=".", - bpe="hf_byte_bpe", - bpe_vocab="vocab.json", - bpe_merges="merges.txt", - bpe_add_prefix_space=False, - **kwargs - ): - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - archive_map=cls.hub_models(), - bpe=bpe, - load_checkpoint_heads=True, - bpe_vocab=bpe_vocab, - bpe_merges=bpe_merges, - bpe_add_prefix_space=bpe_add_prefix_space, - **kwargs, - ) - return RobertaHubInterface(x["args"], x["task"], x["models"][0]) diff --git a/kosmos-g/fairseq/fairseq/models/roberta/model_xlmr.py b/kosmos-g/fairseq/fairseq/models/roberta/model_xlmr.py deleted file mode 100644 index cf6e354d5..000000000 --- a/kosmos-g/fairseq/fairseq/models/roberta/model_xlmr.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Unsupervised Cross-lingual Representation Learning at Scale -""" - -from fairseq.models import register_model - -from .hub_interface import RobertaHubInterface -from .model import RobertaModel - - -@register_model("xlmr") -class XLMRModel(RobertaModel): - @classmethod - def hub_models(cls): - return { - "xlmr.base": "http://dl.fbaipublicfiles.com/fairseq/models/xlmr.base.tar.gz", - "xlmr.large": "http://dl.fbaipublicfiles.com/fairseq/models/xlmr.large.tar.gz", - "xlmr.xl": "http://dl.fbaipublicfiles.com/fairseq/models/xlmr/xlmr.xl.tar.gz", - "xlmr.xxl": "http://dl.fbaipublicfiles.com/fairseq/models/xlmr/xlmr.xxl.tar.gz", - } - - @classmethod - def from_pretrained( - cls, - model_name_or_path, - checkpoint_file="model.pt", - data_name_or_path=".", - bpe="sentencepiece", - **kwargs - ): - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - archive_map=cls.hub_models(), - bpe=bpe, - load_checkpoint_heads=True, - **kwargs, - ) - return RobertaHubInterface(x["args"], x["task"], x["models"][0]) diff --git a/kosmos-g/fairseq/fairseq/models/speech_to_speech/__init__.py b/kosmos-g/fairseq/fairseq/models/speech_to_speech/__init__.py deleted file mode 100644 index d34883552..000000000 --- a/kosmos-g/fairseq/fairseq/models/speech_to_speech/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .modules import * # noqa -from .s2s_transformer import * # noqa diff --git a/kosmos-g/fairseq/fairseq/models/speech_to_speech/modules.py b/kosmos-g/fairseq/fairseq/models/speech_to_speech/modules.py deleted file mode 100644 index d07994b90..000000000 --- a/kosmos-g/fairseq/fairseq/models/speech_to_speech/modules.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch -from torch import nn - -from fairseq.models import FairseqEncoder -from fairseq.models.transformer import Linear - - -class CTCDecoder(FairseqEncoder): - def __init__(self, dictionary, in_dim): - super().__init__(dictionary) - self.proj = nn.Linear(in_dim, len(dictionary)) - - def forward(self, src_tokens, src_lengths=None, **kwargs): - encoder_out = self.proj(src_tokens) - return {"encoder_out": encoder_out} - - -class StackedEmbedding(nn.Embedding): - """Embedding module that supports stacked units -> single embedding""" - - def __init__(self, num_embeddings, embed_dim, padding_idx, num_stacked=1): - super().__init__(num_embeddings, embed_dim, padding_idx) - # follow transformer.Embedding - nn.init.normal_(self.weight, mean=0, std=embed_dim ** -0.5) - nn.init.constant_(self.weight[padding_idx], 0) - - self.offset = ( - 4 # skip <bos>, <pad>, <eos>, <unk>, specific to fairseq dictionary - ) - self.vocab_size = num_embeddings - self.offset - self.num_stacked = num_stacked - - if self.num_stacked > 1: - self.project_in_dim = Linear(embed_dim * num_stacked, embed_dim, bias=False) - - def forward(self, input): - if self.num_stacked == 1: - return super().forward(input) - - # expand input indices - mask = input >= self.offset - stacked_input = [] - cum_input = input.new_zeros(input.shape) - for i in range(1, self.num_stacked + 1): - div = pow(self.vocab_size, i) - next_input = torch.remainder(input - self.offset - cum_input, div) - cum_input += next_input - next_input = torch.floor_divide(next_input, div // self.vocab_size) - stacked_input.append((next_input + self.offset) * mask + input * ~mask) - - stacked_input = torch.stack(stacked_input[::-1], dim=2) - embed = super().forward(stacked_input).view(input.size(0), input.size(1), -1) - embed = self.project_in_dim(embed) - return embed diff --git a/kosmos-g/fairseq/fairseq/models/speech_to_speech/s2s_transformer.py b/kosmos-g/fairseq/fairseq/models/speech_to_speech/s2s_transformer.py deleted file mode 100644 index a5954a83e..000000000 --- a/kosmos-g/fairseq/fairseq/models/speech_to_speech/s2s_transformer.py +++ /dev/null @@ -1,703 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
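`StackedEmbedding.forward` above recovers `num_stacked` per-step unit ids from one combined index by base-`vocab_size` digit extraction. A small sketch of that arithmetic, with made-up values (`num_stacked=2`):
```python
# Sketch of the digit decomposition in StackedEmbedding.forward above;
# the vocab size and unit ids are made-up examples.
import torch

V, offset = 100, 4            # vocab_size (after the 4 special symbols) and offset
u1, u2 = 7, 42                # two per-step unit ids
x = torch.tensor([offset + u1 * V + u2])  # one combined ("stacked") index

low = torch.remainder(x - offset, V)                                     # i = 1 -> 42
high = torch.floor_divide(torch.remainder(x - offset - low, V * V), V)   # i = 2 -> 7
assert (high.item(), low.item()) == (u1, u2)
```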
-
-import logging
-from pathlib import Path
-from typing import Any, Dict, List, Optional
-
-import torch
-from torch import Tensor
-
-from fairseq import checkpoint_utils, utils
-from fairseq.models import (
-    FairseqEncoderModel,
-    FairseqEncoderDecoderModel,
-    FairseqLanguageModel,
-    register_model,
-    register_model_architecture,
-)
-from fairseq.models.speech_to_text import S2TTransformerEncoder
-from fairseq.models.speech_to_speech.modules import CTCDecoder, StackedEmbedding
-from fairseq.models.text_to_speech import TTSTransformerDecoder
-from fairseq.models.transformer import (
-    Linear,
-    TransformerDecoder,
-    TransformerModelBase,
-)
-
-
-logger = logging.getLogger(__name__)
-
-
-class S2STransformerEncoder(S2TTransformerEncoder):
-    """Based on the S2T Transformer encoder, with support for
-    incorporating a target speaker embedding."""
-
-    def __init__(self, args):
-        super().__init__(args)
-
-        self.spk_emb_proj = None
-        if args.target_speaker_embed:
-            self.spk_emb_proj = Linear(
-                args.encoder_embed_dim + args.speaker_embed_dim, args.encoder_embed_dim
-            )
-
-    def forward(
-        self, src_tokens, src_lengths, tgt_speaker=None, return_all_hiddens=False
-    ):
-        out = super().forward(src_tokens, src_lengths, return_all_hiddens)
-
-        if self.spk_emb_proj:
-            x = out["encoder_out"][0]
-            seq_len, bsz, _ = x.size()
-            tgt_speaker_emb = tgt_speaker.view(1, bsz, -1).expand(seq_len, bsz, -1)
-            x = self.spk_emb_proj(torch.cat([x, tgt_speaker_emb], dim=2))
-            out["encoder_out"][0] = x
-
-        return out
-
-
-class TransformerUnitDecoder(TransformerDecoder):
-    """Based on the Transformer decoder, with support for decoding stacked units"""
-
-    def __init__(
-        self,
-        args,
-        dictionary,
-        embed_tokens,
-        no_encoder_attn=False,
-        output_projection=None,
-    ):
-        super().__init__(
-            args, dictionary, embed_tokens, no_encoder_attn, output_projection
-        )
-        self.n_frames_per_step = args.n_frames_per_step
-
-        self.out_proj_n_frames = (
-            Linear(
-                self.output_embed_dim,
-                self.output_embed_dim * self.n_frames_per_step,
-                bias=False,
-            )
-            if self.n_frames_per_step > 1
-            else None
-        )
-
-    def forward(
-        self,
-        prev_output_tokens,
-        encoder_out: Optional[Dict[str, List[Tensor]]] = None,
-        incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None,
-        features_only: bool = False,
-        full_context_alignment: bool = False,
-        alignment_layer: Optional[int] = None,
-        alignment_heads: Optional[int] = None,
-        src_lengths: Optional[Any] = None,
-        return_all_hiddens: bool = False,
-    ):
-        """
-        Args:
-            prev_output_tokens (LongTensor): previous decoder outputs of shape
-                `(batch, tgt_len)`, for teacher forcing
-            encoder_out (optional): output from the encoder, used for
-                encoder-side attention, should be of size T x B x C
-            incremental_state (dict): dictionary used for storing state during
-                :ref:`Incremental decoding`
-            features_only (bool, optional): only return features without
-                applying output layer (default: False).
-            full_context_alignment (bool, optional): don't apply
-                auto-regressive mask to self-attention (default: False).
- - Returns: - tuple: - - the decoder's output of shape `(batch, tgt_len, vocab)` - - a dictionary with any model-specific outputs - """ - - x, extra = self.extract_features( - prev_output_tokens, - encoder_out=encoder_out, - incremental_state=incremental_state, - full_context_alignment=full_context_alignment, - alignment_layer=alignment_layer, - alignment_heads=alignment_heads, - ) - - if not features_only: - bsz, seq_len, d = x.size() - if self.out_proj_n_frames: - x = self.out_proj_n_frames(x) - x = self.output_layer(x.view(bsz, seq_len, self.n_frames_per_step, d)) - x = x.view(bsz, seq_len * self.n_frames_per_step, -1) - if ( - incremental_state is None and self.n_frames_per_step > 1 - ): # teacher-forcing mode in training - x = x[ - :, : -(self.n_frames_per_step - 1), : - ] # remove extra frames after <eos> - - return x, extra - - def upgrade_state_dict_named(self, state_dict, name): - if self.n_frames_per_step > 1: - move_keys = [ - ( - f"{name}.project_in_dim.weight", - f"{name}.embed_tokens.project_in_dim.weight", - ) - ] - for from_k, to_k in move_keys: - if from_k in state_dict and to_k not in state_dict: - state_dict[to_k] = state_dict[from_k] - del state_dict[from_k] - - -class S2STransformerMultitaskModelBase(FairseqEncoderDecoderModel): - @classmethod - def build_encoder(cls, args): - encoder = S2STransformerEncoder(args) - pretraining_path = getattr(args, "load_pretrained_encoder_from", None) - if pretraining_path is not None: - if not Path(pretraining_path).exists(): - logger.warning( - f"skipped pretraining because {pretraining_path} does not exist" - ) - else: - encoder = checkpoint_utils.load_pretrained_component_from_model( - component=encoder, checkpoint=pretraining_path - ) - logger.info(f"loaded pretrained encoder from: {pretraining_path}") - return encoder - - @classmethod - def build_multitask_decoder(cls, args, tgt_dict, in_dim): - decoder_args = args.decoder_args - decoder_args.encoder_embed_dim = in_dim - if args.decoder_type == "transformer": - base_multitask_text_transformer_decoder_arch(decoder_args) - task_decoder = TransformerDecoder( - decoder_args, - tgt_dict, - embed_tokens=TransformerModelBase.build_embedding( - decoder_args, - tgt_dict, - decoder_args.decoder_embed_dim, - ), - ) - elif args.decoder_type == "ctc": - task_decoder = CTCDecoder( - dictionary=tgt_dict, - in_dim=in_dim, - ) - else: - raise NotImplementedError( - "currently only support multitask decoder_type 'transformer', 'ctc'" - ) - - return task_decoder - - @classmethod - def build_model(cls, args, task): - encoder = cls.build_encoder(args) - decoder = ( - cls.build_decoder(args, task.target_dictionary) - if task.args.target_is_code - else cls.build_decoder(args) - ) - base_model = cls(encoder, decoder) - - # set up multitask decoders - base_model.multitask_decoders = {} - for task_name, task_obj in task.multitask_tasks.items(): - in_dim = ( - args.encoder_embed_dim - if task_obj.args.input_from == "encoder" - else args.decoder_embed_dim - ) - task_decoder = cls.build_multitask_decoder( - task_obj.args, task_obj.target_dictionary, in_dim - ) - - setattr(base_model, f"{task_name}_decoder", task_decoder) - decoder_model_cls = ( - FairseqEncoderModel - if task_obj.args.decoder_type == "ctc" - else FairseqLanguageModel - ) - base_model.multitask_decoders[task_name] = decoder_model_cls( - getattr(base_model, f"{task_name}_decoder") - ) - - return base_model - - def forward_encoder(self, src_tokens, src_lengths, speaker=None, **kwargs): - return self.encoder( - src_tokens, 
src_lengths=src_lengths, tgt_speaker=speaker, **kwargs - ) - - -@register_model("s2ut_transformer") -class S2UTTransformerModel(S2STransformerMultitaskModelBase): - """ - Direct speech-to-speech translation model with S2T Transformer encoder + Transformer discrete unit decoder - https://arxiv.org/abs/2107.05604 - """ - - @staticmethod - def add_args(parser): - # input - parser.add_argument( - "--conv-kernel-sizes", - type=str, - metavar="N", - help="kernel sizes of Conv1d subsampling layers", - ) - parser.add_argument( - "--conv-channels", - type=int, - metavar="N", - help="# of channels in Conv1d subsampling layers", - ) - # Transformer - parser.add_argument( - "--activation-fn", - type=str, - default="relu", - choices=utils.get_available_activation_fns(), - help="activation function to use", - ) - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--activation-dropout", - "--relu-dropout", - type=float, - metavar="D", - help="dropout probability after activation in FFN.", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads", - ) - parser.add_argument( - "--decoder-normalize-before", - action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--share-decoder-input-output-embed", - action="store_true", - help="share decoder input and output embeddings", - ) - parser.add_argument( - "--layernorm-embedding", - action="store_true", - help="add layernorm to embedding", - ) - parser.add_argument( - "--no-scale-embedding", - action="store_true", - help="if True, dont scale embeddings", - ) - parser.add_argument( - "--load-pretrained-encoder-from", - type=str, - metavar="STR", - help="model to take encoder weights from (for initialization)", - ) - parser.add_argument( - "--encoder-freezing-updates", - type=int, - metavar="N", - help="freeze encoder for first N updates", - ) - # speaker - parser.add_argument( - "--speaker-embed-dim", - type=int, - metavar="N", - help="speaker embedding dimension", - ) - - @classmethod - def build_decoder(cls, args, tgt_dict): - num_embeddings = len(tgt_dict) - padding_idx = tgt_dict.pad() - embed_tokens = StackedEmbedding( - num_embeddings, - args.decoder_embed_dim, - padding_idx, - num_stacked=args.n_frames_per_step, - ) - - return TransformerUnitDecoder( - args, - tgt_dict, - embed_tokens, - ) - - def forward( - self, - 
src_tokens, - src_lengths, - prev_output_tokens, - tgt_speaker=None, - return_all_hiddens=False, - ): - encoder_out = self.encoder( - src_tokens, - src_lengths=src_lengths, - tgt_speaker=tgt_speaker, - return_all_hiddens=return_all_hiddens, - ) - decoder_out = self.decoder( - prev_output_tokens, - encoder_out=encoder_out, - ) - if return_all_hiddens: - decoder_out[-1]["encoder_states"] = encoder_out["encoder_states"] - decoder_out[-1]["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ] - return decoder_out - - -@register_model("s2spect_transformer") -class S2SpecTTransformerModel(S2STransformerMultitaskModelBase): - """ - Speech-to-spectrogram model with S2T Transformer encoder + TTS Transformer decoder - """ - - @staticmethod - def add_args(parser): - # input - parser.add_argument( - "--conv-kernel-sizes", - type=str, - metavar="N", - help="kernel sizes of Conv1d subsampling layers", - ) - parser.add_argument( - "--conv-channels", - type=int, - metavar="N", - help="# of channels in Conv1d subsampling layers", - ) - # Transformer - parser.add_argument( - "--activation-fn", - type=str, - default="relu", - choices=utils.get_available_activation_fns(), - help="activation function to use", - ) - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--activation-dropout", - "--relu-dropout", - type=float, - metavar="D", - help="dropout probability after activation in FFN.", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - parser.add_argument( - "--no-scale-embedding", - action="store_true", - help="if True, dont scale embeddings", - ) - parser.add_argument( - "--load-pretrained-encoder-from", - type=str, - metavar="STR", - help="model to take encoder weights from (for initialization)", - ) - parser.add_argument( - "--encoder-freezing-updates", - type=int, - metavar="N", - help="freeze encoder for first N updates", - ) - # speaker - parser.add_argument( - "--speaker-embed-dim", - type=int, - metavar="N", - help="speaker embedding dimension", - ) - # decoder - parser.add_argument("--output-frame-dim", type=int) - # decoder prenet - parser.add_argument("--prenet-dropout", type=float) - parser.add_argument("--prenet-layers", type=int) - parser.add_argument("--prenet-dim", type=int) - # decoder postnet - parser.add_argument("--postnet-dropout", type=float) - parser.add_argument("--postnet-layers", type=int) - parser.add_argument("--postnet-conv-dim", type=int) - parser.add_argument("--postnet-conv-kernel-size", type=int) - # decoder transformer layers - parser.add_argument("--decoder-transformer-layers", type=int) - parser.add_argument("--decoder-embed-dim", type=int) - parser.add_argument("--decoder-ffn-embed-dim", type=int) - parser.add_argument("--decoder-normalize-before", action="store_true") - 
parser.add_argument("--decoder-attention-heads", type=int) - - @classmethod - def build_decoder(cls, args): - return TTSTransformerDecoder(args, None, padding_idx=1) - - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens, - tgt_speaker=None, - incremental_state=None, - target_lengths=None, - speaker=None, - return_all_hiddens=False, - ): - encoder_out = self.encoder( - src_tokens, - src_lengths=src_lengths, - tgt_speaker=tgt_speaker, - return_all_hiddens=return_all_hiddens, - ) - decoder_out = self.decoder( - prev_output_tokens, - encoder_out=encoder_out, - incremental_state=incremental_state, - target_lengths=target_lengths, - speaker=speaker, - ) - if return_all_hiddens: - decoder_out[-1]["encoder_states"] = encoder_out["encoder_states"] - decoder_out[-1]["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ] - return decoder_out - - -def base_multitask_text_transformer_decoder_arch(args): - args.dropout = getattr(args, "dropout", 0.3) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", True - ) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256) - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.max_target_positions = getattr(args, "max_target_positions", 1024) - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - - args.adaptive_input = getattr(args, "adaptive_input", False) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0) - - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - - args.decoder_layers = getattr(args, "decoder_layers", 2) - - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - - # decoder layer - args.activation_dropout = getattr(args, "activation_dropout", args.dropout) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 2048) - - args.attention_dropout = getattr(args, "attention_dropout", args.dropout) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - - -def base_s2st_transformer_encoder_architecture(args): - args.encoder_freezing_updates = getattr(args, "encoder_freezing_updates", 0) - - # Convolutional subsampler - args.conv_kernel_sizes = getattr(args, "conv_kernel_sizes", "5,5") - args.conv_channels = getattr(args, "conv_channels", 1024) - # Transformer - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 12) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", args.dropout) - args.activation_dropout = getattr(args, "activation_dropout", args.dropout) - args.activation_fn = getattr(args, "activation_fn", "relu") - - args.speaker_embed_dim = getattr(args, "speaker_embed_dim", 256) - - 
-@register_model_architecture( - model_name="s2ut_transformer", arch_name="s2ut_transformer" -) -def s2ut_architecture_base(args): - base_s2st_transformer_encoder_architecture(args) - - # decoder - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0) - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0) - - -@register_model_architecture("s2ut_transformer", "s2ut_transformer_fisher") -def s2ut_architecture_fisher(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.dropout = getattr(args, "dropout", 0.1) - - s2ut_architecture_base(args) - - -@register_model_architecture( - model_name="s2spect_transformer", arch_name="s2spect_transformer" -) -def s2spect_architecture_base(args): - base_s2st_transformer_encoder_architecture(args) - - # decoder - args.output_frame_dim = getattr(args, "output_frame_dim", 80) - # decoder prenet - args.prenet_dropout = getattr(args, "prenet_dropout", 0.5) - args.prenet_layers = getattr(args, "prenet_layers", 2) - args.prenet_dim = getattr(args, "prenet_dim", 256) - # decoder postnet - args.postnet_dropout = getattr(args, "postnet_dropout", 0.5) - args.postnet_layers = getattr(args, "postnet_layers", 5) - args.postnet_conv_dim = getattr(args, "postnet_conv_dim", 512) - args.postnet_conv_kernel_size = getattr(args, "postnet_conv_kernel_size", 5) - # decoder transformer layers - args.decoder_transformer_layers = getattr(args, "decoder_transformer_layers", 6) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", 4 * args.decoder_embed_dim - ) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - - -@register_model_architecture("s2spect_transformer", "s2spect_transformer_fisher") -def s2spect_architecture_fisher(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 256 * 8) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.dropout = getattr(args, "dropout", 0.1) - - # decoder - args.prenet_dim = getattr(args, "prenet_dim", 32) - - s2spect_architecture_base(args) diff --git a/kosmos-g/fairseq/fairseq/models/speech_to_text/__init__.py 
b/kosmos-g/fairseq/fairseq/models/speech_to_text/__init__.py
deleted file mode 100644
index e5d2ede31..000000000
--- a/kosmos-g/fairseq/fairseq/models/speech_to_text/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .berard import *  # noqa
-from .convtransformer import *  # noqa
-from .s2t_transformer import *  # noqa
-from .xm_transformer import *  # noqa
-from .s2t_conformer import *  # noqa
diff --git a/kosmos-g/fairseq/fairseq/models/speech_to_text/berard.py b/kosmos-g/fairseq/fairseq/models/speech_to_text/berard.py
deleted file mode 100644
index c505e3aca..000000000
--- a/kosmos-g/fairseq/fairseq/models/speech_to_text/berard.py
+++ /dev/null
@@ -1,606 +0,0 @@
-#!/usr/bin/env python3
-
-from ast import literal_eval
-from typing import List, Tuple
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import checkpoint_utils, utils
-from fairseq.data.data_utils import lengths_to_padding_mask
-from fairseq.models import (
-    FairseqEncoder,
-    FairseqEncoderDecoderModel,
-    FairseqIncrementalDecoder,
-    register_model,
-    register_model_architecture,
-)
-
-
-@register_model("s2t_berard")
-class BerardModel(FairseqEncoderDecoderModel):
-    """Implementation of a model similar to https://arxiv.org/abs/1802.04200
-
-    Paper title: End-to-End Automatic Speech Translation of Audiobooks
-    An implementation is available in TensorFlow at
-    https://github.com/eske/seq2seq
-    Relevant files in this implementation are the config
-    (https://github.com/eske/seq2seq/blob/master/config/LibriSpeech/AST.yaml)
-    and the model code
-    (https://github.com/eske/seq2seq/blob/master/translate/models.py).
-    The encoder and decoder try to be close to the original implementation.
-    The attention is an MLP as in Bahdanau et al.
-    (https://arxiv.org/abs/1409.0473).
-    There is no state initialization by averaging the encoder outputs.
-    """
-
-    def __init__(self, encoder, decoder):
-        super().__init__(encoder, decoder)
-
-    @staticmethod
-    def add_args(parser):
-        parser.add_argument(
-            "--input-layers",
-            type=str,
-            metavar="EXPR",
-            help="List of linear layer dimensions. These "
-            "layers are applied to the input features and "
-            "are followed by tanh and possibly dropout.",
-        )
-        parser.add_argument(
-            "--dropout",
-            type=float,
-            metavar="D",
-            help="Dropout probability to use in the encoder/decoder. "
-            "Note that this parameter controls dropout in various places; "
-            "there is no fine-grained control for dropout for embeddings "
-            "vs LSTM layers, for example.",
-        )
-        parser.add_argument(
-            "--in-channels",
-            type=int,
-            metavar="N",
-            help="Number of encoder input channels. Typical value is 1.",
-        )
-        parser.add_argument(
-            "--conv-layers",
-            type=str,
-            metavar="EXPR",
-            help="List of conv layers (format: (channels, kernel, stride)).",
-        )
-        parser.add_argument(
-            "--num-blstm-layers",
-            type=int,
-            metavar="N",
-            help="Number of encoder bi-LSTM layers.",
-        )
-        parser.add_argument(
-            "--lstm-size", type=int, metavar="N", help="LSTM hidden size."
- ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="Embedding dimension of the decoder target tokens.", - ) - parser.add_argument( - "--decoder-hidden-dim", - type=int, - metavar="N", - help="Decoder LSTM hidden dimension.", - ) - parser.add_argument( - "--decoder-num-layers", - type=int, - metavar="N", - help="Number of decoder LSTM layers.", - ) - parser.add_argument( - "--attention-dim", - type=int, - metavar="N", - help="Hidden layer dimension in MLP attention.", - ) - parser.add_argument( - "--output-layer-dim", - type=int, - metavar="N", - help="Hidden layer dim for linear layer prior to output projection.", - ) - parser.add_argument( - "--load-pretrained-encoder-from", - type=str, - metavar="STR", - help="model to take encoder weights from (for initialization)", - ) - parser.add_argument( - "--load-pretrained-decoder-from", - type=str, - metavar="STR", - help="model to take decoder weights from (for initialization)", - ) - - @classmethod - def build_encoder(cls, args, task): - encoder = BerardEncoder( - input_layers=literal_eval(args.input_layers), - conv_layers=literal_eval(args.conv_layers), - in_channels=args.input_channels, - input_feat_per_channel=args.input_feat_per_channel, - num_blstm_layers=args.num_blstm_layers, - lstm_size=args.lstm_size, - dropout=args.dropout, - ) - if getattr(args, "load_pretrained_encoder_from", None): - encoder = checkpoint_utils.load_pretrained_component_from_model( - component=encoder, checkpoint=args.load_pretrained_encoder_from - ) - return encoder - - @classmethod - def build_decoder(cls, args, task): - decoder = LSTMDecoder( - dictionary=task.target_dictionary, - embed_dim=args.decoder_embed_dim, - num_layers=args.decoder_num_layers, - hidden_size=args.decoder_hidden_dim, - dropout=args.dropout, - encoder_output_dim=2 * args.lstm_size, # bidirectional - attention_dim=args.attention_dim, - output_layer_dim=args.output_layer_dim, - ) - if getattr(args, "load_pretrained_decoder_from", None): - decoder = checkpoint_utils.load_pretrained_component_from_model( - component=decoder, checkpoint=args.load_pretrained_decoder_from - ) - return decoder - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - encoder = cls.build_encoder(args, task) - decoder = cls.build_decoder(args, task) - - return cls(encoder, decoder) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - # net_output['encoder_out'] is a (B, T, D) tensor - lprobs = super().get_normalized_probs(net_output, log_probs, sample) - # lprobs is a (B, T, D) tensor - lprobs.batch_first = True - return lprobs - - -class BerardEncoder(FairseqEncoder): - def __init__( - self, - input_layers: List[int], - conv_layers: List[Tuple[int]], - in_channels: int, - input_feat_per_channel: int, - num_blstm_layers: int, - lstm_size: int, - dropout: float, - ): - """ - Args: - input_layers: list of linear layer dimensions. These layers are - applied to the input features and are followed by tanh and - possibly dropout. - conv_layers: list of conv2d layer configurations. A configuration is - a tuple (out_channels, conv_kernel_size, stride). - in_channels: number of input channels. - input_feat_per_channel: number of input features per channel. These - are speech features, typically 40 or 80. - num_blstm_layers: number of bidirectional LSTM layers. - lstm_size: size of the LSTM hidden (and cell) size. - dropout: dropout probability. 
Dropout can be applied after the
-                linear layers and LSTM layers but not to the convolutional
-                layers.
-        """
-        super().__init__(None)
-
-        self.input_layers = nn.ModuleList()
-        in_features = input_feat_per_channel
-        for out_features in input_layers:
-            if dropout > 0:
-                self.input_layers.append(
-                    nn.Sequential(
-                        nn.Linear(in_features, out_features), nn.Dropout(p=dropout)
-                    )
-                )
-            else:
-                self.input_layers.append(nn.Linear(in_features, out_features))
-            in_features = out_features
-
-        self.in_channels = in_channels
-        self.input_dim = input_feat_per_channel
-        self.conv_kernel_sizes_and_strides = []
-        self.conv_layers = nn.ModuleList()
-        lstm_input_dim = input_layers[-1]
-        for conv_layer in conv_layers:
-            out_channels, conv_kernel_size, conv_stride = conv_layer
-            self.conv_layers.append(
-                nn.Conv2d(
-                    in_channels,
-                    out_channels,
-                    conv_kernel_size,
-                    stride=conv_stride,
-                    padding=conv_kernel_size // 2,
-                )
-            )
-            self.conv_kernel_sizes_and_strides.append((conv_kernel_size, conv_stride))
-            in_channels = out_channels
-            lstm_input_dim //= conv_stride
-
-        lstm_input_dim *= conv_layers[-1][0]
-        self.lstm_size = lstm_size
-        self.num_blstm_layers = num_blstm_layers
-        self.lstm = nn.LSTM(
-            input_size=lstm_input_dim,
-            hidden_size=lstm_size,
-            num_layers=num_blstm_layers,
-            dropout=dropout,
-            bidirectional=True,
-        )
-        self.output_dim = 2 * lstm_size  # bidirectional
-        if dropout > 0:
-            self.dropout = nn.Dropout(p=dropout)
-        else:
-            self.dropout = None
-
-    def forward(self, src_tokens, src_lengths=None, **kwargs):
-        """
-        Args
-            src_tokens: padded tensor (B, T, C * feat)
-            src_lengths: tensor of original lengths of input utterances (B,)
-        """
-        bsz, max_seq_len, _ = src_tokens.size()
-        # (B, C, T, feat)
-        x = (
-            src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim)
-            .transpose(1, 2)
-            .contiguous()
-        )
-
-        for input_layer in self.input_layers:
-            x = input_layer(x)
-            x = torch.tanh(x)
-
-        for conv_layer in self.conv_layers:
-            x = conv_layer(x)
-
-        bsz, _, output_seq_len, _ = x.size()
-
-        # (B, C, T, feat) -> (B, T, C, feat) -> (T, B, C, feat) ->
-        # (T, B, C * feat)
-        x = x.transpose(1, 2).transpose(0, 1).contiguous().view(output_seq_len, bsz, -1)
-
-        input_lengths = src_lengths.clone()
-        for k, s in self.conv_kernel_sizes_and_strides:
-            p = k // 2
-            input_lengths = (input_lengths.float() + 2 * p - k) / s + 1
-            input_lengths = input_lengths.floor().long()
-
-        packed_x = nn.utils.rnn.pack_padded_sequence(x, input_lengths)
-
-        h0 = x.new(2 * self.num_blstm_layers, bsz, self.lstm_size).zero_()
-        c0 = x.new(2 * self.num_blstm_layers, bsz, self.lstm_size).zero_()
-        packed_outs, _ = self.lstm(packed_x, (h0, c0))
-
-        # unpack outputs and apply dropout
-        x, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_outs)
-        if self.dropout is not None:
-            x = self.dropout(x)
-
-        encoder_padding_mask = (
-            lengths_to_padding_mask(output_lengths).to(src_tokens.device).t()
-        )
-
-        return {
-            "encoder_out": x,  # (T, B, C)
-            "encoder_padding_mask": encoder_padding_mask,  # (T, B)
-        }
-
-    def reorder_encoder_out(self, encoder_out, new_order):
-        encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select(
-            1, new_order
-        )
-        encoder_out["encoder_padding_mask"] = encoder_out[
-            "encoder_padding_mask"
-        ].index_select(1, new_order)
-        return encoder_out
-
-
-class MLPAttention(nn.Module):
-    """The original attention from Bahdanau et al. (2014)
-
-    https://arxiv.org/abs/1409.0473, based on a Multi-Layer Perceptron.
-    The attention score between position i in the encoder and position j in the
-    decoder is: alpha_ij = V_a * tanh(W_ae * enc_i + W_ad * dec_j + b_a)
-    """
-
-    def __init__(self, decoder_hidden_state_dim, context_dim, attention_dim):
-        super().__init__()
-
-        self.context_dim = context_dim
-        self.attention_dim = attention_dim
-        # W_ae and b_a
-        self.encoder_proj = nn.Linear(context_dim, self.attention_dim, bias=True)
-        # W_ad
-        self.decoder_proj = nn.Linear(
-            decoder_hidden_state_dim, self.attention_dim, bias=False
-        )
-        # V_a
-        self.to_scores = nn.Linear(self.attention_dim, 1, bias=False)
-
-    def forward(self, decoder_state, source_hids, encoder_padding_mask):
-        """The expected input dimensions are:
-        decoder_state: bsz x decoder_hidden_state_dim
-        source_hids: src_len x bsz x context_dim
-        encoder_padding_mask: src_len x bsz
-        """
-        src_len, bsz, _ = source_hids.size()
-        # (src_len*bsz) x context_dim (to feed through linear)
-        flat_source_hids = source_hids.view(-1, self.context_dim)
-        # (src_len*bsz) x attention_dim
-        encoder_component = self.encoder_proj(flat_source_hids)
-        # src_len x bsz x attention_dim
-        encoder_component = encoder_component.view(src_len, bsz, self.attention_dim)
-        # 1 x bsz x attention_dim
-        decoder_component = self.decoder_proj(decoder_state).unsqueeze(0)
-        # Sum with broadcasting and apply the non-linearity
-        # src_len x bsz x attention_dim
-        hidden_att = torch.tanh(
-            (decoder_component + encoder_component).view(-1, self.attention_dim)
-        )
-        # Project onto the reals to get attention scores (src_len x bsz)
-        attn_scores = self.to_scores(hidden_att).view(src_len, bsz)
-
-        # Mask + softmax (src_len x bsz)
-        if encoder_padding_mask is not None:
-            attn_scores = (
-                attn_scores.float()
-                .masked_fill_(encoder_padding_mask, float("-inf"))
-                .type_as(attn_scores)
-            )  # FP16 support: cast to float and back
-        # srclen x bsz
-        normalized_masked_attn_scores = F.softmax(attn_scores, dim=0)
-
-        # Sum weighted sources (bsz x context_dim)
-        attn_weighted_context = (
-            source_hids * normalized_masked_attn_scores.unsqueeze(2)
-        ).sum(dim=0)
-
-        return attn_weighted_context, normalized_masked_attn_scores
-
-
-class LSTMDecoder(FairseqIncrementalDecoder):
-    def __init__(
-        self,
-        dictionary,
-        embed_dim,
-        num_layers,
-        hidden_size,
-        dropout,
-        encoder_output_dim,
-        attention_dim,
-        output_layer_dim,
-    ):
-        """
-        Args:
-            dictionary: target text dictionary.
-            embed_dim: embedding dimension for target tokens.
-            num_layers: number of LSTM layers.
-            hidden_size: hidden size for LSTM layers.
-            dropout: dropout probability. Dropout can be applied to the
-                embeddings, the LSTM layers, and the context vector.
-            encoder_output_dim: encoder output dimension (hidden size of
-                encoder LSTM).
-            attention_dim: attention dimension for MLP attention.
-            output_layer_dim: size of the linear layer prior to output
-                projection.
- """ - super().__init__(dictionary) - self.num_layers = num_layers - self.hidden_size = hidden_size - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - self.embed_tokens = nn.Embedding(num_embeddings, embed_dim, padding_idx) - if dropout > 0: - self.dropout = nn.Dropout(p=dropout) - else: - self.dropout = None - - self.layers = nn.ModuleList() - for layer_id in range(num_layers): - input_size = embed_dim if layer_id == 0 else encoder_output_dim - self.layers.append( - nn.LSTMCell(input_size=input_size, hidden_size=hidden_size) - ) - - self.context_dim = encoder_output_dim - self.attention = MLPAttention( - decoder_hidden_state_dim=hidden_size, - context_dim=encoder_output_dim, - attention_dim=attention_dim, - ) - - self.deep_output_layer = nn.Linear( - hidden_size + encoder_output_dim + embed_dim, output_layer_dim - ) - self.output_projection = nn.Linear(output_layer_dim, num_embeddings) - - def forward( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs - ): - encoder_padding_mask = encoder_out["encoder_padding_mask"] - encoder_outs = encoder_out["encoder_out"] - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - bsz, seqlen = prev_output_tokens.size() - - srclen = encoder_outs.size(0) - - # embed tokens - embeddings = self.embed_tokens(prev_output_tokens) - x = embeddings - if self.dropout is not None: - x = self.dropout(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # initialize previous states (or get from cache during incremental - # generation) - cached_state = utils.get_incremental_state( - self, incremental_state, "cached_state" - ) - if cached_state is not None: - prev_hiddens, prev_cells = cached_state - else: - prev_hiddens = [encoder_out["encoder_out"].mean(dim=0)] * self.num_layers - prev_cells = [x.new_zeros(bsz, self.hidden_size)] * self.num_layers - - attn_scores = x.new_zeros(bsz, srclen) - attention_outs = [] - outs = [] - for j in range(seqlen): - input = x[j, :, :] - attention_out = None - for i, layer in enumerate(self.layers): - # the previous state is one layer below except for the bottom - # layer where the previous state is the state emitted by the - # top layer - hidden, cell = layer( - input, - ( - prev_hiddens[(i - 1) % self.num_layers], - prev_cells[(i - 1) % self.num_layers], - ), - ) - if self.dropout is not None: - hidden = self.dropout(hidden) - prev_hiddens[i] = hidden - prev_cells[i] = cell - if attention_out is None: - attention_out, attn_scores = self.attention( - hidden, encoder_outs, encoder_padding_mask - ) - if self.dropout is not None: - attention_out = self.dropout(attention_out) - attention_outs.append(attention_out) - input = attention_out - - # collect the output of the top layer - outs.append(hidden) - - # cache previous states (no-op except during incremental generation) - utils.set_incremental_state( - self, incremental_state, "cached_state", (prev_hiddens, prev_cells) - ) - - # collect outputs across time steps - x = torch.cat(outs, dim=0).view(seqlen, bsz, self.hidden_size) - attention_outs_concat = torch.cat(attention_outs, dim=0).view( - seqlen, bsz, self.context_dim - ) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - attention_outs_concat = attention_outs_concat.transpose(0, 1) - - # concat LSTM output, attention output and embedding - # before output projection - x = torch.cat((x, attention_outs_concat, embeddings), dim=2) - x = self.deep_output_layer(x) - x = torch.tanh(x) - if self.dropout is not None: - x = self.dropout(x) - # 
project back to size of vocabulary - x = self.output_projection(x) - - # to return the full attn_scores tensor, we need to fix the decoder - # to account for subsampling input frames - # return x, attn_scores - return x, None - - def reorder_incremental_state(self, incremental_state, new_order): - super().reorder_incremental_state(incremental_state, new_order) - cached_state = utils.get_incremental_state( - self, incremental_state, "cached_state" - ) - if cached_state is None: - return - - def reorder_state(state): - if isinstance(state, list): - return [reorder_state(state_i) for state_i in state] - return state.index_select(0, new_order) - - new_state = tuple(map(reorder_state, cached_state)) - utils.set_incremental_state(self, incremental_state, "cached_state", new_state) - - -@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard") -def berard(args): - """The original version: "End-to-End Automatic Speech Translation of - Audiobooks" (https://arxiv.org/abs/1802.04200) - """ - args.input_layers = getattr(args, "input_layers", "[256, 128]") - args.conv_layers = getattr(args, "conv_layers", "[(16, 3, 2), (16, 3, 2)]") - args.num_blstm_layers = getattr(args, "num_blstm_layers", 3) - args.lstm_size = getattr(args, "lstm_size", 256) - args.dropout = getattr(args, "dropout", 0.2) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 128) - args.decoder_num_layers = getattr(args, "decoder_num_layers", 2) - args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 512) - args.attention_dim = getattr(args, "attention_dim", 512) - args.output_layer_dim = getattr(args, "output_layer_dim", 128) - args.load_pretrained_encoder_from = getattr( - args, "load_pretrained_encoder_from", None - ) - args.load_pretrained_decoder_from = getattr( - args, "load_pretrained_decoder_from", None - ) - - -@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_256_3_3") -def berard_256_3_3(args): - """Used in - * "Harnessing Indirect Training Data for End-to-End Automatic Speech - Translation: Tricks of the Trade" (https://arxiv.org/abs/1909.06515) - * "CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus" - (https://arxiv.org/pdf/2002.01320.pdf) - * "Self-Supervised Representations Improve End-to-End Speech Translation" - (https://arxiv.org/abs/2006.12124) - """ - args.decoder_num_layers = getattr(args, "decoder_num_layers", 3) - berard(args) - - -@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_512_3_2") -def berard_512_3_2(args): - args.num_blstm_layers = getattr(args, "num_blstm_layers", 3) - args.lstm_size = getattr(args, "lstm_size", 512) - args.dropout = getattr(args, "dropout", 0.3) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256) - args.decoder_num_layers = getattr(args, "decoder_num_layers", 2) - args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 1024) - args.attention_dim = getattr(args, "attention_dim", 512) - args.output_layer_dim = getattr(args, "output_layer_dim", 256) - berard(args) - - -@register_model_architecture(model_name="s2t_berard", arch_name="s2t_berard_512_5_3") -def berard_512_5_3(args): - args.num_blstm_layers = getattr(args, "num_blstm_layers", 5) - args.lstm_size = getattr(args, "lstm_size", 512) - args.dropout = getattr(args, "dropout", 0.3) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256) - args.decoder_num_layers = getattr(args, "decoder_num_layers", 3) - args.decoder_hidden_dim = getattr(args, "decoder_hidden_dim", 1024) - args.attention_dim = 
getattr(args, "attention_dim", 512) - args.output_layer_dim = getattr(args, "output_layer_dim", 256) - berard(args) diff --git a/kosmos-g/fairseq/fairseq/models/speech_to_text/convtransformer.py b/kosmos-g/fairseq/fairseq/models/speech_to_text/convtransformer.py deleted file mode 100644 index eba000d7b..000000000 --- a/kosmos-g/fairseq/fairseq/models/speech_to_text/convtransformer.py +++ /dev/null @@ -1,448 +0,0 @@ -#!/usr/bin/env python3 - -import logging -import math -from typing import Dict, List, Optional, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import checkpoint_utils, utils -from fairseq.data.data_utils import lengths_to_padding_mask -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - register_model, - register_model_architecture, -) -from fairseq.models.transformer import Embedding, TransformerDecoder -from fairseq.modules import LayerNorm, PositionalEmbedding, TransformerEncoderLayer -from torch import Tensor - -logger = logging.getLogger(__name__) - - -@register_model("convtransformer") -class ConvTransformerModel(FairseqEncoderDecoderModel): - """ - Transformer-based Speech translation model from ESPNet-ST - https://arxiv.org/abs/2004.10234 - """ - - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--input-feat-per-channel", - type=int, - metavar="N", - help="encoder input dimension per input channel", - ) - parser.add_argument( - "--activation-fn", - choices=utils.get_available_activation_fns(), - help="activation function to use", - ) - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--activation-dropout", - "--relu-dropout", - type=float, - metavar="D", - help="dropout probability after activation in FFN.", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads", - ) - parser.add_argument( - "--decoder-normalize-before", - action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--decoder-output-dim", - type=int, - metavar="N", - help="decoder output dimension (extra linear layer if different from decoder embed dim)", - ) - parser.add_argument( - "--share-decoder-input-output-embed", - action="store_true", - help="share 
decoder input and output embeddings",
-        )
-        parser.add_argument(
-            "--layernorm-embedding",
-            action="store_true",
-            help="add layernorm to embedding",
-        )
-        parser.add_argument(
-            "--no-scale-embedding",
-            action="store_true",
-            help="if True, dont scale embeddings",
-        )
-        parser.add_argument(
-            "--load-pretrained-encoder-from",
-            type=str,
-            metavar="STR",
-            help="model to take encoder weights from (for initialization)",
-        )
-        parser.add_argument(
-            "--load-pretrained-decoder-from",
-            type=str,
-            metavar="STR",
-            help="model to take decoder weights from (for initialization)",
-        )
-        parser.add_argument(
-            "--conv-out-channels",
-            type=int,
-            metavar="INT",
-            help="the number of output channels of conv layer",
-        )
-
-    @classmethod
-    def build_encoder(cls, args):
-        encoder = ConvTransformerEncoder(args)
-        if getattr(args, "load_pretrained_encoder_from", None):
-            encoder = checkpoint_utils.load_pretrained_component_from_model(
-                component=encoder, checkpoint=args.load_pretrained_encoder_from
-            )
-        return encoder
-
-    @classmethod
-    def build_decoder(cls, args, task, embed_tokens):
-        decoder = TransformerDecoderNoExtra(args, task.target_dictionary, embed_tokens)
-        if getattr(args, "load_pretrained_decoder_from", None):
-            decoder = checkpoint_utils.load_pretrained_component_from_model(
-                component=decoder, checkpoint=args.load_pretrained_decoder_from
-            )
-        return decoder
-
-    @classmethod
-    def build_model(cls, args, task):
-        """Build a new model instance."""
-
-        # make sure all arguments are present in older models
-        base_architecture(args)
-
-        def build_embedding(dictionary, embed_dim):
-            num_embeddings = len(dictionary)
-            padding_idx = dictionary.pad()
-            return Embedding(num_embeddings, embed_dim, padding_idx)
-
-        decoder_embed_tokens = build_embedding(
-            task.target_dictionary, args.decoder_embed_dim
-        )
-        encoder = cls.build_encoder(args)
-        decoder = cls.build_decoder(args, task, decoder_embed_tokens)
-        return cls(encoder, decoder)
-
-    @staticmethod
-    @torch.jit.unused
-    def set_batch_first(lprobs):
-        lprobs.batch_first = True
-
-    def get_normalized_probs(
-        self,
-        net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]],
-        log_probs: bool,
-        sample: Optional[Dict[str, Tensor]] = None,
-    ):
-        # net_output['encoder_out'] is a (B, T, D) tensor
-        lprobs = self.get_normalized_probs_scriptable(net_output, log_probs, sample)
-        if self.training:
-            self.set_batch_first(lprobs)
-        return lprobs
-
-    def output_layout(self):
-        return "BTD"
-
-    """
-    The forward method inherited from the base class has a **kwargs argument in
-    its input, which is not supported in torchscript. This method overrides the
-    forward method definition without **kwargs.
- """ - - def forward(self, src_tokens, src_lengths, prev_output_tokens): - encoder_out = self.encoder(src_tokens=src_tokens, src_lengths=src_lengths) - decoder_out = self.decoder( - prev_output_tokens=prev_output_tokens, encoder_out=encoder_out - ) - return decoder_out - - -class ConvTransformerEncoder(FairseqEncoder): - """Conv + Transformer encoder""" - - def __init__(self, args): - """Construct an Encoder object.""" - super().__init__(None) - - self.dropout = args.dropout - self.embed_scale = ( - 1.0 if args.no_scale_embedding else math.sqrt(args.encoder_embed_dim) - ) - self.padding_idx = 1 - self.in_channels = 1 - self.input_dim = args.input_feat_per_channel - self.conv = torch.nn.Sequential( - torch.nn.Conv2d(1, args.conv_out_channels, 3, stride=2, padding=3 // 2), - torch.nn.ReLU(), - torch.nn.Conv2d( - args.conv_out_channels, - args.conv_out_channels, - 3, - stride=2, - padding=3 // 2, - ), - torch.nn.ReLU(), - ) - transformer_input_dim = self.infer_conv_output_dim( - self.in_channels, self.input_dim, args.conv_out_channels - ) - self.out = torch.nn.Linear(transformer_input_dim, args.encoder_embed_dim) - self.embed_positions = PositionalEmbedding( - args.max_source_positions, - args.encoder_embed_dim, - self.padding_idx, - learned=False, - ) - - self.transformer_layers = nn.ModuleList([]) - self.transformer_layers.extend( - [TransformerEncoderLayer(args) for i in range(args.encoder_layers)] - ) - if args.encoder_normalize_before: - self.layer_norm = LayerNorm(args.encoder_embed_dim) - else: - self.layer_norm = None - - def pooling_ratio(self): - return 4 - - def infer_conv_output_dim(self, in_channels, input_dim, out_channels): - sample_seq_len = 200 - sample_bsz = 10 - x = torch.randn(sample_bsz, in_channels, sample_seq_len, input_dim) - x = torch.nn.Conv2d(1, out_channels, 3, stride=2, padding=3 // 2)(x) - x = torch.nn.Conv2d(out_channels, out_channels, 3, stride=2, padding=3 // 2)(x) - x = x.transpose(1, 2) - mb, seq = x.size()[:2] - return x.contiguous().view(mb, seq, -1).size(-1) - - def forward(self, src_tokens, src_lengths): - """Encode input sequence. 
- :param torch.Tensor xs: input tensor - :param torch.Tensor masks: input mask - :return: position embedded tensor and mask - :rtype Tuple[torch.Tensor, torch.Tensor]: - """ - bsz, max_seq_len, _ = src_tokens.size() - x = ( - src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim) - .transpose(1, 2) - .contiguous() - ) - x = self.conv(x) - bsz, _, output_seq_len, _ = x.size() - x = x.transpose(1, 2).transpose(0, 1).contiguous().view(output_seq_len, bsz, -1) - x = self.out(x) - x = self.embed_scale * x - - subsampling_factor = int(max_seq_len * 1.0 / output_seq_len + 0.5) - input_len_0 = (src_lengths.float() / subsampling_factor).ceil().long() - input_len_1 = x.size(0) * torch.ones([src_lengths.size(0)]).long().to( - input_len_0.device - ) - input_lengths = torch.min(input_len_0, input_len_1) - - encoder_padding_mask = lengths_to_padding_mask(input_lengths) - - positions = self.embed_positions(encoder_padding_mask).transpose(0, 1) - x += positions - x = F.dropout(x, p=self.dropout, training=self.training) - - for layer in self.transformer_layers: - x = layer(x, encoder_padding_mask) - - if not encoder_padding_mask.any(): - maybe_encoder_padding_mask = None - else: - maybe_encoder_padding_mask = encoder_padding_mask - - return { - "encoder_out": [x], - "encoder_padding_mask": [maybe_encoder_padding_mask] - if maybe_encoder_padding_mask is not None - else [], - "encoder_embedding": [], - "encoder_states": [], - "src_tokens": [], - "src_lengths": [], - } - - @torch.jit.export - def reorder_encoder_out(self, encoder_out: Dict[str, List[Tensor]], new_order): - """ - Reorder encoder output according to *new_order*. - - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - *encoder_out* rearranged according to *new_order* - """ - new_encoder_out = [encoder_out["encoder_out"][0].index_select(1, new_order)] - if len(encoder_out["encoder_padding_mask"]) == 0: - new_encoder_padding_mask = [] - else: - new_encoder_padding_mask = [ - (encoder_out["encoder_padding_mask"][0]).index_select(0, new_order) - ] - if len(encoder_out["encoder_embedding"]) == 0: - new_encoder_embedding = [] - else: - new_encoder_embedding = [ - (encoder_out["encoder_embedding"][0]).index_select(0, new_order) - ] - encoder_states = encoder_out["encoder_states"] - if len(encoder_states) > 0: - for idx, state in enumerate(encoder_states): - encoder_states[idx] = state.index_select(1, new_order) - - return { - "encoder_out": new_encoder_out, - "encoder_padding_mask": new_encoder_padding_mask, - "encoder_embedding": new_encoder_embedding, - "encoder_states": encoder_states, - "src_tokens": [], - "src_lengths": [], - } - - -class TransformerDecoderNoExtra(TransformerDecoder): - def extract_features( - self, - prev_output_tokens, - encoder_out: Optional[Dict[str, List[Tensor]]], - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - full_context_alignment: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - ): - # call scriptable method from parent class - x, _ = self.extract_features_scriptable( - prev_output_tokens, - encoder_out, - incremental_state, - full_context_alignment, - alignment_layer, - alignment_heads, - ) - return x, None - - -@register_model_architecture(model_name="convtransformer", arch_name="convtransformer") -def base_architecture(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) 
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0) - args.max_source_positions = getattr(args, "max_source_positions", 3000) - args.max_target_positions = getattr(args, "max_target_positions", 1024) - args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False) - args.conv_out_channels = getattr(args, "conv_out_channels", args.encoder_embed_dim) - - -@register_model_architecture("convtransformer", "convtransformer_espnet") -def convtransformer_espnet(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256) - args.encoder_layers = getattr(args, "encoder_layers", 12) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) diff --git a/kosmos-g/fairseq/fairseq/models/speech_to_text/hub_interface.py b/kosmos-g/fairseq/fairseq/models/speech_to_text/hub_interface.py deleted file mode 100644 index ff6fd638a..000000000 --- a/kosmos-g/fairseq/fairseq/models/speech_to_text/hub_interface.py +++ /dev/null @@ -1,126 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
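The `pooling_ratio()` of 4 in `ConvTransformerEncoder` above comes from its two stride-2 convolutions (kernel 3, padding 1), each of which roughly halves the time axis. A quick sanity check of that length arithmetic (helper name is illustrative, not from this codebase):

```python
def conv_out_len(t: int, kernel: int = 3, stride: int = 2, padding: int = 1) -> int:
    # Standard output-length formula for a convolution along the time axis.
    return (t + 2 * padding - kernel) // stride + 1

t = 200
assert conv_out_len(conv_out_len(t)) == 50  # ~ t / 4
```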
- -from argparse import Namespace -import logging -from typing import Union, Tuple, Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from fairseq.data import encoders -from fairseq.data.audio.audio_utils import ( - get_waveform as get_wav, - convert_waveform as convert_wav, - get_fbank, -) -import fairseq.data.audio.feature_transforms.utterance_cmvn as utt_cmvn -from fairseq.data.audio.speech_to_text_dataset import SpeechToTextDataset - -logger = logging.getLogger(__name__) - - -class S2THubInterface(nn.Module): - def __init__(self, cfg, task, model): - super().__init__() - self.cfg = cfg - self.task = task - self.model = model - self.model.eval() - self.generator = self.task.build_generator([self.model], self.cfg) - - @classmethod - def get_model_input(cls, task, audio: Union[str, torch.Tensor]): - input_type = task.data_cfg.hub.get("input_type", "fbank80") - if input_type == "fbank80_w_utt_cmvn": - if isinstance(audio, str): - feat = utt_cmvn.UtteranceCMVN()(get_fbank(audio)) - feat = feat.unsqueeze(0) # T x D -> 1 x T x D - else: - import torchaudio.compliance.kaldi as kaldi - - feat = kaldi.fbank(audio, num_mel_bins=80).numpy() # 1 x T x D - elif input_type in {"waveform", "standardized_waveform"}: - if isinstance(audio, str): - feat, sr = get_wav(audio) # C x T - feat, _ = convert_wav( - feat, sr, to_sample_rate=16_000, to_mono=True - ) # C x T -> 1 x T - else: - feat = audio.numpy() - else: - raise ValueError(f"Unknown value: input_type = {input_type}") - - src_lengths = torch.Tensor([feat.shape[1]]).long() - src_tokens = torch.from_numpy(feat) # 1 x T (x D) - if input_type == "standardized_waveform": - with torch.no_grad(): - src_tokens = F.layer_norm(src_tokens, src_tokens.shape) - - return { - "net_input": { - "src_tokens": src_tokens, - "src_lengths": src_lengths, - "prev_output_tokens": None, - }, - "target_lengths": None, - "speaker": None, - } - - @classmethod - def detokenize(cls, task, tokens): - text = task.tgt_dict.string(tokens) - tkn_cfg = task.data_cfg.bpe_tokenizer - tokenizer = encoders.build_bpe(Namespace(**tkn_cfg)) - return text if tokenizer is None else tokenizer.decode(text) - - @classmethod - def get_prefix_token(cls, task, lang): - prefix_size = int(task.data_cfg.prepend_tgt_lang_tag) - prefix_tokens = None - if prefix_size > 0: - assert lang is not None - lang_tag = SpeechToTextDataset.get_lang_tag_idx(lang, task.tgt_dict) - prefix_tokens = torch.Tensor([lang_tag]).long().unsqueeze(0) - return prefix_tokens - - @classmethod - def get_prediction( - cls, task, model, generator, sample, tgt_lang=None, synthesize_speech=False - ) -> Union[str, Tuple[str, Tuple[torch.Tensor, int]]]: - _tgt_lang = tgt_lang or task.data_cfg.hub.get("tgt_lang", None) - prefix = cls.get_prefix_token(task, _tgt_lang) - pred_tokens = generator.generate([model], sample, prefix_tokens=prefix) - pred = cls.detokenize(task, pred_tokens[0][0]["tokens"]) - - if synthesize_speech: - pfx = f"{_tgt_lang}_" if task.data_cfg.prepend_tgt_lang_tag else "" - tts_model_id = task.data_cfg.hub.get(f"{pfx}tts_model_id", None) - if tts_model_id is None: - logger.warning("TTS model configuration not found") - else: - _repo, _id = tts_model_id.split(":") - tts_model = torch.hub.load(_repo, _id, verbose=False) - pred = (pred, tts_model.predict(pred)) - return pred - - def predict( - self, - audio: Union[str, torch.Tensor], - tgt_lang: Optional[str] = None, - synthesize_speech: bool = False, - ) -> Union[str, Tuple[str, Tuple[torch.Tensor, int]]]: - # `audio` is either a file 
path or a 1xT Tensor - # return either text or (text, synthetic speech) - sample = self.get_model_input(self.task, audio) - return self.get_prediction( - self.task, - self.model, - self.generator, - sample, - tgt_lang=tgt_lang, - synthesize_speech=synthesize_speech, - ) diff --git a/kosmos-g/fairseq/fairseq/models/speech_to_text/modules/augmented_memory_attention.py b/kosmos-g/fairseq/fairseq/models/speech_to_text/modules/augmented_memory_attention.py deleted file mode 100644 index e7465bc88..000000000 --- a/kosmos-g/fairseq/fairseq/models/speech_to_text/modules/augmented_memory_attention.py +++ /dev/null @@ -1,488 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Tuple, List - -import torch -import torch.nn.functional as F -from fairseq.models import FairseqEncoder -from fairseq.models.speech_to_text import ( - ConvTransformerEncoder, -) -from fairseq.models.speech_to_text.utils import attention_suppression -from fairseq.models.speech_to_text.utils import ( - lengths_to_encoder_padding_mask, - segments_to_sequence, - sequence_to_segments, -) -from fairseq.modules import MultiheadAttention, TransformerEncoderLayer -from torch import nn, Tensor - -# ------------------------------------------------------------------------------ -# AugmentedMemoryConvTransformerEncoder -# ------------------------------------------------------------------------------ - - -class AugmentedMemoryConvTransformerEncoder(ConvTransformerEncoder): - def __init__(self, args): - super().__init__(args) - - args.encoder_stride = self.stride() - - self.left_context = args.left_context // args.encoder_stride - - self.right_context = args.right_context // args.encoder_stride - - self.left_context_after_stride = args.left_context // args.encoder_stride - self.right_context_after_stride = args.right_context // args.encoder_stride - - self.transformer_layers = nn.ModuleList([]) - self.transformer_layers.extend( - [ - AugmentedMemoryTransformerEncoderLayer(args) - for i in range(args.encoder_layers) - ] - ) - - def stride(self): - # Hard coded here. Should infer from convs in future - stride = 4 - return stride - - def forward(self, src_tokens, src_lengths, states=None): - """Encode input sequence. - :param torch.Tensor xs: input tensor - :param torch.Tensor masks: input mask - :return: position embedded tensor and mask - :rtype Tuple[torch.Tensor, torch.Tensor]: - """ - bsz, max_seq_len, _ = src_tokens.size() - x = ( - src_tokens.view(bsz, max_seq_len, self.in_channels, self.input_dim) - .transpose(1, 2) - .contiguous() - ) - x = self.conv(x) - bsz, _, output_seq_len, _ = x.size() - x = x.transpose(1, 2).transpose(0, 1).contiguous().view(output_seq_len, bsz, -1) - x = self.out(x) - x = self.embed_scale * x - - subsampling_factor = 1.0 * max_seq_len / output_seq_len - input_lengths = torch.max( - (src_lengths.float() / subsampling_factor).ceil().long(), - x.size(0) * src_lengths.new_ones([src_lengths.size(0)]).long(), - ) - - encoder_padding_mask, _ = lengths_to_encoder_padding_mask( - input_lengths, batch_first=True - ) - - # TODO: fix positional embedding - positions = self.embed_positions(encoder_padding_mask).transpose(0, 1) - - x += positions - x = F.dropout(x, p=self.dropout, training=self.training) - - # State to store memory banks etc. 
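-        # One dict per transformer layer: "memory_banks" accumulates the
-        # squashed summary vectors emitted so far, and "encoder_states"
-        # caches the layer output with the left/right context stripped.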
- if states is None: - states = [ - {"memory_banks": None, "encoder_states": None} - for i in range(len(self.transformer_layers)) - ] - - for i, layer in enumerate(self.transformer_layers): - # x size: - # (self.left_size + self.segment_size + self.right_size) - # / self.stride, num_heads, dim - # TODO: Consider mask here - x = layer(x, states[i]) - states[i]["encoder_states"] = x[ - self.left_context_after_stride : -self.right_context_after_stride - ] - - lengths = ( - ( - ~encoder_padding_mask[ - :, self.left_context_after_stride : -self.right_context_after_stride - ] - ) - .sum(dim=1, keepdim=True) - .long() - ) - - return states[-1]["encoder_states"], lengths, states - - -# ------------------------------------------------------------------------------ -# AugmentedMemoryTransformerEncoderLayer -# ------------------------------------------------------------------------------ -class AugmentedMemoryTransformerEncoderLayer(TransformerEncoderLayer): - def __init__(self, args): - super().__init__(args) - - self.left_context = args.left_context // args.encoder_stride - self.right_context = args.right_context // args.encoder_stride - - def forward(self, x, state): - - length, batch_size, x_dim = x.size() - - residual = x - - if self.normalize_before: - x = self.self_attn_layer_norm(x) - - # init_state - if state.get("memory_banks", None) is None: - state["memory_banks"] = [] - - # TODO reseach new sum_query method - seg_start = self.left_context - seg_end = length - self.right_context - if seg_start < seg_end: - summarization_query = torch.mean(x[seg_start:seg_end], keepdim=True, dim=0) - else: - summarization_query = x.new_zeros(1, batch_size, x_dim) - - x = torch.cat([x, summarization_query], dim=0) - - x = self.self_attn(input_and_summary=x, state=state) - - x = self.dropout_module(x) - x = residual + x - - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - if not self.normalize_before: - x = self.final_layer_norm(x) - - return x - - def build_self_attention(self, embed_dim, args): - return AugmentedMemoryMultiheadAttention( - embed_dim=embed_dim, - num_heads=args.encoder_attention_heads, - dropout=args.attention_dropout, - self_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - tanh_on_mem=True, - max_memory_size=args.max_memory_size, - ) - - -# ------------------------------------------------------------------------------ -# AugmentedMemoryMultiheadAttention -# ------------------------------------------------------------------------------ -class AugmentedMemoryMultiheadAttention(MultiheadAttention): - """ - Augmented Memory Attention from - Streaming Transformer-based Acoustic Models - Using Self-attention with Augmented Memory - https://arxiv.org/abs/2005.08042 - """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - add_bias_kv=False, - add_zero_attn=False, - self_attention=False, - encoder_decoder_attention=False, - q_noise=0.0, - qn_block_size=8, - tanh_on_mem=False, - memory_dim=None, - std_scale=0.5, # 0.5 based on https://arxiv.org/abs/2005.09137 - max_memory_size=-1, - disable_mem_on_mem_attn=True, - ): - super().__init__( - embed_dim, - num_heads, - kdim, - vdim, - dropout, - bias, - add_bias_kv, - add_zero_attn, - self_attention, - 
encoder_decoder_attention, - q_noise, - qn_block_size, - ) - - self.memory_dim = memory_dim if memory_dim is not None else embed_dim - self.std_scale = std_scale - self.disable_mem_on_mem_attn = disable_mem_on_mem_attn - - # This Operator was used for factorization in PySpeech - self.v2e = lambda x: x - - if tanh_on_mem: - self.squash_mem = torch.tanh - self.nonlinear_squash_mem = True - else: - self.squash_mem = lambda x: x - self.nonlinear_squash_mem = False - - self.max_memory_size = max_memory_size - - def forward(self, input_and_summary, state): - """ - input: Encoder states of current segment with left or right context, - plus one summarization query - - """ - - length, batch_size, _ = input_and_summary.shape - length = length - 1 # not include sum_query, last index - - memory = state["memory_banks"] - # TODO: positional embedding on memory - - if self.max_memory_size > -1 and len(memory) > self.max_memory_size: - # TODO: need to fix here - if self.max_memory_size == 0: - memory = memory.new_zeros(1, memory.size(1), self.memory_dim) - else: - memory = memory[-self.max_memory_size :] - - memory_and_input = torch.cat(memory + [input_and_summary[:-1]], dim=0) - input_and_sum_query = input_and_summary - - q = self.q_proj(self.v2e(input_and_sum_query)) - k = self.k_proj(self.v2e(memory_and_input)) - v = self.v_proj(self.v2e(memory_and_input)) - - q = ( - q.contiguous() - .view(-1, batch_size * self.num_heads, self.head_dim) - .transpose(0, 1) - * self.scaling - ) - k = ( - k.contiguous() - .view(-1, batch_size * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - v = ( - v.contiguous() - .view(-1, batch_size * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - attention_weights = torch.bmm(q, k.transpose(1, 2)) - - if self.disable_mem_on_mem_attn: - attention_weights = self.suppress_mem_on_mem_attention( - batch_size, self.num_heads, len(memory), attention_weights - ) - - if self.std_scale is not None: - attention_weights = attention_suppression(attention_weights, self.std_scale) - - assert list(attention_weights.shape) == [ - batch_size * self.num_heads, - length + 1, - length + len(memory), - ] - - attention_weights = torch.nn.functional.softmax( - attention_weights.float(), dim=-1 - ).type_as(attention_weights) - - attention_probs = self.dropout_module(attention_weights) - - # [T, T, B, n_head] + [T, B, n_head, d_head] -> [T, B, n_head, d_head] - attention = torch.bmm(attention_probs, v) - - assert list(attention.shape) == [ - batch_size * self.num_heads, - length + 1, - self.head_dim, - ] - - attention = ( - attention.transpose(0, 1) - .contiguous() - .view(length + 1, batch_size, self.embed_dim) - ) - - output_and_memory = self.out_proj(attention) - - next_m = output_and_memory[-1:] - next_m = self.squash_mem(next_m) - output = output_and_memory[:-1] - - state["memory_banks"].append(next_m) - - return output - - def suppress_mem_on_mem_attention( - self, B: int, num_heads: int, mem_size: int, attention_weight: Tensor - ): - """ - Arguments: - - B: batch size - - num_heads: number of attention heads - - mem_size: size of memory bank - - attention_weight: a [B*num_heads, T + 1, T + mem_size] vector - - Return: - modified attention_weight with [B*num_heads, -1, :mem_size] = -inf - """ - attention_weight[:, -1, :mem_size] = float("-inf") - return attention_weight - - -# ------------------------------------------------------------------------------ -# SequenceEncoder -# ------------------------------------------------------------------------------ -class 
SequenceEncoder(FairseqEncoder):
-    """
-    SequenceEncoder encodes sequences.
-
-    More specifically, `src_tokens` and `src_lengths` in `forward()` should
-    describe a batch of "complete" sequences rather than segments.
-
-    Segment-by-segment inference can be triggered by `segment_size`:
-    1) `segment_size` is None:
-        SequenceEncoder treats the input sequence as one single segment.
-    2) `segment_size` is not None (some int instead):
-        SequenceEncoder does the following:
-            1. breaks the input sequence into several segments
-            2. runs inference on each segment and collects the outputs
-            3. concatenates the segment outputs into the output sequence.
-    Note that `segment_size` here shouldn't include the additional left/right
-    contexts needed; for example, to infer with an LC-BLSTM where the middle
-    chunk size is 100 and the right context is 20, `segment_size` should be
-    100.
-    """
-
-    def __init__(self, args, module):
-        super().__init__(None)
-
-        self.module = module
-        self.input_time_axis = 1
-        self.output_time_axis = 0
-        self.segment_size = args.segment_size
-        self.left_context = args.left_context
-        self.right_context = args.right_context
-
-    def forward(
-        self,
-        src_tokens: Tensor,
-        src_lengths: Tensor,
-        states=None,
-    ):
-
-        seg_src_tokens_lengths = sequence_to_segments(
-            sequence=src_tokens,
-            time_axis=self.input_time_axis,
-            lengths=src_lengths,
-            segment_size=self.segment_size,
-            extra_left_context=self.left_context,
-            extra_right_context=self.right_context,
-        )
-
-        seg_encoder_states_lengths: List[Tuple[Tensor, Tensor]] = []
-
-        for seg_src_tokens, seg_src_lengths in seg_src_tokens_lengths:
-            (seg_encoder_states, seg_enc_lengths, states) = self.module(
-                seg_src_tokens,
-                seg_src_lengths,
-                states=states,
-            )
-
-            seg_encoder_states_lengths.append((seg_encoder_states, seg_enc_lengths))
-
-        encoder_out, enc_lengths = segments_to_sequence(
-            segments=seg_encoder_states_lengths, time_axis=self.output_time_axis
-        )
-
-        encoder_padding_mask, _ = lengths_to_encoder_padding_mask(
-            enc_lengths, batch_first=True
-        )
-
-        if not encoder_padding_mask.any():
-            encoder_padding_mask = None
-
-        return {
-            "encoder_out": [encoder_out],
-            "encoder_padding_mask": [encoder_padding_mask],
-            "encoder_embedding": [],
-            "encoder_states": [states],
-            "src_tokens": [],
-            "src_lengths": [],
-        }
-
-    def incremental_encode(
-        self,
-        seg_src_tokens: Tensor,
-        seg_src_lengths: Tensor,
-        states=None,
-    ):
-        """
-        Unlike `forward()`, this function takes segmented speech as input and
-        appends the new encoder states to the previous states.
-        """
-        (seg_encoder_states, seg_enc_lengths, states) = self.module(
-            seg_src_tokens,
-            seg_src_lengths,
-            states=states,
-        )
-        return seg_encoder_states, seg_enc_lengths, states
-
-
-# ------------------------------------------------------------------------------
-# Augmented memory model decorator
-# ------------------------------------------------------------------------------
-def augmented_memory(klass):
-    class StreamSeq2SeqModel(klass):
-        @staticmethod
-        def add_args(parser):
-            super(StreamSeq2SeqModel, StreamSeq2SeqModel).add_args(parser)
-            parser.add_argument(
-                "--segment-size", type=int, required=True, help="Length of the segment."
-            )
-            parser.add_argument(
-                "--left-context",
-                type=int,
-                default=0,
-                help="Left context for the segment.",
-            )
-            parser.add_argument(
-                "--right-context",
-                type=int,
-                default=0,
-                help="Right context for the segment.",
-            )
-            parser.add_argument(
-                "--max-memory-size",
-                type=int,
-                default=-1,
-                help="Maximum size of the memory bank.",
-            )
-
-    StreamSeq2SeqModel.__name__ = klass.__name__
-    return StreamSeq2SeqModel
diff --git a/kosmos-g/fairseq/fairseq/models/speech_to_text/modules/emformer.py b/kosmos-g/fairseq/fairseq/models/speech_to_text/modules/emformer.py
deleted file mode 100644
index 70339788f..000000000
--- a/kosmos-g/fairseq/fairseq/models/speech_to_text/modules/emformer.py
+++ /dev/null
@@ -1,1844 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) 2017-present, Facebook, Inc.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the LICENSE file in
-# the root directory of this source tree. An additional grant of patent rights
-# can be found in the PATENTS file in the same directory.
-
-
-import math
-import re
-from functools import partial
-from typing import List, Optional, Tuple
-
-import torch
-import torch.nn as nn
-from torch import Tensor
-from torch import device as Device
-
-from fairseq.models import FairseqEncoder
-from fairseq.models.speech_to_text.utils import (
-    NoOp,
-    attention_suppression,
-    layer_norm_backward_hook,
-    lengths_to_padding_mask,
-    segments_to_sequence,
-)
-
-try:
-    import torch.ao.quantization as quantization
-    from torch.ao.quantization.qconfig import (
-        default_dynamic_qconfig,
-        per_channel_dynamic_qconfig,
-    )
-except ImportError:
-    import torch.quantization as quantization
-    from torch.quantization.qconfig import (
-        default_dynamic_qconfig,
-        per_channel_dynamic_qconfig,
-    )
-
-
-class RelativePositionEmbedding(nn.Module):
-    """
-    Implementation according to https://arxiv.org/abs/1803.02155
-    """
-
-    def __init__(self, head_dim, max_position, norm_init=True):
-        super().__init__()
-        self.head_dim = head_dim
-        self.max_position = max_position
-        self.embeddings = nn.Parameter(torch.Tensor(max_position * 2 + 1, head_dim))
-        if norm_init:
-            nn.init.xavier_normal_(self.embeddings)
-        else:
-            nn.init.xavier_uniform_(self.embeddings)
-
-    def forward(self, input: Tensor):
-        output = nn.functional.embedding(input.long(), self.embeddings)
-        return output
-
-
-class Fp32LayerNorm(nn.Module):
-    def __init__(
-        self,
-        input_dim,
-        clamp_grad=True,
-        max_grad_value=256,
-        eps=1e-5,
-        elementwise_affine=True,
-    ):
-        super().__init__()
-        self.torch_module = torch.nn.LayerNorm(
-            input_dim, eps=eps, elementwise_affine=elementwise_affine
-        )
-        if clamp_grad:
-            hook = partial(layer_norm_backward_hook, clamp_value=max_grad_value)
-            self.torch_module.register_backward_hook(hook)
-
-    def forward(self, input):
-        output = torch.nn.functional.layer_norm(
-            input.float(),
-            self.torch_module.normalized_shape,
-            self.torch_module.weight.float()
-            if self.torch_module.weight is not None
-            else None,
-            self.torch_module.bias.float()
-            if self.torch_module.bias is not None
-            else None,
-            self.torch_module.eps,
-        ).type_as(input)
-        return output
-
-
-# ------------------------------------------------------------------------------
-# PositionwiseFF
-# ------------------------------------------------------------------------------
-
-
-class PositionwiseFF(nn.Module):
-    """
-    FFN layer in transformer.
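-
-    (Editor's note, not in the original: this is a pre-norm residual FFN;
-    `forward` computes `module(layer_norm(input)) + input`, so an input of
-    shape [T, B, input_dim] is returned with the same shape.)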
- - Args: - input_dim: input embedding dimension - ffn_dim: FFN layer inner dimension - dropout_on_fc1: dropout for first linear layer - dropout_on_fc2: dropout fr second linear layer - activation_fn: activation function used after first linear layer. \ - Only relu or gelu is supported. - - """ - - def __init__( - self, input_dim, ffn_dim, dropout_on_fc1, dropout_on_fc2, activation_fn - ): - super(PositionwiseFF, self).__init__() - - self.input_dim = input_dim - self.ffn_dim = ffn_dim - if activation_fn == "relu": - ac = nn.ReLU() - elif activation_fn == "gelu": - ac = nn.GELU() - else: - raise ValueError("Unsupported activation_fn = ({})".format(activation_fn)) - - # fc1 -> ac -> dropout -> fc2 -> dropout - self.module = nn.Sequential( - nn.Linear(input_dim, ffn_dim), - ac, - nn.Dropout(dropout_on_fc1), - nn.Linear(ffn_dim, input_dim), - nn.Dropout(dropout_on_fc2), - ) - - self.layer_norm = Fp32LayerNorm(input_dim) - - def forward(self, input): - module_out = self.module(self.layer_norm(input)) - output = module_out + input - - return output - - def quantize_(self, params=None): - if params and "per_channel" in params and params["per_channel"]: - qconfig = per_channel_dynamic_qconfig - else: - qconfig = default_dynamic_qconfig - quantization.quantize_dynamic( - self, {torch.nn.Linear: qconfig}, dtype=torch.qint8, inplace=True - ) - return self - - -# ------------------------------------------------------------------------------ -# SummarizationLayer -# ------------------------------------------------------------------------------ - - -class SummarizationLayer(nn.Module): - def __init__(self, method, segment_size, embedding_dim): - super(SummarizationLayer, self).__init__() - self.segment_size = segment_size - self.embedding_dim = embedding_dim - nonlin_match = re.match(r"nonlinear\((?P<act>[a-z]+),(?P<dim>[0-9]+)\)", method) - self.method = method - if method == "mean": - self.module = nn.AvgPool1d( - kernel_size=segment_size, - stride=segment_size, - ceil_mode=True, - ) - elif method == "max": - self.module = nn.MaxPool1d( - kernel_size=segment_size, - stride=segment_size, - ceil_mode=True, - ) - elif method == "linear": - self.module = nn.Linear(segment_size, 1) - elif nonlin_match: - nonlin_args = nonlin_match.groupdict() - act_type = nonlin_args["act"] - hid_dim = int(nonlin_args["dim"]) - if act_type == "relu": - act = nn.ReLU() - elif act_type == "gelu": - act = nn.GELU() - else: - raise ValueError("Unsupported activation_fn = ({})".format(act_type)) - self.module = nn.Sequential( - nn.Linear(segment_size, hid_dim), - act, - nn.Linear(hid_dim, 1), - ) - else: - raise ValueError("Unsupported summarization method = ({})".format(method)) - - def forward(self, input): - # T, B, D -> B, D, T - input = input.permute(1, 2, 0) - - if self.method == "mean" or self.method == "max": - output = self.module(input) - output = output.permute(2, 0, 1) - return output - - full_seg_length = input.size(2) // self.segment_size * self.segment_size - if full_seg_length > 0: - # at least one seg is full - B = input.size(0) - D = input.size(1) - input_todo = ( - input[:, :, :full_seg_length] - .contiguous() - .view(B, -1, self.segment_size) - ) - output = self.module(input_todo) - output = output.view(B, D, -1) - else: - output = input.new_zeros(input.size(0), input.size(1), 0) - left = input.size(2) - full_seg_length - if left > 0: - # when last seg is not full, use zeros as last memory placeholder - zeros = input.new_zeros(input.size(0), input.size(1), 1) - output = torch.cat([output, zeros], dim=2) - 
output = output.permute(2, 0, 1)
-        return output
-
-
-# ------------------------------------------------------------------------------
-# NoSegAugmentedMemoryMultiheadAttentionBmm
-# ------------------------------------------------------------------------------
-
-
-class NoSegAugmentedMemoryMultiheadAttentionBmm(nn.Module):
-    """
-    Whole utterance augmented memory multihead attention using BMM.
-
-    Unlike the earlier augmented memory multihead attention, where the
-    utterance is chunked into segments, here an attention mask achieves the
-    same effect. The input embedding [right_context, utterance, summary]
-    is a concatenation of right context, utterance and summary.
-
-    The right context block is the concatenation of the right context for
-    each segment: [right_context_0, right_context_1, ..., right_context_n].
-    For example, given utterance = [v0, v1, v2, ..., v20], segment size 8,
-    and right_context size 4, the right context blocks are
-    [v8, v9, v10, v11, v16, v17, v18, v19, 0, 0, 0, 0], where v8, v9, v10,
-    and v11 are the right context for the first segment, v16, v17, v18 and
-    v19 are the right context for the second segment, and 0, 0, 0 and 0 are
-    the right context for the last segment.
-
-    utterance corresponds to the input embedding sequence.
-
-    summary is the concatenation of the per-segment averages: [summary_0,
-    summary_1, ...].
-
-    In augmented memory multihead attention, the query is [right_context,
-    utterance, summary] and the key is [memory, right_context, utterance].
-    Unlike AugmentedMemoryMultiheadAttentionBmm, the memory here is passed
-    from the previous attention layer. For the first attention layer, memory
-    is the average of each segment.
-
-    Memory is the concatenation of the memory from each segment in the
-    previous attention layer. For example, if the current layer is i, then
-    memory is [m_0, m_1, ..., m_n], where each m_k is the output from seg_k
-    in layer i-1.
-
-    args:
-        input_dim: input embedding dimension
-        num_heads: number of heads in multihead self-attention
-        dropout: attention dropout
-        std_scale: if std_scale is not None, weak attention suppression is
-            turned on. For std_scale = 0.5, all attention weights smaller than
-            mean + 0.5 * std will be suppressed.
-        scaled_init: whether to use scaled init for linear weights
-        tanh_on_mem: whether to use tanh on the memory output
-        use_mem: whether to use memory. When max_memory_size is 0, memory is
-            disabled.
-        layer_index: current self-attention layer index, used in depth
-            initialization
-        max_relative_position: max relative position used in relative position
-            embedding
-        rpe_old_option: kept for compatibility with the previous model, which
-            was trained with attention += attention + rpe. 
The correct equation - should be attention = attention + rpe - - """ - - def __init__( - self, - input_dim, - num_heads, - dropout=0.0, - std_scale=None, - scaled_init=False, - tanh_on_mem=False, - use_mem=True, - mini_batches=False, - negative_inf="-inf", - layer_index=-1, - max_relative_position=0, - rpe_old_option=True, - ): - if input_dim % num_heads: - raise ValueError( - "input_dim ({}) must be divisible by num_heads ({})".format( - input_dim, num_heads - ) - ) - - super().__init__() - - embed_dim = input_dim - self.e2h_kv = torch.nn.Linear(input_dim, 2 * input_dim, bias=True) - self.e2h_q = torch.nn.Linear(input_dim, input_dim, bias=True) - self.rpe_old_option = rpe_old_option - if max_relative_position > 0: - self.use_rpe = True - self.rpe_k = RelativePositionEmbedding( - head_dim=input_dim // num_heads, - max_position=max_relative_position, - ) - self.rpe_v = RelativePositionEmbedding( - head_dim=input_dim // num_heads, - max_position=max_relative_position, - ) - else: - self.use_rpe = False - self.rpe_k = None - self.rpe_v = None - if scaled_init: - if layer_index == -1: - gain = 1.0 / math.sqrt(2) - else: - # https://arxiv.org/abs/2005.09684 depthwise initialization - # stablize the training greatly. Use depthwise initialization to - # replace incremental loss. - gain = 1.0 / math.sqrt(layer_index + 1) - torch.nn.init.xavier_uniform_(self.e2h_kv.weight, gain=gain) - torch.nn.init.xavier_uniform_(self.e2h_q.weight, gain=gain) - - self.out_proj = torch.nn.Linear(embed_dim, embed_dim, bias=True) - - self.embed_dim = embed_dim - self.num_heads = num_heads - self.dropout = dropout - - self.head_dim = embed_dim // num_heads - self.scaling = self.head_dim ** -0.5 - - self.std_scale = std_scale - self.use_mem = use_mem - self.mini_batches = mini_batches - self.negative_inf = negative_inf - - if tanh_on_mem: - self.squash_mem = torch.tanh - self.nonlinear_squash_mem = True - else: - self.squash_mem = NoOp() - self.nonlinear_squash_mem = False - - def prepare_qkv( - self, - input: Tensor, - mems: Tensor, - lengths: Tensor, - summary_length: int, - lc_length: int, - ): - # T: right_context length + utterance_length + summary_length - T, B, D = input.shape - mem_length = mems.size(0) - utterance_length = torch.max(lengths) - - right_context_blocks_length = T - utterance_length - summary_length - rc_block = input[:right_context_blocks_length, :, :] - utterance_block = input[right_context_blocks_length : T - summary_length, :, :] - - if B == 1: - padding_mask = None - else: - klengths = lengths + mem_length + right_context_blocks_length + lc_length - padding_mask = lengths_to_padding_mask(lengths=klengths) - - mem_rc_input = torch.cat([mems, rc_block, utterance_block], dim=0) - - # In training lc_length = 0 - key_length = mem_rc_input.size(0) + lc_length - rc_input_sum = input - q = self.e2h_q(rc_input_sum) - kv = self.e2h_kv(mem_rc_input) - k, v = kv.chunk(chunks=2, dim=2) - result_qkv = (q, k, v) - input_shape = (T, B, D) - result_lengths_info = ( - mem_length, - utterance_length, - right_context_blocks_length, - key_length, - ) - if padding_mask is not None: - assert padding_mask.size(0) == B - assert padding_mask.size(1) == key_length - - return result_qkv, input_shape, result_lengths_info, padding_mask - - def prepare_attention_weights( - self, - q: Tensor, - new_k: Tensor, - new_v: Tensor, - input_shape: Tuple[int, int, int], - rpe: Optional[Tensor], - ) -> Tuple[Tensor, Tensor, Tensor]: - T, B, D = input_shape - q = ( - q.contiguous().view(-1, B * self.num_heads, 
self.head_dim).transpose(0, 1) - * self.scaling - ) - - k = ( - new_k.contiguous() - .view(-1, B * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - v = ( - new_v.contiguous() - .view(-1, B * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - attention_weights = torch.bmm(q, k.transpose(1, 2)) - if self.use_rpe and rpe is not None and self.rpe_v is not None: - r_k = self.rpe_k(rpe) - # [q, B*h, d] * [q, k, d] -> [B*h, q, k] - attention_weights_rpe = torch.matmul( - q.transpose(0, 1), r_k.transpose(1, 2) - ).transpose(0, 1) - attention_weights = attention_weights + attention_weights_rpe - attention_weights_float = attention_weights.float() - - return attention_weights, attention_weights_float, v - - def prepare_attention_output( - self, - attention_weights: Tensor, - attention_weights_float: Tensor, - v: Tensor, - input_shape: Tuple[int, int, int], - key_length: int, - padding_mask: Optional[Tensor], - rpe: Optional[Tensor], - ) -> Tensor: - T, B, D = input_shape - if padding_mask is not None: - attention_weights_float = attention_weights_float.view( - B, self.num_heads, T, key_length - ) - attention_weights_float = attention_weights_float.masked_fill( - padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool), float("-inf") - ) - attention_weights_float = attention_weights_float.view( - B * self.num_heads, T, key_length - ) - - if self.std_scale is not None: - attention_weights_float = attention_suppression( - attention_weights_float, self.std_scale - ) - - attention_weights_float = torch.nn.functional.softmax( - attention_weights_float, dim=-1 - ) - attention_weights = attention_weights_float.type_as(attention_weights) - - attention_probs = torch.nn.functional.dropout( - attention_weights, p=self.dropout, training=self.training - ) - - # [T, key_length, B, n_head]+ [key_length, B, n_head, d_head] - # -> [T, B, n_head, d_head] - attention = torch.bmm(attention_probs, v) - if self.use_rpe and rpe is not None and self.rpe_v is not None: - r_v = self.rpe_v(rpe) - attention_rpe = torch.matmul( - attention_probs.transpose(0, 1), r_v - ).transpose(0, 1) - - if self.rpe_old_option: - attention += attention + attention_rpe - else: - attention = attention + attention_rpe - - assert list(attention.shape) == [B * self.num_heads, T, self.head_dim] - - attention = attention.transpose(0, 1).contiguous().view(T, B, self.embed_dim) - - rc_output_memory = self.out_proj(attention) - return rc_output_memory - - @torch.jit.unused - def forward( - self, - input: Tensor, - lengths: Tensor, - mems: Tensor, - attention_mask: Tensor, - pre_mems: Optional[Tensor] = None, - left_context_key: Optional[Tensor] = None, - left_context_val: Optional[Tensor] = None, - rpe: Optional[Tensor] = None, - ) -> Tuple[Tensor, Tensor, Tensor, Tensor]: - """ - forward function for NoSegAugmentedMemoryMultiheadAttentionBmm in training. - - args: - input: formed in the following way - [right_context_0, right_contex_1, ..., seg_0, seg_1, - ..., summary_0, summary_1,..] - lengths: the length of query which is [seg_0, seg_1, ....] - mems: [mem_0, mem_1, ...]. - attention_mask: attention mask for query = [right_context, query, summary] - key = [mem, right_context, query]. This is only used for traing. 
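-
-        Example (editor's sketch, illustrative shapes only)::
-
-            >>> import torch
-            >>> B, D = 2, 16
-            >>> rc, utt, summ, mem = 8, 32, 4, 3
-            >>> query = torch.randn(rc + utt + summ, B, D)
-            >>> mems = torch.randn(mem, B, D)
-            >>> key = torch.cat([mems, query[: rc + utt]], dim=0)
-            >>> assert key.size(0) == mem + rc + utt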
- - """ - if self.use_mem: - mem_length = mems.size(0) - summary_length = mem_length + 1 - if pre_mems is not None: - mems = torch.cat([pre_mems, mems], dim=0) - else: - mem_length = 0 - summary_length = 0 - - # In training, lc_length = 0 - if left_context_key is not None: - lc_length = left_context_key.size(0) - else: - lc_length = 0 - results = self.prepare_qkv( - input=input, - mems=mems, - lengths=lengths, - summary_length=summary_length, - lc_length=lc_length, - ) - result_qkv, input_shape, result_lengths_info, padding_mask = results - q, k, v = result_qkv - ( - mem_length, - utterance_length, - right_context_blocks_length, - key_length, - ) = result_lengths_info - - if left_context_key is not None: - # add the cache key and value - new_k = torch.cat( - [ - k[: mem_length + right_context_blocks_length, :, :], - left_context_key, - k[-utterance_length:, :, :], - ], - dim=0, - ) - new_v = torch.cat( - [ - v[: mem_length + right_context_blocks_length, :, :], - left_context_val, - v[-utterance_length:, :, :], - ], - dim=0, - ) - next_k = new_k[mem_length + right_context_blocks_length :, :, :] - next_v = new_v[mem_length + right_context_blocks_length :, :, :] - else: - new_k = k - new_v = v - next_k = None - next_v = None - - attention_weights, attention_weights_float, v = self.prepare_attention_weights( - q=q, - new_k=new_k, - new_v=new_v, - input_shape=input_shape, - rpe=rpe, - ) - - # mask attention - attention_mask = attention_mask.unsqueeze(0) - attention_weights_float = attention_weights_float.masked_fill( - attention_mask, float(self.negative_inf) - ) - - rc_output_memory = self.prepare_attention_output( - attention_weights=attention_weights, - attention_weights_float=attention_weights_float, - v=v, - input_shape=input_shape, - key_length=key_length, - padding_mask=padding_mask, - rpe=rpe, - ) - - if self.use_mem: - # next_m length equals to summary length - 1 - # last memory is ignored - if self.mini_batches: - next_m = rc_output_memory[-summary_length:] - else: - next_m = rc_output_memory[-summary_length:-1] - - next_m = self.squash_mem(next_m) - # rc and output - rc_output = rc_output_memory[:-summary_length] - if not self.nonlinear_squash_mem: - next_m = torch.clamp(next_m, min=-10, max=10) - else: - next_m = mems - rc_output = rc_output_memory - - return rc_output, next_m, next_k, next_v - - @torch.jit.export - def forward_jit( - self, - input: Tensor, - lengths: Tensor, - mems: Tensor, - left_context_key: Tensor, - left_context_val: Tensor, - rpe: Optional[Tensor], - ) -> Tuple[Tensor, Tensor, Tensor, Tensor]: - """ - forward function for NoSegAugmentedMemoryMultiheadAttentionBmm in decoding. - - args: - input: formed in the following way - [right_context_0, right_contex_1, ..., seg_0, seg_1, - ..., summary_0, summary_1,..] - lengths: the length of query which is [seg_0, seg_1, ....] - mems: [mem_0, mem_1, ...]. - left_context_key: left_context for key part. This is only used for online - decoding. In training, this is empty tensor - left_context_val: left_context for value part. This is only used for online - decoding. 
In training, this is empty tensor - - """ - lc_length = left_context_key.size(0) - - # In decoding, summary_length = 1 or 0 - if self.use_mem: - summary_length = 1 - else: - summary_length = 0 - - results = self.prepare_qkv( - input=input, - mems=mems, - lengths=lengths, - summary_length=summary_length, - lc_length=lc_length, - ) - result_qkv, input_shape, result_lengths_info, padding_mask = results - q, k, v = result_qkv - ( - mem_length, - utterance_length, - right_context_blocks_length, - key_length, - ) = result_lengths_info - - # add the cache key and value - new_k = torch.cat( - [ - k[: mem_length + right_context_blocks_length, :, :], - left_context_key, - k[-utterance_length:, :, :], - ], - dim=0, - ) - new_v = torch.cat( - [ - v[: mem_length + right_context_blocks_length, :, :], - left_context_val, - v[-utterance_length:, :, :], - ], - dim=0, - ) - next_k = new_k[mem_length + right_context_blocks_length :, :, :] - next_v = new_v[mem_length + right_context_blocks_length :, :, :] - - attention_weights, attention_weights_float, v = self.prepare_attention_weights( - q=q, - new_k=new_k, - new_v=new_v, - input_shape=input_shape, - rpe=rpe, - ) - # In online decoding, we don't have attention mask. But we still need - # to disable the attention from summary query to memory - attention_weights_float[:, -1, :mem_length] = float(self.negative_inf) - rc_output_memory = self.prepare_attention_output( - attention_weights=attention_weights, - attention_weights_float=attention_weights_float, - v=v, - input_shape=input_shape, - key_length=key_length, - padding_mask=padding_mask, - rpe=rpe, - ) - - # In decoding, summary length is 1 - if self.use_mem: - next_m = rc_output_memory[-1:] - next_m = self.squash_mem(next_m) - # rc and output - rc_output = rc_output_memory[:-1] - if not self.nonlinear_squash_mem: - next_m = torch.clamp(next_m, min=-10, max=10) - else: - rc_output = rc_output_memory - # empty tensor as input mems - next_m = mems - - return rc_output, next_m, next_k, next_v - - def quantize_(self, params=None): - if params and "per_channel" in params and params["per_channel"]: - qconfig = per_channel_dynamic_qconfig - else: - qconfig = default_dynamic_qconfig - quantization.quantize_dynamic( - self, {torch.nn.Linear: qconfig}, dtype=torch.qint8, inplace=True - ) - return self - - -class NoSegAugmentedMemoryTransformer(nn.Module): - """ - Whole utterance augmented memory transformer. - - This is not pyspeech nn layer. It is used as a module in a master layer where - multiple transformers is used. 
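-
-    (Editor's note, not in the original: one such module per layer is stacked
-    by NoSegAugmentedMemoryTransformerEncoderLayer below, which owns the
-    shared attention mask and threads the memory banks and cached
-    left-context keys/values through the layers.)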
- """ - - def __init__( - self, - input_dim, - num_heads, - ffn_dim, - dropout_in_attn=0.0, - dropout_on_attn=None, - dropout_on_fc1=None, - dropout_on_fc2=None, - activation_fn="relu", - tanh_on_mem=False, - std_scale=None, - scaled_init=False, - segment_size=128, - use_mem=True, - mini_batches=False, - negative_inf="-inf", - layer_index=-1, - summarization_method="mean", - max_relative_position=0, - rpe_old_option=True, - ): - super(NoSegAugmentedMemoryTransformer, self).__init__() - - self.attention = NoSegAugmentedMemoryMultiheadAttentionBmm( - input_dim=input_dim, - num_heads=num_heads, - dropout=dropout_in_attn, - scaled_init=scaled_init, - tanh_on_mem=tanh_on_mem, - std_scale=std_scale, - use_mem=use_mem, - mini_batches=mini_batches, - negative_inf=negative_inf, - layer_index=layer_index, - max_relative_position=max_relative_position, - ) - self.dropout = nn.Dropout(dropout_on_attn) - self.pos_ff = PositionwiseFF( - input_dim=input_dim, - ffn_dim=ffn_dim, - dropout_on_fc1=dropout_on_fc1, - dropout_on_fc2=dropout_on_fc2, - activation_fn=activation_fn, - ) - self.layer_norm_pre = Fp32LayerNorm(input_dim) - self.layer_norm = Fp32LayerNorm(input_dim) - self.segment_size = segment_size - self.use_mem = use_mem - - self.memory_op = SummarizationLayer( - summarization_method, segment_size, input_dim - ) - - def set_mini_batches(self, mini_batches): - self.attention.mini_batches = mini_batches - - def gen_summary_queries(self, input): - sum_input = self.memory_op(input) - return sum_input - - def pre_attention_ops(self, input, right_context_blocks): - rc_length = right_context_blocks.size(0) - input_length = input.size(0) - - rc_and_input = torch.cat([right_context_blocks, input], dim=0) - residual_input = rc_and_input - rc_and_input = self.layer_norm_pre(rc_and_input) - - query_input = rc_and_input[-input_length:, :, :] - return rc_length, input_length, residual_input, query_input, rc_and_input - - def after_attention_ops(self, attention_output, residual_input): - output = self.dropout(attention_output) - output = output + residual_input - output = self.pos_ff(output) - output = self.layer_norm(output) - return output - - @torch.jit.export - def forward_jit( - self, - input: Tensor, - lengths: Tensor, - mems: Tensor, - left_context_key: Tensor, - left_context_val: Tensor, - right_context_blocks: Tensor, - rpe: Optional[Tensor], - ) -> Tuple[Tensor, Tensor, Tensor, Tensor, Tensor]: - - results = self.pre_attention_ops(input, right_context_blocks) - rc_length, input_length, residual_input, query_input, rc_and_input = results - - # In online decoding, the summary query size is always 1 or 0 - if self.use_mem: - summary_query = self.gen_summary_queries(query_input) - summary_query = summary_query[0:1, :, :] - rc_qu_su = torch.cat([rc_and_input, summary_query], dim=0) - else: - rc_qu_su = rc_and_input - - rc_output, next_m, next_k, next_v = self.attention.forward_jit( - input=rc_qu_su, - lengths=lengths, - mems=mems, - left_context_key=left_context_key, - left_context_val=left_context_val, - rpe=rpe, - ) - rc_output = self.after_attention_ops(rc_output, residual_input) - results = ( - rc_output[-input_length:, :, :], - next_m, - rc_output[0:rc_length, :, :], - next_k, - next_v, - ) - return results - - @torch.jit.unused - def forward( - self, - input, - lengths, - mems, - right_context_blocks, - attention_mask, - pre_mems, - left_context_key, - left_context_val, - rpe, - ): - - results = self.pre_attention_ops(input, right_context_blocks) - rc_length, input_length, residual_input, query_input, 
rc_and_input = results
-        if self.use_mem:
-            summary_query = self.gen_summary_queries(query_input)
-            rc_qu_su = torch.cat([rc_and_input, summary_query], dim=0)
-        else:
-            rc_qu_su = rc_and_input
-
-        rc_output, next_m, next_k, next_v = self.attention(
-            input=rc_qu_su,
-            lengths=lengths,
-            mems=mems,
-            attention_mask=attention_mask,
-            pre_mems=pre_mems,
-            left_context_key=left_context_key,
-            left_context_val=left_context_val,
-            rpe=rpe,
-        )
-
-        # [TODO] Note memory did not go through pos_ff. What happens if we
-        # pass memory through the pos_ff as well?
-        rc_output = self.after_attention_ops(rc_output, residual_input)
-        results = (
-            rc_output[-input_length:, :, :],
-            next_m,
-            rc_output[0:rc_length, :, :],
-            next_k,
-            next_v,
-        )
-
-        return results
-
-
-class NoSegAugmentedMemoryTransformerEncoderLayer(FairseqEncoder):
-    """
-    Whole utterance augmented memory transformer encoder layer. This is a master layer
-    where we can define multiple augmented memory transformers. There are two reasons
-    to set up the master layer:
-    1. The attention mask only needs to be defined once; all the layers in the
-       master layer share the same mask.
-    2. The pyspeech nn layer has a special input and output format, and a single
-       master layer makes it easier to pass memory between the different layers
-       inside it.
-
-    args:
-        input_dim: input embedding dimension
-        num_heads: number of heads in multihead self-attention
-        ffn_dim: ffn dimension in FFN layer
-        num_layers: number of augmented memory transformer layers
-        dropout_in_attn: dropout used in multi-head self-attention
-        dropout_on_attn: dropout used on the output of the multihead self-attention
-        dropout_on_fc1: dropout used in FFN layer for the first linear layer
-        dropout_on_fc2: dropout used in FFN layer for the second linear layer
-        segment_size: segment size for each segment
-        context_config: (left_context_size, right_context_size) defines the
-            surrounding context size for each segment
-        max_memory_size: maximum memory size used for each segment
-        scaled_init: whether to use scaled init for weight initialization in the
-            attention layer
-        std_scale: if std_scale is not None, weak attention suppression is
-            turned on. For std_scale = 0.5, all attention weights smaller than
-            mean + 0.5 * std will be suppressed.
-        activation_fn: activation function used in FFN layer. [ReLU, GELU] supported
-        tanh_on_mem: whether to use tanh on memory
-        mini_batches: use mini-batch training
-        negative_inf: the negative infinity value used in attention masking.
-            Default is "-inf". For some situations, e.g. LM, it is better to use
-            "-1e8" to avoid NaN issues.
-        summarization_method: method to generate the segment summarization embedding
-        max_relative_position: max relative position for relative position embedding
-        rpe_old_option: kept for compatibility with the previous model, which was
-            trained with attention += attention + rpe. The correct equation
-            should be attention = attention + rpe
-        [TODO]: remove the rpe_old_option by the end of 2021 Q1.
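-
-    Example (editor's sketch; argument values are illustrative only)::
-
-        layer = NoSegAugmentedMemoryTransformerEncoderLayer(
-            input_dim=256,
-            num_heads=4,
-            ffn_dim=1024,
-            num_layers=2,
-            segment_size=128,
-            context_config=(32, 32),  # (left context, right context)
-            max_memory_size=4,  # > 0 enables the memory banks
-        )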
- - """ - - def __init__( - self, - input_dim, - num_heads, - ffn_dim, - num_layers=1, - dropout_in_attn=0.0, - dropout_on_attn=0.0, - dropout_on_fc1=0.0, - dropout_on_fc2=0.0, - segment_size=128, - context_config=(0, 0), - max_memory_size=0, - scaled_init=True, - std_scale=None, - activation_fn="relu", - tanh_on_mem=False, - mini_batches=False, - negative_inf="-inf", - deep_init=True, - summarization_method="mean", - max_relative_position=0, - rpe_old_option=True, - ): - super().__init__(None) - if input_dim % num_heads: - raise ValueError( - "input_dim ({}) must be divisible by num_heads ({})".format( - input_dim, num_heads - ) - ) - - # we used to support growing memory size. However, it will cause - # cross stream batching failure. Now we need to have exact max memory size - if max_memory_size < 0: - raise ValueError("max_memory_size must be >= 0") - - # Only assign right_context. In decoding, left context will be cached. - # No need to let the online decoder to re-assign the left context - self.left_context, self.right_context = context_config - self.segment_size = segment_size - self.memory_dim = input_dim - self.max_memory_size = max_memory_size - self.mini_batches = mini_batches - if self.max_memory_size != 0: - self.use_mem = True - else: - self.use_mem = False - - self.memory_op = SummarizationLayer( - summarization_method, segment_size, input_dim - ) - - self.layers = torch.nn.ModuleList() - self.num_layers = num_layers - self.max_relative_position = max_relative_position - if self.max_relative_position > 0: - self.use_rpe = True - else: - self.use_rpe = False - for i in range(self.num_layers): - if deep_init: - layer_index = i - else: - layer_index = -1 - - self.layers.append( - NoSegAugmentedMemoryTransformer( - num_heads=num_heads, - input_dim=input_dim, - ffn_dim=ffn_dim, - dropout_in_attn=dropout_in_attn, - dropout_on_attn=dropout_on_attn, - dropout_on_fc1=dropout_on_fc1, - dropout_on_fc2=dropout_on_fc2, - segment_size=segment_size, - std_scale=std_scale, - activation_fn=activation_fn, - tanh_on_mem=tanh_on_mem, - scaled_init=scaled_init, - use_mem=self.use_mem, - mini_batches=mini_batches, - negative_inf=negative_inf, - layer_index=layer_index, - summarization_method=summarization_method, - max_relative_position=max_relative_position, - rpe_old_option=rpe_old_option, - ) - ) - - def set_mini_batches(self, mini_batches): - # handy function only used for unit test - self.mini_batches = mini_batches - for layer in self.layers: - layer.set_mini_batches(mini_batches) - - def _get_relative_position( - self, - input: Tensor, - max_relative_position: int, - left_context_length: int, - past_length: int, - is_decoding: bool, - ): - # For training, we copy the right context to the start of the utterance - # First dimension in distance is corresponding to query. - # [right context, utterance, summary vector] - # Second dimension in distance is corresponding to key. - # [Memory bank, right context, utterance] - # For summary vector in query part, the distance with - # all other position is 2*max_position. For memory bank in key, - # the distance with all other positions is 0. - - T, B, D = input.shape - num_segs = math.ceil((T - self.right_context) / self.segment_size) - - # utterance - u_st = past_length * self.segment_size - u_ed = u_st + T - utterance_ranges = torch.arange(u_st, u_ed - self.right_context) - - # left context. 
Only in minibatch or decoding - left_context_ranges = torch.arange(u_st - left_context_length, u_st) - - # Right context block - # right context + utterance - right_context_blocks = [] - for i in range(0, num_segs - 1): - st = (i + 1) * self.segment_size + u_st - ed = st + self.right_context - assert ed < u_ed - temp = torch.arange(st, ed) - right_context_blocks.append(temp) - right_context_blocks.append(torch.arange(u_ed - self.right_context, u_ed)) - right_context_ranges = torch.cat(right_context_blocks) - - if self.use_mem: - # Memory bank - # The position for memory -n, .., -1 - if is_decoding: - memory_size = min(past_length, self.max_memory_size) - else: - memory_size = num_segs + past_length - 1 - memory_bank_ranges = torch.arange( - -max_relative_position - 1, -max_relative_position - 1 - memory_size, -1 - ) - - # summary vector - # The position for summary vector as the T+max_relative_position+1. - # After the clamping, the relative position is max_relative_position - summary_pos_st = u_ed + max_relative_position + 1 - summary_vector_ranges = torch.arange( - summary_pos_st, summary_pos_st + num_segs - ) - - key_ranges = torch.cat( - [ - memory_bank_ranges, - right_context_ranges, - left_context_ranges, - utterance_ranges, - ] - ) - - query_ranges = torch.cat( - [right_context_ranges, utterance_ranges, summary_vector_ranges] - ) - else: - key_ranges = torch.cat( - [right_context_ranges, left_context_ranges, utterance_ranges] - ) - - query_ranges = torch.cat([right_context_ranges, utterance_ranges]) - - distance = key_ranges[None, :] - query_ranges[:, None] - distance_clamp = ( - torch.clamp(distance, -max_relative_position, max_relative_position) - + max_relative_position - ) - distance_clamp = distance_clamp.to(input.device).long().detach() - return distance_clamp - - def _get_attention_mask(self, input, past_length=0, left_context_cache=0): - # attention mask for each query contains three parts: - # 1. memory part - # 2. left_context + segment - # 3. right_context_block - # so for each segment and its correspoinding right context block, - # the attention matrix is formed by 9 parts: - # [0, m, 0, 0, right_context, 0, 0, seg, 0] - # [before memory, memory, after memory, before right context, right_context, - # after right context, before seg, seg, after seg] - # - # Query is formed in the way as [right_context_blocks, utterance, summary] - # - # Note: put m and right_context before segment is convenient - # for padding_mask operation. - # Key lengths = m_length + right_context_block_length + lengths - utterance_length, batch_size, _ = input.shape - summary_length = math.ceil(utterance_length / self.segment_size) - num_segs = summary_length - rc_length = self.right_context * num_segs - rc = self.right_context - lc = self.left_context - - # using mini-batches, there is left context cache available for current - # sequence. 
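-        # (Editor's sketch, illustrative only.) With segment_size=2,
-        # right_context=1, utterance_length=4, no memory and enough left
-        # context, num_segs = 2 and the query rows are [rc_0, rc_1, seg_0,
-        # seg_1]; the rows of segment 0 may attend to key columns
-        # [rc_0, v0, v1] and the rows of segment 1 to [rc_1, v0, v1, v2, v3].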
- lcc = left_context_cache - - # max_memory_size is 0 then we don't have memory and summary - # past_length is the memory carry from previous sequence - if self.use_mem: - mem_length = num_segs - 1 + past_length - else: - mem_length = 0 - rc_mask = [] - query_mask = [] - summary_mask = [] - for j in range(0, num_segs): - ssize = min(self.segment_size, utterance_length - j * self.segment_size) - - rc_size = rc - rc_mat = [] - q_mat = [] - s_mat = [] - m_start = max(j + past_length - self.max_memory_size, 0) - - # max_memory_size is 0, then we don't use memory - if self.use_mem: - # part 0: before memory - rc_mat.append(input.new_zeros(rc_size, m_start)) - q_mat.append(input.new_zeros(ssize, m_start)) - s_mat.append(input.new_zeros(1, m_start)) - - # part 1: memory - col_1 = j + past_length - m_start - rc_mat.append(torch.ones(rc_size, col_1, device=input.device)) - q_mat.append(torch.ones(ssize, col_1, device=input.device)) - # based on D22875746, disable summary query attention - # on memeory is better for long form utterance - s_mat.append(input.new_zeros(1, col_1)) - - # part 2: after memory - col_2 = mem_length - (j + past_length) - rc_mat.append(input.new_zeros(rc_size, col_2)) - q_mat.append(input.new_zeros(ssize, col_2)) - s_mat.append(input.new_zeros(1, col_2)) - - # part 3: before right context - rc_start = j * rc - rc_mat.append(input.new_zeros(rc_size, rc_start)) - q_mat.append(input.new_zeros(ssize, rc_start)) - s_mat.append(input.new_zeros(1, rc_start)) - - # part 4: right context - rc_end = rc_start + rc - col_4 = rc - rc_mat.append(torch.ones(rc_size, col_4, device=input.device)) - q_mat.append(torch.ones(ssize, col_4, device=input.device)) - s_mat.append(torch.ones(1, col_4, device=input.device)) - - # part 5: after right context - col_5 = rc_length - rc_end - rc_mat.append(input.new_zeros(rc_size, col_5)) - q_mat.append(input.new_zeros(ssize, col_5)) - s_mat.append(input.new_zeros(1, col_5)) - - # part 6: before query segment - seg_start = max(j * self.segment_size + lcc - lc, 0) - rc_mat.append(input.new_zeros(rc_size, seg_start)) - q_mat.append(input.new_zeros(ssize, seg_start)) - s_mat.append(input.new_zeros(1, seg_start)) - - # part 7: query segment - # note: right context is put in right context block - # here we only need to consider about left context - seg_end = min((j + 1) * self.segment_size + lcc, utterance_length + lcc) - col_7 = seg_end - seg_start - rc_mat.append(torch.ones(rc_size, col_7, device=input.device)) - q_mat.append(torch.ones(ssize, col_7, device=input.device)) - s_mat.append(torch.ones(1, col_7, device=input.device)) - - # part 8: after query segment - col_8 = utterance_length + lcc - seg_end - rc_mat.append(input.new_zeros(rc_size, col_8)) - q_mat.append(input.new_zeros(ssize, col_8)) - s_mat.append(input.new_zeros(1, col_8)) - - rc_mask.append(torch.cat(rc_mat, dim=1)) - query_mask.append(torch.cat(q_mat, dim=1)) - summary_mask.append(torch.cat(s_mat, dim=1)) - - # no memory, then we don't need summary either - if self.use_mem: - attention_mask = ( - 1 - - torch.cat( - [ - torch.cat(rc_mask, dim=0), - torch.cat(query_mask, dim=0), - torch.cat(summary_mask, dim=0), - ], - dim=0, - ) - ).to(torch.bool) - else: - attention_mask = ( - 1 - - torch.cat( - [torch.cat(rc_mask, dim=0), torch.cat(query_mask, dim=0)], dim=0 - ) - ).to(torch.bool) - - return attention_mask - - @torch.jit.export - def init_state( - self, batch_size: int, device: Optional[Device] = None - ) -> List[Tensor]: - empty_memory = torch.zeros( - self.num_layers, - 
self.max_memory_size, - batch_size, - self.memory_dim, - device=device, - ) - left_context_key = torch.zeros( - self.num_layers, - self.left_context, - batch_size, - self.memory_dim, - device=device, - ) - left_context_val = torch.zeros( - self.num_layers, - self.left_context, - batch_size, - self.memory_dim, - device=device, - ) - past_length = torch.zeros(1, batch_size, dtype=torch.int32, device=device) - - return [empty_memory, left_context_key, left_context_val, past_length] - - @torch.jit.export - def batch_state(self, states: List[List[Tensor]]) -> List[Tensor]: - if len(states) == 0: - return [] - batched_m = [] - batched_lc_key = [] - batched_lc_val = [] - batched_past_length = [] - for state in states: - if len(state) == 0: - continue - m, lc_key, lc_val, past_length = state - batched_m.append(m) - batched_lc_key.append(lc_key) - batched_lc_val.append(lc_val) - batched_past_length.append(past_length) - - if ( - (len(batched_m) == 0) - or (len(batched_lc_key) == 0) - or (len(batched_lc_val) == 0) - or (len(batched_past_length) == 0) - ): - return [ - torch.tensor([]), - torch.tensor([]), - torch.tensor([]), - torch.tensor([]), - ] - - batched_m = torch.cat(batched_m, dim=2) - batched_lc_key = torch.cat(batched_lc_key, dim=2) - batched_lc_val = torch.cat(batched_lc_val, dim=2) - batched_past_length = torch.cat(batched_past_length, dim=1) - return [batched_m, batched_lc_key, batched_lc_val, batched_past_length] - - @torch.jit.export - def reorder_state(self, state: List[Tensor], indices: Tensor) -> List[Tensor]: - if len(state) == 0: - return [] - m, lc_key, lc_val, past_length = state - indices = indices.to(device=m.device) - reord_m = torch.index_select(m, 2, indices) - reord_lc_key = torch.index_select(lc_key, 2, indices) - reord_lc_val = torch.index_select(lc_val, 2, indices) - reord_past_length = torch.index_select(past_length, 1, indices) - return [reord_m, reord_lc_key, reord_lc_val, reord_past_length] - - @torch.jit.export - def reset_state(self, state: List[Tensor], indices: Tensor) -> List[Tensor]: - m, lc_key, lc_val, past_length = state - m = m.index_fill(dim=2, index=indices, value=0.0) - lc_key = lc_key.index_fill(dim=2, index=indices, value=0.0) - lc_val = lc_val.index_fill(dim=2, index=indices, value=0.0) - past_length = past_length.index_fill(dim=1, index=indices, value=0) - - return [m, lc_key, lc_val, past_length] - - @torch.jit.export - def state_size(self) -> int: - return 4 - - @torch.jit.export - def batch_size_in_state( - self, state: Optional[List[Tensor]], sloppy: bool = True - ) -> Optional[int]: - if state is None: - return None - return state[0].size(2) - - def gen_summary_queries(self, input): - sum_input = self.memory_op(input) - return sum_input - - def _gen_right_context_padded_input(self, input): - # This function deals with input that is already - # padded with right context (e.g. 
minibatch training) - right_context_blocks = [] - T, B, D = input.shape - num_segs = math.ceil((T - self.right_context) / self.segment_size) - for i in range(0, num_segs - 1): - st = (i + 1) * self.segment_size - ed = st + self.right_context - assert ed < T - temp = input[st:ed, :, :] - right_context_blocks.append(temp) - - # last segment right context is already available - right_context_blocks.append(input[T - self.right_context :, :, :]) - return torch.cat(right_context_blocks, dim=0) - - def _gen_segs_right_context(self, input, lengths): - segments = [] - T, B, D = input.size() - nT = T - self.right_context - - # assume input is right context padded - num_segs = math.ceil(nT / self.segment_size) - # pad zeros to the utterance to make sure each - # segment has the same right context. For the - for i in range(0, num_segs - 1): - st = i * self.segment_size - ed = min(T, st + self.segment_size + self.right_context) - temp = input[st:ed, :, :] - rest_lengths = torch.clamp( - lengths - self.segment_size, min=0, max=nT - (i + 1) * self.segment_size - ) - segments.append((temp, lengths - rest_lengths + self.right_context)) - lengths = rest_lengths - - last_seg = input[st + self.segment_size :, :, :] - segments.append((last_seg, rest_lengths + self.right_context)) - - return segments - - @torch.jit.unused - def forward( - self, input: Tensor, padding_masks: Tensor, state: Optional[List[Tensor]] = None - ) -> Tuple[Tensor, Tensor, List[Tensor], List[Tensor]]: - # Xutai: originally the second argument is lengths. - lengths = (~padding_masks).sum(dim=1).long() - # mini batch training. - if self.mini_batches: - return self.forward_mini_batches(input, lengths, state) - - # regular full sequence training. Note, assume the right context in provided - # in the input. - T, B, D = input.size() - right_context_blocks = self._gen_right_context_padded_input(input) - - # generate the relative positional embedding - if self.use_rpe: - rpe = self._get_relative_position( - input=input, - max_relative_position=self.max_relative_position, - left_context_length=0, - past_length=0, - is_decoding=False, - ) - else: - rpe = None - input = input[: T - self.right_context, :, :] - - attention_mask = self._get_attention_mask(input) - - # firt layer use each segment mean as memory - # ignore the last one seg average - if self.use_mem: - mems = self.gen_summary_queries(input)[:-1, :, :] - else: - mems = torch.zeros(0, input.size(1), input.size(2), device=input.device) - mems = mems.type_as(input) - - output = input - all_outputs = [] - - for layer in self.layers: - output, mems, right_context_blocks, _, _ = layer( - input=output, - lengths=lengths, - attention_mask=attention_mask, - mems=mems, - right_context_blocks=right_context_blocks, - pre_mems=None, - left_context_key=None, - left_context_val=None, - rpe=rpe, - ) - all_outputs.append(output) - return output, padding_masks, [], all_outputs - - def forward_jit_mini_batch_init( - self, - seg: Tensor, - state: Optional[List[Tensor]] = None, - is_decoding: bool = False, - ): - # Prepare state. In whole sequence training, state is ignored. - # For minibatch training, we need to prepare state - if state is None: - state = self.init_state(batch_size=seg.size(1), device=seg.device) - if seg.dtype == torch.half: - state = [state[0].half(), state[1].half(), state[2].half(), state[3]] - - if self.use_mem: - # note input average only on seg, not on right context - # first layer use each segmetn mean as memory. 
the last
-            # one segment average is used in state
-            full_mems = self.gen_summary_queries(seg)
-            if is_decoding:
-                mems = full_mems[0:1, :, :]
-                state_mems = torch.cat([state[0][0], mems], dim=0)
-            else:
-                mems = full_mems[:-1, :, :]
-                state_mems = torch.cat([state[0][0], full_mems], dim=0)
-        else:
-            mems = state[0][0]
-            state_mems = mems
-
-        # track the processed segment number or memory number;
-        # sequences in the same batch share the same past length
-        past_length = state[3][0][0].item()
-        past_left_context = min(past_length * self.segment_size, self.left_context)
-        past_length = min(self.max_memory_size, past_length)
-
-        return state, mems, state_mems, past_length, past_left_context
-
-    def state_update_before(
-        self, layer: int, state: List[Tensor], past_length: int, past_left_context: int
-    ):
-        pre_mems = state[0][layer][self.max_memory_size - past_length :, :, :]
-        lc_key = state[1][layer][self.left_context - past_left_context :, :, :]
-        lc_val = state[2][layer][self.left_context - past_left_context :, :, :]
-        return pre_mems, lc_key, lc_val
-
-    def state_update_after(
-        self,
-        layer: int,
-        state: List[Tensor],
-        mems: Tensor,
-        next_key: Tensor,
-        next_val: Tensor,
-        mems_list: List[Tensor],
-        lc_key_list: List[Tensor],
-        lc_val_list: List[Tensor],
-    ):
-        # mems is used for the next layer
-        if layer < self.num_layers - 1:
-            state_mems = torch.cat([state[0][layer + 1], mems], dim=0)
-            mems_list.append(state_mems[-self.max_memory_size :, :, :])
-
-        # when mems are passed to the next sequence, we need the last memory;
-        # when mems are used for the next layer, we can ignore the last memory
-        mems = mems[:-1, :, :]
-
-        # note: the original length of state[1][i] and state[2][i] equals self.left_context
-        new_k = torch.cat([state[1][layer], next_key], dim=0)
-        new_v = torch.cat([state[2][layer], next_val], dim=0)
-        lc_key_list.append(new_k[-self.left_context :, :, :])
-        lc_val_list.append(new_v[-self.left_context :, :, :])
-        return mems_list, lc_key_list, lc_val_list, mems
-
-    def state_update_after_loop(
-        self,
-        state: List[Tensor],
-        mems_list: List[Tensor],
-        lc_key_list: List[Tensor],
-        lc_val_list: List[Tensor],
-        update_length: int,
-    ):
-        state[0] = torch.stack(mems_list, dim=0)
-        state[1] = torch.stack(lc_key_list, dim=0)
-        state[2] = torch.stack(lc_val_list, dim=0)
-        state[3] = state[3] + update_length
-        return state
-
-    @torch.jit.unused
-    def forward_mini_batches(
-        self, input: Tensor, lengths: Tensor, state: Optional[List[Tensor]] = None
-    ) -> Tuple[Tensor, Tensor, List[Tensor], List[Tensor]]:
-        T, B, D = input.size()
-
-        # input without right context
-        seg = input[: T - self.right_context, :, :]
-
-        # get right context blocks
-        right_context_blocks = self._gen_right_context_padded_input(input)
-
-        mems_list = []
-        lc_key_list = []
-        lc_val_list = []
-        results = self.forward_jit_mini_batch_init(seg, state, False)
-        state, mems, state_mems, past_length, past_left_context = results
-
-        # relative position embedding
-        if self.use_rpe:
-            rpe = self._get_relative_position(
-                input=input,
-                max_relative_position=self.max_relative_position,
-                left_context_length=past_left_context,
-                past_length=past_length,
-                is_decoding=False,
-            )
-        else:
-            rpe = None
-
-        # get the attention mask based on seg (which excludes the right context)
-        # and the available left context
-        attention_mask = self._get_attention_mask(seg, past_length, past_left_context)
-        mems_list.append(state_mems[-self.max_memory_size :, :, :])
-        output = seg
-        i = 0
-        all_outputs = []
-        for layer in self.layers:
-            # In order to make cross stream batching work, the memory, left context key
-            # and left context value in the state should always have the same shape.
-            # We use the past length to track the number of processed segments. In this
-            # way, we take out the essential memory, left context key and left
-            # context value from the state. After finishing the forward pass for the
-            # current segment, we add the new memory, left context key and left
-            # context value into the state and trim out the oldest part to keep the
-            # shape consistent.
-            pre_mems, lc_key, lc_val = self.state_update_before(
-                i, state, past_length, past_left_context
-            )
-
-            output, mems, right_context_blocks, next_key, next_val = layer.forward(
-                input=output,
-                lengths=lengths,
-                attention_mask=attention_mask,
-                mems=mems,
-                right_context_blocks=right_context_blocks,
-                pre_mems=pre_mems,
-                left_context_key=lc_key,
-                left_context_val=lc_val,
-                rpe=rpe,
-            )
-            all_outputs.append(output)
-            mems_list, lc_key_list, lc_val_list, mems = self.state_update_after(
-                layer=i,
-                state=state,
-                mems=mems,
-                next_key=next_key,
-                next_val=next_val,
-                mems_list=mems_list,
-                lc_key_list=lc_key_list,
-                lc_val_list=lc_val_list,
-            )
-
-            i += 1
-
-        # update state
-        update_length = math.ceil((T - self.right_context) / self.segment_size)
-        state = self.state_update_after_loop(
-            state=state,
-            mems_list=mems_list,
-            lc_key_list=lc_key_list,
-            lc_val_list=lc_val_list,
-            update_length=update_length,
-        )
-
-        return output, lengths, state, all_outputs
-
-    def forward_jit_test(
-        self, input: Tensor, lengths: Tensor, state: Optional[List[Tensor]] = None
-    ) -> Tuple[Tensor, Tensor, List[Tensor]]:
-        """
-        This simulates the sequence encoder forward pass under TorchScript. It is
-        for unit-test purposes only and is not used in training or decoding. Note
-        that extra_right_context is set in the model. In the unit test,
-        input = [utterance, right_context] and lengths = [utterance_length].
-        args:
-            input: input utterance
-            lengths: utterance input length
-            state: None here, since the input is the whole utterance
-        """
-        # [TODO] sequence_to_segment has a bug in lengths.
-        seg_src_tokens_lengths = self._gen_segs_right_context(input, lengths)
-
-        seg_enc_tokens_lengths: List[Tuple[Tensor, Tensor]] = []
-        state: Optional[List[Tensor]] = None
-        for seg_src_tokens, seg_src_lengths in seg_src_tokens_lengths:
-            seg_enc_tokens, seg_enc_lengths, state = self.forward_jit(
-                input=seg_src_tokens, lengths=seg_src_lengths, state=state
-            )
-            seg_enc_tokens_lengths.append((seg_enc_tokens, seg_enc_lengths))
-
-        enc_tokens, enc_lengths = segments_to_sequence(
-            segments=seg_enc_tokens_lengths, time_axis=0
-        )
-
-        state = []  # returns trivial state
-
-        return enc_tokens, enc_lengths, state
-
-    @torch.jit.export
-    def forward_jit(
-        self, input: Tensor, lengths: Tensor, state: Optional[List[Tensor]] = None
-    ) -> Tuple[Tensor, Tensor, List[Tensor]]:
-        """
-        Forward helper for online decoding.
-
-        args:
-            input: [seg, right_context]. We assume that in online decoding the
-                right context is always padded to the preset right context size.
-                The last segment may be shorter than the segment size, but its
-                right context size is the same as for the other segments.
-            lengths: utterance input length, i.e. the utterance segment length
-                plus the right context size
-            state: [memory, left_context_key, left_context_val]. To improve
-                throughput, in addition to the memory, we also cache the key and
-                value for the left context in multihead self-attention
-        """
-        # In online decoding, input = [segment, right_context]
-        # Lengths = [segment_length, right_context_length]
-        # so we need to strip the right context from the output
-        T, B, D = input.size()
-        rc_str = T - self.right_context
-        rc_end = T
-        right_context_blocks = input[rc_str:rc_end, :, :]
-        seg = input[:rc_str, :, :]
-        lengths = torch.clamp(lengths - self.right_context, min=0)
-        mems_list = []
-        lc_key_list = []
-        lc_val_list = []
-
-        results = self.forward_jit_mini_batch_init(seg, state, True)
-        state, mems, state_mems, past_length, past_left_context = results
-
-        # relative position embedding
-        if self.use_rpe:
-            rpe = self._get_relative_position(
-                input=input,
-                max_relative_position=self.max_relative_position,
-                left_context_length=past_left_context,
-                past_length=past_length,
-                is_decoding=True,
-            )
-        else:
-            rpe = None
-
-        # memory for the first layer.
-        mems_list.append(state_mems[-self.max_memory_size :, :, :])
-        output = seg
-        i = 0
-        for layer in self.layers:
-            # In order to make cross stream batching work, the memory, left context key
-            # and left context value in the state should always have the same shape.
-            # We use the past length to track the number of processed segments. In this
-            # way, we take out the essential memory, left context key and left
-            # context value from the state. After finishing the forward pass for the
-            # current segment, we add the new memory, left context key and left
-            # context value into the state and trim out the oldest part to keep the
-            # shape consistent.
-            true_mems, lc_key, lc_val = self.state_update_before(
-                layer=i,
-                state=state,
-                past_length=past_length,
-                past_left_context=past_left_context,
-            )
-
-            output, mems, right_context_blocks, next_key, next_val = layer.forward_jit(
-                input=output,
-                lengths=lengths,
-                mems=true_mems,
-                right_context_blocks=right_context_blocks,
-                left_context_key=lc_key,
-                left_context_val=lc_val,
-                rpe=rpe,
-            )
-            # mems is used for the next layer
-            mems_list, lc_key_list, lc_val_list, _ = self.state_update_after(
-                layer=i,
-                state=state,
-                mems_list=mems_list,
-                mems=mems,
-                next_key=next_key,
-                next_val=next_val,
-                lc_key_list=lc_key_list,
-                lc_val_list=lc_val_list,
-            )
-            i += 1
-
-        # update state
-        state = self.state_update_after_loop(
-            state=state,
-            mems_list=mems_list,
-            lc_key_list=lc_key_list,
-            lc_val_list=lc_val_list,
-            update_length=1,
-        )
-
-        return output, lengths, state
-
-    def quantize_(self, params=None):
-        if params and "per_channel" in params and params["per_channel"]:
-            qconfig = per_channel_dynamic_qconfig
-        else:
-            qconfig = default_dynamic_qconfig
-        quantization.quantize_dynamic(
-            self, {torch.nn.Linear: qconfig}, dtype=torch.qint8, inplace=True
-        )
-        return self
-
-
-# ------------------------------------------------------------------------------
-# Emformer encoder for seq2seq model
-# This is a wrapper over the original emformer
-# ------------------------------------------------------------------------------
-def emformer_encoder(klass):
-    class SpeechEncoder(klass):
-        def __init__(self, args):
-            super().__init__(args)
-            stride = SpeechEncoder.conv_layer_stride(args)
-            trf_left_context = args.segment_left_context // stride
-            trf_right_context = args.segment_right_context // stride
-            context_config = [trf_left_context, trf_right_context]
-            self.transformer_layers = nn.ModuleList(
-                [
-                    NoSegAugmentedMemoryTransformerEncoderLayer(
-                        input_dim=args.encoder_embed_dim,
-                        num_heads=args.encoder_attention_heads,
-                        ffn_dim=args.encoder_ffn_embed_dim,
-                        num_layers=args.encoder_layers,
-                        dropout_in_attn=args.dropout,
-                        dropout_on_attn=args.dropout,
-                        dropout_on_fc1=args.dropout,
-                        dropout_on_fc2=args.dropout,
-                        activation_fn=args.activation_fn,
-                        context_config=context_config,
-                        segment_size=args.segment_length,
-                        max_memory_size=args.max_memory_size,
-                        scaled_init=True,  # TODO: use constant for now.
-                        tanh_on_mem=args.amtrf_tanh_on_mem,
-                    )
-                ]
-            )
-
-        def forward(self, src_tokens, src_lengths):
-            encoder_out = super().forward(src_tokens, src_lengths)
-            output = encoder_out["encoder_out"][0]
-            encoder_padding_masks = encoder_out["encoder_padding_mask"][0]
-
-            # This is because, in the original implementation, the output
-            # didn't consider the last segment as right context.
-            encoder_padding_masks = encoder_padding_masks[:, : output.size(0)]
-
-            return {
-                "encoder_out": [output],
-                "encoder_padding_mask": [encoder_padding_masks],
-                "encoder_embedding": [],
-                "encoder_states": [],
-                "src_tokens": [],
-                "src_lengths": [],
-            }
-
-        @staticmethod
-        def conv_layer_stride(args):
-            # TODO: make it configurable from the args
-            return 4
-
-    SpeechEncoder.__name__ = klass.__name__
-    return SpeechEncoder
diff --git a/kosmos-g/fairseq/fairseq/models/speech_to_text/s2t_conformer.py b/kosmos-g/fairseq/fairseq/models/speech_to_text/s2t_conformer.py
deleted file mode 100644
index fbac61d5a..000000000
--- a/kosmos-g/fairseq/fairseq/models/speech_to_text/s2t_conformer.py
+++ /dev/null
@@ -1,159 +0,0 @@
-import logging
-import torch
-from fairseq.models.speech_to_text.s2t_transformer import (
-    S2TTransformerEncoder,
-    S2TTransformerModel,
-    Conv1dSubsampler,
-    base_architecture as transformer_base_architecture,
-)
-from fairseq.data.data_utils import lengths_to_padding_mask
-from fairseq.modules.conformer_layer import ConformerEncoderLayer
-from fairseq.models import FairseqEncoder, register_model_architecture, register_model
-from fairseq.modules import PositionalEmbedding, RelPositionalEncoding
-import math
-
-logger = logging.getLogger(__name__)
-
-
-class S2TConformerEncoder(FairseqEncoder):
-    """Conformer Encoder for speech translation based on https://arxiv.org/abs/2005.08100"""
-
-    def __init__(self, args):
-        super().__init__(None)
-        self.embed_scale = math.sqrt(args.encoder_embed_dim)
-        if args.no_scale_embedding:
-            self.embed_scale = 1.0
-        self.padding_idx = 1
-        self.subsample = Conv1dSubsampler(
-            args.input_feat_per_channel * args.input_channels,
-            args.conv_channels,
-            args.encoder_embed_dim,
-            [int(k) for k in args.conv_kernel_sizes.split(",")],
-        )
-        self.pos_enc_type = args.pos_enc_type
-        if self.pos_enc_type == "rel_pos":
-            self.embed_positions = RelPositionalEncoding(
-                args.max_source_positions, args.encoder_embed_dim
-            )
-        elif self.pos_enc_type == "rope":
-            self.embed_positions = None
-        else:  # Use absolute positional embedding
-            self.pos_enc_type = "abs"
-            self.embed_positions = PositionalEmbedding(
-                args.max_source_positions, args.encoder_embed_dim, self.padding_idx
-            )
-
-        self.linear = torch.nn.Linear(args.encoder_embed_dim, args.encoder_embed_dim)
-        self.dropout = torch.nn.Dropout(args.dropout)
-        self.conformer_layers = torch.nn.ModuleList(
-            [
-                ConformerEncoderLayer(
-                    embed_dim=args.encoder_embed_dim,
-                    ffn_embed_dim=args.encoder_ffn_embed_dim,
-                    attention_heads=args.encoder_attention_heads,
-                    dropout=args.dropout,
-                    depthwise_conv_kernel_size=args.depthwise_conv_kernel_size,
-                    attn_type=args.attn_type,
-                    pos_enc_type=self.pos_enc_type,
-                    use_fp16=args.fp16,
-                )
-                for _ in range(args.encoder_layers)
-            ]
-        )
-
-    def forward(self, src_tokens, src_lengths, return_all_hiddens=False):
-        """
-        Args:
-            src_tokens: Input source tokens Tensor of shape B X T X C
-            src_lengths: Lengths Tensor corresponding to input source tokens
-            return_all_hiddens: If true, will append the self attention states to the encoder states
-        Returns:
-            encoder_out: Tensor of shape B X T X C
-            encoder_padding_mask: Optional Tensor with mask
-            encoder_embedding: Optional Tensor. Always empty here
-            encoder_states: List of Optional Tensors with self attention states
-            src_tokens: Optional Tensor. Always empty here
-            src_lengths: Optional Tensor. Always empty here
-        """
-        x, input_lengths = self.subsample(src_tokens, src_lengths)  # returns T X B X C
-        encoder_padding_mask = lengths_to_padding_mask(input_lengths)
-        x = self.embed_scale * x
-        if self.pos_enc_type == "rel_pos":
-            positions = self.embed_positions(x)
-
-        elif self.pos_enc_type == "rope":
-            positions = None
-
-        else:
-            positions = self.embed_positions(encoder_padding_mask).transpose(0, 1)
-            x += positions
-            positions = None
-
-        x = self.linear(x)
-        x = self.dropout(x)
-        encoder_states = []
-
-        # x is T X B X C
-        for layer in self.conformer_layers:
-            x, _ = layer(x, encoder_padding_mask, positions)
-            if return_all_hiddens:
-                encoder_states.append(x)
-
-        return {
-            "encoder_out": [x],  # T x B x C
-            "encoder_padding_mask": [encoder_padding_mask]
-            if encoder_padding_mask.any()
-            else [],  # B x T
-            "encoder_embedding": [],  # B x T x C
-            "encoder_states": encoder_states,  # List[T x B x C]
-            "src_tokens": [],
-            "src_lengths": [],
-        }
-
-    def reorder_encoder_out(self, encoder_out, new_order):
-        """Required method for a FairseqEncoder. Calls the method from the parent class"""
-        return S2TTransformerEncoder.reorder_encoder_out(self, encoder_out, new_order)
-
-
-@register_model("s2t_conformer")
-class S2TConformerModel(S2TTransformerModel):
-    def __init__(self, encoder, decoder):
-        super().__init__(encoder, decoder)
-
-    @staticmethod
-    def add_args(parser):
-        S2TTransformerModel.add_args(parser)
-        parser.add_argument("--input-feat-per-channel", default=80)
-        parser.add_argument("--depthwise-conv-kernel-size", default=31)
-        parser.add_argument("--input-channels", default=1)
-        parser.add_argument(
-            "--attn-type",
-            default=None,
-            help="If not specified uses fairseq MHA.
Other valid option is espnet", - ) - parser.add_argument( - "--pos-enc-type", - default="abs", - help="Must be specified in addition to attn-type=espnet for rel_pos and rope", - ) - - @classmethod - def build_encoder(cls, args): - encoder = S2TConformerEncoder(args) - return encoder - - -@register_model_architecture("s2t_conformer", "s2t_conformer") -def base_architecture(args): - args.attn_type = getattr(args, "attn_type", None) - args.pos_enc_type = getattr(args, "pos_enc_type", "abs") - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - args.input_channels = getattr(args, "input_channels", 1) - args.max_source_positions = getattr(args, "max_source_positions", 6000) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.dropout = getattr(args, "dropout", 0.1) - args.encoder_layers = getattr(args, "encoder_layers", 16) - args.depthwise_conv_kernel_size = getattr(args, "depthwise_conv_kernel_size", 31) - transformer_base_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/speech_to_text/s2t_transformer.py b/kosmos-g/fairseq/fairseq/models/speech_to_text/s2t_transformer.py deleted file mode 100644 index 33c30e1b0..000000000 --- a/kosmos-g/fairseq/fairseq/models/speech_to_text/s2t_transformer.py +++ /dev/null @@ -1,546 +0,0 @@ -#!/usr/bin/env python3 - -import logging -import math -from pathlib import Path -from typing import Dict, List, Optional, Tuple - -import torch -import torch.nn as nn -from torch import Tensor - -from fairseq import checkpoint_utils, utils -from fairseq.data.data_utils import lengths_to_padding_mask -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - register_model, - register_model_architecture, -) -from fairseq.models.speech_to_text.hub_interface import S2THubInterface -from fairseq.models.transformer import Embedding, TransformerDecoder -from fairseq.modules import ( - FairseqDropout, - LayerNorm, - PositionalEmbedding, - TransformerEncoderLayer, -) - -logger = logging.getLogger(__name__) - - -class Conv1dSubsampler(nn.Module): - """Convolutional subsampler: a stack of 1D convolution (along temporal - dimension) followed by non-linear activation via gated linear units - (https://arxiv.org/abs/1911.08460) - - Args: - in_channels (int): the number of input channels - mid_channels (int): the number of intermediate channels - out_channels (int): the number of output channels - kernel_sizes (List[int]): the kernel size for each convolutional layer - """ - - def __init__( - self, - in_channels: int, - mid_channels: int, - out_channels: int, - kernel_sizes: List[int] = (3, 3), - ): - super(Conv1dSubsampler, self).__init__() - self.n_layers = len(kernel_sizes) - self.conv_layers = nn.ModuleList( - nn.Conv1d( - in_channels if i == 0 else mid_channels // 2, - mid_channels if i < self.n_layers - 1 else out_channels * 2, - k, - stride=2, - padding=k // 2, - ) - for i, k in enumerate(kernel_sizes) - ) - - def get_out_seq_lens_tensor(self, in_seq_lens_tensor): - out = in_seq_lens_tensor.clone() - for _ in range(self.n_layers): - out = ((out.float() - 1) / 2 + 1).floor().long() - return out - - def forward(self, src_tokens, src_lengths): - bsz, in_seq_len, _ = src_tokens.size() # B x T x (C x D) - x = src_tokens.transpose(1, 2).contiguous() # -> B x (C x D) x T - for conv in self.conv_layers: - x = conv(x) - x = nn.functional.glu(x, dim=1) - _, _, 
out_seq_len = x.size() - x = x.transpose(1, 2).transpose(0, 1).contiguous() # -> T x B x (C x D) - return x, self.get_out_seq_lens_tensor(src_lengths) - - -@register_model("s2t_transformer") -class S2TTransformerModel(FairseqEncoderDecoderModel): - """Adapted Transformer model (https://arxiv.org/abs/1706.03762) for - speech-to-text tasks. The Transformer encoder/decoder remains the same. - A trainable input subsampler is prepended to the Transformer encoder to - project inputs into the encoder dimension as well as downsample input - sequence for computational efficiency.""" - - @classmethod - def hub_models(cls): - base_url = "http://dl.fbaipublicfiles.com/fairseq/s2t" - model_ids = [ - "s2t_transformer_s-en-asr-librispeech", - "s2t_transformer_m-en-asr-librispeech", - "s2t_transformer_l-en-asr-librispeech", - ] - return {i: f"{base_url}/{i}.tar.gz" for i in model_ids} - - @classmethod - def from_pretrained( - cls, - model_name_or_path, - checkpoint_file="model.pt", - data_name_or_path=".", - config_yaml="config.yaml", - **kwargs, - ): - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - archive_map=cls.hub_models(), - config_yaml=config_yaml, - **kwargs, - ) - return S2THubInterface(x["args"], x["task"], x["models"][0]) - - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # input - parser.add_argument( - "--conv-kernel-sizes", - type=str, - metavar="N", - help="kernel sizes of Conv1d subsampling layers", - ) - parser.add_argument( - "--conv-channels", - type=int, - metavar="N", - help="# of channels in Conv1d subsampling layers", - ) - # Transformer - parser.add_argument( - "--activation-fn", - type=str, - default="relu", - choices=utils.get_available_activation_fns(), - help="activation function to use", - ) - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--activation-dropout", - "--relu-dropout", - type=float, - metavar="D", - help="dropout probability after activation in FFN.", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads", - ) - parser.add_argument( - "--decoder-normalize-before", - action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - 
"--share-decoder-input-output-embed", - action="store_true", - help="share decoder input and output embeddings", - ) - parser.add_argument( - "--layernorm-embedding", - action="store_true", - help="add layernorm to embedding", - ) - parser.add_argument( - "--no-scale-embedding", - action="store_true", - help="if True, dont scale embeddings", - ) - parser.add_argument( - "--load-pretrained-encoder-from", - type=str, - metavar="STR", - help="model to take encoder weights from (for initialization)", - ) - parser.add_argument( - "--encoder-freezing-updates", - type=int, - metavar="N", - help="freeze encoder for first N updates", - ) - - @classmethod - def build_encoder(cls, args): - encoder = S2TTransformerEncoder(args) - pretraining_path = getattr(args, "load_pretrained_encoder_from", None) - if pretraining_path is not None: - if not Path(pretraining_path).exists(): - logger.warning( - f"skipped pretraining because {pretraining_path} does not exist" - ) - else: - encoder = checkpoint_utils.load_pretrained_component_from_model( - component=encoder, checkpoint=pretraining_path - ) - logger.info(f"loaded pretrained encoder from: {pretraining_path}") - return encoder - - @classmethod - def build_decoder(cls, args, task, embed_tokens): - return TransformerDecoderScriptable(args, task.target_dictionary, embed_tokens) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - def build_embedding(dictionary, embed_dim): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - return Embedding(num_embeddings, embed_dim, padding_idx) - - decoder_embed_tokens = build_embedding( - task.target_dictionary, args.decoder_embed_dim - ) - encoder = cls.build_encoder(args) - decoder = cls.build_decoder(args, task, decoder_embed_tokens) - return cls(encoder, decoder) - - def get_normalized_probs( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - # net_output['encoder_out'] is a (B, T, D) tensor - lprobs = self.get_normalized_probs_scriptable(net_output, log_probs, sample) - lprobs.batch_first = True - return lprobs - - def forward(self, src_tokens, src_lengths, prev_output_tokens): - """ - The forward method inherited from the base class has a **kwargs - argument in its input, which is not supported in torchscript. This - method overwrites the forward method definition without **kwargs. 
- """ - encoder_out = self.encoder(src_tokens=src_tokens, src_lengths=src_lengths) - decoder_out = self.decoder( - prev_output_tokens=prev_output_tokens, encoder_out=encoder_out - ) - return decoder_out - - -class S2TTransformerEncoder(FairseqEncoder): - """Speech-to-text Transformer encoder that consists of input subsampler and - Transformer encoder.""" - - def __init__(self, args): - super().__init__(None) - - self.encoder_freezing_updates = args.encoder_freezing_updates - self.num_updates = 0 - - self.dropout_module = FairseqDropout( - p=args.dropout, module_name=self.__class__.__name__ - ) - self.embed_scale = math.sqrt(args.encoder_embed_dim) - if args.no_scale_embedding: - self.embed_scale = 1.0 - self.padding_idx = 1 - - self.subsample = Conv1dSubsampler( - args.input_feat_per_channel * args.input_channels, - args.conv_channels, - args.encoder_embed_dim, - [int(k) for k in args.conv_kernel_sizes.split(",")], - ) - - self.embed_positions = PositionalEmbedding( - args.max_source_positions, args.encoder_embed_dim, self.padding_idx - ) - - self.transformer_layers = nn.ModuleList( - [TransformerEncoderLayer(args) for _ in range(args.encoder_layers)] - ) - if args.encoder_normalize_before: - self.layer_norm = LayerNorm(args.encoder_embed_dim) - else: - self.layer_norm = None - - def _forward(self, src_tokens, src_lengths, return_all_hiddens=False): - x, input_lengths = self.subsample(src_tokens, src_lengths) - x = self.embed_scale * x - - encoder_padding_mask = lengths_to_padding_mask(input_lengths) - positions = self.embed_positions(encoder_padding_mask).transpose(0, 1) - x += positions - x = self.dropout_module(x) - - encoder_states = [] - - for layer in self.transformer_layers: - x = layer(x, encoder_padding_mask) - if return_all_hiddens: - encoder_states.append(x) - - if self.layer_norm is not None: - x = self.layer_norm(x) - - return { - "encoder_out": [x], # T x B x C - "encoder_padding_mask": [encoder_padding_mask] - if encoder_padding_mask.any() - else [], # B x T - "encoder_embedding": [], # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": [], - "src_lengths": [], - } - - def forward(self, src_tokens, src_lengths, return_all_hiddens=False): - if self.num_updates < self.encoder_freezing_updates: - with torch.no_grad(): - x = self._forward( - src_tokens, src_lengths, return_all_hiddens=return_all_hiddens - ) - else: - x = self._forward( - src_tokens, src_lengths, return_all_hiddens=return_all_hiddens - ) - return x - - def reorder_encoder_out(self, encoder_out, new_order): - new_encoder_out = ( - [] - if len(encoder_out["encoder_out"]) == 0 - else [x.index_select(1, new_order) for x in encoder_out["encoder_out"]] - ) - - new_encoder_padding_mask = ( - [] - if len(encoder_out["encoder_padding_mask"]) == 0 - else [ - x.index_select(0, new_order) - for x in encoder_out["encoder_padding_mask"] - ] - ) - - new_encoder_embedding = ( - [] - if len(encoder_out["encoder_embedding"]) == 0 - else [ - x.index_select(0, new_order) for x in encoder_out["encoder_embedding"] - ] - ) - - encoder_states = encoder_out["encoder_states"] - if len(encoder_states) > 0: - for idx, state in enumerate(encoder_states): - encoder_states[idx] = state.index_select(1, new_order) - - return { - "encoder_out": new_encoder_out, # T x B x C - "encoder_padding_mask": new_encoder_padding_mask, # B x T - "encoder_embedding": new_encoder_embedding, # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": [], # B x T - "src_lengths": [], # B x 1 - } - - def 
set_num_updates(self, num_updates): - super().set_num_updates(num_updates) - self.num_updates = num_updates - - -class TransformerDecoderScriptable(TransformerDecoder): - def extract_features( - self, - prev_output_tokens, - encoder_out: Optional[Dict[str, List[Tensor]]] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - full_context_alignment: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - ): - # call scriptable method from parent class - x, _ = self.extract_features_scriptable( - prev_output_tokens, - encoder_out, - incremental_state, - full_context_alignment, - alignment_layer, - alignment_heads, - ) - return x, None - - -@register_model_architecture(model_name="s2t_transformer", arch_name="s2t_transformer") -def base_architecture(args): - args.encoder_freezing_updates = getattr(args, "encoder_freezing_updates", 0) - # Convolutional subsampler - args.conv_kernel_sizes = getattr(args, "conv_kernel_sizes", "5,5") - args.conv_channels = getattr(args, "conv_channels", 1024) - # Transformer - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 12) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", args.dropout) - args.activation_dropout = getattr(args, "activation_dropout", args.dropout) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0) - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0) - - -@register_model_architecture("s2t_transformer", "s2t_transformer_s") -def s2t_transformer_s(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 256 * 8) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.dropout = getattr(args, "dropout", 0.1) - base_architecture(args) - - -@register_model_architecture("s2t_transformer", "s2t_transformer_xs") 
-def s2t_transformer_xs(args): - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.decoder_layers = getattr(args, "decoder_layers", 3) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 256 * 4) - args.dropout = getattr(args, "dropout", 0.3) - s2t_transformer_s(args) - - -@register_model_architecture("s2t_transformer", "s2t_transformer_sp") -def s2t_transformer_sp(args): - args.encoder_layers = getattr(args, "encoder_layers", 16) - s2t_transformer_s(args) - - -@register_model_architecture("s2t_transformer", "s2t_transformer_m") -def s2t_transformer_m(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 512 * 4) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.dropout = getattr(args, "dropout", 0.15) - base_architecture(args) - - -@register_model_architecture("s2t_transformer", "s2t_transformer_mp") -def s2t_transformer_mp(args): - args.encoder_layers = getattr(args, "encoder_layers", 16) - s2t_transformer_m(args) - - -@register_model_architecture("s2t_transformer", "s2t_transformer_l") -def s2t_transformer_l(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024 * 4) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.2) - base_architecture(args) - - -@register_model_architecture("s2t_transformer", "s2t_transformer_lp") -def s2t_transformer_lp(args): - args.encoder_layers = getattr(args, "encoder_layers", 16) - s2t_transformer_l(args) diff --git a/kosmos-g/fairseq/fairseq/models/speech_to_text/utils.py b/kosmos-g/fairseq/fairseq/models/speech_to_text/utils.py deleted file mode 100644 index 168b8bf13..000000000 --- a/kosmos-g/fairseq/fairseq/models/speech_to_text/utils.py +++ /dev/null @@ -1,563 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. 
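
The `*_architecture` presets deleted above all rely on the same `getattr`-with-default idiom: a named preset pins only the fields that define it, then delegates to `base_architecture`, which fills in anything still unset, so user-supplied values always win. A minimal self-contained sketch of that pattern (using a plain `argparse.Namespace` and illustrative field names, not fairseq's actual config machinery):

```python
from argparse import Namespace

def base_architecture(args):
    # Fill in any field the caller has not already set.
    args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
    args.encoder_layers = getattr(args, "encoder_layers", 12)

def s2t_transformer_s(args):
    # The "small" preset pins its own defaults first, then delegates.
    args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256)
    base_architecture(args)

args = Namespace(encoder_layers=6)  # a user override
s2t_transformer_s(args)
assert (args.encoder_embed_dim, args.encoder_layers) == (256, 6)
```
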
- - -import logging -from collections.abc import Iterable -from itertools import repeat -from typing import List, Optional, Tuple - -import torch -from torch import Tensor - - -# ------------------------------------------------------------------------------ -# assert_equal() -# ------------------------------------------------------------------------------ - - -def assert_equal(value1, value2, name1=None, name2=None): - """Asserts two values are equal otherwise raise an error.""" - - str_name1 = "" if name1 is None else "{} ".format(name1) - str_name2 = "" if name2 is None else "{} ".format(name2) - if value1 != value2: - str_value1 = "{}" if name1 is None else "({})" - str_value1 = str_value1.format(value1) - str_value2 = "{}" if name2 is None else "({})" - str_value2 = str_value2.format(value2) - raise ValueError( - "Expected {}{} == {}{}".format(str_name1, str_value1, str_name2, str_value2) - ) - - -def fill_config(config, key, value): - if value is not None: - if key not in config or config[key] is None: - config[key] = value - assert_equal(value, config[key], "value", f'config["{key}"]') - - -# ------------------------------------------------------------------------------ -# check_and_return_expected() -# ------------------------------------------------------------------------------ - - -def check_and_return_expected(value, undefined_value, expected_value, name=None): - """ - Return the expected value while checking if the given value is undefined or - equal to the expected value. - """ - if (undefined_value is None and value is None) or (undefined_value == value): - return expected_value - if value != expected_value: - str_name = "" if name is None else "{} ".format(name) - str_value = "{}" if name is None else "({})" - str_value = str_value.format(value) - raise ValueError( - "Expected {}{} == {}".format(str_name, str_value, expected_value) - ) - return expected_value - - -# ------------------------------------------------------------------------------ -# get_time_axis() -# ------------------------------------------------------------------------------ - - -def get_time_axis(layout): - """ - Extract the time axis from the layout, for example for breaking sequence into - segments. - """ - if layout in ["TB", "TBD"]: - return 0 - if layout in ["BT", "BTD"]: - return 1 - if layout in ["BCTD"]: - return 2 - raise ValueError("Unsupported layout = {}".format(layout)) - - -# ------------------------------------------------------------------------------ -# get_batch_axis() -# ------------------------------------------------------------------------------ - - -def get_batch_axis(layout): - """ - Extract the batch axis from the layout - """ - if layout in ["TB", "TBD"]: - return 1 - if layout in ["BT", "BTD", "BCTD"]: - return 0 - raise ValueError("Unsupported layout = {}".format(layout)) - - -# ------------------------------------------------------------------------------ -# monotonically_increasing_and_bounded() -# ------------------------------------------------------------------------------ - - -def monotonically_increasing_and_bounded(iterable, min=None, max=None): - """ - Check if the elements in the given iterable are monotonically increasing and - bounded by upper/lower bounds. 
- """ - if not isinstance(iterable, Iterable): - raise TypeError( - "Expected iterable to be of type Iterable, got ({})".format( - iterable.__class__.__name__ - ) - ) - for i in range(len(iterable)): - if min is not None and iterable[i] < min: - return False - if max is not None and iterable[i] > max: - return False - if i > 0 and iterable[i] <= iterable[i - 1]: - return False - return True - - -# ------------------------------------------------------------------------------ -# to_pair() -# ------------------------------------------------------------------------------ - - -def to_pair(value, name): - """Make a pair (of type tuple) of given value.""" - if isinstance(value, Iterable): - if len(value) != 2: - raise ValueError( - "Expected `{}` to have exactly 2 elements, got: ({})".format( - name, value - ) - ) - return value - return tuple(repeat(value, 2)) - - -# ------------------------------------------------------------------------------ -# infer_conv_output_attrs() -# ------------------------------------------------------------------------------ - - -# TODO(cfyeh): figure out if we can get `output_dim` without calling the module. -def infer_conv_output_attrs( - module, input_channels, input_dim, batch_size=1, max_length=8 -): - """Get output attributes of a module with input.""" - input = torch.randn(batch_size, input_channels, max_length, input_dim) - output = module(input) - output_channels = output.shape[1] - output_dim = output.shape[-1] - return output_channels, output_dim - - -# ------------------------------------------------------------------------------ -# NoOp -# ------------------------------------------------------------------------------ - - -class NoOp(torch.nn.Module): - """ - NoOp simply passes the input as the output. - """ - - def __init__(self): - super().__init__() - - def forward(self, input: Tensor) -> Tensor: - return input - - -# ------------------------------------------------------------------------------ -# Permute: a torch.nn.Module applies permutation on the input tensor. -# ------------------------------------------------------------------------------ - - -class Permute(torch.nn.Module): - def __init__(self, dims): - super().__init__() - self.dims = dims - - def forward(self, input: Tensor) -> Tensor: - return input.permute(self.dims).contiguous() - - -# ------------------------------------------------------------------------------ -# lengths_to_padding_mask() -# ------------------------------------------------------------------------------ - - -def lengths_to_padding_mask(lengths: Tensor) -> Tensor: - """Convert lengths of shape (B, ) to padding mask.""" - batch_size = lengths.shape[0] - max_length = int(torch.max(lengths).item()) - padding_mask = torch.arange( # [0, ..., T-1] - max_length, device=lengths.device, dtype=lengths.dtype - ).expand(batch_size, max_length) >= lengths.unsqueeze(1) - - return padding_mask - - -# ------------------------------------------------------------------------------ -# lengths_to_attention_mask() -# ------------------------------------------------------------------------------ - - -def lengths_to_attention_mask( - lengths: Tensor, - left_context: Optional[int] = None, - right_context: Optional[int] = None, -) -> Optional[Tensor]: - """ - Generate attention mask based on (lengths, left_context, right_context). - left_context is None means unlimited left context. - right_context is None means unlimited right context. 
-    """
-
-    if left_context is None and right_context is None:
-        return None
-
-    max_length = int(torch.max(lengths).item())
-
-    # For example, with `max_length` == 5,
-    # indices = tensor([
-    #     [ 0,  1,  2,  3,  4],
-    #     [-1,  0,  1,  2,  3],
-    #     [-2, -1,  0,  1,  2],
-    #     [-3, -2, -1,  0,  1],
-    #     [-4, -3, -2, -1,  0],
-    # ])
-
-    # In some cases the second torch.arange is created on cpu which causes a
-    # failure. Adding the device option to guard against it.
-    indices = torch.arange(
-        max_length, device=lengths.device, dtype=lengths.dtype
-    ).expand(max_length, max_length) - torch.arange(
-        max_length, device=lengths.device
-    ).view(
-        max_length, -1
-    )
-
-    # For example, with `max_length` == 5,
-    # bool_mask = tensor([
-    #     [True, True, True, True, True],
-    #     [True, True, True, True, True],
-    #     [True, True, True, True, True],
-    #     [True, True, True, True, True],
-    #     [True, True, True, True, True],
-    # ])
-    bool_mask = (
-        torch.tensor([True]).to(device=lengths.device).expand(max_length, max_length)
-    )
-
-    # For example, with `max_length` == 5, left_context == 2
-    # left_mask = tensor([
-    #     [ True,  True,  True,  True,  True],
-    #     [ True,  True,  True,  True,  True],
-    #     [ True,  True,  True,  True,  True],
-    #     [False,  True,  True,  True,  True],
-    #     [False, False,  True,  True,  True],
-    # ])
    if left_context is not None:
-        left_mask = indices >= -left_context
-        bool_mask = bool_mask & left_mask
-
-    # For example, with `max_length` == 5, right_context == 1
-    # right_mask = tensor([
-    #     [True, True, False, False, False],
-    #     [True, True, True, False, False],
-    #     [True, True, True, True, False],
-    #     [True, True, True, True, True],
-    #     [True, True, True, True, True],
-    # ])
-    if right_context is not None:
-        right_mask = indices <= right_context
-        bool_mask = bool_mask & right_mask
-
-    bool_mask = (~bool_mask).to(device=lengths.device)
-    return bool_mask
-
-
-# ------------------------------------------------------------------------------
-# infer_output_norm()
-# ------------------------------------------------------------------------------
-
-
-def infer_output_norm(module, output_norm=None):
-    """
-    Infer the output norm (string and module) needed on the module given the
-    desired output normalization.
-    """
-    if output_norm == module.output_norm():
-        # output_norm already matches module.output_norm().
- return (None, NoOp()) - - if output_norm is None and module.output_norm() is not None: - logger = logging.getLogger("infer_output_norm()") - logger.warning( - "trying to set output_norm ({}) ".format(output_norm) - + "but got module.output_norm() ({}), ".format(module.output_norm()) - + "the combined output_norm() will be ({})".format(module.output_norm()) - ) - return (None, NoOp()) - - if output_norm == "log_softmax": - if module.output_norm() is not None: - raise ValueError( - "incompatible output_norm ({}) ".format(output_norm) - + "and module.output_norm() ({})".format(module.output_norm()) - ) - else: - return ("log_softmax", torch.nn.LogSoftmax(dim=-1)) - - if output_norm == "softmax": - if module.output_norm() is not None: - raise ValueError( - "incompatible output_norm ({}) ".format(output_norm) - + "and module.output_norm() ({})".format(module.output_norm()) - ) - else: - return ("softmax", torch.nn.Softmax(dim=-1)) - - raise ValueError( - "output_norm ({}) not in ".format(output_norm) - + "supported list = [None, softmax, log_softmax]" - ) - - -# ------------------------------------------------------------------------------ -# infer_channels_from_layout() -# ------------------------------------------------------------------------------ - - -def infer_channels_from_layout(layout, channels): - """Extract the number of channels from the layout.""" - if layout in ("TBD", "BTD"): - if channels is not None and channels != 1: - raise ValueError( - "Expected channels ({}) to be 1 for layout = {}".format( - channels, layout - ) - ) - if channels is None: - return 1 - return channels - - -# ------------------------------------------------------------------------------ -# pad_sequence() -# ------------------------------------------------------------------------------ - - -@torch.jit.export -def pad_sequence( - sequence: Tensor, - time_axis: int, - extra_left_context: int = 0, - extra_right_context: int = 0, -) -> Tensor: - """Pad extra left/right contexts to the sequence.""" - - if extra_left_context == 0 and extra_right_context == 0: - return sequence - - tensors_to_concat = [] - - if extra_left_context: - size = (extra_left_context,) - fill_value = 0 - indices = torch.full( - size=size, - fill_value=fill_value, - dtype=torch.long, - device=sequence.device, - ) - left_padding = torch.index_select(sequence, time_axis, indices) - tensors_to_concat.append(left_padding) - - tensors_to_concat.append(sequence) - - # NOTE(cfyeh): for efficiency reason we pad 0 instead of the last frame for - # extra right contexts. 
- if extra_right_context: - size = list(sequence.shape) - size[time_axis] = extra_right_context - right_padding = torch.zeros(size, dtype=sequence.dtype, device=sequence.device) - tensors_to_concat.append(right_padding) - - padded_sequence = torch.cat(tensors_to_concat, dim=time_axis) - return padded_sequence - - -# ------------------------------------------------------------------------------ -# sequence_to_segments() -# ------------------------------------------------------------------------------ - - -@torch.jit.export -def sequence_to_segments( - sequence: Tensor, - time_axis: int, - lengths: Tensor, - segment_size: Optional[int] = None, - extra_left_context: int = 0, - extra_right_context: int = 0, -) -> List[Tuple[Tensor, Tensor]]: - """Breaks sequence into segments.""" - - sequence = pad_sequence( - sequence=sequence, - time_axis=time_axis, - extra_left_context=extra_left_context, - extra_right_context=extra_right_context, - ) - - lengths = lengths + extra_left_context + extra_right_context - - segments: List[Tuple[Tensor, Tensor]] = [] - - if segment_size is None: - segments.append((sequence, lengths)) - return segments - - offset = 0 - end = sequence.shape[time_axis] - step = segment_size - size = extra_left_context + segment_size + extra_right_context - - while offset + extra_left_context + extra_right_context < end: - clamped_size = min(size, end - offset) - segment_lengths = torch.clamp(lengths - offset, min=0, max=clamped_size) - indices = torch.arange( - start=offset, - end=(offset + clamped_size), - step=1, - dtype=torch.long, - device=sequence.device, - ) - segment_tensor = torch.index_select(sequence, time_axis, indices) - segments.append((segment_tensor, segment_lengths)) - offset = offset + step - - return segments - - -# ------------------------------------------------------------------------------ -# segments_to_sequence() -# ------------------------------------------------------------------------------ - - -@torch.jit.export -def segments_to_sequence( - segments: List[Tuple[Tensor, Tensor]], time_axis: int -) -> Tuple[Tensor, Tensor]: - """Concatenate segments into a full sequence.""" - if len(segments) == 1: - return segments[0] - - tensors_to_concat: List[Tensor] = [] - lengths_to_stack: List[Tensor] = [] - - for tensor, lengths in segments: - tensors_to_concat.append(tensor) - lengths_to_stack.append(lengths) - - sequence = torch.cat(tensors_to_concat, dim=time_axis) - lengths = torch.stack(lengths_to_stack, dim=0) - lengths = torch.sum(lengths, dim=0) - - return sequence, lengths - - -def lengths_to_encoder_padding_mask(lengths, batch_first: bool = False): - """ - convert lengths (a 1-D Long/Int tensor) to 2-D binary tensor - - Args: - lengths: a (B, )-shaped tensor - batch_first: whether to return a (B, T) tensor - - Return: - max_length: maximum length of B sequences - encoder_padding_mask: a (max_length, B) binary mask, where - [t, b] = False for t < lengths[b] and True otherwise - - TODO: - kernelize this function if benchmarking shows this function is slow - """ - max_lengths = torch.max(lengths).item() - bsz = lengths.size(0) - encoder_padding_mask = torch.arange( - max_lengths - ).to( # a (T, ) tensor with [0, ..., T-1] - lengths.device - ).view( # move to the right device - 1, max_lengths - ).expand( # reshape to (1, T)-shaped tensor - bsz, -1 - ) > lengths.view( # expand to (B, T)-shaped tensor - bsz, 1 - ).expand( - -1, max_lengths - ) - if not batch_first: - return encoder_padding_mask.t(), max_lengths - else: - return encoder_padding_mask, max_lengths 
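
For reference, `lengths_to_padding_mask` above marks position `t` of row `b` as `True` (padding) exactly when `t >= lengths[b]`, and `lengths_to_encoder_padding_mask` produces the analogous mask in `(T, B)` layout unless `batch_first` is set. A small self-contained restatement of that semantics (re-implemented here rather than imported from the deleted module):

```python
import torch

def lengths_to_padding_mask(lengths: torch.Tensor) -> torch.Tensor:
    # True where the position lies beyond the sequence length, i.e. padding.
    bsz = lengths.shape[0]
    max_length = int(lengths.max().item())
    return (
        torch.arange(max_length, device=lengths.device).expand(bsz, max_length)
        >= lengths.unsqueeze(1)
    )

print(lengths_to_padding_mask(torch.tensor([3, 1])))
# tensor([[False, False, False],
#         [False,  True,  True]])
```
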
-
-
-# ------------------------------------------------------------------------------
-# attention suppression
-# ------------------------------------------------------------------------------
-
-
-def attention_suppression(attention_weights: Tensor, scale: float):
-    # B, H, qlen, klen -> B, H, qlen, 1
-    attention_prob = torch.nn.functional.softmax(attention_weights.float(), dim=-1)
-    attention_nozeros = attention_prob.to(torch.bool)
-    nozeros_sum = torch.sum(attention_nozeros.to(torch.float), dim=-1, keepdim=True)
-
-    # For very sparse situations, we need to get around the zeros
-    key_sum = torch.sum(attention_prob, dim=-1, keepdim=True)
-
-    # nozeros_sum should be > 1
-    key_mean = key_sum / (nozeros_sum + 1e-8)
-
-    # std calculation
-    dis = (attention_prob - key_mean) * (attention_prob - key_mean)
-
-    # if attention_prob[i] < threshold, then dis_masked[i] = 0; for all i
-    dis_masked = torch.where(
-        attention_nozeros, dis, attention_prob.new_zeros(attention_prob.size())
-    )
-
-    key_var = torch.sum(dis_masked, dim=-1, keepdim=True)
-    key_var = key_var / (nozeros_sum - 1.0 + 1e-8)
-    key_std = torch.sqrt(key_var)
-    key_thread = key_mean - scale * key_std
-
-    # keep attention_weights[i] if attention_prob[i] >= key_thread,
-    # otherwise set it to "-inf"
-    inf_tensor = attention_prob.new_zeros(attention_prob.size()).detach()
-    inf_tensor[:] = float("-inf")
-    attention_weights_float = torch.where(
-        attention_prob < key_thread,
-        inf_tensor,
-        attention_weights.float(),
-    )
-
-    return attention_weights_float.type_as(attention_weights)
-
-
-def layer_norm_backward_hook(module, grad_input, grad_output, clamp_value):
-    return tuple(torch.clamp(v, min=-clamp_value, max=clamp_value) for v in grad_input)
diff --git a/kosmos-g/fairseq/fairseq/models/speech_to_text/xm_transformer.py b/kosmos-g/fairseq/fairseq/models/speech_to_text/xm_transformer.py
deleted file mode 100644
index 5d6f162d3..000000000
--- a/kosmos-g/fairseq/fairseq/models/speech_to_text/xm_transformer.py
+++ /dev/null
@@ -1,684 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
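
`layer_norm_backward_hook` above is shaped for partial application: every parameter except `clamp_value` matches PyTorch's backward-hook signature, so clamping gradients at a `LayerNorm` boundary reduces to binding `clamp_value` and registering the result. A hypothetical wiring (the deleted sources don't show the registration site, and using a full backward hook here is an assumption):

```python
import functools
import torch

def layer_norm_backward_hook(module, grad_input, grad_output, clamp_value):
    # Clamp every gradient flowing into the module to [-clamp_value, clamp_value].
    return tuple(torch.clamp(v, min=-clamp_value, max=clamp_value) for v in grad_input)

ln = torch.nn.LayerNorm(8)
ln.register_full_backward_hook(
    functools.partial(layer_norm_backward_hook, clamp_value=1.0)
)

x = torch.randn(2, 8, requires_grad=True)
ln(x).sum().backward()  # x.grad is derived from the clamped grad_input
```
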
- -import copy -import logging -from typing import Dict, List, Optional, Tuple - -import numpy as np -import torch -import torch.nn as nn -from torch import Tensor - -from fairseq import checkpoint_utils, utils -from fairseq.data.data_utils import lengths_to_padding_mask -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - register_model, - register_model_architecture, -) -from fairseq.models.speech_to_text.hub_interface import S2THubInterface -from fairseq.models.transformer import Embedding, TransformerDecoder -from fairseq.models.wav2vec import Wav2VecEncoder -from fairseq.modules.layer_norm import LayerNorm - -logger = logging.getLogger(__name__) - - -class Conv1dAdaptor(nn.Module): - def __init__( - self, - in_dim, - out_dim, - n_layers=3, - kernel_size=3, - stride=2, - layerdrop=0.0, - layernorm=False, - proj=False, - ): - super().__init__() - self.proj, self.proj_ln = None, None - self.post_proj, self.post_proj_ln = None, None - if proj: - self.proj = nn.Sequential( - nn.Linear(in_dim, in_dim * 4), nn.ReLU(), nn.Linear(in_dim * 4, in_dim) - ) - self.proj_ln = LayerNorm(in_dim) - self.post_proj = nn.Sequential( - nn.Linear(out_dim, out_dim * 4), - nn.ReLU(), - nn.Linear(out_dim * 4, out_dim), - ) - self.post_proj_ln = LayerNorm(out_dim) - - self.layers = nn.ModuleList( - nn.Conv1d( - in_dim if i == 0 else out_dim, - out_dim * 2, - kernel_size, - stride=stride, - padding=kernel_size // 2, - ) - for i in range(n_layers) - ) - self.stride = stride - self.layerdrop = layerdrop - self.layernorm = LayerNorm(in_dim) if layernorm else None - - @classmethod - def add_args(cls, parser): - parser.add_argument("--adaptor-n-layers", type=int) - parser.add_argument("--adaptor-kernel-size", type=int) - parser.add_argument("--adaptor-stride", type=int) - parser.add_argument("--adaptor-layerdrop", type=float) - parser.add_argument("--adaptor-layernorm", action="store_true") - parser.add_argument("--adaptor-proj", action="store_true") - - def forward(self, x, padding_mask: Optional[torch.Tensor]): - if self.layernorm is not None: - x = self.layernorm(x) - - if self.proj is not None: - x = x + 0.5 * self.proj(x) - x = self.proj_ln(x) - - # T x B x C -> B x C x T - x = x.transpose(0, 1).transpose(1, 2) - out_lens = None - if padding_mask is not None: - out_lens = (~padding_mask).sum(1).float() - - for layer in self.layers: - layerdrop_prob = np.random.random() - if not self.training or (layerdrop_prob > self.layerdrop): - x = nn.functional.glu(layer(x), dim=1) - if padding_mask is not None: - out_lens = ((out_lens - 1) / self.stride + 1).floor() - # B x C x T -> T x B x C - x = x.transpose(1, 2).transpose(0, 1) - - if self.post_proj is not None: - x = x + 0.5 * self.post_proj(x) - x = self.post_proj_ln(x) - - out_padding_mask = None - if padding_mask is not None: - out_padding_mask = lengths_to_padding_mask(out_lens.long()) - return x, out_padding_mask - - -def add_wav2vec_asr_args(parser): - parser.add_argument("--w2v-path", help="path to wav2vec 2.0 model") - parser.add_argument( - "--no-pretrained-weights", - action="store_true", - help="if true, does not load pretrained weights", - ) - parser.add_argument( - "--dropout-input", - type=float, - metavar="D", - help="dropout to apply to the input (after feat extr)", - ) - parser.add_argument( - "--final-dropout", - type=float, - metavar="D", - help="dropout after transformer and before final projection", - ) - parser.add_argument( - "--apply-mask", action="store_true", help="apply masking during fine-tuning" - ) - 
parser.add_argument( - "--dropout", - type=float, - metavar="D", - help="dropout probability inside wav2vec 2.0 model", - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights inside wav2vec 2.0 model", - ) - parser.add_argument( - "--activation-dropout", - "--relu-dropout", - type=float, - metavar="D", - help="dropout probability after activation in FFN inside wav2vec 2.0 model", - ) - - parser.add_argument( - "--mask-length", type=int, help="repeat the mask indices multiple times" - ) - - parser.add_argument( - "--mask-prob", type=float, help="probability of replacing a token with mask" - ) - - parser.add_argument( - "--mask-selection", - type=str, - choices=["static", "uniform", "normal", "poisson"], - help="how to choose masks", - ) - - parser.add_argument( - "--mask-other", - type=float, - help="stdev of the mask length in case of 'normal' selection strategy", - ) - - parser.add_argument( - "--no-mask-overlap", - action="store_true", - help="whether to allow masks to overlap", - ) - - parser.add_argument( - "--mask-channel-length", type=int, help="repeat the mask indices multiple times" - ) - - parser.add_argument( - "--mask-channel-prob", - type=float, - help="probability of replacing a token with mask", - ) - - parser.add_argument( - "--mask-channel-selection", - type=str, - choices=["static", "uniform", "normal", "poisson"], - help="how to choose masks", - ) - - parser.add_argument( - "--mask-channel-other", - type=float, - help="stdev of the mask length in case of 'normal' selection strategy", - ) - - parser.add_argument( - "--no-mask-channel-overlap", - action="store_true", - help="whether to allow masks to overlap", - ) - - parser.add_argument( - "--freeze-finetune-updates", - default=0, - type=int, - help="dont finetune wav2vec for this many updates", - ) - - parser.add_argument( - "--feature-grad-mult", - default=None, - type=float, - help="reset feature grad mult in wav2vec 2.0 to this", - ) - - parser.add_argument( - "--layerdrop", - default=0.0, - type=float, - help="probability of dropping a layer in wav2vec 2.0", - ) - parser.add_argument("--w2v-args", default=None) - - -def need_finetuning(ft_params, param_name): - if ft_params == "all": - return True - ft_params_list = ft_params.split(",") - for ft_param in ft_params_list: - if ft_param in param_name: - return True - return False - - -class Wav2VecEncoderWithAdaptor(FairseqEncoder): - def build_adaptor(self, args): - adaptor = None - if args.adaptor_n_layers > 0: - adaptor = Conv1dAdaptor( - args.decoder_embed_dim, - args.decoder_embed_dim, - n_layers=args.adaptor_n_layers, - kernel_size=args.adaptor_kernel_size, - stride=args.adaptor_stride, - layerdrop=args.adaptor_layerdrop, - layernorm=args.adaptor_layernorm, - proj=args.adaptor_proj, - ) - return adaptor - - def __init__(self, args): - super().__init__(None) - self.w2v_encoder = Wav2VecEncoder(args) - self.is_v0_arch = not args.adaptor_proj - self.w2v_proj_ln = None - if not self.is_v0_arch and self.w2v_encoder.proj is not None: - self.w2v_proj_ln = LayerNorm(args.decoder_embed_dim) - self.adaptor = self.build_adaptor(args) - - self.num_updates = 0 - self.freezing_updates = args.w2v_freezing_updates - self.finetuning_params = args.finetune_w2v_params - for k, p in self.w2v_encoder.w2v_model.named_parameters(): - p.requires_grad = need_finetuning(self.finetuning_params, k) - - @classmethod - def add_args(cls, parser): - add_wav2vec_asr_args(parser) - parser.add_argument( - "--normalize", - 
action="store_true", - help="if set, normalizes input to have 0 mean and unit variance", - ) - parser.add_argument( - "--finetune-w2v-params", - type=str, - metavar="STR", - help="comma-separated param strings to finetune.", - ) - parser.add_argument("--w2v-freezing-updates", type=int) - parser.add_argument("--load-pretrained-encoder-from", type=str, metavar="STR") - Conv1dAdaptor.add_args(parser) - - def set_num_updates(self, num_updates): - super().set_num_updates(num_updates) - self.num_updates = num_updates - - def forward(self, src_tokens, src_lengths=None, **kwargs): - if ( - self.freezing_updates is not None - and self.num_updates > self.freezing_updates - ): - for p in self.w2v_encoder.w2v_model.parameters(): - p.requires_grad = True - - padding_mask = lengths_to_padding_mask(src_lengths) - out = self.w2v_encoder.forward(src_tokens, padding_mask, tbc=True) - x, padding_mask = out["encoder_out"], out["padding_mask"] - if self.w2v_proj_ln is not None: - x = self.w2v_proj_ln(x) - - if self.adaptor is not None: - x, padding_mask = self.adaptor(x, padding_mask) - - return { - "encoder_out": [x], # T x B x C - "encoder_padding_mask": [] - if padding_mask is None - else [padding_mask], # B x T - "encoder_embedding": [], # B x T x C - "encoder_states": [], # List[T x B x C] - "src_tokens": [], - "src_lengths": [], - } - - def reorder_encoder_out(self, encoder_out, new_order): - new_encoder_out = ( - [] - if len(encoder_out["encoder_out"]) == 0 - else [x.index_select(1, new_order) for x in encoder_out["encoder_out"]] - ) - - new_encoder_padding_mask = ( - [] - if len(encoder_out["encoder_padding_mask"]) == 0 - else [ - x.index_select(0, new_order) - for x in encoder_out["encoder_padding_mask"] - ] - ) - - new_encoder_embedding = ( - [] - if len(encoder_out["encoder_embedding"]) == 0 - else [ - x.index_select(0, new_order) for x in encoder_out["encoder_embedding"] - ] - ) - - encoder_states = encoder_out["encoder_states"] - if len(encoder_states) > 0: - for idx, state in enumerate(encoder_states): - encoder_states[idx] = state.index_select(1, new_order) - - return { - "encoder_out": new_encoder_out, # T x B x C - "encoder_padding_mask": new_encoder_padding_mask, # B x T - "encoder_embedding": new_encoder_embedding, # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": [], # B x T - "src_lengths": [], # B x 1 - } - - -def add_decoder_args(parser): - parser.add_argument( - "--activation-fn", - type=str, - default="relu", - choices=utils.get_available_activation_fns(), - help="activation function to use", - ) - parser.add_argument( - "--decoder-dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--decoder-attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--decoder-activation-dropout", - type=float, - metavar="D", - help="dropout probability after activation in FFN.", - ) - parser.add_argument( - "--decoder-embed-dim", type=int, metavar="N", help="decoder embedding dimension" - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads", - ) - parser.add_argument( - "--decoder-normalize-before", - action="store_true", - help="apply layernorm before each decoder 
block", - ) - parser.add_argument( - "--layernorm-embedding", action="store_true", help="add layernorm to embedding" - ) - parser.add_argument("--decoder-layerdrop", type=float, metavar="D") - parser.add_argument("--decoder-learned-pos", action="store_true") - parser.add_argument("--share-decoder-input-output-embed", action="store_true") - parser.add_argument( - "--no-scale-embedding", - action="store_true", - help="if True, dont scale embeddings", - ) - parser.add_argument( - "--load-pretrained-decoder-from", - type=str, - metavar="STR", - help="model to take decoder weights from (for initialization)", - ) - parser.add_argument( - "--finetune-decoder-params", - type=str, - metavar="STR", - help="comma-separated param strings to finetune.", - ) - - -@register_model("xm_transformer") -class XMTransformerModel(FairseqEncoderDecoderModel): - @classmethod - def hub_models(cls): - base_url = "http://dl.fbaipublicfiles.com/fairseq/s2t" - model_ids = [ - "xm_transformer_600m-es_en-multi_domain", - "xm_transformer_600m-ru_en-multi_domain", - "xm_transformer_600m-fr_en-multi_domain", - "xm_transformer_600m-en_es-multi_domain", - "xm_transformer_600m-en_ru-multi_domain", - "xm_transformer_600m-en_fr-multi_domain", - "xm_transformer_600m-en_zh-multi_domain", - "xm_transformer_600m-en_ar-multi_domain", - "xm_transformer_600m-en_tr-multi_domain", - "xm_transformer_600m-en_vi-multi_domain", - "xm_transformer-21_en-xls_r_300m", - "xm_transformer-en_15-xls_r_300m", - "xm_transformer-21_en-xls_r_1b", - "xm_transformer-en_15-xls_r_1b", - "xm_transformer-21_en-xls_r_2b", - "xm_transformer-en_15-xls_r_2b", - "xm_transformer-22_16-xls_r_2b", - ] - return {i: f"{base_url}/{i}.tar.gz" for i in model_ids} - - @classmethod - def from_pretrained( - cls, - model_name_or_path, - checkpoint_file="model.pt", - data_name_or_path=".", - config_yaml="config.yaml", - **kwargs, - ): - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - archive_map=cls.hub_models(), - config_yaml=config_yaml, - **kwargs, - ) - return S2THubInterface(x["args"], x["task"], x["models"][0]) - - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @classmethod - def add_args(cls, parser): - """Add model-specific arguments to the parser.""" - Wav2VecEncoderWithAdaptor.add_args(parser) - add_decoder_args(parser) - parser.add_argument("--checkpoint-activations", action="store_true") - parser.add_argument("--offload-activations", action="store_true") - parser.add_argument("--min-params-to-wrap", type=int) - - @classmethod - def maybe_load_pretrained(cls, component, checkpoint: Optional[str] = None): - if checkpoint is None: - return component - - _load = checkpoint_utils.load_pretrained_component_from_model - try: - return _load(component, checkpoint) - except RuntimeError as e: - logger.warning(e) - return _load(component, checkpoint, strict=False) - - @classmethod - def build_encoder(cls, args): - _args = copy.deepcopy(args) - if not args.adaptor_proj: # V0 arch - state = checkpoint_utils.load_checkpoint_to_cpu(args.w2v_path) - if state.get("cfg") is not None: - encoder_embed_dim = state["cfg"]._content["model"]["encoder_embed_dim"] - elif state.get("args") is not None: - encoder_embed_dim = state["args"].encoder_embed_dim - else: - raise ValueError(f"Invalid config in {args.w2v_path}") - _args.decoder_embed_dim = encoder_embed_dim - del state - - encoder = Wav2VecEncoderWithAdaptor(_args) - return cls.maybe_load_pretrained( - encoder, 
getattr(args, "load_pretrained_encoder_from", None) - ) - - @classmethod - def build_decoder(cls, args, task, embed_tokens): - _args = copy.deepcopy(args) - if args.adaptor_proj: # not V0 arch - _args.encoder_embed_dim = _args.decoder_embed_dim - _args.dropout = args.decoder_dropout - _args.attention_dropout = args.decoder_attention_dropout - _args.activation_dropout = args.decoder_activation_dropout - _args.max_target_positions = 1024 - - decoder = TransformerDecoder(_args, task.target_dictionary, embed_tokens) - decoder = cls.maybe_load_pretrained( - decoder, getattr(args, "load_pretrained_decoder_from", None) - ) - - for k, p in decoder.named_parameters(): - p.requires_grad = need_finetuning(args.finetune_decoder_params, k) - return decoder - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - def build_embedding(dictionary, embed_dim): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - return Embedding(num_embeddings, embed_dim, padding_idx) - - decoder_embed_tokens = build_embedding( - task.target_dictionary, args.decoder_embed_dim - ) - - encoder = cls.build_encoder(args) - decoder = cls.build_decoder(args, task, decoder_embed_tokens) - return cls(encoder, decoder) - - def get_normalized_probs( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - return self.get_normalized_probs_scriptable(net_output, log_probs, sample) - - def forward(self, src_tokens, src_lengths, prev_output_tokens, **kwargs): - """ - The forward method inherited from the base class has a **kwargs - argument in its input, which is not supported in torchscript. This - method overwrites the forward method definition without **kwargs. 
- """ - encoder_out = self.encoder( - src_tokens=src_tokens, src_lengths=src_lengths, **kwargs - ) - decoder_out = self.decoder( - prev_output_tokens=prev_output_tokens, encoder_out=encoder_out - ) - return decoder_out - - def upgrade_state_dict(self, state_dict): - for k, _ in state_dict.items(): - if "adaptor.layers" in state_dict: - print(k) - new = k.replace("adaptor.layers", "adaptor_layers") - state_dict[new] = state_dict[k] - del state_dict[k] - - -def set_default_w2v_encoder_args(args): - args.no_pretrained_weights = getattr(args, "no_pretrained_weights", False) - args.dropout_input = getattr(args, "dropout_input", 0) - args.final_dropout = getattr(args, "final_dropout", 0) - args.apply_mask = getattr(args, "apply_mask", False) - args.dropout = getattr(args, "dropout", 0) - args.attention_dropout = getattr(args, "attention_dropout", 0) - args.activation_dropout = getattr(args, "activation_dropout", 0) - - args.mask_length = getattr(args, "mask_length", 10) - args.mask_prob = getattr(args, "mask_prob", 0.5) - args.mask_selection = getattr(args, "mask_selection", "static") - args.mask_other = getattr(args, "mask_other", 0) - args.no_mask_overlap = getattr(args, "no_mask_overlap", False) - args.mask_channel_length = getattr(args, "mask_channel_length", 10) - args.mask_channel_prob = getattr(args, "mask_channel_prob", 0.5) - args.mask_channel_before = getattr(args, "mask_channel_before", False) - args.mask_channel_selection = getattr(args, "mask_channel_selection", "static") - args.mask_channel_other = getattr(args, "mask_channel_other", 0) - args.no_mask_channel_overlap = getattr(args, "no_mask_channel_overlap", False) - - args.freeze_finetune_updates = getattr(args, "freeze_finetune_updates", 0) - args.feature_grad_mult = 0.1 - args.layerdrop = getattr(args, "layerdrop", 0.0) - - args.normalize = getattr(args, "normalize", False) - args.finetune_w2v_params = getattr(args, "finetune_w2v_params", "all") - args.w2v_freezing_updates = getattr(args, "w2v_freezing_updates", None) - - -def set_default_adaptor_args(args): - args.adaptor_n_layers = getattr(args, "adaptor_n_layers", 3) - args.adaptor_kernel_size = getattr(args, "adaptor_kernel_size", 3) - args.adaptor_stride = getattr(args, "adaptor_stride", 2) - args.adaptor_layerdrop = getattr(args, "adaptor_layerdrop", 0.0) - args.adaptor_layernorm = getattr(args, "adaptor_layernorm", False) - args.adaptor_proj = getattr(args, "adaptor_proj", False) - - -def set_default_transformer_decoder_args(args): - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4 * 1024) - args.decoder_layers = getattr(args, "decoder_layers", 12) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.decoder_attention_dropout = getattr(args, "decoder_attention_dropout", 0.0) - args.decoder_activation_dropout = getattr(args, "decoder_activation_dropout", 0.0) - args.decoder_dropout = getattr(args, "decoder_dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - 
args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0) - args.layernorm_embedding = getattr(args, "layernorm_embedding", False) - - args.activation_fn = getattr(args, "activation_fn", "gelu") - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - args.pooler_dropout = getattr(args, "pooler_dropout", 0.0) - - args.finetune_decoder_params = getattr(args, "finetune_decoder_params", "all") - - -def set_default_general_args(args): - args.checkpoint_activations = getattr(args, "checkpoint_activations", False) - args.offload_activations = getattr(args, "offload_activations", False) - args.min_params_to_wrap = getattr(args, "min_params_to_wrap", int(1e8)) - - -@register_model_architecture(model_name="xm_transformer", arch_name="xm_transformer") -def base_architecture(args): - set_default_general_args(args) - set_default_w2v_encoder_args(args) - set_default_adaptor_args(args) - set_default_transformer_decoder_args(args) diff --git a/kosmos-g/fairseq/fairseq/models/text_to_speech/__init__.py b/kosmos-g/fairseq/fairseq/models/text_to_speech/__init__.py deleted file mode 100644 index 652fee0d6..000000000 --- a/kosmos-g/fairseq/fairseq/models/text_to_speech/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
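Two conventions in the `xm_transformer` code above are worth spelling out. The `set_default_*_args` helpers fill in only attributes that are still unset, so checkpoints trained before an option existed keep loading after new options are added, and `need_finetuning` decides which pretrained parameters stay trainable by substring-matching a comma-separated spec. A minimal self-contained sketch of both (plain Python, not the fairseq API):

```python
from argparse import Namespace

import torch.nn as nn


# getattr-with-default pattern from set_default_adaptor_args above: a field
# is only filled in when the checkpoint/CLI did not already set it
def set_default_adaptor_args(args):
    args.adaptor_n_layers = getattr(args, "adaptor_n_layers", 3)
    args.adaptor_kernel_size = getattr(args, "adaptor_kernel_size", 3)
    args.adaptor_stride = getattr(args, "adaptor_stride", 2)


args = Namespace(adaptor_stride=1)  # user overrode only the stride
set_default_adaptor_args(args)
assert (args.adaptor_n_layers, args.adaptor_stride) == (3, 1)


# need_finetuning, as defined above: "all" unfreezes everything; otherwise
# any comma-separated substring match against the parameter name unfreezes it
def need_finetuning(ft_params, param_name):
    if ft_params == "all":
        return True
    return any(p in param_name for p in ft_params.split(","))


# e.g. keep only LayerNorm and self-attention parameters trainable
layer = nn.TransformerEncoderLayer(d_model=8, nhead=2)
for name, param in layer.named_parameters():
    param.requires_grad = need_finetuning("norm,self_attn", name)
```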
-
-from .tacotron2 import *  # noqa
-from .tts_transformer import *  # noqa
-from .fastspeech2 import *  # noqa
diff --git a/kosmos-g/fairseq/fairseq/models/text_to_speech/codehifigan.py b/kosmos-g/fairseq/fairseq/models/text_to_speech/codehifigan.py
deleted file mode 100644
index ee4bb7d6f..000000000
--- a/kosmos-g/fairseq/fairseq/models/text_to_speech/codehifigan.py
+++ /dev/null
@@ -1,77 +0,0 @@
-from argparse import Namespace
-import torch
-import torch.nn as nn
-
-from fairseq.models.text_to_speech.fastspeech2 import VariancePredictor
-from fairseq.models.text_to_speech.hifigan import Generator
-
-
-class CodeGenerator(Generator):
-    def __init__(self, cfg):
-        super().__init__(cfg)
-        self.dict = nn.Embedding(cfg["num_embeddings"], cfg["embedding_dim"])
-        self.multispkr = cfg.get("multispkr", None)
-        self.embedder = cfg.get("embedder_params", None)
-
-        if self.multispkr and not self.embedder:
-            self.spkr = nn.Embedding(cfg.get("num_speakers", 200), cfg["embedding_dim"])
-        elif self.embedder:
-            self.spkr = nn.Linear(cfg.get("embedder_dim", 256), cfg["embedding_dim"])
-
-        self.dur_predictor = None
-        if cfg.get("dur_predictor_params", None):
-            self.dur_predictor = VariancePredictor(
-                Namespace(**cfg["dur_predictor_params"])
-            )
-
-    @staticmethod
-    def _upsample(signal, max_frames):
-        if signal.dim() == 3:
-            bsz, channels, cond_length = signal.size()
-        elif signal.dim() == 2:
-            signal = signal.unsqueeze(2)
-            bsz, channels, cond_length = signal.size()
-        else:
-            signal = signal.view(-1, 1, 1)
-            bsz, channels, cond_length = signal.size()
-
-        signal = signal.unsqueeze(3).repeat(1, 1, 1, max_frames // cond_length)
-
-        # pad zeros as needed (if the conditioning length does not divide max_frames evenly)
-        remainder = (max_frames - signal.shape[2] * signal.shape[3]) // signal.shape[3]
-        if remainder > 0:
-            raise NotImplementedError(
-                "Padding condition signal - misalignment between condition features."
-            )
-
-        signal = signal.view(bsz, channels, max_frames)
-        return signal
-
-    def forward(self, **kwargs):
-        x = self.dict(kwargs["code"]).transpose(1, 2)
-
-        if self.dur_predictor and kwargs.get("dur_prediction", False):
-            assert x.size(0) == 1, "only support single sample"
-            log_dur_pred = self.dur_predictor(x.transpose(1, 2))
-            dur_out = torch.clamp(
-                torch.round((torch.exp(log_dur_pred) - 1)).long(), min=1
-            )
-            # B x C x T
-            x = torch.repeat_interleave(x, dur_out.view(-1), dim=2)
-
-        if self.multispkr:
-            assert (
-                "spkr" in kwargs
-            ), 'require "spkr" input for multispeaker CodeHiFiGAN vocoder'
-            spkr = self.spkr(kwargs["spkr"]).transpose(1, 2)
-            spkr = self._upsample(spkr, x.shape[-1])
-            x = torch.cat([x, spkr], dim=1)
-
-        for k, feat in kwargs.items():
-            if k in ["spkr", "code", "dur_prediction"]:
-                continue
-
-            feat = self._upsample(feat, x.shape[-1])
-            x = torch.cat([x, feat], dim=1)
-
-        return super().forward(x)
diff --git a/kosmos-g/fairseq/fairseq/models/text_to_speech/fastspeech2.py b/kosmos-g/fairseq/fairseq/models/text_to_speech/fastspeech2.py
deleted file mode 100644
index f6ccac5ab..000000000
--- a/kosmos-g/fairseq/fairseq/models/text_to_speech/fastspeech2.py
+++ /dev/null
@@ -1,448 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
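`CodeGenerator.forward` above rests on two tensor manipulations: predicted durations expand each discrete-unit embedding along time with `torch.repeat_interleave`, and speaker or other conditioning vectors are tiled to the same frame rate by `_upsample` before being concatenated channel-wise. A toy shape walkthrough (illustrative values only, not the fairseq API):

```python
import torch

# duration expansion: 3 unit embeddings (B x C x T) with durations [2, 1, 3]
# become a 6-frame sequence u0, u0, u1, u2, u2, u2
units = torch.randn(1, 4, 3)
durations = torch.tensor([2, 1, 3])
frames = torch.repeat_interleave(units, durations, dim=2)
assert frames.shape == (1, 4, 6)

# conditioning upsample: a single speaker vector (B x C) is tiled across all
# frames, mirroring the repeat-based broadcast in CodeGenerator._upsample
spkr = torch.randn(1, 4)                    # B x C
spkr = spkr.unsqueeze(2)                    # B x C x 1
spkr = spkr.repeat(1, 1, frames.shape[-1])  # B x C x T
cond = torch.cat([frames, spkr], dim=1)     # concatenate along channels
assert cond.shape == (1, 8, 6)
```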
- -import logging - -import torch -from torch import nn - -from fairseq import utils -from fairseq.data.data_utils import lengths_to_padding_mask -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderModel, - register_model, - register_model_architecture, -) -from fairseq.models.text_to_speech.hub_interface import TTSHubInterface -from fairseq.models.text_to_speech.tacotron2 import Postnet -from fairseq.modules import ( - FairseqDropout, - LayerNorm, - MultiheadAttention, - PositionalEmbedding, -) - -logger = logging.getLogger(__name__) - - -def model_init(m): - if isinstance(m, nn.Conv1d): - nn.init.xavier_uniform_(m.weight, torch.nn.init.calculate_gain("relu")) - - -def Embedding(num_embeddings, embedding_dim, padding_idx=None): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - return m - - -class PositionwiseFeedForward(nn.Module): - def __init__(self, in_dim, hidden_dim, kernel_size, dropout): - super().__init__() - self.ffn = nn.Sequential( - nn.Conv1d( - in_dim, - hidden_dim, - kernel_size=kernel_size, - padding=(kernel_size - 1) // 2, - ), - nn.ReLU(), - nn.Conv1d( - hidden_dim, - in_dim, - kernel_size=kernel_size, - padding=(kernel_size - 1) // 2, - ), - ) - self.layer_norm = LayerNorm(in_dim) - self.dropout = self.dropout_module = FairseqDropout( - p=dropout, module_name=self.__class__.__name__ - ) - - def forward(self, x): - # B x T x C - residual = x - x = self.ffn(x.transpose(1, 2)).transpose(1, 2) - x = self.dropout(x) - return self.layer_norm(x + residual) - - -class FFTLayer(torch.nn.Module): - def __init__( - self, embed_dim, n_heads, hidden_dim, kernel_size, dropout, attention_dropout - ): - super().__init__() - self.self_attn = MultiheadAttention( - embed_dim, n_heads, dropout=attention_dropout, self_attention=True - ) - self.layer_norm = LayerNorm(embed_dim) - self.ffn = PositionwiseFeedForward( - embed_dim, hidden_dim, kernel_size, dropout=dropout - ) - - def forward(self, x, padding_mask=None): - # B x T x C - residual = x - x = x.transpose(0, 1) - x, _ = self.self_attn( - query=x, key=x, value=x, key_padding_mask=padding_mask, need_weights=False - ) - x = x.transpose(0, 1) - x = self.layer_norm(x + residual) - return self.ffn(x) - - -class LengthRegulator(nn.Module): - def forward(self, x, durations): - # x: B x T x C - out_lens = durations.sum(dim=1) - max_len = out_lens.max() - bsz, seq_len, dim = x.size() - out = x.new_zeros((bsz, max_len, dim)) - - for b in range(bsz): - indices = [] - for t in range(seq_len): - indices.extend([t] * utils.item(durations[b, t])) - indices = torch.tensor(indices, dtype=torch.long).to(x.device) - out_len = utils.item(out_lens[b]) - out[b, :out_len] = x[b].index_select(0, indices) - - return out, out_lens - - -class VariancePredictor(nn.Module): - def __init__(self, args): - super().__init__() - self.conv1 = nn.Sequential( - nn.Conv1d( - args.encoder_embed_dim, - args.var_pred_hidden_dim, - kernel_size=args.var_pred_kernel_size, - padding=(args.var_pred_kernel_size - 1) // 2, - ), - nn.ReLU(), - ) - self.ln1 = nn.LayerNorm(args.var_pred_hidden_dim) - self.dropout_module = FairseqDropout( - p=args.var_pred_dropout, module_name=self.__class__.__name__ - ) - self.conv2 = nn.Sequential( - nn.Conv1d( - args.var_pred_hidden_dim, - args.var_pred_hidden_dim, - kernel_size=args.var_pred_kernel_size, - padding=1, - ), - nn.ReLU(), - ) - self.ln2 = nn.LayerNorm(args.var_pred_hidden_dim) - self.proj = nn.Linear(args.var_pred_hidden_dim, 1) - - def 
forward(self, x): - # Input: B x T x C; Output: B x T - x = self.conv1(x.transpose(1, 2)).transpose(1, 2) - x = self.dropout_module(self.ln1(x)) - x = self.conv2(x.transpose(1, 2)).transpose(1, 2) - x = self.dropout_module(self.ln2(x)) - return self.proj(x).squeeze(dim=2) - - -class VarianceAdaptor(nn.Module): - def __init__(self, args): - super().__init__() - self.args = args - self.length_regulator = LengthRegulator() - self.duration_predictor = VariancePredictor(args) - self.pitch_predictor = VariancePredictor(args) - self.energy_predictor = VariancePredictor(args) - - n_bins, steps = self.args.var_pred_n_bins, self.args.var_pred_n_bins - 1 - self.pitch_bins = torch.linspace(args.pitch_min, args.pitch_max, steps) - self.embed_pitch = Embedding(n_bins, args.encoder_embed_dim) - self.energy_bins = torch.linspace(args.energy_min, args.energy_max, steps) - self.embed_energy = Embedding(n_bins, args.encoder_embed_dim) - - def get_pitch_emb(self, x, tgt=None, factor=1.0): - out = self.pitch_predictor(x) - bins = self.pitch_bins.to(x.device) - if tgt is None: - out = out * factor - emb = self.embed_pitch(torch.bucketize(out, bins)) - else: - emb = self.embed_pitch(torch.bucketize(tgt, bins)) - return out, emb - - def get_energy_emb(self, x, tgt=None, factor=1.0): - out = self.energy_predictor(x) - bins = self.energy_bins.to(x.device) - if tgt is None: - out = out * factor - emb = self.embed_energy(torch.bucketize(out, bins)) - else: - emb = self.embed_energy(torch.bucketize(tgt, bins)) - return out, emb - - def forward( - self, - x, - padding_mask, - durations=None, - pitches=None, - energies=None, - d_factor=1.0, - p_factor=1.0, - e_factor=1.0, - ): - # x: B x T x C - log_dur_out = self.duration_predictor(x) - dur_out = torch.clamp( - torch.round((torch.exp(log_dur_out) - 1) * d_factor).long(), min=0 - ) - dur_out.masked_fill_(padding_mask, 0) - - pitch_out, pitch_emb = self.get_pitch_emb(x, pitches, p_factor) - x = x + pitch_emb - energy_out, energy_emb = self.get_energy_emb(x, energies, e_factor) - x = x + energy_emb - - x, out_lens = self.length_regulator( - x, dur_out if durations is None else durations - ) - - return x, out_lens, log_dur_out, pitch_out, energy_out - - -class FastSpeech2Encoder(FairseqEncoder): - def __init__(self, args, src_dict, embed_speaker): - super().__init__(src_dict) - self.args = args - self.padding_idx = src_dict.pad() - self.n_frames_per_step = args.n_frames_per_step - self.out_dim = args.output_frame_dim * args.n_frames_per_step - - self.embed_speaker = embed_speaker - self.spk_emb_proj = None - if embed_speaker is not None: - self.spk_emb_proj = nn.Linear( - args.encoder_embed_dim + args.speaker_embed_dim, args.encoder_embed_dim - ) - - self.dropout_module = FairseqDropout( - p=args.dropout, module_name=self.__class__.__name__ - ) - self.embed_tokens = Embedding( - len(src_dict), args.encoder_embed_dim, padding_idx=self.padding_idx - ) - - self.embed_positions = PositionalEmbedding( - args.max_source_positions, args.encoder_embed_dim, self.padding_idx - ) - self.pos_emb_alpha = nn.Parameter(torch.ones(1)) - self.dec_pos_emb_alpha = nn.Parameter(torch.ones(1)) - - self.encoder_fft_layers = nn.ModuleList( - FFTLayer( - args.encoder_embed_dim, - args.encoder_attention_heads, - args.fft_hidden_dim, - args.fft_kernel_size, - dropout=args.dropout, - attention_dropout=args.attention_dropout, - ) - for _ in range(args.encoder_layers) - ) - - self.var_adaptor = VarianceAdaptor(args) - - self.decoder_fft_layers = nn.ModuleList( - FFTLayer( - args.decoder_embed_dim, - 
args.decoder_attention_heads, - args.fft_hidden_dim, - args.fft_kernel_size, - dropout=args.dropout, - attention_dropout=args.attention_dropout, - ) - for _ in range(args.decoder_layers) - ) - - self.out_proj = nn.Linear(args.decoder_embed_dim, self.out_dim) - - self.postnet = None - if args.add_postnet: - self.postnet = Postnet( - self.out_dim, - args.postnet_conv_dim, - args.postnet_conv_kernel_size, - args.postnet_layers, - args.postnet_dropout, - ) - - self.apply(model_init) - - def forward( - self, - src_tokens, - src_lengths=None, - speaker=None, - durations=None, - pitches=None, - energies=None, - **kwargs, - ): - x = self.embed_tokens(src_tokens) - - enc_padding_mask = src_tokens.eq(self.padding_idx) - x += self.pos_emb_alpha * self.embed_positions(enc_padding_mask) - x = self.dropout_module(x) - - for layer in self.encoder_fft_layers: - x = layer(x, enc_padding_mask) - - if self.embed_speaker is not None: - bsz, seq_len, _ = x.size() - emb = self.embed_speaker(speaker).expand(bsz, seq_len, -1) - x = self.spk_emb_proj(torch.cat([x, emb], dim=2)) - - x, out_lens, log_dur_out, pitch_out, energy_out = self.var_adaptor( - x, enc_padding_mask, durations, pitches, energies - ) - - dec_padding_mask = lengths_to_padding_mask(out_lens) - x += self.dec_pos_emb_alpha * self.embed_positions(dec_padding_mask) - for layer in self.decoder_fft_layers: - x = layer(x, dec_padding_mask) - - x = self.out_proj(x) - x_post = None - if self.postnet is not None: - x_post = x + self.postnet(x) - return x, x_post, out_lens, log_dur_out, pitch_out, energy_out - - -@register_model("fastspeech2") -class FastSpeech2Model(FairseqEncoderModel): - """ - Implementation for https://arxiv.org/abs/2006.04558 - """ - - NON_AUTOREGRESSIVE = True - - @classmethod - def hub_models(cls): - base_url = "http://dl.fbaipublicfiles.com/fairseq/s2" - model_ids = [ - "fastspeech2-en-ljspeech", - "fastspeech2-en-200_speaker-cv4", - ] - return {i: f"{base_url}/{i}.tar.gz" for i in model_ids} - - @classmethod - def from_pretrained( - cls, - model_name_or_path, - checkpoint_file="model.pt", - data_name_or_path=".", - config_yaml="config.yaml", - vocoder: str = "griffin_lim", - fp16: bool = False, - **kwargs, - ): - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - archive_map=cls.hub_models(), - config_yaml=config_yaml, - vocoder=vocoder, - fp16=fp16, - **kwargs, - ) - return TTSHubInterface(x["args"], x["task"], x["models"][0]) - - @staticmethod - def add_args(parser): - parser.add_argument("--dropout", type=float) - parser.add_argument("--output-frame-dim", type=int) - parser.add_argument("--speaker-embed-dim", type=int) - # FFT blocks - parser.add_argument("--fft-hidden-dim", type=int) - parser.add_argument("--fft-kernel-size", type=int) - parser.add_argument("--attention-dropout", type=float) - parser.add_argument("--encoder-layers", type=int) - parser.add_argument("--encoder-embed-dim", type=int) - parser.add_argument("--encoder-attention-heads", type=int) - parser.add_argument("--decoder-layers", type=int) - parser.add_argument("--decoder-embed-dim", type=int) - parser.add_argument("--decoder-attention-heads", type=int) - # variance predictor - parser.add_argument("--var-pred-n-bins", type=int) - parser.add_argument("--var-pred-hidden-dim", type=int) - parser.add_argument("--var-pred-kernel-size", type=int) - parser.add_argument("--var-pred-dropout", type=float) - # postnet - parser.add_argument("--add-postnet", action="store_true") - 
parser.add_argument("--postnet-dropout", type=float) - parser.add_argument("--postnet-layers", type=int) - parser.add_argument("--postnet-conv-dim", type=int) - parser.add_argument("--postnet-conv-kernel-size", type=int) - - def __init__(self, encoder, args, src_dict): - super().__init__(encoder) - self._num_updates = 0 - - out_dim = args.output_frame_dim * args.n_frames_per_step - self.ctc_proj = None - if getattr(args, "ctc_weight", 0.0) > 0.0: - self.ctc_proj = nn.Linear(out_dim, len(src_dict)) - - @classmethod - def build_model(cls, args, task): - embed_speaker = task.get_speaker_embeddings(args) - encoder = FastSpeech2Encoder(args, task.src_dict, embed_speaker) - return cls(encoder, args, task.src_dict) - - def set_num_updates(self, num_updates): - super().set_num_updates(num_updates) - self._num_updates = num_updates - - def get_normalized_probs(self, net_output, log_probs, sample=None): - logits = self.ctc_proj(net_output[0]) - if log_probs: - return utils.log_softmax(logits.float(), dim=-1) - else: - return utils.softmax(logits.float(), dim=-1) - - -@register_model_architecture("fastspeech2", "fastspeech2") -def base_architecture(args): - args.dropout = getattr(args, "dropout", 0.2) - args.output_frame_dim = getattr(args, "output_frame_dim", 80) - args.speaker_embed_dim = getattr(args, "speaker_embed_dim", 64) - # FFT blocks - args.fft_hidden_dim = getattr(args, "fft_hidden_dim", 1024) - args.fft_kernel_size = getattr(args, "fft_kernel_size", 9) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.encoder_layers = getattr(args, "encoder_layers", 4) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 2) - args.decoder_layers = getattr(args, "decoder_layers", 4) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 2) - # variance predictor - args.var_pred_n_bins = getattr(args, "var_pred_n_bins", 256) - args.var_pred_hidden_dim = getattr(args, "var_pred_hidden_dim", 256) - args.var_pred_kernel_size = getattr(args, "var_pred_kernel_size", 3) - args.var_pred_dropout = getattr(args, "var_pred_dropout", 0.5) - # postnet - args.add_postnet = getattr(args, "add_postnet", False) - args.postnet_dropout = getattr(args, "postnet_dropout", 0.5) - args.postnet_layers = getattr(args, "postnet_layers", 5) - args.postnet_conv_dim = getattr(args, "postnet_conv_dim", 512) - args.postnet_conv_kernel_size = getattr(args, "postnet_conv_kernel_size", 5) diff --git a/kosmos-g/fairseq/fairseq/models/text_to_speech/hifigan.py b/kosmos-g/fairseq/fairseq/models/text_to_speech/hifigan.py deleted file mode 100644 index a777af516..000000000 --- a/kosmos-g/fairseq/fairseq/models/text_to_speech/hifigan.py +++ /dev/null @@ -1,179 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn import Conv1d, ConvTranspose1d -from torch.nn.utils import weight_norm, remove_weight_norm - -LRELU_SLOPE = 0.1 - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return (kernel_size * dilation - dilation) // 2 - - -class ResBlock(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - 
dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for layer in self.convs1: - remove_weight_norm(layer) - for layer in self.convs2: - remove_weight_norm(layer) - - -class Generator(torch.nn.Module): - def __init__(self, cfg): - super(Generator, self).__init__() - self.num_kernels = len(cfg["resblock_kernel_sizes"]) - self.num_upsamples = len(cfg["upsample_rates"]) - self.conv_pre = weight_norm( - Conv1d( - cfg.get("model_in_dim", 80), - cfg["upsample_initial_channel"], - 7, - 1, - padding=3, - ) - ) - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate( - zip(cfg["upsample_rates"], cfg["upsample_kernel_sizes"]) - ): - self.ups.append( - weight_norm( - ConvTranspose1d( - cfg["upsample_initial_channel"] // (2 ** i), - cfg["upsample_initial_channel"] // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = cfg["upsample_initial_channel"] // (2 ** (i + 1)) - for k, d in zip( - cfg["resblock_kernel_sizes"], cfg["resblock_dilation_sizes"] - ): - self.resblocks.append(ResBlock(ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x): - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print("Removing weight norm...") - for layer in self.ups: - remove_weight_norm(layer) - for layer in self.resblocks: - layer.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) diff --git a/kosmos-g/fairseq/fairseq/models/text_to_speech/hub_interface.py b/kosmos-g/fairseq/fairseq/models/text_to_speech/hub_interface.py deleted file mode 100644 index 8e367665d..000000000 --- a/kosmos-g/fairseq/fairseq/models/text_to_speech/hub_interface.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
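A quick note on the HiFi-GAN generator above: `get_padding(k, d) = (k * d - d) // 2` is "same" padding for odd kernels, so every dilated convolution in `ResBlock` preserves sequence length, and only the `ConvTranspose1d` upsampling stages change the frame rate, each by exactly its stride. A small sanity check (shapes are illustrative):

```python
import torch
from torch import nn


def get_padding(kernel_size, dilation=1):
    # "same" padding for odd kernels: output length == input length
    return (kernel_size * dilation - dilation) // 2


x = torch.randn(1, 8, 100)
for dilation in (1, 3, 5):
    conv = nn.Conv1d(8, 8, kernel_size=3, dilation=dilation,
                     padding=get_padding(3, dilation))
    assert conv(x).shape[-1] == 100  # length preserved at every dilation

# the upsampling layers are the only place the rate changes: stride u with
# kernel k and padding (k - u) // 2 scales the length by exactly u
up = nn.ConvTranspose1d(8, 4, kernel_size=16, stride=8, padding=(16 - 8) // 2)
assert up(x).shape[-1] == 800
```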
- -import logging -from pathlib import Path -from typing import Optional, Dict, Tuple -import random - -import torch -import torch.nn as nn - -logger = logging.getLogger(__name__) - - -class TTSHubInterface(nn.Module): - def __init__(self, cfg, task, model): - super().__init__() - self.cfg = cfg - self.task = task - self.model = model - self.model.eval() - - self.update_cfg_with_data_cfg(self.cfg, self.task.data_cfg) - self.generator = self.task.build_generator([self.model], self.cfg) - - @classmethod - def phonemize( - cls, - text: str, - lang: Optional[str], - phonemizer: Optional[str] = None, - preserve_punct: bool = False, - to_simplified_zh: bool = False, - ): - if to_simplified_zh: - import hanziconv - - text = hanziconv.HanziConv.toSimplified(text) - - if phonemizer == "g2p": - import g2p_en - - g2p = g2p_en.G2p() - if preserve_punct: - return " ".join("|" if p == " " else p for p in g2p(text)) - else: - res = [{",": "sp", ";": "sp"}.get(p, p) for p in g2p(text)] - return " ".join(p for p in res if p.isalnum()) - if phonemizer == "g2pc": - import g2pc - - g2p = g2pc.G2pC() - return " ".join([w[3] for w in g2p(text)]) - elif phonemizer == "ipa": - assert lang is not None - import phonemizer - from phonemizer.separator import Separator - - lang_map = {"en": "en-us", "fr": "fr-fr"} - return phonemizer.phonemize( - text, - backend="espeak", - language=lang_map.get(lang, lang), - separator=Separator(word="| ", phone=" "), - ) - else: - return text - - @classmethod - def tokenize(cls, text: str, tkn_cfg: Dict[str, str]): - sentencepiece_model = tkn_cfg.get("sentencepiece_model", None) - if sentencepiece_model is not None: - assert Path(sentencepiece_model).exists() - import sentencepiece as sp - - spm = sp.SentencePieceProcessor() - spm.Load(sentencepiece_model) - return " ".join(spm.Encode(text, out_type=str)) - else: - return text - - @classmethod - def update_cfg_with_data_cfg(cls, cfg, data_cfg): - cfg["task"].vocoder = data_cfg.vocoder.get("type", "griffin_lim") - - @classmethod - def get_model_input( - cls, task, text: str, speaker: Optional[int] = None, verbose: bool = False - ): - phonemized = cls.phonemize( - text, - task.data_cfg.hub.get("lang", None), - task.data_cfg.hub.get("phonemizer", None), - task.data_cfg.hub.get("preserve_punct", False), - task.data_cfg.hub.get("to_simplified_zh", False), - ) - tkn_cfg = task.data_cfg.bpe_tokenizer - tokenized = cls.tokenize(phonemized, tkn_cfg) - if verbose: - logger.info(f"text: {text}") - logger.info(f"phonemized: {phonemized}") - logger.info(f"tokenized: {tokenized}") - - spk = task.data_cfg.hub.get("speaker", speaker) - n_speakers = len(task.speaker_to_id or {}) - if spk is None and n_speakers > 0: - spk = random.randint(0, n_speakers - 1) - if spk is not None: - spk = max(0, min(spk, n_speakers - 1)) - if verbose: - logger.info(f"speaker: {spk}") - spk = None if spk is None else torch.Tensor([[spk]]).long() - - src_tokens = task.src_dict.encode_line(tokenized).view(1, -1) - src_lengths = torch.Tensor([len(tokenized.split())]).long() - return { - "net_input": { - "src_tokens": src_tokens, - "src_lengths": src_lengths, - "prev_output_tokens": None, - }, - "target_lengths": None, - "speaker": spk, - } - - @classmethod - def get_prediction(cls, task, model, generator, sample) -> Tuple[torch.Tensor, int]: - prediction = generator.generate(model, sample) - return prediction[0]["waveform"], task.sr - - def predict( - self, text: str, speaker: Optional[int] = None, verbose: bool = False - ) -> Tuple[torch.Tensor, int]: - sample = 
self.get_model_input(self.task, text, speaker, verbose=verbose) - return self.get_prediction(self.task, self.model, self.generator, sample) diff --git a/kosmos-g/fairseq/fairseq/models/text_to_speech/tacotron2.py b/kosmos-g/fairseq/fairseq/models/text_to_speech/tacotron2.py deleted file mode 100644 index 4df407561..000000000 --- a/kosmos-g/fairseq/fairseq/models/text_to_speech/tacotron2.py +++ /dev/null @@ -1,380 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import torch -from torch import nn -from torch.nn import functional as F - -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, - register_model, - register_model_architecture, -) -from fairseq.modules import LSTMCellWithZoneOut, LocationAttention - - -logger = logging.getLogger(__name__) - - -def encoder_init(m): - if isinstance(m, nn.Conv1d): - nn.init.xavier_uniform_(m.weight, torch.nn.init.calculate_gain("relu")) - - -class Tacotron2Encoder(FairseqEncoder): - def __init__(self, args, src_dict, embed_speaker): - super().__init__(src_dict) - self.padding_idx = src_dict.pad() - self.embed_speaker = embed_speaker - self.spk_emb_proj = None - if embed_speaker is not None: - self.spk_emb_proj = nn.Linear( - args.encoder_embed_dim + args.speaker_embed_dim, args.encoder_embed_dim - ) - - self.embed_tokens = nn.Embedding( - len(src_dict), args.encoder_embed_dim, padding_idx=self.padding_idx - ) - - assert args.encoder_conv_kernel_size % 2 == 1 - self.convolutions = nn.ModuleList( - nn.Sequential( - nn.Conv1d( - args.encoder_embed_dim, - args.encoder_embed_dim, - kernel_size=args.encoder_conv_kernel_size, - padding=((args.encoder_conv_kernel_size - 1) // 2), - ), - nn.BatchNorm1d(args.encoder_embed_dim), - nn.ReLU(), - nn.Dropout(args.encoder_dropout), - ) - for _ in range(args.encoder_conv_layers) - ) - - self.lstm = nn.LSTM( - args.encoder_embed_dim, - args.encoder_embed_dim // 2, - num_layers=args.encoder_lstm_layers, - batch_first=True, - bidirectional=True, - ) - - self.apply(encoder_init) - - def forward(self, src_tokens, src_lengths=None, speaker=None, **kwargs): - x = self.embed_tokens(src_tokens) - x = x.transpose(1, 2).contiguous() # B x T x C -> B x C x T - for conv in self.convolutions: - x = conv(x) - x = x.transpose(1, 2).contiguous() # B x C x T -> B x T x C - - src_lengths = src_lengths.cpu().long() - x = nn.utils.rnn.pack_padded_sequence(x, src_lengths, batch_first=True) - x = self.lstm(x)[0] - x = nn.utils.rnn.pad_packed_sequence(x, batch_first=True)[0] - - encoder_padding_mask = src_tokens.eq(self.padding_idx) - - if self.embed_speaker is not None: - seq_len, bsz, _ = x.size() - emb = self.embed_speaker(speaker).expand(seq_len, bsz, -1) - x = self.spk_emb_proj(torch.cat([x, emb], dim=2)) - - return { - "encoder_out": [x], # B x T x C - "encoder_padding_mask": encoder_padding_mask, # B x T - } - - -class Prenet(nn.Module): - def __init__(self, in_dim, n_layers, n_units, dropout): - super().__init__() - self.layers = nn.ModuleList( - nn.Sequential(nn.Linear(in_dim if i == 0 else n_units, n_units), nn.ReLU()) - for i in range(n_layers) - ) - self.dropout = dropout - - def forward(self, x): - for layer in self.layers: - x = F.dropout(layer(x), p=self.dropout) # always applies dropout - return x - - -class Postnet(nn.Module): - def __init__(self, in_dim, n_channels, kernel_size, n_layers, dropout): - 
super(Postnet, self).__init__() - self.convolutions = nn.ModuleList() - assert kernel_size % 2 == 1 - for i in range(n_layers): - cur_layers = ( - [ - nn.Conv1d( - in_dim if i == 0 else n_channels, - n_channels if i < n_layers - 1 else in_dim, - kernel_size=kernel_size, - padding=((kernel_size - 1) // 2), - ), - nn.BatchNorm1d(n_channels if i < n_layers - 1 else in_dim), - ] - + ([nn.Tanh()] if i < n_layers - 1 else []) - + [nn.Dropout(dropout)] - ) - nn.init.xavier_uniform_( - cur_layers[0].weight, - torch.nn.init.calculate_gain("tanh" if i < n_layers - 1 else "linear"), - ) - self.convolutions.append(nn.Sequential(*cur_layers)) - - def forward(self, x): - x = x.transpose(1, 2) # B x T x C -> B x C x T - for conv in self.convolutions: - x = conv(x) - return x.transpose(1, 2) - - -def decoder_init(m): - if isinstance(m, torch.nn.Conv1d): - nn.init.xavier_uniform_(m.weight, torch.nn.init.calculate_gain("tanh")) - - -class Tacotron2Decoder(FairseqIncrementalDecoder): - def __init__(self, args, src_dict): - super().__init__(None) - self.args = args - self.n_frames_per_step = args.n_frames_per_step - self.out_dim = args.output_frame_dim * args.n_frames_per_step - - self.prenet = Prenet( - self.out_dim, args.prenet_layers, args.prenet_dim, args.prenet_dropout - ) - - # take prev_context, prev_frame, (speaker embedding) as input - self.attention_lstm = LSTMCellWithZoneOut( - args.zoneout, - args.prenet_dim + args.encoder_embed_dim, - args.decoder_lstm_dim, - ) - - # take attention_lstm output, attention_state, encoder_out as input - self.attention = LocationAttention( - args.attention_dim, - args.encoder_embed_dim, - args.decoder_lstm_dim, - (1 + int(args.attention_use_cumprob)), - args.attention_conv_dim, - args.attention_conv_kernel_size, - ) - - # take attention_lstm output, context, (gated_latent) as input - self.lstm = nn.ModuleList( - LSTMCellWithZoneOut( - args.zoneout, - args.encoder_embed_dim + args.decoder_lstm_dim, - args.decoder_lstm_dim, - ) - for i in range(args.decoder_lstm_layers) - ) - - proj_in_dim = args.encoder_embed_dim + args.decoder_lstm_dim - self.feat_proj = nn.Linear(proj_in_dim, self.out_dim) - self.eos_proj = nn.Linear(proj_in_dim, 1) - - self.postnet = Postnet( - self.out_dim, - args.postnet_conv_dim, - args.postnet_conv_kernel_size, - args.postnet_layers, - args.postnet_dropout, - ) - - self.ctc_proj = None - if getattr(args, "ctc_weight", 0.0) > 0.0: - self.ctc_proj = nn.Linear(self.out_dim, len(src_dict)) - - self.apply(decoder_init) - - def _get_states(self, incremental_state, enc_out): - bsz, in_len, _ = enc_out.size() - alstm_h = self.get_incremental_state(incremental_state, "alstm_h") - if alstm_h is None: - alstm_h = enc_out.new_zeros(bsz, self.args.decoder_lstm_dim) - alstm_c = self.get_incremental_state(incremental_state, "alstm_c") - if alstm_c is None: - alstm_c = enc_out.new_zeros(bsz, self.args.decoder_lstm_dim) - - lstm_h = self.get_incremental_state(incremental_state, "lstm_h") - if lstm_h is None: - lstm_h = [ - enc_out.new_zeros(bsz, self.args.decoder_lstm_dim) - for _ in range(self.args.decoder_lstm_layers) - ] - lstm_c = self.get_incremental_state(incremental_state, "lstm_c") - if lstm_c is None: - lstm_c = [ - enc_out.new_zeros(bsz, self.args.decoder_lstm_dim) - for _ in range(self.args.decoder_lstm_layers) - ] - - attn_w = self.get_incremental_state(incremental_state, "attn_w") - if attn_w is None: - attn_w = enc_out.new_zeros(bsz, in_len) - attn_w_cum = self.get_incremental_state(incremental_state, "attn_w_cum") - if attn_w_cum is None: - 
attn_w_cum = enc_out.new_zeros(bsz, in_len) - return alstm_h, alstm_c, lstm_h, lstm_c, attn_w, attn_w_cum - - def _get_init_attn_c(self, enc_out, enc_mask): - bsz = enc_out.size(0) - if self.args.init_attn_c == "zero": - return enc_out.new_zeros(bsz, self.args.encoder_embed_dim) - elif self.args.init_attn_c == "avg": - enc_w = (~enc_mask).type(enc_out.type()) - enc_w = enc_w / enc_w.sum(dim=1, keepdim=True) - return torch.sum(enc_out * enc_w.unsqueeze(2), dim=1) - else: - raise ValueError(f"{self.args.init_attn_c} not supported") - - def forward( - self, - prev_output_tokens, - encoder_out=None, - incremental_state=None, - target_lengths=None, - **kwargs, - ): - enc_mask = encoder_out["encoder_padding_mask"] - enc_out = encoder_out["encoder_out"][0] - in_len = enc_out.size(1) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:, :] - bsz, out_len, _ = prev_output_tokens.size() - - prenet_out = self.prenet(prev_output_tokens) - (alstm_h, alstm_c, lstm_h, lstm_c, attn_w, attn_w_cum) = self._get_states( - incremental_state, enc_out - ) - attn_ctx = self._get_init_attn_c(enc_out, enc_mask) - - attn_out = enc_out.new_zeros(bsz, in_len, out_len) - feat_out = enc_out.new_zeros(bsz, out_len, self.out_dim) - eos_out = enc_out.new_zeros(bsz, out_len) - for t in range(out_len): - alstm_in = torch.cat((attn_ctx, prenet_out[:, t, :]), dim=1) - alstm_h, alstm_c = self.attention_lstm(alstm_in, (alstm_h, alstm_c)) - - attn_state = attn_w.unsqueeze(1) - if self.args.attention_use_cumprob: - attn_state = torch.stack((attn_w, attn_w_cum), dim=1) - attn_ctx, attn_w = self.attention(enc_out, enc_mask, alstm_h, attn_state) - attn_w_cum = attn_w_cum + attn_w - attn_out[:, :, t] = attn_w - - for i, cur_lstm in enumerate(self.lstm): - if i == 0: - lstm_in = torch.cat((attn_ctx, alstm_h), dim=1) - else: - lstm_in = torch.cat((attn_ctx, lstm_h[i - 1]), dim=1) - lstm_h[i], lstm_c[i] = cur_lstm(lstm_in, (lstm_h[i], lstm_c[i])) - - proj_in = torch.cat((attn_ctx, lstm_h[-1]), dim=1) - feat_out[:, t, :] = self.feat_proj(proj_in) - eos_out[:, t] = self.eos_proj(proj_in).squeeze(1) - self.attention.clear_cache() - - self.set_incremental_state(incremental_state, "alstm_h", alstm_h) - self.set_incremental_state(incremental_state, "alstm_c", alstm_c) - self.set_incremental_state(incremental_state, "lstm_h", lstm_h) - self.set_incremental_state(incremental_state, "lstm_c", lstm_c) - self.set_incremental_state(incremental_state, "attn_w", attn_w) - self.set_incremental_state(incremental_state, "attn_w_cum", attn_w_cum) - - post_feat_out = feat_out + self.postnet(feat_out) - eos_out = eos_out.view(bsz, out_len, 1) - return post_feat_out, eos_out, {"attn": attn_out, "feature_out": feat_out} - - -@register_model("tacotron_2") -class Tacotron2Model(FairseqEncoderDecoderModel): - """ - Implementation for https://arxiv.org/pdf/1712.05884.pdf - """ - - @staticmethod - def add_args(parser): - # encoder - parser.add_argument("--encoder-dropout", type=float) - parser.add_argument("--encoder-embed-dim", type=int) - parser.add_argument("--encoder-conv-layers", type=int) - parser.add_argument("--encoder-conv-kernel-size", type=int) - parser.add_argument("--encoder-lstm-layers", type=int) - # decoder - parser.add_argument("--attention-dim", type=int) - parser.add_argument("--attention-conv-dim", type=int) - parser.add_argument("--attention-conv-kernel-size", type=int) - parser.add_argument("--prenet-dropout", type=float) - parser.add_argument("--prenet-layers", type=int) - parser.add_argument("--prenet-dim", 
type=int) - parser.add_argument("--postnet-dropout", type=float) - parser.add_argument("--postnet-layers", type=int) - parser.add_argument("--postnet-conv-dim", type=int) - parser.add_argument("--postnet-conv-kernel-size", type=int) - parser.add_argument("--init-attn-c", type=str) - parser.add_argument("--attention-use-cumprob", action="store_true") - parser.add_argument("--zoneout", type=float) - parser.add_argument("--decoder-lstm-layers", type=int) - parser.add_argument("--decoder-lstm-dim", type=int) - parser.add_argument("--output-frame-dim", type=int) - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._num_updates = 0 - - @classmethod - def build_model(cls, args, task): - embed_speaker = task.get_speaker_embeddings(args) - encoder = Tacotron2Encoder(args, task.src_dict, embed_speaker) - decoder = Tacotron2Decoder(args, task.src_dict) - return cls(encoder, decoder) - - def forward_encoder(self, src_tokens, src_lengths, **kwargs): - return self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) - - def set_num_updates(self, num_updates): - super().set_num_updates(num_updates) - self._num_updates = num_updates - - -@register_model_architecture("tacotron_2", "tacotron_2") -def base_architecture(args): - # encoder - args.encoder_dropout = getattr(args, "encoder_dropout", 0.5) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_conv_layers = getattr(args, "encoder_conv_layers", 3) - args.encoder_conv_kernel_size = getattr(args, "encoder_conv_kernel_size", 5) - args.encoder_lstm_layers = getattr(args, "encoder_lstm_layers", 1) - # decoder - args.attention_dim = getattr(args, "attention_dim", 128) - args.attention_conv_dim = getattr(args, "attention_conv_dim", 32) - args.attention_conv_kernel_size = getattr(args, "attention_conv_kernel_size", 15) - args.prenet_dropout = getattr(args, "prenet_dropout", 0.5) - args.prenet_layers = getattr(args, "prenet_layers", 2) - args.prenet_dim = getattr(args, "prenet_dim", 256) - args.postnet_dropout = getattr(args, "postnet_dropout", 0.5) - args.postnet_layers = getattr(args, "postnet_layers", 5) - args.postnet_conv_dim = getattr(args, "postnet_conv_dim", 512) - args.postnet_conv_kernel_size = getattr(args, "postnet_conv_kernel_size", 5) - args.init_attn_c = getattr(args, "init_attn_c", "zero") - args.attention_use_cumprob = getattr(args, "attention_use_cumprob", True) - args.zoneout = getattr(args, "zoneout", 0.1) - args.decoder_lstm_layers = getattr(args, "decoder_lstm_layers", 2) - args.decoder_lstm_dim = getattr(args, "decoder_lstm_dim", 1024) - args.output_frame_dim = getattr(args, "output_frame_dim", 80) diff --git a/kosmos-g/fairseq/fairseq/models/text_to_speech/tts_transformer.py b/kosmos-g/fairseq/fairseq/models/text_to_speech/tts_transformer.py deleted file mode 100644 index 265d3998a..000000000 --- a/kosmos-g/fairseq/fairseq/models/text_to_speech/tts_transformer.py +++ /dev/null @@ -1,454 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
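One detail of the Tacotron 2 implementation above that looks like a bug but is deliberate: `Prenet.forward` calls `F.dropout` without passing `training=self.training`, and the functional form defaults to `training=True`, so prenet dropout stays active even under `model.eval()`. The Tacotron 2 paper uses exactly this trick to inject variation into the autoregressive decoder at inference time. A short demonstration:

```python
import torch
import torch.nn.functional as F

x = torch.ones(8)
# F.dropout defaults to training=True, so zeros appear even "at eval time";
# an nn.Dropout module, by contrast, becomes the identity after model.eval()
y = F.dropout(x, p=0.5)
print(y)  # roughly half the entries are 0, the rest scaled to 2.0
```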
- -import logging -from typing import List, Optional - -import torch -from torch import nn - -from fairseq import utils -from fairseq.data.data_utils import lengths_to_padding_mask -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, - register_model, - register_model_architecture, -) -from fairseq.models.text_to_speech.hub_interface import TTSHubInterface -from fairseq.models.text_to_speech.tacotron2 import Postnet, Prenet -from fairseq.modules import ( - FairseqDropout, - LayerNorm, - PositionalEmbedding, - TransformerDecoderLayer, - TransformerEncoderLayer, -) - -logger = logging.getLogger(__name__) - - -def encoder_init(m): - if isinstance(m, nn.Conv1d): - nn.init.xavier_uniform_(m.weight, torch.nn.init.calculate_gain("relu")) - - -def Embedding(num_embeddings, embedding_dim): - m = nn.Embedding(num_embeddings, embedding_dim) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - return m - - -class TTSTransformerEncoder(FairseqEncoder): - def __init__(self, args, src_dict, embed_speaker): - super().__init__(src_dict) - self.padding_idx = src_dict.pad() - self.embed_speaker = embed_speaker - self.spk_emb_proj = None - if embed_speaker is not None: - self.spk_emb_proj = nn.Linear( - args.encoder_embed_dim + args.speaker_embed_dim, args.encoder_embed_dim - ) - - self.dropout_module = FairseqDropout( - p=args.dropout, module_name=self.__class__.__name__ - ) - self.embed_tokens = nn.Embedding( - len(src_dict), args.encoder_embed_dim, padding_idx=self.padding_idx - ) - assert args.encoder_conv_kernel_size % 2 == 1 - self.prenet = nn.ModuleList( - nn.Sequential( - nn.Conv1d( - args.encoder_embed_dim, - args.encoder_embed_dim, - kernel_size=args.encoder_conv_kernel_size, - padding=((args.encoder_conv_kernel_size - 1) // 2), - ), - nn.BatchNorm1d(args.encoder_embed_dim), - nn.ReLU(), - nn.Dropout(args.encoder_dropout), - ) - for _ in range(args.encoder_conv_layers) - ) - self.prenet_proj = nn.Linear(args.encoder_embed_dim, args.encoder_embed_dim) - self.embed_positions = PositionalEmbedding( - args.max_source_positions, args.encoder_embed_dim, self.padding_idx - ) - self.pos_emb_alpha = nn.Parameter(torch.ones(1)) - - self.transformer_layers = nn.ModuleList( - TransformerEncoderLayer(args) - for _ in range(args.encoder_transformer_layers) - ) - if args.encoder_normalize_before: - self.layer_norm = LayerNorm(args.encoder_embed_dim) - else: - self.layer_norm = None - - self.apply(encoder_init) - - def forward(self, src_tokens, src_lengths=None, speaker=None, **kwargs): - x = self.embed_tokens(src_tokens) - x = x.transpose(1, 2).contiguous() # B x T x C -> B x C x T - for conv in self.prenet: - x = conv(x) - x = x.transpose(1, 2).contiguous() # B x C x T -> B x T x C - x = self.prenet_proj(x) - - padding_mask = src_tokens.eq(self.padding_idx) - positions = self.embed_positions(padding_mask) - x += self.pos_emb_alpha * positions - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - for layer in self.transformer_layers: - x = layer(x, padding_mask) - - if self.layer_norm is not None: - x = self.layer_norm(x) - - if self.embed_speaker is not None: - seq_len, bsz, _ = x.size() - emb = self.embed_speaker(speaker).transpose(0, 1) - emb = emb.expand(seq_len, bsz, -1) - x = self.spk_emb_proj(torch.cat([x, emb], dim=2)) - - return { - "encoder_out": [x], # T x B x C - "encoder_padding_mask": [padding_mask] - if padding_mask.any() - else [], # B x T - "encoder_embedding": [], # B x T x C - "encoder_states": [], 
# List[T x B x C] - "src_tokens": [], - "src_lengths": [], - } - - -def decoder_init(m): - if isinstance(m, torch.nn.Conv1d): - nn.init.xavier_uniform_(m.weight, torch.nn.init.calculate_gain("tanh")) - - -class TTSTransformerDecoder(FairseqIncrementalDecoder): - def __init__(self, args, src_dict, padding_idx=1): - super().__init__(None) - self._future_mask = torch.empty(0) - - self.args = args - self.padding_idx = src_dict.pad() if src_dict else padding_idx - self.n_frames_per_step = args.n_frames_per_step - self.out_dim = args.output_frame_dim * args.n_frames_per_step - - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.embed_positions = PositionalEmbedding( - args.max_target_positions, args.decoder_embed_dim, self.padding_idx - ) - self.pos_emb_alpha = nn.Parameter(torch.ones(1)) - self.prenet = nn.Sequential( - Prenet( - self.out_dim, args.prenet_layers, args.prenet_dim, args.prenet_dropout - ), - nn.Linear(args.prenet_dim, args.decoder_embed_dim), - ) - - self.n_transformer_layers = args.decoder_transformer_layers - self.transformer_layers = nn.ModuleList( - TransformerDecoderLayer(args) for _ in range(self.n_transformer_layers) - ) - if args.decoder_normalize_before: - self.layer_norm = LayerNorm(args.decoder_embed_dim) - else: - self.layer_norm = None - - self.feat_proj = nn.Linear(args.decoder_embed_dim, self.out_dim) - self.eos_proj = nn.Linear(args.decoder_embed_dim, 1) - - self.postnet = Postnet( - self.out_dim, - args.postnet_conv_dim, - args.postnet_conv_kernel_size, - args.postnet_layers, - args.postnet_dropout, - ) - - self.ctc_proj = None - if getattr(args, "ctc_weight", 0.0) > 0.0: - self.ctc_proj = nn.Linear(self.out_dim, len(src_dict)) - - self.apply(decoder_init) - - def extract_features( - self, - prev_outputs, - encoder_out=None, - incremental_state=None, - target_lengths=None, - speaker=None, - **kwargs, - ): - alignment_layer = self.n_transformer_layers - 1 - self_attn_padding_mask = lengths_to_padding_mask(target_lengths) - positions = self.embed_positions( - self_attn_padding_mask, incremental_state=incremental_state - ) - - if incremental_state is not None: - prev_outputs = prev_outputs[:, -1:, :] - self_attn_padding_mask = self_attn_padding_mask[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - x = self.prenet(prev_outputs) - x += self.pos_emb_alpha * positions - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - if not self_attn_padding_mask.any(): - self_attn_padding_mask = None - - attn: Optional[torch.Tensor] = None - inner_states: List[Optional[torch.Tensor]] = [x] - for idx, transformer_layer in enumerate(self.transformer_layers): - if incremental_state is None: - self_attn_mask = self.buffered_future_mask(x) - else: - self_attn_mask = None - - x, layer_attn, _ = transformer_layer( - x, - encoder_out["encoder_out"][0] - if (encoder_out is not None and len(encoder_out["encoder_out"]) > 0) - else None, - encoder_out["encoder_padding_mask"][0] - if ( - encoder_out is not None - and len(encoder_out["encoder_padding_mask"]) > 0 - ) - else None, - incremental_state, - self_attn_mask=self_attn_mask, - self_attn_padding_mask=self_attn_padding_mask, - need_attn=bool((idx == alignment_layer)), - need_head_weights=bool((idx == alignment_layer)), - ) - inner_states.append(x) - if layer_attn is not None and idx == alignment_layer: - attn = layer_attn.float().to(x) - - if attn is not None: - # average probabilities over heads, transpose to - # (B, src_len, tgt_len) 
- attn = attn.mean(dim=0).transpose(2, 1) - - if self.layer_norm is not None: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - return x, {"attn": attn, "inner_states": inner_states} - - def forward( - self, - prev_output_tokens, - encoder_out=None, - incremental_state=None, - target_lengths=None, - speaker=None, - **kwargs, - ): - x, extra = self.extract_features( - prev_output_tokens, - encoder_out=encoder_out, - incremental_state=incremental_state, - target_lengths=target_lengths, - speaker=speaker, - **kwargs, - ) - attn = extra["attn"] - feat_out = self.feat_proj(x) - bsz, seq_len, _ = x.size() - eos_out = self.eos_proj(x) - post_feat_out = feat_out + self.postnet(feat_out) - return ( - post_feat_out, - eos_out, - { - "attn": attn, - "feature_out": feat_out, - "inner_states": extra["inner_states"], - }, - ) - - def get_normalized_probs(self, net_output, log_probs, sample): - logits = self.ctc_proj(net_output[2]["feature_out"]) - if log_probs: - return utils.log_softmax(logits.float(), dim=-1) - else: - return utils.softmax(logits.float(), dim=-1) - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - # self._future_mask.device != tensor.device is not working in TorchScript. This is a workaround. - if ( - self._future_mask.size(0) == 0 - or (not self._future_mask.device == tensor.device) - or self._future_mask.size(0) < dim - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(torch.zeros([dim, dim])), 1 - ) - self._future_mask = self._future_mask.to(tensor) - return self._future_mask[:dim, :dim] - - -@register_model("tts_transformer") -class TTSTransformerModel(FairseqEncoderDecoderModel): - """ - Implementation for https://arxiv.org/pdf/1809.08895.pdf - """ - - @classmethod - def hub_models(cls): - base_url = "http://dl.fbaipublicfiles.com/fairseq/s2" - model_ids = [ - "tts_transformer-en-ljspeech", - "tts_transformer-en-200_speaker-cv4", - "tts_transformer-es-css10", - "tts_transformer-fr-cv7_css10", - "tts_transformer-ru-cv7_css10", - "tts_transformer-zh-cv7_css10", - "tts_transformer-ar-cv7_css10", - "tts_transformer-tr-cv7_css10", - "tts_transformer-vi-cv7", - ] - return {i: f"{base_url}/{i}.tar.gz" for i in model_ids} - - @classmethod - def from_pretrained( - cls, - model_name_or_path, - checkpoint_file="model.pt", - data_name_or_path=".", - config_yaml="config.yaml", - vocoder: str = "griffin_lim", - fp16: bool = False, - **kwargs, - ): - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - archive_map=cls.hub_models(), - config_yaml=config_yaml, - vocoder=vocoder, - fp16=fp16, - **kwargs, - ) - return TTSHubInterface(x["args"], x["task"], x["models"][0]) - - @staticmethod - def add_args(parser): - parser.add_argument("--dropout", type=float) - parser.add_argument("--output-frame-dim", type=int) - parser.add_argument("--speaker-embed-dim", type=int) - # encoder prenet - parser.add_argument("--encoder-dropout", type=float) - parser.add_argument("--encoder-conv-layers", type=int) - parser.add_argument("--encoder-conv-kernel-size", type=int) - # encoder transformer layers - parser.add_argument("--encoder-transformer-layers", type=int) - parser.add_argument("--encoder-embed-dim", type=int) - parser.add_argument("--encoder-ffn-embed-dim", type=int) - parser.add_argument("--encoder-normalize-before", action="store_true") - parser.add_argument("--encoder-attention-heads", type=int) - parser.add_argument("--attention-dropout", type=float) - 
parser.add_argument("--activation-dropout", "--relu-dropout", type=float) - parser.add_argument("--activation-fn", type=str, default="relu") - # decoder prenet - parser.add_argument("--prenet-dropout", type=float) - parser.add_argument("--prenet-layers", type=int) - parser.add_argument("--prenet-dim", type=int) - # decoder postnet - parser.add_argument("--postnet-dropout", type=float) - parser.add_argument("--postnet-layers", type=int) - parser.add_argument("--postnet-conv-dim", type=int) - parser.add_argument("--postnet-conv-kernel-size", type=int) - # decoder transformer layers - parser.add_argument("--decoder-transformer-layers", type=int) - parser.add_argument("--decoder-embed-dim", type=int) - parser.add_argument("--decoder-ffn-embed-dim", type=int) - parser.add_argument("--decoder-normalize-before", action="store_true") - parser.add_argument("--decoder-attention-heads", type=int) - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._num_updates = 0 - - @classmethod - def build_model(cls, args, task): - embed_speaker = task.get_speaker_embeddings(args) - encoder = TTSTransformerEncoder(args, task.src_dict, embed_speaker) - decoder = TTSTransformerDecoder(args, task.src_dict) - return cls(encoder, decoder) - - def forward_encoder(self, src_tokens, src_lengths, speaker=None, **kwargs): - return self.encoder( - src_tokens, src_lengths=src_lengths, speaker=speaker, **kwargs - ) - - def set_num_updates(self, num_updates): - super().set_num_updates(num_updates) - self._num_updates = num_updates - - -@register_model_architecture("tts_transformer", "tts_transformer") -def base_architecture(args): - args.dropout = getattr(args, "dropout", 0.1) - args.output_frame_dim = getattr(args, "output_frame_dim", 80) - args.speaker_embed_dim = getattr(args, "speaker_embed_dim", 64) - # encoder prenet - args.encoder_dropout = getattr(args, "encoder_dropout", 0.5) - args.encoder_conv_layers = getattr(args, "encoder_conv_layers", 3) - args.encoder_conv_kernel_size = getattr(args, "encoder_conv_kernel_size", 5) - # encoder transformer layers - args.encoder_transformer_layers = getattr(args, "encoder_transformer_layers", 6) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr( - args, "encoder_ffn_embed_dim", 4 * args.encoder_embed_dim - ) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - # decoder prenet - args.prenet_dropout = getattr(args, "prenet_dropout", 0.5) - args.prenet_layers = getattr(args, "prenet_layers", 2) - args.prenet_dim = getattr(args, "prenet_dim", 256) - # decoder postnet - args.postnet_dropout = getattr(args, "postnet_dropout", 0.5) - args.postnet_layers = getattr(args, "postnet_layers", 5) - args.postnet_conv_dim = getattr(args, "postnet_conv_dim", 512) - args.postnet_conv_kernel_size = getattr(args, "postnet_conv_kernel_size", 5) - # decoder transformer layers - args.decoder_transformer_layers = getattr(args, "decoder_transformer_layers", 6) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", 4 * args.decoder_embed_dim - ) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - 
args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) diff --git a/kosmos-g/fairseq/fairseq/models/text_to_speech/vocoder.py b/kosmos-g/fairseq/fairseq/models/text_to_speech/vocoder.py deleted file mode 100644 index 69c927206..000000000 --- a/kosmos-g/fairseq/fairseq/models/text_to_speech/vocoder.py +++ /dev/null @@ -1,253 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import json -from typing import Dict - -import numpy as np -import torch -from torch import nn -import torch.nn.functional as F - -from fairseq.data.audio.audio_utils import ( - get_window, - get_fourier_basis, - get_mel_filters, - TTSSpectrogram, -) -from fairseq.data.audio.speech_to_text_dataset import S2TDataConfig -from fairseq.models.text_to_speech.codehifigan import CodeGenerator as CodeHiFiGANModel -from fairseq.models.text_to_speech.hifigan import Generator as HiFiGANModel - -logger = logging.getLogger(__name__) - - -class PseudoInverseMelScale(torch.nn.Module): - def __init__(self, n_stft, n_mels, sample_rate, f_min, f_max) -> None: - super(PseudoInverseMelScale, self).__init__() - self.n_mels = n_mels - basis = get_mel_filters(sample_rate, (n_stft - 1) * 2, n_mels, f_min, f_max) - basis = torch.pinverse(basis) # F x F_mel - self.register_buffer("basis", basis) - - def forward(self, melspec: torch.Tensor) -> torch.Tensor: - # pack batch - shape = melspec.shape # B_1 x ... x B_K x F_mel x T - n_mels, time = shape[-2], shape[-1] - melspec = melspec.view(-1, n_mels, time) - - freq, _ = self.basis.size() # F x F_mel - assert self.n_mels == n_mels, (self.n_mels, n_mels) - specgram = self.basis.matmul(melspec).clamp(min=0) - - # unpack batch - specgram = specgram.view(shape[:-2] + (freq, time)) - return specgram - - -class GriffinLim(torch.nn.Module): - def __init__( - self, - n_fft: int, - win_length: int, - hop_length: int, - n_iter: int, - window_fn=torch.hann_window, - ): - super(GriffinLim, self).__init__() - self.transform = TTSSpectrogram( - n_fft, win_length, hop_length, return_phase=True - ) - - basis = get_fourier_basis(n_fft) - basis = torch.pinverse(n_fft / hop_length * basis).T[:, None, :] - basis *= get_window(window_fn, n_fft, win_length) - self.register_buffer("basis", basis) - - self.n_fft = n_fft - self.win_length = win_length - self.hop_length = hop_length - self.n_iter = n_iter - - self.tiny = 1.1754944e-38 - - @classmethod - def get_window_sum_square( - cls, n_frames, hop_length, win_length, n_fft, window_fn=torch.hann_window - ) -> torch.Tensor: - w_sq = get_window(window_fn, n_fft, win_length) ** 2 - n = n_fft + hop_length * (n_frames - 1) - x = torch.zeros(n, dtype=torch.float32) - for i in range(n_frames): - ofst = i * hop_length - x[ofst : min(n, ofst + n_fft)] += w_sq[: max(0, min(n_fft, n - ofst))] - return x - - def inverse(self, magnitude: torch.Tensor, phase) -> torch.Tensor: - x = torch.cat( - [magnitude * torch.cos(phase), magnitude * torch.sin(phase)], dim=1 - ) - x = F.conv_transpose1d(x, self.basis, stride=self.hop_length) - win_sum_sq = self.get_window_sum_square( - magnitude.shape[-1], - hop_length=self.hop_length, - win_length=self.win_length, - n_fft=self.n_fft, - ).to(magnitude.device) - # remove modulation effects - approx_nonzero_indices = win_sum_sq > self.tiny - x[:, :, approx_nonzero_indices] /= win_sum_sq[approx_nonzero_indices] - x *= self.n_fft / self.hop_length - x = x[:, :, self.n_fft // 2 :] 
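-        # together with the slice above, this trims n_fft // 2 samples from
-        # each end, undoing the centering padding introduced by the STFT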
- x = x[:, :, : -self.n_fft // 2 :] - return x - - def forward(self, specgram: torch.Tensor) -> torch.Tensor: - angles = np.angle(np.exp(2j * np.pi * np.random.rand(*specgram.shape))) - angles = torch.from_numpy(angles).to(specgram) - _specgram = specgram.view(-1, specgram.shape[-2], specgram.shape[-1]) - waveform = self.inverse(_specgram, angles).squeeze(1) - for _ in range(self.n_iter): - _, angles = self.transform(waveform) - waveform = self.inverse(_specgram, angles).squeeze(1) - return waveform.squeeze(0) - - -class GriffinLimVocoder(nn.Module): - def __init__( - self, - sample_rate, - win_size, - hop_size, - n_fft, - n_mels, - f_min, - f_max, - window_fn, - spec_bwd_max_iter=32, - fp16=False, - ): - super().__init__() - self.inv_mel_transform = PseudoInverseMelScale( - n_stft=n_fft // 2 + 1, - n_mels=n_mels, - sample_rate=sample_rate, - f_min=f_min, - f_max=f_max, - ) - self.gl_transform = GriffinLim( - n_fft=n_fft, - win_length=win_size, - hop_length=hop_size, - window_fn=window_fn, - n_iter=spec_bwd_max_iter, - ) - if fp16: - self.half() - self.inv_mel_transform.half() - self.gl_transform.half() - else: - self.float() - self.inv_mel_transform.float() - self.gl_transform.float() - - def forward(self, x): - # x: (B x) T x D -> (B x) 1 x T - # NOTE: batched forward produces noisier waveform. recommend running - # one utterance at a time - self.eval() - x = x.exp().transpose(-1, -2) - x = self.inv_mel_transform(x) - x = self.gl_transform(x) - return x - - @classmethod - def from_data_cfg(cls, args, data_cfg: S2TDataConfig): - feat_cfg = data_cfg.config["features"] - window_fn = getattr(torch, feat_cfg["window_fn"] + "_window") - return cls( - sample_rate=feat_cfg["sample_rate"], - win_size=int(feat_cfg["win_len_t"] * feat_cfg["sample_rate"]), - hop_size=int(feat_cfg["hop_len_t"] * feat_cfg["sample_rate"]), - n_fft=feat_cfg["n_fft"], - n_mels=feat_cfg["n_mels"], - f_min=feat_cfg["f_min"], - f_max=feat_cfg["f_max"], - window_fn=window_fn, - spec_bwd_max_iter=args.spec_bwd_max_iter, - fp16=args.fp16, - ) - - -class HiFiGANVocoder(nn.Module): - def __init__( - self, checkpoint_path: str, model_cfg: Dict[str, str], fp16: bool = False - ) -> None: - super().__init__() - self.model = HiFiGANModel(model_cfg) - state_dict = torch.load(checkpoint_path) - self.model.load_state_dict(state_dict["generator"]) - if fp16: - self.model.half() - logger.info(f"loaded HiFiGAN checkpoint from {checkpoint_path}") - - def forward(self, x: torch.Tensor) -> torch.Tensor: - # (B x) T x D -> (B x) 1 x T - model = self.model.eval() - if len(x.shape) == 2: - return model(x.unsqueeze(0).transpose(1, 2)).detach().squeeze(0) - else: - return model(x.transpose(-1, -2)).detach() - - @classmethod - def from_data_cfg(cls, args, data_cfg: S2TDataConfig): - vocoder_cfg = data_cfg.vocoder - assert vocoder_cfg.get("type", "griffin_lim") == "hifigan" - with open(vocoder_cfg["config"]) as f: - model_cfg = json.load(f) - return cls(vocoder_cfg["checkpoint"], model_cfg, fp16=args.fp16) - - -class CodeHiFiGANVocoder(nn.Module): - def __init__( - self, checkpoint_path: str, model_cfg: Dict[str, str], fp16: bool = False - ) -> None: - super().__init__() - self.model = CodeHiFiGANModel(model_cfg) - state_dict = torch.load(checkpoint_path) - self.model.load_state_dict(state_dict["generator"]) - self.model.eval() - if fp16: - self.model.half() - self.model.remove_weight_norm() - logger.info(f"loaded CodeHiFiGAN checkpoint from {checkpoint_path}") - - def forward(self, x: Dict[str, torch.Tensor], dur_prediction=False) -> torch.Tensor: 
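-        # x is a sample dict keyed by "code" holding discrete unit ids, e.g.
-        # {"code": torch.LongTensor([[10, 17, 17, 5]])}; negative ids are
-        # treated as invalid/padding and masked out below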
- assert "code" in x - x["dur_prediction"] = dur_prediction - mask = x["code"] >= 0 # remove invalid code - x["code"] = x["code"][mask].unsqueeze(dim=0) - - return self.model(**x).detach().squeeze() - - @classmethod - def from_data_cfg(cls, args, data_cfg): - vocoder_cfg = data_cfg.vocoder - assert vocoder_cfg is not None, "vocoder not specified in the data config" - with open(vocoder_cfg["config"]) as f: - model_cfg = json.load(f) - return cls(vocoder_cfg["checkpoint"], model_cfg, fp16=args.fp16) - - -def get_vocoder(args, data_cfg: S2TDataConfig): - if args.vocoder == "griffin_lim": - return GriffinLimVocoder.from_data_cfg(args, data_cfg) - elif args.vocoder == "hifigan": - return HiFiGANVocoder.from_data_cfg(args, data_cfg) - elif args.vocoder == "code_hifigan": - return CodeHiFiGANVocoder.from_data_cfg(args, data_cfg) - else: - raise ValueError("Unknown vocoder") diff --git a/kosmos-g/fairseq/fairseq/models/transformer/__init__.py b/kosmos-g/fairseq/fairseq/models/transformer/__init__.py deleted file mode 100644 index 681fca3d4..000000000 --- a/kosmos-g/fairseq/fairseq/models/transformer/__init__.py +++ /dev/null @@ -1,50 +0,0 @@ -# Copyright (c) Facebook Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -from .transformer_config import ( - TransformerConfig, - DEFAULT_MAX_SOURCE_POSITIONS, - DEFAULT_MAX_TARGET_POSITIONS, - DEFAULT_MIN_PARAMS_TO_WRAP, -) -from .transformer_decoder import TransformerDecoder, TransformerDecoderBase, Linear -from .transformer_encoder import TransformerEncoder, TransformerEncoderBase -from .transformer_legacy import ( - TransformerModel, - base_architecture, - tiny_architecture, - transformer_iwslt_de_en, - transformer_wmt_en_de, - transformer_vaswani_wmt_en_de_big, - transformer_vaswani_wmt_en_fr_big, - transformer_wmt_en_de_big, - transformer_wmt_en_de_big_t2t, -) -from .transformer_base import TransformerModelBase, Embedding - - -__all__ = [ - "TransformerModelBase", - "TransformerConfig", - "TransformerDecoder", - "TransformerDecoderBase", - "TransformerEncoder", - "TransformerEncoderBase", - "TransformerModel", - "Embedding", - "Linear", - "base_architecture", - "tiny_architecture", - "transformer_iwslt_de_en", - "transformer_wmt_en_de", - "transformer_vaswani_wmt_en_de_big", - "transformer_vaswani_wmt_en_fr_big", - "transformer_wmt_en_de_big", - "transformer_wmt_en_de_big_t2t", - "DEFAULT_MAX_SOURCE_POSITIONS", - "DEFAULT_MAX_TARGET_POSITIONS", - "DEFAULT_MIN_PARAMS_TO_WRAP", -] diff --git a/kosmos-g/fairseq/fairseq/models/transformer/transformer_base.py b/kosmos-g/fairseq/fairseq/models/transformer/transformer_base.py deleted file mode 100644 index b4d5604db..000000000 --- a/kosmos-g/fairseq/fairseq/models/transformer/transformer_base.py +++ /dev/null @@ -1,179 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from typing import Dict, List, Optional, Tuple - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.dataclass.utils import gen_parser_from_dataclass -from fairseq.distributed import fsdp_wrap -from fairseq.models import FairseqEncoderDecoderModel -from fairseq.models.transformer import ( - TransformerEncoderBase, - TransformerDecoderBase, - TransformerConfig, -) -from torch import Tensor - - -class TransformerModelBase(FairseqEncoderDecoderModel): - """ - Transformer model from `"Attention Is All You Need" (Vaswani, et al, 2017) - <https://arxiv.org/abs/1706.03762>`_. - - Args: - encoder (TransformerEncoder): the encoder - decoder (TransformerDecoder): the decoder - - The Transformer model provides the following named architectures and - command-line arguments: - - .. argparse:: - :ref: fairseq.models.transformer_parser - :prog: - """ - - def __init__(self, cfg, encoder, decoder): - super().__init__(encoder, decoder) - self.cfg = cfg - self.supports_align_args = True - - @classmethod - def add_args(cls, parser): - """Add model-specific arguments to the parser.""" - # we want to build the args recursively in this case. - gen_parser_from_dataclass( - parser, TransformerConfig(), delete_default=False, with_prefix="" - ) - - @classmethod - def build_model(cls, cfg, task): - """Build a new model instance.""" - - # -- TODO T96535332 - # bug caused by interaction between OmegaConf II and argparsing - cfg.decoder.input_dim = int(cfg.decoder.input_dim) - cfg.decoder.output_dim = int(cfg.decoder.output_dim) - # -- - - if cfg.encoder.layers_to_keep: - cfg.encoder.layers = len(cfg.encoder.layers_to_keep.split(",")) - if cfg.decoder.layers_to_keep: - cfg.decoder.layers = len(cfg.decoder.layers_to_keep.split(",")) - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - if cfg.share_all_embeddings: - if src_dict != tgt_dict: - raise ValueError("--share-all-embeddings requires a joined dictionary") - if cfg.encoder.embed_dim != cfg.decoder.embed_dim: - raise ValueError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if cfg.decoder.embed_path and ( - cfg.decoder.embed_path != cfg.encoder.embed_path - ): - raise ValueError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - encoder_embed_tokens = cls.build_embedding( - cfg, src_dict, cfg.encoder.embed_dim, cfg.encoder.embed_path - ) - decoder_embed_tokens = encoder_embed_tokens - cfg.share_decoder_input_output_embed = True - else: - encoder_embed_tokens = cls.build_embedding( - cfg, src_dict, cfg.encoder.embed_dim, cfg.encoder.embed_path - ) - decoder_embed_tokens = cls.build_embedding( - cfg, tgt_dict, cfg.decoder.embed_dim, cfg.decoder.embed_path - ) - if cfg.offload_activations: - cfg.checkpoint_activations = True # offloading implies checkpointing - encoder = cls.build_encoder(cfg, src_dict, encoder_embed_tokens) - decoder = cls.build_decoder(cfg, tgt_dict, decoder_embed_tokens) - if not cfg.share_all_embeddings: - # fsdp_wrap is a no-op when --ddp-backend != fully_sharded - encoder = fsdp_wrap(encoder, min_num_params=cfg.min_params_to_wrap) - decoder = fsdp_wrap(decoder, min_num_params=cfg.min_params_to_wrap) - return cls(cfg, encoder, decoder) - - @classmethod - def build_embedding(cls, cfg, dictionary, embed_dim, path=None): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - - emb = Embedding(num_embeddings, embed_dim, padding_idx) - # if provided, load from preloaded dictionaries - if path: - embed_dict = 
utils.parse_embedding(path) - utils.load_embedding(embed_dict, dictionary, emb) - return emb - - @classmethod - def build_encoder(cls, cfg, src_dict, embed_tokens): - return TransformerEncoderBase(cfg, src_dict, embed_tokens) - - @classmethod - def build_decoder(cls, cfg, tgt_dict, embed_tokens): - return TransformerDecoderBase( - cfg, - tgt_dict, - embed_tokens, - no_encoder_attn=cfg.no_cross_attention, - ) - - # TorchScript doesn't support optional arguments with variable length (**kwargs). - # Current workaround is to add union of all arguments in child classes. - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens, - return_all_hiddens: bool = True, - features_only: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - ): - """ - Run the forward pass for an encoder-decoder model. - - Copied from the base class, but without ``**kwargs``, - which are not supported by TorchScript. - """ - encoder_out = self.encoder( - src_tokens, src_lengths=src_lengths, return_all_hiddens=return_all_hiddens - ) - decoder_out = self.decoder( - prev_output_tokens, - encoder_out=encoder_out, - features_only=features_only, - alignment_layer=alignment_layer, - alignment_heads=alignment_heads, - src_lengths=src_lengths, - return_all_hiddens=return_all_hiddens, - ) - return decoder_out - - # Since get_normalized_probs is in the Fairseq Model which is not scriptable, - # I rewrite the get_normalized_probs from Base Class to call the - # helper function in the Base Class. - @torch.jit.export - def get_normalized_probs( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - """Get normalized probabilities (or log probs) from a net's output.""" - return self.get_normalized_probs_scriptable(net_output, log_probs, sample) - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m diff --git a/kosmos-g/fairseq/fairseq/models/transformer/transformer_config.py b/kosmos-g/fairseq/fairseq/models/transformer/transformer_config.py deleted file mode 100644 index 341a9bde2..000000000 --- a/kosmos-g/fairseq/fairseq/models/transformer/transformer_config.py +++ /dev/null @@ -1,338 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
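-
-"""Dataclass-based configuration (TransformerConfig) for the Transformer model family."""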
- - -import re -from dataclasses import dataclass, field, fields -from typing import List, Optional - -from omegaconf import II - -from fairseq import utils -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.utils import safe_getattr, safe_hasattr - -DEFAULT_MAX_SOURCE_POSITIONS = 1024 -DEFAULT_MAX_TARGET_POSITIONS = 1024 - -DEFAULT_MIN_PARAMS_TO_WRAP = int(1e8) - -_NAME_PARSER = r"(decoder|encoder|quant_noise)_(.*)" - - -@dataclass -class EncDecBaseConfig(FairseqDataclass): - embed_path: Optional[str] = field( - default=None, metadata={"help": "path to pre-trained embedding"} - ) - embed_dim: Optional[int] = field( - default=512, metadata={"help": "embedding dimension"} - ) - ffn_embed_dim: int = field( - default=2048, metadata={"help": "embedding dimension for FFN"} - ) - layers: int = field(default=6, metadata={"help": "number of layers"}) - attention_heads: int = field( - default=8, metadata={"help": "number of attention heads"} - ) - normalize_before: bool = field( - default=False, metadata={"help": "apply layernorm before each block"} - ) - learned_pos: bool = field( - default=False, metadata={"help": "use learned positional embeddings"} - ) - # args for "Reducing Transformer Depth on Demand with Structured Dropout" (Fan et al., 2019) - layerdrop: float = field(default=0, metadata={"help": "LayerDrop probability"}) - layers_to_keep: Optional[List[int]] = field( - default=None, metadata={"help": "which layers to *keep* when pruning"} - ) - - -@dataclass -class DecoderConfig(EncDecBaseConfig): - input_dim: int = II("model.decoder.embed_dim") - output_dim: int = field( - default=II("model.decoder.embed_dim"), - metadata={ - "help": "decoder output dimension (extra linear layer if different from decoder embed dim)" - }, - ) - - def __post_init__(self): - # II doesn't work if we are just creating the object outside of hydra so fix that - if self.input_dim == II("model.decoder.embed_dim"): - self.input_dim = self.embed_dim - if self.output_dim == II("model.decoder.embed_dim"): - self.output_dim = self.embed_dim - - -@dataclass -class QuantNoiseConfig(FairseqDataclass): - pq: float = field( - default=0.0, - metadata={"help": "iterative PQ quantization noise at training time"}, - ) - pq_block_size: int = field( - default=8, - metadata={"help": "block size of quantization noise at training time"}, - ) - scalar: float = field( - default=0.0, - metadata={ - "help": "scalar quantization noise and scalar quantization at training time" - }, - ) - - -@dataclass -class TransformerConfig(FairseqDataclass): - activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field( - default="relu", - metadata={"help": "activation function to use"}, - ) - dropout: float = field(default=0.1, metadata={"help": "dropout probability"}) - attention_dropout: float = field( - default=0.0, metadata={"help": "dropout probability for attention weights"} - ) - activation_dropout: float = field( - default=0.0, - metadata={ - "help": "dropout probability after activation in FFN.", - "alias": "--relu-dropout", - }, - ) - adaptive_input: bool = False - encoder: EncDecBaseConfig = EncDecBaseConfig() - # TODO should really be in the encoder config - max_source_positions: int = field( - default=DEFAULT_MAX_SOURCE_POSITIONS, - metadata={"help": "Maximum input length supported by the encoder"}, - ) - decoder: DecoderConfig = DecoderConfig() - # TODO should really be in the decoder config - max_target_positions: int = field( - default=DEFAULT_MAX_TARGET_POSITIONS, - metadata={"help": "Maximum output 
length supported by the decoder"}, - ) - share_decoder_input_output_embed: bool = field( - default=False, metadata={"help": "share decoder input and output embeddings"} - ) - share_all_embeddings: bool = field( - default=False, - metadata={ - "help": "share encoder, decoder and output embeddings (requires shared dictionary and embed dim)" - }, - ) - no_token_positional_embeddings: bool = field( - default=False, - metadata={ - "help": "if True, disables positional embeddings (outside self attention)" - }, - ) - adaptive_softmax_cutoff: Optional[List[int]] = field( - default=None, - metadata={ - "help": "list of adaptive softmax cutoff points. Must be used with adaptive_loss criterion" - }, - ) - adaptive_softmax_dropout: float = field( - default=0.0, - metadata={"help": "sets adaptive softmax dropout for the tail projections"}, - ) - adaptive_softmax_factor: float = field( - default=4, metadata={"help": "adaptive input factor"} - ) - layernorm_embedding: bool = field( - default=False, metadata={"help": "add layernorm to embedding"} - ) - tie_adaptive_weights: bool = field( - default=False, - metadata={ - "help": "if set, ties the weights of adaptive softmax and adaptive input" - }, - ) - tie_adaptive_proj: bool = field( - default=False, - metadata={ - "help": "if set, ties the projection weights of adaptive softmax and adaptive input" - }, - ) - no_scale_embedding: bool = field( - default=False, metadata={"help": "if True, dont scale embeddings"} - ) - checkpoint_activations: bool = field( - default=False, - metadata={ - "help": "checkpoint activations at each layer, which saves GPU memory usage at the cost of some additional compute" - }, - ) - offload_activations: bool = field( - default=False, - metadata={ - "help": "checkpoint activations at each layer, then save to gpu. Sets --checkpoint-activations." - }, - ) - # args for "Cross+Self-Attention for Transformer Models" (Peitz et al., 2019) - no_cross_attention: bool = field( - default=False, metadata={"help": "do not perform cross-attention"} - ) - cross_self_attention: bool = field( - default=False, metadata={"help": "perform cross+self-attention"} - ) - # args for Training with Quantization Noise for Extreme Model Compression ({Fan*, Stock*} et al., 2020) - quant_noise: QuantNoiseConfig = field(default=QuantNoiseConfig()) - min_params_to_wrap: int = field( - default=DEFAULT_MIN_PARAMS_TO_WRAP, - metadata={ - "help": "minimum number of params for a layer to be wrapped with FSDP() when " - "training with --ddp-backend=fully_sharded. Smaller values will " - "improve memory efficiency, but may make torch.distributed " - "communication less efficient due to smaller input sizes. This option " - "is set to 0 (i.e., always wrap) when --checkpoint-activations or " - "--offload-activations are passed." 
- }, - ) - # DEPRECATED field, but some old checkpoints might have it - char_inputs: bool = field( - default=False, metadata={"help": "if set, model takes character ids as input"} - ) - relu_dropout: float = 0.0 - # config for "BASE Layers: Simplifying Training of Large, Sparse Models" - base_layers: Optional[int] = field( - default=0, metadata={"help": "number of BASE layers in total"} - ) - base_sublayers: Optional[int] = field( - default=1, metadata={"help": "number of sublayers in each BASE layer"} - ) - base_shuffle: Optional[int] = field( - default=1, - metadata={"help": "shuffle tokens between workers before computing assignment"}, - ) - - export: bool = field( - default=False, - metadata={"help": "make the layernorm exportable with torchscript."}, - ) - - # copied from transformer_lm but expected in transformer_decoder: - no_decoder_final_norm: bool = field( - default=False, - metadata={"help": "don't add an extra layernorm after the last decoder block"}, - ) - deepnet: bool = field( - default=False, - metadata={ - "help": "enable deepnet in decoder" - }, - ) - last_ln_scale: bool = field( - default=False, - metadata={ - "help": "enable last_ln_scale in decoder" - }, - ) - - # We need to make this hierarchical dataclass like the flat namespace - # __getattr__ and __setattr__ here allow backward compatibility - # for subclasses of Transformer(Legacy) that depend on read/write on - # the flat namespace. - - def __getattr__(self, name): - match = re.match(_NAME_PARSER, name) - if match: - sub = safe_getattr(self, match[1]) - return safe_getattr(sub, match[2]) - raise AttributeError(f"invalid argument {name}.") - - def __setattr__(self, name, value): - match = re.match(_NAME_PARSER, name) - if match: - sub = safe_getattr(self, match[1]) - setattr(sub, match[2], value) - else: - super().__setattr__(name, value) - - @staticmethod - def _copy_keys(args, cls, prefix, seen): - """ - copy the prefixed keys (decoder_embed_dim) to the DC fields: decoder.embed_dim - """ - cfg = cls() - for fld in fields(cls): - # for all the fields in the DC, find the fields (e.g. embed_dim) - # in the namespace with the prefix (e.g. decoder) - # and set it on the dc. - args_key = f"{prefix}_{fld.name}" - if safe_hasattr(args, args_key): - seen.add(args_key) - setattr(cfg, fld.name, safe_getattr(args, args_key)) - if safe_hasattr(args, fld.name): - seen.add(fld.name) - setattr(cfg, fld.name, safe_getattr(args, fld.name)) - return cfg - - @classmethod - def from_namespace(cls, args): - if args is None: - return None - if not isinstance(args, cls): - seen = set() - config = cls() - # currently, we can go generically from DC fields to args hierarchically - # but we can't easily deconstruct a flat namespace to a hierarchical - # DC. Mostly because we could have a sub-dc called `decoder-foo` that should not - # go to the sub struct called `decoder`. There are ways to go around this, but let's keep it simple - # for now. 
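-            # e.g. a flat namespace carrying decoder_embed_dim=512 is mapped
-            # onto config.decoder.embed_dim = 512 by _copy_keys below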
- for fld in fields(cls): - # concretelly, the transformer_config know what sub-dc it has, so we go through all the dc fields - # and if it's one that has a sub-dc, we build that sub-dc with `copy_keys()` - if fld.name == "decoder": - if safe_hasattr(args, "decoder"): - # in some cases, the args we receive is already structured (as DictConfigs), so let's just build the correct DC - seen.add("decoder") - config.decoder = DecoderConfig(**args.decoder) - else: - config.decoder = cls._copy_keys( - args, DecoderConfig, "decoder", seen - ) - elif fld.name == "encoder": - # same but for encoder - if safe_hasattr(args, "encoder"): - seen.add("encoder") - config.encoder = EncDecBaseConfig(**args.encoder) - else: - config.encoder = cls._copy_keys( - args, EncDecBaseConfig, "encoder", seen - ) - elif fld.name == "quant_noise": - # same but for quant_noise - if safe_hasattr(args, "quant_noise"): - seen.add("quant_noise") - config.quant_noise = QuantNoiseConfig(**args.quant_noise) - else: - config.quant_noise = cls._copy_keys( - args, QuantNoiseConfig, "quant_noise", seen - ) - elif safe_hasattr(args, fld.name): - # if it's not a structure field, it's just a normal field, copy it over - seen.add(fld.name) - setattr(config, fld.name, safe_getattr(args, fld.name)) - # we got all the fields defined in the dataclass, but - # the argparse namespace might have extra args for two reasons: - # - we are in a legacy class so all the args are not declared in the dataclass. Ideally once everyone has defined a dataclass for their model, we won't need this - # - some places expect args to be there but never define them - args_dict = ( - args._asdict() - if safe_hasattr(args, "_asdict") - else vars(args) - if safe_hasattr(args, "__dict__") - else {} - ) # namedtupled doesn't have __dict__ :-/ - for key, value in args_dict.items(): - if key not in seen: - setattr(config, key, value) - return config - else: - return args diff --git a/kosmos-g/fairseq/fairseq/models/transformer/transformer_decoder.py b/kosmos-g/fairseq/fairseq/models/transformer/transformer_decoder.py deleted file mode 100644 index 2c8eaef3a..000000000 --- a/kosmos-g/fairseq/fairseq/models/transformer/transformer_decoder.py +++ /dev/null @@ -1,501 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Any, Dict, List, Optional - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.distributed import fsdp_wrap -from fairseq.models import FairseqIncrementalDecoder -from fairseq.models.transformer import TransformerConfig -from fairseq.modules import ( - AdaptiveSoftmax, - BaseLayer, - FairseqDropout, - LayerDropModuleList, - LayerNorm, - PositionalEmbedding, - SinusoidalPositionalEmbedding, -) -from fairseq.modules import transformer_layer -from fairseq.modules.checkpoint_activations import checkpoint_wrapper -from fairseq.modules.quant_noise import quant_noise as apply_quant_noise_ -from torch import Tensor -import numpy as np - - -# rewrite name for backward compatibility in `make_generation_fast_` -def module_name_fordropout(module_name: str) -> str: - if module_name == "TransformerDecoderBase": - return "TransformerDecoder" - else: - return module_name - - -class TransformerDecoderBase(FairseqIncrementalDecoder): - """ - Transformer decoder consisting of *cfg.decoder.layers* layers. Each layer - is a :class:`TransformerDecoderLayer`. 
- - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). - """ - - def __init__( - self, - cfg, - dictionary, - embed_tokens, - no_encoder_attn=False, - output_projection=None, - ): - self.cfg = cfg - super().__init__(dictionary) - self.register_buffer("version", torch.Tensor([3])) - self._future_mask = torch.empty(0) - - self.dropout_module = FairseqDropout( - cfg.dropout, module_name=module_name_fordropout(self.__class__.__name__) - ) - self.decoder_layerdrop = cfg.decoder.layerdrop - self.share_input_output_embed = cfg.share_decoder_input_output_embed - - input_embed_dim = embed_tokens.embedding_dim - embed_dim = cfg.decoder.embed_dim - self.embed_dim = embed_dim - self.output_embed_dim = cfg.decoder.output_dim - - self.padding_idx = embed_tokens.padding_idx - self.max_target_positions = cfg.max_target_positions - - self.embed_tokens = embed_tokens - - self.embed_scale = 1.0 if cfg.no_scale_embedding else math.sqrt(embed_dim) - - if not cfg.adaptive_input and cfg.quant_noise.pq > 0: - self.quant_noise = apply_quant_noise_( - nn.Linear(embed_dim, embed_dim, bias=False), - cfg.quant_noise.pq, - cfg.quant_noise.pq_block_size, - ) - else: - self.quant_noise = None - - self.project_in_dim = ( - Linear(input_embed_dim, embed_dim, bias=False) - if embed_dim != input_embed_dim - else None - ) - self.embed_positions = ( - PositionalEmbedding( - self.max_target_positions, - embed_dim, - self.padding_idx, - learned=cfg.decoder.learned_pos, - ) - if not cfg.no_token_positional_embeddings - else None - ) - if cfg.layernorm_embedding: - self.layernorm_embedding = LayerNorm(embed_dim, export=cfg.export) - else: - self.layernorm_embedding = None - - self.cross_self_attention = cfg.cross_self_attention - - if self.decoder_layerdrop > 0.0: - self.layers = LayerDropModuleList(p=self.decoder_layerdrop) - else: - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - self.build_decoder_layer(cfg, no_encoder_attn) - for _ in range(cfg.decoder.layers) - ] - ) - self.num_layers = len(self.layers) - - if cfg.decoder.normalize_before and not cfg.no_decoder_final_norm: - self.layer_norm = LayerNorm(embed_dim, export=cfg.export) - else: - self.layer_norm = None - - self.project_out_dim = ( - Linear(embed_dim, self.output_embed_dim, bias=False) - if embed_dim != self.output_embed_dim and not cfg.tie_adaptive_weights - else None - ) - - self.adaptive_softmax = None - self.output_projection = output_projection - if self.output_projection is None: - self.build_output_projection(cfg, dictionary, embed_tokens) - - if utils.safe_getattr(cfg, 'deepnet', False): - self.rescale_decoder_only_parameters(cfg) - - def rescale_decoder_only_parameters(self, cfg): - def rescale(param, layer_id): - param.mul_(math.sqrt(math.log(len(self.layers) * 2))) - # param.div_(math.sqrt(2.0 * layer_id)) - - for layer_id in range(len(self.layers)): - layer = self.layers[layer_id] - rescale(layer.self_attn.out_proj.weight.data, layer_id + 1) - rescale(layer.self_attn.v_proj.weight.data, layer_id + 1) - rescale(layer.fc1.weight.data, layer_id + 1) - rescale(layer.fc2.weight.data, layer_id + 1) - return - - def build_output_projection(self, cfg, dictionary, embed_tokens): - if cfg.adaptive_softmax_cutoff is not None: - self.adaptive_softmax = AdaptiveSoftmax( - len(dictionary), - self.output_embed_dim, - 
utils.eval_str_list(cfg.adaptive_softmax_cutoff, type=int), - dropout=cfg.adaptive_softmax_dropout, - adaptive_inputs=embed_tokens if cfg.tie_adaptive_weights else None, - factor=cfg.adaptive_softmax_factor, - tie_proj=cfg.tie_adaptive_proj, - ) - elif self.share_input_output_embed: - self.output_projection = nn.Linear( - self.embed_tokens.weight.shape[1], - self.embed_tokens.weight.shape[0], - bias=False, - ) - self.output_projection.weight = self.embed_tokens.weight - else: - self.output_projection = nn.Linear( - self.output_embed_dim, len(dictionary), bias=False - ) - nn.init.normal_( - self.output_projection.weight, mean=0, std=self.output_embed_dim ** -0.5 - ) - num_base_layers = cfg.base_layers - for i in range(num_base_layers): - self.layers.insert( - ((i + 1) * cfg.decoder.layers) // (num_base_layers + 1), - BaseLayer(cfg), - ) - - def build_decoder_layer(self, cfg, no_encoder_attn=False): - layer = transformer_layer.TransformerDecoderLayerBase(cfg, no_encoder_attn) - checkpoint = cfg.checkpoint_activations - if checkpoint: - offload_to_cpu = cfg.offload_activations - layer = checkpoint_wrapper(layer, offload_to_cpu=offload_to_cpu) - # if we are checkpointing, enforce that FSDP always wraps the - # checkpointed layer, regardless of layer size - min_params_to_wrap = cfg.min_params_to_wrap if not checkpoint else 0 - layer = fsdp_wrap(layer, min_num_params=min_params_to_wrap) - return layer - - def forward( - self, - prev_output_tokens, - encoder_out: Optional[Dict[str, List[Tensor]]] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - features_only: bool = False, - full_context_alignment: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - src_lengths: Optional[Any] = None, - return_all_hiddens: bool = False, - ): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (optional): output from the encoder, used for - encoder-side attention, should be of size T x B x C - incremental_state (dict): dictionary used for storing state during - :ref:`Incremental decoding` - features_only (bool, optional): only return features without - applying output layer (default: False). - full_context_alignment (bool, optional): don't apply - auto-regressive mask to self-attention (default: False). - - Returns: - tuple: - - the decoder's output of shape `(batch, tgt_len, vocab)` - - a dictionary with any model-specific outputs - """ - - x, extra = self.extract_features( - prev_output_tokens, - encoder_out=encoder_out, - incremental_state=incremental_state, - full_context_alignment=full_context_alignment, - alignment_layer=alignment_layer, - alignment_heads=alignment_heads, - ) - - if not features_only: - x = self.output_layer(x) - return x, extra - - def extract_features( - self, - prev_output_tokens, - encoder_out: Optional[Dict[str, List[Tensor]]], - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - full_context_alignment: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - ): - return self.extract_features_scriptable( - prev_output_tokens, - encoder_out, - incremental_state, - full_context_alignment, - alignment_layer, - alignment_heads, - ) - - """ - A scriptable subclass of this class has an extract_features method and calls - super().extract_features, but super() is not supported in torchscript. 
A copy of - this function is made to be used in the subclass instead. - """ - - def extract_features_scriptable( - self, - prev_output_tokens, - encoder_out: Optional[Dict[str, List[Tensor]]], - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - full_context_alignment: bool = False, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - ): - """ - Similar to *forward* but only return features. - - Includes several features from "Jointly Learning to Align and - Translate with Transformer Models" (Garg et al., EMNLP 2019). - - Args: - full_context_alignment (bool, optional): don't apply - auto-regressive mask to self-attention (default: False). - alignment_layer (int, optional): return mean alignment over - heads at this layer (default: last layer). - alignment_heads (int, optional): only average alignment over - this many heads (default: all heads). - - Returns: - tuple: - - the decoder's features of shape `(batch, tgt_len, embed_dim)` - - a dictionary with any model-specific outputs - """ - bs, slen = prev_output_tokens.size() - if alignment_layer is None: - alignment_layer = self.num_layers - 1 - - enc: Optional[Tensor] = None - padding_mask: Optional[Tensor] = None - if encoder_out is not None and len(encoder_out["encoder_out"]) > 0: - enc = encoder_out["encoder_out"][0] - assert ( - enc.size()[1] == bs - ), f"Expected enc.shape == (t, {bs}, c) got {enc.shape}" - if encoder_out is not None and len(encoder_out["encoder_padding_mask"]) > 0: - padding_mask = encoder_out["encoder_padding_mask"][0] - - # embed positions - positions = None - if self.embed_positions is not None: - positions = self.embed_positions( - prev_output_tokens, incremental_state=incremental_state - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - - if self.quant_noise is not None: - x = self.quant_noise(x) - - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - - if self.layernorm_embedding is not None: - x = self.layernorm_embedding(x) - - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - self_attn_padding_mask: Optional[Tensor] = None - if self.cross_self_attention or prev_output_tokens.eq(self.padding_idx).any(): - self_attn_padding_mask = prev_output_tokens.eq(self.padding_idx) - - # decoder layers - attn: Optional[Tensor] = None - inner_states: List[Optional[Tensor]] = [x] - for idx, layer in enumerate(self.layers): - if incremental_state is None and not full_context_alignment: - self_attn_mask = self.buffered_future_mask(x) - else: - self_attn_mask = None - - x, layer_attn, _ = layer( - x, - enc, - padding_mask, - incremental_state, - self_attn_mask=self_attn_mask, - self_attn_padding_mask=self_attn_padding_mask, - need_attn=bool((idx == alignment_layer)), - need_head_weights=bool((idx == alignment_layer)), - ) - inner_states.append(x) - if layer_attn is not None and idx == alignment_layer: - attn = layer_attn.float().to(x) - - if attn is not None: - if alignment_heads is not None: - attn = attn[:alignment_heads] - - # average probabilities over heads - attn = attn.mean(dim=0) - - if self.layer_norm is not None: - x = self.layer_norm(x) - if self.alpha is not None: - x = torch.mul(self.alpha, x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - if 
self.project_out_dim is not None: - x = self.project_out_dim(x) - - return x, {"attn": [attn], "inner_states": inner_states} - - def output_layer(self, features): - """Project features to the vocabulary size.""" - if self.adaptive_softmax is None: - # project back to size of vocabulary - return self.output_projection(features) - else: - return features - - def max_positions(self): - """Maximum output length supported by the decoder.""" - if self.embed_positions is None: - return self.max_target_positions - return min(self.max_target_positions, self.embed_positions.max_positions) - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - # self._future_mask.device != tensor.device is not working in TorchScript. This is a workaround. - if ( - self._future_mask.size(0) == 0 - or (not self._future_mask.device == tensor.device) - or self._future_mask.size(0) < dim - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(torch.zeros([dim, dim])), 1 - ) - self._future_mask = self._future_mask.to(tensor) - return self._future_mask[:dim, :dim] - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - if isinstance(self.embed_positions, SinusoidalPositionalEmbedding): - weights_key = "{}.embed_positions.weights".format(name) - if weights_key in state_dict: - del state_dict[weights_key] - state_dict[ - "{}.embed_positions._float_tensor".format(name) - ] = torch.FloatTensor(1) - - if f"{name}.output_projection.weight" not in state_dict: - if self.share_input_output_embed: - embed_out_key = f"{name}.embed_tokens.weight" - else: - embed_out_key = f"{name}.embed_out" - if embed_out_key in state_dict: - state_dict[f"{name}.output_projection.weight"] = state_dict[ - embed_out_key - ] - if not self.share_input_output_embed: - del state_dict[embed_out_key] - - for i in range(self.num_layers): - # update layer norms - layer_norm_map = { - "0": "self_attn_layer_norm", - "1": "encoder_attn_layer_norm", - "2": "final_layer_norm", - } - for old, new in layer_norm_map.items(): - for m in ("weight", "bias"): - k = "{}.layers.{}.layer_norms.{}.{}".format(name, i, old, m) - if k in state_dict: - state_dict[ - "{}.layers.{}.{}.{}".format(name, i, new, m) - ] = state_dict[k] - del state_dict[k] - - version_key = "{}.version".format(name) - if utils.item(state_dict.get(version_key, torch.Tensor([1]))[0]) <= 2: - # earlier checkpoints did not normalize after the stack of layers - self.layer_norm = None - self.normalize = False - state_dict[version_key] = torch.Tensor([1]) - - return state_dict - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m - - -class TransformerDecoder(TransformerDecoderBase): - def __init__( - self, - args, - dictionary, - embed_tokens, - no_encoder_attn=False, - output_projection=None, - ): - self.args = args - super().__init__( - TransformerConfig.from_namespace(args), - dictionary, - embed_tokens, - no_encoder_attn=no_encoder_attn, - output_projection=output_projection, - ) - - def build_output_projection(self, args, dictionary, embed_tokens): - super().build_output_projection( - TransformerConfig.from_namespace(args), dictionary, embed_tokens - ) - - def build_decoder_layer(self, args, no_encoder_attn=False): - return super().build_decoder_layer( - TransformerConfig.from_namespace(args), no_encoder_attn=no_encoder_attn - ) diff --git 
a/kosmos-g/fairseq/fairseq/models/transformer/transformer_encoder.py b/kosmos-g/fairseq/fairseq/models/transformer/transformer_encoder.py deleted file mode 100644 index 578f03d9d..000000000 --- a/kosmos-g/fairseq/fairseq/models/transformer/transformer_encoder.py +++ /dev/null @@ -1,346 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Dict, List, Optional - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.distributed import fsdp_wrap -from fairseq.models import FairseqEncoder -from fairseq.modules import ( - FairseqDropout, - LayerDropModuleList, - LayerNorm, - PositionalEmbedding, - SinusoidalPositionalEmbedding, -) -from fairseq.modules import transformer_layer -from fairseq.modules.checkpoint_activations import checkpoint_wrapper -from fairseq.modules.quant_noise import quant_noise as apply_quant_noise_ -from torch import Tensor -from fairseq.models.transformer import ( - TransformerConfig, -) - - -# rewrite name for backward compatibility in `make_generation_fast_` -def module_name_fordropout(module_name: str) -> str: - if module_name == "TransformerEncoderBase": - return "TransformerEncoder" - else: - return module_name - - -class TransformerEncoderBase(FairseqEncoder): - """ - Transformer encoder consisting of *cfg.encoder.layers* layers. Each layer - is a :class:`TransformerEncoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): encoding dictionary - embed_tokens (torch.nn.Embedding): input embedding - """ - - def __init__(self, cfg, dictionary, embed_tokens): - self.cfg = cfg - super().__init__(dictionary) - self.register_buffer("version", torch.Tensor([3])) - - self.dropout_module = FairseqDropout( - cfg.dropout, module_name=module_name_fordropout(self.__class__.__name__) - ) - self.encoder_layerdrop = cfg.encoder.layerdrop - - embed_dim = embed_tokens.embedding_dim - self.padding_idx = embed_tokens.padding_idx - self.max_source_positions = cfg.max_source_positions - - self.embed_tokens = embed_tokens - - self.embed_scale = 1.0 if cfg.no_scale_embedding else math.sqrt(embed_dim) - - self.embed_positions = ( - PositionalEmbedding( - cfg.max_source_positions, - embed_dim, - self.padding_idx, - learned=cfg.encoder.learned_pos, - ) - if not cfg.no_token_positional_embeddings - else None - ) - if cfg.layernorm_embedding: - self.layernorm_embedding = LayerNorm(embed_dim, export=cfg.export) - else: - self.layernorm_embedding = None - - if not cfg.adaptive_input and cfg.quant_noise.pq > 0: - self.quant_noise = apply_quant_noise_( - nn.Linear(embed_dim, embed_dim, bias=False), - cfg.quant_noise.pq, - cfg.quant_noise.pq_block_size, - ) - else: - self.quant_noise = None - - if self.encoder_layerdrop > 0.0: - self.layers = LayerDropModuleList(p=self.encoder_layerdrop) - else: - self.layers = nn.ModuleList([]) - self.layers.extend( - [self.build_encoder_layer(cfg) for i in range(cfg.encoder.layers)] - ) - self.num_layers = len(self.layers) - - if cfg.encoder.normalize_before: - self.layer_norm = LayerNorm(embed_dim, export=cfg.export) - else: - self.layer_norm = None - - def build_encoder_layer(self, cfg): - layer = transformer_layer.TransformerEncoderLayerBase(cfg) - checkpoint = cfg.checkpoint_activations - if checkpoint: - offload_to_cpu = cfg.offload_activations - layer = checkpoint_wrapper(layer, 
offload_to_cpu=offload_to_cpu) - # if we are checkpointing, enforce that FSDP always wraps the - # checkpointed layer, regardless of layer size - min_params_to_wrap = cfg.min_params_to_wrap if not checkpoint else 0 - layer = fsdp_wrap(layer, min_num_params=min_params_to_wrap) - return layer - - def forward_embedding( - self, src_tokens, token_embedding: Optional[torch.Tensor] = None - ): - # embed tokens and positions - if token_embedding is None: - token_embedding = self.embed_tokens(src_tokens) - x = embed = self.embed_scale * token_embedding - if self.embed_positions is not None: - x = embed + self.embed_positions(src_tokens) - if self.layernorm_embedding is not None: - x = self.layernorm_embedding(x) - x = self.dropout_module(x) - if self.quant_noise is not None: - x = self.quant_noise(x) - return x, embed - - def forward( - self, - src_tokens, - src_lengths: Optional[torch.Tensor] = None, - return_all_hiddens: bool = False, - token_embeddings: Optional[torch.Tensor] = None, - ): - """ - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - src_lengths (torch.LongTensor): lengths of each source sentence of - shape `(batch)` - return_all_hiddens (bool, optional): also return all of the - intermediate hidden states (default: False). - token_embeddings (torch.Tensor, optional): precomputed embeddings - default `None` will recompute embeddings - - Returns: - dict: - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - - **encoder_embedding** (Tensor): the (scaled) embedding lookup - of shape `(batch, src_len, embed_dim)` - - **encoder_states** (List[Tensor]): all intermediate - hidden states of shape `(src_len, batch, embed_dim)`. - Only populated if *return_all_hiddens* is True. - """ - return self.forward_scriptable( - src_tokens, src_lengths, return_all_hiddens, token_embeddings - ) - - # TorchScript doesn't support super() method so that the scriptable Subclass - # can't access the base class model in Torchscript. - # Current workaround is to add a helper function with different name and - # call the helper function from scriptable Subclass. - def forward_scriptable( - self, - src_tokens, - src_lengths: Optional[torch.Tensor] = None, - return_all_hiddens: bool = False, - token_embeddings: Optional[torch.Tensor] = None, - ): - """ - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - src_lengths (torch.LongTensor): lengths of each source sentence of - shape `(batch)` - return_all_hiddens (bool, optional): also return all of the - intermediate hidden states (default: False). - token_embeddings (torch.Tensor, optional): precomputed embeddings - default `None` will recompute embeddings - - Returns: - dict: - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - - **encoder_embedding** (Tensor): the (scaled) embedding lookup - of shape `(batch, src_len, embed_dim)` - - **encoder_states** (List[Tensor]): all intermediate - hidden states of shape `(src_len, batch, embed_dim)`. - Only populated if *return_all_hiddens* is True. 
- """ - # compute padding mask - encoder_padding_mask = src_tokens.eq(self.padding_idx) - has_pads = src_tokens.device.type == "xla" or encoder_padding_mask.any() - - x, encoder_embedding = self.forward_embedding(src_tokens, token_embeddings) - - # account for padding while computing the representation - if has_pads: - x = x * (1 - encoder_padding_mask.unsqueeze(-1).type_as(x)) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - encoder_states = [] - - if return_all_hiddens: - encoder_states.append(x) - - # encoder layers - for layer in self.layers: - x = layer( - x, encoder_padding_mask=encoder_padding_mask if has_pads else None - ) - if return_all_hiddens: - assert encoder_states is not None - encoder_states.append(x) - - if self.layer_norm is not None: - x = self.layer_norm(x) - - # The Pytorch Mobile lite interpreter does not supports returning NamedTuple in - # `forward` so we use a dictionary instead. - # TorchScript does not support mixed values so the values are all lists. - # The empty list is equivalent to None. - src_lengths = ( - src_tokens.ne(self.padding_idx) - .sum(dim=1, dtype=torch.int32) - .reshape(-1, 1) - .contiguous() - ) - return { - "encoder_out": [x], # T x B x C - "encoder_padding_mask": [encoder_padding_mask], # B x T - "encoder_embedding": [encoder_embedding], # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": [], - "src_lengths": [src_lengths], - } - - @torch.jit.export - def reorder_encoder_out(self, encoder_out: Dict[str, List[Tensor]], new_order): - """ - Reorder encoder output according to *new_order*. - - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - *encoder_out* rearranged according to *new_order* - """ - if len(encoder_out["encoder_out"]) == 0: - new_encoder_out = [] - else: - new_encoder_out = [encoder_out["encoder_out"][0].index_select(1, new_order)] - if len(encoder_out["encoder_padding_mask"]) == 0: - new_encoder_padding_mask = [] - else: - new_encoder_padding_mask = [ - encoder_out["encoder_padding_mask"][0].index_select(0, new_order) - ] - if len(encoder_out["encoder_embedding"]) == 0: - new_encoder_embedding = [] - else: - new_encoder_embedding = [ - encoder_out["encoder_embedding"][0].index_select(0, new_order) - ] - - if len(encoder_out["src_tokens"]) == 0: - src_tokens = [] - else: - src_tokens = [(encoder_out["src_tokens"][0]).index_select(0, new_order)] - - if len(encoder_out["src_lengths"]) == 0: - src_lengths = [] - else: - src_lengths = [(encoder_out["src_lengths"][0]).index_select(0, new_order)] - - encoder_states = encoder_out["encoder_states"] - if len(encoder_states) > 0: - for idx, state in enumerate(encoder_states): - encoder_states[idx] = state.index_select(1, new_order) - - return { - "encoder_out": new_encoder_out, # T x B x C - "encoder_padding_mask": new_encoder_padding_mask, # B x T - "encoder_embedding": new_encoder_embedding, # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": src_tokens, # B x T - "src_lengths": src_lengths, # B x 1 - } - - def max_positions(self): - """Maximum input length supported by the encoder.""" - if self.embed_positions is None: - return self.max_source_positions - return min(self.max_source_positions, self.embed_positions.max_positions) - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - if isinstance(self.embed_positions, SinusoidalPositionalEmbedding): - weights_key = 
"{}.embed_positions.weights".format(name) - if weights_key in state_dict: - print("deleting {0}".format(weights_key)) - del state_dict[weights_key] - state_dict[ - "{}.embed_positions._float_tensor".format(name) - ] = torch.FloatTensor(1) - for i in range(self.num_layers): - # update layer norms - self.layers[i].upgrade_state_dict_named( - state_dict, "{}.layers.{}".format(name, i) - ) - - version_key = "{}.version".format(name) - if utils.item(state_dict.get(version_key, torch.Tensor([1]))[0]) < 2: - # earlier checkpoints did not normalize after the stack of layers - self.layer_norm = None - self.normalize = False - state_dict[version_key] = torch.Tensor([1]) - return state_dict - - -class TransformerEncoder(TransformerEncoderBase): - def __init__(self, args, dictionary, embed_tokens): - self.args = args - super().__init__( - TransformerConfig.from_namespace(args), - dictionary, - embed_tokens, - ) - - def build_encoder_layer(self, args): - return super().build_encoder_layer( - TransformerConfig.from_namespace(args), - ) diff --git a/kosmos-g/fairseq/fairseq/models/transformer/transformer_legacy.py b/kosmos-g/fairseq/fairseq/models/transformer/transformer_legacy.py deleted file mode 100644 index af9646740..000000000 --- a/kosmos-g/fairseq/fairseq/models/transformer/transformer_legacy.py +++ /dev/null @@ -1,275 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.dataclass.utils import gen_parser_from_dataclass -from fairseq.models import ( - register_model, - register_model_architecture, -) -from fairseq.models.transformer.transformer_config import ( - TransformerConfig, - DEFAULT_MAX_SOURCE_POSITIONS, - DEFAULT_MAX_TARGET_POSITIONS, - DEFAULT_MIN_PARAMS_TO_WRAP, -) -from fairseq.models.transformer.transformer_base import ( - TransformerModelBase, -) - - -@register_model("transformer") -class TransformerModel(TransformerModelBase): - """ - This is the legacy implementation of the transformer model that - uses argparse for configuration. 
- """ - - @classmethod - def hub_models(cls): - # fmt: off - - def moses_subword(path): - return { - 'path': path, - 'tokenizer': 'moses', - 'bpe': 'subword_nmt', - } - - def moses_fastbpe(path): - return { - 'path': path, - 'tokenizer': 'moses', - 'bpe': 'fastbpe', - } - - def spm(path): - return { - 'path': path, - 'bpe': 'sentencepiece', - 'tokenizer': 'space', - } - - return { - 'transformer.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-fr.joined-dict.transformer.tar.bz2'), - 'transformer.wmt16.en-de': 'https://dl.fbaipublicfiles.com/fairseq/models/wmt16.en-de.joined-dict.transformer.tar.bz2', - 'transformer.wmt18.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/wmt18.en-de.ensemble.tar.gz'), - 'transformer.wmt19.en-de': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.joined-dict.ensemble.tar.gz'), - 'transformer.wmt19.en-ru': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.ensemble.tar.gz'), - 'transformer.wmt19.de-en': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.joined-dict.ensemble.tar.gz'), - 'transformer.wmt19.ru-en': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.ensemble.tar.gz'), - 'transformer.wmt19.en-de.single_model': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.joined-dict.single_model.tar.gz'), - 'transformer.wmt19.en-ru.single_model': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.single_model.tar.gz'), - 'transformer.wmt19.de-en.single_model': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.joined-dict.single_model.tar.gz'), - 'transformer.wmt19.ru-en.single_model': moses_fastbpe('https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.single_model.tar.gz'), - 'transformer.wmt20.en-ta': spm('https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-ta.single.tar.gz'), - 'transformer.wmt20.en-iu.news': spm('https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-iu.news.single.tar.gz'), - 'transformer.wmt20.en-iu.nh': spm('https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-iu.nh.single.tar.gz'), - 'transformer.wmt20.ta-en': spm('https://dl.fbaipublicfiles.com/fairseq/models/wmt20.ta-en.single.tar.gz'), - 'transformer.wmt20.iu-en.news': spm('https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu-en.news.single.tar.gz'), - 'transformer.wmt20.iu-en.nh': spm('https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu-en.nh.single.tar.gz'), - 'transformer.flores101.mm100.615M': spm('https://dl.fbaipublicfiles.com/flores101/pretrained_models/flores101_mm100_615M.tar.gz'), - 'transformer.flores101.mm100.175M': spm('https://dl.fbaipublicfiles.com/flores101/pretrained_models/flores101_mm100_175M.tar.gz'), - } - # fmt: on - - def __init__(self, args, encoder, decoder): - cfg = TransformerConfig.from_namespace(args) - super().__init__(cfg, encoder, decoder) - self.args = args - - @classmethod - def add_args(cls, parser): - """Add model-specific arguments to the parser.""" - # we want to build the args recursively in this case. 
- # do not set defaults so that settings defaults from various architectures still works - gen_parser_from_dataclass( - parser, TransformerConfig(), delete_default=True, with_prefix="" - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - if args.encoder_layers_to_keep: - args.encoder_layers = len(args.encoder_layers_to_keep.split(",")) - if args.decoder_layers_to_keep: - args.decoder_layers = len(args.decoder_layers_to_keep.split(",")) - - if getattr(args, "max_source_positions", None) is None: - args.max_source_positions = DEFAULT_MAX_SOURCE_POSITIONS - if getattr(args, "max_target_positions", None) is None: - args.max_target_positions = DEFAULT_MAX_TARGET_POSITIONS - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - if args.share_all_embeddings: - if src_dict != tgt_dict: - raise ValueError("--share-all-embeddings requires a joined dictionary") - if args.encoder_embed_dim != args.decoder_embed_dim: - raise ValueError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if args.decoder_embed_path and ( - args.decoder_embed_path != args.encoder_embed_path - ): - raise ValueError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - args.share_decoder_input_output_embed = True - - if getattr(args, "offload_activations", False): - args.checkpoint_activations = True # offloading implies checkpointing - - if not args.share_all_embeddings: - args.min_params_to_wrap = getattr( - args, "min_params_to_wrap", DEFAULT_MIN_PARAMS_TO_WRAP - ) - cfg = TransformerConfig.from_namespace(args) - return super().build_model(cfg, task) - - @classmethod - def build_embedding(cls, args, dictionary, embed_dim, path=None): - return super().build_embedding( - TransformerConfig.from_namespace(args), dictionary, embed_dim, path - ) - - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - return super().build_encoder( - TransformerConfig.from_namespace(args), src_dict, embed_tokens - ) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - return super().build_decoder( - TransformerConfig.from_namespace(args), tgt_dict, embed_tokens - ) - - -# architectures - - -@register_model_architecture("transformer", "transformer_tiny") -def tiny_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 64) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 64) - args.encoder_layers = getattr(args, "encoder_layers", 2) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 2) - args.decoder_layers = getattr(args, "decoder_layers", 2) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 2) - return base_architecture(args) - - -@register_model_architecture("transformer", "transformer") -def base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim 
= getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.no_cross_attention = getattr(args, "no_cross_attention", False) - args.cross_self_attention = getattr(args, "cross_self_attention", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - args.layernorm_embedding = getattr(args, "layernorm_embedding", False) - args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False) - args.checkpoint_activations = getattr(args, "checkpoint_activations", False) - args.offload_activations = getattr(args, "offload_activations", False) - if args.offload_activations: - args.checkpoint_activations = True - args.encoder_layers_to_keep = getattr(args, "encoder_layers_to_keep", None) - args.decoder_layers_to_keep = getattr(args, "decoder_layers_to_keep", None) - args.encoder_layerdrop = getattr(args, "encoder_layerdrop", 0) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0) - args.quant_noise_pq_block_size = getattr(args, "quant_noise_pq_block_size", 8) - args.quant_noise_scalar = getattr(args, "quant_noise_scalar", 0) - - -@register_model_architecture("transformer", "transformer_iwslt_de_en") -def transformer_iwslt_de_en(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.decoder_layers = getattr(args, "decoder_layers", 6) - base_architecture(args) - - -@register_model_architecture("transformer", "transformer_wmt_en_de") -def transformer_wmt_en_de(args): - base_architecture(args) - - -# parameters used in the "Attention Is All You Need" paper (Vaswani et al., 2017) -@register_model_architecture("transformer", "transformer_vaswani_wmt_en_de_big") -def transformer_vaswani_wmt_en_de_big(args): - args.encoder_embed_dim = getattr(args, 
"encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.3) - base_architecture(args) - - -@register_model_architecture("transformer", "transformer_vaswani_wmt_en_fr_big") -def transformer_vaswani_wmt_en_fr_big(args): - args.dropout = getattr(args, "dropout", 0.1) - transformer_vaswani_wmt_en_de_big(args) - - -@register_model_architecture("transformer", "transformer_wmt_en_de_big") -def transformer_wmt_en_de_big(args): - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - transformer_vaswani_wmt_en_de_big(args) - - -# default parameters used in tensor2tensor implementation -@register_model_architecture("transformer", "transformer_wmt_en_de_big_t2t") -def transformer_wmt_en_de_big_t2t(args): - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.activation_dropout = getattr(args, "activation_dropout", 0.1) - transformer_vaswani_wmt_en_de_big(args) diff --git a/kosmos-g/fairseq/fairseq/models/transformer_align.py b/kosmos-g/fairseq/fairseq/models/transformer_align.py deleted file mode 100644 index eaf585bd1..000000000 --- a/kosmos-g/fairseq/fairseq/models/transformer_align.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.models import register_model, register_model_architecture -from fairseq.models.transformer import ( - TransformerModel, - base_architecture, - transformer_wmt_en_de_big, -) - - -@register_model("transformer_align") -class TransformerAlignModel(TransformerModel): - """ - See "Jointly Learning to Align and Translate with Transformer - Models" (Garg et al., EMNLP 2019). - """ - - def __init__(self, encoder, decoder, args): - super().__init__(args, encoder, decoder) - self.alignment_heads = args.alignment_heads - self.alignment_layer = args.alignment_layer - self.full_context_alignment = args.full_context_alignment - - @staticmethod - def add_args(parser): - # fmt: off - super(TransformerAlignModel, TransformerAlignModel).add_args(parser) - parser.add_argument('--alignment-heads', type=int, metavar='D', - help='Number of cross attention heads per layer to supervised with alignments') - parser.add_argument('--alignment-layer', type=int, metavar='D', - help='Layer number which has to be supervised. 
0 corresponding to the bottommost layer.') - parser.add_argument('--full-context-alignment', action='store_true', - help='Whether or not alignment is supervised conditioned on the full target context.') - # fmt: on - - @classmethod - def build_model(cls, args, task): - # set any default arguments - transformer_align(args) - - transformer_model = TransformerModel.build_model(args, task) - return TransformerAlignModel( - transformer_model.encoder, transformer_model.decoder, args - ) - - def forward(self, src_tokens, src_lengths, prev_output_tokens): - encoder_out = self.encoder(src_tokens, src_lengths) - return self.forward_decoder(prev_output_tokens, encoder_out) - - def forward_decoder( - self, - prev_output_tokens, - encoder_out=None, - incremental_state=None, - features_only=False, - **extra_args, - ): - attn_args = { - "alignment_layer": self.alignment_layer, - "alignment_heads": self.alignment_heads, - } - decoder_out = self.decoder(prev_output_tokens, encoder_out, **attn_args) - - if self.full_context_alignment: - attn_args["full_context_alignment"] = self.full_context_alignment - _, alignment_out = self.decoder( - prev_output_tokens, - encoder_out, - features_only=True, - **attn_args, - **extra_args, - ) - decoder_out[1]["attn"] = alignment_out["attn"] - - return decoder_out - - -@register_model_architecture("transformer_align", "transformer_align") -def transformer_align(args): - args.alignment_heads = getattr(args, "alignment_heads", 1) - args.alignment_layer = getattr(args, "alignment_layer", 4) - args.full_context_alignment = getattr(args, "full_context_alignment", False) - base_architecture(args) - - -@register_model_architecture("transformer_align", "transformer_wmt_en_de_big_align") -def transformer_wmt_en_de_big_align(args): - args.alignment_heads = getattr(args, "alignment_heads", 1) - args.alignment_layer = getattr(args, "alignment_layer", 4) - transformer_wmt_en_de_big(args) diff --git a/kosmos-g/fairseq/fairseq/models/transformer_from_pretrained_xlm.py b/kosmos-g/fairseq/fairseq/models/transformer_from_pretrained_xlm.py deleted file mode 100644 index 236d9942e..000000000 --- a/kosmos-g/fairseq/fairseq/models/transformer_from_pretrained_xlm.py +++ /dev/null @@ -1,152 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
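`forward_decoder` above implements a two-pass scheme: the first pass produces the translation logits as usual, and, when `--full-context-alignment` is set, a second features-only pass reruns the decoder so the alignment heads can attend over the full target context; only that pass's attention is kept and grafted onto the first pass's output. Schematically (the `decoder` callable here is a hypothetical stand-in for the fairseq decoder):

```python
def forward_decoder_sketch(decoder, prev_output_tokens, encoder_out,
                           alignment_layer, alignment_heads,
                           full_context_alignment):
    attn_args = {"alignment_layer": alignment_layer,
                 "alignment_heads": alignment_heads}
    # pass 1: standard decoding produces the output logits
    decoder_out = decoder(prev_output_tokens, encoder_out, **attn_args)
    if full_context_alignment:
        # pass 2: features only, with the full target context visible to the
        # alignment heads; discard the features, keep the attention
        attn_args["full_context_alignment"] = True
        _, alignment_out = decoder(prev_output_tokens, encoder_out,
                                   features_only=True, **attn_args)
        decoder_out[1]["attn"] = alignment_out["attn"]
    return decoder_out
```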
- -import os -from typing import Any, Dict - -from fairseq import checkpoint_utils -from fairseq.data.legacy.masked_lm_dictionary import MaskedLMDictionary -from fairseq.models import register_model, register_model_architecture -from fairseq.models.transformer import ( - TransformerDecoder, - TransformerEncoder, - TransformerModel, - base_architecture as transformer_base_architecture, -) - - -@register_model("transformer_from_pretrained_xlm") -class TransformerFromPretrainedXLMModel(TransformerModel): - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - TransformerModel.add_args(parser) - parser.add_argument( - "--pretrained-xlm-checkpoint", - type=str, - metavar="STR", - help="XLM model to use for initializing transformer encoder and/or decoder", - ) - parser.add_argument( - "--init-encoder-only", - action="store_true", - help="if set, don't load the XLM weights and embeddings into decoder", - ) - parser.add_argument( - "--init-decoder-only", - action="store_true", - help="if set, don't load the XLM weights and embeddings into encoder", - ) - - @classmethod - def build_model(self, args, task, cls_dictionary=MaskedLMDictionary): - assert hasattr(args, "pretrained_xlm_checkpoint"), ( - "You must specify a path for --pretrained-xlm-checkpoint to use " - "--arch transformer_from_pretrained_xlm" - ) - assert isinstance(task.source_dictionary, cls_dictionary) and isinstance( - task.target_dictionary, cls_dictionary - ), ( - "You should use a MaskedLMDictionary when using --arch " - "transformer_from_pretrained_xlm because the pretrained XLM model " - "was trained using data binarized with MaskedLMDictionary. " - "For translation, you may want to use --task " - "translation_from_pretrained_xlm" - ) - assert not ( - getattr(args, "init_encoder_only", False) - and getattr(args, "init_decoder_only", False) - ), "Only one of --init-encoder-only and --init-decoder-only can be set." - return super().build_model(args, task) - - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - return TransformerEncoderFromPretrainedXLM(args, src_dict, embed_tokens) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - return TransformerDecoderFromPretrainedXLM(args, tgt_dict, embed_tokens) - - -def upgrade_state_dict_with_xlm_weights( - state_dict: Dict[str, Any], pretrained_xlm_checkpoint: str -) -> Dict[str, Any]: - """ - Load XLM weights into a Transformer encoder or decoder model. - - Args: - state_dict: state dict for either TransformerEncoder or - TransformerDecoder - pretrained_xlm_checkpoint: checkpoint to load XLM weights from - - Raises: - AssertionError: If architecture (num layers, attention heads, etc.) - does not match between the current Transformer encoder or - decoder and the pretrained_xlm_checkpoint - """ - if not os.path.exists(pretrained_xlm_checkpoint): - raise IOError("Model file not found: {}".format(pretrained_xlm_checkpoint)) - - state = checkpoint_utils.load_checkpoint_to_cpu(pretrained_xlm_checkpoint) - xlm_state_dict = state["model"] - for key in xlm_state_dict.keys(): - - for search_key in ["embed_tokens", "embed_positions", "layers"]: - if search_key in key: - subkey = key[key.find(search_key) :] - assert subkey in state_dict, ( - "{} Transformer encoder / decoder " - "state_dict does not contain {}. 
Cannot " - "load {} from pretrained XLM checkpoint " - "{} into Transformer.".format( - str(state_dict.keys()), subkey, key, pretrained_xlm_checkpoint - ) - ) - - state_dict[subkey] = xlm_state_dict[key] - return state_dict - - -class TransformerEncoderFromPretrainedXLM(TransformerEncoder): - def __init__(self, args, dictionary, embed_tokens): - super().__init__(args, dictionary, embed_tokens) - if getattr(args, "init_decoder_only", False): - # Don't load XLM weights for encoder if --init-decoder-only - return - - assert hasattr(args, "pretrained_xlm_checkpoint"), ( - "--pretrained-xlm-checkpoint must be specified to load Transformer " - "encoder from pretrained XLM" - ) - xlm_loaded_state_dict = upgrade_state_dict_with_xlm_weights( - state_dict=self.state_dict(), - pretrained_xlm_checkpoint=args.pretrained_xlm_checkpoint, - ) - self.load_state_dict(xlm_loaded_state_dict, strict=True) - - -class TransformerDecoderFromPretrainedXLM(TransformerDecoder): - def __init__(self, args, dictionary, embed_tokens, no_encoder_attn=False): - super().__init__(args, dictionary, embed_tokens, no_encoder_attn) - if getattr(args, "init_encoder_only", False): - # Don't load XLM weights for decoder if --init-encoder-only - return - assert hasattr(args, "pretrained_xlm_checkpoint"), ( - "--pretrained-xlm-checkpoint must be specified to load Transformer " - "decoder from pretrained XLM" - ) - - xlm_loaded_state_dict = upgrade_state_dict_with_xlm_weights( - state_dict=self.state_dict(), - pretrained_xlm_checkpoint=args.pretrained_xlm_checkpoint, - ) - self.load_state_dict(xlm_loaded_state_dict, strict=True) - - -@register_model_architecture( - "transformer_from_pretrained_xlm", "transformer_from_pretrained_xlm" -) -def base_architecture(args): - transformer_base_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/transformer_lm.py b/kosmos-g/fairseq/fairseq/models/transformer_lm.py deleted file mode 100644 index f029cf05d..000000000 --- a/kosmos-g/fairseq/fairseq/models/transformer_lm.py +++ /dev/null @@ -1,574 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- - -from dataclasses import dataclass, field -from typing import Optional - -from fairseq import options, utils -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import ( - FairseqLanguageModel, - register_model, - register_model_architecture, -) -from fairseq.models.transformer import ( - DEFAULT_MIN_PARAMS_TO_WRAP, - Embedding, - TransformerDecoder, -) -from fairseq.modules import AdaptiveInput, CharacterTokenEmbedder -from fairseq.utils import safe_getattr, safe_hasattr -from omegaconf import II - - -DEFAULT_MAX_TARGET_POSITIONS = 1024 - - -@dataclass -class TransformerLanguageModelConfig(FairseqDataclass): - activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field( - default="relu", metadata={"help": "activation function to use"} - ) - dropout: float = field(default=0.1, metadata={"help": "dropout probability"}) - attention_dropout: float = field( - default=0.0, metadata={"help": "dropout probability for attention weights"} - ) - activation_dropout: float = field( - default=0.0, metadata={"help": "dropout probability after activation in FFN."} - ) - relu_dropout: float = field( - default=0.0, metadata={"help": "dropout probability after activation in FFN."} - ) - decoder_embed_dim: int = field( - default=512, metadata={"help": "decoder embedding dimension"} - ) - decoder_output_dim: int = field( - default=512, metadata={"help": "decoder output dimension"} - ) - decoder_input_dim: int = field( - default=512, metadata={"help": "decoder input dimension"} - ) - decoder_ffn_embed_dim: int = field( - default=2048, metadata={"help": "decoder embedding dimension for FFN"} - ) - decoder_layers: int = field(default=6, metadata={"help": "num decoder layers"}) - decoder_attention_heads: int = field( - default=8, metadata={"help": "num decoder attention heads"} - ) - decoder_normalize_before: bool = field( - default=False, metadata={"help": "apply layernorm before each decoder block"} - ) - no_decoder_final_norm: bool = field( - default=False, - metadata={"help": "don't add an extra layernorm after the last decoder block"}, - ) - adaptive_softmax_cutoff: Optional[str] = field( - default=None, - metadata={ - "help": "comma separated list of adaptive softmax cutoff points. 
" - "Must be used with adaptive_loss criterion" - }, - ) - adaptive_softmax_dropout: float = field( - default=0, - metadata={"help": "sets adaptive softmax dropout for the tail projections"}, - ) - adaptive_softmax_factor: float = field( - default=4, metadata={"help": "adaptive input factor"} - ) - no_token_positional_embeddings: bool = field( - default=False, - metadata={ - "help": "if set, disables positional embeddings (outside self attention)" - }, - ) - share_decoder_input_output_embed: bool = field( - default=False, metadata={"help": "share decoder input and output embeddings"} - ) - character_embeddings: bool = field( - default=False, - metadata={ - "help": "if set, uses character embedding convolutions to produce token embeddings" - }, - ) - character_filters: str = field( - default="[(1, 64), (2, 128), (3, 192), (4, 256), (5, 256), (6, 256), (7, 256)]", - metadata={"help": "size of character embeddings"}, - ) - character_embedding_dim: int = field( - default=4, metadata={"help": "size of character embeddings"} - ) - char_embedder_highway_layers: int = field( - default=2, - metadata={"help": "number of highway layers for character token embeddder"}, - ) - adaptive_input: bool = field( - default=False, metadata={"help": "if set, uses adaptive input"} - ) - adaptive_input_factor: float = field( - default=4, metadata={"help": "adaptive input factor"} - ) - adaptive_input_cutoff: Optional[str] = field( - default=None, - metadata={"help": "comma separated list of adaptive input cutoff points."}, - ) - tie_adaptive_weights: bool = field( - default=False, - metadata={ - "help": "if set, ties the weights of adaptive softmax and adaptive input" - }, - ) - tie_adaptive_proj: bool = field( - default=False, - metadata={ - "help": "if set, ties the projection weights of adaptive softmax and adaptive input" - }, - ) - decoder_learned_pos: bool = field( - default=False, - metadata={"help": "use learned positional embeddings in the decoder"}, - ) - layernorm_embedding: bool = field( - default=False, metadata={"help": "add layernorm to embedding"} - ) - no_scale_embedding: bool = field( - default=False, metadata={"help": "if True, dont scale embeddings"} - ) - checkpoint_activations: bool = field( - default=False, metadata={"help": "checkpoint activations at each layer"} - ) - offload_activations: bool = field( - default=False, - metadata={"help": "move checkpointed activations to CPU after they are used."}, - ) - # config for "Reducing Transformer Depth on Demand with Structured Dropout" (Fan et al., 2019) - decoder_layerdrop: float = field( - default=0.0, metadata={"help": "LayerDrop probability for decoder"} - ) - decoder_layers_to_keep: Optional[str] = field( - default=None, - metadata={ - "help": "which layers to *keep* when pruning as a comma-separated list" - }, - ) - # config for Training with Quantization Noise for Extreme Model Compression ({Fan*, Stock*} et al., 2020) - quant_noise_pq: float = field( - default=0.0, - metadata={"help": "iterative PQ quantization noise at training time"}, - ) - quant_noise_pq_block_size: int = field( - default=8, - metadata={"help": "block size of quantization noise at training time"}, - ) - quant_noise_scalar: float = field( - default=0.0, - metadata={ - "help": "scalar quantization noise and scalar quantization at training time" - }, - ) - # config for Fully Sharded Data Parallel (FSDP) training - min_params_to_wrap: int = field( - default=DEFAULT_MIN_PARAMS_TO_WRAP, - metadata={ - "help": ( - "minimum number of params for a layer to be wrapped with 
FSDP() when " - "training with --ddp-backend=fully_sharded. Smaller values will " - "improve memory efficiency, but may make torch.distributed " - "communication less efficient due to smaller input sizes. This option " - "is set to 0 (i.e., always wrap) when --checkpoint-activations or " - "--offload-activations are passed." - ) - }, - ) - # config for "BASE Layers: Simplifying Training of Large, Sparse Models" - base_layers: Optional[int] = field( - default=0, metadata={"help": "number of BASE layers in total"} - ) - base_sublayers: Optional[int] = field( - default=1, metadata={"help": "number of sublayers in each BASE layer"} - ) - base_shuffle: Optional[int] = field( - default=1, - metadata={"help": "shuffle tokens between workers before computing assignment"}, - ) - # NormFormer - scale_fc: Optional[bool] = field( - default=False, - metadata={"help": "Insert LayerNorm between fully connected layers"}, - ) - scale_attn: Optional[bool] = field( - default=False, metadata={"help": "Insert LayerNorm after attention"} - ) - scale_heads: Optional[bool] = field( - default=False, - metadata={"help": "Learn a scale coefficient for each attention head"}, - ) - scale_resids: Optional[bool] = field( - default=False, - metadata={"help": "Learn a scale coefficient for each residual connection"}, - ) - # options from other parts of the config - add_bos_token: bool = II("task.add_bos_token") - tokens_per_sample: int = II("task.tokens_per_sample") - max_target_positions: Optional[int] = II("task.max_target_positions") - tpu: bool = II("common.tpu") - - -@register_model("transformer_lm", dataclass=TransformerLanguageModelConfig) -class TransformerLanguageModel(FairseqLanguageModel): - @classmethod - def hub_models(cls): - def moses_fastbpe(path): - return {"path": path, "tokenizer": "moses", "bpe": "fastbpe"} - - def spm(path): - return {"path": path, "tokenizer": "space", "bpe": "sentencepiece"} - - return { - "transformer_lm.gbw.adaptive_huge": "https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_gbw_huge.tar.bz2", - "transformer_lm.wiki103.adaptive": "https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_wiki103.v2.tar.bz2", - "transformer_lm.wmt19.en": moses_fastbpe( - "https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.en.tar.bz2" - ), - "transformer_lm.wmt19.de": moses_fastbpe( - "https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.de.tar.bz2" - ), - "transformer_lm.wmt19.ru": moses_fastbpe( - "https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.ru.tar.bz2" - ), - "transformer_lm.wmt20.en": spm( - "https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt20.en.tar.gz" - ), - "transformer_lm.wmt20.ta": spm( - "https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt20.ta.tar.gz" - ), - "transformer_lm.wmt20.iu.news": spm( - "https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt20.iu.news.tar.gz" - ), - "transformer_lm.wmt20.iu.nh": spm( - "https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt20.iu.nh.tar.gz" - ), - } - - def __init__(self, decoder): - super().__init__(decoder) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - if args.decoder_layers_to_keep: - args.decoder_layers = len(args.decoder_layers_to_keep.split(",")) - - if safe_getattr(args, "max_target_positions", None) is None: - args.max_target_positions = safe_getattr( - args, "tokens_per_sample", DEFAULT_MAX_TARGET_POSITIONS - ) - - if args.character_embeddings: - embed_tokens = CharacterTokenEmbedder( - task.source_dictionary, - eval(args.character_filters), - 
args.character_embedding_dim, - args.decoder_embed_dim, - args.char_embedder_highway_layers, - ) - elif args.adaptive_input: - embed_tokens = AdaptiveInput( - len(task.source_dictionary), - task.source_dictionary.pad(), - args.decoder_input_dim, - args.adaptive_input_factor, - args.decoder_embed_dim, - options.eval_str_list(args.adaptive_input_cutoff, type=int), - args.quant_noise_pq, - args.quant_noise_pq_block_size, - ) - else: - embed_tokens = cls.build_embedding( - args, task.source_dictionary, args.decoder_input_dim - ) - - if args.tie_adaptive_weights: - assert args.adaptive_input - assert args.adaptive_input_factor == args.adaptive_softmax_factor - assert ( - args.adaptive_softmax_cutoff == args.adaptive_input_cutoff - ), "{} != {}".format( - args.adaptive_softmax_cutoff, args.adaptive_input_cutoff - ) - assert args.decoder_input_dim == args.decoder_output_dim - - decoder = TransformerDecoder( - args, task.target_dictionary, embed_tokens, no_encoder_attn=True - ) - return cls(decoder) - - @classmethod - def build_embedding(cls, args, dictionary, embed_dim, path=None): - embed_tokens = Embedding(len(dictionary), embed_dim, dictionary.pad()) - return embed_tokens - - -def base_lm_architecture(args): - # backward compatibility for older model checkpoints - if safe_hasattr(args, "no_tie_adaptive_proj"): - # previous models defined --no-tie-adaptive-proj, so use the existence of - # that option to determine if this is an "old" model checkpoint - args.no_decoder_final_norm = True # old models always set this to True - if args.no_tie_adaptive_proj is False: - args.tie_adaptive_proj = True - if safe_hasattr(args, "decoder_final_norm"): - args.no_decoder_final_norm = not args.decoder_final_norm - - args.dropout = safe_getattr(args, "dropout", 0.1) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.0) - - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = safe_getattr(args, "decoder_ffn_embed_dim", 2048) - args.decoder_layers = safe_getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 8) - args.adaptive_softmax_cutoff = safe_getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = safe_getattr(args, "adaptive_softmax_dropout", 0) - args.adaptive_softmax_factor = safe_getattr(args, "adaptive_softmax_factor", 4) - args.decoder_learned_pos = safe_getattr(args, "decoder_learned_pos", False) - args.activation_fn = safe_getattr(args, "activation_fn", "relu") - - args.decoder_layerdrop = safe_getattr(args, "decoder_layerdrop", 0) - args.decoder_layers_to_keep = safe_getattr(args, "decoder_layers_to_keep", None) - args.quant_noise_pq = safe_getattr(args, "quant_noise_pq", 0) - args.quant_noise_pq_block_size = safe_getattr(args, "quant_noise_pq_block_size", 8) - args.quant_noise_scalar = safe_getattr(args, "quant_noise_scalar", 0) - - args.base_layers = safe_getattr(args, "base_layers", 0) - args.base_sublayers = safe_getattr(args, "base_sublayers", 1) - args.base_shuffle = safe_getattr(args, "base_shuffle", False) - - args.add_bos_token = safe_getattr(args, "add_bos_token", False) - args.no_token_positional_embeddings = safe_getattr( - args, "no_token_positional_embeddings", False - ) - args.share_decoder_input_output_embed = safe_getattr( - args, "share_decoder_input_output_embed", False - ) - args.character_embeddings = safe_getattr(args, "character_embeddings", False) - - args.decoder_output_dim = safe_getattr( - args, "decoder_output_dim", 
args.decoder_embed_dim - ) - args.decoder_input_dim = safe_getattr( - args, "decoder_input_dim", args.decoder_embed_dim - ) - - # Model training is not stable without this - args.decoder_normalize_before = True - args.no_decoder_final_norm = safe_getattr(args, "no_decoder_final_norm", False) - - args.adaptive_input = safe_getattr(args, "adaptive_input", False) - args.adaptive_input_factor = safe_getattr(args, "adaptive_input_factor", 4) - args.adaptive_input_cutoff = safe_getattr(args, "adaptive_input_cutoff", None) - - args.tie_adaptive_weights = safe_getattr(args, "tie_adaptive_weights", False) - args.tie_adaptive_proj = safe_getattr(args, "tie_adaptive_proj", False) - - args.no_scale_embedding = safe_getattr(args, "no_scale_embedding", False) - args.layernorm_embedding = safe_getattr(args, "layernorm_embedding", False) - args.checkpoint_activations = safe_getattr(args, "checkpoint_activations", False) - args.offload_activations = safe_getattr(args, "offload_activations", False) - args.scale_fc = safe_getattr(args, "scale_fc", False) - args.scale_attn = safe_getattr(args, "scale_attn", False) - args.scale_heads = safe_getattr(args, "scale_heads", False) - args.scale_resids = safe_getattr(args, "scale_resids", False) - if args.offload_activations: - args.checkpoint_activations = True - - -@register_model_architecture("transformer_lm", "transformer_lm_big") -def transformer_lm_big(args): - args.decoder_layers = safe_getattr(args, "decoder_layers", 12) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = safe_getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 16) - base_lm_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_wiki103") -@register_model_architecture("transformer_lm", "transformer_lm_baevski_wiki103") -def transformer_lm_baevski_wiki103(args): - args.decoder_layers = safe_getattr(args, "decoder_layers", 16) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 8) - args.dropout = safe_getattr(args, "dropout", 0.3) - args.adaptive_input = safe_getattr(args, "adaptive_input", True) - args.tie_adaptive_weights = safe_getattr(args, "tie_adaptive_weights", True) - args.adaptive_input_cutoff = safe_getattr( - args, "adaptive_input_cutoff", "20000,60000" - ) - args.adaptive_softmax_cutoff = safe_getattr( - args, "adaptive_softmax_cutoff", "20000,60000" - ) - args.adaptive_softmax_dropout = safe_getattr(args, "adaptive_softmax_dropout", 0.2) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1) - args.activation_dropout = safe_getattr(args, "activation_dropout", 0.1) - args.no_decoder_final_norm = safe_getattr(args, "no_decoder_final_norm", True) - args.tie_adaptive_proj = safe_getattr(args, "tie_adaptive_proj", True) - transformer_lm_big(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gbw") -@register_model_architecture("transformer_lm", "transformer_lm_baevski_gbw") -def transformer_lm_baevski_gbw(args): - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 512) - args.dropout = safe_getattr(args, "dropout", 0.1) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1) - args.no_decoder_final_norm = safe_getattr(args, "no_decoder_final_norm", True) - transformer_lm_big(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt") -def transformer_lm_gpt(args): - args.decoder_embed_dim = safe_getattr(args, 
"decoder_embed_dim", 768) - args.decoder_ffn_embed_dim = safe_getattr(args, "decoder_ffn_embed_dim", 3072) - args.decoder_layers = safe_getattr(args, "decoder_layers", 12) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 12) - args.dropout = safe_getattr(args, "dropout", 0.1) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1) - args.activation_fn = safe_getattr(args, "activation_fn", "gelu") - base_lm_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt2_small") -def transformer_lm_gpt2_small(args): - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = safe_getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_layers = safe_getattr(args, "decoder_layers", 24) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 16) - args.dropout = safe_getattr(args, "dropout", 0.1) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1) - args.activation_fn = safe_getattr(args, "activation_fn", "gelu") - base_lm_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt2_tiny") -def transformer_lm_gpt2_tiny(args): - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 64) - args.decoder_ffn_embed_dim = safe_getattr(args, "decoder_ffn_embed_dim", 64) - args.decoder_layers = safe_getattr(args, "decoder_layers", 2) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 1) - args.dropout = safe_getattr(args, "dropout", 0.1) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1) - args.activation_fn = safe_getattr(args, "activation_fn", "gelu") - base_lm_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt2_medium") -def transformer_lm_gpt2_medium(args): - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 1280) - args.decoder_ffn_embed_dim = safe_getattr(args, "decoder_ffn_embed_dim", 5120) - args.decoder_layers = safe_getattr(args, "decoder_layers", 36) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 20) - args.dropout = safe_getattr(args, "dropout", 0.1) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1) - args.activation_fn = safe_getattr(args, "activation_fn", "gelu") - base_lm_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt2_big") -def transformer_lm_gpt2_big(args): - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 1600) - args.decoder_ffn_embed_dim = safe_getattr(args, "decoder_ffn_embed_dim", 6400) - args.decoder_layers = safe_getattr(args, "decoder_layers", 48) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 25) - args.dropout = safe_getattr(args, "dropout", 0.1) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1) - args.activation_fn = safe_getattr(args, "activation_fn", "gelu") - base_lm_architecture(args) - - -def base_gpt3_architecture(args): - args.decoder_input_dim = args.decoder_embed_dim - args.decoder_output_dim = args.decoder_embed_dim - args.decoder_ffn_embed_dim = safe_getattr( - args, "decoder_ffn_embed_dim", args.decoder_embed_dim * 4 - ) - # GPT-3 used learned positional embeddings, rather than sinusoidal - args.decoder_learned_pos = safe_getattr(args, "decoder_learned_pos", True) - args.dropout = safe_getattr(args, "dropout", 0.0) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.0) - 
args.activation_fn = safe_getattr(args, "activation_fn", "gelu") - args.share_decoder_input_output_embed = True - base_lm_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_small") -def transformer_lm_gpt3_small(args): - # 125M params - args.decoder_layers = safe_getattr(args, "decoder_layers", 12) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 768) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 12) - base_gpt3_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_medium") -def transformer_lm_gpt3_medium(args): - # 350M params - args.decoder_layers = safe_getattr(args, "decoder_layers", 24) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 1024) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 16) - base_gpt3_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_large") -def transformer_lm_gpt3_large(args): - # 760M params - args.decoder_layers = safe_getattr(args, "decoder_layers", 24) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 1536) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 16) - base_gpt3_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_xl") -def transformer_lm_gpt3_xl(args): - # 1.3B params - args.decoder_layers = safe_getattr(args, "decoder_layers", 24) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 2048) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 32) - base_gpt3_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_2_7") -def transformer_lm_gpt3_2_7(args): - # 2.7B params - args.decoder_layers = safe_getattr(args, "decoder_layers", 32) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 2560) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 32) - base_gpt3_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_6_7") -def transformer_lm_gpt3_6_7(args): - # 6.7B params - args.decoder_layers = safe_getattr(args, "decoder_layers", 32) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 4096) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 32) - base_gpt3_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_13") -def transformer_lm_gpt3_13(args): - # 13B params - args.decoder_layers = safe_getattr(args, "decoder_layers", 40) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 5120) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 40) - base_gpt3_architecture(args) - - -@register_model_architecture("transformer_lm", "transformer_lm_gpt3_175") -def transformer_lm_gpt3_175(args): - # 175B params - args.decoder_layers = safe_getattr(args, "decoder_layers", 96) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 12288) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 96) - base_gpt3_architecture(args) diff --git a/kosmos-g/fairseq/fairseq/models/wav2vec/__init__.py b/kosmos-g/fairseq/fairseq/models/wav2vec/__init__.py deleted file mode 100644 index 06cec1818..000000000 --- a/kosmos-g/fairseq/fairseq/models/wav2vec/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
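The size comments in the `transformer_lm_gpt3_*` ladder above (125M, 350M, ..., 175B) can be sanity-checked with a rough closed-form count: each decoder layer contributes about 12·d² parameters (4·d² for the attention projections plus 8·d² for the 4×-wide feed-forward), with tied embeddings, biases, and layer norms on top. A back-of-the-envelope sketch, assuming a ~50k-token vocabulary for the embedding estimate:

```python
def approx_decoder_params(layers: int, d_model: int) -> float:
    # per layer: 4*d^2 attention projections + 8*d^2 feed-forward (4x width)
    return layers * 12 * d_model ** 2


print(f"{approx_decoder_params(12, 768) / 1e6:.0f}M")    # ~85M, plus ~39M tied embeddings ~= 125M (gpt3_small)
print(f"{approx_decoder_params(96, 12288) / 1e9:.0f}B")  # ~174B ~= 175B (gpt3_175)
```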
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .wav2vec import * # noqa -from .wav2vec2 import * # noqa -from .wav2vec2_asr import * # noqa diff --git a/kosmos-g/fairseq/fairseq/models/wav2vec/utils.py b/kosmos-g/fairseq/fairseq/models/wav2vec/utils.py deleted file mode 100644 index dd52d8624..000000000 --- a/kosmos-g/fairseq/fairseq/models/wav2vec/utils.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -import torch.nn.functional as F - - -def pad_to_multiple(x, multiple, dim=-1, value=0): - # Inspired from https://github.com/lucidrains/local-attention/blob/master/local_attention/local_attention.py#L41 - if x is None: - return None, 0 - tsz = x.size(dim) - m = tsz / multiple - remainder = math.ceil(m) * multiple - tsz - if m.is_integer(): - return x, 0 - pad_offset = (0,) * (-1 - dim) * 2 - - return F.pad(x, (*pad_offset, 0, remainder), value=value), remainder diff --git a/kosmos-g/fairseq/fairseq/models/wav2vec/wav2vec.py b/kosmos-g/fairseq/fairseq/models/wav2vec/wav2vec.py deleted file mode 100644 index af6604da1..000000000 --- a/kosmos-g/fairseq/fairseq/models/wav2vec/wav2vec.py +++ /dev/null @@ -1,630 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field -import logging -import math -from typing import Optional, Tuple -from omegaconf import II -import sys - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.modules import ( - Fp32GroupNorm, - Fp32LayerNorm, - GumbelVectorQuantizer, - KmeansVectorQuantizer, - TransposeLast, -) -from fairseq.tasks import FairseqTask -from fairseq.utils import buffered_arange - - -logger = logging.getLogger(__name__) - - -AGGREGATOR_CHOICES = ChoiceEnum(["cnn", "gru"]) -PROJECT_FEATURES_CHOICES = ChoiceEnum(["none", "same", "new"]) -ACTIVATION_CHOICES = ChoiceEnum(["relu", "gelu"]) -VQ_TYPE_CHOICES = ChoiceEnum(["none", "gumbel", "kmeans"]) - - -@dataclass -class Wav2VecConfig(FairseqDataclass): - prediction_steps: int = field( - default=12, metadata={"help": "number of steps ahead to predict"} - ) - sample_distance: Optional[int] = field( - default=None, - metadata={ - "help": "sample distance from target. 
does not work properly with cross-sampling" - }, - ) - cross_sample_negatives: int = field( - default=0, metadata={"help": "num of cross sampled negatives"} - ) - num_negatives: int = field( - default=10, metadata={"help": "num of sampled negatives"} - ) - conv_feature_layers: str = field( - default="[(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2), (512, 4, 2), (512, 1, 1), (512, 1, 1), (512, 1, 1)]", - metadata={ - "help": "convolutional feature extraction layers [(dim, kernel_size, stride), ...]" - }, - ) - conv_aggregator_layers: str = field( - default="[(512, 2, 1), (512, 3, 1), (512, 4, 1), (512, 5, 1), (512, 6, 1), (512, 7, 1), (512, 8, 1), (512, 9, 1), (512, 10, 1), (512, 11, 1), (512, 12, 1), (512, 13, 1)]", - metadata={ - "help": "convolutional aggregator layers [(dim, kernel_size, stride), ...]" - }, - ) - dropout: float = field( - default=0.0, metadata={"help": "dropout to apply within the model"} - ) - dropout_features: float = field( - default=0.0, metadata={"help": "dropout to apply to the features"} - ) - dropout_agg: float = field( - default=0.0, metadata={"help": "dropout to apply after aggregation step"} - ) - aggregator: AGGREGATOR_CHOICES = field( - default="cnn", metadata={"help": "type of aggregator to use"} - ) - gru_dim: int = field(default=512, metadata={"help": "GRU dimensionality"}) - no_conv_bias: bool = field( - default=False, metadata={"help": "if set, does not learn bias for conv layers"} - ) - agg_zero_pad: bool = field( - default=False, - metadata={"help": "if set, zero pads in aggregator instead of repl pad"}, - ) - skip_connections_feat: bool = field( - default=False, - metadata={"help": "if set, adds skip connections to the feature extractor"}, - ) - skip_connections_agg: bool = field( - default=True, - metadata={"help": "if set, adds skip connections to the aggregator"}, - ) - residual_scale: float = field( - default=0.5, metadata={"help": "scales residual by sqrt(value)"} - ) - log_compression: bool = field( - default=True, - metadata={"help": "if set, adds a log compression to feature extractor"}, - ) - balanced_classes: bool = field( - default=False, - metadata={"help": "if set, loss is scaled to balance for number of negatives"}, - ) - project_features: PROJECT_FEATURES_CHOICES = field( - default="none", - metadata={ - "help": "if not none, features are projected using the (same or new) aggregator" - }, - ) - non_affine_group_norm: bool = field( - default=False, metadata={"help": "if set, group norm is not affine"} - ) - offset: str = field( - default="auto", - metadata={ - "help": "if set to 'auto', it is computed automatically from the receptive field, else set to int value" - }, - ) - activation: ACTIVATION_CHOICES = field( - default="relu", - metadata={ - "help": "if set to 'auto', it is computed automatically from the receptive field, else set to int value" - }, - ) - vq_type: VQ_TYPE_CHOICES = field( - default="none", metadata={"help": "which type of quantizer to use"} - ) - vq_vars: int = field( - default=320, - metadata={"help": "project to this many vector quantized variables per group"}, - ) - vq_groups: int = field( - default=2, metadata={"help": "number of groups of latent variables"} - ) - vq_dim: int = field( - default=0, - metadata={ - "help": "uses this dimensionality for quantized vectors. 
0 to use model dim // groups" - }, - ) - vq_depth: int = field( - default=1, metadata={"help": "number of layers for vq weight projection"} - ) - combine_groups: bool = field( - default=False, metadata={"help": "if set, variables are shared among groups"} - ) - vq_temp: Tuple[float, float, float] = field( - default=(2.0, 0.5, 0.999995), - metadata={ - "help": "temperature for latent variable sampling with gumbel softmax. should be a tuple of 3 values (start, end, decay)" - }, - ) - vq_gamma: float = field( - default=0.25, - metadata={"help": "gamma parameter for kmeans style vector quantization"}, - ) - infonce: bool = II("criterion.infonce") - - -@register_model("wav2vec", dataclass=Wav2VecConfig) -class Wav2VecModel(BaseFairseqModel): - @classmethod - def build_model(cls, cfg: Wav2VecConfig, task: FairseqTask): - """Build a new model instance.""" - - model = Wav2VecModel(cfg) - logger.info(model) - return model - - def __init__(self, cfg: Wav2VecConfig): - super().__init__() - - self.prediction_steps = cfg.prediction_steps - offset = cfg.offset - - if cfg.activation == "relu": - activation = nn.ReLU() - elif cfg.activation == "gelu": - activation = nn.GELU() - else: - raise Exception("unknown activation " + cfg.activation) - - feature_enc_layers = eval(cfg.conv_feature_layers) - self.feature_extractor = ConvFeatureExtractionModel( - conv_layers=feature_enc_layers, - dropout=0.0, - log_compression=cfg.log_compression, - skip_connections=cfg.skip_connections_feat, - residual_scale=cfg.residual_scale, - non_affine_group_norm=cfg.non_affine_group_norm, - activation=activation, - ) - embed = feature_enc_layers[-1][0] - - self.vector_quantizer = None - if cfg.vq_type == "gumbel": - self.vector_quantizer = GumbelVectorQuantizer( - dim=embed, - num_vars=cfg.vq_vars, - temp=cfg.vq_temp, - groups=cfg.vq_groups, - combine_groups=cfg.combine_groups, - vq_dim=cfg.vq_dim if cfg.vq_dim > 0 else embed, - time_first=False, - activation=activation, - weight_proj_depth=cfg.vq_depth, - weight_proj_factor=2, - ) - elif cfg.vq_type == "kmeans": - self.vector_quantizer = KmeansVectorQuantizer( - dim=embed, - num_vars=cfg.vq_vars, - groups=cfg.vq_groups, - combine_groups=cfg.combine_groups, - vq_dim=cfg.vq_dim if cfg.vq_dim > 0 else embed, - time_first=False, - gamma=cfg.vq_gamma, - ) - else: - assert ( - cfg.vq_type == "none" or cfg.vq_type is None - ), "Unknown quantizer type" - - if cfg.offset == "auto": - jin = 0 - rin = 0 - for _, k, stride in feature_enc_layers: - if rin == 0: - rin = k - rin = rin + (k - 1) * jin - if jin == 0: - jin = stride - else: - jin *= stride - offset = math.ceil(rin / jin) - - offset = int(offset) - - def make_aggregator(): - if cfg.aggregator == "cnn": - agg_layers = eval(cfg.conv_aggregator_layers) - agg_dim = agg_layers[-1][0] - feature_aggregator = ConvAggegator( - conv_layers=agg_layers, - embed=embed, - dropout=cfg.dropout, - skip_connections=cfg.skip_connections_agg, - residual_scale=cfg.residual_scale, - non_affine_group_norm=cfg.non_affine_group_norm, - conv_bias=not cfg.no_conv_bias, - zero_pad=cfg.agg_zero_pad, - activation=activation, - ) - elif cfg.aggregator == "gru": - agg_dim = cfg.gru_dim - feature_aggregator = nn.Sequential( - TransposeLast(), - nn.GRU( - input_size=embed, - hidden_size=agg_dim, - num_layers=1, - dropout=cfg.dropout, - ), - TransposeLast(deconstruct_idx=0), - ) - else: - raise Exception("unknown aggregator type " + cfg.aggregator) - - return feature_aggregator, agg_dim - - self.feature_aggregator, agg_dim = make_aggregator() - - 
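The `offset == "auto"` branch above derives how many output frames the prediction task should skip: it accumulates the feature extractor's receptive field (`rin`) and total stride (`jin`) layer by layer, then takes `ceil(rin / jin)`. Reproducing that computation for this config's default `conv_feature_layers`:

```python
import math


def auto_offset(conv_layers):
    jin, rin = 0, 0
    for _dim, k, stride in conv_layers:
        if rin == 0:
            rin = k
        rin += (k - 1) * jin  # receptive field grows by (k - 1) * current stride
        jin = stride if jin == 0 else jin * stride
    return math.ceil(rin / jin)


default_layers = [(512, 10, 5), (512, 8, 4), (512, 4, 2), (512, 4, 2),
                  (512, 4, 2), (512, 1, 1), (512, 1, 1), (512, 1, 1)]
# receptive field of 465 samples at a hop of 160 samples -> offset of 3 frames
assert auto_offset(default_layers) == 3
```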
self.wav2vec_predictions = Wav2VecPredictionsModel( - in_dim=agg_dim, - out_dim=embed, - prediction_steps=cfg.prediction_steps, - n_negatives=cfg.num_negatives, - cross_sample_negatives=cfg.cross_sample_negatives, - sample_distance=cfg.sample_distance, - dropout=cfg.dropout, - offset=offset, - balanced_classes=cfg.balanced_classes, - infonce=cfg.infonce, - ) - - self.dropout_feats = nn.Dropout(p=cfg.dropout_features) - self.dropout_agg = nn.Dropout(p=cfg.dropout_agg) - - if cfg.project_features == "none": - self.project_features = None - elif cfg.project_features == "same": - self.project_features = self.feature_aggregator - elif cfg.project_features == "new": - self.project_features, _ = make_aggregator() - - def forward(self, source): - result = {} - - features = self.feature_extractor(source) - if self.vector_quantizer: - q_res = self.vector_quantizer(features) - features = q_res["x"] - for k in q_res.keys(): - if k != "x": - result[k] = q_res[k] - - x = self.dropout_feats(features) - x = self.feature_aggregator(x) - x = self.dropout_agg(x) - - if self.project_features is not None: - features = self.project_features(features) - x, targets = self.wav2vec_predictions(x, features) - result["cpc_logits"] = x - result["cpc_targets"] = targets - - return result - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - - def max_positions(self): - """Maximum length supported by the model.""" - return sys.maxsize - - def get_logits(self, net_output): - logits = net_output["cpc_logits"] - return logits - - def get_targets(self, sample, net_output): - t = net_output["cpc_targets"] - if isinstance(t, tuple): - t = t[0] - return t.contiguous() - - def get_target_weights(self, targets, net_output): - targets = net_output["cpc_targets"] - if isinstance(targets, tuple) and targets[-1] is not None: - return targets[-1] - return None - - def get_extra_losses(self, net_output): - loss = None - if "prob_perplexity" in net_output: - loss = net_output["num_vars"] - net_output["prob_perplexity"] - elif "kmeans_loss" in net_output: - loss = net_output["kmeans_loss"] - - return loss - - -def norm_block(is_layer_norm, dim, affine=True): - if is_layer_norm: - mod = nn.Sequential( - TransposeLast(), - Fp32LayerNorm(dim, elementwise_affine=affine), - TransposeLast(), - ) - else: - mod = Fp32GroupNorm(1, dim, affine=affine) - - return mod - - -class ConvFeatureExtractionModel(nn.Module): - def __init__( - self, - conv_layers, - dropout, - log_compression, - skip_connections, - residual_scale, - non_affine_group_norm, - activation, - ): - super().__init__() - - def block(n_in, n_out, k, stride): - return nn.Sequential( - nn.Conv1d(n_in, n_out, k, stride=stride, bias=False), - nn.Dropout(p=dropout), - norm_block( - is_layer_norm=False, dim=n_out, affine=not non_affine_group_norm - ), - activation, - ) - - in_d = 1 - self.conv_layers = nn.ModuleList() - for dim, k, stride in conv_layers: - self.conv_layers.append(block(in_d, dim, k, stride)) - in_d = dim - - self.log_compression = log_compression - self.skip_connections = skip_connections - self.residual_scale = math.sqrt(residual_scale) - - def forward(self, x): - # BxT -> BxCxT - x = x.unsqueeze(1) - - for conv in self.conv_layers: - residual = x - x = conv(x) - if self.skip_connections and x.size(1) == residual.size(1): - tsz = x.size(2) - r_tsz = residual.size(2) - residual = residual[..., :: r_tsz // tsz][..., :tsz] - x = (x + residual) * self.residual_scale - - if self.log_compression: - x = x.abs() - x = x 
+ 1 - x = x.log() - - return x - - -class ZeroPad1d(nn.Module): - def __init__(self, pad_left, pad_right): - super().__init__() - self.pad_left = pad_left - self.pad_right = pad_right - - def forward(self, x): - return F.pad(x, (self.pad_left, self.pad_right)) - - -class ConvAggegator(nn.Module): - def __init__( - self, - conv_layers, - embed, - dropout, - skip_connections, - residual_scale, - non_affine_group_norm, - conv_bias, - zero_pad, - activation, - ): - super().__init__() - - def block(n_in, n_out, k, stride): - # padding dims only really make sense for stride = 1 - ka = k // 2 - kb = ka - 1 if k % 2 == 0 else ka - - pad = ( - ZeroPad1d(ka + kb, 0) if zero_pad else nn.ReplicationPad1d((ka + kb, 0)) - ) - - return nn.Sequential( - pad, - nn.Conv1d(n_in, n_out, k, stride=stride, bias=conv_bias), - nn.Dropout(p=dropout), - norm_block(False, n_out, affine=not non_affine_group_norm), - activation, - ) - - in_d = embed - self.conv_layers = nn.ModuleList() - self.residual_proj = nn.ModuleList() - for dim, k, stride in conv_layers: - if in_d != dim and skip_connections: - self.residual_proj.append(nn.Conv1d(in_d, dim, 1, bias=False)) - else: - self.residual_proj.append(None) - - self.conv_layers.append(block(in_d, dim, k, stride)) - in_d = dim - self.conv_layers = nn.Sequential(*self.conv_layers) - self.skip_connections = skip_connections - self.residual_scale = math.sqrt(residual_scale) - - def forward(self, x): - for rproj, conv in zip(self.residual_proj, self.conv_layers): - residual = x - x = conv(x) - if self.skip_connections: - if rproj is not None: - residual = rproj(residual) - x = (x + residual) * self.residual_scale - return x - - -class Wav2VecPredictionsModel(nn.Module): - def __init__( - self, - in_dim, - out_dim, - prediction_steps, - n_negatives, - cross_sample_negatives, - sample_distance, - dropout, - offset, - balanced_classes, - infonce, - ): - super().__init__() - - self.n_negatives = n_negatives - self.cross_sample_negatives = cross_sample_negatives - self.sample_distance = sample_distance - self.project_to_steps = nn.ConvTranspose2d( - in_dim, out_dim, (1, prediction_steps) - ) - self.dropout = nn.Dropout(p=dropout) - self.offset = offset - self.balanced_classes = balanced_classes - self.infonce = infonce - - def sample_negatives(self, y): - bsz, fsz, tsz = y.shape - - y = y.transpose(0, 1) # BCT -> CBT - y = y.contiguous().view(fsz, -1) # CBT => C(BxT) - - cross_high = tsz * bsz - high = tsz if self.sample_distance is None else min(tsz, self.sample_distance) - assert high > 1 - - neg_idxs = torch.randint(low=0, high=high, size=(bsz, self.n_negatives * tsz)) - - with torch.no_grad(): - if self.n_negatives > 0: - tszs = ( - buffered_arange(tsz) - .unsqueeze(-1) - .expand(-1, self.n_negatives) - .flatten() - ) - - neg_idxs = torch.randint( - low=0, high=high - 1, size=(bsz, self.n_negatives * tsz) - ) - neg_idxs[neg_idxs >= tszs] += 1 - - if self.cross_sample_negatives > 0: - tszs = ( - buffered_arange(tsz) - .unsqueeze(-1) - .expand(-1, self.cross_sample_negatives) - .flatten() - ) - - cross_neg_idxs = torch.randint( - low=0, - high=cross_high - 1, - size=(bsz, self.cross_sample_negatives * tsz), - ) - cross_neg_idxs[cross_neg_idxs >= tszs] += 1 - - if self.n_negatives > 0: - for i in range(1, bsz): - neg_idxs[i] += i * high - else: - neg_idxs = cross_neg_idxs - - if self.cross_sample_negatives > 0 and self.n_negatives > 0: - neg_idxs = torch.cat([neg_idxs, cross_neg_idxs], dim=1) - - negs = y[..., neg_idxs.view(-1)] - negs = negs.view( - fsz, bsz, self.n_negatives + 
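The ConvAggegator block above pads only on the left (ka + kb frames of zeros or replicated edge values), which makes each convolution causal: output frame t never depends on inputs past t. A minimal sketch of that padding arithmetic, with channel count and kernel size assumed for illustration:

```python
import torch
import torch.nn as nn

# Minimal sketch of the causal padding above: for kernel size k, pad
# ka + kb = k - 1 frames on the left only, so output frame t never sees
# inputs beyond time t and the sequence length is preserved (stride 1).
k = 4
ka = k // 2                          # 2
kb = ka - 1 if k % 2 == 0 else ka    # 1
causal = nn.Sequential(
    nn.ConstantPad1d((ka + kb, 0), 0.0),  # stand-in for ZeroPad1d above
    nn.Conv1d(8, 8, k),
)
x = torch.randn(2, 8, 50)            # B x C x T
assert causal(x).shape == x.shape
```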
self.cross_sample_negatives, tsz - ).permute( - 2, 1, 0, 3 - ) # to NxBxCxT - - return negs - - def forward(self, x, y): - - x = x.unsqueeze(-1) - x = self.project_to_steps(x) # BxCxTxS - x = self.dropout(x) - - negatives = self.sample_negatives(y) - y = y.unsqueeze(0) - targets = torch.cat([y, negatives], dim=0) # Copies x B x C x T - - copies = targets.size(0) - bsz, dim, tsz, steps = x.shape - steps = min(steps, tsz - self.offset) - - predictions = x.new( - bsz * copies * (tsz - self.offset + 1) * steps - - ((steps + 1) * steps // 2) * copies * bsz - ) - if self.infonce: - labels = predictions.new_full( - (predictions.shape[0] // copies,), 0, dtype=torch.long - ) - else: - labels = torch.zeros_like(predictions) - weights = ( - torch.full_like(labels, 1 / self.n_negatives) - if self.balanced_classes and not self.infonce - else None - ) - - start = end = 0 - for i in range(steps): - offset = i + self.offset - end = start + (tsz - offset) * bsz * copies - if self.infonce: - predictions[start:end] = torch.einsum( - "bct,nbct->tbn", x[..., :-offset, i], targets[..., offset:] - ).flatten() - else: - pos_num = (end - start) // copies - predictions[start:end] = torch.einsum( - "bct,nbct->nbt", x[..., :-offset, i], targets[..., offset:] - ).flatten() - labels[start : start + pos_num] = 1.0 - if weights is not None: - weights[start : start + pos_num] = 1.0 - start = end - assert end == predictions.numel(), "{} != {}".format(end, predictions.numel()) - - if self.infonce: - predictions = predictions.view(-1, copies) - else: - if weights is not None: - labels = (labels, weights) - - return predictions, labels diff --git a/kosmos-g/fairseq/fairseq/models/wav2vec/wav2vec2.py b/kosmos-g/fairseq/fairseq/models/wav2vec/wav2vec2.py deleted file mode 100644 index 8f41b60e0..000000000 --- a/kosmos-g/fairseq/fairseq/models/wav2vec/wav2vec2.py +++ /dev/null @@ -1,1199 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from dataclasses import dataclass, field -from typing import List, Tuple - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.data.data_utils import compute_mask_indices -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model -from fairseq.modules import ( - Fp32GroupNorm, - Fp32LayerNorm, - GradMultiply, - GumbelVectorQuantizer, - LayerNorm, - MultiheadAttention, - SamePad, - TransposeLast, -) -from fairseq.modules.checkpoint_activations import checkpoint_wrapper -from fairseq.modules.transformer_sentence_encoder import init_bert_params -from fairseq.utils import buffered_arange, index_put, is_xla_tensor -from fairseq.distributed import fsdp_wrap -from fairseq.modules.conformer_layer import ConformerWav2Vec2EncoderLayer -from fairseq.modules import RelPositionalEncoding -from .utils import pad_to_multiple - -EXTRACTOR_MODE_CHOICES = ChoiceEnum(["default", "layer_norm"]) -MASKING_DISTRIBUTION_CHOICES = ChoiceEnum(["static", "uniform", "normal", "poisson"]) -LAYER_TYPE_CHOICES = ChoiceEnum(["transformer", "conformer"]) - - -@dataclass -class Wav2Vec2Config(FairseqDataclass): - extractor_mode: EXTRACTOR_MODE_CHOICES = field( - default="default", - metadata={ - "help": "mode for feature extractor. 
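The prediction head above projects each aggregator frame into `prediction_steps` step-ahead predictions via a `ConvTranspose2d`, then scores them against future targets with the einsums shown. A hedged, self-contained sketch of the InfoNCE branch for a single step offset (all sizes assumed):

```python
import torch
import torch.nn.functional as F

# x is B x C x T x S after project_to_steps; targets is N x B x C x T with
# the positive stacked at row 0 ahead of N - 1 sampled negatives.
B, C, T, S, N = 2, 16, 20, 3, 5
x = torch.randn(B, C, T, S)
targets = torch.randn(N, B, C, T)
i, offset = 0, 1                     # first prediction step, model offset 1
step = i + offset
# dot product between the prediction at time t and each candidate at t + step
logits = torch.einsum("bct,nbct->tbn", x[..., :-step, i], targets[..., step:])
# infonce=True: the positive occupies row 0, so every (t, b) label is class 0
labels = torch.zeros(logits.shape[0] * logits.shape[1], dtype=torch.long)
loss = F.cross_entropy(logits.reshape(-1, N), labels)
```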
default has a single group norm with d " - "groups in the first conv block, whereas layer_norm has layer norms in " - "every block (meant to use with normalize=True)" - }, - ) - encoder_layers: int = field( - default=12, metadata={"help": "num encoder layers in the transformer"} - ) - encoder_embed_dim: int = field( - default=768, metadata={"help": "encoder embedding dimension"} - ) - encoder_ffn_embed_dim: int = field( - default=3072, metadata={"help": "encoder embedding dimension for FFN"} - ) - encoder_attention_heads: int = field( - default=12, metadata={"help": "num encoder attention heads"} - ) - activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field( - default="gelu", metadata={"help": "activation function to use"} - ) - layer_type: LAYER_TYPE_CHOICES = field( - default="transformer", metadata={"help": "layer type in encoder"} - ) - # dropouts - dropout: float = field( - default=0.1, metadata={"help": "dropout probability for the transformer"} - ) - attention_dropout: float = field( - default=0.1, metadata={"help": "dropout probability for attention weights"} - ) - activation_dropout: float = field( - default=0.0, metadata={"help": "dropout probability after activation in FFN"} - ) - encoder_layerdrop: float = field( - default=0.0, metadata={"help": "probability of dropping a transformer layer"} - ) - dropout_input: float = field( - default=0.0, - metadata={"help": "dropout to apply to the input (after feat extr)"}, - ) - dropout_features: float = field( - default=0.0, - metadata={"help": "dropout to apply to the features (after feat extr)"}, - ) - - final_dim: int = field( - default=0, - metadata={ - "help": "project final representations and targets to this many dimensions. " - "set to encoder_embed_dim if <= 0" - }, - ) - layer_norm_first: bool = field( - default=False, metadata={"help": "apply layernorm first in the transformer"} - ) - conv_feature_layers: str = field( - default="[(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512,2,2)] + [(512,2,2)]", - metadata={ - "help": "string describing convolutional feature extraction layers in form of a python list that contains " - "[(dim, kernel_size, stride), ...]" - }, - ) - conv_bias: bool = field( - default=False, metadata={"help": "include bias in conv encoder"} - ) - logit_temp: float = field( - default=0.1, metadata={"help": "temperature to divide logits by"} - ) - quantize_targets: bool = field( - default=False, metadata={"help": "use quantized targets"} - ) - quantize_input: bool = field( - default=False, metadata={"help": "use quantized inputs"} - ) - same_quantizer: bool = field( - default=False, metadata={"help": "use same quantizer for inputs and targets"} - ) - target_glu: bool = field( - default=False, metadata={"help": "adds projection + glu to targets"} - ) - feature_grad_mult: float = field( - default=1.0, metadata={"help": "multiply feature extractor var grads by this"} - ) - quantizer_depth: int = field( - default=1, - metadata={"help": "number of quantizer layers"}, - ) - quantizer_factor: int = field( - default=3, - metadata={ - "help": "dimensionality increase for inner quantizer layers (if depth > 1)" - }, - ) - latent_vars: int = field( - default=320, - metadata={"help": "number of latent variables V in each group of the codebook"}, - ) - latent_groups: int = field( - default=2, - metadata={"help": "number of groups G of latent variables in the codebook"}, - ) - latent_dim: int = field( - default=0, - metadata={ - "help": "if > 0, uses this dimensionality for latent variables. 
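The `conv_feature_layers` string above is `eval`-ed into a list of `(dim, kernel_size, stride)` tuples; the strides compound, so the default stack downsamples raw 16 kHz audio by 320x (a 20 ms hop). A small sketch of that bookkeeping, mirroring the length formula used by `_get_feat_extract_output_lengths` later in this file:

```python
# The string is a Python literal expression, so the model eval()s it.
layers = eval("[(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512,2,2)] + [(512,2,2)]")

def out_len(n: int) -> int:
    # floor((n - kernel) / stride + 1) per conv, same as the model's helper
    for _, k, s in layers:
        n = (n - k) // s + 1
    return n

print(out_len(16000))  # 49 frames for one second of 16 kHz audio (~20 ms hop)
```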
" - "otherwise uses final_dim / latent_groups" - }, - ) - - # masking - mask_length: int = field(default=10, metadata={"help": "mask length"}) - mask_prob: float = field( - default=0.65, metadata={"help": "probability of replacing a token with mask"} - ) - mask_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", metadata={"help": "how to choose mask length"} - ) - mask_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument (used for more complex distributions), " - "see help in compute_mask_indices" - }, - ) - no_mask_overlap: bool = field( - default=False, metadata={"help": "whether to allow masks to overlap"} - ) - mask_min_space: int = field( - default=1, - metadata={"help": "min space between spans (if no overlap is enabled)"}, - ) - - # channel masking - mask_channel_length: int = field( - default=10, metadata={"help": "length of the mask for features (channels)"} - ) - mask_channel_prob: float = field( - default=0.0, metadata={"help": "probability of replacing a feature with 0"} - ) - mask_channel_before: bool = False - mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", - metadata={"help": "how to choose mask length for channel masking"}, - ) - mask_channel_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument (used for more complex distributions), " - "see help in compute_mask_indicesh" - }, - ) - no_mask_channel_overlap: bool = field( - default=False, metadata={"help": "whether to allow channel masks to overlap"} - ) - mask_channel_min_space: int = field( - default=1, - metadata={"help": "min space between spans (if no overlap is enabled)"}, - ) - - # negative selection - num_negatives: int = field( - default=100, - metadata={"help": "number of negative examples from the same sample"}, - ) - negatives_from_everywhere: bool = field( - default=False, - metadata={"help": "sample negatives from everywhere, not just masked states"}, - ) - cross_sample_negatives: int = field( - default=0, metadata={"help": "number of negative examples from the any sample"} - ) - codebook_negatives: int = field( - default=0, metadata={"help": "number of negative examples codebook"} - ) - - # positional embeddings - conv_pos: int = field( - default=128, - metadata={"help": "number of filters for convolutional positional embeddings"}, - ) - conv_pos_groups: int = field( - default=16, - metadata={"help": "number of groups for convolutional positional embedding"}, - ) - - latent_temp: Tuple[float, float, float] = field( - default=(2, 0.5, 0.999995), - metadata={ - "help": "temperature for latent variable sampling. 
" - "can be tuple of 3 values (start, end, decay)" - }, - ) - max_positions: int = field(default=100000, metadata={"help": "Max positions"}) - checkpoint_activations: bool = field( - default=False, - metadata={"help": "recompute activations and save memory for extra compute"}, - ) - - # FP16 optimization - required_seq_len_multiple: int = field( - default=1, - metadata={ - "help": "pad the input to encoder such that the sequence length is divisible by multiple" - }, - ) - crop_seq_to_multiple: int = field( - default=1, - metadata={ - "help": "crop convolutional feature extractor output such that the sequence length is divisible by multiple" - }, - ) - - # Conformer - depthwise_conv_kernel_size: int = field( - default=31, - metadata={ - "help": "depthwise-conv-kernel-size for convolution in conformer layer" - }, - ) - attn_type: str = field( - default="", - metadata={"help": "if espnet use ESPNET MHA"}, - ) - pos_enc_type: str = field( - default="abs", - metadata={"help": "Positional encoding type to use in conformer"}, - ) - fp16: bool = field(default=False, metadata={"help": "If fp16 is being used"}) - - -@register_model("wav2vec2", dataclass=Wav2Vec2Config) -class Wav2Vec2Model(BaseFairseqModel): - def __init__(self, cfg: Wav2Vec2Config): - super().__init__() - self.cfg = cfg - - feature_enc_layers = eval(cfg.conv_feature_layers) - self.embed = feature_enc_layers[-1][0] - - self.feature_extractor = ConvFeatureExtractionModel( - conv_layers=feature_enc_layers, - dropout=0.0, - mode=cfg.extractor_mode, - conv_bias=cfg.conv_bias, - ) - - self.post_extract_proj = ( - nn.Linear(self.embed, cfg.encoder_embed_dim) - if self.embed != cfg.encoder_embed_dim and not cfg.quantize_input - else None - ) - - self.crop_seq_to_multiple = cfg.crop_seq_to_multiple - - self.mask_prob = cfg.mask_prob - self.mask_selection = cfg.mask_selection - self.mask_other = cfg.mask_other - self.mask_length = cfg.mask_length - self.no_mask_overlap = cfg.no_mask_overlap - self.mask_min_space = cfg.mask_min_space - - self.mask_channel_prob = cfg.mask_channel_prob - self.mask_channel_before = cfg.mask_channel_before - self.mask_channel_selection = cfg.mask_channel_selection - self.mask_channel_other = cfg.mask_channel_other - self.mask_channel_length = cfg.mask_channel_length - self.no_mask_channel_overlap = cfg.no_mask_channel_overlap - self.mask_channel_min_space = cfg.mask_channel_min_space - - self.dropout_input = nn.Dropout(cfg.dropout_input) - self.dropout_features = nn.Dropout(cfg.dropout_features) - - self.feature_grad_mult = cfg.feature_grad_mult - - self.quantizer = None - self.input_quantizer = None - - self.n_negatives = cfg.num_negatives - self.cross_sample_negatives = cfg.cross_sample_negatives - self.codebook_negatives = cfg.codebook_negatives - self.negatives_from_everywhere = cfg.negatives_from_everywhere - - self.logit_temp = cfg.logit_temp - - final_dim = cfg.final_dim if cfg.final_dim > 0 else cfg.encoder_embed_dim - - if cfg.quantize_targets: - vq_dim = cfg.latent_dim if cfg.latent_dim > 0 else final_dim - self.quantizer = GumbelVectorQuantizer( - dim=self.embed, - num_vars=cfg.latent_vars, - temp=cfg.latent_temp, - groups=cfg.latent_groups, - combine_groups=False, - vq_dim=vq_dim, - time_first=True, - weight_proj_depth=cfg.quantizer_depth, - weight_proj_factor=cfg.quantizer_factor, - ) - self.project_q = nn.Linear(vq_dim, final_dim) - else: - self.project_q = nn.Linear(self.embed, final_dim) - - if cfg.quantize_input: - if cfg.same_quantizer and self.quantizer is not None: - vq_dim = final_dim - 
self.input_quantizer = self.quantizer - else: - vq_dim = cfg.latent_dim if cfg.latent_dim > 0 else cfg.encoder_embed_dim - self.input_quantizer = GumbelVectorQuantizer( - dim=self.embed, - num_vars=cfg.latent_vars, - temp=cfg.latent_temp, - groups=cfg.latent_groups, - combine_groups=False, - vq_dim=vq_dim, - time_first=True, - weight_proj_depth=cfg.quantizer_depth, - weight_proj_factor=cfg.quantizer_factor, - ) - self.project_inp = nn.Linear(vq_dim, cfg.encoder_embed_dim) - - self.mask_emb = nn.Parameter( - torch.FloatTensor(cfg.encoder_embed_dim).uniform_() - ) - encoder_cls = TransformerEncoder - if cfg.layer_type == "conformer" and cfg.pos_enc_type in ["rel_pos", "rope"]: - encoder_cls = ConformerEncoder - - self.encoder = encoder_cls(cfg) - self.layer_norm = LayerNorm(self.embed) - - self.target_glu = None - if cfg.target_glu: - self.target_glu = nn.Sequential( - nn.Linear(final_dim, final_dim * 2), nn.GLU() - ) - - self.final_proj = nn.Linear(cfg.encoder_embed_dim, final_dim) - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - """Upgrade a (possibly old) state dict for new versions of fairseq.""" - return state_dict - - @classmethod - def build_model(cls, cfg: Wav2Vec2Config, task=None): - """Build a new model instance.""" - - return cls(cfg) - - def apply_mask( - self, - x, - padding_mask, - mask_indices=None, - mask_channel_indices=None, - ): - B, T, C = x.shape - - if self.mask_channel_prob > 0 and self.mask_channel_before: - mask_channel_indices = compute_mask_indices( - (B, C), - None, - self.mask_channel_prob, - self.mask_channel_length, - self.mask_channel_selection, - self.mask_channel_other, - no_overlap=self.no_mask_channel_overlap, - min_space=self.mask_channel_min_space, - ) - mask_channel_indices = ( - torch.from_numpy(mask_channel_indices) - .to(x.device) - .unsqueeze(1) - .expand(-1, T, -1) - ) - x[mask_channel_indices] = 0 - - if self.mask_prob > 0: - if mask_indices is None: - mask_indices = compute_mask_indices( - (B, T), - padding_mask, - self.mask_prob, - self.mask_length, - self.mask_selection, - self.mask_other, - min_masks=2, - no_overlap=self.no_mask_overlap, - min_space=self.mask_min_space, - ) - mask_indices = torch.from_numpy(mask_indices).to(x.device) - x = index_put(x, mask_indices, self.mask_emb) - else: - mask_indices = None - - if self.mask_channel_prob > 0 and not self.mask_channel_before: - if mask_channel_indices is None: - mask_channel_indices = compute_mask_indices( - (B, C), - None, - self.mask_channel_prob, - self.mask_channel_length, - self.mask_channel_selection, - self.mask_channel_other, - no_overlap=self.no_mask_channel_overlap, - min_space=self.mask_channel_min_space, - ) - mask_channel_indices = ( - torch.from_numpy(mask_channel_indices) - .to(x.device) - .unsqueeze(1) - .expand(-1, T, -1) - ) - x = index_put(x, mask_channel_indices, 0) - - return x, mask_indices - - def sample_negatives(self, y, num, padding_count=None): - - if self.n_negatives == 0 and self.cross_sample_negatives == 0: - return y.new(0) - - bsz, tsz, fsz = y.shape - y = y.view(-1, fsz) # BTC => (BxT)C - - # FIXME: what happens if padding_count is specified? 
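In `apply_mask` above, masked timesteps are not zeroed but overwritten with the single learned vector `mask_emb`. A minimal sketch of that replacement (shapes assumed; off XLA, `index_put` reduces to plain boolean indexing):

```python
import torch
import torch.nn as nn

# Sketch: every masked frame is replaced by one learned embedding.
B, T, C = 2, 12, 8
x = torch.randn(B, T, C)
mask_emb = nn.Parameter(torch.FloatTensor(C).uniform_())
mask_indices = torch.rand(B, T) < 0.5   # stand-in for compute_mask_indices
x[mask_indices] = mask_emb              # what index_put(x, mask_indices, mask_emb) does
```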
- cross_high = tsz * bsz - high = tsz - (padding_count or 0) - with torch.no_grad(): - assert high > 1, f"{bsz,tsz,fsz}" - - if self.n_negatives > 0: - tszs = ( - buffered_arange(num) - .unsqueeze(-1) - .expand(-1, self.n_negatives) - .flatten() - ) - - neg_idxs = torch.randint( - low=0, high=high - 1, size=(bsz, self.n_negatives * num) - ) - neg_idxs[neg_idxs >= tszs] += 1 - - if self.cross_sample_negatives > 0: - tszs = ( - buffered_arange(num) - .unsqueeze(-1) - .expand(-1, self.cross_sample_negatives) - .flatten() - ) - - cross_neg_idxs = torch.randint( - low=0, - high=cross_high - 1, - size=(bsz, self.cross_sample_negatives * num), - ) - cross_neg_idxs[cross_neg_idxs >= tszs] += 1 - - if self.n_negatives > 0: - neg_idxs = neg_idxs + (torch.arange(bsz).unsqueeze(1) * high) - else: - neg_idxs = cross_neg_idxs - - if self.cross_sample_negatives > 0 and self.n_negatives > 0: - neg_idxs = torch.cat([neg_idxs, cross_neg_idxs], dim=1) - - negs = y[neg_idxs.view(-1)] - negs = negs.view( - bsz, num, self.n_negatives + self.cross_sample_negatives, fsz - ).permute( - 2, 0, 1, 3 - ) # to NxBxTxC - return negs, neg_idxs - - def compute_preds(self, x, y, negatives): - - neg_is_pos = (y == negatives).all(-1) - y = y.unsqueeze(0) - targets = torch.cat([y, negatives], dim=0) - - logits = torch.cosine_similarity(x.float(), targets.float(), dim=-1).type_as(x) - - logits = logits / self.logit_temp - - if is_xla_tensor(logits) or neg_is_pos.any(): - fillval = -float(2 ** 30) - if not hasattr(self, "_inftensor"): - self._inftensor = ( - torch.tensor(fillval).to(x.device) - if is_xla_tensor(logits) - else float("-inf") - ) - logits[1:] = index_put(logits[1:], neg_is_pos, self._inftensor) - - return logits - - def _get_feat_extract_output_lengths(self, input_lengths: torch.LongTensor): - """ - Computes the output length of the convolutional layers - """ - - def _conv_out_length(input_length, kernel_size, stride): - return torch.floor((input_length - kernel_size) / stride + 1) - - conv_cfg_list = eval(self.cfg.conv_feature_layers) - - for i in range(len(conv_cfg_list)): - input_lengths = _conv_out_length( - input_lengths, conv_cfg_list[i][1], conv_cfg_list[i][2] - ) - - return input_lengths.to(torch.long) - - def forward( - self, - source, - padding_mask=None, - mask=True, - features_only=False, - layer=None, - mask_indices=None, - mask_channel_indices=None, - padding_count=None, - ): - - if self.feature_grad_mult > 0: - features = self.feature_extractor(source) - if self.feature_grad_mult != 1.0: - features = GradMultiply.apply(features, self.feature_grad_mult) - else: - with torch.no_grad(): - features = self.feature_extractor(source) - - features_pen = features.float().pow(2).mean() - - features = features.transpose(1, 2) - features = self.layer_norm(features) - unmasked_features = features.clone() - - if padding_mask is not None and padding_mask.any(): - input_lengths = (1 - padding_mask.long()).sum(-1) - # apply conv formula to get real output_lengths - output_lengths = self._get_feat_extract_output_lengths(input_lengths) - - padding_mask = torch.zeros( - features.shape[:2], dtype=features.dtype, device=features.device - ) - - # these two operations makes sure that all values - # before the output lengths indices are attended to - padding_mask[ - ( - torch.arange(padding_mask.shape[0], device=padding_mask.device), - output_lengths - 1, - ) - ] = 1 - padding_mask = (1 - padding_mask.flip([-1]).cumsum(-1).flip([-1])).bool() - else: - padding_mask = None - - time_steps_to_drop = features.size(1) % 
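`sample_negatives` above never draws the positive's own timestep, using a small trick instead of rejection sampling: draw uniformly from `high - 1` values, then shift any draw at or above the positive index up by one. Isolated sketch:

```python
import torch

# Sketch of the "randint(high - 1) then shift" trick above: a uniform draw
# over [0, high) that excludes each positive's own index, vectorized.
high, tsz, n_neg = 100, 100, 10
tszs = torch.arange(tsz).unsqueeze(-1).expand(-1, n_neg).flatten()
neg_idxs = torch.randint(low=0, high=high - 1, size=(tsz * n_neg,))
neg_idxs[neg_idxs >= tszs] += 1   # skip over the positive index
assert not (neg_idxs == tszs).any()
```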
self.crop_seq_to_multiple - if time_steps_to_drop != 0: - features = features[:, :-time_steps_to_drop] - unmasked_features = unmasked_features[:, :-time_steps_to_drop] - if padding_mask is not None: - padding_mask = padding_mask[:, :-time_steps_to_drop] - - if self.post_extract_proj is not None: - features = self.post_extract_proj(features) - - features = self.dropout_input(features) - unmasked_features = self.dropout_features(unmasked_features) - - num_vars = None - code_ppl = None - prob_ppl = None - curr_temp = None - - if self.input_quantizer: - q = self.input_quantizer(features, produce_targets=False) - features = q["x"] - num_vars = q["num_vars"] - code_ppl = q["code_perplexity"] - prob_ppl = q["prob_perplexity"] - curr_temp = q["temp"] - features = self.project_inp(features) - - if mask: - x, mask_indices = self.apply_mask( - features, - padding_mask, - mask_indices=mask_indices, - mask_channel_indices=mask_channel_indices, - ) - if not is_xla_tensor(x) and mask_indices is not None: - # tpu-comment: reducing the size in a dynamic way causes - # too many recompilations on xla. - y = unmasked_features[mask_indices].view( - unmasked_features.size(0), -1, unmasked_features.size(-1) - ) - else: - y = unmasked_features - else: - x = features - y = unmasked_features - mask_indices = None - - x, layer_results = self.encoder(x, padding_mask=padding_mask, layer=layer) - - if features_only: - return { - "x": x, - "padding_mask": padding_mask, - "features": unmasked_features, - "layer_results": layer_results, - } - - if self.quantizer: - q = self.quantizer(y, produce_targets=False) - y = q["x"] - num_vars = q["num_vars"] - code_ppl = q["code_perplexity"] - prob_ppl = q["prob_perplexity"] - curr_temp = q["temp"] - - y = self.project_q(y) - - if self.negatives_from_everywhere: - neg_cands = self.quantizer(unmasked_features, produce_targets=False)[ - "x" - ] - negs, _ = self.sample_negatives( - neg_cands, - y.size(1), - padding_count=padding_count, - ) - negs = self.project_q(negs) - - else: - negs, _ = self.sample_negatives( - y, - y.size(1), - padding_count=padding_count, - ) - - if self.codebook_negatives > 0: - cb_negs = self.quantizer.sample_from_codebook( - y.size(0) * y.size(1), self.codebook_negatives - ) - cb_negs = cb_negs.view( - self.codebook_negatives, y.size(0), y.size(1), -1 - ) # order doesnt matter - cb_negs = self.project_q(cb_negs) - negs = torch.cat([negs, cb_negs], dim=0) - else: - y = self.project_q(y) - - if self.negatives_from_everywhere: - negs, _ = self.sample_negatives( - unmasked_features, - y.size(1), - padding_count=padding_count, - ) - negs = self.project_q(negs) - else: - negs, _ = self.sample_negatives( - y, - y.size(1), - padding_count=padding_count, - ) - - if not is_xla_tensor(x): - # tpu-comment: reducing the size in a dynamic way causes - # too many recompilations on xla. 
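`compute_preds` above stacks the positive in front of the negatives, so the contrastive label is always class 0; logits are cosine similarities divided by `logit_temp`. A condensed sketch with assumed shapes:

```python
import torch
import torch.nn.functional as F

# Row 0 of the candidate stack is the true quantized target.
B, T, C, N = 2, 6, 16, 4
x = torch.randn(B, T, C)             # context vectors at masked positions
y = torch.randn(B, T, C)             # positives
negs = torch.randn(N, B, T, C)       # sampled distractors
targets = torch.cat([y.unsqueeze(0), negs], dim=0)   # (N+1) x B x T x C
logits = torch.cosine_similarity(x.float(), targets.float(), dim=-1) / 0.1  # 0.1 = logit_temp
loss = F.cross_entropy(logits.permute(1, 2, 0).reshape(-1, N + 1),
                       torch.zeros(B * T, dtype=torch.long))
```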
- x = x[mask_indices].view(x.size(0), -1, x.size(-1)) - - if self.target_glu: - y = self.target_glu(y) - negs = self.target_glu(negs) - - x = self.final_proj(x) - x = self.compute_preds(x, y, negs) - - result = { - "x": x, - "padding_mask": padding_mask, - "features_pen": features_pen, - } - - if prob_ppl is not None: - result["prob_perplexity"] = prob_ppl - result["code_perplexity"] = code_ppl - result["num_vars"] = num_vars - result["temp"] = curr_temp - - return result - - def quantize(self, x): - assert self.quantizer is not None - x = self.feature_extractor(x) - x = x.transpose(1, 2) - x = self.layer_norm(x) - return self.quantizer.forward_idx(x) - - def extract_features(self, source, padding_mask, mask=False, layer=None): - res = self.forward( - source, padding_mask, mask=mask, features_only=True, layer=layer - ) - return res - - def get_logits(self, net_output): - logits = net_output["x"] - logits = logits.transpose(0, 2) - logits = logits.reshape(-1, logits.size(-1)) - return logits - - def get_targets(self, sample, net_output, expand_steps=True): - x = net_output["x"] - return x.new_zeros(x.size(1) * x.size(2), dtype=torch.long) - - def get_extra_losses(self, net_output): - pen = [] - - if "prob_perplexity" in net_output: - pen.append( - (net_output["num_vars"] - net_output["prob_perplexity"]) - / net_output["num_vars"] - ) - - if "features_pen" in net_output: - pen.append(net_output["features_pen"]) - - return pen - - def remove_pretraining_modules(self): - self.quantizer = None - self.project_q = None - self.target_glu = None - self.final_proj = None - - -class ConvFeatureExtractionModel(nn.Module): - def __init__( - self, - conv_layers: List[Tuple[int, int, int]], - dropout: float = 0.0, - mode: str = "default", - conv_bias: bool = False, - ): - super().__init__() - - assert mode in {"default", "layer_norm"} - - def block( - n_in, - n_out, - k, - stride, - is_layer_norm=False, - is_group_norm=False, - conv_bias=False, - ): - def make_conv(): - conv = nn.Conv1d(n_in, n_out, k, stride=stride, bias=conv_bias) - nn.init.kaiming_normal_(conv.weight) - return conv - - assert ( - is_layer_norm and is_group_norm - ) == False, "layer norm and group norm are exclusive" - - if is_layer_norm: - return nn.Sequential( - make_conv(), - nn.Dropout(p=dropout), - nn.Sequential( - TransposeLast(), - Fp32LayerNorm(dim, elementwise_affine=True), - TransposeLast(), - ), - nn.GELU(), - ) - elif is_group_norm: - return nn.Sequential( - make_conv(), - nn.Dropout(p=dropout), - Fp32GroupNorm(dim, dim, affine=True), - nn.GELU(), - ) - else: - return nn.Sequential(make_conv(), nn.Dropout(p=dropout), nn.GELU()) - - in_d = 1 - self.conv_layers = nn.ModuleList() - for i, cl in enumerate(conv_layers): - assert len(cl) == 3, "invalid conv definition: " + str(cl) - (dim, k, stride) = cl - - self.conv_layers.append( - block( - in_d, - dim, - k, - stride, - is_layer_norm=mode == "layer_norm", - is_group_norm=mode == "default" and i == 0, - conv_bias=conv_bias, - ) - ) - in_d = dim - - def forward(self, x): - - # BxT -> BxCxT - x = x.unsqueeze(1) - - for conv in self.conv_layers: - x = conv(x) - - return x - - -class TransformerEncoder(nn.Module): - def build_encoder_layer(self, args): - if args.layer_type == "transformer": - layer = TransformerSentenceEncoderLayer( - embedding_dim=self.embedding_dim, - ffn_embedding_dim=args.encoder_ffn_embed_dim, - num_attention_heads=args.encoder_attention_heads, - dropout=self.dropout, - attention_dropout=args.attention_dropout, - activation_dropout=args.activation_dropout, - 
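`get_extra_losses` above exposes the codebook diversity penalty: `prob_perplexity` is the exponentiated entropy of the average codeword distribution, so the penalty approaches 0 as usage evens out across the `num_vars` entries. Sketch with a random stand-in distribution:

```python
import torch

# Sketch of the diversity penalty (num_vars - prob_perplexity) / num_vars.
num_vars = 320
avg_probs = torch.softmax(torch.randn(num_vars), dim=-1)  # stand-in for the batch mean
prob_perplexity = torch.exp(-(avg_probs * torch.log(avg_probs + 1e-7)).sum())
diversity_loss = (num_vars - prob_perplexity) / num_vars
print(float(diversity_loss))  # near 0 only when codeword usage is close to uniform
```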
activation_fn=args.activation_fn, - layer_norm_first=args.layer_norm_first, - ) - elif args.layer_type == "conformer": - layer = ConformerWav2Vec2EncoderLayer( - embed_dim=self.embedding_dim, - ffn_embed_dim=args.encoder_ffn_embed_dim, - attention_heads=args.encoder_attention_heads, - dropout=args.dropout, - depthwise_conv_kernel_size=args.depthwise_conv_kernel_size, - activation_fn="swish", - attn_type=args.attn_type, - use_fp16=args.fp16, - pos_enc_type="abs", - ) - layer = fsdp_wrap(layer) - if args.checkpoint_activations: - layer = checkpoint_wrapper(layer) - return layer - - def __init__(self, args): - super().__init__() - - self.dropout = args.dropout - self.embedding_dim = args.encoder_embed_dim - self.required_seq_len_multiple = args.required_seq_len_multiple - - self.pos_conv = nn.Conv1d( - self.embedding_dim, - self.embedding_dim, - kernel_size=args.conv_pos, - padding=args.conv_pos // 2, - groups=args.conv_pos_groups, - ) - dropout = 0 - std = math.sqrt((4 * (1.0 - dropout)) / (args.conv_pos * self.embedding_dim)) - nn.init.normal_(self.pos_conv.weight, mean=0, std=std) - nn.init.constant_(self.pos_conv.bias, 0) - - self.pos_conv = nn.utils.weight_norm(self.pos_conv, name="weight", dim=2) - self.pos_conv = nn.Sequential(self.pos_conv, SamePad(args.conv_pos), nn.GELU()) - - self.layers = nn.ModuleList( - [self.build_encoder_layer(args) for _ in range(args.encoder_layers)] - ) - self.layer_norm_first = args.layer_norm_first - self.layer_norm = LayerNorm(self.embedding_dim) - self.layerdrop = args.encoder_layerdrop - - self.apply(init_bert_params) - - def forward(self, x, padding_mask=None, layer=None): - x, layer_results = self.extract_features(x, padding_mask, layer) - - if self.layer_norm_first and layer is None: - x = self.layer_norm(x) - - return x, layer_results - - def extract_features(self, x, padding_mask=None, tgt_layer=None): - - if padding_mask is not None: - x = index_put(x, padding_mask, 0) - - x_conv = self.pos_conv(x.transpose(1, 2)) - x_conv = x_conv.transpose(1, 2) - x = x + x_conv - - if not self.layer_norm_first: - x = self.layer_norm(x) - - # pad to the sequence length dimension - x, pad_length = pad_to_multiple( - x, self.required_seq_len_multiple, dim=-2, value=0 - ) - if pad_length > 0 and padding_mask is None: - padding_mask = x.new_zeros((x.size(0), x.size(1)), dtype=torch.bool) - padding_mask[:, -pad_length:] = True - else: - padding_mask, _ = pad_to_multiple( - padding_mask, self.required_seq_len_multiple, dim=-1, value=True - ) - x = F.dropout(x, p=self.dropout, training=self.training) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - layer_results = [] - r = None - for i, layer in enumerate(self.layers): - dropout_probability = np.random.random() - if not self.training or (dropout_probability > self.layerdrop): - x, z = layer(x, self_attn_padding_mask=padding_mask, need_weights=False) - if tgt_layer is not None: - # unpad if needed - if pad_length > 0: - layer_results.append( - ( - x[:-pad_length], - z[:, :-pad_length, :-pad_length] - if z is not None - else z, - ) - ) - else: - layer_results.append((x, z)) - if i == tgt_layer: - r = x - break - - if r is not None: - x = r - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - # undo paddding - if pad_length > 0: - x = x[:, :-pad_length] - - return x, layer_results - - def max_positions(self): - """Maximum output length supported by the encoder.""" - return self.args.max_positions - - def upgrade_state_dict_named(self, state_dict, name): - """Upgrade a (possibly old) state dict for new versions 
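The `pos_conv` built in `__init__` above is a wide, grouped, weight-normalized `Conv1d` whose output is added to the input as a relative positional embedding. A minimal sketch (`SamePad` is approximated by slicing off the extra frame an even kernel produces):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of the convolutional positional embedding, constants from the config.
embed_dim, conv_pos, groups = 768, 128, 16
pos_conv = nn.Conv1d(embed_dim, embed_dim, conv_pos, padding=conv_pos // 2, groups=groups)
nn.init.normal_(pos_conv.weight, mean=0, std=math.sqrt(4 / (conv_pos * embed_dim)))
nn.init.constant_(pos_conv.bias, 0)
pos_conv = nn.utils.weight_norm(pos_conv, name="weight", dim=2)

x = torch.randn(2, 50, embed_dim)            # B x T x C
pe = pos_conv(x.transpose(1, 2))[..., :-1]   # even kernel -> trim one frame (SamePad)
x = x + F.gelu(pe).transpose(1, 2)           # add as positional information
```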
of fairseq.""" - return state_dict - - -class ConformerEncoder(TransformerEncoder): - def build_encoder_layer(self, args): - layer = ConformerWav2Vec2EncoderLayer( - embed_dim=self.embedding_dim, - ffn_embed_dim=args.encoder_ffn_embed_dim, - attention_heads=args.encoder_attention_heads, - dropout=args.dropout, - depthwise_conv_kernel_size=args.depthwise_conv_kernel_size, - activation_fn="swish", - attn_type=args.attn_type, - pos_enc_type=args.pos_enc_type, - use_fp16=args.fp16, # only used for rope - ) - layer = fsdp_wrap(layer) - if args.checkpoint_activations: - layer = checkpoint_wrapper(layer) - return layer - - def __init__(self, args): - super().__init__(args) - self.args = args - self.dropout = args.dropout - self.embedding_dim = args.encoder_embed_dim - self.pos_enc_type = args.pos_enc_type - max_source_positions = self.max_positions() - - if self.pos_enc_type == "rel_pos": - self.embed_positions = RelPositionalEncoding( - max_source_positions, self.embedding_dim - ) - elif self.pos_enc_type == "rope": - self.embed_positions = None - else: - raise Exception("Unsupported positional encoding type") - - self.layers = nn.ModuleList( - [self.build_encoder_layer(args) for _ in range(args.encoder_layers)] - ) - self.layer_norm_first = args.layer_norm_first - self.layer_norm = LayerNorm(self.embedding_dim) - self.layerdrop = args.encoder_layerdrop - - self.apply(init_bert_params) - - def extract_features(self, x, padding_mask=None, tgt_layer=None): - if padding_mask is not None: - x = index_put(x, padding_mask, 0) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # B X T X C here - position_emb = None - if self.pos_enc_type == "rel_pos": - position_emb = self.embed_positions(x) - - if not self.layer_norm_first: - x = self.layer_norm(x) - - x = F.dropout(x, p=self.dropout, training=self.training) - - layer_results = [] - r = None - for i, layer in enumerate(self.layers): - dropout_probability = np.random.random() - if not self.training or (dropout_probability > self.layerdrop): - x, z = layer( - x, - self_attn_padding_mask=padding_mask, - need_weights=False, - position_emb=position_emb, - ) - if tgt_layer is not None: - layer_results.append((x, z)) - if i == tgt_layer: - r = x - break - - if r is not None: - x = r - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - return x, layer_results - - -class TransformerSentenceEncoderLayer(nn.Module): - """ - Implements a Transformer Encoder Layer used in BERT/XLM style pre-trained - models. 
- """ - - def __init__( - self, - embedding_dim: float = 768, - ffn_embedding_dim: float = 3072, - num_attention_heads: float = 8, - dropout: float = 0.1, - attention_dropout: float = 0.1, - activation_dropout: float = 0.1, - activation_fn: str = "relu", - layer_norm_first: bool = False, - ) -> None: - - super().__init__() - # Initialize parameters - self.embedding_dim = embedding_dim - self.dropout = dropout - self.activation_dropout = activation_dropout - - # Initialize blocks - self.activation_fn = utils.get_activation_fn(activation_fn) - self.self_attn = MultiheadAttention( - self.embedding_dim, - num_attention_heads, - dropout=attention_dropout, - self_attention=True, - ) - - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(self.activation_dropout) - self.dropout3 = nn.Dropout(dropout) - - self.layer_norm_first = layer_norm_first - - # layer norm associated with the self attention layer - self.self_attn_layer_norm = LayerNorm(self.embedding_dim) - self.fc1 = nn.Linear(self.embedding_dim, ffn_embedding_dim) - self.fc2 = nn.Linear(ffn_embedding_dim, self.embedding_dim) - - # layer norm associated with the position wise feed-forward NN - self.final_layer_norm = LayerNorm(self.embedding_dim) - - def forward( - self, - x: torch.Tensor, - self_attn_mask: torch.Tensor = None, - self_attn_padding_mask: torch.Tensor = None, - need_weights: bool = False, - att_args=None, - ): - """ - LayerNorm is applied either before or after the self-attention/ffn - modules similar to the original Transformer imlementation. - """ - residual = x - - if self.layer_norm_first: - x = self.self_attn_layer_norm(x) - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - attn_mask=self_attn_mask, - ) - x = self.dropout1(x) - x = residual + x - - residual = x - x = self.final_layer_norm(x) - x = self.activation_fn(self.fc1(x)) - x = self.dropout2(x) - x = self.fc2(x) - x = self.dropout3(x) - x = residual + x - else: - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - ) - - x = self.dropout1(x) - x = residual + x - - x = self.self_attn_layer_norm(x) - - residual = x - x = self.activation_fn(self.fc1(x)) - x = self.dropout2(x) - x = self.fc2(x) - x = self.dropout3(x) - x = residual + x - x = self.final_layer_norm(x) - - return x, attn diff --git a/kosmos-g/fairseq/fairseq/models/wav2vec/wav2vec2_asr.py b/kosmos-g/fairseq/fairseq/models/wav2vec/wav2vec2_asr.py deleted file mode 100644 index d2a039d2b..000000000 --- a/kosmos-g/fairseq/fairseq/models/wav2vec/wav2vec2_asr.py +++ /dev/null @@ -1,727 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import contextlib -import copy -import math -import re -from argparse import Namespace -from dataclasses import dataclass, field -from typing import Any, Optional - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from omegaconf import II, MISSING, open_dict - -from fairseq import checkpoint_utils, tasks, utils -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.models import ( - BaseFairseqModel, - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, - register_model, -) -from fairseq.models.wav2vec.wav2vec2 import MASKING_DISTRIBUTION_CHOICES -from fairseq.modules import LayerNorm, PositionalEmbedding, TransformerDecoderLayer -from fairseq.tasks import FairseqTask - - -@dataclass -class Wav2Vec2AsrConfig(FairseqDataclass): - w2v_path: str = field( - default=MISSING, metadata={"help": "path to wav2vec 2.0 model"} - ) - no_pretrained_weights: bool = field( - default=False, metadata={"help": "if true, does not load pretrained weights"} - ) - dropout_input: float = field( - default=0.0, - metadata={"help": "dropout to apply to the input (after feat extr)"}, - ) - final_dropout: float = field( - default=0.0, - metadata={"help": "dropout after transformer and before final projection"}, - ) - dropout: float = field( - default=0.0, metadata={"help": "dropout probability inside wav2vec 2.0 model"} - ) - attention_dropout: float = field( - default=0.0, - metadata={ - "help": "dropout probability for attention weights inside wav2vec 2.0 model" - }, - ) - activation_dropout: float = field( - default=0.0, - metadata={ - "help": "dropout probability after activation in FFN inside wav2vec 2.0 model" - }, - ) - conv_feature_layers: Optional[str] = field( - default="[(512, 10, 5)] + [(512, 3, 2)] * 4 + [(512,2,2)] + [(512,2,2)]", - metadata={ - "help": ( - "string describing convolutional feature extraction " - "layers in form of a python list that contains " - "[(dim, kernel_size, stride), ...]" - ), - }, - ) - encoder_embed_dim: Optional[int] = field( - default=768, metadata={"help": "encoder embedding dimension"} - ) - - # masking - apply_mask: bool = field( - default=False, metadata={"help": "apply masking during fine-tuning"} - ) - mask_length: int = field( - default=10, metadata={"help": "repeat the mask indices multiple times"} - ) - mask_prob: float = field( - default=0.5, - metadata={ - "help": "probability of replacing a token with mask (normalized by length)" - }, - ) - mask_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", metadata={"help": "how to choose masks"} - ) - mask_other: float = field( - default=0, - metadata={ - "help": "secondary mask argument (used for more complex distributions), " - "see help in compute_mask_indices" - }, - ) - no_mask_overlap: bool = field( - default=False, metadata={"help": "whether to allow masks to overlap"} - ) - mask_min_space: Optional[int] = field( - default=1, - metadata={"help": "min space between spans (if no overlap is enabled)"}, - ) - - # channel masking - mask_channel_length: int = field( - default=10, metadata={"help": "length of the mask for features (channels)"} - ) - mask_channel_prob: float = field( - default=0.0, metadata={"help": "probability of replacing a feature with 0"} - ) - mask_channel_selection: MASKING_DISTRIBUTION_CHOICES = field( - default="static", - metadata={"help": "how to choose mask length for channel masking"}, - ) - mask_channel_other: float = field( - 
default=0, - metadata={ - "help": "secondary mask argument (used for more complex distributions), " - "see help in compute_mask_indicesh" - }, - ) - no_mask_channel_overlap: bool = field( - default=False, metadata={"help": "whether to allow channel masks to overlap"} - ) - freeze_finetune_updates: int = field( - default=0, metadata={"help": "dont finetune wav2vec for this many updates"} - ) - feature_grad_mult: float = field( - default=0.0, metadata={"help": "reset feature grad mult in wav2vec 2.0 to this"} - ) - layerdrop: float = field( - default=0.0, metadata={"help": "probability of dropping a layer in wav2vec 2.0"} - ) - mask_channel_min_space: Optional[int] = field( - default=1, - metadata={"help": "min space between spans (if no overlap is enabled)"}, - ) - mask_channel_before: bool = False - normalize: bool = II("task.normalize") - data: str = II("task.data") - # this holds the loaded wav2vec args - w2v_args: Any = None - checkpoint_activations: bool = field( - default=False, metadata={"help": "checkpoint_activations"} - ) - offload_activations: bool = field( - default=False, metadata={"help": "offload_activations"} - ) - min_params_to_wrap: int = field( - default=int(1e8), - metadata={ - "help": "minimum number of params for a layer to be wrapped with FSDP() when " - "training with --ddp-backend=fully_sharded. Smaller values will " - "improve memory efficiency, but may make torch.distributed " - "communication less efficient due to smaller input sizes. This option " - "is set to 0 (i.e., always wrap) when --checkpoint-activations or " - "--offload-activations are passed." - }, - ) - - checkpoint_activations: bool = field( - default=False, - metadata={"help": "recompute activations and save memory for extra compute"}, - ) - ddp_backend: str = II("distributed_training.ddp_backend") - - -@dataclass -class Wav2Vec2CtcConfig(Wav2Vec2AsrConfig): - blank_weight: float = 0 - blank_mode: str = "add" - - -@register_model("wav2vec_ctc", dataclass=Wav2Vec2CtcConfig) -class Wav2VecCtc(BaseFairseqModel): - def __init__(self, cfg: Wav2Vec2CtcConfig, w2v_encoder: BaseFairseqModel): - super().__init__() - self.cfg = cfg - self.w2v_encoder = w2v_encoder - self.blank_weight = cfg.blank_weight - self.blank_mode = cfg.blank_mode - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - return state_dict - - @classmethod - def build_model(cls, cfg: Wav2Vec2CtcConfig, task: FairseqTask): - """Build a new model instance.""" - w2v_encoder = Wav2VecEncoder(cfg, len(task.target_dictionary)) - return cls(cfg, w2v_encoder) - - def get_logits(self, net_output, normalize=False): - logits = net_output["encoder_out"] - if self.blank_weight != 0: - if self.blank_mode == "add": - logits[..., 0] += self.blank_weight - elif self.blank_mode == "set": - logits[..., 0] = self.blank_weight - else: - raise Exception(f"invalid blank mode {self.blank_mode}") - - if net_output["padding_mask"] is not None and net_output["padding_mask"].any(): - number_of_classes = logits.size(-1) - masking_tensor = torch.ones( - number_of_classes, device=logits.device - ) * float("-inf") - masking_tensor[0] = 0 - logits[net_output["padding_mask"].T] = masking_tensor.type_as(logits) - - if normalize: - logits = utils.log_softmax(logits.float(), dim=-1) - - return logits - - def get_normalized_probs(self, net_output, log_probs): - """Get normalized probabilities (or log probs) from a net's output.""" - - logits = self.get_logits(net_output) - - if log_probs: - return 
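`get_logits` above biases CTC decoding through the blank symbol at vocabulary index 0: `add` nudges its logit by a constant, `set` pins it outright. Extracted as a standalone sketch:

```python
import torch

# Sketch of the blank_weight adjustment: index 0 is the CTC blank.
def adjust_blank(logits: torch.Tensor, blank_weight: float, blank_mode: str):
    if blank_weight != 0:
        if blank_mode == "add":
            logits[..., 0] += blank_weight
        elif blank_mode == "set":
            logits[..., 0] = blank_weight
        else:
            raise ValueError(f"invalid blank mode {blank_mode}")
    return logits

logits = torch.randn(10, 2, 32)  # T x B x V, blank at index 0
adjust_blank(logits, blank_weight=-1.0, blank_mode="add")  # discourage blanks
```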
utils.log_softmax(logits.float(), dim=-1) - else: - return utils.softmax(logits.float(), dim=-1) - - def forward(self, **kwargs): - x = self.w2v_encoder(**kwargs) - return x - - -@dataclass -class Wav2Vec2Seq2SeqConfig(Wav2Vec2AsrConfig): - decoder_embed_dim: int = field( - default=768, metadata={"help": "decoder embedding dimension"} - ) - decoder_ffn_embed_dim: int = field( - default=3072, metadata={"help": "decoder embedding dimension for FFN"} - ) - decoder_layers: int = field(default=6, metadata={"help": "num of decoder layers"}) - decoder_layerdrop: float = field( - default=0.0, metadata={"help": "decoder layerdrop chance"} - ) - decoder_attention_heads: int = field( - default=4, metadata={"help": "num decoder attention heads"} - ) - decoder_learned_pos: bool = field( - default=False, - metadata={"help": "use learned positional embeddings in the decoder"}, - ) - decoder_normalize_before: bool = field( - default=False, metadata={"help": "apply layernorm before each decoder block"} - ) - no_token_positional_embeddings: bool = field( - default=False, - metadata={ - "help": "if set, disables positional embeddings (outside self attention)" - }, - ) - decoder_dropout: float = field( - default=0.0, metadata={"help": "dropout probability in the decoder"} - ) - decoder_attention_dropout: float = field( - default=0.0, - metadata={ - "help": "dropout probability for attention weights inside the decoder" - }, - ) - decoder_activation_dropout: float = field( - default=0.0, - metadata={ - "help": "dropout probability after activation in FFN inside the decoder" - }, - ) - max_target_positions: int = field( - default=2048, metadata={"help": "max target positions"} - ) - share_decoder_input_output_embed: bool = field( - default=False, metadata={"help": "share decoder input and output embeddings"} - ) - autoregressive: bool = II("task.autoregressive") - - -@register_model("wav2vec_seq2seq", dataclass=Wav2Vec2Seq2SeqConfig) -class Wav2Vec2Seq2SeqModel(FairseqEncoderDecoderModel): - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @classmethod - def build_model(cls, cfg: Wav2Vec2Seq2SeqConfig, task: FairseqTask): - """Build a new model instance.""" - - assert ( - cfg.autoregressive - ), "Please set task.autoregressive=true for seq2seq asr models" - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - def build_embedding(dictionary, embed_dim): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - emb = Embedding(num_embeddings, embed_dim, padding_idx) - return emb - - decoder_embed_tokens = build_embedding(tgt_dict, cfg.decoder_embed_dim) - - encoder = cls.build_encoder(cfg) - decoder = cls.build_decoder(cfg, tgt_dict, decoder_embed_tokens) - - return Wav2Vec2Seq2SeqModel(encoder, decoder) - - @classmethod - def build_encoder(cls, cfg: Wav2Vec2AsrConfig): - return Wav2VecEncoder(cfg) - - @classmethod - def build_decoder(cls, cfg: Wav2Vec2Seq2SeqConfig, tgt_dict, embed_tokens): - return TransformerDecoder(cfg, tgt_dict, embed_tokens) - - def forward(self, **kwargs): - encoder_out = self.encoder(**kwargs) - decoder_out = self.decoder(encoder_out=encoder_out, **kwargs) - return decoder_out - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - return state_dict - - -class Wav2VecEncoder(FairseqEncoder): - def __init__(self, cfg: Wav2Vec2AsrConfig, output_size=None): - self.apply_mask = cfg.apply_mask - - arg_overrides = { - "dropout": cfg.dropout, - "activation_dropout": 
cfg.activation_dropout, - "dropout_input": cfg.dropout_input, - "attention_dropout": cfg.attention_dropout, - "mask_length": cfg.mask_length, - "mask_prob": cfg.mask_prob, - "mask_selection": cfg.mask_selection, - "mask_other": cfg.mask_other, - "no_mask_overlap": cfg.no_mask_overlap, - "mask_channel_length": cfg.mask_channel_length, - "mask_channel_prob": cfg.mask_channel_prob, - "mask_channel_before": cfg.mask_channel_before, - "mask_channel_selection": cfg.mask_channel_selection, - "mask_channel_other": cfg.mask_channel_other, - "no_mask_channel_overlap": cfg.no_mask_channel_overlap, - "encoder_layerdrop": cfg.layerdrop, - "feature_grad_mult": cfg.feature_grad_mult, - "checkpoint_activations": cfg.checkpoint_activations, - "offload_activations": cfg.offload_activations, - "min_params_to_wrap": cfg.min_params_to_wrap, - } - - if cfg.w2v_args is None: - state = checkpoint_utils.load_checkpoint_to_cpu(cfg.w2v_path, arg_overrides) - w2v_args = state.get("cfg", None) - if w2v_args is None: - w2v_args = convert_namespace_to_omegaconf(state["args"]) - w2v_args.criterion = None - w2v_args.lr_scheduler = None - cfg.w2v_args = w2v_args - else: - state = None - w2v_args = cfg.w2v_args - if isinstance(w2v_args, Namespace): - cfg.w2v_args = w2v_args = convert_namespace_to_omegaconf(w2v_args) - - assert cfg.normalize == w2v_args.task.normalize, ( - "Fine-tuning works best when data normalization is the same. " - "Please check that --normalize is set or unset for both pre-training and here" - ) - - if hasattr(cfg, "checkpoint_activations") and cfg.checkpoint_activations: - with open_dict(w2v_args): - w2v_args.model.checkpoint_activations = cfg.checkpoint_activations - - w2v_args.task.data = cfg.data - task = tasks.setup_task(w2v_args.task) - model = task.build_model(w2v_args.model, from_checkpoint=True) - - if state is not None and not cfg.no_pretrained_weights: - self.load_model_weights(state, model, cfg) - - model.remove_pretraining_modules() - - super().__init__(task.source_dictionary) - - d = w2v_args.model.encoder_embed_dim - - self.w2v_model = model - - self.final_dropout = nn.Dropout(cfg.final_dropout) - self.freeze_finetune_updates = cfg.freeze_finetune_updates - self.num_updates = 0 - - targ_d = None - self.proj = None - - if output_size is not None: - targ_d = output_size - elif getattr(cfg, "decoder_embed_dim", d) != d: - targ_d = cfg.decoder_embed_dim - - if targ_d is not None: - self.proj = Linear(d, targ_d) - - def load_model_weights(self, state, model, cfg): - if cfg.ddp_backend == "fully_sharded": - from fairseq.distributed import FullyShardedDataParallel - - for name, module in model.named_modules(): - if "encoder.layers" in name and len(name.split(".")) == 3: - # Only for layers, we do a special handling and load the weights one by one - # We dont load all weights together as that wont be memory efficient and may - # cause oom - new_dict = { - k.replace(name + ".", ""): v - for (k, v) in state["model"].items() - if name + "." in k - } - assert isinstance(module, FullyShardedDataParallel) - with module.summon_full_params(): - module.load_state_dict(new_dict, strict=True) - module._reset_lazy_init() - - # Once layers are loaded, filter them out and load everything else. 
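The second phase of `load_model_weights` above drops the per-layer keys it already loaded into FSDP-wrapped modules and loads everything else non-strictly. A sketch of that filtering; note the raw-string pattern here is a tightened variant of the original `"encoder.layers.\d."`, whose bare `\d` escape is deprecated in modern Python:

```python
import re

# Strip keys belonging to encoder layers (already loaded one by one),
# then the remainder would be loaded with strict=False.
state_model = {"encoder.layers.0.fc1.weight": 0, "final_proj.weight": 1}
r = re.compile(r"encoder\.layers\.\d+\.")
remainder = {k: v for k, v in state_model.items() if not r.match(k)}
print(remainder)  # {'final_proj.weight': 1}
```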
- r = re.compile("encoder.layers.\d.") - filtered_list = list(filter(r.match, state["model"].keys())) - - new_big_dict = { - k: v for (k, v) in state["model"].items() if k not in filtered_list - } - - model.load_state_dict(new_big_dict, strict=False) - else: - model.load_state_dict(state["model"], strict=True) - - def set_num_updates(self, num_updates): - """Set the number of parameters updates.""" - super().set_num_updates(num_updates) - self.num_updates = num_updates - - def forward(self, source, padding_mask, **kwargs): - - w2v_args = { - "source": source, - "padding_mask": padding_mask, - "mask": self.apply_mask and self.training, - } - - ft = self.freeze_finetune_updates <= self.num_updates - - with torch.no_grad() if not ft else contextlib.ExitStack(): - res = self.w2v_model.extract_features(**w2v_args) - - x = res["x"] - padding_mask = res["padding_mask"] - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - x = self.final_dropout(x) - - if self.proj: - x = self.proj(x) - - return { - "encoder_out": x, # T x B x C - "padding_mask": padding_mask, # B x T, - "layer_results": res["layer_results"], - } - - def forward_torchscript(self, net_input): - if torch.jit.is_scripting(): - return self.forward(net_input["source"], net_input["padding_mask"]) - else: - return self.forward_non_torchscript(net_input) - - def reorder_encoder_out(self, encoder_out, new_order): - if encoder_out["encoder_out"] is not None: - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - if encoder_out["padding_mask"] is not None: - encoder_out["padding_mask"] = encoder_out["padding_mask"].index_select( - 0, new_order - ) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return None - - def upgrade_state_dict_named(self, state_dict, name): - return state_dict - - -class TransformerDecoder(FairseqIncrementalDecoder): - """ - Transformer decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`TransformerDecoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). 
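`Wav2VecEncoder.forward` above freezes the pretrained encoder for the first `freeze_finetune_updates` updates by choosing between `torch.no_grad()` and an empty `contextlib.ExitStack()` as the `with`-context. A minimal sketch of that gate:

```python
import contextlib
import torch

# Before the threshold, features are computed under no_grad (frozen encoder);
# afterwards an empty ExitStack makes the `with` statement a no-op.
def maybe_frozen(num_updates: int, freeze_finetune_updates: int):
    ft = freeze_finetune_updates <= num_updates
    return contextlib.ExitStack() if ft else torch.no_grad()

with maybe_frozen(num_updates=0, freeze_finetune_updates=10_000):
    pass  # the wav2vec forward would run here without building a graph
```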
- """ - - def __init__( - self, - cfg: Wav2Vec2Seq2SeqConfig, - dictionary, - embed_tokens, - no_encoder_attn=False, - ): - super().__init__(dictionary) - - self.dropout = cfg.decoder_dropout - self.share_input_output_embed = cfg.share_decoder_input_output_embed - - input_embed_dim = embed_tokens.embedding_dim - embed_dim = cfg.decoder_embed_dim - self.output_embed_dim = cfg.decoder_embed_dim - - self.layerdrop = cfg.decoder_layerdrop - - self.padding_idx = embed_tokens.padding_idx - self.max_target_positions = cfg.max_target_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim - - self.project_in_dim = ( - Linear(input_embed_dim, embed_dim, bias=False) - if embed_dim != input_embed_dim - else None - ) - - self.embed_positions = ( - PositionalEmbedding( - cfg.max_target_positions, - embed_dim, - self.padding_idx, - learned=cfg.decoder_learned_pos, - ) - if not cfg.no_token_positional_embeddings - else None - ) - - # TODO: update this when transformer gets converted to dataclass configs - transformer_cfg = copy.deepcopy(cfg) - with open_dict(transformer_cfg): - transformer_cfg.dropout = transformer_cfg.decoder_dropout - transformer_cfg.attention_dropout = ( - transformer_cfg.decoder_attention_dropout - ) - transformer_cfg.activation_dropout = ( - transformer_cfg.decoder_activation_dropout - ) - - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - TransformerDecoderLayer(transformer_cfg, no_encoder_attn) - for _ in range(transformer_cfg.decoder_layers) - ] - ) - - if not self.share_input_output_embed: - self.embed_out = nn.Parameter( - torch.Tensor(len(dictionary), self.output_embed_dim) - ) - nn.init.normal_(self.embed_out, mean=0, std=self.output_embed_dim ** -0.5) - - if transformer_cfg.decoder_normalize_before: - self.layer_norm = LayerNorm(embed_dim) - else: - self.layer_norm = None - - def forward( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **unused - ): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (Tensor, optional): output from the encoder, used for - encoder-side attention - incremental_state (dict): dictionary used for storing state during - :ref:`Incremental decoding` - - Returns: - tuple: - - the decoder's output of shape `(batch, tgt_len, vocab)` - - a dictionary with any model-specific outputs - """ - prev_output_tokens = prev_output_tokens.long() - x, extra = self.extract_features( - prev_output_tokens, encoder_out, incremental_state - ) - x = self.output_layer(x) - return x, extra - - def extract_features( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **unused - ): - """ - Similar to *forward* but only return features. 
- - Returns: - tuple: - - the decoder's features of shape `(batch, tgt_len, embed_dim)` - - a dictionary with any model-specific outputs - """ - - # embed positions - positions = ( - self.embed_positions( - prev_output_tokens, incremental_state=incremental_state - ) - if self.embed_positions is not None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - x = F.dropout(x, p=self.dropout, training=self.training) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - attn = None - - inner_states = [x] - - # decoder layers - self_attn_padding_mask = None - if prev_output_tokens.eq(self.padding_idx).any(): - self_attn_padding_mask = prev_output_tokens.eq(self.padding_idx) - for layer in self.layers: - dropout_probability = np.random.random() - if not self.training or (dropout_probability > self.layerdrop): - x, attn, _ = layer( - x, - encoder_out["encoder_out"] if encoder_out is not None else None, - encoder_out["padding_mask"] if encoder_out is not None else None, - incremental_state, - self_attn_mask=self.buffered_future_mask(x) - if incremental_state is None - else None, - self_attn_padding_mask=self_attn_padding_mask, - ) - inner_states.append(x) - - if self.layer_norm: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - return x, {"attn": attn, "inner_states": inner_states} - - def output_layer(self, features, **kwargs): - """Project features to the vocabulary size.""" - # project back to size of vocabulary - if self.share_input_output_embed: - return F.linear(features, self.embed_tokens.weight) - else: - return F.linear(features, self.embed_out) - - def max_positions(self): - """Maximum output length supported by the decoder.""" - if self.embed_positions is None: - return self.max_target_positions - return min(self.max_target_positions, self.embed_positions.max_positions) - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - or self._future_mask.size(0) < dim - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - def upgrade_state_dict_named(self, state_dict, name): - return state_dict - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m diff --git a/kosmos-g/fairseq/fairseq/modules/__init__.py b/kosmos-g/fairseq/fairseq/modules/__init__.py deleted file mode 100644 index 085769c4f..000000000 --- a/kosmos-g/fairseq/fairseq/modules/__init__.py +++ /dev/null @@ -1,101 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
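The decoder's `buffered_future_mask` above caches an upper-triangular matrix of `-inf` so that, when no incremental state is passed, each target position can only attend to itself and earlier positions. A minimal, self-contained sketch of that masking pattern (the tensor names here are illustrative, not the fairseq API):

```python
import torch

def future_mask(dim: int) -> torch.Tensor:
    # -inf strictly above the diagonal: position i may attend to j <= i only,
    # because the -inf logits vanish after softmax.
    return torch.triu(torch.full((dim, dim), float("-inf")), diagonal=1)

scores = torch.randn(5, 5)  # (tgt_len, tgt_len) attention logits
probs = torch.softmax(scores + future_mask(5), dim=-1)
assert torch.allclose(probs.triu(1), torch.zeros(5, 5))  # no attention to the future
```

Caching the mask and slicing `[:dim, :dim]` on each call avoids rebuilding it for every batch, which is all the `_future_mask` bookkeeping above does.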
-"""isort:skip_file""" - -from .adaptive_input import AdaptiveInput -from .adaptive_softmax import AdaptiveSoftmax -from .base_layer import BaseLayer -from .beamable_mm import BeamableMM -from .character_token_embedder import CharacterTokenEmbedder -from .conv_tbc import ConvTBC -from .cross_entropy import cross_entropy -from .downsampled_multihead_attention import DownsampledMultiHeadAttention -from .dynamic_convolution import DynamicConv, DynamicConv1dTBC -from .dynamic_crf_layer import DynamicCRF -from .fairseq_dropout import FairseqDropout -from .fp32_batch_norm import Fp32BatchNorm -from .fp32_group_norm import Fp32GroupNorm -from .fp32_instance_norm import Fp32InstanceNorm -from .gelu import gelu, gelu_accurate -from .grad_multiply import GradMultiply -from .gumbel_vector_quantizer import GumbelVectorQuantizer -from .kmeans_vector_quantizer import KmeansVectorQuantizer -from .layer_drop import LayerDropModuleList -from .layer_norm import Fp32LayerNorm, LayerNorm -from .learned_positional_embedding import LearnedPositionalEmbedding -from .lightweight_convolution import LightweightConv, LightweightConv1dTBC -from .linearized_convolution import LinearizedConvolution -from .location_attention import LocationAttention -from .lstm_cell_with_zoneout import LSTMCellWithZoneOut -from .multihead_attention import MultiheadAttention -from .positional_embedding import PositionalEmbedding -from .same_pad import SamePad -from .scalar_bias import ScalarBias -from .sinusoidal_positional_embedding import SinusoidalPositionalEmbedding -from .transformer_sentence_encoder_layer import TransformerSentenceEncoderLayer -from .transformer_sentence_encoder import TransformerSentenceEncoder -from .transpose_last import TransposeLast -from .unfold import unfold1d -from .transformer_layer import TransformerDecoderLayer, TransformerEncoderLayer -from .vggblock import VGGBlock -from .espnet_multihead_attention import ( - ESPNETMultiHeadedAttention, - RelPositionMultiHeadedAttention, - RotaryPositionMultiHeadedAttention, -) -from .rotary_positional_embedding import RotaryPositionalEmbedding -from .positional_encoding import ( - RelPositionalEncoding, -) - -__all__ = [ - "AdaptiveInput", - "AdaptiveSoftmax", - "BaseLayer", - "BeamableMM", - "CharacterTokenEmbedder", - "ConvTBC", - "cross_entropy", - "DownsampledMultiHeadAttention", - "DynamicConv1dTBC", - "DynamicConv", - "DynamicCRF", - "FairseqDropout", - "Fp32BatchNorm", - "Fp32GroupNorm", - "Fp32LayerNorm", - "Fp32InstanceNorm", - "gelu", - "gelu_accurate", - "GradMultiply", - "GumbelVectorQuantizer", - "KmeansVectorQuantizer", - "LayerDropModuleList", - "LayerNorm", - "LearnedPositionalEmbedding", - "LightweightConv1dTBC", - "LightweightConv", - "LinearizedConvolution", - "LocationAttention", - "LSTMCellWithZoneOut", - "MultiheadAttention", - "PositionalEmbedding", - "SamePad", - "ScalarBias", - "SinusoidalPositionalEmbedding", - "TransformerSentenceEncoderLayer", - "TransformerSentenceEncoder", - "TransformerDecoderLayer", - "TransformerEncoderLayer", - "TransposeLast", - "VGGBlock", - "unfold1d", - "ESPNETMultiheadedAttention", - "PositionalEmbedding", - "RelPositionMultiHeadedAttention", - "RelPositionalEncoding", - "RotaryPositionalEmbedding", - "RotaryPositionMultiHeadedAttention", -] diff --git a/kosmos-g/fairseq/fairseq/modules/adaptive_input.py b/kosmos-g/fairseq/fairseq/modules/adaptive_input.py deleted file mode 100644 index 446534a9f..000000000 --- a/kosmos-g/fairseq/fairseq/modules/adaptive_input.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) 
Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from typing import List - -import torch -from fairseq.modules.quant_noise import quant_noise -from torch import nn - - -class AdaptiveInput(nn.Module): - def __init__( - self, - vocab_size: int, - padding_idx: int, - initial_dim: int, - factor: float, - output_dim: int, - cutoff: List[int], - q_noise: float = 0, - qn_block_size: int = 8, - ): - super().__init__() - - if vocab_size > cutoff[-1]: - cutoff = cutoff + [vocab_size] - else: - assert ( - vocab_size == cutoff[-1] - ), "cannot specify cutoff larger than vocab size" - - self.cutoff = cutoff - self.embedding_dim = output_dim - self.padding_idx = padding_idx - - self.embeddings = nn.ModuleList() - for i in range(len(self.cutoff)): - prev = self.cutoff[i - 1] if i > 0 else 0 - size = self.cutoff[i] - prev - dim = int(initial_dim // (factor ** i)) - seq = nn.Sequential( - nn.Embedding(size, dim, self.padding_idx), - quant_noise( - nn.Linear(dim, output_dim, bias=False), q_noise, qn_block_size - ), - ) - - self.embeddings.append(seq) - self.padding_idx = None - self.padding_idx = padding_idx - - def init_weights(m): - if isinstance(m, nn.Embedding): - nn.init.normal_(m.weight, mean=0, std=m.weight.shape[1] ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - elif hasattr(m, "weight"): - nn.init.xavier_uniform_(m.weight) - - self.apply(init_weights) - - self.register_buffer("_float_tensor", torch.FloatTensor(1)) - - def weights_for_band(self, band: int): - return self.embeddings[band][0].weight, self.embeddings[band][1].weight - - def forward(self, input: torch.Tensor): - result = self._float_tensor.new(input.shape + (self.embedding_dim,)) - for i in range(len(self.cutoff)): - mask = input.lt(self.cutoff[i]) - if i > 0: - mask.mul_(input.ge(self.cutoff[i - 1])) - chunk_input = input[mask] - self.cutoff[i - 1] - else: - chunk_input = input[mask] - if mask.any(): - result[mask] = self.embeddings[i](chunk_input) - return result diff --git a/kosmos-g/fairseq/fairseq/modules/adaptive_softmax.py b/kosmos-g/fairseq/fairseq/modules/adaptive_softmax.py deleted file mode 100644 index ae0c77ba0..000000000 --- a/kosmos-g/fairseq/fairseq/modules/adaptive_softmax.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
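`AdaptiveInput` splits the vocabulary at the `cutoff` boundaries and gives each band its own embedding table, with dimensions shrinking by `factor ** band`, so frequent ids get wide embeddings and rare ids cheap ones. A toy sketch of the band bookkeeping in its `forward` (the cutoff values here are illustrative):

```python
import torch

cutoff = [10, 100, 1000]  # band 0: ids 0-9, band 1: ids 10-99, band 2: ids 100-999
token_ids = torch.tensor([3, 42, 500])

for i, hi in enumerate(cutoff):
    lo = cutoff[i - 1] if i > 0 else 0
    mask = token_ids.ge(lo) & token_ids.lt(hi)
    # as in AdaptiveInput.forward, ids are shifted to be 0-based within their band
    print(f"band {i}: local ids {(token_ids[mask] - lo).tolist()}")
```

The `AdaptiveSoftmax` that follows reuses the same partition on the output side: its head scores the first band plus one logit per tail cluster, and a tail word's log-probability is that cluster prior plus the within-cluster log-probability (see `get_log_prob`).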
- -import functools -import operator - -import torch -import torch.nn.functional as F -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.quant_noise import quant_noise -from torch import nn - - -class TiedLinear(nn.Module): - def __init__(self, weight, transpose): - super().__init__() - self.weight = weight - self.transpose = transpose - - def forward(self, input): - return F.linear(input, self.weight.t() if self.transpose else self.weight) - - -class TiedHeadModule(nn.Module): - def __init__(self, weights, input_dim, num_classes, q_noise, qn_block_size): - super().__init__() - tied_emb, _ = weights - self.num_words, emb_dim = tied_emb.size() - - self.word_proj = quant_noise( - TiedLinear(tied_emb, transpose=False), q_noise, qn_block_size - ) - if input_dim != emb_dim: - self.word_proj = nn.Sequential( - quant_noise( - nn.Linear(input_dim, emb_dim, bias=False), q_noise, qn_block_size - ), - self.word_proj, - ) - - self.class_proj = quant_noise( - nn.Linear(input_dim, num_classes, bias=False), q_noise, qn_block_size - ) - self.out_dim = self.num_words + num_classes - - self.register_buffer("_float_tensor", torch.FloatTensor(1)) - - def forward(self, input): - inp_sz = functools.reduce(operator.mul, input.shape[:-1], 1) - out = self._float_tensor.new(inp_sz, self.out_dim) - out[:, : self.num_words] = self.word_proj(input.view(inp_sz, -1)) - out[:, self.num_words :] = self.class_proj(input.view(inp_sz, -1)) - return out - - -class AdaptiveSoftmax(nn.Module): - """ - This is an implementation of the efficient softmax approximation for - graphical processing units (GPU), described in the paper "Efficient softmax - approximation for GPUs" (http://arxiv.org/abs/1609.04309). - """ - - def __init__( - self, - vocab_size, - input_dim, - cutoff, - dropout, - factor=4.0, - adaptive_inputs=None, - tie_proj=False, - q_noise=0, - qn_block_size=8, - ): - super().__init__() - - if vocab_size > cutoff[-1]: - cutoff = cutoff + [vocab_size] - else: - assert ( - vocab_size == cutoff[-1] - ), "cannot specify cutoff larger than vocab size" - - output_dim = cutoff[0] + len(cutoff) - 1 - - self.vocab_size = vocab_size - self.cutoff = cutoff - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.input_dim = input_dim - self.factor = factor - self.q_noise = q_noise - self.qn_block_size = qn_block_size - - self.lsm = nn.LogSoftmax(dim=1) - - if adaptive_inputs is not None: - self.head = TiedHeadModule( - adaptive_inputs.weights_for_band(0), - input_dim, - len(cutoff) - 1, - self.q_noise, - self.qn_block_size, - ) - else: - self.head = quant_noise( - nn.Linear(input_dim, output_dim, bias=False), - self.q_noise, - self.qn_block_size, - ) - - self._make_tail(adaptive_inputs, tie_proj) - - def init_weights(m): - if ( - hasattr(m, "weight") - and not isinstance(m, TiedLinear) - and not isinstance(m, TiedHeadModule) - ): - nn.init.xavier_uniform_(m.weight) - - self.apply(init_weights) - - self.register_buffer("version", torch.LongTensor([1])) - - def _make_tail(self, adaptive_inputs=None, tie_proj=False): - self.tail = nn.ModuleList() - for i in range(len(self.cutoff) - 1): - dim = int(self.input_dim // self.factor ** (i + 1)) - - tied_emb, tied_proj = ( - adaptive_inputs.weights_for_band(i + 1) - if adaptive_inputs is not None - else (None, None) - ) - - if tied_proj is not None: - if tie_proj: - proj = quant_noise( - TiedLinear(tied_proj, transpose=True), - self.q_noise, - self.qn_block_size, - ) - else: - proj = quant_noise( - 
nn.Linear(tied_proj.size(0), tied_proj.size(1), bias=False), - self.q_noise, - self.qn_block_size, - ) - else: - proj = quant_noise( - nn.Linear(self.input_dim, dim, bias=False), - self.q_noise, - self.qn_block_size, - ) - - if tied_emb is None: - out_proj = nn.Linear( - dim, self.cutoff[i + 1] - self.cutoff[i], bias=False - ) - else: - out_proj = TiedLinear(tied_emb, transpose=False) - - m = nn.Sequential( - proj, - nn.Dropout(self.dropout_module.p), - quant_noise(out_proj, self.q_noise, self.qn_block_size), - ) - - self.tail.append(m) - - def upgrade_state_dict_named(self, state_dict, name): - version_name = name + ".version" - if version_name not in state_dict: - raise Exception("This version of the model is no longer supported") - - def adapt_target(self, target): - """ - In order to be efficient, the AdaptiveSoftMax does not compute the - scores for all the word of the vocabulary for all the examples. It is - thus necessary to call the method adapt_target of the AdaptiveSoftMax - layer inside each forward pass. - """ - - target = target.view(-1) - new_target = [target.clone()] - target_idxs = [] - - for i in range(len(self.cutoff) - 1): - mask = target.ge(self.cutoff[i]).mul(target.lt(self.cutoff[i + 1])) - new_target[0][mask] = self.cutoff[0] + i - - if mask.any(): - target_idxs.append(mask.nonzero(as_tuple=False).squeeze(1)) - new_target.append(target[mask].add(-self.cutoff[i])) - else: - target_idxs.append(None) - new_target.append(None) - - return new_target, target_idxs - - def forward(self, input, target): - """ - Args: - input: (b x t x d) - target: (b x t) - Returns: - 2 lists: output for each cutoff section and new targets by cut off - """ - - input = input.contiguous().view(-1, input.size(-1)) - input = self.dropout_module(input) - - new_target, target_idxs = self.adapt_target(target) - output = [self.head(input)] - - for i in range(len(target_idxs)): - if target_idxs[i] is not None: - output.append(self.tail[i](input.index_select(0, target_idxs[i]))) - else: - output.append(None) - - return output, new_target - - def get_log_prob(self, input, target): - """ - Computes the log probabilities for all the words of the vocabulary, - given a 2D tensor of hidden vectors. - """ - - bsz, length, dim = input.size() - input = input.contiguous().view(-1, dim) - - if target is not None: - _, target_idxs = self.adapt_target(target) - else: - target_idxs = None - - head_y = self.head(input) - log_probs = head_y.new_zeros(input.size(0), self.vocab_size) - - head_sz = self.cutoff[0] + len(self.tail) - log_probs[:, :head_sz] = self.lsm(head_y) - tail_priors = log_probs[:, self.cutoff[0] : head_sz].clone() - - for i in range(len(self.tail)): - start = self.cutoff[i] - end = self.cutoff[i + 1] - - if target_idxs is None: - tail_out = log_probs[:, start:end] - tail_out.copy_(self.tail[i](input)) - log_probs[:, start:end] = self.lsm(tail_out).add_( - tail_priors[:, i, None] - ) - elif target_idxs[i] is not None: - idxs = target_idxs[i] - tail_out = log_probs[idxs, start:end] - tail_out.copy_(self.tail[i](input[idxs])) - log_probs[idxs, start:end] = self.lsm(tail_out).add_( - tail_priors[idxs, i, None] - ) - - log_probs = log_probs.view(bsz, length, -1) - return log_probs diff --git a/kosmos-g/fairseq/fairseq/modules/base_layer.py b/kosmos-g/fairseq/fairseq/modules/base_layer.py deleted file mode 100644 index e823f7bae..000000000 --- a/kosmos-g/fairseq/fairseq/modules/base_layer.py +++ /dev/null @@ -1,170 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.nn as nn -import torch -import sys -from fairseq import utils -from fairseq.distributed import utils as distributed_utils -from fairseq.modules.layer_norm import LayerNorm - - -class BaseLayer(nn.Module): - def __init__(self, args): - super().__init__() - self.num_workers = distributed_utils.get_data_parallel_world_size() - expert_centroids = torch.empty(self.num_workers, args.decoder_embed_dim) - torch.nn.init.orthogonal_(expert_centroids, gain=0.1) - self.register_parameter( - "expert_centroids", torch.nn.Parameter(expert_centroids) - ) - self.expert_network = nn.Sequential( - *([BaseSublayer(args) for _ in range(args.base_sublayers)]) - ) - self.expert_id = distributed_utils.get_data_parallel_rank() - self.shuffle = args.base_shuffle - self.cpp = self.load_assignment() - - # Add a special attribute to the expert parameters, so we know not to sync their gradients - for param in self.expert_network.parameters(): - param.expert = True - - def forward(self, input_features, *args, **kwargs): - features = input_features.reshape(-1, input_features.size(-1)) - is_training = input_features.requires_grad - - if self.shuffle and is_training: - # Send each token to a random worker, to break correlations within the batch - shuffle_sort = torch.randperm(features.size(0), device=features.device) - features = All2All.apply(features[shuffle_sort]) - - with torch.no_grad(): - # Compute similarity of each token to each expert, for routing - token_expert_affinities = features.matmul( - self.expert_centroids.transpose(0, 1) - ) - - # Compute which token goes to which expert - sort_by_expert, input_splits, output_splits = ( - self.balanced_assignment(token_expert_affinities) - if is_training - else self.greedy_assignment(token_expert_affinities) - ) - # Swap these tokens for the right ones for our expert - routed_features = All2All.apply( - features[sort_by_expert], output_splits, input_splits - ) - - if routed_features.size(0) > 0: - # Mix in the expert network based on how appropriate it is for these tokens - alpha = torch.sigmoid( - routed_features.mv(self.expert_centroids[self.expert_id]) - ).unsqueeze(1) - routed_features = ( - alpha * self.expert_network(routed_features) - + (1 - alpha) * routed_features - ) - # Return to original worker and ordering - result = All2All.apply(routed_features, input_splits, output_splits)[ - self.inverse_sort(sort_by_expert) - ] - - if self.shuffle and is_training: - # Undo shuffling - result = All2All.apply(result)[self.inverse_sort(shuffle_sort)] - - # Return additional Nones for compatibility with TransformerDecoderLayer - return result.view(input_features.size()), None, None - - def inverse_sort(self, order): - # Creates an index that undoes a sort: xs==xs[order][inverse_sort(order)] - return torch.empty_like(order).scatter_( - 0, order, torch.arange(0, order.size(0), device=order.device) - ) - - def balanced_assignment(self, scores): - ok = scores.isfinite() - if not ok.all(): - # NaNs here can break the assignment algorithm - scores[~ok] = scores[ok].min() - return self.cpp.balanced_assignment(scores), None, None - - # Assigns each token to the top k experts - def greedy_assignment(self, scores, k=1): - token_to_workers = torch.topk(scores, dim=1, k=k, largest=True).indices.view(-1) - token_to_workers, sort_ordering = torch.sort(token_to_workers) - worker2token = sort_ordering // k - - # Find how many tokens we're sending 
to each other worker (being careful for sending 0 tokens to some workers) - output_splits = torch.zeros( - (self.num_workers,), dtype=torch.long, device=scores.device - ) - workers, counts = torch.unique_consecutive(token_to_workers, return_counts=True) - output_splits[workers] = counts - # Tell other workers how many tokens to expect from us - input_splits = All2All.apply(output_splits) - return worker2token, input_splits.tolist(), output_splits.tolist() - - def load_assignment(self): - try: - from fairseq import libbase - - return libbase - - except ImportError as e: - sys.stderr.write( - "ERROR: missing libbase. run `python setup.py build_ext --inplace`\n" - ) - raise e - - -class BaseSublayer(nn.Module): - def __init__(self, args): - super().__init__() - self.activation_fn = utils.get_activation_fn( - activation=getattr(args, "activation_fn", "relu") or "relu" - ) - self.norm = LayerNorm(args.decoder_embed_dim, export=False) - self.ff1 = torch.nn.Linear(args.decoder_embed_dim, args.decoder_ffn_embed_dim) - self.ff2 = torch.nn.Linear(args.decoder_ffn_embed_dim, args.decoder_embed_dim) - self.ff2.weight.data.zero_() - - def forward(self, xs): - return xs + self.ff2(self.activation_fn(self.ff1(self.norm(xs)))) - - -# Wraps torch.distributed.all_to_all_single as a function that supports autograd -class All2All(torch.autograd.Function): - @staticmethod - def forward(ctx, xs, input_splits=None, output_splits=None): - ctx.input_splits = input_splits - ctx.output_splits = output_splits - - ys = ( - torch.empty_like(xs) - if output_splits is None - else xs.new_empty(size=[sum(output_splits)] + list(xs.size()[1:])) - ) - torch.distributed.all_to_all_single( - ys, xs, output_split_sizes=output_splits, input_split_sizes=input_splits - ) - return ys - - @staticmethod - def backward(ctx, grad_output): - result = ( - torch.empty_like(grad_output) - if ctx.input_splits is None - else grad_output.new_empty( - size=[sum(ctx.input_splits)] + list(grad_output.size()[1:]) - ) - ) - torch.distributed.all_to_all_single( - result, - grad_output, - output_split_sizes=ctx.input_splits, - input_split_sizes=ctx.output_splits, - ) - return result, None, None diff --git a/kosmos-g/fairseq/fairseq/modules/beamable_mm.py b/kosmos-g/fairseq/fairseq/modules/beamable_mm.py deleted file mode 100644 index eff1a4607..000000000 --- a/kosmos-g/fairseq/fairseq/modules/beamable_mm.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn - - -class BeamableMM(nn.Module): - """This module provides an optimized MM for beam decoding with attention. - - It leverage the fact that the source-side of the input is replicated beam - times and the target-side of the input is of width one. This layer speeds up - inference by replacing the inputs {(bsz x 1 x nhu), (bsz x sz2 x nhu)} - with smaller inputs {(bsz/beam x beam x nhu), (bsz/beam x sz2 x nhu)}. 
- """ - - def __init__(self, beam_size=None): - super(BeamableMM, self).__init__() - self.beam_size = beam_size - - def forward(self, input1, input2): - if ( - not self.training - and self.beam_size is not None # test mode - and input1.dim() == 3 # beam size is set - and input1.size(1) # only support batched input - == 1 # single time step update - ): - bsz, beam = input1.size(0), self.beam_size - - # bsz x 1 x nhu --> bsz/beam x beam x nhu - input1 = input1[:, 0, :].unfold(0, beam, beam).transpose(2, 1) - - # bsz x sz2 x nhu --> bsz/beam x sz2 x nhu - input2 = input2.unfold(0, beam, beam)[:, :, :, 0] - - # use non batched operation if bsz = beam - if input1.size(0) == 1: - output = torch.mm(input1[0, :, :], input2[0, :, :]) - else: - output = input1.bmm(input2) - return output.view(bsz, 1, -1) - else: - return input1.bmm(input2) - - def set_beam_size(self, beam_size): - self.beam_size = beam_size diff --git a/kosmos-g/fairseq/fairseq/modules/character_token_embedder.py b/kosmos-g/fairseq/fairseq/modules/character_token_embedder.py deleted file mode 100644 index 181221b61..000000000 --- a/kosmos-g/fairseq/fairseq/modules/character_token_embedder.py +++ /dev/null @@ -1,214 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from typing import List, Tuple - -import torch -import torch.nn.functional as F -from fairseq.data import Dictionary -from torch import nn - - -CHAR_PAD_IDX = 0 -CHAR_EOS_IDX = 257 - - -logger = logging.getLogger(__name__) - - -class CharacterTokenEmbedder(torch.nn.Module): - def __init__( - self, - vocab: Dictionary, - filters: List[Tuple[int, int]], - char_embed_dim: int, - word_embed_dim: int, - highway_layers: int, - max_char_len: int = 50, - char_inputs: bool = False, - ): - super(CharacterTokenEmbedder, self).__init__() - - self.onnx_trace = False - self.embedding_dim = word_embed_dim - self.max_char_len = max_char_len - self.char_embeddings = nn.Embedding(257, char_embed_dim, padding_idx=0) - self.symbol_embeddings = nn.Parameter(torch.FloatTensor(2, word_embed_dim)) - self.eos_idx, self.unk_idx = 0, 1 - self.char_inputs = char_inputs - - self.convolutions = nn.ModuleList() - for width, out_c in filters: - self.convolutions.append( - nn.Conv1d(char_embed_dim, out_c, kernel_size=width) - ) - - last_dim = sum(f[1] for f in filters) - - self.highway = Highway(last_dim, highway_layers) if highway_layers > 0 else None - - self.projection = nn.Linear(last_dim, word_embed_dim) - - assert ( - vocab is not None or char_inputs - ), "vocab must be set if not using char inputs" - self.vocab = None - if vocab is not None: - self.set_vocab(vocab, max_char_len) - - self.reset_parameters() - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def set_vocab(self, vocab, max_char_len): - word_to_char = torch.LongTensor(len(vocab), max_char_len) - - truncated = 0 - for i in range(len(vocab)): - if i < vocab.nspecial: - char_idxs = [0] * max_char_len - else: - chars = vocab[i].encode() - # +1 for padding - char_idxs = [c + 1 for c in chars] + [0] * (max_char_len - len(chars)) - if len(char_idxs) > max_char_len: - truncated += 1 - char_idxs = char_idxs[:max_char_len] - word_to_char[i] = torch.LongTensor(char_idxs) - - if truncated > 0: - logger.info( - "truncated {} words longer than {} characters".format( - truncated, max_char_len - ) - ) - - self.vocab = vocab - self.word_to_char = word_to_char - - @property - def 
padding_idx(self): - return Dictionary().pad() if self.vocab is None else self.vocab.pad() - - def reset_parameters(self): - nn.init.xavier_normal_(self.char_embeddings.weight) - nn.init.xavier_normal_(self.symbol_embeddings) - nn.init.xavier_uniform_(self.projection.weight) - - nn.init.constant_( - self.char_embeddings.weight[self.char_embeddings.padding_idx], 0.0 - ) - nn.init.constant_(self.projection.bias, 0.0) - - def forward( - self, - input: torch.Tensor, - ): - if self.char_inputs: - chars = input.view(-1, self.max_char_len) - pads = chars[:, 0].eq(CHAR_PAD_IDX) - eos = chars[:, 0].eq(CHAR_EOS_IDX) - if eos.any(): - if self.onnx_trace: - chars = torch.where(eos.unsqueeze(1), chars.new_zeros(1), chars) - else: - chars[eos] = 0 - - unk = None - else: - flat_words = input.view(-1) - chars = self.word_to_char[flat_words.type_as(self.word_to_char)].type_as( - input - ) - pads = flat_words.eq(self.vocab.pad()) - eos = flat_words.eq(self.vocab.eos()) - unk = flat_words.eq(self.vocab.unk()) - - word_embs = self._convolve(chars) - if self.onnx_trace: - if pads.any(): - word_embs = torch.where( - pads.unsqueeze(1), word_embs.new_zeros(1), word_embs - ) - if eos.any(): - word_embs = torch.where( - eos.unsqueeze(1), self.symbol_embeddings[self.eos_idx], word_embs - ) - if unk is not None and unk.any(): - word_embs = torch.where( - unk.unsqueeze(1), self.symbol_embeddings[self.unk_idx], word_embs - ) - else: - if pads.any(): - word_embs[pads] = 0 - if eos.any(): - word_embs[eos] = self.symbol_embeddings[self.eos_idx] - if unk is not None and unk.any(): - word_embs[unk] = self.symbol_embeddings[self.unk_idx] - - return word_embs.view(input.size()[:2] + (-1,)) - - def _convolve( - self, - char_idxs: torch.Tensor, - ): - char_embs = self.char_embeddings(char_idxs) - char_embs = char_embs.transpose(1, 2) # BTC -> BCT - - conv_result = [] - - for conv in self.convolutions: - x = conv(char_embs) - x, _ = torch.max(x, -1) - x = F.relu(x) - conv_result.append(x) - - x = torch.cat(conv_result, dim=-1) - - if self.highway is not None: - x = self.highway(x) - x = self.projection(x) - - return x - - -class Highway(torch.nn.Module): - """ - A `Highway layer <https://arxiv.org/abs/1505.00387>`_. - Adopted from the AllenNLP implementation. - """ - - def __init__(self, input_dim: int, num_layers: int = 1): - super(Highway, self).__init__() - self.input_dim = input_dim - self.layers = nn.ModuleList( - [nn.Linear(input_dim, input_dim * 2) for _ in range(num_layers)] - ) - self.activation = nn.ReLU() - - self.reset_parameters() - - def reset_parameters(self): - for layer in self.layers: - # As per comment in AllenNLP: - # We should bias the highway layer to just carry its input forward. We do that by - # setting the bias on `B(x)` to be positive, because that means `g` will be biased to - # be high, so we will carry the input forward. The bias on `B(x)` is the second half - # of the bias vector in each Linear layer. 
- nn.init.constant_(layer.bias[self.input_dim :], 1) - - nn.init.constant_(layer.bias[: self.input_dim], 0) - nn.init.xavier_normal_(layer.weight) - - def forward(self, x: torch.Tensor): - for layer in self.layers: - projection = layer(x) - proj_x, gate = projection.chunk(2, dim=-1) - proj_x = self.activation(proj_x) - gate = torch.sigmoid(gate) - x = gate * x + (gate.new_tensor([1]) - gate) * proj_x - return x diff --git a/kosmos-g/fairseq/fairseq/modules/checkpoint_activations.py b/kosmos-g/fairseq/fairseq/modules/checkpoint_activations.py deleted file mode 100644 index aa0b5929a..000000000 --- a/kosmos-g/fairseq/fairseq/modules/checkpoint_activations.py +++ /dev/null @@ -1,242 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import functools -from typing import Any, Dict, List, Tuple, Union - -import torch -import torch.utils.checkpoint as checkpoint -from fairseq import utils - - -def checkpoint_wrapper(m, offload_to_cpu=False): - """ - A friendlier wrapper for performing activation checkpointing. - - Compared to the PyTorch version, this version: - - wraps an nn.Module, so that all subsequent calls will use checkpointing - - handles keyword arguments in the forward - - handles non-Tensor outputs from the forward - - Usage:: - - checkpointed_module = checkpoint_wrapper(my_module, offload_to_cpu=True) - a, b = checkpointed_module(x, y=3, z=torch.Tensor([1])) - """ - # should I check whether original_forward has already been set? - assert not hasattr( - m, "precheckpoint_forward" - ), "checkpoint function has already been applied?" - m.precheckpoint_forward = m.forward - m.forward = functools.partial( - _checkpointed_forward, - m.precheckpoint_forward, # original_forward - offload_to_cpu, - ) - return m - - -def unwrap_checkpoint(m: torch.nn.Module): - """ - unwrap a module and its children from checkpoint_wrapper - """ - for module in m.modules(): - if hasattr(module, "precheckpoint_forward"): - module.forward = module.precheckpoint_forward - del module.precheckpoint_forward - if hasattr(module, "old_deepcopy_method"): - module.__deepcopy__ = module.old_deepcopy_method - del module.old_deepcopy_method - return m - - -def _checkpointed_forward(original_forward, offload_to_cpu, *args, **kwargs): - # Autograd Functions in PyTorch work best with positional args, since - # the backward must return gradients (or None) for every input argument. - # We can flatten keyword arguments to make this easier. 
- kwarg_keys, flat_args = pack_kwargs(*args, **kwargs) - parent_ctx_dict = {"offload": offload_to_cpu} - output = CheckpointFunction.apply( - original_forward, parent_ctx_dict, kwarg_keys, *flat_args - ) - if isinstance(output, torch.Tensor): - return output - else: - packed_non_tensor_outputs = parent_ctx_dict["packed_non_tensor_outputs"] - if packed_non_tensor_outputs: - output = unpack_non_tensors(output, packed_non_tensor_outputs) - return output - - -def pack_kwargs(*args, **kwargs) -> Tuple[List[str], List[Any]]: - """ - Usage:: - - kwarg_keys, flat_args = pack_kwargs(1, 2, a=3, b=4) - args, kwargs = unpack_kwargs(kwarg_keys, flat_args) - assert args == [1, 2] - assert kwargs == {"a": 3, "b": 4} - """ - kwarg_keys = [] - flat_args = list(args) - for k, v in kwargs.items(): - kwarg_keys.append(k) - flat_args.append(v) - return kwarg_keys, flat_args - - -def unpack_kwargs( - kwarg_keys: List[str], flat_args: List[Any] -) -> Tuple[List[Any], Dict[str, Any]]: - if len(kwarg_keys) == 0: - return flat_args, {} - args = flat_args[: -len(kwarg_keys)] - kwargs = {k: v for k, v in zip(kwarg_keys, flat_args[-len(kwarg_keys) :])} - return args, kwargs - - -def split_non_tensors( - mixed: Union[torch.Tensor, Tuple[Any]] -) -> Tuple[Tuple[torch.Tensor], Dict[str, List[Any]]]: - """ - Usage:: - - x = torch.Tensor([1]) - y = torch.Tensor([2]) - tensors, packed_non_tensors = split_non_tensors((x, y, None, 3)) - recon = unpack_non_tensors(tensors, packed_non_tensors) - assert recon == (x, y, None, 3) - """ - if isinstance(mixed, torch.Tensor): - return (mixed,), None - tensors = [] - packed_non_tensors = {"is_tensor": [], "objects": []} - for o in mixed: - if isinstance(o, torch.Tensor): - packed_non_tensors["is_tensor"].append(True) - tensors.append(o) - else: - packed_non_tensors["is_tensor"].append(False) - packed_non_tensors["objects"].append(o) - return tuple(tensors), packed_non_tensors - - -def unpack_non_tensors( - tensors: Tuple[torch.Tensor], - packed_non_tensors: Dict[str, List[Any]], -) -> Tuple[Any]: - if packed_non_tensors is None: - return tensors - assert isinstance(packed_non_tensors, dict) - mixed = [] - is_tensor_list = packed_non_tensors["is_tensor"] - objects = packed_non_tensors["objects"] - assert len(tensors) + len(objects) == len(is_tensor_list) - obj_i = tnsr_i = 0 - for is_tensor in is_tensor_list: - if is_tensor: - mixed.append(tensors[tnsr_i]) - tnsr_i += 1 - else: - mixed.append(objects[obj_i]) - obj_i += 1 - return tuple(mixed) - - -class CheckpointFunction(torch.autograd.Function): - """Similar to the torch version, but support non-Tensor outputs. - - The caller is expected to provide a dict (*parent_ctx_dict*) that will hold - the non-Tensor outputs. These should be combined with the Tensor *outputs* - by calling ``unpack_non_tensors``. 
- """ - - @staticmethod - def forward(ctx, run_function, parent_ctx_dict, kwarg_keys, *args): - if torch.is_grad_enabled(): # grad may be disabled, e.g., during validation - checkpoint.check_backward_validity(args) - - ctx.run_function = run_function - ctx.kwarg_keys = kwarg_keys - ctx.fwd_rng_state = utils.get_rng_state() - - tensor_inputs, packed_non_tensor_inputs = split_non_tensors(args) - if parent_ctx_dict["offload"]: - ctx.fwd_device = tuple(x.device for x in tensor_inputs) - ctx.grad_requirements = tuple(x.requires_grad for x in tensor_inputs) - tensor_inputs = tuple( - x.to(torch.device("cpu"), non_blocking=True) for x in tensor_inputs - ) - - else: - ctx.fwd_device, ctx.grad_requirements = None, None - - ctx.save_for_backward(*tensor_inputs) - ctx.packed_non_tensor_inputs = packed_non_tensor_inputs - - with torch.no_grad(): - unpacked_args, unpacked_kwargs = unpack_kwargs(kwarg_keys, args) - outputs = run_function(*unpacked_args, **unpacked_kwargs) - - if isinstance(outputs, torch.Tensor): - return outputs - else: - # Autograd Functions don't like non-Tensor outputs. We can split the - # non-Tensor and Tensor outputs, returning the former by reference - # through *parent_ctx_dict* and returning the latter directly. - outputs, packed_non_tensor_outputs = split_non_tensors(outputs) - parent_ctx_dict["packed_non_tensor_outputs"] = packed_non_tensor_outputs - return outputs - - @staticmethod - def backward(ctx, *args): - if not torch.autograd._is_checkpoint_valid(): - raise RuntimeError( - "Checkpointing is not compatible with .grad(), please use .backward() if possible" - ) - - tensor_inputs: Tuple = ctx.saved_tensors - tensor_inputs = checkpoint.detach_variable(tensor_inputs) - if ctx.fwd_device is not None: - tensor_inputs = [ - t.to(ctx.fwd_device[i], non_blocking=True) - for i, t in enumerate(tensor_inputs) - ] - for i, need_grad in enumerate(ctx.grad_requirements): - tensor_inputs[i].requires_grad = need_grad - inputs = unpack_non_tensors(tensor_inputs, ctx.packed_non_tensor_inputs) - - # Store the current states. - bwd_rng_state = utils.get_rng_state() - - # Set the states to what it used to be before the forward pass. - utils.set_rng_state(ctx.fwd_rng_state) - - with torch.enable_grad(): - unpacked_args, unpacked_kwargs = unpack_kwargs(ctx.kwarg_keys, inputs) - outputs = ctx.run_function(*unpacked_args, **unpacked_kwargs) - tensor_outputs, _ = split_non_tensors(outputs) - # Set the states back to what it was at the start of this function. - utils.set_rng_state(bwd_rng_state) - - # Run backward() with only Tensors that require grad - outputs_with_grad = [] - args_with_grad = [] - for i in range(len(tensor_outputs)): - if tensor_outputs[i].requires_grad: - outputs_with_grad.append(tensor_outputs[i]) - args_with_grad.append(args[i]) - if len(outputs_with_grad) == 0: - raise RuntimeError( - "None of the outputs have requires_grad=True, " - "this checkpoint() is not necessary" - ) - - torch.autograd.backward(outputs_with_grad, args_with_grad) - - grads = tuple( - inp.grad if isinstance(inp, torch.Tensor) else None for inp in inputs - ) - return (None, None, None) + grads diff --git a/kosmos-g/fairseq/fairseq/modules/conformer_layer.py b/kosmos-g/fairseq/fairseq/modules/conformer_layer.py deleted file mode 100644 index f749eb4ff..000000000 --- a/kosmos-g/fairseq/fairseq/modules/conformer_layer.py +++ /dev/null @@ -1,296 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-
-import torch
-from typing import Optional
-from fairseq.modules import (
-    LayerNorm,
-    MultiheadAttention,
-    ESPNETMultiHeadedAttention,
-    RelPositionMultiHeadedAttention,
-    RotaryPositionMultiHeadedAttention,
-)
-from fairseq.utils import get_activation_fn
-
-
-class ConvolutionModule(torch.nn.Module):
-    """Convolution block used in the conformer block"""
-
-    def __init__(
-        self,
-        embed_dim,
-        channels,
-        depthwise_kernel_size,
-        dropout,
-        activation_fn="swish",
-        bias=False,
-        export=False,
-    ):
-        """
-        Args:
-            embed_dim: Embedding dimension
-            channels: Number of channels in depthwise conv layers
-            depthwise_kernel_size: Depthwise conv layer kernel size
-            dropout: dropout value
-            activation_fn: Activation function to use after depthwise convolution kernel
-            bias: If bias should be added to conv layers
-            export: If layernorm should be exported to jit
-        """
-        super(ConvolutionModule, self).__init__()
-        assert (
-            depthwise_kernel_size - 1
-        ) % 2 == 0, "kernel_size should be an odd number for 'SAME' padding"
-        self.layer_norm = LayerNorm(embed_dim, export=export)
-        self.pointwise_conv1 = torch.nn.Conv1d(
-            embed_dim,
-            2 * channels,
-            kernel_size=1,
-            stride=1,
-            padding=0,
-            bias=bias,
-        )
-        self.glu = torch.nn.GLU(dim=1)
-        self.depthwise_conv = torch.nn.Conv1d(
-            channels,
-            channels,
-            depthwise_kernel_size,
-            stride=1,
-            padding=(depthwise_kernel_size - 1) // 2,
-            groups=channels,
-            bias=bias,
-        )
-        self.batch_norm = torch.nn.BatchNorm1d(channels)
-        self.activation = get_activation_fn(activation_fn)(channels)
-        self.pointwise_conv2 = torch.nn.Conv1d(
-            channels,
-            embed_dim,
-            kernel_size=1,
-            stride=1,
-            padding=0,
-            bias=bias,
-        )
-        self.dropout = torch.nn.Dropout(dropout)
-
-    def forward(self, x):
-        """
-        Args:
-            x: Input of shape B X T X C
-        Returns:
-            Tensor of shape B X T X C
-        """
-        x = self.layer_norm(x)
-        # exchange the temporal dimension and the feature dimension
-        x = x.transpose(1, 2)
-
-        # GLU mechanism
-        x = self.pointwise_conv1(x)  # (batch, 2*channel, time)
-        x = self.glu(x)  # (batch, channel, time)
-
-        # 1D Depthwise Conv
-        x = self.depthwise_conv(x)
-        x = self.batch_norm(x)
-        x = self.activation(x)
-
-        x = self.pointwise_conv2(x)
-        x = self.dropout(x)
-        return x.transpose(1, 2)
-
-
-class FeedForwardModule(torch.nn.Module):
-    """Positionwise feed forward layer used in conformer"""
-
-    def __init__(
-        self,
-        input_feat,
-        hidden_units,
-        dropout1,
-        dropout2,
-        activation_fn="swish",
-        bias=True,
-    ):
-        """
-        Args:
-            input_feat: Input feature dimension
-            hidden_units: Hidden unit dimension
-            dropout1: dropout value for layer1
-            dropout2: dropout value for layer2
-            activation_fn: Name of activation function
-            bias: If linear layers should have bias
-        """
-
-        super(FeedForwardModule, self).__init__()
-        self.layer_norm = LayerNorm(input_feat)
-        self.w_1 = torch.nn.Linear(input_feat, hidden_units, bias=bias)
-        self.w_2 = torch.nn.Linear(hidden_units, input_feat, bias=bias)
-        self.dropout1 = torch.nn.Dropout(dropout1)
-        self.dropout2 = torch.nn.Dropout(dropout2)
-        self.activation = get_activation_fn(activation_fn)(hidden_units)
-
-    def forward(self, x):
-        """
-        Args:
-            x: Input Tensor of shape T X B X C
-        Returns:
-            Tensor of shape T X B X C
-        """
-        x = self.layer_norm(x)
-        x = self.w_1(x)
-        x = self.activation(x)
-        x = self.dropout1(x)
-        x = self.w_2(x)
-        return self.dropout2(x)
-
-
-class ConformerEncoderLayer(torch.nn.Module):
"""Conformer block based on https://arxiv.org/abs/2005.08100. We currently don't support relative positional encoding in MHA""" - - def __init__( - self, - embed_dim, - ffn_embed_dim, - attention_heads, - dropout, - use_fp16, - depthwise_conv_kernel_size=31, - activation_fn="swish", - attn_type=None, - pos_enc_type="abs", - ): - """ - Args: - embed_dim: Input embedding dimension - ffn_embed_dim: FFN layer dimension - attention_heads: Number of attention heads in MHA - dropout: dropout value - depthwise_conv_kernel_size: Size of kernel in depthwise conv layer in convolution module - activation_fn: Activation function name to use in convulation block and feed forward block - attn_type: MHA implementation from ESPNET vs fairseq - pos_enc_type: Positional encoding type - abs, rope, rel_pos - """ - self.pos_enc_type = pos_enc_type - super(ConformerEncoderLayer, self).__init__() - - self.ffn1 = FeedForwardModule( - embed_dim, - ffn_embed_dim, - dropout, - dropout, - ) - - self.self_attn_layer_norm = LayerNorm(embed_dim, export=False) - self.self_attn_dropout = torch.nn.Dropout(dropout) - if attn_type == "espnet": - if self.pos_enc_type == "rel_pos": - self.self_attn = RelPositionMultiHeadedAttention( - embed_dim, - attention_heads, - dropout=dropout, - ) - elif self.pos_enc_type == "rope": - self.self_attn = RotaryPositionMultiHeadedAttention( - embed_dim, attention_heads, dropout=dropout, precision=use_fp16 - ) - elif self.pos_enc_type == "abs": - self.self_attn = ESPNETMultiHeadedAttention( - embed_dim, - attention_heads, - dropout=dropout, - ) - else: - raise Exception(f"Unsupported attention type {self.pos_enc_type}") - else: - # Default to fairseq MHA - self.self_attn = MultiheadAttention( - embed_dim, - attention_heads, - dropout=dropout, - ) - - self.conv_module = ConvolutionModule( - embed_dim=embed_dim, - channels=embed_dim, - depthwise_kernel_size=depthwise_conv_kernel_size, - dropout=dropout, - activation_fn=activation_fn, - ) - - self.ffn2 = FeedForwardModule( - embed_dim, - ffn_embed_dim, - dropout, - dropout, - activation_fn=activation_fn, - ) - self.final_layer_norm = LayerNorm(embed_dim, export=False) - - def forward( - self, - x, - encoder_padding_mask: Optional[torch.Tensor], - position_emb: Optional[torch.Tensor] = None, - ): - """ - Args: - x: Tensor of shape T X B X C - encoder_padding_mask: Optional mask tensor - positions: - Returns: - Tensor of shape T X B X C - """ - residual = x - x = self.ffn1(x) - x = x * 0.5 + residual - residual = x - x = self.self_attn_layer_norm(x) - if self.pos_enc_type == "rel_pos": - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=encoder_padding_mask, - pos_emb=position_emb, - need_weights=False, - ) - else: - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=encoder_padding_mask, - need_weights=False, - ) - x = self.self_attn_dropout(x) - x = x + residual - - residual = x - # TBC to BTC - x = x.transpose(0, 1) - x = self.conv_module(x) - # BTC to TBC - x = x.transpose(0, 1) - x = residual + x - - residual = x - x = self.ffn2(x) - x = x * 0.5 + residual - - x = self.final_layer_norm(x) - return x, attn - - -class ConformerWav2Vec2EncoderLayer(ConformerEncoderLayer): - """Encoder layer for Wav2vec2 encoder""" - - def forward( - self, - x: torch.Tensor, - self_attn_mask: torch.Tensor = None, - self_attn_padding_mask: torch.Tensor = None, - need_weights: bool = False, - att_args=None, - position_emb=None, - ): - return super().forward(x, self_attn_padding_mask, position_emb) diff --git 
a/kosmos-g/fairseq/fairseq/modules/conv_tbc.py b/kosmos-g/fairseq/fairseq/modules/conv_tbc.py deleted file mode 100644 index 65e17ec94..000000000 --- a/kosmos-g/fairseq/fairseq/modules/conv_tbc.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from torch import nn -from torch.nn.modules.utils import _single -from torch import Tensor - - -class ConvTBC(torch.nn.Module): - """1D convolution over an input of shape (time x batch x channel) - - The implementation uses gemm to perform the convolution. This implementation - is faster than cuDNN for small kernel sizes. - """ - - def __init__(self, in_channels, out_channels, kernel_size, padding=0): - super(ConvTBC, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _single(kernel_size) - self.padding = _single(padding) - - self.weight = torch.nn.Parameter( - torch.Tensor(self.kernel_size[0], in_channels, out_channels) - ) - self.bias = torch.nn.Parameter(torch.Tensor(out_channels)) - - self.reset_parameters() - - def reset_parameters(self): - nn.init.xavier_normal_(self.weight) - nn.init.zeros_(self.bias) - - def conv_tbc(self, input: Tensor): - return torch.conv_tbc( - input.contiguous(), self.weight, self.bias, self.padding[0] - ) - - def forward(self, input: Tensor): - return self.conv_tbc(input) - - def __repr__(self): - s = ( - "{name}({in_channels}, {out_channels}, kernel_size={kernel_size}" - ", padding={padding}" - ) - if self.bias is None: - s += ", bias=False" - s += ")" - return s.format(name=self.__class__.__name__, **self.__dict__) diff --git a/kosmos-g/fairseq/fairseq/modules/cross_entropy.py b/kosmos-g/fairseq/fairseq/modules/cross_entropy.py deleted file mode 100644 index 286c00eec..000000000 --- a/kosmos-g/fairseq/fairseq/modules/cross_entropy.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
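`ConvTBC` above runs a 1D convolution directly on time-major `(time, batch, channel)` tensors via `torch.conv_tbc`, avoiding the transposes `nn.Conv1d` would need. A quick equivalence sketch (shapes are illustrative; note the two ops use different weight layouts):

```python
import torch
import torch.nn as nn

T, B, C_in, C_out, K = 7, 2, 4, 5, 3
x = torch.randn(T, B, C_in)                  # time x batch x channel
weight = torch.randn(K, C_in, C_out)         # ConvTBC layout: (kernel, in, out)
bias = torch.randn(C_out)

y_tbc = torch.conv_tbc(x.contiguous(), weight, bias, 1)   # pad=1 keeps length T

ref = nn.Conv1d(C_in, C_out, K, padding=1)
with torch.no_grad():
    ref.weight.copy_(weight.permute(2, 1, 0))             # Conv1d layout: (out, in, kernel)
    ref.bias.copy_(bias)
y_ref = ref(x.permute(1, 2, 0)).permute(2, 0, 1)          # back to T x B x C

assert torch.allclose(y_tbc, y_ref, atol=1e-5)
```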
-
-import logging
-
-import torch
-import torch.nn.functional as F
-
-logger = logging.getLogger(__name__)
-
-
-def _cross_entropy_pytorch(logits, target, ignore_index=None, reduction="mean"):
-    lprobs = F.log_softmax(logits, dim=-1, dtype=torch.float32)
-    return F.nll_loss(
-        lprobs,
-        target,
-        ignore_index=ignore_index,
-        reduction=reduction,
-    )
-
-
-try:
-    import xentropy_cuda
-    from apex.contrib import xentropy
-
-    def cross_entropy(logits, target, ignore_index=-100, reduction="mean"):
-        if logits.device == torch.device("cpu"):
-            return _cross_entropy_pytorch(logits, target, ignore_index, reduction)
-        else:
-            if not getattr(cross_entropy, "_has_logged_once", False):
-                logger.info("using fused cross entropy")
-                cross_entropy._has_logged_once = True
-
-            half_to_float = logits.dtype == torch.half
-            losses = xentropy.SoftmaxCrossEntropyLoss.apply(
-                logits,
-                target,
-                0.0,
-                ignore_index,
-                half_to_float,
-            )
-            if reduction == "sum":
-                return losses.sum()
-            elif reduction == "mean":
-                if ignore_index >= 0:
-                    return losses.sum() / target.ne(ignore_index).sum()
-                else:
-                    return losses.mean()
-            elif reduction == "none":
-                return losses
-            else:
-                raise NotImplementedError
-
-except ImportError:
-
-    def cross_entropy(logits, target, ignore_index=-100, reduction="mean"):
-        return _cross_entropy_pytorch(logits, target, ignore_index, reduction)
diff --git a/kosmos-g/fairseq/fairseq/modules/cuda_utils.cu b/kosmos-g/fairseq/fairseq/modules/cuda_utils.cu deleted file mode 100644 index 924f85275..000000000 --- a/kosmos-g/fairseq/fairseq/modules/cuda_utils.cu +++ /dev/null @@ -1,202 +0,0 @@ -/**
- * Copyright (c) Facebook, Inc. and its affiliates.
- *
- * This source code is licensed under the MIT license found in the
- * LICENSE file in the root directory of this source tree.
- */
-
-template <typename U, typename V>
-constexpr __host__ __device__ auto divUp(U a, V b) -> decltype(a + b) {
-  return (a + b - 1) / b;
-}
-
-template <int FS, int SB, int padding_l, typename scalar_t>
-__inline__ __device__ void zeroSharedMem(scalar_t* data) {
-  /*
-  Given an array of length FS + SB, zero out the first padding_l and last
-  (FS - padding_l) values in the array
-  */
-
-  int tid = threadIdx.x;
-
-  if (FS < SB) {
-    // zero all if we have enough threads in a block to do all of them
-    if (tid < padding_l || tid > SB - FS + padding_l - 1) {
-      data[tid] = scalar_t(0.0);
-    }
-  } else {
-    // otherwise zero out one block at a time
-    const int numIterations = divUp<int, int>(FS, SB);
-    for (int i = 0; i < numIterations; i++) {
-      int offset = i * SB;
-      if (tid + offset < padding_l) {
-        data[tid + offset] = scalar_t(0.0);
-      } else if (tid + offset < FS) {
-        data[SB + tid + offset] = scalar_t(0.0);
-      }
-    }
-  }
-}
-
-template <typename scalar_t>
-__inline__ __device__ scalar_t warpReduce(scalar_t data) {
-  /*
-  Reduce an array within each warp. After processing, all values in the warp
-  will contain the sum of all original values in that warp.
-
-  data - the value to reduce
-  */
-  data += __shfl_xor_sync(SHFL_MASK, data, 16);
-  data += __shfl_xor_sync(SHFL_MASK, data, 8);
-  data += __shfl_xor_sync(SHFL_MASK, data, 4);
-  data += __shfl_xor_sync(SHFL_MASK, data, 2);
-  data += __shfl_xor_sync(SHFL_MASK, data, 1);
-  return data;
-}
-
-template <typename scalar_t>
-__inline__ __device__ scalar_t blockReduce(scalar_t data) {
-  /*
-  Reduce an entire array on the block level. After processing, the
-  first value in the array will contain the reduced sum.
- - data - pointer to data to reduce - */ - - static __shared__ scalar_t warpSum[32]; - const int tid = threadIdx.x; - int wid = tid / 32; - int lane = tid % 32; - - __syncthreads(); - - // reduce each warp then write to shared memory - scalar_t sum = warpReduce(data); - if (lane == 0) { - warpSum[wid] = sum; - } - - __syncthreads(); - - scalar_t v; - // perform final sum of partial warp sums - if (tid < blockDim.x / 32) { - v = warpSum[lane]; - } else { - v = scalar_t(0.0); - } - - if (wid == 0) { - v = warpReduce(v); - } - __syncthreads(); - - return v; -} - -void checkCudaStatus(cudaError_t status, int lineNumber = -1) { - if (status != cudaSuccess) { - std::cout << cudaGetErrorString(status) << " at line " << lineNumber - << std::endl; - std::cout << "Exiting" << std::endl; - exit(1); - } -} - -template <int FS, int SB, int padding_l, typename scalar_t> -__device__ void load_input_to_shared( - const scalar_t* input, // global memory - int inputOffset, - int sequenceLength, - int iteration, - int numIterations, - bool no_prev, - scalar_t* output /* shared memory */) { - /* - Load a block size of input into shared memory with - right and left overhang of total size FS. If previously - loaded memory, overlap will be shifted over to reduce - global memory access - - input - pointer to start of channel sequence - inputOffset - how far in the sequence to start loading - sequenceLength - total length of sequence - iteration - which block of sequence we are loading - numIterations - total number of blocks to load - no_prev - whether to load the whole block if the previous block - wasn't loaded - output - shared memory to write input to - */ - - const int tid = threadIdx.x; - - // Load the left "overhang" of input - if (iteration > 0) { - if (padding_l < SB) { - // load all at once - if (tid < padding_l) { - output[tid] = - (no_prev) ? input[inputOffset - padding_l + tid] : output[tid + SB]; - } - } else { - // load in chunks of size SB - int numIterations = divUp<int, int>(padding_l, SB); - for (int i = 0; i < numIterations; i++) { - int offset = i * SB; - if ((tid + offset) < padding_l) { - output[tid + offset] = (no_prev) - ? input[inputOffset - padding_l + tid + offset] - : output[tid + offset + SB]; - } - } - } - } - - // Load the right "overhang" of input - if (iteration < (numIterations - 1)) { - const int elementsLeft = sequenceLength - (iteration + 1) * SB; - - if ((FS - padding_l) < SB) { - // load all at once - if (tid < (FS - padding_l)) { - output[padding_l + SB + tid] = (tid < elementsLeft) - ? input[inputOffset + SB + tid] - : scalar_t(0.0); - } - } else { - // load in chunks of size SB - int numIterations = divUp<int, int>(FS - padding_l, SB); - for (int i = 0; i < numIterations; i++) { - int offset = i * SB; - if ((tid + offset) < (FS - padding_l)) { - output[padding_l + SB + tid + offset] = - ((tid + offset) < elementsLeft) - ? 
input[inputOffset + SB + tid + offset] - : scalar_t(0.0); - } - } - } - } - - // We should also clear out the right "overhang" - if (iteration == (numIterations - 1)) { - if ((FS - padding_l) < SB) { - // clear out all at once - if (tid < (FS - padding_l)) { - output[padding_l + SB + tid] = scalar_t(0.0); - } - } else { - // clear in chunks of size SB - int numIterations = divUp<int, int>(FS - padding_l, SB); - for (int i = 0; i < numIterations; i++) { - int offset = i * SB; - if ((tid + offset) < (FS - padding_l)) { - output[padding_l + SB + tid + offset] = scalar_t(0.0); - } - } - } - } - output[tid + padding_l] = ((inputOffset + tid) < sequenceLength) - ? input[inputOffset + tid] - : scalar_t(0.0); -} diff --git a/kosmos-g/fairseq/fairseq/modules/downsampled_multihead_attention.py b/kosmos-g/fairseq/fairseq/modules/downsampled_multihead_attention.py deleted file mode 100644 index 2cdece3f7..000000000 --- a/kosmos-g/fairseq/fairseq/modules/downsampled_multihead_attention.py +++ /dev/null @@ -1,316 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -# - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.scalar_bias import scalar_bias - - -class SingleHeadAttention(nn.Module): - """ - Single-head attention that supports Gating and Downsampling - """ - - def __init__( - self, - out_channels, - embed_dim, - head_dim, - head_index, - dropout=0.0, - bias=True, - project_input=True, - gated=False, - downsample=False, - num_heads=1, - ): - super().__init__() - self.embed_dim = embed_dim - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.head_index = head_index - self.head_dim = head_dim - self.project_input = project_input - self.gated = gated - self.downsample = downsample - self.num_heads = num_heads - self.projection = None - - k_layers = [] - v_layers = [] - if self.downsample: - k_layers.append(Downsample(self.head_index)) - v_layers.append(Downsample(self.head_index)) - out_proj_size = self.head_dim - else: - out_proj_size = self.head_dim * self.num_heads - if self.gated: - k_layers.append(GatedLinear(self.embed_dim, out_proj_size, bias=bias)) - self.in_proj_q = GatedLinear(self.embed_dim, out_proj_size, bias=bias) - v_layers.append(GatedLinear(self.embed_dim, out_proj_size, bias=bias)) - else: - k_layers.append(Linear(self.embed_dim, out_proj_size, bias=bias)) - self.in_proj_q = Linear(self.embed_dim, out_proj_size, bias=bias) - v_layers.append(Linear(self.embed_dim, out_proj_size, bias=bias)) - - self.in_proj_k = nn.Sequential(*k_layers) - self.in_proj_v = nn.Sequential(*v_layers) - - if self.downsample: - self.out_proj = Linear(out_proj_size, self.head_dim, bias=bias) - else: - self.out_proj = Linear(out_proj_size, out_channels, bias=bias) - - self.scaling = self.head_dim ** -0.5 - - def forward( - self, - query, - key, - value, - mask_future_timesteps=False, - key_padding_mask=None, - use_scalar_bias=False, - ): - """Input shape: Time x Batch x Channel - Self-attention can be implemented by passing in the same arguments for - query, key and value. Future timesteps can be masked with the - `mask_future_timesteps` argument. 
Padding elements can be excluded from - the key by passing a binary ByteTensor (`key_padding_mask`) with shape: - batch x src_len, where padding elements are indicated by 1s. - """ - src_len, bsz, out_channels = key.size() - tgt_len = query.size(0) - assert list(query.size()) == [tgt_len, bsz, out_channels] - assert key.size() == value.size() - - if key_padding_mask is not None: - assert key_padding_mask.size(0) == bsz - assert key_padding_mask.size(1) == src_len - - if self.downsample: - size = bsz - else: - size = bsz * self.num_heads - - k = key - v = value - q = query - if self.project_input: - q = self.in_proj_q(q) - k = self.in_proj_k(k) - v = self.in_proj_v(v) - src_len = k.size()[0] - q *= self.scaling - - if not self.downsample: - q = q.view(tgt_len, size, self.head_dim) - k = k.view(src_len, size, self.head_dim) - v = v.view(src_len, size, self.head_dim) - - q = q.transpose(0, 1) - k = k.transpose(0, 1) - v = v.transpose(0, 1) - - attn_weights = torch.bmm(q, k.transpose(1, 2)) - if mask_future_timesteps: - assert ( - query.size() == key.size() - ), "mask_future_timesteps only applies to self-attention" - attn_weights *= torch.tril( - attn_weights.data.new([1]).expand(tgt_len, tgt_len).clone(), - diagonal=-1, - )[:, :: self.head_index + 1 if self.downsample else 1].unsqueeze(0) - attn_weights += torch.triu( - attn_weights.data.new([-math.inf]).expand(tgt_len, tgt_len).clone(), - diagonal=0, - )[:, :: self.head_index + 1 if self.downsample else 1].unsqueeze(0) - tgt_size = tgt_len - if use_scalar_bias: - attn_weights = scalar_bias(attn_weights, 2) - v = scalar_bias(v, 1) - tgt_size += 1 - - if key_padding_mask is not None: - # don't attend to padding symbols - if key_padding_mask.max() > 0: - if self.downsample: - attn_weights = attn_weights.view(bsz, 1, tgt_len, src_len) - else: - attn_weights = attn_weights.view( - size, self.num_heads, tgt_len, src_len - ) - attn_weights = attn_weights.masked_fill( - key_padding_mask.unsqueeze(1).unsqueeze(2), - -math.inf, - ) - attn_weights = attn_weights.view(size, tgt_len, src_len) - attn_weights = F.softmax(attn_weights, dim=-1) - attn_weights = self.dropout_module(attn_weights) - - attn = torch.bmm(attn_weights, v) - if self.downsample: - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, self.head_dim) - else: - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, self.embed_dim) - - attn = self.out_proj(attn) - - return attn, attn_weights - - -class DownsampledMultiHeadAttention(nn.ModuleList): - """ - Multi-headed attention with Gating and Downsampling - """ - - def __init__( - self, - out_channels, - embed_dim, - num_heads, - dropout=0.0, - bias=True, - project_input=True, - gated=False, - downsample=False, - ): - self.embed_dim = embed_dim - self.num_heads = num_heads - self.head_dim = embed_dim // num_heads - self.downsample = downsample - self.gated = gated - self.project_input = project_input - assert self.head_dim * num_heads == embed_dim - - if self.downsample: - attention_heads = [] - for index in range(self.num_heads): - attention_heads.append( - SingleHeadAttention( - out_channels, - self.embed_dim, - self.head_dim, - index, - dropout, - bias, - self.project_input, - self.gated, - self.downsample, - self.num_heads, - ) - ) - super().__init__(modules=attention_heads) - self.out_proj = Linear(embed_dim, out_channels, bias=bias) - else: - # either we have a list of attention heads, or just one attention head - # if not being downsampled, we can do the heads with one linear layer instead of separate ones - 
super().__init__() - self.attention_module = SingleHeadAttention( - out_channels, - self.embed_dim, - self.head_dim, - 1, - dropout, - bias, - self.project_input, - self.gated, - self.downsample, - self.num_heads, - ) - - def forward( - self, - query, - key, - value, - mask_future_timesteps=False, - key_padding_mask=None, - use_scalar_bias=False, - ): - src_len, bsz, embed_dim = key.size() - tgt_len = query.size(0) - assert embed_dim == self.embed_dim - assert list(query.size()) == [tgt_len, bsz, embed_dim] - assert key.size() == value.size() - - tgt_size = tgt_len - if use_scalar_bias: - tgt_size += 1 - - attn = [] - attn_weights = [] - if self.downsample: - for attention_head_number in range(self.num_heads): - # call the forward of each attention head - _attn, _attn_weight = self[attention_head_number]( - query, - key, - value, - mask_future_timesteps, - key_padding_mask, - use_scalar_bias, - ) - attn.append(_attn) - attn_weights.append(_attn_weight) - full_attn = torch.cat(attn, dim=2) - full_attn = self.out_proj(full_attn) - return full_attn, attn_weights[0].clone() - else: - _attn, _attn_weight = self.attention_module( - query, - key, - value, - mask_future_timesteps, - key_padding_mask, - use_scalar_bias, - ) - attn.append(_attn) - attn_weights.append(_attn_weight) - full_attn = torch.cat(attn, dim=2) - full_attn_weights = torch.cat(attn_weights) - full_attn_weights = full_attn_weights.view( - bsz, self.num_heads, tgt_size, src_len - ) - full_attn_weights = full_attn_weights.sum(dim=1) / self.num_heads - return full_attn, full_attn_weights - - -class Downsample(nn.Module): - """ - Selects every nth element, where n is the index - """ - - def __init__(self, index): - super().__init__() - self.index = index - - def forward(self, x): - return x[:: self.index + 1] - - -def Linear(in_features, out_features, dropout=0.0, bias=True): - """Weight-normalized Linear layer (input: B x T x C)""" - m = nn.Linear(in_features, out_features, bias=bias) - m.weight.data.normal_(mean=0, std=math.sqrt((1 - dropout) / in_features)) - m.bias.data.zero_() - return nn.utils.weight_norm(m) - - -def GatedLinear(in_features, out_features, dropout=0.0, bias=True): - """Weight-normalized Linear layer (input: B x T x C) with interspersed GLU units""" - return nn.Sequential( - Linear(in_features, out_features * 4, dropout, bias), - nn.GLU(), - Linear(out_features * 2, out_features * 2, dropout, bias), - nn.GLU(), - Linear(out_features, out_features, dropout, bias), - ) diff --git a/kosmos-g/fairseq/fairseq/modules/dynamic_convolution.py b/kosmos-g/fairseq/fairseq/modules/dynamic_convolution.py deleted file mode 100644 index 0121d453b..000000000 --- a/kosmos-g/fairseq/fairseq/modules/dynamic_convolution.py +++ /dev/null @@ -1,310 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
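The file that begins here defines the `DynamicConv` factory and the `DynamicConv1dTBC` fallback. For orientation, a minimal, hypothetical usage sketch of `DynamicConv1dTBC` (sizes are illustrative; the call follows the T x B x C convention documented in the class docstring):

```python
import torch
from fairseq.modules.dynamic_convolution import DynamicConv1dTBC

# illustrative sizes: T x B x C input, as documented in the class below
T, B, C = 10, 2, 64
conv = DynamicConv1dTBC(
    input_size=C,
    kernel_size=3,
    padding_l=2,          # kernel_size - 1 => causal ("left") padding
    num_heads=8,
    weight_softmax=True,  # normalize each position's filter with softmax
)
x = torch.randn(T, B, C)
y = conv(x)               # output keeps the T x B x C shape
assert y.shape == x.shape
```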
- -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout - -from .unfold import unfold1d - - -def DynamicConv( - input_size, - kernel_size=1, - padding_l=None, - num_heads=1, - weight_dropout=0.0, - weight_softmax=False, - renorm_padding=False, - bias=False, - conv_bias=False, - query_size=None, - in_proj=False, -): - if torch.cuda.is_available(): - try: - from fairseq.modules.dynamicconv_layer import DynamicconvLayer - - return DynamicconvLayer( - input_size, - kernel_size=kernel_size, - padding_l=padding_l, - num_heads=num_heads, - weight_dropout=weight_dropout, - weight_softmax=weight_softmax, - renorm_padding=renorm_padding, - bias=bias, - conv_bias=conv_bias, - query_size=query_size, - ) - except ImportError as e: - print(e) - return DynamicConv1dTBC( - input_size, - kernel_size=kernel_size, - padding_l=padding_l, - num_heads=num_heads, - weight_dropout=weight_dropout, - weight_softmax=weight_softmax, - renorm_padding=renorm_padding, - bias=bias, - conv_bias=conv_bias, - query_size=query_size, - ) - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m - - -@with_incremental_state -class DynamicConv1dTBC(nn.Module): - """Dynamic lightweight convolution taking T x B x C inputs - Args: - input_size: # of channels of the input - kernel_size: size of the convolution kernel - padding_l: padding to the left when using "same" padding - num_heads: number of heads used. The weight is of shape (num_heads, 1, kernel_size) - weight_dropout: the drop rate of the DropConnect to drop the weight - weight_softmax: normalize the weight with softmax before the convolution - renorm_padding: re-normalize the filters to ignore the padded part (only the non-padding parts sum up to 1) - bias: use bias - conv_bias: bias of the convolution - query_size: specified when feeding a different input as the query - in_proj: project the input and generate the filter together - - Shape: - Input: TxBxC, i.e. (timesteps, batch_size, input_size) - Output: TxBxC, i.e.
(timesteps, batch_size, input_size) - - Attributes: - weight: the learnable weights of the module of shape - `(num_heads, 1, kernel_size)` - bias: the learnable bias of the module of shape `(input_size)` - """ - - def __init__( - self, - input_size, - kernel_size=1, - padding_l=None, - num_heads=1, - weight_dropout=0.0, - weight_softmax=False, - renorm_padding=False, - bias=False, - conv_bias=False, - query_size=None, - in_proj=False, - ): - super().__init__() - self.input_size = input_size - self.query_size = input_size if query_size is None else query_size - self.kernel_size = kernel_size - self.padding_l = padding_l - self.num_heads = num_heads - self.weight_dropout_module = FairseqDropout( - weight_dropout, module_name=self.__class__.__name__ - ) - self.weight_softmax = weight_softmax - self.renorm_padding = renorm_padding - - if in_proj: - self.weight_linear = Linear( - self.input_size, self.input_size + num_heads * kernel_size * 1 - ) - else: - self.weight_linear = Linear( - self.query_size, num_heads * kernel_size * 1, bias=bias - ) - if conv_bias: - self.conv_bias = nn.Parameter(torch.Tensor(input_size)) - else: - self.conv_bias = None - self.reset_parameters() - - @property - def in_proj(self): - return ( - self.weight_linear.out_features - == self.input_size + self.num_heads * self.kernel_size - ) - - def reset_parameters(self): - self.weight_linear.reset_parameters() - if self.conv_bias is not None: - nn.init.constant_(self.conv_bias, 0.0) - - def forward(self, x, incremental_state=None, query=None, unfold=None): - """Assuming the input, x, of the shape T x B x C and producing an output in the shape T x B x C - args: - x: Input of shape T x B x C, i.e. (timesteps, batch_size, input_size) - incremental_state: A dict to keep the state - unfold: unfold the input or not. If not, we use the matrix trick instead - query: use the specified query to predict the conv filters - """ - unfold = ( - x.size(0) > 512 if unfold is None else unfold - ) # use unfold mode as default for long sequence to save memory - unfold = unfold or (incremental_state is not None) - assert query is None or not self.in_proj - - if query is None: - query = x - if unfold: - output = self._forward_unfolded(x, incremental_state, query) - else: - output = self._forward_expanded(x, incremental_state, query) - - if self.conv_bias is not None: - output = output + self.conv_bias.view(1, 1, -1) - return output - - def _forward_unfolded(self, x, incremental_state, query): - """The conventional implementation of convolutions. 
- Unfolding the input by having a window shifting to the right.""" - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - assert R * H == C == self.input_size - - if self.in_proj: - proj = self.weight_linear(x) - x = proj.narrow(2, 0, self.input_size).contiguous() - weight = ( - proj.narrow(2, self.input_size, H * K).contiguous().view(T * B * H, -1) - ) - else: - weight = self.weight_linear(query).view(T * B * H, -1) - - # renorm_padding is only implemented in _forward_expanded - assert not self.renorm_padding or incremental_state is not None - - if incremental_state is not None: - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is None: - input_buffer = x.new() - x_unfold = torch.cat([input_buffer, x.unsqueeze(3)], dim=3) - if self.kernel_size > 1: - self._set_input_buffer( - incremental_state, x_unfold[:, :, :, -self.kernel_size + 1 :] - ) - x_unfold = x_unfold.view(T * B * H, R, -1) - else: - padding_l = self.padding_l - if K > T and padding_l == K - 1: - weight = weight.narrow(1, K - T, T) - K, padding_l = T, T - 1 - # unfold the input: T x B x C --> T' x B x C x K - x_unfold = unfold1d(x, K, padding_l, 0) - x_unfold = x_unfold.view(T * B * H, R, K) - - if self.weight_softmax and not self.renorm_padding: - weight = F.softmax(weight, dim=1) - weight = weight.narrow(1, 0, K) - - if incremental_state is not None: - weight = weight[:, -x_unfold.size(2) :] - K = weight.size(1) - - if self.weight_softmax and self.renorm_padding: - weight = F.softmax(weight, dim=1) - - weight = self.weight_dropout_module(weight, inplace=False) - - output = torch.bmm(x_unfold, weight.unsqueeze(2)) # T*B*H x R x 1 - output = output.view(T, B, C) - return output - - def _forward_expanded(self, x, incremental_stat, query): - """Turn the convolution filters into band matrices and do matrix multiplication. - This is faster when the sequence is short, but less memory efficient. - This is not used in the decoder during inference. 
- """ - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - assert R * H == C == self.input_size - if self.in_proj: - proj = self.weight_linear(x) - x = proj.narrow(2, 0, self.input_size).contiguous() - weight = ( - proj.narrow(2, self.input_size, H * K).contiguous().view(T * B * H, -1) - ) - else: - weight = self.weight_linear(query).view(T * B * H, -1) - - if not self.renorm_padding: - if self.weight_softmax: - weight = F.softmax(weight, dim=1) - weight = self.weight_dropout_module(weight, inplace=False) - weight = weight.narrow(1, 0, K).contiguous() - weight = weight.view(T, B * H, K).transpose(0, 1) - - x = x.view(T, B * H, R).transpose(0, 1) - if self.weight_softmax and self.renorm_padding: - # turn the convolution filters into band matrices - weight_expanded = weight.new(B * H, T, T + K - 1).fill_(float("-inf")) - weight_expanded.as_strided( - (B * H, T, K), (T * (T + K - 1), T + K, 1) - ).copy_(weight) - weight_expanded = weight_expanded.narrow(2, self.padding_l, T) - # normalize the weight over valid positions like self-attention - weight_expanded = F.softmax(weight_expanded, dim=2) - weight_expanded = self.weight_dropout_module(weight_expanded, inplace=False) - else: - P = self.padding_l - # For efficiency, we cut the kernel size and reduce the padding when the kernel is larger than the length - if K > T and P == K - 1: - weight = weight.narrow(2, K - T, T) - K, P = T, T - 1 - # turn the convolution filters into band matrices - weight_expanded = weight.new_zeros(B * H, T, T + K - 1, requires_grad=False) - weight_expanded.as_strided( - (B * H, T, K), (T * (T + K - 1), T + K, 1) - ).copy_(weight) - weight_expanded = weight_expanded.narrow(2, P, T) # B*H x T x T - output = torch.bmm(weight_expanded, x) - output = output.transpose(0, 1).contiguous().view(T, B, C) - return output - - def reorder_incremental_state(self, incremental_state, new_order): - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - input_buffer = input_buffer.index_select(1, new_order) - self._set_input_buffer(incremental_state, input_buffer) - - def _get_input_buffer(self, incremental_state): - return utils.get_incremental_state(self, incremental_state, "input_buffer") - - def _set_input_buffer(self, incremental_state, new_buffer): - return utils.set_incremental_state( - self, incremental_state, "input_buffer", new_buffer - ) - - def extra_repr(self): - s = "{}, kernel_size={}, padding_l={}, num_heads={}, weight_softmax={}, conv_bias={}, renorm_padding={}, in_proj={}".format( - self.input_size, - self.kernel_size, - self.padding_l, - self.num_heads, - self.weight_softmax, - self.conv_bias is not None, - self.renorm_padding, - self.in_proj, - ) - - if self.query_size != self.input_size: - s += ", query_size={}".format(self.query_size) - if self.weight_dropout_module.p > 0.0: - s += ", weight_dropout={}".format(self.weight_dropout_module.p) - return s diff --git a/kosmos-g/fairseq/fairseq/modules/dynamic_crf_layer.py b/kosmos-g/fairseq/fairseq/modules/dynamic_crf_layer.py deleted file mode 100644 index 8fcc6b8d2..000000000 --- a/kosmos-g/fairseq/fairseq/modules/dynamic_crf_layer.py +++ /dev/null @@ -1,189 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -This file is to re-implemented the low-rank and beam approximation of CRF layer -Proposed by: - -Sun, Zhiqing, et al. 
-Fast Structured Decoding for Sequence Models -https://arxiv.org/abs/1910.11555 - -The CRF implementation is mainly borrowed from -https://github.com/kmkurn/pytorch-crf/blob/master/torchcrf/__init__.py - -""" - -import torch -import torch.nn as nn - - -def logsumexp(x, dim=1): - return torch.logsumexp(x.float(), dim=dim).type_as(x) - - -class DynamicCRF(nn.Module): - """Dynamic CRF layer is used to approximate the traditional - Conditional Random Fields (CRF) - $P(y | x) = 1/Z(x) exp(sum_i s(y_i, x) + sum_i t(y_{i-1}, y_i, x))$ - - where in this function, we assume the emission scores (s) are given, - and the transition score is a |V| x |V| matrix $M$ - - It differs from a standard CRF in the following two aspects: - (1) it uses a low-rank approximation for the transition matrix: - $M = E_1 E_2^T$ - (2) it uses a beam to estimate the normalizing factor Z(x) - """ - - def __init__(self, num_embedding, low_rank=32, beam_size=64): - super().__init__() - - self.E1 = nn.Embedding(num_embedding, low_rank) - self.E2 = nn.Embedding(num_embedding, low_rank) - - self.vocb = num_embedding - self.rank = low_rank - self.beam = beam_size - - def extra_repr(self): - return "vocab_size={}, low_rank={}, beam_size={}".format( - self.vocb, self.rank, self.beam - ) - - def forward(self, emissions, targets, masks, beam=None): - """ - Compute the conditional log-likelihood of a sequence of target tokens given emission scores - - Args: - emissions (`~torch.Tensor`): Emission scores are usually the unnormalized decoder output - ``(batch_size, seq_len, vocab_size)``. We assume batch-first - targets (`~torch.LongTensor`): Sequence of target token indices - ``(batch_size, seq_len)`` - masks (`~torch.ByteTensor`): Mask tensor with the same size as targets - - Returns: - `~torch.Tensor`: approximated log-likelihood - """ - numerator = self._compute_score(emissions, targets, masks) - denominator = self._compute_normalizer(emissions, targets, masks, beam) - return numerator - denominator - - def forward_decoder(self, emissions, masks=None, beam=None): - """ - Find the most likely output sequence using the Viterbi algorithm. - - Args: - emissions (`~torch.Tensor`): Emission scores are usually the unnormalized decoder output - ``(batch_size, seq_len, vocab_size)``. We assume batch-first - masks (`~torch.ByteTensor`): Mask tensor with the same size as targets - - Returns: - `~torch.LongTensor`: decoded sequence from the CRF model - """ - return self._viterbi_decode(emissions, masks, beam) - - def _compute_score(self, emissions, targets, masks=None): - batch_size, seq_len = targets.size() - emission_scores = emissions.gather(2, targets[:, :, None])[:, :, 0] # B x T - transition_scores = (self.E1(targets[:, :-1]) * self.E2(targets[:, 1:])).sum(2) - - scores = emission_scores - scores[:, 1:] += transition_scores - - if masks is not None: - scores = scores * masks.type_as(scores) - return scores.sum(-1) - - def _compute_normalizer(self, emissions, targets=None, masks=None, beam=None): - # HACK: we include "target" which is a heuristic for training - # HACK: we use a beam of tokens to approximate the normalizing factor (which is bad?)
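- # The beam approximation below prunes the vocabulary to the top-`beam` - # tokens per position (during training the gold targets are force-included - # via the +inf scatter), rebuilds the beam-restricted K x K transition - # blocks from the low-rank factors E1/E2, and then runs the CRF forward - # recursion in the log-space over the beam instead of the full |V| x |V| lattice.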
- - beam = beam if beam is not None else self.beam - batch_size, seq_len = emissions.size()[:2] - if targets is not None: - _emissions = emissions.scatter(2, targets[:, :, None], float("inf")) - beam_targets = _emissions.topk(beam, 2)[1] - beam_emission_scores = emissions.gather(2, beam_targets) - else: - beam_emission_scores, beam_targets = emissions.topk(beam, 2) - beam_transition_score1 = self.E1(beam_targets[:, :-1]) # B x (T-1) x K x D - beam_transition_score2 = self.E2(beam_targets[:, 1:]) # B x (T-1) x K x D - beam_transition_matrix = torch.bmm( - beam_transition_score1.view(-1, beam, self.rank), - beam_transition_score2.view(-1, beam, self.rank).transpose(1, 2), - ) - beam_transition_matrix = beam_transition_matrix.view(batch_size, -1, beam, beam) - - # compute the normalizer in the log-space - score = beam_emission_scores[:, 0] # B x K - for i in range(1, seq_len): - next_score = score[:, :, None] + beam_transition_matrix[:, i - 1] - next_score = logsumexp(next_score, dim=1) + beam_emission_scores[:, i] - - if masks is not None: - score = torch.where(masks[:, i : i + 1], next_score, score) - else: - score = next_score - - # Sum (log-sum-exp) over all possible tags - return logsumexp(score, dim=1) - - def _viterbi_decode(self, emissions, masks=None, beam=None): - # HACK: we use a beam of tokens to approximate the normalizing factor (which is bad?) - - beam = beam if beam is not None else self.beam - batch_size, seq_len = emissions.size()[:2] - beam_emission_scores, beam_targets = emissions.topk(beam, 2) - beam_transition_score1 = self.E1(beam_targets[:, :-1]) # B x (T-1) x K x D - beam_transition_score2 = self.E2(beam_targets[:, 1:]) # B x (T-1) x K x D - beam_transition_matrix = torch.bmm( - beam_transition_score1.view(-1, beam, self.rank), - beam_transition_score2.view(-1, beam, self.rank).transpose(1, 2), - ) - beam_transition_matrix = beam_transition_matrix.view(batch_size, -1, beam, beam) - - traj_tokens, traj_scores = [], [] - finalized_tokens, finalized_scores = [], [] - - # run the Viterbi (max-product) recursion in the log-space - score = beam_emission_scores[:, 0] # B x K - dummy = ( - torch.arange(beam, device=score.device).expand(*score.size()).contiguous() - ) - - for i in range(1, seq_len): - traj_scores.append(score) - _score = score[:, :, None] + beam_transition_matrix[:, i - 1] - _score, _index = _score.max(dim=1) - _score = _score + beam_emission_scores[:, i] - - if masks is not None: - score = torch.where(masks[:, i : i + 1], _score, score) - index = torch.where(masks[:, i : i + 1], _index, dummy) - else: - score, index = _score, _index - traj_tokens.append(index) - - # now run the back-tracing to find the best sequence - best_score, best_index = score.max(dim=1) - finalized_tokens.append(best_index[:, None]) - finalized_scores.append(best_score[:, None]) - - for idx, scs in zip(reversed(traj_tokens), reversed(traj_scores)): - previous_index = finalized_tokens[-1] - finalized_tokens.append(idx.gather(1, previous_index)) - finalized_scores.append(scs.gather(1, previous_index)) - - finalized_tokens.reverse() - finalized_tokens = torch.cat(finalized_tokens, 1) - finalized_tokens = beam_targets.gather(2, finalized_tokens[:, :, None])[:, :, 0] - - finalized_scores.reverse() - finalized_scores = torch.cat(finalized_scores, 1) - finalized_scores[:, 1:] = finalized_scores[:, 1:] - finalized_scores[:, :-1] - - return finalized_scores, finalized_tokens diff --git a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/__init__.py b/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/__init__.py
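Before the next file, a quick illustrative sketch of the `DynamicCRF` API deleted above (hypothetical batch, length, and vocabulary sizes; not part of the original sources):

```python
import torch
from fairseq.modules.dynamic_crf_layer import DynamicCRF

B, T, V = 2, 6, 1000                       # hypothetical batch/length/vocab
crf = DynamicCRF(num_embedding=V, low_rank=32, beam_size=64)

emissions = torch.randn(B, T, V)           # batch-first decoder scores
targets = torch.randint(0, V, (B, T))
masks = torch.ones(B, T, dtype=torch.bool)

log_likelihood = crf(emissions, targets, masks)         # approximate, shape (B,)
scores, tokens = crf.forward_decoder(emissions, masks)  # beam Viterbi decode
```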
deleted file mode 100644 index 22dc6f403..000000000 --- a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .dynamicconv_layer import DynamicconvLayer # noqa diff --git a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/cuda_function_gen.py b/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/cuda_function_gen.py deleted file mode 100644 index 9304f99eb..000000000 --- a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/cuda_function_gen.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -def gen_forward(): - - kernels = [3, 5, 7, 15, 31, 63, 127, 255] - blocks = [32, 64, 128, 256] - - head = """ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include "dynamicconv_cuda.cuh" - -std::vector<at::Tensor> dynamicconv_cuda_forward(at::Tensor input, at::Tensor weight, int padding_l) { - - at::DeviceGuard g(input.device()); - const auto minibatch = input.size(0); - const auto numFeatures = input.size(1); - const auto sequenceLength = input.size(2); - - const auto numHeads = weight.size(1); - const auto filterSize = weight.size(2); - - const auto numFiltersInBlock = numFeatures / numHeads; - const dim3 blocks(minibatch, numFeatures); - - auto output = at::zeros_like(input); - auto stream = at::cuda::getCurrentCUDAStream(); -""" - - switch = """ - switch(filterSize) { -""" - - case_k = """ - case {k}: -""" - - main_block = """ - if (padding_l == {pad}) {{ - AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.scalar_type(), "dynamicconv_forward", ([&] {{ - dynamicconv_forward_kernel<{k}, {b_size}, {pad}, scalar_t> - <<<blocks, {b_size}, 0, stream>>>( - input.data<scalar_t>(), - weight.data<scalar_t>(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - numHeads, - output.data<scalar_t>()); - }})); - }} else -""" - - bad_padding = """ - { - std::cout << "WARNING: Unsupported padding size - skipping forward pass" << std::endl; - } - break;\n -""" - - end = """ - default: - std::cout << "WARNING: Unsupported filter length passed - skipping forward pass" << std::endl; - } - - return {output}; -} -""" - - with open("dynamicconv_cuda_forward.cu", "w") as forward: - forward.write(head) - forward.write(switch) - for k in kernels: - b_size = 32 - for b in blocks: - if b > k: - b_size = b - break - forward.write(case_k.format(k=k)) - for pad in [k // 2, k - 1]: - forward.write(main_block.format(k=k, b_size=b_size, pad=pad)) - forward.write(bad_padding) - forward.write(end) - - -def gen_backward(): - - kernels = [3, 5, 7, 15, 31, 63, 127, 255] - thresh = [512, 512, 512, 512, 512, 380, 256, 256] - min_block = [64, 64, 64, 64, 64, 64, 128, 256] - seqs = [32 * x for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]] - - head = """ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -#include "dynamicconv_cuda.cuh" - -std::vector<at::Tensor> dynamicconv_cuda_backward(at::Tensor gradOutput, int padding_l, at::Tensor input, at::Tensor weight) { - - at::DeviceGuard g(input.device()); - const auto minibatch = input.size(0); - const auto numFeatures = input.size(1); - const auto sequenceLength = input.size(2); - - const auto numHeads = weight.size(1); - const auto filterSize = weight.size(2); - - const auto numFiltersInBlock = numFeatures / numHeads; - auto numChunks = 1; - - auto gradInput = at::zeros_like(input); - auto gradWeight = at::zeros_like(weight); - auto stream = at::cuda::getCurrentCUDAStream(); - - dim3 blocks(minibatch, numHeads, numChunks); -""" - - sequence_if = """ - if (sequenceLength < {seq}) {{ - switch(filterSize) {{ -""" - - case_k = """ - case {k}: -""" - - chunks_reset = """ - numChunks = int(ceilf(sequenceLength/float({b_size}))); - blocks = dim3(minibatch, numHeads, numChunks); -""" - - main_block = """ - if (padding_l == {p}) {{ - AT_DISPATCH_FLOATING_TYPES_AND_HALF(gradOutput.scalar_type(), "dynamicconv_backward", ([&] {{ - dynamicconv_backward_kernel<{k}, {b_size}, {p}, scalar_t> - <<<blocks, {b_size}, 0, stream>>>( - gradOutput.data<scalar_t>(), - input.data<scalar_t>(), - weight.data<scalar_t>(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - numHeads, - gradWeight.data<scalar_t>(), - gradInput.data<scalar_t>()); - }})); - }} else -""" - - bad_padding = """ - { - std::cout << "WARNING: Unsupported padding size - skipping backward pass" << std::endl; - } - break;\n -""" - - bad_filter = """ - default: - std::cout << "WARNING: Unsupported filter length passed - skipping backward pass" << std::endl; - } -""" - - con_else = """ - } else -""" - - final_else = """ - { - switch(filterSize) { -""" - - last_return = """ - } - return {gradInput, gradWeight}; -} -""" - - with open("dynamicconv_cuda_backward.cu", "w") as backward: - backward.write(head) - for seq in seqs: - backward.write(sequence_if.format(seq=seq)) - for k, t, m in zip(kernels, thresh, min_block): - backward.write(case_k.format(k=k)) - if seq <= t: - b_size = seq - else: - b_size = m - backward.write(chunks_reset.format(b_size=b_size)) - for p in [k // 2, k - 1]: - backward.write(main_block.format(k=k, b_size=b_size, p=p)) - backward.write(bad_padding) - backward.write(bad_filter) - backward.write(con_else) - backward.write(final_else) - for k, m in zip(kernels, min_block): - backward.write(case_k.format(k=k)) - backward.write(chunks_reset.format(b_size=m)) - for p in [k // 2, k - 1]: - backward.write(main_block.format(k=k, b_size=m, p=p)) - backward.write(bad_padding) - backward.write(bad_filter) - backward.write(last_return) - - -if __name__ == "__main__": - gen_forward() - gen_backward() diff --git a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cpp b/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cpp deleted file mode 100644 index 744c363e5..000000000 --- a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cpp +++ /dev/null @@ -1,51 +0,0 @@ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -#include <torch/extension.h> -#include <vector> - -std::vector<at::Tensor> -dynamicconv_cuda_forward(at::Tensor input, at::Tensor filters, int padding_l); - -std::vector<at::Tensor> dynamicconv_cuda_backward( - at::Tensor gradOutput, - int padding_l, - at::Tensor input, - at::Tensor filters); - -#define CHECK_CUDA(x) \ - AT_ASSERTM(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) \ - AT_ASSERTM(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) \ - CHECK_CUDA(x); \ - CHECK_CONTIGUOUS(x) - -std::vector<at::Tensor> -dynamicconv_forward(at::Tensor input, at::Tensor filters, int padding_l) { - CHECK_INPUT(input); - CHECK_INPUT(filters); - - return dynamicconv_cuda_forward(input, filters, padding_l); -} - -std::vector<at::Tensor> dynamicconv_backward( - at::Tensor gradOutput, - int padding_l, - at::Tensor input, - at::Tensor filters) { - CHECK_INPUT(gradOutput); - CHECK_INPUT(input); - CHECK_INPUT(filters); - - return dynamicconv_cuda_backward(gradOutput, padding_l, input, filters); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("forward", &dynamicconv_forward, "dynamicconv forward (CUDA)"); - m.def("backward", &dynamicconv_backward, "dynamicconv backward (CUDA)"); -} diff --git a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cuh b/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cuh deleted file mode 100644 index 44baf21bd..000000000 --- a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/dynamicconv_cuda.cuh +++ /dev/null @@ -1,50 +0,0 @@ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include <ATen/ATen.h> -#include <c10/cuda/CUDAStream.h> - -#include <cuda.h> -#include <cuda_fp16.h> -#include <cuda_runtime.h> - -#include <algorithm> -#include <functional> -#include <iostream> -#include <stdexcept> -#include <utility> -#include <vector> - -#include <assert.h> -#include <math.h> -#include <stdlib.h> - -#define SHFL_MASK 0xffffffff - -template <int FS, int SB, int padding_l, typename scalar_t> -__global__ void dynamicconv_forward_kernel( - const scalar_t* input, - const scalar_t* weight, - int minibatch, - int sequenceLength, - int numFeatures, - int numFiltersInBlock, - int numHeads, - scalar_t* output); - -template <int FS, int SB, int padding_l, typename scalar_t> -__global__ void dynamicconv_backward_kernel( - const scalar_t* gradOutput, // B * C * T - const scalar_t* input, // B * C * T - const scalar_t* weight, - int minibatch, - int sequenceLength, - int numFeatures, - int numFiltersInBlock, - int numHeads, - scalar_t* gradWeight, - scalar_t* gradInput); // B * H * k * T diff --git a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu b/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu deleted file mode 100644 index 4630f1e98..000000000 --- a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/dynamicconv_cuda_kernel.cu +++ /dev/null @@ -1,176 +0,0 @@ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -#include "../cuda_utils.cu" -#include "dynamicconv_cuda.cuh" -#include "dynamicconv_cuda_backward.cu" -#include "dynamicconv_cuda_forward.cu" - -// FS is filter size and kernels are specialized for filter sizes -template <int FS, int SB, int padding_l, typename scalar_t> -__global__ void dynamicconv_forward_kernel( - const scalar_t* input, - const scalar_t* weight, - int minibatch, - int sequenceLength, - int numFeatures, - int numFiltersInBlock, - int numHeads, - scalar_t* output) { - assert(blockDim.x == SB); - - const int tid = threadIdx.x; - const int batchIdx = blockIdx.x; - const int featureIdx = blockIdx.y; - const int head = featureIdx / numFiltersInBlock; - - const int IOOffset = - batchIdx * numFeatures * sequenceLength + featureIdx * sequenceLength; - const scalar_t* inputFeature = &input[IOOffset]; - scalar_t* outputFeature = &output[IOOffset]; - - scalar_t filter[FS]; - - __shared__ scalar_t tempInput[SB + FS]; - zeroSharedMem<FS, SB, padding_l>(tempInput); - - const int numIterations = divUp<int, int>(sequenceLength, SB); - - for (int i = 0; i < numIterations; ++i) { - __syncthreads(); - const int inputOffset = i * SB; - load_input_to_shared<FS, SB, padding_l>( - inputFeature, - inputOffset, - sequenceLength, - i, - numIterations, - false, - tempInput); - __syncthreads(); - if (inputOffset + tid < sequenceLength) { -#pragma unroll - for (int k = 0; k < FS; ++k) { - const int filterOffset = batchIdx * numHeads * FS * sequenceLength + - head * FS * sequenceLength + k * sequenceLength + i * SB + tid; - filter[k] = weight[filterOffset]; - } - - scalar_t out = scalar_t(0.0); -#pragma unroll - for (int k = 0; k < FS; ++k) { - out += filter[k] * tempInput[tid + k]; - } - - outputFeature[inputOffset + tid] = out; - } - } -} - -template <int FS, int SB, int padding_l, typename scalar_t> -__global__ void dynamicconv_backward_kernel( - const scalar_t* gradOutput, // B * C * T - const scalar_t* input, // B * C * T - const scalar_t* weight, - int minibatch, - int sequenceLength, - int numFeatures, - int numFiltersInBlock, - int numHeads, - scalar_t* gradWeight, - scalar_t* gradInput) { // B * H * k * T - - assert(blockDim.x == SB); - - // each block operates on a single batch and filter head - const int tid = threadIdx.x; - const int batchIdx = blockIdx.x; - const int headIdx = blockIdx.y; - const int chunkIdx = blockIdx.z; - - const int numChunks = divUp<int, int>(sequenceLength, SB); - const int inputOffset = chunkIdx * SB; - - // initialize shared memory for output gradient and input - __shared__ scalar_t tempGradOutput[SB + FS]; - __shared__ scalar_t tempInput[SB + FS]; - const int padding = FS - padding_l - 1; - - zeroSharedMem<FS, SB, padding>(tempGradOutput); - zeroSharedMem<FS, SB, padding_l>(tempInput); - - // initialize local filter and weight gradient sum arrays - scalar_t tempGradSum[FS]; - scalar_t bfilter[FS]; - for (int k = 0; k < FS; ++k) { - tempGradSum[k] = scalar_t(0.0); - - int idxOffset = inputOffset + tid + k - padding; - if (idxOffset >= 0 && idxOffset < sequenceLength) { - int bfilterOffset = batchIdx * numHeads * FS * sequenceLength + - headIdx * FS * sequenceLength + (FS - k - 1) * sequenceLength + - idxOffset; - bfilter[k] = weight[bfilterOffset]; - } else { - bfilter[k] = scalar_t(0.0); - } - } - - // iterate over filter block - for (int featureIdx = 0; featureIdx < numFiltersInBlock; ++featureIdx) { - __syncthreads(); - - // load input and output gradient for this channel and chunk - const int IOOffset = batchIdx * numFeatures * sequenceLength + - 
(headIdx * numFiltersInBlock + featureIdx) * sequenceLength; - const scalar_t* inputFeature = &input[IOOffset]; - const scalar_t* gradOutputFeature = &gradOutput[IOOffset]; - scalar_t* gradInputFeature = &gradInput[IOOffset]; - - load_input_to_shared<FS, SB, padding>( - gradOutputFeature, - inputOffset, - sequenceLength, - chunkIdx, - numChunks, - true, - tempGradOutput); - load_input_to_shared<FS, SB, padding_l>( - inputFeature, - inputOffset, - sequenceLength, - chunkIdx, - numChunks, - true, - tempInput); - __syncthreads(); - - // sum input and weight gradients - scalar_t out = scalar_t(0.0); -#pragma unroll - for (int k = 0; k < FS; ++k) { - tempGradSum[k] += tempInput[tid + k] * tempGradOutput[tid + padding]; - out += bfilter[k] * tempGradOutput[tid + k]; - } - - if (inputOffset + tid < sequenceLength) { - gradInputFeature[inputOffset + tid] = out; - } - } - - const int gradOffset = - batchIdx * numHeads * FS * sequenceLength + headIdx * FS * sequenceLength; - scalar_t* gradWeightFeature = &gradWeight[gradOffset]; - - // write weight gradient - if (inputOffset + tid < sequenceLength) { - for (int k = 0; k < FS; ++k) { - const int outputOffset = k * sequenceLength + inputOffset + tid; - gradWeightFeature[outputOffset] = tempGradSum[k]; - } - } -} diff --git a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/dynamicconv_layer.py b/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/dynamicconv_layer.py deleted file mode 100644 index 711ed0348..000000000 --- a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/dynamicconv_layer.py +++ /dev/null @@ -1,227 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import dynamicconv_cuda -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.unfold import unfold1d -from torch import nn -from torch.autograd import Function - - -class dynamicconvFunction(Function): - @staticmethod - def forward(ctx, x, weights, padding_l): - ctx.padding_l = padding_l - outputs = dynamicconv_cuda.forward(x, weights, padding_l) - variables = [x, weights] - ctx.save_for_backward(*variables) - return outputs[0] - - @staticmethod - def backward(ctx, grad_output): - outputs = dynamicconv_cuda.backward( - grad_output.contiguous(), ctx.padding_l, *ctx.saved_tensors - ) - grad_input, grad_weights = outputs - return grad_input, grad_weights, None - - -@with_incremental_state -class DynamicconvLayer(nn.Module): - def __init__( - self, - input_size, - kernel_size=1, - padding_l=None, - weight_softmax=False, - num_heads=1, - weight_dropout=0.0, - bias=False, - renorm_padding=False, - conv_bias=False, - query_size=None, - ): - - super(DynamicconvLayer, self).__init__() - self.input_size = input_size - self.query_size = input_size if query_size is None else query_size - self.kernel_size = kernel_size - self.padding_l = padding_l - self.num_heads = num_heads - self.weight_softmax = weight_softmax - self.weight_dropout_module = FairseqDropout( - weight_dropout, module_name=self.__class__.__name__ - ) - self.renorm_padding = renorm_padding - self.bias = bias - - self.weight_linear = nn.Linear(input_size, num_heads * kernel_size, bias) - if conv_bias: - self.conv_bias = nn.Parameter(torch.Tensor(input_size)) - else: - self.conv_bias = None - self.reset_parameters() - - def 
reset_parameters(self): - nn.init.xavier_uniform_(self.weight_linear.weight) - if self.conv_bias is not None: - nn.init.constant_(self.conv_bias, 0.0) - nn.init.constant_(self.weight_linear.bias, 0.0) - - def forward(self, x, incremental_state=None, query=None, unfold=None): - - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - # R = C // H - - # during inference time, incremental BMM is faster - if incremental_state is not None: - unfold = ( - x.size(0) > 512 if unfold is None else unfold - ) # use unfold mode as default for long sequence to save memory - unfold = unfold or (incremental_state is not None) - assert query is None - - if query is None: - query = x - if unfold: - output = self._forward_unfolded(x, incremental_state, query) - else: - output = self._forward_expanded(x, incremental_state, query) - - if self.conv_bias is not None: - output = output + self.conv_bias.view(1, 1, -1) - - return output - - # during training time, use CUDA kernel - else: - weight = self.weight_linear(x).view(T, B, H, K) - if self.weight_softmax: - weight = F.softmax(weight, dim=-1) - if self.weight_dropout_module.p: - weight = self.weight_dropout_module(weight) - - weight = weight.permute(1, 2, 3, 0).contiguous() - self.filters = weight - x = x.permute(1, 2, 0).contiguous() - output = dynamicconvFunction.apply(x, weight, self.padding_l).permute( - 2, 0, 1 - ) - if self.conv_bias is not None: - output = output + self.conv_bias.view(1, 1, -1) - return output - - def reorder_incremental_state(self, incremental_state, new_order): - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - input_buffer = input_buffer.index_select(1, new_order) - self._set_input_buffer(incremental_state, input_buffer) - - def _get_input_buffer(self, incremental_state): - return utils.get_incremental_state(self, incremental_state, "input_buffer") - - def _set_input_buffer(self, incremental_state, new_buffer): - return utils.set_incremental_state( - self, incremental_state, "input_buffer", new_buffer - ) - - def _forward_unfolded(self, x, incremental_state, query): - """The conventional implementation of convolutions.
- Unfolding the input by having a window shifting to the right.""" - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - assert R * H == C == self.input_size - - weight = self.weight_linear(query).view(T * B * H, -1) - - # renorm_padding is only implemented in _forward_expanded - assert not self.renorm_padding or incremental_state is not None - - if incremental_state is not None: - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is None: - input_buffer = x.new() - x_unfold = torch.cat([input_buffer, x.unsqueeze(3)], dim=3) - if self.kernel_size > 1: - self._set_input_buffer( - incremental_state, x_unfold[:, :, :, -self.kernel_size + 1 :] - ) - x_unfold = x_unfold.view(T * B * H, R, -1) - else: - padding_l = self.padding_l - if K > T and padding_l == K - 1: - weight = weight.narrow(1, K - T, T) - K, padding_l = T, T - 1 - # unfold the input: T x B x C --> T' x B x C x K - x_unfold = unfold1d(x, K, padding_l, 0) - x_unfold = x_unfold.view(T * B * H, R, K) - - if self.weight_softmax and not self.renorm_padding: - weight = F.softmax(weight, dim=1) - weight = weight.narrow(1, 0, K) - - if incremental_state is not None: - weight = weight[:, -x_unfold.size(2) :] - K = weight.size(1) - - if self.weight_softmax and self.renorm_padding: - weight = F.softmax(weight, dim=1) - - weight = self.weight_dropout_module(weight, inplace=False) - - output = torch.bmm(x_unfold, weight.unsqueeze(2)) # T*B*H x R x 1 - output = output.view(T, B, C) - return output - - def _forward_expanded(self, x, incremental_stat, query): - """Turn the convolution filters into band matrices and do matrix multiplication. - This is faster when the sequence is short, but less memory efficient. - This is not used in the decoder during inference. 
- """ - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - assert R * H == C == self.input_size - weight = self.weight_linear(query).view(T * B * H, -1) - - if not self.renorm_padding: - if self.weight_softmax: - weight = F.softmax(weight, dim=1) - weight = self.weight_dropout_module(weight, inplace=False) - weight = weight.narrow(1, 0, K).contiguous() - weight = weight.view(T, B * H, K).transpose(0, 1) - - x = x.view(T, B * H, R).transpose(0, 1) - if self.weight_softmax and self.renorm_padding: - # turn the convolution filters into band matrices - weight_expanded = weight.new(B * H, T, T + K - 1).fill_(float("-inf")) - weight_expanded.as_strided( - (B * H, T, K), (T * (T + K - 1), T + K, 1) - ).copy_(weight) - weight_expanded = weight_expanded.narrow(2, self.padding_l, T) - # normalize the weight over valid positions like self-attention - weight_expanded = F.softmax(weight_expanded, dim=2) - weight_expanded = self.weight_dropout_module(weight_expanded, inplace=False) - else: - P = self.padding_l - # For efficiency, we cut the kernel size and reduce the padding when the kernel is larger than the length - if K > T and P == K - 1: - weight = weight.narrow(2, K - T, T) - K, P = T, T - 1 - # turn the convolution filters into band matrices - weight_expanded = weight.new_zeros(B * H, T, T + K - 1, requires_grad=False) - weight_expanded.as_strided( - (B * H, T, K), (T * (T + K - 1), T + K, 1) - ).copy_(weight) - weight_expanded = weight_expanded.narrow(2, P, T) # B*H x T x T - output = torch.bmm(weight_expanded, x) - output = output.transpose(0, 1).contiguous().view(T, B, C) - return output diff --git a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/dynamiconv_cpu.cpp b/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/dynamiconv_cpu.cpp deleted file mode 100644 index d7e57c859..000000000 --- a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/dynamiconv_cpu.cpp +++ /dev/null @@ -1,29 +0,0 @@ -#include <torch/torch.h> -#include <vector> - -std::vector<float*> -dynamicconv_cpu_forward(float* input, float* filters, int padding_l); - -std::vector<float*> dynamicconv_cpu_backward( - float* gradOutput, - int padding_l, - float* input, - float* filters); - -std::vector<float*> -dynamicconv_forward(float* input, float* filters, int padding_l) { - return dynamicconv_cpu_forward(input, filters, padding_l); -} - -std::vector<float*> dynamicconv_backward( - float* gradOutput, - int padding_l, - float* input, - float* filters) { - return dynamicconv_cpu_backward(gradOutput, padding_l, input, filters); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("forward", &dynamicconv_forward, "dynamicconv forward (CPU)"); - m.def("backward", &dynamicconv_backward, "dynamicconv backward (CPU)"); -} diff --git a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/setup.py b/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/setup.py deleted file mode 100644 index 6a21f7e2e..000000000 --- a/kosmos-g/fairseq/fairseq/modules/dynamicconv_layer/setup.py +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from setuptools import setup -from torch.utils.cpp_extension import BuildExtension, CUDAExtension - - -setup( - name="dynamicconv_layer", - ext_modules=[ - CUDAExtension( - name="dynamicconv_cuda", - sources=[ - "dynamicconv_cuda.cpp", - "dynamicconv_cuda_kernel.cu", - ], - ), - ], - cmdclass={"build_ext": BuildExtension}, -) diff --git a/kosmos-g/fairseq/fairseq/modules/espnet_multihead_attention.py b/kosmos-g/fairseq/fairseq/modules/espnet_multihead_attention.py deleted file mode 100644 index d319a168f..000000000 --- a/kosmos-g/fairseq/fairseq/modules/espnet_multihead_attention.py +++ /dev/null @@ -1,254 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -# Copyright 2019 Shigeki Karita -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Multi-Head Attention layer definition.""" - -import math -import torch -from torch import nn -from fairseq.modules.rotary_positional_embedding import ( - RotaryPositionalEmbedding, - apply_rotary_pos_emb, -) - - -class ESPNETMultiHeadedAttention(nn.Module): - """Multi-Head Attention layer. - Args: - n_head: The number of heads. - n_feat: The number of features. - dropout: Dropout rate. - """ - - def __init__(self, n_feat, n_head, dropout): - """Construct a MultiHeadedAttention object.""" - super(ESPNETMultiHeadedAttention, self).__init__() - assert n_feat % n_head == 0 - # We assume d_v always equals d_k - self.d_k = n_feat // n_head - self.h = n_head - self.linear_q = nn.Linear(n_feat, n_feat) - self.linear_k = nn.Linear(n_feat, n_feat) - self.linear_v = nn.Linear(n_feat, n_feat) - self.linear_out = nn.Linear(n_feat, n_feat) - self.attn = None - self.dropout = nn.Dropout(p=dropout) - - def forward_qkv(self, query, key, value, **kwargs): - """Transform query, key and value. - Args: - query: Query tensor B X T1 X C - key: Key tensor B X T2 X C - value: Value tensor B X T2 X C - Returns: - torch.Tensor: Transformed query tensor B X n_head X T1 X d_k - torch.Tensor: Transformed key tensor B X n_head X T2 X d_k - torch.Tensor: Transformed value tensor B X n_head X T2 X d_k - """ - n_batch = query.size(0) - q = self.linear_q(query).view(n_batch, -1, self.h, self.d_k) - k = self.linear_k(key).view(n_batch, -1, self.h, self.d_k) - v = self.linear_v(value).view(n_batch, -1, self.h, self.d_k) - q = q.transpose(1, 2) # (batch, head, time1, d_k) - k = k.transpose(1, 2) # (batch, head, time2, d_k) - v = v.transpose(1, 2) # (batch, head, time2, d_k) - return q, k, v - - def forward_attention(self, value, scores, mask): - """Compute attention context vector. - Args: - value: Transformed value B X n_head X T2 X d_k. - scores: Attention score B X n_head X T1 X T2 - mask: Mask T2 X B - Returns: - torch.Tensor: Transformed value B X T1 X d_model - weighted by the attention score B X T1 X T2 - """ - n_batch = value.size(0) - if mask is not None: - scores = scores.masked_fill( - mask.unsqueeze(1).unsqueeze(2).to(bool), - float("-inf"), # (batch, head, time1, time2) - ) - self.attn = torch.softmax(scores, dim=-1) # (batch, head, time1, time2) - - else: - self.attn = torch.softmax(scores, dim=-1) # (batch, head, time1, time2) - p_attn = self.dropout(self.attn) - x = torch.matmul(p_attn, value) # (batch, head, time1, d_k) - x = ( - x.transpose(1, 2).contiguous().view(n_batch, -1, self.h * self.d_k) - ) # (batch, time1, d_model) - - return self.linear_out(x) # (batch, time1, d_model) - - def forward(self, query, key, value, key_padding_mask=None, **kwargs): - """Compute scaled dot product attention.
- Args: - query (torch.Tensor): Query tensor T X B X C - key (torch.Tensor): Key tensor T X B X C - value (torch.Tensor): Value tensor T X B X C - key_padding_mask (torch.Tensor): Mask tensor T X B - Returns: - torch.Tensor: Output tensor T X B X D. - """ - query = query.transpose(0, 1) - key = key.transpose(0, 1) - value = value.transpose(0, 1) - - q, k, v = self.forward_qkv(query, key, value) - scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.d_k) - scores = self.forward_attention(v, scores, key_padding_mask) - scores = scores.transpose(0, 1) - return scores, None - - -class RelPositionMultiHeadedAttention(ESPNETMultiHeadedAttention): - """Multi-Head Attention layer with relative position encoding. - Paper: https://arxiv.org/abs/1901.02860 - Args: - n_head: The number of heads. - n_feat: The number of features. - dropout: Dropout rate. - zero_triu: Whether to zero the upper triangular part of attention matrix. - """ - - def __init__(self, n_feat, n_head, dropout, zero_triu=False): - """Construct a RelPositionMultiHeadedAttention object.""" - super().__init__(n_feat, n_head, dropout) - self.zero_triu = zero_triu - # linear transformation for positional encoding - self.linear_pos = nn.Linear(n_feat, n_feat, bias=False) - # these two learnable biases are used in matrix c and matrix d - # as described in https://arxiv.org/abs/1901.02860 Section 3.3 - self.pos_bias_u = nn.Parameter(torch.Tensor(self.h, self.d_k)) - self.pos_bias_v = nn.Parameter(torch.Tensor(self.h, self.d_k)) - torch.nn.init.xavier_uniform_(self.pos_bias_u) - torch.nn.init.xavier_uniform_(self.pos_bias_v) - - def rel_shift(self, x): - """Compute relative positional encoding. - Args: - x: Input tensor B X n_head X T X 2T-1 - Returns: - torch.Tensor: Output tensor. - """ - zero_pad = torch.zeros((*x.size()[:3], 1), device=x.device, dtype=x.dtype) - x_padded = torch.cat([zero_pad, x], dim=-1) - - x_padded = x_padded.view(*x.size()[:2], x.size(3) + 1, x.size(2)) - x = x_padded[:, :, 1:].view_as(x)[ - :, :, :, : x.size(-1) // 2 + 1 - ] # only keep the positions from 0 to time2 - - if self.zero_triu: - ones = torch.ones((x.size(2), x.size(3)), device=x.device) - x = x * torch.tril(ones, x.size(3) - x.size(2))[None, None, :, :] - - return x - - def forward(self, query, key, value, pos_emb, key_padding_mask=None, **kwargs): - """Compute scaled dot product attention. - Args: - query: Query tensor T X B X C - key: Key tensor T X B X C - value: Value tensor T X B X C - pos_emb: Positional embedding tensor B X 2T-1 X C - key_padding_mask: Mask tensor T X B - Returns: - torch.Tensor: Output tensor T X B X C.
- """ - query = query.transpose(0, 1) - key = key.transpose(0, 1) - value = value.transpose(0, 1) - pos_emb = pos_emb.transpose(0, 1) - q, k, v = self.forward_qkv(query, key, value) - q = q.transpose(1, 2) # (batch, time1, head, d_k) - n_batch_pos = pos_emb.size(0) - p = self.linear_pos(pos_emb).view(n_batch_pos, -1, self.h, self.d_k) - p = p.transpose(1, 2) # (batch, head, 2*time1-1, d_k) - - # (batch, head, time1, d_k) - q_with_bias_u = (q + self.pos_bias_u).transpose(1, 2) - # (batch, head, time1, d_k) - q_with_bias_v = (q + self.pos_bias_v).transpose(1, 2) - - # compute attention score - # first compute matrix a and matrix c - # as described in https://arxiv.org/abs/1901.02860 Section 3.3 - # (batch, head, time1, time2) - matrix_ac = torch.matmul(q_with_bias_u, k.transpose(-2, -1)) - - # compute matrix b and matrix d - # (batch, head, time1, 2*time1-1) - matrix_bd = torch.matmul(q_with_bias_v, p.transpose(-2, -1)) - matrix_bd = self.rel_shift(matrix_bd) - - scores = (matrix_ac + matrix_bd) / math.sqrt( - self.d_k - ) # (batch, head, time1, time2) - - scores = self.forward_attention(v, scores, key_padding_mask) - scores = scores.transpose(0, 1) - return scores, None - - -class RotaryPositionMultiHeadedAttention(ESPNETMultiHeadedAttention): - def __init__( - self, - n_feat, - n_head, - dropout, - precision, - rotary_emd_base=10000, - ): - """Construct an RotaryPositionMultiHeadedAttention object.""" - super().__init__(n_feat, n_head, dropout) - precision = torch.float - self.rotary_ndims = self.d_k # also try self.d_k//2 - if precision == "fp16": - precision = torch.half - - self.rotary_emb = RotaryPositionalEmbedding( - self.rotary_ndims, base=rotary_emd_base, precision=precision - ) - - def forward(self, query, key, value, key_padding_mask=None, **kwargs): - """Compute rotary position attention. - Args: - query: Query tensor T X B X C - key: Key tensor T X B X C - value: Value tensor T X B X C - key_padding_mask: Mask tensor T X B - Returns: - torch.Tensor: Output tensor T X B X D. - Notes: - Assumes self attn - """ - - T, B, C = value.size() - query = query.view(T, B, self.h, self.d_k) - key = key.view(T, B, self.h, self.d_k) - value = value.view(T, B, self.h, self.d_k) - cos, sin = self.rotary_emb(value, seq_len=T) - query, key = apply_rotary_pos_emb( - query, key, cos, sin, offset=0 - ) # offset is based on layer_past - - query = query.view(T, B, self.h * self.d_k) - key = key.view(T, B, self.h * self.d_k) - value = value.view(T, B, self.h * self.d_k) - - # TBD to BTD - query = query.transpose(0, 1) - key = key.transpose(0, 1) - value = value.transpose(0, 1) - - q, k, v = self.forward_qkv(query, key, value) - scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.d_k) - scores = self.forward_attention(v, scores, key_padding_mask) - scores = scores.transpose(0, 1) - return scores, None diff --git a/kosmos-g/fairseq/fairseq/modules/fairseq_dropout.py b/kosmos-g/fairseq/fairseq/modules/fairseq_dropout.py deleted file mode 100644 index 3cddca771..000000000 --- a/kosmos-g/fairseq/fairseq/modules/fairseq_dropout.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -from typing import List, Optional - -import torch.nn as nn -import torch.nn.functional as F - - -logger = logging.getLogger(__name__) - - -class FairseqDropout(nn.Module): - def __init__(self, p, module_name=None): - super().__init__() - self.p = p - self.module_name = module_name - self.apply_during_inference = False - - def forward(self, x, inplace: bool = False): - if self.p > 0 and (self.training or self.apply_during_inference): - return F.dropout(x, p=self.p, training=True, inplace=inplace) - else: - return x - - def make_generation_fast_( - self, - name: str, - retain_dropout: bool = False, - retain_dropout_modules: Optional[List[str]] = None, - **kwargs - ): - if retain_dropout: - if retain_dropout_modules is not None and self.module_name is None: - logger.warning( - "Cannot enable dropout during inference for module {} " - "because module_name was not set".format(name) - ) - elif ( - retain_dropout_modules is None # if None, apply to all modules - or self.module_name in retain_dropout_modules - ): - logger.info( - "Enabling dropout during inference for module: {}".format(name) - ) - self.apply_during_inference = True - else: - logger.info("Disabling dropout for module: {}".format(name)) diff --git a/kosmos-g/fairseq/fairseq/modules/fp32_batch_norm.py b/kosmos-g/fairseq/fairseq/modules/fp32_batch_norm.py deleted file mode 100644 index c560f338f..000000000 --- a/kosmos-g/fairseq/fairseq/modules/fp32_batch_norm.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -batch norm done in fp32 (for fp16 training) -""" -import torch -import torch.nn as nn - - -class Fp32BatchNorm(nn.Module): - def __init__(self, sync=False, *args, **kwargs): - super().__init__() - - if sync: - from fairseq.distributed import utils - - if utils.get_global_world_size() == 1: - sync = False - - if sync: - self.bn = nn.SyncBatchNorm(*args, **kwargs) - else: - self.bn = nn.BatchNorm1d(*args, **kwargs) - - self.sync = sync - - def forward(self, input): - if self.bn.running_mean.dtype != torch.float: - if self.sync: - self.bn.running_mean = self.bn.running_mean.float() - self.bn.running_var = self.bn.running_var.float() - if self.bn.affine: - try: - self.bn.weight = self.bn.weight.float() - self.bn.bias = self.bn.bias.float() - except: - self.bn.float() - else: - self.bn.float() - - output = self.bn(input.float()) - return output.type_as(input) diff --git a/kosmos-g/fairseq/fairseq/modules/fp32_group_norm.py b/kosmos-g/fairseq/fairseq/modules/fp32_group_norm.py deleted file mode 100644 index d03aac022..000000000 --- a/kosmos-g/fairseq/fairseq/modules/fp32_group_norm.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
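For the `FairseqDropout` module above, the `make_generation_fast_` hook is what lets callers keep dropout stochastic at inference time (e.g. for Monte Carlo dropout). A minimal usage sketch, with the module name chosen purely for illustration:

```python
import torch
from fairseq.modules.fairseq_dropout import FairseqDropout

# Opt one named module back into dropout during generation; passing
# retain_dropout_modules=None instead would retain dropout for all modules.
drop = FairseqDropout(p=0.1, module_name="decoder_ffn")
drop.eval()
drop.make_generation_fast_(
    name="decoder_ffn",
    retain_dropout=True,
    retain_dropout_modules=["decoder_ffn"],
)
y = drop(torch.randn(4, 8))  # still stochastic: apply_during_inference is True
```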
-""" -Layer norm done in fp32 (for fp16 training) -""" - -import torch.nn as nn -import torch.nn.functional as F - - -class Fp32GroupNorm(nn.GroupNorm): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def forward(self, input): - output = F.group_norm( - input.float(), - self.num_groups, - self.weight.float() if self.weight is not None else None, - self.bias.float() if self.bias is not None else None, - self.eps, - ) - return output.type_as(input) diff --git a/kosmos-g/fairseq/fairseq/modules/fp32_instance_norm.py b/kosmos-g/fairseq/fairseq/modules/fp32_instance_norm.py deleted file mode 100644 index 30a54496d..000000000 --- a/kosmos-g/fairseq/fairseq/modules/fp32_instance_norm.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Layer norm done in fp32 (for fp16 training) -""" - -import torch.nn as nn -import torch.nn.functional as F - - -class Fp32InstanceNorm(nn.InstanceNorm1d): - def __init__(self, *args, **kwargs): - self.transpose_last = "transpose_last" in kwargs and kwargs["transpose_last"] - if "transpose_last" in kwargs: - del kwargs["transpose_last"] - super().__init__(*args, **kwargs) - - def forward(self, input): - if self.transpose_last: - input = input.transpose(1, 2) - output = F.instance_norm( - input.float(), - running_mean=self.running_mean, - running_var=self.running_var, - weight=self.weight.float() if self.weight is not None else None, - bias=self.bias.float() if self.bias is not None else None, - use_input_stats=self.training or not self.track_running_stats, - momentum=self.momentum, - eps=self.eps, - ) - if self.transpose_last: - output = output.transpose(1, 2) - return output.type_as(input) diff --git a/kosmos-g/fairseq/fairseq/modules/gelu.py b/kosmos-g/fairseq/fairseq/modules/gelu.py deleted file mode 100644 index a2f1ecff4..000000000 --- a/kosmos-g/fairseq/fairseq/modules/gelu.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -See "Gaussian Error Linear Units (GELUs)" by Dan Hendrycks and Kevin Gimpel with -the corresponding GitHub repo: https://github.com/hendrycks/GELUs -""" - -import math - -import torch -import torch.nn as nn - - -def gelu_accurate(x): - if not hasattr(gelu_accurate, "_a"): - gelu_accurate._a = math.sqrt(2 / math.pi) - return ( - 0.5 * x * (1 + torch.tanh(gelu_accurate._a * (x + 0.044715 * torch.pow(x, 3)))) - ) - - -def gelu(x: torch.Tensor) -> torch.Tensor: - return torch.nn.functional.gelu(x.float()).type_as(x) diff --git a/kosmos-g/fairseq/fairseq/modules/grad_multiply.py b/kosmos-g/fairseq/fairseq/modules/grad_multiply.py deleted file mode 100644 index 08d15f55d..000000000 --- a/kosmos-g/fairseq/fairseq/modules/grad_multiply.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch - - -class GradMultiply(torch.autograd.Function): - @staticmethod - def forward(ctx, x, scale): - ctx.scale = scale - res = x.new(x) - return res - - @staticmethod - def backward(ctx, grad): - return grad * ctx.scale, None diff --git a/kosmos-g/fairseq/fairseq/modules/gumbel_vector_quantizer.py b/kosmos-g/fairseq/fairseq/modules/gumbel_vector_quantizer.py deleted file mode 100644 index a0e40c36d..000000000 --- a/kosmos-g/fairseq/fairseq/modules/gumbel_vector_quantizer.py +++ /dev/null @@ -1,203 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class GumbelVectorQuantizer(nn.Module): - def __init__( - self, - dim, - num_vars, - temp, - groups, - combine_groups, - vq_dim, - time_first, - activation=nn.GELU(), - weight_proj_depth=1, - weight_proj_factor=1, - ): - """Vector quantization using gumbel softmax - - Args: - dim: input dimension (channels) - num_vars: number of quantized vectors per group - temp: temperature for training. this should be a tuple of 3 elements: (start, stop, decay factor) - groups: number of groups for vector quantization - combine_groups: whether to use the vectors for all groups - vq_dim: dimensionality of the resulting quantized vector - time_first: if true, expect input in BxTxC format, otherwise in BxCxT - activation: what activation to use (should be a module). this is only used if weight_proj_depth is > 1 - weight_proj_depth: number of layers (with activation in between) to project input before computing logits - weight_proj_factor: this is used only if weight_proj_depth is > 1. scales the inner dimensionality of - projections by this factor - """ - super().__init__() - - self.groups = groups - self.combine_groups = combine_groups - self.input_dim = dim - self.num_vars = num_vars - self.time_first = time_first - - assert ( - vq_dim % groups == 0 - ), f"dim {vq_dim} must be divisible by groups {groups} for concatenation" - - var_dim = vq_dim // groups - num_groups = groups if not combine_groups else 1 - - self.vars = nn.Parameter(torch.FloatTensor(1, num_groups * num_vars, var_dim)) - nn.init.uniform_(self.vars) - - if weight_proj_depth > 1: - - def block(input_dim, output_dim): - return nn.Sequential(nn.Linear(input_dim, output_dim), activation) - - inner_dim = self.input_dim * weight_proj_factor - self.weight_proj = nn.Sequential( - *[ - block(self.input_dim if i == 0 else inner_dim, inner_dim) - for i in range(weight_proj_depth - 1) - ], - nn.Linear(inner_dim, groups * num_vars), - ) - else: - self.weight_proj = nn.Linear(self.input_dim, groups * num_vars) - nn.init.normal_(self.weight_proj.weight, mean=0, std=1) - nn.init.zeros_(self.weight_proj.bias) - - if isinstance(temp, str): - import ast - - temp = ast.literal_eval(temp) - assert len(temp) == 3, f"{temp}, {len(temp)}" - - self.max_temp, self.min_temp, self.temp_decay = temp - self.curr_temp = self.max_temp - self.codebook_indices = None - - def set_num_updates(self, num_updates): - self.curr_temp = max( - self.max_temp * self.temp_decay ** num_updates, self.min_temp - ) - - def get_codebook_indices(self): - if self.codebook_indices is None: - from itertools import product - - p = [range(self.num_vars)] * self.groups - inds = list(product(*p)) - self.codebook_indices = torch.tensor( - inds, dtype=torch.long, device=self.vars.device - ).flatten() - - if not 
self.combine_groups: - self.codebook_indices = self.codebook_indices.view( - self.num_vars ** self.groups, -1 - ) - for b in range(1, self.groups): - self.codebook_indices[:, b] += self.num_vars * b - self.codebook_indices = self.codebook_indices.flatten() - return self.codebook_indices - - def codebook(self): - indices = self.get_codebook_indices() - return ( - self.vars.squeeze(0) - .index_select(0, indices) - .view(self.num_vars ** self.groups, -1) - ) - - def sample_from_codebook(self, b, n): - indices = self.get_codebook_indices() - indices = indices.view(-1, self.groups) - cb_size = indices.size(0) - assert ( - n < cb_size - ), f"sample size {n} is greater than size of codebook {cb_size}" - sample_idx = torch.randint(low=0, high=cb_size, size=(b * n,)) - indices = indices[sample_idx] - - z = self.vars.squeeze(0).index_select(0, indices.flatten()).view(b, n, -1) - return z - - def to_codebook_index(self, indices): - res = indices.new_full(indices.shape[:-1], 0) - for i in range(self.groups): - exponent = self.groups - i - 1 - res += indices[..., i] * (self.num_vars ** exponent) - return res - - def forward_idx(self, x): - res = self.forward(x, produce_targets=True) - return res["x"], res["targets"] - - def forward(self, x, produce_targets=False): - - result = {"num_vars": self.num_vars * self.groups} - - if not self.time_first: - x = x.transpose(1, 2) - - bsz, tsz, fsz = x.shape - x = x.reshape(-1, fsz) - x = self.weight_proj(x) - x = x.view(bsz * tsz * self.groups, -1) - - _, k = x.max(-1) - hard_x = ( - x.new_zeros(*x.shape) - .scatter_(-1, k.view(-1, 1), 1.0) - .view(bsz * tsz, self.groups, -1) - ) - hard_probs = torch.mean(hard_x.float(), dim=0) - result["code_perplexity"] = torch.exp( - -torch.sum(hard_probs * torch.log(hard_probs + 1e-7), dim=-1) - ).sum() - - avg_probs = torch.softmax( - x.view(bsz * tsz, self.groups, -1).float(), dim=-1 - ).mean(dim=0) - result["prob_perplexity"] = torch.exp( - -torch.sum(avg_probs * torch.log(avg_probs + 1e-7), dim=-1) - ).sum() - - result["temp"] = self.curr_temp - - if self.training: - x = F.gumbel_softmax(x.float(), tau=self.curr_temp, hard=True).type_as(x) - else: - x = hard_x - - x = x.view(bsz * tsz, -1) - - vars = self.vars - if self.combine_groups: - vars = vars.repeat(1, self.groups, 1) - - if produce_targets: - result["targets"] = ( - x.view(bsz * tsz * self.groups, -1) - .argmax(dim=-1) - .view(bsz, tsz, self.groups) - .detach() - ) - - x = x.unsqueeze(-1) * vars - x = x.view(bsz * tsz, self.groups, self.num_vars, -1) - x = x.sum(-2) - x = x.view(bsz, tsz, -1) - - if not self.time_first: - x = x.transpose(1, 2) # BTC -> BCT - - result["x"] = x - - return result diff --git a/kosmos-g/fairseq/fairseq/modules/kmeans_attention.py b/kosmos-g/fairseq/fairseq/modules/kmeans_attention.py deleted file mode 100644 index ca5063010..000000000 --- a/kosmos-g/fairseq/fairseq/modules/kmeans_attention.py +++ /dev/null @@ -1,744 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import math -from inspect import isfunction -from operator import mul -from functools import reduce, wraps - -from aml.multimodal_video.utils.einops.lib import rearrange, repeat -from aml.multimodal_video.utils.einops.lib.layers.torch import Rearrange - -from fairseq.modules.local_attention import LocalAttention - -# constants - -TOKEN_SELF_ATTN_VALUE = -5e4 -KMEAN_INIT_ITERS = 10 - -# helper functions - - -def exists(val): - return val is not None - - -def identity(x, *args, **kwargs): - return x - - -def default(x, d): - if not exists(x): - 
return d if not isfunction(d) else d() - return x - - -def cast_tuple(x): - return x if isinstance(x, tuple) else (x,) - - -def cache_fn(f): - cache = None - - @wraps(f) - def cached_fn(*args, **kwargs): - nonlocal cache - if exists(cache): - return cache - cache = f(*args, **kwargs) - return cache - - return cached_fn - - -def to(t): - return {"device": t.device, "dtype": t.dtype} - - -def find_modules(nn_module, type): - return [module for module in nn_module.modules() if isinstance(module, type)] - - -def is_empty(t): - return t.nelement() == 0 - - -def max_neg_value(tensor): - return -torch.finfo(tensor.dtype).max - - -def batched_index_select(values, indices): - last_dim = values.shape[-1] - return values.gather(2, expand_dim(indices, -1, last_dim)) - - -def merge_dims(ind_from, ind_to, tensor): - shape = list(tensor.shape) - arr_slice = slice(ind_from, ind_to + 1) - shape[arr_slice] = [reduce(mul, shape[arr_slice])] - return tensor.reshape(*shape) - - -def expand_dim(t, dim, k): - t = t.unsqueeze(dim) - expand_shape = [-1] * len(t.shape) - expand_shape[dim] = k - return t.expand(*expand_shape) - - -def scatter_mean(src, t, index, dim, eps=1e-5): - numer = src.scatter_add(dim, index, t) - denom = src.scatter_add(dim, index, torch.ones_like(t)) - return numer / (denom + eps) - - -def split_at_index(dim, index, t): - pre_slices = (slice(None),) * dim - l = (*pre_slices, slice(None, index)) - r = (*pre_slices, slice(index, None)) - return t[l], t[r] - - -def reshape_dim(t, dim, split_dims): - shape = list(t.shape) - num_dims = len(shape) - dim = (dim + num_dims) % num_dims - shape[dim : dim + 1] = split_dims - return t.reshape(shape) - - -def ema(old, new, decay): - if not exists(old): - return new - return old * decay + new * (1 - decay) - - -def ema_inplace(moving_avg, new, decay): - if is_empty(moving_avg): - moving_avg.data.copy_(new) - return - moving_avg.data.mul_(decay).add_(new, alpha=(1 - decay)) - - -# helper classes - - -def map_first_tuple_or_el(x, fn): - if isinstance(x, tuple): - return (fn(x[0]),) + x[1:] - return fn(x) - - -class Chunk(nn.Module): - def __init__(self, chunks, fn, along_dim=-1): - super().__init__() - self.dim = along_dim - self.chunks = chunks - self.fn = fn - - def forward(self, x, **kwargs): - if self.chunks <= 1: - return self.fn(x, **kwargs) - chunks = x.chunk(self.chunks, dim=self.dim) - return torch.cat([self.fn(c, **kwargs) for c in chunks], dim=self.dim) - - -class PreNorm(nn.ModuleList): - def __init__(self, norm_class, dim, fn): - super().__init__() - self.norm = norm_class(dim) - self.fn = fn - - def forward(self, x, **kwargs): - x = self.norm(x) - return self.fn(x, **kwargs) - - -class ReZero(nn.Module): - def __init__(self, fn): - super().__init__() - self.residual_weight = nn.Parameter(torch.zeros(1)) - self.fn = fn - - def forward(self, x, **kwargs): - x = self.fn(x, **kwargs) - return map_first_tuple_or_el(x, lambda t: t * self.residual_weight) - - -class ScaleNorm(nn.Module): - def __init__(self, dim, eps=1e-5): - super().__init__() - self.g = nn.Parameter(torch.ones(1)) - self.eps = eps - - def forward(self, x): - def norm(t): - n = torch.norm(t, dim=-1, keepdim=True).clamp(min=self.eps) - return t / n * self.g - - return map_first_tuple_or_el(x, norm) - - -class ProjectInOut(nn.Module): - def __init__(self, fn, dim_in, dim_out, project_out=True): - super().__init__() - self.fn = fn - self.project_in = nn.Linear(dim_in, dim_out) - self.project_out = nn.Linear(dim_out, dim_in) if project_out else identity - - def forward(self, x, 
**kwargs): - x = self.project_in(x) - x, loss = self.fn(x, **kwargs) - x = self.project_out(x) - return x, loss - - -class MatrixMultiply(nn.Module): - def __init__(self, tensor, transpose=False): - super().__init__() - self.tensor = tensor - self.transpose = transpose - - def forward(self, x): - tensor = self.tensor - if self.transpose: - tensor = tensor.t() - return x @ tensor - - -# positional embeddings - - -class DepthWiseConv1d(nn.Module): - def __init__(self, dim_in, dim_out, kernel_size, stride=1, bias=True, causal=False): - super().__init__() - self.padding = ( - ((kernel_size - 1), 0) if causal else (kernel_size // 2, kernel_size // 2) - ) - - self.net = nn.Sequential( - nn.Conv1d( - dim_in, - dim_in, - kernel_size=kernel_size, - groups=dim_in, - stride=stride, - bias=bias, - ), - nn.Conv1d(dim_in, dim_out, 1, bias=bias), - ) - - def forward(self, x): - x = F.pad(x, self.padding, value=0.0) - return self.net(x) - - -class FixedPositionalEmbedding(nn.Module): - def __init__(self, dim, max_seq_len): - super().__init__() - inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim)) - position = torch.arange(0, max_seq_len, dtype=torch.float) - sinusoid_inp = torch.einsum("i,j->ij", position, inv_freq) - emb = torch.cat((sinusoid_inp.sin(), sinusoid_inp.cos()), dim=-1) - self.register_buffer("emb", emb) - - def forward(self, x): - return self.emb[None, : x.shape[1], :].to(x) - - -def rotate_every_two(x): - x = rearrange(x, "... (d j) -> ... d j", j=2) - x1, x2 = x.unbind(dim=-1) - x = torch.stack((-x2, x1), dim=-1) - return rearrange(x, "... d j -> ... (d j)") - - -def apply_rotary_pos_emb(q, k, sinu_pos): - sinu_pos = rearrange(sinu_pos, "() n (j d) -> n j d", j=2) - sin, cos = sinu_pos.unbind(dim=-2) - sin, cos = map(lambda t: repeat(t, "b n -> b (n j)", j=2), (sin, cos)) - q, k = map(lambda t: (t * cos) + (rotate_every_two(t) * sin), (q, k)) - return q, k - - -# kmeans related function and class - - -def update_kmeans_on_backwards(module): - module.kmean_modules = find_modules(module, Kmeans) - - def hook(_, grad_in, grad_out): - for m in module.kmean_modules: - m.update() - - return module.register_backward_hook(hook) - - -def similarity(x, means): - return torch.einsum("bhld,hcd->bhlc", x, means) - - -def dists_and_buckets(x, means): - dists = similarity(x, means) - _, buckets = torch.max(dists, dim=-1) - return dists, buckets - - -def batched_bincount(index, num_classes, dim=-1): - shape = list(index.shape) - shape[dim] = num_classes - out = index.new_zeros(shape) - out.scatter_add_(dim, index, torch.ones_like(index, dtype=index.dtype)) - return out - - -def kmeans_iter(x, means, buckets=None): - b, h, _, d, dtype, num_clusters = *x.shape, x.dtype, means.shape[1] - - if not exists(buckets): - _, buckets = dists_and_buckets(x, means) - - bins = batched_bincount(buckets, num_clusters).sum(0, keepdim=True) - zero_mask = bins.long() == 0 - - means_ = buckets.new_zeros(b, h, num_clusters, d, dtype=dtype) - means_.scatter_add_(-2, expand_dim(buckets, -1, d), x) - means_ = F.normalize(means_.sum(0, keepdim=True), dim=-1).type(dtype) - - means = torch.where(zero_mask.unsqueeze(-1), means, means_) - means = means.squeeze(0) - return means - - -def distribution(dists, window_size): - _, topk_indices = dists.topk(k=window_size, dim=-2) - indices = topk_indices.transpose(-2, -1) - return indices.reshape(*indices.size()[:2], -1) - - -class Kmeans(nn.Module): - def __init__( - self, num_heads, head_dim, num_clusters, ema_decay=0.999, commitment=1e-4 - ): - super().__init__() - 
self.commitment = commitment - self.ema_decay = ema_decay - - self.register_buffer("means", torch.randn(num_heads, num_clusters, head_dim)) - self.register_buffer("initted", torch.tensor(False)) - self.num_new_means = 0 - self.new_means = None - - @torch.no_grad() - def init(self, x): - if self.initted: - return - _, h, _, d, device, _ = *x.shape, x.device, x.dtype - - num_clusters = self.means.shape[1] - - means = x.transpose(0, 1).contiguous().view(h, -1, d) - num_samples = means.shape[1] - - if num_samples >= num_clusters: - indices = torch.randperm(num_samples, device=device)[:num_clusters] - else: - indices = torch.randint(0, num_samples, (num_clusters,), device=device) - - means = means[:, indices] - - for _ in range(KMEAN_INIT_ITERS): - means = kmeans_iter(x, means) - - self.num_new_means = 0 - self.means.data.copy_(means) - self.initted.data.copy_(torch.tensor(True)) - - @torch.no_grad() - def update(self, new_means=None): - new_means = default(new_means, self.new_means) - assert exists(new_means), "new kmeans has not been supplied" - ema_inplace(self.means, new_means, self.ema_decay) - - del self.new_means - self.new_means = None - self.num_new_means = 0 - - def forward(self, x, update_means=False): - self.init(x) - - b, dtype = x.shape[0], x.dtype - means = self.means.type(dtype) - x = F.normalize(x, 2, dim=-1).type(dtype) - - with torch.no_grad(): - dists, buckets = dists_and_buckets(x, means) - - routed_means = batched_index_select(expand_dim(means, 0, b), buckets) - loss = F.mse_loss(x, routed_means) * self.commitment - - if update_means: - with torch.no_grad(): - means = kmeans_iter(x, means, buckets) - self.new_means = ema( - self.new_means, means, self.num_new_means / (self.num_new_means + 1) - ) - self.num_new_means += 1 - - return dists, loss - - -# kmeans attention class - - -class KmeansAttention(nn.Module): - def __init__( - self, - num_clusters, - window_size, - num_heads, - head_dim, - causal=False, - dropout=0.0, - ema_decay=0.999, - commitment=1e-4, - context_window_size=None, - receives_context=False, - num_mem_kv=0, - shared_qk=False, - ): - super().__init__() - self.num_heads = num_heads - self.num_clusters = num_clusters - self.head_dim = head_dim - - self.window_size = window_size - self.context_window_size = default(context_window_size, window_size) - self.causal = causal - - self.shared_qk = shared_qk - self.receives_context = receives_context - self.kmeans = Kmeans(num_heads, head_dim, num_clusters, ema_decay, commitment) - self.dropout = nn.Dropout(dropout) - - self.num_mem_kv = max(num_mem_kv, 1 if causal and not shared_qk else 0) - self.mem_key = nn.Parameter( - torch.randn(num_heads, num_clusters, self.num_mem_kv, head_dim) - ) - self.mem_value = nn.Parameter( - torch.randn(num_heads, num_clusters, self.num_mem_kv, head_dim) - ) - - def forward(self, q, k, v, query_mask=None, key_mask=None, **kwargs): - b, h, t, d, kv_t, wsz, c_wsz, nc, device, dtype = ( - *q.shape, - k.shape[2], - self.window_size, - self.context_window_size, - self.num_clusters, - q.device, - q.dtype, - ) - is_reverse = kwargs.pop("_reverse", False) - - out = torch.zeros_like(q, dtype=dtype) - - update_kmeans = self.training and not is_reverse - - key_mask = ( - default(key_mask, query_mask) if not self.receives_context else key_mask - ) - kv_wsz = wsz if not self.receives_context else c_wsz - - wsz = min(wsz, t) - kv_wsz = min(kv_wsz, kv_t) - - if not self.shared_qk or self.receives_context: - dists, aux_loss = self.kmeans(torch.cat((q, k), dim=2), update_kmeans) - q_dists, k_dists = 
split_at_index(2, t, dists) - indices = distribution(q_dists, wsz) - kv_indices = distribution(k_dists, kv_wsz) - else: - dists, aux_loss = self.kmeans(q, update_kmeans) - k = F.normalize(k, dim=-1).to(q) - indices = distribution(dists, wsz) - kv_indices = indices - - q = batched_index_select(q, indices) - k = batched_index_select(k, kv_indices) - v = batched_index_select(v, kv_indices) - - reshape_with_window = lambda x: x.reshape(b, h, nc, -1, d) - q, k, v = map(reshape_with_window, (q, k, v)) - - m_k, m_v = map( - lambda x: expand_dim(x, 0, b).to(q), (self.mem_key, self.mem_value) - ) - k, v = map(lambda x: torch.cat(x, dim=3), ((m_k, k), (m_v, v))) - - dots = torch.einsum("bhnid,bhnjd->bhnij", q, k) * (d ** -0.5) - - mask_value = max_neg_value(dots) - - if exists(query_mask) or exists(key_mask): - query_mask = default( - query_mask, lambda: torch.ones((b, t), device=device).bool() - ) - key_mask = default( - key_mask, lambda: torch.ones((b, kv_t), device=device).bool() - ) - - q_mask = expand_dim(query_mask, 1, h).gather(2, indices) - kv_mask = expand_dim(key_mask, 1, h).gather(2, kv_indices) - q_mask, kv_mask = map(lambda t: t.reshape(b, h, nc, -1), (q_mask, kv_mask)) - mask = q_mask[:, :, :, :, None] * kv_mask[:, :, :, None, :] - mask = F.pad(mask, (self.num_mem_kv, 0), value=1) - dots.masked_fill_(~mask, mask_value) - del mask - - if self.causal: - q_mask, kv_mask = map( - lambda t: t.reshape(b, h, nc, -1), (indices, kv_indices) - ) - mask = q_mask[:, :, :, :, None] >= kv_mask[:, :, :, None, :] - mask = F.pad(mask, (self.num_mem_kv, 0), value=1) - dots.masked_fill_(~mask, mask_value) - del mask - - if self.shared_qk: - q_mask, kv_mask = map( - lambda t: t.reshape(b, h, nc, -1), (indices, kv_indices) - ) - mask = q_mask[:, :, :, :, None] == kv_mask[:, :, :, None, :] - mask = F.pad(mask, (self.num_mem_kv, 0), value=0) - dots.masked_fill_(mask, TOKEN_SELF_ATTN_VALUE) - del mask - - dots = dots.softmax(dim=-1) - dots = self.dropout(dots) - - bo = torch.einsum("bhcij,bhcjd->bhcid", dots, v) - so = torch.reshape(bo, (b, h, -1, bo.shape[-1])).type(dtype) - out = scatter_mean(out, so, indices.unsqueeze(-1).expand_as(so), -2) - return out, aux_loss - - -# feedforward - - -class GELU_(nn.Module): - def forward(self, x): - return ( - 0.5 - * x - * ( - 1 - + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3))) - ) - ) - - -GELU = nn.GELU if hasattr(nn, "GELU") else GELU_ - - -class FeedForward(nn.Module): - def __init__(self, dim, mult=4, dropout=0.0, activation=None, glu=False): - super().__init__() - activation = default(activation, GELU) - - self.glu = glu - self.w1 = nn.Linear(dim, dim * mult * (2 if glu else 1)) - self.act = activation() - self.dropout = nn.Dropout(dropout) - self.w2 = nn.Linear(dim * mult, dim) - - def forward(self, x, **kwargs): - if not self.glu: - x = self.w1(x) - x = self.act(x) - else: - x, v = self.w1(x).chunk(2, dim=-1) - x = self.act(x) * v - - x = self.dropout(x) - x = self.w2(x) - return x - - -# self attention - - -class SelfAttention(nn.Module): - def __init__( - self, - dim, - max_seq_len, - heads, - local_attn_heads, - window_size, - dim_head=None, - local_attn_window_size=None, - local_attn_radius_blocks=1, - causal=False, - attn_dropout=0.0, - dropout=0.0, - kmeans_ema_decay=0.999, - commitment_factor=1e-4, - receives_context=False, - context_window_size=None, - rel_pos_emb=True, - num_mem_kv=0, - shared_qk=False, - conv_query_kernel=9, - ): - super().__init__() - assert ( - dim_head or (dim % heads) == 0 - ), "hidden dimension must be 
divisible by number of heads" - assert ( - max_seq_len % window_size - ) == 0, "maximum sequence length must be divisible by the target window size" - assert ( - local_attn_heads <= heads - ), "number of local attention heads must be less than total heads" - assert not ( - receives_context and local_attn_heads > 0 - ), "local attention cannot be used for self attention with context" - assert not ( - receives_context and causal - ), "contextual attention layer cannot be causal" - - local_attn_window_size = default(local_attn_window_size, window_size) - context_window_size = default(context_window_size, window_size) - - self.shared_qk = shared_qk - self.receives_context = receives_context - self.heads = heads - self.local_attn_heads = local_attn_heads - self.global_attn_heads = heads - local_attn_heads - - self.causal = causal - self.window_size = window_size - - dim_head = default(dim_head, dim // heads) - dim_heads = dim_head * heads - self.dim_head = dim_head - - num_clusters = max_seq_len // window_size - - # local - - local_dim_heads = dim_head * self.local_attn_heads - - if self.local_attn_heads > 0: - rel_pos_emb_config = (dim_head, local_attn_heads) if rel_pos_emb else None - self.local_attn = LocalAttention( - dim=dim_head, - window_size=local_attn_window_size, - causal=causal, - dropout=attn_dropout, - rel_pos_emb_config=rel_pos_emb_config, - look_backward=local_attn_radius_blocks, - look_forward=0 if causal else local_attn_radius_blocks, - ) - self.local_to_qkv = nn.Linear(dim, 3 * local_dim_heads) - - # global - - global_dim_heads = dim_head * self.global_attn_heads - - if self.global_attn_heads > 0: - self.global_attn = KmeansAttention( - num_clusters, - window_size, - self.global_attn_heads, - dim_head, - causal=causal, - dropout=attn_dropout, - ema_decay=kmeans_ema_decay, - commitment=commitment_factor, - receives_context=receives_context, - num_mem_kv=num_mem_kv, - shared_qk=shared_qk, - ) - - self.to_q = nn.Sequential( - Rearrange("b n c -> b c n"), - DepthWiseConv1d(dim, global_dim_heads, conv_query_kernel, causal=causal), - Rearrange("b c n -> b n c"), - ) - - self.to_v = nn.Linear(dim, global_dim_heads, bias=False) - - if not self.shared_qk: - self.to_k = nn.Linear(dim, global_dim_heads, bias=False) - - # out - - self.to_out = nn.Linear(dim_heads, dim, bias=False) - self.dropout = nn.Dropout(dropout) - - def forward( - self, - query, - key, - value, - context=None, - key_padding_mask=None, - context_mask=None, - pos_emb=None, - **kwargs - ): - assert not ( - self.receives_context and not exists(context) - ), "context must be passed if self attention is set to receive context" - input_mask = key_padding_mask - x = query.transpose(0, 1) - b, t, _, h, dh = *x.shape, self.heads, self.dim_head - has_local, has_global = map( - lambda x: x > 0, (self.local_attn_heads, self.global_attn_heads) - ) - - split_heads = ( - lambda v: reshape_dim(v, -1, (-1, dh)).transpose(1, 2).contiguous() - ) - - if has_local: - local_qkv = self.local_to_qkv(x).chunk(3, dim=-1) - lq, lk, lv = map(split_heads, local_qkv) - - if has_global: - kv_input = x if not self.receives_context else context - - q, v = self.to_q(x), self.to_v(kv_input) - - if not self.shared_qk: - k = self.to_k(kv_input) - else: - k = self.to_q(kv_input) if self.receives_context else q - - q, k, v = map(split_heads, (q, k, v)) - - out = [] - total_loss = torch.tensor(0.0, requires_grad=True, **to(x)) - - if has_local: - local_out = self.local_attn(lq, lk, lv, input_mask=input_mask) - out.append(local_out) - - if has_global: - if 
not self.receives_context and exists(pos_emb): - q, k = apply_rotary_pos_emb(q, k, pos_emb) - - global_out, loss = self.global_attn( - q, k, v, query_mask=input_mask, key_mask=context_mask - ) - total_loss = total_loss + loss - - out.append(global_out) - - out = torch.cat(out, dim=1) - out = out.reshape(b, h, t, -1).transpose(1, 2).reshape(b, t, -1) - out = self.dropout(out.transpose(0, 1)) - # out = self.to_out(out) - return out, total_loss diff --git a/kosmos-g/fairseq/fairseq/modules/kmeans_vector_quantizer.py b/kosmos-g/fairseq/fairseq/modules/kmeans_vector_quantizer.py deleted file mode 100644 index 040db1e83..000000000 --- a/kosmos-g/fairseq/fairseq/modules/kmeans_vector_quantizer.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -from fairseq.modules import Fp32GroupNorm - - -class KmeansVectorQuantizer(nn.Module): - def __init__( - self, dim, num_vars, groups, combine_groups, vq_dim, time_first, gamma=0.25 - ): - """Vector quantization using straight pass-through estimator (i.e. kmeans) - - Args: - dim: input dimension (channels) - num_vars: number of quantized vectors per group - groups: number of groups for vector quantization - combine_groups: whether to use the vectors for all groups - vq_dim: dimensionality of the resulting quantized vector - time_first: if true, expect input in BxTxC format, otherwise in BxCxT - gamma: commitment loss coefficient - """ - super().__init__() - - self.groups = groups - self.combine_groups = combine_groups - self.input_dim = dim - self.num_vars = num_vars - self.vq_dim = vq_dim - self.time_first = time_first - - assert ( - vq_dim % groups == 0 - ), f"dim {vq_dim} must be divisible by groups {groups} for concatenation" - - self.var_dim = vq_dim // groups - num_groups = groups if not combine_groups else 1 - - self.embedding = nn.Parameter( - 0.01 * torch.randn(num_vars, num_groups, self.var_dim) - ) - self.projection = nn.Sequential( - nn.Conv1d(dim, dim, kernel_size=1, groups=groups, bias=False), - Fp32GroupNorm(groups, dim), - ) - self.gamma = gamma - self.mse_mean = nn.MSELoss(reduction="mean") - - def _pass_grad(self, x, y): - """Manually set gradient for backward pass. - for y = f(x), ensure that during the backward pass, - dL/dy = dL/dx regardless of f(x). - Returns: - y, with the gradient forced to be dL/dy = dL/dx. 
- """ - - return y.detach() + (x - x.detach()) - - @property - def expand_embedding(self): - if self.combine_groups: - return self.embedding.expand(self.num_vars, self.groups, self.var_dim) - return self.embedding - - def forward_idx(self, x): - res = self.forward(x, produce_targets=True) - return res["x"], res["targets"] - - def forward(self, x, produce_targets=False): - - result = {"num_vars": self.num_vars} - - if self.time_first: - x = x.transpose(1, 2) - - bsz, fsz, tsz = x.shape - - ze = self.projection(x) - ze_ = ze.view(bsz, self.groups, self.var_dim, tsz).permute(0, 3, 1, 2) - d = ( - (ze_.unsqueeze(0) - self.expand_embedding.unsqueeze(1).unsqueeze(1)) - .view(self.num_vars, bsz, tsz, self.groups, -1) - .norm(dim=-1, p=2) - ) - idx = d.argmin(dim=0) - zq = ( - torch.stack( - [ - self.expand_embedding[idx[..., group], group] - for group in range(self.groups) - ], - dim=-2, - ) - .view(bsz, tsz, self.groups * self.var_dim) - .permute(0, 2, 1) - ) - assert ze.shape == zq.shape, (ze.shape, zq.shape) - x = self._pass_grad(ze, zq) - - hard_x = ( - idx.new_zeros(bsz * tsz * self.groups, self.num_vars) - .scatter_(-1, idx.view(-1, 1), 1.0) - .view(bsz * tsz, self.groups, -1) - ) - hard_probs = torch.mean(hard_x.float(), dim=0) - result["code_perplexity"] = torch.exp( - -torch.sum(hard_probs * torch.log(hard_probs + 1e-7), dim=-1) - ).sum() - - if produce_targets: - result["targets"] = idx - - if self.time_first: - x = x.transpose(1, 2) # BCT -> BTC - result["x"] = x - - ze = ze.float() - zq = zq.float() - latent_loss = self.mse_mean(zq, ze.detach()) - commitment_loss = self.mse_mean(ze, zq.detach()) - - result["kmeans_loss"] = latent_loss + self.gamma * commitment_loss - - return result diff --git a/kosmos-g/fairseq/fairseq/modules/layer_drop.py b/kosmos-g/fairseq/fairseq/modules/layer_drop.py deleted file mode 100644 index 8961d8bcb..000000000 --- a/kosmos-g/fairseq/fairseq/modules/layer_drop.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -LayerDrop as described in https://arxiv.org/abs/1909.11556. -""" - -import torch -import torch.nn as nn - - -class LayerDropModuleList(nn.ModuleList): - """ - A LayerDrop implementation based on :class:`torch.nn.ModuleList`. - - We refresh the choice of which layers to drop every time we iterate - over the LayerDropModuleList instance. During evaluation we always - iterate over all layers. - - Usage:: - - layers = LayerDropList(p=0.5, modules=[layer1, layer2, layer3]) - for layer in layers: # this might iterate over layers 1 and 3 - x = layer(x) - for layer in layers: # this might iterate over all layers - x = layer(x) - for layer in layers: # this might not iterate over any layers - x = layer(x) - - Args: - p (float): probability of dropping out each layer - modules (iterable, optional): an iterable of modules to add - """ - - def __init__(self, p, modules=None): - super().__init__(modules) - self.p = p - - def __iter__(self): - dropout_probs = torch.empty(len(self)).uniform_() - for i, m in enumerate(super().__iter__()): - if not self.training or (dropout_probs[i] > self.p): - yield m diff --git a/kosmos-g/fairseq/fairseq/modules/layer_norm.py b/kosmos-g/fairseq/fairseq/modules/layer_norm.py deleted file mode 100644 index 0b276ce02..000000000 --- a/kosmos-g/fairseq/fairseq/modules/layer_norm.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F - -try: - from apex.normalization import FusedLayerNorm as _FusedLayerNorm - - has_fused_layernorm = True - - class FusedLayerNorm(_FusedLayerNorm): - @torch.jit.unused - def forward(self, x): - if not x.is_cuda: - return super().forward(x) - else: - with torch.cuda.device(x.device): - return super().forward(x) - -except ImportError: - has_fused_layernorm = False - - -def LayerNorm(normalized_shape, eps=1e-5, elementwise_affine=True, export=False): - if torch.jit.is_scripting() or torch.jit.is_tracing(): - export = True - if not export and torch.cuda.is_available() and has_fused_layernorm: - return FusedLayerNorm(normalized_shape, eps, elementwise_affine) - return torch.nn.LayerNorm(normalized_shape, eps, elementwise_affine) - - -class Fp32LayerNorm(nn.LayerNorm): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - def forward(self, input): - output = F.layer_norm( - input.float(), - self.normalized_shape, - self.weight.float() if self.weight is not None else None, - self.bias.float() if self.bias is not None else None, - self.eps, - ) - return output.type_as(input) diff --git a/kosmos-g/fairseq/fairseq/modules/learned_positional_embedding.py b/kosmos-g/fairseq/fairseq/modules/learned_positional_embedding.py deleted file mode 100644 index 378d0f707..000000000 --- a/kosmos-g/fairseq/fairseq/modules/learned_positional_embedding.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Dict, Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from torch import Tensor - - -class LearnedPositionalEmbedding(nn.Embedding): - """ - This module learns positional embeddings up to a fixed maximum size. - Padding ids are ignored by either offsetting based on padding_idx - or by setting padding_idx to None and ensuring that the appropriate - position ids are passed to the forward function. - """ - - def __init__(self, num_embeddings: int, embedding_dim: int, padding_idx: int): - super().__init__(num_embeddings, embedding_dim, padding_idx) - self.onnx_trace = False - if self.padding_idx is not None: - self.max_positions = self.num_embeddings - self.padding_idx - 1 - else: - self.max_positions = self.num_embeddings - - def forward( - self, - input: Tensor, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - positions: Optional[Tensor] = None, - ): - """Input is expected to be of size [bsz x seqlen].""" - assert (positions is None) or ( - self.padding_idx is None - ), "If positions is pre-computed then padding_idx should not be set." 
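For reference, the `utils.make_positions` call below numbers non-pad tokens from `padding_idx + 1` and leaves pads at `padding_idx`. A sketch of that convention, not the fairseq implementation itself:

```python
import torch

def make_positions_sketch(tokens, padding_idx):
    # Non-pad tokens get padding_idx+1, padding_idx+2, ...; pads stay put.
    mask = tokens.ne(padding_idx).int()
    return torch.cumsum(mask, dim=1) * mask + padding_idx

tokens = torch.tensor([[7, 9, 1, 1]])        # suppose padding_idx == 1
print(make_positions_sketch(tokens, 1))      # tensor([[2, 3, 1, 1]])
```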
- - if positions is None: - if incremental_state is not None: - # positions is the same for every token when decoding a single step - # Without the int() cast, it doesn't work in some cases when exporting to ONNX - positions = torch.zeros( - (1, 1), device=input.device, dtype=input.dtype - ).fill_(int(self.padding_idx + input.size(1))) - else: - positions = utils.make_positions( - input, self.padding_idx, onnx_trace=self.onnx_trace - ) - return F.embedding( - positions, - self.weight, - self.padding_idx, - self.max_norm, - self.norm_type, - self.scale_grad_by_freq, - self.sparse, - ) diff --git a/kosmos-g/fairseq/fairseq/modules/lightconv_layer/__init__.py b/kosmos-g/fairseq/fairseq/modules/lightconv_layer/__init__.py deleted file mode 100644 index 3b2a99c12..000000000 --- a/kosmos-g/fairseq/fairseq/modules/lightconv_layer/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .lightconv_layer import LightconvLayer # noqa diff --git a/kosmos-g/fairseq/fairseq/modules/lightconv_layer/cuda_function_gen.py b/kosmos-g/fairseq/fairseq/modules/lightconv_layer/cuda_function_gen.py deleted file mode 100644 index a25433dd8..000000000 --- a/kosmos-g/fairseq/fairseq/modules/lightconv_layer/cuda_function_gen.py +++ /dev/null @@ -1,289 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -def gen_forward(): - - kernels = [3, 5, 7, 15, 31, 63, 127, 255] - seqs = [32 * x for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]] - - head = """ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -#include "lightconv_cuda.cuh" - -std::vector<at::Tensor> lightconv_cuda_forward(at::Tensor input, at::Tensor filters, int padding_l) { - - at::DeviceGuard g(input.device()); - const auto minibatch = input.size(0); - const auto numFeatures = input.size(1); - const auto sequenceLength = input.size(2); - - const auto numHeads = filters.size(0); - const auto filterSize = filters.size(1); - - const auto numFiltersInBlock = numFeatures / numHeads; - - const dim3 blocks(minibatch, numFeatures); - - auto output = at::zeros_like(input); - auto stream = at::cuda::getCurrentCUDAStream(); -""" - - sequence_if = """ - if (sequenceLength <= {seq}) {{ - switch(filterSize) {{ -""" - - case_k = """ - case {k}: -""" - - main_block = """ - if (padding_l == {pad}) {{ - AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.scalar_type(), "lightconv_forward", ([&] {{ - lightconv_forward_kernel<{k}, {b_size}, {pad}, scalar_t> - <<<blocks, {b_size}, 0, stream>>>( - input.data<scalar_t>(), - filters.data<scalar_t>(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - output.data<scalar_t>()); - }})); - }} else -""" - - bad_padding = """ - { - std::cout << "WARNING: Unsupported padding size - skipping forward pass" << std::endl; - } - break; -""" - - bad_filter = """ - default: - std::cout << "WARNING: Unsupported filter length passed - skipping forward pass" << std::endl; - } -""" - - con_else = """ - } else -""" - - final_else = """ - { - switch(filterSize) { -""" - - final_return = """ - } - - return {output}; -} -""" - - with open("lightconv_cuda_forward.cu", "w") as forward: - forward.write(head) - for seq in seqs: - forward.write(sequence_if.format(seq=seq)) - for k in kernels: - forward.write(case_k.format(k=k)) - for pad in [k // 2, k - 1]: - forward.write(main_block.format(k=k, b_size=seq, pad=pad)) - forward.write(bad_padding) - forward.write(bad_filter) - forward.write(con_else) - - forward.write(final_else) - for k in kernels: - forward.write(case_k.format(k=k)) - for pad in [k // 2, k - 1]: - forward.write(main_block.format(k=k, b_size=seq, pad=pad)) - forward.write(bad_padding) - forward.write(bad_filter) - forward.write(final_return) - - -def gen_backward(): - - head = """ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -#include "lightconv_cuda.cuh" - -std::vector<at::Tensor> lightconv_cuda_backward( - at::Tensor gradOutput, - int padding_l, - at::Tensor input, - at::Tensor filters) { - - // gradWrtInput - const int minibatch = input.size(0); - const int numFeatures = input.size(1); - const int sequenceLength = input.size(2); - - const int numHeads = filters.size(0); - const int filterSize = filters.size(1); - - const dim3 gradBlocks(minibatch, numFeatures); - const dim3 weightGradFirstpassShortBlocks(minibatch, numHeads); - const dim3 weightGradSecondpassBlocks(numHeads, filterSize); - - const int numFiltersInBlock = numFeatures / numHeads; - - auto gradInput = at::zeros_like(input); - auto gradFilters = at::zeros_like(filters); - - at::DeviceGuard g(input.device()); - auto stream = at::cuda::getCurrentCUDAStream(); - - switch(filterSize) { -""" - - sequence_if = """ - if (sequenceLength <= {seq}) {{ -""" - - case_k = """ - case {k}: -""" - - main_block = """ - if (padding_l == {p}) {{ - AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.scalar_type(), "lightconv_backward", ([&] {{ - lightconv_grad_wrt_input_kernel<{k}, {b_size}, {p}, scalar_t> - <<<gradBlocks, {b_size}, 0, stream>>>( - gradOutput.data<scalar_t>(), - filters.data<scalar_t>(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - gradInput.data<scalar_t>()); - -""" - - weight_grad_short = """ - at::Tensor tempSumGradFilters = at::zeros({{minibatch, numHeads, filterSize}}, input.options().dtype(at::kFloat)); - lightconv_grad_wrt_weights_firstpass_short_kernel<{k}, {b_size}, {p}, scalar_t> - <<<weightGradFirstpassShortBlocks, {b_size}, 0, stream>>>( - input.data<scalar_t>(), - gradOutput.data<scalar_t>(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - numHeads, - tempSumGradFilters.data<float>() - ); - - lightconv_grad_wrt_weights_secondpass_short_kernel<{k}, {b_size}, scalar_t> - <<<weightGradSecondpassBlocks, {b_size}, 0, stream>>>( - tempSumGradFilters.data<float>(), - minibatch, - numFiltersInBlock, - gradFilters.data<scalar_t>() - ); - }})); - }} else -""" - - weight_grad = """ - at::Tensor tempSumGradFilters = at::zeros({{minibatch, numFeatures, filterSize}}, input.options().dtype(at::kFloat)); - lightconv_grad_wrt_weights_firstpass_kernel<{k}, {b_size}, {p}, scalar_t> - <<<gradBlocks, {b_size}, 0, stream>>>( - input.data<scalar_t>(), - gradOutput.data<scalar_t>(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - tempSumGradFilters.data<float>() - ); - - lightconv_grad_wrt_weights_secondpass_kernel<{k}, {b_size}, scalar_t> - <<<weightGradSecondpassBlocks, {b_size}, 0, stream>>>( - tempSumGradFilters.data<float>(), - minibatch, - numFiltersInBlock, - gradFilters.data<scalar_t>() - ); - }})); - }} else -""" - - bad_padding = """ - { - std::cout << "WARNING: Unsupported padding size - skipping backward pass" << std::endl; - } -""" - - breakout = """ - break; -""" - - bad_filter = """ - default: - std::cout << "WARNING: Unsupported filter length passed - skipping backward pass" << std::endl; -""" - - con_else = """ - } else -""" - - final_else = """ - { - switch(filterSize) { -""" - - last_return = """ - } - return {gradInput, gradFilters}; -} -""" - - kernels = [3, 5, 7, 15, 31, 63, 127, 255] - seqs = [32 * x for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]] - thresh = [32, 32, 64, 128, 256, -1, -1, -1] - max_mem = [-1, -1, -1, -1, -1, 192, 96, 64] - - with open("lightconv_cuda_backward.cu", "w") as backward: - backward.write(head) - for (k, t, mem) in zip(kernels, 
thresh, max_mem): - backward.write(case_k.format(k=k)) - for seq in seqs: - if (t == -1 or seq <= t) and (mem == -1 or seq < mem): - backward.write(sequence_if.format(seq=seq)) - for p in [k // 2, k - 1]: - backward.write(main_block.format(k=k, b_size=seq, p=p)) - backward.write(weight_grad_short.format(k=k, b_size=seq, p=p)) - backward.write(bad_padding) - else: - for p in [k // 2, k - 1]: - backward.write(main_block.format(k=k, b_size=32, p=p)) - backward.write(weight_grad.format(k=k, b_size=32, p=p)) - backward.write(bad_padding) - backward.write(breakout) - break - backward.write(con_else) - backward.write(bad_filter) - backward.write(last_return) - - -if __name__ == "__main__": - gen_forward() - gen_backward() diff --git a/kosmos-g/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cpp b/kosmos-g/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cpp deleted file mode 100644 index ece47a8d9..000000000 --- a/kosmos-g/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cpp +++ /dev/null @@ -1,51 +0,0 @@ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include <torch/extension.h> -#include <vector> - -std::vector<at::Tensor> -lightconv_cuda_forward(at::Tensor input, at::Tensor filters, int padding_l); - -std::vector<at::Tensor> lightconv_cuda_backward( - at::Tensor gradOutput, - int padding_l, - at::Tensor input, - at::Tensor filters); - -#define CHECK_CUDA(x) \ - AT_ASSERTM(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) \ - AT_ASSERTM(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) \ - CHECK_CUDA(x); \ - CHECK_CONTIGUOUS(x) - -std::vector<at::Tensor> -lightconv_forward(at::Tensor input, at::Tensor filters, int padding_l) { - CHECK_INPUT(input); - CHECK_INPUT(filters); - - return lightconv_cuda_forward(input, filters, padding_l); -} - -std::vector<at::Tensor> lightconv_backward( - at::Tensor gradOutput, - int padding_l, - at::Tensor input, - at::Tensor filters) { - CHECK_INPUT(gradOutput); - CHECK_INPUT(input); - CHECK_INPUT(filters); - - return lightconv_cuda_backward(gradOutput, padding_l, input, filters); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("forward", &lightconv_forward, "lighconv forward (CUDA)"); - m.def("backward", &lightconv_backward, "lighconv backward (CUDA)"); -} diff --git a/kosmos-g/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cuh b/kosmos-g/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cuh deleted file mode 100644 index 610ab399e..000000000 --- a/kosmos-g/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda.cuh +++ /dev/null @@ -1,79 +0,0 @@ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
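A note on how these pieces fit together: `cuda_function_gen.py` is run offline to materialize `lightconv_cuda_forward.cu` and `lightconv_cuda_backward.cu`, which the C++ bindings above expose as `lightconv_cuda.forward`/`backward`. A hypothetical build-and-call sketch using PyTorch's JIT extension loader; the file paths and tensor shapes are assumptions for illustration, not the repo's actual build, which goes through setup.py:

```python
import torch
from torch.utils.cpp_extension import load

# JIT-compile the extension from the generated and hand-written sources.
lightconv_cuda = load(
    name="lightconv_cuda",
    sources=["lightconv_cuda.cpp", "lightconv_cuda_kernel.cu"],
)

x = torch.randn(2, 8, 64, device="cuda").contiguous()    # (B, C, T)
filters = torch.randn(4, 3, device="cuda").contiguous()  # (numHeads, filterSize)
out, = lightconv_cuda.forward(x, filters, 1)             # padding_l = 1
print(out.shape)  # torch.Size([2, 8, 64])
```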
- */ - -#include <ATen/ATen.h> -#include <c10/cuda/CUDAStream.h> - -#include <cuda.h> -#include <cuda_runtime.h> - -#include <algorithm> -#include <functional> -#include <iostream> -#include <stdexcept> -#include <utility> -#include <vector> - -#include <assert.h> -#include <stdlib.h> - -#define SHFL_MASK 0xffffffff - -template <int FS, int SB, int padding_l, typename scalar_t> -__global__ void lightconv_forward_kernel( - const scalar_t* input, - const scalar_t* filters, - int minibatch, - int sequenceLength, - int numFeatures, - int numFiltersInBlock, - scalar_t* output); - -template <int FS, int SB, int padding_l, typename scalar_t> -__global__ void lightconv_grad_wrt_input_kernel( - const scalar_t* input, - const scalar_t* filters, - int minibatch, - int sequenceLength, - int numFeatures, - int numFiltersInBlock, - scalar_t* output); - -template <int FS, int SB, int padding_l, typename scalar_t> -__global__ void lightconv_grad_wrt_weights_firstpass_short_kernel( - const scalar_t* input, - const scalar_t* gradInput, - int minibatch, - int sequenceLength, - int numFeatures, - int numFiltersInBlock, - int numHeads, - float* output); - -template <int FS, int SB, typename scalar_t> -__global__ void lightconv_grad_wrt_weights_secondpass_short_kernel( - const float* input, - const int minibatch, - const int numFiltersInBlock, - scalar_t* output); - -template <int FS, int SB, int padding_l, typename scalar_t> -__global__ void lightconv_grad_wrt_weights_firstpass_kernel( - const scalar_t* input, - const scalar_t* gradInput, - int minibatch, - int sequenceLength, - int numFeatures, - int numFiltersInBlock, - float* output); - -template <int FS, int SB, typename scalar_t> -__global__ void lightconv_grad_wrt_weights_secondpass_kernel( - const float* input, - const int minibatch, - const int numFiltersInBlock, - scalar_t* output); diff --git a/kosmos-g/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu b/kosmos-g/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu deleted file mode 100644 index cdf31d5d2..000000000 --- a/kosmos-g/fairseq/fairseq/modules/lightconv_layer/lightconv_cuda_kernel.cu +++ /dev/null @@ -1,400 +0,0 @@ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */ - -#include "../cuda_utils.cu" -#include "lightconv_cuda.cuh" -#include "lightconv_cuda_backward.cu" -#include "lightconv_cuda_forward.cu" - -template <int FS, int SB, int padding_l, typename scalar_t> -__global__ void lightconv_forward_kernel( - const scalar_t* input, - const scalar_t* filters, - int minibatch, - int sequenceLength, - int numFeatures, - int numFiltersInBlock, - scalar_t* output) { - const int tid = threadIdx.x; - const int batchIdx = blockIdx.x; - const int featureIdx = blockIdx.y; - const int filterIdx = featureIdx / numFiltersInBlock; - - const int IOOffset = - numFeatures * sequenceLength * batchIdx + featureIdx * sequenceLength; - const scalar_t* inputFeature = &input[IOOffset]; - scalar_t* outputFeature = &output[IOOffset]; - const scalar_t* inputFilter = &filters[filterIdx * FS]; - - assert(blockDim.x == SB); - - scalar_t filter[FS]; -#pragma unroll - for (int i = 0; i < FS; ++i) { - filter[i] = inputFilter[i]; - } - - __shared__ scalar_t temp[SB + FS]; - zeroSharedMem<FS, SB, padding_l>(temp); - - const int numIterations = divUp<int, int>(sequenceLength, SB); - - for (int i = 0; i < numIterations; ++i) { - // Read input into shared memory - const int inputOffset = i * SB; - - load_input_to_shared<FS, SB, padding_l>( - inputFeature, - inputOffset, - sequenceLength, - i, - numIterations, - (numIterations == 1), - temp); - - __syncthreads(); - - scalar_t out = 0; -#pragma unroll - for (int j = 0; j < FS; ++j) { - out += filter[j] * temp[tid + j]; - } - - // Write output - const int outputOffset = inputOffset; - if ((outputOffset + tid) < sequenceLength) { - outputFeature[outputOffset + tid] = out; - } - - __syncthreads(); - } -} - -template <int FS, int SB, int padding_l, typename scalar_t> -__global__ void lightconv_grad_wrt_input_kernel( - const scalar_t* input, - const scalar_t* filters, - int minibatch, - int sequenceLength, - int numFeatures, - int numFiltersInBlock, - scalar_t* output) { - // input grad kernel is similar to forward kernel - const int tid = threadIdx.x; - const int batchIdx = blockIdx.x; - const int featureIdx = blockIdx.y; - const int filterIdx = featureIdx / numFiltersInBlock; - - const int IOOffset = - numFeatures * sequenceLength * batchIdx + featureIdx * sequenceLength; - const scalar_t* inputFeature = &input[IOOffset]; - scalar_t* outputFeature = &output[IOOffset]; - const scalar_t* inputFilter = &filters[filterIdx * FS]; - - assert(blockDim.x == SB); - - scalar_t filter[FS]; - -// The only change is loading the filter in reverse -#pragma unroll - for (int i = 0; i < FS; ++i) { - filter[i] = inputFilter[FS - i - 1]; - } - - __shared__ scalar_t temp[SB + FS]; - const int padding = FS - padding_l - 1; - zeroSharedMem<FS, SB, padding>(temp); - - __syncthreads(); - - const int numIterations = divUp<int, int>(sequenceLength, SB); - - for (int i = 0; i < numIterations; ++i) { - // Read input into shared memory - const int inputOffset = i * SB; - - load_input_to_shared<FS, SB, padding>( - inputFeature, - inputOffset, - sequenceLength, - i, - numIterations, - false, - temp); - - __syncthreads(); - - scalar_t out = 0; -#pragma unroll - for (int j = 0; j < FS; ++j) { - out += filter[j] * temp[tid + j]; - } - - // Write output - const int outputOffset = inputOffset; - if ((outputOffset + tid) < sequenceLength) { - outputFeature[outputOffset + tid] = out; - } - - __syncthreads(); - } -} - -// This is by far the most expensive kernel in terms of time taken. 
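The forward kernel above tiles the sequence into `SB`-sized blocks held in shared memory, with each thread producing one output position per iteration. Functionally it matches a depthwise 1-D cross-correlation in which each of the `numHeads` filters is shared across a block of `numFeatures / numHeads` channels. A PyTorch reference of those semantics, useful for checking the kernel; this is a sketch, not the shipped code path:

```python
import torch
import torch.nn.functional as F

def lightconv_forward_reference(x, filters, padding_l):
    # x: (B, C, T), filters: (H, FS); padding_l == FS - 1 gives a causal conv.
    B, C, T = x.shape
    H, FS = filters.shape
    # Each head's filter is repeated over its block of C // H channels.
    w = filters.repeat_interleave(C // H, dim=0).unsqueeze(1)  # (C, 1, FS)
    x = F.pad(x, (padding_l, FS - 1 - padding_l))
    return F.conv1d(x, w, groups=C)  # output: (B, C, T)
```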
-// Can be 16x slower than the forward or grad_wrt_input when filter size is 31 -template <int FS, int SB, int padding_l, typename scalar_t> -__global__ void lightconv_grad_wrt_weights_firstpass_short_kernel( - const scalar_t* input, - const scalar_t* gradInput, - int minibatch, - int sequenceLength, - int numFeatures, - int numFiltersInBlock, - int numHeads, - float* output) { - const int tid = threadIdx.x; - const int batchIdx = blockIdx.x; - const int filterIdx = blockIdx.y; - - const int numIterations = divUp<int, int>(sequenceLength, SB); - - float* tempOutputGradWeight = &output[filterIdx * FS * minibatch]; - - assert(blockDim.x == SB); - - __shared__ scalar_t tempInput[SB + FS]; - __shared__ scalar_t tempGradInput[SB + FS]; - - // local weight accumulation - float accumWeights[FS]; - - // Initialize memory - for (int i = 0; i < FS; ++i) { - accumWeights[i] = float(0.0); - } - - // loop over each sequence within filterblock - for (int idxInFilterBlock = 0; idxInFilterBlock < numFiltersInBlock; - ++idxInFilterBlock) { - const int featureOffset = batchIdx * numFeatures * sequenceLength + - (filterIdx * numFiltersInBlock + idxInFilterBlock) * sequenceLength; - const scalar_t* inputFeature = &input[featureOffset]; - const scalar_t* gradInputFeature = &gradInput[featureOffset]; - - zeroSharedMem<FS, SB, padding_l>(tempInput); - zeroSharedMem<FS, SB, (FS / 2)>(tempGradInput); - __syncthreads(); - - for (int i = 0; i < numIterations; ++i) { - const int inputOffset = i * SB; - - load_input_to_shared<FS, SB, padding_l>( - inputFeature, - inputOffset, - sequenceLength, - i, - numIterations, - false, - tempInput); - load_input_to_shared<FS, SB, (FS / 2)>( - gradInputFeature, - inputOffset, - sequenceLength, - i, - numIterations, - false, - tempGradInput); - - __syncthreads(); - - const int gradIndex = (FS / 2) + tid; - scalar_t tempGrad = tempGradInput[gradIndex]; - -#pragma unroll - for (int j = 0; j < FS; j++) { - const int inputIndex = tid + j; - accumWeights[j] += tempInput[inputIndex] * tempGrad; - } - - __syncthreads(); - } - } - - // Row-major sum - for (int filterWeightIdx = 0; filterWeightIdx < FS; ++filterWeightIdx) { - float temp; - if (tid < sequenceLength) { - temp = accumWeights[filterWeightIdx]; - } else { - temp = float(0.0); - } - - const int outputOffset = filterWeightIdx * minibatch + batchIdx; - - temp = blockReduce(temp); - - if (tid == 0) { - tempOutputGradWeight[outputOffset] = temp; - } - } -} - -template <int FS, int SB, typename scalar_t> -__global__ void lightconv_grad_wrt_weights_secondpass_short_kernel( - const float* input, - const int minibatch, - const int numFiltersInBlock, - scalar_t* output) { - assert(blockDim.x == SB); - - const int tid = threadIdx.x; - - const int filterIdx = blockIdx.x; - const int filterWeightIdx = blockIdx.y; - - const int inputOffset = - filterIdx * FS * minibatch + filterWeightIdx * minibatch; - const float* tempInput = &input[inputOffset]; - - // read into shared memory for reduction - int readIndex = tid; - - float sum = 0.0; - while (readIndex < minibatch) { - sum += tempInput[readIndex]; - readIndex += SB; - } - - float temp = blockReduce(sum); - - if (tid == 0) { - output[blockIdx.x * FS + blockIdx.y] = temp; - } -} - -// This is by far the most expensive kernel in terms of time taken. 
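Both weight-gradient variants use the same two-pass scheme: pass one accumulates per-minibatch partial sums in the fp32 `at::kFloat` temporaries, presumably to preserve precision when summing many half-precision terms, and pass two reduces those partials to the final filter gradient. In tensor terms the second pass is just a sum over the batch-partial axis; a shape-level sketch with assumed sizes:

```python
import torch

# partials: (numHeads, filterSize, minibatch) accumulated in fp32 by pass one
partials = torch.randn(4, 3, 8, dtype=torch.float32)
grad_filters = partials.sum(dim=-1)   # second pass: reduce over minibatch
print(grad_filters.shape)             # torch.Size([4, 3])
```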
-// Can be 16x slower than the forward or grad_wrt_input when filter size is 31 -template <int FS, int SB, int padding_l, typename scalar_t> -__global__ void lightconv_grad_wrt_weights_firstpass_kernel( - const scalar_t* input, - const scalar_t* gradInput, - int minibatch, - int sequenceLength, - int numFeatures, - int numFiltersInBlock, - float* output) { - assert(blockDim.x == SB); - - const int tid = threadIdx.x; - const int batchIdx = blockIdx.x; - const int featureIdx = blockIdx.y; - const int filterIdx = featureIdx / numFiltersInBlock; - const int idxInFilterBlock = featureIdx % numFiltersInBlock; - - const int numIterations = divUp<int, int>(sequenceLength, SB); - - float temp; - - __shared__ scalar_t tempInput[SB + FS]; - __shared__ scalar_t tempGradInput[SB + FS]; - zeroSharedMem<FS, SB, padding_l>(tempInput); - zeroSharedMem<FS, SB, (FS / 2)>(tempGradInput); - __syncthreads(); - - float accumWeights[FS]; - - for (int i = 0; i < FS; ++i) { - accumWeights[i] = float(0.0); - } - - const int IOOffset = - batchIdx * numFeatures * sequenceLength + featureIdx * sequenceLength; - const scalar_t* inputFeature = &input[IOOffset]; - const scalar_t* gradInputFeature = &gradInput[IOOffset]; - float* tempOutputGradWeight = - &output[filterIdx * FS * minibatch * numFiltersInBlock]; - - for (int i = 0; i < numIterations; ++i) { - const int inputOffset = i * SB; - - load_input_to_shared<FS, SB, padding_l>( - inputFeature, - inputOffset, - sequenceLength, - i, - numIterations, - false, - tempInput); - load_input_to_shared<FS, SB, (FS / 2)>( - gradInputFeature, - inputOffset, - sequenceLength, - i, - numIterations, - false, - tempGradInput); - __syncthreads(); - -#pragma unroll - for (int j = 0; j < FS; ++j) { - accumWeights[j] += tempInput[tid + j] * tempGradInput[tid + (FS / 2)]; - } - - __syncthreads(); - } - - // Row-major sum - for (int filterWeightIdx = 0; filterWeightIdx < FS; ++filterWeightIdx) { - // Write to shared memory before reduction - if (tid < sequenceLength) { - temp = accumWeights[filterWeightIdx]; - } else { - temp = float(0.0); - } - - temp = blockReduce(temp); - - const int outputOffset = filterWeightIdx * minibatch * numFiltersInBlock + - batchIdx * numFiltersInBlock + idxInFilterBlock; - - if (tid == 0) { - tempOutputGradWeight[outputOffset] = temp; - } - } -} - -template <int FS, int SB, typename scalar_t> -__global__ void lightconv_grad_wrt_weights_secondpass_kernel( - const float* input, - const int minibatch, - const int numFiltersInBlock, - scalar_t* output) { - assert(blockDim.x == SB); - const int tid = threadIdx.x; - - // What is the id within a minibatch - const int filterIdx = blockIdx.x; - const int filterWeightIdx = blockIdx.y; - - const int inputOffset = filterIdx * FS * minibatch * numFiltersInBlock + - filterWeightIdx * minibatch * numFiltersInBlock; - const float* tempInput = &input[inputOffset]; - - int readIndex = tid; - - float sum = float(0.0); - while (readIndex < (minibatch * numFiltersInBlock)) { - sum += tempInput[readIndex]; - readIndex += SB; - } - - float temp = blockReduce(sum); - - if (tid == 0) { - output[blockIdx.x * FS + blockIdx.y] = temp; - } -} diff --git a/kosmos-g/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py b/kosmos-g/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py deleted file mode 100644 index e7e597f47..000000000 --- a/kosmos-g/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py +++ /dev/null @@ -1,137 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
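Editor's note: before the Python wrapper below, it may help to pin down what the CUDA kernels above compute. The following NumPy sketch is illustrative only; the function name, the `(B, C, T)` input shape, the `(H, K)` per-head filter shape, and the padding convention are assumptions read off the kernel code, not an API exposed by this repository.

```python
import numpy as np

def lightconv_reference(x, filters, padding_l):
    """Reference semantics of lightconv_forward_kernel (illustrative sketch).

    x: (B, C, T) input; filters: (H, K), one filter per head, shared by
    C // H consecutive channels (assumes C divisible by H);
    padding_l: amount of left zero-padding.
    """
    B, C, T = x.shape
    H, K = filters.shape
    out = np.zeros_like(x)
    # Pad so that out[t] = sum_j w[j] * x[t + j - padding_l]
    pad = ((0, 0), (padding_l, K - 1 - padding_l))
    for c in range(C):
        w = filters[c // (C // H)]      # channels in a block share their head's filter
        xp = np.pad(x[:, c], pad)
        for t in range(T):
            out[:, c, t] = xp[:, t:t + K] @ w
    return out
```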
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import lightconv_cuda -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout -from torch import nn -from torch.autograd import Function - - -class lightconvFunction(Function): - @staticmethod - def forward(ctx, x, weights, padding_l): - ctx.padding_l = padding_l - outputs = lightconv_cuda.forward(x, weights, padding_l) - variables = [x, weights] - ctx.save_for_backward(*variables) - return outputs[0] - - @staticmethod - def backward(ctx, grad_output): - outputs = lightconv_cuda.backward( - grad_output.contiguous(), ctx.padding_l, *ctx.saved_tensors - ) - grad_input, grad_weights = outputs - return grad_input, grad_weights, None - - -@with_incremental_state -class LightconvLayer(nn.Module): - def __init__( - self, - input_size, - kernel_size=1, - padding_l=None, - weight_softmax=False, - num_heads=1, - weight_dropout=0.0, - bias=False, - ): - super(LightconvLayer, self).__init__() - self.input_size = input_size - self.kernel_size = kernel_size - self.padding_l = padding_l - self.num_heads = num_heads - self.weight_softmax = weight_softmax - self.weight_dropout_module = FairseqDropout( - weight_dropout, module_name=self.__class__.__name__ - ) - - self.weight = nn.Parameter(torch.Tensor(num_heads, kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(input_size)) - else: - self.bias = None - self.reset_parameters() - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + "." if name != "" else "" - for k, v in state_dict.items(): - if k.endswith(prefix + "weight"): - if v.dim() == 3 and v.size(1) == 1: - state_dict[k] = v.squeeze(1) - - def reset_parameters(self): - nn.init.xavier_uniform_(self.weight) - if self.bias is not None: - nn.init.constant_(self.bias, 0.0) - - def forward(self, x, incremental_state=None): - - # during inference time, incremental BMM is faster - if incremental_state is not None: - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is None: - input_buffer = x.new() - x_unfold = torch.cat([input_buffer, x.unsqueeze(3)], dim=3) - if self.kernel_size > 1: - self._set_input_buffer( - incremental_state, x_unfold[:, :, :, -self.kernel_size + 1 :] - ) - x_unfold = x_unfold.view(T * B * H, R, -1) - - weight = self.weight - if self.weight_softmax: - weight = F.softmax(weight.float(), dim=1).type_as(weight) - - weight = weight[:, -x_unfold.size(2) :] - - K = weight.size(1) - - weight = ( - weight.view(1, H, K) - .expand(T * B, H, K) - .contiguous() - .view(T * B * H, K, 1) - ) - - weight = self.weight_dropout_module(weight) - output = torch.bmm(x_unfold, weight) # T*B*H x R x 1 - output = output.view(T, B, C) - return output - - # during training time, use CUDA kernel - else: - x = x.permute(1, 2, 0).contiguous() - weight = self.weight - if self.weight_softmax: - weight = F.softmax(self.weight, -1) - if self.weight_dropout_module.p: - weight = self.weight_dropout_module(weight) - return lightconvFunction.apply(x, weight, self.padding_l).permute(2, 0, 1) - - def reorder_incremental_state(self, incremental_state, new_order): - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - input_buffer = input_buffer.index_select(1, new_order) - 
self._set_input_buffer(incremental_state, input_buffer) - - def _get_input_buffer(self, incremental_state): - return utils.get_incremental_state(self, incremental_state, "input_buffer") - - def _set_input_buffer(self, incremental_state, new_buffer): - return utils.set_incremental_state( - self, incremental_state, "input_buffer", new_buffer - ) - - def half(self): - return self._apply(lambda t: t.half() if t.is_floating_point() else t) diff --git a/kosmos-g/fairseq/fairseq/modules/lightconv_layer/setup.py b/kosmos-g/fairseq/fairseq/modules/lightconv_layer/setup.py deleted file mode 100644 index 052635be7..000000000 --- a/kosmos-g/fairseq/fairseq/modules/lightconv_layer/setup.py +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from setuptools import setup -from torch.utils.cpp_extension import BuildExtension, CUDAExtension - - -setup( - name="lightconv_layer", - ext_modules=[ - CUDAExtension( - "lightconv_cuda", - [ - "lightconv_cuda.cpp", - "lightconv_cuda_kernel.cu", - ], - ), - ], - cmdclass={"build_ext": BuildExtension}, -) diff --git a/kosmos-g/fairseq/fairseq/modules/lightweight_convolution.py b/kosmos-g/fairseq/fairseq/modules/lightweight_convolution.py deleted file mode 100644 index ec11a9507..000000000 --- a/kosmos-g/fairseq/fairseq/modules/lightweight_convolution.py +++ /dev/null @@ -1,310 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.unfold import unfold1d - - -def LightweightConv( - input_size, - kernel_size=1, - padding_l=None, - num_heads=1, - weight_dropout=0.0, - weight_softmax=False, - bias=False, -): - if torch.cuda.is_available(): - try: - from fairseq.modules.lightconv_layer import LightconvLayer - - return LightconvLayer( - input_size, - kernel_size=kernel_size, - padding_l=padding_l, - num_heads=num_heads, - weight_dropout=weight_dropout, - weight_softmax=weight_softmax, - bias=bias, - ) - except ImportError as e: - print(e) - return LightweightConv1dTBC( - input_size, - kernel_size=kernel_size, - padding_l=padding_l, - num_heads=num_heads, - weight_dropout=weight_dropout, - weight_softmax=weight_softmax, - bias=bias, - ) - - -class LightweightConv1d(nn.Module): - """Lightweight Convolution assuming the input is BxCxT - This is just an example that explains LightConv clearer than the TBC version. - We don't use this module in the model. - - Args: - input_size: # of channels of the input and output - kernel_size: convolution channels - padding: padding - num_heads: number of heads used. The weight is of shape - `(num_heads, 1, kernel_size)` - weight_softmax: normalize the weight with softmax before the convolution - - Shape: - Input: BxCxT, i.e. (batch_size, input_size, timesteps) - Output: BxCxT, i.e. 
(batch_size, input_size, timesteps) - - Attributes: - weight: the learnable weights of the module of shape - `(num_heads, 1, kernel_size)` - bias: the learnable bias of the module of shape `(input_size)` - """ - - def __init__( - self, - input_size, - kernel_size=1, - padding=0, - num_heads=1, - weight_softmax=False, - bias=False, - weight_dropout=0.0, - ): - super().__init__() - self.input_size = input_size - self.kernel_size = kernel_size - self.num_heads = num_heads - self.padding = padding - self.weight_softmax = weight_softmax - self.weight = nn.Parameter(torch.Tensor(num_heads, 1, kernel_size)) - - if bias: - self.bias = nn.Parameter(torch.Tensor(input_size)) - else: - self.bias = None - self.weight_dropout_module = FairseqDropout( - weight_dropout, module_name=self.__class__.__name__ - ) - self.reset_parameters() - - def reset_parameters(self): - nn.init.xavier_uniform_(self.weight) - if self.bias is not None: - nn.init.constant_(self.bias, 0.0) - - def forward(self, input): - """ - input size: B x C x T - output size: B x C x T - """ - B, C, T = input.size() - H = self.num_heads - - weight = self.weight - if self.weight_softmax: - weight = F.softmax(weight, dim=-1) - - weight = self.weight_dropout_module(weight) - # Merge every C/H entries into the batch dimension (C = self.input_size) - # B x C x T -> (B * C/H) x H x T - # One can also expand the weight to C x 1 x K by a factor of C/H - # and do not reshape the input instead, which is slow though - input = input.view(-1, H, T) - output = F.conv1d(input, weight, padding=self.padding, groups=self.num_heads) - output = output.view(B, C, T) - if self.bias is not None: - output = output + self.bias.view(1, -1, 1) - - return output - - -@with_incremental_state -class LightweightConv1dTBC(nn.Module): - """Lightweight Convolution assuming the input is TxBxC - Args: - input_size: # of channels of the input - kernel_size: convolution channels - padding_l: padding to the left when using "same" padding - num_heads: number of heads used. The weight is of shape (num_heads, 1, kernel_size) - weight_dropout: the drop rate of the DropConnect to drop the weight - weight_softmax: normalize the weight with softmax before the convolution - bias: use bias - - Shape: - Input: TxBxC, i.e. (timesteps, batch_size, input_size) - Output: TxBxC, i.e. (timesteps, batch_size, input_size) - - Attributes: - weight: the learnable weights of the module of shape - `(num_heads, 1, kernel_size)` - bias: the learnable bias of the module of shape `(input_size)` - """ - - def __init__( - self, - input_size, - kernel_size=1, - padding_l=None, - num_heads=1, - weight_dropout=0.0, - weight_softmax=False, - bias=False, - ): - super().__init__() - self.input_size = input_size - self.kernel_size = kernel_size - self.padding_l = padding_l - self.num_heads = num_heads - self.weight_dropout_module = FairseqDropout( - weight_dropout, module_name=self.__class__.__name__ - ) - self.weight_softmax = weight_softmax - - self.weight = nn.Parameter(torch.Tensor(num_heads, 1, kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(input_size)) - else: - self.bias = None - - self.reset_parameters() - self.onnx_trace = False - - def reset_parameters(self): - nn.init.xavier_uniform_(self.weight) - if self.bias is not None: - nn.init.constant_(self.bias, 0.0) - - def forward(self, x, incremental_state=None, unfold=False): - """Assuming the input, x, of the shape T x B x C and producing an output in the shape T x B x C - args: - x: Input of shape T x B x C, i.e. 
(timesteps, batch_size, input_size) - incremental_state: A dict to keep the state - unfold: unfold the input or not. If not, we use the matrix trick instead - """ - unfold = unfold or (incremental_state is not None) - - if unfold: - output = self._forward_unfolded(x, incremental_state) - else: - output = self._forward_expanded(x, incremental_state) - - if self.bias is not None: - output = output + self.bias.view(1, 1, -1) - return output - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def _forward_unfolded(self, x, incremental_state): - """The conventional implementation of convolutions. - Unfolding the input by having a window shifting to the right.""" - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - assert R * H == C == self.input_size - - weight = self.weight.view(H, K) - if incremental_state is not None: - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is None: - input_buffer = x.new() - x_unfold = torch.cat([input_buffer, x.unsqueeze(3)], dim=3) - if self.kernel_size > 1: - self._set_input_buffer( - incremental_state, x_unfold[:, :, :, -self.kernel_size + 1 :] - ) - x_unfold = x_unfold.view(T * B * H, R, -1) - else: - # unfold the input: T x B x C --> T' x B x C x K - x_unfold = unfold1d(x, self.kernel_size, self.padding_l, 0) - x_unfold = x_unfold.view(T * B * H, R, K) - - if self.weight_softmax: - weight = utils.softmax(weight, dim=1, onnx_trace=self.onnx_trace).type_as( - weight - ) - - if incremental_state is not None: - weight = weight[:, -x_unfold.size(2) :] - K = weight.size(1) - - weight = ( - weight.view(1, H, K).expand(T * B, H, K).contiguous().view(T * B * H, K, 1) - ) - - weight = self.weight_dropout_module(weight) - output = torch.bmm(x_unfold, weight) # T*B*H x R x 1 - output = output.view(T, B, C) - return output - - def _forward_expanded(self, x, incremental_state): - """Turn the convolution filters into band matrices and do matrix multiplication. - This is faster when the sequence is short, but less memory efficient. - This is not used in the decoder during inference. 
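As an aside on the "matrix trick" this docstring refers to: the sketch below shows how `as_strided` lays the filter into a band matrix so the whole convolution becomes one matmul. Toy sizes, a single head and channel, softmax and dropout omitted, and `P = K - 1` chosen for causal padding; all of it is illustrative rather than taken from the module.

```python
import torch

T, K = 6, 3
P = K - 1                        # padding_l; K - 1 gives causal "same" padding
w = torch.arange(1.0, K + 1)     # a single filter of size K
x = torch.randn(T)

# Write the filter into a T x (T + K - 1) band matrix: row t holds w starting
# at column t, mirroring the as_strided(...).copy_(...) in the method body below.
band = torch.zeros(T, T + K - 1)
band.as_strided((T, K), (T + K, 1)).copy_(w.expand(T, K))
band = band.narrow(1, P, T)      # keep the T columns selected by the padding

y_mat = band @ x                 # the whole convolution as one matmul
y_ref = torch.stack([
    sum(w[j] * (x[t + j - P] if 0 <= t + j - P < T else 0.0) for j in range(K))
    for t in range(T)
])
assert torch.allclose(y_mat, y_ref)
```

The band matrix costs O(T * (T + K)) memory per head, which is why the docstring calls this path faster for short sequences but less memory efficient.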
- """ - T, B, C = x.size() - K, H = self.kernel_size, self.num_heads - R = C // H - assert R * H == C == self.input_size - - weight = self.weight.view(H, K) - if self.weight_softmax: - weight = utils.softmax(weight, dim=1, onnx_trace=self.onnx_trace).type_as( - weight - ) - weight = weight.view(1, H, K).expand(T * B, H, K).contiguous() - weight = weight.view(T, B * H, K).transpose(0, 1) - - x = x.view(T, B * H, R).transpose(0, 1) - P = self.padding_l - if K > T and P == K - 1: - weight = weight.narrow(2, K - T, T) - K, P = T, T - 1 - # turn the convolution filters into band matrices - weight_expanded = weight.new_zeros(B * H, T, T + K - 1, requires_grad=False) - weight_expanded.as_strided((B * H, T, K), (T * (T + K - 1), T + K, 1)).copy_( - weight - ) - weight_expanded = weight_expanded.narrow(2, P, T) - weight_expanded = self.weight_dropout_module(weight_expanded) - - output = torch.bmm(weight_expanded, x) - output = output.transpose(0, 1).contiguous().view(T, B, C) - return output - - def reorder_incremental_state(self, incremental_state, new_order): - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - input_buffer = input_buffer.index_select(1, new_order) - self._set_input_buffer(incremental_state, input_buffer) - - def _get_input_buffer(self, incremental_state): - return utils.get_incremental_state(self, incremental_state, "input_buffer") - - def _set_input_buffer(self, incremental_state, new_buffer): - return utils.set_incremental_state( - self, incremental_state, "input_buffer", new_buffer - ) - - def extra_repr(self): - s = "{}, kernel_size={}, padding_l={}, num_heads={}, weight_softmax={}, bias={}".format( - self.input_size, - self.kernel_size, - self.padding_l, - self.num_heads, - self.weight_softmax, - self.bias is not None, - ) - if self.weight_dropout_module.p > 0.0: - s += ", weight_dropout={}".format(self.weight_dropout_module.p) - return s diff --git a/kosmos-g/fairseq/fairseq/modules/linearized_convolution.py b/kosmos-g/fairseq/fairseq/modules/linearized_convolution.py deleted file mode 100644 index 1c7a9f09a..000000000 --- a/kosmos-g/fairseq/fairseq/modules/linearized_convolution.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state - -from .conv_tbc import ConvTBC - -from typing import Dict, Optional -from torch import Tensor - - -@with_incremental_state -class LinearizedConvolution(ConvTBC): - """An optimized version of nn.Conv1d. - - At training time, this module uses ConvTBC, which is an optimized version - of Conv1d. At inference time, it optimizes incremental generation (i.e., - one time step at a time) by replacing the convolutions with linear layers. - Note that the input order changes from training to inference. 
- """ - - def __init__(self, in_channels, out_channels, kernel_size, **kwargs): - super().__init__(in_channels, out_channels, kernel_size, **kwargs) - self._linearized_weight = None - self.register_backward_hook(self._clear_linearized_weight) - - def state_dict(self, destination=None, prefix="", keep_vars=False): - state = ConvTBC.state_dict(self, destination, prefix, keep_vars=keep_vars) - # don't store redundant _linearized_weight in checkpoints - if prefix + "_linearized_weight" in state: - del state[prefix + "_linearized_weight"] - return state - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + "." if name != "" else "" - if prefix + "_linearized_weight" in state_dict: - del state_dict[prefix + "_linearized_weight"] - - @torch.jit.export - def forward( - self, - input, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - ): - """ - Args: - incremental_state: Used to buffer signal; if not None, then input is - expected to contain a single frame. If the input order changes - between time steps, call reorder_incremental_state. - Input: - Time x Batch x Channel during training - Batch x Time x Channel during inference - """ - if incremental_state is None: - output = self.conv_tbc(input) - if self.kernel_size[0] > 1 and self.padding[0] > 0: - # remove future timesteps added by padding - output = output[: -self.padding[0], :, :] - return output - - # reshape weight - weight = self._get_linearized_weight() - kw = self.kernel_size[0] - - bsz = input.size(0) # input: bsz x len x dim - if kw > 1: - input = input.data - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is None: - input_buffer = input.new(bsz, kw, input.size(2)).zero_() - self._set_input_buffer(incremental_state, input_buffer) - else: - # shift buffer - input_buffer[:, :-1, :] = input_buffer[:, 1:, :].clone() - # append next input - input_buffer[:, -1, :] = input[:, -1, :] - input = input_buffer - with torch.no_grad(): - output = F.linear(input.view(bsz, -1), weight, self.bias) - return output.view(bsz, 1, -1) - - @torch.jit.unused - def reorder_incremental_state( - self, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]], - new_order, - ): - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - input_buffer = input_buffer.index_select(0, new_order) - self._set_input_buffer(incremental_state, input_buffer) - - @torch.jit.unused - def _get_input_buffer( - self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ): - return utils.get_incremental_state(self, incremental_state, "input_buffer") - - @torch.jit.unused - def _set_input_buffer( - self, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]], - new_buffer, - ): - return utils.set_incremental_state( - self, incremental_state, "input_buffer", new_buffer - ) - - @torch.jit.unused - def _get_linearized_weight(self): - if self._linearized_weight is None: - kw = self.kernel_size[0] - weight = self.weight.transpose(2, 1).transpose(1, 0).contiguous() - assert weight.size() == (self.out_channels, kw, self.in_channels) - return weight.view(self.out_channels, -1) - return self._linearized_weight - - @torch.jit.unused - def _clear_linearized_weight(self, *args): - self._linearized_weight = None diff --git a/kosmos-g/fairseq/fairseq/modules/location_attention.py b/kosmos-g/fairseq/fairseq/modules/location_attention.py deleted file mode 100644 index dbbbfb9f2..000000000 --- 
a/kosmos-g/fairseq/fairseq/modules/location_attention.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.nn as nn -import torch -import torch.nn.functional as F - - -class LocationAttention(nn.Module): - """ - Attention-Based Models for Speech Recognition - https://arxiv.org/pdf/1506.07503.pdf - - :param int encoder_dim: # projection-units of encoder - :param int decoder_dim: # units of decoder - :param int attn_dim: attention dimension - :param int conv_dim: # channels of attention convolution - :param int conv_kernel_size: filter size of attention convolution - """ - - def __init__( - self, - attn_dim, - encoder_dim, - decoder_dim, - attn_state_kernel_size, - conv_dim, - conv_kernel_size, - scaling=2.0, - ): - super(LocationAttention, self).__init__() - self.attn_dim = attn_dim - self.decoder_dim = decoder_dim - self.scaling = scaling - self.proj_enc = nn.Linear(encoder_dim, attn_dim) - self.proj_dec = nn.Linear(decoder_dim, attn_dim, bias=False) - self.proj_attn = nn.Linear(conv_dim, attn_dim, bias=False) - self.conv = nn.Conv1d( - attn_state_kernel_size, - conv_dim, - 2 * conv_kernel_size + 1, - padding=conv_kernel_size, - bias=False, - ) - self.proj_out = nn.Sequential(nn.Tanh(), nn.Linear(attn_dim, 1)) - - self.proj_enc_out = None # cache - - def clear_cache(self): - self.proj_enc_out = None - - def forward(self, encoder_out, encoder_padding_mask, decoder_h, attn_state): - """ - :param torch.Tensor encoder_out: padded encoder hidden state B x T x D - :param torch.Tensor encoder_padding_mask: encoder padding mask - :param torch.Tensor decoder_h: decoder hidden state B x D - :param torch.Tensor attn_prev: previous attention weight B x K x T - :return: attention weighted encoder state (B, D) - :rtype: torch.Tensor - :return: previous attention weights (B x T) - :rtype: torch.Tensor - """ - bsz, seq_len, _ = encoder_out.size() - if self.proj_enc_out is None: - self.proj_enc_out = self.proj_enc(encoder_out) - - # B x K x T -> B x C x T - attn = self.conv(attn_state) - # B x C x T -> B x T x C -> B x T x D - attn = self.proj_attn(attn.transpose(1, 2)) - - if decoder_h is None: - decoder_h = encoder_out.new_zeros(bsz, self.decoder_dim) - dec_h = self.proj_dec(decoder_h).view(bsz, 1, self.attn_dim) - - out = self.proj_out(attn + self.proj_enc_out + dec_h).squeeze(2) - out.masked_fill_(encoder_padding_mask, -float("inf")) - - w = F.softmax(self.scaling * out, dim=1) - c = torch.sum(encoder_out * w.view(bsz, seq_len, 1), dim=1) - return c, w diff --git a/kosmos-g/fairseq/fairseq/modules/lstm_cell_with_zoneout.py b/kosmos-g/fairseq/fairseq/modules/lstm_cell_with_zoneout.py deleted file mode 100644 index 273308951..000000000 --- a/kosmos-g/fairseq/fairseq/modules/lstm_cell_with_zoneout.py +++ /dev/null @@ -1,37 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
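The next file implements zoneout for an LSTM cell. As a preview of the rule its `zoneout` method applies (the module additionally recurses over the `(h, c)` state tuple), here is a stand-alone sketch: during training each hidden unit is kept from the previous step with probability `p`, and at evaluation time the expectation is used.

```python
import torch

def zoneout(prev, new, p, training):
    """Zoneout: keep each unit from the previous step with probability p;
    at eval time, use the expectation p * prev + (1 - p) * new."""
    if training:
        keep = torch.empty_like(prev).bernoulli_(p)
        return keep * prev + (1 - keep) * new
    return p * prev + (1 - p) * new
```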
- -import torch.nn as nn - - -class LSTMCellWithZoneOut(nn.Module): - """ - Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations - https://arxiv.org/abs/1606.01305 - """ - - def __init__( - self, prob: float, input_size: int, hidden_size: int, bias: bool = True - ): - super(LSTMCellWithZoneOut, self).__init__() - self.lstm_cell = nn.LSTMCell(input_size, hidden_size, bias=bias) - self.prob = prob - if prob > 1.0 or prob < 0.0: - raise ValueError( - "zoneout probability must be in the range from " "0.0 to 1.0." - ) - - def zoneout(self, h, next_h, prob): - if isinstance(h, tuple): - return tuple([self.zoneout(h[i], next_h[i], prob) for i in range(len(h))]) - - if self.training: - mask = h.new_zeros(*h.size()).bernoulli_(prob) - return mask * h + (1 - mask) * next_h - - return prob * h + (1 - prob) * next_h - - def forward(self, x, h): - return self.zoneout(h, self.lstm_cell(x, h), self.prob) diff --git a/kosmos-g/fairseq/fairseq/modules/multihead_attention.py b/kosmos-g/fairseq/fairseq/modules/multihead_attention.py deleted file mode 100644 index f3d7d0787..000000000 --- a/kosmos-g/fairseq/fairseq/modules/multihead_attention.py +++ /dev/null @@ -1,649 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Dict, List, Optional, Tuple - -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.quant_noise import quant_noise -from torch import Tensor, nn -from torch.nn import Parameter -from fairseq.modules import LayerNorm - - -@with_incremental_state -class MultiheadAttention(nn.Module): - """Multi-headed attention. - - See "Attention Is All You Need" for more details. 
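For orientation, the core computation inside this module's `forward` (projections, incremental-state caching, and ONNX/TPU branches aside) is standard scaled dot-product attention. A minimal single-head sketch with assumed batch-first `(B, T, D)` shapes, where `D` plays the role of `head_dim`:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product(q, k, v, key_padding_mask=None):
    """q: (B, Tq, D); k, v: (B, Tk, D); mask: (B, Tk), True at pad positions."""
    attn = (q * q.size(-1) ** -0.5) @ k.transpose(1, 2)      # (B, Tq, Tk)
    if key_padding_mask is not None:
        attn = attn.masked_fill(key_padding_mask[:, None, :], float("-inf"))
    return F.softmax(attn, dim=-1) @ v                       # (B, Tq, D)
```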
- """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - add_bias_kv=False, - add_zero_attn=False, - self_attention=False, - encoder_decoder_attention=False, - q_noise=0.0, - qn_block_size=8, - attention_norm=False, - ): - super().__init__() - self.embed_dim = embed_dim - self.kdim = kdim if kdim is not None else embed_dim - self.vdim = vdim if vdim is not None else embed_dim - self.qkv_same_dim = self.kdim == embed_dim and self.vdim == embed_dim - - if attention_norm: - self.attn_ln = LayerNorm(self.embed_dim) - else: - self.attn_ln = None - - self.num_heads = num_heads - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - - self.head_dim = embed_dim // num_heads - assert ( - self.head_dim * num_heads == self.embed_dim - ), "embed_dim must be divisible by num_heads" - self.scaling = self.head_dim ** -0.5 - - self.self_attention = self_attention - self.encoder_decoder_attention = encoder_decoder_attention - - assert not self.self_attention or self.qkv_same_dim, ( - "Self-attention requires query, key and " "value to be of the same size" - ) - - self.k_proj = quant_noise( - nn.Linear(self.kdim, embed_dim, bias=bias), q_noise, qn_block_size - ) - self.v_proj = quant_noise( - nn.Linear(self.vdim, embed_dim, bias=bias), q_noise, qn_block_size - ) - self.q_proj = quant_noise( - nn.Linear(embed_dim, embed_dim, bias=bias), q_noise, qn_block_size - ) - - self.out_proj = quant_noise( - nn.Linear(embed_dim, embed_dim, bias=bias), q_noise, qn_block_size - ) - - if add_bias_kv: - self.bias_k = Parameter(torch.Tensor(1, 1, embed_dim)) - self.bias_v = Parameter(torch.Tensor(1, 1, embed_dim)) - else: - self.bias_k = self.bias_v = None - - self.add_zero_attn = add_zero_attn - - self.reset_parameters() - - self.onnx_trace = False - self.skip_embed_dim_check = False - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def reset_parameters(self): - if self.qkv_same_dim: - # Empirically observed the convergence to be much better with - # the scaled initialization - nn.init.xavier_uniform_(self.k_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.v_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.q_proj.weight, gain=1 / math.sqrt(2)) - else: - nn.init.xavier_uniform_(self.k_proj.weight) - nn.init.xavier_uniform_(self.v_proj.weight) - nn.init.xavier_uniform_(self.q_proj.weight) - - nn.init.xavier_uniform_(self.out_proj.weight) - if self.out_proj.bias is not None: - nn.init.constant_(self.out_proj.bias, 0.0) - if self.bias_k is not None: - nn.init.xavier_normal_(self.bias_k) - if self.bias_v is not None: - nn.init.xavier_normal_(self.bias_v) - - def _get_reserve_head_index(self, num_heads_to_keep: int): - k_proj_heads_norm = [] - q_proj_heads_norm = [] - v_proj_heads_norm = [] - - for i in range(self.num_heads): - start_idx = i * self.head_dim - end_idx = (i + 1) * self.head_dim - k_proj_heads_norm.append( - torch.sum( - torch.abs( - self.k_proj.weight[ - start_idx:end_idx, - ] - ) - ).tolist() - + torch.sum(torch.abs(self.k_proj.bias[start_idx:end_idx])).tolist() - ) - q_proj_heads_norm.append( - torch.sum( - torch.abs( - self.q_proj.weight[ - start_idx:end_idx, - ] - ) - ).tolist() - + torch.sum(torch.abs(self.q_proj.bias[start_idx:end_idx])).tolist() - ) - v_proj_heads_norm.append( - torch.sum( - torch.abs( - self.v_proj.weight[ - start_idx:end_idx, - ] - ) - ).tolist() - + torch.sum(torch.abs(self.v_proj.bias[start_idx:end_idx])).tolist() - ) - - heads_norm = [] - 
for i in range(self.num_heads): - heads_norm.append( - k_proj_heads_norm[i] + q_proj_heads_norm[i] + v_proj_heads_norm[i] - ) - - sorted_head_index = sorted( - range(self.num_heads), key=lambda k: heads_norm[k], reverse=True - ) - reserve_head_index = [] - for i in range(num_heads_to_keep): - start = sorted_head_index[i] * self.head_dim - end = (sorted_head_index[i] + 1) * self.head_dim - reserve_head_index.append((start, end)) - return reserve_head_index - - def _adaptive_prune_heads(self, reserve_head_index: List[Tuple[int, int]]): - new_q_weight = [] - new_q_bias = [] - new_k_weight = [] - new_k_bias = [] - new_v_weight = [] - new_v_bias = [] - new_out_proj_weight = [] - - for ele in reserve_head_index: - start_idx, end_idx = ele - new_q_weight.append( - self.q_proj.weight[ - start_idx:end_idx, - ] - ) - new_q_bias.append(self.q_proj.bias[start_idx:end_idx]) - - new_k_weight.append( - self.k_proj.weight[ - start_idx:end_idx, - ] - ) - - new_k_bias.append(self.k_proj.bias[start_idx:end_idx]) - - new_v_weight.append( - self.v_proj.weight[ - start_idx:end_idx, - ] - ) - new_v_bias.append(self.v_proj.bias[start_idx:end_idx]) - - new_out_proj_weight.append(self.out_proj.weight[:, start_idx:end_idx]) - - new_q_weight = torch.cat(new_q_weight).detach() - new_k_weight = torch.cat(new_k_weight).detach() - new_v_weight = torch.cat(new_v_weight).detach() - new_out_proj_weight = torch.cat(new_out_proj_weight, dim=-1).detach() - new_q_weight.requires_grad = True - new_k_weight.requires_grad = True - new_v_weight.requires_grad = True - new_out_proj_weight.requires_grad = True - - new_q_bias = torch.cat(new_q_bias).detach() - new_q_bias.requires_grad = True - - new_k_bias = torch.cat(new_k_bias).detach() - new_k_bias.requires_grad = True - - new_v_bias = torch.cat(new_v_bias).detach() - new_v_bias.requires_grad = True - - self.q_proj.weight = torch.nn.Parameter(new_q_weight) - self.q_proj.bias = torch.nn.Parameter(new_q_bias) - - self.k_proj.weight = torch.nn.Parameter(new_k_weight) - self.k_proj.bias = torch.nn.Parameter(new_k_bias) - - self.v_proj.weight = torch.nn.Parameter(new_v_weight) - self.v_proj.bias = torch.nn.Parameter(new_v_bias) - - self.out_proj.weight = torch.nn.Parameter(new_out_proj_weight) - - self.num_heads = len(reserve_head_index) - self.embed_dim = self.head_dim * self.num_heads - self.q_proj.out_features = self.embed_dim - self.k_proj.out_features = self.embed_dim - self.v_proj.out_features = self.embed_dim - - def _set_skip_embed_dim_check(self): - self.skip_embed_dim_check = True - - def forward( - self, - query, - key: Optional[Tensor], - value: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - need_weights: bool = True, - static_kv: bool = False, - attn_mask: Optional[Tensor] = None, - before_softmax: bool = False, - need_head_weights: bool = False, - ) -> Tuple[Tensor, Optional[Tensor]]: - """Input shape: Time x Batch x Channel - - Args: - key_padding_mask (ByteTensor, optional): mask to exclude - keys that are pads, of shape `(batch, src_len)`, where - padding elements are indicated by 1s. - need_weights (bool, optional): return the attention weights, - averaged over heads (default: False). - attn_mask (ByteTensor, optional): typically used to - implement causal attention, where the mask prevents the - attention from looking forward in time (default: None). - before_softmax (bool, optional): return the raw attention - weights and values before the attention softmax. 
- need_head_weights (bool, optional): return the attention - weights for each head. Implies *need_weights*. Default: - return the average attention weights over all heads. - """ - if need_head_weights: - need_weights = True - - is_tpu = query.device.type == "xla" - - tgt_len, bsz, embed_dim = query.size() - src_len = tgt_len - if not self.skip_embed_dim_check: - assert ( - embed_dim == self.embed_dim - ), f"query dim {embed_dim} != {self.embed_dim}" - assert list(query.size()) == [tgt_len, bsz, embed_dim] - if key is not None: - src_len, key_bsz, _ = key.size() - if not torch.jit.is_scripting(): - assert key_bsz == bsz - assert value is not None - assert src_len, bsz == value.shape[:2] - - if ( - not self.onnx_trace - and not is_tpu # don't use PyTorch version on TPUs - and incremental_state is None - and not static_kv - # A workaround for quantization to work. Otherwise JIT compilation - # treats bias in linear module as method. - and not torch.jit.is_scripting() - # The Multihead attention implemented in pytorch forces strong dimension check - # for input embedding dimention and K,Q,V projection dimension. - # Since pruning will break the dimension check and it is not easy to modify the pytorch API, - # it is preferred to bypass the pytorch MHA when we need to skip embed_dim_check - and not self.skip_embed_dim_check - and self.attn_ln is None - ): - assert key is not None and value is not None - return F.multi_head_attention_forward( - query, - key, - value, - self.embed_dim, - self.num_heads, - torch.empty([0]), - torch.cat((self.q_proj.bias, self.k_proj.bias, self.v_proj.bias)), - self.bias_k, - self.bias_v, - self.add_zero_attn, - self.dropout_module.p, - self.out_proj.weight, - self.out_proj.bias, - self.training or self.dropout_module.apply_during_inference, - key_padding_mask, - need_weights, - attn_mask, - use_separate_proj_weight=True, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - ) - - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if saved_state is not None and "prev_key" in saved_state: - # previous time steps are cached - no need to recompute - # key and value if they are static - if static_kv: - assert self.encoder_decoder_attention and not self.self_attention - key = value = None - else: - saved_state = None - - if self.self_attention: - q = self.q_proj(query) - k = self.k_proj(query) - v = self.v_proj(query) - elif self.encoder_decoder_attention: - # encoder-decoder attention - q = self.q_proj(query) - if key is None: - assert value is None - k = v = None - else: - k = self.k_proj(key) - v = self.v_proj(key) - - else: - assert key is not None and value is not None - q = self.q_proj(query) - k = self.k_proj(key) - v = self.v_proj(value) - q *= self.scaling - - if self.bias_k is not None: - assert self.bias_v is not None - k = torch.cat([k, self.bias_k.repeat(1, bsz, 1)]) - v = torch.cat([v, self.bias_v.repeat(1, bsz, 1)]) - if attn_mask is not None: - attn_mask = torch.cat( - [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1 - ) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [ - key_padding_mask, - key_padding_mask.new_zeros(key_padding_mask.size(0), 1), - ], - dim=1, - ) - - q = ( - q.contiguous() - .view(tgt_len, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - if k is not None: - k = ( - k.contiguous() - .view(-1, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - if v is not None: - v = ( - 
v.contiguous() - .view(-1, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - if saved_state is not None: - # saved states are stored with shape (bsz, num_heads, seq_len, head_dim) - if "prev_key" in saved_state: - _prev_key = saved_state["prev_key"] - assert _prev_key is not None - prev_key = _prev_key.view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - k = prev_key - else: - assert k is not None - k = torch.cat([prev_key, k], dim=1) - src_len = k.size(1) - if "prev_value" in saved_state: - _prev_value = saved_state["prev_value"] - assert _prev_value is not None - prev_value = _prev_value.view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - v = prev_value - else: - assert v is not None - v = torch.cat([prev_value, v], dim=1) - prev_key_padding_mask: Optional[Tensor] = None - if "prev_key_padding_mask" in saved_state: - prev_key_padding_mask = saved_state["prev_key_padding_mask"] - assert k is not None and v is not None - key_padding_mask = MultiheadAttention._append_prev_key_padding_mask( - key_padding_mask=key_padding_mask, - prev_key_padding_mask=prev_key_padding_mask, - batch_size=bsz, - src_len=k.size(1), - static_kv=static_kv, - ) - - saved_state["prev_key"] = k.view(bsz, self.num_heads, -1, self.head_dim) - saved_state["prev_value"] = v.view(bsz, self.num_heads, -1, self.head_dim) - saved_state["prev_key_padding_mask"] = key_padding_mask - # In this branch incremental_state is never None - assert incremental_state is not None - incremental_state = self._set_input_buffer(incremental_state, saved_state) - assert k is not None - assert k.size(1) == src_len - - # This is part of a workaround to get around fork/join parallelism - # not supporting Optional types. - if key_padding_mask is not None and key_padding_mask.dim() == 0: - key_padding_mask = None - - if key_padding_mask is not None: - assert key_padding_mask.size(0) == bsz - assert key_padding_mask.size(1) == src_len - - if self.add_zero_attn: - assert v is not None - src_len += 1 - k = torch.cat([k, k.new_zeros((k.size(0), 1) + k.size()[2:])], dim=1) - v = torch.cat([v, v.new_zeros((v.size(0), 1) + v.size()[2:])], dim=1) - if attn_mask is not None: - attn_mask = torch.cat( - [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1 - ) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [ - key_padding_mask, - torch.zeros(key_padding_mask.size(0), 1).type_as( - key_padding_mask - ), - ], - dim=1, - ) - - attn_weights = torch.bmm(q, k.transpose(1, 2)) - attn_weights = self.apply_sparse_mask(attn_weights, tgt_len, src_len, bsz) - - assert list(attn_weights.size()) == [bsz * self.num_heads, tgt_len, src_len] - - if attn_mask is not None: - attn_mask = attn_mask.unsqueeze(0) - if self.onnx_trace: - attn_mask = attn_mask.repeat(attn_weights.size(0), 1, 1) - attn_weights += attn_mask - - if key_padding_mask is not None: - # don't attend to padding symbols - attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - if not is_tpu: - attn_weights = attn_weights.masked_fill( - key_padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool), - float("-inf"), - ) - else: - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.masked_fill(key_padding_mask, float("-inf")) - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - if before_softmax: - return attn_weights, v - - attn_weights_float = utils.softmax( - attn_weights, dim=-1, onnx_trace=self.onnx_trace - ) - attn_weights = 
attn_weights_float.type_as(attn_weights) - attn_probs = self.dropout_module(attn_weights) - - assert v is not None - attn = torch.bmm(attn_probs, v) - assert list(attn.size()) == [bsz * self.num_heads, tgt_len, self.head_dim] - if self.onnx_trace and attn.size(1) == 1: - # when ONNX tracing a single decoder step (sequence length == 1) - # the transpose is a no-op copy before view, thus unnecessary - attn = attn.contiguous().view(tgt_len, bsz, self.embed_dim) - else: - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, self.embed_dim) - - if self.attn_ln is not None: - attn = self.attn_ln(attn) - - attn = self.out_proj(attn) - attn_weights: Optional[Tensor] = None - if need_weights: - attn_weights = attn_weights_float.view( - bsz, self.num_heads, tgt_len, src_len - ).transpose(1, 0) - if not need_head_weights: - # average attention weights over heads - attn_weights = attn_weights.mean(dim=0) - - return attn, attn_weights - - @staticmethod - def _append_prev_key_padding_mask( - key_padding_mask: Optional[Tensor], - prev_key_padding_mask: Optional[Tensor], - batch_size: int, - src_len: int, - static_kv: bool, - ) -> Optional[Tensor]: - # saved key padding masks have shape (bsz, seq_len) - if prev_key_padding_mask is not None and static_kv: - new_key_padding_mask = prev_key_padding_mask - elif prev_key_padding_mask is not None and key_padding_mask is not None: - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), key_padding_mask.float()], dim=1 - ) - # During incremental decoding, as the padding token enters and - # leaves the frame, there will be a time when prev or current - # is None - elif prev_key_padding_mask is not None: - if src_len > prev_key_padding_mask.size(1): - filler = torch.zeros( - (batch_size, src_len - prev_key_padding_mask.size(1)), - device=prev_key_padding_mask.device, - ) - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), filler.float()], dim=1 - ) - else: - new_key_padding_mask = prev_key_padding_mask.float() - elif key_padding_mask is not None: - if src_len > key_padding_mask.size(1): - filler = torch.zeros( - (batch_size, src_len - key_padding_mask.size(1)), - device=key_padding_mask.device, - ) - new_key_padding_mask = torch.cat( - [filler.float(), key_padding_mask.float()], dim=1 - ) - else: - new_key_padding_mask = key_padding_mask.float() - else: - new_key_padding_mask = prev_key_padding_mask - return new_key_padding_mask - - @torch.jit.export - def reorder_incremental_state( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - new_order: Tensor, - ): - """Reorder buffered internal state (for incremental generation).""" - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - for k in input_buffer.keys(): - input_buffer_k = input_buffer[k] - if input_buffer_k is not None: - if self.encoder_decoder_attention and input_buffer_k.size( - 0 - ) == new_order.size(0): - break - input_buffer[k] = input_buffer_k.index_select(0, new_order) - incremental_state = self._set_input_buffer(incremental_state, input_buffer) - return incremental_state - - def _get_input_buffer( - self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ) -> Dict[str, Optional[Tensor]]: - result = self.get_incremental_state(incremental_state, "attn_state") - if result is not None: - return result - else: - empty_result: Dict[str, Optional[Tensor]] = {} - return empty_result - - def _set_input_buffer( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - buffer: 
Dict[str, Optional[Tensor]], - ): - return self.set_incremental_state(incremental_state, "attn_state", buffer) - - def apply_sparse_mask(self, attn_weights, tgt_len: int, src_len: int, bsz: int): - return attn_weights - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + "." if name != "" else "" - items_to_add = {} - keys_to_remove = [] - for k in state_dict.keys(): - if k.endswith(prefix + "in_proj_weight"): - # in_proj_weight used to be q + k + v with same dimensions - dim = int(state_dict[k].shape[0] / 3) - items_to_add[prefix + "q_proj.weight"] = state_dict[k][:dim] - items_to_add[prefix + "k_proj.weight"] = state_dict[k][dim : 2 * dim] - items_to_add[prefix + "v_proj.weight"] = state_dict[k][2 * dim :] - - keys_to_remove.append(k) - - k_bias = prefix + "in_proj_bias" - if k_bias in state_dict.keys(): - dim = int(state_dict[k].shape[0] / 3) - items_to_add[prefix + "q_proj.bias"] = state_dict[k_bias][:dim] - items_to_add[prefix + "k_proj.bias"] = state_dict[k_bias][ - dim : 2 * dim - ] - items_to_add[prefix + "v_proj.bias"] = state_dict[k_bias][2 * dim :] - - keys_to_remove.append(prefix + "in_proj_bias") - - for k in keys_to_remove: - del state_dict[k] - - for key, value in items_to_add.items(): - state_dict[key] = value diff --git a/kosmos-g/fairseq/fairseq/modules/positional_embedding.py b/kosmos-g/fairseq/fairseq/modules/positional_embedding.py deleted file mode 100644 index 8e94e35ed..000000000 --- a/kosmos-g/fairseq/fairseq/modules/positional_embedding.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.nn as nn - -from .learned_positional_embedding import LearnedPositionalEmbedding -from .sinusoidal_positional_embedding import SinusoidalPositionalEmbedding - - -def PositionalEmbedding( - num_embeddings: int, - embedding_dim: int, - padding_idx: int, - learned: bool = False, -): - if learned: - # if padding_idx is specified then offset the embedding ids by - # this index and adjust num_embeddings appropriately - # TODO: The right place for this offset would be inside - # LearnedPositionalEmbedding. Move this there for a cleaner implementation. - if padding_idx is not None: - num_embeddings = num_embeddings + padding_idx + 1 - m = LearnedPositionalEmbedding(num_embeddings, embedding_dim, padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - if padding_idx is not None: - nn.init.constant_(m.weight[padding_idx], 0) - else: - m = SinusoidalPositionalEmbedding( - embedding_dim, - padding_idx, - init_size=num_embeddings + padding_idx + 1, - ) - return m diff --git a/kosmos-g/fairseq/fairseq/modules/positional_encoding.py b/kosmos-g/fairseq/fairseq/modules/positional_encoding.py deleted file mode 100644 index 67f635353..000000000 --- a/kosmos-g/fairseq/fairseq/modules/positional_encoding.py +++ /dev/null @@ -1,129 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.nn as nn -import math -import torch - - -class PositionalEncoding(nn.Module): - """Positional encoding. - - Args: - d_model: Embedding dimension. - dropout_rate: Dropout rate. - max_len: Maximum input length. - reverse: Whether to reverse the input position. 
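The table built by `extend_pe` below is the standard sinusoidal encoding. A compact stand-alone version for reference (assuming an even `d_model`, as the module's own slicing does):

```python
import math
import torch

def sinusoid_table(T, d_model):
    """pe[t, 2i] = sin(t / 10000^(2i/d)), pe[t, 2i+1] = cos(...); even d_model."""
    pos = torch.arange(T, dtype=torch.float32).unsqueeze(1)
    div = torch.exp(
        torch.arange(0, d_model, 2, dtype=torch.float32)
        * -(math.log(10000.0) / d_model)
    )
    pe = torch.zeros(T, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe
```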
- """ - - def __init__(self, d_model, dropout_rate, max_len=5000, reverse=False): - """Construct an PositionalEncoding object.""" - super(PositionalEncoding, self).__init__() - self.d_model = d_model - self.reverse = reverse - self.xscale = math.sqrt(self.d_model) - self.dropout = nn.Dropout(p=dropout_rate) - self.pe = None - self.extend_pe(torch.tensor(0.0).expand(1, max_len)) - - def extend_pe(self, x): - """Reset the positional encodings.""" - if self.pe is not None: - if self.pe.size(1) >= x.size(1): - if self.pe.dtype != x.dtype or self.pe.device != x.device: - self.pe = self.pe.to(dtype=x.dtype, device=x.device) - return - pe = torch.zeros(x.size(1), self.d_model) - if self.reverse: - position = torch.arange( - x.size(1) - 1, -1, -1.0, dtype=torch.float32 - ).unsqueeze(1) - else: - position = torch.arange(0, x.size(1), dtype=torch.float32).unsqueeze(1) - div_term = torch.exp( - torch.arange(0, self.d_model, 2, dtype=torch.float32) - * -(math.log(10000.0) / self.d_model) - ) - pe[:, 0::2] = torch.sin(position * div_term) - pe[:, 1::2] = torch.cos(position * div_term) - pe = pe.unsqueeze(0) - self.pe = pe.to(device=x.device, dtype=x.dtype) - - def forward(self, x: torch.Tensor): - """Add positional encoding. - Args: - x (torch.Tensor): Input tensor B X T X C - Returns: - torch.Tensor: Encoded tensor B X T X C - """ - self.extend_pe(x) - x = x * self.xscale + self.pe[:, : x.size(1)] - return self.dropout(x) - - -class RelPositionalEncoding(nn.Module): - """Relative positional encoding module (new implementation). - - Args: - d_model: Embedding dimension. - dropout_rate: Dropout rate. - max_len: Maximum input length. - """ - - def __init__(self, max_len, d_model): - """Construct an PositionalEncoding object.""" - super(RelPositionalEncoding, self).__init__() - self.d_model = d_model - self.pe = None - self.extend_pe(torch.tensor(0.0).expand(1, max_len)) - - def extend_pe(self, x): - """Reset the positional encodings.""" - if self.pe is not None: - # self.pe contains both positive and negative parts - # the length of self.pe is 2 * input_len - 1 - if self.pe.size(1) >= x.size(1) * 2 - 1: - if self.pe.dtype != x.dtype or self.pe.device != x.device: - self.pe = self.pe.to(dtype=x.dtype, device=x.device) - return - # Suppose `i` means to the position of query vecotr and `j` means the - # position of key vector. We use position relative positions when keys - # are to the left (i>j) and negative relative positions otherwise (i<j). - pe_positive = torch.zeros(x.size(1), self.d_model) - pe_negative = torch.zeros(x.size(1), self.d_model) - position = torch.arange(0, x.size(1), dtype=torch.float32).unsqueeze(1) - div_term = torch.exp( - torch.arange(0, self.d_model, 2, dtype=torch.float32) - * -(math.log(10000.0) / self.d_model) - ) - pe_positive[:, 0::2] = torch.sin(position * div_term) - pe_positive[:, 1::2] = torch.cos(position * div_term) - pe_negative[:, 0::2] = torch.sin(-1 * position * div_term) - pe_negative[:, 1::2] = torch.cos(-1 * position * div_term) - - # Reserve the order of positive indices and concat both positive and - # negative indices. This is used to support the shifting trick - # as in https://arxiv.org/abs/1901.02860 - pe_positive = torch.flip(pe_positive, [0]).unsqueeze(0) - pe_negative = pe_negative[1:].unsqueeze(0) - pe = torch.cat([pe_positive, pe_negative], dim=1) - self.pe = pe.to(device=x.device, dtype=x.dtype) - - def forward(self, x: torch.Tensor): - """Add positional encoding. - Args: - x : Input tensor T X B X C. 
- Returns: - torch.Tensor: Encoded tensor T X B X C. - - """ - x = x.transpose(0, 1) # Change TBC to BTC - self.extend_pe(x) - pos_emb = self.pe[ - :, - self.pe.size(1) // 2 - x.size(1) + 1 : self.pe.size(1) // 2 + x.size(1), - ] - pos_emb = pos_emb.transpose(0, 1) # change to TBC - return pos_emb diff --git a/kosmos-g/fairseq/fairseq/modules/quant_noise.py b/kosmos-g/fairseq/fairseq/modules/quant_noise.py deleted file mode 100644 index d777dfbb6..000000000 --- a/kosmos-g/fairseq/fairseq/modules/quant_noise.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn - - -def quant_noise(module, p, block_size): - """ - Wraps modules and applies quantization noise to the weights for - subsequent quantization with Iterative Product Quantization as - described in "Training with Quantization Noise for Extreme Model Compression" - - Args: - - module: nn.Module - - p: amount of Quantization Noise - - block_size: size of the blocks for subsequent quantization with iPQ - - Remarks: - - Module weights must have the right sizes wrt the block size - - Only Linear, Embedding and Conv2d modules are supported for the moment - - For more detail on how to quantize by blocks with convolutional weights, - see "And the Bit Goes Down: Revisiting the Quantization of Neural Networks" - - We implement the simplest form of noise here as stated in the paper - which consists in randomly dropping blocks - """ - - # if no quantization noise, don't register hook - if p <= 0: - return module - - # supported modules - assert isinstance(module, (nn.Linear, nn.Embedding, nn.Conv2d)) - - # test whether module.weight has the right sizes wrt block_size - is_conv = module.weight.ndim == 4 - - # 2D matrix - if not is_conv: - assert ( - module.weight.size(1) % block_size == 0 - ), "Input features must be a multiple of block sizes" - - # 4D matrix - else: - # 1x1 convolutions - if module.kernel_size == (1, 1): - assert ( - module.in_channels % block_size == 0 - ), "Input channels must be a multiple of block sizes" - # regular convolutions - else: - k = module.kernel_size[0] * module.kernel_size[1] - assert k % block_size == 0, "Kernel size must be a multiple of block size" - - def _forward_pre_hook(mod, input): - # no noise for evaluation - if mod.training: - if not is_conv: - # gather weight and sizes - weight = mod.weight - in_features = weight.size(1) - out_features = weight.size(0) - - # split weight matrix into blocks and randomly drop selected blocks - mask = torch.zeros( - in_features // block_size * out_features, device=weight.device - ) - mask.bernoulli_(p) - mask = mask.repeat_interleave(block_size, -1).view(-1, in_features) - - else: - # gather weight and sizes - weight = mod.weight - in_channels = mod.in_channels - out_channels = mod.out_channels - - # split weight matrix into blocks and randomly drop selected blocks - if mod.kernel_size == (1, 1): - mask = torch.zeros( - int(in_channels // block_size * out_channels), - device=weight.device, - ) - mask.bernoulli_(p) - mask = mask.repeat_interleave(block_size, -1).view(-1, in_channels) - else: - mask = torch.zeros( - weight.size(0), weight.size(1), device=weight.device - ) - mask.bernoulli_(p) - mask = ( - mask.unsqueeze(2) - .unsqueeze(3) - .repeat(1, 1, mod.kernel_size[0], mod.kernel_size[1]) - ) - - # scale weights and apply mask - mask = mask.to( - torch.bool - ) # x.bool() 
is not currently supported in TorchScript - s = 1 / (1 - p) - mod.weight.data = s * weight.masked_fill(mask, 0) - - module.register_forward_pre_hook(_forward_pre_hook) - return module diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/__init__.py b/kosmos-g/fairseq/fairseq/modules/quantization/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/pq/__init__.py b/kosmos-g/fairseq/fairseq/modules/quantization/pq/__init__.py deleted file mode 100644 index c142a802e..000000000 --- a/kosmos-g/fairseq/fairseq/modules/quantization/pq/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .utils import SizeTracker, get_param, attrsetter, quantize_model_ # NOQA diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/pq/em.py b/kosmos-g/fairseq/fairseq/modules/quantization/pq/em.py deleted file mode 100644 index 6f15c3e46..000000000 --- a/kosmos-g/fairseq/fairseq/modules/quantization/pq/em.py +++ /dev/null @@ -1,211 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import random -from collections import Counter - -import torch - - -class EM: - """ - EM algorithm used to quantize the columns of W to minimize - - ||W - W_hat||^2 - - Args: - - W: weight matrix of size (in_features x out_features) - - n_iter: number of k-means iterations - - n_centroids: number of centroids (size of codebook) - - eps: for cluster reassignment when an empty cluster is found - - max_tentatives for cluster reassignment when an empty cluster is found - - verbose: print error after each iteration - - Remarks: - - If one cluster is empty, the most populated cluster is split into - two clusters - - All the relevant dimensions are specified in the code - """ - - def __init__( - self, W, n_centroids=256, n_iter=20, eps=1e-6, max_tentatives=30, verbose=True - ): - self.W = W - self.n_centroids = n_centroids - self.n_iter = n_iter - self.eps = eps - self.max_tentatives = max_tentatives - self.verbose = verbose - self.centroids = torch.Tensor() - self.assignments = torch.Tensor() - self.objective = [] - - def initialize_centroids(self): - """ - Initializes the centroids by sampling random columns from W. - """ - - in_features, out_features = self.W.size() - indices = torch.randint( - low=0, high=out_features, size=(self.n_centroids,) - ).long() - self.centroids = self.W[:, indices].t() # (n_centroids x in_features) - - def step(self, i): - """ - There are two standard steps for each iteration: expectation (E) and - minimization (M). The E-step (assignment) is performed with an exhaustive - search and the M-step (centroid computation) is performed with - the exact solution. 
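Condensed, the two updates look like this, a sketch that ignores the empty-cluster handling described below; `W` is `(in_features, out_features)` and `centroids` is `(n_centroids, in_features)`, as in the surrounding code:

```python
import torch

def em_step(W, centroids):
    """One E/M iteration for ||W - W_hat||^2: assign each column of W to its
    nearest centroid, then recompute each centroid as the mean of its columns."""
    # E-step: distances of shape (n_centroids, out_features) via broadcasting
    d = (W[None, :, :] - centroids[:, :, None]).norm(p=2, dim=1)
    assignments = d.argmin(dim=0)
    # M-step: exact minimizer per cluster (empty clusters left untouched here)
    for k in range(centroids.size(0)):
        cols = W[:, assignments == k]
        if cols.numel() > 0:
            centroids[k] = cols.mean(dim=1)
    return assignments, centroids
```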
- - Args: - - i: step number - - Remarks: - - The E-step heavily uses PyTorch broadcasting to speed up computations - and reduce the memory overhead - """ - - # assignments (E-step) - distances = self.compute_distances() # (n_centroids x out_features) - self.assignments = torch.argmin(distances, dim=0) # (out_features) - n_empty_clusters = self.resolve_empty_clusters() - - # centroids (M-step) - for k in range(self.n_centroids): - W_k = self.W[:, self.assignments == k] # (in_features x size_of_cluster_k) - self.centroids[k] = W_k.mean(dim=1) # (in_features) - - # book-keeping - obj = (self.centroids[self.assignments].t() - self.W).norm(p=2).item() - self.objective.append(obj) - if self.verbose: - logging.info( - f"Iteration: {i},\t" - f"objective: {obj:.6f},\t" - f"resolved empty clusters: {n_empty_clusters}" - ) - - def resolve_empty_clusters(self): - """ - If one cluster is empty, the most populated cluster is split into - two clusters by shifting the respective centroids. This is done - iteratively for a fixed number of tentatives. - """ - - # empty clusters - counts = Counter(map(lambda x: x.item(), self.assignments)) - empty_clusters = set(range(self.n_centroids)) - set(counts.keys()) - n_empty_clusters = len(empty_clusters) - - tentatives = 0 - while len(empty_clusters) > 0: - # given an empty cluster, find most populated cluster and split it into two - k = random.choice(list(empty_clusters)) - m = counts.most_common(1)[0][0] - e = torch.randn_like(self.centroids[m]) * self.eps - self.centroids[k] = self.centroids[m].clone() - self.centroids[k] += e - self.centroids[m] -= e - - # recompute assignments - distances = self.compute_distances() # (n_centroids x out_features) - self.assignments = torch.argmin(distances, dim=0) # (out_features) - - # check for empty clusters - counts = Counter(map(lambda x: x.item(), self.assignments)) - empty_clusters = set(range(self.n_centroids)) - set(counts.keys()) - - # increment tentatives - if tentatives == self.max_tentatives: - logging.info( - f"Could not resolve all empty clusters, {len(empty_clusters)} remaining" - ) - raise EmptyClusterResolveError - tentatives += 1 - - return n_empty_clusters - - def compute_distances(self): - """ - For every centroid m, computes - - ||M - m[None, :]||_2 - - Remarks: - - We rely on PyTorch's broadcasting to speed up computations - and reduce the memory overhead - - Without chunking, the sizes in the broadcasting are modified as: - (n_centroids x n_samples x out_features) -> (n_centroids x out_features) - - The broadcasting computation is automatically chunked so that - the tensors fit into the memory of the GPU - """ - - nb_centroids_chunks = 1 - - while True: - try: - return torch.cat( - [ - (self.W[None, :, :] - centroids_c[:, :, None]).norm(p=2, dim=1) - for centroids_c in self.centroids.chunk( - nb_centroids_chunks, dim=0 - ) - ], - dim=0, - ) - except RuntimeError: - nb_centroids_chunks *= 2 - - def assign(self): - """ - Assigns each column of W to its closest centroid, thus essentially - performing the E-step in train(). - - Remarks: - - The function must be called after train() or after loading - centroids using self.load(), otherwise it will return empty tensors - """ - - distances = self.compute_distances() # (n_centroids x out_features) - self.assignments = torch.argmin(distances, dim=0) # (out_features) - - def save(self, path, layer): - """ - Saves centroids and assignments. 
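-        Each tensor goes to its own file, named "<layer>_centroids.pth",
-        "<layer>_assignments.pth" and "<layer>_objective.pth" (see the
-        torch.save calls below).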
-
-        Args:
-            - path: folder used to save centroids and assignments
-            - layer: layer name, used as a prefix for the saved files
-        """
-
-        torch.save(self.centroids, os.path.join(path, "{}_centroids.pth".format(layer)))
-        torch.save(
-            self.assignments, os.path.join(path, "{}_assignments.pth".format(layer))
-        )
-        torch.save(self.objective, os.path.join(path, "{}_objective.pth".format(layer)))
-
-    def load(self, path, layer):
-        """
-        Loads centroids and assignments from a given path.
-
-        Args:
-            - path: folder used to load centroids and assignments
-            - layer: layer name, used as a prefix for the files to load
-        """
-
-        self.centroids = torch.load(
-            os.path.join(path, "{}_centroids.pth".format(layer))
-        )
-        self.assignments = torch.load(
-            os.path.join(path, "{}_assignments.pth".format(layer))
-        )
-        self.objective = torch.load(
-            os.path.join(path, "{}_objective.pth".format(layer))
-        )
-
-
-class EmptyClusterResolveError(Exception):
-    pass
diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/pq/modules/__init__.py b/kosmos-g/fairseq/fairseq/modules/quantization/pq/modules/__init__.py
deleted file mode 100644
index b67c8e8ad..000000000
--- a/kosmos-g/fairseq/fairseq/modules/quantization/pq/modules/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .qconv import PQConv2d  # NOQA
-from .qemb import PQEmbedding  # NOQA
-from .qlinear import PQLinear  # NOQA
diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/pq/modules/qconv.py b/kosmos-g/fairseq/fairseq/modules/quantization/pq/modules/qconv.py
deleted file mode 100644
index d15ec192e..000000000
--- a/kosmos-g/fairseq/fairseq/modules/quantization/pq/modules/qconv.py
+++ /dev/null
@@ -1,115 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.modules.utils import _pair
-
-
-class PQConv2d(nn.Module):
-    """
-    Quantized counterpart of the nn.Conv2d module. Stores the centroids, the
-    assignments and the non-quantized biases. The full weight is re-instantiated
-    at each forward pass and autograd automatically computes the gradients with
-    respect to the centroids.
-
-    Args:
-        - centroids: centroids of size n_centroids x block_size
-        - assignments: assignments of the centroids to the subvectors
-          of size self.out_channels x n_blocks
-        - bias: the non-quantized bias, must be either torch.Tensor or None
-
-    Remarks:
-        - We refer the reader to the official documentation of the nn.Conv2d module
-          for the other arguments and the behavior of the module.
-        - Performance tests on GPU show that this implementation is 10% slower than
-          the non-quantized nn.Conv2d module for a standard training loop.
-        - During the backward, the gradients are averaged by cluster and not summed.
-          This explains the hook registered to the centroids.
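-          In code, this is the hook registered in __init__ below:
-
-              self.centroids.register_hook(lambda grad: grad / self.counts[:, None])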
- """ - - def __init__( - self, - centroids, - assignments, - bias, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - padding_mode="zeros", - ): - super(PQConv2d, self).__init__() - self.block_size = centroids.size(1) - self.n_centroids = centroids.size(0) - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.padding_mode = padding_mode - # check compatibility - if in_channels // groups * np.prod(self.kernel_size) % self.block_size != 0: - raise ValueError("Wrong PQ sizes") - if len(assignments) % out_channels != 0: - raise ValueError("Wrong PQ sizes") - if in_channels % groups != 0: - raise ValueError("in_channels must be divisible by groups") - if out_channels % groups != 0: - raise ValueError("out_channels must be divisible by groups") - # define parameters - self.centroids = nn.Parameter(centroids, requires_grad=True) - self.register_buffer("assignments", assignments) - self.register_buffer("counts", torch.bincount(assignments).type_as(centroids)) - if bias is not None: - self.bias = nn.Parameter(bias) - else: - self.register_parameter("bias", None) - # register hook for averaging gradients per centroids instead of summing - self.centroids.register_hook(lambda x: x / self.counts[:, None]) - - @property - def weight(self): - return ( - self.centroids[self.assignments] - .reshape(-1, self.out_channels, self.block_size) - .permute(1, 0, 2) - .reshape( - self.out_channels, self.in_channels // self.groups, *self.kernel_size - ) - ) - - def forward(self, x): - return F.conv2d( - x, - self.weight, - self.bias, - self.stride, - self.padding, - self.dilation, - self.groups, - ) - - def extra_repr(self): - s = "{in_channels}, {out_channels}, kernel_size={kernel_size}, stride={stride}" - if self.padding != (0,) * len(self.padding): - s += ", padding={padding}" - if self.dilation != (1,) * len(self.dilation): - s += ", dilation={dilation}" - if self.groups != 1: - s += ", groups={groups}" - if self.bias is None: - s += ", bias=False" - if self.padding_mode != "zeros": - s += ", padding_mode={padding_mode}" - s += ", n_centroids={n_centroids}, block_size={block_size}" - return s.format(**self.__dict__) diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/pq/modules/qemb.py b/kosmos-g/fairseq/fairseq/modules/quantization/pq/modules/qemb.py deleted file mode 100644 index 3a74ad3c4..000000000 --- a/kosmos-g/fairseq/fairseq/modules/quantization/pq/modules/qemb.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class PQEmbedding(nn.Module): - """ - Quantized counterpart of nn.Embedding module. Stores the centroids and - the assignments. The full weight is re-instantiated at each forward - pass. 
- - Args: - - centroids: centroids of size n_centroids x block_size - - assignments: assignments of the centroids to the subvectors - of size self.out_features x n_blocks - - bias: the non-quantized bias - - Remarks: - - We refer the reader to the official documentation of the nn.Embedding module - for the other arguments and the behavior of the module - - Performance tests on GPU show that this implementation is 10% slower than - the non-quantized nn.Embedding module for a standard training loop. - """ - - def __init__( - self, - centroids, - assignments, - num_embeddings, - embedding_dim, - padding_idx=None, - max_norm=None, - norm_type=2.0, - scale_grad_by_freq=False, - sparse=False, - _weight=None, - ): - super(PQEmbedding, self).__init__() - self.block_size = centroids.size(1) - self.n_centroids = centroids.size(0) - self.num_embeddings = num_embeddings - self.embedding_dim = embedding_dim - if padding_idx is not None: - if padding_idx > 0: - assert ( - padding_idx < self.num_embeddings - ), "Padding_idx must be within num_embeddings" - elif padding_idx < 0: - assert ( - padding_idx >= -self.num_embeddings - ), "Padding_idx must be within num_embeddings" - padding_idx = self.num_embeddings + padding_idx - self.padding_idx = padding_idx - self.max_norm = max_norm - self.norm_type = norm_type - self.scale_grad_by_freq = scale_grad_by_freq - self.sparse = sparse - # check compatibility - if self.embedding_dim % self.block_size != 0: - raise ValueError("Wrong PQ sizes") - if len(assignments) % self.num_embeddings != 0: - raise ValueError("Wrong PQ sizes") - # define parameters - self.centroids = nn.Parameter(centroids, requires_grad=True) - self.register_buffer("assignments", assignments) - self.register_buffer("counts", torch.bincount(assignments).type_as(centroids)) - - @property - def weight(self): - return ( - self.centroids[self.assignments] - .reshape(-1, self.num_embeddings, self.block_size) - .permute(1, 0, 2) - .flatten(1, 2) - ) - - def forward(self, input): - return F.embedding( - input, - self.weight, - self.padding_idx, - self.max_norm, - self.norm_type, - self.scale_grad_by_freq, - self.sparse, - ) - - def extra_repr(self): - s = "{num_embeddings}, {embedding_dim}" - if self.padding_idx is not None: - s += ", padding_idx={padding_idx}" - if self.max_norm is not None: - s += ", max_norm={max_norm}" - if self.norm_type != 2: - s += ", norm_type={norm_type}" - if self.scale_grad_by_freq is not False: - s += ", scale_grad_by_freq={scale_grad_by_freq}" - if self.sparse is not False: - s += ", sparse=True" - s += ", n_centroids={n_centroids}, block_size={block_size}" - - return s.format(**self.__dict__) diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/pq/modules/qlinear.py b/kosmos-g/fairseq/fairseq/modules/quantization/pq/modules/qlinear.py deleted file mode 100644 index 9bdd25a86..000000000 --- a/kosmos-g/fairseq/fairseq/modules/quantization/pq/modules/qlinear.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class PQLinear(nn.Module): - """ - Quantized counterpart of nn.Linear module. Stores the centroid, the assignments - and the non-quantized biases. The full weight is re-instantiated at each forward - pass. 
- - Args: - - centroids: centroids of size n_centroids x block_size - - assignments: assignments of the centroids to the subvectors - of size self.out_features x n_blocks - - bias: the non-quantized bias - - Remarks: - - We refer the reader to the official documentation of the nn.Linear module - for the other arguments and the behavior of the module - - Performance tests on GPU show that this implementation is 15% slower than - the non-quantized nn.Linear module for a standard training loop. - """ - - def __init__(self, centroids, assignments, bias, in_features, out_features): - super(PQLinear, self).__init__() - self.block_size = centroids.size(1) - self.n_centroids = centroids.size(0) - self.in_features = in_features - self.out_features = out_features - # check compatibility - if self.in_features % self.block_size != 0: - raise ValueError("Wrong PQ sizes") - if len(assignments) % self.out_features != 0: - raise ValueError("Wrong PQ sizes") - # define parameters - self.centroids = nn.Parameter(centroids, requires_grad=True) - self.register_buffer("assignments", assignments) - self.register_buffer("counts", torch.bincount(assignments).type_as(centroids)) - if bias is not None: - self.bias = nn.Parameter(bias) - else: - self.register_parameter("bias", None) - - @property - def weight(self): - return ( - self.centroids[self.assignments] - .reshape(-1, self.out_features, self.block_size) - .permute(1, 0, 2) - .flatten(1, 2) - ) - - def forward(self, x): - return F.linear( - x, - self.weight, - self.bias, - ) - - def extra_repr(self): - return f"in_features={self.in_features},\ - out_features={self.out_features},\ - n_centroids={self.n_centroids},\ - block_size={self.block_size},\ - bias={self.bias is not None}" diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/pq/pq.py b/kosmos-g/fairseq/fairseq/modules/quantization/pq/pq.py deleted file mode 100644 index eddc2eb34..000000000 --- a/kosmos-g/fairseq/fairseq/modules/quantization/pq/pq.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .em import EM, EmptyClusterResolveError - - -class PQ(EM): - """ - Quantizes the layer weights W with the standard Product Quantization - technique. This learns a codebook of codewords or centroids of size - block_size from W. For further reference on using PQ to quantize - neural networks, see "And the Bit Goes Down: Revisiting the Quantization - of Neural Networks", Stock et al., ICLR 2020. - - PQ is performed in two steps: - (1) The matrix W (weights or fully-connected or convolutional layer) - is reshaped to (block_size, -1). - - If W is fully-connected (2D), its columns are split into - blocks of size block_size. - - If W is convolutional (4D), its filters are split along the - spatial dimension. - (2) We apply the standard EM/k-means algorithm to the resulting reshaped matrix. 
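-
-    For example, a fully-connected W of size (out_features x in_features)
-    with blocks of size b is reshaped to (b, in_features // b * out_features),
-    so that every column is one subvector to be assigned to a centroid
-    (see _reshape below).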
-
-    Args:
-        - W: weight matrix to quantize of size (in_features x out_features)
-        - block_size: size of the blocks (subvectors)
-        - n_centroids: number of centroids
-        - n_iter: number of k-means iterations
-        - eps: for cluster reassignment when an empty cluster is found
-        - max_tentatives: for cluster reassignment when an empty cluster is found
-        - verbose: print information after each iteration
-
-    Remarks:
-        - block_size must be compatible with the shape of W
-    """
-
-    def __init__(
-        self,
-        W,
-        block_size,
-        n_centroids=256,
-        n_iter=20,
-        eps=1e-6,
-        max_tentatives=30,
-        verbose=True,
-    ):
-        self.block_size = block_size
-        W_reshaped = self._reshape(W)
-        super(PQ, self).__init__(
-            W_reshaped,
-            n_centroids=n_centroids,
-            n_iter=n_iter,
-            eps=eps,
-            max_tentatives=max_tentatives,
-            verbose=verbose,
-        )
-
-    def _reshape(self, W):
-        """
-        Reshapes the matrix W as explained in step (1).
-        """
-
-        # fully connected: by convention the weight has size out_features x in_features
-        if len(W.size()) == 2:
-            self.out_features, self.in_features = W.size()
-            assert (
-                self.in_features % self.block_size == 0
-            ), "Linear: in_features must be a multiple of block_size"
-            return (
-                W.reshape(self.out_features, -1, self.block_size)
-                .permute(2, 1, 0)
-                .flatten(1, 2)
-            )
-
-        # convolutional: we reshape along the spatial dimension
-        elif len(W.size()) == 4:
-            self.out_channels, self.in_channels, self.k_h, self.k_w = W.size()
-            assert (
-                self.in_channels * self.k_h * self.k_w
-            ) % self.block_size == 0, (
-                "Conv2d: in_channels * k_h * k_w must be a multiple of block_size"
-            )
-            return (
-                W.reshape(self.out_channels, -1, self.block_size)
-                .permute(2, 1, 0)
-                .flatten(1, 2)
-            )
-        # not implemented
-        else:
-            raise NotImplementedError(W.size())
-
-    def encode(self):
-        """
-        Performs self.n_iter EM steps.
-        """
-
-        self.initialize_centroids()
-        for i in range(self.n_iter):
-            try:
-                self.step(i)
-            except EmptyClusterResolveError:
-                break
-
-    def decode(self):
-        """
-        Returns the encoded full weight matrix. Must be called after
-        the encode function.
-        """
-
-        # fully connected case
-        if "k_h" not in self.__dict__:
-            return (
-                self.centroids[self.assignments]
-                .reshape(-1, self.out_features, self.block_size)
-                .permute(1, 0, 2)
-                .flatten(1, 2)
-            )
-
-        # convolutional case
-        else:
-            return (
-                self.centroids[self.assignments]
-                .reshape(-1, self.out_channels, self.block_size)
-                .permute(1, 0, 2)
-                .reshape(self.out_channels, self.in_channels, self.k_h, self.k_w)
-            )
diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/pq/utils.py b/kosmos-g/fairseq/fairseq/modules/quantization/pq/utils.py
deleted file mode 100644
index eceeef8ba..000000000
--- a/kosmos-g/fairseq/fairseq/modules/quantization/pq/utils.py
+++ /dev/null
@@ -1,376 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import re
-from operator import attrgetter, itemgetter
-import torch
-import numpy as np
-import torch.distributed as dist
-import torch.nn as nn
-
-from .modules import PQConv2d, PQEmbedding, PQLinear
-from .pq import PQ
-
-
-def quantize_model_(
-    model,
-    size_tracker,
-    layers_to_quantize,
-    block_sizes_config,
-    n_centroids_config,
-    step=0,
-    n_iter=15,
-    eps=1e-6,
-    max_tentatives=100,
-    remove_weights=False,
-    verbose=True,
-    state_dict=None,
-):
-    """
-    Quantize a model in-place by stages. All the targeted
-    layers are replaced by their quantized counterpart,
-    and the model is ready for the finetuning of the
-    centroids in a standard training loop (no modifications
-    required). Note that we do not quantize biases.
-
-    Args:
-        - model: a nn.Module
-        - size_tracker: useful for tracking quantization statistics
-        - layers_to_quantize: a list containing regexps for
-          filtering the layers to quantize at each stage according
-          to their name (as in model.named_parameters())
-        - block_sizes_config: dict like
-          {
-              'Conv2d': ('kernel_size', {'(3, 3)': 9, '(1, 1)': 4}),
-              'Linear': ('in_features', {'*': 8})
-          }
-          For instance, all conv2d layers with kernel size 3x3 have
-          a block size of 9 and all Linear layers are quantized with
-          a block size of 8, irrespective of their size.
-        - n_centroids_config: dict like
-          {
-              'Conv2d': ('kernel_size', {'*': 256}),
-              'Linear': ('in_features', {'*': 256})
-          }
-          For instance, all conv2d layers are quantized with 256 centroids
-        - step: the layers to quantize inplace corresponding
-          to layers_to_quantize[step]
-    """
-
-    quantized_layers = get_layers(
-        model, layers_to_quantize[step], remove_weights=remove_weights
-    )
-
-    for layer in quantized_layers:
-
-        # book-keeping
-        is_master_process = (not dist.is_initialized()) or (
-            dist.is_initialized() and dist.get_rank() == 0
-        )
-        verbose = verbose and is_master_process
-
-        # get block size and centroids
-        module = attrgetter(layer)(model)
-        block_size = get_param(module, layer, block_sizes_config)
-        n_centroids = get_param(module, layer, n_centroids_config)
-        if verbose:
-            logging.info(
-                f"Quantizing layer {layer} with block size {block_size} and {n_centroids} centroids"
-            )
-
-        # quantize layer
-        weight = module.weight.data.clone()
-        is_bias = "bias" in [x[0] for x in module.named_parameters()]
-        bias = module.bias.data.clone() if is_bias else None
-        quantizer = PQ(
-            weight,
-            block_size,
-            n_centroids=n_centroids,
-            n_iter=n_iter,
-            eps=eps,
-            max_tentatives=max_tentatives,
-            verbose=verbose,
-        )
-
-        # quantization performed on all GPUs with same seed
-        quantizer.encode()
-        centroids = quantizer.centroids.contiguous()
-        assignments = quantizer.assignments.contiguous()
-
-        # If n_iter = 0 and state_dict is provided, then
-        # we initialize random assignments and centroids to
-        # random values of the appropriate dimensions
-        # because the quantized model parameters will be
-        # overwritten by the state_dict later on.
-        if n_iter == 0 and state_dict:
-            # Initialize random centroids of the correct size
-            centroids = torch.rand(centroids.size())
-            centroids.cuda()
-            # Get counts and assignment keys from layer in loaded checkpoint.
-            counts_key = layer + "." + "counts"
-            assignment_key = layer + "." + "assignments"
-            # Get number of different bins to include.
-            counts = list(state_dict[counts_key].shape)[0]
-            print(layer)
-            print(state_dict[counts_key])
-            print(counts)
-            # Initialize random assignments of the correct size
-            # with an appropriate number of bins.
-            num_assignments = list(state_dict[assignment_key].shape)[0]
-            num_extra = num_assignments - counts
-            print(num_assignments)
-            print(num_extra)
-            assignments_bins = torch.arange(counts)
-            assignments_rand = torch.randint(0, counts - 1, (num_extra,))
-            assignments = torch.cat((assignments_bins, assignments_rand), 0)
-            # assignments = assignments.type(torch.IntTensor)
-            assignments.cuda()
-            print("assignments")
-            print(assignments)
-
-        # broadcast results to make sure weights are up-to-date
-        if dist.is_initialized():
-            dist.broadcast(centroids, 0)
-            dist.broadcast(assignments, 0)
-
-        # instantiate the quantized counterpart
-        if isinstance(module, nn.Linear):
-            out_features, in_features = map(
-                lambda k: module.__dict__[k], ["out_features", "in_features"]
-            )
-            quantized_module = PQLinear(
-                centroids, assignments, bias, in_features, out_features
-            )
-        elif isinstance(module, nn.Embedding):
-            num_embeddings, embedding_dim = map(
-                lambda k: module.__dict__[k], ["num_embeddings", "embedding_dim"]
-            )
-            quantized_module = PQEmbedding(
-                centroids, assignments, num_embeddings, embedding_dim
-            )
-        elif isinstance(module, nn.Conv2d):
-            out_channels, in_channels, kernel_size = map(
-                lambda k: module.__dict__[k],
-                ["out_channels", "in_channels", "kernel_size"],
-            )
-            stride, padding, dilation, groups, padding_mode = map(
-                lambda k: module.__dict__[k],
-                ["stride", "padding", "dilation", "groups", "padding_mode"],
-            )
-
-            quantized_module = PQConv2d(
-                centroids,
-                assignments,
-                bias,
-                in_channels,
-                out_channels,
-                kernel_size,
-                stride=stride,
-                padding=padding,
-                dilation=dilation,
-                groups=groups,
-                padding_mode=padding_mode,
-            )
-        else:
-            raise ValueError(f"Module {module} not yet supported for quantization")
-
-        # replace layer by its quantized counterpart
-        attrsetter(layer)(model, quantized_module)
-
-        # update statistics
-        size_tracker.update(weight, block_size, n_centroids)
-
-    # return name of quantized layers
-    return quantized_layers
-
-
-def get_layers(model, filter_regexp, remove_weights=False):
-    """
-    Filters out the layers according to a regexp. Note that
-    we omit biases.
-
-    Args:
-        - model: a nn.Module
-        - filter_regexp: a regexp to filter the layers to keep
-          according to their name in model.named_parameters().
-          For instance, the regexp:
-
-             down_layers\\.[123456]\\.(conv[12]|identity\\.conv)
-
-          is keeping blocks down_layers from 1 to 6, and inside
-          each block is keeping conv1, conv2 and identity.conv.
-
-    Remarks:
-        - We add (module\\.)? at the beginning of the regexp to
-          account for the possible use of nn.parallel.DataParallel
-    """
-
-    # get all parameter names
-    all_layers = map(itemgetter(0), model.named_parameters())
-
-    # remove biases
-    all_layers = filter(lambda x: "bias" not in x, all_layers)
-
-    # remove .weight in all other names (or .weight_orig if spectral norm)
-    all_layers = map(lambda x: x.replace(".weight_orig", ""), all_layers)
-    # remove_weights indicates whether the .weights extension should be removed,
-    # in addition to the .weight_orig and .weight extensions on names
-    if remove_weights:
-        all_layers = map(lambda x: x.replace(".weights", ""), all_layers)
-    all_layers = map(lambda x: x.replace(".weight", ""), all_layers)
-
-    # return filtered layers
-    filter_regexp = "(module\\.)?" + "(" + filter_regexp + ")"
-    r = re.compile(filter_regexp)
-
-    return list(filter(r.match, all_layers))
-
-
-def get_param(module, layer_name, param_config):
-    """
-    Given a quantization configuration, get the right parameter
-    for the module to be quantized.
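-
-    For instance (a hypothetical call, following the semantics described below):
-
-        get_param(nn.Linear(16, 4), "decoder.fc1",
-                  {"Linear": ("in_features", {"*": 8})})  # returns 8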
-
-    Args:
-        - module: a nn.Module
-        - layer_name: the name of the layer
-        - param_config: a dict like
-          {
-              'Conv2d': ('kernel_size', {'(3, 3)': 9, '(1, 1)': 4}),
-              'Linear': ('in_features', {'*': 8})
-          }
-          For instance, all conv2d layers with kernel size 3x3 have
-          a block size of 9 and all Linear layers are quantized with
-          a block size of 8, irrespective of their size.
-
-    Remarks:
-        - if 'fuzzy_name' is passed as a parameter, layers whose layer_name
-          include 'fuzzy_name' will be assigned the given parameter.
-          In the following example, conv.expand layers will have a block
-          size of 9 while conv.reduce will have a block size of 4 and all
-          other layers will have a block size of 2.
-          {
-              'Conv2d': ('fuzzy_name', {'expand': 9, 'reduce': 4, '*': 2}),
-              'Linear': ('fuzzy_name', {'classifier': 8, 'projection': 4})
-          }
-
-    """
-
-    layer_type = module.__class__.__name__
-
-    if layer_type not in param_config:
-        raise KeyError(f"Layer type {layer_type} not in config for layer {module}")
-
-    feature, params = param_config[module.__class__.__name__]
-
-    if feature != "fuzzy_name":
-        feature_value = str(getattr(module, feature))
-        if feature_value not in params:
-            if "*" in params:
-                feature_value = "*"
-            else:
-                raise KeyError(
-                    f"{feature}={feature_value} not in config for layer {module}"
-                )
-    else:
-        feature_values = [name for name in params if name in layer_name]
-        if len(feature_values) == 0:
-            if "*" in params:
-                feature_value = "*"
-            else:
-                raise KeyError(f"name={layer_name} not in config for {module}")
-        else:
-            feature_value = feature_values[0]
-
-    return params[feature_value]
-
-
-class SizeTracker(object):
-    """
-    Class to keep track of the compressed network size with iPQ.
-
-    Args:
-        - model: a nn.Module
-
-    Remarks:
-        - The compressed size is the sum of three components
-          for each layer in the network:
-              (1) Storing the centroids given by iPQ in fp16
-              (2) Storing the assignments of the blocks in int8
-              (3) Storing all non-compressed elements such as biases
-        - This cost is only valid if we use 256 centroids (then
-          indexing can indeed be done with int8).
-    """
-
-    def __init__(self, model):
-        self.model = model
-        self.size_non_compressed_model = self.compute_size()
-        self.size_non_quantized = self.size_non_compressed_model
-        self.size_index = 0
-        self.size_centroids = 0
-        self.n_quantized_layers = 0
-
-    def compute_size(self):
-        """
-        Computes the size of the model (in MB).
-        """
-
-        res = 0
-        for _, p in self.model.named_parameters():
-            res += p.numel()
-        return res * 4 / 1024 / 1024
-
-    def update(self, W, block_size, n_centroids):
-        """
-        Updates the running statistics when quantizing a new layer.
-        """
-
-        # bits per weights
-        bits_per_weight = np.log2(n_centroids) / block_size
-        self.n_quantized_layers += 1
-
-        # size of indexing the subvectors of size block_size (in MB)
-        size_index_layer = bits_per_weight * W.numel() / 8 / 1024 / 1024
-        self.size_index += size_index_layer
-
-        # size of the centroids stored in float16 (in MB)
-        size_centroids_layer = n_centroids * block_size * 2 / 1024 / 1024
-        self.size_centroids += size_centroids_layer
-
-        # size of non-compressed layers, e.g.
LayerNorms or biases (in MB) - size_uncompressed_layer = W.numel() * 4 / 1024 / 1024 - self.size_non_quantized -= size_uncompressed_layer - - def __repr__(self): - size_compressed = ( - self.size_index + self.size_centroids + self.size_non_quantized - ) - compression_ratio = self.size_non_compressed_model / size_compressed # NOQA - return ( - f"Non-compressed model size: {self.size_non_compressed_model:.2f} MB. " - f"After quantizing {self.n_quantized_layers} layers, size " - f"(indexing + centroids + other): {self.size_index:.2f} MB + " - f"{self.size_centroids:.2f} MB + {self.size_non_quantized:.2f} MB = " - f"{size_compressed:.2f} MB, compression ratio: {compression_ratio:.2f}x" - ) - - -def attrsetter(*items): - def resolve_attr(obj, attr): - attrs = attr.split(".") - head = attrs[:-1] - tail = attrs[-1] - - for name in head: - obj = getattr(obj, name) - return obj, tail - - def g(obj, val): - for attr in items: - resolved_obj, resolved_attr = resolve_attr(obj, attr) - setattr(resolved_obj, resolved_attr, val) - - return g diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/quantization_options.py b/kosmos-g/fairseq/fairseq/modules/quantization/quantization_options.py deleted file mode 100644 index b46d682c0..000000000 --- a/kosmos-g/fairseq/fairseq/modules/quantization/quantization_options.py +++ /dev/null @@ -1,44 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -def parse_config_yaml(yaml_data): - # Initialize to default options. - quantization_options = { - "n_centroids": { - "Linear": ["in_features", {"*": 256}], - "Embedding": ["embedding_dim", {"*": 256}], - }, - "block_sizes": { - "Linear": ["fuzzy_name", {"fc": 8, "attn": 4, "emb": 4}], - "Embedding": ["fuzzy_name", {"emb": 8}], - }, - "layers_to_quantize": [ - "decoder\\.layers\\.\\d+\\.fc[12]", - "decoder\\.embed_tokens\\.embeddings\\.[012]\\.[01]", - "decoder\\.layers\\.\\d+\\.self_attn\\.(k_proj|v_proj|q_proj|out_proj)", - ], - } - - if "n_centroids" in yaml_data: - quantization_options["n_centroids"] = { - layer: convert_yaml_to_tuple(layer_data) - for layer, layer_data in yaml_data["n_centroids"].items() - } - if "block_sizes" in yaml_data: - quantization_options["block_sizes"] = { - layer: convert_yaml_to_tuple(layer_data) - for layer, layer_data in yaml_data["block_sizes"].items() - } - if "layers_to_quantize" in yaml_data: - quantization_options["layers_to_quantize"] = yaml_data["layers_to_quantize"] - - return quantization_options - - -def convert_yaml_to_tuple(yaml_dictionary): - """Converts a yaml dictionary with two keys: `key` and `value` into a two - argument tuple of those values.""" - return (yaml_dictionary["key"], yaml_dictionary["value"]) diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/__init__.py b/kosmos-g/fairseq/fairseq/modules/quantization/scalar/__init__.py deleted file mode 100644 index 143834f3d..000000000 --- a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from .utils import quantize_model_ # NOQA diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/modules/__init__.py b/kosmos-g/fairseq/fairseq/modules/quantization/scalar/modules/__init__.py deleted file mode 100644 index 8031d9cdb..000000000 --- a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/modules/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .qact import ActivationQuantizer # NOQA -from .qconv import IntConv2d # NOQA -from .qemb import IntEmbedding # NOQA -from .qlinear import IntLinear # NOQA diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/modules/qact.py b/kosmos-g/fairseq/fairseq/modules/quantization/scalar/modules/qact.py deleted file mode 100644 index c5dd1d633..000000000 --- a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/modules/qact.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from ..ops import emulate_int - - -class ActivationQuantizer: - """ - Fake scalar quantization of the activations using a forward hook. - - Args: - - module. a nn.Module for which we quantize the *post-activations* - - p: proportion of activations to quantize, set by default to 1 - - update_step: to recompute quantization parameters - - bits: number of bits for quantization - - method: choose among {"tensor", "histogram", "channel"} - - clamp_threshold: to prevent gradients overflow - - Remarks: - - Parameters scale and zero_point are recomputed every update_step - forward pass to reduce the overhead - - For the list of quantization methods and number of bits, see ops.py - - To remove the hook from the module, simply call self.handle.remove() - - At test time, the activations are fully quantized - - We use the straight-through estimator so that the gradients - back-propagate nicely in the network, this is implemented with - the detach() trick - - The activations are hard-clamped in [-clamp_threshold, clamp_threshold] - to prevent overflow during the backward pass - """ - - def __init__( - self, - module, - p=1, - update_step=1000, - bits=8, - method="histogram", - clamp_threshold=5, - ): - self.module = module - self.p = p - self.update_step = update_step - self.counter = 0 - self.bits = bits - self.method = method - self.clamp_threshold = clamp_threshold - self.handle = None - self.register_hook() - - def register_hook(self): - # forward hook - def quantize_hook(module, x, y): - - # update parameters every 1000 iterations - if self.counter % self.update_step == 0: - self.scale = None - self.zero_point = None - self.counter += 1 - - # train with QuantNoise and evaluate the fully quantized network - p = self.p if self.module.training else 1 - - # quantize activations - y_q, self.scale, self.zero_point = emulate_int( - y.detach(), - bits=self.bits, - method=self.method, - scale=self.scale, - zero_point=self.zero_point, - ) - - # mask to apply noise - mask = torch.zeros_like(y) - mask.bernoulli_(1 - p) - noise = (y_q - y).masked_fill(mask.bool(), 0) - - # using straight-through estimator (STE) - clamp_low = -self.scale * self.zero_point - clamp_high = self.scale * (2 ** self.bits - 1 - self.zero_point) - return torch.clamp(y, clamp_low.item(), clamp_high.item()) + noise.detach() - - # register 
hook
-        self.handle = self.module.register_forward_hook(quantize_hook)
diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/modules/qconv.py b/kosmos-g/fairseq/fairseq/modules/quantization/scalar/modules/qconv.py
deleted file mode 100644
index 83788c6f7..000000000
--- a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/modules/qconv.py
+++ /dev/null
@@ -1,149 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn.functional as F
-from torch.nn.modules.conv import _ConvNd
-from torch.nn.modules.utils import _pair
-
-from ..ops import emulate_int
-
-
-class IntConv2d(_ConvNd):
-    """
-    Quantized counterpart of the nn.Conv2d module that applies QuantNoise during training.
-
-    Args:
-        - standard nn.Conv2d parameters
-        - p: amount of noise to inject (0 = no quantization, 1 = quantize all the weights)
-        - bits: number of bits
-        - method: choose among {"tensor", "histogram", "channel"}
-        - update_step: recompute scale and zero_point every update_step iterations
-
-    Remarks:
-        - We use the straight-through estimator so that the gradients
-          back-propagate nicely in the network, this is implemented with
-          the detach() trick
-        - Parameters scale and zero_point are recomputed every update_step
-          forward pass to reduce the overhead
-        - At test time, the weights are fully quantized
-    """
-
-    def __init__(
-        self,
-        in_channels,
-        out_channels,
-        kernel_size,
-        stride=1,
-        padding=0,
-        dilation=1,
-        groups=1,
-        bias=True,
-        padding_mode="zeros",
-        p=0,
-        bits=8,
-        method="histogram",
-        update_step=1000,
-    ):
-        kernel_size = _pair(kernel_size)
-        stride = _pair(stride)
-        padding = _pair(padding)
-        dilation = _pair(dilation)
-        super(IntConv2d, self).__init__(
-            in_channels,
-            out_channels,
-            kernel_size,
-            stride,
-            padding,
-            dilation,
-            False,
-            _pair(0),
-            groups,
-            bias,
-            padding_mode,
-        )
-
-        # quantization parameters
-        self.p = p
-        self.bits = bits
-        self.method = method
-        self.update_step = update_step
-        self.counter = 0
-
-    def _conv_forward(self, input, weight):
-        if self.padding_mode != "zeros":
-            return F.conv2d(
-                F.pad(input, self._padding_repeated_twice, mode=self.padding_mode),
-                weight,
-                self.bias,
-                self.stride,
-                _pair(0),
-                self.dilation,
-                self.groups,
-            )
-        return F.conv2d(
-            input,
-            weight,
-            self.bias,
-            self.stride,
-            self.padding,
-            self.dilation,
-            self.groups,
-        )
-
-    def forward(self, input):
-        # train with QuantNoise and evaluate the fully quantized network
-        p = self.p if self.training else 1
-
-        # update the quantization parameters every update_step iterations
-        if self.counter % self.update_step == 0:
-            self.scale = None
-            self.zero_point = None
-        self.counter += 1
-
-        # quantize weight
-        weight_quantized, self.scale, self.zero_point = emulate_int(
-            self.weight.detach(),
-            bits=self.bits,
-            method=self.method,
-            scale=self.scale,
-            zero_point=self.zero_point,
-        )
-
-        # mask to apply noise
-        mask = torch.zeros_like(self.weight)
-        mask.bernoulli_(1 - p)
-        noise = (weight_quantized - self.weight).masked_fill(mask.bool(), 0)
-
-        # using straight-through estimator (STE)
-        clamp_low = -self.scale * self.zero_point
-        clamp_high = self.scale * (2 ** self.bits - 1 - self.zero_point)
-        weight = (
-            torch.clamp(self.weight, clamp_low.item(), clamp_high.item())
-            + noise.detach()
-        )
-
-        # return output
-        output = self._conv_forward(input, weight)
-        return output
-
-    def extra_repr(self):
-        return (
-            "in_channels={},
out_channels={}, kernel_size={}, stride={}, " - "padding={}, dilation={}, groups={}, bias={}, quant_noise={}, " - "bits={}, method={}".format( - self.in_channels, - self.out_channels, - self.kernel_size, - self.stride, - self.padding, - self.dilation, - self.groups, - self.bias is not None, - self.p, - self.bits, - self.method, - ) - ) diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/modules/qemb.py b/kosmos-g/fairseq/fairseq/modules/quantization/scalar/modules/qemb.py deleted file mode 100644 index d6cf06e58..000000000 --- a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/modules/qemb.py +++ /dev/null @@ -1,147 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ..ops import emulate_int - - -class IntEmbedding(nn.Module): - """ - Quantized counterpart of the nn.Embedding module that applies QuantNoise during training. - - Args: - - num_embeddings: number of tokens - - embedding_dim: embedding dimension - - p: amount of noise to inject (0 = no quantization, 1 = quantize all the weights) - - bits: number of bits - - method: choose among {"tensor", "histogram", "channel"} - - update_step: recompute scale and zero_point every update_steps iterations - - Remarks: - - We use the straight-through estimator so that the gradients - back-propagate nicely in the network, this is implemented with - the detach() trick - - Parameters scale and zero_point are recomputed every update_step - forward pass to reduce the overhead - - At test time, the weights are fully quantized - """ - - def __init__( - self, - num_embeddings, - embedding_dim, - padding_idx=None, - max_norm=None, - norm_type=2.0, - scale_grad_by_freq=False, - sparse=False, - _weight=None, - p=0, - update_step=1000, - bits=8, - method="histogram", - ): - super(IntEmbedding, self).__init__() - self.num_embeddings = num_embeddings - self.embedding_dim = embedding_dim - if padding_idx is not None: - if padding_idx > 0: - assert ( - padding_idx < self.num_embeddings - ), "Padding_idx must be within num_embeddings" - elif padding_idx < 0: - assert ( - padding_idx >= -self.num_embeddings - ), "Padding_idx must be within num_embeddings" - padding_idx = self.num_embeddings + padding_idx - self.padding_idx = padding_idx - self.max_norm = max_norm - self.norm_type = norm_type - self.scale_grad_by_freq = scale_grad_by_freq - if _weight is None: - self.weight = nn.Parameter(torch.Tensor(num_embeddings, embedding_dim)) - self.reset_parameters() - else: - assert list(_weight.shape) == [ - num_embeddings, - embedding_dim, - ], "Shape of weight does not match num_embeddings and embedding_dim" - self.weight = nn.Parameter(_weight) - self.sparse = sparse - - # quantization parameters - self.p = p - self.bits = bits - self.method = method - self.update_step = update_step - self.counter = 0 - - def reset_parameters(self): - nn.init.normal_(self.weight) - if self.padding_idx is not None: - with torch.no_grad(): - self.weight[self.padding_idx].fill_(0) - - def forward(self, input): - # train with QuantNoise and evaluate the fully quantized network - p = self.p if self.training else 1 - - # update parameters every 1000 iterations - if self.counter % self.update_step == 0: - self.scale = None - self.zero_point = None - self.counter += 1 - - # quantize weight - weight_quantized, self.scale, self.zero_point = emulate_int( - 
self.weight.detach(),
-            bits=self.bits,
-            method=self.method,
-            scale=self.scale,
-            zero_point=self.zero_point,
-        )
-
-        # mask to apply noise
-        mask = torch.zeros_like(self.weight)
-        mask.bernoulli_(1 - p)
-        noise = (weight_quantized - self.weight).masked_fill(mask.bool(), 0)
-
-        # using straight-through estimator (STE)
-        clamp_low = -self.scale * self.zero_point
-        clamp_high = self.scale * (2 ** self.bits - 1 - self.zero_point)
-        weight = (
-            torch.clamp(self.weight, clamp_low.item(), clamp_high.item())
-            + noise.detach()
-        )
-
-        # return output
-        output = F.embedding(
-            input,
-            weight,
-            self.padding_idx,
-            self.max_norm,
-            self.norm_type,
-            self.scale_grad_by_freq,
-            self.sparse,
-        )
-        return output
-
-    def extra_repr(self):
-        s = "{num_embeddings}, {embedding_dim}"
-        if self.padding_idx is not None:
-            s += ", padding_idx={padding_idx}"
-        if self.max_norm is not None:
-            s += ", max_norm={max_norm}"
-        if self.norm_type != 2:
-            s += ", norm_type={norm_type}"
-        if self.scale_grad_by_freq is not False:
-            s += ", scale_grad_by_freq={scale_grad_by_freq}"
-        if self.sparse is not False:
-            s += ", sparse=True"
-        s += ", quant_noise={p}, bits={bits}, method={method}"
-        return s.format(**self.__dict__)
diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/modules/qlinear.py b/kosmos-g/fairseq/fairseq/modules/quantization/scalar/modules/qlinear.py
deleted file mode 100644
index 9db155938..000000000
--- a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/modules/qlinear.py
+++ /dev/null
@@ -1,113 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..ops import emulate_int
-
-
-class IntLinear(nn.Module):
-    """
-    Quantized counterpart of the nn.Linear module that applies QuantNoise during training.
-
-    Args:
-        - in_features: input features
-        - out_features: output features
-        - bias: whether to include a bias term
-        - p: amount of noise to inject (0 = no quantization, 1 = quantize all the weights)
-        - bits: number of bits
-        - method: choose among {"tensor", "histogram", "channel"}
-        - update_step: recompute scale and zero_point every update_step iterations
-
-    Remarks:
-        - We use the straight-through estimator so that the gradients
-          back-propagate nicely in the network, this is implemented with
-          the detach() trick.
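-          In pseudo-code, the weight used in the forward pass reads roughly:
-
-              w = clamp(w, low, high) + (quantize(w) - w).detach()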
-        - Parameters scale and zero_point are recomputed every update_step
-          forward pass to reduce the overhead
-        - At test time, the weights are fully quantized
-    """
-
-    def __init__(
-        self,
-        in_features,
-        out_features,
-        bias=True,
-        p=0,
-        update_step=3000,
-        bits=8,
-        method="histogram",
-    ):
-        super(IntLinear, self).__init__()
-        self.in_features = int(in_features)
-        self.out_features = int(out_features)
-        self.weight = torch.nn.Parameter(torch.Tensor(out_features, in_features))
-        self.chosen_bias = bias
-        if self.chosen_bias:
-            self.bias = torch.nn.Parameter(torch.Tensor(out_features))
-        else:
-            self.register_parameter("bias", None)
-        self.reset_parameters()
-
-        # quantization parameters
-        self.p = p
-        self.bits = bits
-        self.method = method
-        self.update_step = update_step
-        self.counter = 0
-
-    def reset_parameters(self):
-        nn.init.xavier_uniform_(self.weight)
-        if self.chosen_bias:
-            nn.init.constant_(self.bias, 0.0)
-        return
-
-    def forward(self, input):
-        # train with QuantNoise and evaluate the fully quantized network
-        p = self.p if self.training else 1
-
-        # update the quantization parameters every update_step iterations
-        if self.counter % self.update_step == 0:
-            self.scale = None
-            self.zero_point = None
-        self.counter += 1
-
-        # quantize weight
-        weight_quantized, self.scale, self.zero_point = emulate_int(
-            self.weight.detach(),
-            bits=self.bits,
-            method=self.method,
-            scale=self.scale,
-            zero_point=self.zero_point,
-        )
-
-        # mask to apply noise
-        mask = torch.zeros_like(self.weight)
-        mask.bernoulli_(1 - p)
-        noise = (weight_quantized - self.weight).masked_fill(mask.bool(), 0)
-
-        # using straight-through estimator (STE)
-        clamp_low = -self.scale * self.zero_point
-        clamp_high = self.scale * (2 ** self.bits - 1 - self.zero_point)
-        weight = (
-            torch.clamp(self.weight, clamp_low.item(), clamp_high.item())
-            + noise.detach()
-        )
-
-        # return output
-        output = F.linear(input, weight, self.bias)
-        return output
-
-    def extra_repr(self):
-        return "in_features={}, out_features={}, bias={}, quant_noise={}, bits={}, method={}".format(
-            self.in_features,
-            self.out_features,
-            self.bias is not None,
-            self.p,
-            self.bits,
-            self.method,
-        )
diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/ops.py b/kosmos-g/fairseq/fairseq/modules/quantization/scalar/ops.py
deleted file mode 100644
index ad1e14e05..000000000
--- a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/ops.py
+++ /dev/null
@@ -1,59 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-try:
-    import torch.ao.quantization as quantization
-except ImportError:
-    import torch.quantization as quantization
-
-
-def emulate_int(w, bits, method, scale=None, zero_point=None):
-    q = globals()[f"emulate_int8_{method}"]
-    return q(w, scale=scale, zero_point=zero_point, bits=bits)
-
-
-def quantize(w, scale, zero_point, bits=8):
-    # In the default behavior (bits=8), max_val = 255.
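-    # i.e. fake-quantize w to one of 2**bits integer levels and map it back:
-    #   w_hat = (clamp(round(w / scale + zero_point), 0, max_val) - zero_point) * scale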
-    max_val = 2 ** bits - 1
-    return (
-        torch.clamp(torch.round(w / scale + zero_point), 0, max_val) - zero_point
-    ) * scale
-
-
-def emulate_int8_histogram(w, scale=None, zero_point=None, bits=8):
-    if scale is None:
-        obs = quantization.observer.HistogramObserver()
-        obs.to(device=w.device)
-        _ = obs(w.float())
-        scale, zero_point = obs.calculate_qparams()
-        scale = scale.cuda().type_as(w)
-        zero_point = zero_point.cuda().type_as(w)
-    return quantize(w, scale, zero_point, bits=bits), scale, zero_point
-
-
-def emulate_int8_channel(w, scale=None, zero_point=None, bits=8):
-    if scale is None:
-        obs = quantization.observer.PerChannelMinMaxObserver(
-            ch_axis=-1, qscheme=torch.per_channel_symmetric
-        )
-        obs.to(device=w.device)
-        _ = obs(w)
-        scale, zero_point, ch_axis = obs.get_qparams()
-        scale = scale.cuda().type_as(w)
-        zero_point = zero_point.cuda().type_as(w)
-    return quantize(w, scale, zero_point, bits=bits), scale, zero_point
-
-
-def emulate_int8_tensor(w, scale=None, zero_point=None, bits=8):
-    if scale is None:
-        obs = quantization.observer.MinMaxObserver()
-        obs.to(device=w.device)
-        _ = obs(w)
-        scale, zero_point = obs.calculate_qparams()
-        scale = scale.cuda().type_as(w)
-        zero_point = zero_point.cuda().type_as(w)
-    return quantize(w, scale, zero_point, bits=bits), scale, zero_point
diff --git a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/utils.py b/kosmos-g/fairseq/fairseq/modules/quantization/scalar/utils.py
deleted file mode 100644
index d4b1cc255..000000000
--- a/kosmos-g/fairseq/fairseq/modules/quantization/scalar/utils.py
+++ /dev/null
@@ -1,80 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from operator import attrgetter
-
-import torch.distributed as dist
-import torch.nn as nn
-
-from ..pq.utils import attrsetter, get_layers
-from .modules import ActivationQuantizer, IntConv2d, IntEmbedding, IntLinear
-
-
-MAPPING = {nn.Linear: IntLinear, nn.Embedding: IntEmbedding, nn.Conv2d: IntConv2d}
-
-
-def quantize_model_(
-    model, p=0.2, bits=8, update_step=3000, method="histogram", remove_weights=False
-):
-    """
-    Replaces all modules with their scalar quantized counterparts and
-    registers hooks to quantize the post-activations of those modules.
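-
-    A minimal usage sketch (hypothetical toy model, default arguments):
-
-        model = nn.Sequential(nn.Linear(8, 8))
-        quantize_model_(model)  # the Linear is replaced by an IntLinear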
- - Args: - - model: a nn.Module - - p: amount of noise (0 for no noise, 1 to quantize all the weights/activations) - - bits: number of bits - - update_step: update quantization parameters every update_step steps - """ - # quantize all layers - # remove weights indicates whether the weights extension should be removed, in addition to - # weight_orig and weight extension on names - quantized_layers = get_layers(model, "(.*?)", remove_weights=remove_weights) - - for layer in quantized_layers: - - # book-keeping - is_master_process = (not dist.is_initialized()) or ( - dist.is_initialized() and dist.get_rank() == 0 - ) - - # recover module - module = attrgetter(layer)(model) - if is_master_process: - logging.info( - f"Quantizing layer {layer} with bits={bits} and QuantNoise={p}" - ) - - # quantization params - q_params = { - "p": p, - "update_step": update_step, - "bits": bits, - "method": method, - "counter": 0, - } - - # instantiate the quantized counterpart - if isinstance(module, tuple(MAPPING.keys())): - QuantizedModule = MAPPING[module.__class__] - quantized_module = QuantizedModule.__new__(QuantizedModule) - params = module.__dict__ - params.update(q_params) - quantized_module.__dict__.update(params) - - else: - if is_master_process: - logging.info(f"Module {module} not yet supported for quantization") - continue - - # activation quantization - a_q = ActivationQuantizer(quantized_module, p=0, bits=bits, method=method) - - # replace layer by its quantized counterpart - attrsetter(layer)(model, quantized_module) - - # return name of quantized layers - return quantized_layers diff --git a/kosmos-g/fairseq/fairseq/modules/rotary_positional_embedding.py b/kosmos-g/fairseq/fairseq/modules/rotary_positional_embedding.py deleted file mode 100644 index 84b88984e..000000000 --- a/kosmos-g/fairseq/fairseq/modules/rotary_positional_embedding.py +++ /dev/null @@ -1,51 +0,0 @@ -import torch - - -class RotaryPositionalEmbedding(torch.nn.Module): - def __init__(self, dim, base=10000, precision=torch.half): - """Rotary positional embedding - Reference : https://blog.eleuther.ai/rotary-embeddings/ - Paper: https://arxiv.org/pdf/2104.09864.pdf - Args: - dim: Dimension of embedding - base: Base value for exponential - precision: precision to use for numerical values - """ - super().__init__() - inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim)) - self.register_buffer("inv_freq", inv_freq) - self.seq_len_cached = None - self.cos_cached = None - self.sin_cached = None - self.precision = precision - - def forward(self, x, seq_len=None): - """ - Args: - x: Input x with T X B X C - seq_len: Sequence length of input x - """ - if seq_len != self.seq_len_cached: - self.seq_len_cached = seq_len - t = torch.arange(seq_len, device=x.device).type_as(self.inv_freq) - freqs = torch.einsum("i,j->ij", t, self.inv_freq) - emb = torch.cat((freqs, freqs), dim=-1).to(x.device) - self.cos_cached = emb.cos()[:, None, None, :] - self.sin_cached = emb.sin()[:, None, None, :] - return self.cos_cached, self.sin_cached - - -# rotary pos emb helpers: -def rotate_half(x): - x1, x2 = x[..., : x.shape[-1] // 2], x[..., x.shape[-1] // 2 :] - return torch.cat( - (-x2, x1), dim=x1.ndim - 1 - ) # dim=-1 triggers a bug in earlier torch versions - - -def apply_rotary_pos_emb(q, k, cos, sin, offset: int = 0): - cos, sin = ( - cos[offset : q.shape[0] + offset, ...], - sin[offset : q.shape[0] + offset, ...], - ) - return (q * cos) + (rotate_half(q) * sin), (k * cos) + (rotate_half(k) * sin) diff --git 
a/kosmos-g/fairseq/fairseq/modules/same_pad.py b/kosmos-g/fairseq/fairseq/modules/same_pad.py deleted file mode 100644 index 4c04990ea..000000000 --- a/kosmos-g/fairseq/fairseq/modules/same_pad.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from torch import nn - - -class SamePad(nn.Module): - def __init__(self, kernel_size, causal=False): - super().__init__() - if causal: - self.remove = kernel_size - 1 - else: - self.remove = 1 if kernel_size % 2 == 0 else 0 - - def forward(self, x): - if self.remove > 0: - x = x[:, :, : -self.remove] - return x diff --git a/kosmos-g/fairseq/fairseq/modules/scalar_bias.py b/kosmos-g/fairseq/fairseq/modules/scalar_bias.py deleted file mode 100644 index c96247c75..000000000 --- a/kosmos-g/fairseq/fairseq/modules/scalar_bias.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -# - -import torch - - -class ScalarBias(torch.autograd.Function): - """ - Adds a vector of scalars, used in self-attention mechanism to allow - the model to optionally attend to this vector instead of the past - """ - - @staticmethod - def forward(ctx, input, dim, bias_init): - size = list(input.size()) - size[dim] += 1 - output = input.new(*size).fill_(bias_init) - output.narrow(dim, 1, size[dim] - 1).copy_(input) - ctx.dim = dim - return output - - @staticmethod - def backward(ctx, grad): - return grad.narrow(ctx.dim, 1, grad.size(ctx.dim) - 1), None, None - - -def scalar_bias(input, dim, bias_init=0): - return ScalarBias.apply(input, dim, bias_init) diff --git a/kosmos-g/fairseq/fairseq/modules/sinusoidal_positional_embedding.py b/kosmos-g/fairseq/fairseq/modules/sinusoidal_positional_embedding.py deleted file mode 100644 index 4793ecfb5..000000000 --- a/kosmos-g/fairseq/fairseq/modules/sinusoidal_positional_embedding.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Any, Optional - -import torch -import torch.onnx.operators -from fairseq import utils -from torch import Tensor, nn - - -class SinusoidalPositionalEmbedding(nn.Module): - """This module produces sinusoidal positional embeddings of any length. - - Padding symbols are ignored. - """ - - def __init__(self, embedding_dim, padding_idx, init_size=1024): - super().__init__() - self.embedding_dim = embedding_dim - self.padding_idx = padding_idx if padding_idx is not None else 0 - self.weights = SinusoidalPositionalEmbedding.get_embedding( - init_size, embedding_dim, padding_idx - ) - self.onnx_trace = False - self.register_buffer("_float_tensor", torch.FloatTensor(1)) - self.max_positions = int(1e5) - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - @staticmethod - def get_embedding( - num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None - ): - """Build sinusoidal embeddings. - - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". 
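-
-        Concretely, with half_dim = embedding_dim // 2 and
-        freq_i = exp(-i * log(10000) / (half_dim - 1)), position pos maps to
-
-            emb[pos, i]            = sin(pos * freq_i)   for i < half_dim
-            emb[pos, half_dim + i] = cos(pos * freq_i)
-
-        i.e. all sines in the first half and all cosines in the second, rather
-        than the interleaved sin/cos layout described in the paper.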
- """ - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float) * -emb) - emb = torch.arange(num_embeddings, dtype=torch.float).unsqueeze( - 1 - ) * emb.unsqueeze(0) - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1).view( - num_embeddings, -1 - ) - if embedding_dim % 2 == 1: - # zero pad - emb = torch.cat([emb, torch.zeros(num_embeddings, 1)], dim=1) - if padding_idx is not None: - emb[padding_idx, :] = 0 - return emb - - def forward( - self, - input, - incremental_state: Optional[Any] = None, - timestep: Optional[Tensor] = None, - positions: Optional[Any] = None, - ): - """Input is expected to be of size [bsz x seqlen].""" - bspair = torch.onnx.operators.shape_as_tensor(input) - bsz, seq_len = bspair[0], bspair[1] - max_pos = self.padding_idx + 1 + seq_len - if self.weights is None or max_pos > self.weights.size(0): - # recompute/expand embeddings if needed - self.weights = SinusoidalPositionalEmbedding.get_embedding( - max_pos, self.embedding_dim, self.padding_idx - ) - self.weights = self.weights.to(self._float_tensor) - - if incremental_state is not None: - # positions is the same for every token when decoding a single step - pos = timestep.view(-1)[0] + 1 if timestep is not None else seq_len - if self.onnx_trace: - return ( - self.weights.index_select(index=self.padding_idx + pos, dim=0) - .unsqueeze(1) - .repeat(bsz, 1, 1) - ) - return self.weights[self.padding_idx + pos, :].expand(bsz, 1, -1) - - positions = utils.make_positions( - input, self.padding_idx, onnx_trace=self.onnx_trace - ) - if self.onnx_trace: - flat_embeddings = self.weights.detach().index_select(0, positions.view(-1)) - embedding_shape = torch.cat( - (bsz.view(1), seq_len.view(1), torch.tensor([-1], dtype=torch.long)) - ) - embeddings = torch.onnx.operators.reshape_from_tensor_shape( - flat_embeddings, embedding_shape - ) - return embeddings - return ( - self.weights.index_select(0, positions.view(-1)) - .view(bsz, seq_len, -1) - .detach() - ) diff --git a/kosmos-g/fairseq/fairseq/modules/sparse_multihead_attention.py b/kosmos-g/fairseq/fairseq/modules/sparse_multihead_attention.py deleted file mode 100644 index 3cbd9d678..000000000 --- a/kosmos-g/fairseq/fairseq/modules/sparse_multihead_attention.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch - -from .multihead_attention import MultiheadAttention - - -class SparseMultiheadAttention(MultiheadAttention): - """Sparse Multi-Headed Attention. - - "Generating Long Sequences with Sparse Transformers". Implements - fixed factorized self attention, where l=stride and c=expressivity. - A(1) includes all words in the stride window and A(2) takes a summary of c - words from the end of each stride window. - If is_bidirectional=False, we do not include any words past the current word, - as in the paper. 
- """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - add_bias_kv=False, - add_zero_attn=False, - self_attention=False, - encoder_decoder_attention=False, - stride=32, - expressivity=8, - is_bidirectional=True, - ): - - super().__init__( - embed_dim, - num_heads, - kdim, - vdim, - dropout, - bias, - add_bias_kv, - add_zero_attn, - self_attention, - encoder_decoder_attention, - ) - - self.is_bidirectional = is_bidirectional - self.stride = stride - self.expressivity = expressivity - assert self.stride > 0 and self.stride >= self.expressivity - - # Used for Ai(2) calculations - beginning of [l-c, l] range - def compute_checkpoint(self, word_index): - if word_index % self.stride == 0 and word_index != 0: - checkpoint_index = word_index - self.expressivity - else: - checkpoint_index = ( - math.floor(word_index / self.stride) * self.stride - + self.stride - - self.expressivity - ) - return checkpoint_index - - # Computes Ai(2) - def compute_subset_summaries(self, absolute_max): - checkpoint_index = self.compute_checkpoint(0) - subset_two = set() - while checkpoint_index <= absolute_max - 1: - summary = set( - range( - checkpoint_index, - min(checkpoint_index + self.expressivity + 1, absolute_max), - ) - ) - subset_two = subset_two.union(summary) - checkpoint_index = self.compute_checkpoint(checkpoint_index + self.stride) - return subset_two - - # Sparse Transformer Fixed Attention Pattern: https://arxiv.org/pdf/1904.10509.pdf - def compute_fixed_attention_subset(self, word_index, tgt_len): - # +1s account for range function; [min, max) -> [min, max] - if not self.is_bidirectional: - absolute_max = word_index + 1 - else: - absolute_max = tgt_len - - # Subset 1 - whole window - rounded_index = ( - math.floor((word_index + self.stride) / self.stride) * self.stride - ) - if word_index % self.stride == 0 and word_index != 0: - subset_one = set( - range(word_index - self.stride, min(absolute_max, word_index + 1)) - ) - else: - subset_one = set( - range( - max(0, rounded_index - self.stride), - min(absolute_max, rounded_index + 1), - ) - ) - - # Subset 2 - summary per window - # If bidirectional, subset 2 is the same for every index - subset_two = set() - if not self.is_bidirectional: - subset_two = self.compute_subset_summaries(absolute_max) - - return subset_one.union(subset_two) - - # Compute sparse mask - if bidirectional, can pre-compute and store - def buffered_sparse_mask(self, tensor, tgt_len, src_len): - assert tgt_len > self.stride - sparse_mask = torch.empty((tgt_len, src_len)).float().fill_(float("-inf")) - - # If bidirectional, subset 2 is the same for every index - subset_summaries = set() - if self.is_bidirectional: - subset_summaries = self.compute_subset_summaries(tgt_len) - - for i in range(tgt_len): - fixed_attention_subset = self.compute_fixed_attention_subset(i, tgt_len) - fixed_attention_subset = fixed_attention_subset.union(subset_summaries) - included_word_indices = torch.LongTensor(list(fixed_attention_subset)) - sparse_mask[i].index_fill_(0, included_word_indices, 0) - return sparse_mask.type_as(tensor) - - def apply_sparse_mask(self, attn_weights, tgt_len, src_len, bsz): - sparse_mask = self.buffered_sparse_mask(attn_weights, tgt_len, src_len) - sparse_mask = sparse_mask.unsqueeze(0).expand( - bsz * self.num_heads, tgt_len, src_len - ) - attn_weights += sparse_mask diff --git a/kosmos-g/fairseq/fairseq/modules/sparse_transformer_sentence_encoder.py 
b/kosmos-g/fairseq/fairseq/modules/sparse_transformer_sentence_encoder.py deleted file mode 100644 index f41ec0932..000000000 --- a/kosmos-g/fairseq/fairseq/modules/sparse_transformer_sentence_encoder.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.nn as nn -from fairseq.modules import TransformerSentenceEncoder -from fairseq.modules.sparse_transformer_sentence_encoder_layer import ( - SparseTransformerSentenceEncoderLayer, -) - - -class SparseTransformerSentenceEncoder(TransformerSentenceEncoder): - """ - Sparse implementation of the TransformerSentenceEncoder - - see SparseMultiheadAttention - """ - - def __init__( - self, - padding_idx: int, - vocab_size: int, - num_encoder_layers: int = 6, - embedding_dim: int = 768, - ffn_embedding_dim: int = 3072, - num_attention_heads: int = 8, - dropout: float = 0.1, - attention_dropout: float = 0.1, - activation_dropout: float = 0.1, - max_seq_len: int = 256, - num_segments: int = 2, - use_position_embeddings: bool = True, - offset_positions_by_padding: bool = True, - encoder_normalize_before: bool = False, - apply_bert_init: bool = False, - activation_fn: str = "relu", - learned_pos_embedding: bool = True, - embed_scale: float = None, - freeze_embeddings: bool = False, - n_trans_layers_to_freeze: int = 0, - export: bool = False, - is_bidirectional: bool = True, - stride: int = 32, - expressivity: int = 8, - ) -> None: - - super().__init__( - padding_idx, - vocab_size, - num_encoder_layers, - embedding_dim, - ffn_embedding_dim, - num_attention_heads, - dropout, - attention_dropout, - activation_dropout, - max_seq_len, - num_segments, - use_position_embeddings, - offset_positions_by_padding, - encoder_normalize_before, - apply_bert_init, - activation_fn, - learned_pos_embedding, - embed_scale, - freeze_embeddings, - n_trans_layers_to_freeze, - export, - ) - - self.layers = nn.ModuleList( - [ - SparseTransformerSentenceEncoderLayer( - embedding_dim=self.embedding_dim, - ffn_embedding_dim=ffn_embedding_dim, - num_attention_heads=num_attention_heads, - dropout=dropout, - attention_dropout=attention_dropout, - activation_dropout=activation_dropout, - activation_fn=activation_fn, - export=export, - is_bidirectional=is_bidirectional, - stride=stride, - expressivity=expressivity, - ) - for _ in range(num_encoder_layers) - ] - ) - - def freeze_module_params(m): - if m is not None: - for p in m.parameters(): - p.requires_grad = False - - for layer in range(n_trans_layers_to_freeze): - freeze_module_params(self.layers[layer]) diff --git a/kosmos-g/fairseq/fairseq/modules/sparse_transformer_sentence_encoder_layer.py b/kosmos-g/fairseq/fairseq/modules/sparse_transformer_sentence_encoder_layer.py deleted file mode 100644 index d95da59c2..000000000 --- a/kosmos-g/fairseq/fairseq/modules/sparse_transformer_sentence_encoder_layer.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
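For reference, the fixed factorized pattern that `SparseMultiheadAttention` above implements (window subset A(1) plus per-window summaries A(2)) can be visualized through its `buffered_sparse_mask` helper. A minimal sketch with toy sizes, assuming the deleted fairseq modules are still importable from a checkout that predates this removal:

```python
import torch

# Assumes a fairseq checkout where this module still exists.
from fairseq.modules.sparse_multihead_attention import SparseMultiheadAttention

# Toy configuration: stride (l) = 4, expressivity (c) = 2.
attn = SparseMultiheadAttention(
    embed_dim=16,
    num_heads=4,
    self_attention=True,
    stride=4,
    expressivity=2,
    is_bidirectional=False,
)

# The mask is 0 where attention is permitted and -inf elsewhere;
# tgt_len must exceed the stride (asserted by the implementation).
mask = attn.buffered_sparse_mask(torch.empty(1), tgt_len=12, src_len=12)
print((mask == 0).int())  # row i marks the source positions target i may attend to
```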
- -from fairseq.modules import TransformerSentenceEncoderLayer -from fairseq.modules.sparse_multihead_attention import SparseMultiheadAttention - - -class SparseTransformerSentenceEncoderLayer(TransformerSentenceEncoderLayer): - """ - Implements a Sparse Transformer Encoder Layer (see SparseMultiheadAttention) - """ - - def __init__( - self, - embedding_dim: int = 768, - ffn_embedding_dim: int = 3072, - num_attention_heads: int = 8, - dropout: float = 0.1, - attention_dropout: float = 0.1, - activation_dropout: float = 0.1, - activation_fn: str = "relu", - export: bool = False, - is_bidirectional: bool = True, - stride: int = 32, - expressivity: int = 8, - ) -> None: - - super().__init__( - embedding_dim, - ffn_embedding_dim, - num_attention_heads, - dropout, - attention_dropout, - activation_dropout, - activation_fn, - export, - ) - - self.self_attn = SparseMultiheadAttention( - self.embedding_dim, - num_attention_heads, - dropout=attention_dropout, - add_bias_kv=False, - add_zero_attn=False, - self_attention=True, - is_bidirectional=is_bidirectional, - stride=stride, - expressivity=expressivity, - ) diff --git a/kosmos-g/fairseq/fairseq/modules/transformer_layer.py b/kosmos-g/fairseq/fairseq/modules/transformer_layer.py deleted file mode 100644 index 3ead580ad..000000000 --- a/kosmos-g/fairseq/fairseq/modules/transformer_layer.py +++ /dev/null @@ -1,554 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Dict, List, Optional - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.modules import LayerNorm, MultiheadAttention -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.quant_noise import quant_noise -from torch import Tensor -from fairseq.models.transformer import ( - TransformerConfig, -) - - -class TransformerEncoderLayerBase(nn.Module): - """Encoder layer block. - - In the original paper each operation (multi-head attention or FFN) is - postprocessed with: `dropout -> add residual -> layernorm`. In the - tensor2tensor code they suggest that learning is more robust when - preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *cfg.encoder.normalize_before* to ``True``.
- - Args: - args (argparse.Namespace): parsed command-line arguments - """ - - def __init__(self, cfg): - super().__init__() - self.cfg = cfg - self.embed_dim = cfg.encoder.embed_dim - self.quant_noise = cfg.quant_noise.pq - self.quant_noise_block_size = cfg.quant_noise.pq_block_size - self.self_attn = self.build_self_attention(self.embed_dim, cfg) - self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - self.dropout_module = FairseqDropout( - cfg.dropout, module_name=self.__class__.__name__ - ) - self.activation_fn = utils.get_activation_fn(activation=cfg.activation_fn) - activation_dropout_p = cfg.activation_dropout - if activation_dropout_p == 0: - # for backwards compatibility with models that use cfg.relu_dropout - activation_dropout_p = cfg.relu_dropout or 0 - self.activation_dropout_module = FairseqDropout( - float(activation_dropout_p), module_name=self.__class__.__name__ - ) - self.normalize_before = cfg.encoder.normalize_before - self.fc1 = self.build_fc1( - self.embed_dim, - cfg.encoder.ffn_embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - self.fc2 = self.build_fc2( - cfg.encoder.ffn_embed_dim, - self.embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - - self.final_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise( - nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size - ) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise( - nn.Linear(input_dim, output_dim), p=q_noise, block_size=qn_block_size - ) - - def _get_fc_rank(self, remove_num: int) -> List[int]: - f1_filter_param = [] - for i in range(self.fc1.out_features): - f1_filter_param.append( - torch.sum(torch.abs(self.fc1.weight[i])) - + torch.sum(torch.abs(self.fc2.weight[:, i])) - + torch.abs(self.fc1.bias[i]) - ) - return sorted( - range(len(f1_filter_param)), key=lambda k: f1_filter_param[k], reverse=False - )[0:remove_num] - - def _prune_fc_layer(self, remove_index: List[int]): - new_fc1_weight = [] - new_fc1_bias = [] - for i in range(self.fc1.out_features): - if i not in remove_index: - new_fc1_weight.append(self.fc1.weight[i]) - new_fc1_bias.append(self.fc1.bias[i]) - - new_fc1_weight = torch.stack(new_fc1_weight).detach() - new_fc1_weight.requires_grad = True - - new_fc1_bias = torch.stack(new_fc1_bias).detach() - new_fc1_bias.requires_grad = True - - self.fc1 = quant_noise( - nn.Linear(self.fc1.in_features, self.fc1.out_features - len(remove_index)), - p=self.quant_noise, - block_size=self.quant_noise_block_size, - ) - self.fc1.weight = torch.nn.Parameter(new_fc1_weight) - self.fc1.bias = torch.nn.Parameter(new_fc1_bias) - - new_fc2_weight = [] - new_fc2_bias = [] - for i in range(self.fc2.in_features): - if i not in remove_index: - new_fc2_weight.append(self.fc2.weight[:, i]) - new_fc2_bias = self.fc2.bias.detach() - - new_fc2_weight = torch.stack(new_fc2_weight, dim=-1).detach() - new_fc2_weight.requires_grad = True - - new_fc2_bias = self.fc2.bias.detach() - new_fc2_bias.requires_grad = True - - self.fc2 = quant_noise( - nn.Linear(self.fc2.in_features - len(remove_index), self.fc2.out_features), - p=self.quant_noise, - block_size=self.quant_noise_block_size, - ) - self.fc2.weight = torch.nn.Parameter(new_fc2_weight) - self.fc2.bias = torch.nn.Parameter(new_fc2_bias) - - def build_self_attention(self, embed_dim, cfg): - return MultiheadAttention( - embed_dim, - cfg.encoder.attention_heads, - 
dropout=cfg.attention_dropout, - self_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - ) - - def residual_connection(self, x, residual): - return residual + x - - def upgrade_state_dict_named(self, state_dict, name): - """ - Rename layer norm states from `...layer_norms.0.weight` to - `...self_attn_layer_norm.weight` and `...layer_norms.1.weight` to - `...final_layer_norm.weight` - """ - layer_norm_map = {"0": "self_attn_layer_norm", "1": "final_layer_norm"} - for old, new in layer_norm_map.items(): - for m in ("weight", "bias"): - k = "{}.layer_norms.{}.{}".format(name, old, m) - if k in state_dict: - state_dict["{}.{}.{}".format(name, new, m)] = state_dict[k] - del state_dict[k] - - def forward( - self, - x, - encoder_padding_mask: Optional[Tensor], - attn_mask: Optional[Tensor] = None, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, seq_len)` where padding elements are indicated by ``1``. - attn_mask (ByteTensor): binary tensor of shape `(tgt_len, src_len)`, - where `tgt_len` is the length of output and `src_len` is the - length of input, though here both are equal to `seq_len`. - `attn_mask[tgt_i, src_j] = 1` means that when calculating the - embedding for `tgt_i`, we exclude (mask out) `src_j`. This is - useful for strided self-attention. - - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - # anything in original attn_mask = 1, becomes -1e8 - # anything in original attn_mask = 0, becomes 0 - # Note that we cannot use -inf here, because at some edge cases, - # the attention weight (before softmax) for some padded element in query - # will become -inf, which results in NaN in model parameters - if attn_mask is not None: - attn_mask = attn_mask.masked_fill( - attn_mask.to(torch.bool), -1e8 if x.dtype == torch.float32 else -1e4 - ) - - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - x, _ = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=encoder_padding_mask, - need_weights=False, - attn_mask=attn_mask, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.final_layer_norm(x) - return x - - -# backward compatible with the legacy argparse format -class TransformerEncoderLayer(TransformerEncoderLayerBase): - def __init__(self, args): - super().__init__(TransformerConfig.from_namespace(args)) - self.args = args - - def build_self_attention(self, embed_dim, args): - return super().build_self_attention( - embed_dim, TransformerConfig.from_namespace(args) - ) - - -class TransformerDecoderLayerBase(nn.Module): - """Decoder layer block. - - In the original paper each operation (multi-head attention, encoder - attention or FFN) is postprocessed with: `dropout -> add residual -> - layernorm`. In the tensor2tensor code they suggest that learning is more - robust when preprocessing each layer with layernorm and postprocessing with: - `dropout -> add residual`. We default to the approach in the paper, but the - tensor2tensor approach can be enabled by setting - *cfg.decoder.normalize_before* to ``True``. 
- - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs - (default: False). - """ - - def __init__( - self, cfg, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False - ): - super().__init__() - self.embed_dim = cfg.decoder.embed_dim - self.dropout_module = FairseqDropout( - cfg.dropout, module_name=self.__class__.__name__ - ) - self.quant_noise = cfg.quant_noise.pq - self.quant_noise_block_size = cfg.quant_noise.pq_block_size - - self.cross_self_attention = cfg.cross_self_attention - - self.self_attn = self.build_self_attention( - self.embed_dim, - cfg, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - ) - self.attn_ln = ( - LayerNorm(self.embed_dim) - if utils.safe_getattr(cfg, "scale_attn", False) - else None - ) - self.nh = self.self_attn.num_heads - self.head_dim = self.self_attn.head_dim - scale_heads = utils.safe_getattr(cfg, "scale_heads", False) - self.c_attn = ( - nn.Parameter(torch.ones((self.nh,)), requires_grad=True) - if scale_heads - else None - ) - - self.activation_fn = utils.get_activation_fn(activation=cfg.activation_fn) - activation_dropout_p = cfg.activation_dropout - if activation_dropout_p == 0: - # for backwards compatibility with models that use cfg.relu_dropout - activation_dropout_p = cfg.relu_dropout or 0 - self.activation_dropout_module = FairseqDropout( - float(activation_dropout_p), module_name=self.__class__.__name__ - ) - self.normalize_before = cfg.decoder.normalize_before - - self.self_attn_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = self.build_encoder_attention(self.embed_dim, cfg) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - - self.ffn_layernorm = ( - LayerNorm(cfg.decoder.ffn_embed_dim) - if utils.safe_getattr(cfg, "deepnet", False) - else None - ) - self.w_resid = ( - nn.Parameter( - torch.ones( - self.embed_dim, - ), - requires_grad=True, - ) - if utils.safe_getattr(cfg, "scale_resids", False) - else None - ) - - self.fc1 = self.build_fc1( - self.embed_dim, - cfg.decoder.ffn_embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - self.fc2 = self.build_fc2( - cfg.decoder.ffn_embed_dim, - self.embed_dim, - self.quant_noise, - self.quant_noise_block_size, - ) - - self.final_layer_norm = LayerNorm(self.embed_dim, export=cfg.export) - self.need_attn = True - - self.onnx_trace = False - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_self_attention( - self, embed_dim, cfg, add_bias_kv=False, add_zero_attn=False - ): - return MultiheadAttention( - embed_dim, - cfg.decoder.attention_heads, - dropout=cfg.attention_dropout, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - self_attention=not cfg.cross_self_attention, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - attention_norm=True if utils.safe_getattr(cfg, "deepnet", False) else False, - ) - - def build_encoder_attention(self, embed_dim, cfg): - return MultiheadAttention( - embed_dim, - cfg.decoder.attention_heads, - kdim=cfg.encoder.embed_dim, - vdim=cfg.encoder.embed_dim, - dropout=cfg.attention_dropout, - 
encoder_decoder_attention=True, - q_noise=self.quant_noise, - qn_block_size=self.quant_noise_block_size, - ) - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def residual_connection(self, x, residual): - return residual + x - - def forward( - self, - x, - encoder_out: Optional[torch.Tensor] = None, - encoder_padding_mask: Optional[torch.Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - prev_self_attn_state: Optional[List[torch.Tensor]] = None, - prev_attn_state: Optional[List[torch.Tensor]] = None, - self_attn_mask: Optional[torch.Tensor] = None, - self_attn_padding_mask: Optional[torch.Tensor] = None, - need_attn: bool = False, - need_head_weights: bool = False, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor, optional): binary - ByteTensor of shape `(batch, src_len)` where padding - elements are indicated by ``1``. - need_attn (bool, optional): return attention weights - need_head_weights (bool, optional): return attention weights - for each head (default: return average over heads). - - Returns: - encoded output of shape `(seq_len, batch, embed_dim)` - """ - if need_head_weights: - need_attn = True - - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - if prev_self_attn_state is not None: - prev_key, prev_value = prev_self_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_self_attn_state) >= 3: - saved_state["prev_key_padding_mask"] = prev_self_attn_state[2] - assert incremental_state is not None - self.self_attn._set_input_buffer(incremental_state, saved_state) - _self_attn_input_buffer = self.self_attn._get_input_buffer(incremental_state) - if self.cross_self_attention and not ( - incremental_state is not None - and _self_attn_input_buffer is not None - and "prev_key" in _self_attn_input_buffer - ): - if self_attn_mask is not None: - assert encoder_out is not None - self_attn_mask = torch.cat( - (x.new_zeros(x.size(0), encoder_out.size(0)), self_attn_mask), dim=1 - ) - if self_attn_padding_mask is not None: - if encoder_padding_mask is None: - assert encoder_out is not None - encoder_padding_mask = self_attn_padding_mask.new_zeros( - encoder_out.size(1), encoder_out.size(0) - ) - self_attn_padding_mask = torch.cat( - (encoder_padding_mask, self_attn_padding_mask), dim=1 - ) - assert encoder_out is not None - y = torch.cat((encoder_out, x), dim=0) - else: - y = x - - x, attn = self.self_attn( - query=x, - key=y, - value=y, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - need_weights=False, - attn_mask=self_attn_mask, - ) - if self.c_attn is not None: - tgt_len, bsz = x.size(0), x.size(1) - x = x.view(tgt_len, bsz, self.nh, self.head_dim) - x = torch.einsum("tbhd,h->tbhd", x, self.c_attn) - x = x.reshape(tgt_len, bsz, self.embed_dim) - if self.attn_ln is not None: - x = self.attn_ln(x) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - if self.encoder_attn is not None and encoder_out is not None: - residual = x - if self.normalize_before: - x = self.encoder_attn_layer_norm(x) - if prev_attn_state is not None: - prev_key, prev_value = prev_attn_state[:2] - saved_state: Dict[str, Optional[Tensor]] = { - "prev_key": prev_key, - "prev_value": prev_value, - } - if len(prev_attn_state) >= 3: - 
saved_state["prev_key_padding_mask"] = prev_attn_state[2] - assert incremental_state is not None - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=need_attn or (not self.training and self.need_attn), - need_head_weights=need_head_weights, - ) - x = self.dropout_module(x) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.encoder_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - if self.ffn_layernorm is not None: - x = self.ffn_layernorm(x) - x = self.fc2(x) - x = self.dropout_module(x) - if self.w_resid is not None: - residual = torch.mul(self.w_resid, residual) - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.final_layer_norm(x) - if self.onnx_trace and incremental_state is not None: - saved_state = self.self_attn._get_input_buffer(incremental_state) - assert saved_state is not None - if self_attn_padding_mask is not None: - self_attn_state = [ - saved_state["prev_key"], - saved_state["prev_value"], - saved_state["prev_key_padding_mask"], - ] - else: - self_attn_state = [saved_state["prev_key"], saved_state["prev_value"]] - return x, attn, self_attn_state - return x, attn, None - - def make_generation_fast_(self, need_attn: bool = False, **kwargs): - self.need_attn = need_attn - - -# backward compatible with the legacy argparse format -class TransformerDecoderLayer(TransformerDecoderLayerBase): - def __init__( - self, args, no_encoder_attn=False, add_bias_kv=False, add_zero_attn=False - ): - super().__init__( - TransformerConfig.from_namespace(args), - no_encoder_attn=no_encoder_attn, - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - ) - self.args = args - - def build_self_attention( - self, embed_dim, args, add_bias_kv=False, add_zero_attn=False - ): - return super().build_self_attention( - embed_dim, - TransformerConfig.from_namespace(args), - add_bias_kv=add_bias_kv, - add_zero_attn=add_zero_attn, - ) - - def build_encoder_attention(self, embed_dim, args): - return super().build_encoder_attention( - embed_dim, - TransformerConfig.from_namespace(args), - ) diff --git a/kosmos-g/fairseq/fairseq/modules/transformer_sentence_encoder.py b/kosmos-g/fairseq/fairseq/modules/transformer_sentence_encoder.py deleted file mode 100644 index 5d2db91ad..000000000 --- a/kosmos-g/fairseq/fairseq/modules/transformer_sentence_encoder.py +++ /dev/null @@ -1,291 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Optional, Tuple - -import torch -import torch.nn as nn -from fairseq.modules import ( - FairseqDropout, - LayerDropModuleList, - LayerNorm, - MultiheadAttention, - PositionalEmbedding, - TransformerSentenceEncoderLayer, -) -from fairseq.modules.quant_noise import quant_noise as apply_quant_noise_ - - -def init_bert_params(module): - """ - Initialize the weights specific to the BERT Model. - This overrides the default initializations depending on the specified arguments. - 1. 
If normal_init_linear_weights is set then weights of linear - layer will be initialized using the normal distribution and - bias will be set to the specified value. - 2. If normal_init_embed_weights is set then weights of embedding - layer will be initialized using the normal distribution. - 3. If normal_init_proj_weights is set then weights of - in_project_weight for MultiHeadAttention are initialized using - the normal distribution (to be validated). - """ - - def normal_(data): - # with FSDP, module params will be on CUDA, so we cast them back to CPU - # so that the RNG is consistent with and without FSDP - data.copy_(data.cpu().normal_(mean=0.0, std=0.02).to(data.device)) - - if isinstance(module, nn.Linear): - normal_(module.weight.data) - if module.bias is not None: - module.bias.data.zero_() - if isinstance(module, nn.Embedding): - normal_(module.weight.data) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - if isinstance(module, MultiheadAttention): - normal_(module.q_proj.weight.data) - normal_(module.k_proj.weight.data) - normal_(module.v_proj.weight.data) - - -class TransformerSentenceEncoder(nn.Module): - """ - Implementation for a Bi-directional Transformer based Sentence Encoder used - in BERT/XLM style pre-trained models. - - This first computes the token embedding using the token embedding matrix, - position embeddings (if specified) and segment embeddings - (if specified). After applying the specified number of - TransformerEncoderLayers, it outputs all the internal states of the - encoder as well as the final representation associated with the first - token (usually CLS token). - - Input: - - tokens: B x T matrix representing sentences - - segment_labels: B x T matrix representing segment label for tokens - - Output: - - a tuple of the following: - - a list of internal model states used to compute the - predictions where each tensor has shape T x B x C - - sentence representation associated with first input token - in format B x C.
- """ - - def __init__( - self, - padding_idx: int, - vocab_size: int, - num_encoder_layers: int = 6, - embedding_dim: int = 768, - ffn_embedding_dim: int = 3072, - num_attention_heads: int = 8, - dropout: float = 0.1, - attention_dropout: float = 0.1, - activation_dropout: float = 0.1, - layerdrop: float = 0.0, - max_seq_len: int = 256, - num_segments: int = 2, - use_position_embeddings: bool = True, - offset_positions_by_padding: bool = True, - encoder_normalize_before: bool = False, - apply_bert_init: bool = False, - activation_fn: str = "relu", - learned_pos_embedding: bool = True, - embed_scale: float = None, - freeze_embeddings: bool = False, - n_trans_layers_to_freeze: int = 0, - export: bool = False, - traceable: bool = False, - q_noise: float = 0.0, - qn_block_size: int = 8, - ) -> None: - - super().__init__() - self.padding_idx = padding_idx - self.vocab_size = vocab_size - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.layerdrop = layerdrop - self.max_seq_len = max_seq_len - self.embedding_dim = embedding_dim - self.num_segments = num_segments - self.use_position_embeddings = use_position_embeddings - self.apply_bert_init = apply_bert_init - self.learned_pos_embedding = learned_pos_embedding - self.traceable = traceable - - self.embed_tokens = self.build_embedding( - self.vocab_size, self.embedding_dim, self.padding_idx - ) - self.embed_scale = embed_scale - - if q_noise > 0: - self.quant_noise = apply_quant_noise_( - nn.Linear(self.embedding_dim, self.embedding_dim, bias=False), - q_noise, - qn_block_size, - ) - else: - self.quant_noise = None - - self.segment_embeddings = ( - nn.Embedding(self.num_segments, self.embedding_dim, padding_idx=None) - if self.num_segments > 0 - else None - ) - - self.embed_positions = ( - PositionalEmbedding( - self.max_seq_len, - self.embedding_dim, - padding_idx=(self.padding_idx if offset_positions_by_padding else None), - learned=self.learned_pos_embedding, - ) - if self.use_position_embeddings - else None - ) - - if encoder_normalize_before: - self.emb_layer_norm = LayerNorm(self.embedding_dim, export=export) - else: - self.emb_layer_norm = None - - if self.layerdrop > 0.0: - self.layers = LayerDropModuleList(p=self.layerdrop) - else: - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - self.build_transformer_sentence_encoder_layer( - embedding_dim=self.embedding_dim, - ffn_embedding_dim=ffn_embedding_dim, - num_attention_heads=num_attention_heads, - dropout=self.dropout_module.p, - attention_dropout=attention_dropout, - activation_dropout=activation_dropout, - activation_fn=activation_fn, - export=export, - q_noise=q_noise, - qn_block_size=qn_block_size, - ) - for _ in range(num_encoder_layers) - ] - ) - - # Apply initialization of model params after building the model - if self.apply_bert_init: - self.apply(init_bert_params) - - def freeze_module_params(m): - if m is not None: - for p in m.parameters(): - p.requires_grad = False - - if freeze_embeddings: - freeze_module_params(self.embed_tokens) - freeze_module_params(self.segment_embeddings) - freeze_module_params(self.embed_positions) - freeze_module_params(self.emb_layer_norm) - - for layer in range(n_trans_layers_to_freeze): - freeze_module_params(self.layers[layer]) - - def build_embedding(self, vocab_size, embedding_dim, padding_idx): - return nn.Embedding(vocab_size, embedding_dim, padding_idx) - - def build_transformer_sentence_encoder_layer( - self, - embedding_dim, - ffn_embedding_dim, - num_attention_heads, - dropout, - 
attention_dropout, - activation_dropout, - activation_fn, - export, - q_noise, - qn_block_size, - ): - return TransformerSentenceEncoderLayer( - embedding_dim=embedding_dim, - ffn_embedding_dim=ffn_embedding_dim, - num_attention_heads=num_attention_heads, - dropout=dropout, - attention_dropout=attention_dropout, - activation_dropout=activation_dropout, - activation_fn=activation_fn, - export=export, - q_noise=q_noise, - qn_block_size=qn_block_size, - ) - - def forward( - self, - tokens: torch.Tensor, - segment_labels: torch.Tensor = None, - last_state_only: bool = False, - positions: Optional[torch.Tensor] = None, - token_embeddings: Optional[torch.Tensor] = None, - attn_mask: Optional[torch.Tensor] = None, - ) -> Tuple[torch.Tensor, torch.Tensor]: - is_tpu = tokens.device.type == "xla" - - # compute padding mask. This is needed for multi-head attention - padding_mask = tokens.eq(self.padding_idx) - if not self.traceable and not is_tpu and not padding_mask.any(): - padding_mask = None - - if token_embeddings is not None: - x = token_embeddings - else: - x = self.embed_tokens(tokens) - - if self.embed_scale is not None: - x = x * self.embed_scale - - if self.embed_positions is not None: - x = x + self.embed_positions(tokens, positions=positions) - - if self.segment_embeddings is not None and segment_labels is not None: - x = x + self.segment_embeddings(segment_labels) - - if self.quant_noise is not None: - x = self.quant_noise(x) - - if self.emb_layer_norm is not None: - x = self.emb_layer_norm(x) - - x = self.dropout_module(x) - - # account for padding while computing the representation - if padding_mask is not None: - x = x * (1 - padding_mask.unsqueeze(-1).type_as(x)) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - inner_states = [] - if not last_state_only: - inner_states.append(x) - - for layer in self.layers: - x, _ = layer( - x, self_attn_padding_mask=padding_mask, self_attn_mask=attn_mask - ) - if not last_state_only: - inner_states.append(x) - - sentence_rep = x[0, :, :] - - if last_state_only: - inner_states = [x] - - if self.traceable: - return torch.stack(inner_states), sentence_rep - else: - return inner_states, sentence_rep diff --git a/kosmos-g/fairseq/fairseq/modules/transformer_sentence_encoder_layer.py b/kosmos-g/fairseq/fairseq/modules/transformer_sentence_encoder_layer.py deleted file mode 100644 index f869c4b2f..000000000 --- a/kosmos-g/fairseq/fairseq/modules/transformer_sentence_encoder_layer.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Callable, Optional - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.modules import LayerNorm, MultiheadAttention -from fairseq.modules.fairseq_dropout import FairseqDropout -from fairseq.modules.quant_noise import quant_noise - - -class TransformerSentenceEncoderLayer(nn.Module): - """ - Implements a Transformer Encoder Layer used in BERT/XLM style pre-trained - models. 
- """ - - def __init__( - self, - embedding_dim: int = 768, - ffn_embedding_dim: int = 3072, - num_attention_heads: int = 8, - dropout: float = 0.1, - attention_dropout: float = 0.1, - activation_dropout: float = 0.1, - activation_fn: str = "relu", - export: bool = False, - q_noise: float = 0.0, - qn_block_size: int = 8, - init_fn: Callable = None, - ) -> None: - super().__init__() - - if init_fn is not None: - init_fn() - - # Initialize parameters - self.embedding_dim = embedding_dim - self.num_attention_heads = num_attention_heads - self.attention_dropout = attention_dropout - self.q_noise = q_noise - self.qn_block_size = qn_block_size - - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.activation_dropout_module = FairseqDropout( - activation_dropout, module_name=self.__class__.__name__ - ) - - # Initialize blocks - self.activation_fn = utils.get_activation_fn(activation_fn) - self.self_attn = self.build_self_attention( - self.embedding_dim, - num_attention_heads, - dropout=attention_dropout, - self_attention=True, - q_noise=q_noise, - qn_block_size=qn_block_size, - ) - - # layer norm associated with the self attention layer - self.self_attn_layer_norm = LayerNorm(self.embedding_dim, export=export) - - self.fc1 = self.build_fc1( - self.embedding_dim, - ffn_embedding_dim, - q_noise=q_noise, - qn_block_size=qn_block_size, - ) - self.fc2 = self.build_fc2( - ffn_embedding_dim, - self.embedding_dim, - q_noise=q_noise, - qn_block_size=qn_block_size, - ) - - # layer norm associated with the position wise feed-forward NN - self.final_layer_norm = LayerNorm(self.embedding_dim, export=export) - - def build_fc1(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_fc2(self, input_dim, output_dim, q_noise, qn_block_size): - return quant_noise(nn.Linear(input_dim, output_dim), q_noise, qn_block_size) - - def build_self_attention( - self, - embed_dim, - num_attention_heads, - dropout, - self_attention, - q_noise, - qn_block_size, - ): - return MultiheadAttention( - embed_dim, - num_attention_heads, - dropout=dropout, - self_attention=True, - q_noise=q_noise, - qn_block_size=qn_block_size, - ) - - def forward( - self, - x: torch.Tensor, - self_attn_mask: Optional[torch.Tensor] = None, - self_attn_padding_mask: Optional[torch.Tensor] = None, - ): - """ - LayerNorm is applied either before or after the self-attention/ffn - modules similar to the original Transformer implementation. - """ - residual = x - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - need_weights=False, - attn_mask=self_attn_mask, - ) - x = self.dropout_module(x) - x = residual + x - x = self.self_attn_layer_norm(x) - - residual = x - x = self.activation_fn(self.fc1(x)) - x = self.activation_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - x = self.final_layer_norm(x) - return x, attn diff --git a/kosmos-g/fairseq/fairseq/modules/transpose_last.py b/kosmos-g/fairseq/fairseq/modules/transpose_last.py deleted file mode 100644 index e578b3ec5..000000000 --- a/kosmos-g/fairseq/fairseq/modules/transpose_last.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-""" -transpose last 2 dimensions of the input -""" - -import torch.nn as nn - - -class TransposeLast(nn.Module): - def __init__(self, deconstruct_idx=None): - super().__init__() - self.deconstruct_idx = deconstruct_idx - - def forward(self, x): - if self.deconstruct_idx is not None: - x = x[self.deconstruct_idx] - return x.transpose(-2, -1) diff --git a/kosmos-g/fairseq/fairseq/modules/unfold.py b/kosmos-g/fairseq/fairseq/modules/unfold.py deleted file mode 100644 index 138272f1e..000000000 --- a/kosmos-g/fairseq/fairseq/modules/unfold.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.nn.functional as F - - -def unfold1d(x, kernel_size, padding_l, pad_value=0): - """unfold T x B x C to T x B x C x K""" - if kernel_size > 1: - T, B, C = x.size() - x = F.pad( - x, (0, 0, 0, 0, padding_l, kernel_size - 1 - padding_l), value=pad_value - ) - x = x.as_strided((T, B, C, kernel_size), (B * C, C, 1, B * C)) - else: - x = x.unsqueeze(3) - return x diff --git a/kosmos-g/fairseq/fairseq/modules/vggblock.py b/kosmos-g/fairseq/fairseq/modules/vggblock.py deleted file mode 100644 index ee5ee19a3..000000000 --- a/kosmos-g/fairseq/fairseq/modules/vggblock.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from __future__ import absolute_import, division, print_function, unicode_literals - -from collections.abc import Iterable -from itertools import repeat - -import torch -import torch.nn as nn - - -def _pair(v): - if isinstance(v, Iterable): - assert len(v) == 2, "len(v) != 2" - return v - return tuple(repeat(v, 2)) - - -def infer_conv_output_dim(conv_op, input_dim, sample_inchannel): - sample_seq_len = 200 - sample_bsz = 10 - x = torch.randn(sample_bsz, sample_inchannel, sample_seq_len, input_dim) - # N x C x H x W - # N: sample_bsz, C: sample_inchannel, H: sample_seq_len, W: input_dim - x = conv_op(x) - # N x C x H x W - x = x.transpose(1, 2) - # N x H x C x W - bsz, seq = x.size()[:2] - per_channel_dim = x.size()[3] - # bsz: N, seq: H, CxW the rest - return x.contiguous().view(bsz, seq, -1).size(-1), per_channel_dim - - -class VGGBlock(torch.nn.Module): - """ - VGG motibated cnn module https://arxiv.org/pdf/1409.1556.pdf - - Args: - in_channels: (int) number of input channels (typically 1) - out_channels: (int) number of output channels - conv_kernel_size: convolution channels - pooling_kernel_size: the size of the pooling window to take a max over - num_conv_layers: (int) number of convolution layers - input_dim: (int) input dimension - conv_stride: the stride of the convolving kernel. - Can be a single number or a tuple (sH, sW) Default: 1 - padding: implicit paddings on both sides of the input. - Can be a single number or a tuple (padH, padW). Default: None - layer_norm: (bool) if layer norm is going to be applied. Default: False - - Shape: - Input: BxCxTxfeat, i.e. (batch_size, input_size, timesteps, features) - Output: BxCxTxfeat, i.e. 
(batch_size, input_size, timesteps, features) - """ - - def __init__( - self, - in_channels, - out_channels, - conv_kernel_size, - pooling_kernel_size, - num_conv_layers, - input_dim, - conv_stride=1, - padding=None, - layer_norm=False, - ): - assert ( - input_dim is not None - ), "Need input_dim for LayerNorm and infer_conv_output_dim" - super(VGGBlock, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.conv_kernel_size = _pair(conv_kernel_size) - self.pooling_kernel_size = _pair(pooling_kernel_size) - self.num_conv_layers = num_conv_layers - self.padding = ( - tuple(e // 2 for e in self.conv_kernel_size) - if padding is None - else _pair(padding) - ) - self.conv_stride = _pair(conv_stride) - - self.layers = nn.ModuleList() - for layer in range(num_conv_layers): - conv_op = nn.Conv2d( - in_channels if layer == 0 else out_channels, - out_channels, - self.conv_kernel_size, - stride=self.conv_stride, - padding=self.padding, - ) - self.layers.append(conv_op) - if layer_norm: - conv_output_dim, per_channel_dim = infer_conv_output_dim( - conv_op, input_dim, in_channels if layer == 0 else out_channels - ) - self.layers.append(nn.LayerNorm(per_channel_dim)) - input_dim = per_channel_dim - self.layers.append(nn.ReLU()) - - if self.pooling_kernel_size is not None: - pool_op = nn.MaxPool2d(kernel_size=self.pooling_kernel_size, ceil_mode=True) - self.layers.append(pool_op) - self.total_output_dim, self.output_dim = infer_conv_output_dim( - pool_op, input_dim, out_channels - ) - - def forward(self, x): - for i, _ in enumerate(self.layers): - x = self.layers[i](x) - return x diff --git a/kosmos-g/fairseq/fairseq/nan_detector.py b/kosmos-g/fairseq/fairseq/nan_detector.py deleted file mode 100644 index 7d46d766d..000000000 --- a/kosmos-g/fairseq/fairseq/nan_detector.py +++ /dev/null @@ -1,108 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
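As a shape check for `VGGBlock` above: the convolutions default to same-padding (`kernel_size // 2`), so only the max-pooling (ceil mode) shrinks the time and feature axes, and `output_dim` records the per-channel feature size inferred at construction. A sketch with toy sizes, assuming the deleted module is still importable:

```python
import torch

# Assumes a fairseq checkout where this module still exists.
from fairseq.modules.vggblock import VGGBlock

block = VGGBlock(
    in_channels=1, out_channels=32, conv_kernel_size=3,
    pooling_kernel_size=2, num_conv_layers=2, input_dim=80,
)
x = torch.randn(8, 1, 200, 80)  # (batch, channels, timesteps, features)
y = block(x)
print(y.shape)           # torch.Size([8, 32, 100, 40]): pooling halves T and feat
print(block.output_dim)  # 40: per-channel feature dim after pooling
```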
- -import logging - -import torch - - -logger = logging.getLogger(__name__) - - -class NanDetector: - """ - Detects the first NaN or Inf in forward and/or backward pass and logs it, together with the module name - """ - - def __init__(self, model, forward=True, backward=True): - self.bhooks = [] - self.fhooks = [] - self.forward = forward - self.backward = backward - self.named_parameters = list(model.named_parameters()) - self.reset() - - for name, mod in model.named_modules(): - mod.__module_name = name - self.add_hooks(mod) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_value, exc_traceback): - # Dump out all model gnorms to enable better debugging - norm = {} - gradients = {} - for name, param in self.named_parameters: - if param.grad is not None: - grad_norm = torch.norm(param.grad.data.float(), p=2) - norm[name] = grad_norm.item() - if torch.isnan(grad_norm).any() or torch.isinf(grad_norm).any(): - gradients[name] = param.grad.data - if len(gradients) > 0: - logger.info("Detected nan/inf grad norm, dumping norms...") - logger.info(f"norms: {norm}") - logger.info(f"gradients: {gradients}") - - self.close() - - def add_hooks(self, module): - if self.forward: - self.fhooks.append(module.register_forward_hook(self.fhook_fn)) - if self.backward: - self.bhooks.append(module.register_backward_hook(self.bhook_fn)) - - def reset(self): - self.has_printed_f = False - self.has_printed_b = False - - def _detect(self, tensor, name, backward): - err = None - if ( - torch.is_floating_point(tensor) - # single value tensors (like the loss) will not provide much info - and tensor.numel() >= 2 - ): - with torch.no_grad(): - if torch.isnan(tensor).any(): - err = "NaN" - elif torch.isinf(tensor).any(): - err = "Inf" - if err is not None: - err = f"{err} detected in output of {name}, shape: {tensor.shape}, {'backward' if backward else 'forward'}" - return err - - def _apply(self, module, inp, x, backward): - if torch.is_tensor(x): - if isinstance(inp, tuple) and len(inp) > 0: - inp = inp[0] - err = self._detect(x, module.__module_name, backward) - if err is not None: - if torch.is_tensor(inp) and not backward: - err += ( - f" input max: {inp.max().item()}, input min: {inp.min().item()}" - ) - - has_printed_attr = "has_printed_b" if backward else "has_printed_f" - logger.warning(err) - setattr(self, has_printed_attr, True) - elif isinstance(x, dict): - for v in x.values(): - self._apply(module, inp, v, backward) - elif isinstance(x, list) or isinstance(x, tuple): - for v in x: - self._apply(module, inp, v, backward) - - def fhook_fn(self, module, inp, output): - if not self.has_printed_f: - self._apply(module, inp, output, backward=False) - - def bhook_fn(self, module, inp, output): - if not self.has_printed_b: - self._apply(module, inp, output, backward=True) - - def close(self): - for hook in self.fhooks + self.bhooks: - hook.remove() diff --git a/kosmos-g/fairseq/fairseq/ngram_repeat_block.py b/kosmos-g/fairseq/fairseq/ngram_repeat_block.py deleted file mode 100644 index 98e707d1b..000000000 --- a/kosmos-g/fairseq/fairseq/ngram_repeat_block.py +++ /dev/null @@ -1,150 +0,0 @@ -# Originally from Microsoft Corporation. -# Licensed under the MIT License.
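`NanDetector` above is designed to wrap a training step as a context manager: forward/backward hooks log the first NaN/Inf tensor together with the owning module's name, and gradient norms are dumped on exit if any gradient went non-finite. A usage sketch, assuming the deleted module is still importable and a torch version contemporary with this code (it relies on `register_backward_hook`):

```python
import torch

# Assumes a fairseq checkout where this module still exists.
from fairseq.nan_detector import NanDetector

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU())
x = torch.randn(4, 8)

with NanDetector(model):          # registers hooks on every submodule
    loss = model(x).pow(2).sum()  # scalar tensors are skipped by the detector
    loss.backward()
```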
- -""" Wrapper for ngram_repeat_block cuda extension """ -import math -import warnings -from typing import Dict, List, Optional - -import torch -from torch import nn - -try: - from fairseq import ngram_repeat_block_cuda - - EXTENSION_BUILT = True -except ImportError: - EXTENSION_BUILT = False - - -def is_cuda_extension_usable() -> bool: - """Check whether ngram_repeat_block_cuda is built properly""" - if not EXTENSION_BUILT or not torch.cuda.is_available(): - return False - bsz = 2 - tokens = torch.tensor([[4, 4, 3, 2], [1, 2, 3, 4]], dtype=torch.long, device="cuda") - lprobs = torch.rand((8, 12), device="cuda") - try: - outputs = ngram_repeat_block_cuda.forward(tokens, lprobs, bsz, 3, 4, 3) - outputs = outputs + 4 # This line breaks if the extension is built incorrectly. - return True - except RuntimeError: - warnings.warn( - "NGramRepeatBlock extension must be rebuilt." - 'Run TORCH_CUDA_ARCH_LIST="6.0;6.1;7.0" python setup.py build_ext --inplace' - ) - return False - - -class NGramRepeatBlock(nn.Module): - """Wrapper class for calling ngram_repeat_block cuda extension""" - - def __init__(self, no_repeat_ngram_size: int, use_extension: bool = True): - super().__init__() - self.use_extension = is_cuda_extension_usable() if use_extension else False - self.no_repeat_ngram_size = no_repeat_ngram_size - - def reset_parameters(self): - pass - - @torch.jit.unused - def call_cuda_extension( - self, - tokens, - lprobs, - bsz: int, - beam_size: int, - step: int, - ): - return ngram_repeat_block_cuda.forward( - tokens, lprobs, bsz, step, beam_size, self.no_repeat_ngram_size - ) - - def forward( - self, - tokens, - lprobs, - bsz: int, - beam_size: int, - step: int, - ): - """ - Args: - tokens(Tensor): Input tokens(Bsz*beam, seq_len) - lprobs(Tensor): likelihood probability, - Expected to be updated in place.(Bsz*beam, vocab_size) - bsz(int): batch size - step(int): current step - beam_size(int): beam size - no_repeat_ngram_size(int): Ngram size - """ - msg = f"expected {bsz *beam_size} got" - assert tokens.size(0) == bsz * beam_size, f"{msg} {tokens.size(0)}" - assert lprobs.size(0) == bsz * beam_size, f"{msg} {lprobs.size(0)}" - if self.use_extension: - return self.call_cuda_extension(tokens, lprobs, bsz, beam_size, step) - - else: - return self._no_repeat_ngram( - tokens, - lprobs, - bsz, - beam_size, - step, - ) - - def _no_repeat_ngram(self, tokens, lprobs, bsz: int, beam_size: int, step: int): - """For each hypothesis generate a list of previous ngrams and set associated lprobs to -inf""" - gen_ngrams: List[Dict[str, List[int]]] = [ - torch.jit.annotate(Dict[str, List[int]], {}) - for bbsz_idx in range(bsz * beam_size) - ] - cpu_tokens = tokens.cpu() - for bbsz_idx in range(bsz * beam_size): - gen_tokens: List[int] = cpu_tokens[bbsz_idx].tolist() - for ngram in self.transpose_list( - [gen_tokens[i:] for i in range(self.no_repeat_ngram_size)] - ): - key = ",".join([str(x) for x in ngram[:-1]]) - gen_ngrams[bbsz_idx][key] = gen_ngrams[bbsz_idx].get( - key, torch.jit.annotate(List[int], []) - ) + [ngram[-1]] - if step + 2 - self.no_repeat_ngram_size >= 0: - # no banned tokens if we haven't generated no_repeat_ngram_size tokens yet - banned_tokens = [ - self.calculate_banned_tokens( - tokens, step, gen_ngrams, self.no_repeat_ngram_size, bbsz_idx - ) - for bbsz_idx in range(bsz * beam_size) - ] - else: - banned_tokens = [ - torch.jit.annotate(List[int], []) for bbsz_idx in range(bsz * beam_size) - ] - for bbsz_idx in range(bsz * beam_size): - lprobs[bbsz_idx][ - torch.tensor(banned_tokens[bbsz_idx], 
dtype=torch.int64) - ] = torch.tensor(-math.inf).to(lprobs) - return lprobs - - @staticmethod - def calculate_banned_tokens( - tokens, - step: int, - gen_ngrams: List[Dict[str, List[int]]], - no_repeat_ngram_size: int, - bbsz_idx: int, - ): - tokens_list: List[int] = tokens[ - bbsz_idx, step + 2 - no_repeat_ngram_size : step + 1 - ].tolist() - # before decoding the next token, prevent decoding of ngrams that have already appeared - ngram_index = ",".join([str(x) for x in tokens_list]) - return gen_ngrams[bbsz_idx].get(ngram_index, torch.jit.annotate(List[int], [])) - - @staticmethod - def transpose_list(l: List[List[int]]): - # GeneratorExp aren't supported in TS so ignoring the lint - min_len = min([len(x) for x in l]) # noqa - l2 = [[row[i] for row in l] for i in range(min_len)] - return l2 diff --git a/kosmos-g/fairseq/fairseq/optim/__init__.py b/kosmos-g/fairseq/fairseq/optim/__init__.py deleted file mode 100644 index be783be89..000000000 --- a/kosmos-g/fairseq/fairseq/optim/__init__.py +++ /dev/null @@ -1,48 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -import importlib -import os - -from fairseq import registry -from fairseq.optim.bmuf import FairseqBMUF # noqa -from fairseq.optim.fairseq_optimizer import ( # noqa - FairseqOptimizer, - LegacyFairseqOptimizer, -) -from fairseq.optim.amp_optimizer import AMPOptimizer -from fairseq.optim.fp16_optimizer import FP16Optimizer, MemoryEfficientFP16Optimizer -from fairseq.optim.shard import shard_ -from omegaconf import DictConfig - -__all__ = [ - "AMPOptimizer", - "FairseqOptimizer", - "FP16Optimizer", - "MemoryEfficientFP16Optimizer", - "shard_", -] - -( - _build_optimizer, - register_optimizer, - OPTIMIZER_REGISTRY, - OPTIMIZER_DATACLASS_REGISTRY, -) = registry.setup_registry("--optimizer", base_class=FairseqOptimizer, required=True) - - -def build_optimizer(cfg: DictConfig, params, *extra_args, **extra_kwargs): - if all(isinstance(p, dict) for p in params): - params = [t for p in params for t in p.values()] - params = list(filter(lambda p: p.requires_grad, params)) - return _build_optimizer(cfg, params, *extra_args, **extra_kwargs) - - -# automatically import any Python files in the optim/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - file_name = file[: file.find(".py")] - importlib.import_module("fairseq.optim." + file_name) diff --git a/kosmos-g/fairseq/fairseq/optim/adadelta.py b/kosmos-g/fairseq/fairseq/optim/adadelta.py deleted file mode 100644 index f1a215497..000000000 --- a/kosmos-g/fairseq/fairseq/optim/adadelta.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.optim - -from . 
import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("adadelta") -class Adadelta(LegacyFairseqOptimizer): - def __init__(self, args, params): - super().__init__(args) - self._optimizer = torch.optim.Adadelta(params, **self.optimizer_config) - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--adadelta-rho', type=float, default=0.9, metavar='RHO', - help='coefficient used for computing a running average of squared gradients') - parser.add_argument('--adadelta-eps', type=float, default=1e-6, metavar='EPS', - help='term added to the denominator to improve numerical stability') - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - parser.add_argument('--anneal-eps', action='store_true', help='flag to anneal eps') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - """ - return { - "lr": self.args.lr[0], - "rho": self.args.adadelta_rho, - "eps": self.args.adadelta_eps, - "weight_decay": self.args.weight_decay, - } - - @property - def supports_flat_params(self): - return True diff --git a/kosmos-g/fairseq/fairseq/optim/adafactor.py b/kosmos-g/fairseq/fairseq/optim/adafactor.py deleted file mode 100644 index c969b9fbc..000000000 --- a/kosmos-g/fairseq/fairseq/optim/adafactor.py +++ /dev/null @@ -1,268 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.optim - -from . import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("adafactor") -class FairseqAdafactor(LegacyFairseqOptimizer): - def __init__(self, args, params): - super().__init__(args) - self._optimizer = Adafactor(params, **self.optimizer_config) - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--adafactor-eps', default='(1e-30, 1e-3)', metavar="E", - help='epsilons for Adafactor optimizer') - parser.add_argument('--clip-threshold', type=float, default=1.0, metavar="C", - help='threshold for clipping update root mean square') - parser.add_argument('--decay-rate', type=float, default=-0.8, metavar="D", - help='decay rate of the second moment estimator') - parser.add_argument('--beta1', type=float, default=None, metavar="B", - help='beta for first moment estimator. Optional') - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - parser.add_argument('--scale-parameter', action='store_true', - help='scale learning rate by root mean square of parameter') - parser.add_argument('--relative-step', action='store_true', - help='set learning rate to inverse square root of timestep,' - 'otherwise use external learning rate') - parser.add_argument('--warmup-init', action='store_true', - help='use relative step for warm-up learning rate schedule') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. 
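The adadelta wrapper above is the template every legacy optimizer in this directory follows: decorate with `@register_optimizer`, expose CLI flags in `add_args`, and map them to constructor kwargs in `optimizer_config`. A minimal sketch of the same pattern for a hypothetical RMSprop wrapper (the `--rmsprop-alpha` flag and the class are illustrative, not part of this repo):

```python
import torch.optim

from . import LegacyFairseqOptimizer, register_optimizer


@register_optimizer("rmsprop")
class RMSprop(LegacyFairseqOptimizer):
    """Hypothetical wrapper mirroring the Adadelta class above."""

    def __init__(self, args, params):
        super().__init__(args)
        # The wrapped torch optimizer always lives in self._optimizer.
        self._optimizer = torch.optim.RMSprop(params, **self.optimizer_config)

    @staticmethod
    def add_args(parser):
        parser.add_argument('--rmsprop-alpha', type=float, default=0.99, metavar='A',
                            help='smoothing constant')
        parser.add_argument('--weight-decay', '--wd', default=0.0, type=float,
                            metavar='WD', help='weight decay')

    @property
    def optimizer_config(self):
        # Keys must match torch.optim.RMSprop's constructor kwargs so a
        # checkpoint can be resumed with different command-line values.
        return {
            "lr": self.args.lr[0],
            "alpha": self.args.rmsprop_alpha,
            "weight_decay": self.args.weight_decay,
        }
```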
This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - Note : Convergence issues empirically observed with fp16 on. - Might require search for appropriate configuration. - """ - return { - "lr": self.args.lr[0], - "eps": eval(self.args.adafactor_eps), - "clip_threshold": self.args.clip_threshold, - "decay_rate": self.args.decay_rate, - "beta1": self.args.beta1, - "weight_decay": self.args.weight_decay, - "scale_parameter": self.args.scale_parameter, # defaults to False - "relative_step": self.args.relative_step, # defaults to False - "warmup_init": self.args.warmup_init, - } - - -class Adafactor(torch.optim.Optimizer): - """Implements Adafactor algorithm. - - This implementation is based on: - `Adafactor: Adaptive Learning Rates with Sublinear Memory Cost` - (see https://arxiv.org/abs/1804.04235) - - Note that this optimizer internally adjusts the learning rate - depending on the *scale_parameter*, *relative_step* and - *warmup_init* options. To use a manual (external) learning rate - schedule you should set `scale_parameter=False` and - `relative_step=False`. - - Args: - params (iterable): iterable of parameters to optimize or dicts defining - parameter groups - lr (float, optional): external learning rate (default: None) - eps (tuple[float, float]): regularization constans for square gradient - and parameter scale respectively (default: (1e-30, 1e-3)) - clip_threshold (float): threshold of root mean square of - final gradient update (default: 1.0) - decay_rate (float): coefficient used to compute running averages of square - gradient (default: -0.8) - beta1 (float): coefficient used for computing running averages of gradient - (default: None) - weight_decay (float, optional): weight decay (L2 penalty) (default: 0) - scale_parameter (bool): if True, learning rate is scaled by root mean square of - parameter (default: True) - relative_step (bool): if True, time-dependent learning rate is computed - instead of external learning rate (default: True) - warmup_init (bool): time-dependent learning rate computation depends on - whether warm-up initialization is being used (default: False) - """ - - def __init__( - self, - params, - lr=None, - eps=(1e-30, 1e-3), - clip_threshold=1.0, - decay_rate=-0.8, - beta1=None, - weight_decay=0.0, - scale_parameter=True, - relative_step=True, - warmup_init=False, - ): - if lr is not None and relative_step: - raise ValueError("Cannot combine manual lr and relative_step options") - if warmup_init and not relative_step: - raise ValueError("warmup_init requires relative_step=True") - - defaults = dict( - lr=lr, - eps=eps, - clip_threshold=clip_threshold, - decay_rate=decay_rate, - beta1=beta1, - weight_decay=weight_decay, - scale_parameter=scale_parameter, - relative_step=relative_step, - warmup_init=warmup_init, - ) - super(Adafactor, self).__init__(params, defaults) - - @property - def supports_memory_efficient_fp16(self): - return True - - @property - def supports_flat_params(self): - return False - - def _get_lr(self, param_group, param_state): - rel_step_sz = param_group["lr"] - if param_group["relative_step"]: - min_step = ( - 1e-6 * param_state["step"] if param_group["warmup_init"] else 1e-2 - ) - rel_step_sz = min(min_step, 1.0 / math.sqrt(param_state["step"])) - param_scale = 1.0 - if param_group["scale_parameter"]: - param_scale = max(param_group["eps"][1], param_state["RMS"]) - return param_scale * rel_step_sz - - def _get_options(self, param_group, param_shape): - 
factored = len(param_shape) >= 2 - use_first_moment = param_group["beta1"] is not None - return factored, use_first_moment - - def _rms(self, tensor): - return tensor.norm(2) / (tensor.numel() ** 0.5) - - def _approx_sq_grad(self, exp_avg_sq_row, exp_avg_sq_col): - r_factor = ( - (exp_avg_sq_row / exp_avg_sq_row.mean(dim=-1, keepdim=True)) - .rsqrt_() - .unsqueeze(-1) - ) - c_factor = exp_avg_sq_col.unsqueeze(-2).rsqrt() - return torch.mul(r_factor, c_factor) - - def step(self, closure=None): - """Performs a single optimization step. - - Args: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. - """ - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - for p in group["params"]: - if p.grad is None: - continue - grad = p.grad.data - if grad.dtype in {torch.float16, torch.bfloat16}: - grad = grad.float() - if grad.is_sparse: - raise RuntimeError("Adafactor does not support sparse gradients.") - - state = self.state[p] - grad_shape = grad.shape - - factored, use_first_moment = self._get_options(group, grad_shape) - # State Initialization - if len(state) == 0: - state["step"] = 0 - - if use_first_moment: - # Exponential moving average of gradient values - state["exp_avg"] = torch.zeros_like(grad) - if factored: - state["exp_avg_sq_row"] = torch.zeros(grad_shape[:-1]).to(grad) - state["exp_avg_sq_col"] = torch.zeros( - grad_shape[:-2] + grad_shape[-1:] - ).to(grad) - else: - state["exp_avg_sq"] = torch.zeros_like(grad) - - state["RMS"] = 0 - else: - if use_first_moment: - state["exp_avg"] = state["exp_avg"].to(grad) - if factored: - state["exp_avg_sq_row"] = state["exp_avg_sq_row"].to(grad) - state["exp_avg_sq_col"] = state["exp_avg_sq_col"].to(grad) - else: - state["exp_avg_sq"] = state["exp_avg_sq"].to(grad) - - p_data_fp32 = p.data - if p.data.dtype in {torch.float16, torch.bfloat16}: - p_data_fp32 = p_data_fp32.float() - - state["step"] += 1 - state["RMS"] = self._rms(p_data_fp32) - group["lr"] = self._get_lr(group, state) - - beta2t = 1.0 - math.pow(state["step"], group["decay_rate"]) - update = (grad ** 2) + group["eps"][0] - if factored: - exp_avg_sq_row = state["exp_avg_sq_row"] - exp_avg_sq_col = state["exp_avg_sq_col"] - - exp_avg_sq_row.mul_(beta2t).add_( - update.mean(dim=-1), alpha=1.0 - beta2t - ) - exp_avg_sq_col.mul_(beta2t).add_( - update.mean(dim=-2), alpha=1.0 - beta2t - ) - - # Approximation of exponential moving average of square of gradient - update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col) - update.mul_(grad) - else: - exp_avg_sq = state["exp_avg_sq"] - - exp_avg_sq.mul_(beta2t).add_(update, alpha=1.0 - beta2t) - update = exp_avg_sq.rsqrt().mul_(grad) - - update.div_( - (self._rms(update) / group["clip_threshold"]).clamp_(min=1.0) - ) - update.mul_(group["lr"]) - - if use_first_moment: - exp_avg = state["exp_avg"] - exp_avg.mul_(group["beta1"]).add_(update, alpha=1 - group["beta1"]) - update = exp_avg - - if group["weight_decay"] != 0: - p_data_fp32.add_( - p_data_fp32, alpha=-group["weight_decay"] * group["lr"] - ) - - p_data_fp32.add_(-update) - - if p.data.dtype in {torch.float16, torch.bfloat16}: - p.data.copy_(p_data_fp32) - - return loss diff --git a/kosmos-g/fairseq/fairseq/optim/adagrad.py b/kosmos-g/fairseq/fairseq/optim/adagrad.py deleted file mode 100644 index 4f539541c..000000000 --- a/kosmos-g/fairseq/fairseq/optim/adagrad.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
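The `step` above never stores a full second-moment tensor for a matrix parameter; it keeps row and column EMAs and lets `_approx_sq_grad` rebuild an approximate `rsqrt` of the full statistics. A self-contained sketch of that reconstruction on a single update (shapes are arbitrary, chosen for illustration):

```python
import torch

torch.manual_seed(0)
grad = torch.randn(4, 6)
update = grad ** 2 + 1e-30  # eps[0], as in Adafactor.step above

# Factored storage: a (4,) row statistic and a (6,) column statistic
# instead of the full (4, 6) second-moment tensor.
exp_avg_sq_row = update.mean(dim=-1)
exp_avg_sq_col = update.mean(dim=-2)

# _approx_sq_grad: rank-1 reconstruction of rsqrt(second moment).
r_factor = (
    (exp_avg_sq_row / exp_avg_sq_row.mean(dim=-1, keepdim=True))
    .rsqrt_()
    .unsqueeze(-1)
)
c_factor = exp_avg_sq_col.unsqueeze(-2).rsqrt()
approx = torch.mul(r_factor, c_factor)      # shape (4, 6)

exact = update.rsqrt()                      # what unfactored Adam-style stats give
rel_err = ((approx - exact).abs() / exact).max()
print(f"max relative error of factored rsqrt: {rel_err:.3f}")
```

The approximation is exact when the second-moment matrix is rank one; the memory saved is what makes Adafactor attractive for large embedding and projection matrices.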
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.optim - -from . import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("adagrad") -class Adagrad(LegacyFairseqOptimizer): - def __init__(self, args, params): - super().__init__(args) - self._optimizer = torch.optim.Adagrad(params, **self.optimizer_config) - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - """ - return { - "lr": self.args.lr[0], - "weight_decay": self.args.weight_decay, - } - - @property - def supports_flat_params(self): - return False diff --git a/kosmos-g/fairseq/fairseq/optim/adam.py b/kosmos-g/fairseq/fairseq/optim/adam.py deleted file mode 100644 index 678ec7c61..000000000 --- a/kosmos-g/fairseq/fairseq/optim/adam.py +++ /dev/null @@ -1,239 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import math -from collections.abc import Collection -from dataclasses import dataclass, field -from typing import Any, List - -import torch -import torch.distributed as dist -import torch.optim -from fairseq.dataclass import FairseqDataclass -from fairseq.optim import FairseqOptimizer, register_optimizer -from fairseq.optim.fused_adam import get_fused_adam_class -from omegaconf import II, OmegaConf - - -logger = logging.getLogger(__name__) - - -@dataclass -class FairseqAdamConfig(FairseqDataclass): - adam_betas: Any = field( - default=(0.9, 0.999), metadata={"help": "betas for Adam optimizer"} - ) - adam_eps: float = field( - default=1e-8, metadata={"help": "epsilon for Adam optimizer"} - ) - weight_decay: float = field(default=0.0, metadata={"help": "weight decay"}) - use_old_adam: bool = field( - default=False, metadata={"help": "Use fairseq.optim.adam.Adam"} - ) - fp16_adam_stats: bool = field( - default=False, metadata={"help": "use FP16 stats (with automatic scaling)"} - ) - # TODO common vars below in parent - tpu: bool = II("common.tpu") - lr: List[float] = II("optimization.lr") - - -@register_optimizer("adam", dataclass=FairseqAdamConfig) -class FairseqAdam(FairseqOptimizer): - """Adam optimizer for fairseq. - - Important note: this optimizer corresponds to the "AdamW" variant of - Adam in its weight decay behavior. As such, it is most closely - analogous to torch.optim.AdamW from PyTorch. 
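That note is worth unpacking: in the `Adam` class below, weight decay is applied directly to the parameters and scaled by the learning rate, instead of being folded into the gradient as classic L2 regularization, where it would also leak into the adaptive moment estimates. A toy comparison (illustrative tensors only):

```python
import torch

p = torch.tensor([1.0, -2.0])
grad = torch.tensor([0.1, 0.1])
lr, wd = 0.01, 0.1

# Classic L2: decay enters the gradient, so it is also rescaled by
# Adam's adaptive denominator later on.
l2_grad = grad + wd * p

# AdamW-style (as in the step below): decay bypasses the moment
# estimates entirely; mirrors p_data_fp32.add_(p_data_fp32, alpha=-wd * lr).
p_decayed = p.clone()
p_decayed.add_(p_decayed, alpha=-wd * lr)

print(l2_grad)    # tensor([0.2000, -0.1000])
print(p_decayed)  # tensor([0.9990, -1.9980])
```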
- """ - - def __init__(self, cfg: FairseqAdamConfig, params): - super().__init__(cfg) - fused_adam_cls = get_fused_adam_class() - use_fused_adam = ( - not getattr(cfg, "use_old_adam", False) - and fused_adam_cls is not None - and torch.cuda.is_available() - ) - if getattr(cfg, "tpu", False): - if self.cfg.fp16_adam_stats: - raise NotImplementedError("--fp16-adam-stats is only supported on GPU") - # on TPUs we use the Adam defined here, since it - # automatically casts gradients to FP32 - self._optimizer = Adam(params, **self.optimizer_config) - elif use_fused_adam: - logger.info("using FusedAdam") - self._optimizer = fused_adam_cls( - params, use_fp16_stats=self.cfg.fp16_adam_stats, **self.optimizer_config - ) - else: - if self.cfg.fp16_adam_stats: - raise NotImplementedError( - "--fp16-adam-stats is only supported with FusedAdamV1" - ) - self._optimizer = Adam(params, **self.optimizer_config) - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - """ - return { - "lr": self.cfg.lr[0] - if isinstance(self.cfg.lr, Collection) - else self.cfg.lr, - "betas": eval(self.cfg.adam_betas) - if isinstance(self.cfg.adam_betas, str) - else OmegaConf.to_container(self.cfg.adam_betas), - "eps": self.cfg.adam_eps, - "weight_decay": self.cfg.weight_decay, - } - - def average_params(self): - """Reduce Params is only used during BMUF distributed training.""" - state_dict = self.optimizer.state_dict() - total_gpus = float(dist.get_world_size()) - - for _, value in state_dict["state"].items(): - value["exp_avg"] /= total_gpus - value["exp_avg_sq"] /= total_gpus - dist.all_reduce(value["exp_avg"], op=dist.ReduceOp.SUM) - dist.all_reduce(value["exp_avg_sq"], op=dist.ReduceOp.SUM) - - -class Adam(torch.optim.Optimizer): - r"""Implements Adam algorithm. - - This implementation is modified from torch.optim.Adam based on: - `Fixed Weight Decay Regularization in Adam` - (see https://arxiv.org/abs/1711.05101) - - It has been proposed in `Adam: A Method for Stochastic Optimization`_. - - Args: - params (iterable): iterable of parameters to optimize or dicts defining - parameter groups - lr (float, optional): learning rate (default: 1e-3) - betas (Tuple[float, float], optional): coefficients used for computing - running averages of gradient and its square (default: (0.9, 0.999)) - eps (float, optional): term added to the denominator to improve - numerical stability (default: 1e-8) - weight_decay (float, optional): weight decay (L2 penalty) (default: 0) - amsgrad (boolean, optional): whether to use the AMSGrad variant of this - algorithm from the paper `On the Convergence of Adam and Beyond`_ - - .. _Adam\: A Method for Stochastic Optimization: - https://arxiv.org/abs/1412.6980 - .. _On the Convergence of Adam and Beyond: - https://openreview.net/forum?id=ryQu7f-RZ - """ - - def __init__( - self, - params, - lr=1e-3, - betas=(0.9, 0.999), - eps=1e-8, - weight_decay=0, - amsgrad=False, - ): - defaults = dict( - lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, amsgrad=amsgrad - ) - super(Adam, self).__init__(params, defaults) - - @property - def supports_memory_efficient_fp16(self): - return True - - @property - def supports_flat_params(self): - return True - - def step(self, closure=None): - """Performs a single optimization step. 
- - Args: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. - """ - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - for p in group["params"]: - if p.grad is None: - continue - grad = p.grad.data - if grad.dtype in {torch.float16, torch.bfloat16}: - grad = grad.float() - if grad.is_sparse: - raise RuntimeError( - "Adam does not support sparse gradients, please consider SparseAdam instead" - ) - amsgrad = group.get("amsgrad", False) - - p_data_fp32 = p.data - if p.data.dtype in {torch.float16, torch.bfloat16}: - p_data_fp32 = p_data_fp32.float() - - state = self.state[p] - - # State initialization - if len(state) == 0: - state["step"] = 0 - # Exponential moving average of gradient values - state["exp_avg"] = torch.zeros_like(p_data_fp32) - # Exponential moving average of squared gradient values - state["exp_avg_sq"] = torch.zeros_like(p_data_fp32) - if amsgrad: - # Maintains max of all exp. moving avg. of sq. grad. values - state["max_exp_avg_sq"] = torch.zeros_like(p_data_fp32) - else: - state["exp_avg"] = state["exp_avg"].to(p_data_fp32) - state["exp_avg_sq"] = state["exp_avg_sq"].to(p_data_fp32) - if amsgrad: - state["max_exp_avg_sq"] = state["max_exp_avg_sq"].to( - p_data_fp32 - ) - - exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"] - if amsgrad: - max_exp_avg_sq = state["max_exp_avg_sq"] - beta1, beta2 = group["betas"] - - state["step"] += 1 - - # Decay the first and second moment running average coefficient - exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1) - exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2) - if amsgrad: - # Maintains the maximum of all 2nd moment running avg. till now - torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq) - # Use the max. for normalizing running avg. of gradient - denom = max_exp_avg_sq.sqrt().add_(group["eps"]) - else: - denom = exp_avg_sq.sqrt().add_(group["eps"]) - - bias_correction1 = 1 - beta1 ** state["step"] - bias_correction2 = 1 - beta2 ** state["step"] - step_size = group["lr"] * math.sqrt(bias_correction2) / bias_correction1 - - if group["weight_decay"] != 0: - p_data_fp32.add_( - p_data_fp32, alpha=-group["weight_decay"] * group["lr"] - ) - - p_data_fp32.addcdiv_(exp_avg, denom, value=-step_size) - - if p.data.dtype in {torch.float16, torch.bfloat16}: - p.data.copy_(p_data_fp32) - - return loss diff --git a/kosmos-g/fairseq/fairseq/optim/adamax.py b/kosmos-g/fairseq/fairseq/optim/adamax.py deleted file mode 100644 index 98ff8ad7a..000000000 --- a/kosmos-g/fairseq/fairseq/optim/adamax.py +++ /dev/null @@ -1,172 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.optim - -from . 
import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("adamax") -class FairseqAdamax(LegacyFairseqOptimizer): - def __init__(self, args, params): - super().__init__(args) - self._optimizer = Adamax(params, **self.optimizer_config) - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--adamax-betas', default='(0.9, 0.999)', metavar='B', - help='betas for Adam optimizer') - parser.add_argument('--adamax-eps', type=float, default=1e-8, metavar='D', - help='epsilon for Adam optimizer') - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - parser.add_argument('--no-bias-correction', default=False, action='store_true', - help='disable bias correction') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - """ - return { - "lr": self.args.lr[0], - "betas": eval(self.args.adamax_betas), - "eps": self.args.adamax_eps, - "weight_decay": self.args.weight_decay, - "bias_correction": not self.args.no_bias_correction, - } - - -class Adamax(torch.optim.Optimizer): - """Implements Adamax algorithm (a variant of Adam based on infinity norm). - - It has been proposed in `Adam: A Method for Stochastic Optimization`__. - - Compared to the version in PyTorch, this version implements a fix for weight decay. - - Args: - params (iterable): iterable of parameters to optimize or dicts defining - parameter groups - lr (float, optional): learning rate (default: 2e-3) - betas (Tuple[float, float], optional): coefficients used for computing - running averages of gradient and its square - eps (float, optional): term added to the denominator to improve - numerical stability (default: 1e-8) - weight_decay (float, optional): weight decay (L2 penalty) (default: 0) - bias_correction (bool, optional): enable bias correction (default: True) - - __ https://arxiv.org/abs/1412.6980 - """ - - def __init__( - self, - params, - lr=2e-3, - betas=(0.9, 0.999), - eps=1e-8, - weight_decay=0, - bias_correction=True, - ): - if not 0.0 <= lr: - raise ValueError("Invalid learning rate: {}".format(lr)) - if not 0.0 <= eps: - raise ValueError("Invalid epsilon value: {}".format(eps)) - if not 0.0 <= betas[0] < 1.0: - raise ValueError("Invalid beta parameter at index 0: {}".format(betas[0])) - if not 0.0 <= betas[1] < 1.0: - raise ValueError("Invalid beta parameter at index 1: {}".format(betas[1])) - if not 0.0 <= weight_decay: - raise ValueError("Invalid weight_decay value: {}".format(weight_decay)) - - defaults = dict( - lr=lr, - betas=betas, - eps=eps, - weight_decay=weight_decay, - bias_correction=bias_correction, - ) - super(Adamax, self).__init__(params, defaults) - - @property - def supports_memory_efficient_fp16(self): - return True - - @property - def supports_flat_params(self): - return True - - def step(self, closure=None): - """Performs a single optimization step. - - Args: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. 
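The core of the `step` below is the second-moment rule that gives Adamax its name: an exponentially decayed infinity norm in place of Adam's mean of squared gradients. In isolation (toy numbers):

```python
import torch

beta2 = 0.999
exp_inf = torch.tensor([0.50, 0.10])  # running infinity norm u_{t-1}
grad = torch.tensor([0.30, -0.40])

# u_t = max(beta2 * u_{t-1}, |g_t|), elementwise, as in step() below.
torch.max(exp_inf.mul_(beta2), grad.abs(), out=exp_inf)
print(exp_inf)  # tensor([0.4995, 0.4000])
```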
- """ - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - for p in group["params"]: - if p.grad is None: - continue - grad = p.grad.data.float() - if grad.is_sparse: - raise RuntimeError("Adamax does not support sparse gradients") - - p_data_fp32 = p.data - if p.data.dtype in {torch.float16, torch.bfloat16}: - p_data_fp32 = p_data_fp32.float() - - state = self.state[p] - - # State initialization - if len(state) == 0: - state["step"] = 0 - state["exp_avg"] = torch.zeros_like(p_data_fp32) - state["exp_inf"] = torch.zeros_like(p_data_fp32) - else: - state["exp_avg"] = state["exp_avg"].to(p_data_fp32) - state["exp_inf"] = state["exp_inf"].to(p_data_fp32) - - exp_avg, exp_inf = state["exp_avg"], state["exp_inf"] - beta1, beta2 = group["betas"] - eps = group["eps"] - - state["step"] += 1 - - # Update biased first moment estimate. - exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1) - - # Update the exponentially weighted infinity norm. - torch.max( - exp_inf.mul_(beta2), - grad.abs_(), - out=exp_inf, - ) - - step_size = group["lr"] - if group["bias_correction"]: - bias_correction = 1 - beta1 ** state["step"] - step_size /= bias_correction - - if group["weight_decay"] != 0: - p_data_fp32.add_( - p_data_fp32, alpha=-group["weight_decay"] * group["lr"] - ) - - p_data_fp32.addcdiv_(exp_avg, exp_inf.add(eps), value=-step_size) - - if p.data.dtype in {torch.float16, torch.bfloat16}: - p.data.copy_(p_data_fp32) - - return loss diff --git a/kosmos-g/fairseq/fairseq/optim/amp_optimizer.py b/kosmos-g/fairseq/fairseq/optim/amp_optimizer.py deleted file mode 100644 index cfe57d07f..000000000 --- a/kosmos-g/fairseq/fairseq/optim/amp_optimizer.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -import torch -from fairseq import optim -from omegaconf import DictConfig - -logger = logging.getLogger(__name__) - - -class AMPOptimizer(optim.FairseqOptimizer): - """ - Wrap an *optimizer* to support AMP (automatic mixed precision) training. - """ - - def __init__(self, cfg: DictConfig, params, fp32_optimizer, **kwargs): - super().__init__(cfg.optimizer) - self.fp32_optimizer = fp32_optimizer - amp_kwargs = {"init_scale": cfg.common.fp16_init_scale} - if getattr(cfg.common, "amp_scale_window", None) is not None: - amp_kwargs["growth_interval"] = cfg.common.amp_init_scale - self._grad_scaler = torch.cuda.amp.GradScaler(**amp_kwargs) - self.min_loss_scale = cfg.common.min_loss_scale - - @classmethod - def build_optimizer(cls, cfg: DictConfig, params, **kwargs): - """ - Args: - cfg (omegaconf.DictConfig): fairseq args - params (iterable): iterable of parameters to optimize - """ - fp32_optimizer = optim.build_optimizer(cfg.optimizer, params) - return cls(cfg, params, fp32_optimizer, **kwargs) - - def backward(self, loss): - """Computes the sum of gradients of the given tensor w.r.t. graph leaves. - - Compared to :func:`fairseq.optim.FairseqOptimizer.backward`, this - function additionally dynamically scales the loss to avoid gradient - underflow. 
- """ - self._grad_scaler.scale(loss).backward() - - def step(self): - self.scaler.step(self.fp32_optimizer) - self.scaler.update() - - def clip_grad_norm(self, max_norm, aggregate_norm_fn=None): - """Clips gradient norm.""" - self.scaler.unscale_(self.optimizer) - grad_norm = self.fp32_optimizer.clip_grad_norm(max_norm, aggregate_norm_fn) - if not torch.isfinite(grad_norm).all(): - new_loss_scale = self.next_loss_scale - if new_loss_scale <= self.min_loss_scale: - raise FloatingPointError( - ( - "AMP: Minimum loss scale reached ({}). Your loss is probably exploding. " - "Try restarting training or use fp32. {}" - ).format(self.min_loss_scale, new_loss_scale) - ) - else: - logger.info( - "AMP: overflow detected, setting scale to " f"to {new_loss_scale}" - ) - return grad_norm - - @property - def scaler(self): - return self._grad_scaler - - @property - def next_loss_scale(self): - return self.scaler.get_scale() * self.scaler.get_backoff_factor() - - @property - def optimizer(self): - return self.fp32_optimizer.optimizer - - @optimizer.setter - def optimizer(self, optimizer): - self.fp32_optimizer.optimizer = optimizer - - @property - def lr_scheduler(self): - return getattr(self.fp32_optimizer, "lr_scheduler", None) - - @property - def optimizer_config(self): - return self.fp32_optimizer.optimizer_config - - def get_lr(self): - return self.fp32_optimizer.get_lr() - - def set_lr(self, lr): - self.fp32_optimizer.set_lr(lr) - - def all_reduce_grads(self, module): - self.fp32_optimizer.all_reduce_grads(module) - - @property - def supports_flat_params(self): - return self.fp32_optimizer.supports_flat_params diff --git a/kosmos-g/fairseq/fairseq/optim/bmuf.py b/kosmos-g/fairseq/fairseq/optim/bmuf.py deleted file mode 100644 index d6d0e04e8..000000000 --- a/kosmos-g/fairseq/fairseq/optim/bmuf.py +++ /dev/null @@ -1,200 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from dataclasses import dataclass, field - -import torch -import torch.distributed as dist -from fairseq.dataclass.configs import FairseqBMUFConfig -from fairseq.dataclass.utils import gen_parser_from_dataclass -from fairseq.optim.fairseq_optimizer import FairseqOptimizer - - -class FairseqBMUF(FairseqOptimizer): - """ - Implements incremental block distributed data parallelism similar to - https://ieeexplore.ieee.org/document/7472805 - - Paper title: Scalable training of deep learning machines by incremental - block training with intra-block parallel optimization and blockwise - model-update filtering - """ - - def __init__(self, cfg: FairseqBMUFConfig, optimizer): - super().__init__(cfg) - self._optimizer = optimizer - self._num_updates = 0 - self.sync_iter = cfg.global_sync_iter - self.block_momentum = cfg.block_momentum - self.block_lr = cfg.block_lr - self._reset_local_data() - self.warmup_iteration = cfg.warmup_iterations - self.use_nbm = cfg.use_nbm - self.initial_state = self._optimizer.state_dict() - self.average_sync = self.cfg.average_sync - self.world_size = self.cfg.distributed_world_size - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - gen_parser_from_dataclass(parser, FairseqBMUFConfig()) - - @property - def optimizer(self): - return self._optimizer.optimizer - - @property - def optimizer_config(self): - return self._optimizer.optimizer_config - - def get_lr(self): - return self._optimizer.get_lr() - - def set_lr(self, lr): - self._optimizer.set_lr(lr) - - def state_dict(self): - return self._optimizer.state_dict() - - def load_state_dict(self, state_dict, optimizer_overrides=None): - self._optimizer.load_state_dict(state_dict, optimizer_overrides) - self.initial_state = self._optimizer.state_dict() - - def multiply_grads(self, c): - """Multiplies grads by a constant *c*.""" - self._optimizer.multiply_grads(c) - - def clip_grad_norm(self, max_norm, aggregate_norm_fn=None): - """Clips gradient norm.""" - return self._optimizer.clip_grad_norm(max_norm, aggregate_norm_fn) - - def average_params(self): - self._optimizer.average_params() - - def _block_sync(self): - if self.world_size <= 1: - return - # Update the global model using local models from all GPUs - # (Step-1) Calculate grad between previously synced model and - # currrent local model - if self.block_momentum != 0: - self._calc_grad() - - # (Step-2) Average gradient from all GPUs - self._avg_grad_from_all_gpus() - - # (Step-3) Calculate global momentum and update the global model - if self.block_momentum != 0: - self._update_global_model() - - # (Step-4) Average local optimizer params - if self.average_sync: - self.average_params() - - def _is_warmup_end(self): - # Check whether train iterations is equal to warmup iter - if self.get_num_updates() == self.warmup_iteration: - return True - return False - - def _is_bmuf_iter(self): - # Check whether train iterations is equal to bmuf sync iter - if (self.get_num_updates() > self.warmup_iteration) and ( - self.get_num_updates() % self.sync_iter == 0 - ): - return True - return False - - def _warmup_sync(self, root_rank=0): - if self.world_size <= 1: - return - # Broadcast the local model to all gpus - for param in self.params: - dist.broadcast(param.data, src=root_rank) - - # Update local optimizer state - if self.average_sync: - self._optimizer.average_params() - else: - self._optimizer.load_state_dict(self.initial_state) - - self._reset_local_data() - - def step(self, closure=None): - """Performs a single optimization 
step.""" - self._optimizer.step(closure) - self.set_num_updates(self.get_num_updates() + 1) - if self._is_warmup_end(): - self._warmup_sync() - elif self._is_bmuf_iter(): - self._block_sync() - - def zero_grad(self): - """Clears the gradients of all optimized parameters.""" - self._optimizer.zero_grad() - - def get_num_updates(self): - """Get the number of parameters updates.""" - return self._num_updates - - def set_num_updates(self, num_updates): - """Set the number of parameters updates.""" - self._num_updates = num_updates - - @torch.no_grad() - def _reset_local_data(self): - # (Step-0) Initialize global momentum parameters and store global copy on each gpu - self.global_params = [torch.zeros_like(p.data) for p in self.params] - self.smoothed_grads = [p.data.new_zeros(p.data.size()) for p in self.params] - self.grads = [p.data.new_zeros(p.data.size()) for p in self.params] - - # saving the global model locally for calculating gradient during bmuf sync - for param, global_param in zip(self.params, self.global_params): - global_param.copy_(param.data) - - @torch.no_grad() - def _calc_grad(self): - # global_params is basically the global copy from the previously finished - # synchronisation. param.data is local parameter after block_sync_freq - # for the local gpu. so grad is difference between previously synced - # model and currrent local model. - for index, (param, global_param) in enumerate( - zip(self.params, self.global_params) - ): - self.grads[index] = global_param - param.data - - def _avg_grad_from_all_gpus(self): - for index, param in enumerate(self.params): - sync_para = param.data if self.block_momentum == 0 else self.grads[index] - sync_para /= float(dist.get_world_size()) - dist.all_reduce(sync_para, op=dist.ReduceOp.SUM) - - @torch.no_grad() - def _update_global_model(self): - for index, (param, global_param, smoothed_grad, grad) in enumerate( - zip( - self.params, - self.global_params, - self.smoothed_grads, - # all gpus would share the same value of smoothed_grad, since it is - # always computed on synchronized gradients. - self.grads, - ) - ): - # global_param is basically last syncrhornized parameter. though - # smoothed_grad is local, all processes will have same value of - # smoothed_grad and hence param is globally synchronized copy. - # smoothed_grad(t) = BM * smoothed_grad(t-1) + BM_lr * grad(t) - smoothed_grad = self.block_momentum * smoothed_grad + self.block_lr * grad - param.data.copy_(global_param - smoothed_grad) - - # A Nesterov momentum here is to do a partial weight update before - # calculating the gradient - if self.use_nbm: - param.data.copy_(param.data - self.block_momentum * smoothed_grad) - - # backup for the next synchronization. - self.smoothed_grads[index] = smoothed_grad - global_param.copy_(param.data) diff --git a/kosmos-g/fairseq/fairseq/optim/composite.py b/kosmos-g/fairseq/fairseq/optim/composite.py deleted file mode 100644 index 63701ee8b..000000000 --- a/kosmos-g/fairseq/fairseq/optim/composite.py +++ /dev/null @@ -1,190 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -from collections import defaultdict -from dataclasses import dataclass, field -from typing import Dict, Any, List, Optional - -import torch.optim -from fairseq.dataclass import FairseqDataclass -from fairseq.optim import FairseqOptimizer, register_optimizer, _build_optimizer -from fairseq.optim.lr_scheduler import FairseqLRScheduler, build_lr_scheduler -from omegaconf import II, open_dict - - -logger = logging.getLogger(__name__) - - -@dataclass -class OptimizerAndSchedulerConfig(FairseqDataclass): - optimizer: Any = None - lr_scheduler: Optional[Any] = None - lr: List = II("optimization.lr") - lr_float: Optional[ - float - ] = None # this makes it easier to sweep on learning rate with auto sweepers - - -@dataclass -class CompositeOptimizerConfig(FairseqDataclass): - groups: Dict[str, Any] = field( - default_factory=lambda: {}, - metadata={ - "help": "optimizer name -> optimizer OptimizerAndSchedulerConfig. " - "Configures a different optimizer and (optionally) lr scheduler for each parameter group" - }, - ) - - -@register_optimizer("composite", dataclass=CompositeOptimizerConfig) -class FairseqCompositeOptimizer(FairseqOptimizer): - - optimizers: Dict[str, FairseqOptimizer] = {} - lr_schedulers: Dict[str, FairseqLRScheduler] = {} - lr_scheduler: FairseqLRScheduler = None - _optimizer: torch.optim.Optimizer - - def __init__(self, cfg: CompositeOptimizerConfig, params): - super().__init__(cfg) - - assert ( - len(params) > 1 - ), "Composite optimizer only works when there are multiple parameter groups (try fp16_no_flatten_grads: true)" - - groupped_params = defaultdict(list) - for p in params: - group = getattr(p, "param_group", "default") - groupped_params[group].append(p) - - assert groupped_params.keys() == cfg.groups.keys(), ( - f"Parameter groups {groupped_params.keys()} and optimizer groups {cfg.groups.keys()} are not the same! " - "Try setting 'param_group' on your parameters in the model." - ) - - for group, group_params in groupped_params.items(): - group_cfg = cfg.groups[group] - with open_dict(group_cfg): - if group_cfg.lr_float is not None: - group_cfg.optimizer.lr = [group_cfg.lr_float] - group_cfg.lr_scheduler.lr = [group_cfg.lr_float] - else: - group_cfg.optimizer.lr = group_cfg.lr - group_cfg.lr_scheduler.lr = group_cfg.lr - self.optimizers[group] = _build_optimizer(group_cfg.optimizer, group_params) - if group_cfg.lr_scheduler is not None: - self.lr_schedulers[group] = build_lr_scheduler( - group_cfg.lr_scheduler, self.optimizers[group] - ) - - if len(self.lr_schedulers) > 0: - assert len(self.lr_schedulers) == len(self.optimizers), ( - f"Please provide an lr scheduler for each optimizer to use pass_through scheduler. 
" - f"Optimizers: {self.optimizers}; Lr scheds: {self.lr_schedulers}" - ) - self.lr_scheduler = CompositeLRScheduler(self.lr_schedulers) - - self._optimizer = CompositeOptimizer(self.optimizers) - - @property - def supports_groups(self): - return True - - @property - def param_groups(self): - for opt in self.optimizers.values(): - for group in opt.param_groups: - yield group - - def get_lr(self): - """Return the current learning rate.""" - k = ( - "default" - if "default" in self.optimizers - else next(iter(self.optimizers.keys())) - ) - return self.optimizers[k].param_groups[0]["lr"] - - def state_dict(self): - """Return the LR scheduler state dict.""" - return {k: s.state_dict() for k, s in self.optimizers.items()} - - def load_state_dict(self, state_dict, optimizer_overrides=None): - """Load an LR scheduler state dict.""" - for k, state in state_dict.items(): - if k not in self.optimizers: - # skip extra keys like "loss_scale" added by fp16 optimizer - continue - - overrides = ( - optimizer_overrides[k] - if isinstance(optimizer_overrides, dict) and k in optimizer_overrides - else None - ) - self.optimizers[k].load_state_dict(state, optimizer_overrides=overrides) - - -class CompositeOptimizer(torch.optim.Optimizer): - def __init__(self, optimizers: Dict[str, FairseqOptimizer]): - self.optimizers = optimizers - - @property - def supports_memory_efficient_fp16(self): - return all(o.supports_memory_efficient_fp16 for o in self.optimizers.values()) - - @property - def supports_flat_params(self): - return all(o.supports_flat_params for o in self.optimizers.values()) - - def step(self, closure=None, groups=None): - """Performs a single optimization step. - - Args: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. - """ - loss = None - if closure is not None: - loss = closure() - - for k, opt in self.optimizers.items(): - if groups is None or k in groups: - opt.step() - - return loss - - def zero_grad(self): - for opt in self.optimizers.values(): - opt.zero_grad() - - -class CompositeLRScheduler(FairseqLRScheduler): - def __init__(self, lr_schedulers): - super().__init__(None, None) - - self.lr_schedulers = lr_schedulers - - def state_dict(self): - """Return the LR scheduler state dict.""" - return {k: s.state_dict() for k, s in self.lr_schedulers.items()} - - def load_state_dict(self, state_dict): - """Load an LR scheduler state dict.""" - for k, state in state_dict.items(): - self.lr_schedulers[k].load_state_dict(state) - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - for s in self.lr_schedulers.values(): - s.step_begin_epoch(epoch) - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - for s in self.lr_schedulers.values(): - s.step(epoch) - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - return {k: s.step_update(num_updates) for k, s in self.lr_schedulers.items()} diff --git a/kosmos-g/fairseq/fairseq/optim/cpu_adam.py b/kosmos-g/fairseq/fairseq/optim/cpu_adam.py deleted file mode 100644 index b218934e7..000000000 --- a/kosmos-g/fairseq/fairseq/optim/cpu_adam.py +++ /dev/null @@ -1,210 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import importlib -from collections.abc import Collection -from dataclasses import dataclass, field -from typing import List - -import torch -from fairseq.dataclass import FairseqDataclass -from fairseq.optim import FairseqOptimizer, register_optimizer -from omegaconf import II, DictConfig - - -try: - import deepspeed - - has_deepspeed = True -except ImportError as e: - has_deepspeed = False - - -def _get_cpu_adam(): - try: - from deepspeed.ops.op_builder import CPUAdamBuilder - - return CPUAdamBuilder().load() - except ImportError: - # fbcode - from deepspeed.ops.adam import DeepSpeedCPUAdam as ds_opt_adam - - return ds_opt_adam - - -@dataclass -class FairseqCPUAdamConfig(FairseqDataclass): - adam_betas: str = field( - default="(0.9, 0.999)", metadata={"help": "betas for Adam optimizer"} - ) - adam_eps: float = field( - default=1e-8, metadata={"help": "epsilon for Adam optimizer"} - ) - weight_decay: float = field(default=0.0, metadata={"help": "weight decay"}) - fp16_adam_stats: bool = field( - default=False, metadata={"help": "use FP16 stats (with automatic scaling)"} - ) - # TODO common vars below in parent - lr: List[float] = II("optimization.lr") - - -@register_optimizer("cpu_adam", dataclass=FairseqCPUAdamConfig) -class FairseqCPUAdam(FairseqOptimizer): - """Adam optimizer for fairseq, optimized for CPU tensors. - - Important note: this optimizer corresponds to the "AdamW" variant of - Adam in its weight decay behavior. As such, it is most closely - analogous to torch.optim.AdamW from PyTorch. - """ - - def __init__(self, cfg: DictConfig, params): - super().__init__(cfg) - self._optimizer = CPUAdam(params, **self.optimizer_config) - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. 
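For reference, a hedged usage sketch of the DeepSpeed optimizer that `_get_cpu_adam` above loads; this assumes DeepSpeed is installed and its CPUAdam op builds on the machine, and the keyword names follow `deepspeed.ops.adam.DeepSpeedCPUAdam` as of recent releases:

```python
import torch
from deepspeed.ops.adam import DeepSpeedCPUAdam  # requires `pip install deepspeed`

model = torch.nn.Linear(8, 8)
opt = DeepSpeedCPUAdam(
    model.parameters(),
    lr=1e-3,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.0,
    adamw_mode=True,  # matches the adamw_mode=True hardcoded in CPUAdam above
)

loss = model(torch.randn(2, 8)).sum()
loss.backward()
opt.step()  # the Adam math runs in the C++ CPU extension
```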
- """ - return { - "lr": self.cfg.lr[0] - if isinstance(self.cfg.lr, Collection) - else self.cfg.lr, - "betas": eval(self.cfg.adam_betas), - "eps": self.cfg.adam_eps, - "weight_decay": self.cfg.weight_decay, - "use_fp16_stats": self.cfg.fp16_adam_stats, - } - - -class CPUAdam(torch.optim.Optimizer): - - optimizer_id = 0 - - def __init__( - self, - params, - lr=1e-3, - bias_correction=True, - betas=(0.9, 0.999), - eps=1e-8, - weight_decay=0, - use_fp16_stats=False, - ): - defaults = { - "lr": lr, - "bias_correction": bias_correction, - "betas": betas, - "eps": eps, - "weight_decay": weight_decay, - } - super().__init__(params, defaults) - - self.use_fp16_stats = use_fp16_stats - self.FLOAT16_MAX = 65504.0 - - if not has_deepspeed: - raise ImportError("Please install DeepSpeed: pip install deepspeed") - - self.opt_id = CPUAdam.optimizer_id - CPUAdam.optimizer_id = CPUAdam.optimizer_id + 1 - - self.ds_opt_adam = _get_cpu_adam() - adamw_mode = True - self.ds_opt_adam.create_adam( - self.opt_id, lr, betas[0], betas[1], eps, weight_decay, adamw_mode - ) - - @property - def supports_memory_efficient_fp16(self): - return True - - @property - def supports_flat_params(self): - return True - - @torch.no_grad() - def step(self, closure=None): - loss = None - if closure is not None: - with torch.enable_grad(): - loss = closure() - - torch.cuda.synchronize() - - for group_id, group in enumerate(self.param_groups): - for param_id, p in enumerate(group["params"]): - if p.grad is None: - continue - - state = self.state[p] - if len(state) == 0: - state["step"] = 0 - dtype = torch.float16 if self.use_fp16_stats else p.data.dtype - # gradient momentums - state["exp_avg"] = torch.zeros_like( - p.data, dtype=dtype, device="cpu" - ) - # gradient variances - state["exp_avg_sq"] = torch.zeros_like( - p.data, dtype=dtype, device="cpu" - ) - if self.use_fp16_stats: - assert torch.is_floating_point(p.data) - state["exp_avg_scale"] = 1.0 - state["exp_avg_sq_scale"] = 1.0 - - exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"] - - p_data_bak = p.data # backup of the original data pointer - - p.data = p.data.to(dtype=torch.float32, device="cpu") - p.grad.data = p.grad.data.to(dtype=torch.float32, device="cpu") - - if self.use_fp16_stats: - exp_avg = exp_avg.float() * state["exp_avg_scale"] - exp_avg_sq = exp_avg_sq.float() * state["exp_avg_sq_scale"] - - state["step"] += 1 - beta1, beta2 = group["betas"] - - self.ds_opt_adam.adam_update( - self.opt_id, - state["step"], - group["lr"], - beta1, - beta2, - group["eps"], - group["weight_decay"], - group["bias_correction"], - p.data, - p.grad.data, - exp_avg, - exp_avg_sq, - ) - - if p_data_bak.data_ptr() != p.data.data_ptr(): - p_data_bak.copy_(p.data) - p.data = p_data_bak - - if self.use_fp16_stats: - - def inf_norm(t): - return torch.norm(t, float("inf")) - - # from github.com/openai/jukebox/blob/master/jukebox/utils/fp16.py - state["exp_avg_scale"], state["exp_avg_sq_scale"] = ( - 1e-8 + inf_norm(exp_avg) / self.FLOAT16_MAX, - 1e-8 + inf_norm(exp_avg_sq) / self.FLOAT16_MAX, - ) - state["exp_avg"], state["exp_avg_sq"] = ( - (exp_avg / state["exp_avg_scale"]).half(), - (exp_avg_sq / state["exp_avg_sq_scale"]).half(), - ) - - return loss diff --git a/kosmos-g/fairseq/fairseq/optim/dynamic_loss_scaler.py b/kosmos-g/fairseq/fairseq/optim/dynamic_loss_scaler.py deleted file mode 100644 index f42df45ea..000000000 --- a/kosmos-g/fairseq/fairseq/optim/dynamic_loss_scaler.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -class DynamicLossScaler(object): - def __init__( - self, - init_scale=2.0 ** 15, - scale_factor=2.0, - scale_window=2000, - tolerance=0.0, - threshold=None, - min_loss_scale=1e-4, - ): - self.loss_scale = init_scale - self.scale_factor = scale_factor - self.scale_window = scale_window - self.tolerance = tolerance - self.threshold = threshold - self._iter = 0 - self._last_overflow_iter = -1 - self._last_rescale_iter = -1 - self._overflows_since_rescale = 0 - self.min_loss_scale = min_loss_scale - - def scale(self, outputs): - return self.loss_scale * outputs - - def update(self): - if (self._iter - self._last_overflow_iter) % self.scale_window == 0: - self.loss_scale *= self.scale_factor - self._last_rescale_iter = self._iter - self._iter += 1 - - def _decrease_loss_scale(self): - self.loss_scale /= self.scale_factor - if self.threshold is not None: - self.loss_scale = max(self.loss_scale, self.threshold) - - def check_overflow(self, grad_norm=None, overflow=False): - # detect inf and nan - if overflow or grad_norm == float("inf") or grad_norm != grad_norm: - # overflow has occured - prev_scale = self.loss_scale - iter_since_rescale = self._iter - self._last_rescale_iter - - self._last_overflow_iter = self._iter - self._overflows_since_rescale += 1 - pct_overflow = self._overflows_since_rescale / float(iter_since_rescale) - if pct_overflow >= self.tolerance: - self._decrease_loss_scale() - self._last_rescale_iter = self._iter - self._overflows_since_rescale = 0 - - if self.loss_scale <= self.min_loss_scale: - # Use FloatingPointError as an uncommon error that parent - # functions can safely catch to stop training. - self.loss_scale = prev_scale - raise FloatingPointError( - ( - "Minimum loss scale reached ({}). Your loss is probably exploding. " - "Try lowering the learning rate, using gradient clipping or " - "increasing the batch size." - ).format(self.min_loss_scale) - ) - - self._iter += 1 - raise OverflowError("setting loss scale to: " + str(self.loss_scale)) diff --git a/kosmos-g/fairseq/fairseq/optim/fairseq_optimizer.py b/kosmos-g/fairseq/fairseq/optim/fairseq_optimizer.py deleted file mode 100644 index 7e5411753..000000000 --- a/kosmos-g/fairseq/fairseq/optim/fairseq_optimizer.py +++ /dev/null @@ -1,179 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
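`DynamicLossScaler` above implements the classic schedule: halve the scale whenever gradients overflow, double it after `scale_window` consecutive overflow-free updates. A minimal re-simulation of that schedule (the tolerance and threshold refinements are left out):

```python
# Pretend steps 3 and 4 saw inf/nan gradients.
scale, factor, window = 2.0 ** 15, 2.0, 4
last_overflow = -1

for it in range(12):
    if it in (3, 4):                          # overflow: back off
        scale /= factor
        last_overflow = it
    elif (it - last_overflow) % window == 0:  # stable: grow periodically
        scale *= factor
    print(it, scale)
```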
- -import torch -from fairseq import utils -from fairseq.dataclass.utils import gen_parser_from_dataclass - - -class FairseqOptimizer(object): - def __init__(self, cfg): - super().__init__() - self.cfg = cfg - - @classmethod - def add_args(cls, parser): - """Add optimizer-specific arguments to the parser.""" - dc = getattr(cls, "__dataclass", None) - if dc is not None: - gen_parser_from_dataclass(parser, dc()) - - @property - def optimizer(self): - """Return a torch.optim.optimizer.Optimizer instance.""" - if not hasattr(self, "_optimizer"): - raise NotImplementedError - if not isinstance(self._optimizer, torch.optim.Optimizer): - raise ValueError("_optimizer must be an instance of torch.optim.Optimizer") - return self._optimizer - - @optimizer.setter - def optimizer(self, optimizer): - """Reset optimizer instance.""" - if not hasattr(self, "_optimizer"): - raise NotImplementedError - if not isinstance(self._optimizer, torch.optim.Optimizer): - raise ValueError("_optimizer must be an instance of torch.optim.Optimizer") - self._optimizer = optimizer - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - """ - raise NotImplementedError - - @property - def params(self): - """Return an iterable of the parameters held by the optimizer.""" - for param_group in self.param_groups: - for p in param_group["params"]: - yield p - - @property - def param_groups(self): - return self.optimizer.param_groups - - def __getstate__(self): - return self._optimizer.__getstate__() - - def get_lr(self): - """Return the current learning rate.""" - return self.param_groups[0]["lr"] - - def set_lr(self, lr): - """Set the learning rate.""" - for param_group in self.param_groups: - param_group["lr"] = lr - - def state_dict(self): - """Return the optimizer's state dict.""" - return self.optimizer.state_dict() - - def load_state_dict(self, state_dict, optimizer_overrides=None): - """Load an optimizer state dict. - - In general we should prefer the configuration of the existing optimizer - instance (e.g., learning rate) over that found in the state_dict. This - allows us to resume training from a checkpoint using a new set of - optimizer args. - """ - self.optimizer.load_state_dict(state_dict) - - if optimizer_overrides is not None and len(optimizer_overrides) > 0: - # override learning rate, momentum, etc. with latest values - for group in self.param_groups: - group.update(optimizer_overrides) - - def backward(self, loss): - """Computes the sum of gradients of the given tensor w.r.t. 
graph leaves.""" - loss.backward() - - def all_reduce_grads(self, module): - """Manually all-reduce gradients (if required).""" - if hasattr(module, "all_reduce_grads"): - module.all_reduce_grads() - - def multiply_grads(self, c): - """Multiplies grads by a constant *c*.""" - for p in self.params: - if p.grad is not None: - if torch.is_tensor(c): - c = c.to(p.grad.device) - p.grad.data.mul_(c) - - def clip_grad_norm(self, max_norm, aggregate_norm_fn=None): - """Clips gradient norm.""" - return utils.clip_grad_norm_(self.params, max_norm, aggregate_norm_fn) - - def step(self, closure=None, scale=1.0, groups=None): - """Performs a single optimization step.""" - if self.supports_step_with_scale: - if self.supports_groups: - self.optimizer.step(closure, scale=scale, groups=groups) - else: - self.optimizer.step(closure, scale=scale) - else: - if scale != 1.0: - self.multiply_grads(1.0 / scale) - if self.supports_groups: - self.optimizer.step(closure, groups=groups) - else: - self.optimizer.step(closure) - - def zero_grad(self): - """Clears the gradients of all optimized parameters.""" - for p in self.params: - p.grad = None - self.optimizer.zero_grad() - - @property - def supports_memory_efficient_fp16(self): - if hasattr(self.optimizer, "supports_memory_efficient_fp16"): - return self.optimizer.supports_memory_efficient_fp16 - return False - - @property - def supports_step_with_scale(self): - if hasattr(self.optimizer, "supports_step_with_scale"): - return self.optimizer.supports_step_with_scale - return False - - @property - def supports_groups(self): - if hasattr(self.optimizer, "supports_groups"): - return self.optimizer.supports_groups - return False - - @property - def supports_flat_params(self): - """ - Whether the optimizer supports collapsing of the model - parameters/gradients into a single contiguous Tensor. - """ - if hasattr(self.optimizer, "supports_flat_params"): - return self.optimizer.supports_flat_params - return False - - def average_params(self): - pass - - def broadcast_global_state_dict(self, state_dict): - """ - Broadcasts a global state dict to all ranks. - Useful for optimizers that shard state between ranks. - """ - if hasattr(self.optimizer, "broadcast_global_state_dict"): - return self.optimizer.broadcast_global_state_dict(state_dict) - else: - return state_dict - - -class LegacyFairseqOptimizer(FairseqOptimizer): - def __init__(self, args): - self.args = args diff --git a/kosmos-g/fairseq/fairseq/optim/fp16_optimizer.py b/kosmos-g/fairseq/fairseq/optim/fp16_optimizer.py deleted file mode 100644 index f8af2883a..000000000 --- a/kosmos-g/fairseq/fairseq/optim/fp16_optimizer.py +++ /dev/null @@ -1,552 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from collections import defaultdict -from itertools import chain - -import torch -from fairseq import optim -from omegaconf import DictConfig - -from .dynamic_loss_scaler import DynamicLossScaler - - -class _FP16OptimizerMixin(object): - def __init__(self, *args, **kwargs): - # forward __init__ call to the next class in mro(method resolution order) - super().__init__(*args, **kwargs) - self._multiply_factor = 1.0 - - @property - def has_flat_params(self): - return torch.is_tensor(self.fp32_params) or ( - isinstance(self.fp32_params, dict) - and all(torch.is_tensor(t) for t in self.fp32_params.values()) - ) - - @classmethod - def build_fp32_params(cls, args, params, flatten=True): - # create FP32 copy of parameters and grads - if flatten: - is_pipeline_parallel = getattr( - args, "pipeline_model_parallel", False - ) and getattr(args, "distributed_no_spawn", False) - total_param_size = sum(p.data.numel() for p in params) - devices = [torch.cuda.current_device()] - if is_pipeline_parallel: - devices = list(set(args.pipeline_devices)) - fp32_params = {} - for device in devices: - if is_pipeline_parallel: - device_param_size = sum( - p.data.numel() for p in params if p.device.index == device - ) - device_params = [p for p in params if p.device.index == device] - else: - device_param_size = total_param_size - device_params = params - fp32_params[device] = ( - device_params[0].new(0).float().new(device_param_size) - ) - offset = 0 - for p in device_params: - numel = p.data.numel() - fp32_params[device][offset : offset + numel].copy_(p.data.view(-1)) - offset += numel - fp32_params[device] = torch.nn.Parameter(fp32_params[device]) - fp32_params[device].grad = fp32_params[device].data.new( - device_param_size - ) - return fp32_params - else: - fp32_params = [] - for p in params: - p32 = torch.nn.Parameter(p.data.float()) - if hasattr(p, "expert"): - p32.expert = True - elif hasattr(p, "base_expert"): - p32.base_expert = True - p32.grad = torch.zeros_like(p32.data) - if hasattr(p, "param_group"): - p32.param_group = p.param_group - fp32_params.append(p32) - return fp32_params - - def state_dict(self): - """Return the optimizer's state dict.""" - state_dict = self.fp32_optimizer.state_dict() - if self.scaler is not None: - state_dict["loss_scale"] = self.scaler.loss_scale - return state_dict - - def load_state_dict(self, state_dict, optimizer_overrides=None): - """Load an optimizer state dict. - - In general we should prefer the configuration of the existing optimizer - instance (e.g., learning rate) over that found in the state_dict. This - allows us to resume training from a checkpoint using a new set of - optimizer args. - """ - if "loss_scale" in state_dict and self.scaler is not None: - self.scaler.loss_scale = state_dict["loss_scale"] - self.fp32_optimizer.load_state_dict(state_dict, optimizer_overrides) - - def backward(self, loss): - """Computes the sum of gradients of the given tensor w.r.t. graph leaves. - - Compared to :func:`fairseq.optim.FairseqOptimizer.backward`, this - function additionally dynamically scales the loss to avoid gradient - underflow. 
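The flattening branch of `build_fp32_params` above packs every parameter into one contiguous fp32 buffer (per device, for pipeline parallelism) so the fp32 optimizer sees a single tensor. The core copy loop in isolation:

```python
import torch

# Two fp16 "model" parameters to be flattened.
params = [torch.randn(3, 2).half(), torch.randn(5).half()]

total = sum(p.numel() for p in params)
flat = params[0].new(0).float().new(total)     # fp32 buffer on the same device
offset = 0
for p in params:
    n = p.numel()
    flat[offset:offset + n].copy_(p.view(-1))  # copy_ upcasts half -> float
    offset += n

fp32_param = torch.nn.Parameter(flat)
fp32_param.grad = fp32_param.data.new(total)   # preallocated flat gradient
print(fp32_param.shape, fp32_param.dtype)      # torch.Size([11]) torch.float32
```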
- """ - if self.scaler is not None: - loss = self.scaler.scale(loss) - loss.backward() - self._needs_sync = True - - def _sync_fp16_grads_to_fp32(self): - if self._needs_sync: - # copy FP16 grads to FP32 - if self.has_flat_params: - devices = list(self.fp32_params.keys()) - device_params_dict = defaultdict(list) - for p in self.fp16_params: - if p.requires_grad: - device_params_dict[p.device.index].append(p) - for device in devices: - device_params = device_params_dict[device] - offset = 0 - for p in device_params: - grad_data = ( - p.grad.data - if p.grad is not None - else p.data.new_zeros(p.data.shape) - ) - numel = grad_data.numel() - self.fp32_params[device].grad.data[ - offset : offset + numel - ].copy_(grad_data.view(-1)) - offset += numel - else: - for p, p32 in zip(self.fp16_params, self.fp32_params): - if not p.requires_grad: - continue - if p.grad is not None: - if p32.grad is None: - p32.grad = p.grad.data.float() - else: - p32.grad.data.copy_(p.grad.data) - else: - p32.grad = torch.zeros_like(p.data, dtype=torch.float) - - self._needs_sync = False - - def _sync_fp32_params_to_fp16(self): - # copy FP32 params back into FP16 model - if self.has_flat_params: - devices = list(self.fp32_params.keys()) - device_params_dict = defaultdict(list) - for p in self.fp16_params: - device_params_dict[p.device.index].append(p) - for device in devices: - device_params = device_params_dict[device] - offset = 0 - for p in device_params: - numel = p.data.numel() - p.data.copy_( - self.fp32_params[device] - .data[offset : offset + numel] - .view_as(p.data) - ) - offset += numel - else: - for p, p32 in zip(self.fp16_params, self.fp32_params): - if not p.requires_grad: - continue - p.data.copy_(p32.data) - - def _unscale_grads(self): - self._sync_fp16_grads_to_fp32() - if ( - # Skip the multiplication if it's a no-op (i.e., if _multiply_factor - # is 1.0). At the same time, we want to avoid the device-to-host - # transfer by comparing it to 1.0. Since _multiply_factor starts as - # a Python float, we roughly assume that if it's a tensor then it's - # probably not =1.0 anymore and we do the multiplication. Otherwise - # we can safely check the value without a D2H transfer. 
- torch.is_tensor(self._multiply_factor) - or self._multiply_factor != 1.0 - ): - self.fp32_optimizer.multiply_grads(self._multiply_factor) - self._multiply_factor = 1.0 - - def multiply_grads(self, c): - """Multiplies grads by a constant ``c``.""" - self._multiply_factor *= c - - def clip_grad_norm(self, max_norm, aggregate_norm_fn=None): - """Clips gradient norm and updates dynamic loss scaler.""" - self._sync_fp16_grads_to_fp32() - - grad_norm = self._multiply_factor * self.fp32_optimizer.clip_grad_norm( - 0, aggregate_norm_fn - ) - - if self.scaler is not None: - if grad_norm > max_norm > 0.0: - self._multiply_factor *= max_norm / grad_norm - - self.scaler.check_overflow(grad_norm) - elif max_norm > 0.0: - clip_coef = (max_norm / (grad_norm + 1e-6)).clamp_(max=1) - self._multiply_factor *= clip_coef - - return grad_norm - - def step(self, closure=None, groups=None): - """Performs a single optimization step.""" - self._sync_fp16_grads_to_fp32() - - if getattr(self, "supports_step_with_scale", False): - self.fp32_optimizer.step( - closure, scale=(1.0 / self._multiply_factor), groups=groups - ) - else: - self._unscale_grads() - self.fp32_optimizer.step(closure, groups=groups) - - if self.scaler is not None: - self.scaler.update() - - self._sync_fp32_params_to_fp16() - - def zero_grad(self): - """Clears the gradients of all optimized parameters.""" - for p in self.fp16_params: - p.grad = None - if self.has_flat_params: - if torch.is_tensor(self.fp32_params): - self.fp32_params.grad.zero_() - elif isinstance(self.fp32_params, dict): - for fp32_params in self.fp32_params.values(): - fp32_params.grad.zero_() - else: - raise RuntimeError("self.fp32_params must be a tensor or dict") - else: - for p32 in self.fp32_params: - if p32.grad is not None: - p32.grad.zero_() - self._needs_sync = False - - if self.scaler is not None: - self._multiply_factor = 1.0 / float(self.scaler.loss_scale) - - -class FP16Optimizer(_FP16OptimizerMixin, optim.FairseqOptimizer): - """ - Wrap an *optimizer* to support FP16 (mixed precision) training. 
- """ - - def __init__(self, cfg: DictConfig, params, fp32_optimizer, fp32_params, **kwargs): - super().__init__(cfg.optimizer) - self.fp16_params = params - self.fp32_optimizer = fp32_optimizer - self.fp32_params = fp32_params - - if getattr(cfg.common, "fp16_scale_window", None) is None: - if len(cfg.optimization.update_freq) > 1: - raise ValueError( - "--fp16-scale-window must be given explicitly when using a " - "custom --update-freq schedule" - ) - data_parallel_size = int( - cfg.distributed_training.distributed_world_size - / cfg.common.model_parallel_size - ) - scale_window = int( - 2 ** 14 / data_parallel_size / cfg.optimization.update_freq[0] - ) - else: - scale_window = cfg.common.fp16_scale_window - - if not getattr(cfg.common, "bf16", False): - self.scaler = DynamicLossScaler( - init_scale=cfg.common.fp16_init_scale, - scale_window=scale_window, - tolerance=cfg.common.fp16_scale_tolerance, - threshold=cfg.common.threshold_loss_scale, - min_loss_scale=cfg.common.min_loss_scale, - ) - else: - # disable loss scaling for bfloat16 - self.scaler = None - - @classmethod - def build_optimizer(cls, cfg: DictConfig, params, **kwargs): - """ - Args: - cfg (omegaconf.DictConfig): fairseq args - params (iterable): iterable of parameters to optimize - """ - flatten = not getattr(cfg.common, "fp16_no_flatten_grads", False) - if getattr(cfg.common, "bf16", False): - flatten = False # mixed precision is faster on TPUs without flat grads - fp32_params = cls.build_fp32_params(cfg.optimizer, params, flatten=flatten) - if flatten: - fp32_optimizer = optim.build_optimizer(cfg.optimizer, [fp32_params]) - else: - fp32_optimizer = optim.build_optimizer(cfg.optimizer, fp32_params) - if flatten and not fp32_optimizer.supports_flat_params: - raise RuntimeError( - f"chosen optimizer {fp32_optimizer.__class__.__name__} does not support flat params, please set --fp16-no-flatten-grads" - ) - return cls(cfg, params, fp32_optimizer, fp32_params, **kwargs) - - @property - def optimizer(self): - return self.fp32_optimizer.optimizer - - @optimizer.setter - def optimizer(self, optimizer): - self.fp32_optimizer.optimizer = optimizer - - @property - def lr_scheduler(self): - return getattr(self.fp32_optimizer, "lr_scheduler", None) - - @property - def optimizer_config(self): - return self.fp32_optimizer.optimizer_config - - def get_lr(self): - return self.fp32_optimizer.get_lr() - - def set_lr(self, lr): - self.fp32_optimizer.set_lr(lr) - - def all_reduce_grads(self, module): - self.fp32_optimizer.all_reduce_grads(module) - - @property - def supports_flat_params(self): - return self.fp32_optimizer.supports_flat_params - - -class _MemoryEfficientFP16OptimizerMixin(object): - def __init__(self, *args, **kwargs): - # forward __init__ call to the next class in MRO (method resolution order) - super().__init__(*args, **kwargs) - self._multiply_factor = 1.0 - - @property - def has_flat_params(self): - return False - - def state_dict(self): - """Return the optimizer's state dict.""" - state_dict = self.wrapped_optimizer.state_dict() - if self.scaler is not None: - state_dict["loss_scale"] = self.scaler.loss_scale - return state_dict - - def load_state_dict(self, state_dict, optimizer_overrides=None): - """Load an optimizer state dict. - - In general we should prefer the configuration of the existing optimizer - instance (e.g., learning rate) over that found in the state_dict. This - allows us to resume training from a checkpoint using a new set of - optimizer args. 
- """ - if "loss_scale" in state_dict and self.scaler is not None: - self.scaler.loss_scale = state_dict["loss_scale"] - - self.wrapped_optimizer.load_state_dict(state_dict, optimizer_overrides) - - # Hack: PyTorch automatically casts the optimizer state to match the - # type of the current parameters. But with --memory-efficient-fp16 the - # params are FP16 while the optimizer state is FP32 and we don't want - # to cast. A workaround is to manually copy back the original state - # after the optimizer has been loaded. - if not getattr(self.optimizer, "disable_mem_eff_fp16_loading_hack", False): - groups = self.optimizer.param_groups - saved_groups = state_dict["param_groups"] - id_map = { - old_id: p - for old_id, p in zip( - chain(*(g["params"] for g in saved_groups)), - chain(*(g["params"] for g in groups)), - ) - } - for k, v in state_dict["state"].items(): - if k in id_map: - param = id_map[k] - self.optimizer.state[param] = v - - def backward(self, loss): - """Computes the sum of gradients of the given tensor w.r.t. graph leaves. - - Compared to :func:`fairseq.optim.FairseqOptimizer.backward`, this - function additionally dynamically scales the loss to avoid gradient - underflow. - """ - if self.scaler is not None: - loss = self.scaler.scale(loss) - loss.backward() - - def _unscale_grads(self): - if ( - # Skip the multiplication if it's a no-op (i.e., if _multiply_factor - # is 1.0). At the same time, we want to avoid the device-to-host - # transfer by comparing it to 1.0. Since _multiply_factor starts as - # a Python float, we roughly assume that if it's a tensor then it's - # probably not =1.0 anymore and we do the multiplication. Otherwise - # we can safely check the value without a D2H transfer. - torch.is_tensor(self._multiply_factor) - or self._multiply_factor != 1.0 - ): - self.wrapped_optimizer.multiply_grads(self._multiply_factor) - self._multiply_factor = 1.0 - - def multiply_grads(self, c): - """Multiplies grads by a constant *c*.""" - self._multiply_factor *= c - - def clip_grad_norm(self, max_norm, aggregate_norm_fn=None): - """Clips gradient norm and updates dynamic loss scaler.""" - max_norm = float(max_norm) - grad_norm = self._multiply_factor * self.wrapped_optimizer.clip_grad_norm( - 0, aggregate_norm_fn - ) - - if self.scaler is not None: - grad_norm_cpu = float(grad_norm) - if grad_norm_cpu > max_norm > 0.0: - self._multiply_factor *= max_norm / grad_norm_cpu - - # detect overflow and adjust loss scale - self.scaler.check_overflow(grad_norm_cpu) - elif max_norm > 0.0: - clip_coef = (max_norm / (grad_norm + 1e-6)).clamp_(max=1) - self._multiply_factor *= clip_coef - - return grad_norm - - def step(self, closure=None, groups=None): - """Performs a single optimization step.""" - if getattr(self, "supports_step_with_scale", False): - # NOTE(msb) optimizer divides by scale factor - self.wrapped_optimizer.step( - closure, scale=(1.0 / self._multiply_factor), groups=groups - ) - else: - self._unscale_grads() - self.wrapped_optimizer.step(closure, groups=groups) - - if self.scaler is not None: - self.scaler.update() - - def zero_grad(self): - """Clears the gradients of all optimized parameters.""" - self.wrapped_optimizer.zero_grad() - if self.scaler is not None: - self._multiply_factor = 1.0 / float(self.scaler.loss_scale) - else: - self._multiply_factor = 1.0 - - @property - def supports_flat_params(self): - return self.wrapped_optimizer.supports_flat_params - - -class MemoryEfficientFP16Optimizer( - _MemoryEfficientFP16OptimizerMixin, optim.FairseqOptimizer -): - """ - 
Wrap an *optimizer* to support FP16 (mixed precision) training. - - Compared to :class:`fairseq.optim.FP16Optimizer`, this version does not - maintain an FP32 copy of the model. We instead expect the optimizer to - convert the gradients to FP32 internally and sync the results back to the - FP16 model params. This significantly reduces memory usage but slightly - increases the time spent in the optimizer. - - Since this wrapper depends on specific functionality in the wrapped - optimizer (i.e., on-the-fly conversion of grads to FP32), only certain - optimizers can be wrapped. This is determined by the - *supports_memory_efficient_fp16* property. - """ - - def __init__( - self, cfg: DictConfig, params, optimizer, allow_unsupported=False, **kwargs - ): - if not allow_unsupported and not optimizer.supports_memory_efficient_fp16: - raise ValueError( - "Unsupported optimizer: {}".format(optimizer.__class__.__name__) - ) - - super().__init__(getattr(cfg, "optimizer", None)) - self.wrapped_optimizer = optimizer - - if getattr(cfg.common, "fp16_scale_window", None) is None: - if len(cfg.optimization.update_freq) > 1: - raise ValueError( - "--fp16-scale-window must be given explicitly when using a " - "custom --update-freq schedule" - ) - data_parallel_size = int( - cfg.distributed_training.distributed_world_size - / cfg.common.model_parallel_size - ) - scale_window = int( - 2 ** 14 / data_parallel_size / cfg.optimization.update_freq[0] - ) - else: - scale_window = cfg.common.fp16_scale_window - - if not getattr(cfg.common, "bf16", False): - self.scaler = DynamicLossScaler( - init_scale=cfg.common.fp16_init_scale, - scale_window=scale_window, - tolerance=cfg.common.fp16_scale_tolerance, - threshold=cfg.common.threshold_loss_scale, - min_loss_scale=cfg.common.min_loss_scale, - ) - else: - # disable loss scaling for bfloat16 - self.scaler = None - - @classmethod - def build_optimizer(cls, cfg: DictConfig, params, **kwargs): - """ - Args: - args (argparse.Namespace): fairseq args - params (iterable): iterable of parameters to optimize - """ - fp16_optimizer = optim.build_optimizer(cfg.optimizer, params) - return cls(cfg, params, fp16_optimizer, **kwargs) - - @property - def optimizer(self): - return self.wrapped_optimizer.optimizer - - @optimizer.setter - def optimizer(self, optimizer): - self.wrapped_optimizer.optimizer = optimizer - - @property - def optimizer_config(self): - return self.wrapped_optimizer.optimizer_config - - @property - def lr_scheduler(self): - return getattr(self.wrapped_optimizer, "lr_scheduler", None) - - def get_lr(self): - return self.wrapped_optimizer.get_lr() - - def set_lr(self, lr): - self.wrapped_optimizer.set_lr(lr) - - def all_reduce_grads(self, module): - self.wrapped_optimizer.all_reduce_grads(module) diff --git a/kosmos-g/fairseq/fairseq/optim/fused_adam.py b/kosmos-g/fairseq/fairseq/optim/fused_adam.py deleted file mode 100644 index 1290ecfdb..000000000 --- a/kosmos-g/fairseq/fairseq/optim/fused_adam.py +++ /dev/null @@ -1,386 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import types - -import torch - - -def get_fused_adam_class(): - """ - Look for the FusedAdam optimizer from apex. We first try to load the - "contrib" interface, which is a bit faster than the main interface, - but is technically deprecated. 
- """ - try: - # The "deprecated" interface in recent versions of apex is a bit - # faster than the main interface, since we don't use the apex - # optimizer. This can be installed by passing the - # `--deprecated_fused_adam` option when building apex. - global fused_adam_cuda - import importlib - - fused_adam_cuda = importlib.import_module("fused_adam_cuda") - return FusedAdamV1 - except ImportError: - try: - # fallback to the newer interface - from apex.multi_tensor_apply import multi_tensor_applier - from apex.optimizers import FusedAdam as _FusedAdam # noqa - - if multi_tensor_applier.available: - return FusedAdamV2 - except ImportError: - pass - return None - - -class FusedAdamV1(torch.optim.Optimizer): - """ - Implements Adam algorithm. Currently GPU-only. Requires Apex to be installed via - ``python setup.py install --cuda_ext --cpp_ext``. - - It has been proposed in `Adam: A Method for Stochastic Optimization`_. - - Compared to the original version in Apex, the fairseq version casts grads - and params to FP32 internally to support ``--memory-efficient-fp16``. - - Args: - params (iterable): iterable of parameters to optimize or dicts defining - parameter groups. - lr (float, optional): learning rate. (default: 1e-3) - betas (Tuple[float, float], optional): coefficients used for computing - running averages of gradient and its square. (default: (0.9, 0.999)) - eps (float, optional): term added to the denominator to improve - numerical stability. (default: 1e-8) - weight_decay (float, optional): weight decay (L2 penalty) (default: 0) - amsgrad (boolean, optional): whether to use the AMSGrad variant of this - algorithm from the paper `On the Convergence of Adam and Beyond`_ - (default: False) NOT SUPPORTED in FusedAdam! - eps_inside_sqrt (boolean, optional): in the 'update parameters' step, - adds eps to the bias-corrected second moment estimate before - evaluating square root instead of adding it to the square root of - second moment estimate as in the original paper. (default: False) - .. _Adam: A Method for Stochastic Optimization: - https://arxiv.org/abs/1412.6980 - .. _On the Convergence of Adam and Beyond: - https://openreview.net/forum?id=ryQu7f-RZ - """ - - def __init__( - self, - params, - lr=1e-3, - bias_correction=True, - betas=(0.9, 0.999), - eps=1e-8, - eps_inside_sqrt=False, - weight_decay=0.0, - max_grad_norm=0.0, - amsgrad=False, - use_fp16_stats=False, - ): - global fused_adam_cuda - import importlib - - fused_adam_cuda = importlib.import_module("fused_adam_cuda") - - if amsgrad: - raise RuntimeError("FusedAdam does not support the AMSGrad variant.") - defaults = { - "lr": lr, - "bias_correction": bias_correction, - "betas": betas, - "eps": eps, - "weight_decay": weight_decay, - "max_grad_norm": max_grad_norm, - } - super().__init__(params, defaults) - self.eps_mode = 0 if eps_inside_sqrt else 1 - - self.use_fp16_stats = use_fp16_stats - self.FLOAT16_MAX = 65504.0 - - @property - def supports_memory_efficient_fp16(self): - return True - - @property - def supports_flat_params(self): - return True - - @property - def supports_step_with_scale(self): - return True - - def step(self, closure=None, grads=None, scale=1.0, grad_norms=None): - """Performs a single optimization step. - Args: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. - grads (list of tensors, optional): weight gradient to use for the - optimizer update. If gradients have type torch.half, parameters - are expected to be in type torch.float. 
(default: None) - output params (list of tensors, optional): A reduced precision copy - of the updated weights written out in addition to the regular - updated weights. Have to be of same type as gradients. (default: None) - scale (float, optional): factor to divide gradient tensor values - by before applying to weights. (default: 1) - """ - loss = None - if closure is not None: - loss = closure() - - if grads is None: - grads_group = [None] * len(self.param_groups) - # backward compatibility - # assuming a list/generator of parameter means single group - elif isinstance(grads, types.GeneratorType): - grads_group = [grads] - elif type(grads[0]) != list: - grads_group = [grads] - else: - grads_group = grads - - if grad_norms is None: - grad_norms = [None] * len(self.param_groups) - - for group, grads_this_group, grad_norm in zip( - self.param_groups, grads_group, grad_norms - ): - if grads_this_group is None: - grads_this_group = [None] * len(group["params"]) - - # compute combined scale factor for this group - combined_scale = scale - if group.get("max_grad_norm", 0) > 0: - # norm is in fact norm*scale - clip = ((grad_norm / scale) + 1e-6) / group["max_grad_norm"] - if clip > 1: - combined_scale = clip * scale - - bias_correction = 1 if group.get("bias_correction", 1) else 0 - - for p, grad in zip(group["params"], grads_this_group): - # note: p.grad should not ever be set for correct - # operation of mixed precision optimizer that sometimes - # sends None gradients - if p.grad is None and grad is None: - continue - if grad is None: - grad = p.grad.data - if grad.is_sparse: - raise RuntimeError( - "FusedAdam does not support sparse gradients, " - "please consider SparseAdam instead" - ) - - if p.device.type == "cpu": - p_data_fp32 = p.data.cuda(non_blocking=True).float() - out_p = torch.tensor([], dtype=torch.float) - else: - p_data_fp32 = p.data.float() - out_p = p.data - - state = self.state[p] - - # State initialization - dtype = torch.float16 if self.use_fp16_stats else p_data_fp32.dtype - if len(state) == 0: - state["step"] = 0 - # Exponential moving average of gradient values - state["exp_avg"] = torch.zeros_like(p_data_fp32, dtype=dtype) - # Exponential moving average of squared gradient values - state["exp_avg_sq"] = torch.zeros_like(p_data_fp32, dtype=dtype) - if self.use_fp16_stats: - state["exp_avg_scale"] = 1.0 - state["exp_avg_sq_scale"] = 1.0 - else: - device = p_data_fp32.device - state["exp_avg"] = state["exp_avg"].to(device, dtype) - state["exp_avg_sq"] = state["exp_avg_sq"].to(device, dtype) - - exp_avg = state["exp_avg"] - exp_avg_sq = state["exp_avg_sq"] - if self.use_fp16_stats: - assert exp_avg.dtype == torch.float16 - exp_avg = exp_avg.float() * state["exp_avg_scale"] - exp_avg_sq = exp_avg_sq.float() * state["exp_avg_sq_scale"] - beta1, beta2 = group["betas"] - - state["step"] += 1 - - with torch.cuda.device(p_data_fp32.device): - fused_adam_cuda.adam( - p_data_fp32, - out_p, - exp_avg, - exp_avg_sq, - grad, - group["lr"], - beta1, - beta2, - group["eps"], - combined_scale, - state["step"], - self.eps_mode, - bias_correction, - group["weight_decay"], - ) - - if p.device.type == "cpu": - p.data.copy_(p_data_fp32, non_blocking=True) - - if self.use_fp16_stats: - - def inf_norm(t): - return torch.norm(t, float("inf")) - - # from github.com/openai/jukebox/blob/master/jukebox/utils/fp16.py - state["exp_avg_scale"], state["exp_avg_sq_scale"] = ( - 1e-8 + inf_norm(exp_avg) / self.FLOAT16_MAX, - 1e-8 + inf_norm(exp_avg_sq) / self.FLOAT16_MAX, - ) - state["exp_avg"], 
state["exp_avg_sq"] = ( - (exp_avg / state["exp_avg_scale"]).half(), - (exp_avg_sq / state["exp_avg_sq_scale"]).half(), - ) - - return loss - - -try: - from apex.multi_tensor_apply import multi_tensor_applier - from apex.optimizers import FusedAdam - - class FusedAdamV2(FusedAdam): - """ - Compared to the original version in Apex, the fairseq version casts grads - and params to FP32 internally to support ``--memory-efficient-fp16``. - """ - - def __init__(self, *args, use_fp16_stats=False, **kwargs): - if use_fp16_stats: - raise NotImplementedError( - "--fp16-adam-stats is only supported with FusedAdamV1" - ) - super().__init__(*args, **kwargs) - if not hasattr(self, "multi_tensor_adam"): - raise Exception( - "Apex installation is outdated. Please install an updated version of apex." - ) - - @property - def supports_memory_efficient_fp16(self): - return True - - @property - def supports_flat_params(self): - return True - - def step( - self, - closure=None, - grads=None, - output_params=None, - scale=None, - grad_norms=None, - ): - """Performs a single optimization step.""" - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - bias_correction = 1 if group["bias_correction"] else 0 - beta1, beta2 = group["betas"] - - # assume same step across group now to simplify things - # per parameter step can be easily support by making it tensor, or pass list into kernel - if "step" in group: - group["step"] += 1 - else: - group["step"] = 1 - - # create lists for multi-tensor apply - g_16, p_16, orig_p_16, m_16, v_16 = [], [], [], [], [] - g_32, p_32, m_32, v_32 = [], [], [], [] - - for p in group["params"]: - if p.grad is None: - continue - if p.grad.data.is_sparse: - raise RuntimeError( - "FusedAdam does not support sparse gradients, " - "please consider SparseAdam instead" - ) - - state = self.state[p] - # State initialization - if len(state) == 0: - # Exponential moving average of gradient values - state["exp_avg"] = torch.zeros_like(p.data, dtype=torch.float) - # Exponential moving average of squared gradient values - state["exp_avg_sq"] = torch.zeros_like( - p.data, dtype=torch.float - ) - else: - state["exp_avg"] = state["exp_avg"].to( - device=p.data.device, dtype=torch.float - ) - state["exp_avg_sq"] = state["exp_avg_sq"].to( - device=p.data.device, dtype=torch.float - ) - - if p.dtype == torch.float16: - g_16.append(p.grad.data.float()) - p_16.append(p.data.float()) - orig_p_16.append(p.data) - m_16.append(state["exp_avg"]) - v_16.append(state["exp_avg_sq"]) - elif p.dtype == torch.float32: - g_32.append(p.grad.data) - p_32.append(p.data) - m_32.append(state["exp_avg"]) - v_32.append(state["exp_avg_sq"]) - else: - raise RuntimeError("FusedAdam only support fp16 and fp32.") - - with torch.cuda.device(p.device): - if len(g_16) > 0: - multi_tensor_applier( - self.multi_tensor_adam, - self._dummy_overflow_buf, - [g_16, p_16, m_16, v_16], - group["lr"], - beta1, - beta2, - group["eps"], - group["step"], - self.adam_w_mode, - bias_correction, - group["weight_decay"], - ) - for orig_p, p in zip(orig_p_16, p_16): - orig_p.copy_(p.data) - if len(g_32) > 0: - multi_tensor_applier( - self.multi_tensor_adam, - self._dummy_overflow_buf, - [g_32, p_32, m_32, v_32], - group["lr"], - beta1, - beta2, - group["eps"], - group["step"], - self.adam_w_mode, - bias_correction, - group["weight_decay"], - ) - - return loss - -except ImportError: - pass diff --git a/kosmos-g/fairseq/fairseq/optim/fused_lamb.py b/kosmos-g/fairseq/fairseq/optim/fused_lamb.py deleted file mode 
100644 index f4f2bdb0c..000000000 --- a/kosmos-g/fairseq/fairseq/optim/fused_lamb.py +++ /dev/null @@ -1,51 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.optim import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("lamb") -class FairseqLAMB(LegacyFairseqOptimizer): - """LAMB optimizer.""" - - def __init__(self, args, params): - super().__init__(args) - try: - from apex.optimizers import FusedLAMB - - self._optimizer = FusedLAMB(params, **self.optimizer_config) - except ImportError: - raise ImportError("Please install apex to use LAMB optimizer") - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--lamb-betas', default='(0.9, 0.999)', metavar='B', - help='betas for LAMB optimizer') - parser.add_argument('--lamb-eps', type=float, default=1e-8, metavar='D', - help='epsilon for LAMB optimizer') - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - """ - return { - "lr": self.args.lr[0], - "betas": eval(self.args.lamb_betas), - "eps": self.args.lamb_eps, - "weight_decay": self.args.weight_decay, - } - - @property - def supports_flat_params(self): - return False diff --git a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/__init__.py b/kosmos-g/fairseq/fairseq/optim/lr_scheduler/__init__.py deleted file mode 100644 index 5b3dbc023..000000000 --- a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/__init__.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -import importlib -import os - -from fairseq import registry -from fairseq.optim.lr_scheduler.fairseq_lr_scheduler import ( # noqa - FairseqLRScheduler, - LegacyFairseqLRScheduler, -) -from omegaconf import DictConfig - - -( - build_lr_scheduler_, - register_lr_scheduler, - LR_SCHEDULER_REGISTRY, - LR_SCHEDULER_DATACLASS_REGISTRY, -) = registry.setup_registry( - "--lr-scheduler", base_class=FairseqLRScheduler, default="fixed" -) - - -def build_lr_scheduler(cfg: DictConfig, optimizer): - return build_lr_scheduler_(cfg, optimizer) - - -# automatically import any Python files in the optim/lr_scheduler/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - file_name = file[: file.find(".py")] - importlib.import_module("fairseq.optim.lr_scheduler." + file_name) diff --git a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/cosine_lr_scheduler.py b/kosmos-g/fairseq/fairseq/optim/lr_scheduler/cosine_lr_scheduler.py deleted file mode 100644 index 51f58359e..000000000 --- a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/cosine_lr_scheduler.py +++ /dev/null @@ -1,147 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
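The `lr_scheduler/__init__.py` above wires `--lr-scheduler=<name>` to a class through `registry.setup_registry` and auto-imports every module in the package. A pure-Python sketch of that decorator-registry pattern (illustrative names, not the fairseq internals):

```python
REGISTRY = {}

def register_lr_scheduler(name):
    """Decorator that files a scheduler class under a CLI-visible name."""
    def wrapper(cls):
        REGISTRY[name] = cls
        return cls
    return wrapper

@register_lr_scheduler("fixed")
class FixedSchedule:
    def __init__(self, lr):
        self.lr = lr
    def step_update(self, num_updates):
        return self.lr

def build_lr_scheduler(name, *args, **kwargs):
    return REGISTRY[name](*args, **kwargs)

sched = build_lr_scheduler("fixed", lr=5e-4)
assert sched.step_update(0) == 5e-4
```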
- -import math -from collections.abc import Collection -from dataclasses import dataclass, field -from typing import List - -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class CosineLRScheduleConfig(FairseqDataclass): - warmup_updates: int = field( - default=0, - metadata={"help": "warmup the learning rate linearly for the first N updates"}, - ) - warmup_init_lr: float = field( - default=-1, - metadata={ - "help": "initial learning rate during warmup phase; default is cfg.lr" - }, - ) - lr: List[float] = field( - default=II("optimization.lr"), - metadata={"help": "max learning rate, must be more than cfg.min_lr"}, - ) - min_lr: float = field(default=0.0, metadata={"help": "min learning rate"}) - t_mult: float = field( - default=1.0, metadata={"help": "factor to grow the length of each period"} - ) - lr_period_updates: float = field( - default=-1, metadata={"help": "initial number of updates per period"} - ) - lr_shrink: float = field( - default=0.1, metadata={"help": "shrink factor for annealing"} - ) - # This is not required, but is for convenience in inferring lr_period_updates - max_update: int = II("optimization.max_update") - - -@register_lr_scheduler("cosine", dataclass=CosineLRScheduleConfig) -class CosineLRSchedule(FairseqLRScheduler): - """Assign LR based on a cyclical schedule that follows the cosine function. - - See https://arxiv.org/pdf/1608.03983.pdf for details. - - We also support a warmup phase where we linearly increase the learning rate - from some initial learning rate (``--warmup-init-lr``) until the configured - max learning rate (``--lr``). - - During warmup:: - - lrs = torch.linspace(cfg.warmup_init_lr, cfg.lr, cfg.warmup_updates) - lr = lrs[update_num] - - After warmup:: - - lr = cfg.min_lr + 0.5*(cfg.lr - cfg.min_lr)*(1 + cos(pi * t_curr / t_i)) - - where ``t_curr`` is the number of updates since the start of the current - period and ``t_i`` is the length of the current period, which is scaled by - ``t_mult`` after each restart. - """ - - def __init__(self, cfg: CosineLRScheduleConfig, fairseq_optimizer): - super().__init__(cfg, fairseq_optimizer) - if isinstance(cfg.lr, Collection) and len(cfg.lr) > 1: - raise ValueError( - "Cannot use a fixed learning rate schedule with cosine." - f" Consider --lr-scheduler=fixed instead.
({cfg.lr})" - ) - - self.max_lr = cfg.lr[0] if isinstance(cfg.lr, Collection) else cfg.lr - assert ( - self.max_lr > cfg.min_lr - ), f"max_lr (={cfg.lr}) must be more than min_lr (={cfg.min_lr})" - - warmup_end_lr = self.max_lr - if cfg.warmup_init_lr < 0: - cfg.warmup_init_lr = cfg.min_lr - - self.t_mult = cfg.t_mult - self.period = cfg.lr_period_updates - - if self.period <= 0: - assert ( - cfg.max_update > 0 - ), "Either --max_update or --lr-period-updates must be set" - self.period = cfg.max_update - cfg.warmup_updates - - if cfg.warmup_updates > 0: - # linearly warmup for the first cfg.warmup_updates - self.lr_step = (warmup_end_lr - cfg.warmup_init_lr) / cfg.warmup_updates - else: - self.lr_step = 1 - - self.warmup_updates = cfg.warmup_updates - self.lr_shrink = cfg.lr_shrink - - # initial learning rate - self.lr = cfg.warmup_init_lr - self.optimizer.set_lr(self.lr) - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - super().step(epoch, val_loss) - # we don't change the learning rate at epoch boundaries - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - if num_updates < self.cfg.warmup_updates: - self.lr = self.cfg.warmup_init_lr + num_updates * self.lr_step - else: - curr_updates = num_updates - self.cfg.warmup_updates - if self.t_mult != 1: - i = math.floor( - math.log( - 1 - curr_updates / self.period * (1 - self.t_mult), self.t_mult - ) - ) - t_i = self.t_mult ** i * self.period - t_curr = ( - curr_updates - - (1 - self.t_mult ** i) / (1 - self.t_mult) * self.period - ) - else: - i = math.floor(curr_updates / self.period) - t_i = self.period - t_curr = curr_updates - (self.period * i) - - lr_shrink = self.lr_shrink ** i - min_lr = self.cfg.min_lr * lr_shrink - max_lr = self.max_lr * lr_shrink - - self.lr = min_lr + 0.5 * (max_lr - min_lr) * ( - 1 + math.cos(math.pi * t_curr / t_i) - ) - - self.optimizer.set_lr(self.lr) - return self.lr diff --git a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/fairseq_lr_scheduler.py b/kosmos-g/fairseq/fairseq/optim/lr_scheduler/fairseq_lr_scheduler.py deleted file mode 100644 index 0bdf610b6..000000000 --- a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/fairseq_lr_scheduler.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from argparse import Namespace - -from fairseq.dataclass.utils import gen_parser_from_dataclass -from fairseq.optim import FairseqOptimizer - - -class FairseqLRScheduler(object): - def __init__(self, cfg, optimizer): - super().__init__() - # DS: disable check to support wrapping deepspeed optimizer wrapper - # if optimizer is not None and not isinstance(optimizer, FairseqOptimizer): - # raise ValueError("optimizer must be an instance of FairseqOptimizer") - self.cfg = cfg - self.optimizer = optimizer - self.best = None - - @classmethod - def add_args(cls, parser): - """Add arguments to the parser for this LR scheduler.""" - dc = getattr(cls, "__dataclass", None) - if dc is not None: - gen_parser_from_dataclass(parser, dc()) - - def state_dict(self): - """Return the LR scheduler state dict.""" - return {"best": self.best} - - def load_state_dict(self, state_dict): - """Load an LR scheduler state dict.""" - self.best = state_dict["best"] - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - pass - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - if val_loss is not None: - if self.best is None: - self.best = val_loss - else: - self.best = min(self.best, val_loss) - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - return self.optimizer.get_lr() - - -class LegacyFairseqLRScheduler(FairseqLRScheduler): - def __init__(self, args: Namespace, optimizer): - if not isinstance(optimizer, FairseqOptimizer): - raise ValueError("optimizer must be an instance of FairseqOptimizer") - self.args = args - self.optimizer = optimizer - self.best = None diff --git a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/fixed_schedule.py b/kosmos-g/fairseq/fairseq/optim/lr_scheduler/fixed_schedule.py deleted file mode 100644 index d0e7e14b7..000000000 --- a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/fixed_schedule.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
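`FairseqLRScheduler` above fixes the scheduler contract: `step(epoch, val_loss)` runs at epoch boundaries and tracks the best validation loss, while `step_update(num_updates)` runs after every optimizer step and returns the new LR. A minimal conforming scheduler, sketched in plain Python with a hypothetical stub optimizer:

```python
class StubOptimizer:
    """Hypothetical stand-in exposing only what the scheduler needs."""
    def __init__(self, lr):
        self._lr = lr
    def get_lr(self):
        return self._lr
    def set_lr(self, lr):
        self._lr = lr

class HalveOnPlateau:
    """Halve the LR whenever validation loss fails to improve."""
    def __init__(self, optimizer):
        self.optimizer = optimizer
        self.best = None
    def step(self, epoch, val_loss=None):
        if val_loss is not None:
            if self.best is not None and val_loss >= self.best:
                self.optimizer.set_lr(self.optimizer.get_lr() * 0.5)
            self.best = val_loss if self.best is None else min(self.best, val_loss)
        return self.optimizer.get_lr()
    def step_update(self, num_updates):
        return self.optimizer.get_lr()      # no per-update change

opt = StubOptimizer(1e-3)
sched = HalveOnPlateau(opt)
for loss in [1.0, 0.8, 0.9]:
    sched.step(epoch=0, val_loss=loss)
print(opt.get_lr())                         # 0.0005 after the non-improving epoch
```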
- -from dataclasses import dataclass, field -from typing import Optional, List -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class FixedLRScheduleConfig(FairseqDataclass): - force_anneal: Optional[int] = field( - default=None, - metadata={"help": "force annealing at specified epoch"}, - ) - lr_shrink: float = field( - default=0.1, - metadata={"help": "shrink factor for annealing, lr_new = (lr * lr_shrink)"}, - ) - warmup_updates: int = field( - default=0, - metadata={"help": "warmup the learning rate linearly for the first N updates"}, - ) - lr: List[float] = II("optimization.lr") - - -@register_lr_scheduler("fixed", dataclass=FixedLRScheduleConfig) -class FixedLRSchedule(FairseqLRScheduler): - """Decay the LR on a fixed schedule.""" - - def __init__(self, cfg: FixedLRScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - - self.lr = cfg.lr[0] - if cfg.warmup_updates > 0: - self.warmup_factor = 1.0 / cfg.warmup_updates - else: - self.warmup_factor = 1 - - def state_dict(self): - return {"lr": self.lr} - - def load_state_dict(self, state_dict): - if "lr" in state_dict: - self.lr = state_dict["lr"] - - def get_next_lr(self, epoch): - lrs = self.cfg.lr - if self.cfg.force_anneal is None or epoch < self.cfg.force_anneal: - # use fixed LR schedule - next_lr = lrs[min(epoch - 1, len(lrs) - 1)] - else: - # anneal based on lr_shrink - next_lr = lrs[-1] * self.cfg.lr_shrink ** ( - epoch + 1 - self.cfg.force_anneal - ) - return next_lr - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - self.lr = self.get_next_lr(epoch) - self.optimizer.set_lr(self.warmup_factor * self.lr) - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - if self.cfg.warmup_updates > 0 and num_updates < self.cfg.warmup_updates: - self.warmup_factor = (num_updates + 1) / float(self.cfg.warmup_updates) - self.optimizer.set_lr(self.warmup_factor * self.lr) - else: - self.optimizer.set_lr(self.lr) - return self.optimizer.get_lr() diff --git a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py b/kosmos-g/fairseq/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py deleted file mode 100644 index 0f87bb5d7..000000000 --- a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree.
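The epoch-to-LR rule of the fixed schedule's `get_next_lr` above, extracted as a standalone function (illustrative defaults): walk the `--lr` list one entry per epoch, repeat the last entry, and once `force_anneal` is reached shrink geometrically by `lr_shrink`:

```python
def fixed_next_lr(epoch, lrs=(5e-4, 3e-4, 1e-4), force_anneal=None, lr_shrink=0.1):
    if force_anneal is None or epoch < force_anneal:
        return lrs[min(epoch - 1, len(lrs) - 1)]            # one list entry per epoch
    return lrs[-1] * lr_shrink ** (epoch + 1 - force_anneal)

print([fixed_next_lr(e) for e in (1, 2, 3, 4)])             # walk the list, repeat the last
print([fixed_next_lr(e, force_anneal=3) for e in (3, 4)])   # shrink from epoch 3 on
```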
- -from collections.abc import Collection -from dataclasses import dataclass, field -from typing import List - -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class InverseSquareRootLRScheduleConfig(FairseqDataclass): - warmup_updates: int = field( - default=4000, - metadata={"help": "warmup the learning rate linearly for the first N updates"}, - ) - warmup_init_lr: float = field( - default=-1, - metadata={ - "help": "initial learning rate during warmup phase; default is cfg.lr" - }, - ) - lr: List[float] = II("optimization.lr") - - -@register_lr_scheduler("inverse_sqrt", dataclass=InverseSquareRootLRScheduleConfig) -class InverseSquareRootSchedule(FairseqLRScheduler): - """Decay the LR based on the inverse square root of the update number. - - We also support a warmup phase where we linearly increase the learning rate - from some initial learning rate (``--warmup-init-lr``) until the configured - learning rate (``--lr``). Thereafter we decay proportional to the number of - updates, with a decay factor set to align with the configured learning rate. - - During warmup:: - - lrs = torch.linspace(cfg.warmup_init_lr, cfg.lr, cfg.warmup_updates) - lr = lrs[update_num] - - After warmup:: - - decay_factor = cfg.lr * sqrt(cfg.warmup_updates) - lr = decay_factor / sqrt(update_num) - """ - - def __init__(self, cfg: InverseSquareRootLRScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - if isinstance(cfg.lr, Collection) and len(cfg.lr) > 1: - raise ValueError( - "Cannot use a fixed learning rate schedule with inverse_sqrt." - " Consider --lr-scheduler=fixed instead." - ) - warmup_end_lr = cfg.lr[0] if isinstance(cfg.lr, Collection) else cfg.lr - if cfg.warmup_init_lr < 0: - cfg.warmup_init_lr = 0 if cfg.warmup_updates > 0 else warmup_end_lr - - # linearly warmup for the first cfg.warmup_updates - self.lr_step = (warmup_end_lr - cfg.warmup_init_lr) / cfg.warmup_updates - - # then, decay prop. to the inverse square root of the update number - self.decay_factor = warmup_end_lr * cfg.warmup_updates ** 0.5 - - # initial learning rate - self.lr = cfg.warmup_init_lr - self.optimizer.set_lr(self.lr) - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - super().step(epoch, val_loss) - # we don't change the learning rate at epoch boundaries - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - if num_updates < self.cfg.warmup_updates: - self.lr = self.cfg.warmup_init_lr + num_updates * self.lr_step - else: - self.lr = self.decay_factor * num_updates ** -0.5 - self.optimizer.set_lr(self.lr) - return self.lr diff --git a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/manual_lr_scheduler.py b/kosmos-g/fairseq/fairseq/optim/lr_scheduler/manual_lr_scheduler.py deleted file mode 100644 index 57edc256f..000000000 --- a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/manual_lr_scheduler.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . 
import LegacyFairseqLRScheduler, register_lr_scheduler -import logging -import ast - -logger = logging.getLogger(__name__) -logger.setLevel(logging.WARNING) - - -@register_lr_scheduler("manual") -class ManualSchedule(LegacyFairseqLRScheduler): - """Decay the LR on a manual schedule.""" - - def __init__(self, args, optimizer): - super().__init__(args, optimizer) - - self.epoch2lr = self.parse_manuallr_args(args.epoch2lr) - self.update2lr = self.parse_manuallr_args(args.update2lr) - logger.info("@@@ ManualSchedule epoch2lr={}".format(self.epoch2lr)) - logger.info("@@@ ManualSchedule update2lr={}".format(self.update2lr)) - - if 1 in self.epoch2lr: - self.lr = self.epoch2lr[1] - elif 1 in self.update2lr: - self.lr = self.update2lr[1] - else: - self.lr = args.lr[0] - self.optimizer.set_lr(self.lr) # set the initial learning rate - - def parse_manuallr_args(self, lr_args_str): - lr_dict = ast.literal_eval(lr_args_str.replace(" ", "")) - if not isinstance(lr_dict, dict): - raise ValueError("epoch2lr/update2lr must evaluate to a dict") - - lr_args = {} - logger.info("@@@ after parsing input dictionary lr_dict = {}".format(lr_dict)) - for key, val in lr_dict.items(): - if "," in key: - for k in key.split(","): - lr_args[int(k)] = float(val) - elif "-" in key: - s = int(key.split("-")[0]) - e = int(key.split("-")[1]) - for k in range(s, e + 1, 1): - lr_args[k] = float(val) - else: - lr_args[int(key)] = float(val) - - return lr_args - - @staticmethod - def add_args(parser): - """Add arguments to the parser for this LR scheduler.""" - # fmt: off - parser.add_argument( - "--epoch2lr", - type=str, - metavar="DICT", - default="{}", - help="a dictionary used to set lr for each epoch manually", - ) - parser.add_argument( - "--update2lr", - type=str, - metavar="DICT", - default="{}", - help="a dictionary used to set lr for each update manually", - ) - # fmt: on - - def state_dict(self): - return {"lr": self.lr} - - def load_state_dict(self, state_dict): - if "lr" in state_dict: - self.lr = state_dict["lr"] - - def get_next_lr(self, epoch): - manual_keys = [k for k in self.epoch2lr if k <= epoch] - if manual_keys: - manual_lr = self.epoch2lr[max(manual_keys)] - else: - logger.warning( - "@@@ epoch={} does not exist in manual lr input. epoch2lr={}...".format( - epoch, - list(self.epoch2lr.items())[ - : min(10, len(self.epoch2lr.keys()) - 1) - ], - ) - ) - manual_lr = self.optimizer.get_lr() - return manual_lr - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - self.lr = self.get_next_lr(epoch) - self.optimizer.set_lr(self.lr) - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - manual_keys = [k for k in self.update2lr if k <= num_updates] - if manual_keys: - manual_lr = self.update2lr[max(manual_keys)] - else: - logger.warning( - "update={} does not exist in manual lr input. update2lr={}...".format( - num_updates, - list(self.update2lr.items())[ - : min(10, len(self.update2lr.keys()) - 1) - ], - ) - ) - manual_lr = self.optimizer.get_lr() - - self.optimizer.set_lr(manual_lr) - return self.optimizer.get_lr() diff --git a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/pass_through.py b/kosmos-g/fairseq/fairseq/optim/lr_scheduler/pass_through.py deleted file mode 100644 index 2f93db328..000000000 --- a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/pass_through.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates.
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class PassThroughScheduleConfig(FairseqDataclass): - pass - - -@register_lr_scheduler("pass_through", dataclass=PassThroughScheduleConfig) -class PassThroughScheduleSchedule(FairseqLRScheduler): - """Delegate lr scheduling to the optimizer.""" - - def __init__(self, cfg: PassThroughScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - assert ( - hasattr(optimizer, "lr_scheduler") and optimizer.lr_scheduler is not None - ), "Pass-through schedule can only be used with optimizers with their own schedulers" - - def state_dict(self): - return self.optimizer.lr_scheduler.state_dict() - - def load_state_dict(self, state_dict): - self.optimizer.lr_scheduler.load_state_dict(state_dict) - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - return self.optimizer.lr_scheduler.step_begin_epoch(epoch) - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - return self.optimizer.lr_scheduler.step_update(num_updates) diff --git a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/polynomial_decay_schedule.py b/kosmos-g/fairseq/fairseq/optim/lr_scheduler/polynomial_decay_schedule.py deleted file mode 100644 index b8109a7c1..000000000 --- a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/polynomial_decay_schedule.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
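The manual scheduler above accepts compact dictionary strings for `--epoch2lr`/`--update2lr`. A standalone sketch of its key syntax, following `parse_manuallr_args`: `"1,2"` sets every listed epoch, `"3-5"` an inclusive range, and a plain `"6"` a single epoch:

```python
import ast

def parse_manual_lr(s):
    out = {}
    for key, val in ast.literal_eval(s.replace(" ", "")).items():
        if "," in key:                      # "1,2": every listed epoch
            for k in key.split(","):
                out[int(k)] = float(val)
        elif "-" in key:                    # "3-5": inclusive range
            lo, hi = (int(x) for x in key.split("-"))
            for k in range(lo, hi + 1):
                out[k] = float(val)
        else:                               # "6": single epoch
            out[int(key)] = float(val)
    return out

print(parse_manual_lr("{'1,2': 5e-4, '3-5': 1e-4, '6': 1e-5}"))
# {1: 0.0005, 2: 0.0005, 3: 0.0001, 4: 0.0001, 5: 0.0001, 6: 1e-05}
```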
- -from dataclasses import dataclass, field -from typing import Optional, List -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class PolynomialDecayLRScheduleConfig(FairseqDataclass): - warmup_updates: int = field( - default=0, - metadata={"help": "warmup the learning rate linearly for the first N updates"}, - ) - force_anneal: Optional[int] = field( - default=None, - metadata={"help": "force annealing at specified epoch"}, - ) - end_learning_rate: float = field( - default=0.0, - metadata={"help": "learning rate to decay to"}, - ) - power: float = field( - default=1.0, - metadata={"help": "decay exponent"}, - ) - total_num_update: float = field( - default=II("optimization.max_update"), - metadata={"help": "total number of updates over which to decay learning rate"}, - ) - lr: List[float] = II("optimization.lr") - - -@register_lr_scheduler("polynomial_decay", dataclass=PolynomialDecayLRScheduleConfig) -class PolynomialDecayLRSchedule(FairseqLRScheduler): - """Decay the LR on a polynomial decay schedule.""" - - def __init__(self, cfg: PolynomialDecayLRScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - - assert cfg.total_num_update > 0 - - self.lr = cfg.lr[0] - if cfg.warmup_updates > 0: - self.warmup_factor = 1.0 / cfg.warmup_updates - else: - self.warmup_factor = 1 - self.end_learning_rate = cfg.end_learning_rate - self.total_num_update = cfg.total_num_update - self.power = cfg.power - self.optimizer.set_lr(self.warmup_factor * self.lr) - - def get_next_lr(self, epoch): - lrs = self.cfg.lr - if self.cfg.force_anneal is None or epoch < self.cfg.force_anneal: - # use fixed LR schedule - next_lr = lrs[min(epoch, len(lrs) - 1)] - else: - # after force_anneal, keep the LR set by step_update - next_lr = self.optimizer.get_lr() - return next_lr - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - self.lr = self.get_next_lr(epoch) - self.optimizer.set_lr(self.warmup_factor * self.lr) - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - if self.cfg.warmup_updates > 0 and num_updates <= self.cfg.warmup_updates: - self.warmup_factor = num_updates / float(self.cfg.warmup_updates) - lr = self.warmup_factor * self.lr - elif num_updates >= self.total_num_update: - lr = self.end_learning_rate - else: - warmup = self.cfg.warmup_updates - lr_range = self.lr - self.end_learning_rate - pct_remaining = 1 - (num_updates - warmup) / ( - self.total_num_update - warmup - ) - lr = lr_range * pct_remaining ** (self.power) + self.end_learning_rate - self.optimizer.set_lr(lr) - return self.optimizer.get_lr() diff --git a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py b/kosmos-g/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py deleted file mode 100644 index 5ee9c1be4..000000000 --- a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/reduce_lr_on_plateau.py +++ /dev/null @@ -1,143 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree.
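The polynomial decay rule in `step_update` above, extracted as a standalone function (illustrative defaults; `power=1.0` gives linear decay from the peak LR to `end_learning_rate` between warmup and `total_num_update`):

```python
def poly_lr(n, lr=5e-4, end_lr=0.0, warmup=1000, total=10000, power=1.0):
    if warmup > 0 and n <= warmup:                    # linear warmup
        return lr * n / warmup
    if n >= total:                                    # done decaying
        return end_lr
    pct_remaining = 1 - (n - warmup) / (total - warmup)
    return (lr - end_lr) * pct_remaining ** power + end_lr

print(poly_lr(500), poly_lr(1000), poly_lr(5500), poly_lr(10000))
# 0.00025 0.0005 0.00025 0.0
```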
- -from dataclasses import dataclass, field -from typing import List - -import torch.optim.lr_scheduler -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class ReduceLROnPlateauLRScheduleConfig(FairseqDataclass): - lr_shrink: float = field( - default=0.1, metadata={"help": "shrink factor for annealing"} - ) - lr_threshold: float = field( - default=1e-4, - metadata={ - "help": ( - "threshold for measuring the new optimum, to only focus on " - "significant changes" - ) - }, - ) - lr_patience: int = field( - default=0, - metadata={ - "help": ( - "number of epochs with no improvement after which learning rate will " - "be reduced" - ) - }, - ) - warmup_updates: int = field( - default=0, - metadata={"help": "warmup the learning rate linearly for the first N updates"}, - ) - warmup_init_lr: float = field( - default=-1, - metadata={ - "help": "initial learning rate during warmup phase; default is cfg.lr" - }, - ) - lr: List[float] = II("optimization.lr") - maximize_best_checkpoint_metric: bool = II( - "checkpoint.maximize_best_checkpoint_metric" - ) - - -@register_lr_scheduler( - "reduce_lr_on_plateau", dataclass=ReduceLROnPlateauLRScheduleConfig -) -class ReduceLROnPlateauLRSchedule(FairseqLRScheduler): - """ - Decay the LR by a factor every time the validation loss plateaus. - Also comes with optional warmup phase, where we linearly increase - the learning rate from some initial learning rate - (``--warmup-init-lr``) until the configured learning rate - (``--lr``). Thereafter the lr is adjusted according to original - reduce_on_plateau scheme. - - During warmup:: - - lrs = torch.linspace( - cfg.warmup_init_lr, cfg.lr, cfg.warmup_updates - ) - lr = lrs[update_num] - """ - - def __init__(self, cfg: ReduceLROnPlateauLRScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - if len(cfg.lr) > 1: - raise ValueError( - "Cannot use a fixed learning rate schedule with reduce_lr_on_plateau." - " Consider --lr-scheduler=fixed instead." 
- ) - self.lr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau( - self.optimizer.optimizer, - patience=cfg.lr_patience, - factor=cfg.lr_shrink, - mode="max" if cfg.maximize_best_checkpoint_metric else "min", - threshold=cfg.lr_threshold, - ) - warmup_end_lr = cfg.lr[0] - # if no warm up, sets initial lr to be cfg.lr[0] - if cfg.warmup_init_lr < 0: - cfg.warmup_init_lr = 0 if cfg.warmup_updates > 0 else warmup_end_lr - - # linearly warmup for the first cfg.warmup_updates - if cfg.warmup_updates > 0: - self.lr_step = (warmup_end_lr - cfg.warmup_init_lr) / cfg.warmup_updates - - # this flag is either set from arg when no warm up, or set by - # step_update() when warmup finishes - self.warmup_end = True if cfg.warmup_updates <= 0 else False - - # initial learning rate - # this self.lr is used only during init and/or warm up period - self.lr = warmup_end_lr if self.warmup_end else cfg.warmup_init_lr - self.optimizer.set_lr(self.lr) - - def state_dict(self): - """Return the LR scheduler state dict.""" - return { - "best": self.lr_scheduler.best, - "last_epoch": self.lr_scheduler.last_epoch, - } - - def load_state_dict(self, state_dict): - """Load an LR scheduler state dict.""" - self.lr_scheduler.best = state_dict["best"] - if "last_epoch" in state_dict: - self.lr_scheduler.last_epoch = state_dict["last_epoch"] - - def step(self, epoch, val_loss=None): - """ - Update the learning rate at the end of the given epoch if warmup - finishes otherwise no update of lr on epoch boundaries - """ - if val_loss is not None and self.warmup_end is True: - self.lr_scheduler.step(val_loss) - else: - self.lr_scheduler.last_epoch = epoch - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """ - Update the learning rate after each update.""" - # if there is warmup - if self.cfg.warmup_updates > 0: - if num_updates <= self.cfg.warmup_updates: - self.lr = self.cfg.warmup_init_lr + num_updates * self.lr_step - self.optimizer.set_lr(self.lr) - else: - if self.warmup_end is False: - self.warmup_end = True - # else do nothing - return self.optimizer.get_lr() diff --git a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/step_lr_scheduler.py b/kosmos-g/fairseq/fairseq/optim/lr_scheduler/step_lr_scheduler.py deleted file mode 100644 index db99d4eee..000000000 --- a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/step_lr_scheduler.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
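The wrapper above delegates to `torch.optim.lr_scheduler.ReduceLROnPlateau` once warmup ends; the fairseq class adds only the linear warmup and the best-metric bookkeeping. A minimal sketch of the underlying behaviour (illustrative model and values):

```python
import torch

model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
plateau = torch.optim.lr_scheduler.ReduceLROnPlateau(
    opt, mode="min", factor=0.1, patience=0, threshold=1e-4
)
for val_loss in [1.0, 0.5, 0.6]:       # the third epoch fails to improve
    plateau.step(val_loss)
print(opt.param_groups[0]["lr"])       # 1e-4 after the plateau
```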
- -from collections.abc import Collection -from dataclasses import dataclass, field -from typing import List - -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class StepLRScheduleConfig(FairseqDataclass): - warmup_updates: int = field( - default=0, - metadata={"help": "warmup the learning rate linearly for the first N updates"}, - ) - warmup_init_lr: float = field( - default=-1, - metadata={ - "help": "initial learning rate during warmup phase; default is cfg.lr" - }, - ) - lr: List[float] = field( - default=II("optimization.lr"), - metadata={"help": "max learning rate, must be more than cfg.min_lr"}, - ) - min_lr: float = field(default=0.0, metadata={"help": "min learning rate"}) - lr_deacy_period: int = field(default=25000, metadata={"help": "decay period"}) - lr_decay: float = field(default=0.5, metadata={"help": "decay factor"}) - - -@register_lr_scheduler("step", dataclass=StepLRScheduleConfig) -class StepLRSchedule(FairseqLRScheduler): - """Decay learning rate every k updates by a fixed factor""" - - def __init__(self, cfg: StepLRScheduleConfig, fairseq_optimizer): - super().__init__(cfg, fairseq_optimizer) - self.max_lr = cfg.lr[0] if isinstance(cfg.lr, Collection) else cfg.lr - self.min_lr = cfg.min_lr - self.lr_deacy_period = cfg.lr_deacy_period - self.lr_decay = cfg.lr_decay - self.warmup_updates = cfg.warmup_updates - self.warmup_init_lr = ( - cfg.warmup_init_lr if cfg.warmup_init_lr >= 0 else self.min_lr - ) - - assert self.lr_deacy_period > 0 - assert self.lr_decay <= 1 - assert self.min_lr >= 0 - assert self.max_lr > self.min_lr - - if cfg.warmup_updates > 0: - # linearly warmup for the first cfg.warmup_updates - self.warmup_lr_step = ( - self.max_lr - self.warmup_init_lr - ) / self.warmup_updates - else: - self.warmup_lr_step = 1 - - # initial learning rate - self.lr = self.warmup_init_lr - self.optimizer.set_lr(self.lr) - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - super().step(epoch, val_loss) - # we don't change the learning rate at epoch boundaries - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - if num_updates < self.cfg.warmup_updates: - self.lr = self.warmup_init_lr + num_updates * self.warmup_lr_step - else: - curr_updates = num_updates - self.cfg.warmup_updates - lr_mult = self.lr_decay ** (curr_updates // self.lr_deacy_period) - self.lr = max(self.max_lr * lr_mult, self.min_lr) - - self.optimizer.set_lr(self.lr) - return self.lr diff --git a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/tri_stage_lr_scheduler.py b/kosmos-g/fairseq/fairseq/optim/lr_scheduler/tri_stage_lr_scheduler.py deleted file mode 100644 index 4d5547c39..000000000 --- a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/tri_stage_lr_scheduler.py +++ /dev/null @@ -1,175 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
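The step schedule in `step_update` above as a standalone function (illustrative defaults): linear warmup, then the LR drops by `lr_decay` once per `lr_deacy_period` updates (the field name is spelled that way in the config), floored at `min_lr`:

```python
def step_lr(n, max_lr=1e-3, min_lr=1e-5, warmup=100, init_lr=1e-5,
            decay_period=1000, lr_decay=0.5):
    if n < warmup:                                    # linear warmup
        return init_lr + n * (max_lr - init_lr) / warmup
    k = (n - warmup) // decay_period                  # completed decay periods
    return max(max_lr * lr_decay ** k, min_lr)

print(step_lr(50), step_lr(100), step_lr(1100), step_lr(2100))
# 0.000505 0.001 0.0005 0.00025
```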
-
-import math
-from dataclasses import dataclass, field
-from typing import Optional, List, Tuple
-from omegaconf import II
-
-from fairseq.dataclass import FairseqDataclass
-from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler
-
-
-@dataclass
-class TriStageLRScheduleConfig(FairseqDataclass):
-    warmup_steps: int = field(
-        default=0,
-        metadata={"help": "warmup the learning rate linearly for the first N updates"},
-    )
-    hold_steps: int = field(
-        default=0,
-        metadata={"help": "steps in hold stage"},
-    )
-    decay_steps: int = field(
-        default=0,
-        metadata={"help": "steps in decay stages"},
-    )
-    phase_ratio: Optional[Tuple[float, float, float]] = field(
-        default=None,
-        metadata={
-            "help": (
-                "if set, automatically sets warmup/hold/decay steps to the ratio "
-                "specified here from max_updates. the ratios must add up to 1.0"
-            )
-        },
-    )
-    init_lr_scale: float = field(
-        default=0.01,
-        metadata={"help": "initial learning rate scale during warmup phase"},
-    )
-    final_lr_scale: float = field(
-        default=0.01,
-        metadata={"help": "final learning rate scale"},
-    )
-    max_update: float = II("optimization.max_update")
-    lr: List[float] = II("optimization.lr")
-
-
-@register_lr_scheduler("tri_stage", dataclass=TriStageLRScheduleConfig)
-class TriStageLRSchedule(FairseqLRScheduler):
-    """Tri-stage learning rate scheduler.
-
-    Implements the learning rate scheduler in https://arxiv.org/pdf/1904.08779.pdf
-
-    Similar to the inverse_square_root scheduler, but tri_stage employs
-    three stages of LR scheduling:
-
-        - warmup stage, starting from `lr` * `init_lr_scale`, linearly
-          increased to `lr` in `warmup_steps` iterations
-
-        - hold stage, after `warmup_steps`, keep the LR at `lr` for `hold_steps`
-          iterations
-
-        - decay stage, after the hold stage, decay the LR exponentially to
-          `lr` * `final_lr_scale` in `decay_steps`;
-          after that the LR is kept at `final_lr_scale` * `lr`
-
-    During warmup::
-
-      init_lr = cfg.init_lr_scale * cfg.lr
-      lrs = torch.linspace(init_lr, cfg.lr, cfg.warmup_steps)
-      lr = lrs[update_num]
-
-    During hold::
-
-      lr = cfg.lr
-
-    During decay::
-
-      decay_factor = - math.log(cfg.final_lr_scale) / cfg.decay_steps
-      lr = cfg.lr * exp(- (update_num - warmup_steps - hold_steps) * decay_factor)
-
-    After that::
-
-      lr = cfg.lr * cfg.final_lr_scale
-    """
-
-    def __init__(self, cfg: TriStageLRScheduleConfig, optimizer):
-        super().__init__(cfg, optimizer)
-        if len(cfg.lr) > 1:
-            raise ValueError(
-                "Cannot use a fixed learning rate schedule with tri-stage lr."
-                " Consider --lr-scheduler=fixed instead."
- ) - - # calculate LR at each point - self.peak_lr = cfg.lr[0] - self.init_lr = cfg.init_lr_scale * cfg.lr[0] - self.final_lr = cfg.final_lr_scale * cfg.lr[0] - - if cfg.phase_ratio is not None: - assert cfg.max_update > 0 - assert sum(cfg.phase_ratio) == 1, "phase ratios must add up to 1" - self.warmup_steps = int(cfg.max_update * cfg.phase_ratio[0]) - self.hold_steps = int(cfg.max_update * cfg.phase_ratio[1]) - self.decay_steps = int(cfg.max_update * cfg.phase_ratio[2]) - else: - self.warmup_steps = cfg.warmup_steps - self.hold_steps = cfg.hold_steps - self.decay_steps = cfg.decay_steps - - assert ( - self.warmup_steps + self.hold_steps + self.decay_steps > 0 - ), "please specify steps or phase_ratio" - - self.warmup_rate = ( - (self.peak_lr - self.init_lr) / self.warmup_steps - if self.warmup_steps != 0 - else 0 - ) - self.decay_factor = -math.log(cfg.final_lr_scale) / self.decay_steps - - # initial learning rate - self.lr = self.init_lr - self.optimizer.set_lr(self.lr) - - def _decide_stage(self, update_step): - """ - return stage, and the corresponding steps within the current stage - """ - if update_step < self.warmup_steps: - # warmup state - return 0, update_step - - offset = self.warmup_steps - - if update_step < offset + self.hold_steps: - # hold stage - return 1, update_step - offset - - offset += self.hold_steps - - if update_step <= offset + self.decay_steps: - # decay stage - return 2, update_step - offset - - offset += self.decay_steps - - # still here ? constant lr stage - return 3, update_step - offset - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - super().step(epoch, val_loss) - # we don't change the learning rate at epoch boundaries - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - stage, steps_in_stage = self._decide_stage(num_updates) - if stage == 0: - self.lr = self.init_lr + self.warmup_rate * steps_in_stage - elif stage == 1: - self.lr = self.peak_lr - elif stage == 2: - self.lr = self.peak_lr * math.exp(-self.decay_factor * steps_in_stage) - elif stage == 3: - self.lr = self.final_lr - else: - raise ValueError("Undefined stage") - - self.optimizer.set_lr(self.lr) - - return self.lr diff --git a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/triangular_lr_scheduler.py b/kosmos-g/fairseq/fairseq/optim/lr_scheduler/triangular_lr_scheduler.py deleted file mode 100644 index bfe2a0d38..000000000 --- a/kosmos-g/fairseq/fairseq/optim/lr_scheduler/triangular_lr_scheduler.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
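End to end, the tri-stage schedule is likewise a pure function of the update count. A compact sketch with illustrative stage lengths (not fairseq's class, but it mirrors `_decide_stage()` and `step_update()` above):

```python
import math

def tri_stage_lr(update, peak_lr=1e-3, init_lr_scale=0.01, final_lr_scale=0.01,
                 warmup=1000, hold=2000, decay=3000):
    """Warmup -> hold -> exponential decay -> constant floor."""
    init_lr, final_lr = init_lr_scale * peak_lr, final_lr_scale * peak_lr
    if update < warmup:  # stage 0: linear warmup
        return init_lr + (peak_lr - init_lr) * update / warmup
    update -= warmup
    if update < hold:  # stage 1: hold at the peak
        return peak_lr
    update -= hold
    if update <= decay:  # stage 2: exponential decay towards final_lr
        decay_factor = -math.log(final_lr_scale) / decay
        return peak_lr * math.exp(-decay_factor * update)
    return final_lr  # stage 3: constant floor

for u in (0, 500, 1000, 2999, 3000, 6000, 9999):
    print(u, f"{tri_stage_lr(u):.2e}")
```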
- -import math -from dataclasses import dataclass, field -from typing import List - -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class TriangularLRScheduleConfig(FairseqDataclass): - max_lr: float = field( - default="???", metadata={"help": "max learning rate, must be more than cfg.lr"} - ) - lr_period_updates: float = field( - default=5000, - metadata={"help": "initial number of updates per period (cycle length)"}, - ) - lr_shrink: float = field( - default=0.1, metadata={"help": "shrink factor for annealing"} - ) - shrink_min: bool = field( - default=False, metadata={"help": "if set, also shrinks min lr"} - ) - lr: List[float] = II("optimization.lr") - - -@register_lr_scheduler("triangular", dataclass=TriangularLRScheduleConfig) -class TriangularLRSchedule(FairseqLRScheduler): - """Assign LR based on a triangular cyclical schedule. - - See https://arxiv.org/pdf/1506.01186.pdf for details. - """ - - def __init__(self, cfg: TriangularLRScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - if len(cfg.lr) > 1: - raise ValueError( - "Cannot use a fixed learning rate schedule with triangular." - " Consider --lr-scheduler=fixed instead." - ) - - lr = cfg.lr[0] - - assert cfg.max_lr > lr, "max_lr must be more than lr" - self.min_lr = lr - self.max_lr = cfg.max_lr - self.stepsize = cfg.lr_period_updates // 2 - self.lr_shrink = cfg.lr_shrink - self.shrink_min = cfg.shrink_min - - # initial learning rate - self.lr = self.min_lr - self.optimizer.set_lr(self.lr) - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - super().step(epoch, val_loss) - # we don't change the learning rate at epoch boundaries - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - cycle = math.floor(num_updates / (2 * self.stepsize)) - - lr_shrink = self.lr_shrink ** cycle - max_lr = self.max_lr * lr_shrink - if self.shrink_min: - min_lr = self.min_lr * lr_shrink - else: - min_lr = self.min_lr - - x = abs(num_updates / self.stepsize - 2 * (cycle + 1) + 1) - self.lr = min_lr + (max_lr - min_lr) * max(0, (1 - x)) - - self.optimizer.set_lr(self.lr) - return self.lr diff --git a/kosmos-g/fairseq/fairseq/optim/nag.py b/kosmos-g/fairseq/fairseq/optim/nag.py deleted file mode 100644 index c30a6c0fb..000000000 --- a/kosmos-g/fairseq/fairseq/optim/nag.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from collections.abc import Collection -from dataclasses import dataclass, field -from typing import List - -import torch -from fairseq.dataclass import FairseqDataclass -from omegaconf import II, DictConfig -from torch.optim.optimizer import Optimizer, required - -from . 
import FairseqOptimizer, register_optimizer - - -@dataclass -class FairseqNAGConfig(FairseqDataclass): - momentum: float = field(default=0.99, metadata={"help": "momentum factor"}) - weight_decay: float = field(default=0.0, metadata={"help": "weight decay"}) - # TODO common vars in parent class - lr: List[float] = II("optimization.lr") - - -@register_optimizer("nag", dataclass=FairseqNAGConfig) -class FairseqNAG(FairseqOptimizer): - def __init__(self, cfg: DictConfig, params): - super().__init__(cfg) - self._optimizer = NAG(params, **self.optimizer_config) - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - """ - return { - "lr": self.cfg.lr[0] - if isinstance(self.cfg.lr, Collection) - else self.cfg.lr, - "momentum": self.cfg.momentum, - "weight_decay": self.cfg.weight_decay, - } - - -class NAG(Optimizer): - def __init__(self, params, lr=required, momentum=0, weight_decay=0): - defaults = dict(lr=lr, lr_old=lr, momentum=momentum, weight_decay=weight_decay) - super(NAG, self).__init__(params, defaults) - - @property - def supports_memory_efficient_fp16(self): - return True - - @property - def supports_flat_params(self): - return True - - def step(self, closure=None): - """Performs a single optimization step. - - Args: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. - """ - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - weight_decay = group["weight_decay"] - momentum = group["momentum"] - lr = group["lr"] - lr_old = group.get("lr_old", lr) - lr_correct = lr / lr_old if lr_old > 0 else lr - - for p in group["params"]: - if p.grad is None: - continue - - p_data_fp32 = p.data - if p_data_fp32.dtype in {torch.float16, torch.bfloat16}: - p_data_fp32 = p_data_fp32.float() - - d_p = p.grad.data.float() - param_state = self.state[p] - if "momentum_buffer" not in param_state: - param_state["momentum_buffer"] = torch.zeros_like(d_p) - else: - param_state["momentum_buffer"] = param_state["momentum_buffer"].to( - d_p - ) - - buf = param_state["momentum_buffer"] - - if weight_decay != 0: - p_data_fp32.mul_(1 - lr * weight_decay) - p_data_fp32.add_(buf, alpha=momentum * momentum * lr_correct) - p_data_fp32.add_(d_p, alpha=-(1 + momentum) * lr) - - buf.mul_(momentum * lr_correct).add_(d_p, alpha=-lr) - - if p.data.dtype in {torch.float16, torch.bfloat16}: - p.data.copy_(p_data_fp32) - - group["lr_old"] = lr - - return loss diff --git a/kosmos-g/fairseq/fairseq/optim/sgd.py b/kosmos-g/fairseq/fairseq/optim/sgd.py deleted file mode 100644 index 8e34fb99a..000000000 --- a/kosmos-g/fairseq/fairseq/optim/sgd.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.optim - -from . 
import LegacyFairseqOptimizer, register_optimizer - - -@register_optimizer("sgd") -class SGD(LegacyFairseqOptimizer): - def __init__(self, args, params): - super().__init__(args) - self._optimizer = torch.optim.SGD(params, **self.optimizer_config) - - @staticmethod - def add_args(parser): - """Add optimizer-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--momentum', default=0.0, type=float, metavar='M', - help='momentum factor') - parser.add_argument('--weight-decay', '--wd', default=0.0, type=float, metavar='WD', - help='weight decay') - # fmt: on - - @property - def optimizer_config(self): - """ - Return a kwarg dictionary that will be used to override optimizer - args stored in checkpoints. This allows us to load a checkpoint and - resume training using a different set of optimizer args, e.g., with a - different learning rate. - """ - return { - "lr": self.args.lr[0], - "momentum": self.args.momentum, - "weight_decay": self.args.weight_decay, - } - - @property - def supports_flat_params(self): - return True diff --git a/kosmos-g/fairseq/fairseq/optim/shard.py b/kosmos-g/fairseq/fairseq/optim/shard.py deleted file mode 100644 index 9d7f2eb9e..000000000 --- a/kosmos-g/fairseq/fairseq/optim/shard.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from typing import Any, Dict - -from fairseq.distributed import utils - - -try: - from fairscale.optim import OSS - - _has_fairscale = True -except ImportError: - _has_fairscale = False - - -def shard_(optimizer, group): - if not _has_fairscale: - raise ImportError( - "\n\nPlease install the fairscale package:" "\n\n pip install fairscale" - ) - - class FairseqOSS(OSS): - @property - def disable_mem_eff_fp16_loading_hack(self): - return True - - def __getattr__(self, name): - if name.startswith("supports") and hasattr(self.optim, name): - return getattr(self.optim, name) - raise AttributeError( - "'FairseqOSS' object has no attribute {0!r}".format(name) - ) - - def broadcast_global_state_dict( - self, state_dict: Dict[str, Any] - ) -> Dict[str, Any]: - """ - Broadcasts the entire state_dict to all other ranks - each rank is responsible to load their own partition of data - """ - return utils.broadcast_object( - state_dict, - src_rank=0, - group=self.group, - ) - - torch_optimizer = optimizer.optimizer - optim_cls = type(torch_optimizer) - - optimizer.optimizer = FairseqOSS( - torch_optimizer.param_groups, - optim_cls, - group=group, - **optimizer.optimizer_config - ) diff --git a/kosmos-g/fairseq/fairseq/options.py b/kosmos-g/fairseq/fairseq/options.py deleted file mode 100644 index 41e15a0a2..000000000 --- a/kosmos-g/fairseq/fairseq/options.py +++ /dev/null @@ -1,425 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
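`shard_()` above rebuilds a wrapped torch optimizer as a fairscale `OSS` instance over the same param groups, so each data-parallel rank keeps only its shard of optimizer state. A single-process sketch of that construction, assuming fairscale is installed (the gloo group exists only to make the snippet runnable outside a real distributed job):

```python
import os
import torch
import torch.distributed as dist
from fairscale.optim import OSS

# A throwaway single-rank gloo group, normally provided by the trainer.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(8, 8)
base = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.99)
# Mirrors shard_(): rebuild as OSS over the same param groups and class.
sharded = OSS(base.param_groups, type(base), lr=0.1, momentum=0.99)

model(torch.randn(2, 8)).sum().backward()
sharded.step()
dist.destroy_process_group()
```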
- -import argparse -from pathlib import Path -from typing import Callable, List, Optional, Union - -import torch -from fairseq import utils -from fairseq.data.indexed_dataset import get_available_dataset_impl -from fairseq.dataclass.configs import ( - CheckpointConfig, - CommonConfig, - CommonEvalConfig, - DatasetConfig, - DistributedTrainingConfig, - EvalLMConfig, - GenerationConfig, - InteractiveConfig, - OptimizationConfig, - EMAConfig, -) -from fairseq.dataclass.utils import gen_parser_from_dataclass - -# this import is for backward compatibility -from fairseq.utils import csv_str_list, eval_bool, eval_str_dict, eval_str_list # noqa - - -def get_preprocessing_parser(default_task="translation"): - parser = get_parser("Preprocessing", default_task) - add_preprocess_args(parser) - return parser - - -def get_training_parser(default_task="translation"): - parser = get_parser("Trainer", default_task) - add_dataset_args(parser, train=True) - add_distributed_training_args(parser) - add_model_args(parser) - add_optimization_args(parser) - add_checkpoint_args(parser) - add_ema_args(parser) - add_deepspeed_args(parser) - return parser - - -def get_generation_parser(interactive=False, default_task="translation"): - parser = get_parser("Generation", default_task) - add_dataset_args(parser, gen=True) - add_distributed_training_args(parser, default_world_size=1) - add_generation_args(parser) - add_checkpoint_args(parser) - if interactive: - add_interactive_args(parser) - return parser - - -def get_speech_generation_parser(default_task="text_to_speech"): - parser = get_parser("Speech Generation", default_task) - add_dataset_args(parser, gen=True) - add_distributed_training_args(parser, default_world_size=1) - add_speech_generation_args(parser) - return parser - - -def get_interactive_generation_parser(default_task="translation"): - return get_generation_parser(interactive=True, default_task=default_task) - - -def get_eval_lm_parser(default_task="language_modeling"): - parser = get_parser("Evaluate Language Model", default_task) - add_dataset_args(parser, gen=True) - add_distributed_training_args(parser, default_world_size=1) - add_eval_lm_args(parser) - return parser - - -def get_validation_parser(default_task=None): - parser = get_parser("Validation", default_task) - add_dataset_args(parser, train=True) - add_distributed_training_args(parser, default_world_size=1) - group = parser.add_argument_group("Evaluation") - gen_parser_from_dataclass(group, CommonEvalConfig()) - return parser - - -def parse_args_and_arch( - parser: argparse.ArgumentParser, - input_args: List[str] = None, - parse_known: bool = False, - suppress_defaults: bool = False, - modify_parser: Optional[Callable[[argparse.ArgumentParser], None]] = None, -): - """ - Args: - parser (ArgumentParser): the parser - input_args (List[str]): strings to parse, defaults to sys.argv - parse_known (bool): only parse known arguments, similar to - `ArgumentParser.parse_known_args` - suppress_defaults (bool): parse while ignoring all default values - modify_parser (Optional[Callable[[ArgumentParser], None]]): - function to modify the parser, e.g., to set default values - """ - if suppress_defaults: - # Parse args without any default values. This requires us to parse - # twice, once to identify all the necessary task/model args, and a second - # time with all defaults set to None. 
- args = parse_args_and_arch( - parser, - input_args=input_args, - parse_known=parse_known, - suppress_defaults=False, - ) - suppressed_parser = argparse.ArgumentParser(add_help=False, parents=[parser]) - suppressed_parser.set_defaults(**{k: None for k, v in vars(args).items()}) - args = suppressed_parser.parse_args(input_args) - return argparse.Namespace( - **{k: v for k, v in vars(args).items() if v is not None} - ) - - from fairseq.models import ARCH_MODEL_REGISTRY, ARCH_CONFIG_REGISTRY, MODEL_REGISTRY - - # Before creating the true parser, we need to import optional user module - # in order to eagerly import custom tasks, optimizers, architectures, etc. - usr_parser = argparse.ArgumentParser(add_help=False, allow_abbrev=False) - usr_parser.add_argument("--user-dir", default=None) - usr_args, _ = usr_parser.parse_known_args(input_args) - utils.import_user_module(usr_args) - - if modify_parser is not None: - modify_parser(parser) - - # The parser doesn't know about model/criterion/optimizer-specific args, so - # we parse twice. First we parse the model/criterion/optimizer, then we - # parse a second time after adding the *-specific arguments. - # If input_args is given, we will parse those args instead of sys.argv. - args, _ = parser.parse_known_args(input_args) - - # Add model-specific args to parser. - if hasattr(args, "arch"): - model_specific_group = parser.add_argument_group( - "Model-specific configuration", - # Only include attributes which are explicitly given as command-line - # arguments or which have default values. - argument_default=argparse.SUPPRESS, - ) - if args.arch in ARCH_MODEL_REGISTRY: - ARCH_MODEL_REGISTRY[args.arch].add_args(model_specific_group) - elif args.arch in MODEL_REGISTRY: - MODEL_REGISTRY[args.arch].add_args(model_specific_group) - else: - raise RuntimeError() - - if hasattr(args, "task"): - from fairseq.tasks import TASK_REGISTRY - - TASK_REGISTRY[args.task].add_args(parser) - if getattr(args, "use_bmuf", False): - # hack to support extra args for block distributed data parallelism - from fairseq.optim.bmuf import FairseqBMUF - - FairseqBMUF.add_args(parser) - - # Add *-specific args to parser. - from fairseq.registry import REGISTRIES - - for registry_name, REGISTRY in REGISTRIES.items(): - choice = getattr(args, registry_name, None) - if choice is not None: - cls = REGISTRY["registry"][choice] - if hasattr(cls, "add_args"): - cls.add_args(parser) - elif hasattr(cls, "__dataclass"): - gen_parser_from_dataclass(parser, cls.__dataclass()) - - # Modify the parser a second time, since defaults may have been reset - if modify_parser is not None: - modify_parser(parser) - - # Parse a second time. - if parse_known: - args, extra = parser.parse_known_args(input_args) - else: - args = parser.parse_args(input_args) - extra = None - # Post-process args. 
- if ( - hasattr(args, "batch_size_valid") and args.batch_size_valid is None - ) or not hasattr(args, "batch_size_valid"): - args.batch_size_valid = args.batch_size - if hasattr(args, "max_tokens_valid") and args.max_tokens_valid is None: - args.max_tokens_valid = args.max_tokens - if getattr(args, "memory_efficient_fp16", False): - args.fp16 = True - if getattr(args, "memory_efficient_bf16", False): - args.bf16 = True - args.tpu = getattr(args, "tpu", False) - args.bf16 = getattr(args, "bf16", False) - if args.bf16: - args.tpu = True - if args.tpu and args.fp16: - raise ValueError("Cannot combine --fp16 and --tpu, use --bf16 on TPUs") - - if getattr(args, "seed", None) is None: - args.seed = 1 # default seed for training - args.no_seed_provided = True - else: - args.no_seed_provided = False - - if getattr(args, "update_epoch_batch_itr", None) is None: - if hasattr(args, "grouped_shuffling"): - args.update_epoch_batch_itr = args.grouped_shuffling - else: - args.grouped_shuffling = False - args.update_epoch_batch_itr = False - - # Apply architecture configuration. - if hasattr(args, "arch") and args.arch in ARCH_CONFIG_REGISTRY: - ARCH_CONFIG_REGISTRY[args.arch](args) - - if parse_known: - return args, extra - else: - return args - - -def get_parser(desc, default_task="translation"): - # Before creating the true parser, we need to import optional user module - # in order to eagerly import custom tasks, optimizers, architectures, etc. - usr_parser = argparse.ArgumentParser(add_help=False, allow_abbrev=False) - usr_parser.add_argument("--user-dir", default=None) - usr_args, _ = usr_parser.parse_known_args() - utils.import_user_module(usr_args) - - parser = argparse.ArgumentParser(allow_abbrev=False) - gen_parser_from_dataclass(parser, CommonConfig()) - - from fairseq.registry import REGISTRIES - - for registry_name, REGISTRY in REGISTRIES.items(): - parser.add_argument( - "--" + registry_name.replace("_", "-"), - default=REGISTRY["default"], - choices=REGISTRY["registry"].keys(), - ) - - # Task definitions can be found under fairseq/tasks/ - from fairseq.tasks import TASK_REGISTRY - - parser.add_argument( - "--task", - metavar="TASK", - default=default_task, - choices=TASK_REGISTRY.keys(), - help="task", - ) - # fmt: on - return parser - - -def add_preprocess_args(parser): - group = parser.add_argument_group("Preprocessing") - # fmt: off - group.add_argument("-s", "--source-lang", default=None, metavar="SRC", - help="source language") - group.add_argument("-t", "--target-lang", default=None, metavar="TARGET", - help="target language") - group.add_argument("--trainpref", metavar="FP", default=None, - help="train file prefix (also used to build dictionaries)") - group.add_argument("--validpref", metavar="FP", default=None, - help="comma separated, valid file prefixes " - "(words missing from train set are replaced with <unk>)") - group.add_argument("--testpref", metavar="FP", default=None, - help="comma separated, test file prefixes " - "(words missing from train set are replaced with <unk>)") - group.add_argument("--align-suffix", metavar="FP", default=None, - help="alignment file suffix") - group.add_argument("--destdir", metavar="DIR", default="data-bin", - help="destination dir") - group.add_argument("--thresholdtgt", metavar="N", default=0, type=int, - help="map words appearing less than threshold times to unknown") - group.add_argument("--thresholdsrc", metavar="N", default=0, type=int, - help="map words appearing less than threshold times to unknown") - group.add_argument("--tgtdict", 
metavar="FP", - help="reuse given target dictionary") - group.add_argument("--srcdict", metavar="FP", - help="reuse given source dictionary") - group.add_argument("--nwordstgt", metavar="N", default=-1, type=int, - help="number of target words to retain") - group.add_argument("--nwordssrc", metavar="N", default=-1, type=int, - help="number of source words to retain") - group.add_argument("--alignfile", metavar="ALIGN", default=None, - help="an alignment file (optional)") - parser.add_argument('--dataset-impl', metavar='FORMAT', default='mmap', - choices=get_available_dataset_impl(), - help='output dataset implementation') - group.add_argument("--joined-dictionary", action="store_true", - help="Generate joined dictionary") - group.add_argument("--only-source", action="store_true", - help="Only process the source language") - group.add_argument("--padding-factor", metavar="N", default=8, type=int, - help="Pad dictionary size to be multiple of N") - group.add_argument("--workers", metavar="N", default=1, type=int, - help="number of parallel workers") - group.add_argument("--dict-only", action='store_true', - help="if true, only builds a dictionary and then exits") - # fmt: on - return parser - - -def add_dataset_args(parser, train=False, gen=False): - group = parser.add_argument_group("dataset_data_loading") - gen_parser_from_dataclass(group, DatasetConfig()) - # fmt: on - return group - - -def add_distributed_training_args(parser, default_world_size=None): - group = parser.add_argument_group("distributed_training") - if default_world_size is None: - default_world_size = max(1, torch.cuda.device_count()) - gen_parser_from_dataclass( - group, DistributedTrainingConfig(distributed_world_size=default_world_size) - ) - return group - - -def add_optimization_args(parser): - group = parser.add_argument_group("optimization") - # fmt: off - gen_parser_from_dataclass(group, OptimizationConfig()) - # fmt: on - return group - - -def add_checkpoint_args(parser): - group = parser.add_argument_group("checkpoint") - # fmt: off - gen_parser_from_dataclass(group, CheckpointConfig()) - # fmt: on - return group - - -def add_common_eval_args(group): - gen_parser_from_dataclass(group, CommonEvalConfig()) - - -def add_eval_lm_args(parser): - group = parser.add_argument_group("LM Evaluation") - add_common_eval_args(group) - gen_parser_from_dataclass(group, EvalLMConfig()) - - -def add_generation_args(parser): - group = parser.add_argument_group("Generation") - add_common_eval_args(group) - gen_parser_from_dataclass(group, GenerationConfig()) - return group - - -def add_speech_generation_args(parser): - group = parser.add_argument_group("Speech Generation") - add_common_eval_args(group) # NOTE: remove_bpe is not needed - # fmt: off - group.add_argument('--eos_prob_threshold', default=0.5, type=float, - help='terminate when eos probability exceeds this') - # fmt: on - return group - - -def add_interactive_args(parser): - group = parser.add_argument_group("Interactive") - gen_parser_from_dataclass(group, InteractiveConfig()) - - -def add_model_args(parser): - group = parser.add_argument_group("Model configuration") - # fmt: off - - # Model definitions can be found under fairseq/models/ - # - # The model architecture can be specified in several ways. 
- # In increasing order of priority: - # 1) model defaults (lowest priority) - # 2) --arch argument - # 3) --encoder/decoder-* arguments (highest priority) - from fairseq.models import ARCH_MODEL_REGISTRY - group.add_argument('--arch', '-a', metavar='ARCH', - choices=ARCH_MODEL_REGISTRY.keys(), - help='model architecture') - # fmt: on - return group - - -def get_args( - data: Union[str, Path], - task: str = "translation", - arch: str = "transformer", - **overrides -): - parser = get_training_parser(task) - args = parse_args_and_arch(parser, [str(data), "--task", task, "--arch", arch]) - - for k, v in overrides.items(): - setattr(args, k, v) - - return args - - -def add_ema_args(parser): - group = parser.add_argument_group("EMA configuration") - gen_parser_from_dataclass(group, EMAConfig()) - -def add_deepspeed_args(parser): - pass - # group = parser.add_argument_group("DeepSpeed") - # group.add_argument('--deepspeed', nargs='?', const=True, default=False, - # help="Enable DeepSpeed with auto-generated config with flag and " \ - # "no argument, or pass an argument to a ds_config json to use.") - # group.add_argument("--zero", default=0, type=int, help="enable a specific ZeRO stage") - # group.add_argument('--exit-interval', type=int, default=None, - # help='Exit the program after the iteration is divisible ' - # 'by this value.') diff --git a/kosmos-g/fairseq/fairseq/pdb.py b/kosmos-g/fairseq/fairseq/pdb.py deleted file mode 100644 index 1ba6ef0d3..000000000 --- a/kosmos-g/fairseq/fairseq/pdb.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import multiprocessing -import os -import pdb -import sys - - -__all__ = ["set_trace"] - - -_stdin = [None] -_stdin_lock = multiprocessing.Lock() -try: - _stdin_fd = sys.stdin.fileno() -except Exception: - _stdin_fd = None - - -class MultiprocessingPdb(pdb.Pdb): - """A Pdb wrapper that works in a multiprocessing environment. - - Usage: `from fairseq import pdb; pdb.set_trace()` - """ - - def __init__(self): - pdb.Pdb.__init__(self, nosigint=True) - - def _cmdloop(self): - stdin_bak = sys.stdin - with _stdin_lock: - try: - if _stdin_fd is not None: - if not _stdin[0]: - _stdin[0] = os.fdopen(_stdin_fd) - sys.stdin = _stdin[0] - self.cmdloop() - finally: - sys.stdin = stdin_bak - - -def set_trace(): - pdb = MultiprocessingPdb() - pdb.set_trace(sys._getframe().f_back) diff --git a/kosmos-g/fairseq/fairseq/quantization_utils.py b/kosmos-g/fairseq/fairseq/quantization_utils.py deleted file mode 100644 index 11fc414c8..000000000 --- a/kosmos-g/fairseq/fairseq/quantization_utils.py +++ /dev/null @@ -1,143 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
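The `get_args()` helper above is handy for building a full training namespace programmatically, e.g. in tests. A hypothetical invocation, assuming an installed fairseq and a preprocessed data directory (the path and overrides are placeholders):

```python
from fairseq import options

args = options.get_args(
    "data-bin/iwslt14.tokenized.de-en",  # placeholder data directory
    task="translation",
    arch="transformer",
    lr=[5e-4],        # extra kwargs are applied via setattr() after parsing
    max_tokens=4096,
)
print(args.arch, args.lr, args.max_tokens)
```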
- -import logging - -from fairseq.modules.quantization import pq, quantization_options, scalar -from omegaconf import DictConfig - - -logger = logging.getLogger(__name__) - - -def quantize_model_scalar(model, model_cfg: DictConfig): - quant_noise_scalar = getattr(model_cfg, "quant_noise_scalar", 0) or 0 - if quant_noise_scalar > 0: - # quantize_model edits the model in place - scalar.quantize_model_(model, p=quant_noise_scalar, bits=8, update_step=1000) - return model - - -class Quantizer(object): - def __init__(self, config_path, max_epoch, max_update): - try: - import yaml - except ImportError: - raise ImportError("Please install yaml with: pip install yaml") - - # parse config - if config_path: - with open(config_path) as config_file: - config = quantization_options.parse_config_yaml( - yaml.safe_load(config_file) - ) - else: - config = quantization_options.parse_config_yaml({}) - - self.n_centroids_config = config["n_centroids"] - self.block_sizes_config = config["block_sizes"] - self.layers_to_quantize = config["layers_to_quantize"] - - # We assume that training will run for a fixed number of epochs - # (or updates) and that we should train for equal durations - # between iterations of PQ. - num_iterations = len(self.layers_to_quantize) - if max_epoch > 0: - assert max_epoch % num_iterations == 0, ( - "for iterative PQ, --max-epoch (={}) must be evenly divisible by " - "len(layers_to_quantize) (={})".format(max_epoch, num_iterations) - ) - self.epoch_schedule = max_epoch // num_iterations - else: - self.epoch_schedule = None - if max_update > 0: - assert max_update % num_iterations == 0, ( - "for iterative PQ, --max-update (={}) must be evenly divisible by " - "len(layers_to_quantize) (={})".format(max_update, num_iterations) - ) - self.update_schedule = max_update // num_iterations - else: - self.update_schedule = None - assert (self.epoch_schedule is not None) ^ ( - self.update_schedule is not None - ), "for iterative PQ, cannot specify both --max-update and --max-epoch" - - # 0 is a special value for quantization step, which will force - # the first call to begin_epoch() to call step() - self.quantization_step = 0 - - def set_trainer(self, trainer): - self.trainer = trainer - self.size_tracker = pq.SizeTracker(self.trainer.get_model()) - - def step(self): - """Move to the next stage of quantization.""" - if self.quantization_step >= len(self.layers_to_quantize): - # Maybe we just finished the last training step or we loaded - # a checkpoint for an iterative PQ model which previously - # finished training. Either way, don't quantize again. 
- return - - logger.info( - "quantizing model (step={}; layers_to_quantize[step]={})".format( - self.quantization_step, self.layers_to_quantize[self.quantization_step] - ) - ) - quantized_layers = pq.quantize_model_( - self.trainer.get_model(), - self.size_tracker, - self.layers_to_quantize, - self.block_sizes_config, - self.n_centroids_config, - step=self.quantization_step, - ) - logger.info("quantized layers: {}".format(quantized_layers)) - logger.info(self.size_tracker) - - self.quantization_step += 1 - - # reintialize the Trainer since model parameters have changed - self.trainer.reinitialize() - - def begin_epoch(self, epoch): - """Called at the beginning of each epoch (epochs start at 1).""" - if ( - ( - self.epoch_schedule is not None - and epoch > 0 - and (epoch - 1) % self.epoch_schedule == 0 - ) - # we always step once in the beginning, even if using - # update-based quantization - or self.quantization_step == 0 - ): - self.step() - - def step_update(self, num_updates): - """Called at the end of each step.""" - if ( - self.update_schedule is not None - and num_updates > 0 - and num_updates % self.update_schedule == 0 - ): - self.step() - - def state_dict(self): - return { - "n_centroids_config": self.n_centroids_config, - "block_sizes_config": self.block_sizes_config, - "layers_to_quantize": self.layers_to_quantize, - "epoch_schedule": self.epoch_schedule, - "update_schedule": self.update_schedule, - "quantization_step": self.quantization_step, - } - - def load_state_dict(self, state_dict): - self.n_centroids_config = state_dict["n_centroids_config"] - self.block_sizes_config = state_dict["block_sizes_config"] - self.layers_to_quantize = state_dict["layers_to_quantize"] - self.epoch_schedule = state_dict["epoch_schedule"] - self.update_schedule = state_dict["update_schedule"] - self.quantization_step = state_dict["quantization_step"] diff --git a/kosmos-g/fairseq/fairseq/registry.py b/kosmos-g/fairseq/fairseq/registry.py deleted file mode 100644 index f3b940604..000000000 --- a/kosmos-g/fairseq/fairseq/registry.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
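The iterative-PQ bookkeeping above is easy to sanity-check in isolation. Assuming, say, three entries in `layers_to_quantize` and `--max-update 30000` (illustrative numbers), `begin_epoch()` quantizes the first group immediately and `step_update()` then advances once per 10000-update boundary:

```python
layers_to_quantize = ["group0", "group1", "group2"]  # illustrative regexes
max_update = 30000
# Enforced by the assert in Quantizer.__init__ above.
assert max_update % len(layers_to_quantize) == 0
update_schedule = max_update // len(layers_to_quantize)

# Updates at which step_update() calls step(); step() itself becomes a
# no-op once every group has been quantized.
fires = [u for u in range(1, max_update + 1) if u % update_schedule == 0]
print(fires)  # [10000, 20000, 30000]
```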
- -from argparse import Namespace - -from typing import Union -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.utils import merge_with_parent -from hydra.core.config_store import ConfigStore -from omegaconf import DictConfig - -REGISTRIES = {} - - -def setup_registry(registry_name: str, base_class=None, default=None, required=False): - assert registry_name.startswith("--") - registry_name = registry_name[2:].replace("-", "_") - - REGISTRY = {} - REGISTRY_CLASS_NAMES = set() - DATACLASS_REGISTRY = {} - - # maintain a registry of all registries - if registry_name in REGISTRIES: - return # registry already exists - REGISTRIES[registry_name] = { - "registry": REGISTRY, - "default": default, - "dataclass_registry": DATACLASS_REGISTRY, - } - - def build_x(cfg: Union[DictConfig, str, Namespace], *extra_args, **extra_kwargs): - if isinstance(cfg, DictConfig): - choice = cfg._name - - if choice and choice in DATACLASS_REGISTRY: - dc = DATACLASS_REGISTRY[choice] - cfg = merge_with_parent(dc(), cfg) - elif isinstance(cfg, str): - choice = cfg - if choice in DATACLASS_REGISTRY: - cfg = DATACLASS_REGISTRY[choice]() - else: - choice = getattr(cfg, registry_name, None) - if choice in DATACLASS_REGISTRY: - cfg = DATACLASS_REGISTRY[choice].from_namespace(cfg) - - if choice is None: - if required: - raise ValueError("{} is required!".format(registry_name)) - return None - - cls = REGISTRY[choice] - if hasattr(cls, "build_" + registry_name): - builder = getattr(cls, "build_" + registry_name) - else: - builder = cls - - return builder(cfg, *extra_args, **extra_kwargs) - - def register_x(name, dataclass=None): - def register_x_cls(cls): - if name in REGISTRY: - raise ValueError( - "Cannot register duplicate {} ({})".format(registry_name, name) - ) - if cls.__name__ in REGISTRY_CLASS_NAMES: - raise ValueError( - "Cannot register {} with duplicate class name ({})".format( - registry_name, cls.__name__ - ) - ) - if base_class is not None and not issubclass(cls, base_class): - raise ValueError( - "{} must extend {}".format(cls.__name__, base_class.__name__) - ) - - if dataclass is not None and not issubclass(dataclass, FairseqDataclass): - raise ValueError( - "Dataclass {} must extend FairseqDataclass".format(dataclass) - ) - - cls.__dataclass = dataclass - if cls.__dataclass is not None: - DATACLASS_REGISTRY[name] = cls.__dataclass - - cs = ConfigStore.instance() - node = dataclass() - node._name = name - cs.store(name=name, group=registry_name, node=node, provider="fairseq") - - REGISTRY[name] = cls - - return cls - - return register_x_cls - - return build_x, register_x, REGISTRY, DATACLASS_REGISTRY diff --git a/kosmos-g/fairseq/fairseq/scoring/__init__.py b/kosmos-g/fairseq/fairseq/scoring/__init__.py deleted file mode 100644 index 58f2f563e..000000000 --- a/kosmos-g/fairseq/fairseq/scoring/__init__.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
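`setup_registry()` above is how fairseq wires `--optimizer`, `--lr-scheduler`, and friends. A hedged sketch that creates a brand-new (hypothetical) registry, registers an implementation, and builds it by name; it assumes an installed fairseq and registers without a dataclass, which the register path above permits:

```python
from fairseq import registry

# "--greeter" is a made-up registry name for illustration only.
build_greeter, register_greeter, GREETER_REGISTRY, _ = registry.setup_registry(
    "--greeter", default=None
)

@register_greeter("shout")
class ShoutGreeter:
    def __init__(self, cfg):  # build_x passes the choice/config through
        self.cfg = cfg

    def greet(self, s):
        return s.upper()

print(build_greeter("shout").greet("hello"))  # -> HELLO
```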
- - -import importlib -import os -from abc import ABC, abstractmethod - -from fairseq import registry -from omegaconf import DictConfig - - -class BaseScorer(ABC): - def __init__(self, cfg): - self.cfg = cfg - self.ref = [] - self.pred = [] - - def add_string(self, ref, pred): - self.ref.append(ref) - self.pred.append(pred) - - @abstractmethod - def score(self) -> float: - pass - - @abstractmethod - def result_string(self) -> str: - pass - - -_build_scorer, register_scorer, SCORER_REGISTRY, _ = registry.setup_registry( - "--scoring", default="bleu" -) - - -def build_scorer(choice, tgt_dict): - _choice = choice._name if isinstance(choice, DictConfig) else choice - - if _choice == "bleu": - from fairseq.scoring import bleu - - return bleu.Scorer( - bleu.BleuConfig(pad=tgt_dict.pad(), eos=tgt_dict.eos(), unk=tgt_dict.unk()) - ) - return _build_scorer(choice) - - -# automatically import any Python files in the current directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - module = file[: file.find(".py")] - importlib.import_module("fairseq.scoring." + module) diff --git a/kosmos-g/fairseq/fairseq/scoring/bleu.py b/kosmos-g/fairseq/fairseq/scoring/bleu.py deleted file mode 100644 index e55bd2f39..000000000 --- a/kosmos-g/fairseq/fairseq/scoring/bleu.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import ctypes -import math -import sys -from dataclasses import dataclass, field - -import torch -from fairseq.dataclass import FairseqDataclass -from fairseq.scoring import BaseScorer, register_scorer -from fairseq.scoring.tokenizer import EvaluationTokenizer - - -class BleuStat(ctypes.Structure): - _fields_ = [ - ("reflen", ctypes.c_size_t), - ("predlen", ctypes.c_size_t), - ("match1", ctypes.c_size_t), - ("count1", ctypes.c_size_t), - ("match2", ctypes.c_size_t), - ("count2", ctypes.c_size_t), - ("match3", ctypes.c_size_t), - ("count3", ctypes.c_size_t), - ("match4", ctypes.c_size_t), - ("count4", ctypes.c_size_t), - ] - - -@dataclass -class SacrebleuConfig(FairseqDataclass): - sacrebleu_tokenizer: EvaluationTokenizer.ALL_TOKENIZER_TYPES = field( - default="13a", metadata={"help": "tokenizer"} - ) - sacrebleu_lowercase: bool = field( - default=False, metadata={"help": "apply lowercasing"} - ) - sacrebleu_char_level: bool = field( - default=False, metadata={"help": "evaluate at character level"} - ) - - -@register_scorer("sacrebleu", dataclass=SacrebleuConfig) -class SacrebleuScorer(BaseScorer): - def __init__(self, cfg): - super(SacrebleuScorer, self).__init__(cfg) - import sacrebleu - - self.sacrebleu = sacrebleu - self.tokenizer = EvaluationTokenizer( - tokenizer_type=cfg.sacrebleu_tokenizer, - lowercase=cfg.sacrebleu_lowercase, - character_tokenization=cfg.sacrebleu_char_level, - ) - - def add_string(self, ref, pred): - self.ref.append(self.tokenizer.tokenize(ref)) - self.pred.append(self.tokenizer.tokenize(pred)) - - def _score(self, order=4): - if order != 4: - raise NotImplementedError - # tokenization and lowercasing are performed by self.tokenizer instead. 
- return self.sacrebleu.corpus_bleu(self.pred, [self.ref], tokenize="none") - - def score(self, order=4): - return self._score(order).score - - def result_string(self, order=4): - return self._score(order).format() - - -@dataclass -class BleuConfig(FairseqDataclass): - pad: int = field(default=1, metadata={"help": "padding index"}) - eos: int = field(default=2, metadata={"help": "eos index"}) - unk: int = field(default=3, metadata={"help": "unk index"}) - - -@register_scorer("bleu", dataclass=BleuConfig) -class Scorer(object): - def __init__(self, cfg): - self.stat = BleuStat() - self.pad = cfg.pad - self.eos = cfg.eos - self.unk = cfg.unk - - try: - from fairseq import libbleu - except ImportError as e: - sys.stderr.write( - "ERROR: missing libbleu.so. run `pip install --editable .`\n" - ) - raise e - - self.C = ctypes.cdll.LoadLibrary(libbleu.__file__) - - self.reset() - - def reset(self, one_init=False): - if one_init: - self.C.bleu_one_init(ctypes.byref(self.stat)) - else: - self.C.bleu_zero_init(ctypes.byref(self.stat)) - - def add(self, ref, pred): - if not isinstance(ref, torch.IntTensor): - raise TypeError("ref must be a torch.IntTensor (got {})".format(type(ref))) - if not isinstance(pred, torch.IntTensor): - raise TypeError("pred must be a torch.IntTensor(got {})".format(type(pred))) - - # don't match unknown words - rref = ref.clone() - assert not rref.lt(0).any() - rref[rref.eq(self.unk)] = -999 - - rref = rref.contiguous().view(-1) - pred = pred.contiguous().view(-1) - - self.C.bleu_add( - ctypes.byref(self.stat), - ctypes.c_size_t(rref.size(0)), - ctypes.c_void_p(rref.data_ptr()), - ctypes.c_size_t(pred.size(0)), - ctypes.c_void_p(pred.data_ptr()), - ctypes.c_int(self.pad), - ctypes.c_int(self.eos), - ) - - def score(self, order=4): - psum = sum( - math.log(p) if p > 0 else float("-Inf") for p in self.precision()[:order] - ) - return self.brevity() * math.exp(psum / order) * 100 - - def precision(self): - def ratio(a, b): - return a / b if b > 0 else 0 - - return [ - ratio(self.stat.match1, self.stat.count1), - ratio(self.stat.match2, self.stat.count2), - ratio(self.stat.match3, self.stat.count3), - ratio(self.stat.match4, self.stat.count4), - ] - - def brevity(self): - r = self.stat.reflen / self.stat.predlen - return min(1, math.exp(1 - r)) - - def result_string(self, order=4): - assert order <= 4, "BLEU scores for order > 4 aren't supported" - fmt = "BLEU{} = {:2.2f}, {:2.1f}" - for _ in range(1, order): - fmt += "/{:2.1f}" - fmt += " (BP={:.3f}, ratio={:.3f}, syslen={}, reflen={})" - bleup = [p * 100 for p in self.precision()[:order]] - return fmt.format( - order, - self.score(order=order), - *bleup, - self.brevity(), - self.stat.predlen / self.stat.reflen, - self.stat.predlen, - self.stat.reflen - ) diff --git a/kosmos-g/fairseq/fairseq/scoring/chrf.py b/kosmos-g/fairseq/fairseq/scoring/chrf.py deleted file mode 100644 index 5df5a1c01..000000000 --- a/kosmos-g/fairseq/fairseq/scoring/chrf.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
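The sacrebleu-backed scorer above pre-tokenizes with `EvaluationTokenizer` and then calls `corpus_bleu` with `tokenize="none"` so sacreBLEU does not re-tokenize. A standalone check of that exact call (assumes `pip install sacrebleu`):

```python
import sacrebleu

preds = ["the cat sat on the mat"]
refs = ["the cat sat on the mat"]
# Same call as SacrebleuScorer._score(): tokenization already done upstream.
print(sacrebleu.corpus_bleu(preds, [refs], tokenize="none").score)  # 100.0
```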
- - -from dataclasses import dataclass - -from fairseq.dataclass import FairseqDataclass -from fairseq.scoring import BaseScorer, register_scorer - - -@dataclass -class ChrFScorerConfig(FairseqDataclass): - pass - - -@register_scorer("chrf", dataclass=ChrFScorerConfig) -class ChrFScorer(BaseScorer): - def __init__(self, args): - super(ChrFScorer, self).__init__(args) - import sacrebleu - - self.sacrebleu = sacrebleu - - def add_string(self, ref, pred): - self.ref.append(ref) - self.pred.append(pred) - - def score(self, order=4): - return self.result_string(order).score - - def result_string(self, order=4): - if order != 4: - raise NotImplementedError - return self.sacrebleu.corpus_chrf(self.pred, [self.ref]).format() diff --git a/kosmos-g/fairseq/fairseq/scoring/meteor.py b/kosmos-g/fairseq/fairseq/scoring/meteor.py deleted file mode 100644 index 32719956f..000000000 --- a/kosmos-g/fairseq/fairseq/scoring/meteor.py +++ /dev/null @@ -1,42 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -from dataclasses import dataclass - -from fairseq.dataclass import FairseqDataclass -from fairseq.scoring import BaseScorer, register_scorer - - -@dataclass -class MeteorScorerConfig(FairseqDataclass): - pass - - -@register_scorer("meteor", dataclass=MeteorScorerConfig) -class MeteorScorer(BaseScorer): - def __init__(self, args): - super(MeteorScorer, self).__init__(args) - try: - import nltk - except ImportError: - raise ImportError("Please install nltk to use METEOR scorer") - - self.nltk = nltk - self.scores = [] - - def add_string(self, ref, pred): - self.ref.append(ref) - self.pred.append(pred) - - def score(self, order=4): - self.scores = [ - self.nltk.translate.meteor_score.single_meteor_score(r, p) - for r, p in zip(self.ref, self.pred) - ] - return np.mean(self.scores) - - def result_string(self, order=4): - return f"METEOR: {self.score():.4f}" diff --git a/kosmos-g/fairseq/fairseq/scoring/tokenizer.py b/kosmos-g/fairseq/fairseq/scoring/tokenizer.py deleted file mode 100644 index b0cedd509..000000000 --- a/kosmos-g/fairseq/fairseq/scoring/tokenizer.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import unicodedata - -import sacrebleu as sb - -from fairseq.dataclass import ChoiceEnum - -SACREBLEU_V2_ABOVE = int(sb.__version__[0]) >= 2 - - -class EvaluationTokenizer(object): - """A generic evaluation-time tokenizer, which leverages built-in tokenizers - in sacreBLEU (https://github.com/mjpost/sacrebleu). It additionally provides - lowercasing, punctuation removal and character tokenization, which are - applied after sacreBLEU tokenization. - - Args: - tokenizer_type (str): the type of sacreBLEU tokenizer to apply. - lowercase (bool): lowercase the text. - punctuation_removal (bool): remove punctuation (based on unicode - category) from text. - character_tokenization (bool): tokenize the text to characters. 
- """ - - SPACE = chr(32) - SPACE_ESCAPE = chr(9601) - _ALL_TOKENIZER_TYPES = ( - sb.BLEU.TOKENIZERS - if SACREBLEU_V2_ABOVE - else ["none", "13a", "intl", "zh", "ja-mecab"] - ) - ALL_TOKENIZER_TYPES = ChoiceEnum(_ALL_TOKENIZER_TYPES) - - def __init__( - self, - tokenizer_type: str = "13a", - lowercase: bool = False, - punctuation_removal: bool = False, - character_tokenization: bool = False, - ): - - assert ( - tokenizer_type in self._ALL_TOKENIZER_TYPES - ), f"{tokenizer_type}, {self._ALL_TOKENIZER_TYPES}" - self.lowercase = lowercase - self.punctuation_removal = punctuation_removal - self.character_tokenization = character_tokenization - if SACREBLEU_V2_ABOVE: - self.tokenizer = sb.BLEU(tokenize=str(tokenizer_type)).tokenizer - else: - self.tokenizer = sb.tokenizers.TOKENIZERS[tokenizer_type]() - - @classmethod - def remove_punctuation(cls, sent: str): - """Remove punctuation based on Unicode category.""" - return cls.SPACE.join( - t - for t in sent.split(cls.SPACE) - if not all(unicodedata.category(c)[0] == "P" for c in t) - ) - - def tokenize(self, sent: str): - tokenized = self.tokenizer(sent) - - if self.punctuation_removal: - tokenized = self.remove_punctuation(tokenized) - - if self.character_tokenization: - tokenized = self.SPACE.join( - list(tokenized.replace(self.SPACE, self.SPACE_ESCAPE)) - ) - - if self.lowercase: - tokenized = tokenized.lower() - - return tokenized diff --git a/kosmos-g/fairseq/fairseq/scoring/wer.py b/kosmos-g/fairseq/fairseq/scoring/wer.py deleted file mode 100644 index 633dc47c2..000000000 --- a/kosmos-g/fairseq/fairseq/scoring/wer.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from dataclasses import dataclass, field - -from fairseq.dataclass import FairseqDataclass -from fairseq.scoring import BaseScorer, register_scorer -from fairseq.scoring.tokenizer import EvaluationTokenizer - - -@dataclass -class WerScorerConfig(FairseqDataclass): - wer_tokenizer: EvaluationTokenizer.ALL_TOKENIZER_TYPES = field( - default="none", metadata={"help": "sacreBLEU tokenizer to use for evaluation"} - ) - wer_remove_punct: bool = field( - default=False, metadata={"help": "remove punctuation"} - ) - wer_char_level: bool = field( - default=False, metadata={"help": "evaluate at character level"} - ) - wer_lowercase: bool = field(default=False, metadata={"help": "lowercasing"}) - - -@register_scorer("wer", dataclass=WerScorerConfig) -class WerScorer(BaseScorer): - def __init__(self, cfg): - super().__init__(cfg) - self.reset() - try: - import editdistance as ed - except ImportError: - raise ImportError("Please install editdistance to use WER scorer") - self.ed = ed - self.tokenizer = EvaluationTokenizer( - tokenizer_type=self.cfg.wer_tokenizer, - lowercase=self.cfg.wer_lowercase, - punctuation_removal=self.cfg.wer_remove_punct, - character_tokenization=self.cfg.wer_char_level, - ) - - def reset(self): - self.distance = 0 - self.ref_length = 0 - - def add_string(self, ref, pred): - ref_items = self.tokenizer.tokenize(ref).split() - pred_items = self.tokenizer.tokenize(pred).split() - self.distance += self.ed.eval(ref_items, pred_items) - self.ref_length += len(ref_items) - - def result_string(self): - return f"WER: {self.score():.2f}" - - def score(self): - return 100.0 * self.distance / self.ref_length if self.ref_length > 0 else 0 diff --git a/kosmos-g/fairseq/fairseq/search.py b/kosmos-g/fairseq/fairseq/search.py deleted file mode 100644 index d5ea68b4c..000000000 --- a/kosmos-g/fairseq/fairseq/search.py +++ /dev/null @@ -1,814 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import List, Optional - -import torch -import torch.nn as nn -from fairseq.token_generation_constraints import ( - ConstraintState, - OrderedConstraintState, - UnorderedConstraintState, -) -from torch import Tensor - - -class Search(nn.Module): - def __init__(self, tgt_dict): - super().__init__() - self.pad = tgt_dict.pad() - self.unk = tgt_dict.unk() - self.eos = tgt_dict.eos() - self.vocab_size = len(tgt_dict) - self.src_lengths = torch.tensor(-1) - self.supports_constraints = False - self.stop_on_max_len = False - - def step( - self, step, lprobs, scores, prev_output_tokens=None, original_batch_idxs=None - ): - """Take a single search step. 
-
-        Args:
-            step: the current search step, starting at 0
-            lprobs: (bsz x input_beam_size x vocab_size)
-                the model's log-probabilities over the vocabulary at the current step
-            scores: (bsz x input_beam_size x step)
-                the historical model scores of each hypothesis up to this point
-            prev_output_tokens: (bsz x step)
-                the previously generated output tokens
-            original_batch_idxs: (bsz)
-                the tensor with the batch indices, in the range [0, bsz);
-                this is useful in case a re-ordering has been applied
-                and we need to know the original indices
-
-        Return: A tuple of (scores, indices, beams) where:
-            scores: (bsz x output_beam_size)
-                the scores of the chosen elements; output_beam_size can be
-                larger than input_beam_size, e.g., we may return
-                2*input_beam_size to account for EOS
-            indices: (bsz x output_beam_size)
-                the indices of the chosen elements
-            beams: (bsz x output_beam_size)
-                the hypothesis ids of the chosen elements, in the range [0, input_beam_size)
-        """
-        raise NotImplementedError
-
-    @torch.jit.export
-    def set_src_lengths(self, src_lengths):
-        self.src_lengths = src_lengths
-
-    @torch.jit.export
-    def init_constraints(self, batch_constraints: Optional[Tensor], beam_size: int):
-        """Initialize constraint states for constrained decoding (if supported).
-
-        Args:
-            batch_constraints: (torch.Tensor, optional)
-                the list of constraints, in packed form
-            beam_size: (int)
-                the beam size
-        """
-        pass
-
-    def prune_sentences(self, batch_idxs: Tensor):
-        """
-        Removes constraint states for completed sentences (if supported).
-        This is called from sequence_generator._generate() when sentences are
-        deleted from the batch.
-
-        Args:
-            batch_idxs: Indices of *sentences* whose constraint state should be *kept*.
-        """
-        pass
-
-    def update_constraints(self, active_hypos: Tensor):
-        """
-        Updates the constraint states by selecting the beam items that are retained.
-        This is called at each time step of sequence_generator._generate() when
-        the set of 2 * {beam_size} candidate hypotheses is reduced to the beam size.
-
-        Args:
-            active_hypos: (batch size, beam size)
-                list of integers denoting, for each sentence, which beam candidate items
-                should be kept.
-        """
-        pass
-
-
-class BeamSearch(Search):
-    def __init__(self, tgt_dict):
-        super().__init__(tgt_dict)
-        self.constraint_states = None
-
-    @torch.jit.export
-    def step(
-        self,
-        step: int,
-        lprobs,
-        scores: Optional[Tensor],
-        prev_output_tokens: Optional[Tensor] = None,
-        original_batch_idxs: Optional[Tensor] = None,
-    ):
-        bsz, beam_size, vocab_size = lprobs.size()
-
-        if step == 0:
-            # at the first step all hypotheses are equally likely, so use
-            # only the first beam
-            lprobs = lprobs[:, ::beam_size, :].contiguous()
-        else:
-            # make probs contain cumulative scores for each hypothesis
-            assert scores is not None
-            lprobs = lprobs + scores[:, :, step - 1].unsqueeze(-1)
-
-        top_prediction = torch.topk(
-            lprobs.view(bsz, -1),
-            k=min(
-                # Take the best 2 x beam_size predictions. We'll choose the first
-                # beam_size of these which don't predict eos to continue with.
- beam_size * 2, - lprobs.view(bsz, -1).size(1) - 1, # -1 so we never select pad - ), - ) - scores_buf = top_prediction[0] - indices_buf = top_prediction[1] - # Project back into relative indices and beams - beams_buf = indices_buf // vocab_size - indices_buf = indices_buf.fmod(vocab_size) - - # At this point, beams_buf and indices_buf are single-dim and contain relative indices - return scores_buf, indices_buf, beams_buf - - -class PrefixConstrainedBeamSearch(Search): - def __init__(self, tgt_dict, prefix_allowed_tokens_fn): - super().__init__(tgt_dict) - self.prefix_allowed_tokens_fn = prefix_allowed_tokens_fn - self.stop_on_max_len = True - - @torch.jit.export - def apply_mask(self, x, prev_output_tokens, original_batch_idxs): - beam_size = x.shape[0] // original_batch_idxs.shape[0] - original_batch_idxs = ( - original_batch_idxs.unsqueeze(-1).repeat((1, beam_size)).flatten().tolist() - ) - - mask = torch.full_like(x, -math.inf) - for sent_i, (sent, batch_i) in enumerate( - zip(prev_output_tokens, original_batch_idxs) - ): - mask[sent_i, :, self.prefix_allowed_tokens_fn(batch_i, sent)] = 0 - - return mask - - @torch.jit.export - def step( - self, - step: int, - lprobs: Tensor, - scores: Tensor, - prev_output_tokens: Tensor, - original_batch_idxs: Tensor, - ): - bsz, beam_size, vocab_size = lprobs.size() - - lprobs += self.apply_mask( - lprobs.view(bsz * beam_size, 1, vocab_size), - prev_output_tokens, - original_batch_idxs, - ).view(bsz, beam_size, vocab_size) - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam - lprobs = lprobs[:, ::beam_size, :].contiguous() - else: - # make probs contain cumulative scores for each hypothesis - assert scores is not None - lprobs = lprobs + scores[:, :, step - 1].unsqueeze(-1) - - top_prediction = torch.topk( - lprobs.view(bsz, -1), - k=min( - # Take the best beam_size predictions. We'll choose the first - # beam_size of these which don't predict eos to continue with. - beam_size, - lprobs.view(bsz, -1).size(1) - 1, # -1 so we never select pad - ), - ) - scores_buf = top_prediction[0] - indices_buf = top_prediction[1] - beams_buf = indices_buf // vocab_size - indices_buf = indices_buf.fmod(vocab_size) - return scores_buf, indices_buf, beams_buf - - -class LexicallyConstrainedBeamSearch(Search): - """Implements lexically constrained beam search as described in - - Fast Lexically Constrained Decoding with Dynamic Beam - Allocation for Neural Machine Translation. Post & Vilar, - NAACL 2018. https://www.aclweb.org/anthology/N18-1119/ - - and - - Improved Lexically Constrained Decoding for Translation and - Monolingual Rewriting. Hu et al, NAACL - 2019. https://www.aclweb.org/anthology/N19-1090/ - - This is accomplished by maintaining, for each beam hypothesis, a - ConstraintState object (see constraints.py) that tracks which - constraints have been generated and using this information to - shape the beam for each input sentence. 
- """ - - def __init__(self, tgt_dict, representation): - super().__init__(tgt_dict) - self.representation = representation - self.vocab_size = len(tgt_dict) - self.num_cands = 0 - self.supports_constraints = True - - @torch.jit.export - def init_constraints(self, batch_constraints: Optional[Tensor], beam_size: int): - self.constraint_states = [] - for constraint_tensor in batch_constraints: - if self.representation == "ordered": - constraint_state = OrderedConstraintState.create(constraint_tensor) - elif self.representation == "unordered": - constraint_state = UnorderedConstraintState.create(constraint_tensor) - - self.constraint_states.append([constraint_state for i in range(beam_size)]) - - @torch.jit.export - def prune_sentences(self, batch_idxs: Tensor): - self.constraint_states = [ - self.constraint_states[i] for i in batch_idxs.tolist() - ] - - @torch.jit.export - def update_constraints(self, active_hypos: Tensor): - if self.constraint_states: - batch_size = active_hypos.size(0) - for sentid in range(batch_size): - self.constraint_states[sentid] = [ - self.constraint_states[sentid][i] for i in active_hypos[sentid] - ] - - @torch.jit.export - def step( - self, - step: int, - lprobs: Tensor, - scores: Optional[Tensor], - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - """ - A constrained step builds a large candidates list from the following: - - the top 2 * {beam_size} items over the whole beam - - for each item in the beam - - the top {each_k} (default 1) - - all next constraints - We then compute the constrained state of each beam item, and assign - stripe codes: 0 to the best in each bank, 1 to the 2nd-best, and so - on. We then sort by (stripe, score), and truncate the list at - 2 * beam size. - - Args: - step: the decoder step - lprobs: (batch size, beam size, target vocab) - the target-vocab distributions for each item in the beam. - Retrun: A tuple of (scores, indices, beams, constraints) where: - scores: (batch, output beam size) - the scores of the chosen elements - indices: (batch, output beam size) - the target vocab indices of the chosen elements - beams: (batch, output beam size) - the 0-indexed hypothesis ids of the chosen elements - constraints: (batch, output beam size) - the new constraint states - """ - each_k = 1 - device = lprobs.device - - batch_size, beam_size, vocab_size = lprobs.size() - - self.num_cands = min( - # Just take the k-best. We'll get another k from the 1-best from each - # row, plus more from the constraints - beam_size * 2, - lprobs.view(batch_size, -1).size(1) - 1, # -1 so we never select pad - ) - - # STEP 0: Preliminary. 
Prevent EOS for unfinished hyps across all batch items - constraint_states = self.constraint_states - if constraint_states and step > 0: - not_finished_indices = [] - for sentno, sent_constraints in enumerate(constraint_states): - for beamno, state in enumerate(sent_constraints): - index = sentno * beam_size + beamno - if not state.finished: - not_finished_indices.append(index) - not_finished_indices = torch.tensor(not_finished_indices) - if not_finished_indices.numel() > 0: - lprobs.view(batch_size * beam_size, -1)[ - not_finished_indices, self.eos - ] = -math.inf - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam entry for each batch item - lprobs = lprobs[:, ::beam_size, :].contiguous() - else: - # make probs contain cumulative scores for each hypothesis - assert scores is not None - lprobs = lprobs + scores[:, :, step - 1].unsqueeze(-1) - - top_prediction = torch.topk( - lprobs.view(batch_size, -1), - self.num_cands, - ) - scores_buf, indices_buf = top_prediction - # Project back into relative indices and beams - beams_buf = indices_buf // vocab_size - indices_buf = indices_buf.fmod(vocab_size) - - # Short circuit if there are no constraints in this batch - if not constraint_states: - return scores_buf, indices_buf, beams_buf - - # STEP 1: get top-1 from each hypothesis across all sentences in the batch - if step > 0: - top_scores, top_indices = torch.topk( - lprobs.view(batch_size * beam_size, -1), - k=each_k, - dim=1, - ) - top_scores = top_scores.view(batch_size, -1) - top_indices = top_indices.view(batch_size, -1) - scores_buf = torch.cat((scores_buf, top_scores), dim=1) - indices_buf = torch.cat((indices_buf, top_indices), dim=1) - new_beams = torch.arange(0, beam_size, device=device).repeat(batch_size, 1) - beams_buf = torch.cat((beams_buf, new_beams), dim=1) - - # Now, process sentences in the batch one by one. - new_scores_buf = torch.zeros((batch_size, 2 * beam_size), device=device) - new_indices_buf = torch.zeros((batch_size, 2 * beam_size), device=device).long() - new_beams_buf = torch.zeros((batch_size, 2 * beam_size), device=device).long() - for sentno, states in enumerate(constraint_states): - scores, indices, beams, new_states = self.step_sentence( - step, - sentno, - lprobs[sentno], - constraint_states[sentno], - beams_buf[sentno].clone(), - indices_buf[sentno].clone(), - scores_buf[sentno].clone(), - ) - new_scores_buf[sentno] = scores - new_indices_buf[sentno] = indices - new_beams_buf[sentno] = beams - self.constraint_states[sentno] = new_states - - return new_scores_buf, new_indices_buf, new_beams_buf - - @torch.jit.export - def step_sentence( - self, - step: int, - sentno: int, - lprobs: Tensor, - constraint_states: List[List[ConstraintState]], - beams_buf: Tensor, - indices_buf: Tensor, - scores_buf: Tensor, - ): - """Does per-sentence processing. Adds all constraints for each - hypothesis to the list of candidates; then removes duplicates, - sorts, and dynamically stripes across the banks. All tensor inputs - are collapsed to those pertaining to a single input sentence. 
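-
-        Editor's sketch of the (bank, score) sort-key trick used below in
-        STEP 4 (illustrative only; the values are chosen by hand, and -100
-        plays the role of MAX_SCORE from the code)::
-
-            import torch
-            banks = torch.tensor([0, 2, 1, 2])              # constraints met per candidate
-            scores = torch.tensor([-1.0, -3.0, -2.0, -0.5])
-            # higher banks always come first; scores break ties within a bank
-            key = (2 - banks) * -100 + scores
-            print(key.sort(descending=True).indices)        # tensor([3, 1, 2, 0])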
- """ - device = lprobs.device - - # STEP 2: Add all constraints for each beam item - for beamno, state in enumerate(constraint_states): - next_tokens = torch.tensor(list(state.next_tokens()), device=device).long() - if next_tokens.numel() != 0: - indices_buf = torch.cat((indices_buf, next_tokens)) - next_beams = ( - torch.tensor(beamno, device=device) - .repeat(next_tokens.size(0)) - .long() - ) - beams_buf = torch.cat((beams_buf, next_beams)) - next_values = lprobs[beamno].take(next_tokens.view(-1)) - scores_buf = torch.cat((scores_buf, next_values)) - - # At the 0th time step, there is just one beam item - if step == 0: - break - - # STEP 3: Compute the "bank" for each candidate. This is the - # number of constraints it's generated. We need this so that - # we can do round-robin allocation of the beam across these - # banks. If C is the number of constraints, we select the best - # item in bank C, then the best in bank C-1, etc, followed by - # the 2nd-best in bank C, the 2nd-best in bank C-1, etc, and so - # on, until the maximum beam size. We accomplish this by - # creating a sort key and striping across the banks. - - # Compute the new states for all candidates - cands_size = indices_buf.size(0) - constraint_states = [ - constraint_states[beams_buf[i]].advance(indices_buf[i]) - for i in range(cands_size) - ] - - banks = torch.tensor([state.bank for state in constraint_states], device=device) - - # STEP 4: Sort - num_constraint_tokens = len(state.tokens) - - # Sort by keys (bank, score) (i.e., sort banks together, and scores - # within banks). AFAIK pytorch doesn't support either stable sort or - # multi-key sorting, so we have to hack this. - MAX_SCORE = -100 - sort_key = (num_constraint_tokens - banks) * MAX_SCORE + scores_buf - sort_values, sort_indices = sort_key.sort(dim=0, descending=True) - scores_buf = scores_buf[sort_indices] - indices_buf = indices_buf[sort_indices] - beams_buf = beams_buf[sort_indices] - banks = banks[sort_indices] - - # Sort the constraints to follow suit - constraint_states = [constraint_states[i] for i in sort_indices] - - # STEP 5: Remove duplicates. The topk calls (overall and - # per-row) plus the per-row generation of constraints will - # produce duplicates. Here we remove them. - - def roll(t): - """Rolls a 1d tensor left by 1. - - [0, 1, 2, 3, 4] becomes [4, 0, 1, 2, 3] - """ - return torch.cat((t[-1].unsqueeze(0), t[0:-1]), dim=0) - - # We map candidates (beam, token_id) to a single dimension. - # This is then shifted by 1. We can then easily identify - # duplicates and create a mask that identifies unique - # extensions. - uniques_mask = beams_buf * (self.vocab_size + 1) + indices_buf - uniques_mask = roll(uniques_mask) != uniques_mask - - # Use the mask to pare down the data structures - scores_buf = torch.masked_select(scores_buf, uniques_mask) - indices_buf = torch.masked_select(indices_buf, uniques_mask) - beams_buf = torch.masked_select(beams_buf, uniques_mask) - banks = torch.masked_select(banks, uniques_mask) - i = 1 - for mask in uniques_mask[1:]: - if not mask: - constraint_states.pop(i) - i += mask - - # STEP 6: Assign IDs round-robin across banks, sort, and - # truncate. Now that the candidates are sorted by (bank, - # score) and uniqed, we dynamically allocate the {beam_size} - # beam by striping across the candidates. These stripes will - # be used as sort keys to do round-robin selection. This is - # accomplished in a single pass with offsets. 
Sorting by - # highest-banks (furthest-along hypotheses) first ensures - # progress through the constraints. - # - # e.g., BANKS: 3 3 3 2 2 2 2 1 1 1 0 0 - # OLD STRIPES: 0 1 2 0 1 2 3 0 1 2 0 1 - # NEW STRIPES: 0 1+4 2+8 0+1 1+5 2+9 3+11 0+2 1+6 2+10 0+3 1+7 - # = 0 5 10 1 6 11 13 2 7 12 3 8 - # - # Sorting by this then gives the following banks: - # - # 3 2 1 0 3 2 1 0 3 2 1 2 - # - # We'll take the top {beam_size} of these. - stripe_offsets = [offset * (len(banks) + 1) for offset in range(len(banks) + 1)] - stripes = torch.zeros_like(banks) - cur_bank_count = -1 - cur_bank = banks[0] - for i, bank in enumerate(banks): - if bank != cur_bank: - cur_bank_count = 0 - cur_bank = bank - else: - cur_bank_count += 1 - stripes[i] = num_constraint_tokens - bank + stripe_offsets[cur_bank_count] - - # STEP 7: Sort by the stripes values - sort_values, sort_indices = stripes.sort(dim=0) - scores_buf = scores_buf[sort_indices] - indices_buf = indices_buf[sort_indices] - beams_buf = beams_buf[sort_indices] - constraint_states = [constraint_states[i] for i in sort_indices] - - # STEP 8: Truncate to the candidates size! - scores_buf = scores_buf[: self.num_cands] - indices_buf = indices_buf[: self.num_cands] - beams_buf = beams_buf[: self.num_cands] - - return scores_buf, indices_buf, beams_buf, constraint_states - - -class LengthConstrainedBeamSearch(Search): - def __init__(self, tgt_dict, min_len_a, min_len_b, max_len_a, max_len_b): - super().__init__(tgt_dict) - self.min_len_a = min_len_a - self.min_len_b = min_len_b - self.max_len_a = max_len_a - self.max_len_b = max_len_b - self.beam = BeamSearch(tgt_dict) - self.needs_src_lengths = True - - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - min_lens = self.min_len_a * self.src_lengths + self.min_len_b - max_lens = self.max_len_a * self.src_lengths + self.max_len_b - lprobs[step < min_lens, :, self.eos] = -math.inf - lprobs[step >= max_lens, :, self.eos] = 0 - return self.beam.step(step, lprobs, scores) - - -class DiverseBeamSearch(Search): - """Diverse Beam Search. - - See "Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence - Models" for details. - - We only implement the Hamming Diversity penalty here, which performed best - in the original paper. 
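-
-    Editor's note (illustrative, not from the original docstring): the Hamming
-    penalty lowers each token's log-probability by diversity_strength times the
-    number of earlier groups that already picked that token at this step, e.g.::
-
-        import torch
-        lprobs_g = torch.tensor([[-1.0, -1.1, -3.0]])
-        counts = torch.tensor([[1.0, 0.0, 0.0]])   # token 0 taken by group 0
-        adjusted = lprobs_g - 0.5 * counts         # token 1 now ranks first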
- """ - - def __init__(self, tgt_dict, num_groups, diversity_strength): - super().__init__(tgt_dict) - self.num_groups = num_groups - self.diversity_strength = -diversity_strength - self.beam = BeamSearch(tgt_dict) - - @torch.jit.export - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - bsz, beam_size, vocab_size = lprobs.size() - if beam_size % self.num_groups != 0: - raise ValueError( - "DiverseBeamSearch requires --beam to be divisible by the number of groups" - ) - - # initialize diversity penalty - diversity_buf = torch.zeros(lprobs[:, 0, :].size()).to(lprobs) - - scores_G, indices_G, beams_G = [], [], [] - for g in range(self.num_groups): - lprobs_g = lprobs[:, g :: self.num_groups, :] - scores_g = scores[:, g :: self.num_groups, :] if step > 0 else None - - # apply diversity penalty - if g > 0: - lprobs_g = torch.add( - lprobs_g, - other=diversity_buf.unsqueeze(1), - alpha=self.diversity_strength, - ) - else: - lprobs_g = lprobs_g.contiguous() - - scores_buf, indices_buf, beams_buf = self.beam.step( - step, lprobs_g, scores_g - ) - beams_buf.mul_(self.num_groups).add_(g) - - scores_G.append(scores_buf.clone()) - indices_G.append(indices_buf.clone()) - beams_G.append(beams_buf.clone()) - - # update diversity penalty - diversity_buf.scatter_add_( - 1, indices_buf, torch.ones(indices_buf.size()).to(diversity_buf) - ) - - # interleave results from different groups - scores_buf = torch.stack(scores_G, dim=2).view(bsz, -1) - indices_buf = torch.stack(indices_G, dim=2).view(bsz, -1) - beams_buf = torch.stack(beams_G, dim=2).view(bsz, -1) - return scores_buf, indices_buf, beams_buf - - -class Sampling(Search): - sampling_topk: int - sampling_topp: float - - def __init__(self, tgt_dict, sampling_topk=-1, sampling_topp=-1.0): - super().__init__(tgt_dict) - self.sampling_topk = sampling_topk - self.sampling_topp = sampling_topp - - def _sample_topp(self, lprobs): - """Sample among the smallest set of elements whose cumulative probability mass exceeds p. - - See `"The Curious Case of Neural Text Degeneration" - (Holtzman et al., 2019) <https://arxiv.org/abs/1904.09751>`_. - - Args: - lprobs: (bsz x input_beam_size x vocab_size) - the model's log-probabilities over the vocabulary at the current step - - Return: A tuple of (trimed_probs, truncated_indices) where: - trimed_probs: (bsz x input_beam_size x ?) - the model's probabilities over the elements selected to sample from. The - width of the third dimension is determined by top-P. - truncated_indices: (bsz x input_beam_size x ?) - the indices of the chosen elements. - """ - probs = lprobs.exp_() - - # sort the last dimension (vocab dimension) in descending order - sorted_probs, sorted_indices = probs.sort(descending=True) - - # compute a mask to indicate the words to be included in the top-P set. - cumsum_probs = sorted_probs.cumsum(dim=2) - mask = cumsum_probs.lt(self.sampling_topp) - - # note that mask was computed by 'lt'. One more word needs to be included - # so that the cumulative probability mass can exceed p. - cumsum_mask = mask.cumsum(dim=2) - last_included = cumsum_mask[:, :, -1:] - last_included.clamp_(0, mask.size()[2] - 1) - mask = mask.scatter_(2, last_included, 1) - - # truncate unnecessary dims. 
- max_dim = last_included.max() - truncated_mask = mask[:, :, : max_dim + 1] - truncated_probs = sorted_probs[:, :, : max_dim + 1] - truncated_indices = sorted_indices[:, :, : max_dim + 1] - - # trim the words that are not in top-P by setting their probabilities - # to 0, so that they would not be sampled later. - trim_mask = ~truncated_mask - trimed_probs = truncated_probs.masked_fill_(trim_mask, 0) - return trimed_probs, truncated_indices - - @torch.jit.export - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - bsz, beam_size, vocab_size = lprobs.size() - - if step == 0: - # at the first step all hypotheses are equally likely, so use - # only the first beam - lprobs = lprobs[:, ::beam_size, :].contiguous() - - if self.sampling_topp > 0: - # only sample from the smallest set of words whose cumulative probability mass exceeds p - probs, top_indices = self._sample_topp(lprobs) - elif self.sampling_topk > 0: - # only sample from top-k candidates - lprobs, top_indices = lprobs.topk(self.sampling_topk) - probs = lprobs.exp_() - else: - probs = lprobs.exp_() - - # dummy data to be consistent with true branch for type check - top_indices = torch.empty(0).to(probs) - # sample - if step == 0: - indices_buf = torch.multinomial( - probs.view(bsz, -1), - beam_size, - replacement=True, - ).view(bsz, beam_size) - else: - indices_buf = torch.multinomial( - probs.view(bsz * beam_size, -1), - 1, - replacement=True, - ).view(bsz, beam_size) - - if step == 0: - # expand to beam size - probs = probs.expand(bsz, beam_size, -1) - - # gather scores - scores_buf = torch.gather(probs, dim=2, index=indices_buf.unsqueeze(-1)) - scores_buf = scores_buf.log_().view(bsz, -1) - - # remap indices if using top-k or top-P sampling - if self.sampling_topk > 0 or self.sampling_topp > 0: - indices_buf = torch.gather( - top_indices.expand(bsz, beam_size, -1), - dim=2, - index=indices_buf.unsqueeze(-1), - ).squeeze(2) - - if step == 0: - beams_buf = indices_buf.new_zeros(bsz, beam_size) - else: - beams_buf = torch.arange(0, beam_size).to(indices_buf).repeat(bsz, 1) - # make scores cumulative - scores_buf.add_( - torch.gather(scores[:, :, step - 1], dim=1, index=beams_buf) - ) - - return scores_buf, indices_buf, beams_buf - - -class DiverseSiblingsSearch(Search): - """ - Beam search with diverse siblings. - - See "A Simple, Fast Diverse Decoding Algorithm for Neural Generation" for details. - https://arxiv.org/abs/1611.08562 - - 1/ Calculate hypotheses for each beam - 2/ Intra-sibling ordering - 3/ Rewrite scores - 4/ Choose top K hypotheses - - if diversity_rate == 0 is equivalent to BeamSearch - """ - - def __init__(self, tgt_dict, diversity_rate): - super().__init__(tgt_dict) - self.diversity_rate = diversity_rate - self.beam = BeamSearch(tgt_dict) - - def step( - self, - step: int, - lprobs, - scores, - prev_output_tokens: Optional[Tensor] = None, - original_batch_idxs: Optional[Tensor] = None, - ): - bsz, beam_size, vocab_size = lprobs.size() - k = min( - # Take the best 2 x beam_size predictions. We'll choose the first - # beam_size of these which don't predict eos to continue with. 
- beam_size * 2, - lprobs.view(bsz, -1).size(1) - 1, # -1 so we never select pad - ) - s_list: List[Tensor] - i_list: List[Tensor] - s_list = [torch.empty(0).to(lprobs) for i in range(beam_size)] - i_list = [torch.LongTensor().to(device=lprobs.device) for i in range(beam_size)] - sibling_score = torch.arange(1, k + 1).to(lprobs) * self.diversity_rate - - if step == 0: - return self.beam.step(step, lprobs, scores) - lprobs.add_(scores[:, :, step - 1].unsqueeze(-1)) - - # 1/ Calculate hypotheses for each beam - for i in range(beam_size): - torch.topk(lprobs[:, i, :].view(bsz, -1), k, out=(s_list[i], i_list[i])) - i_list[i].fmod_(vocab_size) - - # 2/ Intra-sibling ordering by default from topk + 3/ Rewrite scores - s_list[i].sub_(sibling_score) - - # 4/ Choose top K hypotheses - indices = torch.stack(i_list, dim=1).view(bsz, -1) - - final_scores = torch.empty(0).to(lprobs) - final_indices = torch.LongTensor().to(device=lprobs.device) - final_beams = torch.LongTensor().to(device=lprobs.device) - (final_scores, final_indices) = torch.topk( - torch.stack(s_list, dim=1).view(bsz, -1), - k, - ) - - final_beams = final_indices // k - - for i in range(bsz): - final_indices[i] = indices[i][final_indices[i]] - - return final_scores, final_indices, final_beams diff --git a/kosmos-g/fairseq/fairseq/sequence_generator.py b/kosmos-g/fairseq/fairseq/sequence_generator.py deleted file mode 100644 index edc55aab1..000000000 --- a/kosmos-g/fairseq/fairseq/sequence_generator.py +++ /dev/null @@ -1,1075 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Dict, List, Optional -import sys - -import torch -import torch.nn as nn -from fairseq import search, utils -from fairseq.data import data_utils -from fairseq.models import FairseqIncrementalDecoder -from torch import Tensor -from fairseq.ngram_repeat_block import NGramRepeatBlock - - -class SequenceGenerator(nn.Module): - def __init__( - self, - models, - tgt_dict, - beam_size=1, - max_len_a=0, - max_len_b=200, - max_len=0, - min_len=1, - normalize_scores=True, - len_penalty=1.0, - unk_penalty=0.0, - temperature=1.0, - match_source_len=False, - no_repeat_ngram_size=0, - search_strategy=None, - eos=None, - symbols_to_strip_from_output=None, - lm_model=None, - lm_weight=1.0, - ): - """Generates translations of a given source sentence. 
- - Args: - models (List[~fairseq.models.FairseqModel]): ensemble of models, - currently support fairseq.models.TransformerModel for scripting - beam_size (int, optional): beam width (default: 1) - max_len_a/b (int, optional): generate sequences of maximum length - ax + b, where x is the source length - max_len (int, optional): the maximum length of the generated output - (not including end-of-sentence) - min_len (int, optional): the minimum length of the generated output - (not including end-of-sentence) - normalize_scores (bool, optional): normalize scores by the length - of the output (default: True) - len_penalty (float, optional): length penalty, where <1.0 favors - shorter, >1.0 favors longer sentences (default: 1.0) - unk_penalty (float, optional): unknown word penalty, where <0 - produces more unks, >0 produces fewer (default: 0.0) - temperature (float, optional): temperature, where values - >1.0 produce more uniform samples and values <1.0 produce - sharper samples (default: 1.0) - match_source_len (bool, optional): outputs should match the source - length (default: False) - """ - super().__init__() - if isinstance(models, EnsembleModel): - self.model = models - else: - self.model = EnsembleModel(models) - self.tgt_dict = tgt_dict - self.pad = tgt_dict.pad() - self.unk = tgt_dict.unk() - self.eos = tgt_dict.eos() if eos is None else eos - self.symbols_to_strip_from_output = ( - symbols_to_strip_from_output.union({self.eos}) - if symbols_to_strip_from_output is not None - else {self.eos} - ) - self.vocab_size = len(tgt_dict) - self.beam_size = beam_size - # the max beam size is the dictionary size - 1, since we never select pad - self.beam_size = min(beam_size, self.vocab_size - 1) - self.max_len_a = max_len_a - self.max_len_b = max_len_b - self.min_len = min_len - self.max_len = max_len or self.model.max_decoder_positions() - - self.normalize_scores = normalize_scores - self.len_penalty = len_penalty - self.unk_penalty = unk_penalty - self.temperature = temperature - self.match_source_len = match_source_len - - if no_repeat_ngram_size > 0: - self.repeat_ngram_blocker = NGramRepeatBlock(no_repeat_ngram_size) - else: - self.repeat_ngram_blocker = None - - assert temperature > 0, "--temperature must be greater than 0" - - self.search = ( - search.BeamSearch(tgt_dict) if search_strategy is None else search_strategy - ) - # We only need to set src_lengths in LengthConstrainedBeamSearch. - # As a module attribute, setting it would break in multithread - # settings when the model is shared. - self.should_set_src_lengths = ( - hasattr(self.search, "needs_src_lengths") and self.search.needs_src_lengths - ) - - self.model.eval() - - self.lm_model = lm_model - self.lm_weight = lm_weight - if self.lm_model is not None: - self.lm_model.eval() - - def cuda(self): - self.model.cuda() - return self - - @torch.no_grad() - def forward( - self, - sample: Dict[str, Dict[str, Tensor]], - prefix_tokens: Optional[Tensor] = None, - bos_token: Optional[int] = None, - ): - """Generate a batch of translations. - - Args: - sample (dict): batch - prefix_tokens (torch.LongTensor, optional): force decoder to begin - with these tokens - bos_token (int, optional): beginning of sentence token - (default: self.eos) - """ - return self._generate(sample, prefix_tokens, bos_token=bos_token) - - # TODO(myleott): unused, deprecate after pytorch-translate migration - def generate_batched_itr(self, data_itr, beam_size=None, cuda=False, timer=None): - """Iterate over a batched dataset and yield individual translations. 
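-
-        Editor's sketch of typical direct usage (illustrative; `sample` is a
-        batch dict with a "net_input" sub-dict, as elsewhere in this file)::
-
-            generator = SequenceGenerator([model], tgt_dict, beam_size=5)
-            hypos = generator.generate([model], sample)
-            best = hypos[0][0]   # top hypothesis for the first sentence
-            tokens, score = best["tokens"], best["score"]
-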
- Args: - cuda (bool, optional): use GPU for generation - timer (StopwatchMeter, optional): time generations - """ - for sample in data_itr: - s = utils.move_to_cuda(sample) if cuda else sample - if "net_input" not in s: - continue - input = s["net_input"] - # model.forward normally channels prev_output_tokens into the decoder - # separately, but SequenceGenerator directly calls model.encoder - encoder_input = { - k: v for k, v in input.items() if k != "prev_output_tokens" - } - if timer is not None: - timer.start() - with torch.no_grad(): - hypos = self.generate(encoder_input) - if timer is not None: - timer.stop(sum(len(h[0]["tokens"]) for h in hypos)) - for i, id in enumerate(s["id"].data): - # remove padding - src = utils.strip_pad(input["src_tokens"].data[i, :], self.pad) - ref = ( - utils.strip_pad(s["target"].data[i, :], self.pad) - if s["target"] is not None - else None - ) - yield id, src, ref, hypos[i] - - @torch.no_grad() - def generate( - self, models, sample: Dict[str, Dict[str, Tensor]], **kwargs - ) -> List[List[Dict[str, Tensor]]]: - """Generate translations. Match the api of other fairseq generators. - - Args: - models (List[~fairseq.models.FairseqModel]): ensemble of models - sample (dict): batch - prefix_tokens (torch.LongTensor, optional): force decoder to begin - with these tokens - constraints (torch.LongTensor, optional): force decoder to include - the list of constraints - bos_token (int, optional): beginning of sentence token - (default: self.eos) - """ - return self._generate(sample, **kwargs) - - def _generate( - self, - sample: Dict[str, Dict[str, Tensor]], - prefix_tokens: Optional[Tensor] = None, - constraints: Optional[Tensor] = None, - bos_token: Optional[int] = None, - ): - incremental_states = torch.jit.annotate( - List[Dict[str, Dict[str, Optional[Tensor]]]], - [ - torch.jit.annotate(Dict[str, Dict[str, Optional[Tensor]]], {}) - for i in range(self.model.models_size) - ], - ) - net_input = sample["net_input"] - prefix_tokens_with_bos = prefix_tokens.clone() - prefix_tokens = prefix_tokens[:, 1:] - - if "src_tokens" in net_input: - src_tokens = net_input["src_tokens"] - # length of the source text being the character length except EndOfSentence and pad - src_lengths = ( - (src_tokens.ne(self.eos) & src_tokens.ne(self.pad)).long().sum(dim=1) - ) - elif "source" in net_input: - src_tokens = net_input["source"] - src_lengths = ( - net_input["padding_mask"].size(-1) - net_input["padding_mask"].sum(-1) - if net_input["padding_mask"] is not None - else torch.tensor(src_tokens.size(-1)).to(src_tokens) - ) - elif "features" in net_input: - src_tokens = net_input["features"] - src_lengths = ( - net_input["padding_mask"].size(-1) - net_input["padding_mask"].sum(-1) - if net_input["padding_mask"] is not None - else torch.tensor(src_tokens.size(-1)).to(src_tokens) - ) - else: - raise Exception( - "expected src_tokens or source in net input. input keys: " - + str(net_input.keys()) - ) - - # bsz: total number of sentences in beam - # Note that src_tokens may have more than 2 dimensions (i.e. 
audio features) - bsz, src_len = src_tokens.size()[:2] - beam_size = self.beam_size - - if constraints is not None and not self.search.supports_constraints: - raise NotImplementedError( - "Target-side constraints were provided, but search method doesn't support them" - ) - - # Initialize constraints, when active - self.search.init_constraints(constraints, beam_size) - - max_len: int = -1 - if self.match_source_len: - max_len = src_lengths.max().item() - else: - max_len = min( - int(self.max_len_a * src_len + self.max_len_b), - self.max_len - 1, - ) - assert ( - self.min_len <= max_len - ), "min_len cannot be larger than max_len, please adjust these!" - # compute the encoder output for each beam - with torch.autograd.profiler.record_function("EnsembleModel: forward_encoder"): - encoder_outs = self.model.forward_encoder(net_input) - - # placeholder of indices for bsz * beam_size to hold tokens and accumulative scores - new_order = torch.arange(bsz).view(-1, 1).repeat(1, beam_size).view(-1) - new_order = new_order.to(src_tokens.device).long() - encoder_outs = self.model.reorder_encoder_out(encoder_outs, new_order) - # ensure encoder_outs is a List. - assert encoder_outs is not None - - # initialize buffers - scores = ( - torch.zeros(bsz * beam_size, max_len + 1).to(src_tokens).float() - ) # +1 for eos; pad is never chosen for scoring - tokens = ( - torch.zeros(bsz * beam_size, max_len + 2) - .to(src_tokens) - .long() - .fill_(self.pad) - ) # +2 for eos and pad - tokens[:, 0] = self.eos if bos_token is None else bos_token - attn: Optional[Tensor] = None - - # A list that indicates candidates that should be ignored. - # For example, suppose we're sampling and have already finalized 2/5 - # samples. Then cands_to_ignore would mark 2 positions as being ignored, - # so that we only finalize the remaining 3 samples. 
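-        # (editor's note) the mask below is built as zeros(...).eq(-1) simply to
-        # obtain an all-False bool tensor of shape (bsz, beam_size) on the same
-        # device as src_tokens.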
-        cands_to_ignore = (
-            torch.zeros(bsz, beam_size).to(src_tokens).eq(-1)
-        )  # forward and backward-compatible False mask
-
-        # list of completed sentences
-        finalized = torch.jit.annotate(
-            List[List[Dict[str, Tensor]]],
-            [torch.jit.annotate(List[Dict[str, Tensor]], []) for i in range(bsz)],
-        )  # contains lists of dictionaries of information about the hypothesis being finalized at each step
-
-        # a boolean array indicating if the sentence at the index is finished or not
-        finished = [False for i in range(bsz)]
-        num_remaining_sent = bsz  # number of sentences remaining
-
-        # number of candidate hypos per step
-        cand_size = 2 * beam_size  # 2 x beam size in case half are EOS
-
-        # offset arrays for converting between different indexing schemes
-        bbsz_offsets = (
-            (torch.arange(0, bsz) * beam_size)
-            .unsqueeze(1)
-            .type_as(tokens)
-            .to(src_tokens.device)
-        )
-        cand_offsets = torch.arange(0, cand_size).type_as(tokens).to(src_tokens.device)
-
-        reorder_state: Optional[Tensor] = None
-        batch_idxs: Optional[Tensor] = None
-
-        original_batch_idxs: Optional[Tensor] = None
-        if "id" in sample and isinstance(sample["id"], Tensor):
-            original_batch_idxs = sample["id"]
-        else:
-            original_batch_idxs = torch.arange(0, bsz).type_as(tokens)
-
-        prefix_lprobs = None
-        multimodal_infer = False
-        for step in range(max_len + 1):  # one extra step for EOS marker
-            # reorder decoder internal states based on the prev choice of beams
-            if reorder_state is not None:
-                if batch_idxs is not None:
-                    # update beam indices to take into account removed sentences
-                    corr = batch_idxs - torch.arange(batch_idxs.numel()).type_as(
-                        batch_idxs
-                    )
-                    reorder_state.view(-1, beam_size).add_(
-                        corr.unsqueeze(-1) * beam_size
-                    )
-                    original_batch_idxs = original_batch_idxs[batch_idxs]
-                self.model.reorder_incremental_state(incremental_states, reorder_state)
-                encoder_outs = self.model.reorder_encoder_out(
-                    encoder_outs, reorder_state
-                )
-            with torch.autograd.profiler.record_function(
-                "EnsembleModel: forward_decoder"
-            ):
-                if "img_src_tokens" in net_input and step == 0:
-                    # input: [B, D] -> [B, Beam Size, D]
-                    multimodal_infer = True
-                    img_features = self.model.models[0].get_image_representation(
-                        sample['net_input']['img_src_tokens'].cuda()
-                    )
-                    first_src_tokens = (
-                        sample['net_input']['src_tokens']
-                        .unsqueeze(1)
-                        .repeat(1, beam_size, 1)
-                        .view(bsz * beam_size, -1)
-                    )
-                    img_feature_dim = img_features.size(-1)
-                    first_img_features = (
-                        img_features.view(bsz, -1, img_feature_dim)
-                        .unsqueeze(1)
-                        .repeat(1, beam_size, 1, 1)
-                        .view(-1, img_feature_dim)
-                    )
-                    img_gpt_input_mask = sample['net_input']['img_gpt_input_mask'].cuda().bool()
-                    first_gpt_input_mask = (
-                        img_gpt_input_mask.unsqueeze(1)
-                        .repeat(1, beam_size, 1)
-                        .view(bsz * beam_size, -1)
-                    )
-
-                    decoder_out = self.model.models[0].gpt_model.decoder.forward(
-                        first_src_tokens,
-                        img_features=first_img_features,
-                        img_gpt_input_mask=first_gpt_input_mask,
-                        incremental_state=incremental_states[0],
-                        first_step=True)
-                    attn: Optional[Tensor] = None
-                    decoder_out_tuple = decoder_out[0].div_(self.temperature)
-                    decoder_out_tuple = (decoder_out_tuple, None)
-
-                    probs = self.model.models[0].gpt_model.get_normalized_probs(
-                        decoder_out_tuple, log_probs=True, sample=None
-                    )
-                    if len(probs.size()) == 2:
-                        probs = probs.unsqueeze(0)
-
-                    prefix_lprobs = probs.clone().reshape(bsz * beam_size, probs.size(1), -1)
-                    lprobs, avg_attn_scores = prefix_lprobs[:, step, :].clone(), None
-                elif "aud_src_tokens" in net_input and step == 0:
-                    multimodal_infer = True
-                    aud_features =
self.model.models[0].get_audio_representation(sample['net_input']['aud_src_tokens'].cuda(), sample['net_input']['aud_masks'].cuda()) - decoder_out = self.model.models[0].gpt_model.decoder.forward( - sample['net_input']['src_tokens'], - aud_features=aud_features, - aud_gpt_input_mask=sample['net_input']['aud_gpt_input_mask'].cuda().bool(), - incremental_state=incremental_states[0], - first_step=True) - attn: Optional[Tensor] = None - decoder_out_tuple = decoder_out[0].div_(self.temperature) - decoder_out_tuple = (decoder_out_tuple, None) - - probs = self.model.models[0].gpt_model.get_normalized_probs( - decoder_out_tuple, log_probs=True, sample=None - ) - if len(probs.size()) == 2: - probs = probs.unsqueeze(0) - - prefix_lprobs = probs.clone().unsqueeze(0).expand(beam_size, -1, -1, -1).reshape(bsz*beam_size, probs.size(1), -1) - lprobs, avg_attn_scores = prefix_lprobs[:,step,:].clone(), None - elif ("img_src_tokens" in net_input or "aud_src_tokens" in net_input) and step < len(sample['net_input']['src_tokens'][0]): - lprobs, avg_attn_scores = prefix_lprobs[:,step,:].clone(), None - multimodal_infer = True - else: - lprobs, avg_attn_scores = self.model.forward_decoder( - tokens[:, : step + 1], - encoder_outs, - incremental_states, - self.temperature, - multimodal=multimodal_infer, - ) - - if self.lm_model is not None: - lm_out = self.lm_model(tokens[:, : step + 1]) - probs = self.lm_model.get_normalized_probs( - lm_out, log_probs=True, sample=None - ) - probs = probs[:, -1, :] * self.lm_weight - lprobs += probs - - lprobs[lprobs != lprobs] = torch.tensor(-math.inf).to(lprobs) - - lprobs[:, self.pad] = -math.inf # never select pad - lprobs[:, self.unk] -= self.unk_penalty # apply unk penalty - - # handle max length constraint - if step >= max_len: - lprobs[:, : self.eos] = -math.inf - lprobs[:, self.eos + 1 :] = -math.inf - - # handle prefix tokens (possibly with different lengths) - if ( - prefix_tokens is not None - and step < prefix_tokens.size(1) - and step < max_len - ): - lprobs, tokens, scores = self._prefix_tokens( - step, lprobs, scores, tokens, prefix_tokens, beam_size - ) - elif step < self.min_len: - # minimum length constraint (does not apply if using prefix_tokens) - lprobs[:, self.eos] = -math.inf - - # Record attention scores, only support avg_attn_scores is a Tensor - if avg_attn_scores is not None: - if attn is None: - attn = torch.empty( - bsz * beam_size, avg_attn_scores.size(1), max_len + 2 - ).to(scores) - attn[:, :, step + 1].copy_(avg_attn_scores) - - scores = scores.type_as(lprobs) - eos_bbsz_idx = torch.empty(0).to( - tokens - ) # indices of hypothesis ending with eos (finished sentences) - eos_scores = torch.empty(0).to( - scores - ) # scores of hypothesis ending with eos (finished sentences) - - if self.should_set_src_lengths: - self.search.set_src_lengths(src_lengths) - - if self.repeat_ngram_blocker is not None: - lprobs = self.repeat_ngram_blocker(tokens, lprobs, bsz, beam_size, step) - - # Shape: (batch, cand_size) - cand_scores, cand_indices, cand_beams = self.search.step( - step, - lprobs.view(bsz, -1, self.vocab_size), - scores.view(bsz, beam_size, -1)[:, :, :step], - tokens[:, : step + 1], - original_batch_idxs, - ) - - # cand_bbsz_idx contains beam indices for the top candidate - # hypotheses, with a range of values: [0, bsz*beam_size), - # and dimensions: [bsz, cand_size] - cand_bbsz_idx = cand_beams.add(bbsz_offsets) - - # finalize hypotheses that end in eos - # Shape of eos_mask: (batch size, beam size) - eos_mask = cand_indices.eq(self.eos) & 
cand_scores.ne(-math.inf)
-            eos_mask[:, :beam_size][cands_to_ignore] = torch.tensor(0).to(eos_mask)
-
-            # only consider eos when it's among the top beam_size indices
-            # Now we know what beam item(s) to finish
-            # Shape: 1d list of absolute-numbered (bbsz) indices
-            eos_bbsz_idx = torch.masked_select(
-                cand_bbsz_idx[:, :beam_size], mask=eos_mask[:, :beam_size]
-            )
-
-            finalized_sents: List[int] = []
-            if eos_bbsz_idx.numel() > 0:
-                eos_scores = torch.masked_select(
-                    cand_scores[:, :beam_size], mask=eos_mask[:, :beam_size]
-                )
-
-                finalized_sents = self.finalize_hypos(
-                    step,
-                    eos_bbsz_idx,
-                    eos_scores,
-                    tokens,
-                    scores,
-                    finalized,
-                    finished,
-                    beam_size,
-                    attn,
-                    src_lengths,
-                    max_len,
-                )
-                num_remaining_sent -= len(finalized_sents)
-
-            assert num_remaining_sent >= 0
-            if num_remaining_sent == 0:
-                break
-            if self.search.stop_on_max_len and step >= max_len:
-                break
-            if step >= max_len:
-                break
-            assert step < max_len, f"{step} < {max_len}"
-
-            # Remove finalized sentences (ones for which {beam_size}
-            # finished hypotheses have been generated) from the batch.
-            if len(finalized_sents) > 0:
-                new_bsz = bsz - len(finalized_sents)
-
-                # construct batch_idxs which holds indices of batches to keep for the next pass
-                batch_mask = torch.ones(
-                    bsz, dtype=torch.bool, device=cand_indices.device
-                )
-                batch_mask[finalized_sents] = False
-                # TODO replace `nonzero(as_tuple=False)` after TorchScript supports it
-                batch_idxs = torch.arange(
-                    bsz, device=cand_indices.device
-                ).masked_select(batch_mask)
-
-                # Choose the subset of the hypothesized constraints that will continue
-                self.search.prune_sentences(batch_idxs)
-
-                eos_mask = eos_mask[batch_idxs]
-                cand_beams = cand_beams[batch_idxs]
-                bbsz_offsets.resize_(new_bsz, 1)
-                cand_bbsz_idx = cand_beams.add(bbsz_offsets)
-                cand_scores = cand_scores[batch_idxs]
-                cand_indices = cand_indices[batch_idxs]
-
-                if prefix_tokens is not None:
-                    prefix_tokens = prefix_tokens[batch_idxs]
-                src_lengths = src_lengths[batch_idxs]
-                cands_to_ignore = cands_to_ignore[batch_idxs]
-
-                scores = scores.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, -1)
-                tokens = tokens.view(bsz, -1)[batch_idxs].view(new_bsz * beam_size, -1)
-                if attn is not None:
-                    attn = attn.view(bsz, -1)[batch_idxs].view(
-                        new_bsz * beam_size, attn.size(1), -1
-                    )
-                bsz = new_bsz
-            else:
-                batch_idxs = None
-
-            # Set active_mask so that values > cand_size indicate eos hypos
-            # and values < cand_size indicate candidate active hypos.
-            # After, the min values per row are the top candidate active hypos
-
-            # Rewrite the operator since element-wise or is not supported in TorchScript.
-            eos_mask[:, :beam_size] = ~((~cands_to_ignore) & (~eos_mask[:, :beam_size]))
-            active_mask = torch.add(
-                eos_mask.type_as(cand_offsets) * cand_size,
-                cand_offsets[: eos_mask.size(1)],
-            )
-
-            # get the top beam_size active hypotheses, which are just
-            # the hypos with the smallest values in active_mask.
-            # {active_hypos} indicates which {beam_size} hypotheses
-            # from the list of {2 * beam_size} candidates were
-            # selected. Shapes: (batch size, beam size)
-            new_cands_to_ignore, active_hypos = torch.topk(
-                active_mask, k=beam_size, dim=1, largest=False
-            )
-
-            # update cands_to_ignore to ignore any finalized hypos.
-            cands_to_ignore = new_cands_to_ignore.ge(cand_size)[:, :beam_size]
-            # Make sure there is at least one active item for each sentence in the batch.
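-            # (editor's note) if an entire row of cands_to_ignore were True, every
-            # hypothesis of that sentence would already be finalized and the
-            # sentence removed from the batch above, hence this sanity check.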
- assert (~cands_to_ignore).any(dim=1).all() - - # update cands_to_ignore to ignore any finalized hypos - - # {active_bbsz_idx} denotes which beam number is continued for each new hypothesis (a beam - # can be selected more than once). - active_bbsz_idx = torch.gather(cand_bbsz_idx, dim=1, index=active_hypos) - active_scores = torch.gather(cand_scores, dim=1, index=active_hypos) - - active_bbsz_idx = active_bbsz_idx.view(-1) - active_scores = active_scores.view(-1) - - # copy tokens and scores for active hypotheses - - # Set the tokens for each beam (can select the same row more than once) - tokens[:, : step + 1] = torch.index_select( - tokens[:, : step + 1], dim=0, index=active_bbsz_idx - ) - # Select the next token for each of them - tokens.view(bsz, beam_size, -1)[:, :, step + 1] = torch.gather( - cand_indices, dim=1, index=active_hypos - ) - if step > 0: - scores[:, :step] = torch.index_select( - scores[:, :step], dim=0, index=active_bbsz_idx - ) - scores.view(bsz, beam_size, -1)[:, :, step] = torch.gather( - cand_scores, dim=1, index=active_hypos - ) - - # Update constraints based on which candidates were selected for the next beam - self.search.update_constraints(active_hypos) - - # copy attention for active hypotheses - if attn is not None: - attn[:, :, : step + 2] = torch.index_select( - attn[:, :, : step + 2], dim=0, index=active_bbsz_idx - ) - - # reorder incremental state in decoder - reorder_state = active_bbsz_idx - - # sort by score descending - for sent in range(len(finalized)): - scores = torch.tensor( - [float(elem["score"].item()) for elem in finalized[sent]] - ) - _, sorted_scores_indices = torch.sort(scores, descending=True) - finalized[sent] = [finalized[sent][ssi] for ssi in sorted_scores_indices] - finalized[sent] = torch.jit.annotate( - List[Dict[str, Tensor]], finalized[sent] - ) - return finalized - - def _prefix_tokens( - self, step: int, lprobs, scores, tokens, prefix_tokens, beam_size: int - ): - """Handle prefix tokens""" - prefix_toks = prefix_tokens[:, step].unsqueeze(-1).repeat(1, beam_size).view(-1) - prefix_lprobs = lprobs.gather(-1, prefix_toks.unsqueeze(-1)) - prefix_mask = prefix_toks.ne(self.pad) - lprobs[prefix_mask] = torch.tensor(-math.inf).to(lprobs) - lprobs[prefix_mask] = lprobs[prefix_mask].scatter( - -1, prefix_toks[prefix_mask].unsqueeze(-1), prefix_lprobs[prefix_mask] - ) - # if prefix includes eos, then we should make sure tokens and - # scores are the same across all beams - eos_mask = prefix_toks.eq(self.eos) - if eos_mask.any(): - # validate that the first beam matches the prefix - first_beam = tokens[eos_mask].view(-1, beam_size, tokens.size(-1))[ - :, 0, 1 : step + 1 - ] - eos_mask_batch_dim = eos_mask.view(-1, beam_size)[:, 0] - target_prefix = prefix_tokens[eos_mask_batch_dim][:, :step] - assert (first_beam == target_prefix).all() - - # copy tokens, scores and lprobs from the first beam to all beams - tokens = self.replicate_first_beam(tokens, eos_mask_batch_dim, beam_size) - scores = self.replicate_first_beam(scores, eos_mask_batch_dim, beam_size) - lprobs = self.replicate_first_beam(lprobs, eos_mask_batch_dim, beam_size) - return lprobs, tokens, scores - - def replicate_first_beam(self, tensor, mask, beam_size: int): - tensor = tensor.view(-1, beam_size, tensor.size(-1)) - tensor[mask] = tensor[mask][:, :1, :] - return tensor.view(-1, tensor.size(-1)) - - def finalize_hypos( - self, - step: int, - bbsz_idx, - eos_scores, - tokens, - scores, - finalized: List[List[Dict[str, Tensor]]], - finished: List[bool], - beam_size: int, - 
attn: Optional[Tensor], - src_lengths, - max_len: int, - ): - """Finalize hypothesis, store finalized information in `finalized`, and change `finished` accordingly. - A sentence is finalized when {beam_size} finished items have been collected for it. - - Returns number of sentences (not beam items) being finalized. - These will be removed from the batch and not processed further. - Args: - bbsz_idx (Tensor): - """ - assert bbsz_idx.numel() == eos_scores.numel() - - # clone relevant token and attention tensors. - # tokens is (batch * beam, max_len). So the index_select - # gets the newly EOS rows, then selects cols 1..{step + 2} - tokens_clone = tokens.index_select(0, bbsz_idx)[ - :, 1 : step + 2 - ] # skip the first index, which is EOS - - tokens_clone[:, step] = self.eos - attn_clone = ( - attn.index_select(0, bbsz_idx)[:, :, 1 : step + 2] - if attn is not None - else None - ) - - # compute scores per token position - pos_scores = scores.index_select(0, bbsz_idx)[:, : step + 1] - pos_scores[:, step] = eos_scores - # convert from cumulative to per-position scores - pos_scores[:, 1:] = pos_scores[:, 1:] - pos_scores[:, :-1] - - # normalize sentence-level scores - if self.normalize_scores: - eos_scores /= (step + 1) ** self.len_penalty - - # cum_unfin records which sentences in the batch are finished. - # It helps match indexing between (a) the original sentences - # in the batch and (b) the current, possibly-reduced set of - # sentences. - cum_unfin: List[int] = [] - prev = 0 - for f in finished: - if f: - prev += 1 - else: - cum_unfin.append(prev) - cum_fin_tensor = torch.tensor(cum_unfin, dtype=torch.int).to(bbsz_idx) - - unfin_idx = bbsz_idx // beam_size - sent = unfin_idx + torch.index_select(cum_fin_tensor, 0, unfin_idx) - - # Create a set of "{sent}{unfin_idx}", where - # "unfin_idx" is the index in the current (possibly reduced) - # list of sentences, and "sent" is the index in the original, - # unreduced batch - # For every finished beam item - # sentence index in the current (possibly reduced) batch - seen = (sent << 32) + unfin_idx - unique_seen: List[int] = torch.unique(seen).tolist() - - if self.match_source_len: - condition = step > torch.index_select(src_lengths, 0, unfin_idx) - eos_scores = torch.where(condition, torch.tensor(-math.inf), eos_scores) - sent_list: List[int] = sent.tolist() - for i in range(bbsz_idx.size()[0]): - # An input sentence (among those in a batch) is finished when - # beam_size hypotheses have been collected for it - if len(finalized[sent_list[i]]) < beam_size: - if attn_clone is not None: - # remove padding tokens from attn scores - hypo_attn = attn_clone[i] - else: - hypo_attn = torch.empty(0) - - finalized[sent_list[i]].append( - { - "tokens": tokens_clone[i], - "score": eos_scores[i], - "attention": hypo_attn, # src_len x tgt_len - "alignment": torch.empty(0), - "positional_scores": pos_scores[i], - } - ) - - newly_finished: List[int] = [] - for unique_s in unique_seen: - # check termination conditions for this sentence - unique_sent: int = unique_s >> 32 - unique_unfin_idx: int = unique_s - (unique_sent << 32) - - if not finished[unique_sent] and self.is_finished( - step, unique_unfin_idx, max_len, len(finalized[unique_sent]), beam_size - ): - finished[unique_sent] = True - newly_finished.append(unique_unfin_idx) - - return newly_finished - - def is_finished( - self, - step: int, - unfin_idx: int, - max_len: int, - finalized_sent_len: int, - beam_size: int, - ): - """ - Check whether decoding for a sentence is finished, which - occurs when the 
list of finalized sentences has reached the - beam size, or when we reach the maximum length. - """ - assert finalized_sent_len <= beam_size - if finalized_sent_len == beam_size or step == max_len: - return True - return False - - -class EnsembleModel(nn.Module): - """A wrapper around an ensemble of models.""" - - def __init__(self, models): - super().__init__() - self.models_size = len(models) - # method '__len__' is not supported in ModuleList for torch script - self.single_model = models[0] - self.models = nn.ModuleList(models) - - self.has_incremental: bool = False - if all( - hasattr(m, "decoder") and isinstance(m.decoder, FairseqIncrementalDecoder) - for m in models - ): - self.has_incremental = True - if all( - hasattr(m, "gpt_model") and isinstance(m.gpt_model.decoder, FairseqIncrementalDecoder) - for m in models - ): - self.has_incremental = True - - def forward(self): - pass - - def has_encoder(self): - return hasattr(self.single_model, "encoder") - - def has_incremental_states(self): - return self.has_incremental - - def max_decoder_positions(self): - return min( - [ - m.max_decoder_positions() - for m in self.models - if hasattr(m, "max_decoder_positions") - ] - + [sys.maxsize] - ) - - @torch.jit.export - def forward_encoder(self, net_input: Dict[str, Tensor]): - if not self.has_encoder(): - return None - return [model.encoder.forward_torchscript(net_input) for model in self.models] - - @torch.jit.export - def forward_decoder( - self, - tokens, - encoder_outs: List[Dict[str, List[Tensor]]], - incremental_states: List[Dict[str, Dict[str, Optional[Tensor]]]], - temperature: float = 1.0, - multimodal: bool = False, - ): - log_probs = [] - avg_attn: Optional[Tensor] = None - encoder_out: Optional[Dict[str, List[Tensor]]] = None - for i, model in enumerate(self.models): - if self.has_encoder(): - encoder_out = encoder_outs[i] - # decode each model - if self.has_incremental_states(): - if hasattr(model, "gpt_model"): - decoder_out = model.gpt_model.decoder.forward( - tokens, - encoder_out=encoder_out, - incremental_state=incremental_states[i], - ) - else: - decoder_out = model.decoder.forward( - tokens, - encoder_out=encoder_out, - incremental_state=incremental_states[i], - ) - elif incremental_states is not None and hasattr(model, "gpt_model") and multimodal: - # elif False: #TODO: skip this for text zero-shot - decoder_out = model.gpt_model.decoder.forward( - tokens, - encoder_out=encoder_out, - incremental_state=incremental_states[i], - ) - else: - if hasattr(model, "decoder"): - decoder_out = model.decoder.forward(tokens, encoder_out=encoder_out) - else: - decoder_out = model.forward(tokens) - - attn: Optional[Tensor] = None - decoder_len = len(decoder_out) - if decoder_len > 1 and decoder_out[1] is not None: - if isinstance(decoder_out[1], Tensor): - attn = decoder_out[1] - else: - attn_holder = decoder_out[1]["attn"] - if isinstance(attn_holder, Tensor): - attn = attn_holder - elif attn_holder is not None: - attn = attn_holder[0] - if attn is not None: - attn = attn[:, -1, :] - - decoder_out_tuple = ( - decoder_out[0][:, -1:, :].div_(temperature), - None if decoder_len <= 1 else decoder_out[1], - ) - probs = model.get_normalized_probs( - decoder_out_tuple, log_probs=True, sample=None - ) - probs = probs[:, -1, :] - if self.models_size == 1: - return probs, attn - - log_probs.append(probs) - if attn is not None: - if avg_attn is None: - avg_attn = attn - else: - avg_attn.add_(attn) - - avg_probs = torch.logsumexp(torch.stack(log_probs, dim=0), dim=0) - math.log( - 
self.models_size - ) - - if avg_attn is not None: - avg_attn.div_(self.models_size) - return avg_probs, avg_attn - - @torch.jit.export - def reorder_encoder_out( - self, encoder_outs: Optional[List[Dict[str, List[Tensor]]]], new_order - ): - """ - Reorder encoder output according to *new_order*. - - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - *encoder_out* rearranged according to *new_order* - """ - new_outs: List[Dict[str, List[Tensor]]] = [] - if not self.has_encoder(): - return new_outs - for i, model in enumerate(self.models): - assert encoder_outs is not None - new_outs.append( - model.encoder.reorder_encoder_out(encoder_outs[i], new_order) - ) - return new_outs - - @torch.jit.export - def reorder_incremental_state( - self, - incremental_states: List[Dict[str, Dict[str, Optional[Tensor]]]], - new_order, - ): - if not self.has_incremental_states(): - return - for i, model in enumerate(self.models): - if hasattr(model, "gpt_model"): - model.gpt_model.decoder.reorder_incremental_state_scripting( - incremental_states[i], new_order - ) - else: - model.decoder.reorder_incremental_state_scripting( - incremental_states[i], new_order - ) - - -class SequenceGeneratorWithAlignment(SequenceGenerator): - def __init__( - self, models, tgt_dict, left_pad_target=False, print_alignment="hard", **kwargs - ): - """Generates translations of a given source sentence. - - Produces alignments following "Jointly Learning to Align and - Translate with Transformer Models" (Garg et al., EMNLP 2019). - - Args: - left_pad_target (bool, optional): Whether or not the - hypothesis should be left padded or not when they are - teacher forced for generating alignments. - """ - super().__init__(EnsembleModelWithAlignment(models), tgt_dict, **kwargs) - self.left_pad_target = left_pad_target - - if print_alignment == "hard": - self.extract_alignment = utils.extract_hard_alignment - elif print_alignment == "soft": - self.extract_alignment = utils.extract_soft_alignment - - @torch.no_grad() - def generate(self, models, sample, **kwargs): - finalized = super()._generate(sample, **kwargs) - - src_tokens = sample["net_input"]["src_tokens"] - bsz = src_tokens.shape[0] - beam_size = self.beam_size - ( - src_tokens, - src_lengths, - prev_output_tokens, - tgt_tokens, - ) = self._prepare_batch_for_alignment(sample, finalized) - if any(getattr(m, "full_context_alignment", False) for m in self.model.models): - attn = self.model.forward_align(src_tokens, src_lengths, prev_output_tokens) - else: - attn = [ - finalized[i // beam_size][i % beam_size]["attention"].transpose(1, 0) - for i in range(bsz * beam_size) - ] - - if src_tokens.device != "cpu": - src_tokens = src_tokens.to("cpu") - tgt_tokens = tgt_tokens.to("cpu") - attn = [i.to("cpu") for i in attn] - - # Process the attn matrix to extract hard alignments. 
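-        # (editor's note) extract_alignment is either utils.extract_hard_alignment
-        # or utils.extract_soft_alignment, chosen in __init__ from print_alignment;
-        # finalized is indexed [sentence][beam], hence i // beam_size and
-        # i % beam_size below.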
- for i in range(bsz * beam_size): - alignment = self.extract_alignment( - attn[i], src_tokens[i], tgt_tokens[i], self.pad, self.eos - ) - finalized[i // beam_size][i % beam_size]["alignment"] = alignment - return finalized - - def _prepare_batch_for_alignment(self, sample, hypothesis): - src_tokens = sample["net_input"]["src_tokens"] - bsz = src_tokens.shape[0] - src_tokens = ( - src_tokens[:, None, :] - .expand(-1, self.beam_size, -1) - .contiguous() - .view(bsz * self.beam_size, -1) - ) - src_lengths = sample["net_input"]["src_lengths"] - src_lengths = ( - src_lengths[:, None] - .expand(-1, self.beam_size) - .contiguous() - .view(bsz * self.beam_size) - ) - prev_output_tokens = data_utils.collate_tokens( - [beam["tokens"] for example in hypothesis for beam in example], - self.pad, - self.eos, - self.left_pad_target, - move_eos_to_beginning=True, - ) - tgt_tokens = data_utils.collate_tokens( - [beam["tokens"] for example in hypothesis for beam in example], - self.pad, - self.eos, - self.left_pad_target, - move_eos_to_beginning=False, - ) - return src_tokens, src_lengths, prev_output_tokens, tgt_tokens - - -class EnsembleModelWithAlignment(EnsembleModel): - """A wrapper around an ensemble of models.""" - - def __init__(self, models): - super().__init__(models) - - def forward_align(self, src_tokens, src_lengths, prev_output_tokens): - avg_attn = None - for model in self.models: - decoder_out = model(src_tokens, src_lengths, prev_output_tokens) - attn = decoder_out[1]["attn"][0] - if avg_attn is None: - avg_attn = attn - else: - avg_attn.add_(attn) - if len(self.models) > 1: - avg_attn.div_(len(self.models)) - return avg_attn diff --git a/kosmos-g/fairseq/fairseq/sequence_scorer.py b/kosmos-g/fairseq/fairseq/sequence_scorer.py deleted file mode 100644 index 411d4df44..000000000 --- a/kosmos-g/fairseq/fairseq/sequence_scorer.py +++ /dev/null @@ -1,153 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import sys - -import torch -from fairseq import utils - - -class SequenceScorer(object): - """Scores the target for a given source sentence.""" - - def __init__( - self, - tgt_dict, - softmax_batch=None, - compute_alignment=False, - eos=None, - symbols_to_strip_from_output=None, - ): - self.pad = tgt_dict.pad() - self.eos = tgt_dict.eos() if eos is None else eos - self.softmax_batch = softmax_batch or sys.maxsize - assert self.softmax_batch > 0 - self.compute_alignment = compute_alignment - self.symbols_to_strip_from_output = ( - symbols_to_strip_from_output.union({self.eos}) - if symbols_to_strip_from_output is not None - else {self.eos} - ) - - @torch.no_grad() - def generate(self, models, sample, **kwargs): - """Score a batch of translations.""" - net_input = sample["net_input"] - - def batch_for_softmax(dec_out, target): - # assumes decoder_out[0] is the only thing needed (may not be correct for future models!) 
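-            # (editor's note) dec_out[0] holds the (bsz, tsz, vocab) logits; when
-            # bsz * tsz >= softmax_batch they are flattened and re-chunked so the
-            # softmax never sees more than softmax_batch positions at a time.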
- first, rest = dec_out[0], dec_out[1:] - bsz, tsz, dim = first.shape - if bsz * tsz < self.softmax_batch: - yield dec_out, target, True - else: - flat = first.contiguous().view(1, -1, dim) - flat_tgt = target.contiguous().view(flat.shape[:-1]) - s = 0 - while s < flat.size(1): - e = s + self.softmax_batch - yield (flat[:, s:e],) + rest, flat_tgt[:, s:e], False - s = e - - def gather_target_probs(probs, target): - probs = probs.gather( - dim=2, - index=target.unsqueeze(-1), - ) - return probs - - orig_target = sample["target"] - - # compute scores for each model in the ensemble - avg_probs = None - avg_attn = None - for model in models: - model.eval() - decoder_out = model(**net_input) - attn = decoder_out[1] if len(decoder_out) > 1 else None - if type(attn) is dict: - attn = attn.get("attn", None) - - batched = batch_for_softmax(decoder_out, orig_target) - probs, idx = None, 0 - for bd, tgt, is_single in batched: - sample["target"] = tgt - curr_prob = model.get_normalized_probs( - bd, log_probs=len(models) == 1, sample=sample - ).data - if is_single: - probs = gather_target_probs(curr_prob, orig_target) - else: - if probs is None: - probs = curr_prob.new(orig_target.numel()) - step = curr_prob.size(0) * curr_prob.size(1) - end = step + idx - tgt_probs = gather_target_probs( - curr_prob.view(tgt.shape + (curr_prob.size(-1),)), tgt - ) - probs[idx:end] = tgt_probs.view(-1) - idx = end - sample["target"] = orig_target - - probs = probs.view(sample["target"].shape) - - if avg_probs is None: - avg_probs = probs - else: - avg_probs.add_(probs) - if attn is not None: - if torch.is_tensor(attn): - attn = attn.data - else: - attn = attn[0] - if avg_attn is None: - avg_attn = attn - else: - avg_attn.add_(attn) - if len(models) > 1: - avg_probs.div_(len(models)) - avg_probs.log_() - if avg_attn is not None: - avg_attn.div_(len(models)) - - bsz = avg_probs.size(0) - hypos = [] - start_idxs = sample["start_indices"] if "start_indices" in sample else [0] * bsz - for i in range(bsz): - # remove padding from ref - ref = ( - utils.strip_pad(sample["target"][i, start_idxs[i] :], self.pad) - if sample["target"] is not None - else None - ) - tgt_len = ref.numel() - avg_probs_i = avg_probs[i][start_idxs[i] : start_idxs[i] + tgt_len] - score_i = avg_probs_i.sum() / tgt_len - if avg_attn is not None: - avg_attn_i = avg_attn[i] - if self.compute_alignment: - alignment = utils.extract_hard_alignment( - avg_attn_i, - sample["net_input"]["src_tokens"][i], - sample["target"][i], - self.pad, - self.eos, - ) - else: - alignment = None - else: - avg_attn_i = alignment = None - hypos.append( - [ - { - "tokens": ref, - "score": score_i, - "attention": avg_attn_i, - "alignment": alignment, - "positional_scores": avg_probs_i, - } - ] - ) - return hypos diff --git a/kosmos-g/fairseq/fairseq/speech_generator.py b/kosmos-g/fairseq/fairseq/speech_generator.py deleted file mode 100644 index 90ec914e6..000000000 --- a/kosmos-g/fairseq/fairseq/speech_generator.py +++ /dev/null @@ -1,231 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
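-
-# (editor's note, illustrative) gcmvn_denormalize below undoes global cepstral
-# mean-variance normalization: given per-channel "mean"/"std" stats loaded from
-# an .npz file, it maps a (B, T, C) feature tensor x to x * std + mean.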
- -import torch -import numpy as np - -from fairseq.data.audio.speech_to_text_dataset import S2TDataConfig - - -class SpeechGenerator(object): - def __init__(self, model, vocoder, data_cfg: S2TDataConfig): - self.model = model - self.vocoder = vocoder - stats_npz_path = data_cfg.global_cmvn_stats_npz - self.gcmvn_stats = None - if stats_npz_path is not None: - self.gcmvn_stats = np.load(stats_npz_path) - - def gcmvn_denormalize(self, x): - # x: B x T x C - if self.gcmvn_stats is None: - return x - mean = torch.from_numpy(self.gcmvn_stats["mean"]).to(x) - std = torch.from_numpy(self.gcmvn_stats["std"]).to(x) - assert len(x.shape) == 3 and mean.shape[0] == std.shape[0] == x.shape[2] - x = x * std.view(1, 1, -1).expand_as(x) - return x + mean.view(1, 1, -1).expand_as(x) - - def get_waveform(self, feat): - # T x C -> T - return None if self.vocoder is None else self.vocoder(feat).squeeze(0) - - -class AutoRegressiveSpeechGenerator(SpeechGenerator): - def __init__( - self, - model, - vocoder, - data_cfg, - max_iter: int = 6000, - eos_prob_threshold: float = 0.5, - ): - super().__init__(model, vocoder, data_cfg) - self.max_iter = max_iter - self.eos_prob_threshold = eos_prob_threshold - - @torch.no_grad() - def generate(self, model, sample, has_targ=False, **kwargs): - model.eval() - - src_tokens = sample["net_input"]["src_tokens"] - src_lengths = sample["net_input"]["src_lengths"] - bsz, src_len = src_tokens.size()[:2] - n_frames_per_step = model.decoder.n_frames_per_step - out_dim = model.decoder.out_dim - raw_dim = out_dim // n_frames_per_step - - # initialize - encoder_out = model.forward_encoder( - src_tokens, src_lengths, speaker=sample["speaker"] - ) - incremental_state = {} - feat, attn, eos_prob = [], [], [] - finished = src_tokens.new_zeros((bsz,)).bool() - out_lens = src_lengths.new_zeros((bsz,)).long().fill_(self.max_iter) - - prev_feat_out = encoder_out["encoder_out"][0].new_zeros(bsz, 1, out_dim) - for step in range(self.max_iter): - cur_out_lens = out_lens.clone() - cur_out_lens.masked_fill_(cur_out_lens.eq(self.max_iter), step + 1) - _, cur_eos_out, cur_extra = model.forward_decoder( - prev_feat_out, - encoder_out=encoder_out, - incremental_state=incremental_state, - target_lengths=cur_out_lens, - speaker=sample["speaker"], - **kwargs - ) - cur_eos_prob = torch.sigmoid(cur_eos_out).squeeze(2) - feat.append(cur_extra["feature_out"]) - attn.append(cur_extra["attn"]) - eos_prob.append(cur_eos_prob) - - cur_finished = cur_eos_prob.squeeze(1) > self.eos_prob_threshold - out_lens.masked_fill_((~finished) & cur_finished, step + 1) - finished = finished | cur_finished - if finished.sum().item() == bsz: - break - prev_feat_out = cur_extra["feature_out"] - - feat = torch.cat(feat, dim=1) - feat = model.decoder.postnet(feat) + feat - eos_prob = torch.cat(eos_prob, dim=1) - attn = torch.cat(attn, dim=2) - alignment = attn.max(dim=1)[1] - - feat = feat.reshape(bsz, -1, raw_dim) - feat = self.gcmvn_denormalize(feat) - - eos_prob = eos_prob.repeat_interleave(n_frames_per_step, dim=1) - attn = attn.repeat_interleave(n_frames_per_step, dim=2) - alignment = alignment.repeat_interleave(n_frames_per_step, dim=1) - out_lens = out_lens * n_frames_per_step - - finalized = [ - { - "feature": feat[b, :out_len], - "eos_prob": eos_prob[b, :out_len], - "attn": attn[b, :, :out_len], - "alignment": alignment[b, :out_len], - "waveform": self.get_waveform(feat[b, :out_len]), - } - for b, out_len in zip(range(bsz), out_lens) - ] - - if has_targ: - assert sample["target"].size(-1) == out_dim - tgt_feats = 
sample["target"].view(bsz, -1, raw_dim) - tgt_feats = self.gcmvn_denormalize(tgt_feats) - tgt_lens = sample["target_lengths"] * n_frames_per_step - for b, (f, l) in enumerate(zip(tgt_feats, tgt_lens)): - finalized[b]["targ_feature"] = f[:l] - finalized[b]["targ_waveform"] = self.get_waveform(f[:l]) - return finalized - - -class NonAutoregressiveSpeechGenerator(SpeechGenerator): - @torch.no_grad() - def generate(self, model, sample, has_targ=False, **kwargs): - model.eval() - - bsz, max_src_len = sample["net_input"]["src_tokens"].size() - n_frames_per_step = model.encoder.n_frames_per_step - out_dim = model.encoder.out_dim - raw_dim = out_dim // n_frames_per_step - - feat, feat_post, out_lens, log_dur_out, _, _ = model( - src_tokens=sample["net_input"]["src_tokens"], - src_lengths=sample["net_input"]["src_lengths"], - prev_output_tokens=sample["net_input"]["prev_output_tokens"], - incremental_state=None, - target_lengths=sample["target_lengths"], - speaker=sample["speaker"], - ) - if feat_post is not None: - feat = feat_post - - feat = feat.view(bsz, -1, raw_dim) - feat = self.gcmvn_denormalize(feat) - - dur_out = torch.clamp(torch.round(torch.exp(log_dur_out) - 1).long(), min=0) - - def get_dur_plot_data(d): - r = [] - for i, dd in enumerate(d): - r += [i + 1] * dd.item() - return r - - out_lens = out_lens * n_frames_per_step - finalized = [ - { - "feature": feat[b, :l] if l > 0 else feat.new_zeros([1, raw_dim]), - "waveform": self.get_waveform( - feat[b, :l] if l > 0 else feat.new_zeros([1, raw_dim]) - ), - "attn": feat.new_tensor(get_dur_plot_data(dur_out[b])), - } - for b, l in zip(range(bsz), out_lens) - ] - - if has_targ: - tgt_feats = sample["target"].view(bsz, -1, raw_dim) - tgt_feats = self.gcmvn_denormalize(tgt_feats) - tgt_lens = sample["target_lengths"] * n_frames_per_step - for b, (f, l) in enumerate(zip(tgt_feats, tgt_lens)): - finalized[b]["targ_feature"] = f[:l] - finalized[b]["targ_waveform"] = self.get_waveform(f[:l]) - return finalized - - -class TeacherForcingAutoRegressiveSpeechGenerator(AutoRegressiveSpeechGenerator): - @torch.no_grad() - def generate(self, model, sample, has_targ=False, **kwargs): - model.eval() - - src_tokens = sample["net_input"]["src_tokens"] - src_lens = sample["net_input"]["src_lengths"] - prev_out_tokens = sample["net_input"]["prev_output_tokens"] - tgt_lens = sample["target_lengths"] - n_frames_per_step = model.decoder.n_frames_per_step - raw_dim = model.decoder.out_dim // n_frames_per_step - bsz = src_tokens.shape[0] - - feat, eos_prob, extra = model( - src_tokens, - src_lens, - prev_out_tokens, - incremental_state=None, - target_lengths=tgt_lens, - speaker=sample["speaker"], - ) - - attn = extra["attn"] # B x T_s x T_t - alignment = attn.max(dim=1)[1] - feat = feat.reshape(bsz, -1, raw_dim) - feat = self.gcmvn_denormalize(feat) - eos_prob = eos_prob.repeat_interleave(n_frames_per_step, dim=1) - attn = attn.repeat_interleave(n_frames_per_step, dim=2) - alignment = alignment.repeat_interleave(n_frames_per_step, dim=1) - tgt_lens = sample["target_lengths"] * n_frames_per_step - - finalized = [ - { - "feature": feat[b, :tgt_len], - "eos_prob": eos_prob[b, :tgt_len], - "attn": attn[b, :, :tgt_len], - "alignment": alignment[b, :tgt_len], - "waveform": self.get_waveform(feat[b, :tgt_len]), - } - for b, tgt_len in zip(range(bsz), tgt_lens) - ] - - if has_targ: - tgt_feats = sample["target"].view(bsz, -1, raw_dim) - tgt_feats = self.gcmvn_denormalize(tgt_feats) - for b, (f, l) in enumerate(zip(tgt_feats, tgt_lens)): - finalized[b]["targ_feature"] = 
f[:l] - finalized[b]["targ_waveform"] = self.get_waveform(f[:l]) - return finalized diff --git a/kosmos-g/fairseq/fairseq/tasks/__init__.py b/kosmos-g/fairseq/fairseq/tasks/__init__.py deleted file mode 100644 index 9a46b012c..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/__init__.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -import argparse -import importlib -import os - -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.utils import merge_with_parent -from hydra.core.config_store import ConfigStore - -from .fairseq_task import FairseqTask, LegacyFairseqTask # noqa - - -# register dataclass -TASK_DATACLASS_REGISTRY = {} -TASK_REGISTRY = {} -TASK_CLASS_NAMES = set() - - -def setup_task(cfg: FairseqDataclass, **kwargs): - task = None - task_name = getattr(cfg, "task", None) - - if isinstance(task_name, str): - # legacy tasks - task = TASK_REGISTRY[task_name] - if task_name in TASK_DATACLASS_REGISTRY: - dc = TASK_DATACLASS_REGISTRY[task_name] - cfg = dc.from_namespace(cfg) - else: - task_name = getattr(cfg, "_name", None) - - if task_name and task_name in TASK_DATACLASS_REGISTRY: - dc = TASK_DATACLASS_REGISTRY[task_name] - cfg = merge_with_parent(dc(), cfg) - task = TASK_REGISTRY[task_name] - - assert ( - task is not None - ), f"Could not infer task type from {cfg}. Available argparse tasks: {TASK_REGISTRY.keys()}. Available hydra tasks: {TASK_DATACLASS_REGISTRY.keys()}" - - return task.setup_task(cfg, **kwargs) - - -def register_task(name, dataclass=None): - """ - New tasks can be added to fairseq with the - :func:`~fairseq.tasks.register_task` function decorator. - - For example:: - - @register_task('classification') - class ClassificationTask(FairseqTask): - (...) - - .. note:: - - All Tasks must implement the :class:`~fairseq.tasks.FairseqTask` - interface. - - Args: - name (str): the name of the task - """ - - def register_task_cls(cls): - if name in TASK_REGISTRY: - raise ValueError("Cannot register duplicate task ({})".format(name)) - if not issubclass(cls, FairseqTask): - raise ValueError( - "Task ({}: {}) must extend FairseqTask".format(name, cls.__name__) - ) - if cls.__name__ in TASK_CLASS_NAMES: - raise ValueError( - "Cannot register task with duplicate class name ({})".format( - cls.__name__ - ) - ) - TASK_REGISTRY[name] = cls - TASK_CLASS_NAMES.add(cls.__name__) - - if dataclass is not None and not issubclass(dataclass, FairseqDataclass): - raise ValueError( - "Dataclass {} must extend FairseqDataclass".format(dataclass) - ) - - cls.__dataclass = dataclass - if dataclass is not None: - TASK_DATACLASS_REGISTRY[name] = dataclass - - cs = ConfigStore.instance() - node = dataclass() - node._name = name - cs.store(name=name, group="task", node=node, provider="fairseq") - - return cls - - return register_task_cls - - -def get_task(name): - return TASK_REGISTRY[name] - - -def import_tasks(tasks_dir, namespace): - for file in os.listdir(tasks_dir): - path = os.path.join(tasks_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - task_name = file[: file.find(".py")] if file.endswith(".py") else file - importlib.import_module(namespace + "." 
+ task_name) - - # expose `task_parser` for sphinx - if task_name in TASK_REGISTRY: - parser = argparse.ArgumentParser(add_help=False) - group_task = parser.add_argument_group("Task name") - # fmt: off - group_task.add_argument('--task', metavar=task_name, - help='Enable this task with: ``--task=' + task_name + '``') - # fmt: on - group_args = parser.add_argument_group( - "Additional command-line arguments" - ) - TASK_REGISTRY[task_name].add_args(group_args) - globals()[task_name + "_parser"] = parser - - -# automatically import any Python files in the tasks/ directory -tasks_dir = os.path.dirname(__file__) -import_tasks(tasks_dir, "fairseq.tasks") diff --git a/kosmos-g/fairseq/fairseq/tasks/audio_finetuning.py b/kosmos-g/fairseq/fairseq/tasks/audio_finetuning.py deleted file mode 100644 index 5e04a1b79..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/audio_finetuning.py +++ /dev/null @@ -1,343 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. - -import logging -import os -import torch -import json - -from argparse import Namespace -from dataclasses import dataclass, field -from typing import Optional, Any - -from fairseq.data import AddTargetDataset, Dictionary, encoders -from fairseq.tasks.audio_pretraining import AudioPretrainingTask, AudioPretrainingConfig -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.configs import GenerationConfig -from fairseq.data.text_compressor import TextCompressor, TextCompressionLevel - -from . import register_task -from .. import utils -from ..logging import metrics - - -logger = logging.getLogger(__name__) - - -class LabelEncoder(object): - def __init__(self, dictionary): - self.dictionary = dictionary - - def __call__(self, label): - return self.dictionary.encode_line( - label, append_eos=False, add_if_not_exist=False - ) - - -def label_len_fn(label): - return len(label.split(" ")) - - -@dataclass -class AudioFinetuningConfig(AudioPretrainingConfig): - # Options for reporting WER metrics during validation. 
Only applicable to - # Seq2Seq models during fine-tuning - eval_wer: bool = field( - default=False, metadata={"help": "compute WER for Seq2Seq models"} - ) - eval_wer_config: GenerationConfig = field( - default_factory=lambda: GenerationConfig(), - metadata={"help": "beam search config for evaluating wer during training"}, - ) - eval_wer_tokenizer: Any = field( - default=None, - metadata={"help": "tokenizer config for evaluating wer during training"}, - ) - eval_wer_post_process: str = field( - default="letter", - metadata={ - "help": "remove BPE tokens before scoring (can be sentencepiece, letter, and more)" - }, - ) - eval_bleu: bool = field( - default=False, metadata={"help": "evaluation with BLEU scores"} - ) - eval_bleu_detok: Optional[str] = field( - default=None, - metadata={ - "help": "detokenize before computing BLEU (e.g., 'moses'); " - "required if using --eval-bleu; use 'space' to disable " - "detokenization; see fairseq.data.encoders for other options" - }, - ) - eval_bleu_detok_args: str = field( - default="{}", metadata={"help": "args for building the tokenizer, if needed"} - ) - eval_tokenized_bleu: bool = field( - default=False, metadata={"help": "compute tokenized BLEU instead of sacrebleu"} - ) - eval_bleu_remove_bpe: Optional[str] = field( - default=None, metadata={"help": "remove BPE before computing BLEU"} - ) - eval_bleu_args: str = field( - default="{}", - metadata={ - "help": "generation args for BLUE scoring, e.g., " - '\'{"beam": 4, "lenpen": 0.6}\'' - }, - ) - eval_bleu_print_samples: bool = field( - default=False, metadata={"help": "print sample generations during validation"} - ) - autoregressive: bool = field( - default=False, - metadata={ - "help": "required for autoregressive decoders (like seq2seq models); " - "adds 'prev_output_tokens' to input and appends eos to target" - }, - ) - - -@register_task("audio_finetuning", dataclass=AudioFinetuningConfig) -class AudioFinetuningTask(AudioPretrainingTask): - """ """ - - cfg: AudioFinetuningConfig - - def __init__( - self, - cfg: AudioFinetuningConfig, - ): - super().__init__(cfg) - self.blank_symbol = "<s>" - - self.state.add_factory("target_dictionary", self.load_target_dictionary) - - def load_target_dictionary(self): - if self.cfg.labels: - dict_path = os.path.join(self.cfg.data, f"dict.{self.cfg.labels}.txt") - return Dictionary.load(dict_path) - return None - - def load_dataset( - self, split: str, task_cfg: AudioFinetuningConfig = None, **kwargs - ): - super().load_dataset(split, task_cfg, **kwargs) - - task_cfg = task_cfg or self.cfg - assert task_cfg.labels is not None - text_compression_level = getattr( - TextCompressionLevel, str(self.cfg.text_compression_level) - ) - data_path = self.cfg.data - label_path = os.path.join(data_path, f"{split}.{task_cfg.labels}") - skipped_indices = getattr(self.datasets[split], "skipped_indices", set()) - text_compressor = TextCompressor(level=text_compression_level) - with open(label_path, "r") as f: - labels = [ - text_compressor.compress(l) - for i, l in enumerate(f) - if i not in skipped_indices - ] - - assert len(labels) == len(self.datasets[split]), ( - f"labels length ({len(labels)}) and dataset length " - f"({len(self.datasets[split])}) do not match" - ) - - process_label = LabelEncoder(self.target_dictionary) - - self.datasets[split] = AddTargetDataset( - self.datasets[split], - labels, - pad=self.target_dictionary.pad(), - eos=self.target_dictionary.eos(), - batch_targets=True, - process_label=process_label, - label_len_fn=label_len_fn, - 
add_to_input=task_cfg.get("autoregressive", False), - text_compression_level=text_compression_level, - ) - - @property - def target_dictionary(self): - """Return the :class:`~fairseq.data.Dictionary` for the language - model.""" - return self.state.target_dictionary - - def valid_step(self, sample, model, criterion): - loss, sample_size, logging_output = super().valid_step(sample, model, criterion) - if self.cfg.eval_wer and self.cfg.autoregressive: - metrics = self._inference_with_wer(self.sequence_generator, sample, model) - logging_output["_num_char_errors"] = metrics["num_char_errors"] - logging_output["_num_chars"] = metrics["num_chars"] - logging_output["_num_word_errors"] = metrics["num_word_errors"] - logging_output["_num_words"] = metrics["num_words"] - if self.cfg.eval_bleu and self.cfg.autoregressive: - metrics = self._inference_with_bleu(self.sequence_generator, sample, model) - logging_output["_bleu_sys_len"] = metrics.sys_len - logging_output["_bleu_ref_len"] = metrics.ref_len - # we split counts into separate entries so that they can be - # summed efficiently across workers using fast-stat-sync - assert len(metrics.counts) == 4 - for i in range(4): - logging_output[f"_bleu_counts_{i}"] = metrics.counts[i] - logging_output[f"_bleu_totals_{i}"] = metrics.totals[i] - return loss, sample_size, logging_output - - def build_model(self, model_cfg: FairseqDataclass, from_checkpoint=False): - model = super().build_model(model_cfg, from_checkpoint) - - if self.cfg.eval_wer and self.cfg.autoregressive: - self.sequence_generator = self.build_generator( - [model], - self.cfg.eval_wer_config, - ) - if self.cfg.eval_wer_tokenizer: - self.tokenizer = encoders.build_tokenizer(self.cfg.eval_wer_tokenizer) - else: - self.tokenizer = None - if self.cfg.eval_bleu and self.cfg.autoregressive: - assert self.cfg.eval_bleu_detok is not None, ( - "--eval-bleu-detok is required if using --eval-bleu; " - "try --eval-bleu-detok=moses (or --eval-bleu-detok=space " - "to disable detokenization, e.g., when using sentencepiece)" - ) - detok_args = json.loads(self.cfg.eval_bleu_detok_args) - self.tokenizer = encoders.build_tokenizer( - Namespace(tokenizer=self.cfg.eval_bleu_detok, **detok_args) - ) - gen_args = json.loads(self.cfg.eval_bleu_args) - gen_args = Namespace(**gen_args) - self.sequence_generator = self.build_generator([model], gen_args) - - return model - - def _inference_with_wer(self, generator, sample, model): - import editdistance - - def decode(toks): - s = self.target_dictionary.string( - toks.int().cpu(), - self.cfg.eval_wer_post_process, - escape_unk=True, - ) - if self.tokenizer: - s = self.tokenizer.decode(s) - return s - - num_word_errors, num_char_errors = 0, 0 - num_chars, num_words = 0, 0 - gen_out = self.inference_step(generator, [model], sample, None) - for i in range(len(gen_out)): - hyp = decode(gen_out[i][0]["tokens"]) - ref = decode( - utils.strip_pad(sample["target"][i], self.target_dictionary.pad()), - ) - num_char_errors += editdistance.eval(hyp, ref) - num_chars += len(ref) - hyp_words = hyp.split() - ref_words = ref.split() - num_word_errors += editdistance.eval(hyp_words, ref_words) - num_words += len(ref_words) - - return { - "num_char_errors": num_char_errors, - "num_chars": num_chars, - "num_word_errors": num_word_errors, - "num_words": num_words, - } - - def _inference_with_bleu(self, generator, sample, model): - import sacrebleu - - def decode(toks, is_ref): - s = self.target_dictionary.string( - toks.int().cpu(), - self.cfg.eval_bleu_remove_bpe, - # The default 
unknown string in fairseq is `<unk>`, but - # this is tokenized by sacrebleu as `< unk >`, inflating - # BLEU scores. Instead, we use a somewhat more verbose - # alternative that is unlikely to appear in the real - # reference, but doesn't get split into multiple tokens. - unk_string=("UNKNOWNTOKENINREF" if is_ref else "UNKNOWNTOKENINHYP"), - ) - if self.tokenizer: - s = self.tokenizer.decode(s) - return s - - gen_out = self.inference_step(generator, [model], sample) - hyps, refs = [], [] - for i in range(len(gen_out)): - hyps.append(decode(gen_out[i][0]["tokens"], is_ref=False)) - refs.append( - decode( - utils.strip_pad(sample["target"][i], self.target_dictionary.pad()), - is_ref=True, # don't count <unk> as matches to the hypo - ) - ) - if self.cfg.eval_bleu_print_samples: - logger.info("H-{} {}".format(sample["id"][0], hyps[0])) - logger.info("T-{} {}".format(sample["id"][0], refs[0])) - - eval_tokenization = "none" if self.cfg.eval_tokenized_bleu else "13a" - return sacrebleu.corpus_bleu(hyps, [refs], tokenize=eval_tokenization) - - def reduce_metrics(self, logging_outputs, criterion): - super().reduce_metrics(logging_outputs, criterion) - - if self.cfg.eval_wer: - zero = torch.scalar_tensor(0.0) - num_char_errors = sum( - log.get("_num_char_errors", zero) for log in logging_outputs - ) - num_chars = sum(log.get("_num_chars", zero) for log in logging_outputs) - num_word_errors = sum( - log.get("_num_word_errors", zero) for log in logging_outputs - ) - num_words = sum(log.get("_num_words", zero) for log in logging_outputs) - metrics.log_scalar("_num_char_errors", num_char_errors) - metrics.log_scalar("_num_chars", num_chars) - metrics.log_scalar("_num_word_errors", num_word_errors) - metrics.log_scalar("_num_words", num_words) - if num_chars > 0: - metrics.log_derived( - "uer", - lambda meters: meters["_num_char_errors"].sum - * 100.0 - / meters["_num_chars"].sum - if meters["_num_chars"].sum > 0 - else float("nan"), - ) - if num_words > 0: - metrics.log_derived( - "wer", - lambda meters: meters["_num_word_errors"].sum - * 100.0 - / meters["_num_words"].sum - if meters["_num_words"].sum > 0 - else float("nan"), - ) - if self.cfg.eval_bleu: - len_keys = ["_bleu_sys_len", "_bleu_ref_len"] - count_keys = [f"_bleu_counts_{i}" for i in range(4)] - total_keys = [f"_bleu_totals_{i}" for i in range(4)] - for k in len_keys + count_keys + total_keys: - metrics.log_scalar(k, sum(log.get(k, 0) for log in logging_outputs)) - - import sacrebleu - - metrics.log_derived( - "bleu", - lambda meters: sacrebleu.compute_bleu( - correct=[meters[k].sum for k in count_keys], - total=[meters[k].sum for k in total_keys], - sys_len=meters["_bleu_sys_len"].sum, - ref_len=meters["_bleu_ref_len"].sum, - smooth_method="exp", - ).score, - ) diff --git a/kosmos-g/fairseq/fairseq/tasks/audio_pretraining.py b/kosmos-g/fairseq/fairseq/tasks/audio_pretraining.py deleted file mode 100644 index a55c70400..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/audio_pretraining.py +++ /dev/null @@ -1,205 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. 
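Annotation (not part of the deleted file): `_inference_with_wer` and `reduce_metrics` above boil down to accumulating edit-distance counts and deriving `uer`/`wer` from their sums. A standalone sketch with made-up hypothesis/reference strings, assuming only the `editdistance` package the task already imports:

```python
import editdistance

hyps = ["a cat sat on a mat", "hello word"]    # hypothetical decoder outputs
refs = ["a cat sat on the mat", "hello world"]

num_char_errors = num_chars = num_word_errors = num_words = 0
for hyp, ref in zip(hyps, refs):
    num_char_errors += editdistance.eval(hyp, ref)                  # char edits
    num_chars += len(ref)
    num_word_errors += editdistance.eval(hyp.split(), ref.split())  # word edits
    num_words += len(ref.split())

print(f"uer = {100.0 * num_char_errors / num_chars:.2f}")
print(f"wer = {100.0 * num_word_errors / num_words:.2f}")
```

Logging raw counts rather than per-batch rates is what lets the values be summed cheaply across workers before the final division.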
- -import logging -import os -import sys - -from argparse import Namespace -from dataclasses import dataclass, field -from typing import Optional -from omegaconf import MISSING, II, OmegaConf - -from fairseq.data import BinarizedAudioDataset, FileAudioDataset -from fairseq.dataclass import FairseqDataclass, ChoiceEnum -from fairseq.data.text_compressor import TextCompressionLevel - -from . import FairseqTask, register_task - - -logger = logging.getLogger(__name__) - - -@dataclass -class InferredW2vConfig: - # The following are needed to precompute mask and mask channel indices - # before model's forward. - mask_length: Optional[int] = II("model.mask_length") - mask_prob: Optional[float] = II("model.mask_prob") - mask_selection: Optional[str] = II("model.mask_selection") - mask_other: Optional[float] = II("model.mask_other") - no_mask_overlap: Optional[bool] = II("model.no_mask_overlap") - mask_min_space: Optional[int] = II("model.mask_min_space") - mask_channel_length: Optional[int] = II("model.mask_channel_length") - mask_channel_prob: Optional[float] = II("model.mask_channel_prob") - mask_channel_selection: Optional[str] = II("model.mask_channel_selection") - mask_channel_other: Optional[float] = II("model.mask_channel_other") - no_mask_channel_overlap: Optional[bool] = II("model.no_mask_channel_overlap") - mask_channel_min_space: Optional[int] = II("model.mask_channel_min_space") - - conv_feature_layers: Optional[str] = II("model.conv_feature_layers") - encoder_embed_dim: Optional[int] = II("model.encoder_embed_dim") - - -@dataclass -class AudioPretrainingConfig(FairseqDataclass): - data: str = field(default=MISSING, metadata={"help": "path to data directory"}) - labels: Optional[str] = field( - default=None, - metadata={"help": "extension of the label file to load, used for fine-tuning"}, - ) - binarized_dataset: bool = field( - default=False, - metadata={ - "help": "if true, loads binarized dataset (useful for very large datasets). " - "See examples/wav2vec/scripts/binarize_manifest.sh" - }, - ) - sample_rate: int = field( - default=16_000, - metadata={ - "help": "target sample rate. audio files will be up/down sampled to this rate" - }, - ) - normalize: bool = field( - default=False, - metadata={"help": "if set, normalizes input to have 0 mean and unit variance"}, - ) - enable_padding: bool = field( - default=False, metadata={"help": "pad shorter samples instead of cropping"} - ) - max_sample_size: Optional[int] = field( - default=None, metadata={"help": "max sample size to crop to for batching"} - ) - min_sample_size: Optional[int] = field( - default=None, metadata={"help": "min sample size to skip small examples"} - ) - num_batch_buckets: int = field( - default=0, - metadata={"help": "number of buckets"}, - ) - precompute_mask_indices: bool = field( - default=False, - metadata={ - "help": "flag to compute mask indices in data preparation.", - }, - ) - - inferred_w2v_config: Optional[InferredW2vConfig] = field( - default=None, - metadata={ - "help": "wav2vec 2.0 masking arguments used to pre-compute masks (required for TPU)", - }, - ) - - tpu: bool = II("common.tpu") - text_compression_level: ChoiceEnum([x.name for x in TextCompressionLevel]) = field( - default="none", - metadata={ - "help": "compression level for texts (e.g. audio filenames, " - "target texts): none/low/high (default: none). 
" - }, - ) - - -@register_task("audio_pretraining", dataclass=AudioPretrainingConfig) -class AudioPretrainingTask(FairseqTask): - """ """ - - cfg: AudioPretrainingConfig - - @classmethod - def setup_task(cls, cfg: AudioPretrainingConfig, **kwargs): - """Setup the task (e.g., load dictionaries). - - Args: - cfg (AudioPretrainingConfig): configuration of this task - """ - - return cls(cfg) - - def _get_mask_precompute_kwargs(self, cfg): - if self.cfg.precompute_mask_indices or self.cfg.tpu: - assert ( - cfg.inferred_w2v_config is not None - ), "inferred_w2v_config must be set" - return OmegaConf.to_container( - cfg.inferred_w2v_config, resolve=True, enum_to_str=True - ) - else: - return {} - - def load_dataset(self, split: str, task_cfg: FairseqDataclass = None, **kwargs): - data_path = self.cfg.data - task_cfg = task_cfg or self.cfg - - # upgrade old task - if isinstance(task_cfg, Namespace): - if not hasattr(task_cfg, "autoregressive"): - task_cfg.autoregressive = not task_cfg.criterion == "ctc" - - text_compression_level = getattr( - TextCompressionLevel, str(self.cfg.text_compression_level) - ) - if getattr(task_cfg, "binarized_dataset", False): - self.datasets[split] = BinarizedAudioDataset( - data_path, - split=split, - sample_rate=task_cfg.get("sample_rate", self.cfg.sample_rate), - max_sample_size=self.cfg.max_sample_size, - min_sample_size=self.cfg.min_sample_size, - pad=task_cfg.labels is not None or task_cfg.enable_padding, - normalize=task_cfg.normalize, - num_buckets=self.cfg.num_batch_buckets or int(self.cfg.tpu), - compute_mask_indices=(self.cfg.precompute_mask_indices or self.cfg.tpu), - **self._get_mask_precompute_kwargs(task_cfg), - ) - else: - manifest_path = os.path.join(data_path, "{}.tsv".format(split)) - - self.datasets[split] = FileAudioDataset( - manifest_path=manifest_path, - sample_rate=task_cfg.get("sample_rate", self.cfg.sample_rate), - max_sample_size=self.cfg.max_sample_size, - min_sample_size=self.cfg.min_sample_size, - pad=task_cfg.labels is not None or task_cfg.enable_padding, - normalize=task_cfg.normalize, - num_buckets=self.cfg.num_batch_buckets or int(self.cfg.tpu), - compute_mask_indices=(self.cfg.precompute_mask_indices or self.cfg.tpu), - text_compression_level=text_compression_level, - **self._get_mask_precompute_kwargs(task_cfg), - ) - - if self.cfg.tpu and task_cfg.inferred_w2v_config.mask_channel_prob == 0.0: - logger.info( - "Pretraining on TPUs may suffer convergence " - "issues when training with `mask_channel_prob` value of " - "0. You may want to set this to a low value close to 0." - ) - - @property - def source_dictionary(self): - return None - - @property - def target_dictionary(self): - return None - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return sys.maxsize, sys.maxsize - - def build_model(self, model_cfg: FairseqDataclass, from_checkpoint=False): - model = super().build_model(model_cfg, from_checkpoint) - - actualized_cfg = getattr(model, "cfg", None) - if actualized_cfg is not None: - # if "w2v_args" in actualized_cfg: - if hasattr(actualized_cfg, "w2v_args"): - model_cfg.w2v_args = actualized_cfg.w2v_args - - return model diff --git a/kosmos-g/fairseq/fairseq/tasks/cross_lingual_lm.py b/kosmos-g/fairseq/fairseq/tasks/cross_lingual_lm.py deleted file mode 100644 index 8f8fe7e2d..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/cross_lingual_lm.py +++ /dev/null @@ -1,191 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import itertools -import logging -import os -from collections import OrderedDict - -import numpy as np -from fairseq import tokenizer, utils -from fairseq.data import ConcatDataset, Dictionary, TokenBlockDataset, data_utils -from fairseq.data.legacy.masked_lm_dataset import MaskedLMDataset -from fairseq.data.legacy.masked_lm_dictionary import MaskedLMDictionary -from fairseq.data.multi_corpus_sampled_dataset import MultiCorpusSampledDataset -from fairseq.tasks import LegacyFairseqTask, register_task - - -logger = logging.getLogger(__name__) - - -@register_task("cross_lingual_lm") -class CrossLingualLMTask(LegacyFairseqTask): - """ - Task for training cross-lingual language models. - - For more details look at: https://arxiv.org/pdf/1901.07291.pdf - - Args: - dictionary (Dictionary): the dictionary for the input of the task - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument( - "data", - help="colon separated path to data directories list, \ - will be iterated upon during epochs in round-robin manner", - ) - parser.add_argument( - "--tokens-per-sample", - default=512, - type=int, - help="max number of total tokens over all segments" " per sample", - ) - parser.add_argument( - "--monolingual-langs", - default="en", - type=str, - help="comma separated list of languages for which we" - " want to train XLM on", - ) - parser.add_argument( - "--shuffle", - action="store_true", - help="shuffle each monolingual dataset while" " training", - ) - - def __init__(self, args, dictionary): - super().__init__(args) - self.dictionary = dictionary - self.seed = args.seed - self.distributed_world_size = args.distributed_world_size - self.langs2id = self._lang_to_id(args.monolingual_langs) - - def _lang_to_id(self, languages: str): - """ - Build a map from languages to ids. These ids are used as segment labels - for cross-lingual LM training. 
- """ - lang2id = {} - langs = [l.strip() for l in languages.split(",")] - for id, lang in enumerate(langs): - lang2id[lang] = id - return lang2id - - @classmethod - def load_dictionary(cls, filename): - return MaskedLMDictionary.load(filename) - - @classmethod - def build_dictionary( - cls, filenames, workers=1, threshold=-1, nwords=-1, padding_factor=8 - ): - d = MaskedLMDictionary() - for filename in filenames: - Dictionary.add_file_to_dictionary( - filename, d, tokenizer.tokenize_line, workers - ) - d.finalize(threshold=threshold, nwords=nwords, padding_factor=padding_factor) - return d - - @property - def target_dictionary(self): - return self.dictionary - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task.""" - dictionary = MaskedLMDictionary.load(os.path.join(args.data, "dict.txt")) - logger.info("dictionary: {} types".format(len(dictionary))) - return cls(args, dictionary) - - def _load_single_lang_dataset(self, split, epoch): - loaded_datasets = [] - - paths = utils.split_paths(self.args.data) - assert len(paths) > 0 - data_path = paths[(epoch - 1) % len(paths)] - - for k in itertools.count(): - split_k = split + (str(k) if k > 0 else "") - path = os.path.join(data_path, split_k) - - ds = data_utils.load_indexed_dataset( - path, self.dictionary, self.args.dataset_impl - ) - if ds is None: - if k > 0: - break - else: - raise FileNotFoundError( - "Dataset not found: {} ({})".format(split, data_path) - ) - - # Since we append each block with the classification_token, - # we need to effectively create blocks of length - # tokens_per_sample-1 - loaded_datasets.append( - TokenBlockDataset( - ds, - ds.sizes, - self.args.tokens_per_sample - 1, - pad=self.dictionary.pad(), - eos=self.dictionary.eos(), - ) - ) - - logger.info( - "{} {} {} examples".format(data_path, split_k, len(loaded_datasets[-1])) - ) - - if len(loaded_datasets) == 1: - dataset = loaded_datasets[0] - sizes = dataset.sizes - else: - dataset = ConcatDataset(loaded_datasets) - sizes = np.concatenate([ds.sizes for ds in loaded_datasets]) - - return dataset, sizes - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - """ - dataset_map = OrderedDict() - - for lang in self.langs2id.keys(): - # Datasets are expected to be in "split.lang" format (Eg: train.en) - language_split = "{}.{}".format(split, lang) - - block_dataset, sizes = self._load_single_lang_dataset( - split=language_split, epoch=epoch - ) - - dataset_map[lang] = MaskedLMDataset( - dataset=block_dataset, - sizes=sizes, - vocab=self.dictionary, - pad_idx=self.dictionary.pad(), - mask_idx=self.dictionary.mask(), - classif_token_idx=self.dictionary.eos(), - sep_token_idx=self.dictionary.eos(), - shuffle=getattr(self.args, "shuffle", False), - has_pairs=False, - segment_id=self.langs2id[lang], - seed=self.seed, - ) - - self.datasets[split] = MultiCorpusSampledDataset(dataset_map) - logger.info( - "{} {} {} examples".format( - utils.split_paths(self.args.data)[epoch - 1], - split, - len(self.datasets[split]), - ) - ) diff --git a/kosmos-g/fairseq/fairseq/tasks/denoising.py b/kosmos-g/fairseq/fairseq/tasks/denoising.py deleted file mode 100644 index 1d4f84c08..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/denoising.py +++ /dev/null @@ -1,276 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
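Annotation (not part of the deleted file): two small conventions from `CrossLingualLMTask` above, shown standalone. The comma-separated `--monolingual-langs` list becomes the segment-id map, and sharded data directories are visited round-robin by epoch (the paths here are hypothetical):

```python
def lang_to_id(languages: str) -> dict:
    # mirrors CrossLingualLMTask._lang_to_id
    return {lang.strip(): i for i, lang in enumerate(languages.split(","))}

print(lang_to_id("en,fr,de"))  # {'en': 0, 'fr': 1, 'de': 2}

# round-robin shard selection, as in _load_single_lang_dataset:
paths = ["/data/shard0", "/data/shard1"]  # hypothetical --data "dir0:dir1"
for epoch in range(1, 5):
    print(epoch, paths[(epoch - 1) % len(paths)])
```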
- -import logging -import os - -from fairseq import utils -from fairseq.data import ( - AppendTokenDataset, - DenoisingDataset, - Dictionary, - IdDataset, - NestedDictionaryDataset, - NumelDataset, - PadDataset, - PrependTokenDataset, - StripTokenDataset, - TokenBlockDataset, - data_utils, -) -from fairseq.data.encoders.utils import get_whole_word_mask -from fairseq.data.shorten_dataset import maybe_shorten_dataset -from fairseq.tasks import LegacyFairseqTask, register_task -import numpy as np - - -logger = logging.getLogger(__name__) - - -@register_task("denoising") -class DenoisingTask(LegacyFairseqTask): - """ - Denoising task for applying sequence to sequence denoising. (ie. BART) - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument("data", help="path to data directory") - parser.add_argument( - "--tokens-per-sample", - default=512, - type=int, - help="max number of total tokens over all segments" - " per sample for dataset", - ) - parser.add_argument( - "--sample-break-mode", - default="complete_doc", - type=str, - help="mode for breaking sentence", - ) - parser.add_argument( - "--mask", - default=0.0, - type=float, - help="fraction of words/subwords that will be masked", - ) - parser.add_argument( - "--mask-random", - default=0.0, - type=float, - help="instead of using [MASK], use random token this often", - ) - parser.add_argument( - "--insert", - default=0.0, - type=float, - help="insert this percentage of additional random tokens", - ) - parser.add_argument( - "--permute", - default=0.0, - type=float, - help="take this proportion of subwords and permute them", - ) - parser.add_argument( - "--rotate", - default=0.5, - type=float, - help="rotate this proportion of inputs", - ) - parser.add_argument( - "--poisson-lambda", - default=3.0, - type=float, - help="randomly shuffle sentences for this proportion of inputs", - ) - parser.add_argument( - "--permute-sentences", - default=0.0, - type=float, - help="shuffle this proportion of sentences in all inputs", - ) - parser.add_argument( - "--mask-length", - default="subword", - type=str, - choices=["subword", "word", "span-poisson"], - help="mask length to choose", - ) - parser.add_argument( - "--replace-length", - default=-1, - type=int, - help="when masking N tokens, replace with 0, 1, or N tokens (use -1 for N)", - ) - parser.add_argument( - "--max-source-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the source sequence", - ) - parser.add_argument( - "--max-target-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the target sequence", - ) - - parser.add_argument( - "--shorten-method", - default="none", - choices=["none", "truncate", "random_crop"], - help="if not none, shorten sequences that exceed --tokens-per-sample", - ) - parser.add_argument( - "--shorten-data-split-list", - default="", - help="comma-separated list of dataset splits to apply shortening to, " - 'e.g., "train,valid" (default: all dataset splits)', - ) - - def __init__(self, args, dictionary): - super().__init__(args) - self.dictionary = dictionary - self.seed = args.seed - - # add mask token - self.mask_idx = self.dictionary.add_symbol("<mask>") - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task.""" - paths = utils.split_paths(args.data) - assert len(paths) > 0 - dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt")) - logger.info("dictionary: {} types".format(len(dictionary))) - if not 
hasattr(args, "shuffle_instance"): - args.shuffle_instance = False - return cls(args, dictionary) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - """ - paths = utils.split_paths(self.args.data) - assert len(paths) > 0 - data_path = paths[(epoch - 1) % len(paths)] - split_path = os.path.join(data_path, split) - - dataset = data_utils.load_indexed_dataset( - split_path, - self.dictionary, - self.args.dataset_impl, - combine=combine, - ) - if dataset is None: - raise FileNotFoundError( - "Dataset not found: {} ({})".format(split, split_path) - ) - - dataset = StripTokenDataset(dataset, self.dictionary.eos()) - - dataset = maybe_shorten_dataset( - dataset, - split, - self.args.shorten_data_split_list, - self.args.shorten_method, - self.args.tokens_per_sample, - self.args.seed, - ) - - # create continuous blocks of tokens - dataset = TokenBlockDataset( - dataset, - dataset.sizes, - self.args.tokens_per_sample - 2, # one less for <s> and one for </s> - pad=self.dictionary.pad(), - eos=self.dictionary.eos(), - break_mode=self.args.sample_break_mode, - document_sep_len=0, - ) - logger.info("loaded {} blocks from: {}".format(len(dataset), split_path)) - - # prepend beginning-of-sentence token (<s>, equiv. to [CLS] in BERT) - dataset = PrependTokenDataset(dataset, self.source_dictionary.bos()) - dataset = AppendTokenDataset(dataset, self.source_dictionary.eos()) - - mask_whole_words = ( - get_whole_word_mask(self.args, self.source_dictionary) - if self.args.mask_length != "subword" - else None - ) - - self.datasets[split] = DenoisingDataset( - dataset, - dataset.sizes, - self.dictionary, - self.mask_idx, - mask_whole_words, - shuffle=self.args.shuffle_instance, - seed=self.seed, - args=self.args, - ) - logger.info( - "Split: {0}, Loaded {1} samples of denoising_dataset".format( - split, - len(self.datasets[split]), - ) - ) - - def build_dataset_for_inference(self, src_tokens, src_lengths, **kwargs): - """ - Generate batches for inference. We assume that the input begins with a - bos symbol (`<s>`) and ends with an eos symbol (`</s>`). 
- """ - pad = self.source_dictionary.pad() - eos = self.source_dictionary.eos() - src_dataset = TokenBlockDataset( - src_tokens, - src_lengths, - block_size=self.args.tokens_per_sample - 2, # for <s> and </s> - pad=pad, - eos=eos, - break_mode=self.args.sample_break_mode, - document_sep_len=0, - ) - prev_output_tokens = PrependTokenDataset( - StripTokenDataset(src_dataset, eos), eos - ) - src_dataset = PadDataset(src_dataset, pad_idx=pad, left_pad=False) - return NestedDictionaryDataset( - { - "id": IdDataset(), - "net_input": { - "src_tokens": src_dataset, - "src_lengths": NumelDataset(src_dataset, reduce=False), - "prev_output_tokens": PadDataset( - prev_output_tokens, pad_idx=pad, left_pad=False - ), - }, - "target": src_dataset, - }, - sizes=[np.array(src_lengths)], - ) - - def max_positions(self): - """Return the max sentence length allowed by the task.""" - return (self.args.max_source_positions, self.args.max_target_positions) - - @property - def source_dictionary(self): - """Return the source :class:`~fairseq.data.Dictionary`.""" - return self.dictionary - - @property - def target_dictionary(self): - """Return the target :class:`~fairseq.data.Dictionary`.""" - return self.dictionary diff --git a/kosmos-g/fairseq/fairseq/tasks/fairseq_task.py b/kosmos-g/fairseq/fairseq/tasks/fairseq_task.py deleted file mode 100644 index 3d6683f2d..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/fairseq_task.py +++ /dev/null @@ -1,695 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import warnings -from argparse import Namespace -from typing import Any, Callable, Dict, List - -import torch -from fairseq import metrics, search, tokenizer, utils -from fairseq.data import Dictionary, FairseqDataset, data_utils, encoders, iterators -from fairseq.dataclass import FairseqDataclass -from fairseq.dataclass.utils import gen_parser_from_dataclass -from fairseq.optim.amp_optimizer import AMPOptimizer -from omegaconf import DictConfig -from deepspeed.runtime.engine import DeepSpeedEngine - - -logger = logging.getLogger(__name__) - - -class StatefulContainer(object): - def __init__(self): - self._state = dict() - self._factories = dict() - - def add_factory(self, name, factory: Callable[[], Any]): - self._factories[name] = factory - - def merge_state_dict(self, state_dict: Dict[str, Any]): - self._state.update(state_dict) - - @property - def state_dict(self) -> Dict[str, Any]: - return self._state - - def __getattr__(self, name): - if name not in self._state and name in self._factories: - self._state[name] = self._factories[name]() - - if name in self._state: - return self._state[name] - - raise AttributeError(f"Task state has no factory for attribute {name}") - - -class FairseqTask(object): - """ - Tasks store dictionaries and provide helpers for loading/iterating over - Datasets, initializing the Model/Criterion and calculating the loss. - - Tasks have limited statefulness. In particular, state that needs to be - saved to/loaded from checkpoints needs to be stored in the `self.state` - :class:`StatefulContainer` object. For example:: - - self.state.add_factory("dictionary", self.load_dictionary) - print(self.state.dictionary) # calls self.load_dictionary() - - This is necessary so that when loading checkpoints, we can properly - recreate the task state after initializing the task instance. 
- """ - - @classmethod - def add_args(cls, parser): - """Add task-specific arguments to the parser.""" - dc = getattr(cls, "__dataclass", None) - if dc is not None: - gen_parser_from_dataclass(parser, dc()) - - @staticmethod - def logging_outputs_can_be_summed(criterion) -> bool: - """ - Whether the logging outputs returned by `train_step` and `valid_step` can - be summed across workers prior to calling `aggregate_logging_outputs`. - Setting this to True will improves distributed training speed. - """ - return criterion.logging_outputs_can_be_summed() - - def __init__(self, cfg: FairseqDataclass, **kwargs): - self.cfg = cfg - self.datasets = dict() - self.dataset_to_epoch_iter = dict() - self.state = StatefulContainer() - - @classmethod - def load_dictionary(cls, filename): - """Load the dictionary from the filename - - Args: - filename (str): the filename - """ - return Dictionary.load(filename) - - @classmethod - def build_dictionary( - cls, filenames, workers=1, threshold=-1, nwords=-1, padding_factor=8 - ): - """Build the dictionary - - Args: - filenames (list): list of filenames - workers (int): number of concurrent workers - threshold (int): defines the minimum word count - nwords (int): defines the total number of words in the final dictionary, - including special symbols - padding_factor (int): can be used to pad the dictionary size to be a - multiple of 8, which is important on some hardware (e.g., Nvidia - Tensor Cores). - """ - d = Dictionary() - for filename in filenames: - Dictionary.add_file_to_dictionary( - filename, d, tokenizer.tokenize_line, workers - ) - d.finalize(threshold=threshold, nwords=nwords, padding_factor=padding_factor) - return d - - @classmethod - def setup_task(cls, cfg: DictConfig, **kwargs): - """Setup the task (e.g., load dictionaries). - - Args: - cfg (omegaconf.DictConfig): parsed command-line arguments - """ - return cls(cfg, **kwargs) - - def has_sharded_data(self, split): - return os.pathsep in getattr(self.cfg, "data", "") - - def load_dataset( - self, - split: str, - combine: bool = False, - task_cfg: FairseqDataclass = None, - **kwargs, - ): - """Load a given dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - combine (bool): combines a split segmented into pieces into one dataset - task_cfg (FairseqDataclass): optional task configuration stored in the checkpoint that can be used - to load datasets - """ - raise NotImplementedError - - def dataset(self, split): - """ - Return a loaded dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - - Returns: - a :class:`~fairseq.data.FairseqDataset` corresponding to *split* - """ - from fairseq.data import FairseqDataset - - if split not in self.datasets: - raise KeyError("Dataset not loaded: " + split) - if not isinstance(self.datasets[split], FairseqDataset): - raise TypeError("Datasets are expected to be of type FairseqDataset") - return self.datasets[split] - - def filter_indices_by_size( - self, indices, dataset, max_positions=None, ignore_invalid_inputs=False - ): - """ - Filter examples that are too large - - Args: - indices (np.array): original array of sample indices - dataset (~fairseq.data.FairseqDataset): dataset to batch - max_positions (optional): max sentence length supported by the - model (default: None). - ignore_invalid_inputs (bool, optional): don't raise Exception for - sentences that are too long (default: False). 
- Returns: - np.array: array of filtered sample indices - """ - indices, ignored = dataset.filter_indices_by_size(indices, max_positions) - if len(ignored) > 0: - if not ignore_invalid_inputs: - raise Exception( - ( - "Size of sample #{} is invalid (={}) since max_positions={}, " - "skip this example with --skip-invalid-size-inputs-valid-test" - ).format(ignored[0], dataset.size(ignored[0]), max_positions) - ) - logger.warning( - ( - "{:,} samples have invalid sizes and will be skipped, " - "max_positions={}, first few sample ids={}" - ).format(len(ignored), max_positions, ignored[:10]) - ) - return indices - - def can_reuse_epoch_itr(self, dataset): - # We can reuse the epoch iterator across epochs as long as the dataset - # hasn't disabled it. We default to ``False`` here, although in practice - # this will be ``True`` for most datasets that inherit from - # ``FairseqDataset`` due to the base implementation there. - return getattr(dataset, "can_reuse_epoch_itr_across_epochs", False) - - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - data_buffer_size=0, - disable_iterator_cache=False, - skip_remainder_batch=False, - grouped_shuffling=False, - update_epoch_batch_itr=False, - ): - """ - Get an iterator that yields batches of data from the given dataset. - - Args: - dataset (~fairseq.data.FairseqDataset): dataset to batch - max_tokens (int, optional): max number of tokens in each batch - (default: None). - max_sentences (int, optional): max number of sentences in each - batch (default: None). - max_positions (optional): max sentence length supported by the - model (default: None). - ignore_invalid_inputs (bool, optional): don't raise Exception for - sentences that are too long (default: False). - required_batch_size_multiple (int, optional): require batch size to - be a multiple of N (default: 1). - seed (int, optional): seed for random number generator for - reproducibility (default: 1). - num_shards (int, optional): shard the data iterator into N - shards (default: 1). - shard_id (int, optional): which shard of the data iterator to - return (default: 0). - num_workers (int, optional): how many subprocesses to use for data - loading. 0 means the data will be loaded in the main process - (default: 0). - epoch (int, optional): the epoch to start the iterator from - (default: 1). - data_buffer_size (int, optional): number of batches to - preload (default: 0). - disable_iterator_cache (bool, optional): don't cache the - EpochBatchIterator (ignores `FairseqTask::can_reuse_epoch_itr`) - (default: False). - skip_remainder_batch (bool, optional): if set, discard the last - batch in each training epoch, as the last batch is often smaller than - local_batch_size * distributed_word_size (default: ``True``). - grouped_shuffling (bool, optional): group batches with each groups - containing num_shards batches and shuffle groups. Reduces difference - between sequence lengths among workers for batches sorted by length. 
- update_epoch_batch_itr (bool optional): if true then donot use the cached - batch iterator for the epoch - - Returns: - ~fairseq.iterators.EpochBatchIterator: a batched iterator over the - given dataset split - """ - can_reuse_epoch_itr = ( - not disable_iterator_cache - and not update_epoch_batch_itr - and self.can_reuse_epoch_itr(dataset) - ) - if can_reuse_epoch_itr and dataset in self.dataset_to_epoch_iter: - logger.debug("reusing EpochBatchIterator for epoch {}".format(epoch)) - return self.dataset_to_epoch_iter[dataset] - - assert isinstance(dataset, FairseqDataset) - - # initialize the dataset with the correct starting epoch - dataset.set_epoch(epoch) - - # get indices ordered by example size - with data_utils.numpy_seed(seed): - indices = dataset.ordered_indices() - - # filter examples that are too large - if max_positions is not None: - indices = self.filter_indices_by_size( - indices, dataset, max_positions, ignore_invalid_inputs - ) - - # create mini-batches with given size constraints - batch_sampler = dataset.batch_by_size( - indices, - max_tokens=max_tokens, - max_sentences=max_sentences, - required_batch_size_multiple=required_batch_size_multiple, - ) - - # return a reusable, sharded iterator - epoch_iter = iterators.EpochBatchIterator( - dataset=dataset, - collate_fn=dataset.collater, - batch_sampler=batch_sampler, - seed=seed, - num_shards=num_shards, - shard_id=shard_id, - num_workers=num_workers, - epoch=epoch, - buffer_size=data_buffer_size, - skip_remainder_batch=skip_remainder_batch, - grouped_shuffling=grouped_shuffling, - ) - - if can_reuse_epoch_itr: - self.dataset_to_epoch_iter[dataset] = epoch_iter - - return epoch_iter - - def build_model(self, cfg: FairseqDataclass, from_checkpoint=False): - """ - Build the :class:`~fairseq.models.BaseFairseqModel` instance for this - task. - - Args: - cfg (FairseqDataclass): configuration object - - Returns: - a :class:`~fairseq.models.BaseFairseqModel` instance - """ - from fairseq import models, quantization_utils - - model = models.build_model(cfg, self, from_checkpoint) - model = quantization_utils.quantize_model_scalar(model, cfg) - return model - - def build_criterion(self, cfg: DictConfig): - """ - Build the :class:`~fairseq.criterions.FairseqCriterion` instance for - this task. - - Args: - cfg (omegaconf.DictConfig): configration object - - Returns: - a :class:`~fairseq.criterions.FairseqCriterion` instance - """ - from fairseq import criterions - - return criterions.build_criterion(cfg, self) - - def build_generator( - self, - models, - args, - seq_gen_cls=None, - extra_gen_cls_kwargs=None, - prefix_allowed_tokens_fn=None, - ): - """ - Build a :class:`~fairseq.SequenceGenerator` instance for this - task. - - Args: - models (List[~fairseq.models.FairseqModel]): ensemble of models - args (fairseq.dataclass.configs.GenerationConfig): - configuration object (dataclass) for generation - extra_gen_cls_kwargs (Dict[str, Any]): extra options to pass - through to SequenceGenerator - prefix_allowed_tokens_fn (Callable[[int, torch.Tensor], List[int]]): - If provided, this function constrains the beam search to - allowed tokens only at each step. The provided function - should take 2 arguments: the batch ID (`batch_id: int`) - and a unidimensional tensor of token ids (`inputs_ids: - torch.Tensor`). It has to return a `List[int]` with the - allowed tokens for the next generation step conditioned - on the previously generated tokens (`inputs_ids`) and - the batch ID (`batch_id`). 
This argument is useful for - constrained generation conditioned on the prefix, as - described in "Autoregressive Entity Retrieval" - (https://arxiv.org/abs/2010.00904) and - https://github.com/facebookresearch/GENRE. - """ - if getattr(args, "score_reference", False): - from fairseq.sequence_scorer import SequenceScorer - - return SequenceScorer( - self.target_dictionary, - compute_alignment=getattr(args, "print_alignment", False), - ) - - from fairseq.sequence_generator import ( - SequenceGenerator, - SequenceGeneratorWithAlignment, - ) - - # Choose search strategy. Defaults to Beam Search. - sampling = getattr(args, "sampling", False) - sampling_topk = getattr(args, "sampling_topk", -1) - sampling_topp = getattr(args, "sampling_topp", -1.0) - diverse_beam_groups = getattr(args, "diverse_beam_groups", -1) - diverse_beam_strength = getattr(args, "diverse_beam_strength", 0.5) - match_source_len = getattr(args, "match_source_len", False) - diversity_rate = getattr(args, "diversity_rate", -1) - constrained = getattr(args, "constraints", False) - if prefix_allowed_tokens_fn is None: - prefix_allowed_tokens_fn = getattr(args, "prefix_allowed_tokens_fn", None) - if ( - sum( - int(cond) - for cond in [ - sampling, - diverse_beam_groups > 0, - match_source_len, - diversity_rate > 0, - ] - ) - > 1 - ): - raise ValueError("Provided Search parameters are mutually exclusive.") - assert sampling_topk < 0 or sampling, "--sampling-topk requires --sampling" - assert sampling_topp < 0 or sampling, "--sampling-topp requires --sampling" - - if sampling: - search_strategy = search.Sampling( - self.target_dictionary, sampling_topk, sampling_topp - ) - elif diverse_beam_groups > 0: - search_strategy = search.DiverseBeamSearch( - self.target_dictionary, diverse_beam_groups, diverse_beam_strength - ) - elif match_source_len: - # this is useful for tagging applications where the output - # length should match the input length, so we hardcode the - # length constraints for simplicity - search_strategy = search.LengthConstrainedBeamSearch( - self.target_dictionary, - min_len_a=1, - min_len_b=0, - max_len_a=1, - max_len_b=0, - ) - elif diversity_rate > -1: - search_strategy = search.DiverseSiblingsSearch( - self.target_dictionary, diversity_rate - ) - elif constrained: - search_strategy = search.LexicallyConstrainedBeamSearch( - self.target_dictionary, args.constraints - ) - elif prefix_allowed_tokens_fn: - search_strategy = search.PrefixConstrainedBeamSearch( - self.target_dictionary, prefix_allowed_tokens_fn - ) - else: - search_strategy = search.BeamSearch(self.target_dictionary) - - extra_gen_cls_kwargs = extra_gen_cls_kwargs or {} - if seq_gen_cls is None: - if getattr(args, "print_alignment", False): - seq_gen_cls = SequenceGeneratorWithAlignment - extra_gen_cls_kwargs["print_alignment"] = args.print_alignment - else: - seq_gen_cls = SequenceGenerator - - return seq_gen_cls( - models, - self.target_dictionary, - beam_size=getattr(args, "beam", 5), - max_len_a=getattr(args, "max_len_a", 0), - max_len_b=getattr(args, "max_len_b", 200), - min_len=getattr(args, "min_len", 1), - normalize_scores=(not getattr(args, "unnormalized", False)), - len_penalty=getattr(args, "lenpen", 1), - unk_penalty=getattr(args, "unkpen", 0), - temperature=getattr(args, "temperature", 1.0), - match_source_len=getattr(args, "match_source_len", False), - no_repeat_ngram_size=getattr(args, "no_repeat_ngram_size", 0), - search_strategy=search_strategy, - **extra_gen_cls_kwargs, - ) - - def train_step( - self, sample, model, criterion, 
optimizer, update_num, ignore_grad=False - ): - """ - Do forward and backward, and return the loss as computed by *criterion* - for the given *model* and *sample*. - - Args: - sample (dict): the mini-batch. The format is defined by the - :class:`~fairseq.data.FairseqDataset`. - model (~fairseq.models.BaseFairseqModel): the model - criterion (~fairseq.criterions.FairseqCriterion): the criterion - optimizer (~fairseq.optim.FairseqOptimizer): the optimizer - update_num (int): the current update - ignore_grad (bool): multiply loss by 0 if this is set to True - - Returns: - tuple: - - the loss - - the sample size, which is used as the denominator for the - gradient - - logging outputs to display while training - """ - model.train() - model.set_num_updates(update_num) - with torch.autograd.profiler.record_function("forward"): - with torch.cuda.amp.autocast(enabled=(isinstance(optimizer, AMPOptimizer))): - loss, sample_size, logging_output = criterion(model, sample) - if ignore_grad: - loss *= 0 - with torch.autograd.profiler.record_function("backward"): - if isinstance(model, DeepSpeedEngine): - model.backward(loss) - else: - optimizer.backward(loss) - return loss, sample_size, logging_output - - def valid_step(self, sample, model, criterion): - model.eval() - with torch.no_grad(): - loss, sample_size, logging_output = criterion(model, sample) - return loss, sample_size, logging_output - - def optimizer_step(self, optimizer, model, update_num): - if isinstance(model, DeepSpeedEngine): - model.step() - else: - optimizer.step() - - def build_dataset_for_inference( - self, src_tokens: List[torch.Tensor], src_lengths: List[int], **kwargs - ) -> torch.utils.data.Dataset: - raise NotImplementedError - - def inference_step( - self, generator, models, sample, prefix_tokens=None, constraints=None - ): - with torch.no_grad(): - return generator.generate( - models, sample, prefix_tokens=prefix_tokens, constraints=constraints - ) - - def begin_epoch(self, epoch, model): - """Hook function called before the start of each epoch.""" - pass - - def begin_valid_epoch(self, epoch, model): - """Hook function called before the start of each validation epoch.""" - pass - - def aggregate_logging_outputs(self, logging_outputs, criterion): - """[deprecated] Aggregate logging outputs from data parallel training.""" - utils.deprecation_warning( - "The aggregate_logging_outputs API is deprecated. " - "Please use the reduce_metrics API instead." - ) - with metrics.aggregate() as agg: - self.reduce_metrics(logging_outputs, criterion) - return agg.get_smoothed_values() - - def reduce_metrics(self, logging_outputs, criterion): - """Aggregate logging outputs from data parallel training.""" - # backward compatibility for tasks that override aggregate_logging_outputs - base_func = FairseqTask.aggregate_logging_outputs - self_func = getattr(self, "aggregate_logging_outputs").__func__ - if self_func is not base_func: - utils.deprecation_warning( - "Tasks should implement the reduce_metrics API. " - "Falling back to deprecated aggregate_logging_outputs API." 
- ) - agg_logging_outputs = self.aggregate_logging_outputs( - logging_outputs, criterion - ) - for k, v in agg_logging_outputs.items(): - metrics.log_scalar(k, v) - return - - if not any("ntokens" in log for log in logging_outputs): - warnings.warn( - "ntokens not found in Criterion logging outputs, cannot log wpb or wps" - ) - else: - ntokens = sum(log.get("ntokens", 0) for log in logging_outputs) - metrics.log_scalar("wpb", ntokens, priority=180, round=1) - metrics.log_speed("wps", ntokens, priority=90, round=1) - - if not any("nsentences" in log for log in logging_outputs): - warnings.warn( - "nsentences not found in Criterion logging outputs, cannot log bsz" - ) - else: - nsentences = sum(log.get("nsentences", 0) for log in logging_outputs) - metrics.log_scalar("bsz", nsentences, priority=190, round=1) - - criterion.__class__.reduce_metrics(logging_outputs) - - def state_dict(self): - if self.state is not None: - return self.state.state_dict - return {} - - def load_state_dict(self, state_dict: Dict[str, Any]): - if self.state is not None: - self.state.merge_state_dict(state_dict) - - def max_positions(self): - """Return the max input length allowed by the task.""" - return None - - @property - def source_dictionary(self): - """Return the source :class:`~fairseq.data.Dictionary` (if applicable - for this task).""" - raise NotImplementedError - - @property - def target_dictionary(self): - """Return the target :class:`~fairseq.data.Dictionary` (if applicable - for this task).""" - raise NotImplementedError - - def build_tokenizer(self, args): - """Build the pre-tokenizer for this task.""" - return encoders.build_tokenizer(args) - - def build_bpe(self, args): - """Build the tokenizer for this task.""" - return encoders.build_bpe(args) - - def get_interactive_tokens_and_lengths(self, lines, encode_fn): - tokens = [ - self.source_dictionary.encode_line( - encode_fn(src_str), add_if_not_exist=False - ).long() - for src_str in lines - ] - lengths = [t.numel() for t in tokens] - return tokens, lengths - - -class LegacyFairseqTask(FairseqTask): - def __init__(self, args: Namespace): - super().__init__(None) - self.args = args - self.datasets = {} - self.dataset_to_epoch_iter = {} - - @classmethod - def setup_task(cls, args: Namespace, **kwargs): - """Setup the task (e.g., load dictionaries). - - Args: - args (argparse.Namespace): parsed command-line arguments - """ - return cls(args, **kwargs) - - def has_sharded_data(self, split): - return os.pathsep in getattr(self.args, "data", "") - - def build_model(self, args: Namespace, from_checkpoint=False): - """ - Build the :class:`~fairseq.models.BaseFairseqModel` instance for this - task. - - Args: - args (argparse.Namespace): parsed command-line arguments - - Returns: - a :class:`~fairseq.models.BaseFairseqModel` instance - """ - from fairseq import models, quantization_utils - - model = models.build_model(args, self, from_checkpoint) - model = quantization_utils.quantize_model_scalar(model, args) - return model - - def build_criterion(self, args: Namespace): - """ - Build the :class:`~fairseq.criterions.FairseqCriterion` instance for - this task. 
- - Args: - args (argparse.Namespace): parsed command-line arguments - - Returns: - a :class:`~fairseq.criterions.FairseqCriterion` instance - """ - from fairseq import criterions - - return criterions.build_criterion(args, self) diff --git a/kosmos-g/fairseq/fairseq/tasks/frm_text_to_speech.py b/kosmos-g/fairseq/fairseq/tasks/frm_text_to_speech.py deleted file mode 100644 index 667f5f8ee..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/frm_text_to_speech.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging - -from fairseq.data.audio.frm_text_to_speech_dataset import FrmTextToSpeechDatasetCreator -from fairseq.tasks import register_task -from fairseq.tasks.text_to_speech import TextToSpeechTask - - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=logging.INFO, -) -logger = logging.getLogger(__name__) - - -@register_task("frm_text_to_speech") -class FrmTextToSpeechTask(TextToSpeechTask): - @staticmethod - def add_args(parser): - TextToSpeechTask.add_args(parser) - parser.add_argument("--do_chunk", action="store_true", help="train on chunks") - parser.add_argument("--chunk_bound", default=-1, type=int) - parser.add_argument("--chunk_init", default=50, type=int) - parser.add_argument("--chunk_incr", default=5, type=int) - parser.add_argument("--add_eos", action="store_true") - parser.add_argument("--dedup", action="store_true") - parser.add_argument("--ref_fpu", default=-1, type=float) - - def load_dataset(self, split, **unused_kwargs): - is_train_split = split.startswith("train") - pre_tokenizer = self.build_tokenizer(self.args) - bpe_tokenizer = self.build_bpe(self.args) - self.datasets[split] = FrmTextToSpeechDatasetCreator.from_tsv( - self.args.data, - self.data_cfg, - split, - self.src_dict, - pre_tokenizer, - bpe_tokenizer, - is_train_split=is_train_split, - n_frames_per_step=self.args.n_frames_per_step, - speaker_to_id=self.speaker_to_id, - do_chunk=self.args.do_chunk, - chunk_bound=self.args.chunk_bound, - chunk_init=self.args.chunk_init, - chunk_incr=self.args.chunk_incr, - add_eos=self.args.add_eos, - dedup=self.args.dedup, - ref_fpu=self.args.ref_fpu, - ) diff --git a/kosmos-g/fairseq/fairseq/tasks/hubert_pretraining.py b/kosmos-g/fairseq/fairseq/tasks/hubert_pretraining.py deleted file mode 100644 index b8667d42a..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/hubert_pretraining.py +++ /dev/null @@ -1,191 +0,0 @@ -# Copyright (c) 2017-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the LICENSE file in -# the root directory of this source tree. An additional grant of patent rights -# can be found in the PATENTS file in the same directory. 
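
The task files deleted in this diff all plug into fairseq through the same registry. Older tasks such as the frame-level TTS task above expose their options via a static `add_args(parser)`, while newer ones such as the HuBERT task below bind a `FairseqDataclass` config. For orientation, a minimal sketch of the dataclass style; the task name and config fields here are hypothetical, not part of the deleted code:

```python
# Minimal sketch of a dataclass-configured fairseq task (hypothetical names).
from dataclasses import dataclass, field

from fairseq.dataclass.configs import FairseqDataclass
from fairseq.tasks import register_task
from fairseq.tasks.fairseq_task import FairseqTask
from omegaconf import MISSING


@dataclass
class ToyPretrainingConfig(FairseqDataclass):
    data: str = field(default=MISSING, metadata={"help": "path to data directory"})
    sample_rate: int = field(default=16_000, metadata={"help": "target sample rate"})


@register_task("toy_pretraining", dataclass=ToyPretrainingConfig)
class ToyPretrainingTask(FairseqTask):
    cfg: ToyPretrainingConfig

    @classmethod
    def setup_task(cls, cfg: ToyPretrainingConfig, **kwargs):
        # real tasks load dictionaries etc. here before constructing the task
        return cls(cfg)

    def load_dataset(self, split, **kwargs):
        # a real task would build a FairseqDataset from cfg.data
        # and store it in self.datasets[split]
        raise NotImplementedError
```

Registering the config class is what lets hydra/argparse surface the fields as `--data` and `--sample-rate` without a hand-written `add_args`.
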
- -import logging -import os -import sys -from typing import Dict, List, Optional, Tuple - -import numpy as np - -from dataclasses import dataclass, field -from fairseq.data import Dictionary, HubertDataset -from fairseq.dataclass.configs import FairseqDataclass -from fairseq.tasks import register_task -from fairseq.tasks.fairseq_task import FairseqTask -from omegaconf import MISSING - -logger = logging.getLogger(__name__) - - -class LabelEncoder(object): - def __init__(self, dictionary: Dictionary) -> None: - self.dictionary = dictionary - - def __call__(self, label: str) -> List[str]: - return self.dictionary.encode_line( - label, - append_eos=False, - add_if_not_exist=False, - ) - - -@dataclass -class HubertPretrainingConfig(FairseqDataclass): - data: str = field(default=MISSING, metadata={"help": "path to data directory"}) - fine_tuning: bool = field( - default=False, metadata={"help": "set to true if fine-tuning Hubert"} - ) - labels: List[str] = field( - default_factory=lambda: ["ltr"], - metadata={ - "help": ( - "extension of the label files to load, frame-level labels for" - " pre-training, and sequence-level label for fine-tuning" - ) - }, - ) - label_dir: Optional[str] = field( - default=None, - metadata={ - "help": "if set, looks for labels in this directory instead", - }, - ) - label_rate: int = field( - default=-1, - metadata={"help": "label frame rate. -1 for sequence label"}, - ) - sample_rate: int = field( - default=16_000, - metadata={ - "help": "target sample rate. audio files will be up/down " - "sampled to this rate" - }, - ) - normalize: bool = field( - default=False, - metadata={"help": "if set, normalizes input to have 0 mean and unit variance"}, - ) - enable_padding: bool = field( - default=False, - metadata={"help": "pad shorter samples instead of cropping"}, - ) - max_keep_size: Optional[int] = field( - default=None, - metadata={"help": "exclude sample longer than this"}, - ) - max_sample_size: Optional[int] = field( - default=None, - metadata={"help": "max sample size to crop to for batching"}, - ) - min_sample_size: Optional[int] = field( - default=None, - metadata={"help": "min sample size to crop to for batching"}, - ) - single_target: Optional[bool] = field( - default=False, - metadata={ - "help": "if set, AddTargetDatasets outputs same keys " "as AddTargetDataset" - }, - ) - random_crop: Optional[bool] = field( - default=True, - metadata={"help": "always crop from the beginning if false"}, - ) - pad_audio: Optional[bool] = field( - default=False, - metadata={"help": "pad audio to the longest one in the batch if true"}, - ) - - -@register_task("hubert_pretraining", dataclass=HubertPretrainingConfig) -class HubertPretrainingTask(FairseqTask): - - cfg: HubertPretrainingConfig - - def __init__( - self, - cfg: HubertPretrainingConfig, - ) -> None: - super().__init__(cfg) - - logger.info(f"current directory is {os.getcwd()}") - logger.info(f"HubertPretrainingTask Config {cfg}") - - self.cfg = cfg - self.fine_tuning = cfg.fine_tuning - - if cfg.fine_tuning: - self.state.add_factory("target_dictionary", self.load_dictionaries) - else: - self.state.add_factory("dictionaries", self.load_dictionaries) - - self.blank_symbol = "<s>" - - @property - def source_dictionary(self) -> Optional[Dictionary]: - return None - - @property - def target_dictionary(self) -> Optional[Dictionary]: - return self.state.target_dictionary - - @property - def dictionaries(self) -> List[Dictionary]: - return self.state.dictionaries - - @classmethod - def setup_task( - cls, cfg: 
HubertPretrainingConfig, **kwargs - ) -> "HubertPretrainingTask": - return cls(cfg) - - def load_dictionaries(self): - label_dir = self.cfg.data if self.cfg.label_dir is None else self.cfg.label_dir - dictionaries = [ - Dictionary.load(f"{label_dir}/dict.{label}.txt") - for label in self.cfg.labels - ] - return dictionaries[0] if self.cfg.fine_tuning else dictionaries - - def get_label_dir(self) -> str: - if self.cfg.label_dir is None: - return self.cfg.data - return self.cfg.label_dir - - def load_dataset(self, split: str, **kwargs) -> None: - manifest = f"{self.cfg.data}/{split}.tsv" - dicts = [self.target_dictionary] if self.cfg.fine_tuning else self.dictionaries - pad_list = [dict.pad() for dict in dicts] - eos_list = [dict.eos() for dict in dicts] - procs = [LabelEncoder(dict) for dict in dicts] - paths = [f"{self.get_label_dir()}/{split}.{l}" for l in self.cfg.labels] - - # hubert v1: pad_audio=True, random_crop=False; - self.datasets[split] = HubertDataset( - manifest, - sample_rate=self.cfg.sample_rate, - label_paths=paths, - label_rates=self.cfg.label_rate, - pad_list=pad_list, - eos_list=eos_list, - label_processors=procs, - max_keep_sample_size=self.cfg.max_keep_size, - min_keep_sample_size=self.cfg.min_sample_size, - max_sample_size=self.cfg.max_sample_size, - pad_audio=self.cfg.pad_audio, - normalize=self.cfg.normalize, - store_labels=False, - random_crop=self.cfg.random_crop, - single_target=self.cfg.single_target, - ) - - def max_positions(self) -> Tuple[int, int]: - return (sys.maxsize, sys.maxsize) - - def filter_indices_by_size(self, indices: np.array, *args, **kwargs) -> np.array: - return indices diff --git a/kosmos-g/fairseq/fairseq/tasks/language_modeling.py b/kosmos-g/fairseq/fairseq/tasks/language_modeling.py deleted file mode 100644 index 44d5324b3..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/language_modeling.py +++ /dev/null @@ -1,383 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -from dataclasses import dataclass, field -from typing import Optional - -import numpy as np -import torch -from fairseq import utils -from fairseq.data import ( - AppendTokenDataset, - Dictionary, - IdDataset, - LMContextWindowDataset, - MonolingualDataset, - NestedDictionaryDataset, - NumelDataset, - PadDataset, - PrependTokenDataset, - StripTokenDataset, - TokenBlockDataset, - TruncatedDictionary, - data_utils, -) -from fairseq.data.indexed_dataset import get_available_dataset_impl -from fairseq.data.shorten_dataset import maybe_shorten_dataset -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.tasks import LegacyFairseqTask, register_task -from omegaconf import II - - -SAMPLE_BREAK_MODE_CHOICES = ChoiceEnum(["none", "complete", "complete_doc", "eos"]) -SHORTEN_METHOD_CHOICES = ChoiceEnum(["none", "truncate", "random_crop"]) -logger = logging.getLogger(__name__) - - -@dataclass -class LanguageModelingConfig(FairseqDataclass): - data: Optional[str] = field( - default=None, metadata={"help": "path to data directory"} - ) - sample_break_mode: SAMPLE_BREAK_MODE_CHOICES = field( - default="none", - metadata={ - "help": 'If omitted or "none", fills each sample with tokens-per-sample ' - 'tokens. If set to "complete", splits samples only at the end ' - "of sentence, but may include multiple sentences per sample. " - '"complete_doc" is similar but respects doc boundaries. 
' - 'If set to "eos", includes only one sentence per sample.' - }, - ) - tokens_per_sample: int = field( - default=1024, - metadata={"help": "max number of tokens per sample for LM dataset"}, - ) - output_dictionary_size: int = field( - default=-1, metadata={"help": "limit the size of output dictionary"} - ) - self_target: bool = field(default=False, metadata={"help": "include self target"}) - future_target: bool = field( - default=False, metadata={"help": "include future target"} - ) - past_target: bool = field(default=False, metadata={"help": "include past target"}) - add_bos_token: bool = field( - default=False, metadata={"help": "prepend beginning of sentence token (<s>)"} - ) - max_target_positions: Optional[int] = field( - default=None, metadata={"help": "max number of tokens in the target sequence"} - ) - shorten_method: SHORTEN_METHOD_CHOICES = field( - default="none", - metadata={ - "help": "if not none, shorten sequences that exceed --tokens-per-sample" - }, - ) - shorten_data_split_list: str = field( - default="", - metadata={ - "help": "comma-separated list of dataset splits to apply shortening to, " - 'e.g., "train,valid" (default: all dataset splits)' - }, - ) - pad_to_fixed_length: Optional[bool] = field( - default=False, - metadata={"help": "pad to fixed length"}, - ) - pad_to_fixed_bsz: Optional[bool] = field( - default=False, - metadata={"help": "boolean to pad to fixed batch size"}, - ) - - # TODO common vars below add to parent - seed: int = II("common.seed") - batch_size: Optional[int] = II("dataset.batch_size") - batch_size_valid: Optional[int] = II("dataset.batch_size_valid") - dataset_impl: Optional[ChoiceEnum(get_available_dataset_impl())] = II( - "dataset.dataset_impl" - ) - data_buffer_size: int = II("dataset.data_buffer_size") - tpu: bool = II("common.tpu") - use_plasma_view: bool = II("common.use_plasma_view") - plasma_path: str = II("common.plasma_path") - - -@register_task("language_modeling", dataclass=LanguageModelingConfig) -class LanguageModelingTask(LegacyFairseqTask): - """ - Train a language model. - - Args: - dictionary (~fairseq.data.Dictionary): the dictionary for the input of - the language model - output_dictionary (~fairseq.data.Dictionary): the dictionary for the - output of the language model. In most cases it will be the same as - *dictionary*, but could possibly be a more limited version of the - dictionary (if ``--output-dictionary-size`` is used). - targets (List[str]): list of the target types that the language model - should predict. Can be one of "self", "future", and "past". - Defaults to "future". - - .. note:: - - The language modeling task is compatible with :mod:`fairseq-train`, - :mod:`fairseq-generate`, :mod:`fairseq-interactive` and - :mod:`fairseq-eval-lm`. - - The language modeling task provides the following additional command-line - arguments: - - .. 
argparse:: - :ref: fairseq.tasks.language_modeling_parser - :prog: - """ - - def __init__(self, args, dictionary, output_dictionary=None, targets=None): - super().__init__(args) - self.dictionary = dictionary - self.output_dictionary = output_dictionary or dictionary - - if targets is None: - targets = ["future"] - self.targets = targets - - @classmethod - def setup_dictionary(cls, args, **kwargs): - dictionary = None - output_dictionary = None - if args.data: - paths = utils.split_paths(args.data) - assert len(paths) > 0 - dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt")) - logger.info("dictionary: {} types".format(len(dictionary))) - output_dictionary = dictionary - if args.output_dictionary_size >= 0: - output_dictionary = TruncatedDictionary( - dictionary, args.output_dictionary_size - ) - return (dictionary, output_dictionary) - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task (e.g., load dictionaries). - - Args: - args (argparse.Namespace): parsed command-line arguments - """ - dictionary, output_dictionary = cls.setup_dictionary(args, **kwargs) - - # upgrade old checkpoints - if getattr(args, "exclude_self_target", False): - args.self_target = False - - targets = [] - if getattr(args, "self_target", False): - targets.append("self") - if getattr(args, "future_target", False): - targets.append("future") - if getattr(args, "past_target", False): - targets.append("past") - if len(targets) == 0: - # standard language modeling - targets = ["future"] - - return cls(args, dictionary, output_dictionary, targets=targets) - - def build_model(self, args, from_checkpoint=False): - model = super().build_model(args, from_checkpoint) - for target in self.targets: - if target not in model.supported_targets: - raise ValueError( - "Unsupported language modeling target: {}".format(target) - ) - - return model - - def load_dataset( - self, split: str, epoch=1, combine=False, **kwargs - ) -> MonolingualDataset: - """Load a given dataset split. 
- - Args: - split (str): name of the split (e.g., train, valid, valid1, test) - """ - paths = utils.split_paths(self.args.data) - assert len(paths) > 0 - - data_path = paths[(epoch - 1) % len(paths)] - split_path = os.path.join(data_path, split) - - # each process has its own copy of the raw data (likely to be an np.memmap) - dataset = data_utils.load_indexed_dataset( - split_path, self.dictionary, self.args.dataset_impl, combine=combine - ) - if dataset is None: - raise FileNotFoundError(f"Dataset not found: {split} ({split_path})") - - dataset = maybe_shorten_dataset( - dataset, - split, - self.args.shorten_data_split_list, - self.args.shorten_method, - self.args.tokens_per_sample, - self.args.seed, - ) - dataset = TokenBlockDataset( - dataset, - dataset.sizes, - self.args.tokens_per_sample, - pad=self.dictionary.pad(), - eos=self.dictionary.eos(), - break_mode=self.args.sample_break_mode, - include_targets=True, - use_plasma_view=self.args.use_plasma_view, - split_path=split_path, - plasma_path=self.args.plasma_path, - ) - - add_eos_for_other_targets = ( - self.args.sample_break_mode is not None - and self.args.sample_break_mode != "none" - ) - fixed_pad_length = None - if self.args.pad_to_fixed_length: - fixed_pad_length = self.args.tokens_per_sample - - pad_to_bsz = None - if self.args.pad_to_fixed_bsz: - pad_to_bsz = ( - self.args.batch_size_valid if "valid" in split else self.args.batch_size - ) - - self.datasets[split] = MonolingualDataset( - dataset=dataset, - sizes=dataset.sizes, - src_vocab=self.dictionary, - tgt_vocab=self.output_dictionary, - add_eos_for_other_targets=add_eos_for_other_targets, - shuffle=True, - targets=self.targets, - add_bos_token=self.args.add_bos_token, - fixed_pad_length=fixed_pad_length, - pad_to_bsz=pad_to_bsz, - ) - - def build_dataset_for_inference(self, src_tokens, src_lengths, **kwargs): - """ - Generate batches for inference. We prepend an eos token to src_tokens - (or bos if `--add-bos-token` is set) and we append a <pad> to target. - This is convenient both for generation with a prefix and LM scoring. 
- """ - dataset = StripTokenDataset( - TokenBlockDataset( - src_tokens, - src_lengths, - block_size=None, # ignored for "eos" break mode - pad=self.source_dictionary.pad(), - eos=self.source_dictionary.eos(), - break_mode="eos", - ), - # remove eos from (end of) target sequence - self.source_dictionary.eos(), - ) - src_dataset = PrependTokenDataset( - dataset, - token=( - self.source_dictionary.bos() - if getattr(self.args, "add_bos_token", False) - else self.source_dictionary.eos() - ), - ) - tgt_dataset = AppendTokenDataset(dataset, token=self.source_dictionary.pad()) - return NestedDictionaryDataset( - { - "id": IdDataset(), - "net_input": { - "src_tokens": PadDataset( - src_dataset, - pad_idx=self.source_dictionary.pad(), - left_pad=False, - ), - "src_lengths": NumelDataset(src_dataset, reduce=False), - }, - "target": PadDataset( - tgt_dataset, pad_idx=self.source_dictionary.pad(), left_pad=False - ), - }, - sizes=[np.array(src_lengths)], - ) - - def inference_step( - self, generator, models, sample, prefix_tokens=None, constraints=None - ): - with torch.no_grad(): - # Generation will always be conditioned on bos_token - if getattr(self.args, "add_bos_token", False): - bos_token = self.source_dictionary.bos() - else: - bos_token = self.source_dictionary.eos() - - if constraints is not None: - raise NotImplementedError( - "Constrained decoding with the language_modeling task is not supported" - ) - - # SequenceGenerator doesn't use src_tokens directly, we need to - # pass the `prefix_tokens` argument instead - if prefix_tokens is None and sample["net_input"]["src_tokens"].nelement(): - prefix_tokens = sample["net_input"]["src_tokens"] - if prefix_tokens[:, 0].eq(bos_token).all(): - prefix_tokens = prefix_tokens[:, 1:] - - return generator.generate( - models, sample, prefix_tokens=prefix_tokens, bos_token=bos_token - ) - - def eval_lm_dataloader( - self, - dataset, - max_tokens: Optional[int] = 36000, - batch_size: Optional[int] = None, - max_positions: Optional[int] = None, - num_shards: int = 1, - shard_id: int = 0, - num_workers: int = 1, - data_buffer_size: int = 10, - # ensures that every evaluated token has access to a context of at least - # this size, if possible - context_window: int = 0, - ): - if context_window > 0: - dataset = LMContextWindowDataset( - dataset=dataset, - tokens_per_sample=self.args.tokens_per_sample, - context_window=context_window, - pad_idx=self.source_dictionary.pad(), - ) - return self.get_batch_iterator( - dataset=dataset, - max_tokens=max_tokens, - max_sentences=batch_size, - max_positions=max_positions, - ignore_invalid_inputs=True, - num_shards=num_shards, - shard_id=shard_id, - num_workers=num_workers, - data_buffer_size=data_buffer_size, - ).next_epoch_itr(shuffle=False) - - @property - def source_dictionary(self): - """Return the :class:`~fairseq.data.Dictionary` for the language - model.""" - return self.dictionary - - @property - def target_dictionary(self): - """Return the :class:`~fairseq.data.Dictionary` for the language - model.""" - return self.output_dictionary diff --git a/kosmos-g/fairseq/fairseq/tasks/legacy_masked_lm.py b/kosmos-g/fairseq/fairseq/tasks/legacy_masked_lm.py deleted file mode 100644 index 975497654..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/legacy_masked_lm.py +++ /dev/null @@ -1,152 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
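
The `context_window` argument of `eval_lm_dataloader` above (served by `LMContextWindowDataset`) exists because perplexity evaluation chops the corpus into fixed-size blocks: without it, the first tokens of each block are scored with no left context. A toy sketch of the idea, purely illustrative and not the fairseq implementation:

```python
# Toy sketch: prepend up to `context_window` tokens from the previous block
# so every scored token sees some left context.
def add_context(blocks, context_window):
    out, prev = [], []
    for block in blocks:
        ctx = prev[-context_window:] if context_window > 0 else []
        out.append((ctx + block, len(block)))  # (tokens fed, tokens actually scored)
        prev = block
    return out

blocks = [[1, 2, 3, 4], [5, 6, 7, 8]]
print(add_context(blocks, context_window=2))
# [([1, 2, 3, 4], 4), ([3, 4, 5, 6, 7, 8], 4)]
```
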
- -import itertools -import logging -import os - -import numpy as np -from fairseq import tokenizer, utils -from fairseq.data import ConcatDataset, Dictionary, data_utils, indexed_dataset -from fairseq.data.legacy.block_pair_dataset import BlockPairDataset -from fairseq.data.legacy.masked_lm_dataset import MaskedLMDataset -from fairseq.data.legacy.masked_lm_dictionary import BertDictionary -from fairseq.tasks import LegacyFairseqTask, register_task - - -logger = logging.getLogger(__name__) - - -@register_task("legacy_masked_lm") -class LegacyMaskedLMTask(LegacyFairseqTask): - """ - Task for training Masked LM (BERT) model. - Args: - dictionary (Dictionary): the dictionary for the input of the task - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument( - "data", - help="colon separated path to data directories list, \ - will be iterated upon during epochs in round-robin manner", - ) - parser.add_argument( - "--tokens-per-sample", - default=512, - type=int, - help="max number of total tokens over all segments" - " per sample for BERT dataset", - ) - parser.add_argument( - "--break-mode", default="doc", type=str, help="mode for breaking sentence" - ) - parser.add_argument("--shuffle-dataset", action="store_true", default=False) - - def __init__(self, args, dictionary): - super().__init__(args) - self.dictionary = dictionary - self.seed = args.seed - - @classmethod - def load_dictionary(cls, filename): - return BertDictionary.load(filename) - - @classmethod - def build_dictionary( - cls, filenames, workers=1, threshold=-1, nwords=-1, padding_factor=8 - ): - d = BertDictionary() - for filename in filenames: - Dictionary.add_file_to_dictionary( - filename, d, tokenizer.tokenize_line, workers - ) - d.finalize(threshold=threshold, nwords=nwords, padding_factor=padding_factor) - return d - - @property - def target_dictionary(self): - return self.dictionary - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task.""" - paths = utils.split_paths(args.data) - assert len(paths) > 0 - dictionary = BertDictionary.load(os.path.join(paths[0], "dict.txt")) - logger.info("dictionary: {} types".format(len(dictionary))) - - return cls(args, dictionary) - - def load_dataset(self, split, epoch=1, combine=False): - """Load a given dataset split. 
-
-        Args:
-            split (str): name of the split (e.g., train, valid, test)
-        """
-        loaded_datasets = []
-
-        paths = utils.split_paths(self.args.data)
-        assert len(paths) > 0
-        data_path = paths[(epoch - 1) % len(paths)]
-        logger.info("data_path: %s", data_path)
-
-        for k in itertools.count():
-            split_k = split + (str(k) if k > 0 else "")
-            path = os.path.join(data_path, split_k)
-            ds = indexed_dataset.make_dataset(
-                path,
-                impl=self.args.dataset_impl,
-                fix_lua_indexing=True,
-                dictionary=self.dictionary,
-            )
-
-            if ds is None:
-                if k > 0:
-                    break
-                else:
-                    raise FileNotFoundError(
-                        "Dataset not found: {} ({})".format(split, data_path)
-                    )
-
-            with data_utils.numpy_seed(self.seed + k):
-                loaded_datasets.append(
-                    BlockPairDataset(
-                        ds,
-                        self.dictionary,
-                        ds.sizes,
-                        self.args.tokens_per_sample,
-                        break_mode=self.args.break_mode,
-                        doc_break_size=1,
-                    )
-                )
-
-            logger.info(
-                "{} {} {} examples".format(data_path, split_k, len(loaded_datasets[-1]))
-            )
-
-            if not combine:
-                break
-
-        if len(loaded_datasets) == 1:
-            dataset = loaded_datasets[0]
-            sizes = dataset.sizes
-        else:
-            dataset = ConcatDataset(loaded_datasets)
-            sizes = np.concatenate([ds.sizes for ds in loaded_datasets])
-
-        self.datasets[split] = MaskedLMDataset(
-            dataset=dataset,
-            sizes=sizes,
-            vocab=self.dictionary,
-            pad_idx=self.dictionary.pad(),
-            mask_idx=self.dictionary.mask(),
-            classif_token_idx=self.dictionary.cls(),
-            sep_token_idx=self.dictionary.sep(),
-            shuffle=self.args.shuffle_dataset,
-            seed=self.seed,
-        )
diff --git a/kosmos-g/fairseq/fairseq/tasks/masked_lm.py b/kosmos-g/fairseq/fairseq/tasks/masked_lm.py
deleted file mode 100644
index 0c08132fb..000000000
--- a/kosmos-g/fairseq/fairseq/tasks/masked_lm.py
+++ /dev/null
@@ -1,255 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass, field
-import logging
-import os
-
-from omegaconf import MISSING, II, OmegaConf
-
-import numpy as np
-from fairseq import utils
-from fairseq.data import (
-    Dictionary,
-    IdDataset,
-    MaskTokensDataset,
-    NestedDictionaryDataset,
-    NumelDataset,
-    NumSamplesDataset,
-    PrependTokenDataset,
-    RightPadDataset,
-    SortDataset,
-    TokenBlockDataset,
-    data_utils,
-)
-from fairseq.data.encoders.utils import get_whole_word_mask
-from fairseq.data.shorten_dataset import maybe_shorten_dataset
-from fairseq.dataclass import FairseqDataclass
-from fairseq.tasks import FairseqTask, register_task
-
-from .language_modeling import SAMPLE_BREAK_MODE_CHOICES, SHORTEN_METHOD_CHOICES
-
-
-logger = logging.getLogger(__name__)
-
-
-@dataclass
-class MaskedLMConfig(FairseqDataclass):
-    data: str = field(
-        default=MISSING,
-        metadata={
-            "help": "colon separated path to data directories list, \
-                            will be iterated upon during epochs in round-robin manner"
-        },
-    )
-    sample_break_mode: SAMPLE_BREAK_MODE_CHOICES = field(
-        default="none",
-        metadata={
-            "help": 'If omitted or "none", fills each sample with tokens-per-sample '
-            'tokens. If set to "complete", splits samples only at the end '
-            "of sentence, but may include multiple sentences per sample. "
-            '"complete_doc" is similar but respects doc boundaries. '
-            'If set to "eos", includes only one sentence per sample.'
- }, - ) - tokens_per_sample: int = field( - default=1024, - metadata={"help": "max number of tokens per sample for LM dataset"}, - ) - mask_prob: float = field( - default=0.15, - metadata={"help": "probability of replacing a token with mask"}, - ) - leave_unmasked_prob: float = field( - default=0.1, - metadata={"help": "probability that a masked token is unmasked"}, - ) - random_token_prob: float = field( - default=0.1, - metadata={"help": "probability of replacing a token with a random token"}, - ) - freq_weighted_replacement: bool = field( - default=False, - metadata={"help": "sample random replacement words based on word frequencies"}, - ) - mask_whole_words: bool = field( - default=False, - metadata={"help": "mask whole words; you may also want to set --bpe"}, - ) - mask_multiple_length: int = field( - default=1, - metadata={"help": "repeat the mask indices multiple times"}, - ) - mask_stdev: float = field( - default=0.0, - metadata={"help": "stdev of the mask length"}, - ) - shorten_method: SHORTEN_METHOD_CHOICES = field( - default="none", - metadata={ - "help": "if not none, shorten sequences that exceed --tokens-per-sample" - }, - ) - shorten_data_split_list: str = field( - default="", - metadata={ - "help": "comma-separated list of dataset splits to apply shortening to, " - 'e.g., "train,valid" (default: all dataset splits)' - }, - ) - seed: int = II("common.seed") - - -@register_task("masked_lm", dataclass=MaskedLMConfig) -class MaskedLMTask(FairseqTask): - - cfg: MaskedLMConfig - - """Task for training masked language models (e.g., BERT, RoBERTa).""" - - def __init__(self, cfg: MaskedLMConfig, dictionary): - super().__init__(cfg) - self.dictionary = dictionary - - # add mask token - self.mask_idx = dictionary.add_symbol("<mask>") - - @classmethod - def setup_task(cls, cfg: MaskedLMConfig, **kwargs): - paths = utils.split_paths(cfg.data) - assert len(paths) > 0 - dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt")) - logger.info("dictionary: {} types".format(len(dictionary))) - return cls(cfg, dictionary) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - """ - paths = utils.split_paths(self.cfg.data) - assert len(paths) > 0 - data_path = paths[(epoch - 1) % len(paths)] - split_path = os.path.join(data_path, split) - - dataset = data_utils.load_indexed_dataset( - split_path, - self.source_dictionary, - combine=combine, - ) - if dataset is None: - raise FileNotFoundError( - "Dataset not found: {} ({})".format(split, split_path) - ) - - dataset = maybe_shorten_dataset( - dataset, - split, - self.cfg.shorten_data_split_list, - self.cfg.shorten_method, - self.cfg.tokens_per_sample, - self.cfg.seed, - ) - - # create continuous blocks of tokens - dataset = TokenBlockDataset( - dataset, - dataset.sizes, - self.cfg.tokens_per_sample - 1, # one less for <s> - pad=self.source_dictionary.pad(), - eos=self.source_dictionary.eos(), - break_mode=self.cfg.sample_break_mode, - ) - logger.info("loaded {} blocks from: {}".format(len(dataset), split_path)) - - # prepend beginning-of-sentence token (<s>, equiv. 
to [CLS] in BERT) - dataset = PrependTokenDataset(dataset, self.source_dictionary.bos()) - - # create masked input and targets - mask_whole_words = ( - get_whole_word_mask(self.args, self.source_dictionary) - if self.cfg.mask_whole_words - else None - ) - - src_dataset, tgt_dataset = MaskTokensDataset.apply_mask( - dataset, - self.source_dictionary, - pad_idx=self.source_dictionary.pad(), - mask_idx=self.mask_idx, - seed=self.cfg.seed, - mask_prob=self.cfg.mask_prob, - leave_unmasked_prob=self.cfg.leave_unmasked_prob, - random_token_prob=self.cfg.random_token_prob, - freq_weighted_replacement=self.cfg.freq_weighted_replacement, - mask_whole_words=mask_whole_words, - mask_multiple_length=self.cfg.mask_multiple_length, - mask_stdev=self.cfg.mask_stdev, - ) - - with data_utils.numpy_seed(self.cfg.seed): - shuffle = np.random.permutation(len(src_dataset)) - - self.datasets[split] = SortDataset( - NestedDictionaryDataset( - { - "id": IdDataset(), - "net_input": { - "src_tokens": RightPadDataset( - src_dataset, - pad_idx=self.source_dictionary.pad(), - ), - "src_lengths": NumelDataset(src_dataset, reduce=False), - }, - "target": RightPadDataset( - tgt_dataset, - pad_idx=self.source_dictionary.pad(), - ), - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_dataset, reduce=True), - }, - sizes=[src_dataset.sizes], - ), - sort_order=[ - shuffle, - src_dataset.sizes, - ], - ) - - def build_dataset_for_inference(self, src_tokens, src_lengths, sort=True): - src_dataset = RightPadDataset( - TokenBlockDataset( - src_tokens, - src_lengths, - self.cfg.tokens_per_sample - 1, # one less for <s> - pad=self.source_dictionary.pad(), - eos=self.source_dictionary.eos(), - break_mode="eos", - ), - pad_idx=self.source_dictionary.pad(), - ) - src_dataset = PrependTokenDataset(src_dataset, self.source_dictionary.bos()) - src_dataset = NestedDictionaryDataset( - { - "id": IdDataset(), - "net_input": { - "src_tokens": src_dataset, - "src_lengths": NumelDataset(src_dataset, reduce=False), - }, - }, - sizes=src_lengths, - ) - if sort: - src_dataset = SortDataset(src_dataset, sort_order=[src_lengths]) - return src_dataset - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary diff --git a/kosmos-g/fairseq/fairseq/tasks/multilingual_denoising.py b/kosmos-g/fairseq/fairseq/tasks/multilingual_denoising.py deleted file mode 100644 index d1c914917..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/multilingual_denoising.py +++ /dev/null @@ -1,254 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
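
The masking parameters in `MaskedLMConfig` above follow the BERT recipe: `mask_prob` of the tokens become prediction targets, and of those, `leave_unmasked_prob` are left intact, `random_token_prob` are replaced by a random token, and the rest are replaced by `<mask>`. A toy numpy re-implementation of what `MaskTokensDataset.apply_mask` computes; the conventions (e.g. `-1` for unscored target positions) are illustrative, not fairseq's:

```python
# Toy 80/10/10 BERT-style masking sketch, assuming integer token ids.
import numpy as np

def mask_tokens(tokens, mask_idx, vocab_size, mask_prob=0.15,
                leave_unmasked_prob=0.1, random_token_prob=0.1, seed=0):
    rng = np.random.default_rng(seed)
    tokens = np.asarray(tokens)
    src = tokens.copy()
    target = np.full_like(tokens, fill_value=-1)  # -1 marks unscored positions (toy convention)
    chosen = rng.random(len(tokens)) < mask_prob  # ~15% of positions become targets
    target[chosen] = tokens[chosen]               # predict the original token there
    u = rng.random(len(tokens))
    to_mask = chosen & (u >= leave_unmasked_prob + random_token_prob)   # ~80% of chosen
    to_rand = chosen & (u >= leave_unmasked_prob) & ~to_mask            # ~10% of chosen
    src[to_mask] = mask_idx                                             # replace with <mask>
    src[to_rand] = rng.integers(0, vocab_size, int(to_rand.sum()))      # random token
    return src, target                            # remaining ~10% of chosen: left unchanged

src, tgt = mask_tokens(np.arange(4, 20), mask_idx=3, vocab_size=100)
```
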
-
-import logging
-import os
-
-import numpy as np
-from fairseq.data import (
-    AppendTokenDataset,
-    ConcatDataset,
-    DenoisingDataset,
-    Dictionary,
-    PrependTokenDataset,
-    ResamplingDataset,
-    SortDataset,
-    TokenBlockDataset,
-    data_utils,
-)
-from fairseq.data.encoders.utils import get_whole_word_mask
-from fairseq.tasks import register_task
-
-from .denoising import DenoisingTask
-
-
-logger = logging.getLogger(__name__)
-
-
-@register_task("multilingual_denoising")
-class MultilingualDenoisingTask(DenoisingTask):
-    @staticmethod
-    def add_args(parser):
-        DenoisingTask.add_args(parser)
-        parser.add_argument(
-            "--multilang-sampling-alpha",
-            type=float,
-            default=1.0,
-            help="smoothing alpha for sample ratios across multiple datasets",
-        )
-        parser.add_argument("--add-lang-token", default=False, action="store_true")
-        parser.add_argument(
-            "--langs", type=str, help="language ids we are considering", default=None
-        )
-        parser.add_argument(
-            "--no-whole-word-mask-langs",
-            type=str,
-            default="",
-            metavar="N",
-            help="languages without spacing between words don't support whole word masking",
-        )
-
-    @classmethod
-    def setup_task(cls, args, **kwargs):
-        """Setup the task."""
-        paths = args.data.split(":")
-        assert len(paths) > 0
-        dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt"))
-
-        data_path = paths[0]
-        if args.langs is None:
-            languages = sorted(
-                [
-                    name
-                    for name in os.listdir(data_path)
-                    if os.path.isdir(os.path.join(data_path, name))
-                ]
-            )
-        else:
-            languages = args.langs.split(",")
-
-        if args.add_lang_token:
-            for lang in languages:
-                dictionary.add_symbol("[{}]".format(lang))
-
-        logger.info("dictionary: {} types".format(len(dictionary)))
-        if not hasattr(args, "shuffle_instance"):
-            args.shuffle_instance = False
-        return cls(args, dictionary)
-
-    def __init__(self, args, dictionary):
-        super().__init__(args, dictionary)
-        self.dictionary = dictionary
-        self.seed = args.seed
-
-        # add mask token
-        self.mask_idx = self.dictionary.add_symbol("<mask>")
-        self.langs = args.langs
-        self.args = args
-
-    def _get_sample_prob(self, dataset_lens):
-        """
-        Get smoothed sampling probability by languages. This helps low resource
-        languages by upsampling them.
-        """
-        prob = dataset_lens / dataset_lens.sum()
-        smoothed_prob = prob ** self.args.multilang_sampling_alpha
-        smoothed_prob = smoothed_prob / smoothed_prob.sum()
-        return smoothed_prob
-
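A worked example of the smoothing in `_get_sample_prob`: with `alpha < 1` the sampling distribution is flattened, so low-resource languages are upsampled. The `size_ratio` line mirrors the computation in `load_dataset` below; the corpus sizes are made up:

```python
# Worked example of alpha-smoothed multilingual sampling (toy sizes).
import numpy as np

dataset_lens = np.array([900.0, 90.0, 10.0])  # toy corpus sizes for 3 languages
alpha = 0.5                                    # --multilang-sampling-alpha

prob = dataset_lens / dataset_lens.sum()       # raw shares: [0.90, 0.09, 0.01]
smoothed = prob ** alpha
smoothed = smoothed / smoothed.sum()           # ~[0.703, 0.222, 0.074]

# load_dataset below turns this into per-language resampling ratios:
size_ratio = smoothed * dataset_lens.sum() / dataset_lens
print(size_ratio)                              # ~[0.78, 2.47, 7.41]
```

The smallest language is sampled roughly 7x more often than its raw share would give, while the largest is slightly downsampled.
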
-    def load_dataset(self, split, epoch=1, combine=False, **kwargs):
-        """Load a given dataset split.
-
-        Args:
-            split (str): name of the split (e.g., train, valid, test)
-        """
-        paths = self.args.data.split(":")
-        assert len(paths) > 0
-        data_path = paths[(epoch - 1) % len(paths)]
-        split_path = os.path.join(data_path, split)
-
-        if self.langs is None:
-            languages = sorted(
-                [
-                    name
-                    for name in os.listdir(data_path)
-                    if os.path.isdir(os.path.join(data_path, name))
-                ]
-            )
-        else:
-            languages = self.langs.split(",")
-            for name in languages:
-                p = os.path.join(data_path, name)
-                assert os.path.exists(p), "data not found: {}".format(p)
-
-        logger.info("Training on {0} languages: {1}".format(len(languages), languages))
-        logger.info(
-            "Language to id mapping: %s",
-            {lang: id for id, lang in enumerate(languages)},
-        )
-
-        mask_whole_words = get_whole_word_mask(self.args, self.dictionary)
-        language_without_segmentations = self.args.no_whole_word_mask_langs.split(",")
-        lang_datasets = []
-        for language in languages:
-            split_path = os.path.join(data_path, language, split)
-
-            dataset = data_utils.load_indexed_dataset(
-                split_path,
-                self.source_dictionary,
-                self.args.dataset_impl,
-                combine=combine,
-            )
-            if dataset is None:
-                raise FileNotFoundError(
-                    "Dataset not found: {} ({})".format(split, split_path)
-                )
-
-            end_token = (
-                self.source_dictionary.index("[{}]".format(language))
-                if self.args.add_lang_token
-                else self.source_dictionary.eos()
-            )
-
-            # create continuous blocks of tokens
-            dataset = TokenBlockDataset(
-                dataset,
-                dataset.sizes,
-                self.args.tokens_per_sample - 2,  # two less for <s> and the end token
-                pad=self.source_dictionary.pad(),
-                eos=end_token,
-                break_mode=self.args.sample_break_mode,
-            )
-            logger.info("loaded {} blocks from: {}".format(len(dataset), split_path))
-
-            # prepend beginning-of-sentence token (<s>, equiv. to [CLS] in BERT)
-            dataset = PrependTokenDataset(dataset, self.source_dictionary.bos())
-            dataset = AppendTokenDataset(dataset, end_token)
-
-            lang_mask_whole_words = (
-                mask_whole_words
-                if language not in language_without_segmentations
-                else None
-            )
-            lang_dataset = DenoisingDataset(
-                dataset,
-                dataset.sizes,
-                self.dictionary,
-                self.mask_idx,
-                lang_mask_whole_words,
-                shuffle=self.args.shuffle_instance,
-                seed=self.seed,
-                args=self.args,
-                eos=None
-                if not self.args.add_lang_token
-                else self.source_dictionary.index("[{}]".format(language)),
-            )
-            lang_datasets.append(lang_dataset)
-
-        dataset_lengths = np.array(
-            [len(d) for d in lang_datasets],
-            dtype=float,
-        )
-        logger.info(
-            "loaded total {} blocks for all languages".format(
-                int(dataset_lengths.sum()),
-            )
-        )
-        if split == self.args.train_subset:
-            # For train subset, additionally up or down sample languages.
- sample_probs = self._get_sample_prob(dataset_lengths) - logger.info( - "Sample probability by language: {}".format( - { - lang: "{0:.4f}".format(sample_probs[id]) - for id, lang in enumerate(languages) - } - ) - ) - size_ratio = (sample_probs * dataset_lengths.sum()) / dataset_lengths - logger.info( - "Up/Down Sampling ratio by language: {}".format( - { - lang: "{0:.2f}".format(size_ratio[id]) - for id, lang in enumerate(languages) - } - ) - ) - - resampled_lang_datasets = [ - ResamplingDataset( - lang_datasets[i], - size_ratio=size_ratio[i], - seed=self.args.seed, - epoch=epoch, - replace=size_ratio[i] >= 1.0, - ) - for i, d in enumerate(lang_datasets) - ] - dataset = ConcatDataset( - resampled_lang_datasets, - ) - else: - dataset = ConcatDataset(lang_datasets) - lang_splits = [split] - for lang_id, lang_dataset in enumerate(lang_datasets): - split_name = split + "_" + languages[lang_id] - lang_splits.append(split_name) - self.datasets[split_name] = lang_dataset - - if split in self.args.valid_subset: - self.args.valid_subset = self.args.valid_subset.replace( - split, ",".join(lang_splits) - ) - - with data_utils.numpy_seed(self.args.seed + epoch): - shuffle = np.random.permutation(len(dataset)) - - self.datasets[split] = SortDataset( - dataset, - sort_order=[ - shuffle, - dataset.sizes, - ], - ) diff --git a/kosmos-g/fairseq/fairseq/tasks/multilingual_language_modeling.py b/kosmos-g/fairseq/fairseq/tasks/multilingual_language_modeling.py deleted file mode 100644 index 673f563be..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/multilingual_language_modeling.py +++ /dev/null @@ -1,627 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -from dataclasses import dataclass, field -from typing import Optional - -import numpy as np -import torch -from omegaconf import II - -from fairseq import utils -from fairseq.data import ( - AppendTokenDataset, - ConcatDataset, - Dictionary, - IdDataset, - LMContextWindowDataset, - MonolingualDataset, - NestedDictionaryDataset, - NumelDataset, - PadDataset, - PrependTokenDataset, - ResamplingDataset, - SortDataset, - StripTokenDataset, - TokenBlockDataset, - TruncatedDictionary, - data_utils, -) -from fairseq.data.indexed_dataset import get_available_dataset_impl -from fairseq.data.shorten_dataset import maybe_shorten_dataset -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.tasks import LegacyFairseqTask, register_task - -SAMPLE_BREAK_MODE_CHOICES = ChoiceEnum(["none", "complete", "complete_doc", "eos"]) -SHORTEN_METHOD_CHOICES = ChoiceEnum(["none", "truncate", "random_crop"]) -logger = logging.getLogger(__name__) - - -def lang_token(lang): - return f"<{lang}>" - - -@dataclass -class MultilingualLanguageModelingConfig(FairseqDataclass): - # TODO common var add to parent - data: Optional[str] = field( - default=None, metadata={"help": "path to data directory"} - ) - sample_break_mode: SAMPLE_BREAK_MODE_CHOICES = field( - default="none", - metadata={ - "help": 'If omitted or "none", fills each sample with tokens-per-sample ' - 'tokens. If set to "complete", splits samples only at the end ' - "of sentence, but may include multiple sentences per sample. " - '"complete_doc" is similar but respects doc boundaries. ' - 'If set to "eos", includes only one sentence per sample.' 
-        },
-    )
-    tokens_per_sample: int = field(
-        default=1024,
-        metadata={"help": "max number of tokens per sample for LM dataset"},
-    )
-    output_dictionary_size: int = field(
-        default=-1, metadata={"help": "limit the size of output dictionary"}
-    )
-    self_target: bool = field(default=False, metadata={"help": "include self target"})
-    future_target: bool = field(
-        default=False, metadata={"help": "include future target"}
-    )
-    past_target: bool = field(default=False, metadata={"help": "include past target"})
-    add_bos_token: bool = field(
-        default=False, metadata={"help": "prepend lang id token <dialect>"}
-    )
-    max_source_positions: Optional[int] = field(
-        default=None, metadata={"help": "max number of tokens in the source sequence"}
-    )
-    max_target_positions: Optional[int] = field(
-        default=None, metadata={"help": "max number of tokens in the target sequence"}
-    )
-    pad_to_fixed_length: Optional[bool] = field(
-        default=False, metadata={"help": "pad to fixed length"}
-    )
-    pad_to_fixed_bsz: Optional[bool] = field(
-        default=False, metadata={"help": "boolean to pad to fixed batch size"}
-    )
-
-    multilang_sampling_alpha: Optional[float] = field(
-        default=1.0,
-        metadata={
-            "help": "smoothing alpha for sample ratios across multiple datasets"
-        },
-    )
-
-    shorten_method: SHORTEN_METHOD_CHOICES = field(
-        default="none",
-        metadata={
-            "help": "if not none, shorten sequences that exceed --tokens-per-sample"
-        },
-    )
-    shorten_data_split_list: str = field(
-        default="",
-        metadata={
-            "help": "comma-separated list of dataset splits to apply shortening to, "
-            'e.g., "train,valid" (default: all dataset splits)'
-        },
-    )
-
-    langs: str = field(
-        default="",
-        metadata={
-            "help": "comma-separated list of languages (default: all directories in data path)"
-        },
-    )
-    baseline_model_langs: str = field(
-        default="",
-        metadata={
-            "help": "comma-separated list of languages in the baseline model (default: none)"
-        },
-    )
-    # TODO: legacy parameter kept for compatibility
-    baseline_model: str = field(
-        default="",
-        metadata={"help": "path to the baseline model (default: none)"},
-    )
-
-    lang_to_offline_shard_ratio: str = field(
-        default="",
-        metadata={
-            "help": "absolute path of tsv file location to indicate lang to offline shard ratio.",
-        },
-    )
-    # TODO common vars below add to parent
-    seed: int = II("common.seed")
-    dataset_impl: Optional[ChoiceEnum(get_available_dataset_impl())] = II(
-        "dataset.dataset_impl"
-    )
-    data_buffer_size: int = II("dataset.data_buffer_size")
-    tpu: bool = II("common.tpu")
-    batch_size: Optional[int] = II("dataset.batch_size")
-    batch_size_valid: Optional[int] = II("dataset.batch_size_valid")
-    train_subset: str = II("common.train_subset")
-    valid_subset: str = II("common.valid_subset")
-
-
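The task below relies on a simple language-token convention: each language gets a `<lang>` symbol in the dictionary (added by `setup_dictionary`, via the `lang_token` helper defined above), and with `--add-bos-token` that symbol, rather than `<s>`, conditions generation. A small sketch of the convention; the dictionary contents here are a toy stand-in, while real tasks load `dict.txt`:

```python
# Toy sketch of the <lang> token convention (hypothetical dictionary contents).
from fairseq.data import Dictionary

def lang_token(lang):
    return f"<{lang}>"

d = Dictionary()  # default specials only; real tasks use Dictionary.load("dict.txt")
for lang in ["en_XX", "fr_XX"]:
    d.add_symbol(lang_token(lang))

# with --add-bos-token, generation is conditioned on the language token
bos = d.index(lang_token("fr_XX"))
print(bos, d[bos])
```
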
-@register_task(
-    "multilingual_language_modeling", dataclass=MultilingualLanguageModelingConfig
-)
-class MultilingualLanguageModelingTask(LegacyFairseqTask):
-    """
-    Train a language model.
-
-    Args:
-        dictionary (~fairseq.data.Dictionary): the dictionary for the input of
-            the language model
-        output_dictionary (~fairseq.data.Dictionary): the dictionary for the
-            output of the language model. In most cases it will be the same as
-            *dictionary*, but could possibly be a more limited version of the
-            dictionary (if ``--output-dictionary-size`` is used).
-        targets (List[str]): list of the target types that the language model
-            should predict. Can be one of "self", "future", and "past".
-            Defaults to "future".
-
-    .. note::
-
-        The language modeling task is compatible with :mod:`fairseq-train`,
-        :mod:`fairseq-generate`, :mod:`fairseq-interactive` and
-        :mod:`fairseq-eval-lm`.
-
-        The language modeling task provides the following additional command-line
-        arguments:
-
-    .. argparse::
-        :ref: fairseq.tasks.language_modeling_parser
-        :prog:
-    """
-
-    def __init__(self, args, dictionary, output_dictionary=None, targets=None):
-        super().__init__(args)
-        self.dictionary = dictionary
-        self.output_dictionary = output_dictionary or dictionary
-
-        if targets is None:
-            targets = ["future"]
-        self.targets = targets
-
-    @staticmethod
-    def _get_langs(args, epoch=1):
-        paths = utils.split_paths(args.data)
-        assert len(paths) > 0
-        data_path = paths[(epoch - 1) % len(paths)]
-
-        languages = sorted(
-            name
-            for name in os.listdir(data_path)
-            if os.path.isdir(os.path.join(data_path, name))
-        )
-        if args.langs:
-            keep_langs = set(args.langs.split(","))
-            languages = [lang for lang in languages if lang in keep_langs]
-            assert len(languages) == len(keep_langs)
-
-        return languages, data_path
-
-    @classmethod
-    def setup_dictionary(cls, args, **kwargs):
-        dictionary = None
-        output_dictionary = None
-        if args.data:
-            paths = utils.split_paths(args.data)
-            assert len(paths) > 0
-            dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt"))
-            if args.add_bos_token:
-                languages, _ = cls._get_langs(args)
-                logger.info("----------------")
-                for lang in languages:
-                    dictionary.add_symbol(lang_token(lang))
-                    logger.info(f"add language token: {lang_token(lang)}")
-                logger.info("----------------")
-
-            logger.info("dictionary: {} types".format(len(dictionary)))
-            output_dictionary = dictionary
-            if args.output_dictionary_size >= 0:
-                output_dictionary = TruncatedDictionary(
-                    dictionary, args.output_dictionary_size
-                )
-        return (dictionary, output_dictionary)
-
-    @classmethod
-    def setup_task(cls, args, **kwargs):
-        """Setup the task (e.g., load dictionaries).
-
-        Args:
-            args (argparse.Namespace): parsed command-line arguments
-        """
-        dictionary, output_dictionary = cls.setup_dictionary(args, **kwargs)
-
-        # upgrade old checkpoints
-        if hasattr(args, "exclude_self_target"):
-            args.self_target = not args.exclude_self_target
-
-        targets = []
-        if getattr(args, "self_target", False):
-            targets.append("self")
-        if getattr(args, "future_target", False):
-            targets.append("future")
-        if getattr(args, "past_target", False):
-            targets.append("past")
-        if len(targets) == 0:
-            # standard language modeling
-            targets = ["future"]
-
-        return cls(args, dictionary, output_dictionary, targets=targets)
-
-    def build_model(self, args, from_checkpoint=False):
-        model = super().build_model(args, from_checkpoint)
-        for target in self.targets:
-            if target not in model.supported_targets:
-                raise ValueError(
-                    f"Unsupported language modeling target: {target} not in {model.supported_targets}"
-                )
-
-        return model
-
-    def _get_sample_prob(self, dataset_lens):
-        """
-        Get smoothed sampling probability by languages. This helps low resource
-        languages by upsampling them.
-        """
-        prob = dataset_lens / dataset_lens.sum()
-        smoothed_prob = prob ** self.args.multilang_sampling_alpha
-        smoothed_prob = smoothed_prob / smoothed_prob.sum()
-        return smoothed_prob
-
-    def load_dataset(self, split: str, epoch=1, combine=False, **kwargs):
-        """Load a given dataset split.
- - Args: - split (str): name of the split (e.g., train, valid, test) - """ - languages, data_path = MultilingualLanguageModelingTask._get_langs( - self.args, epoch - ) - lang_to_offline_shard_ratio = None - if self.args.lang_to_offline_shard_ratio != "": - lang_to_offline_shard_ratio = {} - assert os.path.exists( - self.args.lang_to_offline_shard_ratio - ), "provided offline shard ratio file doesn't exist: {0}".format( - self.args.lang_to_offline_shard_ratio - ) - with open(self.args.lang_to_offline_shard_ratio) as fin: - for line in fin: - lang, ratio = line.strip().split("\t") - ratio = float(ratio) - lang_to_offline_shard_ratio[lang] = ratio - - logger.info( - "Found offline sharded ratio: %s", - lang_to_offline_shard_ratio, - ) - - if split == self.args.train_subset: - logger.info( - "Training on {0} languages: {1}".format(len(languages), languages) - ) - else: - logger.info( - "Evaluating on {0} languages: {1}".format(len(languages), languages) - ) - - tokens_per_sample = self.args.tokens_per_sample - int(self.args.add_bos_token) - - fixed_pad_length = None - if self.args.pad_to_fixed_length: - fixed_pad_length = self.args.tokens_per_sample - - pad_to_bsz = None - if self.args.pad_to_fixed_bsz: - pad_to_bsz = ( - self.args.batch_size_valid if "valid" in split else self.args.batch_size - ) - - lang_datasets = [] - for lang_id, language in enumerate(languages): - split_path = os.path.join(data_path, language, split) - dataset = data_utils.load_indexed_dataset( - split_path, self.dictionary, self.args.dataset_impl, combine=combine - ) - # print('len(dataset) =', len(dataset)) - if dataset is None: - raise FileNotFoundError( - "Dataset not found: {} ({})".format(split, split_path) - ) - - dataset = maybe_shorten_dataset( - dataset, - split, - self.args.shorten_data_split_list, - self.args.shorten_method, - tokens_per_sample, - self.args.seed, - ) - - dataset = TokenBlockDataset( - dataset, - dataset.sizes, - tokens_per_sample, - pad=self.dictionary.pad(), - eos=self.dictionary.eos(), - break_mode=self.args.sample_break_mode, - include_targets=True, - ) - - add_eos_for_other_targets = ( - self.args.sample_break_mode is not None - and self.args.sample_break_mode != "none" - ) - src_lang_idx, tgt_lang_idx = None, None - if self.args.add_bos_token: - src_lang_idx = self.dictionary.index(lang_token(language)) - tgt_lang_idx = self.output_dictionary.index(lang_token(language)) - - lang_datasets.append( - MonolingualDataset( - dataset=dataset, - sizes=dataset.sizes, - src_vocab=self.dictionary, - tgt_vocab=self.output_dictionary, - add_eos_for_other_targets=add_eos_for_other_targets, - shuffle=True, - targets=self.targets, - fixed_pad_length=fixed_pad_length, - pad_to_bsz=pad_to_bsz, - add_bos_token=self.args.add_bos_token, - src_lang_idx=src_lang_idx, - tgt_lang_idx=tgt_lang_idx, - ) - ) - - dataset_lengths = np.array( - [len(d) for d in lang_datasets], - dtype=float, - ) - logger.info( - "loaded total {} blocks for all languages".format( - dataset_lengths.sum(), - ) - ) - if split == self.args.train_subset: - dataset_lengths_ratio_multiplier = np.ones(len(dataset_lengths)) - if lang_to_offline_shard_ratio is not None: - dataset_lengths_ratio_multiplier = [] - for lang in languages: - assert ( - lang in lang_to_offline_shard_ratio - ), "Lang: {0} missing in offline shard ratio file: {1}".format( - lang, - self.args.lang_to_offline_shard_ratio, - ) - dataset_lengths_ratio_multiplier.append( - lang_to_offline_shard_ratio[lang] - ) - dataset_lengths_ratio_multiplier = np.array( - 
dataset_lengths_ratio_multiplier - ) - true_dataset_lengths = ( - dataset_lengths * dataset_lengths_ratio_multiplier - ) - else: - true_dataset_lengths = dataset_lengths - # For train subset, additionally up or down sample languages. - sample_probs = self._get_sample_prob(true_dataset_lengths) - - logger.info( - "Sample probability by language: %s", - { - lang: "{0:.4f}".format(sample_probs[id]) - for id, lang in enumerate(languages) - }, - ) - size_ratio = (sample_probs * true_dataset_lengths.sum()) / dataset_lengths - # TODO: add an option for shrinking all size ratios to below 1 - # if self.args.multilang_sampling_alpha != 1: - # size_ratio /= size_ratio.max() - - # Fix numeric errors in size ratio computation - # 0.999999999999999999 -> 1 - # 1.000000000000000002 -> 1 - for i in range(len(size_ratio)): - size_ratio[i] = round(size_ratio[i], 8) - - logger.info( - "Up/Down Sampling ratio by language: %s", - { - lang: "{0:.2f}".format(size_ratio[id]) - for id, lang in enumerate(languages) - }, - ) - logger.info( - "Actual dataset size by language: %s", - { - lang: "{0:.2f}".format(len(lang_datasets[id])) - for id, lang in enumerate(languages) - }, - ) - resampled_lang_datasets = [ - ResamplingDataset( - lang_datasets[i], - size_ratio=size_ratio[i], - seed=self.args.seed, - epoch=epoch, - replace=size_ratio[i] > 1.0, - ) - for i, d in enumerate(lang_datasets) - ] - logger.info( - "Resampled dataset size by language: %s", - { - lang: "{0:.2f}".format(len(resampled_lang_datasets[id])) - for id, lang in enumerate(languages) - }, - ) - dataset = ConcatDataset(resampled_lang_datasets) - else: - dataset = ConcatDataset(lang_datasets) - lang_splits = [split] - for lang_id, lang_dataset in enumerate(lang_datasets): - split_name = split + "_" + languages[lang_id] - lang_splits.append(split_name) - self.datasets[split_name] = lang_dataset - - # [TODO]: This is hacky for now to print validation ppl for each - # language individually. Maybe need task API changes to allow it - # in more generic ways. - if split in self.args.valid_subset: - self.args.valid_subset = self.args.valid_subset.replace( - split, ",".join(lang_splits) - ) - - with data_utils.numpy_seed(self.args.seed + epoch): - shuffle = np.random.permutation(len(dataset)) - - self.datasets[split] = SortDataset( - dataset, - sort_order=[ - shuffle, - dataset.sizes, - ], - ) - - def build_dataset_for_inference( - self, src_tokens, src_lengths, language="en_XX", **kwargs - ): - """ - Generate batches for inference. We prepend an eos token to src_tokens - (or bos if `--add-bos-token` is set) and we append a <pad> to target. - This is convenient both for generation with a prefix and LM scoring. 
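Besides the concatenated split, the loop above registers each language's dataset under its own split name and rewrites `--valid-subset`, which is what makes fairseq report validation perplexity per language. A minimal sketch of that renaming (the split and language names are hypothetical):

```python
def expand_valid_subset(valid_subset: str, split: str, languages: list) -> str:
    # Mirrors the rewrite above: "valid" -> "valid,valid_en_XX,valid_fr_XX",
    # so each per-language split is evaluated separately.
    lang_splits = [split] + [f"{split}_{lang}" for lang in languages]
    return valid_subset.replace(split, ",".join(lang_splits))

print(expand_valid_subset("valid", "valid", ["en_XX", "fr_XX"]))
# valid,valid_en_XX,valid_fr_XX
```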
- """ - dataset = StripTokenDataset( - TokenBlockDataset( - src_tokens, - src_lengths, - block_size=None, # ignored for "eos" break mode - pad=self.source_dictionary.pad(), - eos=self.source_dictionary.eos(), - break_mode="eos", - ), - # remove eos from (end of) target sequence - self.source_dictionary.eos(), - ) - - src_lang_idx = self.dictionary.index(lang_token(language)) - src_dataset = PrependTokenDataset( - dataset, - token=( - (src_lang_idx or self.source_dictionary.bos()) - if getattr(self.args, "add_bos_token", False) - else self.source_dictionary.eos() - ), - ) - - max_seq_len = max(src_lengths) + 1 - tgt_dataset = AppendTokenDataset(dataset, token=self.source_dictionary.pad()) - return NestedDictionaryDataset( - { - "id": IdDataset(), - "net_input": { - "src_tokens": PadDataset( - src_dataset, - pad_idx=self.source_dictionary.pad(), - left_pad=False, - pad_length=max_seq_len, - ), - "src_lengths": NumelDataset(src_dataset, reduce=False), - }, - "target": PadDataset( - tgt_dataset, - pad_idx=self.source_dictionary.pad(), - left_pad=False, - pad_length=max_seq_len, - ), - }, - sizes=[np.array(src_lengths)], - ) - - @torch.no_grad() - def inference_step( - self, - generator, - models, - sample, - language="en_XX", - prefix_tokens=None, - constraints=None, - ): - # Generation will always be conditioned on bos_token - if getattr(self.args, "add_bos_token", False): - src_lang_idx = self.dictionary.index(lang_token(language)) - bos_token = src_lang_idx or self.source_dictionary.bos() - else: - bos_token = self.source_dictionary.eos() - - if constraints is not None: - raise NotImplementedError( - "Constrained decoding with the language_modeling task is not supported" - ) - - # SequenceGenerator doesn't use src_tokens directly, we need to - # pass the `prefix_tokens` argument instead - if prefix_tokens is None and sample["net_input"]["src_tokens"].nelement(): - prefix_tokens = sample["net_input"]["src_tokens"] - if prefix_tokens[:, 0].eq(bos_token).all(): - prefix_tokens = prefix_tokens[:, 1:] - - return generator.generate( - models, sample, prefix_tokens=prefix_tokens, bos_token=bos_token - ) - - def eval_lm_dataloader( - self, - dataset, - max_tokens: Optional[int] = 36000, - batch_size: Optional[int] = None, - max_positions: Optional[int] = None, - num_shards: int = 1, - shard_id: int = 0, - num_workers: int = 1, - data_buffer_size: int = 10, - # ensures that every evaluated token has access to a context of at least - # this size, if possible - context_window: int = 0, - ): - if context_window > 0: - dataset = LMContextWindowDataset( - dataset=dataset, - tokens_per_sample=self.args.tokens_per_sample, - context_window=context_window, - pad_idx=self.source_dictionary.pad(), - ) - return self.get_batch_iterator( - dataset=dataset, - max_tokens=max_tokens, - max_sentences=batch_size, - max_positions=max_positions, - ignore_invalid_inputs=True, - num_shards=num_shards, - shard_id=shard_id, - num_workers=num_workers, - data_buffer_size=data_buffer_size, - ) - - @property - def source_dictionary(self): - """Return the :class:`~fairseq.data.Dictionary` for the language - model.""" - return self.dictionary - - @property - def target_dictionary(self): - """Return the :class:`~fairseq.data.Dictionary` for the language - model.""" - return self.output_dictionary diff --git a/kosmos-g/fairseq/fairseq/tasks/multilingual_masked_lm.py b/kosmos-g/fairseq/fairseq/tasks/multilingual_masked_lm.py deleted file mode 100644 index 9e6ce4b8a..000000000 --- 
a/kosmos-g/fairseq/fairseq/tasks/multilingual_masked_lm.py +++ /dev/null @@ -1,338 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os - -import numpy as np -import torch -from fairseq import utils -from fairseq.data import ( - ConcatDataset, - Dictionary, - IdDataset, - MaskTokensDataset, - NestedDictionaryDataset, - NumelDataset, - NumSamplesDataset, - PadDataset, - PrependTokenDataset, - RawLabelDataset, - ResamplingDataset, - SortDataset, - TokenBlockDataset, - data_utils, - encoders, -) -from fairseq.tasks import LegacyFairseqTask, register_task - - -logger = logging.getLogger(__name__) - - -@register_task("multilingual_masked_lm") -class MultiLingualMaskedLMTask(LegacyFairseqTask): - """Task for training masked language models (e.g., BERT, RoBERTa).""" - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument( - "data", - help="colon separated path to data directories list, \ - will be iterated upon during epochs in round-robin manner", - ) - parser.add_argument( - "--sample-break-mode", - default="complete", - choices=["none", "complete", "complete_doc", "eos"], - help='If omitted or "none", fills each sample with tokens-per-sample ' - 'tokens. If set to "complete", splits samples only at the end ' - "of sentence, but may include multiple sentences per sample. " - '"complete_doc" is similar but respects doc boundaries. ' - 'If set to "eos", includes only one sentence per sample.', - ) - parser.add_argument( - "--tokens-per-sample", - default=512, - type=int, - help="max number of total tokens over all segments " - "per sample for BERT dataset", - ) - parser.add_argument( - "--mask-prob", - default=0.15, - type=float, - help="probability of replacing a token with mask", - ) - parser.add_argument( - "--leave-unmasked-prob", - default=0.1, - type=float, - help="probability that a masked token is unmasked", - ) - parser.add_argument( - "--random-token-prob", - default=0.1, - type=float, - help="probability of replacing a token with a random token", - ) - parser.add_argument( - "--freq-weighted-replacement", - action="store_true", - help="sample random replacement words based on word frequencies", - ) - parser.add_argument( - "--mask-whole-words", - default=False, - action="store_true", - help="mask whole words; you may also want to set --bpe", - ) - parser.add_argument( - "--multilang-sampling-alpha", - type=float, - default=1.0, - help="smoothing alpha for sample rations across multiple datasets", - ) - - def __init__(self, args, dictionary): - super().__init__(args) - self.dictionary = dictionary - self.seed = args.seed - - # add mask token - self.mask_idx = dictionary.add_symbol("<mask>") - - @classmethod - def setup_task(cls, args, **kwargs): - paths = utils.split_paths(args.data) - assert len(paths) > 0 - dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt")) - logger.info("dictionary: {} types".format(len(dictionary))) - return cls(args, dictionary) - - def _get_whole_word_mask(self): - # create masked input and targets - if self.args.mask_whole_words: - bpe = encoders.build_bpe(self.args) - if bpe is not None: - - def is_beginning_of_word(i): - if i < self.source_dictionary.nspecial: - # special elements are always considered beginnings - return True - tok = self.source_dictionary[i] - if tok.startswith("madeupword"): - return True - try: - return 
bpe.is_beginning_of_word(tok) - except ValueError: - return True - - mask_whole_words = torch.ByteTensor( - list(map(is_beginning_of_word, range(len(self.source_dictionary)))) - ) - else: - mask_whole_words = None - return mask_whole_words - - def _get_sample_prob(self, dataset_lens): - """ - Get smoothed sampling porbability by languages. This helps low resource - languages by upsampling them. - """ - prob = dataset_lens / dataset_lens.sum() - smoothed_prob = prob ** self.args.multilang_sampling_alpha - smoothed_prob = smoothed_prob / smoothed_prob.sum() - return smoothed_prob - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - """ - paths = utils.split_paths(self.args.data) - assert len(paths) > 0 - data_path = paths[(epoch - 1) % len(paths)] - - languages = sorted( - name - for name in os.listdir(data_path) - if os.path.isdir(os.path.join(data_path, name)) - ) - - logger.info("Training on {0} languages: {1}".format(len(languages), languages)) - logger.info( - "Language to id mapping: ", {lang: id for id, lang in enumerate(languages)} - ) - - mask_whole_words = self._get_whole_word_mask() - lang_datasets = [] - for lang_id, language in enumerate(languages): - split_path = os.path.join(data_path, language, split) - - dataset = data_utils.load_indexed_dataset( - split_path, - self.source_dictionary, - self.args.dataset_impl, - combine=combine, - ) - if dataset is None: - raise FileNotFoundError( - "Dataset not found: {} ({})".format(split, split_path) - ) - - # create continuous blocks of tokens - dataset = TokenBlockDataset( - dataset, - dataset.sizes, - self.args.tokens_per_sample - 1, # one less for <s> - pad=self.source_dictionary.pad(), - eos=self.source_dictionary.eos(), - break_mode=self.args.sample_break_mode, - ) - logger.info("loaded {} blocks from: {}".format(len(dataset), split_path)) - - # prepend beginning-of-sentence token (<s>, equiv. to [CLS] in BERT) - dataset = PrependTokenDataset(dataset, self.source_dictionary.bos()) - - src_dataset, tgt_dataset = MaskTokensDataset.apply_mask( - dataset, - self.source_dictionary, - pad_idx=self.source_dictionary.pad(), - mask_idx=self.mask_idx, - seed=self.args.seed, - mask_prob=self.args.mask_prob, - leave_unmasked_prob=self.args.leave_unmasked_prob, - random_token_prob=self.args.random_token_prob, - freq_weighted_replacement=self.args.freq_weighted_replacement, - mask_whole_words=mask_whole_words, - ) - - lang_dataset = NestedDictionaryDataset( - { - "net_input": { - "src_tokens": PadDataset( - src_dataset, - pad_idx=self.source_dictionary.pad(), - left_pad=False, - ), - "src_lengths": NumelDataset(src_dataset, reduce=False), - }, - "target": PadDataset( - tgt_dataset, - pad_idx=self.source_dictionary.pad(), - left_pad=False, - ), - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_dataset, reduce=True), - "lang_id": RawLabelDataset([lang_id] * src_dataset.sizes.shape[0]), - }, - sizes=[src_dataset.sizes], - ) - lang_datasets.append(lang_dataset) - - dataset_lengths = np.array( - [len(d) for d in lang_datasets], - dtype=float, - ) - logger.info( - "loaded total {} blocks for all languages".format( - dataset_lengths.sum(), - ) - ) - if split == self.args.train_subset: - # For train subset, additionally up or down sample languages. 
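The defaults passed to `MaskTokensDataset.apply_mask` above encode the usual BERT recipe: 15% of tokens are selected as prediction targets, and a selected token is kept unchanged 10% of the time, replaced by a random vocabulary token 10% of the time, and replaced by `<mask>` otherwise. The sketch below is not fairseq's implementation, just a plain-Python illustration of what those three probabilities mean:

```python
import random

def bert_style_mask(tokens, vocab, mask_prob=0.15,
                    leave_unmasked_prob=0.1, random_token_prob=0.1):
    # Corrupt a token sequence the way --mask-prob / --leave-unmasked-prob /
    # --random-token-prob describe; targets are None at unmasked positions.
    out, targets = [], []
    for tok in tokens:
        if random.random() >= mask_prob:
            out.append(tok)
            targets.append(None)                  # position is not predicted
            continue
        targets.append(tok)                       # position is predicted
        r = random.random()
        if r < leave_unmasked_prob:
            out.append(tok)                       # keep the original token
        elif r < leave_unmasked_prob + random_token_prob:
            out.append(random.choice(vocab))      # random replacement
        else:
            out.append("<mask>")                  # the common case (~80%)
    return out, targets

random.seed(0)
print(bert_style_mask("the cat sat on the mat".split(),
                      vocab="the cat sat on mat dog".split()))
```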
- sample_probs = self._get_sample_prob(dataset_lengths) - logger.info( - "Sample probability by language: ", - { - lang: "{0:.4f}".format(sample_probs[id]) - for id, lang in enumerate(languages) - }, - ) - size_ratio = (sample_probs * dataset_lengths.sum()) / dataset_lengths - logger.info( - "Up/Down Sampling ratio by language: ", - { - lang: "{0:.2f}".format(size_ratio[id]) - for id, lang in enumerate(languages) - }, - ) - - resampled_lang_datasets = [ - ResamplingDataset( - lang_datasets[i], - size_ratio=size_ratio[i], - seed=self.args.seed, - epoch=epoch, - replace=size_ratio[i] >= 1.0, - ) - for i, d in enumerate(lang_datasets) - ] - dataset = ConcatDataset(resampled_lang_datasets) - else: - dataset = ConcatDataset(lang_datasets) - lang_splits = [split] - for lang_id, lang_dataset in enumerate(lang_datasets): - split_name = split + "_" + languages[lang_id] - lang_splits.append(split_name) - self.datasets[split_name] = lang_dataset - - # [TODO]: This is hacky for now to print validation ppl for each - # language individually. Maybe need task API changes to allow it - # in more generic ways. - if split in self.args.valid_subset: - self.args.valid_subset = self.args.valid_subset.replace( - split, ",".join(lang_splits) - ) - - with data_utils.numpy_seed(self.args.seed + epoch): - shuffle = np.random.permutation(len(dataset)) - - self.datasets[split] = SortDataset( - dataset, - sort_order=[ - shuffle, - dataset.sizes, - ], - ) - - def build_dataset_for_inference(self, src_tokens, src_lengths, sort=True): - src_dataset = PadDataset( - TokenBlockDataset( - src_tokens, - src_lengths, - self.args.tokens_per_sample - 1, # one less for <s> - pad=self.source_dictionary.pad(), - eos=self.source_dictionary.eos(), - break_mode="eos", - ), - pad_idx=self.source_dictionary.pad(), - left_pad=False, - ) - src_dataset = PrependTokenDataset(src_dataset, self.source_dictionary.bos()) - src_dataset = NestedDictionaryDataset( - { - "id": IdDataset(), - "net_input": { - "src_tokens": src_dataset, - "src_lengths": NumelDataset(src_dataset, reduce=False), - }, - }, - sizes=src_lengths, - ) - if sort: - src_dataset = SortDataset(src_dataset, sort_order=[src_lengths]) - return src_dataset - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary diff --git a/kosmos-g/fairseq/fairseq/tasks/multilingual_translation.py b/kosmos-g/fairseq/fairseq/tasks/multilingual_translation.py deleted file mode 100644 index e692b6669..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/multilingual_translation.py +++ /dev/null @@ -1,462 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import contextlib -import logging -import os -from collections import OrderedDict -from argparse import ArgumentError - -import torch -from fairseq import metrics, options, utils -from fairseq.data import ( - Dictionary, - LanguagePairDataset, - RoundRobinZipDatasets, - TransformEosLangPairDataset, -) -from fairseq.models import FairseqMultiModel -from fairseq.tasks.translation import load_langpair_dataset - -from . 
import LegacyFairseqTask, register_task - - -logger = logging.getLogger(__name__) - - -def _lang_token(lang: str): - return "__{}__".format(lang) - - -def _lang_token_index(dic: Dictionary, lang: str): - """Return language token index.""" - idx = dic.index(_lang_token(lang)) - assert idx != dic.unk_index, "cannot find language token for lang {}".format(lang) - return idx - - -@register_task("multilingual_translation") -class MultilingualTranslationTask(LegacyFairseqTask): - """A task for training multiple translation models simultaneously. - - We iterate round-robin over batches from multiple language pairs, ordered - according to the `--lang-pairs` argument. - - The training loop is roughly: - - for i in range(len(epoch)): - for lang_pair in args.lang_pairs: - batch = next_batch_for_lang_pair(lang_pair) - loss = criterion(model_for_lang_pair(lang_pair), batch) - loss.backward() - optimizer.step() - - In practice, `next_batch_for_lang_pair` is abstracted in a FairseqDataset - (e.g., `RoundRobinZipDatasets`) and `model_for_lang_pair` is a model that - implements the `FairseqMultiModel` interface. - - During inference it is required to specify a single `--source-lang` and - `--target-lang`, which indicates the inference langauge direction. - `--lang-pairs`, `--encoder-langtok`, `--decoder-langtok` have to be set to - the same value as training. - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - # fmt: off - parser.add_argument('data', metavar='DIR', help='path to data directory') - parser.add_argument('--lang-pairs', default=None, metavar='PAIRS', - help='comma-separated list of language pairs (in training order): en-de,en-fr,de-fr') - parser.add_argument('-s', '--source-lang', default=None, metavar='SRC', - help='source language (only needed for inference)') - parser.add_argument('-t', '--target-lang', default=None, metavar='TARGET', - help='target language (only needed for inference)') - parser.add_argument('--left-pad-source', default='True', type=str, metavar='BOOL', - help='pad the source on the left (default: True)') - parser.add_argument('--left-pad-target', default='False', type=str, metavar='BOOL', - help='pad the target on the left (default: False)') - try: - parser.add_argument('--max-source-positions', default=1024, type=int, metavar='N', - help='max number of tokens in the source sequence') - parser.add_argument('--max-target-positions', default=1024, type=int, metavar='N', - help='max number of tokens in the target sequence') - except ArgumentError: - # this might have already been defined. Once we transition this to hydra it should be fine to add it here. - pass - parser.add_argument('--upsample-primary', default=1, type=int, - help='amount to upsample primary dataset') - parser.add_argument('--encoder-langtok', default=None, type=str, choices=['src', 'tgt'], - metavar='SRCTGT', - help='replace beginning-of-sentence in source sentence with source or target ' - 'language token. (src/tgt)') - parser.add_argument('--decoder-langtok', action='store_true', - help='replace beginning-of-sentence in target sentence with target language token') - # fmt: on - - def __init__(self, args, dicts, training): - super().__init__(args) - self.dicts = dicts - self.training = training - if training: - self.lang_pairs = args.lang_pairs - else: - self.lang_pairs = ["{}-{}".format(args.source_lang, args.target_lang)] - # eval_lang_pairs for multilingual translation is usually all of the - # lang_pairs. 
However for other multitask settings or when we want to - # optimize for certain languages we want to use a different subset. Thus - # the eval_lang_pairs class variable is provided for classes that extend - # this class. - self.eval_lang_pairs = self.lang_pairs - # model_lang_pairs will be used to build encoder-decoder model pairs in - # models.build_model(). This allows multitask type of sub-class can - # build models other than the input lang_pairs - self.model_lang_pairs = self.lang_pairs - self.langs = list(dicts.keys()) - - @classmethod - def setup_task(cls, args, **kwargs): - dicts, training = cls.prepare(args, **kwargs) - return cls(args, dicts, training) - - @classmethod - def update_args(cls, args): - args.left_pad_source = utils.eval_bool(args.left_pad_source) - args.left_pad_target = utils.eval_bool(args.left_pad_target) - - if args.lang_pairs is None: - raise ValueError( - "--lang-pairs is required. List all the language pairs in the training objective." - ) - if isinstance(args.lang_pairs, str): - args.lang_pairs = args.lang_pairs.split(",") - - @classmethod - def prepare(cls, args, **kargs): - cls.update_args(args) - sorted_langs = sorted( - list({x for lang_pair in args.lang_pairs for x in lang_pair.split("-")}) - ) - if args.source_lang is not None or args.target_lang is not None: - training = False - else: - training = True - - # load dictionaries - dicts = OrderedDict() - for lang in sorted_langs: - paths = utils.split_paths(args.data) - assert len(paths) > 0 - dicts[lang] = cls.load_dictionary( - os.path.join(paths[0], "dict.{}.txt".format(lang)) - ) - if len(dicts) > 0: - assert dicts[lang].pad() == dicts[sorted_langs[0]].pad() - assert dicts[lang].eos() == dicts[sorted_langs[0]].eos() - assert dicts[lang].unk() == dicts[sorted_langs[0]].unk() - if args.encoder_langtok is not None or args.decoder_langtok: - for lang_to_add in sorted_langs: - dicts[lang].add_symbol(_lang_token(lang_to_add)) - logger.info("[{}] dictionary: {} types".format(lang, len(dicts[lang]))) - return dicts, training - - def get_encoder_langtok(self, src_lang, tgt_lang): - if self.args.encoder_langtok is None: - return self.dicts[src_lang].eos() - if self.args.encoder_langtok == "src": - return _lang_token_index(self.dicts[src_lang], src_lang) - else: - return _lang_token_index(self.dicts[src_lang], tgt_lang) - - def get_decoder_langtok(self, tgt_lang): - if not self.args.decoder_langtok: - return self.dicts[tgt_lang].eos() - return _lang_token_index(self.dicts[tgt_lang], tgt_lang) - - def alter_dataset_langtok( - self, - lang_pair_dataset, - src_eos=None, - src_lang=None, - tgt_eos=None, - tgt_lang=None, - ): - if self.args.encoder_langtok is None and not self.args.decoder_langtok: - return lang_pair_dataset - - new_src_eos = None - if ( - self.args.encoder_langtok is not None - and src_eos is not None - and src_lang is not None - and tgt_lang is not None - ): - new_src_eos = self.get_encoder_langtok(src_lang, tgt_lang) - else: - src_eos = None - - new_tgt_bos = None - if self.args.decoder_langtok and tgt_eos is not None and tgt_lang is not None: - new_tgt_bos = self.get_decoder_langtok(tgt_lang) - else: - tgt_eos = None - - return TransformEosLangPairDataset( - lang_pair_dataset, - src_eos=src_eos, - new_src_eos=new_src_eos, - tgt_bos=tgt_eos, - new_tgt_bos=new_tgt_bos, - ) - - def load_dataset(self, split, epoch=1, **kwargs): - """Load a dataset split.""" - paths = utils.split_paths(self.args.data) - assert len(paths) > 0 - data_path = paths[(epoch - 1) % len(paths)] - - def 
language_pair_dataset(lang_pair): - src, tgt = lang_pair.split("-") - langpair_dataset = load_langpair_dataset( - data_path, - split, - src, - self.dicts[src], - tgt, - self.dicts[tgt], - combine=True, - dataset_impl=self.args.dataset_impl, - upsample_primary=self.args.upsample_primary, - left_pad_source=self.args.left_pad_source, - left_pad_target=self.args.left_pad_target, - max_source_positions=self.args.max_source_positions, - max_target_positions=self.args.max_target_positions, - ) - return self.alter_dataset_langtok( - langpair_dataset, - src_eos=self.dicts[src].eos(), - src_lang=src, - tgt_eos=self.dicts[tgt].eos(), - tgt_lang=tgt, - ) - - self.datasets[split] = RoundRobinZipDatasets( - OrderedDict( - [ - (lang_pair, language_pair_dataset(lang_pair)) - for lang_pair in self.lang_pairs - ] - ), - eval_key=None - if self.training - else "%s-%s" % (self.args.source_lang, self.args.target_lang), - ) - - def build_dataset_for_inference(self, src_tokens, src_lengths, constraints=None): - if constraints is not None: - raise NotImplementedError( - "Constrained decoding with the multilingual_translation task is not supported" - ) - - lang_pair = "%s-%s" % (self.args.source_lang, self.args.target_lang) - return RoundRobinZipDatasets( - OrderedDict( - [ - ( - lang_pair, - self.alter_dataset_langtok( - LanguagePairDataset( - src_tokens, src_lengths, self.source_dictionary - ), - src_eos=self.source_dictionary.eos(), - src_lang=self.args.source_lang, - tgt_eos=self.target_dictionary.eos(), - tgt_lang=self.args.target_lang, - ), - ) - ] - ), - eval_key=lang_pair, - ) - - def build_model(self, args, from_checkpoint=False): - def check_args(): - messages = [] - if ( - len(set(self.args.lang_pairs).symmetric_difference(args.lang_pairs)) - != 0 - ): - messages.append( - "--lang-pairs should include all the language pairs {}.".format( - args.lang_pairs - ) - ) - if self.args.encoder_langtok != args.encoder_langtok: - messages.append( - "--encoder-langtok should be {}.".format(args.encoder_langtok) - ) - if self.args.decoder_langtok != args.decoder_langtok: - messages.append( - "--decoder-langtok should {} be set.".format( - "" if args.decoder_langtok else "not" - ) - ) - - if len(messages) > 0: - raise ValueError(" ".join(messages)) - - # Update args -> the fact that the constructor here - # changes the args object doesn't mean you get the same one here - self.update_args(args) - - # Check if task args are consistant with model args - check_args() - - from fairseq import models - - model = models.build_model(args, self, from_checkpoint) - if not isinstance(model, FairseqMultiModel): - raise ValueError( - "MultilingualTranslationTask requires a FairseqMultiModel architecture" - ) - return model - - def _per_lang_pair_train_loss( - self, lang_pair, model, update_num, criterion, sample, optimizer, ignore_grad - ): - loss, sample_size, logging_output = criterion( - model.models[lang_pair], sample[lang_pair] - ) - if ignore_grad: - loss *= 0 - optimizer.backward(loss) - return loss, sample_size, logging_output - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False - ): - model.train() - from collections import defaultdict - - agg_loss, agg_sample_size, agg_logging_output = 0.0, 0.0, defaultdict(float) - curr_lang_pairs = [ - lang_pair - for lang_pair in self.model_lang_pairs - if sample[lang_pair] is not None and len(sample[lang_pair]) != 0 - ] - - for idx, lang_pair in enumerate(curr_lang_pairs): - - def maybe_no_sync(): - if ( - self.args.distributed_world_size > 1 
- and hasattr(model, "no_sync") - and idx < len(curr_lang_pairs) - 1 - ): - return model.no_sync() - else: - return contextlib.ExitStack() # dummy contextmanager - - with maybe_no_sync(): - loss, sample_size, logging_output = self._per_lang_pair_train_loss( - lang_pair, - model, - update_num, - criterion, - sample, - optimizer, - ignore_grad, - ) - agg_loss += loss.detach().item() - # TODO make summing of the sample sizes configurable - agg_sample_size += sample_size - for k in logging_output: - agg_logging_output[k] += logging_output[k] - agg_logging_output[f"{lang_pair}:{k}"] += logging_output[k] - return agg_loss, agg_sample_size, agg_logging_output - - def _per_lang_pair_valid_loss(self, lang_pair, model, criterion, sample): - return criterion(model.models[lang_pair], sample[lang_pair]) - - def valid_step(self, sample, model, criterion): - model.eval() - with torch.no_grad(): - from collections import defaultdict - - agg_loss, agg_sample_size, agg_logging_output = 0.0, 0.0, defaultdict(float) - for lang_pair in self.eval_lang_pairs: - if ( - lang_pair not in sample - or sample[lang_pair] is None - or len(sample[lang_pair]) == 0 - ): - continue - loss, sample_size, logging_output = self._per_lang_pair_valid_loss( - lang_pair, model, criterion, sample - ) - agg_loss += loss.data.item() - # TODO make summing of the sample sizes configurable - agg_sample_size += sample_size - for k in logging_output: - agg_logging_output[k] += logging_output[k] - agg_logging_output[f"{lang_pair}:{k}"] += logging_output[k] - return agg_loss, agg_sample_size, agg_logging_output - - def inference_step( - self, generator, models, sample, prefix_tokens=None, constraints=None - ): - with torch.no_grad(): - if self.args.decoder_langtok: - bos_token = _lang_token_index( - self.target_dictionary, self.args.target_lang - ) - else: - bos_token = self.target_dictionary.eos() - return generator.generate( - models, - sample, - prefix_tokens=prefix_tokens, - constraints=constraints, - bos_token=bos_token, - ) - - def reduce_metrics(self, logging_outputs, criterion): - with metrics.aggregate(): - # pass 'sample_size', 'nsentences', 'ntokens' stats to fairseq_task - super().reduce_metrics(logging_outputs, criterion) - for k in ["sample_size", "nsentences", "ntokens"]: - metrics.log_scalar(k, sum(l[k] for l in logging_outputs)) - - @property - def source_dictionary(self): - if self.training: - return next(iter(self.dicts.values())) - else: - return self.dicts[self.args.source_lang] - - @property - def target_dictionary(self): - if self.training: - return next(iter(self.dicts.values())) - else: - return self.dicts[self.args.target_lang] - - def max_positions(self): - """Return the max sentence length allowed by the task.""" - if len(self.datasets.values()) == 0: - return { - "%s-%s" - % (self.args.source_lang, self.args.target_lang): ( - self.args.max_source_positions, - self.args.max_target_positions, - ) - } - return OrderedDict( - [ - (key, (self.args.max_source_positions, self.args.max_target_positions)) - for split in self.datasets.keys() - for key in self.datasets[split].datasets.keys() - ] - ) diff --git a/kosmos-g/fairseq/fairseq/tasks/online_backtranslation.py b/kosmos-g/fairseq/fairseq/tasks/online_backtranslation.py deleted file mode 100644 index 52ce58ced..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/online_backtranslation.py +++ /dev/null @@ -1,682 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
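One detail worth calling out in `train_step` above: `maybe_no_sync` wraps every language pair except the last in `model.no_sync()`, so a DistributedDataParallel model accumulates gradients locally and pays for only one all-reduce per update. A minimal sketch of the same pattern (`model`, `batches`, and `world_size` are placeholders):

```python
import contextlib

def maybe_no_sync(model, idx: int, num_batches: int, world_size: int):
    # Skip DDP gradient synchronization on all but the final backward pass;
    # no_sync() is DistributedDataParallel's context manager for exactly this.
    if world_size > 1 and hasattr(model, "no_sync") and idx < num_batches - 1:
        return model.no_sync()
    return contextlib.ExitStack()  # dummy no-op context manager

# Usage sketch:
# for idx, batch in enumerate(batches):
#     with maybe_no_sync(model, idx, len(batches), world_size):
#         criterion(model, batch)[0].backward()   # grads accumulate locally
# optimizer.step()                                # after one final all-reduce
```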
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import contextlib -import json -import logging -import math -import os -from argparse import Namespace -from collections import OrderedDict, defaultdict -from pathlib import Path -from typing import Dict, Sequence, Tuple -from argparse import ArgumentError - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -import fairseq -from fairseq import metrics, options, utils -from fairseq.data import ( - FairseqDataset, - LanguagePairDataset, - NoisingDataset, - PrependTokenDataset, - RoundRobinZipDatasets, - TransformEosLangPairDataset, - data_utils, - encoders, -) -from fairseq.sequence_generator import SequenceGenerator -from fairseq.tasks import register_task -from fairseq.tasks.translation import TranslationTask, load_langpair_dataset - -logger = logging.getLogger(__name__) - - -class PiecewiseLinearFn: - """Piecewise linear function. Can be configured with a string.""" - - def __init__(self, pieces: Sequence[Tuple[int, float]]): - assert pieces == sorted( - pieces - ), f"PiecewiseLinearFn configuration should be sorted, received: {pieces}" - - self.pieces = pieces - - def __call__(self, x: int) -> float: - for i, (x_a, y_a) in enumerate(self.pieces[:-1]): - x_b, y_b = self.pieces[i + 1] - if x_a <= x <= x_b: - return y_a + (x - x_a) * (y_b - y_a) / (x_b - x_a) - - return self.pieces[-1][1] - - @staticmethod - def from_string(configuration: str) -> "PiecewiseLinearFn": - """ - Parse the configuration of lambda coefficient (for scheduling). - x = "3" # lambda will be a constant equal to x - x = "0:1,1000:0" # lambda will start from 1 and linearly decrease - # to 0 during the first 1000 iterations - x = "0:0,1000:0,2000:1" # lambda will be equal to 0 for the first 1000 - # iterations, then will linearly increase to 1 until iteration 2000 - """ - if isinstance(configuration, float): - return PiecewiseLinearFn([(0, configuration)]) - - try: - parts = configuration.split(",") - if len(parts) == 1: - v = float(configuration) - return PiecewiseLinearFn([(0, v)]) - - split = [s.split(":") for s in parts] - pieces = [(int(t), float(v)) for t, v in split] - return PiecewiseLinearFn(pieces) - except Exception: - raise ValueError( - f"Invalid PiecewiseLinearFn configuration: {configuration!r}" - ) - - @staticmethod - def one() -> "PiecewiseLinearFn": - return PiecewiseLinearFn([(0, 1.0)]) - - -@register_task("online_backtranslation") -class OnlineBackTranslationTask(TranslationTask): - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - # fmt: off - # Generic translation args - parser.add_argument('data', help='colon separated path to data directories list, \ - will be iterated upon during epochs in round-robin manner; \ - however, valid and test data are always in the first directory to \ - avoid the need for repeating them in all directories') - parser.add_argument('--mono-langs', metavar='MONO_LANGS', - help='monolingual languages for training') - parser.add_argument('--valid-lang-pairs', default=None, metavar='VALID_LANG_PAIRS', - help='language pairs for validation') - parser.add_argument('--load-alignments', action='store_true', - help='load the binarized alignments') - parser.add_argument('--left-pad-source', default='False', type=str, metavar='BOOL', - help='pad the source on the left') - parser.add_argument('--left-pad-target', default='False', type=str, metavar='BOOL', - help='pad the 
target on the left') - parser.add_argument('--upsample-primary', default=1, type=int, - help='amount to upsample primary dataset') - try: - parser.add_argument('--max-source-positions', default=1024, type=int, metavar='N', - help='max number of tokens in the source sequence') - parser.add_argument('--max-target-positions', default=1024, type=int, metavar='N', - help='max number of tokens in the target sequence') - except ArgumentError: - # this might have already been defined. Once we transition this to hydra it should be fine to add it here. - pass - parser.add_argument('--truncate-source', action='store_true', default=False, - help='truncate source to max-source-positions') - parser.add_argument('--num-batch-buckets', default=0, type=int, metavar='N', - help='if >0, then bucket source and target lengths into N ' - 'buckets and pad accordingly; this is useful on TPUs ' - 'to minimize the number of compilations') - - # Denoising args - parser.add_argument('--max-word-shuffle-distance', default=3.0, type=float, metavar='N', - help='maximum word shuffle distance for denoising autoencoding data generation') - parser.add_argument('--word-dropout-prob', default=0.1, type=float, metavar='N', - help='word dropout probability for denoising autoencoding data generation') - parser.add_argument('--word-blanking-prob', default=0.2, type=float, metavar='N', - help='word blanking probability for denoising autoencoding data generation') - - # Backtranslation args - parser.add_argument('--lambda-bt', default="1.0", type=str, metavar='N', - help='back-translation weight') - parser.add_argument('--lambda-dae', default="1.0", type=str, metavar='N', - help='denoising auto-encoder weight') - - # Evaluation args - parser.add_argument('--generate-one-by-one', action='store_true', - help='generate one sentence at a time for backtranslation') - - parser.add_argument('--eval-bleu', action='store_true', - help='evaluation with BLEU scores') - parser.add_argument('--eval-bleu-detok', type=str, default="space", - help='detokenize before computing BLEU (e.g., "moses"); ' - 'required if using --eval-bleu; use "space" to ' - 'disable detokenization; see fairseq.data.encoders ' - 'for other options') - parser.add_argument('--eval-bleu-detok-args', type=str, metavar='JSON', - help='args for building the tokenizer, if needed') - parser.add_argument('--eval-tokenized-bleu', action='store_true', default=False, - help='compute tokenized BLEU instead of sacrebleu') - parser.add_argument('--eval-bleu-remove-bpe', nargs='?', const='@@ ', default=None, - help='remove BPE before computing BLEU') - parser.add_argument('--eval-bleu-args', type=str, metavar='JSON', - help='generation args for BLUE scoring, ' - 'e.g., \'{"beam": 4, "lenpen": 0.6}\'') - parser.add_argument('--eval-bleu-print-samples', action='store_true', - help='print sample generations during validation') - # fmt: on - - def __init__(self, args, common_dict, mono_langs, valid_lang_pairs): - super().__init__(args, common_dict, common_dict) - self.common_dict = common_dict - self.mono_langs = mono_langs - self.valid_lang_pairs = valid_lang_pairs - - self.SHOW_SAMPLES_INTERVAL = 1000 - # Start by showing samples - self._show_samples_ctr = self.SHOW_SAMPLES_INTERVAL - self.SHOW_SAMPLES_NUMBER = 5 - self.lambda_bt = PiecewiseLinearFn.from_string(args.lambda_bt) - self.lambda_dae = PiecewiseLinearFn.from_string(args.lambda_dae) - - self.args = args - self.data = utils.split_paths(self.args.data) - if len(self.data) == 1: - shards = list(Path(self.data[0]).glob("shard*")) - 
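`--lambda-bt` and `--lambda-dae` above are parsed by `PiecewiseLinearFn.from_string` and then evaluated at every update, so the back-translation and denoising losses can be phased in or out over training. A standalone re-implementation of the interpolation in `__call__`, evaluated on the schedule `0:1,1000:0` from the docstring:

```python
def piecewise_linear(pieces, x):
    # Same logic as PiecewiseLinearFn.__call__: linear interpolation between
    # consecutive (step, value) breakpoints, holding the last value afterwards.
    for (x_a, y_a), (x_b, y_b) in zip(pieces, pieces[1:]):
        if x_a <= x <= x_b:
            return y_a + (x - x_a) * (y_b - y_a) / (x_b - x_a)
    return pieces[-1][1]

pieces = [(0, 1.0), (1000, 0.0)]   # parsed from the string "0:1,1000:0"
for step in (0, 250, 500, 1000, 2000):
    print(step, piecewise_linear(pieces, step))
# 0 1.0 | 250 0.75 | 500 0.5 | 1000 0.0 | 2000 0.0
```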
if len(shards) > 0: - # keep this as strings, since it can also be a manifold path - old_data = self.data - self.data = [str(shard) for shard in shards] - logging.warning(f"Expanded data directory {old_data} to {self.data}") - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task (e.g., load dictionaries). - - Args: - args (argparse.Namespace): parsed command-line arguments - """ - args.left_pad_source = options.eval_bool(args.left_pad_source) - args.left_pad_target = options.eval_bool(args.left_pad_target) - - paths = utils.split_paths(args.data) - assert len(paths) > 0 - assert args.mono_langs is not None - - mono_langs = args.mono_langs.split(",") - valid_lang_pairs = args.valid_lang_pairs.split(",") - - # load dictionary - dict_path = os.path.join(paths[0], "dict.txt") - common_dict = cls.load_dictionary(dict_path) - - return cls(args, common_dict, mono_langs, valid_lang_pairs) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs) -> FairseqDataset: - """Load a given dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - """ - if split == "train": - data_path = self.data[(epoch - 1) % len(self.data)] - dataset = self.load_train_dataset(data_path) - else: - # valid/test should always be the same. - dataset = self.load_translation_dataset(split, self.data[0]) - - self.datasets[split] = dataset - return dataset - - def load_train_dataset(self, data_path: str) -> FairseqDataset: - """The training dataset is made of backtranslation dataset and denoising dataset.""" - data = [] - for lang in self.mono_langs: - train_path = os.path.join(data_path, lang, "train") - # TODO: could we do the BT using denoise sample ? - # this would half the data loading work - data.append((f"{lang}-BT", self.load_bt_dataset(train_path, lang))) - data.append( - (f"{lang}-DENOISE", self.load_denoise_dataset(train_path, lang)) - ) - - return RoundRobinZipDatasets(OrderedDict(data)) - - def _langpair_dataset( - self, src: FairseqDataset, tgt: FairseqDataset - ) -> LanguagePairDataset: - return LanguagePairDataset( - src, - src.sizes, - self.dictionary, - tgt=tgt, - tgt_sizes=tgt.sizes, - tgt_dict=self.dictionary, - left_pad_source=self.args.left_pad_source, - left_pad_target=self.args.left_pad_target, - # TODO: should we shuffle ? we are already sorting batch by sizes so ? - # shuffle=True, - ) - - def _prepend_lang_bos_to_target( - self, dataset: LanguagePairDataset, lang: str - ) -> LanguagePairDataset: - bos = _lang_token_index(self.dictionary, lang) - return TransformEosLangPairDataset( - dataset, - src_eos=self.dictionary.eos(), - new_src_eos=self.dictionary.eos(), - tgt_bos=self.dictionary.eos(), - new_tgt_bos=bos, - ) - - def load_bt_dataset(self, data_path: str, lang: str) -> FairseqDataset: - """The BT dataset is generated with (tgt, tgt) pairs. - The actual translation to a (generated_src, tgt) pair - is done on the fly during training. 
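`load_train_dataset` above zips one back-translation and one denoising sub-dataset per monolingual language into a `RoundRobinZipDatasets`, and `train_step` later recovers the language and task type by splitting the key on `-`. A small sketch of the key layout it produces (the language codes are examples):

```python
def train_dataset_keys(mono_langs):
    # Key layout built by load_train_dataset; train_step splits each key
    # into (mono_lang, task_subtype) via dataset_key.split("-").
    keys = []
    for lang in mono_langs:
        keys.append(f"{lang}-BT")        # on-the-fly back-translation pairs
        keys.append(f"{lang}-DENOISE")   # noised/clean autoencoding pairs
    return keys

print(train_dataset_keys(["en", "ro"]))
# ['en-BT', 'en-DENOISE', 'ro-BT', 'ro-DENOISE']
```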
- """ - mono_dataset = data_utils.load_indexed_dataset( - data_path, self.common_dict, self.args.dataset_impl - ) - assert mono_dataset is not None, f"No dataset found for {lang}" - - mono_dataset_src = PrependTokenDataset( - mono_dataset, _lang_token_index(self.dictionary, lang) - ) - - mono_dataset_bt = self._langpair_dataset(mono_dataset_src, mono_dataset) - logger.info( - f"mono_lang = {lang} " - f"lang token index = {_lang_token_index(self.dictionary, lang)} " - f"lang token = {_lang_token(lang)}" - ) - - mono_dataset_bt = self._prepend_lang_bos_to_target(mono_dataset_bt, lang) - return mono_dataset_bt - - def load_denoise_dataset(self, data_path: str, lang: str) -> FairseqDataset: - """Classic denoising dataset""" - dataset = data_utils.load_indexed_dataset( - data_path, self.common_dict, self.args.dataset_impl - ) - noisy_dataset = NoisingDataset( - dataset, - self.dictionary, - seed=1, - max_word_shuffle_distance=self.args.max_word_shuffle_distance, - word_dropout_prob=self.args.word_dropout_prob, - word_blanking_prob=self.args.word_blanking_prob, - ) - noisy_dataset = PrependTokenDataset( - noisy_dataset, _lang_token_index(self.dictionary, lang) - ) - - clean_dataset = data_utils.load_indexed_dataset( - data_path, self.common_dict, self.args.dataset_impl - ) - denoising_dataset = self._langpair_dataset(noisy_dataset, clean_dataset) - denoising_dataset = self._prepend_lang_bos_to_target(denoising_dataset, lang) - return denoising_dataset - - def load_translation_dataset( - self, split: str, data_path: str, combine: bool = False - ): - # only judging with one language pair for the moment, - # since ConcatDataset doesn't work as expected - assert len(self.valid_lang_pairs) == 1, "For now..." - valid_lang_pair = self.valid_lang_pairs[0] - src, tgt = valid_lang_pair.split("-") - - # use the same function than TranslationTask - src_tgt_dt = load_langpair_dataset( - data_path, - split, - src, - self.common_dict, - tgt, - self.common_dict, - combine=combine, - dataset_impl=self.args.dataset_impl, - upsample_primary=self.args.upsample_primary, - left_pad_source=self.args.left_pad_source, - left_pad_target=self.args.left_pad_target, - max_source_positions=self.args.max_source_positions, - max_target_positions=self.args.max_target_positions, - load_alignments=self.args.load_alignments, - truncate_source=self.args.truncate_source, - num_buckets=self.args.num_batch_buckets, - shuffle=(split != "test"), - prepend_bos_src=_lang_token_index(self.dictionary, src), - ) - - src_tgt_eos_dt = self._prepend_lang_bos_to_target(src_tgt_dt, tgt) - src_tgt_eos_dt.args = self.args - return src_tgt_eos_dt - - def build_dataset_for_inference(self, src_tokens, src_lengths, constraints=None): - raise NotImplementedError - - def build_model(self, args, from_checkpoint=False): - # torch.autograd.set_detect_anomaly(True) - model = super().build_model(args, from_checkpoint) - - add_secial_tokens_to_dict_and_model(self.common_dict, model, self.mono_langs) - - self.sequence_generators = {} - for mono_lang in self.mono_langs: - self.sequence_generators[mono_lang] = SequenceGenerator( - [model], - tgt_dict=self.dictionary, - beam_size=1, - max_len_a=1.3, - max_len_b=5, - min_len=5, - # keep 1 to be able to prepend bos - max_len=model.max_decoder_positions() - 1, - ) - - if getattr(args, "eval_bleu", False): - assert getattr(args, "eval_bleu_detok", None) is not None, ( - "--eval-bleu-detok is required if using --eval-bleu; " - "try --eval-bleu-detok=moses (or --eval-bleu-detok=space " - "to disable detokenization, 
e.g., when using sentencepiece)" - ) - detok_args = json.loads(getattr(args, "eval_bleu_detok_args", "{}") or "{}") - self.tokenizer = encoders.build_tokenizer( - Namespace( - tokenizer=getattr(args, "eval_bleu_detok", None), **detok_args - ) - ) - - gen_args = json.loads(getattr(args, "eval_bleu_args", "{}") or "{}") - self.bleu_sequence_generator = self.build_generator( - [model], Namespace(**gen_args) - ) - - return model - - def max_positions(self): - """Return the max sentence length allowed by the task.""" - return (self.args.max_source_positions, self.args.max_target_positions) - - @property - def dictionary(self): - """Return the source :class:`~fairseq.data.Dictionary`.""" - return self.common_dict - - def display_samples_once_in_a_while(self, smp, mono_lang, other_lang): - self._show_samples_ctr += 1 - if self._show_samples_ctr < self.SHOW_SAMPLES_INTERVAL: - return - self._show_samples_ctr = 0 - - ln = smp["net_input"]["src_tokens"].shape[0] - - logger.info( - f"(r:{self.args.distributed_rank}) : " - f"{other_lang} ---> {mono_lang} " - f"({other_lang} was generated by back-translation.) {ln} samples" - ) - - for i in range(min(ln, self.SHOW_SAMPLES_NUMBER)): - src_tokens = smp["net_input"]["src_tokens"][i] - tgt_tokens = smp["target"][i] - - src_str = self.dictionary.string(src_tokens, "sentencepiece") - tgt_str = self.dictionary.string(tgt_tokens, "sentencepiece") - logger.info( - f"\n{i}\t\t[{other_lang} generated] {src_str}\n" - f"\t\t[{mono_lang} original ] {tgt_str}\n" - f"\t\t[ src tokens] {src_tokens}\n" - ) - - def backtranslate_sample(self, smp, orig_lang, other_lang) -> None: - """ - * WARNING: smp is modified in place. - * At the start of this function, `smp` has the same input and target: - |--------------------------------------------------------| - | smp['net_input']['src_tokens'] | smp['target'] | - | (from data) __en__ hello world | __en__ hello world | - |--------------------------------------------------------| - - * We call generator.generate(smp, bos_token = token("ro")), - and copy the result as input - * At the end, `smp` has the translation to other language. 
- |--------------------------------------------------------| - | smp['net_input']['src_tokens'] | smp['target'] | - | (generated) __ro__ salut lume | __en__ hello world | - |--------------------------------------------------------| - - """ - bos_token = _lang_token_index(self.dictionary, other_lang) - generated = self.sequence_generators[orig_lang].generate( - models=[], sample=smp, bos_token=bos_token - ) - - max_lngth = max([gn[0]["tokens"].size(0) for gn in generated]) - net_input = smp["net_input"] - n_src_tokens = torch.empty( - size=(len(generated), max_lngth + 1), dtype=net_input["src_tokens"].dtype - ) - n_src_lengths = torch.empty( - len(generated), dtype=net_input["src_lengths"].dtype - ) - - for i, gn in enumerate(generated): - tokens = gn[0]["tokens"] - tokens_size = tokens.size(0) - padding_needed = max_lngth - tokens_size - tokens = torch.cat([tokens.new([bos_token]), tokens]) - tokens = F.pad(tokens, (0, padding_needed), value=self.dictionary.pad()) - n_src_tokens[i] = tokens - n_src_lengths[i] = tokens_size + 1 - - device = net_input["src_tokens"].device - # This seems to be important - del net_input["src_tokens"] - del net_input["src_lengths"] - net_input["src_tokens"] = n_src_tokens.to(device) - net_input["src_lengths"] = n_src_lengths.to(device) - - def generate(self, smp, model): - model.eval() - orig_lang = ( - self.dictionary[smp["net_input"]["src_tokens"][0][0]] - .replace(" ", "") - .replace("_", "") - ) - bos_token = smp["net_input"]["prev_output_tokens"][0][0] - with torch.no_grad(): - generated = self.sequence_generators[orig_lang].generate( - models=[model], sample=smp, bos_token=bos_token - ) - return generated - - def get_other_lang(self, lang): - # TODO: allow more complex mapping - if lang != self.mono_langs[0]: - return self.mono_langs[0] - if len(self.mono_langs) == 2: - return self.mono_langs[1] - return self.mono_langs[np.random.randint(1, len(self.mono_langs))] - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False - ): - - model.train() - model.set_num_updates(update_num) - - agg_loss, agg_sample_size = 0.0, 0.0 - agg_logging_output: Dict[str, float] = defaultdict(float) - - dataset_keys = self.datasets["train"].datasets.keys() - - weights = { - "BT": self.lambda_bt(update_num), - "DENOISE": self.lambda_dae(update_num), - } - log_keys = {"BT": "bt_", "DENOISE": "dae_"} - - for dataset_key in dataset_keys: - smp = sample[dataset_key] - mono_lang, task_subtype = dataset_key.split("-") - if weights[task_subtype] == 0: - continue - - if task_subtype == "BT": - with torch.autograd.profiler.record_function("backtranslation"): - model.eval() - # TODO: Could we translate to several language at once ? - # this would allow to share encoder_out and maximize GPU usage. 
- other_lang = self.get_other_lang(mono_lang) - self.backtranslate_sample(smp, mono_lang, other_lang) - self.display_samples_once_in_a_while(smp, mono_lang, other_lang) - model.train() - - # Like in FairseqTask.train_step - with torch.autograd.profiler.record_function("forward"): - loss, sample_size, logging_output = criterion(model, smp) - loss *= weights[task_subtype] - if ignore_grad: - loss *= 0 - with torch.autograd.profiler.record_function("backward"): - optimizer.backward(loss) - - agg_loss += loss.item() - agg_sample_size += sample_size - for k in logging_output: - agg_logging_output[log_keys[task_subtype] + k] += logging_output[k] - agg_logging_output[k] += logging_output[k] - - return agg_loss, agg_sample_size, agg_logging_output - - def get_bos_token_from_sample(self, sample): - net_input = sample["net_input"] - source_lang_token_id = torch.unique(net_input["src_tokens"][:, 0]).item() - source_lang_token = self.dictionary[source_lang_token_id].replace("_", "") - target_lang_token_id = _lang_token_index( - self.dictionary, self.get_other_lang(source_lang_token) - ) - - return target_lang_token_id - - def reduce_metrics(self, logging_outputs, criterion): - super().reduce_metrics(logging_outputs, criterion) - bt_sample_size = sum(x.get("bt_sample_size", 0) for x in logging_outputs) - if bt_sample_size: - bt_loss_sum = sum(x.get("bt_loss", 0) for x in logging_outputs) - bt_loss_sum *= 1 / bt_sample_size / math.log(2) - metrics.log_scalar("bt_loss", bt_loss_sum, bt_sample_size, round=3) - - bt_nll_loss_sum = sum(x.get("bt_nll_loss", 0) for x in logging_outputs) - bt_ntokens = sum(x.get("bt_ntokens", 0) for x in logging_outputs) - bt_nll_loss_sum *= 1 / bt_ntokens / math.log(2) - metrics.log_scalar("bt_nll_loss", bt_nll_loss_sum, bt_ntokens, round=3) - metrics.log_derived( - "bt_ppl", lambda meters: utils.get_perplexity(meters["bt_nll_loss"].avg) - ) - - dae_sample_size = sum(x.get("dae_sample_size", 0) for x in logging_outputs) - if dae_sample_size: - dae_loss_sum = sum(x.get("dae_loss", 0) for x in logging_outputs) - dae_loss_sum *= 1 / dae_sample_size / math.log(2) - metrics.log_scalar("dae_loss", dae_loss_sum, dae_sample_size, round=3) - - dae_nll_loss_sum = sum(x.get("dae_nll_loss", 0) for x in logging_outputs) - dae_ntokens = sum(x.get("dae_ntokens", 0) for x in logging_outputs) - dae_nll_loss_sum *= 1 / dae_ntokens / math.log(2) - metrics.log_scalar("dae_nll_loss", dae_nll_loss_sum, dae_ntokens, round=3) - metrics.log_derived( - "dae_ppl", - lambda meters: utils.get_perplexity(meters["dae_nll_loss"].avg), - ) - - -@torch.no_grad() -def extend_embedding( - emb: nn.Module, new_vocab_size: int, copy_from_token_id: int -) -> None: - old_emb_data = emb.weight.data - (old_vocab_size, dim) = old_emb_data.shape - assert new_vocab_size >= old_vocab_size - - if new_vocab_size > old_vocab_size: - emb.weight.data = torch.zeros((new_vocab_size, dim)) - emb.weight.data[:old_vocab_size, :] = old_emb_data - # initialize new embeddings - emb.weight.data[old_vocab_size:, :] = old_emb_data[copy_from_token_id] - if hasattr(emb, "num_embeddings"): - emb.num_embeddings = new_vocab_size - if hasattr(emb, "out_features"): - emb.out_features = new_vocab_size - - if getattr(emb, "bias", None) is None: - return - - # Fix the bias. - # Bias shape can be different from the previous vocab size - # if the weight matrix was shared and alread extended but not the bias. 
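`extend_embedding` above grows a trained embedding (or output projection) so freshly added symbols such as `<mask>` and the `__lang__` tokens get usable initial weights: old rows are kept, and every new row is copied from an existing token (the BOS row in the caller). A minimal sketch of the weight part only; the original additionally patches `out_features` and a possibly shorter bias vector:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def grow_embedding(emb: nn.Embedding, new_vocab_size: int, copy_from: int):
    # Keep the trained rows, seed new rows from an existing token's embedding.
    old = emb.weight.data
    old_vocab_size, dim = old.shape
    assert new_vocab_size >= old_vocab_size
    if new_vocab_size == old_vocab_size:
        return
    new = torch.empty(new_vocab_size, dim, dtype=old.dtype, device=old.device)
    new[:old_vocab_size] = old             # preserve trained embeddings
    new[old_vocab_size:] = old[copy_from]  # init added tokens from a known row
    emb.weight.data = new
    emb.num_embeddings = new_vocab_size

emb = nn.Embedding(10, 4)
grow_embedding(emb, 12, copy_from=0)                # e.g. copy from bos()
print(emb.weight.shape)                             # torch.Size([12, 4])
print(torch.equal(emb.weight[10], emb.weight[0]))   # True
```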
- (old_vocab_size,) = emb.bias.shape - assert new_vocab_size >= old_vocab_size - if new_vocab_size > old_vocab_size: - old_bias = emb.bias.data - new_bias = torch.zeros( - (new_vocab_size,), dtype=old_bias.dtype, device=old_bias.device - ) - new_bias[:old_vocab_size] = old_bias - emb.bias.data = new_bias - - -def add_secial_tokens_to_dict_and_model( - dictionary: "fairseq.data.Dictionary", - model: nn.Module, - mono_langs: Sequence[str], -) -> None: - embs = model.encoder.embed_tokens - vocab_size, embedding_dim = embs.weight.shape - - # The model may or may not have a '<mask>' embedding yet - assert ( - len(dictionary) <= vocab_size <= len(dictionary) + 1 - ), f"Dictionary len ({len(dictionary)}) doesn't match embs shape ({embs.weight.shape})" - # TODO: we should reuse the pretrained model dict which already has <mask> - dictionary.add_symbol("<mask>") - - for lang in mono_langs: - lang_token = _lang_token(lang) - dictionary.add_symbol(lang_token) - logger.info( - f"dictionary: {len(dictionary)} -> {vocab_size} tokens " - f"after adding {len(mono_langs)} lang tokens." - ) - - if len(dictionary) <= vocab_size: - return - - extend_embedding(embs, len(dictionary), dictionary.bos()) - dec_embs = model.decoder.embed_tokens - extend_embedding(dec_embs, len(dictionary), dictionary.bos()) - lm_head = model.decoder.output_projection - extend_embedding(lm_head, len(dictionary), dictionary.bos()) - assert lm_head.weight.shape == (len(dictionary), embedding_dim) - - -def _lang_token(lang: str) -> str: - return f"__{lang}__" - - -def _lang_token_index(dictionary, lang: str) -> int: - return dictionary.index(_lang_token(lang)) - - -@contextlib.contextmanager -def assert_weights_have_changed(model: nn.Module): - def checksum(model: nn.Module) -> float: - return sum(p.sum().item() for p in model.parameters()) - - initial_checksum = checksum(model) - yield model - final_checksum = checksum(model) - logger.info( - f"initial_checksum={initial_checksum} -> final_checksum={final_checksum}" - ) - assert initial_checksum != final_checksum, "Model hasn't changed !" diff --git a/kosmos-g/fairseq/fairseq/tasks/semisupervised_translation.py b/kosmos-g/fairseq/fairseq/tasks/semisupervised_translation.py deleted file mode 100644 index 432b8a52c..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/semisupervised_translation.py +++ /dev/null @@ -1,485 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -from collections import OrderedDict - -from fairseq import utils -from fairseq.data import ( - BacktranslationDataset, - IndexedCachedDataset, - IndexedDataset, - IndexedRawTextDataset, - LanguagePairDataset, - NoisingDataset, - RoundRobinZipDatasets, - data_utils, - indexed_dataset, -) -from fairseq.models import FairseqMultiModel -from fairseq.sequence_generator import SequenceGenerator - -from . import register_task -from .multilingual_translation import MultilingualTranslationTask - - -logger = logging.getLogger(__name__) - - -def _get_bt_dataset_key(lang_pair): - return "bt:" + lang_pair - - -def _get_denoising_dataset_key(lang_pair): - return "denoising:" + lang_pair - - -# ported from UnsupervisedMT -def parse_lambda_config(x): - """ - Parse the configuration of lambda coefficient (for scheduling). 
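The `parse_lambda_config` helper here returns the coefficient's initial value plus an optional `(step, value)` schedule; note that its implementation splits each pair on `os.pathsep`, which equals `':'` only on POSIX systems. A standalone sketch with an explicit `':'` separator, showing both accepted formats:

```python
def parse_lambda(x: str):
    # Standalone version of parse_lambda_config, splitting pairs on ':'
    # explicitly (the original uses os.pathsep, which is ':' on POSIX).
    parts = x.split(",")
    if len(parts) == 1:
        return float(x), None                     # constant coefficient
    steps = [(int(k), float(v)) for k, v in (p.split(":") for p in parts)]
    assert all(a[0] < b[0] for a, b in zip(steps, steps[1:]))  # increasing steps
    return steps[0][1], steps                     # (initial value, schedule)

print(parse_lambda("3"))           # (3.0, None)
print(parse_lambda("0:1,1000:0"))  # (1.0, [(0, 1.0), (1000, 0.0)])
```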
- x = "3" # lambda will be a constant equal to x - x = "0:1,1000:0" # lambda will start from 1 and linearly decrease - # to 0 during the first 1000 iterations - x = "0:0,1000:0,2000:1" # lambda will be equal to 0 for the first 1000 - # iterations, then will linearly increase to 1 until iteration 2000 - """ - split = x.split(",") - if len(split) == 1: - return float(x), None - else: - split = [s.split(os.pathsep) for s in split] - assert all(len(s) == 2 for s in split) - assert all(k.isdigit() for k, _ in split) - assert all( - int(split[i][0]) < int(split[i + 1][0]) for i in range(len(split) - 1) - ) - return float(split[0][1]), [(int(k), float(v)) for k, v in split] - - -@register_task("semisupervised_translation") -class SemisupervisedTranslationTask(MultilingualTranslationTask): - """A task for training multiple translation models simultaneously. - - We iterate round-robin over batches from multiple language pairs, ordered - according to the `--lang-pairs` argument. - - The training loop is roughly: - - for i in range(len(epoch)): - for lang_pair in args.lang_pairs: - batch = next_batch_for_lang_pair(lang_pair) - loss = criterion(model_for_lang_pair(lang_pair), batch) - loss.backward() - optimizer.step() - - In practice, `next_batch_for_lang_pair` is abstracted in a FairseqDataset - (e.g., `RoundRobinZipDatasets`) and `model_for_lang_pair` is a model that - implements the `FairseqMultiModel` interface. - - During inference it is required to specify a single `--source-lang` and - `--target-lang`, instead of `--lang-pairs`. - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - # fmt: off - MultilingualTranslationTask.add_args(parser) - parser.add_argument('--lambda-parallel-config', default="1.0", type=str, metavar='CONFIG', - help='cross-entropy reconstruction coefficient (parallel data). ' - 'use fixed weight during training if set to floating point number. ' - 'use piecewise linear function over number of updates to schedule the ' - 'weight with the format: w0:step0,w1:step1,...') - parser.add_argument('--lambda-denoising-config', default="0.0", type=str, metavar='CONFIG', - help='Cross-entropy reconstruction coefficient (denoising autoencoding)' - 'use fixed weight during training if set to floating point number. ' - 'use piecewise linear function over number of updates to schedule the ' - 'weight with the format: w0:step0,w1:step1,...') - parser.add_argument('--lambda-otf-bt-config', default="0.0", type=str, metavar='CONFIG', - help='cross-entropy reconstruction coefficient (on-the-fly back-translation parallel data)' - 'use fixed weight during training if set to floating point number. 
' - 'use piecewise linear function over number of updates to schedule the ' - 'weight with the format: w0:step0,w1:step1,...') - parser.add_argument('--bt-max-len-a', default=1.1, type=float, metavar='N', - help='generate back-translated sequences of maximum length ax + b, where x is the ' - 'source length') - parser.add_argument('--bt-max-len-b', default=10.0, type=float, metavar='N', - help='generate back-translated sequences of maximum length ax + b, where x is the ' - 'source length') - parser.add_argument('--bt-beam-size', default=1, type=int, metavar='N', - help='beam size used in beam search of online back-translation') - parser.add_argument('--max-word-shuffle-distance', default=3.0, type=float, metavar='N', - help='maximum word shuffle distance for denoising autoencoding data generation') - parser.add_argument('--word-dropout-prob', default=0.1, type=float, metavar='N', - help='word dropout probability for denoising autoencoding data generation') - parser.add_argument('--word-blanking-prob', default=0.2, type=float, metavar='N', - help='word blanking probability for denoising autoencoding data generation') - # fmt: on - - def __init__(self, args, dicts, training): - super().__init__(args, dicts, training) - self.lambda_parallel, self.lambda_parallel_steps = parse_lambda_config( - args.lambda_parallel_config - ) - self.lambda_otf_bt, self.lambda_otf_bt_steps = parse_lambda_config( - args.lambda_otf_bt_config - ) - self.lambda_denoising, self.lambda_denoising_steps = parse_lambda_config( - args.lambda_denoising_config - ) - if self.lambda_denoising > 0.0 or self.lambda_denoising_steps is not None: - denoising_lang_pairs = [ - "%s-%s" % (tgt, tgt) - for tgt in {lang_pair.split("-")[1] for lang_pair in args.lang_pairs} - ] - self.model_lang_pairs = self.model_lang_pairs + denoising_lang_pairs - self.backtranslate_datasets = {} - self.backtranslators = {} - - @classmethod - def setup_task(cls, args, **kwargs): - dicts, training = MultilingualTranslationTask.prepare(args, **kwargs) - return cls(args, dicts, training) - - def load_dataset(self, split, epoch=1, **kwargs): - """Load a dataset split.""" - paths = utils.split_paths(self.args.data) - assert len(paths) > 0 - data_path = paths[(epoch - 1) % len(paths)] - - def split_exists(split, src, tgt, lang): - if src is not None: - filename = os.path.join( - data_path, "{}.{}-{}.{}".format(split, src, tgt, lang) - ) - else: - filename = os.path.join( - data_path, "{}.{}-None.{}".format(split, src, tgt) - ) - return indexed_dataset.dataset_exists(filename, impl=self.args.dataset_impl) - - def load_indexed_dataset(path, dictionary): - return data_utils.load_indexed_dataset( - path, dictionary, self.args.dataset_impl - ) - - # load parallel datasets - src_datasets, tgt_datasets = {}, {} - if ( - self.lambda_parallel > 0.0 - or self.lambda_parallel_steps is not None - or not split.startswith("train") - ): - for lang_pair in self.lang_pairs: - src, tgt = lang_pair.split("-") - if split_exists(split, src, tgt, src): - prefix = os.path.join( - data_path, "{}.{}-{}.".format(split, src, tgt) - ) - elif split_exists(split, tgt, src, src): - prefix = os.path.join( - data_path, "{}.{}-{}.".format(split, tgt, src) - ) - else: - continue - src_datasets[lang_pair] = load_indexed_dataset( - prefix + src, self.dicts[src] - ) - tgt_datasets[lang_pair] = load_indexed_dataset( - prefix + tgt, self.dicts[tgt] - ) - logger.info( - "parallel-{} {} {} examples".format( - data_path, split, len(src_datasets[lang_pair]) - ) - ) - if len(src_datasets) == 0: - raise 
FileNotFoundError( - "Dataset not found: {} ({})".format(split, data_path) - ) - - # back translation datasets - backtranslate_datasets = {} - if ( - self.lambda_otf_bt > 0.0 or self.lambda_otf_bt_steps is not None - ) and split.startswith("train"): - for lang_pair in self.lang_pairs: - src, tgt = lang_pair.split("-") - if not split_exists(split, tgt, None, tgt): - raise FileNotFoundError( - "Dataset not found: backtranslation {} ({})".format( - split, data_path - ) - ) - filename = os.path.join( - data_path, "{}.{}-None.{}".format(split, tgt, tgt) - ) - dataset = load_indexed_dataset(filename, self.dicts[tgt]) - lang_pair_dataset_tgt = LanguagePairDataset( - dataset, - dataset.sizes, - self.dicts[tgt], - left_pad_source=self.args.left_pad_source, - left_pad_target=self.args.left_pad_target, - ) - lang_pair_dataset = LanguagePairDataset( - dataset, - dataset.sizes, - src_dict=self.dicts[src], - tgt=dataset, - tgt_sizes=dataset.sizes, - tgt_dict=self.dicts[tgt], - left_pad_source=self.args.left_pad_source, - left_pad_target=self.args.left_pad_target, - ) - backtranslate_datasets[lang_pair] = BacktranslationDataset( - tgt_dataset=self.alter_dataset_langtok( - lang_pair_dataset_tgt, - src_eos=self.dicts[tgt].eos(), - src_lang=tgt, - tgt_lang=src, - ), - backtranslation_fn=self.backtranslators[lang_pair], - src_dict=self.dicts[src], - tgt_dict=self.dicts[tgt], - output_collater=self.alter_dataset_langtok( - lang_pair_dataset=lang_pair_dataset, - src_eos=self.dicts[src].eos(), - src_lang=src, - tgt_eos=self.dicts[tgt].eos(), - tgt_lang=tgt, - ).collater, - ) - logger.info( - "backtranslate-{}: {} {} {} examples".format( - tgt, - data_path, - split, - len(backtranslate_datasets[lang_pair]), - ) - ) - self.backtranslate_datasets[lang_pair] = backtranslate_datasets[ - lang_pair - ] - - # denoising autoencoder - noising_datasets = {} - if ( - self.lambda_denoising > 0.0 or self.lambda_denoising_steps is not None - ) and split.startswith("train"): - for lang_pair in self.lang_pairs: - _, tgt = lang_pair.split("-") - if not split_exists(split, tgt, None, tgt): - continue - filename = os.path.join( - data_path, "{}.{}-None.{}".format(split, tgt, tgt) - ) - tgt_dataset1 = load_indexed_dataset(filename, self.dicts[tgt]) - tgt_dataset2 = load_indexed_dataset(filename, self.dicts[tgt]) - noising_dataset = NoisingDataset( - tgt_dataset1, - self.dicts[tgt], - seed=1, - max_word_shuffle_distance=self.args.max_word_shuffle_distance, - word_dropout_prob=self.args.word_dropout_prob, - word_blanking_prob=self.args.word_blanking_prob, - ) - noising_datasets[lang_pair] = self.alter_dataset_langtok( - LanguagePairDataset( - noising_dataset, - tgt_dataset1.sizes, - self.dicts[tgt], - tgt_dataset2, - tgt_dataset2.sizes, - self.dicts[tgt], - left_pad_source=self.args.left_pad_source, - left_pad_target=self.args.left_pad_target, - ), - src_eos=self.dicts[tgt].eos(), - src_lang=tgt, - tgt_eos=self.dicts[tgt].eos(), - tgt_lang=tgt, - ) - logger.info( - "denoising-{}: {} {} {} examples".format( - tgt, - data_path, - split, - len(noising_datasets[lang_pair]), - ) - ) - - def language_pair_dataset(lang_pair): - src, tgt = lang_pair.split("-") - src_dataset, tgt_dataset = src_datasets[lang_pair], tgt_datasets[lang_pair] - return self.alter_dataset_langtok( - LanguagePairDataset( - src_dataset, - src_dataset.sizes, - self.dicts[src], - tgt_dataset, - tgt_dataset.sizes, - self.dicts[tgt], - left_pad_source=self.args.left_pad_source, - left_pad_target=self.args.left_pad_target, - ), - self.dicts[src].eos(), - src, - 
self.dicts[tgt].eos(), - tgt, - ) - - self.datasets[split] = RoundRobinZipDatasets( - OrderedDict( - [ - (lang_pair, language_pair_dataset(lang_pair)) - for lang_pair in src_datasets.keys() - ] - + [ - (_get_bt_dataset_key(lang_pair), dataset) - for lang_pair, dataset in backtranslate_datasets.items() - ] - + [ - (_get_denoising_dataset_key(lang_pair), dataset) - for lang_pair, dataset in noising_datasets.items() - ] - ), - eval_key=None - if self.training - else "%s-%s" % (self.args.source_lang, self.args.target_lang), - ) - - def build_model(self, args, from_checkpoint=False): - from fairseq import models - - model = models.build_model(args, self, from_checkpoint) - if not isinstance(model, FairseqMultiModel): - raise ValueError( - "SemisupervisedTranslationTask requires a FairseqMultiModel architecture" - ) - - # create SequenceGenerator for each model that has backtranslation dependency on it - self.sequence_generators = {} - if ( - self.lambda_otf_bt > 0.0 or self.lambda_otf_bt_steps is not None - ) and self.training: - for lang_pair in self.lang_pairs: - src, tgt = lang_pair.split("-") - key = "{}-{}".format(tgt, src) - self.sequence_generators[key] = SequenceGenerator( - [model.models[key]], - tgt_dict=self.dicts[src], - beam_size=args.bt_beam_size, - max_len_a=args.bt_max_len_a, - max_len_b=args.bt_max_len_b, - ) - decoder_lang_tok_idx = self.get_decoder_langtok(src) - - def backtranslate_fn( - sample, - model=model.models[key], - bos_token=decoder_lang_tok_idx, - sequence_generator=self.sequence_generators[key], - ): - return sequence_generator.generate( - [model], - sample, - bos_token=bos_token, - ) - - self.backtranslators[lang_pair] = backtranslate_fn - - return model - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False - ): - model.train() - - if update_num > 0: - self.update_step(update_num) - - agg_loss, agg_sample_size, agg_logging_output = 0.0, 0.0, {} - - def forward_backward(model, samples, logging_output_key, weight): - nonlocal agg_loss, agg_sample_size, agg_logging_output - if samples is None or len(samples) == 0: - return - loss, sample_size, logging_output = criterion(model, samples) - if ignore_grad: - loss *= 0 - else: - loss *= weight - optimizer.backward(loss) - agg_loss += loss.detach().item() - # TODO make summing of the sample sizes configurable - agg_sample_size += sample_size - for k in logging_output: - agg_logging_output[k] += logging_output[k] - agg_logging_output[logging_output_key] += logging_output[k] - - if self.lambda_parallel > 0.0: - for lang_pair in self.lang_pairs: - forward_backward( - model.models[lang_pair], - sample[lang_pair], - lang_pair, - self.lambda_parallel, - ) - - if self.lambda_otf_bt > 0.0: - for lang_pair in self.lang_pairs: - sample_key = _get_bt_dataset_key(lang_pair) - forward_backward( - model.models[lang_pair], - sample[sample_key], - sample_key, - self.lambda_otf_bt, - ) - - if self.lambda_denoising > 0.0: - for lang_pair in self.lang_pairs: - _, tgt = lang_pair.split("-") - sample_key = _get_denoising_dataset_key(lang_pair) - forward_backward( - model.models["{0}-{0}".format(tgt)], - sample[sample_key], - sample_key, - self.lambda_denoising, - ) - - return agg_loss, agg_sample_size, agg_logging_output - - def update_step(self, num_updates): - def lambda_step_func(config, n_iter): - """ - Update a lambda value according to its schedule configuration. 
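            (For concreteness, a worked instance under an assumed schedule:
            with config = [(0, 0.0), (1000, 0.0), (2000, 1.0)] and
            n_iter = 1500, the active segment is (1000, 0.0) to (2000, 1.0),
            so the interpolation below yields
            0.0 + (1500 - 1000) * (1.0 - 0.0) / (2000 - 1000) = 0.5.)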
- """ - ranges = [ - i - for i in range(len(config) - 1) - if config[i][0] <= n_iter < config[i + 1][0] - ] - if len(ranges) == 0: - assert n_iter >= config[-1][0] - return config[-1][1] - assert len(ranges) == 1 - i = ranges[0] - x_a, y_a = config[i] - x_b, y_b = config[i + 1] - return y_a + (n_iter - x_a) * float(y_b - y_a) / float(x_b - x_a) - - if self.lambda_parallel_steps is not None: - self.lambda_parallel = lambda_step_func( - self.lambda_parallel_steps, num_updates - ) - if self.lambda_denoising_steps is not None: - self.lambda_denoising = lambda_step_func( - self.lambda_denoising_steps, num_updates - ) - if self.lambda_otf_bt_steps is not None: - self.lambda_otf_bt = lambda_step_func(self.lambda_otf_bt_steps, num_updates) diff --git a/kosmos-g/fairseq/fairseq/tasks/sentence_prediction.py b/kosmos-g/fairseq/fairseq/tasks/sentence_prediction.py deleted file mode 100644 index 52532ff61..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/sentence_prediction.py +++ /dev/null @@ -1,286 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os - -import contextlib -from dataclasses import dataclass, field -from typing import Optional -from omegaconf import MISSING, II, open_dict, OmegaConf - -import numpy as np -from fairseq.data import ( - ConcatSentencesDataset, - Dictionary, - IdDataset, - NestedDictionaryDataset, - NumelDataset, - NumSamplesDataset, - OffsetTokensDataset, - PrependTokenDataset, - RawLabelDataset, - RightPadDataset, - RollDataset, - SortDataset, - StripTokenDataset, - data_utils, -) -from fairseq.data.shorten_dataset import maybe_shorten_dataset -from fairseq.tasks import FairseqDataclass, FairseqTask, register_task -from fairseq.dataclass import ChoiceEnum - - -logger = logging.getLogger(__name__) -SHORTEN_METHOD_CHOICES = ChoiceEnum(["none", "truncate", "random_crop"]) - - -@dataclass -class SentencePredictionConfig(FairseqDataclass): - data: str = field(default=MISSING, metadata={"help": "path to data directory"}) - num_classes: int = field( - default=-1, - metadata={"help": "number of classes or regression targets"}, - ) - init_token: Optional[int] = field( - default=None, - metadata={"help": "add token at the beginning of each batch item"}, - ) - separator_token: Optional[int] = field( - default=None, - metadata={"help": "add separator token between inputs"}, - ) - no_shuffle: bool = field( - default=False, - ) - shorten_method: SHORTEN_METHOD_CHOICES = field( - default="none", - metadata={ - "help": "if not none, shorten sequences that exceed tokens_per_sample" - }, - ) - shorten_data_split_list: str = field( - default="", - metadata={ - "help": "comma-separated list of dataset splits to apply shortening to, " - 'e.g., "train,valid" (default: all dataset splits)' - }, - ) - add_prev_output_tokens: bool = field( - default=False, - metadata={ - "help": "add prev_output_tokens to sample, used for encoder-decoder arch" - }, - ) - max_positions: int = field( - default=512, - metadata={"help": "max tokens per example"}, - ) - - regression_target: bool = II("criterion.regression_target") - classification_head_name: str = II("criterion.classification_head_name") - seed: int = II("common.seed") - - -@register_task("sentence_prediction", dataclass=SentencePredictionConfig) -class SentencePredictionTask(FairseqTask): - """ - Sentence (or sentence pair) prediction (classification or regression) task. 
- - Args: - dictionary (Dictionary): the dictionary for the input of the task - """ - - def __init__(self, cfg, data_dictionary, label_dictionary): - super().__init__(cfg) - self.dictionary = data_dictionary - self._label_dictionary = label_dictionary - - @classmethod - def load_dictionary(cls, filename): - """Load the dictionary from the filename - - Args: - filename (str): the filename - """ - dictionary = Dictionary.load(filename) - dictionary.add_symbol("<mask>") - return dictionary - - @classmethod - def setup_task(cls, cfg, **kwargs): - assert cfg.num_classes > 0, "Must set task.num_classes" - - # load data dictionary - data_dict = cls.load_dictionary( - os.path.join(cfg.data, "input0", "dict.txt"), - ) - logger.info("[input] dictionary: {} types".format(len(data_dict))) - - # load label dictionary - if not cfg.regression_target: - label_dict = cls.load_dictionary( - os.path.join(cfg.data, "label", "dict.txt"), - ) - logger.info("[label] dictionary: {} types".format(len(label_dict))) - else: - label_dict = data_dict - return cls(cfg, data_dict, label_dict) - - def load_dataset(self, split, combine=False, **kwargs): - """Load a given dataset split (e.g., train, valid, test).""" - - def get_path(key, split): - return os.path.join(self.cfg.data, key, split) - - def make_dataset(key, dictionary): - split_path = get_path(key, split) - - try: - dataset = data_utils.load_indexed_dataset( - split_path, - dictionary, - combine=combine, - ) - except Exception as e: - if "StorageException: [404] Path not found" in str(e): - logger.warning(f"dataset {e} not found") - dataset = None - else: - raise e - return dataset - - input0 = make_dataset("input0", self.source_dictionary) - assert input0 is not None, "could not find dataset: {}".format( - get_path("input0", split) - ) - input1 = make_dataset("input1", self.source_dictionary) - - if self.cfg.init_token is not None: - input0 = PrependTokenDataset(input0, self.cfg.init_token) - - if input1 is None: - src_tokens = input0 - else: - if self.cfg.separator_token is not None: - input1 = PrependTokenDataset(input1, self.cfg.separator_token) - - src_tokens = ConcatSentencesDataset(input0, input1) - - with data_utils.numpy_seed(self.cfg.seed): - shuffle = np.random.permutation(len(src_tokens)) - - src_tokens = maybe_shorten_dataset( - src_tokens, - split, - self.cfg.shorten_data_split_list, - self.cfg.shorten_method, - self.max_positions(), - self.cfg.seed, - ) - - dataset = { - "id": IdDataset(), - "net_input": { - "src_tokens": RightPadDataset( - src_tokens, - pad_idx=self.source_dictionary.pad(), - ), - "src_lengths": NumelDataset(src_tokens, reduce=False), - }, - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_tokens, reduce=True), - } - - if self.cfg.add_prev_output_tokens: - prev_tokens_dataset = RightPadDataset( - RollDataset(src_tokens, 1), - pad_idx=self.dictionary.pad(), - ) - dataset["net_input"].update( - prev_output_tokens=prev_tokens_dataset, - ) - - if not self.cfg.regression_target: - label_dataset = make_dataset("label", self.label_dictionary) - if label_dataset is not None: - dataset.update( - target=OffsetTokensDataset( - StripTokenDataset( - label_dataset, - id_to_strip=self.label_dictionary.eos(), - ), - offset=-self.label_dictionary.nspecial, - ) - ) - else: - label_path = "{0}.label".format(get_path("label", split)) - if os.path.exists(label_path): - - def parse_regression_target(i, line): - values = line.split() - assert ( - len(values) == self.cfg.num_classes - ), f'expected num_classes={self.cfg.num_classes} 
regression target values on line {i}, found: "{line}"' - return [float(x) for x in values] - - with open(label_path) as h: - dataset.update( - target=RawLabelDataset( - [ - parse_regression_target(i, line.strip()) - for i, line in enumerate(h.readlines()) - ] - ) - ) - - nested_dataset = NestedDictionaryDataset( - dataset, - sizes=[src_tokens.sizes], - ) - - if self.cfg.no_shuffle: - dataset = nested_dataset - else: - dataset = SortDataset( - nested_dataset, - # shuffle - sort_order=[shuffle], - ) - - logger.info("Loaded {0} with #samples: {1}".format(split, len(dataset))) - - self.datasets[split] = dataset - return self.datasets[split] - - def build_model(self, cfg, from_checkpoint=False): - from fairseq import models - - with open_dict(cfg) if OmegaConf.is_config(cfg) else contextlib.ExitStack(): - cfg.max_positions = self.cfg.max_positions - - model = models.build_model(cfg, self, from_checkpoint) - - model.register_classification_head( - self.cfg.classification_head_name, - num_classes=self.cfg.num_classes, - ) - - return model - - def max_positions(self): - return self.cfg.max_positions - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary - - @property - def label_dictionary(self): - return self._label_dictionary diff --git a/kosmos-g/fairseq/fairseq/tasks/sentence_ranking.py b/kosmos-g/fairseq/fairseq/tasks/sentence_ranking.py deleted file mode 100644 index 57f63aab6..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/sentence_ranking.py +++ /dev/null @@ -1,219 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os - -import numpy as np -from fairseq import utils -from fairseq.data import ( - ConcatSentencesDataset, - Dictionary, - IdDataset, - NestedDictionaryDataset, - NumelDataset, - NumSamplesDataset, - PrependTokenDataset, - RawLabelDataset, - RightPadDataset, - SortDataset, - TruncateDataset, - data_utils, -) -from fairseq.data.shorten_dataset import maybe_shorten_dataset -from fairseq.tasks import LegacyFairseqTask, register_task - - -logger = logging.getLogger(__name__) - - -@register_task("sentence_ranking") -class SentenceRankingTask(LegacyFairseqTask): - """ - Ranking task on multiple sentences. 
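    (A hedged paraphrase of how load_dataset below assembles candidates for,
    say, --num-classes 4; the token flags are optional:
        input0        shared context, with --separator-token prepended if set
        input1..4     candidate options, each with --init-token prepended if set
        candidate_i   concat(option_i, input0) via ConcatSentencesDataset
        net_input{i}  right-padded candidate_i; every candidate is scored by a
                      single-output head (num_classes=1), and {split}.label
                      holds the index of the correct candidate.)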
- - Args: - dictionary (Dictionary): the dictionary for the input of the task - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - parser.add_argument("data", metavar="FILE", help="file prefix for data") - parser.add_argument( - "--num-classes", type=int, help="number of sentences to be ranked" - ) - parser.add_argument( - "--init-token", - type=int, - help="add token at the beginning of each batch item", - ) - parser.add_argument( - "--separator-token", type=int, help="add separator token between inputs" - ) - parser.add_argument("--no-shuffle", action="store_true") - parser.add_argument( - "--shorten-method", - default="none", - choices=["none", "truncate", "random_crop"], - help="if not none, shorten sequences that exceed --tokens-per-sample", - ) - parser.add_argument( - "--shorten-data-split-list", - default="", - help="comma-separated list of dataset splits to apply shortening to, " - 'e.g., "train,valid" (default: all dataset splits)', - ) - parser.add_argument( - "--max-option-length", type=int, help="max length for each option" - ) - - def __init__(self, args, dictionary): - super().__init__(args) - self.dictionary = dictionary - - @classmethod - def load_dictionary(cls, args, filename, source=True): - """Load the dictionary from the filename - - Args: - filename (str): the filename - """ - dictionary = Dictionary.load(filename) - dictionary.add_symbol("<mask>") - return dictionary - - @classmethod - def setup_task(cls, args, **kwargs): - assert ( - args.criterion == "sentence_ranking" - ), "Must set --criterion=sentence_ranking" - - # load data dictionary - data_dict = cls.load_dictionary( - args, - os.path.join(args.data, "input0", "dict.txt"), - source=True, - ) - logger.info("[input] dictionary: {} types".format(len(data_dict))) - return SentenceRankingTask(args, data_dict) - - def load_dataset(self, split, combine=False, **kwargs): - """Load a given dataset split (e.g., train, valid, test).""" - - def get_path(type, split): - return os.path.join(self.args.data, type, split) - - def make_dataset(type, dictionary): - split_path = get_path(type, split) - - dataset = data_utils.load_indexed_dataset( - split_path, - self.source_dictionary, - self.args.dataset_impl, - combine=combine, - ) - return dataset - - input0 = make_dataset("input0", self.source_dictionary) - input_options = [ - make_dataset("input{idx}".format(idx=idx + 1), self.source_dictionary) - for idx in range(self.args.num_classes) - ] - - if self.args.separator_token is not None: - input0 = PrependTokenDataset(input0, self.args.separator_token) - - src_tokens = [] - for input_option in input_options: - if self.args.init_token is not None: - input_option = PrependTokenDataset(input_option, self.args.init_token) - if self.args.max_option_length is not None: - input_option = TruncateDataset( - input_option, self.args.max_option_length - ) - src_token = ConcatSentencesDataset(input_option, input0) - src_token = maybe_shorten_dataset( - src_token, - split, - self.args.shorten_data_split_list, - self.args.shorten_method, - self.args.max_positions, - self.args.seed, - ) - src_tokens.append(src_token) - - with data_utils.numpy_seed(self.args.seed): - shuffle = np.random.permutation(len(src_tokens[0])) - - dataset = { - "id": IdDataset(), - "nsentences": NumSamplesDataset(), - "ntokens": NumelDataset(src_tokens[0], reduce=True), - } - - for src_token_idx in range(len(src_tokens)): - dataset.update( - { - "net_input{idx}".format(idx=src_token_idx + 1): { - "src_tokens": 
RightPadDataset( - src_tokens[src_token_idx], - pad_idx=self.source_dictionary.pad(), - ), - "src_lengths": NumelDataset( - src_tokens[src_token_idx], reduce=False - ), - } - } - ) - - label_path = "{}.label".format(get_path("label", split)) - if os.path.exists(label_path): - with open(label_path) as h: - dataset.update( - target=RawLabelDataset([int(x.strip()) for x in h.readlines()]) - ) - - nested_dataset = NestedDictionaryDataset( - dataset, - sizes=[np.maximum.reduce([src_token.sizes for src_token in src_tokens])], - ) - - if self.args.no_shuffle: - dataset = nested_dataset - else: - dataset = SortDataset( - nested_dataset, - # shuffle - sort_order=[shuffle], - ) - - logger.info("Loaded {0} with #samples: {1}".format(split, len(dataset))) - - self.datasets[split] = dataset - return self.datasets[split] - - def build_model(self, args, from_checkpoint=False): - from fairseq import models - - model = models.build_model(args, self, from_checkpoint) - - model.register_classification_head( - getattr(args, "ranking_head_name", "sentence_classification_head"), - num_classes=1, - ) - - return model - - def max_positions(self): - return self.args.max_positions - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary diff --git a/kosmos-g/fairseq/fairseq/tasks/simultaneous_translation.py b/kosmos-g/fairseq/fairseq/tasks/simultaneous_translation.py deleted file mode 100644 index 9576b2680..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/simultaneous_translation.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from fairseq.tasks import register_task -from fairseq.tasks.speech_to_text import SpeechToTextTask -from fairseq.tasks.translation import TranslationTask, TranslationConfig - -try: - import examples.simultaneous_translation # noqa - - import_successful = True -except BaseException: - import_successful = False - - -logger = logging.getLogger(__name__) - - -def check_import(flag): - if not flag: - raise ImportError( - "'examples.simultaneous_translation' is not correctly imported. " - "Please considering `pip install -e $FAIRSEQ_DIR`." - ) - - -@register_task("simul_speech_to_text") -class SimulSpeechToTextTask(SpeechToTextTask): - def __init__(self, args, tgt_dict): - check_import(import_successful) - super().__init__(args, tgt_dict) - - -@register_task("simul_text_to_text", dataclass=TranslationConfig) -class SimulTextToTextTask(TranslationTask): - def __init__(self, cfg, src_dict, tgt_dict): - check_import(import_successful) - super().__init__(cfg, src_dict, tgt_dict) diff --git a/kosmos-g/fairseq/fairseq/tasks/speech_to_speech.py b/kosmos-g/fairseq/fairseq/tasks/speech_to_speech.py deleted file mode 100644 index ed3a7ccd7..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/speech_to_speech.py +++ /dev/null @@ -1,472 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
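Every task in this diff plugs into fairseq through the same registry pattern; a minimal, hypothetical sketch of it (illustrative names, not code from any of the deleted files):

```python
from fairseq.tasks import LegacyFairseqTask, register_task


@register_task("my_toy_task")  # hypothetical task name
class MyToyTask(LegacyFairseqTask):
    @staticmethod
    def add_args(parser):
        parser.add_argument("data", help="path to data root")

    @classmethod
    def setup_task(cls, args, **kwargs):
        # build dictionaries/config here before constructing the task
        return cls(args)

    def load_dataset(self, split, **kwargs):
        # must populate self.datasets[split] with a FairseqDataset
        raise NotImplementedError
```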
- -from argparse import Namespace -import json -import logging -import math -from pathlib import Path -import torch -import torch.nn as nn - -from fairseq import utils -from fairseq.data import Dictionary -from fairseq.data.audio.data_cfg import S2SDataConfig, MultitaskConfig -from fairseq.data.audio.speech_to_speech_dataset import SpeechToSpeechDatasetCreator -from fairseq.tasks import LegacyFairseqTask, register_task -from fairseq.tasks.text_to_speech import batch_mel_cepstral_distortion - - -logger = logging.getLogger(__name__) - - -class StackUnitSequenceGenerator(nn.Module): - def __init__(self, tgt_dict, vocab_size): - super().__init__() - self.pad = tgt_dict.pad() - self.eos = tgt_dict.eos() - self.unk = tgt_dict.unk() - self.offset = len(tgt_dict) - vocab_size - self.vocab_size = vocab_size - - def pack_units(self, input: torch.Tensor, n_frames_per_step) -> torch.Tensor: - if n_frames_per_step <= 1: - return input - - bsz, _, n = input.shape - assert n == n_frames_per_step - - scale = [ - pow(self.vocab_size, n_frames_per_step - 1 - i) - for i in range(n_frames_per_step) - ] - scale = torch.LongTensor(scale).squeeze(0).to(input.device) - mask = input >= self.offset - res = ((input - self.offset) * scale * mask).sum(dim=2) + self.offset - return res - - @torch.no_grad() - def generate(self, models, sample, **kwargs): - # currently only support viterbi search for stacked units - model = models[0] - model.eval() - - max_len = model.max_decoder_positions() - # TODO: incorporate max_len_a and max_len_b - - src_tokens = sample["net_input"]["src_tokens"] - src_lengths = sample["net_input"]["src_lengths"] - bsz, src_len, _ = src_tokens.size() - n_frames_per_step = model.decoder.n_frames_per_step - - # initialize - encoder_out = model.forward_encoder( - src_tokens, src_lengths, speaker=sample["speaker"] - ) - incremental_state = {} - pred_out, attn, scores = [], [], [] - finished = src_tokens.new_zeros((bsz,)).bool() - - prev_output_tokens = src_lengths.new_zeros((bsz, 1)).long().fill_(self.eos) - for _ in range(max_len): - cur_out, cur_extra = model.forward_decoder( - prev_output_tokens, - encoder_out=encoder_out, - incremental_state=incremental_state, - ) - - lprobs = model.get_normalized_probs([cur_out], log_probs=True) - # never select pad, unk - lprobs[:, :, self.pad] = -math.inf - lprobs[:, :, self.unk] = -math.inf - - cur_pred_lprob, cur_pred_out = torch.max(lprobs, dim=2) - scores.append(cur_pred_lprob) - pred_out.append(cur_pred_out) - - prev_output_tokens = torch.cat( - ( - prev_output_tokens, - self.pack_units( - cur_pred_out.view(bsz, 1, n_frames_per_step), n_frames_per_step - ), - ), - dim=1, - ) - - attn.append(cur_extra["attn"][0]) - - cur_finished = torch.any(cur_pred_out.squeeze(1) == self.eos, dim=1) - finished = finished | cur_finished - if finished.sum().item() == bsz: - break - - pred_out = torch.cat(pred_out, dim=1).view(bsz, -1) - attn = torch.cat(attn, dim=2) - alignment = attn.max(dim=1)[1] - attn = attn.repeat_interleave(n_frames_per_step, dim=2) - alignment = alignment.repeat_interleave(n_frames_per_step, dim=1) - scores = torch.cat(scores, dim=1) - eos_idx = (pred_out == self.eos).nonzero(as_tuple=True) - out_lens = src_lengths.new_zeros((bsz,)).long().fill_(max_len) - for b, l in zip(eos_idx[0], eos_idx[1]): - out_lens[b] = min(l, out_lens[b]) - - hypos = [ - [ - { - "tokens": pred_out[b, :out_len], - "attn": attn[b, :, :out_len], - "alignment": alignment[b, :out_len], - "positional_scores": scores[b, :out_len], - "score": utils.item(scores[b, 
:out_len].sum().data), - } - ] - for b, out_len in zip(range(bsz), out_lens) - ] - - return hypos - - -@register_task("speech_to_speech") -class SpeechToSpeechTask(LegacyFairseqTask): - @classmethod - def add_args(cls, parser): - parser.add_argument("data", help="manifest root path") - parser.add_argument( - "--config-yaml", - type=str, - default="config.yaml", - help="Configuration YAML filename (under manifest root)", - ) - parser.add_argument( - "--max-source-positions", - default=6000, - type=int, - metavar="N", - help="max number of tokens in the source sequence", - ) - parser.add_argument( - "--max-target-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the target sequence", - ) - parser.add_argument( - "--target-is-code", - action="store_true", - help="set if target is discrete unit instead of spectrogram", - ) - parser.add_argument( - "--target-code-size", type=int, default=None, help="# discrete units" - ) - parser.add_argument( - "--n-frames-per-step", - type=int, - default=1, - help="# stacked frames, use 0 for reduced discrete unit sequence", - ) - parser.add_argument( - "--multitask-config-yaml", - type=str, - default=None, - help="Configuration YAML filename for the multitasks (under manifest root)", - ) - parser.add_argument("--eval-inference", action="store_true") - parser.add_argument( - "--eval-args", - type=str, - default="{}", - help='generation args for speech-to-unit model , e.g., \'{"beam": 5, "max_len_a": 1}\', as JSON string', - ) - parser.add_argument("--eos-prob-threshold", type=float, default=0.5) - parser.add_argument( - "--mcd-normalize-type", - type=str, - default="targ", - choices=["targ", "pred", "path"], - ) - parser.add_argument( - "--vocoder", - type=str, - default="griffin_lim", - choices=["griffin_lim", "hifigan", "code_hifigan"], - ) - parser.add_argument("--spec-bwd-max-iter", type=int, default=8) - - def __init__(self, args, tgt_dict): - super().__init__(args) - self.tgt_dict = tgt_dict - self.data_cfg = S2SDataConfig(Path(args.data) / args.config_yaml) - self.multitask_tasks = {} - if getattr(args, "multitask_config_yaml", None) is not None: - multitask_cfg = MultitaskConfig( - Path(args.data) / args.multitask_config_yaml - ) - for task_name, task_config in multitask_cfg.get_all_tasks().items(): - self.multitask_tasks[task_name] = DummyMultiTask( - task_config, task_config.tgt_dict - ) - - @classmethod - def setup_task(cls, args, **kwargs): - tgt_dict = None - if args.target_is_code: - assert args.target_code_size is not None - - tgt_dict = Dictionary() - for i in range(args.target_code_size): - tgt_dict.add_symbol(str(i)) - logger.info(f"dictionary size: " f"{len(tgt_dict):,}") - - if getattr(args, "train_subset", None) is not None: - if not all(s.startswith("train") for s in args.train_subset.split(",")): - raise ValueError('Train splits should be named like "train*".') - - assert args.n_frames_per_step >= 1 - assert ( - not args.eval_inference - or (args.target_is_code and args.vocoder == "code_hifigan") - or (not args.target_is_code and args.vocoder != "code_hifigan") - ) - - return cls(args, tgt_dict) - - def build_criterion(self, args): - from fairseq import criterions - - if len(self.multitask_tasks) > 0: - if self.args.target_is_code and args._name != "speech_to_unit": - raise ValueError( - "set --criterion speech_to_unit for speech-to-unit loss with multitask" - ) - elif not self.args.target_is_code and args._name != "speech_to_spectrogram": - raise ValueError( - "set --criterion speech_to_spectrogram 
for speech-to-spectrogram loss with multitask" - ) - - return criterions.build_criterion(args, self) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - self.datasets[split] = SpeechToSpeechDatasetCreator.from_tsv( - self.args.data, - self.data_cfg, - split, - is_train_split=split.startswith("train"), - epoch=epoch, - seed=self.args.seed, - target_is_code=self.args.target_is_code, - target_dictionary=self.target_dictionary, - n_frames_per_step=self.args.n_frames_per_step, - multitask=self.multitask_tasks, - ) - - @property - def target_dictionary(self): - return self.tgt_dict - - @property - def source_dictionary(self): - return None - - def max_positions(self): - return self.args.max_source_positions, self.args.max_target_positions - - def build_model(self, args, from_checkpoint=False): - args.input_feat_per_channel = self.data_cfg.input_feat_per_channel - args.input_channels = self.data_cfg.input_transformed_channels - args.target_speaker_embed = self.data_cfg.target_speaker_embed is not None - args.n_frames_per_step = self.args.n_frames_per_step - - model = super().build_model(args, from_checkpoint) - - if len(self.multitask_tasks) > 0: - from fairseq.models.speech_to_speech.s2s_transformer import ( - S2STransformerMultitaskModelBase, - ) - - assert isinstance(model, S2STransformerMultitaskModelBase) - - if self.args.eval_inference: - self.eval_gen_args = json.loads(self.args.eval_args) - self.generator = self.build_generator( - [model], Namespace(**self.eval_gen_args) - ) - - return model - - def build_generator( - self, - models, - args, - seq_gen_cls=None, - extra_gen_cls_kwargs=None, - ): - - if not self.args.target_is_code or self.args.eval_inference: - from fairseq.models.text_to_speech.vocoder import get_vocoder - - self.vocoder = get_vocoder(self.args, self.data_cfg) - self.vocoder = ( - self.vocoder.cuda() - if torch.cuda.is_available() and not self.args.cpu - else self.vocoder.cpu() - ) - - if self.args.target_is_code: - if self.args.n_frames_per_step == 1: - seq_generator = super().build_generator( - models, - args, - seq_gen_cls=None, - extra_gen_cls_kwargs=extra_gen_cls_kwargs, - ) - else: - assert ( - getattr(args, "beam", 1) == 1 and getattr(args, "nbest", 1) == 1 - ), "only support viterbi search for stacked units" - seq_generator = StackUnitSequenceGenerator( - self.tgt_dict, - self.args.target_code_size, - ) - else: - if getattr(args, "teacher_forcing", False): - from fairseq.speech_generator import ( - TeacherForcingAutoRegressiveSpeechGenerator, - ) - - generator = TeacherForcingAutoRegressiveSpeechGenerator - logger.info("Teacher forcing mode for generation") - else: - from fairseq.speech_generator import AutoRegressiveSpeechGenerator - - generator = AutoRegressiveSpeechGenerator - seq_generator = generator( - models[0], - self.vocoder, - self.data_cfg, - max_iter=self.args.max_target_positions, - eos_prob_threshold=self.args.eos_prob_threshold, - ) - - return seq_generator - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False - ): - for task_name, task_obj in self.multitask_tasks.items(): - criterion.set_multitask_loss_weight( - task_name, task_obj.args.get_loss_weight(update_num) - ) - - loss, sample_size, logging_output = super().train_step( - sample, model, criterion, optimizer, update_num, ignore_grad - ) - return loss, sample_size, logging_output - - def valid_step(self, sample, model, criterion): - loss, sample_size, logging_output = super().valid_step(sample, model, criterion) - - if 
self.args.eval_inference: - hypos, inference_losses = self.valid_step_with_inference( - sample, model, self.generator - ) - for k, v in inference_losses.items(): - assert k not in logging_output - logging_output[k] = v - - return loss, sample_size, logging_output - - def valid_step_with_inference(self, sample, model, generator): - if self.args.target_is_code: - hypos = generator.generate([model], sample) - tgt_lens = ( - sample["target_lengths"] - 1 - ) * self.args.n_frames_per_step # strip <eos> - for b, (f, l) in enumerate(zip(sample["target"], tgt_lens)): - hypos[b][0]["targ_waveform"] = self.vocoder( - {"code": f[:l] - 4}, # remove <bos>, <pad>, <eos>, <unk> - dur_prediction=self.eval_gen_args.get("dur_prediction", False), - ) - if len(hypos[b][0]["tokens"]) > 0: - hypos[b][0]["waveform"] = self.vocoder( - {"code": hypos[b][0]["tokens"] - 4}, - dur_prediction=self.eval_gen_args.get("dur_prediction", False), - ) - else: - hypos[b][0]["waveform"] = torch.flip( - hypos[b][0]["targ_waveform"], dims=[0] - ) - else: - hypos = [ - [hypo] for hypo in generator.generate(model, sample, has_targ=True) - ] - - losses = { - "mcd_loss": 0.0, - "targ_frames": 0.0, - "pred_frames": 0.0, - "path_frames": 0.0, - "nins": 0.0, - "ndel": 0.0, - } - rets = batch_mel_cepstral_distortion( - [hypo[0]["targ_waveform"] for hypo in hypos], - [hypo[0]["waveform"] for hypo in hypos], - self.data_cfg.output_sample_rate, - normalize_type=None, - ) - for d, extra in rets: - pathmap = extra[-1] - losses["mcd_loss"] += d.item() - losses["targ_frames"] += pathmap.size(0) - losses["pred_frames"] += pathmap.size(1) - losses["path_frames"] += pathmap.sum().item() - losses["nins"] += (pathmap.sum(dim=1) - 1).sum().item() - losses["ndel"] += (pathmap.sum(dim=0) - 1).sum().item() - losses["norm_frames"] = losses[ - f"{getattr(self.args, 'mcd_normalize_type', 'targ')}_frames" - ] - - return hypos, losses - - -class DummyMultiTask(LegacyFairseqTask): - def __init__(self, args, tgt_dict): - super().__init__(args) - self.tgt_dict = tgt_dict - - @property - def target_dictionary(self): - return self.tgt_dict - - def inference_step( - self, generator, models, sample, prefix_tokens=None, constraints=None - ): - if self.args.decoder_type == "ctc": - model = models[0] # only support single model - encoder_out = model(**sample) - if hasattr(model, "get_logits"): - emissions = model.get_logits( - encoder_out - ) # no need to normalize emissions - else: - emissions = model.get_normalized_probs(encoder_out, log_probs=True) - return generator.decode( - emissions.transpose(0, 1).float().cpu().contiguous() - ) - else: - raise NotImplementedError("only ctc decoder is supported at the moment") - - def build_generator( - self, models, args, seq_gen_cls=None, extra_gen_cls_kwargs=None - ): - if self.args.decoder_type == "ctc": - from examples.speech_recognition.w2l_decoder import W2lViterbiDecoder - - return W2lViterbiDecoder(args, self.tgt_dict) - else: - raise NotImplementedError("only ctc decoder is supported at the moment") diff --git a/kosmos-g/fairseq/fairseq/tasks/speech_to_text.py b/kosmos-g/fairseq/fairseq/tasks/speech_to_text.py deleted file mode 100644 index 55ba284f8..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/speech_to_text.py +++ /dev/null @@ -1,164 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
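A worked instance of StackUnitSequenceGenerator.pack_units from the file above, under assumed sizes (the dictionary size and unit values are illustrative):

```python
# Assume vocab_size = 100 discrete units plus 4 special symbols, so
# offset = len(tgt_dict) - vocab_size = 104 - 100 = 4, and unit u is stored
# under dictionary id u + 4. Packing n_frames_per_step = 2 units (17, 23),
# i.e. dictionary ids (21, 27):
#   scale  = [100**1, 100**0]
#   packed = (21 - 4) * 100 + (27 - 4) * 1 + 4 = 1700 + 23 + 4 = 1727
# Each stacked group of frames thus folds into one base-vocab_size
# "super-unit" id; ids below offset (pad/eos/unk) are zeroed by the mask
# before the sum.
```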
- -import logging -from pathlib import Path -from argparse import Namespace - -from fairseq.data import Dictionary, encoders -from fairseq.data.audio.speech_to_text_dataset import ( - S2TDataConfig, - SpeechToTextDataset, - SpeechToTextDatasetCreator, - get_features_or_waveform, -) -from fairseq.tasks import LegacyFairseqTask, register_task - - -logger = logging.getLogger(__name__) - - -@register_task("speech_to_text") -class SpeechToTextTask(LegacyFairseqTask): - @classmethod - def add_args(cls, parser): - parser.add_argument("data", help="manifest root path") - parser.add_argument( - "--config-yaml", - type=str, - default="config.yaml", - help="Configuration YAML filename (under manifest root)", - ) - parser.add_argument( - "--max-source-positions", - default=6000, - type=int, - metavar="N", - help="max number of tokens in the source sequence", - ) - parser.add_argument( - "--max-target-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the target sequence", - ) - - def __init__(self, args, tgt_dict): - super().__init__(args) - self.tgt_dict = tgt_dict - self.data_cfg = S2TDataConfig(Path(args.data) / args.config_yaml) - self.speaker_to_id = self._get_speaker_to_id() - - def _get_speaker_to_id(self): - speaker_to_id = None - speaker_set_filename = self.data_cfg.config.get("speaker_set_filename") - if speaker_set_filename is not None: - speaker_set_path = Path(self.args.data) / speaker_set_filename - with open(speaker_set_path) as f: - speaker_to_id = {r.strip(): i for i, r in enumerate(f)} - return speaker_to_id - - @classmethod - def setup_task(cls, args, **kwargs): - data_cfg = S2TDataConfig(Path(args.data) / args.config_yaml) - dict_path = Path(args.data) / data_cfg.vocab_filename - if not dict_path.is_file(): - raise FileNotFoundError(f"Dict not found: {dict_path.as_posix()}") - tgt_dict = Dictionary.load(dict_path.as_posix()) - logger.info( - f"dictionary size ({data_cfg.vocab_filename}): " f"{len(tgt_dict):,}" - ) - - if getattr(args, "train_subset", None) is not None: - if not all(s.startswith("train") for s in args.train_subset.split(",")): - raise ValueError('Train splits should be named like "train*".') - return cls(args, tgt_dict) - - def build_criterion(self, args): - from fairseq import criterions - - if self.data_cfg.prepend_tgt_lang_tag and args.ignore_prefix_size != 1: - raise ValueError( - 'Please set "--ignore-prefix-size 1" since ' - "target language ID token is prepended as BOS." 
- ) - return criterions.build_criterion(args, self) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - is_train_split = split.startswith("train") - pre_tokenizer = self.build_tokenizer(self.args) - bpe_tokenizer = self.build_bpe(self.args) - self.datasets[split] = SpeechToTextDatasetCreator.from_tsv( - self.args.data, - self.data_cfg, - split, - self.tgt_dict, - pre_tokenizer, - bpe_tokenizer, - is_train_split=is_train_split, - epoch=epoch, - seed=self.args.seed, - speaker_to_id=self.speaker_to_id, - ) - - @property - def target_dictionary(self): - return self.tgt_dict - - @property - def source_dictionary(self): - return None - - def max_positions(self): - return self.args.max_source_positions, self.args.max_target_positions - - def build_model(self, args, from_checkpoint=False): - args.input_feat_per_channel = self.data_cfg.input_feat_per_channel - args.input_channels = self.data_cfg.input_channels - args.speaker_to_id = self.speaker_to_id - return super(SpeechToTextTask, self).build_model(args, from_checkpoint) - - def build_generator( - self, - models, - args, - seq_gen_cls=None, - extra_gen_cls_kwargs=None, - ): - if self.data_cfg.prepend_tgt_lang_tag and args.prefix_size != 1: - raise ValueError( - 'Please set "--prefix-size 1" since ' - "target language ID token is prepended as BOS." - ) - lang_token_ids = { - i - for s, i in self.tgt_dict.indices.items() - if SpeechToTextDataset.is_lang_tag(s) - } - - if extra_gen_cls_kwargs is None: - extra_gen_cls_kwargs = {} - extra_gen_cls_kwargs["symbols_to_strip_from_output"] = lang_token_ids - return super().build_generator( - models, args, seq_gen_cls=None, extra_gen_cls_kwargs=extra_gen_cls_kwargs - ) - - def build_tokenizer(self, args): - logger.info(f"pre-tokenizer: {self.data_cfg.pre_tokenizer}") - return encoders.build_tokenizer(Namespace(**self.data_cfg.pre_tokenizer)) - - def build_bpe(self, args): - logger.info(f"tokenizer: {self.data_cfg.bpe_tokenizer}") - return encoders.build_bpe(Namespace(**self.data_cfg.bpe_tokenizer)) - - def get_interactive_tokens_and_lengths(self, lines, encode_fn): - n_frames = [get_features_or_waveform(p).shape[0] for p in lines] - return lines, n_frames - - def build_dataset_for_inference(self, src_tokens, src_lengths, **kwargs): - return SpeechToTextDataset( - "interactive", False, self.data_cfg, src_tokens, src_lengths - ) diff --git a/kosmos-g/fairseq/fairseq/tasks/text_to_speech.py b/kosmos-g/fairseq/fairseq/tasks/text_to_speech.py deleted file mode 100644 index 82e7e6643..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/text_to_speech.py +++ /dev/null @@ -1,501 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
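A hedged sketch of the kinds of keys SpeechToTextTask reads from --config-yaml via S2TDataConfig above; the key names come from the accesses in that file, the values are illustrative:

```python
example_s2t_config = {
    "vocab_filename": "dict.txt",          # target Dictionary, loaded in setup_task
    "pre_tokenizer": {"tokenizer": None},  # forwarded to encoders.build_tokenizer
    "bpe_tokenizer": {"bpe": None},        # forwarded to encoders.build_bpe
    "prepend_tgt_lang_tag": False,         # if True, pair with --prefix-size 1
    "speaker_set_filename": None,          # optional speaker list -> speaker_to_id
}
```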
- -import logging -import os -import os.path as op - -import torch -import torch.nn.functional as F -import numpy as np - -from fairseq.data.audio.text_to_speech_dataset import TextToSpeechDatasetCreator -from fairseq.tasks import register_task -from fairseq.tasks.speech_to_text import SpeechToTextTask -from fairseq.speech_generator import ( - AutoRegressiveSpeechGenerator, - NonAutoregressiveSpeechGenerator, - TeacherForcingAutoRegressiveSpeechGenerator, -) - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=logging.INFO, -) -logger = logging.getLogger(__name__) - - -try: - from tensorboardX import SummaryWriter -except ImportError: - logger.info("Please install tensorboardX: pip install tensorboardX") - SummaryWriter = None - - -@register_task("text_to_speech") -class TextToSpeechTask(SpeechToTextTask): - @staticmethod - def add_args(parser): - parser.add_argument("data", help="manifest root path") - parser.add_argument( - "--config-yaml", - type=str, - default="config.yaml", - help="Configuration YAML filename (under manifest root)", - ) - parser.add_argument( - "--max-source-positions", - default=1024, - type=int, - metavar="N", - help="max number of tokens in the source sequence", - ) - parser.add_argument( - "--max-target-positions", - default=1200, - type=int, - metavar="N", - help="max number of tokens in the target sequence", - ) - parser.add_argument("--n-frames-per-step", type=int, default=1) - parser.add_argument("--eos-prob-threshold", type=float, default=0.5) - parser.add_argument("--eval-inference", action="store_true") - parser.add_argument("--eval-tb-nsample", type=int, default=8) - parser.add_argument("--vocoder", type=str, default="griffin_lim") - parser.add_argument("--spec-bwd-max-iter", type=int, default=8) - - def __init__(self, args, src_dict): - super().__init__(args, src_dict) - self.src_dict = src_dict - self.sr = self.data_cfg.config.get("features").get("sample_rate") - - self.tensorboard_writer = None - self.tensorboard_dir = "" - if args.tensorboard_logdir and SummaryWriter is not None: - self.tensorboard_dir = os.path.join(args.tensorboard_logdir, "valid_extra") - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - is_train_split = split.startswith("train") - pre_tokenizer = self.build_tokenizer(self.args) - bpe_tokenizer = self.build_bpe(self.args) - self.datasets[split] = TextToSpeechDatasetCreator.from_tsv( - self.args.data, - self.data_cfg, - split, - self.src_dict, - pre_tokenizer, - bpe_tokenizer, - is_train_split=is_train_split, - epoch=epoch, - seed=self.args.seed, - n_frames_per_step=self.args.n_frames_per_step, - speaker_to_id=self.speaker_to_id, - ) - - @property - def target_dictionary(self): - return None - - @property - def source_dictionary(self): - return self.src_dict - - def get_speaker_embeddings_path(self): - speaker_emb_path = None - if self.data_cfg.config.get("speaker_emb_filename") is not None: - speaker_emb_path = op.join( - self.args.data, self.data_cfg.config.get("speaker_emb_filename") - ) - return speaker_emb_path - - @classmethod - def get_speaker_embeddings(cls, args): - embed_speaker = None - if args.speaker_to_id is not None: - if args.speaker_emb_path is None: - embed_speaker = torch.nn.Embedding( - len(args.speaker_to_id), args.speaker_embed_dim - ) - else: - speaker_emb_mat = np.load(args.speaker_emb_path) - assert speaker_emb_mat.shape[1] == args.speaker_embed_dim - embed_speaker = torch.nn.Embedding.from_pretrained( - 
torch.from_numpy(speaker_emb_mat), - freeze=True, - ) - logger.info( - f"load speaker embeddings from {args.speaker_emb_path}. " - f"train embedding? {embed_speaker.weight.requires_grad}\n" - f"embeddings:\n{speaker_emb_mat}" - ) - return embed_speaker - - def build_model(self, cfg, from_checkpoint=False): - cfg.pitch_min = self.data_cfg.config["features"].get("pitch_min", None) - cfg.pitch_max = self.data_cfg.config["features"].get("pitch_max", None) - cfg.energy_min = self.data_cfg.config["features"].get("energy_min", None) - cfg.energy_max = self.data_cfg.config["features"].get("energy_max", None) - cfg.speaker_emb_path = self.get_speaker_embeddings_path() - model = super().build_model(cfg, from_checkpoint) - self.generator = None - if getattr(cfg, "eval_inference", False): - self.generator = self.build_generator([model], cfg) - return model - - def build_generator(self, models, cfg, vocoder=None, **unused): - if vocoder is None: - vocoder = self.build_default_vocoder() - model = models[0] - if getattr(model, "NON_AUTOREGRESSIVE", False): - return NonAutoregressiveSpeechGenerator(model, vocoder, self.data_cfg) - else: - generator = AutoRegressiveSpeechGenerator - if getattr(cfg, "teacher_forcing", False): - generator = TeacherForcingAutoRegressiveSpeechGenerator - logger.info("Teacher forcing mode for generation") - return generator( - model, - vocoder, - self.data_cfg, - max_iter=self.args.max_target_positions, - eos_prob_threshold=self.args.eos_prob_threshold, - ) - - def build_default_vocoder(self): - from fairseq.models.text_to_speech.vocoder import get_vocoder - - vocoder = get_vocoder(self.args, self.data_cfg) - if torch.cuda.is_available() and not self.args.cpu: - vocoder = vocoder.cuda() - else: - vocoder = vocoder.cpu() - return vocoder - - def valid_step(self, sample, model, criterion): - loss, sample_size, logging_output = super().valid_step(sample, model, criterion) - - if getattr(self.args, "eval_inference", False): - hypos, inference_losses = self.valid_step_with_inference( - sample, model, self.generator - ) - for k, v in inference_losses.items(): - assert k not in logging_output - logging_output[k] = v - - picked_id = 0 - if self.tensorboard_dir and (sample["id"] == picked_id).any(): - self.log_tensorboard( - sample, - hypos[: self.args.eval_tb_nsample], - model._num_updates, - is_na_model=getattr(model, "NON_AUTOREGRESSIVE", False), - ) - return loss, sample_size, logging_output - - def valid_step_with_inference(self, sample, model, generator): - hypos = generator.generate(model, sample, has_targ=True) - - losses = { - "mcd_loss": 0.0, - "targ_frames": 0.0, - "pred_frames": 0.0, - "nins": 0.0, - "ndel": 0.0, - } - rets = batch_mel_cepstral_distortion( - [hypo["targ_waveform"] for hypo in hypos], - [hypo["waveform"] for hypo in hypos], - self.sr, - normalize_type=None, - ) - for d, extra in rets: - pathmap = extra[-1] - losses["mcd_loss"] += d.item() - losses["targ_frames"] += pathmap.size(0) - losses["pred_frames"] += pathmap.size(1) - losses["nins"] += (pathmap.sum(dim=1) - 1).sum().item() - losses["ndel"] += (pathmap.sum(dim=0) - 1).sum().item() - - return hypos, losses - - def log_tensorboard(self, sample, hypos, num_updates, is_na_model=False): - if self.tensorboard_writer is None: - self.tensorboard_writer = SummaryWriter(self.tensorboard_dir) - tb_writer = self.tensorboard_writer - for b in range(len(hypos)): - idx = sample["id"][b] - text = sample["src_texts"][b] - targ = hypos[b]["targ_feature"] - pred = hypos[b]["feature"] - attn = hypos[b]["attn"] - - if 
is_na_model: - data = plot_tts_output( - [targ.transpose(0, 1), pred.transpose(0, 1)], - [f"target (idx={idx})", "output"], - attn, - "alignment", - ret_np=True, - suptitle=text, - ) - else: - eos_prob = hypos[b]["eos_prob"] - data = plot_tts_output( - [targ.transpose(0, 1), pred.transpose(0, 1), attn], - [f"target (idx={idx})", "output", "alignment"], - eos_prob, - "eos prob", - ret_np=True, - suptitle=text, - ) - - tb_writer.add_image( - f"inference_sample_{b}", data, num_updates, dataformats="HWC" - ) - - if hypos[b]["waveform"] is not None: - targ_wave = hypos[b]["targ_waveform"].detach().cpu().float() - pred_wave = hypos[b]["waveform"].detach().cpu().float() - tb_writer.add_audio( - f"inference_targ_{b}", targ_wave, num_updates, sample_rate=self.sr - ) - tb_writer.add_audio( - f"inference_pred_{b}", pred_wave, num_updates, sample_rate=self.sr - ) - - -def save_figure_to_numpy(fig): - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="") - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - return data - - -DEFAULT_V_MIN = np.log(1e-5) - - -def plot_tts_output( - data_2d, - title_2d, - data_1d, - title_1d, - figsize=(24, 4), - v_min=DEFAULT_V_MIN, - v_max=3, - ret_np=False, - suptitle="", -): - try: - import matplotlib.pyplot as plt - from mpl_toolkits.axes_grid1 import make_axes_locatable - except ImportError: - raise ImportError("Please install Matplotlib: pip install matplotlib") - - data_2d = [ - x.detach().cpu().float().numpy() if isinstance(x, torch.Tensor) else x - for x in data_2d - ] - fig, axes = plt.subplots(1, len(data_2d) + 1, figsize=figsize) - if suptitle: - fig.suptitle(suptitle[:400]) # capped at 400 chars - axes = [axes] if len(data_2d) == 0 else axes - for ax, x, name in zip(axes, data_2d, title_2d): - ax.set_title(name) - divider = make_axes_locatable(ax) - cax = divider.append_axes("right", size="5%", pad=0.05) - im = ax.imshow( - x, - origin="lower", - aspect="auto", - vmin=max(x.min(), v_min), - vmax=min(x.max(), v_max), - ) - fig.colorbar(im, cax=cax, orientation="vertical") - - if isinstance(data_1d, torch.Tensor): - data_1d = data_1d.detach().cpu().numpy() - axes[-1].plot(data_1d) - axes[-1].set_title(title_1d) - plt.tight_layout() - - if ret_np: - fig.canvas.draw() - data = save_figure_to_numpy(fig) - plt.close(fig) - return data - - -def antidiag_indices(offset, min_i=0, max_i=None, min_j=0, max_j=None): - """ - for a (3, 4) matrix with min_i=1, max_i=3, min_j=1, max_j=4, outputs - - offset=2 (1, 1), - offset=3 (2, 1), (1, 2) - offset=4 (2, 2), (1, 3) - offset=5 (2, 3) - - constraints: - i + j = offset - min_j <= j < max_j - min_i <= offset - j < max_i - """ - if max_i is None: - max_i = offset + 1 - if max_j is None: - max_j = offset + 1 - min_j = max(min_j, offset - max_i + 1, 0) - max_j = min(max_j, offset - min_i + 1, offset + 1) - j = torch.arange(min_j, max_j) - i = offset - j - return torch.stack([i, j]) - - -def batch_dynamic_time_warping(distance, shapes=None): - """full batched DTW without any constraints - - distance: (batchsize, max_M, max_N) matrix - shapes: (batchsize,) vector specifying (M, N) for each entry - """ - # ptr: 0=left, 1=up-left, 2=up - ptr2dij = {0: (0, -1), 1: (-1, -1), 2: (-1, 0)} - - bsz, m, n = distance.size() - cumdist = torch.zeros_like(distance) - backptr = torch.zeros_like(distance).type(torch.int32) - 1 - - # initialize - cumdist[:, 0, :] = distance[:, 0, :].cumsum(dim=-1) - cumdist[:, :, 0] = distance[:, :, 0].cumsum(dim=-1) - backptr[:, 0, :] = 0 - backptr[:, :, 0] = 2 - - # DP with 
optimized anti-diagonal parallelization, O(M+N) steps - for offset in range(2, m + n - 1): - ind = antidiag_indices(offset, 1, m, 1, n) - c = torch.stack( - [ - cumdist[:, ind[0], ind[1] - 1], - cumdist[:, ind[0] - 1, ind[1] - 1], - cumdist[:, ind[0] - 1, ind[1]], - ], - dim=2, - ) - v, b = c.min(axis=-1) - backptr[:, ind[0], ind[1]] = b.int() - cumdist[:, ind[0], ind[1]] = v + distance[:, ind[0], ind[1]] - - # backtrace - pathmap = torch.zeros_like(backptr) - for b in range(bsz): - i = m - 1 if shapes is None else (shapes[b][0] - 1).item() - j = n - 1 if shapes is None else (shapes[b][1] - 1).item() - dtwpath = [(i, j)] - while (i != 0 or j != 0) and len(dtwpath) < 10000: - assert i >= 0 and j >= 0 - di, dj = ptr2dij[backptr[b, i, j].item()] - i, j = i + di, j + dj - dtwpath.append((i, j)) - dtwpath = dtwpath[::-1] - indices = torch.from_numpy(np.array(dtwpath)) - pathmap[b, indices[:, 0], indices[:, 1]] = 1 - - return cumdist, backptr, pathmap - - -def compute_l2_dist(x1, x2): - """compute an (m, n) L2 distance matrix from (m, d) and (n, d) matrices""" - return torch.cdist(x1.unsqueeze(0), x2.unsqueeze(0), p=2).squeeze(0).pow(2) - - -def compute_rms_dist(x1, x2): - l2_dist = compute_l2_dist(x1, x2) - return (l2_dist / x1.size(1)).pow(0.5) - - -def get_divisor(pathmap, normalize_type): - if normalize_type is None: - return 1 - elif normalize_type == "len1": - return pathmap.size(0) - elif normalize_type == "len2": - return pathmap.size(1) - elif normalize_type == "path": - return pathmap.sum().item() - else: - raise ValueError(f"normalize_type {normalize_type} not supported") - - -def batch_compute_distortion(y1, y2, sr, feat_fn, dist_fn, normalize_type): - d, s, x1, x2 = [], [], [], [] - for cur_y1, cur_y2 in zip(y1, y2): - assert cur_y1.ndim == 1 and cur_y2.ndim == 1 - cur_x1 = feat_fn(cur_y1) - cur_x2 = feat_fn(cur_y2) - x1.append(cur_x1) - x2.append(cur_x2) - - cur_d = dist_fn(cur_x1, cur_x2) - d.append(cur_d) - s.append(d[-1].size()) - max_m = max(ss[0] for ss in s) - max_n = max(ss[1] for ss in s) - d = torch.stack( - [F.pad(dd, (0, max_n - dd.size(1), 0, max_m - dd.size(0))) for dd in d] - ) - s = torch.LongTensor(s).to(d.device) - cumdists, backptrs, pathmaps = batch_dynamic_time_warping(d, s) - - rets = [] - itr = zip(s, x1, x2, d, cumdists, backptrs, pathmaps) - for (m, n), cur_x1, cur_x2, dist, cumdist, backptr, pathmap in itr: - cumdist = cumdist[:m, :n] - backptr = backptr[:m, :n] - pathmap = pathmap[:m, :n] - divisor = get_divisor(pathmap, normalize_type) - - distortion = cumdist[-1, -1] / divisor - ret = distortion, (cur_x1, cur_x2, dist, cumdist, backptr, pathmap) - rets.append(ret) - return rets - - -def batch_mel_cepstral_distortion(y1, y2, sr, normalize_type="path", mfcc_fn=None): - """ - https://arxiv.org/pdf/2011.03568.pdf - - The root mean squared error computed on 13-dimensional MFCC using DTW for - alignment. MFCC features are computed from an 80-channel log-mel - spectrogram using a 50ms Hann window and hop of 12.5ms. 
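    (At a 16 kHz sampling rate, the melkwargs below therefore resolve to
    n_fft = win_length = int(0.05 * 16000) = 800 samples and
    hop_length = int(0.0125 * 16000) = 200 samples.)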
- - y1: list of waveforms - y2: list of waveforms - sr: sampling rate - """ - - try: - import torchaudio - except ImportError: - raise ImportError("Please install torchaudio: pip install torchaudio") - - if mfcc_fn is None or mfcc_fn.sample_rate != sr: - melkwargs = { - "n_fft": int(0.05 * sr), - "win_length": int(0.05 * sr), - "hop_length": int(0.0125 * sr), - "f_min": 20, - "n_mels": 80, - "window_fn": torch.hann_window, - } - mfcc_fn = torchaudio.transforms.MFCC( - sr, n_mfcc=13, log_mels=True, melkwargs=melkwargs - ).to(y1[0].device) - return batch_compute_distortion( - y1, - y2, - sr, - lambda y: mfcc_fn(y).transpose(-1, -2), - compute_rms_dist, - normalize_type, - ) diff --git a/kosmos-g/fairseq/fairseq/tasks/translation.py b/kosmos-g/fairseq/fairseq/tasks/translation.py deleted file mode 100644 index 73b3d7c7b..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/translation.py +++ /dev/null @@ -1,497 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field -import itertools -import json -import logging -import os -from typing import Optional -from argparse import Namespace -from omegaconf import II - -import numpy as np -from fairseq import metrics, utils -from fairseq.data import ( - AppendTokenDataset, - ConcatDataset, - LanguagePairDataset, - PrependTokenDataset, - StripTokenDataset, - TruncateDataset, - data_utils, - encoders, - indexed_dataset, -) -from fairseq.data.indexed_dataset import get_available_dataset_impl -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.tasks import FairseqTask, register_task - - -EVAL_BLEU_ORDER = 4 - - -logger = logging.getLogger(__name__) - - -def load_langpair_dataset( - data_path, - split, - src, - src_dict, - tgt, - tgt_dict, - combine, - dataset_impl, - upsample_primary, - left_pad_source, - left_pad_target, - max_source_positions, - max_target_positions, - prepend_bos=False, - load_alignments=False, - truncate_source=False, - append_source_id=False, - num_buckets=0, - shuffle=True, - pad_to_multiple=1, - prepend_bos_src=None, -): - def split_exists(split, src, tgt, lang, data_path): - filename = os.path.join(data_path, "{}.{}-{}.{}".format(split, src, tgt, lang)) - return indexed_dataset.dataset_exists(filename, impl=dataset_impl) - - src_datasets = [] - tgt_datasets = [] - - for k in itertools.count(): - split_k = split + (str(k) if k > 0 else "") - - # infer langcode - if split_exists(split_k, src, tgt, src, data_path): - prefix = os.path.join(data_path, "{}.{}-{}.".format(split_k, src, tgt)) - elif split_exists(split_k, tgt, src, src, data_path): - prefix = os.path.join(data_path, "{}.{}-{}.".format(split_k, tgt, src)) - else: - if k > 0: - break - else: - raise FileNotFoundError( - "Dataset not found: {} ({})".format(split, data_path) - ) - - src_dataset = data_utils.load_indexed_dataset( - prefix + src, src_dict, dataset_impl - ) - if truncate_source: - src_dataset = AppendTokenDataset( - TruncateDataset( - StripTokenDataset(src_dataset, src_dict.eos()), - max_source_positions - 1, - ), - src_dict.eos(), - ) - src_datasets.append(src_dataset) - - tgt_dataset = data_utils.load_indexed_dataset( - prefix + tgt, tgt_dict, dataset_impl - ) - if tgt_dataset is not None: - tgt_datasets.append(tgt_dataset) - - logger.info( - "{} {} {}-{} {} examples".format( - data_path, split_k, src, tgt, len(src_datasets[-1]) - ) - ) - - if not combine: - break - - 
assert len(src_datasets) == len(tgt_datasets) or len(tgt_datasets) == 0 - - if len(src_datasets) == 1: - src_dataset = src_datasets[0] - tgt_dataset = tgt_datasets[0] if len(tgt_datasets) > 0 else None - else: - sample_ratios = [1] * len(src_datasets) - sample_ratios[0] = upsample_primary - src_dataset = ConcatDataset(src_datasets, sample_ratios) - if len(tgt_datasets) > 0: - tgt_dataset = ConcatDataset(tgt_datasets, sample_ratios) - else: - tgt_dataset = None - - if prepend_bos: - assert hasattr(src_dict, "bos_index") and hasattr(tgt_dict, "bos_index") - src_dataset = PrependTokenDataset(src_dataset, src_dict.bos()) - if tgt_dataset is not None: - tgt_dataset = PrependTokenDataset(tgt_dataset, tgt_dict.bos()) - elif prepend_bos_src is not None: - logger.info(f"prepending src bos: {prepend_bos_src}") - src_dataset = PrependTokenDataset(src_dataset, prepend_bos_src) - - eos = None - if append_source_id: - src_dataset = AppendTokenDataset( - src_dataset, src_dict.index("[{}]".format(src)) - ) - if tgt_dataset is not None: - tgt_dataset = AppendTokenDataset( - tgt_dataset, tgt_dict.index("[{}]".format(tgt)) - ) - eos = tgt_dict.index("[{}]".format(tgt)) - - align_dataset = None - if load_alignments: - align_path = os.path.join(data_path, "{}.align.{}-{}".format(split, src, tgt)) - if indexed_dataset.dataset_exists(align_path, impl=dataset_impl): - align_dataset = data_utils.load_indexed_dataset( - align_path, None, dataset_impl - ) - - tgt_dataset_sizes = tgt_dataset.sizes if tgt_dataset is not None else None - return LanguagePairDataset( - src_dataset, - src_dataset.sizes, - src_dict, - tgt_dataset, - tgt_dataset_sizes, - tgt_dict, - left_pad_source=left_pad_source, - left_pad_target=left_pad_target, - align_dataset=align_dataset, - eos=eos, - num_buckets=num_buckets, - shuffle=shuffle, - pad_to_multiple=pad_to_multiple, - ) - - -@dataclass -class TranslationConfig(FairseqDataclass): - data: Optional[str] = field( - default=None, - metadata={ - "help": "colon separated path to data directories list, will be iterated upon during epochs " - "in round-robin manner; however, valid and test data are always in the first directory " - "to avoid the need for repeating them in all directories" - }, - ) - source_lang: Optional[str] = field( - default=None, - metadata={ - "help": "source language", - "argparse_alias": "-s", - }, - ) - target_lang: Optional[str] = field( - default=None, - metadata={ - "help": "target language", - "argparse_alias": "-t", - }, - ) - load_alignments: bool = field( - default=False, metadata={"help": "load the binarized alignments"} - ) - left_pad_source: bool = field( - default=True, metadata={"help": "pad the source on the left"} - ) - left_pad_target: bool = field( - default=False, metadata={"help": "pad the target on the left"} - ) - max_source_positions: int = field( - default=1024, metadata={"help": "max number of tokens in the source sequence"} - ) - max_target_positions: int = field( - default=1024, metadata={"help": "max number of tokens in the target sequence"} - ) - upsample_primary: int = field( - default=-1, metadata={"help": "the amount of upsample primary dataset"} - ) - truncate_source: bool = field( - default=False, metadata={"help": "truncate source to max-source-positions"} - ) - num_batch_buckets: int = field( - default=0, - metadata={ - "help": "if >0, then bucket source and target lengths into " - "N buckets and pad accordingly; this is useful on TPUs to minimize the number of compilations" - }, - ) - train_subset: str = II("dataset.train_subset") - 
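`load_langpair_dataset` probes binarized files named `{split}.{src}-{tgt}.{lang}` (trying both language orders) alongside `dict.{lang}.txt` dictionaries. A small sketch of the naming convention it expects; the `data-bin/wmt` path and language pair are hypothetical:

```python
import os

def expected_pair_files(data_path, split, src, tgt):
    # mirrors the "{split}.{src}-{tgt}.{lang}" pattern probed by split_exists above
    prefix = os.path.join(data_path, f"{split}.{src}-{tgt}.")
    return [prefix + src, prefix + tgt]

print(expected_pair_files("data-bin/wmt", "train", "en", "de"))
# ['data-bin/wmt/train.en-de.en', 'data-bin/wmt/train.en-de.de']
```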
dataset_impl: Optional[ChoiceEnum(get_available_dataset_impl())] = II( - "dataset.dataset_impl" - ) - required_seq_len_multiple: int = II("dataset.required_seq_len_multiple") - - # options for reporting BLEU during validation - eval_bleu: bool = field( - default=False, metadata={"help": "evaluation with BLEU scores"} - ) - eval_bleu_args: Optional[str] = field( - default="{}", - metadata={ - "help": 'generation args for BLUE scoring, e.g., \'{"beam": 4, "lenpen": 0.6}\', as JSON string' - }, - ) - eval_bleu_detok: str = field( - default="space", - metadata={ - "help": "detokenize before computing BLEU (e.g., 'moses'); required if using --eval-bleu; " - "use 'space' to disable detokenization; see fairseq.data.encoders for other options" - }, - ) - eval_bleu_detok_args: Optional[str] = field( - default="{}", - metadata={"help": "args for building the tokenizer, if needed, as JSON string"}, - ) - eval_tokenized_bleu: bool = field( - default=False, metadata={"help": "compute tokenized BLEU instead of sacrebleu"} - ) - eval_bleu_remove_bpe: Optional[str] = field( - default=None, - metadata={ - "help": "remove BPE before computing BLEU", - "argparse_const": "@@ ", - }, - ) - eval_bleu_print_samples: bool = field( - default=False, metadata={"help": "print sample generations during validation"} - ) - - -@register_task("translation", dataclass=TranslationConfig) -class TranslationTask(FairseqTask): - """ - Translate from one (source) language to another (target) language. - - Args: - src_dict (~fairseq.data.Dictionary): dictionary for the source language - tgt_dict (~fairseq.data.Dictionary): dictionary for the target language - - .. note:: - - The translation task is compatible with :mod:`fairseq-train`, - :mod:`fairseq-generate` and :mod:`fairseq-interactive`. - """ - - cfg: TranslationConfig - - def __init__(self, cfg: TranslationConfig, src_dict, tgt_dict): - super().__init__(cfg) - self.src_dict = src_dict - self.tgt_dict = tgt_dict - - @classmethod - def setup_task(cls, cfg: TranslationConfig, **kwargs): - """Setup the task (e.g., load dictionaries). - - Args: - args (argparse.Namespace): parsed command-line arguments - """ - - paths = utils.split_paths(cfg.data) - assert len(paths) > 0 - # find language pair automatically - if cfg.source_lang is None or cfg.target_lang is None: - cfg.source_lang, cfg.target_lang = data_utils.infer_language_pair(paths[0]) - if cfg.source_lang is None or cfg.target_lang is None: - raise Exception( - "Could not infer language pair, please provide it explicitly" - ) - - # load dictionaries - src_dict = cls.load_dictionary( - os.path.join(paths[0], "dict.{}.txt".format(cfg.source_lang)) - ) - tgt_dict = cls.load_dictionary( - os.path.join(paths[0], "dict.{}.txt".format(cfg.target_lang)) - ) - assert src_dict.pad() == tgt_dict.pad() - assert src_dict.eos() == tgt_dict.eos() - assert src_dict.unk() == tgt_dict.unk() - logger.info("[{}] dictionary: {} types".format(cfg.source_lang, len(src_dict))) - logger.info("[{}] dictionary: {} types".format(cfg.target_lang, len(tgt_dict))) - - return cls(cfg, src_dict, tgt_dict) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. 
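Note that `eval_bleu_args` and `eval_bleu_detok_args` are JSON strings rather than structured config; `build_model` below parses them into plain namespaces for the sequence generator and detokenizer. A quick illustration of that round trip (the argument values are just examples):

```python
import json
from argparse import Namespace

gen_args = json.loads('{"beam": 4, "lenpen": 0.6}')
print(Namespace(**gen_args))  # Namespace(beam=4, lenpen=0.6)
```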
- - Args: - split (str): name of the split (e.g., train, valid, test) - """ - paths = utils.split_paths(self.cfg.data) - assert len(paths) > 0 - if split != self.cfg.train_subset: - # if not training data set, use the first shard for valid and test - paths = paths[:1] - data_path = paths[(epoch - 1) % len(paths)] - - # infer langcode - src, tgt = self.cfg.source_lang, self.cfg.target_lang - - self.datasets[split] = load_langpair_dataset( - data_path, - split, - src, - self.src_dict, - tgt, - self.tgt_dict, - combine=combine, - dataset_impl=self.cfg.dataset_impl, - upsample_primary=self.cfg.upsample_primary, - left_pad_source=self.cfg.left_pad_source, - left_pad_target=self.cfg.left_pad_target, - max_source_positions=self.cfg.max_source_positions, - max_target_positions=self.cfg.max_target_positions, - load_alignments=self.cfg.load_alignments, - truncate_source=self.cfg.truncate_source, - num_buckets=self.cfg.num_batch_buckets, - shuffle=(split != "test"), - pad_to_multiple=self.cfg.required_seq_len_multiple, - ) - - def build_dataset_for_inference(self, src_tokens, src_lengths, constraints=None): - return LanguagePairDataset( - src_tokens, - src_lengths, - self.source_dictionary, - tgt_dict=self.target_dictionary, - constraints=constraints, - ) - - def build_model(self, cfg, from_checkpoint=False): - model = super().build_model(cfg, from_checkpoint) - if self.cfg.eval_bleu: - detok_args = json.loads(self.cfg.eval_bleu_detok_args) - self.tokenizer = encoders.build_tokenizer( - Namespace(tokenizer=self.cfg.eval_bleu_detok, **detok_args) - ) - - gen_args = json.loads(self.cfg.eval_bleu_args) - self.sequence_generator = self.build_generator( - [model], Namespace(**gen_args) - ) - return model - - def valid_step(self, sample, model, criterion): - loss, sample_size, logging_output = super().valid_step(sample, model, criterion) - if self.cfg.eval_bleu: - bleu = self._inference_with_bleu(self.sequence_generator, sample, model) - logging_output["_bleu_sys_len"] = bleu.sys_len - logging_output["_bleu_ref_len"] = bleu.ref_len - # we split counts into separate entries so that they can be - # summed efficiently across workers using fast-stat-sync - assert len(bleu.counts) == EVAL_BLEU_ORDER - for i in range(EVAL_BLEU_ORDER): - logging_output["_bleu_counts_" + str(i)] = bleu.counts[i] - logging_output["_bleu_totals_" + str(i)] = bleu.totals[i] - return loss, sample_size, logging_output - - def reduce_metrics(self, logging_outputs, criterion): - super().reduce_metrics(logging_outputs, criterion) - if self.cfg.eval_bleu: - - def sum_logs(key): - import torch - - result = sum(log.get(key, 0) for log in logging_outputs) - if torch.is_tensor(result): - result = result.cpu() - return result - - counts, totals = [], [] - for i in range(EVAL_BLEU_ORDER): - counts.append(sum_logs("_bleu_counts_" + str(i))) - totals.append(sum_logs("_bleu_totals_" + str(i))) - - if max(totals) > 0: - # log counts as numpy arrays -- log_scalar will sum them correctly - metrics.log_scalar("_bleu_counts", np.array(counts)) - metrics.log_scalar("_bleu_totals", np.array(totals)) - metrics.log_scalar("_bleu_sys_len", sum_logs("_bleu_sys_len")) - metrics.log_scalar("_bleu_ref_len", sum_logs("_bleu_ref_len")) - - def compute_bleu(meters): - import inspect - - try: - from sacrebleu.metrics import BLEU - - comp_bleu = BLEU.compute_bleu - except ImportError: - # compatibility API for sacrebleu 1.x - import sacrebleu - - comp_bleu = sacrebleu.compute_bleu - - fn_sig = inspect.getfullargspec(comp_bleu)[0] - if "smooth_method" in fn_sig: - 
smooth = {"smooth_method": "exp"} - else: - smooth = {"smooth": "exp"} - bleu = comp_bleu( - correct=meters["_bleu_counts"].sum, - total=meters["_bleu_totals"].sum, - sys_len=meters["_bleu_sys_len"].sum, - ref_len=meters["_bleu_ref_len"].sum, - **smooth, - ) - return round(bleu.score, 2) - - metrics.log_derived("bleu", compute_bleu) - - def max_positions(self): - """Return the max sentence length allowed by the task.""" - return (self.cfg.max_source_positions, self.cfg.max_target_positions) - - @property - def source_dictionary(self): - """Return the source :class:`~fairseq.data.Dictionary`.""" - return self.src_dict - - @property - def target_dictionary(self): - """Return the target :class:`~fairseq.data.Dictionary`.""" - return self.tgt_dict - - def _inference_with_bleu(self, generator, sample, model): - import sacrebleu - - def decode(toks, escape_unk=False): - s = self.tgt_dict.string( - toks.int().cpu(), - self.cfg.eval_bleu_remove_bpe, - # The default unknown string in fairseq is `<unk>`, but - # this is tokenized by sacrebleu as `< unk >`, inflating - # BLEU scores. Instead, we use a somewhat more verbose - # alternative that is unlikely to appear in the real - # reference, but doesn't get split into multiple tokens. - unk_string=("UNKNOWNTOKENINREF" if escape_unk else "UNKNOWNTOKENINHYP"), - ) - if self.tokenizer: - s = self.tokenizer.decode(s) - return s - - gen_out = self.inference_step(generator, [model], sample, prefix_tokens=None) - hyps, refs = [], [] - for i in range(len(gen_out)): - hyps.append(decode(gen_out[i][0]["tokens"])) - refs.append( - decode( - utils.strip_pad(sample["target"][i], self.tgt_dict.pad()), - escape_unk=True, # don't count <unk> as matches to the hypo - ) - ) - if self.cfg.eval_bleu_print_samples: - logger.info("example hypothesis: " + hyps[0]) - logger.info("example reference: " + refs[0]) - if self.cfg.eval_tokenized_bleu: - return sacrebleu.corpus_bleu(hyps, [refs], tokenize="none") - else: - return sacrebleu.corpus_bleu(hyps, [refs]) diff --git a/kosmos-g/fairseq/fairseq/tasks/translation_from_pretrained_bart.py b/kosmos-g/fairseq/fairseq/tasks/translation_from_pretrained_bart.py deleted file mode 100644 index 0fd7a5b29..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/translation_from_pretrained_bart.py +++ /dev/null @@ -1,132 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from fairseq import utils -from fairseq.data import LanguagePairDataset - -from . import register_task -from .translation import TranslationTask, load_langpair_dataset - - -@register_task("translation_from_pretrained_bart") -class TranslationFromPretrainedBARTTask(TranslationTask): - """ - Translate from source language to target language with a model initialized with a multilingual pretrain. - - Args: - src_dict (~fairseq.data.Dictionary): dictionary for the source language - tgt_dict (~fairseq.data.Dictionary): dictionary for the target language - - .. note:: - - The translation task is compatible with :mod:`fairseq-train`, - :mod:`fairseq-generate` and :mod:`fairseq-interactive`. - - The translation task provides the following additional command-line - arguments: - - .. 
argparse:: - :ref: fairseq.tasks.translation_parser - :prog: - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - # fmt: off - TranslationTask.add_args(parser) - parser.add_argument('--langs', type=str, metavar='LANG', - help='comma-separated list of monolingual language, ' - 'for example, "en,de,fr". These should match the ' - 'langs from pretraining (and be in the same order). ' - 'You should always add all pretraining language idx ' - 'during finetuning.') - parser.add_argument('--prepend-bos', action='store_true', - help='prepend bos token to each sentence, which matches ' - 'mBART pretraining') - # fmt: on - - def __init__(self, args, src_dict, tgt_dict): - super().__init__(args, src_dict, tgt_dict) - self.langs = args.langs.split(",") - for d in [src_dict, tgt_dict]: - for l in self.langs: - d.add_symbol("[{}]".format(l)) - d.add_symbol("<mask>") - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - """ - paths = utils.split_paths(self.args.data) - assert len(paths) > 0 - data_path = paths[(epoch - 1) % len(paths)] - - # infer langcode - src, tgt = self.args.source_lang, self.args.target_lang - - self.datasets[split] = load_langpair_dataset( - data_path, - split, - src, - self.src_dict, - tgt, - self.tgt_dict, - combine=combine, - dataset_impl=self.args.dataset_impl, - upsample_primary=self.args.upsample_primary, - left_pad_source=self.args.left_pad_source, - left_pad_target=self.args.left_pad_target, - max_source_positions=getattr(self.args, "max_source_positions", 1024), - max_target_positions=getattr(self.args, "max_target_positions", 1024), - load_alignments=self.args.load_alignments, - prepend_bos=getattr(self.args, "prepend_bos", False), - append_source_id=True, - ) - - def build_generator(self, models, args, **unused): - if getattr(args, "score_reference", False): - from fairseq.sequence_scorer import SequenceScorer - - return SequenceScorer( - self.target_dictionary, - eos=self.tgt_dict.index("[{}]".format(self.args.target_lang)), - ) - else: - from fairseq.sequence_generator import SequenceGenerator - - return SequenceGenerator( - models, - self.target_dictionary, - beam_size=getattr(args, "beam", 5), - max_len_a=getattr(args, "max_len_a", 0), - max_len_b=getattr(args, "max_len_b", 200), - min_len=getattr(args, "min_len", 1), - normalize_scores=(not getattr(args, "unnormalized", False)), - len_penalty=getattr(args, "lenpen", 1), - unk_penalty=getattr(args, "unkpen", 0), - temperature=getattr(args, "temperature", 1.0), - match_source_len=getattr(args, "match_source_len", False), - no_repeat_ngram_size=getattr(args, "no_repeat_ngram_size", 0), - eos=self.tgt_dict.index("[{}]".format(self.args.target_lang)), - ) - - def build_dataset_for_inference(self, src_tokens, src_lengths, constraints=None): - src_lang_id = self.source_dictionary.index("[{}]".format(self.args.source_lang)) - source_tokens = [] - for s_t in src_tokens: - s_t = torch.cat([s_t, s_t.new(1).fill_(src_lang_id)]) - source_tokens.append(s_t) - dataset = LanguagePairDataset( - source_tokens, - src_lengths, - self.source_dictionary, - tgt_dict=self.target_dictionary, - constraints=constraints, - ) - return dataset diff --git a/kosmos-g/fairseq/fairseq/tasks/translation_from_pretrained_xlm.py b/kosmos-g/fairseq/fairseq/tasks/translation_from_pretrained_xlm.py deleted file mode 100644 index a05f28915..000000000 --- 
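`TranslationFromPretrainedBARTTask` extends each dictionary with one `[lang]` symbol per pretraining language plus `<mask>`, and generation then uses `[target_lang]` in place of EOS (see `build_generator` above). A sketch of that symbol bookkeeping, assuming a fairseq installation; the language list is an example value for `--langs`:

```python
from fairseq.data import Dictionary

d = Dictionary()
for lang in "en,de,fr".split(","):  # hypothetical --langs value
    d.add_symbol(f"[{lang}]")
d.add_symbol("<mask>")
print(d.index("[de]"), d.eos())  # language-token id vs. the ordinary EOS id
```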
a/kosmos-g/fairseq/fairseq/tasks/translation_from_pretrained_xlm.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass -from fairseq.data.legacy.masked_lm_dictionary import MaskedLMDictionary -from fairseq.tasks.translation import TranslationConfig, TranslationTask - -from . import register_task - - -@dataclass -class TranslationFromPretrainedXLMConfig(TranslationConfig): - pass - - -@register_task( - "translation_from_pretrained_xlm", dataclass=TranslationFromPretrainedXLMConfig -) -class TranslationFromPretrainedXLMTask(TranslationTask): - """ - Same as TranslationTask except use the MaskedLMDictionary class so that - we can load data that was binarized with the MaskedLMDictionary class. - - This task should be used for the entire training pipeline when we want to - train an NMT model from a pretrained XLM checkpoint: binarizing NMT data, - training NMT with the pretrained XLM checkpoint, and subsequent evaluation - of that trained model. - """ - - @classmethod - def load_dictionary(cls, filename): - """Load the masked LM dictionary from the filename - - Args: - filename (str): the filename - """ - return MaskedLMDictionary.load(filename) diff --git a/kosmos-g/fairseq/fairseq/tasks/translation_lev.py b/kosmos-g/fairseq/fairseq/tasks/translation_lev.py deleted file mode 100644 index b45fecd1f..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/translation_lev.py +++ /dev/null @@ -1,195 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass, field -import torch -from fairseq import utils -from fairseq.data import LanguagePairDataset -from fairseq.dataclass import ChoiceEnum -from fairseq.tasks import register_task -from fairseq.tasks.translation import ( - TranslationConfig, - TranslationTask, - load_langpair_dataset, -) -from fairseq.utils import new_arange - - -NOISE_CHOICES = ChoiceEnum(["random_delete", "random_mask", "no_noise", "full_mask"]) - - -@dataclass -class TranslationLevenshteinConfig(TranslationConfig): - noise: NOISE_CHOICES = field( - default="random_delete", - metadata={"help": "type of noise"}, - ) - - -@register_task("translation_lev", dataclass=TranslationLevenshteinConfig) -class TranslationLevenshteinTask(TranslationTask): - """ - Translation (Sequence Generation) task for Levenshtein Transformer - See `"Levenshtein Transformer" <https://arxiv.org/abs/1905.11006>`_. - """ - - cfg: TranslationLevenshteinConfig - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. 
- - Args: - split (str): name of the split (e.g., train, valid, test) - """ - paths = utils.split_paths(self.cfg.data) - assert len(paths) > 0 - data_path = paths[(epoch - 1) % len(paths)] - - # infer langcode - src, tgt = self.cfg.source_lang, self.cfg.target_lang - - self.datasets[split] = load_langpair_dataset( - data_path, - split, - src, - self.src_dict, - tgt, - self.tgt_dict, - combine=combine, - dataset_impl=self.cfg.dataset_impl, - upsample_primary=self.cfg.upsample_primary, - left_pad_source=self.cfg.left_pad_source, - left_pad_target=self.cfg.left_pad_target, - max_source_positions=self.cfg.max_source_positions, - max_target_positions=self.cfg.max_target_positions, - prepend_bos=True, - ) - - def inject_noise(self, target_tokens): - def _random_delete(target_tokens): - pad = self.tgt_dict.pad() - bos = self.tgt_dict.bos() - eos = self.tgt_dict.eos() - - max_len = target_tokens.size(1) - target_mask = target_tokens.eq(pad) - target_score = target_tokens.clone().float().uniform_() - target_score.masked_fill_( - target_tokens.eq(bos) | target_tokens.eq(eos), 0.0 - ) - target_score.masked_fill_(target_mask, 1) - target_score, target_rank = target_score.sort(1) - target_length = target_mask.size(1) - target_mask.float().sum( - 1, keepdim=True - ) - - # do not delete <bos> and <eos> (we assign 0 score for them) - target_cutoff = ( - 2 - + ( - (target_length - 2) - * target_score.new_zeros(target_score.size(0), 1).uniform_() - ).long() - ) - target_cutoff = target_score.sort(1)[1] >= target_cutoff - - prev_target_tokens = ( - target_tokens.gather(1, target_rank) - .masked_fill_(target_cutoff, pad) - .gather(1, target_rank.masked_fill_(target_cutoff, max_len).sort(1)[1]) - ) - prev_target_tokens = prev_target_tokens[ - :, : prev_target_tokens.ne(pad).sum(1).max() - ] - - return prev_target_tokens - - def _random_mask(target_tokens): - pad = self.tgt_dict.pad() - bos = self.tgt_dict.bos() - eos = self.tgt_dict.eos() - unk = self.tgt_dict.unk() - - target_masks = ( - target_tokens.ne(pad) & target_tokens.ne(bos) & target_tokens.ne(eos) - ) - target_score = target_tokens.clone().float().uniform_() - target_score.masked_fill_(~target_masks, 2.0) - target_length = target_masks.sum(1).float() - target_length = target_length * target_length.clone().uniform_() - target_length = target_length + 1 # make sure to mask at least one token. 
- - _, target_rank = target_score.sort(1) - target_cutoff = new_arange(target_rank) < target_length[:, None].long() - prev_target_tokens = target_tokens.masked_fill( - target_cutoff.scatter(1, target_rank, target_cutoff), unk - ) - return prev_target_tokens - - def _full_mask(target_tokens): - pad = self.tgt_dict.pad() - bos = self.tgt_dict.bos() - eos = self.tgt_dict.eos() - unk = self.tgt_dict.unk() - - target_mask = ( - target_tokens.eq(bos) | target_tokens.eq(eos) | target_tokens.eq(pad) - ) - return target_tokens.masked_fill(~target_mask, unk) - - if self.cfg.noise == "random_delete": - return _random_delete(target_tokens) - elif self.cfg.noise == "random_mask": - return _random_mask(target_tokens) - elif self.cfg.noise == "full_mask": - return _full_mask(target_tokens) - elif self.cfg.noise == "no_noise": - return target_tokens - else: - raise NotImplementedError - - def build_generator(self, models, args, **unused): - # add models input to match the API for SequenceGenerator - from fairseq.iterative_refinement_generator import IterativeRefinementGenerator - - return IterativeRefinementGenerator( - self.target_dictionary, - eos_penalty=getattr(args, "iter_decode_eos_penalty", 0.0), - max_iter=getattr(args, "iter_decode_max_iter", 10), - beam_size=getattr(args, "iter_decode_with_beam", 1), - reranking=getattr(args, "iter_decode_with_external_reranker", False), - decoding_format=getattr(args, "decoding_format", None), - adaptive=not getattr(args, "iter_decode_force_max_iter", False), - retain_history=getattr(args, "retain_iter_history", False), - ) - - def build_dataset_for_inference(self, src_tokens, src_lengths, constraints=None): - if constraints is not None: - # Though see Susanto et al. (ACL 2020): https://www.aclweb.org/anthology/2020.acl-main.325/ - raise NotImplementedError( - "Constrained decoding with the translation_lev task is not supported" - ) - - return LanguagePairDataset( - src_tokens, src_lengths, self.source_dictionary, append_bos=True - ) - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False - ): - model.train() - sample["prev_target"] = self.inject_noise(sample["target"]) - loss, sample_size, logging_output = criterion(model, sample) - if ignore_grad: - loss *= 0 - optimizer.backward(loss) - return loss, sample_size, logging_output - - def valid_step(self, sample, model, criterion): - model.eval() - with torch.no_grad(): - sample["prev_target"] = self.inject_noise(sample["target"]) - loss, sample_size, logging_output = criterion(model, sample) - return loss, sample_size, logging_output diff --git a/kosmos-g/fairseq/fairseq/tasks/translation_multi_simple_epoch.py b/kosmos-g/fairseq/fairseq/tasks/translation_multi_simple_epoch.py deleted file mode 100644 index 5db36a7c7..000000000 --- a/kosmos-g/fairseq/fairseq/tasks/translation_multi_simple_epoch.py +++ /dev/null @@ -1,441 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
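The three noise modes above produce the decoder input `prev_target` for Levenshtein training. The cheapest one, `_full_mask`, keeps only special tokens and replaces everything else with `<unk>`; a toy replay of that transformation (the token ids are made up):

```python
import torch

pad, bos, eos, unk = 1, 0, 2, 3              # hypothetical special-token ids
target = torch.tensor([[0, 7, 8, 9, 2, 1]])  # <bos> w1 w2 w3 <eos> <pad>

special = target.eq(bos) | target.eq(eos) | target.eq(pad)
print(target.masked_fill(~special, unk))     # tensor([[0, 3, 3, 3, 2, 1]])
```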
- -import datetime -import logging -import time - -import torch -from fairseq.data import ( - FairseqDataset, - LanguagePairDataset, - ListDataset, - data_utils, - iterators, -) -from fairseq.data.multilingual.multilingual_data_manager import ( - MultilingualDatasetManager, -) -from fairseq.data.multilingual.sampling_method import SamplingMethod -from fairseq.tasks import LegacyFairseqTask, register_task -from fairseq.utils import FileContentsAction - - -### -def get_time_gap(s, e): - return ( - datetime.datetime.fromtimestamp(e) - datetime.datetime.fromtimestamp(s) - ).__str__() - - -### - - -logger = logging.getLogger(__name__) - - -@register_task("translation_multi_simple_epoch") -class TranslationMultiSimpleEpochTask(LegacyFairseqTask): - """ - Translate from one (source) language to another (target) language. - - Args: - langs (List[str]): a list of languages that are being supported - dicts (Dict[str, fairseq.data.Dictionary]): mapping from supported languages to their dictionaries - training (bool): whether the task should be configured for training or not - - .. note:: - - The translation task is compatible with :mod:`fairseq-train`, - :mod:`fairseq-generate` and :mod:`fairseq-interactive`. - - The translation task provides the following additional command-line - arguments: - - .. argparse:: - :ref: fairseq.tasks.translation_parser - :prog: - """ - - @staticmethod - def add_args(parser): - """Add task-specific arguments to the parser.""" - # fmt: off - parser.add_argument('-s', '--source-lang', default=None, metavar='SRC', - help='inference source language') - parser.add_argument('-t', '--target-lang', default=None, metavar='TARGET', - help='inference target language') - parser.add_argument('--lang-pairs', default=None, metavar='PAIRS', - help='comma-separated list of language pairs (in training order): en-de,en-fr,de-fr', - action=FileContentsAction) - parser.add_argument('--keep-inference-langtok', action='store_true', - help='keep language tokens in inference output (e.g. for analysis or debugging)') - - SamplingMethod.add_arguments(parser) - MultilingualDatasetManager.add_args(parser) - # fmt: on - - def __init__(self, args, langs, dicts, training): - super().__init__(args) - self.langs = langs - self.dicts = dicts - self.training = training - if training: - self.lang_pairs = args.lang_pairs - else: - self.lang_pairs = ["{}-{}".format(args.source_lang, args.target_lang)] - # eval_lang_pairs for multilingual translation is usually all of the - # lang_pairs. However for other multitask settings or when we want to - # optimize for certain languages we want to use a different subset. Thus - # the eval_lang_pairs class variable is provided for classes that extend - # this class. - self.eval_lang_pairs = self.lang_pairs - # model_lang_pairs will be used to build encoder-decoder model pairs in - # models.build_model(). 
This allows multitask type of sub-class can - # build models other than the input lang_pairs - self.model_lang_pairs = self.lang_pairs - self.source_langs = [d.split("-")[0] for d in self.lang_pairs] - self.target_langs = [d.split("-")[1] for d in self.lang_pairs] - self.check_dicts(self.dicts, self.source_langs, self.target_langs) - - self.sampling_method = SamplingMethod.build_sampler(args, self) - self.data_manager = MultilingualDatasetManager.setup_data_manager( - args, self.lang_pairs, langs, dicts, self.sampling_method - ) - - def check_dicts(self, dicts, source_langs, target_langs): - if self.args.source_dict is not None or self.args.target_dict is not None: - # no need to check whether the source side and target side are sharing dictionaries - return - src_dict = dicts[source_langs[0]] - tgt_dict = dicts[target_langs[0]] - for src_lang in source_langs: - assert ( - src_dict == dicts[src_lang] - ), "Diffrent dictionary are specified for different source languages; " - "TranslationMultiSimpleEpochTask only supports one shared dictionary across all source languages" - for tgt_lang in target_langs: - assert ( - tgt_dict == dicts[tgt_lang] - ), "Diffrent dictionary are specified for different target languages; " - "TranslationMultiSimpleEpochTask only supports one shared dictionary across all target languages" - - @classmethod - def setup_task(cls, args, **kwargs): - langs, dicts, training = MultilingualDatasetManager.prepare( - cls.load_dictionary, args, **kwargs - ) - return cls(args, langs, dicts, training) - - def has_sharded_data(self, split): - return self.data_manager.has_sharded_data(split) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. - - Args: - split (str): name of the split (e.g., train, valid, test) - """ - if split in self.datasets: - dataset = self.datasets[split] - if self.has_sharded_data(split): - if self.args.virtual_epoch_size is not None: - if dataset.load_next_shard: - shard_epoch = dataset.shard_epoch - else: - # no need to load next shard so skip loading - # also this avoid always loading from beginning of the data - return - else: - shard_epoch = epoch - else: - # estimate the shard epoch from virtual data size and virtual epoch size - shard_epoch = self.data_manager.estimate_global_pass_epoch(epoch) - logger.info(f"loading data for {split} epoch={epoch}/{shard_epoch}") - logger.info(f"mem usage: {data_utils.get_mem_usage()}") - if split in self.datasets: - del self.datasets[split] - logger.info("old dataset deleted manually") - logger.info(f"mem usage: {data_utils.get_mem_usage()}") - self.datasets[split] = self.data_manager.load_dataset( - split, - self.training, - epoch=epoch, - combine=combine, - shard_epoch=shard_epoch, - **kwargs, - ) - - def build_dataset_for_inference(self, src_tokens, src_lengths, constraints=None): - if constraints is not None: - raise NotImplementedError( - "Constrained decoding with the multilingual_translation task is not supported" - ) - - src_data = ListDataset(src_tokens, src_lengths) - dataset = LanguagePairDataset(src_data, src_lengths, self.source_dictionary) - src_langtok_spec, tgt_langtok_spec = self.args.langtoks["main"] - if self.args.lang_tok_replacing_bos_eos: - dataset = self.data_manager.alter_dataset_langtok( - dataset, - src_eos=self.source_dictionary.eos(), - src_lang=self.args.source_lang, - tgt_eos=self.target_dictionary.eos(), - tgt_lang=self.args.target_lang, - src_langtok_spec=src_langtok_spec, - tgt_langtok_spec=tgt_langtok_spec, - ) - else: - dataset.src 
= self.data_manager.src_dataset_tranform_func( - self.args.source_lang, - self.args.target_lang, - dataset=dataset.src, - spec=src_langtok_spec, - ) - return dataset - - def build_generator( - self, - models, - args, - seq_gen_cls=None, - extra_gen_cls_kwargs=None, - ): - if not getattr(args, "keep_inference_langtok", False): - _, tgt_langtok_spec = self.args.langtoks["main"] - if tgt_langtok_spec: - tgt_lang_tok = self.data_manager.get_decoder_langtok( - self.args.target_lang, tgt_langtok_spec - ) - extra_gen_cls_kwargs = extra_gen_cls_kwargs or {} - extra_gen_cls_kwargs["symbols_to_strip_from_output"] = {tgt_lang_tok} - - return super().build_generator( - models, args, seq_gen_cls=None, extra_gen_cls_kwargs=extra_gen_cls_kwargs - ) - - def build_model(self, args, from_checkpoint=False): - return super().build_model(args, from_checkpoint) - - def valid_step(self, sample, model, criterion): - loss, sample_size, logging_output = super().valid_step(sample, model, criterion) - return loss, sample_size, logging_output - - def inference_step( - self, generator, models, sample, prefix_tokens=None, constraints=None - ): - with torch.no_grad(): - _, tgt_langtok_spec = self.args.langtoks["main"] - if not self.args.lang_tok_replacing_bos_eos: - if prefix_tokens is None and tgt_langtok_spec: - tgt_lang_tok = self.data_manager.get_decoder_langtok( - self.args.target_lang, tgt_langtok_spec - ) - src_tokens = sample["net_input"]["src_tokens"] - bsz = src_tokens.size(0) - prefix_tokens = ( - torch.LongTensor([[tgt_lang_tok]]).expand(bsz, 1).to(src_tokens) - ) - return generator.generate( - models, - sample, - prefix_tokens=prefix_tokens, - constraints=constraints, - ) - else: - return generator.generate( - models, - sample, - prefix_tokens=prefix_tokens, - bos_token=self.data_manager.get_decoder_langtok( - self.args.target_lang, tgt_langtok_spec - ) - if tgt_langtok_spec - else self.target_dictionary.eos(), - ) - - def reduce_metrics(self, logging_outputs, criterion): - super().reduce_metrics(logging_outputs, criterion) - - def max_positions(self): - """Return the max sentence length allowed by the task.""" - return (self.args.max_source_positions, self.args.max_target_positions) - - @property - def source_dictionary(self): - return self.data_manager.get_source_dictionary(self.source_langs[0]) - - @property - def target_dictionary(self): - return self.data_manager.get_target_dictionary(self.target_langs[0]) - - def create_batch_sampler_func( - self, - max_positions, - ignore_invalid_inputs, - max_tokens, - max_sentences, - required_batch_size_multiple=1, - seed=1, - ): - def construct_batch_sampler(dataset, epoch): - splits = [ - s for s, _ in self.datasets.items() if self.datasets[s] == dataset - ] - split = splits[0] if len(splits) > 0 else None - # NEW implementation - if epoch is not None: - # initialize the dataset with the correct starting epoch - dataset.set_epoch(epoch) - - # get indices ordered by example size - start_time = time.time() - logger.info(f"start batch sampler: mem usage: {data_utils.get_mem_usage()}") - - with data_utils.numpy_seed(seed): - indices = dataset.ordered_indices() - logger.info( - f"[{split}] @batch_sampler order indices time: {get_time_gap(start_time, time.time())}" - ) - logger.info(f"mem usage: {data_utils.get_mem_usage()}") - - # filter examples that are too large - if max_positions is not None: - my_time = time.time() - indices = self.filter_indices_by_size( - indices, dataset, max_positions, ignore_invalid_inputs - ) - logger.info( - f"[{split}] @batch_sampler 
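When `lang_tok_replacing_bos_eos` is off and no prefix is supplied, `inference_step` above forces decoding to start with the target-language token by tiling it across the batch. A minimal replay of that tensor manipulation (the token id and dummy batch are hypothetical):

```python
import torch

tgt_lang_tok = 250004                             # e.g. id of "[de]", made up
src_tokens = torch.zeros(3, 7, dtype=torch.long)  # (bsz, src_len) dummy batch

prefix_tokens = (
    torch.LongTensor([[tgt_lang_tok]]).expand(src_tokens.size(0), 1).to(src_tokens)
)
print(prefix_tokens)  # shape (3, 1): one language token per sentence
```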
filter_by_size time: {get_time_gap(my_time, time.time())}" - ) - logger.info(f"mem usage: {data_utils.get_mem_usage()}") - - # create mini-batches with given size constraints - my_time = time.time() - batch_sampler = dataset.batch_by_size( - indices, - max_tokens=max_tokens, - max_sentences=max_sentences, - required_batch_size_multiple=required_batch_size_multiple, - ) - - logger.info( - f"[{split}] @batch_sampler batch_by_size time: {get_time_gap(my_time, time.time())}" - ) - logger.info( - f"[{split}] per epoch batch_sampler set-up time: {get_time_gap(start_time, time.time())}" - ) - logger.info(f"mem usage: {data_utils.get_mem_usage()}") - - return batch_sampler - - return construct_batch_sampler - - # we need to override get_batch_iterator because we want to reset the epoch iterator each time - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - data_buffer_size=0, - disable_iterator_cache=False, - skip_remainder_batch=False, - grouped_shuffling=False, - update_epoch_batch_itr=False, - ): - """ - Get an iterator that yields batches of data from the given dataset. - - Args: - dataset (~fairseq.data.FairseqDataset): dataset to batch - max_tokens (int, optional): max number of tokens in each batch - (default: None). - max_sentences (int, optional): max number of sentences in each - batch (default: None). - max_positions (optional): max sentence length supported by the - model (default: None). - ignore_invalid_inputs (bool, optional): don't raise Exception for - sentences that are too long (default: False). - required_batch_size_multiple (int, optional): require batch size to - be a multiple of N (default: 1). - seed (int, optional): seed for random number generator for - reproducibility (default: 1). - num_shards (int, optional): shard the data iterator into N - shards (default: 1). - shard_id (int, optional): which shard of the data iterator to - return (default: 0). - num_workers (int, optional): how many subprocesses to use for data - loading. 0 means the data will be loaded in the main process - (default: 0). - epoch (int, optional): the epoch to start the iterator from - (default: 0). - data_buffer_size (int, optional): number of batches to - preload (default: 0). - disable_iterator_cache (bool, optional): don't cache the - EpochBatchIterator (ignores `FairseqTask::can_reuse_epoch_itr`) - (default: False). - grouped_shuffling (bool, optional): group batches with each groups - containing num_shards batches and shuffle groups. Reduces difference - between sequence lengths among workers for batches sorted by length. 
- update_epoch_batch_itr (bool optional): if true then donot use the cached - batch iterator for the epoch - - Returns: - ~fairseq.iterators.EpochBatchIterator: a batched iterator over the - given dataset split - """ - # initialize the dataset with the correct starting epoch - assert isinstance(dataset, FairseqDataset) - if dataset in self.dataset_to_epoch_iter: - return self.dataset_to_epoch_iter[dataset] - if self.args.sampling_method == "RoundRobin": - batch_iter = super().get_batch_iterator( - dataset, - max_tokens=max_tokens, - max_sentences=max_sentences, - max_positions=max_positions, - ignore_invalid_inputs=ignore_invalid_inputs, - required_batch_size_multiple=required_batch_size_multiple, - seed=seed, - num_shards=num_shards, - shard_id=shard_id, - num_workers=num_workers, - epoch=epoch, - data_buffer_size=data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - skip_remainder_batch=skip_remainder_batch, - update_epoch_batch_itr=update_epoch_batch_itr, - ) - self.dataset_to_epoch_iter[dataset] = batch_iter - return batch_iter - - construct_batch_sampler = self.create_batch_sampler_func( - max_positions, - ignore_invalid_inputs, - max_tokens, - max_sentences, - required_batch_size_multiple=required_batch_size_multiple, - seed=seed, - ) - - epoch_iter = iterators.EpochBatchIterator( - dataset=dataset, - collate_fn=dataset.collater, - batch_sampler=construct_batch_sampler, - seed=seed, - num_shards=num_shards, - shard_id=shard_id, - num_workers=num_workers, - epoch=epoch, - ) - return epoch_iter diff --git a/kosmos-g/fairseq/fairseq/token_generation_constraints.py b/kosmos-g/fairseq/fairseq/token_generation_constraints.py deleted file mode 100644 index e708dc51b..000000000 --- a/kosmos-g/fairseq/fairseq/token_generation_constraints.py +++ /dev/null @@ -1,506 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -"""Implements tracking of constraints for a beam item. - -A list of constraints is given as a list of one or more token -sequences, each of length at least one token. For example, for an input sentence - -> Die maschinelle Übersetzung ist schwer zu kontrollieren. - -We could have the constraints: -* to influence -* hard - -There are two implementations: -* OrderedConstraintState: Tracks progress through an ordered list of multitoken constraints. -* UnorderedConstraintState: Tracks progress through an unordered list of multitoken constraints. - -The difference is that in the first, the constraints are assumed to be -in order; the algorithm will permit zero or more tokens between them. -In the second, the constraints are not ordered, so many orderings will -be explored. - -The same sequence can be present any number of times, and will appear -that many times in the output. -""" - -from collections import Counter -from typing import List, Optional, Set, Tuple - -import torch - - -class ConstraintState: - def __init__(self): - pass - - -def pack_constraints(batch_constraints: List[List[torch.Tensor]]) -> torch.Tensor: - """Takes a list of list of constraints in tensor form (a list of - tensor constraints for each sentence) and transforms it into a - packed Tensor. 
For example, here is a batch of size 3 with 3, 0, - and 1 constraints: - - [ [ [3 1 2], [3], [4 5 6 7], ] - [], - [ [1 8 9 10 1 4 11 12], ] - ] - - Its corresponding packed structure is: - - [ [ 3 3 1 2 0 3 0 4 5 6 7 0], - [ 0 0 0 0 0 0 0 0 0 0 0 0], - [ 1 1 8 9 10 1 4 11 12 0 0 0] ] - - The packed tensor has shape (batch size, maxlen), where - maxlen is defined below. Each row contains concatenated - constraint tokens for that sentence, with 0 appended after - each constraint. The first item in each row is the number - of constraints for that sentence. So maxlen is the maximum - of - - (number of constraints) + (sum length of constraints) + 1. - - across all sentences in the batch. - """ - # The maximum word length of concatenated constraints for any sentence - max_constraints_len = 1 - for sentence_constraints in batch_constraints: - if len(sentence_constraints): - # number of constraints, plus sum of constrain lens, plus a zero after each - constraints_len = ( - 1 - + sum([c.size(0) for c in sentence_constraints]) - + len(sentence_constraints) - ) - max_constraints_len = max(max_constraints_len, constraints_len) - - batch_size = len(batch_constraints) - constraints_tensor = torch.zeros((batch_size, max_constraints_len)).long() - for i, sentence_constraints in enumerate(batch_constraints): - constraints_tensor[i, 0] = len(sentence_constraints) - offset = 1 - for j, constraint in enumerate(sentence_constraints): - this_len = constraint.size(0) - constraints_tensor[i, offset : offset + this_len] = constraint - offset += this_len + 1 - - return constraints_tensor.long() - - -def unpack_constraints(constraint_tensor: torch.Tensor) -> List[torch.Tensor]: - """ - Transforms *one row* of a packed constraint tensor (e.g., for one - sentence in the batch) into a list of constraint tensors. - """ - constraint_list = [] - num_constraints = constraint_tensor[0] - constraints = constraint_tensor.tolist() - offset = 1 - for i in range(num_constraints): - where = constraints.index(0, offset) - constraint_list.append(constraint_tensor[offset:where]) - offset = where + 1 - - return constraint_list - - -class ConstraintNode: - """ - Represents a node in a trie managing unordered constraints. - """ - - def __init__(self, token: int = None, parent=None): - # The token associate with this node (None for the root) - self.token = int(token) if token is not None else None - # The parent (None at the root) - self.parent = parent - # Whether this node is a completed constraint - self.terminal = 0 - # List of child nodes - self.children = {} - - # The cumulative number of constraints from this point in the - # trie forward - self.num_constraints = 0 - - @property - def id(self): - return self.token - - def __str__(self): - term = self.terminal != 0 - return f"[{self.token}].{term}#{self.num_constraints}" - - def __getitem__(self, key: int): - return self.children.get(key, None) - - def next_tokens(self) -> Set[int]: - """The set of child labels.""" - return set(self.children.keys()) - - @staticmethod - def create(constraints: List[List[int]]): - root = ConstraintNode() - for sequence in constraints: - root.add_sequence(sequence) - - return root - - @staticmethod - def print_graph(node: "ConstraintNode"): - if len(node.children) == 0: - return str(node) - else: - s = f"({node}" - for child in node.children.values(): - s += " " + ConstraintNode.print_graph(child) - s += ")" - return s - - def token_counts(self) -> Counter: - """Returns a counter of the number of times each token is used - in a constraint. 
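A round trip through the packing scheme documented above, reusing the docstring's example batch (this sketch assumes `pack_constraints` and `unpack_constraints` from this module are importable):

```python
import torch

batch = [
    [torch.tensor([3, 1, 2]), torch.tensor([3]), torch.tensor([4, 5, 6, 7])],
    [],
    [torch.tensor([1, 8, 9, 10, 1, 4, 11, 12])],
]
packed = pack_constraints(batch)
print(packed[0].tolist())  # [3, 3, 1, 2, 0, 3, 0, 4, 5, 6, 7, 0]
print([t.tolist() for t in unpack_constraints(packed[0])])
# [[3, 1, 2], [3], [4, 5, 6, 7]]
```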
- """ - token_counts = Counter() - kids = list(self.children.values()) - while len(kids) > 0: - kid = kids.pop() - token_counts[kid.id] += kid.num_constraints - kids += list(kid.children.values()) - - return token_counts - - def tokens(self) -> Set[int]: - """Returns the set of tokens in constraints.""" - return set(self.token_counts().keys()) - - def add_sequence(self, sequence: List[int]): - """Adds a constraint, represented as a list of integers, to - the trie.""" - assert len(sequence) > 0 - - token = int(sequence[0]) - if token not in self.children: - self.children[token] = ConstraintNode(token, parent=self) - - node = self.children[token] - if len(sequence) == 1: - node.terminal += 1 - node.num_constraints += 1 - parent = node.parent - while parent is not None: - parent.num_constraints += 1 - parent = parent.parent - else: - node.add_sequence(sequence[1:]) - - -class UnorderedConstraintState(ConstraintState): - """ - Records progress through the set of constraints for each item in the beam - using a trie. - """ - - def __init__(self, node: ConstraintNode, copy_from: "ConstraintState" = None): - self.node = node - - if copy_from is None: - # The root node - self.root = node - # The set of states in the graph that have been completed - self.completed = Counter() - # The... - self.generated = Counter() - # The list of tokens we need to generate - self.needed_tokens = self.root.tokens() - else: - self.completed = Counter(copy_from.completed) - self.generated = Counter(copy_from.generated) - self.root = copy_from.root - - # Mark the node as generated - if self.node != self.root: - self.generated[node] += 1 - - @staticmethod - def create(constraint_tensor: torch.Tensor): - constraint_list = unpack_constraints(constraint_tensor) - constraint_trie_root = ConstraintNode.create(constraint_list) - return UnorderedConstraintState(constraint_trie_root) - - def __str__(self): - gen_str = ",".join([str(node) for node in self.generated]) - return f"{self.name}/{self.bank}({gen_str})x{self.num_completed}" - - def __copy__(self): - copied_state = UnorderedConstraintState(self.node, copy_from=self) - return copied_state - - def copy(self): - return self.__copy__() - - @property - def name(self): - if self.node.id is None: - return "ROOT" - else: - return str(self.node.id) - - @property - def is_root(self): - return self.node == self.root - - @property - def bank(self): - return sum(self.generated.values()) - - @property - def num_completed(self): - """The number of constraints (not constraint tokens) that are completed. - In addition to the already-completed states, we need to account for the - current state, which might get marked as completed when another token - is generated. - """ - in_final = self.node.terminal and self.completed[self.node] < self.node.terminal - return sum(self.completed.values()) + in_final - - @property - def finished(self): - return self.root.num_constraints - self.num_completed == 0 - - @property - def token_counts(self): - return self.root.token_counts() - - @property - def tokens(self): - return self.root.tokens() - - @property - def num_constraint_tokens(self): - return sum(self.token_counts.values()) - - def next_tokens(self) -> Set[int]: - """Returns the list of tokens that could come next. 
- These are (a) all tokens extending the root state and, for - non-root states, additionally all tokens extending the current - state.""" - - if self.node != self.root: - return self.root.next_tokens().union(self.node.next_tokens()) - else: - return self.root.next_tokens() - - def advance(self, token: int): - """Reads in a token and advances the state. Here's how it works. - - We can advance to the next state if: - - there is a matching child - - its path isn't blocked - - A path is blocked when all constraints that are descendants of - that node have already been generated, in the current state. - - If we are not able to advance from the current state, we "fall - off the graph" and return to the root state. There, we again - try to advance, checking the same criteria. - - In any case, when falling off the graph, we need to do some - bookkeeping. We: - - check whether any constraints were met (all prefixes of - current state) - - if one is found, mark it as completed - - adjust visited nodes accordingly - """ - token = int(token) - - next_state = None - child = self.node[token] - if child is not None and self.generated[child] < child.num_constraints: - next_state = UnorderedConstraintState(child, copy_from=self) - - def rewind(): - """If we're mid-trie and an "illegal" token is chosen next, we need - to reset our state to the root state. However, along the way, we need - to check whether a prefix of the current trie state represents a state - we could mark as completed. - """ - node = self.node - while node != self.root: - if node.terminal and self.completed[node] < node.terminal: - next_state.completed[node] += 1 - return - - next_state.generated[node] -= 1 - node = node.parent - - # Fall off the graph, check the root - if next_state is None and token in self.root.next_tokens(): - child = self.root[token] - # We can only traverse this edge if it's not saturated - if self.generated[child] < child.num_constraints: - next_state = UnorderedConstraintState(child, copy_from=self) - else: - next_state = UnorderedConstraintState(self.root, copy_from=self) - - # Rewind - rewind() - - elif next_state is None: - next_state = UnorderedConstraintState(self.root, copy_from=self) - # Rewind - rewind() - - return next_state - - -class ConstraintSequence: - def __init__(self, sequences: List[List[int]]): - """Represents a set of possibly multitoken constraints by - concatenating them and internally recording the end points. - """ - self.sequences = [] - self.endpoints = [] - self.num_tokens = 0 - self.tokens = set() - for sequence in sequences: - for token in sequence: - self.tokens.add(token) - self.num_tokens += len(sequence) - self.endpoints += [False for x in range(len(sequence) - 1)] + [True] - self.sequences += sequence - - def __getitem__(self, key: int): - return self.sequences[key] - - def __len__(self): - return len(self.sequences) - - def __str__(self): - return str(self.sequences) - - -class OrderedConstraintState(ConstraintState): - """ - Records progress through the set of linear nonbranching constraints with gaps. 
- """ - - def __init__(self, sequence: ConstraintSequence, state: int = -1): - self.sequence = sequence - self.state = state - - @staticmethod - def create(constraint_tensor: torch.Tensor): - constraint_list = unpack_constraints(constraint_tensor) - return OrderedConstraintState(ConstraintSequence(constraint_list), -1) - - def __str__(self): - return f"{self.state}/{self.bank}x{self.num_completed}" - - def __copy__(self): - return OrderedConstraintState(self.sequence, self.state) - - def copy(self): - return self.__copy__() - - @property - def num_completed(self): - if self.state == -1: - return 0 - count = len( - list(filter(lambda x: x, self.sequence.endpoints[0 : self.state + 1])) - ) - return count - - @property - def is_root(self): - return self.state == -1 - - @property - def name(self): - if self.state == -1: - return "ROOT" - else: - return str(self.sequence[self.state]) - - @property - def bank(self) -> int: - return self.state + 1 - - @property - def finished(self): - return self.state + 1 == len(self.sequence) - - @property - def token_counts(self): - return self.sequence.token_counts() - - @property - def tokens(self): - return self.sequence.tokens - - @property - def num_constraint_tokens(self): - return sum(self.token_counts.values()) - - def next_tokens(self) -> Set[int]: - """Returns the list of tokens that could come next. - These are (a) all tokens extending the root state and, for - non-root states, additionally all tokens extending the current - state.""" - - tokens = set() - if self.state > 0: - tokens.add(self.sequence[0]) - if not self.finished: - tokens.add(self.sequence[self.state + 1]) - return tokens - - def advance(self, token: int): - """Reads in a token and advances the state. Here's how it works. - - We can advance to the next state if: - - there is a matching child - - its path isn't blocked - - A path is blocked when all constraints that are descendants of - that node have already been generated, in the current state. - - If we are not able to advance from the current state, we "fall - off the graph" and return to the root state. There, we again - try to advance, checking the same criteria. - - In any case, when falling off the graph, we need to do some - bookkeeping. We: - - check whether any constraints were met (all prefixes of - current state) - - if one is found, mark it as completed - - adjust visited nodes accordingly - """ - token = int(token) - # print(f"{self} ADVANCE({token}) {self.sequence} -> ", end="") - - if self.finished: - # Accept anything - next_state = self.copy() - - elif self.sequence[self.state + 1] == token: - # Advance to the next token - next_state = OrderedConstraintState(self.sequence, self.state + 1) - - elif self.sequence.endpoints[self.state]: - # Accept anything between constraints (*) - next_state = self.copy() - - elif token == self.sequence[0]: - # Start over having generated the first token - next_state = OrderedConstraintState(self.sequence, 0) - else: - # Start over from the root - next_state = OrderedConstraintState(self.sequence, -1) - - return next_state diff --git a/kosmos-g/fairseq/fairseq/tokenizer.py b/kosmos-g/fairseq/fairseq/tokenizer.py deleted file mode 100644 index 42131f7b1..000000000 --- a/kosmos-g/fairseq/fairseq/tokenizer.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import re - - -SPACE_NORMALIZER = re.compile(r"\s+") - - -def tokenize_line(line): - line = SPACE_NORMALIZER.sub(" ", line) - line = line.strip() - return line.split() diff --git a/kosmos-g/fairseq/fairseq/trainer.py b/kosmos-g/fairseq/fairseq/trainer.py deleted file mode 100644 index 8ee5f38d1..000000000 --- a/kosmos-g/fairseq/fairseq/trainer.py +++ /dev/null @@ -1,1593 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Train a network across multiple GPUs. -""" - -import contextlib -import logging -import os -import sys -import time -from argparse import Namespace -from itertools import chain -from typing import Any, Dict, List - -import torch -from fairseq import checkpoint_utils, models, optim, utils -from fairseq.dataclass.configs import FairseqConfig -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.distributed import utils as distributed_utils -from fairseq.file_io import PathManager -from fairseq.logging import meters, metrics -# from fairseq.models.ema import build_ema -from fairseq.nan_detector import NanDetector -from fairseq.optim import lr_scheduler -from fairseq.utils import safe_hasattr -from omegaconf import OmegaConf - -logger = logging.getLogger(__name__) - - -class Trainer(object): - """Main class for data parallel training. - - This class supports synchronous distributed data parallel training, - where multiple workers each have a full model replica and gradients - are accumulated across workers before each update. We use - :class:`~torch.nn.parallel.DistributedDataParallel` to handle - communication of the gradients across workers. - """ - - def __init__(self, cfg: FairseqConfig, task, model, criterion, quantizer=None): - - if isinstance(cfg, Namespace): - logger.warning( - "argparse.Namespace configuration is deprecated! 
Automatically converting to OmegaConf" - ) - cfg = convert_namespace_to_omegaconf(cfg) - - self.cfg = cfg - self.task = task - - # catalog shared parameters - shared_params = _catalog_shared_params(model) - self.tpu = cfg.common.tpu - self.cuda = torch.cuda.is_available() and not cfg.common.cpu and not self.tpu - if self.cuda: - self.device = torch.device("cuda") - elif self.tpu: - self.device = utils.get_tpu_device() - else: - self.device = torch.device("cpu") - - if self.is_fsdp: - import fairscale - - if self.cfg.common.bf16: - raise ValueError( - "FullyShardedDataParallel is not compatible with --bf16 or " - "--memory-efficient-bf16" - ) - if self.cfg.distributed_training.zero_sharding != "none": - raise ValueError( - "FullyShardedDataParallel is not compatible with --zero-sharding " - "option (it's already built in)" - ) - if ( - max(self.cfg.optimization.update_freq) > 1 - and fairscale.__version__ < "0.4.0" - ): - raise RuntimeError( - "Please update to fairscale 0.4.0 or newer when combining " - "--update-freq with FullyShardedDataParallel" - ) - else: - if ( - hasattr(self.cfg.distributed_training, "cpu_offload") - and self.cfg.distributed_training.cpu_offload - ): - raise ValueError("--cpu-offload requires --ddp-backend=fully_sharded") - - # copy model and criterion to current device/dtype - self._criterion = criterion - self._model = model - if not self.is_fsdp: - if cfg.common.fp16: - assert not cfg.common.amp, "Cannot use fp16 and AMP together" - self._criterion = self._criterion.half() - self._model = self._model.half() - elif cfg.common.bf16: - self._criterion = self._criterion.to(dtype=torch.bfloat16) - self._model = self._model.to(dtype=torch.bfloat16) - elif cfg.common.amp: - self._amp_retries = 0 - if ( - not cfg.distributed_training.pipeline_model_parallel - # the DistributedFairseqModel wrapper will handle moving to device, - # so only handle cases which don't use the wrapper - and not self.use_distributed_wrapper - ): - self._criterion = self._criterion.to(device=self.device) - self._model = self._model.to(device=self.device) - self.pipeline_model_parallel = cfg.distributed_training.pipeline_model_parallel - self.last_device = None - if self.cuda and self.pipeline_model_parallel: - self.last_device = torch.device( - cfg.distributed_training.pipeline_devices[-1] - ) - - # check that shared parameters are preserved after device transfer - for shared_param in shared_params: - ref = _get_module_by_path(self._model, shared_param[0]) - for path in shared_param[1:]: - logger.info( - "detected shared parameter: {} <- {}".format(shared_param[0], path) - ) - _set_module_by_path(self._model, path, ref) - - self._dummy_batch = None # indicates we don't have a dummy batch at first - self._lr_scheduler = None - self._num_updates = 0 - self._num_xla_compiles = 0 # for TPUs - self._optim_history = None - self._optimizer = None - self._warn_once = set() - self._wrapped_criterion = None - self._wrapped_model = None - self._ema = None - - # TODO(myleott): support tpu - if self.cuda and self.data_parallel_world_size > 1: - self._grad_norm_buf = torch.cuda.DoubleTensor(self.data_parallel_world_size) - else: - self._grad_norm_buf = None - - self.quantizer = quantizer - if self.quantizer is not None: - self.quantizer.set_trainer(self) - - # get detailed cuda environment - if self.cuda: - self.cuda_env = utils.CudaEnvironment() - if self.data_parallel_world_size > 1: - self.cuda_env_arr = distributed_utils.all_gather_list( - self.cuda_env, group=distributed_utils.get_global_group() - ) - 
else: - self.cuda_env_arr = [self.cuda_env] - if self.data_parallel_rank == 0: - utils.CudaEnvironment.pretty_print_cuda_env_list(self.cuda_env_arr) - else: - self.cuda_env = None - self.cuda_env_arr = None - - metrics.log_start_time("wall", priority=790, round=0) - - self._start_time = time.time() - self._previous_training_time = 0 - self._cumulative_training_time = None - - def reinitialize(self): - """Reinitialize the Trainer, typically after model params change.""" - self._lr_scheduler = None - self._optimizer = None - self._wrapped_criterion = None - self._wrapped_model = None - - @property - def data_parallel_world_size(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 1 - return distributed_utils.get_data_parallel_world_size() - - @property - def data_parallel_process_group(self): - return distributed_utils.get_data_parallel_group() - - @property - def data_parallel_rank(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 0 - return distributed_utils.get_data_parallel_rank() - - @property - def is_data_parallel_master(self): - # NOTE: this returns true for all model parallel replicas with data - # parallel rank 0 - return self.data_parallel_rank == 0 - - @property - def use_distributed_wrapper(self) -> bool: - return ( - self.data_parallel_world_size > 1 and not self.cfg.optimization.use_bmuf - ) or (self.is_fsdp and self.cfg.distributed_training.cpu_offload) - - @property - def should_save_checkpoint_on_current_rank(self) -> bool: - """Indicates whether to save checkpoints on the current DDP rank.""" - if ( - self.is_fsdp and self.cfg.distributed_training.use_sharded_state - ) or getattr(self.cfg.model, "base_layers", 0) > 0: - return True - else: - return self.is_data_parallel_master - - @property - def always_call_state_dict_during_save_checkpoint(self) -> bool: - if self.is_fsdp and not self.cfg.distributed_training.use_sharded_state: - # FSDP calls communication collective when consolidating checkpoints - return True - else: - return False - - @property - def checkpoint_suffix(self) -> str: - """Suffix to add to the checkpoint file name.""" - if self.is_fsdp and self.cfg.distributed_training.use_sharded_state: - return self.cfg.checkpoint.checkpoint_suffix + "-shard{0}".format( - self.data_parallel_rank - ) - else: - return self.cfg.checkpoint.checkpoint_suffix or "" - - @property - def criterion(self): - if self._wrapped_criterion is None: - if utils.has_parameters(self._criterion) and self.use_distributed_wrapper: - self._wrapped_criterion = models.DistributedFairseqModel( - self.cfg.distributed_training, - self._criterion, - process_group=self.data_parallel_process_group, - device=self.device, - ) - else: - self._wrapped_criterion = self._criterion - return self._wrapped_criterion - - @property - def model(self): - if self._wrapped_model is None: - if self.use_distributed_wrapper: - self._wrapped_model = models.DistributedFairseqModel( - self.cfg.distributed_training, - self._model, - process_group=self.data_parallel_process_group, - device=self.device, - ) - else: - self._wrapped_model = self._model - return self._wrapped_model - - @property - def ema(self): - if self._ema is None: - self._build_ema() - return self._ema - - def _build_ema(self): - if self.cfg.ema.store_ema: - self._ema = build_ema(self._model, self.cfg.ema, self.device) - logger.info("Exponential Moving Average Shadow Model is initialized.") - - @property - def optimizer(self): - if self._optimizer is None: - self._build_optimizer() - return 
self._optimizer - - @property - def lr_scheduler(self): - if self._lr_scheduler is None: - self._build_optimizer() # this will initialize self._lr_scheduler - return self._lr_scheduler - - def _build_optimizer(self): - params = list( - filter( - lambda p: p.requires_grad, - chain(self.model.parameters(), self.criterion.parameters()), - ) - ) - - if self.is_fsdp and self.cfg.common.fp16: - # FullyShardedDataParallel always uses MemoryEfficientFP16 wrapper, - # mostly for the grad scaling. But if we don't have the - # --memory-efficient-fp16 flag set, then we're effectively doing - # regular --fp16 and can allow the use of optimizers that would - # otherwise be unsupported by MemoryEfficientFP16Optimizer. - allow_unsupported = not self.cfg.common.memory_efficient_fp16 - self._optimizer = optim.MemoryEfficientFP16Optimizer.build_optimizer( - self.cfg, params, allow_unsupported=allow_unsupported - ) - elif self.cfg.common.fp16 or self.cfg.common.bf16 or self.cfg.common.amp: - if self.cuda and torch.cuda.get_device_capability(0)[0] < 7: - logger.info( - "NOTE: your device does NOT support faster training with --fp16 or --amp, " - "please switch to FP32 which is likely to be faster" - ) - if ( - self.cfg.common.memory_efficient_fp16 - or self.cfg.common.memory_efficient_bf16 - ): - self._optimizer = optim.MemoryEfficientFP16Optimizer.build_optimizer( - self.cfg, params - ) - elif self.cfg.common.amp: - self._optimizer = optim.AMPOptimizer.build_optimizer(self.cfg, params) - else: - self._optimizer = optim.FP16Optimizer.build_optimizer(self.cfg, params) - else: - if self.cuda and torch.cuda.get_device_capability(0)[0] >= 7: - logger.info( - "NOTE: your device may support faster training with --fp16 or --amp" - ) - self._optimizer = optim.build_optimizer(self.cfg.optimizer, params) - - if self.is_fsdp: - assert ( - not self.cfg.optimization.use_bmuf - ), "--ddp-backend=fully_sharded is not compatible with BMUF" - assert self._optimizer.supports_flat_params, ( - "--ddp-backend=fully_sharded is only compatible with pointwise " - "optimizers (e.g., Adam, AdamW, Adadelta, Adamax, SGD, etc.). " - "However, the sharding will result in slightly different results when " - "using non-pointwise optimizers (e.g., Adagrad, Adafactor, LAMB)" - ) - - if self.cfg.optimization.use_bmuf: - self._optimizer = optim.FairseqBMUF( - self.cfg.bmuf, - self._optimizer, - ) - - if self.cfg.distributed_training.zero_sharding == "os": - if ( - self.cfg.common.fp16 - and not self.cfg.common.memory_efficient_fp16 - and not self.cfg.common.memory_efficient_bf16 - ) and not self.cfg.common.fp16_no_flatten_grads: - raise ValueError( - "ZeRO is incomptabile with fp16 and flattened grads. " - "Please use --fp16-no-flatten-grads" - ) - else: - optim.shard_(self._optimizer, self.data_parallel_process_group) - - # We should initialize the learning rate scheduler immediately after - # building the optimizer, so that the initial learning rate is set. 
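Since `_build_optimizer` above interleaves several precision flags, a condensed sketch of just the selection logic may help. This is an illustrative reduction, not the actual implementation; the FSDP assertions, BMUF wrapping, and ZeRO sharding handled above are omitted.

```python
from types import SimpleNamespace as NS

# Condensed, illustrative reduction of the optimizer selection above.
def pick_optimizer_class(common, is_fsdp: bool) -> str:
    if is_fsdp and common.fp16:
        # FSDP relies on the memory-efficient wrapper for grad scaling
        return "MemoryEfficientFP16Optimizer"
    if common.fp16 or common.bf16 or common.amp:
        if common.memory_efficient_fp16 or common.memory_efficient_bf16:
            return "MemoryEfficientFP16Optimizer"
        if common.amp:
            return "AMPOptimizer"
        return "FP16Optimizer"
    return "the optimizer named in cfg.optimizer (full precision)"

common = NS(fp16=True, bf16=False, amp=False,
            memory_efficient_fp16=False, memory_efficient_bf16=False)
print(pick_optimizer_class(common, is_fsdp=False))  # FP16Optimizer
```

Whichever branch is taken, the learning-rate scheduler is built immediately afterwards (below) so that the initial learning rate is set.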
- self._lr_scheduler = lr_scheduler.build_lr_scheduler( - self.cfg.lr_scheduler, - self.optimizer, - ) - self._lr_scheduler.step_update(0) - - @property - def is_fsdp(self): - return self.cfg.distributed_training.ddp_backend == "fully_sharded" - - def consolidate_optimizer(self): - """For OSS, we need to consolidate the state dict.""" - if self.cfg.checkpoint.no_save_optimizer_state: - return - self._gathered_optim_state = None - if hasattr(self.optimizer.optimizer, "consolidate_state_dict"): - self.optimizer.optimizer.consolidate_state_dict() - elif self.is_fsdp and not self.model.use_sharded_state: - st = self.model.gather_full_optim_state_dict( - self.optimizer - ) # only returns on rank 0 - self._gathered_optim_state = st - - def state_dict(self): - state_dict = { - "args": None, # legacy - "cfg": ( - OmegaConf.to_container(self.cfg, resolve=True, enum_to_str=True) - if OmegaConf.is_config(self.cfg) - else self.cfg - ), - "model": self.model.state_dict(), - "criterion": ( - self.criterion.state_dict() - if utils.has_parameters(self.criterion) - else None - ), - "optimizer_history": (self._optim_history or []) - + [ - { - "criterion_name": self.get_criterion().__class__.__name__, - "optimizer_name": self.optimizer.__class__.__name__, - "lr_scheduler_state": self.lr_scheduler.state_dict(), - "num_updates": self.get_num_updates(), - } - ], - "task_state": self.task.state_dict() if self.task is not None else {}, - "extra_state": { - "metrics": metrics.state_dict(), - "previous_training_time": self.cumulative_training_time(), - }, - } - if self.cfg.ema.store_ema: - # Save EMA model state as extra state - state_dict["extra_state"]["ema"] = self.ema.get_model().state_dict() - if self.cfg.ema.ema_fp32: - # Save EMA params in fp32 - state_dict["extra_state"]["ema_fp32_params"] = self.ema.fp32_params - if not self.cfg.checkpoint.no_save_optimizer_state: - if self._gathered_optim_state is not None: - state_dict["last_optimizer_state"] = self._gathered_optim_state - self._gathered_optim_state = None - else: - state_dict["last_optimizer_state"] = self.optimizer.state_dict() - if self.is_fsdp: - # save meta data for recombining checkpoint upon loading - state_dict["fsdp_metadata"] = self.model.local_metadata_dict() - return state_dict - - def save_checkpoint(self, filename, extra_state): - """Save all training state in a checkpoint file.""" - - logger.info(f"Saving checkpoint to {os.path.abspath(filename)}") - # call state_dict on all ranks in case it needs internal communication - state_dict = utils.move_to_cpu(self.state_dict()) - state_dict["extra_state"].update(extra_state) - if self.should_save_checkpoint_on_current_rank: - checkpoint_utils.torch_persistent_save( - state_dict, - filename, - async_write=self.cfg.checkpoint.write_checkpoints_asynchronously, - ) - logger.info(f"Finished saving checkpoint to {os.path.abspath(filename)}") - - def load_checkpoint( - self, - filename, - reset_optimizer=False, - reset_lr_scheduler=False, - optimizer_overrides=None, - reset_meters=False, - ): - """ - Load all training state from a checkpoint file. - rank = 0 will load the checkpoint, and then broadcast it to all - other ranks. 
- """ - extra_state, self._optim_history, last_optim_state = None, [], None - - logger.info(f"Preparing to load checkpoint {filename}") - is_distributed = self.data_parallel_world_size > 1 - bexists = PathManager.isfile(filename) - if bexists: - load_on_all_ranks = ( - self.cfg.checkpoint.load_checkpoint_on_all_dp_ranks - # TPUs don't support broadcast yet, so load checkpoints - # on every worker for now - or self.tpu - # FSDP requires loading checkpoint shards on all ranks - or (self.is_fsdp and self.cfg.distributed_training.use_sharded_state) - or getattr(self.cfg.model, "base_layers", 0) > 0 - ) - - if load_on_all_ranks or self.data_parallel_rank == 0: - state = checkpoint_utils.load_checkpoint_to_cpu( - filename, load_on_all_ranks=load_on_all_ranks - ) - last_optim_state = state.get("last_optimizer_state", None) - - # If doing zero_sharding, do not broadcast global optimizer - # state. Later we will broadcast sharded states to each rank - # to avoid memory from exploding. - if ( - not load_on_all_ranks - and self.cfg.distributed_training.zero_sharding == "os" - and "last_optimizer_state" in state - and is_distributed - ): - state["last_optimizer_state"] = "SHARDED" - else: - last_optim_state = None - state = None - - if is_distributed and not load_on_all_ranks: - state = distributed_utils.broadcast_object( - state, - src_rank=0, - group=self.data_parallel_process_group, - dist_device=self.device, - ) - if self.data_parallel_rank > 0: - last_optim_state = state.get("last_optimizer_state", None) - - # load model parameters - try: - if ( - "optimizer_history" in state - and len(state["optimizer_history"]) > 0 - and "num_updates" in state["optimizer_history"][-1] - ): - self.model.set_num_updates( - state["optimizer_history"][-1]["num_updates"] - ) - - # this is the code related to AdaPrune - # In short, it removes redundant heads in multi-head attention module based on heads importance provided - # For more info, please refer to the paper: https://openreview.net/forum?id=_CMSV7FTzGI - # The idea of prune in mha can be summarized as - # Fine tune model (e.g. roberta encoder) on a certain datasets with regularization - # After the model is trained. User could use get_reserve_head_index and _adaptive_prune_heads functions to get the top X heads with most importance. - # Then user uses the rank to prune a new roberta encoder and save the pruned ckpt manually. - # User will fine tune the the new roberta encoder via the ckpt saved above - # To get rid of registering different pruned version of Roberta, I use the argument --mha-heads-to-keep to prune the Roberta model into a pruned version which matches the pruned ckpt. 
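The AdaPrune comments above describe an offline step that is not shown in this file: ranking attention heads on a fine-tuned model, pruning, and saving the pruned checkpoint. The following is a hypothetical sketch of that step only, assuming a RoBERTa-style encoder whose attention layers expose the `_get_reserve_head_index` / `_adaptive_prune_heads` helpers referenced below.

```python
import torch

# Hypothetical offline step: prune a fine-tuned encoder and save the
# checkpoint that --mha-heads-to-keep will later be matched against.
def prune_heads_and_save(model, num_heads_to_keep: int, out_path: str):
    for layer in model.encoder.sentence_encoder.layers:
        reserve = layer.self_attn._get_reserve_head_index(
            num_heads_to_keep=num_heads_to_keep
        )
        layer.self_attn._adaptive_prune_heads(reserve_head_index=reserve)
        layer.self_attn._set_skip_embed_dim_check()
    torch.save({"model": model.state_dict()}, out_path)
```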
- if ( - safe_hasattr(self.model, "args") - and safe_hasattr(self.model.args, "mha_heads_to_keep") - and self.model.args.mha_heads_to_keep != -1 - ): - logger.info( - f"Prune model: keep {self.model.args.mha_heads_to_keep} heads for each multihead attention module" - ) - for layer in self.model.encoder.sentence_encoder.layers: - reserve_head_index = layer.self_attn._get_reserve_head_index( - num_heads_to_keep=self.model.args.mha_heads_to_keep - ) - layer.self_attn._adaptive_prune_heads( - reserve_head_index=reserve_head_index - ) - layer.self_attn._set_skip_embed_dim_check() - logger.info(self.model) - # this is the code related to AdaPrune - # In short, it removes redundant units in feedforward layer in each transformer layer based on importance - # For more info, please refer to the paper: https://openreview.net/forum?id=_CMSV7FTzGI - # The idea of prune in ffn can be summarized as - # Fine tune model (e.g. roberta encoder) on a certain datasets with regularization - # After the model is trained. User could use _get_fc_rank and _prune_fc_layer functions to get the top X units with most importance. - # Then user uses the rank to prune a new roberta encoder and save the pruned ckpt manually. - # User will fine tune the the new roberta encoder via the ckpt saved above - # To get rid of registering different pruned version of Roberta, I use the argument --ffn-blocks-to-remove to prune the Roberta model into a pruned version which matches the pruned ckpt. - if ( - safe_hasattr(self.model, "args") - and safe_hasattr(self.model.args, "ffn_blocks_to_remove") - and self.model.args.ffn_blocks_to_remove != -1 - ): - logger.info( - f"Prune model: remove {self.model.args.ffn_blocks_to_remove} ffn blocks for each transformer layer" - ) - for layer in self.model.encoder.sentence_encoder.layers: - remove_index = layer._get_fc_rank( - remove_num=self.model.args.ffn_blocks_to_remove - ) - layer._prune_fc_layer(remove_index=remove_index) - logger.info(self.model) - - self.model.load_state_dict( - state["model"], strict=True, model_cfg=self.cfg.model - ) - # save memory for later steps - del state["model"] - if utils.has_parameters(self.get_criterion()): - self.get_criterion().load_state_dict( - state["criterion"], strict=True - ) - del state["criterion"] - - except Exception: - raise Exception( - "Cannot load model parameters from checkpoint {}; " - "please ensure that the architectures match.".format(filename) - ) - extra_state = state["extra_state"] - self._optim_history = state["optimizer_history"] - - if last_optim_state is not None and not reset_optimizer: - # rebuild optimizer after loading model, since params may have changed - self._build_optimizer() - - # only reload optimizer and lr_scheduler if they match - last_optim = self._optim_history[-1] - assert ( - last_optim["criterion_name"] == self.get_criterion().__class__.__name__ - ), f"Criterion does not match; please reset the optimizer (--reset-optimizer). {last_optim['criterion_name']} vs {self.get_criterion().__class__.__name__}" - assert ( - last_optim["optimizer_name"] == self.optimizer.__class__.__name__ - ), f"Optimizer does not match; please reset the optimizer (--reset-optimizer). 
{last_optim['optimizer_name']} vs {self.optimizer.__class__.__name__}" - - if not reset_lr_scheduler: - self.lr_scheduler.load_state_dict(last_optim["lr_scheduler_state"]) - - if self.is_fsdp and not self.model.use_sharded_state: - # if use_sharded_state, the last_optim_state is already sharded, skip this - last_optim_state = self.model.get_shard_from_optim_state_dict( - last_optim_state - ) - elif not load_on_all_ranks and is_distributed: - last_optim_state = self.optimizer.broadcast_global_state_dict( - last_optim_state - ) - - self.optimizer.load_state_dict(last_optim_state, optimizer_overrides) - - self.set_num_updates(last_optim["num_updates"]) - - if extra_state is not None: - itr_state = extra_state["train_iterator"] - if type(itr_state) == list: - # assert len(itr_state) == self.data_parallel_world_size - itr_state = itr_state[self.data_parallel_rank % len(itr_state)] - extra_state["train_iterator"] = itr_state - epoch = itr_state.get("epoch", 1) - - if "previous_training_time" in extra_state: - self._previous_training_time = extra_state["previous_training_time"] - self._start_time = time.time() - - self.lr_step(epoch) - - if ( - itr_state.get("version", 1) >= 2 - and itr_state["iterations_in_epoch"] == 0 - ): - # reset meters at start of epoch - reset_meters = True - - if "metrics" in extra_state and not reset_meters: - metrics.load_state_dict(extra_state["metrics"]) - - # reset TimeMeters, since their start times don't make sense anymore - for meter in metrics.get_meters("default"): - if isinstance(meter, meters.TimeMeter): - meter.reset() - - if self.cfg.ema.store_ema: - if "ema" not in extra_state: - logger.warn( - "EMA not found in checkpoint. But store_ema is True. " - "EMA is re-initialized from checkpoint." - ) - self.ema.restore( - state["model"], build_fp32_params=self.cfg.ema.ema_fp32 - ) - else: - logger.info("Loading EMA from checkpoint") - self.ema.restore(extra_state["ema"], build_fp32_params=False) - - if self.cfg.ema.ema_fp32: - if "ema_fp32_params" in extra_state: - logger.info("Loading EMA fp32 params from checkpoint") - self.ema.build_fp32_params(extra_state["ema_fp32_params"]) - else: - logger.info( - "Building EMA fp32 params from EMA model in checkpoint" - ) - self.ema.build_fp32_params() - - logger.info( - "Loaded checkpoint {} (epoch {} @ {} updates)".format( - filename, epoch, self.get_num_updates() - ) - ) - - else: - logger.info("No existing checkpoint found {}".format(filename)) - - return extra_state - - def get_train_iterator( - self, - epoch, - combine=True, - load_dataset=True, - data_selector=None, - shard_batch_itr=True, - disable_iterator_cache=False, - ): - """Return an EpochBatchIterator over the training set for a given epoch.""" - if load_dataset: - logger.info("loading train data for epoch {}".format(epoch)) - self.task.load_dataset( - self.cfg.dataset.train_subset, - epoch=epoch, - combine=combine, - data_selector=data_selector, - tpu=self.tpu, - ) - batch_iterator = self.task.get_batch_iterator( - dataset=self.task.dataset(self.cfg.dataset.train_subset), - max_tokens=self.cfg.dataset.max_tokens, - max_sentences=self.cfg.dataset.batch_size, - max_positions=utils.resolve_max_positions( - self.task.max_positions(), - self.model.max_positions(), - self.cfg.dataset.max_tokens, - ), - ignore_invalid_inputs=True, - required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple, - seed=(self.cfg.common.seed + epoch) - if self.cfg.dataset.update_ordered_indices_seed - else self.cfg.common.seed, - num_shards=self.data_parallel_world_size 
if shard_batch_itr else 1, - shard_id=self.data_parallel_rank if shard_batch_itr else 0, - num_workers=self.cfg.dataset.num_workers, - epoch=epoch, - data_buffer_size=self.cfg.dataset.data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - skip_remainder_batch=self.cfg.optimization.skip_remainder_batch, - grouped_shuffling=self.cfg.dataset.grouped_shuffling, - update_epoch_batch_itr=self.cfg.dataset.update_epoch_batch_itr, - ) - self.reset_dummy_batch(batch_iterator.first_batch) - return batch_iterator - - def get_valid_iterator( - self, - subset, - disable_iterator_cache=False, - ): - """Return an EpochBatchIterator over given validation subset for a given epoch.""" - batch_iterator = self.task.get_batch_iterator( - dataset=self.task.dataset(subset), - max_tokens=self.cfg.dataset.max_tokens_valid, - max_sentences=self.cfg.dataset.batch_size_valid, - max_positions=utils.resolve_max_positions( - self.task.max_positions(), - self.model.max_positions(), - ), - ignore_invalid_inputs=self.cfg.dataset.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple, - seed=self.cfg.common.seed, - num_shards=self.data_parallel_world_size, - shard_id=self.data_parallel_rank, - num_workers=self.cfg.dataset.num_workers, - # always pass a fixed "epoch" to keep validation data consistent - # across training epochs - epoch=1, - data_buffer_size=self.cfg.dataset.data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - skip_remainder_batch=False, - ) - self.reset_dummy_batch(batch_iterator.first_batch) - return batch_iterator - - def begin_epoch(self, epoch): - """Called at the beginning of each epoch.""" - logger.info("begin training epoch {}".format(epoch)) - - self.lr_step_begin_epoch(epoch) - - if self.quantizer is not None: - self.quantizer.begin_epoch(epoch) - - # task specific setup per epoch - self.task.begin_epoch(epoch, self.get_model()) - - if self.tpu: - import torch_xla.core.xla_model as xm - - xm.rendezvous("begin_epoch") # wait for all workers - xm.mark_step() - - def begin_valid_epoch(self, epoch): - """Called at the beginning of each validation epoch.""" - - # task specific setup per validation epoch - self.task.begin_valid_epoch(epoch, self.get_model()) - - def reset_dummy_batch(self, batch): - self._dummy_batch = batch - - @metrics.aggregate("train") - def train_step(self, samples, raise_oom=False): - """Do forward, backward and parameter update.""" - self._set_seed() - self.model.train() - self.criterion.train() - self.zero_grad() - - metrics.log_start_time("train_wall", priority=800, round=0) - - # If EMA is enabled through store_ema=True - # and task.uses_ema is True, pass the EMA model as a keyword - # argument to the task. - extra_kwargs = {} - if self.cfg.ema.store_ema and getattr(self.task, "uses_ema", False): - extra_kwargs["ema_model"] = self.ema.get_model() - - # forward and backward pass - logging_outputs, sample_size, ooms = [], 0, 0 - for i, sample in enumerate(samples): # delayed update loop - sample, is_dummy_batch = self._prepare_sample(sample) - - def maybe_no_sync(): - """ - Whenever *samples* contains more than one mini-batch, we - want to accumulate gradients locally and only call - all-reduce in the last backwards pass. - """ - if ( - self.data_parallel_world_size > 1 - and hasattr(self.model, "no_sync") - and i < len(samples) - 1 - # The no_sync context manager results in increased memory - # usage with FSDP, since full-size gradients will be - # accumulated on each GPU. 
It's typically a better tradeoff - # to do the extra communication with FSDP. - and not self.is_fsdp - ): - return self.model.no_sync() - else: - return contextlib.ExitStack() # dummy contextmanager - - try: - with maybe_no_sync(): - # forward and backward - loss, sample_size_i, logging_output = self.task.train_step( - sample=sample, - model=self.model, - criterion=self.criterion, - optimizer=self.optimizer, - update_num=self.get_num_updates(), - ignore_grad=is_dummy_batch, - **extra_kwargs, - ) - del loss - - logging_outputs.append(logging_output) - sample_size += sample_size_i - - # emptying the CUDA cache after the first step can - # reduce the chance of OOM - if self.cuda and self.get_num_updates() == 0: - torch.cuda.empty_cache() - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - if raise_oom: - raise e - logger.warning( - "attempting to recover from OOM in forward/backward pass" - ) - ooms += 1 - self.zero_grad() - if self.cuda: - torch.cuda.empty_cache() - if self.cfg.distributed_training.distributed_world_size == 1: - return None - else: - raise e - except Exception: - self.consolidate_optimizer() - self.save_checkpoint( - os.path.join(self.cfg.checkpoint.save_dir, "crash.pt"), {} - ) - raise - - if self.tpu and i < len(samples) - 1: - # tpu-comment: every XLA operation before marking step is - # appended to the IR graph, and processing too many batches - # before marking step can lead to OOM errors. - # To handle gradient accumulation use case, we explicitly - # mark step here for every forward pass without a backward pass - self._xla_markstep_and_send_to_cpu() - - if is_dummy_batch: - if torch.is_tensor(sample_size): - sample_size.zero_() - else: - sample_size *= 0.0 - - if torch.is_tensor(sample_size): - sample_size = sample_size.float() - else: - sample_size = float(sample_size) - - # gather logging outputs from all replicas - if self._sync_stats(): - train_time = self._local_cumulative_training_time() - ( - logging_outputs, - ( - sample_size, - ooms, - total_train_time, - ), - ) = self._aggregate_logging_outputs( - logging_outputs, sample_size, ooms, train_time, ignore=is_dummy_batch - ) - self._cumulative_training_time = ( - total_train_time / self.data_parallel_world_size - ) - - overflow = False - try: - with torch.autograd.profiler.record_function("reduce-grads"): - # reduce gradients across workers - self.optimizer.all_reduce_grads(self.model) - if utils.has_parameters(self.criterion): - self.optimizer.all_reduce_grads(self.criterion) - - with torch.autograd.profiler.record_function("multiply-grads"): - # multiply gradients by (data_parallel_size / sample_size) since - # DDP normalizes by the number of data parallel workers for - # improved fp16 precision. - # Thus we get (sum_of_gradients / sample_size) at the end. - # In case of fp16, this step also undoes loss scaling. - # (Debugging note: Some optimizers perform this scaling on the - # fly, so inspecting model.parameters() or optimizer.params may - # still show the original, unscaled gradients.) - numer = ( - self.data_parallel_world_size - if not self.cfg.optimization.use_bmuf or self._sync_stats() - else 1 - ) - self.optimizer.multiply_grads(numer / (sample_size or 1.0)) - # Note: (sample_size or 1.0) handles the case of a zero gradient, in a - # way that avoids CPU/device transfers in case sample_size is a GPU or - # TPU object. The assumption is that the gradient itself is also 0. 
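A quick numeric check of the rescaling above: DDP's all-reduce has already divided the summed gradients by the number of workers, so multiplying by `world_size / sample_size` recovers the desired per-sample (here, per-token) average. With hypothetical numbers:

```python
# Hypothetical numbers for the gradient rescaling above.
world_size = 8        # data-parallel workers
sample_size = 4096    # tokens contributing to this update, all workers

# after all_reduce: grad == sum_of_gradients / world_size
# after multiply:   grad == sum_of_gradients / sample_size
print(world_size / (sample_size or 1.0))  # 0.001953125
```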
- - with torch.autograd.profiler.record_function("clip-grads"): - # clip grads - grad_norm = self.clip_grad_norm(self.cfg.optimization.clip_norm) - - # check that grad norms are consistent across workers - # on tpu check tensor is slow - if not self.tpu: - if ( - not self.cfg.optimization.use_bmuf - and self.cfg.distributed_training.ddp_backend != "slowmo" - ): - self._check_grad_norms(grad_norm) - if not torch.isfinite(grad_norm).all(): - # in case of AMP, if gradients are Nan/Inf then - # optimizer step is still required - if self.cfg.common.amp: - overflow = True - else: - # check local gradnorm single GPU case, trigger NanDetector - raise FloatingPointError("gradients are Nan/Inf") - - with torch.autograd.profiler.record_function("optimizer"): - # take an optimization step - self.task.optimizer_step( - self.optimizer, model=self.model, update_num=self.get_num_updates() - ) - if self.cfg.common.amp and overflow: - if self._amp_retries == self.cfg.common.amp_batch_retries: - logger.info("AMP: skipping this batch.") - self._amp_retries = 0 - else: - self._amp_retries += 1 - return self.train_step( - samples, raise_oom - ) # recursion to feed in same batch - - except FloatingPointError: - - self.consolidate_optimizer() - self.save_checkpoint( - os.path.join(self.cfg.checkpoint.save_dir, "crash.pt"), {} - ) - - # re-run the forward and backward pass with hooks attached to print - # out where it fails - self.zero_grad() - with NanDetector(self.get_model()): - for _, sample in enumerate(samples): - sample, _ = self._prepare_sample(sample) - self.task.train_step( - sample, - self.model, - self.criterion, - self.optimizer, - self.get_num_updates(), - ignore_grad=False, - **extra_kwargs, - ) - raise - except OverflowError as e: - overflow = True - logger.info( - f"NOTE: gradient overflow detected, ignoring gradient, {str(e)}" - ) - grad_norm = torch.tensor(0.0).cuda() - self.zero_grad() - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - logger.error("OOM during optimization, irrecoverable") - raise e - - # Some distributed wrappers (e.g., SlowMo) need access to the optimizer - # after the step - if hasattr(self.model, "perform_slowmo"): - self.model.perform_slowmo( - self.optimizer.optimizer, getattr(self.optimizer, "fp32_params", None) - ) - - logging_output = None - if not overflow or self.cfg.distributed_training.ddp_backend == "slowmo": - self.set_num_updates(self.get_num_updates() + 1) - - if self.cfg.ema.store_ema: - # Step EMA forward with new model. 
- self.ema.step( - self.get_model(), - self.get_num_updates(), - ) - metrics.log_scalar( - "ema_decay", - self.ema.get_decay(), - priority=10000, - round=5, - weight=0, - ) - - if self.tpu: - import torch_xla.core.xla_model as xm - - # mark step on TPUs - self._xla_markstep_and_send_to_cpu() - - # only log stats every log_interval steps - # this causes wps to be misreported when log_interval > 1 - logging_output = {} - if self.get_num_updates() % self.cfg.common.log_interval == 0: - # log memory usage - mem_info = xm.get_memory_info(self.device) - gb_free = mem_info["kb_free"] / 1024 / 1024 - gb_total = mem_info["kb_total"] / 1024 / 1024 - metrics.log_scalar( - "gb_free", gb_free, priority=1500, round=1, weight=0 - ) - metrics.log_scalar( - "gb_total", gb_total, priority=1600, round=1, weight=0 - ) - logging_outputs = self._xla_markstep_and_send_to_cpu( - logging_outputs - ) - logging_output = self._reduce_and_log_stats( - logging_outputs, sample_size, grad_norm - ) - - # log whenever there's an XLA compilation, since these - # slow down training and may indicate opportunities for - # optimization - self._check_xla_compilation() - else: - if self.cuda and self.cuda_env is not None: - # log minimum free memory over the iteration - gb_used = torch.cuda.max_memory_allocated() / 1024 / 1024 / 1024 - torch.cuda.reset_peak_memory_stats() - gb_free = self.cuda_env.total_memory_in_GB - gb_used - metrics.log_scalar( - "gb_free", gb_free, priority=1500, round=1, weight=0 - ) - - # log stats - logging_output = self._reduce_and_log_stats( - logging_outputs, sample_size, grad_norm - ) - - # clear CUDA cache to reduce memory fragmentation - if ( - self.cuda - and self.cfg.common.empty_cache_freq > 0 - and ( - (self.get_num_updates() + self.cfg.common.empty_cache_freq - 1) - % self.cfg.common.empty_cache_freq - ) - == 0 - ): - torch.cuda.empty_cache() - - if self.cfg.common.fp16 or self.cfg.common.amp: - metrics.log_scalar( - "loss_scale", - ( - self.optimizer.scaler.loss_scale - if self.cfg.common.fp16 - else self.optimizer.scaler.get_scale() - ), - priority=700, - round=4, - weight=0, - ) - - metrics.log_stop_time("train_wall") - return logging_output - - @metrics.aggregate("valid") - def valid_step(self, sample, raise_oom=False): - """Do forward pass in evaluation mode.""" - if self.tpu: - import torch_xla.core.xla_model as xm - - xm.rendezvous("valid_step") # wait for all workers - - # If EMA is enabled through store_ema=True - # and task.uses_ema is True, pass the EMA model as a keyword - # argument to the task. 
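For context on the `ema.step` call in `train_step` above and the EMA model handed to the task here: the core of an exponential-moving-average update is a decayed blend of shadow and live weights. The following is a minimal, generic sketch only; the real `fairseq.models.ema` module additionally manages optional fp32 shadow copies (`ema_fp32`) and the decay schedule.

```python
import torch

# Minimal, generic EMA shadow update; not fairseq's implementation.
@torch.no_grad()
def ema_step(shadow: dict, model: torch.nn.Module, decay: float = 0.999):
    for name, param in model.named_parameters():
        # shadow <- decay * shadow + (1 - decay) * param
        shadow[name].mul_(decay).add_(param, alpha=1.0 - decay)
```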
- extra_kwargs = {} - if self.cfg.ema.store_ema and getattr(self.task, "uses_ema", False): - extra_kwargs["ema_model"] = self.ema.get_model() - - with torch.no_grad(): - self.model.eval() - self.criterion.eval() - - sample, is_dummy_batch = self._prepare_sample(sample) - - try: - _loss, sample_size, logging_output = self.task.valid_step( - sample, self.model, self.criterion, **extra_kwargs - ) - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - if not raise_oom: - logger.warning( - "ran out of memory in validation step, retrying batch" - ) - for p in self.model.parameters(): - if p.grad is not None: - p.grad = None # free some memory - if self.cuda: - torch.cuda.empty_cache() - return self.valid_step(sample, raise_oom=True) - raise e - - logging_outputs = [logging_output] - if is_dummy_batch: - if torch.is_tensor(sample_size): - sample_size.zero_() - else: - sample_size *= 0.0 - - # gather logging outputs from all replicas - if self.data_parallel_world_size > 1: - logging_outputs, (sample_size,) = self._aggregate_logging_outputs( - logging_outputs, - sample_size, - ignore=is_dummy_batch, - ) - - # log validation stats - if self.tpu: - logging_outputs = self._xla_markstep_and_send_to_cpu(logging_outputs) - logging_output = self._reduce_and_log_stats(logging_outputs, sample_size) - - return logging_output - - def zero_grad(self): - self.optimizer.zero_grad() - - def lr_step_begin_epoch(self, epoch): - """Adjust the learning rate at the beginning of the epoch.""" - self.lr_scheduler.step_begin_epoch(epoch) - # prefer updating the LR based on the number of steps - return self.lr_step_update() - - def lr_step(self, epoch, val_loss=None): - """Adjust the learning rate at the end of the epoch.""" - self.lr_scheduler.step(epoch, val_loss) - # prefer updating the LR based on the number of steps - return self.lr_step_update() - - def lr_step_update(self): - """Update the learning rate after each update.""" - new_lr = self.lr_scheduler.step_update(self.get_num_updates()) - if isinstance(new_lr, dict): - for k, v in new_lr.items(): - metrics.log_scalar(f"lr_{k}", v, weight=0, priority=300) - new_lr = new_lr.get("default", next(iter(new_lr.values()))) - else: - metrics.log_scalar("lr", new_lr, weight=0, priority=300) - return new_lr - - def get_lr(self): - """Get the current learning rate.""" - return self.optimizer.get_lr() - - def get_model(self): - """Get the (non-wrapped) model instance.""" - return self._model - - def get_criterion(self): - """Get the (non-wrapped) criterion instance.""" - return self._criterion - - def get_meter(self, name): - """[deprecated] Get a specific meter by name.""" - from fairseq import meters - - if "get_meter" not in self._warn_once: - self._warn_once.add("get_meter") - utils.deprecation_warning( - "Trainer.get_meter is deprecated. Please use fairseq.metrics instead." 
- ) - - train_meters = metrics.get_meters("train") - if train_meters is None: - train_meters = {} - - if name == "train_loss" and "loss" in train_meters: - return train_meters["loss"] - elif name == "train_nll_loss": - # support for legacy train.py, which assumed this meter is - # always initialized - m = train_meters.get("nll_loss", None) - return m or meters.AverageMeter() - elif name == "wall": - # support for legacy train.py, which assumed this meter is - # always initialized - m = metrics.get_meter("default", "wall") - return m or meters.TimeMeter() - elif name == "wps": - m = metrics.get_meter("train", "wps") - return m or meters.TimeMeter() - elif name in {"valid_loss", "valid_nll_loss"}: - # support for legacy train.py, which assumed these meters - # are always initialized - k = name[len("valid_") :] - m = metrics.get_meter("valid", k) - return m or meters.AverageMeter() - elif name == "oom": - return meters.AverageMeter() - elif name in train_meters: - return train_meters[name] - return None - - def get_num_updates(self): - """Get the number of parameters updates.""" - return self._num_updates - - def set_num_updates(self, num_updates): - """Set the number of parameters updates.""" - self._num_updates = num_updates - self.lr_step_update() - if self.quantizer: - self.quantizer.step_update(self._num_updates) - metrics.log_scalar("num_updates", self._num_updates, weight=0, priority=200) - - def clip_grad_norm(self, clip_norm): - def agg_norm_fn(total_norm): - total_norm = total_norm.cuda().float() ** 2 - total_norm = distributed_utils.all_reduce( - total_norm, group=self.data_parallel_process_group - ) - return total_norm ** 0.5 - - should_agg_norm = self.is_fsdp and ( - self.data_parallel_process_group is not None - or torch.distributed.is_initialized() - ) - return self.optimizer.clip_grad_norm( - clip_norm, aggregate_norm_fn=agg_norm_fn if should_agg_norm else None - ) - - def cumulative_training_time(self): - if self._cumulative_training_time is None: - # single GPU - return self._local_cumulative_training_time() - else: - return self._cumulative_training_time - - def _local_cumulative_training_time(self): - """Aggregate training time in seconds.""" - return time.time() - self._start_time + self._previous_training_time - - def _fp_convert_sample(self, sample): - def apply_half(t): - if t.dtype is torch.float32: - return t.to(dtype=torch.half) - return t - - def apply_bfloat16(t): - if t.dtype is torch.float32: - return t.to(dtype=torch.bfloat16) - return t - - if self.cfg.common.fp16: - sample = utils.apply_to_sample(apply_half, sample) - - if self.cfg.common.bf16: - sample = utils.apply_to_sample(apply_bfloat16, sample) - - return sample - - def _prepare_sample(self, sample, is_dummy=False): - if sample == "DUMMY": - raise Exception( - "Trying to use an uninitialized 'dummy' batch. This usually indicates " - "that the total number of batches is smaller than the number of " - "participating GPUs. Try reducing the batch size or using fewer GPUs." - ) - - if sample is None or len(sample) == 0: - assert ( - self._dummy_batch is not None and len(self._dummy_batch) > 0 - ), "Invalid dummy batch: {}".format(self._dummy_batch) - sample, _ = self._prepare_sample(self._dummy_batch, is_dummy=True) - return sample, True - - # Given that PCIe/NVLink bandwidth is significantly smaller than DRAM bandwidth - # it makes sense to do the format conversion on the CPU and then transfer - # a smaller buffer to the device. This also saves GPU memory capacity. 
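To put numbers on the comment above: casting before the transfer halves the traffic. A hypothetical illustration using the `apply_to_sample` and `move_to_cuda` helpers defined later in `fairseq/utils.py` (assumes a CUDA device is present):

```python
import torch
from fairseq import utils

# An fp32 batch of 8 x 512 x 1024 activations is 16 MiB; casting to fp16
# on the CPU first means only 8 MiB cross the PCIe/NVLink bus.
sample = {"features": torch.randn(8, 512, 1024)}  # fp32, 16 MiB
half = utils.apply_to_sample(
    lambda t: t.half() if t.dtype is torch.float32 else t, sample
)
cuda_sample = utils.move_to_cuda(half)  # transfers 8 MiB instead of 16
```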
- - if self.cfg.common.on_cpu_convert_precision: - sample = self._fp_convert_sample(sample) - - if self.cuda: - if self.pipeline_model_parallel: - if "target" in sample: - sample["target"] = utils.move_to_cuda( - sample["target"], device=self.last_device - ) - else: - sample = utils.move_to_cuda(sample) - elif self.tpu and is_dummy: - # the dummy batch may not be on the appropriate device - sample = utils.move_to_cuda(sample, device=self.device) - - if not self.cfg.common.on_cpu_convert_precision: - sample = self._fp_convert_sample(sample) - - if self._dummy_batch == "DUMMY": - self._dummy_batch = sample - - return sample, False - - def _set_seed(self): - # Set seed based on args.seed and the update number so that we get - # reproducible results when resuming from checkpoints - seed = self.cfg.common.seed + self.get_num_updates() - utils.set_torch_seed(seed) - - def _sync_stats(self): - # Return True if it's using multiple GPUs and DDP or multiple GPUs with - # BMUF and it's a bmuf sync with warmup iterations completed before. - if self.data_parallel_world_size == 1: - return False - elif self.cfg.optimization.use_bmuf: - return ( - self.get_num_updates() + 1 - ) % self.cfg.bmuf.global_sync_iter == 0 and ( - self.get_num_updates() + 1 - ) > self.cfg.bmuf.warmup_iterations - else: - return True - - def _log_oom(self, exc): - msg = "OOM: Ran out of memory with exception: {}".format(exc) - logger.warning(msg) - if torch.cuda.is_available() and hasattr(torch.cuda, "memory_summary"): - for device_idx in range(torch.cuda.device_count()): - logger.warning(torch.cuda.memory_summary(device=device_idx)) - sys.stderr.flush() - - def _aggregate_logging_outputs( - self, - logging_outputs: List[Dict[str, Any]], - *extra_stats_to_sum, - ignore=False, - ): - if self.task.__class__.logging_outputs_can_be_summed(self.get_criterion()): - return self._fast_stat_sync_sum( - logging_outputs, *extra_stats_to_sum, ignore=ignore - ) - else: - return self._all_gather_list_sync( - logging_outputs, *extra_stats_to_sum, ignore=ignore - ) - - def _all_gather_list_sync( - self, - logging_outputs: List[Dict[str, Any]], - *extra_stats_to_sum, - ignore=False, - ): - """ - Sync logging outputs across workers. all_gather_list_sync is - suitable when logging outputs are complex types. - """ - if self.tpu: - raise NotImplementedError - if ignore: - logging_outputs = [] - results = list( - zip( - *distributed_utils.all_gather_list( - [logging_outputs] + list(extra_stats_to_sum), - max_size=getattr(self.cfg.common, "all_gather_list_size", 16384), - group=self.data_parallel_process_group, - ) - ) - ) - logging_outputs, extra_stats_to_sum = results[0], results[1:] - logging_outputs = list(chain.from_iterable(logging_outputs)) - extra_stats_to_sum = [sum(s) for s in extra_stats_to_sum] - return logging_outputs, extra_stats_to_sum - - def _fast_stat_sync_sum( - self, - logging_outputs: List[Dict[str, Any]], - *extra_stats_to_sum, - ignore=False, - ): - """ - Sync logging outputs across workers. fast_stat_sync_sum is - faster than all_gather_list_sync, but is only suitable when - logging outputs are scalars and can be summed. Note that - *logging_outputs* cannot contain any nested dicts/lists. 
- """ - data = {} - for i, stat in enumerate(extra_stats_to_sum): - data["extra_stats_" + str(i)] = stat - if len(logging_outputs) > 0: - log_keys = list(logging_outputs[0].keys()) - for k in log_keys: - if not ignore: - v = sum(log[k] for log in logging_outputs if k in log) - else: - v = logging_outputs[0][k] - v = torch.zeros_like(v) if torch.is_tensor(v) else 0 - data["logging_outputs_" + k] = v - else: - log_keys = None - - data = distributed_utils.all_reduce_dict( - data, device=self.device, group=self.data_parallel_process_group - ) - - extra_stats_to_sum = [ - data["extra_stats_" + str(i)] for i in range(len(extra_stats_to_sum)) - ] - if log_keys is not None: - logging_outputs = [{k: data["logging_outputs_" + k] for k in log_keys}] - else: - logging_outputs = [] - return logging_outputs, extra_stats_to_sum - - def _check_grad_norms(self, grad_norm): - """Check that grad norms are consistent across workers.""" - if self._grad_norm_buf is not None: - self._grad_norm_buf.zero_() - self._grad_norm_buf[self.data_parallel_rank] = grad_norm - distributed_utils.all_reduce( - self._grad_norm_buf, group=self.data_parallel_process_group - ) - - def is_consistent(tensor): - max_abs_diff = torch.max(torch.abs(tensor - tensor[0])) - return ( - ( - torch.isfinite(tensor).all() - and (max_abs_diff / (tensor[0] + 1e-6) < 1e-6).all() - ) - or (self.cfg.common.amp and not torch.isfinite(tensor).all()) - # in case of amp non-finite grads are fine - ) - - if not is_consistent(self._grad_norm_buf): - pretty_detail = "\n".join( - "rank {:3d} = {:.8f}".format(r, n) - for r, n in enumerate(self._grad_norm_buf.tolist()) - ) - error_detail = "grad_norm across the workers:\n{}\n".format( - pretty_detail - ) - # use FloatingPointError to trigger NanDetector - raise FloatingPointError( - "Fatal error: gradients are inconsistent between workers. " - "Try --ddp-backend=legacy_ddp. " - "Or are you mixing up different generation of GPUs in training?" 
- + "\n" - + "-" * 80 - + "\n{}\n".format(error_detail) - + "-" * 80 - ) - - def _reduce_and_log_stats(self, logging_outputs, sample_size, grad_norm=None): - if grad_norm is not None and ( - not torch.is_tensor(grad_norm) or torch.isfinite(grad_norm) - ): - metrics.log_speed("ups", 1.0, priority=100, round=2) - metrics.log_scalar("gnorm", grad_norm, priority=400, round=3) - if self.cfg.optimization.clip_norm > 0: - metrics.log_scalar( - "clip", - torch.where( - grad_norm > self.cfg.optimization.clip_norm, - grad_norm.new_tensor(100), - grad_norm.new_tensor(0), - ), - priority=500, - round=1, - ) - - with metrics.aggregate() as agg: - if logging_outputs is not None: - self.task.reduce_metrics(logging_outputs, self.get_criterion()) - del logging_outputs - - # extra warning for criterions that don't properly log a loss value - if "loss" not in agg: - if "loss" not in self._warn_once: - self._warn_once.add("loss") - logger.warning( - "Criterion.reduce_metrics did not log a 'loss' value, " - "which may break some functionality" - ) - metrics.log_scalar("loss", -1) - - # support legacy interface - if self.tpu: - logging_output = {} - else: - logging_output = agg.get_smoothed_values() - logging_output["sample_size"] = sample_size - for key_to_delete in ["ppl", "wps", "wpb", "bsz"]: - if key_to_delete in logging_output: - del logging_output[key_to_delete] - return logging_output - - def _check_xla_compilation(self): - import torch_xla.debug.metrics as met - - compile_stats = met.metric_data("CompileTime") - if compile_stats is None: - return - num_xla_compiles = compile_stats[0] - if num_xla_compiles > self._num_xla_compiles: - logger.warning( - "XLA compilation detected on device #{}; too many of these can lead " - "to slow training, but we expect a few in the beginning".format( - self.cfg.distributed_training.distributed_rank - ) - ) - self._num_xla_compiles = num_xla_compiles - - def _xla_markstep_and_send_to_cpu(self, data=None): - import torch_xla.core.xla_model as xm - - xm.mark_step() - if data is not None: - from fairseq.utils import xla_device_to_cpu - - return xla_device_to_cpu(data) - - -def _catalog_shared_params(module, memo=None, prefix=""): - if memo is None: - first_call = True - memo = {} - else: - first_call = False - for name, param in module._parameters.items(): - param_prefix = prefix + ("." if prefix else "") + name - if param not in memo: - memo[param] = [] - memo[param].append(param_prefix) - for name, m in module._modules.items(): - if m is None: - continue - submodule_prefix = prefix + ("." if prefix else "") + name - _catalog_shared_params(m, memo, submodule_prefix) - if first_call: - return [x for x in memo.values() if len(x) > 1] - - -def _get_module_by_path(module, path): - path = path.split(".") - for name in path: - module = getattr(module, name) - return module - - -def _set_module_by_path(module, path, value): - path = path.split(".") - for name in path[:-1]: - module = getattr(module, name) - setattr(module, path[-1], value) diff --git a/kosmos-g/fairseq/fairseq/utils.py b/kosmos-g/fairseq/fairseq/utils.py deleted file mode 100644 index c7161fa6d..000000000 --- a/kosmos-g/fairseq/fairseq/utils.py +++ /dev/null @@ -1,839 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
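The `_catalog_shared_params` / `_set_module_by_path` helpers at the end of `trainer.py` above exist to keep tied weights tied: after the dtype/device conversions in the Trainer constructor, every extra path is re-pointed at the first one. A small hypothetical demo of what they compute, assuming the helpers can be imported from `fairseq.trainer`:

```python
import torch.nn as nn
from fairseq.trainer import (
    _catalog_shared_params, _get_module_by_path, _set_module_by_path
)

class TiedLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(100, 16)
        self.out = nn.Linear(16, 100, bias=False)
        self.out.weight = self.embed.weight  # weight tying

model = TiedLM()
shared = _catalog_shared_params(model)
print(shared)  # [['embed.weight', 'out.weight']]

# After a device/dtype move, re-point the second path at the first so
# both names reference a single parameter again:
ref = _get_module_by_path(model, shared[0][0])
_set_module_by_path(model, shared[0][1], ref)
```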
- -import argparse -import contextlib -import copy -import importlib -import logging -import os -import sys -import warnings -from itertools import accumulate -from typing import Callable, Dict, List, Optional, TYPE_CHECKING - -import torch -import torch.nn.functional as F -from torch import Tensor -import collections - -if TYPE_CHECKING: - from fairseq.modules.multihead_attention import MultiheadAttention - -try: - from amp_C import multi_tensor_l2norm - - multi_tensor_l2norm_available = True -except ImportError: - multi_tensor_l2norm_available = False - -try: - import torch_xla.core.xla_model as xm -except ImportError: - xm = None - - -logger = logging.getLogger(__name__) - - -MANIFOLD_PATH_SEP = "|" - - -class FileContentsAction(argparse.Action): - def __init__(self, option_strings, dest, nargs=None, **kwargs): - if nargs is not None: - raise ValueError("nargs not allowed") - super(FileContentsAction, self).__init__(option_strings, dest, **kwargs) - - def __call__(self, parser, namespace, values, option_string=None): - from fairseq.file_io import PathManager - - if PathManager.isfile(values): - with PathManager.open(values) as f: - argument = f.read().strip() - else: - argument = values - setattr(namespace, self.dest, argument) - - -def split_paths(paths: str, separator=os.pathsep) -> List[str]: - return ( - paths.split(separator) if "://" not in paths else paths.split(MANIFOLD_PATH_SEP) - ) - - -def load_ensemble_for_inference(filenames, task, model_arg_overrides=None): - from fairseq import checkpoint_utils - - deprecation_warning( - "utils.load_ensemble_for_inference is deprecated. " - "Please use checkpoint_utils.load_model_ensemble instead." - ) - return checkpoint_utils.load_model_ensemble( - filenames, arg_overrides=model_arg_overrides, task=task - ) - - -def apply_to_sample(f, sample): - if hasattr(sample, "__len__") and len(sample) == 0: - return {} - - def _apply(x): - if torch.is_tensor(x): - return f(x) - elif isinstance(x, collections.OrderedDict): - # OrderedDict has attributes that needs to be preserved - od = collections.OrderedDict( - (key, _apply(value)) for key, value in x.items() - ) - od.__dict__ = x.__dict__ - return od - elif isinstance(x, dict): - return {key: _apply(value) for key, value in x.items()} - elif isinstance(x, list): - return [_apply(x) for x in x] - elif isinstance(x, tuple): - return tuple(_apply(x) for x in x) - elif isinstance(x, set): - return {_apply(x) for x in x} - else: - return x - - return _apply(sample) - - -def move_to_cuda(sample, device=None): - device = device or torch.cuda.current_device() - - def _move_to_cuda(tensor): - # non_blocking is ignored if tensor is not pinned, so we can always set - # to True (see github.com/PyTorchLightning/pytorch-lightning/issues/620) - return tensor.to(device=device, non_blocking=True) - - return apply_to_sample(_move_to_cuda, sample) - - -def move_to_cpu(sample): - def _move_to_cpu(tensor): - # PyTorch has poor support for half tensors (float16) on CPU. - # Move any such tensors to float32. 
- if tensor.dtype in {torch.bfloat16, torch.float16}: - tensor = tensor.to(dtype=torch.float32) - return tensor.cpu() - - return apply_to_sample(_move_to_cpu, sample) - - -def move_to_tpu(sample): - - import torch_xla.core.xla_model as xm - - device = xm.xla_device() - - def _move_to_tpu(tensor): - return tensor.to(device) - - return apply_to_sample(_move_to_tpu, sample) - - -def get_incremental_state( - module: "MultiheadAttention", - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]], - key: str, -) -> Optional[Dict[str, Optional[Tensor]]]: - """Helper for getting incremental state for an nn.Module.""" - return module.get_incremental_state(incremental_state, key) - - -def set_incremental_state( - module: "MultiheadAttention", - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]], - key: str, - value: Dict[str, Optional[Tensor]], -) -> Optional[Dict[str, Dict[str, Optional[Tensor]]]]: - """Helper for setting incremental state for an nn.Module.""" - if incremental_state is not None: - result = module.set_incremental_state(incremental_state, key, value) - if result is not None: - incremental_state = result - return incremental_state - - -def load_align_dict(replace_unk): - if replace_unk is None: - align_dict = None - elif isinstance(replace_unk, str) and len(replace_unk) > 0: - # Load alignment dictionary for unknown word replacement if it was passed as an argument. - align_dict = {} - with open(replace_unk, "r") as f: - for line in f: - cols = line.split() - align_dict[cols[0]] = cols[1] - else: - # No alignment dictionary provided but we still want to perform unknown word replacement by copying the - # original source word. - align_dict = {} - return align_dict - - -def print_embed_overlap(embed_dict, vocab_dict): - embed_keys = set(embed_dict.keys()) - vocab_keys = set(vocab_dict.symbols) - overlap = len(embed_keys & vocab_keys) - logger.info("found {}/{} types in embedding file".format(overlap, len(vocab_dict))) - - -def parse_embedding(embed_path): - """Parse embedding text file into a dictionary of word and embedding tensors. - - The first line can have vocabulary size and dimension. The following lines - should contain word and embedding separated by spaces. - - Example: - 2 5 - the -0.0230 -0.0264 0.0287 0.0171 0.1403 - at -0.0395 -0.1286 0.0275 0.0254 -0.0932 - """ - embed_dict = {} - with open(embed_path) as f_embed: - next(f_embed) # skip header - for line in f_embed: - pieces = line.rstrip().split(" ") - embed_dict[pieces[0]] = torch.Tensor( - [float(weight) for weight in pieces[1:]] - ) - return embed_dict - - -def load_embedding(embed_dict, vocab, embedding): - for idx in range(len(vocab)): - token = vocab[idx] - if token in embed_dict: - embedding.weight.data[idx] = embed_dict[token] - return embedding - - -def replace_unk(hypo_str, src_str, alignment, align_dict, unk): - from fairseq import tokenizer - - # Tokens are strings here - hypo_tokens = tokenizer.tokenize_line(hypo_str) - # TODO: Very rare cases where the replacement is '<eos>' should be handled gracefully - src_tokens = tokenizer.tokenize_line(src_str) + ["<eos>"] - for i, ht in enumerate(hypo_tokens): - if ht == unk: - src_token = src_tokens[alignment[i]] - # Either take the corresponding value in the aligned dictionary or just copy the original value. 
- hypo_tokens[i] = align_dict.get(src_token, src_token) - return " ".join(hypo_tokens) - - -def post_process_prediction( - hypo_tokens, - src_str, - alignment, - align_dict, - tgt_dict, - remove_bpe=None, - extra_symbols_to_ignore=None, -): - hypo_str = tgt_dict.string( - hypo_tokens, remove_bpe, extra_symbols_to_ignore=extra_symbols_to_ignore - ) - if align_dict is not None: - hypo_str = replace_unk( - hypo_str, src_str, alignment, align_dict, tgt_dict.unk_string() - ) - if align_dict is not None or remove_bpe is not None: - # Convert back to tokens for evaluating with unk replacement or without BPE - # Note that the dictionary can be modified inside the method. - hypo_tokens = tgt_dict.encode_line(hypo_str, add_if_not_exist=True) - return hypo_tokens, hypo_str, alignment - - -def make_positions(tensor, padding_idx: int, onnx_trace: bool = False): - """Replace non-padding symbols with their position numbers. - - Position numbers begin at padding_idx+1. Padding symbols are ignored. - """ - # The series of casts and type-conversions here are carefully - # balanced to both work with ONNX export and XLA. In particular XLA - # prefers ints, cumsum defaults to output longs, and ONNX doesn't know - # how to handle the dtype kwarg in cumsum. - mask = tensor.ne(padding_idx).int() - return (torch.cumsum(mask, dim=1).type_as(mask) * mask).long() + padding_idx - - -def strip_pad(tensor, pad): - return tensor[tensor.ne(pad)] - - -def buffered_arange(max): - if not hasattr(buffered_arange, "buf"): - buffered_arange.buf = torch.LongTensor() - if max > buffered_arange.buf.numel(): - buffered_arange.buf.resize_(max) - torch.arange(max, out=buffered_arange.buf) - return buffered_arange.buf[:max] - - -def convert_padding_direction( - src_tokens, padding_idx, right_to_left: bool = False, left_to_right: bool = False -): - assert right_to_left ^ left_to_right - pad_mask = src_tokens.eq(padding_idx) - if not pad_mask.any(): - # no padding, return early - return src_tokens - if left_to_right and not pad_mask[:, 0].any(): - # already right padded - return src_tokens - if right_to_left and not pad_mask[:, -1].any(): - # already left padded - return src_tokens - max_len = src_tokens.size(1) - buffered = torch.empty(0).long() - if max_len > 0: - torch.arange(max_len, out=buffered) - range = buffered.type_as(src_tokens).expand_as(src_tokens) - num_pads = pad_mask.long().sum(dim=1, keepdim=True) - if right_to_left: - index = torch.remainder(range - num_pads, max_len) - else: - index = torch.remainder(range + num_pads, max_len) - return src_tokens.gather(1, index) - - -def item(tensor): - # tpu-comment: making this a no-op for xla devices. 
- if torch.is_tensor(tensor) and tensor.device.type == "xla": - return tensor.detach() - if hasattr(tensor, "item"): - return tensor.item() - if hasattr(tensor, "__getitem__"): - return tensor[0] - return tensor - - -def multi_tensor_total_norm(grads, chunk_size=2048 * 32) -> torch.Tensor: - per_device_grads = {} - norms = [] - for grad in grads: - device = grad.device - cur_device_grads = per_device_grads.get(device) - if cur_device_grads is None: - cur_device_grads = [] - per_device_grads[device] = cur_device_grads - cur_device_grads.append(grad) - for device in per_device_grads.keys(): - cur_device_grads = per_device_grads[device] - if device.type == "cuda": - # TODO(msb) return has_inf - has_inf = torch.zeros((1, 1), dtype=torch.int, device=device) - with torch.cuda.device(device): - norm = multi_tensor_l2norm( - chunk_size, has_inf, [cur_device_grads], False - ) - norms.append(norm[0].to(torch.cuda.current_device())) - else: - norms += [torch.norm(g, p=2, dtype=torch.float32) for g in cur_device_grads] - total_norm = torch.norm(torch.stack(norms)) - return total_norm - - -@torch.no_grad() -def clip_grad_norm_(params, max_norm, aggregate_norm_fn=None) -> torch.Tensor: - def grad_exists(p): - return p is not None and getattr(p, "grad", None) is not None - - if isinstance(params, torch.Tensor): - params = [params] - params = list(params) - grads = [ - p.grad.detach() for p in params if grad_exists(p) and not hasattr(p, "expert") - ] - expert_grads = [ - p.grad.detach() for p in params if grad_exists(p) and hasattr(p, "expert") - ] - - if len(grads) == 0: - if len(params) > 0: - return params[0].new_tensor(0.0) - else: - return torch.tensor(0.0) - - if len(grads) == 1: - total_norm = torch.norm(grads[0], p=2, dtype=torch.float32) - else: - if multi_tensor_l2norm_available: - total_norm = multi_tensor_total_norm(grads) - else: - if torch.cuda.is_available(): - warnings.warn( - "amp_C fused kernels unavailable, disabling multi_tensor_l2norm; " - "you may get better performance by installing NVIDIA's apex library" - ) - device = torch.cuda.current_device() - elif grads[0].device.type == "xla": - device = grads[0].device - else: - device = torch.device("cpu") - total_norm = torch.norm( - torch.stack( - [torch.norm(g, p=2, dtype=torch.float32).to(device) for g in grads] - ) - ) - - if aggregate_norm_fn is not None: - total_norm = aggregate_norm_fn(total_norm) - - if max_norm > 0: - max_norm = float(max_norm) - clip_coef = (max_norm / (total_norm + 1e-6)).clamp_(max=1) - for g in grads + expert_grads: - g.mul_(clip_coef) - return total_norm - - -def fill_with_neg_inf(t): - """FP16-compatible function that fills a tensor with -inf.""" - return t.float().fill_(float("-inf")).type_as(t) - - -def _match_types(arg1, arg2): - """Convert the numerical argument to the same type as the other argument""" - - def upgrade(arg_number, arg_structure): - if isinstance(arg_structure, tuple): - return tuple([arg_number] * len(arg_structure)) - elif isinstance(arg_structure, dict): - arg = copy.deepcopy(arg_structure) - for k in arg: - arg[k] = upgrade(arg_number, arg_structure[k]) - return arg - else: - return arg_number - - if isinstance(arg1, float) or isinstance(arg1, int): - return upgrade(arg1, arg2), arg2 - elif isinstance(arg2, float) or isinstance(arg2, int): - return arg1, upgrade(arg2, arg1) - - return arg1, arg2 - - -def resolve_max_positions(*args): - """Resolve max position constraints from multiple sources.""" - - def map_value_update(d1, d2): - updated_value = copy.deepcopy(d1) - for key in d2: - 
if key not in updated_value: - updated_value[key] = d2[key] - else: - updated_value[key] = min(d1[key], d2[key]) - return updated_value - - def nullsafe_min(l): - minim = None - for item in l: - if minim is None: - minim = item - elif item is not None and item < minim: - minim = item - return minim - - max_positions = None - for arg in args: - if max_positions is None: - max_positions = arg - elif arg is not None: - max_positions, arg = _match_types(max_positions, arg) - if isinstance(arg, float) or isinstance(arg, int): - max_positions = min(max_positions, arg) - elif isinstance(arg, dict): - max_positions = map_value_update(max_positions, arg) - else: - max_positions = tuple(map(nullsafe_min, zip(max_positions, arg))) - - return max_positions - - -def import_user_module(args): - module_path = getattr(args, "user_dir", None) - if module_path is not None: - module_path = os.path.abspath(args.user_dir) - if not os.path.exists(module_path) and not os.path.isfile( - os.path.dirname(module_path) - ): - fairseq_rel_path = os.path.join(os.path.dirname(__file__), args.user_dir) - if os.path.exists(fairseq_rel_path): - module_path = fairseq_rel_path - else: - fairseq_rel_path = os.path.join( - os.path.dirname(__file__), "..", args.user_dir - ) - if os.path.exists(fairseq_rel_path): - module_path = fairseq_rel_path - else: - raise FileNotFoundError(module_path) - - # ensure that user modules are only imported once - import_user_module.memo = getattr(import_user_module, "memo", set()) - if module_path not in import_user_module.memo: - import_user_module.memo.add(module_path) - - module_parent, module_name = os.path.split(module_path) - if module_name not in sys.modules: - sys.path.insert(0, module_parent) - importlib.import_module(module_name) - - tasks_path = os.path.join(module_path, "tasks") - if os.path.exists(tasks_path): - from fairseq.tasks import import_tasks - - import_tasks(tasks_path, f"{module_name}.tasks") - - models_path = os.path.join(module_path, "models") - if os.path.exists(models_path): - from fairseq.models import import_models - - import_models(models_path, f"{module_name}.models") - else: - raise ImportError( - "Failed to import --user-dir={} because the corresponding module name " - "({}) is not globally unique. 
Please rename the directory to "
-                "something unique and try again.".format(module_path, module_name)
-            )
-
-
-def softmax(x, dim: int, onnx_trace: bool = False):
-    if onnx_trace:
-        return F.softmax(x.float(), dim=dim)
-    else:
-        return F.softmax(x, dim=dim, dtype=torch.float32)
-
-
-def log_softmax(x, dim: int, onnx_trace: bool = False):
-    if onnx_trace:
-        return F.log_softmax(x.float(), dim=dim)
-    else:
-        return F.log_softmax(x, dim=dim, dtype=torch.float32)
-
-
-def get_perplexity(loss, round=2, base=2):
-    from fairseq.logging.meters import safe_round
-
-    if loss is None:
-        return 0.0
-    try:
-        return safe_round(base ** loss, round)
-    except OverflowError:
-        return float("inf")
-
-
-def deprecation_warning(message, stacklevel=3):
-    # don't use DeprecationWarning, since it's ignored by default
-    warnings.warn(message, stacklevel=stacklevel)
-
-
-def relu_squared(x: torch.Tensor):
-    return F.relu(x).pow(2)
-
-
-def get_activation_fn(activation: str) -> Callable:
-    """Returns the activation function corresponding to `activation`"""
-    from fairseq.modules import gelu, gelu_accurate
-
-    if activation == "relu":
-        return F.relu
-    elif activation == "relu_squared":
-        return relu_squared
-    elif activation == "gelu":
-        return gelu
-    elif activation == "gelu_fast":
-        deprecation_warning(
-            "--activation-fn=gelu_fast has been renamed to gelu_accurate"
-        )
-        return gelu_accurate
-    elif activation == "gelu_accurate":
-        return gelu_accurate
-    elif activation == "tanh":
-        return torch.tanh
-    elif activation == "linear":
-        return lambda x: x
-    elif activation == "swish":
-        return torch.nn.SiLU()
-    else:
-        raise RuntimeError("--activation-fn {} not supported".format(activation))
-
-
-def get_available_activation_fns() -> List:
-    return [
-        "relu",
-        "gelu",
-        "gelu_fast",  # deprecated
-        "gelu_accurate",
-        "tanh",
-        "linear",
-    ]
-
-
-@contextlib.contextmanager
-def model_eval(model):
-    is_training = model.training
-    model.eval()
-    yield
-    model.train(is_training)
-
-
-def has_parameters(module):
-    try:
-        next(module.parameters())
-        return True
-    except StopIteration:
-        return False
-
-
-def get_rng_state():
-    state = {"torch_rng_state": torch.get_rng_state()}
-    if xm is not None:
-        state["xla_rng_state"] = xm.get_rng_state()
-    if torch.cuda.is_available():
-        state["cuda_rng_state"] = torch.cuda.get_rng_state()
-    return state
-
-
-def set_rng_state(state):
-    torch.set_rng_state(state["torch_rng_state"])
-    if xm is not None:
-        xm.set_rng_state(state["xla_rng_state"])
-    if torch.cuda.is_available():
-        torch.cuda.set_rng_state(state["cuda_rng_state"])
-
-
-class set_torch_seed(object):
-    def __init__(self, seed):
-        assert isinstance(seed, int)
-        self.rng_state = get_rng_state()
-
-        torch.manual_seed(seed)
-        if xm is not None:
-            xm.set_rng_state(seed)
-        if torch.cuda.is_available():
-            torch.cuda.manual_seed(seed)
-
-    def __enter__(self):
-        return self
-
-    def __exit__(self, *exc):
-        set_rng_state(self.rng_state)
-
-
-def parse_alignment(line):
-    """
-    Parses a single line from the alignment file.
-
-    Args:
-        line (str): String containing the alignment of the format:
-            <src_idx_1>-<tgt_idx_1> <src_idx_2>-<tgt_idx_2> ..
-            <src_idx_m>-<tgt_idx_m>. All indices are 0 indexed.
-
-    Returns:
-        torch.IntTensor: packed alignments of shape (2 * m).
-    """
-    alignments = line.strip().split()
-    parsed_alignment = torch.IntTensor(2 * len(alignments))
-    for idx, alignment in enumerate(alignments):
-        src_idx, tgt_idx = alignment.split("-")
-        parsed_alignment[2 * idx] = int(src_idx)
-        parsed_alignment[2 * idx + 1] = int(tgt_idx)
-    return parsed_alignment
-
-
-def get_token_to_word_mapping(tokens, exclude_list):
-    n = len(tokens)
-    word_start = [int(token not in exclude_list) for token in tokens]
-    word_idx = list(accumulate(word_start))
-    token_to_word = {i: word_idx[i] for i in range(n)}
-    return token_to_word
-
-
-def extract_hard_alignment(attn, src_sent, tgt_sent, pad, eos):
-    tgt_valid = (
-        ((tgt_sent != pad) & (tgt_sent != eos)).nonzero(as_tuple=False).squeeze(dim=-1)
-    )
-    src_invalid = (
-        ((src_sent == pad) | (src_sent == eos)).nonzero(as_tuple=False).squeeze(dim=-1)
-    )
-    src_token_to_word = get_token_to_word_mapping(src_sent, [eos, pad])
-    tgt_token_to_word = get_token_to_word_mapping(tgt_sent, [eos, pad])
-    alignment = []
-    if len(tgt_valid) != 0 and len(src_invalid) < len(src_sent):
-        attn_valid = attn[tgt_valid]
-        attn_valid[:, src_invalid] = float("-inf")
-        _, src_indices = attn_valid.max(dim=1)
-        for tgt_idx, src_idx in zip(tgt_valid, src_indices):
-            alignment.append(
-                (
-                    src_token_to_word[src_idx.item()] - 1,
-                    tgt_token_to_word[tgt_idx.item()] - 1,
-                )
-            )
-    return alignment
-
-
-def extract_soft_alignment(attn, src_sent, tgt_sent, pad, eos):
-    tgt_valid = ((tgt_sent != pad)).nonzero(as_tuple=False)
-    src_valid = ((src_sent != pad)).nonzero(as_tuple=False).squeeze(dim=-1)
-    alignment = []
-    if len(tgt_valid) != 0 and len(src_valid) != 0:
-        attn_valid = attn[tgt_valid, src_valid]
-        alignment = [
-            ["{:.6f}".format(p) for p in src_probs.tolist()] for src_probs in attn_valid
-        ]
-    return alignment
-
-
-def new_arange(x, *size):
-    """
-    Return a Tensor of `size` filled with a range function on the device of x.
-    If size is empty, use the size of the variable x.
-    """
-    if len(size) == 0:
-        size = x.size()
-    return torch.arange(size[-1], device=x.device).expand(*size).contiguous()
-
-
-def get_tpu_device():
-    return xm.xla_device()
-
-
-def tpu_data_loader(itr):
-    import torch_xla.core.xla_model as xm
-    import torch_xla.distributed.parallel_loader as pl
-    from fairseq.data import iterators
-
-    xm.rendezvous("tpu_data_loader")  # wait for all workers
-    xm.mark_step()
-    device = xm.xla_device()
-    return iterators.CountingIterator(
-        pl.ParallelLoader(itr, [device]).per_device_loader(device),
-        start=getattr(itr, "n", 0),
-        total=len(itr),
-    )
-
-
-def is_xla_tensor(tensor):
-    return torch.is_tensor(tensor) and tensor.device.type == "xla"
-
-
-def index_put(tensor, indices, value):
-    if is_xla_tensor(tensor):
-        for _ in range(indices.dim(), tensor.dim()):
-            indices = indices.unsqueeze(-1)
-        if indices.size(-1) < tensor.size(-1):
-            indices = indices.expand_as(tensor)
-        tensor = torch.mul(tensor, ~indices) + torch.mul(value, indices)
-    else:
-        tensor[indices] = value
-    return tensor
-
-
-def xla_device_to_cpu(dat):
-    import torch_xla.core.xla_model as xm
-
-    return xm._maybe_convert_to_cpu(dat)
-
-
-class CudaEnvironment(object):
-    def __init__(self):
-        cur_device = torch.cuda.current_device()
-        prop = torch.cuda.get_device_properties("cuda:{}".format(cur_device))
-        self.name = prop.name
-        self.major = prop.major
-        self.minor = prop.minor
-        self.total_memory_in_GB = prop.total_memory / 1024 / 1024 / 1024
-
-    @staticmethod
-    def pretty_print_cuda_env_list(cuda_env_list):
-        """
-        Given a list of CudaEnvironments, pretty print them
-        """
-        num_workers = len(cuda_env_list)
-        center = "CUDA environments for all {} workers".format(num_workers)
-        banner_len = 40 - len(center) // 2
-        first_line = "*" * banner_len + center + "*" * banner_len
-        logger.info(first_line)
-        for r, env in enumerate(cuda_env_list):
-            logger.info(
-                "rank {:3d}: ".format(r)
-                + "capabilities = {:2d}.{:<2d} ; ".format(env.major, env.minor)
-                + "total memory = {:.3f} GB ; ".format(env.total_memory_in_GB)
-                + "name = {:40s}".format(env.name)
-            )
-        logger.info(first_line)
-
-
-def csv_str_list(x):
-    return x.split(",")
-
-
-def eval_str_list(x, type=float):
-    if x is None:
-        return None
-    if isinstance(x, str):
-        x = eval(x)
-    try:
-        return list(map(type, x))
-    except TypeError:
-        return [type(x)]
-
-
-def eval_str_dict(x, type=dict):
-    if x is None:
-        return None
-    if isinstance(x, str):
-        x = eval(x)
-    return x
-
-
-def eval_bool(x, default=False):
-    if x is None:
-        return default
-    try:
-        return bool(eval(x))
-    except TypeError:
-        return default
-
-
-def reset_logging():
-    root = logging.getLogger()
-    for handler in root.handlers:
-        root.removeHandler(handler)
-    root.setLevel(os.environ.get("LOGLEVEL", "INFO").upper())
-    handler = logging.StreamHandler(sys.stdout)
-    handler.setFormatter(
-        logging.Formatter(
-            fmt="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
-            datefmt="%Y-%m-%d %H:%M:%S",
-        )
-    )
-    root.addHandler(handler)
-
-
-def safe_getattr(obj, k, default=None):
-    """Returns obj[k] if it exists and is not None, otherwise returns default."""
-    from omegaconf import OmegaConf
-
-    if OmegaConf.is_config(obj):
-        return obj[k] if k in obj and obj[k] is not None else default
-
-    return getattr(obj, k, default)
-
-
-def safe_hasattr(obj, k):
-    """Returns True if the given key exists and is not None."""
-    return getattr(obj, k, None) is not None
diff --git a/kosmos-g/fairseq/fairseq/version.txt b/kosmos-g/fairseq/fairseq/version.txt
deleted file mode 100644
index 41432f00d..000000000
--- a/kosmos-g/fairseq/fairseq/version.txt
+++ /dev/null
@@ -1 +0,0 @@
-1.0.0a0
diff --git a/kosmos-g/fairseq/fairseq_cli/__init__.py b/kosmos-g/fairseq/fairseq_cli/__init__.py
deleted file mode 100644
index e69de29bb..000000000
diff --git a/kosmos-g/fairseq/fairseq_cli/eval_lm.py b/kosmos-g/fairseq/fairseq_cli/eval_lm.py
deleted file mode 100644
index ab6e77029..000000000
--- a/kosmos-g/fairseq/fairseq_cli/eval_lm.py
+++ /dev/null
@@ -1,347 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Evaluate the perplexity of a trained language model.
-"""
-
-import logging
-import math
-import os
-import sys
-from argparse import Namespace
-from typing import Iterable, List, Optional
-
-import torch
-import fairseq
-from fairseq import checkpoint_utils, distributed_utils, options, tasks, utils
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-from fairseq.logging import progress_bar
-from fairseq.logging.meters import StopwatchMeter
-from fairseq.sequence_scorer import SequenceScorer
-from omegaconf import DictConfig
-
-
-logging.basicConfig(
-    format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
-    datefmt="%Y-%m-%d %H:%M:%S",
-    level=os.environ.get("LOGLEVEL", "INFO").upper(),
-    stream=sys.stdout,
-)
-logger = logging.getLogger("fairseq_cli.eval_lm")
-
-
-def eval_lm(
-    models: List[fairseq.models.FairseqModel],
-    source_dictionary: fairseq.data.Dictionary,
-    batch_iterator: Iterable,
-    post_process: Optional[str] = None,
-    output_word_probs: bool = False,
-    output_word_stats: bool = False,
-    target_dictionary: Optional[fairseq.data.Dictionary] = None,
-    softmax_batch: int = 0,
-    remove_bos_token: bool = False,
-    device: Optional[torch.device] = None,
-):
-    """
-    Args:
-        models (List[~fairseq.models.FairseqModel]): list of models to
-            evaluate. Models are essentially `nn.Module` instances, but
-            must be compatible with fairseq's `SequenceScorer`.
-        source_dictionary (~fairseq.data.Dictionary): dictionary for
-            applying any relevant post processing or outputting word
-            probs/stats.
-        batch_iterator (Iterable): yield batches of data
-        post_process (Optional[str]): post-process text by removing BPE,
-            letter segmentation, etc. Valid options can be found in
-            fairseq.data.utils.post_process, although not all options
-            are implemented here.
-        output_word_probs (Optional[bool]): output words and their
-            predicted log probabilities
-        output_word_stats (Optional[bool]): output word statistics such
-            as word count and average probability
-        target_dictionary (Optional[~fairseq.data.Dictionary]): output
-            dictionary (defaults to *source_dictionary*)
-        softmax_batch (Optional[int]): if BxT is more than this, will
-            batch the softmax over vocab to this amount of tokens, in
-            order to fit into GPU memory
-        remove_bos_token (Optional[bool]): if True, confirm that the
-            first token is the beginning-of-sentence symbol (according
-            to the relevant dictionary) and remove it from the output
-        device (Optional[torch.device]): device to use for evaluation
-            (defaults to device of first model parameter)
-    """
-    if target_dictionary is None:
-        target_dictionary = source_dictionary
-    if device is None:
-        device = next(models[0].parameters()).device
-
-    gen_timer = StopwatchMeter()
-    scorer = SequenceScorer(target_dictionary, softmax_batch)
-
-    score_sum = 0.0
-    count = 0
-
-    if post_process is not None:
-        if post_process in {"subword_nmt", "@@ "}:
-            bpe_cont = post_process.rstrip()
-            bpe_toks = {
-                i
-                for i in range(len(source_dictionary))
-                if source_dictionary[i].endswith(bpe_cont)
-            }
-        else:
-            raise NotImplementedError(
-                f"--post-process={post_process} is not implemented"
-            )
-        bpe_len = len(bpe_cont)
-    else:
-        bpe_toks = None
-        bpe_len = 0
-
-    word_stats = dict()
-
-    for sample in batch_iterator:
-        if "net_input" not in sample:
-            continue
-
-        sample = utils.move_to_cuda(sample, device=device)
-
-        gen_timer.start()
-        hypos = scorer.generate(models, sample)
-        gen_timer.stop(sample["ntokens"])
-
-        for i, hypos_i in enumerate(hypos):
-            hypo = hypos_i[0]
-            sample_id = sample["id"][i]
-
-            tokens = hypo["tokens"]
-            tgt_len = tokens.numel()
-            pos_scores = hypo["positional_scores"].float()
-
-            if remove_bos_token:
-                assert hypo["tokens"][0].item() == target_dictionary.bos()
-                tokens = tokens[1:]
-                pos_scores = pos_scores[1:]
-
-            skipped_toks = 0
-            if bpe_toks is not None:
-                for i in range(tgt_len - 1):
-                    if tokens[i].item() in bpe_toks:
-                        skipped_toks += 1
-                        pos_scores[i + 1] += pos_scores[i]
-                        pos_scores[i] = 0
-
-            inf_scores = pos_scores.eq(float("inf")) | pos_scores.eq(float("-inf"))
-            if inf_scores.any():
-                logger.info(
-                    "skipping tokens with inf scores: %s",
-                    target_dictionary.string(tokens[inf_scores.nonzero()]),
-                )
-                pos_scores = pos_scores[(~inf_scores).nonzero()]
-            score_sum += pos_scores.sum().cpu()
-            count += pos_scores.numel() - skipped_toks
-
-            if output_word_probs or output_word_stats:
-                w = ""
-                word_prob = []
-                is_bpe = False
-                for i in range(len(tokens)):
-                    w_ind = tokens[i].item()
-                    w += source_dictionary[w_ind]
-                    if bpe_toks is not None and w_ind in bpe_toks:
-                        w = w[:-bpe_len]
-                        is_bpe = True
-                    else:
-                        word_prob.append((w, pos_scores[i].item()))
-
-                        next_prob = None
-                        ind = i + 1
-                        while ind < len(tokens):
-                            if pos_scores[ind].item() != 0:
-                                next_prob = pos_scores[ind]
-                                break
-                            ind += 1
-
-                        word_stats.setdefault(w, WordStat(w, is_bpe)).add(
-                            pos_scores[i].item(), next_prob
-                        )
-                        is_bpe = False
-                        w = ""
-                if output_word_probs:
-                    logger.info(
-                        str(int(sample_id))
-                        + " "
-                        + (
-                            "\t".join(
-                                "{} [{:2f}]".format(x[0], x[1]) for x in word_prob
-                            )
-                        )
-                    )
-
-    avg_nll_loss = (
-        -score_sum / count / math.log(2) if count > 0 else 0
-    )  # convert to base 2
-    logger.info(
-        "Evaluated {:,} tokens in {:.1f}s ({:.2f} tokens/s)".format(
-            gen_timer.n, gen_timer.sum, 1.0 / gen_timer.avg if gen_timer.avg > 0 else 0
-        )
-    )
-
-    if output_word_stats:
-        for ws in sorted(word_stats.values(), key=lambda x: x.count, reverse=True):
-            logger.info(ws)
-
-    return {
-        "loss": avg_nll_loss,
-        "perplexity": 2 ** avg_nll_loss,
-    }
-
-
-class WordStat(object):
-    def __init__(self, word, is_bpe):
-        self.word = word
-        self.is_bpe = is_bpe
-        self.log_prob = 0
-        self.next_word_prob = 0
-        self.count = 0
-        self.missing_next_words = 0
-
-    def add(self, log_prob, next_word_prob):
-        """increments counters for the sum of log probs of current word and next
-        word (given context ending at current word). Since the next word might be at the end of the example,
-        or it might not be counted because it is not an ending subword unit,
-        also keeps track of how many of those we have seen"""
-        if next_word_prob is not None:
-            self.next_word_prob += next_word_prob
-        else:
-            self.missing_next_words += 1
-        self.log_prob += log_prob
-        self.count += 1
-
-    def __str__(self):
-        return "{}\t{}\t{}\t{}\t{}\t{}".format(
-            self.word,
-            self.count,
-            self.log_prob,
-            self.is_bpe,
-            self.next_word_prob,
-            self.count - self.missing_next_words,
-        )
-
-
-def main(cfg: DictConfig, **unused_kwargs):
-    if isinstance(cfg, Namespace):
-        cfg = convert_namespace_to_omegaconf(cfg)
-
-    utils.import_user_module(cfg.common)
-
-    logger.info(cfg)
-
-    if cfg.eval_lm.context_window > 0:
-        # reduce tokens per sample by the required context window size
-        cfg.task.tokens_per_sample -= cfg.eval_lm.context_window
-
-    # Initialize the task using the current *cfg*
-    task = tasks.setup_task(cfg.task)
-
-    # Load ensemble
-    logger.info("loading model(s) from {}".format(cfg.common_eval.path))
-    models, model_args, task = checkpoint_utils.load_model_ensemble_and_task(
-        [cfg.common_eval.path],
-        arg_overrides=eval(cfg.common_eval.model_overrides),
-        suffix=cfg.checkpoint.checkpoint_suffix,
-        strict=(cfg.checkpoint.checkpoint_shard_count == 1),
-        num_shards=cfg.checkpoint.checkpoint_shard_count,
-        task=task,
-    )
-
-    use_fp16 = cfg.common.fp16
-    use_cuda = torch.cuda.is_available() and not cfg.common.cpu
-    if use_cuda:
-        torch.cuda.set_device(cfg.distributed_training.device_id)
-
-    # Optimize ensemble for generation and set the source and dest dicts on the model
-    # (required by scorer)
-    for model in models:
-        if use_fp16:
-            model.half()
-        if use_cuda and not cfg.distributed_training.pipeline_model_parallel:
-            model.cuda()
-        model.prepare_for_inference_(cfg)
-
-    assert len(models) > 0
-
-    logger.info(
-        "num. 
model params: {:,}".format(sum(p.numel() for p in models[0].parameters())) - ) - - # Load dataset splits - task.load_dataset(cfg.dataset.gen_subset) - dataset = task.dataset(cfg.dataset.gen_subset) - logger.info( - "{} {} {:,} examples".format( - cfg.task.data, cfg.dataset.gen_subset, len(dataset) - ) - ) - - itr = task.eval_lm_dataloader( - dataset=dataset, - max_tokens=cfg.dataset.max_tokens or 36000, - batch_size=cfg.dataset.batch_size, - max_positions=utils.resolve_max_positions( - *[model.max_positions() for model in models] - ), - num_shards=max( - cfg.dataset.num_shards, - cfg.distributed_training.distributed_world_size, - ), - shard_id=max( - cfg.dataset.shard_id, - cfg.distributed_training.distributed_rank, - ), - num_workers=cfg.dataset.num_workers, - data_buffer_size=cfg.dataset.data_buffer_size, - context_window=cfg.eval_lm.context_window, - ) - - itr = progress_bar.progress_bar( - itr, - log_format=cfg.common.log_format, - log_interval=cfg.common.log_interval, - default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple"), - ) - - results = eval_lm( - models=models, - source_dictionary=task.source_dictionary, - batch_iterator=itr, - post_process=cfg.common_eval.post_process, - output_word_probs=cfg.eval_lm.output_word_probs, - output_word_stats=cfg.eval_lm.output_word_stats, - target_dictionary=task.target_dictionary, - softmax_batch=cfg.eval_lm.softmax_batch, - remove_bos_token=getattr(cfg.task, "add_bos_token", False), - ) - - logger.info( - "Loss (base 2): {:.4f}, Perplexity: {:.2f}".format( - results["loss"], results["perplexity"] - ) - ) - - return results - - -def cli_main(): - parser = options.get_eval_lm_parser() - args = options.parse_args_and_arch(parser) - - distributed_utils.call_main(convert_namespace_to_omegaconf(args), main) - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/fairseq_cli/generate.py b/kosmos-g/fairseq/fairseq_cli/generate.py deleted file mode 100644 index b8757835d..000000000 --- a/kosmos-g/fairseq/fairseq_cli/generate.py +++ /dev/null @@ -1,417 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Translate pre-processed data with a trained model. -""" - -import ast -import logging -import math -import os -import sys -from argparse import Namespace -from itertools import chain - -import numpy as np -import torch -from omegaconf import DictConfig - -from fairseq import checkpoint_utils, options, scoring, tasks, utils -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.logging import progress_bar -from fairseq.logging.meters import StopwatchMeter, TimeMeter - - -def main(cfg: DictConfig): - - if isinstance(cfg, Namespace): - cfg = convert_namespace_to_omegaconf(cfg) - - assert cfg.common_eval.path is not None, "--path required for generation!" 
- assert ( - not cfg.generation.sampling or cfg.generation.nbest == cfg.generation.beam - ), "--sampling requires --nbest to be equal to --beam" - assert ( - cfg.generation.replace_unk is None or cfg.dataset.dataset_impl == "raw" - ), "--replace-unk requires a raw text dataset (--dataset-impl=raw)" - - if cfg.common_eval.results_path is not None: - os.makedirs(cfg.common_eval.results_path, exist_ok=True) - output_path = os.path.join( - cfg.common_eval.results_path, - "generate-{}.txt".format(cfg.dataset.gen_subset), - ) - with open(output_path, "w", buffering=1, encoding="utf-8") as h: - return _main(cfg, h) - else: - return _main(cfg, sys.stdout) - - -def get_symbols_to_strip_from_output(generator): - if hasattr(generator, "symbols_to_strip_from_output"): - return generator.symbols_to_strip_from_output - else: - return {generator.eos} - - -def _main(cfg: DictConfig, output_file): - logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=output_file, - ) - logger = logging.getLogger("fairseq_cli.generate") - - utils.import_user_module(cfg.common) - - if cfg.dataset.max_tokens is None and cfg.dataset.batch_size is None: - cfg.dataset.max_tokens = 12000 - logger.info(cfg) - - # Fix seed for stochastic decoding - if cfg.common.seed is not None and not cfg.generation.no_seed_provided: - np.random.seed(cfg.common.seed) - utils.set_torch_seed(cfg.common.seed) - - use_cuda = torch.cuda.is_available() and not cfg.common.cpu - - # Load dataset splits - task = tasks.setup_task(cfg.task) - - # Set dictionaries - try: - src_dict = getattr(task, "source_dictionary", None) - except NotImplementedError: - src_dict = None - tgt_dict = task.target_dictionary - - overrides = ast.literal_eval(cfg.common_eval.model_overrides) - - # Load ensemble - logger.info("loading model(s) from {}".format(cfg.common_eval.path)) - models, saved_cfg = checkpoint_utils.load_model_ensemble( - utils.split_paths(cfg.common_eval.path), - arg_overrides=overrides, - task=task, - suffix=cfg.checkpoint.checkpoint_suffix, - strict=(cfg.checkpoint.checkpoint_shard_count == 1), - num_shards=cfg.checkpoint.checkpoint_shard_count, - ) - - # loading the dataset should happen after the checkpoint has been loaded so we can give it the saved task config - task.load_dataset(cfg.dataset.gen_subset, task_cfg=saved_cfg.task) - - if cfg.generation.lm_path is not None: - overrides["data"] = cfg.task.data - - try: - lms, _ = checkpoint_utils.load_model_ensemble( - [cfg.generation.lm_path], arg_overrides=overrides, task=None - ) - except: - logger.warning( - f"Failed to load language model! 
Please make sure that the language model dict is the same " - f"as target dict and is located in the data dir ({cfg.task.data})" - ) - raise - - assert len(lms) == 1 - else: - lms = [None] - - # Optimize ensemble for generation - for model in chain(models, lms): - if model is None: - continue - if cfg.common.fp16: - model.half() - if use_cuda and not cfg.distributed_training.pipeline_model_parallel: - model.cuda() - model.prepare_for_inference_(cfg) - - # Load alignment dictionary for unknown word replacement - # (None if no unknown word replacement, empty if no path to align dictionary) - align_dict = utils.load_align_dict(cfg.generation.replace_unk) - - # Load dataset (possibly sharded) - itr = task.get_batch_iterator( - dataset=task.dataset(cfg.dataset.gen_subset), - max_tokens=cfg.dataset.max_tokens, - max_sentences=cfg.dataset.batch_size, - max_positions=utils.resolve_max_positions( - task.max_positions(), *[m.max_positions() for m in models] - ), - ignore_invalid_inputs=cfg.dataset.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=cfg.dataset.required_batch_size_multiple, - seed=cfg.common.seed, - num_shards=cfg.distributed_training.distributed_world_size, - shard_id=cfg.distributed_training.distributed_rank, - num_workers=cfg.dataset.num_workers, - data_buffer_size=cfg.dataset.data_buffer_size, - ).next_epoch_itr(shuffle=False) - progress = progress_bar.progress_bar( - itr, - log_format=cfg.common.log_format, - log_interval=cfg.common.log_interval, - default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple"), - ) - - # Initialize generator - gen_timer = StopwatchMeter() - - extra_gen_cls_kwargs = {"lm_model": lms[0], "lm_weight": cfg.generation.lm_weight} - generator = task.build_generator( - models, cfg.generation, extra_gen_cls_kwargs=extra_gen_cls_kwargs - ) - - # Handle tokenization and BPE - tokenizer = task.build_tokenizer(cfg.tokenizer) - bpe = task.build_bpe(cfg.bpe) - - def decode_fn(x): - if bpe is not None: - x = bpe.decode(x) - if tokenizer is not None: - x = tokenizer.decode(x) - return x - - scorer = scoring.build_scorer(cfg.scoring, tgt_dict) - - num_sentences = 0 - has_target = True - wps_meter = TimeMeter() - for sample in progress: - sample = utils.move_to_cuda(sample) if use_cuda else sample - if "net_input" not in sample: - continue - - prefix_tokens = None - if cfg.generation.prefix_size > 0: - prefix_tokens = sample["target"][:, : cfg.generation.prefix_size] - - constraints = None - if "constraints" in sample: - constraints = sample["constraints"] - - gen_timer.start() - hypos = task.inference_step( - generator, - models, - sample, - prefix_tokens=prefix_tokens, - constraints=constraints, - ) - num_generated_tokens = sum(len(h[0]["tokens"]) for h in hypos) - gen_timer.stop(num_generated_tokens) - - for i, sample_id in enumerate(sample["id"].tolist()): - has_target = sample["target"] is not None - - # Remove padding - if "src_tokens" in sample["net_input"]: - src_tokens = utils.strip_pad( - sample["net_input"]["src_tokens"][i, :], tgt_dict.pad() - ) - else: - src_tokens = None - - target_tokens = None - if has_target: - target_tokens = ( - utils.strip_pad(sample["target"][i, :], tgt_dict.pad()).int().cpu() - ) - - # Either retrieve the original sentences or regenerate them from tokens. 
- if align_dict is not None: - src_str = task.dataset(cfg.dataset.gen_subset).src.get_original_text( - sample_id - ) - target_str = task.dataset(cfg.dataset.gen_subset).tgt.get_original_text( - sample_id - ) - else: - if src_dict is not None: - src_str = src_dict.string(src_tokens, cfg.common_eval.post_process) - else: - src_str = "" - if has_target: - target_str = tgt_dict.string( - target_tokens, - cfg.common_eval.post_process, - escape_unk=True, - extra_symbols_to_ignore=get_symbols_to_strip_from_output( - generator - ), - ) - - src_str = decode_fn(src_str) - if has_target: - target_str = decode_fn(target_str) - - if not cfg.common_eval.quiet: - if src_dict is not None: - print("S-{}\t{}".format(sample_id, src_str), file=output_file) - if has_target: - print("T-{}\t{}".format(sample_id, target_str), file=output_file) - - # Process top predictions - for j, hypo in enumerate(hypos[i][: cfg.generation.nbest]): - hypo_tokens, hypo_str, alignment = utils.post_process_prediction( - hypo_tokens=hypo["tokens"].int().cpu(), - src_str=src_str, - alignment=hypo["alignment"], - align_dict=align_dict, - tgt_dict=tgt_dict, - remove_bpe=cfg.common_eval.post_process, - extra_symbols_to_ignore=get_symbols_to_strip_from_output(generator), - ) - detok_hypo_str = decode_fn(hypo_str) - if not cfg.common_eval.quiet: - score = hypo["score"] / math.log(2) # convert to base 2 - # original hypothesis (after tokenization and BPE) - print( - "H-{}\t{}\t{}".format(sample_id, score, hypo_str), - file=output_file, - ) - # detokenized hypothesis - print( - "D-{}\t{}\t{}".format(sample_id, score, detok_hypo_str), - file=output_file, - ) - print( - "P-{}\t{}".format( - sample_id, - " ".join( - map( - lambda x: "{:.4f}".format(x), - # convert from base e to base 2 - hypo["positional_scores"] - .div_(math.log(2)) - .tolist(), - ) - ), - ), - file=output_file, - ) - - if cfg.generation.print_alignment == "hard": - print( - "A-{}\t{}".format( - sample_id, - " ".join( - [ - "{}-{}".format(src_idx, tgt_idx) - for src_idx, tgt_idx in alignment - ] - ), - ), - file=output_file, - ) - if cfg.generation.print_alignment == "soft": - print( - "A-{}\t{}".format( - sample_id, - " ".join( - [",".join(src_probs) for src_probs in alignment] - ), - ), - file=output_file, - ) - - if cfg.generation.print_step: - print( - "I-{}\t{}".format(sample_id, hypo["steps"]), - file=output_file, - ) - - if cfg.generation.retain_iter_history: - for step, h in enumerate(hypo["history"]): - _, h_str, _ = utils.post_process_prediction( - hypo_tokens=h["tokens"].int().cpu(), - src_str=src_str, - alignment=None, - align_dict=None, - tgt_dict=tgt_dict, - remove_bpe=None, - ) - print( - "E-{}_{}\t{}".format(sample_id, step, h_str), - file=output_file, - ) - - # Score only the top hypothesis - if has_target and j == 0: - if ( - align_dict is not None - or cfg.common_eval.post_process is not None - ): - # Convert back to tokens for evaluation with unk replacement and/or without BPE - target_tokens = tgt_dict.encode_line( - target_str, add_if_not_exist=True - ) - hypo_tokens = tgt_dict.encode_line( - detok_hypo_str, add_if_not_exist=True - ) - if hasattr(scorer, "add_string"): - scorer.add_string(target_str, detok_hypo_str) - else: - scorer.add(target_tokens, hypo_tokens) - - wps_meter.update(num_generated_tokens) - progress.log({"wps": round(wps_meter.avg)}) - num_sentences += ( - sample["nsentences"] if "nsentences" in sample else sample["id"].numel() - ) - - logger.info("NOTE: hypothesis and token scores are output in base 2") - logger.info( - "Translated 
{:,} sentences ({:,} tokens) in {:.1f}s ({:.2f} sentences/s, {:.2f} tokens/s)".format(
-            num_sentences,
-            gen_timer.n,
-            gen_timer.sum,
-            num_sentences / gen_timer.sum,
-            1.0 / gen_timer.avg,
-        )
-    )
-    if has_target:
-        if cfg.bpe and not cfg.generation.sacrebleu:
-            if cfg.common_eval.post_process:
-                logger.warning(
-                    "BLEU score is being computed by splitting detokenized string on spaces, this is probably not what you want. Use --sacrebleu for standard 13a BLEU tokenization"
-                )
-            else:
-                logger.warning(
-                    "If you are using BPE on the target side, the BLEU score is computed on BPE tokens, not on proper words. Use --sacrebleu for standard 13a BLEU tokenization"
-                )
-        # use print to be consistent with other main outputs: S-, H-, T-, D- and so on
-        print(
-            "Generate {} with beam={}: {}".format(
-                cfg.dataset.gen_subset, cfg.generation.beam, scorer.result_string()
-            ),
-            file=output_file,
-        )
-
-    return scorer
-
-
-def cli_main():
-    parser = options.get_generation_parser()
-    # TODO: replace this workaround with refactoring of `AudioPretraining`
-    parser.add_argument(
-        "--arch",
-        "-a",
-        metavar="ARCH",
-        default="wav2vec2",
-        help="Model architecture. For constructing tasks that rely on "
-        "model args (e.g. `AudioPretraining`)",
-    )
-    args = options.parse_args_and_arch(parser)
-    main(args)
-
-
-if __name__ == "__main__":
-    cli_main()
diff --git a/kosmos-g/fairseq/fairseq_cli/hydra_train.py b/kosmos-g/fairseq/fairseq_cli/hydra_train.py
deleted file mode 100644
index 607340af0..000000000
--- a/kosmos-g/fairseq/fairseq_cli/hydra_train.py
+++ /dev/null
@@ -1,91 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-
-import hydra
-import torch
-from hydra.core.hydra_config import HydraConfig
-from omegaconf import OmegaConf, open_dict
-
-from fairseq import distributed_utils, metrics
-from fairseq.dataclass.configs import FairseqConfig
-from fairseq.dataclass.initialize import add_defaults, hydra_init
-from fairseq.dataclass.utils import omegaconf_no_object_check
-from fairseq.utils import reset_logging
-from fairseq_cli.train import main as pre_main
-
-logger = logging.getLogger("fairseq_cli.hydra_train")
-
-
-@hydra.main(config_path=os.path.join("..", "fairseq", "config"), config_name="config")
-def hydra_main(cfg: FairseqConfig) -> float:
-    _hydra_main(cfg)
-
-
-def _hydra_main(cfg: FairseqConfig, **kwargs) -> float:
-    add_defaults(cfg)
-
-    if cfg.common.reset_logging:
-        reset_logging()  # Hydra hijacks logging, fix that
-    else:
-        # check if directly called or called through hydra_main
-        if HydraConfig.initialized():
-            with open_dict(cfg):
-                # make hydra logging work with ddp (see https://github.com/facebookresearch/hydra/issues/1126)
-                cfg.job_logging_cfg = OmegaConf.to_container(
-                    HydraConfig.get().job_logging, resolve=True
-                )
-
-    with omegaconf_no_object_check():
-        cfg = OmegaConf.create(
-            OmegaConf.to_container(cfg, resolve=True, enum_to_str=True)
-        )
-    OmegaConf.set_struct(cfg, True)
-
-    try:
-        if cfg.common.profile:
-            with torch.cuda.profiler.profile():
-                with torch.autograd.profiler.emit_nvtx():
-                    distributed_utils.call_main(cfg, pre_main, **kwargs)
-        else:
-            distributed_utils.call_main(cfg, pre_main, **kwargs)
-    except BaseException as e:
-        if not cfg.common.suppress_crashes:
-            raise
-        else:
-            logger.error("Crashed! " + str(e))
-
-    # get best val and return - useful for sweepers
-    try:
-        best_val = metrics.get_smoothed_value(
-            "valid", cfg.checkpoint.best_checkpoint_metric
-        )
-    except:
-        best_val = None
-
-    if best_val is None:
-        best_val = float("inf")
-
-    return best_val
-
-
-def cli_main():
-    try:
-        from hydra._internal.utils import get_args
-
-        cfg_name = get_args().config_name or "config"
-    except:
-        logger.warning("Failed to get config name from hydra args")
-        cfg_name = "config"
-
-    hydra_init(cfg_name)
-    hydra_main()
-
-
-if __name__ == "__main__":
-    cli_main()
diff --git a/kosmos-g/fairseq/fairseq_cli/interactive.py b/kosmos-g/fairseq/fairseq_cli/interactive.py
deleted file mode 100644
index 03265d00e..000000000
--- a/kosmos-g/fairseq/fairseq_cli/interactive.py
+++ /dev/null
@@ -1,317 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-Translate raw text with a trained model. Batches data on-the-fly.
-"""
-
-import ast
-import fileinput
-import logging
-import math
-import os
-import sys
-import time
-from argparse import Namespace
-from collections import namedtuple
-
-import numpy as np
-import torch
-
-from fairseq import checkpoint_utils, distributed_utils, options, tasks, utils
-from fairseq.dataclass.configs import FairseqConfig
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-from fairseq.token_generation_constraints import pack_constraints, unpack_constraints
-from fairseq_cli.generate import get_symbols_to_strip_from_output
-
-logging.basicConfig(
-    format="%(asctime)s | %(levelname)s | %(name)s | %(message)s",
-    datefmt="%Y-%m-%d %H:%M:%S",
-    level=os.environ.get("LOGLEVEL", "INFO").upper(),
-    stream=sys.stdout,
-)
-logger = logging.getLogger("fairseq_cli.interactive")
-
-
-Batch = namedtuple("Batch", "ids src_tokens src_lengths constraints")
-Translation = namedtuple("Translation", "src_str hypos pos_scores alignments")
-
-
-def buffered_read(input, buffer_size):
-    buffer = []
-    with fileinput.input(files=[input], openhook=fileinput.hook_encoded("utf-8")) as h:
-        for src_str in h:
-            buffer.append(src_str.strip())
-            if len(buffer) >= buffer_size:
-                yield buffer
-                buffer = []
-
-    if len(buffer) > 0:
-        yield buffer
-
-
-def make_batches(lines, cfg, task, max_positions, encode_fn):
-    def encode_fn_target(x):
-        return encode_fn(x)
-
-    if cfg.generation.constraints:
-        # Strip (tab-delimited) constraints, if present, from input lines,
-        # store them in batch_constraints
-        batch_constraints = [list() for _ in lines]
-        for i, line in enumerate(lines):
-            if "\t" in line:
-                lines[i], *batch_constraints[i] = line.split("\t")
-
-        # Convert each List[str] to List[Tensor]
-        for i, constraint_list in enumerate(batch_constraints):
-            batch_constraints[i] = [
-                task.target_dictionary.encode_line(
-                    encode_fn_target(constraint),
-                    append_eos=False,
-                    add_if_not_exist=False,
-                )
-                for constraint in constraint_list
-            ]
-
-    if cfg.generation.constraints:
-        constraints_tensor = pack_constraints(batch_constraints)
-    else:
-        constraints_tensor = None
-
-    tokens, lengths = task.get_interactive_tokens_and_lengths(lines, encode_fn)
-
-    itr = task.get_batch_iterator(
-        dataset=task.build_dataset_for_inference(
-            tokens, lengths, constraints=constraints_tensor
-        ),
-        max_tokens=cfg.dataset.max_tokens,
-        max_sentences=cfg.dataset.batch_size,
-        max_positions=max_positions,
- 
ignore_invalid_inputs=cfg.dataset.skip_invalid_size_inputs_valid_test, - ).next_epoch_itr(shuffle=False) - for batch in itr: - ids = batch["id"] - src_tokens = batch["net_input"]["src_tokens"] - src_lengths = batch["net_input"]["src_lengths"] - constraints = batch.get("constraints", None) - - yield Batch( - ids=ids, - src_tokens=src_tokens, - src_lengths=src_lengths, - constraints=constraints, - ) - - -def main(cfg: FairseqConfig): - if isinstance(cfg, Namespace): - cfg = convert_namespace_to_omegaconf(cfg) - - start_time = time.time() - total_translate_time = 0 - - utils.import_user_module(cfg.common) - - if cfg.interactive.buffer_size < 1: - cfg.interactive.buffer_size = 1 - if cfg.dataset.max_tokens is None and cfg.dataset.batch_size is None: - cfg.dataset.batch_size = 1 - - assert ( - not cfg.generation.sampling or cfg.generation.nbest == cfg.generation.beam - ), "--sampling requires --nbest to be equal to --beam" - assert ( - not cfg.dataset.batch_size - or cfg.dataset.batch_size <= cfg.interactive.buffer_size - ), "--batch-size cannot be larger than --buffer-size" - - logger.info(cfg) - - # Fix seed for stochastic decoding - if cfg.common.seed is not None and not cfg.generation.no_seed_provided: - np.random.seed(cfg.common.seed) - utils.set_torch_seed(cfg.common.seed) - - use_cuda = torch.cuda.is_available() and not cfg.common.cpu - - # Setup task, e.g., translation - task = tasks.setup_task(cfg.task) - - # Load ensemble - overrides = ast.literal_eval(cfg.common_eval.model_overrides) - logger.info("loading model(s) from {}".format(cfg.common_eval.path)) - models, _model_args = checkpoint_utils.load_model_ensemble( - utils.split_paths(cfg.common_eval.path), - arg_overrides=overrides, - task=task, - suffix=cfg.checkpoint.checkpoint_suffix, - strict=(cfg.checkpoint.checkpoint_shard_count == 1), - num_shards=cfg.checkpoint.checkpoint_shard_count, - ) - - # Set dictionaries - src_dict = task.source_dictionary - tgt_dict = task.target_dictionary - - # Optimize ensemble for generation - for model in models: - if model is None: - continue - if cfg.common.fp16: - model.half() - if use_cuda and not cfg.distributed_training.pipeline_model_parallel: - model.cuda() - model.prepare_for_inference_(cfg) - - # Initialize generator - generator = task.build_generator(models, cfg.generation) - - # Handle tokenization and BPE - tokenizer = task.build_tokenizer(cfg.tokenizer) - bpe = task.build_bpe(cfg.bpe) - - def encode_fn(x): - if tokenizer is not None: - x = tokenizer.encode(x) - if bpe is not None: - x = bpe.encode(x) - return x - - def decode_fn(x): - if bpe is not None: - x = bpe.decode(x) - if tokenizer is not None: - x = tokenizer.decode(x) - return x - - # Load alignment dictionary for unknown word replacement - # (None if no unknown word replacement, empty if no path to align dictionary) - align_dict = utils.load_align_dict(cfg.generation.replace_unk) - - max_positions = utils.resolve_max_positions( - task.max_positions(), *[model.max_positions() for model in models] - ) - - if cfg.generation.constraints: - logger.warning( - "NOTE: Constrained decoding currently assumes a shared subword vocabulary." 
- ) - - if cfg.interactive.buffer_size > 1: - logger.info("Sentence buffer size: %s", cfg.interactive.buffer_size) - logger.info("NOTE: hypothesis and token scores are output in base 2") - logger.info("Type the input sentence and press return:") - start_id = 0 - for inputs in buffered_read(cfg.interactive.input, cfg.interactive.buffer_size): - results = [] - for batch in make_batches(inputs, cfg, task, max_positions, encode_fn): - bsz = batch.src_tokens.size(0) - src_tokens = batch.src_tokens - src_lengths = batch.src_lengths - constraints = batch.constraints - if use_cuda: - src_tokens = src_tokens.cuda() - src_lengths = src_lengths.cuda() - if constraints is not None: - constraints = constraints.cuda() - - sample = { - "net_input": { - "src_tokens": src_tokens, - "src_lengths": src_lengths, - }, - } - translate_start_time = time.time() - translations = task.inference_step( - generator, models, sample, constraints=constraints - ) - translate_time = time.time() - translate_start_time - total_translate_time += translate_time - list_constraints = [[] for _ in range(bsz)] - if cfg.generation.constraints: - list_constraints = [unpack_constraints(c) for c in constraints] - for i, (id, hypos) in enumerate(zip(batch.ids.tolist(), translations)): - src_tokens_i = utils.strip_pad(src_tokens[i], tgt_dict.pad()) - constraints = list_constraints[i] - results.append( - ( - start_id + id, - src_tokens_i, - hypos, - { - "constraints": constraints, - "time": translate_time / len(translations), - }, - ) - ) - - # sort output to match input order - for id_, src_tokens, hypos, info in sorted(results, key=lambda x: x[0]): - src_str = "" - if src_dict is not None: - src_str = src_dict.string(src_tokens, cfg.common_eval.post_process) - print("S-{}\t{}".format(id_, src_str)) - print("W-{}\t{:.3f}\tseconds".format(id_, info["time"])) - for constraint in info["constraints"]: - print( - "C-{}\t{}".format( - id_, - tgt_dict.string(constraint, cfg.common_eval.post_process), - ) - ) - - # Process top predictions - for hypo in hypos[: min(len(hypos), cfg.generation.nbest)]: - hypo_tokens, hypo_str, alignment = utils.post_process_prediction( - hypo_tokens=hypo["tokens"].int().cpu(), - src_str=src_str, - alignment=hypo["alignment"], - align_dict=align_dict, - tgt_dict=tgt_dict, - remove_bpe=cfg.common_eval.post_process, - extra_symbols_to_ignore=get_symbols_to_strip_from_output(generator), - ) - detok_hypo_str = decode_fn(hypo_str) - score = hypo["score"] / math.log(2) # convert to base 2 - # original hypothesis (after tokenization and BPE) - print("H-{}\t{}\t{}".format(id_, score, hypo_str)) - # detokenized hypothesis - print("D-{}\t{}\t{}".format(id_, score, detok_hypo_str)) - print( - "P-{}\t{}".format( - id_, - " ".join( - map( - lambda x: "{:.4f}".format(x), - # convert from base e to base 2 - hypo["positional_scores"].div_(math.log(2)).tolist(), - ) - ), - ) - ) - if cfg.generation.print_alignment: - alignment_str = " ".join( - ["{}-{}".format(src, tgt) for src, tgt in alignment] - ) - print("A-{}\t{}".format(id_, alignment_str)) - - # update running id_ counter - start_id += len(inputs) - - logger.info( - "Total time: {:.3f} seconds; translation time: {:.3f}".format( - time.time() - start_time, total_translate_time - ) - ) - - -def cli_main(): - parser = options.get_interactive_generation_parser() - args = options.parse_args_and_arch(parser) - distributed_utils.call_main(convert_namespace_to_omegaconf(args), main) - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/fairseq_cli/preprocess.py 
b/kosmos-g/fairseq/fairseq_cli/preprocess.py deleted file mode 100644 index 2ba9e0933..000000000 --- a/kosmos-g/fairseq/fairseq_cli/preprocess.py +++ /dev/null @@ -1,393 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Data pre-processing: build vocabularies and binarize training data. -""" - -import logging -import os -import shutil -import sys -import typing as tp -from argparse import Namespace -from itertools import zip_longest - -from fairseq import options, tasks, utils -from fairseq.binarizer import ( - AlignmentDatasetBinarizer, - FileBinarizer, - VocabularyDatasetBinarizer, -) -from fairseq.data import Dictionary - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("fairseq_cli.preprocess") - -##################################################################### -# file name tools -##################################################################### - - -def _train_path(lang, trainpref): - return "{}{}".format(trainpref, ("." + lang) if lang else "") - - -def _file_name(prefix, lang): - fname = prefix - if lang is not None: - fname += ".{lang}".format(lang=lang) - return fname - - -def _dest_path(prefix, lang, destdir): - return os.path.join(destdir, _file_name(prefix, lang)) - - -def _dict_path(lang, destdir): - return _dest_path("dict", lang, destdir) + ".txt" - - -def dataset_dest_prefix(args, output_prefix, lang): - base = os.path.join(args.destdir, output_prefix) - if lang is not None: - lang_part = f".{args.source_lang}-{args.target_lang}.{lang}" - elif args.only_source: - lang_part = "" - else: - lang_part = f".{args.source_lang}-{args.target_lang}" - - return "{}{}".format(base, lang_part) - - -def dataset_dest_file(args, output_prefix, lang, extension): - return "{}.{}".format(dataset_dest_prefix(args, output_prefix, lang), extension) - - -##################################################################### -# dictionary tools -##################################################################### - - -def _build_dictionary( - filenames, - task, - args, - src=False, - tgt=False, -): - assert src ^ tgt - return task.build_dictionary( - filenames, - workers=args.workers, - threshold=args.thresholdsrc if src else args.thresholdtgt, - nwords=args.nwordssrc if src else args.nwordstgt, - padding_factor=args.padding_factor, - ) - - -##################################################################### -# bin file creation logic -##################################################################### - - -def _make_binary_dataset( - vocab: Dictionary, - input_prefix: str, - output_prefix: str, - lang: tp.Optional[str], - num_workers: int, - args: Namespace, -): - logger.info("[{}] Dictionary: {} types".format(lang, len(vocab))) - - binarizer = VocabularyDatasetBinarizer( - vocab, - append_eos=True, - ) - - input_file = "{}{}".format(input_prefix, ("." 
+ lang) if lang is not None else "") - full_output_prefix = dataset_dest_prefix(args, output_prefix, lang) - - final_summary = FileBinarizer.multiprocess_dataset( - input_file, - args.dataset_impl, - binarizer, - full_output_prefix, - vocab_size=len(vocab), - num_workers=num_workers, - ) - - logger.info(f"[{lang}] {input_file}: {final_summary} (by {vocab.unk_word})") - - -def _make_binary_alignment_dataset( - input_prefix: str, output_prefix: str, num_workers: int, args: Namespace -): - - binarizer = AlignmentDatasetBinarizer(utils.parse_alignment) - - input_file = input_prefix - full_output_prefix = dataset_dest_prefix(args, output_prefix, lang=None) - - final_summary = FileBinarizer.multiprocess_dataset( - input_file, - args.dataset_impl, - binarizer, - full_output_prefix, - vocab_size=None, - num_workers=num_workers, - ) - - logger.info( - "[alignments] {}: parsed {} alignments".format( - input_file, final_summary.num_seq - ) - ) - - -##################################################################### -# routing logic -##################################################################### - - -def _make_dataset( - vocab: Dictionary, - input_prefix: str, - output_prefix: str, - lang: tp.Optional[str], - args: Namespace, - num_workers: int, -): - if args.dataset_impl == "raw": - # Copy original text file to destination folder - output_text_file = _dest_path( - output_prefix + ".{}-{}".format(args.source_lang, args.target_lang), - lang, - args.destdir, - ) - shutil.copyfile(_file_name(input_prefix, lang), output_text_file) - else: - _make_binary_dataset( - vocab, input_prefix, output_prefix, lang, num_workers, args - ) - - -def _make_all(lang, vocab, args): - if args.trainpref: - _make_dataset( - vocab, args.trainpref, "train", lang, args=args, num_workers=args.workers - ) - if args.validpref: - for k, validpref in enumerate(args.validpref.split(",")): - outprefix = "valid{}".format(k) if k > 0 else "valid" - _make_dataset( - vocab, validpref, outprefix, lang, args=args, num_workers=args.workers - ) - if args.testpref: - for k, testpref in enumerate(args.testpref.split(",")): - outprefix = "test{}".format(k) if k > 0 else "test" - _make_dataset( - vocab, testpref, outprefix, lang, args=args, num_workers=args.workers - ) - - -def _make_all_alignments(args): - if args.trainpref and os.path.exists(args.trainpref + "." + args.align_suffix): - _make_binary_alignment_dataset( - args.trainpref + "." + args.align_suffix, - "train.align", - num_workers=args.workers, - args=args, - ) - if args.validpref and os.path.exists(args.validpref + "." + args.align_suffix): - _make_binary_alignment_dataset( - args.validpref + "." + args.align_suffix, - "valid.align", - num_workers=args.workers, - args=args, - ) - if args.testpref and os.path.exists(args.testpref + "." + args.align_suffix): - _make_binary_alignment_dataset( - args.testpref + "." 
+ args.align_suffix, - "test.align", - num_workers=args.workers, - args=args, - ) - - -##################################################################### -# align -##################################################################### - - -def _align_files(args, src_dict, tgt_dict): - assert args.trainpref, "--trainpref must be set if --alignfile is specified" - src_file_name = _train_path(args.source_lang, args.trainpref) - tgt_file_name = _train_path(args.target_lang, args.trainpref) - freq_map = {} - with open(args.alignfile, "r", encoding="utf-8") as align_file: - with open(src_file_name, "r", encoding="utf-8") as src_file: - with open(tgt_file_name, "r", encoding="utf-8") as tgt_file: - for a, s, t in zip_longest(align_file, src_file, tgt_file): - si = src_dict.encode_line(s, add_if_not_exist=False) - ti = tgt_dict.encode_line(t, add_if_not_exist=False) - ai = list(map(lambda x: tuple(x.split("-")), a.split())) - for sai, tai in ai: - srcidx = si[int(sai)] - tgtidx = ti[int(tai)] - if srcidx != src_dict.unk() and tgtidx != tgt_dict.unk(): - assert srcidx != src_dict.pad() - assert srcidx != src_dict.eos() - assert tgtidx != tgt_dict.pad() - assert tgtidx != tgt_dict.eos() - if srcidx not in freq_map: - freq_map[srcidx] = {} - if tgtidx not in freq_map[srcidx]: - freq_map[srcidx][tgtidx] = 1 - else: - freq_map[srcidx][tgtidx] += 1 - align_dict = {} - for srcidx in freq_map.keys(): - align_dict[srcidx] = max(freq_map[srcidx], key=freq_map[srcidx].get) - with open( - os.path.join( - args.destdir, - "alignment.{}-{}.txt".format(args.source_lang, args.target_lang), - ), - "w", - encoding="utf-8", - ) as f: - for k, v in align_dict.items(): - print("{} {}".format(src_dict[k], tgt_dict[v]), file=f) - - -##################################################################### -# MAIN -##################################################################### - - -def main(args): - # setup some basic things - utils.import_user_module(args) - - os.makedirs(args.destdir, exist_ok=True) - - logger.addHandler( - logging.FileHandler( - filename=os.path.join(args.destdir, "preprocess.log"), - ) - ) - logger.info(args) - - assert ( - args.dataset_impl != "huffman" - ), "preprocessing.py doesn't support Huffman yet, use HuffmanCodeBuilder directly." 
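For orientation, every option read in `main()` above comes from fairseq's standard preprocessing argument parser, so this file is normally driven through the `fairseq-preprocess` console entry point rather than imported directly. A minimal invocation sketch (the language pair, file prefixes, and destination directory below are illustrative placeholders, not paths from this repository):

```shell
# Hypothetical example: binarize a tokenized de-en parallel corpus.
# data/train.de, data/train.en, data/valid.*, data/test.* are placeholders.
fairseq-preprocess \
    --source-lang de --target-lang en \
    --trainpref data/train --validpref data/valid --testpref data/test \
    --destdir data-bin/de-en --workers 8
```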
- - # build dictionaries - - target = not args.only_source - - if not args.srcdict and os.path.exists(_dict_path(args.source_lang, args.destdir)): - raise FileExistsError(_dict_path(args.source_lang, args.destdir)) - - if ( - target - and not args.tgtdict - and os.path.exists(_dict_path(args.target_lang, args.destdir)) - ): - raise FileExistsError(_dict_path(args.target_lang, args.destdir)) - - task = tasks.get_task(args.task) - - if args.joined_dictionary: - assert ( - not args.srcdict or not args.tgtdict - ), "cannot use both --srcdict and --tgtdict with --joined-dictionary" - - if args.srcdict: - src_dict = task.load_dictionary(args.srcdict) - elif args.tgtdict: - src_dict = task.load_dictionary(args.tgtdict) - else: - assert ( - args.trainpref - ), "--trainpref must be set if --srcdict is not specified" - src_dict = _build_dictionary( - { - _train_path(lang, args.trainpref) - for lang in [args.source_lang, args.target_lang] - }, - task=task, - args=args, - src=True, - ) - tgt_dict = src_dict - else: - if args.srcdict: - src_dict = task.load_dictionary(args.srcdict) - else: - assert ( - args.trainpref - ), "--trainpref must be set if --srcdict is not specified" - src_dict = _build_dictionary( - [_train_path(args.source_lang, args.trainpref)], - task=task, - args=args, - src=True, - ) - - if target: - if args.tgtdict: - tgt_dict = task.load_dictionary(args.tgtdict) - else: - assert ( - args.trainpref - ), "--trainpref must be set if --tgtdict is not specified" - tgt_dict = _build_dictionary( - [_train_path(args.target_lang, args.trainpref)], - task=task, - args=args, - tgt=True, - ) - else: - tgt_dict = None - - # save dictionaries - - src_dict.save(_dict_path(args.source_lang, args.destdir)) - if target and tgt_dict is not None: - tgt_dict.save(_dict_path(args.target_lang, args.destdir)) - - if args.dict_only: - return - - _make_all(args.source_lang, src_dict, args) - if target: - _make_all(args.target_lang, tgt_dict, args) - - # align the datasets if needed - if args.align_suffix: - _make_all_alignments(args) - - logger.info("Wrote preprocessed data to {}".format(args.destdir)) - - if args.alignfile: - _align_files(args, src_dict=src_dict, tgt_dict=tgt_dict) - - -def cli_main(): - parser = options.get_preprocessing_parser() - args = parser.parse_args() - main(args) - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/fairseq_cli/score.py b/kosmos-g/fairseq/fairseq_cli/score.py deleted file mode 100644 index 0b207be95..000000000 --- a/kosmos-g/fairseq/fairseq_cli/score.py +++ /dev/null @@ -1,102 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -BLEU scoring of generated translations against reference translations. -""" - -import argparse -import os -import sys - -from fairseq.data import dictionary -from fairseq.scoring import bleu - - -def get_parser(): - parser = argparse.ArgumentParser( - description="Command-line script for BLEU scoring." 
- ) - # fmt: off - parser.add_argument('-s', '--sys', default='-', help='system output') - parser.add_argument('-r', '--ref', required=True, help='references') - parser.add_argument('-o', '--order', default=4, metavar='N', - type=int, help='consider ngrams up to this order') - parser.add_argument('--ignore-case', action='store_true', - help='case-insensitive scoring') - parser.add_argument('--sacrebleu', action='store_true', - help='score with sacrebleu') - parser.add_argument('--sentence-bleu', action='store_true', - help='report sentence-level BLEUs (i.e., with +1 smoothing)') - # fmt: on - return parser - - -def cli_main(): - parser = get_parser() - args = parser.parse_args() - print(args) - - assert args.sys == "-" or os.path.exists( - args.sys - ), "System output file {} does not exist".format(args.sys) - assert os.path.exists(args.ref), "Reference file {} does not exist".format(args.ref) - - dict = dictionary.Dictionary() - - def readlines(fd): - for line in fd.readlines(): - if args.ignore_case: - yield line.lower() - else: - yield line - - if args.sacrebleu: - import sacrebleu - - def score(fdsys): - with open(args.ref) as fdref: - print(sacrebleu.corpus_bleu(fdsys, [fdref]).format()) - - elif args.sentence_bleu: - - def score(fdsys): - with open(args.ref) as fdref: - scorer = bleu.Scorer(dict.pad(), dict.eos(), dict.unk()) - for i, (sys_tok, ref_tok) in enumerate( - zip(readlines(fdsys), readlines(fdref)) - ): - scorer.reset(one_init=True) - sys_tok = dict.encode_line(sys_tok) - ref_tok = dict.encode_line(ref_tok) - scorer.add(ref_tok, sys_tok) - print(i, scorer.result_string(args.order)) - - else: - - def score(fdsys): - with open(args.ref) as fdref: - scorer = bleu.Scorer( - bleu.BleuConfig( - pad=dict.pad(), - eos=dict.eos(), - unk=dict.unk(), - ) - ) - for sys_tok, ref_tok in zip(readlines(fdsys), readlines(fdref)): - sys_tok = dict.encode_line(sys_tok) - ref_tok = dict.encode_line(ref_tok) - scorer.add(ref_tok, sys_tok) - print(scorer.result_string(args.order)) - - if args.sys == "-": - score(sys.stdin) - else: - with open(args.sys, "r") as f: - score(f) - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/fairseq_cli/train.py b/kosmos-g/fairseq/fairseq_cli/train.py deleted file mode 100644 index f914c6ba0..000000000 --- a/kosmos-g/fairseq/fairseq_cli/train.py +++ /dev/null @@ -1,547 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Train a new model on one or across multiple GPUs. -""" - -import argparse -import logging -import math -import os -import sys -from typing import Any, Callable, Dict, List, Optional, Tuple - -# We need to setup root logger before importing any fairseq libraries. 
-logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("fairseq_cli.train") - -import numpy as np -import torch -from omegaconf import DictConfig, OmegaConf - -from fairseq import checkpoint_utils, options, quantization_utils, tasks, utils -from fairseq.data import data_utils, iterators -from fairseq.data.plasma_utils import PlasmaStore -from fairseq.dataclass.configs import FairseqConfig -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.distributed import fsdp_enable_wrap, fsdp_wrap -from fairseq.distributed import utils as distributed_utils -from fairseq.file_io import PathManager -from fairseq.logging import meters, metrics, progress_bar -from fairseq.model_parallel.megatron_trainer import MegatronTrainer -from fairseq.trainer import Trainer -from multiprocessing.pool import ThreadPool - - -def main(cfg: FairseqConfig) -> None: - if isinstance(cfg, argparse.Namespace): - cfg = convert_namespace_to_omegaconf(cfg) - - utils.import_user_module(cfg.common) - - if ( - distributed_utils.is_master(cfg.distributed_training) - and "job_logging_cfg" in cfg - ): - # make hydra logging work with ddp (see https://github.com/facebookresearch/hydra/issues/1126) - logging.config.dictConfig(OmegaConf.to_container(cfg.job_logging_cfg)) - - assert ( - cfg.dataset.max_tokens is not None or cfg.dataset.batch_size is not None - ), "Must specify batch size either with --max-tokens or --batch-size" - metrics.reset() - - if cfg.common.log_file is not None: - handler = logging.FileHandler(filename=cfg.common.log_file) - logger.addHandler(handler) - - np.random.seed(cfg.common.seed) - utils.set_torch_seed(cfg.common.seed) - - ds_local_master = cfg.common.deepspeed and distributed_utils.is_local_master(cfg.distributed_training) - - if distributed_utils.is_master(cfg.distributed_training) or ds_local_master: - checkpoint_utils.verify_checkpoint_directory(cfg.checkpoint.save_dir, rank=torch.distributed.get_rank()) - ckp_copy_thread = ThreadPool(processes=1) - else: - ckp_copy_thread = None - # if distributed_utils.is_master(cfg.distributed_training): - # checkpoint_utils.verify_checkpoint_directory(cfg.checkpoint.save_dir) - - # Print args - logger.info(cfg) - - if cfg.checkpoint.write_checkpoints_asynchronously: - try: - import iopath # noqa: F401 - except ImportError: - logging.exception( - "Asynchronous checkpoint writing is specified but iopath is " - "not installed: `pip install iopath`" - ) - return - - # Setup task, e.g., translation, language modeling, etc. - task = tasks.setup_task(cfg.task) - - assert cfg.criterion, "Please specify criterion to train a model" - - # Build model and criterion - if cfg.distributed_training.ddp_backend == "fully_sharded": - with fsdp_enable_wrap(cfg.distributed_training): - model = fsdp_wrap(task.build_model(cfg.model)) - else: - model = task.build_model(cfg.model) - criterion = task.build_criterion(cfg.criterion) - logger.info(model) - logger.info("task: {}".format(task.__class__.__name__)) - logger.info("model: {}".format(model.__class__.__name__)) - logger.info("criterion: {}".format(criterion.__class__.__name__)) - logger.info( - "num. shared model params: {:,} (num. 
trained: {:,})".format( - sum( - p.numel() for p in model.parameters() if not getattr(p, "expert", False) - ), - sum( - p.numel() - for p in model.parameters() - if not getattr(p, "expert", False) and p.requires_grad - ), - ) - ) - - logger.info( - "num. expert model params: {} (num. trained: {})".format( - sum(p.numel() for p in model.parameters() if getattr(p, "expert", False)), - sum( - p.numel() - for p in model.parameters() - if getattr(p, "expert", False) and p.requires_grad - ), - ) - ) - - # Load valid dataset (we load training data below, based on the latest checkpoint) - # We load the valid dataset AFTER building the model - data_utils.raise_if_valid_subsets_unintentionally_ignored(cfg) - if cfg.dataset.combine_valid_subsets: - task.load_dataset("valid", combine=True, epoch=1) - else: - for valid_sub_split in cfg.dataset.valid_subset.split(","): - task.load_dataset(valid_sub_split, combine=False, epoch=1) - - # (optionally) Configure quantization - if cfg.common.quantization_config_path is not None: - quantizer = quantization_utils.Quantizer( - config_path=cfg.common.quantization_config_path, - max_epoch=cfg.optimization.max_epoch, - max_update=cfg.optimization.max_update, - ) - else: - quantizer = None - - # Build trainer - if cfg.common.deepspeed: - assert quantizer is None, "fairseq wuantizer is not currently supported on deepspeed" - from fairseq.ds_trainer import DeepSpeedTrainer - trainer = DeepSpeedTrainer(cfg, task, model, criterion) - elif cfg.common.model_parallel_size == 1: - trainer = Trainer(cfg, task, model, criterion, quantizer) - else: - trainer = MegatronTrainer(cfg, task, model, criterion) - logger.info( - "training on {} devices (GPUs/TPUs)".format( - cfg.distributed_training.distributed_world_size - ) - ) - logger.info( - "max tokens per device = {} and max sentences per device = {}".format( - cfg.dataset.max_tokens, - cfg.dataset.batch_size, - ) - ) - - # Load the latest checkpoint if one is available and restore the - # corresponding train iterator - extra_state, epoch_itr = checkpoint_utils.load_checkpoint( - cfg.checkpoint, - trainer, - # don't cache epoch iterators for sharded datasets - disable_iterator_cache=task.has_sharded_data("train"), - ) - if cfg.common.tpu: - import torch_xla.core.xla_model as xm - - xm.rendezvous("load_checkpoint") # wait for all workers - - max_epoch = cfg.optimization.max_epoch or math.inf - lr = trainer.get_lr() - - train_meter = meters.StopwatchMeter() - train_meter.start() - while epoch_itr.next_epoch_idx <= max_epoch: - if lr <= cfg.optimization.stop_min_lr: - logger.info( - f"stopping training because current learning rate ({lr}) is smaller " - "than or equal to minimum learning rate " - f"(--stop-min-lr={cfg.optimization.stop_min_lr})" - ) - break - - # train for one epoch - valid_losses, should_stop = train(cfg, trainer, task, epoch_itr) - if should_stop: - break - - # only use first validation loss to update the learning rate - lr = trainer.lr_step(epoch_itr.epoch, valid_losses[0]) - - epoch_itr = trainer.get_train_iterator( - epoch_itr.next_epoch_idx, - # sharded data: get train iterator for next epoch - load_dataset=task.has_sharded_data("train"), - # don't cache epoch iterators for sharded datasets - disable_iterator_cache=task.has_sharded_data("train"), - ) - train_meter.stop() - logger.info("done training in {:.1f} seconds".format(train_meter.sum)) - - # ioPath implementation to wait for all asynchronous file writes to complete. 
- if cfg.checkpoint.write_checkpoints_asynchronously: - logger.info( - "ioPath PathManager waiting for all asynchronous checkpoint " - "writes to finish." - ) - PathManager.async_close() - logger.info("ioPath PathManager finished waiting.") - - -def should_stop_early(cfg: DictConfig, valid_loss: float) -> bool: - # skip check if no validation was done in the current epoch - if valid_loss is None: - return False - if cfg.checkpoint.patience <= 0: - return False - - def is_better(a, b): - return a > b if cfg.checkpoint.maximize_best_checkpoint_metric else a < b - - prev_best = getattr(should_stop_early, "best", None) - if prev_best is None or is_better(valid_loss, prev_best): - should_stop_early.best = valid_loss - should_stop_early.num_runs = 0 - return False - else: - should_stop_early.num_runs += 1 - if should_stop_early.num_runs >= cfg.checkpoint.patience: - logger.info( - "early stop since valid performance hasn't improved for last {} runs".format( - cfg.checkpoint.patience - ) - ) - return True - else: - return False - - -@metrics.aggregate("train") -def train( - cfg: DictConfig, trainer: Trainer, task: tasks.FairseqTask, epoch_itr -) -> Tuple[List[Optional[float]], bool]: - """Train the model for one epoch and return validation losses.""" - # Initialize data iterator - itr = epoch_itr.next_epoch_itr( - fix_batches_to_gpus=cfg.distributed_training.fix_batches_to_gpus, - shuffle=(epoch_itr.next_epoch_idx > cfg.dataset.curriculum), - ) - update_freq = ( - cfg.optimization.update_freq[epoch_itr.epoch - 1] - if epoch_itr.epoch <= len(cfg.optimization.update_freq) - else cfg.optimization.update_freq[-1] - ) - itr = iterators.GroupedIterator( - itr, - update_freq, - skip_remainder_batch=cfg.optimization.skip_remainder_batch, - ) - if cfg.common.tpu: - itr = utils.tpu_data_loader(itr) - progress = progress_bar.progress_bar( - itr, - log_format=cfg.common.log_format, - log_file=cfg.common.log_file, - log_interval=cfg.common.log_interval, - epoch=epoch_itr.epoch, - tensorboard_logdir=( - cfg.common.tensorboard_logdir - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple"), - wandb_project=( - cfg.common.wandb_project - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - wandb_run_name=os.environ.get( - "WANDB_NAME", os.path.basename(cfg.checkpoint.save_dir) - ), - azureml_logging=( - cfg.common.azureml_logging - if distributed_utils.is_master(cfg.distributed_training) - else False - ), - ) - progress.update_config(_flatten_config(cfg)) - - trainer.begin_epoch(epoch_itr.epoch) - - valid_subsets = cfg.dataset.valid_subset.split(",") - should_stop = False - num_updates = trainer.get_num_updates() - logger.info("Start iterating over samples") - for i, samples in enumerate(progress): - with metrics.aggregate("train_inner"), torch.autograd.profiler.record_function( - "train_step-%d" % i - ): - log_output = trainer.train_step(samples) - - if log_output is not None: # not OOM, overflow, ... 
- # log mid-epoch stats - num_updates = trainer.get_num_updates() - if num_updates % cfg.common.log_interval == 0: - stats = get_training_stats(metrics.get_smoothed_values("train_inner")) - progress.log(stats, tag="train_inner", step=num_updates) - - # reset mid-epoch stats after each log interval - # the end-of-epoch stats will still be preserved - metrics.reset_meters("train_inner") - - end_of_epoch = not itr.has_next() - valid_losses, should_stop = validate_and_save( - cfg, trainer, task, epoch_itr, valid_subsets, end_of_epoch - ) - - if should_stop: - break - - # log end-of-epoch stats - logger.info("end of epoch {} (average epoch stats below)".format(epoch_itr.epoch)) - stats = get_training_stats(metrics.get_smoothed_values("train")) - progress.print(stats, tag="train", step=num_updates) - - # reset epoch-level meters - metrics.reset_meters("train") - return valid_losses, should_stop - - -def _flatten_config(cfg: DictConfig): - config = OmegaConf.to_container(cfg) - # remove any legacy Namespaces and replace with a single "args" - namespace = None - for k, v in list(config.items()): - if isinstance(v, argparse.Namespace): - namespace = v - del config[k] - if namespace is not None: - config["args"] = vars(namespace) - return config - - -def validate_and_save( - cfg: DictConfig, - trainer: Trainer, - task: tasks.FairseqTask, - epoch_itr, - valid_subsets: List[str], - end_of_epoch: bool, -) -> Tuple[List[Optional[float]], bool]: - num_updates = trainer.get_num_updates() - max_update = cfg.optimization.max_update or math.inf - - # Stopping conditions (and an additional one based on validation loss later - # on) - should_stop = False - if num_updates >= max_update: - should_stop = True - logger.info( - f"Stopping training due to " - f"num_updates: {num_updates} >= max_update: {max_update}" - ) - - training_time_hours = trainer.cumulative_training_time() / (60 * 60) - if ( - cfg.optimization.stop_time_hours > 0 - and training_time_hours > cfg.optimization.stop_time_hours - ): - should_stop = True - logger.info( - f"Stopping training due to " - f"cumulative_training_time: {training_time_hours} > " - f"stop_time_hours: {cfg.optimization.stop_time_hours} hour(s)" - ) - - do_save = ( - (end_of_epoch and epoch_itr.epoch % cfg.checkpoint.save_interval == 0) - or should_stop - or ( - cfg.checkpoint.save_interval_updates > 0 - and num_updates > 0 - and num_updates % cfg.checkpoint.save_interval_updates == 0 - and num_updates >= cfg.dataset.validate_after_updates - ) - ) - do_validate = ( - ( - (not end_of_epoch and do_save) # validate during mid-epoch saves - or (end_of_epoch and epoch_itr.epoch % cfg.dataset.validate_interval == 0) - or should_stop - or ( - cfg.dataset.validate_interval_updates > 0 - and num_updates > 0 - and num_updates % cfg.dataset.validate_interval_updates == 0 - ) - ) - and not cfg.dataset.disable_validation - and num_updates >= cfg.dataset.validate_after_updates - ) - - # Validate - valid_losses = [None] - if do_validate: - valid_losses = validate(cfg, trainer, task, epoch_itr, valid_subsets) - - should_stop |= should_stop_early(cfg, valid_losses[0]) - - # Save checkpoint - if do_save or should_stop: - checkpoint_utils.save_checkpoint( - cfg.checkpoint, trainer, epoch_itr, valid_losses[0] - ) - - return valid_losses, should_stop - - -def get_training_stats(stats: Dict[str, Any]) -> Dict[str, Any]: - stats["wall"] = round(metrics.get_meter("default", "wall").elapsed_time, 0) - return stats - - -def validate( - cfg: DictConfig, - trainer: Trainer, - task: tasks.FairseqTask, - 
epoch_itr, - subsets: List[str], -) -> List[Optional[float]]: - """Evaluate the model on the validation set(s) and return the losses.""" - - if cfg.dataset.fixed_validation_seed is not None: - # set fixed seed for every validation - utils.set_torch_seed(cfg.dataset.fixed_validation_seed) - - trainer.begin_valid_epoch(epoch_itr.epoch) - valid_losses = [] - for subset in subsets: - logger.info('begin validation on "{}" subset'.format(subset)) - - # Initialize data iterator - itr = trainer.get_valid_iterator(subset).next_epoch_itr( - shuffle=False, set_dataset_epoch=False # use a fixed valid set - ) - if cfg.common.tpu: - itr = utils.tpu_data_loader(itr) - progress = progress_bar.progress_bar( - itr, - log_format=cfg.common.log_format, - log_interval=cfg.common.log_interval, - epoch=epoch_itr.epoch, - prefix=f"valid on '{subset}' subset", - tensorboard_logdir=( - cfg.common.tensorboard_logdir - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple"), - wandb_project=( - cfg.common.wandb_project - if distributed_utils.is_master(cfg.distributed_training) - else None - ), - wandb_run_name=os.environ.get( - "WANDB_NAME", os.path.basename(cfg.checkpoint.save_dir) - ), - ) - - # create a new root metrics aggregator so validation metrics - # don't pollute other aggregators (e.g., train meters) - with metrics.aggregate(new_root=True) as agg: - for i, sample in enumerate(progress): - if ( - cfg.dataset.max_valid_steps is not None - and i > cfg.dataset.max_valid_steps - ): - break - trainer.valid_step(sample) - - # log validation stats - stats = get_valid_stats(cfg, trainer, agg.get_smoothed_values()) - - if hasattr(task, "post_validate"): - task.post_validate(trainer.get_model(), stats, agg) - - progress.print(stats, tag=subset, step=trainer.get_num_updates()) - - valid_losses.append(stats[cfg.checkpoint.best_checkpoint_metric]) - return valid_losses - - -def get_valid_stats( - cfg: DictConfig, trainer: Trainer, stats: Dict[str, Any] -) -> Dict[str, Any]: - stats["num_updates"] = trainer.get_num_updates() - if hasattr(checkpoint_utils.save_checkpoint, "best"): - key = "best_{0}".format(cfg.checkpoint.best_checkpoint_metric) - best_function = max if cfg.checkpoint.maximize_best_checkpoint_metric else min - stats[key] = best_function( - checkpoint_utils.save_checkpoint.best, - stats[cfg.checkpoint.best_checkpoint_metric], - ) - return stats - - -def cli_main( - modify_parser: Optional[Callable[[argparse.ArgumentParser], None]] = None -) -> None: - parser = options.get_training_parser() - args = options.parse_args_and_arch(parser, modify_parser=modify_parser) - - cfg = convert_namespace_to_omegaconf(args) - - if cfg.common.use_plasma_view: - server = PlasmaStore(path=cfg.common.plasma_path) - logger.info( - f"Started plasma server pid {server.server.pid} {cfg.common.plasma_path}" - ) - - if args.profile: - with torch.cuda.profiler.profile(): - with torch.autograd.profiler.emit_nvtx(): - distributed_utils.call_main(cfg, main) - else: - distributed_utils.call_main(cfg, main) - - # if cfg.common.use_plasma_view: - # server.server.kill() - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/fairseq_cli/validate.py b/kosmos-g/fairseq/fairseq_cli/validate.py deleted file mode 100644 index 4617b6d54..000000000 --- a/kosmos-g/fairseq/fairseq_cli/validate.py +++ /dev/null @@ -1,153 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import os -import sys -from argparse import Namespace -from itertools import chain - -import torch -from omegaconf import DictConfig - -from fairseq import checkpoint_utils, distributed_utils, options, utils -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.logging import metrics, progress_bar -from fairseq.utils import reset_logging - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("fairseq_cli.validate") - - -def main(cfg: DictConfig, override_args=None): - if isinstance(cfg, Namespace): - cfg = convert_namespace_to_omegaconf(cfg) - - utils.import_user_module(cfg.common) - - reset_logging() - - assert ( - cfg.dataset.max_tokens is not None or cfg.dataset.batch_size is not None - ), "Must specify batch size either with --max-tokens or --batch-size" - - use_fp16 = cfg.common.fp16 - use_cuda = torch.cuda.is_available() and not cfg.common.cpu - - if use_cuda: - torch.cuda.set_device(cfg.distributed_training.device_id) - - if cfg.distributed_training.distributed_world_size > 1: - data_parallel_world_size = distributed_utils.get_data_parallel_world_size() - data_parallel_rank = distributed_utils.get_data_parallel_rank() - else: - data_parallel_world_size = 1 - data_parallel_rank = 0 - - if override_args is not None: - overrides = vars(override_args) - overrides.update(eval(getattr(override_args, "model_overrides", "{}"))) - else: - overrides = None - - # Load ensemble - logger.info("loading model(s) from {}".format(cfg.common_eval.path)) - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [cfg.common_eval.path], - arg_overrides=overrides, - suffix=cfg.checkpoint.checkpoint_suffix, - ) - model = models[0] - - # Move models to GPU - for model in models: - model.eval() - if use_fp16: - model.half() - if use_cuda: - model.cuda() - - # Print args - logger.info(saved_cfg) - - # Build criterion - criterion = task.build_criterion(saved_cfg.criterion) - criterion.eval() - - for subset in cfg.dataset.valid_subset.split(","): - try: - task.load_dataset(subset, combine=False, epoch=1, task_cfg=saved_cfg.task) - dataset = task.dataset(subset) - except KeyError: - raise Exception("Cannot find dataset: " + subset) - - # Initialize data iterator - itr = task.get_batch_iterator( - dataset=dataset, - max_tokens=cfg.dataset.max_tokens, - max_sentences=cfg.dataset.batch_size, - max_positions=utils.resolve_max_positions( - task.max_positions(), - *[m.max_positions() for m in models], - ), - ignore_invalid_inputs=cfg.dataset.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=cfg.dataset.required_batch_size_multiple, - seed=cfg.common.seed, - num_shards=data_parallel_world_size, - shard_id=data_parallel_rank, - num_workers=cfg.dataset.num_workers, - data_buffer_size=cfg.dataset.data_buffer_size, - ).next_epoch_itr(shuffle=False) - progress = progress_bar.progress_bar( - itr, - log_format=cfg.common.log_format, - log_interval=cfg.common.log_interval, - prefix=f"valid on '{subset}' subset", - default_log_format=("tqdm" if not cfg.common.no_progress_bar else "simple"), - ) - - log_outputs = [] - for i, sample in enumerate(progress): - sample = utils.move_to_cuda(sample) if use_cuda else sample - _loss, _sample_size, log_output = 
task.valid_step(sample, model, criterion) - progress.log(log_output, step=i) - log_outputs.append(log_output) - - if data_parallel_world_size > 1: - log_outputs = distributed_utils.all_gather_list( - log_outputs, - max_size=cfg.common.all_gather_list_size, - group=distributed_utils.get_data_parallel_group(), - ) - log_outputs = list(chain.from_iterable(log_outputs)) - - with metrics.aggregate() as agg: - task.reduce_metrics(log_outputs, criterion) - log_output = agg.get_smoothed_values() - - progress.print(log_output, tag=subset, step=i) - - -def cli_main(): - parser = options.get_validation_parser() - args = options.parse_args_and_arch(parser) - - # only override args that are explicitly given on the command line - override_parser = options.get_validation_parser() - override_args = options.parse_args_and_arch(override_parser, suppress_defaults=True) - - distributed_utils.call_main( - convert_namespace_to_omegaconf(args), main, override_args=override_args - ) - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/fairseq/hubconf.py b/kosmos-g/fairseq/hubconf.py deleted file mode 100644 index 5949e274e..000000000 --- a/kosmos-g/fairseq/hubconf.py +++ /dev/null @@ -1,73 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -import functools -import importlib - - -dependencies = [ - "dataclasses", - "hydra", - "numpy", - "omegaconf", - "regex", - "requests", - "torch", -] - - -# Check for required dependencies and raise a RuntimeError if any are missing. -missing_deps = [] -for dep in dependencies: - try: - importlib.import_module(dep) - except ImportError: - # Hack: the hydra package is provided under the "hydra-core" name in - # pypi. We don't want the user mistakenly calling `pip install hydra` - # since that will install an unrelated package. - if dep == "hydra": - dep = "hydra-core" - missing_deps.append(dep) -if len(missing_deps) > 0: - raise RuntimeError("Missing dependencies: {}".format(", ".join(missing_deps))) - - -# only do fairseq imports after checking for dependencies -from fairseq.hub_utils import ( # noqa; noqa - BPEHubInterface as bpe, - TokenizerHubInterface as tokenizer, -) -from fairseq.models import MODEL_REGISTRY # noqa - - -# torch.hub doesn't build Cython components, so if they are not found then try -# to build them here -try: - import fairseq.data.token_block_utils_fast # noqa -except ImportError: - try: - import cython # noqa - import os - from setuptools import sandbox - - sandbox.run_setup( - os.path.join(os.path.dirname(__file__), "setup.py"), - ["build_ext", "--inplace"], - ) - except ImportError: - print( - "Unable to build Cython components. Please make sure Cython is " - "installed if the torch.hub model you are loading depends on it." 
- ) - - -# automatically expose models defined in FairseqModel::hub_models -for _model_type, _cls in MODEL_REGISTRY.items(): - for model_name in _cls.hub_models().keys(): - globals()[model_name] = functools.partial( - _cls.from_pretrained, - model_name, - ) diff --git a/kosmos-g/fairseq/pyproject.toml b/kosmos-g/fairseq/pyproject.toml deleted file mode 100644 index 6d1b4c5b6..000000000 --- a/kosmos-g/fairseq/pyproject.toml +++ /dev/null @@ -1,3 +0,0 @@ -[build-system] -requires = ["setuptools", "wheel", "cython"] -build-backend = "setuptools.build_meta" diff --git a/kosmos-g/fairseq/scripts/__init__.py b/kosmos-g/fairseq/scripts/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/kosmos-g/fairseq/scripts/average_checkpoints.py b/kosmos-g/fairseq/scripts/average_checkpoints.py deleted file mode 100644 index a4711e484..000000000 --- a/kosmos-g/fairseq/scripts/average_checkpoints.py +++ /dev/null @@ -1,160 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import collections -import os -import re - -import torch -from fairseq.file_io import PathManager - - -def average_checkpoints(inputs): - """Loads checkpoints from inputs and returns a model with averaged weights. - - Args: - inputs: An iterable of string paths of checkpoints to load from. - - Returns: - A dict of string keys mapping to various values. The 'model' key - from the returned dict should correspond to an OrderedDict mapping - string parameter names to torch Tensors. - """ - params_dict = collections.OrderedDict() - params_keys = None - new_state = None - num_models = len(inputs) - - for fpath in inputs: - with PathManager.open(fpath, "rb") as f: - state = torch.load( - f, - map_location=( - lambda s, _: torch.serialization.default_restore_location(s, "cpu") - ), - ) - # Copies over the settings from the first checkpoint - if new_state is None: - new_state = state - - model_params = state["model"] - - model_params_keys = list(model_params.keys()) - if params_keys is None: - params_keys = model_params_keys - elif params_keys != model_params_keys: - raise KeyError( - "For checkpoint {}, expected list of params: {}, " - "but found: {}".format(fpath, params_keys, model_params_keys) - ) - - for k in params_keys: - p = model_params[k] - if isinstance(p, torch.HalfTensor): - p = p.float() - if k not in params_dict: - params_dict[k] = p.clone() - # NOTE: clone() is needed in case p is a shared parameter - else: - params_dict[k] += p - - averaged_params = collections.OrderedDict() - for k, v in params_dict.items(): - averaged_params[k] = v - if averaged_params[k].is_floating_point(): - averaged_params[k].div_(num_models) - else: - averaged_params[k] //= num_models - new_state["model"] = averaged_params - return new_state - - -def last_n_checkpoints(paths, n, update_based, upper_bound=None): - assert len(paths) == 1 - path = paths[0] - if update_based: - pt_regexp = re.compile(r"checkpoint_\d+_(\d+)\.pt") - else: - pt_regexp = re.compile(r"checkpoint(\d+)\.pt") - files = PathManager.ls(path) - - entries = [] - for f in files: - m = pt_regexp.fullmatch(f) - if m is not None: - sort_key = int(m.group(1)) - if upper_bound is None or sort_key <= upper_bound: - entries.append((sort_key, m.group(0))) - if len(entries) < n: - raise Exception( - "Found {} checkpoint files but need at least {}".format(len(entries), n) - ) - return [os.path.join(path, 
x[1]) for x in sorted(entries, reverse=True)[:n]] - - -def main(): - parser = argparse.ArgumentParser( - description="Tool to average the params of input checkpoints to " - "produce a new checkpoint", - ) - # fmt: off - parser.add_argument('--inputs', required=True, nargs='+', - help='Input checkpoint file paths.') - parser.add_argument('--output', required=True, metavar='FILE', - help='Write the new checkpoint containing the averaged weights to this path.') - num_group = parser.add_mutually_exclusive_group() - num_group.add_argument('--num-epoch-checkpoints', type=int, - help='if set, will try to find checkpoints with names checkpoint_xx.pt in the ' - 'path specified by input, and average last this many of them.') - num_group.add_argument('--num-update-checkpoints', type=int, - help='if set, will try to find checkpoints with names checkpoint_ee_xx.pt in the path specified by' - ' input, and average last this many of them.') - parser.add_argument('--checkpoint-upper-bound', type=int, - help='when using --num-epoch-checkpoints, this will set an upper bound on which epoch to use, ' - 'when using --num-update-checkpoints, this will set an upper bound on which update to use' - 'e.g., with --num-epoch-checkpoints=10 --checkpoint-upper-bound=50, checkpoints 41-50 would be' - ' averaged.' - 'e.g., with --num-update-checkpoints=10 --checkpoint-upper-bound=50000, checkpoints 40500-50000 would' - ' be averaged assuming --save-interval-updates 500' - ) - # fmt: on - args = parser.parse_args() - print(args) - - num = None - is_update_based = False - if args.num_update_checkpoints is not None: - num = args.num_update_checkpoints - is_update_based = True - elif args.num_epoch_checkpoints is not None: - num = args.num_epoch_checkpoints - - assert args.checkpoint_upper_bound is None or ( - args.num_epoch_checkpoints is not None - or args.num_update_checkpoints is not None - ), "--checkpoint-upper-bound requires --num-epoch-checkpoints or --num-update-checkpoints" - assert ( - args.num_epoch_checkpoints is None or args.num_update_checkpoints is None - ), "Cannot combine --num-epoch-checkpoints and --num-update-checkpoints" - - if num is not None: - args.inputs = last_n_checkpoints( - args.inputs, - num, - is_update_based, - upper_bound=args.checkpoint_upper_bound, - ) - print("averaging checkpoints: ", args.inputs) - - new_state = average_checkpoints(args.inputs) - with PathManager.open(args.output, "wb") as f: - torch.save(new_state, f) - print("Finished writing averaged checkpoint to {}".format(args.output)) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/scripts/build_sym_alignment.py b/kosmos-g/fairseq/scripts/build_sym_alignment.py deleted file mode 100644 index 0ca5c18f7..000000000 --- a/kosmos-g/fairseq/scripts/build_sym_alignment.py +++ /dev/null @@ -1,97 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Use this script in order to build symmetric alignments for your translation -dataset. -This script depends on fast_align and mosesdecoder tools. You will need to -build those before running the script. 
-fast_align: - github: http://github.com/clab/fast_align - instructions: follow the instructions in README.md -mosesdecoder: - github: http://github.com/moses-smt/mosesdecoder - instructions: http://www.statmt.org/moses/?n=Development.GetStarted -The script produces the following files under --output_dir: - text.joined - concatenation of lines from the source_file and the - target_file. - align.forward - forward pass of fast_align. - align.backward - backward pass of fast_align. - aligned.sym_heuristic - symmetrized alignment. -""" - -import argparse -import os -from itertools import zip_longest - - -def main(): - parser = argparse.ArgumentParser(description="symmetric alignment builder") - # fmt: off - parser.add_argument('--fast_align_dir', - help='path to fast_align build directory') - parser.add_argument('--mosesdecoder_dir', - help='path to mosesdecoder root directory') - parser.add_argument('--sym_heuristic', - help='heuristic to use for symmetrization', - default='grow-diag-final-and') - parser.add_argument('--source_file', - help='path to a file with sentences ' - 'in the source language') - parser.add_argument('--target_file', - help='path to a file with sentences ' - 'in the target language') - parser.add_argument('--output_dir', - help='output directory') - # fmt: on - args = parser.parse_args() - - fast_align_bin = os.path.join(args.fast_align_dir, "fast_align") - symal_bin = os.path.join(args.mosesdecoder_dir, "bin", "symal") - sym_fast_align_bin = os.path.join( - args.mosesdecoder_dir, "scripts", "ems", "support", "symmetrize-fast-align.perl" - ) - - # create joined file - joined_file = os.path.join(args.output_dir, "text.joined") - with open(args.source_file, "r", encoding="utf-8") as src, open( - args.target_file, "r", encoding="utf-8" - ) as tgt: - with open(joined_file, "w", encoding="utf-8") as joined: - for s, t in zip_longest(src, tgt): - print("{} ||| {}".format(s.strip(), t.strip()), file=joined) - - # run forward alignment - fwd_align_file = os.path.join(args.output_dir, "align.forward") - fwd_fast_align_cmd = "{FASTALIGN} -i {JOINED} -d -o -v > {FWD}".format( - FASTALIGN=fast_align_bin, JOINED=joined_file, FWD=fwd_align_file - ) - assert os.system(fwd_fast_align_cmd) == 0 - - # run backward alignment - bwd_align_file = os.path.join(args.output_dir, "align.backward") - bwd_fast_align_cmd = "{FASTALIGN} -i {JOINED} -d -o -v -r > {BWD}".format( - FASTALIGN=fast_align_bin, JOINED=joined_file, BWD=bwd_align_file - ) - assert os.system(bwd_fast_align_cmd) == 0 - - # run symmetrization - sym_out_file = os.path.join(args.output_dir, "aligned") - sym_cmd = "{SYMFASTALIGN} {FWD} {BWD} {SRC} {TGT} {OUT} {HEURISTIC} {SYMAL}".format( - SYMFASTALIGN=sym_fast_align_bin, - FWD=fwd_align_file, - BWD=bwd_align_file, - SRC=args.source_file, - TGT=args.target_file, - OUT=sym_out_file, - HEURISTIC=args.sym_heuristic, - SYMAL=symal_bin, - ) - assert os.system(sym_cmd) == 0 - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/scripts/compare_namespaces.py b/kosmos-g/fairseq/scripts/compare_namespaces.py deleted file mode 100644 index bc24db624..000000000 --- a/kosmos-g/fairseq/scripts/compare_namespaces.py +++ /dev/null @@ -1,46 +0,0 @@ -#!/usr/bin/env python -"""Helper script to compare two argparse.Namespace objects.""" - -from argparse import Namespace # noqa - - -def main(): - - ns1 = eval(input("Namespace 1: ")) - ns2 = eval(input("Namespace 2: ")) - - def keys(ns): - ks = set() - for k in dir(ns): 
- if not k.startswith("_"): - ks.add(k) - return ks - - k1 = keys(ns1) - k2 = keys(ns2) - - def print_keys(ks, ns1, ns2=None): - for k in ks: - if ns2 is None: - print("{}\t{}".format(k, getattr(ns1, k, None))) - else: - print( - "{}\t{}\t{}".format(k, getattr(ns1, k, None), getattr(ns2, k, None)) - ) - - print("Keys unique to namespace 1:") - print_keys(k1 - k2, ns1) - print() - - print("Keys unique to namespace 2:") - print_keys(k2 - k1, ns2) - print() - - print("Overlapping keys with different values:") - ks = [k for k in k1 & k2 if getattr(ns1, k, "None") != getattr(ns2, k, "None")] - print_keys(ks, ns1, ns2) - print() - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/scripts/compound_split_bleu.sh b/kosmos-g/fairseq/scripts/compound_split_bleu.sh deleted file mode 100644 index 1972fddce..000000000 --- a/kosmos-g/fairseq/scripts/compound_split_bleu.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/bin/bash - -if [ $# -ne 1 ]; then - echo "usage: $0 GENERATE_PY_OUTPUT" - exit 1 -fi - -GEN=$1 - -SYS=$GEN.sys -REF=$GEN.ref - -if [ $(tail -n 1 $GEN | grep BLEU | wc -l) -ne 1 ]; then - echo "not done generating" - exit -fi - -grep ^H $GEN | awk -F '\t' '{print $NF}' | perl -ple 's{(\S)-(\S)}{$1 ##AT##-##AT## $2}g' > $SYS -grep ^T $GEN | cut -f2- | perl -ple 's{(\S)-(\S)}{$1 ##AT##-##AT## $2}g' > $REF -fairseq-score --sys $SYS --ref $REF diff --git a/kosmos-g/fairseq/scripts/constraints/extract.py b/kosmos-g/fairseq/scripts/constraints/extract.py deleted file mode 100644 index 437b37385..000000000 --- a/kosmos-g/fairseq/scripts/constraints/extract.py +++ /dev/null @@ -1,90 +0,0 @@ -#!/usr/bin/env python3 -# -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
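The body of this script, shown next, reads tab-separated `source<TAB>target` pairs from stdin and prints each source line followed by constraint phrases sampled from its reference. A hedged usage sketch, assuming it is run from the fairseq root with placeholder file names (`--number` and `--len` are the argparse flags defined at the bottom of the file):

```shell
# Hypothetical example: sample up to two 2-token constraint phrases per
# sentence from the reference side of a tab-separated test set.
paste test.de test.en \
    | python scripts/constraints/extract.py --number 2 --len 2 --seed 1 \
    > test.constraints.tsv
```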
- -"""Extracts random constraints from reference files.""" - -import argparse -import random -import sys - - -def get_phrase(words, index, length): - assert index < len(words) - length + 1 - phr = " ".join(words[index : index + length]) - for i in range(index, index + length): - words.pop(index) - return phr - - -def main(args): - - if args.seed: - random.seed(args.seed) - - for line in sys.stdin: - constraints = [] - - def add_constraint(constraint): - constraints.append(constraint) - - source = line.rstrip() - if "\t" in line: - source, target = line.split("\t") - if args.add_sos: - target = f"<s> {target}" - if args.add_eos: - target = f"{target} </s>" - - if len(target.split()) >= args.len: - words = [target] - - num = args.number - - choices = {} - for i in range(num): - if len(words) == 0: - break - segmentno = random.choice(range(len(words))) - segment = words.pop(segmentno) - tokens = segment.split() - phrase_index = random.choice(range(len(tokens))) - choice = " ".join( - tokens[phrase_index : min(len(tokens), phrase_index + args.len)] - ) - for j in range( - phrase_index, min(len(tokens), phrase_index + args.len) - ): - tokens.pop(phrase_index) - if phrase_index > 0: - words.append(" ".join(tokens[0:phrase_index])) - if phrase_index + 1 < len(tokens): - words.append(" ".join(tokens[phrase_index:])) - choices[target.find(choice)] = choice - - # mask out with spaces - target = target.replace(choice, " " * len(choice), 1) - - for key in sorted(choices.keys()): - add_constraint(choices[key]) - - print(source, *constraints, sep="\t") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--number", "-n", type=int, default=1, help="number of phrases") - parser.add_argument("--len", "-l", type=int, default=1, help="phrase length") - parser.add_argument( - "--add-sos", default=False, action="store_true", help="add <s> token" - ) - parser.add_argument( - "--add-eos", default=False, action="store_true", help="add </s> token" - ) - parser.add_argument("--seed", "-s", default=0, type=int) - args = parser.parse_args() - - main(args) diff --git a/kosmos-g/fairseq/scripts/constraints/validate.py b/kosmos-g/fairseq/scripts/constraints/validate.py deleted file mode 100644 index d531ad9f3..000000000 --- a/kosmos-g/fairseq/scripts/constraints/validate.py +++ /dev/null @@ -1,34 +0,0 @@ -#!/usr/bin/env python3 -# -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import sys - - -"""Reads in a fairseq output file, and verifies that the constraints -(C- lines) are present in the output (the first H- line). Assumes that -constraints are listed prior to the first hypothesis. -""" - -constraints = [] -found = 0 -total = 0 -for line in sys.stdin: - if line.startswith("C-"): - constraints.append(line.rstrip().split("\t")[1]) - elif line.startswith("H-"): - text = line.split("\t")[2] - - for constraint in constraints: - total += 1 - if constraint in text: - found += 1 - else: - print(f"No {constraint} in {text}", file=sys.stderr) - - constraints = [] - -print(f"Found {found} / {total} = {100 * found / total:.1f}%") diff --git a/kosmos-g/fairseq/scripts/convert_dictionary.lua b/kosmos-g/fairseq/scripts/convert_dictionary.lua deleted file mode 100644 index 14ee8c997..000000000 --- a/kosmos-g/fairseq/scripts/convert_dictionary.lua +++ /dev/null @@ -1,34 +0,0 @@ --- Copyright (c) Facebook, Inc. and its affiliates. 
--- --- This source code is licensed under the MIT license found in the --- LICENSE file in the root directory of this source tree. --- --- Usage: convert_dictionary.lua <dict.th7> -require 'fairseq' -require 'torch' -require 'paths' - -if #arg < 1 then - print('usage: convert_dictionary.lua <dict.th7>') - os.exit(1) -end -if not paths.filep(arg[1]) then - print('error: file does not exist: ' .. arg[1]) - os.exit(1) -end - -dict = torch.load(arg[1]) -dst = paths.basename(arg[1]):gsub('.th7', '.txt') -assert(dst:match('.txt$')) - -f = io.open(dst, 'w') -for idx, symbol in ipairs(dict.index_to_symbol) do - if idx > dict.cutoff then - break - end - f:write(symbol) - f:write(' ') - f:write(dict.index_to_freq[idx]) - f:write('\n') -end -f:close() diff --git a/kosmos-g/fairseq/scripts/convert_model.lua b/kosmos-g/fairseq/scripts/convert_model.lua deleted file mode 100644 index 61b921392..000000000 --- a/kosmos-g/fairseq/scripts/convert_model.lua +++ /dev/null @@ -1,108 +0,0 @@ --- Copyright (c) Facebook, Inc. and its affiliates. --- --- This source code is licensed under the MIT license found in the --- LICENSE file in the root directory of this source tree. --- --- Usage: convert_model.lua <model_epoch1.th7> -require 'torch' -local fairseq = require 'fairseq' - -model = torch.load(arg[1]) - -function find_weight_norm(container, module) - for _, wn in ipairs(container:listModules()) do - if torch.type(wn) == 'nn.WeightNorm' and wn.modules[1] == module then - return wn - end - end -end - -function push_state(dict, key, module) - if torch.type(module) == 'nn.Linear' then - local wn = find_weight_norm(model.module, module) - assert(wn) - dict[key .. '.weight_v'] = wn.v:float() - dict[key .. '.weight_g'] = wn.g:float() - elseif torch.type(module) == 'nn.TemporalConvolutionTBC' then - local wn = find_weight_norm(model.module, module) - assert(wn) - local v = wn.v:float():view(wn.viewOut):transpose(2, 3) - dict[key .. '.weight_v'] = v - dict[key .. '.weight_g'] = wn.g:float():view(module.weight:size(3), 1, 1) - else - dict[key .. '.weight'] = module.weight:float() - end - if module.bias then - dict[key .. '.bias'] = module.bias:float() - end -end - -encoder_dict = {} -decoder_dict = {} -combined_dict = {} - -function encoder_state(encoder) - luts = encoder:findModules('nn.LookupTable') - push_state(encoder_dict, 'embed_tokens', luts[1]) - push_state(encoder_dict, 'embed_positions', luts[2]) - - fcs = encoder:findModules('nn.Linear') - assert(#fcs >= 2) - local nInputPlane = fcs[1].weight:size(1) - push_state(encoder_dict, 'fc1', table.remove(fcs, 1)) - push_state(encoder_dict, 'fc2', table.remove(fcs, #fcs)) - - for i, module in ipairs(encoder:findModules('nn.TemporalConvolutionTBC')) do - push_state(encoder_dict, 'convolutions.' .. tostring(i - 1), module) - if nInputPlane ~= module.weight:size(3) / 2 then - push_state(encoder_dict, 'projections.' .. 
tostring(i - 1), table.remove(fcs, 1)) - end - nInputPlane = module.weight:size(3) / 2 - end - assert(#fcs == 0) -end - -function decoder_state(decoder) - luts = decoder:findModules('nn.LookupTable') - push_state(decoder_dict, 'embed_tokens', luts[1]) - push_state(decoder_dict, 'embed_positions', luts[2]) - - fcs = decoder:findModules('nn.Linear') - local nInputPlane = fcs[1].weight:size(1) - push_state(decoder_dict, 'fc1', table.remove(fcs, 1)) - push_state(decoder_dict, 'fc2', fcs[#fcs - 1]) - push_state(decoder_dict, 'fc3', fcs[#fcs]) - - table.remove(fcs, #fcs) - table.remove(fcs, #fcs) - - for i, module in ipairs(decoder:findModules('nn.TemporalConvolutionTBC')) do - if nInputPlane ~= module.weight:size(3) / 2 then - push_state(decoder_dict, 'projections.' .. tostring(i - 1), table.remove(fcs, 1)) - end - nInputPlane = module.weight:size(3) / 2 - - local prefix = 'attention.' .. tostring(i - 1) - push_state(decoder_dict, prefix .. '.in_projection', table.remove(fcs, 1)) - push_state(decoder_dict, prefix .. '.out_projection', table.remove(fcs, 1)) - push_state(decoder_dict, 'convolutions.' .. tostring(i - 1), module) - end - assert(#fcs == 0) -end - - -_encoder = model.module.modules[2] -_decoder = model.module.modules[3] - -encoder_state(_encoder) -decoder_state(_decoder) - -for k, v in pairs(encoder_dict) do - combined_dict['encoder.' .. k] = v -end -for k, v in pairs(decoder_dict) do - combined_dict['decoder.' .. k] = v -end - - -torch.save('state_dict.t7', combined_dict) diff --git a/kosmos-g/fairseq/scripts/count_docs.py b/kosmos-g/fairseq/scripts/count_docs.py deleted file mode 100644 index 58d85af85..000000000 --- a/kosmos-g/fairseq/scripts/count_docs.py +++ /dev/null @@ -1,58 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Count the number of documents and average number of lines and tokens per -document in a large file. Documents should be separated by a single empty line. 
-""" - -import argparse -import gzip -import sys - -import numpy as np - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("input") - parser.add_argument("--gzip", action="store_true") - args = parser.parse_args() - - def gopen(): - if args.gzip: - return gzip.open(args.input, "r") - else: - return open(args.input, "r", encoding="utf-8") - - num_lines = [] - num_toks = [] - with gopen() as h: - num_docs = 1 - num_lines_in_doc = 0 - num_toks_in_doc = 0 - for i, line in enumerate(h): - if len(line.strip()) == 0: # empty line indicates new document - num_docs += 1 - num_lines.append(num_lines_in_doc) - num_toks.append(num_toks_in_doc) - num_lines_in_doc = 0 - num_toks_in_doc = 0 - else: - num_lines_in_doc += 1 - num_toks_in_doc += len(line.rstrip().split()) - if i % 1000000 == 0: - print(i, file=sys.stderr, end="", flush=True) - elif i % 100000 == 0: - print(".", file=sys.stderr, end="", flush=True) - print(file=sys.stderr, flush=True) - - print("found {} docs".format(num_docs)) - print("average num lines per doc: {}".format(np.mean(num_lines))) - print("average num toks per doc: {}".format(np.mean(num_toks))) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/scripts/read_binarized.py b/kosmos-g/fairseq/scripts/read_binarized.py deleted file mode 100644 index a414095d0..000000000 --- a/kosmos-g/fairseq/scripts/read_binarized.py +++ /dev/null @@ -1,48 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse - -from fairseq.data import Dictionary, data_utils, indexed_dataset - - -def get_parser(): - parser = argparse.ArgumentParser( - description="writes text from binarized file to stdout" - ) - # fmt: off - parser.add_argument('--dataset-impl', help='dataset implementation', - choices=indexed_dataset.get_available_dataset_impl()) - parser.add_argument('--dict', metavar='FP', help='dictionary containing known words', default=None) - parser.add_argument('--input', metavar='FP', required=True, help='binarized file to read') - # fmt: on - - return parser - - -def main(): - parser = get_parser() - args = parser.parse_args() - - dictionary = Dictionary.load(args.dict) if args.dict is not None else None - dataset = data_utils.load_indexed_dataset( - args.input, - dictionary, - dataset_impl=args.dataset_impl, - default="lazy", - ) - - for tensor_line in dataset: - if dictionary is None: - line = " ".join([str(int(x)) for x in tensor_line]) - else: - line = dictionary.string(tensor_line) - - print(line) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/scripts/rm_pt.py b/kosmos-g/fairseq/scripts/rm_pt.py deleted file mode 100644 index 6cd063d21..000000000 --- a/kosmos-g/fairseq/scripts/rm_pt.py +++ /dev/null @@ -1,141 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import os -import re -import shutil -import sys - - -pt_regexp = re.compile(r"checkpoint(\d+|_\d+_\d+|_[a-z]+)\.pt") -pt_regexp_epoch_based = re.compile(r"checkpoint(\d+)\.pt") -pt_regexp_update_based = re.compile(r"checkpoint_\d+_(\d+)\.pt") - - -def parse_checkpoints(files): - entries = [] - for f in files: - m = pt_regexp_epoch_based.fullmatch(f) - if m is not None: - entries.append((int(m.group(1)), m.group(0))) - else: - m = pt_regexp_update_based.fullmatch(f) - if m is not None: - entries.append((int(m.group(1)), m.group(0))) - return entries - - -def last_n_checkpoints(files, n): - entries = parse_checkpoints(files) - return [x[1] for x in sorted(entries, reverse=True)[:n]] - - -def every_n_checkpoints(files, n): - entries = parse_checkpoints(files) - return [x[1] for x in sorted(sorted(entries)[::-n])] - - -def main(): - parser = argparse.ArgumentParser( - description=( - "Recursively delete checkpoint files from `root_dir`, " - "but preserve checkpoint_best.pt and checkpoint_last.pt" - ) - ) - parser.add_argument("root_dirs", nargs="*") - parser.add_argument( - "--save-last", type=int, default=0, help="number of last checkpoints to save" - ) - parser.add_argument( - "--save-every", type=int, default=0, help="interval of checkpoints to save" - ) - parser.add_argument( - "--preserve-test", - action="store_true", - help="preserve checkpoints in dirs that start with test_ prefix (default: delete them)", - ) - parser.add_argument( - "--delete-best", action="store_true", help="delete checkpoint_best.pt" - ) - parser.add_argument( - "--delete-last", action="store_true", help="delete checkpoint_last.pt" - ) - parser.add_argument( - "--no-dereference", action="store_true", help="don't dereference symlinks" - ) - args = parser.parse_args() - - files_to_desymlink = [] - files_to_preserve = [] - files_to_delete = [] - for root_dir in args.root_dirs: - for root, _subdirs, files in os.walk(root_dir): - if args.save_last > 0: - to_save = last_n_checkpoints(files, args.save_last) - else: - to_save = [] - if args.save_every > 0: - to_save += every_n_checkpoints(files, args.save_every) - for file in files: - if not pt_regexp.fullmatch(file): - continue - full_path = os.path.join(root, file) - if ( - not os.path.basename(root).startswith("test_") or args.preserve_test - ) and ( - (file == "checkpoint_last.pt" and not args.delete_last) - or (file == "checkpoint_best.pt" and not args.delete_best) - or file in to_save - ): - if os.path.islink(full_path) and not args.no_dereference: - files_to_desymlink.append(full_path) - else: - files_to_preserve.append(full_path) - else: - files_to_delete.append(full_path) - - if len(files_to_desymlink) == 0 and len(files_to_delete) == 0: - print("Nothing to do.") - sys.exit(0) - - files_to_desymlink = sorted(files_to_desymlink) - files_to_preserve = sorted(files_to_preserve) - files_to_delete = sorted(files_to_delete) - - print("Operations to perform (in order):") - if len(files_to_desymlink) > 0: - for file in files_to_desymlink: - print(" - preserve (and dereference symlink): " + file) - if len(files_to_preserve) > 0: - for file in files_to_preserve: - print(" - preserve: " + file) - if len(files_to_delete) > 0: - for file in files_to_delete: - print(" - delete: " + file) - while True: - resp = input("Continue? 
(Y/N): ") - if resp.strip().lower() == "y": - break - elif resp.strip().lower() == "n": - sys.exit(0) - - print("Executing...") - if len(files_to_desymlink) > 0: - for file in files_to_desymlink: - realpath = os.path.realpath(file) - print("rm " + file) - os.remove(file) - print("cp {} {}".format(realpath, file)) - shutil.copyfile(realpath, file) - if len(files_to_delete) > 0: - for file in files_to_delete: - print("rm " + file) - os.remove(file) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/scripts/sacrebleu.sh b/kosmos-g/fairseq/scripts/sacrebleu.sh deleted file mode 100644 index c10bf2b76..000000000 --- a/kosmos-g/fairseq/scripts/sacrebleu.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash - -if [ $# -ne 4 ]; then - echo "usage: $0 TESTSET SRCLANG TGTLANG GEN" - exit 1 -fi - -TESTSET=$1 -SRCLANG=$2 -TGTLANG=$3 - -GEN=$4 - -if ! command -v sacremoses &> /dev/null -then - echo "sacremoses could not be found, please install with: pip install sacremoses" - exit -fi - -grep ^H $GEN \ -| sed 's/^H\-//' \ -| sort -n -k 1 \ -| cut -f 3 \ -| sacremoses detokenize \ -> $GEN.sorted.detok - -sacrebleu --test-set $TESTSET --language-pair "${SRCLANG}-${TGTLANG}" < $GEN.sorted.detok diff --git a/kosmos-g/fairseq/scripts/shard_docs.py b/kosmos-g/fairseq/scripts/shard_docs.py deleted file mode 100644 index 97232c3c8..000000000 --- a/kosmos-g/fairseq/scripts/shard_docs.py +++ /dev/null @@ -1,54 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Split a large file into shards while respecting document boundaries. Documents -should be separated by a single empty line. -""" - -import argparse -import contextlib - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("input") - parser.add_argument("--num-shards", type=int) - args = parser.parse_args() - - assert args.num_shards is not None and args.num_shards > 1 - - with open(args.input, "r", encoding="utf-8") as h: - with contextlib.ExitStack() as stack: - outputs = [ - stack.enter_context( - open(args.input + ".shard" + str(i), "w", encoding="utf-8") - ) - for i in range(args.num_shards) - ] - - doc = [] - first_doc = [True] * args.num_shards - - def output_doc(i): - if not first_doc[i]: - outputs[i].write("\n") - first_doc[i] = False - for line in doc: - outputs[i].write(line) - doc.clear() - - num_docs = 0 - for line in h: - if line.strip() == "": # empty line indicates new document - output_doc(num_docs % args.num_shards) - num_docs += 1 - else: - doc.append(line) - output_doc(num_docs % args.num_shards) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/scripts/split_train_valid_docs.py b/kosmos-g/fairseq/scripts/split_train_valid_docs.py deleted file mode 100644 index ff1597852..000000000 --- a/kosmos-g/fairseq/scripts/split_train_valid_docs.py +++ /dev/null @@ -1,86 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Split a large file into a train and valid set while respecting document -boundaries. Documents should be separated by a single empty line. 
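-
-Example (hypothetical file names): keep a reservoir sample of 3000 documents in
-the first output file and write all remaining documents to the second:
-
-    python scripts/split_train_valid_docs.py corpus.txt sampled.txt rest.txt -k 3000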
-""" - -import argparse -import random -import sys - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("input") - parser.add_argument("sample_output", help="train output file") - parser.add_argument("remainder_output", help="valid output file") - parser.add_argument("-k", type=int, help="remainder size") - parser.add_argument( - "--lines", action="store_true", help="split lines instead of docs" - ) - args = parser.parse_args() - - assert args.k is not None - - sample = [] - remainder = [] - num_docs = [0] - - def update_sample(doc): - if len(sample) < args.k: - sample.append(doc.copy()) - else: - i = num_docs[0] - j = random.randrange(i + 1) - if j < args.k: - remainder.append(sample[j]) - sample[j] = doc.copy() - else: - remainder.append(doc.copy()) - num_docs[0] += 1 - doc.clear() - - with open(args.input, "r", encoding="utf-8") as h: - doc = [] - for i, line in enumerate(h): - if line.strip() == "": # empty line indicates new document - update_sample(doc) - else: - doc.append(line) - if args.lines: - update_sample(doc) - if i % 1000000 == 0: - print(i, file=sys.stderr, end="", flush=True) - elif i % 100000 == 0: - print(".", file=sys.stderr, end="", flush=True) - if len(doc) > 0: - update_sample(doc) - print(file=sys.stderr, flush=True) - - assert len(sample) == args.k - - with open(args.sample_output, "w", encoding="utf-8") as out: - first = True - for doc in sample: - if not first and not args.lines: - out.write("\n") - first = False - for line in doc: - out.write(line) - - with open(args.remainder_output, "w", encoding="utf-8") as out: - first = True - for doc in remainder: - if not first and not args.lines: - out.write("\n") - first = False - for line in doc: - out.write(line) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/scripts/spm_decode.py b/kosmos-g/fairseq/scripts/spm_decode.py deleted file mode 100644 index 7d7b68b24..000000000 --- a/kosmos-g/fairseq/scripts/spm_decode.py +++ /dev/null @@ -1,53 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from __future__ import absolute_import, division, print_function, unicode_literals - -import argparse - -import sentencepiece as spm - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--model", required=True, help="sentencepiece model to use for decoding" - ) - parser.add_argument("--input", required=True, help="input file to decode") - parser.add_argument("--input_format", choices=["piece", "id"], default="piece") - args = parser.parse_args() - - sp = spm.SentencePieceProcessor() - sp.Load(args.model) - - if args.input_format == "piece": - - def decode(input): - return "".join(sp.DecodePieces(input)) - - elif args.input_format == "id": - - def decode(input): - return "".join(sp.DecodeIds(input)) - - else: - raise NotImplementedError - - def tok2int(tok): - # remap reference-side <unk> (represented as <<unk>>) to 0 - return int(tok) if tok != "<<unk>>" else 0 - - with open(args.input, "r", encoding="utf-8") as h: - for line in h: - if args.input_format == "id": - print(decode(list(map(tok2int, line.rstrip().split())))) - elif args.input_format == "piece": - print(decode(line.rstrip().split())) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/scripts/spm_encode.py b/kosmos-g/fairseq/scripts/spm_encode.py deleted file mode 100644 index f91e0bb72..000000000 --- a/kosmos-g/fairseq/scripts/spm_encode.py +++ /dev/null @@ -1,119 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from __future__ import absolute_import, division, print_function, unicode_literals - -import argparse -import contextlib -import sys - -import sentencepiece as spm - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--model", required=True, help="sentencepiece model to use for encoding" - ) - parser.add_argument( - "--inputs", nargs="+", default=["-"], help="input files to filter/encode" - ) - parser.add_argument( - "--outputs", nargs="+", default=["-"], help="path to save encoded outputs" - ) - parser.add_argument("--output_format", choices=["piece", "id"], default="piece") - parser.add_argument( - "--min-len", - type=int, - metavar="N", - help="filter sentence pairs with fewer than N tokens", - ) - parser.add_argument( - "--max-len", - type=int, - metavar="N", - help="filter sentence pairs with more than N tokens", - ) - args = parser.parse_args() - - assert len(args.inputs) == len( - args.outputs - ), "number of input and output paths should match" - - sp = spm.SentencePieceProcessor() - sp.Load(args.model) - - if args.output_format == "piece": - - def encode(input): - return sp.EncodeAsPieces(input) - - elif args.output_format == "id": - - def encode(input): - return list(map(str, sp.EncodeAsIds(input))) - - else: - raise NotImplementedError - - if args.min_len is not None or args.max_len is not None: - - def valid(line): - return (args.min_len is None or len(line) >= args.min_len) and ( - args.max_len is None or len(line) <= args.max_len - ) - - else: - - def valid(lines): - return True - - with contextlib.ExitStack() as stack: - inputs = [ - stack.enter_context(open(input, "r", encoding="utf-8")) - if input != "-" - else sys.stdin - for input in args.inputs - ] - outputs = [ - stack.enter_context(open(output, "w", encoding="utf-8")) - if output != "-" - else sys.stdout - for output in args.outputs - ] - - stats = { - "num_empty": 0, - "num_filtered": 0, - 
} - - def encode_line(line): - line = line.strip() - if len(line) > 0: - line = encode(line) - if valid(line): - return line - else: - stats["num_filtered"] += 1 - else: - stats["num_empty"] += 1 - return None - - for i, lines in enumerate(zip(*inputs), start=1): - enc_lines = list(map(encode_line, lines)) - if not any(enc_line is None for enc_line in enc_lines): - for enc_line, output_h in zip(enc_lines, outputs): - print(" ".join(enc_line), file=output_h) - if i % 10000 == 0: - print("processed {} lines".format(i), file=sys.stderr) - - print("skipped {} empty lines".format(stats["num_empty"]), file=sys.stderr) - print("filtered {} lines".format(stats["num_filtered"]), file=sys.stderr) - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/fairseq/scripts/spm_train.py b/kosmos-g/fairseq/scripts/spm_train.py deleted file mode 100644 index 9db668fd4..000000000 --- a/kosmos-g/fairseq/scripts/spm_train.py +++ /dev/null @@ -1,16 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from __future__ import absolute_import, division, print_function, unicode_literals - -import sys - -import sentencepiece as spm - - -if __name__ == "__main__": - spm.SentencePieceTrainer.Train(" ".join(sys.argv[1:])) diff --git a/kosmos-g/fairseq/scripts/test_fsdp.sh b/kosmos-g/fairseq/scripts/test_fsdp.sh deleted file mode 100644 index 1f428a035..000000000 --- a/kosmos-g/fairseq/scripts/test_fsdp.sh +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env bash -rm -rf fsdp_dummy -mkdir -p fsdp_dummy -CUDA_VISIBLE_DEVICES=0,1,2,3 fairseq-train /private/home/sshleifer/data-bin/stories_mmap \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 256 --batch-size 8 \ - --arch transformer_lm_gpt2_tiny \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - --max-update 5 --log-format json --log-interval 1 \ - --save-interval-updates 5 --save-dir fsdp_dummy --disable-validation \ - --restore-file x.pt "$@" - -# Now we try to load the checkpoint -CUDA_VISIBLE_DEVICES=0,1 fairseq-train /private/home/sshleifer/data-bin/stories_mmap \ - --ddp-backend fully_sharded --fp16 --fp16-init-scale 4 \ - --cpu-offload --checkpoint-activations \ - --task language_modeling --tokens-per-sample 256 --batch-size 8 \ - --arch transformer_lm_gpt2_tiny \ - --optimizer cpu_adam --adam-betas "(0.9,0.98)" \ - --lr 0.0001 --lr-scheduler polynomial_decay --warmup-updates 5 --total-num-update 10 \ - --max-update 2 --log-format json --log-interval 1 \ - --save-interval-updates 2 --save-dir fsdp_dummy diff --git a/kosmos-g/fairseq/setup.cfg b/kosmos-g/fairseq/setup.cfg deleted file mode 100644 index 3fa679ddf..000000000 --- a/kosmos-g/fairseq/setup.cfg +++ /dev/null @@ -1,4 +0,0 @@ -[flake8] -max-line-length = 127 -extend-ignore = E203, W503 -extend-exclude = fairseq/model_parallel/megatron diff --git a/kosmos-g/fairseq/setup.py b/kosmos-g/fairseq/setup.py deleted file mode 100644 index 4fbda983b..000000000 --- a/kosmos-g/fairseq/setup.py +++ /dev/null @@ -1,285 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
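-
-# Note: building the C++/Cython extensions below requires a working compiler
-# toolchain; a typical from-source install is `pip install --editable .` run
-# from this directory. The CUDA extensions are only built when CUDA_HOME is
-# set (see below).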
- -import os -import subprocess -import sys - -from setuptools import Extension, find_packages, setup - -if sys.version_info < (3, 6): - sys.exit("Sorry, Python >= 3.6 is required for fairseq.") - - -def write_version_py(): - with open(os.path.join("fairseq", "version.txt")) as f: - version = f.read().strip() - - # append latest commit hash to version string - try: - sha = ( - subprocess.check_output(["git", "rev-parse", "HEAD"]) - .decode("ascii") - .strip() - ) - version += "+" + sha[:7] - except Exception: - pass - - # write version info to fairseq/version.py - with open(os.path.join("fairseq", "version.py"), "w") as f: - f.write('__version__ = "{}"\n'.format(version)) - return version - - -version = write_version_py() - - -with open("README.md") as f: - readme = f.read() - - -if sys.platform == "darwin": - extra_compile_args = ["-stdlib=libc++", "-O3"] -else: - extra_compile_args = ["-std=c++11", "-O3"] - - -class NumpyExtension(Extension): - """Source: https://stackoverflow.com/a/54128391""" - - def __init__(self, *args, **kwargs): - self.__include_dirs = [] - super().__init__(*args, **kwargs) - - @property - def include_dirs(self): - import numpy - - return self.__include_dirs + [numpy.get_include()] - - @include_dirs.setter - def include_dirs(self, dirs): - self.__include_dirs = dirs - - -extensions = [ - Extension( - "fairseq.libbleu", - sources=[ - "fairseq/clib/libbleu/libbleu.cpp", - "fairseq/clib/libbleu/module.cpp", - ], - extra_compile_args=extra_compile_args, - ), - NumpyExtension( - "fairseq.data.data_utils_fast", - sources=["fairseq/data/data_utils_fast.pyx"], - language="c++", - extra_compile_args=extra_compile_args, - ), - NumpyExtension( - "fairseq.data.token_block_utils_fast", - sources=["fairseq/data/token_block_utils_fast.pyx"], - language="c++", - extra_compile_args=extra_compile_args, - ), -] - - -cmdclass = {} - - -try: - # torch is not available when generating docs - from torch.utils import cpp_extension - - extensions.extend( - [ - cpp_extension.CppExtension( - "fairseq.libbase", - sources=[ - "fairseq/clib/libbase/balanced_assignment.cpp", - ], - ) - ] - ) - - extensions.extend( - [ - cpp_extension.CppExtension( - "fairseq.libnat", - sources=[ - "fairseq/clib/libnat/edit_dist.cpp", - ], - ), - cpp_extension.CppExtension( - "alignment_train_cpu_binding", - sources=[ - "examples/operators/alignment_train_cpu.cpp", - ], - ), - ] - ) - if "CUDA_HOME" in os.environ: - extensions.extend( - [ - cpp_extension.CppExtension( - "fairseq.libnat_cuda", - sources=[ - "fairseq/clib/libnat_cuda/edit_dist.cu", - "fairseq/clib/libnat_cuda/binding.cpp", - ], - ), - cpp_extension.CppExtension( - "fairseq.ngram_repeat_block_cuda", - sources=[ - "fairseq/clib/cuda/ngram_repeat_block_cuda.cpp", - "fairseq/clib/cuda/ngram_repeat_block_cuda_kernel.cu", - ], - ), - cpp_extension.CppExtension( - "alignment_train_cuda_binding", - sources=[ - "examples/operators/alignment_train_kernel.cu", - "examples/operators/alignment_train_cuda.cpp", - ], - ), - ] - ) - cmdclass["build_ext"] = cpp_extension.BuildExtension - -except ImportError: - pass - - -if "READTHEDOCS" in os.environ: - # don't build extensions when generating docs - extensions = [] - if "build_ext" in cmdclass: - del cmdclass["build_ext"] - - # use CPU build of PyTorch - dependency_links = [ - "https://download.pytorch.org/whl/cpu/torch-1.7.0%2Bcpu-cp36-cp36m-linux_x86_64.whl" - ] -else: - dependency_links = [] - - -if "clean" in sys.argv[1:]: - # Source: https://bit.ly/2NLVsgE - print("deleting Cython files...") - import 
subprocess - - subprocess.run( - ["rm -f fairseq/*.so fairseq/**/*.so fairseq/*.pyd fairseq/**/*.pyd"], - shell=True, - ) - - -extra_packages = [] -if os.path.exists(os.path.join("fairseq", "model_parallel", "megatron", "mpu")): - extra_packages.append("fairseq.model_parallel.megatron.mpu") - - -def do_setup(package_data): - setup( - name="fairseq", - version=version, - description="Facebook AI Research Sequence-to-Sequence Toolkit", - url="https://github.com/pytorch/fairseq", - classifiers=[ - "Intended Audience :: Science/Research", - "License :: OSI Approved :: MIT License", - "Programming Language :: Python :: 3.6", - "Programming Language :: Python :: 3.7", - "Programming Language :: Python :: 3.8", - "Topic :: Scientific/Engineering :: Artificial Intelligence", - ], - long_description=readme, - long_description_content_type="text/markdown", - setup_requires=[ - "cython", - 'numpy<1.20.0; python_version<"3.7"', - 'numpy; python_version>="3.7"', - "setuptools>=18.0", - ], - install_requires=[ - "cffi", - "cython", - 'dataclasses; python_version<"3.7"', - "hydra-core>=1.0.7,<1.1", - "omegaconf<2.1", - 'numpy<1.20.0; python_version<"3.7"', - 'numpy; python_version>="3.7"', - "regex", - "sacrebleu>=1.4.12", - "torch", - "tqdm", - "bitarray", - # "torchaudio>=0.8.0", - ], - dependency_links=dependency_links, - packages=find_packages( - exclude=[ - "examples", - "examples.*", - "scripts", - "scripts.*", - "tests", - "tests.*", - ] - ) - + extra_packages, - package_data=package_data, - ext_modules=extensions, - test_suite="tests", - entry_points={ - "console_scripts": [ - "fairseq-eval-lm = fairseq_cli.eval_lm:cli_main", - "fairseq-generate = fairseq_cli.generate:cli_main", - "fairseq-hydra-train = fairseq_cli.hydra_train:cli_main", - "fairseq-interactive = fairseq_cli.interactive:cli_main", - "fairseq-preprocess = fairseq_cli.preprocess:cli_main", - "fairseq-score = fairseq_cli.score:cli_main", - "fairseq-train = fairseq_cli.train:cli_main", - "fairseq-validate = fairseq_cli.validate:cli_main", - ], - }, - cmdclass=cmdclass, - zip_safe=False, - ) - - -def get_files(path, relative_to="fairseq"): - all_files = [] - for root, _dirs, files in os.walk(path, followlinks=True): - root = os.path.relpath(root, relative_to) - for file in files: - if file.endswith(".pyc"): - continue - all_files.append(os.path.join(root, file)) - return all_files - - -if __name__ == "__main__": - try: - # symlink examples into fairseq package so package_data accepts them - fairseq_examples = os.path.join("fairseq", "examples") - if "build_ext" not in sys.argv[1:] and not os.path.exists(fairseq_examples): - os.symlink(os.path.join("..", "examples"), fairseq_examples) - - package_data = { - "fairseq": ( - get_files(fairseq_examples) - + get_files(os.path.join("fairseq", "config")) - ) - } - do_setup(package_data) - finally: - if "build_ext" not in sys.argv[1:] and os.path.islink(fairseq_examples): - os.unlink(fairseq_examples) diff --git a/kosmos-g/fairseq/train.py b/kosmos-g/fairseq/train.py deleted file mode 100644 index 321de3d9b..000000000 --- a/kosmos-g/fairseq/train.py +++ /dev/null @@ -1,14 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -Legacy entry point. Use fairseq_cli/train.py or fairseq-train instead. 
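-
-Example (hypothetical data path; the two invocations are equivalent once
-fairseq is installed):
-
-    python train.py data-bin/wikitext-103 --task language_modeling --arch transformer_lm
-    fairseq-train data-bin/wikitext-103 --task language_modeling --arch transformer_lm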
-""" - -from fairseq_cli.train import cli_main - - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/infinibatch/.gitattributes b/kosmos-g/infinibatch/.gitattributes deleted file mode 100644 index 5dc46e6b3..000000000 --- a/kosmos-g/infinibatch/.gitattributes +++ /dev/null @@ -1,3 +0,0 @@ -* text=auto eol=lf -*.{cmd,[cC][mM][dD]} text eol=crlf -*.{bat,[bB][aA][tT]} text eol=crlf \ No newline at end of file diff --git a/kosmos-g/infinibatch/.github/workflows/gh-pages.yml b/kosmos-g/infinibatch/.github/workflows/gh-pages.yml deleted file mode 100644 index 084a8750d..000000000 --- a/kosmos-g/infinibatch/.github/workflows/gh-pages.yml +++ /dev/null @@ -1,25 +0,0 @@ -name: Build and Deploy Documentation -on: - push: - branches: - - main -jobs: - build-and-deploy-docs: - runs-on: ubuntu-latest - steps: - - name: Checkout - uses: actions/checkout@v2 - with: - persist-credentials: false - - uses: actions/setup-python@v2 - with: - python-version: '3.8' - - name: Build Documentation - run: | - pip install pdoc3 - pdoc -o docs --template-dir docs --html infinibatch - - name: Deploy Documentation - uses: JamesIves/github-pages-deploy-action@4.0.0 - with: - branch: gh-pages - folder: docs/infinibatch \ No newline at end of file diff --git a/kosmos-g/infinibatch/.github/workflows/unit_tests.yml b/kosmos-g/infinibatch/.github/workflows/unit_tests.yml deleted file mode 100644 index f20938315..000000000 --- a/kosmos-g/infinibatch/.github/workflows/unit_tests.yml +++ /dev/null @@ -1,23 +0,0 @@ -name: Unit Tests -on: [push, pull_request] - -jobs: - run_unit_tests: - runs-on: ${{ matrix.os }} - strategy: - matrix: - os: [ubuntu-16.04, ubuntu-18.04, ubuntu-20.04, windows-2019] - python-version: [3.6, 3.7, 3.8, 3.9] - - steps: - - uses: actions/checkout@v2 - - name: Set up Python ${{ matrix.python-version }} - uses: actions/setup-python@v2 - with: - python-version: ${{ matrix.python-version }} - - name: Install torch - run: | - pip install torch==1.7.1 - - name: Run unit tests - run: | - python -m unittest discover -s ./test diff --git a/kosmos-g/infinibatch/.gitignore b/kosmos-g/infinibatch/.gitignore deleted file mode 100644 index d698cd10b..000000000 --- a/kosmos-g/infinibatch/.gitignore +++ /dev/null @@ -1,494 +0,0 @@ -# Editor temporary/working/backup files # -######################################### -.#* -[#]*# -*~ -*$ -*.bak -*.diff -.idea/ -*.iml -*.ipr -*.iws -*.org -.project -pmip -*.rej -.settings/ -.*.sw[nop] -.sw[nop] -*.tmp -*.vim -.vscode -tags -cscope.out -# gnu global -GPATH -GRTAGS -GSYMS -GTAGS -.cache - -# Compiled source # -################### -*.a -*.com -*.class -*.dll -*.exe -*.o -*.o.d -*.py[ocd] -*.so - -# Packages # -############ -# it's better to unpack these files and commit the raw source -# git has its own built in compression methods -*.7z -*.bz2 -*.bzip2 -*.dmg -*.gz -*.iso -*.jar -*.rar -*.tar -*.tbz2 -*.tgz -*.zip - -# Python files # -################ -# setup.py working directory -build -# sphinx build directory -_build -# setup.py dist directory -dist -doc/build -doc/cdoc/build -# Egg metadata -*.egg-info -# The shelf plugin uses this dir -./.shelf -MANIFEST -.cache - -# Paver generated files # -######################### -/release - -# Logs and databases # -###################### -*.log -*.sql -*.sqlite - -# Patches # -########### -*.patch -*.diff - -# OS generated files # -###################### -.DS_Store* -.VolumeIcon.icns -.fseventsd -Icon? 
-.gdb_history -ehthumbs.db -Thumbs.db -.directory - -# General - -# Compiled Object files -*.slo -*.lo -*.o -*.cuo -*.obj - -# Compiled Dynamic libraries -*.so -*.dylib -*.dll - -# Compiled Static libraries -*.lai -*.la -*.a -*.lib - -# Compiled protocol buffers -*.pb.h -*.pb.cc -*_pb2.py - -# Compiled python -*.pyc -*.pyd - -# Compiled MATLAB -*.mex* - -# IPython notebook checkpoints -.ipynb_checkpoints - -# Editor temporaries -*.swn -*.swo -*.swp -## Ignore Visual Studio temporary files, build results, and -## files generated by popular Visual Studio add-ons. -## -## Get latest from https://github.com/github/gitignore/blob/master/VisualStudio.gitignore - -# User-specific files -*.rsuser -*.suo -*.user -*.userosscache -*.sln.docstates - -# User-specific files (MonoDevelop/Xamarin Studio) -*.userprefs - -# Mono auto generated files -mono_crash.* - -# Build results -[Dd]ebug/ -[Dd]ebugPublic/ -[Rr]elease/ -[Rr]eleases/ -x64/ -x86/ -[Aa][Rr][Mm]/ -[Aa][Rr][Mm]64/ -bld/ -[Bb]in/ -[Oo]bj/ -[Ll]og/ -[Ll]ogs/ - -# Visual Studio 2015/2017 cache/options directory -.vs/ -# Uncomment if you have tasks that create the project's static files in wwwroot -#wwwroot/ - -# Visual Studio 2017 auto generated files -Generated\ Files/ - -# MSTest test Results -[Tt]est[Rr]esult*/ -[Bb]uild[Ll]og.* - -# NUnit -*.VisualState.xml -TestResult.xml -nunit-*.xml - -# Build Results of an ATL Project -[Dd]ebugPS/ -[Rr]eleasePS/ -dlldata.c - -# Benchmark Results -BenchmarkDotNet.Artifacts/ - -# .NET Core -project.lock.json -project.fragment.lock.json -artifacts/ - -# StyleCop -StyleCopReport.xml - -# Files built by Visual Studio -*_i.c -*_p.c -*_h.h -*.ilk -*.meta -*.obj -*.iobj -*.pch -*.pdb -*.ipdb -*.pgc -*.pgd -*.rsp -*.sbr -*.tlb -*.tli -*.tlh -*.tmp -*.tmp_proj -*_wpftmp.csproj -*.log -*.vspscc -*.vssscc -.builds -*.pidb -*.svclog -*.scc - -# Chutzpah Test files -_Chutzpah* - -# Visual C++ cache files -ipch/ -*.aps -*.ncb -*.opendb -*.opensdf -*.sdf -*.cachefile -*.VC.db -*.VC.VC.opendb - -# Visual Studio profiler -*.psess -*.vsp -*.vspx -*.sap - -# Visual Studio Trace Files -*.e2e - -# TFS 2012 Local Workspace -$tf/ - -# Guidance Automation Toolkit -*.gpState - -# ReSharper is a .NET coding add-in -_ReSharper*/ -*.[Rr]e[Ss]harper -*.DotSettings.user - -# TeamCity is a build add-in -_TeamCity* - -# DotCover is a Code Coverage Tool -*.dotCover - -# AxoCover is a Code Coverage Tool -.axoCover/* -!.axoCover/settings.json - -# Visual Studio code coverage results -*.coverage -*.coveragexml - -# NCrunch -_NCrunch_* -.*crunch*.local.xml -nCrunchTemp_* - -# MightyMoose -*.mm.* -AutoTest.Net/ - -# Web workbench (sass) -.sass-cache/ - -# Installshield output folder -[Ee]xpress/ - -# DocProject is a documentation generator add-in -DocProject/buildhelp/ -DocProject/Help/*.HxT -DocProject/Help/*.HxC -DocProject/Help/*.hhc -DocProject/Help/*.hhk -DocProject/Help/*.hhp -DocProject/Help/Html2 -DocProject/Help/html - -# Click-Once directory -publish/ - -# Publish Web Output -*.[Pp]ublish.xml -*.azurePubxml -# Note: Comment the next line if you want to checkin your web deploy settings, -# but database connection strings (with potential passwords) will be unencrypted -*.pubxml -*.publishproj - -# Microsoft Azure Web App publish settings. 
Comment the next line if you want to -# checkin your Azure Web App publish settings, but sensitive information contained -# in these scripts will be unencrypted -PublishScripts/ - -# NuGet Packages -*.nupkg -# NuGet Symbol Packages -*.snupkg -# The packages folder can be ignored because of Package Restore -**/[Pp]ackages/* -# except build/, which is used as an MSBuild target. -!**/[Pp]ackages/build/ -# Uncomment if necessary however generally it will be regenerated when needed -#!**/[Pp]ackages/repositories.config -# NuGet v3's project.json files produces more ignorable files -*.nuget.props -*.nuget.targets - -# Microsoft Azure Build Output -csx/ -*.build.csdef - -# Microsoft Azure Emulator -ecf/ -rcf/ - -# Windows Store app package directories and files -AppPackages/ -BundleArtifacts/ -Package.StoreAssociation.xml -_pkginfo.txt -*.appx -*.appxbundle -*.appxupload - -# Visual Studio cache files -# files ending in .cache can be ignored -*.[Cc]ache -# but keep track of directories ending in .cache -!?*.[Cc]ache/ - -# Others -ClientBin/ -~$* -*~ -*.dbmdl -*.dbproj.schemaview -*.jfm -*.pfx -*.publishsettings -orleans.codegen.cs - -# Including strong name files can present a security risk -# (https://github.com/github/gitignore/pull/2483#issue-259490424) -#*.snk - -# Since there are multiple workflows, uncomment next line to ignore bower_components -# (https://github.com/github/gitignore/pull/1529#issuecomment-104372622) -#bower_components/ - -# RIA/Silverlight projects -Generated_Code/ - -# Backup & report files from converting an old project file -# to a newer Visual Studio version. Backup files are not needed, -# because we have git ;-) -_UpgradeReport_Files/ -Backup*/ -UpgradeLog*.XML -UpgradeLog*.htm -ServiceFabricBackup/ -*.rptproj.bak - -# SQL Server files -*.mdf -*.ldf -*.ndf - -# Business Intelligence projects -*.rdl.data -*.bim.layout -*.bim_*.settings -*.rptproj.rsuser -*- [Bb]ackup.rdl -*- [Bb]ackup ([0-9]).rdl -*- [Bb]ackup ([0-9][0-9]).rdl - -# Microsoft Fakes -FakesAssemblies/ - -# GhostDoc plugin setting file -*.GhostDoc.xml - -# Node.js Tools for Visual Studio -.ntvs_analysis.dat -node_modules/ - -# Visual Studio 6 build log -*.plg - -# Visual Studio 6 workspace options file -*.opt - -# Visual Studio 6 auto-generated workspace file (contains which files were open etc.) 
-*.vbw - -# Visual Studio LightSwitch build output -**/*.HTMLClient/GeneratedArtifacts -**/*.DesktopClient/GeneratedArtifacts -**/*.DesktopClient/ModelManifest.xml -**/*.Server/GeneratedArtifacts -**/*.Server/ModelManifest.xml -_Pvt_Extensions - -# Paket dependency manager -.paket/paket.exe -paket-files/ - -# FAKE - F# Make -.fake/ - -# CodeRush personal settings -.cr/personal - -# Python Tools for Visual Studio (PTVS) -__pycache__/ -*.pyc - -# Cake - Uncomment if you are using it -# tools/** -# !tools/packages.config - -# Tabs Studio -*.tss - -# Telerik's JustMock configuration file -*.jmconfig - -# BizTalk build output -*.btp.cs -*.btm.cs -*.odx.cs -*.xsd.cs - -# OpenCover UI analysis results -OpenCover/ - -# Azure Stream Analytics local run output -ASALocalRun/ - -# MSBuild Binary and Structured Log -*.binlog - -# NVidia Nsight GPU debugger configuration file -*.nvuser - -# MFractors (Xamarin productivity tool) working folder -.mfractor/ - -# Local History for Visual Studio -.localhistory/ - -# BeatPulse healthcheck temp database -healthchecksdb - -# Backup folder for Package Reference Convert tool in Visual Studio 2017 -MigrationBackup/ - -# Ionide (cross platform F# VS Code tools) working folder -.ionide/ - -.devcontainer \ No newline at end of file diff --git a/kosmos-g/infinibatch/CODE_OF_CONDUCT.md b/kosmos-g/infinibatch/CODE_OF_CONDUCT.md deleted file mode 100644 index f9ba8cf65..000000000 --- a/kosmos-g/infinibatch/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,9 +0,0 @@ -# Microsoft Open Source Code of Conduct - -This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). - -Resources: - -- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) -- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) -- Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns diff --git a/kosmos-g/infinibatch/LICENSE b/kosmos-g/infinibatch/LICENSE deleted file mode 100644 index 9e841e7a2..000000000 --- a/kosmos-g/infinibatch/LICENSE +++ /dev/null @@ -1,21 +0,0 @@ - MIT License - - Copyright (c) Microsoft Corporation. - - Permission is hereby granted, free of charge, to any person obtaining a copy - of this software and associated documentation files (the "Software"), to deal - in the Software without restriction, including without limitation the rights - to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - copies of the Software, and to permit persons to whom the Software is - furnished to do so, subject to the following conditions: - - The above copyright notice and this permission notice shall be included in all - copies or substantial portions of the Software. - - THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE - AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, - OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - SOFTWARE diff --git a/kosmos-g/infinibatch/README.md b/kosmos-g/infinibatch/README.md deleted file mode 100644 index 8becdb8b6..000000000 --- a/kosmos-g/infinibatch/README.md +++ /dev/null @@ -1,336 +0,0 @@ -# InfiniBatch - -Infinibatch is a library of checkpointable iterators for randomized data loading of massive data sets in deep neural network training. - - -## Features - - * support for corpora much larger than fit into RAM - * hierarchical block+sentence-level randomization over the whole corpus, different randomization in each epoch - * only load the data that is needed - * very fast start-up time (does not need to read full corpus) - * only requires the most basic of data preparation (e.g. no indexing) - * for multi-GPU, only load what the respective GPU needs - * 100% accurate check-pointing, restore from checkpoint should not read all data up to the checkpoint - * support automatic bucketed batching with dynamic batch sizes - * pre-fetching thread - * composable, as to support for complex batching, e.g. negative samples from multiple documents - - -## Getting Started - -Infinibatch requires Python 3.6 or higher and has no dependencies. -There is presently no pip package. - -To install it, clone this repository and install it locally. - -```bash -git clone https://github.com/microsoft/infinibatch -cd infinibatch -pip install -e . -``` - -## Documentation - -The documentation can be found here: https://microsoft.github.io/infinibatch/ - -## Tutorial - -This little tutorial walks you through the steps of preparing your data and consuming them from Python code as batches. - -### Infinibatch Basics: Iterators and Checkpointing - -Infinibatch provides [Python iterators](https://docs.python.org/3.5/glossary.html#term-iterator) -to read your data. -An iterator represents a stream of data that can be retrieved item by item, e.g. via a -`for` loop or repeatedly calling `next()` on it. - -Infinibatch is agnostic to the data type of the items, which is determined by a user-supplied file-read function. -In NLP applications, items would typically be tuples of text. In other applications, -they can be images or an audio file with a textual annotation. - -Infinibatch makes it easy to read your data in randomized order, and supports checkpointing, which allows you to restart training exactly where you left off. - -Randomization is done _on the fly_, which means that it is not necessary to read the entire data set into memory -to be shuffled. Infinibatch implements a hierarchical shuffling algorithm -that only holds a subset of the data in RAM at any point in time. - -Infinibatch iterators are _checkpointable_. -Checkpointing lets you retrieve the current position (the "checkpoint") in the data stream at any time, so that -later, you can "rewind" to that same position. -The sad reality is that long-running trainings occasionally crash. -To be able to continue a crashed training as if it had not crashed, -save your Infinibatch iterator's checkpoint to disk whenever you save an intermediate model during training. -To restart a crashed training, reset the iterator to the saved checkpoint. -The data reader will now yield the exact same data-item sequence it would have yielded without the crash. 
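-
-As a concrete illustration, here is a minimal sketch of that save/restore cycle. It assumes
-the `getstate()`/`setstate()` methods exposed by Infinibatch's checkpointable iterators and
-refers to the `ds` iterator constructed in the tutorial below; the file name is just an example:
-```python
-import pickle
-
-# when saving an intermediate model, also persist the iterator position
-with open('iterator_checkpoint.pkl', 'wb') as f:
-    pickle.dump(ds.getstate(), f)
-
-# after a crash, rewind the iterator to that exact position and resume
-with open('iterator_checkpoint.pkl', 'rb') as f:
-    ds.setstate(pickle.load(f))
-```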
-
-### Data Preparation
-
-Infinibatch has one requirement on your data organization:
-To use your data with Infinibatch, it must be split into a large number of small chunks.
-A chunk is the smallest unit of data that is loaded from disk into RAM. Infinibatch holds a random subset of chunks in memory
-that it randomly draws samples from.
-
-Below we want to show how such a split can be created. An easy way to split your data into chunks is with the Linux `split` command.
-
-In this tutorial, our "corpus" consists of 6 lines of text, where each line is one data item.
-To create that corpus, please run this command in a bash shell. It creates a 6-line text file named `corpus.txt`:
-```bash
-echo \\
-'Lorem ipsum dolor sit amet,
-consectetur adipiscing elit,
-sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
-Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
-Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
-The quick brown fox jumps over the lazy dog.' \\
-> corpus.txt
-```
-Now let us split it into 3 chunks of 2 lines each. Each chunk is stored as a zipped text file.
-We will create them inside a new subdirectory called `corpus_chunks`:
-```bash
-mkdir corpus_chunks
-split --lines 2 --numeric-suffixes \\
-    --filter 'gzip > corpus_chunks/$FILE.txt.gz' \\
-    corpus.txt corpus.
-```
-This will have created three files: `corpus_chunks/corpus.00.txt.gz`, `corpus_chunks/corpus.01.txt.gz`, and `corpus_chunks/corpus.02.txt.gz`.
-To verify whether the data has been split as expected, you can use this command:
-```bash
-zcat corpus_chunks/corpus.*.txt.gz
-```
-
-Hint: For large corpora, we recommend replacing `gzip` by `pigz` (`apt-get install pigz`), which runs notably faster via multi-threading.
-
-### Reading Items in Random Order With Infinibatch
-
-We will first show the easiest way to read data with Infinibatch, using the helper function `chunked_dataset_iterator()`.
-This function will create an Infinibatch iterator that yields the content of your data in random order.
-Please run the following program:
-```python
-import gzip, glob
-
-from infinibatch import datasets as ds
-
-ds = ds.chunked_dataset_iterator(
-    chunk_refs = glob.glob('corpus_chunks/corpus.*.txt.gz'),
-    read_chunk_fn = lambda path: iter(gzip.decompress(open(path, "rb") \\
-                                      .read()).decode(encoding='utf-8') \\
-                                      .splitlines()),
-    buffer_size = 6, seed = 1)
-
-for i in range(10):
-    print(next(ds))
-```
-You should get output that contains the 6 example lines in randomized order:
-```text
-Lorem ipsum dolor sit amet,
-consectetur adipiscing elit,
-Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
-Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
-The quick brown fox jumps over the lazy dog.
-sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
-consectetur adipiscing elit,
-Lorem ipsum dolor sit amet,
-The quick brown fox jumps over the lazy dog.
-sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
-```
-Note: The `buffer_size` parameter determines how many sentences are read into memory at any given time,
-to draw randomized items from. In real settings with corpora of hundreds of millions of text lines,
-the `buffer_size` parameter should be set in the millions.
-RAM usage and startup time will be proportional to the buffer size
-(but much lower than having to load the entire corpus into RAM).
-
-### Reading Items of Different Lengths in Batches
-
-For deep learning, we want to group multiple items into batches.
-For NLP tasks, items are often lines of text of varying length.
-Infinibatch implements an algorithm that randomizes the input sequence and groups it into
-batches of approximately the same length (aka _bucketing_).
-
-Infinibatch's `BucketedReadaheadBatchIterator` performs this task.
-It implements an algorithm modeled after the [Marian toolkit](https://github.com/marian-nmt/marian)
-that preloads a large number of randomized items (typically millions; in this example: 6),
-sorts them and groups them into batches of similar length, and then yields
-them, in turn, in randomized order.
-
-Here is an example. Note that the `BucketedReadaheadBatchIterator` accepts
-the previous randomized sentence sequence iterator (`ds`) as the source of items to randomize over.
-This is an example of how one forms pipelines of iterators with Infinibatch
-(a concept familiar from Python's own `itertools`).
-Once an iterator is passed to another as its source, consider it owned by that other iterator;
-it must no longer be accessed by the calling code.
-```python
-import gzip, glob
-
-from infinibatch import datasets as ds
-from infinibatch import iterators as it
-
-ds = ds.chunked_dataset_iterator(
-    chunk_refs = glob.glob('corpus_chunks/corpus.*.txt.gz'),
-    read_chunk_fn = lambda path: iter(gzip.decompress(open(path, "rb") \\
-                                      .read()).decode(encoding='utf-8') \\
-                                      .splitlines()),
-    buffer_size = 6, seed = 1)
-
-bs = it.BucketedReadaheadBatchIterator(
-    source_iterator = ds,  # note: this is the iterator from above
-    read_ahead = 6,
-    key = lambda line: len(line),
-    batch_size = 2,
-    seed = 1)
-
-for i in range(25):
-    print(next(bs))
-```
-This code should output something like this:
-```python
-['sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.',
- 'The quick brown fox jumps over the lazy dog.']
-['consectetur adipiscing elit,', 'Lorem ipsum dolor sit amet,']
-['Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.',
- 'Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.']
-```
-followed by different permutations of the same tuples.
-As you can see, the sentences are in random order and grouped in batches of 2 of approximately the same length.
-You may notice that there is no variation in how the items get grouped into batches--that
-is an artifact of this example, and generally not the case in real use when the data size is much larger
-than the batch size.
-
-In NLP, sentence length often varies considerably. As a result, using batches of a fixed number of lines,
-as in the example above, will waste GPU RAM and cores.
-This is because the number of lines is limited by the longest possible sequence; batches of shorter lines
-would leave GPU cycles on the table.
-Ideally, one would use batches that have as many lines as fit into GPU RAM,
-given the number of tokens of the longest line in the batch.
-To support variable batch sizes, Infinibatch allows you to pass a function as the `batch_size` parameter.
-That function will be given the longest item of a batch and should estimate how many items of at most this length can fit.
-
-In our example, we assume that batches can hold at most 150 tokens.
-Please change the above code as follows:
-```python
-    batch_size = lambda longest_line: 150 // len(longest_line),
-```
-The output looks like this:
-```
-['consectetur adipiscing elit,', 'Lorem ipsum dolor sit amet,']
-['Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.']
-['sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.',
- 'The quick brown fox jumps over the lazy dog.']
-['Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.']
-```
-Note that shorter sentences got grouped, while longer ones did not, because they would exceed the total of 150 characters.
-
-### Reading Batches Into Numpy Arrays
-
-Lastly, we will need to feed batches into our favorite deep-learning tool.
-We will show how to convert the batches of text lines into padded `numpy` arrays.
-
-In a typical NLP application, text items would be tokenized, and then each token
-would be represented by an index into a unit vocabulary.
-For simplicity, in this example each character is its own token,
-and each token's numeric unit index is just its ASCII code.
-These sequences are then padded to equal length with -1, and converted into a `numpy` array.
-
-Please rerun the previous example, but first insert the following code before the final `for` loop.
-This example uses an Infinibatch `MapIterator`, which applies a user-supplied function or
-lambda to each item:
-```python
-import numpy as np
-def collate(lines_batch):
-    # tokenize all lines in the batch and map to unit ids
-    ids_batch = [[ord(c) for c in line] for line in lines_batch]
-    # create a padded numpy array as wide as the longest line,
-    # where shorter sequences are padded with -1
-    width = max(len(ids) for ids in ids_batch)
-    return np.array([ids + [-1] * (width-len(ids)) for ids in ids_batch])
-
-bs = it.MapIterator(
-    source_iterator = bs,
-    transform = collate)
-```
-This will output batches like this. Note that in batches with multiple sentences,
-some entries are padded with `-1`.
-```python -[[ 99 111 110 115 101 99 116 101 116 117 114 32 97 100 105 112 105 115 - 99 105 110 103 32 101 108 105 116 44] - [ 76 111 114 101 109 32 105 112 115 117 109 32 100 111 108 111 114 32 - 115 105 116 32 97 109 101 116 44 -1]] -[[ 85 116 32 101 110 105 109 32 97 100 32 109 105 110 105 109 32 118 - 101 110 105 97 109 44 32 113 117 105 115 32 110 111 115 116 114 117 - 100 32 101 120 101 114 99 105 116 97 116 105 111 110 32 117 108 108 - 97 109 99 111 32 108 97 98 111 114 105 115 32 110 105 115 105 32 - 117 116 32 97 108 105 113 117 105 112 32 101 120 32 101 97 32 99 - 111 109 109 111 100 111 32 99 111 110 115 101 113 117 97 116 46]] -[[115 101 100 32 100 111 32 101 105 117 115 109 111 100 32 116 101 109 - 112 111 114 32 105 110 99 105 100 105 100 117 110 116 32 117 116 32 - 108 97 98 111 114 101 32 101 116 32 100 111 108 111 114 101 32 109 - 97 103 110 97 32 97 108 105 113 117 97 46] - [ 84 104 101 32 113 117 105 99 107 32 98 114 111 119 110 32 102 111 - 120 32 106 117 109 112 115 32 111 118 101 114 32 116 104 101 32 108 - 97 122 121 32 100 111 103 46 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 - -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1]] -[[ 68 117 105 115 32 97 117 116 101 32 105 114 117 114 101 32 100 111 - 108 111 114 32 105 110 32 114 101 112 114 101 104 101 110 100 101 114 - 105 116 32 105 110 32 118 111 108 117 112 116 97 116 101 32 118 101 - 108 105 116 32 101 115 115 101 32 99 105 108 108 117 109 32 100 111 - 108 111 114 101 32 101 117 32 102 117 103 105 97 116 32 110 117 108 - 108 97 32 112 97 114 105 97 116 117 114 46]] -``` - -## Where To Go From Here - -The above tutorial showed you the use of the most common iterator type, as created by the -convenience function `chunked_dataset_iterator()`. - -Not all real-life scenarios are covered by this function. For example, multi-task learning -scenarios require more complex combinations of data. To create those, you will need -to compose the necessary data reader from the underlying building blocks. -This is described at the documentation of the module `iterators`. - -## Documentation - -To view the documentation, please clone the repository and go to docs/infinibatch/index.html - -When working on the documentation, install pdoc: -``` -pip install pdoc3 -``` -You can then start a local http server that dynamically updates the documentation: -``` -pdoc --template-dir docs --http : infinibatch -``` - -We currently haven't set up the CI to automatically generate the documentation. -Before you merge anything into master, please delete the existing documentation in docs/infinibatch and run -``` -pdoc -o docs --template-dir docs --html infinibatch -``` - -## Testing - -To run unit tests, run the following command. -``` -python -m unittest discover -s test -``` -If you would like the unit tests to stop after the first failed test, use: -``` -python -m unittest discover -s test --failfast -``` -To type-check with `mypy` (if installed): -``` -mypy infinibatch -``` - -# Contributing - -This project welcomes contributions and suggestions. Most contributions require you to agree to a -Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us -the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. - -When you submit a pull request, a CLA bot will automatically determine whether you need to provide -a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions -provided by the bot. 
You will only need to do this once across all repos using our CLA. - -This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). -For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or -contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. diff --git a/kosmos-g/infinibatch/SECURITY.md b/kosmos-g/infinibatch/SECURITY.md deleted file mode 100644 index f7b89984f..000000000 --- a/kosmos-g/infinibatch/SECURITY.md +++ /dev/null @@ -1,41 +0,0 @@ -<!-- BEGIN MICROSOFT SECURITY.MD V0.0.5 BLOCK --> - -## Security - -Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/). - -If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://docs.microsoft.com/en-us/previous-versions/tn-archive/cc751383(v=technet.10)), please report it to us as described below. - -## Reporting Security Issues - -**Please do not report security vulnerabilities through public GitHub issues.** - -Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://msrc.microsoft.com/create-report). - -If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://www.microsoft.com/en-us/msrc/pgp-key-msrc). - -You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc). - -Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue: - - * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.) - * Full paths of source file(s) related to the manifestation of the issue - * The location of the affected source code (tag/branch/commit or direct URL) - * Any special configuration required to reproduce the issue - * Step-by-step instructions to reproduce the issue - * Proof-of-concept or exploit code (if possible) - * Impact of the issue, including how an attacker might exploit the issue - -This information will help us triage your report more quickly. - -If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://microsoft.com/msrc/bounty) page for more details about our active programs. - -## Preferred Languages - -We prefer all communications to be in English. - -## Policy - -Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://www.microsoft.com/en-us/msrc/cvd). 
- -<!-- END MICROSOFT SECURITY.MD BLOCK --> \ No newline at end of file diff --git a/kosmos-g/infinibatch/docs/config.mako b/kosmos-g/infinibatch/docs/config.mako deleted file mode 100644 index b6b0e8da7..000000000 --- a/kosmos-g/infinibatch/docs/config.mako +++ /dev/null @@ -1,41 +0,0 @@ -<%! - # This is a configuration file for pdoc3, the tool we use for generating html documentation from docstrings. - # Please look at the README.md for instruction on how to generate the documentation. - # Template configuration. Copy over in your template directory - # (used with --template-dir) and adapt as required. - html_lang = 'en' - show_inherited_members = False - extract_module_toc_into_sidebar = True - list_class_variables_in_index = True - sort_identifiers = False - show_type_annotations = True - # Show collapsed source code block next to each item. - # Disabling this can improve rendering speed of large modules. - show_source_code = True - # If set, format links to objects in online source code repository - # according to this template. Supported keywords for interpolation - # are: commit, path, start_line, end_line. - #git_link_template = 'https://github.com/USER/PROJECT/blob/{commit}/{path}#L{start_line}-L{end_line}' - #git_link_template = 'https://gitlab.com/USER/PROJECT/blob/{commit}/{path}#L{start_line}-L{end_line}' - #git_link_template = 'https://bitbucket.org/USER/PROJECT/src/{commit}/{path}#lines-{start_line}:{end_line}' - #git_link_template = 'https://CGIT_HOSTNAME/PROJECT/tree/{path}?id={commit}#n{start-line}' - git_link_template = None - # A prefix to use for every HTML hyperlink in the generated documentation. - # No prefix results in all links being relative. - link_prefix = '' - # Enable syntax highlighting for code/source blocks by including Highlight.js - syntax_highlighting = True - # Set the style keyword such as 'atom-one-light' or 'github-gist' - # Options: https://github.com/highlightjs/highlight.js/tree/master/src/styles - # Demo: https://highlightjs.org/static/demo/ - hljs_style = 'github' - # If set, insert Google Analytics tracking code. Value is GA - # tracking id (UA-XXXXXX-Y). - google_analytics = '' - # If set, render LaTeX math syntax within \(...\) (inline equations), - # or within \[...\] or $$...$$ or `.. math::` (block equations) - # as nicely-formatted math formulas using MathJax. - # Note: in Python docstrings, either all backslashes need to be escaped (\\) - # or you need to use raw r-strings. - latex_math = False -%> \ No newline at end of file diff --git a/kosmos-g/infinibatch/docs/presentations/Infinibatch Introduction, Seide and Gmyr, Aug 2020.pptx b/kosmos-g/infinibatch/docs/presentations/Infinibatch Introduction, Seide and Gmyr, Aug 2020.pptx deleted file mode 100644 index 99c6e117b..000000000 Binary files a/kosmos-g/infinibatch/docs/presentations/Infinibatch Introduction, Seide and Gmyr, Aug 2020.pptx and /dev/null differ diff --git a/kosmos-g/infinibatch/infinibatch/__init__.py b/kosmos-g/infinibatch/infinibatch/__init__.py deleted file mode 100644 index d55a732ce..000000000 --- a/kosmos-g/infinibatch/infinibatch/__init__.py +++ /dev/null @@ -1,280 +0,0 @@ -""" -Infinibatch is a library of checkpointable iterators for randomized data loading of massive data sets in deep neural network training. 
-
-
-## Features
-
- * support for corpora much too large to fit into RAM
- * hierarchical block+sentence-level randomization over the whole corpus, different randomization in each epoch
- * only load the data that is needed
- * very fast start-up time (does not need to read full corpus)
- * only requires the most basic of data preparation (e.g. no indexing)
- * for multi-GPU, only load what the respective GPU needs
- * 100% accurate check-pointing; restoring from a checkpoint does not require re-reading all data up to the checkpoint
- * support automatic bucketed batching with dynamic batch sizes
- * pre-fetching thread
- * composable, so as to support complex batching, e.g. negative samples from multiple documents
-
-
-## Getting Started
-
-Infinibatch requires Python 3.5 and has no dependencies.
-There is presently no pip package.
-To install it, see README.md.
-
-## Tutorial
-
-This little tutorial walks you through the steps of preparing your data and consuming it from Python code as batches.
-
-### Infinibatch Basics: Iterators and Checkpointing
-
-Infinibatch provides [Python iterators](https://docs.python.org/3.5/glossary.html#term-iterator)
-to read your data.
-An iterator represents a stream of data that can be retrieved item by item, e.g. via a
-`for` loop or by repeatedly calling `next()` on it.
-
-Infinibatch is agnostic to the data type of the items, which is determined by a user-supplied file-read function.
-In NLP applications, items would typically be tuples of text. In other applications,
-they can be images or an audio file with a textual annotation.
-
-Infinibatch makes it easy to read your data in randomized order, and supports checkpointing, which allows you to restart training exactly where you left off.
-
-Randomization is done _on the fly_, which means that it is not necessary to read the entire data set into memory
-to be shuffled. Infinibatch implements a hierarchical shuffling algorithm
-that only holds a subset of the data in RAM at any point in time.
-
-Infinibatch iterators are _checkpointable_.
-Checkpointing lets you retrieve the current position (the "checkpoint") in the data stream at any time, so that
-later, you can "rewind" to that same position.
-The sad reality is that long-running trainings occasionally crash.
-To be able to continue a crashed training as if it had not crashed,
-save your Infinibatch iterator's checkpoint to disk whenever you save an intermediate model during training.
-To restart a crashed training, reset the iterator to the saved checkpoint.
-The data reader will then yield the exact same data-item sequence it would have yielded without the crash.
-
-### Data Preparation
-
-Infinibatch has one requirement on your data organization:
-To use your data with Infinibatch, it must be split into a large number of small chunks.
-A chunk is the smallest unit of data that is loaded from disk into RAM. Infinibatch holds a random subset of chunks in memory
-that it randomly draws samples from.
-
-Below we show how such a split can be created. An easy way to split your data into chunks is with the Linux `split` command.
-
-In this tutorial, our "corpus" consists of 6 lines of text, where each line is one data item.
-To create that corpus, please run this command in a bash shell. It creates a 6-line text file named `corpus.txt`:
-```bash
-echo \\
-'Lorem ipsum dolor sit amet,
-consectetur adipiscing elit,
-sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
-Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
-Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
-The quick brown fox jumps over the lazy dog.' \\
-> corpus.txt
-```
-Now let us split it into 3 chunks of 2 lines each. Each chunk is stored as a gzipped text file.
-We will create them inside a new subdirectory called `corpus_chunks`:
-```bash
-mkdir corpus_chunks
-split --lines 2 --numeric-suffixes \\
-      --filter 'gzip > corpus_chunks/$FILE.txt.gz' \\
-      corpus.txt corpus.
-```
-This will have created three files: `corpus_chunks/corpus.00.txt.gz`, `corpus_chunks/corpus.01.txt.gz`, and `corpus_chunks/corpus.02.txt.gz`.
-To verify whether the data has been split as expected, you can use this command:
-```bash
-zcat corpus_chunks/corpus.*.txt.gz
-```
-
-Hint: For large corpora, we recommend replacing `gzip` with `pigz` (`apt-get install pigz`), which runs notably faster via multi-threading.
-
-### Reading Items in Random Order With Infinibatch
-
-We will first show the easiest way to read data with Infinibatch, using the helper function `chunked_dataset_iterator()`.
-This function will create an Infinibatch iterator that yields the content of your data in random order.
-Please run the following program:
-```python
-import gzip, glob
-
-from infinibatch import datasets as ds
-
-ds = ds.chunked_dataset_iterator(
-    chunk_refs = glob.glob('corpus_chunks/corpus.*.txt.gz'),
-    read_chunk_fn = lambda path: iter(gzip.decompress(open(path, "rb")  \\
-                                      .read()).decode(encoding='utf-8') \\
-                                      .splitlines()),
-    buffer_size = 6, seed = 1)
-
-for i in range(10):
-    print(next(ds))
-```
-You should get output that contains the 6 example lines in randomized order:
-```text
-Lorem ipsum dolor sit amet,
-consectetur adipiscing elit,
-Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
-Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
-The quick brown fox jumps over the lazy dog.
-sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
-consectetur adipiscing elit,
-Lorem ipsum dolor sit amet,
-The quick brown fox jumps over the lazy dog.
-sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
-```
-Note: The `buffer_size` parameter determines how many sentences are read into memory at any given time,
-to draw randomized items from. In real settings with corpora of hundreds of millions of text lines,
-the `buffer_size` parameter should be set in the millions.
-RAM usage and startup time will be proportional to the buffer size
-(but much lower than having to load the entire corpus into RAM).
-
-### Reading Items of Different Lengths in Batches
-
-For deep learning, we want to group multiple items into batches.
-For NLP tasks, items are often lines of text of varying length.
-Infinibatch implements an algorithm that randomizes the input sequence and groups it into
-batches of approximately the same length (aka _bucketing_).
-
-Infinibatch's `BucketedReadaheadBatchIterator` performs this task.
-It implements an algorithm modeled after the [Marian toolkit](https://github.com/marian-nmt/marian)
-that preloads a large number of randomized items (typically millions; in this example: 6),
-sorts them and groups them into batches of similar length, and then yields
-them, in turn, in randomized order.
-
-Here is an example. Note that the `BucketedReadaheadBatchIterator` accepts
-the previous randomized sentence sequence iterator (`ds`) as the source of items to randomize over.
-This is an example of how one forms pipelines of iterators with Infinibatch
-(a concept familiar from Python's own `itertools`).
-Once an iterator is passed to another as its source, consider it owned by that other iterator;
-it must no longer be accessed by the calling code.
-```python
-import gzip, glob
-
-from infinibatch import datasets as ds
-from infinibatch import iterators as it
-
-ds = ds.chunked_dataset_iterator(
-    chunk_refs = glob.glob('corpus_chunks/corpus.*.txt.gz'),
-    read_chunk_fn = lambda path: iter(gzip.decompress(open(path, "rb")  \\
-                                      .read()).decode(encoding='utf-8') \\
-                                      .splitlines()),
-    buffer_size = 6, seed = 1)
-
-bs = it.BucketedReadaheadBatchIterator(
-    source_iterator = ds,   # note: this is the iterator from above
-    read_ahead = 6,
-    key = lambda line: len(line),
-    batch_size = 2,
-    seed = 1)
-
-for i in range(25):
-    print(next(bs))
-```
-This code should output something like this:
-```python
-['sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.',
- 'The quick brown fox jumps over the lazy dog.']
-['consectetur adipiscing elit,', 'Lorem ipsum dolor sit amet,']
-['Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.',
- 'Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.']
-```
-followed by different permutations of the same tuples.
-As you can see, the sentences are in random order and grouped in batches of 2 of approximately the same length.
-You may notice that there is no variation in how the items get grouped into batches--that
-is an artifact of this example, and generally not the case in real use when the data size is much larger
-than the batch size.
-
-In NLP, sentence length often varies considerably. As a result, using batches of a fixed number of lines,
-as in the example above, will waste GPU RAM and cores.
-This is because the number of lines is limited by the longest possible sequence; batches of shorter lines
-would leave GPU cycles on the table.
-Ideally, one would use batches that have as many lines as fit into GPU RAM,
-given the number of tokens of the longest line in the batch.
-To support variable batch sizes, Infinibatch allows you to pass a function as the `batch_size` parameter.
-That function will be given the longest item of a batch and should estimate how many items of at most this length can fit.
-
-In our example, we assume that batches can hold at most 150 characters in total.
-Please change the above code as follows:
-```python
-    batch_size = lambda longest_line: 150 // len(longest_line),
-```
-The output looks like this:
-```
-['consectetur adipiscing elit,', 'Lorem ipsum dolor sit amet,']
-['Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.']
-['sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.',
- 'The quick brown fox jumps over the lazy dog.']
-['Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.']
-```
-Note that shorter sentences got grouped together, while longer ones did not, because they would have exceeded the total of 150 characters.
-
-### Reading Batches Into Numpy Arrays
-
-Lastly, we will need to feed batches into our favorite deep-learning tool.
-We will show how to convert the batches of text lines into padded `numpy` arrays.
- -In a typical NLP application, text items would be tokenized, and then each token -would be represented by an index into a unit vocabulary. -For simplicity, in this example each character is its own token, -and each token's numeric unit index is just its ASCII code. -These sequences are then padded to equal length with -1, and converted into a `numpy` array. - -Please rerun the previous example, but first insert the following code before the final `for` loop. -This example uses an Infinibatch `MapIterator`, which applies a user-supplied function or -lambda to each item: -```python -import numpy as np -def collate(lines_batch): - # tokenize all lines in the batch and map to unit ids - ids_batch = [[ord(c) for c in line] for line in lines_batch] - # create a padded numpy array as wide as the longest line, - # where shorter sequences are padded with -1 - width = max(len(ids) for ids in ids_batch) - return np.array([ids + [-1] * (width-len(ids)) for ids in ids_batch]) - -bs = it.MapIterator( - source_iterator = bs, - transform = collate) -``` -This will output batches like this. Note that in batches with multiple sentences, -some entries are padded with `-1`. -```python -[[ 99 111 110 115 101 99 116 101 116 117 114 32 97 100 105 112 105 115 - 99 105 110 103 32 101 108 105 116 44] - [ 76 111 114 101 109 32 105 112 115 117 109 32 100 111 108 111 114 32 - 115 105 116 32 97 109 101 116 44 -1]] -[[ 85 116 32 101 110 105 109 32 97 100 32 109 105 110 105 109 32 118 - 101 110 105 97 109 44 32 113 117 105 115 32 110 111 115 116 114 117 - 100 32 101 120 101 114 99 105 116 97 116 105 111 110 32 117 108 108 - 97 109 99 111 32 108 97 98 111 114 105 115 32 110 105 115 105 32 - 117 116 32 97 108 105 113 117 105 112 32 101 120 32 101 97 32 99 - 111 109 109 111 100 111 32 99 111 110 115 101 113 117 97 116 46]] -[[115 101 100 32 100 111 32 101 105 117 115 109 111 100 32 116 101 109 - 112 111 114 32 105 110 99 105 100 105 100 117 110 116 32 117 116 32 - 108 97 98 111 114 101 32 101 116 32 100 111 108 111 114 101 32 109 - 97 103 110 97 32 97 108 105 113 117 97 46] - [ 84 104 101 32 113 117 105 99 107 32 98 114 111 119 110 32 102 111 - 120 32 106 117 109 112 115 32 111 118 101 114 32 116 104 101 32 108 - 97 122 121 32 100 111 103 46 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 - -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1]] -[[ 68 117 105 115 32 97 117 116 101 32 105 114 117 114 101 32 100 111 - 108 111 114 32 105 110 32 114 101 112 114 101 104 101 110 100 101 114 - 105 116 32 105 110 32 118 111 108 117 112 116 97 116 101 32 118 101 - 108 105 116 32 101 115 115 101 32 99 105 108 108 117 109 32 100 111 - 108 111 114 101 32 101 117 32 102 117 103 105 97 116 32 110 117 108 - 108 97 32 112 97 114 105 97 116 117 114 46]] -``` - -## Where To Go From Here - -The above tutorial showed you the use of the most common iterator type, as created by the -convenience function `chunked_dataset_iterator()`. - -Not all real-life scenarios are covered by this function. For example, multi-task learning -scenarios require more complex combinations of data. To create those, you will need -to compose the necessary data reader from the underlying building blocks. -This is described at the documentation of the module `iterators`. 
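-
-As a small, self-contained taste of such a composition, here is a minimal sketch that rebuilds, roughly, what
-`chunked_dataset_iterator()` does out of the underlying building blocks (without prefetching, transforms, or
-multi-instance splitting). The chunk files and the read function are the ones from the tutorial above; the
-choice and order of iterators is illustrative, not prescriptive:
-```python
-import gzip, glob
-
-from infinibatch import iterators as it
-
-def read_chunk(path):
-    # read one gzipped chunk file and return an iterator over its text lines
-    return iter(gzip.decompress(open(path, "rb").read())
-                    .decode(encoding='utf-8').splitlines())
-
-# infinitely yield chunk paths, reshuffled after each pass (a source iterator)
-chunks  = it.InfinitePermutationSourceIterator(glob.glob('corpus_chunks/corpus.*.txt.gz'), seed=1)
-# flatten each chunk into the stream of its items (here: text lines)
-samples = it.SelectManyIterator(chunks, collection_selector=read_chunk)
-# shuffle the item stream using a limited-size buffer
-samples = it.BlockwiseShuffleIterator(samples, block_size=6, seed=1)
-```
-Checkpointing works exactly as described above: calling `samples.getstate()` captures the state of the
-entire pipeline, and `samples.setstate(checkpoint)` rewinds it to that point.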
-""" - -from .iterators import * diff --git a/kosmos-g/infinibatch/infinibatch/datasets.py b/kosmos-g/infinibatch/infinibatch/datasets.py deleted file mode 100644 index f69160752..000000000 --- a/kosmos-g/infinibatch/infinibatch/datasets.py +++ /dev/null @@ -1,65 +0,0 @@ -from .iterators import create_source_iterator, CheckpointableIterator, SelectManyIterator, PrefetchIterator, BufferedShuffleIterator, BlockwiseShuffleIterator, MapIterator -from typing import List, Union, Iterable, Iterator, Callable, Any, Optional, Dict -import os, sys - -""" -This module contains common datasets, which are implemented as convenience functions that compose underlying Infinibatch iterators. -""" - -def bump_seed(seed: Optional[int], step = 1): - """ - Helper to bump a random seed if not None. - """ - return None if seed is None else seed + 1 - - -def chunked_dataset_iterator(chunk_refs: List, read_chunk_fn: Callable[[Any], Iterator], buffer_size: int, - train: bool=True, - seed: Optional[int]=None, shuffle: bool=True, use_windowed: bool=False, - transform: Callable[[Any],Any]=None, - prefetch: bool=False, - num_instances: int=1, instance_rank: int=0) -> CheckpointableIterator: - """ - Dataset reading data from gzipped chunks. - - If train=True, this chunks are strided assigned to instances in strides and the data is infinitely repeated in permutations. - Otherwise, the chunks are split among the instances in consecutive blocks and the data is not repeated. - This way, when using this dataset for inference on multiple GPUs, to order the outputs in a way that corresponds - to the original order of the data items in the dataset, one simply has to collect the lists of outputs from each GPU - and then concatenate these lists in order of increasing rank. - When using MPI, this can be achieved by a gather-operation to get a list of lists of outputs, one list per GPU, - followed by flattening the lists back into a single list. - - Args: - chunk_refs: references (such as path names) to chunk files - read_chunk_fn: function(chunk_ref) -> Iterator to read a chunk's content into an iterator over its items, e.g. read a file and split into text lines - train: see above - shuffle: if true, the data is shuffled. If train is False then shuffle must be False as well. - buffer_size: size of the buffer in number of samples / data items used for shuffling (default: 2**20) - transform: transform to be applied to each data item (transform(Any) -> Any) - prefetch: if True, insert a prefetch iterator with buffer_size - seed: random seed (or None) - num_instances: number of instances of this dataset. Meant for use with multi-process data loading, e.g., in distributed training. - instance_rank: rank of this instance of the dataset. Meant for use with multi-process data loading, e.g., in distributed training. - use_windowed: temporary option to switch back to the WindowedShuffleIterator (default False). Will go away once shown that we don't need it anymore. 
- """ - if not train and shuffle: - raise ValueError('shuffling is not supported when train=False') - # set up the chunk reader - chunks = create_source_iterator(chunk_refs, train=train, seed=seed, shuffle=shuffle, num_instances=num_instances, instance_rank=instance_rank) - # set up the item reader - samples = SelectManyIterator(source_iterator=chunks, collection_selector=read_chunk_fn) # type: CheckpointableIterator - # wrap the I/O operation in a prefetch iterator - if prefetch: - samples = PrefetchIterator(samples, buffer_size) - # set up the item randomizer - if shuffle: - if use_windowed: - samples = BufferedShuffleIterator(samples, buffer_size, bump_seed(seed, 1)) - else: - samples = BlockwiseShuffleIterator(samples, buffer_size, bump_seed(seed, 1)) - # apply transform, if given - if transform is not None: - samples = MapIterator(samples, transform) - # this is what we are serving out - return samples diff --git a/kosmos-g/infinibatch/infinibatch/iterators.py b/kosmos-g/infinibatch/infinibatch/iterators.py deleted file mode 100644 index cbd029f5a..000000000 --- a/kosmos-g/infinibatch/infinibatch/iterators.py +++ /dev/null @@ -1,1494 +0,0 @@ -""" -## Overview - -This part of the documentation covers the __advanced usage__ of Infinibatch by assembling __custom data loading pipelines__. -Before you continue, please go through the tutorial on the top-level of the documentation of the `infinibatch` module. - -Two of the main features of Infinibatch are __lazy evaluation__ through the use of __iterators__ -and built-in support for __checkpointing__. -In this section, we give an introduction to these features and the basic usage of the Infinibatch iterator library. - - -### Iterators - -As a Python programmer, you are probably familiar with the concept of iterators. -According to the [Python documentation](https://docs.python.org/3.5/glossary.html#term-iterator), -an iterator is an object representing a stream of data, -and repeated calls to the iterator's `__next__()` method (or passing it to the built-in function `next()`) -return successive items in the stream. -It is important not to confuse an [iterator](https://docs.python.org/3.5/glossary.html#term-iterator) -with an [iterable](https://docs.python.org/3.5/glossary.html#term-iterable). -For more information on this subject, please follow the links above. - -The Python standard library contains a module of iterators called `itertools` -that bears some resembles to Infinibatch. -Infinibatch differs from `itertools` in two ways: - -1. Infinibatch provides iterators specifically for the purpose of creating __randomized batches of data for machine learning__. -2. All iterators in Infinibatch support __checkpointing__ (see the following section). - -Infinibatch iterators are not directly compatible with itertools due to the checkpointing requirement. - -Infinibatch enables you to build complex data loaders by combining iterators from this module into a pipeline. -To give you a high-level idea of how this is works, we provide a very simple example. -Note that this example is completely artificial and does not solve any useful task. -Its only purpose is to demonstrate the behavior of a pipeline of iterators. -We provide a more realistic example in a later section. - -First, we create a small test data set. ->>> dataset = list(range(6)) # 0, 1, 2, 3, 4, 5 - -We can turn this data set into an Infinibatch iterator by wrapping it in a `NativeCheckpointableIterator`. 
->>> it = NativeCheckpointableIterator(dataset) # 0, 1, 2, 3, 4, 5 - -We can then transform the data items using a `MapIterator`, -which applies a given function to each individual data item. -For example, we can multiply each data item by 2. ->>> it = MapIterator(it, lambda n: 2 * n) # 0, 2, 4, 6, 8, 10 - -We can restructure the data set by batching together pairs of data items into lists using a `FixedBatchIterator`. ->>> it = FixedBatchIterator(it, batch_size=2) # [0, 2], [4, 6], [8, 10] - -Using another `MapIterator`, we can reduce each of these lists to its second element. ->>> it = MapIterator(it, lambda l: l[1]) # 2, 6, 10 - -Finally, we can use the resulting iterator `it` just like any standard Python iterator. -```py ->>> for item in it: -... print(item) -2 -6 -10 - -``` - -By using iterators, Infinibatch operates in a __lazy__ fashion: -It generally doesn't apply operations to an entire data set at once, -but rather operates on individual data items on-the-fly as they are consumed. -When used correctly, this allows Infinibatch to have a low start-up time and low memory overhead. -For more detail on this, please consult the section on performance considerations below. - - -### Checkpointing - -The main features that sets Infinibatch iterators apart from standard Python iterators is that they support __checkpointing__. -A checkpoint encapsulates the internal state of an entire pipeline of iterators at a specific point while iterating through a data set. -Once you retrieve a checkpoint, you can later use it to reset the pipeline of iterators to the exact state it was in -when the checkpoint was created. -Checkpoints can easily be serialized and stored to disk using [Pythons `pickle` module](https://docs.python.org/3.5/library/pickle.html). -Infinibatch's checkpointing feature is particularly useful when you're training large deep neural network models over days or weeks, -and you want to make sure that, in case your training is interrupted for any reason, __you can pick up your training exactly where you left off__. - -The checkpointing interface consists of two functions `getstate` and `setstate` that are defined in `CheckpointableIterator`, -the common base class of all iterators in this module. -As the names suggest `getstate` returns a checkpoint object that represents the state of a pipeline at the time the function is called, -and 'setstate' receives a checkpoint object to reset the state of a pipeline. -`setstate` also accepts `None`, which resets a pipeline to the __beginning__ of the iteration, -i.e. the state of the pipeline immediately after its construction. - -It is important to realize that __a checkpoint represents the state of a complete pipeline of iterators__. -If you have a pipeline consisting of a sequence of iterators, you only have to call `getstate` on the __last__ iterator in the sequence -to capture the state of the entire pipeline. -Internally, this is achieved by recursive calls that traverse the entire data loading pipeline to collect the state of every iterator in it. -Similarly, when you want to reset a pipeline to a previous state, you only have to call `setstate` on the __last__ iterator in the pipeline. - - -To demonstrate this, we recreate the pipeline from the previous section. 
->>> dataset = list(range(6)) # 0, 1, 2, 3, 4, 5 ->>> it = NativeCheckpointableIterator(dataset) # 0, 1, 2, 3, 4, 5 ->>> it = MapIterator(it, lambda n: 2 * n) # 0, 2, 4, 6, 8, 10 ->>> it = FixedBatchIterator(it, batch_size=2) # [0, 2], [4, 6], [8, 10] ->>> it = MapIterator(it, lambda l: l[1]) # 2, 6, 10 - -Since `it` behaves just like a standard Python iterator, we can call `next` to retrieve its first element. ->>> next(it) -2 - -We can now call `getstate` on `it` (which is the last `MapIterator` in the pipeline) -to get a checkpoint of the internal state of the entire data loading pipeline. ->>> checkpoint = it.getstate() - -Note that the checkpoint represents the internal state of the pipeline after the data item `2` has been retrieved. -Using the checkpoint, we can always return to this __exact__ point in the data set. -To show this, let's exhaust the iterator by casting it to a list. ->>> list(it) -[6, 10] - -Since the iterator is now exhausted, calling `next` raises a `StopIteration` exception. -``` ->>> next(it) -Traceback (most recent call last): - ... -StopIteration - -``` - -We can now reset the pipeline to the checkpoint using `setstate`. ->>> it.setstate(checkpoint) - -This recovers the state of the pipeline after the data item `2` has been retrieved. -Thereby, we expect the next element to be `6`. ->>> next(it) -6 - - -## Types of Iterators - -This section provides a brief overview of the different types of iterators in Infinibatch. - - -### Classes and Factory Functions - -Most iterators in this module are implemented as classes that inherit from the abstract base class `CheckpointableIterator`. -However, some iterators (such as the `BlockwiseShuffleIterator`) are simple combinations of other iterators. -These iterators are implemented as __factory functions__ that construct a pipeline of iterators -and return the last iterator in the pipeline. -For consistency with class-based iterators, -we name these factory function using CamelCase instead of the more pythonic use_of_underscores. - -.. todo:: - We currently also have one factory function that actually looks like one: `create_source_iterator`. - Provide a comment on this describing why that is. - - -### Source Iterators - -There are three iterators that are intended to go at the __beginning__ of a data loading pipeline: - -- `InfinitePermutationSourceIterator`: -This iterator accepts a list, shuffles it, and yields its elements. -It repeats this infinitely, shuffling the list after each pass. -Thereby, __this iterator is infinte and cannot be exhausted__. -This iterator is meant to be used as the first iterator in a training scenario -and supports splitting the data for multi-GPU training. -- `ChunkedSourceIterator`: -This iterator accepts a list and yields its elements. -It is meant to be used as the first iterator in an inference or validation scenario -and supports splitting the data for mult-GPU inference. -- `NativeCheckpointableIterator`: -This iterator wraps a Python iterable and makes it checkpointable. -It is mainly intended for demonstration and debugging purposes. - - -### Shuffling - -.. todo:: Describe `BufferedShuffleIterator` and `BlockwiseShuffleIterator`. - - -### Batching, SelectMany, and Windowing - -.. todo:: Describe `FixedBatchIterator`, `SelectManyIterator`, and `WindowedIterator`. - - -### Mapping - -.. todo:: Describe `MapIterator`, `ParallelMapIterator`, `RecurrentIterator`, and `SamplingRandomMapIterator`. - - -### Other Iterators - -.. 
todo:: Describe `ZipIterator`, `PrefetchIterator`, and `BucketedReadaheadBatchIterator`. - - -## Complete Example - -.. todo:: - Give a more realistic example following, in broad strokes, the ChunkedDataset including: - - - use gzip chunks - - training pipeline example - - inference pipeline example - - pipeline that can do both - - etc. - -## Performance Considerations - -.. todo:: - Describe what parameters influence performance measures such as memory usage and start-up time. -""" - -from abc import abstractmethod -import collections -import copy -import gzip -from itertools import cycle, islice -import logging -import math -import multiprocessing -import os -import queue -from random import Random -import threading -import time -from typing import Any, Callable, Dict, Generator, Iterable, Iterator, List, Optional, Tuple, Union, cast - -logger = logging.getLogger(__name__) - -# TODO for next release: -# - benchmark the accuracy when using BlockwiseShuffleIterator vs. the BufferedShuffleIterator -# - change all convenience functions back to true classes, using a wrapper class - -# TODO later: -# - make iterator pipeline work for streaming data - -def _advance_iterator(iterator: Iterator, n: int): - """ Little helper to advance an iterator by n items """ - for i in range(n): - try: - next(iterator) - except StopIteration: - raise RuntimeError('Trying to advance iterator by {} but iterator raised StopIteration exception on call to next with index {}.'.format(n, i)) - return n - - -class CheckpointableIterator(collections.abc.Iterator): - """ - Abstract base class that defines the interface for checkpointing. - - The interface (getstate, setstate) is inspired by Python's random package. - """ - def __iter__(self) -> 'CheckpointableIterator': - return self - - @abstractmethod - def getstate(self) -> Dict: - """ - Get checkpoint of current state of iterator - - In a pipeline of iterators, this function __recursively__ calls itself on the preceeding iterator - and includes the gathered information in the returned checkpoint. - Thereby, to obtain a checkpoint of the state of an entire pipeline of iterators - you only have to call this function on the __last__ iterator in the pipeline. - A checkpoint is represented as a `dict`, - but the caller should treat a checkpoint as an opaque object - and not make any assumptions about the existence or meaning of the `dict` entries. - """ - pass - - @abstractmethod - def setstate(self, checkpoint: Optional[Dict]): - """ - Set state of iterator to given checkpoint - - In a pipeline of iterators, this function __recursively__ calls itself on the preceeding iterator. - Thereby, to set the state of an entire pipeline of iterators to a given checkpoint - you only have to call this function on the __last__ iterator in the pipeline. - - Args: - checkpoint: Checkpoint that should be used to reset the state of the iterator (or pipeline). - If this is __None__, the state of the iterator (or pipeline) is reset to the initial - state immediately after construction. - """ - pass - - def __getstate__(self) -> Dict: # implementation of pickle Protocol - return self.getstate() - - def __setstate__(self, checkpoint: Optional[Dict]): - self.setstate(checkpoint) - - @abstractmethod - def __next__(self): - pass - - @abstractmethod - def close(self): - """ - Close all PrefetchIterators in this pipeline - - PrefetchIterators have internal resources that need to be properly managed by calling close() manually. 
- Failure to do so can lead to dangling processes and threads, or the PrefetchIterator hanging on finalization. - Note that it is not correct to rely on the garbage collector to destroy PrefetchIterators - as CPython does not assure that the finalizer (__del__) of a PrefetchIterator will be called. - - This function, which is implemented for every CheckpointableIterator, recursively traverses all preceeding - iterators and closes all PrefetchIterators in the pipeline. - For pipelines that do not contain PrefetchIterators this function has no effect. - """ - pass - - -class NativeCheckpointableIterator(CheckpointableIterator): - """ - Simple wrapper class that turns a Python Iterable into a CheckpointableIterator - - When calling setstate on this class, it simply replays the iterator all the way to the checkpoint one element at a time, - which makes it generally inefficient. - - Warning: This class cannot be used with Iterators (as opposed to Iterables), which have an `__iter__` function that simply returns self, but does not reset. - """ - def __init__(self, iterable: Iterable): - # check whether iterable is iterable or iterator: - # if the variable iterable contains an iterator, the function __iter__ returns self - # if the variable iterable is an actual iterator, it should not return self - if iter(iterable) is iterable: - raise ValueError('It looks like you are passing an iterator instead of an iterable. This is not supported and can cause undefined behavior when used with checkpointing.') - self._input_iterable = iterable - self.setstate(None) - - def getstate(self) -> Dict: - return {'num_items_yielded': self._num_items_yielded} - - def setstate(self, checkpoint: Optional[Dict]): - self._iterator = iter(self._input_iterable) - self._num_items_yielded = _advance_iterator(self._iterator, checkpoint['num_items_yielded']) if checkpoint is not None else 0 - - def __next__(self): - item = next(self._iterator) # call this before increasing _num_items_yielded to correctly handle the case when a StopIteration exception is thrown - self._num_items_yielded += 1 - return item - - def close(self): - pass - - -def create_source_iterator(source_items: List, train: bool=True, seed: Optional[int]=None, shuffle: bool=True, num_instances: int=1, instance_rank: int=0) -> CheckpointableIterator: - if not train and shuffle: - raise ValueError('shuffling is not supported when train=False') - if train: - return InfinitePermutationSourceIterator(source_items, seed=seed, shuffle=shuffle, num_instances=num_instances, instance_rank=instance_rank) - else: - return ChunkedSourceIterator(source_items, num_instances=num_instances, instance_rank=instance_rank) - - -def ChunkedSourceIterator(source_items: List, num_instances: int=1, instance_rank: int=0) -> CheckpointableIterator: - """ - Cuts source list into chunks, one per instance, and serves out items in chunk corresponding to instance_rank - - This is a source iterator: - It is meant to be used at the beginning of a data loading pipeline. - As such, it takes a list as its source and not a CheckpointableIterator. - - Args: - source_items: input list, must not be empty and must be small enough to fit into RAM entirely, ownership of the list and the data goes to the iterator, do not modify it! - num_instances: number of instances of this iterator. Meant for use with multi-process data loading, e.g., in distributed training. - instance_rank: rank of this instance of the iterator. Meant for use with multi-process data loading, e.g., in distributed training. 
- """ - if instance_rank >= num_instances: - raise ValueError("invalid instance_rank") - # we split the data into num_instances consecutive parts - # that differ by at most 1 in size - num_items_per_rank = len(source_items) // num_instances - ranks_with_additional_item = len(source_items) - num_instances * num_items_per_rank - def boundary(rank): - return rank * num_items_per_rank + min(rank, ranks_with_additional_item) - items = source_items[boundary(instance_rank):boundary(instance_rank + 1)] - return NativeCheckpointableIterator(items) - - -class InfinitePermutationSourceIterator(CheckpointableIterator): - """ - Infinitely generates permutations of the items in the given list. - - This is a source iterator: - It is meant to be used at the beginning of a data loading pipeline. - As such, it takes a list as its source and not a CheckpointableIterator. - The given list is loaded completely into RAM. - - For example, this is used for randomizing the pathnames of data blocks read by ChunkedReadlinesIterator. - """ - - def __init__( - self, - source_items: List, - seed: int = 0, - shuffle: bool = True, - num_instances: int = 1, - instance_rank: int = 0, - ): - """ - Args: - source_items: input list, must not be empty, must be small enough to fit into RAM entirely, and must support deepcopies - seed: random seed used for shuffling - shuffle: set False to bypass the shuffling. Then this is just a checkpointed version of itertools.cycle(). (Default: True) - num_instances: number of instances of this iterator. Meant for use with multi-process data loading, e.g., in distributed training. - instance_rank: rank of this instance of the iterator. Meant for use with multi-process data loading, e.g., in distributed training. - """ - if not source_items: - raise ValueError("source must not be empty") - if instance_rank >= num_instances: - raise ValueError("invalid instance_rank") - self._source_items = copy.deepcopy(source_items) - self._shuffle = shuffle - self._seed = seed - self._num_instances = num_instances - self._instance_rank = instance_rank - self.setstate(None) - - def getstate(self) -> Dict: - return {"random_state": self._random_state, "index": self._index} - - def setstate(self, checkpoint: Optional[Dict]): - self._random_state = checkpoint["random_state"] if checkpoint else None - self._index = checkpoint["index"] if checkpoint else self._instance_rank - - self._random = None # this will trigger the lazy initialization in self.__next__ - - def __next__(self): - if self._random == None: - # lazy initialization - self._random = Random(self._seed) - if self._random_state is not None: - self._random.setstate(self._random_state) - if self._shuffle: - self._reshuffle() # create initial permutation - self._reshuffle_as_necessary() # reshuffle as often as necesary to bring self._index into range - else: - self._index = self._index % len(self._source_items) - - assert 0 <= self._index and self._index < len(self._source_items) - if self._shuffle: - result = self._shuffled_items[self._index] - self._index += self._num_instances - self._reshuffle_as_necessary() # reshuffle as often as necesary to bring self._index into range - else: - result = self._source_items[self._index] - self._index = (self._index + self._num_instances) % len(self._source_items) - assert 0 <= self._index and self._index < len(self._source_items) - return result - - def close(self): - pass - - def _reshuffle_as_necessary(self): - while self._index >= len(self._source_items): - # The new index is out of range, so we need to 
reshuffle. - # Since len(self._source_items) can be smaller than self._num_instances, - # we might have to reshuffle multiple times to "skip through" permutations of self._source_items. - # Even though there might be intermediate permutations that are not actually used, - # we have to generate all of them to make sure we get the right RNG state - # to guarantee correctness when using multiple instances. - self._reshuffle() - self._index -= len(self._source_items) - - def _reshuffle(self): - self._random_state = self._random.getstate() - self._shuffled_items = copy.deepcopy(self._source_items) - self._random.shuffle(self._shuffled_items) - - - - -class MultiplexIterator(CheckpointableIterator): - """ - Multiplexes multiple input iterators. - - A control iterator is expected to yield a sequence of indices into an array of input iterators. - The next item is selected from the input iterator whose index was read from the control iterator - """ - def __init__(self, control_iterator: CheckpointableIterator, source_iterators: List[CheckpointableIterator]): - if any(not isinstance(it, CheckpointableIterator) for it in [control_iterator] + source_iterators): - raise ValueError('control_iterator and source_iterators have to be CheckpointableIterators') - self._control_iterator = control_iterator # type: CheckpointableIterator - self._source_iterators = list(source_iterators) # type: List[CheckpointableIterator] - self.setstate(None) - - def getstate(self) -> Dict: - return {'control_iterator_state': self._control_iterator.getstate(), - 'source_iterator_states': [source_iterator.getstate() for source_iterator in self._source_iterators]} - - def setstate(self, checkpoint: Optional[Dict]): - self._control_iterator.setstate(checkpoint['control_iterator_state'] if checkpoint else None) - for i, source_iterator in enumerate(self._source_iterators): - source_iterator.setstate(checkpoint['source_iterator_states'][i] if checkpoint else None) - def _generate(): - for index in self._control_iterator: - item = next(self._source_iterators[index]) - yield item - self._iterator = _generate() - - def __next__(self): - return next(self._iterator) - - def close(self): - self._control_iterator.close() - for it in self._source_iterators: - it.close() - -class SelectManyIterator(CheckpointableIterator): - """ - Projects each element of a source sequence to a sequence and flattens the resulting sequences into one sequence. - """ - def __init__(self, source_iterator: CheckpointableIterator, collection_selector: Optional[Callable[[Any], Iterator]]=None): - """ - Args: - source_iterator: iterator over the items to pass to collection_selector() - collection_selector: user callback that maps an item into an Iterable, whose items will be yielded. - The returned Iterator is used only once. Hence, it is also allowed to - return self-iterables, such as iterators and generator expressions. - If None is given, no callback is applied. 
- """ - if not isinstance(source_iterator, CheckpointableIterator): - raise ValueError('source_iterator has to be a CheckpointableIterator') - self._source_iterator = source_iterator # type: CheckpointableIterator - self._collection_selector = collection_selector # type: Optional[Callable[[Any], Iterator]] - self.setstate(None) - - def getstate(self) -> Dict: - return {'source_state': self._source_state, - 'flattened_items_yielded': self._flattened_items_yielded} - - def setstate(self, checkpoint: Optional[Dict]): - self._source_state = checkpoint['source_state'] if checkpoint else None - self._flattened_items_yielded = checkpoint['flattened_items_yielded'] if checkpoint else 0 - self._source_iterator.setstate(self._source_state) - def _generate(): - skip_to_checkpoint = self._flattened_items_yielded - # main loop over source source_items - for source_item in self._source_iterator: - if self._collection_selector is not None: - data = iter(self._collection_selector(source_item)) - else: - data = iter(source_item) - self._flattened_items_yielded = 0 - if skip_to_checkpoint: - #print("Skipping to index", skip_to_checkpoint, file=sys.stderr) - self._flattened_items_yielded += _advance_iterator(data, skip_to_checkpoint) - skip_to_checkpoint = 0 - # main loop over lines - for item in data: - self._flattened_items_yielded += 1 - yield item - self._source_state = self._source_iterator.getstate() - self._iterator = _generate() - - def __next__(self): - return next(self._iterator) - - def close(self): - self._source_iterator.close() - -class BufferedShuffleIterator(CheckpointableIterator): - """ - Shuffles given iterable using a limited buffer. - """ - def __init__(self, source_iterator: CheckpointableIterator, buffer_size: int, seed: int=0): - """ - Args: - source_iterator: checkpointable iterator or restartable iterable over input items to shuffle - buffer_size: size of the buffer in number of items used for shuffling - seed: random seed used for shuffling (or None) - """ - if not isinstance(source_iterator, CheckpointableIterator): - raise ValueError('source_iterator has to be a CheckpointableIterator') - self._source_iterator = source_iterator - self._buffer_size = buffer_size - self._seed = seed - self.setstate(None) - - def getstate(self) -> Dict: - return {'source_state': self._source_iterator.getstate(), - 'buffer': copy.deepcopy(self._buffer), # create deepcopy so that iterator cannot modify checkpoint after it was taken - 'random_state': self._random.getstate()} - - def setstate(self, checkpoint: Optional[Dict]): - if checkpoint: - self._source_iterator.setstate(checkpoint['source_state']) - self._buffer = copy.deepcopy(checkpoint['buffer']) # create deepcopy so that iterator cannot modify checkpoint - self._random.setstate(checkpoint['random_state']) - # @TODO: Can we add a comment how the flush part is handled? 
- else: - self._source_iterator.setstate(None) - self._buffer = [None for _ in range(self._buffer_size)] - self._random = Random(self._seed) # type: Random - self._iterator = self._generate() - - def _generate(self) -> Iterator: - # shuffle data with a buffer: - # this is similar to what the Fisher-Yates shuffle does, - # but modified to run with a constant-size buffer - # see https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle - # this was inspired by an algorithm implemented in Kaldi - # see https://kaldi-asr.org/doc/nnet-shuffle-egs_8cc.html - for item in self._source_iterator: - index = self._random.randrange(0, len(self._buffer)) - result = None - if self._buffer[index] is not None: - result = self._buffer[index] - self._buffer[index] = item - # only yield value once buffer is updated to allow for correct checkpointing! - if result is not None: - yield result - - # flush buffer - while self._buffer: - item = self._buffer.pop() - if item is not None: - yield item - - def __next__(self): - return next(self._iterator) - - def close(self): - self._source_iterator.close() - - -class MapIterator(CheckpointableIterator): - """ - Applies given tranform to each data item - """ - def __init__(self, source_iterator: CheckpointableIterator, transform: Callable[[str],Any]): - """ - Args: - source_iterator: checkpointable iterator - transform: function to be applied to each data item - """ - if not isinstance(source_iterator, CheckpointableIterator): - raise ValueError('source_iterator has to be a CheckpointableIterator') - self._source_iterator = source_iterator - self._transform = transform - - def getstate(self) -> Dict: - return self._source_iterator.getstate() - - def setstate(self, checkpoint: Optional[Dict]): - self._source_iterator.setstate(checkpoint) - - def __next__(self): - return self._transform(next(self._source_iterator)) - - def close(self): - self._source_iterator.close() - - -def ParallelMapIterator(source_iterator: CheckpointableIterator, transform: Callable[[str],Any], num_processes: int, num_items_per_process: int) -> CheckpointableIterator: - """ - Applies given transform to each data item - - Behaves the same as MapIterator, but applies transform in parallel using multiple processes in a parallel map operation. - - Warning: - The transform function has to be pickleable because it is sent across process boundaries. - To achieve this, transform should be a top-level function. - - Args: - source_iterator: checkpointable iterator - transform: function to be applied to each data item, has to be pickleable, see above - num_processes: number of processes to use for parallel map - num_items_per_process: number of data items each process operates on - """ - # divide stream of data items into batches - batched_samples = FixedBatchIterator(source_iterator, num_processes * num_items_per_process) - # create process pool and capture it in closure that performs parallel map - p = multiprocessing.Pool(num_processes) - def parallel_map_transform(buffer): - return p.map(transform, buffer) - # apply transform in parallel to data items in a batch - batched_transformed_samples = MapIterator(batched_samples, parallel_map_transform) - # unpack batches to go back to stream of (now transformed) data items - transformed_samples = SelectManyIterator(batched_transformed_samples) - return transformed_samples - - -class ZipIterator(CheckpointableIterator): - """ - Zips items from all given iterators, like the Python standard function zip(). 
- - Like Python's build-in zip(), the iteration stops when the shortest input iterable is exhausted. - """ - def __init__(self, *source_iterators: CheckpointableIterator): - """ - Args: - source_iterators: list of iterators to zip, item by item - """ - # TODO: Use all function? - for source_iterator in source_iterators: - if not isinstance(source_iterator, CheckpointableIterator): - raise ValueError('all iterators in source_iterators have to be CheckpointableIterator') - self._source_iterators = list(source_iterators) # type: List[CheckpointableIterator] - - def getstate(self) -> Dict: - return {'input_states': tuple(iterator.getstate() for iterator in self._source_iterators)} - - def setstate(self, checkpoint: Optional[Dict]): - if checkpoint is None: - for iterator in self._source_iterators: - iterator.setstate(None) - else: - # TODO: Add check that both lists have the same length? - for iterator, state in zip(self._source_iterators, checkpoint['input_states']): - iterator.setstate(state) - - def __next__(self): - res = [] # (note: can't use a generator expression, as it gets confused when a next() call raises StopIteration) - for iterator in self._source_iterators: - res.append(next(iterator)) - return tuple(res) - - def close(self): - for it in self._source_iterators: - it.close() - -# @TODO: The yield makes a (shallow) copy of the window, which has complexity O(width * length). In some cases, -# we don't actually need to consume all items in the window. Hence, to make this faster, we should use -# double-buffering and return a slice view (which we'd have to write). -class WindowedIterator(CheckpointableIterator): - """ - Yields 'width' consecutive items in a sliding window. - - E.g. [1, 2, 3, 4, 5, 6] with width = 3 will yield - [(1, 2, 3), (2, 3, 4), (3, 4, 5), (4, 5, 6)] - """ - def __init__(self, source_iterator: CheckpointableIterator, width: int): - """ - Args: - source_iterator: checkpointable input iterators - """ - if not isinstance(source_iterator, CheckpointableIterator): - raise ValueError('source_iterator has to be a CheckpointableIterator') - self._source_iterator = source_iterator # type: CheckpointableIterator - self._width = width # type: int - self.setstate(None) - - def getstate(self) -> Dict: - return {'source_state': self._source_state, # state for first item in FIFO - 'item_index': self._item_index} # index of next item to serve - - def setstate(self, checkpoint: Optional[Dict]): - self._source_state = checkpoint['source_state'] if checkpoint else None - self._item_index = checkpoint['item_index'] if checkpoint else 0 - self._source_iterator.setstate(self._source_state) - self._iterator = self._generate() - - def _fifo_slice(self, i): # returns a window into the FIFO beginning at i - # @TODO: for efficiency, make this a slice view - return tuple(self._fifo[i:i + self._width]) - - def _generate(self) -> Iterator: - self._source_state = self._source_iterator.getstate() - self._fifo = list(islice(self._source_iterator, self._width)) - # we do this in overlapping blocks of length 2*width, for easier checkpointing and potential efficiency - while len(self._fifo) == self._width: - # we got 'width' items; append another 'width' (or less if at end) - next_input_state = self._source_iterator.getstate() - self._fifo.extend(islice(self._source_iterator, self._width)) - # now serve all positions in first half (last = width - 1). If at end, then limit accordingly. 
- last = min(self._width - 1, len(self._fifo) - self._width) - while self._item_index <= last: - window = self._fifo_slice(self._item_index) - self._item_index += 1 - yield window - # drop all we just served; if < width left, we have hit the end - self._fifo = self._fifo[last + 1:] # Note: This must be a new list, since the old might still be in a slice view. - self._source_state = next_input_state # this reflects now the first element in the FIFO - self._item_index = 0 - - def __next__(self): - return next(self._iterator) - - def close(self): - self._source_iterator.close() - - -# @TODO: research on whether this operation has a well-known name -class FixedBatchIterator(CheckpointableIterator): - """ - Batches N consecutive items into a single item that is a list of these items. - - E.g. [1, 2, 3 4, 5, 6, 7, 8] with batch_size = 3 will yield - [[1, 2, 3], [4, 5, 6], [7, 8]] - """ - def __init__(self, source_iterator: CheckpointableIterator, batch_size: int): - """ - Args: - source_iterator: checkpointable input iterators - batch_size: number of items per batch - """ - if not isinstance(source_iterator, CheckpointableIterator): - raise ValueError('source_iterator has to be a CheckpointableIterator') - if batch_size <= 0: - raise ValueError('batch_size has to be positive') - self._source_iterator = source_iterator # type: CheckpointableIterator - self._batch_size = batch_size # type: int - self.setstate(None) - - def getstate(self) -> Dict: - return {'source_state': self._source_iterator.getstate()} # state for first item in next batch - - def setstate(self, checkpoint: Optional[Dict]): - self._source_state = checkpoint['source_state'] if checkpoint else None - self._source_iterator.setstate(self._source_state) - self._iterator = self._generate() - - def _generate(self) -> Iterator: - while True: - batch = list(islice(self._source_iterator, self._batch_size)) - if not batch: - break - yield batch - - def __next__(self): - return next(self._iterator) - - def close(self): - self._source_iterator.close() - - -class RandomIterator(CheckpointableIterator): - """ - Iterator to generate uniformly distributed random numbers in the interval [0,1). - Very similar to Random.random(), except that random numbers are - obtained via next(). - """ - def __init__(self, seed: int=0): - """ - Args: - seed: Random seed. - """ - self._seed = seed - self._random = Random(self._seed) # type: Random - - def getstate(self) -> Dict: - return {'random_state': self._random.getstate()} - - def setstate(self, checkpoint: Optional[Dict]): - if checkpoint is None: - self._random.seed(self._seed) - else: - self._random.setstate(checkpoint['random_state']) - - def __next__(self): - return self._random.random() - - def close(self): - pass - - -class RecurrentIterator(CheckpointableIterator): - """ - Iterates statefully over a step function. The step function accepts a state and a new item, - and returns a new state and an output item, which is yielded. 
- """ - def __init__(self, source_iterator: CheckpointableIterator, step_function: Callable[[Any,Any], Tuple[Any,Any]], initial_state: Any = None): - """ - Args: - source_iterator: checkpointable iterator to recur over - step_function: user-supplied function with signature step_function(state, item) -> (new_state, output) - initial_state: initial state to be passed to the step_function upon first invocation - """ - if not isinstance(source_iterator, CheckpointableIterator): - raise ValueError('source_iterator has to be a CheckpointableIterator') - self._source_iterator = source_iterator # type: CheckpointableIterator - self._step_function = step_function # type: Callable[[Any,Any], Tuple[Any,Any]] - # take deepcopy of initial state so that user cannot change initial state after iterator is created - self._initial_state = copy.deepcopy(initial_state) # type: Any - self.setstate(None) - - def getstate(self): - # return deepcopy of recurrent state so that user cannot change recurrent state within a checkpoint after it was taken - # by modifying the recurrent_state in place during the step_function - return {'recurrent_state': copy.deepcopy(self._recurrent_state), - 'source_state': self._source_iterator.getstate()} - - def setstate(self, checkpoint): - # take deepcopy of recurrent_state from checkpoint and initial state so that user cannot modify the checkpoint / the initial state - # by modifying the recurrent_state in place during the step_function - self._recurrent_state = copy.deepcopy(checkpoint['recurrent_state']) if checkpoint else copy.deepcopy(self._initial_state) - self._source_iterator.setstate(checkpoint['source_state'] if checkpoint else None) - def _generate(): - for item in self._source_iterator: - # with all the deepcopies above, in-place modification of recurrent_state within the step_function is now ok - self._recurrent_state, output = self._step_function(self._recurrent_state, item) - yield output - self._iterator = _generate() - - def __next__(self): - return next(self._iterator) - - def close(self): - self._source_iterator.close() - - -def SamplingRandomMapIterator(source_iterator: CheckpointableIterator, transform: Callable[[Random,Any],Any], seed: int=0): - """ - An iterator that calls a transform function on each item, while also passing a checkpointed - random generator. - - Args: - source_iterator: checkpointable iterator to recur over - step_function: user-supplied function with signature step_function(random, item) -> result_item - seed: random seed - """ - _random = Random(seed) - def _step_function(state, item): - _random.setstate(state) - output = transform(_random, item) - return _random.getstate(), output - return RecurrentIterator(source_iterator, _step_function, initial_state=_random.getstate()) - - -def BlockwiseShuffleIterator(source_iterator: CheckpointableIterator, block_size: int, seed: int=0): - """ - Shuffles a sequence of items by grouping consecutive items in blocks of fixed size, shuffling - each block, and yielding the shuffled items of all blocks as a flat sequence. - - E.g. [1, 2, 3, 4, 5, 6, 7, 8] with block_size = 3 may yield [3, 1, 2, 4, 6, 5, 8, 7]. 
-
-    Args:
-        source_iterator: checkpointable iterator or restartable iterable over input items to shuffle
-        block_size: size of the buffer in number of items used for shuffling
-        seed: random seed used for shuffling (or None)
-    """
-    # This is implemented as a pipeline:
-    #  - group N consecutive items together
-    #  - shuffle them
-    #  - flatten the result
-    blocks = FixedBatchIterator(source_iterator, batch_size=block_size)
-    def shuffle_block_fn(random: Random, block: List):
-        random.shuffle(block)
-        return block
-    shuffled_blocks = SamplingRandomMapIterator(blocks, transform=shuffle_block_fn, seed=seed)
-    samples = SelectManyIterator(shuffled_blocks, collection_selector=lambda shuffled_block: iter(shuffled_block))
-    return samples
-
-
-def PrefetchIterator(source_iterator: CheckpointableIterator, buffer_size: int, buffer_in_main_process: bool=False, log_empty_buffer_warning: bool=False):
-    """
-    An iterator that prefetches data into a buffer on a separate process.
-
-    Args:
-        source_iterator: checkpointable iterator to recur over
-        buffer_size: number of items to prefetch; this is the maximum number of items held in the prefetch queue
-        buffer_in_main_process: use experimental version of PrefetchBuffer that has buffer in main process instead of prefetch process
-        log_empty_buffer_warning: log warning message if prefetch buffer is empty, only supported if buffer_in_main_process=True
-    """
-    if not isinstance(source_iterator, CheckpointableIterator):
-        raise ValueError('source_iterator has to be a CheckpointableIterator')
-    if buffer_size <= 0:
-        raise ValueError('buffer_size must be positive')
-
-    if multiprocessing.get_start_method() != 'fork':
-        print('WARNING: \
-            PrefetchIterator is only supported on operating systems that use fork to create new processes.\
-            This excludes Windows.\
-            A dummy iterator is inserted instead of a PrefetchIterator.\
-            This also means that checkpoints of this iterator pipeline cannot be ported to a system that uses fork.')
-        return source_iterator
-    else:
-        if buffer_in_main_process:
-            return _ForkPrefetchIteratorExperimental(source_iterator, buffer_size, log_empty_buffer_warning)
-        else:
-            return _ForkPrefetchIterator(source_iterator, buffer_size)
-
-
-class _ForkPrefetchIterator(CheckpointableIterator):
-    """
-    Actual internal implementation of the prefetch iterator for systems that support creating processes through fork.
- Args: - source_iterator: checkpointable iterator to recur over - buffer_size: number of items to prefetch; this is the maximum number of items held in the prefetch queue - """ - def __init__(self, source_iterator: CheckpointableIterator, buffer_size: int): - self._source_iterator = source_iterator # type:CheckpointableIterator - self._buffer_size = buffer_size # type: int - self._prefetch_process = None # type: Optional[multiprocessing.Process] - self.setstate(None) - - def getstate(self) -> Dict: - return {'source_state': self._source_state, - 'item_offset' : self._item_offset } - - def setstate(self, checkpoint: Optional[Dict]): - self._terminate_and_join_prefetch_process() # kill current process if any - - self._source_state = checkpoint['source_state'] if checkpoint is not None else None - self._item_offset = checkpoint['item_offset' ] if checkpoint is not None else 0 - self._source_iterator.setstate(self._source_state) - self._queue = multiprocessing.Queue(maxsize=self._buffer_size) - _prefetch_process = multiprocessing.Process(target=self._prefetch_process_fn, - args=(self._source_iterator, - self._item_offset, # @TODO: why pass all these parameters? They are forked anyways. Seems a left-over from thread days. - self._buffer_size, - self._queue)) - _prefetch_process.start() # this invokes fork() - self._prefetch_process = _prefetch_process - # make sure that in case of an unexpected shutdown, we still get rid of any active child process - import atexit - atexit.register(_ForkPrefetchIterator._join_process, self._prefetch_process) - - @staticmethod - def _prefetch_process_fn(source, item_offset, buffer_size, queue): # behavior of the prefetching process, only to be called from that process! - _advance_iterator(source, item_offset) # skip to checkpoint - while True: - try: - item = next(source) - except StopIteration: - queue.put(StopIteration()) - # It seems Python Queue has a bug: if we return here, then the StopIteration message is never sent to the receiver. - # So we just dead-loop, assuming that the process will be killed anyways when the consuming side destructs the prefetcher. - import time - while True: - time.sleep(1000) - return # we never actually get here - if item_offset == buffer_size - 1: # for efficiency, we send a new source state only at the END of each window of length _buffer_size - source_state = source.getstate() # this is the state for retrieving the NEXT element, i.e. the first element of the next buffer - item_offset = 0 - else: - source_state = None - item_offset += 1 - msg = (item, source_state) - queue.put(msg) - - def __next__(self): - if self._queue is None: # iterator has already been exhausted - raise StopIteration() - msg = self._queue.get() - if isinstance(msg, StopIteration): - self._queue = None - raise StopIteration() - item, prefetch_source_state = msg # for efficiency, the prefetch_source_state is only transmitted at the end of each window of length _buffer_size - if prefetch_source_state is not None: - assert self._item_offset == self._buffer_size - 1 # we expect a new source state at then END of each window of length _buffer_size - self._source_state = prefetch_source_state - self._item_offset = 0 - else: - self._item_offset = self._item_offset + 1 - assert self._item_offset < self._buffer_size - return item # for debugging, its useful to return msg instead of item - - def __del__(self): # note: this is often not called. If you really need it, gc.collect() will do the trick. 
-
-    def __del__(self):  # note: this is often not called. If you really need it, gc.collect() will do the trick.
-        self._terminate_and_join_prefetch_process()
-
-    def _terminate_and_join_prefetch_process(self):  # terminate the prefetch process if one is running
-        if hasattr(self, "_prefetch_process") and self._prefetch_process:
-            _ForkPrefetchIterator._join_process(self._prefetch_process)
-        self._prefetch_process = None
-
-    @staticmethod
-    def _join_process(p):  # called from setstate(), __del__(), and atexit handler
-        # We create prefetching processes with UNIX fork.
-        # That means that we might end up with inactive copies
-        # of prefetchers in the memory of prefetching processes.
-        # These inactive copies can never create their
-        # own prefetching processes, even if setstate is called.
-        # All prefetching processes are exclusively created by
-        # the main process, even if there are nested PrefetchIterators.
-        # Hence, the main process should be the only one to terminate
-        # and join prefetching processes.
-        # The if-statement below guarantees that, even if __del__ is called
-        # on a copy of a PrefetchIterator in another process, that copy never
-        # terminates or joins the prefetch process.
-        if p._parent_pid != os.getpid():
-            return
-        if p.exitcode is not None:  # already joined: p.pid is invalid
-            return
-        # Note that we must terminate here instead of cleanly shutting down
-        # the prefetching process, e.g. using synchronization primitives.
-        # This is deliberate (and unfortunate).
-        # The prefetching process might spend an arbitrary amount of time
-        # in the preceding iterators before it checks for potential termination messages.
-        # This would hold up the entire pipeline due to the join below.
-        # Hence, we terminate the process immediately.
-        # In some cases, the process function already ran its course. In that case,
-        # the terminate() call will have no effect.
-        p.terminate()
-        p.join()
-
-    def close(self):
-        # this functionality is currently not implemented for this iterator
-        self._source_iterator.close()
-
-
-class _ForkPrefetchIteratorExperimental(CheckpointableIterator):
-    """
-    Actual internal implementation of the prefetch iterator for systems that support creating processes through fork.
-
-    WARNING:
-    PrefetchIterators have internal resources that need to be properly managed by calling close() manually.
-    Failure to do so can lead to dangling processes and threads, or the PrefetchIterator hanging on finalization.
-    Note that it is not correct to rely on the garbage collector to destroy the PrefetchIterator,
-    as CPython does not assure that the finalizer (__del__) of the PrefetchIterator will be called.
-    The close() function is implemented for every CheckpointableIterator.
-    It recursively traverses all preceding iterators in the pipeline and closes all PrefetchIterators.
-
-    Args:
-        source_iterator: checkpointable iterator to read from
-        buffer_size: number of items to prefetch; this is the maximum number of items held in the prefetch queue
-        log_empty_buffer_warning: log a warning message if the prefetch buffer is empty
-    """
-
-    # HOW THIS ITERATOR WORKS, AND WHY:
-    #
-    # This iterator offloads the work of evaluating all preceding iterators
-    # into a separate prefetch process and tries to maintain a buffer
-    # of buffer_size many items in order to hide any latency spikes in the evaluation
-    # of the preceding iterators.
-    #
-    # The prefetch process (self._prefetch_process) generates items and puts them
-    # into an inter-process queue (self._inter_process_queue of type multiprocessing.Queue).
-    # The sole purpose of this queue is to act as a means for inter-process communication.
-    # Its purpose is NOT to act as the above-mentioned buffer.
-    # Accordingly, the size of this queue is restricted to 1.
-    #
-    # The actual buffer is realized as a local, thread-safe queue (queue.Queue) that lives
-    # in the main process (self._local_queue). We create a thread (self._queue_fetcher_thread)
-    # within the main process that is responsible for fetching items from the
-    # inter-process queue and storing them in the local queue. We then obtain an item from the
-    # local queue whenever __next__ is called within the main thread of the main process.
-    #
-    # You might wonder why we jump through all of these hoops instead of just using
-    # a multiprocessing.Queue with the desired buffer_size to act as both a means for
-    # inter-process communication and as a buffer to smooth out latency at the same time.
-    # In fact, this iterator used to be implemented that way.
-    # However, we observed severe issues with that implementation.
-    #
-    # Specifically, with multiprocessing.Queue the buffer lives in the prefetch process
-    # and a queue feeding thread is responsible for moving items from the buffer onto a pipe
-    # (see https://docs.python.org/3.6/library/multiprocessing.html#multiprocessing.Queue).
-    # The main thread in the prefetch process, which is responsible for generating items,
-    # competes with the queue feeder thread for CPU cycles, and because of CPython's
-    # global interpreter lock only one of these two threads can run at any given time.
-    #
-    # We observed situations in which the main thread is so busy generating new items
-    # that the queue feeder thread does not get enough CPU time to keep the pipe filled.
-    # This starvation of the queue feeder thread led to severe hangs (up to a minute)
-    # in calls to multiprocessing.Queue.get in the main process, even though the buffer had
-    # hundreds of items stored in it.
-    #
-    # This problem gets even worse when PyTorch tensors are sent over the queue. PyTorch registers
-    # custom reducers to the ForkingPickler used to pickle tensors before sending them over a pipe
-    # (see https://github.com/pytorch/pytorch/blob/master/torch/multiprocessing/reductions.py).
-    # As a consequence, the actual tensor data is not sent via the queue, but shared via shared
-    # memory. This process involves spawning yet another thread in the prefetch process and opening
-    # sockets to transmit file descriptors
-    # (see https://pytorch.org/docs/stable/multiprocessing.html#file-descriptor-file-descriptor).
-    # So in this case, there is yet another thread competing for the global interpreter lock.
-    #
-    # By restricting the size of the inter-process queue to 1 we avoid or at least lessen
-    # the starvation issues in the prefetch process caused by multiple threads fighting
-    # for the global interpreter lock. Any remaining hangs or delays in the prefetch process
-    # are hidden by having the buffer in the main process instead of the prefetch process.
-    #
-    # We suspect the hanging issues described above to be manifestations of the "Convoy effect":
-    # https://bugs.python.org/issue7946
-    # http://www.dabeaz.com/python/GIL.pdf
-    # https://in.pycon.org/2011/static/files/talks/41/Python-threads_v1.0.pdf
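-    #
-    # Schematic of the resulting data flow (illustrative only):
-    #
-    #   prefetch process --- multiprocessing.Queue(maxsize=1)  ---> queue fetcher thread
-    #                    --- queue.Queue(maxsize=buffer_size)  ---> __next__ (main thread)
-    #
-    # The size-1 inter-process queue keeps the queue feeder thread in the prefetch
-    # process nearly idle, while the local queue.Queue provides the actual
-    # latency-smoothing buffer of buffer_size items in the main process.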
-
-    def __init__(
-        self, source_iterator: CheckpointableIterator, buffer_size: int, log_empty_buffer_warning: bool = False
-    ):
-        self._source_iterator = source_iterator  # type: CheckpointableIterator
-        self._buffer_size = buffer_size  # type: int
-        self._log_empty_buffer_warning = log_empty_buffer_warning
-        self._is_closed = False
-        self.setstate(None)
-
-    def getstate(self) -> Dict:
-        return {"source_state": self._source_state, "item_offset": self._item_offset}
-
-    def setstate(self, checkpoint: Optional[Dict]):
-        if self._is_closed:
-            raise RuntimeError("PrefetchIterator has already been closed.")
-
-        # terminate the prefetch process and queue fetcher thread if they are running
-        self._shutdown()
-
-        # set state according to checkpoint
-        self._source_state = checkpoint["source_state"] if checkpoint is not None else None
-        self._item_offset = checkpoint["item_offset"] if checkpoint is not None else 0
-        self._source_iterator.setstate(self._source_state)
-
-        # In the given checkpoint, the source iterator might already be exhausted.
-        # We will figure that out once we try to get the first item.
-        # For now, we have to reset this variable.
-        self._is_exhausted = False
-
-    def __next__(self):
-        if self._is_closed:
-            raise RuntimeError("PrefetchIterator has already been closed.")
-        if not hasattr(self, "_prefetch_process") or self._prefetch_process is None:
-            # prefetcher process has not yet been started
-            self._startup()
-        if self._is_exhausted:
-            raise StopIteration()
-        if self._log_empty_buffer_warning:
-            if self._local_queue.empty():
-                logger.warning("trying to fetch item, but prefetch buffer is empty")
-        # This get-operation cannot deadlock:
-        # Under the assumption that the prefetch process and the queue fetcher thread work correctly,
-        # this operation can only deadlock if at the time of this call the source iterator is
-        # exhausted, the local queue is empty, and there are no items in transit to the local queue.
-        # In that case, a StopIteration was the last item ever put on the local queue.
-        # That StopIteration would have caused self._is_exhausted to be set to True,
-        # which means the following line would never have been reached, a contradiction.
-        msg = self._local_queue.get()
-        if isinstance(msg, StopIteration):
-            self._is_exhausted = True
-            # The source iterator is exhausted.
-            # At this point, the queue fetcher thread should already have terminated.
-            # The prefetch process will only terminate once we signal it via _prefetch_process_should_terminate.
-            # This is because we have to make sure no more items are taken from the inter_process_queue
-            # before we shut down the prefetch process, as explained in _startup().
-            # We would like to terminate the prefetch process, but we cannot use _shutdown() here
-            # because that would set self._prefetch_process = None,
-            # which would mean we would call _startup() on the next call of __next__.
-            # Instead, manually make sure the queue fetcher thread has actually terminated so that
-            # no more elements are taken from the inter_process_queue,
-            # and then signal the prefetch process to terminate.
-            self._queue_fetcher_thread.join()
-            self._prefetch_process_should_terminate.set()
-            raise StopIteration()
-        # for efficiency, the prefetch_source_state is only transmitted at the end of each window of length _buffer_size
-        item, prefetch_source_state = msg
-        if prefetch_source_state is not None:
-            # we expect a new source state at the END of each window of length _buffer_size
-            assert self._item_offset == self._buffer_size - 1
-            self._source_state = prefetch_source_state
-            self._item_offset = 0
-        else:
-            self._item_offset = self._item_offset + 1
-            assert self._item_offset < self._buffer_size
-        return item  # for debugging, it's useful to return msg instead of item
-
-    def close(self):
-        """
-        Close all PrefetchIterators in this pipeline.
-
-        PrefetchIterators have internal resources that need to be properly managed by calling close() manually.
-        Failure to do so can lead to dangling processes and threads, or the PrefetchIterator hanging on finalization.
-        Note that it is not correct to rely on the garbage collector to destroy PrefetchIterators,
-        as CPython does not assure that the finalizer (__del__) of a PrefetchIterator will be called.
-
-        This function, which is implemented for every CheckpointableIterator, recursively traverses all preceding
-        iterators and closes all PrefetchIterators in the pipeline.
-        For pipelines that do not contain PrefetchIterators this function has no effect.
-        """
-        if not self._is_closed:
-            self._is_closed = True
-            self._shutdown()
-            self._source_iterator.close()
-
-    def _startup(self):
-        # set up prefetch process and associated queue
-        self._inter_process_queue = multiprocessing.Queue(maxsize=1)
-        # Because of the way PyTorch transfers tensors through shared memory (see comment at top of this class)
-        # we have to keep the prefetch process alive until we are sure that
-        # no more items are taken from the inter-process queue.
-        # This event is used to communicate to the prefetch process that it is safe to terminate.
-        self._prefetch_process_should_terminate = multiprocessing.Event()
-        _prefetch_process = multiprocessing.Process(
-            target=self._prefetch_process_fn,
-            args=(
-                self._source_iterator,
-                self._item_offset,
-                self._buffer_size,
-                self._inter_process_queue,
-                self._prefetch_process_should_terminate,
-            ),
-        )
-        _prefetch_process.start()  # this invokes fork()
-        # set self._prefetch_process after the fork so that the variable never exists within the prefetch process
-        self._prefetch_process = _prefetch_process
-
-        # set up queue fetcher thread
-        self._local_queue = queue.Queue(maxsize=self._buffer_size)
-        self._queue_fetcher_thread_should_terminate = threading.Event()
-        self._queue_fetcher_thread = threading.Thread(target=self._queue_fetcher_thread_fn, daemon=True)
-        self._queue_fetcher_thread.start()
-
-    def _shutdown(self):
-        # Only shut down if this is the parent process and the prefetcher is running.
-        # The variable self._prefetch_process can only exist in the parent process.
-        # The variable exists and is not None only if the prefetcher is running.
-        if hasattr(self, "_prefetch_process") and self._prefetch_process is not None:
-            # if self._prefetch_process is not None, neither should self._queue_fetcher_thread be
-            assert self._queue_fetcher_thread is not None
-            # sanity check that this is actually the parent of the prefetch process
-            assert self._prefetch_process._parent_pid == os.getpid()
-            # shut down queue fetcher thread
-            self._queue_fetcher_thread_should_terminate.set()
-            self._queue_fetcher_thread.join()
-            self._queue_fetcher_thread = None
-            # shut down prefetch process
-            self._prefetch_process_should_terminate.set()
-            self._prefetch_process.join()
-            self._prefetch_process = None
-
-    @staticmethod
-    def _prefetch_process_fn(
-        source_iterator, item_offset, buffer_size, inter_process_queue, should_terminate_event
-    ):  # behavior of the prefetching process, only to be called from that process!
-        _advance_iterator(source_iterator, item_offset)  # skip to checkpoint
-        while True:
-            try:
-                item = next(source_iterator)
-            except StopIteration:
-                _ForkPrefetchIteratorExperimental._try_put(inter_process_queue, StopIteration(), should_terminate_event)
-                # Because of the way PyTorch transfers tensors through shared memory (see comment at top of this class)
-                # we have to keep the prefetch process alive until we are sure that
-                # no more items are taken from the inter-process queue.
-                # This event is used to communicate to the prefetch process that it is safe to terminate.
-                should_terminate_event.wait()
-                break
-            if item_offset == buffer_size - 1:
-                # for efficiency, we send a new source state only at the END of each window of length _buffer_size
-                # this is the state for retrieving the NEXT element, i.e. the first element of the next buffer
-                source_state = source_iterator.getstate()
-                item_offset = 0
-            else:
-                source_state = None
-                item_offset += 1
-            msg = (item, source_state)
-            should_terminate = _ForkPrefetchIteratorExperimental._try_put(
-                inter_process_queue, msg, should_terminate_event
-            )
-            if should_terminate:
-                break
-        source_iterator.close()
-
-    def _queue_fetcher_thread_fn(self):
-        while True:
-            # This get-operation cannot deadlock:
-            # For this operation to deadlock, the queue must be empty and the prefetch process must never put
-            # another item on the queue. Under the assumption that the prefetch process works correctly,
-            # this can only happen in two ways. First, the prefetch process could exhaust its source iterator
-            # and put a StopIteration on the queue. In this case, this thread will receive the StopIteration
-            # and terminate, which means the operation does not deadlock. Second, the prefetch process could
-            # terminate because the corresponding signal is set as part of a _shutdown(). However, we terminate
-            # and join this thread before we set the terminate signal for the prefetch process,
-            # so this case cannot happen.
-            msg = self._inter_process_queue.get()
-            should_terminate = _ForkPrefetchIteratorExperimental._try_put(
-                self._local_queue, msg, self._queue_fetcher_thread_should_terminate
-            )
-            if should_terminate:
-                return
-            if isinstance(msg, StopIteration):
-                return
-
-    @staticmethod
-    def _try_put(q, msg, should_terminate_event, timeout=0.001):
-        """
-        Repeatedly try to put the message on the queue until success or should_terminate_event is set.
-        If success, return False.
-        If should_terminate_event is set, return True.
-        """
-        while not should_terminate_event.is_set():
-            try:
-                q.put(msg, timeout=timeout)
-                return False
-            except queue.Full:
-                pass
-        return True
-
-    def __del__(self):
-        if hasattr(self, "_prefetch_process") and not self._is_closed:
-            logger.warning(
-                f"unclosed PrefetchIterator {self!r}: not closing a PrefetchIterator may lead to dangling processes and hangs on finalization"
-            )
-            self._shutdown()
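-
-
-# Sketch of the close() discipline described in the WARNING above (illustrative
-# only; the pipeline shape is arbitrary): a single close() call on the outermost
-# iterator recursively closes every PrefetchIterator in the pipeline.
-def _nested_prefetch_sketch():
-    it = NativeCheckpointableIterator(list(range(100)))
-    it = PrefetchIterator(it, buffer_size=8, buffer_in_main_process=True)
-    it = MapIterator(it, lambda x: x * x)
-    it = PrefetchIterator(it, buffer_size=8, buffer_in_main_process=True)
-    try:
-        for _ in it:
-            pass  # consume the pipeline
-    finally:
-        it.close()  # recursively closes both PrefetchIterators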
- """ - while not should_terminate_event.is_set(): - try: - q.put(msg, timeout=timeout) - return False - except queue.Full: - pass - return True - - def __del__(self): - if hasattr(self, "_prefetch_process") and not self._is_closed: - logger.warning( - f"unclosed PrefetchIterator {self!r}: not closing a PrefetchIterator may lead to dangling processes and hangs on finalization" - ) - self._shutdown() - - -class BucketedReadaheadBatchIterator(CheckpointableIterator): - """ - Iterates over items from a checkpointable iterator and groups items of similar length into batches. - - The algorithm reads a head a certain number of lines (e.g. 10 million), sorts them by - length, and them groups them into batches from start to end. The sort is stable, such - that prior randomization is not undone (except for the length grouping). The batch size - is dynamic, and determined by a user-provided callback. - - This is based on Marian NMT's BatchGenerator. - """ - - def __init__(self, source_iterator: CheckpointableIterator, read_ahead: int, key: Callable[[Any], Any], batch_size: Union[int,Callable[[Any], int]], boundary_key: Callable[[Any], Any]=None, shuffle: bool=True, seed: int=0): - """ - Args: - source_iterator: The data set that is read from. Typically this is an infinite source. - read_ahead: Number of items to fetch ahead for grouping purposes. - key: User-provided callback to define how data is sorted for purpose of batching. - batch_size: Batch size in number of items. Either an integer or a callback to determine batch size for a given first batch item. - boundary_key: This optional callback, which maps an item to a key, allows to impose an additional restriction on the way batches are formed. Specifically, the iterator starts a new batch whenever the key changes. Thereby, it guarantees that all items in a batch have the same key. Keys are not allowed to be None. - shuffle: Pass False to not randomize the batches. (default: True) - seed: Random seed for batch shuffling. 
- """ - if not isinstance(source_iterator, CheckpointableIterator): - raise ValueError('source_iterator has to be a CheckpointableIterator') - # keep arguments - self._key = key # type: Callable[[Any], Any] - self._batch_size = batch_size # type: Union[int,Callable[[Any], int]] - self._boundary_key = boundary_key # type: Callable[[Any], Any] - self._read_ahead = read_ahead # type: int - # initialize state - self._seed = seed - self._random = None # type: Optional[Random] - if shuffle: - self._random = Random(self._seed) - self._source_iterator = cast(CheckpointableIterator, iter(source_iterator)) # type: CheckpointableIterator - self.setstate(None) - - def getstate(self): - return {'source_state': self._source_state, - 'random_state': self._random_state, - 'num_served': self._num_batches_yielded} - - def setstate(self, checkpoint: Optional[Dict]): - self._source_state = checkpoint['source_state'] if checkpoint else None # type: Dict # state of input before reading the current set of batches - self._random_state = checkpoint['random_state'] if checkpoint else None # type: Any # state of random generator at _source_state - self._num_batches_yielded = checkpoint['num_served'] if checkpoint else 0 # type: int # number of batches served from the current set of batches - # checkpointing: restore to start of current set of batches - self._source_iterator.setstate(self._source_state) - if self._random: - if self._random_state: - self._random.setstate(self._random_state) - else: - self._random.seed(self._seed) - self._source_exhausted = False # type: bool # set to True once we hit StopIteration on source - def _generate(): - skip_to_checkpoint = self._num_batches_yielded - source_exhausted = False - while not source_exhausted: - # prefetch the readahead buffer - self._source_state = self._source_iterator.getstate() - self._random_state = self._random.getstate() if self._random else None - items = list(islice(self._source_iterator, self._read_ahead)) - source_exhausted = (len(items) < self._read_ahead) - # create batches - batches = self._create_batches(items) - # shuffle the batches - if self._random: - self._random.shuffle(batches) - # on first loop iteration, restore iterator inside batches from checkpoint - batches = iter(batches) - self._num_batches_yielded = _advance_iterator(batches, skip_to_checkpoint) - skip_to_checkpoint = 0 - # main loop over batches in current read-ahead section - for batch in batches: - self._num_batches_yielded += 1 - yield batch - self._iterator = _generate() # type: Iterator # iterator into current set of batches - - def _create_batches(self, items: List[Any]) -> List[List[Any]]: # helper to form batches from a list of items - # sort by length, longest first - if self._key: - items.sort(key=self._key, reverse=True) # note: sort() is stable, so we won't undo any randomization besides the bucketing - # group into batches - cur_batch = None # type: Optional[List[Any]] - prev_val = None - batches = [] # type: List[Any] - for item in items: - if self._boundary_key and self._boundary_key(item) != prev_val: - if cur_batch: - batches.append(cur_batch) - cur_batch = None - prev_val = None - if not cur_batch: - batch_size = self._batch_size if isinstance(self._batch_size, int) else \ - self._batch_size(item) - cur_batch = [] - cur_batch.append(item) - if self._boundary_key: - prev_val = self._boundary_key(item) - assert prev_val is not None - if len(cur_batch) >= batch_size: # this batch is full - batches.append(cur_batch) - cur_batch = None - prev_val = None - if cur_batch: - 
-
-    def __next__(self):
-        return next(self._iterator)
-
-    def close(self):
-        self._source_iterator.close()
\ No newline at end of file
diff --git a/kosmos-g/infinibatch/pyproject.toml b/kosmos-g/infinibatch/pyproject.toml
deleted file mode 100644
index e34796ec5..000000000
--- a/kosmos-g/infinibatch/pyproject.toml
+++ /dev/null
@@ -1,2 +0,0 @@
-[tool.black]
-line-length = 120
\ No newline at end of file
diff --git a/kosmos-g/infinibatch/requirements.txt b/kosmos-g/infinibatch/requirements.txt
deleted file mode 100644
index e69de29bb..000000000
diff --git a/kosmos-g/infinibatch/setup.py b/kosmos-g/infinibatch/setup.py
deleted file mode 100644
index 1ff213bb5..000000000
--- a/kosmos-g/infinibatch/setup.py
+++ /dev/null
@@ -1,13 +0,0 @@
-from setuptools import setup, find_packages
-import site
-
-site.ENABLE_USER_SITE = True
-setup(
-    name='infinibatch',
-    version='0.1.0',
-    url='https://github.com/microsoft/infinibatch',
-    author='Frank Seide',
-    author_email='fseide@microsoft.com',
-    description='Infinibatch is a library of checkpointable iterators for randomized data loading of massive data sets in deep neural network training.',
-    packages=find_packages()
-)
diff --git a/kosmos-g/infinibatch/test/test_datasets.py b/kosmos-g/infinibatch/test/test_datasets.py
deleted file mode 100644
index 9f62c8a63..000000000
--- a/kosmos-g/infinibatch/test/test_datasets.py
+++ /dev/null
@@ -1,129 +0,0 @@
-import gzip
-import itertools
-from random import Random
-import os
-import shutil
-import tempfile
-from typing import Iterator
-import unittest
-import gc
-
-from infinibatch.datasets import chunked_dataset_iterator
-
-
-class TestBase(unittest.TestCase):
-    def setUp(self):
-        self.test_data = [
-            ["item number one", "item number two", "item number three", "item number four"],
-            ["item number five"],
-            [
-                "item number six",
-                "item number seven",
-                "item number eight",
-                "item number nine",
-                "item number ten",
-                "item number eleven",
-            ],
-            ["item number twelve", "item number thirteen", "item number fourteen",],
-        ]
-
-        self.flattened_test_data = []
-        for chunk in self.test_data:
-            for item in chunk:
-                self.flattened_test_data.append(item)
-
-        self.data_dir = tempfile.mkdtemp()
-        self.chunk_file_paths = []
-        for chunk_id, chunk in enumerate(self.test_data):
-            file_name = os.path.join(self.data_dir, "chunk_" + str(chunk_id).zfill(10) + ".gz")
-            self.chunk_file_paths.append(file_name)
-            file_content = "\n".join(chunk)
-            with gzip.open(file_name, "wt", encoding="utf-8") as f:
-                f.write(file_content)
-
-    @staticmethod
-    def read_chunk(textfile_path: str) -> Iterator[str]:  # read_chunk_fn for chunked_dataset_iterator
-        with gzip.open(textfile_path, "rt", encoding="utf-8") as f:
-            return iter(f.read().splitlines())
-
-    def tearDown(self):
-        gc.collect()  # this will get the pre-fetch terminated in some tests, which otherwise may still want to read these files
-        shutil.rmtree(self.data_dir)
-
-    def assertMultisetEqual(self, a, b):
-        self.assertEqual(len(a), len(b))
-        self.assertSetEqual(set(a), set(b))
-
-
-class Test_chunked_dataset_iterator(TestBase):
-    def test_no_shuffle(self):
-        items = list(
-            itertools.islice(
-                chunked_dataset_iterator(self.chunk_file_paths, self.read_chunk, shuffle=False, buffer_size=1000),
-                len(self.flattened_test_data),
-            )
-        )
-        self.assertListEqual(items, self.flattened_test_data)
-
-    def test_other_files_present(self):
-        with open(os.path.join(self.data_dir, "i_do_not_belong_here.txt"), "w") as f:
-            f.write("really ...")
-        items = list(
-            itertools.islice(
-                chunked_dataset_iterator(self.chunk_file_paths, self.read_chunk, shuffle=False, buffer_size=1000),
-                len(self.flattened_test_data),
-            )
-        )
-        self.assertListEqual(items, self.flattened_test_data)
-
-    def test_transform(self):
-        transform = lambda s: s + "!"
-        modified_test_data = [transform(s) for s in self.flattened_test_data]
-        items = list(
-            itertools.islice(
-                chunked_dataset_iterator(
-                    self.chunk_file_paths, self.read_chunk, shuffle=False, buffer_size=1000, transform=transform
-                ),
-                len(self.flattened_test_data),
-            )
-        )
-        self.assertListEqual(items, modified_test_data)
-
-    def test_two_instances(self):
-        dataset0 = chunked_dataset_iterator(
-            self.chunk_file_paths, self.read_chunk, shuffle=False, buffer_size=1000, num_instances=2, instance_rank=0
-        )
-        dataset1 = chunked_dataset_iterator(
-            self.chunk_file_paths, self.read_chunk, shuffle=False, buffer_size=1000, num_instances=2, instance_rank=1
-        )
-        items0 = list(itertools.islice(dataset0, len(self.test_data[0]) + len(self.test_data[2])))
-        items1 = list(itertools.islice(dataset1, len(self.test_data[1]) + len(self.test_data[3])))
-        self.assertMultisetEqual(set(items0 + items1), self.flattened_test_data)
-
-    def test_checkpointing(self):
-        random = Random(1)
-        for use_windowed in (True, False):
-            for i in range(2):
-                first_length = random.randrange(11, 21)
-                extra_length = random.randrange(11, 21)
-                dataset = chunked_dataset_iterator(
-                    self.chunk_file_paths,
-                    self.read_chunk,
-                    shuffle=(i % 2 == 0),
-                    buffer_size=1000,
-                    seed=i,
-                    num_instances=2,
-                    instance_rank=0,
-                    use_windowed=use_windowed,
-                )
-                for _ in range(first_length):
-                    next(dataset)
-                checkpoint = dataset.getstate()
-                items1 = list(itertools.islice(dataset, extra_length))
-                dataset.setstate(checkpoint)
-                items2 = list(itertools.islice(dataset, extra_length))
-                self.assertListEqual(items1, items2)
-
-
-if __name__ == "__main__":
-    unittest.main()
diff --git a/kosmos-g/infinibatch/test/test_doctests.py b/kosmos-g/infinibatch/test/test_doctests.py
deleted file mode 100644
index 59e9e22ee..000000000
--- a/kosmos-g/infinibatch/test/test_doctests.py
+++ /dev/null
@@ -1,13 +0,0 @@
-"""
-This file causes the doctests to be included as part of unit tests.
-
-To make sure the doctests of a specific module are included,
-please replicate the `addTests` call for the iterators module below.
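-For example (hypothetical; it assumes infinibatch.datasets defines doctests),
-a second module would be registered by adding one line inside load_tests:
-    tests.addTests(doctest.DocTestSuite(infinibatch.datasets))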
-""" - -import doctest -import infinibatch.iterators - -def load_tests(loader, tests, ignore): - tests.addTests(doctest.DocTestSuite(infinibatch.iterators)) - return tests \ No newline at end of file diff --git a/kosmos-g/infinibatch/test/test_iterators.py b/kosmos-g/infinibatch/test/test_iterators.py deleted file mode 100644 index 52c3da5ed..000000000 --- a/kosmos-g/infinibatch/test/test_iterators.py +++ /dev/null @@ -1,965 +0,0 @@ -import copy -import itertools -import multiprocessing -from random import Random -import unittest - -import torch - -from infinibatch.iterators import * - -if __name__ == "__main__": - unittest.main() - - -class TestBase(unittest.TestCase): - def setUp(self): - self.lengths = [1, 2, 3, 42, 57] - self.world_sizes = [1, 2, 3, 4, 5, 11, 16, 64, 73] - self.seed = 42 - - def assertMultisetEqual(self, a, b): - def list_to_dict(l): - d = {} - for item in l: - d[item] = d.get(item, 0) + 1 - return d - - self.assertEqual(list_to_dict(a), list_to_dict(b)) - - -class TestFiniteIteratorMixin: - """ - Mixin to be used in combination with TestBase - to test basic function of finite CheckpointableIterators - """ - - def test_basic(self): - for case_name, expected_result, it in self.test_cases: - with self.subTest(case_name): - result = list(it) - self.assertEqual(result, expected_result) - - -class TestFiniteIteratorCheckpointingMixin: - """ - Mixin to be used in combination with TestBase - to test checkpointing functionality of finite CheckpointableIterators - """ - - def test_checkpointing_reset(self): - for case_name, _, it in self.test_cases: - with self.subTest(case_name): - expected_result = list(it) # extract data - it.setstate(None) # reset to start - result = list(it) - self.assertEqual(result, expected_result) - - # TODO: Can this be rewritten in terms of _test_checkpointing_from_pos? 
- def test_checkpointing_from_start(self): - for case_name, _, it in self.test_cases: - with self.subTest(case_name): - checkpoint = it.getstate() - expected_result = list(it) # extract data - it.setstate(checkpoint) # reset to start - result = list(it) - self.assertEqual(result, expected_result) - - def _test_checkpointing_from_pos(self, it, pos): - for _ in range(pos): # go to pos - next(it) - checkpoint = it.getstate() # take checkpoint - expected_result = list(it) # extract data - it.setstate(checkpoint) # reset to checkpoint - result = list(it) - self.assertEqual(result, expected_result) - - def test_checkpointing_from_one(self): - for case_name, _, it in self.test_cases: - with self.subTest(case_name): - pos = 1 - self._test_checkpointing_from_pos(it, pos) - - def test_checkpointing_from_quarter(self): - for case_name, _, it in self.test_cases: - with self.subTest(case_name): - expected_result = list(it) - it.setstate(None) - pos = len(expected_result) // 4 - self._test_checkpointing_from_pos(it, pos) - - def test_checkpointing_from_third(self): - for case_name, _, it in self.test_cases: - with self.subTest(case_name): - expected_result = list(it) - it.setstate(None) - pos = len(expected_result) // 3 - self._test_checkpointing_from_pos(it, pos) - - def test_checkpointing_from_half(self): - for case_name, _, it in self.test_cases: - with self.subTest(case_name): - expected_result = list(it) - it.setstate(None) - pos = len(expected_result) // 2 - self._test_checkpointing_from_pos(it, pos) - - def test_checkpointing_before_end(self): - for case_name, _, it in self.test_cases: - with self.subTest(case_name): - expected_result = list(it) - it.setstate(None) - pos = len(expected_result) - 1 - self._test_checkpointing_from_pos(it, pos) - - def test_checkpointing_at_end(self): - for case_name, _, it in self.test_cases: - with self.subTest(case_name): - list(it) # exhaust iterator - self.assertRaises(StopIteration, it.__next__) - checkpoint = it.getstate() # take checkpoint - it.setstate(None) # reset to beginning - it.setstate(checkpoint) # reset to checkpoint - self.assertRaises(StopIteration, it.__next__) - - def test_checkpointing_complex(self): - for case_name, _, it in self.test_cases: - with self.subTest(case_name): - expected_result = list(it) - - # get a bunch of checkpoints at different positions - it.setstate(None) - positions = [ - 0, - len(expected_result) // 7, - len(expected_result) // 6, - len(expected_result) // 5, - len(expected_result) // 4, - len(expected_result) // 3, - len(expected_result) // 2, - ] - checkpoints = [] - for i in range(len(positions)): - offset = positions[i] - positions[i - 1] if i > 0 else positions[0] - for _ in range(offset): - next(it) - checkpoints.append(it.getstate()) - - # check that iterator returns correct result at all checkpoints - for pos, checkpoint in zip(positions, checkpoints): - it.setstate(checkpoint) - self.assertEqual(list(it), expected_result[pos:]) - - # check that iterator returns correct result at all checkpoints in reverse order - tuples = list(zip(positions, checkpoints)) - tuples.reverse() - for pos, checkpoint in tuples: - it.setstate(checkpoint) - self.assertEqual(list(it), expected_result[pos:]) - - # check that iterator returns correct result at all checkpoints - # while resetting between any two checkpoints - for pos, checkpoint in zip(positions, checkpoints): - it.setstate(None) - it.setstate(checkpoint) - self.assertEqual(list(it), expected_result[pos:]) - - # and as the grand finale: reset and check again - 
it.setstate(None) - result = list(it) - self.assertEqual(result, expected_result) - - -class TestInfinitePermutationSourceIterator(TestBase): - def setUp(self): - super().setUp() - self.repeats = [1, 2, 3] - - def test_no_shuffle(self): - for n, k, num_instances in itertools.product(self.lengths, self.repeats, self.world_sizes): - data = list(range(n)) - for instance_rank in range(num_instances): - with self.subTest(f"n={n}, k={k}, num_instances={num_instances}, instance_rank={instance_rank}"): - it = InfinitePermutationSourceIterator( - copy.deepcopy(data), shuffle=False, num_instances=num_instances, instance_rank=instance_rank - ) - repeated_data = [] - while len(repeated_data) < k * n * num_instances: - repeated_data.extend(data) - expected_result = [] - pos = instance_rank - while len(expected_result) < k * n: - expected_result.append(repeated_data[pos]) - pos += num_instances - result = [next(it) for _ in range(k * n)] - self.assertEqual(result, expected_result) - - def test_shuffle(self): - for n, k, num_instances in itertools.product(self.lengths, self.repeats, self.world_sizes): - data = list(range(n)) - for instance_rank in range(num_instances): - with self.subTest(f"n={n}, k={k}, num_instances={num_instances}, instance_rank={instance_rank}"): - it = InfinitePermutationSourceIterator( - copy.deepcopy(data), - seed=self.seed, - shuffle=True, - num_instances=num_instances, - instance_rank=instance_rank, - ) - random = Random(self.seed) - repeated_data = [] - while len(repeated_data) < k * n * num_instances: - shuffled_data = copy.deepcopy(data) - random.shuffle(shuffled_data) - repeated_data.extend(shuffled_data) - expected_result = [] - pos = instance_rank - while len(expected_result) < k * n: - expected_result.append(repeated_data[pos]) - pos += num_instances - result = [next(it) for _ in range(k * n)] - self.assertEqual(result, expected_result) - - def test_single_instance_no_shuffle(self): - # this test is technically included in test_no_shuffle - # but the calculation of the expected result is less error prone - for n, k in itertools.product(self.lengths, self.repeats): - with self.subTest(f"n={n}, k={k}"): - data = list(range(n)) - expected_result = data * k - it = InfinitePermutationSourceIterator(copy.deepcopy(data), shuffle=False) - result = [next(it) for _ in range(k * n)] - self.assertEqual(result, expected_result) - - def test_single_instance_shuffle(self): - # this test is technically included in test_shuffle - # but the calculation of the expected result is less error prone - for n, k in itertools.product(self.lengths, self.repeats): - with self.subTest(f"n={n}, k={k}"): - data = list(range(n)) - expected_result = data * k - it = InfinitePermutationSourceIterator(copy.deepcopy(data), seed=self.seed, shuffle=True) - result = [next(it) for _ in range(k * n)] - self.assertMultisetEqual(result, expected_result) - - def test_checkpointing_reset_no_shuffle(self): - for n, k, num_instances in itertools.product(self.lengths, self.repeats, self.world_sizes): - data = list(range(n)) - for instance_rank in range(num_instances): - with self.subTest(f"n={n}, k={k}, num_instances={num_instances}, instance_rank={instance_rank}"): - it = InfinitePermutationSourceIterator( - copy.deepcopy(data), shuffle=False, num_instances=num_instances, instance_rank=instance_rank - ) - expected_result = [next(it) for _ in range(k * n)] # extract data - it.setstate(None) # reset to start - result = [next(it) for _ in range(k * n)] - self.assertEqual(result, expected_result) - - def 
test_checkpointing_reset_shuffle(self): - for n, k, num_instances in itertools.product(self.lengths, self.repeats, self.world_sizes): - data = list(range(n)) - for instance_rank in range(num_instances): - with self.subTest(f"n={n}, k={k}, num_instances={num_instances}, instance_rank={instance_rank}"): - it = InfinitePermutationSourceIterator( - copy.deepcopy(data), - seed=self.seed, - shuffle=True, - num_instances=num_instances, - instance_rank=instance_rank, - ) - expected_result = [next(it) for _ in range(k * n)] # extract data - it.setstate(None) # reset to start - result = [next(it) for _ in range(k * n)] - self.assertEqual(result, expected_result) - - def test_checkpointing_from_start_no_shuffle(self): - for n, k, num_instances in itertools.product(self.lengths, self.repeats, self.world_sizes): - data = list(range(n)) - for instance_rank in range(num_instances): - with self.subTest(f"n={n}, k={k}, num_instances={num_instances}, instance_rank={instance_rank}"): - it = InfinitePermutationSourceIterator( - copy.deepcopy(data), shuffle=False, num_instances=num_instances, instance_rank=instance_rank - ) - checkpoint = it.getstate() - expected_result = [next(it) for _ in range(k * n)] # extract data - it.setstate(checkpoint) # reset to start - result = [next(it) for _ in range(k * n)] - self.assertEqual(result, expected_result) - - def test_checkpointing_from_start_shuffle(self): - for n, k, num_instances in itertools.product(self.lengths, self.repeats, self.world_sizes): - data = list(range(n)) - for instance_rank in range(num_instances): - with self.subTest(f"n={n}, k={k}, num_instances={num_instances}, instance_rank={instance_rank}"): - it = InfinitePermutationSourceIterator( - copy.deepcopy(data), - seed=self.seed, - shuffle=True, - num_instances=num_instances, - instance_rank=instance_rank, - ) - checkpoint = it.getstate() - expected_result = [next(it) for _ in range(k * n)] # extract data - it.setstate(checkpoint) # reset to start - result = [next(it) for _ in range(k * n)] - self.assertEqual(result, expected_result) - - def test_checkpointing_from_middle_no_shuffle(self): - for n, k, num_instances in itertools.product(self.lengths, self.repeats, self.world_sizes): - data = list(range(n)) - for instance_rank in range(num_instances): - with self.subTest(f"n={n}, k={k}, num_instances={num_instances}, instance_rank={instance_rank}"): - it = InfinitePermutationSourceIterator( - copy.deepcopy(data), shuffle=False, num_instances=num_instances, instance_rank=instance_rank - ) - checkpoint_pos = k * n // 3 - for _ in range(checkpoint_pos): # go to checkpoint_pos - next(it) - checkpoint = it.getstate() # take checkpoint - expected_result = [next(it) for _ in range(k * n)] # extract data - for _ in range(checkpoint_pos): # move forward some more - next(it) - it.setstate(checkpoint) # reset to checkpoint - result = [next(it) for _ in range(k * n)] # get data again - self.assertEqual(result, expected_result) - - def test_checkpointing_from_middle_shuffle(self): - for n, k, num_instances in itertools.product(self.lengths, self.repeats, self.world_sizes): - data = list(range(n)) - for instance_rank in range(num_instances): - with self.subTest(f"n={n}, k={k}, num_instances={num_instances}, instance_rank={instance_rank}"): - it = InfinitePermutationSourceIterator( - copy.deepcopy(data), - seed=self.seed, - shuffle=True, - num_instances=num_instances, - instance_rank=instance_rank, - ) - checkpoint_pos = k * n // 3 - for _ in range(checkpoint_pos): # go to checkpoint_pos - next(it) - checkpoint = 
it.getstate() # take checkpoint - expected_result = [next(it) for _ in range(k * n)] # extract data - for _ in range(checkpoint_pos): # move forward some more - next(it) - it.setstate(checkpoint) # reset to checkpoint - result = [next(it) for _ in range(k * n)] # get data again - self.assertEqual(result, expected_result) - - def test_checkpointing_at_boundary_no_shuffle(self): - for n, k, num_instances in itertools.product(self.lengths, self.repeats, self.world_sizes): - data = list(range(n)) - for instance_rank in range(num_instances): - with self.subTest(f"n={n}, k={k}, num_instances={num_instances}, instance_rank={instance_rank}"): - it = InfinitePermutationSourceIterator( - copy.deepcopy(data), shuffle=False, num_instances=num_instances, instance_rank=instance_rank - ) - checkpoint_pos = k * n - for _ in range(checkpoint_pos): # go to checkpoint_pos - next(it) - checkpoint = it.getstate() # take checkpoint - expected_result = [next(it) for _ in range(k * n)] # extract data - for _ in range(checkpoint_pos): # move forward some more - next(it) - it.setstate(checkpoint) # reset to checkpoint - result = [next(it) for _ in range(k * n)] # get data again - self.assertEqual(result, expected_result) - - def test_checkpointing_at_boundary_shuffle(self): - for n, k, num_instances in itertools.product(self.lengths, self.repeats, self.world_sizes): - data = list(range(n)) - for instance_rank in range(num_instances): - with self.subTest(f"n={n}, k={k}, num_instances={num_instances}, instance_rank={instance_rank}"): - it = InfinitePermutationSourceIterator( - copy.deepcopy(data), - seed=self.seed, - shuffle=True, - num_instances=num_instances, - instance_rank=instance_rank, - ) - checkpoint_pos = k * n - for _ in range(checkpoint_pos): # go to checkpoint_pos - next(it) - checkpoint = it.getstate() # take checkpoint - expected_result = [next(it) for _ in range(k * n)] # extract data - for _ in range(checkpoint_pos): # move forward some more - next(it) - it.setstate(checkpoint) # reset to checkpoint - result = [next(it) for _ in range(k * n)] # get data again - self.assertEqual(result, expected_result) - - def test_empty_source(self): - f = lambda: InfinitePermutationSourceIterator([]) - self.assertRaises(ValueError, f) - - def test_rank_too_large(self): - f = lambda: InfinitePermutationSourceIterator([1], num_instances=2, instance_rank=2) - self.assertRaises(ValueError, f) - - -class TestChunkedSourceIterator(TestBase, TestFiniteIteratorMixin, TestFiniteIteratorCheckpointingMixin): - def setUp(self): - super().setUp() - self.test_cases = [] - for n in self.lengths: - data = list(range(n)) - it = ChunkedSourceIterator(copy.deepcopy(data)) - self.test_cases.append(("n={}".format(n), data, it)) - - def test_multiple_instances(self): - for n, num_instances in itertools.product(self.lengths, self.world_sizes): - with self.subTest("n={}, num_instances={}".format(n, num_instances)): - data = list(range(n)) - result = [] - sizes = [] - for instance_rank in range(num_instances): - it = ChunkedSourceIterator( - copy.deepcopy(data), num_instances=num_instances, instance_rank=instance_rank - ) - output = list(it) - result.extend(output) - sizes.append(len(output)) - self.assertEqual(data, result) - self.assertTrue(max(sizes) - min(sizes) <= 1) # make sure data is split as evenly as possible - - def test_rank_too_large(self): - def create_iterator(): - it = ChunkedSourceIterator([1], num_instances=2, instance_rank=2) - - self.assertRaises(ValueError, create_iterator) - - -class 
TestSamplingRandomMapIterator(TestBase, TestFiniteIteratorMixin, TestFiniteIteratorCheckpointingMixin): - @staticmethod - def transform(random, item): - return item + random.random() - - def setUp(self): - super().setUp() - self.test_cases = [] - for n in self.lengths: - data = list(range(n)) - random = Random() - random.seed(self.seed) - expected_result = [n + random.random() for n in data] - it = SamplingRandomMapIterator(NativeCheckpointableIterator(data), transform=self.transform, seed=self.seed) - self.test_cases.append(("n={}".format(n), expected_result, it)) - - -class TestMapIterator(TestBase, TestFiniteIteratorMixin, TestFiniteIteratorCheckpointingMixin): - @staticmethod - def transform(item): - return 2 * item - - def setUp(self): - super().setUp() - self.test_cases = [] - for n in self.lengths: - data = list(range(n)) - expected_result = [self.transform(item) for item in data] - it = MapIterator(NativeCheckpointableIterator(data), self.transform) - self.test_cases.append(("n={}".format(n), expected_result, it)) - - -class TestZipIterator(TestBase, TestFiniteIteratorMixin, TestFiniteIteratorCheckpointingMixin): - def setUp(self): - super().setUp() - self.test_cases = [] - - # pairs - for n in self.lengths: - data1 = list(range(n)) - data2 = [item * item for item in data1] - expected_result = list(zip(data1, data2)) - it = ZipIterator(NativeCheckpointableIterator(data1), NativeCheckpointableIterator(data2)) - self.test_cases.append(("n={}, pairs".format(n), expected_result, it)) - - # triples - for n in self.lengths: - data1 = list(range(n)) - data2 = [item * item for item in data1] - data3 = [item * item for item in data2] - expected_result = list(zip(data1, data2, data3)) - it = ZipIterator( - NativeCheckpointableIterator(data1), - NativeCheckpointableIterator(data2), - NativeCheckpointableIterator(data3), - ) - self.test_cases.append(("n={}, triples".format(n), expected_result, it)) - - # different lengths - for n in self.lengths: - if n > 3: # smaller n give us an empty iterator, which causes issues - data1 = list(range(n)) - data2 = [item * item for item in data1] - data2 = data2[:-3] - expected_result = list(zip(data1, data2)) - it = ZipIterator(NativeCheckpointableIterator(data1), NativeCheckpointableIterator(data2)) - self.test_cases.append(("n={}, different lengths".format(n), expected_result, it)) - - -class TestPrefetchIterator(TestBase, TestFiniteIteratorMixin, TestFiniteIteratorCheckpointingMixin): - def setUp(self): - super().setUp() - self.test_cases = [] - for n in self.lengths: - for buffer_size in self.lengths: - data = list(range(n)) - it = PrefetchIterator(NativeCheckpointableIterator(data), buffer_size) - self.test_cases.append(("n={}, buffer_size={}".format(n, buffer_size), data, it)) - - def test_zero_buffer_size(self): - f = lambda: PrefetchIterator(NativeCheckpointableIterator([0]), buffer_size=0) - self.assertRaises(ValueError, f) - - def test_torch_tensors(self): - for n in self.lengths: - for buffer_size in self.lengths: - with self.subTest("n={}, buffer_size={}".format(n, buffer_size)): - data = [torch.Tensor([float(i)]) for i in range(n)] - it = PrefetchIterator(NativeCheckpointableIterator(copy.deepcopy(data)), buffer_size) - result = list(it) - self.assertEqual(result, data) - - -class TestPrefetchIteratorExperimental(TestBase, TestFiniteIteratorMixin, TestFiniteIteratorCheckpointingMixin): - def setUp(self): - super().setUp() - self.test_cases = [] - for n in self.lengths: - for buffer_size in self.lengths: - data = list(range(n)) - it = 
PrefetchIterator(NativeCheckpointableIterator(data), buffer_size, buffer_in_main_process=True) - self.test_cases.append(("n={}, buffer_size={}".format(n, buffer_size), data, it)) - - def test_zero_buffer_size(self): - f = lambda: PrefetchIterator(NativeCheckpointableIterator([0]), buffer_size=0, buffer_in_main_process=True) - self.assertRaises(ValueError, f) - - def test_closing(self): - if multiprocessing.get_start_method() != "fork": - return # dummy iterator used, skip test - it = PrefetchIterator(NativeCheckpointableIterator([0]), buffer_size=42, buffer_in_main_process=True) - it.close() - f = lambda: it.__next__() - self.assertRaises(RuntimeError, f) - f = lambda: it.setstate(None) - self.assertRaises(RuntimeError, f) - - def test_nested(self): - for n in self.lengths: - for buffer_size in self.lengths: - for depth in [2, 3, 4, 5]: - with self.subTest("n={}, buffer_size={}, depth={}".format(n, buffer_size, depth)): - data = [torch.Tensor([float(i)]) for i in range(n)] - it = NativeCheckpointableIterator(copy.deepcopy(data)) - for _ in range(depth): - it = PrefetchIterator(it, buffer_size, buffer_in_main_process=True) - result = list(it) - self.assertEqual(result, data) - it.close() - - def test_torch_tensors(self): - for n in self.lengths: - for buffer_size in self.lengths: - with self.subTest("n={}, buffer_size={}".format(n, buffer_size)): - data = [torch.Tensor([float(i)]) for i in range(n)] - it = PrefetchIterator( - NativeCheckpointableIterator(copy.deepcopy(data)), buffer_size, buffer_in_main_process=True - ) - result = list(it) - self.assertEqual(result, data) - it.close() - - def tearDown(self): - if hasattr(self, "test_cases"): - for _, _, it in self.test_cases: - it.close() - - -class TestMultiplexIterator(TestBase, TestFiniteIteratorMixin, TestFiniteIteratorCheckpointingMixin): - # TODO: Add test cases for behavior when source iterators end but item is retrieved - def setUp(self): - super().setUp() - random = Random() - random.seed(42) - self.test_cases = [] - - # two source iterators - for n in self.lengths: - indices = [random.randrange(0, 2) for _ in range(n)] - data = [[2 * i + 0 for i in range(n)], [2 * i + 1 for i in range(n)]] - data_copy = copy.deepcopy(data) - expected_result = [data_copy[i].pop(0) for i in indices] - it = MultiplexIterator( - NativeCheckpointableIterator(indices), [NativeCheckpointableIterator(d) for d in data] - ) - self.test_cases.append(("n={}, two source iterators".format(n), expected_result, it)) - - # three source iterators - for n in self.lengths: - indices = [random.randrange(0, 3) for _ in range(n)] - data = [[3 * i + 0 for i in range(n)], [3 * i + 1 for i in range(n)], [3 * i + 2 for i in range(n)]] - data_copy = copy.deepcopy(data) - expected_result = [data_copy[i].pop(0) for i in indices] - it = MultiplexIterator( - NativeCheckpointableIterator(indices), [NativeCheckpointableIterator(d) for d in data] - ) - self.test_cases.append(("n={}, three source iterators".format(n), expected_result, it)) - - -class TestNativeCheckpointableIterator(TestBase, TestFiniteIteratorMixin, TestFiniteIteratorCheckpointingMixin): - def setUp(self): - super().setUp() - self.test_cases = [] - for n in self.lengths: - data = list(range(n)) - expected_result = copy.deepcopy(data) - it = NativeCheckpointableIterator(data) - self.test_cases.append(("n={}".format(n), expected_result, it)) - - def test_empty(self): - it = NativeCheckpointableIterator([]) - self.assertRaises(StopIteration, it.__next__) - - def test_iterator_exception(self): - 
self.assertRaises(ValueError, NativeCheckpointableIterator, iter(range(10))) - - -class TestFixedBatchIterator(TestBase, TestFiniteIteratorMixin, TestFiniteIteratorCheckpointingMixin): - def setUp(self): - super().setUp() - self.test_cases = [] - for n in self.lengths: - for batch_size in self.lengths: - data = list(range(n)) - data_copy = copy.deepcopy(data) - expected_result = [] - while data_copy: - expected_result.append(data_copy[:batch_size]) - data_copy = data_copy[batch_size:] - it = FixedBatchIterator(NativeCheckpointableIterator(data), batch_size=batch_size) - self.test_cases.append(("n={}, batch_size={}".format(n, batch_size), expected_result, it)) - - def test_invalid_batch_size(self): - f = lambda: FixedBatchIterator(NativeCheckpointableIterator([0]), batch_size=0) - self.assertRaises(ValueError, f) - - -class TestRecurrentIterator(TestBase, TestFiniteIteratorMixin, TestFiniteIteratorCheckpointingMixin): - @staticmethod - def step_function(prev_state, item): - output = prev_state + item - return output, output - - def setUp(self): - super().setUp() - self.test_cases = [] - for n in self.lengths: - data = list(range(n)) - expected_result = [data[0]] - for i in data[1:]: - expected_result.append(self.step_function(expected_result[-1], i)[1]) - it = RecurrentIterator(NativeCheckpointableIterator(data), self.step_function, initial_state=0) - self.test_cases.append(("n={}".format(n), expected_result, it)) - - -class TestSelectManyIterator(TestBase, TestFiniteIteratorMixin, TestFiniteIteratorCheckpointingMixin): - @staticmethod - def custom_selector(l): - return [l[0]] - - def setUp(self): - super().setUp() - self.test_cases = [] - - # default selector - for n in self.lengths: - for list_length in [1, 4, 9]: - data = list(range(n)) - expected_result = copy.deepcopy(data) - lists = [] - while data: - lists.append(data[:list_length]) - data = data[list_length:] - it = SelectManyIterator(NativeCheckpointableIterator(lists)) - self.test_cases.append( - ("n={}, list_length={}, default selector".format(n, list_length), expected_result, it) - ) - - # custom selector - for n in self.lengths: - for list_length in [4, 9]: - data = list(range(n)) - expected_result = [item for i, item in enumerate(data) if (i % list_length) == 0] - lists = [] - while data: - lists.append(data[:list_length]) - data = data[list_length:] - it = SelectManyIterator(NativeCheckpointableIterator(lists), collection_selector=self.custom_selector) - self.test_cases.append( - ("n={}, list_length={}, custom selector".format(n, list_length), expected_result, it) - ) - - -class TestBlockwiseShuffleIterator(TestBase, TestFiniteIteratorCheckpointingMixin): - def setUp(self): - super().setUp() - self.test_cases = [] - for n in self.lengths: - for block_size in self.lengths: - data = list(range(n)) - it = BlockwiseShuffleIterator(NativeCheckpointableIterator(copy.deepcopy(data)), block_size, self.seed) - self.test_cases.append(("n={}, block_size={}".format(n, block_size), data, it)) - - def test_basic(self): - for case_name, expected_result, it in self.test_cases: - with self.subTest(case_name): - result = list(it) - self.assertMultisetEqual(result, expected_result) - - -class TestWindowedIterator(TestBase, TestFiniteIteratorMixin, TestFiniteIteratorCheckpointingMixin): - def setUp(self): - super().setUp() - self.test_cases = [] - for n in self.lengths: - for window_size in self.lengths: - if n < window_size: - continue - data = list(range(n)) - it = WindowedIterator(NativeCheckpointableIterator(copy.deepcopy(data)), 
window_size) - expected_result = [] - for i in range(len(data)): - if i + window_size > len(data): - break - expected_result.append(tuple(data[i : i + window_size])) - self.test_cases.append(("n={}, window_size={}".format(n, window_size), expected_result, it)) - - -class TestSourceIterator(TestBase): - # TODO: Do we need more tests for this? - def test_exception(self): - self.assertRaises(ValueError, create_source_iterator, [1], train=False, shuffle=True) - - -class TestBucketedReadaheadBatchIterator(TestBase, TestFiniteIteratorCheckpointingMixin): - dynamic_batch_size = 15 - - @staticmethod - def key_fn(item): - return len(item) - - @staticmethod - def batch_size_fn(item): - return TestBucketedReadaheadBatchIterator.dynamic_batch_size // len(item) - - @staticmethod - def boundary_key_fn(item): - return len(item) < 5 - - @staticmethod - def setup_data(n): - data = [] - for i in range(n): - data.append(tuple(range(i % 10 + 1))) - return data - - def setUp(self): - super().setUp() - self.batch_sizes = [1, 2, 3, 9] - self.test_cases = [] - - # fixed batch size, not shuffled, no boundary key - for n, read_ahead in itertools.product(self.lengths, self.lengths): - for batch_size in self.batch_sizes: - data = self.setup_data(n) - it = BucketedReadaheadBatchIterator( - NativeCheckpointableIterator(copy.deepcopy(data)), - read_ahead=read_ahead, - key=self.key_fn, - batch_size=batch_size, - shuffle=False, - ) - self.test_cases.append( - ( - "n={}, read_ahead={}, batch_size={}, boundary_key=None, shuffled=False".format( - n, read_ahead, batch_size - ), - data, - it, - ) - ) - - # fixed batch size, shuffled, no boundary key - for n, read_ahead in itertools.product(self.lengths, self.lengths): - for batch_size in self.batch_sizes: - data = self.setup_data(n) - it = BucketedReadaheadBatchIterator( - NativeCheckpointableIterator(copy.deepcopy(data)), - read_ahead=read_ahead, - key=self.key_fn, - batch_size=batch_size, - shuffle=True, - seed=self.seed, - ) - self.test_cases.append( - ( - "n={}, read_ahead={}, batch_size={}, boundary_key=None, shuffled=True".format( - n, read_ahead, batch_size - ), - data, - it, - ) - ) - - # dynamic batch size, not shuffled, no boundary key - for n, read_ahead in itertools.product(self.lengths, self.lengths): - data = self.setup_data(n) - it = BucketedReadaheadBatchIterator( - NativeCheckpointableIterator(copy.deepcopy(data)), - read_ahead=read_ahead, - key=self.key_fn, - batch_size=self.batch_size_fn, - shuffle=False, - ) - self.test_cases.append( - ( - "n={}, read_ahead={}, batch_size=dynamic, boundary_key=None, shuffled=False".format(n, read_ahead), - data, - it, - ) - ) - - # dynamic batch size, shuffled, no boundary key - for n, read_ahead in itertools.product(self.lengths, self.lengths): - data = self.setup_data(n) - it = BucketedReadaheadBatchIterator( - NativeCheckpointableIterator(copy.deepcopy(data)), - read_ahead=read_ahead, - key=self.key_fn, - batch_size=self.batch_size_fn, - shuffle=True, - seed=self.seed, - ) - self.test_cases.append( - ( - "n={}, read_ahead={}, batch_size=dynamic, boundary_key=None, shuffled=True".format(n, read_ahead), - data, - it, - ) - ) - - # fixed batch size, not shuffled, boundary key - for n, read_ahead in itertools.product(self.lengths, self.lengths): - for batch_size in self.batch_sizes: - data = self.setup_data(n) - it = BucketedReadaheadBatchIterator( - NativeCheckpointableIterator(copy.deepcopy(data)), - read_ahead=read_ahead, - key=self.key_fn, - batch_size=batch_size, - boundary_key=self.boundary_key_fn, - shuffle=False, - ) 
- self.test_cases.append( - ( - "n={}, read_ahead={}, batch_size={}, boundary_key=len(item)<5, shuffled=False".format( - n, read_ahead, batch_size - ), - data, - it, - ) - ) - - # fixed batch size, shuffled, boundary key - for n, read_ahead in itertools.product(self.lengths, self.lengths): - for batch_size in self.batch_sizes: - data = self.setup_data(n) - it = BucketedReadaheadBatchIterator( - NativeCheckpointableIterator(copy.deepcopy(data)), - read_ahead=read_ahead, - key=self.key_fn, - batch_size=batch_size, - boundary_key=self.boundary_key_fn, - shuffle=True, - seed=self.seed, - ) - self.test_cases.append( - ( - "n={}, read_ahead={}, batch_size={}, boundary_key=len(item)<5, shuffled=True".format( - n, read_ahead, batch_size - ), - data, - it, - ) - ) - - # dynamic batch size, not shuffled, boundary key - for n, read_ahead in itertools.product(self.lengths, self.lengths): - data = self.setup_data(n) - it = BucketedReadaheadBatchIterator( - NativeCheckpointableIterator(copy.deepcopy(data)), - read_ahead=read_ahead, - key=self.key_fn, - batch_size=self.batch_size_fn, - boundary_key=self.boundary_key_fn, - shuffle=False, - seed=self.seed, - ) - self.test_cases.append( - ( - "n={}, read_ahead={}, batch_size=dynamic, boundary_key=len(item)<5, shuffled=False".format( - n, read_ahead - ), - data, - it, - ) - ) - - # dynamic batch size, shuffled, boundary key - for n, read_ahead in itertools.product(self.lengths, self.lengths): - data = self.setup_data(n) - it = BucketedReadaheadBatchIterator( - NativeCheckpointableIterator(copy.deepcopy(data)), - read_ahead=read_ahead, - key=self.key_fn, - batch_size=self.batch_size_fn, - boundary_key=self.boundary_key_fn, - shuffle=True, - seed=self.seed, - ) - self.test_cases.append( - ( - "n={}, read_ahead={}, batch_size=dynamic, boundary_key=len(item)<5, shuffled=True".format( - n, read_ahead - ), - data, - it, - ) - ) - - def test_basic(self): - for case_name, expected_result, it in self.test_cases: - with self.subTest(case_name): - result = list(it) - flattened_result = [item for batch in result for item in batch] - self.assertMultisetEqual(flattened_result, expected_result) - - def test_max_len(self): - for case_name, expected_result, it in self.test_cases: - if "batch_size=dynamic" in case_name: - with self.subTest(case_name): - result = list(it) - for batch in result: - length = sum((len(item) for item in batch)) - self.assertTrue(length <= TestBucketedReadaheadBatchIterator.dynamic_batch_size) - - def test_boundary_key(self): - for case_name, expected_result, it in self.test_cases: - if "boundary_key=len(item)<5" in case_name: - with self.subTest(case_name): - result = list(it) - for batch in result: - boundary_keys = [self.boundary_key_fn(item) for item in batch] - self.assertTrue(all(boundary_keys) or not any(boundary_keys)) diff --git a/kosmos-g/open_clip/.github/workflows/ci.yml b/kosmos-g/open_clip/.github/workflows/ci.yml deleted file mode 100644 index 74b8eb5ac..000000000 --- a/kosmos-g/open_clip/.github/workflows/ci.yml +++ /dev/null @@ -1,33 +0,0 @@ -name: Continuous integration - -on: - push: - branches: - - main - pull_request: - branches: - - main - -jobs: - tests: - runs-on: ubuntu-latest - strategy: - matrix: - python-version: [3.8] - - steps: - - uses: actions/checkout@v2 - - name: Set up Python ${{ matrix.python-version }} - uses: actions/setup-python@v2 - with: - python-version: ${{ matrix.python-version }} - - name: Install - run: | - python3 -m venv .env - source .env/bin/activate - make install - make install-dev - - name: Unit 
tests - run: | - source .env/bin/activate - make test diff --git a/kosmos-g/open_clip/.github/workflows/python-publish.yml b/kosmos-g/open_clip/.github/workflows/python-publish.yml deleted file mode 100644 index f336b13ba..000000000 --- a/kosmos-g/open_clip/.github/workflows/python-publish.yml +++ /dev/null @@ -1,37 +0,0 @@ -name: Release - -on: - push: - branches: - - main -jobs: - deploy: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v2 - - uses: actions-ecosystem/action-regex-match@v2 - id: regex-match - with: - text: ${{ github.event.head_commit.message }} - regex: '^Release ([^ ]+)' - - name: Set up Python - uses: actions/setup-python@v2 - with: - python-version: '3.8' - - name: Install dependencies - run: | - python -m pip install --upgrade pip - pip install setuptools wheel twine - - name: Release - if: ${{ steps.regex-match.outputs.match != '' }} - uses: softprops/action-gh-release@v1 - with: - tag_name: v${{ steps.regex-match.outputs.group1 }} - - name: Build and publish - if: ${{ steps.regex-match.outputs.match != '' }} - env: - TWINE_USERNAME: __token__ - TWINE_PASSWORD: ${{ secrets.PYPI_PASSWORD }} - run: | - python setup.py sdist bdist_wheel - twine upload dist/* diff --git a/kosmos-g/open_clip/.gitignore b/kosmos-g/open_clip/.gitignore deleted file mode 100644 index e232eb028..000000000 --- a/kosmos-g/open_clip/.gitignore +++ /dev/null @@ -1,150 +0,0 @@ -logs/ -wandb/ -models/ -features/ -results/ - -# Byte-compiled / optimized / DLL files -__pycache__/ -*.py[cod] -*$py.class - -# C extensions -*.so - -# Distribution / packaging -.Python -build/ -develop-eggs/ -dist/ -downloads/ -eggs/ -.eggs/ -lib/ -lib64/ -parts/ -sdist/ -var/ -wheels/ -pip-wheel-metadata/ -share/python-wheels/ -*.egg-info/ -.installed.cfg -*.egg -MANIFEST - -# PyInstaller -# Usually these files are written by a python script from a template -# before PyInstaller builds the exe, so as to inject date/other infos into it. -*.manifest -*.spec - -# Installer logs -pip-log.txt -pip-delete-this-directory.txt - -# Unit test / coverage reports -htmlcov/ -.tox/ -.nox/ -.coverage -.coverage.* -.cache -nosetests.xml -coverage.xml -*.cover -*.py,cover -.hypothesis/ -.pytest_cache/ - -# Translations -*.mo -*.pot - -# Django stuff: -*.log -local_settings.py -db.sqlite3 -db.sqlite3-journal - -# Flask stuff: -instance/ -.webassets-cache - -# Scrapy stuff: -.scrapy - -# Sphinx documentation -docs/_build/ - -# PyBuilder -target/ - -# Jupyter Notebook -.ipynb_checkpoints - -# IPython -profile_default/ -ipython_config.py - -# pyenv -.python-version - -# pipenv -# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. -# However, in case of collaboration, if having platform-specific dependencies or dependencies -# having no cross-platform support, pipenv may install dependencies that don't work, or not -# install all needed dependencies. -#Pipfile.lock - -# PEP 582; used by e.g. 
github.com/David-OConnor/pyflow -__pypackages__/ - -# Celery stuff -celerybeat-schedule -celerybeat.pid - -# SageMath parsed files -*.sage.py - -# Environments -.env -.venv -env/ -venv/ -ENV/ -env.bak/ -venv.bak/ - -# Spyder project settings -.spyderproject -.spyproject - -# Rope project settings -.ropeproject - -# mkdocs documentation -/site - -# mypy -.mypy_cache/ -.dmypy.json -dmypy.json - -# Pyre type checker -.pyre/ -sync.sh -gpu1sync.sh -.idea -*.pdf -**/._* -**/*DS_* -**.jsonl -src/sbatch -src/misc -.vscode -src/debug -core.* - -# Allow -!src/evaluation/misc/results_dbs/* \ No newline at end of file diff --git a/kosmos-g/open_clip/CITATION.cff b/kosmos-g/open_clip/CITATION.cff deleted file mode 100644 index 1072ddd3a..000000000 --- a/kosmos-g/open_clip/CITATION.cff +++ /dev/null @@ -1,33 +0,0 @@ -cff-version: 1.1.0 -message: If you use this software, please cite it as below. -authors: - - family-names: Ilharco - given-names: Gabriel - - family-names: Wortsman - given-names: Mitchell - - family-names: Wightman - given-names: Ross - - family-names: Gordon - given-names: Cade - - family-names: Carlini - given-names: Nicholas - - family-names: Taori - given-names: Rohan - - family-names: Dave - given-names: Achal - - family-names: Shankar - given-names: Vaishaal - - family-names: Namkoong - given-names: Hongseok - - family-names: Miller - given-names: John - - family-names: Hajishirzi - given-names: Hannaneh - - family-names: Farhadi - given-names: Ali - - family-names: Schmidt - given-names: Ludwig -title: OpenCLIP -version: v0.1 -doi: 10.5281/zenodo.5143773 -date-released: 2021-07-28 diff --git a/kosmos-g/open_clip/CLIP.png b/kosmos-g/open_clip/CLIP.png deleted file mode 100644 index a1b5ec917..000000000 Binary files a/kosmos-g/open_clip/CLIP.png and /dev/null differ diff --git a/kosmos-g/open_clip/HISTORY.md b/kosmos-g/open_clip/HISTORY.md deleted file mode 100644 index 0af8ba347..000000000 --- a/kosmos-g/open_clip/HISTORY.md +++ /dev/null @@ -1,10 +0,0 @@ -## 1.2.0 - -* ViT-B/32 trained on Laion2B-en -* add missing openai RN50x64 model - -## 1.1.1 - -* ViT-B/16+ -* Add grad checkpointing support -* more robust data loader diff --git a/kosmos-g/open_clip/LICENSE b/kosmos-g/open_clip/LICENSE deleted file mode 100644 index 5bfbf6c09..000000000 --- a/kosmos-g/open_clip/LICENSE +++ /dev/null @@ -1,23 +0,0 @@ -Copyright (c) 2012-2021 Gabriel Ilharco, Mitchell Wortsman, -Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, -John Miller, Hongseok Namkoong, Hannaneh Hajishirzi, Ali Farhadi, -Ludwig Schmidt - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/kosmos-g/open_clip/MANIFEST.in b/kosmos-g/open_clip/MANIFEST.in deleted file mode 100644 index c74de18e6..000000000 --- a/kosmos-g/open_clip/MANIFEST.in +++ /dev/null @@ -1,3 +0,0 @@ -include src/open_clip/bpe_simple_vocab_16e6.txt.gz -include src/open_clip/model_configs/*.json - diff --git a/kosmos-g/open_clip/Makefile b/kosmos-g/open_clip/Makefile deleted file mode 100644 index d4b5fc320..000000000 --- a/kosmos-g/open_clip/Makefile +++ /dev/null @@ -1,9 +0,0 @@ -install: ## [Local development] Upgrade pip, install requirements, install package. - python -m pip install -U pip - python -m pip install -e . - -install-dev: ## [Local development] Install test requirements - python -m pip install -r requirements-test.txt - -test: ## [Local development] Run unit tests - python -m pytest -x -s -v tests diff --git a/kosmos-g/open_clip/README.md b/kosmos-g/open_clip/README.md deleted file mode 100644 index 010b2236d..000000000 --- a/kosmos-g/open_clip/README.md +++ /dev/null @@ -1,488 +0,0 @@ -# OpenCLIP - -[[Paper]](https://arxiv.org/abs/2109.01903) [[Colab]](https://colab.research.google.com/github/mlfoundations/open_clip/blob/master/docs/Interacting_with_open_clip.ipynb) - -Welcome to an open source implementation of OpenAI's [CLIP](https://arxiv.org/abs/2103.00020) (Contrastive Language-Image Pre-training). - -The goal of this repository is to enable training models with contrastive image-text supervision, and to investigate their properties such as robustness to distribution shift. Our starting point is an implementation of CLIP that matches the accuracy of the original CLIP models when trained on the same dataset. -Specifically, a ResNet-50 model trained with our codebase on OpenAI's [15 million image subset of YFCC](https://github.com/openai/CLIP/blob/main/data/yfcc100m.md) achieves **32.7%** top-1 accuracy on ImageNet. OpenAI's CLIP model reaches **31.3%** when trained on the same subset of YFCC. For ease of experimentation, we also provide code for training on the 3 million images in the [Conceptual Captions](https://ai.google.com/research/ConceptualCaptions/download) dataset, where a ResNet-50x4 trained with our codebase reaches 22.2% top-1 ImageNet accuracy. - -We further this with a replication study on a dataset of comparable size to OpenAI's. Using [LAION-400M](https://arxiv.org/abs/2111.02114), we train CLIP with a - * ViT-B/32 and achieve an accuracy of **62.9%**, comparable to OpenAI's **63.2%**, zero-shot top-1 on ImageNet1k - * ViT-B/16 and achieve an accuracy of **67.1%**, comparable to OpenAI's **68.3%** (as measured here, 68.6% in paper) - * ViT-B/16+ 240x240 (~50% more FLOPS than B/16 224x224) and achieve an accuracy of **69.2%** - * ViT-L/14 and achieve an accuracy of **72.77%**, vs OpenAI's **75.5%** (as measured here, 75.3% in paper) - -As we describe in more detail [below](#why-are-low-accuracy-clip-models-interesting), CLIP models in a medium accuracy regime already allow us to draw conclusions about the robustness of larger CLIP models since the models follow [reliable scaling laws](https://arxiv.org/abs/2107.04649). - -This codebase is work in progress, and we invite all to contribute in making it more accessible and useful.
In the future, we plan to add support for TPU training and release larger models. We hope this codebase facilitates and promotes further research in contrastive image-text learning. Please submit an issue or send an email if you have any other requests or suggestions. - -Note that portions of `src/open_clip/` modelling and tokenizer code are adaptations of OpenAI's official [repository](https://github.com/openai/CLIP). - -## Approach - -| ![CLIP](https://raw.githubusercontent.com/mlfoundations/open_clip/main/docs/CLIP.png) | -|:--:| -| Image Credit: https://github.com/openai/CLIP | - -## Usage - -``` -pip install open_clip_torch -``` - -```python -import torch -from PIL import Image -import open_clip - -model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-32-quickgelu', pretrained='laion400m_e32') - -image = preprocess(Image.open("CLIP.png")).unsqueeze(0) -text = open_clip.tokenize(["a diagram", "a dog", "a cat"]) - -with torch.no_grad(): - image_features = model.encode_image(image) - text_features = model.encode_text(text) - image_features /= image_features.norm(dim=-1, keepdim=True) - text_features /= text_features.norm(dim=-1, keepdim=True) - - text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1) - -print("Label probs:", text_probs) # prints: [[1., 0., 0.]] -``` - -To compute billions of embeddings efficiently, you can use [clip-retrieval](https://github.com/rom1504/clip-retrieval) which has openclip support. - -## Fine-tuning on classification tasks - -This repository is focused on training CLIP models. To fine-tune a *trained* zero-shot model on a downstream classification task such as ImageNet, please see [our other repository: WiSE-FT](https://github.com/mlfoundations/wise-ft). The [WiSE-FT repository](https://github.com/mlfoundations/wise-ft) contains code for our paper on [Robust Fine-tuning of Zero-shot Models](https://arxiv.org/abs/2109.01903), in which we introduce a technique for fine-tuning zero-shot models while preserving robustness under distribution shift. - -## Data - - -### Conceptual Captions - -OpenCLIP reads a CSV file with two columns: a path to an image, and a text caption. The names of the columns are passed as an argument to `main.py`. - -The script `src/data/gather_cc.py` will collect the Conceptual Captions images. First, download the [Conceptual Captions URLs](https://ai.google.com/research/ConceptualCaptions/download) and then run the script from our repository: - -```bash -python3 src/data/gather_cc.py path/to/Train_GCC-training.tsv path/to/Validation_GCC-1.1.0-Validation.tsv -``` - -Our training set contains 2.89M images, and our validation set contains 13K images. - - -### YFCC and other datasets - -In addition to specifying the training data via CSV files as mentioned above, our codebase also supports [webdataset](https://github.com/webdataset/webdataset), which is recommended for larger scale datasets. The expected format is a series of `.tar` files. Each of these `.tar` files should contain two files for each training example, one for the image and one for the corresponding text. Both files should have the same name but different extensions. For instance, `shard_001.tar` could contain files such as `abc.jpg` and `abc.txt`. You can learn more about `webdataset` at [https://github.com/webdataset/webdataset](https://github.com/webdataset/webdataset). We use `.tar` files with 1,000 data points each, which we create using [tarp](https://github.com/webdataset/tarp). 
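-As a concrete illustration of the pairing convention above, here is a minimal sketch (not part of this repository) that packs in-memory image/caption pairs into a single shard using Python's standard `tarfile` module; `write_shard` and `pairs` are hypothetical names used only for this example:
-
-```python
-import io
-import tarfile
-
-def write_shard(pairs, shard_path):
-    # pairs: iterable of (image_bytes, caption) tuples, e.g. ~1,000 per shard.
-    # Each sample becomes two tar members sharing a basename (000000.jpg and
-    # 000000.txt), which is how webdataset groups files back into samples.
-    with tarfile.open(shard_path, "w") as tar:
-        for idx, (image_bytes, caption) in enumerate(pairs):
-            key = "{:06d}".format(idx)
-            for suffix, payload in ((".jpg", image_bytes), (".txt", caption.encode("utf-8"))):
-                info = tarfile.TarInfo(name=key + suffix)
-                info.size = len(payload)
-                tar.addfile(info, io.BytesIO(payload))
-```
-
-In practice the [tarp](https://github.com/webdataset/tarp) tool mentioned above handles this at scale; the sketch only illustrates the on-disk layout that the data loader expects.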
- -You can download the YFCC dataset from [Multimedia Commons](http://mmcommons.org/). -Similar to OpenAI, we used a subset of YFCC to reach the aforementioned accuracy numbers. -The indices of images in this subset are in [OpenAI's CLIP repository](https://github.com/openai/CLIP/blob/main/data/yfcc100m.md). - - -## Training CLIP - -### Setup Environment and Install dependencies - -#### Conda - -```bash -# Create a conda environment (heavily recommended) -conda create -n open_clip python=3.10 -conda activate open_clip -``` - -Install conda PyTorch as per https://pytorch.org/get-started/locally/ - -#### Virtualenv - -OpenCLIP can also be used with virtualenv with these lines: -``` -python3 -m venv .env -source .env/bin/activate -pip install -U pip -make install -``` - -Install pip PyTorch as per https://pytorch.org/get-started/locally/ - -Tests can be run with `make install-dev` then `make test` - -#### Other dependencies - -Install the open_clip package and remaining dependencies: - -```bash -cd open_clip -python setup.py install -``` - -If you want to train models, you will also need to install the packages -from `requirements-training.txt`. - -### Sample single-process running code: - -```bash -python -m training.main \ - --save-frequency 1 \ - --zeroshot-frequency 1 \ - --report-to tensorboard \ - --train-data="/path/to/train_data.csv" \ - --val-data="/path/to/validation_data.csv" \ - --csv-img-key filepath \ - --csv-caption-key title \ - --imagenet-val=/path/to/imagenet/root/val/ \ - --warmup 10000 \ - --batch-size=128 \ - --lr=1e-3 \ - --wd=0.1 \ - --epochs=30 \ - --workers=8 \ - --model RN50 -``` - -Note: `imagenet-val` is the path to the *validation* set of ImageNet for zero-shot evaluation, not the training set! -You can remove this argument if you do not want to perform zero-shot evaluation on ImageNet throughout training. Note that the `val` folder should contain subfolders. If it does not, please use [this script](https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh). - -### Multi-GPU and Beyond - -This code has been battle tested up to 1024 A100s and offers a variety of solutions -for distributed training. We include native support for SLURM clusters. - -As the number of devices used to train increases, so does the space complexity of -the logit matrix. Using a naïve all-gather scheme, space complexity will be -`O(n^2)`. Instead, complexity may become effectively linear if the flags -`--gather-with-grad` and `--local-loss` are used. This alteration produces numerical results -identical to the naïve method. - -#### Single-Node - -We make use of `torchrun` to launch distributed jobs. The following launches -a job on a node with 4 GPUs: - -```bash -cd open_clip/src -torchrun --nproc_per_node 4 -m training.main \ - --train-data '/data/cc12m/cc12m-train-{0000..2175}.tar' \ - --train-num-samples 10968539 \ - --dataset-type webdataset \ - --batch-size 320 \ - --precision amp \ - --workers 4 \ - --imagenet-val /data/imagenet/validation/ -``` - -#### Multi-Node - -The same script above works, so long as users include information about the number -of nodes and host node.
- -```bash -cd open_clip/src -torchrun --nproc_per_node=4 \ - --rdzv_endpoint=$HOST_NODE_ADDR \ - -m training.main \ - --train-data '/data/cc12m/cc12m-train-{0000..2175}.tar' \ - --train-num-samples 10968539 \ - --dataset-type webdataset \ - --batch-size 320 \ - --precision amp \ - --workers 4 \ - --imagenet-val /data/imagenet/validation/ -``` - -#### SLURM - -This is likely the easiest solution to utilize. The following script was used to -train our largest models: - -```bash -#!/bin/bash -x -#SBATCH --nodes=32 -#SBATCH --gres=gpu:4 -#SBATCH --ntasks-per-node=4 -#SBATCH --cpus-per-task=6 -#SBATCH --wait-all-nodes=1 -#SBATCH --job-name=open_clip -#SBATCH --account=ACCOUNT_NAME -#SBATCH --partition PARTITION_NAME - -eval "$(/path/to/conda/bin/conda shell.bash hook)" # init conda -conda activate open_clip -export CUDA_VISIBLE_DEVICES=0,1,2,3 -export MASTER_PORT=12802 - -master_addr=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1) -export MASTER_ADDR=$master_addr - -cd /shared/open_clip -export PYTHONPATH="$PYTHONPATH:$PWD/src" -srun --cpu_bind=v --accel-bind=gn python -u src/training/main.py \ - --save-frequency 1 \ - --report-to tensorboard \ - --train-data="/data/LAION-400M/{00000..41455}.tar" \ - --warmup 2000 \ - --batch-size=256 \ - --epochs=32 \ - --workers=8 \ - --model ViT-B-32 \ - --name "ViT-B-32-Vanilla" \ - --seed 0 \ - --local-loss \ - --gather-with-grad -``` - -### Resuming from a checkpoint: - -```bash -python -m training.main \ - --train-data="/path/to/train_data.csv" \ - --val-data="/path/to/validation_data.csv" \ - --resume /path/to/checkpoints/epoch_K.pt -``` - -### Loss Curves - -When run on a machine with 8 GPUs, the command should produce the following training curve for Conceptual Captions: - -![CLIP zero shot training curve](https://raw.githubusercontent.com/mlfoundations/open_clip/main/docs/clip_zeroshot.png) - -More detailed curves for Conceptual Captions are given at [/docs/clip_conceptual_captions.md](/docs/clip_conceptual_captions.md). - -When training an RN50 on YFCC, the same hyperparameters as above are used, with the exception of `lr=5e-4` and `epochs=32`. - -Note that to use another model, like `ViT-B/32` or `RN50x4` or `RN50x16` or `ViT-B/16`, specify with `--model RN50x4`. - -### Launch tensorboard: -```bash -tensorboard --logdir=logs/tensorboard/ --port=7777 -``` - -## Evaluation / Zero-Shot - -### Evaluating local checkpoint: - -```bash -python -m training.main \ - --val-data="/path/to/validation_data.csv" \ - --model RN101 \ - --pretrained /path/to/checkpoints/epoch_K.pt -``` - -### Evaluating hosted pretrained checkpoint on ImageNet zero-shot prediction: - -```bash -python -m training.main \ - --imagenet-val /path/to/imagenet/validation \ - --model ViT-B-32-quickgelu \ - --pretrained laion400m_e32 -``` - -## Pretrained model details - -### LAION-400M - https://laion.ai/laion-400-open-dataset - -We are working on reproducing OpenAI's ViT results with the comparably sized (and open) LAION-400M dataset. Trained -weights may be found in release [v0.2](https://github.com/mlfoundations/open_clip/releases/tag/v0.2-weights). - -The LAION400M weights have been trained on the JUWELS supercomputer (see acknowledgements section below). - -#### ViT-B/32 224x224 - -We replicate OpenAI's results on ViT-B/32, reaching a top-1 ImageNet-1k zero-shot accuracy of 62.96%.
- -<img src="https://raw.githubusercontent.com/mlfoundations/open_clip/main/docs/laion_clip_zeroshot.png" width="700"> - -__Zero-shot comparison (courtesy of Andreas Fürst)__ -<img src="https://raw.githubusercontent.com/mlfoundations/open_clip/main/docs/laion_openai_compare_b32.jpg" width="700"> - -ViT-B/32 was trained with 128 A100 (40 GB) GPUs for ~36 hours, 4600 GPU-hours. The per-GPU batch size was 256 for a global batch size of 32768. 256 is much lower than it could have been (~320-384) due to being sized initially before moving to 'local' contrastive loss. - -#### ViT-B/16 224x224 - -The B/16 LAION400M training reached a top-1 ImageNet-1k zero-shot validation score of 67.07. - -<img src="https://raw.githubusercontent.com/mlfoundations/open_clip/main/docs/laion_clip_zeroshot_b16.png" width="700"> - -This was the first major train session using the updated webdataset 0.2.x code. A bug was found that prevented shards from being shuffled properly between nodes/workers each epoch. This was fixed partway through training (epoch 26) but likely had an impact. - -ViT-B/16 was trained with 176 A100 (40 GB) GPUs for ~61 hours, 10700 GPU-hours. Batch size per GPU was 192 for a global batch size of 33792. - -#### ViT-B/16+ 240x240 - -The B/16+ 240x240 LAION400M training reached a top-1 ImageNet-1k zero-shot validation score of 69.21. - -This model is the same depth as the B/16, but increases the - * vision width from 768 -> 896 - * text width from 512 -> 640 - * the resolution 224x224 -> 240x240 (196 -> 225 tokens) - -<img src="https://raw.githubusercontent.com/mlfoundations/open_clip/main/docs/laion_clip_zeroshot_b16_plus_240.png" width="700"> - -Unlike the B/16 run above, this model was a clean run with no dataset shuffling issues. - -ViT-B/16+ was trained with 224 A100 (40 GB) GPUs for ~61 hours, 13620 GPU-hours. Batch size per GPU was 160 for a global batch size of 35840. - -#### ViT-L/14 224x224 - -The L/14 LAION-400M training reached a top-1 ImageNet-1k zero-shot validation score of 72.77. - -<img src="https://raw.githubusercontent.com/mlfoundations/open_clip/main/docs/laion_clip_zeroshot_l14.png" width="700"> - -ViT-L/14 was trained with 400 A100 (40 GB) GPUs for ~127 hours, 50800 GPU-hours. Batch size per GPU was 96 for a global batch size of 38400. Grad checkpointing was enabled. - -### LAION-2B (en) - https://laion.ai/laion-5b-a-new-era-of-open-large-scale-multi-modal-datasets/ - -A ~2B sample subset of LAION-5B with English captions (https://huggingface.co/datasets/laion/laion2B-en) - -#### ViT-B/32 224x224 -A ViT-B/32 trained on LAION-2B, reaching a top-1 ImageNet-1k zero-shot accuracy of 65.62%. - -<img src="https://raw.githubusercontent.com/mlfoundations/open_clip/main/docs/laion2b_clip_zeroshot_b32.png" width="700"> - -ViT-B/32 was trained with 112 A100 (40 GB) GPUs. The per-GPU batch size was 416 for a global batch size of 46592. Compute generously provided by [stability.ai](https://stability.ai/). - -#### YFCC-15M - -Below are checkpoints of models trained on YFCC-15M, along with their zero-shot top-1 accuracies on ImageNet and ImageNetV2. These models were trained using 8 GPUs and the same hyperparameters described in the "Sample running code" section, with the exception of `lr=5e-4` and `epochs=32`.
- -* [ResNet-50](https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-yfcc15m-455df137.pt) (32.7% / 27.9%) -* [ResNet-101](https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn101-quickgelu-yfcc15m-3e04b30e.pt) (34.8% / 30.0%) - -#### CC12M - https://github.com/google-research-datasets/conceptual-12m - -* [ResNet-50](https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-cc12m-f000538c.pt) (36.45%) - -### Pretrained Model Interface - -We offer a simple model interface to instantiate both pre-trained and untrained models. - -NOTE: Many existing checkpoints use the QuickGELU activation from the original OpenAI models. This activation is actually less efficient than native torch.nn.GELU in recent versions of PyTorch. The model defaults are now nn.GELU, so one should use model definitions with the `-quickgelu` postfix for the OpenCLIP pretrained weights. All OpenAI pretrained weights will always default to QuickGELU. One can also use the non `-quickgelu` model definitions with pretrained weights using QuickGELU, but there will be an accuracy drop; for fine-tuning, it will likely vanish over longer runs. - -Future trained models will use nn.GELU. - -```python ->>> import open_clip ->>> open_clip.list_pretrained() -[('RN50', 'openai'), - ('RN50', 'yfcc15m'), - ('RN50', 'cc12m'), - ('RN50-quickgelu', 'openai'), - ('RN50-quickgelu', 'yfcc15m'), - ('RN50-quickgelu', 'cc12m'), - ('RN101', 'openai'), - ('RN101', 'yfcc15m'), - ('RN101-quickgelu', 'openai'), - ('RN101-quickgelu', 'yfcc15m'), - ('RN50x4', 'openai'), - ('RN50x16', 'openai'), - ('RN50x64', 'openai'), - ('ViT-B-32', 'openai'), - ('ViT-B-32', 'laion2b_e16'), - ('ViT-B-32', 'laion400m_e31'), - ('ViT-B-32', 'laion400m_e32'), - ('ViT-B-32-quickgelu', 'openai'), - ('ViT-B-32-quickgelu', 'laion400m_e31'), - ('ViT-B-32-quickgelu', 'laion400m_e32'), - ('ViT-B-16', 'openai'), - ('ViT-B-16', 'laion400m_e31'), - ('ViT-B-16', 'laion400m_e32'), - ('ViT-B-16-plus-240', 'laion400m_e31'), - ('ViT-B-16-plus-240', 'laion400m_e32'), - ('ViT-L-14', 'openai'), - ('ViT-L-14-336', 'openai')] - ->>> model, train_transform, eval_transform = open_clip.create_model_and_transforms('ViT-B-32', pretrained='laion2b_e16') -``` - -## Scaling trends - -The plot below shows how zero-shot performance of CLIP models varies as we scale the number of samples used for training. Zero-shot performance increases steadily for both ImageNet and [ImageNetV2](https://arxiv.org/abs/1902.10811), and is far from saturated at ~15M samples. - -<img src="https://raw.githubusercontent.com/mlfoundations/open_clip/main/docs/scaling.png" width="700"> - -## Why are low-accuracy CLIP models interesting? - -**TL;DR:** CLIP models have high effective robustness, even at small scales. - -CLIP models are particularly intriguing because they are more robust to natural distribution shifts (see Section 3.3 in the [CLIP paper](https://arxiv.org/abs/2103.00020)). -This phenomenon is illustrated by the figure below, with ImageNet accuracy on the x-axis -and [ImageNetV2](https://arxiv.org/abs/1902.10811) (a reproduction of the ImageNet validation set with distribution shift) accuracy on the y-axis. -Standard training denotes training on the ImageNet train set and the CLIP zero-shot models -are shown as stars.
- -![CLIP scatter plot](https://raw.githubusercontent.com/mlfoundations/open_clip/main/docs/effective_robustness.png) - -As observed by [Taori et al., 2020](https://arxiv.org/abs/2007.00644) and [Miller et al., 2021](https://arxiv.org/abs/2107.04649), the in-distribution -and out-of-distribution accuracies of models trained on ImageNet follow a predictable linear trend (the red line in the above plot). *Effective robustness* -quantifies robustness as accuracy beyond this baseline, i.e., how far a model lies above the red line. Ideally, a model would not suffer from distribution shift and would fall on the y = x line ([trained human labelers are within a percentage point of the y = x line](http://proceedings.mlr.press/v119/shankar20c.html)). - -Even though the CLIP models trained with -this codebase achieve much lower accuracy than those trained by OpenAI, our models still lie on the same -trend of improved effective robustness (the purple line). Therefore, we can study what makes -CLIP robust without requiring industrial-scale compute. - -For more information on effective robustness, please see: - -- [Recht et al., 2019](https://arxiv.org/abs/1902.10811). -- [Taori et al., 2020](https://arxiv.org/abs/2007.00644). -- [Miller et al., 2021](https://arxiv.org/abs/2107.04649). - -To learn more about the factors that contribute to CLIP's robustness, refer to [Fang et al., 2022](https://arxiv.org/abs/2205.01397). - -## Acknowledgments - -We gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this part of the work by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jülich Supercomputing Centre (JSC). - -## The Team - -Current development of this repository is led by [Ross Wightman](https://rwightman.com/), [Cade Gordon](http://cadegordon.io/), and [Vaishaal Shankar](http://vaishaal.com/). - -The original version of this repository is from a group of researchers at UW, Google, Stanford, Amazon, Columbia, and Berkeley. - -[Gabriel Ilharco*](http://gabrielilharco.com/), [Mitchell Wortsman*](https://mitchellnw.github.io/), [Nicholas Carlini](https://nicholas.carlini.com/), [Rohan Taori](https://www.rohantaori.com/), [Achal Dave](http://www.achaldave.com/), [Vaishaal Shankar](http://vaishaal.com/), [John Miller](https://people.eecs.berkeley.edu/~miller_john/), [Hongseok Namkoong](https://hsnamkoong.github.io/), [Hannaneh Hajishirzi](https://homes.cs.washington.edu/~hannaneh/), [Ali Farhadi](https://homes.cs.washington.edu/~ali/), [Ludwig Schmidt](https://people.csail.mit.edu/ludwigs/) - -Special thanks to [Jong Wook Kim](https://jongwook.kim/) and [Alec Radford](https://github.com/Newmu) for help with reproducing CLIP!
- -## Citing - -If you found this repository useful, please consider citing: -```bibtex -@software{ilharco_gabriel_2021_5143773, - author = {Ilharco, Gabriel and - Wortsman, Mitchell and - Wightman, Ross and - Gordon, Cade and - Carlini, Nicholas and - Taori, Rohan and - Dave, Achal and - Shankar, Vaishaal and - Namkoong, Hongseok and - Miller, John and - Hajishirzi, Hannaneh and - Farhadi, Ali and - Schmidt, Ludwig}, - title = {OpenCLIP}, - month = jul, - year = 2021, - note = {If you use this software, please cite it as below.}, - publisher = {Zenodo}, - version = {0.1}, - doi = {10.5281/zenodo.5143773}, - url = {https://doi.org/10.5281/zenodo.5143773} -} -``` - -```bibtex -@inproceedings{Radford2021LearningTV, - title={Learning Transferable Visual Models From Natural Language Supervision}, - author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever}, - booktitle={ICML}, - year={2021} -} -``` - -[![DOI](https://zenodo.org/badge/390536799.svg)](https://zenodo.org/badge/latestdoi/390536799) diff --git a/kosmos-g/open_clip/docs/CLIP.png b/kosmos-g/open_clip/docs/CLIP.png deleted file mode 100644 index a1b5ec917..000000000 Binary files a/kosmos-g/open_clip/docs/CLIP.png and /dev/null differ diff --git a/kosmos-g/open_clip/docs/Interacting_with_open_clip.ipynb b/kosmos-g/open_clip/docs/Interacting_with_open_clip.ipynb deleted file mode 100644 index 19361c7cf..000000000 --- a/kosmos-g/open_clip/docs/Interacting_with_open_clip.ipynb +++ /dev/null @@ -1,916 +0,0 @@ -{ - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "YPHN7PJgKOzb" - }, - "source": [ - "# Interacting with open_clip\n", - "\n", - "This is a self-contained notebook that shows how to download and run open_clip models, calculate the similarity between arbitrary image and text inputs, and perform zero-shot image classifications." 
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Preparation for colab" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": {}, - "outputs": [], - "source": [ - "import os\n", - "os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"\"" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "0BpdJkdBssk9", - "outputId": "4d9b51f8-d255-4868-97f6-be0a67dadfae" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Requirement already satisfied: open_clip_torch in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (0.2.0)\n", - "Requirement already satisfied: matplotlib in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (3.5.1)\n", - "Requirement already satisfied: regex in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from open_clip_torch) (2022.3.15)\n", - "Requirement already satisfied: ftfy in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from open_clip_torch) (6.1.1)\n", - "Requirement already satisfied: torch>=1.9 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from open_clip_torch) (1.11.0)\n", - "Requirement already satisfied: webdataset>=0.2.5 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from open_clip_torch) (0.2.5)\n", - "Requirement already satisfied: torchvision in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from open_clip_torch) (0.12.0)\n", - "Requirement already satisfied: pyparsing>=2.2.1 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from matplotlib) (3.0.7)\n", - "Requirement already satisfied: python-dateutil>=2.7 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from matplotlib) (2.8.2)\n", - "Requirement already satisfied: packaging>=20.0 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from matplotlib) (21.3)\n", - "Requirement already satisfied: kiwisolver>=1.0.1 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from matplotlib) (1.4.2)\n", - "Requirement already satisfied: cycler>=0.10 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from matplotlib) (0.11.0)\n", - "Requirement already satisfied: numpy>=1.17 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from matplotlib) (1.22.3)\n", - "Requirement already satisfied: pillow>=6.2.0 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from matplotlib) (9.1.0)\n", - "Requirement already satisfied: fonttools>=4.22.0 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from matplotlib) (4.31.2)\n", - "Requirement already satisfied: six>=1.5 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from python-dateutil>=2.7->matplotlib) (1.16.0)\n", - "Requirement already satisfied: typing-extensions in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from torch>=1.9->open_clip_torch) (4.1.1)\n", - "Requirement already satisfied: pyyaml in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from webdataset>=0.2.5->open_clip_torch) (6.0)\n", - "Requirement already satisfied: braceexpand in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from 
webdataset>=0.2.5->open_clip_torch) (0.1.7)\n", - "Requirement already satisfied: wcwidth>=0.2.5 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from ftfy->open_clip_torch) (0.2.5)\n", - "Requirement already satisfied: requests in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from torchvision->open_clip_torch) (2.27.1)\n", - "Requirement already satisfied: certifi>=2017.4.17 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from requests->torchvision->open_clip_torch) (2021.10.8)\n", - "Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from requests->torchvision->open_clip_torch) (1.26.9)\n", - "Requirement already satisfied: idna<4,>=2.5 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from requests->torchvision->open_clip_torch) (3.3)\n", - "Requirement already satisfied: charset-normalizer~=2.0.0 in /home/rom1504/safety_patricks/laion-main/.env/lib/python3.8/site-packages (from requests->torchvision->open_clip_torch) (2.0.12)\n" - ] - } - ], - "source": [ - "! pip install open_clip_torch matplotlib" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "C1hkDT38hSaP", - "outputId": "70a44964-883d-4fd0-b95a-2c7f2b19aca9" - }, - "outputs": [], - "source": [ - "import numpy as np\n", - "import torch" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "eFxgLV5HAEEw" - }, - "source": [ - "# Loading the model\n", - "\n", - "`clip.available_models()` will list the names of available CLIP models." - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "uLFS29hnhlY4", - "outputId": "11779e1e-8bdd-4167-c18e-d26bdd6b67db" - }, - "outputs": [ - { - "data": { - "text/plain": [ - "[('RN50', 'openai'),\n", - " ('RN50', 'yfcc15m'),\n", - " ('RN50', 'cc12m'),\n", - " ('RN50-quickgelu', 'openai'),\n", - " ('RN50-quickgelu', 'yfcc15m'),\n", - " ('RN50-quickgelu', 'cc12m'),\n", - " ('RN101', 'openai'),\n", - " ('RN101', 'yfcc15m'),\n", - " ('RN101-quickgelu', 'openai'),\n", - " ('RN101-quickgelu', 'yfcc15m'),\n", - " ('RN50x4', 'openai'),\n", - " ('RN50x16', 'openai'),\n", - " ('ViT-B-32', 'openai'),\n", - " ('ViT-B-32', 'laion400m_e31'),\n", - " ('ViT-B-32', 'laion400m_e32'),\n", - " ('ViT-B-32', 'laion400m_avg'),\n", - " ('ViT-B-32-quickgelu', 'openai'),\n", - " ('ViT-B-32-quickgelu', 'laion400m_e31'),\n", - " ('ViT-B-32-quickgelu', 'laion400m_e32'),\n", - " ('ViT-B-32-quickgelu', 'laion400m_avg'),\n", - " ('ViT-B-16', 'openai'),\n", - " ('ViT-L-14', 'openai')]" - ] - }, - "execution_count": 4, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "import open_clip\n", - "open_clip.list_pretrained()" - ] - }, - { - "cell_type": "code", - "execution_count": 5, - "metadata": {}, - "outputs": [], - "source": [ - "model, _, preprocess = open_clip.create_model_and_transforms('ViT-B-32-quickgelu', pretrained='laion400m_e32')\n" - ] - }, - { - "cell_type": "code", - "execution_count": 6, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "IBRVTY9lbGm8", - "outputId": "f06fd2fd-6126-475b-87d0-b10aa3b7da49" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Model parameters: 151,277,313\n", - "Context length: 77\n", - "Vocab size: 49408\n" - ] - } 
- ], - "source": [ - "model.eval()\n", - "context_length = model.context_length\n", - "vocab_size = model.vocab_size\n", - "\n", - "print(\"Model parameters:\", f\"{np.sum([int(np.prod(p.shape)) for p in model.parameters()]):,}\")\n", - "print(\"Context length:\", context_length)\n", - "print(\"Vocab size:\", vocab_size)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "21slhZGCqANb" - }, - "source": [ - "# Image Preprocessing\n", - "\n", - "We resize the input images and center-crop them to conform with the image resolution that the model expects. Before doing so, we will normalize the pixel intensity using the dataset mean and standard deviation.\n", - "\n", - "The second return value from `clip.load()` contains a torchvision `Transform` that performs this preprocessing.\n", - "\n" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "d6cpiIFHp9N6", - "outputId": "880cb98e-1e5e-430e-8b59-4bf35fa554f9" - }, - "outputs": [ - { - "data": { - "text/plain": [ - "Compose(\n", - " Resize(size=224, interpolation=bicubic, max_size=None, antialias=None)\n", - " CenterCrop(size=(224, 224))\n", - " <function _convert_to_rgb at 0x7fdb99768310>\n", - " ToTensor()\n", - " Normalize(mean=(0.48145466, 0.4578275, 0.40821073), std=(0.26862954, 0.26130258, 0.27577711))\n", - ")" - ] - }, - "execution_count": 7, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "preprocess" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "xwSB5jZki3Cj" - }, - "source": [ - "# Text Preprocessing\n", - "\n", - "We use a case-insensitive tokenizer, which can be invoked using `tokenizer.tokenize()`. By default, the outputs are padded to become 77 tokens long, which is what the CLIP models expects." - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "metadata": {}, - "outputs": [], - "source": [ - "from open_clip import tokenizer" - ] - }, - { - "cell_type": "code", - "execution_count": 9, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "qGom156-i2kL", - "outputId": "050b0ce1-caba-47e1-f4ac-dba994599718" - }, - "outputs": [ - { - "data": { - "text/plain": [ - "tensor([[49406, 3306, 1002, 256, 49407, 0, 0, 0, 0, 0,\n", - " 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n", - " 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n", - " 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n", - " 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n", - " 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n", - " 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n", - " 0, 0, 0, 0, 0, 0, 0]])" - ] - }, - "execution_count": 9, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "tokenizer.tokenize(\"Hello World!\")" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "4W8ARJVqBJXs" - }, - "source": [ - "# Setting up input images and texts\n", - "\n", - "We are going to feed 8 example images and their textual descriptions to the model, and compare the similarity between the corresponding features.\n", - "\n", - "The tokenizer is case-insensitive, and we can freely give any suitable textual descriptions." 
- ] - }, - { - "cell_type": "code", - "execution_count": 10, - "metadata": { - "id": "tMc1AXzBlhzm" - }, - "outputs": [], - "source": [ - "import os\n", - "import skimage\n", - "import IPython.display\n", - "import matplotlib.pyplot as plt\n", - "from PIL import Image\n", - "import numpy as np\n", - "\n", - "from collections import OrderedDict\n", - "import torch\n", - "\n", - "%matplotlib inline\n", - "%config InlineBackend.figure_format = 'retina'\n", - "\n", - "# images in skimage to use and their textual descriptions\n", - "descriptions = {\n", - " \"page\": \"a page of text about segmentation\",\n", - " \"chelsea\": \"a facial photo of a tabby cat\",\n", - " \"astronaut\": \"a portrait of an astronaut with the American flag\",\n", - " \"rocket\": \"a rocket standing on a launchpad\",\n", - " \"motorcycle_right\": \"a red motorcycle standing in a garage\",\n", - " \"camera\": \"a person looking at a camera on a tripod\",\n", - " \"horse\": \"a black-and-white silhouette of a horse\", \n", - " \"coffee\": \"a cup of coffee on a saucer\"\n", - "}" - ] - }, - { - "cell_type": "code", - "execution_count": 11, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 368 - }, - "id": "NSSrLY185jSf", - "outputId": "06451963-5ecb-4ddc-d0a8-24e9b110af7d" - }, - "outputs": [ - { - "data": { - "image/png": "iVBORw0KGgoAAAANSUhEUgAACH4AAAK+CAYAAADel0tQAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/YYfK9AAAACXBIWXMAABYlAAAWJQFJUiTwAAEAAElEQVR4nOy9d7wcSXXo/z3VPTP3Kq2kzcCCyCwZk+MKbBOMbTDBgMFm/Z7B/vkZgwGDwdisI/DIwZiMwGADJppggg3CRBMeCyYvsGKB3WV3laV778x01/n9carvtEYzc2fmJkl7vvqM5k53pa6urjp1+tQpUVUcx3Ecx3Ecx3Ecx3Ecx3Ecx3Ecx3Ecx3Gck4+w3gVwHMdxHMdxHMdxHMdxHMdxHMdxHMdxHMdxpsMNPxzHcRzHcRzHcRzHcRzHcRzHcRzHcRzHcU5S3PDDcRzHcRzHcRzHcRzHcRzHcRzHcRzHcRznJMUNPxzHcRzHcRzHcRzHcRzHcRzHcRzHcRzHcU5S3PDDcRzHcRzHcRzHcRzHcRzHcRzHcRzHcRznJMUNPxzHcRzHcRzHcRzHcRzHcRzHcRzHcRzHcU5S3PDDcRzHcRzHcRzHcRzHcRzHcRzHcRzHcRznJMUNPxzHcRzHcRzHcRzHcRzHcRzHcRzHcRzHcU5S3PDDcRzHcRzHcRzHcRzHcRzHcRzHcRzHcRznJMUNPxzHcRzHcRzHcRzHcRzHcRzHcRzHcRzHcU5S3PDDcRzHcRzHcRzHcRzHcRzHcRzHcRzHcRznJMUNPxzHcRzHcRzHcRzHcRzHcRzHcRzHcRzHcU5S3PDDcRzHcRzHcRzHcRzHcRzHcRzHcRzHcRznJMUNP5xrLSKyR0RURHaud1kcx3Ecx3Ecx3Ecx3Ecx3Ecx3EcZzURkc0i8hIR+aGIdNJ7sj19YR4jIl8QkcPpvL9Lc5yTgHy9C+A4juM4juM4zuqQJuU7gYtV9f3rWZaTDRF5CrAV2KWqe9a1MI7jOI7jOI7jOI7jOCvDe4FfSn8fAvYBV1cnReSxwNvSzy7w8/R3Z60K6DjOdLjHD8dxHMdxHMc5ddkJPBd46PoW46TkKVjd7VjfYjiO4ziO4ziO4ziO4ywfEbkVZvTRBe6uqqep6jmqeudasKek75cCG9L5c1T182tcXMdxJsQNPxzHcRzHcRzHcRzHcRzHcRzHcRzHcU5tbpW+v6GqX1wizJtUtViDMjmOs0K44YfjOI7jOI7jOI7jOI7jOI7jOI7jOM6pzWz6PrLMMI7jnIC44YfjACKyXUReIiKXikhbRH4mIq8XkXNHxLmviLxXRK4UkU76fp+I3G9EHE2fHSJyvoi8RUR+IiJdEXl/LdxZIvJCEfmmiBwVkYUU7vMi8tcicoMh6Z8pIs8Tkf8RkSMp7jdF5O9EZPsU9bKjKnP6fU8R+ZCIXC0icyJysYj8kYgM7EtEZE+Kv3OaOk5pPF5E/jtdyz4R+ZSI/Gp/+pNem+M4juOsJyJyhoj8oYh8QES+KyKH01j37TReXmdIvLFkhGoMx7YqAXh8TQ5ZlEfqYWvj/d1E5N0icoWIlCLysr4yLFcGun6SAX6aZIJLReRFIrJlhetqZ8pzz4hyXZjC7K4duyjVRSVvfaqv3nYPSmtI+rtTnAtFZJuIvFREfpTu209F5HXDZKGqHCKyK/2uZKLDInIoyUS/vET+txSRd4rIVSIyn+rvr0Rkpj99x3EcxzkRENOVvEZEvi+mdzggpuN4hYjcsRauJSKPFJG3isjXReSaNL7+WETeXg87II+6ruLclN9P0lj5HRH5E6npOVI+n0llOSQiHxaRWy9xHRPrZ/rKdV0ReXWSG9oicnEt3PVE5Oki8lERuSTV0yER+Voa57dOUue1dBdlAxEJqR6+nsq+V0T+TUTuMiTuMXKX9PRH16R6/bqY/khG5F/JSnvSNf9ERN4gIuf1p+84juM4pyLjykG18HcQkbelMbOdxt2PicjDB4StdB270qEL5FhdR6Uf0Vq0S2vndw1I89fEdDWVfugqEfmgiDxgietsJrngM2LvfNpJhnuTiJw/UaX10qzLUddPMsRPxOTDSu902pC4u1Lci0QkE5GnJNllLpXvQyJypyXyv6eYjLgvyU5fT+mEevrTXJvjTISq+sc/18oPsAdQ4HG1v48CC+lvBS4Ftg2I+7e1MBHYn76rY88bkmd1/rdT
[... remainder of the base64-encoded matplotlib PNG cell output truncated ...]
e9Drp/6VZDyCGSv+ea3dP2dEne5nQv3marX7MdLdxdJ6tp21tlSkslTzt5laHVXhjnvG6MnKBzCd9a/W7tsFwI/o9W+Nvrg3pmcgsptkMJSu/4n05gVvGJDvY+k9Vy/k2DHgLVjbrl627lxO3+Af/6zGh9WTF7djsoam5+8h9OYzDeCemIx9vVqcC6vncES6FzFAPuJYueIAJtvdMJ3biC2UqfreZ094LVPLTyl+Va4dU9yfaepxVcvLEv06y3gnwirp07kWjb1MoX9M56fVS49sDynMbgbrw1r0jO+vxt4tV7Jghi1+fynH61grGfQSTL6t4mzG5oiVHucxffEupCcDDnwXNOk9PJE/q5u43bxvpQq9YMK4F9HraP58wPkZehOBN/Sd21HFnaLMb0xxn9t3vAH8NJ375wnSO6bxpwb4qXTs+0z4knSCfO+d8rh0ibp97IDz16FnVXWfvnPVSpZr6FN8pPOPq6W9a4pyV4rj368dC5gS5hD2AkeprSjEJo6V1WC/4nQPvQHj9AH5VdbkP1rq3vWdW3LQpmd59rwh5+sD8SMmqKNqNcxxL0rGbYfjXMsKtZP91AaM2vn719K+X9+53SyhRGW4grdamT70GaWn2Ov30DG0jmphdlTlXqK+P9vfjvvOb6f3Eu5OE9zH5dTrWGWf9sOQvnOMeDtrZX79kDCVwvUv+47fuxb3AQPinU3vheZf9527sBb374fkexd6CsWxjLOwlwmHMaXF7YaEuTu9FVPNcdI9GT+snwzw2+l4B7j1gLi3ojdm9L+cref7xDHKt2tEmB21tD7LlB5eGD2eV+1494Bzu1miL10iX8GEaAX+ZUiYf67K1n9949TRMtrWdmzlSeT4sWvqPmUFylX10W/uO/6MdPzfJ0hrsf0sEa66zwPbe1+5phk7dtXO3WtA3DvWzk9q+DewvtK5PQwY5/vCLCW/TN32WHl5dODzOMV9nqP2UqJ2/uHp/AITjitj1nVVp8rkMuHvpeP/RZ+ivRamenH4qinu1TZ6njjqBpoNeqtW7jgkbr19XzCijSrwyQHnN9JTKqzYdfeV6/4jrr0Kd1Hf8VvRU14NHcsmrOehYzrHjncfZ4BiGPhgOv+mKfIe2UbHacND4lXywqcGnKv6gI8NuZ5X1675wpWo47X8jLqfY8St9weTymi/lI5/l6TIHRD/z1KYD43Id2C7xowTFfjOBNezUvOJQXUxi8krCvzOlPW8q+/4DnqK6+PmmthYVK12f+uQNr8cGXlX+t3AVlwqIzzYMp7hx0T9Brago+rjfmtAvNPozbV1hZ+dG2IvMo5SM4RI53amPPesZJ5D2uulE8abSp85RrpDdQD0+uZ99C26Sef/s9YGjpPJa9d6nGzDMnRtHPsCZ2IPrCmNoXORMeJW1/1ZBq8srrzHHaZmON1X9on1m8u8z0Pb/RhxB/YBfWGqZ2dk+xz1jNGTlSOD5yw3pyerPq7vXNWOfzDo+uh5gY0caxwk9PrbXQPiCcd6uNq5kvfFP/5Z7Q/Lkxf/b4p3NenF/xhxLmSIfqsW5qJBzxzHyhXDDBqquAcn6ctYhvyUzlfl2jHFPZimHle1vEv160z5ToRV1KdzLRp7mUL/OEaao/TSI9tDCrObwfqwP6Qn643lWQgzyI2YHDbQoy+9RYzf7Dt+Ya19D3wXdCp9pt7XahxUtY0JOWAWaNMwB7xsQNoLwIvTz4cvZ3+nPj6YvvvL+4uY65kSsxKcGBE5HetodmIvHe6tqpdNV8zRqOpnMAvHHTJ8P7bLsJc2/XEvxyzYwazl61R7Ib9eVa8ZkObbMcvBafmv9H1B7dhtMKXuZzFvF/3n745NZH+mqj8cku7rVHXvgOPvT983XGp/sUkQkQ2YxVkEXjIojKp2sBXgYNaX43IofU+1f9gULKedvEFVrxwQ9+PYShwwBd1K8fj0/eIRYaprmaTOx0ZEboz1HwewCeRxqOo+zABl0nKsV72Ow7C+cxKeN+T4B9L3sHb2FVX9WH8kVf059lIFhtdHyZBnlN6z1sA8HIzDwzGL4f9Q1a8PCqCqXyC5gcNemp6SrKMMULWLD6jqNwfE/Ra9vndYu9iLWc2vFC9W1ThNxDHH89Xg9tjqOTCjukH8VfregRlKrQmpD/08pky7x4igk/Ypy2VYP1j1JWfJMvaUXYJ/H9TeEysxdnxGVT87IO5XsRcJMHl9rsS4sVqsljy6XN6tqj8YcLzyJNii99yuBtPIhJVs9nJV7Q5J9+3pexrZ7FHYdX9GVX9SK1MXW81TL8MwvqCqnx5w/D9qfx/Xn6jqUcwDH6zOdX8jPZ+T8ttY//hdVX3dFPGPY4Ix/fmatCp9vD99r3S/uxyqPuhu9T2LReQMbKUU2MqtQdfzgtUu3GqyjjJa9Vy8XlUPDkm3ei7uO2Qv6VEyWjXenpbm4+OwEvOJBQbXxTxmPAQr1/Z/AzNEvBLbeqs/zznsJQHAw/rqcCVkZNJ+4+/DlKo/wfRaA+c9YzJpv/EbWB/3E8z45BhS23pN//GVQFUvxV6CbcBk5TVlGfOCZeszhzCOLPcaVT0w4Hg1xnYYPB//HPZsHSPbrKCu7a3p+Z6YCeYixyAi24H7pp/PU9VyQLAXYNe9CfiVIUmtqX5zjdv9C5cZf9ic5Xv02kTVF5LGqIenny9NfWg/bwB+ht3vR9SO3x7zFgKD5UTFXiY6zknJMuXF30nfL1LVn61cqcbixans/bwE61+3YItfxmVF5KcpmaYe1628y3wnsir69Gvh2Lvi+sdV1EtX7fvNqvqNCeII8M66/qePd2PGnrcSkUHvT0e9CzplWJGbLyK3EJFXicg3ROSQiEQRURFRzEUP2EqwafhKUqoNolLQbcWsoMYt73YR+QsR+byI7BWRolbe9w0p793S99enHLCuk8p7Z0xBuHPaSUYdEXmkiLxfRC4TkfnqOtK1bK3lPYivDJlggwm1YB1plVcTswqEXt0fQ0rvvwadG5Mq3boi/YLauWr/7WHnh/HlIcfr93LreEUciztiqwwE+B8RuXLQB3OzDLaX2rh8JH2/VUSeLyJ3E5HGCpa9n2naSaWc+dSIdKuXJr+wvOIt5nsecL308yMj6vzlKcwkdT4J1cR/E/DTEeV41CTlWK967SvDNH3nuOxT1R8NOXdcO0tU1zhOfdxsiAD2gyFGbGDeDi7BnuUviMifpPFulKFhdf/vN+zep/tf3ffVaodrxgkoA0zSLoY9J19R1WKsEo7HF5YKsMzxfDWo6ubqNDE8jqRA+1lf+BVDRO4iIm8Ske+KyJG+OnlICjasTqbpU8Yp02zqC3aLyFUi0q2V6WtDyvSfmFL7F4DdIvK4VTDiGdjGVnDsGCZHwYj6nLK+TgRWSx5dLgPvQzIsuCr9nLhdT8CkMmFOzyjstSPGxPemMNOMiRem7+MMUui9QH7MErLy/ww5flXt72GGVdV8bjWue8mxYwjVvPUjI0MNYAXG9KXmXKvZPo9DRHIR+d8i8lERuUJE2rXr2Z+CzfSV6/bpO2Iv9o5DVX+MGUKd0JyAMlolJz9
nxHNRtaENDDa8HiWj/Te2AvBcTHZ/oogspSdaifnEt0fUxUq3/aq8nxmiuIZeeTdiq9z74y5HHtiCuX5/MDZPupeqfn9kiZdm0n7jDun7cyPGpM8sp0Ai8ssi8i8i8kMRmeuTQW+Xgq2a/LIK84Kp9ZkroANYaozdo6pH+k8mo/lqnl5vAyulaxtnfracucgg7pDKrQzXqR7EtsKCyeXyZek317vdY6u8l2NEBraqeBhVndfr9UaYlyAY0jemtlilW49b/f3zNC8exOcxd/uOc8Ky0vKiiOzAPKbBFPORFWD3oIOqeoieDmIS/dVKyE8Ts4x6XJfyJpbzTmS19OnXtrF3av3jKsifo/Jq0DPemaR9V+3k8SPayE+xRVkwuJ2Mehd0ypAvNwEReTTwVnqVGTGXSZVl3SZswjmtxdOoSUn93JnYXlcjEZFbYp3b2bXDlRshxSYQ2zi+vFX4aRU8T0jf+4EHjljhgoh8mcGN8kWq+qIUJsf2s/6N2vk2NjGqFABnYsY9w+r+8IjyLqTvupJ0O7a/EphLpmEcd8/EXsoP6yAfpqqVUu3zmBuj64jITdKqxkqRvltV94rIt4DbiMj2ZCVYnR9lcDLwWlV1QXrvcFfSeKKyJhOObWvDGHdFEtgKjZtjHd0z02dBRL6ArWzclVYXrRTTtJPKqGzU81utED5zynL1U7fgO2uM8JPU+TTlyFnZe79e9Qosq+8cl0nbGfSucZz6EOAMzEVanauHRVTVUkR+C7PevRFmDfoSYJ+IfBL4J+CDfQrH6v5vYLx7u1rtcE04QWWASdrF6SIiA5TGQ9vFlAxNb4XG89VgnHoEq8vrsvJ9ztOxFavVIF1iMlQn/T4Ne1m3UjLOOGU6F1Mi3Kx2+GgqV8RkpDP6y6Sql4jI/we8CnOReO+U3h7s5cnrVPVrLI9hbWylxo6J63Pa+jpBWC15dLmseLte5fy3Y/IBjOc5a3aSwojIzYG7YvfqXwcE+RzmifAG2MqdDwwIA7Zn7SAWX6qq6lJhVuO6px2Lppq3rsSYrqrD2shatM9jEJFNmLeF+mrseaxeKy9cVV1tpPeC8Yz0fXCJOdXlwPVXprQrzwkqo1Vy8tYx8xgkJ4+S3feLyG9j20DcFngtQFIAfhzbMqRf4bsS84m17JsnKW89/KRxh8nIlbzaxfRayzaAmqLfqJ7RYf0yjNZXjUREXgE8qXaoixkUVd6btqcyrbj8sorzgmnHhZXQASw1fo66j4PG2JXStY0cY1dgLjKI6hk8OMjYpcZUcvly9Jvr2e5r7NUpvWTWGGdsGtQvLhV30D2p/h7a36hqW0SuAc4ZkbbjrBurJC/W++b1MJSetB9YipWQn6Zh2npcr/LC8t6JrJY+/Vo19k6jf1wnvfR2erYJk7Tvqp1sTp+lmGg+eSqxLI8fInIm8HqsAb4TuBMwo6rbVPUcVT0HeGkVfFklXTnejHU8/w94ILBZVbeo6tmpvI9M4Va6vB/BOohtwKtltKudM1MZ+z+bamGegD2Mc8AfY3sazajqmbW6r4TPE6HuMwZf09n0FKOVa9LKwu4CsZ7zPsCR2vFPY9d0bxGZwRS+1fETher+HlRVGeOzc9yEk1upe2HusF6BWas2MbdVrwa+KSLXG57CmjKzhnnVn6ltY9T5jlUux9fHvPcXTpHHWtZrxXr1neOwnPoYtlIOAFX9CrZ/3OOwydCPMOHkEdgLpA/LsS6Uq/v/8jHv/65llH1dOQlkgFVrF5MyYkUmnPjj+Zr3NyJyK8zVomCTlVthe7Rur9VJ5ZpyLevkZZgRw48wN5TbVXWTqp6VynS3YRFV9U3YquenYH3HXmyLnD8Avioiz15m2cZps2t9L1/GlPW13pxC8uh6U5fN7jDOuDhh+tW2EQ1gb31FSlqVEjGjj3rYtWClrntFx6JRnARj+jT8BWb0cQ12/89W1Q21Pui6tbAnyzWNxQl8P6tn4zfGlJP3DEhjKdn9I9h4+0RMgXk59sLtd7BVb8O2P1qP+dVyWE55lxP3v7A6bQBvENv25ZRBRB6EKeBL4CJsi5GWqp5ee3b+uwq+CkU40eYFJ6IOYKV0bUP7kjWYi7SmiLNqnADtvmLN5J4hnGzjgOMsixNYXjxROdn6iPUo73Leiay2Pv1aM/ZOoX880eTPUVTt5E/GbCe7B6Sx3vLGmrDcrV4ehBkjfBv4LVX9qh6/h/E41l2jGOU+pn5uSUsdEbk+5na3BH5dVT+mx1t6DStv5cb3BkPOL8WXsZVmR4HfAt4oMni7AFXdMaShXlQLVk2w/kZVX6mq9VUdpBeRZ7Cy7KP3YIx7XwBQ1T0TPIB199q3xK7jc9pz6Vo/f1es4/65qn538ktaNar2skVEThsZcgrU+A9VfbKq/gJWR7+P3aMb0RPM1oN99FbRjVoFVxmn9D+71X0eJaAMqtP61knrufquKsdKb+Gx3HqdmmX2natJdY3j1IfSW8k5Eao6r6pvV9XHq+qNsWfseSnNB2GCU0V1/0/YFaAryIkqA0zSLvaqrphl+7Ssx3g+DlU9LtWXrXifgxkJBOBjqvokVf22Hm88s6Z9jtiWKZVL58eq6ntVdX9fsJFlUtWfq+rLVfWhmJHvXTAX2QL8jYjcdoWLDes0dqxAfY2UBVZDthrAqSCPrjd76c0dVnRcTEb0vz1BlAeLyDjeN1aCVbvuMZlm3roWY/paU42vT1LVt6rqVX3nh11PJS+etsRL7UH7BZ8onKgy2prIyap6UFVfr6qPUtXrYi9tX59OP0FEHjygfKs6n1hBJilvPfykcYfJyJcCv4jdy/sC7xeRtVakV/dg1DM47fNZ9RtvUNW/UtUfDqiH1ewLV2teMPG4cALrAFZV15ZYrblI9QzOpheuw1iN+dUo1rvdryTjjE2D+kWYfK5U/T00zzQnWo+5vOOMw2rJi3X9/CTzkWnfB/QzaT+wFOulY5y2HtdTJ7qcdyKrNU+4Vo69E+oflyN/Tvvc7qvFnaR9X5veuyyL5Rp+VA/EN3SAO7Zk2HC/ZeZxJxEZ5rqncql8AJuALsXiA6zD97X8pSHHv5i+bysi1x0SZiSq+lng1zC3iBcCrxlm/DEG1bUMcw1+T1bYsk9VO8C30s/7DApTWxG5HCoX2RcweL/0QedX0632IKqOeNj9+wrWeQm2MmJ1C6O6X1VfB1QWexeMCr/KZenQ2wf9viOCVn3D/+s7fiB9D/RaIiI3YYB7YFW9lF7n/6AxitpP1YeNeiYX+7kRz261V+x2EbnrkDATswL1uhyW03euJtU1XjDiflT18X0dvvf2RKjqpar6bMwiHo593qr7v/NUWwE3gBNVBqjaxWo+J+P0F+OyWuP5cstY1c1GEbnLoAAicjN6K6ZXo88ZWCcispG19xZxBr0VAsPu1dj9YDLg/DI2wfopJpPfqxZknPFmnHzWa+xYbn0dSN/DPJjdeUTclXo+10IeXZH7vExWsj87hqQ4/Er6OY1sNor7Ye1jAVPgbRvx+TrmHe8xK1yGgazydY9DNW+dJO+1GNPXmqXG12F90MXpO3
DsNjGLpBei0y4IWQtOVBmtkpPX9LlIL22fSO/ZqMvu6zKfWAZVee86ov6r8h4Fvjcg7rLkgWTg+EuYAcb9gfekl5trRfVM33PEPbv3lGkvJYPeAFuROYiVGE9Xa14wjT7zRNUBrIWubbXmIl+jp0sc+BwmY5Zqz/uVnF+NYjntfilWTc4cwihdaHWuXq8/ojfvGHZPArBzQNzq77PTvHgQ96Dnyt5xTjRWRV5U89h2Zfr5KxNEPdBXrkGM0gNUDOwHRGQz8Avp5yT961roGI9jGfW4LuVNLOedyGrp06+NY+8xjKF/XI78eaAvjWNIMtP5A8rUpedNd5L2XbWTVX/ferKzXMOPg+n71kMmXE8AbrzMPDYCT+4/mFYVPDX9fPeY1mlVec8WkbMGpHkbzBvHIP4T2xsrA144Rl4DUdVPAQ/F9kl6IvDyKZOqruU2/SfSvkx/O2W6S1Htn/0EEdk+4PyjMddBy+Gz2KqC62MGMmB7xANmsYYpMG4P/Ho6vNZutQ+l762DTqrtU/ue9POvk3AxEBHJxfagXhIRCen+DqPah3q93VdVbi8vFJHjVtuIyP2Bu6ef7+o7/T/p+9cZzJ+NyHdX+n76KIWGGFv7Do+8p31hhoZLirBKsfJ/RWTo/m4iMjvhCqnl1OtyWE7fuZpU9XEreivLFxGRs+l545i4PsZQYA563v4VU7JuA/5yifS3TVqmE4wTVQao2sWDROQOA+LeCtuqB6Z/TsbpL8Zltcbz5ZbxYuAH6e9h25BclL73AF+aMp9BDK2TxJ8z3l6OK8lhepPFQffqXI7dm7N+bmhfklYPVqtq6n3JkuPNBKzH2DF1fSUqWWBQ3y7AM0fEXanncy3k0ZW8z9Oykv3ZIHal7wtF5HajAk44LlZbt3xMVa9Q1QPDPvRk8rXc7mVX+l7p6x6Hf8Kev1uIyO+PGWctxvS1ZtT4ugkbS45DVa8BPpN+Pn1I2n+67NKtLieqjLYrfT9AREYq66Z5LqaU3Vd1PrEKvBd7kXo6pk86hmQMUrXP9/Z5KVgxGVlVv4ltPbsfeDDwjiX0FCvJ+7E+7jzgN/tPisgWjvXIOAlLyaB/z/AX2NV4uhwvFKs1L5hGn3lC6gBWS9fWx6rMRVR1H/Cp9POZMngL8GdiL1eOYFuGrwXLafdLsdpyZj8XiMhxRpsiclN6/Vul1yaNUe9NP588xKDu97DFDlqPy7Hz5ePmJmn8HaW/dJz1ZjXlxX9K30+bwOCw0gFcV0Tu2H9SRO6NvYBeiqcNkQmfgvWvh4CPj1kmWBsd4zCmqcd1K+8y34msij792jb2Tql/XI78WT239xfbArmfP2H4e8q3pu8LZXwPyG/FxuPzl9J1nALvXZbFcg0//gOr6FsDr6hepIrIFhH5U+AfMFe3y+Eg5n7myZW1l4jcCNuf6Hxsldfzx0zrO5hVkwDvFPMcgIg0RORhwCewB/w4khXS09LPx4jIu0TkFtV5EdkuIk8QkVcsVQhV/TjmOrADPElEXjRm+et8In3/hYg8RMzlDqlMH8Tc96zGapR/AK7CVnJ+rHooUx0+DnOhenBE/CVR1UPYyjwwS86j9FbNVXwaa793rP1eSyrPJ79T1f0A/gxzW3Qz4PMi8sBqwEuGBzcVkacC38X20RuHLcAPROTPReQ2tfseROQXgb9L4T42xTWtJK8CrgBmgY+KyJ3AXEOJyMOBd6Rw/6Gqn+yL+26sX7mNiLy81q+clZ6v38b2HBvE8zGL/TOwOv9NqVmJisj1ReSJmPXmQ/viVvf0gYNejAGklwfVnma/O6QMYPuhtTHvN/8pIveqBItUB7cRkb9MZZ3EDe1y6nU5TN13riaq+hngo+nnm0TkEbVn4o6YIL8N8wQzjZHdr4jIF1LfvriqU0Q2iMgTgMemQ4vPm6ruBZ6Vfv6ZiLxeaqs/kmB7bxH5R+DzU5TpROJElQHeCXwj/f1+EfmlahKb+smPYHuYfgt4+5TlqvqLeyUl0nJYrfG8KuPDZAo3yEkJ9pz08yEi8kpJ2ySIyOmpP65Wzz9n0AqRZVDVyYNF5FmVAk5EzhSRF2LP2HLb1kQkJXM1gX2TiNw+lakafz/N8EnZ34vIu0XkoVIzmhWRs1M93hB7lqrrnmS8GYc1HzuWWV/QU0A8WESeKbZKABHZAfwLPflvECvyfK6FPLrC93laqvp6zJCJ+nJ5I9YWZoBPpjF1S3VSRM4RkceKyKcZ8BJ5EGIveR6Wfr53VNi+MHcSkVuOX/RlseLXPS6q+i3gtennP4jIRVJ7aSciN0zH6i9G12JMX2uqPvUlIrLozUFE7oy9BB219c9fp+8HisgbqvpL9fFXwP9hmXPeVeaElNFU9aPY8yjA+0TkT6XmcllMp/JQEfk34CVTlOn/E5GPichv1edzIrJVbC/rnelQXXZf7fnEiqKqPwZel34+X0SeKElpnuYcH8ZWB85xvJJ2RWVkVb0Y8/hxENsX/O0yXDeyYqjqD+mV7w3pfucAqY//d2CYN5SlqPqN3xeR/yVJeS6mR3gLJvv2b11XcQmmTD8tyVfLyX9F5wVT6jNPSB1AYjV0bXVWcy7yF5jx1i9gBlPXS2lvSv1UZSjw/CSLrgXLafdLsaSebYU5BLxXRH6l1r/dG+sXWqk8/S86/x57rq4DfFhEbp7itcT0PtVz8cbU/wCL8+WL0s//JSIvqI23ZwNvwlbVD9NfOs56s5ry4gswg8MzgM+IyK/X+pZGks0X+0BYlHGqBUW7xAwMq/CPxAw/x+mLro/JmTtS/A0i8jR6z+sLVHWS53ItdIzDmLge17m8MOU7kVXWp1+bxt6J9Y8sT/78IGZcfybwVunNm08TkT/Hnrth8+Y3YkaULayt/HZN5spE5E6pHSx6j1HVbwMvTT9fLSLPq7d/EdksIvcXkbdxrLHmtQ9VXdYHm5Br7bMfWx2n2AT6b9PfuyZM96IU7y2YckAxQ4n9tbwK4NED4u6owgw49xu18ikmFLbT3z8GHpf+3jOkXE/ti3+4r0y7+8LvSscvGpDWQ7GJoQJ/N2H9bMcsi6t8O9hDVNXLhdgKXAV2DqnbofdkiXJfgAmuVd4HMMWOYh3v86a55yPa1ccHnP+t2vlrABmSzsA66AtTpbNjgjr43Vq8+dR29gAv6gt3Z2yArt+na2ptrvpcMGa9bO2L18GEsKJ27IfA9Sa4lmHXv9x2chdsMl5/1uZrv78OnDXG/a/6lbhU205xb4LtT1jvJ67pa7MKPL4v3hmpLhV7xq9I+ezpC/dXtTSOVGGAp/SFexD2bFRhF1I5On3luMGEz8ZU9cqIfnHMfJfVd45Id+dS8dI9P65/TefOpOe2rXoeD9V+7wPuPkmatTAP7btXcym9WDv2YSAfEPc5feGOpLj1Orx0mntxIn04AWWAFP8m9PoJxQTUo7XfPwZuNiLfkeXFJknVGBwxg8g96XO9SZ45ljeej3o2bkHvGe1iY9Ee4LMT3ou/rZWtHNCOn7fEPZzo3tfiv6eWR+x79t7AkPGHZfYpS5Tprhw7lhyp/d6LrRQ+7p4DL
6vF0XR/D/Ude/aA/JYcbzAPFApcuETZpx07BtZzX5iBZZi2voa0gZLe8z+HvWiqzu2Y9Pmc4J6vlDw69D6t1H1mDJl3SLz71fJvAz9Jab1jkud5VFsBzsI8qNTv5950vfXn4LljlrmSwzvA1jHjfDfFecG47Zsx+vHVuO6lyjVm3i1M6VfPZ39f3hf1xZlqTB+znnYyhZw4Ttsedh64EbZHc3U987XrH9mPpPjPrZ2vxqFqzvVCzNBLgcdMek1r8Zn2fo6R7kUsT0bbiO0vXa/b/Rw/Lr55SL5Dy4ut5KyncaSvXAq8dkC81ZxPLFnuSeNhRg0fr5Wvv/4XgIcMSXfFZWRsy4uqvt4KhNq5XQzub3ZUeY6og50M6TcwvUj9ni3Qm3sfxpTlCrQnrPcm5kK63pbrdfsXjB7T31ILe4DemP6IMfOfel4wZvqT6jOn1gEsVU7Ge36GpsGUurZx648p5yJj3offr9Vr//iiwNuAbNI6TWGqNHasVbtfIu0l9WyMKSOMClcr39PoPUNz9LwQKjYnuOWQtKtt2auw+zlWd/cfwMYhcV/VV3f1tvLH47Y5//hnPT6skryY0r4NNrfs76e7DOmrOF6HcJhevz7WfARbbF3lsb8vv/czQIc7xrVMJT+luBP3yytQj6tWXsbTEU39ToRV0qdzLRl7mUL/yDLlT2ysG9aP/OWo68A8+P1PXz30y3M7++JkwKsHXOeBvrbzqb54F7KE7HkqfZbr8QNVfSrmYvJr6YZk6e+nYC4fi+Vmge0/9FTM4ryJNZ4PAfdQ1XeMiDuovO/DFJyfwAaPBtbZvQi4A2bRPir+S1K4N2MNvpHK+A1sBcifTFCW92MT0hJ4tog8d4K4+7AJ9j/WyjyPDWAXqOqucdOaFFX9NFYH78SUaS2sLi7C6ra9Atl8esjfg459RtPTu1ao6psxl2dfwtr4edg+z2f0hfsy9vLtmZhRzBFMSTGHrRp9BXa/xl0hegj4VawT/xJW/5uxwfvLmMvJ26vqyHa8Fqjql4BbYlZ438eelQK77j8F7qqqVw2J/jTgD7EXUZVR0ceA+y3VtlX1B1j7/EPMldd+zN1qgT2nr8P6prf1xbsGuC+mwLwaUwDegOP37/5r7H5+A1v9UoXZ2pfev2MrUP4W8zDSTmEOYW3h+cAd1Syax2aZ9To1y+07VwtVvRrbouDpWB10sXHiEuw5uZWqfmFoAqP5JOZh5i2YEDKHPW97sXr4HeDXVPW4cU5V/xa4HdbeLsFWhG/EFB0fA57B9HtPnzCcqDJA6gduhz2v36yd+ibwN8BtVfX7UxfKVs39IuZ28WfYStCqL5jIzfVqjedqLhZ/GZscHwTOSeUbtV/qoHSeg13rBzDhexP2DPwb8Euq+qwR0ZfDozCr++9gz7UAn8OM9n5vlfIciar+N9bfvB9rhw1MgfhabLuPrw+J+lJsIvQBrN8WTHb6CSZL3UdV/35AvLHGmzHLvuZjxzLqq+IxmFzzvVTWLqaEv5ua97xh+a7Y88nayKMrdp+nQc3Ly29g1zKPubK+AdZnrFQeV2GG44/FVhhV8iuYQcZbMXf943pxfHz6/pSa15RxqNzCP24tVqTDqlz3JHm3VfVRmIHVBzFvBRsxGe6L2LP1+r44qz2mrymq+iPM6O1tWN+TYcqgtwN3HtWPpPh/hdXff2FzrRybbz1OVf+U3nYOB1ah+MvmBJbRjqrqb2Bz2vdiXo820DPaexdm3DVqO7Bh/DM2R38nPflhEyZ//xvw66p6nEvgVZ5PrDhqq1QfhG0/8BlsjrIBm5u9AbiNqn5gSNwVl5FV9YvY3txHsbnT66uVpatF6vvviZX5B9j4uYB5BbsLdv9hwudTVTvAL9HzJFotPvkENu/7myWS+ANsIdR3MVmvGtPH2m5ktfV8k+ozT1QdQCrbSuva+lm1uYiqvhYzXPlnrH/ahM3XPgE8UlUfp8du07SqrEC7H5X2uHq2lWIv1ge8DJN9mtg483pMV/rtIeX8IPZy9fXYs7EBa0ufxcbSB6jqwNXOqvpHmBHUf2PjrWBy9a+q6pJewR1nPVlNeVFV/wfbSu85WL88j81HLsPGtcfQN44kHcK9sPnLAUz+/j6msxirPKr6Hqzf+TD2zq3AdA9PAh42SIc7RpqrqmNcIu9p6nHdypvyn/qdyGrp069FY+/E+sflyp9prHsUpmeYw+7Z54DfUNW/XiLuTzDvbH+MjbmH6c3fPobNd77UF6dU1T/E+oq3YbJpC/O2ehk27/sjelsaXSuRNX5f7jiO4ziOMxIRuQhbafsWVb1wfUvjOI7jOI7TQ2z7qb2YgumGqrpnfUu0driM5pwMiMj/xoxgPq2qO9e5OI7jrDIishsztv3d1VwI6TjOiUna1uVSAFVdVeNTx3Gck4Fle/xwHMdxHMdxHMdxHMe5lvDHmNHHJdcmow/HORlIe6Q/Of38xKiwjuM4juM4juM4pxpu+OE4juM4juM4juM4jpMQkZeIyIUicnbt2Dki8teYi2SAF69P6Rzn2o2IXF9E3iwi904eeBCRICJ3wdxC3wZz3/2G9Syn4ziO4ziO4zjOWjPpPteO4ziO4ziO4ziO4zinMncB/gRARBaABWxv6op/wvaedhxn7WkCF6YPInIA29d7Jp1fAB6nqj9fh7I5juM4juM4juOsG+7xw3Ecx3Ecx3Ecx3Ecp8ffAbuA7wDzwEbgKuAjwCNU9XdUVdeveI5zreZy4GmYd48fY4YgClwCvBa4rap+aP2K5ziO4ziO4ziOsz6I6yocx3Ecx3Ecx3Ecx3Ecx3Ecx3Ecx3Ecx3FOTtzjh+M4juM4juM4juM4juM4juM4juM4juM4zkmKG344juM4juM4juM4juM4juM4juM4juM4juOcpLjhh+M4juM4juM4juM4juM4juM4juM4juM4zkmKG344juM4juM4juM4juM4juM4juM4juM4juOcpLjhh+M4juM4juM4juM4juM4juM4juM4juM4zklKvt4FuLYgIpcCW4A961wUx3Gc1WQHcEhVb7jeBXEc59TGZSvHcXC5w3GcUxSXcxzHSezAZR3HOeXwcd5xHMdhleQ8N/xYO7bMzs5uP//887evd0Ecx3FWi+985zvMzMx4P+c4zlqwpdVqbb/eda+7XUQIAbqdLocOHeXQoTmKsotSAhDIEckAARQQQhA2bmyS58JCu8vCfEGzmbN58wZarRYhCIiMyF6O+1N6f4wK3ZeuIKlMVrZ+FNV0XAecr2cpslgGkQAysCiDSlTltJiHVv8JiIh9amWX/sR1MYXaZVTXdUwOtWuRAWFq5ZOUU/93vfjHZtjLaVBd1ZI+JoFBQYfdjsVr7NXTsaXv1Y+iSF8bqn5LvQzSi1WvQ7sGTVnZvanX33HXWKtSTddV/y6LDp32HEX3CBAXL10VYlTKCGUJSmB2wyZmZzeR5/liWWstg2P/rJWr+mtgMF0MXZVdtXZ91f/HXNZxudayUK7Zu588z1zucBznVGRLq9XavmPHjhOyj1M9foyr+vbqeP84JSLHhBk5
Vo8ZxlkfBt3/lUqnfs+9DcBll11Gu91e72I4jrPybAG2Z0G2w6h5e+KYOWNtSqvHhhkQbfC0lmPnoL15/vHzt2Pm3sdM4aXvYE8nUs1DRY7VLSxyzO+lSnn8ORlwsVWRxhs2lEyEZhAaAhmKIIjYuBRSfcdYjVX2OwTTJUkQQrpOkaQ7Wpw3K6ilpWnCHWNKNwuLV6QxorE3Dvbm7oAI2TFp9o6HYLqesozEGCEmPU51G1MGZTpclU9jtN8pYFVNMSpRLZoIZJnQyANZJqhCyGzTiLhYCPsKwe6CpZvm+FVRVFN6SY+TdEqqisZ4TBuSUJMPq3uZ/lCt1XHKO6rWGlVKG2ryRC8tAQgp75Rgr04WM0h1G1ARNCqqEVG1dhAkpRF6edR1NVq/h1WR7JpiujcSgl2TWvk1WsgQsPtZtYnFBFI20a4/pPrrtd6qAnvPR4zpaE2HVyVU6ZaO009VeqvFA9K7rsXrTAWqztWfPWtQhFq9VPesqod60vYzLDbUxWjIom7I7qUmnWbVZrR2zVWJev2TLJav196rTnPxvlX1H8tjdEpVPSw+ExprZZNe262ur36z6rrKms60Hre/A62ejUrX+oOfXk2rka/4fM8NP9aOPeeff/72r371q+tdDsdxnFXjjne843oXwXGcaw97bnbTm2x//7++k26ny959V/Kl/76Y97zrv/ned37MXGcvC3EvSpembGNj42wCGZESQWjOZtzjghty85tt5+dXHuQzn95DGUt+8Rd/gTvc/pZs2rKJLM9BGoioTc4ANACRkGU2eRIhiE2WQsgQEWIsCSFbnIxKyDA7jFAzyEhxsYmeiEDsYpMGscmUKjF2KMsuGguEDI0liGIxo01iECRAnudkoUmQjJDlIEqQLE06UjlMfWGGMFra5JYSEbuesixQFcrYRciASCNvkOdNsixYWbNAkICkyS9ALLvEUoixSyzLVF8R21nS6kgQVEpiGYlagua9aaMIqiFNiQIhy5BQfQtZqsNAUnSkya8pKKKlo2FRgRJjuahIQaxckupZ6pM3AWJBUuOgGq1+JSTFRZr0k6GUVAYrlWLD7kdvAhyy3Opfgk0i0yQ0SIZKtHrIIJMcgqawVduwCWLUiMaSsoyUUYllSVkUqJZoEYnJaCOWJTGWiM1LKSkJZKCBsuwSY0lZFpRFydzRw/zsJ9/kJz/6BPOHfwRkEKFbRspSWJiHo0cLDh2Bw/PC5jO2cbe7PoCb3PxWzG6cIYQMlUrpFXoTZrH6iWUkKOlORNQqdlGxAbqoEIplpFt0KMqSooiUMZqyiRJVm1DHWKT6F0otiZXWiuqeWVqve+u/MFxJ6DiOc1KzZ8eOHdt37dq1eGDaF+D9L9er36v5Qn1QHtXvQXn3G4VUaaxW2frzHPR7WNxR5Zu0XuvhxzWmWOr+TXNvx6mTcdMd9/6NqsuVbJ/D6nXUdQ267rV6Zvr5/d//fS655JI9q5ax4zjrxZ48yPatG5uLL+wlVPPh443gQkjGCMkoIVRz0Fj1U9UL4oCEXn9SvWi1eWsv897ryCrtniHAYs4a03vf3gtakZ4xweKL/Co9qRaM9IwARISopk9ZjEz/C+FjX4weY7lR9b3pcPWCOVB/wR2QNI/vld3Krellr9kNWCKBwDmb4CabWpwhkWZZIjHSSPqZZiZsmA2EqHQWCoqyoNEQNm5osmljk01bZmk2IRaKZoHZLZspYsn8fIdOt4uWUHYLWnkOsWB+oYAsp9XIyINQxEgsCxoZZJIRuyULnYiKzaebzQazM01iELTZpNScThnRLNDIc1SVufYC2u1Au4N2I2UZTbegSlfh6EKXua7QLiJ5I2dTAzIV2t1ICWirSauV013ocmSuy0K7S1Pg9NMydlx/G62moHmDRqtFjDDf7lDGQBEhy3OarZxQFNBZgFjQ7ZhOJUal2ympLEDKItJq5mSNfLGtxCKi3QIC5K0miOmh0Iy8IbTyQFFG2gsFQZQ8C6gEuoXS7hQIQp5BJhBEkSwQJSAhkOcBiQVBIxmBLIO8maMiFGWk6JSomu7MVFmmg4iSs5DNMEdOd26BRvsom2czZpo5gpI1MhqtnKhQFiUiSsiC1btZB1EGTA8IdLuRGAVRJQsQcqEsoNst0Wh6oNnZjDwPdItIu1NQRCUPgUarQZSMhY7pS1pNodlsJSMUBQIxQiwjmvRzAJFAGQUtFwgEskYTDTlltwsUZLnpCWMZiWUHAdMfZkIWAmikLDtmAhWFsuianjDLUMkoK51oyJPxjdLIhUamiELE9ELdbkksTA+UBZidadCayUyPp6B5k9CYAYRuqbS7JbGIZJmQ503ToanSaDaQ3MrbbXeIsSTLcrJgz7KVTxZ1h1mWW/tCkDwjNJo0mk3yvJH6pCIZfShIRllCu90hxoIQcsgy0z/FwvrELOnoQiDLMyRrLLa1MgrpViSLqNL0rRLMaKrZoDkzS9baQCSn256nLLpW97FEY0mrkdFo5Pzq0167aLSykrjhh+M4juM4jnNSIiK0Wi3a7XmuuWYf3/nO5Vx9dZcgebLcjvYSmpIydpDQJBBQIrFUykLZsHEj171egxve5BB7fnwVC53DlGWXLNjLe8k0KRLorWCoFAbBJqBIoJpqgdqkQSwfSYYPtnokJPMAQatVF8mAwAzBQ9JxRDNoUEVVzABDoikzQvVb0/di6qiasYJKJGpBkGyxrswwQhfT12Q8IoKFEwiSE0JmqxJiTEYjOXnIbYJZ08dESjLNzbgjRqIKMRZELc1wQZWoMSml8qSIsolgZXWvYPVaKWMCCBkSzBjCwqX6T8YyZmhi9aW28AIlIGW1qkSIGu2kimWAICEHzCClVy8RYkgrUmIy8rAJYW9lQJYuORmxCJAMQIIEYmb3MKC2XIM0ZwvHrkHSZHCkNiekDAWBDA1KkeopS+q3IEKUjBB6njFiqocyV0LMUY1kSFIsVCs7cojWNlUyKBQlp4wl8/MHOLR/D525a8zwRaHUZBxTphVLuRBypRmUfVdfwY8u/Q7bTz+bc2eui0qZWllAYzSjnBAWV0jYHN5WGwVpoLFECRCETM24CCltFUqELMvsGcCUXiJ2L9BqpVVI12FGMUEqZSEQSyIcu3DCcRznWkC/8cY08eq/B710XylPHP15DEtDarJQf/jVNAAZVAf18gy7xlHlG5TmoDDDDArG9aBRz2eQwUJ/OStG1ePgF3DD8xxkSFT9PeheTnMPpzVgGdbeB6U/Ksyg88PSWok2uhIeVBzHObmoVpAvTmw0Lq6mr/e3x/RHVAYPoBpqxg6VcURM8z1NK+JDlVPqq9Lku8pTeob69bC97yp/rdttEJN+wfQjx16VpnmvVNcnPa+dFr/nUeAYAxWt5VyNAdWYUhmbLAaOLC7qqJXV5pDmbYNo+hitLlVhNg/cYMsM5zWVDVrSJBJQMpScSI4gEToLSkOUEJTTNs6wYVOTVivQamZkRIpOpBDYsGkLUZROt2sGAVEoigLtRmLsks02CV3zHNIthX1zJSEWbNuUM9sQFha6zC9ECoSQCa3ZJjOtJho
alHnD7g9K1mxQRDh8pE1ZdGhkkYaW5M0cyZRuN9LulHRVWCiUuS7Mt0tElWYWQXK6CmUINpeOJaKCZhlZU9neCmzbGNi6oUGmoORkIbAw16XdLSgKhZATZmYIubDQ7pAVHUK7QMsyLWay+9Bo5BRpocfsbIsQSHN+TEcTS0KeLXoMCZmg0fRmWchodzqU3Uggpamm4ynKkjzPaGQ5mRQokSzLk/7KrlWINHMhyzMyCagKRZE8kiQjpxjNU4qWIJmgEmiXJUe7Cyx0A3nZYXYmo5FnFBppZGbMVHQLiqIAFbIgEAJBhG6nRClpbtxIyJsstO3aG7nQajQpS6XTKegUJVpGWjMZGzfMEBo5nU7JQtEmRqHREPIsI5aRdmG6q5lWIx0rkSwjSG76uKJYNIgSVfJmEw2BbrcAyXuPeVmQB8G0TQFRq1MhWKNMi5NQTQt7hEBGqSUigZAFNE8eZ2KgxBYqCUqzIYSkA7JnNrdFTrGgKLog0GwIjbwBqhSFeT7JAY1Kuyxpd5R2p0MQmA0NVEyXpzESiy6iSoyRTJRmMyfLAMwwyJ5z01mqlhRFh6imZ8oE8oaSifWrseyiZce6kywAOWUsUl3YQj4tS8B0l2VZJp0pZOSgWVrQFchCIA+Z3f8sh5BRlgWx20G1MuwJhIaQNWcIGuh222mBU0SBrNEgBnrGNKug1HLDD8dxHMdxHOekRFUpum26ZZt2p8PRo13QnBByAlnyWNGlpItKREKW3GAGumWbhU5BljU4/czN3OZ2Jaed3mDz1pxIQSbBJqJJqxLSShXSShoRmzAvrmrJMogFyeeEGR9URgvpmClmKiOQnpIjaiQkDwmZZKhayW1qGs0zRMwWDSmo9B+VwURlVKJJWbOoAEnePEjGC1qmSMnjBxHI6bmstIljhiChuWhUEiQkQ5NkwJBWz0QtzKmDxsXJc4yajD56Fh0RW/0RkncSpUxKrhKpT0e0Miiw+q68fQQRu38SzABGMlvNZMsNbJqWJmxm+CBmyIFNLCvPEZaX1oxgrC40Vm44u6ZcU0VDmsRWZvwak7GJladSXklaVbXYPuxCen5Qk/Ijas/gR4RkKFRTrAUzNAnpGkNl4LNYFrvmht1WyggaUh0ng56oydBJzAOM5IGshM5CyZHDV3Pk4I+J8Uiqr+rlDMntLOSi5AIhAzoFe/Z8n+ued2O2nb6NTVs2W3sSJQs5ASFKJEokaI6tG0rObkXMMY6mCXRqd1b/Ec1BSkBsVU8DoZQSCckbTLTJdSlFUhhahUl69pAs3b9qtYa/KHEc59pD/0ugUQYF/X8PY5wXzoNe7E/DOC/TBxlDrLYXkHGMAwaVbVyDjaXSH8fzxShjlHG8Uizl9WJQ/oPijfo9zCBkqfKMG64/7X6jk9U0nhiUfv+xUd5DhnlTWW1PIo7jnPj0+oDa1gcj+jRRQbXytlHbagRNXgzS3Nj+RKVn6FEZZCiS5rp2LprFvc17h1jYa83IQ9Jc3+b10RZjSFich2ttnra4aaiy6LWTxTNqeWq2aItyzLUuzrt777LB0ln0j7KYV/pOc0it9CJiHgm2b8i51fYZzm0FOnNHyYGmQCuzeWwjKJkosVSaIZDlwsYtG9m8pQXATKtB7Hbs+rKcmQ0zdIsSikjRURY6JZkIUpZ0y5Lr3PhmaJxn31V7OXyo4MjCPM2ZnLO2bqBJl067y0K7JEpGs5Uzu6FB1syJYZauBDNUyEyr0y2Uo0fnKNsFGxqR2TxHo9AKgU7sMt+OzJfm0fNou6TTVTbM5rRaTWKW0dVI2S3s/kQ1zyFd8+K5ZUY5Z3OTViOYIchsiyzLWJhrM9cu6XTM0+eGGUW6JQsLStCCrIyUhRIw4woJkSxklJ0uZYw0G01CI0eARmb5R4W80aCyGMqTIUaj0SCidDpdKJVmlleam7Q1ijDbyMka5rnU8gtJp6MEIIslWQjm7SLkKEIZS0q1RVsheRO1liNIJkRsS5xSMmIRmdUOG5pCMzcvEs1GsMVGYp59IbPFM5lQFiWFguY5WeVBoiwoS2g0GzSSR5BOu6AslUYQmrNNms2ckOV0u5GFThdFmJnNyQN0upFOp0BCoJU3iDHSjqXpOIMQC/MMW5Smx5KoabugAslyRM1DBQqaqXnBUKHotAmxu/gcZQLkmW3jo9E8BwVBNBBJ3n1D8lYSIGpp+rxoKrc8M+MKWewDTAelCJoJs7MNGo2cPARiVDrdaJ5FJKOIQqfo0C2FolTzMCyaPIKk5zpklLEwxZcIrTwjzzNUlKIMlIued5ORSIxECtNFZRlBC0JsQynEUmyxWllSapaM48z7hsaCEALR3KkQi4KybFufWuZkeWYGG+laEZA8R0IjeWhSQkNohC0UC22KhUOQPHpYV1jS2/ZKyPOc0JgxLy1aUnTaVp5VEAPd8MNxHMdxHMc5OUlaizzL2b5tK7e+zXW4/PIDHDlsXix6Bg8FMRZIsE1BJEBr1lZtNFpNTjttM1kW2LIlp9PtEosOqmru/MQmHWYQkLJFAPPqsfiyP5aLbkcFyILYhCkZYJgSJu1zGkLyaGFeKIKYoUJI3ixYXJ2SFBUakwV9b69JMwow7xkqIBLN0wgRyCxcMqSoyiCQXGQcu1LGXMLmyZjBXEVmEnvlTi/zK3VLtbKAtN9tjIVt1REVLc3biK0hsJ1yFw0+kjGMWR1gE2ZJk8xFzx7JkCb0jD6svqqtaqy+Q2oCQkC1u/grSrlosJJqKxkeVIqqntcQU7DVjFXSiofKA0mUsKjQqnRQZvCTVk+Fai/ZnhGJ3RczkFE1A5dF0x+BoJVJT4My3ddMzCWkKeR0ceWSihJDIA+VRxdFM2sXIUZTgKiyuDIsbVVqk/IGBLsXC+0jHDp0BQvzV/V0aZq2YYmpXpLxR8ggy5Q8CIcO7OWyyy7hvOvemE2bTiNrZHZNUrnztYm7KU1Ixk6VYs/cf4oCWWZ5SETElGEqgqbtWyTkhAKkFMqQ7l/avkYWt4GBECLmoTgjSpmUg2704TjOqc1SL9KHeWLoDxeSIeQ425Ms9fJ8lLeHUV4oJn25PcyDxjjbgkxqADEq3/6yV38P8nzRn86o+pi0bP3pDPIoMuqaJzXqGNUmRpVlWLqDyrsUw9raUsYSg4xzRrX9YUYZw8pSZ9i1D6q/peqlnuaoPB3HOXWI1fxUWJynwiiDQY453x9Kasb4omaIYdtpRguglohinhhNv6D2QW3+W+kJlEVPDlotrqgWtizaklQr1qt+81hZIyRPjoFqg9ZU9pSEpheisXo5mhbN9OatioSqD03b3aasqxX5or30Fm1gSPNwVRoinLMx4zbbZjmzpWTdBbY0lFhEWpIRixKRSEOULAQajcDMbGDLtk1kzcy2wZUAeQZRyGc3EBoNokZmmzmdhcjBTpfQatKKyqHDHTafeRqNpjJ3cIGyLCljwfZNOdu2tMhiydH5gnYHojSY2dBgZkOL0GhShoyOCkVRpq0rlE67zcEjbaRb0BIlj0rRjrSaDebmOxyZ63K0gIXCtkcpY2TzTINt27fQEWGhXUCnJJ
RKBoSZnCAK2uWc0xpsajagLJE8oznbQovAQrekW0S6nQ5ZyGiEjM5cOy3EsPaTZaYLsG1eIGRKt2NGRyFvmndcTKdTlgUaMhoN83MaoxKLLqrQbIm9lO92ySUjy7PFGxmjtdssg0aW2q3YAh/zWGHbqQQizYYZaywulMF0F1luL/UpLS1J26OUmhavJGODDTMNGhqQWEKekTea5LnpbyQL5i2ljEjRpiwKYinkM00E6LTbtOfbEKDVbKBFYK5dEIsuWQjMtIRmywwJygjz7YJO15QxzYYtfup0I92ipNGw+iuKtIVPMP1eUZgX1qKEbqlkIvZ8lAUkowpiYTqoLCPLc2II0O2SCxAaSfljHlpCyJJxjRl9RFtNlPqGyitthpa2QEdVyTKhKcG2HBYxbyICUWyLFFA2tJo08wYqwkI30i7NWKolph/sloF2oWl7okCeadKZRSi7ZrghgagZWS4085wssz4yFjHpu8yrb6nQ7XYpyxIyyFo5IW9Qll3KTkGQTjIQsrVZVu4s6fyUWGI6yqSLM08wpRnRSEZZKGVWomUw47YsI88CWasFJGOgGAkZNGeaBDZQdhYWtwSiY54+gkBottKCr4bpH4u4aMC3Grjhh+M4zknIMEVAURTMzc2xZcuW9SiW4zjO2iJCkJw8a7J121ZucYvrcWD/Afbv3cvRS/elrSOg2vIly3M2bBK2n7WR699gG+effz3OOH0bMxuaBAmU3QWu+PnV7Dt4kC2nzTG7cRbJQnrNn1bVxLio5BBIBhzJQEFjUozYKpagYVHBEqptXkQWjRckuSelUvZopNoKJoL91vQGXM3VpYqCBkRskqtaGYZIyscmSSELZCFLhhaWRtQSIoTkT0QAyWzCJ5mtjEAqBXa1HYqgMW1Tko6oQki7qcRo2+aomtFHVd7FVULVBIpI5ZHEjBMKlECoXlQs3lQzaLH8lbRsAOgZYVTKpVAzXKmMfKpyiqTVRNEmyho1uY1MhhJptUc1qa2MiBb1V5VpiSqSmdIMLdGySFu6JAOUkC8qLswIJW1nUt1FFbpSkmEeVspgE82gtrewXWLaRCWIHUtb0lhtZMSsREMw4wwCMUYkN28gsXJfWynORJEyxzy9BIqi5MiR/Rw5cBmxPJym05XKTZMXjd4NsH1y7Vtj5MeX/Yib3ORyTj/zdDa2NhLT/QhkxNR+Q/LskUUzijElYSTEkAxxhCBZr/0IlKJk2oS8i0Rrw2bIYnVbSmHPV2V0ZTfXtpTRgGi56LLXcRznVGZcrwmDXooPeqk9yntC9Xc9jWH5Dfo97OV2Pd162GF5jesdY1g5+g1cRnnTGFa+UfkPMviojDyGxRu3PP0GJvV0++trUJ7jeHeZxuiiP/9BbWyp/IYZRyyV1rC2O+x3f/hhxiKDzlfxR92vpZ6nYeWbhEnuk+M4JzeV34rK0L8yxDh2XE/GDNV0WGxWZ/Yisji301gZQFTflp6tOK+PW/X+WNMcPl80tqjmxCL1X1V5tCr4Yj5WZpv11406qrnnYn+oVf61PrZmMCLVpC/pXyqdiyxGjmkjXdLClDS9ry2Xl1Q/lVnMxqZw3Y0557UCZ2YFm8oiGT0IEaHslmmebde3oZWxZWuTbWduYmE+Mt/uQlSiFOSNktBoosHm+jPNjKJUukXBxg0Z2u0SNXDO9bfTbAgLh/Yxd+AQFCVnbW2ycbZJtyiYmytodwVtNJhpZWycnUEbTbqhRadb0CmLRb3R3NwCCwsLzGRCI8uInQ5lYbV79Og8++YKNDTQkKFSENUMMhozM7SLkk5ZUnZKKM1japblBIENrcBpWzfQyKHoFmSNBs1mTrvTpSwinbJLLAs2Jl3Z3EKbTrcgI6BaLHpyiIrpWmJJpsG2bcFekmfBDDoKVRrNBs1WixAi2i3ozHUIRPJGTuxGYixoZU00twVWItXWwTkhCpRd00PMzJiRSllQdAqKTpdGLuR5RtZsECQSQ/ICoWqLrGLl+QVCo2H3XktCnhNL05G1mhkaMWOOZoPmTE4IDcgyglmd0MibxFhSzpWImLcSYknRiXQLM4rIs5yyUOY7C2hp26G0WhmzM00kQLeEhXbBfEfJg9JsNIgldDtdkEDenCELlZ4HsrQ1T1nYIpugEFXIMiELUMbSdHli+qpF/ZKYHo/YtWvPwqLeR5JH2zJWHkEyym6HqGkRlpqhk8ki5ts1RjP4ISpFaQY7hBJtZGiWU5YFpSqzrZxmboYq3TJSlAUhCEhOgdiW2zGtWApCFOgUQqYFjRCh2no6z2k0G7SaOVnWQCMUaVtpkkEaAbQUymieQ/KQkYdAlmXEskvZ7ZCJEJpN089JRsit04gaKVVQsW1aJGSp/ylt4R9mVKIa0KJLyGybmViaYUgmII0ZNCqx04ayS5ZlhNYMHRSN3WRsImmrZCFrNAmtjRRFSefgPjQWaNRaX72yuOGH4zjOSYSqcvnll/PiF7+YX/7lX+YBD3iADXZAu93mmc98JmeffTbPetaz1rmkjuM4q4+qsn//Yfbv20dZtplptrjxjc7hdne4hr17r2F+X24v7AUaLeG8m5zGTW92Otc7bztnnbOV07dvZXbjTFJQmPLg8JEjdNoLbN96Dtu3bzPPIWn7lYiioTLQMKRauaLp5X1S1JqRRkwqGNvqI4SwaCRiSovSXmKLEELa51ZtxUpay4BqaWt2VNM+ugISbQJCtdWFuWJEFA1x0RghbYSLpgkgsOhVpPIKESpjCi1tRcaioYypVJQyKXFIRhJluo6QjD4KoiY3htiqCptgB2zL4cyMX9JLgqiVhw1TEFQ1WZQlRXsB5AizM7O0ZmfJGra/pmRWFts9xRRC1XYlQlhUithcNL04kGjGMiEpukTNhycx7Wlr162p4JrSrNpVCLaCgWSQQ3UN6Z9ibj7TEgFrE1EpYxcwY4XFpUaxRFPVyqLrXDM6sT1W7V6JBpvJqylT0NQGSEYskrx9ZMnTSBQkt/LE0larRPLUViJBodM5ypFDP2fu8OUQCySoeWqJi/ZAdt1plVSl7AtiK6MO7d/PT392Kedd/wZs2LLB7qukek5xophCxrbfsdU3ZVmiQc0rRzIEsW1oSJPugGSa2pmSZ/miEUu1X2uJ2iqeLKBFQWVCpJLaD9X2Q/5ixHGcaweDXozXDQD6vT4sZTQy6oV8/+/+sON49Rj0gnyY14aVfNE9qUHHpGkuZYAwTTqDvLGMY2QzLP2lvMP0hxnmvWSUkcOw9AYZXgwzcuk3Xhlm1DPI6KXOMMOhcY1uBl1Xf/pLxR12ncPijDLeGWVI5DjOKUayq+gZP4Rj+tNQebugMniwOZEttJf0ty0EqIwibB5psao50yKSvClqMtZQoVr0UL0or1ydVuYTqn3lq+bNVLoNMM+joWd4IrV5Ws2SxL7qMktte5s0IZVaifW4P1K4ZOBhlaK90qY8GiFw1sYmOzYK27Vkcx7ZJEqu0O1Eym60eXCM5AhBIps2Ndl++ka2bN9Eu93h6PwCkptnjxAypDSPJ3nSCSwsdAkibJltcvTwUXRmlq2bZ2jkTeYOH
+LogaOUhbJ5Y5OcwNx8pN0t6RZC1moys7FJqzVDFKFDYL7TIcaSXDJyiRyZP0L7aBspIahSElnoFmgJHRUOtSNlltHMbVsRlZKwqUUIOaVGiqNdtDAdTs8aJ7B5Q5OtWzeQ5w26BPKGooVtO1MUSrtbkOXC5g0tNJa0uyWNENAMJCp51iDGAtS2vG132zTzzLYqiWbokwUoCqWMSrPVABFi7BKi0l3ooDGSZRlFp6xaEoUqmQRI+p+sGQiNnLDQQbtpSVWlZ4olgtJo5jRbZpyRZRkSKi8gEGKk7JaU0Z4UyczwIGuIGRMUESQnNMzbRVGUNJtCsxUgE1Sh2+2QEcjyFjF2KRDChk3k3S7duTnmD7eJMbJxQ4u8EZifjxw92kGLgo2zDbZsbJI3m5RRaXe6dNtQqm21k2fQLSKdTiTkGa2ZHCSjWxbW3gS0W9h2Qmnr3zwL5M2cECPELiELhDxffECyzDzsUNrWKiq2V0uMpW2Xi+kHVc2LSbdb0u0WiJYggSzYptki5pEjYh54JfUxsVvQ7RSUsSTPA2WplJihSKvZICPQ7RYUWqIhI88DMQpFGSm6ChJ7HpCDbQccY0RUoSjQtGVNs5kz0wyEPCOmtm9eZsx3kPVXoBoJWWAmaxGySC4Rii4SzVtKt9MlqBDyfNFAptRIWaa+JgSyLG1znbxxENNirrJEsoZ5002GaSEPaISys0CeZWRZC/IMIZohEYEQI0W7a7rfLEMaLfNy3Mht+3GEbqeEuEAIpr8TjpcTl4sbfjiO45xEXHLJJTz72c/mfe97H3e6050Wj8cYec973sNrXvMafu/3fm8dS+g4jrN2lEXJ1772bb7+9Z/SbHQ57TQhSMGmmRlO334a+w9eQ6FHUCKNprB9W5NzztrE6ds3s3nDBvI8N6t5CWTNjNnZGWZnmhw6eIRDhw8Qi+vSatlEIyYFQiZmtU1ahRIVc7FYlmlC1FsNEzUpYyRLLiqVxS1PtCCKLDqP6Bl9JC8aUc0rhtoE1/QaZoRgv80Lhxn/2XYqIeTkIUdFCVlYXG1TGV2omltMqq1VgmVo3jVCMvDQxZf4i+UFkrVEmqj0rPR705NkuJKUP4t5YwqkmJQNZvARabcXKLsFB/ft5crLf8zlP/sB8/P72X7aFs4+ayvnnHMuZ597Q0JjllIbhHwDjZlNZHmLLG+aa0pyU3QlFVdIRhMEs+CvFFQRM8qoVF6mF7JKD+SoFNidykBLJGS2CigpnWTR+IPkZlatvlJ6MSpl0e25jgxlSi9HNfYUW5VCK73ckSDJiMGUZEq0NlPt11xNyMX2O87IiZS2LY9GsiBkmhR/5JRiyivb2kbplh26CwscPnQFnYW9i/UvMUMo0goJ0ssFep+ka8ky0KLDz376Q/buuxWnn3UmzdbM4kQ7kOxiqhUKwValaNTkTlNtH90ytY+YakyzyvzGticSW7WRZRlIJFNLXSK29UsEJCdSUEqa3Iug2vNK4ziOc6oyjkeFQb+XMvoYFHaU4cE4RgejylPlUy1aGJXuqJfyyzHkGOdF+iBDgZUwTlnKAGeYscoww4VJwk1SvlFeRurnB7WZfkOQYdc36Nio6xpW1lHHR9XNsDSGGUKNKmv/tYxjkDMonaWMpBzHOXWptjytWUcsGkJYH9B3LL2UrXsKsam7JGeZyTVnTY9QGYRYGuZlUhYXiaQ5nFZG9dGmZxJMV7GYv6T5NYvTL0nzyxAE1Z4XSrAXz5pJmsvZRyRtJlvpKOgf12ueQ/uOapqfW/A0t1fb5qNK10IGZpvCTbfNsGNG2BS7zMTIhqbQCEK3sO0dVNNihwyaudCcaXD6OafRaGQcODAHeUZr0wxgXkFElZBFWs2c+W4kdiOzTSETZb7dYWbzLGedey5z+/fSXpjnyNF5ym5BK8+I3ZID8x06ZSDLA42ZBhs2zJK3msSQsRBhrt1GI7QaGbEsOXxkgc7cAgtHF2zRxkxGLIVOoeS5EJotNmxo0ClKZvPI5tlAIZuZmy/ozHeI3YKyUyIKjWaOaCTP4KyzNrBhtklXlQ7QnGkRiy7dskMsChbmC2ZmZ9mwITA/V9DpFASJiEJTMiQDFfOuUJRKu90mD6ZLKqMyM5MBGQvdLqHRYmbTLBJLtCwpi455c1AzCFGg2TDPK+12mfQ6HSTLCI3cDDnKLovLhdTuedHtokVJI8vImxmN2RnI8uRxtyQX20YmdswgIWs1oVRK7dKYyczgpdM1j7WlYvYfQquVkwVBC0W0QJMBQMA8obTLaF5aZjeQaaDbVaIENrQCzVaDdgkLnQ5adNnQCmzZZB4rOmXJ3ELBQrsgCLRmcrI80C1K2kXa3jcPi207y3NCCBSdwgyz1JYvNTKh2RBEC0hbrUjI7ZkOsihjq5qBRMhyW6QF5JIRY0ERk44pBIpuQVn0PKMIEUl6s2prl5DltlVvihdjSYyRUiPdrkCnJG8GZht2fzrdNlEysqZtk1OmLaFjmfLIbBsfJelhg+l3YtlBtSQEodVo0AggWunZrP8J1YI3corCPMkoSqMp5JIWHMXStrspoSjNI0qmXTI1Q4iowTyEJG81eZ5b3Yst3KuMQbRbQlkgEpC0hVDIbFEgIacogHabPC/MYKbZJGvNEjS3bWeKBhBMb9loARlRu5RFm1gWIGqL6Mqi6uzHGS4mwg0/HMdxThK++c1v8qxnPYsPfehDbNy4ketc5zqLioBvfetbvPCFL6QsS3uR6TiOcy2gWxR89cs/5Iufv5yibNNodjlti1CUbQ4f6iDk2P6JbeaOznPppdfQ7cL2nxzkzDM2ce51T+d6553DltM20Gq2KDdu4swzz2Z+PtLpLtBeWGDDxllTlNQUDVmyto+qi0oVpDIIYHFVTkaGilLGIq2EqcwozLhCJL3ArjwXJE8iUQuzDldJLkwFpOztYSoxGSmYQjiEnJAF8ixD0oSv2nLG/FXat/lXyBb3uAR7cQ9x0bggqV/MIGTRACG5laS+N3C6VjFXnmRCpEAlM8MD26uGMha2GiRN1+cOH+Hyyy7lyp9dQmfhMNIsyXLh9DObbN18Y7actokNsy06C3P8/GffIkjOzIbTObIwR7sTaTS3sHnrddh+5vXZfNp28ixfLAsxeboANBMzhoilbSUCkPYm7rm6MA8sombw0fNmIalOxAxEkkcYqy/7I5AnKx2l1IJSS2JZmAePtDpDRGwVg1bb1pC8uYDGwuo0NCtzlLSVDeYVJW1DU5kYKdTuadrTFszQhZySAtsOp6DMhDxrUJQd5ucOM3foKlTnk0tbQNL1lPQWSVGtmMD2/c3MyEkErtl7NddcfRXXv/4Nac3MLho1gXkZsfia1HRpT12NBEn7wGLGUz3PIkrQHMHqTjQQMshpIEUJeYloRpHqsygLa8MhQwvtFTZLL0im6Twcx3FOIqYxupjGC8W4XhSGvfQftf1FZfSxnC0yhnldWMpwYhyGvXgflsYoLxSTGoeMMt7pT79+7NgV06PzX8rwYJyy1A1CBrWZpbxyjMpnUPxh8WJaTTzKQGIpA6RBx8Yt73IMMkbV
zaA83PDDca4FDOkPF/tZSDqE5LUA865ZmYQs9pZCmp9Wx9KcluqlaZrrSzW7xOZlAVQlbZFiHzMWSXNnC7iY5qKhyWLOshhiMV/FZocx5WfqhZ7xSW96vji/jr3Di/oVSavszU4k5ZO8P4bFLW3TOREyEc7e3OTWp89yphS0Om1iUbBhpkEjKGVpL4QllmRBaQahkQW2nD7LttM3EjXj4ME5NA9sPm0zZbtDWbZBlI0bZ8iKLgcPzKPNJhtbDeYOLyB5xrbtpyFll70//RmdTpv2XEHRbhM0MN/u0sW2omi0hE2bZ2m2WpSScTRm5vGh6JKJ0Gw2aM/PcfTQEUJQZjY2CNKEsqTownynYHa2RWtjk07eouyYJ5ONrRbSmmVhoUNZdKHbJRQFzUZG1siYbQmNRoPNW2ZotHI63RIJTVQCnXYXKUtiN1IUkWYjg26b/QeUbqloCbkqM61gxgaZvSxf6JR0it5WxwTYMNuEICx0SrIsMyOK0vQUWhZklAQRyjLa9c02CAJzCwXdEoKU5DEnz62d6EIb1YKya545LN82nYUueVAaM00asy2kOYOKoKWaV9myS5GMXiQ33Zjm0MgyAkq306XoFHQWCjpFQaORMdtomdeLmLzbdoukC8pZOLpAtxspy4IiC6BCq2E6n9lW2t5FzahhdrbJppayoRmQEJib79Iuonl5CYFmK4cQ6BSRboQOATQjFCWqXZp5ThZs+5EYY3LSkjGTB5pBkViYgVVM+smYkYXMeocy6e2yBhLEvLBKWmJUlpRFbwuXbrtLtygQUULWwDZ+tsU6UUsQyPIsbZ9cmk5QlSwPRIRuRyiKkkYjkAczTCuKgkBGa7ZJyJuUKnQ6EY0FjXQfymiqnJBn5FmORKXQAgrz4NJsZDSzgMRIt90hCkhmxjpm1qNEMmLqDxp5IAtpC+MolKqUZZeisMV0EqCMBVlX6JQRyTLbFilajxWjmNfaPCMWGWUszLNI2sKmWgiWV39jHnREAkXRQWOwLYayGbJ8hhiVfGbGDFwUYqmQ28KsslTzclx2zfOIhJTWZPO2cfG3g47jOCc4qsrXv/51nvGMZ/DJT34SgBve8IacccYZABw6dIg3vOENfOtb3yLPc7Zt27aexXUcx1kzOu0O3/725XTaJWWEI0cWuOryo3S7c3S6R+2FMjmRBRaKo1x5+QH2X9Ulz4TZjQ123PQMLrifcLON55FnGZ2FLhphy8YNhBDpdBYg2mRWAwQyyrTEJSaL95A8HgjBPE6QXm6IeXQok2Km2hITCcmNIkBNsa0QQpa0LIJqAUFAM8pYJqOLkPQoNYVPELJgnkMkmJV6CHl6OV8uvs23FUFlMuyIECRt+9LbvsQUQ9X1VGWrPJVYeZP9AgSIsVhUHqEQsrTNiCoLC0fZ+/OrOHDwarZsPY2yKLjm6p/z059ewtFDV9DIA+eec11ueOObsXnzLDOzs8zObiZvZHTbC8wdnaMshYX5OfYf3MfRo0dpt9scOfoj5tv/zcaN27nxTe7EDW9+OzZu3GIePqJtEaIxEstIWRRp9ZSY8UkyhKmuxbbSKVI9mGlKSMYgvfuSPH7I4l42aVJmBggxFmhRmEeT0rYhQQMieZrA5WmFUiSQPFSI3WMASkVDSbXHsGi15U8ACjPwkbTCS4SsUvKl+6sqaKxWUMfFlRMkjyFzc/uZP3oFooFMAioFJTbB1N4tXbyvUv2tIMEUjvNHj3D5FT/hpodvyZZtWwmZuRZVUnslrc4R24sWycxoKJrhklWfmrvSaFsjoUIMBRJtlZm1/YhmSiaVC+CMQs3VrW1lI0gWKGKBYkq7PnWn4zjOKcuwl//Djo1Kp2IpA4WlvBcs9TJ9UB6TvkAfZlCylKHJIJaqo35jinEMPgZdxzCjhkFeM/rjjPIIMSjtYYYEowwr+q9jHMYxxpnE28UgRhkG1ctb9xoz6B4Mq+v6d3+9LWVsMuq6B4UbdW/729lSebrxh+OcuiwaQtS6A5HK7iEZf0nPNkSkt5AEqll62ipMwuL6hspwojIPCWk+K/XEqnSSXT1SeWdMaWkvH+lZXyzqQCozDUmGHRyjQ6ijNvVVtVX1atumBukZhUDlCUR7lSDJi8fi3FDMW0S1mEZrXkbVptc32jbDLbY1OScrKdttyk7BplZgJse2VgAkFjQEskZGY0Y4/YzNzM7mFCXMHZ2j3S7RbmRmft62yAgZp800kKJgod2lkWUUnS6HOh02bJ5hQ7NBMXeY9nxBe75Lt1vQCPYafa4daZeRkDdobmiyedMsjZkWBRnzpXCk3QGNtDIhEzh86DALR4/SzDPOOfcsNm/bwP4rruGaqw4y1+2SzzRobt5INwhH5udta4zQ4EiWod0O7bmj6EKXPEayZoO8kdFqBbZsymnOzlJooB0jWd6k6EZbsAMUnS5BI7N5YH6hSycKGsxjQsA8NVRbpChCu1sgKI2QoQJ5ntHKhaKESE6WZ5RFSbtTIt1IGYXZpukjyqIgDxl5KwMJFEXaNlgitpeM3UxbVFNSFqa3CLnp/8rC8pasiTRn0UYztSuFsqTsLhC75vE1CxmqkTwozTxQFl3b2qQdmV/ooKrMtHKaWYCyY4tUQpZe/DdQbDufbqdrrS5W8SKNzTPMNAIaM4rC3Es0mjkzGzIa2SbKbpe5uYJ2x/QqrZmMrJFTxIxu2SWTjCwEREo63YLYLWiGCM0IsUlEKQvzsjvbCGQCxMrzCZSl6afMU0hEu5UHEHvWS4JpDcuS2O2YV4/MdF3dMtItbZFWIwRCWrQTS/MWFDLImw0ImdW1CkURqbZ5KpJuZmYmp5HnZlgSLf0sa6CYN5Si7EJU8jyQh0CZ9GciObkIQQuIShZLQsO2N8rFtt2hLNHCtkyuNJUqQlFmRFEka5iXWkC0a7pA1fTMBqLUDNFioKNQdgvyTAm5Qpa81GiJxoB0u5TdYNsR5bZ9TtZoJH1i7HkqUdCii21zlaVts0wHqtoFAnmjQd7aiBIoigXz3FJ0Mc8mBbHbAbUtk8ibmMJsnBFjMtzww3Ec5wRGVbn44ot50pOexOc///lFIfj888/njDPOIMbIxz/+cd7xjnfQ7XaZnZ3lOte5zjqX2nEcZ22Yn+9wxRUHaGSnEaRhRgdFacJ4zTtF9YI/akEsbeVCpzPPTy87zP69h+i0uxTa5ceX/pzLLvs5WYhs3w6dzvziRCVmNgHJklcJkUiW0Zt8kZkhhlQKEjOaCGDbxCSDA0nbcFjYjBi7aQJlxiOLShAySjWjixBsqxGVEpGMjBwkmEtVsbf0QXKzVA8ZSmkGDmrGDKb8sS1CzK2jpJU1YpPbzFzLCmmSVRk1LHpxSHWpZTouaCzMmCS5pdW0tKizMM9P9/yQn199Oe2FNp35g3znGz9loX2YDRtbbN2yhR3XvTFnnnkG27adwabNpxGC0MhbptzJG2hUTjt9E0ECc/NzzGzahOy9Gj10hFangwi05/dy8Vc/xI8u+RJnnX0jTj/repx73k1oNGdQJW2tg3mLsCVB6KIRTsRc6kq65oIs2LTIHFzGmgZ
Oe4YtIVs0bCEpQDR5BjGzmmTckFa9ZGn/zrhoVZG0aqnORATJTMFR7W2sUhnXiG2rYw1jUfEXqr2BwO5njMRQGZsIeRYgYitEul3m5/bT7R62tINAbJCl/VFFbNUGyZiJqLUVXLaFUQ50isiVV/yUgwf3c255Ho1GTildU7hZwlanmOLCPMTY6qwsKfTIzJMJImYEgiKaEbP0MqQUyFpkEkEjIQoxKBJLCunasyIdVCGTjCImIxvHcZxTnEGGBksZBCz10n9QuGFeEOrHRhlOjGPAMejYUsYfg8rd//K+v0zjvCgf12ij/1rHNbIYZVgwypPDsPof5/6MKs+ocP1GEf3XOqztjLp3S3m0GFX+YeUbxDCjoGFpjWNQYS8Xe15FBhmSjGMgNOqaBv1e6hl1HOfUxIwx+o7VxgmxA2krhtQvp2OkRQPmXaMy6Ld5rMWzF4pm0G8r+s3FZVicL0tIlibJY6elpTZ3TPNYreazMSJB0nw6pENpgYT5C0BV0xS22gpWF7dqqBZqVMYiVRenqsnTac8YpvJUcqxuQqsQx9TNlpmMm2/bwA1mlO1SMCvQjspsK9AM0Flok4kQy5IsBLLZwKbtM5x++gYkKt0C5o/Mc3SuQBoN00UE89DQigUL821amXks6JKxafsmmi3zknD46mvM4KOwxT3NzF6SFxqIeaA106Q1kzO7oQmS0yZjvlCOzicDgtwML44udKDbZdvGWVqtFsSCw3sPsP/gHPNdaDSadDWw78g8nW7J/EKHogslBTMzJRsbSii65NFe2meZsLEV2Li5RXPDjL34low8M6OMMiplaS+xsxxajSadhTkaMy1yoL3QZTYXyliiBGKesdAGyWbYsm0rzQ0bKfMZmjMzZM0ZiihIq0VzdpZOobRiMA8HsUszDzRU0WKOnEhOQVksUMx3KOcOQ5hD5ueIxQJRGpSAFhEpIyE37UXsds0ABAjNnMZsC/K0RXFZEIsO5f/P3p/F2pZl53ngN2az1m5Od5uIuNFkn8nMFBuxscqiLbkpuVSoouRGpSo9VOmlAPtFLxIg2y+GDRg2wAcbMOAHAoItofxguFAwy5AEWlWWBJoSKVJksZGYIjOZGZERkdHd9nR779XMOUc9jLn2OXl57rn3RkYyqcj9A+fec/Zee62515p7nTP++Y//3wwmXhAQCWSUGITgoIwDmhKalXHIhABtG0HNKcQ5rHklZ0IQglOGlIFECIERGHKipEwYR2LyEBoynpJGAomZlOq8qpyvM+dnAyF6lvOG2EZS5dZaF0EcfSo4B20EHV2NRbHPVC4Zp5g4JGWyZsRFUhaGoUdEiTGgeQDxFkOihTwWcim42IAWcuqRYufROWtkE+8IXpEieMQ4l1zIlfsUqYKP+nnL2TirVBJawEugacFHRzH6popOLNplHHqy9ZrRNi0umDgmZ633ptHEKEUJ4miCNx4LE4eogmQ1HY6YO0dfMrl4imR89IRovFWpMUyq1vOWs4lFXAwU76CY73HOmXEcUe9pJBDCDBci2QlFPXkYKakn+GDcqlgsOK4KjsWhLqAhIDgTJKVs/CXeXGnYWNxLO0dcpKSMJNCcGIcNJQ04rKlJdcQ3M5zEJ/6t/Z1iJ/zYYYcddvhDjN/4jd/gL/2lv8Qv//Ivbx8TET7/+c9z48YNvvKVr/Cf/Wf/GXfv3gVgsVjw+c9//ns13B122GGHP1BsNonOr3F7M0QCrsaVwEQU+PpdlSboUBekPRQYutE6GoaB/nzgd/7ZO/zGr7/NwWHD5z7bceeFV8h3RmahmZpOTOWNI1dSwzlz8zAxSE2WFbNKtXVpga2rhBEmjokkSUBdzC9lS+RQyRKLxDXnBhHwwdV3I4izoil4y//0zuOqoMNMK2qXBCYImDJxrXaRrehBpViRJAXJF7avU/cQ4mrnDqCm5BdfCzDNoJlSYBgGju99wOtf+y3ef+9r5FwYx0Q/rAkRlsvI7Zs3eOXVVznYWzJbLHFOWJ2dEmJLHweWe/tGUoTAOI7ksScCsxCZz1qGrqOP9vxiseD4+D4nx+9x//7bHLz1Ih+89wZ3Xv0st154jaZdoGRUE6IexSFFwbFV7U8tUV4CW7WDVH2G6nYuTZ0wueQqAHFbXYjDUWpkj4SIc96e95NTiOKLgItM3ViWyVztdp3lsUqNRbEJMhWo4LxsyUCp4hHBoaLVjtfmnKviIhPyWCrQkAZW53cp4/mWVBMRSiUDXRWUbHu5pvckNQd6OgVFeXDvHg8f3Wfse9r5DOdMJCRqET4eE4CoCEXSRUeYVGtRTLCRJZtLDp6iGSngJJL9aIN2gqh1T0ixLqSpA1iK2OCyx7naoSRUcnOHHXbYYYfH8TxiiGdZvL5q0fxpLhqXXRye9biX8SwRHKr6+7b7sHEnj7/uadEoTxJNXN7+w4gqrvv58uMflii+yqnjWQUt0+uf57mr8KyCicvn76rzfd21f1bhxOP7eJL7yOPn/KrP2HWOI5fx+GdjJ/LYYYfvd0z3gMeElYjVZvV7ZapX7RutLhsWrWm17lTbGXNx4e+JVCfJSchRK7WJL5mOrKK1faHyGkjVi0yCCxvRVIaZ2KPUGna6/12MTy6NpR7AaJKpBL8oVq2uvdCcbGs9uSwOuVQ239lb8EO3Ircks9ARGTLZORqxWnXoRuatOWvG6ImNY/9wweKgMUpAhc1q5KxLuDZy68VDc1FIA3m9wbWB/cN9xvUGf3jIrds3GVcnpHEgDYlVN6K50DTmHLHuEmMBomfWBGazBh8jWYQknrEI625DyVbP972JFmat4+iFF6DvCNFxvjqn72C1SvR9YRwLQ7F4kL5An5SxgI9Cm0dElDIW+qKoc8xngXbmGdQEHi5EXCm2+F0sBkNLwTsrsYch0bQNSGS1SWTXkJsj5rdf4vYnPsPi1i3aw5dYHB0wP9jDz/cIszmhibjgER9NYFSJC2tIwjgXnTiuguZCTiMlZ1LXk/uRcb2mOzunf3SX7vgu63vvsnr/HcrZfUI+w0s2YYBYk46LEYkNEozXKWPPuO5gLLjgUG9ZMSF6nGQ0p4nqIBdoGkdsAlosfgMRsoJHiY3xdikVQmxxbct6PdL3PSLC4bJh7iF3mVRGJCpNgFlocAhDV+jGTN8lcErbeEQc/ZDwTaSNjlKUYUxoERofwWWSWpNXqEnG3lmEiha1WGEVxlQYR3OLjd64TtWCqEOziTdELHo4D6NxKVT331IoOppTBuC8w2WhpMyY6g0l1A9l/VGLxZqUIozJXJZjY1Es5gAsOCm46HDiSCUzdAWRRAie0DT1XGZyTmY87Nz2vHsgzJT5zFdxhprgY3LrRUgFVB1DKeAgVKcPp1ODlQnLclHjZTWjzoEP4Bs0Z/I4WOOQQsmjOXzU2gEchVLFZ4oLAfGN/ZwUJOFigxa54GydA69oNiFMvfshCF7NaaYgpGFD6kzwIcWuW65Oxc5HfIjmkquK5vycvy+ejp3wY4cddtjhDyn+8T/+x/zVv/pXv030AbC3t8enP/1pjo+P+Y//4/+Y3/qt39o+t1wu+eIXv/
gHPdQddthhh+8JxjGR80ApiRADPkScC2jeIDicmBpb8ChW6NgivqndxyGzXg2cH695cO+E17/+iJP7PecnI8OQ+fSnH/LZT3fsyQIvQjVWMCJEjCRRFUox21FzUbDSilo4gBotUsUGWt0mJrqE2vliOb21TcdhcnWcxbioRZU45wHLElUK3sUq9jBbRRHFuSrtqC4gOok9nEec7d4TtrSOkUb2PqQWShevU6BccgExYYjFgBhJ0vc99z+4y+985de5+97vcbAfaSMkNty+dYPZ7BZNE1nM93DOEUPDatUx9EpsI4eHByyWC1ywIqxbn5OHnlJMkS/iKRS61QndZlXHk0ldYX/viKbp6TYr1uv7vPnGKffuvcnNm6/y6ie/zK0XX8b5FlA7f1O0jatOH8UcVXCKVCHClBPM1KGkirqJrK+CESbyqyDiq5DB5pk4sW6LyVEFzFGlul24qizRqiSaiDRxViyXUo97ecHATWTdheWuTp1OW4eZ6boLuSRySfTdOd36GFTwbmajlkhhhVQBig9CSsNFtxXWWaGVcHMYwdet1zx4eI9NN7AsWm0/p20UzdMc0Xp2BFypHRpThJDNcxNJ1Yk+nQciyGjnSMUSihCc1m6TUqwbQkFIANts653uY4cddvi44zrxxJMW25/myvGdxH48aZ/P6jTyPMe9zunhKjeNx1933Xl7nnP6NAeHJy30P+0Y17laPH5unyZSefy5ZxFpPAueJtB40tx7mlvI8+A6kcrThDeXx3ideOfy89fNr+cRzFy336uu0fOKZ3bYYYePD37/PcscJrcpAFIqg1Brv63bR40ZmPahpf4vTC6X5jYKItWRQ61xoNTavsozMAagXCgrLhQkVoddvkcJMEXL1G0vRB9TXKwtt4taQ8RWvKKXBCFV5SF1gfpiL5fvnxfjmd5TE4TP35zzg0czljpC39O6QlBl2CTGodBEYdF6orPfod4rh3stmjM6Qp8K/VAYtbB/Y4/FoiH4kdKZqGM+nzObRUYV4tENGgrp5CFd17F6eAYiRKBdRnRUVptEKuDalnYRmTUWQ9IjDApjSqhmGudY5551NzKTQuuEvXnD/iIyeuXsdE3fFbpVYuwsmiSNmU1WBgXfBEIAUsaVjBuUfjD+adZG9vc9e/szc48Qi+MQ1Bbfx8w4JkSURRPoexhTQ3Nwh/Diayxfeo1X77zG8tVPcePOLdqDG4iLaBnr9SmUWodPTShbsdL0q2vS8WznkF031KGNLfgrMKvcmCLgF7XppIMykoeR1aMTjr/1LYb77zHcf4f1g3cZHrwH6QznqzNvv0b7jFOB4G1f4oltwJcEqSDBopBxQtMYZZHHAa2cW8kF5xUfgwkdEHwjSPTgIq4V5igxZmQcSEOhMBIEmhbmsSUn5fy8Z7MeKQp+FpnNZhQcm2GgmUViEygZ+jFBKUQXyZVHjK0nl0LWTBs9XhzkUuONHGNRxmzuwN45cygR439KdXCB2jiUCkg2brC+11JFDcZTVqHIAH3OOCA6E0NMV04rTwWOcUyknGlmkbZtEFVSsftE8B4RZRgL42gcUxOEGD0Z42y3jiAiNYoavBPmrWfeRCQEVAXNVZxS45eTCkMqKEP9/Fo8DkUp2bgjBbIWcsEETVolbK6Ab4wscplQFAJVJAL9MCKlINGiaXAOH+aIb8xZuTpzUAUiuViMtmggMDkohzrfQv3yFC2kYU3O0G/OKJtza6wSj3pPHpPxviJQYOw6Sk6XPjwfHXbCjx122GGHPyBMxfvUkfT4Y5ef+83f/E3+/X//3+cXf/EXf99+9vb2EBH+w//wP+Tnfu7nvu25H/iBH+Cll176br6NHXbYYYc/ZJiKkoDzARcDLgWyDLZULqH+Da1kHSiaCc4eL6lw/HDDO2/d5913T7j33goR+2P8wb1z3nn3Pqdnp9y4eUiIseY2muLe3BK8OWAIiFNUXXVWnWwJqyuCeCZWxLgSi9qYYjvMGaRG1JRMIdf18FxdTAQv3r531a5Vgok+atFhrhFmQTjlcKrDnBhqoTFlel7iitBiYpNSBQ+ToKWUgvcBC3axriEV3arztRQ25yvee+ctfvuf/ArvfOv3mLfQNPvceelF9g8+yWKxsCicrOSU6Dcd6/MNRTNNG7nd3ELLQL85I7YzYvQ0sYG4IOWEbByr1Sn9kFksDpkvjlivzrh//wFDHuhSYr1eEZuG2HpW58d03YYHD97i7Td/m1de/TKf+cKPsHf4ArFpTTgjDmUACabUnwipMrmtVFarii8uBD6V7HK1u0mLiTicUsThXIPD2mWkCoOM53JGkAhWjBu7BbU7odL81m3kJ+7EivtvW8SQSUhkTjaCFaGFwuS0oShZE6rWVbFen9Kvj+tYPMHfsC6GMVN0w2x+mzi7waOHb5BPzrZDmyQu6HaqkEvmwYN7rDcrlJsEF1E1yqaUbM4xVTQyUXPOBTw2tlyKCWAKkB0qxQQjksw+Ux0qDaWUrazFTouNoFAFSlKq8OpCXCPfhSJ5hx122OEPE57megEffnH+8W2vWrD/qIQhl/G4S8fThBJP2+fziE6uWrB/0vt+VlyOALnqGE9z/PhOF/yv2vd17+NpQogPe+wnPXfVmJ7nvV83l687zvOM8brnnkd0cx2eJFjZuX3ssMP3L64SgF18VSG+VNfOabuL9VOr6WvxNOkxtm0CdWF4amqYvqqCYvuAINtoF6mqepXqEvJti8GX7n21icIeckwRL9T/TVxCdfnY+omyHeR2lJPoYxIH1AXn7W3x0jHrmJbR8cUbC15pYDasiCUTNCPJFp9zEeat43Dm8QHSqLStuX10Q0GdZ3Xa0+VMjIGDwwXBW7tQPu9Q59jf2wM1kcO8Deg4sDpdsVn1bLqBcSwslpE2etJY2HSZpA4fA4vFjHbmGLNjXTx9KaQ8Er0jCKy7nnEcOZwHct/TtpHFsmF1csr56WCutL0t5muuTine4UQ4mAeiF8aUyD2QFE9hPm9ZzDyLvcBy2ZAJ+OBoxCN5tEaVsbBZD9DMmS2OyMuXeO1HfoIXPvs5wtGLLG4cMttbgmvBNVB6cFWmIWHLXVw0O1VqwwJYmKxpVG3uTddPtc5dqUIPdbgaD2vxQBnJXZ3bDiTi2sj+nTn7L9/BFSWnkX59znB2TD474cEbr/Pga7+N9m/g/CnCSE4DKoXoHaIJyCBKSSM+NjTtjJQzaRhMKJBrg1D0uGDCAzx4H8hZ6c7X4CPRQQxQ+kKfTIwRZ3aemzaSU+F8NXJ22pFTYv9gQWgjWYWhKO08MmsbclZz7CgmVMk5GW/oPCoekUwUR3DmtGEtZDBkE1TMol0LzRZVHKIzUQWC82IRPllpYsF7T06JPOYLPjTr9mPVDzDmgnOBJnhEMymPSPY4l3FBoDYmiXcm5vAOTYkhZZwPxFkLqnT9SBoB55nNAs4pqVgklALiA5oLWhLOG+fXRIuKyTgks3UUisFTxBrrcjYei9o45DA3EAWkOIooKdc1tlLvJKpoqfzSVGsAzvs6ewNJhTImpCRicUhQfGwR3xjphznhiPMUj
CuduFhVtQYysTgZ37b4dob4hpyU1K1AR3ABco34RlBxxiHHSCnGY/Wbc1Ynj1C16/VRYyf82GGHHXb4iKGq5JxZr9esVis2mw1d13FycsIHH3zA3bt3uXv3Lvfv3+fBgwc8evSI4+Njjo+POTk54fj4mM1mQ36CzdP9+/f5D/6D/4Czs7Nv20ZE+Kmf+qnvyi+LHXbYYYc/jJiEElqLRhGHc+FSkYk5CeBQEqqJXAbwC3IZcTi+9eYJ3fnIBx+c03XmCKJkUl/44P1HnJ6dU0o2twGqxWl1JECzERQOXLHnitYCQTNaJgLDXD1MkX7RxaJqcSOKWodCVaYHlW10zNShY24Orgo8fFWhFxOATISOo7owOFDF1yLJeWeih9qlo1OOL9RojVoo1aLISB5zxJiybbPmSgpBSiOJwrvvvcnf+7v/E8cP3ia4gpTI4V6Dyz1RjIDAO9b9OZvzDevNKauzc2LwHB7d5u74AWfHJywP9mkXc/b293HSsV6dksYRF6MJJtIIgokNSqZtPOMAKRtddH5+RimJbnNK0+wTG8fQP+Lu+/+M8/N3WSxuc/ulT/Pqp75oETNi1p8qUq+BoK5YdxOwNdOtIhBRyys2/sltibhJ5CPIVmCjZKaIIWr2saqJZ8TlOmnBCBAjOkpJ1lUy5YjWzirT6BQoYkWmXSTUqYlMqihCtVjOMmYFWkphSD3rzSnDeA4oIRyyWLxKQVmfj+BaPvulf43bL3+e3/vaL/KNf/L3rfvAcWEiArhK2Klif590q7pI4SppY/Nd1Dp4JtGMSKmvsx1Moy2SbYfFCntzBIF0KepISYgEpFiOcimC6FgdWagdY1M0zg477LDDxxff7QXh6xbILy9EPatLweXnn9fJ46rjXrXdh7n3Py4yuE448Czn+1mdJD4MnuQccvlYTxvv0xw3nmW836kg5DoXjOfZ5klOHh/2fT0rnjaHrxPaPMtn4HHBx7OIdXbYYYePJyah/beJwS49b80s7pLY3biIXEUXtSTl8m3DIeYcKpNnZa0Vt8KPyd/D1YX6stWBTPWcqzXrxf1oGulj9yzVbR29jXCpsaXWBGPH29o0CvW9yIXgo/4L3/4+Lp8jajvKrVngM3uR22VgmTPL4GiD0m8Km96afJatYzFzNN6aEEKwmAh8BK+M6hGvzILj4HBG00bKONKfb0CVtnFsVh1h1jLPhdXDU7r1yKbrGQZbzN47mONR1p1dh4IQGs9yvyU2gR7HujhWg7klzIJDh8zxekNEuT2LxjEsZ9y8ucfqbM3JozXrTSalyiMIFiGrCg4WXlBNDD04LSyDZ3YQODxqaWKgiQ3qlTEJYRFpvSN3A2nMdIPH7b/ErR/4AW5+4Ye48+UfZO/2bXwzt0sjl+N7FNHerk51nDBhQqybOUQ8Kg5xDnXBmjIcW+fQLa1Rp9TlB6Qqg9QmCVJGqNyYcVMZLRYrjCaKgERhdnCD2eEtQLj5pR/jc3/6z9KdnPHgjTd59LWvsHn7K3T33yDqmuAdmgspZbz3JpQqxgm6KkApGXwQvDdnVucFFyPDOLLZZBRluVAagWHIDF0mJ2iDZzEPNG1gTMrZ6cD52QYnjoOjBb5pGBJk72jnwRxxk7lGWGOXMqZsM99IH7wUQvC4klAxIciYM8NQLNo5CLnycTFEc75Qix0RPKUKIEIVfahmE4Ko7V+LIEWRnMklkbLStJ4mejRnxoxFnBTIueCGhLqC+EDbRtBC39tcCjHQLiK+bRjGQtaE8zBbtLjgyEM2cUS9MWm2iB8vhdZ72tZbVEwVBJEV78wFhBgZi6BjIkRP9AFVSGOP5gElgIuMVYCSc8I5RwgeVWcNQ0WRUh1PyOClfkYjGSEl45Lc1nC53oHU4oQtNssjLuBcIGdBxhEVD95RNCEKLnp89PgQSSmTeot2yaknNi3NrMG1M3IpJjQRxYWIFM+w3tCdn6C5x3uP8/H33/i+Q+yEHzvssMMOHwFKKdy/f5833niDN998k7fffpvXX3+d119/nXfffZd3332Xhw8fVlv57wzjOHJ8fPz7Hj86OuKnfuqnvuP977DDDjv88wSLcEnkMjLZBEziD+cCUmxZHqCQyDqQdbQ4E4X33zvj/r0VQzfYgjIOVbPou39/xcNHJ/TDQGztD3Gpi/8OoehoRa+yXZR34slaJtZkS+BsaRK9VO1OBEvJ5uZQNQFlInycQ11G1FsUiPeV+LFK2lXXAwfVCaTGu2wTJkHE1y9TmBep76Fg2Zcq1bVEEM0I0YQEWFeBAKU6fWgBHyKb/pycMu+8/RZf+9o3QNe0UVDZ5733P+DeB+9xdHjAcm/OMEIIMw72lywPDtg/OCKGSIiB/b19lgf7LPb2uXn7RWKMOB9AC5vNOevzM85OTkjaA7A6PWUYO0RgPmsQH4lRePTomPWmx4VAyh1pI4gO9N0x6/MZcfYS733wNmNKfP5Lf5QQ7DhKFTVUcsqEPaZ6MGHNlKFsLEXRbATBJA0RwTxV7HmVUs9fBjEBSCHXCJd80dEkzq55VtSNFJx1LGipmcZWCIvWErSOSYpMk+jbOqm2kUN1ca7kwjh09JtT8rhBJLJ39Bo/8a/+X+n7Db/5D/+fpOGML//In+SVT34RaYS3v/7LDJtTI9oueDdcbdxxqmzOz9is1+RcqitNjR7YLgpemrvTlzMLYdXJVaYKRmSaoRZhINg1wCtkE86IF4SAcyPiHN5HShm2gpkLVnC3OLLDDjt8fHGdUOE7WYj/TsZz3T6fJqB41jE/6fVXCUOeVQRweWwflYPJVcKUZ3EOedp5etLzz3odp3ig67Z/XvHEdQKUj0KM+aRz+axj+bDH+05FJE8TMT2OZ4n12Ylbd9jh+wffJvLY3j/MPVRM9c7kHmqLl25b94EttLt6y1BM7OEn8chWnDHJRvSxI1aB51S7TXXtRExwafF++zq99NrLD5k4ZRJ7TGKWylZsuZqqNLn0s/23dRqpte72Hl2P14jjEwczPjOHIxKzouwFYRYcabCIlXnrWTZCECF6QaIQG89YHIe3Dkhjpl/3LBuIyxmhDTgteApjTjRNZOwz6/XIbN4ifeZs1bHpR8bRRtM2nnY5J85nnJ+cM2bFR08THcu9OdJEegKrVNgMyZxUgc1qTd8PzDQTUPpxoDlYsFjMeHjvhE0/kEZl3VudHaMHzSQtSHTMxVFyJhX7HRGccHjYcHg0B3EU59kkRbIQg0O7zFoLfnaT5uUv8Ikf/WPc+eIPMDu6SWjn4D2otwjauvxtHiwm8pjiK8S3EI13mmQ6eukam0ikzlOd3Gyny1rn0eW6XepcnIgYL0D7+yp6E4AU4wjSCHlAU1+PW2dFaFjceoHFi6/x6k/+bxnOV6zfeYMHv/1LrF7/TYb730DcOaqZsZ/cNSCPoOoIc4d3uTakmLvFejMyDCYMWcwbvMBmMzAOSslCO48sF+a+e74a2awTq/OBGITlXouKpxsU1wYW8wYnhXHIoA6JM0ouDGOhZBAxF1kXxNyAzb6DInYtilN8ExExrkidmkuut7gTrQ1KaRjJRQjR
472Qk5JLIRVlHC/ie02QMeI9tG1D0wRz1igKKvggW44nlYJ3FtmiKN0wsu4yUZQ2BpxXVDw+eGZzNZFZDOSiW86xaEaycabBO5oQaVqTI5jrBZXnsbgYxYyESlEQj3fmfpxLRiaHGIWUEpteSSkhjEQfyYALHlxj5y4roqMJZIIjFzFRS8mA4n3Ex4CPDc6bkzO5x5Kz7V7oxc4BeMZ+IGdrFGyaORLsugiO1PWMQ09JIzjBxzkqdQxS0JyR7T17pGRl7M8hD+zvHeB8+K4IfnfCjx122GGHDwlV5d69e/zDf/gP+V//1/+Vr371q7z99tu8++67Vwozvtv4c3/uz/Hqq6/+gR93hx122OF7hUmEIVVgUXImNjPSuDFnAHx9PlRRRaJgzgGCKe3LoPRDqmJ03QoAkMDDR+fcff8R52cr9pb7TC4DgqdUi0GKKfWdeDLV8g8rJFUK1ATTyYXE1u2rywdSrQpNFGjxIVa8uRop4tS6J7wz8YctqJvNpxOPUnAS68nItW9Gcc5v9zXZxCKuWq1iLgyTKEVTFTL4alsoeDFrxlIKUhJOApmBoR9Zrc754J13+Cf/9Nd4+OiYoR945c6So+WSG7duEII3hb2HwMh84Vnu7zGfzyuZ4G2xX5RuvcKhPEgDs/mCxf4hOY1mS5kVHyKxmbFZbcBD27Q4hPl8DhpYnZ0RgtA0AWFGbCz3c905KAnnGygrXnzpDnc/+F1iLHzq01+mmR3inbfeJ9GLc1mJqYuuJeuQAnDba2nXwIqzGmwsYDnKlQDTqXvD1bgSs6WchEITXaEJkAQCXgPqAyKldsXYPHDqqsuH7deyaE29UyUhoNZBoqVQciGNibHv0DziXOQHfvRf5V//N/9vjOPA6dm3eOsrv8qtF17jzkuf5p33fo/QLPDNCkl5a9G61YDUhKDNZsOmM0cyZ8mmqAhFFMSsaKf8aFPTTJRMte/VOse0QHU/8c6BBILL5NqFYrFHitdAkd7IAOcpongXybmHIpSS2F6kHXbYYYePKbYuU9e4VMDTF6+fNULlqkiYjxJPcqN4FrHEk55/lvf4rO4STxrPVY9f9V6eV2zytPP7PNEzT4rOedZjP8ld5FnG/KxiheuEMh8mZuhpDjGPH+uqbZ7m3vH4WK6aA8+Dp4mintU1Z4cddvh44bILkEWHbpkOtrWj6KUay+DA4jzFVQ7kQjAxuS5Oi/UXS/VUTkKmZ7dajMlJkuk+NOnt5dJ9aisjqbXwJTsHqUIC4z+mWnCKqLVjGX9T32cVtExdMNOxLqDsN5Ev31zyqQUclIEwFnNaLZBTQVPm1l7DLIgttGohNI75PBKXLfPlgiGNlJTYWwSknaFOCQ50LOQh4VUYE0jTcHgg6KhszjrW3WCRGN6xnAWaJrLpE+NgDRHiHe2iZT6LECKDerqcWXUdJHMpWXcD3XlHHAcQwe03zBcN/VC4tz6BktlsMufrQhZhHj0pZxSYzSKqytBnonhCcOSg7C09e3tzMtYAMo4jXiKNV/oews3P8NqP/3Fu/5Ef4YXPfAYJjV1psYVsExXVayoOZI6EgPgq9vATsVUvQ0lIHqBkJA/GbWgBzbZaXx077JLpdo5M7i642mQ1uZziLEbGmWsIEsA3EJr6uIDztmkTgDlSgFzQMiIpQerNfSFtcDIwW8yZfekHufmFH2RcnfDgq7/O+Td+i/d/81dwp98iSiIxophIwongg8fHwNj3rNY9QxLaJjKfWUPVplfGXvGNY++gpY0FEpyfjZyuB0ouzBtHGwPdkOmLMl8EFnsREWHsbH6K95Q0Mg7J+JySzDHFNzXCWcEJZnRSaiTKjDRm0mhxNG0IxBpFIwhpTOSUUO8J3iFS45eKucaaQKxQVCip0HUjbetom4h31oiU1T7P4h3OO3NuKYLD2zlC6Uohq6NpWxpv1zLlgs+Z2ERcnFGKMiaz0PAh2pXPvcW3xECMEe+AGp+CCME7vDgkOLKqNRgV47icM0IqpZ6cEmBOsDklhqykeusx11ytvF2NvnaAZpRCVsU7xftAyQUZHeKUtmnwsbEYGqCMiZIHRMUcUyohFqNAdJQSGceRshkQVVqZo3lOt15TxszQn9O2M3y7xIdpTI6UkkUNid3n+vU5aeyhZOaLGcv9g+0986PGTvixww477PCMmHK8hmHgN3/zN/kbf+Nv8Au/8AvcvXuXs7MzUkpP38l3Ca+88gr/7r/77xLC7ra+ww47fD+hEhLFCgznAiWbfaJznqyDCT/EV0FEdSrAnAgEsdfht1apl4mLfp14794DTs/OuP3iLYL3oELWwYqo2hkxZdBO/TGg5nRQtNoFGoMxFdegaB7JWiY3VLRG1dh+ZNtzgTMXD6ldMK5aa+KmvgzP5FEoU7SL9Q3UgskECqYjkO33U3EENRoGsw296LaR6nphTlM5dYzjwNnpKW+9+Xt87au/zVtvvc5mPdJ3yhvfPKd07/ATP+a58conaNuW2LYImeWsZX//Jk3boLUdKcRoSnnvaNoG5wPHZyu++c1v4Zwwn825/cIL3Lr1Ml234tg95PThA44fPUDVcX5+ymbTMVsuODo6xIfAkHoe3r+Ld54bR0es12esTo6tcFVYHhyxOp3x9d894cVXvsjNFz6FBHNyMaFNFexoYoposetR6tXIJiJCt4Wcw28JuOnaIiavySVX4v6iiwZVE0poJcrEinEhkEuiVNGOd97m9cUrKWKdEA5BJdfxMnFk9n8p5JLIaWBMPUXBS0OUGd5HxDnm832ca2jiHB8C3ge8m+HDHB/WUAVM8thXGgc263NyqvE4ohYBBDhxNiadSENXSaVSxUfeOlqsfQ3L+fXgIasgRQjeYzIqTy4JlYJ3ETTZ57USod5HsqZ6XS7TnjvssMMOHz9c5R5wnSDhaYv8T4qleHy77xTPslj/YdwsrnP8eBb3jaft+/L5eNo5vW7M14kjnuccP49zyuTy8fg2TxPYXHX+nleA8bSxPkmg8aS597wOGB/WneNpz39Yx5UPg6uOtRN/7LDDxxxy9WdfqDUYl6Qa0+1AL+Qa00ut4cDWyrePUWNNH1NSFBS2/MS0z0v7uhTXMo3xQjiidUF0ir69EHe4unjpnKtL/TJtvRWjTBqAy++5VCGIiRImsYDt14twZzHjy4cNd9rCniZIiXGErIWoguTErYOGxcwzDoWCI/qIc0o7m+Hbhn4cQJWmCbjYkLNASeSuo2ln9P3AZhhp25ZZdIgqq25g3Y0UNfeQeeMIDvrNyFAUiZ7ZvGU2twXkTRaG7OjTwDiOkOydrNYdadPjh54mOo5eOCS2kfN1Yj2MpD7RD4VBLWYkogwpkUVYNOC1kJMSvdiCfyvM5g0hBmLbkEumjErrA2G5j3/h83zpJ/8Utz/7WWY3buJCU+fARRyyUmtraZGwQGOs4gs18UYeoOsg92iubhuazIFjEnuowtQsU2rcj5qYx7gCc7mweSYXc1ioPIFxVpO7gjWfBOO0XKixPA3EhXE2zoN4cx/BQwOiCyQXdEzouIIywngKzhHngTs/8uOUP/LDvPI
nf4qTb3yVb/3y/xfufx02J0DBDmUim3WfKEXZ34sEH9h0dh2d8zQzz/5eSwyezWrg0fGa1XokNoH9gwannvPNyDAm9g9nLOcB0khW8OIpIZLGbI0602fb+crNFXKyyBlNwpjVHEB8PUceZsFEGZOMStVimVUUH7w9V/L246rFkVNG1NrQun4gZSVGYTmLiFh0jZBwPmyFJDklQoy0s/pzVsYxE3ygWUQQc8Yw0Y81izlxNh7NVYRWKNlEFM572sbTxBbnHGPKpCEhIjQxEMS4zDQWhmL3BV+djksp5JItJkUVLZlcHFmBXIhObD5ooerTUOqcS8UELN5ihccxE2LEOUfTRpsfIYIUSrZ7oQ8elZY8DOQ04tWaAr0U4mLJfG9B0Mj67IRus8GHhpKVft3RnzxCyogcjMxFKERCbJHgGPtMSQNalFQyaezxdRxN0+D9xf34o8ZuhXCHHXbY4RkwjiNvvPEGv/RLv8R/99/9d/yDf/APvqdCj8fxoz/6o3ziE5/4Xg9jhx122OEPHE68dQlSLNrFVccLH3AlVFGDLT6jkEtHKj3eNahmtEagqKq5PYggap0QeUy8//4xx8fnjONI8NVWdYpS0cvuEBNBU0kQanE97b+UWohoXRwvQMZpMBvAUmw7V1X2Wigi+OocomIdMhaVAaKKOBOfiE5Ferbih3o+sKJPqruGatnyRY5gUTWVABCptTtUUYGgKdN1a8axZ71e8fDe+zy4/wGr1THeKacna4a+oAWKCO897Hjz7bs4UV5++WUL2HGw8Q2hGwmzhsOjG7QxMvY9U6fH0A30/ZpHJycM48ByueR8s+bs9d/j/vvv86233yG2c2azFi0joLgQIDpCDMxnM4om+v6c27dvEeOcd958m7ad4w4Dq82K4/UjwnLJO29/k2FIvPnNN/jsF36Cz335xwlNy+TcYXxVtbxQjPgQP1WSdTuLInHOIoRErbCsl3Y6oyi5ijpcJVccRcfq8nKhE7G4ogFPqIWr2WMqYt0BLqCY6CdIqMOYisMao1IypSQjfop1fqQ0UigE9bz35jd45/e+SdaBd77+O2jJjMPA2Pd03QrFEeIC7wZEhonfs5iXyhWWVBj6jpKqKCN4SnLkMQEFcyeVmp9rFpr2oXDWBTSRfzK9DwUXEDUvHi0grpiNZxV1TASRE1dFTA6hbMlO+7x+J3eQHXbYYYd//vC0xefHBQBPeq2qftu2V4lLniRguIwniTAeFyE8y6L5k17/uPPJVcKAZxEXXHfMq7a9yrHjedw7rnNQedaF/Wdxs7jqWFcd5/Ftrhrfs7qMPIs7yOP7eJIA6Unn5aMUWny39nXVeXjSZ+Q7Oc4OO+zw8YcxCDU24VJbydYJozYUTLUaUw1vXR7b11x+sqheElQAWhsJlNrAoJeOXg+mbDveJ+HJt923Veo2uq0VTbByyeX0cp0mVgluX1/r2HK5g0HURCf1vTTe8bn9hs8vHC/NC3uzwPokmeGqc8hYaBzcWkYWS3OGGL3QOMcwGO9RVh2NOLxXgtgCseSM1AgRHzzdWUcqcHgww5VEvxnpupGuy6jCrPHE4Emj0g2CBsEHRzOLLBYLCA09jrO+Z0gDXo2D6YfEOHSMqw2BzM1be9w4WlBUebga6cZCtxlZ9wpxxmIeod/Qb8Ya51MYB6FpIDghIcRlZLE3x0fjhHLOZBXi8jaHn/kRXvtj/xK3Pv8F2sXehVCCC82QSSwCxBmEGYRo86kM0HeQejRt0JygWKzFxE+gBSm5zjfjZEzwwYUIRKtvar2GU6MR07zcXv5gjzmLCzEKxhpDRHyNHqpNUS5WEUgLoUViC2FuY59cQaIHbSFldFxCWqN5ADIueBa3b7K4/S/zwo/9OMdvvM47v/ZLnP/ur9CkB6Sup+8TWgp7ixkhenPuqAKFeXQsF5Hg4exkw70Ha842ib155HC/RYHT9UhKynLR0tboIfFCaBd47+i7kVym9X3FecEFR872t63zFterosSmxXkPpeCcwzfm5FsoCLZdyhlxntg0dn1TJiep10tBEyWbaGIci72PmSMG47lSVvrRhBqNE1y9BwTvmbURFzzjkElpJISGdlZFDMWjydxIJDhUYRxGm2tq4q9SChSLWAlOiMEjAmMpjGmwGegtSjjngZSUPpsLSKgCJM3GPdUWI5LCmBOqSvQWO1Ry3rrVFDXXW5ER8bKl7rS6zWpSkESoLh++nSFAGkfII148Lnq8Cr14+r6n7xKNTzjJxDbSLPYIcc6oQjrLdCnju4FutWFYn9NEO7fjkHA5QVF8hDyODF1P7ntcEGbLBW1saiNionwXBb474ccOO+ywwzVIKfHVr36Vn/u5n+N//B//R37913/dlLt/yPD1r3+dDz74gJdffvl7PZQddthhhz9YqDllbJ0P1BwTto0stZNAth0khaw9RRd4ZzaTuo2oEEqxTNXKhHD//jn3Hz6gW6+Zz1oTUaiQKdaNoFOPisPVKBgnbLNcsyamnFobb6niAKGgFJI5dGyfV6RM0SyTYMS6IpTCpPEoFMil5l5qdZUolWBRql8ISCFn3cbEaLVuLag5kqDG25RsBbcUyJn16oy+60lpYOg33Lv7PqvVMU3jCc0RZ2en5JyJjdB1SgaOjpasNxEX9sjFMkJjGxEHq/UJ4opF64zniDqWB3vM9xYcHB3hfOSFO7fZnK/oNh2b9Tnv3/2Ad957m2++9z6lOOZtZDYL7C8aXnzxZUJs6M7PGdZnZIXl3ozFvOWbb77NP/vdb/DZz32GOy+/xNGt2wxDhyMwDCZaePToff5/v/p32Tu8ycuf+GzN1QSwCJ2ipX4fKZpQNWHRRKSZyKOeb6m9TE7MfrR2STmEolO4jquRMd6EIgJaHT7M6rb2b4inlIwmc7BRNZcaj0OcjcthQp6sCkXIpaA4VBwpmd1oKUrJI5oTRZQHH7zF3//Z/wc5DTx6/21cEE5PHvDo4SHHD+7bvHOhiovsbTqveDWPE+9NyDIMHbkk2tmM+XKf9WbFaf/QxuQwOkLzREtUDs8oJutAM1caEQfe1U4yjzohkSDb/LbPa+0gqmIqrY41znmyTJ/T79aNZYcddtjhDweeVSzxJGHEs7hVPKu441ncLa5zD3nSOK4SeDy+/6ucOK4ax1Xj/DBuEU9zFHlWJ4gniVGeJSrkuvf9JEeOp4lynjaep4lgnvS6p43vuvPyrGKQy+/pSce6Tozy3XbQeBaRzFXYuXrssMP3Oyb+ogouZBJv6GUJx8Ua+uSioZBr3TQtqGcUp9RIV3OXFEzgeeG/MZWrtaadCqrKpUziDa31rDUzyKW6LteFeXuZOONOBFvUnmJiULbchU5dJvV1Mv3dglw4IABgzTC3Z54fubXg1VhwZWQvNLhhgKyUMdNGz2Lp2F8E9g5a0pgJrWcmhbPzjoIntIE+F/qzNYsoEDw+eHIeEaeQMpvBozii96R1T8nQ9SPDaI0IizagpXB2VlDvaBdCM2uIbcT7huwCXRHWw8CYRtoQcMDx6cj6fIMbel791Kssm4SXwmrTcXbac7ZKdC
RwuOjo6pXYE4w8oLXT3CN57V0YJqOmFkhX7Z0fcRoufMpGB77NK+QgsvXsU1EVdX3PrCsxzuHSFFxeytj/Lwd347uxenHH/pd7j2hS8yf2kP9ZZVrLn0nm/HjWeIFOvruiY4ZGxHB8cMDahvCP0K3yzomhV91xG6nhgiGnKvq5kMJAMxcogayUQMDCJxjQolccrQ25/MgyQCkowvxTW+lJ4XEuEhzy8dhCp5/GnsrL9HYVyOktF8bLo+3jXpQ4dY3DV1I29L88RNpJQc9pJJFoOrZ7IfTdHFIe1PUjxziJlEEgOEQGgNxjW4aoQNPdoX2KpGiirtO0IgoHGF04hx1anxaiKmMAiphvOUP6NPPU44HdnDbccPMURiH4jSE02X8ELbJfKHCSS1V4uYEsQi7TFMzsNomwuPv4sXn3kaZYGYHAUiiZgR1WT32UhV1hjjmGxtMw+eLgR8LLj69Odo9/eoXBKWGaOEACufYl9GpaOoHBo9q9bj+0BZQlEXSIQuCkYKXGnWcS2KYCTk+edRKVMMzLIhhg7jHMZZUIMTl51aFWuTaKdXCH1y9IgxCZFi4j4RyAK1wmKNQVQxwYMxmWQSQdN9qgIExEdMSLimcwWuKLDCwNJJc8dYpPdEPN4UVGVFXRqMTYQXQRCboqaJbZpjxHScYhGxaNcQfBIDRpswO/VhTYiKgEOIvk+v0b5J78fY9D5OoHQOTHLpjd0KxGFslWKJXHovCKuW0M6xUhJ8l7Dr+MYzPzbEj01talNfc6WqXL58mb/21/4an/3sZ9/s4XxD1a/92q/xkY98hL/wF/4CRVG89hM2talNbepNrLoqMa7ihRePmS89kmMw1EdcPUrNopAIG5mq7myJ8S77cpx4fqRWN2JUsC7y4CNn+dZvfYwnnnyE3TO72CJZcmN0DcyLZjtuE9fAvRG37sujJOPLAZSBvPBtJZE/VNdN8PohkgFhzcSDQfVjMiVAcmORG1Py+E8A5qFZJSs3hszTtG9rDFUF3/SO9/KF3/0dblz7AsTIYqUsFgkEEBFCUCZjWK0WlIWhrooUG2IdRVVT2BEyNfjg2Tkj2Q7dMN56H+/85m+hWy2xRNorzzI1grl4hvHWhNF0hvfZMSMmxUIX2mQ3uVwQEC7efx/TyZjKjdmdTWmjcuHCWW7c+AST7fNcuvAgdjXnTGF49NKDLJolGpJKYWt7B2MC3WrJ1mhEXRm6LlC41BeqCG27yhbrBW0bOJ73aAzp+Jzh7Jn70lwCIGA0nUOLQ0JMTamSfmagQCU7vIjJIEaGX3SACSSDM8M5ThEm6DAHw1rlQnZySSBPAh1ESfaqYtLciEofA2rh3JkJ7/uWtyG2pPWWq9f3ubV3yBNvucBjD98HeIxN87/1KyKRCLQe2g5izMazmgguQ1ObVAwJmwrB06wWNF3LNKb4IV3/l+OHM/NFjMWaAiQplKxN2b9iIyqp6Y8xYEw6r4OAbQCpTD6+CMnlg/yaU12DQNYIPvjf/5vIpja1qU19jdfd4iXuFm1yp+cP993JyeHVFqfvtP/fjwPBaZLC3Uge97roffvzY4xcuXKFp59+msPDQ5xzzGYz9vf3efrpp+n7nq5LYLxzjjYrFr33axKH956madaEimHMgxPFcrmk6zrG4/F6DGfPnmU2m3H16lVijPR9T9/3XLt2be3kcXh4yGQySYtfOa4lhPCKKJkhYqbv+/Vtw36dc5k8CtevX+fZZ5/lwoUL1HW9dgS507m7E0HibsSg11uvRe6503W8U8zJ7fffvo/XM/fudPzDebvb6+hux/BqhKnXmqN3InG8HkLVxuVjU5v6ximBFJ+QSfjryk4Ma6r76YXoQTuQO05jMvEDct+pWIWZCI/PLI/PHI9sGS49tM20EG5dPWavtZhJwbmxY763oAA6AmMr7M4qdseO2Hf4Ei5uT2jmLasQCQHEFewfHGOrgp3piPmipes9u7tjZjuWrWnB8rhLPaZzjCvH1naNNZHFoqXvlN4oRVVSTSsqV7BaNhwvlehGIEJVGbYLR2hbXnrxmMs3j4jGcunCiEcfOoe4gmsHHdEoZ2Zj/KrjypUDFvOWs9sjmlXP4ULpK48rIhIi091dmralb3razjN2nguzitrAxBVwcIAsPAZH2wSOj+eEwjK99CAXvuWdnH/oPN2tF3n5v77A0Rev0S8DGoWggfHb3snWww9jyvqUqCM5Ya5dMnK0SiItdETfEboVfTMntEtC3558ZpEenwQPGXPQvPxsMiHEWAZXjBTHkolDehLDup4zmnJqVTMeMUwj1RSRMjxeQEKeRSI5RjmLnIbF9Sx2yvDGel8pLjmTN7AZ61IGLkpyGkm8jjyoFKGbBS4nUpBMcBo+Q2OKkUnQXt5XegJREljR+h7jHGU5IvgWW45x1SiNNybHDh8bjCrWFejgMCaBQb+ThhqAfBzrs5TOoRlIHjKMUjI+QSZb9JhgiL0lGIN1ZSZ9JDKvRotIvg6hRfoGKbYoZp6HHn0b1176NEaEqp7ijFLZmrbt6ZsGI56irEAjs60t+uAx9Ix378P3hlJb6pFDY6Dznj6TPsrKYg10TUfvPTFCXVrKyiaXCSOJqIBbu9uKWNQHYuxRApGSPkSapgPvsc4RjcN7iLHFWEdRFIiCD0rnFR+ExcoTfIuxOSZGsuuKpO/hapILhvct1qbvslZAjCF0Tf5uLvnVkFx0I4J6j1rBlA4Rmwg74vBWaTtFojAqoapc2p4YorUE7zExCbQGtxBjTCbNZafZfgUOirKi73q879eYoBgDxuFDxDcLiD2FFap6jK2mQMCIYiQSi5LYKbFtMKbDloorSkxZIxLxZUW/OsIWNdg6v0w3xI9NbWpT3wC1WCz4Z//sn/Ef/+N/XAMgm/rq1OHhIf/8n/9zvud7vodHHnlkozbZ1KY29TVdCrz84gFf+tKc6NOicQzJDlBjTHZ8zpG7uASGxJhdNQzCEMVhsTggUhjLxYtT3vPNb+GJxx/i7JkdCmezawaJIY4gJkA0aUtiiOSYC7EMMTNGbG4JBxKHpGiPTPgQY/Jz0gJ/WkDvM4RjTxwf5KT5NMYSNWYyQoKCjKTjSaSC5HIi2SXBiGTSSCadEHG25PzFB3jfd30PH/6FqxwcXqdrI70HawSJMMqYhcaespgymUyxxmGsQ2KgrKdYVyTAIURsUVCUFWexxKDMD2+yOt4jtC1lMUUFClulHFQAAjFERKHvO0Lo6boVxhqKsubcA5eoXIEVjzXCH/8j/3fe/cS7mB8fU1hHv9inFqV2hnJrRlUUTCYT6lGJ9xOqsmIy8kzHBatVwACuEKxVQoj0fcv29pTCGPomsmoSkPLYExc4d/58tgBNTWQCYGK6njbNmbXgQ2IGY2CdlUsGVVTStRIhqEfEZnvIpGxA3foaDhNUsrKLteIkIvl2K8ktQ0RQMYyKMdvnd/imp56gKCxt23Hj1i0+8du/w6eefp7ipcjFs1tMxiXOWqaTCePxCGN26fqebtGxahXfRpzTDARlUocHsafARVG6rqFtVqlpHsREKsQQ8/e1rDI
RWY97YHUklxObKCchgWDJc9Zk0KRPc36IKzImzY/hfElMahAiUZKDSCJAcQJobWpTm9rU12m91mL13Ral74VYcbeoi9da2H+tqItXczy4fWyvVa/mlADQdR3//b//d37+53+eW7du0TQNwNrBY7FY0LbtmkzRdR1lWVLXNU3TrGNW2rZdR6wkZ6q4JmWIyJqoMZ/Pcc7xwAMP4L1nb29vHeMSY2R/f5/VasVoNMI5hzGGg4MDBseR02MDKMsSa+3ajeT0ORrcPobn3Lhxg4997GNMp1O+5Vu+hd3d3buer9PbuVtkyd2IGF8J8sFrkXvudPvp8b7acd1++53uP72NGONdySCvRqC62+PuNEfvRLK5F7eS1zq+TW1qU19ftZaiGFIEwCAsOLUQmLwhs3HA8AwZSB+Za5DRjcoY3jqxvKUueGgq3H9mxEMPn6Gg48blQw47SzDCzAqL/TnLZUdhlLOzmknlsCQRxWQ2YmYNBzfn7B97nLWEqHSxY1TXOIHDoyVSOHbP1Nx/3wwJgeODlvm8RUUYjR1nzmzhu5b5yicnTgRXFIwmNSoVxyvP0VGHuBprhenWiMIJ81vHXH55n5durAgC73r7/ZybFewdrDjyDefvP8NsMmbv1pxr1w7Y21tQlCP2WsNipQQVqqqiqpPzZbNs6doOOs9E4NzUcW5rAofHhGt7mB4Uw3HbpmjV2nHxHW/j4pOXsHHOtV/7ZY6vHKKd4ttA8BExgjcl2088TrlzBjGn3D4SY4fkjBnRIVpFFQ0B7Rt8u8A3DX3nid6jIX0HQRX1PuMCJvfMcb3dE0eKwTU04wmZlDG4d5ywLsiPTz318IxkzZlG/EqyiOR23UPI8TQKmALcifgoRlkTMeLgarJmUawBk7R7yRjXWvWR8APE5uQayViKHWb7+vjT4Q7HIZkwkkgaURN5JIRI23usK7GlR32LrcaYos6fweB9ixIwLjtvDOOQJPoRzdhhxmMG2seAow2Y4snne6ZbxYCKJfpAsD0mODQ0ycU14xnGBDSaRLRBoJvD+BxiCs48eInlwQs0saWqLGIqRCKj6RYigl8dUdgaHzuMEXa3d8AK050LtDcPKWKHtY4uKH1Mr1MrijWBro+0PViFqhDKwmFMJtmIxboS7zW5zZkImuZh1tzQdh29JncZa0t8DEgAcfk6ZkFcxNBppA8p3qVVQ4wGo2CiR5wDa7C2wLgika/7HgkeJy5hmn2fo4LTvDa2ImLxGolqQT3Bd1ijqFVskZxHYgj0MRJFGI8KqsIkpNck5xlX1ARtcyxxImgkiFgJ3uOMAWvW77VSFPQ+gjMpRisTtwRomyWhW1K6ElfUmOxAIlIi2qVIG5VE8lCgXyB02MJRTGZ431JicMUIjQFclY73KwBpbYgfm9rUpr6mSlX5T//pP/HjP/7jzOfzN3s435D1iU98gn/1r/4Vf/Wv/tUN0LCpTW3qa7qCDzzzu9c4PugQ69ICeWZwRx+ILmXKmqKg71YMq/UGi6y/XOcFawxWLLtnat75rod54omHuHDhHEVVZ8XlaSmAIOJAhxiVAeCNaaE/Z1uuVRSSskRhaFrTrgcyhypYsirRSFp4V08CCrLrh0kNfiSx8CNpmzLktnJiT7luUCVHkUCOd/WQc1+resy7v/k7KMqa//B//iuuvPgiRdkTQ6AqkmrICTjjGJUFVVUCYI2lKEdU1QhrC1xZQ0wNU9c0+K4hBE/oWlwxwrkaMJhCsDbg2wZrCsQYTFEQQk/sA4UrmU53aW/tE3xH7LuU21rWFKpYaXjwgbNEPUs0jr5pERe4GC/RB09dVpy7cBZsJPhAPZ5RNT2juqBwLSEohUuRLt4HlssV58+doaorlGW2lbR867e+n90zFzI5KFmJGgNCkRaDMsCiGXT7vXEjA5Eh2zVKVoKoyQSdsCaUDLoWNLuRcBI4lJGYNaiXukbyvDIohnoy48y585w9v5UWjNqO6WTE9RvXeea5y4zrOtlqxkBdj3jLI4+ws7XDcrnkc09/kue+8Bxtr3QerB0wHkPEZ2CGBKoEgSj0XUfbrtICWJHHH0MGp4QYPeJKyGQkI/bEDWewHB7UMyKZ7JTmeLo5ZrJMyuQdlG9CzK+5gYw0ONqcUittalOb2tQ3SN1tcfz2ej0RL6+2ndd67N2II2/kuO5Uw/Pm8zkf/vCH+fCHP8xyucR7z/HxMcArnDAGksXg+jG4aQzb8t6vnyMieJ/s6QdXjRAC3nuKoljve39/nxgjFy5cYH9/P6kVreXo6IiiKOj7nqIo1m4ig5vHsD1jDGVZsr29vSarWGtpmmb92NNjGe57+eWX+eIXv8ilS5eo6/o1nTpfjexzp79faxv34rzxRhJHXg9R5E77vxsJ6fWch9sJHffqHHK3cd3+GrnTeIeYn01talNfv3Va6C0DRsCwlB+H5fy8mD4kG8gJNDH0iwrOKA+PLW/fKbloYVoKuzPH/Q9sUcSWl56/xWHrOF41BJTRmZraSiJ9bI2pK0NVWlZNz87ZbbrFipsLT98brCtYLjtsYZhNykTGR5juVJzZrdkZFzTLlsOjnqbpqSrHZFoyrh3dqsWTFoODKKNZInY0Tce87fEerKtxhWU6LbCh4+rLR1y9vqDrYffCmLc9cR9F7Ll65RZNr1y8/yyFwpUXb3Lr+h4aDfV4wspHjo4aNJIiX1ct7WqZFvh9ZMcBRWSyXXNhVKLXbhEOekwUVn3AK5iyoNoacelb3knpem7+X59gdf2I2CZsQIPifSKEalTibIdz73h7ihhJwAGnsaPb41fUGDT2hH5Jv1rQNy3Rd2hIPTAZUxqiUIbnJbPX046xcuKAISZtW30mfwgnsyk1+kpYu24MGEZ6WO7P43BPjmrRgPqW2HeItYirEFOkQ4mBFFecxqLreFxyn5+PlRPnXc24GOuRx/XxaSamiAwUp7xdFJEcORNPf64OTqiDS0j6ThdDRHVFCD2+s7hqiavHFNU0kT2iEPseomDKTMLQE0LNcF7TLoZx5OM8RbZZf7rrSfysRghBkV4IxoKzGOOzYCxkh1mLEFAxaFghsUOqGfXuBSbbF1ldf47l8QFlPaGoCqZbMwRYoQR6yqoCcdgyOUuEvqc/uEwZG5q2x/sUJ12UFnxHu+wJOKrSUmTRWQiBKAkPFVegaogCxgGxS9iNKYgKq2XLqg04B/W0wuAIXcSQZWpGiFFp+0AgsuoiXZ+cZAkRMY6okdB3VK5Izh/WgghGI46A2PR+1nufrqRRirJYk32879E+xTkZl8R7riwwtsDYhP/2QWl6T+2gKgAN9AioTWLAmIVHogmHtAXB9/i+IwYlOkOR8VjjlxDzOSJiMBRVgYilWczR0DKdzhhNpoTcO0Tf4wqbhHHWISaRQVQEtWmaWWsQPGU9QYoRvmlZHlzH+zaRSr4CoNaG+LGpTW3qa6ZijHzqU5/ih37oh9jb23uzh/OGV1VV/MiP/AjXrl3jZ37mZ9jb22O1Wr3Zw/o95b3n7/29v8ef+lN/ire//e1v9nA2talNbequFVXZP2yIKjm6hROVQ7
YkEAGiYI1DQ7JWNOJwlPR5AV0JCMq4KnjisQd48vG3cOHCeYqyZEj9XDfeJMVllBS/EWLA2gIfMkAPmNOkAZEkpNCAEYPG7OCRsytFksMCeRwac+OaLS9T82uBsG7ok0PI0JumEYpIUijI0K7mplkHtcWgXFhrhCirEU889Q7+wB/+X/mNX/soL37pd1k1gcIJhQUnSl1WVKWjsAVGknNDWVQ4Y7DGYgRsWVMUkaKv6Is6u4zYoa9HRPHtkuXRDXrvsXWZtBXeE0JHiMkNQ0SxVlMGp0JV10CKvSmKEmNmyd7RCn40gtJRjidZrWsYjydEAnO3wFlHUVim44rCzVEVCidYJzgnqPY0bUNVaAJveuXxJy/xnR/4XyjKOp2hmPNsB5WmEUyOcVljKZIcPhLg4fLxQoikeBXNES0mqUaMcdlVI4IOLi0QJZGFomgGH0xS6pxStySiT1J24QrqekRZOIbIH8FQVxUPXDzPe97+CKOqZDweE3xDUTgunL+f7a0dVssFe/vXeO7Z52m80nuoqkHNwmBAcxpzSWIOVbq2pes7rLNp3mb8I2oixcSs4EmEjtT0p8UVm91tJL8m8+MGZVEmgMSYLVRjtpBdhyUJIoE0tSyh71jHIG2oH5va1KY29Xvqbovad3PjuNPz7/TYVyMA3MtC9etZrL99H7fv+0tf+hI/8zM/wyc/+Um898SYHKgGd4+iKNakiME+fXiuqtK2LSGENaljOIYY49rdo+s6RGTt1DHsoygSIdQ5x5UrV4gxMh6PWS6XryAVHB0d0fc9ZVmuxzYejzHG0DTN2hWkLEuccywWizXRY3AcGf4exj6fz/n0pz/N2972Ni5evLge7704V9xr3X6uXyuq5PcjGLnbfPz9EEhei3Txerd7J0LH7S4e93IM90piuZsbyaY2tamvv9JTDgMnZgupBza69oZcL+Mn7kcWDOT3l5mDd+6WfNNuxZYF8T3OOsbTGc18xf7Rkpdv9ngiIUa2xsJOBct5S1lZqjI1dZ0K050py+OWvaO0mBn6yKLpqWrh3HZF1wfMqKQoLefPjiijcvP6nHkTiQqjsmR7u8JZ4fi4oYsC1iKlpRSHs47lwnO86AhSUpYFqKcoS2IXefGlG1y7viCYgnMXJjz04Bbx+IjLe8fUk5pR6djbm9P4Q7pVhzWCqyviqqPvAqGPjCc1Ej3qA0qkLoSdsTIdF2zt7mLmc9rnb0ADfQdd8KgzlKOS8fYYV8D8mafpDhf4pSfEfKWEdSZqMuYQ7MX72bp0KS2ii0nCHDl11VRTPMogIIqgIRLajr5t8b4j9jFhVRoSeSOERKawQ4BszB12ZI04rXt1ySKLPI/0ZB4l7okSs5hI5RRGxOBGopmYEdckiHTu2kTwMA5siUoikMTg12QRzdswkhw3RMxg1pAiW43JDiTZaTcrOeLwuWgcyOBsQsLchojjHCejUZNzhkrGwmzGuiJqBhJJEnMNOJqGQAiR6D2+a4lVRzGe4OoJiiGGHunAFEUWXuWIlzULZa04ecUrFTlN0IqsiTgxZswoORDH4DHeE53HaAD1iLok6op6Eu/UL6A+g1Q7bN//Fg4PLmNMSd9H2nbO1pbDCExnO4gENPZU1QgfoG2O0NDR37oOywb1iTBaJF4FTS9EFYrSUBWGKIJvQaxSjSYIhhBjiuxNAEzmtTiCCm0f8AhVWVIUJrlioJi6THMpRkLMrwMkOX20Ad/5dGaCgrFYZ3HFGFu4NB9in+adKNa5NF+j5ljj5KY8YGjeR9q2g25JWSVM0lUFtqhQIGiKJPYxUjlLaWJyghZQK4AjBEWlz6Qsi9hM/MDQtz2GPsfZ+DTXCJSjEdPdCywXS/zhLSoH1WxGMz+itoa6HmNsge+7NGaTMbxM9rJWEGvxWmCNhdAR+g4VoZwkcghWiWqI7SphxOaN/763IX5salOb+pooVeXzn/88f+kv/SWuXr36Zg/nK1If+MAH+P7v/37e+ta38oM/+IP83M/9HB/96Ef51Kc+9TV3zIeHh/z1v/7X+af/9J8ym83e7OFsalOb2tQdy1rD2991H88/f5PlyqRFdRPREFNDnhtFMTY7gWTiBSYTDVxqKIlAy4WtCY9dOsf58zPKqmDYSGpgbSaIWETjuuEzJpMyjEE1EBUK41DRNUs9EVAcSiI4pFJEIyppiT9kh49E1IiJCDCoeUyKxBgcEkQUIUfODIBwXq0XkiuIGZQPktUQObs1bSPfBoxGE977/j/IaDzlY7/yEQ5vXYfuBr6JVIWhroTZ1m6KdJGsPgwRcYmIYI3DmnQuLYGyKIjqEVsgURBrCH2TnCK6DutcZuW3lMbQN8tsaRqIw0KK7/Btm+JZXJmP0RG0RDOoQGypypozZ3aT/WTT4ENH8LkBF8EVFaUzDMKbwlpKq/Re6bpI0yzxIZERxqOKP/mn/j9ceuitFK4ixj7Pn5B5NoJoJglpdvWQCKbILhYCMebglmEMBUjEYtfUhHTu8x8ChiI9XiSRiTKQI8p6W5JdRCQDGwCFqynLCh+Vo8NVJpAEUDh35izveAp83+GcY37c5sWDRJqwzmFzrmrroe1gPCKDQqQ5Elk3n0p+KWikaxf0XUtZ2PzaGaKFWDfsIS9WYbKVq+bjzgetmkEfJf1uDDGDWwNPSYwgIQFC1gghZvURNufkGobM4DVBZVOb2tSmvo7rjVoEvpdF9tvdCF6LrHGnRfV7dfR4reiLOxFN5vM5/+W//Bc+8pGPcP369WRPLSdkwuHf4KpxmjQxOCkM7iCDK8hA+Bie571fx7cMrh2nHTvKssQYQ9u266gYay1FUTCbzZhMJutIGVXlxRdfRFWpqmrtOlLX9ZpgEkJYkz/6vn9F5O3t7iR933Pr1i0++9nP8vjjj7/mub3T9Xg9JJFXm3evFYFy+zjuNq47Xec32u3ibuSNO9Xd9v1qRI67ndM7Hf+dztG9uJJsalOb+voryf/T9c+Yb9OT+yEtWKMYSX4gouAEHhg7vm235GJtmJWGUrJLKHDj6iGrpiM6OGw8aM+lcxPuP1tSOUMYOc7sTlkue5a9YPDMDxcY57h4YUq7CFw7XLCzU3N2e8TNW0uiGKo+cOHclP6449rNBZ1XbGnY3irYHlesmo5jrzlCoqecjhnVFb4P7O8vWDYgRYWIoe06dmY1q6MFz19fcLQMzL3wwH0j3nr/lKNre8xXnu2tMbEP3Dhecdz11E5ADV2EvlnS+8B0NMKerWjmK1SV8dgxGzkqlLISLpyZ0L10i9VLc2Iw9AE6A64eUY1dErcQCEctbeNhECXoIKghp7Wk7rYH7n/iKYrZFMmRwAmjyddzeFvX3KtHj/oV2s/pu2N832Xiak9yvDzBsRLvIEImf5wIerK7BREZnCo0P3Y9VyKDlyiEhCFIJjhodpLRhBcpMYky8gK8xiV0XerTiwpj60QY0ZicSYbni+R45Yia/B1MBAlprAOhRKxkBxJZjz/pkTJJRRLelrgfSZiFnIi6kGFbljVeRj5N64/NfMJN/m6RzwFRiB20/RF921BNVhTjHXAlQ
RWNPaYc5ZiPU2wZHQQmJpNt8o50+KwWBhue9f40olGIElI0ru9Qn0gzGJsiUSTma5BEP+pbRD1SzBifuch0cobF/BqjrV2WbaRZLXHFmL5dUhaKKWsWxwdojBhnUR/oDveQoJTZWSKEmPAyUepREmWFGPAYbFHi6hpXVaCe0KzwTZ/Iz5qiU7xCHyLqPaUFWyRCU4ghETnEYCSJdGLo0QhRkrPeEHetocNKEu4YY7E5MiVqJLZNwnJcmckyMV3LqMl1xAgUDnE1vk3EHQmJVOEKoXRCVE/ftXhTY5xjZD1urECZyCj52sUQUITCaToOYzMuCmVZJeJPvwBxqFqIASTgXIFTGNcVq9WYGFrC8phxIVCOCdHTLwPa95RVkV71IqQ0oICRANYwqrfw7YpuuSI0TcJJjcVKAVIiVY2oPyFpvcG1IX5salOb+pqol19+mb/1t/4Wv/mbv/lmD+UrUpPJhD/+x/84jzzyCMYYnnrqKf7KX/kr/Pk//+f5xCc+wa/92q/x0Y9+lN/+7d9eZ/i+maWq/Of//J/58Ic/zJ/5M39mAzhsalOb+pqswlne8563cu36LT76i8esVjapJQhYMWgISDnkQ9pXNIEpdiPFvlhVJrJi18zZtoHSSFIoaAJNbG42B+hl3cTn5j/qkBOeGr4hD3QgXqT7NTPIs9tDJoXEmJ08BpvL3KAONoTrOBFNC+NG8gLGuifVlDtJkp9EAYkCJues5ozU7JDJ0HCLyaQVFXZ2zvHud7+P+cE+V178Ajde7JFxjzVKVZRcuPAgzhYUpsykmuSSEbXHWIfYGmtLQmxR3yFR0ZiUK8YW+LDCd8ukfKgqVCzOOCRookRET/QdfdcSNTWo6j19s8KUaSHF5CwSYx1qHdZayqqmaZaoPwRniL3QB08IAc0klZDScxCFrvOMRwVd5+l7T9Mcs1quqGrLH/qjf4z3f+cfwhUOIwnIkWjBGmL0mGiyOuVEhWLUJFqGTW4yYl1yCSFFnCSyhj1Riwjp2qaJlxUkw/2s0aGQAZwhtEclYo0lhohYi0alNo6I5WjRMG86vO+Z1gWzcUFVluzubLNaLenabr2AFaNP0ToxZsWDoQ+BzoPvBR0NIFWGGzPekdRMydGj9z1ts6SuUqSLxmxHm8GltK8+RcBk9CUMWcaSyDEMrxXI28hAE2QwaADW0nkbXldxAJFMynd9hSPJpja1qU19ndcb0Y/dKU7iXuMt7nbbnRbQ7/b7nUgHr2dxX1W5du0aP/mTP8knP/lJmqZZkzWGcYkI1tr0XUCTq0eM6XtajCl+7LS7xzAWa+2a/HH6vqIoKMtyvU3vPW3brskmg+vHsB1jDKvVak3u2NnZoe97nHMcHR0xHo+BEweSvu85Pj6mbVsmkwm7u7trcsfp4xnIJYOzSdd1LBaLO57n2+u1yDWvdvvdyAqv5ZrxeuNjXu94vtwSkXXUzt3qbiSm09t4LXLLq+3/9M/b93f7NjZRL5va1Nd3pfbc5IXyk1t1iLTMhHzNi/RGkswDoLLKO3ZHPFoJpe+JwSLeEFVZLDpigC4GdrdqpiOL6XpGteORB2fUTgkRdicVx0vP1SPPcuW5sFOjQbj//ISycnzphQNc7djanbC/9Ly037A9KnDGcuvKEQdHyVlye1Yy266ZTByHN+YcrTxBFesso+mEsnZoHzjaa2iCwY3TgnTsW8Z1xbWr+3Rt4HDuWfaBC2drzo2E6y/fYr7wqFjaG0uwQhvABMEHBQf1pEKKmth42qg0h3OaZcfWSNguHeO6YLY9ZRJ65p+7jN/r6WMibVBYpls1rnDYwiC+Jzae0PgcG5JJHkL+7IhrR8sYlb6s2XrscUxZg5Hb3scHZ9eBNJDIGeobfN/ju47Y9ZB759wy5//JCaakAyKUP48GBVIm/yShRVzvA9HkHHrCJALCGo545WeMH5r9RFwIXVr8NmUCUVyZthMDMbTZFUJZ82DMEAlDwrvSpM2Rt+lYNAwsluFcJEwH404IJAmcyo4qNjuBSMbJMoTAIIaKWehkIMqaRJJeM5n9sX5xgWr6Pqh9w+q4xbct5WQbN5oQsWjf4KjRIR7wBKU42YicEE6yJez6WMjxORJTnI5KivKJwRG9x7oATrN7SnIWSYv9Foke+hYpK0w5Y3LmPg5uXUXnh9SjXarRlKCBvhO6rsO5TOzwHWd2H+DwhcvYvmVU1xgNeB/ovCIERmWJcyV91xH6gGqPKUskRvq2xUhMrhy+Ax9QYwheCTHPJ7HJhSOmeKKYHVe8xuQqko87CX/y5NWIGHBkt2AHPYa+9xSiGFfiXJEEU6r0QQk+YFRxoljjwBZYW2EImNBQqKcsDaOqoLQFIQSaZgm2oKpLLB0mrHDWoK5E+xyPZDLyqwn3SghXjj0KAesK3GQEOsbHiO9THI8VQYPgV0dYm7C10HbEdoV1xVqIZlGisQnzEzCuyPMtufdI9FinVNPz2GpMc+sqVntiOKZQxVRTCguxLnllxNEbVxvix6Y2tak3tVSVxWLBP/pH/4if/dmf/ZogPXwl6tKlS/yxP/bHcO7kbVdEePDBB3nggQf43u/9Xn7oh36Iz33uc/z0T/80H/nIR7h169YrLGm/2nXz5k0+9KEP8R3f8R08/PDDG/LHpja1qa+9EuH++87y7e99jGuX9/j1X72Fbww+9njtEvARPNYVGOOw1qUMRSTbXSZ6xgjDrukZzffpbr5Ae/Qg09kMdQ6jLjVzBlIbGDG4EzAGxRpDNMnmMIbkdmBMspw0WKw4yLacyZkhxXwgrIkciSA+NJOkx6uCuEQCMNkGcWiwRU9UEvlJBpMa7qEvza4Icur/RkiEEknMEWccRgzT2Tbvee93ceH8BX7HH7I7nXBweBPf99R1xWi6lZsYj0ZP6NsUZWMEKcapMZbURAWf2exagIlo6EEUV9aIK1OeZoyIK5PVoiRlgHUGCYHSOQg9vm+SY4e10PtkG5pjYQoxgKe0Fl8V9F1D7HtiTCAKMTHuu65L589C1ykBxRmhWbUsjgOrpfL2d7yD/8ef+v9SVaOcmxwx1qAmQEyOLDHPN6uDmkUzOYN0Zo1DiVibP+eDQOKKpEUGCiKR5MmarVQzrJA2NTjEyKmoIDKeENcqGYlJJRBiwPuOxpMWvkKks0IIJquFhOADTbNi1Szpuo7gw1qlYk2BiKFXaHvIIo/s7pEBKxmOjzVm5HtP1zeEmEABMQZjLAaDj54ipuMPURE8ZOKRRj0BT1SRbPdqxBB0sGYdHpPOm2EAEgJr3EVjViLF/Or9SrTIm9rUpjb1tVOvFrlyN2eN1xOdcafH3b7dVyNyvJ4e8cvtJ4cxPv/88/yLf/Ev+NznPrdeuD99bowxa1LHaVLGEM8y9Nanz8Nwv6pirV3360O8yrCdgbxxOvJluL9t27U7iDGG6XRK0zSsVitWq9Urxnp0dIS1lrqu1+MJIWCtpW1b9vb2aNv291zXwZnEWktVVcQYaZrmFY+9G6HmXhwnTj/2diLN7du509+nr8PtUSi3P/61SBL3Qgi502vitY4NWJN4hnky
XM97dfG40zjudiyvdr5e63F3c2bZ1KY29fVVeur/A9Yg2RVh4MKfPDb1ckaUc5Xwru0RF01gBKgo46IA39N3num0AlFK6yiN0LQ9s5Hlvvt3iF3D0UqZnjnLQee5etRweNwxqwqa3jMqa3oVrl89oBpVlGK4euWYZR8Z1w5noGl6lm1kNhlRFcpsu2I0Ljk8WnLcRoI6TAnjrRGjcU1z3HAw7wimpBxX+NBToNS1Y+/GAceLQNsq81XLme2aM5Xj1q0Vx63S9YE+RLwG6vEIK5ESxTrDdGuKt0LjPXvHK2LfUxiYlZHzswkXHzzL9m6Nf+kGh797Hb+M9GpoNFLOxozGJZNzM0KzRNrB+YC8uKuZxECO5hC8D4i1xABBlPLcBUb3nUmRFfLKznTt/KrZMUIB9cTgiX3GLvqe6MNaXCNrsUQiFSS8KmSCBBlDYEj+TVhQTBEgACIFunbFyPuNGadZO1RkHCImsocQkpNHjnARW6BSgFE0etR7NIREoBhiQQbSQxbHrKexDJExGbfC5PEpGE2RwGLQ0CcB1MBxEZucTdIKOmR8QLNrr0gWVJm0eC/kmJrsMKrE7MCrIDFtL+NkMmBrOZqnjUt831F0K+rZNqI1XlfYskJslZ63JtcM+N3tipOB2JJu0phdUKJBCMkBJeRrHTw2eNQWGQfRk3MlFnyL1BOwYyZnzlOPRmhREDQQY4u1junONt1iTui7dRzuct6yuvIltmuDtULTkuanRipnsKIJF0OxZYHRRH7ulnMgueKuuSy2AEkYTppiJrm5ool4JJKw1OiJPiKaCC9RLVFDdnxNczERYPL3Fy3Q6FMckw6oJAk7MwbU432gkOQYlK5hcpm1RJwqRVVgCwNG6bxnuWoIGtk9f4aqdPTHRynSp6wxhcM4iPl16mxBH8EHjzMK6hGE6BPByZgK62qMGmwBTlyaW9EjAvVsm2hW+OURahXUEHyXiCpVBYVBrOKKAlc6ojFEn2KLrW8IraEYTRhtbdPNj+iO99Gmp/Mto2nAVaP0WpPhzLyxtSF+bGpTm3pTa7Va8eM//uP83b/7d19VdfE/c4kI3/zN38y73/3uu94/Ho955JFHePjhh/me7/kebt68yYc//GF++qd/ms997nPcunWLpmm+quNWVX7pl36Jf//v/z0/+IM/uM5H3tSmNrWpr5Vqmo7r1w8pTcHDl7Z5eqdmeXTISUaoEnyPyQ0j5EX2GHJDCiXCVBq2xeMax/yFK6wevobf2cU5i+QF8tRYx2TLh+ZUUcuQi2rFEgiEmBtzTGrOc1MsajCSHB7isPCfG76oybg1UzOSSkNdalrXK94xd/unmnzSeMTkxh0gCCoJFHImvW+neJlBIZHOnWRFisk5l6Urue+Bt1AWJbK4jviGt7z1ca5deYECYVLXGfDQFPViDGU9SRbq0WOLbcSWGaQAEwIhdoS2JwSP4CjKkt6nrF2MEkPKN3VFDcsVdTVhudpjPj+iUMNWH9AqZaIaSRaKfdemHNZMeilcgbMFFiH0Pd2qTThBUNpmRbNqCJ61DWjohN5rcr7wgfP3P8L3/en/nbPn7kvqXlUGsoKoEAmITY0z6FpRk+xMc2xOVnqgkp+jyc0TzVFAeaFjnRurWRlwkm+bNFyJ0KNRT6ZwfmaMIbvWDBSJpJwZlw5nU8ptWTicM4QgKB2K5IUp1nFH2qeFrGG+9BGaDtoAwSviIBtrpNfP6YWNGPGhp2tb+r5L20vZMETiWlUkmoHKCGQ1y3DuNCZr00ROSYCQEUsUD5hEvIlhSCkGSe4q4dTCimTC0Ul4zGZRZFOb2tTXd90tluL2Beq7kT/udQH+9N/3svD8WovSr5cY8mr7ee655/jH//gf8/zzz6/dPIZ9GGPWkSnA2h3DGMN4PObo6Oj3kFkGd4+2bdfPcc6t42GGappmve3BxeNu53Y6nRJj5OjoCO/92u1jeO5poob3fr29wW2k6zqWyyVD9AuwJpR0XbeOnHHOrckL0+n0jtfiTuf9dlLH6Xo1gs/dtvN673u9j32t/dxpnPcyb3d2dtZxtjdv3mR/f/+ex/xa9Wokm3s9N3cjdG1qU5v6+qt1N5PfskyScqTkDUDz4v1gOFAY5fGJ4x0zx4VCKMQyG5Ucz5d0Rw22MtSTggcuTQitZ//GkqNFh7PCzvaYdtUyno2xVcF+q+wftRzNW6ajkum0ZKs2tH3ksPNcuG+H9mDFzVsrfFkwHRcs50twJdFaqoJE+tgZE4BrNxZE6+hdBYUy2xnhnGVxtOTguEfLEaUz+NAzGzli1zFfNvQqHCw6UKEqDT4Grh+1tD7100EsdlpRj8fMj+ZUolRGiF7Z25/jZqNEDuk9DsO4gHNnt3jgkXOc3y45fPo5Dp/Zx7fCso+YccXWxTMp3WO1ortxgF+sCH2K3lXNMaMZe1FRQswEAGPytVEUQ3X+AtX2DphifTVT6clPBdUA+EQCCT2hb4ldIprEMCgwwppPkSbEKaLjcNPawUNAYsK18sxBNUXHDtsYFtlJC9aoJnHOsI3YQ2wJvs/7l+RgqiCxS44nMSRSgybCiCRgIbk95MXqjGycOvYUVTvwGyCJgSTPZoNBVfK2JGErml1ljcWYFP1hjIVoUJvILCmm2SbsSwYcJscjmxQ7syaWDIQGGT5HT4mmVPB9RzzaJ3ZLqtkZinpGAEyZHVv1BGNQTYSVJAQaSBvZ+XQQtazJLwPmMUQZezS0aLQYrRANiFqG0TPEKAMUY4rJFlU9pWmPiFT4vmc6GiWCSD1K89eUNKsFTlaU3RzfNhwtPL7rGY0Mo9EIxdO1DYKhrCtcXdN1Hd2qRQKI9GjIhI9TDBZrFCVhPNaCc5lIlMVlfddDAOOSu7KP4GOKeFHVdDwkQnMMIKIpzllSZI5vW0ImZWANpShioCgLqrKAfG183xFEGJUlRVUAltWypcUTPWxVhooeXbX4rkcxRLUUJAzV2oh1RYrPzgQgVBGjGJOilkNQQtvT+xRX7KzFGYs1KboGIr5bIqKYqkRjIASfSCUmCfikSM7EIqB9i6lKyunshOASlH45z0RvJSiJYNVGvFthbY4kX79a39jaED82talNvWm1v7/PT/zET/A3/sbf+LolfQAURcH3fd/3vQJAuluJCM457rvvPn7gB36AP/fn/hy/8Ru/wS/90i/x8Y9/nE984hPcuHHjqzDqVG3b8sEPfpDv/d7v5cknn9woTja1qU19TdX+3pxf+PnfZDQSLr90SNsk9rsxFiVFrGheHDDGrZUWRlMDUOCY0XPeBLZxCI6jq0ccPfcsO+fOU9YTcAHJFuFIcmMIGhDjMKRFclTW0RlCZslnBv262R4oAtlq8iTCRbCDUkFSvIyQGO1isp+EDq1AbqolZWQCWb1xojiIAzCA5EX1bBsoyRo2Nb32RMWZI19EwBWOM+fvRx97O6Y9YLQ1Znu2Rd93OGcgKqFtiaMMYJ8CKMRVmGqEqUeE+SHtwQ3a1Zx2cQTWoDGRZ3zXpHxTJxhLGmnvccYxcmNie4PFaol4S1WPKStHVVbY0qERCoH
OrxBXQgwE3xK7LqkffKDr2tTIi7JczTH02BRJizHQdp5VG9nZnXLxwfv5f/2//w+eeOqbM0lGs6JEQMPa5QUUbAIlIhFRk/EYQ9REIkpEhNTw2pjAiOSwksEQkaz2SW4zaLIlReDE28siElEzpMlmYIbMAiFF/VgpqMoJZVHiipIqN5JlUWAtRB8JwWRL+MCobRmPJuyxR+87NA4QY2rx+6D4kAgrTgWrA/lDGERCieSSFix839H3HWVZs44rGrangUhSC5FfM2ggasqBFbHraKR0viPgMkAzkFQMEiHG9pQ62xKjz2qfDMTE00exqU1talNfn3Wn/uvVFrfv5hZwN5LI3bZxr+P6/cZ9vFapKsvlkp/6qZ9akz4G7GAgTQy/n3bhmE6nKW/8lGvF8LsxZt2bD+MfHEFEZO0CMTwmxkhRFIik6JcQAm3bvoIkoqrs7+9TFMU6+uXg4CARZDNxY9jXQEoZ3E4Hl5FhbMP+B8LJ8LyB9HH6OOq6Xp+D0+f99bpp3Gmu3Os8u9vzX21/t2/nTgSO10uYuJf9rVYrqqqiqipEhPvuu4/lcknTNOtxvB6yxd3G/2qvsbs5s9zrMWxqU5v6equMFmiO+hRgiH8d3keAqYOnxob3nHHsWCiccnarQnuPdIbj3tMHOD+d0B61dIk5we6sRiRiSuHMhbOocew3gb1bS44P5lzYrjm7U3N+Z8rh4TF2VHJ2p2S5d8wKoRuPcZWjXyzZmpS4ymFFmZaWuq6YLyNzU7L78OO4Qjj84ufYHpVYazg6bDlaQjnewuBp2obpdIzvGsSMkrPI4U2Kyia3zmiJ4nCjksIH9g8XzM6fwVYl7dExW5VyfjblaH/Jsu0Yn9tltrPDrVv7lCSJzXg24rG33Ucdeq7/5udor7Z0vdDESKhLJhd3GJ2ZEg/2CSHSLZaEbogvSZ89cRB2CGhMYo2YCTgpbCeRFuzZc1T1mGRNevq9+wSrSY1tEn9oDMTQEPoW7zviEPOigTVTIjtmDKQFkcH5wyaRCIOjRiZr6OAbm4Qo61m1/njJRIjs9EFIkRYaWgh9GmmOp02P69BoEpaWSSbph8tkheH48hhiwK8a+uUc6xwaEgFDoxL7kEgCMbma2Ok44VNpVRwRgylLKPICeBCihEQYMIkYITHHotiEG0Tt83hNjjM2OSrZpPHFdaZNen7Gy4TBUSJhF1GhbyD4G1TjBeXsLKoBV40TyQQFCVl7lc6d6Mmxm4FMS3Lz1fzajRqQIIjvUVskh4hYpQX/wWZDBhzNIJLOjVQVppiydeY+li8fopoiTerRGFc6yqqm88mNpBxtcfTCc8TDPfb2D+lXgVEhTCdTxFj6PmCxlHWBLS2hT3iZEDGFS5iKKh7Bdz5hhdYhRqAPRAnr6J0QFB96QogQItZIitiJJNcMhJBJQUYEr0rI38cLm91ZghIIeI34PmL7LkUsYxArlK6gcC6dFp9cZ2zhKCuLLQzLNrC/aiiqiq3KUbtAaOZ4L4lMYU0ibMQWxGKsw7rk+uIieGHtOiMZ81ObXE2871AUW1iwBSJF/l6u+HZB6EOK2CoKQtch1iJFiRQO64QQE8FDHZQoTjx2NMFWU2KI+HaFb5eo7zAmgCjOCtEv6RvBumpNTnqja0P82NSmNvWm1K1bt/gH/+Af8MEPfpDlcvlmD+crWtvb23zP93zPl/Xcoij4wAc+wHd8x3dw5coVfvu3f5tf+IVf4Bd+4Re4du3aGzzSO9fnPvc5PvjBD/L3//7f/6rsb1Ob2tSm7rWaVeDj//0lrBNWzTGr5qRpNtmNYLDDTJEdg/NAwKCUeM6YjgviGElBQGlb2HvmBXYvPkA5mSQLv7JcM7CjxETE0JAb4ZgaVhWcscn+MPpMApF1Ex6JGLHJ3SArdyz2RLAxuE0IifghMQE/A6kEGDJN07K8y1EcmUAQU5NpRBCTvuIPGaPkSI2BQCAmExxyTEc6J6QmpHDsPvBWzPxlbGEpHy7wvqVp0iK8uiI11RLo+wVOK7RUYrtARAnLY5a3Xub41hXmh3v4pmW8c56irBEZ7GkLTEyuFKIOEaXAICFdr6iR3jfs37hJXReUVY0tyqTnsI66mKFiMTGgpkOPAjFEQt/ju57Od3jvWTUNDiGGk9PX9akxf9s3vZ3/2//zz/COt39bAjkkrk1Rkt7B5GubSAeqmq9RUjwkVUNE1BKVRKpRSyCiEtfXXkym66TwWZSIqCOaHG8igwoH1lEnkpxYEM3XNV3NgeSjRMQairKkKAu8D8kRxNiMMXmMhbJ0lKXD2oGiFIgaibFPexIlanL76D3EIacVJaokXII05RIpKRFMfPB436d8VpJCZ3D+yPG8GfvSTBhJ4x9UNoakqkjcj8HSGJy1xJiiknQgOonJ5I+AkayE0Lh+jWxIH5va1Ka+Eeu1Fu9fjZRxJ6eEe1nsvhfCwJ2cQ+51HyEElssl8/l87XRpraUsS/7bf/tvfPrTn14TPu60zxjjmvhRFAXT6ZS9vT2895Rl+YrHDL+fJhdAIgVYe0KO9d6viRZt266jZIYYmGH/zjmcc3jv18+r63r9uKIo1scy3G+MYTKZUFUVy+VyHT0yuJHcfp2qqlqTPAaCyqVLl6jr+hXkljfSIeJeSBt3qtcbv3Kv5KN7cbW5076GbcUYOTw8ZD6fMxqNXuGgcvpxr+Yicqf938tr4/bb7tUNZ0MC2dSmvhFq+CxKC+K6dmpIPVIJ3F8J75pVPFAqZ0uhdsLurMJozypE+hAYzRzTUcnycIHdqZiNRyznh/RROHPfjPsevsCqVY67gmWzT79acH7muHR2zHR7xGLRsN94Hnr0HItb++z1Fb6wjJxyfOMA9Z7p9hhnYDZyODEctJEw3aXc2mLeLvDHLW0XsGPD0XFLKyXl1NA3DSb6tEjcJtLj3sEhz18+xLqSsrSEvicYoS4sTmAZYHLhIiEEmmt7lBIxFm7ePMb3kVaFsGrZf/ka4ntqJ5y/sMXFMzWLL13m8MoBcR5pemXRK+WkZrIzYjIpYT6HrkN9InFqdqPQgUiQBTWQ1ulDjOlmMcnpQcCWNZP7LuKGz+GBuCHpmp78GyQSoJpiU2LoiH0H6pGMX61dKrID7NB7D8SgoVMXknOsDk4ZOpBeU88fs1tFwsVgDRZphOCJviN0q7UoCFMwuG4MAqVutcIU5drhM0RPbNr0cJTYtqiEJGppe9qjFkLMGIBglLS4PpBXsjBKlk0e20BqSpE9blpjRxXGGoL3iCZSrKlK1KYFeS1cEl9ZA1qBWKLEjIEkbEAHgRMhC7EMSc0ziK7iCT1H0jkKXmnnxwQfGG3tEgFXTZLLRtRM4M1El+F66Cs/m9c4TlIHETViQsxRL4Hoe4zrUS2RU7Kf9FxJDJR6jNqK+sx9yAtPYyqHK1x2x6hZzW+hvqNr54wnF2huXoejBYUqk4lhNCqwRmnbFcYo1aTCVZa+9XStR1Tz9gqMKH3bEUMWcakmpDRGrIBzlmigC4k8JjFiQoAc8ashkY+cEaKaJATKZB8NPcTk2Bv65CbTZwwsoWhJCB
d8nzAvQH1HIGKcoSgttnC4ckTwPYtVQxugHo2YFIaxS+nJfRfoY3pNyuB4E0KaQxqQkB16omdQM2kf8TEJn8QUWGPSeGJIBK8Q8b5FnFBVI2w9QTtPd3AdZyzVZJpiXCwUdY0YR+yWKeJGkvuxX82xziLVBMmOftYI6hQtHMYI1jkiKW45hC67x7zxtSF+bOp/6rpTnubtYMPrURts6itfIQQuX77MD//wD/ORj3zk6570AfDd3/3dnDlz5ve1DWstly5d4sEHH+QP/sE/yF/8i3+RH//xH+fnfu7nvirn8EMf+hB/+k//aT7wgQ9sXkeb2tSmvmYqaqTvlb6LdH3KlTTGEmMAVaxxqdHVtACf7CIjaEuNZ0sazooylgorhqAeRVje6rnx2d+h2JpRTWYURYU1NVhQ7xFrQG1WXxhiTIvYxggShRA8IaZGYihDcsyIornBSfQPBsb50ByTyA9mrRKBdcxLzvxNzWwCBaIqRtLWREiL7VFzlAtrJUVifKTGWPJ2jbVYc2KFSR5XOTmL9g2laahmKWN1Pp9zcHCA94GubzASMcWIqJHQN+j8BvEg0K2WzPdvsnftCsdHe8zO3o9zFSiUZUUUMGpTRv34LOIstlsidoFdLRGxVGVNVY7Z37+BcYqtCs7YgtK6pBhRTdaNxiQlhUQigaCBtm/oe08fAoPNqJNAn8QLOGv4rg98B//bn/s/ePDSo8kqdLhGOVN0cKNIt1mCDk4T2XHFJGvS5CwDVh0pN1YRbBZnaWY/JABA5YQgoUazi4s92Y+YEyVJbl6tFKjkXOCYlTKDYkVYzwHVfk0I0RDxMWcEk5UvMe83K40GssYAX/RR8D2EXrEGgiZ74QGTCECIgUgkqhJ8xHct3lXESP4XiNojUmKsxdh8XqOiJPvP9HsmHeX5qpoWTSzJNjehQoPKKc1zIynqJS2wxIG/lMlKm+8km9rUpr5x614X1F+NgPFavd3riWu5m7PBncY4RKK8/PLLPPPMMzzzzDNcuXKF+Xy+dscYHD3S948TssVAfDi97cHdI8ZI3/csl8u1G8fgujHErJwmVwyuHUPMi4gwm80QEebz+ZqkEWPEWrt2Bjl9jIMLyUAAmc1mbG9vY4whhMBsNqOua+bzObdu3WK1WgHQdR1t277CqaQsy3WUzUDyKMuS7e3tNWkEEhGk67o1UeX2c34n8sLtpIM7OVK8EcSR1yI83GudHvPpn68WS3M7Nqiq7O7uIiIcHh7S9z3PPfccRVEwHo+T9XnXvepr6dUwxdfrEHK37WxqU5v6xqzk9CHr/lzzYr9mvsBUlG+alTw+gh2jlCYycoZpaQhNx6qNHDcts90ZZ2YVx/tHlIVF28DNoyNcVXLx0bNceugMi+OGTktWi5v0R/u89fyIs9tjpjtTjhYdVw+OOffwBUZVxZXesTKRaVgRjlr6Zc+oKujajklREHvh2rKD3XNoUbDqOh64/yzG98Su49atA1w9oqodzfERwSfnzNEoYQsvXj1mf79DjaEuHdr2ybFSoV221NUW5aTm6GjO8bxhbDSLfSKj0tEbwYxqorO40OOccN8D5zg/LWkvX6O7coT0hi5EOmMoxwVbF3coSkuYHxOXPX3XoY1PsS5RTugVUXPMaCYNJrZDwmtiwhcUCLZmcvFiIk6IzbKFFGYi2Slz+EmOiNUYiaFLJJe+T1EgMQtQSJhOAg+ySAQgby8RS4ZxZUxAs5uoSBaNDG4gkJttNHogkT60W+LbFaH1uLrMsReJtIoIdIFmfszy5gHVdIS6gth2+GVDv+xz6532Ky5hFipKCNl1NqRIkKADmJBwBUOKBzbWklw0UkNvrcG3Ed8v0MM51hYpsoWEd9jSYqsCUxTYusTWNbEowUXEJRJDwr4c67heMmmK5PighCzkkTwkzaKd/AqUSIxCu5wTQ0+9lb6DunqasJRh7XEtZ1lfAtaupfk/k/cRJRBUkNBiYkkMPRozySdG1MQ8V3KFDmGCmAnlaExVliy7FXV9NpFv1NN1De18TlWO8a2Hg6uMtGM0TkQJ1UjfttiyYDwap3PbRUKX3IOS0UrCEpWIDx6NSewjCtH36TuxdagRQkjTUFSxKlhbEBG8T5gTaMLm8hyNmeiiIWCM4Jyl95E+k9q0C1TWYJ0Z3vgY4q9jn4RLRkpcWeKqMX2IHC9W9N4zns0YlxYX+3z+CoL3iWBjCow1iRQkgrEGH3yKJrYuuezEgFFDCB41gV6TY5IzJU4ULIi1aEhutgZDcB30JVaEohrhCoO4EqsRtE/ztDQUdobGnqIsQA39fB9jHEFTbLhql+ZyUYOmOa/GIGqIPhBDOBErvsG1IX5s6muyVFOmadM0639t29K2LU3T0HXdWmkx/D08pu97rLVr5cN0OsUYQ1VVjEYjyrKkLEvqumY8HjOdTplMJr/HHnNTb2yFELh69Sq/+Iu/yN/8m3+TZ5555s0e0letvv/7v/8Na+xFhK2tLb7927+df/kv/yXf933fx4/+6I/ymc98Zm0V+5Wog4MDfvRHf5QPfehDnDt37iu2n01talOber2VrCQtogYj2baQRODQbKc5NCYaI2hHRcuWLDknkZlJpI8hRiWo4oPh8MVDRmd/l/Fsh7KuEStYW2biRXbUyP11ijzRlPM4NIEJH0jKEVWMpKgZg0FMan4siZAyZD6mZXxJFotkqOF0tMUaRhgaVUiL5DYTByQvmOfen/SLNTYRCCQ5T4jJjhIZVEqEEzDiiATERHR6ju74BWajmtFoxGzrPMfHv0PfN/RdgbOGjhUihtgfggh929AulxzeusHNa9coxmPGsxlGAxoMZlRRFEktizGU22cheLwPFJVB3R7BeywF3WpFFwI3bt1AEawYdnbOYmOPGgfGgRp82xJV8a3H94EYEx1CxDCdbnPfhR3GkyUv3VhR1RPe9fYn+d/+9x/kgYeeyMqWgSBz4kCRznsmSuTblJjIO4Z0HYxFNWaVQLo/kAEaHWCOpBqKBIzYBMDYTBbKSpFEwEgTyWCyo6zka27yooUQiDl/Jd8eh0WDwYtmIPwYUufaoToomk8IFqoQQrLs1Kwb6hUar/ReKJ1k0ggnhCDSd/MYUsZviD1d1+HKpE7QTAiB5E7jJF2viCaBjWZylBniXIbM03SeZXgumfAkkl9bKaN4sIONgMkuJTH6U6+BTW1qU5v6+q47ORvAqxM6bn/+nf4eFsZfrVd9LWeQu5EJhrqdcDCfz3nmmWf4rd/6LT772c9y/fp1uq5bkx2Gxw3uHKvVao3h3OkcDNse3DZCSGTVvu+ZzWaEEOj7fk3wGEgawzZO40APPfQQ3/Zt38bDDz9MWZZ84Qtf4Dd+4ze4fPnyeky3H/tpdxCAuq4py3IdJzOZTDh37hy7u7s457h8+TLPPfccR0dHa7LJQEJxzmGtXWNdIQSstTz66KOUZclyucQ5h6pSluXrwrEG8svpc3Y7YWI4ljsRg76cGJQ71e3EjVcjn9yLk8ZrEUCOj48pioLt7e31fYeHh/zWb/0Wu7u79H3P1tbWl3Vcp8/Z7fu90
3hvJ6Xc6XxvalOb+sargTAwrChrVt9fKIV3z0qe2HIUGmkbnxZpY6TroG08Yh3jyYy+gSuLQ0orzBc+kSxmFe/81rdy4eKMw6s3+cIXbrL0gZ3KcmlW8uCDZyhGJQf7S1a9553f8gTSd7x05QZFUXKfXXF0ZcnhIq2zCBFjSo7mHjGRWFWsjpfMe5ic30VMBVYwRtGywNITlg2lMQTjsUWFqSdcfeEK84MWKzByDm08TR9REQKW8flzROPxqyWjSc3RsuOwaQlFJokUDutKTGHxoWd3e8T2rKYUmD/7AnJrBcHQxUgwFltYppMKmhX9MiIZNwidJ4bUr4sVVC2YRGJIRI+TfnmI6R3iPETAO8d4ZxdTTaEYIbHPQpEcXzqwebKAh5jJF94T+x4NMWFVeDLIQG72s8An5Fsz1URhcMXUGJMpCBHEplHGgZowxA/71NB7T+xXaLtKsSMCRVWgQembRRI1iaE9PGZ1a4HveqDAz+eozdsdzD/FZKdRyXEfgRAjUQWJMRteDN8x8nmTHK+BgA9ZrBQRY4g2YxgSEWfTuZCEs1lr8U0g9oq1nn6+wpQLXF1hqxKpK0w9xrgatSUiDjGWOIBhIcfxKugwphyl/AoBVowJ+wH6rkUPbhH6nloDbrSVSCNkbE4H0dZAwknOv2vnlYRcpWPN542QLVxjRGJgfTI14UVpewGiR4oCcSVVvUXsDxETcdbSh4jBYoqSVdezunaFarHHpIwYZwldwIeOelJTb0/BWNqmzeKfwSFG6INC6DIZRiicTbE5ShLRZdJOiOmcJdGNhaIgBk/XNkQfs4mKAU+Kb/ERCR4nGVvL329cYQGL9xGjPdYMQr0kbkrEHMUUUBSWunKIK/AiLFpP4wPT8ZjpyGEGpz4DuIoYDOgqkUzKKkUFkb5Tm2DxXY/YQIwJT+t8IEaf32Zzv2EMztp0rMEjKNZIImcMcUgmCf6sKxBjCT4QQ3alUUsxmuBDgfZNisHB0K0aJERc4cAaUtR2ASRc1tiK0DUJdx3m0leAD7whfmzqa6JUlYODA1588UVefvllrly5wpUrV7h+/Tp7e3vs7e1xcHDA/v4++/v7HB4erjM4X0/Vdc3W1hbb29ucPXuWixcv8uCDD/LQQw/xwAMP8OCDD3Lp0iXuv//+tcpjU7//unHjBh/+8If5t//23/LLv/zLLBaLN3tIX7W6ePEi733ve9/w7Q4gwZ/8k3+SJ554gh/7sR/j3/27f7dWD73Rpap8/OMf5+d+7uf4gR/4gbUd6qY2talNvZklmlw/JDsxGGNT5mJITgdGbGqa+57gFbqGKq7YkSVnpGVHRmujRcu6RSYgtCvYe/p5Jtu71JNZsrq0BuOK1LwZBc3kCZKThsnf10OIxBBR4xIT3Ep2OpDEQpeBGR9PImjMicVrckRIjbwZGqfcLqsMXBCXv6cMrhEGm+NBVOPajSIRSVhHjWBSwytiUvMoJ64kA9NcEWw1oesvcHBwGe1zQxk6ulboi4AzPWoDIURcTM4jXbNkfrjH1ZdfIhYVlx5+lLqqCN2KvvEslkc0TYsYYby9RcjEk77tOTi4xQvPP0uzWFLVI9quIUQPoeB4teDKC89hUSbjLaRUgs8ZoV2PevC+p8+qGd93IIbt2S7jt9R84YvP8t73PMFTjz3OE9/8fi498rYM5KSjNSrETNjJwph0DWwCTAYCjRhJma3ZhtdKkcGYdKZddiMRFUyO3kFMav2HHNFM/rGqRKNAipHRQT0zgEuSQAojEKLHmkTKSc1qJnhkJxfJ/jBmcI7Jc4VXLOxlKoomFcNgJymZUBEC+HBCZsrY0mCYk345cZvF+47ge9aYR/5nc5yRtRajig/DHM6Zwkm/g2pIkTjGooN1P4IS8qzNhC6ruVmPyHrMyc0nxrAW22xqU5va1Ndr3esi993iKE7XqzklvNri9as5HtzLfruu4/Lly3zyk5/kN37jN3jhhRdYrVZ3JFPcTv7o+x7n3JoIMuwnxrh29IATYsPw/ME5A6Bt2/WxD48biCbOOYwxPProo3z7t387t27d4n/8j/+Bc44nnniCP/En/gQf/ehHefbZZ9fOHKfPw+AAMvwsy5KmabDWMpvNqKqKoijY2dlhNBqtxUgvvvjiK3AtO9hBW0tVVbRti/ceay3OOcbjMWfPnuWbvumb+OIXv8i1a9d4/PHH18SPu8WO3Oma341Acyciw+uZA/dar4dM9OXWsN2iKLh8+TKj0QjnHNPplLIsiTGuY3zuNo7fz35P16uRQU7//kaRaza1qU39z1OphyM3XakbqkR5y8jw7u2S+yuwMeKsMt4Zc/H8iOMbhxzPA31MjH4Jga7vOH9+DD7ShCWzM2Pe9f6nePDBGXvPX+ZTv3OFw1YorWe2XXP//TvY0ZRr12/iJfLgI+fplkteePEW82XLWGA+b4jGcWa3xqhgrKXpA8ZAMSoxdUlzuMCHCLMRN1++TBFWGB+pJcWBTOqC1aIlYlj2LbrqUB/Z3apBIKjhcNnShkg5GnHx0n3E7pi26dm9cJ7Wt+hhAb1nNhthC4sZj5kfzSnVMyqgduDaFf3VfexBh2BpI1A6SusorBJDjxWDth4fAqFPAhAVQJQQIkGT02Ry2wprQwhF1/hNivNIDXKrSjkZIdU24ibQH/6eq5tESV0iaBBBPdFn8kd2L0icnxRHu46VVc1xKXJC+JDUeA/xKYOjCDqIMIYBD0QDT2wX+OOjJL6IAVNUCEI3X7C6Oac7bnC1zQTWRGLVAKp9wiS8ZOFIBGMJweO75NwxkFtOhESJfItJ8zhpmRIJNhEnEhEjRI/FoASsTS6kxoKEkB0bhBAjoUxyGROF0Htc4Qi+J656xJGERJMauzXF1BOMGyUCiE0L9ArEMLiiaSZXxYy4pIhcVU0RPoMIKHpCrzTzIzQqIxWK8QxdE3OSyEcgEUyyGyk6UEPydwol4R3RoLFfu05IdvQZhDuiESTFrdD7RBKYnmV88THkxueRql7jMUqgKgt8KOj3b3DGRawUeJ+IF7Zy1NMxxhhWqyY75RliCElQ44bHesQIdV1mJ+MUAZxETnl+Y0lXJhM/Ypo/AGINYoQQM+6pgokeyW67SQxncKJEVxK7QElI5sJCin9JiBrJ6c5Sjhz1qMSVNX0ILOZzutYzqwumY4uLCXcMg4OtCUhRJsdla3CW/PqQtYtO1IgTm3DR7KoRfEjz0FrU90RrwFUYDcSwWpNgfOexlWDLGltNaPoejV0ihEgSe6kGhIC1hmI0ZXFzSWwOU7yMRozvoBCsmxAU+qYhdC2mD9hRnpMmk4S+EqwPNsSPTb2J1fc9zzzzDB/72Mf49V//dZ599llu3bq1JnnM5/M3vOEZ3EOuX7/OF77whfXtxhi2trbY3d3l7Nmz7O7u8uijj/L+97+f9773vTz55JNUVfWGjuUboW7evMnP//zP82/+zb/hE5/4BLdu3Xqzh/RVr+/6ru/izJkzXzESkTGGd77znfztv/232d7e5p/8k3/ye8CzN6r29/f52Z/9Wb73e7+Xt771rV+RfWxq
U5va1OspBaL6zJ/ICoMoaNDEnrbJTcG3K1TA+SN2zSEXaZmaEapCo37dtBoEn90LVAztUeDW5z5LNdvCjkom7r7UDFiAnB8aSW4OCEZSNEjUREQonEvNeyZdiKQdaWZ0G7FE9WvlwdB8Qkx2lUpm47t1L2kQrJSoGVRBqVIESWrY1nEyhsyiT9tbR39kBwUkEwSIp4DmkwWCcnqO3pZcufYs3fKQVRughKZdoNpTOIsVi7ZJtXJ8dMCVl76EGU156u3fyrgsMQrRWOxY2Lt2nZe/+ALj0YTJzg715ACA+fExN29e42D/gMX8mKKuqMYVxpUY5yiKkkXXcOP6FeSioWCceuM+OU80q4au72m7Zk0AsVbYmu5g9AwXzt/PbHuH6dn7ePzd34W1Li8GpbY/Rfaks5iuTUTXv2edjaQGLt2WyBOZ0cC6ec9Nv0FQI8m+MXrEuLVjyHqBxZqs8sjXHdZNNxka0UhWyJhEEMpuHsZYjMi6PUxzP6DkyJnBvnSgF8VEsNG19Wu2ts87DgJtGDJGdU2GUk6RQAJr4lHUSNCY4l9izhbOcynhTIqPOWJIFNGkZI4ZLEjzzWKcTbmyQgKsMvMmiXQSySZGHSZ4eolIiowRIwmc2NSmNrWpb5B6tYXi22+/G4njXnvSe33c6YXq2wkEMUb29vb47Gc/y2/+5m/yzDPPcHR0dMeolNtJHacdP4Z/d3KJOF0DCaSua9q2paoqvPdrgshpV47BVaMoCuq6pqoq3ve+9+Gc44EHHuCxxx7j05/+NP/1v/5X3vWud/He976XmzdvMp/P1+O5fRyD6+xoNGJra4vz58+ztbW1jn0Z9lOW5fqxV69eZX9/fx33Yq2lLMv19vq+pygK3vve93L//feztbVF0zTs7e1x5swZnnrqqfVx341scC9En9dzze9GCrkbaeFeSByv5uzx5WKCqsrh4SFnz55lMpkQQqCuay5cuECMcT036rq+IwHpTtt7PeN+refebfsb0semNvWNV5L7KItyxkTeuV3w+NSyVYATKK0gTnjooW3m1/c5OOooi4qiikiRnCfPnp0wLoW+Fx58aItH3/0w95+rufbs8zz33D5furbk8cfOcd/UcuHCLstFy+WXX+DMhS3uv2+X+c0FV68dEKLnTOm4fGXB3rLjwftrppXFt555l8gA9daMuoxE9ZyZTujmHeHmTXQ2wUokLufQe+qyYHXUsYxwPF/Rd4Gydpy57wzNYkkbk1trN2/YPbdN6z2Ht64xGtVsn7/Asp1zcLhgseooouJVE2njeMHWuKKWHquBsukINw4pVinSokGhKrGiWAkggi1KyN8/otcUS5GUF8SYHSXJa8pkgchAGNCTUA4xNrs+CNEYXFUj3SGIJzf3kPEFVLJbRP5M0OTUEaNPkRPeo6HPin/ygnKOadEUiyoqqPiM6WShRPTruZNEGJrdP/KieOyJqyX98pjm4DCJM4yjmS/pjm+iMeJKi0ZBrMW3io+K94qGNJYQ0udviCFrTYSonoghqkU92AQ2pcV4k36KcUkkRUTE0HeesrQ4Z4mD2kQluSzEmIgDUSkKg1iQqFnkYbDBY6xgXBKSxNAl0oEoNiSSQtMvMfMWWx3gxjVuexdTjcCNwbjkHIJFSYSZE+eUkIg2QiLfQJbnZIfSvqedH62xNDeaIpLcK9I1ug0/IpNfMl6kADFgYiIRJZJPFrOQiD3pusfMnDHgW8TWaN9S1cLh8pjpzm6eR4F6vE3fdyxvXMMuD7CFpdeCqJ6yhLIy+D7SNnOIhj5Eet8lgZsx+KYh+IBxBcY5fExuqmIEEyVjnCWQ4g1RwRQO7dM8FY3pdSQG1GOjEoxB+56QrI5BI9Y5nLMYawl4HB61St8BfYo9cYXDGUtRCq5wlHWNLSxeYdl2xK5lu7KMqgIIhD6TTFCiD1gR7GSMqsNpj6gmfCqTWlQsIskhOImZBFdUyckkNBQuk5x9SzDJrdaIgrH4GPGhpxBwXU8Ic6x1WKtI6LAWNLpEYvGR0K5SDxPye0DGATV6gg/YMmJdSdP1xGZBdDn62BXpXJr87yvw/W9D/NjUV7xUdR3JMp/P+dSnPsUv/uIv8su//Mu88MIL64iWQY3xZlSMkYODAw4ODvjSl74EpEb7Qx/6EFVVceHCBd7//vfzR//oH+V973sf99133yua9o0zyIkip2ka9vf3+df/+l/zkz/5kzz33HMsl8s3e3hvSokI3/3d383W1tZXfD+XLl3i7/ydv0Nd13zwgx+kbduvyL5+5Vd+hV/7tV/j4Ycf3sQjbWpTm3rzayBKSFY7pFwIQInBY4wj9g0Se5yu2NFbPChLzlkHGI6HGAwADE6ytWHOZiU65lcabn3+d6i2tijqMbVzYEqsJttKclYmqkQdlA1KiCeLDRjJzhpxTf4QkbxA71IzoplBn11BBJI1JS67N6TFcsSk7Fc9cfNIcSWDooTsdJKVIVZQNXkRPoEPYrKlowxkkzTm4duMrAELoZ7uUE+/lRB6Ln/2Y1iOobB0XUvwgjUpX/h474Crl7/E9sUHePTt72IyHiegIgRMVaG+4+z959h94CJGSpplw/7Va8wPDzg8PqKLnmI6YlInJUkUwVmhGo+p6jHjUU3brTg8PGCrsBkcKfAhcny8z3I1p4/J1lasobQl0+k2hS0IITDeOceT7/mD1KMRqik3NsbUkEWNJKeVxC5QzaSJQcEhgwonuVYMjh8JApK10oWYlCI62GWIIuaUtSmKMen6aiY7yDoQKG1ZsvVlalBzxqhJZB4NaTzGSF7EMvkSDgqTQal7cg3Jr49EzICgMYEPvHLhqouCjxBz/vCw/DE4oSiRoIN6ImeRhoAO+NZaA3WiPhqAjzhkFBvWETU62BlnYsvwt2pK5I0aB25SAuDyOVMNa1KLGsMr2E+b2tSmNvV1VndyRrhTzMmruTz8ft0Z7mV8p8cWQuDGjRt8/OMf51d/9Ve5fPkyfd+vF9qHMdzJdWLAhE4vxK9Vrbc5WtzpuIwxOOfW/7quY7lc/h7MZnD6GI1GAMxmM3Z3d7l48SKz2YzJZMLW1ha/8Au/wOc//3ne/va38/DDD/P8888TY1zjV8M2q6ri4sWLPProo9x3331rp48huqWqqnUEzBA7cubMGR588EH29va4fv06BwcH62MdHENEZB1R/NRTT3F4eIgxhre//e089NBDnD179p6JHXe6bnf7/W6OIKfve7W58Fr7HLbz5ZBC7kQsudMYVZUbN27w0EMPrWOCbneFUVX6vl9HAb1avdq5ea16IxxMNrWpTX39lqhSErivFN69VfOAU8YGZqOC5bzh5sLjnIEvXmM6LimdYArP2TNjcML2uSnzm8fc2l/w8Dc9xAMPX8Bqx7XnX2K5DPReOLc94ty05PyFHebHS557aY/tsxO2ZyMOru1z9eoB03GBjZYbtzoahNn2JEXMLD3LLhLFMj23w9aZMdYv6JrIvPGMpzWT8ZjoW/rlnJErWC479ucLqrqk7T3zpedw2fHoQ/cjWyOCLTm8ccj+/jHT6RbFqOTo1pI
HnnyMi/ef5+aNyyyOAm2nTJxlXAqFM0joUOuotcMRqcVirh9RdpGo0IlldHYL4zti24MRiqogdB0ak8MlaI6xGGJZs3sDqe/sg8dI6kdTzz1cKLKIIneglcO4TMSIIeErp6QZiEVNQVLjJPxcyaQT36deOsaT5wxKH/UnHW4mlEh2+lRigiRIkcaDLywxoOrBt8TVktWNPUzhsKbg+MYhzf4SW5SUlQVn8CGimfASsvhDI3ifXBW8j4k4oWCsoY+JsCEasoNsckxNog+DekEkruNVRAqUiC0KItD7hGNIJqu2Pjv3IjhjUuSO9xRiMSbdF2MiJJjoKaxNeJhmh7eYiDHGGkKrxD4Q2x5tOuykxs62MfUO2EA0ZcZiUlxu2r7JZz3Fs5wAFencokqM0CyOAWWE4kazEzLQmuRx+jvKyfdWEYVo104fMXpibDGxTtfPaIaZcoBQ7NPzbVqqF2NpF3Nm2hKoCV1PUVYEHKuDI+zNy8QxSGGoqgInjt4HfJ/jfo0mRyAEFYMPAR/ScRpboAh9F8AZqjozzIxgnUMihDZkQoiCBIIDCQaJCT8kZlcXS8IaW0VtAqGMS3Pdd+0a4+p6pe16nIGqLKnrEuuEwgmuqsEYmjbgJQmepiNL7UDx+D5HFcdE3okx4jTinBC1IPSB2Ics3ErXI8T0Xa/PIqyY54sYxUST51F6reEbTFWjGWvFJqGVGEfXLHEuOe+p9/jgwRaomDyXDaHtEB+yg7NgxSWsTQ0xQN+u0GaF+jaJtaIQgs9vJ8JXUsu0IX5s6g0tVaVtW5bLJYeHh+zv73Pjxg0+85nP8NGPfpSPf/zj3Lhx480e5j1VCIHlcslyuWR/f5/Pf/7z/ORP/iRFUfDYY4/xnd/5nbz//e/nqaee4sKFC1y8eJHd3d1X2I1+I1SMkcPDQ1588UU+85nP8OEPf5iPfOQj7O3tvdlDe9PrwQcf5N3vfjdFUXxV9jedTvmRH/kRptMp//Af/kMODw/f8H10XcfnP//5te3spja1qU19LVSKLXGQFQrGuNRMd0usGEqdczbuc17mnLOWqRFaTSDLsGgtAkZT6+zRtClRorccffE6o+3PUk6m2LKitAYtTG7aMsA8LLpna8s4NO4ykDJIXWYmEAgnigHJzebg/iCkZiTRT2xeHHcMbgjkGIw09EQqEWNy25juO7H+HCgBAaE4OV/GpWM2Bo1pdd3YPJ7UFWWFQ2psHUO8jUn9UOUobIlkh5SmW/DAk2/jwv0PUxQFCtiiRJ0ivsU4i9UCW4yyk8ch9DuUzlCMStrY44On7bps654arqqosNaBGCY7u7TLJfP9ParJFkE9i8Uxq3ZJ07XEmICRqGCjsFot8NWYCw8/wVPv+W5Gk618HhJ5wzCAPKfIHvlvUda/ZxpGun1NAgEVi0aPEbJDS54QJHJHzM2pWRNpyO4YibRRZAWQSEFKAw7rvNkhdsdkpxGbSTpiTGpEMyiRppVLETRJFpQJIZKa2fSwnB+sJ2SMkFw7hupV6b0Q/CkwaxANpeGTZp4CBh/S9Uq5uDk7lqzQ1pDJNCRwIZM3VE+IHCd0K0ElXQ+VRCTRtYuIPVE0ieR4l5iIT5kckmlXm9rUpjb1dVuvFbsx1N2cPW5foH49opXXs1AdY+T69ev86q/+Kh/72Me4cuUKXde9wrHj1SIw7uS4cHqB/nQMyZ1iSbz39H2/XtxfLBZrwomIrONSBtLGEPcSQljfNxqNqOua/f19iqLgwQcf5Etf+hLHx8eMRiNmsxnL5XLtIDK4dFy8eJHHHnuMRx55hOl0ur5vIKIMP8uypK5rjDFUVcV0OuXMmTNcuHCBz3/+81y/fv0V4x3OyeHhIdZazp07R1EUGGMYjUZvuADptebJ7Q4zd/r7XvZxtziZ0/u+FweRu8W0nJ4nZVlSVdUr5tLwc3CfGc75vRzDa8XUvFbd7TkbcsimNvWNW7tGeWtpuFQZdvA461ARVl2kVUvwHW0f2d2a4gNsnRmxvVXhRpbpuGBxtKQXeOf7n+K+h8/RHh9ydDTHFiXzwxuoRp589AzbO1vsHy65emNOC4zrkusv7uN9y/akJPjA89eWHC0juzs140JYtZ4YBXElk9mYcxe3KF1geWA4WPTYyYytqmQ5P4K2Z1rXHM8XHLaeajKm8ZHLNxdEDA+/7SG2zp1hfzln2QbmndJEaPfnHLc1Dz3+KPVI2LvxMqEPdE1PXCw4VxnqqqB0UIxqrIkUqtTOoS/vU3RKEKETKGZjrPbEZZMxD/DLFTFADEqIEVsWkKNP/PCeayTHbhis2kT6yGIchr4esrAjCSooXHLxJAtubr+wxiGmSEKK7OaBRjR0xNBngYdL8baZwHHaSQIGQqKkfjxHn6ZeOSQ8IEY0dsTFMX65oF80GJdieeZXjuhWHWVlGU3rBEqIEH0kRui7iPdKiND7kAUguUsXuxaEoIpIdjiIGa3wmsdnkwto1l8ZDCrJjdSISSqSfM6cE2KvGGcytJIwqz49KwlVRDAKJijWKBoCzoCXgLWCFIq1ESOaontiEtKERole0S4i84ZiscJtNditbSgnIBViB9zOJkKNWNDsvJuvn5LIH0mfEtAA7fwQUGoUV82ygGotkxmenmbB+qtsOj9Rc8Stpu+dxIBEj1IMJzptQ1sIithRwt7KFMlTGEMxnlHUO8yPrlO4MbJYUHQNGhvqnR2kHBGCpJiYEPF9j0awxoIr6DpPDIkAgbhEhoiWEGKa/yLY0lIWI4xNDiKJMWCQ2KCS4luCRpxJr4Xge4SAyQ4ZUSOiCWNEoelbul7p1RCUFDcD2BwTgwiuLCiqkgC0faRXobTCZFxhQ7r2IeSoX2MxriBkspC4hL2pWvpecwxLEud3bU/v/To2xuCxVjBGKY0hZEEUoukcGbMmc4tEXFFnDC/FLxbOEkOL7/uEP0XF1lOMNRgTKcoCU0/o20h3fAslOUqrSfM+dJ7QNWla2HTdvSfFHVufCCNZkPhG14b4sal7Ku898/mc4+Nj5vP5K35fLBYsl0vm8zn7+/vrf1euXOHFF1/kpZdeYrFYvNmH8IZV3/c8/fTTPP300/zET/wEu7u7PPbYYzz55JM8+eSTPP744zz++OM88cQT7O7uvtnD/YrVjRs3+OQnP8mnPvUpPvWpT/Hbv/3bfO5zn6Prujd7aF8z9cQTT3zVI1Fmsxl/+S//ZVarFR/84AffcLeV8Xj8VSWzbGpTm9rUq5aSGkuNyQIz35YsMFsQGIvnnB6yK8dsW0ttBCcRrxabm7ThO7YRwWjKMY3Cun33K8fe01+k2N7GjcdQWCrrwA2ZjCEz9iU1D5AVFBFrhvzwmLJhVU4W7jVFe0gmfQyL9ymuhWztmdpvMUmlcBosH5wndCCPZEcR1cHhIbspZCGJsQM5xaZmxti0bzs4jJw0vobkLLJ2bsiZroVRrHEU1lIWFWVVgkLsI9Wkpm0a+r6nHoErc06rcVhX4KioJ7tU022mszN0Oyv6rqFrlrRtQ9OsODo+oGlTJqnvWzRC16/o+hXzRWB36wzNcompS0
KMrLo5TbPk/8/enwbZkp133ejvWWtl5h5rPGP3Oadnza3BlmTLWNiEbTC+xhhs/MYLcTFx44JuBHyAIOAD4Q8EEAT+YgIiCG5ABPASgB2Arl/PCGMsyRay1dbQklHPp7tPn7nq1LDnzFzD/bBWZu1TqtPdslqW1dpPx+mq2kPmysy1q/J51v/5/atqHm1BlEKrjPd8+AcZrm3QHWywuXkuiVp8Sr5T4V0FxCcaSoCG1BEteHz7mApJACI+WglBLJqkExtfAw39o5lQSkfMo5JEY5EAQSfP2YRkVekahYiRVc1+iDqSQOxO8MHHroKQxEXNPJB4XBDJJaqhfDRzOqkuoujDtSqOOM+OylMuQOUiBaU5gKRXaTsrQmNNnM6Ndw6lEyEkhFawEQsbiXYTohAnnrIjNknskkmfvxC7rsQ3czgea/Cp8NImwnFeKwQfwlH31SpWsYpVfIvE66U4NHGcrHHS88cX2ZcXphsRxUnva8J7z87ODp/85Cf51Kc+xc2bN3HO3UX4WLZxWR7v8s/HSQrLP0ef8pPHoVQszDaUkEbU0VAzVCroeu/p9/vUdQ3QbrOxU8myjLquuX79Ol/4whe4fv06h4eHDAYDnHNordne3m7f04xxfX29JYb0+/1W6NGIP7Isa+kdWZaR53kr7GisXYwxXLp0iaqq2N/fb58ry5Isy+j3+/R6vdYGJsuy1yWg+GritaxGjos8Xuu51xI4nPTYqwmDXms7x4kwEOfv8Tm8/P29bHJeK17tXHwt9jTN+5f3s4pVrOLNHbnAu/vCtghDIxgNJgSUd/SyDgYQr0FgNplQrPVYGxrWTxcUWjM6WGC6XR595BSbp9aY7e9ha0cmmus3dqh1ztkHNhn0Dfu7I27dmVAExZopuHX1gMIoNrdyytJxOIdacjqFx1c1VTBIlrF5/jzKzTBYlJsxnzkOpp5gchSW6cEUIVBoxZ39MZVWbJ7fZDyac/Owgt6Ad7/rYcp6wbWbO4ynM9zCYStP3xhMx7B2ZgtCTTmqCEExGY2whxN6Ruh2NOtrBUWnoJqVSICB1tRXd8ktVB7KEFBFgcZivGYBiHOxpuM8zgXIMjqnNqjHkygU8EctHrR/A2PNRmK3R5u7AqnekmxHRVBZngQRibrRVpWiWCLoLqKLVE9J/M8Qc/LgUm7fQibSe5puCSR9SbRRlp4n0kujLUugPrjD3tOvYOce7wLoSNrMioxOJ1mqBI94jbUOV3nqAHUdsC7en1kvBKVT7UHRWLg6HwUzIfi2+cUTbWUjKMuCSjayIig81sdtKIl2Kk1tSkmyqzGKLDW3eEDrQJHHZqdAJJDqRnwhChsE5z3Kx3kdkuOuE482ka6hlCCW2JRjFbDA2130bIoeDjHDUyAdRBTex7oayhEkNptIW83jrisOnuCFcjoBremKQuVdRJv0fFP8kSSQUZHyETxKx8e8b07kkQ1M0150NMmytC8bhTKZQZRQjcdk+Rp5kdPr5PjFIfWNpznXNXR1j3L/ILaSmRznwfqAc6l2I+C9xfqANnlrA+2qGlEKL5FwbENjEyxJDCLUIaR6WLLzDfG9ykSrHbSKtcpkm43oaMPiHd6DR7MIMK8smVZkJkNrla5xrIEG8VS2YlF7VF4wGKyhfIkJHu91IvGEVONLJBExBBVtZCKxJ5JoJXi8eKzzkXpCwNc1Rglg0SpDqdik5er4PiMGneXJ5ilEexiiAMx7MFpQ3hGqCkIdL2EIaBNFIXEWKcTkKNOBao6YxgrHo3SWzqnDuvTbQcfasHM21WQtrk750kr4sYo3Oo4nEru7u1y+fJkXX3yRy5cvc/nyZW7dusV4PKYsS6qq+op/DRqxqqpvuGXLH3aEENjb22Nvb48nnniCLMvY2Nhgc3OTU6dO8eijj/Lt3/7tfNu3fRvvete7WF9f/4pt/FG0iTkpwVwsFjz55JN85jOf4YknnuC5557j5s2b7OzsfMtaubxaiAiPPfYY999//x/6vk+fPs3f/bt/l52dHf79v//3b2jB4L3vfS+PP/74txzZZhWrWMUf5VjqIJW0tu5KlK/oyIJzLDinZuRi0ElgobBEMF9Mah2xA0MQlEAdPHWIsgcSfaM88Ow/9SzF2ga6W2BMju51IwpQJOIxk0BDKx27IZokVx0JNrSKqV7bPRJcSq79UbqpIn0kJg0qLXbH7ol0yNH2ohElJJGKKJ1el9LJlIuGpkhBIJm0opTGpMRDkiAFmqQvvTREYYkSkxZXNN7VCF10VmCKgiw3GJ2zfe4+lDG40nLl2ae49vJVts+eY/v0afrDDTpFn7woqEONUZ4sG2L6XXxRU+oxigmugl5WgzdM60lMgqzF2jJ2xOSKs+cL7CR257q6wto6KvmzHOUsoaowRYfzFx5lc/tcOq+6FU8kv5CjudIk9MS/3UqE4ANISPOj6agx6X1pcYIkApF4Tn1TJGqERw3RQqVCgqQ+IIlIUFHxGjSvi8QPhfc1IvH6O0kJYAAJmkBNs/X2+gYPwcUrLBoRTyOQaDUorVAofvXe3iV4glhgsRZqFxKHJp6qZi4QGusWn7qSiliYcb7dR7TQid0lWiXhkaSzGNLc8z6OJ8QjDk29xCd/WZIIJ/goBgmBoCQJbpJtTnpc/JLNzipWsYpVvInj9dhMnGR58Xq3e6+vy9s9vv0QAuPxmE996lP8xm/8BteuXaOqqrsEGM2Yj9trLIsWlr9vyCDHwzl31wL+8dc0ApPl13jvMca0whHnHFVVtfdky+8VEeq6Zjwes1gseN/73sfjjz/Oc889h3OOhx9+mCeffJI8zzk4OGhtVRuxx9bWVkvjkHTf15yn2WyGMaa1E3HOURRF9FtP4pOiKNje3m6pmru7u61gZTab0ev1mM1mbQ1ub2+PLMsoiqIVrRhj7rIjfq1YPpfN+VssFhFnnQQ13nu+9KUvISKsra3x8MMP39UA8mrz5rXmzr1ed3zbJ13ve8VJ874hugAtfebVtvkHsZ9ZjnuRSo6///W8bhWrWMWbP7pauNgVChTbA00mnm6uGeSK24cztDZs9jV5JyMT6A4yTp8dUk5m3C5rNs5ts312G+8sezevYbKCclxy7fodvOmyeXabnlbsXr3DzYMJuQ/UVWCRKdaGOYXAzt6c4vRZip7CTa6hQyRBahXob/TpdIDSk2cFk5mldBpvNIvZgtm0pJcJOM/+bMHg/Bm2Bzk7N/Y4mFScv7TN/ee3GR1OeOXaDouFp59nlLakp0BJIO8ZMqlQDsqyZrx7gF9UdIym1y1i84rS1FVNf9hlEBzjZ6+TlYJDURFQmWG43oNF6qxPDRx2UYLSqKygd3YDvGW6qI8oY6kBgfZv0BF1IzT/NeIMpWIWrgTv09+J9ld4I86INANEwPQg1zBXS1CIRMoMKjWD0O7Xt3uNebI0jSIkGqZz6WcXbV58rHvNrt2hGtWoLMN7i9JEW5xUI/CSdAc2UNtAVXuCKByCE4FEbRBR0eIFQMBah/eJ6ElqPvHxqwOsi8IOkdgsZAmx+YmAkmiB7L2LdZWWDAG+thiloxCAg
LOR1qG0oHWyZVOglUIrjcWjQmyAqWqPVonWq8CFOE+V9hgUARUZqmUU9oRyip8vCPMRZv0M0l9HVIFIFHTEOlhDcUmkXaUQ0eAtiCKgCdZTHh6glSLrgy560T4ZifW6VE9q7F6A1Hx1RGnxPlFf2luI0NalgmSxxhOiZYlSBmOKKDxAYecHrBUlk90ruN1XcEah1jt0Bn3me3ew3TVqleG9p7aWyjpqG48ryzJ0Fq13fBVJM9pAnuUoHcUP6Iy58wQJuCA4H3CVI4RGtBSFRs7b1mraexubzqSpTUaBiPUBKwqvhUwrunn854KQSaDby9FKUZUltQtI1qPQQqECqujgSkdVW5yLIrAo9DBkWR7LRg6cTRQVFNpk2Npiq4rgFUYZEIsKFSYbEHwS5KTfFkWuEuHGE3yNc2CKApV3CUTCkuAxnVjztItZOg+pzuYCoZ4hmQLTjeOwc4x2SK/bCrJiDVCj/QKvLbUjknJDFMho4uejqkIi17zxsRJ+fAtGCAFrLePxmN3dXT75yU/yW7/1W/zu7/4u169fb5PRJlk/Cbm5ipOjrmt2dnbY2dnh2Wef5Xd+53f4uZ/7udbX9cEHH+R973sf73//+3nPe97D/fff33q9Nn6vTbHgD0MQ0syFBova/CvLkuvXr/PFL36RL33pS/z+7/8+X/7yl5lOp+3rV/Pi1WM4HPJt3/ZtbVfOH3acPn2an/mZn+G5557jf/2v//WGbLPf7/NjP/ZjPPLII2/I9laxilWs4muPgA+OaJih0cHE5fNQk8uCUxxwXjwbWmKnQLPKvNRx0NhGhGgWg8SsBUvABigkLbB7zfTahL2nv0x3uE7W6VGYlPSJJKsLhVKCFk3wFba25EVIKHCT6BexS6S5s9dKx2RKkuUKkRIRpFHYA3hcM8LgY4tD6gBBgUgkdxBCJIUgMVltEhwVj+/IJiTZw6QxNOKP+ANpJT2SKHxwMWEVQZuCYEm+nSEJGAxFp0+nM6SuSkIeOHvhAT7z8d/if//vp3nPO9/N1uY2/Y11Ct+lMxjQyW6jtcLbGjEZi+mC0ldM5zPG430OD/eZTceMJyMuX3mBO3szvvOD7+I7f+B7WNsYMvXCYH0Dd7BPXvRAHRBspKuAo9Nfo9MboLTG+xATeyQlWvEgvYvXPvr3RgFB7KaJCZ2S+L0EwYtOQh1FIxURFKICjcZGKYVGR8GC8khInQlpOyGk893uJwl3VBSNiAiSsPDOpwWuJjFMc0GCtNeqsRSKgwltB1F7PZNoiTZJTSSbpUW15Y4WD1QOakfTsNKUr+K/lOiGhNeMnQt1tPuRZGcjktCuafZIs6ikloQw8Xsfjo5RQnMuUrErdVER4mAEiQKR0BBomq0dCb5WsYpVrOLNGvdaDH81csLr2d6rLXzfy16j+eqc45lnnuGjH/0oX/7yl1shxHKNIHa0ubsEBveifJy0/0bE0ZAvGoHGccFIs53lbeV5jnOura1471tRA9ASO8qyJITAfD5vbWEee+wxNjY2eO6555hMJvR6Pd75zndSVRVPP/00WZaxvb0NwNraGp1Oh83NTTqdTrv9RmTinOPg4IDJZNIeW6fTYTAYtOIMpVRL8LDWtsKKyWTS2iT/p//0n/jN3/xN7rvvPowxjEYj7rvvPh566KGWDtJYwDjn6Pf7bG9vs7a2hta6Hc9isWA6nbbk3ul0SlmWdxFVNjY22nNcliWf//znmUwmnD9/nq2tLU6fPv0V1+zViB/H587x63aveXCvbX81QolmDM3cudc+vxZ6x2uRbJYfO+nz+2rHdi9BzSpWsYo3T+QKznQVuShy5Rn0M3q5YT6pMMrQyTRZril6im6voNtR3Ll1SOhmPPieR+n3C8rJHLuY0+sP2bm+x63bE3ob62yd3iRzjr3rdxgfTHHTBZXRrG/26XUy5vMFB5VQbG9Suppqf5+Od3ijyIym2+8gdYkdHdJf61DOPQcThzOa0WiKt5atbkE9nzOvLKcfuA+dCTu395hUlgcfu0ivo7mze8h4ViEWhloIszgWbwymm7F9dguL5WB/RHk4peMDGEFrCC7WC6rScv7sOt26ZPzsDUwZ7SGsCFmekZmAPZygteB1LJPEdXtDCKAzQ3k4Zj6Z4auAI8QGA4l1lUhmjZRJUVHc4Jdz5hBfD6nhRksiYbp0D+KBVDMgkjwDCpTCkwQCKUsO3ifxRsrLfaRlxtRetbWYpomCYMEl8Yf3ECqiZYzFz2dU4xJt4vgzI5E0kbbgarDWU1Yea2Plq6x9Em+k4/EhUlGdw7qjfTfUTkRHK5ZU92gEL0pJpMtojVaq1UBYb5P2RdBIWmBvjg9y0VHW4ppcPmCDx9cebeK+tBGMU2RZpEMoEWwIGIniDvE+NavE2oH2Ko2pIeAKIoHaeXSqqYTqJpmvUMPTBFU0V5aG/trU2WITkGuvfZDYfBKCZzE+RJk8ESi6UTCSahZxmkSxRPN9FApE6kpEYYRWlHRUhSE1xqRrHBxBFEXRI8NT+OsEP8P0z3O4d8jDf+wDbG09xM6vf5RBz9Bf32K0t0fIulRSMJvWVJVFGU23l2OyWP/0FpwXAobgFWIMKsvAFNQSG3p8cHgfBUYOcM6iVWqXk0R/sZ7gHEol8g0aVLRJcs5T1pYgnkIp8gKKIl5P7wJ5rsmKjLK0zEtHp9el1+uifIldjDAMsK6iqmt8mqem6GJMTpYJzjqcszjncbWP4hsdG5hsVUeRl+hoqp3lSUQkhFDF+acyxIAmgFEE08P7Gu8cudYE0bigyPAYEzD9HlYyysM9JCTSjFuAqxE6KBOtdURAmQyIVjLaC9bXKAlI1kF8IFMBj8IGECdR0OVB6+byv/FC4JXw41ssZrMZX/ziF/nMZz7Dxz72MX7rt36L8Xj8jR7Wmza89631yXw+5wtf+AJf+MIX+Lf/9t8CMBgMuHjxIg8//DAPPvgg9913H+fOnWs7RwaDQeszWxRF+1VE0Fq3yNDlBLGu61a001Balokss9mM+XzOfD5nOp1y584dbt68yY0bN7h69SrXrl3j2rVrq3nxNcbGxgYf+MAHvqFJ+ubmJj/90z/NT/zET3Djxo2vaVsiwh//43+cn/iJn2gLWqtYxSpW8Y2OaJESFwYMgvEaQ6CQBVsy4qzMGOqCTASbEqyYqEayhxZQxKS3IUPotFDvCDgS9i9RNXytGV2+SXfrObLBEF10EFWgsuhl6XHooNvOAe98xHhqkwT9sfNACYiOlI3gj7pYo0Ah2a34hvQRE7XGhiQkOkIQSQn23Yv9TddISIIVIVIlVCKCtKIBEUh+nZKS6aYLxYeQvjYigVh06Ay2KMv96J+pdExmE0Ej63RQOqOaTjh17jw/+Of+PJ///S/x0Hd8mLWOYbA2YLE/IThYHE7QpmC+f5ugFLVRjPb3CSJU5Zyso6jHMy6/9Dwv3dzlg+99F9/+4e9k89Qp8I5TFx/EZJrxeBTHqgwiNZ6AdZbtjVNkWY53LmVRjXxBAy6KfSSe48ZeR5pOn4g4QYjJa/ABk47V+1jkaYodDckj2ukIXqUO
kVQUEUk2MjSC3jSGZtEqpfuquW4qEjGUJHqGAEpH8YkKgCHg0SLJWibOacQAVVtAuGsBLCpWcDZiML1zLY0mlTBSGUqoPZQ2oLMGRRs7hJq52zSlOOfRyiZv11hk8kvPg0dUSAW02AXSeBaT5lirL4IkjmnGIogYlLIEFN7bo+KSxPKOD7EDqiXbrLplV7GKVazixDhpsf34c19tlGXJJz7xCT760Y+yt7fX+mIviz5CCF8h+lj+ukzEaCgMzd+u47YwDXkiimh1KwZpKB4N2aPZXiOuqOu6tXppxA2N/UszvmZfk8mEqqra8XQ6HR544AFu3boFwN7eHpubmwyHQ7Isw9poL1jXNd1ul7W1tdayJYTQkkb29/cZj8fs7e2xWCzaMTe0j2UaSVEU9Pv9tjGo3+8zmUxaC5r5fI61lvl8jjGGfr/f2sg02yvLkm63y2g0YjQaUVUVm5ubrXimIbI057mpLy3PkW63257vPM/5Y3/sj3Ht2jWGwyHz+fzEOXEvS6CT4l40meX5ca/X30us8WoipmXrneOvbebSvUQrJ23vtcZ40s/Lj301JJGTPrerWMUq3lyhBfLgKatArjQ4YTZaMFl4nHLoXNMdGk6dXcPNSvYOKrLNPo++40GMUuxduwFeULlhf3fMK9f22TqzxX0XzyG25MaVHUYHM8ajGtUtuPTAOmFcsj9aUKHonD/L2naH2bXrhDrgdUauIO93WT89oNAOJRnz8YL9mWd36hkvDlkvNJudnMVkSshyultdrt3a5WBUsX3hLO9816PY6QHXr99if2eCswFla+pZCTZQKM3mqXU6W2sczuYc7OyhXCDYQLcQupkiK7JIIO1kbJ/dpFOXTJ67gZt4KgtWGbJeBlWNuCh4CMFHqwsXUi4ZEK2o5vMofEgWorGxIKC0SguusdAiEhf6m/KBhJDybUG0jrgBUu5cLlLzT4iNMokYEcs2R/UW0T3ETPEu0bia5p4li9QoHGkaP+I2JdUEYseKi8KEUKcmIsHXC6o7I1y5SH+HUz2B2D9Rl4HaB2wdRRwugA2BOoD4gFIG60Oylw1YB8boiNJIjR64lOcTz40PgncOk2l0AJUplIoL6bEeFciyKASxiZQiaEKyyWj0I86F2EST6ilKAC+42iFG4Z2ishbnYpUgy2PdL8prorBGh4D2AaNSc46PzT2RwOvQqUnKeaLNSj2HsEOwNXp4BrIuSLLglbT63uwh0IByWzEPAWxVMj/co6cyCBrJpG1MifUJAXFteUJSbc0HFwUdwRIlFUS6abJuJhAJqMTrHFDoTg/rD+h4j97YxvohT332WR66NGT7u74HZQbc/th/oOtnbJw+w/7tHWrx1F5QIaAJScwR51VlXdus5EONq2tEa1xdQRIJW3vk4hBEY7RGvEXwSJbhfB3rVFolUrBHVBRUWAd1CIgWMqUolKIwgmQBRJNJwGhHXdbM5jVFJ2d90MeIowqC94Kdz6itw81rJDiULhJN1oMFbx1efBQ1eYetLSiLTnVOtCJoE+uvJn4ejVbx94/KEV1EUokCU/TI17ZZTGeIncYmOSJpxogiuBpXjjCdNXQ5IFQTgov3/x6HdzV2PkP7HAkhEmcERAzBKDQFztpYi82iNVCmDSYItlTYRRnrsOnz1bQ2vZGxWr37FonFYsGnP/1p/st/+S/85m/+Js8//3ybrK7iGxeTyYSnnnqKp5566q7Hm+LB2tpa6+fa7XZbEYhSqu3sOEn44ZxryR3L/xaLBePxmNlsxnQ6XVm0fB3jvvvu421ve9s3dAwiwuOPP86P/diP8S/+xb/4qrpjjsd9993HT/3UT3HhwoU3cISrWMUqVvG1Rkz6VADtPIV2DAeaNWUZzEvWyMkFVNOeQEzylcSODSXR2sOFJO5IOoqYwgRsiEmyFxJtIVBPFPtPPU+xvkk2GKDyDNG9Fj7Q7EoAF+q0T9X6QEoibUiIiX+b8MajSZSD5J2Zigcx+U8F4cZ/I3VOtIjJJBChIUI0dIqmYCxE4YfWkVYhIMGnt8REOurOfUo24z5k6b/eYJNyT1POx+RrmzQ8EkdKaLTBFF1sXXH6gYt8MM85vHOb+bltzp17lH6r67UAAQAASURBVOF5QYLBZH2KU+e59Xu/RdYfkG9tcPO5Z8j6a4zu3KYMNbc+vsuZ+87x3d/9x7n06AU6gy5BFN3hgN5gndloH+c8i9mCEDza5Nhqgfee0+cfQJs8nZtUuPGBxoPU+1ahEM9x6mppLqKo2K4iAYIkn9sASifxDiEWArwHpSJ6VJqCUaTPKJUoFSJp1qVrlFpdjgr5fkl0AqTOIBccznmiwiNZnQQX52+iZjS7TM6r7TwSaayFlj4nrdin1Qq1oIxALEpYL1gXNUeaNO+bUXqP8z7a5RCAtBDnopew9y4VPBRaFNHRqPkweBQ6CYlcmm8pwY1tNO1nOe4wFkskBLQ2bReUSByHknjdGmjualFkFatYxZs5jttCvNbvvJOoAa9n0fn4e07a3mw24xd+4Rf4tV/7NSaTSSvuOE5WOL6Y/2qEjuNEhua1y5YwjVCjEW0sUz8aUsbxY2tEGL1erxU8NHTb5jmIooXFYsHu7i6PPPIIBwcHnD9/nlOnTtHtdlurk26325I5msYaEWFzc5PNzc12PI3ow3vPYDBgPp9zcHDAYrG4iywSQmCxWFDXNXme0+12WSwWnD17lvF4zIc//GE+/elPM5lM2Nzc5L777mMwGLC/v9+KPabTKXVdc+7cOV566SUee+wxRISiKLh06RKTyQStNd77dp/HRRBKqZY+YoxhfX29pYQ01JarV6+yvr7e1o++mvl00jy411y7l1BpmWj2Wvtofm6ugbW2FRqdJCw6aZvHSSWv9bn7Wj9fr1dQsopVrOLNF94HRnNPpyjYn0OeQT13mL5he9Cht5azuTlgtD/BFzn3v+sSG1tDysMRe4djvBcmozmjSY11lktvvcDG2pB6NOPa5ZvcvjmmCoqZgw2tOLw94XBaowcDhuc2KTLL+KVXsLOSEAxKeXr9jK1zQ4brA0b7I/YPp0wmJaNaUSGc2eiRVSUHh1NMr0M9r7i5P8Mqw2Czz7CbsXfzBrdu7TI5mJIHIcxLgvWIteS9Dg++7QGcDuzcOeTOzUPKyrLWyfG+ppt3GQ4ySidk3Q6bpzZQsxGjZ6+iF4HSQeWh6BoKDd6AznS0E7E1zvq2wQE8tvap3hPPeQipbNLUUaQhMKT3eGJTUKqtRELoMgUj2ZhYHxeeQ1robwQSpHpSiJQE8gH4w1hMkNQ40lAwvANvY66caK3N+EAI3kbqiQd8lTJoha/m+NGM2e1DjMmwNhYGvPd4AVsHrFe42BWEtYE6jV90JM7WzuJ9vCcwuSFT0dLEO4uziVYRwChBdKwfRQvXRCzTkU6iTSRaqET9iPPaJponUVRBapYBnPWQCSqJeIVonWOyQCSFSqyluICvakRpyrQ9rYEsklWCEoIYSA0moQYRjw7R3rm2Lq2bkWpnASY1wR7gqwVm8xySrwE6urWQxDOpXtM2v4hDkkAkhEB
dLqgmhxTr8dwFnXFUNYnzQ5LIJRJELPiMhg7TzBOlOwTvmmJMFNw0QiICWjTlaE5x7lHmtyfs3bnMC1+8wmOnHkamd9h83wfQRcG1X/y39MOIrfvOEW7cxhtFmXWoraV0Hm8VuCgyNhK70LTRiDJYm+pdXuFS01csPSZ6jHGE2kfQTO3inMagVADv8CiCRGGP8/F+KzddMhXQoggSr6/RGcFbZvMFThzD9SH9XoE4R1WVmG4PlXcpF5bg5ygFRrI474KNtBLnsLVvNDI463E+EJyHTBO0bgUgymiUlkglCTVZluEdlIsxIQiS5cQmL4vODEYVFEVOvZgi4sDE+2A/n5KbLt1+B18YvK1w1RST5ZiiB/UCv5jhxeFFCEpF+q/JUTqLZJ1E2hHnMJnQ6a4xX3hcfQsVPDYkuszX4Z5vJfx4k4f3nhdffJF/8k/+CR/72MdaK5dV/NGOhsjRdJis4psrRIQPfehD9Hq9b/RQWFtb4/u+7/v46Ec/+gemfhhj+Ef/6B/xHd/xHW/w6FaxilWs4muNaH+BdxQFnLu0wcXT2xSTgsmTI6q9MplqSHLdbJbIG6lEss9ISb4PiT4QaH9uXhvz1tjRONtZsP/MM+Tra5heD5UVMflvKATJliNYj68tmDxatyQRQOznIAkA8ijWcNGO42ghI9mshCYJUEeEh5Swx4F6RBWpSaUpWKRXi6CUhkSvUEodIUp9wCuJa/PeY72LSb6PuMy6qlv/WoKPHTSmwHrBeYX3DtV4hVqHszVKm3Si4gj6W+tQW6rrtxnlAzYefAzd60bsovOsP/QQRXcdUZq1syMkyzkc3Wb80itcXD/Dex58O6qb4Y2g8y5aZ/SHpwghWtPZusb6SPVwocbaGp3lnDp3KRYWfAKYBonYShdSQkv6F1BtESKgRCfBDakAREwIIdrGJINfBVGooVXK2xuqRfRjJXV7RKpHs5iVKBqS5BkhCkokmKQ28snihChCQtCi8CJJiBOvISEKMKJMpFm8OSoQxG6VJOhJr2pqF4gkiUs8B35pkSEA1oO1UGkoVDtd47kipPOZ5lgaB1InAkfsmoqyGNeKPpSoOEeChegOHD+HTTdU8IRgETFp/sfDVETP3ugdK20RIs7voy6au4ssq1jFKlbx5ozXa/lwrwXq42KJ49t+LQpBY4fy0Y9+lP/23/4bs9msrSstizSan4+PuREYLNuNLb/nXlYYy/Yux+tYx4kgx58zxuC959y5cxhj2NnZaQUgDSGjET2ICFevXuXtb387Z8+ebQmq29vbnD17lul02opGGvHAfD6nqirOnj1713ZCCC2x4/Tp01y4cIG3v/3tHBwcsLe3x+3btynLkul0yiuvvMJsNqPb7XL69Gkee+wxtre3efHFFzlz5gwhBNbX1zlz5gydTofFYkEI0aLm5s2bFEXB+fPnMcawt7fH1atXGQwGrK+v0+v1WkqK1ro9H8vn+DipoygKBoMBly9f5vbt2+15ePnllymKgkceeeQr5terUTtOEnXcS9Bxr/e92ntebX/Nz0qpu4Qfx+d6M3fuNYZXE3009JaTKCb3ijeKvLOKVazizRFZbhiu93lld44JlmEn4/z9A7QROt2MTAd2bu3TOb3BQ489gMFx5/pN5gtLnufMJjV7o5qsn/HYWx6hnwX2ru5y9cVdZuOKuVVUStjeLKgWjhsVDLa2GW51kemIyeGMunJ4gU7uGA4zhlsbbJ7a4GBvzI2dGaULOMnpdBWnuhnVeMLBtGZ9u085WXB7b0a22eORRx+k2ys42N3n2tXr1JVFyprgBZOcLnResH52m4l3zA5HHOxNqKxnozDkvmS43WN9Y0jlA/1OQbfXxU9GLJ6/ip55FjWULqBzQ9E1UNcoiOIM73A+JFJFckUh5bxy1FHvJVLU804GSqgWJSmZR3zzqmUqR7pYLrQNCy4E6nKOr+rUMNKIHppaSEgigsSpaBptRKNFCEHHekKqSQSliTatdWrIISbhzhGsizWJEEUUdn5AmM65c/k2OtUForg1JFqtwrrIOXU+4JXCCrigklcFOB/QRtHrdSAovLXUpY0WISIY03REBZTyGJ2kG6KSfiWez6zTAa0jTbfI6fT6YDRFp4i1Ex/XuEIAbKRM1LMF1XyBsxaldbRqURLpLCrSGUQC3gc8HNXEXKwEeIm1mYAQRFH7uOjuQ4iUDyKtNgSf6LqxgBPrax5cFcmw4SZmuECGp2msYQgq1Wl0rK1JbMoKwSUqbCyOVLMpOsswXQEfm5ACsY4YbX6i6CjScRuLnlgrIV3bIAo4aspPd8dHAiWEvavXqNa3mZWel5+7QXXrNnd+b8yp85tsfOhHWH/3t4MxXP2//y3Dwz3OXLgfc/MmB2WJNTk6y3BeYW2NEo/KM0yukcwQtImWO8GlORHrO0LAmHid49RVeOexzhFcbOKJx9LYFzskBHINOsvAu2RxLaA0ptMFL1QLi5UOw0HB2voQvGW+WBBqj2WO8h6hAyKRnGxiQ5GzdSLWQkBT1ZEagreIxGvqdE2324+fvOBR4shMHs+iD4jSUczkAiqPBGRfW9x8ghYVhVjljFDP8UFH6m2qe9nFDGUKsl4P9JBybKK1eJbjlVDOZ/iywllBjEYZE7dpdBLYZDibUbspGiFb26A4dRrnDH6xQ6hMrDmne9U3MlbCjzdphBDY39/n53/+5/mpn/opbt269brV56tYxSq+tlBK8UM/9EPf6GEA8Wbsgx/8IO985zv/QMIPYwx//a//dX78x3+8LZisYhWrWMUfpVAeuoXwwNu2eM97L3D+TJ/qzimu+QN2P/MULKJtSyvikLgYLiJoSLYaafGg8QMlJo5OPI7Y0dAI8YMI3hoOX75Nsf0cxfo6ulMgRRfRGu9jB4jSgnM11lvypswgOiIRhZjopVXu2LHgk44jFQskJGqCOvIVxaHEHJEkUqdKpFLouFoukoQhJq3ch9bipV3At47K1izmC2bTKePxhOlkxnw6ZzafUS4WVGVNXZVxkUCFVADwPHBaU5ge1jlEKWzwqFCTeduKCpyvYu0k02RbA3zlGT/3EvbGiO7Z0xRr65Td21SLKbY3op4uWNzZweGpXr5ONraY/hrjaozOCtbMJmhD3umCcyymE6bjMdP5HOsdogVfBxZVxXDzfoYbW9EihJj4IRKtWEISvMSWHKJUR6WijsbjE7LyyHVVRKLFSHDJkiR2ADUenG1RJsSCAz5ZxzSSoaag39aJmsJTEt2IazuHgkTxDyG0IpFGoEISW0jaZiP4EBqblDTu4BK1Rqf5IclbOKJtJSlaopBk6RiIGNTKQpEFnAgm0Hr7ep8IMyHuy+MRFwkkwftI/PDRF1ZLQm0qSQWXVHwRSXOVKMbxLu08Fli8j8m1St1VQWxM0n28hm0XViKsrFKbVaxiFd8KcZL9w+t97UmPn7Sg/Vq2GVVV8Uu/9Eut6OMkAcdxYcnx7bzWgv1xMsiySKGxdmlee3yxvhFzLG+3oVZ0u10+8IEP8MILL/DMM89QliXLFh+NXcr169d57rnn2NraYnt7m7
quW1HFSy+9xHw+j1Sx06fx3rdNVcPhsLVGEZGWzNrv91v7l7Is2/0ZYxiNRqyvr1NVFVprHnjgAS5dusSDDz5Iv99vm0h+67d+i16vx3g85ubNm2xvb7O9vY33np2dHfb391siye7uLs888wyDwYA/9+f+HHmeMxwO2+tynHKxLFpoBB6NQOTGjRs88cQTHB4etg1JjTDlpHl10vx6PXGSYOJewqXm66sJT44TP4D22JrjXJ67y9te3lbzvpMsYu61n+XtfbUEkOPbXcUqVvGtE1XleOr6iPWO5r6NgnNnu+Qdoeh1qBcVMyecf9tFNreH1NMF16/eJO/m1HXgcFQxqyq27j/H/fetIbZk9+VdZvtTbBVwJmP9jKaX50wnFfT7nL14FlNPmd64iZ3Z9m/TxkaHtbWcta1NVDHg+tWbvHztEJ8XuBDoZ4phDge3biO54dz5TXZ3DzlcOE49dB/nz22hlDDZP2Dn6g0ooQhgK0fpA4UxVKJQnZy9WYkfjckJyKJiO9ec3e7SH25RW8/cWobrHTSecnQH99JtOgtYWJg7j9KGXjcnVwonse5hqypRPVQSccS807kQqa6pRoISNJFggVYQHHmRxXqAh7qsIDQU1NTQI40lqYrUTyIx1tuKajYmuCq+Oei2+SEQCK4kkENwBGsBQZSKdaDgUl1Ap+XlZOuSMvOWBOsrCJYgOi4M13PceMzhlQPEeoKKgo+2WuQl5s4qUi+CViitIdRgPXVl0VlGlis6eUZdlVSLuq2Bdfs5SqJQobKeLFMYk2Myg+nl5L0OvbUB3Y0NTl24QNE15N0OOu8gIeCqBTrLCRLvs7QylOUcWzuyrENtF1jrsHPLfDrjzq1bzPYPsbOS+cGI+XRKbqItszNJMOwj7c27dF9oA87XeC+EYGI9L+pZ4sI6UWthVBIo+Fg30EmqYIPHKKHenxCqikxAeqchNaLQiDNSF0wklma0HWEBXLr2Ki8QFA4QU9A25HgXr6E01FkXayKJRiIC4utkXRtrM83dhAhtvc/5gLMepQquPfMsZjZndM3xwq98gsdMzvB938f6O96NqP831/7Lv0bu3Gb7/jOo2/tIZZmiwVu0RHFNlgm6yBBTJGFSbIDyLuCcp7KWPMtQGILzWOsiYVkrtAh1av6pXYDgyZRGELQRCqNRWkcLGRcFM8pk0c7Z1ehOn2EvJ2eBciVeMnRWUFuHrSziPJ4KVzlwkbrhbYkLHpQBneM9eFulmluy7RbAWQg1SgzGaIwRlPKIzvA1iYijMbkmy4pI5vAOsSFaODpH7WN+4LyKpBAlhCB46yFUKBUwekje6eCrgHUVSgmSrWGr2OSkEep6jieQG4UYhW3usU0H7yyhmhJUQdHReD1Acheb9b4OOuCV8ONNGN57nnrqKf7pP/2n/OzP/uzKzmMVq/hDjq2tLd7//vd/o4fRxn333cf3fd/38du//dssFouv6r3vfOc7+cmf/EkGg8HXaXSrWMUqVvG1Ra4V9z844P3f+Rbe8tZLFLnnsBNYvOPduMMJB19+GVclAQRpiVws4g0KlZbN41K/jgYXaBQ1jXAg1Q6ar8Sv9Uyx/8xLdDa2yAdDitNnUEWBJo8DE0FER8KExKReJFl0oAmiUGKi1YskQoiziE44yJA6O5qOEZLoJCRhgago6FCq0Y9E4kYg2o8klGjMdwPeC/NpycHBiPHBlMP9A8bjQ8pyxmJeYp3FuZq6rLC2xjqHcxbv6pgsJ4KEnfRZ623S62Uoeli1QJNhTY3hCKGttcI5hc4KOue20HPI1ZDNC29BlLC4eQOmUyazW9STOc47ynKBHZfM7YJ5PUd3cvpaYzJDnnXJsy6+9izGc+azOdPpJNqH+EBZVbgaLj32PrTJk61IFPJEwsVSkb5ZQIhVoEhgEZKoQiXTm0Cjd4xbUI3WJooWxNEYxJDoG4FIFNGi0zt0K1ZoSBkk8U0UgrTykqPxBN++Ps4/dzRDk2iERrjSzo/m4YBSmiC27ThqCgtKIsrW+4Y8E7tRlsPHpqKYUKs095snQ6DxUqUdP+0CSkM1iT0fcWxKGuJJc4wufR+ZNzGZbvbQFEaStZGEBvsRu0WSL7Cgkmdw3e5vFatYxSrerHHSIvjrpX7ca0H7tbZz/HFrLR//+Mf5lV/5ldS1ebfgo/l++evxcSw/f5w60Xzf0CeOUxkascTy+44v3jeL+s1CfbPfRmTxoQ99iEceeYTDw0OuX7/eCkWyLCPLslac8fu///usr6+TZRn9fh/nHEVRkGUZ8/mcfr/PQw89hLWWs2fPIiKcOnUKa20r7jDGtCIUrXW7D2stk8mkPU7nHO9973t5+9vfzrlz5wgh0Ol0Wmponuc8/vjjlGXJu9/9bvb29rh16xZf/vKXefTRR1vhiLUWay3ee8qyREQYDod3CTuaa3G8kaMReyyfN601w+GQCxcuUFUVly9fbskhxnz1JeTjQoh7zbuTxBivRQI56f3HRUzH/y2/5rjA6Pj7G4HR8nlcHtey6On1jP/Vxr4if6xiFd+a4QIMC+GhrYzTp3vkfU2vY9i7PULW+zz4lvvYWO+zv7PLdDxjuLnGfFQyq4RaeS49dIq1fo/53gGTyYh6XDMeWw4ry+bpdbJgOZxa6A8IGsL4kPneAXYWawqVdZw9P2B7q09vbUjl4ZUXrrIzLnF5Fxc8/SKncBWjO1OGm2tU1vHKzSgKOX1hiAf2xxNGuwe4SVRoaB8bCW1qYFCFodCaWsFiPKbjPcZo+spTdHLWt/rUtaPygcFaD/GJwnFjn3zmqWphbgOSZXRyRagW1K6OC+S++Z0cqREuBHSWYReL2FPhA6Ki3Ysm1mgIAVGB4EBlhhA8vvaRzqkFHSTWEyCm3c2v6JSPi2iUE6aHY7xLi/q6qScECAHlS4IfxIaIppakNKJAkj1MrBE0f1sTFaIpQAWH+EgJBSG4BdXubQ6vHBAWDlMorA0xfw9QO/ASCR/ee4LSbF+6xGJyyOLmXiJrKDIteOeYHtap9gFZphANSju0KDAGkwfyfsaZBx5gcHqLwalNBsMhEioWkxH17DaLSWDmLHXl8VVNOTnAZB2ct2hjkqBFWMyn9Pp9VMeQdQqyokeeZTz2rofob5zGYjjY2WF8e4/54YhrTz9FNZ5htAE0dW1xVbwOPvj2/EQYiCDGJDuaaGETVOS6ai+Y1hk5IBJtN1zlCE4QX6H0fmxc6W2BUgTVSHEEgo9aHuXa5hNUnAO2XFDPJmS9KN5RYknKoGayxBqGD+l+whMrTz41vZQoyVOdTaXnUhUo1UTqqiLLu5Sl4uUvPc39CATN+Oacy7/4P3igrtj4rj/L8C1v474////ilf/6r2Fvh+0HHsW/9BJ2saBWHXKjCcFFcm8NinhfFwBbJ7tE4a57QueP7HmCi0KlTCkcAet9ktHEGplWgtEawYE4ghZ0ZnAI09EhmdF0eh2UK3F1DVoTNLiqTuKceF0ipVjQ3SKOwcZtiRi8V1TlHOUdIrG5TekMjSFSRzxGB7REO2zvE+DGK+qyRhmFVgYBtAo4BKzFB
48KQlDxnMd7QxDvMFkWP7NBcIspWsVaVBChri1agzIZmTboLMfkHeajPbSr8HUFFrwl0ZYVSgLVdIpUNVmWw+AMzKrUqPfG/31ZCT/eZBFC4Mknn+Rv/a2/xac//WmqqvpGD2kVq/iWiw984AP0+/1v9DDuip/4iZ/gX//rf83ly5df93t6vR4/9mM/xjve8Y6v48hWsYpVrOIPHhJgczvn8W97hLe87SG2tocs5ofoXpf1Cw/i3z2hOjxk/6U9RBqRRwRuqkTR0EeSeiBgJBIHBBU7OY72hrQL8PH5xZ0F+888Tb62zmaeU2yfAVxc/FdC8BqCtGSCEAKiMkR0/NcquxP+U2UQLAkIihcHoqPlBx4VVOzeEBCdISourjfbEiIeM2ICY8JalY7JqGR/95A7u3c4PLzDfDbDWUftSmxlKasZ80VJXc2Zz2aMJxPm5YLFfIZzsUNBiZBrw8HeGg9cfBcb6z2KThfnLbUVsrqK8geVoXSICV0iXBSdAVmvg0ZTjncBwdYl5B38+CDiOvs9EIerBVsJQoei16fb6yOiMUWOd5ZyVjKdjBlPRpSLGlBUdUVdOXqbp7nw8Nsi2cM6PMn6hESHEJW6A5JFS3CpitN05qQ6jYrChuZ6x26CuwUK7fepXNNqMiQk6xMPaLw4VNPZkYQlR4YrHhrCSwj44FKnT5yp3rsoXAkukmCCJMsU2vGSthlCWLLZ0US7FZWKDypRTqQld2it2wpWczQeqJPdS5aapZQidk8R8I0tUrOA5x2OgPPR9xUEFfTRJyU0tTF11MEkJGpIY18Txxz1KAmjmt4rgOjUtSQkEkgcmBLfHtMqVrGKVXwrxr0WlY9TGE5aEL/X48cXsUMIPPPMM/zX//pfGY/Hd5EtjsdJti2N0OP4a+61GH4v8ohSqhU5LBMs7hYfHglLluka4/GY4XDIj/7ojzKdTvm5n/u51rJFKUWe522h+/DwkCeeeAJrLQ888ACnT5/GOcdsNqOqKpRSLdGj1+u1+2nqbkVRtGNuxBiN4KIhgxhj6Ha7ADzwwAPcd999dDqddtzdbpfJZEKv1+PHf/zH2+NvaCubm5u8/e1v5+LFi3cV6BsxCcBgMMDao07ukwgUy2SVJpxz3LlzhxAC73znOxkOh3zuc5/j8PCQra0t8jw/8bq9Wnw1NJCTCCLLz72eeXP88WbOwtExL8+f1yO4WH5f83MTy3Y5fxDSx/I+VrGKVXzrhRZ4dKtLr68YnOnRU4q9/Tmbj93PufMbuMmC6y/epLtesLGxwd7elINJRW+zx2MXT7F/Y58vPvsip7c6DHsFL9864Pa44tyZdQiBUvdQmzn1YorcOWSBYETodjX9tQ7rWwX9XoeA4ebuhPGkYlIFVNEh9x5dO5jMmVYVa5tD6spxZ1pj1npkIuzePsB6RagXFLWFOsRFUqCuaupkKTKqLHkvREuJytEdZvQLgyl6ZJ2c0awi73TYOt1lPpmgxOBu3EYfVDgHc+tQmaHbyzDOIV7w1iY7l5gjxnuCKNio7TzloZHkKj5m2NFWxSNecKVDZZHS6oPg61jzaBoegERgjfRTHwBpiJwOXM3BjZtxgRebBBwxZw8ExM3BxnpFtA3RiMpRukj6gEQV0Q3ZoqlBEEUNoT5qHwk1fjJmcuMAO7d0uxnOg7Weqop2L7XzhJj6ozODZBnj3VvMRhNAYXID3uOqGh/AGFDi0dqglCfTGTrX6NwwPLXJ5qX76K71yLUCH5jefJmDpw+ZHIxxdU0IqfXFRyGGUjounOt0H0hAEW0zRAUWBwcQBFNkZL0O6+fu57AGTweG26zff4nTFy/g64q3fdcH2L16g9svPMfhjdtMDw5QhcG6aPvirIvXzDuUMrFugiQLGI31sdYSRKJ9j4rH4NsrG2smtnIwmhJsTXbaI71tJESSblNPifbQic4aIITYKOSDo1pMQWeYIlpLK1O08yc04tF0TUOim4InSLQHasuQSCv2aJp8vHPY+YzN06c5fGmPxY0denmch87C/isT6v/7f/CIrdj8jv8H629/B/7P/mWu/fz/hdx8hVOXHsJdfp6pt1iV4YKn9oJb1BhnyXoDrHXU80SzyAryLNJWQup2cz5Q20jjMBLrYloMkoTAEgIqiSUkkU60ysBk1AQshryX01UOqWfULmB0pKe4+TzZ/QjaZFGMrAKSa7xIFOeoaDcdgsLbGvGRTtLQSlAu0pbFR6KLijU1bQOYLmVZUVclWqvUQGQwyqGNQZPhKkvwnhA8znocClFRRK2MiRZGwYISlBhwNd4rnPVR8OUcOotkEZ0ZTK9HB8gyyNdP4UKOO7iNQlEuSpQCLV3EC1oFTN4lV/10L//G/31ZCT/eROG958knn+Qv/aW/xNNPP/11SRxW+MFVrOK147u/+7v/yNmiPPjgg/ydv/N3+Bt/4298hT/ySZFlGT/wAz/AX/trf+2uAtYqVrGKVfxRCqXgocc2eeQtF9k6tYk2gTD3aKPpbm4QHnqU2d4u0/0ncAc2ChPEpUXjiCs0MWUAmuX/iIhUaanfBY9HoxvaB3F93QcBnzG9usfB1tNkwzVUt4cYBUZHjKOKan7nfPRJVY0FxpLgA9JidoiJS8gIwcYukZC3JAUViGhQQHTEiKpG+KEa7Ggs8BOgrh2T8ZzdW7vs7t5hPBpRlnOqssRZx3wxY+9gl1s3b3Bz9wb7h4eMJ4fMFvO4IJLOyZGUIBamHrlwih/83rczHk/o9fqxO1RpaldjshxrS1SiOWitIcvx1uJYoHTObL5LOR/jK4fSXWpTw1BjfUndcRjpoWc13tYUvQ5ZphGjcMExr+bM5iMOp3vMyhlOPNZWVPOK2nre8a7vougOwPsEy4z2OLE441EqS8WalHQrlTpwPEi0TUm2vakwdLQgEO1/4vUM6Zq1FixNCSGQSC5xFrVinZTM+wAiRyjWhmJBonn44NtCgQ82dbIka5XgwKeuU6KlC+m4GlGFT108IdRxzJL+KUkLEx7rLD5EMsjdyaW0xA9rwRvwOn0u0hwMweK8xXmH8mnhhMbOxR11hISYjCul4mslFYHE0bRNiRx9xgSJ3UneRrpMEi1FUkt8nVKpR6ax7qFB4a4sX1axilV8a8ar2V8sx+sle5wU+/v7/OzP/iwHBwetqKIRWyxbrCzTOl6P0OP4AvnyMZxEbFBN9+oJwpRXq08ppajrmueff5719XU+8pGP8Hu/93tcvXo1doXWdUv8aPL327dv84lPfILz589z5swZZrMZWZahlGqpF8aYVmjR2IgAdDqdVlBgrcU5R1VV5HmOUoqiKFhbWyPPczqdDltbW614oyiKdvtaa+q6bvPwEALWWi5cuADEBo3jJIwsy04UNdxLlHDS9aqqiitXrlBVFWVZcunSJba3t9t9fj0poK9nnv5B6pHNdYO759lxEchJ1I7j8/MkwVFDmnm143q1MS/P6a9FOLKKVazimzO0gs7QcOGRU4itGJWe829/gG5HMdsbMZ/O6HcHFJJz88YhB7OK+x87y6m1Drev7PDSK3dYX19DO8Xzl+/g
ipzznZzprKKz1kNpYD7CTBd4GxsYOusdts522dzqkqHZn8K4qpmXgvUqUjOcJXMlbjxDxJANB7hiyEE5xfR79DJhPImkB8o5Xe/RAjXRSqMWUBLpAE6EzqBH0dFMDw7ZHGZsbXbYOr1JWVlmpafTL+hkBlsuMEWBHE5wu3MMhoWt8VnGYFCgXU3casC72Jnf5IYpq04CDdJCeqRgaNUkth5cwAVLqDw6KDyC6XRQheAr2y4sI0eQBx+OticSz5G3FXeuvIItF+RNTp5sfGN4xAbQBtGG4EvQOVoXiGSxIaJpfAgujT+G4NMPPi6o1yWTG3dYjBZ0ux0CUFlHbQOOKNwUFetQKFAmp65rytrHXFw8vq4IITY+aaXQOkRKg4qkh96wz9aD5+hv9sjzgtnBAbs3XqEcjeLIbLqvECEzGiE2OxHiuY39JhrdiBtU4likcxjShQni8LMZ45cuo4qc+Y0rdM4/yKTbZf3sJfoPvJu9y09w/q0Pcf4tDzE7nLJ75QpXn/wc5eGYUHusBesCrk71jnTOgpdoFeuijazSQkjCEAGcEpRvulOinYevAk4scrAT7WE6m3GjPpJTA9EyhxCSBTCtRYu3Fb5eEHSejs0eNeM0JFefbHwkUnRBtwKftn7CkeAjTlofRRQmZ3K4x5c+/Tuo+ZysE++HHQZEMb0956Vf+xQozdaHfoSNd74HV/6f3PzFf8emucnphx6CF19gVC3w2hBChTY5mIK6DlTlguBqMp2hxLeU2OCj/U8IUQiC1oRgkSwniAHvyJSKhA8v8dyHOAeD0iysBaMYrK9jQiDMR7jaobMuKssJrsKGRjykQMWqbFBRAFNVlqpySKKISHAoAqLi7xiRKOQJtUfnCpPl8bNkPcqkGpGzuLpGS6DQEq+Zr2L9TTl0pglaosWTdbhUXopOydGiWwUPviTLu6i8wHlwzmODRCptU4MTjy0neGuRoNH9AfnaGqZzCt0b4GtL2N/HlQf4VEPDecJ0H0m1069HrIQfb5JwzvHZz36Wj3zkIzz11FNv+PZFhEcffZR3vetdfPazn+XKlStv+D5WsYo3Q2it+c7v/M4/csIPpRQ/+ZM/yRNPPMHP/uzPMp/PadC0Fy9eJM9zFosFzjnW1tb4wR/8Qf7m3/ybf+TIJatYxSpWsRxZpnj74w9y331n6HYLrF20Sm6VZ3S3t9l67G3Mdm7hPncZXzcdFEckD51U/FEMkKgCpAV1ieSNEJIoREgL0i0fgnquOXzuCsX6FmbQQ2UZ0ikQURh0EiBEWxYV+Z9HxQA5WjyPzSWxU0CUQoWsLVrElDnxSlJme0T2SJtS8TXeQ7kI3Lk15sb1q8yms2TnMme+mLN35zaXX3yBF155gZ29XeaLeRIf3DviwnwSQ4in281AFPPZApU6YLQxWOtQiTChTOyGUbmiciU+OEo7jcWQTo4qwJUVNq8RHN4G3KKirhZIgMGgR5FFYYaIwjvHoqw4PNhjNBlTVQ5vI9a8rCrOXnwLlx5+R6zOpOsTUrVGUiGF4NrtBQIqROSlai1TpLHlTaKIkIQYNG0PrWAjbjGSVeIFVUhIxi6pKBAakUiihpCuMRIS9SKJNBqqR7OQhoMgsUslhDQnFUEaIYrcVSyQJDxphtyOCx3HRapWhRCvj0QR1HKCmaQo2CDUyVY1ZBE/HEIUOvlGnJK2JaLTglFTbZF2Xkoaj1FZFN0goHTsqBAIzsXPkne0Fjexkha7X0idVelA42fHE1wsnChR2GSDI/Lq83cVq1jFKr5Z4w97AXh5kb2qKn7hF36BZ555BmstdV1/xWI40AofTrLCWH7dSZYdzWPHqQmNIOFeooXjpIbji+fLi/EhBJ599llefvll3v3ud/NDP/RD/If/8B9accWyBUrz3sViweXLl3nllVc4e/YsGxsbLZWjIXlkWdaOsbFYaUQeeZ63oo/lc9TYz2xtbZFlGVVVtXYwy8emtWaxWNzVgFFVFb/9279NCIFz5861VJBut8tgMMAYc9d1aJo97kV6WRbxNOPz3jMej9tzMxgM+At/4S/w8MMPY4yh1+t9hXDitebo18PG5KsRSJw0Z5cfv5ew5CTLl+VtNOerEZWcNL5Xe/wk25lXe90qVrGKN2dkheaRRzc52BkTujnnL2wh1rJ7dYLWgibnyrUDLIq8Jzz21lOU4ynPvrzLbDSnGxSz3SmvjGZsnF1jo6u5dXuBHvYxmcaPDpnvLdgbl2xvdbn//iGnTvVZW+9ia9jZr9kvY041Gk8pnZALVOMZ88mMTqHobq9j1occju6gg1CO5ozLjGru6NmaQjxeQRVA9Qoy8VSTksrW6KLg9IXT4CsWBwec7WQUwy79rTWsC5Qu0OkV5HlOXS3ITI6azZm8cB0phZl3eGUYbvRR5TQKKVB472INQ6RtwPAhJGvcJleV1E6g4iKrSk0LAlrpSOq0IX4tF5gigyzmzz7UNLYtSlS07yUJGIKP9SHvOLx2g3Iyord95miVVUi5vkPcAkwfdA9siWiDzvNo0Uv6He9Du+YPIYkkHGId3lvEWeqDQ0LlKPIc7wPWOpxtjteDMQSlo30NwrwsgVjbqqsqCgnSIn5mBGNi3q61wnQyesMOa2dPUeSBySuv4CqLKx1ioDCpscU4VKbRWqEV4AQMKJ2EDElIEckbKlrOoSMNVQQJHhLlIhDrIUoH7HSPxfP7YAyL557EfupXMYMhnff8CYyu0JnjLd/xQc48+iC7L17m9lPPM7l9G1fX1FoR0op9tObxSAgYI6nRJdrQatGxzuZTDSQRYJ1LtZcqwNgDt8hOZYRigCidaoAqjh0hiCd4aOp7OLBViTYlSiu8ExR5at5xsbYUfHy/hLhf0YlkGmtrsWbTiD7SvawGrQzVYsruzh5PfvoJhsqT6RzvAio4gtFYZzi8NuO5j/4GD41GnP7eP8vW+9+HGx2w8z/+M6d7Baff8k7cs08BjlIMyhhEaWw5R3xNYQy5iszh4MF5j3WubeTyPgqljVboTg+tDb4qY20Ng2Bj45QxqLxgYQE0vSLHhBJXzvB1JO0qrfBE62ClNQ4V6SyuQmvT1twcYNH4OmC8xYgHpaP1iiiUKHxVAYLJFVkni59lWyM6w3nB+wUaTZ4XZDrgQ6Ba1FR1SWfQi/ZPLtYUrfU4H1DKo1QRPy+1Aw1Z0cd0CrxotDEo04XZHKopQWXxd42OY3JliXjHfH+GiCVfW5BlPZzxdHqGKmT4chIthbIerlqQFb04f139hv99WQk/3gQRQuDFF1/k7//9v88Xv/jFr8s+PvCBD/AzP/MzfPu3fzv/+B//Y/7hP/yHX5f9rGIV3+xx8eJF7r///je8uPFGRKfT4R/8g3/AqVOn+PjHP87Zs2f5i3/xL/L93//9DIdD9vf3KcuS7e1t+v3+H8ljWMUqVrGK5ej0Mh569CIbm2soraD2KYmKdi2uMPTOnWX7He9isXOH0YuHNDyBhroQxR5pcQAgSLR/SYvODnCAOVpaj0vtEpX5AlSHNftPP4sa9NGdHsWpbXTRSdtwscNAdBIdRLV
5TEQligXSr9v4mpCSw0ZpH5JQISnbRaXu2mipEVRI1ArBeajmluuv7HLj5g3m8wm29izmC3Z3b/G5Jz/DUy88xXg6jQWC1xlN54xDsC6gjGHj1CaHdw7JC4OUC5C4yGJ0jgpCXnTIsuxIF5FoJLVdoGpJXrg1KgTsfIoLgncWZYS1bg+TazKTR9FBbZlPxswmI8bjEbWrsenfYl6S9TZ4/APfR5Z3on+mNF0czblVsRslEUBSlh0XlhqhBo0QwqNS8i2i0yxJiXqI86OxgmlEIJGGkX4OgpJG/BG31xjFSELPKiJJhmTrEoJvHV1JBA3vLaJMWpiIz2qVJQpGSMKjuL/GWkU1hS+SMCmEJHZpxEKxGypiZqWd90cRDWpiJ0NzLPEIvG9oHdFXN3q+uuRZa1NXVbKqSd3ZpM9HbDvy0C6cHKmfJAmvnAhaRepIJIXEua+XxClG6bgvFIJF1HJf1CpWsYpVfGvGG5WzHV8Ef/LJJ/mN3/gNZrNZK2y412L1SQKNZfHGSa9vBBcnLWxrre8SJDTvcc619iXL+12Ok8QIu7u7fPazn+Xxxx/nh3/4h/n1X/91ptMpeZ7fZQPS0EWabaytrd0l8ABaQcUyUcM51xI+6rqmqqrWmqbX67WUkCzL6Ha7hBCoqorFYtEKMPr9Pp1OvHfSWjOZTFhbW2v3oZTizJkzXLlyheeee47z589jreXLX/4yZ8+e5cKFC62IpK7rdtvLgphlcsXyMTXfN/u31mJt9Hzv9/usr6/fdf2Wz/NJ8++4yOckMcUfNF7LOuZedJnXSw5Zfq4hzRzffnPNG/LL8X29mqjjXmM/STR1r/euYhWrePNEpoSbrxxi1tfY2hqwc2vKwjrObHewc8fewhFMxplzfU4Pu+zduMMrt6Z0OhnKBRYLy6yuue/CGotZyY1ZhhoO6WlgdAilo3KeU6d6XLw05NR2h0F/jdLC7rhm4hSL2YxFZXFiWNsasn/5CgeHC7I8x5Mxm1Xkfg8Wc+YzIt10tqArUQRQ5Dmz0mIGayjlONw5wNlAnmcMNoe46QhtHcYKZrPD+ultJrMJJsvpdgtcWVPamizP8HXN7OWbGK+Z+UDtPd2tHhtnNxhfmbYNBCGAaI2z8ftGABKdQGLThfPJ6kMawINEAkSIRBNUEoVowdtA5et4D5AplDJRTGKP8luCxHsAFUkIgjDZ2WW2v8f6/TUqd7EJRXRsdgme4EpEBojpECqFaI0uirj4nuxY4698SZQPnyxBIEhsWrGTGXa6iJRVwFYB6yTaEvtA0ImUAJGeKdHqztWReCCASgTQTCu0BmMUyiiKXsHamU0Ulvmt65TptUYLeVdHCoSW5CBrYg0qOcZKoqjEmhaR+JEsbUKAPIvnyfvYxBEht7G2olBxrMFjulm6VgF8GWsos5rbn/kV+mcuwNom4gJb919g7cwpzj32Vnaef56rX/wS1f4B9aLC+YD3Aj6RXySAdwSljsRACOIhFqgi+UVCJJg4G0AF1LTG53dQm4aQ9WgqSBJSUSs1i4Wo/AERnK2x1QKjDdpovERrXgm0NZv4dz01yLT1jeWGnqYxKEASJI33dtCimIxrxtd3OK8jaaYOAeIUwxiD94H5nuXyr/0OAKf+xI+w/aHvoZpN2f2dX+H8u/ucfvSt+GeewupokRSCR3lPjsZog0ic6857nHcEgVxHsoct60jwFQO2QhuNGA3WEqwjeBuJOiKUVY0YQy/LMcFST2ucdeA9ysT6kFId8FHcEsTjgyaUC7xzCB5RCkcU6igJmCxS+bSJttdQYRBCJ9azVNZQU4QgGushVCUaH0VWUWmEqz22XET4cvC4ah6JOYHYnORiQ5L3Di2RCFJ0M0yng2Q9fL1AZZq834cQcNSRjCsaCT7a/IijLivq8QQBvBVUr0+j+8nyAusSObGuMJmK9kQBnG0snd+4WAk/3gSxWCz4Z//sn/E//+f//Iqk5I2I++67j7/9t/823/Ed34GIfF3xjqtYxTd7vPOd72Rtbe0bPYwTQ0S4//77+Xt/7+/xl//yX2Y4HHLx4sW2mHDu3Llv8AhXsYpVrOKri06n4Oz5c5g8Kc1FpQXuiFEUFHl/jfVLj1A+vke592kYOcATkgGqxOwvcTlIwou4tOyCT+nY0UJ1TFeiYCA5fiJeM7txgHruOfL1TXSvg+jofem9xbmKhtIRm0ojvYOQRCCiUidERGZ6BBFNREtE8khIQg0VV/jxRLygiAYV1fC2Fq69fJsrV65g6wpPYDod8dnP/Q6/+4Xf5WB8+AfqHIwwB2mT3OAdw+E6rnaMRlMG633EahbzGUUWyEyGt4486+F8iclyQhBMVpCFLnVVYssFIo6q8uhiHWctkity6aCMwQeLt46yKgk12JlnOplQ2Tnew2IxZzKeMF+UPPqeD7G+cSaJehr8fEJVhkRQUc0CRaR8xNpNQEIic6TnVTrvUaTTdIi2eX4sNoWlTmYJ7bZcCKkbJM6SSKpQ+OBicaOZRun9IcSOFE+zIKPa8yyoxJQlahtE0RjYNIjVkPChcaPNmBoUeRSLSDt30v4kziGdsK7LoRR4gdof/cvSayQt4EUxiEpFkGj/4r0loj3T+SYVh5Igyafz7oNP3rhRWJXUTFEo0ibNri2ihFQI8WncwQe0gtr5I1GPsNJ+rGIVq1jFq8S9FueXqRnHiR07Ozv85//8nzk4OGipkMepHMs/N0KNRqBw0hiaOMlqY3m7J42pEXs45+6iN5wkPlgmhjRjLMuST3/60/zIj/wIDz30EN/7vd/Lb/zGb2CMaWkmdV239bTGvqaxeDGNl7ncbQvSCCiac9kQP6y1GGMoyxKlFN1utxV0NKKBxWLBaDRiOp3S7XapqoqNjQ2GwyGdTofxeMxsNrurIaPT6fD+97+fP/kn/yTvfe97ERGuXLnCZDIBoiilsa7pdDqUZclkMmnJJsdFOMeFH8siH+99K15Zvm7H4/UIE47bo7xRRIvjc+D4PFv+t/ye4+SP49s7fkzHCTZa61aU1Hx/0hz+auKk8a9oH6tYxZs/KuvpnBrS63XZP5igC839ZwaMbk8YLSDvaB66uEE3h53ru4zmnrVBh8VkwXhuWT/X53wv5+DOnL1FhjJCdWcfLY5yUeOUcOH+IWsbGadODzCm4PZBxcE8/o4f74+ZOk93Y8hQAvOrVxkfzvFSUNYWnQsDncOiorQa62qkslBbdLcg6+VMUMj6ELuY4WYlVWnpD3uMpnPs3h6nOjlOPFv3b7K+1mMynZL3uxSZoV7Ev5Od3FBOZyxeukF+WFO6QO0UWa/D9vltqr3dlI+GtpfAW9/mgT41qkRqa/rdrhRYlygDkaJBIrBqFUUXjTVrIBBcI5AQTLeDKx0u6gdSpM5+pahtFP8d3N5hdO0GZ97yDnTHLzUmpBzd1akPJItNHTpH5z1MXhDzc9c2dTS1gbaZQqLNxejqbarDEiVQ1YHaNvUoScKGSDBxNi3YZ0I5r3FVsg9WQqaEPNdkRqG0oDOht9Yhzw314S5KhH433mspEZRp/n5qVBYJoyhBK8
MRbTOkfo5UvdAq/R2lfdxZh2lsZxtKqcQmG2d9FNjg0V7jgm8FBAFPcDOqm8/hbwnXXyyYXXyI4X0PsP7gWxievY9zb3ucq1/8HDd//wtUh2OcS2ATF7C2uccNgIsCFImL70paOQc+Ov5G6kvtohXMZAyZRtbOIrpo60ctgVclumrwsV7hwNclTptIcnEeJSbWRYREOvFokVR/UY1KBtr/h1QncSQVAOXoEK00L734Covbd+htRYGB86EVi2gRUHF+zw8dL//mZ8nWBmx8+C9w4Yf/D14cHXDjC5/g/u/8Y2xeeAB35TIj6xGj0Vkkz4gWbF1Rl7F+mHd74C2hqqlqR127eOS1w81nBL/AmBxxHl+XaKMIWpjPFngCg2EXcdGqJfj4+ROj8Q5sXVOYDNEGJ4LUcV57rbB1BUHhBVwQtAITIMszdJYDlrqsCLYEBXkW7ZJC83FTpGsVCN6jM9CpmFXW8V5f8gxjFK6ucTbgdU6wFSo1cYm3GJNRdDp0ej0kEXZwFiTexwuJXpIVBO/QRY/gbbLz1ogLiC8imcd51MJT1ZZMe0ynj+kX2BrK0Yh6MaGyNc5W6UDe2FgJP77JI4TAL/3SL/Gv/tW/ajGSb2RorfnBH/xB/vSf/tMYY9jf3+fZZ599w/ezilW8WeKPsvCjifX19bZrZxWrWMUqvpkjL3J6vU4kXwQVb8BVxOwpFC6uvNPd3GT9oceYXrvG+EvP4r0mLk8HNNBwHSAmXhpJ7/cx2ScQ0qp9ezueRBvNY8Eqpi/fZHTqRfK1dfpZB9NVBA91ZfHOYzLavTVJeGMn0xBBQpD0NaDEEHS4i/aBil+lzagFax0d3ePy089z5eoVXO2oreXWzav8+if/Gy+8/ELsovgqk4nYvREX5Y1EIoMg1GUJIbB9+gxXr7zC4cEY1gJ0A1rlFPkQ70ByhTEFoY4JbK47oDSZGGqBShxGFC4oQi5keUFVl3gsOmRMJnNsCFR1zWQWLWuMNtRVxWK2YDavkCzj1LmLscBAxI46b5PVSkgJIEeJN1FA0eBFQzgivqTTGZP6tmYvR3QPSGIF0AlbypIIJE6HVikBiQQiyfalEWc0KX4QhQs24VxV2wHTkj8SitQ380XivJQkGorXKI0pzf9GTxGSQCN2Luu2yySgEfEYpY8OEdBaMVzPwFr8zMfCiQMfm52Sr3GsRTjnI4qTI2FMIGFmk3AlnjPiGwhIUEl0En2UVVDx8xMCPsTuCpU+Bw1dpemCUkE17jJ4H9I5dpGWE44+u6tYxSpW8a0Uy4v4x2kBr0VFOL6N5fsDay2/9mu/xrPPPtuKPu5FPWi23/xrhA3NPpeJCcctN04iUTTPNcKLRsCxfGwnWXQsW24cv9dptvHUU0/x9NNP8+EPf5gf/dEf5fOf/3xLtpjP560goxGMNEKThnyxTMpYPp6IwfbtOE6yg1kWXmit23065+h0Oq1YZDwes7a2Rr/fxxjDeDxurVe999y+fZtz587x4IMP0u12cc5x8eJFxuNxu19rLXt7e63opN/vc3h42JJKlufEcdFDQyrJsqydC6PR6CsEOsvbudfjx6/B8nV/tdcenx/3eu6kbR+P48KPk+bOa1FMju9LRFpbnYZAc5It0UnxWvfhxwkhK9rHKlbx5g+dqKXXbx9w8YFN1oxi5/aC0axmsGbYGhiq/RH7C0tFoBDhcH9OFRwPv+0888NDdm9NOXSagMOO53R1RuUEnRnOnRtw+nSfPDdUteLm/oJxJfjKcnA4wWcZp86tE0b7jG7sM5140AUqwOZ6l0E/587BAi8SiY/WJ8uHDiHPcYMBbl4y2jmgLGtObwzYWhswGc3YHhT0jEIVGWcvncaYwHhS0R30KTqKugzkvQG5Vtj5FLs7whzUaGWYJxJokQnlzV3sYkHwMT+OvQ6JpJDya+dT40BjtSoCPtpJeDmyafGJ2KlUs2IcRZNKVCRSuIALUUAhRqMkgxp00NiqTp370TJEG4W3M24+d5mHvuu7YvNCkwS34ggXxR+SJ/FHjhQDTLfTikQkNSVFMmiifQpIcFS7u5T7czJj0t/5xt5FYk3IGAyC8xZJdARXWXzt0AREC0prsgyyTOhkhqwIZJ2cEBwZirxjQAeUApNFuxhJQo5I8PSI1hHiGSSSQ7WKzR/eo01qh2ru24g2r0FBpkwkKJh47XxQ7T2WyTK88yAGE1INR0VRiPdNXSSKM3w1Y3rlWabXX2Cxv8P2uz5I/9Q6D33HBxme3uL2l/83B6+8RCgVdR0Fq8GH+PlSqZ4TAJ2au3wUYQQB55vqS6RCyMLBZIzp9mjxJvHKpKphaJtdSE02ztYoVxNchaicIJamESgoHcegNKgsigMk2dW2f+abWlMULRA8s8MJIh2ee/JpegSMxHkbfCD4QG0DgZpOlkVqDYry0PHSxz7FQ6bDxnf/GBd+6P/ghYMDbn7+Cc5/8EO4soSdG8xUpNQao3C2xld1rIBKQFyNqy3WBmocwXmM0WQGgq1xLqAKi9Y5ppPhQ8AGj+n1yXFkymBtHetASoPJ0ny2aIGAB20IaKwL4BfJDljjao+jRhtDZjJEebJujxCExWgESqHyHAkO0Yna7JN9sE/kWckwRU6nm0HWYzad4WqH1oas6OLrmsrWIA4VasTbSKVRmqzI6A765N0uknUJQF2VaF0gWSfOy7JCVPzMBDdFxGF6XawTxNWIBHSngyMgiwVhUeO8R0wUwOSdAj1Yo65gfP1a+sXk4u+kNzhWwo9v8njxxRf5qZ/6qa+L6AOibcVf/at/lcFgQAiBz33uc/zyL//y12Vfq1jFN3t0Oh3e+ta30u12v9FDWcUqVrGKb4mIiRxJgR+O8iYVxROE+D2ZorO9xcZb3069d4fqyn5MlokWH4pASg85yr5SJyeh9YuNi/jxWVmiN8RvNPXYs//Ms+TrG5jeAMk03nmsCLYuyYsCcLHbI4EfJRUrIjki2pAE7486JZSJFh0h0SgkJiUxsfbU1rLWX+Pq5etcu3kNW1usq3n5ygv88sf+f9za220pEV+tijwKEUjdMKBcYF5W1K7Ce0umDPc/cIHnn3mWanfBxvo6gibTHTLJsHmNluhXGULAuxotGlEaCRrxkXCi0VgXO1tiISBnsVgwr0sq75lMJszKOVoryqpkPBkxnc5xEijyDuubp+I88PGsLncbB1zbhdKKdRJC8+iKq5iXq1Q+Uipei3S6hEidCLFOdIRfbQQaDZaV1P0cIqKyqfcE1aDWPdK8hiOChvM+FXwACZFwESR1K6XqlsQuJa+Inq5JUHHUURSSr23qFiIl1fFCpnFFSo0NcT4sn6eiYzi1fYaymrC/OKByUWQRfJynHrAhSWaSqCWOOi1khJhwxxOWxBkiqQBHGpO0Yo+4xfQ8gHd4iWdbVLNII2hU6rKok7BLYoHoqDfmq5rTq1jFKlbxZorXuwh+nLLRPHaSSOLy5cv89//+35lMJq0Q4iTRxjJRY5kA0iyIN/s4bpdxXOyxTOhoXn+cpHB8OyeJRk4SCyx/f3BwwBNPPMH73/9+HnnkE
b7ne76HT37yk2RZ1u6nLMu73lPXNc65VkTR/N1szmFzDpqxlWWJMRFJned5Owbvo494cz7n8znT6RTvPUVRsLW1RZZlzGYz5vM5WmvqumY2m7Xnfj6f8+KLLzIajSiKgve9733M53N2dnbIsuwuOslxQsmygOZe86gRqDTHGkLAWtvSRI7Pp1cTXJz0+uXXNtf6uCjj+Otf7bnj3y8fy/Lzd90THpsby9Sbe21z+T1lWVIURXuexuMxp0+fbi1fGjJO877XEsQc38dJn+WV+GMVq3hzh3eB2uS89cEz+HnJ8y8dcmtU8uilAZsdxfjOnPFsQW9zQOECL718h972kEv3b7N7dZedsUV3uhyMZwwkYIDRbEqnk7N9Zsilh7cp8py9vZJbByWlA1nMGY/nVHnB1nqf8tpV9MKSSU4x1NjS0hXodTSLeYkKgUwCU+ujGN8Y1LCH14rbO3ssJhVaw8UHztJd77N74yan+opCQ3d7i7P3bzEbjRmNSzr9IQTHYuEZDIcoEexijlnM8LcOyIJmXnosQq9jMNpTzyoIYF2yRZHYcBCbDhQuFWlC+trYh8a6SlPAiURYFVegU34Kd2eTEoldWUZw0Fnv41QUD9j5DDVW+MrhahubLazFiOLq089SjcZ0NmowxV1ASsET7BzJ+ojqIKZCF32ywRBtMmyYp1dGO5GGWCIEmM8ZX91FGwEFLok+RFINTGL+rYh1B5NFexdbWowWlGiMgaLQaKPICugUGtMxGEVL6dRaR8IHIVI9kvUHIcRaVCwmxRydJJppqLVaR6JtCIiKtQ4tOh18AKXQSRwhSqFR6fxL2xQSBb+CFgNB4a1NtiCSmj9CbEjxNVJ7Dp7+PSYvP83axYc5/b7v4f73fICt+y9x+TOf5ODpp+k4oVw4ausJLl5376PVclQeRGsQn/68mlQt8j4gPkRx0dzix/uo9Zyg+iAZElxLDRFU0+US/057i3clzuboLDZzxfuBowYjpTRKDNHGOYpJpKklBpBk/wvgFjMOd27S73d5+cvPcjbXFFpH0cfSfHUOrHYYHZvfbOkYX5vx4i/+Oo92Ogw/+KNc+vP/T57/V/+E3c99hrPf8b3Yck6Yjim9p64qgvWxhpZoLHYRKXhBa0BQIaCNQ6tUv5Q4t5QyePFU1mEyxaCX453DViWudgQx8bPqQWtB5wVePBI8WufEalUVRRzGIN4TLIgPUEeirOnEOexCJDVrHTCddYKtCdTxHGYaV5X4OgpNdKHJM0FpQx18sppxiBbKco6tY46ixREceG/R2lD0Crpr/VjHVYagTOp+MgRn8X6OlYDoGpV1ITgEFz9LpojkEO0pugVkBbULzPbuEMpJLHv1CnQnJ9gaH2a4umI+mWO0S/WvryQmfq2xEn58E0dZlvzLf/kveemll74u2xcR/tSf+lN84AMfQEQ4PDzkp3/6p7l169bXZX+rWMU3e5w9e/Yu65RVrGIVq1jF1z+0qNRTEO05GiuJqCQPiANQ6E6XwcWL2NG72Dn8Xfy+RRMRmUqOyBaROhC3HYkfHodZkoMkdT/QLD5HDobHeWFxe8Ho+efobp1C97u4ZH/hrY2CAFEEH20xGqsOQUVBQNOKIEv0BolJHLoZg6dRhYQgdIqMyeGCZ55+nroqcd5x/foVfvFjH+X2nd0lQsnrWyBvRCJaxyTde4dSUJjoCbvWLxAP3taI6tHRBQ89/BCvvHKVnd09XBU7ZnpFj6zKMEUXY7px0V4XiMlQDrTJyIoeRgesd9j5DC2aoAJlXTEe7VPWNePplNF4HL1QdcZkOmI0HicMZZdub50876JIdi4+GmjGPHyZsBGtdBDfChFAUuePRRGpEloaYUayEkn2KhHoqVoLoGgXFLtRJMSrSAhHwpLgo6RFx2vlg4vFjXSdvXfRbzSJJ7x3SQDRiDgaUYdHxLTzLoRGaNG8NBXAkmikFZWkuerDkeUMkgQsuFisuQvHLwyGG2Sl4cAcYstA7cA4YtdPAO8szsVFMG88yiu8cxGp6tPnInVZxYIAhKDS/hRKSJ0xqagR0nGGRqDSLHREYZNI6mgJDeUjdthIUDgfRe9/AD3TKlaxilW8KeLV7CiOx0kCj+PbaIgPv/RLv8TNmzep6/rE9ywLPpqF88b2oqF1LIs2Ttr3spjj+PE0C+jLBJFlokbz2lc73mWxQyOCCCHw+c9/np2dHR588EF+4Ad+gM985jOUZUme560wY3n/DUWjqqr2mBthx/K/5r3Na/M8v4sEsTze+XzObDbDGMPt27cZDAZsbGy072ksXiaTyV1Ck6qqWiuWT3ziEzz11FMsFgustTzyyCMMh0N6vR4bGxttI8q9RAj3uibNMTQClcY6phnzYDCgKIpXFRbdK5rrqZRisVjQ6/Va+5vmHC4LZI5TZV4rjr92+ZjuJeZYFpjcSwh10s/N3JpMJpw5c6a91tbadp7cS5T1WmKWV3tsFatYxZsvtFGcXiuY7M2Z6C5TLbz1wgYyGnP9hoVM0+3kTG6NGZc15x85g1osuPHiPlMKfFEwHo1hXlKKUKvAoJdz/tyAM+fWEa95+ZUxB/PAdDbH1ZZyXlEFRV9KyquHhLKmImfhhIUr6RWabqaQOlpk9Lo5i9pSOkvlhU6eY8uSejpHrDDoZpw6t4nPhP07O5zqaHTwDE5ts312k/29fRZzS2+4hnUO0YbN7W18OWE+HdPPcyZX9+iogjk1NVB0NUUmUFtwKbcNqeXBRwGKbxpouPvvnPdJCEokfjpinadpxGhaN0Ij+giJUuFSw4N1eOeYT2dsPfwQ0smoRneYVTuRppHy8LKq0Nqwd/M2491bDO6/gA4Wgl5axA3gKwjdI9qD6ZD3N8k7farRQRSn4MH7tpFF/ILJ1Rv4hcNoHR1jgqKqKnSRR8sZrfC1QySgAth5zI+NgrwwUfSRQZYrOh2N6WZoo4mAtIAW4kK5gcbuGBUFIKIErTQhiWYifDZa0Ua7myhG0UFSHcInYUi6b0l1CImKBxCdmpuSfW2IIhwvHq3UkXAnRKvaRhDiCQRf45XBuyPKBr5kcuU5yvEdTr3rg3S3L/DYh/8Ehxcf5OaTv4fauU1WO2yV7itEiPWb2C4mCpSOVbgQXFuRwwdCbfFiUJM53uyjdAdERdFNc7wQBTohCgMICl/XhKwmKIMohQ+CCrGeoRSoLCdok+ZGIw8KLDe0hBDJFXU5Z3znDouqx/xgysA04F+Vak/JHtp5qsoBGSaLx+A9jG/Mef6jv8JbOgX9x3+QS3/h/8PV//r/5fC5z3Lmne/H//4XCHXFzIJ3yQrbS7ItTofoY83LKMjQseYpoLXBK01FIFhPoTyFFqgX2NpTlnWs+yRKDcGTFT10bqgrh/U+2sR4h9YQ0OkzJUhmCFaS0EKBg3p6CMqQ5QXKeLqDAmU2qWYT3GKKSs1jaIPSCqPiOS3rGhdcnLfB4BAqF3AukAWHZIqgYp2s6PfoDNdQJl634EHqMkF8BWcD1lcxd9EuPu7j75qyqsFPcdUCYwI666C7XYp8QFlalDaU432Ur8k7fcpZ
TTm/w2JRo0zWtgM28+CNjJXw45s0mqT1V3/1V78iIX+jIssyPvKRj7TJ9n/8j/+Rj3/841+Xfa1iFW+GOH/+PBcvXvxGD2MVq1jFKr5l4qhTMKR75YBSWUw6Q/TvdEQ/Ua+g2Nhm45G3Uu3dwX7uKXyZhB9pew3BQUP04AxHj3uWFq6PRkCkGwSUeLwYvIXZ1RuMzr6AHq4hm2sYI9R12S6IhNTNIaIQFZX0UWgQF/FjUhVRi4hCmYy2WyD1AQjgvKdXbPD7n/0iVV3hHYzHh/z3//kr3Lqz++rnjtitIYnKYLRglCI3hqJrkBCpF9Y5dBI/4AMmSwvw3kULGKNZW9vgoUdyrr9yldu7eyidobcNncxjckfe6aNNgcpyBEW1mMfFIq3BBTSKPO9jbUlVTyjnNWXlmUzmjCZjFlVJrg3T0ZTRdERQQp5HDPlw8zR50YmLL0SkaAhJ4CA+WYzEY0mmKUkEETGuqe+n7WRpCC5KVBLqCA2VJWknCB6sT2IPQhSChOb9zdklUVyScKSxVpE4o0R0ooiEVNSgFXawvGAQkvUJJNhpFPwcrRekYgyNlUpKmtNzjYVQFFKo1GmjMSZrsfgQk1atFP3BGiY32LLGufSZCHFR78ikJgpsXJqv8XjCUiFNHakxRKFoFr9isSWe6oRLTQUhn2g3PoBqfF2UjtSREK2cgndth46IigWq9LlZxSpWsYpvlXg9i+7HH3s1O47l1zz99NN84hOfYDabfYUwoxF8NN8ft1ppaAfHhQ6vNu72b+DS9pYf11qfSHho7qeAdp/N48tki7v/znlefPFFnnvuOS5cuMADDzzA+973Pj71qU+htaYoCoqiYDqdtscPsFgsvkJM0JA1mvE1xIdmP9ZajDGtxcuyuGE2m1EUBb1ej6tXr/KlL32JO3fuMBgMmM1mTKdT8jyn2+3y+OOPtzSR5ri01pw7d47BYMDu7i77+/tcunSJwWDQ2sI01I7m/JxkYdIcz/L5boQed+7cYTqdcuHCBfb29vh3/+7f8fLLL3PhwgW+//u/n7e+9a13nfd7zanjzzXnrdPptONrzk1zLpvzV9d1W+t8NSLG8X0f//n4vDrJuuVe2zj+vLWW27dvtzXSPM/Z29tDa81kMqEoCkSE06dPf8XxH9/+SRSe10NPWcUqVvHmiuADN/ZqumfP4etDthXceWUXbzWOwKDruHJrRLY24My5LeY7h1y/M4feEDEVfrFAzWtyFVta1tdzLl5YZzjsM51Zbt22HM5L8J66rDgYLRAX6JqAcY5+p8d+ZRjNSnwQNjY6dKjRAjMbWCjDohIWc4uYjBCgni1QOq4bba336G72GU0PGZiMzbU+Wis6/QKjNTeu30HlOYP1AbPpmM7mJtvbZ1iMdqiqkguPvpXD//0lwsEC54TaedSgz+b9fdibUFUORyQ+RhprEn2EaO+S/jqwvIAu0og/4uuUkiPSlFI0RqG+zVflqDYRQlwIV4IpOuSbGwTtMfkpxHr2RldSv0W838i7htlowq3nX+LM296J7gxAHdV6ICC+Al+DMiAK0R2ywTbFcJPZ7k0cNpIplho/6sN9yjtjstxgq0gBscGRFRnBaJRO9RytojAmBExu0OJjTSdX5HkUgGS5JisUykQ7G2Uk9RL5aO2hVSvXQMVtSqp7Ka0TsTMRaZVCtI41KwmEYEHF+yEkJCFFQ/9M21Xp+iiDiMbaBcqYKPLQkbyqsyxa7NQebQweF8foQqyXKYUon8p+8bw6HO5gnzuf+yQbb3mcwYNvRx59hLxbcPPJJ5jt3MCVnqp0OOvwKKyN91PBgujUXCPRgjakDjBnA0p53ALUbAbdGUENk7VI06gV0rdRDOGxiAPvapTOEEwqMHkUGq2yWNNT8XFRqcbT/P2Htg5EsNSTMfODffamFb3a0iuSpXWy32lrIgguBKo6CRy0SkICxfjqlBd//pd4rL/B5nt+ADv7P7n6H/85F7Zf5PQ7vw154UuUhyW+jjUm72NNVZsc5eIc1kqRZSo63mhN0BobYF7XaBXoGcGoWOupqxLnAkEpnMrIjCZTOQEhKIOtkyWO1jhn8QScj3S5aBtTRFufugYrGK0R5cBZgqtRypDrjAyP6ReovMtsp0ZcTaYFL6nByFV4r7AhNnspyfA6WnALQm4irQYFWaeDyQxZ0cEqRV07MhwqS4IYUQQfcKkSl+eRauJsFa+BV+A8SsX56mqHzgRdZIRiiOqsoWyFMTnBzSmnE+qQUVc13i7Qucb5bvrs35vM9weNlfDjmzTm8zm/8Au/wDPPPPN128cP//AP8453vAOAK1eu8M//+T//uolMVrGKb/YQES5dusQDDzzwjR7KKlaxilV8S4VItGshJV0xrYx0hyYhjevDCsmF7ulTbL7lHSx27zB+4TbiFBpBS5QACK0xSEIvNijQmAiF9jVHVAmtQEvAScAHwY0tkxcvY7a26WWPoPI8Lnq7QNAxwWusLBpSgzTbliQoEAiion+kCIID0bGDhICzNUWmeeXFm9zZuY13Ae88n/7Mx7n8ykv3PF9KJPryKsVDj1xke3sLEeHqlatMJ2O889TzKib4PnZ71C5SOTIl2LrG+wrvPVoUeRaTOetqzl+8D1Fw/dYtlNIEUQxZQ9DR4tNavLfYqsTXFtEa6ytsXYHO8ALlvGLvYIeD0QH7h4fMFyXOOQ7mc+ZlSaffjYV1rfBe2Nq+j9zkaQEieayHVA4IOgk2hBB06jpJvrkEjGQ4abpKY0eTEh9zaVGxcwXV9FPEZFyip60El7xnVVt6CGEJQe89jpA6WNrJGhelQkh42rhAFFKSHbyPrI4k/vH4aO+Tumgg1RiCb4+1IWREkknq1EklsKazqf2aUKu1q/HORWFRCh+ixc362jr9YZeDSY0P4BxoHbuaWmubtP92oQKfimupWweXiiiGSADx7bkLDdlEiNY1ruksbsQuimhik2wDiPhN70MSj4APFqVVay1zJN1axSpWsYo3b5y0SP3VWEkcf6+IUBRFS3hYLBb86q/+Knt7e9R1jVKqtQ8B7qJX3MvSotlHs4i/bNXSLOovCxmax5bHuGwD0hA2jlvHHD+24+M4vpjejH1/f58nn3yS97///WxubvL93//9fPazn2U2m9Hv91u7lWWRgLWW2WyGtfYu4cdJ59t739JBGpFDYxWzLN4YDocURcHb3/52Pvaxj/GlL32J+XzOe97znrae8Na3vpVLly7dJcyAuNBW1zX9fp8/82f+DLPZjF6v15JNGmJGIzg5Pl/uRcYIIbT1vsPDQ9773vciIvz8z/98K2r4whe+wI0bN/jIRz7C2bNn2/c1wpfjpIuTRCfNHGiuaSPwaV7fnN9GRLFcg1we+72IL8fn+rJg6bjIaPm1x7e3TLBpxtFcw6qq2sesta1d0HQ6bV+/PN6T5srxx5dFOCeNZxWrWMWbMywac+4cZnGHG89cZ7aATq7YHBjqec31vZLh1gCD4vqLd7DaIL01qrLEzGuwnk6u6XZhY63H+QdO0+n12dkvuTWuOZhV2PkCH4T5tMbYmkx
ga9Clowx3RhXjRY1oxXo3I1cekYyxg5H1WAXlvESHSFhdyzR4sN6jtGZSljCzXDi1TtHtESTQG/SYjSeMZguGWxsoBeV0xtrmKTrdjN3LTxGAjbOnufPsU0yevoLpdpnuTfGiOX1mDVNOmU3mBB/zSS8xV3Uu9mn4ENqmi5DyU6U0TctOFBuAqOb3qLR0j4aSKWqZ9Knavw/BRxpGdTDi8IUXKLb7BFthxzMyY/DWEURQYmJOW5dc/99P8ZYPfwjTX0O0AXKaylQIDtwc0T0gw4vBdNYpNjdRVzPcwrY1qEBA6hI3mpL3DHbuCSKxrmAiLSJxNHCJIFpXlixXZNqjFXR7GXmmyHKNyjxZJ0MbgIDWqZaVaUTieYqkjvg1puwSaxgEwCJBEC2pkSXazZBsX0RDaO7LQrJwS1RTneobRkeKbFCA9+RFJ9YiTHO9YhUOL4TMAB4VYqNOrPl4CDqKHgI4H8A7jAiBGr/wHDz7JDrrkK2fYnj/JXTR5ebnf5v5rZuYDJxT2CrgjcZ5h7VLlFgk2dckmx1RkS7iPL6sUdM7qKwgqOKorpPmnwRacjAegvUEZSFTRCtcD+JjA1TWjdY5ugDTjVa2yFHzUEOV9SWjm7eopiW7taNLjVFZnPgQKbmEWJPSpPqcx9dxZmjT1JwUe88fcPWXf5GHBltsv+/9zK79CLc/8ytc/N6zbD7yLuyzT7NnDykrC6JwHqytIkk3KLwSghJKRSSZiInUGS10tSIzkQBrawte0FkW70d1gSJE2yQRXPQNRoIjuICX2OBTVzXBCybX5J1uJPu4guAcEiqU1rRWyX5OCB5XljA9RFRGd9DHVRXal3ivqK2lXlSgApgMvAKdIcSakgpVnHdZD6Gm6HRReY71ASMajMbbGoLDiYp2xzbgQ0UnM4REELFBwIF3nix3FGub1FXATe6Q5Tn9MxdxnYdZXNlhcuNF3GzMcG2NTO2SFz1M1iHrD6icw9axBezrcc+3En58E0YI0Xf1l3/5l9tk6Y2ObrfLX/krf4UsyyjLkn/zb/4NL7/88tdlX6tYxZshOp0O3/7t395iVVexilWsYhV/OKHSIkFc1NfNmne82Ze4KOxCjfc1Co3pdeicP8/aW97C7M4h5Z0aCMSSc+qySCSBQESD+lbkQavVaBT2EP0wG2CjSknr/OYBvPwCan2DbNjD2hLva0LQCDoKPP7/7P1ZkCXZed8J/s7i7neNG3tWLrWgqlCoAlAASGwUQJAESQAiqBZITo9NS9Pqnp6HsbZ+m3nX05j0NPOsWTRmbZrRiBo1KY1EgWyQBEgQhX0haktUVVZl5RYZe8SNu7r7WebhnONxM1Bg21ijBEPxfmVlGZlxr/vx4+dG+Pm+//f7i9Q5oRt6BD52WMjwZ6I9iOSNGs88m5yxvr7FGzdeYj6vsM5x994bfO/57+L48U2DFMHvVgiBsY61jQEf/NCzCC84OxsFhX5tMdZgrQv2HfGirfNYD91cxgJ/EFR4gj+nUNDybaTUbG1vYozh1v27TCZjttbWKTtTinY7bAKdw5loVyMUlhrrDOVkRFnOOT4+4uTklIPTA8bjMbWxzOs5TgjanQ5aB/KKlApjLFnRJdAyE7slJC3wEutdY62S5DxBwCODnMO5KH4QTeMGJAFDvB3Ne0N3R+gaUmEOMHhvUs9QQ+SAQHKRaVWIeMz4miSiSN7ELhJDROzcIApRImImcUqC4Mc7FDa8h/AS66OYRSQrmGa4UUxCI3Zx3jMajxmNTyLWM4Sz4Yav9FdYW19neHiGsWCDdXJIHhiDcwFtGVCtKiRxnMc6g3UOZw3ehtcHEoprbGuSgMo6GwUvrhlvSOgIjK9D0qgh45wXpYLcyoUODQhiLpIgZRnLWMYy3pnxN1EJLlINLr7vrd6TjqeUYnV1FSEEZVnyrW99i+985ztUVYUxpimuZ1nWCC5SEf2iaABoiuIpR6WUQmuNMeYBMccikePiOBdFIul8i2KPVLhf/PtiLBbrF0UH6XV1XfOjH/2Ivb09BoMBTz75JB/60Id4+eWX0Vozm82auUlCDwgEjaqqHhAOLM5BErpIKRtaRSKFpDlJYprV1dXmdb1ej8985jO8+uqrfPWrX+XKlSv87u/+LgBa6weK/61Wi83NTQ4PD3njjTdYW1uj1WqxsbHxgNVOsky5KH5JY3kr8UOK2WzG66+/3ohJbt++zRtvvEG3223EHfv7+7z66qs888wzzZxmWUZVVVRV1VBO9vf3GQ6HjYAoi8WALMsausoi7SPds4tjvbgu3iouCiYW/10p1RA6LoqGftIxF9f04npctPTJ8/yBMad7ltbCRSHHTxrv4r9d/Ppv+nwvYxnLeGeEEB6xd4u7O0NuHczZ6HdwRnA8tXRW2ly91Od4b8TuyYTuSh+baarxFGENNY5WJuh3JduXB2xeGuBli7tHUw5PS45PznBOBWrGrKItLP1exkq3g5mW7J5NmRjHrAo5iY1BgfCe/ZljHtT32Pkc4RWbg5yqcpyVNd44hFTUruThzQ2uPbyG1x50i0IpDo+OUFmLhy5fppyMsbVnsL5OOZlwcGsHhURoxemdu0xeuc+VJ97LdOc2TiryXGGOj2MxG1JzhkCFvZ9wYd9KtHqJO2UhQwf/uSVcymWkn7fpZ3VkWDZF/0Wx3XkN3nmPnc7x9w8x4ykCQz2pMLVryLLCe4QzFJli5+UbnO3cp731EDJvIbwLlr2RPostQbWCfa8TkPdoDbbRrQ5mPgt7fx8kEPXwjGo4Cg4xkUYamipU0xih0u+FqqbdkhS5RipodzOyzKMzRVYohJboSPrwzgXhh4jdUsggWBA+0EAE4F0o+hNyXSrLQ54t5aZQCBHIsWF2NS7mZryNNjVKIVwQ64BGaoWwwd425G0kSgi8jNTS+OtQ+kAj8WictXgRqBhKSKQTeGOC34kM4hLrQwuO9wbKCUcvfJ3BI8/Qeew9dDa32Hr2FzmRP2RycBdZWjIlqGuLtaEJyonQMOT8+fWERI3DS4G1AlFZ1HQGnVGgdaACiFeEsScialhGDm8rvNeADvfOB5KMytsIlUcqh0aoFt6aaKVzbk/snUDInNP7+5iy5OT4lI4SZDLmG4UIljNxDKS17zzGeWTM4Xkf7IipJQcv3KZz5Ys89NubXPnMF3jj7pvsf+/bXP7c7zE93KacnFGWYc17CZlUyDyjqgxzDyPjcWQ4B9rO6ElHr12Q6fCe0jh8LdBZIOZopULjm6nCXCbKsXJgQ05zPpszL0uEN7SKVrCFsRaURGY6Co3yQKnVCo+gnsqQ58sCdUNEUoySBa42wYa4rrE+D8/QMlBAvJCAISsy6grAUrQ0ShbIPAvWM4BWoLKCalZSWUeNxNrwU0YpiRdB5OKdp/YhW5zFPFlVzrC1ohyfkemarpkhlOBg9zblyT7aBdufFgq9ppGZRrW69PIe5SyIid+OjNZS+PFzGM45vvrVr3L9+vW37Rwf/vCH+cAHPoCUkldffZUvfvGLzOfzt+18y1jGz3v0ej0+8YlP/KyHsYxlLGMZf+siWD
4EexYrmi1q2PS788St9QSUpdDkqyusvOvdlCcnlN95CWkj3YO4qfagCKQE3/ALkrVH9D1tCBBBXKKFQYlAcZCAmTnObu9QbF+lvb6Orx3exkS8hHM6wqItR0BnRsVH2NhAszlPm0HvDbPpGWPZ43R4FpX5Nddf/T6z8sef13Kt0FJR1jXOO5SWvOc9T+Cd4+D4kMnZjHJWU9WW2kTsehRB+NhVEaQNgnlZM6+iiACHF56i6BOuekK3N2D7IRBCs7tzn93dfdY6PTbWVun2ukidoWSGUlnslqyoTc1sOuHk5Jijk2OOTo8YT0dMqzleSvK8TadVoFXEiMqgsK+t4IXnb1HWHZ555kmKdo5EN4INJVWz6XVYJArja4TQsQAQBCIIEfw/G2FPECGIqPIRQuGECX7CeJyNnUTpvgEIFddFurdJ7ODBqyAe8kTBRkgwhI1+EpkkaxfR3P/UsRRsUVKExJVMQpbm32Knjgh/hqRQuDYRPVmVVEghMHVJPSvjWM+LD947Op0OV648xtH+PuXRFGNByiTACB1QwRuX8LXzoZOFmACLQptwQBE7kiI+1dWx4BQIH8FOKPgkCyTG2fiZ8nhrQ9NLPKaIPr8NnjcKWc7nZBnLWMYy3pnxNxV//6ai9eL7FokMzjmyLKPVagHnAoOvf/3rnJ6ePiAOSGKF9JpFYcVPEnAs/vtiYXyxoL84/rcig7zV/xfFHBevffHYF2kfiSjhvefmzZvcuHGDxx57jFarxec//3n29/cZjUaN2OOiSCWJN9I1JgFBq9VqhChJ2KCjp/mi4CPZl3Q6HaqqYj6f472nHUWxzzzzDGVZ0uv1GhHEW93XdB+HwyG7u7vcuHGDlZUViqJoxBVAI3RI/3ZROJFELCmSoGFtbY2nn36aPM+ZTCa88sorDfFEKUWr1cJ7z3PPPcenPvUpLl26RLvdxhhDURRAsMYZjUa89NJLjEYjtra2aLcDrS3ZvCSBSKvVIs/zZixp3ItjWxRfLN7Ht4rFtbFIGFmki1w8/qKd0MW1lSgqobtVNVY9i8Kjxc9FnucN5eWifdHfNN5lLGMZf4ujNhztnFFZycZKQZErSiHYWMtpScfO7il1LVi5tMnZdIyazKCsEUqjtGRtLefq1TVanRajEg7HYw6PJswnJdO5pTYWOx3TQ7DSV6y2M04PR5yOLRMHlfd0W5qN1YJWoTkclpzNLdJb2lrQ7RSUXjDzilpLZG3J2pKNrTZrnRaTScnx6Yjtq9vU5YzD0xohMtp5xuToEJEVDAYDzs5OGE8mdPp9ZmdnyLpGHI3INt9N99HL7L70PCpv021nqLqmrj3OxTxEpHnYqsIJInXTgZRRrBGK8NbFZgqxUPwGkuijKe43zyNxDy3Ss0va54ZGEe899azEzCuECPYQ1tqwFyVYwthakmvBwc5ddl95jY0nn0S1+ggRmpB8+vnvK7AzgmeGRmQd8sElOhublOMzhHMIXyGcpTw+ws1D7skT7H29FwgdLdKi6ARjyJWgyAVFW5G3JFlLorIgBlDKhaYZHfJXKBVOnyxXWRBwSBFzVBlCOJTM8dIHq18VqCsiPQtKiXOmsSxO8geZJ4JHKLYLb8PvThcoCVKkc8d7IGNriw+NQSpTRPUOOtM4H39HBi/bkE8gikUCOpVAagGsB1MxvvMjnJ1SXHqM3vYVirzD4YvfZ7r7Oh6BqgW2rrG1wEbxiHWhfSg09xCQp5F8663DlgY5HaGKlWjVK2O+JlgJQxaXlcU7ibdBnBRySwKlFapoI3Qe7r8XeDONx2hmDJzF+5JyuMfOiy/hUJwdjbmqRLjnMUUU1vt5SCXxcY6MDZRgqUXTFFTPPDt/+UPy3h+w8Zn/ioe/8I+4+d//nzj8xp+w+Xf+PrPDXVrzQ+ZWojNF0W9RUTCsJkyMCXSfuiR3jkHLs9EtcCqM2sS8YcjPWITKUZkGCVaIYEHsAxFWKIWTivnMUFZleK7sdtFZF+89pjZIFHmucU4ifMjpKg9IhW51g5WRUsF2Jih2Ao0FhylnGJmjOoqs3cGaGleVeFsjXIkq2sh2GyWhs7qC7mwEEvF8inQWZxzeltQWpvOays7RWtPp9vDO4OoKWzmEypEqEJutcwgjMbOa6fiY+ckp9egoiFVWT9lcE4iNT3By7w5ucox0Nc5UVJVE5jkwxZpEsf3px1L48XMYVVXxr/7Vv3rbaB9KKT7zmc/w0EMPUVUVX/nKV3jhhRfelnMtYxnvlHj00Uf5yEc+8rMexjKWsYxl/K2LQHCI5ABks4FCgJAZxpRYZ8LuXwWFvGrldC5vs/bMM4x27zO5cYx3NNSOxfxs+tLF/1UD1zwnf4TXCRQeGzdk3mnmhyWjO2/SvXSJams7Fv3Dpsh5SSBDhD9l3EQjFELF5IQInR3pZIESYrn3+i26/YJ7d/eo6hpna0bjEw6O7qGVCPhLQkK53+4gvWFcVng8RSH58Eee5e//vd/i69/8BrPZhJPhCVUdqCEuERrCPj8o/+M1SiEoa8u9/WM+mjbgzuKcJSsKnA9enZ1uh83tTXSuuX37Nq/ev4O/dZNep02raFFkQb1PPF9tLKPRGafDU6blnLKegNTknYJW0UJlBZnQSE2DJ63Litq3KY3gpZfeYHd3n1/88PvY3tpCKgHekmxakolPEFFkAR0a1fkei/AyEF/kwh0PDSB4H3kvsfNDOLDe0tStZBBr+EgXiegOkhlQQs+GtSrwSZSUhBQIpFA4EUQPgiT8SOiO86KBJIpehGuEQGnthUSAbwQm1ruQFBCAcCGxoiSZ1nRabTY211GtnJX1Q+rK0u50eOyxx3j46iOsra1zOjzkjenL1MYhLRjrsdYEWoc3eKGD9KX5wEQhk4h+xiKSaxzxPQ5jAxHEukA7IWHvncPaRAAJx5AorA+mL1JmoQNKqPi5Flgb5ibQTP7//KGxjGUsYxk/R/E3FYaVUuR5TrIkUUoxnU6BB4UfKysreO8Zj8cA5HnOYDBoitZvvvkmX/3qVxvBwkXrj1R0XyxyXxRaLBI6Fsd+UUTxk2gHi5Yvi1Yg6fuL4o9FGsTFa71II7lIUjg4OOD69et8+MMf5vLlyzzxxBO8//3v55vf/Caz2ewnCg+AhuKxKCpI5Iok7kiECaUUdV03AgwhBJPJhNPTU+q6ptfrURQF1lru37/PysoKjz76aCOyuTg/jz76KAcHB6yvr3Pv3j3yPOfWrVuNsGZra4vHH3+csiwxxjTv1VqT5zl5nje0jUWxhXOOsiyp65qjoyP29/e5dOkS3/rWtzg7O6Pb7TIej5lMJs213Llzh7/8y7/kC1/4As45xuNxI0r50z/9U+7cuUOe52xvb9Ptdmm327RaLTqdTnMP2+12My+L6yqJO9I9TkKKRNoYjUZUVdWsf601a2trF4Ss5/dwcc1cXAuLr3+rWBQtOefQWtPr9ZrvVVXVjLOqKrIswznHq6+++oBdUhK8dLtd+v3+3/iZXtI9lrGMv11hPchC0ZZtXGnJ2rDeFgjvuLc/wwpBpjP29o5YKQSFdOSrLRCey5cHr
A1y8Bm7Q8vMe05PRowmFWXlmJUWPZvQE45HLq9RzSvu3DvF2ozT2mFURqEsK4UgE7B3WjKbO5Sx9Hs5WgjoZGAz5lWJxtDvCh6+sk4rg9NRTTEY0F9tMx3NmMxrslyTKcXk7JR2p0ue55zu3gGh6Le7nE3GCCHIjGQ2smx/5F3sv/BdDAVrGyu0hKM6NUQgZszLJMIrTTuOI9psxO9DsGcBGdszFilP5xTJprPe+/O0kQjNHlKIYH8qYrMD8feDtwjAxT1403ogAj8kUxIpPK//4Ic8/omPka9s4FWG8IFS4gn307sKoVogcpAO1Vmju/UI4/1dyvk0nGMypBxOUEJTzmsQKjQWKQnSoyIR1EwrCgXttqbV0eQthdYKnQtQHqkVUkQimQpiCi88QgR6QSJnhqajQE4Nwo+YuxAipDlUmg+C+EPImGMITUpSKHzMtTlMsL9B4AUonSEiaRXvoxVwyHWISCVFCoQV0VrEN4IZpEAR6BCJ8CJ0sAcJlJLYLxPbYZCR5eIN84NdRN6i2+1RrA5YffI91PMR9ek+WSZRKsMqhxMCEwkgRMppWAg62NPGnI2zDj+bQ2+KbOV4bExAhOYWmbiv6bnChSYWIRySDJ130EUPVB7yJN4EC5JoFSQ9MWdkwdaUx0NObt9hVhv83NDphCauRijSNMOEpa9laCpy2OZ5WrrwzBzIIJrZ0HD7T79Fvr5F72NfYPPTv8ve//f/Ru/yD9j6hU9QP/cniNrjlGZeew6qGUPjsV6gnCerKjYLWGsFCyPvoI7rIdNFFAV5VBQgW2sDORaBtyasNR1sU6y1ZDqj6LTReStQNeIal0iEFzjjcMYglQcvyVo9dJbhzCzkbZXGCwfRMtghseQgBVJpXNbCo6GuwbtoHWzJCknR7yKLQOvXhSbvbSFlwdnBLmY+wThBXQchjpYe4epgS1OXeJkjvSWXCiElVeWozRQ1LTG1wTtPNasZ3tthBcW7nvkgvnWVPPs2k9030JRBtGId3liMsCFn1lgr/XRjKfz4OYyXX36Zr3/962/b8d/1rnfxqU99ijzPuXPnDv/yX/5LyrJ82863jGW8E+If/sN/2HRNLWMZy1jGMv4ThvCR1hEwfGGzmBL9NlprhE2bkhqlQyeB7mn81UdYfe8zDA+/jTs2sWwfCspSBGiDRzRWL4nMkSgLyT5EiFCPV8KjCMVqAFMJRrd26V29y+Daw5i6At+JFAMRNmc+0SFSojfISWRDjYBzgoSkKi0/+NbzfPYLn+X1V24GJbu37B/s8sat+1S1a0gOWxsbCFdzPJyiJPQ7mmfe/27+2//df8srr/6I0WjCaHREOa0pKxOIFNHWBZIwYmGqJdTW8vXvvcyv//LHWB2ArWucKsnaHdqdQfBmLQINQmYBzZjnBQcHh+yeHnN2d0Q5neJtjbcuUE1EoGgILSiKFp1On06vR5bp4C8rJEqA1BKFxDqL9Y7XXrvBnDM+9IFPcHDg+fKff4v3P/sennnmcTKdRBih+yLRNFS0CZE+iIAcC8kMFqxDFpLywXk2eMq6SANBBJ2Ii+eQKKRUOB8LVTGB5LyL4h0ZOkQI3UVhTaY+nrCGJT52ziRRg22WhZIqJBaEDD6p3mG9RXoVe0R8o0Q6v2sikkskUoQu5KKds765wZUrl+mvboCA2tRIqVlf2+Dy5Ws478i05OzkkKM7uxgXhB+mKQYSx56mKaH03fn5fRRx+KCgcc7jHVjjGmsmFzfhwmsiPyZ2yoTuKIHAC49zNUplIH38SKSkx/lnbRnLWMYy3smxKMZ4q6JxonMIIRpxwkUqRVVVtFothBDkeQ4Em46qqviDP/gDdnZ2GtuNRL5IBW0pJUVR/BjBICWY0/gWiRxwXjhfLOwvvv/iay5akCQLk0QtuUgBuTgvSTRwkSySvi+lbOxMdnZ2uHTpElprfuM3foNXX32VW7duPXC8ZFMD5zSKJA5ZpE8kYUwSGSQyBoTkdxIIzGYzjo+PEUIwGAx4/fXXGY/HbG9v8+yzz/KhD32ITqfTjHtxLj7/+c/z2c9+lslkwj/5J/+kEfIopdjZ2eHmzZvs7OzQ6/XY3t5uhDpJQJPmLwlBFi1J9vf3+YVf+AXefPNNVldXmc1mDaXjox/9KF//+tcbG5x0j8bjcbNe0n0aDoccHh5S1zWXL1/m6tWrrK2t0ev1aLVaaK0bgkyySlkUxlwU6qS/G2MwxmCt5eDggOFw2HwviZguUlIWrVYW19eioOgnkWPS99L9S9ec1sTiWkxzmdaIlJLbt29TlmUzP8458jxnY2ODp59++oH3L1rFXBzLW41vGctYxjsrpITjMxj5EY9stRlknsPDkpERtPodOspzdjimXyj6mSLLBYPVnI3NHt1ui7OZZXdoGZWGcl4ymdbMK5hPpqxpT6+Q6LzFwcmMunZ4leOExDpPphzrbUWWKYbTGmsFSnraXQ1ZRt3O6Q8GuOMTKDz9bsFmt8VoPOPYetYvbdBuZQyHI1TWptftY02JyhUbq5cZHe2ze3ePQX8V3e0ymszQeYHWjtH1N2lPHePvf4PxeE7WysmFQ9gqUDZjHkYEzQLOhwYHlwQZhHzDeSdEoHVAaNwQJBEHwLntavi5ukhtSn8qsk6OmYf6l7UL4geC2CGcNZIrouRgOq9YGwzIZyV3X3qF/Vdfp3vpCllehH19okMIAbbGCxX+3SmE7tLeepju5pvMTw8Bw/x4iCbYAGutMAiwHpWF61ZKYec1ufB0VzJanYyspVEShHaoLENIgVICqSBMkENmCiEcSIVwFqQPxXNs+F0ldfOc5JMFTBQNCCEgNmIQqR060/F3o0Oo8LrQBBNFDOn5TUmEl1H0ATiP9Od2PKkJRKpgIaui2FN4GvqtzgP9A2diikWBsygt8S7cT++DFYp1gdowu3sDrXNaV56ke+khzPQ9nL5uMKNheBZSGmMtwnqUEjSkl6Ts8TI2pRi8tVhj0eUYkXdAFU2uJ+RDLMkKWXgTKCAi5EyEgrzdQ+ZdhNR4KRGkRpnQCISzeFeF5jBbc7pzl5PdPYZlSY6nrUAl0kcjhEo2gwQLXkBFix4XG7gCwViGnJLxTPcrbv37P+GpjYfY/vgvc/bq89z7+td4/D9/lPX3/iLm+W9z6CQH04qZkFghMbMZA2m4vKLpFAqEwjiJwWNNTVG0UIVE5TneB6sVbwLdAyJdxYacrJ8LjKlQOkPpDOehrmo8kGtNlmc4Y6jnM+azKZmQiEKDylDCIUMnXHgWK2chjyeDyNp5j1BZWPd5BkLi6hm4MqwRWSDzNrrbRagWxjjs7JSsldHq9EBl6MEqsreGmE5wYgylQeHB1bjaIqzFCrBVTZYpVNEDN6OeTpiNjpCZRIoMZwXj0ykyu0++uglW0+7mtB59D9PjHZwEhwo/g6VG6bTWf5z29z83lsKPn8P41//6X/8YVvOnFUIInn76aT7wgQ8A8K1vfYvvfve7b8u5lrGMd0psbGzwhS984Wc9jGUsYxnL+FsYPtApCP6fkoBOxBG6DpzFGYM3UaQhQKAQMiRwi8Ea
a4+9h9H9fQ7/+jWoaTZwaqF7JO5xwzEWkrDufBhIIdHSYm0ofCsRkgHmzDC7c5fqqRHeEDb/PnR+SKmC2CSdIFpaNJtqIgUkntday80bd7hz602k1AxPTrEmvHb/4D6zKnTm5lqyubFJrjz398+QAtY2WnzoF57lf/mf/5cMj0/44Q9fBGfJVE5dzQKFwfk0dTwo+Yj5Ch8U96/cuM//8998kf/6v/gdHn9XD+scsq7J8hZ5q0NtanJdIGSOXtNkKmOwsspkPuVsNOb06JCT4ZDR2ZDReAjO0e0O6LTbdLo9uu0uOlMBCxnRrTLmS7DgrcNYyxu37/ODV3/A6zff5DO/9nnWN7b47nde4mD/iI997AP0+12k90GKk56dI50DHwkgCf0qNCyIGlzcRMso6kgiGCF06N7wATcqvEEI3SQKwtqI/4vYlRR1JN6LRvzhXZB2OGeRIqwF712c50DL8LHjRgrV2A45Z7FotBOROqJCcmFBrENi0ngR1DoiJFuUzCiKNlpqNjY2uXbtMdqdVihseei2O2xduoqXksoY7u7c4buTLzM5HFPXYI2N4qCQNLPE4lhE4xLPfT7X4TNkrMGY0K1rzYINgAcp8niM2FXtg5WLsyZ0CCUKi3d4mzJ7Mn6+TRSeLMUfy1jGMt65kaw9ZrMZxhhms1kjMEgF5EREAB4QSzz22GNcvny5EW0kAUiiFyileOGFF/j3//7fN4SCVqtFHRO1qdidbH/b7dAhd5HSkMQfi0XzNK7FAvfFont6XSp+p++n3xPpPYsEjXTMNI7Fwnkq/i8KQi7SRJxz3L59mxs3bvDUU0/R7/e5dOkSv/Ebv8Err7zSXOuiAOTi9SXrk3SN6X4k4YsxppmrRNRIghzvPUdHR9y/f5+TkxPyPOf3fu/3+OQnP9lYtVyMJCBIFj2/+7u/y+///u9jjGEwGNBqtTg6OuLGjRusrq6ys7PDfD5nPp/T6XRYX19nbW2N8XjciDr6/T7dbsBrHx8fN8SKVqvFfD7n0Ucf5eTkpFlzyZ7GuWAL99RTTzXEEikld+/e5dvf/jaTyYTBYMDGxkZDNVmkxaQ5THQUKSV1XT9gubK4RqSUGGMa+shjjz3WWK4Mh8MHjvlWkdbOomXMWwktfpL4Igk9EgFncdxJyLJ4zKqqmEwm1HVNt9vFGIMQgqqq2N/f5969e2xubpLnOUVR8K53vatZKxdjKfpYxjLe+WEdnAnLex/bRE8mHB6WjCvP3HuKecW0LGnnEqkUWTunyGFze0B/sMJwYjicw7SuODkeM5lbvBeY6ZQN4ekLQWkFR+MS5xW9tsYag5Ce9bakneWUxrB3UoJS5LlgpV8AnrPak0lJeXSIUp4n332VXFju3TtCFC2ubg2oTMXpaEJetGh3+8xHJ7T6PQaDFU53d6hKuPTIEwhqzkZTev0Bppwx2tlFTgx4xdnxBNnO6bYldjyjqgzWBKZHsImQgUAqFvsyEmfTpe1nSJ+ExMrCz+QfFxKmvWPSYoQI+3ZnbVM4R5zboCQhSTLYsAsiXGMcXkg6nYyjkzPe/P7zXPnAs+h2H1QGXiISBQPAGqRSeCFBalR/g+72Nc7uvkldTbHzKiSiDFEZ4VGZQiqJ8ODmJRpPq6tpdzV5W6CyMGitNVLLILaIgg5ktK8V0fZORBsWBVIJvJOh4Uh6lJB4kVipkcgRxRve2Wb+RGywCSTWmCsTyWYWvK1RWoXmFw8y0zFx5pBaBSJG7YL4REqE0qAkygXrHq0zvA80UBEJJyLm8Xy0oEXIxkJXikBoMXU4L8LirWW2d4tsbRPd26DY2qY9PmFajvGmxrlIQ22BcyE30xwbgbfheccbBTioHX4+Q3YN6AIXhSaw8OziQ77Me4MkRyDQWYusO0BkbYTUD6zJaHQTmlm8BWdx8zEHr7zGbDalrB2FkmgZV49IvWDiPAeTOnFSvlKIQM6QwYJFxmaj9BkY74w5+MZXuHL5Ca5+9nd4/f4tDr/xF1z6u/+Qs9u3kKMjKNpoB/Vwxqq0PLya0S40Tmq8yLAWrK3C+JWMeRoHSmAB4QhWM9bhbI2tHKW1ICx5XqDzIhBcnUPGBq/kdu18FYQazgEGrXKUyoOghJArc9ZgfRB2Ka2wTmBNhfSgZA5O4KYnyGqOdQalJUW3S9bqIHWOA0w5B2sxSlBOxljOECqn6K8iVA7OYoTAzEeYWTi3lArQeDcDAVmRU6ytM5pWTF95kZaZI3MV7G+s5/RkinntBnl3D6HCc2TW6QT7bRd+bmSFpNXf4IEc2k8xlsKPn7OYTCb82Z/92dt2/KIo+OQnP8n6+jrOOf75P//nb5vIZBnLeKfE7/zO73D16tWf9TCWsYxlLONvXfhmZx6SA6HE75pOBI/AWofxdexkUIEaAMH3NM/pXNpi65n3UR4eMHzzLHaK+NjFQeR3nP/XNJVEKkj4SxB5KCRSWIwXeB8tWoygPDhifnJMPZ+nPpHGV8YLETaBIkJLIx1CiFTAb3paGJ7VvP7Sy0zLGoGgrGsQDucsw+EJEEQfg36PK9sbvPzKqygJ25f6fOzvfJTP/9bvsjHY4P/x7/49w+GQ0fiE8fCM4Wga/HPhAjHiPHIlm+LKrDJ89dvXuXV3n//qf/Wf8clPfJzBygoIUFlB0W6H/WddoTo5Osso2j3KqqKuKmZXH2YymzMZj9jf3eF0GFCw7U6HLJNoqRGeKEYJqFclwn2x1uC8p5yVHJyOqeqaF155ib3DPT77a7/Fu598mptv3OX45IhPfvKjPHRpa+EqBEJ6vAu7ZuHjchGR1CHSpvjCOksbbRE2/lKHpId3QYDgSdhTgUCHJRkZFojoP4rEUS90b4roWxvFRa4OSZdz5khYhzKIHkI2ylN7T2UtwhranmYDHAgZsWjh0/stiV4iREBiapVjncH5UPQr8hZSmZCUUbLpJOr2+rzvmQ9xcnrIS9/7DnNjqFLHhnNYa8Lm1ycMLgT/5PDJsT4kHl20cHHO4YyN989F8U24PmdTsU+E+91026SEn8MS58ADMnQEufTZv3jDlrGMZSzjHRSpWK2U4urVqzz//PMN8cB7T1mWeO9ptVoNeQCg1+vR6XQaIUIqhCSBQl3XzOdzfv/3f5979+4BQWCQ5zndbpfpdNpYhjjnGkFEURRNsTuNa5HOsfh34IGC/luROFIsFuYXrzu9Lo357OyMVqtFlmXNOBZff9HiZVH8kc63v7/P9evX+aVf+iW63S6Hh4d0Oh0+97nP8cd//Mfs7e01dIhEmkgChzTHo9EIIQTtdrsRAlhrSTSQ2WzW3BMpJYPBgO3tbR599FGKoqAsS/b29pjNZvT7fXq93gPXskisuCgA+NSnPsVgMOCFF17gzTffbF53dnbGaDQiy7Jmrk5PTzk5OaHValGWJZPJBKVUI/7I85wnnniC+/fv841vfKOxT+l0OmRZxu7uLpPJpJmHFEmQ0Wq1WFlZ4fr169y7d6+hd5yenpIINMnuJdExFtfP4n1cJLYAzZx772m322itm9evrq6yvb1NVVWNIGNRgHFxXV20Mbo
4r4trZ3F9pnueXq+1pqqqRjiVxEZp7FVVUVVV87lc/PwNBgNGo1HzvaIouHLlyk8UfixjGct454eQgvdvdzi7s8dk7rBS4h20pcTOSlq5QmlB0Vas9ttsrHfpbm4wnNfcOTrj9HTMcFQyKaEuK3reMMBQqAyXa7q9NnZaMh3OsKUHKcgySa4k47pmPLXU3tPOFVppzqZzMiFpaUV9ckr/2hqPPXqJydEx90YVarXP1mDAZHRGbR3dXg8pJLPxKe12i5Vuj+M7t0EXbF69gq/HlMMxnaLFbD7BTSZw75S2yqmNw+DptSTtrsZNoJ7WoVEgWuLakAggsiCxTqC8j+KMRJeMwszm53f4uXz+M14siDyS6CNZhkZimQdrorDBnx+jOaaQoTifBCUiiU8VdT1nc7vP4ck+r3zrO7zvNz5Fa3UDkRUokUgZEiFcKJJ7CZEGIvMVOg89QvfSQwyP9jDjGVopbPSxEUIgtURqhXAOpRVFR9LuKLK2JMt0KPQLQCu0TueRCCWQWuGtDfkqHQgRUgYxpJQCZI6XPtIUPFLLIDaROuQTZGoiSb+3Zcy7xTmSMQfhARyurqMAQYLyeJvmK9qm1CUyzxE+0ioixUHiQcnw+pAICj0sUdATiCEGXKA8eAnaKZzweCmR1qJVFiQr8RhuOmT4ynfoPfo+8t4GgyuPUR3dx02HSOtxVuCER2kdhT8u0GVwCKWwBrxw8d8tZlaR2RmWdiA7YJHOI2Qel13MI/oa4WqEblH0uqjOKkLnCJGsbiROyNi8E6giuEAWmR+fcPOHz6OkYj6vGWiFViBkmHGXVqYIFBzZiGrDP0oZuSve4bzFEIQV1kURSKXY/cYrtDf/kLVP/xdc+uzvcf/f/F/Jf/hnXPnEpyn/4o84KmvORiW9esZj6wXtVgZFhtctXAUIR6Y9ItNgHN55LAJXhjyd1AW2nOMrA6amnJY4BEWnQMkgmEV4MqXQeY4QHu8M1tY441BSkOU5GoPWEjCYeR0+etIjVY6XOqw3PKpoQ6sbnsGNRVRTcDUojc4ydK4oOh3QLVAqNLMhEKrAGZjPK5z3aOUQahKEUlKA0hgRLKGFzhC6wBuDdxV1ndOThs3H38NW70ler9rM3nwOqRVKZJgqWLoY49DGgJDU5RSdFYExLHUQhHmH0hmJJfzTjqXw4+csvve97zWb8rcjut0uv/7rv44QgldeeYXnnnvubTvXMpbxTojBYMA/+Af/oMHlLmMZy1jGMv7ThscjpY8+oR4XrV5C0SFs1LyNG/y4MXW4INKQkrzXZeXaNdafeZrp4bcpz87FHUnsARH/KCK60tOIBEIBPxTgpfCoiBkVIijYrRdUp1NmB7vMZ2OcD68T8ThCCrwIwgGpQqfnA6IPD3Vd4Z3kzr1jXv3Rj2j3i7B7QzTEiHk5R0pJv9Pi0UceZvf+Hs47rl5b5dc+/at86lO/zlNPPM1zz32F3cMTjJ0xHo8oS0dZ1k3x/q1EH4WWZEpgfRBgOGepKsPtnWP+z/+X/zdf/qtv8p4nn+CRq1d59LFH2dxco9MqAInSGTrTKF1gnInI85J5XWHrLbYvbXJyfEpZBtyK9B7nDdY5mNdhUyd8TPQEAUBtHfcOjtk5HMVxe3YPD/jD//hv+MTHPsknPvarnJzM+PKXv8FHPvx+nnzyXcF/FhsLTucWJElwkyJ0FLnQ6SICcSUgULPoYRuSHUoKnHAIn0W7GBm7aQTCh7GGDiUXNo4+IDh93KSH9RXOG7xqfQBZOEvCzwoCYlSI0HFinaCsHaLooPMWWulwFJ9sV+Jaje9FqNBBEkPK4MUrhIyoXBvFTLEAEhGvVV3jhWDz0iXe/94Pc3p6zN03X2c8m1FWZSychAXjvAm+uz7Y1zhrwzW7ZNPig9DEuSgGsY1nM5bY0WWi/U20evHE5BLgJc76QMkl2t14GxI6S8HHMpaxjHd4KKUYDAasrKwwm80YDAasrq5yeHgIhCLz6ekp3ntWV1cbsoe1tqEULAoeFgvdxhj+7b/9t/zxH/8x83l4hkhF+yRYmM1mlGXZiP4S9SHZdqTjJaLC4nnSuROVI71usbi+KERJ51gUPDjnmmNLKWm32wyHw4ZkkWgSF0kfi+9fjHS+s7Mz7t27x/7+Pnt7ewyHQ6qq4tKlS/yjf/SP+A//4T/w4osvkqxukrgAIM9zsizDe8/JyQnOOVqtFq1Wq7mO9FopJZ1Oh263S7fb5eTkhNFoxOXLl1lZWWFjY4PJZNLM+0UhwuJcLgpZsizjF37hF3jf+97H4eEhN27c4NatW7z44oucnJyglKIoCiaTCaPR6IH7lu7j6ekpBwcHlGXJxz/+cV599VW+8Y1vkGUZ/X6/EddorVlfX2c8Hjeilscff5xnn30W7z1vvPEGZVk+IHLpdrtcuXLlAZuXdKwkmEjrIgk+LoqDxuMxxpiGgpL+PQmA0nE6nQ7WWubzOVVVNXO0KEzSWjfii/T9t/rzYiwKmS5+LpO9UvrebDZjNpsxHo8bgQjQXHe6j2k86XuLIpRFUk5aP8tYxjLe2aHwHOwPsU5igLZWDcC00ymY1Ya11R5ra102rmyjpGT3eMLR6Yjj4wnHwznzWmBmUwbO0VY+2Hp0Mop+m6PjQB1dW+mCqZnNTfBOyTWylqyt5+RKcTYxHJ7NaHUKVvodJtMJl66uce3aOkd7x5ROMtjaQCvP0ekpzmtWul3MbIrXOf1+l/LsjIOTU9pr67RXV5mNDpidnDKbW4yY0mllFKMR1AYyTR1QqcjSUh3VuNqD0HgMUsWGCAtSBzKnjXtmT9gzI1XYZvtE6SK8B3fBai78+/nv0kSpEA2lAiHCMZMgxHm0UKHJgvh7JWaJPAIpfNhb46nmlpVBh3Zbcbi7z83v/4DVRx5Gt9s4qcM+X6Sf5zY2P2R4oUDmZL1rrF59kuH1F1AJ/ipDs5DMJDKL9E9jaA9a5G0oCoHMJEoFKxmBQ2mJVCLauYBQIZcRGkDCdUsZhAcqSyQuFXJRMjSZCMBrFecx5g5ksMIhNq9gXSMQEURrWxXIu1mrhUfg6jrkU1SgiGDDfRFSReFGIH7gQTgDXgYbHCXxzoAUKGKTSSR6BGGDDsxWEeijKBWsgI0JAhIZbficC+ezNdXpLt12D91p0926QrlXYebzMCdeUpaheUwp0VjdBJshGdYkEm8d3hiYjxGtAV46lA/2OeeuzWmNhIa0LNMUK2vIvAdSB9EHAoGKVtUp9xFpH2bG4c3X2X/zHgioZjWdViCyKKUbwdJiuGh3pNK9IhJq4prGhz65ZJtjrWN2arn3F9+j/fCTbD7zSU6e+CDHP/gug2c+zOrTH2PzO19B41hb6SKlx6KQPsdZiXGGLJdkeR/vHNbVGOtws2Bvh9KYaoqwFmfBeodUoFSGygqEcHhXAw6dFyjpcbbGGUOg/CoEhjzXId/pHULkWBkawKTIcC5Qd60N91dpi+70UUUXOzoN91VneKmRWpDlBV4qnLGo8xajkL9Ka0vmlOUM407RWRsnBPNygrQOqSQqy2n3B9SVYzosMXVNNZ
1iJodkrW3a/T4T1aaqDKrooYVA6Sxkk6VHa0U5h7K2EcqrmM7n1M4yOruBtzU8eGt/KrEUfvwchfee5557jtFo9LadY2Njgw9+8IMA/OEf/mHT1bGMZSzjreOzn/0s73//+3/Ww1jGMpaxjL+V4fE4bxHuvCPjvOMg0AACYcAgoxI/bLdCMcRjEVKQ9fv0rj7CyhP3OH7hNpigIH/gTMIRtlTEr2Wz+XcEYoSSJgj9Y3dCoB8ApWR+cEA1GoWNShIzSB27JgCRkhOQ/GmFELz4199nY3PA+voW3eyYq+9aYTwMyXAlgz9pEkoMeh3W1/qsDVZ4+fp1trb6/Oqv/Rq/9Eu/wsNXH+X05Jh7O8e0Wl1GownT6YSqlBgbOyDfYo7zKPpQUqDirDpgVhmkEtip5avfeoGvfecFMq1ot1p0Ox26nRaDfo+Vfo/V1RU219fZurTB1sYGK6srdLtdilySFwVaFUxnU6p6Di54gU6nEzw1zthGLDEv5+zdP+SVm/f44Ss7zEvzwFjHsxlfee4vODk54jc//dsIuvzVX32H0XjMBz/wDFJF5GhcB2G/na49FYtU6CgKzSdxk5xH0U8gVEihEcKjvIwEl3BEKUJnkrEOH32ARSRcOGdjcSAmVKLNjCNsfH1MRrmYAFBKh7WNjz6tgmltcarFoLdKr9MJhZJ08SJ1QkVxEuF8YUwirrV44QhMbZjPZ/R6fRAC6w14R54XKF3Q6w7orZxx9eHH+MUPfRJrLOPpjNHojLXVdfK8wHoTBFGRSOJdtKJJGH4Hzlu883gXSSFRIBIAtJEE4pKFTB0EUFLGlEmklwiPtRGnKqM4x3mMM0mttIxlLGMZ78io65rd3V2UUk2jQaJpJGGD1rohBtR13RSQT09P+eY3v8kzzzxDlmXkef5Acf2FF17gn/2zf8bp6WkjVnDONUQFrXVj1TGZTKiqqimwLxbV4VzQkaxYFgv5abyLlIeLdI/FQv2iYOPBwk0opq+srHB2dsZwOGyuPc/zprgupXxLam06l3MOYwx3797la1/7GnVds729zRNPPMEbb7zB2dkZX/jCF5jP59y5c+cBIkVZlhhjHqCAHB8f0+v1ABoBiFKquUdlWTZjKsuSnZ0dbt68ydWrV2m32+zt7eG9f4D4kcZ7kU6R5iaRWJLo4N69e7z44ov84Ac/YDabNSSNbrdLr9djY2OD+XyOc448z5nP55ydnTEejxuRxc2bNxFC0Ol06Pf7zZjb7TatVovV1dVGdCKl5IUXXqDVajVUkV6vx5UrV9Ba89BDDzEYDMKzXlE8IPpI9iiLFilJ4JAIIUdHR9R1TVEUzTq4SJa5KO7oxOei2WzWzNki4SOt3bQW67puBCSLx7xoOeOcYzQaNZ+PtN7TOp5MJsxmM0ajUUNGSf+nOVRKYYxpRFTJVslay2w2I8uyRkiySC5J87akMC9jGe/cqI1Dtgu0CXmA2gVbk1avwCvY2hhw5eo2Ww8/xGxm2bl3wK17R5wdjygrwWhqoKoZ4MBb8o0u3ZZmMrPcvTfEIxj0Wti6JpcikD3yjNOyQmWKTElOxnOMgSLLyKRkMpuwdWlAv9/izt4JqILtSyvUsxlHh1NanS6doqCcT8h0jpSe8fEJSmjWH3s3SjvO9nYYjSsm80A0Haz0UOWUyd0T2kJjrMRYQ7tXkGcCO3NYC86fi/RqZ+leu0pve4PR/Tu44zHM6rBHVCpIMnz6+ZieKVLTThKihn1jwwKJNro+kiQbMUgUfDyYRwrsV4GIe3tigwJ4F5o9hPBUpcNJ6HcLhqMp17/6LR750LMUq5tkSuNFDmTBwjeOJ6gxNMIbyNt0rryb1ceeYvLmbcx0jvACrSVShz+dsaiWougq8iJYYygVBBxeeqTXeGGbPb9QAqFEBAnEnJQMFBAiKcJ5G/UoURSis9DgkhanAJmpQKMQYe5cbQKhQAQSCz7s0YWQOG/wkeQhowUIQoALVjloRcgouQBQkTJk15QKY3Mu3k6FzFSgK3iPLSuydgdn6kgYUSAkOteBLFqbIHYK8Bek8Dhhg8CkLnGjY1gdItsrdC4/AtUUjvdwdYVzniyX4M4bdLTSQWQkBdZ7gjwr3ruqRJoSq1pNI1jI64QcUuhwCWLeoj8gX7kMWSs2CcVMkEh5KMBbhDN4a/CzEa9//3lGJ2cUmSIXUCiBSoSWJIQ+TwAhhWjGIQSBIiJE08zlvA85vEjISTnK8f0Jt//jH/HEf3mZq7/5eV7/729y8LUv89Dn/2uq2z9ibXrGqXXMjKEvM6qyorIlKpNo1Q6iGOcxIloyRkGRry3e1lgfmsa0VnT6Hera4EwJOkcJi1Ie4S2YOXVV40yF1jkqK8i0QilN7cBaj1IOlRdI38JZg7cVCB2teBzOerQ/Qatx/IAKsiJHZFkQxWQFtS8ox6dkJkhvhJJYa7BChZytVng01bzGzE/J2isIoXDWYEyNwDCrDdNZSVlO0UWBbnfZ+dFLePESZh7EJsbMcfUsvNeUiCx8dmd2SjV3WMrQHOXDc69UkiLPm3zaTzuWwo+fo5jP57z44otvqxjj4x//OEVRUNc1f/EXf/EAynEZy1jGg7G6usrnPvc5Njc3f0x1uYxlLGMZy/hPEDGpKxXEyjzCx818pA6Ef5fNZt8LgfCB+yFEQFc6BZ2tbTaeeob6eMjo9ogAYEiq/HCcVPyXgafwIA2EsBmW3qOFoyIgN40PTTXT/X3mw2PMfE7WKpBK03jXomKnQBIeBHnFbDbjpe9+nY//6q+wtfkEx3tz7u/eotvpY1wdfWcl3ju6Lc1jVy/R6bYYnp6yvt7m13/z03zyk7/KtSuP0ml1+e73vsvR6ZiqnnN2dkw1N5ha/8TaeaYlhQ5dIqm4U9tgsTGvDa1M4mP3iDEu0E3MlOFogvcCqXxDrfB4tFQB26gknU7B5UuXeOThq2yt99ncXKPfbVHkOaYylGXNdB6Q5PsHJ9y5v8/O3jE3750ymtZxT5c24+dRG8P3XvhrJpMpf/c3/zO63R7f+fbzzGdTPvKRD5Jn2flm2LvGwzfMQbDvSc1CQoQOiyT4CCki2SSGRESfpiHIJOeQCi8C2UJKjbdV6NpxEucjESN25qSnBxeJIOl/b6qQbIlnLY3FyZz+YJ3VlQFZ2iD6OK4o3LHeBYKHcwuJMEK3hExpHI/1tulMcel5Pya9Op02WxubofvaWMwTT1FVFbfv3OR0eMrGxoS8yMl16FR1PlA9EvbVO09ta7TSUdDi4rVInAiEGpzHuTokDGwNTuFwaCUWrktgvEGicN7jfIVwYYacC/jSlG5ZxjKWsYx3YozHY/7qr/6qoXF0u13Ozs6YTCYPCCQSseOi1cXOzg4HBwcPCCy89wyHQ770pS9x9+7dpqCdhApAQ1NIBW6tNdPplNls1hSrAdrt9gPF90WLl8XxpOMmYkN6zcUCfoqL4wWa4nyWZaytrTEej5lMJpydnaG1ptvtNtSNxdcvUhMWCQ+np6cNJeL69evcuHEDay15nrOxscHHP/5x2u12U4ifTqfcu3ePs7Mzv
PcYY5jP5yilGA6HOOdYXV1lZWWFTqcDBKpKEnzUdU1ZlpycnLC3t8fOzg7WWsqy/DGrl8VYvKeLFJXFv//hH/4hL730Es8++yyPPvooN27cYDKZcHp6SlEUXLp0iSzLKMuS6XTKdDpt7EqSZUuyiEn0kcV5T38mu5RXX32VL33pSzz99NPNWMqypN1uk2VZc1+SECbd34vXtyj2ATg+Puab3/wmnU6Hra2t5v4JIaiqqqGtJLLIRaufRNJI4o8UxhjOzs64desWZ2dnDAYDhsMhxhg6nQ7OOXq9XmNJk8ac1nGWZc29zrIsiK+jmCOJaNI9TvOXnpuTQCgJQUwsZKXP1uuvv95YMiWBVRpDu91GSklZlj+2LpaxjGW8M0JpSb+dUY4r5l4iMo0uFN1+h7XNFS49epluK2Nv94Cd/Qk7O0cMT6c4p5mMziiMpZ8JpNZ4FZpbTkcVk3koYLcLia0Mea4wHko8VBbjJPPKczKZsb7SwVlHheXRh9fptjMm1jEzgrWNVbRUjE7PUEJzaXuT2XjK9OyU1fV1nKmYj0e0uh0Gm5eYTs8od/aZTB1zr+j12vR7XeZ1xXjvmHatMAiskOhWQa9X4CbTkGsh0TyCAEC026y89/101tuIrmL+/I9w8zrs/xe2f4l6Gb6OWgMfxAbORepF3Cs771BSIaXCWBO/F97ovUsyj5Anwp0/p5B+14Q6uxTR2QOBrSzzUUWnXZDJKbs37/Lac99m7dGHkVmOElFQ6LNIzQBhXbSr0CBzdP8y6898mNGt1zl97XWwJlq8SLSU2Lqi6GZkhSfLAn1DKo+QDql0IEhIgQiVfaTOkCoMVCgVBDD4QAKJtA8ldBQjBCEIUoAzSF2QxCA4E/blEa0iMx1EG17GhiUZc3J1EFp4Dz7QQBxBnOLw+LpCUiBkbK8SGpzDWIPWCmdtpJGcE84EIJSG3COkR2kV1oeJZBZr8c6FHIwUYBxCCZyXKJlFSozFzieUh/cpHmqRtVuIVpusKKi9gdoiNVgXclZK6WAt5ILgI4ksjDPB0qQ0ZHaC9D2E0AHNIuJt9T6SfAUyy2mtXUa2BwiZQbRzhkRTAbzFOxOsbudDjm+8xGvf/A5aQl1Z2kqQq5B1VFKlVA0xbYgUSewB4BE+WfCcE26FkFgcErkgCgFnJcNXDzn4iz9i++/9b9j45Gc5/tL/i9U3vs/mRz5N9dwfoaxH6IzaOmxlkVKFnzMiw/nQpCSlQOY5+EDt8MZQl1W493mBUlmwP5GKcjrG2xlZv4PKWlSzOcKVILPQrCRAyUh2yTKoHKacgcrIlERkEm8F3oRmIpUV1FUVzitNbKAKpA2RF0iVxfyUR0W6i5cqJGe9x/jQtOTsDFEbnJBYD66sMNUpwoYPvHBBsGsqwXA8x5ZTVlahns0Yn56BM+i8AA9S5aEFy9tImIZyVlJZgzMKMo3xHutqhLNkeYbMwrOr+An0uf85sRR+/BzF3bt3uXXr1k/EEP40ItE+9vb22N3dfdvOs4xlvBPiqaee4tOf/vQDXsrLWMYylrGM/3QRGjIi5UOEToyE6wxdGRbn6phEcDhfo1yOCy6UzftBInttBo+8i+r0mOnR9/Ejh0o7q4UTCpG2/TRUAosI9h0+WKGYKBAIJWofCt5nU8qTY+r5lLZfCZvUmFAIm0SJRMevQ+dEq93l0aeeIstzZtMp3/ne9zk5OaHb6TEej5DSU1clQjgevnqN0/Eeq2urvPj883zqV3+ZT37yU1y5fJl+r0+eF3g0x8eHlPMxR4cjsqzFeDjGL1ykkiKSRCSZTJYo4dqt80gfRDDGeCpr0T4hMQXeOowARbQ4QYL0oSNGiODj6yqssxyenHHz1h7PfeuHSClptQrWBn22t9bodXKm85rxeMr+0Rl7R2fMK/uAyCPsrRPh4sFw3nP9jVeZ/tHv8/nP/H3W17b4wQ9+xHxW8slf/ijtVjva8yT5jkdEexdEvHOCRgghRejwECQcaiBpxNaXOH9BBIIPtkPSSZz04MMmWXiBIyBHw5ym8y4UQSKiNtiZhMSHE4LSWhw5vf46qysrdFotnPdBVORtU9Tyyd+YZOkS13ei23iPirjXcHmqsZ0hFt6kDMW+QX+AR+J98Fs1Tz6F1pqzsxOOT05otdrIrmpoKaGrNhI8Ii3Hx4SblPE8MvgnOwE22dzE7gbrg9WPtTIKmsLn1wPG20g/EQvzFGxk+AmkmmUsYxnLeCdEKown8cFwOARgZWXlAZFFElQke5eUM1oUPRhjqOua+/fv8/Wvf527d+9SVdUDVIkkALloiSGEaOw/JpMJ8/n8AfFHOleibixSC1IsUj4W7SwWC+yL4zj/3XZ+jPSaZKGilGqsUpII5aLdzEUbjVSUT3QTay29Xq8Zn7WWvb092u02a2trjEYjjo6O2NnZ4fDwEGMMs9mMs7Mz6romz3Pa7XYjjNBa0+l0GAwGDwgUtNbNa1977TVGo1FzLUVRNMKAxXufIr3uok0OBGHBcDjEe0+e57RaLYbDYUNeKcuS0WjEe97zHnq9XjPu+/fvU1UVxhi2traa9yQSRpqvRTpFEhqVZcmdO3d4z3ve88BY0zzP53Pu37/P6ekp29vbXLt2rSGhLIpXFskuzjnu3bvXWOD0er1G5JFoMmmeFq1lFgUlVVU1IowkAplOp5yenrK7u8utW7eYTCbkeY6UkqqqaLVaTKdTpJQ8+uijrK6usr6+TqvVasbW7XYb8U8Sbiyu5UTsSMSSRBpJgo/Fe7X4OZjP5xweHjbUkfQZTdZAk8mkIYEsYxnLeGeGt7C3O0MrhZcZMldsbq6wtb3B9rU1jIG7u2fcvj/m7p19RsNZ2LdVJT1r2F4rEFJQOxBKMptZ5hY6LY30itpZWq2MqbEcjed02wW5ksyqilA/V1gB+UrG0489gpSC4+EUkWdsr/cReIbDCUXRpd3uMD47Ym4sg/UNZrMznIeNy1dBCk727uJmJVMDtVC0FWS54mRe0tUKeTRGIKlqQYmn0y3IpMRqTVVWgMd7iY32FdmgT97KsLUh73TIuzlmUuHKmN2J+8PwuyfMZ/g6FLp9qpDHvb1I2RnvwYVnkGDzEP4M5IrQ/BGeB0T8+vx3VvNsQiRreov3jsnpnCwTZFoxLmte/tq3eeKXPkirP0BmebT2yKKgJBAwhTchf0AGukP72tNsfejjlMeHVMcniEyhcoW0BtXS5LkmyxVSB6sTITVCheIyCJQElIraEg/Co7QOQoyUd5Bx9CLkqfAeWWiEd8FORhbn4hQRm6gi1dQ7E7IZqbnFe3xtEEpG0Udo9PDONlQRvEch8VkRLFtCt1QQi+CQUuCsxctw3wWyGX94fY3MsiA0kcH6TeUeb9PzXRCWKCFwSiCURpiayBhBZQrrPXY2xIwPEe1VlNa4XJP7nMrN8V6QZUmYGaiswaJGomNewluLdQ5vHZQzZLuOuaTYxBUFBin30VpZp1i7jMw755Tf1OFDaILxLli8YAxmNuLuiy9ycG+P9U7O8ZkhlxItQ34x6HLOxU3pCxEJNIIgcMK7IHYS
SQQVX+zAK3FOVDXgZ4Kd514mf+iLPPRLv8vw+e9x7y+/zGP/zbO0L72L/s4rjF1GXftg1ZIrslaBygusdSFvJMBiqKYGUbumsUgIhVQKY6tIptG0un2ErdEEckxd1ghvyFsKpYtg4+Q9DompHVVtoDZIYyA36FY/iIWmFuMcJq6BIIDJQORIrYINEcGe2nkHtce7ijxTCC2RTmBdhZQ6PF95Ad6iOgVCtzC1YHp6QKYFeZ6hWmtUDspaYIUhU5qWzjBlialqvPVUZYnSmizrACbk/5CBIFJPMN4F6rPNqZxECE+WRZG9VGF9LDZt/ZRiKfz4OYpbt25x7969t/Uc7373u4HQGTKdTt/Wcy1jGT/PobXmE5/4BI899tjPeijLWMYylvG3N1ICXERP1pAjCIp2L2JR2MXdkUeKrNn0CySJFyAArXLyzXVWH383w3u3mby0i3SyEWEkxGeC8CVbjTAMj/UKRKAtSCGQIinrwzHstGJ6sMP8bEh3YxOdqYibjh0UyPC/iB0mUqGUZG3jErdv/ojp+JjdnfucnYw5bh9y995t+itrHB0doZXg0vZVSjtk0Ful6CiefPe76XbatPIWQnru796l1e5QVVN2d+4znRm6ZNQmEBiKTJNpQaY1zjp8JEIIJM56SmMQnrBxjqSMqnKIPNjcCC9Bhi4CJzzSS2rrou+sx7skBkk0uSAYqV3AMw6HM46GU157cw8X9sJRs/DWZf1Eenhr6QdkAu7v7fAf/uTf8vc+9zusrW7y8vWbWGP5tV//JK1WEfxOsUFAEhMRzgV/WBG7DhA+EjyCUKcpIsm4uSYlRJJIJJArhK+Cal+KIHbwAiFigSNu+p33OBGFMSQhiztvVUJQGU/lJd2VFVYGq3TbnXBeF6gwUujoeRuTT6nnw8WOJSnj90LCK21CvQfrgr1LuCYZyB8RxSqkYKXXw7krWOsCItfD/Xu3mUynnJwcI4Qnz4uYTIJE3QlSKBE8fJ1thCZSCIwPti4p55E+P3FFBLsYEexfhEi4/pC8cPg4vrgOjQmapLdRFL+MZSxjGT/LuCieuPj3RBBIheZUmEiNCYv2Jkk48Morr3D//v2G3pGOkygEi6SOVKROv/uUUqysrJDneWNxATRF+EWxxqKAo0GpL4wlFcFT0f8i/WFRHJDEE4vHUUo1Ni+L9Ic0F+n6L1ImkohCCEGv16PX6/HMM8+wvb3NrVu3eOGFFxgOh02R//r165ycnDAejxmPxw0tI50j2b/keY4xhqqqGkuRRAZVSjWWJ9euXePk5IS7d+82Vjr9fr+Zr7e6z4tij7eiqJRlifeeg4ODRpiTxALJjuTo6KgRSvR6vQfsg87OzhqBQsoDpnVU1/UDVkJAQw95q7W2OMbZbMadO3fY3Nx8YM0uWrek91ZVxdraGisrK1EQ3GqENLPZjNPTU8qybKxRBoNBI7hIdjXHx8ccHBw8QP44Ozvj5s2b3Lt3rxG6JKpGslapqgrnHG+++SaDwYD5fM4jjzzyAP2k0+k09zXLMuq6bsQyVVU1YqIklElrPF1bWsdra2vM5/PmvGVZNtZA6R5UVcXp6Slaa6qqoqqqn/gzYhnLWMbPd1jvmaPIEGysF2xtdtnYGJB3OuzcO+HwdM79oxEHh2dU0xqMI/cVfSXp99uoXNDrt5mO5sxKR66gmytq59AKwAd7Au/ptcPP/KlxeBwrHY1QiquPbbDSKxiPZtRIVla7dIqMaRUK/Wtb2zhbc3JwjG6vsrGasX//Pr3VFbY3NynnY+rJDCU107qisjVFJqkNTCclV554ktbeDs4IqspTWofQkpV+GzObBNqHEHgncNGWRWqFBs5uv0HWbeFmM5TMYt6EwNmUEiVkQ6/0+FgUD/mXsN+U8XdN+vu5TWmwRAVwOAF4EfMDIHCRnhGK2R6aZ4zQYBHEFwjwTjIrK7ZX1yjyMdPKcbSzx0t/+jVWtq+w1uriVQeECsIEFGDPmzaEwAuFbG0wePojzI93OPz+t8CaQLrwwaIlbwuyXCD0+b4+kDpkyF3gSAkxpVQQvfhAxVBaB+GBjCQ0EUgnQsU8hgx5CC9UFG0EyqkzNvnahvwUieaZbEMMOBXanlTMyUmJ0AWunqeJC2AMKfHRqi6QSCXYeG+kwtm6seLBuZheUXhrObczEVFE4UIjVWxq8cIHeotUSGlil1DAiygsdj5mfnCPYjsj67Tw8x7WGPK2xJoK70GLrHkO80n8ohQq92gXKafWYecVmTc4kUc+TGwQ8g4kqFZBd/MqqruBUAVIjRcyKDRYeCb1gdLq6yln9+/w2vdeQjhPITXz+YS+FmgRaTVNs044hIz/LoVsqCQuPnMk0UeaG7zAiZiqS9Y+3uONpxoajl++weqHTnno1z7Dnf/PLYbf/mO2P/J7TP/oDXIBRimU9GStDCk1pq6QWqNEhikrXFVhyyrkVPOcTBZRLxQth51F1pYsLxBFB2NqZpMZxliyLAchURK8tbjaIXyO9xVKgfdhXdgKsrqDcZZ5LXAyx0sL0uKlxskMgUUridAtbGVw9RyR56i8i/dlEDrZ0ExWG09ZzcLngDSnkEmHbGX4aQauxJJRVYbahgam1W4GVYFzHluVeBfI0rO5RUhDUXhyKShtifUeqWTI1wlwrsSJGuMcOlMI30JriZCt0Ez2Nrj6LYUfPyfhnOP27dtvO4XjySefBGA4HC5V5ctYxt8Q7Xab3/7t336gK2cZy1jGMpbxnz6Sl2uwS4mbC6FQIgcxQ4gML4MwISRdFTJiAIUINAEvwkY10wXtrS3W3v00o90j6v3wvmD/4fBepn1SU+OOJi9YL/HeBnSoc0jh0UIEcYkXUMPk/i6TowMGlx9G6RwhwrgRKm6MQvdE2nx7BO1WgXOCtdVtti+tsbb2S7z26g2uv/gCn/ilz/PmTQEo8qJLluUo5Xj6mad46NIGrVZOVuRMJ2P29g/pd1dw1nBweEa7VTCflwx6bYoiA+dwNrAa0ibbubCBzjRIY6l92Ci5CAExzqNtUtkHrKcQYUMc5jh0coR9cyzEuOBT6r3DuHNBjY/wE0voIkmb2gc2x28h8PAxiXPxO0qB1oKdvV2++D/+O3777/4Oq6ub/OiVmwgp+PSv/zJF0Y6dJD7gSL1HCnVO7ogJFYQM9BdB+DqeMRQDZJOAcj5cZ0TD0GyqifPjRKC/eAkiJiq8wBKEFUKCtxK8xUaxTGmh1VthMNig1+mjlExpqzB3UUWRfFZT8oEoXBFCIpVGyfC/jN0XjWgpTrF1BucTYDdckxaC1f5qWL/xU+a8Z+/+XU6Gp0it6ffS/MekFT7QM13sII83Mo01+fq6KFqR8tzSxSc2j3U47xKQNPzpQ1eXRGCdbXCrS9HHMpaxjHdyLFqwpOLvoqgCQjE+2Y9oren1eg15I/2fXnvjxg1eeeWVxpYiiStS4XnRHmVRNHGRIlIURWP/kqyIk81KEhekgnx6fxIJLNIhLpI9Lv5MX7zWRduWxWtKgpOLApnF918kfiSBw9raGlevXmVjY4O1tTUeeughtra2+Pa3v81
wOKTVajEajdjd3WU0GmFiweLieZxzTTG/rmvm8zmj0Yjj42P6/T7tdpuiKBBCcHp6ytHREevr60ynU7Isa2xeFnML6fouzsXi/F383o0bNxrhy+LclWXZ2AMtij+SZUlVVY2FzvHxMePxuBHIJOsWrTXtdrsRjVy0pknnS0KeNNYkklFKMZvNODg4YG1traF2JIuWTqdDnueN2CPRWKqq4vDwkLW1tWZ+kv1JoqY45zg5OWF3d7cRUhhjMMbw/e9/n7OzM05OTh5YK2k+0j1NdBEhBKPRiP39fa5du4bWmjt37jRzXlXVA3Y5zjlarRZbW1uNmCQ10jnnmE6njQWMUqoR/iR6T7rvSTiU/p4+g2lelrGMZbxTQ6C14Nq1AZuXBqyt9ShLy/F4yuHxhNt3DjgaljgL0lRsdAQdJZnNPNMq7Kem4wrlPJmW5FpgpWBzvct0OOXwZE7eaQUBvhBUFlZ6BapWbF5e4/LDm8ynM06HUxya1bUuSkiOT6fIIufS1Uew1YTDowP6m2toKTja22Pj0iadXof57AxnapCK4XiKMZaN1T7jeYlvd3jksYephkccvvgaWiqMqwFBngnMeIytDER9BcQtdNiaU43HiEOJm2TUsznSeJTU1KLGe9EQENKbmt/z0ZYkzC7NbrT5SgRKRiKL4APx1MUGiKA7iL9vL2w1kwVMsj8JORCYTww6K1jp5ZxNaoxxXH/uuzz83qfobGxR6BwpDN7ZKHpYJIfHirxqka0/ztqzn6A6PWV+9wbCVDhXU7Qy8laGzGSwIvEukA28CNQMZxAqWMDgfPgah1AgUVF8IYL2wLoglEi5FiURQoGtwu5bqiAYQSF1jjUVCBNptaG1ySehqlLh/lkXCBhK4Z1BCh3EJkhwNdjQmIJSeFMHwUio7DfEBRlzGt6DkBpr62CMIlXMd4RjO2PxIpBqPSbYyQgRrF5siU92yKaO5FGB9A5ralw5DmIO6bFahNco8NYHcokIpAipMmxcM0oIvBYYJcB6fFUjbAWyHft/fFRUOKTM6Gxcoth8BJl1QJyX3oWP9wfAW3A13s4xszPuvvgCb778Ou1ch2cBFwQEQZwU7Vl8tHdpfnKEz4CPlsHeh5zJOT2VmMcJogXvfdOvhBRY48BJjv/6DQYP/QGbn/tvOHn2Yxx/7y8ZfPBXGLznI7iXn6PqrOClQOYdbF0inKHodjGlwdQGZw0eA4SGMKWCBY6X4ZlQySCu8KbCUFEZG4RCWiKVQErfNP2Ax9YlWgWKnxUerSTWGmbjEVYr5qUBV5L3BvhOD0yJ9RbpJUoH0VdZ1dTzmhwNMllkC6qyBu+oY6Og1OeCC+kcbjqGskSqYKNpq5rZ3JBpSVtLlFYIrZBCYU0d8ppaQW2p61m4X1ozns0ZzUuU0PS6BV4orHDkWQtFsBLyzuLKmvl4jLcP7i9+WrEUfvycRFmWvP76682m+u2ILMu4dOkSELxkl8KPZSzjJ0e32+WjH/3oz3oYy1jGMpaxjBjeu6D8T74k6Wvhoqdrw0SMWEZCcR9iN0R4n2q36F95mPWnH2c2egVmqjFtabZQ4ny7FUw0PC4WtqVQQQAR/wu9DA7hBdOdfc72d9iYvptWrxc6M6QOog8hzze1sWAvhOTaE8/w2vXn+e73vo4zls2Ny7wwu84Pf/gDfvVX/x4Ciakr2kWHTrvLymCVSw9vsb62wRPvegqtW7y2d53XXrlOf7DJ8fEJQki2t9aYT+eAD0V/AGTANVqwxgXcprcYgrhDqvOuDAjJEhu9cMEjnEA6F0gNIhxP+CAECF0agfrhfcAzOpe6VaM8QAoUHlz0fl2can8+3z92799iPcxqD3XYru/s7/KlP/8P/Pbnfo92p8f1H71B0cr4lV/5RMSfehQK4V0YaxSjhARJoGWcC4xEvKPRlgUfhQlxJAFBERMYyWIoElTS6CNONgkipBCg4hqWFikyrA/dUEV3hdXVdVZ6PTIdE0TOxVxW8h8mdLT4hBglNfskrUq4lkQlkYFEk7qj8OHcLhb9nKuDLy6gtGCw0se6y6HDxQbv1oODXU5PTnDW0mq1QvImFl9cM75YmIoDCsQXH0VUaS7tQhdWTGo4Bx4MNcJLDBYZ6R8unQNw1r31zV/GMpaxjHdIJBuJ9PWiPcri39fX1ymKoikgQ7C4SHYfSXDwne98h8lk0tAaUqSifRKZeO8b2kGK9L1FUUi320UpxXw+RwgRfx8k6zD5Y9SQNPbFYy4KVJJoAM6FD4vCjYuEkEXKx+J7Fs+zaCWzKAAxxrCzs8Pjjz/ObDajqiryPCfLMrz3tNttjo+PuXPnTmOlAjT0jjSvi5YeqdA/m80Yj8ccHR3RarWaeUmvWV1d5dKlSxwdHbG2ttbY1Cxe70VSyaK4ZfE60nWlcSdBz+I1p9cn4UQiXUgp6ff7bG9vY+Pv87W1NU5PTxtBhDGmEWYkwkae5zz66KPN8S+G957RaES/339AiJLEMa+//jrtdputrS3G4zEbGxusrKw8IOZJzxN1XfPII48wnU4ZDocYY5jP51RVxWQyeUCMkcRRVVU1c5NoJotCEq11IzBJnxulVPOaNObxeMzKygpra2vNZ0JrzenpaUMsabVajYAmkW+uXLmCMYZWq9XQR9rtdmP3U5ZlswaTYCsdryxLyrIkz3O01kgp+Rf/4l+8xU+HZSxjGe+EUBIeeWSFhx/bpp3nnAwnjCrLcDhm796QyRSEMawV0G/LQJlQGSIXjOce01/lF37xScrXrzM5m6LbLXrdnJPTKcZ6Vta77J3OEUWBkI7tKxusdBVrq1usdLscHJxwcjan3WuxNVihtJ7RtKTdX6EoNPdvvY4uci49/hS2HDE+OeHqE09hqynz2Rxd9BEdTT09Y21jDZ23Ua027uQI5yRvvPAi1c4R2zPHvHbB3lZJWnlIOijAeI/1YV8brFAB7zCzeRCGKFBaBTKkMeBcLOTS5HCa35s+CAxCjuecTuF8oGoKFfbfLoo+fOzmCZayMmQbIolDSJE6O6DhvoYGGYFHCUJDiHDMxjX7R2dcfnyL/eMpxnlOj4f84D/+GZtPPM5Wq0UmNU7UyFwjZIZwRLuaeClCIHSHztVn2frFEw7NhPnOXbI8I29nSA2hRyUQPNM+X0gRiusNkTRAOpwMohAvRGzKsQif4ZWKIgsRmqE84A3IkBdh0T7ZO5TO8M6G9wPeB2sWISRS5zhTx4K7PG/0qGucINjZcP4c5jxIVeBiEwfeh8YWIUBleGea/IZSmqQKcs6hhAKZIWXMIdkoGpISb+uYs5EoqUITiVZBGKMV0uVhfswcbwJ5Q0qBUBlg8Qqs8Uh0cz+UFFhACI1oqZAjm1chF1POkVk/NHDF/KNHkq2s0b3yJLq7FY690DyUElveGbyt8LbC1jOmR7v86Bs/ZDSc8vBGn+mkQiHIdfy8syDEdkFoIgRIlXKUkawrPML76G6UnrtTviiS45wP8+VDrss5x3wCd//qefpPfYOHf/Pv8qOXf8Den/87Nn/3f8/kzZfAz3D5gKos0QQxTj2bgNDxvA
8iqTw0MGgzFpmpBNMzZObtJf7nHz3fcpiop4+Rj7hwO6/WUuvfQk4+3bjMclG2snuPXuW6hI0YkUS+vHmWQ56doGF8+e5v53XmUyydl84gK7b90P+zoXuteFtQgtkHGMV2DzCldTJaFu3hEhjxN+pwKCQPbA1YIPZgXtQMds6CBBzOG9bzbb4V8hMNYim9/t9f9JKcNelMYqpv7dLgUqkshORGdtCZIu6eoqKlJIpRGRxKPpLi8HOxBBTX4VKCnxNogXlAzjqvKSqijxXiCVZPNYl6wYkVWQl4FMkMaKvHIUhSFJDDIOpNDpaEKUKNaOr5INCszWMBAuH6FhUucSxGx+bEgxYAFMoG8OXMHb33iLux/e4bkHGZ+98FmK999Aak93dZ2ot4qMJ8j8CBl3IFpCJH1E2sUrGUArsQK6CNcNhBZjwJZgK7zLEc4CBleTEMJUdyGqTV3qHFtonqrCubAhR4Etg5ClFmbUB8gseSJCI1cQa4iajhIasnwjFkE2Z5jGDghnAi1ESIQMjVwIgagFKd47vKnwLhA+vKvwxmDLDJOPmR7uMbh3iztvvsXV197nvfdvsz3MyJ2nvsOrcyJhDXnvkAgSBJHynDmzxOlLxxkeHuGqinJcsnP3AFtUdPCsn1ylXNtgOr3FSqToKEGsPFqoh61C9XqVStJY39DwTV2gHGM9YNEy5C+p7x+8twRL3loIRauXRngaZ158sKgJ89P0uLlgG9Pcw8yyWDVdxIv6Egs2Mq6+jqUX5GWBihJEmREpTV4acBbpPalW5FaSZRllbXUjlaRwHpN7tHP4TofSWjwCRci1BkF7mF+pI7yUaCHRSYRUEWVWYIXEeYl1oLTEmHA+fTNmV5LIiMo6HIZ+vwPCMx5lTCdTdNrFEV4fK42UisqHvLJSmijRpHGEjpJA0S2DrC3pdAFFVWSBxqwlQiV4D2kU011ZpVrVqAiq0RBng3VxVZbhGKQkNhWJrdA2I5JmRnT5XsdC+LEIgP/kZmJ9fZ2f/dmf5b333vuuzeoiFvEXJZaWljhz5swPexiLWMQiFrGIR8LT9H0472r+R7jZFx6Uh0h4SqcZ3zwgWbtK0ltBRQlexVjrERiECCr7qiqDd6pzOG8JXSQghcDgUF7Qp+L0ZJcocqhuB3BQgbcVygd1eNxJUZ0OzlpkpupmhrCBkgqMcmETICVmcIiMYqKlbkjA9/qY6RTvPXG/i8cyGY9Z7XQxkyk67ZDv7eK9QCUpe3sFe4OYRJaMDvc4few4D669xicvbfK7r99gpXeci6/8FPnxTXrvvMnNMuOBFwipsKai9DkH23c5duocOtX8+y//Nk+ce5IXX/o0x06d4ztvfov9w0Ouvfc+W3cfcPGZp3nyynNcevISv/bf/S2ObR7n9JnzREmXvKgoK0evv8YXf/ov88lPfZ63v/Maf/gH/4H87YxTl87w7HPPs7X1gDt3blMUGXEcI6QmiVMQCVnRFE9ihHdoFQVrEqEQRGysn2JwNGU4fBCaNpxnPBmxt7uNNZKy1OztbgGWkydOoJWkKC2UnqyYUhaG+/fvk0QpcZqwu7vLnVv36USrGGtxlaUqcybDLTqdHsoYRORqbGbA1BobNn5K6XBOnUXZCm59m8IUyKc+jVxaJzA6wyb8kcYKUSetGvKHCBt354MXcEBjeJy32Mqws7vF0dEhUkqm2ZiD4T5lWZHEHXr9JVZXVrHGEycRS0vLDIY9smJEL005ffIc42zE/a0t9g+HdZeI4NLFs/R7fT66cYvhaPrIFWWsxzuB98HDtCl2WVd3uPhQ/JEqCh0UlQn2SkJSWYOpDK6hnBASETSdXEKA0CGx0BBUfNPxI+vOJTvr+nI+YGGFCHZLEgEqdF6Ft1sURBaxiEX8eIb3nizL+OpXv8q1a9ceKeK3i8fzX7dFE3+W2KApqEdRNLMFaYri7Z8379t+j3YhW86Szo8KK5xzM3IG8Ihli/d+ZpPSWA/PF/EbkUt7vE0xvnleW/QxP872z+dtM5rCvVKKKIpmc9aMsf1e1tqZYKL5/EYoUVUVw+Fw9vxOp0Oapo+QIhqSSFtc0o558cTj5vrj7Ga99wwGg5nYxxiDMYZOp4NSiml9P9kWHwyHQ3Z2dlhfX3/ks9rEivYcNY/PC1weN9/t17XH2F6L7TXWRGNB16a/tN/nceKRNjHmcc+fn+N5oVD7eOeFHvNzEUXRd83T475uj39ekNJ+fRPzYpH2PM1byixiEYv48QulJPu726yfOE0vH3H1T75Ef/UkT33qFbZv3eD+vX3OPnOFbjfh1kfXyacZ5y8/TTfy3P3oGmevvEDaS3nvta/R76/xzMtXuPbWG4wOBvR7fSbjHNFfph+XOCF54vxZJsMjrn3wPsdOn6G/FPH+69+mt7zGsfPwxtdvcunsScR0wjsffMhSf4U46aI7KZOq5MQzL6BFybt/8O9Z3TjJqcvPMt26R1lUKOJZST5s0kMzgQecdXgHAon1oelA1MQEIQBhaxqBCwVsZG3tUv/dFgIpBS6kZ5CqaUJwD+v/1L/rwxe4RlQyu1cJeSIVa5J+ghce3U1JVnroTgeRxCTdLlZIjJkiSYmW1yBNcUUBKKI4Au8wpkI6hcdiZGgOcZVFKU2VF3gnWVrvs1FY7m5NsBUUwpCkGi8seeWZTCtUrNBphDGe4cGE1c1lTj95jCIv2TvMEJXHu2AT431bbNk0T/h6H13b53iggsJ5Dg8GvP2VL5NNj0hlhlYRp567zMoT5+mtHydeXidK+wh9gNAdZNKrKSBLiDgBKYIuQwYaCMRBjOMIVjC2IGBcDcIZgkrFzHJzCFnLdFQtMkgROLwKYgYxM+Op/7bXBBM/Q5vMCyJlvVYEQRgiEDICoUHqmraq6k6VKvxnClw1AjMJxXXvQgNKVWHKKcV4wGT3Hgd373D7Ox/w4Zsf8t77tzjIDCXgRKO5CeIUHwA0eB8a0FIh6Cp44uwyL332SabTKSYrwcLugwGTUUYkPEksufiFF/gPr90hVYIYR08rYilnYg/hXRA0OQ81jXiWPPIPKcdKqUbTUv/IB4Ksr8clBFqJ+r6oEUqpWozrUELilUcKiTEW6WuanPcIJcHXKRwvZlQZIURN4KnpI9YhlMRZwXh3h6SzRLafUWUZHoU1hk4kEdLjKo9CstIJoqKpASsiEg22tEyznE63E96zbvZxzmO9D41I3hEnXYq8oMwKpHPBDlnGJCrkiKwp8A6yPCdSCtIERbDeRjjSJMEKiVARTjmkshgnyEpD5TxaOqJEo+NlikkOGJRQCAS2rMiKAldkOO9IO32iJKbMptiqwlce4yBSEVJ45Ngjooje8jIm6TA92sMag/KCXhQoxB0BPp9QHNnQCCbEQ8rK9zAWwo9FAMw8VT8uhBD89b/+1/ln/+yfzTZ6i1jEX7RYXV3l7NmzP+xhLGIRi1jEImYR1O+u1keLevMbNjceKQRSQOwEBZIyUxx9cJtkdQPd7xGlCV4qXC38ECLCW4eVVejm8LbuEgnKeSkEEZ5T+R4+P0AcW8V7F+xdnEdEOnh
TFlN0muKrIOzwaRI2CpMCkKikQ7KumZbbmMKitKY8OEBogU7j0H2SJFhjwDv6GxscbT1gaWWVeGWJcjBCRAl2MqYqDLv3C5ZVxKllyVKvT3m4z2cvn2ZpMObmmSXefuerRNHPcObCi4yLis2P3mGcZYycR9b4RCUFew/u0O0tcfzkaXZ27/Ev/5e3eP7ZT/Pcy1/kwZ2r3Ll1jeFgn3e+/Sr3rt9k/1Mv8tM/93OI48cZDQ5AjhFSB7GAM3hvWV5Z45VP/xSry2u89MnPoGSENYaVpQ0++cmfYPPkCZR03Lt3m9u375BNMwbDIcPRmCKfYm1JpCMQEiE0cRITacU0KwGBqp14rK24d+8eK8snGU0GJB3PyRMrRJGqkz0+CGiGEw4PBjx4cI9LT1xGRzE72ztEdMmmeU268IxGB1T5EWnnGCvSooQM3QdaB69aZ8ImGcCB0IHaoZ2l+ODryIO7yCs/hTt9GZTGeTfrRprRLZp9vA3fNBhP70Ong/cGvGAwOGRnd4uiKJBKcXS0z2g0IlIpnU6PJy89zcmTJ2vyjObC+ae5/+A2H330HtY4zp2+RF5mCPkq9+4+wDhP2ol56fkXWeovMxxPGI5uty8rqqrCWIO1FdYJJpMxw/GAsiwRQhLFKd20j3UmUFzqYon3MBqPOTg6IOkkJGk3oFtnxcCQIGuad5wI9kxeuFrs8rArORRcQmIwdEE87Bx2zrSSfIuiyCIWsYgf33jw4AGvvfbaI6KMpgDfEBXguwvwbSpE8/PmtY1VSUOziOOYsiwfISg8zp5inlTwuKJ0O18kpSRJEuI4ngkIy7KciS2klGitSdN0RjcxxszoIGmazsggzdib4nxbKDI/trYApW39Mm9BMi9YsdailGJvb6/+exfmrCzL2Xw2lJCqqhiNRjPBDECapjPRRxPNcTTv08x9czyNgGSemPE4AUv7523xTVVV/O7v/i5f/OIXWV1dfeS4xuPxIwKdRjh07do1nn766dlY5mkUjyO7NDSMeaJFez6b17WpGI0opS0kaSgz8/Ys7fPSpm+0SSDt57bH0byfEGIm4mkEIW2KTHt9Nj9rxtW2iPk46k17HX6cMKP9uNb6u0Qg7euv/VmNbdPj1sAiFrGIH7+wHs5ceZaj6++xv7XNhZc+S7+bcPO9t5iWgmc+9TJVmbG3dR+fl1x68jKTg/tkDp547gX27t/h7odjLj57BSU8733za8hkmeNnn+D+ndt0+8sYN6W7tEx3eZntnW0cEZ/5hV9ktHWLu9eu8fKnXgQF2zv7XDxxnNGDBzgknbV11k4dx2QludRcevElRlu3uf/Rh6yfuYSOFLu3rzO9u4u0YS/sBehIo3RACUitKKdlIKF6i2hokFIEEYFvfseK4MRBTReQ1DYX9USJQNoIX4fCtPcmFMNrIoSvtQLOWqRQIS8katpm/d5CBcGE8450c410bTVQWTsdVH2v4soSZ0tUXJCefoa8OAz795pcKXywZ22oIc6JQKb0DlMa4l6HsjSYiePkxTX2DiZkhacoK6z0pHGMtpaiMGRZRT/SCCWwxnC0MyDpRmycWqJ0juFRsIuwoSQeGiSA2jQVajlALUtAEkgUzjrsVMD+gGz3Adv7uwx2j+h+/XVOPXmOs88+xfqTT7B88hSdjRPozhI67SOzAagEGXUDBSRaQiQ6UCUatYEGoWI8McL54N/rajSo94EG4gP1Qfha4OFd2Mtja3GBqM9N0/jRJEcCzzds7xWi+UwhgWYctaWLqBEhHiB8nndTsDnO5DWhxOBsDlWOcwZnCkw+JjvcZ7h1n+3rN7nx7ff48P2b3NsZMMgNufc4BFIGWyA7k9g8DElt8SLhypPHePnzz2LwlFmGc3CwO2R0OEU5QYLn5FNn2fzcT/Dgt99EC0EiIZUCLSCSMliiiJCxmhFrhJwZIHnhUUIiRWPf0izoMFdCEs4BQT/gavJOE8FWuH2fLMMpEMFGKVBeQ/uc8+G5vn6vYN/rUUIgRCDNOO/QKpwbPynRKyCdR5mMpLNM7sJqNBakN8gIYq2Y2nAf14skwhvQklhItK/o9Lu4Micrcqoy5OREHON1RF4UVHmFigV5GfYEOnU4oUK+ylRULojLlPLISKF0SpQGMVdeeYrJGGEMUmsqryimnrwqUVqQRAmuEuALtNJ4a3GmwjhHZWqaHJI0SaBpiLIO64LdTCQMPlbY0lD6HK8LpIpQOiZZWsFOJ2jnkCZDGYtKIsrpBDcZojo9wqn63t/zLYQfiwD+08QPgE996lNcuHCB69ev/4BGtYhF/OiElJJLly6xsrLywx7KIhaxiEUsYi48IZEgBXivQXgCINLjfUAnKiFQXpEflOx9+AHJ+jHS/hJaxRhvUErjtcd6g7BhA+XrDawUDcJTsuynJIf3sJHGmwqThY2UsUUQAAjqjXaGTlJUmiIjjUo7+CTGW48wIEjQaZdsvFdv1AxmNEaqZWQcI2Id4Jhe0Ol32clysmyMjGJsUaLTDpWrgjBkWfBkmrOaCtI4BVsx3L7H05un+dULnv0H27z17d9nMvoCaf8Y7txlTty/Sz48IpQxBCfX19k9OGR4tM9weICu/V7feudr9G6tcPbU0zzz/OfY2brOzu4W+4db/Okf7vDB29/hs5//SV7+3GfZPH4C60HKCJ30SdOUyloO9rbZ39+n1+0hZcTR4QFCClbXVjh/7jwnTp7i05/7KfCe0XDAzZsfce3DD7h69T2GR0dIKRiNR5SVRQhPnk1w1tLvpaRaMslzTFVyeLiL1p40LVnppQ+7jRBIJdBaMDkqee+9t8mLkkvnnyLPKnZ2DsgLgzMO6yry6YTdrY/odyM2lnqcjXN0FAEeU4SkgaksnTiuk00gpAyfJySR8Li9u6i3vgTTI9ylT4FOatoFeF9Ra1FCysZTYzzrtI13dWLJURY5O3tb7B3s4z1MJ1OOBkdY40gixalTp3j6qWc5ceIkKkpJkj4nj5/n4oVLnD9zgQdbd+l1+hRFznh8mUlNCjl3/hSXnriElIpOJ330evK+3vAHLG5ZFhwOjzg8PKAqTe2lG5JmxlQYU2Gdw1iLMY7dg33SNKHX77MRNd1Cwa/ZC4+3IWHgWy0zs0JOaAULnQ/1OLyvu2ua54mmO8h+37ojFrGIRSziRyG893zta18jz/MZHaNddG4KxO2CeFPobotCmnicnUhDWmjEGY0QYH4c8/miNulg3nqjXYBvrDG01rNC/ng8nh1LU6CPoog4jsmybGYBE0XR7N/2Z7UpGO3PfZyFRjPO9rHPF/ub902ShKIoZgSPttikCecceZ4/Mudaa+I4Jk3T77K9aY6/meP2mKy1TCYTyrL82DXQzFmbVjFPgvDec/PmTX7zN3+Tv/f3/h7Ly8v0ej16vR7T6XQ2L21Rx927d8mybGb30haHtIUWbSFPY/XSCBbaRJTm+fPzKoQgiqJHzkMjvmjWRPN+zddta5p5mk0zH008brzNumg+sz2meQrLvCClTe1orod5WkibyNFeW825aFM/5h9r3nOePtI+lubr9nssYhGL+P
EMpRR3Xv8aUsY894WfJZ8Oee8b3yQ9dooLl47z4PoNssJw7ORpTp0+x4PrH9BbPUl/tc+Dmx/SPXaOl1/+NLe/8w1272zR662R50fcu3kbpWMm0z2efPZpYg1b29tsnrvI5voa1978Otmo4NIznyCfjCiLKWc3Vhju3qe3so7VCScvXSTbf4BYXuKZK5e5+9Y3mOwcsrxxgqP9A6ajEWmcgFd4IUPt3zcC0AhXGqyxeKmD4KP53S5qi4pQeaZtI9EQPkJDhKwFG7X9Ql2oRlBbfqhAEqkL4d6HfaqUqi50M2sEEkrihEcnGqkkItYIJZFak66uoKSiGB9RTgt8aSjLjGRtlai3wnjrBp4YlA20EV2Pry6OO2txEnQaI5wim2SYyuMmFafOXeLcxZxrH+zhnKcsHGCII411gqowVJkh7kY4BM5WGGNZWuuzecLh7Bg7zBHWgwAVRTgF2bQMoplGkOCDiMIRFATChfzXdJzz7T95g24nBRzTacXB7nt8+Nr7bJ7e4PRTlzj1/DMsnzvD2rknWTr/AtV4FyX3ETpG6AihUkTcQ0RdhO5AlOB1bcXSiA9Uoz4AL6Ja0FODO2pbmpDnaDpfABHO8Swx0tzWiLaNSy0C8Q9FP0HZYMEW+MZ+xmbggmAHW+KdDXYu1mCqDJdNKMYDxgc7HN69x/13PuTaWx9y6/YO24cTMkew+aj1JFIEoU2wIBIzcq8ApPCkCJal4JWXz/LSz77C5OiIajxC4BkPp+zeP6TKCyIcy/0OX/wf/q9882vfpBpnaKCrIJUBpCJlnd8QtQBGyhlhtTlsJVTIR7qG9FELNHh4LyKlnIk/wpKoBb11s5wUAqlUyK1Yh7VBZBLufcK8OluLm4KmaZavEfX6ElIEK5n6XloAVOCKDOE8qrSYONzjTacVSaLra9BTWo8pKxIdYU2FsJY4UpjSEGlY2jzNcJBTjbYZ5hWdRLG6HDMpJONphjCO3AiyvKCfqEDNTWKMlVSlxQtJkkSkS32kCNeZlAnGBbJhvNzH5Bk2r8iyguFwgqkq0kQzmpbB4imJSFRE2k1Q1AJmb3HWEeuQTzROISsLUlCZ8LgU4J3FuGBrpZVCJR5bWSIdo5IOLpsQRTFKOipAWYnHkohwjhzf+/u9hfBjEcCjCvSPi06nw1/+y3+Zf/JP/skPaFSLWMSPTiileOmll37Yw1jEIhaxiEV8VwSVeuVD64etN2RKWAS6voUOGzUAYwWHt/bpnbjG0sYGWkf4SKMVsw2pc1WgEPgabShrn1eVczI7xOcT0t4a3licVHhTYvMSn1U4VwSLDGNR3YRkeQXdidGdFJkmeJcH8cCkQCBIlteosglKeKrhAJWmgERGGiGDh2fc7ZMsLTPePaKTLBMvL2HLArxAY/j0z3yCbDTkaGeb6eiI5fUNJpMR0/0dXjxxkv/Ti5v8xqu32Lv3LZ546jP4/mXSK8d5ev8e77z/LlaU3L57D6F1mE/rKcYTuisRRZFjnOFg/6so1eH82UtceOIKo8Eeg8M9Dra3+b3f/rd840++wpNXnuap5z/B2XNP0O12STpdytJy7+5N8iIjjhLKMsfjOHY82MOsrW+QdjrgPdbDyuoxnn22y7GN45w6eYYoSllbX6fb7bJ/sMOHV99n68F9Tp8+w+Bon637dzC2QklBv9ehMnuQayIdk6YKhAgqftWlyAreevst7ty9zUpvmZWVDW7ducvR/hHWgnMVpizY373LdLjDE5df5ETk2YgAH4pokjhYm3iHkHUnhauRl94jpENKqLIc8iHiw69iVYS8+BJCRDWporF1cQ/XnHdhQ+1FvWG34DzTyYTdvV3yPOArDwa7ZPmYNO5z4sRJnnnqCqdPnkLHEdYadORZWkpQao3KTCjLMdk0QylBv99jaXWZfDrl5ImTrK2uk5clUmlalwgQcinOhQ1+nhcMx0PG03HAgcqQMJtMxuRFNuvQNnXibTrN2Nnd48TmKZaX1omTutAVALQIIXHCgpQID47a3qZODLlG3OFF3elVF1h84PvMupqa5NL3+1fMIhaxiEX8kKIoCm7cuDETY7TtW9oWF000YoR52kc7jDGzon2TB2pEF20RBTxavG/H44QmbauMx1nENGOWUrK0tERVVUynwWassXpphBHj8ZjpdDobV1N8n7fmaH/dfEbzmW3iR1vA0J6vtmVM+7E/y2Kjmbs0TWfz17aGaY+hOQ/tx9pzk+f5n2n/0ohe8jx/RPAxL8Rpzvnv//7vc/78ef723/7beO9n9I/2uWlEGEdHR9y/f5+nn376u4gUbVqH1voREsc8eaQR9TTz0oyxTbJo1m0zv48jhDTPh4e0mubfZtzz7zEvsmkTROZtYdrnrjk3zdw3opP257XFOfOCqHmKTDP2Nm2kLWBpk2ma92yLQdqCk8eJWhbij0Us4sc3yumE3uYTnLt8ifsfvsnt63c4efEKaQIPPniPLBOcvHgeb0bsbQ04++xLDPfucPf6B5x//pMo6bj67W8Se0EkY7b2jhAyQglFt9vn7KWz7O/tMBpNuPzCy1TZIW989cusbRzj3IXL3P3oI2Klwn4sUqyfOIlD0F9aZe/uTZY3TxC7ive+/LvEKkaqmN3tXSgqVlaXcc4xurOD9KExQClFrHUgQQhZExrq3/lCBpG/rwvOSmFm4sLGsgQC+qNuHGhsYWTL2sXVfwNl8zc1NE0IqQKFVYDUkprNiUoUSmuss7hI0Du2SrK2Qnd1GSss3pUc3d/GZSWmqMBZnNAIneOmA+zOViB0eo9QwUIm2HF4eusr2MpQFTllHv6GuMa2Fc/29fs8/fnnGBy9zv0HE5x1FHkFQhDp8Hcin+ZorVCdUJTHC6ajHJ3ErB3r4rxjMq4w1nL2yXPk+Zh7N3cxdeHde/ewwaQWTAioG5iCGCCbZmgl0FrhpccQcffOHtv3Drn67Xc5/sQ5nv8rf5UrJ65wcPUttIZ4aQXd7aLSDkonyChByBS0RqgYVIKQEULI8K+OQUa15YpESPFQQdAQDcTD8fnGymX2J84//Nd58EFoE2xbLDiDdyX4Em8s3hVB3OFM/TODNwZrCmw5wRRTitERk4NdBne32Ll5n/sf3ub6tdts704YVIbcOxyNQEjMqBszCk1zr1mPTAlB7GEllrzyyTP81H/zM2zde0A2GiOlwlQVk0GGNQ5pLV0d8dyv/Q2OvfwLfPTPfgNXlsR4EinQNWFDUouTfG21gkcKha1tcRtxhneuxXdpiLEPba5p3fO0bxt806iEmNndgEDKesprhY6UwVJI1DlQfLiWQpOTQNTXoUoTXJWHK1UrvHe4ogjXd27Jo4rKeJJEB1puVeKEwBmLwjE1FRhBKhylc0Q6QqmY4f4+Rq2Qo4gjTa+TkE0tuTHEWlE5j7OGfiem31Uk3QgrYqbBOYiqyPHW46oCkSQgIsZZRWI9na5HKA06wqtACpFKIcqK6dSAdehIgMhwnRhXlSit0CrCViU4hxEekedIFeEdgeghy2Bn5TVCeIo8R0tJpCAvCyoLyk+InaOTJCQ6wlQ5prKQR
MRxH2EDEUcsiB9/8eIHdYP/nyP80FrzK7/yK/zzf/7P/8xEwp/36Ha7s01f08mxiEUopfjMZz7zwx7GIhaxiEUsYi4aBbulLhJD436JF412WjUuG3gERSYYfHSD9ZMnifs94pUVlNS1gr/2jLcV3lqEUGgNVhZsyIrV8SGq1ydWipk1BxYlBXlZYMscrSPK8QSZ56F+X8T4yqCdRyYRUpf4SBJ3u5iiwIxHGFshvaDY2yU+tokmRciwCRNS0l1aZrq9T5lP6awuQ1YQdfqU0yFkE1ZPnCCKUw7u38XvH7DcX2F3bwutJS8/cZaNXsqru2PeHH9Id/0pKjagLxDdW0wGI2IJcU3KlELigOngiN6xYyRJzM54h7IasbvzgM3eCqfPXeTshWfIsyNGR3sMBvu88eo3eO+tNzlx5gwXn3qWJy49hReOPB+hRIEzFVWRo5Wk3+ux1F8iiWO881gbClHZdMru7jZ3b92m2+9y9vwTrKxsoHTEiVPneOaZFyirkiIreOO1r/PvfudfcWx9hY3NY6ysroYNq1c4B5U1WF/RjXoI7/nO62/y9ttvUeaGs5cvcXg0oTJBwe+sw5mKvd37TO9cRXc0mxtrrE/2sMSoSOKdxzgTEgxSIFUUSCDC1rzZQO4IhBjwVYmWkul3/hDiHtHpp4CArfSNZ6t3CKFqcEXN0gw7cLx1DEdDDgcDvHOMxgPGowGR7LCxvslzTz/HxXOX6PX7OGdRUtPtdCl0gbEVWkukhKLIUDIijjTHN0+ipODkqROk3ZjSFDO7nFaqBescEGgbxlkqU4UUg3NYW6JkQPSbymCMw1pX+9ACXjCZZgwGhxzb2EDFIFEIJA+lWbUQRAQSiBB1wsoHcoqtu8ACfcTifLgOQrIidJ5YZxe0j0UsYhE/1jEej1laWpoVi+dpD20xRDs+TkzQvE9RFI881ggZ5kkO7WiLKdo0gqbw/7jH2+IMYwxFUSCEoNPpzKge7ddYa4miiE6nMyNrWGtJkuSR584X3ucFH+3xt6kU7QJ7W/TRfn4cx3Q6nZko5XHzoLWm0+l8LJWheb/Gtmb+fDXjaWxemp+1qRvNMbfPU1sUMB/NfP3O7/wOV65c4fOf/zxlWbKysjKz11laWqLf76OUoqoqPvroIy5fvvyIUKEtoGiOL4qimbVQW+TTjLU5xqqqZvPcrIFGGNF+/rxYqInmcxsBRCNSaQsomnPfPKf9fu25asYwL5ZpUzua9dM+P21h1DzppT0XzTzNi0rmrW7aa6z9uW0xUiMcmaf3zAuaFrGIRfz4RdJb4tQTZ3j/T/8jIlnjyeeeZ//WLe7ceUD/+AnOX7nAZHAASY9LnzjP9e98G6u6PPvz/y3Zvff46J0POX3+Cbavf0RpYWN9mfFoysrxTVbWV7l7+0N0d50nn3+B7esfMBqNefK5l1Cu4Pa77xIlKd5VLK32EEg6S0tkWc7waJ9jJ08xuHcVMx6zvHyMIi/IBgMirVk9s0E2HdNf7VIuJ2SN40ZDdiBs04IgI+zhZCSRVuCFpDIG4cOLwj5Y1AVpFQiR3s5EHCJU44OWQoqaFEJtE1IX5YUPe0spUKoW2rlmn63wUtBZ65OuLiO1Iu73cNoiHJhpgfKO8f4RrjB4C9Z6EuMoRyPy7W1MadCRwlYVUmqUDlTZye4BpgSRaKI0RihJPs1raoXD54bRsOD5n7zC4e98m3HmsdaR5RU+jYi0wnkoK0MaN2IXjykNUgnSbsrmcUkcTRiOcm5fvR7yNV6gRCjwu0fuA0W9t35oTeMB5wXCAs6iIkkkggigtAY/zZFbW9x//12mE8vOt75EpBRLx9dYu3CG7vEN0pVl4u4yUaeHShJUnAYiiIwRSiGIQOm6oUTORD+C+nvZ2LY05NC5+xnfCBLqxxshh6/AmdAg44IVrneuzr2YQFsxJbbKMEVGNR6TjwZM9rYZbe+yfeMed2/cZefuLvf3RgxzS+EdDoH1gTKiHy6pMI8izGHDy3A1jSQRgtTDUqz4S3/5k5y5dJpb129STccoFWhk+aQgH2RU04KOEqydOskr/8e/z3SQcfvqhwjvSCR0pUSLxkGnLvnLQA5Wur4fsIF2K0QQdkhZ2+fW9jYSgZeEa0vKOrfisTMqSH2/4QIXxvtAS3X1harq67UR41jvglinoXvQ3OOCdcHqxXuPzYtwrrwIJFfAK1BaMhrnTLSm10lRMlCZkQLvBKUxWBRSJ0SyrO+/PEkqKJ3BjA2TMkdKR7efMikqrAMVCYwRCO/opzFxGuMxjCuPFxYvJVEaEUWgpMArqJwliuIgrhESYyyoQMxJkTgncdMCUUhK78L5rmrSc1yBlWRFRifpoLQmyw1ZUQZLbl3R7SXEcUQaLVGVFWVZgBRYKaGCTBisCudKa4VWEi0dtjKYKuS9dKKJkihYXzXJ6u9xLIQfP+LR9gv9fsZ/jtWLEILLly/z0ksv8eqrr37fx/SDjiiK+NznPsev/dqvceLECe7du8e3vvUtrl69yocffjjDfS7iL2Z0u10+8YlP/LCHsYhFLGIRi2hFnQ7FIzC+LjqIoCQPGzeBdQrbtEDU8g/nJdOdksPr10g3Nun0lpBJIAg472qLFfDWopXGeU8iPCemOyjpSTpdfGlxGJwrsL5CSYVOYzDBNkYpgTMV5egIX3VmuFFNFxl1oKtwWUB0KqWwRQFKYYaD0CWxvopSUdhIe0+axuwMj1jPNjBJEoggsSLCU00G2OGQ7uoyrjrO3p17LPWWWO72GQ2O0DriiZMnObaWc/yjW3zj9re4laWsHn+WZ576Ke7de5vt3QdUuSFKdCiuu4CWzI8OodPHVo7BpKALlONDbl09YutOn9PnnuT46cuU+ZjxaJ88y7h9/Rr3bt7i1c5XWDtxktNnn2Dt2AY6giiOsN4yHu5w/eqQw71N+kurREkH7wU7Ww/Y3t5C64TNzZOsrqyRdjuBxlGVWFOQZxOGgwPSxPDZz77CeDTEWU0UBZSk83Zm74OxFNOcV1/9Nq+9/jq2gk88+ymOnzjPNJsGSxEHrio5PNpmeONdlkzGiQsXKN9/m6Hf487GKudOXyBCIoWn0eOLJpnvVS1IMDWGtN4YG0flM8x0wPQr/5rln/vfozZOQu1zG7oowsZe1PQPmjSD9VhTMRlPyIucylSMxkNc5VleXuaZy09z8cJFVpaXqU1U0EpjhSebeiajEQf72+zvbXN/e4tEJ3jvefHZFzl79iTra8dYXVmjKgxKitb1QY35NLU3rCJJU9bWjtPprDIaDhkODmuhjqOsgsiksuHYhQClNcbYgOsvLd4KnAyJHCH07Jp1rikOBfyuMCE56HAIL2qbl3ot1t0pvk7Q0BZ9LOohi1jEIn5Mw1o7K6Y3BePGDuNxgoHm+3Z+Z/5n3vuZtUjbuqN570Zg0S5Qfxxd4+PIH/OWFd4HzHKbcJAkySMkinbhPooiut0uEKgY8yKN+ec3jzfRJiy0j6Ftu9KmTrRDKUW/36coio8txEdR9Mj7f9y5a8//PAmlEVI87hwqpdBaPyJu+LPygs04rLXs
7+/zP/1P/xPr6+uMx2N2d3eJoojz58/zy7/8y7z22mszQsfW1taMdtGMzzkXKF415aL57GYNNuKF5lw241VKzWx72s9vW6U066KxWGkLjZRSj1jGNI81c9ccZzMvzWubz3+c7VGbYtJev+112CaWNNdDew010aaTNI838zN/jc0LjB5HzHmcaKVN9mnPw38qV7uIRSziz294W/Lml/4DJ556gVg7br/1DmUluPDJV4hjwTgzrF68gvY5733rm5x46kXOPXeFW6/+Ibs3brOytMKDWzdwHpbXlyktPPHsU9iyYHdvh5OXnqWr4f03vk1nZZVPfOYVBndvcLi1R9zpoLRi9dhxlHRESYfJaIyR0O/22Ln+Lp00YXn9JOPRiGI8YWmlR399jdFwzNrZJ8DmGFOF38mVBQlCOAQaIT0qDsKGaLWPzQtMXuFcKGbjg3WEd75l5+ERPhAYZpQIAvlxJiipBR+ipkqEDI+v4QV+9n4yqgUaGnSqUd0OopOQriwTdbo4m1PlGWYyJT+a4AtHMa6wdT6nzC3aWyYHR8HWoTQIKbDWgNDEqaasBE44XF5S5RUi1aSrPbJhji8F3hvuvXWNF375p7nyqT3e/PotcgPeGPLM45JANzCVxVQKHYd58c5jLEjlkVrS7cUoJRmPK6bTktl+WoDwgRhhfV0I94GyCeC9rakSga7qPFhn8I7QtCEElbFESczk3i12P3qXnRsPyEY53hvSbsL6+gqrp9ZY3lxn5fgm/c11uuvrxEs9os4SKk1RcYKKEqTWgf4hFULKWYMLUuCFRoj672+ryu0RNe2zaYKpt/ne4Z3BYcFavHUhR1Hl2DKnyqdU2ZR8OGS6f8BwZ4/DB3vs3tti6/Y2R4OMwbhgYgJZovIeJ4KYCOGRteioyUe4WgRRS2cQHky95lI8XS84udrhsz91mdOXTjA4OKLMp+GeBDCF5WhnzNH+BGktnX6fn/w//19YuvAkb/z73+bw3g5SCGLliVUgvuiG9CFDm4yXrhZCMbMsambK1yQOX4ucPM29Yxi8F/W6lwJvw6UTLqE2Me+hkKp9j9xci7hA5pnlXOrLUrgg7sAReMqSQF1JdJ1LC+NQxrDWjfCKYIUtoyBg0Zq0lyKVoioqpHfEaYJSYAnj8ihWe+BkysQrsmJMJDy2Mugowvig0jHAtHDEvS44T6IVOolwLgl8WSnxHpSQaB2ahabDDKEEUdoPYijRCH8c3lVhPgQgHd5LsqzEegmuIO74MEYBMpbEkSZOkvBbx3mkM2jZwSmFNSWlqfBVhdaK5W7CUrcbrjkvMZVHaI8wFarKkanCqfoe988Qd/+XxkL48SMePyhld3sT/mfF5uYmX/jCF37shB+9Xo+/83f+Dr/+67/OlStXgDD3VVVx9epVvvWtb/FHf/RH/P7v/z737t37IY92ET+M+NSnPsX6+voPexiLWMQiFrGIuWjK1d4LnPBIPFp4lAjswrrcHG7saaQfElvB4UcP6J+6Q3/tGFGSoKSoN1iBxiCEQAmFlY7UTImzEZ1+HyoLQmJLi7AO50qc1AiliNJOUIrrGLC4sqSyHoQMIg6hEf0ImXSw0wJXFOhuFxnH2Mpgq4rq4ACpNfT7SBEK32m3y6mnnyRKEmw+RvaXEChURyCjiGo8RhhHb32doigYbe+yurxEqmJG+7tIpVneOM7nn7nMZnKb/++7D3jr5tc4efoZnjz3MuvHzvLgwYcMD/axTaIAgTMFo1HG0EHhPV4IlgV0hCcqxxQ33mZwr4taO0GydoxOV5LlQ6aTEQfTXbYe7PDOm99hfX2TU+fPs7G5yeaJU6S9PlprhsMBw+HhrJBgjMHZCYPpHh++P2AyOkuSdnHO4q0J50YIrDEkOqbXXcVWIliuIMBbytKA8HR6Cdmk4I//6CvcvnOP4+vnuHDuEv3ucpBK2NBJYSrDwd5dsrvXyMZD0vU+S0rRvXmPYdewsWqoqgKhNNIJtJKh88MaRBzVm3OLMxIhXFgfQFUZhgdDjPUU7ojp239C/wt/Bd8UEjx4DELKYGPim2SVw9WdoJPpiGySMZ1MKXJDJ+3y9KWneebS0xzfOIbSYZMoPdgy43BwxN37d7lz9xb37t3h2kfvce/eLZzTrK+t89QT53n60rOsrK5greXB9oNg3SIk1tuH592Fa0oIQS9dItrs4b3k4OCAosiZjI4oyoLpNKOyhspUaBkTRZrVlSXiKGZ5eRWhCGIqJxFSB/RtTT1xPnQrBW9fgRM2JF+8ABEhvaVy1ey5DQIzUF9rX+hFLGIRi/gxjnaxuk3QeBzlo01GaP6mNo/PP7cRlDRft6kN7fdrxtAWXbQL6+3ntj+nKWS3i+RtK4+2mME5R5Ikj7xnQ3Todrs452bEiiiKvmt+5sfR/rf9fvPz1ogH2o83z0uShJWVlRkFtk1QaVvSPG6u2mNrjqM9Pw1VIsuyGdGiTf9o5mzexmT+3MyLTprvq6ri1q1b/NZv/RZaa4wxdLtder3ejPbRHOfBwQH7+/ucO3du9l7ztjXzwoRGINSss4YW0pzLRmQRRdEjoqDm3D2OzNGsD+ccVVXNHmuvo2b9tKkZzTpqW7Q0r2sEHFEUPUIsnqfmNJQZrfV3re2GjNK2eWlfV/OPN+etOZcNHaQtOJlf5+3roPn8JElYxCIW8Rcjiizn9FOfwPuC6299iIgSnv3cCxw+uMXBJOLUM59g9941Bru7PPPST2B9xuv/7v+FLz2JTtjd3afbXyGKSqwXbJxYJy9ycgPnrryEqIbc/vAqG8dPcer8ae68/TrCKYxzrPb7pN0IU0xR/R57W1ukK2tQZBw9uE+caMrCkk/2kXjOXLkC3jCdDFk/c4psmtFfWqXIq1C898H6ob/WhyJHeokzHhFHpJvr2PEh2VEW9nFC1nSQUJ0WqhbXIWttf1PcrguuoauHWTVayBlhpBGI1K0ZOO+JIo3QChkrdDciXV4iWlkiXVkKRIDpmGo8DrmXoxHT3RG2MKEw3oxBSsaH98mGEzCh1ULLKOQjHOSTEusaoUkglijvmQxyRG1r453DjCbceO0qz3/xFbbuHHD79hAHGOvxmcUYMA68hI4PQo/m/bxxCCXRsQYpkUogJGTTEl/5MMcyiD6crxsqYMar8LNkmUMisXVuyZYljfxCWDjcOuTNg1eRcYRzHmPD38pxVrGzO0RcvYvW0E8T+ksdVo6tsHx8laVjK/RWVumtrRL1e0S9HjrR6ChCpx1UnCC0RMoYGcWhCUSG8xXgLRJoGmEszgWih7MOby3WFLiywJQFJs8ps5xiOGBycMhwb5/B7piDrQMO9gcMRxnjSca4sFS+zgE6hxUP5+KhoOhhzKAXwgfL5ppe4xEoHIkQdBAcX9H81M89S//YCke7h5iyqu/ZFLYwjI/G7N8/wBaGjhZc/NznufhX/irCebauvkFVVHSlpyMlsSTQKGRtiUMNieEhBcd5V4/Vzx734lHJDDOC6sPrQygRrIlajznvwjVEaOxx9qGNC7WARNSWMM4Fi6CGgCK8BFkLq+pVI4WaXY6irxF
WoIxFWkev12HkJMV4glaghEIL0JEgKyw4S5xqoliiIgVOUJWWNImJtGNkHEZ1kHFKOR4Ra7DWUBkDSuCtACRKeqIkRgmCwEgJchvys1opjKmCnbP1FKUlzzKWV8E6xXA0xpnQLNZLY5xXgfiMZ1J4jC2ReJJ+ihcKnSqW1lconQVTIX3I71a2QuDpdCMMEZWzTMwUbz0rShADwlm8sQyzKRGeXhKBElRFBSpHRQl+Jl373sZC+LGI/7+i1+vxyiuvsLq6ytHR0Q97ON+T6PV6/Pqv/zp//+//fTY2NmaPCyGI45hPfOITPPfcc/zKr/wKH330Eb/xG7/Bb/3Wb3F4ePhDHPUiftDxV//qX/1hD2ERi1jEIhbxmAjbe19vQgxauhk2MYjfHRAsW5yvkwE4SgejI8/B1ausbJ4g7S0hlEIJMdtoSYL3pcezlB/Q68YkaUxVTfDWgvMYBFrH2CzYw8VxineGKI1RzmFycM5gsgwpwyZJSIHupqhujK2KkJTIPWZS4JRAGkt5sIeIFJCGgrcUJGmKjDSUHlPk6LSHVAJPhOp28WWJ6nVZPbZJlU0ZjoYs9fuYzDA92kdK6K4f59nLT7Kx1OerH93hS3feYn/vHidOPssT5z7NwfJt7m7dZG88wXhHRwoKB5kPc1x6z5EHtGZdCtaVYl1ViOE9BqM9qqRDd2kZtXSCjqsYjg4ZT0fs7j5gZ3cbFWlWltc5fuw0myc3WTu+xvqxTdIkqcUOjiiK68KDYntrhyRJSdOUTpqQpEno7HQej6Q7zqgqg7UVxpSUZUZZ5igd8eF717h/fwtcxAvPfo5ujWX3TVeEg2w6YefBR8S7dzF5zsR7Lm0s4+/tMpoKkkSEjhhboXSM8wZrg/8pQtULjWD/IoC6M8M7R55nHOzvEEUJBkH20Zt0nv40avM0noCS9D6sj9CHUxf5sAg81lbk2ZTB4IhpPsU5wxMXLvDck89ycvM4cRzV1BCHdY7haMCN61e59tF1hqMR+/u7DI8OGR6NOBqWHByO+OQLh1gX8PqBmuECoUaKxq0GvKeyFdZWOGuRShMpBULhfUD1D0YTjob7HB3sM5qOqUpDkqRcvnSFS+fPsrq2xnK/T6fTqbtKwvF4/xCbOkuqidDFBSCVxlZFEJ/goZ6XIA4JApCHmNLvPxVxEYtYxCJ+mDHf8T8vXmgK4/NF6fni+nw0xe+qqh4RfbT/g0cL6U08jl7QJiDMk0Ga1z6OntAU7RsCRHu8zTg6nQ7D4ZCiKGaF/48TQsyTOdp2H81YHidsmBdmNKKTqqpmotRm3L1ebyZqmBeOzAtP5gUhzfeNWKLT6QBBfDCdTmcEi3m6yXx8XINYI5DIsozXX3+dJEm4cOHCY4kR3num0yn379/n+PHjj5Au2ue1Eaq056yhdDQCkOZnURR9F82jEVQ0VJCyLL/rmNrnBXhE4DF/fO3Pb392QygBZiKTtqDCGDMjtbTXZPs92gKgRrDR0FAaCkd7/trnuVk3bXuX5jw3Y20EIc28NmSUtj1Oe03Ok1UWsYhF/PhF0u1yuHWHwWHG2tkn2Dy+xvatj1g6/QQnNnpce+MbJMvHeO5Tn2Hro6vs377N6slzFAw5ODhidXMTUxn666dJlGNv9wCR9jh17jT3r71FdnjE+uoauJIbb75Gr9ejGI84feoE1udURuGAbDhlaW2N0f4euJKNEyfIJxMGBwd0V5Y4/8lXKMYD8lFO1OlTTKcI59m7eY18OAahgQpnLFVekGqBLQ0ORaQU0dIqiTYcicO6cNw07zz8+0xdeAY325M691D4IUTTPBH2rUIG8kBTEAeLkAqlFVJHSBn+HohYotKIqJuG+rnJMeMp2d4RvhKMDwb4ytf2r7W9KEGUENlAFvFSIrzFVCXOebSOArGybgARBKFHPg37WC8cUkuM81hnuf/ODTaffpJXfvEzDP7nP2IwsFgXKBymNJTGUlYVRRrR7UV0OkloSBIEQooPFjaiE7MExJFmMikocluTQyUIF6geNT0lWOY0BXuBwSP9w8yZp/57Y8EKQZkbRGHq6n5o+Ak2upLKWQrnmRZTtodTxP0DlAhUhThSJJGk043pdFOSfpc4jeivLtHpd5CJJkm6RN0OcSfBNhY8SiBlTZsVHu8kVZlj8pxqmlPkljKfUmQZ+ShjOp4wHWcMBmNGk5K8dFQeKgfGP1xDXjT3rLWoKIAsEKIRUDxsCAsClKCOqZ1msS4sTiUh8ZKuljxxbplPvHiWpK8psynOWMChlMRaRz4u2LqxTzEtSYQgSTRf+B/+byTLS+SHQ9756teRIhA+eloQqzAu2bBPRFj1M2FKI9IQMlhL+6AGEQ2lZCbWaO7b6vv3RlAajLAxzqKERslgPSQQOBsacYRvPru5V314PYZGnfoepybTCiVRur5Xp7Yq9BIcREvLeLOPVoqqrBCdZeLEI0RotrKVpfSB7NHrKYxO2RtnpE7Vtj2eSVkyGZQ1xXZEManQeGIVg3CIJMU6h/fBZrtT2zpZV69mL6hKxzTL6ShPp6aEjEYTnHFEnZiqsuRliS1KlJfoKBD2XD0BZempShNMioUPeVoZ1oLWoKIe+ThjkuUIV6ATgYoUWmos4KQi7XbpRZqlJEJjySdTMlNRVRYdKbxToVlMaoypybn4GU34exkL4cciAEiS5D9rQyGl5JOf/CQXL17k9ddf/wGM7PsbWmv+5t/8m/zDf/gPWV5efuyGGMJxr6+vs7a2xssvv8zf/bt/l3/8j/8xf/AHf8DBwcEPxI5nET+8OHXqFL/0S7/0wx7GIhaxiEUs4nEhApcv7I9EfZPe+GSqUOAnyD0EQbluCZtDnGR0e8DB5ruk/RWWorP4jgw33dbXCQRIXc5mpOj3jqNVhE6WCQJ8B6rGMFahO0FKgbUlyhq8KYmKCpNPcKXFFRVmMm0yF0H8kRry7X20jom6HaTTWCbYbEx1NECuqfD8piNAR6hEUw1GWJ2glQI0KhYYOwHj0L0eq5sn2DUlk7ykE3fJ8in5YIiSms7qJqfPnuO/WV3l6WN3+V/efcA7H/4pq2tnOH78eTY3n+Dm7e9we+cBk7KiJkUiCPYpxjoOs5KRUuwoy9k05pxydHzBclbQswMycZ+jaIne8ibV+gmysmQ8HpFXE0ajAdk45+69O0gVBC393hJrxzZYWV2mt9Sn2+sTx+Gc5rnB2ilFWRJPFVKHTe9wPGI6HeOcoSxysmnBZDKmmDrybIAUEcfWzqFEhI6ieqPrwBqsKRkc7pJt32R9fMBOYbhrHMc3Vli1Fj/MKb2nsoIkivHOUpZTOnEXJ0LXaVUUSBE6DUCCl7iqDBtmH1CkprI4m2NxFPeuI779+2z+wn+P1HUnrG9wmgCm3nETOqYI3RqHg13KouL8mXM8//TznDl1kjSNEL7Ce4ElIDAnowHj0QBXWbRQpElCv9+l3+9ga4/kw8M9Dg/36XY6GOcp8rJObCgqEexTPJDlBaPRiOlkgtYRHk+WTdnbvcd4PKUsHcPBiHv377B/MKYqQWrJ6vIqn3z2Exw/dhIdRS
BCEo0ZwrMJB8jZHFAnApyzQKDuWPewS9YJh/chSeSaJJV/CPxdxCIWsYgfx2iLPNrChnZheN7ypKEuNM9rYr6IPl/8bttpNMXqdkF6/ufz4o72WNs/b37W0CfmBSFRFM2K9m3hSFN8V0qRpinT6XRWfJ8XRMyLFf5T8/k4Gkq7IN+ej4YW0djPxHH8XUKb+c8wxszG3oy1Pa+NcKQ994+zEmnG8meRgJufta1kGlKJlJLJZEKv13tkfPBQXHH9+nWeffbZ2TgaAUjz3u08YXuszTgb+kkz9rbtS/tcNPYxzdfN58yvnbZ4qS08acbWns/meVpr0jR9RDTRvG9bTNO8Z/u8NNEIPZqv22KoZh7aVI7mONrv0whFmuuyLSpp1nb7fLbPefM+7dc16+7j8pSLWMQi/vxHMZ1ytHvAhRdfwZcD7l2/wcWXPoUi46PXX+fEhacRruT6a19HWEUnTdm+dZ2k36W3uoRKIzZOnWAyGrKzdcDJp59Huynv/Okf0+stc/HiWbZu3kcvb9CJYorhAceOb1IWI5ZPnWd8dAgqYbmfMN55QCfu0OlvMj7aJxscsrR5nPWTx9m+9i42m+KTHqvHT2LLAuegu7qOdyCFp3QhbWFNsHZQMuQxZByTrC1R2QEoEFIgfLDTeNjCE+rP4Xd2bQUjgm2Fa/4E1raiDhtsRBAPCZDeB1sXqYJ9hDfoOCXdWMJIi0xjVBREBq4weC/ACbLBGMq6+SRJKPMiUB9wuPGYo4+uY0uHVBIdx1RFgZCN8FDMGhi8f5gx8XWh3gmHMY7Sebz0vP2Vb/PT//0v8NxnLvHtP7pKXvrwWiuwNlAuKuOprMNa6HZjtAqWKc6GxhKtFLIbI6VBCcFE5Exzh7Ue7YMtbYXDAd7XxxFmdLbmZn/5asGMa86EtzVlhUaVQ2XDuWgsUsI+XOJxVDac79xY/BT8MAd/NINUyFo4ojXESrG20cUDRekxVSB9elGLeERotrIWqrLCGEtlAyW0tA7nwflg82Fna4f62Jokg5891v66EUY0B+1DD1CwMJEg6uYSGw453LMKT08JNnoRT1xa56nnzqG7HaqsCO/rQ8LGO6jGOXv3DsgmBUJ4lPdc+dnPsHb5CgI42rnP/Ws3iIQjxpMg0VKiRE3ywyKobRyFq4kctdii5reIurFtdiyinVhp3UNZh5m7r2jyLNTiqSZn2azf2Q+pP1MGeo6QYG2wPpESEB5rTFghwqOUQAiPywyVHKHTFDcuqIYTjIiCoCOKsLaiMg6tI3odSdpPmOplRnsjtKvopykZEYeZxRnoeIeWnkh6SgPeG7RUOC8xPvx+6XQlRVUijENEEd5JpFQoU6Gdo6oMsbNMKyizik5NF7HOE0cRKg6YHSkkxlZIpUFpdCxI4g7WG8pignEGM8mIYxXmSFZESUpRCFxVEscxUgoKYyisR1lLN1b0I4USjqKwjKY5SgexFrUISSuJTiNEZxlrTcg7fx/u9RbCj0UAPNZ38uPi8uXLPPXUU7zxxht/5gb0z0M8//zz/KN/9I9YWVn5z3p+k7D45Cc/yW/+5m/ye7/3e/zLf/kv+Y//8T/y4MGD7/NoF/HDCCklv/qrv8rm5uZiw72IRSxiET9iIQjWLs3GTwiPqjdLQojZhixsb8Mm1uIx3tTfa3whGVy9SWdtg6jboXv8OCKK8TiUhH5VcKo4ZEVKpBPgahJIpBGq3jBZB3EEtgq2pEqBFnhdQTdCmwJbVYgoQiYRKoqClQuCqCswvRyX5eg0gcrhEo0wMXYyxsQpqtsNNiNaUk6ndNdX8B7seIRcDpYvQgpUJ6WajNGyQ7K8xFK+xuHWDqqKSNMYY0uq6ZQonRD1+yytrfJir8epzXW+de0WX75+j9sf7pCsnuPExhX6S6fYO7zJ/tERk7zA4kOiwBM2uc5ijOBDm3NLCtaU5KRWnEbSV5ZT9oDJYI+JTlk5eQZ94TxGpxRVyWQy4XBwiDWOyWjCdDJlZ28H6QVKa6SSRFFKnGjiKCHpdFCRRMngoeqdo7IWa0Lnp7e1r6mXaKXRqlunXhRSS6QIfrzeeYaTAebgPt3De6wWBddyx13riCPFidUuwwdBPHEsgrUViCMZ1pkzlJUh0qGbIokjdBLVnUYe5wqwbkbuCBmFHG+g01vFOUm8fQ23fQt9+tLDBIyvEynOIXyganiaQpkin45Z7q/y7OWnOXvqDFEUUxYFzlqMC7j+vCw4GhxRliVxEoVOJ+GR4gKrS6tY69BRxMbaBtPJhP39XSpnGQwHmMoS1+elGfve/h43b99Eak3a7RFHEVmWcXQ0BAvdtEeadINFjK035cZy7/4We4f7rK6vk6QpzruQYPMO6iTaLPEi69RSnagRQoWuLm/xzmKcA+dxPngVe+dwlKGDqcl0+O8HFHMRi1jEIn40ok0AeJzAo52PaQtB2gXqdlG9/b7twnpbaDBP+5gXeszTNNqUkHmBSLvQ/rhjaygMjbiiLYZoci9VVc2K343tRntcj3vf9jHPj2ee7vE4qko7GhLsPOmj/dx5akhbODD/fSNGaIQAbXJFc/xte5x5IcDjYn48zjmKokBrTVmWM8uc9ucopbDWsr+/T57nM1FFWZaz+W6/pqF1tM9xWyzR/NsQMrTWs/E37922zGmvj+a92uSYhtohhCBJktlnNetJSkme54/M37yVUPv8t+kb82SPRtTRns9mfbZFJm0BR9u2pf3a5tqbtzvSWs9eY62d2bmUZfmIIKoZ88ddN4tYxCJ+vEJFMc+88jI7d24SLa1y6bln2L3+HYrCcObpFyinQ3Zu3MZbgy0yrJCcf/oZvFL01zdw2REfvPkG6domF559gfHBDts3bnD67EV6vQ77+7v0j28y2N1haX2DzdPHKAvHsbOX2Lp9g3R5jUhCPjpkaXkFW+Qc7dylKko2Lj1FlESMh0d0l9apkj5xp8/0cI8oifFKES316Syn5IOSsCX2eKUQWoLzqEjh05h4eRk/SWa1TUdDZn34O9kThB74IDBoiARSMrNTCQXrQORw3iM9M8oBgtCh35HEnRipFSKN6fQ76CjB2WBxUh2NMJOc/GBENSkxJlAgClcSdxK8sZS5w5WO6cGoFkV4qrxCRcGi1LtQFA/b1XqHKwJTwSMwtsIJTek8xkHlHOMH+7zxJ2/yzKee52BnwAdvbWGtxwmB92FPjbNkzmNKT55XLPdTok6EVBJq6xipJUlHB2GFSoM1SWYxxiE8RELjApMB62t6Cb5RV9QsiDp8GHiAfMiZDMfXOTZHaL7A+5nQxhNsZgOlIvyv8UcRws8oos55vHFQeSSCqfdICdNphbG1leuMPvJQuxE+N6BcBB47a/ZoiKCikSnMcgKNpU1gp9a2N632L988SYRzGV4vkA6MCM93XpBKSVcJlmLJ+TN9PvvzryC0IB9PMEUJQuKsgdqWqJxWDPcHlJMcby3KwerxHp/8b38BbIb1HXY/ep9icERcUyo6WhJrga7FFMGqxWOdQ+uHNoRKhBzWQ9FyM1+t81YLkGSDOxYz6U54fm3d6+u8yyP3s
PiaviOwdYONr6kgs99PSuJq8Yl0IZeqpQxiGRfGLFVoYjLGoUXENCuwSY4XAWtirSHudQL9xVuMV9iy5PTmKr3IMx4FGsZKN6XMLFqGOfZKIqxESo31kqwsEd4RqUApKiuHkxaTVahEI2SEqwzSOxyCUeYQWNIkoqgMalSQkyNxSKGQePKsREmP0hapI4wPJA4dS3TSR0QJo8EEbwtsWSGkwo5GpN0uJkrIpjlCGkZlyBGvdCM6WoEpqcqcwbjE6RglDEJAZ6kPzmJdhbAKZyqkimgEY9/rWAg/FgE83Jz950S32+Vzn/scv/3bv02WZd/nkX3/QinFP/gH/4Bz5879F71ea80v/dIv8bnPfY6vf/3r/OZv/ib/9t/+2z/Xc7KI745Lly7x1/7aX3ukS2YRi1jEIhbxIxJ1O4GoFe+KGRyj3rA0RK4g/gjdAGJWcNcIJIriwLL7zrv/P/b+7EeS7M7vBT9nscWX2DNyz8raF7JYZJEsLt1ssUlRbHWrdVsDqQFdQJoHQQ0I0IOg/0APgiRAD4IgQMA8zIMwuBhgejBL3zvQxWjUq0Q2xSabbBaLtVdlVq4RGasvtpxtHo4dD8uoLG7NYpFs/6GiMjzc3OzYOebudn7n8/t+yVdW0WVBvrIWp+zesXV0h/HxrbgXrXAhTkO9EJEMFwKCI3QTkeA9QmrIIqDgqjomPfI8Vqa4KEkagse1ELxDDXNsNcfWczq+HlmOcPNj2sM9MgKZXkEqRQgtQkhUkeHnFXY6JVtZQQYRJ2vlAFvN0cMVxhub1LOKvRt3OX/+DKX0BFNhZhPwILMcqTXnzm7zpfVVHjpzjf/9W7f4+v7r7O29w9mzT/Do5U+zvXWHvf3r7O3fozG2q7iI/exDoDGBVsAMz13leF1L1hA8VAjOZpqRq2hvv0G7e42pKjAr66yfe4jHH/kESmfMqim3bt/m6OgYa2z01HQa286oK7o5rEZ0GQYlI8yhpEaJDFT0KhVKkQmJDzZCBUIggsM1XULF1uijHc4f32Ywn3K9snyr8ey4gJCCJy9v4yZz9uctV7OMcyOHSjMwQZRYFbGSWgDTowNW9RlkGSGe4Dzemu4aA+EVWmW08wk+m6HzIaUM+OsvwvmrSJVhg+smeVHFxHeqGzIItNIMBwUXLpzn4tYlzm2dxTnD7v5trLE0JlbDtG1D61rqJlYYF1pjsMhBSZ6d5czqZrRzyTVZrmlsze7eDsaaCIC0bTcXUDGRARwcHPHya6/ikZw5c57VlS188Gg9ZGWsadqa8XiV8coao8ks2hk5ODg85LU3X6EoS1bX1lAqKqHUbYUxLaa1WBslcmO1l0IIUDLq9QQC3ncVQCm5lKqPQojSrV0lDqGTTX//P2mWsYxlLOMDi9MqAf2/Ae/K4/QX4x8EfZzed9ou7etB9iQPgh0epLrxICWQPszRf+33a1s6v6QekeCBvpLGgwCNfhtPgx/9tqXF/L5yax+4SG0ty5KiKJjP5++yoknndRq+SSop/fNLUEPfTqWvrJEAiweBNaehn9PqGH2bkT7cYoyhbdt3XSfpvNM5zmYzJpPJAg55kOJJGocsy94TYukDGKmdfWWNZHOSzrev7gIn0EYf1Ejj0++T/lgnOCQBJf3zSjY0ab9JiaPfhwlk6UMg/fdCei8lSKZvjZT6KB0j9UtSO+lfh/2fFH31mgSFpGP2oawfpPiyjGUs4+c7dKZ486UXufLhj6NFxTuv/AWr249w7olLTO68ye5b13Ctp3aWslxlOMqZtTUXrj7K3rVXeevV64zPX+ahJx7i7e9+m/b4mKuPPMx0NuN4DhsXL3F05x02LlwgeIvxkBUZt99+nWK8hmvmgGOgFM1sishzRmfPsb51nqPDHeqZI8tHkOVxwTXUjDY2mU0P2bzyMKPRmKZKn+/RckR1igFBKbyQjDbWyFbH2IMcmQmoRVf4IDsQpPt+XgCrkmR/kSIWxIfOsiOqGMTjsBD9IFkyFDnZqEQVmnxYoMshpq1xxzOq/WN81WLmFlMZhNdI4XAhgBe0s5aFCokzmDpQDAqqqoEgsa1DSh3bQgApCT622blYXOQJBCmw3tM6R2MdDknrAt/7s1cYba7zoV96nsnhV3jn2iHRvdQTOruYYCK2YZzB2sCK8wyHEWRZQJa5jj9Vl+OiYTaPRTmhQztigUWv7KL7fUHfnGJngwgLhYnQgzHS62OBRnqWBQRyAhsIZAI3vI9AjugYhSCYTJvFHD+EEyWXBfSRAJT7wA6J7+b/gRPYIwQBoldwI6IKRoinvNhWptMUghA8CghCYNN2HTAiAgxEYCgC51YLrj60xiMffhgvAmbe4JyP110Hs4QQ1VzmxxXVcUs1awitJVPw6KeeIBTQHr5Bnq/wxp//GdW8IhOBQglUJ7SBT4VLIvVA9/2vToYnvlV6966+y4PEHKiIic8IRnWvEt25QnyPxKItF6/VlEfp8qc+BKIdtcT7E6gmXRzRLsgv3mRCCoSO9zxCi+T0gmgdSS3Et54gRKeu06C1Is8kIVh0Ocb4QOYcKtMYawlSkWUC6w0rhSZIydwIMhEQTmAQCBxFodAyp6oqTCUoh4os0wgBdWuRskUEcNZjfcBayFSIEIkHbx1KES2dgkMpGd+nQZB5RyF1tP0OAe1j37XWRstj2+UodY4PEjedEaRi1kJbz5jOGlYGGpUPUWqELAbcO6qZ1ZZiIAkhWh+6qiIrMpyHqrbg58juXpYfAHn/OLEEP5YB3D/J/WHiC1/4AuPx+OcacvjSl77Er/3ar/2l9iGEYGtri9/4jd/gM5/5DP/oH/0j/sN/+A/84R/+IbPZbDlB+zmPsiz57d/+bT796U8/sJpoGctYxjKW8cGGAFSnISBFQAkX1ThEJKZ9AOMjvZ8W15PcYxAnSgHOK6Y35+y/8l0GK2sonSGKgnEzo9y7iauniMEIvIPgEVkO1kZbCqHAW4T3BB9VD1AQnAHrQMj4Nx8INYhC46sGQoQEPJBtDCjWV5jd3CHLMrJiSJCC1hv8fIabHSNzjSjyOJmyBlUOCM4SWoczBq00wQWkKlCZx07nZKMBa5tbHO3scXQ44dz5swTTENoWLxpcPYUgEVqQD1b48GOPohGsvb7LN6qMN+78BSuHb7F17mkeeegFtrdvc3Bwi3v7B9R1jfNd8gBwoVtMCIHgA1PgrhMMdWAlBNYCrEiLljW7e/u889Zb6DJnvLbK+vpZHnviGc48/3EQitoYppNjDg8nTKZT6qbCWYdxFiFkrA4KnYWIaOgMfpDBEzKPEp3aBQbtLLqZkE92GM8P0W3DvcbxtbnnHeOpusTAc1fOcK7Mmd/Y54Iu2dSeYKHIB4xHq5hmCqqgyPI4aXYW2zbU+5Lh5hqyKAghYBqDVFE3VIiYGPLB44xF+oZCK/y9d3BH+8iNpCbWkyoNHu9trHCScPbMNp/4yCcRHlrbcuPuLeqmpjUW50z0Oe0UNYyxVPOaalZTtRU+eGSQCCXR0pPh4/vCVQihaU3FZFrhnAcZ6N/qOB+o5w3TyZRRWZFnc4piyHAw
JhSBpqlwzuKvPsV4vMZkckRrW8p8gHWwu7eL845iUACxgriua2azOXv7e9y7t0PbWqRQIH0HbOlFlYkQAoEEVJcgilKcdMkHJSMkMv85nossYxnLWMYPE/3F9P7icV/h47T1xQ/KQ/TBjL4iw4PAA2CxoN1fiD89P+4/7i90O+ew1i4sQU6DI/2F8hTpfNKiuveePM+ZzWbvqX7Rb2u/Le9lJfIga5X0uL+fwWBAXdf3nXe/70+34fRx4ARMOK3g8CD4JY1Fev5B53u67/vqIn2ApGmad0EI/d+11oxGo/te21e/SH2UYIQEcvTb1leZ6cMcCT5JYENfKSTtI8Ev6XGCQ/rXdOqz/rj0VT+klAuFkgRn9Lfrq5YkFZLUB2kffYgnQSqpOC71n5SS8XhM0zQk9ZL+tZDgmP54PAjYSW1NiiIhBPI8X/RzAmb60NQyF7WMZfziRls1XH7sQxxcewXTOi48+RRCCd74xn/HHU9oqwZfrrN99WHMbIqRmnNbm7zy1T9i3gTOP/MR1tbHvPTNb7C5usaZK5fYuX2TgGAwHNMcSbbOX+XwYJe1zW2Oj/aRtKysjalmE8qypCxXmM0m5CtrqDynGJXs7dwmHwzIypxyOORw5zYBSVaWeF1w+flfZr5znbe+8yqmtcggQOiITKTFYqXQq6uMLl8lG6/h1lbQhURMY3V8EAKl4z2A9yEqWAiJ7yxdBNFmAtEtOBPTMYgIFYRuoT/LNFLFubOUkrZqyQYlalyAkpjZMe1khp012EkdF/KNx5iokCGUXBQZALHoQEVFVdt6bOsInfWGlAK86yxKfIQghMJ1fiVRwSMKwhpjI8xBoPFgg2Q+a/jKH3yD/+n/+Ot88jc+Tf1//2/cvTuPqrGhK1bxEmxAacF83mKto2ksK+sFeZ6jdDzXQCAbZKyXOSpT+BCYVnGR2/mk+BGSOATQLeuHkDiJ+0N0wEvqBxJU0eUphOigjcUmhA6tSEUZdEodIKGzguk6NQICIeU94qw/BNHZzaR7kxP9D0/sS5/uGzmBPjxRcUL2nomnlk4sdCbPSegkQhIesF4slGxFiJmkgQislpqrF9d49JmLjFYGIAXVZBK/66Ve5Gk8AWc8zbyhmTVMjmZU8xolYPPhbcpHHmbn5h6DzZfwcotr3/0u7WxCqSVDLdAyxCYKoqoHAiVFhEBSrgsIMgFRLK43IRYjE8dSnIyTVBGkSpxGBEciFCODXKiDdGlRRFcYl9R2TqCfTlO5A0NEVwAVr3VJCJ2BtgclFGp1iLAON6tijtMFnI2ARaYCIpM0FgaZjLCPB5VlhJDTtlOyQiDRuMYwbw2uqQkIcikpVwZIIcmVx3mBCRLqFgG0rcV7wXg0IKCo6jkyQF1bhAwIGe1ZJB6pBDKeUNQL0gKhM7T0tJXHuJbBSJDpnHo2RwpFMAJUoFDRZkiqHBBYY6jnlsm0waJpnUPjKYhFVPOqxjUtjkCOZxAMGYE8OHIpsG3FXGRRJboxyLaNeeUFwfaTiyX48TMeTdP8QFnHn0T0J5c/THz0ox/lypUr7O7uvo+tev9ic3OTf/pP/ynr6+s/kUmUEILNzU3+xt/4G3zxi1/kq1/9Kv/+3/97vva1r3Hnzp13JTKW8bMfQgi++MUv8s/+2T+jLMsPujnLWMYylrGMB4ZIAoVIHFKE6Hsq48TXOU9AkaaEiaTvk+yLSgajOH79LsONl8lHY0Zntlg7uIuaT+KEyhhClsVFfQQ4R0CBsOBtZwoaJQKDjNKkIXiwLUIofFURnINWE5xHaE3QClEWKK1Al+TrI8xxjRxAsBZyjWaENwY3m5EVA0QItNMpxdoa0uQ082M4nMDKEJmVaJ0htMSHKa6uKcYrrG1vsX/zDq2zjMYjgo22JQSHLouoRtJVEzx88TzHTYM/FtTecXP/mMO3/wfjcoPNratcOv8hts/NOdi7zr29Aw5n86je0Ovh1geUAGMDc+s5EAIpoFSS9SAohEALwBnmR/tM9u/x9mvfZbw65sKlC1x86Gk2trZZHW1SVSPKvCAflkipCAishbqusNZiTIu1Bm8Dtm2xpsJPD9FH91DVIaWZo5yldZ6bJnCt8tw0jml0EUEIOD8quDIqENf3GaMRwLGVrG3kPPzQOaSpqSfHlBvbi8oYrbJY2dO2uLpFZVk3uVZdMqVLqkiBENEmKBsoMiXx7Zxm9xpy/QzQLW4trstoZxOrZARraxtcPHuRvYN9dvZ3sa3Di/6inohVC94xnc64vXOzk22vaG1c3Mh0QZZlDAcjNtY2WV1ZIytynO2gEW/JZPQETuM4LHLG4yFFlmPdnNk0YM2MLCvi+SjP6uqYPL/A+bPnadsW42sUmiwvGAxKBApvTyp3lFAokTObNbx97Rq7945PzqMraVkkpcTJu1QKsej3TqS0qwQLzGvzl/0QWcYylrGMn9noKz6cBiv6v/ef7y/0f7/9nj5Gf/8PAgv6/z4I/Di9/7QQ771fWKT0F//TYvmDFsj7QEFa5E8L8X3IIG1/Gmh4UH89qD/6AMxpNYm0CD+bze5b4D99jqdBkj5Y0O+nvhpFXzHjNKzRhytOq5L095t+71uCPAgMqqrqXZBKAh4++9nP8sILL1AUxbssV4wxDAaDBfDTH6cQwrvULxLI0AdE+oot6bhJ6aJ/fg9SMEnb91VaUhu897Rtu1DISNdXH/Bo23axj9P2QH3oI51zfxz6EEhqf19lJAEt6VyS6kl63Ad8+rBJXx1Fa71QJemrlKQ+Ncbc935ZxjKW8YsZOtNcf/Vl1s5d5OrTl9h75y327+yQec98bhmfvcT2lYvs3b3JaO0MY+V5/dvfRparfOj5j7B/+zrvvPg6585epMgLpkd7jNbWcbZlc/sMQQhmsylnr1zm6N4u47VVQlPjXGDtzDbBeaxtESJQjEdU8wm+EYzXVwhSIJ1l78YbSJ1jTGDtyiOsX7zC7Ve+yfSoZn3rAt4LMiStj5+NwQXUoFOnyAryjbPkwxKT5aiswIfO3s0HnLURtBAyggdCEtVc4wI0UqBEnO8GQbc4DnSqE0IKnHcIqcnKHFUo8jxDr5bockBY7J+48B0C3seFfKk6JQAfC1finLy7H/AO1zjoAIPQQR8hnEAUgaSS4Lv7hjintzYQdEbrPU3wVC5Q+4D3ktYLjvemfP2/v8Tf+D98lk9+ecqf/uc/Z3en7ixfPOBxLqaXlBLUrae1Da31rK0FRuMiKhNIhZABrRXDlZLgA3nZMJ86JrXFWk9PCKJTeQi9IosFBhBVO3r3OIs7qMX9Rm9fSQ2C9L16Am1Asorp9typoaQIvXyc70AT34EHscAjLACUBIOc5O1OXh065YqQGoK4X/WCE9UP36mQpH0LIBcCTSBHMNCCC1sDnvnIZTbPbhBEwDuDNw1SaZLCjDMO33qMMZhZQ9u0HO5NqGYNWgrGm2MmgyF/8e2XefyplqzMqF+/QXt0QCYFQykZyGjxku4pfIiATwReZASafFfwkrIhi+vNdyq/CVYO0F2TiGg/lPpwoRqX1FmEjCiMSP0WOjgk9cv
JPXGCgoKgU3CJAIiKb1G8MXHMW4t3gmG2Tt1E5WIpQXlw1lEOM0ShcbogVxlBCFzbIrIch2Je1wwGBUrnhHlNIQITwCHJtWZlmINUeOtojaUxUTFlvDKkaj1NXSGsw2pBphVWSmZzi2lahASdeYqVYQRRmoosy7A2qp9IAaujAW3T0s6nGA9Na5EiwziPC6BEQGfgvUMtoLRA0waa2qAzhVaK9XKMdA1DJZBKcdw4gjEMssB4JMkzjTMtpYYgPMdzS+0cXjiU8gyUj2DZ+xBL8ONnPN5rgvqTjizLfiT/SCkln//85/nmN7/5Prbq/QkhBF/+8pf51Kc+9ROdRPWrFn7lV36FF154gT/+4z/m937v9/gv/+W/8Oqrr/7EjrWM9z+ee+45/s2/+TecO3fug27KMpaxjGUs4z1iMfkLsRogykJ2MtpxXt9NOtPkJ07+HB4VIl2fpCQFAjuFw1feYLCxyaaC8mAX4QOyiCoPCE+wHm/qiJxI0al6xPakSaFUGXhPyAVQRPuPugYh8E0bJ3Yh2lborESqWEGQjYa0s4bqcEY5GiB1BiJKZPqqJhiD0jmmnlN0k+9qNqcYDvHGIQpN8I5gDTrLsNYR2oq1rS1mx8cc7x8xXllDl3mk3XOJzHKsqTFtS/AWiebyygjPIWq0zleuae7uHzFr9pndPqTcLVldOcv29qOcPQOT2V329vY4ODxi3prowYvAdDBICDFBE6srHI1KCxuCHEGOZJDlnN1YA+Gw84rJ0Q77x3vcvXuHw1s32RKRyLdFiSoLNja2GBRDjDcRBvERlGnmx7T791hrGvCWisAdD7s2sG88x8bRBjDddZALWC8zHj4zorh1ACYwCQEZLFc3BE89tkEhDdXxcay8EAqBQgooiiG4FtfW2LomG5QIJQl4rPEoEWEM37ZIkZPpjMFgRDEcEKSivneL8NjzCJVFHZrQUf6hgzpCXIiyzjNv5ty6fZPjw4MoFd/dtyudRwBCCJzz1POKg4MDdnbuMK/mGGMxxkZbHy0ZDIY8dPlhhoMh5ahECIXOBFoKtBBoJbA2whfj4YCNtS3WVlbRmUQSsKbBO4vo3gtSKMbjNRAxYeO9hSCQQnXSrgLvHM672JbWRiClqTDGdMmopTreMpaxjGV8v+gvhvcX9dNzp+1Z0uL3g9Qk+tGHHvrwx2moow97PMhipf/a/j7SYnlRFPfBHKnNfTuR/jH7yiV9GCPLMrIsu+/8HgSrnFZCeVC/9eNBljkPsrBZJNEf8DiBAgl8OH2sB8Es/fFKYEUfCDk9nqeBj9PqI/1j9vvwQUVIWZaxubnJc889x/r6+n0qGafPEVjAD/1rLZ1Pvw19EKLfP0nZItm3JPDh9FjGewN7HyyS+i7BGamNRae01odtkhpHCufc4nX9nyzLFjY4fSjEGLNQ5EjXex/WqOv6PjWQ/rFS/yQQxVq7+NFaL+CU1M7+9d3vr6Qy0u+nZSxjGb+40dQNQQwYr25w85VXmB3skxOYzi1nHnuK8VBx+9o1ti5fwU2PeOfGTS499TTDYcmb3/5ThCx46OFH2Ll9k0rnZDiEKrl49QpH9+5BNiQfr7Jze4fBoCS0LcHDaOUMLrRUs2NWNtZYO3+Og9u3KEYrWCBf28JXB0xmc9bOXmR/b5+zTz2DouEb/+v/DYp1tNbcfvXlWEzQg/ZjpX2XFwngWnuiIpBFdYPWO5AglEIGHUEPH5VCBN1nfyQHsD4g0AuAIx3HBx+VP13AiQh0FGWOLDK0VuBrgouKHm7eooVi3ji8cYuiC4hQA/fd25zkMaw7KYyw1qMUOEdcaF9ACiKqbwaHD9BYT2sDtfO0LjC3AevBiUAbPEFIvv2N17n05CM8+5EP84Lx/NHvfYPJ1MZ9deYqwXdWHEikDEwmNfN5y3iUM14bMB5laC2wJn4Xj9aGDFZzJkXD5PZxVGRdKGREW1kposVq/HvqgwDBI5G9YovuHoQT25fTwAc8+L4K0UEJ3Xep692/QBrXqNqRNCZ8b38LYIPQk644aavrciWIqOrhg+iNZ1cm0tkChUg0IKDTiO0W9BEUUrJSZlx9eJWHH91msL6KaRqSqK/W3f2Bj/kd27Y08wZTW4K17N2ZMNmfkwnBxtk17hQ5124c8qhQrO4cY8Mb1CZwvLdPISXDTDLQoFRU9QiomMuJCbSu3V3+0jtUd78jRMyrpL5I0Ivo+lkIUDKqaUgFhHiPjRSdQk28xtNrTtRCRKc8IpA6qhSngp0Q0z3dIQNSRRgEH6K/to9WyzjBfP8AgkJ2OVctIFvJMFmJAYqiREuJIqC0pm4MjZmhyxwncoITDFeHWJPjpnOkGGA9NASU8xjjqKqW1ZUxQnqqVhJ0QEuD8BZrWiCnMZ7Wxh9BhGpdaRgMSpqQ07SGxkIhARymqfAudJ8Vnro2+DZau2gt0SrCNsVwAB5MU1MZT9VagumAq6FiXEpEyKLKjgtcuHCe470jVnPLYLjCbDJjNJCUg5KjVjAxDpxlbixDFSjKbpzlA95Lf8lY3kEuA3i3HOYPE3/9r/91/t2/+3fvU4vevzhz5gy/+Zu/yfb29vt6nLIs+fKXv8xnP/tZ/sE/+Af8/u//Pr/7u7/Ld77znXdVbSzjZyueeeYZ/u2//bc8++yzH3RTlrGMZSxjGd8nTuh/gRYBJbqJYRBRhjEIXEjOnon77ybAXbWIIHl/xt+qu4aD732XJ0c5uW9RukDIbiKDIJU6BBei6odtkZkCrRFKI7P4b/Ae4TzBW5TKYDDE1hXeOoJ3CFkg0Whpo2apiPR+NsyZHE8phlmsWHQ+qkkYh5/XyPEI1zR4YwCJMxZkFLK0kyN8oVG6jPYzMqOdHpGVY7YuXGT3+k0mxxM2z51DKIltK5q6AqERWRYntU3NyniVR5RmcLxLM5rzlSPJrCxprKH2NfODa9y7d52VcpXx+jkevvgEVy43HB7vcXR4zPFkGhf6fegkVwXOgxbQ2Jg6UE7gpcdqgVPgZzWXLl5gMBjTNIJpPQOv2Nw6T9a2KGMIdUOoW+azGj8okFmGc4K6qZjXDY01eOu4JQS1F0xaS+McxsdkSyBWzgCMpGItz7i0PaDYr3ln3rImFRWOZzYlH390xFrmMUcHNM2ccmWVXGm8Nyg0UgakzBHOgTUEY1G6ACGp6jkDrXGuxbQVSgiGoxGDooxqLwh0MyWYFlIVCWJRrYJX4A2+dRwfHnPz5k2uvfky+/dugBeU5Qq6GJAVI3Sek+kCRGA2q8EJirzEu4C3cxpb07QOaz1N03LmzAznLAKPlholIAiHC4bOPhcAXWjGKwNGoxKpVNe8mAqICyddUsrGN1BAEFxcnDHeY21cvKnqObOqYjadc3C8x/7BPvv7+8ym9f35tWUsYxnLWMYDQy4SvydgQ4rTC+999YQHbfugvyclij640T/m6cXt1KY+kHD63/R7WlRPqganlR7SonZaWD8NLvT3lxbi+205Db30VSlOAxGn7XD6+z6t9nAaOulbb5w+19N93D9OH4JJ+zhtmZLak47df3wa5HlQ9MGFfvTPtf
+7lJKVlRWeffZZzp07d5+aCkQooQ9UpOP3IYkUdV0vIIz+cfrKHH0wI1m+JMWMBDckq5N0nNOwSx8WSu0wxtzXnrZt3zW+p/uuv33fLia1JfVFH2Dpj1FqR9pGKUXbtpRluYB+EvSR2p+AF2Ch4pFe31f56G+XroUEgfw0igGXsYxlfDChM41pJuzefAvRTPHGcWBbVta3qY52OLo9pxyvsfPG6+TjNT70y5/j4J3XufbW66xffIgLD1/l9T/7KoIc4Vo2Ll9mvLrK9PiQlTPnONzfxwTF6soqpqkJEnSWs3fnbZwNrF95mGwoOdi9w9rFK8xnR2xsnOfg1ttkwzVWNs6wd+cOVz/+Ofavv8zRzj7nH3mKu9feYjptuPTY40xevRk/I4VHCUGwDtsKgnOI1rBy/hKqVIRMoXJF8AGBjEU5NqkwnCzQQ2dB6nv3JD5alsoFTOC6hdKA0hn5uEAVkiAgKwcIKSM+oTR2PsXMDM2s7qxiVKf+cT/E2EknxGOpEC1TQtSOlTKWDLkQgQAfQiy48QIXPM57nI+wgglQOcvcgvXR5sWFgAsOg8Q4T9M2/K//zz9k+8Jvcem5D/Gxwyl//scvczyJk2tPLJAIQeKFRziB8tE+49i2zCYNk7FmdXXIYJgjlUYqkEFTtRW1C7gAPk7U43q9FGRaMhxkCAHOe5rG01qL8N3cXkTbZNHBLJ2IRLR5EQHo+u5UpIIr0T1Idyzh1L8QYYK4TVioiHTD3R2Mk9RdiKoYQt6/EyHS/4i2sYjOOubEtqS7PJDBo4nQgyIqNwwyyfmtgotXtjh/ZQulBb5tO7USjxSaBN+YxjA7rpgczmjrFltbqnmDqSyr45JiEGi3N/jmy7cJQZDdPUbpW+we7nH10qPsXL9OLmCoPFpKgg8orTrIxcdrkggECR9zRUjZXY5RfkMQC3m6Iergm3iOSb0F4RdQzslYpIxpj58JJ+8xIU7shk8ypUkaV8Vitdgp0V7bR2sjIU62VwiMcygtCcIy3ljBDQrQGYUuCcHjXQRZXHAoESi0pmkMQToGK2vRAtt7CiXQZUFlZSwWsgGkZLwyQuUagycIhfI6qqJUExyao+MWIRXOGEKATEebl7quUSICJ9JKCmUp8gzjBPO5QWtBMSgwTcxbSaUYjkoGKytU0wnj8QCVD5kfHhOiABBBaYJw5EpRhEA7rzAIamsoZEDZG6wUmmGRIzLN2sY6tpkxFznTEChHw5iDm84JpiXoDJGUW37CsQQ/lgH86IofAC+88ALD4ZD5fP4+ter9iWeffZbf+I3f+KlJJq6srPDpT3+a559/nn/8j/8x//2//3f+03/6T/zRH/0RVVXdN0Fdxgcfjz32GP/6X/9rvvCFL/zIMNQylrGMZSzjpx2pJCHWkEghYwVDN9lxXXVDjDhp8d1fJII4HQiLyZJAgJMMTGAtWLSUZHknJZ5lhKrCt02UwfQOobK4IN62SGKCQWQZoTUIpSDTYAMogcBB083dshyEQJVZtIbxPm4vBEpnZMOCatYwGkkUMZmgtCI0LQwHCATtbI7KClSu8a2DQqCVwhmLl5bQGrw15OUq1jYMh0PWz21zuLNLkWcUwwEOwfFxxexoymzWYlyDLiXlaEBZllzaPM+vl1MuFvv81+tT7gaFBWrrMDJw6I7Z3zki3ynYWj/D9vmLXLnwBJaG/f0d7u7sMpnNmVcNLk0wAS1ErNzpJpEDY2irQ+5huN1U+HJINiqZTRpyD1vDAaNyyFCMGMoMJWK1D9ZhGsO9tuHt+YS6jVU2znUyrgFMPASaCPiUUjAuCgohWRsrxMEc13iGQjLIA588p3jifMG4kLh6iqlbbICyGMVKJMAF21VaBGSWgbUd0CFRWmFci/YBR2R6gm2oDvfQwiO3t5GFxk/vIdoairKTj3WLyV7AEYKjaWsOjva4c+sGd2+9yf7ObbyNizZearwqEFnJMF9FlzlKDjDWUKoz6JEh15N4lfspOEtwgdnxlP39eyA83sPB4QFtVWGamta5mFgBTGto5nOqaoaQOiafrMM6h7EW6xzWgXcW61uc9bRtB3u0DXVVM6/mVHVNVVXMq5qmbTHGvS+T2mUsYxnL+EWM0+BF+tt7Pf/jKMYmCONBx0rPPyj6gER/mwQXpMXsvtLHg+AMiPmofvvvk6nuwRWngZTTQMxpECO9ph+nbWAepJLRB1T6yhoPAlP6qg3p76dhnRDCfWop/b/3VUBO5+QetOjf75cELvT7+XSkbVP7hBB89KMf5dFHH11sn2XZfc/3YYsEKZxW50hwxGl1Eu/94jWn+7lvGVNVFUnJJdmyJMWO/pietqBJ55J+0nPp+QeNQwIq+tYt6TpIfZyOkx4ndRkhot1PgpSklAvFjjQG/bY0TbPozz7kkx4nFZY+5NN/X6Q2pvNcxjKW8YsdUilG45JMWGatJwxWuHz+LId3b5AXa+R5xs7dfc4/9RSXH7rAje98i9ms5ZFPfhYlDK9/8+vofAWVBc5duYp3c44PdtFZzs6d24xW13HNnGZmGI5GHNw7YH54CAoe+egLWDOlOppTDFeompbR+ia3Xn2RzYefop7s0cwlj37is1z75n/jeH/GxqXL3H7jZYrRGpvnL3H3zbdoa0shNNFCRCE65Q6pJWQFxdmHUeUeWVHirUXKuIosAL/Q76BTRI194r1HqJ78hAJFLHaJn81RvZVOxUCIgMoUqsxBdRZkDmRoaadz2tpgWo+zScZAojNJM2/i2ngHlkTbEcdwOGA+ryPIkr5aO5CgNTYu1PoIc3gfAQ/jo+Vt4wJm8RgcASsCPgiMgyA1Nhh29yb83u99jb//P3+eJz/7PLa1fP2PXqZtfVR8DZ0qRkjKFqCEWrSjnhtMdUyeK8bjIcOVnLb1HB5FKx84sTWmg1ZsiP+WhWasJW3p2T+qaFofrYrTdRmiBa0X/XsZSKUgJFCmAz5CN3ahE+kI3Xgu8nUJYiDmZDx+UZBzgiZ0KEHiORD4LmEUQmff0j0ZRJdLEiBFpybTvycMASkiGCERiCBQRFuXtXHO5saA85fXOXtpG+ttVAcJEQaSMsIWxjjqWcPB7pTDexOmsxacxVmBFp4z60OKHLY//Bj/5z94mVIGtISDoymjQY7OBO20op5UrBcwyjVqAbAIkl2yFyHaCEG0NfIRZOk08XoKJrGPJLJTaOnul4TAde8JQqAT+WAxTERYIwQXzy1ERZEQItwjRcxZyTTGIh4niKRqJ6I6z4LiCQidTkQQLOAcXmvAs7K1xWEwDFdGuC4Z5wPUjcfOK7K8wAgFwjMcDGhNgzMOjSPTino+x3qJkBqVdXYzItD4eJ9Z6IIWw2hjzL51HN47QGoFMlq4FFJQZh6lc6zzTCcNNtQMi5xBWURlFCFprcV5S16qrlhNI4PHGUtTN0RLZkU1m+NCwAK2U1MZlRlDBXkuOJgb5nWgLBXOeczcIDJFM5uROU9QmspJ5rXBOoczUdmnyCRZUVLmqrv+vs8XxY8ZS/BjGcCPp/gxHA55+umnf67sXoqi4Hd+53dYX1//qR5Xi
Chxur29zd/5O3+H3/qt3+Ktt97i937v9/j93/993nzzTW7dusXR0dESAvkA4+GHH+Zf/at/xd/+2397CX0sYxnLWMbPRZwQ/TEE3ieJyKROcOL9mah3vygjONH6SHuQSvHQQxcZSYfWCqE1wRj8bBZtWqyJkx2pCc4g8J004gwhJa5pOzI+RBBERfsVoTVyMETkFte0BKE6JYXuBxErU4wlz3Jm82MqAlle4H2IqiHWEqxH6gwzm6M3y5hMbhsIY9ACJQfYpiFYg5I6JkyEoJ0es7o2RgCT40P0cMi8brj2vTepJ5ZZUzBxOa1wZNkB47Fnc0uxsT7mhStnuLRW8MdvHfJnOwYhNbmE1ltaBS0tNw9vcPfgDqv5gI31LTbOrHP5o5ewUnBv/x579/a5t3/AfDbHer+oLsiBMzIKR7T1nICkdYbqqAEfqCrL3uSYVgrWhERIgdKCjCgbaoDd4KhdYNJ6NEnV40SWVAK5EqxohZIZ3hrWBFydZgy8gExzacPx1PmMjVGGJOCqhmBbjPcMxmPKcojHI1FR3dJ6ZJmdLMAYA96hhIxSkaHGOIv3LaPxKuPBmLLIEcHhWsnkYBd1+KcMRi8g/HqESnyXcAmG4KO3bNvMqapj5tMpzjqc8QgcFksbWoyfc6yOKYoRxWCFIh8jlMf7WKGihIwewCFQ1547t+8yn08YjIcoITk6mnFvf8Ksthi3IE84ODjmpVdf4tbO7Vida5uoGmIM1rj4r/O4DgJxLuCcx/uA9R7vUpVSeF8msctYxjKW8VcpTsMV6W/9eK/ClvfKL/TBhP4iflrgPx1pETxtc3pRv7+Ynhau+yBKf0G/r+pw+nzSdv1F79MQxWkgpL/NadWOB0EpfYUPiMU6V69e5datWxweHr5r/6eBj/5++4v2D1Ld6EMafZWTfpveqy8eBH6cVjHpH/tBQMqDxl9rfV8BUn+fyYKlv+8syyjLkhACbdvinCPPc4QQC3uUvsJHOsZphRZgAZP0FVzS8ZLNSjqf9Ljfh6f7IcEkqX1JRSNF6vPT45XnOcYYtNYURbFoW13XC/WPdPy0TR8SSaBJOtf0+v6Yp9endqd/k1pJOn56nKCP00o5y9zUMpbxixvOWgajVQ739hltnmc0kOzdvcnGxnn29+5iA3z0V79AfXSTl//oD9m49Cgf+tiT3HvnFXbv7LO6dQ4pJdtXLnG4d5fBcEDhFcfTOatb2zTHh2RaMljf4Mbrr1FPG4qVdS4/9RiHd94iH4zx3pCtnSGj4uD2Lc4+9RHuvv0aWxcfZry6wrVvfoXZ/gSc4OCda1x58hlmR/tcf/klhsMBeaEIkd+L9iu5RmmJbQy+DRSbl9GiRWd5Z3slECJ0OYFOkULEBXcRiHPiTg0iLlRH/YIQfFSeIM5to1VItMvwIh5XakXwDmcNpqrAevzMYGuDd57gA0iQQVHXNkIGLlq/BE6+C63rINCFFWunihAAKfAuWo4YF21KjPPMXYI9BLWD2kWQxBMhDuM9Lshor6YU1lu++a3XWFkb8nd+6zM89cvPU80q/uLPrtEaWGS5OrWFIAIEB16gtWRzY4QKMK9rjg+PqWaa2oI1rlP1jKvJLimIIKhbR2U8ctJSZoo8V7hUMBU6FKdT2pAd9CI7K5KoPCFOQA+SUgtdzi10Ni1i0ZvdaC1UPqDr0g4uoAcphBDHfHH/ItJ91AkWImQPEVlAJ3TKHl1hjhARkCGgAiglyLRgdZCxupKzvlZw7vI5ynGGsU3sZ5Egonj/WB037O3MuHNrn9mkxThH6NQtpHQMBznFMOOh5x7hv/75bSbTOb/yyx/mwuY6r7x9ncPjKYoROzfuUgpPIRUaifeOTHX36wLi2adytE5hRnT3CyEsgCQRZK8fTnCpkNRxIF7badxJgEfsHBcS3Bu6fKnsxibBNKE7/27MhcQ5Hy1mhECq+L7CBYQPMeGmPcX6OqEyXV5UoLOM451d9MMXqBqP8gaVZcyqBlSGHIyYTudkmaYY5FSNRWSCotCUec60DkwnFcFbVtdytFJYYzFeYBqHloFMw6AomTUxfyU7q5YQBEoILIG2DSgsIgi89VgPRgUyn5QOo5JMsAFtLSpXjPMiFnPJqGoThKSeRxWQNngqG3NrW2OBMgFvLZO5xbSBosjAOaRSOG84nkwpM4Ujw4lAIxVta5kbj6kbsI7hQJMXitq494f6YAl+LKOLLMt+ZAUMpRQf/vCHf67Aj2effZbf/M3f/MAmTv3EwWOPPcY//+f/nH/yT/4J3/ve9/jWt77FSy+9xKuvvsr3vvc9rl+/vqh8WMb7H+fOneNf/st/yd/7e3/vp6YGs4xlLGMZy/jLRSBOoqVwZCIS3kKqqHLQVUkkqCNVIETH1Di5ccEj0CeVDcDKWs7F7VW0kkglwFp8XRGsw9dTEBn4QJAW7y2SgG+B4AgogguoskDgIUgIKpptBoMqc7zPCVIRhEWNsk53MYCIZS/W2Ei8ZxlNXVMMh+hsBV/VcSLW1qg8x81ipaSWGmNqomJE9MdVWYb1DnKNsAHXWrJihHUVZSFheAaHQGaKbKRo6gBOcmZDsLY9YLg2iBKXucIFx9H0gHVd8OuPb/DIygFfvxO40QgqmaOcwUmwMsqD7tkpe3cn6NuwNhyyefYcW+fP8sjHrlA1DbP5lLs7Oxwfz2jmc8YEVnygdjamI0YjptOK46ploCWFiPPKXAhWhIyerAGkDDQO3sGzazyVS7UsCecBrQXjTDGQEovkqDbkrua5XPHMKEMpzWAEj52BrcGATDuCieoVuEDrPVJljMZbZFmOzjOU1GQ6ixURdBXNKiaE8A4lBSrTTI8PgYBpGpy2+CJ6EHvnEFnOUGj0rX3KR29xPAlIVhA+dLBEVP0ospxzm2d54tEnCW3FtbdeZufuHt55MikIUiFlTp4PGIw2GI+2yMsVpM5p2prZzCNokKKDMBpLXbc0dQt3DwFobaByMckQeu+raW157e07CHl3kVmJlQihkxB9f9/by1jGMpaxjJPF7dNKEafVORLo8F6QwA86Rt+KJQEXfXDgtNrFaQCibwmSFtlPW66ctjI5rTRyWmmj376+osVpoOFB53ga+ngQsLG6ukpZluzv71MUBdPplKIo7gMPTluvnI5+29/LZied4+kxTL9/P+jj+yk+nAZ13usYadv+c3meMxwOF8BPX+1kOBze19+pDX0VkwQ6pH8T4JDUMFJfKKUWx0g/CQo6rRYCLNqU+vM0JFPXNVJK8jxf9G0CSfqwRDrPdHxjzH19lNqbjp3+3jTNfbmgfj/MZrPFcY0xNE2DUmph85JUTvrqI977hQ1OAn9Sv/VBlrTf03DKMi+1jGX8VQjB3ds7nDl/FhEMx8cNaxsbTKYHZGubPPHkExxd+x6337rOxcc+xObFc7z2za9QliNGK2NkJtk8e4E7t2+yfeEhjnZv4VGsrq9RHx+yfv4imXLcfOU16rpm/dJlBoOMgzvvsLK+hlOK9e2LHN98GztcYfXsWd546SWuPvEh7OyQF//865SDEda0qHKFsw9dYbp/
l7vXblGUKwitcVKxMH4IAWMdIReoLItWKPIMKr8DWYHMo82IT4vTkGQJusX3zv6jU/JIX6shkheL3I7oCn7imnm0exHBE6zHe4OzFhyYKtqnCugWrz34uI0MSeGjU3voAQflcMBsMsF7cN4hkCfqGT4WEiXoo3VxYbn1gsZD6wJeiGin6gWGCHw4BF5Ag6dqPUYIbPD8f//k22ye2eRXfukJnv3Cp6gbw0vfvo1zIS3Nx3RRt1TvXcBaj9KBs9vrzKcl+wcVR3XLpG7xApSUaB2QMmNWW5wXOGJeJYSoTNo2FtFGdS5B7MfQwRRKRkAgyJhbQYQFbBFBlhDHrANzEGIBp/iwGFm67uq+11hAGh0v0o1vVAsRImENdFAH3YY+HWLxnglJuCXZAYnOBijE5xURXhgOJCtDzaDMGRQZg6Fm++IG5ThDqAh6iCA6+ETQNoa93Qm3bxyyf29G3bpu/xFGkkIhEORlzpUPXeVrX3+T775zAErzR99+C01gY1QwLBWra6vce3OfQkpyBUomcCYsrm+IOTQlVMxXOg+is0FycXuZwKdO5cT7QFHkWGcIHnCxs5QSBBH1VLrh6dRQosWM8x3UHEKnWOMieNWNS+gyXKIbXym6d5v3eJuUULp8aojwR71/RDGOxWXBeaRSmOM5NC35mbMoV2GtI5OCqmk4njWsjQdkhcZ7h8w1WudkucZKSetqlC7IlUMDudbkUjKdNWgcMoAi0PqADTBYGzGdTQkuWjCJYMm1omnAOkeuZacKDU3dkqlAVmQ4Z1Ey3WfFPs6KnLZu47l4g1Y5VW2RWFoPUni2SsWw1BwdO4yNXT8Y5hivcd4iRAeMtJ5ZaylcRTYcMfeGo+M5CIkSnkwHRHAczyymjWrFP6oTxw8TS/BjGcC7Ewc/TCileOSRR96nFr0/8Tu/8zuMx+MPuhn3xWAw4OMf/zjPP/88dV2zt7fHzs4Od+7c4dVXX+XFF1/kO9/5Dq+//joHBwc/VPJmGT9aDAYD/sW/+Bf89m//9nJyvYxlLGMZP0/RVYtoEasaYpWcxjmL7xmGih7FHucrXbpAnChCpDizPWR1kCGDJ7iAbwzBOlw9BesJoQap4qTSdx6YMsoqurZGdBURMlcI5wCDLIeguttu2yDLDJnnUexDZxEaEBJElKJum5o8z2gqH8/F2ZjQUBLaFlkOEC5WuMhM42YG2xoyrVEyi/66WY5vPK23+KZCDYZkukSSYUwF2pApwWMffob5pMb6QF54dJ4jVIG1LdZDZVrC0GOMwbSOtZDxyxstt2vLHQZcqxV3jyq0EuhMYAtB66Fxgd2m4tZbb6LeepPV8Yizm+ucu3CBZ65eRZcDauew1mPaimo6Z8V6fHAMq4Z5VWHbhixIpDUI5zBB4KTDSIklMENy7Fr0rKKQgUGuyZUCD7nShOBpmpadxtJ4zwrw2ZHmr13OefjsmOFwiAwOa+YEa/BzS/AO6QVOBoRUlGXOqBzE+2RnKYfjeN/sPMG3CJl10pcBnEXnBZnOaJs2jlcQDAclRVnGxQ7bQi1ZG4ywu3P0keJw8iLro090l6tYgBZaSzY313n26Y+wubKB0iV7h1/hcH+Kkh6pLVJrMq0YDoesbWwxHK4hpKSuK5R0mFJQFjkg0LOKtjU47zpFkdAlSTqZ3XBS6RSIkrXCnzxexjKWsYxl/HTjNCTQXxjuq1L0LSS+HxDxIJWMFEnl4LRyxA9qj3MOYwxJ5bTfvtOv7UMc7wU89CMtuve3fdB8vQ8vnD5+f4E/Wco459jc3ERrzXw+x3vPdDpdKFGk1yegJcEODwJw4N3KHH2YIKk6AAvbj/52py1UftA+07ZpX9+vP8uy5PHHH1+cd9pP27b3FRml4yilaNt2kXxOv1traZqGoijQWi/6xRizgCCSIkaCO7TW1HW96LekMlLX9X3XSt/2JMEQCQzpX9+pfQmg6EMZWZaRZdnCKqYPVfTVVvI8v6+/Uj8m4KMP+PTBkwRs9F9TluUCZkljlaCVpN4BLKCTvlJIeq+l80htTPs/DfMsc4DLWMYvbgQEF69cYO/uLvnqmOFgwNH+jO2HH2J1reTWi39GO2t55Lnnsc7x1puvc/mpZzje3UEPVlk/t83ujetsX7rKzTe+x8rWZVZyxeTeLbYuXcV7x7XvfIemajh35VGCDHipOPPoo4h8RJ7BzqvfZfXiVWYHe9y5eZv1zQu8/o3/weqZDc5evszt6++wdfEhxqsj7rz+KnjH2tYZjiYzbNOiM4WUZmHdgRdRHdUYZJDIchOVryCLkmxUIvCx5kWcfJ8Luu9DkfI0sXdCSJ+DvfuEQFT7kFG1IPiAqQ1SS4SM3hLeWsysxc4t3rm4qB083YS3W9z2EQShgyq67xznTtQPIFq1iFQkEQQeh3VRb6T1ARPAerAdBNJYTz4a4OcOJz14GRUHQoRAjA94JGnN3ljP//X//ce0dcMXPv8Un/i1X0Kpr/Gdb92gaV3U1RDJhsXjEGBgf2fOqBwyq2puHDfcq1qkkpS5RuDZWB1SV3OKPGBdVEv1KIztbFY6pQ6fQIwOgBDAxqBg3rbU1uGDR/XGRBAhiwRoxPTFiRJvamm62wkh7jUdZ1FFIsQi9+BJUE9chPeLXUfYRXYqL6K7VpQUi2PLnvJIyvFpBcNSs7mWMR6V6FyyujpkbWNEVuh4vh1w5IKnrhz37h5z68Y+B0c1TeNw3qNkLAYLInqdrG+skWF5/JOP8/W/uMGNdw44ai02V9jakslA0BojwJqAbz1aQq4jXSGVRHa6tL7LJQqh8DL2aSAs7ickAilFBBpkwBsfbWGUxJi2U0cBRMxbeR+QyhNCD34WSYks2u0KYurSeYdUMW8ZGSwZx6CjcVJtWvRZgeBj/4Yun+rxKB/f68FZgo/qtd53YFI1R9o51kKWKfAlxwdHuNYgXIZ0UJQ5xWCAsXA4qWmsQzjLylBj0BzMW1ZsVMNo6wqlYpHbbDKP93JSY2XJcLhC20wZ5BopCoKzKNXiPWRK0RqPMy1aCVojMMaRZ5KiKFCZjrbUrQUXi7gcnmAVtqmpKou1jlGhOLc1YlhojBc0bsq0tehMowBUQOkheRbAGoxTtI3D1I5h5ghColSGNxYyFT8vkAgtWBmUEdB5D8XFv0wswY+f8fhxLFh+nMjz/Ecmi6SUnDlz5n1q0U8+nn76af7m3/ybH3Qz3jOEEAwGAy5fvszly5cJIfDlL395UaFgrWVvb4/XXnuN1157jZs3b3Lr1i1u3brF7du3OTo6wjl3X5VBqpToy26+l3TrX+X48Ic/zBe/+MX7EiLLWMYylrGMn/3o6jPQIqCkQsk4GQh9ZYJuYhXnl+H0Dhb/RGlCuHhpA2lbgrMY00Sk3VqC8wTnQC6wERA+JjWsiZKfAZyxEDy+BVlkCKXxdY3MdGxLppC5QI8yQMUJb+gUSXxAFxneGwIZ1rbU84rRYIjzliCijKEgoHRO8NFfFiVomyZWusgWkWkwQHBxYpdlIKJ9iVAZw+EqochwxhGkJ9MS5z22NQih0FnGYFjS1C06ixP
nqplT6cBwY4A8kDyG5REVuFJIvpuNuHZYU88NKEEmBMNM4DKJCYrGemb1nDduzHjrxi2ElAzynOFwjMpLysGY1c1z6HGOc5Zz2yXeNdFj0zu0UoRgKfMC5xxVHSsHciG40LSsHs+opjPapqU1jrmtOKpqrLeIEKsBViV8eUvza4+ucG59SF7k2Lammc8wVRPlHF0nviIFRnrKfMTqyhp5lnWeoYo8y6I6hk8VHxH8Ed5H81IRyLTGB4+va0KwOGtxjcEARZ4hBxkhKEI1o3rnJcTqakxEAZAS/h6CJ88LNja2mc8bZDZgZ2LZn8RrXiuP1jNmrcOQEURBayxKZdRNTV1PQTqUEuSZoMk6SVYjMCJVToF06cjvBjx6tVjLWMYylrGMn3KcXvh/UPRtPvpz/QdZffTVN06DJKdtJdLidd+io5+b6ucaki1G2j69JrWrv7CfXpfAhdM//Uht7EMAabE9Pf9eqhen+6soCh5//HHu3r3LZDKhruvFec5mMy5cuMBkMsEYs4Bokq1JKh46rZwBLLbrt+c0PKK1/qFye+n51Ec/aLsHgTjp+Gtra/zar/0aH/3oR/mTP/mT+8YjAQr98e1b+SSQJ4Ed/del80/bpBxK0zT3QQqDwWABvvQhjTzPFyBJen1SRWnb9j6bmQcBPdba+wATKSXT6ZQsyxbXYdM07zo/5xxVVS2O31fJSaBKGuN0jNS/fXugfl+dVtk5bV2TjgvcZ63TP3Zf4SQdJwEipy15lrGMZfziRZZp7t29w3DlDPXkgJ17LY888wxlqbj98qtYr7j63CeYTA8Rg1U+9PnnObr5BuuXrqKKnLqacf7Rp3jzL75BCDnN0S5VW3P28hXqowNuX79OUY4YrqwCLZkc4GyLOd5jfvgOIS859+RHuPPmGwy3znO+HHPtxe9y4fEncfNDrr1xnaee/xjV4R7Xv/cSwgey4Qp79w5onePS5fMUl2Yc7N1CIvHBYdoGFwqUFLGAZhpQl1fRRUkxGqIyjfRRssGHbsE4dHUwQXZKDJ3iRAcNiG5df/FdJ6M2hPceGQS0Ats6oEHYCKRiRVxc79QTfIhzde99DxSI+SIffFzgFsSFeAKodH8hsb5TIulmx+mxI+ZyrI9Ahg8SKwTKCxwC60KEPjxYQgdfgO0AEofACajalv/H//6nSOX55c8+zce+9Bmc/wovfvsdGhM6+CH0JuuBWeO5eeMAPcyYOMeei+eWB8OFtU3G51c4erPBC49UgWGRoXTG8XFF46NyhBMBF+I5uOARIh5r1lqs6/oOFmoTComgswAh5dpiIVXETUJnEdO79yQpanT3ZkIszkMserT7LQSUEFF7IgE6CIQUyM5ORtK7B0pQbHf9aCHItaIsBCujjJWVknOXNhisDNEiIJRcFIE5G2hry/69KTdvH3Hv3pS6Az4ExBybklR4ai+wITCtGy5sjvhvf/oGs70pXmVYES2IksWyVpLGe27d2SFMLbkMZEKgixzpXSweS+cvFUKKhepKymGKBIGIgMgzbGO6s5ZdrrMrPOJEgSMEF98/CWIi3UMJpEj3zd09vRCIpI6XsqYuxLyW7EamU/eQIr4fT+5tQSfFW+Ex1sTxUZLgOxBHZjjjCcEyawTOwJnVAfU8EFyLFBoZHLPpjMNJRRvi8UY6i+ugKFyQHBzP0QpyGbBtS+sDuIC3gf3pMTYIcqXIdEBnGhBk5YB8UDI9PAaRoQpJkWVU84p2bhmVGnzAWYPMJD5EFQ4ZIM81LZLprMU0Htf1+7AQqOCZzlrq2iC0ZrBaUs0bMJ6gQEmY1xbpHMYHagfBS8ykQmUZbWPIZMB7gZaKxjjKXFGoDuZ/H0qtluDHz3jkef5TUSA4XXnww4QQ4mdOPeO9QkrJ3/27f5ft7e2fCkjzk4hUVaK1pixLADY2Nnj88cf59V//9fu2DSEwn8+5d+8eOzs77O7usr+/z3Q6ZTKZMJlMFr9Pp1PquqZtW5qmoWmaBVjStu0CNHHOLRIdacLfh1B+kSafH/nIRzhz5szPzbWxjGUsYxnLOAktLLmK6ghC6Ah5hDipdkEv5BRDxPrjRKmboMbKg1RtAKOx5uzWGIzBVrOOdhcI78A5ICBUFj0kpe4UR4CswJsaWoOXCuG6KojGgLKIvIgKC8EhywKdKUAScOC6abDtfOm1QiqNbVuKvGRyuMdwMOzsXKMAY/SPjBMbmeUIwLUzXJ6hBjpCKgQgblcMRrTNDBdasqBwbQMhoIsCoTQhKwnBU89nGNMSgkEE0EqhxJBcF5RZQdE0SDGlKiS5t2ysbPARPeTqjR2+XgT+Yh/2Z7GqxrcCpUBJwVBAqSVCSByB1njqtmZSVV0iAeybryGzAZkuGYxWyfOCvBiSqRylA9J7PFO8cxhnaY2hrWuqdsZsfow1DQSHCB4NDBGMhWAOWAV/58qI33h8lY3VVaxpqOfHNNMZTWU7GUyQKo6ByguKTLC6ssVoPAbpCB6K4SBWojqPdV0yKAQQFikA7xBCUZQFWkFT1ciQKlI8KpdkqysgJHZaY+o51Sst/rlNXPk8UnXqL6GrKuoqm1pTsbO3w/fefIM7BzUmlcA4kG1AVhW7xzfI7+5QFCWZzgjB452hyBSElraN93Ktid7DzgVcnDdHCxd4z8nmL84d3zKWsYxl/PzHgxQj+sBAPx40Z38QEJIe960/ErSQXtNf1E8LeeM3IQABAABJREFU8wloSNFfwE77PG1Jktp8uvCoD1T0z6t/bn3o4zRM0m93HxDZ3t5eFMns7u7eZ2lz7tw5bt26RVVVvPbaawvgI8EKUsaKwCzL3jP/kWCF1Kb+8VNBTl3XsbKwU6zot78/LuknwQD9fu/DHu+lBJHO+cKFC/ytv/W3eO6556iqqqtgdvdBBalgyBizyDsm2CAdO9m29O1f0vGzLFuoeiS1CoggTJZleO+p6xrn3KL/0jWW+qVvC9S3NkkgSBqPpLKRrgNjzCKHmX5PSiQJRkoqJElZI/VB27aLfad99CGM1M+pbamP+9BHOlZ6j6RzTpYtwKJdSeFEKbXIqyWFkv41kPaXzgkiTNN/byxjGcv4xQvbtuhiwOTeLusXL/Dsh5/k8NZ1br7+JtsPPc75Ry7z1qtvsHb5Mc6cW2d+fMjWhz6Fq6fUh4cMVeDai/+DQbmKdA16sMbGY09y9+U/Z3/nmLWLl1Fhjq0NGxvn2du5w7nzZ7lz8zqrlx9nOCq5+drLbF18nKPbb3Lvxi0uPPoEe9dfQxZrPPfXvsD+te8x2dlh6+x57u0ds3/vAE1g+8wmVdWws3NMiGvGSKdQQuOtQQHetBzdus7asw+hRisUayOCiDNPIRTRgCRBDQJkhD28TyDj/d+VPnSWLcETfHytt3Fi7NoWXwukCDgru32B9+m+xy/mvFEFJKp+CCk76wu6XJGnntcIGfMyrZ3gFxqxguAtXkgQUWnWBo8XIKXGmBbjwVQNxntsZ7HSlXVgQ8B6QcDjfMpLxW0Oa8P/8v/6Kju39/n85z7ER3/1eTIt+da3bjCbdwq3C1WICEIcVYa6dd
EIKPfOQj/Mf/8X/McDj8nrbp9/scP378+6oseTutra1x5syZH3j7hX48+pMAPzY2NvjoRz+6sMdcaKGFFvqplGusM2UzVPcDJg8A+KpP4yxOKBwWaxxaF3SMxpYlR/ePmB0csLa5RuosZjr1MEZe4WyFdZooTYjiGD2ekqYxPoMybWAKENZ6eEJG+PgTsH5E5N0aEH6mwfnhr5ASZxzg8yWNc5hyhkASd3pEaZe4O8ROx2A0TtRYG5EkMb1UkM80kXSkEvqDyMfIOIeQ3o1DxL5SxDnrDU+QGG2oywJn8ANmI9FlTF11qcw607LDdBYzLSGva6x0CGtJlWW5L+gkNZ1sior3SJMcGSlkU1UhVQxO+5G/NUS9DBnHGF3ijGvibJrqXNdMMOD84FZ5PL8scw7396i1odfpkCQeZIiUhDjz1qz9Ppc/8UmufOMb2F7E6SefoT9c9pCJ1Qjl7w1CoKTkmU99it/8/X8OmeNMT9DNVvjK9Rtc/19+jfE0pyo7OCew+HuAqTDaoilZ2zzFne1dDnZmHB4ccPLUJifOHeO1l25jJ47Dgwquj7HWEscROEOaKv69v/IZzndXKe7eY2Nj1e/fClwtKCtFriWljVDKIqRFSIlSqR/3a0ddlRSlr/AcCPjQyeO4lbN88JOf5hOf/jTnL12mNxwQx0kzaYLvW8KDPkA72TKPZjgcztQIoYg7A/+edkbKoXAkWUZ3MODk6dN89Gc+wf9hMub2rZt8+Utf5F9//nf4w298kzv3dqkaAsRXOi3cPxZaaKGF/iT0dnMl4XfrvFPB2703xHzMuw/8mzQ/P/Pd5mreLoLm0XN7O807fARo4u2O9aj7R/h5+D7AHQFECO4e88BHOE6IB1laWuKzn/0sm5ub5Hn+0HGstUwmk/YYWus2lqMsyxasCYvx8CCeJew/uFmE4qYAfgSHjvkomRAXEgCHNE0fiiTJ8xytNd1ut4VSAmgyHzkTAIV79+7R7XY5d+4cg8GA3//930cIQb/fp9PpMBgMOHbsWOs4EaAZa23rfNLtdhFCcPHiRW7fvs1v/uZv8q53vYunn36a3d1dJpNJC00EkOPRezAP7cRx3EbMBBAktFMAHILbSYg5mXc+CSDFYDBgNpu18TEB8Oj3+wCto0gAK96uf4R7FN4778BhjCGOYwBGo9FD/TX0peBAkqZpe0/D8eahjgCKhP4TCuzm79f8OQdXlyRJWqAmiiJ6vV77mV7MZS200E+vjLE887FPMN67Tz4asfXUM9Sm5K2Xv83GsS2WN0+SV1OKUU2aRAyXBgihmIyOMLqiO1hmeniALWYsHz+Gi2PGo5mvJBeWU49fZFaWXHnlNWxRsz29Sy+NOXnuNGtrA8bTGS+/+CLFeML6ypBuV7F27km6G1t860tf5frr19jc2uCp9z3D/s4ed968yvIg5anLZ9m7v8/2vTHW1Kwtd/n29SPu5iVp4zQxxXGn0Ox3FYPY0RMKWxpGt3boLp0i6me4O3vc+cOvs/HkCQYnTyDj13GlQdcWXdeUtZ/A0JVpIQLv+OEXpF0zFyQFfuFYqNZnwjWwhnPWx9USnjEaeCS8B7BuDlVoaotwvkjH2QCZiCYS5mEFYMTxAKwQ4QfCNe6qAJLgg4GDZaXIpMAKi3bQ73aRsUMUEps7XO0LZaSUIJp5JecdT5r8XVQAPrB+Pio4YmAZrveY7R6xXXu3jxIao1rv2uHZFx/lYgQI691aXbgQHlxMcDSpnaMnvTuBbVaqjfOuJy380rSpbC1Agg3Ig/Zu2yqgJY17iJ17dHQNXGKbXRsrMDhqa6kEmOZYsoF+FJLjwwqdl2w8+XN84OOSmzf+R0bTHKVSlLNEyiBiwfJSRtJZgqgXTo35vyS9Hll/2buqaYeMU+LegP37e3RPHDHYOM7F93+UyXjK6J98jr29IzpK0u1mjGYVRVkw7EjW+hn3DkoqAzNrcNaSKl+glltfnCacafumAbQT7JaaJ4Z9ZGWJlCaOJZGQ2KpGNUVVvvhMEMU+xkUq1c69SSl9n9C+vRRNcZDEO0tY7xICDpUlOGdwppk7Nf5PcFEWQmGNdwRxwiFi6V1YrGJqJDfHM4rOgKioWFvrk8YKU1SoVDCZVCglSXsRxawEIUmUpKhrEim9g4/w/a5wUBqFmWmWI8N4XGAqQz+J6KaS4VLGtKxJUu8kMpqOSaQkVWCtwTbAVqUlkZJESYRVSTNf6Ci1Ju7ElLUlUY7Vfp+jcQ54+mKSa3pRjIoE/TRjlhf0kwgRxZR53ThNW+IsJteGymhMAcWeQaiEzElkPaM36DDJLbPxhOVhh1mRk6QZUZyiqwqcJk4kXdlnejQjiSTGfO/w/fejBfjxE6Bjx449NGje3Nz8ocCPKIq4ePEia2trgB/IzGaztjrkB9XGxgb/yX/yn3DhwoXveRshBB/84Af59V//9ZaU/0G0srLCyZMnf+DtF/rxKM/zP5Jx+k7rzJkzvOc97/mRHmOhhRZaaKEfl5raDI+5+wpQqfyAXygPZFiDxEdlYB2uMkTOV4zEzlEf7DPTY2QaQV3jihIhLPEgo56OiSKBjBRW16TdtEFL/OhSiAglJVarZpIbP3AzBqetj4JRClB+MGUdWNNmyLrGjUM4f25W16AEUdrHVhWV01gncLVGZgnDpYjxqGR5KUIJwbCfNYv4Hv7weeRRMzL2to9lPgYnsbXF1BYnVyjK09y9G3P9YMa+jhhZKCNJZSQVMUIoTFUgqYm3K+JyynpPcXbpDFuDkuFwn95gjJHeQlQhvMuqdsg4QkYRzBzGFlhdIYxEqNhPIgiBsBCJiNJUGGcpZjnTsc9DjQTe5UOlKCVQcYaSAhGnPP7cR1g7dx5dl2xdeJKDndt+IGdqiDt+39JXUl54/wdJugMOphOKYcWF4z0+1ekzfuw8r7z++9y+cQ0Rdcg6a4j4LNYmGGsQwCyvEdLx9Hsew2rD9bfusHZ8Gf3CTW83Kx221j4Kp/KTDaB49ZWbbDwlOTrYY21tBW00uoZ6ElEYwUzH1E418TkQJR6SQUZYY6hqQ6kF2dIZNp7+MBee+Qj/u6efpb+6hkFgjCWf5pSqbFw+/IKZFLKtSJr/bNBUhGCdB2OEaO1XAyDSVhY4P0UkpCJSjpUkY2VlncuX38Uv//m/yJUrb/I7//p3+Ke/8U948aVXmRU+isjauc/hQgsttNBC77jCovKjYIX/ne9fe9QRIMAFAaSQUtLtdh9yKAjvmwclwr7nf/696O0cOd7uPVEU0e12WzDiUajkj3MkCT9/O+AjzGWF7eM4Jo7jNj5GSp83/ulPf5pz585R13X7eohzOTo6asGLbrdLWZZtbEm3220hBCEEnU6HtbU1JpNJCzWEv89Hz4TzCTBIAEbm42XyPCeO4xbECGBMHMet80QAHuI4bmNJ5l1eJpMJRVGwubnJ2bNnAfj2t79Nv99/yAVFa926cGRZBtC2YRRFLdQxGo04d+4c3/jGN/jDP/xDPv7xj3Ps2DFGo1HrYDKZTNp4mPA17Gd5ebl1EQkxLMEBYzqdtlEr4fh
ZlrX3Irw3OIEE6CKO43b+aL5tghuHtbaN/snzvIVwer0es9msvT/hPMJ9DLE9wWUluIkEh5DguFHXNVmWtecYYKAAe4RImvl7Fu7nvINMcOAJET3Bvaau6/ZzMR+Vs3A1Xmihn24lWcbV118hjSNOnDrLwe42VTnmwlNPEXWXOJpMMZWjk6YsDZcwSlLOZmT9HnXZYXp0xHDYJ1ruMS0FVT7BVTNSKegurbF9/Q0m+yPKcUHaSekNBqysr7L51CXqo30Ob9yEqmbQTekNMo4/cZlKdvj9f/k5Rvf3ed/Hnme4POCNl19ib++AU5vLbJ3Y5P7OPmVek8WS9WMrfOP1e9RSspHGjCvLkTOkUlACVyvHqoJOBypdo2Y1R3dHZJGPOxWjA7Zfvc6JS1usnFhl//oYUxiUVGB1C3o09hL+96vAz6GIJrYEj1UEqMLHr/g5F4lstqd1OgggQwAl/Bs8ZCdwTd2OaEENgXeiCF8BHwXTPD8Fdw/ROlg8+J+TYu77B44bXeGIhMAYSXZik4t//s8yvfYyd771LWIBptCYWuCsBlTr9CAEOGtwzs8PSCXQFjA1SiqcBJxiPKnYqwU3Z46RAysaFzVc4/DRxCM7h3Gg8TEkiiaeRvhrbONB8E4fUoo24sU112JcqCsJTimBGRGNK4V3nwi/0kTjRsHcr7hwi50Q2OaeekcV77JbW4fBUTZAiHH+/QmQCsFSFtFNx/SXhhSjezz97vfw+cHfZzQtca5maSljebkL1lJX/nlFuIYamZMAoqzLmcvPc+P1lxlfu46pS+7eu0Ne/SY33/hDLr37/aTdLp2lZbrDLlvLXQZZys6o4Mhaslhx8dQSUiTsjmvGVemdK7IYqwSzifHghfNAj8US4nG6acJEVyTDLjIrMTNHpATdJEHXlY9urgy69k7DUko/r9gAJEIIMNb328gXozlpAennKKVvaCV8ho6pal+Q1UBO4JtEqsbxRdBGu1jnsMJglIAk4s5RxVglCCVIssg7eFhLkiXIWGJxxM6QdrpMx0eksQeUOv0Os1mF1Q4lHVnawdQ1da1RSrK7O6HUho1eQpwlOGupnCLKPMCrdU0nTilmU++ME0doLEe5oZvGLA0yQFAWFdpoqrImSWKMgW4W04uhrnIQGmsNUiZE0pFIg3KKWhuyToYSPqbKZR3GozG9foeDSYXVhuEgZW9SkUYJfTSGmn5/QF5qdKkZdGPqqqbXy3BOEAlH0ouZHJZEaUZelkRJSj7z47I4fuch3wX48ROgjY0N0jRtF8gfdQD5frW0tMTly5fbwWZVVW21ww+jv/pX/yq/+Iu/+H1v9wu/8Av8zb/5N39g8EMIwfHjx1laWvqBtl/ox6cfNfghpeRjH/sY3W73R3aMhRZaaKGFfpxq6hmc8dUPzjyofg1AhQuDFB/5knUysBpJ4a0+pUDqCiKLEA6ZNqBGFOHKHBUprLMY3SwqNAsOQkrvLCJUQ9eDcxpnLUJYnwWrpJ9YqGp/ptZga+3tHa0GJxHCIRRYW2EqX62h0hSVZkhdYnWNUw6pIpaXU+7czFlbdhzrCDppih82K4RqDDKbRRRjLXUx8xR9WWNMRF6e587hJq/v5uzSZazWkf2E/lKPJy6ss7w+5Hf/5esgap55dos71w4488QaL335KlenJW/t7DG8N+aJlQ2e3DrB0uA6WTbDRtLDHM7bUMpIoNK4gTIMzjiEdAgRIYVFRoq68pWTVZ2TTw+pjEGpmChKkchmgSgjVpGHWpRERYrljWPk0wlZr0uvXPEOKtbgbO1dL5oB8+qJU2w9+Rh3v/FtiqoiShKiWc7ZEyd59gM/w/2dAWU14uc/+zxRvMbLL13j1Lk17t89ZHVtiV63T60rloddjo4O6S9nOOHzUoPLSEyMczW97jJCwu2bOyQfuoh2YKyhKmtm4xhdRRTWkdsI3UwHCSGRcUQtYTydcaey3IvXyd/7fooo5ZqW8PJL2FdfwFlDpSvv7CJ8/E0ax3SSjNWVDdY3jrG+scn6+iYra2t0ej1UFPl8Vpq8YyDAIPN6UN3yoErHz9n4z5SSMUurq7x36QM89fQz/IV/9y/yhd/7PP/z//w/8dWvf4OjcY5psl8W6MdCCy200Duv7+amEZ533g76gD/qEiClpN/vtxDBPJgwv//vttD8vUAab7dNUHB/CFBEWBgP1/N25/Lo9YeYlODSEKCGoCiKSNOUOI7bbYQQxHHM888/z9NPP906P4TjB5AhOC2EtgkuKfOL/9PptHXmqKqqBTrmnR+8G1jcOnQESOPo6Kh1gwjwQQAx5t1Ewr7qumY8HrdOFAEOmFc4VnDiOHPmDCsrKxweHnLjxg0ee+yxFhSZ7w9CiBaM6Pf7TKfT9hpDnMzGxgbLy8v81m/9Fs8++yzvete7ODw8JI5j9vf3WVpaoizLNrqlKAr6/X7bN0PbBpeM8XjcwjPhHg6Hw4ecTwLUEQClAOWEdn7UXSO0QQCdiqJojxFFEVmWtdcVHEGklK17yMHBQXv+4TjB8WP+mPNuHQEKCQ4d864cATgKES4B6Jm/ryESKPQFoL2vwfEk9MM0TZlOpz+SKtCFFlrofx0qZjnLS+usba1x/+oVQHJsawviLgeHI5zVrCwtM1hbpswr8tEhWZIxur/LbJYzXF7COsH+wYhiPMOZnH6/h4pTDvd2MbOCQa/D5okNZoWmf+osZ599lqPb19h+4wq2rBh2Ff2lIWuPP82bb1zl5W+9xsnNZT7+Cx/n/p1tvvnii/S6Me96+izdwTJXX7+CLi1RIumtrPDSt6+Q15bj/Q73DidMJSwlEUeV5UhbXs8N52LFsoZUWcp8hr5l6Q4SokRia83+S6+yev7jrD1+it3bLyGdxFpBEkdUVY0hRObONZ4UxHGEALQ1GGPBNgvfTZyGwwMNYdQb4JAHRQsB1PAOCf6fZYEQfi7JPEBEAB/r0qB53kCheY8vprDteNo695CNhJg/fweJcPSVB1Iq60hPvYvB5SfJt+/i4og4ioiSirpQ6NIX8eCaxXxcE6Xrf/dp6103PNToUEpicOzdH/PSqOJ2pSktWCmIECgpSJWkrh1rmaIwloPKYLBYKUjwUTjGuQa88DhHjKMvpHcFwTtG2KaNpfCOru1Ft24nDRATXg7ASLglzVdHE+cSmkg8MLXwcysOLaCS/nvhQAnoCojx4Tsf+cDTnDimOH68y7HHn8eVhqQzADFhfZjxyecvop2kqCUikty+8iKPb11GdPx64rwpKkKgBssY1UXXmsk0p9re4+Ur91HC8forr7K8fowXX3iJG29dpx8rsm5KlFcoIciSiPFUk0SKfi9mVtco41judzkqq6Zwyru9CPw1mOZ686piaWuLfO8W6xs9tBBgS6IuJKqH0DV6pqmKyvcL6YiTCKdtC0cLJRDBgQ2Bk7IFl5wFGUcP+qfEEzbGgbG+UCg8yzWOMiJSOLy7CFIgkFROcPNohsw6VKXBRQrTuOBVuibJOiRpRCkc1hrW1geUeY5KFVLFDPoCWyuqoiSva+pK00kjDgtDrQFnSJOY3n
KXwoBwlsNxgTMaUWp6kWK41EVXjllekiYRnUySKkcSCaw2xKlgWkXMcoupLMuDlKzjARrjFFIJnHDURU0mFZGQGF3RyzJqbREZlM4xmhZ0uxnTQhNJwWAQM3MQJxk9NJmA7mqfsvJQVb/focaRRhHaSVLhMMZRVjMMjqPdCWhDWdQoYVFKUOt3PtZvAX78BChJEo4fP85kMgF+ePBjZWWFp556qv1+3n7wB9WHPvQh/tpf+2vtoPT70aVLl/joRz/Kb/zGb/xAx46iiAsXLizsD3/CFKxmf5RRL0opPvOZz/zI9r/QQgsttNCPWc3gUogIh0aJGCt0M+A2OGfnxt+Rrx5w+FgUKVBRjNM10tW4wiBjiZQxIhKINEZXla8QqSy2NkgUJs8xRYlIEqIsheb5QygJNkIIC5H0JL6uccZADJgmD9aC/x84ZxHKD1UxBlMXHmBQkqjTRxeFBzesRVc5w37MW7VDl5aTA0scKb9/IREIpFQIJz2oUnvgQpdQF0vsjh/n5btwR6QcRavYJOL85Q0uPnOKMxfXuXtjysp6l1dePmS4Ch/9ty5z841DTp1bYW1rlWJa8tq3bzA7nPGNnXvcvZLz7vWnOLG2TXfpNk5prJR0YgHOAxgyjttJFdsAOEIoVJSitEMYjdGWqiyAZkI8UkjVTB5gkUJ5tEUmGGFIky5KSoSTKBlRlzlRljWRP3g4AoGKE06/5wN89RsvIkWCtQrhaqrJiI3ja8RJFxl3WD1+ho988r38qT/3SaS0RHEKWN584zrLqwOuX7vHk08/zmQ644mLp7h1cxulMnZ2jtBYet0+SS9ldLjPzv2aWiTYOMNaQVEYrBYYDLWNMSicsLik4r613Nk+ZDTNGfePYdfOQ9xBHB16wMTU1HWFNT4SyGHQuvJ5vVFMmnTodFLiu7eRUhApRZZlLA+W2Tx+ks3Nk6yvH2d94xjDpRWSbuZrmOYX1MLMRlt60yySAYio8WJ1IAxCWDpRzOks48//+X+PT3/6M/zu736e/+H/9/f44h98lWleENYeF8sTCy200ELvvMJi+jzY0U7wNhO0YQF8fht4sDgeRRGDwYA0TdsijPm4lkeP9+jff5CIXiklaZo+tND9di4mbwd8zB8zOCmE2JT581BKPQR8BMeI8PfLly/z/ve/v40ZCTDG/KJ+AA9C+8RxzNHREfAglmMwGLROFyHKRCnFZDJpQYfg4hEW9+fjZ5IkaWGOsP3a2hpFUTAajcjzHGu9rftgMGhdKIqioCiK1rlkOBwym81a14xr166RZRlnz55lOBzy8ssvc3h4yNraGtvb2w+177yrSOhDg8GAsizbvhOcJy5evMhXv/pVvvrVr/JzP/dzrK+vc/369RaqCM4aAZoI+1NKMZvNWieMAH8E55k8z+l0Om3kixCC8XjctnO322V5ebntn0KIh6JlyrJsnVXCsQNgEUCXKIqo67rdR3DmCMcO/Se4q2itH4r6Cfc93KuVlRVms9lDcEn4TAVQJwArxpj2GMGdxhjTtvl8nJGUkpWVlRYEmu+j4b49CncttNBCPz1Ks4yltT7j/R36y6usnT7F0d4+uAjqCWvrxxiubrC3u8tk7z5CKGZ2RJKmrG2dBKM52N6mLCuGvZS4s4pMM26/9grSGpKsy/Gz5zDlEcefuUR/c5M3vvoVdm/dpItGFFM2LzxOd+sUN65eZ//GTZ68eIrzT17k2uuvM9o/YHW5w9kL59C25sorb5Aqwcpqj73RlIP9Q5Z7XQ4OC/ZnBYU2ZDLisNZMa+9QcVhb7tSC41FNomIfs1qVSJXRP97n8MYBs3t32XvzFsvr66QdRTEGFTmUAJrFZ+ceRK00Zh+ti5Vr5gAcIZ4CXxBjHYomFqQFMHwRBta0SSRShIgUHmAh4VitVUgTfzHn6tFGk4Tjz0Wo4IJLSIhjCTUqAim806kVDqMd17/2RXRxxPTOFdLY0Ot3SPoRIi4QkcSUGlNrMA7nrC+sEd4dwzTQi7MCJx3OGEor2S0MdwrN2HrXDOe8I6y1YIRFCcnJk1tcv3mPyBpAonB0s4hx4aN8vQOHQzoYSkFCy660Liyi+S/AGtjG96OBYHzTy8Y9wmINBOrAzT36eUThQRsaK9A4tHMYIaiEdyaRzi9op07gOQSBRrLStVgNab+P6G1QjA5Y6g1Z6R1w4dwxPvqZz1CRcvetq+wfTBjvb5Pv79A5MWyeNR8+F4dglk+xQlBUjspUXL1xl24Ucfv2Lk5E7N7fgbpmGisK4zicFGhjMVZxMK44vppitUVJv/9JUbX0ixMOhSAVisqZB9dvHVEkKKY5Yj1GZpI06iOlI04jnFYoaUEIor4gG/bIDybUZQ0aTGWaz0NTrCYatiP0W+dwBowQqOYz4m9liBFyRJFsAKMIazUqStHaIJ13kK10zV4Fu8bSXRkw2TlgNClY7iQksSXpdKlsTaxi+ksdXO2Lp/r9IToGYTPqgz1muqQSAqN9DM5oWiMcLCcRuJhISsq8YHl5ifGsIIoVQjh2xlOUSMhMjLaOTqoY9CJcVZDGMbEziCzicFqDVPQSQZwohmvL1HWFyGImxZhOppA1JM6QOkMaQzdLsMJiMRzlMD0qObYyZJrnWO37o4tjcpuQmoI0lnRShYvwDtMo8lrT63WxOJS0GBTFOCfNImZHE6StKCqQTVvX2i6iXv63KiEE58+f58qVKwA/dKTJ8ePHuXz5cvt9XdcURfED7291dZX/7D/7z9jc3PyBtldK8Z/+p/8pv/3bv/0DuX7EccylS5d+oGMv9ONTyK39Uarf7/Pcc8/9SI+x0EILLbTQj1GhQsD5AbDF4qQfROEcxhocxldHCF8loYS3LrQiwiFRcexBjaZKQcQJUglElnmoQjpviegs1WxKPT7AGYvq9Yj7Q6J+F5GkqCj241cZ4/A583pWYI1BKOENFLAgFUIp751oG/gjkghdgy4xZQ1CkAyWibMuui5wpsbZmCzrkWYx4+mMbhL7/YbwU2dBKGpdossaZyW1NpT5Kd68eYzXJhE3bMrauS1Sa3nvRx9jOspJujGvvrTLzWs7YAV3bt9jZ0dx794RWyeHFHWBSgTnz29y/qljuNry+X/2MrffvMfh/n0+aM9wqujTW36FXtciK02MQ0YSHyUqG3tJjVS+jYQPlwXh0HWBtd5wVCqQOISMAeFzYaUEqXwFTmPDmXZ63oIzjqjLGR1WcM4gUM0cgh/kblx+lqgTEUd+oG5sRb5/n/XTz6Iiia4dX/2Dl7j49Fkeu3AGKXz26SyfcuvGHbIsRSUp3X6H2lT8lf/wz2A1HB2N+MoXX6Tf7/HYhdN89cvf5Iv/+h6HE8u0jnD9rq/irTVI5/NoI0evO6WONHesYTeSTKyBWrCiBiR1hc5nGFNhjKWoZxhbI1WMlJGPholjlEoQQvnJHaOQAgaDJfr9Af3hgMGgT5Qm7B7ssne4y+tvvoSzjkFvyMlT5zh56gz95SWf7dp+jtrSlkccQcSDag9JM6FliZOEYxvH+bf/7V/iQx/6EP/yN/85/+//5v/Da29cR
WtvebuAPxZaaKGF3jk96ojxdgBGgCMehTTmwZCwgJxlWesqGxweAqDw6D7nv87v+7udJ/BQdEYcx3/EbeLtrm3+uh6Nqpm/lvlzUUq1rgrBAeJRR5RTp07x6U9/msFg8FAcSFEUbRFKWDgKcSfT6RQhBHmet5EoRVFgrSVNU4wxbSxMACCKomjfH85xMpmQJMlDUS/ha5IkzGaz1nnDOddCC+HcwrGSJKGqqtaFJM9znHP0ej2KouDu3bsMh0MuXLhAlmW8+OKLpGlKVVVt7Mh82wTni+CS0e12WyAhTVP6/X4LPWxubvLbv/3bvO997+Ppp5/m1q1b7O3tPRRvEiCX0H5VVbXgSoA2grtJVVUopVpIIzjAACwvL7ftc3h42LZLcMQI7RficYKzx3A4bMGJ4MwR+kZwI0nTFK01g8GAoihaGCOcXwCKArwTjg2QpilFUbTRMuHcQ1QM0AI781E0Aa4aDAYIITg8PGyBo+DWEuJswrGDS0zY58rKSgviLLTQQj99EgJGO7ssHTvO2ukzvPXqyxzsHrCx0mNpbQOVRLz2za9QG4lSCVk3ZjDsUwPFbEZe5FhnGAx7gGM2nnJ45VWcliT9AUmvS1XPOPbEM0ynY77267/Ozt0DTFEx6EouP/cs8coKr7/0Gvu7Ix5/+hJbp0/x6te+gi4Np8+fZnVlmfv3jzjau8+wmxL3Or4CftBn1QhmkxxhDLEDJSR3qwptoBc5NIJCO16ZWc7EgoG1pEJgrWNyWNBbXae7WmPvTdj55kss/fwH2Hj8JLe+dYsoiqkL46M+nPVuq9bhGhbOzv1Og/CsItp2DfMJTjrv3mEdxvo5H2vNAzeEBmVwrskisaKNMXHWoiQI+wD3kMIvkisn0XhnEEETOeMk0u8Nj1J4RxAx54Th3TM8oKKdj+BQxRG3v/EllFTkaYTRlk6vQxRnpAOL7RhMWWMKja00Fo2xFiPwkbCNmwPOfz+pLfcKw04NYaUtLPprwNWOTMCdO9tYoz0g4wSdRAICZ/31AKQ4epFAAcIJ73jiHgAywaGD4Cgh/HXb9m7IFi5AiNaVFNeGKLft18zeeZcP56hwFM2FSQeJ8NEuMdInKjdQTSwV3dTRW9mit3SafGebV7/6rzl+rE+v9xgbGytEw+P0t97N8MwHeeMr/4K7t25xcO3bqGyJZHnFF74IhzOWenzI1W99iXu3bzCZWZyIOZpW9DpdIuG4ef+IotIYbUikAgOu0Ey1RCQJcQRPnVrh2OYx8peuMZppSmtQxtHtdLg/KRAYFJGPT3EOJ3z7deIIXRRUZYGwSyxtrGGLHFMWCKvQpsJUFSpR9Nb61LVGdhK6ScJ0PEZZibP4zwwN0GFc6wwshJ9DstZ6lxbh/JwiNPdGYmzjLOP8XKbFoJRAKt/XZNbhxu6IIxORb+8jnKO0cHfviLqsOL6lWFkbEEnj44kjiZQZ1joS50gSGHU7VDOHNRqNYFLVLPUSIudIIx+RtDfKWV8WTA9H1GXN1iDjYGoZxgllZRGu4PhKFyUjRF3R6aWoREAWc/+g8NEukSBV0OslVLMZpRWMCk0xLZFpDyslWQRdLJFyaCtI0pi6lOwcTOh2E/Z2j+ikkn43ZlRL8qiLnFV0lUNFjqPSIEvH0nCAdpY47TEdzxgkgrTXZzLSVNMZ+VGFUhLrJHFkcUpwMM6x1rVzne+kFk+PPwESQvDMM8/wuc99DoBTp079QJUW4Afg73vf++h0Ou1rj+a8fj+Koohf/uVf5iMf+cgP7LghhOC9730vn/3sZ/lH/+gffd/bx3HME0888QMde6Efn+q6/pGDH0888QRJkvxIj7HQQgsttNCPWU5ibRnqO/ygXISs9+YPgHXtgkecpkg3wCqFNTWunqEGHaJeF+kslDOsLkkHy5ije37AnyRESUq0uo6eTamnE8oiRxdDkv4AOj1EEiEUOGOoRiOqycS7i+BPxFnhJyyUrxgRMvLRKChU1gEk2o0wVUE9GaPSFKkipLF+oT829PuKgx1wWbNPbXw1CYq60phKY7TDmJoiP8WrNzZ5Q6xwsLbMu57e5MT5Y5SFZnVzwHiq+ef/4CX2dw/95HsU+ZxYIdnZ0dy+sYf40jVULDhzbp2PfPoSVjue/8xTvHV6ha/9XsyXZxPeo9c5XT+JWXsJ2ynouJRUxERxgjYObR04iZDCgzBYlBBYY6nLCieML3yw4O1RvN2qkglSKppRPRAcXiT50T6y22F6tMfQ4cEP53zsCx6GWT93iZPLXYSsKaqCsq4QO3dYeebjdDsZlZ5x59Yuv/4//x5/8a/8POsba1hTMxvPOP/4aWazHITi3t1ddrcPWF5a5uSpE2TdiH//f/9LSCWoy4qbt26CjDFasnNQ01veIB/tUFYVOrLYjnfw2C0E95TjIHXkziGKmOX+kElekh/cIM9nTKdTylmBFQakIJIJThpWlodsHn+cY5unefLSZS5evMz5Cxd8VVRjU64i1cIrjibiyFpeeulb/NN/9g+Yfv6fMOytcOHsRd77vuc4e+FxOt3u3CLcXF5LKGwKVp/NRJiIEpxyOF2TAie2TvAX/92/xIc/9BH+u//+/8uv/S+/zu7uoa/kWmihhRZa6IdWADqAh2JSwmvfi2tG+Puj7w0L42HR/NEIlXlwJGwz/yc4Z8y7EwQXA9X+7v6jTiGPzmXNuyDMwyNhYd4Yw9raGmVZtrBEWNgPx3wUChFCsL6+zmc/+1lWV1cZDoeto0OAB4CHnCMCYAHQ6/Va14ZwLVJKxuMxWmvKsmxdK4LjRGgHrXULBBRFQZIkJEnCwcFBC43MR7yEe9npdJhOpy0YEAq0ut0uvV6vjU4J4IgQgqOjI6y1bGxssLm5SVVVvPDCC/R6PXq93kMOKPMOMQGACbBBkiT+GaQsWxBIa83TTz/Nb//2b/OlL32JP/2n/zRnz55ld3e3bb9ut9vG+ASHjaAAeFhr2+iSPM/b14LbRjjH+ZiTAE8YY1o4IkAl/X4f4KFzreu6BVjmwSGgBVSCY0e4v+GYod/Mu5PMZrMWsun1eg993sJnJjh4jEajNt4ngCshMiiO4xZwCYBSeB1oP2vhnOu6bu9NcGFZRL0stNBPr6w1HD/3BCjBa9/+OpPDCWfPnKbT63C4e8TRTkUa96jqkrSbkkaSKM4oJ2MOdnZZWeuT9rqMpjlOKiKr6SZdRDci6Q2IBx2WNreo6xF3XvhDRrfvQCU4duIYz3zqExzu3+fNF16izksuv+syWVfx1jf/kNQZzl16DKkE927cpKpKVlZ7JMvrGAdd4yhmeQNLSJZ6CUfacGQrwNGRgtIIauMXsw+05a1asK4taRyha4Og4mB7zKCfIBOBPjpk79o9hhtrxMlNiqIC67zrRDOX46MnAqzhi2jmM2CE8HEZjRkF8+NbKb2rh7HMjXEbMLZ5q3iosEgQSUFtg8tH84aW4bBI8QC4sI3LAihfD9TU54jmHHyYjEQ4iIVA+VkRb/0hPOjgjMaVlskB1KUh68Sk3YQkTUg6CSbX1EVJlZe4SqMr0NYDJwKHdVA4
2K/gbg0z4RqDCfdgPN+ca+1gfXnIzv4hkTAkEahIMik1wX9DSUfWbOMdTizSidYhJRh7OAc2vC80SCMnmGtY/KSPa+4FYISPlDGABrSFSvhoFw/PNA4fCN9UMtyGcC8csXAU5ZS79w4ZDAeIL/yP7L7+Ms9+4DLb98fMDnapplOyqEe0dIyt80+xe/ceu3eukxefo7d2EtVbJoozpNXs3HiV177xJXZ2dpjkMybjkrKqiJo+1zAxGCHQgMhiRJyinEPqms2h4GPPXyJb2SKf5UzK2+QHU6paU9kCZS1CxlhnqfDAS6BojIN+1sNZwWw0w1rHYLlDqSWmmFHNCrI4Ih6kVLMaXebUuSUvah8fY4PTX2gn//kI/xeN6wtC4IIrjpA4Y7DOIWOFMM2zr3CoVJEuLVEejfycKoLRTHN1VHHfKJzTxErSURJEQqkdB3sjYinp9BOEE6RJhNUWnCOLBYiIuNejWzqmoym7ezNWehHdRFEVNdPKMipqNocJpqyp8or1tS7GFAwihUoVKoNYCrqpRDpD2smoi4pON+OwFOTTkk4sSCN/TtUop5PGjHJDbQwSR5XX9Dsd+nEEFqqioptIxpVmfzQj6SRUFpa6CSuDGJdm1CahmpSkoiZJI2YGIglOKIq6QkYp5axGGoNDMto/BFKMsSgERtumbSUHhzOclb5Pfxeo/ofRAvz4CVAAI4I2NjZ+4H0ppfjoRz/60GshX/MH0fnz5/nVX/3VHzp+ptvt8pf+0l/i85//PIeHh9/XtlEU8fjjj/9Qx1/oT14/rNPM96K1tbUfyT+cCy200EIL/a9HoULAE+0SJ0E4iTE1WNtYYYaKDlCpn+SWS0uUu/fJJxOMUIilZZKLl7B3b2BvHoIrkUmGVQoJdJeXiLsZVldIpYi7fYrDffThAVhLbB2KDjJOsdZgygq0xckaKcPkdY01FkQT8SK8lbMU3mpcZH6gqfMJ9WyCjGPSTh9d7XrwQzsGg5Qrb/nKVFtrbGVACIzwEyKmrrFIimKd124e49W6y1F/iZPnV7j83Gmq0jGazfinv/Z1dndHfsFB1wilsM40riTeQKSua195UMPVK/fZuXfEylqfx548ztU39jh7YYPpuM8L93ZQsy2cyLFLLyGUH1Z2OilCRaA1AgMahFT4IbyP3XHOV0rgBE5qLN7dxDndvMchVOKhnsY71FrD9pVXOPH0+6hmU0xdIaPYD8KF8KCIcyytrrO5PsDs71GUJVVtkHv3SBNBp6MYjSVWOK68epO/+//4B1x+5jE++en3k6QSnKauSsqiZGV1lZNnj6Frzd7enrddLwusNfR7XSYzTZx2kBJu3Tzkmc0z7N58k4kumESOvbFkD7jbcexajc4lQim6cYfbt3YYT2+S5zW69PBKJKFqzDfSaEZ/qcPg2Dp//T/8G7z7gx9gsLKEihWP2HN8Vw2XVri/u81kdMiOvM/VG6/zxa/9Dqc3z/K+936Yd7/3AxzbOoFUinZybN4FpJ1UwVfmCAFx4iNmpKAn4OITF/kbf+P/zEeef56//Xf+K776je9Qa/OOfMYXWmihhf63rHmHhjC2DVDB/M/D34P+uMXi+erY4JwR4IWw//n3PBon83bOH/NRIvPnNB9T8Sh8Mg+wzLtSzG8npWxdQ1QTaxZAjQBOzEdnBA2HQz75yU+ysrLC0dFRG5USFt2zLGu/Hw6HlGXZLrQH54awiG+MIY5jVlZWAA9zZFnWwhHW2hYSmc1mD7lOhPf1+32qqiKOY/I8bwGZAJzMt2FwnAiuFgF0yfO8jREJAMm1a9cAOHv2LIPBgHv37nHv3j3OnDlDr9drXTcedVOpqorhcNieP9DCCFmWMR6PEUJw7NgxTp8+zec+9zmee+453vWud3H9+nXyPH9oX5PJpHXcGAwGzGaz9thFUbT7Hg6HZFnWxk0/er8DUBNgpPCeOI7Z29tr4Yk8z1sQJjhrhPsUoJnZbIYxhn6/z2g0atsgOI8EcCO43wRII8S1JEnSto/Wur0fwcVjHtII7RZAmnAuoY+HPpGmKUmStLEzaZoynU5byCXE3QAtALSItV5ooZ9eqSjm/vYdDg8OSOKYM+cfJx10Gd3fps6ndIdLTCdjut0OaIPrDLh38ybF0SHrW8dZO36cw51dZkVFP03Q+RglYbC6wmBjg1Ipxvtj3vr2l4nKkpMn1xmcOM/ZDz7Hay98m/3791k+tsbWqTMc3dnh/pv36PVjjl86hy4NB7u7LK/2Uek6ddSjyHMEhsHqEGcsB9u7dLsJBYJbk5yeUiRI9kpNbiwWQRRJtIM3c8fpyLtHRPgYrOnuGGEyRKRQ1rD70pt0nrvIcGuZ/RtjjKuR0vn4XRcAA5q6GtcYVDaxKhKcEw18EZ41/Jvd3Gt+QVyE4Bf/mgtzSo0LSGP+0dINzdbSzzi1YIfysxztPiX4RfzGQcThwRUrmh06/zPZwBDGCfqrQ2xZMBn7ReDYOoyrqbSlLiNqKzn33k+w+dgW97/9ZWY728SznHI0Q49K0N4tQluLASbasV0b9qxt3D78WXlGwi/9S+cdNm7dP0RbzSCWrAwy7o4KjLHUQjBuKqgS50gRJAIiJ5rFZNfAHx6GCZ4mD7E2LSDjgZTwom0ie6yAGg971M63W+VC3A4oIGlcPpSQbZt5hxEXkmSwQJzEHNvYwFjHq6/eYuXuIVk/5cSFdxMvHfKtL3yO6dEBw3oCrqDb75N2Eu7cvEpndMTKaB+DI88L8lnBwc4O9+7dZjrJyQuNtmBkhNEFqTNkkURrD31004iVfpdp7TBYhongZz70FD/zH/wNDq/d56Vvv0lP3UEpwazS3iEVsMF9Bu9iI4RAOVAq5vjGFverr1FXlrjUHN4ZU46mREoRKUs8SIlUjJCKajbDaoszFmObuBbwBFQDH6kA37iHujRCSISQ+E8qbZSRbNw9XASd9SWKcYEANA4jFbePSm6VlrGDpSSi0h6+EUBpNB0XcW/7kPJ2zcbGMqsrA7pdSZJG1N6QGRx0+z0moylbKylp5p3ZVJyyf3TE8WFGpiQjq1FSUOY5cRqjIsVgkBG7GislutAs9RRW13S7CoSirkrWuymJrDBSclBoBJZJ7XDW0c9iqkjRV46uqEgiGM0Klpf7HE1r7h3OMFGCTGI2hj0GSqNx7B/VjI+O2BookkxR1IZIKZSUGKPRpaAcjxh0E2QWkRc1CIWxOWkiMVVNFklqodjdG5NI6A79s2il33nIdwF+/ARICMGzzz7bZkqGweYPoiiK+MhHPvLQa2HQ8/0qjmP+wl/4Czz33HM/9OK6UooPf/jDfOxjH+M3fuM3vq9t19fXf2jwZKE/ef1JOH78MJDUQgsttNBCPyFyoSrVNgMWbx8pjHeB8JaG3urQYUFAdbCHMhXCWuJEMZ1ZRnfvEqcRbN/E1QUuBplFPhcTh1AS2UlwM0ddTnFGk61tUO4foGfjtrTE9QUyUkRJgi0KnNaQKIR0KBGB1Fht0NXM55sK77CRdvtEcYbIuigHtpig8ynxYEhS9qlnOVI4Bv0OU1GgjJ9IsEa
jG7tUiafn66rD/YPHuWJTtpMVPvqZx5mNHXs7Y1745i3efPUuVZFjnCPLIuJEYJ1CNxaDTgiEinD4SlsZAQLOXVznQz9zkWyY8TP/9rtIugmznSnf+vxVdl6+zspBn3JnhzjeQ3YcQiqyjp+UN7bGZ7/46gOkxLiQP0oTz+NLOCwWKRVC+ioKGSd+UkdKpPA5pZP7dyi2TuOExeiaJLhSQDvJkPQGDDfPsLt3l6IucWiK/W0SJVha6XLv/hghJbrW3Lq5zc7umHt39/i5z76Xbi9lNBpzfGuDo9GUSAjSRFLcn7GxeZzZ4RFHRxOuX7vHN75+DYujLmpef+0a5y9aXn/tNjv7lnEp2U0M163haOzIDUwqg5SWbG8HU1sq67CuySMGjPGTQ9b5yaVYZvzCz/8Z3vvcB+kvDxHR9w59hObY39lG65os6+CwlGXBweiAV996kd/+/G/w3qee4yMf/xRnHrvQ9Pm5/b/NGFQgPLwklXetETXLw2V+/ud+gccee4K/83f/Dr/2D/8JRfGDuQoutNBCCy30MLwRYI/5xeBHnTTmI1HggZPGo+4gf5z7RnA0+G7nMQ99vJ3ryNu5eTx6fo/CH4+eNzyAH8Ix5qNNHm2TR88rTVM+9alP8Z73vKeFLIwxTKfT1m1DKcXy8jJaa0ajUbvg3+12Mca00R/BrSK8Vtc1WZYxnU7pdDptTEgcx+21zceAdDodDg8POTw8/CP3Zj7Opdvtto4XwYkiwCLBEaMoCpRSD7mJbG9v0+12OXfuHIPBgK997WuMRiO2trZasCRABeH8QhSPUoq6rlvAQQjRFoX1er22WCe4fnzhC1/gz/25P8elS5e4evVqO4+4trZGVVWMRqOHQJL79++3cT8hciXAMUAbmeKco9vttjE5URS17TubzdrzmIdw6rpu4ZXgrhEiZdbW1hiPx+21hHsZol4CRBRFEbPZ7KHImxDF0ul0EEK0fSU4ewghWmePAPcE8CSO47YPdbvdti2KomAwGLR9tigKyrJs+31oo9A+wTkl9IWF48dCC/30ylrDZDSl1DWdlWOkvR5Hd24TRwlR1qOqDFmSodKIsoLR3jbUM7bOnaO/tsq9W7e4e+suKytLSDQiSuhvLNE7doJZPmPnyptUozGDOGWwucXq+ccoXMzv/fN/yeHd+zz27ndx8uJ53vrKl6iOJiyvDjj5xBOM9g65f/MWxx87w9L6CYqyoDw4YHl9HV1U3LvxFq6uOXfxLK+88CaxiriwlDGqHFePcmrhiCNf6FEb7zpxx1hupIpVVXG8G1EbSwxUhSFRCpTDzaYc3T1kuLXC4Z0xxso2UMQ5758hGuhDSPEAAAGw3skixH8gZOMSAgQcRAhU4z5hm6IS5zyMQIuCOJQUGOOBBoR361Bz0IcSjqgBRKzwwIkU864fNEelcVbwC/A0i+NSCnACg2S4scLs8BA7LamNxRqLdI7EGcBS1assf+SvsHSpw86tHdyspLuyDPEueX2I0GArh3aOUluOKstubclt8HiwAXlpnD/CTIKlNjWpFJxZG7I9mWK0b+2ZdRTCYRxMG3eNSHgYQ+JQzrscKAGqAV6iB5UigEA6P8dh/a3BNnCHFj5uRuPdI1oA1DcPiRDEzf5DJI7zd2ouPqa5N037ZllKp7OElDllZcit4fzZxzFCsXnuEum3vsrNt66xevoqQkqq6Zhuf8j+QclkdMRkMiGKYrQ27O1uMx1P2d07ZO9gxP2DEUQJUiYkpiZKvJOtUIK+jHl8dcBw0OXq/RFCaN5zbpWPfuqjXPnyN9i5M2Z0fx9TV1jjMBZ041DrREPDWA/RKASJg41uhJ1MEFhsbRnvTb0Dh3BYYzHSUU0LyrwgSTKckd7FOHQ7Zz24ISVYENbDDjRglLMOKZvbJH1/tbUvoBJSIKyPTTbGEmcReqah1jjh719eO94Yl4ys71mlFkjhiBp3304aYbWg0Jo0zZiMZtTjGUtLCZ3lPiQJNMV5aRxx+vwxtIV7d/ZxWlCVM548s4a1NbW2LGUpdWmIpG+nJI2YjnOII8BgpaDSjl4q0SJmnNd0ogijDWUFtdSURhMJQewUaaTQBvpKkGHodyPy0j+3H+U147okWl0mFYozS4oYx+5+Tpx2EHnBRiZQiaKsLZkUqFQxK3MGvSF1njPsxMRKMJ7Wfg4Tg/IznFhjSPoZu3tThIO0G6OSyH9K3QNQ/p3SAvz4CZAQgs3NTT7+8Y/zyiuvPBTTEn7+vQ4EnnzySU6cOPHQaz+o48epU6f4a3/tr7Xk/g+rkydP8ku/9Et88Ytf5ODg4Hve7vHHH1+4OvwE6k8C/Dh79uyibyy00EIL/ZTLV39YnB9xN4UUFovBOgPGegDEWaQBqhJXV8SdGCElsRTUVYEuZ5jRDkoJ0KKp8BAgvROGqS3EKWqYoKrSAw6DJbL1dYqdu+jZCIQikiD6A2SaIuMYbUqsdn6RHB/rgrIIEWFF7bNmdUWdT7B1RZR1UR2f0auLKUpbkm6Pus6pTEGaZti05AA/wK7rkhoLNkI4R6Utk+llXtqzXNNdZsyYTHOOba3xta9e4eadu6hhSZwZ3v/eU/zin/kAVTFj+94hb7xym5WVVXbuT3jz9THdrMfWmTUeu3SMk+fXGJ5YorfW497rh3zun7/KrJhx4tiAYxeWGWx0eetzL/KE+SD3tv8Zp08KoshQaUOcRIg62H76WhlHBViss36ixfr2Fk76PyJqqjIrVJo2lTcOi0UJSTnaZ7Z9CzPsko8O6AyWG7BHgjUgJFGS0j92iu2X/oDaGG+dOtunmhxx8uw6L750s8lUxRP6publl68zmeX80p/9CEr5Xa2uDJhNCoq8oDZw/euv8rWvvMlbb96mrjRCJEggjTXRra/yla99i71RyVTDYax5q3DsFpaZgbLJt+0I38cqfIZtLByREG31UCIElYPh0oC/9Ct/lT/7F/8yaZrMOW+EMqR/s7TW7O/vIaNQ0WGaSTEoK8Esn3Fn+wa///Xf4UPv+Rl+9jN/ihOnz6DCwt8fGWe4ua8CoRQyNkROImTGE088wX/+f/nPOXnyBH/3v/5vORqNH9pqoYUWWmih702PjmWNMe1i/qMABfAQJDEffzIPVrwdxBG2CV+/lzmmsK9HnTq+1+iZt/v+7cCVACTkec5gMCCO4zaG4+3ia6Io4v3vfz/vfe9723musJAe4Izg5KG1Zjwe45xjMBgwGAxI05SiKFrHiMFg0Lb7PHDR7/db94gQQZOmaRs3EpwpkiTh3Llz7OzseBDWGHq9Xvt328SyBYeMpaWlFhgJDhpxHKO1bqNbAvxw7949Dg8P2dra4ty5cyil+M53vkOapvT7feq6fuhePgrWFEXRtutsNkMpxdHREXmes7Ky0kIv3W6Xs2fP8q/+1b/iwx/+MJcuXeKtt95qoYX594V2CpCKUqqNEQruJuH+BRgly7L2moQQjEajNsqm3+8znU7p9Xp0u13yPG/dPubbIkmSti2LoiCKInq9Hnmeo5Si1+uhlGpjZIA2Nqjf75NlWQuozN8bgOvXr7
O0tNTG+Wit28garXUbIRTOO2wb+qi1lvF43F5/AGyyLANoz2symdDtdlsoJIAqCy200E+v6tqAcqyuruOKMfffOiTrDDHWkc8mDJZXiOOY6aygmEzpZjGDrfPkleW1b38HaSqWs4TIaKpa01vdgKzPtdfepB4dktgpadxheOoia2fOcufmDW69eYUsS/nQz32cuDfgi//4nyKLktNnT7Fx/jFuvPoK+dGY9VNn0JXhztU3AEtvbZPp0YTd62+SZYqtJy9z+9pVkrTDcjpm28JBoUkjibQwNZayAS/qZrH/5aJmS0UsZ4IEh3EOow21k3S7CbiC0bXb9N53lt5KwuH27IGTRuPwEX6VuWbB3C9qN4UwEh8va237xvY5oR0+C6ykyXzxhSdSgDEBZvWbSiEwCKSzKOl/YAxo4cEP3QAUPmq4cbVosk88lNI8dzXWIaKJNwnIggBqbdg/mmHymsr6eJkai3ACrEQ5R5Lv8o3/6r9i9fHH2fn6H3DqM3+KpeM1+stfpdQHFHWNNo5SGybacWAcRw4KLK6BVSS08STh2MqTAJxaGWIlHE5qX/whoIePVimdj1zRNLEszrVuHg5vzxH2xwMUo4mVoQFpPLlhhWsjdWRjwKLwfyI8kBMjkcHNo3ElCc0qhO8HTgR/kTAbAZE17O6MiaOSpJMQS0WnP2Q6OqK/cgonEl59+XW0dWyePIlQCXfv7XBvZ5eirOh1UuJY+sKwKGF//ya7+yPGeYlKM2R3gJ6W9JOUui4paktVGQaJRYmczWOnGI3GLPVinjq1xitf/Aq337zG5HDCbFKhbChU82CRE80zrPMgjgIyIUgkvP+Zy9y5+iqJsRTTGqzxYI2yCKUQRjA9mBKnMSYymKrGGf+MpxQI1TgECnDWts65oU8L2QBO1uGMheDmp0QTR2ebniNQSUY5yVGx8/N2MmL7qOLqrEbGMQMhEdbQTyPWBhlL3YRIeI+ZWEpMVTJc7pJFEqsdh3tj4n6XJO0SKYhjHwdVGs1wdYnt27sMeh0mRUVRaCIpiWNIY4WVljiKKKa5/2xZTZz4eMCZNiiZkjuoyprByoASyGtNXtSkcUS/k2G0xhlDN5H0FQyXO4zynO6gy/5RgXCGjeUB8bGzMDqkJ8aUxpF1MoqiYqUn0U4yHhckSqETSVXWRCpjNJqytT5AA5O8QleaTte7FOZHOQpDpxNzeDjFFhVrq31IYw6PCrQ2RN1eLlQAAQAASURBVPE7j2kswI+fEK2srPALv/AL9Hq9Ns8y6MSJE9y+fft72s/HP/7xt508+H4dP4QQ/PW//tc5duzY97Xdv2mfn/3sZ/l7f+/v8bu/+7vf83bnz59/x85hoT85aa1/5FEvFy5c+JHuf6GFFlpooR+/jDMYa3B2ruqzHeI6H1/SWB7GQkBRgC7RRUmcdnHa4fIZ1DMom0zwJMOUE+94EYEjoj44oE5S4qUOIk1xDkyZE/X7JIM13HgfU00RuUCmKSJSRN0O1hhMmePqGiElqAisQUUdpIqQpsAJhakrXF2DgajXRSQdpK5xuibqDkjSGXVREEWCOHHs5wZjNTNbkERddFVgqwJjT3LlXoe3dJepczgp+cpXXmLzkiA5rbn0OMRRCgg2NmBts0dZRKxtLXH8ZJfB0oDZrOTyjQlLW2coI3CRJMex/cJ9blzf4dy5FR4/P+D2myXXX77LV2+9SZ3XmLomUX1OuGfZO/wakZDUxYzh+goqTjBG+0odJ320i9W4MDFuvc2lMTXGRv6rETiRIuOkifJp7DyNppqMuP17/4Kz/85fwjmDsxrh4qYipIGApGJw/CRCxRR1QWQtsRLcv/o6p86cpNuVHDu+zngyY383pzfsU5UVN6/e57//b/855y6c4CMfWUIqxWgy49atI1597S5XruxhtWN8dIRSMZ2uYnkg2Zi9iXjrW7x1VGC6cBfDlYllon32btVMDsXNdcz8HBX9SJCqZsHLOkoHtQEZKX7mox/jZz/7p0nSlKquEYUCWaOiCBkGh98FAAluKqPDQ268dZ04i9lY3yTtpEglMK5GOV+qo5Rilpf8s3/163ztW1/k48//HB//1Gc4dvLkXLX2vD1oKBWy3qtTgowFrjLESnD8+DH+w//gP2JpeZn/59/6O9y/v/tH+ZGFFlpooYW+J81HdWit2wiN+QgW8HM78++dX3yed/v4blDHo3NFbxfpEjQPiDwa8xK2mT/O2zkXzIMj89sH2CM4MxhjiKKIPM+ZTCYPXcP88aSUnDlzhueff57hcNg6KGxubrZAQIjbqOuayWTSRqY45yiKonXHCBCAEIKNjY2HokeC84cQgjiO2+Ks0N7B7SOACQcHBy0gEe5LgCvqum6hixAVEscxo9GohSiC40SINQmAQJgHPH36NKurq4xGI65cucJwOCRJktapJFzXvAL40uv1GA6HbXuGdg5OGOG9p0+f5vOf/zyf//zn+ZVf+RUee+wxvvzlL7O+vt7CHGXpXb7CtYT7G2JY1tfX2d7ebuGKAIUEZ4vpdEqapq1jilKqvQ8BsOh0OvR6vRa2CJ+J0A86nU4L/BRF0YIWIY4l3IcQSdPr9ZjNZm0bRFHUzpFKKcmyjBMnTnB0dNQCJKGfhqiZ1dXV9nrm+/JsNqPT6ZBl2UOuJfN9JDjNhLYLsTjhnoU/Cy200E+nVKQ4c+EJ7l57i7qoWN9Yp56VTKcTOktDkBHbd++QpB3Wj62RdWNu3bjN3cOcNBJsLC8hpzMwhrTbZe/ggPrWbXq9mJWVFCEHxEvrRINlrr3+Kvvb99lYX6HT73P71dfY376PKEsuvvtJustDrr/yMkoIljZPIJMYXU4ZrKyS9ZfZvn2T/GjCmaeeJur0ufX6qxztjxj2EzpZQiQnrKeSqVXszMoGeqBxd/CL23s1XK1hvTBsdWKcMVSlRmQJDkmcJLiiZu/KDsP1VWaHBVpYytJDHggPT1ghkM4X/AjpF7MtFufAGT/Idji/8C0Ubf1JAyJIAQLb5rnYtoDIL5Y7ZOP04SUchLwSJQXSPnDRoAE1QlyMQCAscz+niUGRDajg/00PEMTunT0i2RjISodqHGctMNMWUY8o3/xn7L0icFIh39xhfXmLt155jfHBjFobtHEUxnCkYd/AzPntvTvHgxgW0SATCsik4MxyHxUr3to+8tfOgzgbhSBuHv8ac1sMPtpGO4t2HgYJFxgAjdaIo/m7wLUxJhK8c4gI98DDCNI2oEcAZ5xo3XqDU0sb7dPAKf6E/fzJyrBHd9DD1BZdl/RWNtAi48Zb1ymKGpl0efP2HrujnKee9nG+o0mOkilVcUg3i8jHU8qqYjqrqIqayXhGUWni9eM88+HP8MLnP0dHHTGrcvKiJFOOx4/1+blPfJjt7UNO9hRbaUTPWfauXMWMp7hSQ1WTKEEtPEjjpEU7i3QCZSUxln4c040kk9xHgcwOxqRC+TlOY/ykkQPZ5NsIK9ClQVbaN4V1qEh5dw+8a4effBHtV6sNUSyhcaoRDczkjEFJ5R1yhJ9HFc56l6FpiRQGGUXUpWVaWt46r
JgSESPIEoWx/vPX66aICDqdGIkiqiuck0TWoCLlHYZrTT0qKVVFJ/OgrHCwP5kRi5SVpR7jozGpizDGgalRQjKtLDJWlFWFwNFNoZt1OTw4opul5JVhb1L74i/rmE5mTIuSSpcMlwaIKCGNJNNxzSCJSCLL0lKXvaMJxioO9mY4Y4mUQ+kasXuNojQUzjWORZaOEmgERa39J8Qa33elxFQVgzQiryvGU40Skk6WkiQxs9GEWDo6WYe80pSzis2NLqQxR1PDtCqRUeT/fXuHtQA/fkIUxzGf+tSnePe73/1Hol4+9KEP8Q//4T/8nvbzyU9+8m2rQr5fx4/3vOc9/Nk/+2e/r22+F21tbfGX//Jf5ktf+tL3fE6Lxf2fTFVV1Vp8/igkhODixYsLx4+FFlpooZ9yWWuwWmOt8REVUuGMxlrdkOBNiKRzdIT2g1dTYY2hLmvIErLVJWL62MkO1mhkt4fLeph85MeTToCMcHmOzSJkpBCRwtY1zoLq9Yl1TV2MMUWJKmuUksg4Js4yrC6xtfZEvfY2iVIokAIpYlwMzmqsrtHlGKEcKu4g0h7OaJCKKOuRGketJL1UsZMbjPCGo1mvj0sMRvfYvv8EVyeWcdJHKMfyWUv39Ijt7SnFWz30WFLVE5SKEXKf+l7E40+tMhodIZzgtDpHWRZ0+5py/w5vvjpiZX2V46eHZMpgDyvEqKSz2uPS5VNsrA8pRte5sTvD6IqXI8lm/wL7u99hZejoJLGvxokjf08sCOXH6rq23uqyqX6xWmNMRW1jpNE4LXDaQqxwxlcLOeOtQ0wxZfTqC/CpEawew2gNVKikQ+NjCc6xtHkaRESlc4RT2HpGdf0NznzmffS7HS48fpppPmN7cMS7338J5yyvvXyN6zfuc/W1m4z2xxAl3Lm5R60rai3QtUYQU1YzlvprXLywQvfoZcYvfYudw4LcWa5PLDdrP+ESSZ9bC83EBlA2iwYd4Z9ZSuMY9CJKbTCVRQvBB595N5/8+T+FUBF5UZKkKWWlcdaQJDFZJ0MlMUVZcnh4wOHhIdPpiNl0ymw6pa4r8tmM73z7mxzujn1uanEHGSnSTkav3yVKFA6LkqLJjE25UU35B7/xP/Dlr/8uH3nuZ/nZn/+3WDm2DtBmKIPzkxChasU2/buREIKV5VX+/V/5VdbW1vmb/7f/Ozdu3CHkKC+00EILLfT9K4AAYUHdWtuCHmFxeD7eZd7l41FYImwbvgbNwyTzUMbb/by15547j++mt4uFeTsFwCIs1AcoIM/ztnDkUReTcOyVlRV+9md/lk6nw2QyaaM4JpNJ674RonJCG4U4mbDwfnR01Lp1zMMiIYIjLP4H94+6rtFas7q6SlEUVFVFURR0Op3W4TQctygK/7u/02mBiABohK8h7iW4PoT4kgCrhPPM85zt7W3SNG1jXt544w12dnZ47LHHqKqKXq+HtZY0Tf8I+BNcTCaTCdZasixrXUfW19dbl47JZEKSJAwGA5544gk+//nP89GPfpQnn3ySN9988yHIKI5jptMpAFmWMRwO23m9LMva6+x2u6RpinOOvb09gDZSRQjRxqTMwxbhvoT5yziO27YJwEaAhELfCXExzjkODw+J4xgpZduG4TP0aBt0u92H4JRwDkKIFtIZDoet20sAXfI8b6GSEN8SjhlibcL9D3/P87x1aAnnvLOz07rNBJeRhRZa6KdTURTztS99mV5vwJmtDfKDAw72R/TXNjBCUJYFyytLDIZLSKnYvnmLvbu7rK+uYp0lMhUqS1k9c55r129QVDUbq0vE0pANlol6yxgVc/vKVYQp2Tq1QX+4xs72XWb7dzl+bJXTlz9KYTXbb10lSrtk/S7TvEJmHWTaA9Xh1ptXgIozFy9zNDpk581v0ht0OXHuLHvb28RJzOZSh72ZYWd/6iNkhaQyBitAOf+ndo5XC8PJyDGIJX3ln1nqSlNqRb8XUdU10+1DsmGPtJ9Q7RUo5eNYHqxlexJB4GNgsa5xmZA4LD4VJri48hD04WgWyqXyzp/ONu/xIIESyoMXTZGDQCClQ1iJMwYpHSqSqHrO0VIEqMK150jjWCFd8Pho3LcA4zzoYJzF1Q6r/MJ7kiqiSJDGEcJZTGnJixJnLRaFcIZ7f/AP2P6CwLgcY4WHMKwjd4KRtoytoLCNq8acpPP3JBbQUY6zq136fcVLt0eUxjBcHlKXNbMyR6J84Y1rnh3m4nHMXDs+uKrma3A3Efj44NbpxL+7SbhBWL+fAMnIZo6h3ZdwD1wq2jv34HiyOa8aDzCM85Is6/L6W2/y2NOn6G2uMZnNePnV1/nmd15lPMo5OpxS5BWIK0wmM45tHiONYX19DZ0fomSMLQsG/R57ByOskERpxrMf+zSrpz5AIn6HjpL0IsEsUpxZjvnTv/hpnv8L/ye+8ff/Owb7t+i5mvz+PUyusbV3s5k5x64WHNSW0nqGw//xEUARjkT42JzVfhdZTjGTGSiwxqBEaN3w3C28e6tt3DoUSPWglSQOGSksxreX8dHSkZI+rsVoxqVhtZt6uEnJplM+iMmWkfDzq06gMoXRjrxy3J9p3phU1CqhFymk9fNBcZJyOC1YzhTD9SG61kgRIf7/7P1psGbJfd6J/TLzrO9+t1puLV3VVdX7AnQTIAgSBAiCMIkRSVGURDLGsmhJ1jg8Hsf4Cz95YqxxjEOOUIQV8u6xzZElUXtoZiiZFEgCBAgSJHag0Y2u7uqufbv7u54tF3/Ic07dKhSApsiW2I33iaiqdzlrnnNvncx8/r/HFZTWsX+QkSYhGoGpSgajPtPpgp7RVBbKrCIMHfMiJ5LQDRwyCohCwf5MYxD0ux1MtmB1kCCFoshL4kCx0JrcCAaDHh3pyOcVk1lOHAhUJ2JlbYRGUkym9JVmOIiI0j6LyYKkk7I/LhgkAZWFIsvod2NuHyzY2c+I05SqqDg68H2IQjsCJbDG4pxEqRhnSkaDBCchKzQKhxKWIJBMx3OoCtI0ZFEWjCcFw0GCkwGLWcnBbEHcGyLlFvJtqJJaGj/eQbpw4QIXLlzg8uXL7Wdnz55lY2Pj25ZtOgaHneFHjx7lmWee+bZlm87KW1UQBPz1v/7XOXbs2J/6pLoQgl/6pV/i7/29v8dLL730ltZZGj/emSrLsh0UeDt09OhR1tfX37btL7XUUkst9WdBDmMN1mgQFimVx0E2VQb4Tox1gNF0hUM66ztJ1qB1hisVwcqItNOlmm6DdbiiQhdzqtkEigIcBIny5QmA6naIVoboyRRnKlSaonodnDUYbbBFhkoThBLIOCLQXZwTmHKBMx7taIVCKAlSIlG4IPH46HxONZ8gugIVxTgtcRiCpEtoBMYZukoShQqrIlIhsbnBWUm+6HLptmA6OoUuK1bOFQRH97nzWkqQn+PoSpej5wZsPrJCbzTgG1+6SllmvPzyNwiDkCeefIooiemvrJAtFqwfO8IHPjbgzpUdrr+xx+d/+wqbJ4YM17pUpWZ4rMP2tuQDP34epQJef/UmUwtvZIYTxTkms4tEaz2KvCKMI0Sg
QFvQntBiXJ252wxM1NfKOoM2GltYgn6ADEKs1UhhccK763WRe0rK+ADZDPA4i7W6noTwHeHukeNoBGVZoKIEYQyD+T6jYZ/RMKHTSwnigE6nR9qNCKOIx595hN29KcdPrNLpJHz2s6/y5JMnGK12MAamkwXXb+zT6fa4cGGVkdpi++t/wN39jFI43sRytfSDE51AMK+zcpXwt1BhHRJBp8abTmuDiMs02kFpYGU04NkXXqDQhq2t2xijyfI5YRixvbPN3u5drl27wpuXX+f2nZvMF3MW2YJSF56sYgRaV8ync8bjfUQQYKxjvsh8RdN4wkEQknRjesMOMoRQhSiVo4KQosyYXZ5x9c5lvviV3+dn/twv8OIHPkhSZ94jJE5IcNZXNxnpDVdG1maskigQ9Hodfvo/+hk6acp/8V/+La5cvV5jcO+pSVNeaqmlllrq4Tps2Dhs+Hgw8qX57vA6D74+bOpo1j28zuGYlcOvHzR7PCzG5WHEkO8UIXPYiHB4G42JoDEBAG18xuHtPvi62+3y0Y9+lJMnT7YRH8YY5vM5o9GojedoiBoN+SMMwzbeZTqdtvSFxtzRrNfErjjnKMuSTqfDfD4nCAJmsxl7e3sIIYjjuKVGpGmKtRatNUmSMJ/PiaKojYI5bBxoXsdxTJZlbQzI2tpaa3jpdrvEccxkMmGxWLC/v89gMODs2bOkacorr7xClmWsra0RhuF9cSVN+zfXsjGiNPSRho4CtGSTxpDQxLicP3+eK1eu8Du/8zv88i//Mk899RRf//rXmU6nrfEiDMPWNNPEmzTED2NMa5jY399vTT1NG/R6vfYaDAYDer0eZVm26/V6PWazWbtuVVWkaUoQBG28y+FtDgaD+8wVzbXQWjMcDtvYl+a+a6714evTGDmamJkmrmU0GrG9vd1GzMxms9bc0rRvs+9mHw0FpiEu53lOv99Ha83Ozg6dTqe9L5vrdXBwcN+9v9RSS727tJjNWBmtsdoNObhzB1sWJEkfi0NrS6eb0E9Dqjxj9+4u1WLO8Y0eNpsQKOjEa4TDFV579XVub+9z/NgRTKXpbqwTD1ZZZDk3L77K2rFN1o+tYLRl6+pVsoN9zj31JIOjJ9m+eZ2dG9dZP3OGKImYT+cM1ldJU///1tb1a6ysrxAPR+zs3GF8e4uVY5v0V/rceOMyk4Mpa6sd9qcZk70ca320y6LyM/meOoGne0rBnjZcLAWreUXSiwidwxjNYuaI4y5hHGJ1xeTGNr21DrPdHFsTXC2unfRv+o/NY4RDIoTzsbENQQNZmwXuTZy7Zh1BTSPFT6BDi/ho9qFtPbbkPDkhkJIS70iQwk+sevqIX862VExvOhG1AcV/4o8SfGSKxhEg2vYKlEAGASIMGBxdoTNIEWXF9rXbjMc51lgEEkuFrizGeoOFcVBZwdxa5k6QWX+CtQXDn5LzJI9AwDAUPHZ0RNqP+fq1XSpjCJQnP4ShICgFxlpvpKkPXNSFH0oKZGM8aAJXXHMlRL3cPcJIPSKHqzknjYfBb07cI4IcNn0cYpM44aNyGiJLs20JKCkw1o+jxFHMo6ePcvk1w+jYBqdf/DC3vvZNLr9xg/FkRhiEfmxGSfYO9kk7KQiHcxWuEjgrcEKTdnpM53PCIAAlCDt9nv6Bj3DjumOlE7LR6zNbzDk1TPjzH3uRD/3yr+DUkGp3h7UkgArGk4w8r5jnFXuV4+Jc89pcs6gLmYSTxAJviMIbOxLnELrkyQtn6Mcxu3mG6gWeyuGaG9O1xW4+skVinUU6h9UgpEMJiZACJ/z185FDtYlH+OujZMAg8YQyWxk/liYcUtaRL8IRxn7syNXrVZVFu4DLkxljIUnqcazcOPpJSCANg7hDYCv2t/dYWRtQaUOn22OxPyeUzX1qGfY6GGPp1PEp1oIyMFksWO0pkkAQhQ4pI7TRCCVY7/Y8vSNVpLFicrCgMJ4kVBpBJRSDbkwxX7A7WbAyiOl2YyqrqYoFSRgRBoL+yhA5XOH25VsMezF7B3PW+hFWCPb2ctI4Yneegwrp9QOksfQ6CuMCZuOC7iChstCJQqyD+Syjk4ZkeYUMFTiIlSSKQ0pdoauKQRpSOUeRVfQ7ATKQVMZyZ29BmCT0gtATcOSS+PF9rQYb2HRenHO8+OKLxHH8bcueP3+eNE35+te/3n72gz/4g4xGo4dGvfxxiB8vvPACH/nIRwiCt+f26XQ6/Mqv/Ap/7a/9tbd0XEvjxztTTYbp26Vm8GNJ/FhqqaWWenfL6gprjY8PQdJkiEpU3Tl04AzCOSJToeIIN6s7+xZsmeFCiewd8/mr1iBDiQoUejomjBQ2L1Bx7DviUYgKI1Tao5pMMNkClcSoJMbpCpflUBXYPEN2OohQIMMIqY2fkEeD1pgiRwYRKIFUviMrZEAQ9agWB+TzMUmwhoxinAhQkSJAoMcHCF1xbNj3ebdCYrTGGMssu8A2HYqwx7FHMo6+x9ENnuVDTz/OsQvrjI72uH1xzBtv3mE+y3nxw+cwi9scPXmK7Tu3WCzmHByMmd68Rpp0OXHmJMONIcP1IccfPUJ/rUs5z5lNcqwJEYHCGMfGsSFnLxxhMinY3drnllGcSi9wd/sbjIZdbABlXiJr4wwyxJBhbdl2/I0VNarToHC+A2ohSHoeRSkEUiqM9TQXnVfoqsSM90E4pAiAZtKpdpE46Iw2cCplPN4nHVoiaci2bnH1lW/wC3/lY0BKlmu0gaIomU4zpvsLhoMeaZIQxQErKwlZtiDOA8JIsnFswCIrGQw67G+9zvzySxzs7jMHrhjLDe0nEEIhWGiPQZVAWJs+AilYS0MCDFnpB5sCAQvjKDUgHHf3J3zxy18lSTrM5hPca6+wu7vL9WtvcPnqJaazCUJIoiQhUMqjY53Gutp8pF1N/MgwxiGUxFqHqQzW+IGzHM18lnGwP6HTS+gP+4RxRKAcUgnC0GCqim++/nWu/jdX+MCXf4Sf+elf4Mxjj7UDAwiJcxEyEEgt0LrEoXBCYEyFUJAkMT/xsY9jreO/+N/+La5evXGfzSOK6gpe/ceLnlxqqaWW+n7QwyJN4F6RTzPJ3ix7ePnDk/0PxrA8uExjuGhoGA0R4vAkdvO6WbbRwwweDx77w4woD67XnNfDImMeJHw03zVxKx/84Ad58skniaKonbAH2riOxvTREB2aifxOp0Oe50gpWV9fZzqdtkaTpg3AF0CNRiOKosBay8HBAWVZtuYRIQSz2aw1iPR6PaqqoixLwjBkPB4ThmG77SZmpjmvXq9HlmWtuaChRjTkkuY4mliT8XjcxqecOHECrTXf+MY3CMOQo0ePtjEkjRGiuZZNmyilWpNJs+/G1NLErzQmjoODAzqdDlJKzpw5w+/93u/xoQ99iAsXLvDmm29SVVVLQ0mSpJ4kqNqImqb9mvvqcKRO8344HLKzs9OOe85mszZWpTFfNDSV0Wh0n9FDCNEaKBpTTxRFbdtHUUQQBIzH4/Y+q6qqNas0911jCmkMP82x53lOkiQMh8P2GkwmE/r9PlVVYa1tSSaj0ai9N5p
tN8fY6/Xuo6M08TWN8ae5b4fDYWsomc/n9/3sLLXUUu8uhWFIKiqmW2NkIOmurDLNKkJbMegPSdOE6XhMdjBGmpLhoIswBSQhg81TZNpx9Y2r2MpwajUlURUbpx6BKGZrZ4/d61c5/sgp1o8fo9SGO5dfJ1aOJ198Dwdb23zrC18izxY8/sEPE6QRb7zyTbCG3midxWJBucjZPHsaKwQHu3voynDs7DmiNOXmGxdxuuTI8Q32dvbQOEb9iJk27JQaJcBZTyDQwmGgNQG8UViOhophYViLJVIAxpLNK/q9CCpPuihmAVESYE0FSuCc8H3rxrkhfDSLgLq/a/0enB9bkHVExr3FJdYZhPCT5jiJQPhjdQ7jDK1BxNX2A3coFsb55QIp0MbHdUi8h8GHaNSRJU0hkhCtCUPUpglTR8BqQGAwViKcNzfYeUmcgghTTjz5DOtHB7z2uS8y+/JFtNEoBNo6CucweJqIBUogt4KFsxRNLou4F1Uj8RTSroTnzxzh9LkNfuuPLjPPDaF0IARFMW/HXZQEU59yYyDxtDkB2NoY0zpu6le2vSaijhNpxmdaPsihf5pImHrkpiV63Gs810BYPB2j+b7ZrvUmBRyMJ/vM5lM++MEfYOXME0TREdJ0FYlkMS+IAoMRnkJ77OgaCMm8MHTXVrAmRzvQZYnOS4IgpqqmxFHA8Mgmp8+d54u/9c853hc8ee4EZT6jGh9w7ORpkmMXuPOZT2IPruN0xXyWM8tKMm3ZMY5vzSouTitK50ijgEVVEUtJpAJ0ZYgQrCrF0ViBkoRyxq1LM9aT0P9MNO3sanKKcLXpBoTBR7TU5gxZE25kKIj7KTq3mKLEWv8s66wFFDhNqOpnQuWJK+B8EZ0zyFBRaYMK/DLWOYwM2ZpPuTSvsEGEFRJtHWEQMC8NK0nCsBPQTVKM0WANTimKyhHIkMpZKq3RFhaLkjRWVNJ5coYQBNIRd0KSWDDoRBgc2UKTFSWDbgh6wagb0U0TykzjRICucrq9hMm0oh/FlNMZs3nGxrCDShTzRUkcB3SjkLIoUBh00Ge6MyeOIybznNVRihOS/b0FaRQxy0tkNyUzEXYxZaMrCDohi2lOHAZUeUEcSVygmM8KkjSkNJoojOo4KEsYR1jr0EVFPw2w1lBkBWkgUR2FDb3hpHKOzVGXJAkRzvJ2POotjR/vQHU6HdI0ZT6f8+KLL3Lr1q1vW2Z9fZ3hcHif8eN973sf/X7/25ZtciXfiqIo4iMf+Qjnzp17WyfUP/rRj/Lss8/yla985Xsez+rq6tt2HEu9fWqybN8uNcaPpZZaaqml3r1ygDUWYw1Chi370TrXEiAQDoNBlgtkniM7HcyOJ04ACGex2RST9UAodJajqwLRU1AVyDRGlxVR18dqSOWjMYSSiDBC5zmqKAjSFBlHKGOxlcYWJYQBMvSkCyFBqhBrDBaHsxWyqpA6xKk6415JVBTiTJcym5Ht3yUZrqHiLlpDaSoW8ykuczyykSCl7wDrfA5ixNaeYvX5R3nxx18gPJJgEuiv9Mj2Sr70u29ycHDA8c0hbmHYurPg6jduEsdj/vLTP0gUhMRpyt2tu/yf/49/mz/3M7/ABz/2If+8J2G4PuADH3uKMi+58q0b3Liyw+9+8lsMBj0QlhOPDNDVCf7ocyXTLGOmunCwwmQxR0pHkU0Yrq0gCbC28uMCToIxWOOrSGydhSutQiGoqpIoje5hPa3xCzkwlcYajd3bQhjjaR9GI2zoY3TqkYTOcJUg7lIsYJEU9HshUX7AtX/xf2cmYtT6CfqbJ+hvHGW0ts6p8yd48b3vw1qJkwF5XvGe5y5w99YeuS7Y2Z1QlYannjrC73320wS7N+nt3CRKJBdnmtvGtYMSRT0Q09yrha0RnonEOctcOxaeqElRr+eahZ3lpZe/iZCWvCjY3rqDrhaoAKQKUKEkCEKyfO7vcwTSSbQxWKsxFow26NIQKJ9nn5floUGv+mfFgigsi/mMyf6C/jClN+ijQp85HQSSKE6odMmn/uA3ufj6y/zsJ36Jj3z8J0k6KQ2vAxUSdDwuV+c5zkGJJKyvbRSGfPxjH8dh+d/8l/8V1w/FvhRl/nb/qlhqqaWWekfrsJHiYdG9Dxub+U4mj8MmkMOGD7hHjm3oB4eNCY1ZoJmEf5hp47B5o1nmsEHksDHl8OvD3zfGhsPH9+D2DysIAp599lmeeeaZlvRRlmVroADY2dlhNBoBvgBlMpm0ZpbxeIzWmn6/z3w+byfhm3Y6TKqIoug+k0MTCVOWZUu9iKKoncxviBTOuZbi0RAhqqqi1+u1RpQ4jtsiL611u70mpqWhQXQ6HabTKdevXycIAs6ePUu/32dra4ubN2/S6/UIw5A8z4njuKVFPBgH1JAnwjBsyST7+/ttXEqznlKqjSkBOHPmDFevXuW3fuu3+Jt/82/y+OOP89WvfrU1LhRF0RJOmhidsizbGJsmrqgxQwwGg9YE0hBUnHNtXA/Q3oeNQaeqqtZgcriQrSiK9h5prmNjqqmqqo2eMca0xovmmJp/e73efefrnKPb7bb7aNqtiZwpigJjDJ1Op6WnNIabxvwhpWQ+n7eEjzAM7zN/AC3ZpNPptJE4jclqqaWWevfK6Aoqw/qxdfJKY0TEcCVBygBnDLt3d1kcjIkjRZKk2HzMYGOTcDTkzq1bLHb3SaQkSBX9I8cYndhknpcsdnbZvXGTE48+StxLuX31Kvu3bjNYX2H11Hn2t7bZvnqdUb+HPHaGy69+i2y6z6nzF0j7fbZuXCXUFaeffYYwDNi5dQdhNINhjyKbcfP1iySDEaMjRzBCYuOI9OoO3bjLsydWmXztJjcXBdZBZR3WNSYCgRWOqXG8mlmOCegGkkR6Y0iQa8Sgh0wrIqeoZguiNEHnFmcdxjkPYRUCXceqCFHbG2oygvCjLW3MS2PgsI1hgKYH68c5rPPH55xDCk+maH7zCoTvMHPPvCARVNaBUDhh6y2Cta593RBBjN+Ip2I4i0OCgNw6cgt96SNqNRahPd3VZBVXXruFjVbprRwhL/yAwXCtx3DU4c7NAxZlhfMlRVgHpYUCwcI5tPNUDNWaPwRKOgZKcP7YgCCN+M0/fJPJdIGS9+iboqaUIOr38pBp17l7AxU1/aR59HTS01u9meOem6P2vNwz+3JorKMmRlhc+4VrXB7teEVt/G1woQKcEK2ZxTTrISisZjab8hf+0/81W9eucufiq4g8J4kkQlqSJEI7izWWeZbR76Y4PWc+kcRKYIoSrSucECxmE8JQ0U87nH3iedJolfUw4/y5Ezz77KOEynHr9ddxVcXizTd589P/isXuPlVZkC0M0xJ2K8HLM8ulqSY3lkEkiZIQ6RypgmlRkTrHuhIcDSS9WDHoKh5//DFe+a0vEIYWIb1xV9T3knMW6/BFP875uCLvSkIJ4QktEpAOnZeYwmF1hZQhzppDBVISDDhZG6Zkc1c7nBI+4sjVZhOlqCxMC8Obk5J9BIXz1y2JAgIniAMIQoUKFGkvRoU+ztBOC5w2FGWJE5CGAZU1pN
KP5alAUCHIsoq1fkCAo99JcK6qC8I0x9e6GFOSxBEoye3tMYmKKK2h3w2prOZoR1KUOfOZH3NCOrIStHZ0e5LJZIrRGqcSzK0x3X7MYjojDRXTaYmSsL4yYDxbEKYJJQlmMiUSFTaIubszY60fESgQVnhOj3Z00xghrH92DyJ0taC/vs7e9gybF6RRMzjtjzWIIwpTkS0KTGU5sd4jEBppnDflvA3z7EvjxztQUkqOHDnC7du3efLJJ7l58+a3LRNF0X0kkJWVFZ566qk26/Kw/jjEj2PHjvGJT3yixU6+HRJCsLa2xs///M/zta997b6Bggc1Go3aDuNS7yy93VEvFy5coNvtvm3bX2qppZZa6s+AnMcdOms9gcCXfLQRL6KmSOAcoZ4hcUQrq+TXFc6VYCtPi3BQ7u0Q9gc4XeJkic6mmKIEl2C1RUUJSIlQChkob+YQfgLfFHlNBIlwkas7y0BVQRQhw5CmvgWhvAFECLQpwGQEziLCEKclIpCoOCWwhnwxpSjuEHS6BFHqCQ7zGcI5elGI0RaBn9gv3ZD9MuLmQcTdL/0Rg5WE8Y2YsBuzciTlsXNHeOXrOcW2Zm9vzq0bY3b3JiQdeOUbNzhz7jjbOzl3dxacOfdRLr0a8w/+r5/kzGPHOHFqlWOn1un2OoRRyLn3PML21g02jsKlb0148+ItPvZTT/Lc+ze59PqYW9dus1MJNsQptvf+iHQIoUq8kSNKMdZS6QJrqkNGBxDCR7W0owJaE3WH3iAgpMdx4sBZrNY+sWe8h6wrdpzVvpNIPUnkLFHSIR6tU+nXWcwMcjiiF0tMtktxd8zWSy9zrYKFVthAEfQSol6HtNNj9egxRseP0l1d5/iRY6xubhIOznLx0h3+4It/wJU3L/KBEYjU8dWx4YZxGOfzar/tVq3/tcBe7s0rzWDVd9IiL/jil75GoCCOBGGoMJY69kYgZA74ASrnHLYZ5AEq7bN+rXVU9aBNpWvfTI2ircd1EHU/sywNZT5nNsnp9lM6/YQg9BNXOi7RRnNNv8mv/uP/C9/61kv85V/6q2w+crodRBAoom4XoQSz8QxrwWhNGMUgHHEc8fEf/ziLRcbf+q/+a+7c3X5ISy211FJLLXVYjQHhwaiUxtQA3Pd986dZpjFhHF72waiUw+sc3kdDQWi20UxoH173Ycfb7OfBfX43Gkgz7tPQJh62jQfPWynFU089xY/8yI+glGI8HqOUYjKZtAaCKIpIkqSNj+l0Oq0pYW9vjyzLGAwG7cR7c65xHNPpdNrxssbAEAQBWZa1BoDhcMhwOGzXb8wJSqmW2lCW5X0RM3me0+v12rgRIQTT6ZRut9uaFw4bHLTWLBaLdttZlnHr1i263S5nzpyh3+/zta99jf39fc6fP49SqjUcPOzaN0U43W6XNE2pqqo1zIRh2LZXY+BYWVlhsVi0FJInn3ySP/zDP+TDH/4wTz75JN/85jfbWJcwDFvTQ0P26PV6raGl1+u1FIvmGpVlycHBQWuWaYwaDf1lOBy2bdJErTTt1JhIDsfoNOfXUDqqqmqNI41BpYmLybKM1dXVNp6nGUNqjqW5/s02DhuTDkf1dDodXx1fn3cTFdTQZoQQrQloPp+TJAlKKdI0bY1HTWRNExHT3KdLLbXUu1dSCDY2N3BIAqmJggjlHEm/z5XLbxLYim7SIVIWPRuzevoRotEKW1tbFJmvnO+GEWqwQtBb49btbeb7+8hyzoWnn6LQFdtXr2GzBcdOHUdECTtXL5HPxqyfOIFDsnXnLkkv5dQTFziY51x94zInHz3Nuaefpypm7Fx+gyrL6B/fxDqDXIxZObZCb/0E8yInUAoDbM8rLm5PyQvDvKhQzpMoQBDijRo+qsX3jW+VmsthyHrlOJoqnKkppuM5q6f6ROGC8faCfF7gcERhSFYWbVyGJ2nURRcCnKsnv2uyRzPOIAAac0KNmmioCS2NQ4h7JhLrkPgoDk9CEFjriQuB8BYJbYEmJgbn6RT15LtstoVACr9f2/S9azyGxkferAQKJSWmKRixDldVOGO58vWvsHvldaQuQQQcf/I8x06O2J9+FTFr4m8sBiidoHCOyoGrXTbCuZo6KuiHkudODJgbyx996waFdYTNmEtD1qgb67BRg9q8IcT989Ki/rK1ugjR+FvqoRzv3niQ+CYEh/r/9+w19/6mHadoq2ka6sWh/bvaVFPbgjFa8KUvfI2/8L/qcfTUE0y+8vv0R33iTkonjr1xQDqSNEE6RxiGSCkIpUIKR6gctnIYFEJGhLJgY6XP08+/yHT7gI3Y8N4f/RhHn/4gW9v/T/SVS7jpFttf/OeMb3wTYw0Hc800r9jTgtcnmksHC2ba0g0Vg05Mpi1CChZaEzhHX0rWlSCSgn4a8PQLF7h46QaJrQiUJAqCemzTN6wKZE268W0ulTjU7o4aEYKpwGqDFAIVBDh9OM7IoQJV0z9c6wORykf4qiDEaI0KJEjQxlA6xXZmuV46dBBgrCBQEm0soQBLwN1xTiQco35A2OkgJZgkYXtrjJKSbhJxMC8YxgqloCwtlZOYSrPaUThjUFHMvKjoho44EPTXOlhjUSrAiIDpwQJTWSpXkaYR00VBHEo6acxsOiZQMdOsIo4VlS3oxiHjeUEkBXGvx827M9YGEdmi8DYX539nRLFiVswpCKgsuMWEfqCJQkU+z0kDSRQHrJ4+y+61W1RlRhhIrBMkSUxuHLHQ9FdHzHJ/7WQa4kzhf0XUtKMq18wqQ2ENq6sDkvrxbjGdI6RE6+88//3vquUT5DtQQgiOHDlCFEWcOHHiO3a2D+dAPvbYYzz++OMPXbbprLyV/T7zzDP80A/90J/sBN6CGrLIhQsXuHjx4ndcbm1t7b6O7FLvDDnn3taolzRNuXDhwkNjkJZaaqmllno3yaF1ibWaAIsQyncypcVhMfhKP+EkabFAipTi9jWcznGmwunCG0YcYDQuXyAlEEeUO4u6H+Tq6BcJUiGkBBTYGk3oBLbQOGORYYBMYt9B1t6EIIRojSK2NLX/QwEanMRiMKaszSICZQVBqAiTDs4aFrMJ+axEBIVHKhqNkpCGljCI/HFLQTGLmdkO45ml3L7GxbvX2XQfZXFrRjkf0lUDrr58hV/8aB/1+IhvXT/Ov/6NCbNJzr/5l1cJgissFiXxSGMTweSu4Na1q/CvX+P5F07zxOMbGAQH8wXHTiQImxHHESdPJnz5i9tYJ7hz84A8L7AW9oTlTOcYizmYQiHQFA5/zM2EkgPjLMYKRFDzH4SfpNJFhi4zVKeHs34iRtQjD85ajK58ZUMxB63r61FhTYWUdRWEMwgl6YzWsBayHPKyhJ5jkCQUoyk4TZkJiszH8ASmhP0pxd07HLx+iTvGDwAZKbFxwsGJR8lXjnHz1mWe2AjomQO+uLDcNo7qEOHjO8k67g00vQU1lTJF6ShL3VbeKEGd2YpHrgrnc34RhIGkMmCMxVh81QRts7fUj+b94ThR5xxFpZkvpnRmGb1hQpyEaO3Q2neOHTN+/yuf4vbWLX7hL/1V3vP+9
6ECBTiEkIRJl66zLGYZZVnhXE7SSUAI4jjhz33ip9nb3eW//j/8Hebz7C23xVJLLbXU96MejHppXj9I6XjQvNGQQA5/32zvwXGhB79r/hw2YzSmCKA1FjzM0NHs92Gmje90fs33zaT84YKl+ycM7je+nD59mo985CNtBEsTzTIcDu8jXTRxJw29oiE8dLtdOp1Oa8hoyBvNZH9DZmiIGPP5nG63y2g0amNi+v1+2y4NlaOJzGnIGlprlFLMZrPWjBNFEfP5nDRN6XQ6rWHgcHRMYxJoIkk6nQ47OzvMZjNmsxnHjx/nkUceQSnFSy+9hDGGY8eOtdtpImIO3xtNuzTbbigXw+GQNE3J85w8zwnDkLIs2d/fp9frtUSMOI7Z3Nzk4sWLfPKTn+TChQs8//zz/OEf/uF9+2qMFA/GpzQklqY9m/tpc3OzNcE091az3mKxII7jNr6micuZTCYIIUiShCRJWlNM0/7N98194erJniiKWrNPr9drTTLGmHbfzbINASUMw5ak0lxrY8x98cKNQaehfkRRRJZl7T2tlKLf77O7u9uaeJqoocZQEoZha3IpioI4jpdjnkst9S5W2usRDYbs3dlGCkiTHkYbrrzyMlFvQH/QI6wyKl1x4uln0QKu37iFzQtiWzFcXSHqDZlXjp1b13HZlG6/z+Yz72U8HnP79YsMel2OPvUUzjm2bt3EOsPaqTPkeYEwhrMXzjDPFuzc2UFrzflnnmNw9DivvfRN5rev0u9EnHj2vaBCDm7dwNqI4dEjFMWCUAXs7e7x+S+9xs3tKZP9EusEFh/JInA+NsT5WBKJxGBxUmCAV3PNWiDoRoKhUn6yuaw42K84emJIPivRc+3HAIQlkAptPNlVCk/xaMwU92gVsiUWeBdI/RxUmzicaUwdENQmECX9cXnPhMA5icC0nA+Ej0JVBky9L4Xvk0snmxIfZDNeIXzBB060xIb6IGqiiGPfwNg61gJBYAWVA5zEYTDaEzsW+/sEoUQFAVs398lzx97eAofECY3FH0/pLLmr43P90AASQSQdG4nkpz70FFY5/tWnv4VxjljSujR82E09NsMhL4to2lXcg300p1EPUjRjE/eZRWpyiMPh7P3Prw04xMM9vD2n9Xe0BpPDRLl6PKTZX4sS8QckACxklWV7otm5+BLHLzzNueffjwhX6UT/AwjQ1jLodYiTgFG/x7Gjx7hxxxNVO2lIPOhRxBVKBEgRsDOeEsR9njj/BK++eotHNiKOP/8RdBkyvnydQSRx04zXPvNJtq7dYZFrdrOKm7nmRmZ542BBZSxpKNnohnQjybzSVLoitJaBgGNKEEtQwnB8s0fYG7J9/TIXlPXRP9bHVTshkEr6yBbhW0wKWRN0ahIIIK0gkG3gUHu9hGjMOfVPQTNfLBxKSZB+20IFWGcRgfJupXpbWQGXJgs/LhYIQiGw2tKNA4TTBBjSQ6YUVxq0lIz3ZkTOkiSSykKAQwrrK5GihPl0zomhL3gjUMzynFILylRwbCUBSoSK2D7IMOWCtWFKBkQq4O72hFAFGA1VkVERMF4UKCEII1gZJOAckYrIteTGnQk4wSIv6KYRSImTgk4UMi8suZFooMoKNroKFaWMD+YkQYBzFvKS25cuY/OKtBeDhDhSGFGHV4WK6azEOUcchswnOZEUaGFRQlKUhkWu0VIyGHRIwxBpHeODA0BQ6IeVjv3JtTR+vAMlpWRzc5P19XWOHj36bZ2AIAjodrtMJpP2/fve9z7Onz//0O01Tv7vJaUUv/iLv/hQasiftoQQnDlzhvPnz39P40eTm7nUO0uLxeItGY7+XbS5ucnJkyeXJJilllpqqXe5nANd1VUfUiKErYsBjO+E4mkgWEOvnJCoCeZWBlWBswpEhEPgrMQagckWCOkQgcIUM+Je6mMq0hirNUgLSqGSAOeMN49YjXOh74BKhVAWmSTYovTb1tqPOTgHTiKEqbGKgcdvWoGxGqtLpJNYKqyuCNIeQadLqDVUBVoXWBVjnSWOHXGowJZtdUqWxeQyJisWdIaO4mbEjTtTsvmMzaMJx7c/z4+cqihvbZOqk6jwDA7fqdzfnRCEoa9wyWC4Lhjv+pgcSczVKxMev3CSbjflq3+0zR/8zhUef3yN9/3gOV57eQtjHEka8OXP7bG3M8Y5y0w4rExYTCuqLCdMOlSVwWrjc29VgLUOowEHygkwHlnpqpLKGow1hMNVQCCF9JE9Qvh4n0LTjVMoK8xsjN04XqPCSwSRN1gYg5SCzsoKAkFZ4I0pxk/ARIGk1wOTOFzPobUf+HEGkkigrajNDhZtHIsjx/jdGzssLr/BZjci0VNe1RU3S8Pc1CjXP2VZWxe5tB/4jrsBpKy7h+bewAg4isrgXDPYxD3MLfcKZ9qBsHZ1bxpxddyOLqGqNNliTn8Q0hv5HGNrDDiJkpKrdy/z3/6j/xs/dfsWH/3JQ9EvQhB1egghmR2McUZjTUkQpIjA0Ot0+ct/+Ze4fPUqf/8f/GPK8q2RB5daaqmlvt/UEA2aCXXgvn/vH0y/ZxBpjAjNBPJhimpjBmkoGw+jfxxWY5RoJuGrqrqPMHL42JrPmv09LA7mYefYTNQ3dJEmhuXw+T5oADlz5gw//uM/Tr/fJ8syoihqiQnNNg4f9+GIjsaMEMcx3W6X2WzWTrg3y+d53pI6GiNDp9Npz7ExIpRl2ZIhmqiYIAhaukMTGaKUYmVlhaIoKIqC+XzeEjaadp/P5+2xTKdTRqNRu6/GKOGcY2trCyklJ0+eZH19ncViwcWLF1uqRBMh08SdNGaKw9e+IZI45xiPx6Rp2i7T6XTacz5MaXXOtUSKp59+mi984Qt8+MMf5oUXXuDixYtthE6zXEPoaCJRGvNNE2nSkFkac02z/+baWGtbiktVVURRxM7ODkmStNE4aZqyWCza2Jbd3V0GgwFra2vtvdIs0+l0WtMIQBzHZFlGmqZkWUYcx2it22vfRNA0cTENVeSweeXwn36/j3OOJEnI85zZbIbWmtXVVcIwbPdx+PpKKVtjTxRFLYGlab/DPwNLLbXUu09VVXH1yg0iJdk8cQKdZdy9coUQSGyGyDKMVaycOsM01+R5gaxKUpEx2FghGK1T2oDJ9k1EueCRJx4nGa2xt3WTW5feII0TTj75BFUluPrKSyTdDsMjR5lPZp6AFSn2xxOqoqI/6JB0OszH23z+Dz5PqAznn7pAsnacW1cuU84zOv0ex889ilWKbjAiSlK++bWv0U8TTm+MKLI9xoUm045USnIchXaexOEElbCtMcHh2LZwsdCsZhLZCRgIKCpNtT2hPDZkcHyAvnaAKEAbh5QO5QTWibY/awVtkcg9E4GriyTq5xB373Pw+xc1vQMncMbTGBozgXP+O4trutre1CF8vIZCYoQldA6NQzfOCOrnLLzRRHj8SBOkURsyfLFIAdwqLKkUdAJFYGpipx9FonIGYRTGGmxpWVy7yfa1W2jto4OtrU0fFnIEubt3rEo4EiV4ZJjwIz94jtGRPv/mM6+xqAyBrJ/fkNiaeGLxn6n6
PBoqi23bot6uk9j63A7jO0Q9ruBaYoj/RChwD1TGNNv27eypKK6mjtRbubfdehf17eNft/8lOizNNbPszwq+8vu/y4vOsHbqAmq4xqDXpdeJOJjkpEnI2lqfI+urWGE5d/Yk051tBr0OptKIKGbU77J6/DHUpdd45v0/zNrRY4R/8BVW1iLi1WPMvv5VgmKbIi/YvnuH7bv7TIuSnbnl8rzg6rxiv7Ro5+hEAWtpyDAKMcJRVA5pHD3h2FCKvpAoHEkk6I/WuHJzih3vEw8kQqratCFqsodvFotD1eYP6yw4h5R4I4isDTjWoUL/7G6LEhn4566mIaX042kCfEyP8oVO1tnWWBJEkso41LDP3ZsLrs0rZk4itSVUgk6kyCtDPxQMY0kaKqQUjKcZQdhh69Y+CYZuErM9y4hVRRBIgijEAjbLObGSgLYkoWSRl0QSgqAkjbsUZUkcJ4wXFlFp1vsJWE2ahMxnGb04JI4kVQXaSfYXBQbFkZ5kZdShMoY0VtyaGm7enbLREfQ6IYNOyKKskELT7fbIihIrQxARMl8wTBRlUZItBLowRMKbgI2xdKQhDwA0cZKQl45AlUTdHlleoStNrCRVqYmUpNTG3/xRwG5uMJVgfRgSosEU7GxPCCLJ9tQbRoIw/GP+7/G9tTR+vAMlpeTChQs45zhy5Mi3xa6EYchwOOT69esArK6u8olPfOI70g+aDtL30pEjR/jJn/zJP/kJvEWtra1x7Nix77rMysrK0v3+DpS1loODg7dt+5ubm5w4ceJt2/5SSy211FJ/NuSsxRkQSta4PoEQCiUjjLNIvEEjrHIiXaCsRkwLPyleFOhKI4IY0VlFygEyipCRxAWSMIhQ/S42n/nOUllgtSWweHqHNujKkya8K953mYXyr6UIcaYBekpMnW2rghRnq5oMWmIrn4/qrME4DVpjtMEJRRinJJ0esgjRiyna5Fhj6XQhjiTCWYQKcU7jZAc6A6yFNIqIwwgrIIo7HDk+YGUzJhnPENWC3uo62UQhrcQgsLrAYAniBJ0LhAKBxFQVSEmZV/RGHUaDDh/6yGO8cXHAmUc3wMTs7ZUUhWXrzoytrRnWOB/jcrTCzCJ0pcApnDXkZeUrY53B6ArtSmzVIFB9b95ag3UGqzVSRSSDldps4A0YuHqyqiwYDQZgKpjPgRrB2tSNWN/xNdqRDtYQQKEhyzRlaUjjgEiFhLKgm4YEHUVRFWgrcNYgHQjl76lKG2bxOi+zitO7PBLCrdvb3BaOIg7InSDXb4ftox6EaQc+xOFxEKyhLY9xTTXSQybW/M8F7feHarAP/U07GNVWBTvIC0e1V5IXY4YrKXQdQgQoFRHHMM1m/Manf529vR0+8bM/x+rGGn6gRxImHQYjR5VlmDqmJwwCwLI6Wuc//U/+M27evMm//eSnMd8l2nGppZZa6vtdh+kRh6kch0kbhw0SD0aqPGieOByt8jACyGETSTN5f5hqoLUnqjWT9t8rSubB7R42cRyeXG9MJQ8u8zA9//zzHDt2rDVjKKVYLBZtvIoxpjV8NIaOXq/XmjpWV1fRWrO1tYVSiuFwyGw2Yz6fMxwOAdqIjiYepYlAacwt8/m8jRxpYmAas0mapkRR1FI+mu/BE0obmsThfTRt1VA2GorJzZs321iWMAzZ29sjjmPOnDlDr9fj8uXLbG1t0e/325iUIAhaU0nTHocVBAGrq6uMx2OKomhNINZaut1uS/ho6CnD4ZC9vb220GxjY4Ner8cnP/lJnnzySZ555hk+97nPtfeLUor5fE6/30cp9W0xvIvFgiAISJKEoigYj8ftvtI05eDgoL3v4jjGOcd0Om0NGA2tYzqdtmaXnZ0d1tbW6PV6WOtz151zLYFjMpkQhuF9lJaGfDKfz1uyx/Hjx5nNZlhrW0pt83OllGqja4wxxHHcml0aE4cQgm63y2KxIE3T9ro3RWvNvRFFEePx+D5DVWO0KcuypX28lUK9pZZa6p2pbJ4REPDYe5/nxqvfYuvKNWSpGR1dZ2Ojj7GGeGWNrTtb5DqBfMrKMGTlyGm0iLh1c4vJ1h7DYYcz73mBaZmze/lN3GLCI2fPcezxp7n95utc/PKX6Q36pL0V7t7cptuJCSLF/mSORrF6Yp1Od4AxFTu3v8XZcyc5fv5RpkXOwd3bFFnBySeeYv3kCV9cEMTcvrvLN37zN5gvLLO9CakUnDwyoLo7wQnJJNdU0hFKR+UEpfUkjMAJrPAxpVLAlcJxPDB0haDXC8E4ggDGW3MufORpguRVdi7tk2Uaa4Uv1LF10YXz0SpWNLEjwhsDhB8zELXjozUm2EOJIc7HwTjpGodD3TH2McLO+mVkPVYD1sdi4AitQEsfsaqswDRmWNfEuniDioD62KgjWFxLBrHAvoHLueHRrmIQBlhjyOvqDWd9u3nyhqBANyAHXwhTkx7Kum0r5/cbSgikYKMXc/rkOhOj+I1f/yp7k4ywoT/UxhspfXsohKelOIdrolXwBhlquolfzaLq9RuzCI3hRhymfria0oF3etTNK7h3nfyFse06TZGKdfX4WN1GflvthhEWbPPYWVNLnHPcPZjxta98FWzJBz4qmbz+MoFZ8OjREa/lOyyykvHBnG7aYW3Qw+YLzj3yCEc3N7l14xq9OGXz6R9gf2/BY8/GHH/Pj2KtJr/2FVZ+4CTV7ha3v/oZpjvbFPMFMrIUKub6POPlvQW3s4LMO4CIFfQjxTAOkQhuLUqqsmIkBJtSMWhIEVJQalgQ8bUvXeRcKFBSoJTnsAhBbSISCCFpm8EBSJzwpg9XV/04BUoKT8i1BoTAVKa9eg2hRQgBUuCkRJeGIIAwVjgUUlmsc4SjAQf947z65a9x1ymccCgp6CiBktBJA6LakxIGIUKCsQG3b+3TiRVJGDBdLBjGCaUpiZRgPs/pJyGb/RBrDSaQFJUPgYrCkDAOEAI6acLufkmZV8RBSKUrCCRFVoBQhErT6wQczA27+3PCOCYJFCqA7b0M5Qz0E3a3DljvD1jvS0a9hGzhzR2dfo/JeEElJPPS4PKMjVGHRVZQaUMYxphKgpUo4U34IowIlCKIIuazgjgOUVGKLitcodGLjNIZOknM3Bqq3BMAq8KShimdboBP0KnY3p1jjWBnUkCS1JCFP33mx9L48Q6UUopPfOITHBwcEMcx6+vr930fRRHD4ZD9/X1Onz7N3/k7f4ePfOQj33F71lqq6ntX2v3UT/1U65r/96Eoijh16hRxHH9HMkQcx0v3+ztQTVXJ26XNzU02Nzfftu0vtdRSSy31Z0TOAYYg6OBjXjyNoEE/Nh30dL5HaCtUIHwlh7UoCUIKdD4nn09wvRFytIpIBpj5gmI2J02HPp4FcFpj8gJTVBAIQKIrTaAkKgyRoULIAKc9IFPIwPeCpMLlJVZXCCFr6qjzkTEqJJACXRQ4Z31HVwi0LtGzCUYXhFEKKkQIsFmBcIYojlBB0hpYHIrKBGgU1oAkodsLmFqDE/DZz1zjlWNd0jhg2BtwYdRl88SIT/zc86AL7lzf4eZWxfb+DCd
KrHHebIFDSEWWVfz2b77MRz72JIN+hyeePkGcBISxQghFEAgC5TuDhSmJ0gDiKdmWp2yURUGaRnVn3npSi9ZUua4HQABzqJzDCqyzRCog7PbxTFSPdAXn27LIiKPEt/d85hGXRuOc8T1eIcF52kpntIKUYLVgtrDkuiBNY+KwSycsCevBmDDuoDFY4zGyoVQgFPtFybX0DPs39jmqKmazBaoXMs4KQqWYZ/cG5KNAUf0poxrbbR2aBLtH7mjef+c93qtuOlQF9V32dT+qFYwWzCcGUy6oViwOiRCSKIiJo4giyPniS3/EwWSfn/6Zn+f0o2drAo8gSHveFJVXYCVOSKJAIYXl9Omz/Gf/y/+c1y+9yaVLl9+Gru5SSy211LtHzcRzY7Jo3j/MCPKgGeRh23pwuw+udzi6BbiPtNqYKg5//iAt5GGvH9xvQ0poyCZNNEnzp/nuYUaWJjpWa01VVa0hpVmvaaeqqlpjQJ7nbeRIQ8gty7KlWzRGEaUUa2trHBwctJP6DbWiKAoGgwHWWqbTaXvejZGkoX7s7++jlKIsS0ajEdZa9vf3SdOUOI5bo0G3273PaFMUBf1+v21jrXUbT5JlGVVVsbOzw8rKCmfPniVNU1555RVmsxlnzpxhMBi0BpcgCFpzTkOwaMw8jclhfX29bWegNS4YY0iSpDX5NGaWJs4kCALOnz/PV77yFb761a/ywQ9+kI2NDa5fv96SQpronDAMW/NGY/JpooMGgwEHBwftuSdJgnOupW+MRiPKsmyNGFLK1txzmEbSFLRJKZnNZi3loxl7asw5YRgipWQwGLQUmOZeaSgjs9msvd7W2rb9D18foDUCNfdBE9vS7/db00aWZUyn09Z41FBDwBt8GpJMQ10Zj8dsbm6yt7fXXsNlsdtSS717lSQxa8eP8tXf+wPyO9so4Ti+uUG3GxB0B/Q2NtFCocZz+mVO0ukQd3scTA13b72JsIYTJ4+y8eg59vf32Ll7h34csHbmEYK0x6uf/z1uXLpKd9TDqpBbt24wGo7ora2S5Rlx2mF9bUiYJlgtmd66SXcwYnjyFIXW7Fy/C8Zw4vEnGZ04RV6WpN2Uy9/8Ml/4/a8yTHsUB2OOH1lhlhVc37pFGigWpSZUjliDrgs0JCCdaKmUQe2zKHBcLDR9CYNYshooykoj92fs3hWc+7EPUGWfRV+d4CqJto2p0yKUwFoIpCcWgI8saU0Chxtb+PSKpt7A4Q0VEoEVti2yaEwZoi6tcDTPNw6cLzQBhwIqHIHwxA/hfLfXd7tbq4mfrBd4I8ShwguHwwrY0eDmFY+kAeuxpGsDtLYY589RW2+ycPX+D5dLWPARFc4bLZQShNLRUd40+PKVbaYXb1Ea6+uU6nGP9n8V4Q/YOuc9FEK0JBTnXOv8cBx+rnT1sIRozwPunXcb3uLu/040bdO2AC2+w4+D+QX9R+LeuuLQKu1rd+i1/2pRaIyWXHn9DUzl2NjcJLQlobWcPjJiVlUo5ciLnEUes5JKHn38acLBiNlkzPFnP8LaUz9C9aXfR5cTehun2Hn1Ncz+a5hJzM3f/We89ul/RTaZEQQxN7bG3Kkkr+wtuLsoKJw33CAk0ln6kUIqyZ1FxcG8pINjQwq6QiEFKAGBEDz57ONcvjvBTGesDCGUTYyQj6QWzltgpPTGIiEFzvp4IlG3Q0NL8c9XtfnD+Z85/9znfAyQ8+YnFdb9BuPqIUsfT22dRQWSsrJYEfCtb17hWlVBGBNL6CoJWJQURFIQRwJnBVVNhR0fLFjpRYwGXWazjCT2JtjVYcQ01yghiJTxETNCAYa5MYRxSBRHZGVBP5Hs7c0x2iGVY1EWlLkljDyZz5SQRpa7ezl5BU6FSCFInKWqDLpydGJBlhUcX+0SR5LVYcR8vKDTERih2Nrex4iYrNS4QtPrRswKgy01nU6ENYYoVQRCIIVDJDFZURElCbNZSaIcQRTX8+kSpQRRHCCFYqErDqYGaS29QcDqYI18MsUUGTJS7O3PyWXEzjQnGaWcOrYG7s5DC7j+pFoaP96BEkLwoQ99qO2YPRjhEoYho9GIzc1N/u7f/bu8+OKL3zUO5XC1x3eSlJKf/dmf/fdqshBCcO7cObrd7tsWCbLUfxg1gx5vVQ0q860u+/TTT5Om6b/r4S211FJLLfVOkbOEMkDJwPd2tPU9yqaDagxOa6L5NsZUOBv5yg6tcdZ3cJJBigojpFRIWWCrKWaxYDqZ0D+2QmVAxgFUGluW2LKAbYiGAwYnT2DmOSoOUEkCVtQd+MAPCjjnqzF0hdEaYx1SVIBAOAEywBlP2PBeFYNwEqzGlCVl5qM1ZJySdAcYN0MvbLu+DBQgkCJBazDGUZUF813H8UfWyG9Jiu2C2bxkfjlDKgXCUgUJYaj5oR87zaPyBm9e+zxXXvwLvPLG62T6Og6DEBqExjmFkI4bN/b59X/1BZ567hQnjq9w81sT7twpuHHjJi++7zQH44KdvQWKAGSBKTRSBZ7gYSpvXLHGk02qikXmqzWdBWEFxgiCyCGlwjlDICVJt4tKYl8Z4yx16C5VkaFnE1wnIIoipC5wDWrbOIS4V8XprCNMe77qB0eWC/KiwHZ953EgV3w1jy3qgQo/QRJEAcIFTLIFry76bCNQ+3eoTMmj7znPm7dvYW6XbM1zKuufyaWQ9Dpd9urJpD/12/17vP/O691jevzx1qMdYLFAlluqnQxd+Y59oPb9jxsKKQPeuHGJf/Rrv8rP/vRf5Mlnn0KqAIdAhQnOOKw2WKtBCoIwIELwwnt/kL/+P/1l/tb/7n9PnhdL88dSSy211EPUmDoOmz4eJGkcNl487P3h7Tzs9cNiXr6TieTwBH6jByNfHtxHM+7UGDkOv24m5ZuIlu90jg25ot/vs7Ozw3PPPdeaG5pjaCbcG9pCEwvSkBOqylfg5XlOmqYcPXr02ygmBwcHbdyJlLKlRCwWC6y1ZFnWmgoackdDhgBaQkNjFmgMB/1+n6IoWkNImqZYa++LQWmMIZPJhKqqGI1G9Pv9lhyyt7eHtZbhcMiJEyew1vLSSy9hrWV9fb2NRomiiPl8fl87H74mQRDgnOPg4IAgCNrYk8bQ07SVlJI0TZlMJhRF0ZI7nHOtyeSTn/wkzz33HE8//TTXr18ny7LW0GKtbSNxmmvbFJ81f4LAP88tFos2SkYp1Zo7mrZpaCZKqdYQ0lz7oihI07QlumitKYqipX+ENUK7Ma40cS2HCSmH74PpdNqee0NsaUwkSikGgwGdTqc10zjn2uvYRLUopVrTy+EIl4awEoYhSZKQZVnb3gDj8RghBGEYviU681JLLfXOlQW++Nk/pBdJRpFk7dg6g/UVov6AWQWXv/4SZVaw2pUM+h0IE27c3mE2mTMcdXjkwpOoOOH65Uts397i9LlTHD16nPHemDe+8QXy6ZxzzzzOZJERKMHG0TUsksU8R4Uh/dVVkv6Q8fZd9q++weqRDWS3w/aNG+SzCcmgzyPPvB+ZJFTVgt7KBjtXXuULv/dHWJcwOdjm6R94jkVeYd+8ymMnV7
m2PcNWGuFgUVVYaQmRlNabFmQ9Se3Jlp40cUtbLpaWwawiXklJpaEqc+585eusPfLjbL7wGNn467iDyke6Wod04r650oaIQG2SkMJTEnQ9JlCP1NRLK8CTJXAgpMRZH39hXW1dEDUJo96JqeNlfAmQQyIIhMBYH/1ihWvcD/VR+OgMCVjXMi7qq+4lERhgRzsWs5LjleJYEjCII4JQkGcV86p+bqrtF642XlgExgofNVMfpwJCPK1koQ250zjhzQK+hKbZp4/8xViErIkotjaXHDLNOGRbbeLawYHajHH4ma82KTjXWGXu1fTg7m3t3nPjPYPI4QIW0dBR8NtqoCEcOi6La+kk7Y7qxt2dFRw7mnL3zg2mi31iqxFWsz7sMoqG5NoQh4pqPubCs+/h6Hs/RuFigjcuE9dRtaLIEeNrLK5f481P/Qtsts/+669y5cptbt26y6LQ3J3PuTnPuT4tKR0EUhEp/8xcGFBCEacRtw4y9hYlIY51JRhJSVAzauNAsro+Ij5+lIv/3ec4Hzh6oTcb+Aheg5Cyjf4RTtYUN39vN+ctqD0UzU9V88zn6qsdgEpizKL0JpKg/jlxDoFBhRIZKpwUCGPRxqKR3Liyw6tbc3atAGkZRAGVscRKkkhJoATT0hJLSeg8hWYYKsIg4GBvClKxMx1zYq3PJCtRQrHSDbG2wIWKalHRSwKcClACSq3pKIm0UGmDcoI0TZjszeh2AkrgYFIwjCK0gf1pDkFAFIf0In9/RsIRRpIkjckqTS8NSXopB+OcqjRoociLBSqQLCqoFgXHRyn7iwphS9ZWU6bTOWkoSWJHoGJybSgKTSdV9TiYQ8UReVEwzyoG3YRqMWew0icD2J0z6sWgK45vrjEfz5AmQ6iI/f2cysJOmdEZDTh7YpVA+d89YfCnP+e+NH68A/Vgh/zJJ5+8b2I8jmOeeOIJ3ve+931P0wfQVlZ8Nz3yyCM8++yz/97pGmfOnFlO4L8LZa1lb2/vLS174sQJPvzhD/Nrv/Zrb2n5fr/PD/3QDy1JMEsttdRS3wdy2qCMRdVVJMYYpPTmAaw3XbiiIMwnmFIjEz/w77CI1mXu2l6prTHLVldEaejhIcIhA1V3YgWmKHF6gTOCeHVIsrqKcz7yxWmL1c7zJ6XA1tUTVeGNCc3AsDEaIRRSKIwzaFuijZ8QF1IiwhRhC4yp0PMFZCUq7SHDgEAFPuIDV9eZOG+ocBopFXEYM9uynH1Pl+1jGfN9TZJOiOMus2kARKys99i+mROnAcVrVyj3dxBnpvzkz16gM3gaISP+4f/r99m720XIirQjGA47rAwjQnOAqWJ0tY+0FSdORpy7MOJLf7jFfLIAY4nSEqkjQumwpkQFA5y2VJXGVBbtDFk5RVcaZwXCeiSrED5PVlpJEnfodkcESafFrIIAY6gWc/QiRwUhYSeBPGvHAqzWID0a0zqLsxYZpVgU1hmKUpDlmkKXDOI+UZAghcO5LpXRCBEghKNUAy4tjnJj+ypvLA7Idi5BEvLMe59jVswx31yQSEWJJ6OAP7xF8daMqu8UNQQQWY8qGO0Y75dYvYsxGiUjhAiwzhtvqvIWv/ZP/j5/sfhFnn/xBR99hEBFMc4u/DCHBSEUUSwQUvEXfu4v8Tuf+h1++1OffeuulKWWWmqp7xM1k+dSyoeSLw4v1yzTvD882f8wNXSMh33/oFGgMUE075s/DSWjWfbw8s2/jWFDa30fXaI53sOT3kqpdtL/wTZojAdKKQ4ODlozRZZlWGvbGORmv03c8eG4jyZOpDnuxWKBUorV1dU2QqWJcBFCkOc5e3t7LTGkoTz0ej3CMKQoCsIwJM/z9jo15oSGEAK0pJCyLNt4j6a9iqJozRcNeURK2Ua3OOfaKJbJZIJSirNnz7bxJtevX0cIwcbGRttO+/v7bdRLc20Ot2m32yWKIuI4Js/z1hjTEELCMGxNMkEQkGUZvV6vJYJkWUYURTz55JN84Qtf4Itf/CIf/ehHOXnyJNevX2dlZYWDg4PW5KG1ZjabtYaRw/dvcz2SJCFJEiaTCXmeM5/P23NvKDBNG2itWVlZYTwe3xdr3bR/c+3KsmwJIQDD4RClVGuiau7JxmjRUDo6nU5rAhFCtMaUbrfLwcFBa5Apy7I1gDRtk6Yp+/v7ZFkG+OK83d3dlhwjpWzpM40paHV1lclk0ppxGrNNQwdZaqml3p0q8oLVWBLZksFojc5wRDT0vw+0NXSUY3U1JU0jVDLk1vWbmKrikbOnGB1dozSC/ds7zLZ2efSxxxgdWefKK6+ydeUqaZJw5tnn2Lt7l9I4jp8/y2IxpZjPGA5XiPodBIJrL32NQAqGG8eopKTISsJAsnHuHKPT55hNx0RBxMqJR9m5+jqvfvkl9Kwiih0nzz3GlTeu0e/ErKz12N7eoxeBGybMd+ZESpJoQV7388H7BUxNgGiiTwyCW9pxpTT0FwWPdAOUdejplEuf/hJP/tT7OP6eCTe++AY4R6X9vLa17Zx/Sz3w4zDeCOF/h+MRF87HkTrRRI1QT557coJrDq4JJLH1s5CrjQ7NtnAoIVHOoKQgAIypx3loiif8PpR/iRQ14NThDReAFN70IYSPBJ5Zx5uZZa8qOR4pNocJSik6whNTM+2Xw/mYm9L4mBdj/X4t3pBSGSiwVLUxIgoUofTxGZN5hnYOYesYEeeP39JE1NAaL8CPhbl2qKNhhfh4WOscsl3HtUYOgfAN1sTKyHvPlI38Pu4xUXwxyb1iFde0+X3L0dJAWh+IEzjhr5G1cPn2Dk9d2GCWVey9ucVqJyBJJHHgEEoSRDHWlPT7CUHcoaosZZFRTg7I9g4o85fZufh7HN9c5/Lnfp2br38FW83ZHu9xfW/MzYXm7jTnxrRiKyuprKUbRsQhdKKIwjlMZdDacm1/TpZpYmHZEJJ14e8HJSzd0EcInX3mcX7vK5eIy5zjo4BQSqRyqCDAVmVtXvKGDiElAv+sL2t0h3BgjYOm/k0KTwaRvp2EABUFVKXx96H0d76zEusqwkCBqn8uTP0zEChMuso3L13mZumI4ogoDj2lLgpIlSKNAjJjSFTgyT3O0Y0D4iigmhdEAUyLkmP9DvliwaCXEIUhVAX9NKQsK5IQpLQMBGgkurDkxkIsiUJJ2km5dvuAfjfBINnezzjS98TfvLSIIEYqySjxMXvOVAx7HfbHGTYrAUuY9rmzNSOQBi0Fk2lJHAZUWqGMYdiLmCwKykWFVIK9fUO/lyBcQRjF5BbK0jDoxjgpPDIoUhSVI19k9NIQnS8YrgzIDWSzGWkSUGlBkgTk0wlloSmFYmd3ShBEZCqks9rj9PoIgKxwBIHkuyMZ/t20NH68C3T8+HE++MEP8qlPfQqA973vfXzkIx9hNBp9T9MH3MtU/W76wAc+wGAw+FM53j+ONjc3l8aPd6Gazuxb0d/4G3+DRx555C0bP9bW1njhhRf+BEe31FJLLbXUO0W6LCgODkiCCKsCT4WoO0bGVGhdoYop5HOsNt4QIEAGAU6XtQs+8CQCjO8sCUE5X1BVJc5Js
KCiGENFmHZ85IspqeYWGQUQDFBB5AcWZImTDmE1zoAzDlNWmLLCWofD1JULgTcpmArtDIXx6G7XkiMUMpAgI0yRY0yByyvibkxvMEIEic+ddaYdnOgm0M39oAJZys6VORee6bB/6ybPPXucH/2xF7hy5TYvf+Mmjh2OnugzXImYTQvE+nE2jkWMx9s89uwPESUdfvrPj9m+XXHqzDECpTl9/hG0LsizOXEa48yj3Lp1m9/873+Tv/t/+lW6wQ8zSM9hVEVno8RdEphqShhqgkDhBCipECogzyZkWY4ufZWPtQKkQyEIgpA06dFJenQ7Q1TSQdRmGU9UgXyyj7CKpNNHKIlZzAkRWAfOmbpCwmMsjbXIMMJYiXOGSnsM6CKf0UlSkiglUApdVVhnUAoqNeJzu8f41muXWJldZT8vSU8e570/9kEOxru88gevYBcl89BXEDWy1pIX787KTFubP6x/w2xWYuw+ggBjXD1x4kjTmEpr/sm/+EdYY3nv+3/AD0AIRRClVLUxxmjtI5KkY/P4cf5nf+Nv8NWvvcTu7sF9VT9LLbXUUt/PaswWzaR1Y4D4TlEuhyf2H/zu8ET7YRJH892D6zy47OHlG1pEYwxojqn5t/nTjDUVRdEu29AcmriRhvbR7KsxKxw2KhhjyLKMJEkIQ496XiwWzGazdkK+0+kAtPEcSimm0ylJkrRmBqVUS4JoolPW19cRQlBVVWsi6ff7SCmZTCZI6QeUi6JoJ+obg0JjVmhMN8PhsKU1CCG+LSaloT40USnNpH+n02nNKeDNBP1+n26325oOGnPGrVu3SNOU06dP0+v1+PznP8/u7i69Xo+NjY12bC8Mw7b9mnNozA4NEcUYw3A4ZDqdEkUR/X6fPM+JY4+vbqgnxhgGg0FLsGjutWa51dVVPvnJT/Le976X9773vezu7rZmlsYskSRJS0xpaBiNmaWJ7VFKtd9Np9M2rqUx4cRxzMrKyn0RKs144cHBAZPJpCV0NBEzjdGiuRezLGsjsxtqi9aaKIrae7S557a3t7HWcvbs2daUYYxpzTJFURDHMfv7+2itieO4vZbD4bA1rTTn1FyD6XTKZDJhNBoRhmEbFdMcb0MxaaJyllEvSy317lUgYK2rCDsbrJ46iVZdbtzao1rMiGxBHEd01tYojOHm5RsoNGcvPMJw8zgHu2NuvfEmrsw4+9QFpFB85ZOfAl1x7PgaIu2xt32buJOy0u9x6+pVnNEcOXmKqN/hYOsOi91tBisrpMMVNIooUHR6wk/0qpS92zdJRkdYP/sE17/1BV767O+zc2OLY5vHSQdddm5e4+y5M6gw5O71K5w5sYK4ts/+ZE4kBd1AkmuDEI4wEFTGofGmBxAY4VBOEEtYWMcrOfQCR0cajnVCBI7izh2ufP41nvyxZ6lmc+5847aP9JUBlTOtgeEeEeNQpIvzVgULdRSMJ6d6SMU9CoUHWNRED+HHEmgNIp4EIoWoSRsOJXwUh3Ke3hAKg7WCoI2J8fttKCGNfUHgoyPMIbSmEMJHdiAxwK6GmTFsFXPW44Bjg4hBEmKmOYuyorSgDVROoJ2jqouNTN0Gh5EdQkIviXj6/CMU2vBH37iEVP4IAylRQnF0pcvG2jovvfYGlanpEog6XuaeG8Qdop8I0fAl/OfeYyLa9kc4msiY+6mjov235ZLeRxhpbDd+DXEf9aMBqhymhjQvvVlkaz9jf2yIA0tmSugMiBLY3DyKDhIOspKskFgXMJ8cUN1+jTtXbrIY7zG78jV27tzFlQe8frDD5et3uXntGsfXIm5fv8Ord8Zc2ZqzlxVoqXAIVjtRTZrwhWiFtczLCqMtXWF5bJBgFgVDIQidIJCCbuSJNU88e55rc82VS9d5OpJ0A0+kicMEa6wfO6sNLv7WrO8sd+8qu/ZnqbnHBNZYJBIhIYglToo6dvrQc77zNBqhApD+/kFJrIWscrxy+yavZhVTFdAHsryiFwaM4gAZBGRV5e83XaGCgF4ScXQYY0uNFIppnrE66JDrirVuQBAJqiynk4YYa4gCSZLECOcL4srS+ntZWxQ53VGPW/sZaZwQRgHbkwJlIS8NNVOWbiLohQFOOLJFxsZKByElQaSIA4GIIqb7c7oBVCrEFQVrR4bc3V9A5k0kVkrKRcVKP0YGEhkIH2ktBaQp5bhg9ciQbJ4R4PzvnLxCVIb1TsR0kZH0Y3bnBcU0Y9CN/LWzBpRkMXdMpprJrCCIQnLhiAZdjg97xJFklpWshP7nR5s/fevH0vjxLlC32+VXfuVXmEwmnDp1ir/9t/82m5ubb7lz0Djwv5tefPHFNsfy36c2Nzfbqo2HqSiKb3MNLvVnX9Zatre3v+dyR48e5S/9pb/E3bt33/K2f/AHf5CVlZU/yeEttdRSSy31DpHOF0xvXiNKe4hOFyEFQihwvsOjq5JosU85nxI4i3Ma53w1nnUN6cOAVHVnqqZwZAVJt4/Txm9T1lWpcUhQpghd4JA4bTB5gRGlj4oJBKgApyyu0r4CodJYU1e44pBCYXWJUB7baExVkxJKrNHoqo6CUZIoSlBJhMkKdJZT6Yr+UDLojbAyROscISTWVEThnF5eknRSPvSTTxKvwqW7X+fpD57g8jdCyuxNXvzAEf6T//zPc/f2Fl/74lf5t//9G7zw7BMUK6c5cf4ERTZgPlkgrOTMo6d534eOEiWxd/ePepRZgbWO3Vs7rJ9ep7QFMs64c/cNzp95PzIUdI5ayqJkxfbI812i2FcW+Agch9Ylk9kBZV7gjO+3OwMqdKgoIA4T0k6XbtIjTjuIOELrEqMrlPDmnnw8QUqIeylIsLrAOYuzDTreYpzFOF9BGqYpQgU4KowVFDlUZUU2mxIOIpIwrq+Zo7IBL2Wn+MxLX+UJ7rBbaHrHj/D0x3+Y6SLja5//Gtn+AgvMK+vvo+8TNeYPpE/dyeaau7e3KcuColzDWUfV69PtWiqd84//5T9AKcmz730eqQKEkMggwJQlQgQ44/NdhXP82Id/jJ/6yY/xj37tX/B91KRLLbXUUt9Th2MvDqsxYzws8uW7kTy+0z4OGzeaz76blFLtpHljcGhoHUC7rcYs0azTHFuzzmECSEO9aM4FPJGhoUvEcdx+XpYlkzparZlMHw6HhGHYxoNEUcTe3h4rKytt+zWEh7IsKYqiJS80xpAmTiUMw9aY0pAgGuJJVVVtVEhjgsmy7L62b0ggzXpVVXHnzp32fUP0aNqgMVA0tJCGbNHpdFozQHPOR44c4cyZM4RhyCuvvEJZlhw/fpyyLBFCtAaDsixbesjhKB5jTEs6mc/n7WeTyYTxeMzGxgZxHKO1buNODsfeNMSSxnTx1FNP8dnPfpbPf/7zfOITn2Bzc5O7d++2hhilVEsMacgoSimqqiKO45aK0twzcRwzHA5bU02e5/T7fYQQjMfj1rxRVVVLXXHOtW0VRVFr1ul2u22bNjQX5xzb29ttTFCn02mjiBtTUVNINxqNsNa2VI/GxNSYOYQQrK6uUpYlWuv2/uh2u+zs7LTn39wLTbvGcdxej9Fo
RFVVnDx5kp2dnZaU0u/3mU6n37NQb6mllnrnSgqIe12itSNcu7nD3tZFBoMua6OYOOoRD4ZklWExnjAaJmw++gidlQ1uXr3K3TcvM1odcuzJx9jb3mPrypt0exEnzp5jMiuRcYejmyfQRc7B9h0CDCsnjxFEAbev3SQKYHTkCCrpolWEFJB2OhgnMUFEWc4ZHN8kHW5w+Suf4Y2XXub2mzcZHjlCoSv0+ICn3vsc0/1ddne2OXJ8nfnBhL3xgpWFZlwYxrYiUg5hHVPrp6ZlHU1ia3IE1Im9zrEnHK9lmqEQxFJyNJVIW3HwrYu8ORrw+I/8MML8PrdfvkOee9KnNWCsqw0Cnizi6kcmK5rYi5pIhkB4XIUfR2l5EsJ/Bq1pQeCppO3jV03uCJxEC0co8XGzTtRFLqAakklN9gjqEzT19gIE2kksBonE1gQR0ZIs/DNR6SRbDvYyzZ3S0FMZqZCe9lEbV7RzlA7yNuoFrO+tI4RDScGgl7DWizl1+gJ/8KUvogKHpSbISQUy4MzjT3Hi+AbfvHSZGi17n9PC1QYLURs8XEPyAD/e1ZZtuDoyx91bt4m4uecZab9rUS336RBhpEZ+COuwkpY3ghO+3eo2lRZc/f9pXhS8ee0OP/qBx3HCcTCbEUY9jp57juHJTS699FWu37jJ3RtbVLM5ZZWwe+c6xWKfyWzMdH8PIQN29mZcvr3DdDzjzkxwY3fB7YOc/YX2x2EtvUgwSiPmlSGvDAtdUWiD04YBluc3BpxeX+X1SzeQFlTgCKRgu7I8cf4c8ux5PvNP/i0DDKc6IXEASvlxTKd1bcqoC5uaZ3IBYBBCoTgU2ahqp5O79/wuJATdDvk4R0kFWKy9ZxlxOCqtCYRCBP46lU5yZ2Z4bb/AJTGJFGTGEEiFE5aDwuBKSyeUBEKi8SYWXZYIK1FRwCI3xJ0YpCNWjm43ZJ5phHLookIEnhSjhMEKWJQOGUh0pRmkis6wy52tKVKETMuKeV6hpCDthj5SSAoCYRh0Yk+gDQJWRylCSnRVMup0MRHc3Zqw2o2wTjCb5hDF3J3kdIYjBkNDPhmTFRVx4MOP0kRhjEZXChMG6GlBb5BQWE84qYRktrXA6Iq1QUJpNXEaMZmVlIuK4SCiNA6lvR1qZix7ByVVWeIixdhY+isrbK4NCTAsKkNHVVip0Oat99n+OFoaP94FklLy8Y9/nI997GPt+7d6szSO9e+mlZUVzp0717rO/30qjmNOnjzJN7/5zYd+v7Ozs+wEvQP1Vo0fP/ETP8Hm5iZVVTEYDNqBne8kIQQ//dM//bb8slxqqaWWWurPnkyWs/v6K0T9Hv0TZxFJ4iNesOgqR1clwXgfnc0J4uQeHt1YPxhg61iWutepQoUKA0xhSVd6OKuRYYCsqxudiwgkKB1irPbRF0XpsYkKkAoVhohAIeIQoSoCFxJWMZUuEDX1AxX4wWJrMVIjZICUYCqNsxZtNCa36KIiCgNQYANBmeUeGRp3kWmHRIYoKZFYArXPCgWdWDGfl6Bidl5LSE7scfIZuPp1x/V/9iZ3bs15+r3H+I/+4ifI5hn/9r/7dc49/Rinzm9SFZrh2pCok1DlJWESUeYVi+mCsjJkM0/7GBzt88//8T/kM5/6Hb75lUucOfXzdKKzxKsTOkfH7Lw84CySRb7NSk+iAoXWBVZr8rJgOl9QZgZnBE77gQSloBuFdAcjep0RcRCi4hSCEF1pcBZrPTx1vrdDEoeEQZ0Bm89xxmCswVQaGTpsHfVjrR+AUELWAySSbGFZZBVpUGJ1QRSMfE5sPuduNeLTF++y4W6RVYZqZYUXPv4hSgVf/ezn2b+1QyT84FRQY0W/n+T7DoIg8AN32UKDm2CNwRk/cWWdIUkjts02//Cf/gP+inA8/dxzKBmgVIiVGoTFOYl0EqlgMBjyV/7j/zG//Tu/y+073/sZcamlllrq+0UPo3Q8GKVyeDmgnez/Tnpw/WbdB6NcHrb/w++bMaLDNI/D2wBPnmgm8Jvxp2YMJ47j9r2UkiRJWtNDQxRpTB+dTqc9p2YCvjF4dLvdlo6htW73F8cxvV6vJTuMRqPW6NHr9ej1epRlyWKxaM0NTTTIYZNKYzBo4lGMMeR5jtYaoDUINNEiDZlEKUVZlqyvr5NlGXmet+aWfr/fGnoaI8Dha9MYcSaTSUuvuHz5MuALpI4cOUKWZbzyyiutScA518ZAN9eiMW88SG1p4m0aYotSCmMMGxsbrVnBOdcaKBojS0O5AJhOp8RxTJqmnDhxgt/+7d/mfe97H88//zyf/OQnkVLS6XSYzWYEQdDSUJIkQWvdmmsaSsZisWg/Pxwn05hDFotFewzNfdDcO9baNqKloZc0hpN+v8/BwQFZlrVUjaadOp0OYRiysrLCYrGgqqqWKrKysnKfkaihxzjnWF1dxRjDkSNH2jZp4oeqqmJ/f5+yLFs6yeFjPXbsWEtdcc4xn88JgoDbt2+390JznzYxN0sttdS7U0IKGKxw8dVLLKYVa6Mua4OIbrdDvHKUebYgXxR0uhGbZ87h4pg3X3mVvVu32Di+wej4JuP9PfZuXmc0GuIGa+zlFfFgHRUEzPOMxcEBQsVsnjlFYTQ7N26jVEin16EzGpJN5wht6K+vMJsVaAeDXszxC4+zc2eLVz/1m1z55kWCOGX95HH2tnY49chJNk8do8hyVo9usHFshYOdXZwLSNOYTE+YaUsSKArrmJfGkySkQFtPzYiEwDgfS6KAWIB2gkulYaAkvaCiEyV0hUE6w86XX2J08gRnfuKHKRefZPvyBHKJxmFdhT0U/eIaR4m75zFw7RtPkbDO0ESX+OgQd8+00UScNCSLlszq4zZ83IpAAkFtEDHSoa1fVglP/nA+I8Zvox46EELUJE1vUrHC1Uk0ro3AcXjTRAXsaMu+hkg6Qjwlpj5qKgeFAO3uGSv8uQtWBz0+9P5znD11kq9//RLbu3veduMsOEGlKyyGOFbsjw8orK2NKA2bxDegdK41VrSmD+eRHw2tpDnL+wZHGlqHvf/z5nocWrClgNg6w0XeN8wi/DF829JNe97bvLWO165u8cIzpzmysQrWkMaCeZlx7sIHCXtHmf7mP+VOeY1rtyZM5gVKaCIBKijRtmC+WHDx+l1u7kzpK8nurGR7rsm1I1ASqSQB0I8CpFSU1nBQaCpt6EgYCMHJOGQjgK0bd0mEQEbOx5WUhiPH1jh2/gT/8n/4HFWWc7ojGShP7giCACdBmLpwyvlCKSUVsrkqdYyOEyCkrI1Knn6LAqUkYRigkoAyL3G6eU6CGt9KkATIAIRSOGMQCsrKMjWKl+6MuZZbbDdGAEkQ4XRJaS3aOrqBf6asdEmoBImSnFyJGSQhu4uS1UFMVWqSAKrMMhtrotBfdxUqgsgxHHbAGYxxRGjGM0MkBemgz+3bY9I4oHQOaQxJKIkihcYhpSGJYvISDhYVvUQRSIMhIMsqYldRBpLtm2NW+30EmrIwyDBiJiOkMcTZlNxYpBDEUUhVVnRXEpI4oDARpfEFe8NByiK3JC7HhQH
z/TlOl3S7AXuTnLXVLiUWYSwbKymVU0ymGQJJFMB4UbGXaeJOTKYU3VGfI/2U2BnmVUE/itDaMplUPvbJLokfSz1ETcf7rcS6PKjGPf/dtLm5yebm5n+QyXQhBI8++ui3oUgb7e3tfVvly1J/9tVgM7+b0jTlR3/0R+n3+yRJwpEjR76n8WMwGPDDP/zDf5qHutRSSy211J9hOWO4+9KXUYM+Qdoj3TiCCCJPHigLtC2pshmxMUilkCLA9wqN7yjKezhy4RxCOv+vMwRKgakQdaYqTnr6RxACCml8haspF0ihIAyRkfJc0bo0xAEqjenGAVEnpspLykVGVZYYa7HOIFGIujJDSkWgPLpQOIEwjtzU0SEODI7JNGNW3mLj1Gk6gw4ChzUGxB4r4ZxeUTKb5qwdHfDU44/x6d/9BoMzWzz5w0e4cdHx2mtbPPHcBvmi4MS5R/j5v/ofkxclvZURs70Ji6wgzwrG+2OiMPLVKEJSVhl5mfHFL/4+r7z0Cr/x658mjS9w9vQvkHQjhkcWFOxxcPUoSdXhSDTjTn6dU0cjnKlweLPNYjFjPp2ivZfDt1HgSBJFJ+nSiVKSZpA7ChFSUhU5Ck9lwWjmO/usDAdIFfrrWJY4YxFCYpxG1YQVow1a+/aTwmf9SgSlFlQFuB6kcZck6eDyGaURfG035mDvyzwdW667Di9+/EeQgw6X/uDz7N7Y9qYPoML7hmIhyL+PEBUO6gpvgZQOKaHIDeP9OQKFcZbKFKyMRriu5PbuTf7+P/pV/if2l3nm+ed85EsQUlY5gYpx1iADP7n0vh94P5/4qY/zq//tr31fkVSWWmqppb6bvpMJoxkjeZDs0ZpcH/g9+r3eN+s222zMAg9bBmhNC41hoKFmHDaANBPWh4+7IS8EQdBGrzQT3IcNIo3ZIEkSut3ut21La8329nYbqVGW5X1xLs45dnd3AUiShKIoyLIMY0wbt9Icb2MaKcuSPM9JkoROp0NZli3VoTnOIAjQWreml8YE0u/36fV6pGnarmetZTqdtsaHIAioqqpdp9nWYSKFUorZbNZGwzTbCoKAO3fuEEURZ86cIU1Trl+/ztbWVmummM/nWGvp9Xot+aLX632bYchaS5IkHBwcMBwO2/GZ9fX1lp5hjGmv4/b2NmVZMhwOW2pGWZbMZjPW1tZI05THHnuMz3zmM3zuc5/j537u5zh58iRvvPEGURS1kS6DwaCNVVlfX2cymbBYLIiiiPF43BJBGjNEE/08n8+ZzWatMaOhETevG3NLQ9QoioKDgwOSJGlpK40hqTHkNPdwc980911jFmquUUMoaSgxjfGkLMs25qVpp+YeTtOUyWRCFEUA7fVQSpGmadsGTWzMYDBo272h5ACt4WdJOV5qqXevZBhx6+4BsQqJBpKTmyv01o5gophr126SBJL14xscPXOWm1dvcu3Sm3TCgEceu4B1lu1bd3FlzvqJY8wOpmxfeoOg16cbdkkDyXy+IE1SojjlxtWb6GzOkaMbhGmMCCL2bm8RdTpsnDnJ7s2bZFnFiXOPsvrIeW698XW++qnPcXBnn5WNVbSUjA/G/OCHP0iR5ezv7nH8zCnSKGQ6nhCnPbq5Y+vuhCKvOD1K2JoUHGhNKjyloTA+yCNC4KzD4I0MqjY5VLVp4eXCsBII4mnGyW7MQElcMefSJ3+X9Bd/mrMf+2GKX/9d9m94apVzAVjTGgoEAmc8/8LVhA8pZR2FUhsZHDQxLtY1MSwCU0fHCiGwWJRQICzW+veH40+U9IUoxgkUoHCEwp+LD+7wJg3RkkBqgoaQPkjFOR9T46Ek9bOXbY0MNazEjz/Udc8SkOLeObh6QSe8WUM6TzZZlDA680HO/NiPUQ1/hy+8fg1TVE2HHgeEkWJ9bY3pwW4dY+ONGg2lo6GeuBoU0RJTsD5WpDGzHDJ9iEPLewrIPQNMc4Hu/1+tXtvhKRY1zYP63O/7L7ABX4jmpWs/d/W/+4ucL7x8nZ/52PO85/3vpzdc5fbVyxy8/gV6G6c4/8RT9KKA2zfv8urFq+hS0+/FpGmJQbFzMGd3vEDhY6QdgjQOKKwlDP2znbOWSVFRLAq0McTOsa4EKwGsKEVXSebTnFPH+jjgzZ2Mq+Oc0yeP8OGPfIB//m8+x92dXR4P4WRSjwE6X4gmDLhmDJJ6vrduTym98cNaQHoWi9UOJy1KSpzxhWYi9YQOm3vCKs6P2yBBCVmPW4p2LKY0hswGvLo149W5YRoE6KwikJJuJBCBwgn/PIz0EUNpHBA7y/GeotdJ2F0s2FjtscgrAgmdQJGJgpVhgqlKZBigIkHSSSmzEhcEKBmQFQXCGdbWe+zsT0nikKyqqErNqWMdKiuY5hpjBFGoSJKY7fmcWEmEdKS9Lte2pxzpdzAqYff2hCNHhiglGe8VSCVZIFFGsh47sqxA4YgCRRQqRBpgkWijMEIiqVgZdckqizKa3FmK8YLAWfqjhINxRicKKXRGsTCEYUhWavKywjnJdFaiIsiQ2DghCyN6wy4nBx16sY+K7gQCXRU4BMMjIwz3Exz/tLQ0fizVVil8JzXGj/9QOn/+/Hc0fjSZm0u9s2StZWdn57su88gjj/DMM88QBAFhGDIcDr/ndj/wgQ8sY16WWmqppb6f5GBy+Rqi9wV6KxvEnS4qjDGVoSxz3ymVIUrU1afC57U6J3DOQB0LAx4vqIIIJyBMUmQcYTPvDveZJNbzEiU4LNYIMJIgipCBz7oUgarjZnwvVUiFMz7yRSUJQRwRJxFVUVDmBWVh0UYghEQohdQGROB7soHDaQfWoF2FQKGBvITpQUU4yuj3+ihdAoo4tsjFLY6UG2zvaMInJSujhJXOEW69pNm5sc3qqYJjp9fpDFI2ThzxuNEwZLG9y93rt7GVAQfdUZcgCphkU25ev8KrL7/Ml7/4Je5u36a/OmQ0PMcHP/oJ0l6HOHEssn2uv7lHsbWJm4YcDzRucRPJrq+4DBR5UaApWcwyqrzEVgJnBDKAOHX0uj26gxU63R5BFGGqAhMGICS2Kj1SRSioSmY7u5xZGyKVQEgf/+KM8YMupsRYg3PCR8QYgzWaMKwH052j0hJr/CCPUpHv9ErF1I547c6Y82nFtbng8R/7ATpHV7n66itc/eabWOOrgDILSkBS5+h+dwv1u1MOh7X3Kn9MprF2QqUrjKkJLQ46nS63d+/w//0nf5//efq/4Oz5R5FSoFSI1gVhmNQDQ4pup8sv/cIv8m9+499y5853f05caqmllvp+UUMzeND8cZjaATzUrHGY9NAYJ77bJPJh08ZhNftptneYEAI+oqU5xuY4Dh9rs15DkmjMFI0RpNlnYzgoy7IlZzSmjMPH2Jgqbt++zfnz55nP5z7aLQwZDAYYY9jf36ff77fHYoxha2uLXq/HbDbDGMPx48eZzWbtpHyzz6IoiKKIPM9b48FhEkdjEFldXW3PvaoqptMp0+kUay2rq6tkWYbWmvF43E74W2uJoqglljTbafaZZdl9bZ8kCdZa5vM5d+7cod/vc/r0abrdLq+88grT6Z
Qoijh9+nQbaRPHMZ1OpyWoPHiNGxJIGIYt4aSJkun3+62BpImEbgwpaZpirWVlZYX9/X2OHj3aRsGMRiOOHTvGb//2b/P+97+fZ599lhs3brTfN1FA0+mUMAxZLBbMZrP7iMVFUZCmadumk8mE2WyGUoput8tkMiHP89ZMEoYh1tp2vCjLMoIgYDabtYaOxsCSpilxHLf3aWO8aNo3z3O63S5aa/I8b9twNpsRhmFLh5lMJnS73dYIEkVRa95p9gewWCwYDAYtGaaJnWmMM7u7u+25Nss0PxtNhFBj5lkSbZda6t2rIsuZ3N7i1JE+KxtHiXopu/sTtrf2WF3t88gT5+iurPHmt17l9ps36A56PPnic4x3dtm+eYd02GO0foS7OztUxrB55gSFdty8dAmCmM1HNzFGc+PKFWKhOHH2FEEUop2kyBZ0+h1WTp5h+9pl5ouSx198ARV12Ln+Bje//g3md3c4/9QT7O/vIbTjve9/L1s3rjMdz7jw3LMM1zeY7e8RhAGVCrh14zadTshz549w8eouCMeRXsqdWYFQFR0klYWiLmoJ8cUZlahhBDXdQTt4ZeHoIkhkRdKPiaxF7O9y8V9/muf/8v+I8x99Pxf/f59ntl1irfERI6KJ9HUofN9fItrIF8/reGCOSQhfu4OPm2loHM61bo02PoWa8oETOOlQNb6ist60EApv9vCGiJosAlgB0kmc8IYLiy88Uo46tsT3rxs/ir3Hs/BGjtoM4WrSiHWN6cPV5A1PLHGAEQKwjKcz/j//zf8b9au/hhQFRZ5R80v8eVtHXho+/UcvUy6m/phtTRtBIBszRWOxqNvX1WYP2ZBJWhuHqA009+JzWjqHqPEp+MYQ9zZ76DrcT/nwO70XK+Pqv0Tddg7q7Be/vBAOZ317f+1b13nhhffw0+/7KcLuMfYP/jlXX3uJZwYJxzaPUox3mc0KHjmxAGe5uztlf5oxzyx785xSO5L6OWNhcl8ghjcAF5XBGou0DoVjAAyVYqCgHwrSUBE4w0c/+gRP/8W/wj/9f/wztu++zJnNDX7iYz/Kr//OV3j9+hZrwnE6VgxCfxphIOn2OmT7M39NESjhvBFIqPYaCCmRDb0GCAKJFKLG3TiEsISDhHw/RwiDc7KOMbLeFBLHyDjEZCUI0MZQaMGNccG39gsOhPAmFylxWCprKIxFSocwgkESsZ6GSKs52g1J4oC7k4z1fkheWNCOMAjIs5wkDhDCIaOYNDT0VnrsjzOkEERhwJ2dBbGE4bDDdDJj2O+yO8lYZCXHVrqoQFFohwpj0tT/bN+ZlqRRzCC0pN2EyUITKsXCWPL9nI1RitEFZRVgpCCXCmUVgZ4wLiW9WJHGAlNopHLYMAYLVkl05QhUwN44Q1UWlYbcunFA///P3n8+W7Ld55ngs9ZKv+1xdcqb63GBey88QYCkaNUiKZEyE5JaPWpRE6HQqCP6qz5MzPwDE90zETP9oTu6p3s0o1a0pJFEUaREEAAJDwIgzPXe1a2qc+qY7XfaZeZD7szaVax7KQMCBLDfiKo6Z+/cmWtl5qmTa63n976+IOnHTBYVcRDgJBRLy2C7Q1pWKOVRCcNyUuCko1I+xAllWnBm2Gc38ohNQZ7WwIznS2aZxg89TqYZzq7cn77H2oAfP+ZqKineS3t7e+zu7n6fWvTH9eCDD77rQGc8HnN4eMjOzs5mMPRDpDzPmc/n77nNww8/zIMPPgjU9rBNVcd76Vd/9Vd/IJFEG2200UYb/WDkcFSFYPzim9w+8zRxf0jfjyhNhbUG3+8QxEM86SGVjxSuduRYC2p11tYwhwhq60oBzpoaJHAW6QVgwWmNLTVW15+VnsQLvFXJhUP6PkLV1tTWGKzV9T6MaYbO9UBaKrwwRnkhnpdTlQVuaSlRoORqIsHhjKayBmM1StWfFTichEz7LEpBqQp8Y/CFRDnw/Xe4aN/HjcMJy8UZisIRxgFxOKCaxRw9v2T85iky/Tzj4xEXr14lCAMuXL1IlmUUwpItM777pW/wmc/8W27eOGB6UmGrHTrxFfzkKrIS5AuHMDnLpWB6OufCzj4/98Gf5ltfeofTyQnnA8HJyXNsdXM68RbCQZFlLNIJyzxFazC2nkzwA0echHR7A3rdIX4QgpKY3CJViBC140cQJEhPUhUF1fiUwZU+iHqRyBYlpshwvodwDqNN60xhrUPrilA5PByFayYrJL4fgFA4Uy+E3Cy3UeULzFLDzqMPc+aRKxwd3OS1bz7HbFHiqG1ULa62dAVmP6ZVmG4182KtwPPqSa2yMEzHaWv32kzeRWHIjdtv84/+yf/Mf/X3/2v2zuwhhMIah5UGr1lIlIIPf+gj/Lmf+Sn+2T//15sK14022mijldbhj/s5fDT/X65HvDTODs3XwF2fvTe2Zf3/3HXIZB14aL6WUrYL3M0ideO6obW+y/FjPTZkvS3Ngn/T5ibapSgKtNbEcUyn07mrrfcCKcvlkq2tLaIooqqqNi6kceyIoogoilgsFnS7Xfr9PkKIdr9ZluGcw/d9fN9v+yWlZDwetwBC0/4wDFvHjjiO231XVcXu7m7rNNLv91ugoAEOgiBgsVgANSiz7nIShiFhGCKEYHd3l9FoRKfTYT6fkyQJaZpyfHyMMYbBYMClS5dwzvHMM8+gtWZvb6/tgzEGrTVZlrFcLplMJhhj7jr/zTXRunavK8uSJElaQKOJX5nP53fdUw200QAvTUzO7du3yfOcy5cv841vfIMvfvGL/I2/8Te4du0a3/rWt/A8j263y9HRUet8UhQFeZ6jlCJN0zZapSiK1j2miZdp7ocmlqUBWjzPYzwe0+/3W1ACoNfrURRFC/Q08T/NeWjAosaBxfd94jhur02n02EymRAEQXsNGyCp+Rls7tMgCBiNRkA9d7pcLlsAaTqdtgBRA5qkacp8Pm/PfVEUJElCr9cjyzJ2dnY4Pj5uf06ae3CjjTb60ZQQgmuXhuydvUC0s8Ot69eZjBacPbvFtaeeQkUJL3zzmyzGpzzy/ofo7exydHiL2cEhfjQg6Q14483reHHMuUvXyGdjpie32OpJnPI5uHGAtIZ+0uHKA1coiozFsiBIEs5dexBtLNdfeQmvP+SJX/6LnN6+ydFzz5EfvMnxzQOuPvUUWi85e36HpLdDtlxy5uwOT/zkR5kvCm69dZ0w8PA9wfjwFhcuX+Dc5bO88+YNOpHHdmU4mBkiJfCEz1JYpiVILL6oIYRitYhf8w8OJcA6waGxPJ1rQqkQfsXlXoSvK4obb/Pdf/Y5PvZf/jIP/mLBq7/3LeypQzgPbWzrfuFk7S5iVtEm1rlVVEYdxdJiCcJhbE0dyFV8Ca6GRaSjjl+hjnlxrvYMkdRwh1y5YHjS1cCJElhTx7S0JhdCILBYsXqOAxSyddDAudbBwjaOIPXNsXpPtG4czYla+XjB2mtwJ9KmLqxwWFPW7quujjduUlfkqiDKOsfbr7+KaLJjGuBErFw61objdcsEUgqMWbmkCAF21dfV9bNrba67sdbA5vvVvpuYFtd0d+1nwzXbr
aw/BI72o+3uxKq/rj2eBUpt+ef/8tN84OM/y8//9b/L2ad+jj/8Z/938j/4PR772Cd48H2PEniOXuzwO7tUzzzP6GTCm7eOOFpo5sscKSS4JaVzlIVGra53f9U2X0IXSKQkUYI4EPRCuHJxm52rV3j87/2f+ObXvsMXvv0iH3vfFT76kz/B//Yv/oBvvvQmfeF4IJBcTARKOpRUCGEplhlS1feZkrV7i1zdBw06I1bwC8IhscgV2CGVRClDstunXFrcat7SWYtsf8AkMg6o0gKERUiPtKg4WsJ3bs85sIJKevXPgazvk2VpiEIfbS2BhMQaOg58T1AaS5pqLux1qErNPC0YJj6+MPSGMabMSauqPjdbPaaLFCElO7tdDg6XSOcIOxHjyZJuL6YoNbYS9BKFE5LDUQ5CEPkefhhwnBrS1NCRFQsL5cSgrMXqiqIs2el5eB4URlFVGqIAYQSBM2RWIK0hdxrrAoRzYAS+V8dDu8LhKUe6NPjGUlhNNjV0A0WYhByP8jpmyfcRSLpdDw0EOKyzWG0IfEmmHbmRVHnFmXPbnAt8erJkWWgCKQmVIbMBhRS8eeMEaVTtWG02US8b/SmoGbDfT0opdnd32wzPH4Qax4/7yTnHt7/9bd7//vd/n1u10X+KDg8P33MyPwxDnnrqKfb29gAIgoDhcPie+9za2uInf/InN/mnG2200UY/RnIOjFO41DB67mn6e2fxogQ6CYGSeEGECOqJdCVXVRvOIpSsSysESF8hpYdSflv9YAHheVBKhBLt4FpIUIHCujoTEkCgEKEHSuG0QVcVVhtsVWFNs+DStFjinEGsRtZeFOL59aDt5PgA6bn6PWlwRpDmGl2BH1gCXxJEkspYtBOcLuFRK4mTbps1uxNULN95iSt8nMO3ljz6wT0e/OuP8bu/+So33hkhjUTomHQOk5MZj3+ojwwkv/+ZT9OP+5w5e57BzhZ/9LWv8/UvvMhW78Nsdx6AJKwHlsbHzQJsGjE39eR35Pf44M+8nysP7jI60qjJmKC8waR8nbNX+8RxgnOGLJ1TVCVl6tBVPRHg+Y44EvQ6Q4bdHaLAQyqJtYayyvE9hXEGXVZ4qnY+yRczup4jjMJ6gI9DV7q2dpV1dIgxuq6UcTWE46wmCCHwHEW5ys6VoPxgNQdhsQhOlz5hlbEcDHnqJz7IfDblre88x/HxgsJBsKrY6SqBdJBzj/Xoj5kauMbYVbGNELjKspgVOCZooxFO0h90iaKYl996mf/tn/4TfuM3/g90Oh2U51FVBVIpwCE8j07S4df/0q/xb//dZ5jPlz/YDm600UYb/RnQustGA3PcL9qlWRxef/9eYGP9+/Vx8/1Aj3WtR4U0wMb6MZrjNovsjfNBAxg04ErzfdOPxo2kWUhv4i6SJGmdPtbbvN62ZgE/TdM2wmM2m9Hr9dptGviiqiqEEO1CfgN4NLBIAx80MSGe57WuGA3M0pzfTqdTV3wWBbPZjG63y3Q6JU1TOp1OCxQURcFgMCDP8xY4aGI7pJRtAVaSJG3MS/Ne4/yxs7PTukHcunULpRQPPPAA/X6f0WjEW2+9hTGmnStpXCXyPKfb7VIURQutNHElTTRPGIZ3AUWLxYIsy0iShOFwSJZlhGGI1ppOp9PG5DTwQnNfRlHUOmoURcG1a9f4/d//fX7yJ3+Sxx9/nDfeeKMFOXZ2diiKonVNUUoxGAzaCJWqqlpXjgYAadxVtNaEYdhG5DSwURON04BDYRi23zfOIE1MTwPzeJ7HbDZjMBi0sE4T8dJc5wYE8X2fo6MjgiAgDEM6nQ4nJyfEcczZs2eZzWbEcdy60DSxNuPxmN3dXYQQ7bVu/lVKsVwu6ff7rRMM1G7MR0dH7TVJ07Q9XxtttNGPpsLQ4+IjDxHsnOPWW29hLTz65KNsXbjAMte8/qUvk55OefwTHyHuBtx68zqnJ6dsbW/TO3OG05NTZBAx3NpmfHKKK5acPXcGg+Lg9jFmUSAd7D78IH63w2xyQqUtZ65dY3Z6whsvv86FJ57iypNP8Z3P/hte/s4znBkMOXt+hysfeBLjBNvbF5idHjE9OWXv0iV2zp7l4O03ODk84vzlS4jAo1xOGfQHlMYynRUEfshWN2SaVvQDgyd8jrMS4xy+55BWYrRDr1wu5GpR3aMGNJyoI1AOteW1vI5QCaXgcj/CFpr81ls889tf5qlf/ziP/JLl5d/9NumkxJX1c0ZldL0f5+qoFQSaNXiCBvBo3C1WsMjaAjtIjDW1x0fz7OTACrGCIFZOG8KhAH9FI3iijrBRq+M5UbfBWWgcQ5xzKCExou5bG0FD3cD6v/3Vc9qqomKVftKyGc2Xd5gKcWcvjWHG6m8nRD3HReMoYlcQTN0JZ5rnqzseH447nh2NI4Fzol4kX9tZPccm1vgTcaedzd4ca8+N1B1y8g4csvYzYVd7Wo+ZccKtXE+o3Vto4A+3BsU05IjDCDidFfw//6//N3Th+Klf/Us8+dO/wDvf/jSj6y+RJgMGq6i619+6ReBJfD/Ek4oqm2OLilAIhARrDNHqHun7kitbfY5O5jhd1TFF0hEqSeJLrl7cJhgOeeOdY778f/6/8Oqrb/NLP/9hLl28xv/nn32OZ9+4jodj3xM8ECuiVfSKrxS+72FKjRQOT63idqjhC6TAWFNDHqvzIoVEtGYrtaOH3/GxTqHTZe2K4hzCglQSPEGQhBjtUB0PZwxlbpnm8PJpyhsljBH4qiZySlPPmykpwNbxTH0hiZ0llpKwE+IN91iMxsyXOZGvGCQBzmmMEGR5ARY6XY8g9plmhkpbdnYG3DqaM1vU0Sg3DpckkeJ4lGO0xTpLkgSM0wJtoBtJnLVMUovTgp4H3dBjPF/Q70WEocTrd/DikGVakFeGo3GBkR6DDjhbUjlFN/SQuiL0JU4JPN8DJZBRhCk0CMFktiQJQeMxzUqKQjPsRMyWBVKA8ur7UfmSUW4RaU6nE5CmGhAsK4MNQ0opGAwStpVPqAtM5CGFIYgkJwtDWmRklSYKFMrzcDjMn8LE4gb8+DGXc448f3eDat/328HKD0oXL15sB2v307e+9S3+9t/+29/nVm30n6Lbt2+/5/v9fp+Pf/zj7X337+P48YEPfOAHfq9utNFGG230/ZdBopyhOpkyevEZVBzTvfIAshPjeQHWC0BYHLYZNbaDU+UHeGGIVAKpAoTyMbLOuRRQO34oD+GBwsP5CltVKKeoqRFVx1xoi8kKTKnRRYE1FaYsscaiPA8QiMbKWsr6ayxSKFABInAcHxckcUkYBUghKSvDciapKknU0WhjCWMPB5Qonj4Y8ZGHB8TStpahHpJe92125td48eWA6oltTg+XfOrnr/I7/yJnMilx2ufF76ZMxzOq6utcfXiHoxsH/Ozf/fOUpeWbX/06rz4P1y7+Laq8wlqFcAJnFX7so0uNq1JwjiSW/PKvPcWHf+Yh3nhxxI1XDnhYlYyOnmZ3ULC7t4fn+WRZTlpoispQlfWEge9Zohi6nYThoE8n6aB8H+ssRjuyIseLE3RZYrReuahY
lifHDGMPqQQohS01VZlhdIVxdY6v0RpHvZihTYU1Bj+AwHfIsrYB9XyBpzyU5yGVR2lDqnLOXBc8/lOfwinB4UtvcOuNQ5amnthRAiJV/6utINWW6gd58/8pazUX9p5qq4VEPcUjlaKsLGaSIaXC9yZ4vodSHp50fO3bX2V/d59f/6t/FeUpjLWURW2VXlcQefzERz/OY48+xDf/6OnvT0c32mijjf4M614o4944lwaIaL5eHw+vf7/uGnA/wON+8brrkAXciXa5t3hICNEuxK9DButRG+uOHevQR+O+UJYlUkq63S5JktwFezRqYJJmPw18sb+/34IXDejQ7XbxPI80TVFK0e12WS6XLfRxcnLCcDhsv7fWMhqN6Ha7bWxMs/DeQArOObIso9/vtzEjDTzSRHhordt/syxrQZKmvdbaFj7IsoyjoyP29/fvAi2iKGqjUGazGdZajo6OiOOYy5cvkyQJ3/nOdxiNRggh2NnZAaAsS+I4RkrZxtoEQXDXtWrug6OjI7a2tkiSpHVp8TyvjWNpYni63W4Lxjjn6PV6TKdTqqoiCAKklJRl2bqaPPTQQ3z2s5/lc5/7HL/xG7/Bo48+yje+8Y22bw2E08TXSClb6MX3/TbCpYnDaaJ/m7iZZj9lWd7lzNFEpTQuJc1910ToLJdLwjAkiqIW1qiqqo28qaqKo6MjnHN0Op3WaQRgf3//ruiYwWCAlJKTkxOiKGrBD6jnrhonkTRNW2cYpVQLAVlrSZKEsixbIKlpb3O9hBD0er33nK/daKONfvil/JCJDhk//wqBLjl/9SrJ/lmmiwXTWzfp+pL3/eLPIJTjxgsvMTo6oX9mHxeETE6P0WXF1vaA4xvX8aSivzugFJLpdEF/0GNvmOD5HbKiYvryS3hByAMf+CBZOuXGyy/w4BOfYO/RR/nO7/4Wrz7zPDudLg8/+f5VcQxEYcjh62+gK8P5hx4k7iQc33iLKsu48tjj5NphiyU2y3BAkS6JPcvSt/T7CfuZwROKW4scV0AsJc46Ule7YiAs0gkCaJ0f5AoUkAgq53iptHVU6KwiDALOxh62qBg/+zQvxglP/fqneExIXv70t2BUoo1DWsA1rhoWJySesxjnVrEvjcPEyiPDAisXELgTiCJFHaOh3cqHQwCiCUO548jhrRbjrXMEUgIWa8G4Ov4EUc9ByBUg4QkwzrXuFW616zb6BYnFrb5ujC9EG6Ni4Q6g4sTdUSrcASkssGawsXqmauw5GrSjhm5Y+7R19ds1HrLCXKxr3URqpw9XO4Vw95xB3d1mIkGsHbN5lgRr70AfjXvHHdMRsQJKVq4faxCLFXf63twvbnV8t/rCCYGwNVT04lvv8N//P/4bFgev8alf+QWuPvEJlMno72zz3Ne/xunJnPEk5eh0zpvXjzk6ndITku3Qw3eOwJPMUk2gZA1kANk8xceu5vcckSfxPSjw+Pr1GW89d0RhHA+f6fEbf+sXOZ0F/KN/9Nu8fniEj2DPk3ygI9kNJb60tSuxqF1rpbNIBUrKGsZp4G8HnhArd+L62A7RFqQpCVE/xHmKfLSonW5MfXWFlDgFKvKpSoNUECQdFqcp08zw8ijjhYUmC3yEdQgnakdk53DW4YTEmopB4NFVNeg0MzA/nHPWSqzWVEphiopu5GN17TDipKO/FRLEMePRAi8QqDjhD5+5wXY3ZlZU5IXAUxKzKAhDr4YxcCxzg/I9+olHVhqccchYEZmKbuLhBYZH985gyoIoDJgWmlvHOSbXCAnhzh7ZdEGe50RhxDD0McWc0Jck/YSsMkghKC3kiwzfCynmCyIhyI1iMs6IPUE38Jmlhl4kUKqOchoMQm6OlthU4ycBt05yPCeZZRlEEUsn6PU7nOuGJDavC/dSTacTUljBonJkaUoiYWuny7wo6fQTZBi/x2+K/zhtwI+N3hWogNpWcDAYfB9b88cVRRHnzp3j9ddfv+/7f/iHf3jfqpSN/uzq6OjoPd/v9Xp8+MMfbr9vrDLfS4888sif6Aqy0UYbbbTRj5bqABVRW+5Vjtnrb2BCn73Ao3/xGi4ssdID6SOFjzMOVA1KWByi1GgHKgpA6nogbsCPAkylaUa8QimEVDhTD32tk1ht0EVJmWVUeU6xzCjSGVVRYIzFmRJbj/KRQtUAiXL4cYzyA3zPx/MCfD8CB2kRkhcVe9sGhybPNLrycA7yTGGMRUmLM1AYQ1YaZouMbujjIRDWYo0kicAbf5cLy5hv/M5L/Nxff4Ldsz0+8hPn+dIfpFSVQYqEW2+XfOBDHax0fPynPsV0OuP3fuv3+fKn30HoBzFFUS/oG4MSsgYySg8EVKak14n4xCevcObcNq8/e8Qf/PYLbE1GlKOXKasXOP9AQhyGlLpgOl2wTDOsdjgr8TxHbcySMOhv0427eGENyDgLZVWS5gt6UYKuqlU7LM5TVKMTtgcdQOGMQBclhTXYStfuK00liNZUpa5jX7Qm8AW+gkAKlKjhDVbWmc5YChGRzl/m3GOPMNjf4eTwkHdeeotprjECOgKSFeFfGkFuHYu2KudHU/eDPtYKjNoXGpCqhqXqCi2rBfPZEulRW4pKRRwHCFHx25/9HS5ffYAPf+zDKKUoq5wgDHHGghTs7Z7h5/7cT/Pt7zyL+VOwvNxoo402+mFTA3s0wMT95j7W3THWdT+nzfuBIvcCJOsuIe+1v3tdOZrF9gb8uHc/zb4b14zG2cH3fbrdbht50vS7ARWac9A4KzQOD1VVUZYlUM8bNI4bjfvIcDjk9PSUPM+RUraAxXA4bCGM+XxOv99v29EsyjdtVUqhlGpdRRonjSzL2vm0yWQC0B67+WxTbNXEuzTxJd1ul+FwyHQ65ejoiDNnzjCfz8myjOFw2EISDUCSpin7+/tcvnyZMAx57rnn2qiUwWDQumGcnp62wEee5y340jh9NOcJYLFYtI4ZTVTMZDJBSkmn02E8HrdxJr7vt/trXF+a/VhrmUwm7O/vU1UV165d40tf+hI//dM/zWOPPcYbb7zRAizD4RBjDHmet9d5Mpm0gEfzfgOWNFErnue18SkNVCGlbCN9GjinOVcN0NFAHk20TuOQEscxi8WCNE0Zj8dIKen1eu21G4/H7Wem02nbzyZ6JQzDFuhpXGDyPG9hpiZ+xhjD/v5+e90bcKWBXtI0be+7qqraa9q4gkyn0/d0y91oo41+uFWWFW+9/BYXL25z8dIVSi/k1ls3EdWCWFk6DzzMIs04ef11TJFy7uo15ssMWWo63QRdLBjdfIetwYDe2X0WaUqaZiRJzLDXwVhDlmk8UbFz6RLxzhlGR4eMb9/ikU/9ItH2Fl/6l/+MF59+iQtn93n4oz8BgUcYgCkqbr/1NkjJgx/+INPxmBuvv0EUxwz2zrGYTsjTlMH2AH9vh9GNm+xcOMPk5JTBcEg2q5hHipNMEEjFMPCZUGFKW4/ZZR3B4nv17yfLyvkDYAVbWOHIHbxZWrrSEUwyPBEyVKCMYfTMd3m+0+HxX/oEj/kBr//eN5kdpTgURoPAYHRd0CNWpMAdZ4v
VQrpzq3iN2hZUwh26QKyKgQCErAEKa6nBDABZO5ZQR/I6WwMtVgh8YXErtwTbMCONS8Yq56SGOEQLMNTODau9r2ANa1nBs2CtwNk6IhVYwS13u3TcJXHnNUHjakILbzQpLI2BRw2R1NvbVTsFq2e8xl1j1UaBwKysPcQKmmlhDufuRLe0uSx1C62pv7ZNy1ZtMNTze+3zE2tgh2ycStaBF3Crg1jn7nTUrRxUHBgLL18/5N9+9vOkixM+8VM/iR/5ZCcpj3zsU7z+7a/x6otvcnBrzHiyBG0RlUGtYBWtKxIJkS8Roi7eChVYKTDUz4daSo5Kw9EiI9OGQRzySx++ygd+/s/zld/7Bt/85reZZAVSCLoCHgngfCjwvNrtI1ASX8k7nVr97fnqjoPw6nzXV2T1vF4bgaAkeJGHCDzyWY6w9QVWYoUJCfACD+sEYBCeZH6yZFkYbswNL801JwKkVISyLmrDQuwp9AoAipWg6yliX7JAko5TdnoRi9mCXhygS0vsCyIp8ZIA4Sr8IEYjKRc5t2clfqfP7TdH9LtdZhoOlyUOx9BBoEAZEL4lKypKK9nvelRW1/eqF+KXBV2vvhf2d4dMljkgyJxmmltmswIpHcHOAF04ZFWiQoEtSypb1dE9XsAyzYlin6NRigU6vYR0lpJ4gtR5vPH2iEHkMQhjkJahJ4j8Ogam0w2wUuAbiDqK65OCstB04wjrBSydoN/vsO37BGVGKSxK+HQ6AbkTnC4q8qykG3m1I5IxmMJgtcaL1P1+RfwnaQN+/JjLOdfaDd5PzeDqBykhBA8//PC7gh+vvfYaL7/8Mu973/u+zy3b6D9WBwcH7/n+ww8/zLlz59rvgyB4z/tQCMFDDz30J7qCbLTRRhtt9KMnJWxtsSgEeppz8tzziE4XL0zwAp9KCKTykQKM00gkaIc1DiENSvpgBVYbhJPIYQ9VlNi8ospLwKKSTmtjacoKkxfki4xsOSedjchnKcU0R5cGWzWDc4fRq0GwsjhlUZ7AjxTSF6hQEne6RN0uQgacvZTwztsl8zRf2Rk64o5BAroSGAvaWCSS3DqKCtLZHLu3RV1JYXGuwvcDesMbuMkbLE4kX/vd1/jkrz7MhYsDfvYXH+MLn36RwhisriizEU9//RYf/OQn+B/+23/E2y/56GJIpVNMZRBS4UkfEHhC4ZzBUx5nd3t87FMP8LGffYDZOOXT//IZxM0Jg+U7vHP8Fa7tW/Z2BkhhKauK2XxRV7xYifIhjKHXiRj0h/S7W4RRtBqgg3WGolyS5xkiiqiyrK4uMbV7h8zG9HoxzoGpSoq8IL9TH1JXebh6UgTBymnCI1AOz3P40uF7Ft+X+MpHIql0we3RlKyquPSBR1nMZhy+8SaHt6cUDnwJiaorkayAEkfuHNmP8Dy8khJzT6Z8U7mzvgRYzzmIdmLHWIfvSaxz5LlBjlMCL2TuzxGiTxyFTNIR//Sf/69cvnKZ3b1dhJCUZUEYhAihCMOIn/+5n+d//J//v4zH0+9rvzfaaKON/qypAT4ax4wGhFhfDF6HLu5157h32/eKclnXvaBGs7/1Y6xv0zhGNFBAVVV3xYusx7uUZUlRFC000cRsNIvu6/u+9xjrEIm1lul0elckyDoYsrW1xXK5bI/fOFM0fWkcQRrYxFrbwhe+79PpdKiqqnUaaZxIF4sFURS1wEVRFK27x3A4pNfrtY4ezWcbF5L1iJWqquh2uy2o0UTXNDCD1pokSbhx4wZSSs6cOcP+/j55nvPcc8+1EEO32+Xg4AClFHEct+4pZVm2MTNKqdZto7mX+v0+Fy5c4ODggCzLcM4RxzHOOYbDYQvwGGNaGKEBaowxLBYLer1ee41PTk6w1vLoo4/y+uuv89nPfpa/9/f+Ho888gif/exnSZKkrr6MIqIoQkrJZDKh1+tRFEXb9iaSpQE9mv4cHx+3177T6bROJScnJ2xtbbUxOc21TJKkhYEal5Ysy+h0Oi1w09x76xFGDSCTJAlQx77M53OiKKLf75NlWQv4rMNN90JJDTzUOHs0+27ihsqyZGtrC4DZbNbeY83PTZZld/0MbLTRRj960mXJ1atnufLE45Tpkje+8XVsBVeu7iP9mLdfeA7jLL1OiD/Y4ujohMATCF0yXoyZz5dcfvhRhufOcjo6Zj4e4Yzl3MWzVEaijSHZGbJ/6QLpPGV6fIwwFY996pcwtuB3/5f/hXdev8EHP/kxHv3QExRFhisqDg8OKOZzds7ssnflAV5/7jn8wOf85QtoK0kXCzpJgO95zJcpZx+4wuWdbQ7fvEXU05jlO8SBJO5FBJOCfiDJKoFwAkW9YF3H1YJwDuscHqvnHOdqV00BirpQY2kcL+XgCUMwLnhwK2KgwGZzbn/1i/hRyON/4ed4EMfrn/sW09splXBoDU5asHVshVSNO4RYwQKujtIQDmvrPwiBkAJhatcRK0Cu/p83ztULyCvnDWHrZxaDQK3gD2sc/srxVdnaPaFw1HGz1MiDRLbRKLXjRr2sr1fgSxP9IhGtFUYTdVLDEY0zSAOzNLMhd/AAaJmQVYFGvStD48RRf66NVnGN/4dYG++v4lZWmEvNEdQQiG12Sr2fBk5owZE7u1i1qWm3QziLbZxOnGidPVrgQ7Y8CEaurhl3+ohbtXv9IM37rnEdqb8uK8OX/uh50rzEUz5nzsU4JE4otrd3+Llf+gme/vYzpN9YoksPYx2lNihnCQSESvK+a1vgNBcu7eM8j6987RWOFgWpNTgp6AxCvNOSX/6ln+EjH3+Ct29n/L//u3/CO8cnaG0QCBLheDRUPNyVRB74QuDJ2gHF2focK1EXtikpUbL+2tna8UOuzrtUCmfqKCGl6nlF6ftkkwxp66ImoWoqRCqJ9Dys9DGmIO5GZIuMRWa5Ma/4zknOgRVY6WGNodfrkGYZ2lpy6wgECNnEL8G4chR5yXbsUeYlvU5IkWuUq4i7EVWe4SUecRJS2Rp2dihGi4rx8YjYg8V4gZWSKAk4EyiUrdjrdsi1oTCWMEnoidoJZZ6WBEHIMIbI81mWFTuDiEleks5zwsDjcJRRWYkUlnhnm2UumJ+esp8o+rGPsYbQ88gKg84LeoMAbW2d24LgdJLR8SVZCafzOed2t9iKwNiqvhbOEoaKrX7ENLcU85wkDjldLLFGIpRi6WChoLvVYTeJ6FIQJwEaD084lgWcTJYY51ACEl9xMFkSCMGV81vo6jrV6OQ/6HfHv4824MdGVNW7m1Q31o4/SAkhePDBB9/1/TRN+fSnP70BP36I9F7ghxCCn/mZn7lrosrzPDqdzrt+Zmtri6tXr7b2mhtttNFGG/34SAnwZRPN4ignFaPnniXs9pBhgDIaA+RZxrMvvkDgKy7u75FEIXGcYLSu7aF9iQ405jRndOuY2PMRWYWeL/F7GmcdRZaiy5zlbEo6W5CPc6plRaUtpvHRlPXgSNazGavhmasHYTlUlWl9PItJiR9PUZFHHPpce6jPyaGhNJok9vB7kjAIKArNdF5gjCCOPFKn2fY8IpuC0RgZIqxBKQ
+spd8RZMtvcLGKef11+OZnQz768xc4e6HDz//K+/juN25y63DOcC/m2390wLf/23/DbLSLqSS6ylFeiKVEWIlcLZY45+gmARfODfjIT11me6fHS9895umvv03+xjHvD2a8efQHnN+d8fjjl+kPEtJ0ymwxZbbIMAaUcoSRpZck9Dodet1t4ihGeT4OiTUlVVlSlDmlEYiwQ6VLjKkwpsBZR1SVSOFjjUGXdaWvFgprHcbU1S9mNWEODl1pdFnhSUkUQKYMgXIEvo/nB0ipWBYLXnvtLbYfeQwXSOY3Trj52g2yVcTLoHaJpXIO7QS5hYr7VNT8CMn3FMKAWc1QtXator63m8xktZqcaxOJVzMyjaNrlmvGp1OccCivrmwBeO3Gy/zmv/gX/O//7n9J4IdUVV2hLQ0gHI+/7wkeeeQBvv717/ygTsFGG2200Z8pNYvLDUSw7s5xr+5dKF6PTWkWxe9187gX6Fh32lg//joIsr79OlCwDmY0i9xSSrTW5HleTwSvYIQ4jul0OveNqLkfYNJEYjTtuH79Oo899hjOOXZ3dymKgjzPmc1mLBYLOp0Onue1rhHD4bAFT5bLJZ1OhyAImM/nLJdLPM9DCMFsNkMI0bp/BEHQghONc0Qz/9BEnggh0Fq3bhQNCFOWZes+Mp1OW0BgHUJpYl+klERRRJIkZFlGWZZMp1N83+fBBx8kSRJu3brF7du3sdaytbWF7/tsbW21gEEDGzT9bs5bcz6ttS28MZvNCIKATqfTOpmsgyHW2jZ6pXHGaACNpr/NeWggGCEEjz76KF/5ylf4qZ/6KR5//HGefvppRqMRURQxGo1aN4/m+I0jRgNdpGmKEILxeEy/32c8HjMYDFp4orkn9Oo5vol3aeY3B4MB1lp2dnYwxjAajdrtGlDl3LlzbfxKA940rh5BEBCGYevCobXG8zzyPG/73twPWus2lqZx+ViPRWoAI6CNffF9n8ViwXK5xPf9Fhzp9Xrt58uybGOPNtpoox9NhUnM1aee5PjgJrNbtwiCmEvvv4Z1gmwxB9/n0tVLjE9HzBcpvX6CxLFMS5LhkO3Ll/HiPjffeoNsuqQ/7NEbDpkvZkip8KKYrZ1dRrdPmB4f0tve5dyTn2AyPeWL//SfshhN+aW/8itsXzxHPpuTTmaMjg7xpGD/0iXCToe3X3wRX3n0t/eZLUt8CZ4nKCqL9EOuPPwoy3nGay+9TiQrzl88jzi3R/fGAeVzr5Fv+bx2WKG1RTiBLxVOGJy1lFZiVq6qa/QC9W+VVdwKoAUcGcezuSXEISYFDw1DhoGHKXIOv/hFhPB47Bf/HI+EEa/+9peZHuVgTR3jIgU4gRJ1PIaz1DGxq3HtKukFKWrvixaSdXfgjxW6gHA1JOEcCLUCJlYOF35r0yEwgCfBMw69AhMsDrVyC1H1LlFiVQAjqN9ztt56FaeKWMXI2NUHVkBD7dbhVqDDHZCDlUtHDUHUX9vm9Da1Mitow7RuHCsYpGkTNaBRxyTXbZCr9+6M/OtrBDW40wAeNIUhzT5FA7PYtu13II76OFbccfa4Q27ULiBNu9zq3Lq66XeYD8ddLXJthM2da2gF/NHzrzNelvyd//yXuLYfcevGDb71ta8jvITd7S5Pfegh3nnrmLdujJinGabSJEIy6PtEewOkcIjdsyyWGWYQ0+t3iKxjJxF88ieepJI9xuzxr/5/X+OVV59lulxgXQ0vBQIeDCSPdTw6nsVbAR5SriaanMVD1nMlzfP96kLVz3V1XJGEOgKmmZ+RNTSksxJhXO1Gs4I1hADhCUQowVQEvqJYFuQlHC81z44K3qwcC6VqiERI8jynG0YcL9K7YpHCJOR2rhHW0g88Sm0YJiHKWjwF3TigoxxKKpAwTSuwjmWuSUtNVVY8/NBZjk6XiKJgd6dHRyk8U+HhI0PJoqxwQrDXjVnmBdPC0u91GSQBVmuMEKjI43RRx2oPk5CTecl0afBj6G4PUNqyWCzwnKUfenjCEEU+cehjXIE2jiy3zJcVUoVokzPsJRyeLrHWMBh06HqKIl/Sj/362d5Z/DhgYSTpIsMJyWJRUhYSh8H5HlMp6PYSdoc9Eiw4yWhW0gkrpKcwpr5HQ+HodCMmWY7Tlt3tDqeTJc7WkT3fa23Ajx9zNQPCd1NDzP8gJaXkgQceeNf3i6LgK1/5Cn//7/994vh7n4e00fdet27detf3hBB89KMfves1pRRRFL3rZ/b29rh06dL3rH0bbbTRRhv9sKgegApEPZYHcJLscMbBM9/Bhj7DvTMYJ+j5AaeHS0YHKZNrJzz44CWiZYpyYND43T7T01P0PCVdljz42EOYPMMt58h8iXGWdDpmOZ1TzCpsBU46REfRjWKCJMQPPTwV4KnaZcRZg9aryBQLOs/QeUW2KDClxZZQVBY7z5FxjuqF7J6JmY9TCm3wPYHyJDEh42lFWjqOUzguJB/cqjNHq6xAxApPSYTyEJXBk47tgeb28Rd5wP0Mb75g+YPTIx7/5MM89MQeN95ZoO2QF//oFgev7lLmPUxeWyViJU4bMCB98KRACMMHP3KN9z15ge52Fykst29N+drnXiA8mvBUlPPma7/N+Z1T3vfIJXZ7fTwnMF7CYn6CNRVSgudbosAjiQM63QG9JKkrEBw4pzFaU1QFldZUWtcN4I4dqSlSPFNXT+qiYD6fMp6e4qLuqto3R4VBHcNTWRAGbUrsCorxvXrOx/PqHHnfq11MDk9OmYmI7gPnmc2m3HrzOseTFAP0VO34YZ3DUMMf1kH6I2y7LYBKmxpYWtnIClnH8DSSQqysYutBqrEOz/PqCTxt8ZRoq2yWaYmazgmjACUVSScBLH/wh5/liSee5COf+DjWaMqqxHc+UgqGg21+4uMf4xvf+G7t4nJX+wR3v7LRRhtt9KOrZpF6HdyAO7DF/dw87vf6utaLLNa3ea8I3fXX1z9/P3ikAQGMMWitW9eSBmQAWrihcWhYb+c6lHIvtHLvscbjMQDb29t39WN3d7fdj1KKoijodrutS8T+/j6z2Yw0TdvImW63y2KxuOucNyBAAwA04EMzV7ZcLomiiKIo2uiZBvooy5Jut9uCNo3LSeN60ThHDAYDqqpiMBgQRRHWWg4PD9v9Hx8f0+v1uHz5MkmS8OKLL7ZgSuN6GgQBcRwTxzFpmhLHMb7v3wXorJ/jxiVlPB5TFAVhGLZ9buJMmvnC5k8DhURR1IIb63M1jaNKFEVcvHiR69ev85nPfIZHHnmEj3zkI3zmM59hPB6TJAmnp6cADIfDNualgYQ8z2v3eenSJaSULBaL9hqEYdg6ejTuLI1TinOONE3beczGdWUdZGpcYID2miwWC4QQZFnGlStXWreUBpgJwxBjDKenpy3o04Aqzc+DUookSZhOpy3UM5/PWSwWZFnG1tYWVVWxXC6J47iNqEmShKqqWneSBjKK47gFUzbaaKMfTXlBwDuvvIrNl4RRzIVrV7l9cAtdCYTNOXPuLKPpkizNOXfuPOVyyeloRn93i8HOLlmeM7r+FhQZngpAehzeOiKKQ
3qDmP7Zc4wODtDWsn/1YXrnLvD0V77INz77BS49cJWf/ys/jYoj9HLJ7VdeJF1k7Fw4z5XHHsAJjzeee4bh9hAlfObjQzwBxH36uwOckGxfeICTg7d49gtf5OL7nuTRj32UfD7FpFO8o2MuX9nn4PaUytRuBr6EBIHR9e8mQe24oQRtMYFA1sdp4QSocDgpONGWZ3OFROMpwbVBQFdI9HLKjc99BulJHvypD3PtZ6dc/+rzjG4sENJRVQbhwFhTO3bIeqBq4G47idVjRgNzIAQOW0cCt4UQ9TbNr1bVPK84h3UCX64WzG0NfNiVY0dpoVpFnCjEqvp/BShQu6E0VhVm5bIpG2BT1AVPbvV552hjW9ZoC1ZfrekOAOIaMKN19BDt1nX3m086DHf2vWJLVm0SrVNHayeyrsapw93Zl6B2RGlxFHHnyK2LSdMHxFrKjljBHa7dccO+NADIfUtxWgCmuZjN9TK8+ubb/Df/3T/nb/6t/5y/8Klfph9/lWee/hbf/NYbZFXtsvHBJ6/xxnHK62++g58EfOLnPsiHP/4h3nnjNZ595Tq3T+Y436er4PLViwjgcC757nde4vW3fpfb0zla1NdOOkGA45KSPNnxGHgWJWsASQrwFAihEMbgK4ldga3WOoRwKFk745iVI4tbOeAIYZBK4SmBzg0gUEq2cI2UAhkAfh1X7XmydtbVjtNU8/yk4MXcMBYSZVdQOBbpFNMiJwh98kJjcAyTkIW1BMpjGELgSXqhR4Il8QRh4BH6EkTtTDRLF8xzwe7uNjeu3+RglnPh3JCd83tkBCS6YjeRZFnJpLIIC1ZopIV+N8JiMM4SBD7n9xOM8Dg5GBPFEaeTBcNOh04YcLooyEpHvx9i/Ri7zKnKnEQIzgwUyhMsSotwlkxXCASlrR2cpSeZLjPODCKqqkJ5Pr2ky5lugARyV2AQ5HnB9iAmyytcUZIaGM9SuoHCD/x6XtDz6fYSzmx16LiKOPQYzQyJ7+MHAel8RikkvieIhGA8mxP4Hmf2hizyimy+RFLPp32vtQE/NnpPx4+m2uAHKSHEe4IfzjlefPFFnnvuOT760Y9uaPgfAt24ceNd3xNC/DH3FiFEOxFxv0HvcDhkf3//e97OjTbaaKON/qzL1VahAK6xBxeUlWT5+gk2egbvyQ9SWovvB5zZ22ZysGR+aphetfihjykdTkiKacrkNGd5OufsXsDo1g2CqINezBHLGcUyp5hpnBBE3YTOsEvU6xIlMb4XINXKN9QadJHXi+aej6kqhKzxFGttDTdkGdlySTZPKeYVrqoHxNrkqI7PcHfAYpIyWaRUZV11Ml0EHC0jXi8LtDPsdWQ9QZ/OEXGCk3X1KMKiqCtU9/dybp/8AdfsJzm6Zfju7xS89twuh7cyfuHX3s+Z81s8cO2Y8WjKyWFKmhZIabl9MOUDH7pIf9hBScF3v31EfzshL2D+9oijG1PeePoGV0TG5UHFqy/9Fmf6t3jfg/sMOjFGV2Q6ZzSbspiPCIK6D0Eg6CYxnc6QblxX4DoEdjWpbqxDm5JKW3RpEUohUOAsVltEUSLLAl0UjI6PePvt6yyWC/avXEZYhzUWIWtLW2NXhT2mjoQJhERK6txPJQm8AOmFaF1ycLwkfPQJcmuYnxzzzlu3MYAvoCcFhQGl6jmDwjpyHOkP6pb/PqjJPVZColRdpW2MAU/W1QhSYZ1BCIm1DucsnpKrvGMAgbYOT0iEcFgrsBoWszmBHxJFMUIKFtmc3/ydf8VDjz5Kr9+lLIraaUR4+L7kZz715/jv/4f/hbK8e6yygT422mijHxetu3rcC300Wo95ea/9rLtxrIMh6/+uO3esAxfrx7qfm8j6cRo3kPXYCykl29vbrUtGGIYtlPAnuZO8G5DSvJ5lGaPRiO3tbbIsax0gzpw5w9HRUQsFVFV1V19GoxFZlrVgR6/Xa+NVlFJ0Oh3KsmxdQxqoo3GBWG9v43KRJEkLVDQATOOe4vt+C1U0cSlNJMhsNmtdNRrQoHEEOT09baNXLl68iBCCZ599tnWbaGCW9Wvb6XRYLpcsl8s/dt0aSCEIAnq9HmVZtg4eWusWwCjLEt/32/7O53M6nQ7GmDYWJggCPM9jsVgQhmHbpyZa5YknnuCb3/wmzz33HB/60Id4/vnnmUwmLJdLrLWtG0sT+dPpdBiNRq3jhhCCyWRCv99vIYk8z9tja62J45jlctnGxTT3FtSuIU2kTqfTuSuGaDaboZQiz3OWyyVFUWCM4dy5c0ynU/I8b+EL51x7jre2ttp7rIGYkiRp7/miKOj1emRZRhAEDAYDptNpe9wGWNnd3WU0GrXnu7kvGreZJg5oHSzZaKONfvRUpBnp6JQwEMRbW7z60isUuWFvfw/lR4ymSygzIt9jenJCmpdcfPRBJqcj3nrrbZLYo0gX9HbPoKuS+WxMEoYMhn2CpMeNF19CKY9zj3+AeGeLr/zmv+b6i6/xgScfY/fSefI0g9GIydFtDILzjz7Oxcce5/jgBsdvvsT5K1cIo5jp6Qk7Z/aI4pBCW4LuEOlHHLz9KiZL+dhf+EuEexd56bvfJPIViaoY7O2zmMwJPI9epLAGYs/jxrJA4AhUXWTiVkCDxKH1qnhgBRXolZuDQiBXQ81jY3ihFMgUnDU8uJ2QSCCfc+sPPoetUh7+hZ/nsfOXeek3f4/J9SnCeXW8A3dDDMK5ei5JsIp/oWYMHAix2mYVsmFxCCFBOJwTbfSGXQEc4FamJYLQ1TEt0rCCTGo5V4MYFlCuXqhXdwEOrCiI1Tja3YmUMbKBT+oo4dp1pG7LXYRGA9M2u2phlTu4h3VuFWFzZ7u1084dkGTdSWPlbLICQhpOpSUt2mKN1fFXEEh9DHuneKN5nnMN8HG3BHXUDtTHc07eXQSyioVRDtq18nafd/Z/d7HI6hmI+nfs//Q//k/81m/+Gz710Sf5S7/2G1x+5xXm119hMT3h/R/8IN5Bxu3jMbku+frTb5CnOU99/OOU332NM9sDxJmznJyMeO6lY2aznJuH32FelJSmdmsRCBQCD7jkwVNdxVYgkLI+f0IJPE8ipcJUFQESZ1d4jABnHQiJVDU15ClVO8DY5nlPgjVYXZ/HejPLatoRhEAEAcaaupALKDSMUs0L45ynl5pD4wiVxElXQ0hWkK5A2ShSNRYlPeaFwVeWbqJQQpB4EqUrZKSIfMe5/T7LdIGwkpPxmE6/y3i0YPzmEaGAq7tdkkGAFwZc2OkhJqeUeYmuDHlaID2PsPTYSgIqTzHPNaYSDHuKybyiKpZY51jOZ2zHPp6E8SLHCkmYhBjl08OhqYj7CZXV+IGHQHI6zXCl5cJOwnyyYJEWJImP0SUXt3pURU6SKHrn+rhCUy6XxKGPEwKdl2wNY1IjkMaSOsfpUuMrn9xYLIbDrKSzG7E76NITDqkdy0XOMPGQTqCtxamgdi6WhspUDALF9nbCLC1Ip3O6nQ7a2hYg+15qA378mOtPcvxobCx/0Lp06dJdA8J79frrr/PFL36RD33oQ3je5rb+syxjDDdv3nzX
93d3d9nd3f1jrzdVJU3O6br6/f59P7PRRhtttNGPtgS1faaSsq58wGGcIreSshBMX73FYDgkG9b5sQ89eJHxaMThjZLXvnWDN+wtXprCwjgiJA/senzkkSHbu0OscUwPj8nGS/odD4XP1v6Azu4WSb+HF4SrvNVV7UJlcVbXg3OpKLMlqjIgLMLWVhMSixeGBFFE0h+gtzOKZcZiPKVMNaXR2KzCuJROz0fJLpPDjKKUnGQdXilyDnXOlgwJvdqv07kSz1kM9SJ8PQVRZ+cmUcS5MwU3b/07etVTPCE/yNuvFngi5KXvHCCl4OojW5zPevR/PiaIPKajgi/8zkv80l97gq/+7pvsXOyxHF/nO196mwceS7n+wjFnPMsnEsPs5rO8ePRFLu9lPHBtj0E3QRiNBjJtWS7mdHoeWlvA0gk79Lpb9DtDojAGHM5Y7CrjtdQVujJgagik7gsgZP1aVSLznOODd3jtpbcZHRWcPReQBAlCKKqqdgoxWmMKg/PqPHud521urgACHzzpAZbpfMk86qHO7FIuJhy9fZP5Msc42FaCpYWJEWwJ19TLkHHf2pIfCSlZLwxKIVBSIUTt+lGnFgNSoLVBehJnHZ6SVLquOlJKYo1DKjC2nsiRSuC0Y5nWC25xVBAEGWHk45zl+Vef5stf+Dx/4S/+8up+rq0xpRC875H3cf78Pm+99e7A8EYbbbTRj7oawOHe+JN7QZB7XT3ujUpZd9VY//rez60fr/n+3Y6xvr9797v+WrNwvb+/z2Qyafd3b1vvdTNZf//e4zUqioLj42OuXbvWLqI3DhxRFDGdTtu4DSklRVEwn8//GHji+z7L5RLnXAs4NJE061EdjetHAyYEQdC+1izcN8DJeuxHGIaEYUgcx9y+ffsuIKDT6eCco9fr4ZxjuVyyvb1Nr9fj+eefx/M8rly5wnA4ZDKZ8Oqrr2KModPpsLu7S1VVdLtdwjC8y+WkgQmafq4DOU1hTRzHKKXaeb/GkSIIArrdLnmetzEojQNG494SxzGj0QhjTAvYNCBK01fnXBvN/KEPfYivfe1raK3Z3t5mOp0ynU6J45jz588DtIU+DUxTVRWnp6ftNW1ig9bvraZPzXGHwyHz+bwFdJoImMFg0N7bDSiyu7vLzs4OBwcHdDqdFvLpdrs454iiiDRNMcbQ6/Vat5AmbkYpxXw+p9frtYBRcx1Go1Hr/NFAJltbWwghSNO0BWAa15TmeA1Y0riX3O/nbqONNvrRkACuve9ByqLk8K13sJnmzPl98qxgdjRmMOjhd2KqqkR6imsPPUThNNPJCdksJwrPsn32AqOjIybjBY+8/wG2d3bRzvHOyy+RDHa58oEnmaQpn/l//WOq+ZSP/tTH8ToRvudj8px8PoJAcunSFaLtfZ776hdZnpxw8coldJkzPjik14ux2jJflHS2huTpkvn4JoPtPoMr76c0jpc/95uAx/5jD2HKitFoStDp8dCj5/F8xRs3prx9WuABHSVJpCSzhuUqu8M5AdbVrqqIejzpXO2QsDpf9aaCg9KihEFYkLLi6tCnHyiq5ZRbX/4aeF0e/fMf4rG/rHn9332R09dGlKVcOYnaGr4QDukAUS+6IwRS1DGmUgKujoGpD1y7N1ix8tBwd4olpABr65mLxq1CAt4qBljbOgLGCtuCHwbaSBK7gk8EjWOmax1tzcrho+l7C2S4FSyyBlGIxkqDxsRkHcOoN3LtfmrQpsEj7jXvEKw9b3LHI8RxxzmkcRJh5Tzi2q1onT/aowqxduw7e7xTNkILlNSnwLUGLDUAU/d1hde0rEnTh8Y1BbG6ZkLcOSd3qQZysIbbt2/xL3/nJr/7B1/m0pWrfOCxa3hlxfgPn0d5ER999AFOJ2N8X3GSSz73pWe4OXYc3T7g+PiEvKjIK0Ne1RElzVyTBKR1+BLOKfhg1+dMBEpaPCFRsgaepBA4rVex1fV5rd1vZHsPWVvfG/U5dKt5mjs/DE4IpFdfK09KhAIZKCwCXRqirl/f69bn9HTCi6clT880b1duFTNTn9tACirjcMLhhKAoS7pRQFpV6AoCISjykr1hF8/VUTUIy85el1KnDLp+XRw0U9wepYRCMaoqAinpJh7dvR3yyZKOKciUoMw1Vlj2BxFaenSigDRNGc2WJD50Yp+yNIxHcwK/LkYKfI9eN+bwZIo2Aq0USnokziFMThQrnHTYwhBEHp60DAJIQkckSnIf4lChkEReSOgZpAroBD62qsiqijhWZMsKoS2dSHI4yWqeSkqmi4qsdGQO4kAwMSXJsMP57S3iIic3JTjDIIop85xIgddJKCpJKC3L3KKcYGurQ2kENivpJyFpWdQ///Lee/U/XZsV8h9zOeeYzWbv+v6fFcePTqfD/v7+uwIDeZ7z+c9/nr/5N/8mFy5c+D63cKP/EJ2cnFAUxbu+f+7cuftWNDSVG/cDP4bDIf1+/3vazo022mijjX44JEUNOuA01oFGUK0qF2zqmL32OvNHz2IGWwwHPT705KMc9I8YT3KORjmBMcjK8cC+5ice3aXj+SxuT+ps896A3d1tkm6CCoMa9vBqO8Y6DsWCde2EgHMW4SuEVbXzhC0RWKwtYDWQN7IAKZHCI/A8wp0hnX6XfL5gPpozX6RUE40sDX7ksXuxz/OvwHN5xi29wEfiC4G2BqMkeVHQKVJE1Meia0tJoerBrzCEnseFcwNOp89xePIGl/2PcD64wvHLmu/cnFKFPvHQY7DTIYhqt4WLVzu8+dIxo6Mxr714C5PNiZWHe/WIT/YM1eQ13nnzq3juLd7/YI/z++foxRFQoZ0jryqmoxGh7yEIcLa2WOx1+vQ6w3oSXCmMNRjn0EZjceRlSqULjMnxPQeurnS02lHJknw6IX3rbV554R1GI82ZLZ+9vSFJ0kGI2mLc2bi2lscgtaXIM2xZ4LTDmnoCxQ8EStWVt8fTKcW5R8icZjE+5vDWiNJBvJqsmMoIGW9RFEcoaxDCUX4fJuDvGK9+fySon/uVFPi+h+8rBBIpBZ5SaG0QEoTwKMqSwA/I8wwQiJV1aD3JUlflGGOQSFD1PpVU5IVmNp8hlUWIPkHgU+qSf/uZf8tHP/5x9s7uYqxFiHoxZWdnhyeeeHwDfmy00UY/1lp3vXg3148mSqTRuntHs20TdXE/t4979we0i+v3Aifvdpx7YRHnXLuIb61lNpvR6/Xu2/571SyGv1sb7z3O4eFhO1fQuIA0cEYcx228ymw24/LlyxRFwWw2a+e6Op0OaZrieV7rspEkCVEUtU4RTfxI4xLSLPo3cR8NDNHEgmit71rkd85xfHzculScnp7i+z77+/scHR3heR5ZlrVARhNBcvv2bcIw5OrVq8RxzLPPPst4PMY5x87OTtvOJh6k6Vdzbe6N0hFCYIwhDMMWMhkMBq1rReOmMZ/P25iRBvponECgdtNo+lsUBf1+vwVGmuvu+z6PP/44Tz/9NN/97nf55Cc/yfPPP09VVS1o0kAb8/mcJkZm/fw2BT7L5RIhBIPBgOVySVVVLdzjeV4bHWOMYTa
bUVUVURQhhGhjd7Isa38WGpeNZu6p0+m0ETINKKOUaqNXmvPbQB/NdW6gkOZ6Ns4ySim63W7rQuN5HkmSoLUmiqIWRmricZoIoQb4ANr2brTRRj+6CuOI0WjK6fWb+J7H7oVzpOkcU5Zs750h7kbMpnM6wz2Gu2c4GY8Zn5ygvIhLD51HSc309iGBsTxybZ8o6TCZTBjfPuHsg49y5pH38fbzT/Odz3+e4dYOD3zoE/WKsi45vXkEsv6/avfcJXSx4NmvfZ5uEHLxgSuoMEFXBZ04QHk+tkwRfgepJNUiZ/fcObo7O0yPbzO6cZP9S5fYvXSN4+PbjG/fwnOOpL9N6CvSecarb58iraHnCUIrmFQOg0QJu3LTEOAL6mmWlfukdK2Jg13BDGblAHJQAdbgRIkQlgd2Yga+Qi+nHHz+32HzJY/98id4+K8kBJ/+PCcvH+PmFVXlUE7grEPKBuKQmNq0AhxYZ2uUYEVEeFLWhSnO4ssa8VhxKrWXhWA1B2PRto5mEUKssA1B5QROCaSwSAullWjhmjIXTAM6uAZ3cFhW8aoAuFX0jFs5jTTPeO6OUQh3/7v+66P9cv1FtwIm7n3Ga5/97uxvHQKp23UHIGnhj8ZxZAXANPMZ7q7nRlq4Zb1lDebR9r6Bclb9bFxKVq/UMEzznOsa5KQ+vhSr469MUO6k1d45r6pBQoQgzea8/OIzvPbyC8gaK8FTEl+BcJLQr6+9tRZd1fNXrLUHaucYsdqndBAJwWUFH+j67AUOXwmUYzUvsnrGNnYV21I3XbjaV8Y6s3oeq8+IXbVJSblyp6lhqAZskaoplgJUfZ+ZUtdFOdZSGsHt0xmvnOR8d1bymoFSCAIhKK0g8SXGARgCWRf7+EqwLEp6UUCiBB1f0fFqJxJfOXqBZLvnk2cVvU5cP7dHPsiKbhRwWBRoqej3Arp72/iVwy2m3MoKFsuSJAyYLQv6nYjt7RihC5QQbMcKT4Ez4EnNsJ+gcThn8JRgkeYYI5GBT+wFBNJQ5hqhAKGock0nqJ/T8kKzt5OQloaDeQUO/G4HVxi0yUlzQbfj104pWMKwLjZPQkXcV8yKCpylUiHHoyWRp4h8W885K4+kE7Ozv4MyhkWW49uKvWGH0tTPfC5w5KWmP+wymswoMsP+MGCZVgit6fZDxtOUne1tPP8A+6cQ67cBP37MVVUV3/3ud9/1/Waw/INWUwHwXk4RX/3qV3nllVc4f/78ZnD0Z1gHBwfvmVE6GAzue/3WLTvX5Xke58+f39hfbrTRRhv9GEoIWltN5wTGCbSr3T8kEADZ7SWn/duUZ2LCTsTO7hbDboflIuXw4AD10hHHc8VOItjdHRKpkCD0aqhjVU2BqMBU2NwglIeTsj64qYl4u7JlRAiEVEjfw48idJ7irFsNZGU9gWAdThc4l4M1KF+hZEA87BH2uwyygjzNSbMCESp2zp/ncHGLg+dPMc7hCUcgJM5owGKtQZcZvf42uVZoXYHTq2oBgVTgqRBfSZIw52T0eW6fRAzjD/BA+BTTZcDpgeEwKzDS4VaVAoH0cVXFpc6AD+76xCJndPwM77zxLOi3ObftuHhhi+1Bn0AprC6wzpBVJYusoNvrgHBUkwyHoRsn9PtD4ihGybqqxlSWymoq7ahsgdYVlTE4BJ4UKCnR1mJ0gUVwdPOAb37tBq4UPHgh4rFHLjHo9/F62yg/JJ/N0bpbO37o1aBMV9hsiTAasarICQNJEIQUecapVZRbOxTLOdNbh0wWNZwaIphYiUr2CGOFyyG3jko48j8FIqMep9+p0oE71qVNxcufpqQURIEiDAK8wMNXIVBXfSihQFT4vofRFhH6WOfwfI+irPB8hTAOp23rLut7q+ey1YSEsRZjHek8JwwVfui33rLvHL3JZz/zGf7mf/G3MBokGuX7RFHCk088wW//9mc21a4bbbTRj63W41Pe6//Ce909mgXu5ut1l4Rm+3ebN1lfBGjG2feCHevtaRbS721zURSt40RVVaRpel/nkHvhkvuN7e+FPtaPMxqNOD09vSsatgEugiDAWsve3h6LxaJd0L98+TKLxQLf94njuIVADg8P25iPBkLwfZ88z9uIlgauaACVZoHeOdcCFI1bxWQywfd9kiRha2urBSkaKKNxpdje3m6PKaUky7IW5Njf3+fixYuEYcgLL7zQRrg0x1oul0wmE+I4bmEGrXULFzSRMs11sta2oEsDiUwmkza6pblOi8UCKSWLxYLhcEie5/i+357DJpJXa01VVQyHQ7TWJEnSRshcunSJ1157jU9/+tM8+eSTPPXUU/z+7/8+1lq63S6+79PtdlksFm2MS9PeBrgB2uiZ9UiVBqrwfZ9+v09RFG3bG7eVxjW4iU1p7ovm3DSuLmEYUhRF69TS7XY5OTnB933m8zlJkuB5XhvJk2UZeZ63sEYTFXP+/HlGo1HrorJcLtt+FEXROsQ0MTLrUElZlpRl2d4H6+3baKONfjRljGExnjDcHYIfc3J4yGK2pL/VpSpLxgdLtnoRWpe89drbjEe32dvf4eoHnmQ+Tzk+OECGMZcfu4wQitOTMel8wvnHHifZ2ePbn/k0o5s3uHDxMueuXUJbgy8l2mikH5BEktAPSOdjjt96nZ1uhJUxMu4ThRazWFJYjzhK8AKFTLao8pReNybsd1iOTykmp/S3evR2znDjpec5un1Ib7iD8Gt3yCDYwY/fZn+7g61gkhkOFjkS6kgUC1oIjHCYlSOGkg4lHNatFv1tAwaIVaFLDYccAKKwOFcileLRMwmhcJjlkltf+gJGWx742Sd56K/8KsFn/4Cbf/QGi5msI2JFs9jeQARrzyD2DlxQy63mEkTtwrByllghGEhxpxhI1UYKGFuP70NPooxBWEUhDMjaq1Y5yF0d8eActYOtcEi3ipOhgQlc6yahGpBBgHC2dd+4A1DUf7sVCdAAHG41Jndi/TlKrLa78/zonLvLI2PFXqzcPWpXgvUAlWa+rHG6sHe5UaxBKKt9CVFDEvcCIatWr2CPO04dknrxulp93oqVo0b7idVn19rRHK+BaGpwpAaIGmeO9pMruEIIuYIuHAiJ0XYV2yJY6jtwjWw73ji/NE4fdRSRFJAIwWVf8ERXsh04fAFKNEU2Ek828UJiBZo0AM7qOgtRQznW4oRAKYGnxMqVxa0Kz0Cpeu5RqdplVfh15VJVGqQD6QsqA5NZyRuTnKfnFa9qWKwcbbQz+AK0AVmnK2OdpZcE5Kb++ZNWEPmS/W7EMltikUS+Ig5rJxupPLSu8JSkzDMOphmzDCZZiRcIds/usdXrYRYzRsscqzX92McqDz8K6fYTqrJCWsdwELBcaiyihjFcHdMc+h6L3JFlOXESY4XBUwHDWFA5SVVYEJIiy+knEdIaNIplZVhONKN5yvb5IUm/j1tqdDEhSkICT1KVFRoQiUchJIHv0xuELPKKvKiQKIoVDKI8n8BXzPMKQp/9/V2cNRRGECY9BiKv/8+wlu6gy83DMXvDDtNlzu3DGRd3uxgspjDEkc98mSOcwZocY+2fSt
XXBvz4MddisfgTwY8wDL9/DXoXJUnClStX+OY3v/mu24xGI37rt36LT33qU38mYJWN7q/bt2+/a2QP8K7OHUEQvCv4ce7cue9Z+zbaaKONNvoh02rguBo+Y1yd3RlJ8IWj1B63DzOyR3M6VYFErcpFLFI6AmUYhgYPS5ot6O52UGGEqSpskeOMxlmH1RXS81B+BKLxrrQ4pQCDcwKhfECgggjpRXhRgqmqugJCW6zTWGNxWmNwWKspdQ66QlYlQkiUEvSGCb3dHsoL8MKQc0PBlq84LTUSSbSqMjGVQ1hJVebgNFGUsMwqjDZI4dWTAdYilU/gKYbdiE4csrtTcDL5Brfe+jIV+/T7D7Ad9LFIEB4KD+csShlObn+V15fXkXZGN0y5vKU4s79NvxMTKoHCYo3BCklhYbaYgJB0Oz3SPMci8KTHsL9LJ+zhqboSptJQWk2pNUWVo3VBkS+pSoOpytUkhsPoiqrIUUoyOp1ymFo+9VDEEw+fI45ihLMYNM5TGFPW8EmeYgwEvoezFr2YYqkHU8oThL7E2YrJcsly+xq5sKSTEdPDGaW1+EJQ4LDhkOGwx+nJEZG15DgK970fkwkg9DySyEdKh1QSbd0qP96ija2rgP6U5v2lFIShTxwFBF5EEMUEntcuaAVRRJ5lhEFEpesKW0ttJzqaTEjCkGWeAvWCSqkrmkkW69yqelagBBhtWcwzfD/E90I8X6B1xue/8ml+9md/lrMXzgPeapFE8vhjT+D7HmVZ/el0fqONNtroh0DrUSj3Rr3cC3k0Wv/+fpDHvSDFuzmB3Lv9u23bOEmsQx9ZlrVgge/7f6wd94Io9+vvu7mNrPczTVMWiwW7u7uti0Wz+N9AL1mWEQQBs9kM3/cpiqLdrnFpaLZpnD7yPG/70Dh7eJ7Hzs4OZVmitWY8HgO00AXULhadTqeuHEwSrLXEcQxAt9tluVyS5zlhGHJyckK322U0GlFVFb7vt64a169fR0rJ7u4uZ8+epSxLnn32WbTW7etN+5RSrctKAyWUZcnJyUkbg7IO76RpilKKJEmYz+e49ve1bNvQgB3GGJbLJd1u965rLKWkLMsWqmjOUQNnNGDJQw89xDPPPMMf/dEf8bM/+7Ps7e1xdHSE1vqu89HEuGitWweO5pw0riwNjDIYDFpHjQa8aa5dc23W43b6/T5SStI0JY5jiqJogR/f91sHlOYcrve/acd6nxugqLmmzfVu7vUG+mmcRRqQBWA+nxMEQQvnZFnWOp8kSdJCJI1by6bIaaONfnRVFCXDc+dYzOY8890XOLsz5IHHH2UyG3N0+5jz588gwpCjgwOcVTz4yENsX7zE9Tfe5Obrb3Dx4j6X3/8kRZFy+s7bKC/iykc+yTJd8vzn/x3bgw6XPvwIN67fZLHIGGz3EMYyXpYMd7ZIIskbb94ikJJzF68gpCMrS5LIJ+gmiF6XMCvw45jKgF5O6fR7qE6f5XiMs4ady1dRYczx26+jq5zt7W2k8ujvDFHCcfO1lwkDj4cuDRDaMT9Y1EUpOKSQeMaSG0FuDNaCpxqItV5oN8atIlRq9wMlahdPJwSVdVyvLKUFOy1wTnBtN6EbanS64PCLnyc/Pebxv/YrXPvLf4Xu/ud5/fefZXaUUhSOCotaje+FE3UEC9RRJfbO+Nu5ZnkfpKxBCoOjjih1K4eJerFeOrmKLbEYBNLWcIdfkxNA/U9la7SjQmCFq2EOWwMwdRNWSEDz7GPBiBXI0XpkiDtzE+2zklu5YtRtkjisaN++4+SxDkrUyR2rfbMGlKzeX/3l1hxJGhjFrcW6yAY2WXP9uGsfq/N7B75YtcE1bnB3+gH19Q8lGO0wKwcPtzr2ekuaei239jknWME8DuHq+0mI+ro117s+plxrp7jr+PVn7+6EWJ3rJlpHOJBO4EsYSngokDzQUfS8GvoQoj4vnpQ1TCElWFfH1AiBw9RtELUTSX2YGg5RK0dW4VaRLxKEXEU7K4mQ4IUKfIVQEl1olCeQUqERHE8KXhvl/NGo4GXtOKkatxaBkgJPeVTW4IyjIxVJqEhLAwICIel4lt1uzHie4nsKH0hCH60UuqqISsPOoFO7vxaaru/zwtGMS5d2eOihC1RZyfGtI5yu8JUk8BzCBxv7dAcBOFiOlsSJR1lZZlmJ8gJsWTJIAmJfgA9+5Vg6D1c6eoMOWx3FIi0w2tLtBJiqYO9MnzwrKa0gLStuTXIWUrO102d4ZoetrX0OX34TX1Rsb3fJK0E1W5CHAbdmBe+/ehG5nHB7uiCd5PhB7Wob+z6ZruOp8wpE6LF9ZhvPakoMsZJsuZJSl/heh+1ByK3DOdLAdFExnc84t9vDOIfJSpIkYZHlhL7C73W4OaldAdWfgofBBvz4Mdc3vvGNNqv0fhJC3Hex/futJEm4du3an7jdP/kn/4R/+A//4QYE+DOsg4OD9wQ/4ji+74ROY196r6SUdDqd72kbN9poo402+uFQbXkocKv81WZwKYWjxgtqB5DJ0jCdL9nZGuJUXUkhnMNXHv2+pN8v6MQeJwe3qLKCrd0zJMM+weBMPRFb5FRphlmBDEqq2m5UqtqeU0qE54Pnr7Iy62GwND6q0ljncKZ2Q3Da4FxdTaDzjEJXGGERKLAWnME3FqkFUpboxQzlFnx8W/LZI4l0gkBITAVVmeP5HlWeU5UFnf4AZ0IyCwaNFA6hFM7VUyVKSYQHnh/R6QScPQPpMud09A2my5K8cuSFA1cP/nwFkS85e8Zn0E/odQbESUCo/HoiQhqMrtDOkFvBdDKm2+nSGwzQziAwSBz97pDBYIAfBGhrqIxmmedk+ZwsnVGVmiIvKQuLKQVFIcDzyNIKOhZdVQjrUy1Tru0oHr6yg/IVk8Upy8WSYrzNB7AYrbEWtDZoXdtBGmtw+QJtDKUBIR1OWtJ0wqiExWCHdL4gPTpgNM2wCHwBM3y6SY9SZxidsXAW5+oKie+lJBAoySc/8pP8ws//Aq+/8Tovv/oKJ9MjyipDl5o0S1lmBWVl1yxFv3dqJhSklHi+wlOSMIqRsp7MCIIIKSRhGGGtQKk5xhiUVPS6GmfryQrfryeepDH1pIq1aG3ryQrA81VdPVMaiiKjLAKE8HHS8s7hdb761a/w63/tryKNxFqD8nwuXrzM1laf27dPv+f93mijjTb6YdO9bhfAXdEU6++tAxX3Wzi+n3MG/HHA496omfu5day3pfm6AQS01m3UxXrb7gVY7te3e3W/SJsGWNBaY62lqir6/T5BELR9t9YyHA7buJLG/cL3fZRS7OzstO1cj5nxPI9+v9+2J89zmijiKIraCNsGImlcQ3zf5+Dg4C5nlPF43DqHNP1oIkkap90mLqSJXG4cSa5du0aSJBwcHHDz5k2stQRBQLfbJUkShsMhZVm2sE1zHbwVxLl+rhqQoIEptra2Wgikqirm83kLvTTnIc/zthCn2W9VVa3jBtA6f3S73TbmpCgKkiThscce4+DggN/93d/lqaee4kMf+hBf+MIX7oJymrnHBtpo9uucoyxLPM9r43Ca6J5+v
49Sil6v184XGWMYj8ctRNFAMNvb23cBLvP5vHX5aFxA4jimqqq272EYsrW1Rb/f54033mhBIqB1fEnTtHVRaZw8+v0+/X6f27dvt+ewqirOnj1LmqatU0tzDn3fvwsQaVxFmn1utNFGP7oKwpDlIufrn/0ana0hZ65eY5anHF8/ZO/8RZYVmGyKNI6HHnsE7SveeOE5Dq9f5+rly+xfu8Z4fMTt69c5e+4i5x99H+OjWxy8/ALXHnmE3s4up8e3ufDgg+SLjLLQHFx/m729PZI45p03XiOQAfvnz6CEQCgY9gdI36MqNF6oSHZ2mB0d1ouswyEWQTGbIExFd/ccZVWyOHyb+WiGFB5xHBP2erz52k2yxZidgc/O7i42X+KFHqHv0QkqRGlRQIpgoevfVVKtinoMBF49n2MAQz1ubhbGjeOu54jbWmBzjXU52joe3E/odyUuLZi/9Cwv/FPHI3/lz3Pmz/0qKox56/e/xejmAlFojK3dKQUO6UQd08Kd39/1+LsuZGDlSGJcHbkiPIlztl2Y1ysqxDqBdHVchxUOBaBWAMLKFkNIsFZgVou+jQuGdK6FGowQd+JgROMOUasusoC7y1Ia+kGsHDjuhhhYASysgIh1/MK1267Fj9znnnX3fCPcelvq3ctVO6y405e2hUK0AEezk3U3krue85odrgEYbXvdCpYR61uttpG0YIq4h2K5A1zU1+yuJSjRwMesbFLuRO1IxF0QiHD1/SIExFJwVsLVUHAxUiTKEkhRR7ngEFKghFs53zq8lfOJkDXEIh0oJVFCYo1GCIkUtZOMEgprNG61P+XVgLGzID0QngAl2zYLCaVzzFLLK+OS74xKXtGC48quwKb6nkpUHR1TOlsXzoWKWalJlMJXklg6tgNwtkJ4Ch9LJ/RY5BalNNu9kDASLPOS26dL9nd2OMhOkaHi/JkB/c6Qm7feYjFfkviC2AuwQpEawU4UMkgCDo9mOCHrmGGnCZRC2IooUsRJCFJQ5AWe9FC+IO4m7G+FLNKc+bKgG4VUec5g0EELh/A8PFMxmecsrEN2fbb3+lgvZO/cHroQ2Bs5ZSkwWcbO2W1evXHKpf3zqHTCfDbFVIog9pkuC3LtqLTmyn4Xo0KO04xeb8ByviD3JFtxSKQzjClxFrzAceM0xXegPZ/5vGAriXCuwuYV3ThgmWYoq+mfGXCUWnqew5PiT2WebwN+/Jjry1/+8g+6Cf9eiqKIixcv3mXdeT8dHR3xm7/5m/yDf/APvo+t2+g/RIeHh+95DZvJknv1blEvQoi2emajjTbaaKMfPwmnWKXC3lVpIIRDO0XpBGkO4/kSawxSqnoMJyVRFLO/t0WVT8ELqJYVo+MjRreP6W0lbJ85T3/vDFGvS+CH2KTCaY1U3spjcjXAkgIhFNYasLYeNGuHNabNOpVeXYmJ1xh2OrQpWSxmaGsQQoGoh2JS+LWFoykpTUmeVjw4NNxc+LyZCgSSSivKrCIMQgyWbHpE2K0nWZwTFFkKUmF0hVpVlkgpEc6iPB9lDBJNZ6vLoB+2i/S6qpCeRJcW5Xn1wNJZ4jiprShF7ZZhV1aYFVBpy3ySsr29Q6eTYJzFOYuUPnEU0086CHzSsmCeTZlPJ8wnGelck2eCZemR6YjMKFID2gqkErxvWjHcdeiyREYCz1kevNhBKMXB8U2mpwvGpxrO7/OoY7WQY0A4rLEY4xDSw2VTSl1S6XoSwGCZlSmz4AJzKZlPTihOFsyr2vJVO9AqZHtvSGkWzI41mXNo4N3R1f9wSVFXhuzt7PBf/R//a375L/9nVJXm+OiUp7/1DF//w6/z7ae/yTsHbyPGJ8wWSyr9vY19UULUk29JhzCK6CQ94jAhSjo1HOX7SCXQsSGJO2hT4CmoKltbigqYzSYoIRFCYqwBmsmzupJECIk2FiEsUkiMdpRl7cwiVQflCXKd8qUv/z4/9wu/yNb2EGM0nh+wv7fL3t7eBvzYaKONfqx1r+PH/dw37nXOeDfHjPX9rMe/vFv8yjrMcT9Y5F6njiYGpJm7qaqqjdBoxvP3cydZ//yfBIU0oEuzjyYmo1ksb6JAmoX9ra0tmugSqB0arLWtw0fj1LC7u8tisWAwGLSxJmVZIqWk1+u1ThxCiHYfTaRJ048GmmicKwaDAdPptHWxaOJGyrKk0+nQ7XZxzpGmKdeuXUMpRZqmlGXJZDJp3W/jOObFF19kOp0C0Ov1iOP4LvClAUsA9vb2qKqK0WjUbtOAL40rxWAwaMEFrTVZljEcDtnZ2WkjYhp3jel0ShiGDIdDxuMxnU6H09PT1plDa02v16MoijZSp9PptG4eDz/8MF/72tf4+te/zi//8i+zu7vL4eEhYRjS7/fbmJd+v89kMmmjgdI0JQxDoigiTVNmsxm9Xq91dmlcPpxznJyctE4vjbuHUoo4jhmPx6RpynA4JEmSu+6VZrsgCAjDkPF4TBzHLURy8+bNu352sixro2cad5CyLFuXkzzPyfOcbrfLbDZroZzFYkFRFEynU5RSXLhwoYWJgDauZj6ftz8rnue9Z+HURhtt9MOtoij53G9+jgceOs+DTz7OfDphdnDA+cuXePv2bbSVPHxln+0HLpNrOHz9bdA5T37oKaLugOnJMeVixiNPPkUQRLz9wrMsZlOuve8JkI7ZdMnW/iU8aXjzlZdJD9/h0pWLZFnFq8+9wNagz/a5cyyLiu3tAcp3LOY5oXX0zlxAhAE3X3wRTMXeAw+R9BPyxYIo9BHEFPMZVVkyOzmiyjWDnR5hp8/hay9z47kXGJ4/T5RcYFkYhucvcXaumU4zrA0JRcWsNKQOEl+AMZTC4Uy9mG2to7T1gqiqy2RqqILa7bVWPe9jcByWDmsrtKujMB7YT9gbBLi0YPbi0zw9G/HIX/51Ln7iPyM6s8+b//b3OXlrzHJWgZFgDEKuXEaEqsEJUwMhVtRFIBaLcIq6PkJgVovmTYSIZBUXbJu2gbFyNf/kEHYVy2EtRZ1PgnKO3NZQiW5iWlwNuwgHTjTBwY2bhmutOdqoGtaAhYZyWNl0uBXA0GIR7eOXa9GP+mNiFY9yjxrgora6WP94He3ialDmjnvIytWkteGorUREY6bhXHu+RPOcJ0DB6vUmjqV2p6hWUStS0DqXrHdzXWK96+35sytQ5M57brVt61pyl/tIA5KItX2JNYClBj4UEEnYlnBGOS6Fku0AEmVRkjvRy1LiS/BV/cztNQ4e4s7xxOqesaIuqFECPG8F71hTu6cqgRdK/H6CXlZIZZGhAqUwlUYpDxAURjDKDS8eZXxrXPB65bil79wjAvBEHaWipCLyAkqrOc0r+oFPaSFCMAgEEoWuLNtJQKwU2hq0ybm2OyBRlkEiWBaaw5Hm1aPbTHFcvbDFMAq59dYNJuMJO/0OoTJ0AslJqvH8ukBoOk/xAyi1T6odnrMM4wCJxY/qn//RLMUPQiwQJwmRMIxOZiwLg8MnqzTdKCEzmjzVKOdIPJ9+P2IRSfrn9jAi4PRozrdufokHzu1yK6+IraG/t814NOPCIMY3KQcn
CwJh8D2fw+mcaapRkeTKlX1cXnIyn9Lf2kZg0KFPgqJTzIm7EdOlI4wkB3NNlTq6iWQ+TemGCiHr+dQoCqich/QEoaeYpyV+7uiGrnYKuk8R/H+qNuDHj7Gccz804IdSirNnz9Lv91srzXfTP/7H/5i/83f+DkmSfJ9at9F/iP6kqJd3s7Fs8mbvt3232/2etW+jjTbaaKMfHjUDL+vqqg/rJI3hpXMKjaNyDm0Ft0c5eZ6SiDomxfMUSRThyy2yuQQJ3f0BVV4wG01JxxnF4i2W8xkXHn2UMO7jeQGOCkQNcThjEM5hqpXttaltoJ019SBSSKSSCFkPhZVXV4A4WzuACARGG7QuEMJDWEdVlpTlDF3ZGq4QDmfrmIwPDS2zMsQ5KCtJoStirclKjSckxWKOn/RJ+gmeL1nOp+B5NT3uVpWTFpwtQcjauQRLIED4IdarEHEASFxcu4Tg6mxNIeuJBesMhgpjLNqBcxKJ5PK1KxhrqXSOsY6iXJDnGuV5FNaSjo+ZTFLGJymzOcxzn8xEFFaiHVSrahUlLB3fcH4o2Ulqi1BbaaznU1UFZTHnzbfGzE81i6XHuPJ54KkzlLpE29pNRQkfbUqcq6tHpQZt63gZz4fKGIrCMd/bY7FcUoxH5JOMzNUTDqkQeH6H0/ERQmqM01SA/h5S+FKAL6EbBfzEU5/kZ37pZ4i6CZEQ9HYGPPDoA/zFv/YrHNw44Iuf+zL/+rf/FX/wlc8wnafvWgn9HyNPSeIoJgrrRZAwiOj0uiRxhyDwAcnWYMB4Wi+CTCaG3b0zWGOZTqdkKHqDLXCOUhs838MBRVVQlRq5WiixdgUNraCcLC3xPInn+wTCRwnJa9df4ZUXX+ajP/mxujrcWvq9HmfP7vPccy99z/q80UYbbfTDpnvBi3sdOu4FJta3uRfuWP9cA2Dc7/314zXb3tumdfiiUePiIIRoF/HXF8abbZp/7wVU7ucIcu/x1vvWuC0IIVrA4+2362rmJnKmLGtr5SRJWC6XzOfzNlbEGNNGbxRFcZfjQ+MK0YACVVW10MN0Om2jOBrXjnXwJY7j9hidToeqqlqYod/v3+Wq0US3NMBHt9tlOp1SliVnz57l4sWLSCl5/vnnKcsS51zrcuF5Xhu3kuf5XfsKw7CFVZpz2IAfjWuG1roFZIIgII5jDg4OgHoOpixL8jxvI1Nu3brVgiuNo0gDZoRhSBAE7O7uUhRF+8fzPAaDAWfPnuX3fu/3+OhHP8pTTz3FjRs3GI1Gbduac5UkCYeHh23bG3hICEEURXc53CwWC4bDYRtl0+l0KMuSMAyZz+ct7NLEuTSOIkDb3qqqWCwWZFlGHMdsb28TRRFVVbV9BVp4Z2trCyll27der8fJyQl5nq+elSbtuW7uEedcG63T3JfL5RLnXHv/HBwccObMGfb39xmPx3ie10bbbLTRRj+aypcpvd55jPR44dsvMuyFDM9s886NmyShx/mr5+lu71BYw+j2bbodn/PXHqJIC45u3KLXjzj/+BNo55iOT0mShJ0zZ8iLDGOgv3eWNM+ZHN1GeglX33+VV55/kWx0wgMPXaW31WMympJ0+wgVUJVLPGWJh3uA4q1vfQ3PWZJzV/HDiHw8wTmDSMLaUTPPOLl1E1tkDIbbSKFYjG6TdCI++MmPE/aHDPbPMTm4TjaZsLM94sr5HM+bMU49ilmBZ0sCazFWoFzNYBTWUViHdXVMiqKGPZoolnuNLFaeEBwZMLnGUaClRJzrcnY3wJtlFIfXeeWf/nPy8S9w9RMf5OH/XUzyhS9z8zvvsJyWFKXD2DtjV6gjUZ11LWDROGMIdwcysK4GGKy4AzsIVkCurR0majcLgVqhFr6s30PWMSH1vIsDWxeaNFAEK5eJFmRo/xZ3Or/udbH28hqWS8OKNPhC4/fRuHWIe47QwBxOiDu7dneACbeCWhSr502xdry1Z0e12odb9ecOrrNGb7Tn0rXXsmmPW53f9V6LxrCk3WJ1ztfa0PzWbM6XcGDa90VrftIcdh0YcavPt1AGdySdwEMQCkFPwK6EMx7sBIqeL4gUeJI6gkXWXi1S1JFG4FBSgK2LYaSo998CJav2KCFacES6+tlbeYKg4xEM+6Snc5TvCAYJVgtMaZBCYLWlKC2jrOLlUcF3Rzmva8HtdglufZwgauceY1Yur4LEq6NNAukRKkdeWTpRQOiBwuIFAZ4RbG0ldBPJMPFRSnA0M2gpWGK4eGabnTCgHC+ZHk+IfIEfepzpdzmcLJDOcvbsDrMiQKdzfKkIlEM7SxIprHHkRcmeHyMDj9OFwymBRWJ0jq6g6wliTxIoAZ7EWYvT4AchZVqghSbXmsH2DsrziSSELmcYSRaTGUqXyKTLjRsn9CNBMuhyfLLAdwYVBORpTuBJzmwH9La2KfOC+XzB3tldtAhYlDlFFdAp59ieYrrISHp93joYYytHP47ICk0/lqgwZDzJGIRQYtEmp99LWKQOz83ZPTdkNKnhc+UpvtfagB8/xnrnnXd48803f9DN+PfWmTNn2NnZ+RPBj5dffpnPf/7z/Mqv/Mr3qWUb/fvKOfcnOn68m5pB+b1qBv8bbbTRRhv9GKq1lLQ4FHZtiGcA4wTlqirk5NSQphmhFyEBL/TwlCLyfWI/IisWaGvo9rfoJAnWCRbzMVIF6LTE86t6IGgsDl3DHpXBrZw5QCA9hZASpXyEsAg/WA0i60oUnMQ5gzMaU2qsdgRRhNKrqQxt8GS4GuhrqsphK4G1IDxHP9S8vxNwPRWURlGkJUWY1tSEc0TzGVFvCy9MkErgS49S69r5A4nTFcbUcIWUPlLUx60nlGsQpSkGkbI2UjVGg1AYC0aXCCUxVlDlhrjbob+1hS4K0jIl1yVVVZBnS9L5kjwryQtY5op5qljkisrEaCfbyQYlIBAOTxh6gWZvKDh7JmB3u89W4jESoHWF0gqzrDi+WUIlKIyP9SwPXk744Ecfoaw0GHDWoY1BColSEuMEttQYU7tRhD5YDbnrsAh7ZPMx1WxJWhjMajInR5AoQZppSpOija4rf75Ht61k5bThSbaGu/zKX/x1ts7sIKS6M6MiIYhDrjx8lb919RI/8dMf57/4W6/yzIsvYL9HDVFSEschSRKTJDFR3KHbHdDr9vCUpNtLqCqDdZbt7W2UlARBiNGa8XhEEIZEcUSlNf3+NtPphKxKVy4zCuGDsQ5PSYQ2KKFwGLR16NKgiwpdlAhhCIOAtFzy5S9+gQ9+7CkEAl/XTjPnzp69UyG00UYbbfRjpnvhjUb3QhDr29/7+fvFuNxvu2ab+zlurLcDuGvx/X6vQe1WMBwOW4jB9/27ijnudSO5t7/rbiPvFmXTQCnT6ZTRaNS6LmitUUqRZRlQx7E0/SiKoo1wyfOcoijaYzUQRONW0kSNdDqd1mUCYDqdtvEijTNIkiTMZrP2+L7vUxRF6/RhjEFKyenpKbu7uy2wUpZlG+sSBAGLxYLr16/jeR7nz59nOBwyn8956aWX6qg1pdjf32+dOpbLJVVVEcdxC2ssFou2/Q2Mc+9
9U5Yls9mMfr9PkiSMx2OKomA4HHJ4eMjOzg7z+RylFJPJhDzP2dvbQ2vNfD5vnTem0ylxHHN0dEQQBARB0EaZNG4gTXzzl7/8Zb761a/ya7/2a1y5coUXX3yRMAypqoooitqIoG6324I3xhjSNG0dRYwx9Pv91n1jsVhgjGnhHs/zWieRZv4pjuPWRcM5x+npaXsujDGcOXOGyWSCEILhcMhyuWQ8HiOlbI/VuKA0kTdFUaCUYjqd4pxro2Ia95Asy5BSkiQJcRy38TANbKL+/+z9aawuWX7WC/7WWjG/0573mXOszKysqqwq41ueB8AM4hrs5uqaK1A3DYimr0QjARLqbgmBBM0HkEBqQfMB8QFhCRWga0CAcdkuj3iqcs1ZWTlnnnGfPb1zjGvoDysizs7jzKyyq7Bdle8jndz77DfeNyJWxMkd679+/+dRqr/Xu/PsgKIujujhf1MbbbTRt5biNGb3yjayXnFwcEAdhMxWa248epnDw0OMCsgrw3J2xnh7m+vve4q7b7zJvVde5cYTT3D42CNY17A+OgbrSMdDltM5Va2J0wGvffkF5qfHXHnsCRokP/+TP8NAWD7yP32AcLzDerVisn+AkJKmqUlHY1QYs5rNOX/jM0x2tyhNSt0YqvWM4WSbpjbkVYmUkC9mPgJibx9tHKYF2YI4JohSRJJx98276EpTTKeIpiLLIshSzvKSuQOHX/wFyLUhNw5tHAqJEt4NwTqHbh0mLi7OcwEgkMIvoZ8Zx2+WmrVdYWXA5PEP8vT3HnL6m5/i/JW7vPYT/5HFrSOe/hPfzWP/p30Gh7/IzV95gendJXXlIT8EaGc91CHa6JLWtdXhcNLXHaS44GhB58BB/3crHoAWvkblI0MUzp+AaREH2X4MUAGNeOD48QAKkAjMW40uLkySXQuKXORCZPteX3zp4lzegoS0Dhqud4OgG9cLY9vBCeLi9xceIx8AIRfe3QG7tgtkvgButMd58VG0c/Jw7fetXYe/5p6cQbkH52T6cXmwUyn8NVLtB4kLVIXsII72PuohotYppXMtcT0E0kEp3nkjdoJEQCZhS0pGwrETwE6sGCjv5KpkB3X4kQqkIFAKKVpAAeeBD9nCQ22DWBcVJBFIHDJQmNYFIgwEIlQgAvLjBdlhwJU/9B2kV97HS//+E3DvHkY4ihpO14YXZxWfnZW8rOFEWzpsW3X7aGuotXMMQ4VxlggQFmIlGQUeUonDCGE0Mk5Y1xVJlhFGCmhIRIRtGpxLuH9esggDdocp24F//3y5pNGWNAsJQsnxak1tLXE2pGhCojCkGo4oZ0sGsWKUJeiqoKwMIoqoZMRi3qCihDAIyKSkRhIHithq5usSFQBBhNWOOM1YrVasy5ploIjHW1glkKYhKQ1bUpOXBtkUbB3s8trr90kCRRRn3DqaE4sQgWC5XBPIgCwJGO6MKUrL2XzJ4eEuSgQsjcYlI7aaNYMI8qIhCANuH88YZwmhtCwLTaZgd2/MyaxA6QoX+efPWArqssaahqvXJswbhykNUkm0+cYXuzbgx3tYv/mbv9lPgr8ZdPnyZfb29njllVfedbvZbMZP/MRP8If+0B/aAAG/z7Rarfruh9+u3inqpetW2WijjTba6D2odiLurLfZfMDL+8JAbaFpJ8CL3DJdzhnEQ+I4QFiBkEAQAI4Y37VSlgVJkjDIEpLRCCv99Evnue840BarG5qqRlqJVAEigCCJkEq28IdCKOm/inai2y2c1AZdVdRlhTWGOB1gdQPWYGSJMQ2DICIMIsqipAkaTOMw1k9MD6OG+6WlsiHrdUMaaQZRgK4cq9mSbDJHhgFCSdLJiFRI6rwgXzUIIRGBotEFztRIAUr6Dk6URKkIpKSuSj821vroFCNbyMZii5IoGrB/+TIqCSnyNYvFObPljDKvyZcNi5VjUSjyJqbQytubItrJLL6AgyORlixs2EoN2xPB7k7GeLxFFEW+KGE0KlBYXdNUIPMldSmxwpGNNY/c2ObR64+ye/0ad4oKYzQ+XkSiwsDbzDc1TVPQGG8IqyRUlaPc3WPtHPVqjlmUFMYDQgYQYUQ6Crl/MiWQlgRHCdi3uQV/JxJCEEjIBjGPXH+c7/7+70Yq+dZWj/4ed6hQUdU52jVtQeYbE/cSBJI0ScgGQ9JsyHg8YTAYMsgGJFFCnISAIAy9bShYwiDAGI21W9j5lEGWcTY9x0lHOkhxubeSD1VIUVcY22CsZTgYoZsK60RfNCrKBhnkZCJBSYWzlk9/4dc4u/9/Zu/SHrppUKHi6tVrbxtDsNFGG230XtBF2ALeCj28k1MHPFjgf1gPv7eDD94pKqZ7/e3AgYf303296FDQOXSu12uqqkJK+VucPx4+r65R5O2cTC5+/sW4mTiOGY1GfXxHlmUsl0uKoiAMQ5IkYXt7m+VyibWWqqoA7y7b7eciqNE5TMznc6y1rNfrPtalixjp3nfRWaJzKu0W/7XWPdTQxap0MTB7e3tIKTk+PmY8HvcgxcnJCdPplDiOeeyxx0jTlK985Sucnp56m/Ag4MaNG2RZhrW2d/7o3D/SNKUsy96p4mKUTjdunQtLGIb9mIxGo7cAI8vl8rc4V0ynU3Z2dnooZDab9RE6g8GAPM8Jw7B3MgHvHNIBDYeHh/z0T/80H/vYx/jIRz7Ca6+91kM3nYML0IMzcRyzWq0IgoD1et1DNnVdUxQFk8kEIQSj0agfyyzLuH//PnVdM5lMGI1GfXxLF2dz8dqMx2PW6zVaa5IkYb1es1gsGAwGgK9FlWVJnueUZclgMGAwGBAEQT+mHXDUjXf3fZ7nJEnSO6QAfaxN18Q0mUxYLBYURdGPfZqmfczQBvzYaKNvXRnryNKI/UeusC5ybFlx7fIlJpcOqLWjXK2o85LDy1cY7h7y8he+wNndOzz21DNcevJ95PmcxckxURgRRBlFvkI4Q1MWLM6P0cWKG1evcXb0JvffuMeNwzGPPfMsDR583No7wJqKptGMdvc5uXeTLImoZ3OuPfUsujzj5M05Tzz2COkwY3Y65/zuHbb29pCuQjQ5O7sTzs9XqHRAksSYKifJhgSDLZq6opqdsDg+YhAY4iwmX1remJ3z5tGMSAUMlKC2ghJLg8O0LghdnIp1YNqSiuoMHlrLBiEeLMw7vBWEc4Jza/liKdBnK+pPv8nomR/g8scyyulPks+WnPzqL1HNznnij/8hDv/gD5NeOuTWz/46R6+ckS98/KszgPTustY5lPCtMgj//3kjHEr65xnnHBK/4G/wz03GghDe5UEbhxAWhT8nH95rQbXxL1aCtD5at3Mdwbufek7AR970QATwgJzw59zJ4XqgQrRgSI9duIs1DdHHEl8w3HiLS4YT/Tu9Q4ajj4Np02z64zN0MTXdB/pxEwKwDiHFQ2WO9nmzG5OWHBHtOXrmxPXuH6J1RungE2+e8aB2IvDwQnv0PspEvBVspgVcOrjEveVe8nsy1rVwiSAUfvE8EoK0hT4yKZgo2A4E40CSBpJAWgIpfLyL9IhOGEhCKZHSRwap9plRtu4u3V
8oMGGFvHzFj2CuhKy5oWrCY5q5OU6fyTPPfhF1g/u4a2mlY7pNkMyQtDnmWkSUGWZkznKc/uTWg2I/qbK7S6Ef3zZ2i989s5+7YnafYjGFxjvv8q06uvcPCx32N45RbjgyGz45g0NhS5W8TelDYEU05CIXEY5xZ8D4cTcpGFUxoUKuNG1R9VzMqiN6obHCFLEgpCLOJaLBU8rjJIqAXJV0mJPvVsKmVp9HCL+BfhBEpVdAwQUuAphRIOpMUPwW8ERGtNOjubtM6fQYVNdLtH4SJObg64efmQdPcF4r0DTDxDYMsCKiVJjSWdOZLcMk4LBpllnDlOEsOgMMQOpsCJgYlzzJ3DCIFzoOTC4ONKxkxlgqn6SLlyId4hKBZ9KxaUElndS4tOKcSCTsNtsm9Fk/FESWJxQEsK+srR8iWdyENLmAmfq7tj1psBR5nj5d0Z2hikEhQWOlqw2W8QNXxG84TDSco9926ytdJHZClHJyNSDefPbuCygpYtmM9z0nzOzvkNlPQZDscYIyG1dGRO1BJ0tjbIvSaXrycEIkGgKZwizwuwBt+XTNOspJRIS1ZYmkHIdBIznU7pNjyyPEUKQ6Q9kvkcLWGt12Y4jbFO0VlvUmQxW6stpB8yGI9Qno8xMWkQoIWmLcqIyONZykwJWrMUTwikH+KZGOtphJD4YchsGpMXDukpshxslrOzvcIst8SzGb0A6DbZG6XMY0vX97jvfLeM8pkbPFOwvdHiZJYynzuc0mTzHC00Esf2WsDMC3n1lX2kNdx7pkUReRxfPGKj6dPuNzhJLHnx2U3+f1wtjR9L/SEFQcD73vc+HnvsMT74wQ/yK7/yKxwcHDCfzz/rdhUa8v777ycMwy9Ra5f606xHH32Uf/tv/y2/9mu/xsWLFxmPx3WubhRFrK+vs7Gxwfb2Nvfccw9vfOMbabVar5kceMc73vGaKqbPR7du3eKpp55aGj+WWmqppf6USvkB648+RjYeUNw6onCCwkkK58qYCcoBuMGVlRc5TGaOFpAkCufKygNPQCAkgSxo6YRGwxD6HllmyDK/dNsrh0Azno+Zp3PiuMv5sxFKarI4pcgtvqfJk4RkNEB5AWGzhckS5oMhJjMUU4uzEDV9wkZAc61P88wGfreD0BpnBdYWOGNBCKQSoARC6bLqYbGgXk5OlKYKV080lJ+3xpBmKYdHx7x0fY9nbxzz6nHKcSoYFZbUWiSOe0PJXWczxr5Dhh3e/Ja7+dr3fRNbO3ehPY/9G1e5fPFFjg/2uNJ8mXvueSNaewsTisQTEt3pcO9jb+X3//PPcrYVEqgGfqOBVA5hDA6LKQqcKUCUlRvWOUaTEaM4L4tElGb73FmanTbD0QGNRoeiyAgaITtveZA/uHKFLBXkthxEx6aswJCirJa9+PJLXLjvPp771Cd5+OE3c+bMOoHvo5Re5MWW0xDdboe0mDCaxqyttlGeJnEJ+WhIbKoanNdOrMeuzMT9SpWU5QRLFDWIoibNqEkYRWRZSpxMUdohnOPW7jUQkjBooqQiniecnAxIs4Tj4TGHh4dYV6JlnasWB09VSX0eUlozm87YWT+LH/i0Wl2CIGA4HnHj5h55Znj25ZdYXznDX/zz38yv/PqvLythl1pqqa94vZ7p4zTl4vVMIKc/U42XK9rD6biW14uSOb3PP8oscuexq/dbrdZrKB0VfaQ6XmUqOL191TbnHJubmzz00EN89KMfpSgK8rycyK0MGa9ncvE8jyAIaLfbzGYzjo6O6liTyoDSarVqckUYhjQaDQaDQU2rqIwDWZYxHA5pNpskSYJSiqIomM1mTKdToijCGEMQBLW5pqKTVCaLKuJkOp2ysrJCkiT1Mfb39xmPx+zs7LCxsYHneVy8eLGOY6n6cHV1ldFoRBAEnJyc1HEkVQTN4eEhUJplWq0WcRzz5JNPsrOzw5UrV9jd3SUMQ9bW1njrW9/KxsYGv/ALv8C9997LI488Uu+rIlhUcSjVPaKUYn19/TUxJafvwep8Tt9LFVWlMpyc7o/qvdPXK45jfN+nKIqaUiKE4Omnn2Y8HhNFEcfHx+zu7gLQ7XYJw5BXX32V++67D8/z2NraotVqkSQJWZYxGAzwPA/nHJcuXcJay3333cdwOMTzPFqtFnDbjFS9nuc5k8mkNgpV81J5nnP+/HnSNKXVatHtdhmPxwyHQ1555RV836fX6zEajWi328znc9I0fU0fVBSTqq+Pjo4Iw5CNjQ0ODw9rOksVR9PtdmtKzVJLLfWVKkdhLY0gJBUCqTRaK1bPnyNaPcPoYJfx3nVaDY2VmnmRE7b7CJuRzTM2z25jMAwOj7EmoxV16N3/MIVzHN66RhonqCBAS0luCiwav93CCwIEJcnKb3YRQchsMEZpSdDsIoUm2twhK3LS0RAdBCjpkc+nKK2QvW3yLMZkM4osRhiLH2mMLcAkBMpgjEN6IZ5fRnv4zQiUxQpVEkGtII1TWr0OXuCDCtBhhLOOdD6lyOa0VlfwxJzdi5c4f/4sYaTJspQ3+/fz4vPXaLYEDz98nuvXb3Frd8St45T9SULLU+ROQGHwlSOxUFiDR7lIL2QZDbIYtSIdZcFOPbdy285gxelnnIrkUH6vzCMCQY7DCIe3cIvYRaxHRQaxleHEld+tEBgB6YICcmQsnrB0M8NacpMVJTgc3SDUgoYvCQVIK5C+R6sToLCY3JLnGZlJmd3YZXTdcvDSDS79zksEzW26Z9/A+Tc/woV3P0H77jdz9vzbubepEaPLmONbZKMTprduER+NGd06ZHo8JZmlmKSkyRSpwRpBYapnyJLoaqxFKoEUirwwi2mUBSkFh7HlZ5VcGGpExbFwSFkaUIUs2btayJJisZhnkoBUIJEoXEnDwKGUxA8U0ivvH6+lafWbeJFPY7VLtLFGuNLHeW2yXFIkluHRhMkLB8Q3XoJ0TDod40yBLQwWS2EExkFiLZMsY5obhqllnMIkN0xsae6YWhg6x8RJZtZicThKA49d0EgKy2L+qXxXCoFyJc2jjKkRZbyQK+eeJLKM0nGlKaSc4SvvCQdoqXB2Yd5e3HcNXZKNnROsKkHP92goSyf0Ec5yc2IYDIZ4QELOjUmKRNLxFcpAJB39pqIVBexO5jjPcf8926x1mjQwnIxnCF9yZq2HylMaQrA/iZnkGSsba0RCUChBHhdEWUy/4ZN7jqjVxvltEhtw6w8+yvnNJgiNcwLf87F5TJxleNJHKp94PgYnSedzXJLQaYdMk5TVbouw3WJyMkK7gjCMOB7MCT3JHIFJUzZX2hwNY/LjGSbPKRDE2mOeSzZsjo40u8OEcSHoeT7u6IBCOrSLKYRHXpRzlVlukYFP4AQuNyibs7baIM4LTo5neB74vTafvjogi3PWQsmFsz0mwmc6GJHlOTvrDeLCMBnNmaYOKRxrrfLf1jOrbVpn+3z0kzeRxrKz0iTsNHn2yoCOg83VgExqjscxUBJvvtBaGj+Wel0JIdjc3OSnfuqnePbZZ/nQhz7E008/zfXr1xkMBvWAtdVq0e/32dzc5J577uHJJ5/
kXe9619L4sdTnJCEEjz32GI899tgfex/3338/jz76KJ/4xCc+r+2UUjz++OO87W1v+2Mfe6mlllpqqT9hSUHz3A6bj7+dw/R3iA+nGMdiuAmI0k1fOIfFUVjHYOzwIjAFeItBmRKOSGYEsiDyLEFQVjukcYEpPDzh0A40jjxP2Z9MmM7nrK9t0Gq08cOAJI6JZ3PiwREYQ3tjG6k18ckxmjnOapodn/ZKi6jTQDWbeN0mKgiRfoBzAivycmArQGhVYiyVRCi9yEFdEC2lQJTDYBASbIGzkjSNubl3yB9c2udjV455cZAyMw6LQkoFQmJFRmIyhoXjKNN0Vnv0W32S6QSNT7e7yuVXPs31Kxe57/4H2du7yu6ta6ysbNBfWUMKjZQFTiiQkje8+S38l5/7fzHNDL0IiixBK4XUGusK8iTBOQsWlOdRODgcjhil5XlKJK12C4GPryK08lGizIFf2Vhn4gSDuMxjVa6soGkgSHHExrJ7/SbPPP0Jrl28xl/5a3+ZeHqC8srpGLnIoC0XdwIkknangxc2CMIQ46bk86QcvL/OopcD0q/UIssFNlVIwWw+p7A5WSvDj2ekSVJiZOOMOM4xRYHSim63hxCOoihI4oQ0zSg+xwiXz61JgizNsNbRbrdQSmOsYTadEwQek/EI6yzHgxHD6ZQLd93NxvoG4/HkC9aGpZZaaqkvN70efaNaYP9sNI5qu9PRJneaNu7UafJCpdMxL3ce77Pty/O8mhBxer8VhaQiQ9xpZDl9Lg8++CAvvvgiw+GwNn6cNpKclhAC3/dpNpvEcUwYhgyHwzrKYzgcsrm5Wf4NSxK2trbqGJbTbYUy6qPZbNJoNNC6jD+raBuTyaQ+p/X19Tpaxvd9tra2iOOYIAhYXV1lPB4DsLm5iZSS0WjESy+9VMfYZFmGMYbr16+zvr7OYDCoY3CFEGxsbBAEAWfOnKnpEUIIxuMxjUYD3/e5ceNGHTlTUUTiOK5pItvb2wCsra2xublZb1fRMU6bg/I8p9Pp1PEi0+mURqPBbDZDSllTTqIoYjKZ0O12a6JKp9NhOBzWxobqWty4caM2fjz00EOMRiPSNMUYU88nVserYlqazSbtdpvj42OstWxtbXH58mUuXrxIFEXcd999JEnCjRs3mM/nrKysEEUR0+kUKA0mGxsbZFlGq9ViOp3WUS/GGEajUW0ECsOQLMvQWjObzWg2mwyHQ9bW1lBKMRwOsdbS6XQYDAYMh0OAmg5TnVt1PoPBAIBGo8F0OiVJEjqdDlAalpVSTKdTOp0Ovu/T6XTqSKHqXqzuuTzPa0PSUkst9ZUpKSXWGGaZoNFs42vonzmL317j5NY1dl94ge3zO0wmU2aTlPVzO2RxCgh661vgBRzeuMFsMGJto8fK2XMMTkYMD3fJ53NsnNHrS4RQ+O0WFXdCuAIVNgk7XRAe05NDTFag0EgB4do2aWoY714nCAOcdBR5ilQa4XewTmHygjzJ8KQqo2mMByZGCYfDAwTOCrQOEIGk0e1inaNIU0ySMhwM6OmQRrdPEcdImeOEpshTijgmiDpY1SBYv4eH3hly85WrJEWLs296C8Km6OB3OTqKkRje0PJQ6jJJnJEXHtksw3OChq8YFg5nHFoIikIg5W2yB4sYDiFZmD7KRfXbDAtuR2+cknZlT1YhLpZynsazAlE/xjhs/dNtyogDnKxMJ9VRwDlBamEfx5GBQFjaWcyGFPSUpKcETSXxkhyloOGXJiGhNZEn0JHA2BxhHMIMEaMh6eRFLr30X7n4/20ioy6d7VV6O+dprLXpnt1k/Z4n6J1/O2ue42w2RZNBPiebTkiHQ+L9A+ZHx8yOxySjGfksxWSWojCYwmJzR5pQGiksZW9Jh3UCuzB+SOdKmogQC+NH+cwnhUNIiadkafjQtvxZK/xA4YUaHSj8hocXRfjdFlG/TbjSR4ZNZBiB16TIFNOTKceHQ7LLA7LhNWaHezCb4fIUa3OcKakZuSnnBGNjSAuIC8uscMwyw0lmGeeOxDpiVxI9plYwdI6Zhby0tJSULidK88fCkCHrZ+eyAE0CSiwMQlagKA0gDlFGtCy2O116pCpj9OKesM6Ur4nSIBOp0ljisGgp0RIEhm6k6PkWoTz2kwxPODIHN0YxWsoyutg6ulrQa4Z4oc+rxwM2Ntqc21qjpQQ6yzmZZmS+x3ozIvQULd/j2SsHzNOE++6/QEsYpnnBaH/CtixorkXQ7uOERTcC9vcPSAYJTZUgXISzBdl0SqQhKwxFailEjAodWgeMpjO0gGa3wXA+w4sCdNRiMp6hlCT0Ao5HMaEUWCmZTmM2ViIOBnMOjxPCQNJpBySeQgdNVuYJDWk5meecTHN6rYh+5GHzhEZDM5oUCM/DqTJ+WwVNijgjCjyMErRbLYw1DA7GKOeIWiGvXjlhMs3Z6SruOddjgmR0OCKZxpzb6FAIRzJNMMIj8A2+KmO4V/sRjfUml25MMLGhGUnWV5u8tDdApxnrKxHdtQ7XhgXzeVGagr4IdLel8WOpzyrP83j88cd5/PHHGY1GHBwcMB6PSdMUKAcznU6H1dVVOp3O0om+1JdUQgh6vR4//uM/zr/8l/+SD33oQzUW9fWktebChQs89NBDfN3XfR3f8A3fwCOPPPIlbPFSSy211FJfaEUra6j7JPl4RPqxT+CmOdUQqnoqsYvFf4FgNJb0Q4fSDpU7pLBEsiDUGVo6PM/h+SW+0lmBFhYhJEqAVIKGVIyFoHCGpChoSYlCE/kNptkIY6DdXcMLA0yWk04nBN0mvTNNolarNE54CkIfGYZI7Zf4CuegcJQjZlfm8AqFUAIhJMKJRbUDlBMSFrd4rcgK9o+O+PDzu/zm5QkXxzGJzVFCshIIsAW5ydAdyTRxeLlPFDlGcUprMmLk73PpFcfxeMCZeM5zz3yaW7eukCRzztx1FwcnhxweHdDudlGyzEm1VoA1bOzsIIImu7OUrbYhkBLl+xR5TpYm4ARKabJiRlZIJsmY3ZMJ0xx8SpynkIJmu8PqxhYgMNYgkTz2rvfwH/+f/2+Ok4RQCHqh4kyrwUrLRwUaocCeXObX/n/7fNXXfB3tdpd4OsBiSwKFNaXhBdCmQFjDykqbq1dvUpgcC9i84LMxw75SfR+4MlO6MJDlMW4YcygnaK80GBW5xZwydRSm4ODgMz9jfSEkpMCYAt/TNKIQYwsOjw4xi8W4KAzxPI1A8OwLz/DCCy/T7/fpdrtf1HYttdRSS/1J607zRGVQOE3leD2DSLXdnfrfEx3xepExlU6bOKoYjIqocGebnXM18aMiM1SvV8fo9Xrcd999/MEf/EFN0zgd+XK6PUII0jTl8PCwpos0m00uXLjAyclJTc0oioJer0e73cbzPBqNBkopDg4O6tiWyhBSxZPMZjOyLCNNU9I0/f+z99/hkiTnfSb6RqQtX3Vcn/ZuBoMZgAAIip4gQVAiIQ+JMhSlXUK6e7VaSXullbcrSquVN7t69OhylxSFlbuU26UcKUIENSQICgRAwsxgBuMwpu1xdcpX2oj7R5quOT
jdMz0Ggx587/NkV1eaiMjIPJWREb/4fZw+fZrFYsHh4SGLxYL19fXajcT3faIootlsslgsGI/HbG1toZRiuVzy8Y9/nOeeew5jDM1mk2vXrvHDP/zDfNd3fRfXrl2rHUp832d9fb12LlksFiRJUjtaVC4S1f8rgUAQBPW5r6+v43keBwcHTKdTPve5zwGFCGR9fZ0sywjDkCiK6jAmFXEc4zgOnU6HPM/xPK+uryqMyWw2Yz6f4zgOW1tbbG1tsbu7+6KQMFtbW5w6daqu3/l8TpZlbG9v43keu7u7xYxz36/vg+VySZZlnDt3jtlshrW2Fovs7++zv7/P6dOnmUwmDIdD0jTlxIkTddqLxYJer0e328VxHDyvCMeYJAnD4bB2Z6nSrcLWtFotoOg3Go1GhGHIcrms057P5/Xf3/7+PhsbG7iuW4fiqcITR1FU379BEDCbzciyDK11LTJZLpd1eJ3pdEoURXQ6HUajEe12mzwvZuJ7nnfs37AgCG8OjDHEy4hWt0uz4dE/dR7jthjv3OTwytOsnzrN4WhCnsNga5O9GzcIA5eN02c4HE6Zj55n/8oV3vKOd7Bx8SJPPfI4h7u7hL6lrXOaDYMmxHoBlgCFwZoU5YYEvVPgOcSjPUy6xHV9gsBHhT3SNCYeH+A3A3wvRDsat9koHBJMDnnMwXPP0Gx3CAddssUMu1zieA7KbxT9FHlOvFzg+E38RofOlsN8NCZLUtZObTDdf4R4PieJY1rrm7hByPRgl2S5IGh2wG/iaIXNFE7vDGff2SNLctI4J49j2oMBw70h/bVtDg/hzNltPDzMUzs4yrJMLPtxhrWKgCIMi+tCRim2WAmvYZRFGSCvhBiFIKQaOLVQTjBacfooXR8K0Yel+qUu5CBFeBhdOX1wKyRMlV6VlqLoCjLlwdXxUwvzXLGfGUKtaTkZPVUIQLoa2mlGQyk8TTHA7zi4rsJ3XBo+OMriKotrc0w+gvmIxTMvED33WbR2C9cJr4lutehubdHsdfA7IV63S3drk87Wg4Rv/1raocb3wVEZOk8hX2KtwaRL7GJOOluSzKZkixl5DgpDnuZkcUoeJ5CZon/H1XhhiHbroCZFn5fv4ngav9VEKx+33cJtNtBeC6sUOQ7ZMieex8TjmOnegmg0JhldZXEwJB1NsPESx6RFP0aek+YGShFFklvS3BDllmliWKSGaWqYppZFbohMIbiZ25y5gbGFaW5JsKS2CKviVOGVKSaUKaVwKIWzxW1T/D1ji3A05QQ0h0IAUsqlC7eOOvyLLe+yQkCSYcmURVOkrSthrlGEHhgLDuBrTVPDwHNYb3l41uA1fJ7Zi0iSnMBVzFJbOOBai+c6tH2N1ZalMUznC06c7HHh5DodpcjmEfM8x3Qa9DyX9YYiVy6ffeIKqVbcd+EkbW2ItcbOLetuQug5TDOLjmIGaz0cN8DLl5zYDGiunwA3gMWShudwOByRGVA2RTkuaRzjaAcvcDFGk5iMTr9PYlz290e0lSEMXOIsw8WysDDZnbK11ma4zMlSxSQzWNfBRCmdTo8wN4TNwjkWJ6Dlw2a/SZSkdALFbJ6BDojmS/zQodlbx6QGLzCkccJar0mmPeJFxKDfIjFw8+YQ14VO16W51WER9ti5uo9OMwYtn9R12RvOSRKFTRM6HRc3aNDQDoONgOd25hzuzOk3FOdOtNmZTUlnKefXm2xu+BzEKddvjMmyfOUeeW0R4Yfwsun1etKxKnzF4bou73nPe3jXu97FbDbj4OCAg4MDZrMZy+USrTXdbpdWq8WJEyfq+LutVgvP80SsJAiCcC+jipfFYGONwVvfjokTbv7yo9hFWsQIBTJryaypbSeXS5c4ctDW4usUl5zAsfhujudafM+ivXIgwVocVBHj1OoiLqfr0w8Ny9yQpTnWGJI0Ybp3gzzP6G5sEbZapHHM9MYOyrqsnTwL1pDlBmVTXBdcxwEUVmlUbssQLzlKOShX4TheKVwpZuhgLbb0FFWqnJmaGcbjKZ97epeffcbyhUPFPLf01vo8tHWSTmsdZiOY32B/OuTSt/t0TzaZXHuQq198nuevPc/+jX02M5d3ffP7eerxx1mM9xgOd0jTnPF4xPyJx3E8l8PhEHsRjCryLWzSM1Tg0Wi2uXE4451bOdZ6RLMZJrc4vnvrOeto8jhjuhyzP8rJjMKjeCkejXeYTPbxw7MoLCYvhBub26fpb50gvvI87z4T8s63nKLfbQOW5WLO1cNDfvKL1/mNv/+PEbSazGZzWp0+JjdgNLjlHAqrcK3FNRrfcZhNxoymCQ1rcXP14l6Yr0Kq8T9jLEn8xoVN8VxNt9Nka7DG2dNnSOIET0G73SBepgReE6zF9TyiaMlTTz5NmmYyICIIwlcFxwkujoZtOeoIcjRcy9GwLqsOIEf3OS6UytGyrApLjoZ/qQQIlbhjdbC9CvNRhXupQn+sikKqkCJvfetbefLJJxmNRvWgueM4L8q72n8ymeB5HsPhsN7v+vXrdDqdWuiRJEktOLDW1gPylVhhZ2cHgG63SxAEtcAhDEPyPGdjYwOlFGma1qIQgE6nw3Q6RWtNFEXcuHGjdr5IkoTpdMrOzg7Xr1+vRS+V+8jOzg5PP/00V69eZWNjg4ceegjHcTg8POQjH/kIaZp+iYCmEgW4rkuv16Pf7+O6LpcuXSLLstpJohKvdLtdptMpWZZxeHhIt9sliiJ6vV4tVnBdlzzPMcbgeR7nz5+vHUw2Nze5fv167RYC0O/3mc/ndT14nke322V3d5c8z2shzXg8ptfrMZvN8H2fM2fO1MdW90Hl2BJFEcPhsHb/uHDhAmEY4vs+X/jCF1gul1y9epW3v/3tWGsJgoA4jrHW4vs+7Xab6XRKo9Go6ydNU8IwZDqd1vdZFEW1IOTixYvEcVw7dDQajVqYcXh4iDGGkydP1sKXSthR3d+dTgfHcer7qRLTVKIP13UJgoB+v89yuazDB21ubtbXFajLBNROIs1m8xWFNRYE4V5B0WyGrG9v0d3aYBYZ0tE+2XSPoNng8GCfZqdFK2zx7BeepN3t0Dt9gvFownw0xOYxD33D19Fa3+bTP/vzHO7cYHN7ja1Bl8DGjBKXrQtvBZujbEK+jAiCFn57QBYtiIYT0vkYv9mg1V/DWIflMiJdzlAavLAJWJTXIoojsqgQwOXJku7GgLDdZzEdkU4OSaKEVm8TF4MxGUmSMT2c0N1sEi8i8iRhvDskVyGd7YusnR+xe20He+4cOB6jvZsEriXP5tx8Zp+Tly9jnQwnbOAFXXRrHbUcYacznEYTv7vF/V/bYby3T3MZ4usTmEXCQ3nGtRsTHrs2BaVoug5pZsixZLllVjoyaGXRtgytoWwhMlGgjCpD6yqULh1AygH+ygukCt2SqzJ8CYVliIU6NIxecQnRto6GUo/wqjL0i6EQnmgUjtVFmOKV/SyKCJhmiptYvMzQBvoODLSmrRU9B0Kd4+oihLHvOrjaEji6EAt4mtABTztgDXkWFy4DeYyNR0xGN5kW84/IsHhBk7DbRns+bhDgt
5rkro/rhejQJ+x1CXotGv0WfmuA3/fxNiHQCqU12mYorYpJNnkOeV6EaNEuBoXNMvLUYLKcLE1Yzue4kUd0cEi8GGIySx5nqHSGTWPyRUQ8XWCWc0gTNIbcWnKTkWbVNbFkBuLcskxzUgOLPGea5MxzyyK1RDnERrE0OUsLsS2EBXMLC1s4fSytxbG6DM1icWvBhi7CBFmLt+LekqnCrSMrw/rU/1dVKKBCHFTcH6q+HzTFZDOH4h5Q1oJSZfib4n5Q2hZ1isG3Gk9Dy1V0PIeTHb8QACu4tr/EzWHQ8MmsIU5StKvRKLSn8F1L23MJfBe/53N6vc+ZfoN8EbF0DKbVZNB0CZUhjhNe2N2n2fJZ3xpwqheyH+XMhktOtxS0ely7uQ8o1jsdkiTizKDFxgOXWcQ5UWZYjMaEWjOeznAdFwXk5c2fLFO6gy5JFrKcT+k0ffJUEU9ndAIH3/WYz+Y4rkfqeERRTLcdMk5zJuMFG9tbDHSD3CzJFJzfXGd3b5d4NKPte7hNlzxUjCZz2oFHnAF5EU5ruViitE+WJoTaQTd82r4ibLcY7U0INDj9LtevHOBpB+1k5Drk6s6M+9s9/FCx1mmQOwEHo4gkhSjO0QbiDDw3o7sW8PxBzPwwouUrzq63iD2X51+YsR1oNtd8nEGT3WfnpHFW/K6o10f5IcIPQRDueRzHod/v0+v1OH369B33FaGHIAjCmwgLmUnQocva5csEbshyOmX+yNNoW8RtzS3FSyHlyxOKxcKjTYxSpQ5fgetawjDHCywoA0Zhc6ewoLSmbJBbtDW0vQbKSdHKsFwume7dYLZ7gNsMaLa65LEmms9Ipgs2L54HDMoN8JysnAGTFzIUY0tBR/HSa3OD9nx0NUhCMaPVWAs2B1XYlOZWsZjPePbqPh972vDoQZPr0ytsbPf5o3/gf2ZtbYPHfukzHE6GfPGZZzg0CyaLJT/7kTleKyaLP0ZooN/v8fxwzDTN8VstojjGRIqttS3a3T7DnWd47vkvsrZ9gksXHiTL8+LFN8/J84wsTUjTDK8RMJqOULoQs2jt4vjFq3BuskJ7YSy5yZnMZowXCgsECpTN2X32CtfOXqXRHKCVwzKek0YpeZbwTd/8jXz25lXObbVZ67RIowXD4T7XD6b810PDO86c4e0PnOfxZ6/w+U8/zru/6esxpiifygqRjNKKNMuJ0wWGnOkiJUpyHBS+hTtafghfNrLcMppNCHTMYjlhvmiytzum1+mzFx3gekVni1aaU6e2WCwm5WDRG11yQRCE14dqcLka+D0uxMqqSGNV2HGc+GM13MpR0dzR46rtVf7HhZM5rhyVG0cl+KgG9Y86emRZ9qJQI5XDwaqziVKKtbU1Ll68yGc/+1myLCNN09oR42j5K4eNKqTG5uYm+/v7DIdDtNb0ej3G4zFra2vs7OzQaDRoNpukaVqXNY5jNjY2SJIE3/fp9/u1cGI8HmOMYT6fY62tHThu3LhBmqZMp0XosSiKSNOUdrtNEAR0Oh0WiwXXrl0jSRKUUriuW7s5VOFT+v0+cRzXx68KIoIgYDAY1M4YlWCjEltUoW0qUUOv1yPPcxqNBsYYnn/+eVzXZXNzsw5l43keaZqyXC5r59QkSVgsFnV4Gtd1mc1mdDqdWoxRhWa5fv16nUblgjGfz+n1elhruX79OidPniTPc1544YVaULJYLFgsFvi+TxzHeJ5HlmXM53N2d3eZTqdsbW2RZRlZlrG+vs673/1u8jxnNBqRZRnf/u3fzvPPP88zzzzDeDxmNBrVDhtJknB4eIjWunaU8TyPkydPEkUR0+mUZrNJGIYMh8PafaXVanHjxg3iOK7FGZXrRyX+efbZZ+sQNWmaEscxnU6nnnTU6XSYz+csFgvCMKTX69WCmsqtZjwes7m5yXK5ZDKZ1MKR+XxOs9ms788kSRiPx6/KoUcQhK9sHEezdf4Caxfv5/ozzzC8foN2I8QJXKxWbJ49zXK25NFPfIr1E6fZOLXNcP+Qw/09Asfh1OX76G5t88mfeZjJwQEX7zvDZi8gmk6ZWA+/1Wd2OCZsNWk2HFRm8bpdcqXIFmNcFeG1G4RrW0TLhOnhPo61+L6L1j6OVhg0i8MdkuWUoNVGux7h+ha50Rxce5amNvieJp5mJPGScHACkyyJ50McG+Nog1IZjaaPsxmSZg7pckFv6xQ66BL2OyxGe4SexuDS7PVpHx4QH15Hr2/S6G7hhC2S2QTttfC7LtPDffxmyGg3Jexs4vs+hzs32TpzmsbGBk57l6ujJ7HakmcpidaMM8PcWIrunkLCkavSe6PQeRTOHE7hwACFy0K13tWU4fMq1wZbzh9RtYOHwRZODRQrNIW4o/oV11ZRD//bwinCUPiLaCxGGTLAsQ45pgwnUoQYySuhioE5sLBwU1lcZei4htBCR1k6WtNUhkBDqAsxhqvAUxrPyfAd8LTG0wYHje+CryyOY4rzNGDSGdFwXvSfKYh00R7NjSl61HThEqqcAKU1bsMvJv1oXYbNMShbhK9Q1mBSg3YUNjNkWeFMa4yh8tDwNChT9HUpVQggsHkR4liByYuJQyiDsWByRZrnxHkhNMiMKgQHmWFRhnBZ5oalgdgq4tySAgmWCFhaxcxYIlMIXayx4GhiTHGNVBGyxS2vqdFgMDilG4cDaAfy/FY44sIJRmGUwbHFtVYUE9EqF5k6rE95z7iFB08hElEatxSP6PJ+CJTCUQAa31W0HOj7mm7DY5rmYBS+sngKvNBlP86Y44CjsViageLydo9BM2Q2i2j1fdYaASc8w2wckUYJrUZIqwFJsmQZBOzuznE9h7DXY83X7M9jxqMEP54Q+Q32Zxl50GYj1ETLiH6o2Dizzdp9b8ObxiyffAxtFQeHY2yUoFSO6zpkeMwXGe1GgzhJyVC4YZvlMkHlC1qeIXA089mCJDWoNCaNY05t9BgvYpajiNDXNJycabpAWcu59Q67OzdwMXR7Hn4jYJ5AlFi8zGLcBMfzGAyazKOMRsPBsRYnS1jkio12QHuzz971AzzX4LYCPvv5K3R9l8DTpMrnxmjOwmqaGxt4QZPZwQHz6ZIsjlnGOaDRgaLhOpw82eHqOGXvxojQ02z2Qmzb45EnR7RzuHiqQe90j8eenbJ3Y0xuDKHrFr8lr0Onlgg/BEF40yCiDkEQhK8uLMWLX+iFdPoDPO2xtvsQ492bJDvzIoasteRA9brlKEWS+ORBilaaQBtagaERGhzfoF1bRFtxFK4DrjLEaNJUk+cGXymUsoSOh3Z9ZqMR8+EhSik63T6+12A+HGLimO52D8cr4nQrG+O2m2hfg3HB8TFZgnKqeSNFSBmtLdp1UdbB5FnRiWCLEDAGQ54a9g8O+a+fv8lnDrd5ZnjIJHmOcxcv8Jt+6/+Lr3vfexmc7DNLFvziP/2/+Oyjn2BneMA4y4ixcABtrVj3NJ0sph2E3Lixx0cf/gjf9K3fxTu/9b0Evsf//U9+mOef/gKf+dwjbJ3aZmvzbGENUQ7WxHFMliTkaUKeGeLMFPFGfbd8Oc+KEB1Z
hsktJrfEacxkmjCOFTGGyMLJVhdnrljsR7zw3NPM5zMWswlJEhNFc8Jei/Xt0yTZhNHogOl4xI3diM/OLZe31/k1959ktnMVJ3DIlimf++VP87Xf8G6U9mjYJrjgKI/M5CRRQrPRoOm7xFGOjypehsyL46sKbwyOo3G1x7WdXb7w1FN89+VL/Kpf9X5GswN+/Mf/HUmalp1olrOnT0NmygHEN7rkgiAIrx+r4VOq79VA8KpIYpWjApGjoo/jBCJ3Sm/VieM4lxC4FVam2u44Ti2oqAQdq+lVgpJKALIaCqYaIK9cLR544AGefPJJZrNZ7dhROXqsnnMVXqTb7ZKmKXt7e+R5Xjt+NhqNov1Sho2pBBVaa6bTKQ888EAtLBmPx/X5xXGMMYY8z2sRx2w2o9/vc+LECYbDIdeuXavdK+bzOVtbW0RRVAsaJpMJs9nsRU4RVfkvXbrEiRMnsNby1FNP1Q4Zb3vb2/B9n8FgQJZl9Ho9PM97UeiZs2fPkuc50+mUIAg4PDwEqL/P53Pa7Tbz+ZyNjQ3yPK/D0MRxXIdDqUKiVCKEKmTK4eEhruuyu7tLr9erQ8xUYXOUUkyn0xe5dziOU4sm9vf3SZKkrr8wDDk4OKjDqiwWi1qEsr+/TxzHdb1evXr1RYKW973vfcRxzKc//Wk+9alP8Z3f+Z08++yzNBoNZrMZaZpy6tQprLW1U8f6+jrWWq5evcrm5ibNZpMTJ07UQp88z1FKMZ/PabVatVClEu5U4WeUUrXjTCWcqVxErly5Ut9LlbNHdb8Ph0NOnTrFcrkEYH19nd3dXfb29ur7dm9vj0uXLpHnOcvlktFoVItrKqGUIAhvThzPI+hv8PznPs31J55g6+QGrhuQKY0fNHnq88+w2N/hzLlTtDf6DHduMJ2MaTQCzlw8hwpafPYXPsbw+g1OXThHv90ohtIdl3SRE3YNDd/S6vbxWx5Oa500g3QxJfCb+OEaOmywODxgMd4v3CHCVuGe6vlkuSZZzgo37a1TxXPeDcit4uC5x4pnR69PGjm0dINWfxPjaHTDpb2W4jZz/GYRCstxNaoREx9OaDbb+M2ATislGg3JkpjcC0iWMc1WiBOGRNMpTmtA2w1JJodk0RK/M0Cj6Zy8n9G1p1DzZxhPI8J+n61z5xjtj7jxxHNE4xmXtjss5ynjmcPuOGGe5bQcRaAVaWZJTSG8KA09iqF4RRFitxIDoFDVwLxThPEwqhA0FOEZCndRRTHhSClVCBMURbgOSz3YD4UpqTK3nD4qUxCLJavEAEphydEWXAuZUihbOE+4FrRSOGWq1ipS4DCzKAs7gIPBw9BxFF2taCpoaUWgDL5WeKUTRS0GcYs68TRoDa6yeLpwCtFa4SqFqxWeW7hTWGsLBw8FKs9QKNLo1j2tSvcUrRyUBqUMWEVavbdbg0GDAmMNuYEYaocMU9WhzTEKMqtJMkuc5mTWkBqIMkWUZcQmJ8k0mYXUWpY5pKUQI7EUDh7KsDAQmUL4kdlSzFFes8pxIbfgVNeyLE8GKDSmdHzxqNrMiqwMi5urQqhRaoRwjSrcQIpbo7i+9XW+9U91r9jqXimvqaaoP09bjFV4gIOl62saStHwNIskYxRnbHqaTttnnORMYsM4NWgNvqNxNGwPGmxvDbixs8fmepeW53Ch76NdzWSUYIwijlNyx2f3wNB0Zzi+otFt0nQUo2VCGlt62RKn6fP83pLcwIn1Pg4Gtx2yJOD64Qw7SsnmS9IoYTY6pOn7RLGDyTJ8z2EamdIhJWOSOnSaHvlsQRRHtH0Hx/OK65xlhL6H1rC93WRvnjGbx6w1XZrNgDiNWev49Js+syRjcjjja955CS8MuPb8dVSWFkIXBdictX6bJIfFIsPJHUJfsUgMnjb47SbD/X2UyemtdfjCjQUBlny5hKbPZJnQcDXa8dm7sk+6TBjtjbBag1Es0qJP+VRTc+5Mh91lzmxnTLfh0g0cWoMGn3thRCM13LfdoLne5pnrEXs3p/iuxtXOLcHH6zCkKcIPQRAEQRAE4d7EKhzXJQhauEGI027SPneeja/5Gvbiz7I4XJDWcy8KC06NgtzDGJfQT2i44IcWz7c4rsXxFKktBjAcz6B1Tp47WGVJkhQvdHDcgCBo4mhNtJhgE4PfcPB8v5jRt4wJ/BDfb2C0wfXDYmAFi8kMOmiilUOWGbIkxvE8lHbRTuE+QgaGDGMNlJaSWZoSLZc8+9x1/tPjY57NLrM7uUnuDPmad/0K3vV138JgfYvHPvkpPv2Jj/Gff/rDPP7sk4ySmIjC8aR64xwbsClkkaXnpPha8YVHP8vFSw8wmx/y1GPP8dijv8QvfOLjxNEcVXZyaFV0MWR5SrRYkGUJaTRnOplw0lWQZ2RJgtJOYVVpivAhSjvY3JKkCfujjGUGmS3infppygtPfI7uYIt/+f/8I+bRopgBkBqSKCb0m7z3oa/n2iM/AfmCaJnyROLQXevygbefJwh9RqND3MEJvuE7vpVPffTj/OyHf5Zveu+3sb19puh4zyCNY6JFTKPZIs4hMpY1rQiBHorli7pkhDeCYmDFEgYeB8MR8/mMwG+iFVy+cI5zZ84wny25cu0GO3s7uITlcWLZIgjCm5dVoQW82OGj2n7cMXBLyHFU7FENSq+KLVaPrZwmVvO7XV7V+kr0Ubl+VIPf1lrSNK2FHKuD7RVVuJXK+WM1L8dxOHXqFOfPn+fxxx+vB+V933+RK0nlnNHtdsmyjFarxXK5xPM81tfXCcOQvb092u12nX8lkhiNRnQ6HYbDIZubm1hrWV9f54UXXqjdT5577rm6LiaTCWEYMh6PaxeMtbU1lssli8WCwWBQC1+uXr3K2bNn6ff7bG5u1udUuX0AnDlzht3dXR5++GHSNEVrzf3338/m5mYtrPA8rxZteJ5XO0U0m02effbZOtRJo9FgPB5z5swZ5vN57ZJy+fJlut0ue3t7NBoNDg8PabVaDIdDwjBkfX2d8XgMwHK5rK/lbDarhSfNZpPxeEye57VwI8syLl68WAsvtNa1e0uSJEwmE1zXrcOYRFFEu90miiLm8znb29u1C0iVv+M47O/vo7WuxRSr1/r06dP89E//NN/yLd/C/fffz7PPPstyuayvQeVsUuXZaDSYTqekacpisSBNU7Iso91us7m5yWKxqEMJOY5Dq9XCcRzCMCQMw9oFpN1u138zVdqVc8tisaDdbjObzXAch0ajwf7+fn39rLXM53OWy2X992WMYTAYsFgsuHLlSu06MhqNmE6n9Ho9giCQCU6C8CbGWLjx5CMs9nc5e/k8brdPZjQehp0nHyMwORfe/VaMdhkPx4QutHod+mfOkFjL3jOP0Q4dLjxwH51Bl0a/hQoaBMqw6fr4jT7aUSgvIIkT5osJLkVoA89zSOOEgy9+EUxGs9/BczTZcoFqNIrB8XSK7wc4XuHmkOeQLCfE4108coL2gDTNWYzntHprpGmMzQoX0GSZkUYZfhO0MqRJWrYRIEszdBaSLGd4xEwPD0mXCxpNn8kYkjjCcR2wAdH4AK0
zgrV1HL9BNElZ7jxLf31A27+fm089hVY5h8OU2SKh12uhLmxxxj1BPJ5y5bl9FskhmXXJMpjlOUtPYdLSPcJYtNIYVTqtUoV5KZ0yynAvVkGcl05cjiodTVU1h6cUghQUTrOVm0c5plumX4twKUQHji3ChVgsLqVggCJsscWWooCyzUchUDEU4UN0WV5HF8KD3BTChlQrlgb28sIdM1AKH4Vvc1qOJgQaDoTK4qUKD4tfhq1xFHjKwVFVPRROrY4uJlFBMUHK0YUgRFG4Uihty7MuJkopleKoIuRMboo2p7GK1JgifI425JkpHVIcLIbcKpK8mFRkbOnIkVuWphScKLCli0dqLRlFWOcEiIDIWhbGkqBJLcQ2J7fg4hT9YaWDSU4hQjHVmLsFZU3hhKMKwU0Z+YYqaLTDipij7jiyOGW4IEURwkVRpHvrihUuKdU1B3BtUW9lJGeyUtjjO0V4Z19Drop7xbGWgadpew6hq8gMJGnOydCl5TlMEoPRHhkpmdKFcEgrmqFPmhnGs5huq8mgFbLuK6zfZBanKB9CrRjNUkySc2qjTU5E5jZY67eYZw7pwYRNz2AHXb54Y4hWDn6gMTko37Jx7jJfePQRjNtkdvWnufS2S6DAtwqdLwk8jXWbHI4WKJPjdppcHc/ouwp3OibLMgJrcbUiN5rcFgKxsB1g84x55jAeLxl0GoQNl/kyJmg0cF2PaZIyn0Sc3WgSLZZENMlTw/ZGi4N5SpKlDDoes2jJcBShc0Or6TGLMnSW09seMNxb0A0t3VMDrk8i4smcTiPATQ2HcUxmc9bXOwTWYXb1CrmC2TJHoek0PZpByJqf8fa3bHFgPW4+dR20pqk0g0GbTz8/hnHE2850aAxCnr25YPfmkiRJaXqawFNMo/KdSBw/BEEQBEEQBKFEgR80cAOvfCkzhGsD1t/yIMl0wuyXHsVGxcuySxGn01MUTg8mwHoJuYLcaJS2uJrCdhJLbkr5v5NDoshzTRyB20xpBh0cLyCLlqSLJeRFjHHfC1BAd+sEYb+N43ugHZzAKyw6c4PJNcoqlGdxtUeWJsSLJV7QwOpiNoRFF/aP1pKnEelyyfDwkE89fZ2ffjZn0XyQw8nzdAcBDzzwPta3tpnP5zz+6Cf5mQ//R77wxSfZT5csWXWxUDhKE+ATqhZdb42m28TVS7RzkyAISeMxf/GP/g9EScLV61eLo5XF0R7bJ0/juA5ZlpLEMUm0IF4smQ1vMBpN6JxuoLWLLjstjMlR1qKtJScjtxmz5YL9kYelCLNyxtXcZxcMrz7Pzgtf5HAIj37xKt12k41+m82tLfrdHnkv5ErSIr8+YuJppkrxuy6v47nF7AvHGgI/IGw0+J7f/Ov53Cc/xcc+/LP8ut/6m9Fa4XkBWZ4xG08YnNtirdNgMYvKQDqKvrIkSrEvMyrfUEye47kOrucymU65cf0K62vbpElCp91isDag0SrEHnESM1/OgcKBRhAE4c3KUeHH6vqjTgBHBRpHHTqqdavrj3MSOXr86rqjIWSOK1e73cZ1b3U3aq1rl41KSLE6+A3U313XrQf5q3AxQRDwjne8g+eee652/fB9v95elbty83Bdt3b+WCwWdbiSamC/Eie0Wi2UUvUxSZKws7NTiywqN4per1eHWFFK1eKCMAxrF4/19XUmkwkHBwc0m83a+aEKy3J4eFiHdKncSaq6AnjmmWeIomLKrOu6DAYDer0e7XabOI4ZDofEcUyv1yOKotpNxHVd0jRlbW0NYwyLxaIWf0yn01pkUblVVAKR6twrIUYldFgsFvW5J0lCq9Wqr9HOzg7dbpfJZMLNmzfrcDuVe0ocx0wmkzo0UOWCUYVO6fV69bWbTCa1s0gURezv79NoNMjznCiK6joECIIA13XrECqe5zGbzXj44Yd5//vfz+HhIcvlsnYdGQ6HbGxskKZpLe6pQrBU7i3NZpPd3V1arVYdeqjf75NlGePxmMlkQp7nnDt3jiAIiKKoFrRUAiWlVF1f1XGNRqN2Y6lcUSrnlDAM63t7a2urPrfKGSfPc27cuEEQBPV1q0QjgiC8OTFpSjQ8oHviNLbZZR7nRMMhyXxM0PBo9bYYjWOSaIinNa1un+baOvPYsJhM6K+tkcQpJjesnT4PqnAIDRshvt/Ea3XQnsdiPCJJInQeg3JxPY8kSpge3CTwFX7YxvM9sizFa4YY62Csxm92sJlBOS4omI32UFlCGDaw7R7WWBzH0ggy5uMxrX4TrV2Uo9EYPE/he4o8SXBcB91qFAoCckwS4XoNlBvQ6YxJ3Aa5ScjjOb4XELRb3Hzm82yqt7J+7gLgsRgOiUc3afUGRagU1WTzwnnGe/t0HIOjQhYYdBBi8oTA9diKEtIk5drOnOvjJa4FL1dkTuG+4aFJjMLmkOnCLdZ1ikF+r2zq5KoQL6AK0UOeUYpBbBmKtDgOKEUaFlP2xDhqRfxR/pzfCv1SfNFKla4RBl12Q1E6SWALwUXhA1IeXzpQ2DI/Y27Fk9G6cOrI83JyhFVkWOblDh6WPC8G211lcZXFV4qWUgRleJFQ5fiqEL5oBS4WR1mUtSilS3eOIuyJLR1RKCcamdLNQpehaZRWtbtqbkwZ+qQMj2PL0DFk5Kpw3s2sIreGrHQzya0lt2DQ5BQhW5bWsLSG1KpC8EG1D6UbaClsprhGTlU5qnTzKL+6lfhDKVIKYYZvb4k0clWE/XFtUd5E2VvnDDhaF9fEWpyVR7VrC1FHrurAPkWd2FJkUt0nymAoHFwcVdwrjlvJZzTaKNqBpuVrmr4uRELWsu47bHY8xpElyXImNmWRW1ytCT0HraDhObjKEmrLeq/DekNDmjM9GJEoQ78dcrCMaG60WGv4pFlKbgPWBh2mcYqeRQxUhMXl2v4hYauBRbOYROwuDrnw7Q8ROx7tbh/3YBeCnOcfXdLs9EAXgh4v8NibRswysMYwOZiz3m3g5xmzZUKv6bG0MVGssDYmzQ3NZojvOiwSy97BiPW2j/U8Dg4XNBse2hbi29xoNro+WeBxMJpyutvl7KXTXLu+SzRZstYOsfjs7C3wHZeNrQ7TeYKjMjp9nygFmy1oNttkvse1K9dJTE7fU4yMZW+ecXKjSUs52DjGdQrRUi9UtFo+vuMSKMu5kx0OjceTT+yhckPXc9jaaPGp5/dIl4qHttoEvZDPPXuAo3wgx9eGbjNkmsZkaXFvmfy1n8wkwg9BEARBEAThnqSI2x3gOh5KFQMZ2ncJN9doX7pAc+cm8Rd3MbnGVbZ8OQVHWZxcoXKfsBOzPfAIXIckTlkQE2cKX4OvNaGXEUcpuYEsVSSxIWzEeF6HdLkkn6QEfkCrN8BtNNChh98KUcrBWIPNLTbL0Y5GaRfXK2ZN5FmKySCPcrLEgI3KOKm6mHGSQ5omTA8PePb6TT78xD6/dBjSXf8a4ug69731LZy79NbShtVjMhnyuc9+kl966vNMTE5mb9URWBw0IW1aqkvo9HFVE6yHydfpNbbprUdcv3GD/YNdzl+8wDKaMJ+M8Zw2b3v7u3nLgw/iao
dlMieNlqRxTJ4s2H3hCrPpgpPddVzHxVpdTMMoZ2VYY1Eqx2QZe/sLrs4zcp3xdWselwaKQdfl0bjLyYfu59tObOKHSzrdDsvFgsX0kPnkkJvXrzGfz5hpj3lu+I6ex8BzCbXLwtrCftTVaAW+5/KN3/FtXH7rgzieg8ktGRlpljM+HHHivtMM1jrs7IyYW0toC3F9RylG1iISgjcOz/UYTWbMZksUqrBvD7t4vlOIjbKs8K6xljw3GFN07Cyj5RtddEEQhNeNVZHFnZw3bifEuJ1DSJXW6qDycUKRo3kc5zRytIydTqcWGeR5XgsxKjePavC7Sq8Sgqy6OqwKTJRSnDp1inPnzvGFL3yBNE1fFPKlQmtNp9Op08qyjOVySRzHtdgiTdM6/EgVYqbX69V5t1otgiBgf3+fwWBQH1OJD5bLJWEYsrGxwWw2q0Uuk8mE8XjM+vo6nU6HGzdu0Gq1OH36NOPxGGttLQjQWtcClzNnzvDYY4/V4V+MMYRhSKfTYblcslwuSZIEay2tVoskSdBas76+XosQwjDEdV0WiwUAJ06cYH9/v3Ynue+++5jNZiwWi1ro4nkeAM1mk4ODA6Io4sSJE7VYYzQa0e/3a7FGnucYY9jf3weoRR+u6zIcDmsBSxW+RmvNYrFgfX2d2WxWC0DW1ta4ceMGWZaRZRm7u7u1kwlQO2asiicql5Z2u12LJLa2tnj44Yd5z3vew+bmJgcHB4zH41rYY61lbW2tFsG0223a7Taj0QitdS0SGg6HzGYzrLX1vZRlGUEQcP36dYIgqN1b4jgmSZI6DcdxamGO1prRaFSHCarELhsbG+zv79PpdNja2iKOY+I4roUjL7zwAlmW4ft+LSKq/mb7/f6L7m9BEN58WJMT9gZkuORRzPXnr+CanJNnt/AbDQ6nc/JlRL8V0u738HtrXL1yg3gR0ep0yNMMZTO2z13GDRpk8QLXc1DWxQlaOK0e0xvPM7x+g06vhcoTgkZAMpuynM9pdLo02s1CppAa/KBFlhucMtRGnuSQZWSJw3JyiOcqGr0ORvlF2IY4wXUdVH+bwAnwGg3yOCnCZCwmtDdOgCpcV/12F5MlJNEeliK0A1mGSRJ0d5tWJ8bMJtheD+UF+E2f7pkAr7+JUprF4Q7T/Rs0Gm1wAxxP02q0ydN1GoMtxteepb2xxnjnJs7BAaODhPkyZbAxwKEQeURpRjrPyKwhRJFrTWZzlC1ChJjSs0Jbi7YaC8SlO4RF4TqWPFNlaBZbDtoX4T4M1O4PmbK4tgiTAhZjKQUOhSCkEh5YAAVeKZgAyn6tIryLRZWhSG45hGBB6cL6wpaOroVRrMLRthCBYOq2lbUvNhPIc4sthQ45sMgLQYZSleCgcJ9wlMFxiklTviocSDwUnipcOr1SkOKoQsCiVeX4UaShSweRWuSBxZjC2cDYwm01L10vDLYOw5KZwq02A1IgMZYcRWpzDEWfXqKoQ+Ngi2lTUFwDr5TQFOkXpclKtw9DIbzQVhVCk9JRJaPoU3LKMDBQuLCAQpWamkwV7h5ZqeJxC+VK4c5SXj+tSoF0MZ8LU06KUlbVgh6tqK9pNddMo3A0uLpMtyxnS8PAd2h6DsvU4DuKjgNnByFRZgh8iN2A6STCVaoIaZxltEOXPDeEyrAWGDbamvX1AYvDQ5QPiyhmmWf0tlpsNBrsTqY4yqHfCYmNwZkuWHMNuaO5MZnS29wgsgHPf3GXaLFgsN3kMNEk0znBNKbtL8msj5pMSNIZ2gnILRxMF9yYGUaziMA1nN3o0g01w6Gh39CAQZXuxrG1tBo+oacYTmckqcPmoAmOYhEnrA9CtKOZzSKWuWGj2ybKLNNhQsczqGyB7Z8kml9ja9ACz+X6zQnWGDa32xwcLnCNZX2zwyIzmChls9vE76/xhaf3cNE0fct+rrk5XhKGDVpW4xFzmGXsLQxdX7E+CMm0Q9P1Ob8VELdDHv3kDbI4Yi3UnDvZ4/M3DjGJ4i3rAWfOtri+MKSpwnUtWWpphh6zKCFLQBlDbiy3efV6VUgLUhAEQRAEQbgnUSg8zwelybKELE/ITUauwFtfZ/DW+8mXEeMb02JWQ/ViShk7M/OYRynLPMN1DEpbsEV0TZNblA+tTs4iSslyjbEKa3OiNKGDYnl4iFLQO9nF+D6PPHGFKI95x9fch6/9Qs2vi1kOju/iuBrtulgnI52lZHGG63kErQZ5GpPlCcoUsxoWkxm7hwf84tO7fOTKnKupx5mTbwM95z3v+x7e/U3v4eoLz7NYjPnCIx/n8Scf5cmdGyxNXrqfVHVUvORq6+IpH4tDkkdk1jBPItzIoRUMaHfWGNzfYnI4ZDFf0O0P2Nw6yaVLb+HXfuC3sbm5RZLERMs58WJBHkXYOOapRx6jqS0X19oom6Ocwtq1qEsXV2vSdEG6XHD9ICe3il//YJ/LZzrE6ZJ5bjgRnGay9wLXrt1g94Xr3MASxxkKjeP75HmGE6Xols8DiaWvXLrNNko59YCSF/g4rovr+Gjt0Gq3MeQk6RIn98iThOnwEIWi026htGKeWRoKWrbofHAVtWBG+PKiFHzdu96KcRS/+IlHyHPD9Rv7PPCWGMcGeL6H62gcrQiDgDxPybMcaw3z+eKNLr4gCMLrxlH3jdvN/l8VSRwVeqw6TKzuc/T4VUHGce4fR4Uhx+2ntabf7wOFm0GVbp7n5HleCw6OOo5AGWavdAWpXCaqMoVhyNve9jaeeeaZevC8cv2ocByH+XxOEAS1K8RyuaxdGKp1AO12myRJCIKA4XBYhx1ZLBb0er1azFGJVowxjMdjoijCdV329vbQWtchW6rwM2trayil6Ha7uK7LaDSi0WjUIT+q86mcHvb29gB46KGH+PznP4/jOGxsbLC2tsbFixdxXZdHHnkEz/OYz+d1nVT1tr6+znQ6Zblc1m4blXOK4zg4jkMURRwcHNDtduuwO2EY0mq1aseNU6dO1aKFXq/H6dOna2ePKuRJGIZsb29zeHhImqZ4nleHd4miqL72xhiMMSRJ8iKhSJqmjMfj2mmk1+sBMBwOX+S4kaYp/X6fJElIkqQWRFSCno2Njdol5Kd/+qf5wAc+wPPPP1+HY1FK1c4qeZ5z8uRJgFqMsVwuazeOTqdDt9uthTKV64jjOLz1rW+txTLV8c1mk1OnTtUOLOPxGM/zuHDhQh1KpgrhslwuiaKIMAwJgoD5fE6r1eLSpUukacpwOMRxHEajEWmacvLkSZIk4ebNm5w6dYpGo/EiBxxBEN58KNfDNrpk0RK7WNBrOWyfO4/xGhzc3CWLY3qtkM5ggL+2wXD3Biaa0W11aPgKx/fobF9CByFZPMfRisV4gre9Dr1tbj73BPP9K7TbXcJmgHKbKK9JGi8JOm38oEGyWGBwi0HqdInnheRZhNaFOCPJLMvDGzSbbZyggbFeEQo2t/ihj9NosNgfYZ0mQbeLzcbkiwlu0CbHIU8ywlYf6wTkWY7jlhNkcHE9TZJFBL6P6zfRgzWs44OxKL+F0+yhtM9yMmTvhafx0hnaNZhlG0WIc
jSOE2CDFoPLm4xf+DxPP/oMpx54C+vaox2OGY+muL7LiZM94szA7oxRbFhmhkVusMbBaoMytnCGdQv31WWaE9siHI/rFM+B3BThWkyp2KhdN2wpANGF84SiGMRX1mJNJYdYvfDVP7Z8vlVtKV24R1Q7KVuGVymcHhSFU23hsqrLgV2FURpjczBlGspiS9EHpSOHtYVABF2IDlCqnEhh0aoSYBQCj6zsF7EZzEtBQ2XbUbl0oKq+pkKMUbt+qFttUgeFow22dCWxqjB8MQasUeQU+xosRhUhly3FgHVepl0FzTG1M0pxLpXgpqpbU1QXTtkHZ40tw64UexSij8IpJVNF3k55BoWopcgzV7fEKG6ZuimFPJaigLosWV2/gFWFEEiXQpsinEwl2LF4WpX3TXHNcmvxnFI0YgvHD7cUrShg4Dm0XUUncMlNTm4Mvtac2whZRClz67DA48Z8ie8oBr7HNI7xQg9PKwakPHiqy9Zam96gQzKdMJ0usakl0zlBK6SnA64NZxij6QZFmZ3RlHU3JreKG5MZgxPbnHnnO/m5hx8lTVLCQcj6iW0Odye4w316zYxpaun5unDEURrH0dzcnXL15iGzJGf79BpnN/s0PIcb127SdFwyk2HSovYWVtHwHTodj3mUkxmXrTUfP2hwY2ePsNkgaHjsHkTkpgh3dbBMcAOPVstlY73FwTTDy3Y5f/855vt7zBcZeZZw6kSPNI3wraU/aDBNUrR26QQOjUGHq9fn5LOUtqOY5i5X9qaEYci5dkC/7TBdKsZJhs0y/KbHZJnSdRQnTwckrZBHH9vDzTM2eyFnT3R4YTxnOEw43w84d7LF0gvY29mjF/gkWYqLIU2KeyBGsR9FOI7GcyTUiyAIgiAIgiAUqPKljmIWZbRYMJ9PSKMYfE337FkcY7HZZ5jtLinCnRSzEbSyGKNZThtcd+ac27D4WqG0RmmDzYsXaL9pCRsZ85lHnDiEVUzSJENFGetnN6EZ8NRzO3zssYzLpy2Ho0NaYRNlC9FFTkaeFfHEjXVo9xq0gjYYhdIeaIPBMBkekCwWLKOUp68P+ZkrC355kmJ0yKkTX0ur7fK9v/O/5/KD7+bGtetoR/GJj/4En/38Z9jNExKKmQXVWE4138LFI1Q9Ov45NB65SQjcDg2/TSNsYk3EcHeK90TEufsvsXVim0uXL/OWB97O/W97iF6/RxIlJPGY+XRMtpxj4jmL8SGf+cyjvH2zxSAM0dqlfCMHqzEmI82WRceNgpOdlLbj8bYLA4JGk2XawM7neNmEJz71NM/tLhgdRgzWW6yfWWNtY4Ow0eDRX/wscWpwJwmeH4LjgXLIlSJNFriuh+d6xYkrU7imeJo8TkmThExlOPMJZjKDLKfTaeE6mkmaMQH8cvrMhqvZTQyJiD++7DRDF9+L2D57H08+8QwHwxmH4wn7BwdsbBT29a7WBL5PIwwATZanZFnKcimOH4IgfHVw1K1jdd3R9UDtIFHtd1T0seouULkkVAPsR4Umxzl9HIfjOKytrdVij9V0PM+rw7NUYTOAF4lSVvMzxtT7aa05ffo0GxsbzOdz0jStxQdVOpWoYTKZ1PlVIg7HcWoHj9lsxokTJxgOh7RarfrcAXq9HpPJpF5XiRIqV40gCAiCgGeffZaNjQ1838d1XTqdTu0I4TgOrVaL+Xxen2sQBLzzne+sRQfvfve7yfOcn/qpn6pDhlSCjrW1NVqtFvv7+7UDSXWdFosF1lqazWYdKsUYQxAENBoNHMfh4OCA69ev02g0OHHiRO04UoW9WSwWDAaD2nVjuVzS7XZRSjEYDOrr5LouJ3o6awABAABJREFUvu/XdVaFRYHCGWVtbY0wDPnMZz7DYDCow/JYa5lOp/R6PfI8Zzwe1wKcxWJBp9PB8zyMMbTb7foctdZsbm7WgpMLFy4wGo1YLBa1o8jly5fZ2dlhY2MDrTU///M/z3d8x3dw+fJlPvnJT9ZuGpVLChQuIlEUvUjo4vt+LQhyHIf19fW6HEAdzqdwF/Rq8UYl8jHG4Loua2trXLlyhevXr+P7Ptba2gHkLW95C9evX6+vUb/fZ3d3l8ViwYkTJ1hbW+OFF16oQwVV9+/FixfJsozNzc1aaCQIwpsTi2U6GuFqS8ML6W5sk/oNJnt7RMsZG+0WrV6XPOwyOpwx3ZvSGXRp+Bo/dEncVuGGMJ/ieR7TK88Rbpwh7LR44uf+A8Mb+2xuruH1Wig/wCiPPM7QePhNj3hyiM0TlnGKdhqFC4eTgs0JeussJ4dEh7t0B2vooIm1GmvBbXbQnkuyWJKm4Hc3CBo90miO60CUxmQ5dHwfv90hTw0kMSZNi06KPIM8JtMefmsNr9kkzw3aDVCOQ7ScE4RrOH7IZOcFpntXmezeoL+2TtBsYvM5Jndx/QbKDTHpEpPFBIHm8n3r5PGE3Rs7NDo9PGdBq90F63BiIytEA4dLDhc5mc3IMfjaIbGFo4OnLbPcYJyiXeWh0AqMMlhThDlxdRH6RXMrtElWCkJ8CicMTCGe0BRKgJxboV2gCMmiKAb9KyGIpRLu2jokSKktKMIFK6gCpxROHgqrDCYvRBHF5JvClaTIo+iHUaXII1eFyKMSReSU4VBKgYUp3UkcC+hbjhoGCkGLBY0mw6IMZeiUQvSCVaX4osjDoUjDlEIPU+aV2cp9o3A2sVZhtSKmyEdjiVFoqwrhhbJoCnfXyi3FKx1LdBWqpRThOLYKq2PJFSSUIpdStGEVpcNrKeoohSCugtwqMiCjEN54duV6rAg2HAqBhC4FLxZVilw0ubJFyJi6HEW5PFXW34qQxNGKwFGkucEpw+p4urhOLa3oe4peoElywzjJWfcU5zaaLKMlOR7XpglGGRJjONMKabgar+EQa5dwNuXB823OX96i1W7T7Gxw9fA5LIpxmtDoNTnd63FzOiWJNZst8DsdnDii66VYYC9xeXwn4j2X2+TuOqPhBL/js3liwPhgxloesd5WzBJoNsIq/hHWbbA7XTI8XNAIPVr9Jmc2B4Q6Z7pI6DV9tM2ZLyxB4LFcxnRDh0G/UUz+snDhRBPb8Lh5bUS322CRwPX9JQejJe1Bm2UEoevQ7A9oWMuNgzGL0YK3XF7H650jurFHniRsn+gxmRt0EtHf6nMwS1DAyRNdXFdx9dohi/mShuswzyzP3BwRNltc6rfYaueMpzFXRxlJbGi3QpY5dPyAS2caBGsNPvXYkOn+lFP9Jie3ulybRjz33JjTvQYPXmhjuh2ef3yHpnVY5oYktvRDFydwOZwlzNOUbq+No+bckjC9dkjrURAEQRAEQbhHKVTzaRQTzebMFxOWizlJHKEdaAzWCP0mjtVc+8QvER0m5XGmePGzmjgPiceGXntBP1DkTqGod5QlSxXKgaCVsVhosgzSOYQN8B2P/uktaARc273JLz+dMcktp7ZCpos5k+mYLM4YL1L2xzF7U8tkCco4XNzQfOPXnmRzsEGWxkTxgtHeTQ6H+4zGM57es/zEbsZOluN7IadOfD1h6PHbfvd/w/f+wG9lcpiQLsf8
i3/4L/jko7/MgcnK2RJVvZSv7srBJ6BBj467RcvfxnF8lNK0gjbtZodmM8RRijRf4OuUltmkrU5w/+V38LXf8BC+75MkMbPphOHeHsvxiGQ2J49TPv/pX2a8f8B3f/t9tJoNXOVh8hSMxZocazO0dsjzBO34dBoe95/fwG+ExCZjlkTsTQ7w4oxwpMi0T39ds77dx/V84nhBEi9YP9FFNWNCZZjkCuNZFllE4GkaYQe33SPWTjmLxaCVU7xs27IDI8/QiwmtxZJsNqfdDtGBSx5lzKwlVND2Fb4Dm0qxH1vie0D8Uc3cuddZ7w544MFT3Ny9Sn9tnUsXT7OYP401lmeefprJZIDrhKR5juO4tJoB7XaTrc0BX3juyTp0gCAIwpuV27l8HCeWWF1fCThu59Rxu9Ax1fpVQcRxLh9Hv1dhW7rdLlmWvUhIUok3qvRX86yOW82vCg1TldMYQ7PZ5OLFi1y5coUsy2rHDs/zsNYSRRFXr16l0WjQ6/XIsqJ7v9vtcuPGDa5cuUK/3+fcuXMcHBwwn89rF4vt7W1eeOEFrLWcPXuW4XBYD7pba1kul2xsbNTuIGEY4vt+fZ5RFGGMYTQaEYYhg8GgFmJULhZVSJNLly7xPd/zPXVIGNd1eeKJJ+rz3tjY4Nq1a5w/f54nnniCzc1Ner0ee3t7dYiaKiTL3t4e7XYbz/NqYcu1a9fqsCpVvrPZjPPnz9fOK/P5vHagOH/+fC06CIKA3d3dwk2tFM6EYViHnJnP50wmkzodKGKt7+zscPLkySLueunGtrOzw4kTJwiCoL5+lZtHFZIlCALuv/9+RqMRSZLUdVcJPiqXjdFohOd5fPGLX8T3fU6fPs36+jo3b97kwx/+MN/3fd/H448/Xpe9Cp/S6XTI85wwDGtBz6p7ShVipRL1VKIR3/drgUjl4lHtU6ULhdjoxIkTRFFUu560Wi1c12V/f58gCGg2mwyHw3p/YwyTyQSAjY2Nev10WswyXSwWTKfTer80TV/qJ0IQhHuUNMkwcYofujhhQGwyssM9JsMDzpzaIggDaPRJ04gsnqE9TSNs0t/eYJEk+E5IHi8xyZJkd59Wv0/75Dajq08QmhmbXZf1jT6NtS1So7BZiuu4ZejZDGMVym8TuGlp96Bx/RDdGTA72MGahMGZi+QmJ41S8jyl2V/DbbRIl0uW02Kg1PFbxPMpjueCNbga8lxhvQ5pHKEdRRItURiCZlCEwnADPBROo4XRHlpZrNIs5zMarRbacUhmB+TxGIecQKekozGTVodGp0PQNKCL/LAW6zTR3bNc+Poe1z75s5w8vcZimZP4PrnROHqB57tsrvdwXR+1NyGzGQ3rsswtWZ7hqSLciwU8Da4tJgVhFLFVJKWDg1KULhcWz1Hkpuh78FF4Hpi8cP5wShFHVjpFqFLI4Oqin8nkRTpFuwiUhtQUAhRbCiMqRw2tLaWWF2OKvJUqRBVKgeOAtarcBlU8GWMtNqf0gS0ECKZsgtmyv8RW7iOA6wCl00m1JS8FKp4qHD8cFLZsxzmlICMvhRuOKsKWFKKTQlihdeGkYatClC68RdkVaSlB0YoidDDFsTmFAKPSXuiynAkWxxbOJNYWA9yF8KNQXBjK81WFs0aVLaVLh1vULnX4nbJZa6p9KBw8dBmixUWTVfVZOqg4VtUinEqIYylFIWXYHM9SCEKsxcNiVRHOJijDutisuBd0eU851rLmKnyt6IVeIdJJDBuu5tRam9FkzlavxZOHS6ZGgcrYCBxcRzNVEBuNP5/ztefWOHFmDT9s0+z02R/uM5svsFlGZ9Bh0HS5fnCA8kNCNSfobeBGS/oNS7LImRiPjzx6g7MnBwSDE3z0Zz+O56f0+z2mBws2bMaZUyGH85xBx8dxDJgcP2hyMI0YT2KMzklyy7lmgGuWjGbQUIY4y7F5hh8GzJcxzYbPer/NZD4jjnLW1jvQbnPj2pCm65AbmC1SxvO0cH91FIEHge/QzBfMUk26TDi7HmLylP0rV1C5YbPfZrRY4tmc9nqLxTLGGk3XV/iB5vr1IekywQUSY9idJYShx6VBl81GSmQUBwvIMtCFlQ6t0OfMwMPtd/n81QXjvTGnBz7nT3W5vsy4sTPh0naP85sBer3H458/QMUG42iWeUwjdFC+y+E8JbUKpxHQ9N3i/rllB/OaIcIPQRAEQRAE4Z7FGkOcRMwXI6LFjDiOS+W+ixt4NLpruMojXcy4+alHMVHxQmasIrUOkYVpFODvp3ROG6wuXsYcFOQGYxTatXhBTpY4LOYuYUfRbHUJHMW1nStcvbLg2YnHiXZOlCe88OyEq/uWm0vLYWLIrKXjuJwKPS5teNx/oYPWHpPpFBSMhnsM9w6J0oTnDi3/didnbCyu53Fi8xsImy1+5a/9Ln7VBz6AwiOe3OTHfuhv8YlHPsn+l4g+oHhpdvBp0GKNlrNOM9jCdUM816Ppt2m4DRzHReETNlv0G5u4rsN8mPK5F77IF375c7T+1x/g8lsvMRofsr+zw/zwgOVoRLaYM97f5SMf+Xm+89IWl09u4XkByhqscrB5UtiuWp88S7DKJWi0OHnuDKHvM0+WeF6DdrPDjf09TvXW6AzaDBOfg8khYbNDbnPyPMPkKTrQZNpnuEwYqowLjiEDWk6AdRysdjBphuPYohMrL+K/O66Pk+eYNCUdTjnjK8LRIa2NNfqdkINpgrEG5UJlGOJZ2HAVB5klKjsRFEV82a803gyiD8/1eO93vIfBZoOPfWyfzz/6eTq9NdZ6AXlm0dqwtzfkxOZJTAauqzl54hQP3vdWnr/yLLPZ/I0+BUEQhNeVo6KMVVFHJahY5ajIA/iSfV5uHkcFGkfTP26fTqdTD/RXTgWr4pDa/rt0/jgqTFndryr7ah4XL17kYx/7WC38qNwUqjqpHBsODw/rdCaTSV0u13XZ3d0lSRLiOGY2mxEEAZ1Oh4ceeqgOB1OFpMnzvHaHGA6HjMdjTp8+TafTodFo1KKOyl1lsViwtbVFEATMZrM6tEyVl9aas2fP0u/32dvbq0Uoo9EIpVQtfHBdlziOuf/+++twLZVwpApzsrm5iVKK0WhElmU0m81atHDu3DmyLMPzPLIso9fr1aFTsiyrQ7/s7+/XbhZBEPDCCy/QarWI47h2OpnP57UrSLfbpdPpYIxhZ2enFt8opZjP5xhj6jAyxhjSNKXX6xFFEc1mk8lkwnQ6rV08RqMR4/G4dtKYTqd13U8mE+bzOY7j1M4tcRxjreW5557DdV3OnTvHL/zCL/C+972Pd77znfzcz/0caZoSxzHT6RRrLdvb2+R5XguMqrAy165dq++L6vr4vk+73a7LtHovJ0lCFEUAdZgbay3tdhtjDL7vk+c5rVaL0WiE4zh0Oh0ODw+ZzWasra1x+fJlDg8PaTQadegbYwzb29u1G8lgMMBxHNI0rZ1ZBEF4c6Kw9DsBXqNBogqXg9n+iFOnTtI9scF0nmDimOV0SjafM+i26J0+y3g+JV8aPD9hMT6k1QjYuv9BjNdhdLBPFkW0un0ap/o0N7a
J05wsXuL6HtlyTp7EBO0mWZKQxyluKaLUbjFIPb5xBZeUxmCLRZQDObPDIWunTqP9NrkK2Hv+UVrdAdlyyfDaVbbuewDH87DxFO17JLM5ybVr9DY2yKKEaD6j2Qqwugi15TkelZWCsgbleqTLBWG7h/YbgEIpTaO3gWdj3M01FosMz80JewPcZr+QVOQuB9eeRVuHwdmz6Mzh1Ne+h/3nniLbHdEyGWQpwXa/cJuYJLSSlBODBp7nsLdISaOM7bAIH7fMwTcwnkdkthiQLXSOCq0MOKXzhbYopxB8OMqidaFmMMaSmUIQYZXCWIW2thBRKNBaoTWFe60qxBpaFw4gJgdXVWFUitAyCtBO6dKhi1AplagDC45TOH2YvBB5OK7FWqcIH2MLoYNdDYmCKYQd5fdcWZwqXIst+smMLb/rwrXEWFsMItvCW8OWR9syncp1g/KcK3eQ3Cq6jsJ3NBmGODfkZcGNLVw/cgtWaXQpntDlPZGrIoQOWpFZUwo/KucQyHQRWsWx4NoyzouqwseU7iWluEOXprRpKfiwaByKa5Ta4rxzdStMD6qQwegynVKGU+etbSn5UFUImVsiGa+cnOai8B1Fntvi+uvCUcTVhdsHpvp/0c51UKz5Gl9D6ChavsPOLKKhHXrtkNE8InA9Hhkl7C0NjqPY8N2ibKFHlub4yznffP+As285zWwSc3BtB6UUs6j4+7Last5tMJrPaXfbZPMlXrdDNlvSdXNGB0ty7fHhx/eZzHLmy5Th1AWzYGt7g3i4oLOcsX0i5Ob+lFazQW4tvg4IfMNonrKcRmAti0yxFXqs9wP2ZglunJEHDsvZglanyWyR0PY9+v0G0yhmPss40W8S43DtmRt0g4Cw6fP8XsR0kdD1NL1emzQzhNbS9kPGsWU5nnJuo4l2NYcHczwds3Fmk9FwhElzNk/2OJwXbd+Op/CbAS88f8ByOsVzNdYJuDGZkzkOF9dabLQt0yRn/zBnOc/QNqfV9mhpy8nNANVv8cjTBxzuzjnZ9zh7ssMYxfMvHDJwDWc3fVrbfR5/Zkw8nmNsTmqKfi+rFMOFYRRljJOEM1tr9EKv/nt4rRHhhyAIgiAIgnDPkqUJ8XLOYjklihbYPC/inmqN43lozyNY67P2lgexi5jDx55BJTkWh8wqImOJDVybhGz3FzT7mrx8gfOVwvc0CYagUVh75rlDPM/JyFjM5gxHE549VAxNSjKHG5+ZsR/B1GT0HYfLLZdzPZfzW01ObfTp9zuEYYMoj9m9eYU0jjnYnzAcKV6IFf9llDM0FsfRrA3egec3uHT/Kb7xO7+HODHsPHuNv/eDf4qf+q8fZi/PWO0KLmYyFIaSleijoQcE3hpa+Sjr4OkQ1/VptDr4jQDfD/B8H6tguZizt/sMe/ufBTvl0598G8bNiGYzluMxyWxGupwRz8f8x//4k3iLKd/19acJHZc8iwu7URy0G2CtIYuiojOJnCiLyawlCJrMkgXaKpI04WCcknsuew2fxKb4QVhbimqtUbhkieHa9SFZmqKV5RcmEaeC67zrokujO8A4DsbkuJ6LVoo8y7CmiD3rag/jZLT0nHdc1CTpLmlwmu21Nt1wk8PZFMY7OKqYMZKXYo+2Kmb0+OXME20t8Rtxg7/J2d7a4K0P3cd8MabXW+fmjevs7BxglcUPgnL2jUEpA8qQ5SnDwwOefE4zX4xlMEQQhK8KjgoiqpAgR/c5LgzL6voq7MtxQpDjnEOObjsq8jgu5EwV5qXKpzrWcZza/WI1BIzWui7X0bSq7ZWzhLWWwWBAr9djd3e3FhZU6S2XS5bLZe0YMRgMsNaSZRnz+Zy1tTUcx6kdFaqB/Xa7XYdkqUJ8JEnCYrHAcRza7TZxHNPv9+vB/cPDQ9bX11kul+zt7RGGIZ7n1eKE2WxWh0jpdDpMp1P29/dpNBqcPn0a3/fZ29urBRZ7e3sopWg0Gly8eJEkSeh0OjiOQ57npGlKu91mMpnUbhlZlnF4eMjGxgZhGNYhTSrxh1KqDnkSxzEbGxvEcUyv1yOOY1qtFjs7OxweHtLtdlkul7WQpRJwVHVsjOH69esMBgO01vi+z5kzZ7h27RrdbrcWzFhr63AylTjHdV2azWbtulEJP/r9Pmma0mq16pAqlUCjElZUwpMwDL/EiaPX6wHQbDb5iZ/4CT74wQ/S7XaZz+f1da+cY6rr6rpuLZyBwq3E87w6pI1SivF4TJ7nRFFEr9djPp+zWCzqMgRBwHw+p9lsMpvNmEwmtRPIdDrl4OAA13XrUEW+77O9vY3Wmt3dXQCSJHmRY9loNOLatWs88MADGGPY2Njg4OCAVqtVC2EEQXjz4XkOOmySGIckXZBFI06eOkVzbY3hQfFbRJLgJnMGgx7hxkmiJMVEMVGUMtwbs31ii+7WSeaJIh7vYuM5voWwvwFBkziKMXmCzQzGGkyywLoBUZyRpTmqfKZ4jRY2zxhdexbfb+C1mkwPx/jNJslkxPqZS4T9bSZ7z3Pzsc/R39pAkbCcHrJ55jykCfFsghOG7Fy9yXB3zPpZzehAcbhzjXa/j+sF+I6HyWOUCnDCBnmWoLQPucENu2ingXKKcGB4HYxxaJy4RHSwx2KaMLj/Mk7QwRiN9hzi0Q26nTZO0MIkOYYA3TlBb2tKNJthTJNOew1FwmS2xNcaN1/iqIAw0LQa4DktDpYZO9MUsCRxggO4WpEYRY7F6mLAX5dhXtwyyq2jK68MyIwlzwqJQq4AY8o+muJfrW0h8DC6EGZgCycPq8jKGf+6CrNShpLRpXuGdhS+LUKeZArSug1mC1cCVbh1lB4WKG3J87INaMsYM6oIrWKq6CVlWBdHqUo3QW4ruQSkpRDCKaUPUOxD6WJRuXc4FjIMob7lzuE5DnluyDXMbM56p00ym6FMISTRugij4+SF90ZWh0KxpCi80l0lwhQiFAoHELdUZthSjOKUwha7UsbKQUWX+2gKQYgqJ3hV4pLMKoyytwQcpUNL1SdkVNH2LWuvdBUpHDqqcfoihExRcq0KkYqHwnNs4UQDWK1IULja0FAKk1lC1wFduIP4Bvp+YbXSCTzWGg5RbBn4Lo7vMF6kdBoOzy9ydtIcV2nONQLagcu1+ZJkmeJFS9730Db9zZD7vuZdXHviBabTm1w7mJITkiwiTq11uLE3oTfo0HI0cW+d2c6QtcCQK8Ui0vynL9zk5njBxe0OuB4f++jP89AD54j3p/SyKf1zbWZW0XLDwuXFQJZbRpMlWZSAA3FqOd316DZdDoZzXD8gtYrJ/oS19T6jeUyoobvWZhQlRJOYrbUmNmxx48oBgYXT6y2evHpAusw50fZxHWh6Cu37BA2PvXGMyQz3b3dQHownEVma0umFLOIMx/UJmh67hxG5tQQONAYthgczJsMRJtW4mz2e3Z8BigtrPbb6DrM4Zp66LJMY67iozBBkOWfOtsm6XR5/agebWAZth9NbTWbK5ZHH99jw4IFzPZprbb7w9B7RNCVWYI1Ga0NmLCbXxJkh1XD57Cm8eEng5CilX/RO9VqhXo9EhS
9FKXXQaDTWHnzwwTe6KIIgCK8bjz/+OGEYMhwOX3upoiAIwgpKqYPA99fOnDxFnmfkeYa1pvaw1NpBaX3LftGASVOy+Yw8KkK+GArrx0q57zkW3zNF3FduxUOFYoaFKT0jFeD4hQVlllqmqWJhK2PI4oUwUIq2rwj8Iq68ozRKFz+NFkuWpWRpijGW3ChmGcxX7P2U9vC9JtYm9AebNJpNPNfhcH+H4eFBOVvheDQOGg+tHLRyUUqjywWlylkmLlqXgzwKrMnJ0pQkm2Nsiuto+mt9vMCr3ufrDoPZZMpivmCz6dH0vTLd4syqqSvWmOJ6KF3OVskLW03HIckStHLI8ox5BqYcRMkyA7q0+Fwx5ATI0owkybGAA7RcRa/pYnDwBxtYXXRYOa4uZ7bYWkBirSXev4GTZ2TWkoQdoiQmWibkxqKMKa910Xljy1PNX1SntyxD71kqJ9KvEJSCdqtFp9MhyzKWy0U9EFLY1bqgFMbkeK5fzsiNiOMErV0ajZBxaZVu7eswRUIQBOENRCl1AKxVrhl3cdwdtx/n4nG36R8n+Kj+32q1XjSIfjdlO+ogctz+1lqGw2Hh8EYhLqmEEL7vs7a2BlDnXzmBOI5Tp22tJc/z+rhK9LEaiqYSkxxNx1qL53kvEtGsClNWnUpW66po5xRChMFgQBAETCYTrLUsFgvG4zFKKYIgqEPlVPkEQVALPSqXidVyVUuSJLXDh9Ya13VJ07QWUVTl8X2/Luuq2KcSFVVlr/JY3be6JkfPdfX6HE27yq9yRalCl1ThZLIse5GQs8qzqoOqfo+6zLiui9a6DkGzvb0NwHK5rF1YqutYlTHLshfdC6v5VudeiU/SNKXRaNR1X/0tVuVfPc8q3NDq30YlaqrOe3X76n6O49T1Ud0rlUhKKcXNmzdJ03RorV3/kj8IQRDuWZRSB4HnrJ3f6tdvvdpxUI4mz/LiPdYYtFJoR6O0U9hCAGmSgMnxfB/H8wtH07x4Wy1cJXQxeK+Ld3FjTDExonSTsKb8TQdQGu26xft7Xkyg0K5bvBBrVbzraxelHUyWYLKkKAugtC7ik1BqALQmz1LyOAKtUVpjsrx8z9Y4rlOcoyrKVr2gFgOfBlW6gJhyIgfl+7njaLTNyOIYqzzcIERVfRm2DuwB5TljTRF6Nk8xeV6W0ZIlCTY3mDwjz2+5VuTGkGaGLDMkWV64UJT9CasRGF4sU+VFPRZVH0LtGHHkvXu1SfNSw7FH06iO1Svr7G3e7Y+W5yj2yL4v7nW59f3oevjS8z2KVi/eVqXjalWEFzbmRWU6WhaOlLkSXKjj9ltJ/7h9jpbhuPM5rqzcZr9VjtbXl5RLreRrb22v1lf/r8Q3nlaFC4gF3ykcV7CgtCLNTekuokiL+Du4SuHpFfGNsbR8jd/wUdrB9QOixZLa2sVWvw1Fp5dWhcuKzQ2OKsP5WMsyNSzjDN9z0Y4q28cWTytcLEHDxWqn+P0oJ6thLVma1SeZ5xbXuSXsvlUPRXii3JT3g1u0t0x5/srRZGlRTtdxyExOnoPrVKGLVP13kZvCWSfwit+hzBhMZnAcTWYMjlZ4YchyUbpBa3A9r3AUzgrHREcrtOsQpQZHawJXFZPX8uL3gPI3QAOBr7HaIU6y+vp5jgKtiBKDMobQd3A8TZTkmLy46Kb8ndWqdN6xhZhIaY2jNA6FMOz6LMXVsEjNS916d4UIP75MKKWeBbrAc29wUQRBEF5PLgATa+3FN7oggiC8uZG2lSAISLtDEIQ3KdLOEQSh5ALS1hGENx3ynBcEQRB4ndp5IvwQBEEQBEEQBEEQBEEQBEEQBEEQBEEQBEG4R/nSwJ6CIAiCIAiCIAiCIAiCIAiCIAiCIAiCIAjCPYEIPwRBEARBEARBEARBEARBEARBEARBEARBEO5RRPghCIIgCIIgCIIgCIIgCIIgCIIgCIIgCIJwjyLCD0EQBEEQBEEQBEEQBEEQBEEQBEEQBEEQhHsUEX4IgiAIgiAIgiAIgiAIgiAIgiAIgiAIgiDco4jwQxAEQRAEQRAEQRAEQRAEQRAEQRAEQRAE4R5FhB+CIAiCIAiCIAiCIAiCIAiCIAiCIAiCIAj3KCL8EARBEARBEARBEARBEARBEARBEARBEARBuEcR4YcgCIIgCIIgCIIgCIIgCIIgCIIgCIIgCMI9igg/BEEQBEEQBEEQBEEQBEEQBEEQBEEQBEEQ7lFE+CEIgiAIgiAIgiAIgiAIgiAIgiAIgiAIgnCPIsIPQRAEQRAEQRAEQRAEQRAEQRAEQRAEQRCEexQRfgiCIAiCIAiCIAiCIAiCIAiCIAiCIAiCINyjiPBDEARBEARBEARBEARBEARBEARBEARBEAThHkWEH4IgCIIgCIIgCIIgCIIgCIIgCIIgCIIgCPcoIvwQBEEQBEEQBEEQBEEQBEEQBEEQBEEQBEG4RxHhhyAIgiAIgiAIgiAIgiAIgiAIgiAIgiAIwj2KCD8EQRAEQRAEQRAEQRAEQRAEQRAEQRAEQRDuUUT4IQiCIAiCIAiCIAiCIAiCIAiCIAiCIAiCcI8iwg9BEARBEARBEARBEARBEARBEARBEARBEIR7FBF+CIIgCIIgCIIgCIIgCIIgCIIgCIIgCIIg3KOI8EMQBEEQBEEQBEEQBEEQBEEQBEEQBEEQBOEeRYQfgiAIgiAIgiAIgiAIgiAIgiAIgiAIgiAI9ygi/BAEQRAEQRAEQRAEQRAEQRAEQRAEQRAEQbhHEeGHIAiCIAiCIAiCIAiCIAiCIAiCIAiCIAjCPYoIPwRBEARBEARBEARBEARBEARBEARBEARBEO5RRPghCIIgCIIgCIIgCIIgCIIgCIIgCIIgCIJwjyLCD0EQBEEQBEEQBEEQBEEQBEEQBEEQBEEQhHsUEX4IgiAIgiAIgiAIgiAIgiAIgiAIgiAIgiDco4jwQxAEQRAEQRAEQRAEQRAEQRAEQRAEQRAE4R5FhB+CcA+glPqgUsoqpR5+o8vyUiilLpRlta9xuvdMHQiCIAiCIAiCIAiCIAiCIAiCIAiCIHy5EOGH8IZSDub/oFLqXW90WYQ3L0qpfnmf/eAbXRZBEARBEARBEARBEF45SqkPlRNDfvCrIV9BEARBEIQ3knJsxSqlPvRGl+Xlci+WWRBeC9w3ugDCVz0fBL4DeA74zBtZEOE1IwWeeKMLcYQ+8BfK///gG1cMQRAEQRAEQRAEQRAEQRAEQRAE4TiUUh8ELgA/bq39zBtaGEG4xxDhhyAIrynW2mvAW9/ocgiCIAiCIAiCIAiCIAiCIAiCIAj3FB/k1U8Y36eYoHzjNSmRINwjiPBDEARBEARBEARBEARBEARBEARBEARBuOex1v594O+/0eUQhC83+o0ugPDlQyn1XBnT6r1KqZNKqR9SSl1RSi2VUo8rpf4npZRe2f+3KqU+qpQaKaUmSqn/qJR6+x3S/1ql1D8t04yVUvtKqZ9SSn3vMft+UCllKVR7AP+oLFu1PHfMMSeUUn9bKfUFpdRCKTVWSn1CKfVHlVLBbcpUx19VSgVKqT+rlPqcU
mparu+v7KuUUr+9PM+b5TlcU0r9XFk36+V+314eG1frbpP3JaWUKfd94JjtZ8vzebQsz1Qp9ZhS6h8qpb7zduneIb+3K6V+VCn1rFIqKq/bx5RSv08p5d1tesekf6G6PuX3b1JK/Wul1A2lVK6U+t+O2+82af06pdR/Ka/hRCn1caXUD5TbHi6P/+BLlOfXl2mMlFKzMo3fccx+DwPPrny3R5YfvIs6qI65UNb3j5X3SlTel3/+DvdifV5KqUZ5Tz6hir+/3TKt+18i/1ddb4IgCIIgCIIgCIIgCIIgCIIgCIIgvLkQ4cdXJxeBXwb+e6ALeBShOf4O8L8DKKX+GvAvgW+muE86wK8BPnrc4LRS6vcCnwJ+J3AGWAB94LuBf62U+idKKWflkCWwA6Tl90n5vVr2jqT/DcBjwB8BHgAywAe+HvhbwC8qpbbucM4h8HPAXy7PNT+Sfg/4MPBj5XluAXNgDXhPWTe/HsBa+3PAk2X+33+HPH83oICPWWufOJLf91LYTP0R4G0U7jtpWbbfA/yjO6T7JSil/iDw2TLPC2VabeBbgP8v8GGlVPNu0nyJ/H478FHge4EGR+rzJY79c8C/B95LcV/lFNfxQ0qpv/sy0/jzwL8Dvr1c1QK+EfjnSqk/fGT3IYWtV8XOkWX2csu+wrcAHwd+O8X5K4r78i8BDyul2nc4tgt8DPgLwHnAAptlWh9XSl0+7qDXot4EQRCEr07Ui8W/55RSP6IKoW5UCkb/VtkWOnpcoAoh8D9WSn1WFaLeSCn1vFLqnymlvu4l8nWUUn9YFaLbpVJqTyn1H5RS31purwWVtzl+Uyn1V5VSj5Qiz7kqBLP/q1Jq7RXUw1ER67eW5dlThaj4M0qpP6hWhNB3qMc1pdTfKeuvEgv/sFLq5EuU4QeUUr9YnsuwFHT+uqPp3+25CYIgCMKbAaXUg6qYpPRk+Wwele2Av3e7dsdKe+Oz5THD8vn+K14ir7ZS6s8opT5ZTq6IlFJPlXmdfQVl31JK/c2yrTIv07uilPoFpdRfUkqdv81xd93eUUptKKV+v1Lq36piEsq0PO6xsn1y6m7LX6b7wbIt8nD5/QdUMeFkUtbRR5RS77/NsUfbWbebLOPfIf/VSTKRKiYa/ViZ1ktOMhIEQRCENwtH+h9OK6X+gVLqi2X/w2dW9ruslPo/ym2RUupQFROZ/zv14vG44/J4zSYmK6X+dFneSCn1G49se9ltHfUKJozfoUw/WB7zoWO2veIJvitpSP+O8JWJtVaWr5KFIh6WBUbALwDvKNc3gT9XbjPAnwES4A8BrXKftwNfKPf5l0fS/RaKQWgL/CvgTLm+DfzZMk0L/LljyvRwue2Ddyj3ALhe7vc54OvL9Q7wWygG9i3wn4859kPltilwSDG47pfbzgNe+f//UO63AP4/QL9cr4AHgb8I/MaVdP9Euf8v36bMGnih3Of3HFNfabntZygG71W5rQN8APjRI8d8sNz/4WPy+kC5bQL8cWCjXO8D30MhUrHA//Eq758LZTpVff5r4EK5zV35f73fMWm8byWNHwW2yvU94H/h1v35JffESh2MKIQ/f27lOp0o7z1LISpau13ZX2Ud2JUyfAL4mpW6/mB5/1jg/7zDvX5I4UDyPRT3sKYQF13hmL+vV1tvssgiiyyyyMKtNuB/B+yuPMuXK8+Xp4CTR477dSvbDUWba/WYFPhvbpOnB/zEkX0PV/7/vSvbLhxz/LcBByv7xEfyfgF44C7r4cLK8d/LrfbY4cr/LfD/AO4d6vF3rfx/DkQrxz4LDG6T/w+v7JeX+Vbt5D+0kuZ73+h7RhZZZJFFFlm+3AvwP1K861fPytlK2+FF/SHc6uv5y8B/Kv+flO2bav8l8M23yevBledu1TaZrXwfAt96zHFVvj94ZP15bvVb2fI8hivPeQv8vmPSe0XtHYoJUKtlPzhSd7uUfX53eQ0+WNU18Hdv02axwB875tjVdtZ3c6t/ZMStPkML/Pht8u5RTChbrYvxyr3w/dW2N/pelUUWWWSRRZbXe1lpp/xeiknSVf/DDPhMuc+vO9JuGJXtoer7f6Yc3zsm/e9deVZX7abVtstzR/b/wXL9h45J66+vPK+/68i2u2rrUIzf3Vw5j3H5vVo+eRd1eKcyV/l/P7fagOOyfNW2/wq0b5O29O/I8hW7vOEFkOXLeLFv/dgMKQfMj2z/yMqP1f98zPb3lNsiSvHEkeN+HnCOOe6vcGuAoXtk28O8xGA18Oe51Sm/fcz2714p9/uObPvQyrbvvk36v4ZbAxrvf5l1ubXy8HnnHco0PfpwAH6x3PazlMKTl5HfBznS0VGud1au6/fc5tjLFI2ClCMDOnd5/1xYqcufB/RL7XfMtp8tt/0UpdjlyPZ/sJLHB49s++DKtj97zLENbg1m/bcvt0x3WQdV/jscEZccKWMOnLvNvb4A7jvm2GoA7EV/X6+23mSRRRZZZJGFF4t/nwK+rVyvgd/IrU6EDx857r0UbnDvAZor689xazBgefSZV+7zF7k18PGHgEa5/jyFg9XhyrPrwpFjz69s/wfAfWVZNYUY+afKbZ/nmLbnHephtS0zAn4SuFhua1EIaKuBiT9zh3o8BD5NOZhEIYD9DStl/hvHHPu7V/L+K0CvXL8F/AhFu3KOdAzIIossssjyVbgAv3XlOfmvgAdXtq1RuMv+7ZV1H1p5Jh8Av41bk3zeATxSbv/EMXn1KISalsLp9h1VewK4BPyzcttNjvSdcXvhx4+W658q2026XB+UbZf/BfjAkWNecXuHYsLSnwa+hlKsStE/9HXcEsI8yjH9By9xHT7IrYEPC/y1lTbLSeCfcqv/7NuOHHth5RoeAv+CWxOEWsCf4taAyK85Ju//i1uDRr+LWxO13kYx8FK3Hd/o+1UWWWSRRRZZXu+FW/0PU4rJ0N+ysu0+ijGfSrDwMKV4omx7/F5uTVD5kWPSfiUTk3+QIyKKss3yQyvP/m8+csyraes8zKsc6ziuzCvbVvuG7naCr/TvyPIVvbzhBZDly3ixbz0s/spttv9pbqnuvkTJVv4gV2q8h8p1aysvbr/2Nun2Vo77viPbXvIHnCKEiQX+5h32+YVynx86sv5D5frP3uHYHyv3+cm7rM9/Ux73vx+z7f9Xbjv6gHzrykPhG+8irw9yvPDju8r1j7zE8ZU453e8ivvnwkrZf9PL2e/I+o2V47/rNseeX9nng0e2VXWw5PZK1X/CMQMutyvTK6iDqmx/8TbbNbecO/7wbe71f3ybY72Vv6WHXqt6k0UWWWSRRRZutQGXHC8+/M6V58i33UW6/7A85i8cWd/hVgfEcQIKD/jMSp4XjmyvBhX+6m3y9bnVPvwtd1He1bbMo0BwzD4/yK0Bj+aRbVU93gTWjzn2j5bbv3hkveLWANOXdBqU+/zHlbK9942+Z2SRRRZZZJHly7WU7YKr5TPwn7/MYz50p7YLhQCi2n50UsZffqm8KMShliPOFtxe+PFYuf6338V5v17tnYBiAMUC33GX1+KDK/X2w8dsVxQDRBb4
6SPbVttZH+b4SSv/nuP7yi5xqz/k+485rseKo8obfc/KIossssgiy+u98OKJJyeO2V71xzx9tO+i3P57uSXWvO/ItlcyMbnqK/lQ+d3j1hjYDsdPjn7FbR2+fMKPu5rgi/TvyHIPLMfGrxbe9Dxym/W75edz1trZ0Y3WWgPsl18H5efXUvzYVQ+KL8FaOwZ+qfz67rspaBn78+3l1/9yh11/5iXS/693OPabys+fuIuiQaHeA/idqzFKlVIDClUkFA/g4/IaWmt/8S7zO45vKT/vL2OQHbus7HfXcWpvw53q83a8q/w0FEKdL8Fa+zyFvdedeMxaO7/Ntmvl5+A2218rHj5uZfk38tHy6+3uxU/e5tiUW3+Dq+V/V/n5autNEARBEP6ltfbpoyuttf+FW8+Y33IX6f378vNbj6z/boqZnRHw947JLwX+znEJKqWaFLN+ze32sdYmFCHnAH7VXZR3lb9trY2PWf93KMrdpTiP4/g/rbUHx6z/8fLzolKqtbL+3RSDIQB/4zZp/vU7llYQBEEQ3rx8F3CaomP9j9/lsR+11v780ZXW2l+iEJPArT6lih8oP//2HdL95+Xny21nTMrPky9n59ezvVO2b/5z+fVoG+1u+CvHpG2Bv1p+fZ9Sau02x/61ct+j/Hj5efSa/CaKvsUrFINIR/MdU8woFgRBEISvNv6xtXZndYVSSlG4hwP8XWvt4pjjfoRirESx0s+jlHor8A3l1z9R9s/cFUqpBkWI3O+jeHa/x1r72SP7fLn6dl4tP2StHR6z/h9TtCU18JtX1kv/jvAVj/tGF0B4Q7hxm/X5S2xf3ccrPzfLz/FxYpEVqhfuzTvscxxrUAuUrt1hv5dKf+8Ox54oP+924PynKB5sZ4FfT+EAAkVcsBB4wlr7sdcor9tRdSoEK2nfieZrlO+d6vN2bJSfY2vt8g77XaewkL8d0ztsi8pP7w77vBbc6V6stt3uXrzb8r9W9SYIgiAID99h289SCEVfJFwsO/T/APCrgQcoZlw6R449deT715afn7lD+/Cjt1n/dRSzPizwSNGfcSyN8vOVilofPm6ltXailPo08M0UdfHjx+x2rIiTF7cP+hTWnnCrPm4eJ7wp+TiF1err3YYRBEEQhK80qgkyn7XW3uld+zhu90yG4rl8hpWJFUqps+U6gJ9QSh0nToCiLQIvv53xE8A3An9dKXU/xSDGx+/wDv+q2zvlwM0fBL6dYgCiTTG4s8rRNtrL5QVr7bO32fbzFH2DDsVElZ85Zp+XaisdnaxTtZU+dhvBCNy+7SgIgiAIb2aOm4B7iaJvBm4zWdpaa5RSD1OEy1vt53m1E5O7FGHlvp0ixN2vtNYeN9b15erbebU8fNzKsv4+CvwOXlx/0r8jfMUjwg/htSL4MuQRvopj85fe5e4of/x/FPgLFHG9KuHH7y4//9FrnecxVKKYf2ut/cCXIT8ArLWveX0KgiAIgvC6c1fCRaXUQxSd+avi0im3Qvj5FB33q+4WcEu0eCcx8fXbrK9ErYrXV9T6mos4rbXRSmfGcSLO29aHtTZRSh0A23colyAIgiC8GXk1E2TudmLFqiPH1stI/+W2M/46xQDHbwB+f7lkSqlPUsyI/WFr7eiYcryi9o5S6vsoZqJW52YowtRVbmZtivbZ0Tbay+W27SRr7VIpdUjRvjm2rWStvd11ud1knVfTdhQEQRCENzPHTcBdff7e7WTpVzsx+TeVnynw/tuIPuDL17fzarnbviHp3xG+4pFQL8KrpXrwNJRSd3LzqGZU3K1TxJDiBRbu7GbwStOHIo4XwPlXcOyPUpTv/Uqpk0qpd1C87OcUL+G3y+u1cmZ4rdN7PanCBPVKO7Db8bKsUd9g7jRrptr2Su7F43gz1ZsgCIJwb/GPKF7Qfxl4P9Cx1nattSestdsUtp3wpbNLXw3V+8nYWqtexvLe1zBvQRAEQRDe3Kz2gw5eRjvjwstJ1FobW2t/I4Vj2N+gmOlpV74/qZR65zHluOv2Ttn39sMU4ol/AfwKILTWDqy122Ub7e9Wu99l/QiCIAiC8JXFS03AfTWTpV8JP0chxvSAH7nDeIX07QjCG4QIP4RXy6cpXmYBvvO4HZRSPQoxBBQDB6tUoo5jX0bLOF+P3in9kvfdJv2Xw8fLz19ztweWisb/TGFx+d9yy+3jJ621x6n+qrzWlFLfdMz2u6Wy+nqHUur0a5De68lnyk9NYSX/JSilzvHKBDgvRXWfVTHwXi3fcdzKMu1vL7++knvxOD5Tfr4R9SYIgiC8uXjZwsXy2fINFJ0Mv8Fa+1PHhG253ayNSrR4J1Hi7bZVotZu2YZ8vXgjRJy3rQ+llA+sv0b5CYIgCMK9xKuZjPNK84LXYQKNtfbj1to/aa39ZgpXtN9BMaN2E/iRY8rxSto7v5rC0eMx4Puttb9krU2P7PNyZtbeidu2k5RSIbdCtXzZ2kovsU0QBEEQvppYff7e7WTpVzuR+Fngu8p0vhP4caXUcdEAvlx9O6+Wu+0bkv4d4SseEX4Irwpr7ZBbccT+pFLquHvqT1IoD2cUcU9XmZSf/Ttk86/Lzw8qpb7kB1Up9d0UsygA/uXLKPZRKmeO71ZKvf8VHP/D5efvoYiZBvAPj9vRWvsF4BPl17+hlHq1cb4+AlyhEJ78zTvtqJQ6GkP1y4q1dp9bMVn/2G12++OvU/aTlf/3X4P0/gel1HHp/C6KBpUB/u/XIJ83ut4EQRCENxfHChePbKuEi3UHgbX2dtaXv/I26z9dfr5LKdW+zT7vuc36TwEZhSj4lbTLXi63E3F2uBW/9bUScVb1sa2Uunybfb4Rif8qCIIgfHVSTZB53Se0WGuf5dZAxK9+nfOaW2t/DPi95aqvU0pVoVdeTXunaqN9zlprjm4sJ6S87+j6u+S8UurCbbZ9G0UflOXWRJVXS9VW+tY7TNa5XdtREARBEL7a+CIwKv9/u8nYGnhv+XW1b+NVT0wux7h+JYUI4ruBf1OKHVZ5tX07d5ww/hpytxN8pX9H+IpHhB/Ca8Gfp/ghfjfwY0qpMwBKqbZS6s8Af6rc769ZaydHjv18+fmb76D8+/sUMbMawH9SSv2KMn1HKfW9wI+V+/20tfZnXkH5f7JcFMVD6n+sBvVVwUNKqb+tlPrAbY7/d8Au8BaKWRy7wH+4Q35/hGL27HtWz6fMr6OU+j6l1D97OQUvZ3X8QYoX7t+hlPpxpdS7VtLzlFK/Qin1NyjUmG80f6n8fL9S6keUUlsASqmuUuovAn+AIi7ta0oZS7eKB/u7b7efUuq9SilbLu+9Q5IhxbV7e3mcp5T6AeCHyu3/8A7x7V4Jb0i9CYIgCG86frtS6tLRlUqpbwe+tfz6r8rP6rlyonruHDnma4Dvv00+HwbmFM/LP3DMsS7wPx13YBkT/t+UX/9SKcQ4FqWUewdhyUvxR4/pmAD4wxTlnlCcx2vBp4Hny//fTsT5J16jvARBEAThXuMjFDHUX3JCy2vEh8rPP3YnoUnZH9R/OQnepk1Rsax2A3x41e2dqo329tu
IJP7fwO0GIu6GP31MWRS3+vg+Uk4Gey34cYp+rbPAbzsm3y7w+16jvARBEAThnsba/z977x0vaVbVe3/XfkJVndSn4wQmkpM6BsAIg16zvphQUZTxonivVwEVzGEUAwYUveq9IsKooOhVQBFBQRgEFSQoiGSYBobJ3X1ihSfs9f6x91P1nOdU1cl9umf279PVdep5dlg7r73W2murMjp0+gwRmRkT7LuB++HW1/9Xi7svB5NV9b3AlwLngK/G6QXj2vu9yna2c2B8P7DTA75BvhNwwSMYfgTsGar6L8D34SbBJwKfEJGzOKvDX8Rtbl8KPHdM9D8BMtyJgXtE5FMiclpE3lJL/xzwdbhF5NOBt4vICs6DyF/iXEy+h5G3jZ3SrzjFxZuAGeC3gTMicgbo4oxTfogJi4w3vvjj2qM/UdViSn7/jFs4BrhTGG8Xka7Pbxn4M0bKl+3Q/zfAU3H1+ATg32vp9YC34zxCHLpLLVV9PXCj//lU4A7fV84CPwM8D3i3fz/Y5+wrt6rPE5E1389Oi8gzd5HW9wGfBvyniCzh+uJNuP7zVlx/2Tcccr0FBAQEBNx7kAGvEZHPB3cCRES+lpF3tdd5PgXg/cCtOD7uz0XkgT5OIiLfgLvqrnn1CzDc4Fd3y/+CN6rt+PhX+fyunULnj+HWuAcD/yIiX1EJI7wS5kEi8kPAB3D32g8hIjfUjDivmZLHVcArqjAiMiMiP8xovf0VVe1Oib9t+NO4z/E//4eIPMcrMBCRkyLyAuDLcXxnQEBAQEDAfQpepvLD/ueTROQvROSh1XsROSYi3yMiv71PWT4Xd1L2BI7P+Gap3U8vIleJyNNwpzu/bptpvldEfklEHlUZgXie5dHA//Zh3u7lWxV2y++8HqfEeSTw2zI6uLQgIs8Gfhc4M4lQEbnJ80mnp5RnBXiaL9MRH+9S4I9w7t0V+Lmtq2V7UNWP4uSGAC8UkW+rlEci8nDcYa1xSq2AgICAgID7Kn4Jd+DmcuDVIvIQABFpicj34HRc4A6ofrQRd18OJqvqf+A8fiwDXw+8VESiWpBdy3bY3oFxvH5HReSmreidgB0d8A3ynYCLAcHwI2BfoKq/DzwK+FOcd4453IT/OuCJqvpkVS3HxPsAzjLwtT78pbh7Xa9ohPs34OE4JcKHcK6SCpzLqGcDj1HVu/ZA/xLOCOMpuE30WWAet1l+E+70599MSaJu9feibeT3MuBhOG8mH/KPY9wi90LgO3dI/4uBhwDPxy2KJbDg6b8Z+Fn//tChqj+HM1D5JxxzEuOMU56sqnUDlaV9zvrncdcOvQenxLrafxZ3kda/4Fx2/QXO0EKBD+KMMK5X1bGKsL3gEOstICAgIODeg2fhDGb/WURWcYYbf4PzWPYRHB8EDDezT8cZ9l4PfLhmePtXuPXvmVPyeg7OY0aMEzisiMg53MmIr8JdkVdhg9Giqp7GuQK9DafUeA2wLiL3AH0c7/Q83GlW3VENjPBUnIDiFk/XMvDruP3RXwO/ust0J+FFwIv93z8FnPVGnHfiTuL8EKO7YoMRZ0BAQEDAfQqq+uc444/qQNH7RWTVr9FngBfgDgLtR15LOIH8+3GGoH8OrIrIPSLSxfEqvw9cx/b5jFM4Dxn/BlQHcQbA2zzd9+DW+zodp9kFv6OqH8TJfsB5gD3n6+kcjn/5R0bKit3i330eP447GHXW0/kd/v2PqOpbJsTdLX4Ad3XMHM4IZM0ftPkvXB1+nw+X7XO+AQEBAQEBFx28MceTcDzD9cAHPD+wiuObWjie4Jlj4u7bwWRVfQeOn1nFee16sbhrZvYq29nywPg+YTcHfIN8J+CCRrx1kIB7C1T1mi3e38TI5eWO01DVd7ELrxuq+k84ZfZW4e7ATZrb9qagqjcAN2wzrMV57vjjrcKOQXV/6ttU9X3bzO8W3MZ2O2FvYuu2Oc0Et+n7AZ/+lneqbSec91KyyZBG3H231cme9zfi3MTWdXAjo5O6zXclTggyUZGjqjezzXvjvDuzb9lOWB/++m2EuWaL9zuut4CAgICAgBo+gjtF8bM4hcdJ4DTOkOM5qrrh2jBVfYWIfDHwk8Dn4gxvP44zjPhlpihgVDUTka/GGY98F/AgnNHuq3AnUz5QC740Jv7b/Wnf/4kzfHwYzlhzFWfE+S/AX3k+csdQ1b8SkcfjTqB8rqftv3AGuL/n+cJ9g6qqiDwVeDOuTI/A8Rw3A7+mqq8RkerUyNJ+5h0QEBAQEHAxQFV/Q0Rej1NQPB64DMhx6/4bcd4m9iuvj4jIZ+IMUZ+IE/gv4rymvgcn6P9r4O+3meQTcLzVY3HGJJfglBXvB/4O+M1xh5V2y++o6g+JyPt9vIfjrsn5d5yS5HdwVzLvCar6gyLybp/Hw3CKkHcAv6qqr91r+mPyWxKRL8DxZk/C1WMfV3/PwSmwIPBJAQEBAQEBAKjqq8Rdw/sjuMPVl+M8TfwnTsf1onGHsX3cl4nI23C6ti/DXbdWHUz+ZxxPsV063ioiX4U73P0dQC4i360Ou+V1PiAiX4ozQn0U7sD4QTgyqA74/jTOgKaNO+D7UhzPs8lwI8h3Ai50iLvlIiAgYLfw7qs+AlwDPFVVt/T4ETAeIvLjOGXQh1X1wYdNTxMiUk2Y13oDlwsCF3q9BQQEBAQcLrwr76uBx3sjx0OHiHwJzsvax7cyfNzHPK8BbgFQ1W0Zep4viMgDcPxkBsyrajjNGhAQEBAQEHBeISI34E6wvmk7h1fOJ7yC5YVcgLQFBAQEBAQEXHgQkV/AHWR6gap+b+Pdgel5gnwn4LARPH4EBOwB3m3Vz+CMPu7EucEKmAIR+Q2cNedrVPVO/+xSnFutn/DBnndI5F2wCPUWEBAQEHAvw7P99+sOlYoLBz/iv/8pCAUCAgICAgICAkYQkRR4hv8ZeMeAgICAgICA7eBy/73J69oBI8h3Ag4VB+EaJyDgXg8R+Vx/evYczvAD4CdUtXd4VF00eDTuBMkdItLzd8/djnOnFeHciL3gEOm7UBHqLSAgICDgooGIRCLylyLyFSJypPb8ESLylzh36Dnw24dG5HmGiLxYRL5JRI7Xnl0rIr8HPM0/CkacAQEBAQEBAfc5iMhVnlf6In+dLSJiROTRuCt3Pg1Yxnn9CAgICAgICAiYCBF5GPDV/ue/HUD6Qb4TcMEiePwICNgd2jiX6Tnu3rPfuFiueBGRZwHP2kkcVb10H0n4ReCbcXenXQrM4awu34G7d+6v9jGvexNCvQUEBAQEXEwQ4Bv9BxFZwe09Zvx7C3y/qv7n4ZB3KPhS4AYAEVnH1cF87f0vqOprD4GugICAgICAgIDDRorjk24AEJElnOyt7d/3gSdXHlADAgICAgICApoQkUcAbwaO+kfvBw5CzhLkOwEXLILhR0DALqCqN+MUGhcj5oBLDitzVX0N8JrDyn8vUNVDa/OLud4CAgICAu6TKHHXkX057oTmKZyHqo8D/wQ8X1XfdXjkHQqeDTwB+EwcLzYD3Ab8K/B7qvqGQ6QtICAgICAgIOAwcRvww8CXAQ
EwdQU5ZYEqlQUpEGho5EIg7MIUJm5iAltDaAB6Wo6gqU4GO2xyZJhL4bVqIMntDK4EURiXmJSdlWWzU1KUV2uw191xFcIGVN5tGGKSmQiuZ9yvYZhsCdvB6UaxNRUg5yUblvU8zRYxHwMeFConOefefYtT19F4gD5iPbUtToylGDYyeqDPbI9tSyRxZEK0JhUy3yDiSyRAyD1HfxXAzfF9/Z6LCEU86Q4u4p4X7FcTjeMtiS8++KJIk0kZt5n96n3+T0TQELb6Ti2c5/UObcw0Afxs/HvFMkhXu6EEE3qLNP6LttZlkTjZaGUK2ge41qng43qcFNk1nFgxJygKzCJwWz56j2r7QCfj0AAQAASURBVPDbO1h+dwQeFN9Pkch4l3PyMRDgoalqWKckvvEZD1zNjzFeTMtIo625OP4PtpzH7y3XvXmuPQAG8ncHBowH5usBdDJ9NiNgZfRtjRXl4fNLqCKJNXGUP7DDDf8/5ko6ADUeBqy8zV7yho2k1HKCsDku+eG9w3MfXf7Hz+50ufniaX0PagjlhXGw/B/6t4ATcql5TyxjHeN4x7ROh8ym7yqZ+Aqn/VHKSeNwG99jg1TaeNtRn4iMVX/DtjXaw0o/TMAmpT4HV8fxO5WH42QEJbwpe13KHu9/UIfyqRr9yZPO4RQg6tF16uEDGfOR8d/IlPXmsEIcz+JDPcc8ZJLt4BdWks9+BkMXd2iJaNtkH6T3iKlJ6g4JgZjKGBSUqtH1GrW/Icwucoai89mu6+j2X2ObM5StcDFirMKl7H9NadpHJ+bKN4CoDn36t2cf+60AfriY6ALEpAa6zxylkYELQhBFSIGU8md3PvLMGp5axWd9yIdWEXwSNle32PWc+XxGFRXN/h51uSS4nhR6QowQA41ROKdwPtGFyG3nQYRVpel9xLc9SmlEKSqjCJKNGUaBHaU8PFoLRudNdT/QaEafD9K1tVR1Q4h+ZGgQBaO27fBvmS29T/SdwzlP6HbUdU1Iiuh6koXEHExmCTF1k2knncOHTDc6W61QqkI+vSWGQN950hAVMo5hyctDZvLIf5dXqRyhvQquJTHaXnKk/rhoTmaxTJfKhyke3o45H2GQnRkan+QBOHJ80U32RnHykgeIIeJcd1hJEoyoUqWGjU3CmHrQlp1xbWdsd3uq6HnWCJ/vekRbGJwEvt1n9gVRVE1DNZsTfOB65zjverLOKBitOegU61FqJ8VBR4+ESnBeZxqjdWPQyRGzWgZawMYs+6JscWxGvFeECE/OVnz83e9T13P63mEri9IKW8+IKhu1RCt82+J6h8TI3AhnFRgjSIrs7u5ZLmfogT6za/f0XY8iRzEJDE6qPMaj9yQyJWzWGg6IZqR5TSmhUmaGcG4AjTQ1nWuRuuHTr275B3/yB/kVoTRIibAXWu/xEULKhjOjFF3f8tPXL/H2Y0JIVNWc7XaH296ifI60+KAJdJ//OX/80ZL1732Hfdtxf3fHM5UIYUWKkf7VS25vr7mzFpRBz+YslguePHvO8sWKmRFMitTeYXyFD5HKGnZ3N4hW9Ps7tvdbbm43uFg2ubfcvL5ivVygz1Y4pdns9tz6bNDadI6fXm0QIi549n3i0+uOTTKcrcDuWtoEN5s7bP2as4sNz58/pdEZdNF1niCe9dk5tdX0+w2SbDYoCJCyHvI42oNjs9ny9MU5JQYj+p5971lqGUFciTiyDnW9584pvnu2wmjBpWFCKz2sQYIPEaU0t3d7Zs2cLgSC6/HG4rxDNEgK9F2H1DWzxQJbWRLCpvcs1yuq5YLrL69yXEkc5nrZAKe8jpc3fqQ43iebt7+bvqX36deQpo7V4mCcbqqMMfR9T9u2OOew1o7O0wL8gIMTdwrOGKOWjjZfU6aF43vKzzFbRdnETZk+pvUujtXyb4nqL87UKVgjpTQCPwr7w7QdU+aR8uO9HwEEU2aAY9DHtK6n+nrapimrAhwcyMeMIlOwSmEvKcCWwlxxd3dH27bM53OapmE2m71RxrTcKfCiPPfp85n+e2qcHKfH2lvKOQU2KQejKZjo+JrHyjsu+xgIcjBsZLCMMYamabi9veX+/p67uzuePXuG1pqmaR6w1Zx6ntPvp207Pui9a52P8z/+/RjwMf3ubf3/bUEf3wZM8hjI5rHD7/Ha8Fg9yjXH8+YY9DG97tumMqcLwGLanim4Ytreb0rHwJXpGC4sHcdAkr7vR4DcdD0qQDtgBII458b18Zit6L8E4OPblnFq/P+qc+J9ep/ep19/yuuORiYGbgBJiZiK4b+8swfZXIlAyKeamNlSRwN9TIQYSSlHuWczQz6b5sj/smfzxOjxoSfGgNJCJQYtw94gZsqPGHLATxhAHTGEDLjwPts6YswO95BwXU9KOYhEAElZBlUPwSUpgZIMYNcD2MWHLIMhSmHrGivQD2ygxhpsVaNFj/K+4ztSKVKAPgRUTAPDCVlWxUOMnlmdQRpWK3rnabuWyliSc/i+RURzcbYm9I7r61tSEu73WxbrBfNZzX4fM4BFKUxtCcNZt7E1N11H7wWtajabPcrWpBCIzuNdwLlI6z1RCc73bHdbrFY4F7i/v2fezNhvW77++musqWjbPbPFPA+K3LEE14EKzOdNlmxuO9aLOYqINjlAZ3N7l+1lATof6ENmcOhcfs6XF2cYI/RElCgiCdEVbdvynQ9f8OzpBaKEz3/+S0CoreH+7pb9vqWZ1VxdXzGfz7AKrl6+5OZ+i2jDiw+ecX13S9e2aKWYNQ3r1QLX7ri527LbdXxUNZkdw+RAlZubW4yC9XLBan2GInF3e0e739Noy+yiZlFVfBUTZ6sVL56cI7HlyZMLVoslPkFIkcuLNVocSrKrvKkbRAm2ttSzLLVzd9cRfMKezek6aNvI0ydLUIrb7Y7z9ZLVYolzmfniftthrEK6PU1ds1ws815eFJ3z+M0e0YbKVFgDs1pRaY1OkdpoQtT4BG2ImBCISqOMZdZYmhQIfcxgLjXY5DyEmLB6ALvqYW+VwLlI5wNt74khYbXGKIMiM7yklPAxDlJLZT+jMiBF9OBQycFeIXog5DJjIiZH0jrPUwXGaKrKEopck87MEy70pKhQkkgx25KJgSg5AEcQqrpGkqLd7dlttgTfQwCJcjBYJpXtZyqPvyz4EzKwLIvh5nUqZesuDNTiSkgx26+DzyCqNgwSL31P2zt6F7LNmyzrPK51xUAixe01AOpUAXbkS5QIGSiS/0uS7S2jyyZN9tQTZ9lhn64mvp4DG3RxZE2/G12BcqgbA9AjP6+DxM379D79JqZfbWRPwQBTT8vkiomj5thpGhN0fY92LULCph4fParKIDohItUlcfspsQoYrfHRE0WjUl5PIaFi9vX0JLRd4OpL3PaeWqmJo+jgFC/1On0+zn6x6cdylMfD3xVvb/e3S9/GOftYYM3w28m/HwIA3mz7w3aXlTON/TbmM4B9RCKpgPGG9XmytJ5uxyNfvA2Mc+qM/Fghp/tQTvzAm8/uTftrvv0QCHqMOjiMpelnE1feG473PGZOAWAe1uHQU5Pjx8M
2yeSedPhzfHdNr53mPrl1Oq7eBoIqQKHE4T1a6lZ8lWPZUsbS4+/Nx2wgBYRB8d8mzQiWeYdp9RiI6PDZxM87srXIpD/S+BmTNQthANKkQy7ysM8BVDJEA8ntkdkqZ5E1NhEi67pit7lHd1+jZE40M6IRlG6Isx7V7wjNIvvFVcTdZfkrqVcInpAslakGP3qTzw2jP/nx9F8z8Om3Avhx3wb+/MstZylSq0xp5VKOPjcm0cXENiXmBlLIhwwXHB80lu+g+KJzdNuWRT1Dpw6dEruf/oLF/RXmyQW3MaC8p7GKgAVxGKPxyVEJWC0YlcUKfPD0+5Z2t4UUR43JWimMycCUwT2cjccqHzBC9COFfQwxR5pohXOelHxmGBkYKJKQI/5Tnqo+ZeCLItG3fZa1WJyDaIQcva91Zu3I6E5B2Zq6meH9jr7bsd113N3ck5pZjn7xnpgiKQQKxZ+Mq1tGPhVtWCWCGqgQ02RBFBnkcIbDQll+oSyxMv6WgRfTpzqgzMoKXJg+htvSAPDJxpVyz3BteuN1cqjT8HeOpukpJ58kaWyn0pqQIKVMS6l9oKkMmkDo9hg05zrw4tmKn3rNbrclxixNoQRQgrGZWrJ3PT/7yX/mrLH5gFhZbD3P8j+D7m5lDdZounaHJMmUr0oTlKAqjbWa2CV8FEICHxK9y4wfSg06WAhaC421/N7vfo+nz59RzRcD5eQWa0yWsFEakYS1DUoU3X5PFEVIoGyFripEVyhFBiENTCshCmJyFJWkEpmQVTp9CAMhZMpaoinLHzgfiBrQglYK0Zb5+Qd8/0cf8+d/9mf4lPWM73d7lLasVnM65wc6J4VOCS/5SKhVNvoZBfOmoY8VNE9IUUjBobSw3W75zu/+Ac1szu2rv2b2QvOds8T//p/9CY2tSMHlpx0jMQRiDETXZSOiMnRB8fq+z5FTtkJpi61nrGYCbvhcW0xV4/p5Brh0S/7ed5/Q7ne0XU/wgdu7DX/52SvuXeLm7o5d7/E+EJyjMYpuv+XHP99BSoSY8EnxV1ctfTPnR82M4PZ8fn9PZTOSVOmKermmqQfHtrZY2/CdP/wHPHn+lBiu2d7esKxrEgofMsFnZfL4jTHSdj2VzSClGPN8clGobNZ7NqYCEtpYQuhIfpB7QvBdT9v29G2PrXNUPyHQ7jsqu8DEhA+ZdosQ2e56dm2L1dlY0rYOpVcoZQnOsd9seHktLM9m/Ph//XP6rz7nQ5VIKbMdFVYhHxPboR5lXclGGo2M8/y/3sv1ffrbTaccl9O/RYS+77m9vaXve+q6Hh2yxVlaHJhN07BarVBKjSCGU6CP8lMcoMefAw+coMX5OQUslPumwIjp/YXhYcqe4Zxjt9ux3+9Zr9dYa99g1yh5l3KKk7gAPqblHDOPnPr3bajjqeP8GBAxdUpPD43Telhrs4b6dsvPfvYzNpsNH3zwAS9evGCxWLzVuX9c1yno4THgxduAB4+laV8dt28K2HgMRPBNeU/7cZrPtJxyjYjw+vVrrq6u2O12pJRYrVZ5LzcAmh4DfUxBCNPv3ma0+KZnf9yOx9r3tj45BRB52zXHdXvMGPFN5Z0CZJwChky/eywd99NjgJrp5992DIYQMn35wJgznb9t2z5g1/g2Y/C4PlNg2FSiqYyfAgYRkXHMlblR1roi3TQFhBznczy2/2ult9Xj1HN8D/54n96n/7pJaQ2T9TrP4QiR7KxMQ+R6FERNpL4YmCZTIg1SMYlIlJBPCDFm4w/kM2NKWbYlhixlmtIAHBn2bSqfQ0IIiBeyQoLgQpZRESB4T9/1eOcgpnwuSUJ0GRxitM0sqSKIyTIu5W+lVI7gzI3O4a0ihOBwXYetK+oB9Nm1Lb53WbrEmkM/yHD2HxwBzjuC8wTfZVZQUfSux/uAkGiqOUZpgnjutls6pWkaixGFSwFJMG9q2qbCBU/fdyQii0U+6xqjEaVpZg3eO1IMmLqh0hUpgtEVbXuLe3VNdJ7tZgM608STNNbUxBC5u7mjQuO9p21bYoTPv/yS69tblrMZvuuomxm1sVhbEUPg5vaW8/Wa9dkal+6ogFlV4ckMLbe3G9zes585fIwYW+GDZzabU2nN73z/+8wqw367IabEYj7n7n5P3+dz7Mcfz6iUcH39mt1ux4sXL3DO8dWrlyQRbvZbvvzlV/zgu5/w6uXXvLq6xofIiw8/YrfvWa2WA4hXs5zVeN9yffea+9tb1mfndM6xmDfUxrDb79A6g7rP1mdcni/p9jus0awXFbW54OLsnC9f37KsDN/9+AWSHLPVkmaxygCM/Z7GaprGQHCcLRYYZdjsdviUqOsGH+CXX35N3/WcLxaczWq6/Z7droNn5/TekxDmswaVUpYs0ppd12KpcL7nySWsVgu2naOaNdy7Ddu2RVeGWR1YJsvq8gxiZL9vEEm8uk3se0cU6FRi30ZaB/N5RW0UQbvMHqYVRCGlkGWHFKxWM1Lo6dp9lp/WFqWy5LQ1llmlMUoRUyRGUDrbAWKIhJCDriQN4S2SUDoNtiOBCGaQSUoiA4NQtmHWWjObzTDa5LkmClQORNHKovVwDlP5hsxKAaSIrRqsrfEust+2uN5DDBn4ERQpZPmlLJsiiIlDcNPg2XiQ0kNbpwJ0Ikreo7gYab1j37fs+45u3+G7FnxApYQeABoysHoIub1xsJQcQCElihoQUOgcuDX8l2VfJgE2qtxXjMDH+8uHdO0iKjt0ZAi6GNap3OTsnBruJlGCeop9Vx1+f5/ep9/Q9PagiuKtyP6J7Hs5mhAjzQDjveN8Hb3EQ7QoEa0COglWB0JqiGKown1eC/QS0CTl0LMnyP5r4uIjtOoJ5EBYFSNBIsEkJGl06AjXL6nDK0SbXMeDk2jYvx38Og9BEcftPNU/b3xydO1D+8a7AvnTwGj0GCBiev/bwR7fnE5de9jblnoPMoalXwbg4al6j6umlJdDevCMf5V06mz8hq2JvMc+gcMY83h4zzGzcMnllG1p+tzU0TXT+4/vO92OhzaZkt/bgITH43Fy5TiP3rwiDfQeIoV7cPhORjfpI+lNxo9D/8mD+ZLfjQfQh3D8vI7q+6At35zGvNL4vyGpoQGT9YeHdX7XvEuQf/HZ5LE+FDvx8Wb8ycN9xNgcmfT/UfFJIkprVKpIUhOTQylHSpZ1Y5jrgEiFap5nyT23w/QZ9Kt0gzF3xN4gxrDfbVCmolq8wKWUpf1iDmBPbU+qmoE58jQA7UG733i2J677do/rndNvBfAjkth0ESMJNBiJ9CHj4UKK9MCdGKKHJxI5s5qA8FnrMErxrDHMdjvCz/4a5T2td9TnZ2yeXuK3tyxtwq4WSPBEH9BKMovH4NgMEfYhEBlACALG5GhOJcOmd6AsSrFswTNtYKbWi2jJG/WudRngofQw2QL5bJMnn4gaNsZZgTaliEHR6EiSiHcdSmRw2MsgiWCyUYQEKjv5h5N61rqMgdb13Hz9JfXqHEmJ4HtSyrTtMSVIA+0pMq7CeWEqvzNMzmHQDxEhOfI461sedLpkdPAWuZcD6oyCcXvwf8
lf8IBlZ7rhSWXByHI840Ilh3unK7P3nuDDmE85nJAyMvb11TVrWxH6DkyDrS3LVUOIkIKmMpof/s5HfPrzW7Q2xNiTYsBqjRsij4KPECMXy4bVvMZWFfPlGeisP7V3LbHvWJ+dc7/doUTQDJFAIlxWmo+//7ucsaH78p4NNT56ZoNOsFWaKAyH4ESlhI9fXPKjP/ojlK3xIaBMhXSOcjDLRn8HBJSxqKrGq4pf3PZ8uonMbc8HuuKpEWaNBTsjoVBGHRaoYWzFmPWRQ++JdaYkT9HTtZ7oE+39BrUyKK0GYI3gzYp/95ef8f/96ec0dcXvvljzj/67f8jfD56dT7mcMBx+leSylSJjCgSlDfV8jjKW9bzi9V2PDwFjKxRgzl8QtcG1LZ/+xS/4+Ls/YLZ+gnT3mfJTqcxiY4UkBqgRMiilDgpXM1KaK23QWvPkow+Q3de4riPZVTZOyhoJHaLPef7d70L0MMyb7e0N3f/nf+EvfvYyU0yFDGiIcfgZV4DDeMvMFlk8SmsFweP6fJVWPfe7OzZS5B8qfNJ8/dWXLC/OUK7Dimcxn6O1wdRzZvMlTVMPmteBq92e76fIdt9hTUCSZ+8Tvtuz28BifY5Wh82H9z3KVPh+zy4MG8EUicmTogEl9M5TNxXNrKJLmiSapsrrw7331HOF7z27fU/SQgzZ4X6xnHFWabre8dOf/Fu0OD46z9ubHA2X15sQIq0ftg1lYzIAtOIQrfzeWfObn4rjv7BflL+dc2y3W16+fMnLly9pmmZ0SHrvqaqKqqpomoYPPviApmnQWo8sG/AmAGIK/JjKpRQHaLm2sHUUgMbUqVocoSUdO4eLvEth9ADYbDa8fPmSr7/+mk8++YTLy0suLi6ANw+hpY5lnSplT0EhwAgMedvB7puc69P8ju+fypsA7Pf7B87h3W7HF198wb/9t/+Wm5sb/uiP/ghjDPP5/EFbvqkODylM3w5YeJuD9/i+qYN9Wu6p/jmWfpmWdyr/txkhSrlhsob1fc9Pf/pTfvrTn/L111+jlOL3fu/3ODs7eyvwYzofpp8fl/8YWOYYfHJcz8f6raQy7o6d6N8GTPHYNe/y2ak0LX/K6DLtr8KKc9yeb1PP4/45fhbvWl/vPXVdM5vNRkmlEALOOW5ubmiahrquqev6G/t2WvYx8Kzv+wegtaqqMMY8YPHYbrfj+jiVcKmqaly3CvADDv07ZUKa9smvO/2q+R6vA8dj/316n96n/9ppsFeUYDPJUrchPATXlu+ImdUjAmLyWiUhQIwIKjNiDsbyxBARHwIkGeRHs33Ih0AIDlJAke0cSgRiwnmfpXlVRFKW6S2spa4PWfY3JAgRnwYmJRex2lDVg+QEgjEWYy2isidEGY3RJsutDGdzGVgX+65F7Q1GG0xjkaYhOE/f96gYRvnfRBoYEg7sTF3Xsdnc0sxq6oGBres7dvs9kg7SYW3b0ncdi/mcs9UaHwKu71Ea5osaHw3eZ7bXqmqIMYMBlRHSbgfk6F+tFXVd0XYtaE1TW3751Uv225YUEheXZ0TvCS5h1YzGzml3HTfplvlqTlPP6GLkft/S9h5FByFwe38Pork4O+fr169Iori4fEqtNPvuNWdn5yDCvFlgjOX1V58hWmPnDbNZg+AheS4uVliBZVNzf3tLTLA+f4K2FZeXDV/+xV8xa2pmVc3N9TXXr6+prMJoxcuvX7G537I+v+AXX77i1e2e78RE1+/ofeTp0yfstrecn62ZGfji6pZnz54jItzf73j96o712TnXNzuotlw+Oce7HW3r8D7w9PKSy/M1+Ja9bzlbN8yXhqpq8H1i37ZUzQueP12T3J6z8xW2qumdZ7fbZmmWFDPAqDLYqmJ/f4tWFlTNX315zVf3W9bzitVSM2uEq03H7a1ntb1ku7mj6zou12t0DOz2PaEPpBjp+kDbeULWGgEJXFyuud21uPt7uk6jo0dXKxa1oNB88GTBxfmaxWzGL758zW6/J1WaPsBm52lmiZQE74GUmQGRiDEKFyAEz3w+Z7GYZ/kXynnGo1SWQTKDEpQPgSSgsahBLjn5SFJCCiGDPgxoU41jXhV7kkmkMNjcBbQWqtoyn89zUEDwox03kbBVBnTJYHM9gCUU1cxSmZrgfGb5iVDpCjGWFBKElNeogSVIyN2p1LAfHbaLksreZGAJyuouJCWj5oEn0UXPrmvZdhvarmXfZkagMDCEaDWAOCRlmxKgBgpz0iAFBbktaWA/kcIcVLwr2TGj0AeP0mSLdHDbxMF5OQTMFQusHPg74lA+gwM4d12xbZf9MwcD8pAT8SHl/fv0Pv0mpjdtH8MvaTofpg7yY0/hxC5PGpg7Ht5WWCFCHNYeMbhQEasVUeZoZaj7V3RmhaqeEK1F97dEtyPVDSpAkjDYXiuSa6F7RQh7ZrEjnf0uLv7V6PSG48CNaTuO21D8P6dmu/AQzPBmn8hk3XjXc9wpwM27prdf/9AGcvzZcR7Tuh8caBNpkweL7oQpafTBHfedPBwah9oc1Xuy7vK4fajkdSxj8lgfPLQBvVmH09cfj+eHfXjI89AfIlObS7ENPm5vK/kexudRzVLigQwOB1/iFO0hqXyXHtT0uGUyqecbtZCJL+ON7477YvquTUcz5uG30zsmLTuR//BNOtXnA8vYFHwzvKe/yezyhj0lpUnXPZyjDwKxRMhycg8ZRh6Ofhn3apJKfQ6MLxGFTQ6SgxgQSVQp4rThrN7wejND4m2WelFCaub4pJDgkL6lIyLxmra11NoQ60uCREzSxKQJEhBjILZEUj7/HbX5m9Ljfc/JMfk3Tb8VwA+ADtgmwaS80e5jImGJkJ2HRtO7yEJgZTPtZoqRvQ/sYkDmNd/9+ANCPceGwKzW9Lc75nHP+WqOdy3ddpMHVoJaCUHlzXJjYI7QR8U+RFwfsjNZ6UGDNTM1mBQygoh8iAgp+4u1SoholEr0LmuhitIQ86FERKPLYl8mDjHLnqSEloQWoY+w2/eZklPbjN8eDhIZ3CDZ8Y1GV3OUbQiuRwnUxhJ6j+87VEp4lw8/rutytEyZvOMgPSyBcWAjyWj7A5q7MHyUF9aBwCpP6zS8xArycQRADQeBQntV3mcjmGPILa8EUKhTp6vF1LFeqn6oiwzMKj6XkMrrpxyO8qHoetsOurkKrbP+Z0ATgsJaDU1NjIm6rlFKmDczAgm/32Xjym5L8J7f/8P/jvWTpxhbodVAie16RCLL5YLbbcv29opKEqumZm4VXQw8OVvxj/77f8an/7//F/dS89e9Yds5/t6iozYua4AmSEkhKtBUmsW8pqrA7e9JvkUZCwKuaxFlCcGRQoCo6fdbQu+ZVxWfrGv2rx2VGJoUMMpgrcbg+ezLO1qv2XrBhyybI1qwSqgsGKtROj9/P0gi+T4OgIoKJSbTbErk+qvP+eInP+WHH3/AbD5jfXHBkx/+Ef7l59x//gvmdU0/PKOEkIKnc4GXe8+9T2Ay48WZaXliIj9PmWKyaRqMFv7yT/8X/vCf/HPqxYrf/94PQRmq2YJkM+2TQhH6HdH1x
OTJFgBBV3NCUNRY4uaW4FpiVBgtmNSh6prU3eD39ySzws7XhBQQSrSWJoWA71p832bdad8RvcsgqAgpRFoXslaYlgHQBT7mKCUrgsSYqUCBhBoBW6PzOWQgRwoOd/+Ku/YaYsQqxc3wUgpo0BprLT4KohVf3G759NOXnHe3NE2FTZFX2w798mt0vKQLWYIoxSwZ0XvPfLlCS6bL9WTQhkQy7WjUtH2gObNorbHJEoIQ9SyP8RhYzKtBI7kjzHKdq7rm7Pycqonce8XzizP87jVWZYNTMTwkMuNHGzKwTwEFgZpiBFtQzu/NA7/JaQq+KECHEoHuvefm5obPP/+cv/zLv+Ty8nIEYbRt+4AN4bPPPuPq6ornz59zeXn5ALgAh41ZcWIWJ2cBahRHcYl8L6whBYwydX4fs36UjV7JzxhDXdcjK8b19TU///nP+c//+T/zs5/9jH/4D/8hP/rRjzg/P38gBTMFdpR+mP5d2nMs5TCVppmCHI4dKW8DYEyduoUVoLSngE+KA1lEqOua3W5H27b87Gc/4+bmhu9973sopVgsFuz3+zcAJaXMKdPJtP+m35f2lDQdF9PPTv07vX4KjjnFnlFV1QNAz3RcHvfRY2VOgUDT51YM0uW5/PKXv+Qv/uIv+Oyzz/iTP/mTB5JFpwAmx5+fenanwCKn0mPgjWPQxKm2nmr7N5X76zW4nE7Tvi31OjUvT6XjuTFt2/EYO37mU8DFNyURYTaboZRit9vx4x//mLOzM87Ozliv19R1TVVVDwBm75KOwS/ee5xzY14FmPXVV18RQqBpGj7++ONxTBbgW2lHCIE///M/5+rqir7v+dGPfsRHH31E0zTEGOm6bgR/HAOR/mulvy3wyfv0Pr1PfztpjHSnvJM0WqvJOp6j+mMMJBFQCT0weCBZsgGRIfAkjgbLmAZ52phIZKmXEEJmo4iRGBxCwigZnLwZGOJdDqrQklkFUohUlSGFNLKzpsHmFELEuwzCrYwd6j/UezBWZ/vGAPzXBlMZiJn5MYkiJKHzgdS2JISZzKmNRVuLjx7ve7w7vG/GPYLKe67oA9utZrvdQhqkMbSl73qEwh4lLJdL7iJstjuUMiiBvuvyedcYrK6yBE6ChEaUIXGQI+s7j1W574wRxOW+W6+WXN9c8/KrVzifmC2aLNPCkro2nK+WfLnbc7vdYxYVq/M13gWaqsHrlsoaggivXr+mqWcolZlVNrs9SoR6NsMqYbVYoCSiJOK6HW3fs1qvqCqDNYbtZofERFNV+G5P78O4X22amvm8oW1baqu4+PgjatNwc39HJL+Pd5v7ETwefM9+c8+Ty0s6FyEo2i6SkqHvAqbS/OzTn6JFoxQ4H2h3PcvFGT//6mu+fnnFP/rwO0QUr27viWiePb3kxZMzYnRsvUNXwuVsjfOB6+t7qtmapqm4XM45W81we8VqviA4z/3mnvvNPaRAUzecXZzRx0TVNCzXF6gIfddzfX2DQvN0tWA1yzbFEAJaVUSX2F7vWK5mnC2XRN/jui19H7A6snMd3mdgU7vvUSjmdU0liafrBc8v1gTnURLx3lFby2xWcd6ssLqi7TpeJ8+TizW1Nez6nm3bD0EcEYkJ8RlEpY1FvMe5QIoJqyuUaXF9lmEK3g8MugwSTYMjUzIThihB0ISQCMERU0IZS2OqbMPSGcQUdWacqFSWRkZ0DpSxFfP5igj0PhAlZIljETLASZFCzEyxqYAdNFVd09QNrvd07R7n+0zQYSwhJYzOzKshOGJUmQEkATLIZg5rnUg6AD+kfCY5kFtPAhh9pHOBfefYtT37ts+suqFIXwmogeUjDTGpKQ5m0mxfjYOEMqmcYQbjaCQbd5CBiWhYo2KxsU7OTqNHMDMAK1GkNEbbjcA4UOiU4SBJTc9eo9s2A1JksFEXWzBpqMv79D79JqcC2piCBA6/HxzzU6mK4Sw+fn9gCHjgNCfBKAcRc65ujxmsqSbskO0XELcoP8OZGulfYdtrwvIFfv4Evbmitx9iRYjKEFyH2n2J0EE1Yy4av/iIWC1JQYgyMPKOdYrA4axc1pp8nj6AG95+Vj4GQRzYB77pdHdszxjxDCNzSgTM4IwvzBsP739bAM+7lv14GlfB7CifrNOPm7UnTvmpNIYcgAaHUOlhvKAYo64pzi41YkdG4F2K5YlMrhOmHT0doe/e/mKLeKxRj9uk4CGI5OADlAfX5Wf68J1xeN6l7NM1nz5ndeKzUziN4kdE3sw1PfL7MYDiNPDlTdudTBpTfJXFC3oafFD6Zlg3JE1YMh7OucO9h+fzAHKZiif30JrxjDa8yKVsHx5MnvKSf3Mw5zrHcZymoR0jOyMHZptpy1TJLpX65zprLK1XzKoWqzd01OgUWJ3B3X5GIKLzVoWoBZ00KfUoEzBKQ2zY7a4htnhlIOxG+U2VBpnRpJE0eKJGEPHbgwofS98GMPKrpt8O4MdAazczAJEkelwCQsyOwmwIjcRhY1mpjKpOKPoAISWkaqgXK4J31IsZc2r09hptFCEMGoaSGJivMjI7JXwiS1lIIibYtJ7ddpevAwQ1SA5lVcfMXpBQoqkyxQO9z0wA277n1dUdseuxRhFiHEAUiSiaELIhQiThU4QkREmElKVA/sOf/4xtrDlfz/Gi8SkSlQbdEOZrcD2/3zu8c3lDIZoQclkEl5HpBIJ3VFVN6N20oweohkyXgBw9A0M9ZcCNHQh+CgKjTFVdEB7jRuDEIy0rHIfFburoFcllqmExPVo6KRuDUQYmJZIqdRrakgqLyvCUREGKVFrxyWrOxUplakiVW1o3NS4qnBeUEl6saz5YWbb7BewVy/NLdvtdNoI7j4RAiIHd7p7Zbk5lLaIVbdtSWU1VN9xu9ty/fo3yPf/DH/6AP/jBd/kPf/ELfvLFa9L6IxbPPyEAX7qKz252RCo+R/jYZHpaESFEmM0qfue7z4FAf3+N8TuCrRCVGTeC94NzywxRTplppqo0lYXlzLCaKWaNZVnlDelu12fQQdex2TpeX2/58t6hHDRk8JSyCm0guH4A0US0tdjgqWdn7PBZXkgLSWlC6/nB7/8R/+O//D/wH//jf+Cnn37O//Snv+DjlSH6xOyjD9h8+RkupSHyQbHtA1/tIn0UVkpjypncGBJZtsfYipvr1/TbGwiOSiuWZyt8TJiqIaphnKUEoSc6NxktZRyrHBUWs7xRCh7VmHE0pxggORBQxhK2L8Fc5vUnZiBZij5voFLMxsnAIMuUo7lqrUgh6x4nnV8ArU+4BLUIKpFphRVoleWUjCj6OBgQUyIEn2VUiOCyUTHXOb+MQoqEvqPdQ4iB3iU2+57PP/sCc6OojEIJvLrfwS8/x21uqesZVZWj0kJIzOaGZ6slTVNhJMvUVJWG4YAvRJzzLBbNsE5pfPLoELFVTfAOa2dUg3NbGYtoTRSh63v2vaA+/n3+T//H/57/5//1/5yJRkMc14vZfMbdXcc+DhsXpbC6IoQOUtHyhuOZ/z795iSRHGUImd1j6oAvYJC+72nbltlsxieffML5+TmLxQKAruu4
ubnhq6++4vXr14hkSZWzs7PRCHwqTQEVU0mVwtBRygUGBp4D+KPc9+ahU0YAQHGMFjDJarXi4uKCZ8+ecXt7y2KxGMCEB7aR0h/HwI9p9MDUmV1ADdNI2WPH/Cnn9ykH/RRscSwhM3UMa61HJpTSJ8VJXkAzWuuReeC4XtOypuCPYyDNtA3HeRy36ZRjfpqmfXvch+V5FwaZY+mdqYP7bRv/Uxv96ZgoIJBSvjGGi4sLzs7OqOua+/t7ZrPZyXxOAUKOv38YOXG6TlMQyfE4eRuLzXEfn/p7Wsap39923bt8flyfaR2m4IdjYNa3PXwdA2GOP5+O1W9jJDLGcHNzwy9+8Qv+9b/+13zve9/jD//wD/kH/+AfMJ/PRwmYKZjtXeo6ZTYqDsL5fD6ye/z4xz/mT//0T2nblg8//JB/+S//5RvPubTp1atX/Jt/82/4T//pP3F1dcW/+lf/in/xL/4FT548GUEkx/f8XQGAlHQM5JnOm18FWPQ+vU/v0683yaAjDwXI6ifrazbQi5S1dpCBFBkkabOkKEkR8cRUnJigEigMSctgE/LDHmSQe4kZbC9KkKQy62nMhvAQApFEcGGInM/sjjF4XN+TYkIlIcQwSPIO7JrjO14RvCOliAo5ECgmQauAsTYDNpIQYg7JVcO5tncdso2o2ZymrlHaknzCtV2WSXEOLYKzFbqyoDJDilIK13p2ux2L+YzKaILLjIsBSFoxqxtkpfj69Ws22x21NWitiBLRKjtiq3qJEkPv/WCj8mhtIGp2u1tIAxWyVrnNUbCmylKF1Wt635HI7Aw5iDBS1dD7Pc4lzvycWbPi6tUduJ5KhPV8Ru8Dr+9uUVqxXsy5nTXcbnd0bUsKjrrSED2L1Zyu27PvWlL0WGNojMUPbBWrxRkqaebzOVVd0bYdxqgMIlCK29s7hERtK3b7PdvdjvXZGbPKcHt1w3a7Zb0+Y9/1rBYzzi/WrFZn3F5f8+R8yWJuqJtzfvLTT5EkPDmf8/VXX7M+g6qqud/s+PyLLzi/fEJVaXa7HV0fWMxmJGC3a9E6R1jO50v6zrPbe7RdocSigEVdQYSA5ur1HVp22EoRYsdyOSekBLoiuj1WG85WK1Jw7PYd6xo+ulhwuZgP8AVNJYp5ZZhXmsXM0jSWOFDLV9ZitcmRjtt2iB7XeJfnZfQeFRNPn1xyvmh4+fIlCkuIsO1aIsLKGi7OFnzw5AJN4smTc3zfs9lu0VqIYY4xQxCOUczrBuwM3ef51rsMpgpRcD6QoqfSCjMwjAoJUw3BZQkgs3vEEIgp4FNCaU2lNMYotNLjHFSDXVhpNYxbjbU1tqqp64rOhbzu6El0dUzEGIYzIHlNSEJdWarK5nm42+D7fgiIkRzNp3RmSpaEMiYDOzSDjO3UKVfcLIMjDjWyjSgtQ7R1IriE846u62i7jq5zeD9IcccjB5UAkuXH8/EmkiRTt+vBUVPsoMWpLEplSYdisU7FRjvs54pFPR0cR5nqfjiPyWGvl/MvFuLDbwOGhCgH226hUCkBgblHMqP1W7yf79P79N98kok34uiL/G6VA7hjdLqO55TTfpOH55gygTMozAeP1kLvAtX5d1GVZbsVrK5R9Yog5zgMsd9iwwbRoNwGp2tk9xqTArG5QKs1dF/h6k8IFmwEGef09FwMU4fyo/V7rH8mbc35pQd/DwvYO53dHtoe8r8hRlRKo0e5gAqOwR+/7vRmntO6FSDHw2sOporJmTo7rw7XIIwe/jRCExhppSaPYuDJm7jXy/WDUsCkbtOaFIzIw7p9kx3l9PN/V/vL4+N8yjLxsJxDbY/GzIn5JtN6De+3Ayd7GvEvh+un1z4sNk3/fgDyGPyYEwBH+fvQDW/a9t60952ox1FLT138EPxxIgkD+LT8TOftYf15mAbb2lvqWz4fGzniQQ79MLZmGKd5y5PveaBmlcqYmdRRgBRog6K2T9Fhz6IKNMohyXDj83kIY4jpnsoptASc0pAMPTU63rCezUFmeN9jTGTT36JSIuk5SVuSaKzOUlfKkH2BPG7feqwv/ksFI/12AD8kDVqMkAKEkCdFBnYMBv1hAPnBcaFFsCpT98XBiBC9o7Y1MUH0CWMNvQ+gmgFpnrWEog8oyYoURilCiigEo4VZypTKP/vFL3HbPQOzJ6IGyZeYp6tWWaLDhYiWfEipo+Z+5/irP/8rqrMFyRraPmAI2GqIuBh27SkNsrXD4qeGZeCzX3yJur3l916s2baOEBw7H2mT5fX5D2lE8exjz+1XXxAGNH0Mjhjztj/ETAXmnaOqG1RwZM3J7LwmpbxxhzzvlMpADhi0jxjqOQAyhg2ByDSa9uGrpGiupXHjn///gO5MDmjwkc4sZehH+VXK6kBGlDGAUPLtOQrID2UeHHDDApIOzixN4pPzhh9+/3uEkk+KGK2IGLJnHpazij/+3jN+/mqL1gptLKvFamB+2DObz0kh8p25I8Uts0pDtye0e1T9hLv7Lf3tNR/oxCerJf/4Bz9gbmr+d7/3O/R3d/zy81/yk19+zW0v/Ow+su8dT6xnhrCl4sw4lJD1bpuK5x9/zM3rV9jKYOoKY6s8Zk2VjUkxkpQmtVv67YaUhGaxRFtN0grb1MzmDfPVApcCX72+oQsVH378HV4Ez9d//WO2V6/4ehPZomlq4QmaJAqFAxSqqon7jjhIEgXviHZGOcilKPzpn/2En93s+OJnP0N397jP/4LrmeJ/+B//Kdvddnz5Zh3UvEELMYOqtM6RFrfeELuKmPb0g/Ftu+/45Ps/RBvNbn/PLz9r+eh7PzjoLA9rBSkgeLK0SkaR5EOsx29u8iUJlC19OBzOlaCoULYidXeZUSV5iGUMHeRcQggZoDAMzhDz90LEGkGr8nIQdAjElOk/IZIlrQWt8pqmgUonAongI20XaGaDLIwoVNkAiYwvwlhGacpOcx8TKTj2e3BK8D6y7R31zTVhe49oQcSgrWbfR37v7/2IGDx9MCgVUEZRoYeNnxBDpPOBs8qilSI4ye2Nnl3bZ6MlDTFFepeRktFHXn/9is+/es0yLli/iHz5Vz9mv9vAatDcRpNILNZnyJc3+CQoclRJwYMekOzv0296Kkb/EulenHSF/WMKjDg7O+Ojjz7iyZMnIzjj+voaYwzb7Zb7+3tubm7o+56macb8punYkV4ADdPPpo7CwvwBB4fylDliylgCvAH8KFHyZ2dnfPLJJyil+Oijj1iv12N5BXRwzC5S8p3+Pa3/sYRJAWaUfpuCMabfneqHcv0xOKKkAjKZ/j0FjEzrVNhUHgN+TJ+FMSZTnB+BQabPp5Q1ve8Y3FKAJqXc6bOfMpUcA2WmQJdy/6m+mX5XPi95lTpOHepTYEyp07QNRWqjONGnY6u0d1qPKShoOsZSSjjnHjybMm6OwTvT8V3qU+o27Z9SpzL2ixTNtE9LvtP+LmWfAh6VOT3to2kbSvsLkKHM/1NlTssuwK3j+k2d/0qpgdb+MF+n302lnKbPeTpGpnP6eI5XVfVgvE77sqSu63j9+jX//t//e+7v7zk7O+O
P//iPH8j8lHKma8kx+OkUOKeMk+PxeXd3x5//+Z+z2WzyHmECFJmOk9Luvu+5u7vj9evX3N7e4px7MK6nc/wYrDUFoU2BYeVnOqaPQRnH8+5t61SJ0j4F0JnW8W+aftUD/N91gMm3ade7tuXb9tXf9T56n/4WU5nXOVwdJcMeZ3CYpgFUnw86ihgTagCjJ6UysyLZmZ59nTEDQ5SgKGtyQKJCQsIHj/fd4PyW0dmdz5yD1nqSQXrLo4ejXvQO3/d0vkeSxoolpmyHEoTgIy46RCoMmSFTdAaoZMPUwaEslVA3zeBYTqAUVmemk+ADvetzkIVRWYpGCX1wdN0OQsQZi501iBJ2my37zYa2bfFaYQRUU6EISASlDYoM7GiaikVT03c+sygajWiNNpYYsq0qKcAHdGVIKQNSZrMZt/eW3b4Fgdoa+r4jaI0xFY21nK+X2WZiM7vdrm1RxhBFYWyFdz2hj7h94Ob2Bu96RCfmi4Y5ii9ev2Y2a1guZ6zPVtztMrDgbrPlfrfn2Qcfgmi8i/R9R200KgVkeKdqazl/uqKZ68zOCrg4gEFUTdtlZo31ek3wjqvrG7q+5/J8Tes67nYbnlxcMl+suP78M548uWBWW+oqEUPL2ZNnLJcLfvHZp2xu7litzri6azE7z3x9SXLCp7/8gv2uZb12WGuprWFZV3z18iV1bVj+zvcIrqOubR7PyVFVgUYM13c3ONey2WzQpuLq5o6r6zvmsxlnZ3OePblgtTzHWAtEREW6viV4T2UrjIm8uFiyntUEnwM3rNUsFguUMcyXFdutYdduqSuNURHTVAQXoc8gKKs1ogSfPCF4+q7Dh4i2FXvv2DvHUhl6FyFFzLDPmc1qnlyuESLLxZyr3rFvHT5uCC5yvl4yq80w1kFbBUbo947NLqLFEEKeSykFRAtoofeRAgtTA8uxRVAh0vo9URSiNZXW1EZjtM7ytTHlgBryHNLGEIkobTG2oq5ng43YY1QagFuCj5kZKPqI9yHXBzC2wtQ13gf22x2h60g+ZBBEGkASMWQnmi7gr0TRkpZiJE3ln2EPIwmRkMeCKJJK+JRwKdKnQO9dnmfO5SCZmDLDshoiblNeM1M6ROhmc6tCDfbi8TxGGgL2JNuPBhCHFCJV8rqXkSKDk2kEfZQcct5ZCnzYuxEmfhkZwSVxsFEDmQ23+G9GGywUO2xhAXm/C3iffqPTSWd1njxFwuKwF1aIFGf9abaLBzaQ0aWf56YSjQstdegwklDB4XtBxS3irghKQAUkmiynpRL93RWh+yX1/BxZfESyC5K7xXevUfPvoEUISeGhxEOPpvDjen7TOezbgjfe9Z7Hrk/D3lKEB338bezKp2webyvv7Zmlh/+maV1GDo635p+XzYm9rfjFC4xhyqIkahx+40orxXsmb+IlJn+eqsfb+uCxwIrHnt+beR37DB8r+7h25btiE8qSIqU+432lHsN7J5W8pu7HgugoziHGYf4gr+kMLM8yDf87tlGeaA3HH49sP4/0b4LMElbqfyLPw68Jkn74rUA+NMmhrWW8PMjumGEokTIdwcRP+7DvT9V1bOcgDFf2Evndrw5bh4F9ZtwHvGGnT0MdBK8NQoBgCTrRBcdyYdhtEzreMIuCaRL7ALE6o5dB0jN56Has5onbbk7SNY0EnO9JzTNi6qHvUf2ebUjgPfhA0tUR5OX0XJ/aCP9L2zR+O4AfCD7lDbQH+hDpk4ASYjxMVCRLrvgUcVGyFIvkwetCIPQ9QqCqDMoobIr0KJzzROcHQ3dGISktQGYQUSi6kLhpI+vKUCfPzU//Gg1YJSxrhdUmI87VgWUiyyYMDuOUICbmjWa5bLje9XROgyiqxqK1ovdZIqKQPWtJoPJhaKagFqhs4sVcUxtFtaxxfaLqegKe5fYvSUDYPeH+1ReYxWIAzCg0RXMJap0ZHKrlChXqcRMyQCo4oAkn2wsRfEqDYSbLWMTx+7JoHKZLcVTLCNgYXlADZeFhTzRdkEoew6ZChheeyDAHE1pp4nAAknL4kLyQkArrSKK2lqaZDTnJWIlEQluTpTi0IrhwWPC0zgCblA+dRgnf/+gJy//0GZv7HP2jjSHFgOpaYtcSYmDvYWkrPl5armdCV81JxrPqOn74yXOeWktlLV/95OfYueXl9ZZV0rR3V/w//m//F+Ks5ut7T60tf3AeeWocr1q4F8OffHzGzAp//eUVf/a//hXnyxrXd1gViX0PWlDej4un9wHneow1SIoE1xJ8j1Uwt5lCt+96NpuWDkNlIstFRVJztFH4GNj2Pc/WDdYa7neBv/zilmfrilVTYZRgaoWEFu9dBs6kvAMJIbHft/zix/+B/i/+Hd/56AP+yT/6A+rZOX0X+PRqi998DZKZXPKZNdP3iuSDozUGayu6lJlXUsqgCq3zc5w9+5gUhXa/47Orn/OD3/+Dwckiw0FbULYmBZ+jJbQdpVpkQq0fYkTMDGtABnSsqZZDFAdE36JtMxx2hxM8kZSyUyKkQvF7AB8ppVADLXFCso6tgBvqZZQQvScaQUtxjOQhn3xmHAmDHIrvhYRBdI66SlJohPP/VAItKq835E1hNuQwzNWYo79iIHiyFm3qSbvE9a4j+kT0gYQQoiIkR1KHaJQQIi5CXVu0qRCnUSqyroTrncusJCaX1zuf18oY2O57louGhcB9qsinhjynwqDBFiIEO+N1D8HUPDu/5Prmmt673G8p4oMvS8H79Buc2rYdnbfFKVk0yosjWOtMbV3XNWdnZzx58oSbmxvW6zUXFxc8f/6czWbDq1ev2O/33N3d8fz5c5xzuIH559ixXVgpiiPXGDM60YEHzB/l2hJV3zTN6Mgs0gpd1z0AcBQn5Xa7ZTabsV6vuby85I//+I9ZLBaklEY5lOK4Xy6XDxygVVWx2+0eOHanzt39fv/gkFNV1djm4mSvqoq6rtFac39/PzrUq0EfHhgZL5qmGWVnnHPjBnfq2LXWjvIPpV7lWu+zTn0IYWSwKH08vb5I6+hBrmq3240O8NLGMg6m4JYp8KU46uu6fiB1Udf1A+BCSmkcQ1MZjfKsu64bP5tKr2RHkBvzCSGwXq8ftKmqKrquo+97vM/65aWeTdPQdd04Pm5vb0dgwH6/Z7fbsd/vR4mP0v+l7OmYFZFxjJV5UvqryHAUR3sZu13XjWOjXFf+noKC6rpmv9+PfVyeSdd1OOcQkZFForRzCtCZzWZj/6SUZfHK32VclrL2+/2DeWitffDsrLUjiKJt25PrRWlPmWflGRljHoA3rLXjXC7PqUigVFU1smsYYzDGsNlsRqBLYeMp7DYhBObz+bguVVU1zvG+71kul+N42e/3zOfzB+O4AJvquubzzz9nvV6z2Wze6KvC/FFAKlVVjW0qdSlrR+nXY5DPbrcb14jFYsEXX3zB1dUVH3744YO5Uvp6t9sRQuDJkyf843/8j1mv13z11Vd89NFH4zpQ5mrTNGOby/ycsvyU9WSz2bBer8c1pqxx5dqyNpW6lLlT+rQAqcrcLc87xpij3BeLEZQyBSW9Lf2XPp
x/sxHoNyu9K/jjt6Ev3qfH08GAmp2fIgfiYRmYE2M8APkViSQHQNwB+KeGM19xpByM9EplZpCkDH4AdRQ2AxlsGDGEfK4cQBp9nyV0dWWI0dN2Pa7vSFlYk6RVBh0MlQ0p4XwGVEhVDawCNTJl9rAGsTbbG6oqg1VC3vuISthMBwsk+uBQMRG9H5hFslStH9hbI4m6qYnBZ0bFrsMolVkcjSGmDDqJOBqTAf6KxPJsyXa7Q4lgbF5fhQHY3++pbYUSi5Vso4jBo42w+P+z9x9flhzpmTf4MzNXV4eOyEiJhAYKQOkq1LDZbLY45Okmv5n5FrOZXvU5/Ju4mEULHu6ap3sx/HpRQ9UsFlgoQWgURCZSZ2TIK12ZmIW5edxIJERRdbMqDScSEfe6MHc3Mzd7n+d9nm7CycmcfGGwSUKxyOlkHUpRoWLBYNBhNp8jsHR7HcbTmV+DxgmdToe6qqnritpUPqEi9rG4fr8HFob9Dlkn8SSYLCXpxD7pyhhGvYxENYlCUnr/7ShGqtgr+gpBr9thOBzQSb1Vy8l4Qm28wkpZlVSm9son3QG3btyiqmqvBKEkVVFSliWXzu1SlDUrgx6j0ZBer8vxyQlpEpN1IooyZzqbUxtDZTTWSs6tb3MyzZFRwq29IybzgqeHq1750jlu3LvP/b37fPXll6i1JVExAkVV1zgckbSMpydMZxPyusZMFzhKJrM5i7JCKklWKQbDIf3eCCFA1zVxlDQETo3sDYiimNFwQBzlzPOaWVmgJERKYpwnN8SRosodkXB0sxiQHBcLilJTlDUukmhTs8gXFMWCqioRUmN0iTWGSCiUVOSVRklBIhRSKpI0od/vYGxNHEmkcqgYtDOczKZ00oh+NvIEKiNRkbe0XRQVRQXdJEMpSRon1HWFc2Cb971SEUp6a2EESCEpq4raOqTyHu1RpJBKIYXfzyfg+CSa4MwrZYRUEXGSECcRVeXvv5BhPeNjOtZqau3VNRCKJEmbuKGlXJTookA0SqxekdUTTYQAKZW38/UDlaedLb0GAyzUgi8t5OLJaTjj1VuNoaoqyqqgrEtqoxuimieJ+ZBUQ1gR1pM4aIAb6W1bhPOEMR909pUKFBDRxGeVNO1Y6TA0LDqcs6gmIufBmmbdF8Ci5pgO1yZYikbuutX7cAF4lDh3GjkO1Bc///IPqJ0DPJ4KPC6/1MXbCQRMoqGAcdZaoYlvAKfpvbRo8ueB7m1cWjSKR2aBjGKUgUg6Kgtx5yJa1rhyhups4lKBqSWUM1BDeqMOutBIO6MuC1RVIfuXsYDGEFmBkw6DwAmvbLQ8hf+sLvyLkCb+LuXzjuPV5ZbHIXgU+P53Ocejtnl4+4Cs+XFVLG0TwPSz+NlnnbOd+zb7nYLqNPPo0HaCzpRYakcQVEDaY51CfWfLI57ZF92DR33/d3n2Dyf7BIKHECzdq5BIsnzMQKB6xHna+9ec4+yXS//3ngYBL3UPHeuUAtE8h6W9P+t6T8kCgehx9rsvun8i1OHMPT2rKRSIFmeUQpxrN2jr3eCyZyrd1uGhfiseVuY6xWdpmEdnnrMI23CmrsF+qZ0/NMqPp5sst+kzB0MiME7gIkdlM9a6DldO2DvJGIyGdPs1amZZFDHO1Uhhsc5R5zP6qWJaxqCGKGmx5cLPZdEYp4izGKgRzhILxdjWXgFkCbdeLn/bhKC/7/IrQvxoXpHOy8QZ69ru1krpBcDSgXPSq3o0D8/bDMDh8Zi6N4UmkKiNRgnnrV6SZlEs/OQ2amwLPBHCT1wjAfPaMJICqb2VihKSYRb7xbR1OHz2ipAQC9DSd/ZICCoFcRoz2txgen8fW1kfoGgmA7ES0CySGqqXX5zg+VOREmx3FFujDkoBSiGlDxbX2oGrqSz0R0NcnLKYTXHWoqKIrJtitUbEkkgZdFV6b1hT+IUitGSK0/t5OghYYxoLGn9/XWC3t6w3783r7TvCAHW6ePPE76Uht2Uuhr8fGj5F42EpmuGt+T4MblLIMz1TiOX6SFya0h0M202kaiRYLQipGrUJiRLeciOSXtGlqptBXwpUFPHEhU3+H9++wuGDMW/fPuTG0RFmMmWUSJ5ZzRDWIPqb2KjHyuoKLz95np+8e42j8YRumnLPGO6UC1Y7EaasGUUZN08OmJaahbZEcc1kEZMIyfku7HZ9MOpyR1KJjHT3IrtDxe3DE2bTE4Tt8vbPr7O7s8nK6ohuJyEVVZNt6b2plDXYukLrkqq2TI8mTKdzThYFcRSjnWCc19DpIYT3vTMywzrJvDRcn2gGfViPYWs94+Jaj/Gs4Hi2YDTq0+n0iZzE5QVIhZSiaR+Cw/0HzGZjtjfX+NaLV1ntZ9BfZ10XXPvwLYZrK37J6SBYphCyBYQgiRVprIj1mMwocDFpHHt7h/kh1959k2e//irD9XWuuLzJ7BdNW/A9KUo6RFGCNTXONYO1iukkiiyNAK9UYoxFRY2MrYiQnZHPsHBAowTilvq/s96CREYRtfFyvkL4jArtfLtRkX+pSgdlZbFOoo3DOIijiEgqpBBIqZBRk92uPdDRzDdw1lCU0Etjn4Fm/YvMRaKdloQsE2sF2vtbEUtQnhniCXHCA25SeJup08xh4TOvOh2yOILaYLTFYYlTrzykdU2lvQqOz47xcrQLIqzwwFSW+uyysjI4U1MtptTWMhx0SZXmZz/5ITlwNZM468dggeRkXnPzwwf8z3sFMxS7/Zj9B6VXrTEGZ7QPtIVn8bj80pYA6gEt8LmsshAA0fPnz7egYVVVLTAdxzGdTofNzU329/eZTCYcHBxw6dKlFoAP4OFsNuPw8JBr165x7949ryTkHN1ul+FwyMrKChsbG5w/f76dkFtr6fV6LRi+v7/P7du3OTo6akHg8+fPt4QBrTXr6+uMRiP6/T5JkmCt5ejoiOPjY2azGVevXmU0GpEkCVVVEcdxS0w4ODjg8PCQyWTifdyBLMvodrt0u12uXLnCcDhsAfs0TamqioODA9555x263S5ra2tcvHiRN954g/39/damIU1Ttra22N7e5sKFCxhjSJKkJYyEaw7A6zKwGwD+8JkxplVZAdja2motdvb29viLv/gLiqJojxMIOufPn2c0GrWAbyC9hGc3Ho/J85zpdMp8Pmc+n9Pv91uSz+XLl9nZ2SEQTvI8Z319vbUMunbtGg8ePODg4ID79++3z6PX66GUYnt7mytXrrC1tYUxhul0SpZlpGnKeDzmzTff5ODggKOjI2azGf1+n9XVVXZ3dynLkpWVlZascXJyQpZlLRnp+PiYGzducOvWLe7cucNsNmMwGLTtoSgKut0uOzs79Ho9jDEURdESdUJfCESAhxVdwue9Xo/r16+3bSTcs6qqqOuaCxcucPHiRVZWVuh0OjjnmM/nWGtZWVkhiqK2nR0fH/PgwYOWBBIslYIVzWQyYW9vD6UU/X6fbrdLr9djNpvxwQcfcHBw0JJner0eL7zwQku6uHv3LltbWwwGg5YwFEgpQYViMBi0xKgPPviA4+Pjtl8/8cQTdDod0jRlMBi0+02nUx48e
NCSH9I05YknnmBlZaVVrQj33jnHeDzm3Llz1HXNbDbj1q1bbd+PoojhcMilS5daUoO1lul0StzMO4qiYD6fo5RiMBiQZRlFUbBYLDg+PmY8HjMYDOh2u6yurlIUBdPptCWMDAaDlnR17tw5NjY22j6slOLk5KQdh8K1B7JMIDeEMSz0Jykl6+vr5Hneth2gJXtlWcZgMGBlZYXpdNqS4obDIZ1OpyXWBPKLlJJvf/vbvPrqqyilqOu6JXos9/VAAAl1N8a0BKdgWxOIGePxmLIs2djYIE3T9jyBPBTGbqAl8eR53o4jzrlWKSVNU+I4bvtfGIcel8flcfknUhyNJW6IR4sGtPCrGtf6RvviM9XBNaogYZ64rPAkmxiDxasEWNGQQ5Z+nHUYazzBX4CzFm0t2vm1U601dRPbMM5SVzVlpcH4oL1SrolfOGwsEE41oLYn16MUURKRdFMiFaw1FFGSEMUZUZqiotiDpUpgrUYKh1SO1tLSaYxx1FXFvFx4YpvWaKOx2mAwOGdIkpjRcIiIFLrWFGWJo24URGKc1uRFQZRGpFlK1skaSxtNFCmUUNRl1RBISnRdEScDVBQRJymL2YzZfIpzhqquKcuKKIlJe5lPBqhKUpnQ7WTEkaIsSgQRutIszJzVlQHdNKKKFc7599/KcMQin4O0JFlCVeSsrfaJM/8O6ff69LtTVBSxtrHB0f4eztYkWcIgGjAeT5jnOf3B0JNHVcTqygqjwRCtDftHR9y9t0dn0GE0m2EyS1EVdDs9bt65y2SRY+qqUfVIebD/gNXVFXqjHse3b9PveQULnMXoolHimCJk1Mz/+8znc7a3d9i7d5+sNyIdDBnPC4aDHuujIVVVM5mX3Lh3wMVzO2xvrFOWBcO1VfJ8RlHUCBx1XVHpism85GA8Y9SXzOfHCAFZBL0sopslGK0pG2KRMZZFPqMqS5w1dDpdnKNRPtz38YUsIY4EztZknS7COTppB+E8wWIw6DHNC2a6ZFaV5FbjNMyLHFOV6ConAvoqoaMkUSJIoyFGa+ZlTa+bMRpK4iRCRgoBpFEMys8f0iTBIanLilprjLOe8KINyjmMDkiDT5TJ0pi460nfRpfUxnil5Thp+rhGRYpaa8qqpjIQJ4IoPiXCojWmUQZZVgr0Mb4IKSRxlCCkxDqf1CflKbBjnY+TuEakJ45iup0uSnhru6pY4IxtbKJUE/u0OAxB4EOJ0yxWQgIZNsB5gG1UiVRrQ26daZJ8fP/26j41ZVVR6ZramCaeCghFSGgTzsclT81VGlI34ERIw2t8yu0ySaOJ4S4DK0FxgAAa0iQ7P6TKJhynWerhOmWLMrUWMSyHY88S8ZY/E01M3as6PS6Pyy9vWYJflyDaJdhY2DNbLcPSn8asz34QwOjWVQoJVUGUZeS5JB5s4aZ7IDuobAMrweUHmDohMiUyHUFnRMaMiTzB1hVpPcX0RliXQ9RDWYGVS/VqgSG3BEIvjUefQ1b4rO+/TPnb7He6zylB4O+7fJYKwKO3C+OgDB+Gb9t6PlIDyS19F/Zx7uyWD18rp9Zagbjg20lIgvbH+Cy2wlmiwmfbe3zeNX/W538fSppCnKpenb23S9uH87V/hPZ6dit/e5fIG6evq2bzQDZ5NE2kPUa4pzx8b2jepct7fZoE9DDxp925rZc78xmceS3TAuCPorM0z9734WYcemjT0/f12Sq0uOwS2aa9DnG6r3NLFJT269M5wMNNIQgDLJ/Gnd7Ipdp7MC1yDusihKhY6cDJfJW1fs6kquAkJlUZw7iijCPySiLKKd00JVaWwg0RGHosGFcl/WEHUR1Sxas4ImrbASxZIjC5QGE/PQB/Rvlflcjyq0P8EBInHIVxlBYK4RCR93MlTHgbiVBjwTSkiUiAjHx22Y2330e8d81nxOOojOPbVzeQq91GFko0MnQBWPURBef8eUoNFZZYSlazhlkm8FKj1qKkQlufse6clyaVnFqchA4ulSTNEuJSY/GAfW0bmXLLaR04VbDwfATHuHKMC03PWeIowtYahSNJBB2nqDTcvnufOwdHxAKe3B4gM4VxAhXHyDj2thJ1TSdSzOoKnG0XCc3Fe8sZTifqUvmghRPBV9IzuJ11eEZ4qLe/734hFjjnsh34wiLE2fBSas7V2sHg9/crEpw1CBn5oAuiXfyEiUirOSLAy1s1kxEhMTZQASCNY29Pw+lAJaRo5VuldE0dJJHQZLGi1+uystLn2996iTyH5MdvcPP/99fUumZla5NvfeMVauFYHO2jgHwxZzTq82/+b9/kZ2++ze29I+7Mpn6R61KUEBwez8grjVSCSCpypaCu+X/9s68x279NnI9BV1zYWeOZZ6/w3u0Jf/HuPlI7sJAvSj589wPufHKLldVVLl05z+VLW3SiCKEihHPEaepfLCZCFQUq8s+ori1GOMp5yd44Zy3p4QJDX5c466UwVxJBJ4JICeJIMuh7/1td1Rwfz7i9f4KOIoadGCtVs3iU1LVjf+8umyt9/tm3XmZjbYRUMXHWYbB+kcHBXZwzTBcVHz+Ykdc1HWrK2pOGhHOkjbVIHAkiIX0QTcWoOOHk6ACzGON0STeOGPTWSdIU1y60fTsSUeJ/N41c/PIkp2noKop98C6QO3Bte2oZujgIAHTTToTw2SKLIkco4Z+L80ogos2g8C/2NPGsxrxyjT+zl4hXkUQqRSQjP4bEEWkiKXLjPWuBWEmyWJE0/q1WGyIZEYmmv1kwAp8d4/zxUwWx8n2hqKzPzlEKJbxol5Ne1lRK74s8GA1RwqCtz8a1xvpAqgOjNbWxRNLLGpe1pdYWpVKU9CQRJfx3i9KgjaGaL7DSq+gcVYa8dNTSoToWKbyVjbWavDRMjKI0YKwhjhWRkBgE1tGom/yveaE+Lv+4pQ3gN9nrcGphEoBd8HZGAdgOGePLRIVAGInjmG63u0Ry8j9HR0fcvn2be/fusbe3R7fbbZUrrLXs7+8znU6ZTqf0ej1WVlbajHvnHNevX2dvb4/bt2+3QG1QiMjzvM3IL4qCV155hTiOW8A7KH/cvn2bvb09BoMBaeozREOdrbU8ePCAN954gwcPHjCbzVrlh5DpOhqNmM1mXLx4ke3t7TbYmec5d+/e5Uc/+hGDwYDRaMTPf/5z3n//febzeat8Udc129vbXLp0CSEE6+vrLcGgne80zyQcOygQBFWFZQuTZeWH+XzOzZs3qeuaxWLBjRs32vMuFgtGoxEXL14kz3NeeuklOp1OC+5mWcYHH3zAu+++y3vvvdeSUJbbAngCzFe/+lW+8Y1vsLq62gLYVVUxHo+5ceMGP/nJTzg+Pm6JEHmec//+/VZxYn19nVdffRUpJSsrK0gpKYqCo6MjfvjDH/LRRx+1SipB7SAozWxsbPDNb36TCxcu0O/3WyC7qipu3rzJa6+91pKKJpMJ0+m0JTWsrKxw8+ZNptNpC+qHdp8kyRnix8P2KW3AtCEWPXjwgNdee43bt29zcHDQkm9CQLzf77O+vs7Fixf56le/2hIoAoD/4Ycf8vOf/5zbt28zm804
Ojpq69HtdnnyySd59tlneeGFF+h0Ou2zD/f7xo0bfPLJJ7z++ustASOc/4knnmiVT8bjMb/927/dqkuEtnD//n2uXbvGjRs3WoLJaDTiz/7szzg8PGzJUJcvX25/XnzxRa5du8bHH3/MJ598wo0bNwDaZ/Nrv/ZrvPLKK/R6Pfr9fqvgEccxw+GQBw8e8PHHH3P9+nXeffddZrNZ2/e2trZ49tlnuXr1KpcvX/by8E3fK4qCt99+m1u3btHtdlvS1s9+9jNu3brVElU2Nzc5d+4cL730EpcuXaLb7bbP98MPP+T999/nb/7mb5jP5xwdHfHOO++0yiFVVRFFESsrK7z44ov0er12TPrpT3/KeDxmPp+T53nbL9I05erVq2xvb7O2ttb2h7IsmUwm3Lhxg5/97GcsFouWePanf/qnDAaDts3u7u6yubnJaDTCOcf9+/fb+7azs8PGxkZLsArKNUF14/333+fu3btMJhPKsiTLMlZWVlrS2cWLF1uVoHAvA2nn4OCAy5cvt6Skjz/+uCVa5XnOzs5Oa+m1urraEuNCHyjL8jHx43F5XP6JFUdj6UXzTpOyWXYJTu0L/FgRyBtC+HjKw3OTYAtjbWMhtSRljPN/C+vXb0EVQCmHcoJKg9YGox1VWaGNboBPS2kMVaXRDiIVewsaQEhJHAtkBMIIrPXxDSkjhIhw+HlQnEhkFPvM2zhFqRRU5LdDIgwYKxBOA35b4RzG1NS6psZbPOSLog0ma62pZjVlUdLv9eh2UuIspipL6qrEGE1ZWdLYIZVFVwvi3CdRZFmMlL2G6OeQyts+WPzazlpLWZTo2tLtpmSZYTI+5ng8QxBR1xU4RbffoSwLaltT14Y0Tel3U2bzRftMrdVenUJJkkiiYkUUS1aGA6qyREYS5yRVbej3B0RpRpqkxEnGyfiksSrpcu/+PaazEiET+sMBs9kcXVc4Z6jLBWnWZTTo4qyhKkuO51NOZlNGowHT8QRrHMZoFrMFh4eHqChhPJ+ye36HSHpbjJ2dHZI0ZmVlhBQKKeHkZEK1qCjmFcOBJIkV26tDDk8mnDt3hZPplDyfsHtuAydhddBjfSUjihx37t1DJT02Nra4cmkHUy/Q2lBqzf7RAUGxIS9yxuMpDw4nHI1zaiNQziGtIcsk/X6XxTxnPJ3ipCSJY6qiQpuaXq+L1Z4IgvDztcFgwGA4RNerjE9OSNKULEkwThOnEVm/Sxp5BePxdMbxZME8rygqTZLE5HnFrMwx1YKVwYBhL0M4A0qSqJjj4wmz2iAiRV5V6EZVxujazymdQ0qFM15NI4u9Gp1t4ixFXUPprVQG/b5PqNOeqNTppFhrWBQGU3uSi7OWsi7BWlKXYJW3ZNE1CAw60dRaE0UVtgYpDS6SKJSnXFgDTjVEKEUSewUf6RoSBgrrQozWA5hRFCMiSdbpE8mYfJH7PqE1wrMqmtiQPYUkhLf1dVYTsKeQsX+aLRsAJX9uZy0WjdPGq+WaklpXFNpSG4fW9ZJ6W8ipDbFTkIG41sS5vZp1GPYCMcXvJ2Qgb/gxVQBSqLY+wQK9ifL62FU7jrrTcbklyzSxtebaXBN/Vc3vriHv+XtzCvaEeJK32GpulBA4Hs/dHpdf5nIGej796KGM/U+X0++/mEzQWORJHxNOI4cUllg4rOjgrEQqga3nWAOYGZHowOpFKiK6dUEeRV5pTRfYjSfRtSbRE+p6jkjWkSJCON1gO/IUbWYJBv4S4OfDBILleNPfR3k0yeAUUP4sVYXPU1T5Rbf5ghry6Of9xbuJgI+dnuyRmwazLXfme/fp35bxkM+4rIcJOw9f/z+08sGjSDXNN0vf+/dJu23oL/h70YCSzRz+0fVdolb491IgMIR3cIPZnsUj/HFbKk/TEdqE/bZ+ywSGR595qdptzdtzLD8k8dDQsXz0gJvimvd3c1XOtZ8vkzH8rMSeqVbbJIQLl37m/OKR7fe0vmfJK26pXS0TSd3Z613qk57zITg9uQAMDjBIpPPK/xeGgklecpJ3SFVKRzhqK+lEgskMVqMKV84xMqXf0YznlkzOwFlqq0AlGJ0glCffWpXihME6g4wj1HwB9Ju53lly6vIY4p0oCE3B39EGsw4Y9/Id/PsuvyLEj8BWdkjhKByUTpA4r0QhhCUS3q9UNCklQnhA1yCIVENE0JrYeY9C4VuU319FJFnmbTyMBmtRjVyodAaEI5Uwkt7zLFWeVID0ihFeMYBmktxI5jlLZQKg4t+ZFuctZ7QmjmKkssRSYYWkLip63Qhj/MLOWdeoljRgKP74e7nB7c95cj2jkznqqgFllCBSkk4icZWXEndOUq33SJ2f8CsVIaOINDFoU6OUBGuQeDzcPbKZekUG1wRiPFE7kDROgy2B6BE8WGkGGNEsCpbnPKGTBymlcIx2KHWn7kqnslb+5WedD974xcky264B6pufsi6ZTY5ZXd9CCEVZG6R0WFf5ZY7Ds/m1V8dwzmHDoNmof4jQbpzBGsNkWpBXXnY1ihM6nT6lMbjikHoumZc11WLBpYurfOtbLzN4++fcvHuIVZKqrOh2O1RVzfb6iLzWTIuKeQXfeOZpvvHdr/Ozn0U8sTVkun+ffjfiWPaZmgn37h5gUOxkgsg6dK2ZT2YUi4K8yLEi5tKFLVJncLryJKaGUe9ERGEEWZqSZjF354aqrCmKmt0kRUqFRWCcRlvNSjfmO5f7xFmCX1w66nKKrWMQirW1Pr1eyu2be+wdGRbDlNFGRDeW5EVOrBy/+erX2dhY9YGxZMD6+csc3voAhLcXOTya8N6NI+7t3WNn1PGKNs0jjZOYKI6Z6iHSSaydsFjM0FWFQXL+yhXiSGLmCw6KI56IIhqvn/Zl42yjcNF0dLGU3eDfTbZtb4G57AkiodWFdhR+de2YIpTA6tpbydgALFusdUSBsNQo3vh3hGyvL4kUsQyTA+/faq2DSKHimFQLilpjwKv/OK/0oSKFFb4Peocr6Uc25+2KLHgVGwGuUa/B0ni8Lk3OlUJKR2wiVkY9DyTWFRaJkKKRWrZNv9AY5+tcVTWliYmoiGrLvNYEspipa/LKEKcp1jkqkXA8mTE1UJOQSEkvkgik990VXu3kxd0O+WHE60eaj+4cUguJNRrbjKcPT7Uel1/eskz+WJ5cLS8Ol61ZgFbhYdk6A7ytQ7BDAFpiSFDquH//PlVVcfny5VZBRGvN3t4ek8mExWLB1tZWq9YhhGiJDLdu3eLg4AClFKPRiF6vR5IkzOfzFuifTqdcuXKFsizPZO0XRcHh4SG3b9/m2WefpaqqlnQRruHWrVt8+OGHTKdTrLUtgB5A3wcPHrSqKIFQEOwYjo+P+fDDDxkOh/T7fTqdDgcHB63NibW2JSTM53M2NzcZDAbtd8EyYdlGIty/8P9A9ghkmKAWEGxk7t+/T1EUTCaTM+SNoORRVRVJkrTAeLDnCOSZo6Mj7t69SxzH7TXEccx0OuXk5KRVaVhbW2vVF4QQFEXB/fv3eeedd3j33XcRQpBlGcPhsFUsqeuao6MjxuMxly9fbhUxpJQ
cHh5y8+ZNXn/9dY6Pj+l0OqyurtLr9RiPx229sixjbW2NTqfTPvtAHLl+/Tp/+Zd/yd7eHkVRtEoGdV0znU6pqor9/X3qum7VR0IJ92HZ2mX53i+TcPI85/r167z99tvs7e0xn89b8kyQw7916xY3btzg6OiI7e1trl692irE3Lt3j5/85Cd89NFHHBwctLYny0GmQDoaDoc8+eSTxHHcqlAcHh7yxhtv8P777/POO+9QFEVrUTOZTDg+Pm4tm6qq4jvf+Q7r6+v0+/227YS2+qMf/Yi1tTXW1tbY2Njg3XffPdN2bt26xf379zk+PiZNU9544w0++OADbty4weHhYXvPut0uaZqytrbWknKCOkbog++//z4/+9nP+Oijj/j444/PKH6sra21JIYoitq2aa0lz3OuXbvGz372M7IsYzqdMhwO+eEPf8itW7fI87xV/AiqMMPhkI2NDbrdLlVVce/evZZsM5/POTk54dq1aywWi5ZIlWUZ6+vrPPPMMy3p6N69e/zwhz9kb2+vJWSEdhNFER988AHPP/88Tz75JE8//TSDwQBjTDteBYKLMYbZbMZPf/rT1kqnrmueeeYZXn75Zfr9PgB3797lo48+4vj4mF/7tV9jNBq1bTNYN81mM27fvs0PfvADrl27xng8br/v9/tsbm7y8ssvs7a21pLwgsLHZDLh5s2bvPfeeywWC/r9PtZaXn/9de7cucN0OqUsS7a2tnjuued46qmnePHFF0mSpL3uoEYUnusyMexxeVwel/+9i3EG2Xg9y+VIZiDosxT0lV6t47QIhFDNe06f3ZYGwAyEEkGzBvSfe5JqcK221Mbb+C3yAucMkWpUSQEnJXGcEMkGUHYGFUuSNMIhMLXBaokQCoHCGEtZFUQlxLFAxaohqEAUS4Ty/hORjHGusSTTxq9zrGusUGKcqbGVRiDoZBmFrv06TwjqRuXOGMPayiqdfoduNwVnMaamKArKsqaqSk8WqSvquqTTySACI2OvemCdT6ShQkUCV1mKPCfPF2SdFCG93UZelNjaomtNPs/pdXpY7ciSLrPZhF4nZXN9CMLh9Ug8sF2bCicEMopbYnOaJkSRJEoSnHBUdU1/OAAkVZmj4oTV0cgTFmpNWRuS1FBWOXIOaRIxWhnQ63eo8pxhElOWNSfjCd3+kLv3D+hkHdYGA8q6oKpy4jjl3v09oijGGUccd0jSDnlRMloZkCQR0+mYbq9LpLx972LhpfKHgxXQFpkpVJrR6xsiHN0korO1SRbHiChifdjjyuVd7ty5w2Ru+fY3v82wE6Ok4e6DfVb6a3zw/gdoV7C9tYVA8fGNA8azgpNJQV7WRFFJjGXU7TAa9iiKik/uHXI4r7l8wXD+3AbGaLa21gHJ4cEhCG+BmdeGTrdPt5NSlxWz6YxISYaDHlhHFMekWcp8PiWv4MGDI1xVMkpTImvRzive6bpC2ZJsZ5vJPGcyK3FzwUq/A0JQVQV5GbFYdKhqhzQWGSmyNME6TRp7tWIpPblACkskIIoVWhvmjQ3msNcni1MqV6DCPFeJJv4EaRyRxAqEbeOIzgZLaW/lEgnRJNq4VgXIuWCP6NVMlRKN2odXOzVWgzONVUvA0BryuhCIRHplnihlMVuQ5wuMrhG2sRSUp9aTNBYtONcqbTga8KchfvhQqZdIb+OhzvgIiNE4XWPLHKMrr6xqjFdUbcJOUjbJdYGI0Q5wsv0+xLP8MCkQzZhqZaOgKwKZLmzln4/HgHzM10dZTBun8nhNE29tyR6n8VwPagRALHwslnAy5VVxFaeoq/XBb+e8gomnmIj2GTwuj8svc/k0eePz2/zy2v9zyQRL4KwTrrFzdygbYUUNpqQ2cxJqFIIkXUUlOyxwyOkDou42Ts6pJzOipIdJUsoyJ4n66HQTaUukPgSjMOkIFwlvLR7IYp9JRvnia/tFy6OIB1+ObAJnxqqlY/2iSiS/SN0/a9svJvJ8FqFCNMD/o/c/Q174zGOcbYOfV5fPUvj4om3+ocqn782jVTj8bTpdR7QYwhIhY5npIk53eugwzf3mFIc4JVmcHiX8fvr9wwSN0Fd4qP6ffX2nbX0JGH343IFgsdyul/61SzUXj8B1w7YPW7S4dm4UPghXCmfJoGEL1/67XD//meHTFTzd9gtf/Q0TRSCInaQWkEZz4kiRH0dYkbLA0nUFCRUIhTSC/UlOV1liTtCzCCfWKUXsleGURMXrODtGqCGxKFiY1KvaWwki9livsOA0Anmmnzw8ZohmjfkoTtE/tBLIrwjxo51ukkhBJhyVg0hKtLAeWJXeBsYZv5a21tusSAsqanyVcMSxalgFgtrU6NogpYI4QUUR1nkVDik9uSPxqxmqUiOkQAOZhDiWaAe1gzo3DFKJkI7aOITDL2y84p5fvDiD1o6Fq9FVhZCSKFLkeYE2mkGng3UWY6qmB1oi4VDST6C9vaQgdhblHEoq4jhCYbHaT/a9JQRYJLUR1Lqmco0EgJQgFEpKlPLenr4jh0n+cnl4cu+3c3gAPAx1YTwVzi/GhDxd+LQDlwjyUl7uTJwZhJfP1owFTjR0HRlO3LJNJaeD3tkBpBl6lw6ktaEuCwKIP18suH3tfczikGeuPuGZZE0Q2WiNth4mlywNcNaidQ3OEVNTFzOcsSRSkEaiUVVU9DYvEKc9dF1x7/4D7h5P2drcoLu2xnODFcqqYv/oiEhFZEnCxvYmBsX+/iGX04T/8//4TUarQ7737W+wc/48Bwf7vPfW27z14R43Dwy5kByXMYXTPNGpPMEDMNZx8OCIxeJNbk1fJBsNoZjy4laPYdbI8c9m5LMZZaVZVAbKCmkMUgp6HZ+FG8URtgZdGpzwYJ7Tmso6TwoQCiUs1mpcbXDaMOhndPoR0yTh/v5dDoyg0x/wleevsDrqebIFgtmiYDeSnNy/6e1gnKHfH/L//K2v89pffJ+T8TGldRhnsU54JQ68Uo51oa1Iv5iPY7pbl3EyYjI5oaOPvK+p80SE8PiDr1mQCsbZU6LyEpAZ2IXLLM/l+YFrXn7+/yGM6CjynLrSOOtJJq7xgFVS0InDS9yPOcaFdi9JlEA6Q61BSVo1EyxESYqy/loj50g6CUmssLXPmtfGgFREyoF1nq3YkMIq7UlN3r/a9yETpD6dwwqHQmC08fWJvX1OlU8xdYU1jqKukNKhnKO2FlNrjHPEgub6JGls6UrL8bxCSkHcEJpy472My8UMbQVJFDHsJMh4iFmcIKzxcYUmaJF2Is5fucjdW/exB4dMZ8a3OeP7U5pkOGcpq2p5NvG4/BKW5clnyHYKBI8w2QoZ6sGKItiThEz227dvc+vWLYQQDAYD+v3+GcJCnufcvn2bw8NDtNZcuXKFX/u1X0NK2ap1vPPOO9y4cYO7d+/y/vvvt8ClUqpVGJhMJqyvr/P8889z+fJlNjY2SJKETz75hOvXr3Pr1i1u377dKl0sqzqETPnj4+MWdA6klAcPHnDv3j1+9KMfcf/+fXZ2djh//jxPPfUUSZJwcnLC3bt3+Z//83/yzjvv4JxjNBqxtb
XVEhsCuLu8SH7qqae4cOFCa5Px53/+5y0Ivbq6ypUrV1plglBXa20LMC+rsKRp2gL3gWwQgPMoipjNZq0dQ6/X49vf/nb7HD744APeeustTk5OeOONN7hw4UJrGZLnOUophsMhTz/9NJubm1y4cIG1tTX6/T5pmnLr1i2uX7/OBx98wJ/92Z+1IPT29jarq6v8/Oc/59q1a/z0pz+lKAq+8Y1v8Pzzz/PCCy+0/tnHx8e89957vPnmm0gpKcuynZy/++67/Nmf/RnXr1/nu9/9Li+99BIvvvgiKysrbXv44Q9/yLvvvsvbb79NHMesrq6ys7NDVVWtAsdrr73G+vo6Tz75JK+++ipPPPEEwWblzp07/OAHP+Do6Ki9Z8skm4dJH6GIJsgvhGhJBH/wB3/AeDzmwoULfPe73+XJJ59kfX2dKIooioIf/OAHvPXWWwTrkjRNMcZweHjIn/7pn/KDH/yA4XDIU089xXe/+12GwyFFUTAej7l27Ro//OEPWzLT7/zO77C9vQ3AdDrl9ddf5/vf/z4PHjxgNBrxne98h3PnzhHHMXfv3mV/f5/79++3FjBByWF1dbVVmwDY39/nzTffbNV1dnd3eeKJJ1p7kdlsxptvvslbb73VEl0+/vhj4jhuFT7G4zF3797lzp07/Pf//t+x1vLNb36Tr371q606xXg85vDwkP/yX/4Ld+7cwVrLxYsX2draaskdk8mEH//4x9y/f5/bt2/zu7/7u5w/f749xsHBAW+++SZ5nvPBBx+0bb3X63Hu3DmEENy8ebMllXS7Xb7zne9w9epVwJMsyrJksVi0ihwAi8Wifb5pmnJ0dERZlpRlyXg85sMPP+S1115jsViQJAnb29tkWYYxhslkwl/8xV9w8eJFvv71r/PP//k/P6M08uDBA65du8bR0VE7Hl2/fp3FYsFisWif9+rqKufOnWM0GnHjxg3++I//mLfffpuVlRWuXr3akkLSNOXu3bu88847/Omf/in/43/8j1Ylpt/vU1UVh4eHOOfa5/jSSy+xtbXF4eEh3W6X6XTKe++9x3/+z/+Z3d3dlgD18ccft9Y/4X4999xzfP3rX6fT6fDSSy8BoLVmsVi0NjSBYPS4PC6Pyz+N4nn0fn4SlCJgeR4oCVYMTgiscC346F+Np4G4h4PXbX6bcz6W4own90cSWTucM43NqF9vVrrGWNuERgSyAYzT2IOksRAeGHWeaJfEEUJAKTQ2AiUjRGN7oOuaPG/USIRENfEaqWKkFERSEkUOIRVCJAgkda2pqtwDx8pblllr0NorLKRKYo0hUhFEkqIhP5d1RaITOt0uKo5BaLJej9l4xszOfZBWSKIoRgpFmsRYaxiP5xgczhiKYoptrstozWI+QyqvcuGV9TTj4ylpnFCWFYt8wWQ6AwFKWObznNFoQGUcVV2gdd7MXzrEsT+ncgKd12hR0e13ieMIISRKRcRJ5NUdqjnK1qytD3DGspgtsLoilgPGR0fMlKI/6LO9voF0okmKElRl7a1qpmOqouDc5nlKXbFYLEgtHORHWDTdXo/j8YQ4Uui6YlLkpKnieP8BMk7obq54kN4YpuM5KIWQknlZMNpYxxYCJUum0xmra6uUpaaqLFksuXz5Iv1uh/cefMBwtMGw32EhS4yVOBtzd++A6dEhFy9t08263Lp9D+cEs3lOXpcUdYkqIMGwtTIgixLG84rpvCIvjzi/s4WxliiJ6HQy8rzAYanqEmMccRLT63boZhlH5SFRktLt9+gNej4eoRRlXrKY56goYjGfk2Udzm+tc//BAxbGUTtNWVV0U+mtRqzjeLagrDVKKiIioqRLWVry2lBqQ5I4b8skvOpwlqTeNraT+ZC/qanrmk4nRsqIRaGoqpwy0SRRghOQaw21Qgrl7VYcZJ2ULPVttVVydsJLmwqIAzHE+TmCaMYPaX2Mw9tFRURKNfbSgqrOG3sdjdWeGCIb9bAoFuAioihFqJSyrMmrwo8bAlAC0WyLc0jnbVqsqRDBWkV4dWEHnuAggsqzj5t5sRCJc54wYk2F1SW2rqh13tji+ITCMBoKCUp41b+QTWzxw6KQTaJPkzgUECjZwjkh1ioa66xGYUkuZQUTlJFPlVVdk4TXxs+QS/YAylcC6UkvLVYW4sHN2CzMKaGjifs62VBJGhKK1/rwcVYhTtf+j8vj8stWPk0u+GzlgUdv/+mybCERcA+FRNucRAlKFJFIsYt79ExBkozQwy1ql4CoiXSJqY5w+haWEpmsQNQHnZMU++iB8PgVBlQX5ybI/Caqqluwc/kS/jaEiL8NaeDLkEoeVhQ5C6SfHuPzjvVZJJMvQ4b4ovJ551+u42cSPL7k+f6uCiWfrbbxxdt80b5f1K4/r5zd178/3BliZIPjNARMQvIvjTsDjs87jWC5n+KxyM/AKlumxBL7o313Nn1TfMl2dPb6llUlPruuLebZlEDPDJDZKTFFLLE55EP7W852Zv+PazHXZj6xTABpr7/Z1p3Fjz2s5ghuD+01NBUKT+VzL+uh4pxECtjpJ+xNHZHy6nDC1RgSMjEmFR1MdYzsbEIEcwtJlrImc4qqYm56/nJVjK0tIlYYOaJbHVPIFYgsRih/hxq8OXAOPquER/3wmPiPUX4lIl/h1htEyxRv1RmadmkD0EPTnIVnkVug1kF2DvLaNBNhhZCCstannIGG2e1wqEiiEkVR1t4KRPis+Ng4UuXZ2dp6tnUSqVMmpHRoA0Vl6MUesC5rGh9KhzaWujbEnT5SGaJIkWUJSZpSa40zp3KCEkAJcBKPy1giIbkwjBhkkiyR2CilrCS68hkw1vksYoXw0jNWIxobG4dXLIiVYFGXNIJ/XsrPrxdw7dDmTv92eA/JoLDREi3CAuOUBbr8cmuXFcG7VyyZ13jGiCeKuKXBtqnHMsEkHL85cHPcUx7bcp0tYfx3TTag753WGBJRce/2z3GXzhNUWcJA7bQHwIUTWF150QZr0VWFEhFgmeYLwHuKYg1JBFEkOa5ieiry5zKGcj7jvcMTVkcDLqwPUVFCMgZtSqwTnEwWrKyssLm5yVe+8jy9rfMYpVhdOwdJTNSp6K5usaMF9/bf4SvPPs3dBwc8OJlzrZJsqoqBtKTOgTXMxnMODxxH9ycUx/dJXxhxfqS4/+CE48mCWV6ha8PJQlNo4/uH9HKXCttkT4JSUBtLUdXeMqRh0fhnqBCRJxEJ64g6Ma6yZL0B53sr6KpgPJ4zKTWlFWxsdslkzNr6OWaH96nLwpOSjGB06QVe/Tf/jvHsiL/+s+83gk7eHqWTxkgcW8mCUkfccpZIKbKsw/z4kNvXP+bJ519k9/wuq0dzn4ksJE4ovEyVW+IChRbYyDLRCoD59mmDJlAzdgi8PHBoSc4HOIQz/oVmNK6umU8m/sXQqOVAYwMjhbdECnJZDoT17VtKkNJhggpQUw9jHHEMwvk8DCe9KoZSCqkk1nqVG11bZBSuKdgseWUgY/xi3ttQ+3tQGT+GRc1zbMR4veyxTDBVjk4UEq8eEiuJ1hXOKpwRmFqjbcNck7Ef7
0goUDi78FngSqFzTVFbhgiKIuf2gwMW/RHPX32G//P//v/m7f/2/2Fy+72GqOLrro2XeS2remk88G1AhreoEKeTqsfll7oEAgTQkj4CqSC8UwJ4H1QHgvXL8fFxq+Kxs7PDuXPnWFlZaQHQxWLBnTt3uH37Nt1ul4sXL/Liiy+SZVkLIm5sbPD888+jlGKx8PLQIbM/y7LW0qHT6fD1r3+dp59+ugWwoyjiypUrPiBflty4caMlTcCpH31Qx4iiqM1gt9ayu7vL66+/zocffsi1a9d47rnneOWVV7hy5Qr9fr+1/bh69SpKKX72s59xeHjIJ598wlNPPdXaJYAHZ1dXV3nqqaf41re+xe7uLsPhkCiKKMuSKIr48Y9/zI0bN7h582YL1Aa1CGNMe++TJCFNU4IqiRDeUkZr3dpTBDC7KAqGwyEvvPACL7/8Mk8++ST9fr9ViTh//jxpmvLzn/+cmzdvttc5Go3a53DhwgXOnz9PlmWtqomUkn6/35ICLl68yI0bNzg5OeH4+JiiKFBKcXR0xN7eHicnJ3zta1/jd37nd9jd3eX4+Jhut4uUks3NzdbCJKhMALz11lv89Kc/5dq1a/zLf/kv+bf/9t+ytbVFmqZUVcXzzz/f2mHcv3+f/f399t7v7OxwcnLC7du3uX79OleuXOF73/se3/nOd3juuefQWrdtent7m+vXr5PnObPZrMnQLVsFBq01VVUBPqgdnmloL0VRtG39zp07vPjii3zve9/j137t10iSpLV72djY4N/9u3/Hr//6r2OMabNur1+/zt/8zd/wl3/5l+zu7vKbv/mbfOMb32Bzc5M8z1ty0rPPPsuNGze4d+8eb731Fr/xG7/BE088gdaajz76iD/+4z/m5OSE8+fP87u/+7t861vfaok4VVUxn8/5q7/6K/76r/+aN998s1XU0VpTlmX7XANB6OWXX+bpp5/m+eef56WXXmJzc5M4jpnP5/z+7/8+r7/+Onfv3iWKIl5++WVeeeUVXnzxRa5evUpd13zwwQf89V//NX/4h3/IG2+8QZZlXLp0iYsXL6K15pNPPuG//bf/xs2bN/nKV77CK6+8wr/4F/+C0WjUEjv29/f5T//pP/HWW2/xJ3/yJzzxxBOt6g9Ap9Npn8NwOOS5557jG9/4BlevXuXChQstoeZHP/oRr732Gn/5l3/J6uoqq6urKKX49V//dZ544gmee+45fv/3f58LFy7w6quv8lu/9Vv0ej0mkwnWWrIsQynVEou+8pWv8B/+w39gY2OD7e1tzp8/jxDeemUymfDaa6/xH//jf+Ttt99msVjwne98hyeeeILz58/zG7/xG6Rpyve//30ODw/Z2Njg937v9+h2uy0JrdPpsL6+znA4bMfhoNoR1JfCs6rrmj//8z/nBz/4Aa+99hrPPPMM/+pf/SueffZZdnZ2mM1mvPHGG7z99tv81V/9FX/4h3/Ib//2b/Pqq6+2ijDdbpfRaNSqupw7d46nn36a3/u932N3d5c4jpnNZvzX//pfW5LJzs4OV69ebdvY8jvhsdrH4/K4/NMqriHXL2esLwfpH85KDIHD5aBpG4Nw4XiNma47DdQ563DGgrVIz5XHaEOwXFB4q1WRAljiSKGUIFIKJfy6KZJe8dLHjjyQLIXEyUBgaNae1h/bzwE1MtIkIvJAKAXS1MRWI8hIkhSpFEoqVBQRxVBVNT4jXpImiVeGnc2pqhqBI4kionhAVOTUtX+X5nmBiiOySBKphE6/TxJ5VYuqXGCspqhKOlmXJInodQfM5iWL6QQlXQOKayJiIiXRdcHJcY2KBNPJFKe9SoI2NbWpWOQzytq/34Ww2LrmEhfoDwbkZUmsYozxZAGVKEyqKYucss6xQjBcWfWgMZBmHWRISFKQFzkrqyOmiznHh0dtws10tiBJIq+UWuRUlaHSln6/RjsvxTybnNDvxoyGPQ6ODoilZHF4xKIq6fT7CCGZTWdIFXHt+k06acLmao/hoE+WppjaJ8GUpbcmjWOoqpxLly4SxYrIKsq6Jsm6ZL0e8/wYU9cMk5gsjZERbG+uc/HCRYTzxMXpZMLJ+Jh8XnBhd5vdrU1m8yl1uWBjtc9kcsLx3CBEwrwwiEyRdHs+qayTIZUjikHrksVsyupoyHQ6Ic9LkiihrDW1ruglMQJBmqWtfWtZVfT7A3Rds5hPWcwXGGM4mU4p6prBaJXBaIVFXpA5y8FJYwWZZeRFyepgwCKvOZnlTBc5G6MByvjn4RPooMhr8kWOGCiG/Q6RkvQ6GaOVEVEkOD6ZorXBWQ86yCYeBwahfOwpL3zbTqPIK2ooT35yQqBtYwkVKR9ztF6WXEqQOLTRlHXl+6yQoAJIAUL52IFqSL11VaG1QWvbkBjkacQev12cZFS1oSxzRDMWCBWfAoemkQU3Jc7VIDQCg3MeJPXBQom3mva0BtHEHR3Kx5FwXu3DVJiqwOoCqys8v8UrAsk20cmPOaKxVrEBYJJ+fGujsMKf37lGjVY00a1G4Vk2sVPZxPGUNE38KFiquobUwRLg0gaCmzFXtBgX+GcXAuYC2cRpGosXolb52gNAzbFCTBc8mLZ8rsflcfklLmfB6pAUewofP6xwG8oXAe6+F0o8KuYxi6KqMZRIXRNnCTMXg7SoYoZQEm0sTkpc2oXpA8o4RsiI2FSe5JakuOIY0VkHKzFOYNUKceQgmiwhLl8M1H/etXwZAsaXPfaX2e4XV9j49Hk+65xf5jo/6/svc94vWx6ux+ed+/OO//dJyvmyaiKP2vaL63HaD06R9/BuWjpW++JpsKyGHPIp1Rd3+ot1oiWUPPLa2qqJ09/b97ZosZ+w/+fd+09f53KbC9s8tIUI+50m57tmw+WrPXvMoJKyfJxTe5ZThDWsrpYoGs1c5PRy3entDuuwpTOL9nmc3pxAqFm6ch4uHosNcwuP/wgnsFIwGiqKKgeXIUxFjzmVU2gUYzUiERYrEpI4xtoFMhpSV5JjOaITV6ylNdPCUasUVymUq7EmRmYZcVlTyYzaGhJVUegCGaWNfL/jzE19iCfT3i2Bb1v/SASQXwniBzS2BQ24auTppNI1YKJzDtEQMXAOJSBRCiU9mOgXD4KyNhgHTjhqB3mlce7U76gdSBxoHLX2mfMaSVnD2PiHHElLrPAEBemdaZ32zkB1bUgbooUFtBTkpSWKBEIJqqom6zVSoUoRRUkzUfbZJ5EUqAa49lRvR2EdiXQkCrqpZ0BZ47PzbUv8Fli8pB7Wv6aVMw17q+G+iQgVGc+YCgwwETqaJ460lI12LGgGU9ls3I5sp4NvGDjd0r6i2cSPA6dsVyFOjx0O3YoANSONaK5/KTxE+Eu4cAzB6X/LBBG/5SlJxZ+vdqCSGBplkhA4kqK5d1ZjrEBJi1I+u8QaSyRhllecjOdt/eNUcfkrT1HuJ9z66H12d8+RRjHDLGPveIqwjmI645apWR30iKKIo8mCOI6wsymTvKa3vsntqaZ67zppJ2U4HDIY9MnSLr2VTe4dTnnuuef46stfYXa4x/1bN3n9rQ/5eO+Y
bqI4vzkkcyWZ1uiGiVRWhh+8fZON1FAcn6BrzcyAzgZUg00oJmRlztFCc/9kTmmcX6gKH+TvdWIWZcFibjB4P16H9AtUZxHCKzPQqFOgFBJFlHZY2+6yISXT4xP29vYh6vCtp77G3s9fpzaWWWW5d1ywvbPGtKwxVjBbVFSmMWYRXkXCWINQGoTP/hBRhJCKk/272GKGq0tGvYxu3SVOYhDi7LteBNlKH0xzwiGsbdrE6eDcvoSFaMeAwF50TnppTiV98ME5nI2RsUYlqfecl3gLFueJaNYYjIE48i8/Jx1WNBKewvdZ0VgiRaK5p02fssb3/arUWCVRiUEZh9UO7SwLY4kbhmbodc41ssXaeGsq0Sgd4SiN9UG4xiNWGz/+WQcqjhrZUYiUwlX+gaqmlykhKWuNFd4ztyai0lDaCJdE4CxxooiUQOsabSy2KDg5mTLNS4q4xvVXsVXJwXhMhsU0WXhlDUYkRNKPxd6Djwb4F8Qqevj9+rj8kpag7hH+H4gHy4De8gLt8PCQoiiI45iyLMnznLIsqaqK8+fPc+nSJTY3N1HKZy0GJYxAUuh2u62lyYMHDyiKgrquWyIDeAuZcFzvi+6tSoQQrTpB2MY516qCBMUGrXVb56AotXwtD5NCtNZMJhNmsxlxHHP+/Hm2trYYjUbt4iSoHFy6dIl3332XxWLB3t4e0+m0BfbDsUIG/sbGBp1OB611Sz7Z2tpidXWVvb299hqFEKSpH8/G4zEnJyeMx+NWvcM511o1AK3KgVKKoija8wYblO3tbVZWVs4802CD8eDBAz755BPu37/PZDJhbW2tJZykadqC64vFgvl83tqkaK05PDxkMpm0pID5fE5VVVRVRVEUZ6yAZrMZeZ63ViwBwA51DxY+s9mMa9eutYoUzzzzTGsPU9c1eZ4TRVFLJjh37hzT6ZSjoyPm83mrBnHr1i329vZ4+umnuXr1KufPn2+yZuvWCqff759RKVi2TwFaQPthAGz596DGElRqwnV1Op22HwXiQK/XO2OJMZlMODw8ZD6fc+HChVb1ImwTSEkXLlzg0qVLzOfz9j4G+5B79+5x7949dnd3eeqpp3j66adbKw8v6e7bTFDtqKrqzHVWVYUxhrquMcbQ7Xa5fPkyL774Il/96lcZDAYt8SdJEjY3N1uFn4sXL/Lrv/7rXL16le3tbZxzDAYDzp07x5NPPsnKygpaa+q6bq9nOp22yhe7u7u88sorfPWrX2VnZwetNXEct23uhRdeaBVLgjJQKKEP9/t9nnzySX77t3+bjY0NhsMhKysr9Pt9XnrpJebzOe+//z4nJydMJhPKsqTX65GmKaPRiM3NTeq6bs+9sbFBr9cjy7K2j5Vl2RKt1tbWeOmll1hfX29JUmFsHI1GdLtd/uqv/oqbN29yfHzM3bt3WVlZIcsyut0u29vb7RgYRVFLZhFCtOMieHJcsE3KsuzMNmFMnk6nrdVVt9vl3/ybf8NXv/pVdnd3WVlZaS2M1tfXW4Wimzdv8tRTT/HMM8+07Re8HVcgYb388su8+OKL7bg9n88Zj8d8//vf5/j4mJs3b7JYLNo2tEz+CMf72wYQH5fH5XH5xy6nwUbbkD8eFTz38ZA2HNAuCNp1nqO1QfCqi35j19jkukah0UsOe8DTWIuzXk00jiJkJyNk4svGB0FKQdxY+CohiaLGIqGxXBEyQhqJUAlRnKKUTwBwxqBrDUKBUAjpldgkEmNrpJZUosIhSJAQgZTKK1/gKIoSY/07PVIR/e4A52YsZnMM3ooGY8E6itzPdVQkEMJAmhIpQa/bIYkj5ouY+WKBQ1FbS6IUiVSsr6+hjWExGxMnKdoZpLR04sQnJGlNPivI86KxEG6CxVbgDN6mVGuv2FBUTOYLesMewhkiFaNkTFU7yqpAxYpEpnR7XSyCTpqidY2Uonm3eBu1wWBAPZtRlDUq7XI8z5nO53S6Haq6oqpy1laGaF0xHk8xMqZfligJSRwzn07ZXF0jX5QcjResr61SWg/2xzIiLwqUlCRpwtF0hpvBaNQnilMiGaOrmqwXgzP0ux1OJpOGjOtI4gSEIIokm1vrREnsQXinSSJFv9/D2prdc9tEKvI2NUXJwYNDxkfHPHX1IltrA3SVM5uesDLqMZ/n9HsderMS4yzTxYL+2hCBZTgYsX94gnSWNMpY5EVDljXc39un1+mRJBn5ZE5RVaRpjK4bm0Cl6HQ6GKuJI8V8NsNZb507m1Tc2TtiXpZc6KVYHDJSUFuMrrw1rPHx03M7a1gLeVHSyxKkFMwXc5I4Imn6xSL38wYpJU4J0ixhM00YDPvURpMvCqxxOBlRzHN0WXsCFdYTH4zGOU94ioA4ko3dtMI4wUIbYgGZktTN8ilqMkhra9DGJ5FIHLWucdITOhQSZ70dFFKRFyV1VfgBxIl2Lu2cQbgInERGCcZZitwrzSiJJ3ShMM36xTV2Ss5owKKkV9uQzifOYQTCRZ5EIWnjxz42bX3czBowFa4usfUCW5dgDYIIosjHkUMcDh/XaSK4gG2IHU0SkTB+IHQRDcMDMH6scyH2KpbAnCZ26gL44gdUfz88LcSJAJiJVqS9lbyXp3F2K1yrcgT4ePMS4U54nxe8+oprzi0QsonG2gAiSR7P2h6XX50iPvWrQLRrfvjFQHfXKKF5NSJBXS5IVIp1GTDHqQ4uqhFJD+UErrOBtBJtZ2T5BLvzCkYXiHqM7WwihMNm6yTja1Qiw8YxCoiB0hbUjib+fNY6pb2kz1mD/Z3XZw3A/EVH+dsQPP6udftFVC2+zD16lGrJ37YeDx8/8PA+S1HkUfX4PIWOX6R+Z+rxJfb5YtJIIDMEKkLYXhFUqGiwUNcodwRypHNmiZxw9ogN3Ar4+U1gSpwhVDQYc8BcPVEhvOtce84vj2I8TJr4LELIKaHi4dvjluolxCnZtqGHLMUTm2P7yjfPIyh/hDnDKWlDBLZI+//TM7bnbCpwtu6nz+f0Uk7r0B7loet0oV5LuzgERa6ZG0EnLijpU4g+UhgEFmsEVghP5iinuGwVg/MkYKuZV7AgZZiCEjOmC4N1NRZD6XpkcoqsDYiMLElZ1E3bbhedS8SVh8gdLuDkzS37x8KufmWIH845KusaOxX8RD80ZjhtsMI3SdnsI6XzC5Y4eER7ENQiEMZha4OpDSJWTacPE2APBFdNhr5SHtBNtUNZQxoppFKUDvKyZq2fgvSEg1gqT8zANwxjodTGA61pRF0bVFU2A9DpoBFIJhYPFhvnPAs8kCCawcvhqI1BO68goI1FSYGxFm0c1kJtvdIJxqJUhLE+G8FJibaSLGkkDBtbjbMkuaWXe8s49xONUJfA6HD4oIsn1/jng5TtAsQfohkQrWVZbsgfxp0SOQJhRCwPhWf3gSXKydIv7a/NQBEGLBeCw5wdTCzOo+TONIOP39ZaibUKa70ajLMWpxTH4zGzRYF2wicYSAEi8nUop6Rug80LF+hd3GD/aMxiPuXk6JBrd+4zXVSs9rusrDgyJbiwtca9B8ecHN7lTr2g/8JLZHKFyd4
EM+16Akg3RQlI13c50THdc09zfv0Cu8++xPHdW/z5j35CnXUY9dZZtWOevJTTG8V8/POca7cc1/YFAxSZ8NdQzqbUqibpDij6HUq74M7RhHfe/4A4/Qoq7jDYucSL55/ik5/9iJODI7RxxE5QlRWqWTArZbAWojihKBbU2lIZTZ4vKLWjO1qhu7VDp6q48cldTo4P+Nn7nzDTCk1CuTDkH/6cfD7nwaRoFuF+YPVWLxEWyZHpkVcSax1a19RlTtIfcu7CJYQQLBYzKLS3aVqeOAdSkrNtcKBhdDSWJc0PQQqzeWG7ZpEtm0mN9x3x/2/JQwKVZFjrKPOFzyIzrsmc8MBGVTu09e1NRZ5gpLEIGZNGEalwWOfnJqKxbKkqH2CIhCBNIrS1FKXGaJq+bKkar1vjGkuX5j+BD4Z4eVMfkLBWUBkHscA4ixKevGgcWGtIkpQoir2qkYJIC6yTGCeRwisT1VXZZLZ5uWLrDKPEUVtBrTWJashW2lBbT0DLK00cJxzPCvJsnVEvpcyn1IUhw1LW3vpG9RQSx6z20ss+K+WUXWpdCHo8Lr/sJdhdhN+XbV6WFx9SShaLxRmAdDqdtoDz7u4uly5dYjAY4JyjKAqSJGmBywDEl2XJ8fExs9msBXcXi0VLGAhgYgCog9JFkiQtSBu+W67rMnElgPmPuhYVpIOb74OlRlEUDAYDdnZ2WjWCQC5RShFFERsbG6Rpymw2ay00gnJFOFeSJHQ6nZYMEGxlAqAbgOjZbEZQVQnkkWCbc+fOndZ+xFrbEhgCgePChQtnSAvW2pZsEOxdAkkhkEI2NjZYXV31BMijI6bTKWVZtqoKAWieTqfcu3ePo6MjFosFURS15JgHDx60hIJlEkG4x1JKptMp165dQ2vdKi4Egk4URe35gurBnTt3qKqKlZUVer0eR0dHjMdjtNbM53O63a6XQzeGfr/PwcEB0+m0VT85ODjgzp07HB8fs7u729pm7O/vE5RTkiRpzx/A60CSWK5f6APLbSTc49AGwvaz2Yz9/X1u377N7u5ue/3W2pb8EY6xDKhrrVuQfzKZcHBw0Fp2xHFMmnoCaqfT4eTkhMVi0Sp5PHjwgPl8zvb2NlevXmVnZwdjDHmet20gWNgEwlEgfoTnGMgEod9ubW1x6dIlrl692pIltNatGkWv16Pb7XLu3Dm+9rWvsbq6ihCCo6MjkiSh2+2ytbXFysrKp+7fZDJhf3+fvb09Xn31VS5fvszOzg51XXN8fEySJGRZRqfTaQlLUkqOjo4oiqIdHwLxo9vtcuHCBX7jN36jJSYFlYzLly9zcHDA2tpaa6dSVRX9fr99JmmatmNQaLehD4bzhOcb1G6CJVKwUyqKoiW2rK+vs7Ozw+HhIbPZjIODA2azWWv/FJ5zOF5of1EU0ev1GI/HbTsO42ZQ+Aj7hXHt5OSEvb09iqJgZ2eHf/bP/hnb29t0Oh2UUiRJwlNPPUW/3+fGjRv80R/9Effv3+f+/fu8+OKL7XMP5LSgmvK1r32Nra2tVhVkMBjwve99j/fee4/xeMze3h6LxYJer9f2kaCW9KjMpsflcXlc/vctbUixWZsLEXQlzwYBz8wLH7U/nC74m2Cmsx6k9d66BozGaY0zNc5W4KxXOBSWKFYkiQecrbY4q7FOt+9RKSweTJVIqVAqRkjp1/xxhCNFxREqEkQCnFXUpcQhUZFq12dS+OCzEg6rDbWr/FqzURt1TqCNRuuSWmtvc6EtSihW+n2ctkxnM+aLBdJZpJJEQmG0psxzrK3IF4put2DQHxLHCf1eDyn9OtK/iyuEkiSJZG1lgNEVRV7Qy1IkgiRNUdZBWZHPvCJuFHtivxQ+UzhOUkxd0UkikkSRxRAJQ13k1GWBdDFRklJXmvsPDtnZ3WFtbYVelmEJSTUV2kjiKAM0i7xgc2sDOYnY3z9ic2uXTqfPweEBxaLEaUtVVxSVQUYJRV0ilPPqGbVmZdRjMBpQ5DUHhwf0uymdLPN2MUbiUExmUyySbtqlqmGymDeKwN7itq5qnLXUpsKJxvYHqKqKTqdDPdXM5wVZxxNR0iSiXJTEkaLT7WLqisl4TtqB1fVVrKlwumBrfUSWRhg0pdH0Bj2EtWhd0OtKVocZtV1gXUK3l5EXBUgoywKlJJ04JpWCYbdHPitYzAuSKEMKzdHRMWmagROkSUJVlURK0e/1iGNFkc8bhVHoJCnaOfqdzIsHa9g/OKCuNLohS0VKkJcVUZKSpR0GWcrl7XU21wdM5xOkq+h3Ogx7GVJAXReMBl06qVfJyboZAkHa6aDnuZ8/JAJcjdZeMTmLE2KlWBQF89z3AaEURamxNJZD1vdPifRkCbzaqZL+PBJBWWmMqf08yTqsrnFSIpXypARiVJxgrGMxn2NNRRzFjZWlj3MaJ4gC+UsIymJBXeWAt+sVcJrIZm0TQ/RJh8LS2EF7Eotz2seWIgMy9vHaEMRs45+n45HVFa6uQWtPdlE+MVEKr9kqFTjjlkLQvg/6OGmjdNbEpqWTBH3ooOthaNZCgRSHwzXHEbJRUXYNGaMFj70VuR+LBQp5BuAJ53K2SYIVeDUXvCA19uwczKtAh7qFNbwn4Pubu3SPHpfH5VeiLKOFYc5iCYo4wc7uYejws4Bgr7bT9F8psfWCKO1SVBW90SWq4ohIZjgRY5Iu1mqoFkR6jBlcxuoIGQ+QQlAtbhF1dxBE2GwTyn1E9zzWaW9dJZpkJXFqe8WSXdQvDHSK9p9P36V22Gzwm2bbs0PFZ5NAPo/g8fD88hclaCw/i88ianzRfmGN/WWJKF+0vv1FiBfhnfb3sWZeTkz625A/Hi6/CBnk0wQQWAIDW0zwtJnpz9jOPXSEU2JCuyZ5RD0+TYo5Rf2DHpcX3xJntpMOjHi4H3/aTibss5ykvEz2cAEoWR4nmn9EoxrWJtk/ov5hzbR0MhrvhWY/96nNceCEwgVDOnFag8+sv3NLdQ3/t+3fjyIPSSGXRhR/LRKBrhw6TehSY4iRsiI2klopFNBNNMdxB6UcpdNEeDEFIzyhGAfjEiLZQ6U1yi0wsovWBkuXxI0pdZckzhClbOc8wtHMK/3VLivUO7E0EoV79Mj7/fdffmWIHwhJYS3aCmoAKTHNy8FPjP0j8A6xPsPcGIMykEiJpcn8FIFG4FASdLOYblwHfRaK8J0xVopuJD2gKsEpgUgFE+2YV5aRrxh15ZqJsJ9kF6VpAvw+S6IjHINB7BUUjMXqGmu1X8Q7SRxHoBSx1ggspjrtMBaLEoJurFDC19+5RqYvWD1ocA2j2lkPDEcSrPLALxIc1r+4kSgskWjMH8ySPQai9XaiuUfBDsV39oc7cAjM2Jb91Xo7Nf8IaJjfzcDqzg4Dp8oiy4ORaGxlfEeTgiWgW4TxoO2KoV7Lx5UyLJBOu6JuAis0jDfnTJOBQkMMgLqsKPKK2miM9jY51jj2Hzygriok3rYnSyO6/QHquGLn/EV6K+vo6T6VkKhswLn1bXqrW7z29occzxbEcUI3ljxzaYftZ17gK9/ukkYRSZIyXNug2+sTSU
r90YsyO7gPcpbPWYZYM8Z5XO+plYzFwxh9LNMecFRFQCQGiTWJNNlYAfWoMSQoi4kMKKe59CzEQfki6KvK/5CCqH0s5q4AK+2CsxZbdf7ja8GCfq0uJslByaEhF1ZmvOAB8RmehUyr6aQ9aQDTHZmCE57x0Da1HgTs9qGVgiopJRJ+cfsy63iMc7f8es/36j1XmTfvXTl5cFp7qS3Vm97AclDwBJjOEQGbzGRIs7/xGN6rHNA2JzxmAWGDfgoybGQBNGus//JWY2Y5AFKgp2/TQ52V3+NaG9B2EEJRglONGo7hmyfgKzt3OwqX1dd0CWl4y3eR+5xVD/ZVIxIJfss6o9XTpihE3Xy80HOqnbysh2prLV7szU0+djvPFdcjlyeB/cNLTfqOyeFWm3v0/r+3LF9oVNvk/LfKVeTQ4eZK+rzC1g2iuFJf32lPtpV2YxhO+MJJQePPbMYV436zat08vpy4BHXje9CuByW/k31iDsHFgnOIUD8EV5jmyL2uVIgmDsXvC36mzKeN+c16lgkX0ZMR723m1OToftlIlJNd7If1/e5LE44R3JdYjs14MSlW05L49x2SuO6TPK87t1kX8POCQajETePt3y4dOaJq5R5pQYRyKeUWactRuebmG4+hS7eBstgZUGXZ1xR3dsxjVdaHHUROWJ3qA0gMbbM+76K86HE6QyeO8wpgGugJaYmVH2rShjEqZD/1L//7LSrwfwA3A+ragAjCFQ5YWTXpbp8C0IhkitVBZXyyQkGReBGALtbEbnIrWA0grnI6Nz+BhRyhBRxOiIgMkvhEAKuRIFfCAxiBgheo+yifWDGDKwY7/5CZ5AAoVAWlzOB6zReFHpeZWEAaVMMWEz+IjyAavAx7RlKxE0MLjI4Hw6bMdA7xMwQkUYxiQ4pSAyim70uNGjbZUO+CGk+LfGMIyOFNO2HOBTf+oc6xYizo2MYZjsAWq3tyvZK7uLMkREUBmgIaIgs7MYU0K/pO2uID8i2fs/pnaK7A8TadgiCTgnaRTUftFBMhiLTKiKpBw82I+B7DfGkrRRGJMoJaOAMRplDXU742p1SRgdcTfHFKIV2hi0BlM1zBYP2Z531FXN/N4jnj59ihsj59ddCtMTwo7OMZI8eNKGZ3ehaQqdmA8+M8FEYkjzQUkCLBA8KkYS3WzyW9AkTxONT4wjWYDVEhEJGECrAat6KgWVrtFiMaLQCJUVjAbLBqthZothCJREjEqohMuLFaAwRrFYzKmrCiHR2v7O/+h9/nT9gm5wSfkVI42OnH/+MXfPzmg0XPUjf/ff+z3eX1j+k//vH/An1xWPZOB3v/KAf//v/S5zq/jwxx/zz//oL/k3P/0Rf/bph0RtWZ6e8vXvfI+vtXdS/N4IHsXXvv4trAp89OMfsbw+5/0777LZdtmzJvWN0m738tgpGmPEe0di9TBYU6GMRusUb1oIEBPbhAgpOhFJWBalCD6zAwXPMIxsxoCxlmWlctiSsuQ1CeJw8wUteR5IZviR3YFpz0pSBGU3OjZj5L35nKppGXza1ZROe0OKNhUw2mB0xIYIskH8hjAMBDQSUygjRAiuQxPpVhvGEAnOgamojeb88oo1sKgrTLdh4xT9sOViO6JHRz0omt/+TWwzpzINQ7/FO8dz5+k7l/fGCDqt9xCSJ5R3m9yngaHv8VVSapRjZumF3aGIxNAUxe8OzeVAOQXMvEm/euk2w+r092loB9gbzZ1zu88F/FEMz/Cy0fi28qbG4GK4nYJMILEbxBh3YIICSAFommYH4Cgee8eMxsUgXkLYFLCFtRZr7S7vKSCgMG4UA2dd17vD/DAMGJM8QZumIcbIZrM56l1fmEmmjCkFgAL7kDWHRuxp+V3X7QAw5ffp/YflHQJDDvujfD8G0Cjjd/hZKcU4jrsxL89OASLW2pfaOgUVTCnhY4y7/p/Os/J7KWfaF9P6TAXCw7l6+NzhHJ7Og2P9M713Wn/v/Uvz87ZnReRGO27r49J3BbBR6mSt3d1Xrpf5Pa2bc47tdgukOVpVFScnJ7uxOgQxTL9Pr02BRYf9e2xeTH87BugodTsco2N9cEOZlfeRw5Ay0/46BGYdhqk5BPUcG5vb2nds7t82voffX0e5djhvfxZD/7F1f1v6ovyPKUaPfT689mXq8G87TVmqXicdKvVuUx7/MpRihwCxN+lN+sWkIp8n2awo+2KRkSb77C4qQNYRJIVjCg8iUoyJUwBrAQKDSAA8SNIPqOw8QpRE+JiUOpDrIUphdDLSRi3oKumVjCQmK1vZxFCR9UmJ4SqH8RSTWFaj4FxiRfR+BIn44BAUxlSEkIAYPvjdWdEaQ1NVxBCpqyrpIVRiV/REjDZESeF31+sV3ueQZybpcoY+heto6pqze3fQStN3A5cXL9isVxgNxlhG5+n6Dq3zO95W6Eonw7/RdJs1xgiCTU5KA4QKZk1LZQ3RJ/3EfF7TzOoEnNj2bNYbrq8CzoUUernr0AratkbZmu1qQ1sJZ4sFtVE8fbGiHwObdcANHdYM3L1/l3ce3uXu3TOurjZs+p7Nao1C6IbIX3/0lFmlMcrw4nLgue85mVneeeuEqtYY3fPw/inzZcXjJxe8ePEC5zreffttLq7WdF3P9dUWo+rkIGZS2OOzszNWqy3OeTbrFW3TcHZ6wma9YbFYMJ/POT8/xzlHU1fMZjOU0gmEETzDdksYOjara7xPTl0Gzd3TEz5//JRx9FitOF3WKB1TSJR1AmcslktMZamsputGNpsNWZ2WdIPjgCMQQ6TreubLxW7eNdYwa1oimsePn/K1r/0m11dr6sqy2Xb044BsYd5aou9xI0hVE4bAX3/8OT/9/Aln84Z3Hjxk0dSZrDTylfffwbSGwQ+4wdFtNpwslnTdwKKpqazFjSPDMCBKMY6OwY0sl3MWsxrFkpNZi5WRphbatsZWNV5ZlqcnGGOzLlbQykAQQj7L1lVFZVvGMelc6qalrhuqusX5FIp7NpvRNnOur68YB4dShnEcCFFomhlnZ3dw40DXramtJvhA5xLzTFEjqN0ZD1CgVZYZXUB5MKIJyqNMZvVA7VmaA8Qc2mmnv9OJ8cNoi5gM/lAKla9HSQzQYwyMY2QYHW70eJcdyhB8UReFtNcVY2fSleyNIIdv45iNkTI14kkCfUiU5MSUjW5xp3nZZ5RChJdnwo0Skk5Lsk1oH5oAKaCQ6Y6eQTGi0x49MUbtzyzl0t5P/M3p4k361U2365kO06vk2XQDU1QCRV/s+g0KDW5AQk+9eEC3PicEoR4DkYFRFOBQ7ZxAZEZk9KDrJbo9xcURre4Ttx8j0aOWbyUgrFcYkbTQvU/7NYlVOsmtr5DVDgzZX7rnDmTttG3I3iadCsnnRnZ1OtZ/Nwz7pXZy0xaUcygl3Vapl83lt7XvaFb7c+9hHrfK93II5Ni34rY63DCi741iL7X38NkduznZzvaSvmW/Y8uNepTJOc3v5d395a4q90xl4mwvEeGWTnxpbhy243XW17G8ytp8XVl6937klvGJ5PPDvn92d+6eK29u2clAt7Wh5Lw
/B0zAVrdM3T1wdLo2SXbG3f2xmGLZoeh37diDP6YjlCZKrkks55BUTvl4Y/ynoKn4aovOdF9JfyIGTS+Bt+8bnp4rVFwRY08TLvGiCCj6aLga7mLjFWhNbWAcHDJ7gPcjT4OlkjssuKT3WzrTIqrBIzgcEjWDOaPpL1l7iw6KQTnqYUWoTtDi8cX2XcJdZXl0vxcdsu2qW8fm502/FsCPCDj2SOIQ9ptOQicrdqFFlMYqQStPoZTQorOxPU3WWWNBFBqPFsHFgHMenw286WWXlcSiMCqyHQOORDuDCmgUjVHUKr0gQz4Qe+cT3rowZqjMbqGScVOJYCUJOrGu6dbXqJBCNChtGFyi59PZQ8XkTSICRiKnGrSFhwvNrFXEaLBK6IchG6zThm+VRiOZRSSmsCa1IfisHNHC0A17qkOSEKQyWwakjaOqDN6FHdZpGjpl9zoIEcQXmERGfu+N31qbxEiBZKGmPLnfuGJMYTS893kxyW6zSB+zQCP5AJXvKbErE+Amhf8RrTC6sIPIfostm09y8iESGb1DoqduWlwEYxXXly8wOV5mjIqoQJSgjcGaSG0rFotTugCmUvz0Jx/R9z1j8NgQ8uaXwBrTdS8hZjrPsN94YwLX+BzXV6sEYCoKLckwvpD7IY2PQWGyWTyTOWZPqqQkC7vxzK82fEzKsewLkPvQQVawaSUZWBLQKmaR9wyJIbGTPIsoOkQHjERUVIRagWjwjuhGfvCjHzMGuLNYsmhrnj37nP/qn/wzwtDzo4+eEdB877e+zn/8P/kt3r63wAj81jfu8x/+w2/y4Y8+4g/+zY/4x3/+nB9dvOAP/+C/4Sc//ktOm4rv/+s/5O2zBd+4f4/N0HMnnHN6tkB3Kx5//gxjkhLNaMHYPfWksYbgk3HYxxSOBNMQTY0i0vc9G5OjomZaTGNspuVMb7AYfDbiwND3YGu0Nbx/tsRvLrnaVNRaYZTsAD1ljpbDfyyaxhiyIB9uvAslh55RWuNdYOWhWSw5Xc4TBbKk9kjwBJfeMiF6um2PqRtKkz3FsAWaCDHFXFbiGbsN3o1crXvO1yDbJ4zjSNXWLNuK9WZNHDyGCM7hxsDiztsYE+H5x5jFQ4KHrXNUdYULIcVDy2eDvh/xMdD1PTPlM1NOJHpPjCRPONlTlqbDQqEh2x9/opT5XK6/Sb/K6Zj3fgkbUQ5RBSAxBXRMfysG2mKELaEmilF9anCbllXYOY4BPw4ZOy4uLnDO7UAFdV3vDA5Tw34pewoYKHWeChXT0CnTOhwyfpQ2F8P61BBfGEAK+COEQN/3LwFhSip9U8oohvwpwOQQwDD9PK1X6cMpm0q5VgAoU7DGqwzMZaynaVrONMRMXdc3wBJTxo9pnxfD/RSgcagQmP6b1q/kVcopbSh9dygwTvtimk/f97tnCxCl3D8FloQQdoCew36agj2mn6f5HRuP6Toq87z8K/00rXOZG3Vd0/f9Ls8yNqU+wzDsgB/DMFBV1Q7YUJ4rYzIMw411XOp4OG+m634K/JiukdL/0++3rdnDeTRNx+6b7iGHY3ibAXw6LwrjyXQ8powuh0w607ocXpuyykzvm/ZJuXYMzPOS0vCgzoeApWk9vqyi7ss+c1tdDz/fBng4lsc4jq9d/i8j/azKzS9KX6TYelkx98VpClx7k96kfxsp7RkxATDi5F0syVhZ0i7GegSV5dykhCihPZMcXYzWN97Znuzl49Ek9gCjExurSAq74FySRCJJLtM6gTe0EkR5olHYxiRDv0mgD6OT9KYi2blF70LWGtsgopNh2KcwLdYqlDagFASh6/sUksSN+b1QQYhIFIy1yTlGk43WQmVqIIXnHIaeqARdm3TGDDCOERcd69WW68vAct5y994dHj5aslxaLs+bJBeGEa0ghgLQ6BIbap1CNjSVpjJtAr8oC1Hhs4ONtalNwXu2g8OTWL/Ozs4IC4/zPVdX11ytNviQgNFKCTEmHU0QzeW2Q2yFnc24qzSr1ZbOZd3b6Pjksye08zmLxZy79044xbHZzHBdoGka1tstT59esVkPnDZzzlfXPD6/Jojm3umMyjoeP/spy9NT3n7rHtH3hHFAQmBez9jagW2/BWWAiMnyaztrcS6xmWy3W54/f8o7s5ZZ2xBraNuWp0+fst2sUZIArcZU6BRMGjd0BB2pGsOLF45uM0J0tE2NeEdroVIxUSYo0AYePTgluC3WzDBGJfZXYxHxjGOPViPz2YyLq0tczHNDa6q6RVD4cWTYrnjrwRmrYeSdt99hdX0JCJVZcnm14dmzZzw4m0PwVFWFaEulBEfH5WbLeuuZN4JohSjo+y1nZyec3j3l/PKafrNl2w1YZZgvlgxupHeO+2cnxJjOrioqFAncYCs4WcyYtzXD6oq2CSzbCm0SE0bdVMxm86xbiSiVHGS6LoUxssYkB5oIzgVMZbB1Q922hBiISlg0C5bLM5x3rNfXzNqaGCN97xi9cLI8xWjD2F9jNFDZHIYm5adEJeYN77PeEKxNoVlSnQzGCN44iBZC0kN4L4RAQplJJLqsyiEmpmFlUCrtHSJpvxBtUTp9djGFnXIhMgbP4D1jkTuJWU+S2GizFSXrQQClEngjb3vFDSYWgwvF4W3PlhyKdTSzzKeQNFmXCdniufc8jjt0XdIVStzTsWdbzk5VWxivQbL+VMgq57y356eysUmy3qpYg/bq171B6U16k34V0y9ULJDp32wPEo/rOxZqyZYtyoDzA0ELlWlReKp6iZO0x8fVE2oZ4ME3MWIZrj7EjTXN/C5jGJHFB4TnP8DM3sGriKgIMb3ngghRqYn5Wr3UvmMG+Zf75GftlHQW2QOBsz4825Libv95uQ7lfHlM9rx5X86Xl221P1OtDx86ksnOLiSlxCM3HfbZF8h6x9srB2XEyb/E8rY7bh/Nv+jFpg3JVjWZ1ntvr9uPy+QF8sqkXnnvF43f66Yveu42/dGhDL4DO0ztoYd6G9i9L+ONC5M+i2R737Ru0z6OJfODPKflFIDnRB90pI1559hlX4Zux8V6GBOm2Fhznafw0Z1lU/Y9ILu7SM+VPhHZRXd41RotKe7yS9e8chgMl9ee1bDF1obRN3RmCRLRQRMlsg2BuYmwXjP2LWb2kDGOaGWQoBlxOHWGkkATz1G+YyMNTjQxjAxyiqqvqboLqO/SqJFrFXCS7bKxMJnsVu6N0Tg6t35J55xfC+AHgCjDnhYnbWwuRLb9SAiJuSNdT/SXVQi4MYEIrIbEJAEQaaoKXVXEbo2WBOYYR0+M2VAUVUKGSwo7UFg8fAhYSWCQIIo+CIOPyfAaA+gE3lDsDQougg8RHxJTiNYJ/DF6j84sBDH6bDQmG+0jtUneA1qEzElAiPsNwufYs6gkwFmtUTZtzt6nPIKAyUwTiBB8JHiHd57oQmaBSICGtIclmr/phFai0FW1o+uEIhDIbtFH2fNzFPBF+pby8iFk8EKcbBJ5Q2C/NpJXjsnXJ+EhduOexqRQEBLijqKw+DgHBI3Zxay8MYdy+xKbhmHoHbEiAXK0oW0XBBG6TYcRSG
CiBCBRIsn7R2uauqJqWtbrLS5EnB9uvDiD33sY6B0TCjvGB4j4zAgSfNq4tSIBaGKKIXxDGR9T2JddpiFm8L7sQCJpb84GsrRgDk4vJgm+hKT8igLR7s+Snhy/tMQwznlnpZhWihCTwEqadrz/1hxlU+zftmm42ka+89u/S/SOEUU4v+Bf/cXHGIksas29xvAnP/iInzzd8N3f/3ssT5aJsYQTVPVbnP6db/EffNvx/o8+ZH11wSeffcaL5xf89D//T3j34X0e/K/+F9TzU37zd36Tt5amzAhU9u7SWiFkIyogakjKQS1IpSBqQnTpX4gpfJNP4y+SRN8wOIIfQWzqV++J3uFcYBgd3/lbv8vDd9/Di+HOrOH5izWVyXM+JkaaprKJfaUYgb2n6x3aKqwxGMneGHkuxJie9duOz56v2DrFH//1Ex58/094++F93nr0gNO2SgCdmEA+SsB1PUon+t0EJhKUTkoGtIIA2jZY7+j6nut1D97zsEoeNE4HfOdw2zWNEua1IUrEmobKGGhP2Ihi1mqMFXzf44Kn1prRwxADIxFtK6xOitA7Jwvi+RNKcJsUzzaHairxuQCZHDLiZF8o28ib9OuRbjNgFYPq9Lepkb4YU4EbwI/p4asYUaehNkqZJY+p0b/8dgyosFgs8N7fYOaIMe5YM8pzh3neZmSeGioK24Yx5oZh7hCMUJhFpofnKeClgDmmzBdTZoZi4C/XSt0PgQ9TQEVpw3SspkCMKaPFtJ1l7L7IgF7G6VXz4RD4cZiOHbgPgTPT+h6CKqb1OAa6OQRUTOt5DCRQfttsNrt8j4FGCgijAJsOx2BaxynwI4SwG1sReQkIM533BfhR2EWmIYrKGpjWu9TjVeCeck8BfUxBCFMGjMOxnfbbbYChQ5DW9Htp13RcpuNwqPCY5nOY5yEIp/w7BJF8EVPKtP3HgB+HgJfDvjimaDgG/Di2jgpg7IvyPKzzIXjrGJjgddKrynqde4+Ve6zPX/X5l8X48WX64nXv/VlZNKbzunz+WZReb9Kb9G87SRE4JTvyRA+SAAi7PSn/f0OhGsNOpJWQnyfL8plaeMqo5rxLIWlJgHqJGi0Kq7Psq4GYHDGCSkFanM/nnqJP0Uk2N9amUKHZE1JHgyosbCrpOXwYqaxCKZtkTol4l0POUM54knU4ClzSL0QrSf63JhnioxC1RVuNjgGi4FxAosGNAyEIylagY4psGgtwE5QKXF5dcLlZYbVmdrpEa2GzWmGUZd62ED2jczjvWW+u2Wy31FVD21QoiVirmTdLTN3s+nJ1fcW26wh9YKMT0KOtk4yodWTW1NSzGdt1z2azRaKjtgkYMK8M/VDjvMdFQdctM6nYXm54/mLF6cyyXFYYBvyo6DeJOaWpanSrqZsZ9x485OGjgWfPz1mttkhd8ez5BU9fbBid4rQFLYZPP33Oe+9WfPDBV3j27BmbrmfoBqzWXHjoRsedkwWb1ZbZ3OGGnlnb8EKg3/a4oef69IIH9x8RnGfWtswXM6zVOL+lrlIY2RBHlCQ923q1Zt62DP3A9WqFVhVtW7Fcthhlqaxhs93QzFpEFHVt6bY90Xm8jPSDR+ma5XJJ32959OgR/bZjGAba2Zy6atBKo5Um5HPeyckCa2pMA2PUnH/yCY2tGbewuV6zXXfEsyU+CrNmRojC0A+JktOFHOIkrRlTG6KqqdoqKcijoqpmbPs1gqLWBiStraapUUrh/ZjYLYg436NFWM4aLi6u6Z3nrm6xYhm2HmU9s/kCa2uUVgmUBQxDRz90CSikNUIgxj7pYXRDXTeEANthw3y2YL5osVZYrVYIHms1vRsZA1T1jLquccMaN/YJnK+T7qa2GmU045h0Pj6DrJEmM74mVg2lheBDchrDZl0RRBWIzu9sUlJUb5AYgI1FtAWVwrwoa3OYF0UAXAgMzjEMjnFI4X+iLzJGGosYYgaaZBNM3iNjhFjsYWGvFklkrkIKV5WNLSJ7J5oMvCDrtYqtp7jv7Vg+dsq/ssfms+GNI0phJp7Qu6t9WRneNc2K4gW7281FsiE5y7+ovLt/+bPQm/Qm/U1Jh+f0287+x66/ZCCVYm+BsibFb6GqcGxY3vkqvr9E1ClKKtzJKcYnoGO4/DCB7pa/QYwOBVR3v85w9RGba8fi5AFuDHDyDt36p9RnXyPQI2pi7I97Z9tUseP1/1llydv6JfUhOyPwrgd25Rx/rvw9BvqY5rEfn73N66V88vdjMtdR/aXI3lB/bBxvnHPTh1eJczegFa/QRdz+1KSgo6m0fWqNm/wqaa9OVU821n25u7fCJP+boJDbwCTlb+mS/QiUvF4t475+X9y8/9i6vO252377wvld3rsyDYM0/bG8fuXIUE379GAeQZJbbszb/H6e1kkm8lMoc6+cFmD3ks97yk1Qz66kg/aWc8u+6GOzbJrTvk7H19bxa3ubphIwaMbg0YMntPdQ4Tmj1FgUISaiARUDlUp6VGUCjfVsxWOiJAC19KgY8aJBFJ3cp4o9jVxh/QbHjF48ikcY/ZixX3HRzhFzjXOBWCkUDpEEhtvhcfJ4HO6BMRYAzJHO+QWkXwvgRzrH6iTIi5Do90geDH5EJL3gVBaAtY7UlUVF0CJYq/GhLOBAZTR12yD9Fqs1QwwM3kHwjKNjux1oTUBphQaMFqxRrDvwAqODphJsjCzqRBEKaS3FEJLhlYQSSmUmRo8oiYEjFGOvGzHaJIVG9lBRKrVDiSTqTR0RRzbCR/oAqwjL7cidTU9lE1OJzpuykogyIErjAmjv8LHUzWUWiYBSIKPHGIVWkpgNKJ41JaRK2pGLRw2xhHWZbkzsWUOioqDGYlHKMBFYJmMaEFQZz7IvIcl4rzTsQr6U8qT82d2bP0xURIUpI06qlxpfADwTiApDPxB1VrAEx+zkDDcOjN025RQc4CEWKu8U+85YQ93UsFoTvPDuu2/z/OKCYeiTgBWLIJeFshgosJepsT8WpgeJ+OBxPgNERMqE2IES0i7rUx5Cvp54OUQyRj+SPIhMUnIl5qbkSbFjcvEkkFEMk/d9QuQpBJSgQmIIEaVQovDe4X3q3USEIXiErutQpiGKxkbP8uxt3vqNv8VPfvwXfPyjv8TneMSBSO/BNjX3lPD548/4w+//Cb/5e3+fqp3tDm9Ga6qTht//H3+bOPR8/PGH/JP/+j9ju9ni0Fz5hnbQ/LcfN8yaKoELYkDlOFsiOeYqHoVHKUm/ExPgKgpap1A2ihQ6R2uPIoU3MuIwOikHdQ5NUunEnqGVoJqak7fuUZ8tmbUNCp+APaqsAVBi8n6QBd3gqQSqJuJDXichxXhFMvgnppBDIQpf+da3+L9+8HW8aNSzP6e/qHj65D7rxQwhoEQhJnkY+xCp2hlGJ+CSsRVKG2LwXF6scaqmaix+9ZRPHl/y4P593rFJaYnWKd6xj0RjkueHaAIJ1IayjDLnp2NDe/ZNurXDX6xRmb1k8B4fSIpIrXFuBJVCWg0+oFUGp4SIL4CPG+fCsu5jns/Ju2mKHn1513iTfpXSMcPdMQM9JKaBK
QCgGJlhb3wtrCBTxoJDA/302jSEyvSeQ+M7wHw+B/aAlKnRfRpK4hDwMAVQHAqJMWZq7GysL2wiU2P7VHA8DD0x7b9y2JwCBKZsDKWOU2P7NCTMoSF4ChA5FKCn4XWmwJNpGxeLxa686XgeCsLHjNDH+qncMwUblDZM58wxkMQxEMM072ndDg3v035WSt0AThyyThwCQ1ar1e7eKfBjCg44Vt4xhpVpm8rYlvwK2GdazrRPpmFlDhk1Ctio1OdQ0J0y2hhj0jv/gGmkGNym4KVjc2yajq0N59yOwaHMqVLnKYhlCrK4bb0ezulDsMghUGk6Vw/n66vync6Zaf8dU/gd7knTayVNwy9Nyyj1mxr+p317TOF2rN7TdXdYxs+irHvdZ16lHDz8Ox2rL3r+tjBR/y6mn7WPy7PHPh/7fls63KNet7w36U36RaWdLEsRA6bhLw+UgztduSQFRgg78IWo5MG/e8dJcsgREUJ0ODfiBrfTBcQc+hKlMUpjbJKTAKISog+MzhFjyGFdEkupiNq917J/OyoarEl5hAg+ZlbZrDvRShGVIpBYTAfXI5LCPzS2ZjmbAwqtFaPzbLuOFLoVtLGY4qQkGm0q6lozXyyJMdAPA8GPDH2PGwa867GVoqlTaIzBO7puy8aNdJsNdd0gorhaX7HZbjBG01QVbdtia8u22qKVTuFF3MA4ONZ+jRlHnEtsFE3bUtU1Uadwx/12m8AJkngEtLGIqfAuZKN2QGwCzpgKdFMn5xGjUQLbbY+tLU+iT05IVNi6YnSR6+stiEI2A8Zaglds4hZd1fzGN75G70auzi84aQ2fPnlBN3ZItBiBEDXPnl5CFE7PTtlsNzAKQz+iRXNxccmsscQYefH4MY/eVqi6pa4NG9cRI1xevODBvft4N9LUNacnJ9jKEKOjqhr6bQ8h6djms5ow9nTrDZUxzBqLUpHotywWLW5MLA5WC42xOB/ZrjeZ4UPYbDui0ty9e5bmpDEMw8Dz5y8ICFXdMGvnrFYrQtIasjg9Zehqrldrxljx448+xQ0jMYBRiYVNGU03Bqp6RlM3vLg45/GzZ9i65fRsyfPrK5RE8DBvZox6QCvDMIxopWgry4vLK5wf2G7WWBsJoWYcRs7unLHtB3zwbLZbVIisNlvc6LjertmOPV4SyCP4QFOn/CAyOk9xJgp+xLsRpUCLxvuAlcQi6vzA6EaCT4wei3laf9vtmm5zDTEwji4BiZRh1jREPzIOPTFmGUnS+bGq6xQe3DlCcPhQ1lUKF170GSqGxOBhhaiTLjP4AHHM+ZF0iOhMOiSJ5cPY/DeFgybrR0PWA/cuhQceh3S2deNI8D45eRXdYEwMrsVjd6fmDoGIzvuW7AxFu9+LHlmm+pK9kmV/TtIJ7EHRqWadCzfPFEI5n6SNV2WdTdjlFZOR48ZzN3WxiOz27bDb5Xc3osIrrEVv0pv0K5ZuM3C+rjE5nX3Yq0HLUgsRiT1RLYi+RzePiNUFZhigqdGhZpAt6uojbHufWN+HOKC0TeHsoqc6eZtw/ZTr809ozt7D2juE/gVhuEZXi6Tvz+s7MYqrHeCrGLOPGTxfR3a47Z5jcukXmP9zueH4r7fUKU7PlmR19KSgY7X7onbdkOsP7z327G7vFG6wNezquK/Qq7bMw/lUsrnRnhv3TJXxk8rcUs19PpOdP9689+Z0lht/D/uWYsvLBqj0Pos387+lxT+Lw8Nh/xybD68a25d+izv4xNH79iCfveVzB3TcZSXF9EGUeJDHyws+2V9Tp5c1USwlU9CGyLE+yu/8G3VlV47sLsiuXomeI+7PJCUfIZ8l9macG1nn9/x0wEs9OVq3l9OOlWRiOFYYgk42bYkBLzVRAhI0EPCqQscVSgIiLdEuaccrNvYEm+1PHo1ET4gCovDKMsoDtFzShku2w6cQlzhpqNQlfS80OGTsUNVdUIfsssfnQAK3sgOv/DKOOr8WwI8QE1AgZrh4JOKjRyPMrMGFHOolmXvxISkURGtCVhRkcgwIYK2intWsziMqJmO2dz6FxlCB7TCyXfWcnc7SstCGqorcITF0IIJosAIDcN0HzqxQ14lVQQXB+yyqxRQmIpKEZCdpC7DWEkUTg0vAC8nMI4AKCbCiSO3wpHAJkuPUbsbI823ga8rS+cDYDRQJwBidwC8x4iOJqjCzGhijskE8vcCTgjVRFvoQsDmP1JeyO8iTwRQxBIwusSDTc5RbINUhJHYM0SazGshuzFR5rmz4eY9RuxfDHiyRBKNApFAzKkq4l1joDPcS0P4epgwY07pJgbWkTSB4hmHAtQ22MjSNpm5qLtZblPfJOyFGCD61BZWo0kNEqyyMiUAMdF7oXWS1WuNzmY0x+BDpfaQyitZmqskYcZnhw4jGx8TCIigqkw9UMnkxhgRACiExiYiSpFgKaXxVnt+xsIyIypS5+eWW6WfdOBJcio0sSqEKRa4UzP7UayApy0JIdKxJMC0glQQg8TH1hfcD49Cjg2d9fcFf/On3uXz+OVoLGpsF7simDzy53PL1h0u8czz+/Md82LR89TvfxdoKEYUjUMdAPzhcP2CMZXHnAb/3d/4B99/+gN/41rf56Cc/AZuYKOIk5NM+jFB5yclOuAYQl40enl1olR2giZA9OQI+ZmF6ZzCMxFBioCY+GWKNZCCRigmMoSQgRLQEtIQdSKMygiagtGQAScAane5LMAsqqzFaY0SjKoNuhFYnOk0lEYYn9JfJK0bFzI+Rl+dYVpRKcaoTgidRpwbv8JL2zd/77rf4He/zvcl7KJC90pQBAsE5nI94FG70fHYOHz2NdF2a494ldqWy5qOkXtEK1t2Ii0I/erajSwpBlY1NIhg9jSObF375XuZcTodniTfpVzMVw91hWI0puKMYqkp4iXLf1DheDOgF3GBMOhYdY9s4BAMcS+W+qaGuHOjGcWQcx114h6kBttRnGIbd92l4lSlYYMq6UQzZpU5T1pBpKqE6St7FqF/+Tfv1GIBiympQ8rPW7gzqJU0BBLA3xJZ+PTQ4H95f2jXtx9v6eVrmMYDIsXsPgR+H7S5MCFOgwGEYmJJnyXdqVJ/+m47dtMzpeB4D6cBNsNAxVpRpHcZxfKnskqZtKPlNgR993+/aPAV+eO93c9EYswMsTefkdC6WcZuy6Wy3211f1nXNbDY72k9l/TnndmW0bbsLPTRN0+cOx7vMmwIgma6Pqqp2a6AAmo7N88NxnYaJKtcO59BhaKYvAn4cjserFAdl3Uzbf+zzbWnaV1PQzTAMN/anL8rzGOhjev8v29g/3QsO+/VwDA/Tq4Ajv+j0ZQAaX6YOX7a+x8BDh/3wyyz/TXqTfpEpicw32Zem6t7y2+79kp+LlLkrmS64gOyLEjQpKafrIYTAOA4JIDE6oneINil0JQpjdNIHxKQrcMGjdUQbjaksWqcQG9oknYPSJjvI7BWwkNlE0CgRvPOEmIzPITi0FqxV+Kjpu4G6NlS1pa4qioHXR0HrxLxpTDL+IyoBV/wAvaKuG4wxKRzxOND1PdvthhAjVTvHWIO1VTYq
ayozR9Ozurqk02uatqGuq8mZMzAODtsYTk5PCD5ijCa4iuurSy6vrnYMFY4SAjiFvNFKYZWi26xz/47EvkcpSwjgx4F+6HGDo2kN/dAzdCNVZbh7dsr8ZI7VEKNH3rpD3zuapmKxmKF1RYyKbuwYupEQA8PY0TQ1buzpNhtO7tyhUpbaVrSLBZ999hg/OrQyxNFTWcV2fUmgpR8Dm8EjWvPuW6cMbmS93rKYtWzW1zx/Znj09tssFw3d5hrvA9vthq7bYFQCGzTtDO8iVdVgTUOwWScjgbq2rAVcHJEwcLqccXl1hXOwnLf0BJQkzVdbt2w2a9brDfcfPaKqK8YAp3fv08xajE5gkNVqzdXVipOzu8SYghyNo8fh0FWFMYZN5wi65a9/8hnnl1uCH2Au1LWnGzvatmY+b7N8IGw2PevVikfLE+azmpNZy6KtaWrDbN7S9xo3JoCDNQZTWWoNy9YSXFe0HohIAq1sR7qh4/rqCqXg8eMnfPrsihdXFyxmBqU8SgIKjzUKrcG5AWMrtFb0nWO12bDebCFGPJGuG5DKEAfPdnPFfB5ZLJYQNUbXgOL6+or15pqAgm4EFThdVlgj+LFPQC5J62gcRhBJ+jXAWItzKdSRMj5jGJLDVQwp3IpSKax3CGmdKyLiBIxGRQg+6YeKLlKUTspLLQnwock6zYj3kXH0dONI13v6YcSNIYXNdsURLMuhgaw73WkuE7NxBnukPW0fX14EJKqdLhqEEOPuXikqlqzsLIYdybo+yW1HsrZ0Z6iiqP5AJuFgJhtx8fbe78xJvzO1bqQ+kMyCu9vA93t8ISN+cxx5k35N0w1554ugDbJfQuW8E8KIkYCPdXqX6poQFX5zha7vMsYNZvsJavE+wcxQDPidg7RC5UWtl+9jN5/RX/wUu/wK5uQ94ouPcXdnmGgQq8B1aHuHZHspJuw9oOJVurRdO2PW9e6+H7u57C83zaUiyX616y/JdulpNmVfKn2122Cm+qfyRNlDJ0ZmIiLxRjml/jHujePTX4+1O974HHNdJ22Z3jdp5jGAiqSG72571XZ5U7aTXRuPtWX/vbTjaO0mnycbuRze+3qpnHf39ZqOXjz4fuPJI/kcB/mU33c1n8gZt93zcp8c1zu+NDbTesnLsnqcyAgHFFo38y8ZvNT2I/qQl+T3MitKO9WNvpkO6xQcsr84ka0O5/LOWVdy/SMF7lVATVKKLkeCidFVSr1vZBsn84CyACdrcJIHe/kwEglKYSPUEsB5lEnMG0qyDBXWiL/Aq4oYe5xaUJlz2vGazt5F6FChARUIoiEm21nNyNv3NJcvGkywVHFNQDC6xcRLnDhi6AgS0p4pN+ds2RcOx0qV9XxzK/uFpV8L4EfMhmchCaERIQSPSGKvECZKWqXxwaM0eNH0ITKGsNtsQkwG29lixnMvtCrFgPUhvRBra7l354SLFxd02wHbWi7HwKb3NAK1yQYOJYWUIRv0hYpEnyiAqAQ2MTYBAMZtZtQQUrwgYxBTIa4nxhTmRHKMlCjgo0LlelsNEPEZLX3PRE4NVFahlcYpiM4n7xaSEBN9WqQhktpPjqKV0euiLJCoSLVSJDaHkKKXlvfzwYaV5nRGfpZNAdkhN/bv7P1Gkl7CeQFL2abyizRf2P1WNoGYzgkxeiRTwSYAjLC6vqbfrhiGEWM0d+/dx9QN2lRonQNM5LWZ8koZl40ohZ1ROBcZB48LAVNX6GaGriu2T59htMIYS8gvXR8SbawvjClaMYwjbkxsKi8+/ymb9ZrPP/wBtRmpqop37t2j3zr+2b/6Q37rg7f4X/4Pf5vBB/78oxf8sx88YXFywj/81nt8ulF89NljokrePUWJs2hr3ntwj/V2w6Z3WKWpjU5MKVrjoqdWiV2keDkkI3sSGCnGOCB4DRr6GNKclDR/PQFC3OWR/SzZeQQUodlohklM5ISHiHTbFVppqqrhN7/9Lu1W8+TFOdoYnEtAlEpSHHbvA1ebgY8vtrx/d0HnLnjy0V9StzPe+9p3UDYJpCHCk8ePWSwX9EPPqCq++bf/IcP6ip98+FOury45vWMAkwT2NHPTgTHucJX5sBzLuyW/jCRRayJpTsq+vYGIl/0hG7UHD6Vpuqe4Kgd1n/Mr12IIICoDtvK9XlAE4hjyHpbXQgy7OU7+zu5FGSmcqCLkMD+JylfFQAxJoaQK+AQQlQBkEiJKDQkQhEdFD+gd8EQkM52Ix4ggOLQGU/al6DE67VvOKtyw4dHS82LMVMPeo01qOxIIAm1d4fstVWWxvueq2xJjSLF2cweqST+VA8RU9BEkL9X9vnH8IPgm/SqlYyCIYlCdMgKUA9XU4Ds17paUFP7jDUPpNIRKAX0UQ3kxMhdj7hRYUsqeMgyUfIqhuxgqpsbfpmluhHqY1qWUU9own89vGO6BG3UoHu2FtWFa/pQNYhxHYozMZrOdoXl6b+mnw1AxxehdGB0OjfCHoTambBmHBukpA8Qxo/50bKfXjwFEpgwX07kyHYsQwo4xpbCllHqXf4fC+TE2gcKGUfI5Bhg6ZNSYtu8w70PgwXSO3wbUOQQalfEu905ZVbquA9jN6+k4T+eOMWYHljgEO02/T+dvmdelHtPwScMw7O6d9ss0n+maHYbh6Bgc9n/5XMAs07Gerrkyx6frtzx7yGQwNSIWcNBh/5Y6H0vTMbwNaPBFYKVXtfU20MPhmpimQwDNFKDzRWkaPuVVz3wRiGWajq3PY59fN902lq/zzC8yve5Y/rLTqxRWsO//1+2vVyloX6f8N+lN+kUnKVpDeRlsm9J+zpYQBxInzh9ZqxljwAeH947gc9iXEHDeJ30QIIodGMOYFCbB+5CcObTGVjVNU1M1DWa3F2WP9+L1j0+g+awHSEwfinHsGN3IzkMupPC9qFSWmjUgCucdMqR6J+BHYplUKrFrxphDg3Yd680VIcDJyRlKJ5m6hBpsW4vRmuBDOvdEoet7fIg47+l7C5ue6+2a6+2K+/fu8PDBCbO2RkVDZWvEBEY/sF6tEpOnTmFtRGm0rVgsT7BNg1Im6Xei4PqOwa0gOupKaKsq7UGicxgby2plWK83BBc5W85gISijMAYGNyLKMl/M0MZwddWz2Qz4sefO/Rlnd+5wqqHveq6vL3FDzziMjH3g3L1AGZjNF9x/+IC2aZjVFc+ePiGGQL91zCphNq8ZPGxXW0afmEyHbsvDhw/QRrPZbBi95+LFJbPFCYvTBW074+Liirgd2K47FvPEWNfUDZvNFmsrlDJo7XDB4WNA2wpUpB87huhYNguGF1eYKjFOoFRivPSJ7bXrBzZdn1hdrMH6gNIa70dOTpZstj0imqZJ7C06s0k0sxYfHEZXGG1ZLk+5vPqcy4sLNtsOj6euoOoC1kQe3jvhq195l8oaImkNNHWbwq2IJjihMjXWWEDRDQPRj0DAaovBc9JUfPDOQy4urxiGLUoLpkrhYJQo+m1Ht93i3ciTFxc8uVix3XbM7ZxKTApTbQy6qkEZIum8YrTmelhxdb1hvdmgteAy43LY9mgxXKw3hBBpZy1
1nc71q9U1V9dXdMNAVTeMztHMKpq6RkWPC44QA9ZWxOiTM5/3DG7EGItSmn7sdzKUD55hTJ+NNiid9DZJ9ZX1XMoQTFZKxUgInsJjoZRK4Z+yw00BpoUY8F4Yh5F+9HSDoxsCfTfiRo/zieU1hgT+2AE+QnEAS4axqAIRtdPZxsIWOzkfFuNTAq/FFO487g2bRcFSmIaT6TXrynahoPfGmML4sdOBFV1V2auzDi2VrUBC1q2qSTn5ARVBwsSeUwxISQM2DfX7Jr1Jvy5pKoMlPXCcrC92gIVdJIasKC3rKJBsIUN/iRXBSaAOkVjVuG1Anb1Pv/qEpj2D02+AslkPbBAfidjkWbxbq55q/hZGP6e//DH2zleJ9RnD6nPMyXswXGLdFRpAHhHCHmw71Vsck1f36ch14QYg4jC81F6/NzEy33x89zdvKxNd/1S3fCjD7PeolwzqIjfyvVHo5Lf9pZflMZmWMdlHpRRKsg3eptqe6jV3c+Xg++vKca8DgkjXpnlOGVSm+oKi45nkLcmpNJYBuL0mL9X/ZjX2b4lXPT9t180uuPncq5zNjub+GvfeuOc1sr6pC9nX6+ZNk8/xth/210T2Ot/pPHu5/jf7U25MN5nMxrS+DkEhux0pFkDGfp3I9PeXi9o/94rm3ZhP099ycTd4XyIYMagAXhskuuSCHTWCxymPdhoZr1DGI6YlhG0CjasF1qyx7ppoW4Lay5sJUu0hCk/OBbxPbHP2lBhBxxVRDKveQRwT0HZX2xu7xL4tU11YaecvSYXy6wH8AGKI1LZCtGXteiIRHbMyNE9sV6hgbDJiKiWMPtCPaapqiRkZHVksZoxRsC6AUvRDUhqIEqrasphZVIhEJZw1wqkSnC+Ip1SvSqdJpJWglUCIaFEEFfFjJDiPqgt7RkSLsOk9bQ3EFE4jhFQvrZMC2eRF4Yk77/7RCypF9aBSkRbF3Cq0pLAjujJEFwk+gTZcCDtEZKVB2woxNUSD9w5Hil+JgAsjIhC8y4bnuGN9iETI8XTJoIzdS5abc3qnsCnxv3DpgL97I6eDf8yH/jhdNFOjcAaDJICDRasqLzXBe8dw+Tmtv6YaOnoPQ1NhZ8skQ+l9nnuGkcnyjHEHPPU+GdBrramMQdkGRNN3G4xJ7CwxM6LsDBCZvjVkhpTROcYRri9fJNrVCM/OL5lVmoezmmHTcfH5j7iw1+jxLUI/8OmP/oy/+Nd/TTM/4dun/x6fXkb+9T/9x3R99qwQTRTh6++/y/2/83v8m3/zb/jxs2veeXCHv/ub7/Ivf/g5V32kaSp+62vv8be+/i4GuNgM/NlPnvK892BqjN2DSLStsXWFiEZlBZfRCqv2Y5dAI1mJJnnzFUAl5hxcNrTkEEAhRpQynD56l0fvvc//7j/+P/BP/8k/4z/9z/8Luo97YgjE6G8YO32MnK86tBIe3jlBXVzz0Q//GGMtj77yDcTaxLxDMfoFrLG081OG9XVWju2VZvuXkOz/j3vAUxn7KBEJKQxRmduBSExkMjvAC5LmBDshWO2UCHuKu/2BKCEa0/4TQtiDGiaHxJgBJUk5uVfQxyIs57URmAj1u0WV370ZlCO+zONJwNfSxpiAKSJFtM5rLK/phCkJO8GdUtddPiVOawLAaYn0nSPWkaFuWV9f4L3DqvTq9KS43CHn342OermkqdsUKgaFURa0RpFCS6kJ4KO8gPf8QTHvD+zq9ib9aqepUfYQSHAMQT01Ch87rB96tB8TRncA0QMj6DEQwtRIPL3vUKCa1vu2+pd0CKSw1u6M44eADBHZgRlKHY4ZhqcgkHLP4e+3GWWn4JQCMCjXD0ORlM/TZ6ZlT38/NjbHxqOwlkyvH+Z32O7SF9P6HdbzGMjkmCA8/fxq5cXLwvO0foflH/bztF5fVM6xth1rx2E+h/cXoMqxMTx85rC+x9p6DKxy2OaSvqidt/X/oXJjqgi5rbypAuqwH4+15bbyD+t3U1Fxs4xjf18FGDhWry8CfxxLh3V4nfK+rCLky6RXgRN+njJ/0Yqbnzfd1q7XVcL9vGW+Kv0yx/dNepN+USmJUBPZKBZpIO6VKi8pFSfzWgFRIYrEPChJzxBi2Dkv4D0ujMToCLgE+oiC1YbKaIzVKeSINoBHS4DoCSaBNJTRGKMwquyxoEMS83x0SU73Y2LQqgy1SvLYEAOjGyEIWiIxeELWM3iBumoQki4ogTo9m21P7zpsVSXWDyHrHKDreryHxWLJcnmSHFEyI0AMnqgi282W8+fnDP0WYwyLZcvyZIlETWcj1twlhsSwqbWm63sGN6ICVFoTBPphZHAOW1WJQcVWVMsznNVsPRAEkw240TvEWpr5KVXboHB4NzB0A6IUtamR7Ahyejqn2654cX6BrWfUTUPoBpzrUKKpastmM3B9sabreq65ph8Gnj274OFbb3P/4QNsO+f8xQuGbYfYyOXlFeerC+6e3eXhw7dBV+iqwdZCU9Vcq4E+jpzNTllUGmsiaMN2O7LtRjbrNR989QMePXrE0+fnXF6sOX9xwTAMzOYLZg422y3dMDKbC0ppZk3LpesYfU8Is50ORQlok6ijjUClDFosoxNGJ0RRtHWFGx29RPqhI4SAVYIBxsEBgsRAZWZ4J/iQzt6mqagaQ1NpIKKNIbgEhIgx0krDenVFa6HbOowSggtstyOzytDYhqZtCTHgfJpP8+USU7Ws1lsqk0JV60oxhshnn1/QthVGjTx40NC7jqg8i7NTnl1c085mtE2F0ZHAQNSBqrE453l+fsmqG+lcZMgqRKMV2grNfIa2idVYxSqF4wkO510CdGiVGFFCwOIZu54NjherjqodGbYDp2f3IUbWq0vcsCWpXFOo79PFIgFH/IgHyCzDbhyTbjWCVhZja7ptx/pqjVYQaoMbO4wqIf90ZkFN+4goQVQK/6Kj4MeIz+DwtIfpFB5XNKJMBniVEDYw+si2H+mGka4P9GPADQ7nPDEMFMVOMuilUMzBZ1aibJJAJ2/WpFdKG1FUar8b7hzoc14iSEiaY8msv1KUwGX7nITSTZtbnNp00t4Si7Z1rztVRdm2s3dIDlucdGo7k6EkZ8i9CXHPUqJi0QvvDT5v0pv065BulWl3a2qi9z3UGRRbi2RWbgBRmO4Kaz1df4EyM7xShLGHsU/2DNWk0FMxMXAZCSjdoDQQ9E2jbPDY+gGomnD1U8LyHarrLWFYoasTtk5j/RIVU1h52OuYSnX3TEDpPHdo2C9Alxsb0oFx/Jiu7mh/3ijp5rWpberGvnQopxUbx+7hyedsI5jWo4zBYbqR787Cn79G2WcdJzaV6U03Ho0v6R1u04ncWodb0tE+uP1u9jQO095+ORwNKEqYlsO0r+/02rT++zP2XhaAm31zbA7cZA2JE9khTsbuVTLx6/bHF475l0ivyuv1dAplbQkxykvzIuWTAAo3+7O0Fw7n9SvLvDmdy52Tvs+/Hjw6KSF/m4BJ4z5MTVkPAvu95HCZiuRoA2n+qeBwukbEI2KBERkeY6sZKA26gRCTU3u0jGpBGy7YOIPSNpftiV
GjomXE4f2cpfYEEUJscSoySE3r1lR6y7brMRSgbga/HnYSEz0wx2ftLzL9WgA/gByXLKBInhEJgZMOu0ono9DgNZ91I2OAWS05nEoEldA8UcjxfYTFvGWDIKMjKhiGxJihdEEEBUTnxRIT6CP4ZPbVIow+ec4bLYlqMMLoAzp7khgjSBYOhuCRCNZqWqUYxjSBKm0IWdmglMFqA03D2ckpZwqMEsxdwSFYW4FOHsCzWUvTNswXC7StUUpTNS2iDdfbnuBG6qbCqqQhEa3QjcUaiwx9Cv/gPWenI9cXV5y1NcYIOvbgBiKKfnRJ8CDRewWfmFHcIBkhb3ZGLqVTXEvv05QvgnGInnF0Cf6hdRJcBBC1A36IpPwT7aoDCUkpEyNkqsuYd4fgPW1bE7ueu2cnBLNk4wUlOil90uiUnSQZ18tLV6aGg+Q9IyqzDCgwVY0Pjm51RSUgSnA+sO4jQ1Dcf/QVfvd3t/zpX/4YqxV+HAluhJji747Oc/5nf8KnP/oz3n5wj8U3foPtZgMRzi/X/Nlf/oTPHj/jr370mOADru9Ydx43BBoJiAqsx0jAYbRmc/GM7/+Lf8qPHr/guve8c6qowwmffviXaG346nsP2D4e+Z/9n/+33Dmdc/7sCfEf/af8dz/8lE/Or7joRkIYkBhwojJ+J4E4lFJYbfneN97jbqu52o58ctlx7RVBDNoajLZoYzB1RVU1CIKpmgQk0WnOvvfoAYu3H/Lwt36fO1/5Bu987TF37j3g/PqSi/PkjR0lKS5UBtForTlfdcxnZ3z1vYd8/wcf8cM/+Ve0sxMevvsBpqr5jd/4Ck8ff855DNRtwzj23L9/j2F9zXq9ygQzCbwh5UAVQ55HmdkmDXsWZiFzVOB3L54IEvEx7uYHASQIIfoEkZD9Jh5jSAeMDB4JGX25R5nE3WE+7Oi2cqY3UgZUxczWQlEApJBThXozpimc2pgVBTGGvO9BYbEh7ksowKlS76JEjBmUcviujiEBoRLTkJ7kkkL6eITlg/s8uFfx2ePHBD+meyV5qoSYlBHW6LS/acXm+pLt4KhzX4yjI7QVc6u4ayNDgM6n/it6h8heebAXDoQEL3mTflVTjIlF4DBcxjFj923hEm4zAL/KEHvMWHtY3pTVoPw9NP7FGG/UvbAvHOY1be8hKKIwbcDe274YzafhP6a/TwEWx8AmMcZdOJJj/TT9PA2DMu2XY0CNUq9DZospW8n0eumfY30//fwqwMix8TgEhZS6FQDNlC1kysZw2H/TPKbzYNoft7Xri9Krnp0CZ44Z8Q/LPgw7Yoy50ZbDUEnTth4ychxTLLxKwXDb52k5h3Or9N+XNUgflnUMIPE3KR3uTb/sMv5dzO9NepPepDcJXlbI3/zl5XBgRQ5i8kzSOSZHmATcTwB9du9HkOxJr0hOOSpKcubQKoc5AB8CWoGtK6wC5z1FbPPeIyGfnTITgA+OYeiJwwgkZg83WgY1oMQQFWitElur9ymUq9LJ2CsQJSSmTQlobRIrhFa0oUXpFI7V+5FYQo21DbP5EmsNxkCMjmLQjZJCeSoMogzb0dOt1ry4uKRtXmCUYA00taWpDGI0yhiG0bFZbYmjB+cxdU1d1zSV0NSCNRZrarROYYgFqKuKqCKjG1Ha0jbzzDDXsbq84PzFOUJiGdGmo2oqyAykYhK18maEaAxNM6OqBIkacGit0HZARoc1Gtu2nJ7eARMZXM9ifkLTzHGjJ4yOsztb1v2WzWrNi+eX3Ll7SlUJ2tb46Di9c8bzpxd8+OMnfPWDB9y/d0rnHGdnd3j24orgHM+ePeHuw/uc3TlltphzdbWm2w6MY+DRo7d5/PhzVqsVp6enIGAry3w243p1xbydo41Osm9wCJ5ZXTNcC1ddj7YjRlVs1x3hZM58sWS73bAQi5IUSmjWGqwVtttrlqd3E0hCgbWGKgS2/UBVJWYOQpL/ta4IQFO3dNsNvh+JbqS2CTyxXMzp+o4xCGOAi+s1l5cb6noJyqEQmnZGGAcaq/j6V9/FjYmV83K14cXlFe+3D8GPKImMIZ0ljbZ0XY9Eoa4tldZEF6iMJTRJv9d1HcF5onNUKrJoKmZ1TVNZaqvREvGux+gFxlRsuy1NU2OtYdiusSL4EHiy7ggkmScEj3eBGJO+8/r6iq7bJqZTrRj6nqY21FbjhgHRioCiqhvEOVwIECSFSFKR3js2fYfEgIoK7wPD4DA6onUAA6MbE+hDJDN4KEzUeO9IbEJ7AMbObKF0ZgJKOuXCMtQPjr53ie2jcwyjJ7gMTMvIiLQPeZDsjOY9MbjsmqOSE5bPexqS2ZQnQPNim5s4JcSJwfWmQTR9Uwdn0J0BQ2DKlHzo216aXAw1MSad6Y0zbfaILU59+zyyDjaD+yTrIn9pHOhv0pv070h6FSgeSPuAeplJ4khOez10+konilovGLdgjMKPG+gvqOb3UKdfw118COsaa2eYoUfqJVrNEhPZQYFKhDF6pDpFK0O3+hhp7iHbT/H6G4k9SbXJCTQz7ye78qF8fvh3ois4siNNAn8f759X9cjk1rK9lIyL7hy5qau4oefYZXAIHSl6ctldPWqwn9T5mMx8yKpwDFBSni/pMGT0S/rLSVm/aP3IbXN1p5NBKM6qqUlTUIf6ufUGqbzMQiWvDsV98OT+nbez892u8z289kXlHOq8brv/ZwHjvKq8af1u5l2cH4+VXebG9P79e1byu3t677G0y2+yDpLlOz9zg+El3rzMpJDJ8+X8UGxfcb8g0lQ6Up2dflMZ5vUGGXvW7nM28layswnY9RNGu2Bua4bBU1UGQyEQEFS0dPouM/+MjT9BqZqAwokHsQmKLwEXPFUQehUTQ3/QjLElRocJgeBctnmn/tzPuTjp4tSQ19nRf970awH8SIdxEr2iJCR+jCVkQCAEg8QUz7EbHaOH79w5ZTZfJiGhVUjf4babbPAUmtpS1RVsRhAYvE+H+5AoGmOJdxIT8MCncyuiFC4qlBYqk0AOojUmMyNIiUUeBLQmKEM7Bs4epHiqoitQmvZkiWlmVMZg65rN4CAEjLUYUWijGEPg/jDgxpEoMHRbIkJtDO1slgUUk+ovCYGtreHu/TsEDESPG/qkVNAWjxBtgw8RU9eM/TOun3+OaEusDF3MwoEogne44Nk6z+r6GpxPYBKTKByX8zmz+QIkhXpIdI6erhsYhoG6rljMTxiDT4b0UYhuSPGTjMGFiFUqh03xKUTPOIAoYmbh8BFEm2zgErabLa67guGaVTC0pxaJbZ4kZYDgxqaUfszG5SRkxexxY5XCBxgcSNWyWm8Ig8PbGtOeYk8e8HRcMkjFJxdPedzB7OweZ8tlVj4lZoPgPfiIlcjcCCdtTWNTjFgB3DDw9mnLd77yLR49esDz//YH1NWMd9uR1ccf8nc/OOV6W/P9n15QVRX3TxZ8fnHFttvQDSNWa1brjj/98DGttfz217/COLvHUM35o8+WvPiLCiPv8qT5j3gWXyDtllZ3vP3wc/72bzn69TV/9dGKy65CAGt6Vucf8n/5P/1v+Lu/+
w2Gvuf/8X//f/L/+YO/YDPGDFzYH9CSwKf2fZtOQ/zmb3+Xx1/5Bs98i6kb1s8uefe9r/Di/IL19YZeOkJMsZLbWU30kaHrCdHz4efP+UQJHo24nsef/pSN03zw1a/xyfNL+j4wDp5hDEQfaR6+R9f/GO88264HUST23T36s3xWWQEY8psu88ykw2dk50kRYjmYRbROe0hBVQvZeyFrCiemrf2+EEEK49Du/RfZd1y88X3yCmbPHHJMRCfXOyMlY9yFSilTu5wh443DZDq0Ju+3PV3bDkwRynG7ZKQyJefksJwPDT6mvfXy2WP+6MmIDxGcpx8c88pQODmCUmlvBKq6oa6TUjJKPirkUEKNNdxvoXOB8yFmRpa4CwMlpBf5vrsiPW/Sr3KKMe6odw/DiBQhqKQCjjhmeD88LL9OCIYpWGOaR3nuMDTMYfiZcm/JaxpyY2psn5ZfDPTTckq+03ApU/DCsfxLfaYAhcNQKwWMcQi8mPZfyaOE9ij3HoJeSjock6mRf8wxwqdtLcCDY/kdAzwcA3+Uet0WXmQKgCihSA7zmIKKpgqA6VhMx26a7zEAR7nv2Fy87bljDB7H+mLKajMNDzPN11p7o8+89zfaMv1bwrwcAkCm6VWAltuAQ4fjdwgIKp8PAVu3pcO5cpugPq3Dm/R66U1fvUlv0pv070J6aS8qVgTZy0aHP+cHs7yTJA+N4EUSO6FSBFFoIYW0VAqjNbW1SNOivMPYiBZAAt4H8CBWo3WDrTQmxOzxnwAhAYgxEMYUDjf6Ee8dqCSLCRCcYz2uCEEIKiLKoKIQ0GiTwqY02tD3Pa4fUZXszroAba2orE4gex/YbsGHiPcDtrKsN1cMQ8TH5GgUAc+Y5cHEDuv7nibFc2Wz6Riut4jR2MrQ1p6qipwuWhbGYuoK5x1ONFEnVtkxRmSEiEN0hVGSopRIZOg7Li+e4NwIRIypqKqatpnhvefqcsVmNTIOI9fbNU1TM1vOOVnOqW3DYlFz7/5DAoJznkAKS6PQ+DAiQVgsZzx9+pR+6HDDlqtzzx19j81qTQyCrStms5rNJkAH89kMawzr9ZoXF89oa0sY4GrV0y5h8Ill5ZNPnnDndMZs2eJHwSpLc3rKtuu4uFyxXJ4xn9+hqhZ03Zb1NgES7t+/z8XFOV23wegaa2oG7fD+muvra2ZNg5LkwOLHBFpvmpbw/Dlh7KiqSN8nZ5PLqxVGgTaw3WyI3tO2cyRGZrM5MTjqxrJcLjHastqsOF3OAMHo5IAiMYUbInhiCBhtIa6p6pqrzYambjidzVBDT2tSmOp5WzH2WyqtuLg8RyuhsoIber7zza/zB//9H3L//h188Jw//ZQYtswWBiMaJQGCp21bXEggBALMmxmNrUEUujIIBjeMeDdSaWgloq2wrBXLuaFSifmm0honYI0BH1CiWc4WXK+v0dpQVTWrzTo55ISABJ/ZWpIzWzf0XF6+IHhHZS3j4HBuZHG/Yei3eB+YL05QWiUmHAkJLFVpCMKoPP1mRWUtQfdEoO89TSsp7JA2+LhNa0oJ1hqKOTKdY4tDT3ZeC+WzQjTEHDbBhzTv+iGFd9n2jq53DH1h+kiOLkpynoFsRAhJBxJSiCeKDhZDjKC0Ju1GO6TbnvmjsB5NRLwoKgWIicX4WPR1u82WneIo3xPz/lv0emXnLey40z057lhDUnjwnZ2nGD6mNtXpPi57vZqScNOQ9Ca9Sb/C6XUMzIea4Jta4f0HJUmva/2WyIisN6hKiM/+DFEWpTy2f4E/eY/x8nPinTlmfoKIRZTnYMECiT1NJIHSopphTr+KXH3KOEbM9jyFKNcKiue95DA1UvaY1+yHCfzj5+mrpK/e7W77+yf26rTdHZwliy5D7a+/fObcG3V5jTGb1rmM27RPUpunim0BiS+1s3xMOnv1SgP9VAc1bddL9TvS/mM6uJv9/dLMO/h8oyaU9+TrlF/6dq8/KvqoY+Ud00WlMuPB/D1+302d7rROX1Yfcuz5LwKC/DzlTcs8Nmavzq8AE6YhXI49X/Ln9vYcsDDezLMsNrlhH4u7usfdNjOtrcgU/PHyKB7T8Ul0dKEFqXFxZBCDFUPcPiOoFmtPGf0zHC211hhDNgd7PImVzKm7tP6CbTCgBRUiqEiMiugjURlEPJ6kJ9WuZ+guWZ69h1af0scOZH6kxtN+L/9ef0/8WdOvBfAjKbdHtEDXr3AxMA4jJyV0gSQhybkBSMr33/nOt6jaFtya2loeP3nO+SefgEoI88oY5ssFrh8wtaGqWzyGIBVSL1GnUFc1mJpw5thsB0QJtm4xtsVUFcpqbGVp2yXrYSS4gXbeomwLPuBCwOX4kD4EHty9C9rivEOTDMbeO8bg2W5WjNtr2mZG1TTEAbp+pO87YvBobRnHEa2EIVhG75JHMIoYR6wWlKmxlWF0bWLhCB6ip60NVTtn020Y+p7teo2xNZvNlhxkgsW8YfCRbrMmRqFtK5qm5erigqvLS4zA2FfUVlFVFt+viL7HVhYVKsQYvA9cX10zdms2RrO6aKlmc4iRfrPCqERR5Jync4HtakU7b9lsNonuVCnquiHGgFaa4AdW3YgozYP7D/BR6DYrFrWmW28Q9Yw4e7jbSMiv2MSgcIhOlJ3iyLskRFe1ZYyRzgkew7MXF2z0KfbkIa4RfvT0nP7pR8zaVMd+uyFEWC5PGJ2nGx0+gO0uwA3MGsvdk5YHd09YzmpeWI1IMnKf3rvHnbtLzq59YneZL1h3A+76OU1jWdQV7581fOeb32A5q3i22vD2g/vcOTvlzp0zTk9PMVJx+ck5//SHn/Knnz7jf/r3vso//i8+JtRv894jTfBnfPe3T3lx2SNiWF3/Jn/0E7h73zLqLVerJGy17Zb/6H8f+ODb34OTOdWwYggZzJClyQJD0JLi81qtkwKr2NSU8Pizj1htVvzgT7/Pv/7n3+E//J//r7nz4BEPHjzi+fPndP2W4EeUMWgFPva0lSVGRT/2DC6PWwx88td/zPavfsCD+/9H/vqP/jlf+fbvEYLHWotWwvf/m/8S0y74+Ec/4Icffkg0NjFN6BQD1iqNGE1tDFZbtDUorTFKo7RCa5OUbpmthuyRpZTCaENlK5TJbEKU1+UOMpEwErt41ILKF6VI3+VMXEKmIElhmIXhlFUR4MnsJMXglnv7xjs6vyBDQFAZpCIQ055HDlt18zidr+XDZRHw03kpMRGF3RvX5/pz44B749gnJBrhqsaLxWfFaIgx1T/Hs7549pTrwdE9ecr22uLGEW1T34Z8ANBaWFQaRWQbSltiUnLkg0dhAYox8ZxcDscPgG/Sr06aAjTgphF6itCvcizxkg7vmV4/ZHko16d/p2CAaV2AG8r5af6H5UzTFARxGBLjGLChADX6vr/BUPGyQBhfAnCUe6ehWW5jXjhs2zS0Tin3ECBz2N5XCZZTQWh6/zHWjEMBZgqcua1fD/tWRG4AUKaAoAJwmPbFdIwPx7rcV1hXDufSFAA0BYxM2zgFVEzvnf69DXRx2O/HBPXDvjmc04fz89gYHoJU
DoWrVwE/pnmWdBj6aFrX6W9fNH+OpS8S1l9X4P5lpy8njP/y6vBl+vZ183zdttwGqnuT3qQ36U36opT2jgQSfz2FZkzGCaZK9/zuyf/Se0ijtaA1VNYi1mB1iuksOBJltcKYCq1TmBV0AONzCFwB5xncwND3RO8I3uPDmOWxSFNpzC6YbEAJVFYRgzCOHufGJKcVum8S+ECbDBwOkTHrIdq2JYpncCMvXlzQdSPOeVyINHVNVVlCSAxujdGEKIQ4cn29ZnW1QmtFW1UsT04wSlHViratsgLcEUJgO3i0MbT1nHpZASl0rlYGazXz+YKmqSF6xmHLOG4h9EmvREWVw9FEItfXK54/f8Z6tUYrwVaKR8s7LJYntIsFgiTmSZ0cgqyy1LWmdyMxg0C6fmC9WqFQ3Hn4FqvNim7TsdluMdeXXF1dIGI4OT1jeXoKREY3MLqB6+trzs+fg3fcu3MGIgyDZ/X4OsnHMRAbw3rbc3G5ZrE8xVYNzvUsl3Mwls1mQwgBYyzWKhZqzmaz4fT0hNX6mvVmxckiUUO3dcu8mTH2W2KliQJK7c+DSoToEsBAq4ixkRfnz/HOc//eGSEmp65ZO0fbFFrEGI2IorYKaw1u9CgitVWZ8TYBso1JOoUw9gQ3YKoKU6UxG4YVD+49xFYRbQJ1rTk7XTCbtYk1dnNNVIGmajAooql4fn7Bk4sLPviN9wkOzp8+wxrLMDjWfUfITlt11RC6yKyZYVWXQtvoFOK3rmsk9lig0sLb9+8wDIm5pVaByiicH3JoosRWbOuKEKEfBmaz+U52UVphq5q6qfH9hiAjVVuzXCyoq5rry0s211eJuUe1bMees+UpVgtPnj1hMV+wYI5C2PQ9WkVmsxlKFG70SBw4mTWI63BVAo60bYv3nr7vEUnspqIUTdOkM7/SgBBEk8Izpf3E55i/6dyT9EMheALgneCcY+gD29ExDIFxzKCPGPchUNIWlvaGLJ6FEAg+kJU82baaZTwJaX6FAOqmsVNEJnthcvwLR89v03PaXruF5L0pBpCp/JnltjzHJe4B3bsaZF2q6LyP7/Ss7A1Gu+sCktmes+FI0C8ZZt+kN+lXOU31EseM4mkNy+4blDWX951EWU+QgKFHaU0II02zRBbv4s8/JdaPCPWCOG7RtWG8+AnNo+8QfVrmQSZBPG4s06yP1qC8Rp1+Ba5+THfxVyjX5eVdwnpkAAp70wrslvZuG7ipI9hZ43ftm9j+9z1wsHcJRVe+r/c0xZJR3N//qiQyZSPaG8l3+cWkr5Zdvq84k07aXvbN3IhpgRS71L6N6fpel1F0gum9gwhal99e3ZYvSsf0Toe/HTYpFToJb8t+S981olhIdnkcr8teRyEvXb/5XHmLZY5/gUPmilheoPuaTvqnfNg7LR8r76aONDGVKaURY3bPHQPHTNOxa4e6t0N94W3pcO69akxfWhsv6cLklfNl+lyy3bz0yw4I8tL1HZg07uxUSb4x2WYYsh2nVPYA5hXT2E7MW7m+MZ939ms43747piiBfnDMIkj0WKmhfwGioTpN2cUOJwtitNTWkkyLiYwArxhEU+uGKlzRxbsICoXgSJE7PJpaklwYXY/fPsPOHjIoQ9sYNv2IrW7W8Wb/6N1a2btbTyXVX2z6tQB+kBASBAFjdY6H6FN0gnx4VaLQIoitEAVaYDE3rFaKurZI23LvnXcZFm/zPJzgFfze7/+DFBM2BnzwXDUto/MwmzFUdzGzGbqeMQd071B4YkyobaU1WhvaWYMSjVZbVueX0AVkdJk9xDOrG5p5y2bTsVq9YDU4tOs4mVWMEfq+Z/SesesS1Z8bGN0CT2Sz7ZJARUaCa4W2KVYqccR5T20NTVMxuNRHzlvkxTOi0kQ/YoiExZx+cCnsR7+F4Og2I27s0Cot9G5QdMOAG3oUkT5sESK+H7ASMUahlceHyDAmYeV6fYXeKDhZoGzFdtPTbTskwuACq+vnzLbXCEI3eJRO1IzG2OThMTqGDvw4sFmtCMHRNlVS1ggQIpfXK0SEqxePUaZisVgQmzuYqib4kTgOu8WWt/a85PYHjAKJjxl17nwCxBhrGTy8GDTbZz0v+hmf+sAPv/9Drq9WzGYN87bFDVsYkyJnOzq6AN0w4saB4MENa5wbcT4w5jihzjnc6CCme7thSDSQLnC1WtFj+Pj8Dv0YqWxg3hj+g//B7/Gd7/4+SgV+83e+x517D3EeVpcbrn7yOc9+8Ff82UdP+PPH5wzdmg++8jX+y/9q5Hv/wPDeN++xvPecDz54xPnTC+azJZ9+fMlHP77m3r0F54sVTVhxcenZDobPLt7iP/uvPcPwMX/3t695/uIcUTHFYU2THGtrThYzAsnLOIwDV9fXDM5jq4q5hWq85Op6w1//8SX/8t4D/t7f+/vcffiI2aefsO7WRJfXa4B3H93ltDF89MljXjiHiz55WOVjXWUUwQ28+OzHPPzg62n9tDOIgR/+2fe5/9Vvs756xpOP/ioFbsnUn5rkTTHGgE2vqESVSdofFCBKp/EXzRgVUZLRzWpNrUCUAa3QyqC0QqlkDNTaIMZQG0NjE7WuU0k5o4xOIBNtUz2MxmjD/5+9/2q2bUnPM7En3TDTLbfN8aa8AVAwDcJI9Gyog9ERpKgrRVCMUOtv6Eo/QL9Al1LrQtHRISkUQahBEiAAggQBNMqiUFWnTh23z3bLTDtMOl3kyLnGmnvtUwcgIIJVOyP2XmvNOUaO9CPze9/v/YRSqAE0FFJSGEMxqNtIofbGyhRblmHkDioFJBnPvehKjAiR4gHnl2da8+L+bzHUMRFBBrm0/aYjv3hS+Jq0pwvDJn7INiZrw/U7LYXNCSFgtKaUkp1z2MGYmcklIYLUSR1IIHBdxzY6JiS1FDMx9F2fJFSJQwie64ONEAI1jLcQE5nGBY+QEi0YXfki/aSmTwKhn7dpPgSzb9sQP49wcHjN88p0G5FifN8h4D4mWnxawHtMvjis//jvnNfh4WJMFrmNKT0mTBx+N1YMGYereZ7ax2H7fFI6BOpva8dx2XN4nOf15yGJZEzqGCt1jOtzWM7b2iLnl4k+Yw+Kw385HZIbnlf/543rcT7jdBsx43l9/rxxfUgwyuSUw7Fy+NzD54+f/Un553o8j1Bz29j8y6bbxvDflDQ+6N9Wzr9OcsTftLZ4kV6kF+lF+vTpU/qMZt4H+bxybUYUUqbwskqhBgUQLZMDgNARERLZIw5EfCUFpdGUZUlpDIU2xOgJIRn6Q4gQItH6RPpwDu8sIdHmUVHSu0DoOrTSaCkxRqKHML9CKpz1eNcnk6mQCJliVbvODuc02O1aohR4mQDbddvTOI+XgqAlKgqarqPtk42jLMsBlI545/DeImWk0DCbGqq6JBIwZUlVTZKKQHCIEJFIQoi4YOnbHtfbdFQMnkBkUldoJRGD8TadXRXWeqy1SfVDKqx1dNbhY6SuS4pSM59OKKuSsqpAGwpdYlSBNBpjSoIXKK3wETqbCCt93xOiwIVA3zRsNttkTBYGZ5NqSyDQdjvK3lBoxdGsove
aSV2yWMxYL7e0bUdEMJnV+E2TwH4h2bUdRTlnfjRjdbViNvcUpSYSOZof0XbdoBhhEcJQVFNiTJ62ZVERQyBEj/c93juqumSz2tJ1DWWhwcVB+UOw3a1BQN+31LMpQgp2m1Sn+/fOEpChJNooTFVQ1BqlBYvFEVJJtE5ONVVpMEojTcFu12GQqKKi6zts36G03BNOur5Hhch8OgEcje2plaKPAdF3VEpiXY9WkrIsQQg637PabTBaMJ8taNsdT8/XnJ2ecnWxxPctcTZhHgTaS9rGYRQcT2vqQuNjAGGQQibAsRTMpyVSFxQCqlJTK41zyX4UQ5rZSmuEUIDEh5DCKgH1ZEJhCoQUTKqS0BSI4FBFwWxa09uOdrvEtg1qUtP3PVop5rMJzXaDbTtiXdN3LXYgUlXTmrKo8d4hhEeoSGg7nG0QCI6OT9FK0NuOKAQ9MZGahB5sFINjTmQgQCjSlJMJRGDYu0uRQtKEiIuR3gWc9XSdp7WR3qa/s2JIDLfs3wenwOA93qf1SZAAzkhEqtG+PDLIl9/cb+7XRnG9jj6zzxfXn8OwH81Y3A0k5uaHeZ29TV4+hRoeebOLbLCS++ySykdEDB7sQgAyhy7+NIv+i/Qi/ZedDs/yh5/vAU5u3wldn58zs0EQvMW2Dc4LiJ7IDHv5gBADur1A0CDMXcSkxrXfxq2fohanCC+RWRl9TBmLSe1IMGAjSuK8R598Htot/vwBKPMMjLnH/8fFOyCw5PTcs/Atfz5zbb5m71yZuR7XDqrPgKzjXw/b/voLDtO4rXMo8ltK+NzCi/El4hqHSnubPBbY72VhBJajhhfPOMtPbi/4dLa659kgbiPajNvleU486b2UkLfUvs/md2iv+yTS000b4a0l5bbK38xrPKbH3wnkSOXl+lkaqQd7Z8iv0dttOOPn3WYn+3H2wOfld9t1f1GbzvNJ838R+9OzfX6zLKM89h9dq3rcIG7sx71IZNZhjdiP5TgmgIjrbnumDnm/AwhJqRStOCK6FTJYYn03hZ6SKpHVZJHWQClpo0PLgpj4skQRsCwwYoXy5zh9B0lARZUeExxR9cRdEi4wiztEZZAhUKiaYBsEd/Z7wE9qv+d/8leXfiqIH1JKKgmzQuHKOd654QjukuymkggEUyEoyoIoFSlMgsRbR686eu84u3MPWx2z7RVVZdC1Y7fb0u529NYS1qthAAeKomC7A5oOKQJFPaVrt0gxeIlKSdtZdhtJoTVBRIiWZtdj+6sE5QqwRclmqxNTMzg2TUetYVKc4kMkeEfwEe8DzkOIAd9u8DHinEfEpLqglMAJibOOEBwEj1I6AbYhSWjGEJHB07c7pIQQPEVZ0DU7iFuMSWFTYnD0faq7lJq6LvFeEGyPEgFjTAqHYnuKQuGcGECqiMBRGo3Wku2modSKrYyoQYax7zpCBNune7WaIpVmtb1ic36OEJHZdMb8+JTVckkOwiJEkkrcbhsa1TEpdfLyBnZNz8QIppUhxkjftNSVQemCZj8Px2/LtNIcMjuJEEKkD5FqMuPk+ISj01M+ijN++Cfv8OjRI3Ap9I+IAmUkslA465hVE9rdjoDm4mrF0ydPeXJxia/v44sjvOpT7FufDkpKF/gY8DGys45t54hRUBjN1Bhmlaa/eooJkdJLyh66R0ve/+Nvo8qCri2R4du0Dx6hu447U42a1tQnBfoDC76j28J2HZnPBLvlhuOjCS7A1371Mzx5tGZ+b8bZq084vbNgNn2Nf7SxXJ03/Pm3n0BZouuSjx9Y+p3juFbcnZgkUxkl3tSookRowdRI7s4rVltFPZ0mJRkRUVojRfJ+Wjdbvv4f/hWmKBDKYCMIqbGhJ3jHbFbzmbff5HSqaduGXdey9Y4QIkJGlIjM6pqyqDg7vct8NqNdPkIrQdtuia6nNgrvPEomYoYi8A++9iYvV5FN0/GNHzzivWWPkelIakPAhogfDsbIJDELETWMDC8UrRSo6FNsWPK+LSZ/rijQOilXKClQQONSnmLY5e03SyKtQ8jEnFRSURWak0mKIx0H40dIJ+DBqCBRWqOMQSozsE9TqKlIkhdNQJpMhBSlkjqJkgPZSVOWFaWS9N4lLzYJQgzqJYOxSKGGKZC3ynEwFgybsyEWq4giHQ5EUhm58/J9WJ3zJPjhEJ+NImGQKxWJBCN6pNbUhWYik6eL7fvsxEL2kkFIjEz5KCLOpw1kCAGHRAlJJeXepPAi/eSnTIA4JFTkDegYSM4A/1gl43Ajfqi88EnkkPHfnyYdEkoO/3nvnwHDx/cdfj5Wr8hA/fPIJfm728gT47Yct9nh5+PQNRmk11oPJDd1o263KYbcRpwYf/c89ZbDctxW1tx+h2lMyDiMfzp+XiZTGGMAcM7dKMO4zuNxclt4ljEhZpxuK8Ph97eRWQ5JJeM2eB7h5/DAPO6/8f2HfT3ut8O8bivvX/Sgmctw2O63jdu/KtLD33SCwzPG+BfpRXqRXqQXaZ/GS+P1eslg/AufcF86v8cY96eCsHehz44/AiU1hSmxpsVpg9IG8ODdHhwlJrK91oaiUBQaBIOjhndkYDUGj7PZucMRg0NqhVQGhcC5QNO06WwlJKU1KO2o6orKFAQlUwi9qAiA85G2dzjXJxVMIbGdQ5gCMFSF5s5JwdnRHRAp9MNmt2W5XGKdx3rP5XrH9tE5xIiWUGiDFCYR55FoU1IVBWVZYoqKGCEQ8CFgBoB9s12xXW7YNj1t2+G9o6iS4sSkLFjMZsxmC5RRKG3wwdF3O2L0IBSgCSKB48FZ2t0W2/cgk/NUOa2QVY0pKoIYAHEfkD6FVvXBJyXPwjCZTlivVwgRODqe0bcd3kfmR1OM1ljbpbNlJvFEgTYSbSKz+ZyjozOaXc/FxVPOrz7AFJrZdMJ2u+PiquVqfcW9uwtOjo4hOKx1aOu5Wl4ymUwwRQECmmbHru25d+8VYgxUVUnXdYNlKpGGQkhhWpztCCESnEPJgFQ6efgphbM9MQTKosCXnuXyikTSCRSmAKGJQjOZzJAyhUaNUaUQQoOyg5QSow2IhiAcEc9ut6M0RQqHHCNCCvCWs3nNQqexdVRVFEiWlxvKouD46IxJUWNjQGkNUmK8RiM5nk0xWrHzntYGmj4ilw2FSnbEtmvxjSWEiLcdhampqyI5jymZQje7nkIrFvMZuzagCMzqAmNSeZQySSVVSYwpkMpgbWqvpm2IITKdTJPaSLfDKJFISlJRT2qUjGyW5zS7DXVl0DrVfb6Yo7Wi3VmUSvvebdMmdZx6wrSeIUKkbRu8s7TbDav1FVEIjo4XSKHo+5Y4OBJGJXG2RymNd56enqIoEUINto1EmBLRJ9UR1GAvSfZV7z12CIHb20DfeVoXcTbg/TXol0Q19nAImV0SB7WP6JP6T/pGgghEkRSEgkj2URETaCvyuYBrvETEawDqWVhtCMwis2Ur783jHqQUyD3ed2MfO6zRQtxcv4UU+/sS6SOXKD/xem1Of/vrz+Po0hfpRfoJTbedteF6Bn7qo3FM9lwGu2zse5rOM1UKoyWCLUiHFB
pfnoHqMTTEGCjmr2GbJdTH6b0S434ajokVnhQ+zkidHASVommukCJgpEAGP8z5MYEiLR5hAGiTYx97p8tDh5lPfTZ+hjRy/fE17WJMdBsqsb89rXjXYc3/YnaIZBMf8tw/nGG9Gz1L3LhptKZdK0IdXjZaGkdtmbEqcWMFvb21RsSQPUnl+Vf/uPRp2uU2YsO1rWcYm59AMvg05IjDe+P+PJA/yw5p4zqPbbyHz2J0fb5nbDcTg9KgvO7TW+xtn6Z9xvUb1/Ov2uHn05bl06Xr+XHYhs/kN9pvpLE6qH4QyeTOGCM3OGXj+0dzJ5/l8jjO2NB1kZ6dMxKBJ4DpcesWoWeE6i5aRByGMm4JIhHLPZ6yKFi2TzDxBKUmRBVAlAQsNk6oZcD3l/T6mKgEwgeclIRuhwkN9fwIJwpccHRSc1QoVGz5cXPsRtv/NdsBfyqIH0pJjBIYJYhDqAZv08Y1jb9IDJFCRO6ZwMZaduslx2czCi2IweNtR60Fi7qkt5b1+gpvW/quScx+Z5Pc4gAou2jp2h1pcEtKmw4LaaAqpBS4AGHnEZUhKoW1lkJrrB/i3keP1iXB+QSs6JI6RNq2ZblumM0mCG0Q0SGVoZCJzBEEiJgOkITrzXq0Lp29Q5KXnExKpNAoGZNMlNFE6yiMIkaPiCnEhSkMwbsEcISkshCCGxiTfmCBRrTUuCFGqlSKtmsJIYHOXd9RlSVSkDxspMJ5T2EkLoCOoLQhkuRRE8ir2GybFK80WCDF+lRScnL3ZZrOEe12kGmFWV3gYwrFIpWinMzQ2xatItY7lBTJODOMiaIs2DT2ptEor+/5tMJouobAartje/4xZaEo7r3GNz54wJNv/JCubZjVFVWhIUas92w7cNFxVBruH814V6Q4wRfrLd/7wQ+4Wq2587kTykiK8xsSkBecIw4hOXyAzgZ2jUUqzfF8zufvnXFvdoLZXhGCQW4SkULLgAkCOb/DY6+5+O63OFluePtnP8vP//rbRC05/91v8a+aLcq3PHn6Eb/wX32VV04Nr7y8wEwqLi+2fPTuFc57rpZLokweFrveMplr/FrxD//bz/Pt71wwP15wetRz3l7gJ3PunbU82TqcKCl0gUBi+xaPpJpMmFYF7z1aoaYztO9S6A8fMNpQmUC3fMIf/Ob/g8Z6WmuJg2JEXRcUKvL4Ys3dO2/wmbdeZ7XZYe0S5wPORaIMTCcTqrrmcz/3SyhTDLGF0ybu81/9BarFKVVZcjpfUC1O6XeXfOWLb/LZ0wLbO0QUvN5qvnZssBdPuWotDzY9H2wsD6ykFRod2/2AUEowKQ1BTznSLY8u17iYPKo01xJ7kqRM4XwyA6W+ZXQIH0DVgUihI0wKxbSAsoAiBIQUyEEGOIYweMNInErGkWTUgeRJIQdgG2JwSUVmYE6qgfgRgV3fY4opL738CjWOj588pjCGWguKYgC/pUYXJVoZHBEbBXIwHEZnsQ58lNggCFKCSGtGWc84fflNmt12CBkV6ZtdKotOcrc+BkxRYhTsXOTll+7hNpfY1ifiS0zxa0GghEAT0VIw0UP8XB/ofNooaaAPEakVdaGQMSL/avdNL9LfwHQY5iQnIUSKBz6A8pnwMSZ+fBIx4TZwPaexqkUODTImH4zvuY0wMlaHGD9rnN9t92eFijFpQGt9Aywfh28B6Pv+BiHisH6HZc7Ek3F5gP1zx8SBTFDIpI/bwooc1nGcxvkcEmJuS4fkhHx9DrMyftbz+sEYc+N54/Z43hg4zCP3Ue6T/FlOh8ontwH6eRyM7x2HDxq39SeRjg777jZi0WH5bqvr4Zg7zOvHPf8vkm4jmTyvL/6i+T+vzM97/n+udFufvkgv0ov0Ir1If8G0N9xHMvHiRrqBOooUPpJ8zI/ZOjLsZ5LKhDUt2mgSVV8jhEyKFkMs+6LQSSE2RqK3tG2D8/3+WcFHur6htz3EgBQR4R1CSpKzTRzUCyLOOrY7QV2WSCHRSiBFQIlIiCGF5e0DffBJiSB6TFlyWs+oJtMUctY6VKHQpkhEAiEo6oLpdMJqtebqaklHZKIVRVWijUrhcGNSL3V4rlZrFos5QiWjNghMqTHGIHWBNiY5C7iAkgJdSLx3zOYV87qkUFBqKAuLDR27nSf4ZCuSQuK8JUqNKWsm0xrbwW4baNoOGyTT2QypahAF0pRobYgRSpFMNN57pACtNIHI4sizmE9ZXl3QtS2n9+4m0kpVYLuGEDXKSIpCUpsKGTXCaIQ0dNYS6UBKTtQZUkuU8IOSieRq19C0PR8/WbKYzTk6mtF3HcYIooxcLa84Ob6L946u7dm1O2azYxaLGUJInAsJlA+W3lnarmU+m+NtT/AeKTzeeYRMtjIfI9pUOC+xbU8MktPjU7z1dLZDSIlvG+4dz3DeUeqC9XpNNZnT2x5re4wqMMrszzhaCvreoqRGaUm3a5hOZ/SF5s7RBHY1xqdQ1uFoytW2IWjFbrPhTttyQsQUksJobO9RUVAXBXfPTtlt1nRdixeCy2aHUpHZZIoxhscPHzOdzvFBcHl1xVuvnWKKgs26oagkRg+ka11wMj/BP7mk1Aqjkp1ECFI4ZRIZKOzndtqXOmspTEld1QTvwDtc19D3HRGF1oa+aWjXG6LvMabEh8BkMmFWT5L6TggUlUFpjQ8CLQ2zcoJUhuXykl27QbieZr0CH6kns4FIEfDe4b1LDm4x0vc9PkLwgapOBColRSI0CFAmgaJC6AR7hBTGO4SAdY6ut7SdxdqI7QPWg7Np/mX7tEQOAJkfAYgZNEnEIkIKBZwcqSPIZCORGRhMaGKyOw1AoRicjWIUN1AXkdfNvL4K9oSVZC8dg1z74A/X998gf4yW6H2G1wBOUvSIqb2IQ4gYhvIkh6L9eSpk0+yL/fKL9NORxufivT2BA9ATrqfUKKX9TUBgQKS1I7oWozQhgKJAT2eEq8dUJYjtdwnqDmG+wMoGKRXF8Wv0q/dQp19AigBC3dxjDerPUnh8kDi7hN1Dqv4KXZ+yqwtAI0cA/zXt44AEEYdvnjO9U71THkrcFryF60hR++Un7m9Oe6NxKfIyd3MNE1mN4C+bRmFG9g5KZBLBmKIxlCXeUuW8bx2T8eL1Hvdm3+dy72++lQZyI/xJHF33nPX0P9VW8uPv/k+31dx+zWG+eQ590jOvr4lDp+y5MVkqZphkyc5Ewo6HUZnzfp6j2ie9s55nd37edT/umufZQ/+qnItSPpn8Mc771oshxutRKPLYzJ9cK+/kfcezZ7h06X4NHK0R8XCU3TjvgReRQii8M9huSaxe3ysCpmB8PVEUFFHw2sLxuBUoPSUKDfYSep8IHuoIpGEn71DyAEJHFClgZ2iWaLHj9O5LLBtD8B7le0xs6FVBROKDGxQXh3I/tx/++u2TPxXEjxAjhhQqIEnHJG5hiBIX0/dRxBRSICSzgA0BISWlMRB88mRXCge03ZZmu8L1/SAJKNAikUqc8xSFTIM7gFQKoYY4nAiC6xOIqVJoiL51WOdRRIxKkoYRN7C3JZ21FFqCTC86MaiRJNa7A6FwM
cX3FCoZJ5AKpSTOR4yUSK0JwafJogrwdgBsFNaHpEhSlLggQFmm0xobwbUts9kcU9cE7wEJbYsQHYUpmc/mxJiUG0xR0neCZtsQYyJZoCQejxYQlUQQkEDbpENa33vwFhFLCpNiZCkpmE5qpIyYokBKRQyeziqUERiT5JU+/uCHGCmJJh10ty5QTArKoqTvGqSUFIVhcXSEFGu8S4SUEC2z+RyjXTLgmBSvdg+/C0gSRKOlKUbapmH5+AHnD97lwcOPeP+jB1ycX9C2barXEPqj80nQ1VrHtJ5wNp9SG8UaResCqtI4H3j44Yd0znHW98T2CW2byBs+BHzMIV+glII2RsrJjFdffY3jox2/ffo9vv/xBRMBs2rBUakolMQVhqvpMasoufredyiVRL7+Ns3pK3gBWsJiMeXe2SkfPlzzne99F3Na8WvVgl9+o8DvVvRG4l2Hs5E/fHAOx6ccCc/y6Y7JvRMWFfR95NWXptSLio8fBC4fX/Luwy1NnLLWAa0LpATbdQQiu77no4sVX3j1Dq+dWr736JKiKplocCFtTI1RhOBxbof0kdD1KVyIgL6NrIj86P33mc1rPvvyXV5/+ZzlZsuus+gIdQAjFZrA1F/RiRkIyaSuKYoC5xNx6OTsLoUQmNkJqwffY1JohDJc7lqmkylfcB9xdL5l1zo+/+pr9O2Op08s77Sa78op2w5WjaMXE1x0OCHwUeNlyaxosCEZ9WIUGJleVHWhKKXAx8C6D1z16aAtczy5gfAxMZLjec1iUlCV5RAfOa1LkgReywjBejyCEEVSd1OKzge2nSP7fPiDzY1UMhnrtAI8m21PUKlcve2ZzyvuHk/RfkdRFBSVGUBljY+SgED0jqdXG06Pa+Y6EkLDplnjO4dtHTsXCAiKosBOFyxO7rB6/B6TO68RvE8eTWSDq8BHKLXGe5vWEKNwMRkq0iwSA/s7IiSUWtD4ZHS0LtK6wb5BRJLWukqnMF1KxOftZV+kn9B0eBh6HhHjNjAdniUgjO+7DTjP390WduMwbEtO45Ao+f6xUsah2sJhSIxs1M3lGBMFDtU/xgB+ztsYc6uqRv5+fDDIpJrDNhmTTg4JA7cZKQ7b6zCsyvMIFuPPD685JKc8r4/G9RjXcdxuh4SW3I6Hnx/28SE547Bfx/Uct8nhv087Zg+VRZ43nsff5zqPr7lNUea29DzCyXi83zZnbvt9/Nnh8w/75i9r1H1evcbteVu9/qamw3nw15H3p00vDO0v0ov0Iv3nTmNs46aBNf0cL1MCUjiEvcEWUrD6/G7210TgERl4/04bwmIQAwKPEgqlkvKFVOmM0toWazt2uxRqRIpE+PchEoNFKYHwCiVEimQtFN5HOueSQqRLccJNoalLTWkimpCcc5WkdZa2bek7T9PZQVUk4j3oRYV1lt45XASlDJ5kz/KBdILSEqUlwmiq+ZyTsub07BgRPedPL7m4uKJzHX5lEUKxaSxloZlOCiZ1iWoMQiqqekZRGMpCU52dsN0UtLZH6ILCTHG+x/uW3ko2TUMIPcGBQCJkoJ7UaKGIIbJb7jj/uKNpOiBSlTVVoSl0xNstu2ARIjCdHaW+imHYywaQirqeIaJns12xWa1YrTaEEKimC4JKZ1YhC6JNzjPL5TlX4YLgIkoUzCZHzGZHHM9mbNodQub9kkMhmC6OqOZTHj485+MPH/Le+094+40zJpOa9WrH2f076YxrOxaLEwQKeXXBxeMHKPkSZVmz223pbYvRNToaDBFvLTF6nPVUdcG22YLt2W63hBiQqKRs4hpm8xmuD2w3GyASdMAYSbA9je0ozDFdaynqkuAlznZMJhMCgRDA6JJC1ezaDlOVROlRxlAqnZQ2FhP69ZTddkmlk4rN1nrulhVSdriuRSuoq5IYPK3raW3HZFYQxTzVxUVEFHRNzyp6To/mhCho2pa6mtA2LZLkbOaxRFqEKnHBYUNEKE09myOu1iDBBeicoG16yqpERY2MCqMUQgQKA6YQaF0O9hHNtt1CdFjboU0iMigp2K5W+N6CitjeMpvNmM2mA9kqIFXNbFKhjUlhe6VCSthul7S7LWVR0DlHGyLaTHBOURTJnulchwCMUljnadsOXSZ7cRErok+hdZ1zuODQSqAzGT+K5Afj03rTW0fbexrrcTYm1eYgBrsniTQmZQrfElOsAiHi4NmawrzgHSJ4EggTBku3IkaRSBWIQWUnopB7tVkpIkl6/FqtIw4kEYHiGVh1WEoTbCMP8Ak/Wp2TXUuI9PwQ/XDXNaEs21oFPoWoEZIceiEDtlFGBMn2fhOoFfyXsYN/kV6kv3wSYkSAiAdntuedxw7IH0KIPQchxKQK5W2LEA4jS1rZYzmiDxco9SrafUDR/4CwfIQWZ8STzyHKCaWd45YP0SevIuO1fURIiZAggqDbPUHZDWUsCMEiz75CMFPio+8jtGbv8DiqQ1pPRsSD50xsH6+ZEfmu26odRh8HATIkNbdrbGcAimMONZVzlKOcRm33CcD9c20uz9RhZNOI19cTU1kyAUaM8xSj64a/09KZro6M7WJjEHzcInFwnNzTPoZN8cHY+YSj/Sed+/+qCATP+/vQ7vfJ92Zk4Vl70/iSfRPu/77NxnJNSMh/joYfQuR3n0DIsc32drvyYX0+iZgx/vs/xbnqkwgeh9/9ZUkhN/kVCfMavhldNTqYxdHf1605hESKXCt/jcuSCWLPsdknhGyYR2LcSfv7FAI/4EamjDi/JOgzAumsVcaGZbjDyyeWZRspVEmMHsqawJQYArJvoV/hCagYsGpCEVc0TIh9g/Jrjs7mLFc9sgjYMGVRlWjZYZ1FRQ++ATm7QUq5nt+pMVMdIzfpLH/1u52fDuJHCFRKI0XA+Xb/oomRtAmWyXPVRlhbTxuTsoSWEi8ESPBCsLy8IEjJbrMc4v7k18bwAiSigiBETQS0FskDIwa8cyk/nWJB+uhTmJbgB9JU8pRPHruCGCS6KPE2eXlImZjcMViUTAolXdelQ4FOYRiMSod8SJ4r2gyx532S2dRFlWJVhYjWmhBBx3RgKKsSYT2tt0RvKQuD0ALvO0pKIhFvG4wEqSWd7ZExAcpRjhhcMk00UxgKUQwyozs8HilTqJKi0INhJcl/WR/oe0c1mTA/XhBiitEqtEZLjfWeCg1CIYRkZz1SWKQuQEiEMhR1lTYHzoMq8VJytWmQQlBOJhTTlxC6ZLtegpC4rqVrLW10HEGajHl1j6RFKAratuXqycc8/vCHPHn0Idvdlh++/yG7XYO1jt57+t4RAnQ+kXMKIzFKcmc+4e3XX+fJ+Tl/9q1v42zPpC6Z1xVxt0E5j11f0G8usbbnTDoWp3MK1/CDb/4JSgT++T/9DX7vj76OD5LZfE6hJSdFoFtv6SYn3P/Kz3F6/y7t5oKPLj7Cnz9BXT3m7O5L/Nwv/Sofv/dDfvD+e1h/F6vmXDzu6fqGSW1o24Y5Pb/9B3/O63drvvTkfU4Wc7rVmnc+3vD7P1jyz//3/w2/+NVTgg/J66ifYr1g0xr+9R99nwcPHvHSfMLR
y5+hW26o+gbbtUnNwaTNnvOR5a7n3ccrPnvvmJd3PR+tGsxsghikZAEmk3qQjw0sZjXbricGl2RxWzi/vOIHP3yP+WzG2b37VO9/zLxwFFKhpKQuDa65ot88Rs7uI2OkqKoU0/DiI+qjI7782gmf/dprbNdrviOOcLbnwRPLt/78I97unqA3l1zZkEIkLc+xUVLpyM+fRv7e3/t7rKTid/74ggebBR8+/DqLkxmTSmGaR9y7UyEFXC3XPF613JtXGCl489Vj7i0qgg+888ETfu+9FV2USeElDC8nKTieT/jcZ15nMSsoJzOqMoXFMUWFkdB3LUVw0LSYyRSH4MHlioerhvVqS696gpAsZkf0XYPtW0TwlBL6foswBdO6RDgPQmPqCW9/5vNM64q7J0cot8RdfcRytUNOT7j3ymeoqxrnHJ217DYbTPWEo5MZdycQbMvu6gmrVcfF5ZqHV1tQmtIkUDi6hmb5iGJ+lmRfB7WdvAFwgLA9u87iQ+Tp1ZpaaoxMc2m/URBQaEWtYNMmw5b1YZDmSmu4Ggy1yggU4a/jXfki/Q1MYzB8nMag/KcBuj8JcB6TCsab4/HPMRHhNiB+vLn+JOJHVuu4jTiSFT9uK/chYeI2ssWhOsrhwWJ8bSaZHJI3cspKH8+r8yEJREq5f/4hIeLHtd1txIgxWWBMbHheecf9mNs7t884D7hJ/PhxZcnXhyFc3rhND0OaPI/gcli3rCjyPPLQpyUmHdb7kw54txE0Pun7T8rjeb+P+/t5h8/b8vu0h9Hxc27rp/HPv4lEhv9SCCmflP4mtuuL9CK9SD8hKR74d+3/PiD1xWtVj2fOAiF7yY9IH8P+yNqevm+xzuIHcDZ6hxIBJ3uUqtFKJvtPDDjrcH2fnIOEIkSBdyk8R3QCVSj2Dh2AdENYBtcTnCW4OCiwi+RBFvTe2x+VwGhjkqNAQGNicipSxtD7SOg6tDQURUlEJkXImML8plASgclkyisvvYLRJS54jC5wtkUExa61CKsAQd922C4wn5YpXAUJHD6ZLyjLEhD44LG9ozQFx4MKhrctkeTs07UdfWeZzCvquiSGHmcDzbYlKsl20+GDBqGp52cUhUZrQ1lXTKcLlCno+o5t22PdBWVZIvUgCY7EOU/XNFjbc/H0CZcXl4MSh6JZLbl68oTOJhtCXdVMFzPKaoqWkqADu25Lt77gYnXJ4uQY6wPTyZzZZMb5+VO22xVnJ8d87rOfo66meNshg6NpWiZ1ifOOBw8e8Nobb4GU2L5nNp+z2axRSrDZbrk3nTObzQnRIYSiKDVGG9pul/aeQ7vK6HGup1AFze4KUyg6fFLXdIEYFH3fU5UGoyR1qQCLlBpnA3U1oZAG2zm0KilLgzGa9a5BIJKzl0j7UxEkWpSE4DBaJduCkHSdp5wKrtZLirIG3/Py2Qlnd09RhaGsp7SdpVAFHz19RIjp2e1A7jieztlsWtrtjkJLlISjRU1VCfpWcHZ0RqkMIjikSOGFrIW2aVO4ad8zVYE7swqjFWVRYp1Fe0XbNUzmRxitE4EggNFFCjsEOGfpbQsioGuFcQVFOcEFT+92KAnEgDE1dZ3UhjvbEkJkNp1TVhXKlIQ4hHC2FmtbpvMZUUR2u4b57Jg4hEkotWa9vAARmc4nSGHYbdZEmdYCKRMI0dmGzu6wIVIUNVpphJQE4QnOp/BJ0WGDp+17ut6lNcNHIJ8FBxBDqBTzO6NPMQ7rVzpzxGEtEyI5MooMMgqIMiDF9VknAVZiT3SS5PPh4BQYh+ftl9Z4nRcjwEwMiiExIz8jlY4M5e5VRdgTQEjWvvRp3tsjEWq4LqM2YQTgjNfwyEAfuQ6p/CK9SD+xSez/u8ZNMyD4HKarGO6Jw6QRea4SUEIRhcB3W1QMBKkozIQyXGFFh9EBOftSsqFGRWiu0O0TIhE5v0+4eA+6FbGcIhj2Nb4n7M6Rdkk9u48VhtCvUHe+gMMggkUO81ncNmNjImZkUH5gdqXqjPZ6iew11CfkxSDnl9bnSERGMayFnoHdNpA8Rs0UEhkmDuus2Gc1dvIZ2S/G/w9bs8jtdphMGM4hJ6SUhGHN5hmb1ABsy4Se+aH8Ob99vbOta3SbkJLMMclEuet3RtyH80LuB04aFyLVlzBSoo03iQY5m+u987Ui3j6n0b2pb65tVTeuSxc/U6fx34f1PPxufM/4uxv35zEwuu359q7reh6eDZ5rg4njPh96IgquTxkjEs7hrbfY+X5c+rQkjOdd9+OIMrfZjJ93fbYF35avuKH+c02AubVcozXgRl8zqBLlybW/XA3d4/f33+ivAacVo+ft+ygP3ghRRogJP9d6RiVWhM5iRU1fzpBecTKHQlg+2E25q3b4ECmiguhRQDQVFCZFrLBbhN9iuxVh95BQGeand3Bes6gs26iolCX4nqb1COURUuGcQOk0HbI6P1Hu6Wchwef7NfyvM/1UED9ijNRaEaRES0XQhrLUhN6hJEihIKbB3btIFwWb3Q4tPLrU7PoUn1EIyWQyIboe22/RIuKlRMq0AAQfQEZ2tqFAcHRyMsQlhUJJlJLYqAi2gxiHF4InBIkMATkQKLRSSSlDiaQkETxlqYlSJrnBCMTkEe9DYkzX9YSirPAxAUouKvCCQmi0VqjoqasSHzxK1iAkzvcYVSYZT1US8QihmcynRKkIcYuNgVUT03sLlVQVTDUc1msckstNw2bXpniSakbnHLq8QxQCpxyRnti2OAm6KLEmhWSRk2RwcBFWUrJ1BUVRIJQayDhJ6jMqAUeR4ijFRnVX54Tt06RmIgVvf/YLKG0wpmTdgyynnD/+mP7yI07PzrBBMH/5cwilufzwB0R24Hu0rlBI4qAfKNJgIQJGSpSIfPNbf8T3/+ybfPThB/S7DcYYlICFFtiY+t+qAqUVlTHM64JpXXBy/1X+7m/8U54+eI8YAt3ZUzirMcpwenaHbrfi0eOnxEffRWnJpKrR02PivROiVEg027bh3fOGL3z1Z9lFzXcfWdY7Dfd/hpfsW3hzRnHnPk5teXzxMbZdUlUF919+gy/9wq/zua98jaZr+fp3v868v6CaLihcz0xaZKE4Xmj+2d/+LHeVovvwIT+czFBW8XAt+MbDjtIobNvx8MkKbSSm0JRTzVRL1h884Xf+9W/zha9+ib/9G/+YhpJ/87u/y3sfvofzESmTl4PRiY3nvefiaotRkpfuLGhsz+U2hceRIiSmrpAYXRBERBUGJSWrbZPGuEtz8tHjc77+rT9HBI+jYH5cUGrFpCo4qhWb979HIwUmfshxIZCFotvtWMwmSCV560zzz37j59g9fcTH3/8jnn7c8bTVzGXguDnHCQGFRjjHctewCRqtNQWWJ//h3zD9X/xj/rv/zd/B247f+qOSi6homjXd4xU//5mXmBnNo48+4s8/eMgb948pBHz2S2/y1iunKALzEr7/dEcQit7BedPTeigrw5dfv8NnPv85Tu7doW+2dH2PUgXVZMH8+A6x3zJTwHIJ5ZTv/OhDvv7eDzGTOQhNWWpEUaKKkokpOV5McZ3
l4uopj5crdpcrjo+PeP3OgrfunOEDnNx9iZ/7xV/m1TtH9Jfv0l/coXUabQq6UPD6V36FeV3hncX3PZvlJba7IvotXbvDbs55+OiSzcUFmz/6Ni0Ko2SKrWwMs8WCoijYbdeDatBAFhtecIWS7IgYrejaBi08MuSNVEQGmV6OMrHVIY2tWktKmVQ/lFKUCnwIaAF6tNmPf80v0BfpP18SQtwA8fMG9VABIqk/FTcA4cNN8CFZ4rZD3fjv8c/D/MbA/m3EgfG/TObIoVS8988QD8Zkhk8KKZMJA2NyybjOY+WL2wgjYzWMXJbxs8dtLaVEa52kl50jxniD+HAIwh8SIsZKJLfV43mHlNvKOm6v2545jg97SAwZE2JuO9wc5nVIfMjt5Jy70Xbjut6Wz6Gyy2Ff/LjxM77mNlWSw/E6Lksea580Nj8p/TiSyPPS88bxYZ0P1Vh+3CH28PNPKtdf5jD9SUSRT8pvXN9xH3/Stc971jiv55Xrtmd8kgHl8PtDg8xtxpbD9LxrnpfXJxkaftz3n4Yc9GnG5G1r4G15/UU++4uU6Xlz9ba2/Iu206d55rh8n2a+P28MfZo2H4/Rw98/Kc/82fPesS/SiwTXYMjI+n0zznPcQyXpszgACuP9xhBe1XmLdT1d0yZyQd/heose7LlpDySG0Lae6JN6iCQOIeQUzkc8lhBjCnHiQUiF0gYRI945onMJ+A8BZz0hRLZEui6w3vbJYUcJqnKSVNWQ+zOulAqRjjeJnBIjyhREGXG+p2l7vHMUxqC1ZDGfMKkmxGCxXTt46HqMjtw9m1EWL2GDJ0Y5kAkqykrh+h3r5SVaSTrX0oUuKXLaHi1Fslf5lqmRlFVNBPqux3lBVdSUlUltEiuUiiAMm12DKiqOjo6YTGeYohwclHQCxwX0bYsiEkOgby3Nboc0KTxp13Rsti1FUeHxRFPSBcnT5YazkxPuHN/hGEHvumHtgN5ZvE9EmaLQeKHwwVNVE3RRMS0KtJLEEDEv32e9mrBaXrHZdbz8yn1eeekul08f0e1WzOc1UUQePDjn6aNz3nrzbXrnaZqGqpyyWm+oK8HVaslsMsU6EiHHJ/uZVFAWE2y/g+CJzrPb7qirGiUk682WtrOcLGZMignCRLabFYtZSVno5MhiNEVZU01qhDAopek7x+JogVbloAqaAC+tkspt1/VonewYu7YlBrBth3MepKFxmk0TKGea6D3TSUk9KZAqhSpOQJJD4JhUJW1b0Oy2lIXg5fsnPBJPCbZgUkgmRUSfztluW2S0KZw0gVIVGOlQQrFZr1leXqKVYD6b0253nJ6dEEJS6bR9g3EKH3uc75LDiLX7kCxVUeGCxdoGH3oikfnkGOkNQmiu1kuCtUgpKKuKuqxRRtP2LZ11SQE2eAyRQiqcTSFXnHAcL44pTMHVekk1mSFjAhG1Emw3S4xRnE2Oky2379E6nS8mdUmpJL5v6H3AlAVlWVGYpNaBEEnJQ0iQyZ6RtUp9SFrIKeyKJAg5kMHksK4NqFM+M4iYlOfFeN+erhUirTdCpfBUSmmEkgnwEySAbrhe7AHMZP8eUMO8cqa1QuQHRWIOjb5/Dwdu875PYRMGD+CDr9R4j0Oyz+zf/QMIm57HAfp4jUaGF/uAF+knPeV5B/s9TAY1nzf6r/fHcpinSXkn7btJ6mcyEvqGuizpbKSqF4TjL8NyRxsURXEPp2VSpS/uEkKP6FbYq3N0Nadff0BtvoyPO/z2nNI5xPQOYfYq7fIBpfTE47dxA26VCh/3SvXXhb3+mad5/ui2U1UGd2+uP9dNtV8mMpki3ZF+FzevlQKCyC16c4m6ccbYrz/X69ihWMZtKbf3vjRSEP21He+Zc2NkX9pxOW7YRsYFFdd1SS+COFoi9wyFfVluTbmAgj1ZcH9tZB/ZJO4vf/YMd1383HvPKv7ehPKv2+f2Ij3buLe11602B7GvTNocf0L6JHvO7fYMgZDDuBKRGOWNSqVb5PD7s7aX59VhXI7n1f3Zsty8/3n5/6fU99PYv277TOQ+IHfBJ83owzKI/blm9JRUlpznQDAdPeCZsrNfI64fG4NASQ/C0AZFNTlC9VdoAxMu0TJyMvO8+3RCDA5TVki/RgkPsUd4hwwdOexcROH0nHpBCpkZwbUOF3d0wuKiw3YCOdGIYoaPSRUpxhYhZ2R1EkFMuOf14pWG8BAR669zl/PTQfwI8QZoAYI37pzwZL1B9zu++MZr6MkxH/7oe/jtEh8lF6ttOqR7T289RirWzQ7z9AGVhOB92tgqRW8tghSrVWuFVpqyLBCFwSCIzrPtOoxKjLu+8/gQ6KxDhZDAbpfCGex2HU3TMKmqxBBSYH1kuVmhBtJHiA60xvpI1yWv/qJoKIzGoRAqeQhM6wn3zk64c+8eHz694t/+9u9TCc8//Ad/h/uf/RLL1ZqHH/yI9//kjzGh562f/RqTu69RzqY8vFzxrX/7e2A7Xv/yl6GeUpVT+vUj/Mc/YD6veeuX/w7Tk1cJ9QZ18ZDF/AjnYdv0WDUlCoUMMJkKpkonogsRJRUFSS41yQwKpMzy6+lvHzxSCITSgx0nIpXEtRvuHzdMX/4MTmi6vqMoa1559Q2iNHC1I8iSo95SH9VM5wt2ux26nuERaFMR7BoZQGuVwooEP6gveIRI8XnLac1uu+bqo+8zjTt+9s07aH0fISVSJFANIQgxHX6E1gipk3cAAhcLLq3irV/8NR5fXlCe3CcGTwyw6qHbSer6jGJ+ytPVGqcnPHl6zqOP3uPu6RFvvvEmj65WvPPeB3zt57+GU4r/+X/4D2x4g1fvv0GcvsfnXzvhydP32TYf0O6eMqtrjs7u8+Vf+HXuvvw63/jm/8z/97d+i9Xlin/2L/4PfOWLX2T1/nd5486ER0/OeW8TeGCXmJde582X7rEoNMp5Tl++y+ffusd2vaW4PGe5vEyGrrKg7zxBac6v1kwLxS/8L3+FN7/wRd565z3E7/97tCro6ZIXtAChFSp45MCOfHq1QYspr9w7YfvhU56utxQ6eTapKJJgpIi0fU/jPC4kZQclHVU1o9CGzXKDklAhmEvPSalQKnBnUXHvdMbDx4+5/OBb6NkxhoB9CrO7b+BQ1IVBKUmIgfNli5/foW9bXr9/TPvQo4yiqiqmvmPnUjiSxnpq4ZmFHX/7V77AZ774Ns1qybtPLvjHv/6P6B69x/L882gZCLanOr6DmJ/RO0tvO/78wZoPlpZSK55sIqfzCuthUpXI8y2Pdo7FrOZn31ywiEuK+i1arVBtm4xneEK/YlJPmBXJ8+ob33+ff/mHX2fdOY4nRywWx9iuwUVB8JYQYBMU7fIxHzx4QNc7IoKr5RKhFCdHU7SznL//Xf7YBV793/4LXjmq2S1qgnV0KHAd/foR09d/lVrrtIHtN+yWH9F3De12RejvgnoPd+eUH/z5D3nYJoKG0Qala6b3PgOyJPorok8HkbooECLQqZq3Xn+dx5drlK7Y2Y6qu6LvdzghKE2SO82hZ5NnS0QjQAmkFlTm2gBCFImgFkcb1v8M75
sX6f9/abxZVUphrcU5lzwFRwSG1WqFMYaiKKiqit1ud2NPsN1umU6nSCnpui4R/AaA3FpLURR7tY3tdnsjZMuY/JCBa2BPjjDG7MkJ3nvquqZpGrouyTNba7HWEmPEObcnqYxB/Ux0sNbivU9kp7LEObcvx2Qy2dc3kxnGCiFHR0fEmGJShxCoqortdgskMkDXdcznc5RSeO+ZzWYpfvWQ32w22+ef78v167qODOhLKSnLEmtTLG3n3L5+IQSstRhjqKoKYwwAu91uT1yZTCZsNolkWdc1WmvatsU5l2JzF8VebSSTGcbkh7Zt9/2aST/OOfq+3xNt8qGlKAqklPtxM1ZcCSFQ1/W+L3J9tNYoldTUnEuxvs0Qcs4YkxSSuo7FYnHjYLfb7W4coA9DBGXiTB7Lfd/fGOfj8dg0zY32P5wHQoh9e+VD2nhc5rJqrem67pkxNTY6jBVzxgaJQ6WVEAJ93w8gmNjXMfdxbs88Bpxz+/LnZ+TxHWPc55P7Io+P2xR8pJT0fc9sNtuP5azukkMSbTab/bW5TM65fd3z/Pfe03Xdvr3Hdcr/xuo5Sqn9vM1z3Tm3b6+qquj7fv9vNpvderget++4fuM+zGPvsD3y2pHnxXgM5bXn8DvvPW3b7tujLEu6rtuXpSiK/fPzWDps9zwPhBD0fb+/N4/BvF7kPs5tM51ObxDLgP06k+uR21NrvR/vOb/xWjMeg3+RMEYxJrLauB+UUux2u33981zJ1+R5Mx4D+Xl5/cjvjqZp9mMwt8eYmJjHXZ6XuX+B/fgtyxKt9Y21IOeTy5vbtuu6JC8/jL2yLPdjxVq7vzeXL8/B+Xx+Y62o63q/XscY2e12+7FgrWWxWOzfWX3fU9f1jfqNCSVN01BV1b7N8/qS592YnKi1ZrPZ7NfoXKcxOXJMmHuRfnrT3iC+Xzdvu+ja/rd/nx0YDDPx0zmHH/YITdPQNDucbRDRYbRAiAB4BC6dSbzDu57gk83AKIVUmsIonDZ0vsH2AhElCIWQKhFF8Njo8W4IH8ywJlhP7xKxQmmF0Qrv0nxJJgWFFBIfUkhYgaTQhhgCoXO0oSf4gHeegMBMppRlItNvVle0fYcbVDDqeoJWGqkUk9mEqp4k77oQcbbj6uqcq6tzural6Axa7SAGur5Fa5lC8QqBLgqkULS9zRAL00mN0ooos1FTXSt6zqdonQ5s6/UVIYoEjhclAjWsMQIhkhMPWqXvRMA7i1GG6WQGwHq3pfWBN954g89/4YsUhaGuaiQCFyzBOdp2x2a7odk1LK+WxBBp3Y7VckMMMF8c8eZbb3JytEAZBcIyn0+pSkPbtTS7Dd12x3Q6pTCKvttRTypee/MNml3HennJZLZg3XQIBMfHx1yul3gfmdUz6qpOID+BbrfDB4sUCu88UjiUTGFIClMyrZM6LoVBiUBRJjtTYQx1bZAiEGNSa6mqGqNLpFKE4FBKoLXE2w4fRVJnEQKlFdaCUqANCOnYtVeIEKimFVEaFid3WTUNUWi6zlMoiw+gpSYGwWbdEF1AK8mk0Eg1jLsY0FpxcnrC1WqNjZ5219GVgrqqWbstAospFKZQlFVJHSNqIFA0zZb5fIYpDaIocEikjMTgMVpTVVXy4s3eyFGADyhjkr3OuxRmmxRatyormr5nu9oQbIciooZQv2VhCN7SNTsQCqIezk2Ktutpm0RqOj07ZjKdYXtLWU2pikBwPSFYunaH1oJpNQeSAkw/jPu6KpiUBUoEetujpKTQClNojNbEkN7vSiZ7J3EIBUVSK9ZGQRB4GQk+hXkhDl7bcb+A7QkYKhNCgiYIiZd+cCFNtheRySVDmHGhEvAgh/dsEFkFQKQwCCPgRe6JHmnljMM9CfC8BvL2y63IYGYOF3MDLrwG5Pa4TCaF3EYazeA21/fn327si8Uz379IL9JPVBpvUwaA/3CPsydSPPN5JigM81wwAKbJO951W8QkIHpHiFNYP0IahQpr/MV3EdNjBIYQQAiLEhotNHF3gd+9z3LzmMn8ZYqzt3HG4Nsd/uI7VLN7hMndtE7HFL5pz1UTKoGyN4gMYr+i3ICI99Nb3ODzXpPTDkHf9HEUOafINbEtqUDss4yj62403wFFYbj3miDxDCp9s83FzVAxo5N9epYU1yW/ueTtf4rD757/NMaFz/UXo9rcyriAm+1Jaof9ujrUMa33cT/m4hiY5hbiRhzA6j3oHw+/vpVc82kcLji4L5NJ9k8bCnjYM4eEhuc5HGQy1b4szzx1/D56ZlJy3Rf7k8Yz5b2tPmP72ic5dxza3T4pfRI557a8/7Lp+Xlfz00h5J4AFePNtnnW8SWPPchten1fvuigT4fH5XPHuATEeN07QiBjRGBAKDwt1tWoKhK7JdRnvHJ6yUePVsTeoIKkqwXIDuEuEdHgRYE3sxQ3KkiE23JsVhQEHupXKbSloKFUc4pyx+rxE4IN9JfDHqy+h1ISbEsm8eUQTIcDLiAGwtyYCvZXv8/5qSB+CAJSCRbHZyx3BbG7IgjF8WJO3Ul+4x/9r1DHL/F7v/M/8c0//L2BlOFp+khvAy4kcbn3PnzEn3/0hElZ8IW3X0eYgqt1w+Vyx/binFoLFscL5rMFyJKN0wjXo0vDVbPlg3c/oN+skkKHTAilEZJJpUEKSpViwfa9pVcCH9IkUEIS8QQfB7a4pDg5oQ+BYC1CQJBQHR9z997LtH3ACMvJ8ZxX3niLO6+9xUY/4OGD/4HabpiXv8Hd+68Q1BXb5ZrddkvRrnnr1VeZvvElnHNsvcI1DarfMi01xbymnh5x3m+x3iK9YjqfoebHeFNjuxVlVTDVFfXEEaXhqhVsmi69wKVEhrhnnKfJLa8H94iomF/W+bABQ1iedseR6SlMgVAG7xzBB7rNko8++oCqqol6DkSc9zz44HvM5wvu3HsJT5JSHJqU3guUj4N0aU+IicSBkHgf+OD9Sz7+3h9zsphwejIbXtxJlqe3Dmt7pNQIEamKkt4F2q6HGFC7jv7hU/7gyf/In718l8oumZmK7XJN2K753tMLtpsV5XTG26//DI8ef4daOC6ajov1Eht6Nr3jarXm4uKSf/8f/oi/83d/nZneododwr1Ct3mXRx98SLNdIkVgUhbU81Pe/PzPUs2P+dM//Y/8q//pX/LB++9TKMn/9b//7/naL/1XfPHLP8tLv/bfcrp6zHf+37/Fv/x//T/pheSlz3yWV166R1WU3Dk7YzGbphAgp1PmZUmlJbXRTJzD7RpCUEi15bvvvMd0WvGD9x8xmR8zmVzS9x191+CJaKFQpkSZBKD01vLxxYZLo2hcOtzGENEmgZ29tWzajk3TYW1iDJdlyd07ZyymM5SUGAlHylP7lolwiGbLS6+9xhe+9lX+5Bvf4Ts/+oj1tsPHj9FCcef0lDdff4PJ/Td59QtfRR/dZ26m/Iv/7l/wJ998h3d+9D76wTc4bz1lNefzn3uV480HPHrcEreB99aOQkleLyST6WxP8BEi8rmf+QXOfu1XCa7Hux7Xdbi+xfYdXbOh3azZrZasl1dslpes1
iu+uFqy3W7pu5bPPHrCe0/WoKCaznGbK9zD9+mnM4p6jlKaGAIuCpAptlzbtvzBn/wpSMNUR9rVFbPiFK0VRhrkwLJ26wus7SmURBZ6vymOzoGumJclUmtMc8H66opXXj1he2kwhWB7tUJPFsRuxUfvfp+X3/gsdWnwtgUh8FGgyimCBBTIO6/w9ttvER9viNHiqxnB94R2g5xUydC537gJal0yn5zw9//X/5z20Yf4+X0W7or/z//9/8IT0kLw0r1j3GZHk0GXmAwRSkhkTEoymQAahnylTPEgY57oP4Zt+iL9l53GoF8G7TOY1LYtTdPgnOPk5OQGgJw3kRkYHgNdRVHsgbYMMI4VHTLwBzeVG8Zkg8ODRgb3cliQTEjIYFa+LgOdGUjPAFgGdseqHhk8zM/JQHVOh4B93197/uXnZjJEBlkz4F0UBW3b7utnjLkB7Bpj9sAhsM83lymDkJn8kusyLmcGcDPhILd/vi+THMZtOiaujPs/t2GuVwY3DwHaoihuqIWEEG6QSsbEh9y/uSzXxOHbFTvGIHG+dgzQjkPjjOt+OF7GYPn479xXUkrqut7XKwPOhwe/3B9j0D73ddM0+zKMCQEZjM353Aa0jg+th4f1siz3JJjcdzn/MbkmkyVuG6v571zOsXLPTQL3dT9kEL1t2325xySMMdEng/t1Xe8B7UwMOqzjmLwy/izXMbdrWZZkss1hGx0SVqy1t/bX81IeV2MCxmF/ZNJEVVU35uVt61Vun0y6GeeTVXxS2AN7gzg2Xl/HpDrvPUVR7AlWuU3GdRz/HNd7PL/Ga2Wuc9/3ezLKYV7j9jlsx/HP8bo/zj+P97w+5fmQiQohBHa73Z6gNu6DnO94bRoTWMbvpPxvrLiU7zkkhORr8nsik80yGfG2eue+ymvsmGiY++NQGWtMABkTO3IZMlFk/J7Kc2G8nuY1Zrzm5fbJ8yKvuXlMjsuU8xuTvcbjMb8XctlepBcpp+t3z6EqVDIm7AngggFIDfnbm/uGEAje78O2ONfjnSX6HilS2Ejve4LUuD55/Yngca4jhjTPxOCUk5znRQLmhSE6j/WBru8I1qMEFKpIEIzyxAjWOwQJXPcx4G0KB+N9wPUOVQikUEmBxAekkpT1lEJIfAj01uN9Mn6n+VqipKYwJVqqFLZUaogkUDlEAi6RP4TC2RRyxPmermvZ9jsCEqVqCND1Fh96hAjowqSznDKApCgr9Mzguh7vPKYoEELTe0dZVgPBpACh0CrZrrx1LGZJ7cD6tBaEKCirOSEG+r5LNqDgWK0bCJGyKtFKoXRaM04nNUoavE/7t75raId3vTKSru1om5aAYH50wr17EyKBptuwXi7ZrTesVis+/uBH+P4eR8dHWO9omw4pJLPJBCklSwxaCYrCsHSeJ+cr7t27x6uvnbFeblhv1hSmJEaoqgoXPV3bQQgoWSRJdRIo7/sOo8oE/EeLDw4hA1EEjJbM6qzo4lEihcqdTguMlggVmB3NE1ECkEqgtaLre8qyTgQQXeC7pA6MAGd72naX7FRR4PsUVrrte7wIFBPD8fyE5bvvoUWkdx2dC7jgUKrEO8Gu7WjbHZPKJClqJQdlHIePkdl8houwaRuW6zXTWjGrZ+AjpVEsphOOju9iiilTafBIAoHpbEpVlti2QbqeQkb6rkOKyGw+p6zq5GQUxKCWIjECUCKFY7IdwUek1KhBFWez3bDbrSiUSOqfpqCoFEJ6ml0LPqK1IDqL0AW77QbnAs55jo+OmE9nabGQhum0QkRPs12y222JpP1liIGmbeitww17h6oskCrZEqNMis1pT5NEysUgvS+FIIRkmwwBEAqpDSoEYhAIDdGTItaHYf0KYpDlH8AfQpIDV4oYPNJ7cBHkteS5QIBWSJUc6RCD7WjYqyBG+yMpyJjlHprJa2kUI+ghg7TP32elr1O4mxtbNDGAGCMAcb+Hy2BjXrmzzVdcA2sCMQJ1U3um0AIvbDsv0k9oOjwO7qfL88+JzwCgQCIeSGIIiThqO0TokbIkxDWl7miDB3OCnJ5gyorN0/eoTj9LURaEKCE4+vYK2a2oT76A8St61xOaR7AtEbtzqnufRVCjgiVKPYQ2kYNdlkTi2xdOkJcDOaIrjAkCe0g9rysi1STx33J4hLwe5WwHV/l4/fuYhpHWRvbY0n5tu9GkozP50Hpx74I/Jr7tGRHcJKM9S2jIv6fPR2vnqK77ut9CFniGEDHccJ19WptHq2WuMYfD5QZROl7neU1IuWV8iYP+GJdlVIub5Lzr1rxBl7ilTcZnx3Fdb3vmjfsOrhtdcOO659kN9p+N3y3P1v5GGh8zBs2G67tuIdvcRvg4/CzbEZ43t8f1uI2w8TwSx+HnP46I8mzffnL+t197Wx5jgkx4ph6p7sO+Achk33zvfmtxg1iS2/wa57lRt1F7RSlSmD3rIQiCdmxsyby+y9n0io8e7Fi7KYUSFGXHkXRcyi1BvgyksIJReHCSYLdozlG65FF7F1NJojdUscG5HZ31lNMJi6MCoXr6vuCisSgRCL5Ly5MQQwSmZ9WbZG5DIf9atzc/JcQPmFclP/tr/5AP/vC7iN0KKSR37pwx5YRiOqeoZ7z12S/xZ9/+FiwvsL2jcwEb0ktLCsHFtuHBxZbX5lP+4d/6RZpyyq79EVVd8eh8yXxe8sWf+zLHZ/d46TNf5Xzbc/nR95lUNWezOfePj/jm//x1midPiSrSh8gqwK5T1FVBMJLFEEeIEFAEQhD0waO0RMpABHrn0TG9yINIG/nFfMqd+/cpJlOUWCOFYbGYsjg+plAJpA4Rdg76vkMJyeliTvn6q/xHWeLDkhg9QUS6vmVWFGhdINyOrm3R06RqoosCG5IqRt/uOCoUpZNIU7HrHQvlqQqNd5Z7xwusc/Q2DEaYLHEjQA7e+bmD5H4/MLzXwgCup4lsmy1q+wB9VGCqCZ3zgMJMTlmev4NeXzGdTqle/jLeO7y3hHbDLuxopiWyvo/zyeheiYiPSQbTtjs2qysiyQgbSOSDdrvB2Tb1hQ8UyqTwFUIgBwO4QKCMxvpA73q0D8THl3QXFzTSELsdTz7+PkJ6zlcb3n+y5P6i5MpCURgqlQ5es8UpZVnA48dAin/atY/YWUuInt12Cc0GYTSv3oMnF++g/BYZS2a1obeWJig++9YXmSxO+Hf/9l/z737/93hycY4gUOmS7333uzx5+DF/+G9/h7O796lmc95594f8/V/+Gu36igex4fGH3+U73/ohXlUcH82YTmqOThYoqTDa8MbZgolQWKn5+GrHdz86Z/sn/56PP3iXSV1yfHaH9XpJ1zV7jwwtFUoJiB6lJCUG73s2jUsSTCIZgyKSza5jtWtp2hY3GK4nkwmvvvQy03qCVhItAkW7YSo8pQxIAa0wfOfxind/5w/47Ouv8us//2UefPwoEZg2LY8vn/LN9Yo3thu6f/CLUJ+g9Iz5qy1P/+DrlLMjtsslKys4dpbaX/Laq4rjAnbvWpSE3geimVCcvIyvFvim587dexijiFIhdIFSBaqcUTJ+
GQGZ3hRjmmOuTwbGvsO2DW2zpdms2a2XbK8u2Gw2bDuPjdB0LRZPs9tyvm3ZlTNMccTf/6//G3Q1RSuB0pr5fE5VVWiZQhfludS3HS4ORvbhQykVs/mMyugkQ2wKZqf3mU4n1G/+PAg48z69oAZDiNIKJSVqcoapj6mcTZ4iIXD0ys/hguKVz/wK1jp++Ie/xe/9ybfZditEd0UoFjjb4ZwlkgyJQgrM4pijs7vY1TkvfelnmC3fRcbkQYeAO/fOKAr486dbomSQghVokmdKZpS6AEqkkFppExsRWoCIz7xYX6SfnDQGpm8D4na7HZeXl/R9z7179/ZEhkPSQAawxiSHDFqPVT4yqJ0Bt/Gzx+Df4aY5A2ZjoHQMxI3JAmPP+PF9GZDL5cvXjMHMXJYxEWGcrLV78DzG5Mk9BhQzSSADcJloMFb1APZEjaZpbhAqxsSHccia3I65LTP4ncHADC6OCQ5j0kJuqzHQeTgGxn2f2yMTDjKgPwZFxwB49j4fp8OQLeN+Phw7+ff8vHx/7qPcz2M1kTEAPAaGx2FnMgifwfEx4JrVGXLZx2oh4/LmMo/HxHjeZKJTLtc4r3HdP23KBKAxUWoM7Od6jftvDPofgtO3HUbHnx+2/2632xMRDkHl3Ae5XXN5x+PqcL6Ox1Ies8AzBKGs9DImdYzB/LHqw3htyc963toxrveYEJDb9XA8jdP4ujw3cz3zWjNe98Zjdfx3TuP1ZNxm47VqTBQZj7fDg/64zIdj4FAtJKvqjOffeP6O2+9Zg8Lt4XHG83PctlmtKMZEMBurPeV8xipDea6MySRjVY9DtY/xOMzfH9ZnXP783Xhejg1l+fvxfMtjOhMscjnH7Toed03T3CBrlWW579M8n8f9PlYPGSsvjY1HuV/yvMhzI/8N7Ms0LmcmidxGsPpxBrIX6Sc/3QAmRx/GbEsgGx/TGT/mL2PYh3iBBDiMaURJ9dMn5JWIVoK4vyqpj0oEzrp9qJcEWKZQEMEH+uiJQhFRCCTeR5y1uOCxtsdZj5aKymiULnDeEmUgCEVUCowh2B7nLPiAdBbrIoWTSKkJvkvnMC323my6MFTSDCBLOmkqITAKSqMoZhXTWU1wnuACvXV01rLrOlbbhqZpkUqhpaYoFIJIpQ3FVON7j1SOsqjTOVMqmqbj4uICKTomTOk6S1XXiBBRQkGUOB8IURGjwsdIiB4hU6jjNOclfdcTnIcYMErjYkwhU/uG3XZD17ZstxuEVNT1nHv3XmJ6eozZE3dVUmxoe6R21GqKVIpKCIh2IOsVIEiOCVLRtB30GlNUnN6pOTpO51Lb7+h2mmJSM5vNCD6wXa9omobF4hghIkVRYoqKoprS9T1Xyw13Ts/YbhOxvJ7P2W53zGYTJmUFEZRUeJ/6U8lI2zTUVUEMnkBEK51UIIJjMqnS+o6g71r6vsP7nrrWTOqCoi6ZTKfEEBJQIUApgzECYwpilMn+oJLqcN93dG1HDAFlkppL3zu6zuE9VOWU45NjlACtYVJolAMjJGVZI0gKFTE42qahNAU+gEKy2rZsdn0iHxUFSkRCbylEsjHEEAmuZzopqOuKqqwpyhmy8DR9z9HxCfPZFNt2eNuhsMwqyeP1lqqeIoSk6y2qrOj6Hus8ZYwgB8cO63DOk0KZKIwp8H0Pfc9EK4wRNMEjtcBoiXMdXd9hdEnbtYS+o+l6pCmoioq6njCbTUlOX5GyqojAer2h7RpiTCo01jY426d5FiNaS8rKoHVEiEAIHinFPpRtWppiEvkIQAAXBDYIfFRJpIOIUBKtBkO/jLgQCSKtV0INAGOa8AN2ktafGATRS7QSBOcR4hpIjYODoBDDWUYm6FUKUugXGIVVychKCj8uhNirgzBab68BGPbAyn79HH5eS92LPbp6fZnYL9RiL5s+stGIceCY6xsHS9oN8kcimLxIL9JPbhqrUozPSonAsJ8VA9B5E4Qn37ufJMN5zTlKo3DB0DsFxato19L1K7yfUZj7TO69ze7yId6fppBZ9jFFdQd59ysUYovkVfzVB+x2Gwq5wZQlsd0Sa5MQnuCBtAbGOIRnkGqP9cS8jiRENamAjIorr1eE1A4igaUiiut7M5Ik4o2/r+s/kCyAKNSN/MKeHJIoE9dEgtyO1+0/gp8RJByN/XJ8oFQyIoOkvhrKfgvYn59wzeK4lTeQvs1tOP52tMzuyST7d86zz8pEFiJDXvEGsJxvz6D5rWlU31y/Zy85CEECyGcvGz30JtGDW87wufz7e2BEc7mdGMLIPvfM/XEUnGbU7p98qsz2poNeiploc2Oyfao0tlkcOjX8OCJI/vn8sXX7988jdDyPVPK8PG8n66T5cRvhJD0PEkE+3JIfo/Vqr9sD4xkoBNfqIfsM8tcHfS1uzm0BXngIFqIGFTBRUJUKVI3twEbJNlaUytN1hqLsCFoThQY8wS4pcNw9mfBwWSO1wQeLEiWqqrHLRzhhIPS0TuKYEWNBrCYsigntw0epn1XCr9x+SOZ1Js8bBYjxUvRXnn4qiB9KCD731tu8/LmfQX/9PQSSSORoccxLZ/MUnkUrju/cZXb3FdrVBSIGfEiEheA9WklsjEwVfOH+lG2z4eriHNnvcG2TdvbB8cqrr7B1irM7Z2xYc3Ryl+B6ZOi5e3bEL/3qL/Ldr3+b1aOPET6ydYHLNvCyjlgf0NGBCCgphxdNYnIHD0orIGKkxLmANBqhFNNJyen9+1STKbpUuBbOn15x7/gE13dURnPn7ISiqgjtGu8cMjjuns6oCkFV18RGsmt2bB49YPvkAXfvv4YpClwT6LYr5meniBAolCb6iPcO5VMcTe96lKlZri1COKbGQgiE9ROOCsM6gpMQ/IipF/NLeQDepEYXaTg6HxE+rchGKpztiasH6HCBbyrK+Zw2KlpmiBhw1gKCZieQXUtQEH3yPomhpdmsoVrR2chus2ZaJ+N8byN9s+Ppj76HVHp4b0faric6h4oeRRhUBVIsXykk0hSUCJwPOJ8kLtn02Islbd/SK83q8ikX6yvqKtKHwKr1bPoOd2Wpp9Ohzkkedn56HykiSgoUEIexpxC8/cbr1GVBlAona5ye4U3gnR99QPHZ1ziZT/j4ySWT+Qmd7fjd3/ktvvmNb7BZr/chD15/63PYvuXR+ROWVzva8xWXtuGNV19meuc+s0pz3O34QdfRtA2Lk4r7pzMurq7Y2YLY7dg8vaC4e8w7Hz0mUMP0FPSURx/+iK5tuPvSq0yPz7hzb0vXbgnBp5jCIhK95WhSceflUy4ul1xcrWi9TRswAd47Vpsdm6bHZWA2JtLHKy+9xHQ2wyjJzGiO/BppHMeTks5anrSeNkYWOrAoUzgm27UU0ynToqCo18zmE+g7rp5+zG/+5m9z52jCD/70D3nwzjuIiyd0F5f0rccBBT2v3+k5PtEYJ3hpLTnZehZaYo5OMZO04FfzI1579QyjVHqR3XjXiWe2AEk2bvC6KDRFMaUYvjvKxsn9xclQGb1N5JBMEGk7umZH07S0bY/1bogpLRFKo4oKU5UUVYUpCoq
iHMIiGLRJRnY1GPvTwf76cZm9GgcDACR2ddok7XeUw0Y0JK+xYWNPDPS9I8SI7zt+GByIJFUbZEEkEny/r5fznq31iLbhP/zuv+HD997lc61ksXsIMew3iqooePnOhB88WUMce9AM23yRtmtGCFyMdN4RvKTUAnN9ZnqRfoLTGDQcg3BCCFarFR999BHb7Za33357D+Ll65qmQQixJwZk4O8QENztdrzzzjvMZjMWi8VePWQM2mfv5UMgL//L4NbYmxrYq1eMVUjG1+W6ZOCsqqq9usBYpeQ2FYBDoDwrmVxeXrLZbLi6utoTxlarFdvtlrOzsz3JI4OgRVHsQ7dkwDATP/J3cE0sKYqCuq7Z7XY3DBY5XEMOuTP+bky8GCssHIZ1yG2dv3fOsd1u96DidDrd13kMaI+JIGPiRy5LJpzkkD9jlZLcJ4fKE2MgP6stjMlA4xAmUspnQs3kMBwZQB0D6xkcbduWrus4OTnh+PgYIcS+/FmBJQPVh6oSYxWAHPYhxsjl5eU+PE0OfTQGuvN143+3AevjlNuz67o9UJ7HRlaAaduWk5OTG+SFXNa+7/flzGSCpmn2YSfy/Zm4MSYCjAkueRzN5/M9+J0/y6FWvPc3wlvk5+axOFbtyD9z3mMS0XhM5DYYh17K8ziHLrlNXehwjh4aKmKMN67P4y9fOyZXGWPYbDbMZrP9PNjtdvtxkMfemLw0JsWMw2/k9s1qG+NQH3nsTSaTG2XMwH1d1/txOyZ4jNUcmqa5ETJp3JbjtshlGY/tvBbmsuexfdhXYyJEXjNznfMYrOua7XZ7g8yRw4BprTk6OmK9XrPdbvdKIGMCRX5mXktyWJY8TsZrcyZn5LIeEtjG61xu//ysMbEr3zsme5RluSduNE3DdDq9ER4lt1fOO4/l3Ge5rzebDaenp/s5l5VxxnUZE4pyXfLYCSGw+j9wXgABAABJREFUWCz27Wyt3b+DQwh89atfvUEoyoS3cZitsRpVrnsO3/Qi/bSnsSk+/bw2AY/fTYMXZIzD+Sxcy4wPtgcpbhpe/RCKcoiUgEQihUYKMEojpSDYQbUnZigAwgC65Gf5GLHO0fYWax1KiHRecp7OW3qb3q8hgpKa6bRkvqgxRrHd7oZQU4OBOiSjsnUWSHOudQ4tA1WAvukTISUGvEtKAV7GpF7R7phMkrKCcw4fIkYVKdwtgnW3pSxqqromhPSOfvLkCW27Yz6ZMptUHFUFk9pQaJP2y9JxdFSglWK12UJMbSCUwDqH7DSz2RGTeoqWRSIcNB1COrzcsHEO23UDSV8znc8wRYEWGoGkUJpJWRNDwFQl8/kCHxQhRMDRNw1tu8PbbiBeppDBIURMWSb1i2EYKJNsR7vNFVJI+t5TaMXszl1CjDTNlquLp3z4/kNOj1peef0VptMSaSRVadis1+x2ad0KomZSz5kEiRRpD/XDH73P3Tt3mExqpJRMZzPWmysWkxmFqSGCc4HgHZJ05sYnI7dSGmlKsDI5RzlL73omdUkhBIUSbHYNRTFHKM1kMoMgsaFnNl8M71KBNjVSFpTlBKU0VV3gvRupqxnatqEoDM1uTde0VFUNROpqwvJySfCSQhZMFxX1pOJoNkUqiVQgQ8PxtEDJQGUKehfomx5B5Gg+QUfLm/eOOSvgZFYzqzWFEQgCUkbAgwyEYFHGUMSK2WRBjB0bHN5ZVKWhKeh9oGLYJ2uN9ZbL1ZJiuqCoJ8QQqOpE9EhOYyn8cxDpjGZ0eldYb2ltj6pqnA10XUNpkvG+3e3obKSeQKWuw7F4D84GymmB1JonF0/ZLK9QeBDQdRZCpKonyd4ZI0aldcL5HucTmUHpAiUUCsA7QpCowiCExItIIOCjT2uWlJhCoxjCQcFe5SMOtsb0nhzWgQHAGBDVIXR0QASfiF0xEEIaX0n1NJAJEjGTKtKDB1CEhMqJvIdJF1xjQ9lx6ZpzchP4u4YrboAUGRwbfu5B0Yz8DhnGGJEM3vRc15kQiWIUomH4TwzAbAJfb/OXfZFepJ/cJPaTIcPpCR96Flo/nGseEUsijr5fMa8lfUyh5kR1hG08xfwlZHS4zYeEaDAKWP4Aipr53Z+jj5DC3LW0YYJzHRMpkXd+Fhcj9BeIyw+hLhF6TjRHBCxiENGMcpjheSG5Luk1uUVAQA77qjS/02dhAHyvaRZRkD67AQQP1+zB+etdYf4kjH7P16TPxDUxBUZrz9hIPiaGjNef4ewYrx2b90SMeL2eZptULlnMhI94/YgR+k0kkVQyUS8CMrJXub4eF2LffvsbR4STMYHvGmQeqjYA5mG0rya3nNiv3GQSzPCKGF86+uD2lPG+/UNzWw5/X79uRp/vP3o270wQIZf08Jp4k3hyI6/RZxEGNa1RHodEkZxP9KO/x45K+QUayGHPni336NwxDLIYb5Itxo5Gt6XbbG/Pu/7w2kPyx6d9ziflP/55/Tn7et1+/7PhbW5cc2OOiyG/4Z1PnhkhLSYIEOGZPPO8zfN7OJnhgkDpAiE6hPQYCa+dWp6sIlt5xtl8w9N2gsbRux0+BirdYuQGG2asmp5SKV6+O+XxRY8wCsIG7T0hrrj0EaFeR9sVvbyDK86AkPAqL+mwFCbSY9GixMakxCSjS4pCMe5XcQEEATct+n+16aeC+CGF4M2v/BxBFcOe2UPUKJ3izCMCNsLlxZJf+pmv8FsP3sP1feqAPQEkcmwUX3vzjDdev4OzHbgOPcTushE6Fzg/vyIKzdXFU86fXvGN3/ltPvfZ1wFHXZZMCsXnf+bzvGsUjz54wETBeRtYXnWUIvJqpamNpiNQSkEhPEoqtIbWhbSND4LY9sQgOZtPODmdo5WCaOl2Pe89uuDJ937AG2cLLi+fIkOPqs+SFLMEnENGR2UUfjrBVBVOwna9oe3fx+9WuNNTjJZ4IoRk0C/rFm0009Iwr5L6Rdc0GBkpi5KiMrQiIuyOEkv0DiM8U6XYRYVH7V+G+8PC8OLWWlIWBu9jklnNhsM+0l29TxXOqatE0Llcr1mGI4IIKCwSwW6zpi8stW3ZdZ623UHwSB9w/Zarj96l6Rx+fcHdl+c475FSEaNju3yCMim+bAyBtuuASGEMi6lJDP4QCTIZuAuTvJHbpmV3ucE9uSI6T9vvWC/PabYr2ujY+RT/U5LqHEJgZyMTBFoJjE4GzvOHHzKpDEYKlJJICVppPvu5L/KZL/8cP3jn+1gEV5st33rnHVbrFY8vlrgQ+ernXkueGa7nD/7d7/LhRx8TrcMoiXWBxeIIWc558OGHdM2OlfPsNmuO5nPeLGZ8+M57zO7d4+XTBa/vGv7Wm6/wo3XP+XLH4ydLqo3jlXun/MJnPsfR2Us8XH2LP/7uD5keB+7cNai+o9ksWV3VVHdf5vTuS0npQykuzp+y3a6ZVQU/85XPcv9kyp999wc0bYsdjDZxeGnGEOidG4xugqIoeeWll1jMFmitmBaat+qA3wbipMJGeNhCUAV3phWiKHG64uTsHs42rIdxv5hWxElF37ZUE8ePvvl1/m+rJ/zar/4tfvZnf57dO9
/g/djz9e+DIrGVY+9ot/Dxw8AHjz1TI5gXipO3v0g9m+03B9/90z/iYf02v/BzX2VRldd7kdEmYvDDz18c/BylG5s2kQ7nssSYEjNNnx8N5Azi6IUZk6eJdw7XW/quS6F2+h67adgOsqJpE0liE4r0DKE0UiuUMmidSEhqMLLvSSIZZBpexHlTG0VEDiSRGARKDzaE4JHRpdhm5ZzS1GxdIoplAokPHmsd/ZOP+PZ/3PF4E/m1f/q/Y/dH36GqDBMl2LrIxcWSV840KiZCivOBzgMiooe2dSEQSEYbGwRSRVAKNfY8eZF+otPzWNPWWna7Hdvtdg9SZUD5UCEi5wMJVDs+Pt6Dzk+ePOFHP/oRd+/eRWvN/fv3bwBWh+WAazAwpwwkjoGsDJjWdb0HpDNBIAPSmayQAbuyLG8ApYfqCYc/xwztvu/5/d//fc7Pz9ntdpRlyWKxwFrLer2mLEt+/ud/nldffZWjo6N9+IsMMM5msz3Al5VC6rqmLEt2u93ec7yua5bLJd///vex1jKfz3nllVf2h9BDD/RDEHMymbBarW6Avrlv89qU+7ttW37zN38TKSX379/nl3/5l5nNZnvQMQP5GaicTqc3SBLLZQq7pbVmOp3uAeKx1/7h+BirmuQ+zUSUruv2gHQGzcf3HrLvxyoyWUEl97eUkuPj4/01f/zHf8yDBw9YrVb8yq/8CvP5fA/aZqJKHtf5Z65Xzr/rOubz+d7zfrPZsF6v92EfJoPE+TicznhMHdZhPL6FEIN3rdzPPe89dV1TVdWNsDa5X7KyyyGgnefCer3efz+ZTPbtnfs+9+VhP7Rtyze/+U2KouDo6Ii33nrrGQWLXJd86M5EkJznmICVgfj8M4Pyud673W4PwOfvch/k+mTyVVVVe7WQ29rysHyHJJNxOx6q6hwfH7PZbDg/P2e5XHJ0dMTR0dGeUDMe00VR7MlamZSRyVpKqf1cyYSgTBTquo6HDx9y//59ptMp0+mUpmlukB0yaD8m54zLnefmOOTHOFTVOBRSJqyNiSBZRQXYr4ljAllur0wSeZ4qSF7/c79UVbWvr7V2P3/ytXlsjNfVXOdMKJHyOhTTmGQzXvvzHDkck2Plj/H8y++Z/E7I5Kd8XSaV5XfEWCHqkFwzDhmWCT3vvPMO7777Lo8fP+af/JN/wmQy2atvjFVWcntnFY++7ymKgvPz8z2h8Bd/8Rf342mxWPCd73yHP/uzP+PDDz/kzp07vPTSS3tVrc1m8wxBbDKZUJYlZVnStu1+fH+a+fIi/WSnOP4tYwjPGFnHRs7rvVgczhExRKL3xBiSwmDwe7CxjymkrnMWiaXSEiOSioaIHi9sEiSOSTHUWYeNfjCUC7QySR1CaWSQuFxmqbBG03YW26W1xSjFZFFRFgWmKpMi5kA+yZLJfWdp2kQ+qKoyqS5qQ6E1+MDWNiAU3gXaNik62N5ST3b0rsNoyf07p5wcz6lLg9SJdHB8PGU2LdGqoLeO1WqLjYJZVXD/ZMq0rnHO0XaW1XaDlp7ptKAsFSEauj4ynx6nECmuoygrClNBlGid5n43hH+RhEEu3uGjoKiOmNUzppMpptAE59g2DUJETFWigyYGB1LQ7Lb0rqO3PW3b49ueZrWh6XYgFWVds21auq6hqgqqumQynVEWJRqZlCC9o6jqFNYmCpy19H2LDZairnjltdewXceTR4/oTzrq6Qxne5zzTOsZSgpa29Bu10zrKdZ7lNYY4OnFU05PjplNYDFd0LUFrXWUVcTZHtt1iGiRRKazGb13REIKlxw9skjqHG3bEZwjGo1WBcoUFMonwohSFJXB+YgxU6SaMKln9NaDUgOJII13pRTNtsEHQGq87cAFRCmQWoFRqKLE955+5+m2HaenRxCXTCaLgYhT0drAnfmUy6uIKRWFkoQA7bqh0FAXCo0H77h7Z4EKDUaBMRrnWoLbIbygUGDbFWE2pVQTOueSs1eUKT51BBEi06rG6BJlSqaTGY21rNcdqhWcHHXYWYNSEiNKvABZVngfEFFgtEFJSVlPkMDuaoXvPbVU2GZH37WU5RwbAq3tMaakqhXz2YSqLJGywPpAIRUezUcfPGDdrKkLTQwR2/ZIoTk6mRG8xcqA0lOctUQRUMLgQouWAiUCSjisdXhnKEqFoEAKPTi5RQotiFi0Fnin6H3Ahoj3iYmRzCqCEJP8eRgcbXISMZKdcIKLJBEYhZAKoZI9WisxhHpJYEgIEe98csLJ6+FAKMngXwrCE0khDVJZMvkkh1lIgOKgRiIEg6saMdtbYhyAmQRSZiD1AG5DhkRGiWIAUCN7OynDessA+ArEYM+JN1HcF1uBF+knOYnrX9Kv2VY1oi/cYFylz27skQVEYYjRJXDRNxSFpOsmSLFGFjVx3ROqCdL3WNdT+SsoJsTFa7i2Yfn0XSZ33kDEDr+z2Pb7lKdfJNonhOYcVR8TqjNEcQLBEtunqN0TqO/gNAhpEpVDyKHs2elwWIPEYDseSKR7VY8Y0jokxtob1xDw9Zpz0yYRR1fLgViwP/OJMSXkunVH7o3pM3GzxeMBiWAPLe+B/KFOmXDB9e954RzbGq57K5MHYiIWh5x3vC5/5JrfIkbfx/1liQ0yqiNcn1ev635Nd8njRpAdUEb57p+VCSNDq47wuhstcUAquDH+cl2fc27bw/w3CCnPkgJi/nxo2/GIeCbPW0gSY4LJdbuPVP/G399KsNiPhBEJ6fpldGgPGP9988x6uz1wfN1tDlfjaw/reluZx+flw/Z4PjHjdjvfbcSR2+xot+Vz/ezbQ9V8UlkOv483xnfM1Kkb190sRiaXpsgCIUZE6HFB8oWTLU82JZsGLBVrBWfTjvNWE0WJIrBxEwwat/kQKTrm8xlXT7f0YYKyDmEqnK4gaiDitaRUktJeYO0xGIEWgig8PkwoqoLWtURdoqPEC3+TFJXbjly/v770U0H8MMYwvfsKWx+ZTU9oiwWbvmW329KHBR89WXP5QctrE8PkZ36eP/2zH9A9+BbRJy9EYmJ2KyUH5rhIcpwuoCXUpcYDjXV8+KN3Obl7wg++9YdsguHh44csH33A5774WcTduxjXMy0LfvYXv4Ysprz3vXdARyoBb6jIo85xuesAwetG8HIleb/raAKc6KQYYIWgWV5w/2TO6y+fpUOojkgB73x8wbt//g6TviG4yHa1ZBJaZot7SGOSN0rfI5TBdi1al9RVgVUSERITCkkiTgze94lZ7vEueZdMK01Va/p2y+7yMYv5gtl0SuN7IpJAiXdrcJeI6BC2oRY91ldYikSOExBToKPE6oyJvR5ixLvkCRBcT7d7wiReUhYSYzQ7B5edpouOGJbUKjLVmn7ZUJcFm+U5T5c7XLNmFnqit0ip8W1Hs14R+y1RzAjB0zZrvLfpADdsSJJ3XAtREoPHVRozlLPr7B60tqsNzeOn2NWWdrdjt7uksQ3b3g2bl8TG1WqI2UmKYBVDis06rQtKYyBCsC1eJS1IKWBaT/nKz/0Cs7sv8+Dj91kuzwnzOT/60Y94cnFF11us8zx6e
sliOuW1l8/48PE5j59cEkNMcpghGaFqLfjzb/4JMViIkdlsys9/+TN849vf5fe/8z3+7mc+h7EPef/smNnp63ylKHnl4ooP1gF7BEVZ8vbJCa9PYH31lLbp6ayldD3B9cTo8bZns15S1zPu3rvLm2+XRO9ZrpbI6DFa03moFye8/trLXK42NK3FB5sOpRGMkigRiSJ5YNy79xLHxycUhaFUktfKwGkZWfoaHyyP2kgx1RzVJcV0zpd+5mtMCsmjj96nvXyCFoKyqpjO53R9T1HXKfyMWHGxbvn2t7+DffIB9snHPN1YZEwHX+cVq61ACc/9t2eYj5dMPehywqu//LfRpsBbi+0D5x98n//x3/4fmf2f/s/84le+QPDJuOj6jna3TeAOIAZlF1MWe6lbKVL4KCEFKWbhzT3dePNH/jNPlnxYHsaZFBKhDLqoKKbzBBwMxoEQs8fbsFkeNqLee4J3OJfAB+8srm/pgh9Ja0ciMj1ycIOTUiHV8E9eAzkuDGBZ36bQK9oQ+y1SKGKQiBgwgmQIjQEbQjoMOIs2E+gtjx4+5GhWwVrzZOl59OApcTelj1CRJGTlIHUbBfQusuwCQkkUcDIzlCKgpKBSEvXMZv1F+klL+wPaCGAbb5rHXu45BEVWF8hKB12X5KszcJk9vbPnctd1e2LEbDbj4uJirzqRgTO4jtU4LsO4nJno0bYtk8lkD45lZYJMSBl7lGfQOOd3GF7gtsPCYdvk1Pc9FxcXhBB46aWXeO2115BScnV1hbV2D/ZlQHkMCue2yp6EOfxC9ibPoGb2ej8/P+dHP/oR1lpefvll3njjjX3e4/YYl3UM/h+GqBiHEsjAZSZO5L7JBI9xuIkMYueyZUJDPhS/9957PHnyhOPjYz7/+c8zn8/35I3D9h2n8eEn/8zgdCZy5P7KoPCYDJDH4rgumUhyWOcQAk3T8PDhQ95//30uLy/5yle+wmw229e167obyjOZCDIm1uRx5L3fEz1yH2YCUQajx2DxuM6Hc288/zKgn+uflRgyIJ7n3ThsRn7OmICV657LNn7+WLmiHsCpcXigXO/dbsef/MmfsFgseP3113n11VdvtG+M1wpBuf3GeY/HWf57PK7G6i3GmBuknawsNCbh5LEZY3wGmP80KffbIclt3Md93/Onf/qnPHjwgPPzc66urvilX/olvvSlL3H//v0bc25MGslzOLdN13WsViuKotivU2PVivPzc/7wD/+QN954g8985jPcu3dv31/w7PzNzxgrveR2Oky5TTMxZDxmc7kPx9+YtHMY9mjcfreN37yOrddr3n//fa6urnjrrbdYLBaUZXmDwJLbfUz2G4/lMeHq8Lnj9Xs8Z/Lnh+P/sKx5vo77a9zG47Bf49Ay+d48L/LznHN7ZR6tNdvtlocPH/Lee++x3W5v9HsmqYxJL/lfHjO73Y4nT57w+PFjPve5z+1VkzK5RIikcJTzbJqG3W7H0dHRjbGY34X5Pblarfbvn+Pj473K0Yv0U5qGc002GSfZ4GxU3F80jPNk2BYxq0hGXIx4H5IjR0hOPW6wOzjnsG2HbXf4vieK5I0ltEDEQIweEZInvbOJpNT3NuHXMVKZCjEFIQMSSV2VRJLNxUcQ1iGFYRd2ST2g0DjX44KlDz1al/RdTwz5XSvwaLzQSCPR1ZxiMqFptyy3PewsAlA6GUCNFskJznq6zZarzQbnLNvVmuPFnPlizsniiMVijjEKSaTttnjXE/2Waa2ZTY+xrqf3HVKCUZ5JUTGbTKknNVEKpnWyJXR9hxSe6XSCiAotFZvNhr6ziEJRmQKlJVEohCyp5wWFqeh7i/OeptnStLDZrNFKI5Wka1M/SWQKVxGTokLwAds1KUSMiszPTiiriu1my9FigtELfIgsL9csLx+hhEBKR1EVRJEMv1VZI5VOIWaERJsSFwTR94RgadqG1eqK05PTpHxXGKztKaoJ8+kpfevYdp7pvEas19hdR1XPWG87QNPZJdPJhN12S/QOrTSTyZTV0lEWkrpWrNcppHIMAQL0nSO4CFHRW8eu8UgVODIFQieiqDaGGGQi/ZgJxlSEIACFlAYhNd4nwEZJTdc7yroCqYnAdFogCHSiY1rPMFojQiBGy2xWUlQFwTl0UVAUieBUGoX3ka5tKaYTlIpoBe1uxXxaUOhEAg/OYaTB2o5JUWObDX2MBO8IwVJohVSDilOMONvRNluqQmGMwnko6pp6Mme2WBEj1JMSu+rw3jEppxRlUnzRSuHdoIqlFAKP76FQBi0U02mN0orz5RJVGNCKi+WasizoO4uLgWllmM8XTCYzyrKgKGqULhE6ETM++vhDlldXnJ0sqIuSzXqD0or5YkppFH0/hFRyPUSL8wEZHSH0BBRCS3wIBD/IgUSRCBAxJNuQS2FqfAg4H3E+2U4CkiCH9U1I9uoWKGTMqhgAg3JRjIQoETLuCRMDPgeAdQEZrvd6Usq96it5Tzo4DcUYBvsuCClHnutyD3IJIQgx/P/Y+7NgTZL0rB/8+RLbt54185xcqzKzsqu6VlUv6kUtIRNSg8QAgoExsD9jmMEYN2M2mA0yuOOOK7hhuZgbxGI2xiD4mwECJNSSQK0uqZeq7lqzsqpyX07m2b81VnefC4+I851TWS3NDFxMd7lUnefbIjw8PHx5n+d9no+BEQ5Twy/eVqYOox6Nv6K2Z2gBuwbAqw8gZDucL8LaTRZ5bWaBaIBKIf05/4gs80/Lp+VHoRzlrC9amYhjceKj/YY79n4LStffF0IhqylhHFGkGQGCdPIYYecEs5QgGaL6p7B6CUOFMCmhjin3b3H4cMRyP8Gkc8LVq+S2IpRr6MmHFMqr00tT4iTIMPJJ1tMtdL6LMDlS11D9MQD+yM7AUdt3N9cq6s+d88mg7eDWfGeB3iGO9k7HG69WqVjY89K05AIJ4o8CWBvCQYMqHyMkLNyDOp3Z/69oh7D6XKIleDhna9uqxXov7k+PSB/NfRRuUdmOBVLGkW3J4rjq+Ph+t72eJ77XjNMsnHvxWkVrDXaCR8LRYH58T968bsZ+d/SlhRN/suJH0+OPU3IanGSR6PLDtZ+aWF1TJ7l4jifVdeHfxd83VTxRk6PaiobUs9geR4SRxd+cJE88iTjxJNLGD6vn4neP9/k/XnkiYeZEff44e++P19ed6DNPvtbF3z4pNuy7R7P/czSMqOOkL9r+4PwGECvqeKoEYTWXVgyjmSIzJaXTOApMEZJpyXo4Iy9ylAJhJ1R5juwuE7kJc7p0k4BuKZmLGIRGYLFOgqyQ1lLKiG6SQP6IaXUaGwiksBgCukozygwulgjZqOvXhLETzfqEt/6nlh8L4keSxAgdoJQlTlZI+s+Qz+8yGU/Z2j6kPAxQpaD3pc8TxAlXnnuea1vX/eZNK5TWGAcGSWEqtASUpJKCynrgtUJhXEkxn4DpUWYzhO4hpWA8Snnnzeu88Nmc9TObmMoQiIJnX3qOnXRO5842oTVshoqhVryRGj+pKcmt3LBbOk5LOBWHFJ2Ex0XJqTDi1RcuE8eaKPCSgXuT
lN3H21DkBFIwGY+Qs0OqZBmpQCjvKznPMq9iIiVW1LLpWqJcier2KeZj4jghTmKsatiQwm8S8NJTUkqMreXEi4ygzvQ0DpzTFHoJETgiM0PhcOWYgcyZ0icnplbu9BsEA0aXVEpSGSjKjDJPkdkepDvITohUmlEuuLuXMUsfUhQ5ripY6cfEywqHJS9LitmY2WiENDlxz2cfZPMps7nElCWuqiizOXmWkc1miMBnlzircDUrzDkQztS+qAVx5KUabeFQhaN6vAWHB8SqJLdj9meHBMLWwj7NNOwnGK0FpTtiFjp8VkQYKEIt0Epy6sx5lBSMDvY4s7HB537yq2xPM955912ev3qe2TTm8c5jJrOMUPuM1kRL5kXJ7QeP2B1PqfK8Za0KPGllOOjhXEVVpERSoQJFUZXc3N8lXu+y82jM//7Om3z16vO8cuYpdueHDDY/Qz9+zOXDHbqkuKokmT/mzn7GQAuY7tHXgnJywIEKMEL7DJ/JiFEQ0x8M6Pd8Fvj0cBdnCkwV8njngJWlPitr62yu7zOaziiqgjyvSIuSvPSZVALHYDhkbW2Nfq9PFGrWA8uGnGNKg9KCW9szKhVx6cwaSkdsnj8PpuDejdtkWYa1HnwYrvaZT6fMs4JQe1WLKNJMs4I7Dx6z0unwc3/5L/PmH/4hD7+3S5iXCGd5tJOS7huiRKKdo4Ni5bNf5sxP/DRWJeCmCA7oJpq4nPHwxvsk1ZRZllNWprVAkUJ4NRZnatuWopYBz6lKTwpRWhPGHaKkQ9Lp0en16HV7JJ0OSRQRBBpdP28NWcQv4hc6VM3+dQ0rF+Hl4JyfGK2Tx4KlOK8o4wjR7fzp6piqa4OrzWvXApAGawxVWct6l95/11pLWVWUeUmez7FBhzDqshYHRNYw2ZkgsSx3Q4rDFIekwhFqSVk54uU1hoMEiUVIwUovQh1MmU5nHIxTep2QSHviSScEJbzcrtEQSUNuIJCCfigZhhqpBZEELRc5oZ+WH8WySPg4+W9DWmiy9LXWLUCmtW6z16fTKQ8ePGhBJaUUs9mMLMtaskAjez8ajXj06BFKKVZWVlrLkkUywSJw1wDGjYrCZDJhMpn48a22o2iA6tFoRJIkrK6utuBco9KwqESQJAlJkrSZ6Iug4iexxK21TKdTptMpS0tLXLx4keeff56yLHn06JH3KU+SVg2iUWzodrstcDidTsmyrCV6NMCpc65VWmjON51OefjwIWVZtjYai6D6EbnMHgPVnXMtiH9S1aD5vFElaDLFq6qqPd2DY6SLk20CMJvNjimN3Lp1i/v373P+/HmeeuopgiAgy7Jj4PUi4PtJlhkNKSZJkrbuDTmgyWRvrCIaYkZDiFi8R4tklwYEXbRwmM1mrbVNE9BtMv6b4zf1WSQwNCB7Qz5ZtMJoyAtNPdM0pdfrHQOjF8uTSC/NpjDLsvb6Fu16GpJLU+/mN2EYHmvHpj819V/MkFm0TdFaE8cxeZ63/zWgP3gix7vvvsva2pq39asJP801L353UTGhAcIXiQKLZJzm38YSp7nWxf7W2CQ1x1vs6811nyQmLI5dTVncDJ8kNTyJCDKbzfjt3/5tHjx4wOHhIWmasrKywsbGBisrK8eILkB7LxrSSKfTQSnFfD7n7t27lGXJCy+8wPLycnu+sizZ3d3l29/+Nvv7+3Q6HZaWlgBa1YbmHp18Pk7aNjXPZkOkaN5v2nNRFaRpu8XfNW20SCpZJMadDNw8aY5oCG+Hh4e88847vPPOO/ziL/4ily9fpt/vt5Y0i/dgUa1l8TlbPGdT35PnXHx/kVh1ksB3sp80djUNcTHP87aNm/o17bhIlgLasbWp72LbNp9nWcbh4SE7OzvtfNOMC83z0pCWmmtv1EvAK8Vsb29z69YtHj16xPLyMp1Oh06n06pCNc/w7u5uO9+98MILx8Z+rXU7R+7v77e2VMvLy6yvr7fkzE/Lj2lZiE434AAskj4EtPtwdyw47Op9TRMUdNZiK1srfBjyvCDNMrJ0jivnhKqiRKKcRurm2RRUlfX/GW/XYqtmPhRgvHdzZStsnYShpEQLL3CAdMRxgNIQKJ8vD4B13tKzqhAIqtKggwApFVEUImUAUjKezJjNZ5SVwVmQ4mhcMs5R1PYyzkJeFpiyZD7LeLQzodPp0u3sEkrH+lqXTiepz+ETfRSCIPBKrloFKBUglEbgmEwnjMZjn/nqHEorQu2VUMvCg9kIn/0vlfTqKMYiRYBzGiUlzjjSYkJlK7SOvMKFc/TiGKU1AuU1B2yJqUoaFwitvGVMEIdeCN55gHo+n9NLQoyzpGnObJ6SlwWutryoioqd7T3mWe7VfaOY1fUV1tbWkEgqY4h0yMr6gCLPebC1xe7jx4z3M+bzitNnNlhaWUcgKIucKPQqHPPplE6cUGUV81nGcHmIE4qytEBOJ/FqEDoQKK3p9r2VDqJC64CsyLDGgDNUVe4TrkLNeDrF4eh1vCJYJ0no9Ht+nFcCayxx7PcyZVmCPFoLSAnGWZwpEMKrpuIsUgiCMCRPU6zD9yME83RKkgQQgQwiotjHR4JAY03ZEna1CvFJTzmmsmgpSeKQMPBKJXEYEYcB3U7MoJdQ5HNv06MFSRSglCOKYxCCLEvJsxRMiRAQRQHzXBAlXbQKWD21TjZPSToB86knZK4s9el1jqzGEHgbZOnlsf2aTBFECUIa8rykKg3dTh+J9MqgzuFMSaAVvaWuty6TXvk0CLzys0Ew2j8gn01ZXxqQJAnT2YzCVnSTCCWkVyM2jspCUdRralchhav7rkJYgcUTyZy1UFqMrZPPXJ3R7YR/HpTAOh8jEviscZ+jUyuhOFkDewIh67FMKp8p5axPbLGW+qAYe2Q5fDKD3eGTgKwxLSB6tO+SWCvaBB9vK3NkXUVNwDpaQ9WJFc4rmNQ3pSVoONGsPxaynuvztfCXaJBRQyOX78flBsTzQa42kb1FQhqLrT8+oPVp+bT8/2PxT0KtGX0MLD9679hTfmJPfhTv9XFhaw3ZbEwqNfmsQLuKznwLZXJEoKjUMlV2gLQjnFYIaygdhMkKHN5hVo3Qvaco06lXR3IVprMOhzeo+pcRQqLQVNbhpEL2LlKFS8jDEc4FLV9FLNatJhyIltjhjggRom4DZ2kU1Y6IaApaa/LFJqjHqCZOfuxcJ9qJmmziGoWOTwb8j5EVFo/jjsgsEtES0lx9ctmSPUT7vhUewBbtPQZo1C5NPQ7XLdU2zpPIDQ0QsKB8sNgO4EmCJ/aWom6bmm1zpCblFvrOMdJKXVzd3+rffhI4few8x2v7idF4T/Cp4wUsgPu+Eseu62M3/Yccs5nnWrLIsS/Uby3si5+UvPHDCBrNeZorPP7dk58fkSE/iY/xxyFs/FHki+PEpE/+7h9FtnhSXf4o8sfH610/0+1t++GUhk++7sUx7ZPb5olHdwKFQjvFat9iy4r9LGQ93CczCZGYUNmCPBO4OGClL5kdCKaTGd3lC2g7plCbOBkyspJebFhxM2a5YO5ihDJIpzDWUShBbAJkENM1U6ZuFWRBaQw
uihGTGbhlKgGe9Fo/u7J+zBf70v9C7OrHgvihgpDKGG4+esRs7wCBpKgEe4cpZ08V9DqOq5eusrS2ghWWz1y+xK031hHO+zhiwSjp7Ui0pheHlFpTaIM0jm4cYIXPSNdKoJUiwhGEgjDQlECWpnz03kcoYHjmHOVsxvrGMi8+8xQbnYDd+1vklWFaGIQzrGvJ5Vjz+twyCBRCSA6THuurXU7Pc168eolur4upUpI45OF+yqM7d3FlhcShBFhj6yCCRTgIw4DSWibTGdYahNL1RkCS5QZXGoZLa8xHXrJ3Nk8p8pJ8PqePaydgg6QqK4RzJKHfMNp6s+KM17YwDkoTk1cVvRCwBdZl6CLD6iUq2SUvK4yxKGlR0uGcIc1z5pMJHZHTkSk2DAijiKkV3H54yM7OyJM+rEEBPb0CK8sEWlGVOdVsRJHPibRAA9aWSGGhMv5GOp8B1I0kASHzwoPj7STL0SBoLczmOUvL6/Q6S+y89QPM6JBJNSMaBsQCJrOCyhhUbX2hhKiZq/7vQGusNYTSeDkzQW2fISktWIRXP5CS3qDHpReeZWs056NbdynyOe9e/4huFDDodhmFU+a530gnGvrOcDjNmY7HRGGIkr7vCSlQgQ8UW+DU6oBeFDMfzRmVBfO0YmlzSOYcbmr5vTvXeZAk/MU/80vc/c7vkSnF+umnOdcb8OUv/wz33voD/l//4b+Qq4Bn1ntsDAJu7U746GAHVIiUgk63j5gEbD2Q7GjJW9/9JsaU6DAmqyoODsfc3doluniGzc1NHuzssbU3ZjRLKSoftAJI4piL5y+ysrRE3OnT0XCpVzHfHaOXTkGaQpTz/MVND7pax8HWXfatYTZP0c6hpabf6+LyHCUUK4M+QRgihbe5kRJOnTvL/+X//Oc5t7nKCy9c4T/kh2y//Q7SGg4PK2Y48qJCOM3w4hV+4n/7G3T7kWcvohACOp2IpBNy9sxpNlc67D3YYWdni8nBIbP53INNVUleFAitUBKKrAGqsjpIV1JZnxHmMx2kJ8CoCB3FREmPzmBIb7BEb7hEvz+k1+/THwzpDwasLw0QgBUKHQQESrXe1VI2Mpk+S8I6jgVOfVmY7BeDpy0N0auSWAHSaYQGFbqjxVn9j7UVRVHigMmtt9EfPWTr4T3CqI+x3jKr108YTTOsE1jnCOOIOFToU5tsri3xZlUghSDpxgzDqQczjGlVS4SWdJRCC4HCIayjCiTUYx3SoZRAITDuKPjwafnRLB/bWCws4BvrkAaMnM1mDIdD4jhuA155nrfKFPv7+ywvL6O1Js9zHjx4wHQ6pdPpEMdxC9TP53N2d3dbRYOlpaVjBIOTZIMGXHbOkaYpd+7c4c6dO1y9epVz587R7/c5ODjgzp07bG9vs7a2xtmzZxmPx9y9e5fr168zHo9bK5g4jrl06RJPPfUUKysrpGl6TBGkaYdF9QchBHme89Zbb3F4eIhzjvv37zOZTHj66adxzrUWHf1+nyRJ2jqtra0xm814+PAh9+7dYzAYMBgM6HQ6PHr0iMlkQp7nrQ1MA9A14GNRFBwcHHD9+nUA4jjm9OnTrKysAEcgZZIkLQidZVlrM9GoTzTZ95PJhHv37rUWIv1+n9FoRK/Xa7Pim981Si2j0QigtY14/vnn0Vqzv7/PnTt3WlLHZDLhrbfeYnl5uSVwCCHIsozpdMru7i7GmLYN+v1+CzgbY5hOp8RxzHw+ZzKZcHh4yMWLF1t1j+l0ekyRIssyzpw5w9LSEkIIPvrooxYcD4KAU6dO0RBhoijic5/7HE8//TSj0YhnnnnGW/fV9W7sXPI8Z2dnh6IoWF1dZTgctt8LgoBOp8Py8jI3btxgMplQFAXdbpdTp0619goNKPxHbfAW/1skPzWEj4a0tLKyQr/f9zLkadqqYzTEnclk0vbj5eXlVhGjAagbEklzHY1dU57n7fmb57ypW7fbbZVgsixrCVqL6geL1ieNHQ0cqZ4Mh0OMMW3/adqjOUZD3GkUPuI4JgxDjDHHFA+a57NRMWisp0623ckN9uJrKSVxHDMYDI71yaaP5nnO48eP+eY3v8nnP/95vvSlL3H27FnW1tbodrutikO3222JAwCdTofZbMZ8Pm/JBTs7O/z2b/82d+7c4W/8jb/Biy++SKfTacczay1bW1ucO3eOTqfD6uoqk8mkrWdjdzObzVoyQpIkx9RdGuJd85vZbHaMXND0oWZ8WFQLakg+i9ZXDQGjsa9p+kejMtHc3+bzxs6lmRv29/d5+PAh169f58UXX2RpaYnBYHDMfkYIbze0SALJsqxVBel2u8c+Pzk/Nd9vSFhxHB9TFWr62KJ1UdPHgNZKq9vtts9SQ3ZZtAVzzjGbzYjjmCiKSJKE+/fvt+fc2NhgPB4znU4Zj8dUVUWe54RhSL/fb8ew5n419WzuVVOPps+Px2MeP37M+++/z7e//W2UUqyvr7O6usorr7zStvGNGzf4D//hP/Dw4UO2t7fZ3d3lK1/5Cr/0S7/E008/3Vr6fO973+ONN97g7bffZn9/nzAMuXjxIn/n7/wd1tbWjtmbfVp+zEqzh1kIdj8pyNwUK+oMTOdqH/N6XrOuVv04+ryyhtKUFFWBLTKktlRCUrrGVxq8oqmnliAlOgzrDHm8F3szntfJHsZ4wqOswfqyrLAItPb7dVHHDBC05H5rvfJnoDU4SEsHWG8tO58znc2Zzue4OiNfYhHOUFlDlpfgFHEYY3HkpSUrCiwzovmcXhwTCMF4OqWT+Oy0OFJIDFpCFAZ0uglBFBKGiulsxsH+iMnhhLwoSboxvW6HTichp0KpEOcEaZYCAh2EJLFPYBAyAAtxrAjDAInEGIUTmrK0VKZCqYA47CBDjZSKvEgp8gqlI4IwwDkoq4J5PsGYHFOTJoTyGZamsiA1SksGvYQqkiglEc6RzlLieAhCEIYBWK/WMN7bozKVz05WCmMGJHHM+ullVleWqCpHZQqmo33KdEIUd+l1e1QmJ8tTnBBM53N0oNHGMptnrC6vYK1X9fLwSK2QWyeO2dIihfTqp0JQVXVyl6mBbCHBKYrcUoW+Xw56fX8tUtUkGq/WYIwhirS39qjjTVIISuNtfoQEY0ofI9OqttSaIIUkShLS+QwhJEEUokSEiiKWKkeRl4RxTBjHKKWZz1LyvATr7ZeNBa1DTJajlMSYCiEilBYsryyRBArhKrCWONR0OglCBkRBB2sFeeEVYuMoQDmfEFUaiDoxYRCyfuoU8+kIrSxSOMJAEgcarRVlrU7hlYFBGoGpKiwOLRWdbpeqVIzGjxBItAQlKrSssFUJKiCKA+JAIfGAX6Q1siaS5GWJNSXrS0PCKGIyTzFVRScMUUIxn+dUpmiJj3lZ+BhGEPh4X2UxpkmRq8AZhNRQWYwzOBGCDBH4WK5f39VWus6H+aXzcUNbj2xSCK+WI2RjAONBsDqG2Vi20JBHa+KHwLXxtGMgUH0Mi7eOsc4rH9UIYG1HY3CqztCt46J2AUh2DUpR11K2RI0GrGvAwfbdI9LrEeujDst4QKi5Ng/qNSSY5t
dNHMrWPzuW6/9p+bT86BbPhKKlL4hjH7R/Lf7gSU+Gw4CrY8BVCbqPE1OUcoTDpxhN9uj2lgnCDiZaQQmLt3sKsOk+otxieOEncSqmGt3H6AAVr2K1AAuBUJRmhhw85fc9MkdbhUNDNfF1FY1yNC1pwD/5DSgPR8SPRfKXH5vaLx0jtaiaeHb8mj2xofnecbWJhkDyBMbI0W8X3jvGOTvxO0E9RDZx8vrdBkY6SUCxNbnBE/q8apGs15OCRVWEuo62vqMfq2+NVzWclGbM/QQ2xklA318HLUnliCxT15/20Bxdnf9+M7b7yz5JN/HfbvH9ut4N+YKGYPOxKtYKL/V3rXAtsc8d/1qdpNr2jo+Vk0lJ7euT99sdr4c4UdeTpIi2XevG+GF7jrYdTpzS76GPCJVPJBg1dVk495MJGr7NhFi44aIhTy501WNjyNHvP4nM8qSY38fa4hM+/8TyQ7gei4c8+vtJX27II4vnku144hZ+144QbSNYrFQUVhAHmsd7U7SEUmQ42SV3MeiBx2IrA3OPExMGBG5MqRKs7ICrwAnGuUTIHt3QsioOmBchuYuQKvDEf1GRuwQdlETZLnkwIBIQAspOkK6gIkJLr0DZjlXNcEf9rPwvhK5+PCIoQrI3K3jzzbvkkxmVyZEqYmI6PHhcckHtcO7c11D9DkVlWV9Z4ezTzxKIPboKKgGBlMSBpBNK4kiBFERakZkCYasWCI21JpaWnqrIhAMUWWnIS0OA48NrH3ExL1k/f4bp5IAoDDl34Qz9TsTjO/cZaMFZBLZ0lHVw4kIoSQLNzaxkzVhe+swlOv0BSkoCUfFwUvLmW+/z1HJMUVtH5KZkMktZU6GXFSxybzkB2CxFSUmZ5xAkhEmHTr9DEEcgNaWxVPtbdLRFRZoo8Bkb4FBRgsFnBXbTlM3lVcqy8jYdznhWvLXkRUle5JjcUASOpbgDZYYrR2hjyEg5nBSkaYYwJZ0kqCUZc7qqIO5KAqUJeglpBbe29pmOZ+TzMUVZARYtJEXhfd619iQWUxSYMsc4hUTSkSWlW5BtlwLlDEs9zRjDLC/wM6xn1zcTdrOhms7n7PzgTYLSoqgI+x0qEyFdgShz9iY5AT6Ao5VAWoOWfiOOFAx6MVlWMikdQqYIfObGeJahSsXQVIz3tkkCizGWmw/3eLyzQ5FnlPmc2SilkySsrQ0ZDnuk27tkeQVG0ksCurFAKf8Y60DTqeXP4zhiPs+xpuD5Zy7wsz/3da6//S7/7Xf+B9lkhjy/wtLZDapSMrt2n91C8f/4j/+DU3s3eWl9mf20QPZjfvP3X2N+uEsVJdzbGVMh2D2csjNOyYzF7O/SSbpEUUw6HTEdH7L3+B4SQxDFXiGisqR5ztajbbQEYQ13Hu0znuWeHWwbuWvNmbMX2Dxzjn6vy9ryEpeGIUz3Odw/4OrzL/Kbv/U7/NRPvorL5wgEk8kh0zSjE2jWopgkipFJjEpCdBShZAOEaqbTOUGY4GZTpnkBcQeE4NRTl/k//d//Dtd/979x7533yA8eU+wfEvQ0Gy9/gRf+j3+V9ac3oRr7Mbn2/y1Kw4OtXX711/4z/9f/29/msz/1As8Jhy2LOvDiwFnvV1yPQ5jKW86UBVWRUWY5RTYny+bkaUo6PmTr4X1madZ6/+bZLsXkMdv3HdvCZ+XEccJwdZWnzm1w7fvf4bfeuMVIdOj2hwyWhiwvLbE0HDIcDBkuLbE0GNDtdel3vX9zEkdEYejVROrAUbt0rWfTdqFBw0pxR5OsczRhDufqCdn5gNP21gOmBlzUp5ARhjm2qp9TJSiqOovFGg7GKdx6j//0b/4F2/v7XOpKVBhwqhOwm5XHJr+sNDgpUbLOaMJirPPXoCSB8jKrfqpvAg2fhgh+lMtiRngDljVAfmNL0gDSJ1UKxuMxBwcHTCYT1tfXefbZZ+l2u4zHY95///0WcB4MBvR6PZaWljh9+jQbGxtEUdSCoCe9NBcB6EY5ogHhwKtONCB0o1pQFAWz2Yzl5eWWLPHw4UOqquLy5ctorSnLkoODg/bvyWTSAtqL5I/F+ixmms9rMtpsNmNvb4/9/X3W1tYoioK9vT3KsuTy5cvEsc84fP3111t1BfA2D+fPn29/c+fOnWPKE4eHh6ysrLS2Lo8fP2Y6nTKfz9vM+tXVVaIoYnl5ua1nAw4vgpeLm7Yoitjf3+fBgwd8+OGH3Llz55iCxGw2a+9/t9vFGMODBw9aks18Pmc6nZLnOaurq+zt7bVg+EcffdRaHRweHrK1tcWzzz7LCy+8wJe//GVef/11rl+/ztbWFnt7exhj6Pf7rK2t8fLLL/O5z32O+/fv8+677/Luu+9y+vRpiqJgNBqR5zlf/epXCYKABw8esL+/D0CWZezu7pLnOS+88ALnzp0jjmNee+01Dg8PW0JDozywubmJEIKbN2+2Vi9Xr14lz3Pu3LnDN7/5zZbQlKZpm7V/5coVrl69yuc+9zk6nQ55nrO9vc21a9f43d/9XXZ2dpjNZgRBwGc/+1k++9nP8sorr7C6utqC9ovAd1MWn6Pm3t+4cYObN2+2APvu7i53795lNBrx5S9/mbNnzzIcDnnvvfe4ffs2eZ6zvLzMl770JU6dOtWC7Nvb29y8eZN79+61tkRNv3nhhRe4ePEiSnlJ+Rs3biClpNfrsb6+zjvvvAPAYDDg0qVLLZkJaC18Dg4O2N3dZWtri6tXrzIYDAjDkN3dXQ4PD5lOpy1h5OzZswwGg5boMR6PWxWgwWDAZDJhOp0ipeTs2bM1wDLl/v37LelDKUWv12NlZaUlJjT9ejF7YlH54aSdRhiGbR/OsuzY837hwgXKsuTw8JAbN24wn885e/Ysr776Ks8999wxNZQsy1p7nIZ80KiI9Pt9VldXj9nhNM9nURQ+A7m2gmrGscYSZn9/vx2LwCukrK6utmMxeOLHdDptCRlxHLe2QA3RqlHaaVRjnkRoa/peU484jlsCQ0MmCkNvqZckCb1er1XpGI1GLbGqIcc1JJPPfOYzrK2t8eKLL/L888/T7XZJ07S1J2rmmul02pIKkyRpiTjN+Ni0RUNkWfze+vo6TbZ2Y9XS/HZRUagpDaFib2+P8XjMrVu36Ha7bGxscPnyZS5cuNCO1d/73vfY2trCOUe32+Ub3/gGg8GAc+fOtYSxpi/v7+/zn/7Tf+KDDz7gwYMH7bOepmnbHnEcH1M1WmyDpt82xJnpdMpoNGJvb4/79+/z+PHjto9JKel0OhRFwd27d9nY2ODs2bOsrq6ytbXFN77xDT772c96a4Ug4L333mM6nXL16lV++Zd/mfl8zrvvvsvdu3f5r//1v/IX/sJfYDgc/tELg0/Lj2Zpp6I6eM1xZZ2TRQofWPcZVc3OwJNAjKvtDnBQK3NQ7/utdZRV5c9RW16CRGgPfksZoLWpq9SM3T4h31ivZGFKbw/j7Wu9RazfZymkrhUcpc9ed85SlBaLwljhzyW0t5Y1jmKekmU+rpJmFVnhyKucvKygqoi1RCtJWRYeXI5CtIRQCVCSvLLkaUqZzVga9
GBeMp/N2DZ+HOwmEb1OzOpKiA4d1XiKVjNMlaNsjhKGlaU+Sb8LKEojkAR4AQFHEEZ0OglhFNPt9EEohFAIqREK8iLDGW9xYoxBqoAgjLzaQxAitcaYCqRFSIOrSsp8Tpln5FnKdDxmNh2TpimVMRS1KoVSGmNABxGdToeqrEjLjNk0q+09fJcxpkI46Hc7dJIE4SylKbHGMN7TBEFIvz9kZXmNThKQZYbJeMRkVjBBMU66dPs9ojAgCGIiHTMZTVjq98mrivl8xmDQQwqLlGCNoyxzKmM8yUd4tQdZrwGqMkU4CIKIrMqwlUUJRW4qb4/qHDqQ5OkMrSWlUnSSmKqs0DEtAcQ5h5IaIyyVdRSVTzByzlCUJUEYktUqo0EYUlYlSEGYxBRzTxJJun0qI8l1jgojgjBBB5rxZMxkMqeKQqwpqawnrygNZVFgrKEbJyil6Q2GxNoRaEFV5MRpgtQhUkfosIMIInQFBBZqpZi8zAl1gkBjnEMHXsUtneyhtaTb6xDFoSc2OEFRFSRRpx0GjDM4Z6BWQSyVJ1FoibfvqXxMzVTWW8EIDc4hZUUUxISBpCoyMluilGZtZYhSEfMsJ4gClJaYsvJEIyGJ4gSs9XZDBhABTkiMrTC2QliDBqIgQMqozmD2sWLhJZcprfDxVQTGSa9N1JBAkLX1r++0SvjfeIULP+RZQFiHwHolE+dq8gbgfIKfq4khPiZ9BNo08v/CSU+Ic9Yr7LjajtI6bA2kSuHDVUegzyLw1AAw7gitEXVWfnPOBssQtLhtY/XQjteisXA5AmrrBPn6u669Bt8C4giA/NTq5dPyI1wa2LvBLj8OrjZ78CNVRW9hIo+thfzvlF/1VAXYEqe6VPYRQRyTHd4EF+NsirGOwPUphUBYRTm6hbYVeuUZrAyorCFYOo+YbZGN7xItX8Rhsckp5OQjbD6GoId0oSeLOXDC4lBH5Af8+OYfc4vXyWivtv6rolE68VZ+i+Qzd+LfH9Z+/q8jwtviT/37PtnyCcQAjmP9dWWOgeB24UxH31lQ5hC10rvzyYjSlfV4XFtEWD8XYhxOOiwGJSUNF88Tey2qIZYI6ec7HIsWIg1nY5Fo0tbsCdd2RIZYuNCF/iUXwfjF9vgY/0TU97Teny8wRly9Pm9RhY8dUxwB8655Xc897eFrAqADIf0a2YnmuP78RyH+j8eoFveqbXsctdpRHItjnA5/3KaNWlDek6fg6PwfJ1D4o7X2k82k3ZIVjvrwyfjPYr1/WBGiTsS1DtvModTK8gC12pYWikI4JF6pXSoN7ui6nkTm+GGEkCfV8eT3n5wstoAhAdTz+NGx4OPKPR+3fnlCLY7aVdhjHfN4vfw9FNbitGB7Z4peuQAK8vKAXMe+RhYQ/p5ZV7tazEdkvWWQAyQlCI2p76tyjlkpEWKVJDIkZs68TClFiDMRUlXMXZ++OqSsJtiwRymgKi3SgtLgrPCWlk3vO0l8+V+4xPmxIH444J0bO+w/fIwpvJ+nQFBWcHunQtkdsskOg+EqCEuSBDz7/PNkN/+QjrIEGlLriLSsFT0EEkcgoawHZqEUWW5xlWWgSwJlCRKNjmKk8IvwyjrysuLOjTuUZcHaxXMUBrq9AafPniHq9rh3/SMuqJxpXvEwqyilZGoFeWEJY8uFM2eIuz2KomCQaA5Ky9vvfoSqKoxMQDt00qOaT5jlJVEY0hmsIHDEYYAUUJUFlXPeF14oDH4zks5n2NE+VVHQSyLCJMFoiRKCUEnK0uCkYZrXG900JwhjgkhSOYGxY4qiIstz0nlKnmeURc7EGaYdxbmVmKQzoEhnLIcZh8WY/a0tcIZq0GfQ7zPoRAzjgFhDt9thauD+9g7TiQ+4mqrCVbUEtFY4Y32AQWmcMa1PnLGWQCtQjrQs0brr1VCEQlOiERRl5ec8ucA4dPaICS8caZozOjxECcFw0EPgJ0ZpDVXlGM9KTkeCUDicUli8jKNQklAp1lf6ZGVFLlP0oylSKNK8oCoFiYiw1tEbDLEmh8rx+NFDZmlGWWa4qkAh2Ds4QGpJL0mIo4g8L+nGAaEOCeOYbr/PfDrmzOYZoqTPhx++z2yeUeQ5w8EQ2VnnvZv3KeOY9acusL+zx+6NbQgjht0eq50E+/gjHJY3Ht5n53CZJSdY73a4eOUpitmY/aziYJ4xTqu6nSXaWYr5lPFkRG/QJZ2k7O8+IoxCdBjhcEjnMAbKynA4mrC3t89kMiNNUz952qNMrCTpcO7cU6ytnyKOItZPr/Hyy8/y+//lP3HlSz/L9vYDPvflL3NmELL76CFlVtJRAVopkk6XcNgnjBOclD6jBxDKZxtVxqLjmJXT6xhrmU8n/N43v8OplR6dOCYKBL1Xf5rnXvwSodbIYo4loHNqE9ntsTvJQEq6cUSotM8ommc83D3g1m//F37+z/wyz17cQAiHCEKkVK1FSm3a7OcmnaDwNjBhsyhqVkbOB2yeN6Zl1DrriSO2LCjLnGw+ZT4+4HD7Hjtb97jz4bvc39ri/q0PuT/OMXgrFOss1vmNgQoCAq2ROmQw6KOFBB3QrcHs5f6A/nBIf7jEcNCn1+vR7fXodfv0ul26nQ5JHBPX1jNKqna5O56MOZzMGSwtEwpLOZ9yuL9HWlmK2ZSgG/mFBxBqT87ISk/8iAJNVqTMD7Z5/605tipYjmIipwm6HUQ+8nHYhndSr5Olw3sa+wtsmdSmDtg6d+QJ92n50S6LxA84AgcbsK8hfyilWqCqyRZvwKv5fI4Qgr29PbIsI8syDg4OGAwGbZa4lJIwDOl0OgyHwxbUXLQdOFmvxfrBkcVHUxYXzQ3w15AYGyUHgNOnT7eKGg1I22TVN8D84nlOvm6+d/HiRR48eMBwOOTs2bN0u12GwyG7u7stAWARFG2yxoMgYGlpqVU4kVKyu7tLWZZ+/KitIJRSDAYDut0ujWpFFEUMh0M2NzcJgoDhcNiqaSyqozRgagOWN68bwHFvb4979+5x584dlFKsra0BcHh4yHw+bzPwG6B6e3u7VTV56qmnWqLMvXv3uHXrFhsbG3Q6HU6dOsV8PicMQ7rdLmfOnGFlZYUoipjNZnz44Yc8fPiQoii4dOkSxhj29vZ49OgRcRxz9epVZrMZo9GI999/n7IsWxJGv9+n2+2SZRlbW1vcuXOHM2fO0O122dzc5N69e9y+fZvt7e3WYmd5eZksy7h9+zZ37txhZWWFzc1NlFJMJpOWsNMQfMqyZGtri+l0ytraGp1Oh16vx3g85t69ezjnOHv2LJcvX26VFb773e+yu7tLEAScOXOG7e1tbty4gVKKU6dOcebMmWM2L8c2zyfIU01/GY1G3Lp1i8lkwmAwaMkvOzs7fP/73+fevXv0+33u3r3bEhgaosirr75Kr9cjDEMePnzYknuaY+/s7HD37t2WMNDv9ymKgps3b7aKAOvr6y1Z69y5c5w7d+6YakujQPHhhx9y/fp1dnZ2WpWTqqp45513uHfvHtPplKIoCMOQ27dvc/78eV555RUmkwkfffQRW1tb5HnOqVOn2N7e5vDw
kDAM+drXvkaapjx+/JgbN260oH8Yhly4cIHnnnuOpaWldixZbN9PGkOa9n348CEPHjzg/v37jMdjlFItGWZ1dbW1Zvne975HVVXcuXOHXq/H/v4+Fy5cYDAYtISPRcub5jlrSGiLhKkHDx7w6NEjXn/9dR49eoRzjueee46NjY1WYWJ/f5/33nuPw8NDHj58iNaatbU1Njc3WwJBQ3R49OgRN27cYHt7m/39feI4Zjgcsr6+3tqqLNq4nGyTps5NvZtxL8sybt26xc2bN5lMJu04dOHCBTY2Njh//jx5nnP//v22HRvljpWVFV599dVWYSbLMnZ2dkjTlDiOcc5x7949Hjx40CpiNDYljQrF5z73OYbDIUopDg8PuX79Og8ePGB7e7tVq1lbW+PixYt0Oh2iKGrbfXGsbsiCDRlEa90S8m7fvs3Dhw9bcl1zf37+53+ezc1NtNZ88MEHvPvuuzjnOHPmDEEQcPfuXXZ2dnDOcfr0aQaDAVVVcevWLT788MO27zZzQEMQap73xbm16a/W2mNqLI0K1dLSEqdOneL8+fO8+OKLrK6utuohjWLR6uoqly9f5tlnn0VK2aqEHB4ecnh4yObmJg8fPuTg4IB+v8/6+jrT6ZSVlRV2d3fZ3t4+Zvv1aflxKw1xo3n18YD20feO5ihlvUS4z56U2DqjtQn32lodUApBqDQuDDEu8oSP2rKlNA4pHQqfRKACr+RpjAHh98POWowrsbbCGlGv5SK00mRpjrSFVymo1QyCQHn4wTqsFSihsaZ+5ozBlkcBWx1E6KqkKCusc1TGMs8ycA5jBencEgWOUCqcNRyMp1gBGks/UPTCkLyoGGUFeWbodDRaOIIkYF6W7I3m7I0DdkcTBl1NrARJGBDFnowbBhFKa6TQRFFAFMeEYYRzCikFSkkPAtU2GGEUIJXDWa+IZJ3DliCF34eqICKMOwjhYzh5NmE+OyBN9ymzGVWWMZ9NmU1njMcZ83lOlhvmhcVKja33zGGocUgORtM6jmVIq4zKWMIgxjhHVpSeXGMMUajphhGRhED75JlIQygsZmVAOd5F6pAwSojCCCO8WvBsekhVZa0KWBAmdAcJZVkyjCOy3M+tva4/p6kcpqoIlUQIrwrm8Cofup4XAyWIkhBTFRQ2pZNonPPWIFEQ4VxFWaRAhDUK42ICCY6SyiiKskKqAKEtaZYTxAlZOqeTxJRlRZkXaCWZjEaEYUCe50QqIIpicJbUgJZeuawyFTrU6CggiAKMNaTpnLIovB2Rc+QmJw5DtILJfO5BmKUlBLVdi7AeytOaIJ6BDlBRggpCHxdTCus0WvuEIJFWdDo9v0bIUvIipaytNzudHmGnS7ffRwUBVeGJiSvDVb8WUApR1YQIKdEqojIQxwn9XkyknM9ux5FmOQ5JUhikNgSBjxnM8pwsn5PEfQbDPp1OQppbhFREUlFQgIA4ipEO0nnKbJ4hhCRJOnVSkaMyGbjSW61IhZMSZIBQqgWLhJRYvJW0sa5OTPGgZgNgSVHP//6iaAwEPFmjWWvXICDak0CcBw2F82SQk+OgDz15dZDmM+MMwlmwEikBK2riio+duibYIhxCHNkjWhrbgoWYVjveGlpQp7Gu8aNwjaj5tZut+5IQoh3NG/umVnK2zUSvQe9mrexEXS27iFN+Wj4tP5JFIEE02gcNPP0J310Acz9+nFoRqkqJAoMVXRKhiE+/hE23CBgg41MQD6kQCFOS7V8n6awjemstISMSklIAvfMExS7lwQ300tNUQkLvAvrwJtXKs36cEMb/ygqUatSN3EKNGtuWJ1b4GClBcGRpe/KaT8YlPqltPvlEP/x7P0wB4ZPqsbhXbY5ROYtQEWAJqUhFQJkeENHDWYOUEMmYyh1l+rt6nPtkMN6vXxvs6kmXuQiCn0wKaz+X8og4AgvWOwtUhYZY1Jy3rePRuwjXjvknWuioutTjOvWefuG7rpGEWWhzqURLWgRPFDx69fGnor1WmmnnydYkTTssUjKaOan5/QKlwZMm6r9lQ6752GGPfnV0rGOUkvb6j7/+owkWzWt/n2hdBQTOi7gbS2ksgpJqvo3pbaJVhMLhpMY6r6KyuLc/VvM/Bljyw5635pgfvxZPXmqVdjh+TfWROaZccqJtTib/nDznk9rq6Mh1e+EQsotWkgpFT8wwREhkvZ4QtYqaJI5y5kqiki4dOyWnT6kDJA5pvLNFKcBpAcZiM4GUQ7TKGeoZNp0ibUwmBTM9oFPuMM8VedgFAYUxRFrhhEU6T5KyNXvLCdESr/449+T/2/JjQfworeOjD+9R5lOwPkvck6orSioe7GbcePdNrvQ3cCpGC8nlp89z7fEHlPaQfsdRVNSBTAfOobRf5CpTIooSrSRp5SiNIdAaITUq7BD2BsRKQuAIgxAjIK0M9+89ZpwWbDx9njKf0Vk6xeq6lyp/+MFHaGYILRllipFS9BX81IvP0OkPKAovX7g9mbE3mjCMOqjuEhWKODD0Tm+Sz4fgrA+UL63S6XkQQiLI8pKyKKmcYSWK6PUH7KQV83mGqOUYnfELbi0EzhrPMitzZNyhcgpRWfL5HKW0V6dAMTrcY29/wmQ0Jp2NKYscbEk3icimEUXWZWO5i9KKQEsunV1F4Lhz9x6iKgiVpaMrL3MahEyM4dqHD7AG78XbSj36BYC1lsrUmy1rEbijzDRjEVWOFd6WRilothqBBM+195PcIgetIX04Zz05JstwWIQKPDvf+cygbhAwKnMqA5HWDBJv0RGHADV4F0ZsbJ7CCYnuzXj9xjZxELRSsRWeuSeCmCLLsFZw+dIlvvfGG8xnE3All5+6yIc3b3FwMAIBxjnCUBJ1eqytnWZpabkOyICOujza3qGoLLPphDiOWFo7hRUB9+4+AAHDwSnKrOTwcJ++jkniPksXT2FMRT6boJ3BWNhTihvbO1yfzwikZHs0xRrL0rDDT7zyE4gq4/f/8DvMy4qDg23W1pYZ7e8RSEmgAoz1/rKtykxVMZ9PwdbEHVF70grQSoKFM2fO8/TTl1lfWycMNUWa8a1vfYf744IXn/kMt+7e5k9++Se5896bnjUnBFpJ9NISqtdFBJoKB6YEB4FWhEHoN9VKoVWFQrPUDSnHJf/+//lvuft4FyUlnSRmdXWFFz/3Oa5eOs/Fc2d4/VvfZK4S3HzEe7ceYBH8wld/gv/Dl65CPuXqmRWwhrwqGM2m3kO3mdSFV6E4mvYbRl+z4Kr/XiT6CYdwCiVbSqkf/I3BOTDpnOnhAY/vXufxg1vc/PAW9w9TCiRSawQZZVUHAkQduDeVJ3oFGi1TprakE4WMD/bYBtaW+uxLx2SWIZMuL5xdJlQC41kzBGGCkBorNXG3TyeJQUcUOiF3kjs3P2Q0Tdm8eJnNxMvtHtzfogpXMdNdZNDFll7lpBI+MOicAymx1hDEPc6srCAqGKcZH0wUs8kBRalQehklCh+Uc4IgkMSyllCujL/H7YLBszqVlgQK3+8axu2n5UeyNIvCRZCw2XAtqn40diFN5rq1ll6v5wORtRJGk8ndqAQ
8evSoBe8bkoJSqiUzNDL7cLQAbTYTi+UkMaVRD1gE0ha/55w7llEvpWQwGLC6ugrQEgu63W57rMVF96KCwOJmWWvNyy+/zLVr19jY2OCll17i7NmzFEXBeOwzKRs7jsbyQQjBcDjk3LlzXLlyhX6/T1mW7O/vMx6PW9uZz3zmM1hruXfvHmEYsrS01BIAOp0OTz/9NF/84hdbAs5wOGytMpoFewNIL6q2KKVaEs/W1hb3799nd3eXL37xi7z44outLcV//I//sSXxNBnojx49Ynt7m+FwyNe+9jU6nQ57e3v86q/+Kg8fPmQ4HHLhwgWef/55bt++zdraGs8//zzLy8tt/ba3t3n//fcBOH/+PL/0S7+EMYZvfvObvPXWW7z77rt84QtfYD6ft3W8ePFiqwyzvLzM5cuXW0LB7u4uL774Is8++ywbGxu89tprfP/73+fmzZucOnWKn//5n2dlZYUsy/j1X/91Hj161JI8GpWX2WzGZDJB1lmOQRBQliX379/n9OnTnDt3js3NTV5//XUePHjARx99xKVLl7h8+TKNYsG3v/1tLl68yPPPP88zzzzDd77zHd544w0++ugj1tbWWsujRnWhea6e9Pw1QHWapmxtbfHmm29y8eJF1tfXGQ6HjMdj3n33XYAW+D59+jTGGK5du0YYhmxsbHD69GniOObhw4ct+Hvp0iW01jx48IB79+6xu7vLqVOnuHjxIlEUcevWLd566y3KsmRlZYXDw8NW8aUoipa0JISgqiru37/P66+/zne/+12klPzMz/wM8/mcPM/51re+xf3791sLIecc8/mcz372sy255wc/+AFvvvlme593d3eZTqcMh0PW1tZ49OgR77//Pvfu3WvtP8Iw5MUXX2yteBqripOZFifHjKYYY/jBD37A22+/zfXr13n48GFLdLl8+TLnz5+nLEtu3LjBt7/9bay1vPnmm9y4cYN+v89f/st/mVdffZX19fXWlqkhZTXkgoZsNZvN+OCDD7h27Ro3btxga2uL3/3d320tjZrnt7ENevjwIWmaorXmo48+oqoqPvOZz/ATP/ETrK6usrm5ibWWyWTCO++8w7e+9S1u377N48ePCcOQS5cu8cwzzyCl5Pnnnz9GZDtZGqJE0+ca0tNkMuG73/0uP/jBD9je3iZNU8qy5POf/zyvvvoqq6ur7O7u8uabb/KDH/yAt956CyEEvV6P8+fPc/bsWTqdDqPRiA8//JBvfOMbrK2tcfXqVYbDIe+//z6/9Vu/xcHBAadOneLGjRscHh4SRREvv/wyKysrPPPMM3Q6HR4+fMhrr73Wtl2jaHLhwgUmkwmXLl1idXX1Y0ofi/NDM/426ilZlrVKRF/96le5desW165d4+233+bSpUutmsyNGzd444030FqjlOIrX/kK3/jGN7h//z7z+Zwvf/nLLC0tYYzh7bffZnd3l9OnT7f34Hvf+x7Xrl1jd3f3GImuqWNjg9QQPRrLJWstw+GQ8+fPMxqNSNOUr3/966yvr6OUaomUvV6PZ555hp/+6Z/mmWeeQWvN7u4u3/zmN5nP5+zv73P58uVW4aQZw/f391tSYtMHPi0/nsW5o+zC9o2Pfec4+QPA1WnkQgiUqEEI4WXGDWAsPoveCaIoRNsuJhDYSiGdqRUESh90toKyrKgwOGMwpsK5Cq2Vt42gAmMJZYCQChV41RxnFWVjSeVAO5DOUhrT2nhopWoSiSewm9xgSkuRZZTWYKoSrSyBqnyiiLEIoJ9ElEVGaRx5JQiQHkxQPqligiU0gm4QsN5bxVUlUeT3TUqWLC13KaoOj0dzdvYOeLTrSKKEThjQCxXLww5JJyTAYUxKZSyBU36vaSXOQZwkiNqmpipzlJBoGaGkj02UeYWxDiMC4s4QHSiKbApVSpVNSGdj0tmU6WTM6GDGeDRnks7ZnxTszWBSwKRyXmXUgZKCQaIYdgOqPPVZeNLfdwFoIbCmBERLlvBKHDnTtGBWk4DMoU/0CYVj/XDGSu+ASAmKrCJJOqyt+EQILRXFKGNSlPSWlqCqEFJ7QkkFg16PsjJe4UNCFAUIYdCBJtQRtrQUpqAWYCEMNdaUGGfRQUhYVRTCJzYVhUXQZTYfe3VLR53VGdQkaYPWCiVFrWopmM9TYqFw1vkELWuJ4wQhFFlaMRgMMTZFS4WpKvI0RQhRW/tlZNmcpeGyT+RSIU44OkmAtF0ft6wsxaQgTmKkq0iiEKE0eTYjjzWSEBlFRNGAijlBnFA5QPk+UdYKqEoGhFECzjJPvVquMYZ0XlKWBdb5hJ240yPpDoiiLlrHzNMxRV7H4mQDWIFSgkA4lDRIZwkDTX+wBK5iPJoyzmZMM4uILFmZo40grHKsg1Jowiih3wnRWpIVJbO8IlCedBMHEYFWWOFt1kS9d6lsQZFnmDLzSjLGEgYR6NCDMEpTCYl0EmtdfQ99frlAoGurZ0XgrVykt/CRTqBUUI9xAmcsBnuUUV3/DkDWMhqi2fc6sBwnRS6SN4W1XinDeTKIcAIrDdYKDLVFjBTehqCh1NX2OkIIX39RE1VoYqk+TtpkrAosAomwdRa68GpG1OCdFd62oLUmqMlbDm+Rpeoj12hv2w4LV4STDYD0afm0/CgXT/paBMtPxjI/CWz92F5dWCSSMp2jdUJeldiyQOo+eZBjp/sIu4YSUMwPcZMtOivnQfdR1lLV82olBNoFVCpDRmtIGZPufki8fBmlE+huICb3Ef0LGCFaUqeQfo0gqPc9NdhZ//8TrvvoOkQ9HizGt/44qgT/n5QnA9aekGYX3m5Ejp503ifV6bjyikZiEE5QWkc8u8myPEDnhkIOmLsLlMEc6yQC1YLkR4D4k0gpsLjOPfr+CbLAE8DyP7LdahvCRYWIhvzR4BmivoFHGEf9fw1w7at0hH18jLTiSR5ONnUUNSmwvjgH1taAOE0/cou8k/YcC0yUI/pAfT4pFsgYP4REcHSok23T3Itjpzn6VByrzVE7H+vfrv38jwL0nxQPavqBc8fvuLMO6wqUK/nc+YBElxwewr3DR0zNOYwIcMKgjvNpPlaepOLxSWPLk4gjn9SfXH2DPJ/nyc/t8UMd3Zvjz89xhZRj91AIvGrIk++rwOFkbampQAhDKDKmsuf1NjwzBCskzkJHlhQSciAJ1tDFLpVcQwmJ8dAe2lpMJbFC4ZQnlZdI9uwKq4kgyEdUZUVhNTObMAhGVFWHSDmyao6I+rVlkb+h9RDLcWpbfe1PbNn/38qPBfEjyyuy0RhcSSs7JcBYB9YxyhXvvPcRq2evMDj3GZyQTFNDmedMtGOIJtDOb7RqUoDAUZoKrCGWjiSO2N/3mVul0KRVQIeQKAjohppQWgIMRT1wYitmu3tsmZLNpy4SBWP6gwErp06Tlpb9O3eY7Y/QccIg1rx05QIiSrjzYItimhIph467lE7TGXZIkphQa4rK0dWaEsdyBGF3SF6WdKqKOAqQEg6nY0SVk1WWLC/IjcXgMNZQlT6oK5TCCkXlBJW1BEkXI7zssROyzTwxVQWmwDrF1oN7PH7wkC
yde39RKegmMXEYg7Ds7h+QpTMunF4m0Rot4DOXzxFFISrQLCcBNttHo5iWjjfe/oAqKxj0hxjranuN0vujOtBWUla+RU1Z+A2+UjSSg0I4uknAtLSYmgzgHERaIYUFaxHOerlVKVtZLlt/r6qDMkp4wK8yXpIo0YrhYMDW4QykZG0QszrwYJ1FEgQapGKvdAxXVhFKMnWaQGtW+zEHmcHSyNwbDvf2yCe7rJ/eQEcdpDCURYrDcev+A4SUpHlGuWvIs5zV5VVWTl1g92APgi7r68sIZ3j0eIe97S2qMgfn5WUPJ1Pi3oA0yyjLCoGgO1ihKEr29nbJs5TVtVOIsAMqJFlaq6WfS3orq6gwQirFhaVVnPU2Pjfu3qPMU4I4pqMqnBBsPXyAFIJhf4Azxk/WtRQrFmbzMUv9Dj/z5ZfYfvCA3/rD90AFBFIThxqL5JXPfZErV68S6pDhoEcUJdy6/i7r587x+//9t7hw5SpJp4MFSiEhCJBB3w+a1pDOUsDR7XRRUuJMhXUaJzRS+Yz9eeaDF3paocKLRMk50vkDcmN5vLXN+Bu/w4dnz6CV5NadBxD1MPmcR4/3CJIOz1w4jXz1NLIqCKIAhCQAZqOxn8CcaBctR1N9E45yNWOVmthxfPF7tIg5WssIHEiFwqHLAh1FmKpgPvVZ5g/v73NgYHeSesanc1TGk14CKbEKtAp8RrV0VE4QxAlRHJFnGYejMWMBodY8tdzjqQsbRMITK6qqwBg/YSppCPQMk4545+4u17fHjLICqTS2qti6d4uuy1kJJWub54g2PMAla3DdKUnY7TAwkmLuGMaCcVGxfvoURVoxmaVY5yizlNl8hgxjlobLSGeYmgJXpfRdThh5rzQrJeN5SRALtPPBLvCBEKX1j8fE9mNemkz1IAiOMdkbEK1RP5jP58xmM5IkaYkd8/n8GHj1zDPPtNnTjfVLA64vbjgbGfzGrmXRlgFowd6GtNCoWTTnaoDNBpyez+d0Op32GtI0ZXt7GyEEp0+f5t133+Xf/tt/S7/fZ2Njg0uXLrXfzbKsJYs0i96GUNJuehYsGxpySxAELdgZhiFVVbXkhUU1gjzP2wztXq/XWgEkSdKSDZpr2djY4MqVKwRBAHgrhubvbrdLkiR0Op0WCG5INot2Ak2dm0z5xqYiDEOuX7/OdDrlmWee4Wtf+1pLbHn66afpdrutvUMYhq2dyGw2a4kZZVlSFAWXL1/m5s2bVFXVEjzCMOTUqVM8//zzZFnGmTNnyLKMd999l93dXV599VWeffbZlgRx7tw5ZrMZ3/jGN3xQuz72xYsX+XN/7s9x6dIl+v1+e01Ntv6XvvQlfu7nfq617qmqiq2tLZRS/Pk//+d55ZVXWgua/f19/vAP/5D9/f0WOE2ShDAM26z7oijI8xxrLZ///Of5U3/qT/H000/jnOPq1av8+q//Oh999BHj8RhrLe+//z6vvfYaWmt+4Rd+gStXrrC8vMzp06cRQrS2MS+++CIrKytIKVuiwEmrpMXNVaPA0Cig/LW/9te4cuUKw+GQmzdv8k/+yT8hTVNeffVV/upf/av0+312d3f51re+xb/+1/+aW7dutXY2X/ziF/n85z+PlJL19XWcc+zt7fHRRx/xT//pP+XatWv0+31efvnllhj11FNP8Uu/9EtcuXKFbrfbkooODw9bhZXt7W3+xb/4F2xvb7O8vMyv/Mqv8NRTT/H222/zjW98g7feeou/9bf+Fq+88gpnz57l9u3b/Oqv/ir379/n3//7f89f/+t/HaUUaZqyv7/PL/zCL/Bn/syfYXNzk8FgwG/8xm/wxhtvsLu7y6/8yq/w7LPPtmSYmzdv0u12AY6RnRqiQ2Ov0pCWG2WTqqq4du0a//Jf/kuCIODZZ5/l7/29v8fBwQF/8Ad/wG/8xm+wu7vL3/7bf5uvfe1r7O/v85//83/m61//Oi+88AKDwaBVfxiNRhRFcaweDTnLGNM+Z8899xxlWbK7u8vKygp/9s/+Wc6dO0ee57z88suAV9lJkoS33nqLL3zhC7z88sv8pb/0l3jzzTd5/fXX+bVf+zUuXbrE2bNncc6xu7vLP//n/5yVlRW+8IUv8LWvfY2DgwP+1b/6V/y3//bfODg44LnnnmvHxoY006hjNONOQwIry5Jut8u9e/f4gz/4A/7ZP/tn/MW/+Bf503/6T/PUU09x+/Ztfu/3fo+33nqLL37xi/zar/0at2/fJo5j/v7f//v0ej2++c1v8tprr/F3/+7f5R/9o3/UKsm8//77ZFnWKpLs7Oxw7949tNY8//zz/M2/+TdbdZ9/82/+Db/zO7/DYDDg6aef5vr167z77rv89E//NL/4i7/I6dOnuXbtGtPptCWbNGTDhtjWzCGN3VRDesuyjOXlZX7u536Or3/96+14l+c5jx494h/8g3/ArVu3WFpaotPpoLXmq1/9Ks8++yy//Mu/TBzHfOUrX+Hb3/42//Af/kN2dnY4ffo0aZryh3/4hzz33HP85E/+JC+88AJRFLG+vo7Wmt///d9nPp+3hMdGEabpq1prDg8P23GzUTTq9Xqsra0xHo+ZzWYsLS2117o4xg8GA7Isa4/V7/fp9/utIs3u7i43b94kTVM+/PBDbt++zfr6OmfPnuWpp546ppj1afnxK03wuU6Posl0dCd2W813/X+2Dj77THvXZMtJL5GslcAoRRBIsBHGVVhpMaLCGYeUFoKgSYLHloa8LLBVgbUVlTAESqFkrS4lBVL6OpWVReGJJQBR7K2+LBZjBKbExxxkvd9yoIQHinNbUVSGzBQcHE6ZpLWtnrNgHV2tqKwlxNJPEkprSIvC70MrRxIFKKnJSq98ILWi19MI67P+48jba7kqpx/HRCqir2FvXNCE8Qki0lIwerxLf6LpdSNvIxtExJ2ETq9PWZXMJiOEkyRxwqDfR6gcW1UgNFY4dOTtbYo8Y3/nEGsKqjxjvL/PaO+A8cHExyhMSVE60tyxN885yAwTqymsorIGJGhh6QSKkNLXvatR0pNBlBRYA8YJcmv9RtqCI8BZgRVe2UALiXBe8bWymqww3D/MuL2XomWI0hJtpyw/3mFzqUc3jhgOl0jTgiwvOLVxBqlrF5KqBKVIkp4P3lYVTliCOKG0pY/9aUmeG4SWSKmIgshbp2iFs9DTS1TlITYKSDqaKNa+3wgojKMjBFpBkeX0hgmVKbAWEJqiMl4BQ0IUxXVClCOKAw4PRwyGPcBRlhlJJyadz7E4wqhDp7/M1tY90rRida2LCiOSTociT2vFJkWZFwSBpGs0gpL5PKXb7aDCgLLwZFARSMI49Ak5UZckHpAXKaHwFsy28E+o0jFaelJMHIU4KxGhI52NkFGIEhGhMOR5hkQymc5Jeop5OUUGjsqWBCrykXInkUojtcZJ4eOxWhN1OswnY9JZSpXO6YUBXR2B9TG96Twn6QQkISRRhJGKaVqR5hmB1uhIkCQxOojIigpTFUipgJI8L5hkOWmeITAkcUgnSVBOYOu4pJNeZdVJQ5kXWCqKPKWyKUKGSB2jt
FeqcQ6f8CO8nYkAUHUfFsKr6QivzNIAUD7JzY9FTTa8H+/UkWJaHVeylnq88Iky3tW6xFqJrCSVMFjpZeN9XNp4JRLvgYBDeiUQK3DS1MoifnyzeIKatQIrhbeHEQblaisCIakcCCNqqX6BkD7e2sJzohnNJRUGKfQC+ObHd+kEtoZEZBs/W0DhPi2flh/RcjLL/iQYuvj+JwP6yqOV+YQoHjIpCpyUyCjATTKCsEc126esUrQZEZz+bP28SYzz6xcjHMJqjMzBaBASF8T01p5htnuNsHMR2V8lKPexxQw6kR+f3SFa5SgqHNEiS+ATrpcnPtufpDTwZJLFE2w+TrTdx4+xwOo4+qQlOhxVxx773Scpgnysvq62gHbgqNAdTZcYJTK6+T5leJ7cBc2gfkSwcH4d2RB/Pk5sXgS5j8PGJ0Hyk+9/7HV7LgH2iEDhXEO+OHrtVbV9BZuzChb6IQtjfFPLhfveYh4L989SK1vhWmxEC4GxBotDKk+kFAvMD1+3RgnjBBnhxDWeJLH4L4ljZBGgnV+P7SgWCRH+DU72F/+dJhbrP7MLJJUfRjx5Uvm4qkYde6Pe7zg8+VgEKFmxeWED8oK9ucdbrJNIkSGEwtrj48XJej/p/U8ig/yw+j65z4m27idJI4vP3tHpn1yPxXMcvde0hmyeVBAL56jhOOO8lZIUAiccwhlKESKFpZIKaR1aQJCPKOUhuABRGUqZEOiKqNgmjzZAgLIVVkYIa1DOYBCe4OwcPW1ZjjP28wgVBgxMihUFxsUkQUGGwBQT6J/GWVeT4SzNsy3atdv/2rXNj0UUpSxKRO0TKYXwNgzOEgiFERLj4NrDnOEbr/Ni1Ge4tsmtD24iRzuY5YRZVqFCAUr5Db+xyMCDKlYopFagFKVzzEvHXgZWWMKqoteJKSONVQ5nLEXlPYRwfkE/G024d+MOFoFxhu5gld5wieJsDqVkRYScWe6yMyl58O4NAlNx/vQquuc90YdhCM5QmoKJhawy5KZCIIl6q+yNUvIsY9AbEIUxSvjNt8Axm83odDrM85KsgjwvWO/2UK5CS4sL+6S5QeQFWgckvZhAhwgVUlYVpqyIQk2l+piiYjadMh7v+0x+IUk6CXEcEgYhubGUVcnuQUpR5Fw8s0qvkxBGIRcunGN3NEWqCqsC9jLDR/fusv94l243wZgSZ42f2JwHd4XwixnnwGRzutJALdPqme2ghSTSGiVqqU/n6vsvCAONlkXtjylAeq8128gSSUVZVIRxB1uVBIHPMLRVxfKwQ2+QMC8eoJRkfbnHmfUBVvhsoCDQGCeZH6QEYYDQEh1otJKcP73G5MEuRVVPRli63YTV4VnOXniaWel8xoIp2N4/ZDabo6REScizHFc5HIq9vR2oCl5+6UU++9Ln+eCDD/jBm98nm40pyryVrS3zgtHokEdbWz5IFEQEYYAzDmthNDpklqXEYYyx1qvJCImq7UpKk3kP3jorwRQlpTHooMfpc+skSYyxJbPRAduPHzPVgfd9FZ6laWzJaHTAfD7jqYsbbJ7ZQEjBxUcZS8vrRBjefv9dBsvrfPb5lzlz5ixREPDM0xd59OAhN5zl7r073Lp1i79y5TLvfP910sxnbCilwDrSySEf3LhLZhUrSwOeu9zB6sCTdpxEIRDWMZvPsQ4Gy8uM5oa13iXW5XOMFAySjL15SlpN6PYzIpezNLzI45EhCNY5+9SQOF4mHITcnXQxswN25wFXrlzm/qM9ZgcHGAdBPeY0C4fjy58jAki7CFr4k2PfO/rflrFkHc5UUPvJFkXBLE15sDdlVlpCrakaGXfpN/394Sr9OEZpRVXkzKdTpqn3vxUIFAbnBOc3T/H5Kxt0pZ9ylHBIpZBKEIYRUSAAyd3HOdszy+FkwiQv6fYGPkhZFP7ZQyIqyWkVo5c3MVJjsQzWl7HFnFngKFzK7m7qmedhjKyU9wM2FVk6JysqQmW8lC1QVpax0ew5y7qpWAsdAQIdBigJReGYG0s30jhjyUqD5lOrlx/10qg8NKD0UQDMky3yPGc2mzGfzwFa8kXzdxiG9Gq7o4YM0UjMHx4etgBkU5rjNSBpQxppznmSfLFYqqoiy7KW9BCGIf1+v7VwaRazDTDb7/dbZYODgwPm83lrydLYGJw7d24B3Pjhnb1pK611q6rRtEUQBC2xALz6xmw2o9vtUhRFLb3cae0Per0eV65cwRjD48ePefz4MWfPniVJEtbX1zl//jwbGxttO0gpieO4Jbs02eQNmNmQZJo6FUUB0BJtyrIEII5j1tbWGAwGLZkBfFb8+vp6C6zO53PG4zGPHj3ie9/7Hm+99VarfrK3t0cURW1GvXOuBWDzPKdRephOpxwcHLC7u8u7777L1tZWq4JRFAWTyaS1EVq0q1kk9TTHbvpdFEXtfWhsNxoVmqY+jRVR835DqEiSpH2/aR+gVQFpMuwXFRkW1QOstUyn05ZI8pu/+Zv89//+39vn5vr161hrOXXqFEmStG3R3IcnZY4svtfUc3l5mVOnTjEYDNBatwo5SimWl5fbexSGIWtra4Rh2J6n1+uRZRn37t3j/v37PHr0iNls1l7j3t5eS/RqlAcuXLjAhQsXOH/+PEEQEIZhq5izvLzMeDzmvffea+/HV77yFV566SWWl5fZ29vj4cOHbG9v0+l0mEwm3Lx5k3v37rUkqLIseeedd9r26Ha7XL16lZ/92Z9tVUoALly4wLVr1xiPxzx8+JCNjQ16vR4XLlzg9OnTLC0t0SijNMSwhuixqP7TWEoJIciyjEePHpHnOVeuXOFP/Ik/wdWrV8nzHIDJZMLbb7/NgwcPWF9f59y5c5Rl2RI+1tfXieO47d/NprkZt7rdbqvuALQEquFwSBRFaK05f/48n/3sZ1vwvumTQRDw0ksv8fM///P81E/9FMvLy2xubjKfz9ne3uajjz7ipZde4uDggO9///tkWcarr77K888/z+nTp9nc3OQLX/gC77//PltbW+zv77O6uto+702drbUtOa1pszAMmU6n3Lt3j/fff5+XXnqJr3zlKzzzzDMEQcD6+joXL16kLEvyPOfNN9/k6aef5nOf+xyf//znKYqCIAhYW1vjH//jf8ytW7daklFDqErTlDzPGQ6HbGxssLKywte+9jVefPFF8jxnaWmJN954g8PDQ/b29lhfXyeKIh48eMCNGze4efMmURRx6dKllkwnhGA+n7fjwuKzVFXVMXKVlLJVVXn48CG3bt3i+vXr7Rjx+uuvt2SZZs5oFDGacUYIQRzHbGxstDYv0+mUsiyJoqjt357o7W3EOp1Oq+6yWO+yLP3+p1ZlWiTpNOpazZi1qGjVkA4bsktDkmsUjMIwbO3WGsLN5cuXWV9f5+tf/zqHh4ftHNWQXD4tP8bFNRliR+sdd4wxf5z00X7u/B/ONRLjdXaiVLU6oELrEGcMtlQ+ucAarKlQ1qGU81n2dQUkFcYayrykpMQKhZIVUimCIMRo4+0dhMPirV+VjLDG4GyFt1SoSSvWE0FKW5GbklgqYqVY6sY4oFsESGPR
quJwOmdeFoR4u5gIDTiUCuj1ukTpjKIsKEVJKA1xGDDsdMmLEilglhcksbffdFLXtmwZs9kMHcasry4zXDKkeYGxIKVgpZ+QLEcIVyCkV0Q1RYkLJfncUlWGQAi0cJhsxs70EVYqorhD0u0jVEBVGrL53GfBlRmTyZythzuMD+dYKxhP51jhMDjSwjLNLWOjyJykdBZJRS+WxBp6UUAvgG4YkMQKpSRIn4NaWYd1UAmBNV6NwFam7R9e/cAhrWszQbVwBApsbpBKUVrDrLRkRnJ7Znh7b8Qw0CwlI1a6ERvLfWbpnJWVZcI4QYcRMnNEAaADnwHoJNJBLGMEIJVESk1RFoTaoRSUGKQUoEFJR6er0NonSfV6fUaHB2gd4JAoFZLlBh3GCKEx1q/9kBal/Zonjjqt4pVf6weEoV/bP3q85e1TwoSqsv64xuvOTmcjur0eURgTxx3CIGR8uE8URAigSCu0UhS5wZQOKaEyBcqAEhZMRahjhLNUZUGgFFpLpOyA1FSVIysqnFT0OxFIH7vTQYJSXvkzihPKUhNHIXmeEYQKax1FOiOMve1SN+mghEZJjXV1nEIFSBFSlSUCSxIHFCJgtF8ynU2REjrdECEt4Kgqh4wk1glQIZWVmMJhnF9TKRUShRFhEFNWhiLLyKuCKi9wtvIJZwIGSW0BqsAZwyzPqYqCMFBEWoOzuKpEYmslWEGolSd8NMQHYRv+Gsiay2Jri17XYFJe5cPiahKIRATaW0vVA5tzNVhVk0gBKmePSHHOEy+w4ITDOoMQ1h/b+nGttVFwR7Er56yPuVqfKWmtRdSJTcZYPOQnajYcWOH7hnWebSUbC21cbcfbgBsLwK6VuFqBaZHMAviscyGQPi/XD+/N+QQLQNGn5dPyo1dOJlo4vIoALIDJyHrNc/SOWPiex9AN1pVUxT6iF1LNZz7eLzS2LHGD05R71wncWaLNVzBViVQBzlZUASjjUC7AKoNDeSt7UeFQVELQWf8s2eFHyFGG7F9CTj/ABVf8+scCTiOFNzrzdatHmCdwLfzSbgHo5Wg/5BYAerHQPseBZH+UZmx4onqBc9R+WkcEBgzNsvIIwF4A2etztuC7eELlj907/912HeoMWnowXsmYbD7FyAmqOyStLKXuoK3XhGqB8Po/nFepg+ZfOFJPWLjRNEloTT2PtSotHWOhvY5/z6ucNMNzQ9Zw4ugI/oyyPbWjsUtp1tQ0bJWF27vQP52fg5p29giJT6iWbT28PYgWFuUy8nRGWVmizhCpOzV+elTvdj/gGpbF0bnalydIOifv3CJpQZ6YWE4SGvwxj+zkj/pY85z6Sh1Z5pzoR8esdRoi+7EzHqvlcdWXhfsufD8XGMoS9ueaJVURCAU6QFrtcW8U4C0rF2N2TyKKLSZQ/jCyx5M+W/zNSVLUyVjicYKIrN/za7SPn685l/1EssyRtVzT98QRiUlIhDM4IVGO2m7P4lA1kVUjybDVAS4IyAgQymKLFOcEpeihA0uc75PHy1hisCVO2nadL3AoIciqivvjHh03Jq8i5nqAFoKOOMRUOaUVOFNicF4ZrVH3b66iGec/1gv+55YfC+JHQ7sSSKTwcoDWSoy1KKmojCavDAfbu2zd+YD7W9s8+P73uTrIkKqDlZpACZQOyKcTv5E0lScdWKhqFrLBsT+eU1YGIySz6ZRuJyGLAkoMpfPydalxKCk9k9kKpuMZN9+/ybnLF1lGo3XE+sY5UqP8hrmUbN1/hChLNlY6rJ47y8F4TjafMjo8QAlHGCi6K6exYZfD0ZhBt8vtB1vEWnH16Q2kEsRxQCAFsqqQziFUQJZlSAyhcjhj0EGIQWAKg5KSKAxRSqNliNYhKggJ44SJFRRlhRCaMNIYWfmAQzpHSUXYjWufV+mVRKqKqqyoqpK9g5LRaMxLn3maPgodwiAJGc8N41Ry9/YD9vf2UcIzx4VSKOuVOHQYI4XPfNEqIAhiKuvQJkc4g3QVElEP3o5QCiItCZzCYXBCUVYVEHilD78boqwsDkGUDCiqEbYyGOPoLa1gyhxTZqTpHFdpNk+tgLBM8wIHWCEIwhChan9uJZBOI5UnmTQTqBSQdGJAeoKJFMxnI6SUrG5c5ObNW9y9e5OD/QM0gkBJqsrQ7XRI4oDpzMufTg722Ng4zc98/RdZWl3n9e+/zmv//Rukh/tYvH+nVprByikGK+tYZ9BBQDmfoxMF1pKlcypjkAakMEzSMVJCoDVSRSSJzyJ0lUHi0M74TWsoKPKK2XjM/UczkKr2ux2yvnkWYwzzLPUAjXPMZhNm0zFZUfLBjbtcfOZ5rpw+Bdzgl/78X+DdN77Dd37wBqtJHxUldJdW+OyVpzh7Zg2bzzg82OfDDz6gMxySjvYpxge+3joAJVFojPAb+Tyv2BulPNo+JEi6dGLdPvvWVgiliJOIsBNTmR12bcjnljL+5BeG9Fb77O3B/R3YHRn2spCVc4Jiw3H2TIfbdwvuTAJWTp/hZq747hv3me8+ZInvk7mCg91tryCk6im+XfX4TIjj5A9f2um8Xql9bErzH7YvnDOYKqcqM8qyIC8ss7ygqixKCr8Z96vNekyqCRnWUhU5TmiSuEs2OSQvKkJRswuVYmNthY6SaAWmrHDOIqUm6SYgNDNjuXZnmw8eHDIrcqwTJHFMGGiUilkKNCabMpqlMBQMi4LZNGM47GKqkrTKuH9nm2lhmKUZ1nmVkTxzYDKUdZgyo7QOrRVBGBGGAbbMMZUPZM2MYSpg7iSXuoJO4Cd5p6EjHVEgiHTggy62WQZ8yv74US4NiaAB0k6qEzSg2u7u7jHGsXOO4XBIk1k+m804ODgAfFZyEzxtSAlBELTkgPl8fuQR+YQF8KL1zOJ7Dai/CKQ3AG+e5y3BobmmIAjY3NxkbW2N0WjE9vZ2K4Xf6/VaIPaPKs31Nt9tALhF4PmkQkme5y04l2VZq1YCXs3j3LlzWGvZ2dlhPB4DtCobjRVFnuctMBiGYUukWVRoaQKOi8opi+BnU58GrD7J2i/LEudcC/g319aQTZps+Ob4nU6HTqfD2tpaa0XQ9J8gCFoQM8/zFvxvyAtNfZaXl1lZWSGKIrrdLtPptFUiaADORVJQYznUkE/Ksmzrv3hNi9fYZMo3/y22z6KFz6Liy2K7LJIJmsz7LMuYz+fHXjfWQcPhkCAIOH369JFd3h+DTNR8p7nuhmgihCDP85YMAxxTO1gkxSxuAu/du8cHH3zAnTt3GI/HLXnFOdeSlhavP0kShsNha4XUtK/WmjiO2d3dbfuoEIKzZ8/y7LPPtgSe+XzOZDJhPp9z7949Dg4OyLKMfr/PaDRqCQeL97fb7bZKH0IIiqJgY2OjBf+vXbtGURSsrq6ytLTUqqc097BRLThJFGvas2n/oijY399Ha83p06e5fPky4AkaZ8+e5bnnnuNb3/oWBwcHDAaD1uamIbQNh8Nj/WNxU/+k1865tk83pJDmWGEYMp/P2+cXYGlpiTNnznD+/HkANjc3GQ6HrUpH08f29/c5PDz0CmUPHzIejwmCoCVP7e3
tMR6PGQwG7Ti0OB41fXvRDqUsy/Z4p06d4uzZs6ytrbWWVY1yzvb2NqPRiH6/z+bmZktC2tzcZDaboZTi4OCgJb01pLjmvEmS0O/3WVlZYWNjo7UPWlpaYnl5ubWX0lpz6tQphsMhjx8/5vd///fZ2trizJkzrK6usra2xvr6ejseLvbhRfJU00+asenevXu89957PHr0qLXXWiQHNXVdVNVYPGYzDjYkjWYsb+5/Y/fT+FI3bdvMDyf75qIFV/O+EKIlfiyqRjVj5+Lc14wRWut27F2cFxuSWrfb5eLFizz99NMtCWdRFefT8mNaFkCBj89PDcODj33Hg6P+0ya47gFHgZA+O13Uqp/GOirjKCqBqySlK5GuQmA98OoctlZHtaYGTJU/vtASqRQOSVVaKgRRGBBGMQiJqUqybIoxlVeV0ApTeknrQGgMHmx1QiCVz2STwjHoRkRSELmA6bzCGEegApCOoqwoq5Ku0KwvdckyTZ6FtW2GQweaTq9LVRRkRUZXB3TCkDAI0FohYo3IC8qiQAtDpxMRhQFZmhF1OlhnkFFEp9PBOU88jQIP9Nt5SpGmXm2htm5VOkTFMbkFhB8DqrJkOhphCkdeGA72Joz2MnIjySrHYW6xQlJYx6yA1EhS66P7SSRYTgKGoaATwLAX0YkESkuc9PfTCYWxAm0cVZ14pIygMiVGeKUHISRWaVwFCD8vGmdwrgZVkAgBWlo6UhApQaklWe7YzR1b8xK3WzB8NOPS6ZwXn4Yza4IgLym0RglNMgxwphVMQEgHVvpErChgOs3IS1OD9IpOp0M6m3nF2lDjrCEtDEKGODTIgLIS5IWhEyjiuIuQPo4Hgsp6dQytQ0xVURaFtw5SGuug1+syGh2S5ylrq2sIIQmCBFGVIEvSNKPIK4bDBB1EKKlI0znz+cTvv+v1mgxCQh1hTEWcxDhrCCNd27cIhFMUmSEKNdgKrQVBmKCDEB0GUFqU9AqpBoeWCh3GKCUp8hSlAg8kWoeKunQHfo9Q5TmmzFEIkjBBSV3Hcn1sUCr/vElTIaSk2+ngyhxTVjURJCQIJMJBlhZESLT26saZqXDGEDmLcJJIBwRSUuYZlYGsrMjzAutKAiXodAZYVxGV3he+KgrydE5pStLCz6mVdTgR04lCQqVBQmXASgE1ycwn+dVjVTNXao0Q0pMxnMUqgXb1GCWVj4Mu7I+csy04JxCNz1ENNno1ZOdqRQ5nwckW0JNOg7E44QkgDQlOCk/CQDgMBudMmwgnasBEIBFONDCpr6v1MvzC4S1kapJHs6YQolETqdkajlrbRLRDelML/8ki5LEAElODhp+WT8uPWTmiPdX9XxwBxo56WeSad47SDZtnyrkKawqkXsUV+4iwgxQSW2WY2S7dc69SjnYo0jEiTpDOkxW1x2WPFHdapKu2bXLeqiMeXsWkW8wOHtBdOoOaP6AanKVyElMGWG8UB/UcexIMhwUwuYkB1fHzZlBrXraEhCeu/+ovnSjHAWNZt2Q9NrkFexGahMGaALF4TMEC8aI+44m4S/MeDSGv3mdJ6RUBpPJja7fbIxYTwkhTVQLplMev6nMa54HkxWs5IqM8aQw8IgScbJa2ri0of+KnUnj1+4X3PWmhZS4cDcMto6b55+h8QtTncLL9bPF8zdgtnGCBO8JRnziivOgA+rGgzMAph6xSXA5OSoRK8LZiCikqT7hA1FZkiwSJhYYQx8kKnhDQfF0s1GWBtNAwaBZb+WPfPd6UtlFdWSCguJqU05BEHOIT7qWrTyfb1x9XmGluRB13cw4pPLHD49v+mBrrNyQ1McdyPCHxeF/lY+8v/v0kpZCTn33SMRbrfvL1xwlInjB9UjWlXeU8ce4/qXR0tHY4Inp5m7tIGYwShNWIXOvamhOkM2DGCAKCuEOZW7TIgQqEARSVWiKw+4T5PnmwjhbS/7ZxFvA7NYwAa/vE0islGxtgKanEMtKESLGPqyqsLdAirG32Fq9LHGGHn0By+Z9RfiyIH845r3TnwFiBsdTBK0VZlQgEpZEcTh2DUHLgFGzdoeouEStNaR09pQmDgKwyVGWBTvwAVlmL0BqhJMYJpnlBWVoIwFaGMIyRyi/ctYRIQWXrh19oP41aRzFLef+9D9mcTEmGa/QHSwwHywghuPVwRKBDesqweXqVCk2ZTSnTGWVZkYQaIQzMD9ibC3YPZpTLJVlRcvvefc6fWkIISRjUMqPWgLMM+j2sc3RCb0lT5l7NpKws0lTtgryqKooyJwhicA6lA6yFdJ5hqxypNHmR++WGM0SBJokjHyAW+OwSW3fwGpAdjafcuP2A565eYrnTxVUZwaBDUVWA8AQEKb3naBCBsAiliEqD1Z40oXVAECfIMCHLJP1AUeKDJtYK0tIy0CW9UHGYG3QY4YzgMK0IEkUpAlAWa0tK45BCEsc95DylKOZY48AJwihhXlVYB91OxOnN01y/eZt5WpCVhv1JymadRSjrCUkIz9a0ziKsqAPwUBUVlXHE3T7LZy6hhCSvDDdufkAxGzPa38bNZvSkwIQhqc0xWUahFXGnQ1Eaev0eP/uzP8eZ8xd59/3rvPXGt6nmY06dOY+zhtHhPugAlKaqSrLMB2dcmZGNfaZsv9MjVV4Caml5FZNOKdIp03SKlppYS5aGQ86s9hnogjK1jMYpRVEQLg98YMsW7I/njLOc8WxMJgRKSXSUYBHkdb8R0lsC7R/s8+3vfp/P/c3/jaX49/mD//Gb2PE2ZzY8uJmlM0aHIwoHpXE4HTLsxsiqZK2XkM6nhEqQZwVhHBFoyXQ8RQcRSyurvHDxClmRE8ratshUTPf2ydIUlSREUUyQFXUwQ7KcBOwUXarAoWN4+RXLK0hEmqHDAnRCMc0I+pJ0b8TBFFQv5/2P4PXvCU6vGObTGVlluP9oy3vacrSkbcefj49IHF8+PGkZ7D92ePkzZx3WVJR5SpFnpPOU8WxOYR1aS9KsAuGzyEwNBAxXTjEMJTu7O5QqoTfokM0nZHnqg14SwjBiuRuTBJJJYZBW4ipDWTmsmbM1mdPvDTmYlvzgxkNmRqBchdABYaAJgoiVlVNEyiJKRTjRxN0us8MdxvMp/V6fIp2x/XiHWeHIsxLnJN2uJpay9ou2TCcjn83b6fmFSxgym4wJuwOSoGI8HlE5qKTgQSWJwoinYzCVRYeKjvahBCXqAKyqmdyfxgl+ZMsicaABpOCI/NEA9tPptFU0aH4jhODKlSuAzybe3t5uwcfNzU1GI0/I63a7rK6utkoFeZ4zGo1agLshhBxjhNcg5UkbmMU6GWPa7O+9vT1GoxGz2awF4/M8Zzqdsry8zGc+8xmyLOP+/fvs7OwwmUyYTCYfA20XF9cnCSnWWoqiaIkc8/mcwWBwrB0bYNsY09rINCBgA3A2m9gzZ85w5syZ1kZnOp1y+/Zttre3GQ6HPP3004zH41alolF2aMC9pk5VVR0jg4AnliwCo41VTpqmHBwctDY7QGv30SgUNCSRxkLgK1/5CqdOnWqP3el0ODw8pN/vE4ZhC+6HYcjS0hJZlr
UqHmEYsrKywnPPPcfVq1f9HJVlrKysEMcxjx49YnNzk729vZa8c5II0WTPN6SXxWz6RUJKk1Ufx3H7Oo7jlvTQAKZRFBHHcUskaQhCDZi7WBrCSQOuNve/0+nw8ssvt+D+oh1Q0+5JkuCcI03Tjz1zJ/tXQ8CZz+etzUqapqRpSpZlLcDbWAo1tg4N8aK5z/P5nNdee40PP/yQ6XTKs88+y+XLl+l0OuR53qquNOdtyDlaa5aXl6mqisPDQ+bzeQvw7+3t8ejRI7a2tnj++efpdDoMh0Mmk0mrWjCfz9na2uLatWsA7O3tsbm5yXg8JooiLly40BK0GjuOxXtpjGFzc5MrV65w7949vvOd7/Daa68xHA45c+YMr7zyCl/+8pdbhZPGHqMFVeo2b+5zFEVt++zv79PpdDh9+jQbGxs8fvyY5eVlBoMBn/nMZ0jTlMlkQpqmbds3xIBFS6pm/Fm8f4skNOe8FdTS0lJLpmmshBq7H/BqEs34tdgPptNpS0BoSFSNNZExhgcPHvDNb36TN954o71vTR9txrosy7zNZE1AaJ6jRqGiGRf7/T6NXdejR484e/ZsayfVPL9NWzb3uVEs2tvbI0kStNZ0u12Wl5eZzWakadqOTY1tVPM6SZKW8HFwcNCSmhqSQhRFDIdDnnnmGb7whS/wxhtv8O/+3b8jjmPW19e5evUqX/jCF/grf+WvHCMVNuSKRaWe5lnUWnNwcMC1a9f4/ve/z/r6On/yT/5JVlZWqKqK/f19ut1uqyTVKCo1ZMCG7NGMQdPplF6v1yrf5HmOUoper4fWmt3d3bYPNm3Y3KNm7FkkLy2ScIDWLmiRyHVS6ak5dzOXNnNdU+/Gim17e7slgZ09e7ZVyiqKgs3NzZZ49Gn58SufFKhsg/cL5I/FtRENcUn4fCrLwnvWZ69bZ6mqsra4LBHUGeyVxVmLKUtqrQ4PqlYO6yROKBB+zSKUV39FyFpBxCtrFnnpM/2tqS1XLdIpKucoTW1Bg9dgtAZKJHlWkpU+ocaWFZQl0vmYhKwJFUEggYyiMqTZnFBFDHsJhS6pnCMtCvIyJdDQTwK6cZdOLOkkEqUVKoixhFSBRske1sC0qOj0OvQH3rokCAPQEiug3x1gbUmVz8mzAmsMxpq6PhqlLUJaBAXKSVw2YzbzYVGsYD73WW+VFYRRh6IwnmhQOWQgyYxjbi25czhp6MWaU8OQlU5ALxQsdSI6sUYpQCkqBJUTGAc194TKFYBGG6hKsMZRGkFlHKXTlBJKZ6gQZKUjy5236XF+rKMGv4X0uZKdRBFaQW4c06zicWoY3dnncJzzyqWCC6eHKCnJ84pTMiCJY0yVIbXGOklpfRIVQhDoAFNWWGuw1pN/4iQmcw5rHEJqkiRAqAAZxAgVoYMOSWfZq2IYL2qNsUipfFKSwccAZjOsseRZRpx0UVrV67IZg/6AvKiQATgsZVUQBgFpPmNl7RRSBR6sxzKZjmsylEAqSZbnRFFEb9Dh8PCAOOmCswTaoWVYx700OtAo7TBlSRioWhFDYbEorepoh/EELFkrL4uauGQqlA5BSIZLCUI4rM2J8HGPOIoIgnpfYLzlj9KiJif6515JhZIKarvlOIoQwlGVhlAHVEVB3IkRwlLmKUhJFHewtkTrGIFjMpnihCLsOHCSJA7xQKFFS4OwFVCRpV5JsjAlSiu6cYRUAigJtCVQBqmgKr0CjfPBHB+j0BolFBblY8NK+3HD1tFDJ2p1odqiATgCg3w5CZx4YES0QJyQFlsr9lhnvdS5aeT0ZY201IQ3sfBbGqvr2v6lJkZJvFw6rrErrsEn6/cdwvn4qa2BN+EEVtqjY0sPtMna4tqKoxhYkynt72INktWAiGwRxxrQa9DfT8un5cekHAdNWxQdxAmtANFoOtTfWwCfrbFEwmLCAWV2l3i4TjrdxllDMtgE3SdaTkh3rxNvvIQTUDpHQ4msK9Ie08EREcAJUALZ3SRR+8z2t+knGpdNEU4jRIVVtXXciXjsx4DjBWLHiStuOGNtsmQL8bZ7WlmTURYP+IQAsFhQB2guS3g1t4b0cVSvI6JNo0zgz1mv2IQ4dg1HY7JEyuPEEKT046l1xL0EWUU4VyJ0fMRXeEK7HHFgmjGy6QMnkkUXAPrFueEYWaGVjGk+d/XQWh/bOpqVaN0Svg0+EYheDLIfv2nH3m36TU0KFG7xvvn/sU6AM0hhWB0kKJdhSofWHjes7AyXW0i85YZzeoG805A9Tj4rxy1rxLFn5gTmUl+fbeq18LuTV/zJZeH5bH8rjn4kXCOIUrdN0872qM0aUkjLO1k8vz0isSzI/3gc2RKG2vfNNh4sanuT4xYpT+6zPPGzk7Hkk8/syQSWj69NPl6Of25P1EsAFW1T1LZTJ+v3JGLKUXsfx9+0C9BhRSADnMyYs45zlQeM8l0gwAQDArtP5npomQIVro7PORx5OCQudpDVLjZcR1uJwYCQNSGuPpkzOKeQymscGe3ACmxlPcG3yhGVQWjqMeKoPeX/m70/i7Utu897sd9oZrfa3e/TN9WfYhVJ0ZRJipJFyYoiQ5YDQ7mGHQf3wXkxYiOAYDh+8/tFkHcrLwliIEKAXF8Zii90Lbmh1VEU1RSbYlWx6pw659Tpd7+62YwmD2OOuefedUgJtoUYZA3isPbea67ZjG6O8f++//d5Gcj/f8kE1x8h4odtAfmw0PSExahWSRg01vBsbnhw7wF+9ZAXpEG1EpGlMVg1QGcZ+5XjaLZic2hIVFACQQnyJJynbEInlVKSKIkiaSenoAQxkIpUe2yUKpSSoD8D87Jh9vAxs0VJVVsuX7pEI3JSXbM1HbOtU8br69w/WiBskF8MwbkwQzhbM2waksmEyc5lskTzbG+Pw5NFCN6lGd4HQoRF4GTC6uSYyXjEXqLwrdXCyWxGJhpqa2m8RVmLdw20zG0pJavacTIvEQjKuqZaLciznNFwyGgwIMtypGoDzd3k5SO1i+31KS9c28WahsOjY4pcMkpzrm6OES9f44MP7lNWhiQfkGYFQvtWCisJ2TFao9OEyWhIMZyyOH6IsWFSlULhvaUyoV21COBIJoY0teSwWlAdC1I1Js0LxNE+3nqsaK1khMKYNj3IO5TOAYGQmt2tTQajMc/2j6gaQ20te0crnu7NmY4y8qyV8BYENRHnMYAxFgRMLlxiOhNkaxfIJ5ssG8NidszJ4R7Pnj5EVSUXRgNkljNfrhh6jxMSnGdrZ4tBXvBjn/kcF65c5Z3vvc+ffP33Odl/xnBtHQuoJCMdjlgtZhzvP+Z4H7TSZEmCVpLBaEpW5OAciXSs5ZrpNEdvj0hkgp3PufPwEd5BUgx5Oq945mp8U4NtFSf2j5AtASjXKePNCZtrY/ZnJxwdHjKbnzCcblIUBYlaY7E4wlvHoqx5cPcOf/LNd9m9eIHF7CNEOuXlT7+M8J7F8TFHB3t8+zvvQnWNr/3e7/Hdd9+hkZK18QjXGFyiAilgWaETz+HBAVXVIDw8ffqYV157lfXpiCf377K/d4I3jsY6ytmc5apEJwXGC
LxOuHlzG5m+xB8tBOq2YfGnK8rGI+WI7YlkNFaMM8so92xMCgbDE5KBBl+CUCg/x5iGw6rBHR7T2HOrN84tO59DDBHt/NTnNp5ZlHbfdVhraFZL6qpiuVjy7HjOygqk0mS5ZGv3EtNiSNlmlI+3LvPZT79O/fQef/Lttzkpa4bJhEwJnDHIYsL/9u//97y5O+Lwva+TskJ5S9OkLCvLYukoa8v37j3gT+/uMWssRV5QVhXGQTGesrWxEzLlvCHVkjxvfXdFQ5LlQJAFVdkQXc3RSqMTCU5ipMSZEGAdj6dMhiNMU1E1Bik8z2YnCOOolydUTWBDey+x1vLRCVxIM6SDREJjAohluq3B6SL+k/LDWSLBo69U0VcT2N7e7iwlIqgGYbxFkkfMnr9z5w7Hx8ccHx/z6NEjsixja2uL8XjMYDBgZ2eH2WzGyckJJycnLBYLbt68yQsvvNCd83mZ9LHEzPmdnR1u377N22+/zTvvvIP3wT7i6OioA9rSNGVvb4/33nuPw8PDDvyOAFye5x2RopPoPVcvzyN+RPLHfD7n6OiIjY0NIJAnTk5OOkuF+N3FYkGSJIzH4w6wXCwWHB8fc/v2bS5fvszFixe5evUqT548YTQadcSWLMvY3NwMwdG65tmzZ53SxXg87tQf+koifcue1WrVgY9FUbC5uclHH33E3bt3uX37dmfNcHR0xOHhYaceMBgM2N7e5t69e5ycnLC/v8+LL77IcDjswOLDw8MOaMyyrCMmQFBZyPOc4XDIeDzm8PCQg4MDvPd87nOfA+Do6Ii9vT2Oj4+760aw/by1QWwDpRRFUTAYDCiK4oxlQiQERTWS2IYR/I/KKVHGG+j6StyYRaueJEnOEFAiwD0ejzvVgYcPH7K7u8uNGzeYTqfd/VZVRVmWXTZ+JEg9r/T7dnyWSBaJlitAZ88QlSgiacVay3A4ZG1tjeFw2Fn03Lt3D6UUn/3sZ/ln/+yfAXB8fMyHH37Ir//6r3ckhKgOEdU6Ipg9GAwwxnRj9fr167z66qtsbm7yne98h//4H/8jh4eH/J2/83dYLpcdeevWrVv843/8j7l27VpX76PRCOccJycnXL58mfX1dYbDIUmSdGoRUZ1hOp3yN/7G3+Cnf/qnuXfvHo8fP+b999/n29/+Nv/yX/5LBoMBP/7jP86LL77I06dPu74S1UniHNZXihFCdLY1VVV1ahbx+ovFgrquTzOKnOv6TyRTRIJCX8UlXm+1WlEURUeAiYS2eE9AR8SYz+cd+O+974gYi8WCo6Ojjgywvr7OlStXePjwYUeGKsuSixcv8su//Mtcv369Ux6KZKjhcMitW7fw3nc2IJHQE0lNfWUcCPZOEOyBsizrxstqtcJay2g06tQxqqqiKAo2NjY6+5++KlNfASMSyaQM64yTk5NuLuqP8ai4EefJ+XzOdDrlV37lV3jy5Emn0PEbv/Eb3L17l8ePH/PlL3+5m3PLsmQ6nXY2ZJGUEetGa839+/d5+vQpRVHwz//5P2c8HmOt5enTp0wmkzMKGtGqKCpH9UlF8Vmi1dLa2hp3797lyZMnHbHr2bNnHB0ddTZRa2trpGna9bu+NVGcG2I99N9JdV3zb//tv+WFF17g6tWrvPjii509T5ZlndVZPE9fXWk0GvETP/ETHB4e8q1vfYtf+ZVfIUmSjqDz2muv8bf/9t/+hPjxo1r8xwORHfEDG9b7vh8A/AHBs3ad4U1/v9BKDbc/S2mDfYn3OOGwdRhvSoTAYpIppMpwIlhECB/UJnxt2negQEhHXVU0tcMLhfeSunJY6zA2EPADGCzROLJMobRASKgRNKXlZFXSlDWJ0FQWjA/KJMKDNx6hUspyybwxLK3j4hg2pgOsgGQpqZYNTVVRC894NGQ8ytHaYVqrmel4TJ1mLJcrhoMByjRIqVibrhEUQxKEhOVihZApxWBAKTVeJ2RakWiFrSu8a3C2CnOPhQjqeh8UdpMsIck9rnQUxYCycigXAtZZklIJxdwYli7U79pAcWktZ3dtwCAT5IlgPC5IE4WXso2hCbBBNjnJg7+ptRrvg/pHowRNo0PikfM0VR2SQqRmUXv25pbDkwolIUs8gyQhlQIpfPgnw448STyJhkwLhkYyX1reOzhh3tR8ernOq1cvoKRn7/Ed1ja2GAzGmKZVSfXgZVDzSLQGKzGNx1qBsw1aJUhZAyKQCCYjslwzmA5JdM5wuEaS5yCDwkfWvrtM0wTllyRDSsXx4pBUpyBAJYFUMV8uybIclSQsliWD8RTrPbV1IdvYSobjNZwXNNYEsoJrSNMCrRUqdeSDAXlRYExL4EaRpopUxfEGSrbqtsLjnEGqVolKaiKlQAqPbSqwIVPcNg5DSHxDBDUMhCAbjHDG0tQWpT2+JSIrqTHOYIxFK0GSaARBmdcTxmBTr1iVZbBWTkKdCyTeObJck2cK4Q1aZgxSTaElSZ7glWaxanACBsUgqOEohcdjLVjTcHR8QFUtcSa8p62zQY0DSao0Ugqsc1hjWdoy8L9QWAue1tLEJ+BFUIKRos3CDklhgU4hcSKCui1hIkx2AbDsbff6ypYRMHHuNJIkpAQfyBYBJAxt4H0AIJ0EEc8bZ0vRKg75liDnAO9wXuBEsH2JMure+c6aJWY6R896Jz3etXbogPQC4URMBkeIlgwGxKzmYBsgg7pA5HlIgfRRH0S0oe6/XFDkk/JJ+W+hnE/k+RjRq/fTWTD79D8RUzTlklT6ECf1FmVXZE2FGW3jlCQlJJpma9con73HYPd1lA+K0J1UUDhjpy5yem8SgydDY/ItBsmE5fH7qJNnCKsRKoDRAv3ceEI/9t2e8Ax5IxI92o+IljfCn62feAYv6OYLf0aBoXe+9qcu/h5j832inYgkNt8/6Zm6fl4MLqozRTKyaBH89myoNrFBOI1t5lhfhDm2fY4I/nta2xEfV6Xnyg8gucTre84THsSZ48+3h1QSJYIbwmkM/fxlvz8R4Ly1/ff7bqjH2Ao9hQfh2JzmTAeK5aIhExahPEIrhHV4KmxzDOkGXpj2vXl2pS+6OutauPu7aOstrrv9Kbvi+z7fmc9+wLOd9o3+WBVn+DIdMeH7nLn/LH2yT3jxCaIlUSfEIgTW2fY4gdSKNA3JiGENEZRmzqvSnH/e/t/7iYTx3/k+/jwSyV+E8HH+u8/7rF8n8bA4jvvX+EHtdOacQmGdY15JvG+QpBgShBKweoYUKS6b4nF4u8TKDZxQCO/AOVAejadGUqU7ZNVDmvqQOpuSuKC1EjFuEYBfrBAoTPiu07jFPljLcP0a4vBuIP/rlvQan7XDB9v//0tc5/zoED+cR8owgTfNCmxFmm9i2zbz3rM0ku+9/5hPNzPWpOSgsSTOUDnP3swgvWS1dHzznUPeaArWL21TewdKMSwKkLIFIAPBQUhQvpXFFR6ZJCgbhrb3wa8RAVpIpHPBB1IJitGI0WhAqhWT6TZJts2NNUWRWj746D7N4g6yVbZIZDgHKsj8eFPxysUJu596k9dfucZv/tF3+PCj73Lj0WOkTiFJaLzg
6bMD9FRCU5HkQ1a1Bb/g8OnDIBkpHEInNMahGoMzBnzYpNdesDSe2WJJWVVY0dAYQ6Ilo/GINM2QKgmTr7M4Z9sXbMisGeYZ13fWKLQjTT2HVcm8hG0h0UnC9rTAXtni8f6SwWiNyfpGkHiSGmPDi1XrBKU1RZYhOSGXnkJbZt4Egor3OCewjQ0ZFzrUvzANlRFon/Fk74SXblxHPjvCOodwbRAYgbHhRexdqyDiIdUpVy7sYKxlMVtgraBqPMfLmscHJzSmYHOtQKvg6et9+H5jLFXd4FXKgS2YXnoRr1JO5nOaasXe04ccPH1ILgSvvfIpfuyv/SxSSj56cI/Z4QE4g5IKkRaUixmvffrHuHPvI/70G18jqUt2xzlOOA7nR61komeUayYiyNxYKbCuwSmBq2cMEsvV7XXGxZhRqjg+OuLZ4R62mGIWS0auZrn/mMY21IBJNOgEnWWoYgLFCOM9KstxeGqCHOTueMylSxdpmob5qma5LKmFYn1jlwvbjnv3PuJkVfPd777L1jTn29/9iLq5x2B8n1uvf4aD/accHOzxY9mIb1cL7ty5zYcfvMNxWWFNxcnJApzFmYZEwKraR3vLeG3ExoXLrG1f5Mr1GwyHA6ajMVuXrqOk57f/9f/I3Y8eYJ1HJgOqsiTNh/zMz+xycSts7J3LaJqck2PLfAkeRVmFz54uHR/chzTfwD5xLHG8fP0AFkveXVn2GlCLGVVjoEg7wmFcK+JP18lnXmtnJvputuoRpU7/FjLMaupqSV2VzBdLjk4WHJcCRNhcL0pDmkqsD7Khh3t7/Mf/+J9I0pRkuMWkCDZIw4kJ85NO+eD2R9z70NIsLKlUTHNF2mY66eEEmTfM7y8YTNeZqpSyXKCkYDLdJM9SvG1wBharAyQldQO72zmTYkze2LYSFHk65KQ+ROoE6xzjyTqmXOBVgpaCJB2Q5AOElFhfcnz4jGVd4eoqBC+kRIuwUBsmimuFghakdjZ65frWbtbz5y9BPyk/DOX84jSSBSIQrbVmPB53hIIIej5+/LgDfQeDAdevX++AdeccSZJ0ViFRDSLaARRFQVVVjEaj597T8xai3gc7kosXL3Lr1q0OlLXWsrm5ycbGBt57Ll26RJIknfJIXdc8fPiwA9cuXbrE9vY2Gxsb3b32F9590sf5hXGSJEynUwaDwZmgYQTX8jwH6M45GAw6QNUY09VdVVU8fPiQ/f193n333U71RErZWSKMRqOOZHDnzh1+8zd/k52dHTY3NztLiJjZH0Hruq67++1njUeri+PjY/b39/na177GaDTCe89sNmOxWDCbzZjP5xhjuHr1Kg8ePGC1WvHWW2+xt7fXWb5EhY/XXnutA4IfPHjAt7/97U4h44UXXmBzc5Pd3V1u3rzJs2fP+OpXv8q9e/dYX19nNptxfHzM0dERa2trLBaLzkKlX1cx0z2283K57GwNVqsVx8fHXdZ7tMaIiiyz2ayzZYhtMp/PO7A/9ue+ZUvMto+2HJEscnJyQtM0bG9vc/PmTd5++22+8Y1vcHJywo0bNxBC8Pbbb1OWJaPRiC9/+csdeSQSD/r9qL/xim0V1S/6Shx9u5l4L0BHIIgKJFEpIGQwJ52yy927d0mShMePH/POO+/w5MmTj30nBl7iWIiEEyEER0dHTCYTrly5wpe//GUWiwX7+/t8/etf58033+TatWtsbW1x/fp1vva1r3Xte+3ata5O5/M5e3t7XLt2rVPIiISdqFJjjOHOnTud8syNGzd44403uHnzJoPBgH/9r/81T5484dGjR+zs7DAcDukrhvTtK/pWRVmWcfny5W68ffjhh3zpS19itVqxt7fHW2+91c1v4/GY2WwG0PWxSCjqj7G+ak9Uk4kAfiSNRDA+kgci+WuxWHTn6M+/8fyx787nc8bjcTe3FEXRkSs2Nja4evVqd+5I1ugTFSKB4LyKTf++gY6Q973vfY87d+4wHA65cOECdV13Kh7r6+sMBgMePXrE+++/z8svv9yRyO7fv89yueyIbbGfxueL14ntUpYlw+GwsyqJdReJFYeHhzRNw3A45NOf/nSndPLNb36TJ0+edEpDsd/0FVeiAgjQkSzyPKeuax4/fszv/u7v8sorr7BcLrlz5w7vvfcem5ubnaLPYrHoxnkkiMS2jGouUW3j6tWr/N7v/R7/0//0P/HHf/zHrK2tcefOHR4+fNgpODVN0/WdPumjT0zpP8NoNGJ3d5eNjQ3+8A//kLt373akyPjeiZZAq9WKPM8ZjUZ88YtfZDQasbm5ibWWl19+mZ//+Z/n1Vdf5cmTJ1RVxWQyYWdnh9dff73ro5+UH8XSanX0Y6fiNDh9/v3k+wknPXgh2Bo4fEvaM6Y9mQsZ5jIG8rzEC4dUksZ4GhdUPtCgNSRZQp4ViESDENRlgy0ttjE0VcNKCnQicS68H5GOpnHUZUNVWRobYiNSQZpqRkWGSjK8rTDe4LxDS8UwG3BcOlZ1gzUGhQtKIa5NPtKK4TDDWo9yjvlyxfpkQJ5qRA7C6WAdI2CxLMnysEetqgZjBEmRMlyfYpA01jIcDsK+TWqU1mRFjlYBejBNg9IDJmsZVbOPWa1IdM5oMARhgn+1tVgXbCuMh9XKUFeOpjEMiiFp6tG6piwrZlVDXgRS87PDI0oX5pdxnrG7pri8PWKzGKBFQz7SDIYFToCTGhxY6xHKoRFoSdBu9hKUojaWuvEkVmBmc5q6QQ0SZG3xtWB+smKxUtQmwXtY1RY7cIxSidKgkVjfIH2w3UhUQiolWQ7DRLNYOY7mK77zUbDUePPmFZJUsTg5CuuwPMfUlixJMM0KoTRSgtAKaTTeGuqVRWhBmiaUq2Vo8yRBJhljlaNUjkwSpEpxzoAI2dvL5QKlNEJokqRgsQhrS0NDkgbVuKPWunJjusbxfM5wOD5V9KsqinxEVozxIqhWOU+rTiyCeobSeNeQ6BStEwbFAHyw8tWqaGMaliSVSGGRtPL/WiJReC9wQCI13tWUzaolc6ZYIUFopBBIrVFJRoTknJfU1mCtBDRO2ECcwGOcaUEpgdYKom99+7+6rjG2IstTputjmrppyUAGrQQCR6o1g0EgYadJihCKynp0mjPOskD4MJa6bmisoamWLE6OWC2OENKhkwiQidbSJwT7vZQ0jcDYYLsrhUMrhxQgZKsK3QTlICdAaolKRCCASN9mjTqclMiWSCN9xHTaPZ1s93VCIERLmI8AmgfVxh2ta+NIrp0PWwUbIQTKSdAK78AFr6RTUFOIDmNyNqh+4NrEAWxLIjmNmYuADAeFEVpyiQpgaVA89nh/qsQJkUwV5+Z2VncOoQKBTwndm68jISTMyaJVOvgktvNJ+VEqHQgcFz8fA05P1R+8OAcfe4kwC2SiqMoFwjVkmy9idYI0xyiZ4JUKY2ywTlrOaI4eoqaXEF02/rnrteC6QCCEQ3tBI8N/ncrJ11+iOnib8uEH6GSKIok6Pr1TnBvH/WfqP4I4ZVucKhP1nljEOYEO0A/FdZ+drSrfBd/DEjHYS5xeOHzS2U99jDQRSRG
nC9HzyVadRZY4vUcvRNBxakl+CIkzdXDAisd4H9QZ2suFNHJ/pjr69dMTfThD/zlzmBC9r7TvgufgyrFuicd1j/zxM3f1fO48Z44UvVjR+RvzZ/7THuMY5JqdjSG+nqGVh0SE9ZzX4C14Q+JPqPwQL1Nkqz4lxOlVBMFVLKIA/eufEnKi+og8E2M4Xx/d+fpx1l4d9ds8ttFpfCyomwQVldgXAiE3DNGO1UDX4MSh5bp+HolXYbwoRJugfkpoUSA9q6rG+QyldGuB06pweR+wYf9x4kQ/zvrx/dPpf88TNZ5L4OrV4w8idnz8s/PkmN6C5LRWn3u+89dt/3LmNyUsTliMEWiV4rGhjleHKJHTZBPwngxorMYrhdOQCIHzFiUyGoLSmcTi0qvo8h6ulrhkhHQWL0L/lEi89FiXkVEiraGZP0QlKXKyjcWiNZTmCC8GSK84MxiFR8b11fepw/8a5UeC+BEmIBssXoQH4Vgs91C6AIJvpfce42A1q1lXNasipRkMg6yxCbYREs8NLNmygWdHiEs7rWejJ08UuZY0zmONRWqPsw1pNsa1xAHpHd4rLEFWyvkYzFRIJDpVTC5doNi5gmsqRqnmxTd+jJkdsJ5attck98vfRb5/G+eDp6WQQa0mDPYgZ5l6w850jM5TLmxO+NN3G/7n//D7LJYLkvUNllLx//k3v8n27kVevnYZrRLSJMGaiqasKOuaRjisc1QWlHHB63K5QitNZT1l41mVDeVqQeMhTwum0wmr2WE7EQWJIdc4cBZEgkQyzFO2xgUKQ1MLGguzWtO4FL30TIogj7MxSMjzdUSxzWTzEl5orKMNlrQTl/coLcjUAJknaGnAGUwTrGWME5RNG8MQINpNS5YkFKnmzqP7VFtt5pwNk6wxNSFLxeHx2NaiBRxFkbOztc58sUQPJiHrxDkWteVkWXcSUVpp0lxjvcfYsIE0XnLx5i32SwkqZT6bUy3nPPzoDtXiiA2dcvWFV9l97cd4MmtASNLpZTbXLweVBK3BNYxT2D864U++/geY5ZKXP/MF6mrJbHbC4skD8qzg5dc+xZULu0y0RdoGb0p8s0B5y7L0VDaov9Re8eTpE46PFjw4OKZsjlHWQt2ELdzsCF+HgMLKgVcap3TYfWoNOg0bYB1UOKRUaK3QWqKVxLuG+fGMOsu5fHGX1155iXfev83h/lNGyQ5lWbFYleSDEVLA3t4ThNLMjw55cvsxH3x4l1W5QkuFr0sW1RzhLHmWU0yGjNOCjZ0LXLz5MruXrjLY3GHn6otIPFeuX2c2m/P4/od8/q/9LNM/+0PuPd6jdoLhcMTm1g67aznKHSO9wzpLojzFuuPCRlhqjYea4UAjUDRWYaygagTWSvxXEv6X//eS3/n91qd8OWO5WuEnaU/Kq7eY9P0lZfs3Eba03Uu8nf/luW1tBLds09CUK5qyZD5fclw21FaTJyoEDWwIQAkh28WCwxlHaQxl2y+9D+NaADQ1t9/+JggZbP68a797ujwP8sVTRmsSax2jwYQ1rdvFtEcIhZaCwWhEVa1InSPPMxovEHmONyV4gbUNXniqagU6o6xLhAdbrVhaiyorFvND6qoFQKuSuKwWQpDqFC08zlsuDjVbiW8Xvh7jg0RWCBvQLpgc32c990n5ISsRBI+AZwSj8jxHKdVlhAMdmBWBzQhcTqfTzvoiqhbEbH6gsxKIQGskKfTvAT6+sI0/RzLAcDjk+vXrXSa5tZaNjY3uftbW1ohZ8BcvXsQYw/7+PhBsFq5cudJZjfTVOf4i7Ou+ikmWZR3QGtU5ooVCVAW4dOlSJ8sfAeAIeEe7iqjAEG0NLl68yMbGBkmSsLOzQ1mW7O3t8ezZM7LWJz4CnP16iiBv/NdnmEspO8WVSPCI143Emag2YoxhY2ODK1euUJYljx494u7dux3Jp2ka1tfXuz6zu7vLhx9+yGKx4Lvf/W5n7zMajbh06RJvvvkmH374IUdHR/zJn/wJ29vbXR/rb5oiuBkJSPE5IlDaV1aw1nZEjaiCEQkjEQCOdRoB5niemD3fZ+XHdumrfESAXynVKQlMp1OuXbvWAd/GmM7S6Nvf/jZKKa5du/ZcxZpYnpcZcL5EckYkRcRjo9JHvNcIWsdnjnUYlVr++I//mCRJ2Nvb486dOx3xIAL08dqRLNHv87EOInnl1Vdf5bXXXuOtt97i2bNn/P7v/z6DwYDRaMTLL7/MH/3RH/HOO+90BImmadjb2+vG6euvv36mzfttL4Tg6dOnHB0FwOfll1/uCChRASf28ahKE+/7eXUY5680Tblw4QJ5nrO3t8e3vvUtbty4wcHBAe+++y7vvPMOW1tbrK2tdZYa/b7VJ7r17zvOe/3NfuxXKgI/KsjEP3z4kO3tbba2trDWdioasd3id2O/j3Y48VrD4ZBLly5RFAWPHj3i0aNHXLx4ESFERzKISiPRrqnfv/rzef/vSZJ04/w//If/wLvvvstgMODFF19kf3+fBw8eIITgxo0bXL58mf39fd566y1eeukl1tbW+OCDD3jvvfcYDAadkst8Pu/IWnH8xLqL/a7fVvG+Yn95/Phxp0Z0+fLlThVkMBgwmUw6kkjf2iW+X+JY6M/f6+vrjMdjmqbhq1/9Kk+fPsUY06mJRMWPaHMW/8X28z4oDUVLL4A8z3nllVf49re/zbNnz9jf3+fq1audutPW1hZJkpxRLOp7A8f7j3UTVbYGgwEXL17kjTfe4J133kEI0c0D4/GY3d3djhQTCU9RaSda8UQbn1dffZULFy7w7NkzVqsVo9GI6XTKlStXuj73SfnRK11QtQ2anoE3+jhHGyiMmZzeh8QT0Quunj2xABeSg4Itbbu36DAWGWIaXmGNAdc6OntHmngyGeZNrwROGYwPilGNESR5FpRlkyL4u3sD0iBUsHTw1pDphCJPSdKEZe2oKocxsFzVVFWNEJ6jVcV8WaGEJ9eeQaIYpjAcaZQUeBIylaCQHB/PMGXDKAkgu1EVlQse4GAR1oCVDJIkrA0WM0SasLM14emTPRbzGdlggPYGqROEkCQ6YZhDKWuUSBhkBalW1GKJbWpWtaMYjlGJQgPOVGgtWa6WCJmAqHHSt8C3wXkDWuCVROc59mQJXiGkZagtu1O4tFGwNRkwEAqpBPkgISk0TWNw3qNQKBxe6GC3LDzJIMN5RaJTqCqUF8wXJbbSqCSh8QqVSBpXsmhWrEwL7wiwTjNbBeC9SCXSNWReoaBNI2iQOgTtc61IE8EyyziYGb754T4ns4bPvnKN7bURSqzQQiFUIKkIobCuQUqBkhopNVLnVKLGGs9kskZdNa3d3xRrE4oiKI7pROExSBkC+E1TBbsTHwB842C5rEjSPKi6DsYY5yjLmkFWILxEq4RiMGS5CtZtTV3jrWGQZ5RVSZ6m1MbQNA5vJcVoDNZgrSHVikRrVJYHZQ8tgvWR8WgVlG2dtWidhTir9aRZQlkHYg4ijJtyMUengWST6QyZhLincw2yVURxbbiwKhtMU5NohZAJ1tH6oZsQTkS3QMapmmGUk8jyAm
c8Wa5RqsEagS+jHWhY3w9GBcF9SOLRJCploFOE8KwWc1Z1jTWWRblgPp8hrGFtNGCY5wjVJsA1lto4pEqQaY6UglyLEH+0HmsrjGlCxq20CCewWBwqqARJhRIaqcBLgWqJF1a0kI+wISwk4/s2WLOcL2HN0JqiBEwSJXSwB25Vh20E2nzLBiHYwCilAkmLmDDTzqFO4BxYFybXkAAXAM3Tw0LAyrmguCNliLuIljgiJAhHUB5pr9fNv96183c7i4tgKoGP5xDtfBWvdWo/EJ754/XwSfmk/LAW4X2wCun+QKf6dRb0F936KELfUgqa1Qn7+8ckuUNLgUxH+OoEp3UY61KhguQEbvMm9sl3UeUQMVgD6/E6XFB5gRNBq8fJoIgU8pwdQmQYEWzshCpwxUsY9z3SJNjEtXd3jsDQJ1XEqPmpYU33OOdYDj2c//RU8bOOWBH+HtUkwj7mrEWMhwDOd+zhVlkg7nn82YsEUNqdfpdTu8vTawXs6wyQ7103m1kAb4nfcsjwxO0LrU9i8P1b8H2Syel+PsLi8fxxzyqF6Mh4och2HXN6X3CqTNLF3rrL98kwXSt0fz8zf8db7J739M+RTuKdC6pehPeS7+41vsME2xtjskRQGR+U3Zwk8Qk4h/AO54LtmBNzKr8FrXJUn+AhRCAennaXcz1MnNZ16Le9+j5PTPCnLgf9v3X13DvmtK7COXRMlmjt7fqV5MVpu3gCPulFrza7/tiP67b2L6EGwrG+tfzzYc3kfYJHkmjwTUha77dQVD3px5263/uP0LvX76eycT7mfFo9H7ce/34ltr+PGFhHphE9/EYQrZjOX+e516fX1h68l2gETtRkqaRyQ9RqhtMZThdBnc4qtD+ktCkysUCGVhLrGpA+2EYqh7Maj0fmu+TzfZYoVDJGiKDgJ0WYQqyU2KahPr6PGAxIih1q5xBIBsWIk2NAJL15GmhHhSTM93+Z+NWPBvEjQPhhgSo1aZphzIC6OkSl2+BsO/gNM1/yrUFOOihYzwdYoDYNgyShyDMyJZgIjzamRWkFSkqGScoVnaE1mMaSZYHYEXxINbWFprZ4Jaht8JFKJKQ+yDn5RCInm6jxJk25YHMyJC8GvPLKyxyobfbuf8gHd76LLGdcvnmDD++8j61rEq1JdACpauuoPXz33oeYd9/mt7/xdR7efpejvSe4piHVgrwYoJIBH3z0kO9+cJe3vjXmysULTEZbDM0CkoxmucBhwUpkK8HYGI9ZrSjyDKTE4FhVJcdHh3gpGI0FWZa3QZJ2kyIFKk3RWYK1nixL2J4UJMKwnM/wPufYgsumbG5fIBkMOJk9wS8OoFqRJxnaLzCrY7LJhTC91wYr25eGs0ghUDqhEQl1vcJnvvWdhWVVs6grdKqx2uIVWO+wwrO1NuSnv/IzHM1m1NZinQAnaOoa5081A4wNcprOSdanBYNBxt7+PtPtSyDf7V6Yy6rB2pBNkWjFFEVtPKuq4cnMcXc5Jl8vKBtDuTxhMT/hwYfvQ1VyYTji5qc+R7Z9EZGm3e7NOQtOUDUNT48POTl8yng44M6732GS56xducFqsQhS7uvbXLh8g+F4nWQwYikkSw9SC4bDhAvTAeNCMz854dmjexw8+ojVyT6ffv0Nvm0a7j45oTzcZ6AFKWHx5JtVy3b0SO9o6prKOgxtZkeisV6we+kyl3YuMDs6AKnZ3N7ms2/c4ht/+Lt89PARs8WC49mMjfV1itGY4domr3/2x/ne++8jVcKLL77EsMhZLku8ddhygT5+xOrZfawxDIYZvlwAjkRrpqOC0WjIZDrltR//CjdvvU4x2UAXI4SUrPYfBnAyk0xGGWVR8MKbn8Ul3+Xp0z22d7bY2dllbaLR3oYsAh9IVLMTQ4mk0EF2PEkcw0ygvMd6R5oGJmY6dgxyi1CKTBuakz3efe8Dbm59hjbN4uz8c27x1v3Yj1i6SFqIc1Zcf3m8tTRNSV0uKJdLnh4cc1w6vA52UVII0hYAhDYg4mMAM2Zmhw/CgiS8cFz71rXtsSErJPj7Ohv8qG1ZBfKUTJCCQALztiOZhMVWKx/qJeteMD/cx+UjJhlU1Yqj4xOy8QZufkQ22WJjfQMtaYM9gc2aJhIlBa4pOXh8n+OTE5ZOoFTCcDQO3mi2YiMB3ZK5IMgix7Xb6ZPJyE35pPwQl/Ms4wgWAmfAO2stVVV1AGNRFCyXy9YDe9nZChhjWCwWHXAbLTCGwyEQAOXRaERZlh1A2F8AR3CvnwEAdNYQaZpy+fJldnd3u2xsrTWLxYKyLCmKgqZpKIqC9fV1Ll68yPHxcadisr293WXXLxaL7npwujmJ1z4PUg8GA1544QXyPGc6nXbg/Hg85uWXXybP8w70y/OcL3zhC1199S0ednd3+cpXvsKTJ0/OKB9Eq5JI7rhx4wZra2scHh52QHZUFtFadwHbCKj27RzKsuzO3TQN169fZ2Njg2vXrvH48WNcK/+8tbXF4eEhW1tbbGxsUNc1RVHw+uuvc/HiRW7fvs29e/c6q5HLly9z5coVdnd3GY1GvPLKK9R1zaNHj3j27Flnl5JlGevr6/zSL/0S7733Ht/73vf45je/2RFN1tbWWFtbY2dnh6qquHLlSmcR0wdfYzb89evX2dnZ6QBfIYLlzdWrV1lbWztj+wGwvb3dEYGiksLa2lqnltJXuLh8+TIbGxtnVAi890ynU3Z2djpbjslkwmuvvcYv/dIv8dWvfpW33nqL3/md3yHLMvI85/r162xtbXUkgkja+EHjrm8nkSQJWmvquqYsS6SUzOfzrk9ZaynLsrMUqlrv+Phzmqa88MIL7O/vc+fOHX7t137tjKVNtBCKCgnR8iGO7eFw2NVLnudMJpNO/WE6nfKzP/uzFEXB7//+7/Orv/qrbGxs8Oabb/KTP/mTfP3rX+d3fud3+PrXv861a9dIkoT3338frTWvv/46P/VTP9UB7VG9JBI4lFIcHR3x7//9v+eb3/wmn/vc59jc3GQ2m/H48eNOZeHChQukaUpZlmfIaOczKyI5Jk1Trl69ys2bN7l37x6/8Ru/wcOHDzk4OOgUK77yla9w6dIl0jTl6Oiou7doWRLB+UhaeZ41VCwR1I/WGnVd81u/9Vu8/fbb3Lhxg1u3bnHr1i1GoxFCBPuXSLiJ7RCJH5GId+HCBb785S/z7/7dv+OrX/0q7733Hvfv32d9fZ1vfetbPH36FO89/+Af/AOuXLmC1prlctmpCEVSRezbsa0nkwk3b95ktVrxG7/xG3z1q1/l/v37vPLKK9y+fZvbt29z6dIl/tE/+kf8tb/21/jN3/xN/s2/+Td89NFHvPrqq7z33nvcvXuXN998k+vXr5MkSUewi8o68f3RJ2nF+4lzfySKzOdz3n//fX71V38VpRRbW1tdf1xfX+fWrVtcunSpU3sBuvke6CxuYmmahhdffJFPfepT3Lt3j1//9V/nt3/7t9na2mJ9fZ2NjQ1Go1FnzXPlypWO5BGVbuLc8/rrr7O2tob3QQ3oZ3/2Z1FK8a1vfYuHDx/yyiuvsL6+3
hHzIvkrtm1/vA2Hw065J7a395719XVefvnlTuUp9r8bN250xI9r1651Vk+xTW/cuNHZ+ZycnHQKXJGkFkmHTdN0dmRngnCflB+xEhGP8LOgtY1ol4ES34ohR/KHa/clYY/ivA0givN463CErH3rgyVl0xjqxtBY1xJAJFoEgDpRErTAeU9jDM6Bny0xXpCkKZVtsPigTqoThLWYukJpgdKQ6pxUaQqZslotWa5mpFIyGqRkAw3C01QVpmmYLStm81Ww/20MZVmBqRinmkGSoGWKlilSawZFijMNeMiKAWuJxjQVNR4lPGkqqFaC5aomEQZnLD4JYHNWaJSEslrglWI4GvPs6R7Gzkl0wmg0IUlyKmtI8wG5DLLJDZZkuEZSjDCLGaapsKZBD/JAOJMNprHIJMeZmixPUUpQCUO5arBeYqzGWE9tLSvbYLAkErZHA66sTdgaFmRSg4K0yINNigMpNTiJTBRJKhDWY11odZxDKU1jHagMZx0iHTBcL6htAjLjeGGwywN0XjNowLXrD6zDecmqhKS1bw5ZygEs8c6RyQQtQwKZEA6VC6RLOFk6vv3okNo33Lq6zeWtTZSWjCYTqqokS3M8Hm/DJlupAuMarJ2TJgqnE/LROrKsUNkA40PszDU1QkhM40hUileOsi7J82EbMHeYakaqg8qKTDKUSqnrFYlIAgHBQZYVlFXDfLliuZjhrMFjMNailAi2zAZmyzmVNaAkVRn2J9ONzVbJQ5BmIUPRmxWJDoo0UqYIJTG2RliDsCBJsdUCoxZgU0yzpG5WLFZzcuOYpkOKVGOaoEIqpcK0AJRKLF5UIEN8okgSdJJhrA3EIRvitY40qH0iqJuQTqd1SprkrJqKJB1grKOuV6xWFollPBqipMI0EpIU4RTSerQSeG8pK0tZ1SAajFmSCMPGpEAJTZamJEmKFAEcS1JP7gOwmqZJUEp2we/deov0ikxrrKuJVi8x5d0LG9quCfEnmYSYi2gVnIUUKKGQ6txesgMUz0u3RyIyrZKHb0lr4ZzSh7+5FkD1PqiChP2CIFFBPaVxIUnR2ZA81yUTOXCyJdd0GKxrQa8AYlkXEoecC/sB6dprO0GYaS0iuDEBIUaslOzmcFy7b4pqMd6jIzlZtiCmhEj6+6R8Un5oizgLaIrzH57/0Qu86AO84cOgPgGNadAyQVmHUBLhT6jMCZkKMQnhwUsDPiHxhmTrJvMn7zLOPo1QAts4ZBos04Sn1f4JoLlFIwQIZ4Iqe31Mc3gf2TwmzxOsUCh/avv08bEr+g8SntvHZwCER3h55rAzBIRuTvQtuTeSPk7B8nD4KbEt3DAdQSNC7t29tXGUAID4c/NsmHtl77qnn4sW/zr/iL5TiAqZ/A1SgtQp3qmWkBAmbxEB/hgX6JN6CPF8EQL7HcD9scu1NyB9IPR5Qas80e9XpySFHkvjzDFdlUVyCC2Rok+sOB/HiDUfn7c7b8/+RsrTp2q/n2jF1toIaYIlvFcNXuvQr2xY91sT6n0oK6xraHyo6+clugl/9pm6Fjxz3Ok9nd7LaX1Ecso5LsTZ83Rkil7dtvuJ5xXR+yES2WMbx6vGxKVw3kgwsfQaqKOz+LYvGO+RLZsokoSdj/2uFwuO99kjDX3sxiA6HZ1p3/PJIGcIJP356geQPs63Vb+v0+vRfXLT84Cc88lAp3cfa+a0/YzwJE7ghWE+34NiF5GuBbJN4/ASpDOggy11QhIUP8olUhdIkSKdR6BwokG4hKrYoqgfUSFBabRUON+AzmmWJ9jqED1YQxbbGOdJhMLQIHWONfvBvQAZaF+i7T/hYXpj5i+n/GgQP3zLoHEG68FbSZ5v0ZSPEGqGcwqJY0TFJW3JJhOaomBy8QZCLjiZzUhSwyBPydKU7cZhhGdpA/kjkYKdYUGWSZZNQ+IcSkqqumbVHLCqHVXjWSwrBpkjT6P0ZhhIykvG62ussiHLckGRpGyvb/DS9Zu8+MIlXlvf4eCFDf7Vv3qIXB6yMUpIX32F+x/exzQNpdDsH51QVhWNdZja8NF/+Ld4GzJLFmXwiE+ThEY05L5mVAzxA8F8Nudb77zLxnTK1QvbmFlJ01iUa4JNjQwsbykVjTF4nyIIL8zGNMxPjpBJQqZTiiwnyYaYpmlfuZK8GDCajJGzE1JtEbbEWMNyteLR8ZLjZsnupRFeELxk1y6z8gJX38OtFgyUojq4i/eOdHopbFAtWNdmuwHLcoW3DYWARHikVCDClt04HwIDJigOWOvQupULkxqHREuBrYMSgqzr4MVJGIiulWP1wNbaBKFSFqsVxXQdIRSTQcG4kAhhWdYG4z3J4QKhEqracfvA8s4zidUZVWNYLmYcHT7l2cP7JLbh6vYOF155k9Hu5ZClKQQSjxYCg8A6w6MHd3lw9zblfI6rl7z4wsvcfOOvIHVK4MeHoJaQ7ezhHaLN6hynKVd31lDO8OSjhxw+uMu3/vT3ETg+/ean+Mrf+u94dvv/yu5qwZaWjLWiso6ld3iCmkODZWU9yjh0ywj1WuGHE2SW8pnP/1WK8TrjyYjxeI3bt9/lwwcPWSyDh7dDMFmbIm2N1ynDwZDZsuKFl24xGE2ZjgqePHvG7qWrrE0mJNWc6/aQw2nCo6eCrWFOoQhKH1qSSphsXODSi6+ytnsRVYypGwNJg6mWHO8/wq1mCATT6Tru4BmzowOMseRFzqVLF7m4MYWmpGoMSoU+0DRBUaLIUlLpcWELGmQrhUclbfX64G+a55ormwXLSjIvj/n//tr/g9eu/Z+5ujlGtFJd7aqknYhEt7DoMzDPFhFn/zB1eYFzFmMa6sWMcjnjyZM9bj8+xooQ6Az+upokTVAySoKFfAnv7RkOive0c0+7inMOh0PGjIt2xS0EiNY+KgCyLiwxulWdRKkQePLOd56ACNl6y3pS1crLu+A9W65KtE5ZzI5J04JE6TbsCkIqrJfBqqYxaAEXRhkPZzUGQV2t8M5wZaQYJ6BFYFdGdkfwfZQMhkMSLSnnS05r8ZPyw1qiGkIMjsVM57i4jOQKoAOz67oOWV+DQUc8mM1mxAzpCGD3FUQiOBUtOyL4dZ50EYH3vpWK976zTYky+nVdn8nUjs8Qz7NarToANWbZR7A1HhetAM6zr88viOO/aJmilOoAViEEWZZx8eJFkiTpQD7vPZcvX6af0T6bzc5I+0eyTATj+nWwXC5ZX19na2ura5eoQjAcDru2icB+/H58lvPPvLm52amVbG9vA4HIsrGx0RE2lFKd3Uuapuzu7rK1tcXnP//5Ts0gEkOidcrW1hY/93M/hzGmIwDFa+/v7yOl5ObNm1y7do2f/Mmf7J4/Eh2yLOOll17i5s2bHBwcMJlMzqhvFEXBtWvX2N7eZmdnp6vjNE156aWX2NjY6CwkiqIgz3OKouCLX/wiUdkg3nu0+VitVkynU5RSvPLKK2xubnbKBUqpzkrh85//PJ/97Ge5evUqg0HY0MRzv/DCCxwdHTGfz1FKdUSYwWDA0dHRmfZ83pg7/3u0NVFKcenSJfI8xznH7u4uf/2v//Xu
5+FwyPHxMc45bt26xd/7e3+vsy4SQvCLv/iLfOpTn+LDDz/k3r177O7usra2xmg04vDwkIsXL7Kzs0NRFPz4j/84x8fHHRlmNpsxHA5J0xRrLT/3cz/XgfDOOba2tvjiF7/I5cuXuXTpEt57yrLk8uXL/NN/+k/5gz/4Az766CP29/c7ZZ7JZMKlS5e4cOECf/Wv/lW2t7c71Y40TcmyjDRN+fznP9+p5BwfH3Pnzp3O9uXzn/88X/jCFzoiSiQ1nbeLifNBHMdSSqbTKX/37/5dvvnNb/LWW2/xne98ByEEa2tr3Lp1i7//9/8+L7/8MvP5nGvXrvEzP/MzXL9+nSzLuvmjr04S2zQSQfqknEg+WFtb47Of/Sxf/vKXuXPnDn/2Z3/GN77xDX75l3+5IxjdvHmTtbU1xuNxp94wnU75zGc+Q1mWHB0dtdnLOS+88AL/8B/+Q37v936Pd955h1/7tV/rFI3W19fZ3Nzkxo0bnd3K0dFRRxCL9RSJRFprVqsgGb+5uckXvvAF/sk/+Sd897vf5aOPPuLP/uzPOnufT33qU6yvr/PzP//zbG1t8Z3vfId3332Xr33ta2xsbPBTP/VT/OIv/iI3b95ksViwWCz4W3/rb3Hz5k1GoxFSSm7dutWpTkR1pKjE8pWvfIX5fM7Vq1fZ2dnhK1/5Ct57Dg4OWK1WLJdLtre3efnll3nzzTfZ3t7m4OCgU/qI/aE/90fFlrIsuXnzJr/wC7/A5z73Ob75zW8ihODixYtcvHiR1WrVzWVJkvDTP/3THakqSZJu/l1bW+O11147Y6G0s7PDz/3cz/GlL32J5XJJVVWsr693RKq6rrt3Q1QTiuM8qlSladr1/2i3Ffv1Sy+9dMZ27ejoqHvPxPmvrzQTySPxnWKMYT6fs1wuyfO8U4IpiuJjc88n5UerxOCYjwF+OPU570cwhYA2czx2GefaZBzvg21GO+/5TsmntS1r/y5aa1oUgSyfJghESzKwIVPWOqpVhbNBNlqpBJRAyASlLE3dIISnagxCBSKZ8wYvHFpJdKpJU43wgqaxSOtIJRQSGimpnaPAkGqPkIrJMCNLwve80jSNopIq7AFxLUk+p65LvDMgBFma4LxAYUmlRvomqCdaj0oKsmLAcrliUT5jMt5kurbOyfyE45MZSTZmNN6gWRmqqiHPUmhWLBdHjLUi1xkiK/B4TF2ihEQlCcZ4vLCoxpLnkrKqUR5UIskLz8nsmKoJaghVXTOvahxQpJL1cc50mJIoh1aG4VCTDgReOKwHqRLSLBAFlJAhcFsZrDFYWwcBFxSV01RG0PiUxguMT1iuGla1QwnN7vYl3JZnVS9ZzmaslgvKckVTN5SVRcuEJBVoXCA0tH1ESolu36PKebKRINWS41nDg/0ThJOsFg1IhdQJWiua0iCUapNmZFCFTRU6y5BKgdDkxQghEpTOwRMSIqTCuUCMCKRQg5I+kGiqmgi7qyTFNBakwuKRicIJj3GO45MTtra3mC+WmKbEVAtSrZDSc3IyYzgcoROFqAXGQFU7jLMcz44p8pTRaEzd1FRVg3cNWqdBsjqBREmkJKi/1A1CBnlzCLEU05ThnWEsdVnTmJokzYAGhEUnCQhB2VTYVlXQ2kDeSZKMpixDPI/wXlJKY60hQYF3wUq5KcEHqxbvVFACBoyH+axidrzENDWT0QAnJMvKkipBIjWN9ShpEc6G+cAY0iTBOUiHUxKtcc5ibauGikAoTyYlIgmAimtjbsGISmBdSBKKqhcezWkUXyDbxDjVkieEtwhvkUKiEHjZJhCcIm09cOXjipL9PZ93DqQPsGwLDEovAjDsIsAVVV1leDbj8F6EunUWaRUIEyyqnAMf9oCuJRoJH8ZiJDKFrGnR7tdcDJKFmKSnteANdecJfTqqs8QM89MEIvAtTQQRCCMd1oXo1HSfDwV9Uj4pP5wlQsgBhD4FNrsioJdTeGaN7YUBFoxGa5hqgWAI6Trq4DEqe4xvDCIrkCLHSIP0Eq8zBpsvMHv2XQa7n0Zri7eONh0d6T1KEsh60oOTmGqGObmPb1YkfoW69FeonnwXuTTBPsHZwEw7c9sx3t0nI7Sgbwekn7JAQpw3xM9FF+cLnwQSAyBEIEeIWCme06h7IJEFVZH+PCq7q3f11ks6ieVUtUj0rt21To848HFAu51K8XiUlGBBaoWvVPf37jqAE6e2JLFNI1HBtxc4vdrHCQmxxkRXz89Dkk9JDucJHPQIKPGrMhIOZIy7n87FsiM9cPZ7MlhmCELifZ+0E4v3ns3pgPEgoZwHJbFgZUBgr+jwLjTCgRMo2ZDLFY2ZBPuNjuMhuv9+Pzsw0WsfCKTv03o+JVQ+j6T0sf3nuTr1Plrg9KyHxFkiT/+ksQ85377f2mt7176z274tYv/o7GHE2fYWgdiplERK34oZiEBaipfsjaHT+w7jrK9g080xIv79tH909xfHiD+7HonHPbeuep+d+yun/TCqz5yTSDl3Dt8poTxfdfh0DLbnBIRPEVXYc7h0jdwraukRKthnSrfC6TWksOymDfe1x7kG7ALsPkp6hBogVY4RGV5rnN9E1gfAJlZ4hMiQiwPq5X0G43WcmlIhQISEBOElUg9R4iFYHxQE+3Xje+3zCfHjv7w4F0gart24F9kQ3AbjdIlCkyyWbDb7gGBR5mQ6YfPSTU4+epv37z5ilTh2r19kXwh2lSKRUXqulQtKwK+njGyQZayNRSgDynJybLArjyFlVsngS6oFSliUgPHuNoPdK8jlCVXj8cZw595j3nj5DYrpFJlIdJqgizHHi5LU1Jh0xEoN2D/aZ7EMoFU+WA8DXS5QeNJBkGkXesFsPqexjnnVUFtPYRvSJGE0LCiGQ04Wc77zvQ/Y2D9iOCzYGGZsDBVaZwgB1WqBLEZUZQgoJCpkfiit0NmAsqrI85zBeJ3Z8QHeGYSnVc8QJLZGupq6XoE1nCxqHs8dTniO9p+xsbXNYDQGoRHDHUzjkPP7lKsVeFg8fh9nLHJ8AecEzraZK9KRE7IGnHPhhSQ1g2KIEksSc9RmF8eMR6gaw2q1xAsLpkZ4h7VgnaGsK3Q6DFOFCJkkzlqUEFzYmgbgpbako2FQWkhSppOC1C45mq8oa8PhsqY8rHhqB4gnEiszZkfHmKZm7+kDTg6eMLCOzSLlxTfe5MqtzzJfLsirY4bJsh30ktJK3nnwiLvvvc3qZI9L46AMsn7tZWSShECWCBkGcZ7wQqCBMY61RFBMcyZpQrOoufHSK+xON3j01p8iq5LFt27zB//iV9lyhstpwpF1zKoa62GQpWhP8Cq1ILwgkTJISjnP0jsGxYCXPvVpDp495kKWsPdkxtd+93col8esjQoePnmK9Y4kybnx8i22pmOqxuGs58GHd6mqiqp+ihQ7nBw84sMP3uHV1z/PG5uvohePuHJxC3X7AWuTEePxkFQrJmvrDCcbDDe3qJua9976BquDh6yaQAbCLJmMJ0w3NqnLir0Hd3jy7DHHe8+oVgsuXrrOkCUcHXJgn+CdaQMLYfFVN458OMSkGiE
kjVIs2o24aTfLUiVILEWe8KXXr1I68EIj5VP+9Lf+FeKvfI4sUcHCSSVIpYKfq1QhW0kqhGhBmFYZRwgVXnhx8REXah5sUzE/fMb+g9s8vnubdz64R5olXBh65k37Ula6A5GEIGQWKQk+WFFJ0QZK4oQoQpAgBCBkeJF2NMOQBSK8a5VFg9TnKS0xLiDCgtm35BjvA1lES836xiZCa8rlDAjD0jU1RoAQSbhKzNIirHyssVSVo5kfkNuGC5OMo6XhBE9TV+SpZDdXZNKF/UNryWSspTQemWdcvnaTrfURbjHj3979Ixpj/9LfLZ+U//+U/gIzZskLIdBad9nrEdSMChGR/BEJGvEcUb4+kiLiefvWL/1zxXJeceQ0AHY2I6tvjxCBtGhDA6dyxWVZnrFiiN/rKyv0Jf3/PEm9/iK9fx8RlIugY78+0jTtALgI1EVQOmbiL5fL7t7ruu5UIuK9RHAzgofx/iMBpG+XEok2feWPPM+768YS6384HHZtHJUX+gSceC9VVWGtZTwed+SJeE/x+D75JB4T6ye2Q+wvURGlr4YRnzMqp8Q6j+0f6zfeU3y+aP0zmUyYTCZdO1VV1d1L0kqxR2WFSG6JihkQrEQ2Nze7e4jPEoFxIYLVRCQ2SSkpioLJZHKG+BRJPPG+47P3VQh+UB+LdhmRYBRtHpIk4cUXXwQCUSeCvBE4fv311xmNRuR5TlmW3X0XRcHLL7/cHZtlGRcuXOiIME3TcOvWra5e4jNYa1mtVhhjeP3117vjj46OSNOUwWDA1atX+fmf/3mklEwmk84G58033+Tq1avM5/OO2JHnOaPRiOVyyaVLlzq1lSzLOuLIYrFgY2ODz3zmM1y6dInlcsnJyQlZljGdTplMJl1/VkpRFEXX/2K7xXaIbdDvR7u7u3z+85/n5s2bPH36lDRNKYqC6XTK9vZ2p05x8+ZN/ubf/JudRVNUVejbgPRtlmKfi+Mu9p80Tbl48SK/8Au/wJMnT7rzv/HGG2xubiKl5Cd/8ic7pQkhgkJPnue8+OKLDIdDFosFa2trrFYrZrMZ169fJ89z3njjDR48eECapl2fGQ6HnbIPcGYcxX+x38e+37dF+vznP8/Fixc5Ojri5OSkI4Rtbm5ijGE0GvHaa6+xvb3Nm2++yWKxYDQadcS0OIZ3d3f5hV/4BTY2NiiKgrquuXnzZqfSEcdz7Gtvvvkm1lomk0lng/PFL36R5XLZkSDic0aCV5yjpJSdKkYcY5F4ExWWogLJaDTizTff7CxVYt+Mc4hSivF43CnQxPdh/90T1Uri+IjXzrLsjAVVABftmXdhnA/6GT+xHuJcFftv7D99NafY36SULXh32uejZVp8J65Wq+6+I1mtqqpOneqT8qNd+pL/MVB5hkTvBaIFJTugtp855l33/nXGYJzBWNMFGk/XUkE51AmHE4EoL7VGecCGbZDUGilUsL1oDDpNQhKBDgSqRlkq2c4Z3lKVFcbakOmpJPlgEILlStEQ9nDStqYiHiprKWuD9Q6tE3KVMhkWFJliOCgQQlGZBusNEkWa5eG8wxRtBNVqhW8sMQPV2SbsBaUm0aDTAqES0ixDpxlPHj+lVCvyYcFoPOTkZMnhwSGDfEQ+yFiVNc5rBsMRdVWxnJ+gx2vorEAoBVpjTINWmiQdoJ2nocTRoBOB8ZIsF6xKg2mTdozzLMuSxoUEo8kgZW2UUOSe0VgzKDTFKENpiRcC6wVeaJpWdl4IgVcWIw0qCXNW4wQ4BU0DjcA5wWJZc7g4piZDJUNMZRkMxxSDAUKC3alZzI45PNjj6OiIqioxRmCSoJApfJS0D4QGL2WwoDUGKR3rA0EmEvaXlrv7x6yqFSoJKjHT6RCyHAVYHxQVVJGiE0kxCv7ixnnSJEN5jVQpQiik0pi6wRMsShAC4TV5lgZSkjNYb1E6o64aIMF7SV0Z6rqkKkvK1YLhYEg1GuBsQ6IlWZYgvWMxn2OsZbWqKMoaa2GxWAalz9oyny8Az2Qcpe2DeojzNgAQUoX9vlIYU2NtjVIpEftKEoVSUJULaP3u0zRFSRGwMicC/i/jHsG2a5Cg3OG9wKECWNQOcq0EtlUK9a0tk2tq0lSjtKMuK0LKlGc+m3NwcIjwkBUpWZFirccikDrF+XCjSoQEK2scxthW2SqQEKUQWOu6OEMgh8W1v6Nx7fiiVV3VkCjwTtE4R9PY0Ge9QwpQqk1OUDqAiC1QGcIsrdWUlTgLSvn2s5ZEISTR20F0c5U4u15sk248rrOB8C13IlivnO5NOsKID4k51tpg5SJa+z4pcMZ3xLhg8yK683vAt3UT1ght3EjIAGZJurVbt4dFIoRvSVQBDA1ganx+OhzozLbWiRbQbed6708Jf5+UT8oPcfGEYfcDdW7E88HuAP46qvmMwXiDenmMoMYcP8JLUIMtpC3xT7+NTXLEYAeGO4CDbExaTHGH9/EbV/DKI1wDZCAsRkiE81THD3HVHkk6DmTJ8QSxdiXMR9YjVGuPIntPEBdukdhxuorrP1D3c7S3iJSGQDKIa7b4vbPgcff8EOas7wNSx28Fu7CzdYo/q6TbR2PPqh3EvUm8/mnCRZijZZjzRDgmTXTAr1zPjEWc1k//Nk7P00H4p5+dvdWz323j9e0lz8Qj+4e01XOmTs7XVffdXvy/f93nESTiObRoVZcJ6lFds/ePd56Lu+sIb9BatOtkh5Yeq2jxCYnwAiNBKRj4kplcAx/ULaKaxQ9SoAivt/ZhW/LEGWKNbw8iYu/hMxdJRpySOwKJ23/8GrHu2nfcuSo7z2M4bQh/2hBxDx7WG+7j7Xbu60pKnAOlU7RWAX9th5Dz/XaOiEvsi6JrROHbvVWvDttfuzEWyFYB8+n3kT8vDv28cvrdP58s8rx+6z1dnPL890+/40GCsBInHGQFu7lmvnpKnUyQDEEqFA1GOLxMuDqYsapSlM9wUuDzbYyzQI2wNaKcozjAComSQ1wyxK+eoQY72HKPcvmEwfQGXtdIu0LKHOs9SvqgdCh0UIKzJVIPP9aqQkh8t7j5y2F//IgQPxxeLNAaEArjPHWzxHqoa9j0M9bqfRrhsNkQRgMOywWLkwPKpmZzPGBtoNHFiAMEH5oGIzXTdtAoETIPmNVkyiEb3wK4Ho1lNwMWJbPag04QRnKQ5Xit2NkYkm1sorwl1RqpJGVjMdWKxXJGvVphfMLB4ZzttTFfqxV2UTIoNK++8jJ75St873sfsJjPkGnC4vgwbISSlLIM8ps4S5HlpD5k7lrnmVcNrCq8DSoTeZaQZSlluWJVriiXOZZt7HiXxNcczUtykZD7QKzIhhNMXSFEQmMsFktVW5AaJxTCG7wMm9zVfIb0HkuG9Z66WTGnQKee2liOTk549PA+g8GQdDAOqhyjLSoP/uQ+iasAT7l/F9nUyOEOxgUVj7pakco5Yy2oyyCFlGQFxWgdlhWT8ZCy3chaHzIZvK3YLio8DStb8oGrsU5jrcP5Cp0NoN28eBcCQKlWbG+scXR0QG0FYx2ynOuyZJSNGKsC0zSclIYDl7NsJm
xffoHKS44O9lgtF+w9eUCzPGFiHYmSmCwhzTMeP36KEJ4X1j27eRmUFrTi6eGC78yeIMsZP/vCNl/50pvIfETtZ1gUHkXjJbXxNCgaq7BWMlmdUCwOOU4L7rz/LaZeMN25QjbZYGM15zPTLZ7deZ9U1Dz4sz/lcVXyZFUjjGFLJQzbTakQEjEp2D8+YeUMtfNUxlB5w/a1G/zv/o//Jx49ecZ/+vf/M4//9CnNqmRne4f//T/4B6wXOf/D/+V/QACmrrh75w7DV17l+rXrSCGZ1zUf3p5x6co1XL0kyQYM0xnvfffPuLPu+XGWFMkaiVboLMVpzaqqSVYlTfOE2eyIk8NjssGID76Tcjxb8PLLr/Dpn/hZdi9eIB0MaVYrpIDFYsZxNiPPa1773JfYe/cPWSpJPTNIHGmqSLVHS0GWaKRpsGYJzoVsHOdwtsaZEudkW/eSGy++yNUXXkKoBKEUQqUInaKWd9uXJxjfysh1C8zwuxeqzYyRgbwjNSgVCBpShSCg1AipMPWKpx9+h9vf/EPu373H1e0xlzZHNNZzsrIcG43KJxSTdRySurLUxuO1DJLEPmSmeCfaYEe7uBBhUXA2CziQfKQI9+JxaOla378YfIoZH+Ht67xtn8uHAKDwLOuGrANzw0LDmhqHR6UJtGxiIdp78sGOyzuHc4YiU+yMUk6KFfOlw3rPeqYoRFBf8TFwQPQkFBgveHQwp/SC8WCAynJ0+kl04Ie5RLAygvVwCjL1f+6Dquc3BlG9IQJyz5Ox629kzi9w+8QMpdQZFYt4bATOzp+nr1ACdMSQCAieydxoj+uf9/w1+uBYJHn0Ad7zVg9x4Xxmo9aSAFarVXefERiOlgt9O4b4Lz53/9wRDIxWNVE95LzSx3klgvhckUjQv/8IVkb1lfF4fOZ70U4jZs9Hsk5fXSW2f/w89oF4v/He+pnrkQgQrxPPE3+PVkHnN0RApzYSnymSZQaDQUeUiIB2v5374OhgMOiy8heLRXfeaBfRb89INomAaiSUxDbQWnfkkqgwEJ/1fJ/r96nzYye2fbS5iW0dgfk0Tdna2uq+s1qturaICjTRVmk+n3ekn+Fw2NlW9PtetHJpmobd3d0z/Sa2bXyOixcvdn0nqphEpYWdnR2Wy2XXpyLBYmtrqwPxYxsopZjNZkwmE9bX17vj43dXqxVbW1tcunSJnZ2d7v4i6WQ2m3UktD7QH5/rB41nIURH1HnhhRc6QkmfxBTVGaIFUV81Idb7+XH2vDaM/S3W0Wc/+1kWi0XXJ0ejUdf/bt26dUYJIh4TLZcioSsSYy5cuMDGxgYvvvhiZ3ulte6IC31wvygKqqrq5szYbyPZK7ZpVE66dOkSa2trXd8FOiudqqo61ZdIhKqqqiObxX4aSUDRuin24aiaEVUw+qTBy5cvd+M6zts3btzoxvFwODyjRlWWZTfnRyuVqJYUlS+SJOneRScnJ904jf2qT9LpqyLFeuyTM+I8Fn+Pn1dVxWKx6KyT4rwV58F+v+wrxsRzxbnv/LwXzxHHaF8dqT9O43tNStk9c6yjOEdGAkxUGIkqTs97/35SfjRK2I+fZoNHW8wOBPUQ9zbnSbcRlAxgpcWahqauMS78HN8bzrmQPHOaQogTDik9Wmmk0NDUWNsghGwVQSO5RIYkAkFvLGqUVjgblC0EkiIPKj3OO0zTYKzBGItAUJU1ZdNQNYaqaWh8GFeDPGeQZSih8NajpWY0HgSSSNMg8ORFilQhA3c8maBkyuz4BCGDGqJ1UDtL3RgcCTrVCJVhnCNJUtY31qjqhtVqSZYPyPOUulzy7MkD1jc3yAYDQKJVzmS6yXLRKsApjdBpUEFwJgD4OpAC0ArtwaOwTUso0wkeaKyhrBpWZYOXgkJLpgPNZKBYn2YMJ0HFMsuHIBSuRbG9kFgTZOadMSAdOpNokSFESqoLrAWpG0Tt8Y1ALByL2ZyX3niD4WiD3/nqf+L4+JC8yBgMx+RZgVaKNMvIsmDbEYAlRSudGfqf8DgMnpCQ6mXoX9pBOtI4Ldg7qniwqEgfPmOQKLzbZn1NkmQBwPA4TKPQqkCqDO+CRLfzgQCRpAMQITm6aQxpempX6UUgRRhbtX1fo1XK0WrBeDIg1YplOadarTBVSblcMChyalu3FjEFtUyYncywi0OSJCOdDmiMp64MUkhG0ynOCZrWzbo2Ia6nkhxjlszmx6yPg9WMdYDWNLbEmgYnw9jxPkMGOgPO1midtsTEYM0cWAEy9HshGAxGrT2J78gFTd2077ugdBLGoydJw/u/rqpA2rCOJMmwzen6uVw1HB3M8NYxHBQURY5wIIWiyIfggvKGakkbYbwLVKIpBiOyrNVM94Roj9QoLwI5xHpMExQxnKVVXJUIFQBJIWRIqrEWa87alUgZEoJcq8IhIyrnTUumEOBkoLeJpPVu0SHRvgOo6LKD+8SPM6UjSZxFMj9G+iCEdZCBxBHXQ962+yWpaEyDsz40mbUd0uecO6OWJKXAe9kqjniU98GqxnvwIfEIL8F5XGv1IuglSbRgEr61HvLR1Ut083u81mke8iflk/JDWvrjWoBs7U6itUM8JtIRRPu7O3MKgW2W+KahaY5xwrM2vYYabDA/+ZDCXUCPL+PHwWrOrpbYZ+8g0hzyLfLpZar99zCrA3S+iZcpXhh8U1HPDnDVHsVkB5u/ACf3yae7NOkm0oOVDrDt+AbhRAckB85dS3j7fuVjXBCJ8HFGOK0f3wf74z/R2ptE5QAR0G8f55turgnrNucNHS+DeI/yDPh/evbTWF6MYYM/s086D4R7BE60agbWIaVGikDwC2eVgVjHWRJHR+g5Q/44bVvvAyWGczG8/jGCuMdv6+77sBD6CT+ByHL6fM6dnW0DATt8XUZywOli/DSmKFolipiQigukHQHWtypa1pKlCdPJAMoZQgTrNa0UNlHh/SdDXxEuqM1IJRj4htRaTHc/LcIiov1J7349XbvHVvaxX0QeiPet9RhYG95Dse7o/bdfp3+REvvHabvGfhR/E2eP8R7kOeJQe0xHQPGOSOyQPvxrGov3gXiu2ve/aLEXGdcD8Uu9ijkdO6eEHyFF16e6w93p3Z9/8ueRM37Q359P5OgPeH/u2P7IPP1vPz59er3eOQTtesohpGe1qBHJgNFQ0fgapUqWTYZEARtsZycYqzmoddin2Aa8BW9Apjid4tUgWD3aGl+dkNgK5+es9g8QArLJVVw6wpuTVunGhn2b9yActRdkScZJsyIvxuDcaR1LEbzweo8sPlbb/+XlR4L44bylsXOMC0CmEIKqOUF6z5ZOuSRTjr3jvh4wktscPV2iyhnN8RGDouDlF66zMR5y2DQcSInRmnpl+XRTIQVoCVIqni4cSV1SXCoZbYcNeNlUTKYpMh2gFjVbRqBnNfvWUF25wOTqZYT01NWSNE1ZVpLVbMFofcTX//iPefVL30ZevEW5XJEpqJIRy3mJHGRsX32J9//0HcbjccgS84bSWg6WM6bjjJUVlCeHmM5TWZHoJCh1KEU2GtKUSxIlSZMUITVah
eCF1IrZYolSKaVIOdkvcXslSklSpUkuv8JQa/7o3Y/ItWZ7a43jVU25KjHGs5yXGBM8jIRU7UatpNCK4XiKHGoaO+PZ/lOOjg54/+4dHj56yGc+9wUG4zVAoYsN0Anlsw/QvkbVKwpzQrOSNH5AbS3H+3sMx5YNpVBpCCgMxxN0PkTVmjzXmBLw4IwNnp84jmcLskTRVAvKqsS7IcZavPUULkoD+XZz17AxHjAa5jx4cA8jE7zwjAcZL0x32Z1qtFN4P8YfL8jzjGtvfo6nRzPuvf8uVblgfnwEdcm2FPhUscLjvaB2iqOPPiQrUuTGJkK4oF7SWN66vYcRmhc3BvyvfvLTjCYD2iVEeIFKgxAa5wXloqI+qTGLCmMFHx7PuP3kNsn8BCkTLjSWweEBy8NDCiFQtuHuasXxYcNYCF4UkqHUKCSN9RgvcInCekgD7RXlLXhHAjR1ze//zld5+zvf4mDvKdZY/teXLjLe3GLr0gtcv7jBZ15/jTuP9yibhqODfTZ3LvH3/vv/A365z//tX/yLQApazjnZe8Tesz2qcoX1oW/JjddYPXhAnuUMplssFvsIUyF9CKhr6dDNjMRKDj56SFKMODncY+/JR6wOPoTVCcvG4oTiw/ff5vB4zqWXP0s+WuO94yHLSM7yPmRgSREkVPEk4lQJQymFFg7pLIksSARIYUmEawlfoKVBCY/WkkQJEhm8lbVwgbjS2q90i3MpO7WN8I5K8DFTQ4QgZQiqeJwXNHWJPDnk6oVtLl/cDiCrlyBVK+0aVES8kHihWgJTUDaqyoq6bihrS1UbKuOomhD8W1Y1VRX8rJvGUTaWyhrqxrZ+sIG4ZVvf1rBGDh6v4Rk8xroQlCO8rJRK8NYyOzlBpgnWNrQwKkk+AlOGYFm7QGz38W3IQyCcIxWOK9OcQkle2h5y+HBBpROuZMFnuYlAMzbuIPDeUS4XzD68x5MnI4phxqpqgkTzJ+WHskTgKQKQi8XiY4B5nudnssThFESMgGUEoyJoFjOp4RQ0Xa1WHegY7SqiAkZd152NRpqmHBwcdNnpEZyrqgohBMPhsLtGf5MWQbpoH9IHJvvPG68V7y2C5hFw6xNKsiyjqqruufrZ/hE4jCU+fzx3BKpjZnhUC4hgXp7nnTLCcDg8c39N03QAdD/AGEHL80B0bKs+sBnrC+gsIICO5BAJBPH755nnfdWSqFhQFMWZdu+rZvQVVPrni3UWwc4I4vcB/P5GOhIr4v32FRyAjsQxGo26Oo0gd5+AE9tSStmRQ/rPF++tr8TSr78+6SYqVMR7iqobMcgb27tfzl8vnjPaHvUVUyL5qt/W58dY/94iYB/B3Aj2xsBD7M97e3vd/RhjOsWEWOfRpqbfdv1xEftiABYGXd0CHB8fd2NUiGDB1Aey+/cb66BPLupvXONYOBNMF+LM3NInMcT+FI/PsqyzV4rfPa+IE+slqpr0x3Nsi1g/sV/H8X2+v/fHY+wX8fr9caSU6pRq4vH9cdlX+ImqK311nEgcm06n3dwQ+37sv30yVJ9AFsdGVGqKx0fCQhyDUkoODg66+03TlOVySSSMSSk5Pj7ujo3njWMrfi8+TyRU9ftYLHGujc98/p0SyR2xfeJ147X6bdUnj/XrsN9+UfkE6ObHeI7+PfdVNfpkpX7pE2mklKyvr5+Zg+fz+Zm5c7FYnJmT+tfTWlNVVfeejOM/kiv79mpxfD6vDmPdRCLOaDQiy7Iz/S0SjSL5I/7+SfnRLEHCu/253TO0WGEXpBacelq3W7yO/BGC9iGz3XvXEkAqTFNhmxpvDd4HBRBBUOZQUqJV2CMaZ0H44HtvQ0zQCY+3Bpoa4zVCKbI0BLalVmRJGmxVdCDFy6jAUTvqsgmAuSvD1so20BhEVSNtAz6ohVhv0WmCV4JaOIQOiSqDImM8GmGco66rgK06jTOSvBhS1zaoMZR1SDZAUNWeuvao2pHl7XrDGLJigNSmndca1qdj7NBxcjLD2IqMAWk+oDGO4XBKqkOiEN4iVEqSDpHOUtcrTEuMCRa0HikdWZpSroJdR5YXVPUey6oOSpoeikIyHaasTQpGk4Ks0CEJJknwQgeFB69xeIwI86YTrTGEEihVoERBOpiGZBjRhHYTnpVZUbs1tq79BKPhlDR/j/rgLsvlPvO9fVSSBqBIBOKCVhJng2VzkmicBGdDhF1IFYRlAVQ7j2mPdLApJdiMB4cVHx0sSMUTlNQkQlGMB6RZhvcOZ0M916YltXqBUArdxuGEAGsjWTfFtdZDMkmDlYgBayFJUuraUOQFWaopV3OMacjzjKYu8M6SpRnWK8oyrBeaxnHn/hMu7V5kMhqxsb1F3RiMq5lOx2gBq6Yiy1PSJMUYh0pUO/ZC/G8wGGCaFQJJmmYYs0JoCda0hAGBTjR13eCFIstz0mRAY00bW2jVE/EkSVC40irBe4PFd+tW78PcX9Y1UiuqypDlGeVqjqMhUQrnw0xQ1WWw+3GCVdVQ1g2TUUGep2gpkUqQF+FnpWWYJ6xFKkeW5egkxQPWOaraBqKXbEEQgg2Ms2CNRJCSZ6qbR6JChvU2kFSERWlNViisadU0XMiWDUq9FulAKIVUIFSrfoIIWaYQ5hkR5h0pJVKEZ4iEiQDiyTOAXZgjo5GB74FwEZwEj21RuoAM+Tar2LrW2qBdD4YxLFBa41y77/S0xJ6WME1LLBURnnDt3Awo38a54vv/NBkjEPGCDc1pMkK7fhbngc5TJYOzgNsn1I9Pyg9v6YP/4fcA3HtObTU8Z0eB8OHFFNQlJB5DY4PN2iBXlMcGNRgjkowsvcpycUyaTEnHl8DXyMkELS7hVif45hCzeIrIxtRHd1A7A5w1NIf3UUJSjDZQ659jOXuILu+Rbr2GkSlCVAgUwgXCnhQpol23Rez540Cu7NZqvYdpyRiiW8hFNaEI7nZrvjPAMeeIb6cqRPBxID+QBU4VRUSgo2EjwN+7W2itvQQIGWfZdt/sTmepSMrz3rdKCu361AcLF+GDtZjpxRdEq4bUBcVPb/9sfCPeSXwOzj37mfL8RLXTvwkiOeb074JINjhlVnvwp++YSMQTog3BtySPjgHS1r/3LRHUnaqNR1KGFCpcWwp0IhmkKY0VCJughGNVr1itlrhyQW09KtUkeYLqkilANiaou7VYyhnNlB5px7fvrW7UCNE7/iyJwNkzd9r1y9gGoj/2vOtdk9O66fWXjozQ6+CnBIX2qLjXPm2Vc+dq76Vl2YhO8Tz2FUnZNFjb4J0gTRV+1cYbvMUTVCQC2anXfhAHJPFH31bY80QnfGzf5zxvv4/G//4gRY7nEVZPY0H98567fu9ez4zMM3XbPlxL3gnKjJZcap6VFldcJnNPObYFk0ySNk9wyZBMJTxYTEF6VKpgZfBeI5VDmBLpGpSvsTRIEmySIXSBMStSZVD5iMo6ZPkE7xssHp9MscIiRIZwJc7BMM84mden9efb2T00Mv2Z/S9jlfMjQfzAt149ouF0WhdMgVesYX81531nqZVgsVghymN2M4MWNeMiRY5bKfHmmGM8
B0vDK1KT4rE+sNNyodgoBhzYIB+aK0vtBBpJOXMMFw2bWnBJSi4Nc56sT/nexU2ksJjGkGuFLC1P334fK2D5TDIaDLj/3e/y4pXXmYwLHgxGVBa+/tY3KTC8cf0Cf+3LX+Lp48fMTg45PDpgbXuXP9rfYzab8eqtN7n/KOdg7zHO1FSNoWlW4B15ohgPcorJesiktB7lLVlRUFvPyhhKU6KoyLIgLS2kwlhLVdfkHmZlzZODA8aDEQ8eP+0sUYRS6LSgsiV1VVEUiuFojC1yFquS7939iKODPcoyZHN5PArB29/5Nvt7e3zhJ77CxvZFpJAMpltkgwEHd78Dy32y4Yg8WVJVJYuFYTE7wBSjoF7gAjFhUIzxKiHJMtIsIbGgZViEeA8YT1VWuFognUG6EKh0xp2+4GRgzloXNnI7G1OEUjw7OsFl6zhrSZzjxaliXGiMSMmGI0Se8vRwzrPvvcWDkxXz40NWiwW58+TWkE7GLBqLbxqsEzQypW4qhLCtf20A1j/am/GNd++wPDri7375FoNRCArLdqHhnQ9gvHG4pUVUHmcSnjY1D57tcfzoEVuzE64IydgbsjsfcGItj06OubtasKwbdoTk02nOWh6yf3yiMMKxMo5FZZhVhtnxilwArVqMdp6pFMwffMT7z55SOMeGMTzCU3rJm+vrPH74mOlkxMVrN0mSP6A2Fq0lGxtrPHr8gBduXufnfuan+b//y/8nKr3CrU99mqr5U+7cOaSyNfPKMvyxn2fKV8F/DS8V3hE271qzfeU6mxtrPP7e2yxOTsiVIhmMOT5Z8Fv/4/+LS9tThDc4lVGZhv3DYzZ2drn56uv8ydf+kGezCuMA4UIwrF3cSNFmgaiQAuG9DUFC5/FeIYRuVwAJrUJp2Jz7QP4CCL3GgzNIJNJbBA4pHFoEUohWDu0NaaJIJCQStHKBMOJbMkkiyRONFhYpYDpN2VjLAtOwDXRYL3CtRGn0QnU+yMmG7bzEuRzrCowPtjLOO6wNChrOBXKh9SJYYFmLdY66brDWUTeOyhjq2lDWhrJqiSPWYayjqhqWVcOqalg1jqo2iDQlkZAqiRaCyobgjfMeoRIUCXVvAR4X59Fh0TrLWq7ZyhXCO4pc8ebukEYK0mbFqnatJ6xAtEFZCJk3Go+wK+y8ZD6X1FVN08tw/qT88JYIUvYz6Kuq6sDGmFkdAao+2B6B86iKUBRFB3T3F7IRyO2D22madtL5ETRbW1vrMs4jYBWvFbPLI1gdg3bxnzEmvGt7IHgEZfsy+7HEjOm+QsZ5KxI4Vdbol/599DPrzys89DNn++eJ5473FzdFUcEiZtvHLPb+5vTPW/hHwLmua1arVXdvfTJJvP+oPtIH5/vXiOeMQO15EL5pmo7A0CfOnP9+rI/Yh+Izxzo7r6YCnPlbPGe/Xs/3s+fddwTN+yow0XIGTkHhflv1n6GfxX/+Oh8PgpzedzxHVC44TxJ5Xj2fL8/b6MVniOD9+e/127dfT+fP1ycK9BV+4nH953vePfVJPv3v9c/Xv6fz3/kvLfEZ+4SqWLd9JYfn9av/WqVPVIgWHbEO+n3zL/q8/SBAtOqIhKl+W/dVdfpzcxyn/etFYkA8vk9UATpiTJwzY9vFv8V5IpLq+mSz5/W/v2j5QfXyg875/YN037/EvtGfM+I1+nN4v/+et3vpk07inCqEYDQadW0Qx1wkREU7r/jui0QkCPUbCWJ9taf+e7avoNR/lv57OpKP9vf3O0KiEILZbMZgMOgITHH++aT8KBffgZgxUNqHEE6tMtvfXZzrA0DrWtnkYJURwE5jDHVTY1syYgA02/FkIVjHgHNhryGlRCKD0qFWrQUoQQWgadBpihYhI0/pYP8ihEehaZqa2XyGs466rFkslghAa0fAzV27z7IkKmGSgFUNG0XKzlpBkiq8cOS5osjyQMxVCcPhhDw3VFWJVGEXarwjzQfoJqFKLCf2GAikgFVl8LLCWIfOMtI8x5GEfbEEbzxSaGQmyAYGpZNu3tRKhhhQVrBYLlktFwxHCnwS1CtVSoKiLJcoqRHCY60J6rOc2soZZylNg5MS36wYJgOGQ81olJMVBUmuETIkDglJAOIBYx2JSNp9KkAb+JcaoQtIBiFYqkuUc5hmSTlv8HJEMblKkucMhmuo+hjZSKp6xaIsmdcrvBLoVGOdwxlDbQQDNEmqEZ5WZ5WQUSnb36VAaQHOkWnLuk9Y1DBbNHx0VDJ4tEeeajbboLpKMxQghMLRYGuDU5ApjdQahyBVOuztrcdagbUCLTVaJ6xWJXhDkqSAwzuLUmBdw2w+Iy8GSJmEPT4wHI84ni/CO9OUrMqatUlBUWiUCoplJyczBJJBluMB01SsrW+gVbA4ci7YGZXLFWmetXE7kEoFYpIzCOFQrQWxdRZjG0zTkA4zpNboZIBdzgCLUuE9I9q4nLFx7xKIweVyRd3USCGpm4a6bkiFxNmQwFbXFanWKOGQWmJsiXOBlGydIc1TpusjikxSlyvyPEElijRPSPNgH9NYi0qTVmVFI5VgtSzx3pEUKc41GHO6jmzqoD4SQZzSGBrX4FwgiSRKobRGqwypW7VUZTDehqSzJsRVwnpOIKRHiHbACQKZQ0gEwWZYSNWqhARFoefvlyKweRZYCfNjHygJgGKcHeN8SFwnizbe2aq+hnVbaEMlNGmW4eoVTW07EM96FyyFIwHPgZcSpVoFOa9wkpCg079ngqIHUb2pLbIFfwM5T7RWxS3kFQGvdl7nv3z5/Un5pPy3XXrAZothEtc954vv/yRaQkJUUl7Ngtq9BYOncSCXezR+yXC4hjm6Tb16RpKP8WisN4GYJhNUmuDm+9jFHebvf0QyuMDo8qcgXac2JfWTb6LGG+jpZ3C+wWPRIqN9MQdAO9ERtu7GrejfeGeTInpPEg+Mz9MnVbRf7QDi9tBuDjw97nkA8mkMob2OP63BSMBoZ8n298jF8B2huL+X6sgT3YQbiQH9VjklHUgpAzXZ+xbpVSAcivDO6Oqqd8/x3rr5XYhu7u499MdjKacfndbsc+Itp++Nj5NEIgjtP/Zp7JO93+O7wJ9qMgUy32mtAJ3QhkQglKBIE/JCYWuF8YaDhw949PAR+0cHATtIhuTUrE/G5JOCJM8QSJT0CH+q2NW+4EIbntln95QrBKdkHfrkEN/aup3bn4ue5os4bVvX3v+Zv5/77vkY1F8kXtD9LPp9SJwdGr570JY4IAiKg6HvShVUtxytLVKvv3SxWn+2/Xy/vzt/5tpn3rvn+9658rzn7cfwnlcPH49/yjNju9/t++d7Xr1154qEJuHBq5aoAwLDyhqs3GbsnnBcTdlVKeOxZG+vQfAM5QsEFmUrMvsQ23i8HGJUQi0GIFRQy29KBvYRo6Fmpl5klBiyqmGmdrHVglw9oymfBJIOKUIm2CSHpED4o0AAi6S0tpJ9fIa/xDXOjwTxI1GCa2tDlJZUXlDVDkTKX/GekTH84fKEY6lIXYXHkoiQaTY/fsIk38SRMShyNhJHouCxM6QreNEJEgTaO6SwCFOjfJC
I1Ficswjgo8bh9ytGwnOoJHu7m5gbV/HSUzcViRBM5o6jx085rGtUuUIrzXxVcvt773Ljy3PkeMqy8mxMN4LNiTW8evUiX/rv/jd89/ZDbr//AXfu3GZxtM/1F29x7/3vcO/D21y78SJZnvP40UeUyznWGoZ5wmg0pbKC1WKGsCXT8Yjdi1eYbGzRIKjKktVqRVPVlHXF8vCYqlqQCB+ylsWwtaWQnKxK/LJ9QSuFxFMuF0jXsLW+QVoMeHZwwJPHD5mfHGOaPugmyJOMLC9IkpRyWfLWN/6Az/z4TzBd28K5hPH6NlJ9lr07b7H35CHj4QnDtW0a7TmxNSBIh1MWJ08xdYnPw300XmJsmBwcHlPX4MF4S1N7kCAJASDnPbZloVtnW76j6KxedrfW26DNglSHDGRPUHZIU02epJQ+ZWIDrajee4isUuqyJPeOq0PN/RNHJlUIJEVAQyhG69ukiaSx4L2gMZZ7h0Gq88cvTvnUq9eChCQReABfK6RNkT6jBu6f7PH+h3eYP3nEdtPwWaUZSQV1CJY+Xa14e36M955Pr4+4dfEqu9MBAy3RHty8oT5ZUq1WzBPP0WjAfiJ5cjzn4GDFSWXwzpMJgfGwLjyyCYy2Ck/i4d7hMT+VjKhO9njvu4aDw2O21ze5cX3Kpd1tFk/v8L/8+j1evPUGommYzY85fu9dystXODqesawqtEp4/PAj7n/4PlZqNI5Hz55xbaxbBRmBGm9y46/+LJdvvMy7X/ttVJqTTbe4d/cBR8+eUM0OyYuUqjGsnGAwGnPllc/w5P59vvdgn6qd9gQgZLtga+l2XkJjAnggvMe6QHLwEXzpL2g4ZY3GxY0X7VoORasRS2SeRh/X80tCWrUMIVxgbOPbn8OiBu+QOBQWLcJnuZYkypMoSIQh1UEqVxEUYbTypNIHhRJvkQiKLPjQqlyTSNe+4NsFj5d4ETbaziscSSCFiGDF0rgQcLVeYFywqzHW0XQEEceqNhycNLz9ZMXw6g7L0jBrGubHT6nLFUYoJEEhZLWat1k3QfI9EjisrShdjUMjAOc804FCCKhlQuNqvA9jU8nT4EYkSQonEN5jmqa1jfnLA80+Kf9tlT6I1zQNJycnHTC/sbFxBjiLGfURRAc6cFxrfQYQPb9ojZnyUVUiyzJiVnTMSo4gZF3X3Wd9ADJeLwJqQHcvfcuNPqgX1QhiJnwsfRC9bzXTB5LjM0cVhvMAdx9M7LO1++D7eVuKCODH7Lz4eSR5PG/R/4NIAv12BLoM/ggg9sHL/jH9852vM2PMc9U8zgOp/eeN5+qTH84TC87fb59U8edl5J8nfcQ66p+/fx/nyTyRABPv8Ty42q/j8+fpf97/W/+/P4hkcZ5Q8TzyRv/Y5xFa+ioI578Xj+vX6fk2On+dftv1S/9ez2/I47n7/aF/rtjfn9dXz286/3OIA/0x1Qfx43j6foScv+i5nw8SnN0kp2nazQF9EP98nf/nPF9d1526QyxxHPaVeuK99tugP9aFOCXL9Y/tP0e/n8Q5rt92fSJI7D/9Nv6LlvPj4c8r5/v2fylZqF9n/bkq/ve8Qk3/b33ljThXxGeIhMd+HcYSFYX6NlTn34vxfmJ9xjmhr/AT36t9olNfrSfOs9ECKM7d/T4ZVag+KT+65UzgGwIgIE4BhXMHd4Hrrs8aCzYSbW1QJLQ11jRUdUlTV+EYXJAV9+CtwIuoXqORQuC8ReoE7wSVsWELaYNud6olqQxggLUVq6bCWgLRoao4mc1pGkvTWObzJd45Ui3JlMI5z6KqWNQVSiYUiWZjMmI0TFFJQ5pJBsWQLC2wDmQiQYL1NR7IshyHIMkyEi2omGGcZzLOEW6N5XLRhqxDMoW3nqaqGYzGqFThS09VGrz1HB4eMZwOWd/cIElzkjRFJ8GiAweL1QKlFU3ddPtZ5xzWBXuLNCmw1iCEDGon1mOsoSxXHB0fs1iuqFvCbZYmDIqUyWiAzhQGQ6KT1k8lSLQjQAsRshWUQCYJpQVnQEiNVAkyTcM811iUSNDKU66OmC0q9o8bfuvf/WukM8ye3WUtaRiOhtRVQp4b5FxxslpgGxt0C6TG25Bkk+TBl1s6i61di1V5UAqlFbkSQRmVGuEMO2NFWdUc1g0fPD0gT4MFUFPXDMcTlAxr5DzLaKTB2QahVFDIRdBYhxSaJC3wBPnupFVeUApMA7pV7020RkhBZVYsVzOGk1F7/ylrGyPywRr3H3yPYZ5hmhV1tWIyyVHSkmWa45NjTG0YDYd4DDpJSXQg4OlUUVcVy8UCZxvqpmE8GbYWsQllWSG9C5bRMRoiJdY5BB6pHFKFmEZtapAKrRKkVDTOIGVLuPchBmeMZbUKmb5CBi/51WqJdR7jLFqKVvXQ46XDS0GahFiFcOCdwBMINOubGyxnJ2TZCKUVWZ4FVQ8RJEfTJMchcc629+CRMqiueGuoa9u+pxI8wfrGeofxgcDRJuDjrcMl4JQPiqU6JdUSIRSpSMiEwylLow3GWuq6obINNmTdIDBIAUK1mfzCtviK6EABIThFyzq1jghsnsZBTtcoEdwJJLeQdR5jTx7vTmE4E9dFrn2fu7gODs/fWBNIsqlCKolp96fOuYitIAm2LgDehhM7XGtnIIjZ9CFL2aG8CEonwrdS56JNXmpVTLzAm3Zd7IMSrvfRXiAc80n5pPzwlhbYpQ/+i3Offx+wv7c1kTKhns9IUoEQjuk4YSQ9S+PIBtvoYp3B9FXK1R6r2R751otIlZBIifGWchHsC0YXPos2x6xKj5nvY8xH+HpBsfMKJpnifIOQFu011nu89AgX5h+pdZjHzpFWAn79/Yn3cYx3Md6Ih5/bd/f3yv3YzfNiH2e/19/3S6Q8g4L3jov/x5n94tn9YKvWEal1PfA+5GS2P0SagQ/r1kQrvE/bue3jpAEQAUfsPXM3Dz63zs7VCWd7je/9zbeVKjuSTPg0nONUdaA7ZwuTyOec+Pye/AfFIOLKvTu3c+SpQnmNM56jD+/zzXffZl7VaKKt4FkAAQAASURBVClY2gLmjok9htWMYTll/eIuSZqQSsHSfvxaXdvEWxVnVWHOxOLOPcNfaJ/uA/k31sHz4gHfL7b1vHjUx07vfVjQ9MyQur7aMTDiNQIW6bxHtMpmwnuUDO92IU4tYs4/RL+/nGlDKbp1QPzrn3fv/znxon6s4ux5Twlez/nWDzxnuH7o212fbjXlrffgDEokVE1FoneZ8phpXnP7kQcZSN45c9JhzqqCUk7RSYL1CucN2oekcNksyc0T0jxn7rdxXjKzBZNkRlIeoPIh2giaYop1CukFVPu45R5WGnANzhsUrQpq9y/ga2dIP/+Vy48E8UO20n7eO1IhyRLJNE25NK/4xuKEI8LGPgD/Dpmn6BRWVYmrlkyUYSdxLNY1SZYgE8Whd1Teo32QidICXhmlPFl50qpmqB3aBOZyUyj2tWDPwJW1CdPXbtBIgasrtITMaNy9x3ywWHLSVGjbsG6CV+q73/02P/bsMaujJd975x10PaMYjigPlrz/3jv8jXH+/2
Pvz5o0ue7zXvS3hhzeseaegW7MAEGQkERSIinJonbYjuMhwj52yN4XvnFs3yjCEecD+CscX/neN9KF5ThxTsgRcnhr2lu0BYmkKAIkMTUaPVd3zfXWO2Xmms7FyszKKnSDpCyFbRILUaiud8hcuXLlGv7P838e3vz8i1y6dIlkdYNvf/ObvPLCi4hg2N/Z5uM7H3PhwhYvvvgKd+7cgkKRD1ewForlAh0c6xubSJ2xc3DEwdERSidkaUKa90n6OaPxkOFwyOHuLsEbrHMkOiHvDSlNRbAWmaQgFWVVsTLokVy4gKkqjk5O2PnwPY6PDrHWQAApJHmmSXVC1YDHswUhzPAhcDKbYK3hCz//FfL0Obwb0BuuMLj0CofFkuOTQ4wzbF64irpygeVygepdxMsEbyoEHqkTtM4wDsrK4HxKZSoCCutiHRbLgkxroroDtR9TzFaj9uEMISCDZ31lwGS65MHjfcrtI/L+IJJFXE0YCTCfTjFlBSIhEYJkccQ6nl4qWN/s82A2r7mkpyQCfFT68EHWvmKBvWnFxzsTNhT86i99njRLiZkGAmETpNFIcowX3Hv0mPc+vsnh9n02jeHnB0PWkgxVVWAMygW2l0u+W8557soG/+i1F3guycn2p+j9CuZLXBUzKioXfYUHzjDwMEhhZWvA7rWM3bsHTKykkAKRZSTrY/J+wsYwpTw+Yvt4yfdmR9yeTfnKs2/y/XfexlZLvvS557jx3Iusr65w684dbn18m3K55OrVK9iiYDJ9zNHhDrPJMcF5sl7G7s4eR0fHiLJgbTzk1t37XP3iS6TK0+vl7N/6Ie9g+YVf/Ttcffk11q68gCxn7H3wlwz60dbhaOoISc5obY2V1XXsfMZffHQH21+LMpOiHlh9zawTkdgQvI0Bsc5kJBoWbusveDrh11vduOgTIkq5tRncXRCtGY3ixB1qSbdA/HdL+KiH/oCqFWwkgXojLDhlBMc/6oVNJI+I0JBQat9ZQiRa4BAiUikkcUGgpUcJasWRQCIjWSRTglRCoiGVUclEy6jIoSWkwZMJUFIiZUCkAnpxwYEPPBSeDx4WbK5mbBcGZyrmJxNMTa7yIqCF4mA5r1V1BON+j37wzL2PqkEK9tdSLqTNIjm2jFKqluCL7elClPFFCFKtEcJHex/fZLL8Tc8sn5X/0eVJm76GiDCZTCjLEq01o9HoEyBvQ644Dyo24GsDwD4JMG8ypIEzWdFNJnU3q7whMDTA1nl7li7BoMnwb0gp5y1nGiJEU5pjNoBbV1XiPImhqqoz2doN2Nt8pqsOcp5M8TTCQ3OsBqRrsjm7yiZNW5wnTzxtY9S0TaOo0oC/3Y1BA2JqrVtgsmuH0YCexphPED+6oHKXfNFt5+a6Po2gcL6NuwBmF7Bsfp8PRJwHT7vX/yQSQ7e/NqSV5r49iWDxNDLGk+5rt57n69W81gXOzx/jScd5Up26/35SEOX8sbvlbJbNkwH48+34tA150z+696x57Xwf6Z7n/PHOBzx+3HL+Grr9rNt25y10fpLjP+17zXvda26A9S74/1chnZwf97p97jyI/6Q27T6jzWcaRZtuGz2tDzekvKY019lVVOp+9sctP6otun3kaf33r9Ke589x/pnutkf3/a4CUdciplvP5u9G7ahpk4YsB9QWoadzEnCGrNi0a7dPdY/dAO5N3bTWZ4iLaZq29XmSKhREW6+qqlqLoM/Kz2rpzF/nI9rNJ0Kovd27c1fM5HPO4Uy0MbHGYqzFGYO1Bm8N1lZYZzAugvcCj/AepTRKCJQAhKwtHaIqpLcBRHx+0iQSQ4RUOC+wDpbVAlNZrGlkvcFVnuViQWEtS2Mx1qENaCQmBCrjcN4xSAR5ohgnimGq47GRWOdR3hKIagxpGq0BhZQ1sAogyXRG0oOTqsQKw3CYkKdDquUSLaLvtPOGykK6KFnLx6RDKBLFbDJlXiwIMtDrDdE6Wjhpb0lUgq3XxwiFTvqUSwNBkiYaLwLWR8JHojOME6gYbyVLNVIUWGPrhBuHx6NloJ9nZEmCFsQEh+CivXZwBDwWH01uvQd8VLckINMUKVOEVCid4kK0v/VK4h2cLCx7JyccHE3Y+/YDpI97WNZXGQ4vk6qExIP1UfnFeIfHxeC3U3ir0CKNxA/pIIlxAuFctDSp99OJgmAFiRQMM+jlihPj2SsDH+6eMMwzMBWJkiRasUSS9fokSmGlJASFs4B0Ue0ARaJ7KFWvl7TGVAatM7TuUZYFQsQQhkJgy5JelqIClFVJCI5ef0hVWhKtKYolaaowtiIVOcOVVZAaV83Is5w0Tzg4OmQ8XjkliNtAuSyZzWY4H8ftwXDIbDqN87kWBCWRNkHrBKk0AXFKvpR1Aoc1WO8JQoNMCB6MK0nrNYgxkZBgrMWUMxQO62o7WRPI+wPKqiJNNTJEUolzAZWlBBeofIEz9ZrDOSQyqoiIqMCq0xSdJPGZDQItJc4bQCJ0TIiRSoEQVKbCWoOzNSEhWHwNgigZlV81HqlFJDQR4y5SKYKMGbY2RPKUkhIt45ihhUJKBQhsJXCuilY/yBivEbGvSxFBWlln8TYgXQR9WqZFm3jU2qucSb+OkaEYLWpk9yFatDgQ0S66IYfI6HMEgAsO66PSrfc1mSt4grEo3ewv471x3kcinGhIchCbJOB9HDOlBOllHD+lQYVaycOHGGcVdXa2A2RM8ml2Bq26q/Wn9hbBAe5H4T+flc/K/9IldvfmmT63r41/nQFl2+LjGkjWSKKpZmyNcqZVBngYXiAcPSSEJW4hqbIh6cpVdDZicXCTdHSVeXmMrhb0+1vYS6+S4VHqeTi4zWw5I00T1OgyZraD1gVhsEqQOcE6pLAQNIGIYQmp2gsKQtKNbJxfwnWv+XSv0sD3n77H7u6BnrS3PP/d5k8hZecMzWuiDdqfJnqeJd93jxl/12D1+XGpwaE7RB4po0VgIhVeRSJtzVCoU0RPSSRSNslb8XBSxnG8PXd9fv8psYnAJ8H9zsXW8fLT2I7vxqTqAzzx2s4f6gnxkDP34MynT/EOrRVBWhYHe3xwZxsvE649u4ESCfcfTphNH7K+mTCbLgilpdfrk26tomS0TlMdTOZchWqF9FMCCB1Q/WlkhfMxiadd59mn85NxgKcd/7z67pPjShEtPE+wOU+QCKHGswIIodCJqsmUPt5bJAjX1rY9RsdSRAhRr3VOrdrOP0vtd2OFP9Em3ev8tDZ7Uszuk+fynF540/Gan+Z4sv37E8euO2tcH9dJ86JOxo6bCpROsM5wYzNhOR9ibYFVEr2QmHzMlb5HHc7J3ISKMV72SVB4GXCLBaPsmCRNOfEXCSjQlhAkszBkLT1gXs3w9EjcHO80UILqo/INsmRGf/EoqphmacQNO1fM07e4fy3lZ4L4UVnHvYMFwoMTCmkdPzcoeFAU7HrDeNRnVprY2EJgEUy9YPdgwvX1AX2gn2UcFgobFPOiAi0wQdadKiATQbGV8bJLWFjDIE3opx7lJHkvQWaa8XrOlVeuE7TEFCXBO6ZFYP3OY/xyyXZVgXBYC
YsA69axf3+b2+//gNUbn6MqSnppn6vPPMsHB3t8+50PON7ZZXjxKptrY15+4QWmh8fs3PmI1c1N3v3+29y5fZNHj3cY9Aesr2+ynOdU1mFsQT9LGa2sYQK1Goansh4qy8lsjnP7aALD0YjJQULwHqkkQkqmi4LJbFnbRnissyR4tjY3QA7YOz5mZ/sBy9lJDNy6SKqQCrIkJUlznI1eugGHdQFnHUKC94IH2w9x9k/Jk4T+YIBIEpJ8SH7hZcLkHsX8kEePdti4fIWyjDKfq5ducHB4EAH1EEApKusojcOHKG8cVE7iHIWTLIxlZhyFsbjg8S4C2M5aGl/KAGQa8jTh0c4Ok2XBbDpnPj3BeiiqiuBzrItBoizRVNbQ04JMCTZ6MOpptK437yrKkrbzQAhMD3YoU81iZRMzSvjo8YyjoyM+f3mV9c1VAgpFHxFyFAllueDu7iM+vPcx2/fusrpY8gtJyqUko2cCGZaR0qxd2eAAy5+8f5NNJfnf5YgX7xUoM0M0fp1BRWBfxw2ekZIkaHo4lHOsTB2bL2zRm8/Ze2SZrY5ZXH+W5y4L3lR3WUuP2D927NgVXtovme+9zwd/MWCxf4/1Yh8nE/L+kNsff8y3vvtd9ueGl7IRu/s/5HhRUlaGwtnaZgeSNGN6dMBf/Ol/46VXnufC5Uu8/dFdHu2fcH2zD0nO6sYmx7uPmR4f88yrbxJ0xnhwg+duvMW0tOzOSoJISNKU0WiVXj7ge+/eY6LG9HyzgY53N2ZLRBUWL6jFN07VOXy9qRbNBpp6QV2TQKSIUpzUC1YfXIzZhNpnuvFWJUSOiQBRByObAKX0MSDoCTHIVdcvPhvNdND1c4u/Gk+w0BCJOjSQ+BkXA4E0k2S9zBISSGj96UITImgmmuacHiniokJJaonbuByR0qNFQOPQSkYCiRIsFpKiWiJFVEFwtorXUVc9kmZCuxBVSIIzGFNSudjmRgp2Z5bVFU0mIjkkLroFqRIsrEfU/o8xUSUGgSrjqFwMCOY6BlL8Z8GBn+ryJOC8AfxPTk6YzWYkScLnPvc54FSdoVFQ6IKcDSlisVh8AnDtAqVlWZ5R1WisMA4ODtjd3WVtbY0LFy60dhxZljEYDFqCQnPsBvxqCBvnM6XP/27O1wXOu1n1XfuE82Boc64GFHXOtaSY85ngcGpj0yWSNJna3Xo1qifNdTRKKMvlkjzPW4LMfD4nSZJPKHQ0dWxK8/p0OiXPcwaDQVvnriVL1/JlZWWFxWJBVVUtUNltkwZc7H6/AZa7YHL3d1POk2e69+L897tEoeYz3c93+173vfNkiS7xpDlPU5o6dDP4QwhPJJp0FVy6dW8Uabrt3e0vXTJAAwx3n5EGCO4qVXTv39MA9U9TVzgfrDlPHGk+c37D3C1PDWw85XPnVR+697lp0+7zdv66zj+TP0k5/6w119VV4fE+yrA31k8/afm071hrWSwW9Pv99trKsjxDDmiIRj/J9TVt1jxrcEr0aM5bFEXMJu702caSq1vvZjxsVJOa+9/0/S6Zo0tU6ZINmvGpUaxoCGVdctZ/T2nqdb7v/VXu148qXdJVd0w4/7w0bdN9npp+1Yw/zdzRkPSaz0EkYOR5tJHo2q1IKSnLsp03m+M1P805J5NJe8/H43Fr3dKoXTX3qlGHavpfVVU8fvyY0WhEv99nbW2No6MjfvCDH7C7u8sv/uIvsrm5+ddy3z4r/2uXNmh/PpQcAq2tS01WsrZZt0TrAmPrH1NhTIk1UfEj7pc8wRu8ixawVvgITgeJdRYpFCpRgMZ7CMIhG9UrGTdHzlusCxgHi6VnNiupyppg4iO4vSgqTPBY6zEmMLEG6+O+UAnPMBUkIpCEgCkLqgQSJL08R6qEQAJCRsKD0HggT/OYMOBipqB1FhCkWQ8IBKlRssJUBTY4Mh2Dy7Ys2Xu8hzGBC5cukaU5RVKgdYIpS2xVUhUVUmnm8znD/qh9XuMznEZp6ADWBpIkx3tDVRYQHFIFlAx4LfC+BhVi1CymIoRAqgVJImqVASBosAFwuBCtIQRxwyeRVN4QqBO6kJEUo+K4HnHlEJUqg2BpHEVRgDPgTU3AEcxmS7goyXQE5BdVjla6yZ+MipZKUHjHIABKk2iNTATYAmmjUoWStfokgSACKlEoF+gnilQ4Si/YOSn44OEBXNkkz0tUWoLOyPq9CPS7QPAutotQ9XrSggCl474hdvWoSDGfTQkElFRIrXDOo1ROL09AKJSWjEYrSBmJQWmaUliD9QIhElZGq1FtxASUThkMR1FhwweqsiLRmsV8XpOZooqgs4Z+L8f7en5F1WtrC6KZUxU+BHTax1mDEAEHOC8pK0+aK6TQddLG6Rwm6kC9KQtm02NGwwHWOkzlUDpF65SiMAihCEEgZL3uRlAWSwKxXzTPfJIkmHKJtwGVJWid1okmDm+j1a9QCWmSE3zd1XAYU1JWZczEVClSaYSxWGdRUqJUtIOWCBKd1vuvGFtRSrWgSCBghUMIGVVcGl114ZFSoJWGJHIdhKzXWMTojvcBKWO8CF8Dcm1yAnVs6OmAVUPwCA2IE073GWcV83z7XrsX8bXiR+d3JGn4SASp1w7tuBvqTONadUSIhmASEyJDoE5IonmjrWcM4DSBoXqN6UMkkfBJ4Kld94cGJf0suPNZ+WkuZ4lcZ4D3c2Bw9ysxFB1j2JW1KFExHPc5flRGgDFfx8od8tEzJK7Ez09w7OF9SkBjDz4iycdkV9/EBYP0kPh9JsUIVyzRdoneehOhs5gguTzGz7bBC0JvhMpWQUqsg+B8jDeLJonvbF271/BkkoaAIGlU0p/YSufGwPOJPJ8ozV5NNCRZ6nOcqnl34yfxs7SvP+248b3G/KNT15p80NzDUM8BXlhEqJWOUPj2NGfve330zjnP2di2x39ym3TfOR+zOEtSELRYQz0mhxpniIrzp+1z/jwhhJqgGA9xZi/cqUO3+7avB4ELjpRIOE63LjF5cItHjy1XLqb0e4b+5TVSLVjTA8qyZDmdM764FtcaIiF054MaZwh8MoYUEG2C7Pk9+3mS0Seu71x76XpddP698219/t/nP/O0WEGjeHMG9zn/fXzt+nLah5ROUFLXSc219c4pikINdLdt1trF1f+1RCcp2naKL7QnfWL89nz840c9r09u/09rm7OkjxDOvna+Dm1/FiCIpGOlNDI4PBbpJc+sLZhNSx5Vq6yvJxwtIKgRvpwyFwrjJNoZBn6PUiqsXMMu54zyJcP+mP3FkOAtUJGWFSIY8J6pUmgxx1mD1Cv4dC3GYUQAbzGyh8wULthIwhC1XRSiVdb5myw/ExGUEATIBCkCpYFLqeTlVxO+9e6caZnRJ+XiOGFvXkAA6wNzG7j7eMLPPbsBQbIsDZNpiZJRjnNZeowLOKUReJJUc6Ql90YZzyQSbxy6r1D1ondj3Oe556+S9jK8NQg8k9Kze/cR10vP+5VhGVxcJSOZS5A+cHU+x85PWM6nHO89YjgYcv3ZZ3n/nb/k+7fv8P3vvc1rv7gCOqefZbz+0vNcubDO7v4+
w8GIzc0tPrr5HtPJBF8tcfUGJev1eOWVl/A6Z7lYUiyXFMWCoqwwxZLKlAyylF5/iLEWX1VkvQEWUVtfBJRWODyL+ZRMC3RvxOHcEKa77O5u46qYkSClpCclaZZgvWA6n+MWizhFigika61QOvq2EiKYu7u/y5+/9U20VmxdvYGpoprGNAzZWFHcuXefhwcnrK5vsX90wgvXr7O3cAQpozSk8RyWgpNSsXCKsljSG/WwFUyXBlt5XBAsK4sNpt21BGMRqa6HxUCqJWVZsLe/jw/R/9VWFUrWLEsBIjgGmUKlmkGmkbbgwZFgaUOUWzI1KFcz5luIXkCSpSglOZ7MOB4MuPX4iMXeLq+/+Ytk6TqJGKKcws0W3H90hx98fJMHjx6QLeZ8QWU8PxgxDoLVRLJ5cZO1a5sM1texexP+/M+/QzGp+N9Ha7ygByiZIlIdtUJ97fNVKyTEET3gZf3MCEUmFOujC+hXNcFsM//c57nR2+Xz5kNyVzDMBMPVAblMKQ/2me18H9wBy6THgVVUWB6+9d9498EO93Z38EjUgwf0lMIhmBdLhKwztoVgUZZc7ucc7O6xsr7CxUtXWMszPrr9gEtrr7EoDVvP3uDR7Y+5f/M9sldfZj55yOCV17n0wqt8/PgIP0pJeznWORbTOR88OIDNF1jcvwtpSi/vsTIYgIzZWTYEnJKIRIMQOCIzOdQrma4Im5ANBSQuj+KaUEQix5kJ+nRyq+mG7bTb9PkQ6myJ5jUfM1QaskKoySSiXZA034vvyebYDWXjlPERPxNXYPVCx7evhVpWlRBqJZFmqveni5/g64VTfF+GZjkYA2oB6mtXSCGjVKmQWAPZygVmQbO0c7z3JEpQ+UbqM6rthBBAKhKtEHV9QojLZucC+0vLi2u1ukETTRACrQXBBJyLjOlIsIlvS63JhKUysV1bPs5n5aeyNOAk0IJ8XUA/z/MzmedFUZwBBBtAtVksdkHlhizQnKMByZRSGGNaeXylFHme45xjsViwvb2NUoqrV68yGAxakK0B36raR74BWJMk+YRyRvPak+T0y7Jsz91c73kyRlcZobuYdi7KKDfX27UMaRQztNY455jNZvR6vTNkjQY47SoEVFXVAsZNG5+cnPDee+9x7do11tfXGY/HTyUDPAnkl1IyHA4xxnB0dBQ3WR1rkwb4be7N/v4+g8GAEAJHR0fM53M2NjYYjUZt/ZvrazYkzXV0LQi67a2UotfrtYHk5ud8UKELLjfvd21ZuiBnl3zRBfif1B5dW4Pu9TafachD58HTpo2acz5pg9+cu0taOU8AaV5XSnF8fNz2l4bA1JTzCiA/arN3niDTPVb3mM37T6rXk8qnbbqfRE45T2b4tI37p21i/6qleVYbdYVu8L3X650ZM35cEsHTrvnTNtddsllDAGhe7yrJ/Lh1aIgBzdgSQjhDAgGeSAAriqIlZDRjWpfI1m2zZowHWkJHl8zRtTJpxtNmTG3G3uY83Xr9dZYnATI/zueeVrqgTUO0aJ6RbvCpeaabMbkZ3+D0uWrGwi6JpquY1Mxxe3t73Lp1i9dee43xeNwSAbvnbNq/uRZjDPfv3+fw8BDvPV/96lfp9/tnCDpNnw4hMJ/P23Pmec76+vqZflAUBbu7u9y/f5/XXnuN9fX1n/xmfFZ+eoo4xQ7k+WfMe6JlRL0Wsg7vLM5VWOtwLvZVY0z8qYmipqpwpsQ5Q3CG4B3OOirr0BKU8nVmO2QJyETFrEtfKyqKqArpbWiJ6N4FTBXJD2UV4yplVWFMwNhAZT2emFnvPFQ+WmWCIFcAUYUgTRU6k/GYtXVwkib083r/qmPQ25gS60q0TgkB8jxFKMnSVFQu2mtKpckyTVnOWS4K5ouClfGYtbFmd/+QB/fvUZmKS5cu0u+PMVVgNjvh+OAApRN6/QGgmJ6c0O8NUVLjvMPaSG6OirkS70EpjU40xXKO9AIlJEFrrPUkWYpQGl/bi6qaGCLrNYzzHh+iYkpMfXAIEQFmF6LNsRWglKz93YnS8kSgwlkTE42cxlWBYALCeRRRYUSKSBYpjMH5AEqQpSn9PCdPM8wyJkxIIUFIbPDx+AqyJCHC+44gHcFbtFJRARMXs3iVoJcEBpmjlwmWi8AiKO6dLMmzE8b9DJlopE4xvRylQryGGFHAOwi6zk4UMa3DA8ZalE6QUlMZS54n9RwLITiSNCdUBqUSkjSJ1jcqzrXzxYI8y9nZ3aM/6JFmPaTQFMWcrN9DJynL5YJ+r49EsCzm0e7SJ6ysrDEYDBD06Odpg9O3a0wZNDLrE2pFFucC2gWcEyBi37UeUJIkzdA6Q/haLdc5yqqK7S1j3KFYzhgPM5SQWGdQSYZzkWjiPRgb+1yaJfUzXSEE6KRHWVbx+XWGYjFH1yo5i8WCnBydpQTvkEQLmuAClSuw3lEZi3dAqNV9cBCiKmNZlUilSHSG1AqtY+JMY2UkZdIMTwTivJtKVdufOIKv40m17YmSEpkkBFkbsAiiBQ3AufVaEKHNkm/BmjqeE88Y10K+k2Bzuob2nJJA4pun6n2dvWW9vvB1bLe7LztdmzbrEM6sNUIIRGxX1j/xfYlEiVpNpQbcToHFeNEyiPr5bS79yfuIbmqTCF0B/M/KZ+VnoURcqAloik+8d0qUCMIBilDbWI1yhwpDSDxJkCAUzhZRUas/xlmHm+yhE8mgP0AMB8wWU+y9d+hf/jxalRQnU6iO6W29hg2G8uBj8ouvoIPCDlZR/RWE8xg7wRzeQak+Mh8glUOIaHMmw1neB3ySgBDJEWdJA4gf3yYz+FOlZyFO90hnztPGzOP5o+n5qZp3Qxjoxhp+VHyjrjmtHdcn3qxVlwAlJGBQKLwswApEY4XVqB+1NfRd9KHz487WqSHWfEp52p70XGvTkDhOz3iefHKW0NAcNpwe4kfXQzSOfVFlQggFEgb9PvrD9whOYdDMj3aYz2YUVUEhJEMZibo61yRCY3EEcTqfCELnGekmy9ZK7Q0I0WAD52JjzTfPkw2f1P/sOVXUqBJztq9ErOi07zW/f5zY0tlz14SMcJ4sAUIoAh7jLLbyEARJGmDR3LlaWaVVvgggPMJHOzrq+d/hT5+duplC9zmpiQlN+7aP0ifa+/Q+/6hr637uLMmz2wM59++z/fFJ5zgl8QDBReDMOXSSAgt0kDyzcsKR6eMXU1B9jkyPsd5n4hbIfISXgiA9lVrFhgSqfez8Frmy5P0xR5MCHQxGBpApQed4MSKqygWs2ET6OyTBgvPopq5B4rymn/ZYFIaQgwoSLxwNGNj2oSe23n9/+ZkgfgCkKo2bAytY947djxYcFQKbrDIxmg1Vst73HC0tAoElMC1NBH6TjIOFZ1l6kiwBolfQyXxBb5xhrMMLyf1SYhaObEOxIiVSRmnI8SBnbesFeqMcUxhAcFLBrdvbfG5DUZ1I7i8K4vwYGT8KWEjHbVPwdecoFjMSvyAkGc9dvYpMc47nC/7rn/4ZtrfJ2oVLIBRba2MGK6sMByOuXLzI5uWrvP6FL/Lxxx9x5+aHTKfHFEWJs4b
H+yesrEl6gyEbGxfQSUJpKorFgv29xxTLBbOqQgE3nnuBwcYFpEoQCIqyxFQF08UCsbdHsZgymccNWK/XY1la7HJBmiT0s5Q0S6lc9P6Wta+bdfVmREYwV2tNliZYY3He44Tk4PCAt/7rH/O5L/w8Fy5eJZgSW87Zn87J8x4n0xnVYsajvcDJyZTKe0S1R1AnzI73OTSBvLdC0JoLa4o011QyY17OEc6hdUaW9VlducBJseTw8BibOE7zqAPBlDy89S47j3eRSkMQVFWBFnED4ly0yUgTSWmqKGMqAjoRYAKlsRgqfAgkSqNouHWxDFe30EIxWz7mwQHkqeZrzz3PizfeRDuFnM1xJxM++Pgm/+3m+0xnJ7yQ5Hx5sMozKmFrpc/GMxcYX9kiKQVifw4fHHDz7m2+v73HL+V9fvHCFfTqOsI7op4oYG38CS5aHUlJ6SQTZzj2jlnw2MWczVnFyq/8Igv3Hmtbil/PHyHnJaaEhweWVFTcO9ljd1YigNHhXcb5Jg/EJmF1lf/HP/gHmP/v/4eD6ZSjkxlFUaCyhGIeFWOU1MTBDsrFjOX0iIXzJPdS3KUNnn32Kt965wNu3t/hpUtr3Hz/PfJqzr33JshQcvXZZyjKCrtyjf6Fa7jZnOPDQ06mC+bLwJ2PHnF9/Cwr23d5dZAzCpqvvv4FUhEDOq6q8LoHqcZ4T2UNxnvKymG9pwyeSigqITBaYpTGiDgGGARFAAtYBE7GLA4nBJYoM3XqjRcDOY1Ha4CYWVHL1jYsTFG/2whZ1XBeXJDWwQEZBF5GexdfT5JBnDJ/o31M/F4z6QeaRV1n4RZn99MFCk0SRodDG2oVzhCZoyE0dadedHkcgADnAlYkHC2WmOApq4LC+XYBDwIlBbYO0joHaaZwdcKvqC1wpkXFwg3IZd1WIdRBwgSCjckzdZ8pKs/SCQZZDAIK6qAdZ2Ion5WfwtINTp3P0G+KtZadnZ1WqUMIwWg0akEmgJOTk05AzLdKGg05owHiG6WMxtKlsTZpjtNkpzdgI0SliqIoWjCrAUTPXwfwCTAyekyrFoTvEha6oP2TVCq6RIdmYd1YsDSEge65u1Yx3X+f3zw/aaHeAKtlWTKfz7l7927bLsPh8EyWdndT1QW8u/dtNpuxt7fHzs4Oi8UiBp5r0gCcEnMaG4gbN26QJAkPHjxge3ubV155hTRN6ff77X3sgqXNa+d/urY0Td3OZ9qf2dh1PtN9/0lEhfPWQeevuTnP+feb45wn8XTJG13iTxf4bu51U//zRJvuOZrPdD9bFAV37twhz3PG43EL4naP86TN13lWf/c6zweVPy2oc578cf445z9zfqN+/tqedq2fdv7z1/WTkEyeVprjNKoa3X4Gp3YaTyLGfFp50v18WnvleY4xpv1MQyDq9rmfxA4FTgGHhgzVnKt5Thurq8Vi0Y6FjSJEt85P6sfd/t2MBc0z1ZA+umNX075d9aYnPQP/PeVJ9/t8P3vSuX7cftIt3WftScc/f5+7hLam7YFPjBdwOjY1x18ul+zt7XHz5k3W19fbsbQ7r3TH1KbtrbXs7e1xcHDQ3qMuuagp3TG1qbcxpu03XXJLA9b/pH3xs/LTWESzjeiUJlPdR9DXR/KHM1FptLJVS1Dy1tXEjwpjK7w1eGdqskC0HxHUICaqBqYtTgbyLGntIEKIIKmz0RpGISOhwIFxHmMsRVHhi4LUOaSMMaWFM0wXFU4IkkyjZAoukIp67eccuZSMsozxKGU07rE6imtR66O3tHOWwlakeUKiosKCVBllWeJ9hdYJZVmSZ1kkZ7iA8B5bLhEkDAdDRAhYG9el4/GQCxc22H58zOH+PlrCeG2drQubeG84OjwkSVO0kqS9AcZZJifHjEaNSkQdmBawLObkeRafeynJ8yHOlDFgLVOEMDVRxWBqxVdFBDuCr+0nhCcICzKNCqzG4oyLcTkfg9QWV4sRJHgbbVac7cDDQmDxlNZhvMPg436cAMGD95Su5Pbjh+Qq4cbVa/T7fbIsY7IsYpA8EK1HkWRakitIpUAEGZWDZYpUklQGtIiATiIlwnsSHONM0k8kJziqAHMHe7Mljw9nICRSJfTyHjKPiUpaaaSQUXkWSahBFh8gSVIgkGYZRVkQgiRN+4BnOZ/GeS/RhMqjdQpC1fhRoDIFSZqg0hQbBMPhKjrpYZ3n5OSEi+MhZVXimjkBsM7S6/eiHaVzBCDPc5SWFGVUW5FCkeiU4G3MCA4OHyq8d4RgkQrKskJlOcHZOpAR9/1KKpx1BOspFguUTsjzLPaxJInKI1WBFLIlIgbvqWxAiaiOQohWIwJJmkTljTifwHI5R2lJr5cxOT6i1++jdFInqmi0zghC1WqhMR6K8yQyJ0nSqGpiS0pTYKplnD+TDBE8QqY1kSEGe6TUcVygJlCESN5Rdfa30pqIr0Sl0hgTcm1iWRNnqR+j+r7XAJsIdcxF0KjBChlq0KYZBB3tQUKLG57uZ1v1o/jT3bO0ZPQza/rO6yF09i20A+/5+fsM8aN+XUlZ29acrhWinYA4N36frvGo20E0Py2o3axNYoyojU19Vj4rP8WlC0af/bu77+w8TEEgUO3cEfyCVIHTPcrFEWlqKafbeCdZzI7IJh+g+hv0N65HizqpoVrQzxVVcYeTj3+f1dUVylKSbD1POd+LY59cUm2/g1zdQviAFxYZJEoEyDWhOsRNbhHKAh39z0HpDupx/hqesidqSRpn407nvx/j2tEWJUTZ7qjGL0S0Az9z/Fr1oIl9h4CsVZeCPxsbOl/HT71PLQHiCZ+vx7MQ4jwQasWsUK8zT5UWfPcrbZw71uP8QSNC0FzW+fZ7Uts+8bWnfKYhRjzxYtrv0NbhFKw/S2oI547dEFUSEedJmQSED1jhWX32Gjfm+7gHB1SLkiwEsjwn5Hm02MCQ94dsXL2ED55EKygiKffMxZypek3IqIk5zbooPLlRz9T9Sa8/7e/zRJqGbnL+s+e/8+Rn+ux+v3PGsxfaKonV7R8iAbSGVRH1Pampl82B43FFc9ya8NHU7zw7qy5Nwu4nR52zpJSn9bvue5/WBt3XTz/fOVdz6Z1+f77duiVCbppAgcdjHTwzWFIaxXwpkCQ4GfBBcqS2WAl7TCuHy0pSAoQpKiRUdklvsAG6ogwZUpYYdQGTZEgfH25V9ysjQHpPoldJzQ5zMcaJJJKDvKTAsd7T+PkcwhaIOB4gYvKzr8eCZgz46y4/G8QPIcjTPqVX9MOcy9Lx+Kig0H18SBEEZqViUztsqjipPMFDYSxVkEydpqriZiWpWdreeSYnx6xmayy1AKk5XEp6pWO3sLwiAkEI5lbwzLUrqOCpbIUXsDSex9u75GbJjcE67z6YMIt8K3TNmNZS0ksSTLCUiykXRmNefPll9iYFfvKY8fomi8d32X70iB+8+x7jx7uMhkMubK4h0j54S97rs7G1xaULW7z8yqucfO2Yw71ddh49ZOfxNieTCUWxZHZ8xJHZIxA3UnHB78mzPmmaQwg8OjhGHk3iBImo/eMkBE8/1SRiEHnoIb
C5ucnkZMLjk0N8cFTWwGJBM84IAlJCmmqsc7WcYMAYh9aKLNNU1tVBFpjPZ3zww7/kZPcBl9YHbGYZFQ6VJlzfepYsy1DCo7VktiyYHG1jXQTQR6lkkAqMs/R7CcZVUeHAgzGeoDXPPPMsz1y7zuOdHW4uDpiYJT7vx8EzgF3MefTgPtOZw4UYnKlMDGB4wHqPcg4lm4BGtLVJZMwcCB4WRcwuEUI0XiHt4mJ+ckQ/yxCV487jI+YPdvjq134ZfThBLheE/T0mjx7wwZ3bhLLkl/I+v7K6wY2rF9l47lnyNENM5oiPZxBUlFlblry9t0OQgq9duERvYyueO0Q2METp0OBsHUgyzIJnH8+esxx6zxSYmooPtx+R/IWm94Wf53+7uMONmaCY9Nmf9Xl/95DZdMlhKbECvPFoLH0x5ZVf+vtM0h5e9/nKL3yJD+49ZjKbMZ2esJwJjI/ZN2VRxLZAUJmSe/fukdVkIZ1mjMZbXN/a5f2bHyPtMyhbsT7qc+H68ziRsHt4zIM7/ycP793FGMfk+ISTkxlGriBWL2GLO2ANxeFj7GLAVr/Ho/d+wKuXLyFrb1VRlciyAF3L9oYYoJDBImobH6E0JH1EP4dU4U8muNJiTIV1Dqs1VilMcJgAlQ1UQlAIT+kDRYBSSAohKYWgQFAGgSFQSUGFwAqBCQETBAawgvhcCUHtQwMSZJCdxWKc+CPpo7vsbEggzZ/x/VMeaKNmEj/niX6vIXQWgy0xpAlIyJY4IiQIX3+vPqL3lvl0hmrsFcoqZoi0tamZo4IovRzAW9EoitafgLK07C4tqwPqoGGsp1SaLFGUUZuVsrIcLgylUMytY6gFmTy99k9EGD4rP1XlfBCru9BsgEFjDHfv3j0TzNrY2GA4HMb5PAQODg5YLpctoN6ogaRpyvr6OlJKlssli8Wi/RHiVIliZWXlE6QJpRRlWVIUBUVRkKYp4/EYrTVVVbV1a+rbnG8+n7fHbkDUhnwynU7PKFU8CQzrZnkDrQJFV0WiSw5pFsppmrYBun6/37ZN08YNuNdVB2iuuauQslgsuH//Pqurq6ysrETVn86xnnT/uuAuwPHxMbdu3eKDDz5gf3+flZWVFpC01jIcDltAubET6Pf73Llzh/fff5/xeMzFixcZjUZnwMcu0N1th+a6u23XVTdprvVJtivNv59kY9MA0GcDqJ8k0pxvk+7v7n1tXm/UGLo2Pd3noPn8eVC4C3x3279radN8zlrLdDrlhz/8IRsbG1y5coWNjY0zz1kXUO5u6s6TWn4UEP7jlG57NefokmW65/w0wkj33j+pTbp2J2c3n58Ouj/pXj6tnCcydFV0jDEsFgvKsmQ8Hv/Yx/xJihCCfr/Pzs5OS3RbXV090y/OW4b8uNcFp5YvzfPUtWvRWrO3t9cqyPT7/fb5bNqwq3TSjA3d/t0Auc241jx7DbEtz/P2u8aYM89md8z5SdrrSX+ff2aa308KtD0p6PGTlPPHOv8cdf/dEGwaNY9G+aq5/mYs6hI/4JQUUhRFS/x47rnn2NzcbAk6XduY5tlvFF6MMRwcHHB4eEiWZS3xpqlfkiSfGKOa+3Z8fEye52RZ1pIGz49Ln5XPCnwykNyAlcG5qELhHN5anLUYa7BVrfBho+WKswWmKiPxw9sIJBNq9VGNRuDqjY8SILUgyxKyPANirMI4S2UcpTFQZ657b/HOx9eLCmei2sjSWmZlxbyyGDzCg7CBfhaJwybUazEfGOU5Gys91lZ6rK8NGfVSymWBJSBVCkJiTYl1nrRnEURZ50SqaPfSJI1KiUxS0rwPFoJReOtIlKxV6hTWGRaLgjzLWV8fM5tPCaGkKI7RCWxubbBclkwnC9J0yVBq8qzH0dERzlWMx6tkWVRbEyoGKxfLKXmmICRIoUnTHlVZ4n202YoAfolUNckvgJABZw2BnDSVKOnQeHxQpCrFOIvzFu8D3kuM9yQokDURxHqQPipCuFp5xTq8iaQe5+tYTU00EcSY0O7RAf0k48rlS+24IwK4UKtbOocSCZlWNenDIYSOG18saaLQ0jeiC/S0BOMoCGgdyBLQWrIoACGZlJ6HkzlCQi+Pc67SmkC9NpUJKIVQCbioPCIESKlirEJKqiqCHwEZCQvQ7k+UyqO6Bo75fM5gMABgOFyJ+wopGI5GKJUwX85Ie1E1cGdnhyzLI/lCK4SJlirWBB5v71BVBdeuXgZiclOWRNJLjBXE+IMUAmdr6fAQST22KkmzBOktxjmcLUnTDOc9OpFUxtFgDtZalM5YWd3EVAXeE9ckIioBL5dzkkqiE0GvlwGC4AVpkqFUQlValNRYATrRDAc9yuWMLM/RWQ8fFEqkaNXDG08VltG+0ju8j3EUpYhJbaZiNp8xmy8pygXOGTKd0ev1yHqaQW9AL+uTpxlZmqF7GUprHNTqpRInFT6kJCGt437xHFpFNeFIEomWQp4QVVyljEF/0QCEgSB8g8IBHhFUHUONFgpRBrbOKG9/mnGxlusPp3vfU1uXU3WPdm1FVNVpVGZck+XbiSqdn4/bdRsSiajtgEVnjSAQMirDClmTRlFPXDt9Amh8wtzvUXwW1/ms/KyX+Mw0wG4EtSPCXeMLUmGKGctpCT2PsnNkb5Py8AOUqej1PXr1BYToY8IS6ROcn6GlRvZXcH6L7PiQeRnIshWk7KH6AxyKfHiBcv82oXKkq1ejgrQSCJ/gJQhn0HjCwX/GywytVItRf/IafvRr3X32k/Ztcd9ZEzdOm6U+3pP0gep5SoiotB0aEogAIeJ7nfr8JHuQJxNGmkh9tIf3tsJXBkTAhZogKE5VQYQ4VdWOQH5HvUEImgVHJFv49hxn6yGf8NoTElea9869Jp74na5Cy9lv/jhkifbfUmKtA+UQPiCTSBQWwXPxxZdZvTzFuhK7nFKYChs8IniCtUxOZhRVxWgwgiLUyvsNofCUGHGeGNUmvJ6KqvzY5WnkjKeVJ5EZntY2n0aUeFqb1kwWYn+oSR2+juURnwclAibIDukjnPn6KYekUbrh9DOi/sa56xCd19tP113w056R8/GSTyMlPY0cQkv3lDQKPU8715n4n5TYYNFSUBi4Mg4oFXg4zVnxc0rvUPY4ku8Li9WKoSrJ3ILCKRZLUNLSy1dBB0r5TOyPWcXITyjKI6wY4OSwxuEcKiiiWl9Gkm/RL/eZs4kWPUxwOKvQeY/Ez6Ldk7AIEoIwiNDE7sXpvfprLj8TxI+4II3KDJcwpGbOXgCn15AiIfiKnrf0gkDoSNaIvlOek0XFSuXwMi5UdaLrLHdBZR0h1cykxswNe8Ygp5a13HBSeobDlIOqwjqPkmAdTJaGD+8+Zn485TIKc3/JNgqvExIto18skAgY9FKSPOdksWQ+neKtZbL/GLk85nMvv8gPDh5STI+YzWdMlwXjfi9akGQx6Bklkz0y06xvrHPpwhaDL34RrRUnJxNmJxNOJhMOD/aZTad455jNFzFQJ6AqCpyz9cZU4IMD5wjhNEPMGMNiWUQwbLlkMp3ifeDlV19n59E9wGJdzNZQNWnEi
7gx18GT6ASPx1nXjiJSanpZnNhSnZH0+iyqwEd377O7o7i2MWSQa/AeNztidXWFJM+ZTUuOp3Gz1gxIup+Taw8eFApXj/p5ophV0B/0+NWvfpny7nd49UrBa/0h//Hb+4SwVk+8EuErZoXHyh5CWCBgqrIOPPsYYACErr1QncVbizFRBrLfS6mKuIlXUsdFUPNf8CSu4pm8xwvPv8HD5ZTNyYKVkyN4fB9pLG5ywsAYvt4fc2Gjz2s3nmXt0gWSoOBgAWURmWT5CFTMFFnu7XF7MedyornSH4JUtN6YwYF1hKogVEuWZcnEWo6C5yR4ZnjmwbEMUAqFcZ713ipf/KUvc5U/Jzl+mVLuM+8/S/r+HyIFDBOQ0jP1gcoEclmQzWZcfuNV7t6+x8+/+RWuv/UtJicnTOZLnIgSa71ejpnNolcbAu8CFZY8Szjae8xkWXL56jOsXrjMs0XBzY/vYRYFz11e42D+fZKP3wMZF5WmLJEIqtKDN4ihQvZHrL/4KkIpQlXhdYawhmUxJ8Wj6p2yKB0U0zb7wQVq4odEeIsieiYKpSDLCSolVAtEZchCiDJOQuBVZFv7Zh2uJHiPcI2NU7SRQQqCTmIwy9hI8kgSTJJQBMu8tDx8vM3DeckDC/sCJlLh04x02Of5l5+jn0qMjSQRGwCp8AGMD2cE4nwgirAGUf87buwbVbq4JGgWSqKtZ0vwaCbpAIRQM0VP2bzNWyKE6OkoFamSLJ3D1eoFgfaw9bGjAopWsmY4RuWQVnXEOfamFdcHPVJOM1KkEGglmJcOHwKTwjI3AlJJXwtS6Wv+dH0e8TMxxf3Mly6po/m7lfQ20eO91+tRVVVryTIej1s/6MlkwnK5bDOLG9LIaDRiOBxSlpGQtr29zXQ6bReXUkpWVlZ4+eWXmc1m7feXyyVVVXF0dMT+/j5CCAaDQQuqHR4eMp/Pz9RbSsl4PGZjY4M0TamqioODA/I8bzOh9/f36fV6rKysMB6P2dnZaUkkDVi6sbHB6uoqo9GIxWLR2p9cunSpPY4QgpOTE5bLJVmWsbKyQq/X4+DgACklm5ublGXJdDqtM0n9GXKLMYaqquj3+1HhqybFFEXRWjtorcmyrFVPOQ/SP+1HCMG1a9dawkhVVQgh2NnZ4fbt24QQuHDhAsPhkBACeZ6zubnZAppKKYbDIaPRqAWJsyxDa810Om1BxQa0bEDpBjDO85wQQqvS0iVXFEXR1jFN0yhx3lFhgVP7CTi142kA0yeBnl1bj+6GpVF4aY6XZdkZILTbzt127ZYzgd5wSvDpvt9cW1OX5vhVVXF8fMxbb73F888/D8ALL7xwxo6ou1l7Etni/Hma635aEKchTjWgcheQburXPV73+p4GwncJBd1NZle9pWvjc57c0/37adfUvf8/TunabzQ/UkqyLOPhw4ct4en111/n+eefP3PPfpLyaUG0+XzOH/zBH2CMYWtri1/7tV+jIXZ579vxsksw+lGlUfe5ffs2k8mEtbU1rl69SpIkreJSURT8/u//PisrK1y/fp3V1VXyPD9zH58WEDyvQgS0JI/j42Pu37/PfD7n1VdfZWtri8FgwMHBwRlyU/OsAWf6wE/apk8MaHHa959GyvirlIYw08xNzZjYJb00962Z05oxZz6fs1gs6Pf7LbGiW88uwaJLxAghMJvNAMiyjH6/j9aag4ODtl9AtPfp9/utDVRRFMznc8qy5MGDB3z88cft51955ZVWQQRgPp/zwx/+kI8++oh3332XEAKXL1/m+eef5xvf+Abee6qqoigKsiz7jATyWQGoMzNrH3YfsHi8s3hnsT6u97yxWGfxzmBtRWUKqqKIdnuuiEQDZ3CuAm/AR/VLGSMSIHwtyx1iEK9eSwC1dUz0atdaY0204XNVGZ8766mMYW6WLCvDfGmonAMk415KIhXWG0pbUAI9rdkYDRikmiyVZKlCCer9nUL3BmgR41LWxXoEFSgXAqVTsiRDJ5F8JZOE2XwGlSHPcgQBswgkA1jMjwgikPcEkFIuBcZFEoszjizpI4QmTzOK+RTrYHNzDR88lV1QlvEcGxvrLIqCZTEnyVKcDyivyLKcxXIGXqJSRfCB0hhUIhHGY6qKXp5ycWuD7b0J5mSJTDVpmqASgZSBVGvyLMN4TwgOqVO01Pha0cOHOE65hmiDinLhQVBZj/FRSdJVlunxhOODQ2xlkEGgUDU4FGNjwVdYISmWc/pJDCi7YPHeIdWpFZbSOu6vqYnViUQFTZoGlIr7036akWuPEwWmcqRK0lOCVAQ0kXBSGM/+rGStl+Aqw3KxQCuFznOECOhEgay942WzGw44H/uODynGwnCQ4L2JChvB46o4n2WZpnIGawzOlyg9wFRxz3FwcMi4P0CEwGK+ZFEWjMereGtw1uJ1TbaUgnJZopWmNDYSLrSkPxhwcjJlOV+gRyNsCFhpSbVEa2KCjKqJPPh4TG8xRUGqI1HEOVv3t4BOsrPzvQChJEk2ZDKZo5MI5nnv0UrgXcnhyZTxaMzKaISSCiVkJA3V7eVdwDtIkx7Om2gtlOTRlkloQFMUkYxf2TlVZbFWUBjLvDQsy5LSxmS12aJktjAsiyoK40pBIjXDLGXYTxkOMoa9hH4vZTzoM+gPybKcJM1Isx5JlkUFIlmCEAQp0VLHemuNDLXiGw4RXN2fwYnTWE0QAaFVtAOqM9gbpdN2zdt42VPbptQAUFRwiUmK3nlcqH/cWbUP53yr9tK1tXSN/VuIyUBxjRHH3fPraVWrfUTAKYYbURKkJEhi4o6Idj6ShuTb/D63fmrjN91yfm39N4CGfFY+K/+TlEiG++Q+4+xnugyH+pmprcGaCKtdzkmlxCcrUZkrzOgNrmCrBJGvkGKxg1Uyn2CkJaG2RT74mKEeIp7/HFJI7OI+1XSHdO16tM8SjuzySxT336EcX0Qn/TiOCY9C45XCW4v0tRVHAM4REZ62N33itZ777Pk9Vty7d8ckgWwIbDV5sg5fI0Qk+DZAcnvs+nOn5+BMjLtbj/PA9acRHloCnogJz1pLlPAI6UFoRG2H1lVMCmfaSZ4SGj5xmrN94JQE0lE1aX+eXN/Q+WTT70QITQN8gvTRYQycO39jFfLJffEn2sc5hJYIG1jhIXpZIMOLCCkjp1YpRJCoJEcDkoA1JV7KaKFtHcH5qJBG+IQaRX3ytlWa5NWWQNnyIJ7eLj/uXvPMd86d+2x1zrZFl6Dwaf2nvfXhdN5vziYgJikDUkXVVpdGVwaJQ4g0ztmtXVJkX4UW34kkpxCIecWyVpBp26s9+Zk6n7/zQv5kbJpPi4+cJ27EfwPIzinOEVme0nZNUoAOGgEkXoHS7BzO6SuLDMd4sYKXYwpA9yXeOVKhUeIAlxxjqmNG61s4IXGsE1RJ8ALjE47kOmkCmZuAfchCjPFyEJU/pKjJ9QUkI/rlEcsgUWmKrwRBKrLUEkKJQ6KEwNIdI//mYh0/E6iYlNFfceAKrrHkyBuWaR90hiKQSsnVVHOwDJjSMsoSJkuLwzNdLJjOFwid4FXsPBfGPXrpCjLL
OTYw8QI7O2BztU8lt5j2U+4Ugt4EDgqBsJZZEBjjQCqevXqJ7PqzDBHMneFzaYpIU5TUpHmKCAJNzPQVWqFkAO9ZLBZkWpCvbvDFlRU+/O63uHfnFtff/DomWHYWc6bzBf28x2g8QqZ5zGrp98jzDCEVidZkvRUuXr7CjevXkbUkRVVFX1TnLIuioqoM89mC+WLJYjGPgQbvMdbinGG5LCjLktm8YDI5oVjMODyZsLTbnEyPWV1dZX3zEsXJHsF5lBL1BsPVA4TAETN00iRBpxLnQvRtBQgB5+C4mCLnM9IkoZeluF7O8TKCv1rAvKwo93bROkotVpXBG48XEmc9O7MFh4cn2OAZ9vqsrgwJXpGquDGX3kNwrA4z+qlg4SwSi/WR4NFLFD2hMChENoDFSWTrO48QEusspmpIHSqqPDjB0nmscaz0c65eXSM5XLB9vEQIUDKyPhMpudzv87k3f4FrPc3t3X32Hz/mC96S3PkI5QUsKwaV4Xp/wObzLzLY2EQ44OFxzA7ROm6wdAJZDxINi4Ljwz2OTMHXBusM8l4tMRSitUu5BFOBKaiM4aQsmXrHzFnm3rEIniLAIngWPmAWC15++TUuXLnK4OLfxx1+idmd/zfu7l9yeQUujiWLhePIBPI0sHcYCRX773yb57/yDbYXd5lVhq9/+ee5/+gxy6pCKc1iUdAbDDCmYr6sIjnDeyrvWCxLprMF5mDCwcEBo9GI5/tjPp8kPDg44HsnM9Y2xlzcWsGXBd7V2eaFQY/XuboxJsz2qJIhMk/Z37lHLiV7sxNUseDGpWFNdvAgZC25JhAqyliolrQRos9JqHUtgoNyjmCJos7EEAEZPMF4bOkxPkRfYO/iJhxq4oKr1xqN0obENzJ4oiaEhADO4sol8nAfOV8iSov2kOOojGOW97nx5Rd4dlXhrMV5hwugdMzQtcZhrMciEDrFVAbjPNYLjPcYT1Ta8ALrJZX3lM7jgsA5gQtgCbhAXCAICc63qhyBZoEcWq0PaOIPkksXN1gcRY93JaExsAktW7hegKvo05wosHhkHdDTOMoAx4uCicnZSiJpJK45RS3rbjEuWux4EZVKllWgl8UMvWbB8jchk/VZ+Z+vNABWF8jr9XptoP61115jZWWFk5MTtre3uXPnDsvlktFoxHg85tq1ay2Y7b3nzp07bG9vM5lMODk5IYTAdDpluVyyvr7Os88+eyYzv6vykaYpzjmWyyXb29t8/PHHvPzyy/T7fUII7OzssL29zXw+p9frtRnxxhj29/fbusxmMz766KMW5JNScnR0xDPPPMMzzzzD6uoqN2/eZDabtaoiSZLw3HPPce3aNYQQ7O3t8fHHH3N4eIhSiq2tLUIdcLx58ybHx8esrKxw48YNRqMRH330EbPZjF6vx2w2YzqdYq2l1+vxyiuvsLq6Sq/XaxfmBwcHzGYzHj58yHQ6bdugqqqWYOF9lJTO87xV/oBTAPo8cN8EINfW1uj3+2xtbTGZTHj33XfZ399Ha80bb7zB5cuXWxAzz3MODw+RUrK6uorWmtlsxqNHj6iqitFoxGg0apVMGvLLYrFgMplQlmV7nb1ej8FgwHg8bskHVVUxmUw4PDxsySVra2usr6+3GxVrLcvlsgVaAcbjccwSzLIz5JZuv31SZkB3Q9iAsF0rC+AM0Ntk458nH5wP5goh2vt3nmHfHLOr2jAajVpCU6OE01VQ6CoznL+m89kLzXm6ShvnS1d1pSH9NN/vkg/OZCnWP10LpW4dmp8u8H9e5aRRUOl+rqtU0K3v+U3rT6oeAbTklua+NESXNE159OgR3/ve97hz5w4AV69e/bGJH0/Klnha8CWEwPvvv09RFDz77LN84xvfOHNNXbLGj1uMMUwmE/7wD/+Q7e1tXn31VX71V3+VtbW19j4ZY7h582ZL3PrCF77A6upqS9Br7nWjctTUtXlda93erxACWZaxWCw4ODjg/fff5/vf/z7ee77whS9w+fLldszpqiT9VYgYTyMWPem1pwa8Otfyk5ZGUaMhQzTPYGMrVlVVq+7RjAuNAsdkMmkVmBpll6YeXVWOph92fzfz4snJCXfv3uXtt99me3ub4+Njlssl4/GYX/zFX+QLX/gCzz77LCsrK/zgBz/g1q1bSCm5d+8eh4eHVFXFP/gH/4C/9bf+FhcuXEAI0d6vo6MjLl26xGQy4datWxwcHPDSSy+xvr7OYDAgTdP2nn9WflZLJ5hHiDaZwRNcwIXYh72tMK62EbGR+OGswdgKayuMLTGmwJgCZyvwNpILQp3jHqLKg6mtX0yIti7aR+TAOodWUX0h0QqVpqResZgtqbzBBYsIMu79gsNYcAaSIOkrSa41eapwQTA1gsK4VonCWo/KPIM8ZdDvRXVV77C2JEkyFAohIy1FBAdBR/IClkRpEBobPNI6BsMRxpQQPFma44oCryz94RBvFi3wIUSJwFMVS4RQGGvQFpzJSaSmcgUn0yNW1zdIlcY5S1kWeOfp9/Jo7bc4Ic976CRFK81a3iP4eC+EcAgR8D6gtCZJoj2OUhItZVsPh0AqhRCRVEOQCB2l0H0IOAkiy5CuBkwqsM6ASiNo0vQOKQnOsTQV23v7vPvRXe7tPcZiscFGDg8y5g1KhQAqUzGZnbA+6sfAem032qykAlC5QOk8qlY2UgGEr0BAkAolBGmqCcLiVYKnIhD3t1niUYXHo7AC5s4wrwoW5ZKyXDIcRjtCKVVU0RCSyppIDhUS6x1KgBIxxqWUpiwr8kxSVXHtUJQlWd7De6jKJcvlgn4/B++wpmJvd5fxaMTKeMRiucRYj0oUQkicqaLFcp3URYD5bIb3nv5wSJYmZInCWcPx0RHOOS5sbOBdtLxUgxzlQNTKYYU3CKno9XOclxTLkl5PIer+471DqCRaJCdJ3C+FGAORKiHJekidkiS1yld9LwRgKoPWCUrpSM7RUWXY24DUGu9i9q9OEoplhU4SEAKhNMZ5FsWMqjQURcnebBLJHaVjaRyVd3gXx4LgHc44vAXvalKI8YQg47MvJZmS9FJFP1OsDXPWx0PWVgasroxYWVklc0PStI/OMrROIASsd3gCuo59gMAHiTONDUv0ow8hJsN40STDBGjUMkRMQmqIIC0EE0StiFPvq0JMxGkV5TrqcvF36PzE2Gy7J2gSElxnndusj4WM2JmoY0PEfzdxTVETPc7akUbbF4msbRViiOYU4I5AjmieuNAF52jBqJidHMfpz7gfn5Wf5nK6RWgz5urXT0kAofN8NDi2AxJZqzFVBb28T1nFWVZlF/BiDS8dIhngepsEIXDSoYTGLCfYw1v0Nq8j842ohhY8cniNXE9Z7r2HWn+ZLOmDCiQXX6Xc/RB96TVQKU6GuDaJ3lbgQKvk1JGis+95Esh+Ph5yPiGo+7lu3EhK2cbUWzd1IskDcTpUCBkJMc0rbRJOPSCFlvxRqzB1b0hn33YenD77sbOWnA3gLoQikiMCSSIhVTgEwtRtIWrKW2ggfdo6PC2lRLRJmPXNbztCF5Zv6nXWnuu0/qEF+rvEhS6
03iXH1DNE5xyfDsJ3vx86HVZ5jxUVhRWsb27WcwMYHFp4glA4qeK6x1ZoIQlSRYJSmtBWo0sGCM39jtaDUfw7nL59WqMOp+Hp+/EfZ6/eXlvn84I4XzYN+8RjPIF8c/aYNSmDENfD4pT4EWqVjtj0vr10Amil0VISQlTQCy25qbaMa88ZECKqfkXSkcA1aM256napmCF20NiHfKi//+O1248idj3te0I0yWKnrfU0ssgnxpUQn7sgFEELHh0syDdfxKgUZebYdIAXEu0swsekbINj4fsoJAkWvMak4zgWung8KTz4+Fmj1tFqhdweg92jFCmBIUHkmBCwDny2Sb7cpVKXojEDBhUC1i1Ar8S9UosNRrvH+oY/sc3+e8rPBPFDCOgrzao/Zigs7wuNVX0SDQkwMkuuDxVqdYO7BzPWegmzco5zsCxi9v1QRAsG4R2fe/YCn3/1eaxMmJ9MGQ9HICTPXr7IbD5FEcj6PY6Whmp+TKY00gW2+j2sV4S+I9EplQ0E6xkpgcfgvEEZhw+CeVUxn9YZD97y6pd+DYekKkvm1nNlfcD155/j7gfvsvvxe6zeeI2iKFjMZyipWFuukuSD2FG9Ie1lbaZIaRzDQZ8w7Ec5YC3QSQZ1bDkZQFk5sqEhnS/IF1Ga0to64CgEpakoy5Ljkxlyd5diMccIxWKxxDrDdDrlwqUrPFhOcaHEB0eiBELU2QzN5CTiQKJVJLhEKayACwIlIUuiV6nSEqETknwIeZ+ZVPQyRZ4qktGQ0XAF5yErSkYIjBeRQDCbcny0T1UtmVjHBBj2JM+tJhTlEpxhcfSYkQZUgk5SAgpUipQlz1zYRM49u5Mlapjj/FHNeod8MCKMEo7yIQeLkuXcYI3ClAuwKfTXWBsJLm2tkgyHvPdwglaaRCqurK3yK2/8Al+9dpG8LCgf3ufe3fvM79/huX5Otqzoe9gUks21TXprm5AkMCkRWkHahySLrykJSkWyp/dQFhzPpygReHk8RqY9cDaSRIIHawlViassc2NY2IqFiyofc2cpRGAZYB4CSxFAK/rjEUdv/QduPPMAJ1+mOHxMP5yw+lzK6o3A/e87Dj8S9NNAJgN4x+7jB6jvv8OFG1f58L33+YUv/RL/93/7U45nc0rj6eUZRWUZDvqUVYV3viXVVLJWi5ASs5iyM51wlOZcGfTZ6PXQicKXFQePj0hVDM5Ni5JJYSj2jtndH/LyxTWOP/ou+/OKoioY9zKeKSx91UxazQARQMY+2dilxNlaxowLpSJrmKje0VquBPDR7QiQeBkXG6HO7gkqgIsevUE2CyRVk0iaLGxisApZK40YXLUkWS7phUBfQOotCMlAKhLrMFKQpwq8QQmB1JIkgEpidgdKIHSv3UR7p6L0rQ8IodtFJvXiIrhAZTw+RBDA+xiIWy5LHJo072GsxzhP5RylcSzLivfv7LJdClSiUCoh6fUZ9oeIxZz5cokPnmVR4jvef0JAqnWtgBIX71WIbYQgPvsi3s+ytBwsLesqZvYEH1VTlFL0tMI4R6YlsrSYOkBUpZqMKPkbb5D9m5tYPiv/Q0sD4AItoNcFd7uA72AwIMuiVO9oNGozk8uybD/TKGdYa1uLFikl1lrm8zlVVaG15sqVK6yvr7fqHdPp9Awo75zj+Pg4zsd1pnVzfoCdnR1CCIzHY0ajEVmWcXR0xGQyoaqqVp0ihMDx8XGbfT8cDtna2mJlZQUhogLGwcEBg8GA1dVVrLUsFgv29/cJIVquOeeYTqdt1nsXENzb2+Pw8PAMOH54eMjOzk4LkjZAY0NSUUq15ImjoyO2t7db1ZGqqtr2t9a24GGzSYdTYLFrF9JVZABaokOapq26iRDRmqJR5WhIF0mStEoZQGuv8/DhQ8qy5ODggLIsSZKE8XjMm2++Gb3Ka0B5d3eXe/fusVwuCSG0YPPW1havv/46vV6P4+NjDg4OuH//PoeHh23fyvOcL3/5y619z9HREffu3ePo6IjpdArA2toaW1tbPPPMM239u4SXrs2HlJLd3d2WpJMkCfP5nOl0ipSSK1eutLYY29vbbTZ+nudcv36dwWBAksQN8WKxYHd3l5OTkzNKI1prnn/+eYbDYatGcv/+/daqQSnFfD4HYpZ/Q9ZpnrdGSeDo6IiTk5NWSabpG5PJpFWJaSx5mv4/Ho9ZLBZMp9MzlhwNecp73yqKzGYzFosFVVW193UwGLSqM91Mn0aRwhjTPmOz2ewMaSFJEgaDQfscNs9Ao5rSKE40BCwpZWsF1YwNTb2TJGFtba29l2VZtsSb5j52ZbXzPG/7V9POi8WifSaGw2H7+eVyyWAw4OLFi237NfVq2qEZG5rr7pJZnHPR+rBjbdQA5lpr8jw/oxTTXFOXwNOQbsqypCzLtr83z0ZzzY01FETFlGYMdM6xv7/P3bt3WVlZYTqdtiSqZpwuyxIpZTv+NqSFpv0aokvXHqn5u7mvSqn2WptjTiaT9hlsCFjd8a2xhmn6TkOaKsuSxWJBkiT0er2WnDSbzdp6N3ZWRa0a4JxjPB63z8d0OuXk5OSM2s/GxkZrITCfz9s2a8g+3aCmlJL9/X2Ojo44PIzE2X6/z3g85vLly+0zvb29zcnJSfvsj8djXnzxRYbDIQAnJyd8+OGH7dh4dHTE3t5e26++/OUvs7KycmYO7bZvMyZ351LgzHMZQmjVQ7a3t3n33XfRWnP9+vVWmch7z2Aw4MqVK2RZxv7+Pt/73vd49dVX2+fw9u3beO+5ePEiN27c4OjoiO9+97ssl0vu3bvHxsbGmSBvd+5/WvmrkGo+K/8rlFADktEWIXiPdzGRxPhataAmeRhj8LWti7UWawzWltiqxFVLnC3wpgQcst6XSBHVSON5YhJMzCwMWAnSZyiiDa9KFFLGscw7IqDtDKoO1PpgMcZQFhXGOoJzaC1BCZyQVM5TWIfDkypFT0tS4WKQMkQAPMuSaEtbBwDrmjXRbAiOLOvHPZ+oFcuyDOcCwnsyrSMxJgSkAmdDVKJUGlNFdZQgNUmqAMfkZBbXBCFQFiXpOGeUDVnMC4rlkv5gDAjKYoGT0Q5DSoXwgoUt8E7SGwxjwF3FtURwlkRarDXgHEoolE7J04SVYUZ+CEvn8EFhnMJZRVE6jIcsSyP4HQKq2V+HCqTDy4AMMZlICo1G4r3FuahksT855sHeI7YPHrMol4Q2aF4Hy/FxP08d1PaBwpgY27ERFK/Dy2glI1AvQlQzENHW19WAeYIkSyVKx0CwCCCUQieaRMU9qpaOJS6GkV1gUQUWxiO1pNbmRiYJARGJESGC/aGVMJcxsSM4BBaBiioOLq4hkjQjzTOqokQJyHRCP+9hrOH45Ig8FayvjQCFN5bKVqyO1zEmJjAtlyVSKpxxTKczjo+PGYwGDIY5vgJflUyPj9ndecy1K5dBBBaVwRvDeDRCyhi7EWkfVzlElqKERmoPylDYqm47jSkX6GSIcR6dJchEs5jO8GVFmmSRAKI1XopoO1Kvc6TUaBnVaCD2ZVsnzggRkESraiE0xhQgBLqX422Iz2FVqx
LPS3aPpzyezihdACkRMsYbUxkzL0XQ8b47j9QgVEAsDEXpMZXHCxXrZyu8lTG+5gyYBcoZlA8Ya8jykn4YIfMBSucEURO2g0GoaDqk6j2RClGV2roYy/BSAgpcbIcgLagYH/VCxuxoJMLKemRo1oGxzeLYdGqD5Z1vyXLOuagUUxPUvHd4Z1sisncdlcA6VhtkBJwaMEsqgQiiJXLEQTQQlGiBwyZzXBKtDM7CW74Gq08JLHFej4CX8E2YSsZQjo99NVrhhPO45mfls/JTVj5J9ji/7m3XuvUvLyAJguAlQXp8NSVfX2FiLKlOyUebTI4f0Ft7AVSPoJI4lwRDdXgf6Wb0r74JIeCEr12kAuCgt0J+4SWWRx/jkk3Gq1cg6ZGuXaI8uEPv0is1MSu0KhsegdAagSJ05ty68nQpDUJEELmhW5whCUgZx8DmtTp4r4SugdJTwmbznYZYFmpyQ2gBeIHzHsmpvXBoAGURYZSaPxCJbNSviWgtFg/xFJD5Sfem/q7AEwJoKVBKI1KJr6JqHCHUY2XT3rFIzuK+LU/jHEDe8D1CQ6ALcT8faPZwtMSTbv1CCChEvc4ULckkAtDxmLI9TyQqCqiZNb69f7GOsl5jnSr7nm+HljgDuHo939MOHxZYCUJ4pAIf+Y/s7B4zHKUcHczY3BghMKQiUBFImnaQ9T62nl5EqInALe4gzrTp2fvWPFddQow/85l4jlDf+7OkmTP3v71BdWuJxrqncxzR6GycFl/vP86QK2ocKk6HTRvHtWDs63X/JO4poiCcYG4tmQgkWYbDAYIoxuHxtaVQU1dRY0BB1vc6eE7vTqd2LW+mjtd21VWewvn579mDP2mMO/tac9LG8uX0e93n4nT8qKki0uPQUeUfQYLFO01IE3Ayrn9xgAbnSBKNSDViYTEYvBToAHgFSGTwOGkRPgFvMUJg1ApKDdGiRFf7WJXjnUZLxyKkqHwdWTzC5VeZmnWQB5QmoJJ6fKox8ZaQ9QQSzl9H+ZkgfgTv6C1nrLqSiXccoQlS4c2S3C24sJGzcX2L9X6frLfH8XJBliWYSjCvXJSPtJ5CBMrKcKHfw4QEW1h8ZSlnU9I8QwdYywcorUiVwCxKBv0BhkC5WKKIm8WqqChCgfPU3J6oWCAELIgZ994FCB4t4eTogNl8TpIOuHDhKofHh+xPSzbWLvIgvc03/6/f59f/3gg13EAgWJYV7uCANJlHT1pv8UBV1UEQZ+vNgGvleUOgzjYQGOcpi4pFUbJcLPCmxIQIJhA8ZWVrOUFHUZQQAmUV/W6NjWoDy0VBmgjWt65y8Pg+wcVVfJLEgdPVi/gYYPX1SKaiLYwPaB0nrF7aZzgag9Qsy4IHj7bxLhJXVlfXGI/GpEtBcuLZuniNzWdfQtUZgaYqkEcHTCtLcFHStAqKvDcizxeMUkBYzGKGHzosCWXl6oyTKDn60o3rHO1qHhzdQeuEPMt55tnnGG8+g7+g2Jkv2Nt9zHRyhFCRmOOtJThPqmEQUkTWYzUV5GnCOM/40vUv8KUXX2UzzRGHe7jdHeyjhxxMZuTLJesy4VrSZ224QjaMRBehFAgViR69fiR6oE4nmqgLC9aDq1iYkr5UrOgEgY004GIOxZJgK4KpWFYV07JgHgKz4FhYwzJ45gQWAQoCSwJ2NsPOplRHH3Pw+DuYk/+KKudkKymj6wlqYFGZRvqKZQm5iszZ/aVh/63/yj/60v+Le7dvcrws+ZVf+gUW5ZKb93bIehnTWcFgZczqSojKOkTCjnEeLSXWBQrvcN7j/JJ71rCzUGRJyqjfY6QFuYCd6ZzDynJxdYjEc3h8zA+qilQJlqVlUVaUKkFvbHFdWEy94Q1C1BkeAiE14FoLFKxDJAnBWVxlcNZFSyI8UiiUFCRSxY04RMmtmtQR2XqSIKKJiSBuXoNoDVTavWtU+vDt4k2GU7ZtoxiipUTXCwClZK2i2XgaCnSaErwnSXsQbL2h7y4am0kkZmz4EINLja9smiQIKbAGgotEk75OEColyyOw4ZyLqj/lkuXM8f37P+C97Wn0UPMBPd7izZ//Ett3HjHeukJwhtIYfIwHEBBorcgThSeQpgkieFJrY8YXIOqJXNSL872Z4dmBJmtYOj4u1lWicUX0SCQEKufp6Sir68PpIvZvgi35WfmfpzQAWpqmlGUJnGZGNwBnQ95oSBQNYLVcLlt7l+Pj49bGpVHOWCwWre3HdDrFuThfvvzyy8znc5RSrRUMnBIYGtCqITd8/vOfZ21tjSzLMMZw7949Xn/9dW7cuMF4PCbLMm7dutUCvA3AKqWkKArW1ta4fPkyN27cYGtrC4CDgwM+/PBDptMpr7/+Os8++yxJkvDRRx/xve99j729Pd58880WCDfGtEoPDeB5cnLCZDJhc3OT8XhMnudMp1N2dnYAePHFF1lbW8Nay6NHj/jggw9YW1vjypUrJEnC7u4uH374IQ8ePOCNN95gdXW1BV+71iENiaEBvZMkYblctn93M/EbK4gGbFVKcXR0BJza9zSvhxBYLBZRLrkGdZMkYWdnh5OTE4bDIXmeUxQFk8mENE158cUXGQwGrdrChx9+yNtvv93a1hgTSavXr19na2uLCxcucPv2bT788EPefffd6MNeA5FNlvoLL7zAcDjkzp07vPXWW+zu7jKrMybH4zEvvfQSvV6vtUuBUzuYrv3NYDDg5s2b/MVf/AUhBIbDIQcHB+zt7ZFlGV/84hdZW1tDa813vvOd1kJofX2dX//1X+eFF15os/n39vb4sz/7Mz766KMWJG9IAV//+td5+eWX2dzcJE1T3nrrrbatkiTh3r17hBAtF958801Go1EL5Dakg9u3b7dqEb/6q7/aZvDfvn27zdifTqc8++yzXLt2jeeee46LFy9ycnLC7du3OT4+JkkSLly4wOPHj1tFgOeff74F72/dutX2oyRJuHLlCm+88UbrW98F0NM0bZVqnHM8fvy4Jcc45xgOh1y6dIlnn322ret8Pufw8JDJZMKLL75IURQcHR2xs7NDr9drAWrvPYeHh3z88cdMJhMGgwE/93M/x3A4RAhBURSsrKy0VgBdApO1trVkap7BwWDA9vZ2SzT43Oc+1/bfk5MTnnvuOdbW1phOp62NyXw+b/v3eDxur715bho7jIbgkiRJe7zZbIa1lpWVlTP2Rw1pJE3TM22a1Fm4H3/8MYvFAillq4zUkB/KsmwB/hBCS55pnm+lFNPplOPjY46Ojuj1eqyurpIkSUso6lqM7O3tAbTEspWVlZaY17RlQ0RpyByrq6tcvHixHdOMMSyXS6bTKfP5nOPjY5xzrKystESS5XJJv99vx5qmXRolqMFgwLVr19rrPDg4aNunUWw6Pj7m8PCQoih49dVXW4Wn7e1t3nvvvbbPra+v89WvfpWLFy8yGAw4Pj5uVU4aclHXciaEwM2bN3n77bf5wQ9+QFmWXLlyhZdeeom//bf/NtPplG9961v86Z/+KQcHBy3RaWNjg9/4jd/gS1/6EhDJbP/+3/97+v1+q5Rx9+5djDFcu3aNz
c1NfuEXfgEhoqVLc9+7ZKru2N0Qx7TW9Pt9bty4weuvv97akv27f/fvuHXrFkVR8Pf+3t9Da83a2hovvPACv/Irv8KVK1eYTCa8//77/Nt/+2/55V/+ZTY2NlhbW+P999/ntdde45VXXuGNN95gPp9jjOHDDz/k5s2bvPHGG23ffpKa0fnyGenjp7s4H4kYobYsCPWepAoGZw3OVFgTSaiuJn0YX2FNiTMGawqcqXCuIrio9iEkSK0QTTA7REtJEcAbV8+fEmctwkeLBulDVGY0hrKOk5SVafddxniWZcWyiFa3qYrjnJbQTwSpUiiZU9koV91PYTxI6OdJJBWYEq8hSSMwrpVG1laeBCidw1pPUKfqW8EHtFDkeRbXFsFEgBlLmkisAekENgiMCzgTweAkVTg8PigWi5K8FxUYDo8njEYjVlfXsSHuj3Vti1I6Q64TytKRJIIklSxmc5yHlZU1qqogUQoto0qJEAJrZlhr8MEiFKS9jFQn0QLHS5wFU3lM6cCJaHuBRauYLGGNR3qPFgKnVQzoOhXBDhlJ/pOTAw4W0fq48iUhuBqnrkkesiZMNpttIbDBsjc5wvk4zlkfLTE0oLWn39NkKt47IXxUNggxOUlKQZZoksQRvAUfVWuzLCUrHFlikMpHX3kXWpvVRWWZzAtKE/BB1fE/Was1xHoJpaiKijRLUTqSQqwpcc6Q9VNMsYQgo/pKP8cFR5JqlFVkaQ+tU6bHkUCYDfrMliW93pDlchnbIsRg/3I5Q8noe269Zf9gQpr1GI1WyJKEID2kkr3DQ9I8ZTQaEHwkDhkbE9WEyvHeQEiQoodUOUJAkhqsSXHeIHxAJRLnPaZcsCgdAz1GSY31Ua1XSoWSEh9ABUmo7ZGlAJ1INrdWyfu6DvArvK8zIoOoSQkSpEbKlBDAVhVFUXFyPGVyMuNwOmN3vuRoWVI4S6IT8iwlEYJESTKto/VsCIjgybQnS1S0sxGCE2WYLQPWRIAgk4q+UuQioJzDGUOxXLJINAiP8J5ECpSQSCERIqnBu6gsJKSq7WtlC2F45/HBx5hpTaLwPiYUBSRSBKRQ0VZXROsUiCFUQh0zCo0iIJHUUcfQbK263JLO2x+PcyGe256+5r2rjdLjnqeNTIm63UWMbQlxGooU9f8a0kcLJtX9OhI8ms/6CMx27GpEI+kuGrCspmCFRqGX+p7/Tcwwn5XPyv885dPIBNRkhZoiSEycow4iW4IzhGBQvTH+5Agyidp4gWT2CBkM0WbEY01JtfcRyegCenwdXCCoGMOWIiGIetYKDpH0GG1+nvLoQyZ7U/pbz6IHlwhmiT16SLZyGRcMUmRRdc17rBUxQbGjDhRCRys6BmpP/6tjvHG934C4DQAdxz8hZEsaqwebU9C0PmwTSZZdkD8E3BlySE06C5EU2oDZrR0tkZAiQxOZfvKgc57c8ERiehvUj9eoZGO9HtW+GlD/acc+JcU9sQqnY3Lzb+J6pflO034NFtD5GI3xBw1Rr76GcOZk9aDckihk+1pzzefVTp5WRAAVFCIkVKZEJSt8+NEOmV6y0ZOUQRGEodfL2L5/zMZmhpAGhEKiePRoTjE3rFzdiIIXddL4J2p7rv5PgwOeds8iUSPQSIc39+AsOeLppUsUOX3tlEQhiEvXT/t+XfNzz038f6t2JwTWOipj6ElJCB4twNTTpJSqXkOE9jZ28ZFTks7puU+v9SxV5RPtJJvn50dPyOf7xo8meTzte9TP/GmXP28DfeY7dZeXNdbmCORhiZFZHEFFIAiJdA6Ux1IxEHOm0iGyIanUUBzgsg2E8KdkMX/6jMXhReCCxPs+Nh2QMaOXVLiiZGmPCSpFBkVa7FCEtajMZJcIMY42feK0vX+c9vyrlp8J4ocMnn45JVeSWzZQqAThLZld8NyK5NdeWsX7OWu54fIrQ/7PH5yQJppiWVJWhsWywhmDQWKMo1SaZSlwlcWakrSfkSd5u8DGOpRK2Bz22J9U4Dx5b8h0scTZCoWomZSiHaxciGSB4B3Ci8hCDzFLfz6f8sE730H311gWnvXxkGsv/ByDjWd4vH/C/Vtv8yf/5f/H537hVxhvXsYLxWQ+R8opSZZRLPrRN6sGglv2dw2QKhWz50OIgMSiLDFVxXxZYssKIQXGWqqyxDnPcrnE2pglWBYFSknKsmBZLKKHoxRRxjBAbzhi7eJVjnYfEJxBhkA/S5iXJg5mIVq6WGMhWKQSpFoTgqC0JlqnNGBdmnNhcwtnLCezKXfu3iH4KHiRJpq11VVefvUN0vE6WilW19ZROkFLjZAS52MQxgWPsY5eInASlkXF9mLKaDzkaGawHtIQ6PVyVrcucLIoId+PHtdXnmFRGB4dnpCkSVSncB4TPG5RRg6ac5TLJTZRfLxvCR8e8ObLV/i551/gb/3c13hmvIo8mSB27yJOJoSDfeZIwvXneHByhOr1uXT1OiQ5Iu1BmkGq4+STpPEny6kbLmYbeA+uAFtBVdSWOgIv6mwEb6EqCaYiWMsieA69YztPuRsC89kJaQgsQ2DpPQWBEigJLOyCd9/6v5HjY6bFlLGybK0JdC/FO8W9bzneuRWYVZAGwUAFTkzgOCSI6YTtW7e5cu0GNz94j1/+5V/jL773Dg97E4wPKALGxYHTe89w2KfnFEcnS6iVPDwRzM1STZ6krK2tszkeIZZzFos59xYFhfeUVcWj4znX1kbgDEfTWU1+qAd7F7gnBV9wnkw2MpUdFrG3BK3AQcARvON4/5DHhwdURRX7q9ZkdaZpP8np93pkSqGtQeCj1xmnjM3TmTTQehzWm9ZI+ogLy2YCE9TZJ3Xg0YaARaCFIpWCSoATYE2F1aC0Jk2bsSdKdznnOhO8rFVI4r+bBa0UqmV41mY0dSDUtwtI0WS/IOtJNkD9bCvpozQsELwl+Dpjrpix9/gh6XgDUWcS18sNBFEOVAuPJIL2zfgn2vVks6yJYM7JomRqc1IdG8gHcLU0GbUnrpCCkZKMkrjIX3ooXMC4Lh/0s/LTVkIIZ1QmiqIATi0wupnw3nvW1tZaJY8GNDPGcP/+fX7wgx8wGo24cOECq6urFEXBvXv3WtDrdCMKRVG0th4Aly9f5uTkBO+jHdv29jZKKdbX17l+/TrXrl1r3+sCwE0WfBdoa8DNfr9PnueMRiOuXr3KM88809q3LBYLTk5OePjwIc8880wLzFtr2djYYDgcsr+/z+3bt1t1jrW1NYbDIVmWte3S7/cpy7K1m2lA38uXL/PKK6/w6quvtkoWVVXx3nvvsb+/z/7+PkmS8MEHH5AkCV/5ylf46le/Sq/XYzKZcP/+fT7++OMzlh55nrfZ+l0rjUZdpbEr0Fq34Gg347z70xBJGmUUrXWrbDKbzdBa8/nPf57Pfe5z3Lhxg93dXb797W/z/vvv84d/+If8xm/8BkdHR9y6dYtvfetbfPGLX+SNN97g53/+57lz5w7/5b/8Fx49esR/+k//iX/4D/8h3/ve97h37x7PPPMM//yf//M2i/69996jqqpW4eM//sf/yI0bN/j1X/91
Xn31VXZ3d/mTP/kT7t27x+/+7u/yr/7Vv2oB3kYdphmPG9B6NpsxmUwIIXD16lWuXbuGUor79+/zzW9+kyRJuHbtWgvyHhwc8NFHH/HNb36ztXJYLBYcHR2xtrbGm2++ybVr15BSsr29zd27d/nd3/1d/uk//aftfb958ya7u7stwD8YDLh69Sqj0Yi1tTV2dna4fv066+vraK35rd/6Ld555x1msxnf+MY3eOaZZ9pM/d/5nd9hbW2N1dVVVldXefvtt/nWt77Fq6++ihCCCxcu8K1vfYu//Mu/ZLFYtKD8YDDg0qVL/Of//J+5desWd+/ebft/A/wKEW1qXnzxRTY2Nlo1naafrK6uslgsuHfvHr//+7/PD37wg1blJ8syDg8Pee6553jjjTf4x//4H/P48WP+9E//lLfeeotLly7hvWd/f5+HDx/y5ptv8k/+yT9hZWWFd999l9/+7d9un9EQAhcuXODv/t2/y1e+8hU2Nzc5OTk5o2jRVeM4OTnhT/7kT9p2TtOUhw8ftsSPL37xi/z9v//3+eIXv8jVq1d59913+eY3v8nt27f5xje+wRe+8AVu377Nt7/9bd566y1+8zd/k5dffpm1tTWMMfzO7/wOH3zwAYvFgn/zb/4Ny+WSO3fu8Ed/9Ee89dZb7O/v473n+vXrvPrqq/ydv/N3+PKXv8zu7m5b90ZBoqoqPvzwQ/78z/+cP/7jP27VZtbW1nj99df5xje+wSuvvMLly5c5OjpqCQHr6+s45/j444955513+LM/+zO++93vcufOHe7cuUNVVXzlK1/hC1/4Am+88QZpmvLgwQN2d3d58OAB3//+9ymKoiXc/OZv/iaXLl0iz3M++OADfuu3fov333+fnZ0dNjc3mc1mbGxs8Nprr/Ev/sW/4Pj4mO985zv8zu/8Tkuqaexj/tE/+kf88i//Ms8//3xL0miev+FwyGQy4ebNm3znO9/hzp07/LN/9s945ZVXGI1G/MEf/AFvv/02165d41/+y3/J1tYWb731Fu+88w7OOf71v/7X7O7ucvv2bX7v936P+XzejlPNM/G1r32NX/3VX40y+DU5Zzgccnx83JIrQgj8h//wH3j77bdZLpd87WtfQynFzZs3+f73v89zzz3HSy+9xPPPP49zjjfffJO1tTU++OAD/viP/5jf/u3fbtU1Qgjs7u7y9a9/nS996Ut8+ctfZnt7m9/7vd/j1q1bfPOb3+S5555ja2uL4XDI4eFhq4LUqMQ0Vj/9fp9er0eSJKysrDAYDHjuued4+PAhu7u7/PCHP8TXEv+NCtV0OmV9fZ0LFy5w5cqVM/ZgV65cYTab8fjxY8qy5Ic//CHf+973SNOUtbU1BoNBq4yytbXVKvd01YY+Kz+bJYRQ27nUiQDOxsx056hcVRM/DMZENQprTLR2cSXOGry1OFsSrKkJAQ6CiRlcPtoXRKnAhiQfCDjAYYOlKB0Sh3MJqoj7j6qyFNYym5dMZwWFNfggUETyhxJxH6VVtOlw3iGTnFGiGSHxLlpJKBkY9FKG/QylJF54nDdoEhKdEtBonSKUrtUkHMWywnoHPoIqQUBpLUk+IElSnCvxNio6Kq0Z9nosF4Y061MKwcLP8L6kLA3LRYUzDucty8WStY0epbEsFjN8EIzX1mKWqlAIJI8fP8Y6T683IE2j9QYqZrWWxTzGW4hqTYmKBL8sSQnOk6UapWSsl1DgLc4LjLGUJXivKc2SYAJIhXeGyjgq6xBSNvmTEGLsxzobVR2M42RRUlmDIFrJxE/GPbdAxEQM0dllhqgQMa9jWkLQ2o744Ei1jmQc4eKeU8qo3GLjZ7K8R5ooBBXeGhCeRCZ4EZUt00SihIqiHnUCiA+C0sB0aZmcLBgNlkidxUzVQAvaq2afi8QR8NbjvCfJMgg+AiZKoev7IqUkT3WM09nYf6wxVKXDBcn+wYSrl/vRgnhlxPRkyni8gpKK4XBEWRVkWUqaadJBFlUzGEaLv8Eqtx/skucDklQRQiRN5XkfqTKcD8wXBTqtyaBeIFTMkJZSkqQ51gZ88Kgkoayi7YkxliSJSh5lqGrbllqSvN4DKCUJwca6JRkqSnDEWKYLBG/RSQpC40SIJJvgsGVUKjw4OubB/gkPDqfszQoK59CpJtEJSqhINRAiKvlI1QKIOngSETPDe1qQa0GiBATDia3Ae7SQ9JRimEpGvYxenqKVJDiDsyXeJliTYMqa4C4CUifxHCEqZggRkDIg63CRqDOPQhsrCiBiclxwAi/j313rXOgSP6J6a7R3EXW810ViR02Ya2xcvHN44wjWg4t2MPEn1PbE8TshhIgWhNp6oM5KPQO1hdP8/ebJE8F3ksNjolqMc9VQo4h9vgGxQ90m9eN5el0tWBlVdf6mgZHPymflf2hps75/xMfaZ6MuQUdcKTh8MacnKmy6gq/u009G0Tot7VFOt+n1tygmB/j5NoMLrxPSNIKPSoEEFTS+VrcWQnVAbENv8zWS+S6Th++wcuENemvPMb//l9j+OiJNCUDiC7K+Y5AWOBHVIVqiQTNunceQP0HubogLolamaA4Qx0VfB87btmoA2boNWyZHPZ42sZc29i+aser0+/U/miEp4gf156JVDHVC56cD1afXEGpcuGbmCId1tiWP+paUKjlrpUJrkdMes+F1nHvdt0SOhjwXj9Elzp3WtdOGdI55ru5duP8UxQidd3znnVOo40kqKOfbyIeo7q21oloUKJkxLxwkI1xYgpaUVaDf67N1oUSraDUHnqAlInhkqCidAambE9c0qIZf061rezfO1eVs/brxxs5XapJQp3t0ru/8d9uvdUgw3bY4S0/5FKJQ5xjtd8+1Z4PfhADIWjBARSwm11BYCd7V/Ch/OtfWeEurENMSMv0ZMlD8oG/b4Xw92786ZJZPI3N06/+08e1p7dj9u1u/037XXMsTyB/4SJqXuu7ygSTMWao1QOCJOJWUCco5dHWIUxVCJFhbUGYbpMtdxPKAkG/gRVyTJyLBhwqI68bYjBKBwDsLSZ+N3oKJsYw5xIQ+qt8HOyGYR0jpoZzX9nlx3RlJx4Ent85fT/mZIH4kBAZ45s4zlRqdDVHOItwSRcbFHFIFK2PNWqoYSksv6TOTJTZAaQy4AGmKDzBbluwfTbDFnFDOOJrMSJOUVCuyLMW5QFnF4MO0tCgCuYhZ6+PgsdZROo8XMi68vaeUkmNjWVYVwXkKFAsiIePC2hpH33ubl158idXVVYbZkOeff47swg3u3XlIsZhwuHuX7771R7z0+ptcvP5KZGkGwXI+x5ZlPYHGh0EK0EqSZymIyDS3Jsr8FVXJfBGzoOeLBc46kpoYUpkSYyxFUVFWJVWt9mFq0MVV8W/nPM4Z4pbJMxyuIIVgf/te3BDVMlTWxIAKxM2hloo0iRkrUmkyD8ui4GQ6ZzKZorUiTVKkitmnSklczUYVMiqdvP3OX2JMycpwyI0XXmG4sk6W58BqzMgQkkVhOZgW5EHipeRoVjK1gX3rebwcsn7tRYLqs1gs+KPvvBfv3+pFFpUn6w1
YGyWUVcXR0SGL2ZTJbEY5L1ACrCnxziJqdqHxsLt7wubnf46v//rXSZcl4fFD7OOHqOmcpDKkJzOOLl3jpLKEwZDbAn5ufQuhM8h7keSRJqBT6PfjBqqooFiC0mBLqMpI+igLsCVJPRpW1mB9XGhYEbg/7HNH9LnlSj7cO6a4vMFxNWPvg0N+RQRGIZI9qvr3Es8sOD64e4/x9REXjKKUDoVntliSZA7vNLOZ5WQhGAPjXHHrSHBQWMzeDt/+kz/i//l//B/86f/1LnvT13jtxRe493iPk9IAOSeTCUmiUVpBgNGgT1FUJEoSjCUgWRmPGA2GbG2s0fOOxWSfw+WSaeUxPhIuxqMhq/2cpbGM0hRj5hgfxU4ia9fzweSQZ73j59f7RCE6oCYgUVu5+PrvcnLM3s5jhBKsDvvkSpGmKUmSonuDCDoqHVmbZYmdT/HWQ6ojmcJDk0HWyHXVs0+9cW+WeLW8XTOBtYs4TjfaElQd7DIIRPCoNEdJiTVF3NgHFxcBvmGVhHYylkLh+f+z92dPlmR3fh/4OYu73/3e2CMj96rMylpQS2MtAN1YmmKrhZ422bApQUYb0WxMJpvneaSZ9E9QJlEmk5kk4wMlGw6HD5xmN9ULAXQDjaUB1L5kVeVSmRkZGXvczZezzMPx43EjK4EmOUOzEVAHVsiIG37djx8/fpbf9/v7fkMGBwK8tbW9Ss0XrwlRRMahkHUfDsHAINMqgryt8VgnmJc1qcOHhafSKYnUpElGogSlM+Gc4W7wIiiXCO8DOUwKrA33LiQIVzubSVH/DEVZsTczLPVr9re3RGZ2qhU2t7RqL+jjyvPAOKYGjLOU7tPAwK96iWAshMVelNmPP0cySFRpmE6nHB4ekud5k3Eega7l5WU2NzdZXV1lZ2eHnZ2dRhEkyvlHe5dIqLDWNpYUMXs+qn4cHx+zt7fH1tZWYy0Rj4v1XrRTEUI0SiQxGzv+HK0GIumkKApOTk4aq5hIIIkgNwTJ/2hBAYHUGYkfi6SY2E6L9ev3+6RpSqvVatQwFtVClFIcHR01YGAkX0QFi0Uv6Ui2ideMSgjxOS3+t2g1sGjz0IA+C4SeSByZz+eN+kCSJPT7fdbW1tjc3GxUBiIJJtpanJycsLOz0/SLw8ND3n33XabTadOfdnZ2UEpRlmVjGzGfzxvg9pVXXqHdbnNyctIovCwvLzcqEf1+n83NTaqq4uDggKOjo0Y1QSl1hvgRgehIVNBa8/nPf57NzU201rz22mvcvHmTVqvVZNInSdKQjN58803m83ljXfH8889z8eLFxlJk8Zm/+eabIRO6ft7WWs6dO8fW1hbPPvssw+GwUYA5Pj5uVGkmkwk//OEPeeedd0jTlOeff55XXnmlUQl57bXXuHr1Kr/5m7/ZgPY/+tGPeO211yiKgnv37rG8vAxAu91maWmJl19+mY2NjcZ+6aOPPuLk5IR2u80f/MEfNCD/3t4e29vbrK+vN4StqL4S+2VUmlhaWuJLX/oSn/3sZ+n3+w3o/Md//MccHR3x9ttv87WvfY3RaNTc1/3793nmmWc4d+4cv/Ebv8G1a9dot9vcvXuX733ve7zyyis899xzjEYjDg8P+e53v8t7772HUopvfvObZ55l7D9RJaUsSx4+fMj9+/c5OTnhpZde4saNG0RFiffff5833niDdrvNiy++iHOusSCKqkJbW1tcvXqV733ve/zrf/2vMcbwjW98g+9///v87Gc/oygKXn75Zfr9Pvfu3eOtt97ir//6rzl37hwvvPBCo+jxwx/+kF6vR7vd5vz5800drbX0+33effdd3n33XV5//XWuX7/O2toaQohGkWV3d5fl5WWGw2FjkWKtbSxyOp0OFy5c4Pz583z88cdcunSJF154Aeccly5dYmVlpVGW2dvbQ2vNM888w6uvvtooabzxxht8//vf54tf/CKXL19mZWWFy5cvs76+3ijnbG9vc+fOHW7fvs2/+lf/ildffZUrV640Ki2/9Vu/xaVLl7DWNiSOOAbHsTeSLqJdSr/f5/33329sulqtFrdv3+btt99mPB5z9+5dLly4QJ7nTKdTRqMRWmvu3bvHe++9x2Aw4Nvf/jbD4ZCqqvirv/orPvroI3Z2djg8POTy5cuMx+Ngl6kU7Xa7GX9msxlvvfUWa2trXL58mc9+9rOkacrnPvc5yrJkMBg06kFra2tIKZtnsrq6yvb2dmOXlGUZm5ubXL58matXr3Lx4kVWV1f56KOPyPO8eVceJ04uBmWi5DsEMlBUWhmPx/zLf/kv2dvbYzKZkOc5t2/fbq4b7ZEW7YkiMWQwGLC0tHTmfa2qihdeeIHNzc2G7BjHsGvXrjEcDps5OqoofVp+PYv3nqqsguqgMVhjsM7gjaGwQenDmpKqjr1UZYExJcYUOFvh6gxY6XwMGTak9uim7nGNBUJMEnGEmEZRVuRVgcoVWkkUiryqyIuSeVGRm4qqsmA9nSTYUbYS2VzJYaFWVc06HVpZirdQ5TnOGdLariNttRFaB1sUlSB1SmU8xtYu9lKQddq0u32KosT5EP9ACEpjyE1OmrSQUmOFAamxtmyUHaWATndEq5VxfHyIKUpSbRBpUFwt8inlPENKTV7MEVJydGAZDpdQaUKSpLRbHY6ODkgzjU76Ie5lIC9zklQhvCYvLImWwc5ChBaWWtDtdlhZHrG8dERn95Cj+YzSlExyy7xIqVwrJOfYkATlPTgUQqmQkIEHK6iKEicqpJJYV1GWQUUtgOqBrK20Cm3j67YTGlcHwqVwtNMWWmnKsqzjYBWitlVpZW2WRx163R5KGsAhlUQIReXKGjxSWCeCIotyWOMwtsK7sL8Nmc4h0UtR/+7BWEdhPJPpjJPJhKzdoSpLtE6QUoU9vAelVR3Yj2FgSZZ28L5CZy0EkOpAiMtShfcVs/mELG1hK09ZzTFmxjTPaaUp08kO3V4blUoSArCW6DTszTNBmmacP7dBPp9ydDhjcnLE0nCEEIo8r1hZ2UCiMabAO0/W0igpSLOEqkpxrgRhKas5SoZ1l5C6zkFxQf1TKlSiEHlI+tIatFKYsuAwn6NVgpShPREOUSfhZIkma7XwBKWPIPAhECQkOqNyHmssxhRU+ZyT4xPuPNzh/fuPuLU/42hWUTlIM81QaWQi8ULhEEF1tbZPkVIiJGgkqQoqot5IuokiVRKcxJeeogyWLF54EiVpJQm9Vpu0lQTFYgkeQ1UGu1vrPZm3aN1C6wStE8A1yq9KeJwEG7OLXSB34E/VJD0gnAvElxrAbLBfcbqfAlFbYUVLu1rNw3p8rephrauVPoIVlTe2nvddQ14+C4AtQBGLACwO71UNjon6o/qm6rCXgKCIG7FCETPMm8GdCAw7zkrth8uFuJQn+r/8fzGRfFo+Lf8HLp8ETCONKgCc8XdTHdJPFdiSojyhN9rCOIt0KdnaZaZ3/jXd9Zdob70MXp2CjR6cF3iCWm5U1RC+JqspjXclurPCSqvLyc6H6PYGrQsvUz74a5JzLyNQOBUUHaToYxde2TNAroj1pyG7NE
S2OnYcXv0mRbAmcECj4rXYHAt7GB/D4IvXWTy4Hi8t4lR1oRlbfP33+nt+AVAWAWSOBL0nPZsnkx9cE+fBh6RuLxXUNuaLxI6FKj+xyJrsEVeucBrr/yR5QT7hDHWfiY3EghbI48D9wjlFXanH1Uf8wmD+OEj/JIBfSFGvtatAQPRQdkeca0/AVEhXoRw4XdLpKlzlw7OxCuEs3VaL/cOSTZk1xIxIDmzuWsZ6fvJZnL2707qekiA+WX/RXCN+73ESySef/99ETvxFf1+c/37ZOTxgnUPLpO4PAq2TQKzxwelBSVGvKeTpC+EXz3DKBgmUdR9wp8gkPTPtf5LE0Rwizt73Ykxh8X4fP+aXEaee9B6dPd/iNU4x7k8QRWp1ndocB5xBaIuTKcp7rABQSAyy2ManI5ybgDS4akKFgWSdTvWQaX6AaK2hsRgh0D4JFqGRGFPvdVCKuXPcP+kjnSFnCaUkiZxjsh5iPmNWjvGVra2nNJG48u/bye7XgviBTlA65X4xJRsO6A6WmJ6Mwc9J0hay1aE76NLqZVRC0U4TNjprHEyq2sXKB4UO55BC8N72LvLRhOl8RlUWCKCdaJ66eIErN56m024jy4peZdgoS5ZmJzzV6bCatWkbx4nzTDONz1oIocl3djgsZ9w2JYdHx+T7e7yT5xzVLKIkbXE0PuHurQ9QTz3DhXOb7O8fcHycM1wasNRdItmw7O3v8N6bfx1klLsrjRLAbF5RlgVFXjKf50ggS1Km2Tz4zHpPmedUxjKdzajKkqII8qgAhZQYa8mLgrKsyOc5ZVVSRbLIPHxnNpsxn01DBoYTCGFQMsE6R7vbY3XrEnsP72DLefCxdKBlsJiJWW8eybyoED4ER7NU02kNKSvDNJ8znk6BIKfZaWUMe12yNMXUDPVWlnDh/DmUCgBaUe7gvKcsC2bzOZWxFJMTdkY9eqMVxsdHXD6X0V/ewIxTrBf49ITbtz4AU7G6vklnMKC3tMZkOuPR/j4nR4fBszdRjCdTyjqLucwnVJVBiiAbaW1FUpV8/eJlnh9uIh88wO/vIE/GZAeHDErDatpisHGB9d6IH33wLqvHYz77Gy9AbwBZBt1hIHv0B9BuQVnCeBpUPJI0WLdMJ8HGpZrjhWOWaB61WmwvjfjB2gqta5d5bfsBk9Izur7FT999l6WVET/dP2BUlmz0Wxykip+OC16UIS+lFJ7cCyYCZg7K3T0eDDK0klRaMUoMx2NDXjpGbc9mz6NyT8sLDseeDw6CZclxWfLeB+9xuLPHxctP8dpP/5ovfukr3HznXd6695ACgVaeyhraWSsEh1TCsN9lVhQkJAyGIzY2z3P98nnmdz/k48N99qc50yqQhpRWDAcD+olEBsNlvFQoqSidbWiaXkCGYCNNUfWazfmab+sX2MQ+hADLco6WHqUTEiA4e4H0HlWZ4G8sfZCeU4q008FXBiuCXY2PbGkaN9Nm4vVhlUtM0wjEj7Dx9zWbMKhvhM+VkEhbUTmD1ZI002ANzooQDAvRsToQEsgRNU8E4V1D/PJC1j6vHmcMlTVh0+9ByFoFREoEBiFVmHxcCNi4OiDgXCB8BOlPX3sfgkoSknaXi5efQmvNvCqDshCnLFwlBY5aLUQkJEohqgrlg6ppCHMFP1jvwuJ4d1pysdshxeIcGOeYlpb9mWN77pnamvkNeBc4ev9++ZKflv9/KXFxubhQXASHIqg5Ho+pqorj42OOj48bwC/LsjMqFJE8sEhyiNnGEYCazWaNRVpUf4jZ/VGJQqkgv3t4eMjx8XFDFIkWBxGEjNYUsa4RPI5lUYo/qmQsgvjx2EW7lEWljEhuifcSwc7G23ThPmP7xXaItjORkBK/F889n88bUoAQ4ox1SSyLIGJUFYkKLIvPMNYn/i1aisR6Lcr2LRI/yrLEGNMc771vAMhowdNut0nT9Iy1RAQSY9/4+OOP2d3dRWvNdDo9o8LSarVI05TpdMo777xDp9MhyzJWV1fp9/scHh6yv7/fKFPs7u42ajBBHc1QliXz+byp0yJhabHE/tfpdNja2mJjYwMhBMPhkCzLGAwGnD9/viGEVFXFaDRq2j6qq3jvm2z5aAmT53mw63vs2XjvWV5e5sqVK7z88st0Oh2UUhRFwaNHj8iyjOl0yr1799je3sY5x5UrV7hx4wabm5tAUB65f/8+ly5dYjAYNO/HcDjEe8/JyQnb29u8/PLLTR3X19d5/vnnuXDhAv1+n/l8zvvvv0+e541VxsWLFynLsrGGWFpaQilFnudn+m98D9I0ZTgc8tRTT50ZF6JNU7RzGo/HDIfDhrylteapp57i0qVLjVrJ7u4uDx484P79+3zlK19prHYODw/5yU9+ws7ODkmS8JWvfKUhVsRnuPhztCmJKkPRmkkpxfb2Nm+++Sb379/nzp07PPfccyRJQlmWnJycNCSrlZUVnnrqKS5cuMBHH33E6uoq169f52c/+xnHx8dsbGzw8ssvo7Xm448/5ubNmxwfHzdKF4PBgIODA/7sz/6Mjz/+mFu3bnHlypUz72Kr1eLg4IB79+41qicvv/wyWZaxvb3N7u4unU6nIXPF8SDarDjn6HQ6jdLDysoK58+f57nnnqPT6TTPL5K/pJSNtcq1a9caO52bN2/ywQcf8PTTT3Px4kW63S43btxACMFgMGB9fZ21tTWKouCdd97hnXfe4Wtf+xpbW1tcunSJo6Mjbty4wWc+8xmSJGnIcNFeKZLz4jgQ++nGxgbj8ZjZbNaosZycnAQVxNmMe/fuNWNOfGe01uzu7nL37l16vV5jpVJVFbdu3eK1115jb2+Po6Mjrl+/fmZOiuQTay3T6ZS9vT2uXbvGjRs3uHDhAu12uyH0HR8fN6pO0+mUBw8esLe3x8nJSaMAZYzB+6A00+12m3cmjn2j0Yh+v/8JwlScqxbnk9gvIoHDGMN4POb27du8++67pGnajLN5njeKN/G8cTyKJLc4ByyqNMW5ZWNjgxs3bjTWU0VRoLXm/Pnzja3X4lz4afn1LKFfFc3aKbzPJc4YyiqoepiqoKpyyjKolVamxLqQjOGtByxaBJBVygAdSBH8nUMGexjPPALnBRDiEtY4qtJQOIcSIqhciJAdT2VI6nVXqTwOhyRYOygRLC27nRZpFpJW0kzR7XRI0wRnHCZL8digKJomSJWQZhlZ2kJKhdRB/dSaEMC0WDCKVqZIO+1TG7Y6hmKLEuslSulgIeIBqcjLOTpJcc4gtKrVsmA6nuKtx7oSbwJIPJ8X9PsacOSzKfk8xztPbzCiqia0Wgk6XQuKJ/kJzlmytI9ONPN5jk7SMM4iwjpSK3SqMa5CSEu7pVhd6rG21OFocsy4MsznjpOpZJp7+mXIWFTKY73DS4mFQH4RQalA1hIJUmqKyoSEDeVQUpIgSVsJaStByHkAvL1EOupnalGJYuvcJplUHBwcUQnH0XgMXpAmGe0sq4m8bZS3eFfgncVYV6thKpAu7NmlDQQPT50gafAIjJf13t6CEkhhkYRM42le8Ohgn+VRD1MTmYwxoARKJWRJC
giMC8qY1lrSVlaTtRVSZiRaoqQH77DOUBU5ZV7Q6/TYPz6k02mTF1PaLc2gG9TRrBecHE9J0pTZdMrSyoiiLOmkfdrtLhLF3qN9tu/vYFb6bG1uUBpDu50ihacwltkkJ0sUpioQOLSq5bJrKw4vHMYFW0ZjymDRKhRVZZHCkSRtnA1rZOdsUDzGMj45otUa0O12ULImP4jw3jZkASFBqfp9BpW1cVJSzqaYImc8OeTRwSE3P37Iu3d32D6eM3USVwN43jjyytJqJSEDVAUChfOn4JaQYVzQWpFIEaTSdYmUGmclVeU4cgVO+DBOeIn0QZm1laaoTKPSYHPknMUWRbBZ8Y6sBc7XBPeaCGZ9bcNiHNbU5Axrsbh6DNC12k09PwtHnS3TJPoE7M3X9sBR6eN0vREIHacEemtM83NQjA32PXgLtpZTrROPQvs4VLRrqONNUgQimiAk7oQEnzrwpEQMX52CszF7WNRAkwMIZOkmXlbHxU63wbEOC3G0+vNPy6flV7X8MvAU6pgQgobGKkASLN+E1FiToDNNxZDUWrLeMpicopwjDu8wuPIVpie7aC+RMtg4WSWQtXozXjXxaVGTSqQQSBXIIMJLnGrT33qeYu8ms4MprdWn8Lu3YPNZKqHDukl2wtwgRK2odkrG8CwCyHWCj68Jkgv7aL9AOhNCnA4ovwAgDWPIKbDsfSQCnH4pEELrsUbQjHXNiZG1d07DAKnJd6cDWlTTaO5p4fqfBLbDOKmVbH4TIhA/BGFsblQWajwgNtIi8SIOrw0RsGk+H8g5kVMjGipHU8cnqix84t4fG1kXYquLJIjAkwhEPNE00em1fmnxgNcgCrxUFNbzhUsZRV4yPj7Co9CZxFdhnW2kxVlF5aZIJVhfT2h1VzluyCh1izbkg8fv+cn1ci4+13+TKp8msf7CYx4nRTzhb0869m8iiPyyEi2UYr2kUPWeOWfxqYrYWRbIOvHtrhdtC88xrudOj2jOs1DfaAkUm1fE89dNFWOSPPb9x3//RW0T//5k1ZDTZxvecXfm76fnqZONgXaWYlONo6qJYxLhDF4KEmuguIfJzpFIy6TSpFKiBCSVQijPLFsnLbepih1ctoq0ElePnXgadTIBSOuxUpLLlKHoUFrBTHWoXIZWniwRGLePdyXOGVQdv69b/5Qox//vy68F8cN6x76rOFYSb0paxRE6U0CntlIAkWVUMqUyllai2Tx/mTdv72JMhRQB7PXeoSRUxmLsnLy0VKVBeEil4sO793n4cJergz5fXN/kt5++wejKJhUenWj8YMh8bY3WYIiQiulkxvaH73P/0SP2dnY52HsEeU5WFqwoyTRrI5Rg0AlZUNpMeLS7w9r6CsPJLj4vkWbM2rkNskOF1gkP9nb4yV99l6efeZ71C8/gEJRVRW6rsJGoKgQeYy2lqZA6YV4UmDKnso4iLylNRZUX4CxKa4Ikn+RkMmMym5DPpsGvU+sQbMRjar/bsiqRHnSSIIXG4wNYayxJmrG+dYXdnY9x0ylJEoKUnvCCSqUp7YKElgg+tFIEuUEtJSINCgvtLKPb7QCKylikFHR7fTr9IegMqZNamlTgjOXkZMzx4SNWV1ZZvXyZeZ5TesFofYsjJ9n+eDcANNZQ5nm4xnCJnb19eqXh5M59ylqSvt1u44D5bA6IIDFb5KRKI72jqgzzqqRlHV9ZXeFz7S7u3TcRx0ck0zm9omJVpwxH66jlVcRgxFqnxz9YWcebkvbKKqLfh8EIWh1o1yofzkMxhmKO9xU+azEfZLj1LabC8PDhLt3Pvswf/uWf8xePBuznGfd7A55qaT4yUx5NK56dDJibkou9jKuXz3E8LXh0PKe7tsz2ZJuOc2wAcw9zBKWUWO8R1nHz/iOy1ZQy8QykRZmcSWE5ko586uggmVXws4eeV0RG4UvuIDHTMX/9ve/yO9/+u3zw//5/ULz8CjeeusT+0SHv7Y7DZrqoqETYfBZVSpalVEJzeesSa+urtNKEL3z+C3x/7z5HZcWkKLEIlkdDlvttZFWCqTDekyiF8S4ssKxdYBoLMim51O+RNqNDDMyEbALha+9gY6hMHtRxpEJpjZKqdjuVDbsxMl8BEAqpwrsivMU7CXIxW0cQAhi1wkjNpPYiLNg8Au9tvQkP0lTeixqs9EjvKZynVAohgpyoE7L+3mKGxmmahRAeF3bM4IOssK2qkDFnXQiIGotMMxR1poepGiKIVJJEUMuG1iCxqajKquGWx4VCkraoihlHs5yVdg9TVad+ivXdC+GpjKOyjsobkJbU+8b2SgmP8aIOKHiM8xzPCvaLjLYt2ZsZHs0N48pR+hjsqPnuPrA6fa3a8mlY4Fe/LC7yFkkgi8SO6XTKrVu3GrDAOXcGFJvNZqRpyuHhYfMOPXr0iPF43BApohR+VVXs7+83QO98PufevXtcvHixIV8sLy/T7/c5OTlpbE9eeOGFBrRutVpMp1OOjo5YXl5u7iFmzbdarUZhY7HEAF6cgzqdTqNeElU6ImlBStmQHyL47L1vgLgI5kWFgkhsSdOUmAm+t7fXHBfVP+L3vfcNYSYClIvKJdH2IoKBWusG9Isl3l+0eVkkcywSQRYVFKJSRgSby7JsiDRFUTCdTps2TpKE8XjcZJZHxZNWq9W0yXQ65f79+9y+fZuTkxNGo1FDTDh//jwAFy5coCgK7t69yz/9p/+0Uat49tln+frXv97YsxwdHfHOO+/gvW+UZiLZJMuypm/F+459Md6LUqpRQokEobIm1I7H4wb43tzcbJRfTk5OGhA9tnG73ebtt9/m9ddf56OPPuLhw4fNM4/XWQRpvQ+2P8vLy2xsbDQqNsaY5tgPPviA999/nyRJ+NrXvsYXv/hFrl692ihvTCYTDg8P6Xa7/OAHP2iy/r33PHjwoOmfkfwTyQyXL19mY2OjeU8j8Lu/v8/3v/998jxv1Edeeukl2u020+mU8XjMYDBo3vvY/3q9Hp1OB6BRhNjb2wNge3u7sT2ZTCb0+/2m/1+/fp1vfvObXLt2jX6/z/7+Pm+++SZ37txBSslTTwUyY7RGGo1GfPTRRw2hZ3l5+Qygvwjsx3FoNBqxtLTE7//+79NqtZrx4w//8A8bayaA0WhEq9Vq6jmdTllZWeGFF17g937v9/gf/8f/kZ/+9Kd47/n+97/PlStX+NKXvsQ3v/nNRjHjvffeY2lpiW9+85tcuHCBNE158OAB58+fZz6fc+fOHdI0bfpgrOvBwUGjUHH+/Hleeukltra2mvaLCkjj8ZjRaNS8i2VZNu+WUoorV67w1ltvcenSJV555RWuXbvWKETEd+X8+fNcv36d3/qt3+Lll19mMpnw+uuv88Mf/pDt7W0ODw+bvv3qq682Y+atW7eacUgIwd7eHt1ul1arxblz5/iLv/gLVlZWeP7551leXmZ7e7sZK+LYEMkJk8mE0WjE2toazzzzDFmWkef5mXH1qaeeotPp8MEHH3BwcEBRFGRZxtNPP40QgocPH3Lr1i2eeuopfvazn9Hv9zHGcPfuXe7fv0+n02Fvb68h
ZUQSRCTmlWXJwcEB1lpWVlY4d+5c8y5HQl2SJBwdHfHDH/6Q73znO3zwwQfN+BKVcuJ4HN/pbrfbKBLFsVQIwfHxcTPOxDY0NQi0GGSJ/TmqhBweHvK9730PgC9/+cs8//zzrK2t8dFHH/GTn/yE8XjMfD5vCHCxLyul2N/fZ2dnB+cc3W6XXq9Xq+eFsX9lZYUvfOELAOzt7TV2Z3meN/NSJIB8Wn49S1yXWGMoqxJTBpUPY0qqqsBU9b9l3ih+VFUZpHjr+I30HiclWgXLykTVyj+6Vr1xoH1CogMIKYGqcBgvcNZjKkNFiEe0tEJ4h47JA0qTao3xHut8DcBAkgg6HU2n00JpjY6WSqbCWY/UklbWptProVMFKKRKkSpBSYkT0MpSREdiY7Z/EG5EaxFsO2Ww/PB45mbGdHZCIhKkAusqhHBolYCvQEjKwpJlKUnSIW05SlMyzwMZoJo55vOSVqtFt9tnPi/I84KqOgzCtyqhLCfoVgulNLNZUe9xHUp16bSHWG+ZTnOGgz5JkuJMhfWKNOshZIqtgkJnr53STTMmeUFlFdPSMZlVFIUlbfmAy8ugyuJ9sD+x1lNai0pTvIfSeIrKBcUHD3lhMC5Ba0UrSwNhzlqitzeENpMq5ehkQlHMKPKg7uvqMbqo5iF2ZmYUBpQzuHJGpsHIFI1DJz7YehAABC0cnSxFKMPMVVTTktJYjIvaooFYpKQEDGVVUZQCIQXOVVhncV6QaI3UaVA48QEsEwQyijGWRKdYMyVNEpyv6iQJh9YZ05Mx/f5SA/hnWYaSCXhYWdmkyOccHo0ppzlaCHRLIZzDG0slSmbzWQ3uCUajIWvrqwihKaoZiZJgS4xt4b2j0+6Euqpg6WGrAu8UnW6HqrI1UUlSGQN40jTDE5RcWpkkSUKGI94jhaTX7WKNRWmFUiEeo2VQTPFCoJTGiySAZQBSkCShnYr5nHk+Zf9wn7s7D3nv411uPTxif5JjnCTRaVCUhYawFZNPkAovI6FBoGsVIC1r0FWGZyasIMXQ61iGZUZR2toeV2BdsKC1LljgaFHHi4SGmrRTWQtKoXSClGCEQLqg3OGJ1pdQllWIkTiLcYGUYrVB6wShdCCCeF+TPwIAS62Q6kNz4nzIIo1WL94FSyxrQozNVjYQP0yFsdWpmqJxCOtDn/MxTgVSBjn5EAsHgQ/WRbXiiBCmJubUljVIpK//HipETAuO42yw0loEfkLmt6iPWxz34THABz7lfXxafoWLaIio9a+hRDIIUPuEhz/W70wMGXnpoTxE6w4TM8NZQPQo9m+RSkV79Rp6sE5PKIqdd2if/wzCVUifNLitjAQJ71FSnALMEViXYbxUHtprz8L0PtPDB2StLnL/Liob4GprFGclXlhYsIyJ5/I+jl117E7KeiAL9+fwSPnY6/4EwseTVIJcbZd2yiFZIC/YmlJ2yiFpvhfSPGMVXR2qP2uV4mp4VtQDksfjfEgSF5y153DOoaXCeUtXB+A1FUEtTHiHF4rG+oqzz/IMkE0DGzx+99Qa3Y8RClxD7Dir/HH2mIhRiHpgbUxc4qQiTpOzTq9Yx/8XrnmmYo8RRcJH4TgvBE4aEuPRUlHMZjjvMDbE6IoqR1ahb1WNqoNFioTKVAihGPUdk4lv6hCbSYgnNhBPIn8sEoOEkM089InWjdZAzd+eTPB4klLF49d6/NgnlV/2t8XnG4gCAiscEoHQkiyR9AYaHhlAYlyJEPrM3Z9VExGnhKbYX4hqML4hUTV3vfAunL4znGkZIU8VZf5NSC1PIn8sfvbkczx+/OOJdPEFCosihabdlZStDkLnWNFBOUAolK2Q+V1M6zxWpXTNDjPRw8kc4QwWE9ZZCIzeIikeYqsxIlkKmJ0A5SUIgxcavMNLgSIQzq2zyJpc4wRUTlO5AZ4E42ckZg66hWrYcKfv378JKenftvxaED/Ac9Jvk8qEeVkwGZ8gVEJVWo6UZpYXjKcSnVcUAsal4fbbr+GrCdYkZFLQEhavaiYikqqyCGfC5gBQ1vKMTvmcELyStthAk+3vM5lMmRwdgfCoq1c4qkruffABDz/6kPGd25zcv0cxPmFuDAaL0BqXJiynmvVzq7RXV0i73SCF6CEvJxwcHnNuOmN5tMaz158ma7e59d5NpuNjLmxs8cDBrfffxjvLytbTdaa+p8znTL1gTytm8zn3Hjwky7IQTHBBPaEyJmSXlCWpUkgdfRkF87xgOp9SzKZIESS/4oBgTYXHo6RGaoWmDnCa4BmltaYoQ4/ePHeZ/d0dZif7eONQWtEfLVNZMFXZyF45GZY9lXGUZY73jizRrCwv0+kPOT4+pphNWRr1GSyv0e6PkDrBGkdpKvLpjHw6ZT6fYWbHbG5skLZ7TMcn9IfLeJlQ2LABUTrBOEeVl7S7HcYTuH3vPt1ul/J4jNKKwdIIKSV5XtaZmBLhA0juEKRKMugPebCzj5tXvDoc8M3hEv2jA3Re0MtL1pOU0dIqemUNMVpprFyESmnLHrQyWFqu7V2yQPpINJQGiir4Wy4NmHz2C9x5tMv25JiNK1eoimP+n/+vf8Vv9fsctVP28iBNP925x4XzS0gpmY6PuHUHlvsd7j/c5eOH+3id0s4Uy6sj7m7v8mhWAmHAKgX4RIW5QEJRWm5uV5zTJUVf8NwoRWGonEc7wY4c8vOHJ0xm8Lc7kgdCUHlHaQxvvflTvvoffYtz5y/x+uuv8/LnX+Xendt8vHfCcVFgnUc4RafdoT9aYevcFu12C4FH40kTiU4Shv0BxnukVix1e5wbdjBFgbMWg2BWluyPZ/T7ffpKoXDYeixVElaMZTbPEa3Qd4VfkJmrN7leaTwltjLIMBTXizmHlJrItBY1m9b76O1VL8S1wntNMJmhXqnF7AlxOl3F+bSRsSNcvw4aWudxeLSQdQYHzDxBtcMUlDJkwUihcD4EKrGRWFIvTp0NHq3OYaqqJm/YYDFVTzAh08JhzNlFjzUWZx2mrLPJCR7Dzjpmkymz0i4QLIIqwOxwj4PDE4ZLIypTNafzhOCOFIKqXj14goqIl6cTfFigh21HkP70lNbx1sMx1tlg3bPApA6BjtOgR1w018Ywn8YGfsWLMaYhBkSgWUrZKCb0er0GcMzznDRN6fV6jWWEUiHz8dy5cxwdHbG7u9uAw1E9YvH88/mcd999t7H2iFnuKysrjMdjjo6OGhAxqhJ89NFHDSjZ7/cZDAY8evSI3d1d3nrrLVqtFnt7e40KQQTToxJBBCkXpfx7vR6bm5tYa5lMJpycnCCl5MGDB022+mAwaED5yWTSbCYi4SECglGJJFqyGGNotVoNiBuBvEgciFnikSRSFAVpmtZzY7CMyfM8KG4VBTFTYHd3l2hH0+/3z4CMEYgEGhJKfJaP/7xIBImZ55FoEusW/4vHxWtFO5BItuj1enzrW9/i4sWLDSFH14TW2WzG2toaX/7yl3nppZc4PDzkzp077O3tsb+/z1/8xV/UWbOhHp1Oh9/5nd/hypUrDAYDdnZ26HQ6DbC6vr7eZLn/IpZ7bKs0Tc/
050WLnkiSid9ZJL2cnJxQFAXf/e53G6uab33rW2xtbTUg+f/yv/wvzXna7TbtdruxMirLslE0WAzQFEXRPMvpdNqAxGVZ8ujRI6bTKQC7u7tNv4i2OBsbG6yurvLcc8+xs7PTkFXiMzo4OEBKyWAw4Bvf+Ab9fp833niDP//zP+df/It/QbfbZX19nd/8zd/k1VdfbQhN3W63IS5EFYmqqrh//z7/2//2v/Hee+81bRktKFZXV5v+FzPHgcZqoixLtre3mc1m7O3tsbe3x+7uLv/9f//fN2D2cDhsyCSRbBX72aI9RmzjSHaJfSo+qyzLmrHi1q1bDWkpKrVEQoVzLlgpWstXv/pVJpMJf/Znf8Y/+kf/iK9+9av8wR/8AdevX2d3d7ch7UTSxD/4B/+geWfTNOX+/ftcvHiRzc3NM+9tJNtcuHCB69ev89FHH/Hf/Xf/Hf/r//q/srm5yY0bN/jqV7/K008/TbfbZX9/n9lshlKqUU2J92+MYTAYNH3l4OCgsSURQjTEKYC1tTXW1tYa0l2apqysrDTEh6g28t3vfpfXXnutsV3Z3NxkPp+zv7/Pc889F/YCtVJEVOfZ29trAI04li2Oo957Wq1WQ/Bot9usr69zeHjIrVu30FqztbXFF77wBebzOe+99x63b99uCEobGxuNDUq0Qvln/+yfNSS7Xq/H5cuXee6559ja2uLo6AghwjopSRIODg5QSjUWUvFZx/Hw4OAAOA0C/dmf/Rnvv/8+Sin+6//6v+aZZ56hLEtu377Nf/vf/reNUgzQKMXEvrc4zo/H40Y9JKorLZLHWq1WYxsVCXPdbrdRrjl//jzPPvssN27cAOCP/uiPuHPnTkOYi4SiyWTC7du3gUDCevvtt+n1ely9epULFy6glOLFF1/kZz/7GTdv3uTmzZsNAVNKyY0bN1hbW2vmsuPj40+tXn6NS1hrzDGVCSSPsqSsckxZhX9r0oepwhrEmqK2gylDvIIAIDsXCP5SSJwMKhLSeIT0NaAgUVLhlQEpqE3pQnBTeKxzmLKkqAJYi7doqWmlkkxrOjVQLfF02glZoknTBIlD2KpWVFzY5xGsa4ypSDKNThKoVTxMaRFKorRAU2fzaR1wCRf2bkUxo6p8sCBNAqjsPczKCdoGP3SBDdapdSa9cQ5RWaRQCK3J2i16/SFFXobApfNMpjmZc4GsojVFUXGwu8dgMERKzTyf0W51aKUZVVEwm85otS26MAxGy3TaXYq8wHtNliWB2OA8SmXo1NFq9VgeDFkaTNifllTOUFnFbDanyOcMhmmIzSRJvZkzKB0SiXwpKKsApDtMUEYQEqzHlAZbZyR32hmJDkoTDhAyAAVSSUpT8ejgAISj2QwLgXcV7SSh3dVYPyPPK7Q3pCLQN7w16Cwl0RLvTegz3pEkmlYqKG1IqqiMoawMpfUYG8CO02iuDNarSIyN+9eoMhfktp0DrWUdSIZEJ0gS8A6pVEjqsCG7WyDwFlqtHq1MM5+N6XXaIIMlztbmJlmmmc0ds9kJWZbS77c5GU9odVrM8xxtQ98e9LsI6VhbX2FpZYV5XjE9mTMYjMBbpvMZxWzCuY0lUq1DAoqTJFqRlznCd7CVxbiKdisDpULbSo3QARSwvkJnCokjUQKnFVJ0gpV0MQ9KL8biXWgzrZJAcHK1hR42xGKkppjNOJkcs73/iA8+fsAHDx6xfTBnPHOgNK1Uo5NAxAjW2uE+S+sCIUcGIpYUQZVEColUIZtVyGD/gqwz3aUgc45eB6Yzw2xmMd5SutCfKqMxpcYmCqFrmycR7RJCnNRZg3NJHbMJ3u7WRYvsU6BSePDO4rypVV4cSof5WskEpMBLiXC1YiSy5tH4OmkoJMF4F+I2tlb2MDXx1Ln63/q/cK2gNiJ8TfxoYlwgvUREYr+v92AEJYAIKwpRKwwJi/fBwgAhQ12kqjGYmkHiw/gV9hm+idc8jvGcSWaCOkmqfo8+LZ+WX9EiGjS1fjPOgOuyAVsXQVdETfoQHmFK6KSY2RSvDFX+CI9DDTaRApyTqO4GibdU2++Tbj4LlAiZIW2JiMoUoolOn61b86/AKU+7e5FWOuT40U1kcUwyvIC2IfFQqGgjEcrjqgxwqpQZYsULf/a1Q9Rj9fhlIPppPFlyqvhwNlHTLxDMFtUsFs9xVuHik9YU4awiYuENGaL5qz99RvVJg0KTgqoqKa1CoEGUIEQDXv9NCgif+KxBihe+ExuwOe5x4sNpvP9UuaShnYSzioXH8DgBj8jn+6Q9Sn3gE+8h9OtaFU8q0pamLGckqaSYgvHB9MIqS7DnExgTyD9hDQ5CJEgR6UkNTeWxe1/sF1H95JPlkyQLsfC3s58HDsQiaebfbg76Rd95EtHjb1LCONM/AW8d02kBgzbeCrSUoXXE6Xv8y67bEDEb9KRWbyNyPeq+5JvlzBNusO6L/pN9ePF6f1O7Pf7e/U1lkRQayukzEniQEm8NRzNFaaErDGMyfCKhmKLLfVzrElYJEg/Cl1iV4V2dxO0NkmCbboTHZet0ix3mVkC7j3IOJx0KTSUsErWgVidApkhvaqwxw5ZzXP4I3V5iyRtmVQ6ZRImwJl2kGP37WOb8WhA/vJSo3gA8ZEJghGBeOozQHBeeo2lFq61JEsnUew4Lwd5+yMAqjUQLh/C1skf0eWwnlCWYEqQz/M7KCv+3//Pfp2Ml0pYklaEsC7y3dK3C5RNufnyX7Z/+FW9vP+ToZEJZBTUCnWh6vRZeqcBiF4LOsIcbjXBSIfGUVYm1DlPk7H58k4fnrpAejlG+oi8KBsMWDz8WrPR6+I1NdvcV77//DluzORsXrlFZj6tC9m5pSuShwnlIlCZJa5l05+pJSgVLhk4HJ0SdLVcEgoNzeG/Jy5AtHEgjwfvUWAvOkmiNVYpqNsPYMoDKOIRUGC8wpWFtfZNpt8vuzoMQ/LCO4WBEXlZMxyfBj7eWAbS1nGSrlTEYLlNJzccP7pPg2NjYojtYAiGYRQ9oD6L2DK2KGatLQ/TGOifHR+wf3md5eZmyshTVnDwvmM9mFGVJUeRgDcOVNSbTKd47BsNldJrhnKXIc0BgrWE+nSCFp3JBXi1LgpdsmiRIa/hKv8vvr66ybizd4zFrSrOyvEK6fh7R60O3D91B2BSnGaStQALp9aDTg+VRXO0EUk5ewOE+JAqMQw4GGAyvv/EzPr++xrnVNtPxNq+9/hrb2/cD2cVaJgd7/OxnP6Pbyhh1M44Ox5zslzhnKauKTgfIHWl3QGepx3R2xJ53DIXAIoL3VT2IawRpbhgbx/jYs7MHa+0gDXcwh3v5CVPjeSnRtIGJCJkMWkA12eW1v/wuX/xbv813/vifIz/7Wa5cucJHj47YnRV4IXj6+vN87vNfpNNpc/ToAdPpUW2vFHbBMlEMl0a0k4RBu01bCPLZDCUlc2s5nJcYD6WxZO0emS/JjUPi6WYZF7otzh0c8eBwzpXNPmflJImrG4S1mKLAusAilIB0Dql0vciRC4w8EVIgUCBqIUofRKRqMc/6vPL0/NRBoGjl4oP0a5
hTPfjgHWs9hKSZIJmZW8u0zuwoZnOEim1Tf8+6oLAVszAk2NLQSJR5SZDstAss33pTXgcZInvC1fdBLVXrXMi4ctZiraEyFuuDb7ETof5pkqJEQbvdQziHrarTDboICwtJBDjjxzXjmJDJFhaaAovE1OcFwbxyQRXEB3ueaFcTH4QMzJ2zi+zw0aflV7REYHXRriD+HDOeI7EjWmIsqiLErGshBOvr602mdlRAiNnGa2tr9Ho9lFKMRqMGvF0EjVutVmPDESXrlVJcuHCB+XzeAJFaa9bX18nzvAEM471kWUaapo2KQMx6jlniEZCPIO7y8jJ3797l5s2b3Lt3jyzLGgC01+s14HaaBn2jH/3oR6yuriKEYDab8ejRo8aKwznXkD5iG0aw1DnHdDptSB0RMD137hzWWm7dusVoNGqywu/fv0/MZo/tMJlMeOutt5hOp2xtbfHss882SiGRuBCC3vKM8kVc0EebhlNp4vC8IzEmgpZRySMSfSLAGUktEFQWhBD0ej2EEA3wu7KyQrvdbkBP51wgT9YqKpubm5w/f57bt2/z3nvv8d577zGbzRgOhwyHw4agorVmOBw2FjOz2YyDg4MzwYhYbwjjYWyHeL9Rvj328ahaEvtSBHZjxn60n4nKFzs7O1hr2djY4LOf/SyDwYDZbNZ8LwSSTUP0iO0ZiSRxwxUJSBcuXGjIQe+99x69Xo+qqrh69eqZe3jqqaf4+te/TqvVYjabNeSnfr/P5uZmQ26J997tdhsFBGMMvV6PF154gY2NDV566SUePnzIzs4O+/v7/OAHP+DSpUtcvXq1IQ5E9ZmqqkjTlL29PT744APeeustLl++zIsvvsilS5fQWnPz5k1+/vOfc3BwwHw+p91uN0SioiiYTCaNQk5UhYnP8qtf/Sr9fr8h3cQ+MRgMWFlZafrsojd6I9tYE2ji+7A4ZsW+F4+PoH9UplBKNUQBINgB1Ko5sc2yLGs+Pzo6asaHpaUlnnnmmTOb6GeffZb19fXG5iVeOz6Dp556qiFzvP7665ycnDTtv7e3xze+8Q2uX7/ekGRinSNJYNGWI5IQFq2vIkkqSRL29vY4Pj4+ow6z+E7H57qzs8P3vvc9rLW8/PLLXLt2jZWVFR48eMDrr79+5ruRMBXH9hhMXOxj0aImPt849rRaLS5dusSjR484ODggTVM2Nzf5zGc+w+7uLj/96U/58Y9/zOHhIYPBgK2trUZxI8syzp8/z5e+9KVmDOr1elhrWV1dZXNzs7l2fL+iBUt8P6qq4uDgoFFbiYSdeH/RZunSpUu8/PLLDQEpKrZEAknsi5PJpBmfFtvh8XaJJY69sd3jXBjHAKUU/X6fH/zgBwC88847tNttvv/977O7u8va2lqjSHVwcMBbb73F//Q//U/B9m8+Zz6fc/XqVQaDQUOI++Y3v8mf/Mmf8OjRI37+8583tjzLy8skSdLYNnU6HVZXVz9V/fg1Lt478rwmfpRlsK8tZtiqpKpyqmIeyB+2wJkK50wtLW4IaozBLgGCMqBzoiYIQGUN0tUS5sRM9gBYh62KC7ae0lNZhzFgHBjrUAoy5UgdIctfCrJEoJWk3UpRSbiu1AIlBToRQXHEK4RQdTKAx3lbJzBUoDyJDGO+sSVSQVUYvABjFUqFNYEUCq1CjGQ+n0AuUVKQpSmddotiPgfvqKqSwlnSVJMmGiEFxgTwPEvbJLX6SVUYirxDVRYkiWRezJnPCzqdPu12B+ctZTknSVr1OXKET1AqIc8tjx7ukmYdiqpgdWWNLM3wPqgLeB+Seqoq2Jm0uy1WVwYcjSdsH084ms3JSygrT54bbOVodRS6nYRtqQ3qEc4ahIBEC4QEZz2FD4oEmVK4TFAYj3WCTjchbSnmc4HwCucsEldbi9T7UOdryeoAIKQ6od9v027J8LzLOamStDNFqn3gMRCUK7NWQqJBe0+aaKQyQfWTek8rgw2pA7RY7FHgvCCvPEfjKasro9BfsYEY4EEpg1IplQvtl+nw7MbjOVorrK2ISVnOi5CIpiRlbT/d6WTkecHK0hKDQY+yqigLx3BphSxLmM5O0EoDkqKs0GmGlAJjKg729nn66tMUxlHmBWWVY/F02h3G4ymzcY6xnkQLiqJEJxk6bSPKEmsrjCswlQmKNFLjLEiRonywC8AFokyTGOJMHZNw4CqkT6mcq99Xh5A6qH7IMP4LJEIqqtJwNBlz9+FD3r3/MR/vHnI4LfBoht02so6vFD4kwDhBba0iqCpPVXloSZQMkvKiJngoKUOyT8g/QSjQtYIxiaWbQSfTTHPJtDKk0tNSglaqaacanyXgQmJMUCgJCTrCVtiywNSAZKIBoRrChbNB6RWpSFSC0NH+MrSTceCNB+VwLiiziDoz3tcQZIjl1P8trLf9AvgZYkAhXis84APxw3tL6K2nxI/gHhBOWpthnSZM1WsPVyeHSVdLy8pAOPMI6kBReId9DcC52l4hysnH2FfoDeHYGrAN8SnXINwRnvy3Bd0+LZ+W/8MVEW2y/QKoWiceCrHweX24B1DgKjRTnF/G5oekKmN47iV2d94hS9p4laAEIBzZ8Dyz8l2K4wekyxsI6xBJgnX1uy5qUsMZEPaU3CClxrkYx+3S27hOcfwh+c7Pg4KVDKS0GOOO3zs9Z7zVhQz/hZv6twHL4SzpI14r1jle8xc39yctJR7/d/FnX8fLz4Lj9Xjs49jZ3DEIiTGOoipJnSM3bYLngAxqIQso75Ou96S6Lv57tiFE3X/qeIt4jJbyBPLN6eh6Gg8SQjQmJzx+dE1WCQS/T9bv9DqP3YOIc5UgL0qUUGQayDy2BdoprEkwVUhap1Y+L4uS6UwwnwfrshrsWwDH6/liAQRYaEUeR9GfpMJxtq8tgAlxvdj8/Mln9cl+evr0T489/fwXzWH/dioZPpA1hCDPDcZWOCtJRIT3g2XjLz/ZAp515p5j3X85MSWSQvwTDn28fz7p3n7ZM/jF7//ieBLqGx/72eNlIyYgfCCxS5NjkgxRzNH5HlVvK+wRnUHhg0WjbAWCkSRgsCqcV+JBCorkHO3qLrPC4/UAKSusSJHGgbThvRBBEch4jXJTVOJx+Yxi/oBOexPRzujqOUcnM5Soibb+sRb/xcPVv3P5tSB+CASV8ZRlQaolc6kxdk6n0ydNBaVQlMaiUo0VUCEoi6DkkBcV41lJpxs2Tt5UXFhZQqmQmZEoRYbgCxPLM299QLY+wuZzCuswQlABxcmcannExacv8Zm/+gv2Jx8ytpaBBC1gKi2Jq1gd9BgNR7QHI1qDIVYllBjGZcXO9Ijj6ZiD8QQpHpJ2Bmw99SLdlmY4GHG+KtlZXuZwfx9MydJgQFFVfPDBBxSzORuXn8YiKMo5ZTFHSo2pStIkQaiazR7g02Ar4j2T8TFKh+xNR1AgqEyFrgd7h8caQ1VWKCWwLjDFdZLirSPVGulTjLW4qgLv0TqhMBWT6ZR+t0f36RvsbD9gNp1RWkuWhOxMYwzT+QxMgZagVEa7O2SaG2bTA1IF3aVlTvKS4/lDdO1Jb21gq
VnnKPMZvXaGkIrj8UNMPmVlfYPdkxnVwXGY3uqAu7GWqizo93rYemuukwRR+8m7OnBcVgVYS5JoirLEVRWroy69TotHuwfs7R3xkk75eytrnCsMa1hWhkPaq5uIwQj6S9DuBQuXTq8mgHSDqke7E0ggCJjkgU6Zz+HoAPIZ5Dm0WjA7wf0//xlvJCVv//ynvHB9E9Frc3GjzV//9MfMD7YRIuz8bGmYHR8iqxbdbpupMJxb7ZJbgbIl89Kh8oK9ewe0LOQikA0KAV5JbA3Oa+foTMa0fNheVgIOZ4K9GVQGKhf6eiLheqI5tp6bhAm4Kz3K57z907/iC7/zu6xvXeDtt9/i2ouf5YPb97h7OGX50g2+8R/8R+AFZT7Fek+atvCuqomHQalidXmFRAlSIXHeIJTiYD7neBakWpMa6B20EhJjaYk2n/mNz/GZ9R7bH91kkKZcHM+RvtaqaJgBNRHDV4DAFgUxAyf8Rz0ACzyWQJgI9is0y7WwLW4yJeJEGIXzfPQ3DAurQBKBUzaqqwOADueDH6zFI5G12oVj7gPYhQ+KHL4+j3UlgrBAk0I25BTvbMOH9T6o+kgZM1FobGWEcyH4Eqtbl7BY97X3awgwORey/GPLSQRKapIso9cf0bbh3o2JnuyBQCIIqWiu3tQHkkYISHipsAjGTpDXbRnl/ZwPKh/OnTq8xo1PnOjj3Bgz+6SQlHxyMfpp+dUpEbiLJIHI7l8kR0SgMYLK8TvAGfWKfr9Pu91uFqij0agBNiOwqpSi1+sxm804Pj5uMtg3NzcBGAwGSCkb2wchBEtLSw3RIypmjEYjjDGN8kYE3PI8/wSRZWlpqQGAFwF5rTWj0YhHjx41ViARdIuKJoPBgPl8zmg0Yjgccvfu3cb6JBIbIqgciSVRNSNmiAshGiA1AqYRxD537hzb29s8evSI27dv473n5OSE/f39xt4iAnVFUXDv3j2Oj49RSvH000/TarXOEDsef4aLP0fAclHJA2h+Ph2nRaPmYa1tyAqLShZlWTbKAkoptre3G8DROcfu7i7T6bRRPolKCpcuXWJra4tut9vcWyQJ9Xo9sizj4OCA3d1d2u120/5HR0c8evSIwWDQAPiLahrxecZ2XyQPxGMjESSSWuLziOeJ/Tpm70dVgmgxEgkh0+n0zDsSyTRnAsR1fSIoXRQFy8vLjQVFVB4QQjQZ+VHZIqocxHdgOp02ShbdbpfJZNI816hoE+tjjCHPc/r9PqPRiGeeeYaDgwN+/vOf8+Mf/5gPPvigUaWIljDx3iGA/icnJzx69Ii9vT2++tWv8vnPf57nnnuOqqqYzWa88cYbjSrK4mY0vpuxXaM6Qnw/nn/+ea5cuUK32z2j0BCVSiKpYFGRILZxBOYX2wlolDwi+SO2dxyTovJNvEcpZXN/1lo2NzcbS6aTkxM2Njaa45MkYWlpic997nPNe2aMaZ7RcDhs6rj4fq2srDTX7fV6fPzxxzx8+JD79+/zox/9iPX1dXq93hlCSRwfFp9FVIeI43B8znGsA5p2jOSsWJ/F/hkVZT788EOuX7/O5z73OX7rt36LTqfDW2+9xcHBAbdu3Wq+t/hffCdiX4vnjPd3mg1GQ8S4dOkSb775JtPplHa7zX/4H/6HXLp0qRl3fvazn6GUYnNzk9XV1cYmLJK8vvzlLzdKNIuqHpFQFts61iP2heFwSKvVasbSa9eu4Zzj5OSkeSfifBXbd29vj3v37nH37t1mfon9Nyq/xDklEjzis4q/L6opxXZbtP+J5Kg4t2xtbbG3F8jkDx8+ZHl5mfF43NxjVVUMh8Nm3vrwww8bYsxgMODKlSv0+/1m7HzhhRfY3t4myzJOTk4QQjAcDllbW6Pf79Pr9ZoxN37v0/LrWYJqaU5VBhuXoswp8xmmzGvFj0D8sLYMGeu1/LGUIb8iUaK2zyBkGwpQMpAx8Kd7CykC2V5JiVcSrYLdiJMSdIL3AYyWQiC9IU00qYRUK2RtISMTHdTFtCZNM6SSSB2UO1ppTeQTENUAKhv2Zt6L+t0LRJMklSh0nZXvSLM2Og12F6djnA9AsxRY4/DWUcwLkkwFOxBTghLMigJrSqpE0W5ntFodrAkZ+VUxx3tIMkmSplijkUKiE8l0OqeqDNZ5+v0evZ6gKAukkgitmOdzdOJot1KcdUzGRyQJTNKMbm9Ekup6jHdURUg6kRKyzDEatdlYH3JpMmd+5z7GQmEdZVUym81ptVLanQ5CwNwZqtKQl1XYQ0qBq0q8dbQSHdQvBBgBTkJLQrej6HYk4yPPot6mszWQjAyZozVIozUMRi36vRSFQ/qg+ZnokAwlpUCqQO7RWtLKEhIFqXBkSa3sIGIsLWS0NoC5E3V/k0gcUgqqyjCZhHmw3enWljQWb8DpBCFDYoaUAidcIFXYkla7Q5mXpElQJAzKNwXtTGNsRZomWFMitaI/HFBUhsPDMVnWQqouO7uPaKWa/mCIkpI0UZiqpJUp5pMx08mYrJUwnpzgXVDlwDt0InHGIoTCWrDWY8qKVjvs8fGaqjTgHUqAcB68qZ95Cymp1wqGNG1T5JbS1usm78i0glThXIWxhkS3USoN6sJSQU2UCrYkhvF0wsc7H/PundtsH4/JS0uqElaGI1q0cFiKquIknzCtCqx3Id9GCJwFWwU7FCVVsAbGBJJHEDoOWcYSpPDBDgqFcAqbOVqZDLHlMvTNQqchGdB6rK1QFUGZRfqg0io93glsOUdIh8AgfIo3Cus8xtRkFC9wUqMSiVYJQoc+4JwnUi8aKwEfFUV8PX7UP/raMiCub4iaGbVCtQg35qXE23osa2x14lFBzUMQEnJc3Q9lDSZGnVoBKB8sXhABGAkJPjSAVEwRjoohjQx8fa0QC4vxMZr3ER4DfSLY8ynn49Pya1AeB9kbohz+zN9OgUKPIMGUJePJCdPxEYVUdNIOuVeoSMqSCRZBAjgj6K09x+Tjv0YkGb47wDqJRjfKPPHCUsY3XtYksXrP5gssCSLxCPq0lj/HeOcjJDOsCvJkvol1h3VPBMDP8AFEJNouws5PAH0RZz5fjJcsxo3Okg5OlVYXr/mLiB6Lf1+M1ZwlMATEPMIIrh7DggJHPYoukGWUVCjl8FXAqYxPm7rhTxMxmrb4GwgAj9f1zHca0olo2vVsUy6C/DWBQID39TxQk0Wa7y4cGeeWWrf7DKXi36zeCmcNSmmcSOhksDKEMtXM56BEhjea+azCmjDHJYlmLiu6nQ5lYSkry+1JwDtO7WbivFVPOadNUdf9lxMYfvHvZwkcwRbmbKznlxF1TruAa+bCX94+TyZJPPF5U1tiC3A2uDR4PFrWdyxksJ18Qv180ymCTVG417qO4rEx5pc800iA+re9n192/JPu/W+sA0E97OwXHaARwuGMpZNKCtlDFHNkeYTrbCFQeGuRQqP8mFJkeF/hZYoQwS5T1ViYwIOViEQw0efoz7aZASYZILxDaagsARsUoRMWXtETKXa8hzVzuoNLOKHwxpHqNsoeor0FL2rrv3p0+/eEX/1aED8QgqKYgzWo
tEO/20NLifKeTqdFpRS5dWjnmXmFQ9UNrsidIJcdjOhS2MCk7/XS4N2qEoQXtI3jnPOkuUG89lPs/jY603R/ewv6kt2b9/nr9w3DO+t07t3m5Y1l7P1dMBYlPPul4TmpeFpl7E1mzJIM0enhdUKWtOi1Omz0R8xsxd54wuH4CPYeMlva5PJnv8jBtGJpWXHp8oSfP3pEOZ0idcpKt48tK+7du0tVFpy/ch0hdJAbdCVYQ15M8QiU1kit0Tp4XlZVFQILOkgfCplivQ1SjtahkiQM2j4qAcgamAkyhN5bXBLayZcV1H63UkqEFJRlzjQvSbTi/KVLnByfcHy4T+FCJrXWCb1uD2cShKsQSYfcOIr5jFQJ+qMVcgOVyeuBqtnjhOxDU5JISdIdcTSekk+DjH5uPGVNQomTgpQK5QXohN5giFBpAK2tZT6bopSmKHK8s7RaGd7BeHyCLQuWB10unV/HOcvBzj5bszn/l+GIG5VhvdOlt7yE6A6DykenD6OV8F+7FxQ+VAJZG9oppCkUJZQFHBwEOZmTE5gewnwWdqFJCifHZIfbHNsTimqOMEcstSTryykn+zu0hWNjbZmqrMhOpowGPY5PZoyyBHsyZ3VeYQ8njIylNZnRsY5ZZRjjGQNzKSmFp9QCZRxtoRhZy4Aw8Jl6VvUiAPIWsPUksSwl54Tkbe95ICQKz1Ji0Ximxw948yc/4vlXXuG7/+pf8MJz/ycunFtnY/cImWZ46zDG4j1UeY51FVoKtFJ4HA/ufQyzCZUNmTTWeGbTGfOixDiPUpJO1uLcoEvbVUy9pNXOENZw4/NfYn6wh3SKzB8Gmdx6Exk9+oSvcyKcw9gy8CiFqkM49TY3ZjrUVF4vPMItSGKGld7CMaf+eQiJ8+50YqrZlc0Ctt5dey+w3mO8b7yBvXMUzjIVnqGkXnT74F/jQ0BBao30McBFOFetjOGaIGO81IJHm4gBg/qevKfWVG0IN857vAtEERszSZrFtK/BJs3xwS6iM6KjwRnL4uJSSBEkV03wO1NC4Jxh5iUV9aJMiJAx5cG6EOw441vI6SJeibMEkPBfXO097vf2aflVK0KIRr0gAquxn0Tp+nhcnue02+0mQzt+HoHZRWJBlmUNCBuB9giadrvdRk0jBt6jqsPS0hIbGxskScLx8XFjQ3Pp0qVGHSSSUFZXV+n3+wyHQ+bzOXfv3mV7e5s8z+l0Og0R4fr16/R6PVqtVmNTEkG49fX1xpog2sxsbGywvr7O6uoq6+vrFEXRKJW8+eabDWkhSRK63S6j0ajJ1DbGsLa21tQzto+UkqWlJS5fvszS0lJDXrh69SpVVTEej7lz584ZRY6LFy82ihpRpWIRYIRTYBxOlVriz4tZ+bG0Wi2WlpZYWlo6QyqJiibxGUZFgWhbcHx83CiVRIB5bW2NNE15/fXX+c53vsOPf/zj5v4++ugjvPdcvHiR6XTKG2+8wcHBAWtra1y6dInj42MODg4YjUYMBgM2Nzcbss1PfvITbt68yaVLl7hx4wbvvvsuu7u7zOdzLl68SJZlDfi7SLiI6g9RYSTP84aYEds1Prd4HxGktdayv7/fKC4kSUKapuR5zsOHD3nttdcwxvDBBx9w8+bNRokhHhvbLj47rXUDNI/HYw4PD3HOce7cOV599VV2d3d5//33+e53v0u73ebrX/86a2trtFot3nrrLdbW1rh69SpbW1t0Oh3u3r3LZDIhTVOeeeaZpi8XRdHYwUSVgjfffLMhNb3wwgtcv34dYwy3b9/m/fffZzabNYo7EVyP4H6SJE17RCUOrTV5njOdTpnP52eUZKJKRRwrut0u7Xa7GQ/iO/ro0aOGLDQajVhaWmI+nzf9SinV2HjEumitz6gpRNJHmqZYa+n3+8znc+7cucODBw/QWrO0tHRG9SX2k8lk0rTPn/7pn/KHf/iH9Ho9fu/3fo8f/OAH/OEf/iG7u7v8nb/zd1haWmrUP4QQfP3rX6fT6ZwhlMQ+FUlmUd1BSslkMmlIJV/+8peZTqfcu3ePH/3oR/zDf/gPuXPnDltbW1y7du0T42fsS/GzaLEV1YOi4kxU0on9L/4uhKjJ2/bM5x9//DFpmnL16lW+8IUvcO7cOWazGWVZNvZccdyPSi6xj0Xy2iJhLT4ba22jAhTH/WeffZY/+ZM/4cGDB4xGIy5cuMDS0hJFUfDUU0/xj//xP+aVV14JVoL9fkOKGgwG/PznP+fDDz/k0qVLdDod9vf3+fDDDxtLn/h8Y/uPRqNmrFpZWeG5557j3Xff5c6dO42V0U9+8hPu37/P7/7u73L16lV+/OMf8y//5b/k+PiYTqfDo0ePuHnzJo8ePWrGu0hai6pUrVaLyWTS9KeiKBpbsV6vx8nJSUMii+9EkiSsra1x/fp11tbWMMbQ7/f5xje+wa1bt7h37x7j8ZiVlRW+/e1vNySo2WzGiy++yLPPPsvf//t/n4cPHzKfz8my7IwCTVmWzGYzRqMRf/fv/l3yPGdnZ4dWq0W32236klKKpaUlBoNBQyz5NNP317N4b5nPx1RVQZXnlEVBUUypyhJjc1xVYUyJq4N0WstAwqgz+FVtXeucRxGIXFopEhWI97IG9gPhH4ICYYLzrZC0LiTeOdo6kEScrUlcwpIqSaeV0EoTUi1rwkeCzrIAtmuN1mGfkiQZ7VYLpaKcgMAag7EGKXWzL3PGBBtPWXvM4wCLVi0SnSDQiESjE4+SBUKU5C4PsRwEeTkHZ+otYiCa2cqihCafF1SlodVKgyS0VORFUHuwtqDXbeGdIS9Kev1BIJUZx3Q6o9/rkcqMyTwQM1SaUhYlpJ7BoEOSCEozx7mcopriyMjzEGtrt3vMZ8d470iThHarxbDX4eLGiP3DQx6NZ0xmJbO8zXxuODo4xhlLr99FimBbEeEGUxlcHS+QUiJVeC6lg1RIqsqQakG3m6KSaa1IG5J9mj2iqFUyCaSM5eUuG6tdslSgcWgs7VST6bjfSEi0CEKtGrSwpFLSSSWpFpgi7Fkr47BOYI2gMoJW1iWtbTkkvtkzOy8w1lOUQcEB4vdLWu0WQQWjVhn1Cimh0+nivSXNWqRJwnxeIGVCpycw+bwhCed5TtZKcd4xOTmimOe0Wm3msxMGnRShUzyCaR4+d8ZSFiXb9x/Q7fRppR32qyPy+QxnDK1ui9l0zvHxAedq9bbQZwHhqMoCqRwCiaothHSaoqmJhkFaAmvL2mpAk1cV89kcragtTAAsZVmAk8EmSWdYKZFJigucEpw1FEXO/Z2P+WD7Y/bnE3AeZSVKZgy7Q1q6Q2VL0rLCC6hqKxWEQyFDYpQN9idAsG5GIGQA6IItkEP62h4Ij0pCHCh1NelHKIy3zVghaqaI8x5bVThbkegUpMZY0NZhnUUIi7AVrkhwQlJ5QWUFxmmcTkNMFYfWniSpFQIJKjGiVsgIkiQC6WUT4/F1nEZQ76d8sAN2IoB5xHHQhziXkhIfGHDgJE4JhK1/J1hDCRHsjkMSkquF4IPlucCHPi0JsbEY8sHVYrce6VVN+CBcqwHJagB
WhKzYRpUkAmpCBFsEUdtxLWbwC//pOuDT8itdFokeoiFILNocRXoWND8IcKLC2ilF5WgJh3IJqp2RKI9xBZlSIBVaASaQ2gyGdOtlZo/eIGu9gFYChCWqRgSCWc0kqOPDsk6BBIdUGcKG8bQ4foA5eZdMCwqRASlOyDo98pTwQXMfZ+/Z4xsiRZgjPUrK0+/8AubXYjLNmTY8+0nTgmeUCPzZ43/R2PL432NzxHNLGdaXnyQCgIga4NIhpccbVxMBgtWLqB/kkwH6Xw6Cf/I+Y+0e+72eg+PzPK3noiLEAmaxeO1Fdkf4C6629hJCnO2Zi/fwGEknnDMQso0pcSZnOttDKoOTkjK3JBmkOijSOemBgEEYUyehecdo2CZJdIPbnLmfxfnFR/LQLyd9/CJigV84fyQshuPDviH+/ovno8W+foqH/LvMX0+sr/eNfaT1niRRKA0CgyQopTuxgPmc6fMRizqtW0jGdc3HzXt/pn1O6yDE6TvQwC+1xckv6qN/E6HlF/39k/3+lLwSTxH/jYdJD6bGiK3SgKWcjfHeonubVB4kCqQLCm1mjFdLCFSwJ8TjbSA74w1eKIQyVNaiRco83UQX2wgvcNkA63zYq4gaORTghWQ2OQYErf75RpHGA1Zn4AzGV0iR1v3s361//JuWXwvih7UWU5TgLCYxWCyYCqkVYKmUZl5McUJzkraxXtULasew22FaGFy+j6uqRhJSaIVXjkQltKxhw8p6QZ6QSo3rA9/ehL5GvvuQo90ZG8WM+WxOazxhmOeMhWCCoPIwKSxf3roIvT57e4+4t/2AnTRlPBxiun2SVpvl7oDNlTWGyyPWVjconeY7t+7x6GSKnR0xmR6ysTYiPRey9hIB59ZW+NO//BEnt29RVBXDi1ewSYatwW5TluActl78JzpDJi1sLcnorA0TsQrymlooLB5jwkAcJ77SFHVHjf60gRCCqEHzRC9klUkSHfyo8/mE8uiYVrvN6uYWxwf7Qa618nVwWqNaGVUVAqaJ8CyvrlN6jbM5CIWMPl+itqQgWFAMR8s4oahMRZooOv0lZnnR2GNIQCiFkIpuEgLCSZqipCDVI0Awz3OszRkO+gihODo+YjafUs5njDoZ/X6b6WTK+MFDXj485NutLs+nKcsr66jhCJFkgdDRap0SPza2grLHdBb+FcC8qEkecyhzKAqoKjg+gOkJzE4gn4IxYB1JPmWjOiF3BQ93T/jpz3ZQaZfzKy3szCCFxMyPuYCm92hMZ5Yz2J/QNYaR87QRaKlCwAnIpeTAe/al5wiYSaCbYI9zlIcE6LRblGWFrQLpwEWiBGFA1niuJAkC+Im1nAgYpoILHcG0BOEK3v/rv+Q3vvxVVlbWeO/mR1x59gUuPnjEA2dDBo+3WO8wzjKfjkNgXGuMqfjJj3+Mm4xpK8XBZBY28t5hnKOVJqz12vQTjSnmTNMhq5e2mJwcMDs5YPswZ/PiBe7O3ufNasZ12cPXspQ+kivq//euCgo1IgQBooUIArwEas+0cPMeL32zUIoEkKC4YevPwlV8vejycfNO7QEW5S+9bYgnwerFYxvCiWPqHUUMcgkIHrdBxkrKQA4RSiGCxmfYH2sdiCkuqInIGiiPksf4IDNXr+1DNbyv5dVqGc/aF9dZ6vr6IDkqFVLUdRYKLWF/9z7ZWkLWa1PZMmiY1GQZKUMGWlUarJBIKSiKQCTSdRs76zA+2E45RJ1tIwOhxdebHwkqBjIWV6ML6ytfSxR+Wn51i7W2IRR0Oh263e4niByRbLC8vNyoWsQs86gIEgHMeM4IKi8uRqO1gxCiAeAjMB+BrZgZHZVByrIkz3O63W6jJlBVFTdv3qzJjQHs3d3dbcD1V155pVESUUo1kvcRmIvgWswYn81mrK+vs7Gx0ah7RHWH999/n6WlJbrdLk8//XRD2mi1WrRaLebzeQNcCyEYDAa88sorTdvE++/1erz44otcuXKlIaHked6QTL70pS9xcnLSgMeRjBOJNt57NjY2+L3f+72GmLG8vNyocMAnszSiRUzczLfbbc6dO8d4POb8+fOsra2RZRmz2YyTkxN6vR6DwYDnnnsOIURjw3B8fIwQgmvXrjEYDBqrGyklGxsb/Of/+X/eWOU8ePCAg4MDnn76aVZWVtjc3OT69esIIfjwww85OTnhjTfeoNvtsrS0xJe+9CWeffZZsiwD4L/8L/9LvvOd73Dv3r1GnaIsS5aWlhq7lcU+F60UYn2yLKPb7TIcDlleXm4IHpGMEQlHUV0g9teqqrh8+XJDElhdXeV3f/d3efvtt7l79y7/w//wP7C1tUW73abT6TAajRpyQCQ3AU09JpNJ03f39/cbJYmoCPPNb34TKSXvvPMOf/mXf8nm5iY3btzg0qVL/M//8//MX/7lX/JXf/VXzTM8PDxkOBzy3HPP8fTTTzfvV+yHUe0kSRJu3brFRx99xPHxMcvLy6ytrbGzs8PDhw954YUXuHTpEv1+v3lHJpPJGdJHtDEZDAb88R//Ma+99hrdbpfj42Nu377NfD5v1Ahi/eJ7HNsiWiX9xm/8BkII7t27xz/+x/+Y559/nqtXr3LhwgX+/M//nOPjY4bDIX/v7/09zp8/3zyL2J/juxDJJ9vb27z33nv8k3/yT+h0OsxmM27fvs0bb7zB3/7bf5vf+I3f4Ny5cw1R6Pj4uCH8GGPY3t7mn/yTf8JwOOTLX/4y3/rWt3j66af5p//0n/Knf/qndDod/uP/+D/m1VdfxTnHP//n/5x/9I/+Ec8//zxbW1ukacqf//mfY61lbW2Nb3/722esa46Ojvjggw/48MMP+eijj/id3/kdVlZWmM1mHB0dMZvN0Fo373VRFGdUSeK7myQJzzzzDFpr7t69y3e+8x0ODw8bNaKojhOJF5PJhE6nQ5qmjQ3PbDajqio6nQ5Xr17l8PCQH/7whyil+MxnPsNHH33E66+/zk9+8hMuXbrUtNOlS5d47733+N//9/+dhw8fsr6+zuXLl9nY2GAwGDSAWNxUL9rMOOe4evUq7XabdrvN+vo6zz77bNO/XnrpJcqyZGtri0uXLjXt8Y1vfIOtrS3+4T/8h/xX/9V/xdraGqPRiHv37pEkCV/60pf41re+xcbGRvMO9Xo9vPfN+NRqtfjP/rP/jD/6oz/itdde47/5b/4btNYMBgPOnTtHWZZ885vfZDAYIITgjTfe4PLly1y4cIHPf/7z3Lp1i62tLUajEZubm/zBH/wB169fZzAY8OjRI5RSvPTSS1y6dIn9/X1WVlYwxjAejxuLrDg/RgJgt9tleXmZ5eXlhmiXZRn/xX/xXzSkpizLmnE4klmi3Uy73W7Gjkj4AZp5ot/vN6RKIYJ6UJy7ojpKJMotjpefll/P4pxjnk8oipwqn1MWOUU+w1YG5wq8tTgf7DKEFHjC2JToJGRaypDQI5UKwWoJQiik1GglESokD3jrqGSFlx5JEiwQjEPXgKoQoLQC53HeUNmKRAraWUaq6rWaTkjShKSVkbXbJElKkirAo2SK0gqpghONdx6pQzZuAK
I91juEhyzNgp2mD/ttUxnKoqg5/xKla/KmEKAV1geVrVaSkiYJRV5RlhWJViGpx0mMDVZ9VWWYjAt0IijyHJUqWu0BhweHnBzNGQ17DIZLGGNoZym0JHll2d19RK83wDnDfD5BzCVJFuIc1nk67QGZ8HgnqfIKLYOFoTcW4yvSTFNVQV2y1xtxfJTTy+Ys9QfsnOSM54b9qSHrlCHD+OgQU86QmUaoNARpvacsDFqFfWCZ5wgZgIGqLDFeBVlnCd1uh6w1oSyDxamoFTaFDFCU9Y5EOZZWWmyut+lmIHFkUqCFIVOCRCYhylQ5rAzJUNZLpE6R2iGTYIFcWsN8ZpkXgryU5IWnlXYZdVZRzjM+OcDZHOmpVXaDhc94PGU0LDDG0uloPIGk6WwgP/Z7XbJ22oyDzga1vsoYdKLRImWWH5Do4C0+r0mWRVEGi5e5CfGeKmc06CKEZHf/mDRpMx0fh0QeJ5jOp0zznBvXrjGejClKw/HJlJWlZXq9Abdu3WIymzEYhjk80dG+xlHmE4QM4J+QKsQInUNKHWyRS0tZGaRUoW8Uoc2MqZBCYX3oH0VZ4Z2hpVskMpC2vNA4K/AOjC2oqpydwx0+2r7HyfgE7zxOJkxtgcBihCRJW2inUFJTeUdS5OS2Ioq/exdUaKwTIHWol/cIQkxQijCOCBnsnKQ2qBr8SSpFlmh0IpGlI9GKJBNkbYlKZCBsOQPWIq3ASUflPZUPRHxtUpIkBaGpnKd0Aq8SEC0kGlQAxpwLsRghHItZ61JKkCHaJL2sY7F1xq4I9jIRREXUWaRCYLFoEdQ7Qqa+R4qgkiPquIkkvFOyTuYDEC4oeaiapIGoleh8HfNyIGSMfdVAcZ18E0gsHik8wtXZrKKBX5sIzqKCQCynQJUM4EsDL35aPi2/2iXE1X2NZJ4CnzWkfQaqD1bXp2CnrWZkMqHTH3J0tE+mDfbkI7SVwXZOBfs0pMQLia/VOror15lt/5z0/OfxqgY663HR43HGh0RgBcJLtFQ4AeVsyvToHuX4AX0xZ7D5WYpiipl+PxDqZCCWNcSDSBJ4jCAB9YgQBJLCmLQYb6/vM/x+SliIcaLF832CpCFqW5kanX5cSWjx2OYZPHa+x1VEvPd1HDqOVY+RHiIBIY5zHjLtkQa0VrSExpY1qMwpVrB4/sWfH6/DYpstfg9C+0U8P/zlVFEpAvRR8XExeH5WDeKT12goAvX8Eomzsmmr+Fz8Ewk1i8mZPnj1sb93xHhm8KUhr0QNiGvGhcWUFi8CSSYvLLNpQSsVTCrLwwONE0GN/FSBI7wINVfhDOnkDPmB037wJOJMU+eF///kffwie5HF9jsroR77YSAtfJIY8cs+e3xubO631gJD+GApmKR4X1vNwZlndnry5ragVmOPGIuIxMqmDTyNmp0HOE1qarAxwvsakTDEJ9vk8X68+Nnjf39Sm54t8QYWnvtj1wr1k0hsoKx6h/We+XyGWrmCrfFoLwzChns31mNVhsBjRFD489UEYfsIpfFeAxolg/WfVQqfnUMXd7GlhrSFcDXG7Cu8FZjxNloVJNk6RkqCZqRDCYGVGS3lsZVBproeJ2lwyX8f5deC+OG9RwlYGfRwSYeTyRhnPWWZ01np011Zp9rfRpNS6V6Q3xWefqJ5+uI5BktL6GiPQJi8SmPBO4SDlcmM7GAeOpgIBArVbyG2nsGP1siW3mTmLS2CjYHxsIfA6YSJMVRKcVRZyt1HLGUJ3fMXudhqM88Ldo8ecf9wj9lwRKvfY3Vjk63zF2i3OpTOUxUFr9sx5aCHUj2QV8Iw4Gr1hDLnm199lQ+9Z/bhTZLJCavnLtAerTA1JcdlxdRaSh+IHGiFzDp4nSLSFl5F/0lbv+ChHVS9GHDGnA4qYaeCKQoQtXy0c2gtQST1pOHxziAQwTPWWaoyZzbL0VqytLxKXsyZTsJmznoPtlYxEJ60ldLqjbDzHOkSHBUSMKaW7hQAnna7RXe4zOHRIThLfzhCJimiKEKgUwVpR6lTjDV19mFKr9sHIRmPTyiLnMFgSH8wJM9z9vZ2OT4+xpsi2I0kmvHhCfLRLr+dF/zdfp/Lq2tko2VEqwUyCQodOgnKHt1BsHMRIkR8lITJcSB85DmYYGtCNYeqCMof45rwkc/C79NxIIZ4zwsXV7jmcnb/4ufItETcPeK3j0/oH8/plBUdW9HygZFZoJg5R4nEC4/1YJ3FC4kToAQM8LScYw0w3YSqozk4lozxGA8oTZIJ5tU8DLf1g5e10mRbCG5oxcfW87q1GAGXeoILPcdHBx7jLePtD3jn9Z9x46XP8v1//Sc8863/iIsbb3C0W2JdmIyVlCCDn2+Rz5nPZkzmc0xl8cbQSzQZc+Z19s9qv8fWoBOkXodrfPzxLZY7GSvnr1DmYyrref+dN/hbX/ki2x9/zHa/zYUcFlZ7NFK/IihVWFeTlgQ1USj8FzezYWSOTIlT+oivZTdFnAFdGMD9wiIo9NCFCVy4U8eZuMD29dJJxA21Y+48VogQZEOAUBhTQh0YjPcRPWUFrpb4Uoisy+qNz1OZnP2TQ9rVHHOyh8vneGcC6aRejwfvvFCqqgJPIH/4YKEivKMqCoQAKT3WEayvtCZJM9qtjKCjGs4ZlVW0Vox6HaYnE6ZFSZLoIJnrg9WScadivEKctrT3HqEkSf1+h3/qBZdbEJqLGx8ha8KOONPmn5ZfrbIImj/OBF4kfTy+EYuAbPxOBP7i9yJ5I6pOxGMi2BXPEcklMRgbj49Zyd77BvRdrGOn02ksHqKNSCRorK6uNtn6wBnrCOdco+CxqCwQQeZ2u32mXVqtVlN3CFY0UdUkAu3xOlFpINrKxPuIRBqlFMPh8IxNQlHPpUmS0G63z9g9ROLKoupKVEmJAHts93hMbLNFFYxYt6hCcf369Ub1ZD6fNzYZsf0vXLiAlJJOp9N8P6qj9Pv9M7Y5VVXR7Xa5cuUKS0tLXLx4EQhEl3a7TbfbBeDChQt0Op3G3qPVatHr9djY2Gja2FpLt9vl+eefZ3Nzk8lk0rRVt9tlfX29IVos2izEEvvZ1atXGyuONE2btlhaWuJrX/tac65IJIoqGpubm2xtbTUkgStXrtBqtbhw4QI7OzusrKw0dkWRPNNutxFC8I1vfIP19XXOnTuHtbapi1KKwWDA17/+dQaDAevr6w2R5TOf+UxDpsqyrLG2+cY3vsHe3l6jytFqtbh27Rqj0Yhz584BcO3aNTY3N7l48WJjBxIDNzdu3GgsL6L6SSTs3Lhxg9XV1QasX3zXI7CcpikbGxt85Stf4d13321IJt1ul+vXrzObzRo1nUjKevrppxkOh02fEyIoH7Tbba5evcrv/M7vNKovJycnvP/+++zt7TEcDtnY2GjA6ngPkcwUiTuL72lZlo01RyQ4fOlLX+LFF19ka2uLqqooy5Ll5eWGzCOEYG9vj/fffx+AGzducO3aNVZXV3nppZd49913efjwIa+//jq/9Vu/xebmJi+99BL379/nww8/ZH9/v+nLDx48YH19vVH2ieNGfF/ivezs7
6qx+T/Dwi0ivdOG/68Hr2cwir6AgpWR+fp5ut4tSisFgcGix6okis+oSs5YScRzXBAtvQ/CwMsksOcNn3nvFAX+d1lrSNK0tFjxIlqYpaZoSBEFt6VIUhSPyiQMLHH///rsPZ1B5IohX/PCKD3mes7CwQL/fJwgCiqJgdXWVN998k+l0yu7ubl0Pvi5qhZ6KCOLVP/yxPfHFqwF4a51+v0+3262f9awqS7vd5tixY1y5coXBYFATDDY2NgDo9Xo1WD/bZjyQ6O/ZP2eAOI5rQo/PwPcEoF6vV6sTeBuE9fV1+v0+J0+e5OLFiwgh2NraYn19nSRJ6nP6+57dtPjn7d/PsoydnR3W19eZTqdcuHCBlZUVwjBkf3+/Jp4Mh0M6nU7dLowx9Pt9FhcXWVxcpNPpsLy8XJ+v1WrRbrcPKbj4z7daLYbDYW0Vc/fuXT7/+c8jhKgJHUVRsLCwUNvpvPbaa2xsbJCmKcPhkKIo2N7e5tatWywsLNBsNmuSkf/p21kcx4fawmyf9Z/b2NjgypUr3L59uyYKhGHI+++/z+XLl4miiKNHj9aEpPF4zMbGRt1+vXLL22+/zfLycq2CkaZprfxx//59VldXOX78OEmScPfuXd59910ePHhAnue1ssV0OmUymXDixInaFufy5cu88cYb7OzsUBQFvV6Pvb29WvXH25R48okntSilSNP0rz0nTSYTbt++zZtvvsm77757yF7kwYMHtaqNJ3psbm4CsLy8zNWrV5FSsri4yIkTJ1hdXT1kqeLrfradzvZTpRTj8ZjLly/XpASvqJJlWU3siKKI4XDI7du3ef/99+s2uru7y/r6Onfu3KmJRgsLC+zs7PD2229z7do1dnZ2apUUb4Xy9NNPk2VZPTb541+/fp3XX3+9VgEpy5Ld3V0++ugj3n77bd555x2azSaTycTtf/b36Xa7nDlzhjNnztDtdrl37x7vvvsu6+vrnDx5slaUieOYvb093n33XW7cuMHm5ibPP/883W6Xra0tPvroI44dO8bq6mpNurtz5w57e3u1RZGfX9bX1w8pu0wmE9bX1zl+/DgrKytcuHCBubk51tbWWF9f5/333+c3f/M3SdOUra0tPvzwQ44dO8azzz7LE088Qb/fP2S95cex2THWP9O/Tibs7HF6vV6tuuT7ZlEUfPzxx3iLrTiOa/up8XjM7u5ubSnmz720tFTPDZPJpLZ68gTELMtqa6mlpSXOnDlDp9MhTdO6v3qFFq9o4tVjvKXTZ+UXr1hrKbR26o1CO9leXGajrAj6UgZIFTiwtlI3UCogimPCqLJ2AKfQaDTWakrhibS23pwZWyWVWOPAZaNrYD2QFk3pdimmxBbSqTVGpgL7/YamcpyvlEidQqezEK42fgg/5iqFCkMUB+Oxs++0iMLW4LG2FikCVGUH43TbwQhn0VvbLqAQxqIkGGkQSmDLEgsuOSYSgFMAwZbYIiPVhjCKEZFC65wgjGi2EoaDgnSSMypzWq3E1a2SSBVibEkSNZC9iI2tXaZZyfHVoySNmP2dTbrdFhfPHWea5RitGA3H7OzsMh5PaDYSev05hLVk6Zi9vTG9hQ5NIRntDnlwZ5siH1EsHqHZ7tOfWyJpNFFSUFqDLQoSLZibXyCMQua7Y7phwP5kyn5acn9vzF5aMJxaBxgIUwW+SwJRVZ21SBkicMSAohQUBeTaVtY+Fl1WCQfCqX3qMnWqtjJwgL4ARUkkBHGAUzuwwgH0MsDL8BtZJXmVBmEtgTFIFTj7HmsxuiRNcybTCcv9FXRpiYIYXeQYa0iaLabDEaPRmHanSSOJCQPX/o3NsZQo4de1MZO8BCkZjoZESUBRGoQIEDgCQitJmI6GjKcTmo0mWVEQJzHj7V2SXpdeu4VpNV1TBsbDEdYalleOsL29TZpNSOLErQeERgpNI44wSiKlQZuMPHegpNEpiBAhArTOQThyqhKVchiKTrdJ0nDWOW4fphG2xJRpZV0TgnXqp1Y4qW6nNmGQypJIgRQS043JJyV5ZkjT0smsK0OjGdDtRrRaEXEkiCNIIkkcC6JIEcUBURISRiFxEhLGChmCECWmKEhHIzY3dtnaHpLlhjCyBGgasSRpRASBQlo3tmjjyPil1S7MZS1oKK3EaFurmDhCh7NFNFZjhZMC90CjsZZSu1i1O7Zx9ygEaFGRQg7IJtXqjYeXmAdAyCHsc2ZwrYJK4uDz1W/VngGcGtGBKqzDjyr9W+uVq0Fab2UsKtvtKplOqyrhqFIVMU5turZ7CdxPZVz8yBFAKpsfqnhvpfpssJ+tAz4rv1DlYVKCEBKkS+JTQlHqlEiAUgl2OCAOAsKwxSQbYVvHEaMPsffepH/6K+gwwGAIcT9VlXJXH98nFGKwskALpy5SqABhYEqfcPEpxOAu461dkvmnmDx4HRklhHEPm/aZmoCGWkJFCVpkYERFTvikuuRDN1oDzZ62MTueHVIZ+JTyCXD4EWCyj8U/iliDOAy7W1/n9vAY6ePR4tD1OVUTJSRFOkaaMSJvkhhDs90gUBmtbgOdGraHEhXiEkzt4fuaHYO9+olLhjwg5cmHrv9wDPSh+vqUOqpWqXXipU8E/evGKWrQfxaQe+j9+vpwGJ3FEgSKD69u0G7FnD59nmlpEDLEkDq7wUbsrlC49XCA4cjySf78MhxPIZBQlo8mfdhDbenwfT/6/qoa8J/xVAf7iDbyqG9/gkRkH3o+n5x/P/l8/iriyUOkIAG2FMSiQNoxRoARksWFFqXWPNgs0EaizRhjQ5ALYGb608Ev7rzCzvxeqazNjjuPvCoOv/8ztptHPYeH+2J93EcQQg4nLM1+Rxz6jrsut1ZCK4R1SeCF1rSjgv39GEuJtIHrt4QkYkRIzkQs0eA+QrbBbLv+Hp0gLO6RpwYT9wmsckTkqt0L4YjJQoDSFRooFMa6PenmNCRMLYXsYMIGPZGjc4gaHYx2tsfaZBUW9v+b8otB/IDKa7UALNJaSmMpS5ehXoaSvCgQuKnP5jmJ0ARoZKgIVUAUGmKRMcUSYEkCyzhzC3MlpesvVQbHYJIyGI94O5twZ29IlDTJh0OeDEI6QYBIOpBmWJOznxf8x7Ud/tN0StZuuylYVgt34XwmbeU/+eYUXt1Y4Murivl8kySAaSZJNaS55sbaTXbu32a8v810MsZYy8ryUdoLR0iwfO1Xfo2w1WWSZly7fJmTvSZHT3yRYVqyu7fJaDLm+6++SpDuc/HLL5Mb2N7ZYbC/x43r15kOtpE168tNQHEUsbi4RPvIMv/JWH66u8s/QPKlUJGUGdI6NpTzsQRQHIkj/l63w58OhlwvS5aM5avW8r1Gk7uBotROhtVCvQm0hhkupHtSUrpJS3p1CdyziJQkkgGyyFlIU/5eO+ErR+c4vbxAO1GQpogsd4QPT/rQB0BioQ2ldj+1FZQ4mVSJ5LkoYK4h+dM713ilv8ilY8fpbG4S5lOkEtg4wISSUlom23vsFSWbkyGjIifSlnmpuBiGHI0aNKVkogR3TMnrZca0GfDkL32VYRTyk7fe4uMHG2z
v7rMxHFc+uX2Gw4KpLkkCyen5Fjup5bHPP8tv/m/+PlfffYO1tXXG4wmT0ZCt7W3u70/43HN/i7u3b3D96lVsWfD4yjKr3QaTvSGjnSFxVtCWgrlAMZ8ktKSgGzSZiyOKyYS03WZjrstgsU/r1DHyO9eY5IJMtJBdSWKGyNgFoDA4P0/ryC1uMRO4p2McgObAfLfMEJUHnBVOZvPUQoPdzQ0ee+ISb73xY5579vPEN65zb22TtQcPeG9tj9/4pZdpRII0tW5jGcQE7T5B0iUb75IbQxhHhElEZjSlhe1qvugaTVlmNI3mhUDwnd09+p0OVz94n8fOnePKe+/xnkg5KQJC4TIunFSnu3KEC+2JKnvDSlGzJiUQWkFZL6UszvbF+R0KayvbFxwZQwiX7mQNQmtn1WMMeWXvMjQlE5w2UYYjOoRRWGVIuYnGVptio3XlXV11EkwlJlQpk1hddR436RnvSSbAs6pdgMJPqMws/qoghZCOHKVLCuN0OYQQRGGMBVrduYpMl1dBS9djpRCE0iK1IAmcLGxe9TUhFejyoYXUAQ+Vus/7Gq7GH2ld1hyCvc0HjNp9znV7RFHEZHePyWT8mdrHL0CZXTz6fx4EazQa9Pt98jzno48+ot/v49U/xuMxeZ7X5AulVK1SAE6dw78/qxLhiRMedJdSuqzICswSQtQ2Lv5nt9tFa00YhjQajVr9QEpZkwr8tXvVDmstYRjWILpXxyiKogZEJ5MJzWazDpzNqkt4IC5Jkhp0niW6+HMCNRFjNoO80+mwv79PlmW1Wscs0abT6dTXUpYl8/PzdDqdmqgSRVGt6uHVRIAaCPVklKIoasuF9fV11tfX+f73v8/x48drYP748eM1UWIWRG+325w4caLOyvfZ956AcObMGRYXFxmPx3W9zJIfZtuKf64eeF5fX+ejjz7iwYMH9XmttTXAX2edCcHu7i43btyg0+lw+vRplpeXawuO2XbTaDTqe/bkIl//YRgyHA65d+8et27dwqtNeNuVfr9fE2qWl5f54he/SLPZrAlKjUaDhYWFWjXlxIkT9Hq9Q3ZCnigRxzHPPPMM58+fZ2lpiel0ynA4ZG1tjXv37mGtZTqd1pY5X/ziF/nSl77E6dOna1WAV155ha2tLdbW1mi32/T7fRYWFnj++ed54oknasKRlJJ2u11bpXhlAF8nURQdIjWlacrNmzf54IMPEELwO7/zOxw/fhxjDP/hP/wH3n77be7fv8/du3dZXl6un1uSJPz2b/82zWaT/f193njjDf7oj/6I+fl52u023/zmN5FScu3aNd59913+/b//9/z2b/82CwsLxHHMD3/4Q959910WFxf51V/9VR5//HHG4zFvvvkm//bf/lsef/zx2mri29/+Nmtrazz77LM89dRTrK6ucv36dTY3NxmPx5w9e7ZWw3nqqad48cUX677qlVhmx66/qlhr+fDDD/nRj37E66+/ztLSEt/4xjcIgoC1tTX+y3/5L7zyyitMp1O+9rWvcffuXX70ox9RFAXPPvtsbZPhbUC8dZCvt9mxx1+TJ314G6s0Tbly5QovvPAC58+fJ8syPvroI77zne/UCj4vvPACvV6Pxx9/nE6nw+LiYq1+8uGHH/If/+N/pNfr0Ww2OX78OO+//z7f+973GI1GfPnLX+b5559nPB4zGAzY3Nzk7t277Ozs1BYw3W6XDz/8kFdeeYU33niDb37zm1y6dIknn3yS73//+/zwhz9kfX2d06dP84//8T9mf3+fGzdu8Morr/D+++/XykaejDEYDBgOh5w/f54vfOELdV8Jw5AbN26Q5zlf+cpX+Ef/6B+RZRk/+clP+Ff/6l/xve99j7Nnz3Lq1CmOHDnCjRs3Ds0FnoCzuLhY16MfU5eXl/nqV7/KSy+9xKVLl0iShM3NTX784x/z4MGDWsVnfX2dtbU1vvWtb/Hss89y4sSJWg3DK9x44s0swdHPVf65/ixFa83Ozg4Ax44d44UXXuA3f/M363aR5zk//OEPmUwmhxSljDEsLCxw6tQpvva1r9XjerfbrdU6vH2Vnye01rXijyeRhGHIN77xDR5//HGklDWhxKsq+Xa+trbGwsICZ8+eJYqin/n+Pis/Z6XaL0okykqkqawzpCAIFUHk1jKBUsRB7HYRyo1rWjvrAWsNwlSKohYiq0FECBk4ixYpsSZEKIsV2gVDhaa0trKgcIQSbUtKKwgr8NkagxElRgtM4RQblY0RkcS4jaSLp9Z7RulURoQkUC7bVusDCzpPTBW2UvQQ+BTICogwBJ68IFyAxaKQQjmrF11SFDkgUVKiEOR56mxqpAUUYRwgTEmgnIWLsRCEERAwmeSEStDtdEmijOl0TF5kFIWk0+mS506tUgpnLbo412Frc4edwHJksYNY6JJmzqq22UxI4gbHVpfZG45Zf7BFlhYV2SInbjbY2tpmZzDm2MoCp88cYziasLu/T6c/xo41RZEx15+n1ehU+3sFOsO0YqKoT7/TYnmpT2Esg+GYe/fWuLcz4P7ehO1RwbQU5JUKqkFico0VgtKUCCFRgXTWwtpZsxhhKE1JKFSVxGFQQlR17J5gKEAqizCCSCpi6axoRPUdhK0yLSWlUUgUWmlsWZKEAZgSbI4S8kCR2BhG4wGd/iLTdIyKIqQuKfIcXWqSpEkYBsRJjECjdU5e5ESRxEP/RaFJsxxdaJQKGKdToiim2+0zGY1BCibTCcZa5ub6jIZjmq0WQkCaTTjePYZSktFwHykj0rTAGk232ybPM4zRJEmLULm4TZGOiEMX55EiJAhlRc5M0MYgZUwURWR5QWkspigR0qmPCCGJohhlgooUX8lmG4ESIKwjYFmJy7SpCFHK6IpEpQmQGImzem7GzHcto3FOWbrks2YSMN9pMtdp0IwVjVCQKEsUBQRRTBAHqDhARRFREhMFIVGg3PM1JWWWsrUz5N76NnvDMdKEBEITR4JuK6bdahFVezSno2qrn4bKzMmpwNgZ9VRrHQnE6Cr50GU6i0AipEs/09opVAvhyeMCKwOEV5S17ielAzaMkEjxSULEw4DK4bVnpWKLrWNBnqxUfQNvWWRtRSy1Xgm3Wj9KVb8/S16TVfzIGINVpl5r1mSPSqWsfk1rtAmwxhGjZve2/qe1psp8/oz48Vn5OS+fEquo3sIiMUYiFKR6gpIpRbKI3t0gkKAbLdKdu+jpTfoLp1D9kHJwC7V0nsBatBWgBBj5iP2oQBBiKFHW2b0rW1m1CzAE0HuMueIBw+2PCBcuYHeuUi48jsWiqrHCEUuCGpT/WekEjoAQcKAeIGoSWZ1YBTNEiYP4tfu+Kw+fzysc1O97UPnQhx5mzjlCh/C5k8JW4G6lmF2dyOIUFaxwCeCogLCxRBy1sMWUfv8MYnqb0pYYA6oi5HmQXfj4/SdPPwMEe6uQmXtwJ6/q4eDVR4Hlh2/TxdGlAIOfmyodcOHquz7xI8iEs/VX17WPK4oD8uLD3xEC0I6QXOoSG3aZ5hM+uH6HcxdPwAgCKdEl6NIQxjElzuYwDCMerA3ZuDfmzK
UuSjrsw5Fv7KFH9yiSxmx/+mQ5IDp6pHFWEePTyqFjWTtTVw+zPGwFpzz6ujzJp+Y8HXqfQ33I17WyFXkUSIIQG/ZIS0te5BgypOwRKcs4LzDWuJldgrU+udYXwwH5y1OBfJ242jDVvSlPOpmJHR2wQR++7r9axWM29jT796FqezRbFSEqFTL8M3WfnY2Hu+RlhaBAWkWrURL22wzLIQ09x91cIYVD2RARLTlAWsPQ9lDCYIygVBFSaEypKSQUySpJeg+TW8posb4/f5WyIri5AcO3IIkxmrFtMh/mCGEoTIst0SOWY2KdUezn6LJEyxyZHDyJv+nyC0H8cKBBUW2KHAAbCosJIkpdYAV10Mwa5y0YSAmUVcMyaGORVTZHKCX9ZsAkK1CysnmgsoIAlub6aKEYDfbYz0GUGRcbMV9VAeM8w3YXKHb2uDac8j/v7vCDacrUQsM6N0qqAddPCNoosJp9LfjXt0I+mC5zcUmTyxus0yJMWozSnN21u+xuraOLlLNnzvHMc8/TXz3DK9/7Hs88fhodNlm7c5/RZMIH779D/ORFgqRJo9Wi2WzyweWrMNrm/NOfZzDOINwnT1OKdIIwuduoNhLKsqgYdoY4TgjjhLX1dea6XYqLj/P/yAsGnTYvf3yd/mCTwDj5VatBSMfg6MURv97r8ueDAe+kExZNwZet5ftCci8WpEajlIDK+xLASif7J61AO7GCagC0zt4F4TbQ2hBPB3w9jvkHzz/LmcU2YrwP4xFiPIbMW7sUUClj2EKjHyZ9GE2JQOPGM1N5Xp5C8N91Ovzp3g4/HI84d+Q484FCFinZeMLYwkAKhmmKNCVzxvJsGHGk0SQVghtFzjuJZLMRcj/PsI2Y/sIcTz55npd+779n6dhp5v6X/8h//V/+K9y6x7gswFjm57pYLOl2gZKKTiPERBH/3f/4P3D61HHW795kOBiRJC2OrBzjxNmS9fVNut0Oc5/7AuPRkL3dXU6cPMqF4/MYbbhxe437G3vcnk7Z77TZ6zRoNhKarRadTputzQ1OnjhOb77PglQ82NxhfZJjZUQziWg1m8SdHqW2XL19h7OrR2gEju1q/axiNQd+W75dH0waUkin7iJgrpPw8Z0rXHzpG3SaTdZ39lk9fZabVz7izY9ucPHMab741BkXEMDpYCopuXflPbqr52g0QppxTBwlNJImgzRlVOR8pDVtbcgoaGxuMq8ERyYNwtIQLj3J1vp9wqefIlCKm1HIvTTjBBCoEBsYpLGEUiFF6CbBQmOlwoShYzQJJ7snwS2SpcQboQkpwTzCwsRaF+iRilIKtHCkj6m15Bh2SsMOMEKQ4axO4kjhcl4M1melaTcZW60dqcR7DGIqG6SKgCED9zhMCTWTs3oYpnTXinCckkolxBM/3LMCLJRFSVbayjLOooKA/c0HbI0zTp08hSl1tVZ0jC8hBLqEYV4iwpCs1OgqAGJUSIgl13pmveT55u53qG6hyirRuuJ8WCiyKcMBtHpLRJ0eIZad0Yh0PHSZV5+Vn9viAe1ZlvvsorHT6SClpNVqsbi4WJMbPKljZWWFRqNRA1FZltVZRJ6sEFQey16Jo9FoEAQB77zzDo1Go/6eJ3B4a5eiKMjznMlkUqtO+OzqwWBQZ5hbaxmPx7W1SrfbZW1trZbK91YnQojabiVJEubn51lYWKhJIWVZMp1Oa+Cx3W5z+/Ztbt++zcrKCu12m+FwyGg0YjKZ1Mc7CKZZJpMJcRzTbrcBGI/HGGOI45iFhYXaUsYrKnhSSxAE3Llzp7a0McbU4L6/516vRxzHhywY/PFOnjzJr/zKr7Czs8Pu7m6dXX/t2jWWlpaQUjIajWr7Hb+RiOOY+fl5jh8/znA45Nq1a4zHY/b39zl//jynT592RLDJ5BAZxtf7bFvxCivD4ZAPPviAmzdvcvfu3doKRErJ7u4uH374IVpr2u028/PzfOlLX+LatWt897vf5Uc/+hHHjx9nYWGB5eVlXnzxxQpwcoSLRqPBrNJIFEWHSDlra2vcv3+fmzdvcvToUf71v/7XNRDvrVc6nU6tcjGZTMjzvCYbeZKPJ6bEcUyz2aw3Qh6YVkrVr0+nU5IkqYkqnpyxvr7OvXv3WFtb49atW2xsbNSWOXme18/71q1btFotxuMxXl3Eq47443tg3CvezFo3KKUOBYO3t7drxZOvfe1rdZ3FccxXv/rVWvXk8uXLrK6u0mw2OXr0KM899xxnz56l2WySZRmdToe//Mu/rMkrL774IktLS2xvb9NsNmm328RxzGg0Ym9vjzfffLMm2xw/frx+Js1mk9XVVT788MMazH7//ff58pe/zC//8i/z7LPP1koF06mz1Ot2u3z/+9+vyVvPPfcceZ7XbdcTzX4W4gfA9evX2d7eptfr8Tu/8zs8++yzBEHA1tYW9+7d4+rVq0RRxEsvvcT8/HytkvI7v/M7PP744/Uzn31+XgHI91cPNs6Sx/x3Tp48yYULF/i93/s9jhw5gjGGK1eu8M/+2T/j9u3bvPPOOzz33HO1CtITTzxRq+74/jYej9nc3GRjY4PNzU3+/M//nPn5eZ5//nm+9a1v1aSG0WjE9vY2p0+fZnd3t76/f/fv/h3379/nxo0bvPzyy3zrW9/i7NmzSCn50Y9+xL179zh16hT/4B/8g9r2qNvtMplMuHv37iHimVcKWl1d5Xd/93c5duxYreDxF3/xFzx48IDV1VVeeOEFytJl4548eZLf//3f51/8i3/BzZs3iaKIJEkO2Ug9nJE3q/Lk1Zf6/X6tXrKxsVGPXxsbG7U1yvb2NlJKzp49S6/XQ2tdkxX39/cZDAaHSCVeRcr3q78O+cMTs7y6R5ZlDAYD8jyvST1e9cX31ePHj9dKSEmS8OUvf7lWw7HWsr6+XrdtP+b4tubngtXVVU6ePMkf/uEfopTic5/7HBcvXqz74dzcHJcuXWI8HvOjH/2IK1eu8M1vfpNTp079TPf1Wfn5K0KACtz+BAEokIEkihOaSZN2s0MzbhHFzu5AKlMRI6v9gnS2Dc7K3mDK0ik6aNClJQicFkKgKhCzNJVmqUEbUUV/FEJVyo/GUhpNrgIHcGpLoEQVfHTez0KGLgO+1Gibo6UgNE5xSSiJDJxKqDZujxwIWZN8tbUYH0yVAhkoR+6Y2Zc5BSnjZCeFRHnrQiGxQiGEwtoCayxJnKCUYDo2ZKVb+0WBoBFHiEASq5CyNKRZThyFSOEUqxbme8SRIg5b5LkbA+JGQLPZ4v6DTe4N9pBScGRpgeWVJaajffZ2c1qtBs0kwCAYj3I+vHyffr/L0pE+x47NY0rNcJjyYH3CxmCXubk5Ou0GEk2pYXFpniiO2NscEjdK2l3BWjqh0+nS7rZpNRskcYiwIUVoCVQEOCW0ZiOglQiWj/Q4tT1ga3efvVHO/UHOtDAYUZBrQVm6OjcWysxgcOoJSoIUhmYUuVic1QRKEgUKW7rfVbUfVSokCSIUljBwJB5vAWOsJbNghEDaCCEUeZaBkCRxQJE76x2lBFEkCQKB1pZWo4uxKePJLoltE
UQNhJSMxmNHHm+1sFZT6hwq4o7RLvBtrFOHAYmQUBaSKGwTqIDpNKW0TqE2biS0Oi0m4yFxFJJECaPxhJMnTyNFwGAwPFByEpDmGQ8eDDl9ytnFCWtoNloU6QTpAjJEccQ0zQnikDgOUUFAiEUoga4sW7QpkIHA6BJERFlaVCSIo8i10Uph1bm0SCyeiONiG6ESaDSV9wBSU8donSS8odOSLHQCynxCKAN6rYB+O6AdCmJhaAhBM4xIwoAwjgiSiDCJiZOQOAqIo5igsjagLJiOx9zf3GV9a0hZGDqRpNVQ9NsNOp0mzWZCEIWoQKGUdKo60pHE8BbBUoCmUsV1awNT6gOgTYKD3wzGaMrcIqVGigAZhASBG5fcUk3U1jeysr1ySVZmBtL8JMlDVHE2anDU69D6z1ag6GwKe40ocoBj1ccUdezOAy91cvTMvlyKCjDGVqQNb3EKXp7dE0FUtSedtZk7TPyoyD8/4/r5s/JZ+V9fOSA1iArwnu2TniTgRnmnBl2MBrTjGCFb5OUeSdxkuvExTLfprj6FinqI5jzl5mvYjRuIMEF25lGy5eDeykq7VnrGjVV+TvFqHUjcGCUEQhjSaJV4sYfcvcnEBoQ7lxEL50EahHAkyTqB8FMA9EOg7yNIBAfwp6iHngNsfXZ8OzimrV44GLIOxiw/fvjzeSJF/XdNbDuwu5LYypeOKoYv68HOzvxX1IpLFXnNSpQekhISBgWNdosiHaKkQcoIoZx1n7eLt8ZWave2tgI7VEezUL3wnAwPxFcGHtKNuaayvXgYfD+kviFtFXsXFZHXVvUu8EQIfPvwz+igJg84IQ8RcXydeuv5A4KAcPcVWKSBQMHxhYhrd8Y8fSLAGKeK5+9dGGd96KxoQAYBYVTyy88Jluclf/FuWYH7s+3jgABSk4RmYn6PVpbw7x+85/4UOLWRGUKIeDRBwVfKw+/VShTW4zGz5zEz9SzqdlcxiWaen0dFDsijWLd21SQkQcC4VBxtNmglllBJIEJHc0g9RNZElpmbE/VIUz0738aq9UStAILv/fVD9285hQu31pWHTvBQtXwK+eNnTYB6xBE/QfI5wIseeoYYrFBICaluEgZNYqHYzyVGlAjhsPue2GViAwrTwQhLQ4xITYtES6QKKEWJCDSqDMnCVZp6E5utUUbLWCGRoqiwTves7aHKsASVgn+GIDDakcOEITcNChKM2aHQKUoXGEp8df9Nl18I4ocQzgsWU0lmCpchjzGO7IilKDKMNSClk2TBKRTkRY4UgqkVHGlAM3CyooF0s6FUgfNGE0Bll/DkY2dYKSz5YBukYG19i2Jti7dGI76WZny0tc4ruzt8b3ebHSym2UJmOVJJgkAhqkBAaXyWikEKB/fulJI/fmD5z+sZhquYIAYpKIuCMJAsr6zyxLnHeP5rv8La1j5vvfceMt0nI6Jd5nQ7bW7du8fZlSWOnnqMVruNVCFxS7Kz/UP+1t/6EovHTlfyiMKB0kXKExceoxQhw9GIbDphOB6zs73NaJoyun2L5cVFjhxZptProa3g/3b7JtvPvsDLH7zL6c17jvxB5YVUAbzNIOBX+33CwT4/yTLmrOWLwKtWcDdJKLRABM6j1631q4CGwkmNVuJUSioCKZwc63TEmbLgf3/2DL986RJxNkRsrcF0AlkKeaX0UZRgDFa7AFBRlhTG2bs4pQ9LqS3a8TqBSsRVWLCSjjH83U6bN4ucV+99zINWh8RCmaVkZUFsLGfCiNWkzbYueF2XvLOzzu2yhFaTx4716XSaxLaNkJJG02UHv/HTH3Lk3m2uX7lKUeQ0mk3OnD7J/ftrdNtN5voOZAkkIEOOnTjL5z5/CSng3MUnOH3mMW5/fJ3hYEyuNd3ePEJaJjmcPHWaUMLcXJ+kAlQev3Cax05m7Oztsbk7JEoSVlaPcOrUMYb7A7a3t5mb69FqtxhOnVrG6uoRilIzTXMmIwcAJlLw7tWrXP74Y1bm+pw9usTK0gIKgzWyWiy6LCchXN/yG15D5d+pLZPBPu+8+R7dM5c4ee4Cb7/xGi89/wXu3rjOxVPHee6px2iIDKOFIz5Yi1TOtkRIR2CQQBiEJEmD7cEOkbQc7TWZpiV3rSVMC85EIWIw5EjLcH9vn24ScePKdU6eOcv1y1d4T5Qcixoo4XzlZFiRU1SAlGDLEq1zyiJ3wQlhEUI5yTyLs4XBzaNaOvIQxtSEDydRKkBBmWdMxhM2hkPu5Cn7wmWH3DeWbRyxjGox3ogj1+qtcOoqtvJ+rckS1Wzs5D7cprpayB3IdlUUlGoz7hZElcGKJ3/Uk5Zf6FaLA2ModEGujSN+CEEUxZhsFyUlWBds9U+3MtHCaE1eGEJpmeTVokxYSgPNOCGfjA8WEzN3M2vMp6scGm1cxofFQpkxnUCj0SKIXXbReJqSjoYPTbqflZ/H8nA2xCzZIAgCoiii3W7XNic+gATQarVqJYK5ublalcOD5J4EMWvPopRidXWVzc3NGvzz6hZzc3PEcUyWZbRarZpg4FUzsixznuozWUxebQMgDEPSNK1JqHmec/PmTYbDYa2Q4S1E+v1+rfhgZgJlXpLfK4FcvXqV3d1d2u12nU3vZf1nN0LWWra3t9ne3q6JHRsbG7VChL/mKIqIoojNzU2OHj1aE2Pu379fkw0ajUatBDKdThkMBrXyxnA4JE3TGrD0WeXNZpNWq8WJEyeYTqcopdje3mY8Htf15oPQs5Y7XkHg/v37rK2tMR6P6fV6zM/P02q16nN5ANM/Y1uPe4fB2v39fT7++GM2NjbodDo8+eSTtUrD7du3eeuttwBqlY2LFy/S7XbZ3NxkOp0yGo1YW1tjMBhw7tw5er0eUkqKoviE9c1sYHNWaSGKIvr9PkeOHKnrqNFocOLECY4cOcL58+frY3qLCW/v4wkGPpve37MnIvl+4QFrT0Txz9VfhwetfRv1z9kYU9e5tZaFhYX6ObTb7ZpI5b87S3KYtRby9e7v3deJV0Txx/ZWQlrr2kbGE5K8lY7vO0EQ1Mebm5ur1Tx8O/QqJP5e/P17MtZoNOL69et85zvfqdUU9vb2akuLvb29+lofe+yxGpQfj8c0m826n/h6mg0yzBJ4PFnnZylCCB48eMBgMCCOY06dOlW3mUajwalTp7h8+TLb29tsbW3Vz35ubq5WffFjw2x9e7uX2WD6ocB61R+MMQRBQK/XQylVEwKOHTtGr9cjyzJ2d3cRQpBlGR9//DE3btzg2rVr9djk2+ZsO9/c3Kzb8/Lyct3OvErTbNtdX1/nz/7sz+px71d/9Vc5evQoZVly9+5dwJH8PBnOt2nf37yyzqyd1SyxZzKZ1PfmyTrewsaPvVEUsbCwQFmWdRv1bdyTrrwSlB+/H7bKmrWY8uSrhz/j+6pXMPLqQf6aGo1G3YaazWbd1sbjca3u4+e3n7U8rAb1aWOj759RFHH27Fl2dnb44IMP+OM//mNOnTpFq9VifX2d4XDI2bNnWVhYYDqdYoyp7z3LMhqNBr1ej6effppr165x584ddnZ2uH//Pvfu3WNl
ZYVOp0Or1WJjY6Me//1881n5xS2RirASgiAijho0kwaNRpNGEhKFAdaWFEWJEFCWjuilLc52Q7lYgZACqSobTqux1VxcFkWt+CGFcFmQBowGXQrKQlfqhAalJFIpSm1qAm0YBEQkIBxQo5QPlFoQVZY6TuFDKoWwM8Foa0H54G8ViLcGrasMPQSUGoPG+Vk7JUlrLWXhLBCUCkAYkBohJUGVraa1RAQBwrjrbrc7yGzKdDImz1JMmdFIEqI4IUkC8JmWxhBGiu3dHRcrC0O0MSRJSJQ0COOQI3rOHWeaMdzZwbYSOu0uaTpibbhL3GyigoAgcK+vP9jn7p0d2q2Yo0cXaHU7RIMBjSxClDm99hxB6ADs0mra3TYWxWQ6YjzZIwpDBrspRTnGlHMkcYMgjInjpAJiDFOV0rQtojih1ZrSCCVL3YCyhL1Rzs5gzLgo2R5mTFIorKIoDaW1aEP17GXlhV66ZxlEREKgrFO5DWQFvFtDFIZESoA2OOoIjuhgLCUCmwlKDSbQjhwkBGEcIYRFCQceyUBUwIlAiJAyKwmlJM+mRGFMsxsjhKLbbtNoNEknQ8rSIBRYWxAqRRLHLoHDuH16GEq0kFilSLMMawVlUYIEqSIarQY7W1skUeJinXlOFIeEcex4VUqSpwVZkRJGimYzYj7pk00z8jSj24mZjPYccYigiilphDAEgUKJAFO6ZCljNHlRIIPAERgqJR2hJGWpCbQmCgICGTgLJ+ESYQpj0chq7gYlA2y9dnOJelJadGkwZYEtS6TWNAJY6EkiERPIgCQOacSCUGliFdKIAlSsCGJn8dJoRLQaCc0oJFbOglsXBRJBlk7Y2Nji/t0dRsOCSEa04pBWI6TdjWm2Y+IkcXZSoVMNkjJCqRCUQssqEcc6RQ2tC3RRYnRexUItCAU4RSKd506RVQYEQUigBNIoTOli1y4hUToinFJObcN64kgFethZZYCHiR+V/a8bVSrAUdagjuOFqApAm1mnqgqWOkCO3N/i4NiPBnCtU2jFk0NsvW6R8kD9w71ua4ut2XX04fWpI4p8Rvz4rPz8lgPigRTyUF8SeCDUxYaVNFitIRvSiBMmKJQuCEJB3GyQ2aMI2YTQKUQFS88wvvUTOv1TEDQpjCQQBS5IPwuYerJA9YLg0Jgg8WoOJQUJwdw5muka5d4NxPo7ThVLRQ7Ar4ipn4oK+1M8dP7Z1z2x7NO+86jPf6JW/T094vMPH1PMfEb4Yx7+1OHfKvKBxKnQoUt0mRFGijSTiNY8KtAoMyS3lrxUxN0VymwXbSWSSv23mv8evvxPqxv70PtetcPH9h9F/nv48163ROJJH+6uDr532DZ9lkgiq7oy7o3D12UP1ENmr9Y/S4FEW81wNCSdau5tl1xYVIxx628rArTVGFMgq7VBFEg2h5bBnubcqnTkaOyhc9RqFjOxjEP3fOhv/5kD0sjB/c9Wh6jP4wkg/hiHCTVUa7lqfvzEfvrhdnk4dl2rnXssZLYDHiJpgjUChMaIkJicgoAi0yzOz3P39l3KLIXIGeRYnaKFIRJhjZ+4aVr6Ex0cv7rf+oyeROMJNYJD+Mps+5/tSr6dzP7+qYSZQ/f6yTIbO/z0GIdv28zUqb8BCcKijSQmohWkjHONlQ1CW9Jil5QOExEgpQUbQjkis/MO5wokhS4JtMQIi0SQRsdocBtbbmDDI2ADhNEQWPdsZp+csFXCssFYUa3/S0osBoXIhmgsnX6bdFxywKD9my+/EFEUaw1pOnYSdFRBUIuzXqjY+XmeIcOAQIUYWTVW66TnCmtJRQBWEjl6uZP1NBphKlknBIF0ATRjoRkFtLotRllBOwwIuk32k4D/azZh873XGBhLaS1RIDl5/Cjr27vueurAlsCWpTuHa8UuaODEA9DWeZJiCgajlKW5OV766i+xPy04euwoN2/fwWjYvn+Hr3zhWY6cfowgdNkX+u03+dIXX2Lu1EWElGhtuHbnPkfaMY9feg6VtCjyDK0LbjzYpBtKXv7Gt5gU2mUND/bY2NzkvQ/eZePBA9IsZW845PLlj1jZ3yNpthnmOX987UPe6PT4O5sP+LrUxNY3uAocBxKl+EavD/sDfjid0DeGZy3kSrIRxuRliZVOKlFVFjjGus15IBVSObDd5hliOODrzSb/569+nRNLfcT2OmLjAeQppDnkGRQFNvekD+0ydsqSotQUtlL8sBZtoTDOEshWix0rFDZQGOs2b0VZclEpeitL/HRzC60tz/UWCIzlTpFxWef8Fz3hdlFQKMkehhzLxZUl2v0e99c3CWVAt9emIGB7b8wPfvgmYfSe87RutjHDkq9+7av8wf/0/0SFAfP9FidWl7m/tsmNrX1+9+89j1KS/b09tjfus3zsFEePrXL8dMJwsM/G+iaXnn+Rt1//Ce1Oz4EhzTZGKEZ5SVZYbCno9vssLS6SZRlZVnD//gZZWnB0ZZVGs4UImtx7cJmsdEosQRCys/GA8XDE6snjlBWJqihSbq89YDIYMj/XpRGJekLRplL78EQeP0sYFxjQZcnbH1zhtQ8/Jvzxq3zr7/wd5ntX2Z9mzB9ZpixL0iyFMsSGAY6npbBBgxPnnyLp9NndXiMtSpABYRxjypR2rHjy+BJFmlPkJelozM0SxtYgi4L9jXWWHjvHg5vXOP9r30RcvcKdRswkSuhKUNbZurhNs3QKHkJilBtDBO76S23IjXYb+zzHygArBYU16DzDLeZdJqa0zvfLthJ2Bjvc29/j/mTKntHkUrCeaW4blynk6RcCQSNpVExLC9KzmW2lMOLE2jAVKaMSbzugg1RSnj4QKnDyqS4yc6Csw0GQEmHrhYxwaR3oQpMZi64m0yCK6LaWaVQygGmeYqwPLLgFSoChsICxTPXBwiZPU2jOoeSUspLx8G3D9XUn6WnxMs22sn2amRQFxAsLxHFIkedMxhOy8fjQIvSz8vNZZgkUHsj1YK4H2OI4rq0u4MAKZRaA96oOs+z4OI5rMNKTM6SUrKys1Aog/jPgbAQ8gD43N1eTDTzpw1pbg3hAfd3NZrO2gZlOp4fA9p2dnRqE1lrX6gLz8/NEUcR4PD6k5OCB+X6/T6vVqu0nPBljPB7XhBcPynvgcmdnhzt37rC9vc1oNGJzc7MmfuR5ThAEtZrI5uZm/b4Qgrt37yKEoNls0ul06kz50WjExsYGKysrtY3B/v4+rVarrv/BYMBkMmFhYYG5ubmaSJJlGfv7+wA1WO8X/b7+kyRhdXWV+/fvs729zc7ODmfOnKlJOF6ZxD9b/73Zze/shmIymdTg5fLyMo899hj9fr/OhJ9VjSmKorZW8WDtW2+9xf3799nZ2WFvb69uV75N+nM+HNAU4kC1w1tbfOELX6jbbZIk7O/v0+12WVlZYTQaHbK+8W3Mg+0eBJ49r38ms2oBvnhQfFa9wIPUnU6HpaWlmmDiVT2UUiwvLzMejw9ZN/hzesLGbN/0x/bF2yM9HHjxZCtPxCqK4hCxyn/GtwVfrx6Ei+O4th4SQtT14slbvvjzunVPxs7ODlevXmV7e7u
2Y/LWR57QIoTg+PHjxHHMdDplOp3W/d4THfx9eRKNVxN6uO39t4q1zl5oOp0yNzfH3NxcfSwhBEeOHAGo1SA8sanZbB6yd/HX7Z+xJ7zNPmv/u6+nWYKaJ9UMBgPCMKTb7dJutw8pCG1vb/Puu+/y4x//mK2tLYIgqMkKnvzjr2E4HNZkhlarVR9DCFFbJPl/e3t7bG5ucuzYMY4cOcLjjz9eKxjt7OzU9+vH34dJLLOEC9/mZtugV2AC6nHXk/myLKvtSfz3fR16MocnTnnyiz+2f8230dnz+vucvR5fz0qpetw8ceIEi4uL9Zw0O475tuXVQ44ePVqTSXyf+FnLw8SPh/ukv3dvFeSJH6+++ip/+qd/yvnz55mfn+fu3bv1/DM/P1+rovg+59V/kiTh3LlzfOUrX+E73/kO29vb3L59G2tt3d+SJKmVhHq9HkePHv1r2dh8Vn6+ihSSdqONCkKiICYKQ5d4IwVFWaCN64tSOdDaZ8K7LncgyS2rLFQphUsgMBZTGrSq+nPgkkqUVGhpXKzAVrYMRYHRhUs4kDjlglKTZwZdOFtRXWV8GSMIhELK0KlaShzpROCUQZSsYkymJvaXAoQxVbajOAS0mmo/5BUD/JiqaltcR1bQugDhLBcCJRFxAtZQ5jllarG6kkAxhrIosSUoUaCUIQgsQRwRNyLiRkySRKTplMH+wAGyQGkVWoRIKxDKsrjYQRcNhoORI0ju79LrJigBg909ZBAh1IR+p8V8f4WtrT3yomB3sM/WYJ8kbrAwF6CsZTqcIgPhJMCNIAgkVoVoAkxuUC6NAMQEtCFLGjSaTeI4IYqbBDKiGQZEKkbnBY0gpBEHmNKSZTn98YDVXDMcTtgbTRiPc3INaVZQlBpjHWlBG0fcKLKcdrPplH21awtxKFFWE4YBgYI4UoRRRBgljKdTdAnZNEMbmOYFQkMmYVCkWBRhEBJEEVYbhNIYYSmNYVSUKJEQa81kOmQ4cNYjSVKii5I4cgHp0uQYU61/qvVss9FC6xKrLVmeYo1TK1aBYjKZYqxLbgJnzTqdTCkyTSNuIWVBVmYuoUUYrCnQ2lTxNkun3UYIRwa3xtm1dDtNynJCVM2b7XYXKZ1UuJBOol8FCdO0JNMGaxWlzgkDZ3ditCGbTmh1IhpJgrXaJV+VBaExzurE+litQQiDFU6VVCqXuenRJqOqdh8qQgxKaAJrCRJJp9LLFlIipCGKBEmkCGNFsxGSNGOSZkTSiImTgCiQSGsxRYopSqalZndnn5t3N9nYHWGMpNkI6TYT5jpNeu0WrUaDKIoJw5ggVAgRIkUIMjyIb3lFi8qCyegCH9mpiaE2d+rCQoIIkYF7xloKbOVCWvdzodz4Zayr94r4cTBnz2YOH3rF2RNXr0u//6ne9e8pPMhavS8VVusKJPLWMDMJ8D5K9Slg6izgM7uW8URpv04Cv653BCIhDtZxUnoAzVSA1WfEj8/Kz2dxfU7W2AMWjDgAUSsI2iUou1AwphwzkgFieh+TTgl6q2hCgv3bhHETqTKU6VIKQXfxHFF+m3QcobrHkHmOUT7SfHAN/uenAbJeHCJAIZShSE4Rrsyz+f5/osgzLAqDIZCHLTgePsen1cGjzvsoosenfX42nvfwZx5+7dPO9agrrK/7oWOBpazWZqbUBIklyHaRC48jog7abLg4YJKwPdjHtNow3auSVg8oAQ+TOf5bdfPJzwn//0df96EX5SP4OA/XzaPJMYgZMoy1hz5jrZ8V3H/de4784wgELmk1CiImmdM3GOUSbXJXr9aSphnawvraLs1E0egohAxpBylZb4F3P84IlWJamBmSxmGCxKMIB4evEx6ugNljHW5vfuabJTQc1Jcni1Ddu3sGnyTsPNyGa9JHve4/TKzw1+THAqpLdm4HhoYo0SbGSphYRRRExM0mMjKEZkApIFJTbGldP38EScNf/ydeqcks1c3W1evwodlvioOq/ytJHA8f/+HyqH78qL8ffQ7/ngE/ZlpP9jJo69QbkzhkksVENqXLmH3RxugYJQ3CCgJyEiVITYPSOCtfdAkyQpgCLQSUBbk8RsNukhX3yeNllIzQRQozeAJVtQjpMMMSRYzAakugQqaDdazUxL3TdJsDiskDrC7q7/1Nl18I4ofvquDYxIIqCCiqBbnW5NOUWCYI6xa3IpQEwhLEIRqFkM6LUQlBVmgCIepMeqUCrHCbcl2WFDpHhh2kknRaDcTSPDvSgZw3tHYbNOEGvyBQZJMBYSgpS5f9ihCULk1lhkHuB1hn3yCrnY8KY770/Ev0FlbZH49Z6HdBKJqhwoiAlX6bi5e+QNDqIqVia29AALSWVsgLF4SbTqdc+fB9lhbn2d7dJY6niEAxzQpuXb3CpUuXaM8foWkt3U6P4OQJ7EdXaF39kM997hIP1jbZXF9ne3eH/aEDRxqNJtd3d2g1GlzrdBi1u/zadJ+OLsCIGjDHGmIp+Ua/jxaWV8cTEgtPSEXRU2xJRVE6NQcpBdYIpHCZfEoqAlMyHeyxkCT83q9/i//+sfO0124hHtyH4R5MUkf4yHNsnkNRYo2m1Jq81OS6oCgNhXHAdGkMpYUC99NpNEiMFdhAYaMGeVmSlzlTBNOyRGxt8reOLvOfd/b4Y6VJjszzoxs3mRhNFIXkwtl+JHGMkAVZlnNkfp75XhshQja2d8AKxpOMKDYU05K1B+ukWc6RYye5f/8+RZ4TJB0Wl4+yPym5s7aFjZucufA4u3tDNtfWkEpx985tFubnGQ73OHnmDEoamhGM9zbZ3Fij2e6RW8GpJ58ljgIuv/sGk8mU4SRnaziiFSnm5np0ul2UDIkbMSqIWd/aIU1dENYawc5gyJ2NbZ4+f5pmK2Y8zRlNpuiiCra02hSlxpZjgjBBFi7o4Dqj29hNxhO2d/fZG4zZ2ttnfWfIh7fXGaVTLr/7Js8//wKrJ09z+aP3uXTxApsPHnBvc4+j802aUexmGeXkTPe2t5Htvgv4AyqQLCwu8uDOTWSZc6wdIFuCIJpDhauEcYu40SKMm4Rra4ynGQ1huHP7PqvHVnnwYI3/13CP02EMkymxKYmKkkYY0QwVDSFpqoAkcJNsIIRT7FGSMFDsj0dESwvoLMXmGTmiCtz5DDMXiMn2N7k7HLKe5uxiGFi4mWa8n2mGwnnc1oswBM0kcsFCb0tlHZ1D2ipgYPyC4WAh4UhqVJN9BTpi3CzkFyrSBdTckFZNNXJ2wqeijTgCj7O+cpY2oQrIsjFBswe2wOrSU0yq4IBwQUYriN2v1WgMRhekpSGJY4ZjR/7w3/QLskA6CyyD84s0YoYRKwRh3KS/sEQzTih39knHI8rpqD7HZ+Xns3iQ1YNonlDhQedZ8LrVatUECq94MBqNajWOVqtVqxXMgscetEzTtM5enp+f5+TJkzWA7osPZnU6HXq9HmEY1uQDD3AuLCzQ6/Vq9QWlFIuLizXwn6YpzWaTEydO0G63a+KFX0AGQcDp06dptVo1uNnv90mShMXFRYIgYHl5maWlJebm5rh16xZpmgJw5syZGqwsio
LNzU1Onz7NsWPHmEwmvPrqq3z3u98lCAL6/X5NcNDaET7PnDnD448/TqfT4U/+5E947bXX6mvb3d3lyJEjHD16tLZOkFKysbHBaDSqLRzW1ta4ceMGFy5cqG1x3n33XT788ENWV1c5evQoUkrW19cZj8dYa+n1emxtbZHneU148ISCMAw5e/YsH3zwQa0u8rf/9t+u1TJ2d3dZXFwE3LhWFMUhsocHWv2zm7X38cD3xsYG29vbXL16leFwWKtnPHjwAIClpSVOnz7NqVOnaLfbvPrqq7z11lu1dY7P0gdqGxbgECDuSUHtdrvO5PdtJQgCms0mGxsb9WeLoiAMQ/I8Zzgc1iSFWSKUB/w9cWRWRWE2yDqdThmPx0yn09oqxoPfSikee+wxnn32WRYWFhgMBnQ6nbodTadTJpNJ/Xx9H3tU0Nd/3hNRPGHFW//MEgqCIGB7e5tWq4WUsra28ffkbY+8ksQsuaQsS9I0PUS8ieP4UB99mHBSFAVnz57l0qVLfPnLX65VFLwKxMWLFxkMBrz11luHSCHWWpaWlhiNRvWxZpUjPLg/S1aZfeYPb2gfDgwAh0hdswopQN0HPKFk9j1rbU0kmyUfNBqNWp3k4XPP/vPqE/7f0tISu7u7h+5vts6/973v8eqrr3L//n1+//d/n6997Wu0Wi12d3f55//8nxNFUU1QmrXHAtjb26vtd6IoqhWDvFrE/Pw8g8GAn/70p/zoRz/iK1/5Cp1Oh9XVVay1NdHKj8d+jPf2R7PKGP65emsj38bKsiRJEkajETs7O+zv77O8vFw/v/X19dreqtPpAJCmaU0q8lZgcRzX559V9pm1DfPqPJ5A5gkVy8vLrK6uUhQFt2/frsfJO3fu8Oabb3Ljxo1a7SYMQ+7cucMPfvAD/vRP/5RvfvObPPvss3z+859nNBp9yqz5yeL74yxhy7cjP540m02SJKnnrdOnT9ckxu9+97u8//77NSHkxRdfrBVAfvM3f5PBYMDy8nLd5jyZo91u82u/9mu8/PLLNYHKk0X8OLW5uUm73WZxcfHQPDg7tnxWfjGKUopOu4uQCmMtudGYskBYS6BcLCUMAlQgMVqCEijhx0WJmPGsdv+tVD2siwsVhYNSJM6qwb1/IC0ukWinqwy6Um3kgFxWGk2hq71ZaQhCTWgqK9NYIFXIAQHFKUlaqxz7XgJCuMQAwYzUt7MnBpxqrayuoZ47DKIin5hKCdGPOaoKuTvGiUCokDCR6CIjpKDIQ7JUkheZA/lliS5ytM6QKsZYQVGUxEmDlaMd0nTMcDQAwBQaIxRSBoRxSGkNYavJ3mgbXRZYM2Wu22Shn4CM2RtMGeyVNJstGrGkGYc0W002d4fYMsdIQ6uVEEcBoQrJ8ryS9A6IohAlrPOCR5AWJeNRyXA8ot2c0MkmtJpNdFMTxy2sLZyirzCEkSSMWlgrKApD0AjJs5xmu8Fc2iTPcrQWDAdjjC5dIpgApRKMEUzGE5pJgpQwmUyRVhIFTkHYCpBKkyQBzU6PIEwYjSaMRxMmkcTkhsAapLWIAqYFFNZZrgZhSGmzKmNTUhqLsqBNCZVyiS4qcg9OaTUIQ+IgcWoxOFWEIFAkceLIS7qgLDKMKdFlgRTSrceQhIGrU6UCsnRKlk0JgwAoyYvMBfCFIkszrM2rxBBJnEREYcD6+g5ZPqHd7gAaFUgClaBwn8uLjDCUTqIdS6lzoqgJWLJ8gBW5I8oEISooSMJ2RZDNkbIBMsBaQ6krAhcKqw3SGiIpkKZSNhVV5hfKEbwCZ2UkhUWWEhNIdA4hmtharMKp3giJVoIwiWk0EhpxQtIIieOEOImJ44A4UkRSYvISnU1JJwW7eyNu39/h5oN9huOCVhTTaoT0Oglzcx06nXY134cEQYhUqiJ8uKQfi3WqwsYlmfk1mKj6tMG4OFBp0UahwSkCSYuyLmdYGoPAqRg54liAlAYpRUWOqIgYM4SOGjjkMBnE/3CApgMm5CwAKt1YqUWlXOPHTmmw/lzeWqYChISoVEQ4DCQJ88m1rP/beFCwPs5hUviBoslh1Q/37/A4/ln5rPw8FiFm+7GosvRELYIsEDXxwhjDZLKHzndpFhYZKETcYjLcp7QpUyximhKEhrDM0MaSLz9LvP0O5SiAxpwDqz+lS80mifh9Qb2SEQIjDKFN0OSkpcDqKmKrpFM+8GIff80s9ofjCIfek+JwgPchkP1hEoJ/7WfZcx86j/8cHvueOekMMcS/LlUIWiNsid7bJF54nDDsYigZ7Oyzl+6AzUhLS4B1ieYWh2U8NGZ6RaXDxIBPjptAbes+S0J5mPjwSHLMIQJAhZbah+v+4B4fVYdAPY88HHO3M3VXt5fK8sTZABnKinTbUBmF0RRZWhFoIqaTjEbDcGSxy3Q4Ic1HnFld4CdXtjmzWFZ4wUPPbOY+/6p7n/kGDzcmIfjEMQ6fwycz2bqSvNIKzNIjP22/OvNdKRDmk9d9uB5na5KZc0BTjch1SEabNDdcvv2Atc19RFZikgXCYhupLKl0Y4YUzqrp4To7dK4DkGUGRBcHquw1EaVqvwdsmUfiLrNtF/5qYsij+u4nv+PUw6pvPJJYhn3od+GWj1Ek3RqkyIjsgL1gidIGBJUlDNoQy5TCRM6K07q4sMkKNG79pqzACEthNaVaoCs3ENk203COIIiw9nDyjRsCLU6NP0AqicqnjEYPIGoQJ8uUWmNlTBAGZMXPHsP565ZfCOIHAkJZ8cysx54dS1pbg7AFZVmQiMh9xlgChDNEcPISFXsftCmYFJXUJ7ZebEvhGNcSiMOIQkCpLWEISRKhAuU6t3WyfVZUWR5ByLgQDMdZzQp3lgsSKRUxYcVUspU8n0CpgDiKWFw9xeefeRa6R7h39z43b9zg6HPPsLS4QG9ugdsbO6wsHyFsdcjykuFwi9v37tNsxFz/+BatUHL83EU+uvIRkS2QjQ4bmw8Iwwgpnfzl6WNH6Swus7ezBUA6GZPmOVfff5uXX3qRePE4t27d5NpHH3Lv7m329neZplP2BvsIa2kmMTcnE/4v1rDVbPA/5pauLRHCOKsW6xhQsVT8UrdPbuHVyZRguMeyFEz7CwwtlFrXSguBVIRCoMcDxoN9TuqCf/Jbv8WXXvwS6i++g7h5zemzZgVM0wPCR2koC0NhSvLSZSLkuqTQznKixFIYQymgQGKrbBMjFFoITBhSyoCJkEwBYQyFsHyYlbx+4w5XlWC+3aTc3WOoNcrh7M4nuAoMdxOXFXrj9j2yLCVQAY1mk0YcEScNEBadFYSB4rkv/hJHjh7l3bdedxkVYQMdtLh+6x6l0SyvHOW5515ACsv1y1e4fvMBtsxpttpILOtrayRJgC4Lbt28yY0b13ny0rOsbW7TX1xG5WOWFvpMk4BWOGXSTJgWJWvbAyZZyelTpwhUQFpabty+QToZUhTakTGF4uSpkywdXWU03qfRTOj12lhjuHX7Dq9dvsar710GawlUgFKe/easOoqiZDTJ2B5NyUuNVCBlSClCFnotvvXCRfZvX+b8s19EfvAue5kma
bUZ7G6zNbKc7oTUDFJdcu/jK8RzKyiT0WxExJEiaAZIXdLpdXjm2ZfptZpEcaMCSiICpVDA4xtr/E9/8hqnjhzj2ocf8MVf/jp3764xWFhg4ZlLxBLyrCQdT9iZptwbj8jGKUWaUaQZjEaYbIpIM0RRokzJcDJhcZrz8heeYT6wtLICCg1FgSlL0vGIveGA7UHGNnA3kKwLxZ0s51YJmazDdtSiYVLQjCOMLlAVoaPyW6mXPgZbSQhrN75VgYcyL7A4iVXvQRZEEox2yh/G4vx0ratTv35xL7s507qMpyIv0VY4WyqlCKRl495NWivn6CaKstT1wGtwsseRdNdYWsiss+dR1SJX6xLVaBCkaRVIkG5RYmyVAeMy14rSUaJtvb6o7qPZptHrESrB/nRCOhqRp7m7hJ+RdfpZ+V9f8cC9VwTwi0Ivze/B76Io2N/frwF4Ywy7u7s1KP+wOoInfvhzhGFIFEX1uTxo5wHXOjONgwWrvyZ/HA9IWmsZDAb1Z4UQh2xXPKDfarVqcHkW8PLH9CB9EAScP38eIUQNfk8mE8qyJIoinnnmmfr+9vf3eeeddwiCgMXFRRYXF+vznTt3riaeJEnC0aNHa+KHtZYoiphMJjU4+fWvf521tbVaEeHJJ59kfn6+BtqPHDnCV77yFTY3N9na2mJtba22Yrh48SL9fr9WUTh37hxlWTIajfj4449rG4PHHnuMo0ePIoTLcO/3+zzxxBMMBoM6sx1gcXGR+fl5lpaWmJ+fZ3FxsVZfEMJl4c+qUfjnEYZhTQSYJYI8+eSTXL9+nddee4319fX6uY/HY8Bl6Htg+r/+1//K3Nwcx44dQ0rJvXv3KMuSJ554gmPHjtXn9+QiDwLPZvsLIWplj5MnT7K/v89rr71W2014u5KPPvqIfr/PxYsXWVhYqEFjD6YnSUKr1aIoCr7//e+zsrJSK4U8/fTTTCYTRyzm8IbWk0O8ioAQgqWlJU6cOMGxY8f4zne+w2g04ty5c6ysrPDee++xtrZGlmV12/PPyIPtXmVlOp3Wx02SpAa4ffCo3W7XRBRrLWma1sSp7373uzz77LOsrq7S6XT48Y9/zPXr1+n3+5w8ebJ+pl5ZpigKOp0OaZrywx/+kFu3bvHCCy+wurr6CUKEtbZWq+l2u7z00ktcvXqV0WhEr9fjscceq4kl1lp2d3drYsCRI0d45ZVX+PVf/3WOHz9eE3yGwyHj8bhWGinLkvF4zPz8fD0egCMLeBsSTzyaJVvMElmiKOL8+fMMBgN2dnaYTCYcPXqUoijY2NjgypUrNflraWmJKIpqwgZQtytPlJpVf/HEM69g4Qlufvx5WP1lPB6zvLxMmqa8++67XLt2jePHj9d1cOPGDYQQPPXUU/zGb/wGKysrjMdjtra2sNbWpKdOp8OpU6e4d+8eH3zwAS+//DJnzpyhKIpaccfbv0gpeeaZZ/it3/otLl++zF/+5V/yL//lvwTgueeeY2Vlhfn5eUajEVeuXOHP/uzP+LVf+zV2dna4ceMGb731Vq1KMp1Oa2KV75ez9iVxHHPkyBFWVlbY3t7mD/7gD/in//SfMplMuH79On/yJ3/CuXPnWF5eptPp0Ol0WFtb40c/+hFCCC5dusRf/MVf8P3vf58rV67w8ssv1+O+H0v39vbIsox2u12TRvz4sLOzw/z8PBcuXOA3fuM3+IM/+AP+8i//sh4D33vvPZIk4cyZM2RZVqtQNRqN+pn68eSvM4eGYcjLL7+MtZb5+fm6PXrS3+nTp/mt3/qteq7ypJ04jvnlX/5lvvrVr9aErmazWRMoAU6fPl3bA02nU4qiqMe+WQKYJ176cb0sS3Z2duj1enS73drWzI/ZvjyKPPVZ+TktQpCXOUVpMNYFimVF3gCJCiIQqopXGkzusmRr4pcGI0FXcRoESG9KLVzcpSwzBIIg8BmEbsMhhEQpiZbuy0ZbJ19uneWEKS2mzBGly+gvM00YR0Q6rxONIqUIVIhQCiEVKnDECYmsVda9wocFtNFo4/Z6fm2ClBXYXmVi4qS2CQ6sotyas3DKIdaidYnB2cE4ckiJQRFECY1GSZ5qjC4cYGENttQU5QRjpUuIGRYEYUQQKOJGEwzosqQsMrIsJwyatJoJSk45dixhZ3MDKQ1WJFgVEUUhnaZhOpnSCC39fpMsLdDastDvoYKIvd0hmxt7dPttenNNtBRIKylKaLdiVKCqtURJoELStGA0GFCmGkmXQLl9ozQuWUsphVShUxLVDrAOhCY2DZQICIOQKBCoXgOs5MiCU6krtCbXhiCMkTIkzwtH0NEFRRajC41UAY1KsU5ITaAgimKElVgdIEpJIiLSSU6WgijAVuobWEkchARCVnFod31IQSDBmBKpQv7f7P1ZrGZZep6JPWvYwz+eOc6JISMych5qzBrJoiiJAyhRoqwWBMONbl/ZbjdgwAYa8LVuDRvoC+vCgNEGbKFlWGi1YLdksU02W5REFpusLJaqmFmZVZVjREbGcMZ/3NMafLH22mefkxE1tOSBrFzAiTjn//ew5uH73u9901ShlGQwHKCUCCwuOIRQ4BXGNFhrGI9HgMd5gxQOISzeWRKlaepw1p9ONzg9Ow62ySzBmppsPKCpS6qyCA4nEcBU4QxQkyQpTWOQ3lEUDXk2YG//SgCbJBpjGhIpGA7btcfXKJ0E24TSWOepaoNvg3yauiIfbZCoDOc8eZ6QpClVXYC3aJ0HJk/n21O9R/gAPlCI1n5LGwhDC0wQKCVRUpNnGmeClE0tHJkEp8A6ibVgkTiVogYDhu3ZKh1okiQlSzPyNCHXmgTPuqkp1ytOTpZ8eO+E9+/POZnXaCS51kyHGVvTEdONCflwiM7y4GjtWIY0CIUTrcwNvgWXncua+Jbd1BhHXTdUjaPxIbDFS4lUDuUcxtqW4STu0xRKgpSBkSNKPgVv8PmePrKMy84nddEhGoAfonUU0oFAIvBDSoUkMAlIqVqbTGTniOxtgT1JBERKYFW59D4BF2ww0XXlez7bOI+59gbfzvURVNwHoZ4/5SLI5NP0afqLlvq9XQoZwGKtczWsHb4NIBZ44xC2obGOZe1BNNiqwS0/IsvGpMoyUJJmso/KxpQP3mLoFGb7ZfThn1Eln0ckGcKH4L6+4/9xQAnfBu6KNoBaoKnWZ5jZm4zKGfr6U5zdeQshUgQh4FGQ8qRQvB/nDH4S2NsTo/lbl3j07f2YieEycOEnAQL6oI9PYAN63/WfJ50HpWjKNXVdoMtT1KyiaErOHn4PadYkqcYlE8bZGUiNxNJ4gfJxjvTd3Bn9f/2yX66bc84mcWFelUI8ER93fm8EjQSGGflYSZ5Q+MfZUS7kKdZn7x3dV84RBOhbSTsgMDI4slTTFDWpqBglY6rRElk7ROq4/tQOhx9/zHpeMRgkeKvQaYrzc65vjxGi6loiYg/++wUHfLKBYx2FcrZ97NLjRAR8QBfsGi/pPzEydF2+N9wnOnzF4/Ic2/gyC0joKZ6yWONHB4xUSXN0l3cO71EbhxcFvjrEKAlVgRhahNLtOIkB
th1fF3TgeAKoIva7zhfFhQr4ZH56BYcnDfcL5X8yE8uPB4YE+64jQirOX3he61HGxrlzNhVjHVI4Hp48wniDm1zHeokSDisEylmssgzEmjMxwQmPtZJESbA1AhsCrXFIIYPPTVhm6gojeYRojijlQVdNF8YZvmXWV1R1xXq5QA9vorMsyBYqSWU1aZaxqNY/vvL+LdLPBfDDO0/VVC1jRLuo0cqFCI/QmsY68DY0lmonURdpeAEpWBiorcShGIlI/2nxUgeZFkGg6JQq3CM8Wgr0cMD29hSlNcv5gso1bRSGZtU4fLPG0wLZpQo0jt4igNo2eBM22UoKnHFsb0z56l/6NY6WFcloyuHxKXmmubazwbMvvsR4+wpSKtblfZIs440336BYzlivC0xTM5pOgqE8UaQPH2IaSz4cUlQ1tbGkmSVJwoK9sb0Voi8IABghwJiaZ56+xfa1WzRW8NT160jvSbOEDz54j5Pj4xCd4eF4doaSitlM8vfTlLPxkP/pQLHRepalAK/CtDKQml/f2KLw8N82FbP5Gc8OB7yfj1i4MHiUEKS2oVrOqc9O+cvjCf+LX/91XsomiN/5HcTxIzCBHhIDvgFfW2xjqE1DbR2VNdTOBeYOB42XGClpTINB4HSC0yk+G0CSYHRCbRoKa0OUkRCc6JQ/qyu+29TcbQyNFIzyjOFoxAcfPcR5R5amgCRLE4wRDPMEJQWvvvQMB1f2ePjoiNoYzuYLHjw6ZP9gHyUls8WSo0fHPLj3EUo6ssGA1776dbY2N1iszlgulkipODw85D/93/5v+I/+4/8Y6xxvvf0j6qriwdEpLzx3m1TPeeGFZ/jwg/f58KP7NKamKZekwwH//J/+lzxz9Qqr+ZzxaMR4MmE7z6hNxccPYH62oC7WmCTnB+/fwSOZTjdIU41UivFwzHCYo5Uiz1QbJRjoVK2FdDjkub1dbuxMkTIw4njnQbbAHRUm6+PZgq3phI3RgG/+6ff5ne++xxeef44vPneNDx+c8ujhIw5uPsu3X/82X/v8Zzn67/41d46XXD/YDVO8TNDpgI2tXSaDnPW86EABubJoaRmMUr7w67/Owd4Wwvs2OsbgGoOta/afvs7n3vuQtd9m+vAhtRPs7m5yfLagyXK+8OJNfGNwxrYOE4uxnsYY6trQWKhaOvdivWa+mJEvFhgE4//h32JjmuHqBm8aXF1jy5KkrNDLFYN1wV5d80qx4offe51//M03GQxSaCyiMUEypTUSISTDPAkbKBHkZVy7UfcB9k3AqoXNQJ+9I1Bc2XbhVkglu02asxHcaYMhwNuwOLXID+/CvOSxOGMoqooWg4FSCYnWaKVJk2Cgioa2EKvmUZIABlEq0FzZoKWcZTnCw2i0gRSWLM3AVRF6AkKgEFjrqSEgJPF4Z4PN1YdNaraxyWi6gbaW1XJFtZxjmvr/CyvLp+n/lymCHS4fRmLEY3Qw96MV4iYe6O7t39c/yFyOiO+j7OM1fZr+y9fFayLYpP+e/vfR0daPyo9l6zNSxGtiFHx0BvTzFUEh6/Wa09NT8jzvACSPHj3CGMNkMmF3d7cDCUCQJYgSNlmWfUImIl6nlCLPcw4ODjpncZ+aP4IkkiThypUrHe1/Xddsbm52gAwpZefkm0wmPPXUUxei55Mk6eQCpJQdg0p0vkdHar8PRJmZzc1NsiwjRvPHtuxLWPRlf+L/sSx7e3vUdU1ZlgwGgw5gE53MV69eZTqdYozh4OCApmk4Pj7u2Aq2tra4cuUKOzs7nWxFbJ++MbPf1hCYHfb29njmmWc4PT3l5OSE5XLZSUZE0ERs8+3tbUajEdvb2x2AYXd3l729Pd5//32Ojo4YDAZdpH4EX0Tpiv5PlCqJ71FKsb29zcsvv8zbb7/Nu+++y8nJCePxmLOzsw4gdP369a4MaZp2bBuxr1921BpjLsgj9SVnImhka2uLzc1N7t27x5/8yZ+wtbWF954f/OAHSCnZ3d3l2rVrOOeoqoqjoyPeeustrly5gtaas7Mz3njjDba2trh69Sq7u7uUZdnNBxEsEvM9GAy4ffs2H3zwAYeHh/zpn/4p8/kcIUQHRFitVty6dYvNzU2ee+453nvvPb797W93AKy6rjk9PaUsS770pS8xGo2YzWbcu3ePN954o7tuMpkwHA4vzEWfjHa8eOi/ceMG77zzDh999BF/9Ed/xBe+8AUAHjx4wDvvvEOe5+zs7HRSUREAEMEBfcmWy3NgPw99AEqfnaUoCj7++GPeffddJpMJi8WC73znOxhj2NnZ4caNG53T3jnH2dkZ77//PoeHh5yenvLOO+9w9+5dbt++DUCe57z00ku8/vrrvPfee7z++us8++yzLJdL5vM5x8fHfPnLX6YoCgBGoxEvvPACw+GQ5XLJ7/zO7/BHf/RHGGN47bXXeOWVVzogyT/9p/+U09PTjtHo7bff7oA00QEcATBxDo3zgzGGzc1Nnn/+eX74wx/yox/9iH/yT/4JTdPw4MEDDg8PeeWVV7h69SpXrlzhqaeeYmtrizt37lBVFffu3eODDz6grmt2dnYYj8cdIDFNU65fv96BguL8M51OO8alOFZ3d3f5+te/zv379zs5q9FoxC//8i+TJEk3JwJsbGzw4osv8ku/9Es8//zzbG5usl7/bAYDrTXPPPNMV9dRniw6kYfDYQfEieM5rmcxb8PhsBvb/XUszjWP6+dxPuj3wdgmkUElsmFFMNlPSk8yVn+a/vwnay1n8zn4cD7QWiK1IlEaLYMUrBQQXepBxTIwtfrWdCmkCxItQqJ1sGdoIVujapCqcLakcU175mnPRtJjcSAVCNU6W0OEpkKAboNDPBjjcK7AmBpjE5QXaBTeB3SHVBopFBbfOlYlQgRmEO89jhDNT+tEFi3oPzo5nLN4KUm0pnYGTwiIEQq0DHkQIswxSniUM1hnaOomnH29oDZBhs0JiVdpy6hlwFuGowHZcACSlokrBQfL+SIYr73EyeD0WZcl1i2ZjCesixLh4ODqNc5Oj7EkCD8gTYckyZjKnFJZQYpmMs0oiopHRwskCeM8YZRvsaoKHt2/z3g0plp7pNTMTtfoRLG1PaWpKlblkiTRbG5vUpRr7h8tWBeG/V1QWBAZQmQonSN0SpIrvLdY6RilGc44TFXhxgOktDhjEW3bVFWFcQYhU7J8jHdQFiucLTF1g3WeJM0ZjSdBWq8uSRMJPtiYkAKVpGiRUFW2lawxSKnwQqBEgmqDOtJEY5uEdVXhlaIBitpS1hWNybC2YF140mHKOB0haVgVc1brkmGesTGZIER0SHicMDSmAmtCJKJpWgYUT5aNkNJT1QF8Z03DYn4CwHgyZTgYYI1jsQ77zTwTjEcDlqs1eMfu9iaVrSiLEq0S8myEMRWrdYWta4wpyA/2kFKBEFjrWCxmSJngjKCpDNnAgXboJNS1bz39QoXoS4FEqQQEONcEQ7i3pAJyKalMy9bsQfpWsi4BlYTxhBOYWqGFxVZAqjDO0FiBJUUmA5J8QJpngeUjT8iSlCzV5FmQj2nWFeWi4ORkwYf
3Z7z/YMHJrAILg1SxMcqYjnM2N8ZMxkOGg5wsTVBKhLbAg7ChDC1oTBJIgnyUe3EOYx3etzbIxlBXNQaBkCbYkx0Y0bJMa9WxHGuVgPKBFUVE4EYP1EEAbjiiC+TcBRWdiWE9BsU54L0Dv8vgfZJCtfI4AtXK93gF0kqUiLbgCP4QSGG7d4ehJFsHHzjhArDEtyzZ0Nqq2tnVRccWbcBRf28aWZViIMangTyfpp+P1NmPAIiOX2iD7IEAhLMenJkjasOVm88xOz0jkQkqHePFDJGOacwGnjVqdg/nS1Tl8UrjfY7f+SyD02+z3voSiRyAdEgvsTIwLrXcoCjAW4sBUhXmnloo5PwhfnEXP/+Q8c4NuPlXUeYMKd5B6BZUFmeDxzm0+/PXJfvaj2Pj6OY2ESEPTwAjPKZOH/f345zOn3Sz0+3DLlzf/h8Aua2guy3IxwfI8RXYuMbAFKjxAeX97+HThEGa4Yr7rEzGaEu3e5fAgBFBedGx/iRgzPnfbX6FBOEQDvDyfL49r6HOBhWf673vZCliHfaZU8L1F+0H5+DByOgdnKo+5rPL1fmzVO+9IVDe0VjIRcLx8R1msxPgOkoFGUUlNc415CohUQkyCYw3qRIcrg3HpxKVJEgqQBFlwUK5LtbPOVDgYj1+sn+17vkL9ewJzhTZgT8isqE728ax2uIOLjPPRNvsebpos5b+HEJx2VZ4Ma/n40MAXgY7WpBiq6m9ohbblCZFuBVWr8gSj3UJhTFoVyJVjhNtkeIQ6oralqsFlCA4z9uFKgnfCX9+df9BEXrxuCCNft0+Dtzx4wAf/fqJoI7zvn0RGHrxnvDjXACnNlh84zCDhEQmJM7RAAKHkwotHEobTDVE0NAIySCVCCfCTChtIJ+Dc+YhByv2GOkj8uYhld7t9rlSBj9WyK2kqU6hXDKdTFnrNAC8hCBBUBvFOJGw+tnken+W9HMB/ADCYUsCIixqQrRYKSGD9IJ3LYVijdSynQglEofUGR5YOce8FjR1zRiLNSGyTgqJRwSQh1JIrVEI0vE2tlqQKcF0NAIU68pizCpE2reQ6oDiFyjhSHSIBjGuBZX4lu64aXBNzfWDA371b/5tHlQJZx++wen2Ftf3DyhViizn5OMxi9WKj+7e4fCju4wzzWJ2RrEusMYEtLhQ1I2lrA3zd97D+XAQaZoGV9U0xpIPPL7VD63qGmdte7AJKPv9g2s0xiCEQiLY2JySDwaM8wS1vcVsPqesQrRwMAI6Dq3lP6sqiuGA/+XWiE3vcHhkkuB9OCjkQnFra8K1dJ+t4zPuHR/y/NWMR/mQoiqp5yfMF0uGpuHfu/00/9H/5H/GFa8Rf/ZdWC7BOtCDcFDQAp+PqcqK0noq62m8osLROEutUhot8dMdirLA1SU+0YjRGJtkGCGpgXVT03hDAXzoBd81hjvOMnOO0kPVgnKevXWdbDjG2HsI2oWu7X9SBLTf9uaUL3z2FTY2JgzHQ+bzOdbUSGCSJ5zOFlRViUgU3379W7z3zgbPv/Iqv/hX/zr1+ow//fYf46xhMgqHzu++8Wecnp1y6+lneObZ5/nwgw9YLNccPjriK1/9Kk4M+NGP3uVktiDPcrbGGV//S3+Ff/Xf/HOGScpGKjBVSVkI1LrCS0dd1Oxtb5LkOXp6lfmP7rKzvcfJo3ssl2vWRcloPGRvZ5ON6ZTRcMBwNKQqC1bLNQd7O+ztbpJbxyvPPBWihGwTjFPdwuraxeQqAKv1isNFQaLgM89eRwrH/kbOH7/xLV75pb8Gr/8RawKV9uHDB3y8O2In98hE43XKUy+8wnCYMFvNWDch0qLRrZHdVAzyFBXDvJREJQoGGd6PGG5v8Mt//S/zD/7zf8FnXvsK777zHl/48pf5vf/X7/L6W99n7UtuP/scn/vcC0hv8cbg6gZb19jGYJsGU9WYusE0BtPUWOcRiWJrIwXl0YMhWoSyu3YV3fThAO68w1RrmtmHfObuNs8Yx6qyFLVDT7fZuXoVPNTG8PzVbaLUi7UhusfZcOiOESVxc+K7A7PAuwBgc21ftD6IGAkZNx/hOiEC+lTGQ3l7uCc6uBHUlcERjIxpmpLohINbz6LSnNXilMaaAEbpGWCXxuMHYxpTdgaRqmmYjEYoCVmaUa1V0Hf2URYnPEMJgfEeE+xqbbnausMjBmPGoxxTrVmvFlTLeaD6/TT9hU99mQ4hztkL+oe/GF0fN9HRqdQHhPRlBy4DNaKDOL4jPjM6quAc7BHvjz/xvmjw6zOJXHaI9a/tRzlF6v2Yn3h9ZPWITrb4eVmWzGYz7t6927EwSCk5OTlBSsnm5iY7OztkWdYBLaSUTKfTT0ifRGBFnxVDKdVdGyVB+s7M+M7JZMJ4PO6YCCJDR9M0FEVxgWng2rVrHYNKlBvp18nu7u4FZ2OMSBdCdDI8WmuuXbvGxsZGJ90QJX767dhnt4g/sdzD4ZDt7e1O7iayIuR5zubmJkdHR+zt7bG5uYm1ls9//vM8ePCAs7OzTnInAjA2NzdRSnXSKbFdYxtHx2ps5zRN2d3d7Rhr3nrrreBcaBoGgwFXr17l4OCAra2tzpHsnOsYASaTCQcHBzz33HN88MEHnJ2dkec5zzzzTAeM2dra6oA1/f4aWTb6ztfNzU2+8IUvdDIa9+/fZ71eo7VmY2ODg4ODC/0wygNFh31kVQG68kZ2hdiP+uNTa93JOhwcHPDuu+/yrW99q+tT0aH+9NNPc+3aNRaLBd57FosFP/zhDxFCdJIRZVny7LPPcvv27Q74EfNymaUnz3Oefvpptre3OTs74w//8A/50Y9+xJUrVyiKgh/84AcMBgMODg7Y29vjs5/9LL/7u7/Lt7/9bT766CO2t7epqorFYoFSii984Qvs7u6yXC65e/cuv//7v0+apmxvb3Pr1i02NjY+QV8d02XDlHOO69evs7+/z1tvvcXv//7vd/I4x8fH3LlzhxdffJFr1651rDyxTcuy7Oo6zh39FOe/PjAkGhuiBKQQQfoqSo14HxiL3n//fcbjMdeuXePGjRuMRiOuXbvG3bt3uXfvHr//+7/PYDDo5qH79+/z1FNPdWPt85//PG+//TZHR0f8wR/8AR999BGz2ayTcrp9+zbGmE7mZjqddrInb731Ft/73vdomobhcMg3vvENPve5z3F2dtYxvUQplpOTk056Jc75EaDWB39F6aKNjQ0++9nPdkCOf/yP/3F372Aw4IUXXuDq1avs7e1RliUvvfQS77zzDm+++SYPHjxgZ2eHg4MDbt++3bEmxbH5yiuvsL+/z2Aw6ObdK1eu4Jxjc3OT6XTazZMR+BLbYGNjg+eee65jCIngrwiSmk6nbG5uorVmtVp1LFE/KcX56Pr1691cHsdbXBfivB3rqs/YExm1oixPlCR6HCCy36djf7y8zkaGKSklg8GAvb29DqwVpYI+TT+fyTlLURYoqUkTfW4obfcQqrc/6xycImo7+8AC6gBsOJW0e8QQpR/AAd4HYLnHIr0K0g1CIpQCrfHGtBH5CqnivNlGxasEkDjX9nvhMdZR1g
W6UojEI5VBSoFSgAmAD7zFKzA9x3AYlwqtw3j03gdZT+8DQ4S1AUwvgpHct4CQUPzA3Ci1wLYSMEpqVK7bc5Mj05LVak3jJD4RZJmnLBaU6zWLxQKZKCYbUwbDIU1VMxyOGY6GLOYzkjQhyxPO5iuWizVJIhkNcrY3xsxnxySJZHN6ldWqoGocD49OyQcD9q9d4969Bzx85xFXr22zvTPm6emU+x8/wtiKLB8iKmiqEp9pJsMc7x2z0nP3zkMePDjixRdvc+3GDearJctlybpuaKoS7BKFR7HLNMlJUhHYEHBoqfFOMchb5jxraRKBtS3LgdPhrKk86aCiqWqSbIDWA8qiIEs38H6CqQ2gSfKc9XqN0pCmOVJ4qmIN1EymQ7TwlPM1wjcM8gy9chSFJxUelEMIgxSKNNE0WmErgbMeaxx5pvFeUjeW9XrNVGkSwNU1R6czGuDK1aeYTidY5/HO0TRrBBYhHNYExhBrDEiFaved3kNZ1lTVmkTrIJMzngCezckYKRTrpmZne4T3ltnZCd4JqtqxubmFx7GYryirku2dDdbFPJSzWKGQTDc2UCKhaqVawpnDUtdrEqUY5im2WWGkp6odzhrKMuwTk9SDCIwzUoG3DoHGC4/zFqUkTeMpigZtFGnuUUIi0AhUkGaSEqUgUQIlNTbxOONQViOdQyY5Os1QaUqS5ugkYZAJ8lQw0MH22hQ187MZjx6d8eHHp9x9NOdoUSK8YJwkbGSS7UHKdJQzHOYM8owsS9vzUrCheCHxNgB+nPdY6/HGYV3Y+5q6wbkGZ+ogN900NE17fhMyfCddq+rgEV4Hp4B0CDzGe6Rr2aPF+fn1E47SVjIKH4N6QjiNEKJzzhjOgd8y2rtlnA9tYCGSqq1ride+resAfBNSIkSUnAmR3KJjLwqS3OADEAQXgB+tcyaw0kYbEV3AEr09cfhpbVn9PfK//VLyafo0/f99is7jcwhC/EScO10FeOGpyjkWj3UDyvqQifKk9i7KnSCKNV6UqOkt0ul1KjzO3wNpcRbqZES6/TnyozdxO5/BqwwvHIn3ODRKgPMNzomWmUphvaeZf0y6Pgz7ElczuPUNXL4P0uGqKviGpQqgDxEnoh7YgE8CtR8HBHnS+fWxdXbJsdx/9pOe8eNAIheu97255zEAlgv59RZjG9LJHq6Y4/IxzjRoFPn0CjrdwZVHCD9DqTFCRFhey97R1cs5m8GPA8S0276uU0RQXosAiYvAE8v54+qnvepCXuI7u3u7/IY/pfefkBK5mF8JNtj28Q1PP3XAqhmQjVKsBeF9CzB0SBmCyoUUIXgfz1aecrApSQKCsbfGxXd4Lhelz2D6k87Gj/v+cUCGeK2kZfXiXH7lp3nmhe+52Od+XD++ADnyDi9HCOGoGaMzQ6Ykvh5RmSWNAe8KkmwbicM5CarpwCr95wcALhfAK/267ddFfzyLCFLy8f/wkMeDV56cftz4fDxYJ7bpJ591XpeByKHFN7XtJRmOclx1gl1JmnSCUGOEa/DakXvfAtwDiMo7gVQZ0s/QdoUlJUBngz/KihThQnDkig2GYs7AnlDKLaT0WKtD3/UCu3wYQPWjA5Qq2EvXzGoR9ng+QXiF9ynSnxHng3/X6ecC+CEArc7p7cLeN6CiAxxHoVRGO3OG71RALivZGqiFQCkBVUNRNMyKFZkOTlQlQ/RG3KTqNMNbC860VNwNZ8s1xydzqrpuN7mBvUNGejwP3rlAVmGDM9eYhqapcaZBAq+88CJf+7W/AaNtHrzzBjevXeHW008z2tjmaFmA1Lzx5tusFmdYa8A7ZusCZxwuGnWdx9Q1jTE0TY13PlAz+vDOYHgItJIgwvs9mKbG+9awV3nKqg4bfgSJDtTQu3tXUMLz6P7HoUCLJXVZdVEz3sPMGP7P8wUSy/9qe8LE+xb4oKgdfNc0FFvb/Cef+zJv/qvf5z8/fMCVR/f429dvkd56hvdOZzy4+z43n7rB/+A//B+zfbqAd9+EogDn8FKDkvimxkpFPdigsFDKJWUZZGoaldDoFOMcfmOHBomxDX60gxUCnw+phGfdNFTWcGINP7CGt5uaQzxGCpxUNNZg23rN05TPvvoS/+pPvgs9WqGmNSgrAbXw7O3uIIXn8PARtipxdUnaLmwnszMGwzFXhwOKYg0CPn54xM7+jPt3P2AxO+Lo8JhnX3gJJSXL2RlnZ6d860/+mF/51V/nb/6tv8XJ8RHL+YzpZMTs+CE0JZujjFeeu81HDw4pVgsOP76LUAPuz2p+dHbM3iRlf3PMZDTCVA3jXLO5tcHOzZd480cf8Mt/5de4/8EP+ejDDwl6RinrouLBg0OOjk4YjyfcvHmDyWTMowdH6DRDpzn1csnJ6RnTpsbZ6JAN2Bxaw5gUwWh3eLri3umK8XDAwfYG1gnSPGeDRzx6dMiLL3+ON777b/jCi6/wxh9/k3fvPmLjxRsMh0PW9Zq777/DS6++ipQ6SDiksLWheWp/m7puSJTENXV7ijxHhjpjaNZLisWcPKmYqTHrR29iky8zGQ95+70PWRjL+/fuc+vZm+xNh0B2DmkkjFvvbeh/LlLqBmYR4Q3/5g//Fbe/+DU2h3lH8dluTcJmxoNKEvZu3OSXvp6AzhBpjtQJIsvRSdrWu2aUepwtWmRuGLMhiitEUIgWnOGtCxqzPoA/rA1zAITFzIuASvYuGBSsizSyoKS/SEkmA5WoNxbbVFSNaekAPUmagmtYzc8Y7RzgjGkj7MImQBDAwHOZc2Vzk6OPP8b7oBNnrWO+XJEMJuxsb7Oen9EIReNsV0+t8FXAi3do0oAe962esBxPSLSmLOYUxZq6WNGGkZw30qfpL2SKRqvoRAY6x3502EUnVV/CJUb9x8+rquqc0dHR1T+UXnbSxmdGJ1XMRx8gEVNf9qVDyveu6SPRIzCh73DrR+vHd8RyRodbvLdfH5EyPwIe9vb2ODg4YH9/n93dXeAcOCOE6JzE52BNe+G9l5lGtra2ush5OD9Q9VkpovNQCMFqteokJ/rtpJTqmDWUUl2keF3XnaNvMpl0eY3R6PE9/+yf/TPee+89RqMRV69eJU1TlstlBwaJ5Yzgi365Yh+JdZxlWScdA7BarRBCdEwgjx496uqraRqef/55XnrppU6uINZDfH5d1x14BLjgiO8DjOJ94/GY8XjM1atX+cY3vhFYpFogxcbGRseWURQFX/nKV3DOMZ1Omc/nbGxscPXqVf7u3/27PHr0qAMBRPmK0WjE/v4+QojOKX12dsZ0OuUrX/lKV9dbW1vM53NGoxHPPvss29vbLBYLlsslp6enjMfjjo3l+vXr3LlzByklL774Irdv374QmR/7YBwDUZZFa92VJcuyrj9E2aErV66wt7fHW2+9xWq16pzkn/3sZ3n66ae7937jG9/gxRdfJGgcr7v229nZ4bXXXuP27dtsbGywXC4ZDoed3MZ4PO7a0BjDl7/8ZYQQvPPOO/zwhz/k9ddfx3vPdDrl1q1bPPvss7zwwgtcu3atA8/cuXOHjz76iO9973t473nhhRd47
bXXuHbtGt/4xjcYDAZ861vf4h/+w3/I7u4uL7/8MuPxmFu3bl2YH/qsL4+bO55++ml+5Vd+nrJRRQABAABJREFUhZ2dHX73d3+Xf/AP/gFN06C15tatW/ydv/N3eOmll0jTlP39fbIs4+rVqx04rS9l1Qf8RGaWPuNKBMV4H9hyPvOZz6C15sMPP+Qf/aN/xPHxMVJKnnvuOX7t136Nz3/+81y9epWqqvjVX/1VlFL863/9r/nt3/7tDmi0s7PDX/2rf5WbN28ynU6ZzWb88i//MsPhkDfeeIPXX3+d119/vevjn/vc5zg4OOD555+nrms++ugj7ty5w1NPPcVXvvIVrLX8wR/8AUIIPvzwQ77+9a/zV/7KX+Hll1/mb/yNv8Hbb7/Na6+9Rp7nfO973+Nf/st/yXw+5/DwkM3NTV599VU2Nzepqor5fM7Ozk4356Vpyq/8yq/w8ssv88UvfpH33nuvY+F47rnnuHnzJt4HSaKbN2/y9/7e3+Pu3bucnJx0MixxTYmANaCT1orgjCh/cvv2bW7evMlqtUIp1cmA7e/v8xu/8Rus12ucc+zt7V2QLovguDh/Xrt2rZP1ifPFT5Mi0CsyfcQ5rw/66AOCYuqzZcU+1F9zLjpvLgKbInNRH/gVv4v5ieC3wWDQPSOuM5+mn8/kPDTOI6THeocmOkAv/qgW7BGAHwQnqnd4L4NZw8kgW+mDvKv0wQAdbDHhDAUhSivK6yIVOI+RDS5KLmgXgBXEaC5H4OsAKTXCB3ZLKXMkCcKBdB7pbJBhweK8CadB2wNCtUZTL2wbIKS6M1w8zxkbrhMqnI4QsrfXasHDOKSSCJlgLTjrwvusQ8mEyXiDOgsSprZySFUznAiaao2znnK+ZL1athIensFwwNbOJicnM+p6TZ6mbG5MqMsS4TyjfIAWm5ycHJFOJxxsHmAaODo6YrVacfiwDFKhtubjjw9xXqIzxWA0RSDQKmGsUqrGYUkCM2tToUXGszf3aZqKxeyYcr0kH47Z395BO4cdJyQ6ON5P1jWz5hisZTTKGA+HDPKcRCU4lYLWwTuhgiFX0EbkCYEVUFYGoYbk2ZgsS9FKgvNUxqCTYGgua4vSGa49TxRFhRSCROdY3yCcZ71chnoRkslQ4aUjsxmVCbrkAolFIFKFWRmK2iOcJxOCdSXI1gVJlpMPNc6XlJVjurVNmg1bx7tCaWiaEoUl05q6bnDGU5uSqmnI8gHLxRJnPAbP5uYEJRuapmJ/b5eqWpEkGTjPupyRpIJRpoO0S6KRScpWPqZYVyzmC7JUMxptMsoycq05fHQfKTzb21MEQR4oMFwpJJrBYIQ1KxpbkaQS68HWFcJBtfYkSYpSGmeDicG1gXCKcG5w1mMaQ2MsZdmwWlekDUgZpH+UFwhrW+mkMFa9COuGkwqnAiNqiiDNR+gkB6VRKiPRmjwBjcebhlXRcHY25/7hKfc+PuPe/QWzRYN3gnGesDVM2J0M2N4asTEZMBqk5K0cD+J8nXM+2Fm8szSulah2YJxr7bklzjRYV2BrizFt0J1xGK9wKEQS5hQtgz3GOg8k7boZgEMBwHExIlxETd7WIQznTpsYgnPRmXl+XoQIgo9nTN0FJWrRzinWB+YReV7fSsQAikB7LpGBGUSokE8v8LK14HgRJM4JjC2dvAtc2CvIHljlfP9wvof4aRzAn6ZP05//1O5j2iC6vgMwbBGCbLqQElctcN7hbYmwlmwwpExv4u0MIzUb9T3E8T3q4++A2kJvv4pEAw5hHVYPSHefpzl+E7H7JcBTSxGAisaivAKlsNZgTt4hNXNG+RZrn6C1QB28ipEJqlUfD6BaiZC6K8UF0MBjABqfKP1PcJRffk78/cfd90TgxGPAJU+61vs+c9LF78IFLlA/1GvQGXqYU5/cwVYL1GCMSIYESpAy7O+UCixv/qJEivcR+OE+Ua6L76NXx6K1mkMUVQlefNFd/OPqPYBWPlmm6M86n4svPaNf5b73UWyTS/XnvGvzFnwgw9EGpj5C+pr1akmiEyw10gl8ohgMckxTh31TXVMMFEmeYGUrr9cCXi7baH9a9ogu60/0GTweTHIB8APBR/GT6pdP9rPHgxp+AlDJhzlBeolLNLmuSMwDahtY3pApqQ42gXp1iMzGgMVrjbIeJxxBaqcnZ9OV87weIhAk9Me2eP3u94lKOe9H/cCOy3X9JBDN457Z5e9Sn/St1CYdx9kTUpsfgcN7gUwyyiZlMp1QFuDsCmOOcGqLyo+4kpWs3QTpwArYSWdkTmGQJLZAsEL4pp2bNUoPcWgcA7yoKJMdcntC0jyiSvbDmdU5/Ox9ZL6BSreResWGMJzWAusTnEvZyBpUc4rRitSKrs7/XaefD+CHECRKtwf56GiJGobtLlN4NBavBc5rnAzyBUmi0XmgZhbeBgpOKaitDYc9JdB51m2uJXB6eoxINK5cU9YNZ7Ml63WFNaYFnZzTzycqyGA4Fzb7ZV1Trtc0dQ2tNIMQgleff57PfuPXqPWQ9dERqWh4/rmX2do7oKgND+59xGq9phESU1dUdYPHY5oAIIEwYJumDuwhvqUfdy6wUbQbC62CxMPanNPNC6UoiyLoaaoBCEltGlTUT/KeNE24dnCNLElpakPdOtkXQFXXXXSCF561F/xf5gUjpfifb48ZAoWD71QlfmubX9q/ifrWH/NFLak3NvhvVytS7/nG1ga/dvsZqs9+LrTLN/8I0diWsgzwAucExgUZjsp56ixnbQzFaokZjKhHU4r5LFAl7uzTWGjKBT5JabynsJZmvaBMEu7WDW+Uaz60lpk1GGtQSpNoHVhinCPPB1w5uMUwcYzGY05nMywOJQRKK7wxgR5UK7wL27d7Dw4pywrb1CyXaybTLbJswHy9ZF2WXNnd4dr+Va49dZN8ssPW3lU2Nje5//FHZPmIF1+8wmJZcOPGM3z/ze9y5+498sGAcr1kczrm6pU9vv/dP+FPvvkHPHX9KoM84euvfZbvff8HnJ7NeOPNN7j+9Es8/Ph95rVndVazatZsrR1b0yGj0Zjh1hU+enjIePcKo0whpWBra5uyNuG4Zk1LK2uQKuXRo0Ps5oTVumT32g2SLMdbg84UuzvjbjF0eOo28qGoalZFw3xZcvd4QWUc4zzh3qNDHh0GaY/GGD78s2/zlV/+Fd747us02ZDhcMjJfMVxASIf4W3DenZG4wSNCLRgufQ8f+sAe3iNYxkMOEKIoEnoQ7SXFxKpM5ROePqFlF84PuP/+t98l6tPPc3v/Yt/zWc//2WesSvWhx8xPzvmnbffYvczz4QNi9JhUyaAqHMqwcsAHlPtPtcbmC0XJHmGTCRdjIMzBOiHAiRCpTz/jV/h5lcamsbQ1BWmqqgji4i1NHXDgzvvIxPNIEtw1mKdA6HwvjW2hwEZQB4BYRMMftZgnA9RaTYY0K1UrTyVbw/ZrpXkAfABCNJOHJIYbWapG9OiJiFLEly5ZHnyiMHGNs40547oMCTxQiCHE3IJZW2xRMpPj7eOk5MjstGU8WSDsioR2HBfzwDgBGFj
LIJBNoBqwAvFYHOPTCuq9TrIvJQlHej00/iQv7CpH7V/mZEiAiSiEyo6mqLDKtLZx81klAaJDqY+kOOy1Eo/Uvnyc6OD7LJMTAQb9K/tP/MTzouezEufVSRGrcf7qqrqnhkdaBFEMZlMLuQvAguECBIWUQqiLxcTwTAR5BFTH5RhjOkcl1rrjlHkMmgkljs68oQQnXM5ysfE98S2iqwQ/Y2+9/6CMzPWT2znKNWxvb3NzZs3LwBYYt3GfMU2jc9O07TLe3xvzGtkFIjtvFqtLrRFdO7G+okOzVh/8fuY3/j8eE/Mf8xnnwUltklkGojtE+VwohROn0VktVp1jAY3btzogBZKqe67ra2trnz9eyO7SZS4ieX/+OOPGY/HbG1tYa1lNpt1zABZlnX1cePGjY7tZL1eP9awkiQBIHx5bHnvW4B03Y3F7e1tvvSlL3Hz5s1uPG5sbDAej9FaM5vNSNOUr371q3jvO0d4fyxEoIwxhtVqhdaa+XzObDbr7rHWdnXz9NNPs7e3x6uvvspyueycz8PhkI2NDUajEYvFgiRJ+M3f/E2WyyWLxYL5fM729nbHcNI0DS+++CJXr17lF37hFzg8POzK9NRTT10AnV0+0Pb7Z+wXTdOwu7vLL/7iL/LUU091bTAejztgSpoGuv6/9tf+Guv1muFwyMHBQccY4b3vQEhxLopjMfbZyHwRx8r29jZf//rX+eIXv8hgMGC5XHJ2doa1lq2tLW7dusVkMiFNUxaLBdvb2/zWb/0Wv/qrv8rbb79Nmqad9FIEK/TZNl599VVu3rzJl770pU/M0aPRiJdffpnbt293gIxYN1/84hc7RpDIqJMkCbdu3eIzn/kMX/va1/Dec/fuXR4+fMh4PGZjY6MDj926dYubN28ipeyYJGL9nJycdO39ta99jVdffbXru4PBAOcci8UC51wn2xIBHf2+HVk9Yt+SUnLlypVuTowsTrHtkyS5wP4zn89J05TRaNS1zcnJSdcnY9+Ikj5RagXOQY0/bYp1GFOcQ+CcqSfOWXEsnjuMzufjvtGmL+EUyxjnswgE6wMz43P662sfQBLn08vSUZ+mn6PkwTiLMMGQrVWA0AfIRTBMCyHwcf6MkbEiBKjIVn5SOIu3wUHhhMeicEqT6riPbPdu4vyZCEmSaGyaIXBB9sU7fKZx3mBNRd0YvBdYL7CuCuzXhWCVFJRVycSOsd5jvcL5oHiPFyilAwOIc51cp2zz7G1g8giGS9sZXYOZxndzdgCsgG1MAO0LhxAhqEegOi1q0zRYY/DeIFAgFXkLWAVBXS5R2pEmQaq2Mob1uuDs5Aj8Jmo8YjIacnJ2hrECqSSDYUZVV0EquIbaSJrTBet1w3S6wdbOBqNJzmy2wFIz2hhQrWtOT8/I8wydKK5c2UdrwchpXD1guVgj5ZDhcJuyWoEwCBFYTpRSfHz/I5TUTCcjpqMRWZ5wcrpkVTo2k5S6qVguDKapqIqc0XBMqj0qNUgJWiuEzILUTWsXSLUmUTnYAC6qmxohBbbV5MaFvjDOFavFHG81pq7I0oQkEZTrFbZpqKuaqqpbMI7HNAXSeLRo11sdgCCNNUjTsDkc4NYFta2pbTiLSy9wRcn69IxUQDYc4fIS0oTBcEyiJdZ6kkSjnAffYOoCZ6rWxmdIdMIgz1kuV2SZZjY7QcnAcNZUJTiBMwYjDHmWohWczU6oqwYhNIqEoiix3jKZDpGBwIHVakVRrBkMMrY3NoNsr3BYW5NnGmvb9QDBaDrm4aM1ZblGSUWWpDhjwHp0olGJwrRjMREBzOGED3YUGyRii6JgWZbMVmtGuSJLFZn2SBXAV8o5vKlDfxQgkYE5pI06kTpBpwKhHEJatDIo6fAGirqmrg1nZyvuP5rx8aM59x8tWCwahJOM0oRpnrA9ybmyM2J3Z8TW1ojRMAs2JRySCHhv9/MuBLUYE2yEOEdjGqqyoKkKnKnBNzjrcRaa2lLUBut1sEepBCmCdI9ocWBBhVcgZGB3RfQi4bvRG/4VCJy3XECA9P6PrLD4wNoRwXNSSqJci9YJQgWpl8YHlg+vA4uHli3gTgamlAvAD6HwLUAk2IjkjwV++Nar9DiQKJw7fGQbiNgvyadBPZ+mv7hJtOGB4XfgAuNANwZEYK83xQyBQcsGLy0yyUimGdXJJqOrz9GsH5FtPkMy2cWujvHFKfboe0g1xuXbJGKK0RvorWfwx9/CXfkSCos3EikzfDXHntxFuzXZ9BbresDy7AHjq7ch20L4MCc5Gey01pvWrR/2YlIE9rX+zv1xQI3H7e3754f4d//an/b+x93bv+by50+69vL7XB84QFh3pFf45SluvINvALMCQPoasVrQuFO0sDiXILRHovDC9cA9F9k1LmUgOOJd2CuoDvBx3lOcEBc+CfvYi8CFy/Xfzb2X2ql3R+eEjr6GPpznHLQCPOYJ/fp0tNJgBJ/Q8fEZztU0Tc3m9pjT42CDMk1QC5BKQdN6T6Tn+9895ptvWJ7dt0GaXqi2bKL18Ufw9rl9rW9fvbzO/KR+E8sfy+e9/8SKJC6BPp4EPHncGvfJa0PN/jjwSrjPAwbtGxq7g2JB5meMsgxX32O9mrN2CdYZnKnRVYFKDV66cwdL65MOFdbm71KfjlnyXdbOAUL98XkZwPJ4sMaPX7cv39Nvy8vPPs/h458pRGR8bK8UQRoL4bGuZCWfJckeUJkhMt0hM4eMyyXCFSzddVLtuTlYclZlCN2Q4EFOQOYYLAIHziBMRcISOME7jZUJjcwQoiYpH1GSkZRLxGQf9JiNrGHQNJzVHuyaTAmE0JhG4uqgFOFlTRgd6rFl+7dJPx/ADykYD/Lg7w2foAgbV+NAeI+U525C5xwORyoFWZKQDwc0zrM4Ow0H8boiEYJMChIpGOSDEOHfTgWnJzOSLKcsg7yKFIp8kAeKTh/0GYMkVjBEOG9ZN3UbJVAEo7B34aCU51zf3+dLv/LXWFWWjbxmb3uDdbEiyQc8Oj3lzrvvcHp6HCajck1jLM466qqhqgqqch0WZecRrWabdzbQQeKRSqKTtF0+NIhAc+OdZzAYIgjMH8NsgmujcREZjTEBAWoNrgobdJ2kbGztUlVlANYoxdliRbEuUFJ1EhBL7/nPZiu2teI3pyP+cLFkf2uLr482MO+8RWkNWmu+Pt3k6c0tzNY2pq5hZ4fBvYfw8G5w7iYab8MBy1hDXZs2UqShrEoqv6IpK+rBkHK9pFzO8WmO2LpGMT/DNyUmzVg3NYU1zJ3nPe/5gfc8sA0NgsYH4/uVrQmzVYlOUpqqQknNb/7Nv0stEurDH/HuB/do6kDlGiPlIEQn1Tawx4Dg7GyO1gmzxZrDwyNe++IX2Njew3x8B2sbisryi7/yG2xvb7O3t8fiwXvcO/6YNEkYjyboJEXKFK0TXv3cF9g/uMbhg3ssZie8+4Pvs17Maco5G5Oc+dkxzSAPGtlakg9HbOzdIElTqmJNlkg2t/d47oXnubK9yYOPPuRotUaVjh9+8AP+/f/gP6CYHWOsYTCeMPAhatnhKdZLqvW
KslhRrR3eWUaTEZOBpC7WDAdDPl4nnB1p8vEUhScRBo1FioJsIriyFTRDT9fv453lleee5uuffw5cg5CSprG8/v07zBcrnn/xZX741g948cWX+cGf/Rveeuddbt++xQMz4dkXX2KYp2Bq8kFOYSQzk+CSEfNlhXEuAINaZoxu0WxXMtfUvPXhMfsvvsDzLz7P7oO7vPDSS6iTd3nj5GOu7G7wxne/x8u3r7M50IAh6o+2GlLtxjYYHkU7mnCOze2twAYSt2I+RKbEBcsDQnmyYUY2CHT0oes4HCEiyQPF7JR/+c//K66++BnyROFanVhro6PZE2kxwmanAXQwRDgCC4g3YWPnPVaZFqgSQB+BMrhFfEqBbCliIQBanAsO3cpYpqMBm0oymo7I05SdzU0SITGmCYbHuKcQEp1kDAdDlrNTGhcMplIEyRsvwDYNs7NTNq7fID07CXFzTd0C6USHX/Y9GjspBM4LhE4Zb22RSVgUS6rFImg2RyDAp8CPv9Cpv2nujFetI8o51znKB4PBBUBAjPaOTrroTOuzUERHZF8S5nHvBy68u78xvYxw7v887ponXdd/V/+avvM2OgGiQzPLsi7aX0rZ1UPTNJRl2VHz9yOu+z+XGU3idX2Zkst1E/N4GfjSdw72WUsuA2niZ5cBN33ARZRuiA7rKHUynU4ZDAYXWEli3mN+syx7rMMytv3lg3Capl0eIytMn+ElPqef73799Y0TfWfv4w43/b4QHdFA154RIBFlS/rMKVmWdZI2l8dC5xzqlbcv8dNv41jOCMio67pz1kfJm5if+NwIAhkMBl1+gAv9pV+Ofh/rMwnE+oh9OLIQxHocjUYdYChKxgwGg66vGGMuyAmlaUrTNMznc95++22EELz33nt89NFHjMdjRqNRx07SNE3XN7IsY2dnpwNyDIfDrn9E0M14PO4YLZqmucD2EtmGtra22NjY4MaNGwAdSKrPONNv88vjPX4fx7lSimeeeaYr22g06tou9q2Dg4Muj3F8XO6rjzOYxXrv/0QGESGCTFCU4ojPHI/H3TuiLMvGxgbb29sIIbp+ked558SP7/Xed/PucDjspElin5NSMhwGkG+cb4QQHeDsypUr3byxXC65c+cOy+WSCBB6+PAhH374IY8ePeLpp5/m4OCA8Xjcgbn6klJN01yYeyLAIM/zbtzFtoggtThe+mCl4XDIer3u8hrHawRrxP4Yx0Z/XorviACQvpxLH8AY269fnxEodllO7KdJ/efFFCWR+nNivz9e/unPHZelz/psNnEM9IFNcXz389NviyfNl5+mn7/kabHsBHYE12LbnQ/Gd9cZfoPDMkpqBgNqcF6G447HhWgbPDYE3TgPTl2QjBEyWGiECIyvQilUkgTWDG9w3qG9x6cabw3CW1wbwe+txTqoakvpJVVRUhQlZTlmOjHYpsY0BS4fkCY5TietgTLk1SuF1uG9vptvAlOJ4BzUGoJxFAKFa5kmnQmBAsbUNHWN4HyPlyhBmmQYq2jqBmsqQCBTzWhjihdQndWsigVKlgyHKdPJkLIqWcxOELYhGwyZDAcsyxWJ1iRJhrWGk7MTQDAcT1BKUtcV66JgkGcMBkNWqwrbFGgUmzu71FWN8JYEgWnWaJ2hUs3mzhY6ySgqg5ApG9OEYr2iLGucMSQ6ZXd7k6axNJVhVpYMxzl5mqKkxQsNKqNsKmpjWauC2jgGWUFqFcPBCOkGKAlaJXilkVLhrDk/53obmDZlireeTIMQgWmjKtbINnAs1cHRUNcFZbGmWq9ZLdYY73FSUZclqUqwsiFNFd5BnkkqawNwwEmkhcYKltahpSVNFEmqAqOEN9SrOVmeohOBlxbnK0wtAkbIG6xrED7I/uR5gmlWKCyJCs6eNE1Zl2uCG06yXK5QAlKtaJoKCVgLhbMY69BpineC1WrBaDplmA6oy5KyqsjaNV14T6IFy/kcKRxCepQS6CTDGEHdgGwMOknIkoRy7UJQiHQkaYL0ILRFqmB/tY1BZufOA4/F2oqqDCCTdWmZrz1VbfC+wjmNQJIJibFtII4Cr1QrLRLWLCUlWgLO4J0DrzDUNAjK2rBclyxWJY+O1zw8XHE0q1iuLHjNMFFM8pTt4YDd8Ygrm1N2NiZsjEfkeYpq6xcByAj8UIH1wwUGWGcM3jY0dUldLanKIswVwmMaR2mgrD3WqY56XYRomRAo5gOoy7eBep28S3Ro9GAfwsf7PbbjSo31eT6JxjmGDhgnWxr9dp5BYUzTMqlItAjAMW8FSihsx/ghkcK0c1P7HCFBBkdmAIJIXOvQFF72gB8tg+u5/+gJDqGL++NQ0ifTu3+aPk1/MVLL8hHtxN5f8KUHS2j7r3eY9YpxnqK1R5gGNbiC11cR+REiu0q+9yr16XvUp48Ybd5GbqRY0yDcAr2eY5Z3QUpEvks+2qR+9CZ26yV8vaCZ30VrR7b5NDUpi0fvMshSNm59Hu8TbCvhJF0AcjkpMdYEhjLO5d0fBwZ4nFP4snP4cfv+n+Us8DhgyE/z/J/ooCaUqH+tEAKEBetwVNjiHpIB0i2x1QzZWIRvSFWK00NwrmOsiqiRaIsPk+M5eLmfb+98Cw05r7OunkULGxKSiGEOPn6PuDTfXgZ/xDKIC3XUh3f06qxdV3zv71gv8bvO1tN+Ft8jhUIEJygKD0KRjUYImbd6HB7holxYCEi37R7WWMl4ICBTCOHRWuCrx4N04meXGSuf1M6faMvHXHu+nvoOJBPBEuoxdXX5WU/qu0/qc+fXPwa4g8TiyZ0lHxSMzJLT5cesZzUlNakcMRA1LhU0LPBMgh/FSaR3bdDv+TODT+tiHj8xVvo2a3/expftO3AO1nhS+S73v4uAEbq8Xf4+djfvA8Dd+x83f4RxIgVYohxP2KsYKzB6n6k/ZmY8TbqLkg1KWybrh+xkgqNyg1kzYCMrkGgaZ1D6HPiET7E6waKRZoBWKwZ+jbBHeGGo6wJVOcx4j0wKBA11OacWAzSLoCChMhqXsGHuYaVmzRApEvBrugDsf4fp5wL4oaRAt5EagbpThogKKVA2sGokUiK1wDXB4azSLETpS4HWCcVqhfOGoljjnSFTkCqJkJCmCd4JvG9pmzzUVR20EtMU4Vv8XpRkIDBtlGXFYrlgtZhRN4EGUwJZljIYjsgHI4bDAV/66i/isilmdcxwmLN7cI3DRcWDw2M+PvoBpqyxjUGnCc4JrHWU64LFfIYxFZ4gqeFao6q3BgGtdpsLMi6AwKGVRqcpSZKSZQOMFAwGcO1gH52kIcpOJ52DOYzNQJ8jgCzLmUw3mc/nFKsVeWOYDC0KKKoa1TrenXOceM//7mTB761r/vr2Nl9OBxT3Pwo6lkIEyRTvua4SkrpGnZ7C698CmeDxOCkCG4IQ1E1D1VTUCMqmoSlWVHVNbQ1NsaayDX40Qdx8FtM0LE+PqasCmw8pjON+0/BmVfGOM8xF0PBVScZwNKIsK7xfsGocewfXSZKc9b0PeeaZ5/jNv/k3+d///b/PNKl464fvAcGYmbTyQI0JEc9IiUg066
UhTOnXI6K5jOUtJ0j4WlBVqJImlJet027bjHufPnOR4fu7FMK2azFGMscRhyeJyTZxlhIOh2YvIirJ9xi7qQShMECqzFVBXKWlSsyCZTVBwThM4mWglJURUEAeiyIs1TirxASkEgFZoADFRVgZEKKRR5kRFFLnJokqYIEVKWFQJBEkgGnRAlFaISpLkhiVoEIWAhzzOKLCZUzhW0lXSYZTnTNKfUJa1WRJK0SIsci6bVjui1W4SRYjY7pipSVKDQxtBq9THGMJ2mKBmiZILG0m0rhLDYaoYCuh3nCFKlKXEY4+ZOnVuO0SU6LwmUwOYZeTojaoVugrpycwS6NKAj59xRGeIwwGpI4oTRaITEEgQhhook6QFxPQfp5iW01pRFSpFOQJdEkSJJAjrthDgO6fQrFrKK9dxwNNMcTDKO04pxXpJbMNoVb2lRoYREImmFkkQJOqGiG4e044BOS9FphSwtdFjotWklLVpJi7DVIooSwqRFEEUEYYiQCqykNAZdFyUVZUmZ5xhdUpYZk+mYdDYGrV3BjYGihLJyzsWBDFBBhJUSjHdTrcc3TgtzXuJ1I5UQwjl7NK5DspmvdALlnEBnbe3wMQdYWFyRz5xYZ5sfBzr59wnhXGKFtC7HXkrQBiu1c10N1IPVwV5c8WqkPZEg3Rjr/6WbbW/QukYMq+fVhHRxWkK6cVm6OVvXNk6UdVHFn1znP1l+MhevwT6ISTk4FWwTYa0BU445Pq4YZyApQFns8T10MUOO70GRoOUe+WTPxcwpTdIeYOMFrDSowJC0Fsi33md0/WuUvQFqdZNseBcVKAohEPEKyfQj8nQJ21nBmBKBoBIB2BkIi0y6qNYCWIXQU/Rk5J7DpcAF3NUK148AMx72t3lx+DTs/bD3nH4+/rj1PyhgnxbkHwQlmr/NCdDNIuvYcuvxtYCqHBH1H0GuPo6Z7RJEvTpGIwSpEekMqUJMNQXVxUqL1Q5YFEJg5QkQ10ALuBFU1tvhuY/mGiLcPWWzBx/TRj+qfZqmOPWn059thm0rMNacQIunvlPU2zT/eWMsSIUwDpbMqxk7kzZRq+Mc8Iymv9AllglFPkMb3SAmae7u6c/1Ygb9K+zMCrBDB0HhL3d1TxPU0TVud1wxag1Y1u31MNeoeXAAcJGKQv5wG8AP9cXT7/mxbT3X1POwuWs1d/z9dR1hT+AP66/aFmUM1owpkwsIl5VAAGjj3NVkOcYGXUIbUpoMY3OEUA1O4JpEzK33ZPm47T8NT/040OXjlo/7rGtTD++Ad8J5YCwAzNzqRf0fJ1W5VnTrNyDno2MEaIkIFItJwZnWhPtHyxQESJmRVgpr4Pa44NyCYGuYOvAfF6epzZjIFlQEGDUgVC0op+ggpBUYtIiYmBalCE/OJWlBCrQxBFJznAFxi95izvBwirQVQkJlCqgdHFNT75M5iTz/b7X8lQA//FKfK3UOVH3SC4G2tm5cSWUUeVkQYJFKkhYFZTkhEoL1s+exhATaTTAra6i05ng45DgtmIyOsZWrPugkESqImMxSbFnUJ7AFYQmDgE6nh5TqhBVsKGzN6spZDkYZLWFIOn0OhyN27t12cIEunaufsVRlwWw2YZbNODo64mh/B1uVRIFisRfz+KVLLC0vsbi6wdnLT/PRzdt869/8IXdv30IJQxiFCCx5VtBtd1nbXKEVR1jhKPUwdLmjQgju7x5QjSc89uijoEKKoiRQku76Wf5v/4/f48pX/iNf/OKv8NJP/yz9pRUef/Qyy8tL3L93j73dXeIoIlABSeSgieHBIelsxlOPbfJ/+cf/hCvPfprXvvsDoihAPf8cR2FE+Z/+P8TZtL6o1oOBAS0sGklVlWgVUIqAvNXisCzYmox593jIR0XGEOuyJ5WrlFFCkCQd+osrtDs9jNXoqnIPTXXYk9bu4V/XjipSuZiZoiwx2jBLU/oLK2xtb3Owdx8pTiyatLWAdI802lXeaFMDP8JHczgar9KatCipLBhzTKuzT9JdcFWrZUocR2ycWScKJdtb97l27RZ54SZ2KyArCsoyo9ft1hd+Qafb5epHHzE6OkDojLP3b9Npx2Rpxmw643A4ZnVwkTRLKbV70EqnEyIbsbe3jcWSzcaMDnfpLSwThhHXb3zE5uYF/of/8dc5Ho+wZYnVObPJMTYbAwYhRS0aJGycPUsSRSfnV7uuLK4HcoQDagTWQQA1OKVsybsfXiedjmkFhkiWGCyDtiEgpqwqSlthBgG3b3zElWde4Gv/5Y946aVPs7+1zejGdS7mF0i0pa0Cqvff4o2tu7xyeMAsTdm0gl9qtVkHRJBgtKvSqAQUFkoryIUgE1DlOVX9uG1EXYmCm2DTRruHV1Hb/lYVEoixBLpCHx9TpVPy/X30yk3UhUcwa5toFXKUagLlHl6lgIAWWgeQCYTVKKMxlUXogDJoIYyDNEQ6IYgCkrhAFClBIAiZEQoXRRRJSGJJGEgU0ApABBYdK1yxToiVtQhR93FjJZUx5HmBNqDKispaKl2htUXa2rq0LOsbPJcfe6xjButnaakIEbYg7rJx9grl8RHDydSJHMI91kdBQFwL06JMSZVCStvcO1ss2tSRUOAirpQbB7PpmLzXJ44TVyVXVRTaIIEnLm3yWKK5fgC608MWKVk6QRRTNoIKOTsksSnKujipT5afzOVhtLZfTkeF+Ap0/7pSik6nQ7vdZnV1leXlZVZXVzk8POTOnTuN4OhFSqUUa2trjYX91tYWs9mM4+Nj1tbWaLfbjXALPCDuCiG4d+8es9mMdrtNURSNMJznOYuLiw2EkKYpe3t7LC4usrq6ytLSEtPplMlkwmg0YjKZNIJfFEVcunSJwWBAHMe0Wi3G4zHj8bgR6E4L7f59HrzwAq6HWrwQu7S0RJqmjcCYpmkj4nuB3AvtYRjS7XYbRwMvNs+L6F58nYdJTsMO89t5fHwM0IiM/jj6/fbr9YK6F93nhX8v/sdx3LhLeLcTX6XvYQwPrfR6veb7iqJooA8hnAvC9vY2165d4/79+0yn0+b7B3Uc3srKCkopWq3WA9thjGE2mz0Q4QAnzgIe1PBC+3xEj1/PvIOAb6+iKBqI4bSDgG9/KSXT6ZQbN26wu7vL448/ztLSUgMW+D7rz5t5pxOgib/xi2//09tWFAVZljEYDJrj4/fLi8zj8bjpA/64+H31Qnkcxw1UlGVZ8/fj42O2t7eb/ulde3wbeOHXi93eqcODAvMQw3zkkN+2eZF+HnLxv3sQot1uM5lMmvV68dm7paRp+sC4dNodw48/4MRr/zd/nniYxMMYp+NH5qsl/PH1/ck7qsy7N8xDL17w93E6HqDw7i3z6/Tn4jzsMA9E+G30415Zlo2DT6fTafrI6egRfy753/25OQ+KzTvdeHjJb9e8+4PvPx5ImR/75/u/Byr8+D0PlZ2ewPAwm//d9w0PxnjoSWvd9FU/jsxv//HxMdY6Zw9/bvq28Of4/Pg2nU65f/8+k8mEJEl44oknmnby/XoeBvJt42ET3/YeBJxvx3mYq9VqNe/xgJNfjDHs7u6yt7d
HVVWsra0151qSJI0Tk28zf/04ODjg2rVrfPe738U7bL355ps8/vjjXLlypXnP1772NT766COEEBwfHzMej/kbf+Nv8Oyzz3LhwgWSJOGjjz7ixo0bzGYznn/+eXZ2dhiPx/y7f/fv+M3f/E2Wl5ebPj1/jZl3V/pk+clbhIAkCIkjV4ySRM6NNApDUDjHCOtzxmvt0xgQrtLdXT91LU4KpHDPIMY4xwFfYdtAicK9r9LGxX0aSxhaVOCr0WvXD+sr9l2mvQoCZCmpsFRaIwQEUhHIACFdAYuplR1tLUVpqISmIkNbQVxZgqh065EuDlSKCCskQiukUggRIJSgpHAikQWswQV2VK4YyVpMCS7/vHZtqCMawGKlhUAgrAStqaqSqnLxEFVVUVSaUoMREaUp0UYQVJbJaMJkPCUII1qtNu1OjAxCShXQTlosLSzS7pQMj445Oh5SFDlSwHQ6IwgD4naEwNLvtklaEb1Oh8loyvB4itWaJGnV7rwhKrJEkQJyitIV50gpWFzoEQeuojGbafLZMWE4otftEoSCfr+LFJKyKJlNJ5R5ThgECGvI0hl7sylhoJASkjgmikOiMKConDOCEiCkoKics4rNK3Q1cyCRKTCVQVjJLC1Is4IoCoijiOlkhqkn1bUwJK0AJSAUEaWuSCdTbKVRIYTKOV624g5WC0QQYIUklAHduEOazUipEBHk0wJdwmB1meEkpcg1RZ4RRoqFfh8pHNQUJy1UoBBoyrxAFwWhDNBlRSQDpuMRFo1ULpa1KHNkGBEGEUoKjCkx1pAXuj6HNDKCMAyoTIUUJWEQU2hLmc+o8glSGmZpThK3kUIhg4iw1UWIBAhIS9f+o9ExxmhUqLDWuRZ7Ecai62tc5c61SlPlpXNosyX9IKAVB7TbLaJWSCd0c3RFXrFUVJxJYyaZYZwW5JVxsS6lwSiNEpJAKtpJRDsK6IQB7TgkbAUkSUwcKZLEPRfFSUIcJ8gwJAwTgihBBiFSBti6GExrS1WUFHlOlqWYqsTqgnQ2YTYboYvCgT/GYipBZTRY6dxdRYBAub9pWwtPvtr2VLyeqKuurQdE6uK3up7FuQ5ZvDoicPPCLtb3xO1VItGYZgx1g1z9inXf4QtxqAEyWb9NGIMVIKVF1A62FoEwdTFQLU4i6ugWa/GOS37O1lpTO826f3txxu+rj9RycBog/fgqaxFENfffsn5NcBLZ98nyyfITudRCuv/nPCJWP0lhtGY6PUIPViiJCZGEUQIGgt4mRfcMMh8S5UPixbOEg/NUIkIog7WKUMSU2RB5+D7rmy9RDTYQxRA9HkLvIkF7CUyODkB3Ngl2X8PIAWGrTaXqZ0ErkdYBrdZahLLYIMR41qzedCt+vBD88Gb4YdDjYe+BHwZAHibKz7//434//f4f/b0nnzXWoqzFVgKSECOcIztWoAwYZSnTPaJkQKUEJp9RmYII6YT9H/Vl9Rg/3xVc0foJrPCwfXoYsPDQ9wh3H+mP2cc1t5j7nNMy5EOhgYfvQu2GaQ1KANogRMWsKimKjALL3vYuhzv3ufjIGlK6c0AF7r7aBpaP7h3w0qUNRllGVOSE3lGmPl9sfU364fCPk3ac3/8H2sfWTjr1cbXWnmi3p9pwvrDoYev7y7SFb8/Tn6ixhZPXBQ0U5L7j5F3GZhgGCNWiMpbQGioBgR5RRUuEoSWvBCQdrB5gRI6ki3R7eDKa/Jjz5b8G/njYHP3HnXunv6+BXACa+5c56GaurWCu43Jy/D2rNb8V1oKRgkE7oh8PuTVcwXDMIFZMjip61nlsFBlsVyHnFiPu7pXMiEiUQlvFSPRBSfpmRlWWmGgZIzqUWFpCs6CGGCMZVy1KYoxyGp+WFmxAJATTokCHi5xZEuxNdT1XVTuwCShLi1Tiv4uG9VcG/NDGoKSb+DVQ22XWXaaeLNLGUBqBkAHGaoRQ7qFfa5YXF+mtbFBMDxBFBViUgkyXHNy9g7957yQJS0vLDBZXGM4yjm9eo0EDrMuo7Xe7dAeLBFHb3WvXRI8yBmENcX+J8eSYTq9DVpXcuXUTYQoCFaArTaUteZaSZ1NmsxlHwyHDvW0wFUmkWBt0eeqJy3T6C6yeuYDsLPIf/9N/4s03XmMyGmGFwVaaQCrOb26ysX6GdquNUG4AT7MUrKUVx8RJAlJQqZjh3h6ddhupIkTbWZdeXOwQL23w7ddf5eadu7z0+ht8/vM/x/Of+Sxn11fpdjt0ej3iJHbOH3FMlLQIg4jL5zb5x//kn3DhkYuk0zHnLmySTWYopbixccSo0+HZbEYELq+zpviMhVIaCqDsdthJU25u3+Hd6ZTrRc5MOOBjvqqv2xuwsnaWhYUVkIqsyMizDCEKrNGUlZtQSKKIMAxIyylJ0qIoMkxVYmpoIy9K+guL3Llzg/HoCKPrC5iUJzdjFqZZ2lx4AhWgawo/qKt/jobHjI6PWV5dRtuK2WzsYlxMSRwGGOMeyG7cvMdoNKKsKkQtyrRbLSqtmc1mLC4O0FXJeDRkZ+c+qyvLXDi3yfhoj1iBoHBCvlJkecH9rTsUpcFKSavdpihydCviZ371NwlMzrf+y3+gM1jixRdfQlFRGYPRBVnqqDcVCL7/nW9wPDqm2+miAompKqwQBAGMDveJV1YJwgAfgYOoCWXhrHaRdZWSqVsoDrlw4TyDXpusLNEEbG6eRVLnLiuB1RVYy8pCl++99h7JY09xdv0s23uHLCwvMbp5k9n9LR4JFcHwiKs723zn6IiRgWUZ8D8sLvJMEBBKhZBgtUQblyVX6IrCWFJjmBrL1FRkuiK3hkxKtB8ztH/QdFViEomyzWUHaQUBlqgoCEtNVW2RDw+xq1vIS4+jllYbu1CsxtiKvLL1s3aJ1RXWCoQIyKY5SrrJgcLEiAxUYQmImU0qMAIp3JglagpYijoWR1eo2joupCTEOXsEwhBgiIQlCgWhFARSECpBOwiRvjosiFDK5TEbBEK53DFrJaUVPC0CtBXM8oLvv38PrUsmaUllKjDG0abW5enG3R4rieLwMCeoL2rG4qpGrEUqi5QWaSVLcYi0mlGumRUF08mEQb9PWFa0W13MdIxUIT/1yCrpZErYX6LfH6CLlHKW0pGWXgDHuiRPdU1LnkRJfLL8ZC7+Zvi0UDtfse1FsfmKKi+idTod+v1+897t7e2mKt47Ffiqfg8/7O7ukmUZo9GIlZWVRtyeF6H8tviq//F43IiW4ET7g4ODB+JaAHZ3d5tK6eXl5cZB4uDggNls1uyHF5urqmIwGNDtdknTtHEY8XCHF6yXl5cfEIq9sDyZTDg6OiKOY5aXlxunCB9P4d1R8jyn0+nQarUa5xIverfb7eZ7fXW4F0uzLCPLMrrdbgOdzEMKHrrwv8OJg4t3v5gXLudv/v3v8zEsXqD2Ymmv12scGryI7IVsL6xaax+IfPDbMu+soZRiOBxy9+5dbt++TRAELC4uNkCBF409yDIvpPvokvl1w0l80LwI7P8+H2njj7nfLw+7ePDBOwQIIZq2mhfKJ5MJd+7c4aOPPmJ9fZ1er9e07z
xY4PvKvODs22e+P8+DNx7iSdOUnZ2dRlz2IMY8cOX3dd4Jwrd5EARsb2/T6/Xo9XoIIRqBW0rJzs4O77//Pt1uFyEE3W63WaeHK+bhgvm2nBfFfZv4dfuf+WM/Dz54aMI7NvR6vQYm8AK7Bxc8TAU0x8dDT76P+X7n+8bpsWp+u+crjh724D3/c3q7/bGchxv8WOPvjX0f9b97wGfe4cW3n4eZ/Lr9dvrPz5+D87FG8/CI/1xTTTN3bOb3cX7b5z/r28rvk+///th7IMV/xm+v3w4PacyP077dfJ+Y/z4PHYEDeDz05o/rPLQzD/XMQ1F+/fPwhgcv/Ps9SHf//n22trbodDo8+uijzXnoIZX575o/H+f7hG+X+Ziq+WNaFEUDEfpr29LSUuOAc/fuXe7cudM4Kd27d6/ZFu8k4o+Vb6PpdMrh4SFbW1vs7u4CzvVlaWmJCxcuNC47Ozs7TYRPmqa89dZbXLx4kcXFRS5evEiWZezu7rK/v8/i4iKPPfYYN2/e5KOPPuL111/nl3/5lxvnLg/Z+G3xIMsny0/m4mByRRhKwjggjkPiOCBSsq4Kl06cpU6u9uVg0qKac95i6mctIaWrGrOVm8D2k77QzK6fXPtclKbRGiFdBRk4kF3IenLRGKTSSFG5inUpscZQaYMSikAFSIFz1DDOZdYaSWVAVFClBdYKytISRQFRFBIGETYsUaqsazkVQRDhM82VcM9PiFroNhUWjTaVc3w0FUq4uaqyKNGlcwnJjYt6CVXgcqiFAOMAEF2VVKWhLNzr7ThAiZgyl6AryqwiCNzEbDmrGM40lVXkpaXb7bG0skxvsc/Ccpc4UeSzzEXKGI00htlw4iDY5RXiVtncX85mGdoYFhcXaHfaREkbYy3Do0MCVdHrtHCOJpasEIyOZ7S7bVrtCCkMlU6xVHSiDrbMybXG6JIksnRaLQSSMImI2h129w7Jc+ckoYuK0WTM0kKfJIlIkpAwCBgOhy42ptvFKkMg3RxeOsvQlaDIS5QMWBz0EAGMxmOQlqWlAVrnzKY5ZZFTVYI4ShABZNmMIBBIJTBlxWI/JAhDjscpqtJIATKSpPmYoi4qCYKA46wiRmGOjvAp4IGAtaUz9NoJ0yxDG0te5nSikLJ0UXOtpEWgFNoUpNMJRk/BuvWGYYiSlkDVE8xSIaQiVorx8ZAiz4lbQQ1QWXSZMxsfomRAHCWUZUGWlXTaCYGU7hxDoAJFEEWEcYeyNMRhH8OMqFWRpVNUoBwIgJuXNcYVfuiqACPJ84LZ1DnYTmc51kqEcpGzRhhUqIhbLUTHuR63ypxuUbFSO/BoIyjziiIvMBKCMCAMIqLYHdcoDIiDABnKBoKU0oEZMghQQYKKQoIoIYhiV6Bn3bnsorydg22RzcizGVVVYqqCLM+o8sK58hrh5kAqVwjjonxDrJTo2lm2MjjnDw85WNwkvzwlkFhfOOViqn1RlT4JiXHz2XWUjQMuBHWmNXBamHH9RwhvHlJbqls3yiJEA4UIN7FV6y51dCLCQWY1QHbyOObiuG3tnOymPN157+6paARC4cmSZvHPKuoE+hAukkvW9y1irq2EcMfjk+WT5Sd1cfchPyySijll3pRjJBrVWmB6PEZYTdg7Q15mBOk2raOUaOVRyu4TIENKYQmsRpiQSpeY41uE+hC5+gSFaiGsglZMnCxTDj8iTfdIls9DJdBSwtrTyO1XqdZeRNDByMrFoQtTw2Juntho6+Z4lQNNhRCI0zt4ar8eJhw/DPqYfw78cXDDx8EOpws65tfxo767WefcWxvIQLhiUSmcNhB118iHt4mDLjYEYwUiO3QAsOphtAaTusJmU+N3p57fT2+H15j8vamZm5cRJzeu7h0fAyv8yHYSgLFzz5Tmh9v/1HrEqfX88HERTTdu5gCs87OwSqFQKBMwPBoTRCFlMaQqUrCSqn7+1loTRjEJgmwyhCjiB9/a4enHAqRy98Ky2TbvlTG3PYCon9sf2G/fcP7YCx8KIh78Ow/vH3/Z1/8yx8DaOqYNavhkHv44/R3uGi8AQ4ARBTIfEcQ9LAZZlUBEjTGjzIyymGF0SSATQAG62c+HbeePWn4cCPKjXvtR751rjflD0hxDHnhH/d+5qJcGgnUdwYGw0iMyJ+szGJQ03BspjAooTMDSomF31GIkJUIbbBQihObWSHLlDGwNNUZUWA1tURJVM7Jiim2dobIxUBFYwdhKjFiiF2rW2wVFPmRYRFQiJhACLQwloKwiLQMOq4LVQcT2pMCIEHfOSYpK4UqoP3H8+K9arHUEuUYjZYCxoHVVZyvV97TW5bFGYURZlUgRUlYlQkrCIODMxjnCpEM+G1Pl2jHUgaTU7ka43+2xuLTC4vKKq2TUhtH2LqYqsHODZ6ACuv0BMghBOntSaVUz2EZJi9E0pddOiHuLTMYzqsLlXmpjCESIrjKybMp4NGIyHnF0sA9GEyrJ2qDLp558lO6gz8rGJQ5TzTf+y7+jmByyvjKg04oYj0fMpjMWuy3Ory0zWBwggtDZl+qKTmfJTaTgJtdKYziztMBqv4fRFaPDfUZHB4yPhwT5jJdefJH9W++TTie88+r3uHXzJu+8+Ra/8mu/xrkrj5Ncukin3SaKYoR09P3Ln3mJ3/jN36DValNlU1pRyGOPPcL9nSO+/73v8e/++A/ZuX2d3wwCPtdqu+p9IzFSU2gYVZY9Yfho+x5vj465VxVkQiCURAq33VIFdLoD1s6cZ33zPGEQI5V0tOFsigB0qVwVQuAmWKO6eq7T76OrgjxPQUiCMCDLZkRJQtjqcXT3LlWRuUleLEH9UGJqiMcaQ2BBKEESuUnpXJ5cAIuy4qPrt+j1OoSyQ55lTEZDDvd2+OjaNSbTlPt3d8jzDOpJcFtPqC8vL4K1dHoD4iRhOpuiDaytn0UKOD4+JkjaBEHIbHyIVDGdQLG+tsK4lFx+7EmO9raxxYQ4avHUiz/Lpz/9Gb711T9iMpvx8hf+BvlsTJbnPPXsC0wO98iyGXdvfMTu1l2KfIaS0lVsWEk6GTPLC1bW14iSMXtv3ePpp58iiFru+oKDGZqhVzp7XGe3625iFhYXuXjuLG9+dIfrN2/z+OXzKBG4Bz0MWpRYBFEQsrmxxp1bN3j8Uy/yjf/tT3j+hecY3bjJ1r2bvNDu0spz3hgec6QtgbX8XKfFL/S6tEOFrB/urTZUlUGXBVVVUlSazBhmlWVqNNOqZGoMY2uZYSmlpNDaZSHXk4TaOpcgf7EWbtdQgSLEYIucpNIUdpvZ8JBq4xzx408SDhYwCHRekUhLYSo3TtQ3SVIItNANN6OkotLaVbhY66xUja77Un3Rs6YWokqq0kXZWDTSakKhKYxAG/fQIBEE0mXqJbKu3pUZUeBqQ1qhIpaWOHQ3sYEKiJRACUkUhbUbkkCVBpNN2b8/o9DK2eMKkNKBH61Om04S8eRmn7enY6oycm2ta9ilvqmJgogFJYitZVpZciOoKJmlY8IwoJW00BjapiKMAlZkxSTTZEmCCUOK0THFdMpav8ejPc2Ht7YZZQ4a8
lU0nyw/ecv8A9LDwA8vkHmBd15s9tXvPmbDV75LKRvR2otax8fHCCHo9XqN6Aw0jhVZlj3gWOHFUqAReb1TwPnz5zl79ixCuIrn0WiEEIJ+v8/i4iKLi4t885vfbAAOX2Xe6XSav3vhVgjB66+/zmQyYXNzkyRJuHfvHjdv3uT4+LgRz6WU9Pt9vvjFLzKbzbDWNjENBwcH3Lhxg+vXr3PmzJkG7Njb2+Mv/uIvODg4IE1TjDGcO3eOy5cvc+HCBRYXFxkOh2xtbTGZTNjf3+fg4KABGF544QWm0ynD4ZDt7W329/fp9XoMBgPW19d58sknmwlXL976paqqRoS01nJwcMBkMgFOHECMMSRJwmAwaI6ftbYRFT2IUBRFE8fjP+8jcaIoYmVlpYkXybKM8XjcfEeSJA88KFprybKsgUmeeeYZnnnmmaavLS4uMp1Om3Vbaxth1Tss+L7p238eDGm1Wo2LhBeZ/XZ7qMDvl4/48REJ84CKMaZpBy/Wjsfjpir/5Zdfbpw95gV632/nf3wsj+/fftvmhXdjDPv7+9y8eZPXXnuNF198kUceeYSzZ882LjY+jqHT6bgqzhqC8eCQ/66vfvWrnD9/nkcffZQLFy4QxzFAE8H053/+56yuriKEi1rybdtqtZqojXnh3UMsSZI0cSBeFPdCvO8XfpmHs7y4fHBwwNbWVnP8/PGIoogkSTg4OHgAKPLH1gMjHvTwUTx+PPI/3tHCAyLegcfHQM1DTX45PZkghCBJksZZxrtRTCaTxvHEC/8+rmkeVlBKkaZpc1zjOH4g/mce5PHiu4e/vJuPH2O9A4Vvy3lXDx8t5CEx3zf8az6iyjsujcdj2u12c/74KBwPmPhz1LsNtVqtBijxAJw/zj4aCiCO4wdADO9eMQ+3+Ggh/1mtdXPe5HneRJN4Zw2/Tn/sPIRSliWTyaQB33zUkx8ve70eVVVx79493njjDZIk4Rd+4ReaccP32TRNm7gtf10Lw5CVlZUmRmg+UskfP+/WUZYlV69e5dVXX+XOnTscHBwghODXfu3XeOyxx1hcXOS1117jm9/8JtPplM3NTV555RW+9KUv8fzzz3Pp0qUHYJL5/uOjsM6ePcu5c+dYX1+n3+83fStJEjY3N/mVX/kVzp8/j1KK3//932c4HHLt2jV+7ud+jqtXr7Kzs0MQBLz88stcvHiRs2fPsrCwwHvvvde0tz+n/bk+PxbNO+B8svzkLEI4F8EkcMJtFAUEoauSCoLARR5o0AgHfSiBlPU8ixQECrAGa0q0rjCVQTd9xyKRBCpwz/IOeXdxKUK46Mo6csDPSCop3LyJEc45o6badakJwpabW9Ilsn6uU1IigxBrQzexW0f/WiExaAqtKYqSqqwjxKR73oqjEBUojMuRRQYhYRg318YgCJ2AazXWVpRlRpG7qE9baZRwP6bKMZV1bqba7V9pBTl1zK0KCGTonhENtMKQQrvojnacoJIWZZVT5BOscYBgVUJRuWuXQjIbFRzt70MUs7i4hMIghSYMFb1BF6Uk3Sih14/RuiTLC8rCUOQV09EMMJTpFJEoTC6QQcLS4iqHVpLlEzqtiMoUZEUBqmSwGNOJY6qiYDyTVIUhXm6TznKOx1OydIZSsLzUp99rQ1mAUqydWXHPh8ZwvHuAyQ3VbIKxEZUOyGrIZ7DYQ4Ux2ggKK2r3tw5RrOgvKFpRSJVnpGnOIOqgWjAeT5jMZlgrUUGHXi8hzwp2t48JVUCSKBABMYq2rVBhQFoojkcT2kmENZYoDEmzEqkklbbuWZsSo6AyJUkSESYJGkOuNWllyIuK5f4iSknnohpIZlmKlDCejhCVRmhNkaZgNZ1ulziq7zeyKVGSEEgw2tCOLK0wJteWNMuIY4vVhm6rjTEFZTlDBDFx0kFXllanRRwGlEVGEsYEMkHZACssMlSotiVLh4ShwNoQU1qwglJnaF0RRCFSOYeWbDqlzEpG4xFlURAK6SKDqworLFEQkcQtZBBQxYZQh64ox7p4EiklprLOpVdYhAoIlAOmVBAQBRFCWKJAEoYRMoiwwkWBI4MaNgiQMgFitHaxPVVVkBcZaTquC+8y0jxlOpuRF4WrXtagVAw12COwSBRhECLDwIEf1lBZi7buXs47c4BEa4OwXpB5UHzyMEUj/NalZW5+2eAFQZ/q4kN0G0DUC4VudSj/fl8sqw0oWQsnJy4jvmLdi2leSXHO2f5bPNhhsKZq4A+jNUZXGFN/idD18ClqpxG3P74g093jyQbykCrAWlXPF9ZuH8IDeZ+AH58sP8lLXTAMDwIgnjsQgjIbEgM26lAW20hrMNO7KBFCe4EqaBEUGQoH+bnrfEQxPkSNrpIMNil7L2G0K1o0QhPokBRJsvwc0ewO4+13SRavIKIELRNYexZx+Dp2+addIaO1IOtIYQBRQ54icJEeFsS8OHt6L+cAh9PLw8CL05/z//5R4MbD4BG//oet97Q4f/q1EzcJP0af/F5VGiEiZKtHlE/JpjskrSUkGVQFsn+Oqhy5IVXjHKWUws4Vnjzs2R6oHaIc5NAADfheUv/LPtzr4i8LLvhYDPGwv/Hw4/Swtv1RixAKrEDZCiUriuKIyWREWRbkswnTbMx0fEwQKirr7quNrjjYHdJNFB/d2WI4M2wdBBRFBaj6HLHOcd9f5Dg5dg/bF6ivp/P9p4ZZ7Dwt8LD2qumMB3GFB99/GuKBB87qORihflXU3+8vuVacnDsNPeNXUP/DGoiWQe9hMyDuE1VTyqDlimqVJDdgqylCLREFBbkyoGv9zR/rU4ft4/rgj2zLh4I/P7y+H7W4z9SF/j8SFzuBnSwe2p87G3ybehiEk3a3RgEhotUh0w6AT1RBmodgNUhXuC4MaKm5fhDyyErAZE8iZYVJt8hlF5JNtMaVGwRQWY3UlsBKpoVkmreJg4iFTo5gwnEWonWMEvWzJZZZIQnTgrODGXeHfQfLUYD1c5AfP3b+1y5/JcAPgKrurNYUjqq2zvXANCcmCOlo8CqzIANsPRGw1GuTLC2DsBRRm2mwSCAspVAsrIf0un2XaRzWOZ3AKB0zHh2DtbWlqKOX260W/cUlgigGXJahVI6oV1JggghjKvqLy8y0ZHfrFmAcGW8tWZ6SzmaksxnT2YzDowNMmRMGksVuwjNPXqHT67J65gL3Dsd88y++ya1btwilE31XBh2efOQ8ImqxN0x556ObrC3scf7CRVq9RUIVuMr/+pJicNUuVVWxv7PN3vYWx8dD0rLCIok+us4vvPQLRK0WNptwlGaM7t9jNJlw7/59/u7f/TtsXLrC2TNryMDReZ998Xk+9/LnnAAwmxAqCVIRKcn9rVv80R/9IddvXmdaaf5fWUZuBc+12khpmGC4qTOuljmcWeKeCfjwYLd2PXDbLYOAfm+BjfOXWT970d2XCGi328gwIpCCdqfHURCRTyeuSkZIqsJNKMatFlIJjo+OiOMWFoPWzgZ1eXUDYyxFnuJJRiUcbFJjhQhVVxAb7SqUAkUrCVFSkOYZ1ljCMODu7gHZ91/jkXMb9BYW
uHHrFkfDQ+5v79PvdllZ7jOdxeR5RpZn6MqSZxlHR0M6rYT1jS5V4eAkrGV7e4+9nfu0k5BIGDqdNsvrG6yc2aQscjqL60ymU5aWFpFn27zyypv8zK/8TV56/lPs3vmAdHTAYGWDMAqQQtLudsgmQ/r9Hoe793jztR9Q6ZLlxQFRGCKkZTIaUhYFN27c4er1O5xbWWIyPGI6nvGZn3mZKDwBPpz1kjxhFZB1dicIJM898xRvf3iDGzduMp19hkGvVd9IKsLYD1WCc+fPc+Mvvs/5R77I0uIC+8MJgzNrjPf2efN4xEI75o5UFFWJsIbDPOPabEonCukmCd04otdq07VApbFlQVUWFEVJVhTkNiStAiZVxbGpmGjDVAqmGPLa/lX7C2Z9h+As0uqbMiObCZTQVsSjEZGC0fER050tomdfQG5sIo1F2JJIWLQCKk/8CWJbuOo162xBMa7CU8k6mVobMJpKVyADAgmKCkSJkBpDLebpisxolwVpBdq6iw2Fi5UaC4UV3tHDYqxCYQkDn6eqiANACEIlaStdu7BAmRUcZCWVLpFhG2sqAiWJlMvKVipAmYrlxTbL3YisLCCSVNqQlRVFZQHJWjeiJwzDccZYaypAWEs2m7qJGSmJowRdOsHq33/3fZ5cX6D9yCMgIJ9NqPIpSy3FUtLnzGTM3t1jtIV2Ev33v7h8svz/ZfE3oPMV3P6m04tT81XBvnrbV+V7YMSLVn6dXvycj8GQUjKbzdjb22M2m5FlGXEc0+v1muptv/55xwQPj3jx3LsZzFv9eycRb6m/urrKaDRid3eX8+fPs7GxweLiIsvLyw0g4QXjN954owEf5kXQTqfDs88+y+rqKr1ejyRJHoikUEqxtbXFe++9x8HBAUEQcOnSJbrdLvv7+3zve98jyzI+9alPsbKyQhiGvP7669y6dYvRaMRgMGBvb4+rV6828IcX2judDjs7O9y4cYPRaITWmpWVFRdHd3TE7u4u3W6XlZWVRuj2ArAXMtvtNsPhkHv37vGd73yncVvwArEQgrNnz/LSSy/R6XTY2tpie3ub2WzWVJN7l4L19XXKsmR7e5uvf/3rTb/x8T3PPfccq6urdDodptMp7Xa7iQPxUSFZlnF0dMS1a9e4efMmk8mElZUV8jyn2+2ysLDA8vIyaZpyfHzM8fExw+GQbrfL0tISm5ubdDodsixjMpmwt7fH0tISvV6vif0py5KtrS2Ojo6adXpHgel0SqvVIsuyZt1SStrtNktLS437gAcR8jxnf3+f2WzWtOvCwgKXLl1q3uNdanyf8ueGrt3E/LHwQIPLhndCv3dv6XQ6D7TP9vY20+m0gZn8eVOWZSPKezE8y7IGaqqqijt37vD66683sULPPPNMA+lordnf3+f4+Jj19XXW19cZDAYPuM74c0xr3bjP9Pt9Op1OI+rPn/+nIy7mwRx/Pvsx4A/+4A+4ffs2q6urXL58mY2NDYbDIdOpizcbDAYNeAA0biRVVdHv95uxJQxDFhYWGjDFQzweOvOxQvNitl/nfATMPOzmf7wLhAcZkiRp9k9K2cTnzEev9Hq95rjGcczS0lLT5621XL16lTfeeINvfetb/PN//s85d+4crVarATmiKKLVatHr9dje3m5gDA/a+HZMkoRut9vAXqPR6AFozx8DKSWDwaBxDwEXLXR8fNw4M7VarQaa8HEvHq7pdrtMJhOWlpYIw7Dpx779q6piNBrRbrcb8MVDIj5ayLeV75f9fp8gCEjTlN3d3Wb/+v1+A8f4ffnoo48aJw8PPC0sLLC2tsb58+e5d+9e4xZ1/vx5bt++zcHBAcPhkDAMuXr1Kvfu3SPLMn73d3+3Af5+8Rd/kX6/z6uvvsrXv/51bt26xc7OTtP3/tE/+kd85jOfaY7nvHsNQKvVoigK7t+/z1e+8hUODw/pdDqsrq7yzjvv8Du/8zu8/PLL/Mqv/Aqf//zn+fa3v41Sii984Qt8+tOfZmlpiSRJSNO0gVt8u7XbbTY3NymKgp/92Z/l85//PM899xyXLl1qoEHf186cOcOnP/1pzp07R1mWnD9/nuPjY1qtFkIIDg8P+cY3vsG1a9d46623iKKI6XTaAIsPq5KaXz4BPn5yFz+2hJEirKM5ojBECWeji3XxLU44dFV7EosMHDDi4lwqBz9YMJUD7k2pwVhKNEYpjJSuIMdIQCEDQSBDB7WroKlAdxulMTW4b6xBKAjjAGMjV9FpDcpqhDW1dW+AkhbQTvAUCqx1z0NFSV4atPZVi5YkCui0Q9pR6FwYlUKFAbQ0UhusAl0UDjIxFoMlyypMWWBMhdQlGAOBex4tjNteF1NTX1+0pbICm1eU1Yyi1FjqyA4VuOuBdG6fSSIJZESRGkxZEkcx3aRPoTPysiTPNDrX2KrgKJ0w6PVZWVsEYV0VopFM85S4lbCwvIC1MB1ntGIIsCQtxfLaMlHsQM5sfERvMGB5eZHhIUxnGUm7x/KKBluRTidUlabdG9CNeygpKCy0ui0udGK0NZSlJs8KRuOcQBk6sSIJBVIErqhrqUNR5AyHh+TGEoY9lJTYylAasDpwEallQRRIWl1LHEusCZzLQ5UTtkKK3HB355g8TVlb7qKNARuwvz8iz3PWlvsI6+J/nFhh0HWhhJKQRAG9VodJOmWaF7RaMWleMJpmxIGmE8WMs5xuEpO0XH8I4oTDwxGzNKfTG3Cwu8ug26a/OCDNC8oyJ1SCbiKxWmFEgA4iF20SORBilo4RSIysyKxzVg2kuy53Oh3ngGw0FRWF1QwW1shnM7LJMdLaukrXuYomnTYyBEPmxB0lQVl0CWHYI081ZZ6CKQjjyM3BGtfn86ykTGegc1TtSuwLVaSBWEp3LokKi0EISRyexKhh6sl/CYEMnNuOqJ0erUSqEBXFRNLBlEIapFIgJBaFEMpFMwsJdZS0qSrKKidLs8a5cJZOKMqcoqyoypLJKMWU2p3bgSQWECiBkQoTSlQQIOIIIYWDvLTAmroYyqp6DkljBGjmC1h9LbltVBkrTB19LbBoqMES5/BRF51ZEFYjRU2JCC8R1VEpWBdF5DzxQViEdbCFmbvGNpFZCBC1PX9dYyysxdbfJ3GxwEJQFw86QIU5pw9ttBOdvYZVC1eijrMxRmKldOO1EGDd3LiyFq0rN4Fr1Jzrx8NFyU+WT5afmEU0qBXwoBDtY9PLdEyvFVKILmU6Iw4UcfccIxsRtJYJWx1YvEiNw1IZQbH1Di1VEZz9KUqUK6r1oJd0Y1AgtHNPSs7SO7tMvvcBRvYIFq8gwj7ByrPYox9Qrv0UVgo3LiNRogbJKuOcyaR0ceO4ud55UfZ/7/k7L9w+9O8fIzb/qO/5OOhkfl0n4vzcfP/8+xuQzSKkweocpLsXFb11bLpFNdshUiU2WUZnx9jZFpEQSCoq2ghrHM7RQAcfL6y77aJxuZDC43e2gQjtQ6IxmnWcrMx9zwPfeQpoERIfMYIQ9bj/IPYipWjG9Y+DTk63tbYQChBSM1haY3W1YPtgzHQ8RaGRSlGUKSpskWUFSrnrtzE
V49EYZQ1ChnTaHXevUn+3wB9zD0Ke3oZmM935Ve9/s/k+6wYfNVLrVnNIgYcixamVeliG5jv8xQ58ho6DyesVP6yN6ot1w3U02/PgPrjNFu6eBVeca+RFVH7H3bNEilBCKCq0nWKNgCBDBAtonaFNiMSBqXXj/fDGzB3k+bmah7nRPLC//PA5+OD2ix967wPvmaNuTl4+3WYOG/X912txzUfnvwt3nCWigWLdA6LElhCIgF5SkWtLrisELbRICfxx1IJCWG4eSFZFiLSWlJigfcY5RwrqFA3r7nmlojIZsi4KL41kb5YQyDaDJGNBj5lWmmnVpzIxUiak4xGlUGwupdwfBpQiIRSle27671C8/FcG/PBZRtrUhBTuZtRaQaU11mqkNcjAUdeYEiEs7Shgae0MUkUoAf1ej273cTcgUt8Uy7qaQ8qm441Hx1DldYWeRQlXlbKwuEi3v4gME4x1z0VOuHKfT5FEKqTV6bB3dwsroUhzykpTFQXaVEynE+f2MRqhi5xQSfqtmOeeukKn16e/fIa7+0O+84Pvc/vmbZSASZYTI1hbe4yltQ1WFrr85md/jtdv7PKtP/1j3rt6nQsbq6xubCJl4GwPlUIbzfFwyK0bH7F/uE9RaKwIsdI9AIzzGZ1Y0eovkRVTlFBgDOPREddvaN58512+/8pr/Mqv/Roby0tc2tzgzJlVjo9H6CJDqQCUIq8Mr7z6Xb76X77KcDxDqBAE7BvL/zIecSadEePE92MEJAnhzgEvPfsk00pz585dlAro9BdYX9tg49xFVtfP0l1YYuveXXqJoteVCFkRhRFZDnomCPtLSCGI4pg0zRDS5dBrY1HLIePgiNlsjK2rEdfWN9ja2sKayuWPuc6FkALlDiZlVTEdT8nLgsXegrNhT0IWFvpoY9ndP2Y0HrOy3Kfb7nD7/hbm3jZLS8ssL/X57GdeoN/rEccRUkgOjo547Y13GR6PmUwzDo5GHB2PWF7bQEjJdDRkOBzS73cZDJ5gOhmxu32Xe7v7BHe2uXLlMutra1y42ENXJZPxmMm4YOP8ZT796U+zvNDn/mzM0uoGIlkgricEuv0l+t02Nz54m4O9HZYWBuRl7twelMsXrKqKvYMj0qxCYdjfO0BS8d5bb9JfXObxp55ybSP9haMemK11FpDCAUYg2Dx/kY21JWY64+33PuRnP/eSm8izYI1tLqjlbMxk5xY3bt3k8U+9yPe+801eeupJRrfv8LWs5OLTT9Cmwly/T64lH5QlN4uKAQKlMyIVsLa4wZPLS8jhEQJLmKeEs5R2OqOqDIXWpGXJQFeMi4IpgpEwjIuCsRWkuqJqrjYCXQs2woLUGqvr+BVbEUtJXEliUTHe22H4rT9DP/oU6pEr9cBusVIBzokEIalqhxRhDQJNIgpKIyh1gKwnBY3RSOuse6vKIK12FRalBqtRVYo1El3nUQe1HZ/ATRhaq+qbME/iaLfvSAp7cqMzLZw1rRKaiYAQTSFChAgJEslae8D2/girKyIlCaSoJ2Ot278gYG3Q5miSUpQVSEusBKGUtIUgqjTjSnNYWbRSRMJS1q4ms8nYAXFJizCMKCvDrVHJ1uyAtTXBUzqnmI4Q2YzDvOLtgwnjTDCzAYYCGfyVucT9lV9Ok8i+qj6KIrTWTCaTRhz2lcLzsIgX/zys4cWzOI5pt9ssLCzQ6/XodDqNSBlFEe12+4FqgPmYD//a/Pb5Snj/HV6M9tErXhz11eUAs9mMra0thsNh8/kgCJoKdu8YMC/4Li8v0+l0GoHfV417F5Pbt2+TZVmzb8vLyxhjmEwmDIdDNjY2WF1dZXV1tRE7PbzhHSa868YjjzxCt9t9QEz3jiXr6+t87nOfY3t7m8lk0gAvPvbCHxMvWPpt0FrT6/V4/PHHSZKkgSC2tra4c+cOR0dH7O3tsbq62oiyw+GQ9fX1Zr0+/mZ/f587d+4QhiGrq6sADRxx584dAM6fP984M3jHAjiJVijLsnFwGI1GHBwcsL+/z4ULF+j3+9y8eZMPP/yQo6MjptMpxhiiKGIwGDAcDnn22WfJsoy9vT3efvttLly4wObmJgsLC4RhyNHRER9++CE7OztcunSJxcXFZtvfeuutRojOsqxp45WVFZ544onGgaGqqgZQ2draahxMoiji9u3bTVQQnMTMFEVBFEUPQAVezK6qinfffZfpdEqe542Qu7y8zPLyMqurq03Mz+7uLtPplL29PT788EOGw2HTr3wf9GCDj7fw/TtNU7a3t5topYODA65fv85oNGrE/sXFRfr9PmEYkmUZd+/ebRwZPIQyHzXiv9Mfr+l0SpIkjetCu91+IKJnft89JOIdMo6Pj5lOp3Q6nebYeyBj3j3EA2UeFBiPxywtLTXrVUoxnU6b8cefsz4KZDgcNiL4PHTzcdVQ82OdB0F8REpRFE0/9MCX304P93hwJ8/zxlnC9/Vut0tZlo1bjHeb8DCKBye8q4uPncqyrBlHvWOFtZajo6PGTaPX6zXRRB648GNKHMccHBw0fcW7EBljODg4eMCZp9vtNufJPEx0eHjY9CMPHHnQYWlpqenbp4FB35Yehnnttdeadtje3mZvb6+JeHr66ad59tln0VozHo+5fv06X/3qVxvXHz9ebWxs8MQTT/DFL36R8XjMO++8ww9+8AMuXrzI8fEx4/GY0WjE5uYm9+/fZzQaNeOTh7vCMOTtt9/mww8/ZDQa8dJLL5HneRPZ4mGn+e3354Ifk0ejETs7O2xvb/PYY49x+fJlVlZWePLJJ/nKV77CdDple3ubF154gfX1dcIw5Pz581y5cqUBbeavZR7M8Y4tfhx/oDKu/rcHs+bjmXwk2HA4bFxojDEMBgMuXLjASy+9BNC4zKyvrzcQ4nwEEJzE3fh9/kQU+slbpBS0WxFRGBJGiiBQyHqC3VqLqQxWi7o4XdSCp0ZYhRSSQIESisq6CU8j6jxwY7ACgnqSW1qBNZUD0GWAkrErKJEBUtLEltazPyfnBBqLRklBEEhsFFLYCrSthU8ILFgFUhiEEiAsShiioEJgCIWl0lBq4wTUqiKfGUypUUoThIrQBBBIhAJKTSVckVKlDcZYqrJw4zwV6ILAakIEGPfcVWpFVRlX/GQsVZ5T5CVCRYRRjIyhMIKiyCmLkjhUSGtQJsBm3glWUWIZFxqZ5wRKEUcRSaxRQcrx8YzcGKAg6ZQsL/cJQweVxpEinWQcHxcsLg1IWglR5CKBt7bv8frbHxCogF6vw+rKCirAgT55i3vbB4zv7LO00qPTisFIjvaPODwY0e8tkGclsyxnabFHKxFESUQcJUhhCGNJ0nGuaWUhsVYzSTNcdERAXiUcHe6TFprFhT5JFKKUiyApiwyJJZAxRV5xdHCMKSzdbpukHZEXFVmuiSJBp71AYQpCZQlkRRQZFheWSIuU4SgFowhURF6kWCkZZyl5qYmSBC1hVpZUxonfeVYhA+liWUVFpCCJBZ12TLuTONvzXNNWAZKCbjtCWMPW7btM0pzeQpc4iYmUIo4TwjCi12m72FVjmYzGCKVRyj
KbHCNkSBS26A76zGYzZrMpqoad4shFvZkCpA3c68LNc9gwRAWJA5mMgypMlTkl0bpYmm47IZsIqrxAKReVUxpDGIXoIqfMXEGbLisEzvEmUJI0L9AiICtLispQakNkHeQkpSIIQldA5HJQsECgVA2zGFx2intdBiGqnt+0ihryqKEJ684JY3R93yWotCXNUyaTGZOJiyiqygJjK7CGUAW0QoVWDqeQUiCki7ABQ6hcpK60BmsFRkvcZbOGx/z8lqWOLhGY2m3Ii3EPSGnC1JCHwFt/WC+AcGKPbutzmx8yxXDbJywIH4WFOBGS5sazk7+5LTDWIIWsq86phWmLtg7gEFa4fTAaqzWVdbEvxug6yoBGrDH4auNmMHXQiT2R2awAreuiTOFihKgjsh6i6X2yfLL8RC7C/7cGD2StCls01fSIlXaL3MZQTGjFAVU6wooWqn8WwhZCGCoD+WgHJnfprTyGTZapjEEIg5kr9ZdWgtAIlHMks5LMtAiXn0OV26R7byL7l9HtDq2lxxB7b2JXXnChEcKgdYVQkjSd0RdgrcbKoFHE540/fpwg7F9zH51X7B8OeZxe58PE6YctPw4MmVsjfrydH4CEsDWIB1Y46FYpsEUK1Zgw6qCH72O7a4TWoMsU2TlLmW4hbU4Q9Gvh+sc/tziGwNb6m29PcwKkeBcpcQLZPHR//DUHD7e4UAmLi8GQqObzvh2lb8umJfx47q5JjZTzkGN8+pnQOZaAkQnjSlEVY4rMMjw8YtBrE0tLu+1cRPMoBtz1ORCaaaaxfU2kIlegKutrhr+W+oY69b1wAgU04KCYg1aEBy1P9Uv8/f3JOh/EY05+EfbBv9cXvZM12PkIGv+1HpxoQoNOtrEBM2jAEd8LPQxhkWgjUaoi6JxnUF0jJ8YSUVWahBA6IcXsLjq6gLEQklNxcn6c7OUpYOVjjuPpfnq6nU+//rDPflyRhrv2+974sMWcbN7cOubBuPm1NVQITrMWCApradXam8Gy3Is4Tn3sb4Wt71uNME6/FBAyJggFkQpIdeHuQYXT6OYOMliNSzgwzT5IIdBodmcBsVygTcpCMKawYzITgRXo3LKjOpzpT9gbSzIiDIqH3MT9/7z8lVHFdH1jXVYu+9Ti7D39RTVQgkCFCAeAI6wlFpKwtttUQeAskEzdeYU7US001J2sRdTRdMr+7g66qhrIxEhI4pCFpWWsCrDCT1a5G2dTlQz6fSaZod/rMpykrCSW1bNP8M6bb2OKkqLMKfOMNJ0xGh2TTkcEUtCKAp68fJaF5RXi9oBhpnnrnbe5eeM2oRKkpSGRkp//zKc5c+4CUrk7/6tvvMqzjz3OU//T/8wf/8evcOuD15hMxlx59DGi3hKVNty+dZNbd25T5CmtdgctciaTGVVVYLVmFkpEOiJs9znK64pHqxE4IW02GfP1195FFjP+/v/8f6W3tMzBwSFWl84RQCmyPOfbf/FNvvedb3M8zlhaW+P27et19qfluKo4Lkv3rFOfwN0gYGHtLLd3Rnz6U0+TV5p0OuX8hUv0+0vEcdvdGlUluiy4fOUcvdiSJK4qWkZLfPdoj+cefYTpbEInUvQXNjkeTRlnfd567zrtdoc8m5EXGUWeEidtoqTDbDoCTHP8BO7YO+GuJJ1MSWcZYaAY9HoIKTh/7gzLSwOmWUan0+J42OHM2iLTWcb+0RGXLlzgqSeusLK8RKu2sRQSlBAsLQzQZcnW9gH39g44Hh5TlCVplnH92nWy2Zhut007aTGezAhUwMbaOpsbG3QHy0ynU27e3eb5559n9/5NFldW6C0ssn5ekWcpt27s8c5bb5JEAYu9HkWRE4qAVhwxPtjl+GCH/eGIVn+BVjkjDJ2QamRFEsVMUs14VrgJIiWIlKJIC9585VU2Njfp9ntYY50lbn3B0sa5UQipEEqhpCIII1567jn+t299l/fef4/Lly+yvrToqGHhYpKy0RE/+LM/5/r1exyJV/ib/8f/M712h72sIHnuJX7mzDKD2R72XoebcUCaFuxozbWy4mmlCKyrcNoZHRDs3meQtGj3erTaHYLOAFFWhNmUsNK00hk9bVkUcDyb0CtLjoOITlRynGVMi5LcVBgcQObGAdyFQCps/TAsLQhbEUhFLARJnnP09psc372HeOZp2gs9yjLHYinSHKtc/JDQBqMrpAopja7jUdyEn9TOwtSikNograHSlZuk0tY9gJcVSgqwCmMEAhd9YhCg5ihfXVula4vRxtGLVuLnFxAeFzEYYUm1QUuBFO5BXYmCVDsrNqO1K/KRkjLLsHKAlILzyx1u7gwpq4pQKQIhCISiGwim05yDXFNJ5zailAPpKmOodMl0Mq7HaOksj7UgJ8B2+1RFxmQ8YjQ84n5tt18YgzW4cfUTAeAnepmHPeZt9fz/oygijuNGsGy323Q6neb988u8kOT/7V0mvHDo4168iD0PUwCN0OZFsdPb5qEC76zgxUcvMnsQxN8Ux3FMVVUMh0Pu3LnTRJ54WMQDIl4c8wKvlLJxI/Hf6d0AiqLg4OCgcWZYXFzk3LlzDAYD0jQly7LGYcJaF0/hQQAfheAr6ZMkYWFhgYsXL7K2toYQ4gFRXkpJp9NhY2OjieMoioJOp4PWuomJCIKgic7x+xMEAYPBgEuXLjEYDBpx3LtDzGYzhsNhE2Ewm81I07SBc7zQmGUZ29vbbG1tcebMGR555BGEEIzHY3Z3d9nd3W220bum+Gp23xeAB9rWH2t/LIMg4Nq1a7z//vtkWdZs+3g8ZjKZcHR0xPnz55vvvXbtWhON0Gq1ANjb2+PmzZvcv3+fxcVFpJTkec79+/f59re/zfLy8gPxBpPJhIODA9rtNk8//XTT9t454+DggDzPabfbCCE4Ojp64Lzx++mdG+YjSjwcUBQF165da4R/D4t4oOPSpUtcvHixgSFmsxn7+/tkWcbt27c5e/Yszz///AOxFx6w8G1YFAWj0YitrS2KomA8HrO1tdW4oFy4cIGzZ88yGAyaGJ+trS2Oj48bYKHT6XDx4sXG3cY7HHh3lHv37pHneeNOcebMmQZKmAdSfNv4iKWyLBtx3ccv3bhxg4ODA7rdbuNicfv27QZCaLfbDYCztbXVjDl+XLh582bjKLGwsNCATsfHx9y6datZb7fbZTAYNFDH/HF72ATXfBTKbDZr2sfvs49i8X87Pj7mkUceaY6HMaZxMcnznM3NTQ4PDxuQ6f79+41DSavV4sKFC0yn0wYuGQwGHB8fN/3s7Nmz9Pt9oiji/v37jVOOMYaNjQ02NzebsdSDP94R5/bt200kS7/f58qVK43j0v7+PuPxmFarxZkzZ3jhhRca1yUfYXP79u0H3DU6nQ5LS0s8+eSTPProow0w5/fbt9+8681oNOK73/1uA+AcHBw84CQzGo2ayCprLVtbW1y9erWJd+r1ehwcHLCzs8NsNuPzn/88s9mMO3fu8LWvfY3Lly+7iur6HJgHUJIk4cyZM42zSJIkfPDBB9y+fZuqqvjc5z5Ht9vl4OCAO3fu0O/3GzCr3W4345bfH
2MMx8fH7O/vM51OefTRR3nmmWcaKOXtt9/GGMP29nYD83k3Hh+FdbrffVyljx835uNo/LXJHycPB/mxz19LvYNJr9fjM5/5TBP3EwQBm5ubTcyLBz/8Ns0DLp9AHz+Zi5SCTjshCEJUKFFKIqStnS7AJa24wh7RRAJYJ5biKlKFoAHjDM4pU0bODUNJoK6iNNalYxsNWiikChHCYoRBCe0m2WuXT6kUKlBo42d5BUoF2MCCrTAl6Mo9ZxmrUbUjifDu1NLPPTk4pawMSit0adGVc6wo8gpBQRhIolBRTlPSIHDuJD7CUNSisqmIlEXKEmNzMAZdCIpCgwVT6Xq+SrpIHCGxUlHkGVmeY6REBjUwISVKBY23QIU734JWB6GdI4KtDHlZMp5MCUJBHAYs9DruOqorDvdnjI81SSdHG0vSabG4vILUmvF4QpbFxB1DVZX0FwaUFdy4fodbd/fYPJuxvr7I2soSSahYX1tAWMPoeESetwmVopV0ELYinRzRSlp0lhOmswn7BxlFpVFCkigY9DqsrrvrKbZ2atGaJG5hjaGVKORSHyRkpaEoC6qihh0RhCrCVDkycC6voVTIqEWmQRtLnIREsUJXFWHQpioqJqMpcRgzm07Iy9I5ycQhxkrSiaUqK0ptUIkTxyoMWgiysqAUYKTEmgIwLAwW3BgaBBhTOecM5QTxPM8IEO6Zv90CYegP2iRJjNEVhUkJlCadjrHWEIbOor/IMqJEoUsw2hIGAUHk1MVut4XVLmY2KwryvCRUAXk2oSpyKp3TiqJaPJFIpZybDMb1b+leF1a6pPIyI45D+oM+eVFQFO65IrSaskgxlcYrJeksdZEnRjs4yQjSUjNLMzp5QatTEQiBUC6WRQjpnHelAwakcI4RRoIgaGK9EW6SXluDqdwchqpVqKpyBXZlUdZzKG48KYqMKp9hyxRlKxBObHFOpBGqpbDSOTKbypKVpSsmNG5euLLaOUtL6ZxjbIiVddlgXdFrbV0JjAHjnTFs/ZqT4/CyVF045V2BToQrr3rVgp5g7j67XkWtB0oB5rTQKGrRZV6kq9dvrK3njkDiIdmTuBVjRAPNuHOrnmPXtTzoxWVhGmAEOEmRqTfaHSO/7tM6zlxhxyeX+E+WvwKLF3oFNNXqTnNSmHJGOTuiu9FmK5+irUbGA5LzL8C97yFkTKRa6CKj3HmXuD1Anfs0pRFIU0PcD7mf99+LdY5pgXCx4FVylu7qgGL/Onm6RLlwnvZSQTp8E7P0LFQQRDHVeIe4OiCJJaoco+MVJO6eyX6M2Nt878cUOPwwkPbxn52fw/hRnzktQD/s2Qa8YuH+U3+CZmSyToQXtaO+RWCq1DnDlVMsBSod0Vl6jNnRVaw5R9BfRZqc0kYoIbE2REucA/jctsyDICf/rr09bOMF5TG5U+DI6d/nxPvmHacRB7+uOuqwaQ8x99+5dvqhdVsejN/ycxY/PF8hBNjKYq1Ea0sUd+l2Oyyv9nn+uUeQJqSqBHlREiiJpcJUJU88ukEoNHfvp7xxM2VxUJIbDSI8mfd1X9LAjA/s31zfaPb4AUhEnpwSfi6Z+b+fwDMPAB7zu2znwBjh7gHc50+uxQ/2TX9x9tfyB/xFGhTE3bee2hcEykA7HCGDAFVJ8mCZPDuk1Hu0OgEqEMSlIreGQB8jqxlVqJBNX3p4NMvpdnvY8rDixo9b5uflf9x73d9PWseNf6LpifbUzYHvf/MNdLo/uJX6tSmEku52RGkGiebuUYhUlQOfaqBGWAHS0DH7FDpEqxgwxHZKqA/IxLIj8axz7nOZmxpqCPlkY8CiCEWF1SVjIoSMSFTFYi9leliidUo+URxkmpX+lO1xhTEPDNH/zZa/EuCHtRZbGbT7BUQt0gbK0XPCZR/mRUknilBKUhYapQxKOUvQytj6+Pq7UydGW2PQ2hJKhak11IOjIVk6ORmWhUWpgEF/QKe3gEBitcagmaUpo+Mh6WjIp557gbNnLqJthSxn9JeXePPaTcq6mleqwNlpFhn5bIq0hiBQnFsdsHn+nLPWCtu8+9abvPvBVayuCNoJxXjKZ59/iguXH61nG9zJZ6Tk2rVrrK8c83f/T3+TP/n2RX7wX/6I2Ztvc+HiBXaHI+5vbREoSbvTIo4CRtMpo4l7iJVCkBYVu3fvsLi0xBaQFiVlWSFlfQ5Yy5NPPMEXfu3X6CwssL9/CNagAheLM5lN+NbX/5z333+PUkQMBgkySlheXCCduKrF0TSlqqsr/c1KWuSksxF0F7l24w4/8+KzfON7rzE82KPbHVBWJZPphNF4gi5T7t69TSeJqbRmdWmACqZc2FylKnNaceRE9zwjkHBufYXt7V12DmfEScJ4NCQvSpaX16nKkqosUMK5oQT1xKqQMJmmzFKX92mEJWolyLhDKA0Xz51lbW2VvCyxlSVQElMZhFB8+vlneeKxS3TbHTepVUfiSOEcZPKiIo7bXLzY5fylx3n7vQ+4e/cWrXaHbq9Lmk7Z2TvAVgUbG2dJooTZJEWGEWlW8vgzL9Lvd9i/f4ur126SlyXnz53n8pVHuX/7OmEYsbKyxO3bd7hz5x69dsiTTz2JyTu899ar7OztcunxZ1hbW+Pu9XcZH+4zyjMm0xnnLpzn4pVH+bOvfYdrN265TPdWhAW2d3d4/533ePrZZwjCACXdQ7o7J+qJWqHqSSjASh5/4gneff8D0qLge99/lV/95V8kCpyIX85S3vjm13nlnWusXH6ShaVF3nv/AxbPX2FtfYWf+eyz7Lz7fe59eMDm5fOc2Tng+r0jcuAvpiMCMeBMS9KuLLYouTU8JJSScH+HRIX0kxZLy+v0Vs8RdXvI6YS4rIimE5KjPbrplHY6o1WWxEoxkhUTA5mFvL6JkvUDK1q7mzElwbhrgxKWyFoSaekIQ3K4w/DtkpUvfIGFi2cJAkGRV+RFTqU1ujLkRUZZaWbTGXmeUeYZOi+cdVdhKcuqtjg1WCMwpkTo2srYKmxVYYSm0LJud+NyUq1tIDZdU6qVNhiLq04RJxMHVui6SsVVWzg7z8JNyJQFWR6516wmCAWqqilcY0FFKCEYDDpcXuvz/t1d8hICI5CmZJRZjiqDSCIWo8BZImtNURkqI92YqivyyRCTtImTlouSCWOiTgtbFMzGY4oso9QWGSUsthLiMOLu1rbL+/tk+Yld5oGK+RtJLzB3u12KouDevXsI4ezmFxYWmrgF75xxOkYBHHThwYXpdMpwOEQIF5kBkGVZI7Sddu3woIUf5/z2eWjAC7heGPPwgBfOvNgVxzHD4ZCtrS12dna4cuVKExsC8IMf/ACggSKKomhEa++SMb9OrTXD4ZD9/X0GgwFPPfUUFy9e5OLFi027WWsZDofs7Ow0QibQxMW0Wi329vaaOBX/N1/BPx+dMB6PuXr1KtPplAsXLrC6usr6+jrtdpvDw0PyPMda27gT+CiBJEmI4xghBPfv3+fu3bt1BeKsWa+PWfARMwsLC2xsbPDX/tpfa+IZiqLggw8+YGtri/39fX7913+dxcVFhBBMp1Nu377N
dDpt3EmWl5cZjUaN0OndVXxMz6OPPto4OfziL/4iKysrJEnCdDrlP/yH/8B4PObSpUt8+tOf5umnn+a1117j7bff5q233uLy5cs8+eSTjUPGfERJkiQPAAC+Pb1TzcHBAU8++SSPPPIIGxsbTKdTvvGNb7Czs8Mbb7zBiy++2MAd3//+97l27Rqbm5s89thjbG5ucvv2bcqybMCh+Yev0/CU70/e+cI7grTbbfr9Pvv7+xweHrK9vd3E9oxGI46PjxsA5Pj4uBGAL1++3AAAUkqyLGvOz7IsmU6n3Llzhw8++IAbN24wHA7Z3t5GStnE9CRJgrWW7e1trl+/zp07dxqh38NCf+fv/B0+9alPce7cOXq9Hq+88grf+c53eP/999na2mr2p91u8xu/8Rv81E/9VBPxc3qiyEcW3b17l6985SvcvHmTnZ0drl+/zr179wB48cUXefLJJ+n1evzO7/wOL774Ir/6q7/Kc889x71793jllVf4wQ9+wC/+4i/y8ssvN/FFv//7v0+32+W5557jt37rt3jzzTf55je/ydtvv81HH31EURRcvHiRZ599ln/wD/5Bc+7689efcz4exo8dHvb64z/+Y77xjW/wwQcfsLi4yOrqKhcvXuSLX/win/nMZ7h16xbf/va3ee211/j1X/91nn/+eR577DHSNOXP//zPef/999nd3eW5557jlVde4caNG2xtbfG7v/u75HlOr9fjs5/9LF/60pe4ceMGH374IdeuXeNnf/ZnGzeLLMv48pe/3LTPv/yX/5J33nmH0WiEMYbDw0P+3t/7e3z+85/n6aefZjab8Sd/8ie8/vrrKOVc93yE0MHBAX//7/99RqMRH330Effu3WuAr/X1db785S/z8ssvs7i4iFKKo6MjvvKVr/Daa681TiDT6ZQgCHjiiSf4Z//sn3HmzBna7TZZljXjsof8/Niws7PDH/3RH2Gt5ezZs/z0T/8058+f5+233+batWu89957vPTSS1y5coVWq8WlS5f47d/+bTY2Njh37hwA//pf/2vefvttXn31VW7dukUQBLRaLaSUnDt3jr/+1/86V65codvt0ul0+Lf/9t/y+uuv0263+fKXv0wYho3zx87ODgcHB0gpWVlZ4dFHH22AOa01R0dHjdORvwZVVYVSijRNOTw85PDwkDAMeeyxx2i1WkwmkyYC6tatW82x9mDF/v4+BwcH9Pt9rLWMx+PmeuXf40FBH8fSarWaa4EHPfzY489X/+MBvVarRb/f5+joqHGTWVtb48KFC82Yce3atSbCx19nPdzi4UkP8Xyy/OQtUkqSGhKVSjSzq8Zo97xknOOhBdCGQCmCQCKMg0Mq7afJnWOFlSBVXUFP6aB9aQnqoh5Tw+PWWAdu4OJkKm2cy6sKEUIipHBOAnWUgTHO1VAGkoAAI7xcapzQ7KNnQumeiYW7P5RCo10IA0iNFoZKOvcPra2rdFcWYwvKokCXzmVUJAlIFx1sTYUtK4xVGFGiS+0iL7Qly0oH0xcFoXTiuJLKgScCtHBOBWVRUswyKjshjhIQEhVKup02cawIZICUCqUgbMVgLEWRE+WCqrRgBEEo6/uviiwvGY6nmPEUEYSUu0f0D0b0+w6axKbEWYGUYE1FJA3nz65QaAfXpJMZea9LGLZZWRkQKcF0OsMYcOETBixMi5LxNKU/6NBut4hiRWW0a4PKkFclW7uHrjDBKQ9EkaIVK1Qg6fUjLEtUpWWWleRpAaFExYpIgikrwlAhI0lZaoK4w7gUjMYTbJHTbYdEIYRKUmYV4/EEsOhKYDSEYYDUDgAYj2eIQGKrim43IUkC0umYAEUrcCbcSkrSyiBV6IQkGRCEFqNzyipA1ZP3s6xAqghjK1QUUZqcwWIPqRwcVZYVRarRRU4UuqKzKHLnQagSZ9Vf92VtNYYSW8d0zKYj9ywRBdReL8RxwExUUIQOiJEBYauFCAJsVdUVtxIpAlrtNqXRTPIZCFCBIm4lqDAky1OsrdA6A+vOVWxEoUvKcoqQLiY5DiNAkFeWWVqS5yUYTSAVgQqRQdgIbyJQzkVZKISVWOsAJ6VcBabWFcaU6MrUTqd1pbOtXGFdpSnKCm0s1gqKoqQoagczpQgjQdg8yymCMERKBXWkTC4qlNZYCaUx5KVzOJOBQqjIVb9LWYNpDwpx2BP3DlHPI9kaQHNFpcKBE1Y4dw3hxCJhbS0OWDDOGrypia3nS10xkitcdAkvbg5IzMEbXsR0m2LrqJcTCcq/5pWtpnrfWtwpaBpgzgMjLpFCYoQTRZ2LQG27jsDUBUjCCqeVCF85bpuCr/kvc8AKzXd8snyy/KQutccNMCe8OsUZAF0cE5ZHKHmeYrRLaDVh3KOUXUy8QDR5i6m5hKiG9M48Rxl0QFeo2vKnZstw/5ufmzoRqbVUCGMwykFameghNj5Ff3SbdPcNhouPM+hqZrtvI9ZeQFcG01pgUimS1gKdeAlhwUgxtzf/+5YfBW98rGvAQ8Tlj4MA5uH/xt2idgJthGNqKO6HvwhH0+FAR6OoyhlxZx3RWUFPD5hZibQJ4erjVMO7wBqBtAhyVNgCWWBqELf28nZAIOKBbfbatkt+EJ5hdNuLB1C8xPkgBHN6f08G9ZP1+kU23/mgkD7fnqfBlAe2kUZmbVrudLtjDcKxp4Shvx8X9PqrLC0/zfHhLpgUROWc8YzCGsl0krM4iOklLUIxwaj573xw3z4ONDhdSGNPbdtftnBA1lDIAxDHqe9r9tzWMJW/Tjff6t/7QAs1/3+Y68X89vnQgcLE2LJDYg4o5RKEii5HaN1CcMg4l9gqIAoCoEVlqxpx/9HLj2uPh8FFP+79P3qpj5E98dVpwKuTtTDfLz0s03z24yCVuU9ZI7FGkWcFySBmOhtR6sDfXrmTzAbEpLTlEZkZYKOYTnvH3UfJAd1YIfMtZqwSCImxAUIWDvqw5gGsylrrYF/rkAsjnT6Y5oIgGdCKjjBqQqhLJDHTSciFQcrhneK/y13OXwnwwxhDXhQux8xqpFQIFBZT0+EChMQKQSCVy1+0BhUEBIGkLAra9U24wT0QSFkb8klBIN3DvxBQlgXHR/sIWxJIT51J2q2EpZVVVBST5zmTyZiDw31GR4cU6Yx+OybpOPvKWS4wSF575yOyPEcg0GVFmedkacp0PMKYkkAJeknIlUfOubyrhTU+vHmb969epcxS1pf63D84Bgz7hwfs3L/DwtIyg8ECMoxcJi1wPBmh33mV/8PnPsW5s5v80f/793jrvfdA4DK0Akk6m3J3a5fdozFV5YUpSSDg6PCApcU1/K2KkhKpJO12i3Nnz/J3/scvsbyywsHhEGGps2olR8MjvvX1r7J1b4sg7JJQIIC41MRRDHFIpxURRgEHR+PaJtq4By0LBwd7rAUh+7pFcu8+n/v0c3zru6/S2d8mbrXQZYW2hrQouXdgOHd+jVmesXXziCubA4St2E+ntJKY48mMN9/6gMXls8StBGthOB7VVS+uSvrsExc5GB67i5YS5FnBrCoJw8DZZZdVfWF2jzULi8u
oKCZRFe045Mz6GlHccq0koSwKut0eyysrzGYpZZbXFQz1j5RIAUkUkcQRVipWzlxkZeM8X/vanxGGAaurK4yP9pHtDlGywqzQ3L1xle27N3jm2Rf5qZd/nsGgz/raMvt3rrK5voKM2uSV5dr1Wzz9zNMc7m2zu7ODsXDpymWqMmMyOeb1V++ws3ufssy5f/NdDu5dY3FxwPbWfT64ep2wM2BhZY1LK8u8/NkXOTw8YjQakwSSVhSQTWZ8eO0aVx6/grUxWlnnVCEcx6hUgBCWQEq0LqkqjRSSz332p/hfv/KnFGXBG2+t8aknryCw3H73Lb753dc4mFb88ss/y2NPP8OlK1fIDrc42rsLZcrC0iK7vTa9ssdyt8V1DrFCUvZ7/N5ohB2POdfL+JvxOS4Ll3NaCsFMlxyOcu4cD2nfv8Xi2iarz75E77MvE2KI/+LrhB++TbS9Q1yVJFjawnKcw6QSTK2hxPG4LieurgizzoZNSlU7ZGiUEbTigE4gaB0dsPeVr8AXvsC5lz5FOxZYOm4SwNaONYYTMbWqcFVkJWlekGWZm8wrC8qiIk0zsjwjzTL67YjxqKAoSwph0VZhtUZrdzuhjaCyAmMd+GZrSrYChNAIIR37KwRG+Mkyg7TOxMoYi7HGbauuHGgWKHotyHVJJdzkRmUMhVCc31hjd3uf/cKQhJI0LznWIOKQxSRyk2uubglrDYUxVKWm0GBVQLfXZ/3sJr12zNEko9eO6OqctSTkzPPPsBgFINyYWOUF/88/PXA5258sP7GLByge9poHP4IgaKrwvXvHwsICZ86caaz0fXW5F6viOG7iItbW1rh37x4HBwfs7e01gnwYhnQ6HRYXFxtoxAuI3kHA2/3PRztkWdaAGlLKB4SyPM9JkqSJqPCxK+PxmCzLWFxcZGNjo4kf8NEw3o3CQwRhGDav+/d5kbPT6RDHMQD37t1rIijW19c5Pj4mTVM6nQ6XL19mMBgQxzHWWsIwZDx21+EzZ87U9zGT5m+TyaT53itXrjTC4XA45L333uPmzZv0+33W19f56Z/+6ab9vJAdBAHdbrdxJhiPx2xvb/ONb3yjOT5KKZaXl5u4k6IoGgDFHzvfjt6p5O7du0wmE7Is49//+3+PtZY4jgnDkFu3bhHHMUtLS6Rp2oAo/jj6yRAP9QghSNO0cUbY2Nggz/PGQeWZZ57h8uXLbG5uopTi3LlzDIdDbty4wWQyaY4T0Bx332adTqfZft/3fJ/yrgdPPfVU4xyyt7fH9evXm6iHg4MDbt26xbvvvstjjz3GZz/7WR5//HGWl5e5evUqs9mMq1evPtBP/XniJzq8mBoEQRNp8cu//MuNc06322Vvb4833niDq1evcnh4CMDm5iYvvPACRVHw8z//82xubjZC/NmzZxsIB2iEWr/f/X6fwWCAd5S5cuUKTz75JKurqwghWFlZod/vc+3aNZRSLC0t8fTTT/OpT30KpRT7+/u8/fbbfPOb32zAojzPef3117l//z6tVovf+I3f4Ny5cxwfH3P//n3+/M//nHa7zRNPPNGI9B7yEUI0fXB9fZ1f+qVf4saNG020yC/90i8RhiFnz55lcXGxOadGoxG3b9/mZ37mZ9jf3+fatWu8/vrrJEnCM888QxzHpGnKG2+8wQsvvIDWmq2tLX7v936P7e1tWq0Wf/tv/22GwyHXr1/nlVde4c6dO/zDf/gPWVpaasY3P3ExP4HhnVv+1b/6V7zxxhtUVcWv//qvs7GxwXvvvcfVq1e5desWGxsbLC8v8/jjj/Of//N/5k//9E8pioLBYMDh4SGvvfYah4eHrK+v88wzzzTgW1VV/PzP/zxhGNJqtdjY2GBjY4M7d+6wu7vLH//xH3Pnzp0HHDgee+wxZrMZr776Km+88Qa/8Au/wKVLl1heXuZP/uRPeOWVVxq3HaVUAxNtbm7yq7/6qywtLVFVFV//+tf5wz/8Q86ePcszzzzDb/3Wb1EUBVevXuW1117jL/7iLxp4oqoq/s2/+TdsbW3x9NNP87f+1t/izJkzvP/++7z11lt89atf5d133236pW9PD9UAzTnZ6/VYWVnh3LlzvPTSS3zpS19icXGRd999lx/84Af8wR/8AdevX2d1dZV+v8+ZM2dYXFxke3ubP/uzP6OqKt58880GUvJORgDdbpcvfelLPPbYY/R6vQaAms1mTeTNaDRqIm7CMOSJJ55gd3eXV199lX/6T/8pjzzyCBcuXOCxxx57oL0mk0kTD+PHLh8rlqYp0+m0gRo9oDgcDhuIcWlpqYmEstbSbreZTqfNOO373ek4GT9mFUXR9JvZbEYURbRarebaOxgM6Ha7D8A9aZpSFAWf+9znuHPnDm+99Rb/4l/8C77whS80cUNHR0f89m//dhPVdfqa/4nTx0/2IoSoY0gVqHpKVFuscfbk1joo3VrnrGG0xRiIhMRIhRHuWc85bjjgAYVLecBFS0qEq76UCiEsoobjjTWg3fO9Fba+Xor6GdbDJAFCGCyuTEt4gdcDeoGboFdKEoYRQRQghHKT3ghkoFFVhdIVgTUY92hFpSuqSruCJQFWl2AsWhtXoCQqEqXQ1lDUsS0+0M0a4+J3AWkF3UCiZURprINJKoMKJIEKnHMBgqiqmGYFWVZgixlhEKBQ2AwQLcJOQtKKESIkUCCEQZuIOE/I0xJrLMIFoSJURNKK6ZQlpalcrEsrYTZNEaYkDiLSbEY+y+l22lhhsRgiJdDGgZ0FIel4SiDcIWvFAXE8oKosWVkBGqUi4nbMLM2dcIUgDEICKyGMUGFCUVTsD4coa6GqiCLBYtwjKy3He4cMhyNaSYskaTOZ5UynM9qthIWFLkkEpiyoioqj0YTjSUacWLIsI4kUSwsDijxF6xIbhcSBu78si4KqyAiDiFmeU1lJmWrS6YzFpWXuT3JUBW0LoYooCk1lFIaAqiyJY0m3HdCOJGWVEQSSJA5ddI1SLlJEu4ILbUrneEE9cV7mFLmhLCqKoqKTdBFoSl1SlIY46iKCkLAVuigZJV1krAgxJnARRGELKYzbL2uRIiCSIWEUu3k+62MBy6YIRQYRMohARhhc/w+jBBmEc+K/YTJRiMqdgFIKglBQ5iVau1idtCjBVnQ7CkSANlAUmiLX6LJ2NcUBAkqFtQLgBTMX9YLR7tw1FRZDWRVoXWK0Jq8qjHEx3ODGDLcugTEVeVYymrjIZqctKP6/7P1ZkGVJft6J/Xw5y93jxp77WllLZlVXV1WvaKLBJoFucEDIBJJGYWQ2HA4pkXqQmZ5Go5HpjZJs9CoaTfNAo4wzwBhnZCQIUgTQANlooBdUL1VdXV175b5ERsZ693sWd9eDHz9xIyuryTGSD+ouN8uMiLucxY+v/+/7fx/4PiiQKCRF6ft7JByRUkggjjQKHw8pTOHJECKpyGoKpSrFjyre4qqszkD6OFK1sFVOLhXxwx2xLSqSRpDa9+OSf9tWBwzkCK8QcgQEQkXoqOJwH4ssuMeSVhcRwhqYqa65UkGiTozEf1n59bSqwBkpfSbskUquCEiLFx
EJAX3h5FBJuZGER01kufe/J52CfZGHb045e3SpC4AHo6a2iMCRsXF+qlEKpBqYaqqZAqAa2QMkFJn2WfJikCb3UjRdgbSeWtjJ3AEQHjYFVnDdJZomqIbBUdJdbYsz2jkt5yBkf0AZdK4qTycSa/acc2eNKMFOQ9D6pb4V8rpdAqEGSFAGlCoDkoKAgNtcYI7ZOqEofOfJxNZzXlqqCqS5T0gLPUEpVKpPXaKMNhD4dkuliyt/eQRKf0en2kEGyujdnZXAMMvf4ApTKyQcaN29c5OjykCGTeNMuo6oZBrpAypakrZvMFSjU0UnM6nVGs5mytD1gbpsg8QxnHMMnYP56gtSRxmvl85dVv1/qcnh6yvtYD21A1NSC8uuramOOjKYvlitH6CKUk2ilcX1IsCpyzjPopWkmSLCHNB0yPZhSlY7qsWFjJtKwQtuH61hqXBopxFojtOsW6FXXtvE20cCRpQl03JIkgSRVpP0M6QeMM0CClo5drptM5ZeltqJV03p41ERTzAifA2ZKmsUidkAmFQFJWJWW5Is9zhIDZ9BidpDizxJgSlY2QMsM2lizVrIoZOklorAoKoxZDQVksqOoVVbmkLGqaxnumL1c1TeWTU3Sq0akiVQmDfuatZGuDE4I0S1AhZpblKc4p5vOKujFefSJLqK3F1AalU6RzWNuEvimwwtKYBpzzdgjOUtZz6qZEJw6dZCgtqE3t18e1w2kwpsHU/nltnEXrFHSGzvqIpAcqw0vP45/pEHeR+BiIV9fw44pPPDwbG7zcUMdG0/r3nDPnSBqR/BEBShHJNtInAkaQUkYFD9phEUVYS9mgvBvARY9vekJJjB61gSEhEEog0CD8+OQtbRJ0kqBU+FtrlEpIVIKSCqU1UsmgnKv8MUQEewJ5jQjwtihd+/oMkKKNXUXAsgtlPi1Py+ettAB7DGhicaamn8HGTs7qkVcgyrOEXAtOT07Ih1fQtcFJjU43UPkmYkORSoUpVkwefp/+wU9JhxuIXLB3/DGz0yNGu1fp2znZ2vPYbEixmJONrmEe/mvEzpf8/GhrP0iYBoXEJgMcA6SpoTyhmn6ErApsNadWCik0tWtQKBBeQcwHeJ+0j/Pvf5pYYds4cnw/gsntLwPx4Iz8Eca7QGgQrgO8W58Y6PfonozXqrJ16iJCIPmM5KDCrYijY1AoEgIR1kBNsyBZ28XUJUiNlJkfv5z1KnUIzOI+STamWqQMtGQ622M4vIxTEmGNP7IISZWRNByqENdyrXW7i3NAi4y08Xt7HrOPjRfIg52ZpDMWl9YiLJ7EEhQ0/MwU1rsiEKQJigounifMRkGxxEVQ3noVkyd0bAQq2MiDM0s+fGwZZfCC0p7gIrVPbJde0VtIG4iHvm32J3BvcY1mNSGXS3S+iXMe0439y7norHB2rc657uV3q9Re40X1ySeViwQQ5yLRwd+FJxFDzpQoPt0m5wg0F95rCT0u7iO61j4KKUpceh2RjMFYrHL+PiqJauY4NFYOgRW1HCBEhs1S5PQxCIG0nlhpz5+eqIbjhGgfWYG9MPe6c/+dPUcXk5/ObHa619U+5q77PufaqW0/QAVSSDyP9I/JWR8NZ/eYfXfFBEqc3XohwThJY6GmRgi4tb1iQwve2e/x6vM5b35oyHWfvltxMBsihca4PqOsZoNDhtog6kN0toWgJldLKI8oS0nWd/RVn4f7GqUEqS4ZJo7GORojMEBjBAcTxclScv2G5tGB4KDOGcoCoQUlY1xIAPirLr8UxA8E6CBhaI0iSj8ppajDhtsPrAJjmsAkkkGazi+ijTXQ2GCr4DuLrB1Jo8nTBJn4ptRSggyZkMJnpXgZzAZnGgRecUQphSDHhCyLui4oVgtmJ0dgDUornBEkSUpVFi2ZxFmo6gYhFKYqyHp9UhVtafweo3aOk5MTbiaeyNLr97nTHzHMBFVZcXx6yv7BEQ+Oj7i3v0cvTennffI0QYUMFCkcQickWQ/hYHt3h82dSwjnSJREaA3OkrgGaS1KGZyB+x/fI817jNeGvHjnJi/euc29r7/Gn37v+zx4/2fofMi8qJienJAnisO9PWZHe1hTU9eWR48e89UvvkByeZfldMZgbYOqWDA5LMmyHv21TZJ8jE4TokeokDJ4qDU0Zc1iPqMKoJSQirK2lFVDcThhcnpCtZxwaesyzTMv8vF7b/LC7Rt8/DBnY3OT44OHvP/Oj5nOJphyjnBNmwmTJpobV3bBVljb4DiT8n/2uedJ8xwpBP00evsKyrLi4OF9Pnjvfe4/fEhR1VjnKJZLbF2yPhqwtbXB5sbIt2vYuPV7Ob084fXXv8JiPuGD996n1x9SLabce/dNHj54yLJW3H00pyoWFLNjvvilLyKV92NN+iM++uhD5pNTNCXu0hVsU3Ll6nW0ltRG8PLuLicHj3j3pz+ml3ryTFOXHBzCbHqKA7I8I6+9ast777zH1auXMGWJzrfJ+z2mhw85Pp14X2Ar6OUZlzaGvn2cD3IZa3CuQQegSaAQWlIb79MpOAP5RZCmNHUNwVc5bmIFjjSQk+L8HTfA0jX0VcP2tcvce/9tPvn4A374g+9x6/ZNrt+6w1s/eoPv3nuXxeyETz76gPuPT9i5/UWy+Yz9w8dUe3Our2f0s4SXr23xg0/22Bhm7PQVOLCBjLA9yvnW66/zL//oj8l6OavZhLvv/ITx+gbXbt3mv//H/18+uXeP//Tv/l1+7Ru/zve++2ccT6ZopaiqFbP5kt7mJTLhZTSl0mxurPPMzRuM8oT5C8/x4NFjTianHO0/4vHeHjrRVMaATEiynMViyTgPwLYVYBzGitafTxIdFTtsx9BgMiyaTVgAusj6EAqB9Rko4bfRQ9AfxAXhT09iapxfGCqEV9+QIcO7ndAdOvpsO4eQ4fdhjeAJbwqpJc60sQUveSc8Cc84b4GFc96vsqlDsFWRKoWQPhPNBEapk2HM9qsGHH8dU+bT8n+kEpUXLqp+dO0KuoB6VFyIKhBRaSMu+CMxpLs4jySKSA5I09Rnw4VMp/X1daSU9Pt9er0edV1zcnLCcrmk1+sxGo3Y29tjuVxirWVzcxMpJWVZ8vjxYyaTSat8MZvNkFKysbGBEN5eZrVacXp6ymQy8apaHbJFnuecnJwwm80oy5Jer3dOBWJra6u1fohWI1FxYX9/nzzPGQ6HLdFiuVy2dglra2ttm5zPfjizPIgEimjV0lXO6LZrJA/Udc3BwQHT6ZSNjQ12d3dbmxyfbeq/F5VOoppHJCp0LVsigaOr6NHtD936dDd0F0H62Ae6ljLOOdI0PUcM6hIgIkGgq9xy8bOLQHW8J865cyopsU7denbbOypxTKdT9vf3OTg4YGdnp23XqqpaAodzriVmRKWSSI6Kz0H8blfxorvZ7RIX4r2ObRqvq2sR0b3uJ11/9zfdZ+7i9XbLk457sVxs48/6+2LfjdfY7QefdZ5fREy5eB0Xr+VJbXSREHKxrp91Df8u5WJdP4ug8fPIH0/6Xff9X3Sv/iqILH8dhJi/zvJvQ1L5D+m6npan5X+v4vc
ecW4yPo6jvMKBbQTO+iQdT8LwyTSNdVhh0Yl/xnyyRlBblZ5EIMJxfLzG2+XG9DBnGy8LHjZK1nqifGMc0fhFKeljSyGuJIVPBjHGYF3I9rcNwgbFRHyWv0g0Aouxzl+PMWAbrAkkDpWCtgjTIJRBkiG1RCofTJRKkchAhA3B+xCa9SCK8woKnnxifCA0KAnIsEfDOYwNpFD8/ltKgXPBxib8xjjnr0MIUN7yw8fTfNu4AJ5rpQGJSjRSaZz0IG7iBEYImsZhjMPzbVKk1JjGUtZ+nZWkmn5PMlx3NKbGNcaDKMLhbElVF9RN5ckgSqOMY63f9zGDpsIiWC1mDPo9xmsDskxTlgum0xMWixUSx/raCKU1TQ3T+Qz6kkQlLBYV09MJ1tRsbG2yOx4g11OgwWFIkh7SVTjj2NraoawKTk+OEaYBU2EtTObe/nG81keHJIXhYJ35fI61BdeuXULplOnk1FsSS6io6WWKfpb4e2JhVay8rH9VgGkYKsd2Dr31HqPcsD3uIQgkHgdKJxRVReUEi5WhsQvKomA0TKkri9ZTsjwnTTV1bcCVpGnKIIt9CbT2qjFKSfqjHCEUZV2wKlckOqU/GnoCEtBLkpB84ddyDkNTl6TOYhuDSB2NbRCJJ01JYRGmwjqvkiOFT/qoygZhJdIJVvMljbHoNMfaijT3SVQx+a2X55SVQSmDTiR57q0EjLMsy5JES/L+AGssZVnT1I5eniCC5WVjLKZJEMJ725dVRdNYautoaotQPu4JmjxNSdOM2jTYuvKKH6WlkTXWGBrTeFJKkqHTHPQQmY0gyUH6GFdE3zycFwkfhCzTOgA9vn2sNS3HwokzNQ8nHE5E6f2QPR8BGnG2dhdSYmVINpQKJ2U4Rlxb+JHTg3UugHJecToCM4BXqW5/5ZWPor4Qwh9bq2AHqrQnd0mF1P61kgmJkmjhrWBkIIuJUEcRiCqSTxOvrZIhacjHntps77iWEqE9nAcwn66bnpZfnuJjnJUtWEwO+fiDiqXYoCocJY7KVpj5KUal1DohmUosAoPD1hXl0Qf0sjE7W9coLz2LRWIt5OkHlCf7FKcPMfkGaZqRCI1IFbXsk974NVb330BvvEIyWsc1EjI/tAliwrHAjW+ipCMREnv/O9Snxz6+25LUHAjboWWIzwTBz6PfBKu7T3+vBXFbwPcMRT4jzQCdPStCoAJBASvDV85bk7R1Ole3Lskivu9/LaUFlWFdRW90k7qcgpXk+RjjwEkBlLj5DKUHiP4YcWKxWExZYPsaKRpwCim6STa+bkrI9lrjWaP6R5wpbLg2IcOePAa/Xdc2LF7SWf3PyAgdKP/C3j9+2NquBJt2CCrcndZ3hO+1OMBn25zIoP5UlTPuf/whuxs5nZkvCj91CAKe9OCEQFYFxdGHVMsp6dr1lvDi4YJYk4sl3L9IHAyYUuwXbat8RkwrftaNO/nXwu8ZOr2oS+Q433fjXPsZc1dkPUBY3539Cj8rEo/ocR2LlRnOFlg7JVHjgLcYRGNJLax0DhiU1NDUGAymXvljBoI3rqO4dbFKZ9Nvew3++i5W/dP3umt7fvHzz0qIsgTSUcCc4rkFYf6PtXRxIXT++e0SRVqCSaxTeN86hZUOR81AKK5cmXK6yPnRoWKQO6YnJXXjqCzcHCRIMWPVSBIBgoJqVbFclGTrAwSShpRl1aOXr0j0grIWkAh0L+W4GCCNomlqFEnAqgDntTzqWnL/IOH2TsHDo5JVDSOhqDt2vX/V5ZeC+OHlOiVJIrHaKzFI4X1elTUtaCBEsBSQSei0EawMm/wwAHrJLUGUVVzVDdoYz5pXCqn8OZ3wG+KyMNjGBNnLsPi2xm/RrWcDnk7nrCtFY5rgs2zwQqS2s9nXIKAylspYrGkYr62TaC+p5/BB+sYJ7j98yCt1yWhjh5VI+eCdv+Dy5UtkyZLxsMe1yzuUVcV0tuDg6ITDwyP2T/dRpiLVCcpZMiXItQ9sLE8fMz85YLi1i1IZeZ6RCEGWWtI0QyWKpioRxZy6rjicTchOB6T9IaM04Xf/xjcRv/nrnBwfcnx0yLIouHdvj6ODh1RVgROS2lgO5hX/07/8Nn/jV1/BJAMGxrP1rTGsFjOqcsVgXDDavIRMM0++aGrQfkOZZhlCSjIbrHOQwc9NYpua0Ree44/+7M/4k3/zL7h++0Wy8S6Fk3zhldd5vLfHyeSU1XIZsmYbVrMJ0hmkVpjC8fG9e5RVzebmDsXyHo2p2R5vMxiNWZSWxjXkWpJoSV2V/PCNN3i8d0DWH3L1+g3KYoUQXka+bmrmsxkf3nvI44fw4vN3WNvaYDwaMuinOFtTLgueeeZZTg+PeOHZ2zx+vM/JomT31svc3zvm4Yd36aeOd95/j9vPPsuon7BarZgcH/PRJ59gVxNcvWDz/l1syHC+fuM5VH+DomoYb9QMxuvsPbrHpWvXEUL4rHMpaFYLelnGYtlQI1jVjumiYDmZ8+VffZ3xcMB7b+0hhWGQaRZVwe76iFSCbUrP2pT++amdn7SsaXBWBY9O6SmqQvjNo5LeokRADaRpkJsNhKbWW1kKbx3izibBJE2YHDwkNdtM5yuOFwt+9MMf8vIXv8w3f+118rzHcjnlV772VUy5ZDZbsn+6x+budW7euMHPfvoTfvJoxtWRZmM84NJgwnv3Dxg9u0NPBQUKAc3imJu3X+Xm5V3uHRyjs4zDo0Pe/LNv84WvvsaNS9v86z/8lxwe7PP3/6v/M7/2jW/yxve+y97RASLtcePWZcrlhOnJCZVI0Erz8cf3cNZx5/pVsjxje3vDB2PC+PBw/4jxUFDWFqRkaSUjqQmcFJ8xZn2g0U+KfmySUgR1DdBh8eQi+SIsYH1gTwARJPXB08g8jjYyDp8pYqxfeGkViToCaQMFp7t4JORyhEWiDYvRxoJxBu0cxjnQnrR1BpiFxaYUCGMx1qJCv3BWBelSQ9lEwP4MzHVWYIVDCj9+49wTljFPy+etRHC8C+S2Y2xQ7uiSA6KVRSQ1REWHSEBwztHv94lkibt377K1tUWe5/T7fdbX11s1kCRJuHLlCuPxmCzL6Pf77O3tMZvNSJKEmzdvcufOHYbDIQcHBxwfH7NardpjP378mOFw2BIg6rrmo48+aokXQgim0yn37t3j448/Zn19/dwCOs/zlkhRFEWr4hCVS1566SWuXr1KlmV85zvfoSiKlrxy//591tbWWgJGlmUcHBy0ViI7OztcunSJra0tdnZ2WK1W59ZKQGv9sbm52ZJd4n9JktDv9zHGkOd5ew8WiwWj0Yjbt29z584dsixriSSRzNAlRkQVk9jeSZK0ihaRBNJV0Giapr2vsb7xuMfHx9y4cYPBYNCqVkSFD2stdV2350rTFCklVVVRFAVCCAaDQXvt0RonEmmigsqTiAxRtWO5XLakEufcOZUTIXymrLWW5XLZ2r1I6dWz3n33Xd577z3u37/PCy+80Fq/CCFask6XeOOca/tpPEe0D1osFi3ZqHvueD2RuBOvMyp6GGPa/hPbPrZNmqafsnq5SMCKljBdol
aXfNL99+JG8bPKX4Z80H3/SSSgn/e7i2SH7vvd38fjXjzOxevoXvuTzvlZ5Iy/TOnW5SJR5d+VPPGk7JUu6eNJJJuL9fms1/+29fjLlqegwNPytHweS4eE4OzZPtAZrDE449U9fMTG72kaa9AOT1YQGlq7SsLaQCCE8tmJwqtcIBUK5wGOQJZwzvugx3MbY/GS1WeBX4EgwccihFDYGLxHU0vjVWEdIG27B/NuDi4oYgQwwllcUyMcyMR5tUYSZLRaCHOkdd7q8gzcONtD+bHYBkVGT2DxEscGE5MqQizKKxFIBGc2MzFo7VzcT3rLBin8ntOZGDQ2wdfaYl20ddMo7dUBCElJSmm00hhrve2x8ZY5SaqROglr9RpjapwzSC3ROsWYBmsbEq3JekOKZUFdGhItwRjmqyXWGpRKgtVfDsKxWhUY21BXNacnc+q6YTTqeTJMIPGsjXukeU5ZlEjlyHqaJO0hlWI6nbO1OSRLByyWK1Z1QZJoRCIpqhnCKfqDPgJBbzDi5OQUnab0shyJQ+JJxVWxQkvB9vYWVQWP904olgu0hkwpBj1NIj1RxzYNDQacRWlN3u+zKmfsjnqMEsn62hBrGkQCWdbD1jWurlACMi2QxmKURaBZVZCUDi1qXC7pq4w0U2S9IWVRUFQlKiS4CesoFgV1UzIYDqmMYWNjk37Sp65qmrqhWBXopEElCVmSYCqDaSx5r4cxkERFC9sgXEOSJnjFVE8QMk2NcRUIgTElVlisMRRFibOeENTUDdaVJFlKmqfYyZzlokansFpVLFYl/X6PYT9HqaBkmiiktdgadKKRqaKuGqxpaBpItEIpjZSOWhiqxtDUDU3VgBAk6mwd5222vcJo0zQY6/sexis+N43BmAapJFolJFkfmQwRyQidDkAl3jZAiKCa6skUfjwyAcFwmAhOWQfxfUKwK8bKIsOjTeM+Q2BiTFvEsUBHRV2fDOdBLh9BjmSSCMgBGAwhooZ11qt/uGDjFIhuDk/YgJgwJNv+opVCK0WitE/o0v55V8qTQKSKikrePsATxc7W9622cVyjdsYbGcBSF2r4aeJ0iHE9XeM9LZ/jco5Y7xwaQVEsqcoCkWzSyB0cb9IbXMOtv4A8PCbbfNYnoI63cIsFq8P36NuS3s1vYEbrNNY/p25xRLX3Dmsb13A3v0GmU1SzR3X4iGbwDHo0xlkwQpHd+HWa+9+nHnwd3Rv6KZ8YI8bHf6VAWrDar4m8dUkAXGUgepyLxjrO7J1aWPeMXdCSvc6Q78/a033q3e7eumVAdPaP7TgYFIUcCBnB/C6xw1448MWkjTBvGHBKYwtHMsoQyZjF/luUOiHNc4y1iHqJ0EA2QjmFFTXSVJD2cUpAA0qIQBKM1+Ha2vi13tm6TlzYCwt5Rsbo9p9WJaR7FRf2522coGXSnAftz+IIjkRrjLPUjfHH7hAVuvfCcaH/Esfus2tQQlJbi1aWjfURd25dYjwaIQMmFMd+EYb6GFuTUqB7GaPxVaZJhriQIHxGLDofR/Hn9u3snLdCEZ+RqPOkOMfF4hVRouIf577ftoI7axXX9jk8IaclqcTffuoU5+/pE/ksBqyBbBO9+gSrDCYfo5wmcaesktz3byUwIodmgpQNQufIZAnC+n2P9fuKc0/oBQKQiGuUTn3P6n++vZ7Ulr8oLhYPGsk47uyts+uOw0Nn+dDSvjpjhuPTT++5Ii2i0eR5xvZIsH9oUHLFTi7ZWpccTyuGqQILD+eSrVGOXM45LAcY+oztKTiYFRIt1nE4rq6vyJIt7j9U5H1FbYcYV7CdLSgrw8r2aYRASIs0BicUVgi0sRRCcncv59bVJUdHDltqDNXPu4J/r/JLQfxwztI0BegEgfQLSwS1aRBBVtsFcMCFBa8PmoeHCjy5IkhYehazCvKcJrznpfAs1kOeYVOvlcRpCaKmMX7jH7PdUQKVeOUKoRWnxyeYxiCF8ABr2JhLoUiThDTNkEJRNoayKFktZmysXyfNcpIkoVoKauO9bX/28QO+ebBP09/iH//j/ycvXNrg9u1bFFiEc+jEkaU5g36fSzubNPUtpssVR8enHOwfMp+esCxXJLVAI8h0A24fs1qydukKpVpj0TiSStLvOTANvUSTJoIkTzAITF1hVgtqY9m+cgmVJEg2GfZ7WOf4+P23MU3J7ZvX+a3f+zv8m+/+iEcfvM/u1gbvPTjh8qWMS5dHpGlCVdc0dYV1ksVsAs7QG2+h0z7OWWTYfCRKe2KAEDSmQTrIsx5VXSPCRP+rX/86H3zwAT964094+ZWv8swLX+Ltt3/C44efkKQpq2KFC2ChDGSe4aDPYrnkzq1bXLlyncODg9bqpT8YcDqd42SK6eeQJ1SV4Y+//W+YL1asb25SFqXfjIZM2aapqcuSLEm4dOkS88Wc77/5Fl968Tkuf+k5MgXzoubgZMrxdEXaH3PvwQOqyqKE4rd+4zc5ns1488e73H3vPW7duc3161cZj3rs7+/xB//0f+DSlRtsrI+pjeWTe59QrlboJON4tmJjfZ0bt5+lP9rg5S9/lXff+iEnB/usrW/S1DVNmrJ9+Tp7e0ccn84xzpFIyenJKTLd5IVnn2U1P+Ho9JA8AdcAzjLMUj8FO+tlI60F6QNtNrCenFE46yW8kpC95Lw2FVYob6ujoZ9n/rmyjjzVaClZ1qbNVlJxYQn08xRbzvnwg2PmtcE4x4N7H/LTn7zJCy88z7Xrt1jMJxwdHKKVplgWGCtZzKdMTo958eUv8fDePT5+eJ9JpbhxeYc3P3jAJ0crXrg0RslILnQsD+/z2ldf5d4//QNENqAxJbPZlLd++H1e/sLLvPvhx/zkzR8wm5zyD/7r/5qvfv1rfP+N77F/dEzdVKxKg8pH9IVjNF6jKFbcu/cJSsKVjRHYBuEMw8EQt7PDYrFg/3hKMx6QOA9qOqmonUEHjU6lvOwX1qusuM6Cl0iOkGER5EC5yNSNG2mHX797r0NNkHQLxA3nvPqGDFkl+NvtlUOcxYRFjghSoUoBMeNeeL/e2vrApgo+1AjhbYWsQwjtxz0XM/jCdyWeLORC4KCzwPOZab4v+Hr467HCB87+Q8sSflr+7UoEzi8CyHExGS1KwKstREA/gvBKqRbEVkq1NifOOba2tqjrmslk0tqTRAuWCMZHckFUijDGsFqtODk5QUrJaDRiZ2eHsizZ3NzEOdeSNMATDrIs44UXXmBzc5N+v89yueTRo0ftdURQPk1T1tbW+MpXvsJoNGrPWVVVSxDp9/skScJiseD4+Jj333+fo6Mjtre3GQ6H1HXNw4cPGY1G7O7usra2RtM0fPjhh/zwhz9kd3eXfr/fEjQePnzI1atXuXXrFuvr6y1wH+tlrSXPc3q9HlVV8cYbb/DgwQNOTk5aVYnhcMiVK1f4vd/7PT744APu3r3L3bt3mc/nVFXFO++8Q57nvP76663CyGw2I5J5omrFJ598wvHxcUuoifc9TVOuXLnSElfKsuTo6Ki1kCmKgtlsRr/fZzgcMhqNADg+Pm5VWHZ3dxkMBmRZ1ipSF
EXB8fEx169fb1VNogXNYrFoyQ55np+zuonWMfEYaZq217BcLtu+u1gsyLKsJW9EEkkkXkR1jq4FUStNH0pZli1ZYzweI4SgqiqapqHX67WEFPDjZdM0Lcklz3PyPG/VYKI1znK5bO1Y4ka73++317lardpxNdYpEnW01qxWq5ZQEi2WLhIPLiqx/DyiRldl5WK5uGF/EuEmfnbx+38V80I3aHLxmN1rfFL5rLnpf48567PO1w3U/GXL0zn3aXlanpa/juLwwdqo2uEJGLTgqbU2AM/Wk9dbtTeHNSnCuWATGYgVxFiPj/u0dgPCIaVu8/JEOI/Ex2Ic3Tq4QFAFJfx86+K8KwNgKr2tiwsKAA6Da4IcswOBwjmJsd56WDqLEA5JgxE1utG4JPWJSI1B6oa6ahCq8KqLSgTQVftANmd7tLh3IwCm1jmMNVgMPsfizI7OuTP1OxGyTKXSoP1rGVQEhIsJS9EOxp7djwhOBzUVGexwPMHEzzVaOZQMpNxAyNHaz99J4gm6ZVlRlSUVVUgycCyWK7T0hNxytaQovY3qoJ9TlCuqoiBRkPdytE7ASbROkChGY3/cwagf1mR9amMpypLZ0tt4CFIS7WiqmnI1ZdjvMTmZeSU5nZKmmjTxSii5TlgtCjKt6Q0GLIuKsqypiyUqz8myQD5ZLFCqx8bGJoenEz786B6z2ZLhYIg1jrWNnETIoPKrWRQV0jh6vQwlLUni0KqPVJrVSnornKTHsixRePXguqk9uUHgSUgIklSjtQQki7KmnyQURY2zjuFggHSOujKM1oc0NL6PWklVOHoGZGMpZzNUlnlFm0TSuAacoC4aNATSlcChUCoBmVIUJf3xAG925EkSSgqs8YSepimwpqIuFgjrsKZCYGhwweImxzRhj5EITFPTmAZbe/ViKRxVtaKqU3q6h1IpSZpgHdRlQ1nW5JlXnWiUpFiuqKRGSEXT+HFBxiQBBNaClF65QiJompomxGDrRmOdpa4MwnmLWhuSwJIsI80HqGyESP1/SnmVlIjgWGU9vuOCanS0WfLMBj9WGU8IEc74ccxaCAmDSHz82flYiZSujX0oIRDheZchfoxUgfjhSRoixHJcF4S13tpcCYVzQe06kuhw7Zjnh46zTFqfna2C0ocn0iiVoKVGS40U/l+lFKq10dJES+eoRCJlBC5FC9DGIgLvJQJMHtQJ5DMRFALaGlqerjKfls97OSMwCEwiMasJAslwsM2jgxkC0HkfUZVh3rZUyynl/s/oKcvG7guUvUs4s0IhMGbF4uF7aGEY3ngVyEFC5UqE3iG9uoWavs987z7ZxkvIBFTWw13/Gvbh9xGXv4LLB6gWgA1rjTCuSSextfEx3qD4YYVDOdUqH8XrghhTb/+KL8LBiayHz2idM6V+v9Xujl5tC55ry7NzROsJ2/k97fvOnVmd+Hd9fFvFY3TRaOGQsgEzpVok5IME2ZTY6UOqRJJor2Cl+pcRTUF5eh9dLkk2rsHBPb8u1V5dznP9zuL2ztrWQkWIjo53J6birzKSXDqWGnxadfZ8O3B2DHeeLEEkHXrGTaQuhLH7rC6xxc/ummi/91nnFTH2bwwyYK9ZlrK5PvIKIp17LoQI5MZAKJYahWCga66MB0hpaJxq5/Oze/7pOohAKjjHI7jQPlLICwSlT8eozpFvgh1KrHKnG7VtEfgexDmtq8LifyPaNXTbNxHnzn3Wc8+zPwQKo3wsWPRuIsr7yMIgswynNInz9nzGSerlMZmbY2yG0rknb9sGQYLDIpz81CMUSWfx/p+V7hcv1OkJMb1flFzVfq9LzopEk3B+G5/NlmRydnbaNo7VCcZSof7xg7aNHTTCISvD/SPNLBlhG4WtDM/eSfjoWDKrKgIrn+Wp5PJoyJorOCwcLgnak64i1wWXRwUzN+bRwz36wyGnpodoHIg+U5vTSy15PSFJlszKPpVIEcIhDDQahHE0QvD+vZznLglEfkK13z7tf+Xll4T44W0HlFR+44sDa5BCet8fKdGJlzCsG4N0BiUlmU68yKMxrQeiJ0cFUEY4rPDeXS6AmdGKQEkvgRdB1/g4W+u8/JNz9LMcmaTUZUWeZxgp0HWDsI2XLG1MAM19/rxOEoTS1HXNfD5juVyydSUn6Y3o5T3mkylYh5aK7d1div4m/8P/+//Ds9t9fuW117BCoJQPgBgbMmyd39wkiWfu72yu89ztGyyL0pNADg6Znh6zKFc0RclpA1sccvV6ziDIu8+mU5q6YSoEWS9jfc3R7w/IB30PzlUr3n//XVCa4XBIU9dsb26iB0MGtuJv/c3f45vf/FW+8vq3+MlP3mE6nWDLBePRAIXPgpBSY5LUP7POUdcV9vSA3nCNtD/2GxGpSfOcpg5ZI9ZicVRl4cFxrXFCMByt8Xf+9t/m/3V6yps/+DOqasV0ZWkqn31brlZ+Eg7qBEpriqLCGsuqqjg6PuXk5Nj3CwR5b8Dx6ZTZYsXaeEQvz3j3rR9xdHTI9Zu3qYqSuq6QUpHolMY22CpkGTeePJKlKRvbl/jx2++jlODGtR2Kqub4eMJkMkVlOdPDCVkiWS0XHO3dY7Bxmde++hVe++qvsDbuI21D0wg2N3d45oUX2djcYn3YB2FRWuLEjCTv8/DxIz5472fsHxyytr7B+vo6z734RX729pvMJsdsbW9jDw4oVisQIKRgtL5BKuH9D+7zG7/zNxgNMn70zl2apiBLJPM5pFqBqakrvOxm5p+zujJYIbxMpRDgLEJ5SxsRNrRJkpHos82hlIIsVS2498KzNzDFgjc+2KcKMo8yLNh8kCBBKkm/l7G1PuZ0tqCuSt756Y+5+/KLXN4YkiYpn9x7iEEzLysg4bmbV1gVNe998B7r61tcuXqVTz76gImEtdGAuw8O2BikXBpmCOGzV6r5CVs717h15RJ39w5RSrEoK5LljEYqdtbHPDg65ZOPPuD/8d/93/mv/sE/4NVXXuWHP/gBh6cnpL2cYrlCa0FTLllNJpzOlpyeHPOW8gG24XCIbWqUlKxvblOUDziZLdnop1R1g85H1MslWnnSB8HTGeXAeJOcoJOGEi5kZKl2/CKqFuHaiVSGhYmMn7gwl0pQ1ksHWmtx5sJiWQgU/rhC+8HO+vAjLsr7CYKPLj4oFI/tRBsYscaEUKwPKFnpWcE2jN9+bRqknAUIIdGBiOKqiqoxIGUI4pxnjT4tn88SbSO6IHcEjJ1zrUpEVHSIZJD4uwhUSimpa+/lHD8bDoet6sPh4WEL+M/nc7IsOwdMe79r0ZJRusz2rhqDtZaiKFrbiyzL2vNE8DoqPQCtakkkBmxtbTEYDFpg3jlv61YURQu6F0VBWZZMJhPW19cpy7JdcOd5zu7uLs8//zzOOY6Pj3n48CHT6ZStra1WHaQoCn7wgx+wWCyYTqfn2jleWyRfWGuZTCYcHBxgreXSpUvs7OxweHjIcrnkk08+4aOPPjqn/JFlWUteubgZSNOUsixbdYijoyPu3r3LyckJZVmytrbGarWiKIpWwSTa7RwfH/OTn/ykVW+J7bC2tsbl
y5f5yle+wttvv83jx4/Z39+nrms+/vhj1tbW2N7e5urVq0wmE46Ojjg4OGA8Hrf1VUq17y+XS7a2tlqLnC5Jp65rZrMZ+/v7rK+vs76+znA4ZLFYtOodx8fHbG5utsSPyWTSKtNEIsh4PGYwGLT2PZGcYYxpyUqRjGGtbdVIImGmqqq27wyHw7bvx2ele7zY36ItTCQ0RcJHVKno9/vtb5bLZUvw6drlRNLHRTuY2H9ie3UJWxdtZ+JnXTucz1KSuEgA+Xnf7Zb4vH7WJvXiZvVJmSBPOv9FW6CL5/tFdf2s+vxly5Pa5rM23j/v827bPDl75snZMJ/13X/X6/kPrfyi4Ea3fN7b4ml5Wv7KioMmKGSFjYTfY0RWgcMnwMXs9fCZrUsaldDkDWk7FsX9BCCCqgUAAik14C0spZDerteKNnFEIBCKMwu4QAxxDqwM9ghSBfJ9JJN4QFdIGzI7Q5DTOmwgeRLiQ854xQTrvAaHMBJXVzR1iZAJSI1QqbdA0RKtFTpJ0UnmM+07KnbAWXY94IJCiBTRNkfgrME2jd9bhf9ksK8R2gO8TiiUUEghA4hvkNgQiA2qI2cR7k4QPqgPSIGWLpBsXCD7K4zz1x4JOlJqlMLHw2xOXZlgTdLQ7yUILKZpyPuZX4fUDTqR7Gxv+TU0CpkmJEkWYkUGISs2soyqKmnqFau6pKobsqyHVpIsU1glMQ1e+aCxjEZDxuMR1hoqYxHKE72X8znldBUUNB1C9Dk5OsHgGA5S9HCbNEvJeil7+wcslzVZ1uPR4wPefu8epjHsbG2i8fHDclEyKxYkqWQwGpL2+uAqhGuQygENvb6mbiyDQU65LMgTQT9kEheFTyQzxmGdoigrklQx6ClMU1EVBiF8m7q6otA1tnYkWpCkOTZgcRaLRNMfrDHe2KBczlFKUtY1mdBkWYKtKnSicZXFVDVCe6qAaQw676HSDCEVmAohoGoEaZJSWYMUDiUFxhma5YJ6OSdPUmxZoYSX3UaApaauS6x1zFeC+WJFYwQ9pXEYdCJYFgXHU8k6XkUmyb2+inOOYlVjG8gySZamgKUxxpOijMUaSypSpE7AGeraesuaxgUbZk9yaKxDiBqpHFpJtMywxisCCinI+gOSbIhLB8hkgNQZUvkkvtjnkQ3honBOIiw46RNdVIil+FBMTZSCtiExT+A8qKoEIaXcJ84HNWkVSR5SIoVGaoWL7+ED1T5cEiGbAEBZkK5BCIk1wlu+YLEiWP+GMdFnwocojvBELRHIHF1VSaESpNbBvtyPFVJJrwAQYlkCD6jJYOMciWUtoBnGdhmJHiJmF9Pij+dpHq4zvjwtT8vnuwjh1R6khXJ+Qr+X4bJd5su3vTWINSwO36GcH9N//FP6a2P01Rch2cBIg7YlEsXsaA87+Zjx5edxvW2sMEiakO2f+XHAKszai6wNp5SHP6bOrsNoG52Oaa59herBD8gufQXbG7WiZVYIoEE6TeIcc2tAeVwtyki0e8BIrvXvttfXzc9vl2Xg49qSzm/CdxxhPRZB/k/TPtq9VXc/JrqfndmcnPswfC7Ch22YW0aljbNrchBIhV6lX9Qlq+VjjFmSjq9T7t+lBvK1NZrjBWVzynB0k3KVouoDRJr7taIFSYKgOWuXYKWDEIEAEsFm2nY9q8uFaz7Xtp8un4oDcKZW6pwLrgZn8QjrG4A6rFUjMTi2UQu+x+qF38Vk+ti25+IASoT5VlOUJXW5AvzaVEhvBSfCOkvGXNFw7kWTcFLlCE7COjSyLvz3n7gFD3VuQpKoEx4nOCNsuJaw4fNdIyH6rN7WRawEiEn+nNXNoylnFEqHQ4R9QOfKL5BLXDvPxTZy4W0h/Bls7NztfQ6f41AYn1QtNa5/mUE9JTdT5nIdxQyHx2mFW6BUhanmZL3rlNYhWvXZcI2fartIJHpSElDX8uU8+SNe59l33BPfi8QXAtEqfiNcfqtGFqHDAF2dfc9F942zNor91oPuLtwV0T27J8HhsCJhviwRiUQ5S29gkUKzrAo0HqsEj1vtTyW745Rx4xAlSA2yWTFIlzyc9jDlY9LBGotmiMLgtEE2Ai0ETaWo5QapseysVTg753iuKckQVgRXCi8I8cG+4falPtu7Sz58+Nez1vmlIH4IKdFJjlQaYQzSgRFBtq+oPQNcJ+0KNMQPvPClgLzXR0hFbQxNVbYqIEJpv3mxlsY0Xv4ITzCQUiJV8F60IIKiSJokyCxHKp850FQlAsvm2hpHxxMGQ8V8eoqSika6VopIKkGaZqRpRtmUHB8dU1tJQs3a5g7DYZ+DA4lrDDd3t/jGr/8G//O/+Bdsa8vXv/4bbO1e5tHhCaau0NqrY/isEL/xJzC+rXNoa8mzHmvDEbevXaGoKo4nU05PT7F1w7Kq2D+dIeSSjVGfUS+nlyc4/Ibp8dEp+bxE6SnSGapyyUIINnYuUQXFBp0lbG5ucf2ll9l55hXuH66QbsbaIGcxmyJURt4f45yhKVYo51gb9WmMYTabkuYDrHWUqxk4S6KgN1oD54kU1jp0okNmifEBDweN+d/Y+9Nn25LzvBP7ZeYa93TmO091q+pWoVCFgjCQEEmRbFl2NJvqjlZ3u213hyPa4Qh/8Sf/T5Yl2VaEFFSoO2yRIkSRIAEQUxUKNQ93Pueeac9ryMEfMnPtfc69VQBF0hLBm8C5dc7ea8iVK4c33/d5n0fTzCaM+n3+y3/4u/w//uk/44MP3ufl17/JYjZB6wbrLE3dYMN7tsZ2v7etz2JwzmvapklCr+wx6Je0Fk4nUz756H3+5A//Da9/6UvoJmTIhslNB6eJNSY4jGygo/fSNMPNLb73g7dp2zuoVHJ0eEy1XCIQWCfQrX9X88kJIi2QWYlUktmipZcp8tDHXrx5Hd1obNuSZIqNQZ8iS5lM52hjWTaao8MD5vMZ+48fURR98v429z79GNNqBr0Bk9PTIEvguLg95PjohN5gxJdevUNTzTk4PASrUYli2bb0shysZrE0JIkPgEkRqGQNYQ/sAiRABNBCnM0rkkT4zJXAwqOE1+Pb2hxy++KQh4+WKOkHp3NeqzSCCrLUbz4bq/2CYS3GCh7c+4x33n2X4vXXcEKxubnJZLrg9q3bPD444PTBJwzKHl//8qu89c67PHx4n/F0jDZeoqZuWn7y6QF/984V+pkA6513i5PHvPHlL/PJw9/HJhmm8cHe/QcPuPPSyzw+/j7GWQ4eP+If/5N/zH//3/3v+OpXv8oPf/xDDg5PEBJOj49onSDv9djY6ZMkXuf38MkBd770JRJd8fjgkGXVcPHSVe7fv8e40ijnmNeGXpqxqCs2iiTY015P1UqLcgLPogEIhzY6zGw+u8g4D7gBgRQKb0WszBVg5ciLAIrwsXHeALaOzkBU0oPoovVtXJBycQRnqf/CM+u5LjgYg5DOGJqoJU0IQGpLK3zmim5bfx3l+4e1PsDW4plBpBOkSgWjznYOhufll7vEgPP5dx0D1VHWZcW0pFcO+7Uf4AzwI0pkDAaDjlEjypQsFguyLOvAG5HZILJ+rAd
L4zExmB/rHAPykQ0iglGcc2cAERH4EeVKIjNFdOqXZcnR0RGTyaQLxDvnmM1mzGYzlstld34EB1y+fJk7d+6glOL+/fu0bcvh4SHXr1/n9u3b9Ho9qqriZz/7WccCEUEJ8b6xbrHd7t27x2QyYXNzk5dffpmXX36ZDz74gE8++aRj+bh9+zabm5udTMjOzg5bW1sdWCHWcV0ypK5rPv30Uz777DOapjkjMTKdTrl//z5f+tKXunY6OTnhnXfeIc/zDowznU7Z2dnpgBI/+clPePLkCVVVdWwivV6PK1eucPnyZabTKQ8fPuT999/n5ZdfZnd3twNofPTRR9y7d4/lcklRFOzs7HRtEkETWmum0yl3796lrmvSNOXSpUtduy0WCz755BOMMWxubgJw79495vN5d0zTNFy9epW9vb3u/k3TUNc1Wuuu38znc46OjjDGcOXKFQaDQdd/FosFk8mEpmno9Xrd2EiShKqqOkmepmkYjUb0ej36/X7XHyMAYzwed4CRXq/XgZvi+0/TlKZpEEJ050cAzDp4II7R8zIr6+CPdeDHF5V14Mj6teN3n3fOs/5+Fkhk/Xrr0kafB3Y4f73zz/2LOmD+MowZ58EV5+t8/phnnfesep3//YvAH+frfx748x9S/iJAir/MfZ6X5+V5+U+3OOflbf1Owoa9hfejWBf3KYHZImwDbPhpm5a6qkmzFiULpLQol4JLCFSIK+eg8NK6UlicaH2w1jQ+Q85XxO9ihad5VoiOelgG72TMtLcB2NAFMazAOJ+Qssp/JQALJA4FrcG2MRgSWExowbVYKX2SEQmWBCELUAkqaUmSmiRNSXLP5qWSIJUqIgOV9fKhziKFxTiwWmN0g9EtHnihECohSQuSrPROeBRCJgjlZXOtE77dhQrPrZByxfwh8DIUHUBHBke3kIGlU2Ct8T4Pt+YAdisgtZQClaSkaYJxgrZpwLYIaUgyD9TJyx7LxZTlbMqkXdIr+16mbuwdBE74YElRlogkI80ynMxZLluW0xnz2QxrDPNFhZQK41qctggHdZuybBqGww3KLEdlCc5KcDkqmWF0zeZoCEIiQsD79OQIrRv6/QHTecNy3jLoD5mOF2yMNvnGV19lPp9iqiVSOMq8oJovSVyJsw2uqukNCkbDHap6wdF44llQkBRK4bShHA2YL5eUaQCKC+83sliktPT70gc4nMM6QZZ5eaDpdMxo2GPU72PahqY2FLqBZoHRNUhFMRiQ5hnLaoZxBodkMNykqSuWlUU7QU8VyNywXMwh2IRGL7G13/pn/T6ZU+jWeVbXPEEqR7usqZc1uq1p2gWmbSFJaesKpxJIChyKsuiTkPD40TEHJwvmlUXKFKkEaeKzhLMkp60EC2VIRI3GA1KESuhveBCECPJEuSxIrUWJJIDEWoz1zEHaxn7aYgOjX5qngGRZ16RJQpb786yxNLUDJ8mKnKw3RKUDXNIjyXz/EiIB4vj0zNAOAvAj/ARQmnECaa0HhzkBTge2nDAj2BAsshInE5z1gRElfcKTFBKpVABUpYHxw89JyglsIPwR3hGCBZzxDCQ4GZJ3fJ/yPi2H0dozGjmfgSwDg5EQztPBSxXumXhfuvI+bhklXpRcfSZjIuSafX8O+BEjrhGaEqoa6rya99esyXDKLyYB+bw8L3+jiwCPenAhwCtYzqbsDgqaZEA7OyV3AmvnpNUTBukCmWyCSJGq5+VEREKzXLB89Dbl5h7FrW9gVYoxrQ9aB7l1KRw+R94grKIVm5RX38SMP2F68Jhi+1VkPiC9+nWqu9+luPYryHzkfbwyMKdhMFLQ6sYDwpRECwc2jHtWSYarRzy/VwvMDSIA37xhwHpQ2QFIOlBbLDK0mQvOaiEi7HY9at7xBQQb7vP3iyIkR7qIhlt3ghOD8v57ZQ2YCpINzPwxwsywtqLsb9G2c4wWJGj6g2s406Jcy7Jy9KRaa5u2C6Sv79896ML/bsJ+Wp4LtIuO4VuymifP+zZWSVYxkG6tPQPw6L5fu6cQMRF0JQG/fkx3D/FsX8qZGXwdOCG9PSglZCphe2e7k85T4eVZ5yCAEiN7FMLLBFZSUmlJHqQdO+hFZ0+u1cMHhYKdj2cRCcCA9T7i1uopOjaY4PMUwZdhAxgE1YELXIc4CDEPGez3DgwTmLviOwqsfv4YH0+J3dmuv4Pw/iPjTHgRAXQkEUgUGQJNYhuE9Da8MQK1OKBJLyLlgjSFvCjIkxJzch+1eIKyGqFbyDwriAvGylkgx1kf2dPr7uqYpxKb1tEsbu3Tp0BHEeRxFoQjwj6KAEiJXMdn7hP7WACExLF61icVKuBW1zbO4pwkVwnWTJHGA/WvbTj2TywGiwoSOFaEfmAs+xPFtU1Ne5wwSTww+niakooxsrdDrRM8esArHljp315Eq9ROcX+S00t7bAxrlFswnwtmJsMF2wuX8NnDCXeul+QJ1Jq/8vK3A/gRlpdUSGSRYkMQJ1GGttU4IWi1R9q1rUZJyIoSlQjyvAThWQhKmWPKkvlijm5qsiwDlWCqyi/M0hv+pjVeAzH1WaJ+o+FIE7kCcggv55KQ+IndeFfAZr+grnLqyrNUeCReghQtWZZQlD2aesH+0SlNXdHOJly+eo0Pf+Zp3gWGl27d5E++/wMef/IBv/Gf/2dk/SGPH+9zcLCPC4Gy4cYmWZqSZBl52UMmaQjCyhiipWkbmsWM0rSMNrdwN24gpaIxsFhWTCZjFk3NbDxDIigSxWjQoyh8ZkbdNDTVkvligcsSnDpGB0+M1witsG3L97/7Z9RVRVPXFKlke3eX4WgblaRYI7GuwqkM52AxOcU1S7RrcSTIJEO3S2anmmo+ZmNrl8HGDtrm3pnhLOVgSFMt0K0PnDjrJ7fLe3v8t//Nf8M//af/DEzN5u5Fjg/3EW7ZTa42BAE9JZgL6MX1zYpvJ6sbekWBTTN+8tknpGnCwf5jELB94QpCii4L1gNRABcyfIIYqDWOLMvRQvLx3Ydsj0qqhdf0w3kZH2tqjDXoao47ecLOlVu+/7YtJ8uK04klSxVFqhj0srDY09GBbo2GpElCWfZZLhbU4wllmZOUI+6//w6nx2OcE+xdSKgWS0zTsljWPHr4iMViya3br3Ht0gXufvAO9XKBUhKjBbV1bJeCJEBSBQ4btVCl8s4uZ5EuOogE2jqMDhtD5TMQrNY4KUi1xqWKvUvbXNzbRElLniVhsQylWwsDFa1SCN16cAgWayXGQa0dMk3YHu0xmyRUyyU3b9/CtDUnxyeY+glUY27t9XnwWYU2Xr8ZQEjFk/Gcnz444c0bO+TBuqwnR1y6+QYXtzd4eDpDSEXdWpbjI7Zf/CZlmjBtWqx1HB8e8C//xT/nf//f/vd85fU3+MEPf8jjw2N62xco0oQ88Y6tummpqgWzyZjj8ZQr2332drdZzBZMlWNrc8Sj/SeU/b7P1k4l40WFUAUbmc80EUKQKk+7sVrAvc5tBBopKbHGOylVNDYCTZw1rEBKDiJ+VSA9iATnZW+ExDow1svqGONQSnSoTePsWWPBCWyokw
feed1q463c4Kz1mWAyUThjPK1zcEBo5xAyIcXLZ2UioMmD41e7wOBkLcZqRADW/TWAJZ+X/4RKdBavb1zi3xEo0W02pHdGt21LlmUI4bOuFosFK3S66+RCoiTLzs5Ox/axLsHRti2LxYKjoyMGg4F3titFWZYdkGE+n3eSKkmSUJYl29vbHSgjyojEDKqiKLqAedu23TOub8rquu6e//DwkJ/97Gfs7+9jjOHChQsMBoNOziZJEowxzOdzptNpJycSGUvis2itu/OSJGFjY8NnOAZwQF3XHfghtrEMAD3wwIW2bdnY2ODFF1/sACRlWTKfz3ny5AmvvfZa18ZZlvHKK6/w4osv4pzrGDystQwGA/I871gl3n77bZIk4cUXX+S1117j+vXrzOdzPvjgA7797W93II8IeKiqisFgwMWLF3nhhRdQSrGxsUGWZXz729/m/fff5+bNm/z9v//3uXXrFn/wB3/AZ599xkcffcSv//qvU5YlAB9//DHvv/8+ZVl2jBk/+clPePz4MVmW8ZWvfOUMeGHdgaC15uOPP+b4+BjnHLdu3SLPcw4ODvjkk0/4kz/5E5qm4dKlS1hrefvttztmFmMM0+mUt956i5s3b/KlL32Jb37zmx2TSwQSCSE4PDzkT/7kTzg6OuI3f/M3efnll9na2uqAOx9++CGLxYJ/9I/+EaPRiDRNGY/HfP/73+f+/fs8efIEYwx7e3u89NJLfO1rX+OFF17g/v37PHjwgI8//piHDx92Y2Zvb48XX3yRvb09RqMRAEdHRzx69IjJZMKXv/xlLl261IFzos2zDmwqiqJjR/k88EYsnwcyWAeM/KIgkc/77vPKOkvP+n3j589iuojHx3ni857tLwPw+P9n+cvU81nn/k145ufleXle/tMsLmSrS+nBAcL5oLcjggbWnN5IhEy8E9mBbiuaeo5KFInskyYewOCle+P1XbiGAJWEzD+LlA6jJdaY4ISNsI2YjRaz2GXIKPOyHDFjLdYIYX2wQApQEmEVKk1x1gdbIQ/Z7QLbNlhnMMF/LFmxJzrXABZjLdIlGFIgBQxOOqRwJNb7dJwMrtXom451daEOAp+IFPwiIsp7Gu1ZT5xCKg+4cRIIgWDhghRdTBgQKuwNA3OkWGU2BjgMMffOS0eAcLLba3sbXBGDLM4FYDdQZgohc3CatmlQ0nofmxiSJSlNW2OVQsoMIRpk4ijLgv5wBCJjsVhitWdbGZQ5dWtYzBdUy4pMWrSp2djoYy2UxYC8zJhOJzx69Bmj0QZlf8BysSRRCWkmEDJFA3lW0hrL6WRMq1s2hkNOJxMslu0LmygJt3dvYJqWJ0eH5GXC6VyTK+EBF8LQugWDXoExLZBRVUvmsyWulVStRieSXi9BFSKwtUi0TVkuK6xRwaflWFpDmiQogQdUWItRPrkrTftUlWXZF5SZhFZjjMDlBWlekqQJSealTOp6Tpp6APmysh1YwQlJ2zbkygcysrT0718qEiXRTUPZ30IbiUg9kGYyPyFTKVlRYoxGyIKmLXFpw6yas7mzg1QZj58c0SyW1POKRmuW2lC1mkXV0h/kSKWQwksKWiRCJBiraXWLbPq0DfT6CTJJwEGrHTiFQHnZoKamDeBkmVicq3FY0uAAkcIzQxojWcxrGisQRQA4CIu1YKQjLUvysk+aDyHpIVQe/KYJiAQnEz/OJAj83sjJmMHrAWrWeFp155wPDFgPUnEGhLA+ziVDGErFhJ0AMJMisEl7Rh61xpbnGYpCMFLGOcN7bwLeA2dCtrMMSQ9CIoz1fmHvbe4SAeM86PMDveyuVMqD4tSa7ItY1UFKGeY25eWDEB0YDmKgd/V7DOwKKbBdrGkVhIvzsnLrQeMVa9Pz8rz8cpcQWJXQOk27GFNcy6ldjm4ritTS729SlTvI4atkGxtkG3s4lSCqmun+B0i7YOPmV0BlCBIw2stDndubRqCDBIx11LXGDV5kszelfvIOOrlAvnWF5IVfp334p5iLb5IUOzjjA9lKSlrdspxXlEIgtY9dOem8LBh4NqOn9oTPeubVs0ewF8Q9pf9brOHD1jEZ69dbMS6srtEdt/apW/v5olll3R8RbyqFpdWaPN+ld/E1nFVUJw/QUpMMLqBn9zGzh5DvUiQGK0qEE7SkZO7YqwQIFbL+157JhaRKotyL69aCWPP14Lb//dnyLqv6P+t5Vk+++nvtOdee+1nt8HlJH+ePXa+DC1RjAonDopQHgSghUVKgBTHqgGc+8XEIKRyJlOS5RAkPfVDSJ4dG1IATHnQRpXq6xOv1+0ffZ0hFlkJ6AAbguljfeptF4MBK1iWCPFj73p9wlhG9e36x1m8E3TECQMbk2LP98ouLj50IoSncnMYKD7JyBq02cWpO1t5HZAVCFdStoa4tSVaCSkhshrUVkr4HXmA71q3z/ejz/Fe+57hn1jX2JD73OmfBJN1XISZkXQTSP8WPsjYCeKqe5xOwVuetHaccWA/2SESCFQaH4NJmzvc+rn1/sAAW5RKs05AInFEcT2uubSfcP5DoekLKDJNdpG3iexcd6dt6S3izxiKFYdE0LFtFInoMCsFe3/u/F3VCIwqEyLl/f4pzkGefP57/Q8vfCuAHwqOJrNNgPeDCGI1wftJw4XujQ8apUiTCb05brZHSobWlyHJEMLyzoqDsD7xsiwGjKxxgHEjrP9NNgw3U1j4h3/cG6xw66Kh7SiPbUe4sZlNmsykqdCCLCZOMIE9Ter2C5bLkdFExOTlkurXFtSu36A22GPR6lCLlJx9+zMl0hm4145MTMAYlFBd39/j0/j43b11nNpnwtd/4dT47afn0x9+hSHwgp1eWJEVJmmTkWU6R5V3AVrcNxrT0gNFwwN72BnVds1hWzOcLFsslk6MxThty4ej3crCGFENV14yPa5KspMxzjNVoA0eziqQ+oJ+nDMoCZwzz0wlZkiACS0e1GPPx/Yc8+PA9Xrp9k8uXLmMai6knSCVAJmRFSZoXTLHkWcpgc4+6gY2NDUgyjg81bQyshOwXox23rlzmd37nd/j2H/xbNncuduwH3pFksFbT6iYsLoIkUWRZSpJ41KySitZomrZBAlVd8fDeZ2wO+uxcvMKirjn86VsMNjYwxjtu2pAVUi0WYaPqMM47OIQV7O3s8ulnn5DJC9i2wjQ1xvqFTwfgSV0vaY1lWF8kL/vomEmkLXXVcqJrEiXZ3hxx49pF+oMexjqWVYUsGorhpgeQNA0fvvMTBtsX2T84YD6dIRKFBE5PjqjqmsWy4eBoxmhzgy9/+XUSaXn8+BECS1kknE5qCpWQdZQPfu52zqKDQZgob9y4AAiwxk/D2gSEofbSQ0ZJlBJY40BIru0NSBJHW7eUeUquBAu92gQ6Iu1WCMy4VTaBEILrt+/w6isv89G779HPJYNBnzxJGI8nbG5tMZ5Maeqazx48YnPY486lTZy1jJctIknplX2qxYzxouHheMkLOz2PBDQGszjly6+8zKM//T6ohEVTk0qHRjLs5cwaDxjqZQUjs+R7v/+v+a3/4r/iza+8gfnhj
zmdzjECyEt6maOpKiaTKXXbcnIyZrtfoJKcwVDR1DW9fo/BoMdkNidXoHWC1o5H45piuySLIA67omOTIgIrApWWk1gpSJQHZxgbULyAwqO/I5hGGhskW6IzT5CKsIAGIIdSai1TRQYZLEsqPY2aCUZXNKS8pnOoZ/yb4BAhMBw5r9snkSTOYYVGCuEzcwzoxmCEDHqxwf4yFiPtSsNa6jXj/Hn5ZSyxT8bAcgy6RuBHnueUZdkBFIqi6IzBnZ0dgE6qoyzLTkalLEtGo1HHmDCfz1kulyil6Pf7XLp0iTzPmc1mTCYTHj582MlpbG5uUhQFZemdok+ePKFtW+q65vj4mIODA1544YWu3vfv32d/f79jaoiB/6Io6PV6nUzIeDxmPp93zBtRzuO9995jsVgwGo24ffs2L730EsYYDg8PefDgQcdMYq1le3ubo6MjxuMxh4eHjEYj8jwnz3OEEEGXWXbMJhGI0uv1OlBLZNZo25aiKIAVU8rOzg47O55y+8GDBwD0+30uXLjAo0ePPANUAOPUdU1VVczn885Yj+w/k8mkA5lIKZlOp7z00ktcu3aNK1eusFwuuz4Q2UyapgHogAU7OzvcuHGDr371q12/GI/HfPbZZ2cYMY6Pj7l48SKnp6d89tlnvP3227zyyivdvY6Pj5lMJh3IYTKZkKYpu7u7XL9+vQPNRKBOr9djNBp1DBlRGmY6nZKmKZPJhKOjIw4PDynLkkuXLtHr9ZBSdowdSZJwcnLCP/kn/4T9/X2GwyFf/vKXO1mgyMASARQRUBHlWSKryMnJCYeHh2itPYOWUozHY/71v/7XPHz4kMFgwO3bt3HO8d577zGbzWjbljRNeffdd/nggw945513uH37Nv1+n6qq+NGPfsT+/j7f/OY3GQ6H9Ho97t69y5//+Z9z7949kiSh3+8zHA5XbE6hT0fpnvOgj/NginWGjXUpoDh2179bl4WJc8Czyi8K/lh3hq2Dyc5vKM9nN5wHnsXM5TiuzrNgnL/vf4zyiwBmPu/zZwFy4jW/iF1k/fP/2M//vDwvz8vftOJBF0IoVCKRePY/tF8njPCeVCk8m6QSyn8vPOzAmBajW2SiMVajnE/W8TugkJ3Fymkr8aAQJ1KkilUQnknE+QSGVSAz1k36awkfdHD44LlxnkVSSs+igXOoNMEKgbMKjf/cCAVO4qzA6RaD9nFYIUhE4n1WSJzwwVWhFDLxP36t8Y71VnuQjFIpKvFJPdL57Fyk3xtJaXEmQViLsx5AoKRCdGuW9YkaWKzTvk5dwCPIOYg0OLLXgygOJ4Pj1eKBJMKE5hOhjSUSD47xMqAiMLeEt+As1hiMbtBYkizxMibDEXmWef+CrmAusIsgiyMcQjmaVtMYzaJyFOWAtPA2blMvmU7G1FWLUo7RMKcocp+0JSFJStK8h0qgyDKWxYTFfMGkqugP+hRlRm8wRBs4nUx4/Pgx8/mUwXBAU1U8WVRcvXIR3dZY1zIcDamXFePTGUU6ZP9on1Zbtre3kFbjHKRJTpLkDAZDmmaBFbB3aQtjHKcnE05Ox1TLjH6vpGnnqAQwFca0zJYN02WLkIpU5hhtkcqSKEGS5RjraLXBtDW9Xg/TGlrlUKnEWsHjx4dsbA8ZbQyxxqHr2geanARpMNJgDCRJRpJnCASzxQQcVFVNXvTIsoJaN+RlD+0UyiUkskeWCKzRJFL4gI1qUSoFBKfVHJloGtPQ1hU4g8WgZI7WlsXCM4yoVNHrJRRl6gNsrfUyOyomnRiSxPtpq2VNYrQfJ1mOSiR1Y1g0LU3dorVngEkDaN1YgzV44AKCtnEsm4q6NqAETePQRgVfkkIkGVnSA1VgZeIZMeSazyv8I6SXMYn/c0HuG+eQ1mGF9RJHzs8PwkmkVTgZgE5hjAlpEU4F2AbeJxxkkOUa8GM98CUD+MN1oKsAMMGDhqzXmwlgKy+v5IQHgyBckG8QWGdCkDbY2cqzTiupUFKRSEUikiCXvi4740PH3dN3vjhvpzsR7b/A+gfrEZ8wh4QQWGhaF/5YD9h1a8Hz8rz8EpYOACBW4hGiOkW4JYNsh5N6ga0rkr7F2hSaFLWd4gOMksXj++jJA0aXX0D0X8Vig2y2AaGQbs0RyyqAGoFZUgbbxzkqNyK58gb57C6zR2+Rbr1McfVbtPvfx+6+gUoHIFKM07jmBFcdIssW6xYI+h54IVyYF1aMsf45v2CP6CKg9Ok9pff1rzMAxbai83d3Afoz00QAskFnx8QD1uuyAqgEa+Zsc63qgQBpMaZG5TkKRd0uEGZCnl9CLyfkG7eQOy8yfvQTkjYj613A2iVJfgHDDomQISAc7ckz1V2BFtwqwH6eTWMF+viiPfjZ+TL6K+TaMU+xMay3yc/Ztz8LHLD67uz9PZjQv2MpBHVdYa3p2liEdcQG4KAK40EEJhipCP1KdDEgDyJxoS/HVluBWCIg2XVADUGpfPDfGONh1Z0UUWzz+ADeKBUW/09si7W+E5nMv6gEbMo5oMTTf52/ypm2F+HZQl+xSrJMdnF2SWHmVOoCigaRWlwyxNqGQk3RVuHkEGtLkkSRDnPqdu6BWYB0srvfGcDE5/jOVuwwoV3t034tYb/oWdbaJI51/N7N4OOHMf607mdyYk1+6alane3/Zz9f1c9aiZMggoSAk4J+ZhEqZWlqL7okDBDjwZ59bTObUOTw8CSlUI7KQZJsY63ASIESFuk8ucxq2MaImgnKMT4hQTiLE4LThUVSMMzhwuaCejlhWkty5Wi1Rf01wDT+dgA/AG0M1rQI2aKEDwQZ5zuD1ZFKSNDvlfR6fZAS3TYeZSYlOEmrDdq0WOup3U2kv+6XaJ1iw0bOGkPTtt4xICLfiIsrOVFrSojVZJSkCf3+gOPjysu8tB7pLYNDu200wkFRlKR5TtVUfPThx2xfuMyNq469q7d4dP8T9h/vUzWNB7sg+PHP3ufOq68x2rvCrGnp90tOF0uGeslXX38Vsd9y7/2fMBoVLJY1n9z9BLOcsTEaMdzaIS9KVJKg0syjxa3DYAPqOyEvvO779uYG2lhmiwWnJ2OOjo84fHKK0y25EpSZoiw9qn62qGh1y2A4YmtzRCIlfeUY5CJ0comZTqnqhqQsUFnJlSvXcVmf+0+e8OmDH3P14jZXr98Cq7F6ST2b0FYLqsWc5XxG0X9IfzgiU1AZy8NP3mNj7yYOhzG6o7iqKsOXX36R46Mj/uw73yHNcrTx9JlKJVTLOUZrkIIiS7l88QLLyQlZomgEtNZhtca2DZW1HD15wuT0mFGxh6nmbG3vcfeze4gk5dGDe9Rty3w+58UXX+bhw4eUZUGeJtR1TZHnXLl2rdPnnc7nKLPsNlCJUjSN36ha44CWZjmnKAdIIdDWy/U44dkTFk2DdhOqquLlmxcYlgml1fT7GWk6Is28bun3vv3/5eT0hKaqAijK8PDxI05PxywqT3GbFxnXb97m1q1b7O8/5vjkkDzzskbHkwV5GuQ3BAGZGRkeXGABAdVlPnhDLaJXnfOyIdpaEqewTuHQ4KBNJJlNsNbR
Gk2eSmTjaWpxq4VcG40U4PAsKkoqdnZ2ePXOi2wO+tw9PuLTecXe3jYvXb2IaTXlcMhoWPJocsLdseHT0ynbPclLV3Z4Mq04WdQUvYxBuU0qNE1rOZq17I0ypHQsx094+eVX+c73fsDCCYx1aCcQ1nDxwkUejz/CAYmUlFnC5OSAH/773+dXf/t/zZfuvMRPfvoOi6ZhcjqmWi7Js4REOZw1PNl/hJmNKfOERHonY5b12N7YpK0bxtMltp9jLRRlwbi2XOwn0TJZASIi8CKiDkVk7gBpFUJ60IVwEX0aDAsHSvmssIg1Xt+ky6B36JwAJbBWeAo0DBKBCY5WKRRSuOAQEF3ymwu62JE6zsvG+K2BDff0woeCRASHgJNYqwlJKx5qIjxK2CGwOqJ8oy6be7Zl8Lz80pQoG6GUOhNsBbrgewR3xIByBHBE1gprbQduiCCIsizZ2toiz3Pq2jtN0zSl1+uR53nH7DEajTrWjigjEz+vqqpj9ohB742NjY6dwjlHr9djf3+fo6Ojjo2jaRr6/f4ZxoH4rMaYMywHkR0kPudgMGA6nXZsIevMA5HVJLZF/Dt+F9k74vHrTAcqAFjXZWrWy3rA2znXAVQ8WDJ56lrrx0bQQrx+fG/r73Jd4qWNoFmxkhOJ50XgSpqmHehHKUVVVVRVxcnJScfo8tFHH3WgjCiXMplMSJKEra0trly5wv7+Pk3TdMCRyWTChQsXuHr16hngwjobi5SSsiy5ceMGH3zwAaenpzx58oRr164xHo+ZzWZsbW11rBkRAGKMYTab+QBFkByq65r5fH5GYif2h9hfIrAgTdPuHcZ+HIE6SimapuHk5IRPP/2UmzdvcuXKFS5evIgQgsViwXg85qOPPuK1117j8ePHHB8fo5TitddeoyxLlsslo9GIfr9/pn/G9xfBQetjLUoxxXcZ2yq+4/hZ/H69P8b3eh4csg6wiGP/PPDj8667/vl6RsD6fdf75frYiSU+2xeBOdblbOLv54/9IqfX+fp/UfmizIzz11iv57MkbM6DbJ4FUnmWM2i93ud//0We4ec91y9avsj59POO/Y9Vvuj9fdGxv8jxn3f++ff9N6l8XobP+nd/E5/refn5RQgV9gZ4DXDl9+k4iXMtzrRYHF6ZwO+FtUjCecoDNky0PUzw2XgGAUJmYSzOhWzAmMWPB8db5/0DTocMr5CN2J1rg8xICNjE4E3Y9IawpqcSlihc4vcsUjqM8kFjhAApMK1E2MQDSpREJCkySZEq8XVLUhBerkaGgLO1jsbU3kHaCV87lPKMBCI6t4UjBn+tdVgk1mmEM8hIR22lDxaLFuEC0AR/XS0tMuRkCqk8wET5RBaL8yAS4QMjXlE7+Ahc4DSRMgTKld+fdsEuDwDxoBGfBevlXS1NbfwzCUWWZeSFT8JJE0m9rDHGkuUlWQZNs6RezGjrBb3hEBEYCspMkSlv+xrnmM2XmNZ5pgIxZWNjyGBj6EECWY/CJWRFhhQecDOezDgdz6iXNfVyjkIjmordzQ2SPOf4+ASjGzZHfWanpxw+OULIjHwnI8sEiSrI8xQlcxYzz3LnhKFuFigpWcwqMCAziVSQKMFyXnlmF6cwzjGfe/mVeWOoak2RSzIlUHhWExsSl5qmxTnh2YmxWGewTiIThdYte3ubaN0yOznFGEOSJpS9DExLkWbhPXl5D6USnHA4oUiS1IMw0gKlCmwjaFtF2vdyq8o5nBWkaYmQ1vteVYbTC6RzJHg2QZVaqqrBGkeeZl5iVwpkklA1xgOhNMymtfcjKUFRJmSFwmjPlLpYzCh7ffK8QCjhE/ycRreewVkKR1HmtE1F0zS0jQdyKOVtx6JMcdaxmFdUywZjvG/DGoNwApkmZHnRJZpYAm7KgbXCA7AItp7yPjEPtAoMGgKcDYHM8KOs3ys6SUgwk8Eekzin1uxbTxsu4IwNuwJ/RMCF/xHR1pSrbGcXXClWeNCJB5/ZAOoxEOVJrQWR+PEZ6itEmF6U9P4gJUHh35Hy86//CcCyIM0rEUjrPFhOeMDHKsRKALOIiDfpAlpnHTbx9xVvkDd3TPfJ8/K8/NKWEBjv1mo7ZWNYoBQsZ6coBEm6hdy8RHPwIaPdy+jqlPr4IXnZZ/Ty19FWkFiLkwnGGRQC6TzTTych0rlJvd3hXJTE8ODMTErMUqB7L7FRzJgfvs14uU3/0jfQd/8UcflriKIPpkUk28yajCLpIbItnNTefpGyC57720ZwxpotH6sQRR26Y8AKsTqmO/ZpYEEMmgvB6tpiNY+cDbuf33fFWWb9nxWDnFirT/xOOjAugdkhSihaIzGLfRrtyFxDsnkZIS1SOzZ232B+8GOUyNAmIR9dg9O3EImXBFMuxaGxsnOSBygDHfjjvL/pi8Bv5/f88Tme2m92/ezseeePfdb3n3fM+nE+6fPpY1zo20IIkjQNfWQ9wcafK6RASB+PkMqDSL3MT4NwtbdH3bq8y4pJKq4ZZ/eFAeggIEu8T6my8d2v39sn33tQB921RJAse5YfJ4Kl43N1mbRn3tP67xEc+YvtW11XOTp7WZCC0ySuxZGSiAbpWqq0jzKCRLS0ro+b3sVubqNsgTQapzTSOKK8nJe06UZQV8+n+1yo+bmqPpUM5boR5HtxGGPOnXtOF2NAHiXSyQmtt+uaXRChOZ0t8Yy6rdcpxhjPfI5nivR7K8DU7G6kPDxsEMJhbLA1hMM6Q4ZiLztiQY8H05QchcxymM1pm0NUdpFM+NiVWwOM+LlVQADcCuG65xSAMwF4JWHcGMbLjEGesDOy9NUSsLTtX73Wy98K4IcUgl6/T1s3SAwq9ahzJSVZ5p0BMWNdiqCLqRJcUaIEnsIqBMSVVR45HfRmhcCzYuQ5TV2j25bWeupRHQx534dXRq21Xre10Z4mygaje9jLOTyRXhLGSbT1moxSKdI0w9qWUsDGcIjVLXf3n/BGVVGPj7nzyqt8+t5bHB2PoWkRIRj08HTOz376Dm/8yohHT8a88MIL/OQHf85//Q9+k7w/wJl9MuUn1bLIIC958ugeQjl6W3uIrI+RGacnh4h2SZoqlEpIshQ6DUhIs5zEOtIsZdAr2dvZZF43nJyMmc7mnCwXVEvNSC4o0oxB7umGUiUYpJJeqlBJQtMaTFN7VLnwbSXzgo08Z/OFG8wvXeB44oMnb390j5NHd7m0PeDyxYv0eyUpjqVpqBdzEr1gnqc0smBRtWwLhyyH6PbUB45Ugq4W1FXFN77yZZbLJf/uD/8ty+WMzdEGMut5R0Ka44TPPrm0s83HH73Hjdt3OgfFbDalPxiSJAl1NaeulzR1Q6NbhHBcunwZoxtQGbZqKIoe41Mve9PqFmsMVVV187lfNLxEizPabx5lpFbyfSoy1MynE4r+ht8MComQfgFM0hSHxGjDk6Njjg8eUs/HNE0bNpHOOzIcaCewrWVr9wInx8dcvHqdj99/l/l8waxqQaVcvHCFr371m4yGQ/78pz/E6JphP2cynbNY1GyWPgNHhAwoK/3CihBoY7EO8iRmQoBUAmOjWRMYVhy
0WmOdB2FpC0nUCcZP+LlSJMpi9Yq+SzhH22i84w6a1mvlZcIwffKId78/5Z2fvccbv/pr3Lh6hV4/45KuQSW08xMODw7Y7GcczRruHc14cDhha1CQOMNifIIO7+XC5pBZ05ItYFQobD1HGsPVixd4/+E+UkpOp3P0fMzexUuoDz5CI6ibllZI+lnG8f4j3vvuv+e1X/1tXnv1VX789ttU2mKs4fj4FGMh72+QFyXTqmbWavplCU4wOT3GakNW9qnGJ4wXglz5NpIqo3GOUnod5rhU+mwPG4AbAazRBZYcOI+mXacTJYCHhBCkUnpAS3AOxOy56E90oa9K5fue17EGiQOlSKSnPBYEKReiRq2Xv1oZ7N7c98A5L/GiZACedIaOQwTqY+NAh+w6FWifcX6zYqxDRG215/6BX9oSgQvrgV9YGYIRJBHlP6KkSgzMR6BGlBeJ3xljyPOcixcvdmwS8Vrr9+r3++zu7jKdTsmyrAN09Ho9tre3GY/HHB0dMZ1OO7DJ9evXOzaLNE1ZLBbcvXuXJtAQDwaDDrQBdNIeQMfusC5TE0GCUVZmNpvx5MkTnjx54p2cARgR2yAyhUQwRmzHdeBH/D0ye8T7myB7Fhkm1oEcSqlOFkZrTVmW3fnx2eK7iaWjBg73i4HoyFwRwSUR5GHCOrkesI71WQeyRPBFkiTMZrMOtDCbzRiPx13fmM1mKKU6YE6/3+/eS5Zl3Lhxg3feeYflcknbtjx69IjZbMarr77KjRs3qOvay+uJFavDbDbr+sKrr77K3bt3mU6nPHjwgJs3b3J8fMxyueSFF17g2rVr9Pt9wDPOvPXWWywWC++AXwPLNE0THPerdxDBPhHsIYR37sd32O/3ybLsDFhoPp9zcHDA0dERv/u7v9vdP01TZrMZP/3pT3nvvfe6fltVFbu7u7z55ptsbm6iteb27dtnxosxhuFwyPXr19nY2GBvb6+re/xRSnXvJkmSzqkdQSuxX8TvYSU/FME8cWyel42JfSPWZb2PrQM+zgM/Yn+Kx63Xa719Yx9d7/ProI5ngUhgBUY570R5FjAlls8DUZz/7ovKenbGeWDSeVDH+nx5/jnOb5TXr3u+PAtk83nPFa/1F32mL7rn593nWfd61nl/1eXzQEHnf49/P6s+59/9LwJk+Is81y8LMOKX5Tmel1+seD+eC6ANF3zFwSFsfUDUGX+gQHinunUemC4FQvmAhzHRn+P3ptH5KITs9uAEjW/vYPVyv04IVGD8ECrzFMFd0CCoca/1SSkdPtHHYfGgBRe8fg6CpIMCmaBEilAtUiUkaYLVGVa3frOD31NJJVGpl6Dx+ykFTmC0xaHDfmoFelcItAy5k4mDxO/XovPbxzdkJ+FgpcBZg25bHCClxiqzAps4hZLe5nRCdUFb61yXrOEdmQIpvYyOFAQGAQ/GwbQBVGM8qCQEmKL8jJQC5aA1kKQlzjVYq8+k0RmjaWuHSyWJSih7Q3rFwDNcGP8ectOjP2p8ckBaIGRKXS9wKiFJM4pyQFO1LE4OKfKMtEh90Fv5JBXdtggEw16fRbXk8MTLuWRFSVYUqF7O5qhHpgBlQUjGp6dY49ja2ELrlrrx73N7e5umqWmaimG/wBqLrjW2bUmkwFmNSFSQYdVY01LKkqTIsP0eMxoWdY1QCdZKlkvBbGmow15fCqjbFpRACd8GSZrghGA48GwkPpiRs1g0JMqztQrpfNsKhUwLWmMpycnSCKj3Y0alkkSJkPjm0Fjysgy+AEOWpf59tgt8Qk6UxnO0TUXb1iQKnLZYq8mKnKpSVMs5SiVoAVZrlrOK0/GcRGTkiSNRCmc0y6qlLHLyNCFLBUWmaBFY7Vgsa4yDLM/ol31wnvFFG1BKowoPsncGnJLUrQXp5SaTPPOsO8bgRO0TBLXXk5d4AI1QPks8TbzvU4gER4JDAcrTggfbVSUeiKSUCr4uL4kbk2lXwOLIVOnBI9b48ayUw7kVKD7OKVKsAB+r/8oAAFHd350d18U2PRDMBhknYyxY80xbwVqLM7ILDONsuL73gXt1KhV8PZ7B17OehIDRM+2YWJcYTIrz6QqYIlkF187Sxq/8Ny5mXLMWnv1rtuOel+flP27xQW+sH9uzpSaRltaN0EbgqMl7N0j3XkEefYTd/xGUlxnd+DIy62OsAQs68SzKColzllYIFDGAGj3DYf8Q7CAZYhtCWlpnULlEaEOVZOSXv85g9pDxw7dJ915Dnb6DHdxB9TexyzHICkO58jqL6OF1a8Fqgo3mn/NM6eauOMRXgIzVXkicO341V/xFZoXVvtffK/rMYzu4+B6CwSYjaDbeTnj7zto5Lt8knXxANT2l3LhG3r/obT7jAaMqTRhc/DvMT97BtUuyfECLQ1gvNebEKiD8dN3EU891dg4/2ybxqw5P9znP7p91LVgfQROsWE6e2mKt3epZe/v1/eoKUHP+nYX2lArrHMPBgCTJwroSwM+xLwYVl/gjMQinkdYg7HLF/BLXBxETXWNc40yXWq8slTZd7EkIEYA87kw1O7YPsXaNdT/OeoMDzp3fr4fohrAdMIUz9XLgbFfX9frGe3TtuQag8O/KkquKGktCgxYpKZYlJYmWWOnQakTZHmDJSd0C1WraNCN3DalYop23UYT0/S+CXeJ7e5Y/SHQ/kelkZXN09V3rhGff/jnQRzjWjwEXjo8jz58ZAaLr/a4bq2v39MCsdaDKORYcF0A/KJSq6Odztp3h1GpKa/nw2CFdSEJ0EmkFAzUnT1uOTJ+lThDCMSoVzdwitWDUkwh1yqQeIV3qJfWUj9379x6ia0HmU1mfqGykj5daaxDGoUhwUjJvHIWqEbpCCYeSZ9v+r6L8rQB+ACgJycBvCvI0RSivqeUncu/YxUEb2CCyTCBUErFKCOsz93Xo6omK9OgBCQ0kiQraVX6bDyCEI1GJX+RCkEM6G1DgYDAdAkglirLXp241dXVCkqY0jUECaZqite/Yo0EfcMzGYx4/uMv27h63rl/iyguvMDk9ZlktvY6bUmijef/eI/ZeOKHMSh4cHLE3LMiHm7RaMH74MaWdISkRAq5fvozQDfX4ENFO+dJLX+Xal77Gd3/4Ft/+vf8nmZ0z6A8p+n2KokQlKWmakGYFImwCkiRB9fqUvZKtYd/TbrYNy2XFdL5gPF+As5SJYms0ICsS0iTxjgYMCItSCRaL1RWJNdjlApmllGnOxdGIYZaQCct0vslbnz3kpx9+yoVRj5vXrrC3t0vRK5jOUyYffUg+2GA4GNLWS3Z3LmKdQFcLpouK5XjK1mafIiv5e9/6Bmme8sf//t8xOz3h+sUrbO/sMjs9AOc4PjnmweN9fvXv/RYCeHJ0RFO3LGZz6rpCa6+3KmTCcrkgkQrdNly6uMNnb7/FxXqGGhTUzmGkY3t3AyOVNxy2t0Jg3nrdKRW0bq0NVEyBfk14dgtnfUB/uZzTtA1pkmKt8Vq9SJRUGGHRxmG0pq0WnmnF6LhydZO8kgn9Muf08AlZniOsxhjNstaoJOPi7gVeuP0yd177MvPJEUeHB5
S5JM8LTu4fUKRBR7mjRQJCAB7ng/zWOapWB2pK5R1i+OXLOLABgKCUz/7RWuMQWCnRremMrjJRONuiBEGdzZdl1dAaA0KgQjbLo5M59t0PeU84Tqczyo8+4cuvfZkrVy/xyWLGxuaIrVJy99598tmM8ckhp5WXFZnVmlfuvMLu5pAnB485ODzkdLagUDmPTxvcxoBRIZmf7HPr+lU+evgYVMpssaSanrCxd5lUKbQ2NMYwawz9MsFqzZODh3zw4+/wyjf+Hq/evsXb735IkhZs7o3AaqrJESeHT0izlDTLuHL5Mld3t1hOdnjw+DFH0zkbruXweIwqC+qmxfQyjmYVu4OCVKwWWhczt0L/iUaRC4uxtSZA3gLrhrPdou+s9QbJmjFqbaQv9vNZZ8QHJ6kSEic8gCTYdhgTjDifLBIAHh4oIoNlaYMDsXOKGt93pYp0o0CQfjLxPQNYT5/lnZveKDfOSwnZ50GBX+qynrV+ng0A+NzAZizPClKuX+NZAeT1YzY2Ntjd3T0TrC+KgizL6PV67OzsdHIg4Oe2LMs6tgRrLcPhkGvXrnX3K4qC733ve2RZhlKqk97o9/tdoHw9iH3r1i2WyyWPHj3iO9/5Dj/84Q+74HVVVezt7XVG72Kx6AAE0bG4WCxYLpfUdc14PO7AAjHAXtc1s9mM4+PjDiSQZVkHtoh13t3d5f333+f4+JjT01Nu3LjBe++9x8cff8yPfvQjbt++3TGvxDbo9XpnACLxemVZMpvNOgBIXdc8evSInZ2dTgplPB4DsLOz04FBnFsxjZRl2UmRROBOZND4lV/5FV588UWuXr3aAVNiG0fWirquuX37NqPRiB//+Md89NFHLBYLrl27xqVLlxgMBt07Xy/r/Wlzc5Pt7W2qquL09JTT01MeP35MVVV8/etf5+bNm3z66ae8++67/NEf/RGLxYLBYMBwOOzqEt/FepslSULbtt17A6iqirquaZoGKSXLpdeMFEKwseGBofH+AN/+9rc71pkkSdjf3+f09LS73t7eHsvlko8++ojf+73f4+rVq+zu7rK3t8fW1tYZ+ZLXX3+dO3fuYIzpJHEiK8z6GEqShDzPzwB8njX+zgMJzo/3deDH+ufnN6fPGv/rY399sxhBIOvnRMDJOiBs3bGx/nzrLCBPOz546t7n55hnlb8oQOKvqsT6/SLAk18EfPHXVX5e+z0vz8vz8stXYhY7OITwwUkI7H9S+hx5axEWVOLlSAhgBO88Dpn7cd0ACE7+zqEYP/PRSb/PkT4hx7GSUxBOgQ0yZUG2xDNaeKejjGTFCqzRGGuQ1vs6ImNiDCoACJf5PZkzYA221VgDpm0xrUbrlmZZg9VetiawmBjt/QZSJaRpRprl5AHg61ljvWSHZw+I8jWsAslKIlyKkkmQdPH7P2s9yMaDSTwoX4S9n1SBZSQ6fQUYq/2+3CmkTHyAO4TRI6W4U0CgPQ5xk8DiEYLYwgeoJDlSJhgtSJIAfNU+cQXpZUD8PZ336WiDUsLvGaUiLz2oNlLNOw3OCmzr77lcernB3QsXSaRksZghHQyKPliYjWe0jWeBENLbuVvbI5I8R6CYTadgNUmW0RjLYjFnuDFi1B/Qtg2LaU2mFCLNcE3LbDqmyDKcdjx+sk+v7JFKRZYopNUMihyZCMreprdLdYtpLdYalLNIa4JjXFEUAociNVDXLRJDXqRgoWoN1igcliLJME2FbhsGowFSWLJEBQkdTVO3IKEsCkDSGENrNbVO0YuKsl94dpBWU1c1aWCacYBMvFPbWh38VRa9PKHcyHyyhxA0tWc+NsaQqpQ0K2iqGUIosqwgkSmtNsznC7RxqDT1ACcBm70CrR1Z6jMlE2HB1hTZgCzNadslaaYo8gwhJcZ4m1gIP+pUltHPFMvlAtMaEpXS4pljpfMsJtZKZrMK4ayXycF12cXOBXCV8BK0Kk1AJQiVI2SGUhkyzZAqRWUpSZIhVRLAIXHPKL1XzEkiFby3V88BP6TEmfA9HtQWgy7gmUy9zHikvvfsHkJ4YE5n44qY4bwKGq7YFUEIL+OjtT5jr3Z7PrHa14FDSv+2pfQMKX58JoHJ9WkwtRPnA5fPsM/8RNv9YTt0Rxdj636INt6aaR93DM8tv+fll7nEQHccK7OTI/YGJXU6YHG0T0LKoNhgfvIZ86VguHMFWc+QTuG0Z59XSYJzkV3He4UlENP1fVb+2v5Y0NkjCjq/u3POS6FZiTWWprjJ8Po29eEntKIkm72HFm+gUn9epjKkE5F7OTBFhcD1U+CGc/vljhlkFQqXgBX+E2+jRRuwa62I2gDc2tyzihav9u7r946/nwNOeM95YKJe83OHIHWsh2fLdiyaJcXiCfVwj6Ts+9iFACFSnGuRSvlE8VSQb9yiOfgQJQWty3Gu9fOc9EDD1VwXnlOsPclaQH4FHwAR39FZaEPoOmvPz9nA/Pk5NJ4d2adi23g2iGjoue7o9aD6yv9ytg4rsMNZEIMM7ehskBwzGjCdfSqV9PcVYvUcQoD0Sdsu2USoOQ5DVFzzgJHwLsUao0zXvdfeP5Hrwl8/AqFiuzgX+q2LjH4hGnwGKBN6y5ovav1Zwe8zfP8628dWF3Bn3t36+i3i44SaRskkZy1Ii7OaqklIiyXOgEigIveEAlJ7RkHhY4xCpbh0i1TvAwt0a6jqGbJHSLANbD+c7Ufr/qCnEi1C/c7223BO6ArdSHQusPLH81bxu3WgiAvXlMIDcVx49xbnFTQ5d95aW6/vpcIRxH66/pETmqqVuGxIkcJQKo4nFY3JSIVX20iloM8EJeBI7yDbBiksxim28jlHMkELw7TuM+xJLpdTHi9HYQoK0lrWICUY6+22VffwsnrgUGGcRfsQAYdVn1K3OPax7qyP9K+i/K0BfpRZjkwSny2BN5yRCh0CAoLgYJcZSgqE9Bqs2vpMhSxNcUKQAoRMdev84iCkQCFQeU6WpuhW0+gGa1pPwSdFkEwApxSQYG3MIAzajWFy2OqXOGNIkpzFfIpUlaeDkbLbUCgJSg5IlWD/4Alfy3N6yvDVr32do4efcTqZMJ9OyPMcqSUHkzHvvPc+v/3bf5/vfv97/GfffJPhxhbHJyf8+z/+Difv/Cnf+I3fRhU9EiW4cOES99uG+w8esfneTzmdV7z95z9iMn6CwjGeLVFKkUjBaHsHZy2DMmcw3CTLC9IsRUiFRKJy3+5FkTPolezubGGMpdKG09MJp1XNyeyEQsHWsKSXp6g8Iwntn0qBTBXBLY9dzmibGmkte8M+u6+/xuSFF3iwv8+9e/f55EfvMkrglVtXeeHlO/TKkrIsSESB0TUpmtsvv8qnH/6Mn333uzx+8ICXr1/gG9/8GnOr+ZU3v4IQkt////zPzI4fc/XGS5w8aUmTlEQlNLqhyDNOTyc+sCwFs/mM2XTmN5xOsbG5zenRPrPphKQoyIzj8ZNDXjk+wiwXDHCkCk5ay6XtTSZJgn39TbJe36MNrQdr6NYbFSouJGETh/OoROc8q4w1BqsSf64JyH7rWSSsNeAsddMCXhfUW
OOpUxEhO0dRVzUnkwlttWQ5nzFf1tRty97FXTYHA65cu83O9gZ/9IM/BhpGw5LWWBbzJUVwAMSFUUZggQAZddisX7psyLBKg5MuESpIdVjP9OAsSnl6udZE9i9/Xe0M/TIlGVc0dg0t7KBuWrSDvOxz6Vqf/ZM5vY1NhsMhjx8+4PjklLd/+jYv33mFF156le3dyzzZf8Ard17kxpW3GE9n7GwNqU4WVFVN1bYcnBxx4cIOX/vGr9DLMj774D3e+tk7OOGYLxuubA+4lB5x8cI1MiFopQecnEyn7F26Rp4mzLVBWMey0eheRp4o6rZlvH+XT3/6fW5+6e9wOpnwyf1HFGnC+HSCkArTLrzUUFVzfHrK1rBPVhTsXbhAkhxx0HppmCpkwjTaYrTjaNFyoS9Be5mpaDQFU9X3o7gah8Vc4TNBpPQSVDYgI51wGM8H2M2jUkZnqPCLcNCKjo5SEVY24yweihFlf8L5wdxSQiCdPw68Iw4brilc53iJTEzW+fwaCJJBzvcZ/zTemdGxyQiCXMzz8ste1uVaYAXKgLObjPVAdbfxkPIpQ3b9OrGcD0JHR1oMvK8DTeq6pg2MQ0AX3F6XwJjP593vkZEjXiNKxsQgszGGjY0NyrIkStJEBoWmadjc3OTGjRuUZcnBwQFJktDv98nzHOdcB4CIAIBer9cxW0SpkMhcsu7AU0qxvb1Nnuf0+3201jRN0wEjIrNGZG546aWXODg44PDwkD/6oz/is88+4/j4mOPjY7Is4/XXX6csSyaTCVpr6rru2EwikCQyLUSgQ2Rh2Nzc5PT0lA8++ADwbBaTyYT79+9zcHDAa6+91jE/LJfL7r3E60Vww3A45MKFCzRNw2w2o2kaLl++3MnwLJdLJpNJx5Zx4cIFXn75Ze7evcv9+/dJkoRvfetbXLx4kcFg4EGea5ve9UB0ZMLY3d3l8ePH3Lt3jx/96EcduCYCQg4ODnj48CGLxYJf//Vf5+bNm+zu7mKt5Y//+I+ZTqfejpOSpmmoqqoDd6wDIdb7agTGLBYLmqahLMuuTk3T0DRNB4pxzjGdTnnppZe6fr+zs8NoNGI0GjGbzfjwww/58MMPSdOUzc1N3nzzTW7cuMHOzg5Kqe4dRhBM7BdA128jWCP2rfPsOWc3zmc3dOfH3frPehucB3PE68Tz19lFzgM01qWL1p3h5+eM80CU9fJUVsQ5AMV5ANp6nddZROL38ec8OOYvWs5nbJyfz84DPNY3+c8Cf8T/rrfP+bp/0d/r9fpFgRvn6/+UA+JceQ4IeV6el1++YpyXEMF5EIKzArVCk3u5juBAN0YjnUOGLHyZeCmSLuhAmJOdp+yWKgCIIwNsYBxcORKjAzfYksI7YF30fuLdkxIb9kHgMN6BboEQVJV+k4MV0jt4ncAFyRWs6aRordY4K2n1wrNHLOe0dYVpa5xuUVIhEkD4BhBS+ez8RHoZlLL00itCYML+2gFSrEuVra1xrBhPpBI46cC2Ya1qES5KQTTe6SxynFvJy66CEgTQiEEJzyQiwh5QSO9htwbA7/29sGuLNdrv95UH1SRKYIwHcYBnMS2LAU1TYWhpjCZPErKiJC92sNrQNBXWVuhWY6xn7FBKoUTCol4yPj1FtxWjUY9emZJlJUo5RsM+w40hy0XFw0f7nJ4eUuQ5w36fIsswTqPblmq+QLYeMCKFoNGOj+7eo5dn7O3uIIRmWp3itEHT4GgZ7Yy8/Z2OeHz3IWapuX7jKlLB4/ufsTHsM9raYmNrg6qumc/H9LKcpnacHh8iseRlQm9YUjeO6XTBoC9xpmXUz7EuoWkqsA1SpTQWam0xLagso6krRJphnSZFk+U51mSkqfT7hNTinCWVCXlZMJ3PmC09ALinEozXxMZax7KtGAz7PqAINO3SSxs5D+h2DnRboZxBWIvWjrwskUJhdeX7o0zQrsXYhqZpKXtDsqqhqWtM3dDLMoxVNPUSK5Uf2tb7LzKhmM6WDHAIY8ikIpG+XyWJRCUKlMJqvz9zQntgyULTNgajLSrxMpAO5ZOhmgYpHKkSpGnuWUaE9TI5CJRIUDIlkRlSZpAkyCQBpZBKkQS2Yik9Q4pSQe4kAD/8kJAdwMEG+SIZWHucU1gbKP4dOPRaAGlFGx99vysWuzCPBcZVxMqGdC7KPEVbMwRepEEEW9fooJnrBIIgqei8j8eYGOzyPh4lRaDa9/JDUWomAuVQchUOlD5S6US0Pf3MGJN6YBUAedqGfprKHbcW/HTE0PXz8rz8UpcIknD42cM2JwyHGUs1pDp5G6daFu0pYgzDzcu4wR7Z9gssP/4z+rd/BZNmgE+SkxHYECLJK0DA+kh7eo9IAFREVpBVzECjXUp24Q1UfZ/F4aeI/T9A7H4dleAnNyVWc0i8/LrxtQZdWAcodNBbIdaO6GaQLgh/pv5nphERTj+3H5dre84AYOi84oIQ9PZ1EXFuivNbnMdFnI8JsQ7/ezM7ZHDpSwwuvkI1fohIe1hAdYmUBqVy2nqM1HNsWlItHyKKDGGUn3+dQ7rVc4lQGdGBWdbeUQysd4+7AnWsF8c5Xwpni1sDOfhjom9UhXk7TtTBdluZut13Z/bxq+qdac/1GEIEKMbvVEjqTKQgkQKlBJ6/PbDAxRiG8CBpKdd8t0LiVJDtYMUI1wEputuuQCqia9NVn+rqg/AydXiRNZxbBezD1SNb1UrrZL3/hXfXAT3Ov5Nn+0a6uAqxj64fv+bPtoEFSEYpJkciKzJzyiLZoREpiXNYEoRrMNKiTAB5mmOkvoHOByTVMSpJWE5rEqAVNqyrTzO2PMtP49Ze6TpTydmDXABq+L7j3FoyrlsBl7pLuzWgTtf3wo2cC4wfoQ+fqdNqfHyeV8iDTELSMw6cwcgU4xJaJxnmDY9PIJGCxiX03ZKcUyq9QS1TnGtASLR0jBKDcwZDjhAO4Rpm7SY2ybm4ccrjk5EfP8Z5ZQNDSH4/W59nFQE4YbDCoFVKnKP+qsvfDuCHEDRtgzIGnEaqDG38xkZKRZ4myKQABNoKdNt4lBZeDkOFzAbvb/AaPdL5jagINJ5t02DaZqWraoNRLUKWPBZJ0k3OKkmgm0DCImQto57gdDJBCUddLQGfwZ5ITx3oN9I++6Hfh0VTM39yj5defoGvbO4y/vXfIE0Fb739NspZ0v4GtllQKFg0FbubAy7evM3j8YzP/uxPWMwn7M8b9g/22b10lUobqrriypUrVHXLsvUZHF//tb/HS6++RrucM58vufvZRzz67H3m8ynzqub6xT229y5jjKVdeAaMPM0oewO/SRICIVKcMT6roczZ6pdoBIv5gsl0yuNFhTtd0MsUm/2CjeGQXn9AlucYo0EIWt3ippr5bI6ZV6R5waWNEXsbL/Kll1/k8dExn372GZ8dH/NS2iMdbHM6XZDPl2zt7CKbOVv9lE+cpT8oGO5u852fvM29hw/5yptvsrm1yZ3bN/jBzg5H+4+50DZsbO2gmwbnTrh39x66mlOWGXWb4yzUTc3h0RFCKLIiZ2t3j/nkGJX5emulkIMhd+dL
jsYzXikSNkLAMmtbem3Lk+mEqmlQeUFWFDgcKohkOef7htGatm2R0jNrCKk8Si/2Iwfa6rA58sAKZwXathijUUp6zVynEKkkeKMQQjCbLmgbTaMts8agtSVNM8pUsdSS17/+LR7d/ZjH+w/YHJSoVHH/4T2EtShFZyR0/8WjZZGghN8sOuJaaWkbRyP8+BPeF9TpE1tjEcKRKuWNVilQOKyQDIqMRApqY1fLgiBkdTv6Wxt86Ve/TrVsqU3L6dEhdVXhjKF1KXWrefDwAbdvvsTB43ucjKd86Uuv8un9R+xuDTldtFRNg7aG05Mj9o+OSNKU11/9Ev/T//g/8Af/87/i97/3fY4mcw6nC8bLhq/s3WTYK9if10ghGc+WSCk9s9Ci8s+Q5jRWUqYC07QsK8HBJ++S94a8cudl5osl+ydj8rJHvXCMhgMsAmMtx0fHZEnKjct79IoCMxiwWMyZLxa07Zxl1ZArr6k6by3LVlLIFdWVJNJqekpYEWYeJzwFcLBv/fEusAGEuS/sOjwjSEdFJ4K/YJWtIWU0h6zPXAmUXdZFrGpwKIY+4mkE14yoYOwIEfqIiHrbcbn2JSKtPbgjPkPQnQ5auAK/0K5vGp6XX97yiwQQzwdsnxX8Xf/+WYbZ+meRXSKCCmJQOTJhxKB23KCsG/punza2AAEAAElEQVQR3BFZGxaLRRfEjkCRGEg3xlCWZcf+EK8bgRJRViayjAghGA6HnZyMMaZjibh8+TJFUTAajTqJkH6/z87ODjdu3GAwGHT3FUJw4cIFlstld34M3q+DaGLA/MKFC1y/fp2DgwNmsxmffvopQnjZlKtXr3Lp0qVOziXLsjPyMeuggdh+sSRJwq1bt7h//z5VVfHxxx8zGo3QWjOZTDq2jhgcj20fWU8igEYpRVmWXLp0icViwePHjxFC0Ov1sNYynU45PDxEKcX169e7d3r16lXu3bvHdDrt2joCZyJoIb7XCGyIQf08z9na2uLk5IT9/X3ee+89wDOBDIdDqqpisVh0AJIrV65w7do1tra2WCwW3fVj/4msJfEdxHuuM4/ENl+Xv4ntGkEhzjm2tra4ePEiZVlyenrKzs4OQgjqumYwGHTtOZ/PmU6nTKdTZrMZDx8+pNfrURQFg8GAwWDQtfX58bQu5RPrtv73+n/jT3yuZ43Z9Z/172N7r58byzowbP3+5ze2nwdyiG18nlHkWYCOzwOdrR9z/nnW7/Us0MfPAzj8vPLz5sZn3fv83+t1/zywyPnPnnWt9c+edfzPe454zi/y+/PyvDwvv1zFAVHxw4M/1uwRFcZ9iCj6tRO0blHGb0JyqUiUQiUh0cDqsH5q78eRfm2UArpsR7lyFDtnA9ABHwgQzrOIBHYM7+D0zAcurMs2AkcsOOEpO62l29G44DS2wkLY+1rtMK3Bti26qXGt9mAPW+NcizEtzvmEjpSEREmSNCPNS7K8ROUZQiTBmWw9CF64ALgwPnijVmuzW5/LAUTiHd4qBAWs8Xt3qUJCT2TlMFgrPPhFyCA54bpAraevtgjd+mCRCgBsGXxfLsL6BYlUWOFlSrVtUWmKCuuOEgmNbqmaJarsMxxuYKXBWYvWNctlQ914UHSaF2ATcEua+ZREgtWO2lmcgsEoZTJZMJ1NQCS0TU2SQJ5KZDJAW9ja2WJje0C9XDI5fUKRF2xubpEmKc4J0jRhjqBeLmjbhpvXL5MlkjwrKAZ95tMZ1XRKkWc4JVnOphitmRyNSZzh6o1tyj5Mp2MuX94CC2mSc3R4SpYqyqygrWuMrtjbHWG0TwSyTngZWxy6MWyNRrS6RklBmaToRtO2Nda2qCyhMbBsGpyTIVimMNowq6cM+j36/ZJG1yiZUjctLlUIUyGANOsDllY78rLAITCVpTWenU8lCowH9jgBaebZTaVKgRQhM4Tw49KJFG1bWm1InCMpeoilIs1SqqZmNj9GKUmaSZY1VMYync9pjEYYTX+jH+QMJcZoFrMZWtdkRU6mFNV8iTaGgYPhSJKIFGctbVtTa4NzCVUrWMwbyixnd3tIkaXMqyWtNeRJSpYqhIQ8xfvarKG1DpVKVL8PZYlNc8/wEaSOZMBaiOC0iIG1mEEvgu9qFS5a7QVXwG2w1gOenApBFqKN6gNxcXh6v4dbu/bTMoZdQMat7GwT/Gmd7SW8NIsUdk3KMNRHNHhC1QSI4A8b5gvRSc6sfuQqcrNmdll8Yo/3h4uOEQhkiJO51fzXze+dIEQMdXXzPhD8mvGc5zbe8/LLXSIzs0/41BSJw7aSJsvR1YxCSjZvfpPp5BjlEoQVqMEu5e03mH/6x/RvfA3KDaSNQArCz1rA1K39zrP2Tn6seTkmPzKt8AxVQgqMaXDpFYqrF1DTn/Dk4/8Fs6wRg9TL2HUaDSFKEWJi/hP37FHsHJ1Teq0oIZ55jhCxrYgThD81moNh4ojnCVZ1eNbTdsxk8V6xjeT6Uf5ci2f0dy7DGkW1OA22XdKxG1j8fGqXJ9BOSUbXwH2PPN1i8vBd+jduesksB0JE35cggmpjzPBZ7dT5oZw70zbx2URsoNiuXZvFtWb195n4zTm3wzoQYX0t6UzueE3OAVQ+xwO//i6ctQyGA4rSJzgRbHDPIh6D4dGHggfICOlBw1KFWEUEI6wgSs8KmAtiDMp/HwGWYq03+KbwSf3uXNLNet+wa9dftaPs1qeVr+Ps33FvcdafxLnPXHde51+J93FrTDgyQbYN83IDJzNUlJkRHvAlnCN1c2q5SaIKhJ3TygEkA2oUJkj3eTy23ws4zvrF1ut1xh8U+twqDve0Lz32D9e1K2eO764dYoDxS7fWsVYg+tUQPOexX78jiNh/I5/LuThTOEU6DwSWQrDVg/vTDWgd29kxSmhOmm1QqU++d+AUCCPZGdZMliWoHCEkVjeoVDJpLI3d4Nr2goOxopE9Wtt4v6RZjY/Pa6tYpJU4kdBazapn/dWWvxXADyEEWdHzRrig001DOFSifGhUeMkLv8pKL6u61mOtszjj4UtS+qC671B+ICRpGnp5uKmWuLbGG9iBGlPKoIfoB7/XjJWBeisgwaWkKHtU9diDQwINEkAi/QZKIkmS1AdWEbz1s494/StfYefaFr/5rW9wfSNl79Iuf/Rv/4g8U6hsyO3bL/CDH7/Fph4zHp+g7Sm1BbU85pU7L6HKAbP5nDRN2BwOsM6xrFrufnaXl29c5fVf+Qfsn8yxzRKR5nzy6V3ufvA2H777Fp9859v82je+zmg0xFlLohSLpmWxXFI1GurGa8wpRZLlgWIxLMCmpV9m9IttnBDUjWY2X3JaVUyOZxTTilEvYzTokWcFeVYgepYiz6jrinY5pzl8wGKxIOmNuLGxy61vfp3aCg4ODzmefMbGqA9Fj8eHT3j46ID/6vptvvrVv0NTt2zf/QBbTfjT73yH5XLGjZu3uXHrJmVZkmYZV6/fpBhs8MlH79JWSx4+2udf/r/+Gf+bf/hfkqUZi3lFmqRU1ZKo3zMabXLlxm2WyyVlWVAv5rx88xr66gX6pxOMElRJyhBYSk9
XunvhClYlSATVcu7pWpXFGoKDxNG0bSe1IcKKmKRZCFIl4MTKKRQ2dziNaRqSgMgXIkGoQKWqvAPGGEPVNAxGQ8YnhixV0IB0moPDMb/zP/4P7IwKvvtvfkyvTEkTRbVsOD08IQ3OpYg+JDiaEJFKF6wKQSAhwpDyE7OxDmENmZTdggNeIsaBdyxIBdLnQwkMqYBcwiwaJ2GzrduGZd2g8pzhxhZNNWcynXJyfMh8uaQxmivXb/DiC7e498lHvPTSS+xdvMLhk8e8ePMWl3a3GE+njMqU8VzSBhrUw/0nbPR7PDo44NO9bUbXrvPbacJ3v/8DPt2f8N7dMSf1d5HS4azGIBkvKoyDsswQEz8dpOUQUoemQTnHyXRBVbeId39Mb7jJqy+/yPRHP2bRGNI8J0tgMZvR1C0Gwb27c44PHlImkkRJjLUUeU7btMyWS7JMkWM8ilN7qSqJBSd9P5FJMJ66ldVnZYlAHar8ezTWO/wkIbCI9fNMoJvyCOFomEUjxiOVo3kvpfBOO2/D0doAwnD+vkHubkUpKjwziHGeFks40WnuiYDMDrlqHR2b/16eWRY9LZgDoUjEytB4Xn55y7oTCp4OdJ7//lnnx/KswG4MKp9BzYcfY0wA4smnJD/Wr7MeFI+fNU3DfD7n6OiIw8PDTo4kSRJ2d3cZDocURYG1lqIoOnaOeP46sGFjY4PNzU0uXrzYAUXy3EvRzefz7r4vvfRSJ7eRZRl5nrO5udmxYezs7JDneQdiuXnzJlprkiRhMBgwn8+7Z4l1adsWrTV7e3u8/vrrPHr0iPv377O/v8/u7i5bW1tcuHCB7e1t6rpGSsnm5iZlWZJlWcfWkKZp12brzB9KKb7+9a8zGAx48OABjx8/7phHIlBmHfiRpimj0aiTQ4mMKlGG5/bt23zve9/j8PCQ+/fvo7XPrj05OeHRo0fs7e2xsbHRydJcv36dt99+G6VUx3gRATnx+aPjVWt9BgCklGJnZ4f9/X2m0ykPHjzgK1/5ClevXu2AH/EZ4zvRWjOdThmPxxweHrJYLILTW5yROFln/Ij9JvaJKEuyLn8TATLD4ZAkSdjZ2eHOnTvcvHmTxWJBr9djuVxydHREr9dDa83Fixe5evUqvV6P4+NjPvroI/7Vv/pXfPDBB1y4cIHLly93IJXlctlJvcT6wkpSKDqo1+u2fsz5cfescbQ+Dp/F1vOscRyvvz6Wz9/zi+aL2L7nv4+/rzvdz8vPnL/2+vmxj8f+8qxnWP/5RcEM5+ezL9pgngeunL//s8Adn3f8s+bRn3fNv2w5D6h7Dvh4Xp6XX+LiwGgTZHkjYxIoZZEJJB3Y1menp6mi1X5f4doWqRJkqpHKS5wZa3BthXGO1DrSMB9HeZTocF8FaoXfNwU6ZOecl2KNQVfnpTHBYp3BOu0ZDYP33+KThjCmC3wQgCEOgzOBvct44Ii1pnOsqiTF2hwR2Ad02/psUyWRSUaSFeR57uWGw7ltXQdnuPJ7OpXglPPsCw6SNASOAwNXDDp7FpCwvikvIeu3aQHc4VE2IVVTIDynuU/iIAI/Wpz193KB4STu/60xgTHAO0JNAM2o4NNyzmK1p/tUKqXtZGB8EouTJVJ6JgvnJEWWok3D+OTAs4xIGyQxNPP51LtCkpQs71EEEPNyMcMYze7OBZIko2lalstDlPRU9rPplGo2Y9gv2dwY0RqNwbG5uU1aFORlj5OjY/KeJctTVJrStob9h/u0TcXWRo9er0SblmpuwFq2L+5QZArpNE1TkyUJWSqRAoxpyJMUjEVYS2Ito7JEa81sMWe2qJFJSZbmJGlLUzUIqUiyAmc0rdZo6zwwRWU4IdFGo6RAB3bh6XTKsF+Q5xk4y2y2wDmBWdbens+9RGCSJpR5SlVX1Msl1ni/glI5zjW+PyrP1mld4v04QqGloOjtkJRboDKEVKRKYSxIlaMyQ1MvKLMeZW+PhZb0B4LlYkGzrFFKUBQZ01mLsdAYS5oqz+ShLNZoHL69k8TLErkgbeOcpKkdk/GSPDNoYD73/T9LU/p5QSYERZGSl94+zfMUox3aWbSzuFaDc74PCMiKEpnmZL0+aVYiRI5z6pnhSg/2kAixAvxHxg8RfXHR/+XAuWjHi8DGEVg6XAQ/rCIkXcBrLfglw3hYZUSLEHAL/pJO6kUgZQQvhwAUK/D++f2sYMXCR5jnZPRhCy/7oEiCRHpkCDrHwBHvTQw8yg7gFYM51jkfoJGrgFsM79guKrvW0o7uOB/Z/evIg31enpf/tIpnS7boZsl8fMjHBxWuV2GXU1QvJ8l7LBYfM9q9TiIyUhxN2qf34q8x/eg7FJdfpxhe8pIVhCCqgMgO5Mv6+Fv7VKyNt+4Yh3R+bZbW+5YdmtpJksHXqezHYI9wIaEznhpBbOt3fBrQEKUORIxw+zZYq9tqPoiAiNUXIoAF/FEyADzsmdjp+jPFcLqLQf4uBvGs54+T8ZlaoBAePJoq+hevM39yF2s1w90tnLAIK0iFRDdLdHVKsX3d24ytoa2OqNuGvhMgzNr8He8fUjbt0/t4KdahCitAwpkS98RrAfwuEs968Dle/yzzRPfOn+E/eHrfHY+xT4FGzrblWt2kDDFZiTaGxljPHBVitM769cYfF1QPhLdVkYLGCYxzZNbbrTbIzRtnz/n+V+9PxDRYF4EEa+0dbO34rwdyijUpsgDEYK17rvURH95YQzZ091615bNKbHuIr+fzVzaxNjy8b0hjkgJpJmjjcOkAJQUYMEpR1GN0WnjbgxKcQ1qLNRW6lSQignUU2gmfbP4F9z9Tl/B4Yv2P9YdfC4fHMYZ4mgW3W8m7g+FcI/r9wbkx4K+zAimd9XetX9N3hm6qcOCEwjnPrJElllrn0FgubRxi1QaPxhkK548Jtoh0AqEMexsJPz1VFIkiTSTWtAipUSRoDfcnfa5tLDhaLFksc4QUtKIhdUnHgvZ5xW8HLcK1wTaiA778VZa/PcCPPCeAgbxeYZJjrfbGrEywxiIxSKnIshTroGm1dzRojRDeAFYi5pfHjgZKqk4LvA2bhzzLEJ02Y+IReTbQOUnZIcUFK60rFejztjZHnJ6ekmcZumlWDm8ZqGqcCJv5FCkEp63hx3/+5/zDG9dJNocMvvwGV7dL7j3cZ/+Tz3j11ddINy/x7u//O9RyzO0b13nhjW/iBLz80h02QqAkcA92bXZhZ4d7G9v83//xP+XvffIps8ZSppILt15lumwZZCnXbrzIvfffZffiFU+jKj0V0bCX0C8K9g8OuHf3Uw4eP6TIMra3ttjY3KLoDSh6PYTyzhQpFDhLkSX0ig2Q2xhjaRrNUlvaClS1pFRQ5opeOWCwuUVbLZhPCpw6YTE9ZXH6BIuiP9rm6s5F0uE1xrMlj05OWE7H9POco+MTesYwGhakVy7RWMuffu8HPHzyhMPTMQ8e3EXXFcPegFdfeoHWtEyPeqT2Cnvbu9RNxcfvv8+FS5fJ8oGf/CQsqpqqqSml5MKFizy6/ynVYkFR9qnrGca0ZJmPhtfSkQ5HjC5dIs8zVN5nvq
Jd3ewKZ6+tbvvODE0Lc8/yLL3n28Udcbfd03ZqzkzUvX70gKpxfnNOszuhaT8OWu1cv+OzjT3lxu+f84Zq7u2siwnZ4SOcwKTzvCW3L5cU5n1/vuHr9ks3Jh6xOT/nos58zjgMrp5y2nvNNw6ZtaNsGHxpAGGNmt91z+ukzvvXOu5z99DNe3PV0qzXXt9cMMRWDQ+mHSB8cASljT2iCZzdG2sZyP/fJ1ktTKraFPOuBbDMtrKKTsXFQ2VBMQbUyXEvvZLv/NHdQVDpEicXgrEaUK85UNE9peercaJJ2hTpiGnf2fVGeUep7k0GLBkmxnCbmqDLd/2357SgVqJ8bfvMI+nnEfz1+CZ5XILSqRMBxEKl+VkkXS2LH/L71mDcRUo6B9pU0Ub9bEiaOgfHHiBlL4HXePnNyRiWQzBVPKvFj3l7zNql1mKsw1PpVwsRS4aCmAan3nh87J8nUfpqD/3NCRK3zPJXK/Jnnx85JDXOizLJd6j2qYsE8Rcmy3ZaklXm9ax2W9Z4Taeb9900g5Xwc1PPm/b1MGTT/bl6W9zim9DCv67Le8/E3P37efvMIw7kjp7bFvJ7LMTpX4Fj2/5yAshyT8+ddtsExhZ15u30TeWV+zry+byKFzJ/l2DWWbb2s27Fz5vX9VcrS0fOrHH+s/CKCxDeN3zeNw2+q46+LjPC3Af/f9E7+10KOeFvelt/mIgJNAFwoEZcj5sfxRQrZKOhOfCGCW4rSpIB4Gl/S7SKMSRmHwfwKvjdChXhEAlkheEcTgu1VpAKXda7WgnUWJ1BRHjFnrwHC5LLWycFZn6X4lupxeFMCwJRlVbM5jxR8Ay5bsE/KGVVf5K8LKUXL2qGJ7CySF0rADZS0MMVGIFODJHAQnEfFnH9OHHEAHYUcbb1PueTaduWaqkZOiabm2XiT3RbvcF7wwdRQvPOFGFIUxWIEcahmcraUEc57S8ujlP1f2cAriHiiOvPNeEtH3PqA86bC4VpP2zQE78EH28AqODKaR9SNBK9oGo2CIwGRSGhakmRUEq1s2BRf1upkZL05pe3WeL+iPdkgWYhDT7/bk8eenCOhybTdyONHDTnCdoi07ZrLJ4+4vDizFMT7gf1u4PbFDf1+IDQNZxdnnD94QLtZcbLpSv9EoireB1JOqCZiHAC1VLtjT2hbvLSEpsP2/wMpDgQHp6cdTk/Imjh/cMLJ7UDTD3Rdx5fXez5+cQMP16yC0HYdY4ZthO3Yszk5YxwTo8I49HRrx2rtkZxpncer0mpPbqrSDIhLrNuO/Zjp91tSTgxDJumeoU+4HNle3yICrQ9cP39J1oz3QnyVOT8/R/tIWJ8So5CTY9hvYS9k9bx4dct+t7cIUnHkGNmshLNNa/v7HEkqZEylJW171puGmKHfKym33N4MDOk1XdOSstA0Hdc3W4ahp20viAl2+wHClhGHDy0pFbeuuAMooBQ1YPPP5WxgZi1VQbg6CXNFH9TSPuWULNgl2Xs0RNs7xGSqqBBQF3C5AIuIyY3HQoSSg5w6lPrlbOPce3sfnf/avlCWchyUucm8POXZ7u9z4X5wgf1elT6K6kcJkKsqrhlK6pbih7kXiX1IG1zrX0FEEZnqU2C7w35v8u9J5XuUgMr6FAcEVDB3kNbF4G15W35Tiwh3N69RVdqTS4Yxkvdb2osWmobrLz+BPHK2aXnynX/M2XvfZxwjn/38z6Hf8e1v/wHN6TloxudEdIqXCEnI3l4i23dX34IAPeoDpD2r0/f41h885uqLH/Ppn/4bLr/7+3zwD/8XXvzo35L6ngff/QEpRcQFckoIntS/xg2RTeNxqUe7E2QiWOg0I02/aU1fQXHi6wTQmzpa2XuVyPx7pA/mU8A9OPd+O04AfK1HqUG9LxRsYw52z/aC032qD4mJKLzfPyc0Iw7HMO7ZXj3j/MFT8rhl9eBDsjpCjMRmw+bSM778jObsHTKO4E/BdUgUUG+pwXxG9P5cfnjaElApRX3ha8QNPcyV8/q/YZ48fH4IvDzmm7HmqXbWvSQfLMkfb7qXl1D89glQggbQhJfMy3jOk/AlXViZ3wfBu0Aa9zgX0ELurXiU5oj4gJeG7Ft2e+Wy23Mr38JJZlCHuISrAHtdi5RpjblPSDpeJgyEQ/DxYW2e6DfMF0CZESDq9w4D7idcpIw7mY1FjvXb8rtaJ6n7DJnWweQTrQo+RU66kRfDhk62aB7ou0t8HInjNbm5oPED+e4ZfvUu6iJjHsjc0vqu2BkFcinYzuGtODy/7UO4PxZm7XGsWefkpAPkc5gNan9Mvx8th5RLVS1I9KC6eLA9tOBhh4vNr1+PzzmVoz0xZlaNB0lsvPD+e+f8f//0EyNnaFHEF5kwyp6W8+4cl/4a33o0ZjRFUGi55fy849V+TUy2/3LeG0YmrownB9nVcGli2vDsZsODM+EPPhx59uqOVzeeX9Qif9vyW0H8MId/JXYo6mwziuYStVFAhqIGISImeamHBae+dN75e0ZnKpuEGBOK4nwogLNNxLGohATvjHGZFe+UzGHAMhEEqniPMgxGSHBO8D7QNC0xmgKDJC0+B19YlDY4npy3/A//4h/zJ3+55uO//hG3t1fk/Z73v/09pGno1aQ1d2lXrmly66MaCcA5xz7s7Xfv8D6w7lacnJ6YLLkoQ59xPjD0fYnKcOSULZcbUialNEmVI8r2bstuv7PUJUUhwznP+fkFPnhefvkJ2+2On/zkR4zDUBQHBB8CQ4ykocc7z36/J3Qd69WKbr3CqSsbKEV1JKUywYjHrTZsmoa229AWskoI5jDxzhRQhv0dw/6O7e0d29sb8iTXXYAtymQdMy72vPPuh7y63bO9u8FJz3fev6BpPS9eXvFi35sqBtDHRBdKXVSNyS42JqoUJEhRO7DXOqrF8lTQXMvvQz8iCAnwQTgbVkg+IfUe58xBZcZScbyQcM4Tb3eMqtyVe6kWckuMjJOsvo1HceZsEe9JeJqaYFfEZF+dK3NpXXDMwZay4FwBbFSKE4dpUq/mSdOYA60NgRhbxjEyDpFxNGm4MVbpSkwGC7tunhlk1heRjDNCVDZSkE3ogogRtJxzvHvWcjeM7PWwiU4x8+OPPuXRxTlPHz/k9vaOdx5eMCiMw55PX9xw+fOPCR885otPfs7V1RW7ISJuy/OXwt3NDYJw1l4jm3PEtTQehuELrvc9iDAOA+fnD9j1A0POrJuGrIeF/8GDC86fvWafMq+ffc54esrV9o6b3TBJEHsx9uDZ+SWPHz8tCyxIt+FydPz+g0c8vrjgeniJSMODBw95+fpVSZ3iGfo9r/od3tk4N9Ubh0cYU8ZpZp+UqA6nBQDHnBpmYNW+M5t5KRwvDsLU15YSxpURYX1RjJsKxIkQFKpoXjUY6uZf1fjnrlzboluKeTHllTNHVK5jL+dyPZ2GaXUwiDqqPOvb8ttRqjOrpgupZQ4q1/Qnc4B+XuYkjTddfwnS1nvUn8fGXP3smILGvH7zEmOcfp+rXyxJB/P7zr+fP/sSDJ+n7YgxTvVoWyNO1naaA+Pz+s/TZSzTeyxJ
E/P7zuujekhXM1c1madzmffLkphwjKwy74d56pZ6/0qieBPxY670Udu3EoCWhJ05oWXe1rW+x/pleXy995zkMC8i8rV0Q8tzl+oty+/fBNrPiTtLAsmbzps/0/KebwL458ctHc7ze83HxXwMzI87pprypnd9fr9lupHlO3SM+LE8p5bl9ZfPNW/XZV8s23pJFFoeu2zLX7Yc659j5Rc7hL7er7/uMifrvNHhxK/eBm/L2/K2/OYVEWibYMEuydQiNWdSVEY1Z6CQ8c723qmEC0YL6S94rSmVokb8yEmJNSofQDwhHOwK53xZn+o8JJj073zNqf6Y2Xrk5lG12N4UgZl6V40KdL6h7lMz2L5Hq3qIgrdjDTC2gCLNOvmqqhy2YGlUSZauI5NNuUOkeHN92ccrmkdcML9CJpBFyRKINR1OyjgSvm3xrsGHFte0+K4xEBqxbZkmxrjHFG4dwQeCD6YkW6Z0J46mfO5DKM+jJXW41d+5kqTX132cEoLQtWHyzWXNU/rbrAO+XJtsyi0WfejwAdBM8I4QWlJUcupYdWfk1NM1a1Ztx5h2gNC2K5RgSg05ktJA0wih69heD3St49HlCU4a4mjjZX1ybsogKbHbbRmHEcQz5sD1vqe/2fLl6y3rL59zsup4752HrFeOrhU2qxUjEXENwZs6StM6NAs+dPgS9JHUk2IyBRsppOpRyMmDX9GuRs7O1lwMjpthpA2e2zHz85e3XJ4EzlYBNNN2HbsI/RDZ7RNZHN51tHiaOHLeCmedJyAlhZGDbISCRCKEkbMQuOsTMQ00CE0aQRN5GLh+8YrLxw/o2sA43KEpQ9Ow3+7wGfrdjtcaSM78gdfXezIjY8zc3F7RNC2iwpBGVBO+awl9pJWMqJq6B02JNHKWPlpht9+z3SVuxsSLV9ecn57RNCtwnt0I+33kZN9zsmpo1y2tjLQu4n1DdA5xAXENiBEUauqTnNWijXMipVgRheLPYPL/qh6SEfjiMFHVg1pGSsSUbLxmRTWhEvE+IdrgVImCpZ6iBttUMFRtnimqxZTAp0kdR2bqxpom8kedo0QOIAnc34cc7NCqbBhJNbVzTmgyQpJmA+qkpHRyUt5REcCUYu2Z56nUv26v3bOv69/uPvljDujOrc7qR69FK7L29wCGvC1vy381RaC//jnnZ4Inc323NV9rhvH6Jd3qhA++/0/x3Yb15bs8/+JvuHvxBY8++BZnl/+o4AcZCJjXXCyVgzM1qmy5pxZ7LFPdyoXb5nA8eP8fsn7wiucf/wVN+4DHv//f8epn/5Hrj/6ci+/8IV6V6JnUmrMotB1lES7PUqkeE70LMPCzoEbFfrJPDyAyB4Ith581CJHF/rReewKkl1NEmTbuw+nVH10Okfm3B8IvdR4Ss3nEOdIQOdlcMPQ33D3/yNax9Tm+PTHExiViUTJz4ukefIv99c/o+ysuTt/FuQ70Gi9q6ftwX+epcFhyDBsq64Mevjv2qL9M0Wlu/XqLHCuTN16tv3R27L1xND9dIbpcQp4Nr1NRHIEoGYmOIQZ6cTSNlPbPpgyjM2pBtaNxNOIYUyLT0jJyN6zpW6VBLFVf8iSJTMrhtaW04AXUUTLzzcyrPPdF1IeY9Uv9qfP0QVL9XfnegSWvBMza7DCaDkQRFnWoA1Xq53K/h+qVvMBKRvZkLtqRu9iySteotES5xJeUcU04J41XRJfwquR8jfcblA6ftkBG8g5xK6YhX2qac30WCpH9/vulKuU5q97hsRVapzE9jbn6YHogxpQLTl/N+8PattgelX879zVPeOGB9CHIZJtUs+rQ+ti48qaiuG4DZMeH7wc+e75jHxMhyDSOslqgelbl3N1weRG43r2Pu/6U4ITYX9OdCSfrU17vVsSYUZcRJ+Tsy6PWZyxcBNVyfdu/vLr2vLrxtG3H5UXi+ReZeB86+LWU3wrih5UCZmZzDCStigUlUlWjRYo0DYJYVAKWUsWMWTcZrbY5PxikeXoVTHJSpKYttI0KVaZnWvOCMcG8n1ItpJwBA2I0JbwXmqYtQQ1iUlpgE2kecWTEBULTgjg0m+xf54V/8Ye/w4fvPeV//df/nv71V9xtb1mvT1ivVuS8YizpPBh7VI2Nl4pD2ntfQHgheM/Q92y3W3uhNFLz5NaX2juPc8HaNCfAkXI0ZQws9UPf9wzDSIqR9XpFtzqhaVsa73n56hV/9ud/zrC7nSbBjLJarzhddez7PXHfI97TtR3dquNksyF0nUl+tiu69Zp2tWLVNZycnLE5PadrO9r1iiYERCAOPfu7G65fveL29QtevnjG7uYlt9fXDPuBMUUKlo0mJeSMC4GklsbiyxevOX/6AR98+C1efPUlVy++4sFpw8PLUxpv42McE8M4MOZMSILxTAoAgaDFYTQttyVzhWIAuz2/w6lJK2oy8J7yfRcCq1VnaV/Mu2XjL5mTKcqIDnNHit0f53FOi5xnsmgkJ0acUGidKzvYTMDIUDlbxIdrQiHimHJF0kwo0o51nNSFzGHMOEWLdKWQNTKOJS2RF0JoLY9bMxJHz37XE3VEKBFbTkzatjrNyobcblIAosnuE5Bk41/NOedE2bSOy03DV3eDRXuVdri+veOHP/2Ef/KDb/PuO0/46Kef8mDdcXV9w/Vuyw9//NfkwaRUw2rD5WnH+uyM518952b/Co/y+MwcdUO/Y3f3kucvb6x+qmy3W959513a3R0inqbtiEkng/bk7Iz3Hp4SPNz2ke2wY921lssrmpMgKUi3ZvX4Xc7feUrXdpxsVjx8dMn7Dy/wDp48OuOr6xuud71FX6XMKnhycPSD0Dilj7apv9vvWLUNwdv4bAOMKTEm6ORg8NocV+Y1sTaXbIumaW1YSqfKmXVFiUPVnIzVJM+za82NFqeVWVx1OQ5mjEWBuOJEMceq1jRX9cq2ZyGpmRjGHi5uAT1EglQKSqYssm8xo9/osnQyzY32JUmiHr8kC8y/OwZWL0kfc6D6WFqV+XfHAN1jn81B0DnJYvls87QwczLBHIC+n+rs69/V85dkDjCCTL1m27ZfIxnMiQoHKeSvEwiWKhzLOs9JFvX7ebvOyxxMXwLtIsIwDPcUVqqyyHwc1HrGQnCdp1BZ9vP8WvXfnPCy7I83geVv6uf63fyY5XWdc/cIQMv7LMkCx9IFzZ9rec/lcfPP5+PhTQocy7Z6k+rHsq2WdVq+X8tjj9Vz3kbL55pff0msWBJwlmVJmlm28fLncuzMCSvLNlgSX+qYOtZWx875VcgPy7F27Dnf9Pmb7vfLkILmxx47bv4831S/5fnL8fHrLsfqe6wOb8vb8rb8lyuC0IiiklCnOG/AtO1PM6koH6qzlAg5FRKDGpA7ZnOw+2CAbw0sj+OIuJ7sHCLeAkTEzcSiDwQOJ96c03U+0sW6IMWRWYIqoDq9Z47O6k4W8x35CSQp18ke1YgxK0yCXcGw1iWIO51pagSuZIGQQvKXAmpXQNkUPosygYJ4cC6iTvE+w8pyS4emJTQN6j0Jxy5lpO8JcTQl0qbBrTIhdDRtZwSRtiiwhsZ
yVeNovCmyinO4EkaQqsN7SimqqEY0JVPDzaZQourZ9QOSi0QlSihEEe8P6iRGhgHnApvNCb4o0KbRVGGy7vAyIiQkKk23sYCX2NK0DapCCA1OHCknNuuGnC3diPNCd7JhP0Rurrbs93dkVdom0O+2vHr5Ei+OmB27feT29o6x79nv94xjou86uscNz758wYOzlg8+eJeTswuabm1jLEVEYIgREY+IEVacE1Ls0TiQ4p4cR1IeyZIJ646N8/RDZtXtOGl6Hq0cyQVeD8r1XhmL+vDFSUfOStutuOsTY85IjpwEzwMRnpytaSThEYK4KSK1W3cmNlGVYjTRNhbokVQZ9A4h07oVTiEOI/u7O9q2LYRpx+nZOSG0hJzp+4HX13cMObMdB8YcGePImEaaIGhSYo54b35GWQWkdTROCW2gXTWsunYiVNSUw9vdwOvbHQ7HSdfROAGnbBsgB0II+NAQ1h2h7SxltlhUMeJL2qhCuyhkC5VU9gZH9ogcVCsqkmE+EJmcEFJ8EgKm2JGSzUUkwMgVvswuWUCdWPoaMYVhm0YyrjgkK/iUVfAaJlvfTeofxYdS1EsqSFihF/PnLMnzRdmjElxyPBDxc0ZTmu5dCR+uzHH2dwVRKvEjT/7wqa1kHujDNGfWaVKkzlsLu3tCfKYp8d41p+/elrflN7QI0HgDw9uLx+xevqZD8a3w3u/9Yz7+6V/juzXb/TXP//Rv2Dx6wHf/4R+jeFJR3bZ3pe513SylRJ7NEfO9WwHeS0Ct4kia6dbnfPv3/yVXLz7hkz/7//Dkwz9iGF7x1V/977z/g39GUKHXjKbA7XbH2j0kNxsjERoijhfBO6N6RC0q0GVOOf70C1WOOhmUh5DF+3/Yu+p09NfBcu6D9eVAp0yEC8PY7Jig1c4rGhGukGe0BGePd/RXL9hv/zdcd0I4fUzTnFkAubgiQp4LTmEBR+HBt3HPfkI8fQ/nOk7Wa1sXpEMlghZ6TJnTM4oXMUWs+y0x+dxLA0z1ntpEq0+/eN0nJ/nMR/o1b3w5h/met/SFHvyZqjOlBu7PyzJ1Q91Dy+z32leGFVpJpH6gpaYWsxpVOxuxtBfBYeu3imFYWXHZbOggjlSCRbMoUnAorX6s0iC5KuoXjMmV8ZlVil16IG6YbbqgxkhpP7HnyNmU5oWq8vsGH0XpHw8ksHWeqiBTsJFlJ7jiA1MKlqJUBT0vVleVjIYNZ83AOCoh3dC7M8ARsJR3ZGV0Dt9e4uQat/4AL5CHO8PockcrtwxuRGlrBZirndU2rB9Vu+NA/rg/Luvfli0glzGhswvVZi57mLk/sPaXVjyndoArF07Tduq+vsgB+yndWkGhwzxTXhNVmwu8eERHbl5f8d63z1GUP/juGf/mL25ogieJEWXMhmkh3vGtsx3Srfj09Yo+ZS7DJb65Ab2ha7/Fy7vG+s4d7p2pEnP1OVOtzvR8WkQmkkIcAl/tAmN2dM3xIfV3Kb8dxA+1jZCU0ZJro085BUvz17QCFEd7JXJw6KA8jUJ7YXNOB6KIeMTNpOalgNTOI8VArjo6guArgFk37DgDXN1BKtyXRTLFzDD0aErTdb2fL3CHCNhOAt979zFP/2//V/6PH/6Ej370F1zdvGbVrthsTlEy7WpF0mxRpTGhMppRHw1Ab9uWWCbCmCL73R7RPAGqXdcYU8qbY2XXj0VmVArRo7emr4u0Ygx78UiM+Lbh45/+hI8/+gnDfsswRnzZhG02HafrFW1wnK1O6VYbzi8f8+jxEx48vOTk9JSu69is17bRLOBMRmicsDk5ASzlzG57y+3VK66/+JxnX3zOV199yasXL7nd7hjGwcAvhcpKVaS8hODGSM1Dd9J1aH/LB+/+QyAThx3jsMenyAcPT/m973zAEBM/+/w5z758zm6/56Qx0kkuE1KQA7gt2KbaNk1QmZyupq+gROxQpE/L5B9E8aG6kkwpRmfXHXPJG4uB7U3XEJxF0GhWGvEkZ44U720iVwEvlXlfXgVnyi6qNd2KNVNwoRgArig9FEMgJzSb/KX3gRA8qoImh5NUgJGi5JGhaQM5doSmwbcj4xhJyYhFMSZyjHVgI26+MJbNeM0TXAyLbAOzyG8KT046tkPkZqiRXDZuP/7iGZfnp3z/gyc8efKAT754Di7gxPPy5Ws+6Ro+fPcJTTBO4Gnjufyd76AS2N1e404e0XQrdjcv4e4l754E7rZ7Ipntdst2t2PdtsVBpTjJOAlklPXmlMYLj08bHp+YGkiflD4ZoSGJwzUd7333+3z43nu0zuHygO5vuXv5V3z+0xuaf/I/cXqyQVBSjISmM9WeFIuzMzGmhG8aToJHW8f1tqcpxzS+QVUZshphCWbOgjwpeLiay0opxpWb5tJp1VWM6qEldUB1WnAwWhUg1/EjJnNcbVSpc2mZZYtxcHBnUoyA4jpwrqiiALlIAYqrb9PMBDDy1HSBt+U3urwJpFwCvMfA3Hr8nFRxjHwwP39+zyWIPf9Zfz92zvLeS8LJ8rhl+UXPOAfvl6SPOVlgWa+qAnIMlF4Ct/WzZUqSY/c/Rqh5U32Ppen4JhB23gZzksax9o0xWq7wxfnzflqm6fma83f29zLtzrF2mH9W26qee2w81t+rskit+5zcsTx32Z7z51qmu1mmRFref973c8LLklgxr8c3lWW/HyN+zAkZtc2WxKn59Y6Ni+V7e2ysLokmx+p+bEwuv5vfb6nk8k31PUZ0WT7H8pxfpfw6CAq/TF2+qe7Lc5YOxvnf8/rOx+gxoskv+2y/jnZ70xrwtrwtb8t/qaKMcUBTBSSNpKAOvCZTzMxljs+Q1DGmbMEEYqoRaMTjaIMQGiPd4zwhtLgmEJyzfWiKJBFGHUiFLLpcD7PLRnIAvLP0JeIOjnQRu+80jUxbHNvfaHE9K4o6h0gw91DKiAaQZCSIqhJa9mhJTT5YyUaKKGx7ma8rzplMdgkskkIakZgQGZFkzvEonsY3tN7jvbFGavs5EVzT0HUd3XpjP7uWbrWmKVG9tr9zFG0AKKB3zCM5J8Zxj+t78wWUyMFUJL9zKqklVItqR0LV/FAhBFZdS9t1hBLj4h2E4A5+Y3G25yxb077fMcbeUuOWdvA+4HxLTD1x7HFqjlUNnvXmIW23Nv+cYqodaSR4gJFxt8P7jphB4sjKB5qLMwiZECI5wbe+9ZSUlbvrHadt5KxTrq+Ul6o8H3uu+4w+v+KkyTRuQxov2N4JfuhBPCmNePFEBXEtG1aEpmW/j5AGUhpRFULb4rxjjNbZTRM4O93w/ruXnGw61s9v8K9Gws3ILTCkyO0+k1A8lt4oZGgUThvh8SZw0QZOBNTZPt6XKFsjwSjihSBqCpsxEoojz3lHGyOdt35THLob2G637ENP03QklPXZCafnDbd3O7Z3+6LUGmnSiGhk2G1pgkf7niEWLdym4TZnzJsXOWuV4ALBNSABH0yLvG09tIlthH22sKPGN3inNI3j6YNThrihWzdszja0TUsIRbkmBMATc7HjKsAgkEXIOJImcknJRAlA8SL3gFPBTUCiRW
zb+25fOpQ0AVcWSm/vMqpoEtSDJodGV9KmiIGL3oKfcumPVAhSTk3t1sjFFkjljI1hLsyamleKx0QsBndyhVQ7W6uySS6qJunwe7ZU3FktzY0rgLGKQ9SVNFXV92ZBcfX5ybZvzOYYOkx65adTKSBZCaKcKDQlKMAmzdInta0Pl1COA5Vvy9vym1eUly+fsfZr/OYDtj/794jL+OaEZz//IU4DX3z6Eeerhm//4T8ih7WpYRdfqmD2wPQCVfd5/aUCr8u93uKzgKLZ0Qfh9NH3OLl4zPOP/5o8jqy7li9/9O948nv/HI+jH3okJ7yzgGZVvYdsBleUw4uvfg7b1ltWX/S8yOwaEziznADqflN1uuaxNrUWmJ1c55xj/j4n0zUrI3KsKes0Ifs9rlEe/eD/QowJl0ZE0mSLaSHs1ozlYx6Q/oqwPqXVKySeoayLX7ykeZHqDz/40SsYbhDlYa4/NMLBV34IQWfW1QdfuKjOmlBm38+e/97YqAGbhdw4a7bDMYc+qBWvxJB7F5zdQ1GcWMD9//l/+gfsX31pyQ9dJVZ7VJPZtHUMK2ZDOiW4FnTg2995wrMXW15Xvgb13rXP6ud6SLuo1EUf1WTrdmnbqnaj0zjR8lha1i7DGu4/lkIhbYI72Pf3/Dx2nUprqP00+Ric3qurqAXNM917errF++HodCDm19zmDdGdknG4QhxXGlQSJKFze0Q6U/sID/Dtmqb/iqw79sNrspwj7X0fzdd9INUvU1+ZpY/tcNx8dCpp1lxV38xW9Po+GoHWWsiXvtOi9g6C5EMfGt40v+/hvb/H7TriA6wHiZpN1gWhZU/n9qxD4mS94sXNa3LZJ6l4VB0nzQs+fOr49Oacm1fWG62HODgaEmM+4W4XEJ/Q7HCaSUVJUTSZPch9v/r0nswmwCCQJeBdQrOyG/m1l98O4ocIfhZx6sqM6FxJZCBlYqgsBbEXMzPLTW4mNKgpIWQ5RKTbZFMM7yJnJFSSSHldpSy+YgPfZEjNqEctF2rduHtMsSFmmxwb5xASTlpyAVqMDGcvQNO0qDhSHCFnmqZBnOeExB//0ff57ofv85/+/Id8/tGPubp6zc2rl3zv+z+g2ZzinCd5M/IrUDAMI31veUe9E4JvGGKRAMTkT3e7LT54Vm0HUiI+iypJlTpsQ0Noap5cS0Xy7NmXfPnqM0D46vlX3N7doTkTiiSpKJytV7z/5JLLywtOzx/w5PE7XDx6wsXFBV3XWoSMGBMSbF4IocH7huurV1x99TH77S1Xr6/47LPPefbVM65vd+zHzBhHUg4MePbRokIECEW148A4L66VnBlGI9vcXV9zebZB9Snj7pbr558zJMfJquHx5Sm77Zbw/iWbxvOzn39hxI3yPteUH1m1RBrVex0WtrrYJxuCCM4iFrzlPh1TZhgzm6yEJpRNXCFyqqUTarvAOMSiEFMIRGrpYGJRpjDnlStKGnUCLqQPjCXpJZQZO8/Gbtnw4XEIsciratnQ+WApRSpHwCFI8EZwokYj6cGwQDhZr+iHkSGOpJjoh0jf9+z3A7GPRmIpG1hjVR6iCEwIRAgCOJ3YuEqmccKTk5Y+DvQpT9nBUkz8+Gef8OB0zaNHD7nb9qw3FjV2ux/5/KvnrFYrPnj0ADTx4vUVZ2eZP/rDH/Dzz55xe7el290x7G/ZoHgPrShbwHm42W7puku8o0SimTSvU6VtW7xvCrkCgtjmfe3LAi8Kbkf69C/57PMflbzR5nhJOaMIV69e8u47T1m1Ae+sVbq2ZYzQhsDYD9wOkYcnJ+TUI+I56RqG0QBH8dY5fVLW3kGR8ax7BCkGxrQgmZ1CqN42FZvXVMlZitqROaOkbO4FcyZVp2a1AByFiTszZrTOl3Uh12pRVKvV4mQEcPlg4DqhRMxlmwfE4bWoDombnIxvy292WYLfc0LEXIWg67p7YOsSfJ2TB+YpPuZl6fxfnjev0/wev6jMweB6zWMg+7HnPpbaZfmc9XpVvWL++fzcOYljqdjxTUDoXJ1hrpRR77FM/7IE3pdEjWU6jzeRSI4RCOq9lm1ZCSE1xcybyDdLksMc3J8D/sdIBsvN0vx6tdQxWfuhHjc/vv5r2/ZrKhnHCCrzui7fhfoezPu4puFbqnnM6zp/lr8NQWHZFvO2Wo6FeT3n36eUCCF8jRxwjDTxpvfs2DgBvtb+yzY4lhJqSfqopbbrkhDyprq8qZ7L6x777JvKsfG3rP/y+Dcds/xuec03jYljxJsl4eOb7v9Nz/a3IXT8omvW8usi3rwtb8vb8vdTssLQZ3K0iPnQeHxopjS+ybtp/U/RnG6u7E9z2SNYLhGPk0ATOprVCnyD84GwWtO2HY1rcL6kYqWoC2SdfIh1jahrU3COrA6cL8CoOZGRgy+8nEl1COe5zeEcPtnuX1zA4Holp7IpUwFNdm01xQZSAZJ1JI4WMCFiQRdOAt551NnNvbN7pJxomhbajtAGXGgIbUvTrWnbldm9iAUeASll+nGk34/sd3tudz2vrm9I40tLM9M4VqtgbZwsAKpxvgAeNYWnw4eAcwGZ7LiS0lgzmk01NhcySOsD3ntWXUd30pk9KYr3juCEnEY0Z4axN6UWMed9ipYmRIC+bB1JGSclVSKR4D37nFBtWJ+c0bZdiYg0ldpus8KJEsfelELllLYdCZuBsN+yu7vFC3hve+aUhabp2O+2NF4JznOzbVmdbnDhij6P7MeBh+fnfP+7H/DBh+8RvOP25hq2t3StpcRxLkNUQpPRUbm9urIxi2ezOWXVNeZPa9eEYhPevn6BuOe0K0e7btiPJm2+CY7r/cjNLrFLif12RJPy7uWKi8bTZOWsa+mCZ902FlBEUW/DIc4UR9BEEMEJRI0kIl6EnBKiiZU4umAKJUii7+8Iww5N5ofcxhEXlNA4+ptrutAQgpCGiHOZlJTTzuG8BQj1TriLyt0wkJuWpAFxidW6YbVpEW8+hdA2+MZSN2+c8gRTmEnFZlx1K3zjcGPLxgWkwXxlKOSMS+Z/y0VJhzLWtQQSTUAgxdYTMVJGeX3N1eUKaCYICdWDgqSBhTb6XfFRWoYFOUQfu+orVkQT5IjLEQugSWiOpraSBZLZIt4r4FFRFI+TmX3uFdRPfhyZ5EYO9tdyD5Fm+7KD+mIuKjq5kOTMz1YS8U5tMvnfckak+OWq36cAMIIcgOfi55zUiIpqT7nQPQDnMFUevtBZtyhYlO4EF70tb8tvXlGFYb/lfOWQ1Zr99Ssa77j87j/k5tU1Yfsx50/e5eF3/ggoOh9ZihLzzOa4b4BweJvqUfM9WcUADiURccHhsidLwrkT3vndf8n+6me8/OQnyBB59sN/w5Pf/2NzaAPS1OvUvaL5jz1VVUFnWNgE196r3/16yIRx1GNUF3vwuseUkupFdXHN2V0mQ+7QRPfarJIqtMzRlPkOm/tzijgnjOM1q5P3GYctLmacb+xon4oD3OZOLw7JmXzzKevT9+nDmiznpBSJtzeciJIk4cTWKam+dyhr1WH+m4goFMzyXivp9Bwym/+nR6rtJAfV/
mP7dQQk1+nX1rN6qcNcfGRP/yv4PUWMCCw4Xj2/4SzsSCkaloSUtrZ+r6nAKkGDnMiMBK9sb3flOqVyemiX6bOSPmYaj4cGO3wmhhsV71ex4wsBZ5ZuaMIt9P6bw+TbqTa7zvpghu8AssAmDqN01si1nTm8wlrstDpG6lUcW16nJwgWWN/oaE8ioXyfcGJqcZGOTreMnKE0RB6TVhvyeIuLAzTCfebEN5Vj/qzZ34sxJlL4LXVLMx09S49TbXeVgsCWdtB87+rHSB+H9rCz7La18e4rAiU1OyzheHad2N19Se7W/O63L/nRxztijLQ0jC6hseeDiz2u6fjrlw0pSiEHZXIKuCbgvBL3PWcPntNzSWIkiUOksXcJKOz7qc5Hi9R2jWSfrT3+HqCs3w7iB0byyGogocjBuW1Tz0EQy9iIkQp4Zq026CFPbBYpX2vZVEpJeWISoap18rNrpix4X41vOAz5UhcpqKpUoAFyTkUxwL73XoCAETRsEhlHnZzlmqJNkE3AtS0pZVK2heu9iw2P/tU/56ff/hZ/8md/yd2u5+rqJc3ulvX6lNV6Q4wZaTziG2IqSiBq+UVjNrAkRducO2eKAGNKDINt+BGLwHGNp+tWRtAQgZwQ1xBj5NWLL/n0k7/hy88/pWs8giNF2xicnmxouxbGHe+dNzxYOS5PVqy6wMpFmrRH94LKGpoO8QHxHgHGfs/Vy6+4uXrNq5fPubq6IuaMa9Y0pxe8f/GUD3zDOAw8/+JTXj1/Rhx3NMHjXGs5OKVsb7SqRig5w25IxGQTUP/6hhwzpyennKzXbH3g/PyUk3XHl1+9KkxE4emjU7a7S149e0mFF1SMZW8/pbSjpcyQYthQ5KkQk7RadYGT9do2rjkzxoQLvmwKAYEQyh3UZEk1p7LJKqlPvCcnLSmLhJqCyEBDi1Tyztk4U53uX5YXRDxJI6LmVLJVqC6sForjBWKqxhgU+se0QTSSTjGc1ChUsTi1BKVrG1ZdIOXEMAz0nWfVNfS7nrv9QBwTWZ05xOruXGyzrpoZ1eS2XCF/aCHVnK8bHo2Zz2+HyZYD5eZux1/+5GP+xR98nw/efcrL16/o2if87PPn3G73/M3PPyV4z9MH58SUuLq5xTcN//0f/1M+/fI5f/Kf/oIhQuNbJEV8u8btdrTe03YtDy7OaIt6UBEcQ7UAx9icUiElLUZBXQBdNmdCSjPAb2as3169pP2d3+HByZrn11uGqKzWG/ZXPfu+Z0wDPnikGKda0gClGqVSds+xGCKudJTOJM3AHBWobSRU8mTs5drJKrii9qNaGbMcomdyGWelDYp7aaYKUubQMm9QDGtgcry4MtfWXLJ1jqmWkkOJuTBjrWUtNVJWktwH7d6W39yyJA1Mzq2SiiSlNCk9zFN31HMr0eNNBIclIDw/bgkAz+tzDEz8plKvdywFybwuSyWCJXANx1UzjqmA1N/nxI35z+Wz1XadkwRCCPfSxlSljBjjPQLOPF1MPfZY6pRKPFkC7UtSQz1+tVpNZI7a3/O2MfnxMLXrm1QqahvEGO/1+bHfl3/P6z+vb22LeX2W/bbs/1qappnabRzHe311jKyw7P95iptlPZdtWutZ+6OeH2O8pzxidkO+d415v9Vz5/20fNbl+K5tNm+7+XfHxsGyz2q/Ltt5fo1529Vnmpc33ecYoeJNhJhf9I7/fZdf1/1/1blreex8Hn0TMeS/1vKW9PG2vC3/FZbqV8FyLRu4aDLCkQzOiBOu2CfiMqH4KVIOpGzAQMwOl9VA4CR47/C+pWk62rYrQSS2r56vkUuSb11XFpprzEHLjMOXfZcWCWpxAZfzlPoVLUCoKpIVLwk0kHQka2TI+wK/ZlQTmYSIAcuJkhY4eFNEdJ4Q1oS2o+02+LYjNC2+pLnwrqEG+aSYiMPAsL/j9uoF/X5HHPbsd3f02y277ZZ+u6Xv90VlJeMkElYdF5cPWZ+ecndrTtDQdITQgi/t7DzqwDWetjjRnIayfyx2rnf4yTazHWLWkTgmtv2eMfVs1ms0CDnBkCNoJsaBHM3PYdN0RnMyh69CjntQRxvWNO3aUvamgdu7a7pNw2rVojlzt71ltVobsB4aUsps+30BJZKpuPiWOOyN2CCZxnl8bd/VinHoUR0Zd9fcXt2RUQJwsnG89/gM8Gw6h4w9H//0rxlj5P333mdztrF4zLaj69bkHE3NwQcQI+oMQ0Sdp2nPEQkGnA174rClXZ1y8bSl321p1gPnDz7kk599zPrZazY3W07XHa+2e27uRvpxYNxl1qcPOfUr2qwEdTCABIFkoD8uoGKBWyG0lv4jRmpgB1rS/SZTczGXkRIaaH3mYh1Q8XgfWGlknSJyc8MjZwq563Wg1x7IDBrpGiGLkjXTCKxWHeeu5W5UfMqsQsvpyZrzh+ecnJ/hfEO/t7TRKpBiRgXOL8+52+7IMdGsLOitaYxQgsugiRxHRucQbwq6Kg0utIhvcLnwqCrQV3xJSW2+MHKDTh74g18DslT/G0jxEWvxkRhwJaZAE4SAFDtVEO8I3nwozhl5xpFwRZWI7NAUjTxVfEBZdZJ/V2c+E1WHqCkvqzOF4Kr8Qf0Hh3ppUdo5QvxIqRDnk/kqa3DhYR6c7QlnAMa0r5MD5OaKr004tIPFVBqoK5VM42Smw84cBZvm+DJFWpsCugCB3pa35TetqGbWCM519Cmi+z3OC7uXXyFhhXvyD9i/+oTx/Amrp+8RR6W65+s7NPfn1t/q6yX1s3qAFI+slm+LGdPQonE0LCyDOiWNt6xOP+DpH7zD/vlPePnjPyH9yf+Dh7/zT1l34DHcKqZUSF7QiOLJxOIfdrM62fPe3ytOmHzBbStGN3/n7/lBpucyXCHLLEU8h+tU4Pzrc0fVSik4kFjQtT+0lF2/tK+KY4w977zzu1zfPcPt9qzf+YBGMFvJWU0CidT35PFLurMPyd6UoNLuOSk9ok8DG60+kUwNcJ2qrYdpXLSA2jIjg8zbpLbDkT329Pl8bMz8IcdOcByQB51f59ABh8Y9NOOEehy+0sWJ9llNs/I3H33G//zfPaVtu2l9OIxXma0hVifvHbttwrtAjntSDkhRStF7DVJwLNVCIqgExvkBMj3H1CYcxlEdWxOCIYc+qUSCiunYYx7ICkuQv55zwENmqZGxgH+dtaebtaPqQQVm5k1BJRFdg++vkPYRKYupBwKmih/R5PHuij4/YeMG1AWavGNozqzi4xaXO3CJhC666tDzSz/YN5XpGK14ohTSR3n7Kglj6VtTLP3dQS9+ds3lvXU21HU+dZW+0MO7PiN9gM1RGkv/+YCgrFrlg8sz/p//5hWtDwz7yMP1lsdPEl/dnPL6GjymoJid2Ue4Pe+dws+fOaJ4sqw5bV6zH07Z5QA6lvHztbf1DQ1XsbSMK8EAfx/Mj98S4kdxbktJCQEl8jyXRi3pAqS8eDViXKvhatewpF1lMnImOekKwOtdJXAYycSWEUXEo5pJOU5pDjJMM7c5q+0aqnkCeg2gD2UTYt/bQK7RyBZ5gAi+aemHkTT0Jg2a
IohnjJGsinceHwLf/+ApHzy55Kf/4Pf54V/+kGeffMT11RVNCFzdXPP+k6c8+fA74BpAiSlZfkrE5DnrpK5lYSss+ZrzFYHggxE4VPnqyy948dlPadYnPP/qS25vr4nJAOjdfqBpGtpVx8XpmtOVp9GMY0VMidevr7jbbkGETddyenLCer2i6zraVWeRKyqMQ89uv2W374kpE5qGzckZF6enrLoVIsJ+P3B984LnX37JF598zvMXr9jv93TBcXa6JjQNwxDZD+bgUA6SgkYOsVx3MSv9fseDJ+/QdQ2QON00liLks+e8+95TutYAlG+/95i7qytyiUoxvS8mW2saU1i0kqk9CCGIPcOqY7PqaFtvYLaqSWGqKXsYEUhoneA85GSjI2UAyxE7RQVgN5RpZiwb0sZZdAdC0jyxZiVDrtJTUswuRyEClev6IsmFGYTinDEZRUkpzowMwXkbJ8Y2zeWrg6KKcyWHqHM2jhCa0NC1DT707Mpmf4iHhS9rNpKO6iT/VNMwudKeToUHq4brIXM3pKlPHfDs1RU//uQL/vB3PuTRw0vO9jvyGPnxJ1+xGyJ/9dHHhN/9Hg8fnCEot7dbPvqbj/gnf/QHXKwb/vLHP+f1q5f0KbJOW27zC3LOPHn8iG+/+wifRxtHWmkwWvonEZPDeabcqVYpS6mTczEbXLY8diJIldtUpb99DeI5Wa9pQ2CIAw8uLthtb21c7Qcuzs+4uXptsqrOgG3vLK1UPyYagZxdcVZMW+syUxZyCjOmc66Gd5U/NeeQLVD1fJnGtmox0qeUWoc59CBVx3Rertepyh9FCrAS6qQ6PewWJhnq7D31JV1SKo4okOJzfesW+G0rx1KO1DIMwwRie+8nIsAxhYM52Dwne/yi1BzL33+V72qpAPYc+J6TOI7VYUlGmacuWSo7LMH2+TPXdpkTAGo9vff3SDXLtjlGSDh2fi1L8sWS4DLvl0rEOEa8gYN6w5x8MH/OZR8eu98xoHueemR+/fl15gSC+bibE2Dm15w/+zEizpxUsyQiLDdcy35cfrdsz/qvkqC+qe9+VaB/ed9lP9Q6zEkB83G9vE4tS/WXeRsfS3NUf86PXbZFrcs87/my7m+aR445S+b3m/+b12d+XL3OvP7H5oK/DfngTf32i67/i+71y85383osx/ab/v5l6jc/7xeVv227vel+b0kgb8vb8l++iCghZMQX30vZRaakRTGi+GFUgGCBGN5BGvFZiUNmTJmcB3YaiTnRx5GmbVlvEqH1EBpoDOQ8RM+WPbOzvdhkG3KINJ3W0rLvqBHzYCDBfJ8lZQM0BSBlywlv31JUKOoze9okZI0WQR+gynyLcwRvahqhbY3gUcBsRSzdSsyMw8D++oqx35PGSBpH+nGHjpEcR2LsSeNIjhHVTNSR2Ee07yH1eB0thY5A261YrzdIStxdX9E0DU1oSWW3mFNRr5WAF48Th1LSuajtcS0tsuDE1uCh7wuZBZrG0XYNjfesVo0FatWUFyoMcSSmaC5856bc2aqe7KyPQvF99TkZIWRUNEfWbYd3LTlan67WZzRtS04Dw25PHHcomTEnGh/IMTFsr9ndXBF0oF2tWK03DDlamuAs6DiQ9gOr1Zq2W6Hi2d/1oHe0bYsKrFcNvnGcnT9hs9mQY6K/vUVTYju+ph8ypw8uefjOO6zPzoqDQ1htzGbud1scDiceCYH12VP240ByI213QXMh7G/3XOaW7nLP+qsXPHv+EvFbVl3Pq6sb+ph4dbMjnK1wTsiarF/7jHgbk42a3zGNIzGPNN5DNvKUqWLnAuBbpvqckgFVg6mdeLA0ScPABsH3W1wK5DTSIKTtHp9NXbjB/JtRMym7kg7b04RAZsSTOGka2mD7+WFMSBQUj5RUxU3XkDQxxsjF2QXeQd8PZCAnU5MJrdmJo3U6rjXfpPMB8S2hWRVbUsgp43KmpkExpdWIAQyWWsnNfSaqUNR5qm8XCUgo6XLV3luvpqSiHEjtIVhqJVcIIEYOS0DCqcMGafUXz/Yp2ZMdqPfmA3Y2a5ifpowR0XvKjlM0/ELd4x7xI9VUS2o+5Mn3LXUiKlG6phaCk0mBqPpvFAt6FDfBdVRtlBq1fc+SKtesnx3OmWCcma+0niJUcs5bq+xt+U0tmhNdEwnugt3tFXHYElbw3g/+Rz79+CesTk5pHj7i9tnfsNtecfGdPyTnZOm+LU8bmmcBSvcuPkHTpUjxt9a9jsk92Ow2Ig6cDmUtF0SCrR+SOHn6fc4ffsD1z/8D13/1/4IUadoNpgpeAoVxJfX5AQI1HDZPQX8VTC/WVkXE7Git84FOPmYmXI5J/cBsgnr9+sz1ajrdZ/qY+7+L1rtrsS9rHQqap7YOCKApEzTQNI6Ly/e563/C9uUzzh99QHANo1oq9WbYkeSKtH6f5BtER0KKqINmc86+acB35BxpXG0HKWpJB7KLYEQUEQu0vB+WNX/WuedbZv19mEenfS0HP/zSF1Y0Nw4264zMfLgrpT76tc+Xv937swLZVb2qDYh0SAilkjqNFXDTZ4bhesBIRX3vLDVICWqt7aS15tM1vt4WhQ3AlOVh+mq2IE1+nNkjVF8RoOTSeFKhLqvnop2mWihmR5d2q+cYIbO2/aHGlVhin8vh+9kjOZSUNmR/ymb8ihQuGHI3vS8ug3PXZC5AegukdQ6fBR13kPZEaWi6hzi5JYneq/OhBSvp5TAGF093pBzaWjUXVbBSs/Icx0asnT07+wABAABJREFUX7zv9rVDxMZkVXhxcq920/FWb8OndN74s5Ix3DRpJmsg58Tjs8D1PnE7Khu353eeboluw0++OiWmiHeOCCCK5AwSOG9Guk0DTYfb7hmzsNtecrm5osmR2/6EPLHFmOooU3UXc7FYij+Xa0f//eBYvxXEDxGY5PlKSg8l47VhgjjLQuGLHl12OnWUViO4KByIiKVZ0EJWrg5dOQzs4g4w564WJrhg6ShEUScmAaXTFDJNBE4EVVfURGzyE4pBXdPQZMU7k/F0knDBkVxHwtJZjGNERQopo8M1LWkcOfWBf/J73+MH3/0WP/v8K/78L37Eyy8+YfvqmufPXzCOPevNCZuzM263Pdvrl3z4vd9j1awnEOmrTz/l9PyCsF4bMUKUr776Ehl2nFxc8urLW168eM6LV694/vwZ67ZEegAxZ5pg+TrbILx7ecLKC6qx9JUna4nkH0YUZRhH7vYjaMZLrh1a2sn6yTlz+ngfuHn90l6gpIxjYhhHhpjoh5Htds8YIzErqY8gex5fNqzOVuh1ok8RNE390HhovEPxJIXglbPTjtOzM3zwDPueTz/9ipubHU3zgstHD2iCZ9U2XFye8/LZK6pV40RKVJItbMF7fOMn1Y3QBJoQ6FrPqg2FCGAqBqrYeCkrtRkgwjCaTKhqBUnMWMi5pGgpE2DT2PjPxR9VmY3Gni9Lpt4HJmxBLeC7s42j8760TVW+mUWol4lZnG2ioZKWXHmVtIBP1oVeFRVnKXBSMseaCM75oobSIAKh8fT9wM12z9hHstrmXLWqZ2h
xLtlkHsozZBE2nfDuOXz8es8+pqlO3js+/eIZp+uO77z3GO88rSiPzlZ8+uIGzZm/+slH/N73v8vTJ4/wvuHmbscnz685aRr+6R98m+vtE756ecWD/cDT7Ttsr6/43nuPOFsF9vtIKv2hOZp0bErcbXfc7oT3H2yAbG2llod3vlBmpahmlH8KZGHY7+nHkZOTFW1wJRd1SxxHQtuaxG8c2aw7cm6IKbEbRotewjHGyOBhTI79YFKkbWNOzgMYVwwoV7mp9l3KVXyrrq4zM0ukjL3yWcm1ncvi5r0UtY6DkeplLk53MLrFUVLbqPWpm6kplLFUDfvqGCg2F6pi0VNvFT9+a0o1xpfEBOAeKFu/r4BrJSNUEHmukjAHyuu6Nweg58SQY5/P59FjZIhalmDovF71mSp5Ab5O/FiSNA6OvQPxY0l4mBMklnVZkhXq5/X8qt4xJ5W8CRid168SDo6RLeZ9Mgdba19WMsQ3AbHLNpn3ZSWOLK9by1KppbbjMaLFksiwvN7ymedjb07qWJKOjhEN5qSSN43nWuaO3jfVtfbXnAw0f9blM7xpnM1JE8t2W7btsfrXz48ROpbjY0nC+iaSSR27y/fvTaSk5X2X5I35e7es27yOx4gf82vO77U8dnmtvyvZYPn+vOlav+z159f5pvOW7X3suDe13/L35We/Cgnp71KWbbccq2/L2/K2/JcrItAGN6VdSVnJ0fw3KY62hxSHpZZ1qPfmG1DbcDqvCJGkNaXdnhgT4ziQU8b7QHAdLjSohAJyyMEbq+aDmWyqUq85kbPuj2y+qEBD+Vl8R1L8OrmACOIMSAdLawreUpRUp7fkAj0Um2IGYKSi3DGOkX2/I+VbNEcLXJIEpY1SHNBU0jLEiM+mcipOwDtcdiR1RrrHIW2wekvCOcW14EKDCw05Rbb93iTg0wrXUPwPig8toS1qrK7BaUCyt71iskzfqah91oAmEELA0rl4oWsbnAgxjURKABVM0uBd09EERxpHUoyM455x6JGsDP2OrHuca8k0jONA064Qj5GAJOKdpcIZ+j23N69IcQ854Tx4F+i6FYJjGEe68wdsLh+iMdLv94wpIc7TdoIPDbLuaDbrAqonYo6Ek8T68gHb21vSOBB8oOvWNE1AXKZpFF233F7fsBu2nJyc0W1a4rjn7ibSBFOh3fc7Qrfm9OIhTVgDgYxytxssFsd35tzOlmJ5tVmjKO+Ex6xPN3z26TPc62viStmnyLaPvHaRfXAE0ZI2J+EcNOJZBSMSeFVWHnARR1GYoaqmAQg5xaLaYAQjxNRKvAoajQyQhojzzvxE5Z1Viui6mKpxEkV8h6elqiV33rHqAqfnDScnga4xQoiKZxwTYOCPZkGTBVa1jSskiMCYMtt+R4qZMUsJlPKIBIJvCb7DuQ5x3nxN4i0y3ReBkGrHUQkf5l+aJOXJJt+tlhIlayI5RVIsyjMJkWSqHN6TfTagYHrfD2k3fQg274RSFweIRX6jyYKqVKeAm1nWGXMrO0W8tVvlvFUV10lhtsxTNU14Vp3ILTlb/XM+pPvOWdHi6zF/s81h9RqiGcnV7nUGmFo+mwJGyVTH5b5F7ilJVu/S4Zmmb+XwoIbNyeEUKrnvbXlbfjOL5kg/rnjx6nPCNiI60nYn0ATG/TUPLh/Rdms2f/C7fPnRf+TZX/5vvPN7/wrx5uPWrDgtAaVa7YpDqZ7dcrfyYcULqv1i/+zd84czRDCxMkES4AOXv/M/8uX1XxmJrTWim8+KOmidsmlgF7H08GJq5Bb7rKgovswbKiWYj2JTTLQ3w9Zw3AuAN0y1gPMmKWQ4XlGc1nLNQ7oSnVKVV9C+zkEz6q0R9wz0QMTUKbIqXgz81diDjNxcf0XbdKT+hu7hOVfPf8aDJ99CnGe8u+UfXLzi2r/Pp3vHuNuRdi9JcU/38AN2N68Bj+QRJxkI5V7F3ivrrU7dU/fChwaYtw7wte+Bosz19f23iBQiI6g4UpGtkOn85bVqWq8CWBdcQuoZWi9W8Kk6hJaX0RIDLdi6mB2rk7VJPRRLV0qlswgqDnUZMOxJybStZ7vPnJ6AJFskjKhogeqqDkhmA09tWAeP2ZFaZTqW7aV1nDPhudN39Yz6jigz/NgxV/GoD6tFucqap2iJCFPH2mf3G+h+X9Xf3WR/adHm8GWFF3/BXj0+vmLtN4yyZsQT0hW9Ge94jWhucLmnTyOka4bmhNzf4M4ek3evyhPm8s+V55os8NI/tT7zDr7ve2SqMVOdSzNNbcxsrEpp8Ht+yNkJ9b+HfRQTDFXvU8+3n4ZrigqLYU/F83NyeEmoy8TBs16f8R//sufR6ooPHkc+vbrk1U1RV3SGU1blIcThNPHehfBq21oKvlyUJkV51Z9x3ux4vHnNs90JSuEaqAeJOJ0r0EwVK+9EbST+3spvBfEDZbaQlYm9gJCInxiDIg5VQXOaBqVOM6+UAeemiU7E0gy4DMlhm6E6IZZ8PlkPAEJNUyByiHaYjF+xvEM2WO1nKuC5yXo6NJaNuMg0x1quxjhNKqsQ0NCxWgl9NIO+bVpUHeozTm1RW3nh9z54wnffecS+H/nRJ1/wkx/9iK8+/SnPPvsUr5ntEOn7HXEccSZRQNM2fPzJZ7z76JJmvWIcEuTEV6+u2N/dsFqvTEI0JlM5EWHbjwasd4HgPaedMfmDCI1GUgJUaZpAAoLUSBbrvqTKGE1FYiwOgYAQvDktUHBZGcXBEC2dSpErjBlU07TOn522NI3nxfWWfrenj4nX13ecnXSWEkXTYbMjEILDBwdJSDnz9GLFe48f8NOfdoxj5rNnr/ns2RVNcLy+uePs4pzgAzEmnjx8wNj3aFaCD3Y9XyOFBe8D3oMXIQRXSDQO88WY+oOIFLEQP20yxXlCcR4pag4dX8H7w3g3VRaTwk2piA2VSVNweBfQssr58p7Ygu7K5FaheDN8vLccdCrG5M/2SpDUFglLW1Ty8Tkz7rx3OFdk9nMs74Ftsg2kt3RCOCNdZRW8T1ZfzbSNOWmaxuOccJ3v6IfZ4irF+Ckf5QwDCZO7tH48bwLvnLR8ue2JCVPrEUdS5a8//oKua3n34QXnF+eMcaQfMi/udow589c//5TNesXDB5d0bUNKiT4lhr7HI7x7cQoXinLJ3fYBMWbudrsScaK44CeHVRx6+jHyYjsSgufxSVsiXyhzhfWT98WQmeaaPEnMjXFk24+sNhu6tsVh0TChadnu94QmEIeRJMqm85yHBo0tr2/3qEAeI6rhAGpgkVTWmYK4PJnBlaXpyteVPalFTrSmB6IY3WaDl0XVmSyWaLZ+VrWol2oEUObG+vAYM3y+6BupzsZe3SwY6a30+7SilzFXjVYnM7/AWwfBb3KZkz6q4gcwpXUBmwvnihVwIDDUc4dhYBzHewoRk3NuoZpRSwiBEEoexYWKw1Ixo2maXxo8rOfGGKdnnIPG87rMiRrz9Bw1sqz+9N5PqVfm15sb6845mqYxhatZ+8zrNG/jJdg7V8ioda0qIrX9a/vO770kWcxB/tqvcAD25wB/reu8n2
vqmRgjMcapX1NKtCVHeq3zm9Q3KulgrtyxJFvU7+ZtNSd1LMk2x9Lh1DK/9hx4fhNxYf55fY55OdaeVeWjaQ5S9vM+WALu8zG2rO+8z+Z9Nb/mktywrP+y3ZekjDlZaEnOWT7/McJB7cMlSWX5Hh5rg/kYXPZT/Tknwhz7fnmfY/26LL8ugsHfhaxwrN7HPjv2+69C0nhT/Zakizcd88te69j3byIavencX/V+y/Lr7NdfpSwJQN/0TP+5y9+lLsu5+r+m53pb/n6KRY57Ega4xpjooyl+aI6kmHBlj+29w2mDE49FrzpDRJM5a52Cw+EVKKoTt3c3FmxQItZ9aL5m891zTJY1FCn7G4p/SAqpXQ97IjAQY+5HcoXI4RB8sP36IUTNvJqalaQDGiNjjJMyR4rJInslFuUFmfaRThzBBdtf5RI96U0RQ3O2e2fBZSPHuCwkJ1BsqQoGUMSntbEADLIy7HvGOACKy97UQWPGqwE3nW+sfdIBODc/ZjY3tVjgi5Q9mvcG6lOig0EZ41DsDbuGEwjOXK6aIjGNbK+3xP2WcdyTxj1x7M0pmwHvCD6j2pP6HXJyympzTghGBIl5YDsO9LutnSeedrWhbVpc6BgGRUhYNGhDymqKE65l1TYoI3HYE8eRNPb0+1tLhRtaIgLiSdmh6mh8C3jGUfFtsKAnEsEFNmrRk8E5xu0djIlus4K1BTCtVxtyVm5ffkUTGkKzIjuPOI9vGkxVpQHnaVaOy80D1rstXz37inS7J6xaTs8eMCaP7neMfuSr4vNKFdxAGLJFb6/cyFnjedgFHq0iayJeE40rKYKLyqcXZ8SHbG0kYoqwOo7m6ynvQs6KJlORoUi4Zy3Ep6IW7ERREkhESrqSkzbw8MEpm01L6Db47hTfrXEuAHtyGtjHgZQyQxpwTWBMkT5myLDf99zeboljZn22olud0LYdOCGqvZvizVc2jiOokcG0gDPM7DSwoDIn3lRZi1+r4EdkKfZlMNXRUe3dTDEzxGFKkZ1rOp3iU657SrPJ7xOyzedbb5BLdKz5P7MzEolzGXWZrB5Rh/cGilic1UGdw8vBp2IkD1NHqv9SToUMkieVE5a2qTtAgXUOPBCy688yz4k7gDTlf9aMh8jpaT5VDilgjthHKofPq/d+OvFteVt+g0uMkatXN/hVx8nFU24//QnN+jGDjiRN5NAgXYsAT7/3z7j68sd89mf/b977nf8Od3qCp0ElErMvakrFt8Bh3j/2Hun8Pa3HalW6N3vEixrepUrjGu5uXzI8+yua3HB5qqyatfnfUVQ9Jy6zCp7bPpXAZ5m+h2zre8EUptBCVcPi5nWU+zUFqGmf6neqCYeR/VI2pej8tZmjEPgKzkaxT7R6vuWQikYKEFRQOtQJIWX2qadrz9ldfc768nsM+1vOTk5o1xdsv/w52YME5Yf+ITFecXv7Gqc97YMnhK0nx0TOI12zKfd0pqSuWjCaArarqa/YVxUQnu297z3ZAUi/11L1Ob9W6r7e5vzpmcs5IocmWpw1a7fyWe0rrT6s+z1Vx1I9RVXxWcCbStzL1zd858manFPx5dtYNJX5bMC6lgD5DKsQ6HxPKiRFaSIyQHYBp4nEaDUt9vREjCmq9LVvdabMVwnVlbRd6Q7M6l/M4MXbUy9wSIk3nYTWDpjeoRri+7Xz76+yHHq3fp4nu8na3Egl2SWEHdIEYnpC1Fe0Gumc0nuHyAaRSPC28fG3d+y5pGlbGG5ZeyXn3sjPRTlxWsNnD3rfBqhSBd/k11jW/5uOOVYOKiPz/YHd/5vOLf5brW1/7Lh8IETRst33jNmxSR/hHjzkh5+vSLmQaWEi6SvgxWyp0DrGtOX5zQmh6YAtkgfQFYJy05+QQs97JwMvtp5ejVyeUDSHQpY1n7BW3+9/JtPmt4P4IRR5Q0WLgoIUiSmZDe6UijoHzqL1J1CzOqvLGij5QFtXGPXAAHdAZSxIUTugTFopW4S/K2lmTKrG6mIKDmWZE48gpBSJabAcac7hg8e5YLknNUGOsylNyu8jTesJ3sCW6dNk+R/HYSTlksYlW5qNky7wz3/3W/zj73/IFy9e8xc/+jH/6d/9W+5uP0WAz778glwkCARhHyPjfodvfNmbKDFn9n1vBI26UDpLlbLvE13jOFsFY+aLEQ3s5bSJ19pXLOIgBHywuvtiEMQxWjoRb7OnZmVU0HGeb34WrVlbZnKyF3UIcZyuA03b8vLqhv12z26MpJtqGB0mq67rePjglNWqmSbvb73/Dp0kchqJWbm+2XK3HxAnDCnxeNezXnWknPEOzk5MWtNLIIQiT6R1TB3AD1WTpvTBdm85ZyNUKJC95UJFqSy8Cnpr1kJkqMQhjEziA+IwCct8WNBUzRHkHVTJWBScL7vZbE6vXJxWRgxQcgKPAfXOedSZ4oxmi7wSSSBuShRi/eHNCBNzElQ1iaRiRJhsBqURogrIguVwMxlQAxoRc/540RLVUZgiZZHNZTEUMjnZc/rgD4owAg83gT4mXvbVsLTXtx9GfvTRJ7Qh8OD0xIBNHKMqt9s9MSW+/PJzXr58ze///g8sPZEEtts7YhrKWiRFocIjghEvFLzz5GSbbecddzfXbIdIFuHz13e0jZE/QlmAUpHTTeYnKal6rH9Szvbej5F+jFx0K5oQCD4wjpGmbUnbLQnbvMeUic4yBgbveHSx5m43MIwR1wTECaH1eFVLJVSdGZMBLuRizEVkYkCLM4O8sltzWYCNDWnzmoqNIYWiNCLT2BUtHG7Nxpwu1pBT0OJEqFuCKSBsWnLlsBkoWmGmQlKN48NcPvkt/nOtpG/Lf5FycM45k1ZeAHohBFSVvu9p23YCv5dKApUgUM+pYP4xkkNK6Wv/5tdbAuWqymazmUgVx8CEeV3miiRzUHtej5wz4zjeI2lUoki9RyVc1O+Ppc1Y1qOqnaxWq3vqIUvyRYyRu7s7VqsVq9XqXl3rNSrZZBgG+r7n7OwMEeHu7m5qZ2BS5JgTRGq95kSO+u8YGWdODKjHqerU/7U0TTM5LuvPvu+nY+rzAPR9P30vImw2G9q2vTc29vv99HclVNQ6zVVkKlGg1m0cRwM9So77pmmm/psTlcZxnIgrbdtO/bbf76c+7rrunkpNvUYd63UM3N3dkXNmvV5P/VnbeE7GGcfxa8SEOVFjTvSZP1vTNPdIMPM61JJznvp9npqpHjsnk6zXa25vb6fjKpGnfr8cB3NiTX3n59eu961jdPl+L9thPi7nc0o9fv7+LN+revwvC0LPj1uq+vwq5U1Eh3m9lz+Xny3rXttmWc9fhjxx7P7zOezYecd+n/fBmwgz82PfpP40v8Z9AON4W8+PWZKrjh37i8oxYtKx7/62119ea0miOfb7L3queXlTaqlj9z9GpDn2DHW++WVKJVC+aQy8JX385halBIGksexxIsOQ2Q/RUsgqBCc0QWkFWrUc89m8hwQakhi4kMUVhdAajJPIY89ufw0ukTTRtpYG9l7atSkFy+HdmZzCZR2UerwdWPwmaioe3lJ/+hCmvYoFnIxI1
hIwkNEcydFSMCQdLOq/qEoYeeMQmSlOC+HefC4OAR1RHcmZkvpUQKWoGzhSUSQwJQJ7gCweHccp2j9k8KsVqDKOPTH2ZB1RjSiZoR8YU6JNiUaEzjtyCKQRLC1GJnuPw/4hivN+cn6LWsBOjgkwe8iCfnwJSnGMMRHTQGQkDTvGcYfGnnHYMvZbchogJdI44l2g7dY06xNUPSG05tdAuX71jH63QwI0vmHoB0LbcnJ6SgielIWUHUQlxcR+t6drG4ahR/OIOAu+SqNYHcbIOOxIOdKtWzQLOUGKmdubK8QLp6drQhNMKdV5Ix+IJ+cG3zrWYU3TdJbKpWlxqzW56RjF07hQgnEizWrF6vQBTXtK1Ezb2HXGZEDIkJQmFMJ6E3ioQkqO9fqMu9sdbdfRvb7i1dU12ii9jlzvBsAxJhiLfxAET+bUDXz7QcMH64YLyaxU8RnISmO6/4gKMQuqCe+MyCDO0loreQJkzFdlYzITwYXiVwDwRpwA0EzOQtsFHj064enTB4SuwblAzAHJpo48agJvY+v25o7dbU9oGzabNaiw3ffc3O14fbWlHzNnrqEnsu4CvvE0OnLuMitRJJT3CUuRbKSI4vsqvs7qw7BUJZaaRZyU1DtCjcwWAec8AfCYfyyEEqyVSvoYLR61Ok9wX6XOlIqkqHuYH62uoFkLqWkOWqrNOZKKClIQtPiS/DRVHdb5w34kkXMk5UN6l5QtUC/lQtSZfGwHeyFjwVjkw1or1ZYiTz71+rvUl1wKEUUUmY4rT1YccW8iKh8+1+r9tPQHb/06b8tvctHM5uIBJ2dP+PKnP6LxwsWDc3Q/4iXQAE0TUBnQLDRnl6x3PVcf/2tO3/9H6PkH5f1Lhjd97dU69r4VHysVTSo/awBpcRKXMFDy/jXPP/3/sXLKxXu/i7u8ZP+ffs6j844QYJcgiPJwA+IhaqYqukvOBO/pxMBoiu84KgXfss8EDsGBanWsmJcXm1/B1hiDhrTYF5UscEB05gD7/MmngMFSCk24/D7bL2LTVSNiaeHyjjhs+eqTf8d+d8fJ65eEdkP2iWH7ipPNJdfPf4wXWLk1o4v0rz/B9Vv64WPak6eMKM6DZJsbkUN9JxLOzK9dn2EKPucbsO3Z8Wj1os/mzXsnztpED2vMvctOebnyvfaiIJdzpZCJoDCvQ+3P0rBOlFEzaMSHVUmjcVAPMQyk7M3LaBQV8IZQrE83Zmsx4jUwFhA9y+wZynOpUBbQAzmj4k3z5qjnMPvcWbUOfoRkwflVbcWCpcuarXNax8IXU29QiDz2UVXQOIAWToymddjLztv60GfWFYomTw6eoNCEDHrO2t2w1mtG9STZknPHdre1FHMddHJDHHfc7bb07TlrMn0fkfVyXljcu7brPT/X7Ig37r+Xz3D4/f4py/P1yGfH76V1fqjko8UovXeOupJe0pNywknm5598yT6v+eh5h+qAiKkeUqYoKVgVySESebTa8vLOkbLgNdgQyIPhXeV93qWWtE88PLvhdrvhenQEBTQSp73aoX5uwsr+fstvB/GDg8PKTy+hbXRSSRtRc41lNeUOyss8bfCr01tcYecoMacyAYArzHcDPWf5YcWAaS0R8U4p5ztSYYh58aRp426pMFIyQxwtRAjvkAKu1BcvVyt/qpvdJ9ccpWLpUrIW2FWEpjECixPPGE1u1Bm1Go/yzoMz/A++j6TIv/+3PVdffsrY7+3ZSpvkrOziSJtbm+7Vcul6UXLJDaoZvApdaLh8cELXeRpXNwaFY1Am3mlSlRqdLGUzZiBxJXXWVcOLGJ/GgUk6WQqZrNVkUagSP5qniUXqJs3BSXCEB6c8V9jt9iZzJTLJPDnnOFk1rLpAKJ6KdbdCcqbfD2zvbjAXkhk0GgXVkZu7PY8eP4Y4GOA1JBovNKswpbZwvrDwy1CsBCAD22uOK8tnVpmOqB2XUjb5zEIKUcBlM5jqtki1Kn1IGcbm+EKzAfEl1YkTu1+dxN20PuuhDal5Og8bSc2JMYFIBp0LO+YyHiw9jC/9HVOiRlMIloPXpJPKBtVXw9KYJDmZo71pA24U+jiiZTOcMpNZ5p3HlRQvVjs/Tf7kTHKVgGCRD09PVyQ3ct1Hc65hbXO36/nRTz/lD7/3ASdnp+aU08zPUuS6H8nAX//0Z1zd3rHenPCtp09szOJKpEQyYkZOtsEXU6tIKRanoiOnkZcvXrFPoCqMCT57cUfjHI82FfDSkt5HiVrei5n5okAfM0PMhK5BsDQ445i5uHjAzfUN+zETYybmhOuNSe29pw2Brsl48Xjx5iQVT3DKpNZRWjFRpz83jU1ynt7/4kY4pOqr73OR00t6MK6yHPK7UtIOIcWRobapINs9ZZoPqloNB7CRwrosY9rmx2TEj2xkIsRUgA6yfm/Lb3JZAq1LUG0JSM4d+HOAvP69jOCvYOf8s0qwmIPJSwDvWAqPCm4t08bMSR9zgLWev3zW5XPMlTRqKpF5epf5fSpR5FhbLAkpx4DOY2A3MJETKii/BNCXhBARU0CpIN5caeVY3yxJNMdUNZZtNldtqZ9Xwkr9vdZzDgLPr7GsyxJ8nvfzso/qWFmmUZm3x/y+85/fBELXn3PCxZxocYyIsAT056ot8/Faj12qeczvPT93SUqa9/N8vMzHz/I5lmNxPg7mfVA/P9bWta/Hcbz3rMdA9WPtMx9T03ozq/ebQOZj379pPlre81j9ln3/JiC7HrcE8n9RWbbB/Lzlvb6JtPJNxy2f8dg953V+09x97LPlfLL8bvkcy+vP2+2bCCRvKsfqc+z7NxEGl8d/Uxsvj/tl6vem+n5T2/6ydfjbHHvs72Of/7rIGm8Cj96W34yiKENKjCWCPqVMziPkoYAJiuJBPagr6VRLXnrnzV/iWnQ3mC8kNEiwfYkFYYzouKe/yxATuRsI3Yqm7ZASdOMcOM0lEkwJBJwHU0g035AAFBJrJa7WVJopFdWKIdrvJcWCUhQBisxvLvt122S5Ev1eItCyTFLums1ZKOUaqCkoWOBItP2jJvNJmeOIClZbpEgl+ltAjoQM0eNIUAIXxmEwm0NAvIPk0Gh+rNgnSBHxI/Q9OEhpJGiH9wbuEFpzapj4ivmJIogKyVka0q6QaiWYXysOmf2wIyeTRQ5OTdVlGBn7HUPa4kXpVidsVmesTx/gmxXDEOn3V9y+fsnt62cgETSiYyQ4Yb8bUAm0m1PEO3a7Hanfk3PGt55xP3B3fUNG6VYdm9MT872VNDxJLYyhCYFufWqS43c37G5uySnjgqX4daEDH0hjxJOM4DIkfNfQrDtiNL+F+jXdaYf4AnJgYyCsOkK3MTVc7/FtB87TBm9j3AXbK2fFS8SFotKqDu87Hj5+Sr/fofKMh/mi7JeVZhxJN7cWcawwOmXInpuU2aPsEW5j5vWLPV+2PX/4YM07K0+XBgPpUia7bH43EYSAkwZpLM1M7vf8/9n7s2dbkiy9D/std49hD2e4Y86VlVVdU1d3oydiIEGAJIx6kEEmmQTR9KT/Qv+RnmSizGiQ0UgTIBgNJAxAD6hu
9FRzZlZm3pt5hzPtKSJ80MNyjx1n5743MxuAjKi6XnXy7iF2RLiHD8vX961vSfSa9oQcp2mTCu3ErKhMwgOByBAT4hxYh1jh5M4JD958ncXpAts0kKDvOvphoK6q7DtJ9F1gdb1lu+moZz2utgQvDL2n3+5IQ8fpbMbdZYVziX5Y470hxBkxWdzOk4xjVFYtPjHjNB1JSiPIEZPPwKlBgsGKIWYy1QjblNS2+bXODgYwGLG4SsaxTEq3bN6iEjTOc0nBkCTq89H1Meesz+cOeX4wySGiqiDGg3OKUkhUGX9MUQYuv8+paUIgBiXnxLhXJNHvc9icQDSq5qOeIU+KBu2sxUZOoxIM5NTOorMMkgkggpLTspdJ/TqoL1my2nC+Qna1jsXoxE/RoR0DHf892Qyvyqvyv8aSYqKt5rz3m3+Xx0/+H7BLBITt6grrGkgJ1y6ACmMCy/YOm3uB5vQ3+ezP/7/U19c8eOd7JOlB3Jh+6YuKTkN7tQOTAyyTkNOSQ+jWXHz4AxhW3Hnn12lO30Ek4OMFxhkenDlSBclHZibx1lnFx9eBISaCBAzQOgWpbTL4BEOM+Jh0vRi3xTpfSvYlS3Y4F7xhqiak6hBCZcr8ocSBJDkwcFrH2y2t5yhzEkzSvOg+KRQwWYQKSzSRMKw4O32d9v7XGULP+tmHzJdn+M0zThanyMk5109+SgyJav6QVCUaq0HbcdhQt3cwqG0iKWCdYoCGokCRAeHsL9flpwQy5M9HJ3casZWjuPm07pP3Zf6eNsh46jx3k6bkmQlIL6BK5NPWTOP8vb+9NGKf+2sLRhxDGqiI1BL47KOf0w7nnL/9t5WMncJ+HUlKNJW85iUiTWNYho9JYcOVLKkQjC2BRJrWHiZKwmM7prF5RwCtHFAqr42J5LPoGr3vH0gq8PFBQ6dJ7fPaWGzuW3v4gq+V4758EckUmAzpGREq8dypVoCqCrpUUznD4M/YbCLb7pJKtpwuGurFOdubHVLdI6QNy4Wn2z5h2KwQBm2z3PfLsxpDwMe2nAwWJh3ic2Vav0ln5sX1Tgdgb5p2uJd07pf54F7kexjTX0nC2Yr1dcejZzvuNIZVN6PDU1IIjfVLkIxHErz9oOLPfjEnGg9G01OmMECde0E0+BSJ0fL0asGb5xucT1ysahIel6qs2jjx492u1n+w8itD/NBuk1U2JE+Qoswta9VAJ+6BSckTrCGBKJM9Z7gipTjK42lOMd2YO6d5YbOFraQKEaytiJpzBExWTEiiaTtEySTWFIehKo8owcOABEKI+V72qiQ2560szq660jxOMeqE2fc9ZIWGoe/HdogpUVJIVFbTc6SoS6MRJRs8vH/O7/3Ob1FVFf/mj/+Iy8cf4bsOH4dRkiYF6LodpCxwEgy1sTR1RV1Z/XMOZ2VkxpPKhgViHnMGZdE7kzcNCN5nWZ88LxfJP5W7UmWMmDdH1mieKyOCMwrEizhiSnivTyxEfWYme04Sgg9Z7WTR0vU9fijkAh11Vc65Sfa/RFFG/2Y7cJMsV+seW1lmtWO+WLDd7PDW8ujpNe9+3WFcYLfyiLGsfYBdj5mpkok+s/IcIKZAMVdiSqSsw2nz5kvMXkpdxI4Tf4yatiek/SIForlFSYw0X0FZpaLpUsaFzEhmmupCFJJgpTyV4qRhbBMR8lVy9HLcs07VENizPHWIZRmjfC+6+TTZ4aUGTkyCy44zUKkLi5JCyrkqEjEZ7DCoVH/S52fHaOZE79XJpft1A4TRcLP5RK0zvLaoSUm43vV7lrDA5c2KH3/wiG+/+wbz+YJmtmHZ7BhCYgjgrOWTJ8/4H/7pP+O3vvUN3nzzDR1zE2MHYzRNVFQHX8wLMxaunzyiD4EH98959PSSEGEXIp9eb6nsnEVlIEqeZ0qEyX45FRSQ2kbNtWhtUVMRUgxYU2OItJXFGxiCIeF5dr2irR2ns4ZFWzOrdjllUHEmFmMuE9NEsipNiXzJxktenIRM7JCUF8W9apJkZrmOUx2vIamTx2W52PJQdSiKGnJFxUP2/ckkGW2omNSI1/kg6jlz31HihxJlVNVJo8aMmTKOX5Vf9vIisPWQCHAMKD0En6YA7FRZoMzBU5ne8t0xsBy4BeoXQPwYYFuudwjKHwO6p38vi24vRJACqHddd6s+5a8cPyUiHKYdKWUK1BeAvJA+CiGm1Ht63FR5RESo65q6rsfn0vf9LXB32k7HCB63onAPnudhe03PVwg45Z6mCiDl+ZR/i4rHVEGlpMuZqrKU3xymd3kZSD2979ImUxUOkdspbQ77ajnnoZLL9BkdtuMhseOwzY4RfUq/n7bxi4Dzl4HJ03MfjoPD+z0Glh/+e/j8S1sc/u5YeRkJ5EUEhRcdf9gu0+MOX09/ewwAPxw75bm+DPg/rOuLrveiefFF5zlGmjj87bENdXmWh/dzjITwRe+nrw/P+aLxfnjv0/fHnAAve+4vu8+/TnnZmPkq5/gy5VBV6mWvp/POF13734Uk8qLfHuufr8qrcliKc9ug+yyJCSdCdE4BaVEyv+5vtK9Za7E5TW3CgA/U0Y6pRBvTIDlYJsTE0CvIPPjAduhxXUPbzpifLLFJo+ts5bA5u7ZovIGu/c5Su+pWmsEQA7vd7tZ4MEzIYanEYCqkqfvmPDdR5joL2X+SkmjwUXZqJ5NIQXegKfp8jUK8tHvf9l4/O4PCMqaDCMUXYLR1XeUgGlII+OSz/8DgggVxRJNIxiskE5W8ErpOfRExkKqaGCK2SgRXYQFrKmxIWCHXQTfezhiwFh8iaRhgUEWwylgWsworNZGBftjRdx5bC/PlXdr268yX5zjX0u22rK6fcPXpB4R+w+BVPaSdOUQcIobdekPf95jKkkLEhoHWCpcXzyEFnAi+T1R1w+tvvYmrGlW1W61YPXsOvkODnxymrqgXJ3DnlO0uYMwpbr6g227YDit2FxcYHNWs4fTOXYxtNfWgCH0Ef7VWRQ8r1K2mbLVVQ1W1uKohmgrnZvp8+54kgWQboq3ZDT2h35GAMAxUTU3dzkkYNruO65sbYtD0zpJaZrMTdrsBZwP3zxdsdwo0zWYtz69u8P2AM9BYQxjURxZMxcZ7Pu0t6bIn3Z3zRlVhjSfhEVE1C2OtSvebrBSTNGwk5N6cQszjVXLWeMMgQp+yCkjlkGaBOEdVWU7vnnD/9Qe0yxNsO8dVNSkGGlszdFu6fiBFGIbI6mbLdtOz3fSIbSHmNMk2UTeOpj3h7OyU5ekJs9kcL6Jpfl2FrecYUwGoMmrQ1NQSLRIHrK1wLpM28nhDoKQyjiaTtSb2ZoyMewhVDslrXXbiGNHUymrT6/msOyQbF7+ZUYVlJJ+P0RdTwDoV5dCodk05petnCEVNY8gAjkaixgwo7hU/Qiad5bqkRMpqv9ndk+eHfTVIaVTaKFLvkmIGlfN9pZGaASlH53E7+vqF5eiXMp59esAra+FV+WUuKUXqtmLYXuJXK1pniBJ5/sGf0vmBZWvYPPuY5XLGxiecM5w/+BZIz2u/+fd4+uO/5KO//Fe88Z3fJtiES4lUguk
yXsLee4+OTTNiZDKxDyQmnKvZbdasP/sx4foRd9/4Fu29v4UGmHqiUaXrbmc4nc94eGb5ADhr4J27FR9dddhomDth2SZmlaX3kZvB00cYcgCfzh6JPf1LyWll7yb5vnV6LYGVQFZ91vVI7SiDIZmszl8IBCMtpBRhlHErtpLV4GvJPu2Y0Hk26b1Q1fh+xaK5h1QWCaqW1q+f0Z68Ruh2pLjizju/x65bEfsN5vQBLiXqmFhd/YRBHrBsG4ZNDhIPHmfMeF/qkp/YhKngOYrViIBJSddZkTGdoNwC2DN2Nt0LT14VIo1ilkXRRSYKMYpDSW7bUYHqc5Pv/lx7zkPa/z4BsifuGRJJOky0LPiUqhp4/f6Cb33zNSRcI3aJWCFmxSkj0326oes9TdOy7TZI2lJ3G2L1GOfu05EgBqJJmAmJ2mR1+v2DVkxQMdbcE9K0ffI1y5qV2yOb7Pvj9wBN/ieOa7RefApCFPRsb+Pfak4hn7/01c/DF/tz54sLBLGsugUWSCZwXg9c7k7pfWBWdyztOZtUc5M8zWrAReHZ1iKDgeZdWnnKtv465/7nXEtWMhlTyhTcbryBPZ43Pueyuk/6ACMldn/fwv59+cUxl0CxNV7iP3iZH2Lv4ynXmDyP8Rj1jWvGBEhYfNzx9dfe4idPdpzWK5ZiWXnHLta6Z5OEREsQ4eGyB7tkvemgTlinAQAxeMWxoiFJwCGkoHus9y/mvHHa87X7gV88a0ho2j5EcvKDNFpIImMMwOc7wb+H8itB/EhJN6QkVbhQRvce9Exkoz2WiTZPgKILQMnpFKKmMYGkBI6kRnzJpRr9oFKWovnFElmhQPaMaLWqDT54DJG6agEhxGEE9J2rlbggAySTNxLge42odc6CZGICWg9EqKwgVYuPiT50mvtVIsZaJYTkRcRk4ke5nYimVBFRINU4w2vnS/7e3/49vvvtb/LzDz/go/d/zuXTT9mtb0h+QFDyiQ57wWW5R5tVFkiamkKSSmSWScAU/QiRHIHCOJPGWAgRZbJXA2AkNEzIkC5vnvKqknNZhuwd0odqrBBDBqOSEInIOJq0votZzTDMefL8SgkJZHIP4KMfpy4jBmsrOmourq7pux3zyjFzwmt3lzzyA68/vMeTqw1dP7CcOYwVrq+vuX/3nNAPkIqKhy4oKaepSALGZKMmg9alm425PlGljzENRwyqKiUWycSWqLRcHJKVJzLwh8VayVFHjKSCsklTkochxx1h0HQxJR2aSBkjoNFRSohKMYP2Zi/dGqIufkX1wUgxtFJOeaLStkOK40KuG86YDS0l2EjekIrR/i7GUfIU1m1N8PqMrBVELDEEtpsdu66nULS05SKSo1OSwKwSHiw0Pcralwhr7RFPrq6xvxC++c7r3Lt3j9XW04dI8jqG5vWMwSf+9Ccf8fhywzfefoP5zFG5DNClnD81xqwclPCpZ/PkGX/5w5+yCfC9997EpMQnz64YvGc3WFZBmNUG8Nnk3S9UIiY7CfWZdclinTqzJPedSMCTcE3DbrelqivSriN47TOL+Yz1dquSsNZqyqnM5vXFAWnMqIaU8jxJStl5WAyevNlHVKUjG+KxzAICkjRXYZRsMkRG40qyIag2gMkKMymzsssx2QkpJhOvtN9oiiHtiMaohKAPWYXEBupUxvTUsHhVflXKIcED9o64lNLnVCOmv5sCXwVIL5Gazjnqur5F/CjXKO+n1y2EgmMgagHSp98fgtUiMipnTH9/qChSfn9xcfE50kkBHooaSTm2KCKUeyiqG4dt45xjsViMDsljQOuhQsq03tO2K0SDuq7puu5o3co5pqSTPpNVD8H28vqQ1DAtx5Q2yrFT8su0fabkj0JQKelYSj1DCHRdR9d1hBBGxRKRfUqTQ/D5sN0O04ZM235KsijvS32mgHrpz9P2OUb8KZ9NyUTHQPnpMzz8rNzvNG3NMRB5mtLlsO6H/aO035TEcXi+0gem6WCO3Xe53jTFTDnmtrOIW/15+gzK60Piy3RsT885fT/9/cvu77BdDwleh8e/7LsvKi8iWbzsXNO2ml7/sF8d/u6rEAEOn8dXKV/2d1/kEHgRYeRF5IPpZ4cEiZeRVF5EtPkiIs/Lypdta+DWOJ1e89jrL0v8+CpluiaVax2Wcv0vm2rmFTnkValcjRVDEFUK0/2pqPKjxNHBGGJkCAHnPTiVBTcGTUvhylwQ8EEVSgkKXIgYgtEUI9Z7qkbTmgQSbTOnbhpMMlhTUdcNzlXUtZI9kkAKed2KmjZQ99+ZrlLWJCbrtilAq8tO8pCVUOMehCW78gsBhLQPBkmAiSQCIg4ySEKMmOJLKeBAzH6qVNQbUZCjKMc6gwl6ZVIiisekQFWl3ObZRjAarDP4QPRBA2NCwPcDhEgVAt57TOWp21ZBEwHrBD8MOV2vQVxFsuCaipPliSpRmphVFySnuxmobIN1NWf3XmM+O1Hn/27L5uaC7dUHbFdP6XYrkvdYY3AWnLG4ekkKgaHb0LaaCnK9A9tUzBYndP2OtnGkpDZwiJoy6Pmzp3SrNYSIH3aa/sKkHLDlmFWnzK3h6rNPif0Ok3pEItZVNIsZzrUYalIybK46Ij0CVNZhrONkscDVDowqTNTNHKlaJRt1kZQ2eL9SvQjjSCYSb65JKeLqOfPZCdEITduSxOKDgFgqW3Pn7n188GzXW7a7NaTAvG35+ne+y/pqzWefPOYkReJ2g6kr4jBAClQ2MRdDtIZN35OsxQs86z2PbjacnDqsg1o0cA1TgXGQNK3s3lGvQTthdGJrOmefoBOhS4neGkI1x5iGeTvjzr07nN85ZXkyZ3myoFnOadu5kl+Dx/dbYpU9cz4gaeDmuuP6eq2krgjBB1wFTSNYO6eqLYvFgvliQV3PGJKmq7FVQ1MvENvgU8KHEqSTCQtJMFbnCgWdrKa3ZU9oUF+qDsWYyh6z5KBHEaZMsChzQBRDNJ6UnPq3qHKAn/pxC8hC9s2q6IhgrDqgXRYDCjlNTirruyjAENPtPZG1VglaFLtJAdAUy35AAxNDSR+VPKRsk8a970mk6HFI1tT9HISif1KIIDLOyeO/YsjOyoxVTewCJsFjZV7Mc7gUx+PnyueBnFflVfllK+3sjCdPnhL9gDjDanXNNsDpvGGzu6ae3WX24FvMm4rnj36iHCuv2NRrv/brXDz+GR/+2b/k3e/+HqluxjnOIpmYVnwPiaKMLjYRQ8BIhY2qitSHns1HPyQ8/ymnr32N+ff/Aa5pCL7HyIwgPTYJMYSMOVR8+2Hgwbzha3fBzQZs8nz3Dcu9umITAo8u4VkPQya2pYJHZTxuPysoCFGmhf3eBaZzQJkrCshrJriRv6X5kYO1b00f5WoRSQlnDNboNWPImCE6V3kJuBQJ2zXVyRt6v36gjj11e5d++4zZ/BSqh8QkzJpzBrGsL36BXbyGmJrV1QVny2+CuHxeg4gGT5qYQylTqV8iIpgoIKqQ1eDwOS26KtRnks6oEMekfmk/uU6bbNIAJqm/fjqvStpTE/TnYQ9Kp/K5MH0G6dYPFLMaL5
lM1oO3JPGItyzcExh24DtO732de298DbEVfYw4kZzqPvvEkqgdJIqp+kqYL87orND7Dal/xiA1Yk8wGAI9JEOSTA6PCYMlqrZ4/v80OFRyKrI0Npk12r5kpfEkex/C59Ox7BtBSiOMmInZ40OSraKj/oF9e47f3zr/dN2TsZ0lCsYJgcjdemAXLEOMiGxZD4J1p7Rm4GZomVU7nARkN5DcjIRRhT4zEE1D9D24mbZH9BRlt9vrrUz+C0VR7FZ9bvWL/DqmW2d5oTsj3W7XY8SNz7fd5/+dnvBzfhdKHzfEqHbUTz9+zvLuazycOz5dzRDrWdgdC+nY+IZtdEr+T5F371k+fOqxLpKSA6kUXw0dheas3VZGcpA1ic+uGxaLge+85Xn/kcUHRydbxLQQvYo7SCJFg5gSfPnCqv+1y68E8UMfvBrNMUL0XhU9cu6yMscqy0/JCyZL/6W0D5YogHrKke7kyAtSIqRAHHRRsK7SaS+DCVXObZqK4a06mcQU2fW7PDkkrNEokijoRSnKH5CSbritKUBEImRSgxEhSmTX9RjpsbbCWt0MGjGY4vwnYhIYZzVKPsQcHQMphDG9gtY3Mavg3Tfu8/r9M9bf+xar9ZZnFxc8f/qUiyefsl1d4rstZPnSmELeMOlE7RKMW4WyJ6LMIxMZJBLsVU51kac4ifesMCnPMqlahhFTpCwgp/wwGDSXbCZZWEeKiX4YJhuaokISsSKcn8zZdgPrzTYbF7p57TqPD4HoXF4cLd7UXK+3rLcdoRuoTM3pyRKpWqxYfvN7b7M8PSWFgTAM7IbIx09ueP3ukqI04lzWPBXUDDNJN0aS2WeZiahLQVYsiQlrEoj22RB1o2UlK8lQHPYBXI4qIOGspSh6KOMxu5KK7CSlr6UxFY1GJJRzZgANNC1HNqjU4ZTJAoLmMUVJJSHnKrYGfI4+claVKvRWE0732FTOUllLH3x2dGXmbhLAYiQR8FgjzOcNVeVoes8waISWCFTOIQjtrOb6csV6u9WILlGyT0lVI7kNTpzh3szht4E+BJVYy3Kbn11cY5zhG2+/wXvvvsX85BSxjodR+OTTJ5BBvi4kfvH0igd37+DXz2hMYrFssbYiIQQ/4Pstq5s1P/zZBzxba8qln3/yGd95+zXECI+eXLHaDYQID++dcvH8ks6HUTVDGdA5n54InYfOVjRNDQRCiohR5Y+U9g6JlNOybH3k/GzBorGcNEvWnQev7KkCyI6DK6HOrWxgx2JQFrZnZiYWdayYz6FdKUvClT5IduaLqtbo+NRjjdHeqOM0j2jZE+6KMZkwOR0Rt38vSgALKWJzHrVijMVU6g4jvfcVVvBLX6aEjBeRFQrIf/i7w9cppZGcUADa8n4KHE4B0imZovwdU0+YkiIO73163qZpPge0luMOgcopyF5AtOnfMYLEFHSenrOQHkSEm5ubW4ogU4CwEB0KwaSQVYrihzGG3W43fld+WxQ+YoxsNhu6rvscCF+OvS2F/HmA9ZAIMlVfCSHQ9/2Ygqa0e5nzCgnhMLXM9N9hGG6pkJTrlOdXiDPleoV4MCXyHFPwOASfD0lHh69DCAzDcKtNDo8dhkGBp0kfKv8eEmCmJJfpvU37z7RM+94hGWD6+SHwPR0b02d8jFhwqC4z7ZslNc/hMz9sg/IcpwSWlxEgpkSRUm/v/a25Y1qvY+P5sI0OySaH88thPz4Exg/v90VA94sIB8fa6IvA8pdd91g9jv3+8Pvpc3nZ/b+IGHCs/of96EXnfNHvX3T8lyETvOiZHru38v5F4+jwsy97D4dj8Mscf3jur9IvXlS+7O8O0zV90T18FVLLq/KrWQQQY3CSZXUlR7CGQMpqoTr36AZm6DPgHoYMtlp8hKEP9D5ijCWkyBAsYEZlUGscprKqhhmVNNJUDU3T0szmtE1DVTuscaQQ2e7W6o/Ie5oCpkgmdWh6hpiVOA/HwLh9zz6S7AdAYyNL+IKgihzZQTEqZorKKFIAUw3CCGAhSo6ZTGmMeoxRlRF1M5WjaRFSTv+JDYRkNIIjGZyAtZWujf2AmApxA8YP0PfEwWsQRowMYSDEgSH0iLPUcYYxKateGqJz1FVNO1vQtnPqtsU1lqp2kCAMHh8SElWxwlYN7XxJVc8QZxmGjsvLS/r1FcP2htDfIHGLSVvmM6Gysxz84nCuwRpL3++gnpNCYH1zQ103tHVLjAFjKgYfGfoBP6yJoSMMnhSUWOP7ns16RVPXRBGSScyXc+bzBSkGnIHUOGxVA5Z+51lfrAjdc0xSksH89B4nd+5Rz5b4FPFhYEBIUftjZVtCsATfMXQ7+t2WkAJ1M6OdnSDGkYxFGoutKqJYhhDUae0DzqpzcrW6UhVb6wg+0ncddd0we7ggRVhtd6RoObmz07EiQrcZEJ/Y9D0heayoQzhZDdQy1lClipuu56q3NKIBV87UkBx+SKTgCd6rD8XktB4pZb+BBoX0UeiS0FuhMzW9rbCm5eT0jMWyZX6yYHl2xvJkQdXUiGlISZVZY9IUAN57TcETPBJhOW8ZTmY0dcts3mQMJFJnMruxlqqeUTVLxFRYiTjjsLbB1TOsq5EQST4QCKSgKb9jAgka5a62pKqPGjEksUrGsWYctFVRPQxe9xu+J3iPjP7UlMlYCSNxVMiQkio3/2lMTVGdTqMPJlGA0ew7zT67MoSRoAAXWWnWC8lKHu+qpjMqq0ZIIWW1jzj+3Uo1qZE62Uc0sW2O2T2Q1V2n93Ng0xavbUrZ5Wlu//7WfJjnzWLn5U8nVvRkonxlL7wqv8wlUdULbtaPMWQiV7fDYZk3D1jcv09/9YgnP/8BD7/5O6hoeCZliicE4c7D92gWp/z0T/8Fb3zrt1mc3yel7F/OAX1FGRsAG0g+KfGSwBAMVx//JZtHf8HJvYfc/41/QFWfYGOEGJhVNZX01FXCYdkuB/ypcNokXl9+zG9/q6Z955Rh84D/ZnZJv634g58lfvI8cbMbFBvIOIjeRLF9tGRkiBIwWEb+ODtIeTeZY5jYUyh47xCGkPf6UvCVyUH5/ClFrEBFxGUoO2acTrIPWlOxD9DdIM6SkqFbPyJEhzUN7uwOQkVkIIlnSBFTz7jj3mZ9/RG+neEJzE5OYdggOdVLyjee9TD0f3nONVEJHTEmGmOxBEJJNZLQQMux38itf6a4G5Q5+bbKXMl8cAuWT2nEHRj3rgcnza9TOvxMuDV7Z+Q72EgVIxIb5uY5xsxw5oZF02AXSzaDZ366wEUghZJkCA0GzwsNkVmTGIIj2Nfouh3WXRGaM9r0nNQL0pwQOovYHMSdHJFOU4ql/S0WspBMPisdTXK/k4y/lP2GklfGJhqbRMkpkxYa/T1mfx2RAuuW/2RsQ9voNi3iWJERkykVEYL+JcuyHgjR0vURK1s8tRKDY2KH46z1bLoWh8XKilSdQmjotpFZs8bbCMOOxKysxLf64e2ia7CK0Bz4NSbmwqG3YFy9X7LfP/S93W7P277i6fcvOteLjlGbS5+oMYY5hlWXqOaWeyeRz64tF+GMWjytXdOaLTtfU
4vl5OyEZz/fIEYIGDwWawzJx0wv0prurylqp4nnZt3wiyHwzbcjH18I4aIFSSRriCGNWUkKlO7+A7A0fkWIH5InSEGs6OYoO35TVKDDZTlQYzQyIyUYhgIA6UOMad/xYoKUQk7noMz8GMMkBYwqEbh8PjF7ub4yHIojUSMyNAVJTKiihhQnAZR0MEWiK2VAXvfTewBDjKaC8V1HIuGsYdbWeBTIiGHIShqGGAPOWVJUCbGUpT5T0miZwgA1MdA4S71ccL5c8s5r90G+Te89m23PZtex3mzZbtbcXF+yubmmW9/QbdeEYSDFgWHoid5jjC5eKYQMAJeiU54GgiTCmBXHZralToxRihNEiQTWVThrGPyQ1VrICi3K1AdVM/E5/U3lVN7RasgPiOagrTHcv29Iz5WcI8YiYkl1RXN6n6ZpsSaxPL9L9D1+6IhhYL5Y8M57v0Z7ck7lNJpDSNg4sN1t2QyJ79x7l5AS3cVjOt9hMmDdzubMF0sFF6MnBTWmUlISTSqJhUJmWhpblL7IawwCo6ykZKeTc1ZlOE3eDCbdbMW8oaucjOOhtH2IGVCXmFVYMjNYQ06wYvBByUuIRg6ZkpEzp+SwRh1n5anavFAa9P58CLfUPxTwMggDzgq1FayxxCj0gydEnzedhrpSoozP0qGVczhncyofTdWUYmRWO6p7J7gbx9X1VtsuKhOy5HwmRUQCpzNLFHi6hd5ndREMSYRPn15ixPDNr73NN9/7OovTc6qq5qpL/OgnP2W9ukEk4aqanQ8s7r7B6vkTnr3/C5ZVpLJOI1X8QHtyh/nyhOv+kl2feHK1pbVP+cbbD6ic5aNPL/jg0TMk9Ly2bHXRdiqB62S6SAo3XcLeWzJvaqLfENJ+LmrnS+qqYrPrCX3H4AeqyqkBm4Smspwaw2rlsaKpldTY3ZNehAIKKrkNa7DZ4EqSn71Rd6TYvInXnTspRrpuYEi5TyQl5LjK5Tzbelwh0ek11VE0njdLqZLI+Q6ZgLaq1ZZQY1xCUIUgEinqXK4qNCbnxy5G0qvyy1wOAbUvA5YeA+CmwO3UwCwOsUOliinBZEokeJFCxuG5D8HV6XHH1COmpIJjBJdpHQ5TsZS/qqpugYfTe35Z+8JePWVa9/I5MBI/yvdFNeXw/gvxY9o+h/WIMY7kmENCwRTgL+1Z1DfKX3lehfgxTZtR3ocQxhQ9xyL0CwFmCmKnlEbCB3CLwHP4fMv9TturkEReVF5ELpie+5CsUubHQ6B1eh/lmqU/lHQ103rBXmHky4Di02d1mJLkEHSefl9eFyLOVCXk8L6/zLiefjdth2OEiC8qLzrmcHwd+/7wPMc2p9NrHCMLHHvOL5pLDu/3y9T3kLxwOB+97DdfdO6vUv5dQP6XbfC/DLHhRfPuy0g65XeHRLry+bGxf8zhcLhWHRKwvqh8FeLHsbq86PWXVdyAL//sj/XvF33+VdrgVfkVL0lTtCIGEw0yKBG8OOatmFFhAiAEjwqBiu4tfaLvB3XMCWAszlVYV0FOx+DqhsXpKYuTU9rZkrqucXWDESUodLsd3U73i5rKV/R3xmraBWNykAQQ0yir/vk54fOpycpIMOrI2AOjqTi/M9mlRInm/2lEfXGOW5X3Nhokoz6FrGZqig4Duq8zSYEH0eASdaDk6EfR1KwxRowNBBOwIWCHHmt2ansZQ7QZ/Pc+K4oocSJEwdia2ckZp3fvMFsuqesWcU7J+z6wXa3praWZt1RtSzOm2BWs0ejj1eqKGHr1vXRrQrfBuUhlmrxBrpTIkf0kYjXFTfBe/XJBg2cWp/doF+dY15ACbG4u2Dz9hG5zjbPQNhXJWoadZ73esN3uMAjBZxJE0yrJQywRuHx+jR+2WKvtHoZAVdvsOwwYnzB2lRVWnlE1La6uCdbgqoq6abDsGLzX1KvJYqo5zlqadobYVoE8J1hX0XU9nd9RVw22ajAI/dAxDDucMbTtjBBh43e07ZyUBD94ksB8Nsfvepqmpqkbup3ndLbAiiOmNSFA7SKWRCsW73tVybGJ3sMmJLxYIp7oPcGomqc+95iJH2CM6saS/Xt9gt44elezFkuPwUrF6XzBnfNTFucnnJwuaBdz6tmcpp5R100GDrJKMUKKibqq6KPH+47gO9rGcHLaULcORNS/J1bJG1YJOVU7U7IS6tcztsa6BhFHJQAe4kAfPUECKSSSRIRADIJIUJ9pViEVMXlcabRwIQ1XrqJyFTE1ee/RMwQdDyEqUceU9M6iv1UgqCj+oONUULXoWHx7cfTxaSyLIeU/ijKqqHpMKufIAUAxaRroaHTeLMQPH71G54eofzFOfNN57pTiey4QIUjSL0ax5UzrGBMU6MQ01i1/qL4kgQKlkX93y5YodsmLJ36mKYF55dt5VX6ZS7YVhvW1jprasg7QuooohuvtjnZ+n9oIzz/6ATfP19x/41u4JPhosUZxg9n8Ll/7G3+bT/7y39Bfvca9d79DyMoWpJRBbx2fcRCMq0i+5+Lxz7l69BcsFyc8+I3/gnp5j+ATEgYGAeNFA6mlwjHQtMLpWc93/0bF3/3GX/Lum5/Q/M7/Db/4z6n9cx6u/p/89/+T508fD1wPKV/fIWPgKiRRdQS9oxwMQgagx/3W/lO1iciTRqEJpHEuU1wuKUE4Qkj5Z6OVNfEvJFVls0aB1pIePqSs9ZQPNWKJIeIksdvd0D/7gN32hrOH38XMF5AiIXqdofPELniiccxO3qbbPsKJxdgFKXqcDaM9llSLJfv0YwbdBaEiSY8Rw6kLgKXvM2kmV8NAttsmXWjalaTUs2Bo+zYY2/bY3vLWZ2k8q35c7NZj4Hra/2WssooQkmEhnxKNYTXMeKe6ppnPkSGR+p1iWjHkKV7GPoAIIgmTVHmNWGFMjzhh3p5ys32CX/4Gi+EXbAeDsw1eddBJqacyBUvM68b+P4zkxLFf6VGSFKHSdTrlFChxVNtT27q06+ebTn1dEAPMGx0vwYecmoeMZZS0cF/Sp1LGa2aZSAogWyq7w5mB9TpQ2UCXZkQ0zVsiMHjDOgjLZof4RJJTXFT7tHeRYAILHJ7IDsXJMsJ4/Dam/SLvT259L/s2GXvIEV/WC8/5Jb4/9I3d8hcevC9p+ab3oq9VdS1heP2NOc/7yONVzdvLxGtL4fF6oCMwDCdYO9CYHd99y/D0+Q6fVKnexkDE4qyw8z21RIacYm9/Xzm7A0rM3+wSP/5F4FtvC3cXwk8eBVVNEpBMHXGup62hbduXtstfp/xKED+UFCGaHsRo1GWPkgLEVNpnjcrqiRgFjCedRoMrMhBpJEvmBd0EpL2B7ocBS0JslhKv8vliSZugVrEx4GylRn3UqAzj3MS4z9dP2Uy22WQ2Njs+NNdoXSmBRT0fqqggYhCnzrSIympXdU3lHD4lSAFBcFWlc5yJOVVNYX7nKBGjEb2I4MOAlaw8gbL+26piXtfI+Yk6qa0jRCXC+Jjo+oF+19MNA91ux2azwYeBfrdl6Aa87/HDgO97hqHDDz0kjUZR4Dnny4xZ6jsDT9YWxruMqWVcCNlp
k8kGYnCuypKUej4xgrWOqlLViOKYqazDVg5xFevNQAiJqmmw1mCt4/x0QdtUeeMGu/WGza5n6ANfe+8dfvN3fh9XucyeV8UNPyjr/+Gb77Dbbrm+vmZ97wwF9RSYWrQ1de3yhi8QMhgz9D2+3+KjRiEMXYcfBmLodQURbSODo7AEjSGnJbI40Rk3Jc3Dmjk8GCuTiSiMxBiVbSUrrthMwSkAojqZ8n6PlNUzEjFHCmlfJxtmhTgAmoomihKTKqtgfAgQ/EAgpwAxidBHQog4YxDj9blHQATLJBVCzIou1iKo2ofJRmBMeTOYFUjOzha0TcNqvWO92ozEn8JgTSjJ6Ly2DFG4TJEuKKvWoAbiJ0+eIcZx9413uffam8wXJ7xtK37jt36XP/rBD/js8ccqzZTb6e5rb2Ifvsmjn/8V3eaCWtR4GdYXfPONu2z6jnizpR88n1yuaJzh7Yf3cG9YPnj8hJ99es3zm455pUbv/ZMFbaXsX8HQBcNVEN68c495W9NfXBCCytapqguY4LES8TEwX56yW19ztdoQfcAtZ1SVpvzxcW9sGGvHlFaC5uwpajlN0yLJa3SctZpqyGRp0pwP2uSolr73SO2pyryZBJsjZqqqySoGOj5iSuqsyOzulCXAyngGo+LKXqV3xQpWZCSNWQzRaJ7vEKMulkmdGjGFHJ2Tyn7mVfklLVNAdlpeBuAeHnP4+0M1CDguW38Izk5fT5UEjt3T9JrHQK8p8aOc75D4Mb3PKQh/SIw4rFtR+zgE4Mu9TK8x/a6oeZS61HVN0zQjIWQKUo+5rvO5DlPuwG3FhSlBpPyVz8rnU3B1qqoiImy3W+q6pqoq6roe73eq+FHAza7rxt9Oz1PONU0TU1Qgpm0+bZdCJDkkOpTXh+SMabqgY/3vkIAyfZblfr6I5HH422MA9Uh6nlx3Wg5TmRyC29PnVP6KMse03tPnd3gfh4SjYyD0VI3msK6HY+6wvY+Roo6N4cO+Nf07JMccEkCObVQPPyvHH6aLOdZOL6rTsescI39Mr39Y1+nnh799UR86/O2LyrQef53N+2E9XkbgeBlB5Vh5ERljOv4Ov5/W+dizmD7/w++n46uc89gzeREx6GXlqxz7ojnh8PWxfvhVz/tljnsZ8ePLnvNV+dUuImBNwBqrLnJr8EbnaU+xWyQ7/LP6gAiSIj4EhiGwGyL9EHJSTQExVDWctUvO797j5PSUqm7Aqj+m954hRNxuhzEOa11WO9O9jyBZwrhoSyrIW5QO0637V2dqSmVOn8wHuX5SYABRAFr9RpNUmUmd3+rYhESW0E776DwpzvwCPCRNCyopg9pJ81UTMwATIeV9XJICSispJGX1D2MiYgISvEZS2gQm4SUr1AJRNEikbhcs797j7oPXObt7n7qdM4TAanPNs4tPiEOHs47Tszuc3b3HbHlKxLDbdnSrjhg9xIAxSX0HYVDZ6ThgjWF2ekIi4PsdKVaEoadunWrLpkgfBiBhbEU/dEDF6YM3mJ/ex4fA7vqC9dUjVqunGNMzn9VYcfTDgLU1rjHY3Y6TRaPpbIKqofrOM1SBZDqePXnC5uqaFAPLkzmL+ZzzkxNS8jx/9oyu66kaTwxQdTuadg7J060Dzgi2qZHzc4wsiGie9srWWKuqeD6CsRFrVH3Wdz0xJeazJVVV44Oq5g39jqquaNo5xjr8tsdVNWKcqru4SgOvho66sZyenbLebLGV4+R0QRSD6bbQ6z68ccJcrKqQEDEpMTdCay0up+UNkpUBk6q7+qARqJrmVn1CIQneQG8sN6YhSE09m7GoakSE5XzO4mTJYpnVX6oZzjYkY3PaEvVFIoKrGvUb9QNVpYjJfDGnbisWp0tcU4OAszUkDTJKCCFBNwy0bY21lRI/zH6vAZpqx9iECRpqZa0GvMUUJsep8g8l2EluAx26hrrsY7SqKFK11H4fXBijKiIXoHVMt83ef5Yy3lKAv5hJI7p/yumDowbyJIwGqBlNQW1MxDhHSm4EAhVg0nGuPtQEAULxN/pADJ4YcuDZZEYyIxy5B99MpnoI+zkCo3OJSSaTYw6DJ9jPw6K+xPQVl/vb0GKGd1+ZDK/KL3ERMfT9NTfPL7AWlqd3uYkNPiTq2Qw7O+PO2T3sbA62Ig7/gicf/Smz5UPOH76BDxbJWEFbL/jab/1nPPrJD9j9+R/w5vd+O5PH9j5xTR3uuf70Zzx//wfUlePNb/wuzb03IRlVs7eRGAwGC1ZJf1sTGJLQJY887YjbRGsfUX/97zPM/mscQlr/TwzXTzg9e8A7G6FawXoQvAcvkNJkn04AKamvcikgvUxew0heVbspFuRCMY+snm6JGpVvNIgW8ow7JUik3A4ITqARVdHwUVRVO8+GiawE5QeSX2O7Z5ycv0U121JXjU5s3oJT1XKiQSQiuIwxKu5nYofvPsXapQYwG5N94FqD2/6SRIo9EuHEOL515vnDZyETN0rQTKlMnEyue2UKSEeJCS/ag+rcHclCI7mhp9crtuuLgPd06ziD2hI2wkxukJTY+jNmZs3ZsuKG+1gZ2K56ZllwSoOEfSZe5OB5lCjpJPLs2SW7XmsY5D51+oRt2uHNQ+b2Kbt4nxQttUC7MPyttwP/7EeOPivQy607ZfKZjOPCiI4SAQ3cD/qMRHQdVhJR6YuZRnDQljEpYeW+SwQMn8aspif71f/2PUz0WHIz7xVJ0qTnC5KzHPhguDvzXGy3GDejlyVkXDhm0pIQ8dGyGoT7aDrLII4qCq1NiO8YWBHC55O5fX4BLhgPR0s2C8hTy4sOm5zu+BFf5F97mR+OfRcdj9X5Jd+jFIsmjSPlwXlLtZvz5Mbz6Y3lnfOOs6rhYgfOJPpokdRyfv+EP/6z55zNdsTBsZIFMdY464hdwIgnYrGfuyeLSeorDy4yBMNffpD4tTfgd7+94C9+vFPidBz4tbctqRMefSBcXm2/oAW/evmVIH6AGtRBVMpKI8L3vdJaZZSnEElSmILZSZ+yAU3eDMQSne6ocsSndRUxT4rBD8QYsEaJAxptmfLGPoOlxcktgnFmz6TOi5RIzV6er2zgJYMhqtZRWYczDj+EzIsvQI5DjCUlVbMIMdJ77Yy6idTI/RSjprwxWcozeU1XE3WzqdOfMuSckayioW3pKqebkBCImZkusdc6GcPMCctmQZzPABBj9mkoyEBv0tyaQ1BgJcZ9lI4RBfETEIOHlNh1nea6rDJRh5QdsXFk1Etpp7IZsjZfK28gncFZh+87nDVZwkvBlJRZ77t+QBBCDKSk0TxtXRFDYNd1rNYdQSyD95ydnnLn/GwEv0lJ05zEgA8KSgc/cO/eHYL3ecLUBXVcZkRIYcDnfjIMgRgCIQyEoSMMPeuba64unhKHOMoAWWOwzgKWQvSweaXwmQgTY57WRitHRrJISFFlQtH+J0YdZMpqzGKUQfte0kGgG8jsI05oXUUlUjRfaYQQA8aaMW2R94Fk5daCJkiO2lIm7uAjwWge4YiydK2oiocPhdWbGZeiCh4hCcY5dUagx5V+ayXiZprr16TIats
xhDAx4tSxZizcnRvA8nSr9QyQ20J4/OyCv/rRj/nGt7/P/dfewBpLXdf8N//o/8Sf/8Vf8cO/+LdcXjxTtSACGMsb3/guq5tr1p++jx02mo5ks+H7v/Zr/OmPf8b1ao0PgQ+frSDBGw/O+eY7b/L+o2dcbnfcdOAErjZXvH42586yJSbL03UiNUvu3b9HZS3XqzUxzwVKvOiIqJKKWItEjzWWxbwhxsCjixvmbU2bozXUQSo4m6PTxGDEKkHIWqyp1CliGqq5qutYo1KClavycQ4jgvcBHzzD4DUHbRQkS2hJnhMqY/Jco/KtKei4LA6OYkwk1HlXmK0G1EEBIwtaSEgUQolcKVqoRIzRvjKBFr7sEvGq/EdYDhUUCrAOtwkb088PgeEpwGuMoaqqlwJi5btCEjgkKUyPmwLah2Dji8gk05Qp0+scEkpEhL7vx3oWQ7gQRw5VM6YkgvLZlMRQinOOpmnG+hcFjQLuG2NG4ke5n5KO5JAkU0gf5TlZa8dzT9OEqD2wP269Xt9KSTN9PlP1EBFhs9mMqWXKsyjHlPsu74s6yDRVzZTgUlK8lHZo23YkILRtS9u2Y3tVVcVqtcI5R13Xn2vLKbHikLBRnsO0jtP7SGmv0FLaZdoWx4gx0/NN08pM+8a0/x4SMg7/LWNh2nfK9cp9Tdu2/O6wHuXfQ4LL4fiapkEqfaqqqlu/nd7bsbQuh2D8tI1KnzlU03jRWDi2qSz3ckhKOSyHZJPD9EWH5UXkk2P38SLSx4vKsTqU+r6IDHHsuR3e77FybM78IlLGlyF9vKztptd50bM5JNhN+/KxOh62/zTt0IsUi6ZryouUfQ6v81XULtyX1Pwsa8CXIc18FdLFVzn2WJ97kZPmFfHjVfmyxYjugSUJtTEEZ+iMIcQAVgM/yv7YCNgSpU/IkZqRZBOVq1iennN+5wEn53cRU7HZdDx5ekk/bDFiNS1JO6duGmZtTVW1OadB9k1IQl0yWa2VHClaQFbRoITPz+mTFGNS4uYZVc/T+CbDr1Ki1fS3eomyx9G9tKQcAJHJISLkCFo9o4iePKWQ/Ul5fopo8EZIYA0q1qCS6ylqbKz6KPTaRanVJMEGS3Q1mJrZoubh+R3uvfaQ0/N7iLNstgPPnl9w9fOfsr25ARKvv/kWr7/1Lifnd4kRNusNVx9+Qtf3GMBacJXBVRZxji54oh+w1tA0Myqr/pIhJlwzI4VEVc/VnxN6ri+fq1KFqCJoffKQ5Z2HiMD68pL19WPwW0R62qYiOoMVy251jTEDlQOS5879BZv1jrAKhCHhJXF2/y4Ry9OPP2LYrLHRc/f117j35pvM2jnDdsfz55+CRGprSGFgdfWceThhNm9oGks/6PMTa7m5uuHi6VOcq6jbJUM9x9VzXDMjCOAMVbIEP2CwGOMYOs92s8FHT1XVzJdn1M0M52pVyq9mdH1Pigk/9MTYIxhmi1OcaxmGxJvvOO4+2PHk8VN6D4t+xnbXEVOkscLSJk5tg0mJ2hoaCZxWhlZEFUJMIg1eCQ0eQkgkF3PQhsWIIznLzghrHNdecCKcz+ecnZzSti2zRcPyZMl8eULbLqjqBlPVuKra+0xTZBi89nNbU7cLUqhwVY1zDRFP0+rvNACmIiUP2zWRvF4KqI/LYZJTX2sGGENOYyuoaqyxBm/UHyEx7f1eIyCnA0bTe49DLX8+6GjMPtiyBzNGSMZpoNUImCRyTufst8z70NHminvfSAqEGJT4ESM+gvcahqgkLYOxHmsDrnKa5ts4bJX3G0bbIGZfcgxxr/gRwxj0l0abJvvlKPNm8VmrPWEyKCoS83EyKsYeljLnyJjTpsxYB8d8xVKe4avyqvyyFiExdDdEH7AG+u0lLsIQA9cfXdBHeGoMSIOtWs4evMPDd77D9bMPefTjR9x57du0p6eApgqxwNvf+R2ePf45P/mj/4Wvf//3sfMZIo6QAqvPfsazD/4UiZ773/htTu9/k2QSIUaciCpHRyWakTQ9usVho5JDNzFx/bznvHG0D/8G8fQ/w9hI2PyPpA//nE8uHxJ7WM4rHkrgYhfZDInOCyEkVQ+JhVq3B5YLnUOJr0fUL/fW06gVovhGDlOVhJNEY2DIimkit+cPARDBJGisobVax5uQ05wXG46EJdGlLbWrqM/eoZrfpwuPsW2LD57aOoaUFaGMBwxGlIgo22ck22Dtkro6odtekAavpJW8zozXY6RPqB0ZhQ2eHzzNad7IqGKa2IzC/le36pduTZcFP/v8cfv/7u3L6Xdp9L+ncszoMyiPrCChJqOHam8mamq5oJEd19whiefEXDJ3Nc84p7/6mNO3fj2r96ntWmD5ySKrdnee/7ebDSd1YmPvUZlPcP0VqX3AEO5yb/Gcz9Z36QMMW8MffeSIKfDuKXx0k5BoiCZlU3zf3wrJw5DV7zNMLKjiuJTKjiSCY+tQ/sxoWxESH67BWDL5Wgqkq4L5aY+JTp+T7DcDhX5CUeAaH3qMnFZbLvotwdwhELABfGbsWJNIUe0y6zruzRxpm6jCimjP6ESom7sYk4ihgrDTk4u2za2hMr6QkYwyoVLsfYlwi4h+rJQxWKyC8agjx7+IADL6baZ+TorCYu69kb2NU3z9SYUAjK0xJs8t4tjsIm7mIAY8A7+4nPHNBx3dRWLXCckIr93xbDYVzzcNxjYspefMXRFMT7ARiZbaBAaTCD4RrBAGEBNJaVC/rUm4ZFVtEcMPP+34+iD8nd9e8MHjnv/iP/0Ov3j8jH/6zz8h+MBy8eUDc75s+ZUgfqgjzu8nKwCRcaLTTfwYazF2IDEmOw6sbpiTbvAlKTkipYR1yqysRTu6rSpSskgGKSd3kVOcqHwm6OIysuJE1TzEqnKFyvdVJFOiTwVrDd7rBiOmyDDstANnkB2jxAZrbU6VkfaR/CRNcRICFCklI7leHlLIzhKDKekdssKHqmgo0Dumg/EDKaEqKWVaEuj7nmQtVtTBHpQhAjl3rhgBW+asSJ1sXlh145VKNGwig9q6wVnMGkhJVT0m+bBSiTIgz1CyBwbIGyprMiAkSvAZosc4A0nBE2PAoyoHMUI/KPmjKLro5K/9w4dI13ustSzmc+ZtjYsDKXhwFeIaAhkM91kBZejw3uc0QIWwIplIpAQcAGdg5z2CIQw93WbF5dNHbFaX9N1WZWNNVkZwFe1sTlVXRO/pes1PG6KC7+NUkUQjdJLBZ0OmqJ2kqEA8pe3QyAVTNqGiv4+ILlp5vMSkG8iQ2aW6OY1K1kiC95GqraitsN1F+kHPlWLAiS7cSZR8oP5xUVKAhAnpRglYuoBEhhizMoXkGImYI5X20dNkKc1CVBKjpIl2MWO13rJabUc1GZOdcC7BeVMRkuGqU4WJlCOgjHF8+uyCf/7P/zn/h//jP+Ldr7+HiKYK+vt/92/zjW++x5//2z/jg5/+iM12TfAB5wztfI59+1usLp4xbK5wDpaLJX/nb/4t/uUf/iHrbUfoez66WAGBt994nb//9/4ef/5XP+LZ80uNgBo6PlpdcR0TtRPWpubOnXu88f
A+lQhXlxf4HBFS+nnM/bQSgx8UkHPWsZg3MItcrrcM0SAWGmdwRjflRixizZiWClFVIuMEZzVKxhh9RkiEFJBUYcWgKYYTGIttNCVLjGSjB4gZJIyRGBIxCCl6TQtkDCkEhqHXOdorOUcNrqSLckw5oi6RsrSxpJy3NgZM2n9XDKWoXeFV+SUvx0C+KZFjajAWUsIUNIbbaSgOwb5D4PhYmao3HPv9lFhwDOwuRIqpAsVutwO4Bd4fplopn5dzT1NulGPgNhC73W5vXeswgr38rgD60/Ys69ch6F7OUxydx0DjQlyYEiqmoHNJuVKuMa37YXtOAdby3Ww2u3XNKcnjkJxQCBVT5Y4pKcN7T1VV9H2vammZfCEio3pI+QshjM9q2hbH+sxhexUCy2GdS51CCCPppNSh1L2QVqZtvFwub5EvCmHj8NmV+haCyBSQPUx9U+7xGJFimhf8RaSk6XHTZ1j6wXRsTBVXSv2HYbjVTw4JWoX8Utq81GnaTtP7nfbP6Twx7fdTQsj0WtNnWs41/e10TE5fT4+fzj+HbXVIjJkee0jOOEZSmN7fy8qLgPdjDvhp+xy21Yt+d/h+er1jBJ6vQv441g8Pz13mm0Nyz/R5H5I0DuerQ6B22penJLvp85r+vawc3vuUwPVF5VAR6IuOPbzmi9r1q5Qv+7uXOWsOy1ep16vyq12GoBQLJQlCUzvi4JGoe9ske8KHBiCoP8VWQrOoqWZzFqdntIsF3RB4dnHBT37+U7brjfo9QsLZiqquYTancoKpDYmaJAasQ4xDxj4rkKPdMTli1ajTO43BLDklKXYCXuwVK4HsTN478FVxQLLzPIMeoys+Zd9NzEoEer7RBs7/LT6qhKo4qg89z3GKpKiDPUV1R6WUHdGF2AKSUyEHAZGKKKqiaqsaU8P5csn5vXuc3rmHEcf19TWPH3/G888es1tdQRo4u3+Xb/zO93j48Gua7vTJEz75sx8y7HYqq25NTrdjSU2FoSaSWG83mq7Y1Vhj6LuBLvksx26zKog6w9era/rNDVZybOrilHtvvo1xNTeXz1ldf4Lvb6htRaJmtxlItqad18Ttjno2Y9Ys2PYdrqrxXWI+X5DSNc2i4rV33mWX/TGLWWSoau6+/h5vvvU1hmHg4skj1teXtPWC0C7orWe73hBjZNZW3Dk7YdPtMDHRdx3bJ0/otjuM1LSzBbZdsTy7S2h6nj/+gOXJGfN7b0KzUFUFF4mivr66XtLmdDMhRk0VExWki3kcbLdbSELtaqxYPIGUHPM7d6n7Jc12zdXFFbWDs6ZmYy27oWNha04t2CqxbBosiYoKhyDRax8MiSSOZBKenoQqWaScZmcwhp1U3FBx44VmVnO6WHB2dkY7a2hmDYvFgno2x1Q1UjfQtERjNMDMWWIAH7us9OsZho4w7DBGqBqHcYngs51qDcbWVLYmJQcxERCqusVUM2ICUk8IAxJsTtkk6vMd/cBk/4nDOQHcXqkwgwVkYCYZHdtl3N629Twp3bbTNRhvouSWg272e1YleVBsaTQwqwTFFBWQEFImgSR8Dh4Tk/PKG4/1FdYNWGex3lFXNZiggXeoH0XVrHO9/IAPPtdx74tWn3jM/h0N2BEBIxpQlkquF1C/rdHUUuoDVihWpexzpHk0o9LHNIJ7WtR/nUmzk7kMEimn1ZnqJ70yF16VX+YiRpg1A957Zsbw7vf/Nzy/eQo+cufeQ07vvcmu3zIMO0wY6DY3PH7/z7HJ4rs1H/7l/8JiucTYGfV8jrM1TXPCyfwO7r2Gn//JP+Otr/86fQg8efwjKt/x2te+y/yNb+OqiugN4gOVVewrJlV8SJI5olmx2UvE2sR5nfjO257ff7Pi4/efkP71D7n/tT9h/cnA5c09fvSJ5yefRq69jmefFJCuRG0Tk9TGUDNopHtgCklhTK1VGoiR8iH5kIJBqf2UGKIqhs2IOAu9E4Yhz7njybIdlJX0W5eobWIXkgadUugOmfMrkb7bslg+oLaG1cXH1K4GBGcgSKLKiuXJGAyCHyKEK0x9Fxt6JG4QA34YSEHvNWWUPaWSrkUmqVcSSCBlfEVEqIxhCEGVrXNbyUTtQ8s0qOBg7kx70F7yB+Pe7IB8kMYXuT3GtruNb5JpOEkinqBqK0AKllm6pjU7Ls1DCAOSEmftlk065T/97ftsn37Kv/6Lp/ze33rGvH1Iil0mcad81vK/olWjMvW1a7kKDSfzOSbesE53SBgudyc8nF/weH2XPkYuN6p2XBuHiREPmDi9/3FJyzhjVtpDsddCzCDjUYo5yNgEhZphRO0ISOPz00BioGTnEcVdDpk1UnrbaFpku35yTCH7ZKOAiAb+1zJnmwZVOydB0KD16AOYCqLnbi08WTseSLafiLgEIVUEDA0Dxig+vFcR3LfNxJOzv/dSx3J/6Tgh5thynQ5+v+9/x30Hxb5QEFMmWP3+mJTiqOlyyyc3bq72x6WgSmwkISbD6maLM31Of1njU+T9Z8I75/DB84gJgW997ZQ/+dFKTxRhcDB3wh0b+Wzr8KmjdtdUVU9TG2qpsU3uE7FmiIZ+iISY6AN0Q8BEx9UmcefBfX7tvcgf/puf8YMfX3JnvmY1M+z6I43371h+JYgfsJ/jx5mszPlWyQ5FMSIG7TiaS0qBz1E9IWrejEI4NsbkkWwY8neS08no25SJEqpMkfLm3iZUOgibUxOU3D45tUZKYDSvKEmtbpeJC+X+Va1CcCMTDKzTPFa+H3JakPyd0dy3MQbdqAAlZUxEwY4UI5VzeeHNag7cdrZKHnApRiWR5O8V2A3k5YsQE7vBq+M9ieb/zBNecVAYEZWm9HuZQUE0P6t6KLDGjZPIxGWZH2hmcXmPVHuHSkiqHlImFWtKFGmWBHOGREPw/TiZx7yxsk5o24qQIiEkjLWTScYgpqLzWQoyojlVQySJ1ecXhMpEVYAxidpGYgjYusJVDnL0kc1knSEE/DDkSVzb3fkBv9twdX3N008+4NPHj9judqSUMEkUByfiYiIMOe+tH/TcxmDtHNsYTAY9DIIhEoPHoNEzSf1GY+qf3Jw4m1MXlcFBVnMRNOLAqyqKOhhSJgXIOJHGMqxEGPqBLuoCGVN2ghmHL0QCa6hcTZVEUxnlNB6SPFLkJ2PCp5LzTq8zxLiP2kgK9gg5bVPpVzl1iTXQNpYYDVVlaJuWm/WGzXZ3y26pjHC3UXm5652n8zoHNFYd+Y8/+4x//I//O/7R//n/wt/43d9ju1lhrOE733yXb//ae/zspz/nD/71v+bD93/CerXJTWeJriXMLeswEG+2fOvb7/A7v/Gb/NVPf8bOB0LX8dlqjX38lKb6CX/393+bv/zZL3h+eUV9fsZqsWS36wiu4XQ558133uWNe+cQPdc3G5JkVZaUaNoFdVWRMnFmGLzGpsSIkYqqNpxLy3q9o6lbGpPbMCZi8mp447P0HqruIYa+kEFMFvrMRLJkrPYXJDspBGurcUEWI0Qf8TFlpaHS4Cnnr83EuBTw/cDgJ/NqJp4hOcIEICsx6WlUVcakNBJ5b
jPFp8zZlwNhr8p/vOUQfJuSPw5BpmOgY/n3EEw9JIOU3x9+L7JX1zgEGQ+ve3ieQ8Dx8L6OlcN7PkwLcXjNw3vx3t+KBD9MqzG9zhQsnQKeh6UccwwoP3YfU5JAaccpEWRKBin3Mq3nMWB1qgoxTTNzjJRwTKGlkAicc59TA1ksFuM9le/KfTnnPpfi5JAkcEh6ONZ2h+SI8qxKPYpqSWmXY+1ZyBylTeu6Hp9jUWMpz6bKUZVTxZVS38P+fJj2pdz3Yf2OtcEh2F5IJ13XkZIScA5JOIX0UIgfUyLFVEVhSogoxxySn47dZynluR2SLw4JAVOCyeG4ntbrZWN/Sv54Wdsdu9dj719Erjg2Vx2Ww3lsWofDcixdyfQ6X0TCOLzm4b/HiB8vIidMyTYvmu/K62mqpvJcDlVpSjt+Efljes0XqYQczseH5Vjdjr3/ovJVUrIc9uuXla9yH190rhed88v2yb/OuV+VX5UiSHQMEYJXAr5xBjuHylh8PxBCxKeEOMd8PqNpGubLUxbLc6rqhG4YuHj2nI8evU+3WeP7DhsjbYwEI6oe4Awpp5AZ/EDvPbUwrlf650aQcwQmhbxfUWeHgp1lLtnXQdPVpnHfX4o62F88HhRMnbpg98RAVT2czm/lbYFHdHe+Hzua1lVy6lSNigNSQqKmgxGJqu6QVM3BuIraOdr2PifLcxanS4JPXN+s+PDnH3F1+Zxhe42VxGy+4N1v/i53H75JCIlPP/mEH//ZP2Nzc0kMOyqn6UncfEHVzBERmqaltjW+T+w2G+aLGbOmpR86fLfVVKVGFU1jH6ldRR8G+n5LSol6eZd6vqRe3qFp5qyvLukunjDsbrDR09Y1q/VzNjc3zNsZ7fyMPkC9bDmfv8HVzSXOBWauoe8HtrueBw/f5uTkhJurZ3jf0zQzvPc8fO0+Dx++QYyB7eYGUzkevPEmz548pWotromIcZycnnHn7j02myuGzjP0O6xEZrXhdHkOrmK789SLlsXpCc3shPtvv4txLT4kdn2H7za4Zo5pFngG+n5FSmSVjAofIyJqz3TbDouhqmuiGDZ9r77DmIhDIHQ9VeXoreX87jkxRm62KxbzFgkDDnBJWLqWihwYlPuGsRWQHeXe46PK4SOaztdnf2dvKm6S5dlmi2la5rWlbh1VZVieLJgvT2hmLU0zp25arFhsACESwqCwjjU4MQxM7Do0yC6EHORiVQWFbPv5NGRfmPqAYowM3RYwGvwkBmMqBCXKYBwiNvtM1OeZ0t5OFgGXA8RSTGPq6b2yDsQ0qD8jk7xS0gDDECH2HcNgca7CuVrnK2NG8MwYo6mLResaRElWRNkHIo/+LwV5ioKx2h/qrxMZNMI9ZYVFb3Euqi87p+UlqwJRzpOGfJ6Q/4rtrHNQUfEo6VsQJZoZc3u/qa/V3zaq6eWgu/FvOodN5sL9XJT/nSgh37Y3E9PcLl9eI+1VeVX+4yzWCk07I3QeszR8+ot/y3azBlnS7bY8v7xQxfc4UbrIOJPEltncslpfwfCU7S+uOH/wBsbUVK7FVjMMKz768/+BJIa6OWH24B16Ejz9hMrOMbXFNA3YFkOVx3RQhMdKJvslWmc5bSxvnxpurhP/9N+2PA93+L++3vLRY8vj58LJomI97Lgc4LInM+x0vjQ5iFeF7XVuyELjOp/muhWTZ7+fObIHmM43GQfqfeKkUhWxPgo3XpXcR6SavKc26qdeNImZg5sh+6DHti3q/5bUr5Gmol7cYR48F88ecdbOwdS4ZIiS0+ElSP0GEztidYcKwyp5tiER10+pqxnezRAxiA1EIjEZxuplO64o7++JAVlRBL3FKHsy71HbcTqXHmmyqUmZDr4bzzf558X+Bl2TTDLUKRAMpORoZIU1K67ldQ06x7KwO4zZ0Cy/yffee40/+PCPuLz8DFY9sdH2swnCATtCU5wJrlmS4mNsZek7S2rOuWufs/UgKdFTc92d8lr7GZ9u7+BjhTHCxzdCNEIJHL5NaNA6SFa2s2pwY4E+r/lK7FBFPLWRZY+pwpjmJBZcdnwEe8BJyjNkn+oljMQZJiSI2yIFt30qebykSNe32JOeU+9ZxwVdilipMjHIYPGcLwMXm0RIHWKEwRtqs2IrZ0hyVPGSIE0mcWq6pa+KnNzqWwd+h8N++UXvDz+T/Yfj+8QxP/5tf6XaSdnPaVBlxFgQw5TNuIgYy3obOF1sSHGBMJBEFYk+vYa373l22wrLnKubT3jzzHPHbhmi4/luzg93LfO0wsqAdJY+Rq6uLNFYkI7WGSrTMW8Hlk3FrBWWrWE+d8xPKt5+4y6nJ5YPP7zme9+wVG3kjftf4//+wcdf8Sl8ufIrQ/zASh4sxUCV0nPIQzEbtZompSwcw9CPx5Q1xVqnEe6CMrkRUtQIjRIxr+vGfjNuRCPtBV33DEk/c9WtTi0m50pEo+nJhraRmHNUNqrmEYujUe9XM2cIMajjO0o2AvLqOUSf65XGFVVEa26NZYhFMcAwKiIA5DqmuHccWKOKJQmVICSVHOaCzb5JZ934e20TVUvRRUsHoRFD7WqC2Q/MWAg0aCRHjCOdQBVJclQqIiRjSWamk6VFF/QQEVEKSol+2IMBOUIczf+ajOZEizFgJKnaiIHaCH1MEAZ8THQBqkolE4eg6Vti8LRto/lMY8S4GrFOJdCSOndCNKp8oPROnNENeXlm1hiiVQdMSok4dGwvnvPR+z/h8ccfs16v8FkFRif5POkZ3ahuPdTNjPr0HvPTE2azLJmpSDnR98reC5HB9/i+ww89fTeg0rdBnT4+EHyvJIvynPPUqhKTaI5k7yfjQEgScGIzSUhJICGzXYNojtVkiuy0oWnnVO1s3FS6qsI5q6MvRoahx3c7pNvSd5nYQz5P0QAbjcTClEUncNG79pPNewwe7wu4pcb0Yt6QgO1up0I0+RzOwt1G83I93fjcX/MCEwPPnj3j//2P/zti6PnP//5/Sd8PiMDJYs7bf/dv882vv8P//D//C/7wD/+Ip8+f4KOnaRq8q1ht1lyvt1z/+Q/59e9+h7fXGz69uMSe3aWuHDfPn/DR0yt24d/yre/+Np8+u+D9Dz9kNptzcnYXiT3N8g7f/fY3OJm1dBefcr0bMrghMCQkBfzgMcaRon4XQ2S93lJLQho7Go/OCc5mieAy90RlK8dYyFdqxJCjiZiQMTQ9tLZOSHsWsBTlITHZYaJRaokyB+gcp2STBNlhMPiefgijU9MgkJV8NCWNqhDZfbI7QtC5Vhf2vNAXh8L4n1fll71MgeHyHm4rDUzfHwMbD89zTDnjGBhbwPLpMVMg/9j5j90r7IHsw03V9H6n300VJMbItHRb7eQwlcqxiPdp20zbblrPY9edpo2ZpiE5RiqZAt4xxhHQL+BsacfD5zqt/zFyTTmmbdtbdZ4SP8r1y2+HYRjvoyhplHYsJISqqka1jZOTk7END++xgEDlPqbKIdP2n5bpPRVlgnK/U4JBud4hkWSqDDNt091uNyqbAGO9QNPXFOKIiHyO+AF7FZZpupxy/DHg/hCg
PwZ+T/tV6S/OOXaZyFrSBU3rU9rNez8SRKakjuk4KmSX0k7T+kzJWdMxXdqn7/tbfehQ3aSco5CADp9jeXalHKqJTPvjdD6ZElum43H67yFpYfr8p+09/c2x53VYjvXFl80zh/U4vNdjJI1jJLFj7VLa4kXnOazX9NkcEiym5z/WNw+VOg7JHNP7epFDovTd6bie9sXSH1/U/l/Ubl+2fFmSxHQOnz7XY2vJf4hy2H+nZJvp9f865TYo9PlrvCq/nKXs1RCNCo9R9wo2WaxEqAymNri6ZnlyzuLklKpqiH7Hzc0NQ/cZ0XdEH5i5QDVv8U2rwSopZl+H7meTOEzlMNYpuKx3kP3xB+Bn8SulPe5Q9tC6l8m+npw6AbKfZXT27vcu5beMLuDs7KdEPua9GCUYB0oqhZQUnD3qcC3phEXy8SXYJLujY3ZmCyBxlHm2tqKxBtdU1O2cqmkJIbLb7Hj6wYdsri4J/ZrKGe6czFi+/W2WZ3dBLBdPn/Jnf/CvuHj6mH6zIsTsPzAOkiOlAfErmpnHnZzSbYXdbo2rK+q6IoXA6mZFCgNVU+FqlWHWW49suy3OWpan57h6TjVbYKuG7eaKz55+gPQDxB5DAFux7SLJLji5s8RiiDjatqFqWtbbDXW9RGolMLh6wRuvL+k2O55+9gmxW1G7iDSG+dnrzJo5q5vn6ocSaE9O6HYdp/fv0683rK4umS9OmZ/cwZsaM6up60AjZzRVjbGw3W7YrtcsFkvm53cx7QzqBh8DabemD57KtVSLc1JI2BQJcYB+p/10F/C9QbBEH+mDp2lb3KxS5dch0FhL8J5ttwYRFrOahKG3FbP5OVXbc7K4x+q6U5UcSdhKgAGShZj7XMoqFDGqvygk+qDqFq6qNJ2vEZK1dKbicjeQnOX0dM7JvKGpHCen5yzmmprGVDWmmeHqGiuqApqC+nl2/U4Di6rsschpXCJCCDtSjOo3ynsP33UYyWSXqP6ySCIOkSEERAy2clhXa+CXugkJYYegvtyY/b0xZ0dSF2OxNQxkP0SOVSEGVS0tahzFr6j+ojx+BcQomSVGVV+tXa2puItvzYec1Smrl4jNZCyzT9+UItGrf7NEG+vYDqqGmkkdJaNKzP6/lP1mYnWS0nOCZ8h+mOLT3YNfyq9I6ne2E6Ltwbx3uCfLOWBU1aP4YT53rMnTkIwTpSmgbgZjBRnnxf39jLOmzmf7afRVeVV+KUvd1Dx63Gnas6Zmef8Nbn7xY5bziko2VDiILbVrqG2icg7TLJF6RtucaLBv02Jc4ubiEaunz7hz9x2C9aw+/RH37i2xb3+TEC0Wz/Kt7xO9Kob7XUdaXRGuHcF3pNSrLWBajGmwFbRVzaKdMT9ZUMeaD545/uW/uuK9k8h7v/Mez9aeHz+Bp9cdb97xGHHMTGQtkSEpuSPldQXKPqT8aSlzTkrFv1+mjuwDmh6b56iCt5W5og+BzhpaZzgTofNCN8TxOiJqo8WUaAmcNxq03Q157hy101PGyhz90LOYzYkh4KoZJ3cecvPs55ycvo0sZtioNl/oVtgUSe0ZLhlWmxXdzRNVUqqWrFYbTNjhJCuVIiO2IewJbgnJ2Ir69MUIISjBcY8LTfWQxh/uAXKmL0oLfX4vNp1WXwbcl/OX34ye+RQ1Vj0ZCBErAzNzxTUPCTGo3SWBpbviznLBljnXTx6x2UTq04ec3Z2zCgOVMSQJpHG1y88U4Wrn8ZKobKKZV3AT2Kb73E2f0MoaLye0QBeF5/1dzutLnnd3EDH41GGSI0lExKrdP9Za10PFGpR4IkJO71P8H1rxVHCPlBui/DuuVAUjKXjEtJ+Pq22x/nPQaxz7eXk+n/dJ6PEGxRAlRkIK7MKSZepp3Q0xVHTZ1o9JWFQ71t1cA7fNQMCRohCdpfYXiIkMvUAzp3Hr3A4OJKc6OrbWHttmf8mt9zH/9+F3TL97gU/m8PXoy0j734yjYuJ/UZ+PPneDwacIyXF23vLO6zP+4qOEiEWCh2S46hOz3Zrv3F9R+RX/9W8FrgNcredcrTwiF5w5SGwQk+jiDkfFeR05mfWE6NnGCpxl6CK7LTyJkbfemPHuNx7yr/7VJ/zLP/wLzuct87nwn/wn7/G//we/xyePnjKffz5l77+P8qtB/BChdlV2upaBpOCj5suKY25WNTwL2cDm/pMBSbG6oQghp7XQjpVkH4cRUyB5rwZtMXbz9ZwmhCWOzGnd/I8ye9lIrpxuRkLQic8ak1OwhpIQahyLQsqSgCoRmLKlrFH6FhB8CKTgVfozx6Nkj4HOOGYf4WxNlhfNvgjN6ZZbJCXEe2WpG4GUgYmUVKEkL1y6EVKunnHahia3DalImtuRADDmzMyTlfJulJlZFkBBVMInBoyxkGWIklXjIWaZbohUztL1A5LAVnqtFKNGAyRtb7JDWCdWdYaIcGsjlpKSe/q+w0jQDVwYVN1E0gjaAJrGxDqSFEl47XeuaVWJI2hKlhg0Z5+EgdhvMNbR9x3PP/uEzz7+gE8/+4z1Zkc3DIAqg1gjVHUDtqaZn7I4u8vp2R0WJ0vaZkZVOazuTUlR1Rcgag7WEEghEuPAMHi87wlec6iG4JEY6Hc7+r5j8J4wDKqGkqX5Ix4xCQmqTBNi2ZZFUrR5o5fJTwkqZzDWKVHGOIyzVNYBRokBomoPRhyEgcH3GCIkDyHpa6I622zIDEtV0tG+QllJi+AWKTsnxtx3BpXNTIk+KLnHh4DLfbHKz+R6taXPOfacsVhjOKksw8zQof0mxEA/DDQkVusV/+Sf/H+4ePaUf/i/+4ec3r2vfVzg4YO7/IN/8F9y7+Hr/MEf/CHv//wnXF9fAbBoWuazOSFEfvT+R3zn29+j+/MfsNoNdF2gahdch4HrZxs++Fd/yK//xm/wve9+l599+DESekxzwre//xs8PNVn/eEvPmA9QEga/YQxarB7T4xhtPKMFbohcNUFrtZblosZUVQ9aD5rqIrakRSWdW5jyQtmQg3pFAkhO1HSXiZr8J6EMs79JEddSiUX7ES2Tvbb+ZhyPmxjsVliWAoZJKGKS6JTVAgJTBjPM85tCbyIOmvyxsAIGONwZU55VX6pizGGuq5Hx1sBc6dlajTt8y6bMQ3IFMgrIHJd1587xzRFRzEeC0A9Pa58V0pKaUzTMf3+UOUAGOswBekLIF/Ay0OlhpubG5qmuZX+o9Sz1HFK0ij/TpUqDoEM2CsiTNt6SjqYKoeIaPqOQl4oaUbKd1VVMZ/Pbx0H+1QfU6B4moaktN8h4H8Ioh6mYBGRUTGiECDKcwshjO1T1zXOubGdttvt+Fld17dUR4BbihMiQtd1t9L4TAHIQ/WMUpdSj7qub/VBY8wt4kbbttzc3Nw61xT8Ln209InyjEo/LaQW7/1Idpm2XWmbcg/T/n1I3jgktEyJNaU+5TxTEs+UjDFVXymEjkNSRHk/7WclLcwhGaLUa9o+h2SAUufyu+mzmtZrOv4Ox7uIsF6vb7XLFMSejufDuh6SA6btO03bdDivHG70CrmotPH0d9N0NlNSw7TfTl8
f+2zqXDjm2C/z0lc917EN+vSzwzF2bGxP2+Dwui8iUHxZZYzDufpF5z78/mXpWV5G+jhWSp/7suXLOgG+7HGHRJl/X2U6rv9Dlhc5gV4RQH75SkoaSWeNppJNMYG1NE3N7KwBUYVQHwPdbsf1x79AoselgLMK/honmNrR1o5+SLSmBlFFQk1hYEg5pS9isFVFXdcYY0kx4INHvHp7JO9DDHkeGQll8XNzWSnF8Tv2z4mjt3xfJKYLeLuPAyyAaVmDpr8rPpzbBOI4+pb0lPt5mhy0lOcAE3UnJ6rsUFU1dd1gnMWIaIrb3Zari8fstmvC0FM5w917Z9Tta7iqJaXEdrXi/R/+FaurZwzbjQafeD9KZxfFgDjsGPyASy1SV2zW12y2K6pZSxNbVfsMiaZpmM3nVJWlHzr6oSN6jwNOlqe0y1Osq0kidLst22dP2KwvSX7AiVXVVuPou44Ywbqa6DXNqHWOPvRsLtfUTUs9PweEmXP4oWO3vqG7ucQkT3IV3dBroJYzDGlH161xrqaZL0lJmDVzus2a3S6njlnMqdoKU1Xql3OBtp0jCF23wjpYnCywTusb+wHPLgcIzbCyIAye2HfYqtKUJyFQ1a1GeluLqxv14TWGmbMEH4h+v66EpCSpZraAqMqrfd/T1C2xSSxmS6r6Ge2sodsKIQ1EHEP2p5hkMqFA9/0hBIYQGHzAh0jlKgoxIllLZw1rHwlYzk7mPLhzzsMHd6nrVpU+5jMlaBgdcz4GUlaIsEaQCK5qISWGsFN/m++JcSAOPX2/I4WAq4TKWoIf2G23SqyoVLHPOgtiSUn9n2JSjliPREk5JRNoGuOYiUQFTEwjdlhsH5ikdaKk4dYUCCFCSdmdUsZ3btkuZozOjjHQ+35MPW1M9nYF9ado+mPV6EXAOofLp4lWn4ESTAoEVmAxJXzFqPOjARVmiRkcjTBGKCfwEpACeGXlVZEcSJPrbcRiinbRrb3Vfo66ZbcWfy6357z979QXrrWT8bj9BLaPft5/dog6pfHjV6v7q/LLXIYhsF5fEpLn5OQ1zu5/g08//AUP3/429aylak+IgyemwBB2hL5jt13TX13ydPcjUn9FHAJGAomKADz54AeYGPjeb36Hi5vXMe1bEAfWzz6i/+EfYM5e17kwK53H6EEsQotJAT90SNowrDv6uOEqdGgAcIVpTkE6dlSY5j6fXnoeXUVWvRCeJ84WGpBrJZHpnzr9pMkIL/7jPB+NY7xMA7fAYJAxDUgajxlTTEkaD197OPWR+wth8PCkpFeRMo8qmnDeWM7mwqdXnsEfAMfjdYVud8Vy8Za2rYk0tXD65re5ePILUl8zP7/PsLlETEWY3cVvt+zWz7EW7OwcY2cMccChCrzWCCHErDilqQoTZF+7+rszl08JeQX7mBALXoTPw8F8OSFr7JtW9k08aeCpT0sfz+f3ioWoJ/sPSES8WBo8S/uEm/Q2KWqKlpgCVuC07ejDkuubAT94rBOirbkZoBJVmCPa7MVPo+lrBIiB2lWczoTdzpBSYOMrujjnrL7m2TBXexMF+Ae/5E57wWX/ILdnJI7rzUHjSE5xllIO7NdOWkKMyd0wxpJyJldbGO2H0i779Wz629ziI4CrQe42Y6wl+4zIvrtru8NUXR+M8h+LamAKbJMlMseJJ6QNIdWcVpGYHJtuQzI1VXIkB7XxEA1iEltaJHUgtT7PGPfbjc8N0iP97AhHIx2Om0kfmv77snK7P77gmIM+SkqY8X1GKhOKWZW2LT6fjPdrJgtDwnN60jKwpjXCSTtwWq05rTyCw7qKdXpIb1ru3p/z7bOWujFUFmgr/uif/THP/uAvuX/vDr/x+7/FzdWKy6crus0NbDqut3C9i1jgZNZRy5w//uNf4JwwX9zlYt3znd97l5/8fMc/+ef/lnfeu4e1lqvr7gvb6quWXw3iRyYiFOepLhR7qT11nntdBIyjzDLTiVWyU8AZzQ2ph1hl3pWD8hyi6htmnHFVUSMqKJtDKYzZEyIQA9aORrXNxrQ3RiUaU6nDnrRCUiqEmOJgV8M9JgX0SQlrDTHLREURfIk2zTkaU1CAKSHZ4a5GurNVHsGp7A10E0SWNU0JUswOD1T+KpNRjEDIKhMihhTCeL0y8lTJgTzRRVLwJDQVTcjEiDLpmJwPU0krk5xNojc2nRxi1NQqwzAggKuqMUclOQ+lDwMp6UIbk6oMqLqJjIus5rv1+CFmBRZDiHrfCmhH5rVF+jUpBozVVDLD0I33iCScdSOQ5YyQKp1YYxgYtmsuPvuEJ49+wbOnn3Gz3rAbhj1h1VjqZkE9P+Xkzl3uP3iN87v3WMwXzGct1gh+0AmhLDq6SGm7xJQV1bKBMAwDwXvdrPqQFWoSKXhCDPgh4P3AMPSEDKoMvaaR8d1A1+0YeiVKpKiSlzYbWCC3ViwRJTnVlUOswzinfUrU2IpDz2azpus6+m4gidfNsez7hibqFaxxmDpqPjxy188vrFGzrXA2i5JE2aSLrahrjeaokqqOFEJArBxV3bDa7nj2/IrVdoc1BucsS2uZNy2pnjH4DJZ6z3az4coY/vCP/piry0v+4T/83/K93/xNwjDQ73riMPC9X/s6b735Oj/4s7/kB3/8xzz59BGbzYrgA01liSnx5Nlzvvu9X+dP/uRPqc/uYq1ju92SiFhjefzpZ7zx4C7f++bX+fhixa99/zd578GCpqkZVhf8/BePiFJBiOr8dBXOiEp7pSJplQgpcXp2woOTOd12x9YHrDG0FZrXOaB93RqctZmUkahspa0qog8iaXSK8YYUIj56HSujYk7MgiCq0BJTcdLGvRJPUmM1ayspSQyfDQ194MVBIqhyUCJp8FnSiLlkNBdkdlFi8lwwSr+SEBvA7SXZX2wOvyq/TGXqaD8GAE1B8/J66iA/BHWnvz8ENafR/YcqAdP7mb4+JINMQe5y/fJvIQMUgHEK/BcQv9SppB8piglTYki51/L99LflHqYR76UU4L2c47A9pmSDaRtNAfxDAPswzcL0ty97f9iG02dxSNiZAvDTekxJNlNSyfQ5lfuakmsKYefYsccA8hcB0dP7nbb59ByHBJgp0eGQIDRtw9JeU7LNlFgzPXf57Ww2u0X6mD6vKeGh1GFKjjjsY4dEkkLYOPacyj3P53OAkWAzPW85ZzlPIcOU9ij9soyFcm/TdD+H/eaQXFHu9/Cz6W+nbX74Nz122maHBJdjxKlyvWnKoMPvDvvOVGniRYSRaT871gePkSamSibT+hyevyifvOich6+P3cuxDf/h719G/Jj2qa9yDy8rLyJ5fBH546uc+697b/+u5atc7z8U+eNl5f/f13tV/uMvicQQwTUz2sUJVdVqtP4QGLot282G3XbFdrti6HaIgcpWRCsMIpjeU9UOV4F1FYhBnCp6WASswRpNH5G9GRkwLraDh0GK5zcHxWTHvDFYW+lexaqEuc7NoI5eQwleGPfrsidvfH4eyzuh8nnS86gzI/tiZeK4Hp2ZESn+mmKbFg/0uHcGyUFB6tNVhdKqbZXkYrP6QQ
h0uy277ZZu15H8gHXCnTt3aRZzksDQ9XSrDc8/+4TN9TX9bkWKQdUUbMJKBdbg8qYw+IhPGqRE0jzw290G2wtV3SpZtjYsl0vauRIAut2W7WYgJI91lll7wvLsjLZd4vuB9WrFavWMbnOFhKzkFdEIaISQfRspRgafmLUtxMBuu0OswTULFuf3kaai3665vnqOSKSyFtfM2fVBU/nahn63Q6QnGEM1P6VpZjirShO77ZZAz/LOgrqdY1ydgxyEqmlo27mqvIZAsokqtZl0UIGtaWcn1O0pPgm7rWeIPTF49QvGSFXPsVYD2ObNTEksQArg/cCu6yAlDeiwll3fE4dIZWuK+qatBPEBY3N6wBhwtcVWWXWth+gTYhVFC1k9lexbGUKky0Eg1liwqgyRrMUby844Lnc3SDPj/OyMh/cesFgsaGdz6tmcqlGyp8ukozAMSFWpHxZ1wEe0f4i1ELymDe5WpKGj320IwwAkXFWhQSkeSGzWK7UpG1U1SVH7VF3XYC0GIcU8toqzP+0DTm7jB1MCa7F1C0l1alequrKkSInYLY5gHaoF+skBM1Ffh1QI+Ta7zxIxqOJqSdFUxr+zDqqimpptkxDxqJ8vjZ6VmH1/AiErq443nLVXYyRIVl0eQaps7yGq3CplHtwTtYqf2Zj9fsoYmwkfNoOTqkBdSCFKmDHj+zL/7efzMhexBycn93X7qDKP5vqng69elVfll6ikmOi7HbVLGGd48vhnGAncXD2h7RfIaoezmXBmasRa2lnLbLFAuIM4ixjD+uoJm08/oZYBd3qOcTP+6idPOLvzJveaCiMNy9PfIPktzz/4E84ffpfq7K4qTFlDkqLoXlFJRJzgjAFxRByqyB2YOcP1Hz7mv/ody0Uz8NEqsOqFIUUuO/XROyNkFGdMO8Vo64BOmmlUURIx6i+OKQcRSlZlV9RH0p4gKyX4WrIfOE/oNgm9wKYPvHVqeW2ZWA2GbecJIohmtcCJ4cGZ4aQxfOiFKD0KGutsVALDI5rCq63mOOlxyRPFYsOWs7Nzrq4/45Mf/ZD2zls4wD/7kD4EnGtgdoLvbnCpw998SjW7z7DV+3Yi7NI4vau/I6mfW3GnokCe9k2Vax8n8/h+Us1v0uTtwavx+0wKnHrLC9ZyFHyf7uvZz9uQ5+VoqfGcVldcx4eEoJhEkgA+MTeBmdty0d8ntYa3v/4uq5tnfPzkKf26oz6pSF5T1JsJnmMQTEpYEh89egYpcr3WQNcYI8+HltfsBc68hkhiZhJdhD5aQphx3z3jyXAONlBF8GNl1drXVbQErWpaFzFG09Cjge2aUi3imKQcE8nwVwGB9634eeLQ3s+iy7JiaGIyMpLIgdva35TEpKQPMh4jKSm2IgYnkPCKk8aBrq8IEZzpWNiByvY83zXMK4vIDieRxkRu/I6dFWpT4fwaL5aIMAS1Y6QqGoNx7Aukg04y6W+J437b0YaavB5x4HEPdFup9ZYvKs8Rh36jW9cq2HJMSFRc93NFQXT9XRSSiVgciQApkLCEwRH6nm+eXXNSBXxsuOpr3l/P+BvvzNlVws2TZ3x8GdkFsDFR1UJdGe4sDMPmAueE1WaNrC95cy58/eszdt5yfnbKrGn5i5/+iPfff8x3v/0O8+US18wAYb3aMATLvbOKxkRee+NtNp2n2w3cPZ99vj7/juVXgviR0I0skogipCCkFHQzlSJJDIMPGAPOlIUkp0Ioxqrs4y5qmxeESfoJMovLmBINJiRCJoGoakcBrIWUSSAKiopuU3SYxahsMpN1LowQosoFWlDDPOdFs0bA7gEiPacZN5d+6NW5IUog8XHIhvXUUbzfyFirxkJRPFG2l9bdWqNMqswQj1EZ/xA19Y3IXtooAtbljQRje1gxGKvkkZQ3bCElUkgkPN4HBf6t6LwXIohK6mOzf0MMmkfTEoOSFxAI3ucoDovLkT82K0yATuJKzUoE8Qp6hz24EkIGGAAnZGA6EAZVKQmg4Hc+3/2zOT//8Y/BD7z93jdpzu4q6cBompuYGfVDt8O5CuscBsPm5oJP3v8x7//4r7i8vKIbBp00xWLdjGZxzundh5zfvc/Z+R3Nido2NHWFs4Wwo+1kR6e/tqexgmjy0/xXwER9CCH4UanBD7pZjpmQpIzTwDD0SI4UGIYBP/TEweOjZxg8Mejxfd8R/ZBznCaSVzURQ6Tf7dh1ShwJGTRomwofVUIrek83eAY/4EPJ36rswxTTuPFUBmWW7E9Zjpe9ZKWSdgzWZdAukxTUx5Q3iFZwZdEiW3nZoJLKcGZnOGd5dnXDzWrDdheBAdsPVG1Pc3ofcRYfI123wzpD5Sw/f/8D/tv/9v/F33v0Mb/3N/8Ws5MlN+sdXddjU+J3v/8dvvXeu/zwp+/zkx//iM8eP6LvtnS7LTerFVVV873vfZ/HT55CGLCV4AOEOLDb9nz4yVPe/vo3+Jt/5z/lwUz7Y1s5fvKnf8mlNxir6ZsQcNZSmf2Y7rtBnU2VY9lUWIFZ21ANHQOGRZXBUfUPQFQmdoiBEBPG9GpsZ7vaGI2wVsJQUonTpMSSIrurC38GMYuKhzBmiDEmA0kp7hViROVRkwgRVT8yRjnkEclO1fwsYySkbPyzNwqMMTQmjWQT3RQU1Zsy+78qv8zlGFD3osjzKfB+GOV+KE8/LUU5Y2qkvoi1fAxknZJKpmlIpgbvFFQ+JKUcqgqU3xSAfAo8T+t9CKhOy4tIFseAwGPkj8M6Ht7/9NqH933s3NN6vwgUPqxneS7HwO9y/GH6ksO6H5I/wqgedvwZHt7LF4GYh4SAY5udwz4yJX5MP39Rm02JI9N6FcB+SngpZKBSh3JPU1WXKclk2n8OiU/lu0JwHe3IyW+m91XG3+FzOSR+lHvsuu5zRJ5yfEnNU4hRo/rawSZxWrdpOUb8mF5/qsIxJcSUY6fnnqryHI7Dcq3D+zi8t+kYmvap6bg6HEeHdThUpTl8Pf23qCVN2+xwPE8BxBed80Wvj/XX6etjc/Cx19M2+DLXf1VelVfll69YV/Hgra9hnYMQ6NYbtpsVvu8Z+h3Bq1PUINSuyf6bvMcJGnSTUiAMHpxHpEKCYK3KPhsjJAvOWXWKGA22IQWGGEllzRQhZSVOhAyS79MYJNRtkoSD+b2sq8V2KlGpL5q7bqt6KKiRY1SLszOrA+Qj8jwYJw7m8v0EKRUhSsIYS1M1tLM5zlmCT/RDz3a9oeu2+L4nxQESLBcLTk7PMNay3W5Y36xZrS7Zbbf4TtUYYvSIU19T5WakQiqNGjiQgofkFcgwDiMOZxwGhxhL08558Mab3L33AAEur55zc3VF9IG6apjNF9y9/5CTszvsug0XF8/YXD8jdDcM2xUQGfodMXqaZsEuqN+M2OOHnqadszi9Dwn6MBDFcXJ6j9nJHXabHX7znH57g++3xOAVOBfHzXrLsNvQOEtTOdr5Ce18Acng/UBIqohVVRWz+QMlviRVxbC2oWoWuGpOsg6sIcYdZIVWgieJxdSqQtr3HX2/VZJAMjkQSf0Vzqr/b7vdEW9uEKNpj
dTTZDHOEWPCp8Tq5hJItPOWGHf0XcTYCglquzlX0Q3XnJ6fsd1uuLi4QqqG7WZDYyKV0eCPGJPK8sdEFEMXhc6DEadpi11OV+xqdmK4HALbEHnz/IQHD+4xXyyZL89oZ0uwJqvqCK6qMdbgY1Z9DdpPNTCPDPj1GpjUd3TrNaHf0W9W9F2PcVb9exiCRNrZDGsTw+DVH0mi6we64GnSnLayiGuwRn1QIqqiDJrWJRVH6DhE9na42l1lzxeJySPZh6FKweofjaIRzcXnof+ZSsln30Tcnz/GiLEmAyORGPNeKSjUVVKoGGuoTKUKPNbS+wEGPVdIUQktxeFCJCXJqXnUDRhTzF8lggmZ+LFX8LCZ1FHs9/2ctbfHrDWT/eb+r6Qnx+izVZ+5PZj7pqTmfTunkpYaFbP+Mv6aRHqV7uVV+eUuYvBYTDLc+9o36eWMfrni7uvvMj85xdoZmLLfBWMNBNEB5YSb62fcPPqA0F1g2pqTO9/g/OE7VO2M4AM3zz/i4pOfcH7/PZan54Qw58F3/i4XH/wJczpOXnuPkDxO2qxsoDiLmEoDEGIqXlogMYSOodvx1hvnXK1mXO4GfFTMK4mwHRK1K3s6lDg7Imp6DvI7MQqza3Cq4nAk9U/rcljgdJ2H03gX+fwZ4DVAlETyA9cCN73jrXPLTRd41Cf6FEZ7ZW4N755ZQkpsu4DEmrmNBITOxxFPsLFnVg006Qq7voR+p+rm9ozd4LE+Mbv7FtuLJyRb0bY1tYUhbOBmR4hqd/R+zUmosVawogQZKcGRosHVJTOAKukXcp+uE2X9EErQchrrPpqGpUEnRY8rdmexEfP7bIsqJyeVRz4+Hr3m7XPd2nbnY0zcUvvHXJl30GU9EcOAGEefLGfNI/oucRUc5wtHSAN0N5ws76ky23KBkyqvPUzuI/v+bc2TZzu2aY5JHkPCJMvO3CGk59Rux44FjoA3CQbHEA3XbuD1esXTfqa+Mb9jMHay4iiG4YzgrJKUFCOJOZ1OVopJKQeLMxIiTNIb3VvZagSMj2GCg5ZrSUqjMjr5fL4AGUlXRQ3AVSUYPZ+MJ0tJiKKq6yckvIvUsWNIkbaGhbnm+UXHSejxVa0B/C5R1xVnlee0WtN5oe9vCKGCSm1aIz2JmpRc7kj5IaR9PT63SqfxqIO+dtv3o8fJSGjZ98Ev9guX4277rScBmKXtpk906gvM70MM+sxEMnalY+vi+RpHx3wWWA0tAWHeek7F8M33FvzBv/mEdS+8dSpcbyIrr3PHZgPdLkEUXPBcPL/kn/3rT+l7jxjLdht45+FT7p0OPLsQ+niH9x8HZvWaP/vxpywWgW+99zp/+OfPub7piJJwVCSJXK48V+t9wOa/r/IrQfwQlEWVQszSjwlJOYoj76GLIzyJobIq0a1McDX6bXYiINCHMEZ0qFGrE4aO14glgZgciR4JSWUZjWhuxxgC3vdU1lJXDT4VVY9su2fENcVATAHvlXwQrcscrIJjRyDkzUgaB6n+rtQ9KOFDJPsDsvPf7uWdYrajiZEQekqKFcl1U2dF2agUBqbVXJrR66aiOClS8ZnEcbPkSeDzJOdRhQxjiLkiZXLXy2gqHN/nNBLGYAwE0aj/EANDP2SVDSHFAec0YsBavacUlS2ZokYOGKfEFAMka5VhbnTDFbNDSARsTgUSMzg9n83Y7AaNcE1oaosQmVWWnTO8/q3v8+Fnz/nJT/4Jv/U3vs+Db3wXK1YJOEmZut4PrG+es715zrNHj3j0+BOeXd6w7XuMrZkv73F6/yHn91/n3v0HnJ2e0TQ1VtA+Zxj7n3aCTMgRyfU2kCIpZZKL3Ue3W7uPONKULA6cpapr+mFQIH8yUWpEjDprnFWZrK7v8YMqo2i9dPkNMUIKxJxLNgbPMHT4fke33dJ3mj6Gvsf7npv1TtPcWKNKHTHQ9+pc0fypep9ibVaI0Yk8pDT2d92XF9eXyf1OnRJJ1PgVtO+V+T4lm5VyMoshZZqCKNnEmMhiVlHXZyznLc8vt2y2u0xMuaFZLrlz9hY+ZJnTruMm6aL9URr47//Ha/78z/6C3//93+Xb3/9NXFPz5PFj+m6HhMB3vv4233jndS6u13z86DFPnnzGsyef8ezpU5aLJcu25dmNJ2CpZnOWixOWZ3d4+523efvBGS2a8me2WHDx87/gh+8/IrlKo9xCRJzFWct2dU3vgzrqRHDO0vvA86sVcd6wnDXEmDi9c8a9e3eonKZ8cVYJQkro8PS7Tgk7vqMfeogRGyIx7CPhi7mvSkLFe1IsAvU0GGOzAa7EEZuVe3TPH4lKAVZpUslEnGI5GM2vF3POYSVwJmISddYkTeGjhyoz3ZgskSfKJi+G2Re7EV6V/5jLuAGagNilTAkKU2WLQ3B0CkQfI4wcEimmKh/T6x8Dbg9JJYfpZcr9jfbHAZHgEIwuUfrlfQHbS/2OAdnHyBSHbTet4yF4PSUiTMsUEJ6SAaYA/4sA6UNiwNSgP6aicgykn6a+OKZcMnVilt9NlQNKmT73KSnnGNFj+myn5RiAPX0GU+LHsfaY/jslc7zouuU85RlNlTD26/+eFDRV99jtdreuMx0XpR2nBIvD6x6Sfkr7FtWRQ8LK9HdFPWLqXJ6Og3Ltw/FY+t80pcuUROKc+1x6osPncUjsmo7Lw2c4rcN0fjh27DHi1WEKj2MElOl1Dokfh7+dlmPkr+n88qLfHr4+TBNzeK/lbzrOpuc5Np8c6/+H538Z2eQY6ePwOX1R3Q77wKvyv+7yVZ/XK5LPr2axxtKvd7q3SgrqD92Wvu/2fSjvMwLqy9FIuEASgWjwXSTZgIQA7EidQWxNTJZkDNZVGJP31VaBZRFNo9e2bQaVi1M6z1fWYK3DWpcJ7uooMmgqy/3cX+a3MpfFHJGvDkQ+Nw72Dt7Pe1VhT/rYO2ZJloKnmFTWury2iqbTsM7RLGY0TUsIid1uw+ZmxzAoOSOEngQ0Vcvy7CHLkyWbrufy4jmb9Q2h64idV1WPMGCJSKV79xQF8REkEk0gRE8gYoKQBrBJAXhQMNvWFbZpePPt93jjrXcIwM3VJZ89+pj19SUGYb485fzh69x//TWGoefjjz+gX18Qug0p9cQwEIMndltNVZsiwUdq1yDOIMaxOL9D0y7oth3DsKNuGu49eJvttufRz39GRc/15SWbm0ua2tIsZiCW66trsI7l3XucnpxQu1od7sBud41xBonC7OwuVVVDjtSO3RZXQ9POFGyL4GxLCglnBKkcYqBuWsRWBDEkqXG2xaUTUoSh7/C7DiRR1zNMErwfmNUVgzUaCGEsbVVr0JSADxqpvDxdItaREgy9x1aqDpqC9oXVZo3YRL/bUTeOk/Mznl1fc5MSfvAEY5hLojEJJzoGvDWstpoyZ940ZBkIorXsYuDGR55uepxruX9+l4f3H3JycgrFDnVVTrsiBNG9vHU1IhqIVEArQyKGiB96us2abrNic3NNGgZS8OzWG8RVnJyc4uqafrdl6AN1bUnRIylRNQ3iKlWXMeBDoA+BymZpdWsxsRA6jisb
7v8m9naKOagr22Ap+x+y2kVOGpgHaDmnORi+t+0fHwJi9qQISVkBJAUlpaiwENZoeqLGOexQM9gB77NfJjCm407RayBNSkhU0LSk2dYAqgAhaxqZHHCV1BasrB2DdG6TQByF5ILROc84VfcQowGKBdOa2tvHiM4FYMyNMLZLmrTQGJw4ApQ5iDD7A19l8n1VfpmLiEWCZ95WMHhWm89YnJzSdVvmyyUhdBgsKWqgagw9uIr1+oKrR+9jhh0eoV0+4MHr72HrGR5NC2METu6+wfLuQ/5/7P3Zsy1JduaH/Za7x7CHM9wp58zKAgqFAhpgo9lNiew2yUjJRDOaZNSTnvgkk0l/lf4BvYsmM4omioO6SaK7QWIoAFWVVVmV4807nWEPEeGTHpb73nF3npuVBQJSo+q62bnn3HNix+Dh4eFrfd/6vudffsTtjx/z6L0PWfRrmt/6x1x99mdsPvlL7r/7AyKQjEEkKmnPe0JRJwAOeXgk0xlLajM3G89GRFXSC8ibgz6yttg9CaVYLxWrZJk98yhJbRx0XWedwxpTbE7sLLdUY2WdHw6kh6TK71kE4yw5JfY+8fnNxO8+6vj9N4WbXSZ7W3IlmQdr+M594S++iAzJazFJIQSHFAlJVcw7ydzr4Y1HbyGtY3fzHJolu20kRiHfexe5fcbqjQ+Yts9olpeY7hF91tWeGZ4h/oamu49jJEvSd5tryAc4Nh/Wi6oSpeBHJWPAMZ+SMwfF+gPHlwpyH3/QabTmDI7WLBmYCyQcj3Gk0bxE/ij7rfbqlHdg3VeKE+3+Z/juHUJM5f0T9F0TPBduy4obrkbHdQSeX/Gjfz3y+Mke6a548PCsEDINWm5d36X1uhRfe+fdezz74WesWpAIAU9mwe1NYn35GY37EMkW5zJjDrgIIfVIs+eds2s+vb1UdbmUjnb0BYtsTFGnEaMW9gktjM2oXRrytXf5QRnj2Pu8fDfqDZmtC0QONj4GIVYZqyJQYMq9SId1vLxE6smiggHOBBDP6IMW0ZtIisJXw4pmuVYlOn9FiyH3l4zZMcotYz7HpxG7dCynHWMeMSZhoidmeyCHHvr/FWH3KbHj+CbPx3M9bHOcO/Lh3X78+eUQ6EhQEnl5++PfS86tKpXwdfWQlzDOot6WY1LFl4JfJSO0nePhOw/40f87MplEI46cIx++kfnyGfzocwHnuGoiDxeeaWyZsiq0hRRxCJFM9okpWGIW1q3nf/3vvsN2n/iXf/oUH4SQIk9vPevFjj/8wT3efvOMP/7T58QYaZyOuyRK4JbTS/5bar8RxA8Aaw24ajFAWQjLYSCYog5hrTskl50oA8uk40QZo9eZ0rhDgB1zPsgxkaHA5CjEWUDrlDEOqKy1LMSUGaYBIwbnGsTqpCZloMYShOWccIWMotKAWQHWclYqE6jsSA1Q1IJBiRNSJElrpbN+rPZDFmW2GVOCuFQCxFw9rkwhgBRFE7GInflNpaYokigA1bYttmnx03iwkpFUKvZEBz3eU6W4NKgq7MYDycRoMFUUSFLKxMmXvzvEtno/jRB8wYqNyoJZMVCJarmAIDmXSUEJBtk4jFH7kyypBC+qIGIMGJwShMTQtYlxLJYTCbbDwGa7RWLkwcUZb7z9Hv/Dvxp58mLD27Yh58Bwe8P2xVdsrp9zc33N4ydP2G73JAS3WLN+4wPeefgmD958mweXl5ytVzTF680gxBRmLwZLMomc4iFplFPi4ANcWLCV+SmVoJRVlcVYV941qrBSbXecGMTpAFYGXCrgjTtYC4loZUaM6RDg6TnpOEnRk4IqtaSo0vHTOBL9wH7YE4pdzDDsi9XMRPQTSlQJ2MaWcanPhVCDRl1wZUnFr0vwORBSworaC5GFnMyB4GRtZWAqKzKnqgKSERM52CQZTchlBFJRkEmq4nNv5VgvFjy/3vL0+pZpCrx48hVOLO9993tsd3u2w0AcBmII7KeBYRi52ez4+NPPefdf/Hf8k3/yj/ngt77P3geePXvKsN0QQmDp4HsfvMN333uLYZi42W64utkS/Mg4eRKZZd+z6jv61tCQiNOAaVtWqzU3n33Ev/off8iGFgpxLMaAiGWxWPDk408LW9hgJBFi5PL8jFXfMO33vNjuWfULvvOdD3nn/hnOOqTYQMXo8cPIMGzIPhPSSBj3OuaoOUlVL8JoZUZNJyoZ25DNYWar0yJIIR/VRakI1XcvKwNEZ0gp97MmODUKwEhRf5FMTpVIImAgJSHEiDGZFDJto3NZfUnmfBxTr9uvdzslCFRSwDyZNwdUT39/CoDfRYion5v/fJfSRA1K5yoK88+ekjFAg7i5hUPf9y+pB1RAu4LxcyB6v99rADMDput5j+N42G/d5i5g9fSaKzngtJ/u+twpSWFO3JhvU6/hVaDxKcmkkgjmfTS/X/PmnHvp+k5B9zoe6rbza56TZup5VouXpmle2uYUEL+rnd7fOfGjrpHmJJc5iWd+TvNjvAoYPR3Dp+d12g/1GNM0vXSMVye8tT9Ofz/vvzrW65q5WuTUv8/Hxfy5PO2LOWGlWvfMySDzbe4iN50qbJz2Uz2306T0q+7t/Hf1WZ8nsef9/SoCy+l2r+rzbzOm6nb1+/w5OyWhzLf9pp9P7+dpf9X2bYgfv4yEcXqc+ZxxVx/8sj75pvaaGPC6vW6/fi34id32Ckwq9q+iMgjJkUJRziwJ2EzSmDSbAjBkVco0mRwyaSrFMmJoTMSJVTXYpgU7AhaMKcUBlrToMaaqotoCZjusbRCxql5hVZWzWr8AJEkYis2EyRo/4fRcRa0rS0KoJEQ10Zrz8d0tJev5MpVdZj9mco5UhwipFYpF/tyU83VNS7voERH84Hnx/JoQPCRf3rkG0xr6/oL12Tkiws31FZ/8/GPCMJFzJOdi9WoSiEWMYKKCJyZHVVF1Bh8njM2YmLBRrWF9M4HTe9e6ntXqggdvvcUbb76Nj5EvvvyS6+sr9jfXRD/huo533/8ub7//HaZx5NOPf8G4uyWHgRgnWqdxfpwS3k/k4EkpYpsF67MzEO2xcRi5efYE77/AtB1vvfddxLb87K//lPHmGSRV7BCXOTs/ZxpGrj75AovgFj3nD9a0Vpi2W3b+hiSCa1XFtbUdjW2w4oghsdlvCT7QtQuWyzXWaKWlMw3GOqTR93NzUDVVa+RpCmASuEguNjxNu6Lv1mDAe7XkNW3LNHlS1sK0MEwM+4mz1RnGQn9+jnUd3geGYSD6kcZaojGIFfzoIWVWyyXTuMc5y9n5BTe7HU1nGTJMXvA+cK9znLWWzgmezO1u4nYInPctwRpM44jWMsTE9RQZkuAMvP/2Wzx8+BDbdmAdi/WanEq2QVSVV6TYD3uPcy2Iqu5aa8hRc1fDsGG/f47f32LyxG6/VXVSY0g+EP1A2wqrRYuIVZWSzqmyiuvprKVJkZIgQ7JaHaecceIwzijoktQ28KgYJ5q0LDmEXIgedf0ZZyQKXcsmrKk51KwWOAUKyjmV6nYOObWXVid1HRs0t2ltJYapmrEUQpdQyGEBRAyda2iajhh0jT7
5QIjhkE/NXkHBnItidLGAoqgrk9SCO5mEWCHbhJiMO8QkWQupbBEdMUnVRoxgpajVkA5EHqkglrEYU5RbiypHKnmchMyk+0t8rVev/8ospmGOMx1JICLCy9qNr9vr9mvYjMVZ6BeO1eoeXz75Gfff+hALNMWKTlKx1jaC3+356qd/js2qZp5cxxtvf8ji/JwUIRBKrluJdjp/Ge6/833CeMWzn/8Is1jxxtvf4/53/m2uvvpLvvjpH/Pou/9E10AkfOZAQMhlXlJNjERIiUYiMS24iRCQAzmgkg4IovMNgqjn1VER/rAdB/JGDJPOuX5ikoxYwzR6pmHAupa+X2OdQQzEFEpRLCWH46l4lA8eKwbxSz75KvBPftvxo59veLEZadoF4jreWbWcn1lufjEChkbAZ1W5XzqhwdA5i0sjZxdLVk3DzShs9wazG9nnQDYOu7nCLu6RTI9dvcHmy49w249Y3fsdIgLZk4Mne4hBSQe7myuSW9CenWEEvC/voaQKbFLwQS3cKYqrJU8eQ1RcxOh6tObZD+SDWRFsSbgrHilS3hXpa/HyIdcDh/eVFDyK8vsqMJ5zpfYopuniFS49o/VfsVouWHVLIpFGLLebHW23pG0X2M01F/6HOO/4s792rCxgX2BNJEyCSE8++H8dmyC0TeS7b5/x53/ymH3oedg7iANLl3DOcL4MnK2/pDWJvnNMMSGp4XqaICZ2IfF8WOB9owXqWZREkTNGMq0IbVG7CkHJBBYh5OP1VkXykp3kCDwcz3TWo4fvinHlCnEUzFJXC1pkD+RyrPrcZI73MGvRMoBFydM5duxizy5o0fRlN3Kz7xj9RCt7lm2Pd++ySY4m7FlLxJnINixJCca2ZykjXs5prcVxq+v7oMczVu97yvnkdrycjyqnekLsKO/0+RicEUPqeOJr+747dzQfy18jfeSqPJ8PY5xyf17OhepxtYRZJ6FkhSmp2IKziSRW127Z8Hvfe8R//+fPlcA+BTbZsnQ9j1aBT7dqTYgVHiwsuTnnq+uBs+6Gd9684P56zZ/88DlfPh1prNG5yBjII999/4LHTzJ/8hef8NvvLfj+P7rPLx7v+etf3MAkWPMyaeVvs/3GED9SDGSMMqmMxRlLLJijKZNlHYCx2E84a2hcQ0he57zDYn7OvMoHT7JEYeeVSVIrDzykhOk6jGlUNrE8RE4qsSESYkCKtFUNVMiqjpGKvYJIRKo0UZG7UnuX48DQZGoBxEv1vZ6oVr+QjTLJcwGJKRM6Sh7JBxkttK9EVOYxxUKOUXWMGBNZlLyitHQ9Vo6RxIjkrF5waPJDsqoytE4JMz4oQCAlwSFG7TQIRV3FZJyziFiySeRkIEWVxrJWKy3ImKZBiQKiTM8U9QUo6LWX/hQy1jqSJLJqLar9DlYfRASf9cFX3oO+BJxkRr8jRmgk0RpVuiAnla7ab7m/bFguHJ/85b/k6ZMvef7sBZvthpgStl3Sry95+Fvf48Ebb/PmW2/z8OEDGiu6j1yTPakolDCzsDFkKcol5YUjZfNKWIoxlsBMg00l2Sj3NZdkkRjt73z4LEWqDVzjcDj1wC3XnQvxxJCKF+tR9UXMUfY8pobolfBBVrJB1y8IYaIPqigSQ1SLmBQIhfzhx4GxkCdy9KQ0YXIkRZXi9SFgEKxVUkpG6LqGcdLkR10Y5qyED2OUJJRSJIYEWA1mJZNCIkSAVJ7rAuKISoTW8S4IYqE1lkf31qzWC57f7NkOgevrK/xP/pqzy4ecXVyy3dwSQmK/3zONE+Mwst9tub6+5qOff8L7b7/F7//eD3jvw+9y9uab3Nxu2N7eMI0jIQRMjlwuWy4WLdGrHU5OsSiqJFLyYCz92QWtgec//SH/8l//Kc+8QUQJPCFoNYhpLK0TtvutKoCkSCbRNC2tsyw7x7pdsR889996j9/+8EOWLkNIBD8Qvcf7kXF3xe72mt1uXyx64mElruS1+hI1B09WKYv1nHURogQxlUkWa7HG4VxTHkHBOAMxkEIkxmLzUyV16+Ip14lvFgmQEaNJGl1DCMbmUn1HmXMp/rll+jysu/72X5qv27+ZbQ4Gz0FZUMB/TqaYEztq8DMna5xu86rj3UXsOFWsOAVs68+nf69Ae9u2X6+S4hiQzY8zB8Tn134Kls8B81cde36d8766C2SfE0XuaqdEl7tUIubbnh7/aF/3sk3IfJtTdYS7iBJzlYcKNp/+fU4YmffTnEjzKiD6rjFyVz/Vr1MVlFPyRr0OY8xLChanwPopoaD2zem1xBj1nXOH2sWc4HD6+9Mx+yqiwpw8Me/bV5Fa5sokpySeOYmifs0Vck6vfa52c5cSyem9uuveze/h6bFP54dvIjp80zHvIjScXvOrPj/vx7vO+/R330R6OP3bqTrIabvrefxVyB7196dzyC873qu2vWu/pz/DyyS1b2rfdC6v2+v2uv2b1UQEJGKyVfDRCKYRjDiCiaqwGjMhaaznp0TOnmQ0R2MKNT2TCSkQkyZwnbW0rlEw2Bg612sRTglBjIGUA5MfSmwNGKGpxA+OssVa/WoLcGkO1PNjXHIo0VCLhlwFvGvMMicdz+fVmswvP9ekqhwrDaWuCWqs5Kyeo1F7iBAi+9uNKjpktYsVwLmWfuFou46u6xnHiedPnjEOe0i6fsgFvDXZApoHitmTYwRjyTlqrsZmcgRngJRIEsBkskl0LMjWslyf8fCNN7m8/xAfMh9/9hnPv3oMMZLDSNt03Hvzbd58/z1ijHz0oz9nv9sgOeGMxZBoGi2yiUXF1+aGJIa+7Vit10hWa9v9zTPGzTUpGc7vv8XlG+9x9ewpL778GPyOMO41FyQWZ5fcPHmOtcKyb8kZ2q5j3O7YbXaMk6fretp+qeSLlMiX98irljRExmEihJHFYo1r1/jc4jFMXlVPnWiFtjEG2zW0i17JCsZhGgVxJj+ockq538ZYQix5jax5H80WQYgB1zasV2ucdfigdrx+UqVbBLp+jbEt5MA4johrsBiGYWCxOKPv12x2W1zTcXFxH1k852azYySRJo+Yhsk6NtHz1X4kZ2HhGpJzjNKwD4kXu4ldijhj+fCdt/juBx9w/8EjFuszRBw5iarhACGMTDlrgZiBzWZHiDc4Z3HWlTxfIviRGBTEFNTKr+ta/LRntV6UdZlXkEwc1rZMRBbLNYv1GrENYhts05BRVQ1jdTtEmHxAnD3YBBqj1sbea3VySjMwIxel41zXDDq/VJtbzT2W+ciVQrn6vJuSfy37UoIXhzmhgmuIEHMkxYwt550KaQSqrUwlypa1Iqrs6pyjaRJT0PMPIRKy5g3VbimQCykEMkGNqzHZolw0S5M1j3tcW+pxREzpN82pmgIwqsKy0Qp+0Ws2xcrKWDn8XC2wXp7HmPXAN8/3dYl2JIlQ8rkv7+l1e91+nVrKmSkGhAZ39hAff4ZrFYvKOev72DaM22uuPv+MKe5p2x4/Ru699R7r++9CGsk+Fxwkk4kkiYjtkBjIAjaAtRe8+f0L9lfP+ORn/4oHF+9w79HvMnRf8vjH/y333/
1D7PIMgxIOpdilMyvaczmyWibSFNlPCTFymBsBYnmWc8rYkrZJKR+K9XKxFSPnouo+MQ6DYkFZaFpLDoE47snBE/zI7f4WEaOzWdJ9G7Ja2otaoNdiX2Mb0n7Pv9zDHzzs+F/8TuTzLwe+fHFD5+D3/9E9dtsFXz4ecWGBcbDdR6xE7p/1PFg1dA08u7nmegp89GTDdr9nf/0J1jUs2w7b38OdPWCfDH7acPN0Q/AtTCO7/b9iMvcJuyvc/hrTQ7Y7Wtvhceyjx/oR0zQFEyt2GAgp5YPlbUqRGBNd39O4RlW+QHHJ6JUMUOZdfeecSiOZw34OY22mnFpbzvmAQZYBWd5PWpRjrKVW3da1by7gUt9fsG+/x56JJ7cjRiJGEjb1tLvIchF4eK9BMNxbrvh3/r3f4y/++J+zXPc4IwRlKKLVpC/P8gYlPsVpz2995xFhv2HafkG77rm6Hdl4T3JKSNwODkvDmGE3QeCcxhrevHhOV9bf6gekauIi0AgsGmhsPuATxtjDex9q4WspvJ/nECoefPiFlLuo+IQa2tcCZyWW5JRVOM3WR0nxVnWL0POqsYPGBUAhOAiiZGtjiTlhBS77gevRkNNEZ7eE3HE1NfRNYtV4ttOam+kFw80L8mqN2DWOTIwdrVwRaUvhv0NkLMebX00dH4cLPuY3j51wZ5M6rurK5+WdlfFa8l1GvnbvD+Py5P9K0EhlrM/P6XS7lz9bMVViJllIITFlj3GQJo3l3r6vLhJfPrnG2I5AoAmZJ7cZc2F4tJh4umkhCfvQ0SchGcO//8++w09/uuM//9fPaYxgEgzZg1iaZuQPvv+In3y05cVui8vw5x/v+Kuf7/idD9f87/7ZWzy+zvz5j17oel9eznP/bbTfCOJHRi1epVhKiEiRzVOrD8kqw6SBszmAyzEGKFYtOSWaYquQJSKSkRxLoFAeQynrdASxyvyOgnqxlZdvlaNyVhMOOZeHvQZ6og98VLS6kCAMoH6KqlSSi0KFklSowLyUkDBLAcYzoCoWMktkm6T7rMcGPXFjBMSRsxTGtqqgRO9pmhbrlCBgEDANOScSkWwMKUQsSRU9YiRnJbHoe70AKcahFSPQWFHZTKTYjgS6rlXGuBiqj5kUNQjjHDGp/JAxmuTIKR4mHJ1UIhlLikmZoNVnUoR52ielhHUO4xpssgcFDWsbIkoqCEED++unX3H92c8Ipme9XnPPDJy5SJDIZz/+H/ns8QvScMuneDCOUPaVs2Bczw/+0b/Hd37nB5ytljTWAvlANAFDDFqpk1JNxFcAwBwCH9VJqBUIOhYxcggQY5gOii+ZIqcqgFF3vmoPY4zKsnkfyVk9k7tWWY9DVusbYwwpZnzZZ0oUQlPCWZWudVYZqCk7UtMSQyCliDsExyXxFrySPXwkhpEYVCFkv9/ovavsxikxjrF4GJekfVaClbVWeyQJuRF80EoXYy2uk1ItUpmrFtCKkkXXIJKJCcZpYpo80+QJPhBSJIWIyZGYMpKcHqP6kBrHebfgnfcbpijsfcLHRNv1rFcrVssFm81WySs5sNtt2e12LBYLbNOwG0Z+8cVj7p39MR++/x7vfec7XNx7QD47Z6wKKONA8ErwACGL4BoNqI1RhvX22Zf88K9+yF//7Au22ZT7oOceszKgm6Zh+/wpow90zrAfJnyM9BjCNOIbobGWpl/xB//w3+LtB5fEaWTY7Zl2npsXT9ltbxl3W6b9jnEqRJT6kiwyc2L0CTJlAZILD1bnCa2KM0WST2yLs6Yoc+iQzjFB8OrHHRIpF4uqVBMoygJPOZGMxeYi+VoebmVz6lxCIS1JmQdqeYiydjmwUb7OIX3dft3aXaDiXWDrXBVh/tn69zkYfQqmvwpA/iZSxqHi8w5G/av2cVcF/+m1zEkEp8c5BdRPF7qnAPOrzv/0OPP/zxVN6vFfZbNSf/cq+5TT+zj/7JwgcqzEexkEnxMJ5n+b/1zJG6eWMKcki/q3Oeh/1zi6q3/qNvPrnW9/erz5/Z6Tjk774fSaT48z31+1aZlfY7V2qYSXOSnkm8D1OcnmdAzM++yU1HHXeDrtx/lzdXovT8kXp/s77du53U0lJtRtfxnhYJ7kOH3u70yCfEO76+/f9Myf9ternom63bchKLzqOu/6/LchYZz+fd6/d13LXZ/5Vc/3rrF9F8nlrnv7bYger9u/ue1XHTev229eExG6piPFRIpTqe7SIgljHbUy36Ss1gsqdIoQFZSlqGSWiFpSJOaESYJJGawS0mPQLaWp9i1qU+FcU2L3+n6LiKSSdwmkkEhG7UOzKSBtkYy2Jc+isVMBTERzNUqbL7+bQZr6jjtkXQ9pZEFtDypHXigAs2gFoMbpCrimmBjHCe99SXQCWWXLm77HNU4VGID9buDm+obk1b5XwycF7VVyXK9VswrFvkJcibWiEvCp+RhLiqqcZrLB9T3r1SXn9x6wPr/E+4kvPv2cF8+fMg47HIZuuWR9703OL+4RYubjn/2MzfXVofgkC5hWbd189lqY4lotMnI93XJF368QMexunzHsnhOSpzm74PzeWwgNX33yU8bdNVK80N1yjRWrNq5hoOtaXNMUJdNADBPjGPAh0fVLwuDZXj/GirA8O2Pz7CkvvvqSlBJNt2C5WDFudwzbAeOcJvGzV8DENGQMTbeglzUpC95GjLgCgClYk6Lmw4yoArGPGkTHoICNoDZE/XJB16sCyD4MIArKp+gL8UMYxz05qfqq2lzrfU4xslytCcXq+MH9++x2A842jFHtUVpjuBHBBOH5LrEN0DrLjc8EC1MM3Ow929GzXlref+ct3n33He4/uI9rWsiaZwyh5gKV8ETOBJ9wznJxfsl+v9OCIO+14tJBTiM2e/q2weeeaYq41tL2a/qux1iLnwbCNBFiwDhLTJGYGmBJ13aYdolxTq10UgUChJATMQfyFAvpQwkgbdsjKHFCSUW55CgUm0gpEYv99gGkIBer4azqM9mQrVphm6poJkaBzSozfmQwIFIygUZJFmVygawqMdnWLWekD1F11FQKZXLJh3VWrar85BExeD/hQ5GKr+cZM4mo4JUkMhYp+WAjmjfUNXBZC6PzlzOq6qr5qeOXWqZrzlakWu5WGwZTpPGP8zenWZmT/9RMbZ1jkTnhQ/PV8u2WCq/b6/b3tqUYcY1g+zXXV9dMm1u++OopBmEcP+H+wwdcffkp2e+x6x7ZCsvlGeff+T5ki4SBWCr0RCKSlUBhM+TskZI21QWSgE8sz+/Rri/YfvVzPvvRH/Po3d/h/vt/yNPP/4zzy/fpHr6FpKjq7Elz1imrUoIQWfaW/aZYeFHwiFye+xwVXK/rm5KjTUWRaZ5zCCEyjhMhBFXXsgbnOhrX0hrHfnuL9yOS08FEKwpMOTJFBc2relBjDClbQtwh0fHlZPnv/nLiP/mPL/lnX97y//lry8LCB7+d+eyvv2J/m+mawBDgQWN4557w6GzHFNf8/OmWv/7kKS9uMuH6M6zfk8MtbXtO9gsWJuPaQI6RYT9onCow0mGTYxmfQ94BDdbCq
s34cMOzjYLKi75Xe8Iyv4tA27UHEmJKsSiq+kPf6bqWolRQ7M5TAPTeaNHPyzbAIUR912ZoXIMYzaU456DkVYwxuNYd7o33E5XEHEuBbsXWUhaInhgTjohtLTcDZFpIjpgnnOu4sJmnY8c7+QU/v32TG7/iu8sl/9tH9/jJ8j6PHnY0TcNmP+ByKVrOIDmWdasSLjZj5tmTZ1zfjgz+gqfTJXlqCAKX7TOy3PLV8IAHS8vzTcuyz+C1YHs7Rhqj41FEC6UFwYliDa2FpYXcJLwXshhsjvhD3syU4mElaJAC5EpmEkJZrycFwnR0Fks5awzJSlHeScf1AYlW4KwXXgwaFoA+lqase1Oqj0x1nzA0JnF/tcR7z24SzlcTwwRMI0YgsCKhFm1+EFIjXPae3U4gBQwRZ7YM4ZwkFmMmJDc42TMREF3FF+sdylHLG1yqEvyBGUSV76nv+5wVyz6+r2c5QVRJrfJKvkbKSBEjlQ0z2ymz/xdsNKHEj4OLxHx/s5ze1/KOOSmRLQshB1JK3NwMSDKqnpYSv/vBBX/+kyuycYSsonzJJWyyfH6b+O7FgrN+x9XOkI2QEkxD5r/451/w1XXE0JBiJAqYKNy/8Lz5xn3+5IfPCdFiYjyQ4jzwZz+64Yc/u+XDN3r+w3/6iCdftuz24du+Nr51+40gflRQlxqwGpTAYJ2CnIXZheQDezmmqHYo1h2sMFIMhcFdwPBcJgOgKeoUppAh1c4g0Ta2BHz6QgtRlQ2MEcZpBAQrCqxmI1rJTwGPRNSqoyymc1ZSQkwRk48sdJ1AOLDQcpl0ciEKeK8vBWWiG14SDc1aTZ8rSxCASM6zJIlRSc+y8tbFv3Psh11RqciHQKGyr4xxh/07q2C8EX0JqxVHZSNm2qYhiFHJI8nkwmqVcs31BSoZcvG9FFG/U/QStfqTjHP6YE9TKmCHLQkJVLrQFHZ+jrSuxVhbiEF6f+Kox8gEphC42U5c3Wy4uLB0rcMRIQasdbx4ccXCRl74EWlarDiMs7R9T7++xxvvfYff/p3f5Wy10JNPgZRLYCMzQCUlVbuYVRBU37aMvuiVnKKXa4yBpFVIBrVDSQApYV2joE+KWMxhYVU/nFLW8YQmxMZhRyG9ITGWzcrCw6jNhrGNBmTO6XFTKkQT9Zdruk7ZiYU8lPNRHtxPE9M4MHkhBkOYMrIHMUIgF5UPo/suVUhNq8ezRkgxsbnd4UOmWy5ZrZeIsbTdUpNxTXOw5WmavgSmZTzmYvGUEyEkxmFgu9uwHwb2+4FpmrTPJZcx7gqZxrFYrGn7JU3XYmzDOEV2+wkftVLMSeZGDNM0MYVJ7+N+oIl6L7337PYDT1/c8md/9WPOlx0P7t/j0aOHnJ2d0/U9XeeIrdOq5hgI08h4e8PN86c8/uwzfv7lE57ttOqGnIhJJexiUMWeputZL3s+/uQn+JixZSFsjCUhTFm4vt2xWK74wT/6A37wwdu45DWAJ+KnHeO4Z7/dMex3hDBRRMyArAtOODCp8wngIiihTdDn0VmdM5QZbpWKXCpCIDH6gD/IckVihFRe2JKNzntGX+VkXXRJjRFKgqQykasCSZZcnhVdYKhtFKTX+MBvTLuLsDAHpb9N+6aq/leREu76+1xSbn4OcyuXu4Dt+fl77w9/a5rmYKNR9zMnMTRNc1jUhhAONhlz4kStarvrvO4iNcyv6VRpYR5AxhhpmuYlW47TfpmTPmo7tdGYEwTuOqfTRXv9Xd3PNE1fI+zUz9e+m5M/Ts/z9NwqmWSuKHH6+Xmiou5rfp3z85zv+9QuZE5eqL87JUfUn+f3cD7G67meHm9O/qjkiHodc3Wb036Yj9X5PTrts1OSxylRZL6v+f06bXPiw+l2cwuVu8Dh0zHxKhLFq67jLlLLXec4txq66xzuIrOc9td8+zkR565znp/XqZJK3a7ez7vUc+bXfHqtr3rO5n00J8/MP/tt9nu6/at+Pt3ffB+nxzvtv1/Wvi2R4HX7/397fa9et2/TNKFXwMNU7C+KhW21rjXW0BqjRQxJq0irrLhginx0pomGYBTcFYTGlBwDmRg8GS0OEqP2sdY5uraj6zqMU3KAMfP5V88rpVhq/HIBUitpIZfqvqosO7+qYwUfzOc7g1GEtvy/lIDkCFJKWYwWI9mStM9kckxMPuAnlUonJpIoOaRtW9q21Tk3JWKI7Le3BD8d536JhUyCKjBkLaoQW45NlfoWtc5JKpddsvSFlK+5o7btWV+ccXZxSdO2TMPE559/zvb2mnG3IcWJs7MzHjx6C9ctGaeJL774ku3NNTkrOSCnWhhhsbZV8oMxtNZpBaV1LM7O6BYr/DSyu/6KYXeNaVrOz+9jbcdue8O4faLVpyYTjMEtl9imgwQ2Bnrbq3pM1GuwRouIQhKWZ5dICgybDa0Ruq4ljHvGva67rTMYIrfTQEpgxbFYXYDtmFLENi3d6gIpNsYpGmKY2E1bRIR+uaZpOsRoUU3TCClFhv0eK6JAC5obUc938FNgtxswaIztGchZaIyjaTtiDmw3Gxrr6LoeEHbDFussy9WaGBPDOLBY9ThnOT9bcb5e8/TqmkBikzIxqOT+4AXjOqLAdcjc7AP7GMgx8eB8zQ+++y7vvvkGZ2f36Bdrur4pqiQTThpi0rEgRq8lZ5VRbztDt1iRYk+II9OwJYYR4oTkoOBX32EdOGewptpdG3rT4mXA+j0Yoet6mq4lpYAPkxK1pMHahrZrEdFCqyFMxGkko0ogUhSVXcn9GOsIUe1WvA/AjHieijJvZYMcHuNEkkJqSRlrSl40mwJO1pzrCRFXlPBRYC5dE9YiPmNpDqqDM+Kt2FINHAvZRJWmU9b5xTUNRgQHSJ7QfK4SXjRPkrQYzDisltJgRdQOQSrpo5BRTFGKFqNqyzWGqWvWWUyjXwo06bxXrrsgP5Iz2UjJ3+Qj+Hy8sK/N+Wn2p8O8eExnvm6v269li8GjtbMdT588IYQJPwSSCWw+f8GLT3+M6xus7Vi7nntvf5e2WzDsRjbbWwShX60xBpqmYZomVTZrelVSohT6wgHzyilBzFy+/T0Wl7c8/eInOLPiwXv/kJvHf0EYt5y98VtYIwQiJgritKjah6d0Ga5urwlhQKQrD2mmEkghE4GksveEWAgIlSwqOs/kpDZTDsUBqpVv0zS0jcNaw267IXmPxdNIonElTxQjuxGGYJmmxBQ9rWtpxdJ2jr4x/PhF5LOrwD/9p2f4caRbt6zfeAh/OXBx3rFo4Lfe6PjBB0uGseGvPh/4k4+f8/n1wPMXNzR+TzafE5InxAasJUugDQNt7nAI58sFPmT2DpwI+3Fkbx5g5Tm2gZVJgGO/HbTYWCaCj6VIWXG/Yzxe81kZa0HE4lx7AM6tNQdwvZIRKWvZCv2chlhVRWSf94c1rHPuoLC6Xq9JXm3gdftjAVE9jjGao68rVyVGBtqlpTWqmKEk54a12ZKTYZgyLjuiV9V5HyFFIcRE3yyQnLDGKavAVgcFtDBdMkgiEdjtE12/IhrDtBMySmaa8NC2fHadeXTuWRpo
bUvfJ7aT0EehNQ2SMlkSTkCVRbQvl43QtToOb9Ok90MyPuZa1X9gYUgpwjZSiOYCIUei1z5UXETfcQaDJRO8Z++rKrtDnGO17DjrM2POrDsB1BKkMrtTCRjUn+F4I62Asw0pLegbTxo8cR9JrgcsCYPJxf7NKSa0mRyrdmKTIciFdrPcIk2CuMS6DWTFB5NkclZ1v8PYOmCWJY8sGdcYTFaLvkr2SBmmFIn+mDdzxszcI8o6uKiSxXm+p/wbU5ytlTQOmw/jyv84xHWFSFux0gNJ5iQnqfnVug4JilUHQ7aGp08mtb1Ljr7ZcP/hGf/8z39e1AsjWQQ/ZayNmEn46bPMd+4Jb37Q8bu/+5B/8V9+hY2BmMBkxTSN6DU/eAvOmjP++sdbohHIXtc3dYqkqEvGzE++GPnZ579gHCPrVcvfdvuNIH5UIoaoAZYOAquBsjGmcBOdWqCY4wAWI7q4Rm9MTAlJ1Te1LI7L050y5BgOthuq4qAss2z0YQ5+KuBpohEHRTIr5ATZY7NTooNRxnR9b0rWz2QxJDGEBJLVo7FtWwXRg/q2HV4UoswyPVk5sKBM8WjQ+VQwcgxgKpsr5YyRhFSpJalzXjgeK05UcotQSmtq8E9Vl9B0RgIkFlkl0UkFK3ifySlgTLWFKYkTZXsQojKdrHWHiVmDtypdrseISb3PROQgPxVj8X3LAde02KYnJg9RAZCYMuSJPJXKZWsRDMtFj3EtfhqwZs+b738I7ZrNZz+mefqYGCOXFxc0zSOubrbkdsV3/ugPuLj3ACGzXq9YLBYsFz1to0HbMKhEbK3XmfIeMFjrcE1TYrwS0KKVRDEm1KNXPXGT0fHgnEq3Tj4oKSQJJJXgiilhc6ZpO4gaoIaUlGwiJTUmhsZqoiqEoMEpmuSgJIfUk5ZyP1WCNKWERNTfuBByNAGl99yKwTlzsKsRU5JURq+zSRFLgwHaXq1EXNOqLFtKylJNagOSkxKCUs7sN3tejI59yOT9notVZtW3ZAksjWO1WBKNY3V+Sd8vlOBEKmol6SB3GWLCuQaxhrZbcLZOhZ1ZE1WWptGEXtN02KbFtZ2+nE2xtsmwHyPeR+7fu8f1zS3Pnj/n5vaW3XZD8J4UPcMgLBdLuq5XudlRuNrs+cVXLzB/9RMaI3ROLVqc0WqvECL7cWAcA9vR4zOErC5kRL1PPpRqlMIW7foe6/fshxEfAmNhBDvXcP/+BevOIimxevgu//gPf5/eCjkbYkj4qC/MsmRTwkR5djKzKhEqMFPmUTGYXF/OmmSMkokhME0e27RYKzSd4GyjC3jjSDLip4myriVjsBKP+0WfeTN7nesMVebwVBimcJiTqk1S5njuZCFKPlTmvW6//u0ucBJeVrioRIU5gHwX+eIUvP4mUOoU8D3ddg7ozhUyTgHi0+NX4oe1tkhaH0kYIYTDZ2qwVokRlRRSyQsVEK4/hxAOn5srg9wFkp+qeMyPNT/Xef/W7V5l03F6jHkfnfbZnHRRAe56jafnOScN/LJ7VX+en/8p6Fz7x3t/ktR82TbmdP+nRI5TQsP8GKf9cHqev4wMUPunBuJzckztq/r303tUgZ/TPqyt3rfTcXA65mpfHCtJwtfOuR63blfPc76/+bFOiUJN03ytv+7qm7t+/yoiwl3XNb+Hd339KuSDu7b/m7TTY9xFfppvd0qOedX4Pp17XkVUmV//qYrNq36+67l41fX8KsS8b9tOSSmv2+v2uv16tJwTu2HU+DBBLLnREjVTFQasU6KGk/Ieixxjm6yxQvRKOvcplJxHxGRDU6rWnbPYxiqgbwRLrTx1GuO4Fmsb1PbEFdhW8yliBFcKYSpgKXP+BnWersUtx+Rm/dLp6xiPASUk0kIdZ5UQjFOJ4hQjKai6ZkqxqBCUd13jWPQ9XdcRo2caJ3ycihJnLKCBHBLNWSqRPmreRXn7Ja4v7/2k0aGUPBK5VDnqJWMbw70HD7m8vARjuNls+OrpF+y3W9I0kn1g0fU8fOe3WZ5fcHt1xZPPv9A4OgUwmhNB1AKjMZbWGXIKNG1D41Qdw7Y95/cfINayvXnBsHlBCjva5ZKuPyf4kXG/xZJo24Zp0gq7brmkX54RUyb6PU2zxNqGze0tMWtRmE8Bmo43HryJHyaunz/Gtom2WyGuw5BpTYtzDVOY8MFrwjwGhmHPsB8wTU+zWnN2eZ+27xh2A34cSXmrY9M62nbJGCaGydM4R9O1DMNACJH1+kzzdSnTtx3ilIwSpxHvPTF4Yi5qEa7cbwvTOLAfBtoCmMWSJ1su11hn2e0HdpsdXdORc2I/ec5Way7PzmisY/ADQYTbUe1DnFgWXcuUMvuUyX6ktfDWvTN+78Pv8N0PP2BRi3HKc+GsAafAvylJfrEGH4JaYANT0Opka1R5o207pn0kxklzDVaQnLG2o2ndIYcVvBKsXNuV3EHGNa2uTa0FAtHvmSTjSj6paRZ0XY/tOqxr2A9DiQljiS1yIYrb8r0qVySmKZaclxxsBZjJ6FsoKjwZcuQgEyJqI2SK8o8SKmrWiQPRo+Z8tXDJlXV7UYC1SqjQ+asAHej8lVNCktosp6RzQUxZbaqcwUmjaq2SMdmXnJ9Qs9rJqPI0VjDFRrt+mVqwZ5VIZ22JK2UWE4kW09V87EvrLplTOQ6ZnAMIWWc7Yb7dy62uEF9aOeavA5mv2+v269RySjhnGPbXeD+QUma3+4JlHnCLN5icYxwnpFlxexX4/OojjHWsV2tAVaH9l0/VtmzZlyJjsKY5xOl1jquFQffuX9L2jjQNNKbl4Xf+Lba3j/nqF3/Fev0u2GtefPqnrB/8DrmxqsYxWZCMadfcu7BcTSuS7VX5HFUaMQXbqQ+6FObWPK9TSWtiLNJkmuR0DUJdRwn7YWS32ykuYiw0NdciiGvpO8flquH+qqdJgZwCuwwvthCTQ0wLTrB54tMfD3z///C7/Af/7K+R1X06CTQy8e98z/KD773Nouv4+LMdf/aLiR89hi93kF2HmIB0hq5/l3FK5OEWsS2mEXIaSeMWaxaEmLm53hxImq1tsLInYli5zP1lxtolz15kXGuxjc50VenENQ3W6b1KmUIohrZ1ui4iFtJCnQyV3ZFneCRU0kc63GdVrICS5QcyMaLk1ZJv9N6z3+9pGlV6U5u1ruQXAARry4xd8EYxWrzrsgXjCmEHYnb0JrF2ez7fX9D1GexUcKaIJWOzwUnAiGF7e0XO50pqLoQHyVIwKl2f3jtrcC6TpqyqfeOoVj4+0yw6YroiZ8NHjyN/+N7Ei2eOh8uW/e4GxNO6SdcXSTHYutQ2klm3wqrLeG9IOdA4B0nY7EZySOVeHN/tFExPi/CVVGmLHY8VfaciStKQnDEp0ZiyZo6R/eSZxgHJHSMd5x10NjM5GLzajeh6XMdAyhpfWKC1StJsZaLxT7nyHcH2ZOuQoCSREmaQc8LaTIwTtCtal/DGEFMiyZLW7HH5McE7CIPeIxHg5aKj2pwt98YIzSHHHY4QcAY1FRR80PVQQtc+1tqCKceirjZzbhBdVkklPhU1D5L
iYRT8WoeGrntV0abkgQEkFAwxlrxVVWbTsVtA9TK2MjkHxKl4Qwij2g3lxB9955JPPrvldu9xWRAiqVwTyTNly/tvCT/47gMeP9nwf/9/fsaZMSTJ+GmArFipiOf9d1eEqeHjJyOYhCSjg+KOgjYD2BwVK/aR0X+zJfPfpP1GED8QsLYEkkntAIw5DqCmLWzwKqNkEgcqIzoYUyyJykouKAMOk8n6RwWZyUVyUCv/TTm4D2OpuhQQi/ehVB8YxNriw1lUPkpS2lpLyEBRs5BSrWIAlYLK2FpJUif9OSghCZMFq6YJHFBX6gOWCslDEyY1OaKTTFLlDWqCVggFSLfOAUZJLTljTD56rAk0TVvO91ixrwHUURosATkFJY/kQhQQgy0TfkatcjIUFqRuZwSwrih3pHLtoC8w9UI1UtUCBNe03H/4Jq7puLl5zn5zg6Sk5BFnC2HE4NBJSVImTXukTBhdAx9+8BbDO2/x5PFTumHLdz9oWF1csjo756xvIQxMww4fY1GY0IVJvbaYVVJRCS26mLHWIFbvYe03kVpxoy/2lJQwlAv5wzhbki6BrnMlgTURkwZy2VhCTNiSEQspEnxArFESDBUErMGeXn8Wir0HiFTJ1oKviybNQkzEBNYJtojDSBkbOWWi6EtRZcD0fuSkL6lalR7DBGSWqxWuabRiK0ZsqURKKeo2B+IHPER4N8IQEkkcm+3AdrtnILLb72ERuf/oDS4fPKBrWzrnlB09jcQUGYeBMel4s92KletYlcRaLj53xqotkLWq+iFWv6piTLkaMobVwiLGcv9yzWZ3n8vLMx4/fsznn3lCAh8mcooM46jKIG7CWaeyVUWWbUyZmxK9ZqraSz48K7G8qI+gmnoZBx/UkspY+uWKlsAnv/iYgKjNi0+6VJaMBI9pha7t+cGH77HOA8mXCjVncU2D65c0JQlinSOmUGx7MqkuujkmVSlyZnKsw1CJ2lLp4WxL03UaUBiDMxZIhKCL2VjmAN1frXJTgs9xrjYHGWN9HkpCJYMvFi+mMjvVd+agfIRUlZwiP/e6/dq3+aJ0DnCegr8hhK8Bp/Xv3xZAPQVR59vPP38KiL7KeuUuAPaUdBJjPATqlfhQE3POua+B0xWYr+B8JUpUQL2C63OSwylQeheBZU4mmZMF5sF73V/9qn1ejzvvm9M+mvdx/ez8GLXdpfxQr7cqI8zvTe2bSoq569pedV53jZXT+zTvu9PtTy1E7gKjT+9DnfNjjIf7e9e4mN/DeR+dqpDUe3Zq+zJvp+P7lCQz//vpudT7e0qmuut+nT5rp2SNuwgYp1Y+d/XdXW3eN6djofbLq+aMU9LHafsm0serzuGufbyK4DPfRtfXR2LXKQFjTqSp/z8dw6/a913ne0r2eNWc9m1+d3qcb7p/ryLy3LXtLzvm6/a6vW6/fi2nzDjuiTGpfYpRWxOD4EwqBIQjwbARq7GGCiZqDJVU1VKMKXECmtcIUWWvncWhoIgm3x01ZgkpYWNCbDpWqmcOJIt8SK/oDzqHHavjkZKnqSQLqdWcGtUc4c8SddWEPsd5rml7msYRc2bynjQEVSw4rG2yJs0FpHH0fY9zjmka2G1vNQ6bPBCLVLNAkQKXkgvSJG06EFUEjf0sqn2gCgUaI1IURCVpfspguLy84N6jB4gIL54/4+bqinG3J8eExIgxlocfvMPlowfcXN3yyc8+Zre71QKWHEqOQwCtjLRF0SKGyHLZ41pHzJ7F8oKHD99kt9tz8+w5YX+N5EyzeshquWZ/e0XwnowlJE8SRzKO7uySruu1SCtE2q6ncQ03V7dY2ynIboTzRxcsVhfcXr1gurlieb4qlhgGkQbnevpuyTgO5GJrE8aJYbc/2KpkA4v1PWy3ZrvdkEIgeu2Dpm3x08jN9gniHE3TkxcrbNfSL88OeRTj1Cpot9toviplhnFPCpp31Hjb0bqO6L3auHqvti8ixCz4MCFiCFmYhonN7S1tvwQM+2HAx0RyltV6weXlGdsXkU2MxKgBeBbI46R2AQirxvBbbz3kO2+9yRv3H7JeXtC2LVOKpJLjImluJcYJEUsyEZM096XciEhMkWkciDHRNFbBEnSc50K6SF5JIg5Ny+qOA35U1Y7Gdlq4ZRypgEQV2MrJk5JlGrWquGl7bNuxWCww1jKOI+Po1dabdJDGF9GCs6ZpyvrCENOgBVXZUKuwa75VKsvBqAWL2ioFzWcYrfgVKxhUPVkKcHTIexqLNVrUJcbijMMYVRA2TlWj9XCqNK251KRKJSkBlpQDSdT6Kma97ypWLRijZCkRq32PWgap0q3mb0whmkixfLal4Kl+F2swJZclWuHIQaXkJOaWOg9KAVpKtufw80v5GbmTAKLrUQ5z6ev2uv2mtJwz1gj94h552uK6RNM34A0xbLD9m0izYPAeCKSYWDpDmHwhhKFV5yJq/YSqG01jKDmiRu3Y/XRYO1w/f8G9Nx7w8NF9SJE8jiz7S/p3F9w+/YLdZmDRdzz/4l9D912CNLQtLLoO1w886g3PtkJ2npjBR68xboaYVN0D0SLTSh6bpolxHJlCwNmGfrlguegO846IFkKNg8c1Dim5i5giXXdG3zqMUXB4ypldahl3CTsNvL1O/N6Dlre/b1idL9nkjqfXE7ubPaSJ4fo5l//uf4g1b5D2/zXf/Z+9z/cuM+GTDX/6F9f86eeWj77yDBHOFx3JNGy/ynSLC8xyRdsm5OKM1jb0jWHVQvCem80NY9CiyBAnDIK1EZsNkY7VynJ+sebF848IYcHq3krfG3kixswwjKTdDmMM3gdVaxNVZIqNwxhh0XeIaw75t1wIuDlnxZtyXYOawzoixsRut1fCaDzmQ5XHkBmG4ZAr0zWjKfkHYZpGxbJiLrZojZIOWrXa0/ddxhEwWd8vkGjixP3lnhfbc3qTWLrMH/3ed/jxP9+Qkq69x3Gv63lncE3P5nZP27WqxMBRXSumhGsWEPacny15/NUNOVuin0AyyVh83LNeODaPN9C3fPI08f75Nftd4NxZbgd1PSAHbPKE3BGTKAnFZPokdLnlagcpeJzLDFNkmrwSKou1mtq9ldwuEGuuRqApxAwNDgrOKtpvzjaQIlMcydkQsfiY+fz5SGszUyec96pmN4aC+Zb7mlNR4stCYxO9FToDY9rzYnqT4Hps1rlARMkFCQMmYYikZDlfOla9JQi0dseQFyTxhHimi1WxiLlFciDjSNkrieilPBmoihhEn0lRtUhSrnhNDYYKuaL+lLMSYEzNf+maS68tHcidOSs+JLbkRCu+rYNcsZ8Clqn9ZD6s0RCjJCAJEI8Y0pF1X/Nb1ZrGkCVhW8O9yxUPHlzwW+8lrrfXfP+79/hP/8vPFYeOkYOyDZnLheP7H6wYU+a/+O+fkch8cE8Ytg6bLSFNxLhm1Y68+daSZy8St9s9Iho32YLVlzL5Y9+CXrsTUvCU6Otvvf1mED9qktWovNQ0xeKfWbwJO5R5VBhSSnxQ0kMuFbSmgIxiDKmA3tbWynhDioHoPclwWNSGFA++stZZ9Q4KAX
J5AZbkc4OC0DUvGmPAmOrhWBKbAsQiYUMui3bHGCKkPPOPTAVYzao8UlnrIpj6UqAm7Uu1h9UJa/Z8F0IIB6aUKlZQnrrasarqoGxCgEzbdBhj9Rps2UKUEWesIElVVIwYxLlDgKDWIdXLK1LpYVolYA+eWamcAvHoBaknqmzREJN6jdkSdMSJ3YvH+BiL+oWwD0WCfPQ4a2i7roBrQQNPMdi2paMjBVWnWLeGB7/zvgLVUZVPEIf3Iz5OSh4qVbjHMWcO1+eahpQC6TCeDCkmhkltQjTZodU1IoU0koo6SVF7cc5BqUxI03iYCBFwVokgptHqgRRTYWeIqodUAKmqYcwURIy1Rf5N73uSI5hTX+oVbKyJrLotKWJdU+xKKgBRbInK2DUi5CILZq1jsbD0/ZKUwYdABrq2oXUt0+hJhMMLFVT2zFqHcxZSZvBeLWCGgb7t6RYL+sbSlIosTAF7QqDpLUkaJUIYo4unGLGFdFMJXCoXzCEQrs8Mor5d9cWVy3O/WnR0jWPRNfRti7Et4zSx3W3Y3m7Y7QddVE+Ct7WSo6QLRQ5SubV/9XmsD5ber1TAyhD1ZZ4oFUJdz/nZiq8++ks+f/oCSkVaYw2L1YJl1xJDZLMdubzvGL/8MT+bnnPv8j7L8wusa3FWySPn6Q2mKROyBdswDTsIAZMicqgMKV6RhwC+pEJrBQuUvheVHzMG13bFGzrogj8q6zMntfep84vaCdXZRP8RKpijFBMLJJOx5KMyiejcK5IPZBcjVl+kckyifkuM7nX7e9xOwfn5z6dg7pygcFqlXv9/Si6Yg8d3HfcUXH4VmFu3m4P2p+0uYsGc+HEXQaJeUwX5519ztYV6bfPzmwPJp/01B+prO1W/eBXQ+ypg+fQYp9u+6p7Mr/nUUmVun3J6LvX67wLN7yJsnALnd42TU7WLV7X5vZ4TOE5thU73D8f7dKqQMd/36bnPr/eUUHFqxXMXYWh+jLssfuq+5gSa+fXcRdSZH/tVRItvIj7M9zPf9tskgr+JjHC6jzkh5fSZmffBq+7dXcf5pnZKiJg/96fbzMk187njm56fX6XNx+aveu6vIn/cNTfe1e6aI7/pHL9pu7uej1+1fdP5/k3u77f5zK9CJPpV+vJXab/sHP6m4+rvov1PHe+v29/TJqL2DS5hi6qGlUr+0PkhhFLVHqMmBkuFXE34paxAhBGhKWC0FHtIY4QkWlnmo5JBUioFGjmRmDCmQWzExEC2llxsIDSesWjsW5ONSo+odesiBikJckpuhaw5lBIQ1sss3zUGUplzBUMmH9gNQ6mNKEF2RoWkS1LRtY7lcoEY2A8Dm9vbYourYITmshygMVhE+yO5XPIGCZM04XqIgQEOCUsOpHstfMrYxrBYrbh//wGNa3n89AlXL17gJ0+OquqQrbC+uOCtd9/BjxOf/eRn7DabkocqRVbGkWIsCWJBAW1Vuu3bjrZpIcODN99hsTrn2ZMnDNtr/LABY1ieXdAvlwy319iEJvUz+Kj7atoli36FdY4wjYS8x1jhZrfFrRTEMFgWZ/do+iUvvvqCMA6IadjttiwWPYv1mqbtCFEYdhsdYxn8lHDdkkcX94lxwodIv1zR4Nk9/4rJR7UmRRinidtJbZ3FWVrraBcdYoTr6xvIQt8vMGR2+w3TuCWXCsm2Wahkf4x0y2J3aww5ZIiWaRjUrjqrRfU0bNleXXFxfsl2t8f7ifXZmkTm+YvnpBQQMQybW+Kw52K5YBcC/nrDNnmcsZhskCQsm4Z1m7m36Hj/rUe8+847rNfnYC1jjLSdJWUtVjFOleXaRc84jhj0/J11OOvUOz5nTNMypj0peSiFUNZZsmkIflAihBhyjEx+whgwzuBoSVEJYEpoSISYGX3CkTA5YBGMjQiRED3jLiHDSNuqumvf94BhHPMBDDtaHQac075tuw6t3lRwLCQtkCm1p1TSlRKWimJPzkf37JyxYhFbCldq/GTn8ZrF2mI3XHJeUnNHxTaqHkvnMqM53agEKVfmnZwhjr5YlEclPRUbZ2cVGOmzOyg6a27PIE5UKck2Os9aJdQ72ygZSuSQL5NiHa7ndbK+LvOaKX1w8CE4bnD8XtkeBZzU/Lscf1d+TyGAvG6v229EyxGCIYQNIoksPXEM5MVDLRJOEOOENYmcHW2ritv6rOuaI+VadOqKpfoxdxGCWrvFWUX8Php4esV+u2N9fkYYRiWCdYZm+Yizs8jVFz9l2gZafkjs3+LGrlm0A8/iM/7x9xM//HTk8/A5OSox7YBfHYiiR4Azl/lRxKjtWPRstwMvzMt5pxj1vSnFVlzxDktrDV3rOD9bY4wwjJ7HLzYYMibCT5n4827ig3uW33o48YPfWfHv/YMFqzfvkeW3cffegvN/n2QfYc7/bfqz/4zpf/x/8N/984H//McNj28jPgWaxnLv8i1SygzTnvvv/IBh2OKna1YXb9I0S5wVjBlxObDulFQXvFdVMAfjFNh5oZXE26uBB27PVerx4Zqw29J0C9arFVMM5Jzxky9qCIkYg9reI4To9XfTHkFtycTqJGoLcXnygeAjMSlpo+tarLV47xmG/Uv5mHkxXCjK7sYY/DQBqiiRc2QcdqWYWYqqsOCcZbFcKglFDJID1gr72y9It1/xzoMlbbMh+44zWXHvosVvrrH29+htUsGDceL5l5+qjYkI42Tw04bJOxbuDIglh6RYzb1Hb/HZxx9zdv9t+i+fcvsiINkQBk/Ome5+y/feM/z05z9nfyNc2QU/uN9g48fcW17whawR09KbhpBUdSRFVYXvOniwaOmZ2OwDJlkkBIZhxHtdCyvmwAErzvFop1Pf+RmNT1JWQoM1UtayStT0MRNCIXwLQMKLIfsdBseeQqbKlpB0f9EHXUujGNVyaVl2LZiJTVpy0U/cToYsrqCoCWQCDJINOQnn68S6F756kegNNIyMOEwsVoI5Fr6GHGxtMp6UjsWBB3KC/vGAO0fReKAWkNcX+IFkUV70Oat9oG5T1w5FxZpZLrSomxxoEbnib6ksGwSyQVI8gvGFSC9S+lsskgVJiRxruKT9Lso+0XWbcQyDMI6Zm9sN29uedy4dm33keipOHMaUcCfwvfeXXKwtP/l0x7Nrj7QNEhJfbHveX4kSW4YNZ51w+fCCjz/1FdnGJkEkEdWIR908anfmXEQLhBwiTorKGyfrp7+F9ptB/ABViwmFMVUmS51II9NurwPgkEgFWyRDIwrISi7qHFntGRC00iMrS1wX3yBRySExpfJC0yDCooxLBXDBOQVpnLWEWIMGXSRL9TnDHBIEkijyNULTWLV+mIqKiHNY59T2JEZVCBElEmByUTtpCCESoy+gutqrHPwliwVLNnq9zjp9cas0R5k81BQniygbXlQqTIEpZcHnqIxTQQ4Pms4DR/l2DZLKvso1USZKYiROowZGzmmwRaaxlozak+QYS2Bb2IDG0DhlxBtT2XFKnJiiSh/GmDBF9lKK3FAuEo+hMOlMCTiyKYoJFCpKBpMT+D0pBjAWTKc/Z+0r4xytHI8dgnrJi
gi26QrzNoFVKyAKsaOSbUwN2hIYoy+InBI5q62HiBC9P0ykGZAqb2SAGAorzYJVL09nDc625KTEpliYwL7Y9IgYFn2jL2sfyFko4i6lHw/iGzpho4QaKaSR+gKIsUjLZlFZyPoSkCNZxzpLaiwu9cUPrQBSpOKjamibhm4F0zRBThpgmuPLppKvmrYBgXv3LtSD16u0XRSQpIkt1/WaoEuZtvjbSlGNiTHqPc96XqaOmxjBWE1uVeJHzqTCTIhBlXx0/FlaZ7GrnrZ9xL2LNV9+9YznVx3P/HiIV8dxYpoGHZulv8QIRmogXRVtMiQpLzZNwMWiSFLnJWsb1ueXNBK4/fJjHj+/orGO/Tgx5UTTtSy7llXX0S8zCwmsmNg8f8bu6gVftQ2L5Yrl2QVn55e0bYe1lgcPH7Jcrxh2Ozaba/abG8KwV8USOXQDRxanLgQ093hcpKdKZFEqHCYFfBwJwx4/TgQ/4kNl0UA0RSq4EjhKpymvw5SxKEVsxJT6r8IJKystmUmoVsuplF8NrL9uvz7tFJC+629zEsccFD+M2XS0lqjVXb7Ms3C0OKn7rN/npIq2bQ8kzho4HZn4Wf1VnSsBdzicWyXTzQHxesz5uVbix5F8p2D6OKp8p4iCA6vVimmaDuuRruuopAeAxWLBMAyHY1QQv+6jaZrDZ6ZpUj/7ci7TNDEMw8EjvmlUarz2X1WnAJToxcskkVRIjFXZohIxTtUK6rkc5IZn29f+aNv28DvvPW3bHu7ZHCCu1zHfp/eepmkOX8Chv+bqJLVP53243+8B6LrucM/n+5+Pk7qvU5WWu4gswOE+zIkmtX9Px+xdBIpxHA+fne/zlNRSx3MdJ/PxOr9X82di/hyFEF6yG6pfdUzEGJmm6Wug77w/5gSReq7z667f5+dUyST1OZqPobvIRKcEpXo9tU/qOKhqMXWbavFzqnZR+/UlKewZ+el0HjolBc338W3JAHNCRr2WOma/DXHmVcc+7av5z/PnZ+7ne6ocVNv8/txFSDk9xuk8Nz/H+fFPr/+UXHR63q8iov2q7a4+qe1UKefb7m8+Pua/A17qy2+7r3n727jeX0bk+btufxvHOT3v1+u/X79mjGG9XmNN0iISVP3AlPyB5IyI2r76YmOSYgZXAIOS40mF+OFaC6bBuVIYYx2CAq4mAyERbFAygBNMdsRcFDGLAnHOuVR0ujLuNF4yJaErhfeRUVWMjB5LQQzNpVCI6xpalaTk7F2TQ8RPvljalvmixM4GEKvzaOOMrlci7PcD47RFL1uOMZypEtAKzpIFi8ZkOalaIkWdUhPcxXYwzUhnonkdkx22cTSLlvXFOcZaXry45ub6mjDtFcCXDI2jaVsePHpI07Z8+fnn3Fxfk0KgSjcb1PZY+0jIxhwSt84Iy37B6mxFv1xyce8ew27PT3/0Y1L0iEQQy/rskqZp2N/ekGNmTDBFIXi1hFkt10ViPDGOW4Zxi5VM8oLJwrgfWJxdcvngLcZx4rPPfkEad8RxDymzWK/ol2u8qHqERWjXK7X/2I88On+IbYxWx0bLQhxk4frmFhFH1y8xBsZhiwh0yxZJmheM+1ueba+ATMTRL87we33X1xyMdQ2maRHXMARP13WAYZqC5lBMYrvdsT47V0XVcSL6HZvNjmbheHb9Amcsl/cfErNwe3PLfrfD9Y1WbCYw1tG2Ha3Z0QB9KXpzkli3jvud5a3LSy5WS+6dXdD3S8Q6mr4vaqHpMLb9NCJNg3ELUlDVEecMw7RTi5biiGKkBRvIORBTwIcJHzPWKEhgrCUSiGEkxRGbVbG20HJJSb3mxTokB2IORGzJX2XyFDFtQowCaSkFdiEgxqm9r7HQ9kx+YvSTKpFEzU96P2Gsw7mGxlnMoseKYQ94n1VJN1WFk0w2+jmMkjx0olDyWJbCXzBFOcPYkuc42i28vL60GKty9KcEdK2kz6pMbK2uV632RTLo/BgNMWdCErW2ThEjSu4QUWUUtXNRIk7TNFhT1Ygp1i+q9GGdQbDFtkYJH5X0ISLFzrzkBOvcppllygytcFTUrdRJuhI8yrxyWIaUjpJaXUwBkP/Gr47X7XX7+9dcprFrtpsruvWSpluTm6XmOE1GomIjoi9vSliOrieKk0CMjKNCpfV5BH3kUn3nH56/yD7uGcaBze0Wayyu0ZxG0zZgLN4+wqxv8bsn2P3PcM0bjDwkbveYLvBi45lSJGYPpehZZBaf1Zxu1mdfxRxqfksJfBUgrqoCImU+yFmLHEUgK64z+szz65uSJ4tK+iCRY6JByZj7ceDTK8tfPN7y/T9v+YPven7vH3yM/c7/GWPeUGxLzhG5ZHM78XwPl26PLBuebjO7beLa3BKtIMkTsrBYPSTnkeH2C9z5u0TTsRn2rLK+J8mRLIZxTNyOIyk1WMmsJfDOhdD1hpsh4drENCV2w7PyDmswYjSXNw5MY2Rleiyq+IBCZOQQiaJWhckXfCcdFaRiUAxJc+H2kMdSMknBG0/yojou8iFHKTJXzq65+vI9gk8RHwLDvinZeTg7H/neB2c8+/mKHz0VPrjo+fx6Sb/uefaiZ+XO+eKTp/jUIzHijMXZUownBh89w/aKtmtYLdcFJizKCMbRLVdsp8DmdtB3EJEsEWgQItvdxI9+PnGT3mIymeE68t/8tfDv/8E/wIWOR/Er1i7x4CIgmw5JiRe7jM+ZZSu8fWG4GjLjFFn1ir36UONkFAs1QiOGZI4qHDqsBYzGGELCkQvOXO5PTEwFu8tQlDQSWVSeJyLsQ8BZw3nrWduA30OwKPE8K0YXw4TpHctmwRSE3SRMecG63TNES459UfMDK2pZs1xkzhYtX72ItI3BemGKHb3ZMqYFxgRMCuyisO7O8dP++B4/qM8fY+s6ZaSq9GdqQdjx93Pyh64LI+kkd3NaZHmMb4pzQh2Xx5kNUiprgaNYg6kyAJIL8Tgd7ktCyEbnSJMz6qRQ1HA4ElP2UwIfWLSeR/dW/PFfPMcaV4qzE+89cnzv/ft88jjwr/9iC03EGacS/SK82HnamOlcIviJ5cWSX3w6UjUdEdS+sOBoMafaQ4frD7mQhc18Xr7r5fA/rf1GED9qQE4B0cm5DAh9OZrCPjfA6EeN6KUQICrTbkqkELBNQzaBrOI5iDi14bBqbxBixhpDU4gMMQSETAjK1HNO5aqUmJEZs5IHrHFUMTyykhGsvt8Kc1/3K8YoQSpnfIpqjRKj/mwqKOLw3hOmSeWlREoAb7BOE/mkoIFT1peJZCGbupDPhz6qxJcUPYjgnMEWv0lr1YrFOXsA/0PKRTZV5QlzVjJJWeofSCZ1NMdqcVFe8IvVGe3Dt4gxMu5uiX6EMqFY50jWkLJa31S2mS1MeCHTHggwgRAqqKVelc4aTEk0IFZlIWPCFnWJlEKRJUvEcSwvvgQpEUTIQRMlOSckDcSgQZ4SgrLap6C3x7aGUECGQ4hqBBOF4D1Hwo3KmZVsil5TRJdz2aiElmSQhM8VFNQxImLU17ckb2PMmByLX54hh0hOscjiysH/11hbQPrENO4U
nKrARZTD37LeFLXVybpoSzkiRQEGVA0GSSUprq80/ayqVlTVGCn3yboC4ogoASYl2qbRaqmcaLqOtFwyTZP616ZQCDklKZWLhZLVBI5Yq17FOSFJMCHgGqfBvEmFpGILuUJfTk05fk25pxjKeepx9KWRiqVSwiRR0QvJIFHZmrkoS2RoBM4WLenhPc7O1hhr6bc74uTxIfDi+XPGIq8Wi6etmDI2dYDqfJQ5vghEX2WJhG1aGmOwruVy3fLpX/2Yj796gm2c8lMMWKzOJ8PIwhkuOsvS6oI5Zwgh4cPIOIxcX13x1H1O17W4tsW1Dc62IEJHZCQXZawis1XYQAfeBxq0a5Wd2lnlFElhwqSIDxNxX1/qOpeFmKAkAkO50Jw08Sm2LhBMsbqihjMKDlVvO+twVp9nYwyNc4ir4F1TZoCkiZugz9iRnvq6/Tq2OFNQmifG5mSBudrFKVB4ShZISYmcbdseQOzNZsNisWC1WtH3PTlnbm5uaJrmsN0wDAdQtILlFVjuuo7tdnvw0awJvjnwOQfGz87OXjrf+dfczqSC5/XnGCPjOL5EXFksFnTFfmkYhoNX51z5pG4bY2S73R72O03TS8BrJVLUz8/B3vl+gJeA8fm1zO1f6jWd3j+gyFHGA9gajlmNlwgW8/v7q4CW82uqNjB32afU/q7XdApGz9Uv5vdiTjaYn+spKDn/3SnoPN/3qwD7+rdqjXOq3FF/vkuOeU66mB+3/q6SIObHn9/z0+PcBcLP+6o+W/Pnc97ussWZK13Uczodk/Pjzvtu3r/z8ztNoNd9n15THZuVEHZ6D14mMnMYL3+b7RTsvwv8/zb7mH+Hb34+vomsUT/7TWP5rm1f9f9v+uyvW/tNuMbX7XX7u2rGGBbLBVZ5ERoLpnxIzFXiPmJIyZd1kCYFjSkVXoDGXwYxRanwMH87rGmhWiA4VeKwTYM0DV3b49q2vKdMid2t5oCMLTGxxolzEggcLQuMqXOmxjyaLC4AjpFDkUfOGnuNftRCA4DM0cYFDvLYIqasORO77ZbgjwnY0yYFdBGpCWgtKirUlJKXmBWkGHuwftUYXOOvruloli1912OtZXO75ebmhuDVtkPEYazQtY7les2iX3B7e8tnv/iMEKZDPksTx3q8WnvsjCtWPApM94uOy/v3WJ9fIGJ5/MVjnj37CnKgaVSx9Hx9gaTM7dULxOi6YPQj0Q8supZuuQBgHAaG/ZacR6zVuDiNHmtbLh++Q9uvuL6+YXv9lCZ5ooBt+6LKKUSfsF1PTobsLKbpMWK4XN1n2A/s9hslANDStBpTr/szRFStNMTI8uIBMUW1oYkTkiJh2GHEkMXQWAfRkyxlPAtiLDGpxL3NsL3ds803ICqJ7ZqWbrnk4YNHBB/YXl9Djmxu9Xw2N4nFcsXi7JKbmxt2g5JZGmfp2w4/TkhKdK5hvVyyHieeX1+DFVpnOVu0rNqGi25BYxwP7j+g73tSivRNy7gf6PuOkBJGIPiJDOyDp00ZY8Cnif0QCkiga66cAiEnwjSRktfqWGqhUpUDH3R9W+aARCb4qUiaG1xjaBoLiZL7shxkvAsRgpiQrCwsJVtlYrEicI0SHzrTIaIk6ozgp0mVfEIkTB7TqOps2xmM6RlHYZgGgg9qyV0VZIUCyCp5w2IxFLKFsYcvZy3GuBnx2mpBoHVYW+xXrCtzzXztCWKOc55JqairiqrcxIQ1DiOxzE9BSRc5KdCK5iptyZvU/ElTiSfGFtUPd7B5McWK5i7FR6kTXcV6yrNcM776czn3l5Y/ReEjKyB9WBvNSR7Ccaev2+v2G9Sss2yH56p45C4JacQWFQ0kKznr8EClYrOeD/OFNi16re3lHNnLtq1VUcgYU/LjmXHyGKMF0qHYI2RZYvp36NItffwKJs/UWdo2sZkyuQkQrQLapMNaxBQcSOf94/Hr+cxtSUS+HnOCEna12K/kC4oiRYqqZpYRIrEo9Stgvp1giqrAb33i4YOJ3334DzDmPuDJOCTuSeYF9/7gbf6jt3bc/mLgL/5q5F9+JHz8IvLi9injKJiY2dxc09gNIQpwwbOnv+B8fUmMFtdOPOwcpC2rxvKzbUacw4Qd61a46OGt+xdc+0DOE411qtqSHPvNninuyCT6Rn/fiipdI8VtIGUimYghpFQUXRSXclbxIi1kSsV6I790jysgTlmfplRIyHK8D3cXI8hJ/Fr2lyNehbqICLfbPZcPH9J81fPd9Q2/eAy3U2DwG1q7RzqYgN3tlt1guX1+g5G3lHyTE03TArDf3PDwwSMlUCfFewwQppHLixVPn35O9KGQux0Jj2TFuZB8UBMTAy9uEz9/An/0XuSNew/pFi+4Hh1xEjZjwphIkyJvrzveuRf58meq5nXeG26HBDngRBRnK1il9gVYk2mdUYeBHEm1+FoMmcAUMmNQbK4KX2QjxILLkLOqwGVUsS9nYgCxhncWwm4/sZ8iKUJXFC2iCBetYdGMXO8DJGHjM53rWDYjknYk2zJ6QfLEonFcLi1Pr0ZsJ0h0NLZhjIm+sVi34WZaIdOeyTQkAi5DyEKUWJ5LfVef5jTnhWmaS6sKKC/nhlI65vpO8/Tz/d2VG6rjrfb7XWNzvkYQVAG/7lOPqWokkvLhE4jOR3o6lpSgWzd8d7ngZ19Enj4fyMaw6jP/6PcfEGPD//AXV2z2E7bPhGgRp0SSLIkuGYZoWfUdG5+53SguaY09xp3Mr+Fob3NyqYXM96rr/Z/efiOIHy8laZOqIcSoKgHWGtq2BSPEoqpR/Xcm75VZnjMh6uI5TIOCva4pQXqpFI0J61QJJETBpAJQihDDHIBKONGKhowp3kUaKEnxW4lFtSBHXxjTGuRXaw0hkhLF4zHS0OrC2VpyGigzOk3b6vUeFtYK2KaUSKiCRAX1UxZlqNXj5ESqL0ujn7dWMMko8SWJ0rfr/mJUUNsIYpRIE6dJAwlcSSSkomJSgrKccK5RVn5S+53OGpaNZUiBgUxEsLYlSybErC+AplUgSAxNUwCMsmBQ2c1YyDhW/S0PCZhEirlU55TUiy0Pfi4khZw1AEeB+BAL+E0mouNFjLJSxVp9SJ0jC4VoUtQ4RHC60sE4Rwoa1KoDi6gUVJFSNTMJR5OPgVROSjDIWaXYNHAUYtAEgrXV9kIJLclZnXiTllJMMR5saaxr9GVU9l0TSaNX1pwuCDQwzVSgVK9bgbakQXRJSOluSgCYIaEEp5zyYZtMJpVjKZlByR0xcujrY4URpCkSfaBtG9pFxzQZRi+VDqXkITEg2s+Iko6Wi47JB3JK+BCZQgEMpS5m6wtFgZxYxrcUS5CcdZwJSjIIKZckW03GCaYQs7Tv1bIkJ1/GjSFkWLYtXdOw/PA9NvuR59e3PH/yhBcp8sYbb4AYtjc37IeBWPxttXJDPeKSZCWsJO2rtmuBjG1aLtYLnnz6M3722Uc8v1F52f1OvfGMdSwWPWfLnvMWzl1mIcowpfSzlJdjzJCDSq4O417JR0ZoymI/x4yPmnQxIkQMxvW0jS3zZiCFQsTIemfUTzIgeNR
htlpglcoTUbsfNPdKe0gUqLQyRrTCrhLPjByq6WIoHnNiMdaquo61ONeoN3PbYqw5Lk7IBO8J46gL2r8LuuTr9m9EmwP/c5B5DujeBezOP1t/PgWTgQPwvdlsNBlYft+2Lc4puXIsBMFK6DgFrSsxpZI9aqtEhrvA4vqZOaAPL5MRTtUM5sSCpkg/VxWOSqKo/TAH3ucEmHk7BcvnpIcK4N8F/L4K3KzElPlX/f3psebf77IJOf3dqZ3K6fHn13YXGWVORKnbzxOcryJ+zM+vbncXQaC2OVnmrr6a34c5eaHub34up2SHU5LIKQFkTv6oxz0lfpySTOo9m3/2m5QVTokcp0ST+n0ePM7v3bw/6ve7yFuvIsDUdkoGOT2PV93D+Xm/qj/m88zptZzOKa86t9PjfJv2bUkDdxFxXvX/b+rH0+uYX/O3bad9/cs+e9c2f1eB7/8v26/Ldf1dnvNrQszr9suapj+kVB6ieYuSzMtSVBJDIoYCKEop0ih5BymFMwoY6Jpr/u43YmicAq4YJZWbpsG6Dmla2mZRiB8NtnGlQhVyjuQCNIvUr/quAKjv2jqnanGNiBQrh5fXJCmlEvvXlGbJNkPJF4gqVRotX8o5s9/viTFwyC5/Qx8e4djyu0OErccSSlGErWDtUWWk7Tr6RUfTtqSU2O8GdtvbAtzrsa11tG3Der3GWccwjnzxxRe6hk7pUP+fpQA51qq6ZQJK3zljwBpWqzX3Hr3Bcrlmu9vz1ePP2G83wKR94Byr1RnBR4b9RgskWsc0BrwH55aYpmWYPDHsid4TpwlntVhpHDyu6bj38G2SOJ4/vSJNWxrT4CMkEjFHgo+qEte2jOPAOKnF7cq09IsFU4jsRo+fIqt+iXGNFipRfOGTruessfgEMSTC5Gmcw7UG2zWEUT29XdtirMWHiGs6EjCFRPAeiJic8eOOGCYlouMwTUtjG/bbLftxIgVPChPTMNDYhvVqBcayH/aM48h+v+dsvWKcJhKZ66sbtUAyWlHaOsvFckkME5bMvb7n4mzJsu1oRa1AXNOw2WxYLpZkyex2kxbzOFXTjUV5c9jdFKDSEoMnFKl4Yyy2saSgah4hjAQS1kD0gRwC1oKVTJ4mphARK1gLsVgVGGNJuSHT4prZKDau5JVKziiLqv9IPtqvoESamBPBRLq2o+8XuKZlt9sTgxb7+aC5JSaNv9pGVeIW/QLjhGnyTFMgxFiEYqWQzfQapZInrDnYpxhxxc7lqPAhVokWtnElr6MkjL9e69QAAQAASURBVArIUtdrKFmCpOCsAGKKcp9YrKSX4hdjLNlEcirljvP1cY1dS27FumrvouCQtUo+seZlUjv165hc1W/z9ed83uGoNoAc/yYym9vq/2U+T2mb0bJfNbW9bq/br0/L4LpHmNUj5PojJN6Qu7WmvYNgnC3qY8fc17GgRw75UJ2DXs4xHA5xsp6vsXVKiSSoTUIGfVcXxRDJSEykbJnkkuyWLOQJZvcFy7BmPwnequKHnbO8JCNVKfqQazk9L6V/VvC4nuPxlFV1ANFiQd2XzoUxaOFxzHU+VKJkzh7TWHKEd9aJ//h/2fN7/7P7LN2e9On/Fb7zfwQi5EeI+yPi/i/4xb/4EVe+4+F5y3/0Dy0vrhw/eeL46bMdN5uWr8LAfsz4YUMyKxINfnzGwnr2i0sW9/b0bcvNzvDgLPLsdkCsIdPy3kXiw0eB/+qjHfsAXQseLQgdfSRmaExGwkTjnCrdp5GA4LMwhcwUIpF4UHWohJmcw6E4JhX7wopvHFT5S9FlXX+SpRR2av/OCSBfG5Jfy28o/pTRcZZEVfj9OPL+Pcf1jeV6r0pePidyY1iYhlW/hPgCn1osLReXbzD4n6uiV86s12t9v6eg7gVkcgo4I6y6nvPVSpVXpomcQ7lWxZZSSkQPKWZVi8sWaQM/+jTxe+96fuvCEWiJk2OfPFMyrFvLYpH4B28KF8vMZswsOsN5Z3i+1WJwS9IxlRXHdEZYNELXOBor5KyYaG5bQkyMIbEfYfL6hhYxZJOJyROikLJiT4KO32wUTxVn8TFzsxt5Y+l4775l/zRDCkRjiSgZ6O1VJu0T+2mEFJmmDdPkSF3P2SKzH/esGkdAuL82vLhJrJqGNEF0gnVgpms20xrJMEZYiCeZMyIRR2RMQjBKNK7EjyORKJ38v8Zk1Url5TzeywSzr+eMvz4nHeeHOtbqn+px65yhcWHJf2YOKiEV48w5kk6IZLq8OCE4ZQNR+OCdNX/8w2cYm/jD31vyW2+9wb/6syf84slznAPTQoxSrD0LDmYcMY0sbUsOgWFM3D/fY5s1V7eW1nmdQ1PF407ntpdPTmbX+3fRfiOIHyklxmnQF0RK5BBUnSVnDdpiIvuaOBemqBOOaxr1LkKT/ikpc0lftkmVJopcizVCmCZNLDiHMoiSsvK8VwDUuAOBI4WAM7YwsDkA/XWpm7OQYlFvyEoUEaO+YVEKsJqViBCCTk7BT1grWNfqY5MUnMFY9XKrMkNJH1opVTC2yENVYgrlQY4VMBOFT0PMZEnkodjXWPWVSyXJ4tpOgfCiBBJDwGQIsUrzqIKGHEAEHX5igKA2IcM0sR+fkXORTzXFdwpV+og+HYOZVEGBeIgkUk6F4aqSYFKkylIsQJVVlYmck6oWSElCpEgKoRARjokbZ4VsBYoFhz60GkAaqiKI0gYqIcRIqV4pljdUmyBjscbgMoe+FgGSMlQpXqvYRokHZRL1etOUeJQL69ColCUihwQUMZOJTCHRuAYhIcZhjdBYqyoZWauhDiCUdg7ONVpRQi4KIVpJoQQHtYhJWcHDrkjtp6iVDkWVSVm4UlVcjkF4JVxo5VJRDhHBlSDZ1arq8j3EeKhCUHGTWKqI9GAKiiW1IissZ1NINbFWJ+VYJC3r61Wf7eoBlrKoGo9oNZZBnwcfIjXMjJJxtinkD4O1OlF3XadL1EI0iTHT6K0oVepC2zZcrlc8uLzg8uEjrq+uGLYbttOGs7NLFoslwY/cbHZ0iwWSYbfd0PYLrBP8FHj44BH+9hnX1y948uQTPvvqCTe7QVmtGZq2o+s7nFE26H0XOHdCI/lwXTVBqj2QZgzxfJRxS5ksQftItKLG6IpEvZbTRA4GjCYAaA22kNbUR1ZVXlKoHnGHFakSrczRu6xKgAJghMZIIS7J4d7mGAk+MZXnFNFKGGsdTasJJ+daXNPSNFWmX2Vc1ZYqkGPxtP57CLC8bt++nYK49XenxI9fBrzNA9JT8H0YBrbb7YHIcX5+jrWW3W7HdrulaRqWyyVd1x3UPuCYxK+2KbZI8s6JGDFG2kMVqQIEVRmk/jz/eyWX1CqxuVpFSulgXVE/P03TQbWhaZoDyWROKpj34ylBADgcq15T7ZdTe4c5+/u0j2tCIcZ4sFapxzpVnJgTE+4ipbyK+HEK8J8SKYA7z/mUOFPbKWmg3r9TkKb+fEq4qEH46Zg7Ha/1WKfXNCfmvKovTskwpwSRu0gL83Odf5+Pg7qPuk6Y7/su4scpCWa+r9Nn6q5ruIswMU9mnR
+IbGN2ASgsdbnYu2ajA6Nku1rdphbQsunNmSOoZCDJML/wai0Fb9tfSjUgSn59B+Uay4Mt2EiopPLgKYVmOukuhNwidVVo9o/mTmYWIWpLBikxyb0BCJGBZKBHReFZWNKrYnhB5LTDu4dsI8rYl9x8LMkSYxt3dYcpGYGryIfs85EN37Ws+wzolZEdrZyKpjUIOWoYwwx/8btdGyJqtQGE8yFqwW7MXO0XWJ3kR+9NPbfP0ff5bvblYsHgb+8dd/D2LPcg3Xrl3k1o2nmHzFM501/OiNd3jv9iExwM5sxuL4iIcPTjk5WeRXnGjBM0Yd3hOEXosZjYEu9PRBCS0lZ6bvEb1WnfN7+r6jMRNc48D0mk8yhpSmxGA1lp/XbyKOKJ6ZB+MsP74/5d7ilJdNy72jUx4tDMkmdiYN7z/eZdZM2N1JHPUGH6xKEDiPMZYYEiGqInkhSZQ1nsvrMRElfChvRYCsiitCH7To2rZuWL/p3knXl41TG5GYhK4UOucTKMkIneuT6ALVZqWHrKaRUiQSVb2lWJ5l2zcoNddmuG4Rk1URGNaradjLZYLxNq+fUwFJCZSknLMJGuN3lmTBOdE8gNiSVlQeQs6ngSVGLRIOSZDoaU1gMhHWJ8KDk4S3nrBMPD5N9D3cfaT5t+UaZhPDM1cm/PBeokuGZCJ9vyL0LtN27cB7SJKJDSIE0XWNqsdne8qcN1FVfMFaD1IUy/TzzmrOMi8VMQh7FzwbZ2ACk9AiNIgERHpM2/D67VN+99ULPL2ywA6fe/VlLszv8uwzT+F29ujlx7z80g3+6D+/xWRnzueuT1mtJty4OgEmnHTCcrEhdInTVU/s1sSNJXqLc2sIzZl4tKAWN+XBlvX9sAfJfyeTY1Mmfwyfy3mylPcZjElM+XhhUHAuxdloXnx450tez5dM8Nl4cYEZxqUqdyAWg8cncLZjYSwuK6uQdA7rNonZrMUQQSyzxiNpw1de3qWzCeca1jGwjj3fe/OITz0153tvHmPFIlmpJcok7wN6nEt8+bldbt3a5/W3Fvz59+9qTLyxGJkquc9GTvuef/2H3+Pzn9rj5tVLfP7zn+Hf/Ie/ZLHc/Iz3xC+OXwnix5ht3Hc91ltV6wCsbzFGVQsa3xBzxw4pV+OnLNloDa7Ji9XQA4Z+yE9amkY3Vk4a1mvRxboVUi9q+2IMJukkYDBsuk7VGpyn7yPTSeYlxpwIt2CdMtx0qlZf29CpzFAMPdPJlKZpdAMieT+XO3oSTSbbQRHDbBNZxhF7tcGQ7EdrstIGspXVSdlWRdIoUTMw6yIpK2U2jcc4T7fpSSlgjG4YYlLrh0IkwBT59bJxVla3y0QPJIcskmDyRllsrgTONiZlsa9qibkipageWPXWSlG2cmhktQRJhKQShUOynhKQsWodgYVkEGfo82Zry8ATDQxkFmI7meR2iZroNwzyjl1Q+wzd3GT5S6sMzMZaVX7IUmLOWFLeNFqMykRaTVxrxYC+XNrJFOsMMcuZSgjYdoJrJlhTLEOiqimUtjK6oTeiaigiKTNthdjrhOqcWvmQIilmn02BFCMpFJn3nhBy1YoptkWZsGQtjWsJubpGN3j6Agt5Wk59D0Zf0EbAGz/I1jpnwHgkxtxni5RkJmKRN/XleeW+2jYeg6GnkCQyAzGW10DuT+XlYtIQ+DFGiQ3GaKVP2bgXBYooylQs50u5b3irz9lmKx9JEZxniLCUl1CK2udDr0mp9Vqr5/NCJ8aoVSRGjadiPpfLyhzahuov6BqPiToWdXwm3cTnucdFHedqnZIJD5ksUaoijOR+g768feMIIc8ZItoXjVoRSYwa9DQG71pMY4b214oVl+2XEtiIqufoXOqyko1IJBXFGWfokxnY1MY6cjSpdNAzC1trlRhS+B+SVUC0FkkDs0lCXkgKYlQq1DuPMVtWbUp9TRB8wlGCcSVJXVQ+xgQCYLCiGC8Kz5NAikXFOEFYbDLGxI/y3TGRoViUlO+NK94LmWNsxzImkZT3d5HDLgSL88SPkqgNIQyJ8vK5QrYqiemSDC/EjHGid0wGGZMjio3MmLRS2qoQHcr5y5+2ben7/ow1y/mkcCHiFCWMMbGj3EMhnBSiQSG/jBOdZe1yvg3L78v5y3MZJx0KkaYk08u9isigTFH+TKdTdnZ2hnb40Y9+xGKxGPrPrVu3hvvquu7Mudu25dOf/vTQF8p9FuLR5z//eY6OjgZrk0IIKFY3V69exTnHfD7nhz/8IcfHx7Rty/7+Ps8999xwL33fc/Xq1aHNlsvloDBx4cIFrl27xuc+9zl++MMf8v777/Puu+8OxIRr167x/PPP861vfYsHDx7w/vvvY63l4OBgUHCZz+dDny7nLIoope3H/z46OmK5XHJ6ekrXdRweHg6KLdevX8d7z2azGYgS5TyPHz9mvV5z9epVJpMJDx484PT0lGeeeYZnnnmGvu/58z//8+H8Fy9epGkadnZ2mM/nfO5znxv612q14ujoiOvXr7Ozs0MhZ43JRWPrn7Zt6bpO5ekXC05PTwc1k3L82Wx2Zl44OTkZ+th4fBT7qPV6zWq1YjKZDM++67rhs6VPFvn7Qiop47Y8+y984QssFovBtun111/HGDOo6JR+rWuzcOZazs9DTyJMjeeT8XfK38vvn/TZ8Zgb/3d87PH5x3PCmHjxYaSCJ5E1xp8fX8eQjBmdf3zOJ13Tk37+N8X5Y/wi645flFzxUcc4T2z5eY7z815r+dx4bv+oz/0iBI0nvTfPP68nEX9+nuuo+OQgJWGxWtN6B65RS9dMKBebbVqcwbkpu7u7TOa7rNc9cbMkJa2QdS4ngL1a31qvSQwwOOMwjrzHUDK8sWZQdvR+yu7+JYx1rNYrUoxYnCofOqd759E7Zxgrpqh8fLSVoORA/njs+qbBWq9kj1KYkqMKjLu4zXssSfk4Mqx5yjs8paKeulVpBL1Xk4OyJgfoS3J8iNJbg8nv1PmOrg+WqwWr5UqtRDOhg9FcX+xIC93CoEHuHCen8Z4Lly5ireH46Jhu01EIABo/sTlulIbLaBu1HtxsNrTeE7K1sohowiY/gxgDs+mUSdNwfHI8vDOLHLPOL2YIznvn2axXg7KCfqxcSSGL5AY6S1FQNZMY9R5d1iTIhSRxFNQusabQ98NeuvW6/k/Idp2dE/kWJRGlLmBdk61rcgIoyaAuYY0h9AHf+GHdb53TfXtZ66IFQpISxikJyVgl3pQbtDnpgNmSpVMmIFin8SuNJXKGyD4mRMioH5T9vfbzbXC+nLAQHKTE/UbJgZLEMaVviiqlStraS2pfU1UPjTc4hETqAz09ttWYTcrxHGu0PYtSRVFk0T0BeYwXK8wP2o7AVsECNPkWJX3gnSc5nlMUeMtwzyFLJCQlBhlX0hWcyXgNDWByPNYMfTyV3w/fMfonbVUOU0qEFFSOPkb6LE3fh0AIPX3skWLnSyRajVH2vXrVd84RXINvPL5pMNnSOUWUcEdRlFGSlS0EoDzuHWWvlu2b7ZbwUfZFxqhi61D9C5kQs01SWluIHrnKG4b
fDc1WUfGJRCY++Ym+o4reh3VYicxch5ee2C3YGEhmj816SUwntO0uyATnVHFcEhhvwTUUzZxkE6QJ0c1pJj1td0xInrW5yo45JrqW3hwgOMRFbNJxb8mFO6JFd40BSR2na0MwwqYPWrAphiRaIDppHe3UM2sbJhNV6I4CYmE2bdjb32W2M2dn0rAOPYcPl/zOP/wK999/i1c/83/B2YYHR2/wD7/+f2c2nbJYnPCnf/pdutjwzEtP85M3bvPGD97i0cMTYhAQl0mCCY/TnJ4VnAU/cUi0uCYr+TuAiDXZLhDBO6NkAlQ9TGIihQRuyYYlVqZ4P8N5R6QjBovWlBhENiTpmHlHCMKPHq55vOzZhMDjjRAjLLHMxPHgRNjdt1zZ3WERAut1opdEFFUTUTalFmSWNZ2+SLfvlJg0XxEGJQ4zkAJTSixXPZsu0Lb5PWfUmsRkEoZxFpegcYZEO+y7YpIcQ++JKWSSscsEU7BGldVVdU9JJBrf1/ceorktYzLh0WSVERLbLaTZrrfKXlVkuDcRuLTfsjd1WKskwdaV968WfCKqcCJGC8xLwTMIKamNoOb9VNnEiK5DrRGOTnvWa2Fn13K66FmtEsl6Fhvh4XHA2inL5Ybruw3vPA6cLDv6IPTBIERCRIusy0rZsF0ziSrKS1SVL1WGgZCEPqRcRJ8QYl4qmYEoC/r5SeuYtZ6DvQaSYXff0e4ZLl+f8JSd8uhwQx+FwyW8deeI1379Fb75V6/zW//gH3Dx6mVuXL7KYtUzn+1x+cZVdifvML2ww6deuM7l/SmLB3e5e+8x641ntfTEuMCknrQBk3pgQuMtjWlyXisN6zPO7deHn4/IRfovJaqJ0QJx8u8sJT86JNHPHM/lvVAhIklZEJ6ZIbdLgDNxn9E6VETz12IaYtLx3rSWVy7BnQcdTubYJBjTgFWVweViwWa1Zm93ggWiGGJ0vPLSZd7+yUOMb5jiiDQ83AT2Tzc8fW3G7bsbpJkiRKLplVR1LHz+6V2OTxv+/bfv4G0DSV0+SKpkaNOaZC1eDC8/M2VzmviPr7/Ff/rTn4IkLh3Mfu63xs+LXwnihxIhNElvTPYZRRf6GEOKKSfagy4xBZAw2J2AJsJNypXwmVhB7LBAO51p0lrAmMRkOtUAfdepLKhvMFay7KBuBn071YWvCG3jwTn6LmBIWokSNcFv8gbOAZHEpu8xgJ/Os8QixNgT+14XxU69Ta0jB+814a/CD0YJAx5S9llNmchAjDhrs2ShEl1CQD+fN0lD0EISYlRCisx6s8bQtJ6+S5kBuQ1+KFlCPaEGGss4YItW1MQk+Ly5TCS1XilypHlgxyQMAqplk2D036Hv1e/UqgIF5CrpGLPcWCaMlAkoByFikTwUlcskRlIIDJdosiJBypvXmJA+4jLBJeWqG2ut+rbmPmdzVUnZA1tQT1uTWZOSmYgh4ZxRJYOgyiMq26gTmVqjOJ3Qkmi/skoiMaKVEDEmlWZzOZhEUvmyzPpNheSTX8S6z7RMnMqyRmFQT0FivncNaKQy6dptZZQqzAgmCjFsBuUTA1kO1OB9HJIiuoHVZ6UKLvkFrJdCMirbWyo3SIKURSbaB0o1CxasM7hoVKIy9BpIaiZYZ1Dqnr74RSCkvKkszxddhBij7YUUZQwdm4akcrFZUSIElfh1NlsfISTT5kqMkKtvQq7CMrnaCLpuTd9poEL7hyH2pRPqJt44h40xy3epx5rEzAC1+QXpPTiVEO42aw3kWAux1+BNkSAd1Nwy+zbHNGMIpKRy+kaE2HeDa6ASgVKW4lXFmfJsvFebJ2uA2NNFJbwpHUctYAxNDmoKJum8qeNCA39JoPjCDu2Q5yJd7EnuG+QXdl40ZEUhpQ9vK1rIXdHbHJQQMwStjBEwPle15aBTDQ58olES1CXhP65u7LpuIDOUxPeY6FAsTk5OTgaygXOO3d1dANbrNU3TMJ1uCZaF5DAmFpQAaFHcUBlmO6hLjAkIpTK/JM7HSeRChpjP52dUOgaf5Xyecrzx78rPx+ceE06KOkAhLPR9f6Z9yrnKvRZ1hG2g7oMDqZAvxsSX8Z/zifdCnigkl3Gyb/xZ7z3r9frMsxon8mFL3CltOVZBKRgTMMqxS6L8vCLB+cRj13Wcnp4ONiLr9ZrT01Pu3r3L9evXuXz5Mvv7+4NdTiGXjMky42soPy9EiEI4Ke1fCAXXrl3j6tWrfPrTnx5+X5RJuq7jypUrPP3003jv1c4q39vJyclglRJj5MUXX+T69esDeQCU0FCu+ctf/jJPPfUUd+/eZb1eIyLs7u4OdiPFVuS8CsyYtFMUO05PT3nzzTd577336LqOe/fu8ejRI37605/y4osvsr+/z8nJCW+++SZHR0ccHR0BcPfuXVJKPPXUU+zv77PZbLh9+/Zgn/PKK6/wH//jf+TNN9/kz//8z/n617/OlStXiDHy5ptv8gd/8AccHByQUuL4+BgR4fj4mBdeeIEXX3yRg4MDZrMZ6/Wae/fu8eqrrw7PdjKZcOnSJd566y3+4A/+gMVigTGGN954gz/7sz/jq1/9Kp/5zGd49tln+ZM/+RP++I//mFdffZWdnR3ef/99/uRP/oSrV6/y9NNPs1gseOuttzg6OuLOnTt0Xcdf/dVfcXR0xJUrV3jhhRc4PDzk8ePHHB0d8f7773Pp0iXu3r3Ld7/7XWKM3L59G+89f/qnf8p3v/tdnn/+eV555RW++tWvcufOncH2pozRvu85PDzkxz/+MdPplEuXLjGZTIYxeZ7YUPr/+YT6+f4//syH4fxYGX/2SYSDQhYqxJuPK863yd/1vTzpuTyJzPL3cW1/Ezyprz2J/FFRgeTKeCMYE3Mg12JNg7Ge+f4Bly5dIgEnx6og4XNVuRbSOGIfsMYj1mIkz3doBaMGboU+JsQZGtHkcjvbY2/vAn0XCP0mBz+BXJhiNHOcC1m2lepjwpK1Jfhth349kBFL8JNtkrkok3WbjrJPIu+dRZ4wxkuevSTxzymynSWinFP6YHuKElItW7ESQPXes7u3R4yRk5NT+jBKwmtmf1grxhjzXtvk+9aLK2dtmoYL+xewxrBcrNisVelDCRb5pCiZo0QKvPdYY5Rc2RYyjJ5D1ygqXR5FCysmkxmb9YbQ92ruYp+gLIGlbSZ0XRiSGuMmKZtIMzA/ShOagTxTtpnnidqSH8jQ9iKEWGwcDc563X+LZIKRHWIc3ntVJA26z/bO5+KPqMSFEmgv6hco2aRpGiXhWFUQtVZVYcQIxuqW2hoGGx7D9r7EJFWZMHZQKytxLe+sFsnAEAt8Ut9L+Wm5ckyNFFCKbkrblET+MPfnLpRGPvGShCQB510mfkS6uKCJPZPJTJM+GlVDjQnyM83S5CIJ41TlI5dkZ/JX2U9tiR9K+tB71LFTiFou25pAIW/BNsFgxAzEHU0yZM6GLdyXrDiSx4nNscRSDCeZuDPubaMmHcap5DGYclGMyclVEZT0IVuFxZSSFnJJTwyqolvicCGEzA8TImotZMoJTFDpeedIPtCSiyKtA9FKVoda4n
jf5EK2BuucJhCtKna4M3Yt273rmBBnRoofA9HMyLk2BovL/aZErIZeVEM7FZ9gaNGkMR6yooEFGt/jWRPXR6z6ji5aoMGaU7yNtE2LpBUpnSgBQjZqV84+zkw0pm4MJuoMHE0gbsC7CzhW2LRgKVM8HRP3gN7u0ccJiQBGiKHBSkCyhbw3IGnDul/TNHMtao2JaPR91jeevgeJPSk0iHHs7E7Yme2wu7fD3k6LbyDFHu+Fl5+5zHPPXuXlZ38LnnO4ZoIxcPnyK0hc890f/HvuPDjii7/267z91m3+7b//E9575z4xCEkcxkYkJp2/sThNW+B9oxbnxkAXcCaw006wbgIY2kbjGpuNFmOkaHJbOXrpIDlsarBmH2e8xu1oNE/hDS5F9BXvmEwbZqZlk4TjDcx2Zmz6Sc4dCi4IvXvI4Sbh1hbn4WCWCRxrM9iWG2uY+BzPlqyOkdWkEokUEzYAcasENU66F4W0EIV+1WNtLkjO5FS13tOZVCQTTfNazhqLsULT2KwYA2IiW1qmrs+kyF3INsc1vNtEVwNKoTm/Ry1ZMhneFSW5Lzk3aS188dWncSkq2dsILmf/Us6rYTU/FGPKeTgDKVvFick5IL0wpUtHvFe1sfXKEgFvheOF4zQo+SeI53Dt8bLh/ukd5rMZF+bweKEWict+hz4UhRHJuQlyjLesoYbkYc51lHd0yYPkWKYRMGFQqLNWsDbioqq7788aGmvANVzc36GXxGzmeeqpOc++uMvetGfTdRydHBHsDq++9CmOlx2f+czXcK7l4Y//ik+9epUbF67xf/3vf5vLuzPe+ek7/PTth1zeN2w2kcNTw+K0Z7ECiQ6c5ktD1+XmlOF5kZ/8VmGsFFaP38bah0q/yg+2dJkRoWOL83Ekm9fVJfWo+aHyM+1N1li80Q8kPni87VpeLbG8hVZgPmv52pcu8K//tyWubwg24U2rGTkjPHh0wnf/8jYhovNcCjx1EY5OItf2Zni7pLNgo8H4hncfB5697Njd8Zwse4x1PH2hZdI3HMzhR+88ZJMO8Nbr/dsWkT7nuNR+a+Y2PH19n7tHHceLNc4IEhyny47TZf9hL4m/Nn5FiB8QYq+SP1EHps+KFJaQO6NWliMqo4UUqxT9rndeZ6JYpJSKRKLFpIQxkV6g36xpvLIjp40fiCKh2+ggMg7pO6a5KjGlSOh7mpzr9M1Ek8hFKUJUDqtM/pPpVCfLsmEQ1BPUNLqNdcoOV8JKGjaSUbYTjkTd5BZbDFuqcJPo4j0H933bqExl6AbmNrli2DujUoKZMGJQr93k1aBDXyRkW5VBp2PYBPkSbAGMxKE6JIY0bMowhhQiKRMCwJyRcXZWJ9QYgiqqGDSpbhySggZJMiM9RQgxYEUZ5NY5rG8poYgyoaTU0+drTlEz+E3jSV1J8rksx2kIKdJkxQg7uE+KEjWyakLoQ642AEkhb2w1YFWCG4U5iVF1iWSUHWcyc1HyJkjIQabJhKLOYKxDjMrbpljYkprEF0nEkPBW+30YBZmscTTeqYpJDGqVEiJ9XKtsq9VKhxBiVqowFK/T8so+W9GjChhKQDCZRAJiso9cvl5v1cstYghJCU0G8mckt2D2EBZR3yzyLroEHFJCrM0v+4T1ugCzxQOvVABlUkSSMASTBnKEVeKFqNyGnjNXepkEIWyGwFeRqkoxK7702k/FWDwel3QcpKhSrTEKSMwvKiWOSNKqJWOtLl3yWEx5HDrvSbEfFgYpb4aTpBzwyi/HmChCl6qYA5PW0vgmk04S3aoD36jvtErjaKI0hKwGk72TrcP5TLYp5BpUkrV4ZWuDbhPDMaktkbrn6AvXWb0f6wwpmsFr0GY5P534ImJ8lizVdrZZDrhIlmZW1hBsKouGYYGREiIqc+t8k9nCukKU7AtsclAB2qEyp+KTi3FSf0xmKFX1heQQY+TKlStDILwkzUtyaT6f0/eq0HNyckLbtkOF/f7+PvP5nMViwXK5JMbIdDrl4sWL7O/vs16vOTk5wTnHZKIVicUaYjKZ8OjRozNqFoXsAQwqGCU5P/63Kis1H0iYlqB+CaaV+z5P9ihJ7rFSQyGHbDab4V1ajlfa6TyBoRx3IHLm48CWGDJGIQacJ6WMlTeaZkvMLISYMblgfMxxYrP8GVv1jNviSdgGG80Zu48zPpXnfjaZTNjb2+Pg4IDT09MzhJqdnR0mkwkhBI6Ojobvjc8zfl7nA58iMjyD88SPcu/e++F35Tzl+bRtO9inlOdwcHAwPJuu65jNZgMRoJxrOp0ynU5ZLpccHBywu7vL008/zWazGdQnirJGCGH4+djKaKyoUsgiL7/8Mjdu3GC1WrG7uzu0c9d1XLx4kfl8zmaz4cUXX/wAQWa1WrG3t8d0OkVEeOmllzg4OODixYvMZjN+93d/l+VyOVjhzOdznn76aZ599llu37499OMbN27w4osvcu3aNS5cuEDTNMzn88FS5/vf/z7PPfccAF3XcXBwwO///u/zk5/8hNdff51/9+/+3dAvX3zxRX7jN36DK1eucPnyZdq25dvf/jZ/9Ed/NPS5F198kddee42XX36ZEALf+MY3ODk5IYTAtWvXeP3113n77bfZ3d3l/v37LBYLDg8PuXTpEj/84Q8BWC6XQ5seHh4yn88HS5of/ehHvPXWW1hrmUwmvPzyy3z6058eiDjFnufNN9+k73um0yl7e3tn+nQZh+N5Y/y78Xg9TxQZ/3f8vJ40Vsrfx/PFuO+PSVbnyWEfF4zv8XzbjH/2t40PI+ycx9gu66Mwvq+f59w/zzF/UTwpCPSkc36c+kvFLx/WGKbe0XiDY1u1L3ikabh0/Sbz3T0Wq1PCeoUl0loBK7RNoyoHuT+lmBPoSeMQMVtKmKBED+s9WIfzLXt7F5nvHbBYLtWKE01ymrJ+8rpnwvicTM7Vj2ZMoND9iQ61kWw3OTCdyrqpwWFom5a+6zMxoMy9uTqwBE5l+FX+vR6wrIfGa64nzVMaj1VShc4BZ8ddUbcQhEk7Y74zp+s3LJanWkQlWvmv6o+6vzbWZiVUNCZE3lMXoolRVc+9/Qu4puV0ecJmvSYmrVj1bBVYyrnFCE2jNqTdple1Fs1tUxLE2t5Z2TQJ7azFWMt6vYKUcvzJajJHS0myHY2utRaL0+G+B9JZ0vhPSfqLyVclqs46BJvL/eW9rIFs73L2nTfMtdaoeiiS1X1tjtvlGJO3AwEjdT2+UYJ4T7bySCBW1VNcKahqNPmu5IMccFK/GQxaoOS8zzapmYCQ533JlY4kTaboMjoXGhk9hyqplLZ48npAyS4aUZOsdCKjNiod3uSCK00OnF1PpBSz5LbGZJPJYyepkkeUHPNxFmcnKt9tVMU45ZgPRp9NnxJeEt63qiRrSpX3di2rY6WoemxJH9u9S0ncmKGPQeIDawBJOXbDGWW57YASSnlzDmEhEiEK4lJ+VmQyh+ShaMr/D7EZbauU46oF27EromrJKSWNFaWsIp33k4U8omSVbdxTn6NVSwRp8jxiSE4LoIxoXNN7T+sbJu0UPyKe24H8v
93/qC3S1u6Ksu4bETjsoPpS5rcnkzuGhGNWd9r2nIqKTyCMkhTBEAQ8ganvkM0jYoCOPTbGgFnhbFJ1eqOKHkEEaHE0wBRjI8a4TMRUda5Efr8nLcLtYySaFnEtk3SCjZZNMkzNMVb26exE57Y86pwYxEYmrlcynfF4bzHBZ3ahKpbHmPAsWYWOEC9zujC4xyva5pimmXPl6gHPPX+Zz3/6Jtefusx8foAxDZu44tLBy0ogEYMHIi0vPvMb3Lp5xL/5X/8X/tM3/gv3Hq6w1tGnHu8dvfqbIDFhjd4rOYcWYiKI4FPZAxsCghVh1fVDkafJ2yFjHX0MSIz0booPQnIRwWEDON9hncOLYBqjCiC9ZeoNbp04XBseniwxjWcde5brhO1X2PQ+YdMSLDgJiDT0aYlNCzb9Pps+EU3ARKGdz/H5+tXSXNcNXa9qEilbnae4tfwyxU/FxDznZiXzqO8gm8oi0oDdri+HZ2ssYhITC0bW9F2P923O0Wg+xrBd08SYaHzO38ReY/7S5nVWsULT9a/N6mh5BaZrAJuG9zYoKcNak1+SUNZ4qpgVVbVEEjFF+mAwsc/vf4txueDXKNkzSr7n/H/ORSY+0veJVe+BgNCw7HQPIBFwkXVvSZuGhydXuHLrGuEn72uhr9vQmp5lanIhO5n0OSIr5P8xtrS7vmnF2JxjNplM0WOdwftCHjY5vqQEYUmWhgX0LfgWPxMuXpmxuzvHRcsiGaYTx9PP3GBn/xKTgx2eeeU3eOf2D9mZPYVrp1y6/ICnnv5vmXrP7ps/5Kc/foNN6JnueNanK+7eecTj031S0PHQi1MlIMDYEyUN57vU9aKuofWpQCliHhcOkdtACkWj/CydjVd8WAxq+7NznzeFUFKuSQhleVn+d1g75T6Tc0wWtXtJzhODsNdMaNyErjNIdIjdYLGsNnBh7nm8XIC3SCfYxrM/i7x7d83nrhlMDCQgiKXthOAi7z2Cy7tKlHruRstiGXGrCcZ0pNhjJLJJPQ6HRINzEyR1GAtTs+HKpZa3769ZrSPRqLjBNpr9y1/p/EoQP5II3WqlAUlj6TdrzGSKydLwKUHoA41XRQ9rTE6kavLZev3eVgY9IaRc/W9YrZa4Ri1bVssls8mEECKz+VQ3ITESBZpGbRKkaUGibmIRXNMOSX3Ji3aHbuCKBGSMCZWhUva9kKtec1933g4WGwJIihQ19JgSTWNp2ilJVCFCJOGbBkyWird2kHDMLANNyFtIzqrnmcuMbcg2MCFP/om+15dg61XGK3Q9xntwRt8/esWDMkLKCSbnHQk/8Llz/UZ+sYBkRQ3rMrEhFYkpU94TQLFk0esVMYRek+3WqkyVVtKoMoPKe1q165CYySZ624UGUixGnM1yomhwJ4kgm3UO5qiUlfP683610M+5RoNIukvWySpvtPqkcqv6MtDgSQiRGNTHd0gQWdSOw4Cz6hlckt9OlHRQqjcMQEpKfHAuB0DyBtlaukzQGax68u5eEH2RG0O3XrMKPeuuQ0p1Td4EJ9SywxrtB/oCdzRO1VxSyqQK7LDpTAjqD9iqh6gkkrX0IeVFmcH4ssnNSiSxyIgp4QnR14xWAsigBBL7vAnMyf4iDhFTUrKCtaS86TUmB1ls6dP60hXIHlz6fKwxZPrAQARJOTAjUipclI1LDtBgDLHX59x3qjriGk/btPp5Z2knc4zr6TYdfd/lQJKezxtDJBGMzT56AAbvtV1F1H8vxpAreCJdkmxvonY3jdM2RwISTSbCeNRCJrNiySQsdEHhmhan0UqsKMnEOzuMEx1XmlxV32JVzYlBiUuq9pLOVIBjDM42OOeJuexl60FsdGDGLNWdx4WIkj1KoEMDYCUglxcUSUkmyZbkxNaiphA9dI2ox1UfWQYrr7+FPEXFfyVQhnVW0coJb9gqb4wJFjHPf+MF6tjiYzab4Zxjs9kMlitjZY0PU5VIKXFycjIQP7quG+xdysbo+Ph4IKPs7OwMZI4S7CyVmWMCSEnYl+rJEogsthDl3OPFcgk4jsfmOOBcbCdgqwBQiB7ls0XR5LzKR/nM+cX6+ecxxjg4XBIQ42sqvxtXxY7bZFxBeT6QOk5gP4kY8qQE5fg6Puzz5485n88H25LyfItdzvi5fFgie0zaGdvUjJUZxvc/vtax4sv492MblnLcok5T1g9jAkrpE+Nq3HI/0+l0ID0NCQ8RHj9+zMnJCavV6oxCyVihxlo7fL8QUXZ2djDGDMSR8fi6fv36mfuJMZ75jHOOS5cunRlnL7zwwkC62d/fZzabsbe3x+7uLnt7e2w2WnXsvefWrVvs7OwMJJL9/X2uXLnC8fExDx8+HAgkZWwdHBzw9NNPY4zhwYMHg9rOpUuXODg4wDnH3t4en/nMZwZLmEIau3btGk8//TQHBwd0Xcezzz7LarXS6ljvWS6XiKj9z+XLl7lw4QIXLlzg5s2bXLp0icuXL3NwcMBXvvIVUkrcunVr+Nx6vR7GZVH6eOqpp3j66acHhZuu686oh9y6dYvd3d0P7c9PIjqdH4fn++Mvmmg/P+bGSj3nyQU/yzLkv0Y8qY3gbLLw7+t6nvS7X/b1/CKkj78OQeRnvU8qfrWxTUoX8qfF+gmzvX2uXr8GJB49vAuxZ+I8TZPtKr3uJ9RTW+ejUpmvRHStdkxRpSGLtUU7nXNwcBURy/Hj+3jfDIqXtgQfMoleRLS61PKBuSGfCYwqGAw/l62NgRiD857GN4BhuVwOn9seLxfyQJZJHs0Bed9qrB4zRC0o2loCujPrX8q3JIecy5oo/8+QxLdKQtnb3WG5XLBYLSCNErKyVRIxVgPtUcJQtWkKWwItZrFNy87ujKZtODk+oQ+qgOesBuq3TaeBLZGEcy3ONYTYI6giwXqzykSSvM8bzh9p2ob5fMrJySLvC00+XEnI63ewlqZxw5p7vC4cQkvj/pd/YOwHib1b5IS02aq5FBJOSlG/a3VPXPbHJbZgnVPVjaw0EfqA9Q7fNposKWoO5UwDuUIVVLCWFFQhI3cXjHE03hNLEsj6HBezaNhGZc/LNRhrtQgo9xNvtsT68/3x/DtmtDPI46tcXy7KkDj0rfL9Em+JMalya4qkmD9jCjXB5TWEEFMEEptNx2TSqKKuVSn4EDpV8LEmq7vksWIt3jV6LLclp+p6tthh57aw/sy/tySEct9nSR9KzLCQVBWjWAuPibDGlP1aJpvlal+NLQkSRS2mcjFPaRsp00ZOKBlj8GhhTcwxtpRjMZK2ah9a4CikqBbXUUpVcnlCpSJ6FOcSEIlgtV9qMY8WATrbYH1Dkwnnk8mEZjrBNw2NUXtnM1L2KKod2/2HcHYeyySaM/MPFCUkBpJHolC2SrKpqJx82B6vouKTgBwlJklgz/XY/jGr055gD8DPAaGVDnFzjE2Z3KhrDG8bVUIzAjZg8OAsfZ5/yWHvgTUqOgcnBBdhwwWMi0w4pQsG707YTaeszGU8Pb33xGQw4RRPzzoYUnJE6xDnsSYizuJEsPGIaHr60BOSMG+v4NqENw2PHhxx/8EJr7/x
Dv/n//EXPHV5j6/81mf4x/+3f8L+/vM5/q+KV0kE6z3TZso7b3+Di7ue3/j15/nJW/e5d/uUxcmS00WHOEvfq9pTkmypjIGoc1wCGpvoUmLV90hIOLEETXUr7VR0XZiSKs+bZBDTIDHgZUNAWPVgbcQbP7z3nFcFdAwsemG9sXRimCRhcSws+8SuXbPxF0nSknCsTETiKSa+x4X2Midrvf4kMPMLTJpqTD5fmxL7hMaBZAWzhGGCIUahH5TWiipDXvdInmOtJeKwaQPOYUTVuMvb2xiTCa6GGGCzCjQTk9dejpBzhX1e9xmjxdtd15EszIxjnQzObPRd7Rw2ZcGtFEnJ57i9EMUSUo+hQfpcPC0JnOYYrMBKLKRISD2NmYJNNFat6kVU5d+ZqErttiH0CWt9tvwR5VN2PcZBbw3e65qy7w3LzRprPZt1z3KjuR61k7eI9Dw+7blzkrh23Gnf83C46RFCtuLRNauuq3OeTt9qgwWcxJKDEBKJgdjJduhpPxPEFEWyUgArxASnq0gvR4STNQ8O1/jVkitX93juuWd48dYO0c043axYPLqLvflZPvXp38FZ7TfPPfsakpa895M/ouvXXHvhJg/+Ysmdn7zLew+PMJ1gZUlIc0zqlbOjGipoYbESoYYeMsST8vVTHDB0MZHK2i7PL4PCyXC/P987u7zjz3zvzM/y2nFU3GtdXj8VUm0ZL0bwCD3kvhqJOGaTFllrMbV1ui+4dNCzOwv81m++QGrf5//35495dt5x2s85XAlfeqbFOUgbg0WVjVJsmU4iT13ZwxP53tuJk8WGFy/09EQa51l3gpeGPun6XvoOTOCCT0x2Wt68G4giWHE69xvN/f1tLXF+JYgfIPQhKJs826FY5zQ5mZQ97bzHtUrAIG8wlKUshE2XIw5FsULtE2LX43yjmxZRj2+sA+vwsyliNXDrXUPrPSl0eKvWE8vVGlLCebVMCTERNtmuxQhBBJcTPcZ7fGPU/iUpO06yGoVzNue0NaEeul6Tp87R2Gz1QiZtiDBpGmQyyYndPi8QDN4q67MobZSdb7FKsSMlD7DKrgwWR66+RdUbipRmFGUtOQxBJL84jFbRmDIZyECMKP4Yxlhc3rQZVMTR5s8UT9Akgs2TqeRkeanOUKZcxFpPsHpuT7b1yZtItbUIpKDXIEUNwZhhA2VyhYOgsq/OebxTBmhM6SwxCG3LLicowGAk4Y3X30nM1yiqkpLVDVIKxKDXbcWAjUgIeOtpWlUjkRiJpsPhsdYjWGLSzzjrtC+KvvhsZooV5rxzXps3JRxKUimbb7DEZFQO1BjEWpL1mEZJPV0ItE2LEYNrVXbUOQeiQYNyHCPgS8UKkMQPk1WpyAJHiAkvDW0jhH6jMq0h5USMtr8x6Cbeed24DoEFyQvhQmZBiT1o/3fGgQmZlFKIKYVdmTf/FOm8HOgz+b7zYkhSyl5wJhNoDCZFwGZ2pqoEISDOZoKKZXuzmagVekJuI8kb4MZ7Dbx5TY5JCogoGUfbKFuuOKfqN6METRlzXR9IOcGVRO2hWqckBxGdP7ZSmqKKJHl+kxQzGSPSrxf0fcd8Z3/wEPboZ0z2q9aQTQ685YRxzPNeyvdmiYTQE1LQKjVrIQadj1CZPJMDoxgNUhmbVUVEBkvqgWplYKBai8UMtGswDpxkP110HlMZ2EICyb0joQFHOzpWxScWhcwA2yTy2Ee4KCOUz0wmkyFRDGV+skMCvCTwQghD9XwhbEyzfdv5qu/NZsPh4eGQnA5B1wGFOOK959GjR/R9P1idTKfTMyoPhQhS7qHYzhQVj8GfPVf/l0R+COEMwQMYgtjlsyXputlshmMYozLioHY2wEAaKOcqvy/fLwnngo9ibZ+vDBdRhYvJZHKGfFB+V+6htEkhzowT1mN8WMJznNwu/y3XUv70fX/mPs5f+/m+NVbgKGoZRUWm3NM4MF6SIOO2KKSL0k/Lec4rm4wtgUq7n1cAGd9TIZ4453j8+DF7e3sDgancw/h8Y1LN2MqnkDVKnzo5OeHOnTs8ePCAw8PDwZJorFhS+tD169d54YUXmEwmA6miaZrhO0UtxXvPbDb7QBvNZrOBLOWc4+bNm/R9z3q9ZrVace3atTNjupCP9vb2uHnz5tCXNpvNQPYqbXfhwgWeeeYZVqsVt2/f5uTkhN3d3YEoE2PkwoULXL58mclkAjCQXIraSdu23Lx5k9/7vd8bLHGKZVRpiwsXLvA7v/M7w/iazWYDScQYM1jwlDYuxDKAV155ZehH0+mU/f19bt68OTzz0i9Km242G4wxnJyc8NZbb3Hv3j2+9rWv8alPfYrd3V1OTk7O9Ovx2Dk/Fs4ToMb4WaSB80Sn858f95VxW5XfPYkM8l8zxvPQk9rm/Jz3y8b4WZ2/rif9/Un//mXg7zrZcr6fVvzqQhB66YnR0PiWvZ09Ll68wO7OLuvTE0LosKjCpPNKZNeEJKoM4LKEoTHZ11T3FiZX1jZ5z2i9YzLbYba7x3J5St/1WGuIoVNiu3W4vE811oO4XLUYkOSQoIURum+FkjQeSO+gcYOBO+IwzuCalq5X21Yt6NAyGJESu4DCjEjDYXJ63W6JuX1KOXnrNXg7FHyUMbQlo2hoJakUdtLjKxFFFVUn0xlt23J0fKyFC2mI+G5JDSUmhVFL2eH4+kfnRm2T3d1dbOs4PjrJvxaIoskWU/bo+b1hAGezNXGkj30mPfa5XbYEFE3UazupwpjGYzR5pAUWxtpBIrzMKc46ur7TPesooa/Eg7PqVGUuetLPpMSy4vZRjdVpQQPTw3yW79dn5btiKa3FRFrvZ42qy+ZfnHl/hqAkEmvV2gZrRrZ8WiwVomBMom22a3ljNBZDjrOV+IMxpc9qUgYx2FwuUtQayrN80rtu/Cxsfh5KUtKEhybiUt6nn1WoKHGKGMMQfysFXoJqtNh2AlitWkYIfcS7AN5gfYO1grM+E1dMjtf5rNZRFHoMWC0aU3KxxtS8L+So7f2Uvl3qLsfrHXP2g0PbljYp+6XS78lpu6Gut9hvO09ISuYysfxOK2+HMSpDSi4PlzJ+dbym/CdGIWZB15jyn9zKCSEZm+0ZBGNiTivbwfbZlH7PlriBVbKcazxN02Jcg29b/eNVWdIbPxQ5le8NRDBTCB0jgtCo24xWLkNybPh7vrlinK3xpNwkZug9VFR8ImESOy1YeUx/ckrwO5j2Er6d6/wRN5hk8hohF1IO8W1y4WT+u0WJpaLkgCCqfvTBXYwSDyEiBnr28a4nyBIvibm7x5rLmL7HxQURS7s3p49HJLFqFY5azrsUkLRUde+0CybSxw3RrJE0ZbXqCcZh4prY94TJBZJ3THcn7OzuauzdORCflcStEtgM3Lj5JdZpQuBd+hTZmUx4fNRw9HjNcrFhsdzQdYHYa97AGptVj1SVqW0TIUIQnVP0vaTvmjR+ryV9e4EjGY9Jmox1LiJpjaSWzuacilHl3YkRHMImdCy7yKZPdKklxA2OxNRNWS5nTMyGXqALERsesjebskmJIIlkDCnqmnK1yfYqBrb
vAF2zluR4An0PpUwEFMmcnrzfT+X9nIuwTQ94YuzxyaptvLbEaH0ipByvVzWrTLANBuP03DFCkoA1jh5VYjsVtRJzxG1BudFcZkop5x1zIS+QrMGYHgmeaLXvObFZ+UvoerAmEmjBdYhYxBlWfaTJ+aguCSIB10CKFmGDFYtrPYTESjwTK5gkWKPEyGXwLFcdk6lhFQyrTWC90rVhJ+BiYr0OnKxa7j6KbPrI1HklEBnVK4nZGUByQWsygsGSjGQCaB6CeQ1Q3m36es95DzHDOimKkKyuyyDibCKEFlwiyoZNcKy7U24/2HD/6IQ/feOQz79wgc98+lVe/eKvs3/1Ju30Ik6mmLG6nZlw/bnXeLz8T/zh//d/5/3bD3jvvWPmjQXvsMHgwoY1FmPWuav1xDRlZntS6Ilk4qrJOfCU+6LZkkqHVb/+Ytj7lO5Z8GHx2/Pryg/u+7PzwbBWYMjNlbWWs0WBZERyz3uZKI6Y1FZnEYS9SYOlI1nNECKJxanj6p5lbxZxndBK4MJFx1vvRtoWpu2e5g4xJBrmk8BzN5QY+8Pba2YucmEKR6dClAYhEjedEmqT4DAkeowRDnbAmJZ7j7T42gG9Ddjkcr67kF5/+fiVIH6oTYoqE1hHnjB0WGjyGNqmUf9NyRsd53HW0GcWWGOtLqCj0HcdyjNQhj0Yum5DEmitJlFCHwhRmdkSe5qk1fT9JmrFvfOIFUIS+sUqK0RoZX556M43KueYCts55YC/bhY0odpTyChiLH3smbaqKCJ5c9c2KnUYBSRFJs6AsQSXqz9DIHQrfNNowMGrVFYI2doiE0jUA9frwjttk/JNO1EvydjjGg3otl7tImJSbzhjHX6olM0VFsNmKVc5ZI3HGJQogc2SXS7lhLTFuCxNJTIkh0XIG8q8QIC8WNBpJ/a9biBysl7lpyBE3RpJErUdsQ6xDuvaIhABBnYnKikZ+w0WtRbxTVt2t/qcY9gmNBCiEWwKQ2DH5WoMRCd4ZwXjPNZtrxVRKxmVvcqVksZkNqNgYlS1E2MyCzLRtC0TN80TWp7ASEouyOGhlCISdTGXMuPM5OOkfO82x8LaacN6FemSMmYb16iPlnY2JalYR+j7YVNrncpaeeuHF5vktiWrOjRevQpjtxkCDSpflhDrMY0uRkVEx1GTvQQlW63kKhg7chUo7wP1q3X5RVrGD8OzKNUfGIMRqxtf4yjimeo/mEhGVXMK4cQaSyLmChpVo0gx6KIrJZLRdos5IacBDO3PHoNxWbIM8CLgdSOdkh2CHAZUDQUGJRLDtvo9ZhsWycEQY8EbrVRyppBtBIfNc1Je6BpNQnkLIaAqQWJJ0zmb9Yrl8SGzvQMm01m+l6zAgZCCWghh1YzJWF04O5dVSvqObtPpQt85rIk6l4pgaLC2VNiN7idXEhEjIYY8FjUoi1VGc5KYLWQ0mGEEPUeWtTGSsiSQVqLoc9VNjnVuIJOoapMbgpgVn0xsK4x0jBX7llKVNJlMKHYpRZlgbBEyXoD2fc9qtSKEwHw+58KFC/R9z+npKYvFAmO0CrMkf4tdxt7eHiEEnnrqKdq2JcbIe++9R0qJ9Xo9WF6UwOBisaBt20EZAJQ88vjxY46Ojui6biChOKeB8uVyOVjAHBwcMJ1OBwLFw4cPuXTp0qAkUpQGygbu8ePHPHz4kBgjX/ziFxGRQVFhvV5z7949RFQd4erVq4O1x5gIU9ppXBk5TuqOCQ4F4/srCgtj9YzVakUhfUwmk8H6ZUz+gLOqJuPA6zi4WD5fSA3jJO1YnaM86/NSzONkeDlmse0ZB/fL8cr1FiufQgIp138+KT7++VjpYKzMUUgD4+TC+UTz+J7L71JKQ98oxI1yL4Ond0qDcsd0Oh3IFZvNhvV6PZCWyp/pdMqFCxcGqfjybMbtXPpDUdIp116SD2XMle+Uezg9PR1IItbaYfyU6zk5ORmINdPplMPDQ6y1A5Fk/KzGZJqu61iv10yn06HC9saNGzz//POklHj//fe5e/cuTdNw5cqVof+XY1+4cOFM31mtVsMYODw85MKFCxSSSRkfhRxyfHwMwGw2w3vPvXv3uH79+nAti8ViaJvNZjOQXUq/nM/nw72sViuMMQMRpBCQYozDmCnkstPTU27dusUXv/hFnnvuORaLxaAedJ4wNSZRld+X5/qLYkiajRSAzm/Yz1cJn//dB6vPP154EsHtZ5Fl/i4xnrN+1ucqKj4OEBFCIr+fLnGwu4uRyOHDexiUmOq8KhxY7/BNrkI3Xm1ZrBlUS41VO1Znt8QEYx3WeObzXbCW05MT+vUaI1r44VxDMEYtbZ3H+ibvIx2uafRnyeOSxznd+ViTNUzLu5vyvtc9vW8anPUkIpv1CvLvxWwTvGageWwJG0MQme2ez8CgdFn82rdrkW2RzDjpXiLTWiRqhiBu27S07QRBOD4+UjWUDGdyYtpo4NpZVR/RwH4uBsoxHU0nJLx37O3tY6zh8OiInK8ilYR3lu20bksKhUwKNl7Jo7bRuEJO1GxDEkb3qxhmsykGw3J5qgVAMLpvQHIBF2jBSJ7+jNF9Yznedv6UTMawZ4kB1mjirbzjcvW0xgw0/lB+F1PU5jJn16xatJXVIEQGW2ZrJPcPp4mEpBXMw27amGwB6zQmkbbJncZ7JfygpHMz9INsu2Gy/Q6i94/gXLl2U0rBdf9c1qKFLGXGzyavR3O8pbxFxvn9lGMCMiTz9T6TqHWIyFYmvQTwtU8WafRCBTAogcnjTchJlUiIPS6vYax16tWbn4E1W8sWOyJ76PrDaBGPKYUyLicUxgqA2sfPv0M/uKYpMT/OqP+VY6RUFGtzPDON9h8oKSmZbX/PYapytvyz0r6lP5aG1kTasK7LdjOqjLJd5+UnmIkTOVGjnRYjWSk6t7M4JbQ567FOiTHGaqGktqfPsRaLM/nfVpNxZW7TqWGrrCNFz7l0giEZVgqMct/IiUeEnFyKgzVziWWa0mdG/bCi4pMGI4m4fJ/UHBBnNzB+wqTdJaWO0HVaWOpU8UFttcu8LRpfR6UGJHmMiYiJeGuI1rCKWiT3QWjuK1myMrwQaGj8AWsWTOKaqb9PSJZ1msNkl5lb0feOmFZMNmuC6TDOMEkrgjTIdIKJHda2NO0u681DTGiywtASkrCzd4Xnnz/gv/3HX+Arr/0j7j9Y8Pj+d3jp068R00aLnK0qmv/V9/8PfvSj7zFtGqat5dkbu1yYBu7e9Uyd49ALk8ayWjokNjij6uEhODYhseoTu+2aEIqqqJItPGYoKCxLBWuBGHIuSBPKSSwmNVibsHaDpEaLZsUBEWsFb2DVC4cLS9f3tHbDJjpSBDecwmgBAAAfOUlEQVSfQ+8xZgHJsAkO6dfMZjeQdUdLIFmNaRtpCDEXVCMDmU/JKdvCiZQlGSyaWCnrwpIzKypUZLV2seCk5+K+58FhwER7JjOv77BEEsMmwYN7hxwvAtEYRPywHkGEZB0tkYCltYnbhz2x79lvJ1w68KSSiMdkWzvDZtNzuu442PPQa96nmYARfR6t03XNetMhxj
BpDSZp3iZGgzGOxWoD1jKZTQjB4mxC8HhvMMYznzbcOerZnybaVkgdzKc7zNseKz3rTvuCAJMm8eBhYuM8O80EcQlrPOvouXPcc4tdjpeCd4lJk+jXCSuqxpOGdUNRpypKY0XNvpA/FQNpMb+DTcoWcUbXkCkJySVVn0/CJmhu0biYSa+w2qja16JPvPveA+4frvnB22/xj/7RP+DTX3hWrePyO9XahgR4t8OFCzv8xhdv8p9Xa06OFmxWpzStsFk2zPwKaxLLjaNParnYi0NspwQSUVWTIdecr7+8h1MeNybfpPbN0qdk9Pf8k4+I/XxUMcv2eGdjHGWNmVLRei/fR5UEnWXiDev8zE/XwsGuQ+4krIPWW0LfcxKE5UboO83TPnWxYb2BVddjW2F/2jBxicYkbl5LXNzd4Z0HkfuPlzRWOMby7CXL/gyWnaExnpN1wO2q+ksviYkx7M16+jjjZBmGfVQUXbcW8vvfJn4liB/GGJq2zZsLHZQhdFqxYYTQd1igS+p12bQtxrlslaD2Fn0fENQqIaSIsxbvy2AwtN6BzxsKUuYmiC5eraHrdIA3jRJAGHdYY5CsVOGbRhfaVs8vAWLQCVPyZjImVWWIsR8qMCRX4iOBPstgtpMZ1mlCNfU5QG+cJilMWVzrC1X8ZAjk6uZatGomFTFS9ZpL2e5BB3jmY4dSTeyIQa0hnG/yBj1vAJxRMkJKgxxjUaxIZEZWMjijGxBns4WDUykryRuFFNSnM0XBSWEqlo2Mbi6cNWVfoWoeaZvADqlXOR1T7CQMYiJNsefQXUcmDHgQnYQNiZSrh2zePJpsUxFCtrWQrGqAwYgZFCRAVS5i0MoMl9VTnM3qBdaQBPpuo3JheLrNJm+8bfbSbDRBHwMgxJQVMWwmMmVrD4jKoI09Bk0S9J2Sk4b7y9dkcgCnj4nQa1JvNlWpWd+0pJxkkCxnpd5rapHjnMM17TaeklSqU6/Xk0TvSUklgjNCt+mUSGSUwJEISioCJRYM3qeJEJQVVwJaFu1f2iYOjM9yUlptVOTZtC+U8VeGh2TptpTHXk6+2Uz0MJ7Yd4AudjQgIHl/rX3E+0xSEKdEsnKmpIouMTeryfON8R7yxrv0F2+0hiYaMKgykKREyhN+tOo3KH0Ap4SdlJR1m/I48oJaTwFiC/lDF61Ym0fq1nu568NA0rLG0E4mYAxdr/NSCh2h3yDWIsZjSUqCghykyBYNOsnkRGPUChItNSEZweIzvzSpAozVwAsiWY1GA5hiRAkgeS4rG3ljdOGnnrSCdQ3KXk9DUDImNJjnNLBgIa9uNaiiwc6RmlAlfXyiYa0d1ApKVXyplC8qGiU5PbaoKMSHompRlAhKItYYMyTyy7GLQkI5RyGUlGR8+XkhGJREbjlO13VsNhs2mw3T6ZSdnZ3hMycnJ4NCwMHBAScnJwOJYUw+KYn6opwBcHh4yJ07d5hOp+zu7nL9+vXhvo6PjwcLmp2dHSaTCYeHh0NivaiVTKdTvPfM5/OBCHB0dDQQHM7bvozbqgQXz1c3nl/Yp5R4+PAhTdOwt7c3JNXL7xaLxZDgfpJlxZMsTMrfxyonT6rKHyfBzyeln7QBMcbQdR3Hx8dcvnz5jOLFWL57bCU0Ps/ZhMv254U0UJLehZRRfleUT8YqLoW8UY5TjlushwoJ4PLly0Nf7LpuICGUPuS9H57nnTt32N/fH+6jnLuMjWK5sru7yzPPPDP094Jxwr9tW/b29gaS02w2G4gpq9VqILQAg5LJmMRRzj2ZTGjbdvi8zzaMs9lseEZljJdnUcYTqJpP13VDmxbVkL29PV544QU2mw1Xrlxhf3+f6XTK6enpQG7puo7FYsF8PieEMFjClPMW8kYhrJR+Wtq32OyAklB2dnYG4lchbpS5oqj9lPueTqfDPFGe83w+x1o7EIrKZwuRKqXE3t4en/3sZ7l48SLPPfccs9mMGOOgKnRe6ePD+vr5sfJhG84njZEnffY8SetMkugjvvdxwROrfv8O8GFBkjH55OPcrh+G832zJnt+dWGN48qlK1y+dBlnLJvlktCv8VZVDk3eWzfthKbxWG8hJ0Y0+4sWKPgGZ9QSFYmDmmvTTtjZ2WOz6VidnhC6NSIhq4lGRKIWPWS5aiuSVTUjBFFLzJyUJqkKovIB9H1pBoK7Fng0bYMxlhADMXXbuTEndosSiWRL3aIUIDkupZHd/B0RUki5yEnOzFNDLErGeyNFScxr4hWMcUwmLd55uk1P36+1+nJI3OreC1tsQVXdNO+s9T6FgTBgUFXOvYM9SMLx0bEWEJmyhrADSUKt6MfkYUPTeFXC1SYhBMkJAr1uLdRQQo1vWpp2yuL0WPeJOYtepMtTtnrROBE45wlRA9kaEygFVvp5MdskkMnBWciJl3LNhmHNVwgmWeQUY8hFTgnnDJKJIsY6hJCvvbyrczr7zPvFDOSJlK8xJclxJrRAIhfWxEI8yeouFlUTLPHBgTZi9HkYEq7EQcaJI0rf2947poTu9S/la1tSQVGO0b41Wnnk/y1qF3odGnMzkBU8AU10GI1zYt2W7CCoirJRBV6XuSkq9J0oER/nLMY0Ggc1YysXj3dWiR+jPYSzbogRiRR59ZTjTzL8nBxT2CYvLGYgZRnNohkolfbF6uSsxLk+V4sdknRlLW9z34xSEnrnkiR6IcPphqFrCpkjk3xz7C9JyFXfUe2BYo6fFBUR0bGLqBozxpCwWIGEKne4HBczqBLwlviUCV6a7dECqPIsh3GyHTBb2ofOLyZByvEZkwxFDYYsAJNM1CRvjglpwZrefyq9Kuk954FIRcUnEgJh+jRWDGayz3TS0m8WSITWZzWkXAhoKKSrhC1zKEpIJZMZBMHZiFgH1rLc5LzImT2DVqmbYvVgPM4q2TaxjxhP25/SeME2K9bMaWzgeNOTxHAqa9icYBpH5wRx+yCgOvqRKJ6p22EdDoEJ3kSuPnWLL3zhFv/ga5/C2Bn/8//yDQ4uvsJv/eZV7j/8Pu+/eYfnXniVN27/CQeTCe284f/zv36D4/cf4FqPs5Yrlyc4N2Eyc7RmwXTXcRoXhBCRaJT84jzYCQHHfGaRbFVPCngrWSnAqNq/lIJUg2Tl8YTGqkvy29mJFifabhCX9s7QOLVmP91sWHVCJNLHKZvg8WnJqtOK/400iIPl6pR5EDatMJ0Z9huLLKFPgUikkwZEVeUxaUi8D0RJtELS5PeINds1nyqS67yeBkV93UNd2ne88pTlG0dqUQMgWcHCGAOugRjYmzqevXSJmBLTacN0us96tcQ5Jb5iwSZHIGKx/O/f+Qmnp5ZXnr/Eczf3aayhsR5wpNTnY3v+w3fu0mD49S/M2JkYQnLE2CG5aNY1DQ8Pj7h4cAD09JsIqSdGYdIKk8klOvYR16qFvNHC1ZQ8ziaa/QMOvvDfsHp4h8uXr9DLhvV7b7B8901MglUIqoJjPX0XuXEl8K23ljQ7if3ZnKuXDO89avjz14/YrCJHy4693SlOYN4EelZ0vWcVAynYgWwTUxgNp3NkB/KrLr/TPxBLK
DlZ0SVPjJZETzIGb4Q+RZwpZcURkxLORC5cnPLlL32eF17+ChI7knM4AzF0mEz+WK0eQNdx7cZT/MOvO9786R7vvnWXxYP7LELkdOGJaUOfDMEk2tQQ8jrYWrU9YpgrTLZRyiSjZIZ1yZYUIhhGe4t0NobxpHjSh/1uiCmX/y37Gc7SSSTnmko/L309pUQ0iVljEduwjhuOTzr2dgrxFyYNxGCIyRE3a5brjv1Z4sJO4id3VHVwvQ6sneXSxLB70LBcwQ/eXCMpMvWOPgGy4e2HnucuJNa945nr1zhcR2xj6OOUo9NjmsbQ9xO6lGisIYohkBAbsMme2QcIT1Jm+pvjV4P4YS2umWBGPpLOOiUChEBM0CdBUsK3Tpeq2avL5MR4LjrXBH8SmkmD841uKKPKQTm7TRh77/KEq9cgJGxjEaxKS+YNqDU6iYcYaNoJWs1uBwJBEiENfqkQJYCoh+tmvSaGXhnbzlPeTH0IzCZTkgh9p8l452zeWCgvLRRbF3QhXWylUhJC1+eACMr89pMsW63f1vdJ8YJ0xDzrRVHmmwUkRLZ7AIOy08sTUZnDMfnAGEOTg/2GoJNk9ny1zunmXgq7Xdu4L6oe2YKDFHN7J/VLFcmLI6PHSnmqSJBsIRNkaSXRTbr6cGXLC4lIDASz3VCrlUpWF6FUv6riQOp7fUYxqlSZVUY8Se8F67TNRRn5fa8TMwacV4Y9+ZqVEJMwRmisklJiUnUMSQnr9HOh7weP0BBK1XEgxUQfAl3X5/5rMVby+dVTV6uSJAe2cuLMWNqJow8qiaRy956m1f5jslKFb1raPKljlLxErtYhRbx3SFKSjKRICh3GkC1sLNYFXHS5QiFmsoe2Y9lEi+imsSjDxKCKJ2L0PgyWKKh3kjEqCWpGDGivibMkOUmpA3oIQiA5IWF0c5/y7ejiyQ7SpIiQYhiqRxIWbxxJJAd2RjtyY1XJxRjE+cEPWDe7KiVm0faKUQMX+m+1JBKBYMGKMq7LJlwVOzTI5JoGS5ZoRT2pRXTOSFlNQ8eeYK3PvtIaIEtJbZ/mTYPzBlLSn/eRYDRAZZ1GrUSCBiViIMWAdVrl4b3DihDz/KESxlnut490Ijo3tv7MCyyNErRb8o3eqwGMGLW8yQQWi8le1plAlwNNKhsLhcySo07Dz3WhYEbzTcUnESVxDdqnxsSPsfRuSWiXv5ffO+cG1YbzleilIh+0PxVVhfExzysdlCTyOPlfPj+uri+WFCWhXpQDAHZ3dwf1gpLU32w2g5JD+c7YamSz2QzKIpcvXx4IKIVMMZ/P2d/fJ4QwkEhKUr5YyxQiQkmYL5dLdnd3mc1U6ntsSQIMhIXSJh9GphiTIIp6Q1FlKe3T9z3L5XKw4SnPsRzrPM6rfTypX5z/+xBoHZFHzjDGRc58Zlu1t62oK/c97nfjazqfhD2fiB0fZ/z5QgQYJ+yLtU75fSGCFDLTtnrRnlFdGV9b+W4hERW7kKLKMVZCKccr9jaFiCEiAwmj3FexKSnXMu6fRT1kfK/jvlOsccZKJ+PrLiocTdMMhIkytsr1lDFWgvtj4kxp20LQKOSKF198cWiTYntUrhkYyBtlDOzs7AznKW1cxuy4/5f55nw/G4+Pcm9jYlDBmFRTyCJt256x9SnjuZCiypidTCZnbHrKMxz3hfEYGPfp833zSZ8f99sPI4ec7+fjsX9+Hjh/zI8TnjTmz2N8T38XJIzzc9hHXc8vC39bz+1JgZ/xu+PjThiq+Jtj0ja88MxNUogsFksk9bRNXptIj7U52dt4LTzJgfsoEUJW4kILGDRhSo4TwM7uLk3bsDg5oes3GieSsyTSLTHDaKyl7EUNgy2w0U1aTnQUdc2SlHfZrrOhbSfE2A9V7cbaIYk+ZKQzhjEn6Uyk0xjdqxVrjPE7ejxexuuYQk6AmBPQFEoAxphMyE0slyuNa+UEb1FmKEkF2Kaoi5+3JohHxBPR99vu/h4xJo5PjolBlSMtZAvcrTqGMZCMlhtb67HNWevCUrCElGeihQU2x1Xm8zmr1UrjKtoQA9mhBKBLu6j9DcQ+bOdRKXLV+fPofdkcHxqOmYM2xc5HFWv1m0O1oSlHUjsZA9kiVq9b0EKSFMv7uFxbWcNs13sjPY3ts/WNkh4SqgySA/yqimoHv3gtolCrXpPX3mL0mVrRdVikpxB1Bm2J8owHlgeUeF15tmbURwdqRw7Kkys+y+4/dx6UkpITHNg8jEqfEuVRlHNLysfUa/DekYIj5dT/dlxtCR26R7LZiqQZ1mxjQnX5L9aMyOuF/LFdp1s7Ilh9AKW9ynM597sRwSOJ6oJYm1VkRWPIlKStMWDlzBgW2d7flgHDMIa3TZ8JR2Kz6ktZ92tVfx7C+ZpyO+WEoM3sIZOfi8MMli1FiUby3iLmNXYIEe8iwTncKD7mJCcdczyQHIdVBdYEqVgTZKVN0XaIkpUJBKJRnxqddpMqgERV2daq4zKGc9yyBngqPqEQY+mTYW//Et43xLDGo3H+QrTDGiy5uNZGNPckWCv6Ls7HMuJJUd/33gEmYHAcrcjkqZzcMoAEHasYjEQCOk+4tKALRwR/CZscO+6EfR6z0zjWm4iyH/ZJZkkMLW1zGSMJI54urGmmF7FsoJ3T+gXORm7deolXXrnO1WsX+aNv/hWh2+PpF57mwcNvsVx9hffvvsm3v/kd/rurLT/90V/wzk/u8vv/4//AK5++yjffv0+32CApcXp0St+p1QcGdnd28TYxnzTE1CEScbIh9B1t37NjHZveMrcTxEpWxm+IBEwSLQQNkYhhlVZYt4PJBOAkAtZjUwTjskpXABZMnUWCWq9NzSHTXcPudIfD1Qm4CTO34kvPRF66HrHimE4TM7/h4i7M2zukfqk5JzullwlHy8Sj0yV3TlrunhiOV3N+fEcIydF4QR9pGuZTIRBFhvyC9gtDKssUzEAAPV5G3ro3Z94EBEtKhp78fjaCiNq0hOjZ9AbfGvxkAs7RB3J+1GW2sChpRALLlSFJA2Jpcnq5S5pzKrk3KwnbNJysAkkaVLXG0DSznA9B/9fuYsyE0JOLtyf4xuGsYdq0dMERN0p8Saasm7UYXaLnxec/C8+/AkkIJB6cLjj66U+VBOqm9KIqGlHUXWGyM8e2sEgNt5xjNkn09Jx22q5dWKvSmgvE1S6N27AOTVbX0jV1LhdGSaO54Njk4lYpa+VtTMUYkwtfM5FzWEfo+4+oBc7OJFUHMRDUd4Vp2/HyK0/zla++xtWrL/H6X30TiT2fe+2/ow9rIi19f0Rcd9im5Y0fv8Xy6BFHp6eErufK3pTL88ss6bj/sGexPMaxYb1pCN0mK647vCT6YW28tRYq7+PyFi6Kdflfw9poeFuXex9+kleIQ85mu8YxpXj/CeTOYTVc4odWLXb0d4mQ+5OV7ef6EGl9w3wCXTDcOdrw2WsNjcuELevoYwcowWa97rl5ccIP3zSc9hsuNC2+jRgLB7sNX3j5Gi89s8/R4xPuHa94+96ak2XPpUv7dEs4Xm/Ynzk+fesiq+UJESVR+cnT3L4v
fO/tByw2gcfLyCbaTIT22W1Adwidsci8pf1bYGmYj1sA7heFMeZh2zSXbly/Vv49bDzGC+uy8dLfybBlYeis5BX0dsDmvQzDr8rm0pbBnzfAhlx1XzbawxeGzaTZ/mN09WVDPjr+6FypDMZzwb9xEuZMW5QjDO96sz2daJuUAT3+trF2uObzAe5x0rsEAkYXefZn+V4LGSPvEvX+cxs8sT+WwMt4shiuAYryxtkN2s8bJNw+69Fe6mxb5Q4yCndz5lmIDKcb94ktqaVs2MbBnPGtbY935gBsgwHjx7mthpAznytMwtKHx5c/vq/z918u35xL3qXMFB4S7MO9by1MzrTHaA1JDg6Uflo+V74nkjeVH+ghZ54wQ0881z8+NAi8HSzDfWybSs4ex+hmVeMU5ytkRwPjzOHNmXsr55DRPZYg1vnvSFEcyt8tfWf0pEYtML6N7Vyxba2z//pAE47Pw2isi3B2FG3bqRxt3AfOHXT46wc33eZ804/69uhy89/NuZ44tIJsP34e47lrO2+cPe4wpwDv37uP957FcvXzTgYVHxMYYx5aay8VtY7Rzz8Q9B7/PI2Ca+Mk7hBkzWN5bJkwPk5B+UwJSI+JCkUNpBz7PDHifKJ4bJ9R7GmKUsk4MWyMGUga5TvT6XT4rIicsWgpyeKiklAII+W75RpK8npsdVHOXe7xZyUXn5RIHhMQCslERD5AJCnXWY41TrA/4bk/8Ro+LGl9/nvn10oflqx90n2NiRlPOu6TzvnEddgTkjJPuqdxnzpPODh/3kJQGCenP4qoNB4LhVxQ+gxwRl3i/D2MzzHuS+eJHE96JuNk9Zi0Un43to4pZJjxmBpbNY3JQ2Pix3Z9vm3bQiAp/x63Q7mO8X0VEsiTSA/DWmvUruN+cf5Y4++UcfWk9nhSe4/tbFJKgyrK+FmU5NiY+PFR+JsSFH6e7/yy9pXnz/XzXvvfN0ng7/v8nyR82Hz9UXj99ddZr9ePROTy39Z1VfzdwRjzsHH20pX9rQLUdg+93XMZyNaRUGIwZe9xdl9eDmyzaodaj5YYz5lzj/7nzN7QbM+zPeaT1iFm+GwpNCh7wTM7nQ9MGePgzPnP5H1s2f88ceP0gU3hE/9ZdtnFjkYtW85+8AOnP3OOcztYgVLR4vIxt2175iujHb7uqSXv7cZxOmueZAt2NtZT1qhlHWkY71ENH3ymOSkiss1Tby/qzP72A89ye5OMvvCBHTFPPH++sg+ZxsZ78DMxPxjFCtAiqO0FDkce77k/ECsw53fdT7jOD3tt/ayuOnzsCTf2ga73xMba/txsf1L+M/SHQowY4rBKVDhzb8NcsCVqlfjHMGLzZKF9Kn/+SXvGD7RJoXt8RGOUyz43hs4F8bZfNqMfyRNbcHuUIfanbXCW1LW13jxj91fGzhNiTduLN2d7aW63Qlax1mFc+ftWdXH4c/7+zpzm/PwwBHE+2Me3N3qmnwxppDJHoPGdrg/1HV/xiYIx5iGYS5PpTNWhklDSpyDnppsyV/2Mg46mGx1ehpDGv3jCdQC+sVkNXACf39clSS3MmkSMQh8TWENKqj5ubIlxGRBV9p5NfVYkkhxzKiqZAlndOyahccLe/g7r1QaRxHxnh77fsDhZs7s3B2t4/PA4FyPndUOZ//I8ZwyjnAVDbNpamHgt7t70Og+JGFV3zx+SUUOl1FPWbyV2vn2XnH2n21z0PZu19JsN3sG0NWy6CGIJIlzbF9pWC167XphNimuAHtvmNjOqgTUcXzLp9/aDxCZostoa/a87V4/wwb5QSCA5j5RjBMpnLW8zw2ojLLryFtBnfG1fz2OLTSK5sHm473EXE04WHWCYtI5J4564UDAYTtaBJLA7sSq4J+U9uH23xphwrqz90pl3t7MQZUuw0G9u1wDGWuYHl3JjaNF22CzoVwsQUSWsfK6yfF4FGfrM1Bu6KDw+6ZlNPF0X9ToRrEmE5MEkYjKcCwl+AD9/JGSbN9qu5RTWRHVaMOo0MW0dvm2YTj0SDZuuB6NFczt7F+k2C7yfY0isVic0kymb1YrVUu0kVYxdMCbRBYaiYJFEitkqTozaDkZVvBkv287c0xP+8TPv+UOX5h/9/Q+upLd/P/+9871T+40WQDfOsDd1PDrth71jiDq/Njaytzcj9pGTlZJbL+9PWHc981nLYtExaQyzWctyuaH1luOlxuDa1uOMYRPi0JUN2l+njcZZu144WfWEVHKjZigeKGlxnWOE01WHNdCF9LNm+F8IvwrEj58C+8Cbf8+XUlFRUVHxd4fngWMReeHv+0Iqfrmo7/WKioqKioqKJ+B56trvE4O63quoqKioGOF56ju+4hOGutapqKioqOBvaY3ziSd+VFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFR8UvHRpsUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRX/1aISPyoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqPqaoxI+KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKio8pKvGjoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiouJjikr8qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKj4mKISPyoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqPqaoxI+KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKio8pKvGjoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiouJjikr8qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKj4mKISPyoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqPqaoxI+KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKio8pKvGjoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiouJjikr8qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKj4mKISPyoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqPqaoxI+KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKio8pKvGjoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKiouJjikr8qKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKj4mKISPyoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqPqaoxI+KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKio8p/v+lL/gzonZ19AAAAABJRU5ErkJggg==", - "text/plain": [ - "<Figure size 1152x360 with 8 Axes>" - ] - }, - "metadata": { - "image/png": { - "height": 351, - "width": 1087 - } - }, - "output_type": 
"display_data" - } - ], - "source": [ - "original_images = []\n", - "images = []\n", - "texts = []\n", - "plt.figure(figsize=(16, 5))\n", - "\n", - "for filename in [filename for filename in os.listdir(skimage.data_dir) if filename.endswith(\".png\") or filename.endswith(\".jpg\")]:\n", - " name = os.path.splitext(filename)[0]\n", - " if name not in descriptions:\n", - " continue\n", - "\n", - " image = Image.open(os.path.join(skimage.data_dir, filename)).convert(\"RGB\")\n", - " \n", - " plt.subplot(2, 4, len(images) + 1)\n", - " plt.imshow(image)\n", - " plt.title(f\"{filename}\\n{descriptions[name]}\")\n", - " plt.xticks([])\n", - " plt.yticks([])\n", - "\n", - " original_images.append(image)\n", - " images.append(preprocess(image))\n", - " texts.append(descriptions[name])\n", - "\n", - "plt.tight_layout()\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "WEVKsji6WOIX" - }, - "source": [ - "## Building features\n", - "\n", - "We normalize the images, tokenize each text input, and run the forward pass of the model to get the image and text features." - ] - }, - { - "cell_type": "code", - "execution_count": 12, - "metadata": { - "id": "HBgCanxi8JKw" - }, - "outputs": [], - "source": [ - "image_input = torch.tensor(np.stack(images))\n", - "text_tokens = tokenizer.tokenize([\"This is \" + desc for desc in texts])" - ] - }, - { - "cell_type": "code", - "execution_count": 13, - "metadata": { - "id": "ZN9I0nIBZ_vW" - }, - "outputs": [], - "source": [ - "with torch.no_grad():\n", - " image_features = model.encode_image(image_input).float()\n", - " text_features = model.encode_text(text_tokens).float()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "cuxm2Gt4Wvzt" - }, - "source": [ - "## Calculating cosine similarity\n", - "\n", - "We normalize the features and calculate the dot product of each pair." 
-  {
-   "cell_type": "markdown",
-   "metadata": {
-    "id": "cuxm2Gt4Wvzt"
-   },
-   "source": [
-    "## Calculating cosine similarity\n",
-    "\n",
-    "We normalize the features and calculate the dot product of each pair."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 14,
-   "metadata": {
-    "id": "yKAxkQR7bf3A"
-   },
-   "outputs": [],
-   "source": [
-    "image_features /= image_features.norm(dim=-1, keepdim=True)\n",
-    "text_features /= text_features.norm(dim=-1, keepdim=True)\n",
-    "similarity = text_features.cpu().numpy() @ image_features.cpu().numpy().T"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 15,
-   "metadata": {
-    "colab": {
-     "base_uri": "https://localhost:8080/",
-     "height": 831
-    },
-    "id": "C5zvMxh8cU6m",
-    "outputId": "22bca748-ab42-4888-c9da-8f22c21c6185"
-   },
-   "outputs": [
-    {
-     "data": {
-      "text/plain": [
-       "Text(0.5, 1.0, 'Cosine similarity between text and image features')"
-      ]
-     },
-     "execution_count": 15,
-     "metadata": {},
-     "output_type": "execute_result"
-    },
-    {
-     "data": {
-      "image/png": "<base64-encoded PNG elided: heatmap titled 'Cosine similarity between text and image features'>",
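As a rough guide to what the elided heatmap shows, a figure like it can be rendered from the `similarity`, `texts`, and `original_images` variables of the cells above along these lines; the figure size, color limits, and thumbnail placement are guesses, not the notebook's exact plotting code.

```python
# Illustrative only: render the text-image cosine similarity matrix as a
# heatmap, with captions on the y-axis and image thumbnails above each column.
import matplotlib.pyplot as plt

count = len(texts)  # similarity has shape (count, count): texts x images
plt.figure(figsize=(20, 14))
plt.imshow(similarity, vmin=0.1, vmax=0.3)
plt.yticks(range(count), texts, fontsize=18)
plt.xticks([])
# place each thumbnail above its column of the matrix
for i, image in enumerate(original_images):
    plt.imshow(image, extent=(i - 0.5, i + 0.5, -1.6, -0.6), origin="lower")
# annotate every cell with its similarity value
for x in range(similarity.shape[1]):
    for y in range(similarity.shape[0]):
        plt.text(x, y, f"{similarity[y, x]:.2f}", ha="center", va="center", size=12)
plt.xlim([-0.5, count - 0.5])
plt.ylim([count + 0.5, -2])
plt.title("Cosine similarity between text and image features", size=20)
```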
Pb7HcB4jXi53ttu88Htv3tJZ/116nJV4LfVO9PzZ7PyeBf7nF8r4AePcW8w3r47K3h/36lcBvEgPy2x0na8AvAU/Y5z67A/hjtm7v3g586X728Sbv6d8Sr8G2ei8D4P9SXyOcx+PipTvsw80er9phmc8B/qj+bLdaxofqdSe73M4c+Hxi+/fgDts3Bn6HTa6DdzjWd/s467M9h8/+1qllv3ybaV8wNe1L69cPAP8TOLXJtv7ENsu7IL+zOM+/V/U4f49LvgF66KGHHnrooYceeuihhx566HGlPYAZzr5xfOtFWvedxJ4bu71ptQZ8zS6X/R/3cVPMgQPbLHMvNxA3TEus3PC/drkNDwF37mE/zhMDAHt5n78JdC/Q8fSafez3d2yzzBdMTfvSvUxL7NG3XVJL81gFXtRa1rXsLnEiAP94F/tmL+/j1qlpX77Dst+wj31+ov1+d1j+q9rz1q/9a7a/8fqMvXx/OPekg1vZeMP2d/Zx/LY/71NA/zx9L+5pLfcuYnvw+7t8fw8Dn7CHdRnwQ2wdENzs8R62CEZvctzu9vHS1jJe0Xq9BBZ3eA9v3mR5n7HDPD/VmnYEzOww/TcTzym7fT/3A0/bw+dwwdrlqXlexQU8v+xiW166z+PjBdss84vZPBi01eM08MJtljdLrFrRnuerdvHePofYvjfzPAAcbv37rft87y8/T/v+UrT7X872iUztx7/aw3u5ma0TsKYff00MGp61fefzwTkkHRB79//6Lt/Pz0wt60d3Od+fsEMyVr283bYN7cca8NV73F9fzfZB8/bjFXvdx631PI+dg+jtx3Cv72WH9b90H/vzVVssKycmF+5lWXexzW+V1rJ/aB/bWQLfuYdjfdfbfC7fr6n5bp1a9su3mfYFU9O+lFgVY7v38BObLOdCns/P++9VPc7fQ8MriIiIiIiIiOzdc9k4ZOG97n7PhV5pPZTC64il2NvuJgbAhsQhHz6RGESD2LPpf5vZrLu/cptlfxPwfVMvj4B3EANHQ2IJ1muApxDLjF5oP03shQoxGPi3xABGDjwNeHxr2uuA3zKzZ/oOZW7rUpx/TryJ1nYCeBuxV9tsvY7bWv/+FcCimX2en4fS4S2vJAZq2k4Rg7mPEm9oLhDLuz6JGCy7kA4TKyncWD9/GPh7YtDkZuCTWT/+Z4DfrEv1DojJE09vvYe3EnsmXk3s4d+r/82AV5rZ3/g5lr89B8nU80eJPZ5PEo/3A8Qkn/YxcAj4YzP7ZN9jmX0z+yrgv7Re+mi9vjViudlP2svyzgd3v8fM/hx4cf3SF5rZEXc/tpv56yEOnt566Vf9wpVm/mngJfXfTvyufgzo1ttwa2vaa4E/N7Pnu/uHt1toXTr7V1hvaxqDeh0P1c8fT2wzmrb1KcCbzOw57v7IPt7PTl4HfGf9dwp8BvCHm01oZgvAszf5p88EXr/NOj6z9fe2paLN7D9y9jmiIPZQvJ/YTt1Sb0fTPtwIvNHMPtXd37PNdlyKdvmCnF8uBTP7Z8QeqO02LRCHs7mH2J5dT2y7u/W/LwJ/Ymaf7+6vnV6mu6+a2VcCb2G93X6lmf2du390i+24FvjfrH9HKuAf+eU1RMXFbvdfQKyW0XwnPkzsxb/K2edTgB+v9/Ebd1juEWLFnMdP/dODxOu2FeL38ZOI7/l5xID+o3vZ/ovsvwNfVf89Ih57DxGvPT+FeG3S+GYze7O7/4qZfT/w3fXrFfB3xKECcuL7v6E13+cSKz+8fIdtmT5OTgLvJbZDa8SA6h3EIX7a19v/18xOu/uf7PRmzeyLid+XdOqf3kU8Tow4fM3T6te/08zu3Wm5m6znC4nB3N7UP32AmLSyQry2/2TivobYTvxfM8vc/X/vdZ0XSj1s16uBz5r6p2Xi5/4ocdufREyebXwG8Hoz+5TtznOc/bkvE39fHSXupxnid+5O1j+3FHiFma26+8/u+U1d/q4iXnvcVD9fJl7bHyNW13na9AwX8nx+mf5elbZLnfWghx566KGHHnrooYceeuihhx5X2oN4w7Ldc+K3L8I6+5xdAvojwGduMu3jiL25pnstPX2LZafEcTWbacfEMpRzW0xvxMSGHyIGpA9ss9079lraYtqmnG0F/PBm6yCWQD01Nd8/32Edtsm+eTfwD6jLP09N/+nEG83t6b/3PH6uT51a9qPAlwHpFtN3gM8mJiq8aZvlvmBquS/dw7TNPn2QWH7dpqa/jXhztz3PfyYGvZzYg/ZlTJUxJY61+6dT8/3mDvtnL+/j1qlpX77Dst9I7P35z9mmfHr9Gf3+JsfMWcfL1Hyvmpqn6en6FuCTNpn+MFPl6KfmP+v7QwyQ3Vo/HmhN++bW65s9stYypkvsf9cejt/p3obPOI/fjXs2OSadeCP79k2mfzFn98T76118Tv9map6TxB79Z5XhJh770z33XjO9DmJwp9nXvz01/Xafy1xrGXPEtriZ779v8x6+cGodk/e/zTzXTU3777aZ9h9OTbtGLNO9uMm01wC/ODX9uzbbn615Lni7PDXtBTm/7OHYnmt95u0KFQ/scHxsdkx+Khurp5TE5KZrNpl2gVhivl2J4CHgqm229Zun9sHfsUlPcWKw7i+mpv2BTabLWu/nJ6am/9Rt3vtZn9E+9/3FbvebY+0NbD4c0U3EBKP2PLsZTmW6IkBz7ZBMTXc98But6U625zsf+3Rqffe0ln/XHqZtvmuhPn4Xp6btcPawHQ/Ux0xTreeXgGun5jPg26eO+QFwaIdt+z/Edus72GbYBOI54eentusoMLvD8o9w9rANdwFP3mTaO4nJY068ll/Zwz5+AmdX2fhF4LZNpu0SE91GrWlXgCeeh+Oi3ebdOrU9v83m3/mz2iXi0D7tee8jVos4q2w+MRnxr6am33aYGGIP+o8B31/Pv+n3nZjY+J/ZWB1pANy8xfQ31u9p+lz6XVu891uZOpb3+v2amm96n798m2lfMDVtc/ycIZ4POlPTp7TaUi7g+ZwL+HtVj/P3uOQboIceeuihhx566KGHHnrooYceV9qDjSWpHfhPF2Gd3z+1zg8DV28zfULs2dSeZ9MAFLF3U3u679/DdnXYZqzUqeW+aodl+dQjAF+5wzwvnprnrTtM/8+mpn8NO5SDJ/YIbZeQH7BNkGaPn+v3TG3Pp+5h3u2CeNM3DV+6h2m9vjl36zbzXMPG8bDPEG/8r7BN4JnYS+y+1nzD7W4C7vF93Do17ct32H+37PGz+h9Ty/+8HaZ/1Sb79XU7HW/n8P25pzXtXXtYR8bGMeDfv8v5ZqeOgb87H9+JLd5P8/gDtkjIqee5ibOHBfnH20z/FDYGDe7f7riv5zFicKu9ji3HYp8+Dva4D/66Ne97tpnuJ1rTtRN7CqYSWVrzfO3Ue9i07SEGx860pjvN7sawfvnU8v/FNtNe8HZ5k2PpvJ9f9nmc39Va/j17nLcz9T0ZA5+7i/leOvW+fmyH6aevJc5KgCH2PG1P81p2GEd9k2Nk2+/eedrft+xx+vPR7v8ekG8zT59Y+aY9z5bDeRB7bbenPQ
48aYft+rlNtssvwP5tH4937WHa5vHNO8zzG1PTn6r//x93mO8VU/PtlKC6afB4m+m/Y4/L/+mp6f90h2Okw+bDcO20j/9mavqX7eK9fBYbE5l+6wIcJ+1tetUu55luj9/Bzskjm+23p24z/Y3s0G5NTf9lU8v+0R2mf8HU9C/d437b9fdrar5bp9b78j1soxOv7Z+1y3VdsPM5F/D3qh7n7zFdLkREREREREREdjY9vMGZC7kyM8uJvfIaDnydux/dah6PZSlfxnppcIDnmdlmJbhvnnr+e7vdNncf+/kdaqDt5939N3dY/2uIN1UbzzKz2c2mNbOU9RK8EHsGfoXvUA7e3c+wHqCD2Iv5n+6w7bvV3vfH3f2vdjujuw/P0zZs5l/4NkOGuPujwP9tvbRATHT5Pnd/xzbzrRErIjS6xLLJF52777VM8b8mJmM0vmqrCbewRgyAX6jhB/bF3UtiEL3xJDN7/i5m/SpiienGL5zXDTvbSeAb3L3aagJ3v5+NbSXAv9hmmd/NemlzJ7YH92y3ER7vXn8LsZdt49u3m+cctMveP6UuX7+Z9jAJryImpUF8b5++i3lWiRU4NvOtbCxP/E/d/W1bTNv2g8QhCxqb7qNL2C6f1/PLJfI1xBL6je939z/daSZ3fxWxd3Hjm8xsZptZvonY+7fxbWb2kuaJmX0q8fNuHAO+9gJeG+zbJWj3jxIDi8U22zQAfnTq5c/YZpnfOvX8u9z9Aztsx7cRExsuZ6/2ncvT/9ep5weIwz/9u13M563nL9huYne/b4flTU//X4ll4xtbHidmNsfG4XyW2PkYGROThZZ3u01m9kLicHCNn3H3X9xpPnf/C+C/tV76EjOb/p1wKXxv6+8B8MXufnK7Ger99nXE66/Gt20z/QN7abfc/XeA3229tNf24Urx79z973ea6CKczy/X36vSoqQDERERERERkb07OPX8giYdAC8klsht/Km7v3mnmdx9mY1jyEO8qbOTI3vYtgtpetu38setvxM2GV+09mLi0BONH3P3pd2swN3fDfxl66Uv2OW27cWCmXV3nuyC+xi7u5H3uqnny+wu8Dw93yfsZqMuNXcfEXsjNj55j4v4zToofjn6eWLP78Y37WKel7X+XgN+9bxu0dn+504BBgB3/yPg7a2XnmNmT5iezswOEMsyN/7fbtrVeh1D4tASjRfuELTdr+nvyoumJzCzq4kVGyDesH8dG5MVPnN6nk2W9cbNAl5mZsRei4137xSob9TJGT/ZeukJZvbETSa9VO3y+T6/XArtBJsTxIoXu/U/Wn8fIJap31T9eXwVsZJC45fN7GYzOwz8Guvjmzvw9e7+MI8B56Hd/7k6oLaTP556vul5sU56eUnrpfuAX9lp4fX72O0xf6n8+C6meSsxSartv+8UTHT3h4B2YsaFuO74g9bfz64DsJt5MRsT9v6Xuz+y08Lr79T/2sP2tNuHko2JQTtpt90p8Ll7mPe8M7M72ZhA94s7JQg26gTtX2+99PnncdNg4+d+i5ldc56Xf6mtECul7MbFPp9fLr9XpUVJByIiIiIiIiLnznee5Jw8b+r5r+1h3l9j4/ZNLwvgg1PPf8jM+ntYx4XwYXffba+86R5+W92EeuHU89/edKqtvbH197POU4JAe993iOPJXmp/UQcMdzL9+bzJ3aeDAZv5yNTzy+qmoZnlZnaoDqjd2n6wMdjxRDPby721Pzy/W3r+1L1//6z10leY2fxW05vZk9nYlvzmbm8sn4NdBbtrvzH1fLNqGs8H8tbzc2kPMvYejNyNvyH26GyclXRQv2b13++pAyyvnfr3Dczs8WzsMfja6WlqdxKHUmn8zk4bPOWNU883q6BxKdrlC3F+uajMbAF4VuulV9e9enfrLcThNxrbVjdx978jDgfUOEi8vngVsSR548d2U23hcnMB2/1d7Qt3f2BqPVsdZ89mqt3a5fka4LfYmFx2OVklDiezrfq9fmzq5T/f5Tra1x77+h6bWWpmB83spk2Ok1Fr0lk2fi/anjv1fC/t6l6mfUHr77+qK1TtSl3poV0VZDfVjy6k83meuN7MbtvLzGaWmNmimd24yec+XX3pSXvctsvd63Z5bQ8X/nx+Of5elSnZzpOIiIiIiIiIyJRTU88XL/D6njX1fKsy2Gdx96NmdjfrPU+eYWbpVInydwPvIwaYAD4N+KCZvRL4HXd//z63+1zsZZ3TvQgXNp1q403TFWJH3lv3sJ52gKZDrD5x9x7m38zvAj9WLw/gu8zsM4g9z1+9m95vF8BOZZob00Hm6ZuBu51vq8/roqh76n4FsffbJwA37XLWhLjtp3c5/Tv2um0X2c+x3ptxllgF4JVbTDtdCeFCD62wBrx3D9P/7dTzZ3F2D9HpIMqJPbYH071Y9zLvrrj7yMz+mjjGNmxetaCdVNAkD/wlMdnMgKeb2VXufnyLeeDsigqN6X306B730XTAYLN5L0W7fCHOLxfbc9nYofDBPe43iO/tqvrvHed195+oS7Z/Uf3SdBLjm4Hv2+M2XBIXsd3f67HWDN+x1XH2iVPPp9u6Lbn7KTP7KHBW5ZfLwEfroX52o30NsbSH66T2fLv6HtdDIXwJ8MXE4+RxrCd57eQgGwP3jadPPX/7JtNsZTdD21BXlWknVty7j/bhFOvDt+x13vNt+lx0Zo/vZzox51a2OU/Uwe7PB76M+J17Amef87cyXQ3vSveOPUx7oc/nl+PvVZmipAMRERERERGRvZtOOjhwgdfXvnHonN1TfCcfZD3pICcmSUzKlLu7m9k/IwasmuD3TcB/AP6DmT0C/BWxB8pd7v6uPb+DvdvLkBXTZcHzTafa2OtsjnNPGDh0rstw9wfM7PvZWPb4OfUDM/sAsfffG4G/3Os4w/u0230/HSDY1XzuXsaq7RNbfV4XVN1b9TuJY0HP7XMxewk+HdvnOi6WPwIeYn0ol29ik6QDM+uwcTzq97v7jj1Uz9E9exyLd7qNvHqTaaZ7ob56b5t0lkPnOP9WXst60sGtZvY4d2/39G0nIvwFgLufMLN3As8gBshexMZKEe15TrJ1UGF6H/3Mnrb8bJvto0vRLl+I88vFNv3ZfB/nFvDf7fH7DcQg6fTY2qeBr95D4PiSuATt/n6Pta2Os+nS7but2NH4CJdn0sFe9lP7GNvvfDvGxczspcSk0Kt2mHQrWyU2HG79vbTL4TcAcPczZrbMxuEZNjPdPvzj+rFfF+r8tlvT72cviRqb2fL9mNnnAz/F/hMtLpfEtPNlL9euF/R8fpn+XpUpGl5BREREREREZO8emnp+xwVe34HW36t7DL7B2Tdlz+qF4+5/RSzFulkv92uBLwf+O/BOM7vbzF5uZheyN8+FKAF8vm+a7jdgsYG7/xgxwLvZePVPAl5GLGF9r5m91cxeZmYXsiPJfvf95Vq2+Sz1WPW/SEz2OJfPcdf31tx95RzWc8HVwcJfar30HDPbbPz6l7AxEepCVzmAs6tj7GS6zTuwyTSXZXuwiemhDyYJA2Z2C+sJZSXwhi3ma89jbCyBf
Nc255SLsY8uxedwxbRV27gkx6+7nwR+eJN/+s7djrF+qVyidv98H2sHpp6fa9t4ubisrjvM7AeBX2b/CQew9XHSro62vI/l7uYzv1LOb7t1Ud6PmX0jMQHx1nNY9mMt5rqXa9cL/jldhr9XZcpj7QsgIiIiIiIicjG8eer59PAHVyR3/xvgqcRyor/N5kFwiDfj/j3wUTP7wouzdefF+e6hutsSuzty918kBg//BbE0+nCLSZ9DDPK+3cwudLLLY9nXAy9tPXfgz4BvBT6F2HNqHsjc3ZoH8IMXe0Mvsl9gYxBnehiF6dfGnD1swZXism0PpryNjYHCz9zi77e6ezuAtWnSAfA0NiaNTCc1tF2MfXSlfA6Xm0uy3+phCX5gk3/62rqKwOVM7b7sqB7i6t9Nvfxm4DuATwduIx4nnanj5Bt2uYpx6+/9fI+nh63ZzGOtXb3g78fMnkCs5tP+t/cC30s8hz6eWMWgO/W5v3B6WR/HLspx93Hwe/WKpuEVRERERERERPbuzcRepc3v6lvN7BZ332zs1vPhdOvvWTNL9th7bnHq+fTwEBPuXgG/C/xu3SvwqcRxmz8DeDEbe7EcBH7HzD7T3d+4h+25VE4Se8EAHHX36TLFl1RdYvd/Av+zHk/2WcTxSl9I7NXTvtH8VOC1ZvaMqbHaZXfaQbMK+HJ3//1dzLdTSeMrmrvfa2Z/CvyD+qWvNbPvdvcRTHrWf1Zrlj+4SMffXssVT7d5pzeZZvom9Z2X43jA7l6Z2V3EChMALzIzc3dnYzLBdPLAG4jl2nPg9tY56jOnptsu6WB6H/0Dd/+TPb2BnV3W7fJlbPqz+RZ3P9fhL7ZVXxP8CnDDJv/8QuD7gR+6kNtwjh4L7f7pqefn2jbK2aaTar7N3X9yF/Pt9jhpX4MvttrzHdWJPbv5zKfbh//i7t+zy+27HE2/nxl3H5zndXwP6yX7AV4BfPcuPpvLqX3YysVKCLto5/PH+O/VK9rlnn0oIiIiIiIictlx91XgTVMv77aH0360x9M04PY9zv/E1t8Fuyyv69G73f3n3P0fEcdFfwmx508jJ96YuxIcbf196HLulenuI3d/k7v/qLt/LrFn8rey8cbrDcC/viQbeAWrK0S0v0O/vMvAE6zfTH0se2Xr70PAl7SefwMb7ydejKEVICZ27eX7+vip50c3mWb6tXMpo32hva719xFitQKAF7Ve/4v2DPV56q2tlz5z6v8AD7r7B7dZ78XYR1dMu3yZuRTH778CPr/1/K1T2/HvzOzTLsJ27NljqN1/dOr5Xq8Hp9tGaTGzOWI1g8Zf7DLhAHZ/nNzX+rvL3j6TJ7AxML6VK+n8thsX4/2027YPAd+zy2SQi9k+lK2/99Kh/MB53o6tXJLz+WPw9+oVTRdxIiIiIiIiIvvzU1PPv8nMZi/Quv5+6vkn73ZGMzvC+pjfAO+oe4fsmbtX7v6HxB6ND7f+6ZPM7ErondoeFiMDPvFSbcheufuyu/808MXEktANlQvdu+kb/K/Zw7zPPZ8bcp7tqqfkLvwR8GDr+TfBpIdlO7nqHuDPz9M6dzIDPGUP0z9n6vl0GwpnD5Oz63Z1j87H5zJdjeBFZnYn68GONc5+P9PzvcjMMjYG1F7H9i7GPrpi2+XzZL/Hx1umnl+o4xcAM3sO8J9bL50CvoI4ZEHzHlLgV+shGHbjfLVZu/FYafffNvV8uq3bUj2u+V6TFD7e3MLGEvEX4jh569Tzz9jDOj5950kAeA+w2np+QduHi+CCnovq32/t5IE/30NFub20D+fa5i21/j6wh/nuPMf17tZlcT5/DPxevaIp6UBERERERERkf36XGHRr3AD88Lku1Mwet8nL01UVvmoPi/xqNo6J+Td73qgp7n4M+OOpl2851+VeBH8x9fwrL8lWnIO6LOjHWi/deok25Uo2XV56adOpppjZc9mYwHO5GbX+3k1PyE3VSUm/1HrpRWZ2G/A5wM2t139ptyWhz5O9fF+n28jN2r3XsTEAcKHag/bngpnt+bNx9/eysYfzZ7KxYsEb3X3M2TYkHRADlO1S0DslHfwtG78fL9nP9u/gim+Xz9G+vrfu/hDwvtZLLzSzC9Kb2cwWgF9nYzD2G939Pnd/DfBjrddvBH55l4seTT0/38dW22Ol3f87YsWqxpfXpcV34ytQPGgn+z1ObmT3CQF/OfV8L5XSdjWtuxfA61svPcXM9pK4d7m50OeJ/X7uM2ysBrWTc23z2pXvbq8TCXfjxXtcz35dVufzK/j36hVNJxkRERERERGRfagDc98x9fK3mtnX7neZZvYdbCxt3vhLNvbU+Adm9qxdLG+Os8vv/5/9bt+U6RtymwW8LjevZuN+/JY6mHqlae/7K2G/X25OTz1/4mYTtdVBnf94Qbbm/GkPm3Ku5X5/AWh6+RnwjdQVD2oVuw8sni/fYmaHdprIzL4AeGbrpb919w9PT+fujwK/33rpOWb2Fee8lWebHs5mv59NO0HgM4hJII3pSgiNNxOrIABcB/zLqX/faj4A3L0EfrH10o3At++4pXvzWGmX96t9fFxlZuke5v251t8zwA+en006y8+zMfD+k1NDE3wfGysvfKGZ7eY4OV/fjd04PfX8imz362FT/qD10s3AP95pPjPrAt99obbrMeT01PMdj5PaD7HLcvfu/k5i8kjjeWb2D3eaz8y+GviUXW4PbGwfAH50DwkqlxV3/1s2Vvn4MjM7n9UOTk893+3n/l3AwT2s51zbvHe2/u4BL9hphjqZ/WIF/y/H8/mV+Hv1iqakAxEREREREZF9cvffA36m9VIC/C8z+669BA7M7Ilm9vvAj7PJTcu6x9LPTq3nf29Xwrguhf7zxCBR4831jbvpaV9c9+bb7fbOEMv8Nwo29r6/LLn7kI3loWeBPzKzm7eYZVNm9gwze/b52CYze6mZ7brXTd1T7RNaL203Hrts7t1Tz7/FzHo7zPPDbOxZfjlqHwu3mtmt+12Qu98H/EnrpX8CfFHr+Z+6+wP7Xf4+HQJ+ebu2te5t+jNTL//0Nsv8IdaTKwB+ycz2UuoaM7vOzP7BNpNMf0dfuJflt7STDubZOP70pskDdfWDv2q91A5sfcTd79/Fev8L64kLAD9iZnuptoOZHTCzL9tiGy+7dvkiax8fOfCpe5j3lcBDreffYmbfuZeVm9mMmf2jbf79n7IxYPV2ppIZ6+SUf8jGgNp/MbOdSmufr+/GbjyW2v3pNu0VZnbHDvP8dzS0wm58lI3t3dfvNFyImX0ze6tWAPAjU89/0cw+a5t1fDYxGXDX6vLy7UD95wP/bY+/TzIz+0d76FF/IbWTqhLg98zsaXtZgJk93sxeMP26u6+x8XfMF5jZE3ZY1hcAP7CX9dfrKFvP99rmTVfJ+P76996mWlVqLmQVmYkLfT7/ePm9eqVT0oGIiIiIiIjIufkO4I2t50YsNfwuM/varW5WmlnHzD7HzH4ReC/wkh3W82NsvEH/ZOCvNwuQ1b1KXs3GANMY+OdbLPtTgL8xszeb2f+3Xa8UM3s6cYzb
9jSvdvddlSK9DPwUG4OpdwJvN7PvrMc73pSZ3Wxm/8LMXk8Mupyv4NZLgY+Y2e+b2ddsc7wk9Q3O17Dxfs75qlzxcaMOtLbHVH4y8aboWckfZvY4M/st4N/ULx2/CJu4X29o/W3AH9TBgqea2a1Tj90EENpVV65hY2n1PQU/zoPT9f+/CPhjMzsreGZmn0MMsLcTrd4E/K+tFuru7wC+v/XSHPBaM/sfm62jta4DZvaVZvYbxGF2vn6bbX/j1PP/ambfZmbPqo+v9ucyt81yphMLmh6rJ4ht0m7msy1e35K7P0JMOmmkwK+b2f+pzwebMrNZM/sCM/sl4AHOrrrTdrm1yxfTG6ae/4qZvawOutw2dXxsCJLXAZ6vZmO5/VeY2R+b2fO36tVsZl0ze5GZ/SRwH/Bft5juqcBPtF5aAb7K3adLhOPu97CxGkqHeJxsd0z/LTBoPf8eM/s+M3uumd0+9d4PbLOcHT2W2n13vwv4jdZLh4HXm9mXTQcg66SoXwf+Wf3S6YuykVeo+tj+o9ZLR4A/r78LG5jZNWb2M6wnuu36OHH33wV+r/XSDPBnZvZrZvbFZvaU+tz9JfXn95p6mr8BHtzDW/oaNvb0/nbgjWb2uVslH9SJBs81sx8F7gb+L7us4nAh1UkU7euS64C3mNkPmdl1W81nZldbTPD9I+LvqM/dYtLfbv3dJX4eZyWBmdmimf0H4ueXsbfPfcTGqjAvMLNfMLPPNLMnTLV5m1VB+DM2fv6fQUxCP+s8aWYvIl4DPYeL+72/kOfzj6ffq1esS95YiIiIiIiIiFzJ3H1oZp8H/G82jut5Z/1aMLP3AI8Qb0wtEG+UPZGNY2s31jZ5DXcfWCyt+jrgQP3yHcBdZvZR4D3EsUJvI96oaQcbHPj/6gDbdj65fvw3MzteL/MEMSiwCDyFs8c2Ps3Zw0xcttw9WOzV+Rrgk+qXDwGvIPbMfDdwL/Em7SzxZv6dwAUZK7uWEZNOXgJgZncDHwZOEcvYHwGeUf+/7W85u1e37M73EW/eNt+TzwQ+amZ/R+wF1SX2Cm1XlfgbYi+z772I27kXv0HsmdscJ08nBgs2cxsxWL6d/0e8uX3D1OuPsDEoczG8k9gD9RuJwwp82Mz+nvXP6ulsvLEMcezjl7q7b7dgd/8Ri1Uh/mn9UkochuBf1t/FDxC/izmx7X0CcOtuN9zdP2xmf8p6oOMQsdfxZr4BeNUWy7nbzO7ZZN1/ucN73Cq54HVbvL7Zun+1vrn/H1j/znwN8DVm9jDwLuAkcd8tEj+Lx7PLDm+Xabt8sbwWeB/x/UAcb3qrpJ4XAne1X3D3N5jZPyFWNmoSgz6vfpwws3cQrz2c+NncBDyJjXGBR6dXZLGH6G8A/dbL37zZUCWtbfltM/tZ4Jvrl55ArNK06bBT7r5sZr/Smr5PHM5gsyENfhB4+Vbr3qXHUrv/L4nXek1y1DXEoOmDZvZ2YoLIzcRruia4/OfE0ufbJUlJPNa+iFi+HuJwPe+uv0sfJLZrtxD3f9PGfYQYbP2JPaznpcD1xM8I4nH5D9mYMNx2gphk9PrWazud3z5gcdig3yEm1UEM3P4JsGxmbwOOEhOXFuvtuZP4Xbgc/UviNn5B/bxPrDbwA2b2AeLncIb42R0ktnXX73LZrwBeRjy/QDzXvrFe7nuI1+M3ED+vpq09Thxi4VV7eA8/CTy/9fxl9WPa65kaPsHdKzP7bjZe2/0j4EvM7E3EtnyeeMw2CZgDYtJRO1HpgrlI5/PH/O/VK5mSDkRERERERETOUT3G7pdaLEP8w6zfsIJ4Q/Lp9WM7q8RqBj+2zXrebmafTgz4tUtV3s7WZXOHwD9z9y17+27hKnYeK/Q+4Ivc/d49LvuScvfT9X78KeKNviYAkRCDDZ+w1bzNIjh7XNbz6TbODqBOuwv40rqsteyRu/+FmX0HsYdv8/mnrN/InPZm4AuJN7wvS3UA7yuJwYVD52F5lcVKLP9u6p9+5RIdd98CXE0MNhgx4LNVz/ZHgRdvFyBtc/d/ZmbvIra/7SDrbr6LEJMStvMy4g34s3rL7tFrOTs4sVPFgrcTt6/dw9A5u0zzttz9P9XBl19gPfENYhLdlr1MW7bdR1dAu3xBuLtbHK7i/7HxvL6XZfyKmX0M+DU2JgkdZnfDA2z22fwk64kQAL/s7lslMbX9K2JArSl7/jVm9lp3/+Utpv/X9Xo+fRfLPiePpXbf3Y/VPZn/gpjc0biBsxPFIFZ5+Crgv12Ezbuiufv7zOzriZWk2mXpn1E/pn2ImFS2p6F53H3JYoWenwK+bofJ3wZ8mbvfa2btc9TKLtbzZxZL0v8WscJHY36X27zExmGILhl3H5vZS4hDI30PG+ObT6ofOzm9xbKPmdmXEH9jLexiuY8Sh6zYLIF8S+7+G2b2ycS2cs/qJMBnsTGA3mfztn4Z+HLiMXrRXOTz+WP29+qVSsMriIiIiIiIiJwn7v5KYs+YbyOWz9y2BxKxZ9GbiMMe3OjuP1iPK7rdOt5NvGn4A2wcy3naCvArwB27SDj4KWJZ5N9jd2VCP0YsSf5kd3/nLqa/7Lj7yN3/CfEG8q+xsfzsZiriZ/rvgdvd/dfO06b8E+KN07vYWGZ6K28m9hp9kbvvFOiUbbj7TxADBdsdwx8Bvhv4dHc/cTG261zUZbefTNzmPyOWtV9j57ZoK79APPYnq+DiD60QVxzLEr+EWB76o1tMNiBu3517bZvc/aeJCQavYHflqz9EbDuf5+5bDV3TLPshYoLEPyYmhXyY/QVyNksw2DbpwN0DZycYvNvdj+1x3bj77xB7+P4AuxsX+T7gF4nVKT5/F8u/XNrli8rd30MM0v8LYsDrHuI5fNffW3d/I7G6xL8k9vrcyVHgV4EvZSopsq6q9I2tlz7ALgPv9ZAPX8XGqk0/aWabBgPdfYVYweFL6+15HzHYdEESmx5L7b6730f8rvwYW39XHiZWiPh0XTPsnrv/FvCpnD08TttDxETjZ7n73ftcz5K7fz3wXOB/Au8nfparxHPM7xGrqH1yPYQJxJ7cjV0FZt39vcQ25uuIVbJ2Ovecrtf99cB17j7ezXouBncP7v79xGpvryT2cN92FmI1nv8CPM3d//M2y34j8Vz9R2zd/p4Efrpe1t/vcfOb9XwHMdHpp4mfxwniMHi7nf87iZ/lfVtMMgZ+HXiGu//ZfrbxXF2g8/nH3e/VK5HtUGFMRERERERERPbJzBaBTySWeLyKWI5zmXjD6qPA2+ob9OeyjmcQbyQeIfbIOka8yfLX+71JaGaPJ97Mu5l4czOtt/sh4J277T18JbE4xv2ziT0GDxNLfq4SP6sPAu9z9+ULvA05scfn44k9FedY7+1zD/F4eeRCbsPHKzN7CvEG8BFiMtDDwIf2e0P5scLMDhG/902p5bvc/YWXcJMAMDMjlu19IrF08hqxXO/r6iDm+VjHk4i98a4i9uwfEQMxHyW2B2eVpP94Y2a3EMeLPkKspFAQAwv3EPfR/ee4/EveLl+p6vHAn0usDnKYGGRcAu4nBvb
v3mnokce6x1K7b2Y9YvLGbcRe2o8AdwN/5e7VdvPK9szsccTqHdfWLz1MvM5+c53UdTG35TY2Jnz91zoAvdflHASeR6xSc5jYOXmJeL5/P/DhK+W4qa8HPoH1Ev0LxGuCU8QEv/e5+8l9LPd64NOIwxRkxO/UfcTvVHF+tv7c1O/9WcThFK4i/la7n7iNl1XC1IU4n388/l69EijpQERERERERERERM5iZt9KLLPe+NpdllgXERGR88jMvg5oVy/7Onf/P5dqe0REpml4BREREREREREREdnMP2n9fZI4NICIiIhcfP9k6vlbL8lWiIhsQUkHIiIiIiIiIiIisoGZvYiNY72/6lyHgxEREZG9M7OvJZb7b/y9u3/oUm2PiMhmlHQgIiIiIiIiIiIiE2bWAX6s9VIF/M9LtDkiIiKPKWb2CWb2s2Z24y6m/XrgF6Ze/qkLs2UiIvtn7n6pt0FEREREREREREQuETO7FugBXeCJwPcAz29N8ivu/tJLsGkiIiKPOWb2bOBviUl9fw68Bng7cBRw4Crgk4B/CDxnavbXAZ/lCu6JyGVGSQciIiIiIiIiIiIfx8zsLuAztvjn08Cd7v7wRdsgERGRx7BW0sFevQv4XJ2TReRypOEVREREREREREREZDMD4CsV3BARETmv1oDhHqYvgJ8HPlXnZBG5XGWXegNERERERERERETksjEGHgT+Avgxd//wJd4eERGRxxR3f5+ZHQE+l1hp6BOAW4HDxOGOVoATwAeAvwR+y93vuSQbKyKySxpeQURERERERERERERERERERPZFwyuIiIiIiIiIiIiIiIiIiIjIvijpQERERERERERERERERERERPYlu9QbICIiIiIiIpcNjb8nIiIiIiIiIvLxy/YzkyodiIiIiIiIiIiIiIiIiIiIyL4o6UBERERERERERERERERERET2RUkHIiIiIiIiIiIiIiIiIiIisi9KOhAREREREREREREREREREZF9UdKBiIiIiIiIiIiIiIiIiIiI7IuSDkRERERERERERERERERERGRflHQgIiIiIiIiIiIiIiIiIiIi+6KkAxEREREREREREREREREREdkXJR2IiIiIiIiIiIiIiIiIiIjIvijpQERERERERERERERERERERPZFSQciIiIiIiIiIiIiIiIiIiKyL0o6EBERERERERERERERERERkX1R0oGIiIiIiIiIiIiIiIiIiIjsi5IOREREREREREREREREREREZF+UdCAiIiIiIiIiIiIiIiIiIiL7oqQDERERERERERERERERERER2RclHYiIiIiIiIiIiIiIiIiIiMi+KOlARERERERERERERERERERE9kVJByIiIiIiIiIiIiIiIiIiIrIvSjoQERERERERERERERERERGRfVHSgYiIiIiIiIiIiIiIiIiIiOyLkg5ERERERERERERERERERERkX5R0ICIiIiIiIiIiIiIiIiIiIvuipAMRERERERERERERERERERHZFyUdiIiIiIiIiIiIiIiIiIiIyL4o6UBERERERERERERERERERET2RUkHIiIiIiIiIiIiIiIiIiIisi9KOhAREREREREREREREREREZF9UdKBiIiIiIiIiIiIiIiIiIiI7IuSDkRERERERERERERERERERGRflHQgIiIiIiIiIiIiIiIiIiIi+6KkAxEREREREREREREREREREdkXJR2IiIiIiIiIiIiIiIiIiIjIvijpQERERERERERERERERERERPZFSQciIiIiIiIiIiIiIiIiIiKyL0o6EBERERERERERERERERERkX1R0oGIiIiIiIiIiIiIiIiIiIjsi5IOREREREREREREREREREREZF+UdCAiIiIiIiIiIiIiIiIiIiL7oqQDERERERERERERERERERER2RclHYiIiIiIiIiIiIiIiIiIiMi+KOlARERERERERERERERERERE9kVJByIiIiIiIiIiIiIiIiIiIrIvSjoQERERERERERERERERERGRfVHSgYiIiIiIiIiIiIiIiIiIiOyLkg5ERERERERERERERERERERkX5R0ICIiIiIiIiIiIiIiIiIiIvuipAMRERERERERERERERERERHZFyUdiIiIiIiIiIiIiIiIiIiIyL4o6UBERERERERERERERERERET2RUkHIiIiIiIiIiIiIiIiIiIisi9KOhAREREREREREREREREREZF9UdKBiIiIiIiIiIiIiIiIiIiI7IuSDkRERERERERERERERERERGRflHQgIiIiIiIiIiIiIiIiIiIi+6KkAxEREREREREREREREREREdmX7FJvgIiIiIiIiIhc+X7up3+R4CMyxvziL/wRf/+ut5JYznx+A2D0Zzo881nX8sEPPcAtN9/IF7/kBSwcWMTSDBLDHQzDDZLESNKcJIl9JRIDq/82S0gSIzXDDMwMDxVVNaIqx3hwwDEgTRLSNCdNMzqdLmmaYnEmjAQI4FCGkhCcEAJZltLr9snyjCRJscTAjRBKQhVwdxwAx0Ogqqr6eYI7YEaSZKRZRpbnZFlKlqRAIIS4Ptxxr+L0QJIm9csBD45ZvVOduD6P8zbbsL4fsri+NI3vNUvI0oQ0TUkswXFCFSjKkrIoKMcjvHLKckxVFHiocDc8xNfKsuDUyYd5/zt+i+XTH2RcVJw6NebYyYwnP+Nz+ZTnfw7dmR6Y1ct2Qqjiewlhsp3uTlWOqaqKYTFiPC4pq4rg4MEJHuLf9T4MIRDc+fwv/er4tpsdswV3j5/7LqaL+8o2vLabeXejWVZ7XbuZ51zWdyGnn563vd/2srz97Jf9aH++zd9f9cWfH9uFepo0SchTIzHDgSSJ7Uucv24jEovfNaa21eo2yX3qWDKaNsbr11IzEquX4GBJnO/6fsqz+sYN8xmMSnILHFjsMDuTkc92yGZ6hKIijCu6s10GZ9Yoi8C4CizM9yjyLjbTY+nUCuMzq3QMlgpjWDheVIyThME4cO2BjJtuXCDr9Skqo5M4o+VlhisF3U5K1skYrw6xXo455KnjVQCcYeFUZUW3k9JJIUuMNE/jzgqBccgYjktm8sDMVQfJOx2K0QirCizvMFwdkeKsDQpmehmdfpfVYYGTkGdpbNssZTgsyLo5ad4BKlKLbStJ3JM+LvGsQ5WkeFXRy5yqLHESstzozi2QZDmjAhKvsKzHaBzbsTyDvNsnnZmhM79AmvcYnDweP8NOF69Ksv4so8oph0PyTka33yftdLEkBUvJ8oxur8MP/KsfpgpOlia4JZwZBhZ7GWmece3Tn0lKRtYxvNdntLTC3W/+K2574i2cXC1hMKB/8BC4Mx6NePZTbuKv33MvWa+PWcLJk6ewEm7IRsxedTXDYcFaAXfeNMNNt93ESrbA4MjTueOOp9PpdEmSBLMEzHG3+tyQYIlhZhiGJUn82wwswYzJfHH6wPFTpzlx5gy9pOAtv/VzPO+Jh1gpE372Ne/kPY+uEIqCm266lR95xU9z6Mi1JGnOcDDAgKIoKMoxa6urLK+uMB
6OGQ7XKMYFZVlQjEeURUkVKoqiYDwumm9G/JLhVGVBqCqCB9wh1N+p5ntq9fl8dnaOzCp6s3MszB/g/vvurqdzzB0snnfW25VAaJ3zAW7u93j00QcZzh2ozzEbv9kWv9hYkpAQ9yusn3fdncRs8jyEgAMHF+c4dvf7ee9dr+bhU0NOpgl33pBy+mTGi27u8YfvO8VHV+K0//4HfxAzoyxLqqpiZmaG0WhEmqakaTpZLkCe55PXx+MxMzMzJElCkiTkeU5VVVB/DvPz8wBUVTX5zJvpxuNxPPcnCVmW4R6vadrPzYwQAmZGp9OhqirKsqTT6ZBlGePxeHLd1el0JutqPqM0TcnzfLL/3X3De6qqiqqqJtvv7vWxaBvOC812t99Ds+3rx4NNXmv+v9l5pb2M5t+aR/Nv09M02zN9nmtPu+FUMPV82mbznOu8r/rRfzv5/JLEyLKUNMlJ0nhtmiTx+196IBQl5XiVxU5CYs5gaQkfrlEVJcVgREFCVVaUo0DWyaEoOX3qNJZ2uOrmm7j6pptJEiiqwEy/S1UGivGIykuMjPG4oN9PePi++zl4YIH5xTmqUDEYDuhkKQYkeQKUnDq5hHuJl7CyusKxYyfpzvQ4cvUBzpxZ4QMfeYAbrj5Iv9+jqCpmZ3oYFUsrQzq9PnjF4oEFTp5cpjt7kOF4xHVXX8Xi4kHKsmDl9EkeeuRh3v7hh7nhmkM8884ncPjQAZZXBjz80EO86yMPcv/Rkxye63DjkUU8wIHFOY4dP8HB2S5XX3sVnU6PYVFwYH6Gbn+GLM1YWz1FVVaEYkS/m8fzQ5axOhgTyEktxSzF0w6JdcjTDlnWIe90yPJe/I3i8TxUFoFyOGQ8XMPLAMFJsxQPsS30xMg6XcrhmGo0JlQVXgRCVZB0Eqo04Z6jD/H2D3+Ak6dPxePdHbfYdib14R8wAgZU4Na0uvHXjQXuz582fbQR27rmOgbASVc+BlVJGK9RhRB/gwFlMNLZa0nzHuPBGZJOH5u5Gmy9H/v0d3u743yn18++XmyustZ/H5qvb/lZy/J4SVEvDMcxh2Dr8551pde0Za1tsHq1G/5l6hrRvL2IeA6KGxbX2V5u6x3W0zbvK27j9KXnZHnNMn2TaQjrV6H171LDcYcsDFnoBgbLJ/CqIKGidKeTdylsljQpWS06eL5AYOP+nP4MBsc/xH4o6UBEREREREREztnnvfhTOXXyBK/+w9fyyINrJJYTvMA9kFiG41x/00H6sxkf++hJBqMhB7NDJFnG5P5MkxBgVgcG12+OxRuzSQwWWB1oIQbNqKdJzPAkBuQNSMxIs5Qs7cQbgvX8CTGI0ywv8w5VWcab82l9M9diYCdN0kkgPYSKECrMYvKC1zeEzOtkCUswy0iyhDTLSJOEuCUxuJCY4W4EryaBEycQgk2SKjxUxBBLmAQ8ob6RaMmkZqUlKWma1QkU7Zvo8aYkxMBJkhhpYgQzkjQhEEhJ61tVGWAxkSFxyqrg+KMfZHXpI3go8CqQpUZqBe9/71u5+dYncevjHo+lzWdS33usrP4MEpLUCO6k1sEpyUKgSuPNUYJTUWHBYoJF1QrSbnkb8WztwAVsHdje7CZne97tguO7SUyYDlbsJqFhen17SYDYTRJAe3lb3eTdzbq32nd7nXezbd5sH2z1fraz+Xq8Dio234vmPzSNzCSZB4jJMpbUx2G9/jrZoGlajPoQn6yL9RvS9etNAlS8iRynvH4m59nzCYuhJCkrUnMW5nMOXzVDZ6bL2GGwPMKqisoTynKNLFSE4MzN9gi9PiUZ1dqIpBiRJ8bysGJ56MzkKUUnIwmw0AncfN0cea/L2jiQZjlhuIYFSFIIoaIq40YmwXFzQogJQEUFFqCXpyRZSpY6nSyhdPDSITHWkhy3QJanMB4xHg0hBDwxyvGAxKGoKnqdmKS1ujKELMO9TqgKEBKjO9OL+zs4SZpQVhV5mmJUhLJeXh0o7WZQuTMsAv1uQreTgSWsDgrMUtIsxww6KZCmdfsZ2zIrK8L4NGmWMhoVeBjHCMEoBrXy+fk6cQo63Q5pf4HB6gplWdDxnODOsHRmuyl5mjAbIPfA6bUx9uAjHLnhBjplbKeTLKN34CB5f5bDCx2OP3SS4uRRFvsJ3SSjKgpCVWJ1YGA8GDA8eZqbbzlCHoYkC4vkg1XyPLbF7gE8UIVqy8hKc0wa1MkHk1NnfX5c/7v5bgQPk+nTPMOyDl4WVO6kOJamuEGeJQxHIywpKauSLE2pQhXPLc3XqA69JGlCRkoIKe6Bcly1EnvS+Ll7qL9jTfCkCdy0AjJGDJ8EZzQasjIecnW/z/LyEkVZ1omEMaBi+CTJoAnMrDcb8bW7V9Y4snCAM4MRaZ7HeM0k0ANmsY2wAD5pN+oEjjpoFNpfcksBY2UwZml1QFFWdLtd+gHeee+IfuJU3mNYJ/FATPooy3Lyd1VVG4Lh7k5ZX3MURVE3Tz6Zrmlfi6KYJEs2QfxQJ+o1rzfLarSD72ZGlmVkWUZVVZNEh/b5ajr43wS6m+SE9qO9je352m35Vu1/s/1babat2f62sxO/HvvyvEdVlbiX9XtO6kM4HpuhCaB6PI7LEBgVBZ3UKIqSajAiTeK1ZTUsqEKzDxOybp8kXSXvdFhZOs2B0REOHzlM4Y4FqLyi0+sSQoJ7TB4ryhHX3XAtw8EqIVQU4zGDtTWS2R54RTWuyPKMUFWMy5LhcNxc+pKlKaPhiNNLKywPh1iaUJYVVQh0uxllkbCw0CHLEsoQ6PX65J0hBw4s8vCjJ+h2u3Q6GUniDMuCNO+BZayNx5w8eYb5uVmyNHBwcYZuN6MKAUuMTpqyWowpixFZmnD0zDKz8zMUAR46dopTpzvcfMNVzC3MkSRGf7aPVymJQVWWVOWYUARKKpLODHknx5KMqqgoy5igUxUphDGT9sVjwlen14PgFNWAYPFzJElIu/H3SDEu4zFvcX2egJGQ5jmjaszxM2dYHQ0hTeogdvwtkUBMwIpHRH2tkmwIYxtOWl/fb/y62NT/67+tQwhjsJiImRgYgZlexnh4gtIPxcn6R4iJbWcnEU2Lh+rO022Yx6x+n97KIGh+9IT1azrf5NqSeI5o/o7XgIZbO7lt43t3b34/bnK6tXp59YnPbXqa+N1b//3FepIqkFhYTziYtHsbP49mnpif106kqjfAmazjrGliytxky+utrM/zCakV5HlgUAaG40C3mzMuu3hmjOwwSTqgxDDzyf48n0nCSjoQERERERERkXOWJDAuhzzw4GlClWOkOAMqClJyqrIiyXKe+azH05u7G2xMnsUAvU9u1MSepZakdf+dePvMLN5aow6iWx2csclNoXgjzi2NEbRJcCFZv4EFNEGEQDNvQmIJaRp71NHcSE/izT13qKpYzaCqeyy6Q5LEdWy4AeRgCaRZGh9pEnMcmvtl9Q0iA+Ld4voOltc34utASqhCXT0gtHq51jfzidtpk1trYB7v6oYqYG6UEFMKEifNkhigSwIhTYAMr//2zAke12dJCXQYnX6U44++C6+GeKgIVSAxo
5PB8TPHeNe73sK119/AzNxcvFGWQIIRLFY5mNxs84CHdJJI4KRQjCmrEiOhpCRx6vmYfP6TfTnV82mngP5WPaX2kpSwXYLCdsvYbPqdgv27Wfdm806//730HNsqGWG7de+0r/eTLLDdus9LdQRv3YZt3Uz21vO60yAAwYAQWjelbb2NIKn/b/E4d48ByyYWSTz+zVrj17qRGFw7k/OcAymLoWQGJydw8EifxcUeaWaELGO0MgKH/sICR48vcWgmYTx2erN9bGaOKk0YrgzphDEzWcbRomJ57HT7HYoqYO7M9lKuW5wlzTucOTWIAaZkRBgX4LG9yTs5HqrYUzY1Kif2vgTyzOrvqZOGkk6vE5OW6uBw5QlVMWI2hzTPKMtA0+PSvQ6ChUCaGN1+j+GoIMuMoqwgTalKqMqAZUbaTWJVG7M4j9nkpr5ZrLJAcCwJZOaEJKXfT8iShCokjIcl4wp6nYQQKlKcXidjXMbErVFR0M0SfDyIbTYp4HXQPJBkGXliZL0ZkjynHA7jeSBNSLKMJOkxGo3od1PGVUUIsFrFdtCzlE7i+MoZSG9ibXUNy2coiorBsGA4KhgsrcCph+nM9KhCoPJAQqyYYMRkD/cQz2PmeFVSJHn8bqb55FilDuxCq5fk5Cthk6BCUmcXNNV/msCkWRO4jcciZvUxG5fWJGi4Q2VGmmYYsVrN/Y8c481vey1La6vkWUpVBrqdDk+5/VY+8tGPxR65OIdnOizUFS0GZUURPAY6q4rUqM+hsQpPCA5e1eeI5nu2/p7crT7XQ5pmJHlGp9sjT1KKolxPmHBwj1VxYk5i00vUSBImx3vhjvVmqE6djlUs4iEaF4CRpPF7GjehFRCr97PH6E6rTYn/mFugGA3pZsbS2OnhrJHz4ltnyDDWyvWEo/i+1nvSF0WxIYjfTj6oqoo0TSdVAuL2hsly0jSdBPmbgH+TGNBeXpquJ0hOP5q2bbqCQLutnQ4OTq+jnfQwORq3ONe0kw+mKw5spr2d7aSD9uvT6zkXmy33cpPnPSwZE0ICHsDjd7bOMojHr/vkWjhJOozHZ+j2M5IEhlVFMS5ILSOURZy/iklA2cwMnX6fNO9SmbO6sszioUW63S7uEEI8f2RZRll5XSkoJcsS0swZDYa4V4yLgn7oEkLFYDhidGaJcjzGLIkVaPKUudkeVVlw/NgqSysDrlroU4wLiqTiyOEF1tYGVCGlO9OjCBWjcWDOE+bmFhiPBtx64zWksYgAx48dZTQqyLOUMow5dnLErdceZjBeYzwqqTww00256cgih2Y7JKlzaLHPmZVVKod7ji+zuDDH1Z0uw+GIh4+eJsuMx3V79VneGI0KsiwmIDuQWhUrs1CQpDNxP+KEsqAITpmMsSSrz/0po7U1kjSl2++T5DnddIZiNKIcDXEvyfuzVFUglDH5OKnPQWYBS7p4J2H59BlOrS7H4yCrw7eBSYKk1dc57UM4qa9L1n8BNdc1TWMbr1Xcz/4OFp5AGXvEp1ZXHKkqEqDTTXFfZVjV7YMl0Go3p61XUYi/oza/Lm62pXVNWrezPpnXJz+TWmfC9bwbpq5r4wutd9y8bq25J4ulmXw9Z60+Q25Ypk2S7drnYbdmiev/bydD1L+ESPD6urH5DUPzJlsJA+vzt6v/OAEsqT/Xjdfuk7bSm3NOvbS62gUO1XhAGJVkWU6wWejOUtoMqY8wr+pkB6c5+261jv1S0oGIiIiIiIjIx7HxeAzEUrorKyt0Op1JWd29cHf6vRme9/zH8773fJgT78gpqgBWMX+wz02PO8DNt1zNjTceIc+dwdoq4OR5hpPE4Q2w2I/HYuA8saROPYg3WBOrh1qwpC6xWfcs9KaHYXO7ar2/S7xHFkv9JpNYgtdJArHHbWIp6eROVoi3XxwITvA64aAMBK8w1nsR4tQBFcCclLSuLhCTGWKCBHWCA+s3muo7Xc2toqYEdKyAEIA68OAhvt9mFRZ7kMa3HEuxujc9F8E8BVLMnDyN2wAeuztnzY3j1pAGwamspCohpIFTJ+5nsHJf3Acee0PjTppCZs5HP/xejj3z+TxucaF15y8lSephFeplu2WxdHsdFKMOLhpGSQWWNR/X5A5p2OKG1/QNy52C5DslD2w2z27spsrAbpaxXRna7aoITFdomF7mZuuZ3u7dJENstZ3TyR/T27uTzabdS5WF3S6/7sPcWlZ9W7e5EVzflI2Tx6FFYjJTrETS2orJdEl7GR4TFeJyrW6PbNILzsw42E34pIMZ1yclXlb0Urju2hmSThZ7X872cUvIsoTeTJdyNOTIfEZRVOSzM2Tz8wwqKIYjbDykKktODp3lcWwvszylLANz/YxDBzp08pTBWkGKM1ob1jei672RZQSg0+/h4yK2sAb0OzTJWlUIhHFBp5fWrZ+tl+93mO2mcYiGfh86OeXaMqGoKEYloYJuZszMdlgdlhhGRUKaGeMKRuMSx0iriqRKSbNk8tVvbtmHMm6TV0YSAlQBy1OyFDr9PmvjksqNcV15IZRjsm6PXq9b92iFcVGR5BleBkIxxi0Gu8uqCfZb3L9VoN/rYiRk/RmqUGJlSW9ugdHqGsVwSGkpWYwhsbJaMJ+mnBlVdDLAS8q1VcLSEsvHj3Lo2ptIQ0EYDJid6VElRla32UVR4g5plpNYLIMO0M8TDiz0cBKGSco47VFkM/XwA3WyQWiO5FZ0vt5nTcJBc7i2j/FJUgJNQoLVMQ5vBVs2tsmj8Zj5+djT90Pvezdv/vPf5dGjD5OnKccfeZB+r0/6ki/nj379lycBlE94wk189ec+j2JtmROnlrn7vke479gSHzu+Qj/PuPnqRU6vjnhweUye5Sz0shjEy7uQd0laQx2RZFTBWR4W3H7HU7j+mqspR2MePnqUv3vLXzMsSnqpkeM0vWwD8fuaWBzaJE0TLE0pA5TAQ3MLHE4DH7znPtIsi/srWR+OAjPSply/GZ3EqCYBprgH06RO2sDI8pTbb7uRqhrT6WTMjUbMZ8Y4yVnoGaOiZFhNWqPJEANNBYJJEkkdtN8swN7WDJ/QJBw0CQDtYHnTFjdDGyRJwnA4nFRHaK+3CfhndQCzPfxBo12ZoUk6aCcKNNUPmgSI9TZ24xA07dfb696uJ+1Or2+WPLEfl3uiQVun2yMpE8pyXA+Ftd6uQ/taCcajEaPhKnm5xmx3hjRLqXCsClgaSFMjlIGyKgllEYf8WFikIqU/02EwHFGOx3Q7WUyAyYzxqMKSeE1rHuj1OgyHq6Rm9TYFgpcMRoPYvlXOqdMrZCkszM2BV+SdlF6/w9ragOGw4OqDBxiMBgxGgV43JQFWVkYsj0uuSox+N4MAa6urlEXg4KHDrI2GVINYo2swWGO2m7I0HmBUjIsKcMqiwIAs73DtVQdY6KYsrQ5YHo553PXzrKwOKEKgDHDizCpXHZin3+/x3nsfJb8vsDjT58CBGUJIWVkeMD/XpdvpxuSCtIzvLxSEssQsnVyQl2UVA7gW4jBneUYxGkEIlIMheb9Ppz9Dp9cnzXNCCIxHBeVgDPV5F3cS
i9fZaZ4zCCUnl84wHA/J0wRPY4W0OBRMUx0sBqSD1wkIk+Bxfc3SjCUQi6i0vi/TlQ/iDOnc9Xg+S3nmXjybI+nNQzEgpD2CpZB1sWJQLyG+9xggr4PmkyD8+vHJ+tTY5EdMHNauOa9NJyLUW9h6uv6+mr/Xp5x+b2z4t+bMWf882/CbsL4UrM+J9ffImGRxTNqu1tuKVRFYrzrRbKbXW1afKzZcO3vr99/GnU5T4QBs/fxszY9UW/88J+M4WGv7mvcQf+N5nchA/R4MGI8Do841hBAoAU/nyBkRKkjMSRJvDo+z2+bJftl/e6mkAxEREREREZGPU+PxmFe84hU873nP4/nPfz4//MM/zMte9jJuv/32PS/roQeP8fa3fQhjxDOecQMf+vCHGC2dZP5Azgs+63ae8rSbue76q0kzI+/mfPRj93HzDbczMzcXEwvMSeqbLetlf+ONl0BdutIMc49hreamToDgFR6q2FPFvO6dDGYpliakaTIZroG6B7MTmNwWSdb7esRMgrqns0MI5aSnLHXPEw8Br8eSHo2GrK2scPSR+xkPlrjuuqu55robybpzJNkMWadHluax6oFZrA7g8T2GpjqAVxt6X8cbg06SUAeGWjeAPPaQKqsYILDmppMlsdpDUsHkZlZditmrWFabpL6xlsTgaRViAgcFw+Eyx4++n6pYngSpml2cGGSJs7x8mg9++D3cfPvj6HQ7kx41kMYhHDyWpfaqTiGwGAjK6oBBqHvludfDY4TAemWKsOHm4VZB/s2CDbsJfm8W7J9+baugyE49/jdb3n4C6rutIrBVhYDdbN9ut2mzxIPdrLdZT3v+vSaDTH+e21W5mF5WPQfxO7QedI0VCurvXTOdQdN71Ot75+1yu3Hd9XchSVrLXl/TZPvqm86H+znPPZByYxYIoypWODg0Q3euR1HF4HqonDQNHD60QDkcUi4XFJVhnS7dhXlWykBVFKyeXCIpAyM3hknO3ELKcG1MsTqi3zGuOtwn6/UYr67VTVtFXrcXZRnI87pHJileVYSywhJwEvJeTpYbYVQRioLebI9Or4uHMpbJT2IgN4wDeQppCmU5pjJI8i7l2nLseT0a05vrs7JaYGnGqIROLwczqmKMWaBTt8FpEntjkhgVMdmhHMfelBiQpISyYjgak9Y9ZIssDmdTlk6axpvuNh7Rm+nVo+UkEOLQD2kCoSgYeiDrz1B6IM0zgkFRVHH4myTBy4K01yfJ+oTBKll3Bk8yitEpvCwZV86wCpwcgZdw3VzGahWwEBisDRicPM5sYsxkzmhtmZncKKuCUHQYjSuyZByHj/Bk0ivc62N0XMJgWDEYVZBnDCqj0+nFwFFV4l6AV62TgdOUVa9Pj/G4jkf4huEB1qMg9b81wRpfD9ZATJRbXVpiVAZW10YsLiyS5jkYHDp4OJbiD4EiVKRZSr+XM1g5xfW9wGBcMaqck0ePMtPNmFk8wo03Xc8zn/EkHrjvAf7gNW9iJs/43E+/g/d84B5e/ab384QbDvJJtx9hZXnAcGaRZOFQLP+cGKfOLOHuLC+v8JfvfA83HZ5n5up5QlVw4oGPkQ6XOXbvR7jlwByLoxOkWcp9p4YcHZQkBjfNdznUTej1uxy8+jBv+cCDPLQ8ZK6Tc+f113DskUdYTnIOzM0x2+sCTlFV8VRaB2eWB0OunU15ZHnEOO+BO508Vt5IzSnGBf3Fg4TVZzAaDMjSWG4+ATrEa5LBuKJqJXU0SQdN8L5pw9pVAppAep7nk2nb7V0T9G/PD5yVfNCeN36+YZJ0ECYJhoGmWkGTDNFUOzCzydALTVJC23T7PZ0AMV0tYbOkhOnEhM0S2rZKJphOsmjPfyUlEexV3ulNkouqqiSEqrk8jWL5DpL6mnk4GFBVa1T9BJKUTidnbW0tJs2OCxLLYoA8jCkGQ2YWDjF26M/McurMCYbDAf1+Byz2rjeoq3AYRRmwYBACwyImtgzW1hgXJWuDIQcW5ijra+W52T4zszMMB4F+f4aqqhgvrdHr9zm4OMfMMOGBY6dYXDhIlmesjcY8fOIUVx9epCwrBuMRlRckac6p0yc5sTLm+qsPk3c69Pt9ep2cURVYGVWMxgWPHD1JvzdHp5NyaHGOuf4MSeUMR0OGqwVF6fS6XYpyjZuPLFIFSOrhzzxUHDu1xP0PP8JM/wbyrEOSOKNyjA1zPJRkWUxYqsYlZTEEg1DVCb5OPfyLk2QpnTyLn0uICbjFcIAXJWm3S29uDkLCaC0mHCRuJHkOVUlqFWmWQApnllY5cfoU7mUcdqe+0J4Ezr0Jpnv9evxt4E3cOdQTYVC0zw9ni5dABiRY2iFJu1jeJ81noXuAKgRI4rBxTbIBZusB8ibYzXq1mrr+0/oarD6H1VMxqUDHhvPShmyBDU9sPRhvyaS623Z5R7FdqK/ZrN5P9bKadj+ek5P4u7HZGmufd5n8lFpve+KcZuvXfZOfW5slUtS/WWM1kmSy3GZ9kySCesg/vDmJxETJs1MomOzv9W1cTzhoEiwqTxgm81TZDGmxjONUGHiX1JcInk+uEKxuyzdvU7fZyTtQ0oGIiIiIiIjIx6GqqnjVq17Fj//4j/NZn/VZvP3tb+dXf/VX+YZv+IZ9Le+Nb3gnv/0b76QKy+ArhDLDyFlaWuLtf/cARx9e4dnPHfOUp93K/NwiiwuzrK4ucw1Xxx6Laey1U1YhBgXN6qB7QuX1mM/1umLP43hjpHk9VkWIAXez+LwZyzhJ6yETmrGhDSAOfxBv6ja3ymJAJvYiWS83mSYpAaOqnLWVJcpiTFGOeODej3D0oY+weHCG+fk+1117iG5+mpOPnKAsE5ZXhwS6HLnuDm6+9clkeV5XyW3drHNfv3eUGOaGka4HSuv3Gm/Ip3Es3XJMqCrSLKvvAcYAQ7xXVWGegQc82PpQDZZgzfQ4HowqCZNg6Wi4ytLJe6BOtojB2rr3jcegY2KBj330AywvvYCrr72W4KEO3gScOunB4ljfsVpDHXhIIW+CPEmKVRUURQxEFCV4hVfptsfXboP1202zWfB7N4H06aDKVv+226oCe6masNN726wiwX6XNb2Mzd7Pdts+vY+3mmez7W6/Pt0Dd7vlnLW8pgsY62HYpAnC2npP0bQetiQ0GQfefBWtPv7r97b+9Zz0ep6kHdQTNjefD/RzPu3qDtf4mA5OiXPwcJ+ZhT7D0sjzjNQCo7UR/ZkO45VV1s6sMhgGspkundlZlotAKEpCMWaml7G8UpDNzTBnEIKRVRVz8zkHD89iWUZZBEJRMS4rqtKZyS0OGWGQdzIKj70kQ1GQpjFgVY4rOlVJWZYET6mCMzM3jzsxOJXH4WHWVkZUVUWvn0PpjMZj1sZLzCzO0umkkKTMZDAYlJCkVBjDADYuyfOYiNRJgComJBAcsgSvKjyUFKMy7vTEYm9OAli8OZ5iVBVkpLiHuid7AsWYzBzKkmo8hLSLW4duxwlVxaCo4hANPoQsI08TgsdxsmN7aRTjEVk3J+sdwJM8nncGq+AVSd5hXDkrK2MeLFL6BAbDkmH
dG7UsRpx59FH6BxbIDhzkzInTDNdGzC86aRqrOITgdXJHDNBaPa56M4ROGZzjKwXzB/o8NEy5fbFHp1ohSQ9tCHCYbUymaRINJhGG6f/Wh3K7N6ix8TvZJD/kWUa1ukw3S0lDRprE5JsHH3qA8XhEmsbt7fa7hKri/W//G66eSRhkUIbAwbmMuYU55mZ6dezEOHBggVuOzHLtdddw+5OewHA85KkfvZsn3XaET3zWnZw6cZLl7BDj7mLsiR1g+N73cvCa61g+fZIj/ZQDiwfI+3MkGEcOHyYf3UiXNZ59/QGuH2f05nr8zXse5MPH13DgeXdczeMPd+nOLvC4T3ouJ3/+tynHI64/2OFZV1U84cDV3O/z3HLr47jmqsM4TjEekXhgVBSAcd8DD/CkIxlv+9hxZm+8nVBWHDg4Q0bBTMd56IGH8e4C6fwsg6LL3LVHSI6eZnVtxFzexUJFUQZIYsAxaVUZSNN0UkFgOvDeVDPI85w0TSfDI4QQ6Ha7DIfDyWeXZdkkkN+0j838TRJAOxmgGa6hPW8zTwhhUg2h1+ttaD+bhITpRIHm7xDCZLntadoBq+mKPc107aEfphMlmve11blss0S2C2m78+peqyzsd96806mvT40iGRPqxAP3pqJUMmkjDKMoxhSDVYa9BLM8VnFJU0IRq3VleUqSeBw6a7SGV7P0ZufwsiBLEkajIfjcJBk11vOvKOshGUhSBsMB5XjIcDxmOBxTeSDvJKSJ081Set2c2Zk+UJEmGVjKgQOL3P/wMeZmZihD4Mg1V3FmbcTMTJ+VtTW8cu64+Qb6vS7LS6v0O13KckxRFXhIuPX660hT6HUyDGNmdp4HHn6U3BIGpXNmecTpU8scODCDGbFaw1yH1dWM08uBk6eXmZ/pMR6luKV1dYLAXLfDXL/LytqQE2eWWFk5hHsSE9UchuMh1bhgfm6OUFUxGTmUDEerpGmPPO3hGEmSYZaRZln8NZJk8Vzg8bOqghOKAhsMqMbxStktxYAsNciYfI6DouD46dOsrq2SJmmsVtbkQYYm6GwEQj3E0XpAfRIur4uMJb6L420SsHaggmyWJOnEFOysh3nAiwGxDD9MxoubRLinz0PtBLepf5ucz2w9V2HboPbUdd/k/0nr3cJ6oN8nuXrNeuLQPfWvRCMmaFjTPsXAvjVJAxuC+fXyWsuf7KbWNSbJevJBk7Bqk60iJvw1SR2T35XN9SVTbdn0bxFvbVO9Zy3UlZuaOdavPyeD+1jc0JicYRAKEuvgXoGl5KmRhDIOo5d0wTeptrP/XIMJJR2IiIiIiIiIfJxxd17zmtfwb//tv2VmZoYjR47wvd/7vZRlyezs7L6Weddfvpu1tYLhcMza8hmKIiYADEar3Hf3SR689wykKbfcejUffN8DPHD/KQ7NnyHxQNLJCAHKEEulxs41zbih3vRXwfANJdANI7WUYBWeND1+rB5OwcjSpC6RXN9QY/0GeFNqmeZGUKu3S3OzENZHyxysLvGRD7yTY0cfYfnMwzhDrjlykDuffAPXXHMdvf4M3f4MVXCqMnDmzGlOL5/h5LEHefD+9/Ph9/wV19/8ZG6+7Sl0e3NUId78Z8P9NK+rLjR9huqbX5OAkUOoYrUHi8kQSZo1O2OSQGDEcddDfdOpKSedJAlpHbgIFu9MmhlWGmsrJxgNjpFYWpfznERh66EsIDM49ujDPPrIw1xz3Q3xs6iHxTAPVISYepDWNz2rQFUnRDhGZikWnKSssGQMDAl1MkLi1ZaJAedqN5UQtrObhIdzCYJMj2G9W7tJothqvu3sNdFhr+tvptuqosFeljO9jCYxYD1oG6eLVTbi32nSVD4xINTHYJymaROab1+z/EnwmPX1WH3D3jDmOxmfcrjDdWlJp6xIQ2Dxqh4HDs1wemnM7GKHLA1Awuz8LKPlFUarI6oK0n6X3sIcZ8ZQFiOSsmBhrsfJYcmwMpaPr1B6wmIGs52Ug4dmsW6PsigJRUGWJxQVpP0+Nn+Yg9feSP/QEdLZOULSoaoqOmEM4xWWjz6Kn3yYcryElQXuFWmvF5MR1lZxN3o5hPEYQkUnM6wsSdIM90BZlCSDNbyTM5sFVkbxVnfeyymCkRQl5TiQArkHyjFUVaCXFYQiJpQ1SQjWJA9UkOdd3I0yQL8XhyMIGKPhkLzboSpLCAW9LCWxDqGscBKKpCTrd/CqiMMXdHLKIlANhszMzzGuwENJ3uuTZFks5V85VRWTyjqz86ydOUGxtlJX1unE9Xoc3qeqYtCtcifPEjyBuV5GlzHl2ilYXqFrTp4aeZ7G81jlJBYwi4kQdctMDIoERhXc8MxP5KbHP4476FAdf4j+SsnaYEyZJHhZ0gx90ApvtL89Z/+vSRKbzNFKTKjb/snRa0aaZfT6Xa4+OM/9Dy3T6+QkacnC3Nxk2IKQGmnew8dDTi4tcUPXIDdKUqpyzMOPHuX2W64jhIpqPMLHq1RlwdKZMzHgniRkCeRZSpYkZGkG1EH2BE4cO0rP43AFcZgEmOtnzOfxHDYYDinTnJBkzBxY5PbDXSxxbj+1yjgEsjTlcTcc4vZbrqKsYL7nXL+YUx7qcvN1B3jyE6/hoVND7rl3zPzCAQ5fcx15Foc1yc0pi3FMOCmGzHVH3HzzjVzzpDsZDQdkWcVsL6GTB1ZW1ujMHmI17dFZmOHQzFWYZ5w6vUw1drAhwZ00zSCMybMO7j5JNkjTlPF4PAmql2VJr9ebfC7tadtDMoQQ6HQ6k2SBXq9HWZaTIRO63e6k3WuSCdrtYVO1oB3wbycltKsVNNuWJDFZs6l60E5saBIOmvnyulx8URTx2mKToSCaNtrMJkkVzfY362onNkyO8PrapKoq8jzf8HpVVRveW/uc0SynSfhoJ0HkeT5Z91nfqNZy2tt8KXXyDvWRECt4mVFV8b15iO1Xs4lJkuBVoBiNqcr4GVVVSRXi0F1p1sHrgKiFAElJVY6Y6S5iSUqvf4i0k1GVRawgkySURR00r2I1g04nZzweMVxbpSwLQlWR5SndThcMOhkcXJwjTVOKYkzlzupgjUMHD5LlnXh8pUbe6RHM6PZ7nD69xMLiLHNzs1hiHDo0Ryfv8tDDJzh4aJGZ2Vn6s32qckwxLmIyFAVVUXLT1Yd53z0PMiwKxsM1OtkCaZbTn+lzcrDCiaU1RqVx4swqvW7GkcMHObk6ZGlllaqsyLIkVoM4vcKZlSEPP3qShYUB11x1gNxSqjDGicPklFVJUrdbRhUTZb0izbp0Oj06eZ+828UDJJ0mCA3U4fHUUgYrq4QiVmFL85TYGobY05yMcVVwenWZU0tncK/opHGMH/MQK6/Vl4nu6/XPmrzl+l/if+v83aYiAOx0DRu3M5RDPEnxMMSyw3USZwppjpcjEi/iLxNvXVjFWVt/bnb9ZmdPCFgTiN96yybvqR3Ib5JsoqZuVf3br355PfkAnKbCQCuhoP7pF9uM1rTutLe3bl3Xz6aTDbH1fI0Ni7XJym19oWB1UkD9HfRWYuuG3xCtJA73pPV+mqT7+r3UFeImy6l3jt
fXozERxiCU8bIgy7BQUKUphecUaQ8rqlhh0NrH0PmjpAMRERERERGRjyPuzjvf+U6+/du/nZMnT3L77bfzhje8gT/8wz/kyU9+MgcOHNjXcj/8wYfJ8nnSLJbWTiyL1Qe8IHhJ6h2WTg945MHj/NXrP8L9954kTXo89c4nMzfToUog9YTgTlXVnXomQT0nrYPy8aaNY5ZgHiPiaVIXinTDLMMSSJOMNEvJslhSu+lRa0lSBw/r4Hso1+8c1T1hvA4OJWYMBqt84L3v4IF73gVhhX6vy7XX9JmbPcJMf5bUUsqigBnHvYSqohwNSanIEqfTzTGD1ZWjvPvt93PPR9/Ojbc8lRtveTLd/hxNUkAzvEPcrlDf3HNiLYeEhHr8+CQlyzOwhDRL6x5p63edYq9ao3m5SVpoxmhvXosBBquDIGOWlh4hlGOSZIZASZJAmgOjcew1Y3HLhmsDHnjwPp769E8k7SR44iR1AC8JBll94z+ph8WgHneWOvgVIHVIPMWSDLOKJEmbjd00iL5ZkHwvwf7zETxo99icfn277dksuL7bbd1pvul9s9teoLtJKtjt/Lup6LBZr9F2L9ndbld73ukEhc2WE4On6wkFwZn0HE3qnt3N7Wqvh18gWb+BG1rBsAQmveISq5dD7Fk+kxnPu6bHjckIH1SkeQyeLsx2GI4CMws9umlJt9cj7/QYLK9QjmIv66TfpTM7w2owxqMRYTTkmiPzrCwNOHNqxOmVgqEZCRVV1+gfmCWkGVZVVEVB3j/Awi1P4eY7nsnhJz6F2UNXkfV6MRlpw11ycA8xMDUcsfrwvZz+8HtY/ti7KM48QFg9gZeBtNfBCCRpRiePCVSWW2xvZrrMVhU+Lun1ctaWC8pgJN0cSw3KQDfPIDWyFMoiMCpDrDZQBbz0OLxCKGOyWfA4HEFmeICiDhSmaUwzK4qKTgLluIwJZqnR63coikBVwbAIZF0gVJSVU5YBz7t4Fdvhoihjr9O8CyT1eSUGXN0yQhUoi2V8tDb5nIfLp8myhLHHIF9mUAVnrXI6mZFlKbMLc/SspGQExRojT+L7sIS1oqKbGWVwLK3H/Ib6vBYTxhwnZB0684t0zRiu5BSnAxkjxlVFqMrJ+SCZRBSmQjfWTi9wklaSQdPm1596DHjUB3uSpDhQVgVFURKSlJUzp/HZWbIs58SpUxRlEZdaBUIVSIBR6dA3OkmKhYCPx9z3kY9w+w2HMQ9kVOSdjDxP6eVZbPkTI/VAWo3JKJid7TIIHQZJHKf++AMPMD/ToZNnhPEw7pukw2pp9KrAbDenSo25bsaB2Q4zM4GqGFGMRxTjMUme0U+dPHWybi+++6ogTyCtAt1uh06ySieUVIMBxZklQr9LvjhHlhmJ5XVlo8Btj7+VYx89CV4BgeG4YLbXoapCDMr2uiwNKvIEevMLJCeXWbj2FhaqIX767nj9wXp73QS9x+PxJLDeHvagaa/avfuLohlhO/57M1+SJJMAfxPM36xtbSoUtBMX2oH/9lAKzWvN9jSJb82/2aSdTCbVGtoVGzZrq5shHNrtcbPcrc4p7XmaZe90TmgnS+R5Ppm+SYhoho5oltVO9thqPZu51IkHed6J21FX1KlIYxCYkkCJe1jv/F4fXx7qxK7ghOCThJZentfnu7rnfGJknZQkTejPzoAbaTenqgpSd6pQUVWBsipJs4zhaMRwbQWvkxKyNCWZ7dLrdzByiqogT2OQOoQ4jwHHjp9ibn6eLM1YXhtx1VUHqUIMA68srzA73ychcHp5jRtuuInZ2R6jwQoLB+e5+pqrSZKM4dgZjwrStCBNjMHqKgvzs4RHTjN2I00SDhxcIMmS+H0lcGp5wJnVNW686iAPn1rj0VNr3HnbQQ7nKcPhiLIystxY6PfJk9OkxPPv0lLg8OI8BQVrawPmZvqMRkOKEOgmSWzfQkJZQZIaWR5/Y6Rp/K5UXtLJciyJIVfHyDt9itGYrFMQrIS60kJMfARLE4LB6nDE8eUzDMdD8iwjqUKdvJBQf6qTgDVkda95bwWrw+S3TCzNHzY9fre6Vg3B8WoMndm6HcwBw7IelMPY/njSZHaetbz1IPj092ZyldVKMmheY/L6esy9faYLG6djvaZDnDaZ/Fuz/vV/Y8NZMSZLNFsSf2tt+I67s17JoQ7qT9bZai+sTgBvlrdhJe2VT21Ms2YPG65hN1aGqfdnnYTdfgcbP6/m39aTEtsDXMR9VEGoCGknJkkkCXjALY3HjqWk1l5367dE6y3tl5IORERERERERB5jQgicPn2aEydO8Oijj/LQQw/x6KOPcuLECU6cOMHrX/96PvKRjwDwyCOP8H3f932sra3xtKc9bVJud6+KoiTNAmmWk6Z5TAogJzCg8iEdm+XYI6v81es/zP13n2E4HPOhDz7ImaUzLB5YiL15PFDnERAwgtclLSe3icAIxC4dVX3TLfYoTZMELI29/5OkrnYQb6o2vZeTxEjSJlC/Pt5xLKPe3HxJY5CyCgxGa7zhtf+PD77vbzi40OXxj3scV119hHJcsrqyzJkTx1k8cBCjwixgC4sU4xHjYsxwOODA4iJVUXJieIKiKCjLMQ8/9CFOHL+HB+55J9ff/BRuuOVO5hcP1Te7PI4FC6yX1ayDTkmCmUNqpJbTVGuIY5w2ISpw4vupQiBNqW94GXgsLx883poKeB2IDYyLEUunH8YsY3buVkbDExw4coj5q27m3W/5f9jqaFLtAA888uhDFFVBJ5nHvYoVKszjcBhJEm98T3p5JSRUuBmBKt4MS5INN/ySJAPKTW+Qxs9k69tf5xJA327e7YLm29nuJm/7791WNJiefrOA+262oXEhkzS2ChZtV41hs3VNv9fN5t3s/beDV03CgWEx4BvWe5ElFm+wx5UZlvj6ndvmZULdo7DVa7FJvrH1fnZ5Ytw5n3OTjbBRQS9LmOum5N2clbHTn8/omGOVYyFw+pETDIcFWZbiSUp/ts+pMYSqICnHzM92WD29yqlTQzw4nW5Kx5y5Tsbhw7P05rqxGknvKm751Bdyw7M/jdmrryXp9IAELIMk9k4kTVs3vCEJjoeSbr+ku3iIQ098KmH8EtaOPcqxd76F42+/i3Tt/tjTPs8JoyLeVE9SRqOKLAtUVaDbi8M6rK4VdOZmSDsZoQKv4hAEaZ5BqKgC5HlCnsYkr0DArKIoK6hiclneyerezyVplhPKCut2KCvIerFkvFeBLDF63T5VMLI8o0yM1A1CwWitogixdLClgTQ1nA6DoqSXx4BGksT2MYRAliQknS5rZ85gFkgsBo4GK0ukmZGnxmwCR8s6mAl0zegdPsgtT7kTVpbJzjzMuDKG1fpuDjG7hXHh5HUiQFFUJGlSxyfiNiRmlHUliQBkeYd8bp5y7RSJZVQeYjWOyTEOTRihzsJrH6q0gy0bIyDNRE5VFpPvSBO0HawOOHbiFKPhgLKKvXl7nZw0SeM6q4qVpdMs9nsc7hq9FAalUxQxGJZlGXOHrqUcrbF68iTD1RGrg5LFg12SxOphAjLyTpck68DIIY09M
E+cPE1WDUk7C6TmkwS4fifFvAKDQRkYVyWpl7zzIw9w+lDCdQfnuea22+ldXVA5nOku8u5HSh4+8QD9+waMQkra7REIrK2sMqzgxsfdxvW33sjVV13F2soSxPBt3E1pTEg0HzEuS+qwLMVghTB3kCwxRkXBfJKSZRkzaR6D2kkCM3P4ygCgriq0HpRqqgO027H2vm8qCLQDT1VVTZIN8jyf9OZvAv9ND/+mnWvPB+uJBU3Qffr1rRIFmmmabW4qELSrDTRVF9oVG7aqDtBOjGj+vVnu9GvN+5s+zzUJGu0qQJutp9mPzX4GNlRBaDTDT+z23HipEw6A9SoQk6vgBK/Wo5uh8np4IKcqSqjG5KnFBC+Llb2qakSep7GqRxlIsbraVUKnN0OaxyHIQhJf95BQhYLB2ipJJ2c8LsirQCc3HnrkOLP9LiSBPMvIujMkiZNlHUYrsdpFUY7p5glmCcPRkJNnTjO/tMh4OCLLO3Q7XaoqDgd0ZnmFG2+8ltHagEE1IM8zxuNYwWZ2doHhcESWxmTBmV5GWcVkuEeOHosJsKMBh+Z6zM92ue2W61kdDCjGQ0JVsDYcUlSBmZkuVzmMx4GVwZCF+Tm6WcaJMyvMz8+Spin9Tsb8TJdrD80S3CnGY8ZVyfLKgDTNWVsbxCEbspwQIMsT8rwHSUZRjOMQCEkGVcl4NKIYF+RZDNhnnZzxYJXR6hAvK6wpq1SOCR5iDkmaU3pgeTDgzOoK7oE0SUmtHkbBU+pfMTRDQWExiXASeHbWe/Q3197175zNj+NNhg8pRyRJDvks5mWsjOH1d6szR7V2HGaujTPUMe+zr+3q3yn1RBuTCNji9fZ5rjl/tQLu1iQa1O+nFVrfGO2nTl5oJaY2iQOTAH69SE82TDe9n7x53xhQD+dgyYbEsiblL660/VZ8MgzDegU91pdnMfiPpbEaxSRHYf3cvp5AsPH6tqnEMHnXVlfnqudttjqE+B6TNF6LuQeSpIOVYzxN8fEaSdajSWfZsI3tZIdd/k7ZjJIORERERERERK5wTS+ne++9lze84Q285jWv4b3vfS8PPvggZ86cmfQK2yzgeP/99wPxBudnf/Zn7/9GqztxzMiMNO2QWEpiObhThiHBS06dGPC2Uw8SigBuHDu6zMMPH+PGG6/DEiPx2FMoaXp8tPqkQB1PIQAB94qmvKwlKZYYaZqTpkl9EztZLzFpsTJCYslkvOV40xqqqqzHyKx7DdU37cqq4L57Psrr3/CXjAcn6d9+Hffd9zFOHH+IJMk5cvW1LB46TL/fozczw+z8AnPzi7gHls6cYjQcMlhdxSxw4MACSQJHj6+QZkZVjXj04fexvHSUM2eO8rRnfBqzC0fiTaXYpbq+MdV8Zh7ft8VgvSVNgoHXfYHCpLeTB6fwMVUoSLOcJIlJFp4kJIE6yJbggUmp4+FwwGD1JDPzR/iil/4g7/n719Dv9Hni0z+Nuz/0FpbPPBiTNgwSc86cPMF4NGZ23iAkQAVmJJZS36OLm2sWE0LSDLfYYxYLJCGtAykZZVnG44btA9D7tdve89vNN72M3fTO30u1gLNKrE5Ns13Qfjev77aawFbbvFnv091Wn9jspu75qqSw3TxNEJM66QBie5I2STjUwy1Y8y+sJxzUN4qbG8dxko2BXDdIDZ52sMednYIZoNNNmeslzB2cIetkWJqSdxOKQYkH4/TRZargdHpdxqUztzDHakgZjkaMzyxzoGsUBidODqiCYVnCjMUx4q+6ejYmEuQHuPa5L+bm5382/UNHiOWPu5D1sawD5nhVQCigHNU3jR2SDLcMshRLe3G+UJEUI+Z6i8zdeDs3vPDLOPa3f86xv/ptipP3Y5aQ92LCgWOU40DWyQnAyukhWScnyTPMoSqhLCryTiwJHNIMy400S+hkKeNRQYJRFhWjsROqMbMzOQEjzVOaoWHK4HTThF6vSwhQjUakXtLrdjGLbUblMYCZVIGyrFgdx3Yp7XWhCqR5wpiELIk9TsvhkLTXJenmjNYGdDodRivLMcCb54RQEYoxlEMWjhzhY6fG5AFKd0YBlovAQpbglpHkPbKwTJWmWFVgwNoo0BsV9EOIpfuz2HZXHhiPCzp5xqACLCU4VO4MR+M6aARkGf35eU6vnCLv91gripgoYXXS3SQ4nUyOS7P1sMwkztT09qyHx4mBqHj8lmURe1gmOW4JeZYTqopFH5LhjAcD0oV5Voejet0piZV4VdFJE66eTclsRFo6WQKQcMutt2F5l26e45VjR4+SeSzx7sEpi4LUnCxPoSkPbglJlpPkfTxN6c7O1UNpRKEKrA3HZGWFl2PuXBjw+DvuZHVlhfFwjWA5+aFD3HHHEVICwRNWTz7KsWMnOXr3vawmfVaSinB6lde9614+tuQcOlJxZjjk2htuoJOmXHfzbaRhhCWxClGoCqhKqiqQpDE4neYdqrLCU6jKwGBYMhyPme0GSDMqjz3ri7VV+kBZV2cyi1U9mkB6URScOnWKbre7YfiDphJCWZYbyv6XZTkJkLv7ZHiFsiwngfjpNrGZJ01TZmZmWFtbY2lpicXFxUnCQLsiQBOUb64Jm2mmg/XtoNxoNKIoCvr9PmY2qZAwfR5rVyFov948mmSBnf6t/VqTsDCtqdTQTlRotqepatD8e1EUk32w20TFvVbhOd/SrAO0A422MbDoYFVJGUrKcox7RSdLCWVJqAPgo+GI0o1qVJElGWmnF7Ok0gx36PV6VKGi1+sDRlGUDAdjhqMhaTnGkpTBaESaJWRpwmg8ZmGhD3UbUpQVq6sDBqsD0nSG5dUVijyP7fLySjx2BmvMzPTi9V6IQ9uMClhc6NFJE4YkXHPVIVZWVjBgdbBKN8sZjCsOHT7E4uICg9UBedZhVNbnglBxcLZH8JRrrzrA/PwcwUssBELapSgN85yySpifm2U8rDi9PKTT7ZB2Ojz08DHSrMNcJyZdDCtnZn6ePIGiAg9GliSMRiWjcaDfSSmqQBUq0rQbe40T8KpkPEro9uPwEEmSxGvwoqjb5ISVMyfwYcAq4u8Tc1KqmEBtUJRwZjxkaXWNsgiYJWQGwRwjqxv3ZPLbJF7zxyC8TxKz24mTde2bsyoONOoL9Po4ahIQimJEt9OLyW5JjoUCki5NcN8trROd62uo5s8NSQJx+TaV/LYxWdY2eW0yZWtZ7YB/HNahHiCkldiw/n2YZvX5M/6oaic11OfF+jw5WX1dHaKZN+7KeN5lsu/X32cT8I/n5KatXN+o9hZ5faJev+RMJhMa623tek5hgllMB3BvXUdbkwhQ74s68aT5jTz5BJrdmCT00pThaIhXWUywDWUcdiGdw6rh+u/r9rX61PL2Q0kHIiIiIiIiIleo5mb0O97xDl75ylfyJ3/yJzz00EP7CtTedNNNvPCFL9z3zdVmzO/EEpIsw5I0BqGBykcxnB8ChKYQdMrqSsnd9zzEMz7hyeSdnBDK+sZSvLeS4FR4XZI6JhuEqmByYyih7pdYJxXUN/2aQLslTG5yxcoATU+Z5v+x3HSgounuMR4NWFlZ
4qH77+Gtb30999/3MKNByYHZ09z8rCdx4OBB+r0eh6+6mv5sP45PnXVYXVvjgQcfxoD5uXmGg4KV5RWWz5xiMByRZhk3XH8Ny6tnCCWsra6wunyck49+kPe9Y8B1Nz2D62+9MwboSXBvVWBobjJZTKaIN5oCeHODv745FKpJb95YSjyQph0sSUg9xdO695SHSZWDsioYjwaMx0M6+TzX3/wkHn3ww4TRiLn5A+TdebJujlkRkw4SY21tldFohGEEPOZHVBB8/YZcTI6gTkZIsKrCLPbC9BTSNCNJM5IkI1gVS99vEVjY7O+92CrYvVkPp51sFrTYKWlhq/k3W/9eEgr2Wi1hehu2C9xPr2Origq7ee+7TZrYbL1b2WpbzeLN2MSakrjr0zc3Ytd7jzKp4DH5DtEk9ECSEHsxUt8Kt/VBUFIznnqgwzPnnWTgzOTGwkysQNKd6ZDUXam9iD3vq6KkKis6eQxezB2cZyVkrA7GDJfWmMsTQrfLsVMDiiohVBVzHSOZ6TB3cIYk73Lg9mfyxM/7hxy45fYYPE67WGc2bvN4hTB4FC+GUI3wUEFo+rDVd8AtjQlaWUxSSHrzWHcOujNQFnSSVW749M/j8FM+kQfv+n1Ove3PCOUyEEt151lCliUcP75GrxsDSDOJEzyhHBf1kBWBECDJO3SyjPiVNvJuDO6WayVlUdLJLAals5wsTcnzDE9S0rk+0PTQK8Ccfq9P5U5GRVUZwVLAKcqSooCkbh9DiEO1VAG6vS5J3sWDM64CYW3E4swso7TDYDBifOIk84tz5PnBGMwuR8zM9DCDe04NuSF3CnfWKuNA5Zg584MB6dJJigzmZ/rgFf0sIfUSQkWaGGkCScy4Ao8B6yQ1zEuwGChMgKossTSPAWZLsSxjNBwyGpRUC0UsE24xsNQcv01HyEnwqQ74xOd1UkK9L7yumNNME0vjxyN8OBoDHWZnOtxwaJ654yOWVlYxEjIC/d4MJ/0kWZbSSVP6nZybb7mRR+7+EB2GGMbBxVmuvf5aQlWsJ7xlXU4OAwcMKhLKkOAYofKYbLI2wPuHIMm598SAxcM3cO01V7M6HMRtxOjNzGJZGnsVu3PkqoPcccctUI0IoaSyGY6eKen1exyaCVRkpI+7hmc968l87AMf4Df+35sYDeMwJWtrcGKYMDe/wtEHVnn0/rvBUhY++EHmu8bM7Dz9uXkePnGKYyc6LK0MWRyOKKoSD4FiNKJT91hO04zEHKtKxmWgmybc/8gDpKMzLGTOsKjWk5vqtqnT6fDoo4/ypje9iaqquOOOO7jjjjsANgxV0AT9J1VakoRHHnmE5eVlnvnMZxJCoCgKzOLn2E4amB5qYDQacffdd3PPPffw4he/mOmqCNNtZ7vaU7siQjvoX1UV733ve3F3PuVTPmWyjdPtf3tbph+bnePa58HpoRna/29XM2imByb7ZHq5TUWI5rXBYEC32z1ruW3n49rgfEvTBDwjy+p94xC/7/WwVQHGVaCqYiWT1GI7sLIyADfKccFoNKYYlfTzHp2sQ9rNybtdPMkYF+PYFnc7jEejegguWF5dwUOJj0q63R6nTp9hptej18vicEKz8wSvYjg4TThx/AzHjp1kfv52ykHF2BOWVlcZrKySd7osrQ5YmJ8jSxKWllbozvTpdBJme10cI8tzvAwMB2uMRmOOnl5mcXaGmX6fUA5JWGBpeY35A7O4w43XHmF15RR3Pv5GHj25ypHDC4yLgiw1ejNznFpa5diZFQ7M5hxfWuXA3Cyd1FleGzIs5pmd61NWBRljrrtukVFRsbSywtLAOTDXIc0CSVXSn+kzGlX08oymnH5VBEIo6c8l4Eaep1iSxKSiKpBndaU0N7L/n70/j9Xkus/7wc9ZannXu/a9vbJJNjdRJEWKkiWRsq3NsSxZird4NOOMYg8iw0gmRjDAIH8k4yCAbQQIEsdwkonh8SCKs0wib7IjCT/LixSJWi2KNMVV7Gaz9+Wu71bbWeaPU/Xet5u9UUksB3gfqXnvrbfeqlNVp06d+j7P9/nGCeU4R8kYETksFVKE+UCYHwuIBIVzbA12GIx3ETgiqXAiuAy4Zko9FcIAonnmzfbbPUECeBx1+blazNcIEaaeaF5MP/M1ea0leKFRLgMR4V2Fl3FwQCi30Wk73A8iiAH2KOmry6004pir532vdV64nrB2j7JvfiqE8DNbnC2rwMzvjYvCbPNmx47ZRVcKHPbK283e/3vtaX730/M8s6PpcTSLGjHHlW2b/X7ThFkxQXOtph/OHIhoBAeibrOH6cvXbDvr6y5rB75YC7IqxusIUZXYYoRKF8P2xbVKPFx/nHw9mIsO5phjjjnmmGOOOeaYY4455pjjf0N47zl58iS/8iu/wn/6T/+Jzc3N/6HtWWspiuI7DjSIOoiFqMsaBCPXsG1f4HyFkvoKYtAZx/ETFxiNxyzFS+G7hCCYIthaSwcWE0gUH2qh+lp04J0gFAn34F1N1AfCHuHBN1sJbXMWhGwySBQ4V8fzLGWZMdrdZTjcZmvzInk2Ah/IACdgOPFsbw/pd7vITptJltWiAEdvaYFWp0u/3yUbj3j5xWd49vlvYy3EkefA/v34qsLZglasOXn+PEJErO0/hLWWc2dOcOr0ad4qJLfdeX9Nviu8k7i6jAQiBJulEIE89KImsew0/hT0CcEFAiHwzmEpEU7hpSMSAl8Tp0F0IKiMxVQV1hTkg00+81u/zuWLz3Lb3W/kwpmTOBsEJYIKpSFSYKoCU5V0+4sMR7uBQBES6xzO25r0BSUkSoVsLJBYVzKNBNZBSVmXxWhI3df0q+uQJTfDtfrxrRL31yPlZ7dxq4T/zTLyb9Tm1yOyuB6JciMHhet9fqPtXr38ei4I1zt/t4obiQqu195AWs0EdGs0eX0Oj/QEIVO9vmtI2pCuBi5sQIogLGiy5ZpYu/Chb79xIeLRBUFqLVEsWOwlCOEpKs9wkNNpBcLd+yC4ybMqOAJIRXexy0SEOtjZKGOpn6Kl4dxmRikjlC/Bg9KK7kKbqNXnyPf+CHe+64fQnX6wctadIPkZXwiCgyrHmxxXFVhT4G2w1A1jQyPKUgilkTpG6gQ3ThA6QaR9ZHsJ0VnAmw4pgmM/8jNsPvAOXvndX0ddfgkpPbGCsxdGtBJJ5T1ah8x97xxKgNISJyQyUmgFUgcnEyEFWkWUmaEyljTRKAlZadHekrRijHWkcYxXGiMEzlYIW5EkcXA28FAZR4FBRhprDLmRKOnxFirroayQDqT2dLsRIk7AOXS7y+blDZI8o7W0SrY7wORjxOIixSRjcPkSyyt9km4XVIT1AuMdxkPmPJUTRFpQKkX76O34bExry1BORkQKhBNYE1wXPCKUU/DghWA4yfGdQFw7qK+J54UXT5ImLQoZEScdvmddIfDYqgzPJdf00pm+3TBOUJdeaFgQpq7b4d6oZXmioVQcztVuMsB6LyXSilanTbebsrLQxetwrlpJShRpnDMU4xHvuOc2BqWnte8ozz99gmORpKs8+1d7mNEmmTLgBWUZhCT
7u5pECbJJzsbuhLERqFjivUVKgdaK3Bhuu+NuLl06h1MxUhQUxrGRexINRnjyyRhcxdLqMkl/hWq8jS9Gob8KhxQOKYJDgZA6EFrO4EyFdxZjg1DGOoFwDqU0panwrmKydYFKCja5SG49l3ZGsHORE1nEVuFI2y3uvfMgWjrwQTQilcZ4gVYCU3qcEKzddhf5yScRSEyT3Ss81OUJyrJkPB6jlGJtbY0XXniBw4cP0+l0cM5hjCFJkimpbq2dljYoioLd3d2pKKAh0htXgyaLfzabvxkXjTFXzOVm15sVGDTbFkIQRdHMGCqmnzf7Go/HtNvtK8b35t90nK3LHcwKCJr9zZZQmP3ebFtmnxmzy5p2Nu4KAH/0R3/EY489RhzH02XN+kop/uzP/ox3vOMdJEnC7/7u7/LII49w//33X7GN6+FqoeN3S3ggUEgl0LPPvVp85L3Ha40xGmsd1lYoKXEeyrzAG4c3JVVVUVWWNArCXWMMOklJOx28iphMxiSxxjnLcHdElMTgCdc7jjBVRRInjMcT4liSJC084TpPshFpmtButcirWgwsHcYapLW0khikpCqDYDVtxezuDkh9i1acBiegOKUsLbmp0DoiywpaSUwcJ0SR5sLFyyjdYnN3TKu/Eoh+LWl3OmSFYKnnsZXl4vmLLK8skKQdeg6W2zGx8FgvibWkFSnaZYX3AoXnwPICEofUgmNH1jl/SbOxM0CpBfqdFO8N1jmKoiSOVD1P8JRlSZRI4ijCWpCtiKTVRWkNItxzVWmQKsZZiykLBAqVxOhYgzNIHEIEAa9VnslwxHCcUVWm5pEV0oOvS6EhGqFZEJ74mcz/kG0fxCZe7q2nEHg3Q/uLmVIJUxcCMSWyfShoh8DgZRfpHVQTvAvvXCSL+GpCmE0122mEB3sCgb175drz3ODuI7mVW2r6PS+mbQ535pXi5PAuBI0YPYgo9u6iWXJ/ry3NvG+v/a91XJiRoc4IqmVD/lOf//ozPys4qN9vwlBy1bx1duyr35lDQcFZYeFeG8JRh3fV5kMxPRc+lMGYdVFohEkioh9VFFYhsEgsUmmiJMG4AuubvvDaefz/DMxFB3PMMcccc8wxxxxzzDHHHHPM8b8RGqvYT37yk/zCL/wCL7744v+UYMHp06f5pV/6JX71V391aqH7eiACw4ZEoaMWWieoKkJYhcNgXEkkWyH44xw15czZ01tsbu2wtLSEFBrnK6bZ+87jbQj8e+eDfep0jw7hFR63R6TXFsV4hfBNYGs2G0fWJHcQHghCoDqbDLh0/hRnT71MFGv6C4sk8QKtNGLfWodIKR556BhLiz0ub2zgsXT7DlMVdDodJBJrDVqCNSVRnPDY448x2B1xeeMSQmkQhryo2BlkOG85deYyKopY7HeYTLYZZwVf//JnWD9wlE5vMRyGCOyRd65OhAliAYlEiJD5g1M46cBanLc4G+jVQEQ4sB6PBR1hzUwgzUPa6jPJc7I8w3uHqYacfeXrZPllTh2vuHTxLGU2RtRlEyLtiSOBdxapNcsr+5jkYypjiHWMkAJvPNZ7lBSgFBIZBBA+1Hh31gejhqngQIa+c1W95ddrbXw1IX0t14Rr9tsbkPK3Goi7kZPCjRwPrred6/19q/u92bZej8PCjdp7s33d6hhyLZHCzZwYriuyoI7p1iKBEK6Fxsk+xID91MkA31jwhm/vfVdMg8ONkCEktAnuW4p5bFWxIBw6gv5iC1taIi3ppooiM1T13855bFkTsx72rfTJZUxVVJSjCR3lSKRlZEB2OvSLnNw7VtZ6tBe76LTPsQ/8TQ4/+k6kTvBSgVC4fANfDnHFBFcOMfmEqphgygJnLc75OsAv6rFCBMGBlCgdo6IEnbTQcYooJ/jJdhAfdFeht4YvJ6ze9xCdn/1/8crv/3+xp7/K+bNbpKmGSJNoSVU5qjzUt490yOK3liAEqCpKWSKURkWKbFIxGebgPbGSVMGLPtQbLw1KhvIMWoUgfjnJSHDIJKnFYpBZD1phnWOcV8ECWipGZYETEmcrEgStKAFjkNqgtMIraPe65EVBPB7iyxwtFULH7G5tYUdbRKsdOmu3M9m+jBdQ+b2EQ+McSgZiTimFU6G++DTj2oMrSnbPXyCpy/t4H6Rx+SSjEBof1zbVHrTw5BcvMHiuZJAXqM4So0fvCGMgImTCSrn3nJShHwVSIfRJ5/0V5ESzvEmCDB/W8jwf7PXLogLg0GofpSq01rRjze1rCwxO76C9JxEVaylc8iX391NujyJe2d3lq9/8FsqFUgvrKx2s8xhT4KocENhigrcFSaIZTSYkacr6co+nLQxLT+klIgoCEmMsT37ja7zx2AFk3MZPhuHUCIFTCbmBXu10UFUVUimkUnid4gQkkSKOY6SS4KswPyAIJxtRh5ZQ+mZscEgJyjvyyqIihWoIM+cwVcXuwLMzERTmOCppc+z2A6DDGCBkKAeEKfCxozKWnd0hlSyIjcFFikzGKBlcEpy1VFU1JeDX1tZ485vfzBNPPEFZlrTbbbz3ZFmG1noqOtBaT//ev38/t99+O0VRTAUBSqlpGYbmGWetnZZqmEWzzYbsb0j7ximh6aPGmKkYoSnDkGUZ7XYba+0Vn4fz62s3KTnt/w3ZP9u22TFaKXWFC8LV4oemnc13mr+bNs+KFhphxlNPPcUjjzxCp9O54viklFRVxTe/+U3e8pa30Gq1OHLkCFrr15yH5rw2pSWqKtwfN5s7XP2cer1iyNnvNstmz9nsciEEUodr653HSYeTMoxHyqHqa2lMhcBjrMdYh6sqiskozB+RFFVFq91CJzFoSdxuIZQmUpLSFHhnmEyGiEyQZTmDwYhosYuUik63jXUGqSVVZWm3IdKa7WxCmWdIKeh20uA2EMcUkwzjDfuWF6m8ICvr0i5CBjGf1qysLmFMiUNSWcdgMKDTa9PtdOj0unhnqEzJ7jCjPcq5cHmb2++4k9PnNlhd6hDpCGMLtrZ3GIxy1teWueOO2ynLAikVRw6ssr21g1AKiSWOE5z1WGsoqxznLJVVWCtYXl6gqApOnr1Av92i3+2QVQZv637tPUpKcJZxXrDa6SK8Q0uJTCLSTju4CnlBYQqscURxi2w4xlYujOsxaFUT9h6QHicchSkZ5yOqqgx9r/5Y1ER040hgG/8l75D1MkcQDjdEcyCom77TzGUacVrdRwV7v1/ZMYMIA4W3JV5GYHK0t7ioG8o3VVkYD8Xs/bHXX6/ut7PY+7t5L9t7el35+ZUQ9btd027hm/PTzA/l3uQMQVOOwU8P1++9AV4xB772fmedEKa/N5PJaUkKpg9a4WsByN6iWogw035/9df2dtIICf3U0QiCAIFmY1OhUWhLI6zYUyeIpgTFVNAdtmmI8BKEdSSRJkMjfYVVXaTPwruXzTF7Ooj/qZiLDuaYY4455phjjjnmmGOOOeaY438TeO/Z3Nzkl3/5l/n1X/91JpPJ/9Rtf/zjH2f//v38o3/0j0iS5HVvQ2lBkkqsDbb5UkSEvKJgk+18CGqJOhAmUWxvZZw6fYHbjx
4JgeYmC995XE3seGeDoMCHYFJDJgrh8Ei8s1jcNANICEKRZXFl5o1A4qRAijoA7R3ZeMDpky9w+uRzWFfR6y4xEsH61FQ5B/cv0041d911J0uLXYqiYDQekhU5UZJSVIaTr7xCq9OlyCfgHYkWUI7ptRV6fQWhI1ppytmzZ9je3aEoRuzf16IsM1qtfQwGO5gq45Xjf8EzT32Jx77/QwA4Z0NcaSY+570LGds1kai0RLhgLe6sxdpAEuzVHg3ZNM4ZjPB4E1welIpI0w5KbTMah/rm1lcU1S7GDtm49DJcOoOzBaGsA+hEEFkohKfV6ZO0Wkil62ywBIRDKInyIfzpELXDu6tr+TpsQ4QJEdaNdH3lXhtsv1YfvR5uhUD/jhw8riM8uJ5LwY2I85sJKW503DcSQNxI2HAz8v56+E6+cyv7vhZuJFKY3caNXCCm29gL185kzQIihKPrtWecQfw0iC33osZAyC1rwtjUQehDLcHb+4JWVbLQj+l2YorKEScKnUaMx2Wof60DaTbJLQoonGDl4AqFihnnJcVwgiTYOQ8mFSOVoMscm5d0+i26i12S5UPc/cMfZe0NDyFkbfFsJrhqgitH2MkuxXiXcjKkKgpsZXDe1ZmBM8IJoZFSgDAIKZGqQhY5Mh+j45R44Qja5ghT4MsxsrWE6O3D61Vaq4p7/y//d47/4SLx5u8TKYOKNHiPcI7KOGRSEwmuJi50K2TqC0HSSqcB+EhLhPXkWQVK0UkVzolQWqF2RSnzAoRAOluXJXBopcmtQyQpZVFgXBCh6UhQCkXlHSqK8daAUjgURWVoRRapFV5rdKSp8oxiNMJZh44k+XCXYus8S90ElXao8hEohUeQ2WB6AQLrCRnEWcbzX/8mcaS4e0mFUysF1noyb2kVOWkUyFVZn/92p0WlFKYWDzQ9sBdLkkiw4CT4CecvbbLaViRKUhCelZ6wfRoRHsExAFxwE5JqSrg0z9T67sAjUDVRK4QgSVtkVZ2BrhSyLpeTJhE//q43kP+3L5OVHqUijqyvsPmq4sJgRLVRcGl7yCRq88iBDrf3PbK7wKXTp/ja1/+CRx5+AyuLPRSOcjJkMM4YVhleeKwH4z37+ikWzdB4fCyZjMYsp567j91OmWVsDDM2RzlaSTraUlUKU2QIb+l2u0Gc0llAJRUxgrTdqseXCKFjvDPYssTupZqG2umudg5QGq0VxocLqUUYJWR9z0/LYkiJxCOFoNdph8zeei5SWYEVoTxRnmUIKXjppRd4Q8uwVXi+vWOIFWS5o7BhXhZF0RVCgjRNKYoCgK985Stsb2/z4IMPcvfdd3PixAmeffZZFhcXefjhh6dk+2233caTTz7J8ePHeeMb38iDDz7IN77xDay1HD9+nHe/+93ccccdU5GBUoqqqhiPx/zWb/0W3W6XD3/4w4xGIz71qU8xGAz4/u//fh544AG+/e1v89nPfpZOp8OP//iP473n05/+NEII3v/+97O2tsa3vvUtPve5zwHw+OOPU1XVVOAwu80f+IEf4PDhw3z5y1/mxRdf5F3vetdUNPEHf/AHHDhwgDvvvJO7776bp556islkwvnz53nPe97Dk08+SRRFfN/3fR9CCI4fP84zzzzD933f97Fv3z7OnTuHUooTJ07wtre9DSEEeZ5z+vRpvPf0ej1eeukl4jjmnnvuCePuZMLJkye5/fbbeetb30q73QZgZ2eHzc1N7rzzTowxlGWJUorTp09z5MiRqaDi6pION8P/CjcEZ30YA4UiuHvIafkwKT1KaqSsQtaztYCnqCyTwuCNoTSelhLEqQwmBJGmt9IjbndJOl28J7h54BlnY7J8gikN7XabdqdFVhRY72n123T7bawx7G4P6fe74RxnoWyCsRVJpCiLglRLZCsmSWNanZSOjklrAV6URFTOgbMkccqZs+dQKmZnd8CkKFhZWSJNI5QUnD9/ESE1SRwxHAyJ45jBcELa7mC8ZDwesbG9w5nL26QqkLG7ozGmLEiSlJWVZZw15HnBwkIPISxbkzE5kGLpxIrSWPLKsKwkrTQl0RrnPXlp8EjiRCPi4AZhncFVHls6jKkoyow4ivBeE2tNVZY0ZW9arTamrLBlQbAfqIUCtXOBEBbnLLbKGWcjJpMJ1rngKOZdeIZTl6HzDTEdMtrxsnZBACF88JJzEtew8VC/HEn2ygVMGWumAgSxN38S9bxH6xinU7zUeFsQ9Q5gs61p2SRRrysQe4+aa+Bm87m9xdffyGvn843r1J7/j5ydD84I76ZOB2JWpnf9+ffNBc61u0v9TnXVzBPqZ4afXdR8ctWcfsZYIZQNFLNCCpoLe+U+6mWingPsfcE12sLXuDuAQAkQIsw9lKiI7QShFLgSUYsPhGjNlOGYLanw2jn368VcdDDHHHPMMcccc8wxxxxzzDHHHP8bwHvPCy+8wM///M/zp3/6pzjnbv6l14mqqvjn//yfs76+zs/93M9N6+LeCg7ctsQDD65SZZI///NQv1oKhUAihWJtfRFhU0Y7wclASIFEUuaeEyfO8T1vyWm1WlOGvQkhee9rq3DbsIQgmuz5JhhXry9UIGicrWNOsg5RhSwVITzCSxzB5n8yHnLy209z5tUXGA53iOKUyWhCnLa5/egdHD18kOV+i1aaMhqOKCdj9h86SLvTDzV0laDV6dBfWGQyHpKkLTY2LlBMxrTaCyRJi1Y7IW21KEpLr7/I9vYGW5cn6DgBJpw4/hJShtruSmu+8sSneOiRd7KwvA+sRxAsWXF1fWQXBAchOy5YvWspcSoIJYTbC77h3dQ+ntolARmEGXHSRkWaKErIipyqChmizpY4W2Kdxfoc5wweh5AgtSDS4KOY/uIKUgRCpyhL2taitEbIWgjifciMtH7q0CCkCBmrOHDBrUFJj9chOHYzgno2KHazrPgbbePq36/lUHC971wLs4HF12tTeiulI16Pc8D1HBtutu719tmsdz2nhOv9fq3t3ErbrxeovpVl0+3QZN+GwHsggWdEBdR2tDNx+iZcXcsUQn9tPpQKCRxuSb5vSbEkLZ0klBDIc4vWgrgdk00qVKtFVZRoF+5VLaASmuW1PpmKmGQVZlyAc7S1YGNQUnW6tKsCbyyi22L1yCokS9z9Iz/L2n0PASGD25sJvhxjs12K4WWywRbFOMNUJd6FsVAICVIi6zEyWM/XJWZEOMfOWbyUOGuwVUVVHieKE+JWF20d3lTIaozoHcB3V5BCc+yv/y1E0mHry5+gmkxIWxFWSCIlsMYGYYNWgVzqdCjyUEddODDWEClIugmTUUbpPL0EQOK8p6wcUarJiwqlImJpSTsxFSFzeVIZvAzjoHdgqiBuoC4Z0Gu1cUrhvSZptcmyHIFFKQ140labdq/HZDJBCFBJynhnE1xJP9WkC8tYFMVoUJM1gSy3eJTc6xfSO8anXqFqpUTr96CkDC4PSuBmMhKFEFjncXhKY+tjsyD9lLy4PK44mJdob1EWTp++wPI9B4IjgXc1OVXTB4JwXetnmZQKIVXdv0Pv9eEGCn/V6bJ73giCP
M8w1rCxsQ02lNTxBKvyxYUeSkCv10X4kuF4xN133UZ0/BRVWXJgfZXVQ7fxvW+5j+Hlc3zpyRcpM4E4fgpjPPc/8iBrLYFxnsJ5NkcVUmqUcBxdTpDC0k8c5aTi0vaQykc8fP8xks4CxXjEhe2ME2c2QoZ53KUoSiKlkQKss3hXP99VFAg8HMHQoFHiAcJTVWVwh/COSIVs38hLZF0LPE4SrCuJVH0ea/GBEKAkNWkvaXVaSAmV91jjpo4Avs6QLqxnNwvEoBagk4iDHc83LmaIuk/Mugw0pRHG4zFJknD+/Hmcc9x3332cOHGCgwcP8swzz3Ds2DGOHz/OcDhEKcV4PKbf7/PNb36Tfr/PF7/4Re666y6eeuop8jwniiK+/OUvc/jw4Suy86WUvPjii9x+++08//zzPPbYY1MRw9raGp/73Oe47777+PznP8/hw4fZ3Nzk4sWLvPzyyxhjyLKMr33ta7z3ve/l93//9zl8+DCnT5+mcWNohCxf+cpXOHfuHJ1OhyeffBIhBE888QSdToc//uM/5md+5mf49Kc/zfHjx/nWt77Fww8/zG233cZv/dZvIYRgd3eX48ePc+7cOaqq4tixY3S7Xf79v//37OzscObMGT72sY/xiU98gq2tLV599VVGoxFvfvObeeaZZ7h8+TLvete72L9/P7/xG7+BUoqPfOQj3HPPPRw/fpzf+I3f4AMf+AAAd911F0ePHuXf/Jt/w0svvcSP/MiP8GM/9mP84R/+IWfPnuWb3/wmP/3TP8373ve+aQmL7zaMMyih9kqpSIGwjeigcYaqidjAaTMpSvKiINJBnJMoSOIIayTGWqyr0LEiSTVaJ1hjcd4EelprTJ6TFRnra/uoTMHuzmBadmVQDnDOMBoOieKI4SAjjVt4b1la6JDbEi0EsdYIrZFSESlJ0kvZHQwx1YBuO0Xg2d7ZwXsYD3cZDoZIHSElxElMVVZ4J+h023iCYCpNIoo8Z3lpgVOnztLtJJw8vcHGzoDb15Y4cfoSvcVlDqytkGU5/YVFnHPsbG8TRREXL2/TVoLxeASJZpJXJFohxzn7rKXTjum0YqJIBnG0UMQ6wnqDVuH5b61HSrDWYEwFWNKohcBiTSjdkLZTIGY03EaqUDLHmBJrKkDVY7dAVBXVeEA+GWCqUGohlHYStZOBB69qglkEZxZvp0R6mLso9kjvxjXE413ou3v0/JVzJSH2ku2bjwVBZOPNhChexnmPLQagW3iZIO0E74obzu1ej9j3VueF19rHdNyfeR9o3hWvOEbqmd5V89Jrze+vJyyedT6oz3Q9d/R74ovGLYu6DETz+nWF84R/zTYbXbgX061OhQ3Tdk8FCnvLp8c+FQb4Wswwe2EFiTZkVRTefo1B+CDCLGSbyIPwtbDK1+UGrzgXNxaW3ArmooM55phjjjnmmGOOOeaYY4455vgrDuccTzzxBD/3cz/Hc8899790X1mW8Qu/8Avce++9vO9977vl4ND/4//5w5hiyCf+81N4q1Ayrh0FNB7HoUN9Dh7Yx1efOAsCjt6xyHC34uyrhldfvchgMCRttUJwx8tATOPwLmTv+rpsAFCTaXvCA18HaLyzYENWaiCPHHvWmBDCQBaBIJuMePHZr3Lh7IvkeU6RD5hko9C2xXuJteDQgYOUy0uMJyMunDtLFSlOvnKCJtO01+uw7JaJ4ohWu4MUkoV+n7IsybOc0XDMaDBkMBhiXRBidDqLWBcx3BpTlDtAxD333c0CinPnziK15dm/+Crf+54PI9A4YQn8mq0tXH0QEziHoBZVSDGVVohwmDhr6mOus5qEqzO9HSiHUhHOeaIoJklaCKFBCBZX72Q8SsizLVSySHbpJbwp6xrtECew/8htLK/sw1iL0grrLWVlaScp0osp0QvgcDjnQ6ZeXXPYu0Y4IhFSoYVCSnvDYODry0y6Nm6F4L/e57fivHCz7V/L+eBmBP3rLZ3yegQGNyTuryMAuPp7t/L7tYK+t+LAcC03iZvB18F3RLAfnuqURAjOhsDwdOUrAreirpHcOCCIOuPMO8eBtuTdK5LDiSdVgiTS5MZjga6W7OwWtBc7yFhTFSVKgheKUeFZ3t+nilIG45JqkuEmEzpaMi4lptXC5lmo8xwplg8tEac9Dv/Q32Tt7nvBW4S3OJPhyjG2GDLZOMV4e5Myy4IbDHXWtgyhazGNlFMf/Ex6HYEspbE3dy64pJgyZIk6T7JwACa7CFshe+vQWUIKuPOHfhJvS3a++rt4XxHFCi8EZW5RsQRrkWlClWfgPWY8xguJ9xItBV5GeFWSpgoRaVSSIoqCOA0uDlJKIuVJIk1lQMaKrPb+FVgENZkrLEqE7F4pPHGSQhzjhEJoHbLc67IuRV4ixhOSTof+0gLOVDjvGVQVuvIk/QWI2lhjkcZMiTwlJU2NZQcha9/VtIN1+LosjJfBCQERhAfhJygfnjRlZYh0hLaGsjJo4UmkoJ8I+i2NNYKycrWVNygt8c4EEopGTuC5wqejJn0b0mLvYl9nDCEIwJy1nD/5PGvG4ITB2wq8xduKsshY6PUxKmU0ybl05ixrKwtcvrxJd2mFN735zey/+26s7rKyf8jZynFhtEF1dotnqvO8dWnC9uUNNsbheS1laGOWVzgXhBBpS7PS7qByT+kE1oVs+Ue/9wc4efIVLm5s0tWObeEp8wytJGmk6yxSf2VGaHOf18JDvAu17F3I/kzjQLwpIxAuPIOkUugo2NEXpUHrOqOdmlCsmSrvHaYocMZgpMNZh/UymF57mOQFuRd8/zsfZ/cbn0HHEXHkMNYRxzFaRTjnglDBe7a3t3nmmWeYTCYsLCxw+vRp7r33XtbW1jhz5gwbGxusrq7ylre8Ze/a1tjY2GB9fZ13v/vdfOYzn2F3dxetNe9///tZXFzkv/23/zZ1H4iiiLIsKcuSe+65hx//8R/nj/7ojzh37hyPPPIIH/3oR5FS8pu/+ZsMBgO01nzwgx+kqiqMMfzZn/0ZP/VTP8Xu7i5//Md/zKVLl7jjjjv4qZ/6KT71qU9hjJmWc4iiiFOnTvGRj3yEbrfLZDLhK1/5Ch/4wAe45557+Nf/+l+zubnJ8ePH+fmf/3nOnTvHk08+SVNe4Sd+4idwzvHxj3+cv//3/z5f+tKXeP755+l0Otx333386I/+KP/qX/0rNjc3GQ6H3H///Xzwgx/kT/7kT3j/+9/Po48+yoc//GH27dtHlmX8w3/4DxmNRvzBH/wB73znO7nrrrv40Ic+xKFDh/jsZz/LZDLhueeeoygK/u7f/bt88pOf5D3veQ/Hjx/He8+P/MiP8OSTT/Le9773ps+Yvyx4H8pkOJq5QxhfrniO1uJOamGMkpI0iUgTTVsrIg1KaIRKEF6SZRVxXiLjMQtpXIu0QjmU0XhMu92hLEqMtSwtrxLpBIen1WqzsbmF85Y8G7GzW5JlE4ajiNF4xNrqElJJIh1RlgW97gJKibo0mGM0HFCWOd1Wm6qCohgjcGxubuEQbO8OaMWKbqdLWXqMg7SVYsLBs75vBaUihqMh
SgiErYi1YFJYoiRlkA/Jswll2aEsDZ2OptPpYp1juLvD1u6AtX3LnL20ifPBCE2K4MQ1mZS0EsXB9X0YY8I8wBsKI5CEkiOuIf+VoCjyULbHp7SEYjIZU5UWrWPiJGE8LonjCG8cphqDn+CtBxnjZYJHY02ByQcYW4EIpWLC8FMPckFFNhVWh8FP1nOWZo7dlB4QjewsPO6nlVZuXcTprKmdgqJQwslWOFMg0zaYDKIOiOya37/R3zcSr96sTddafqM589WiAinkFetfb35+rXZdc10/IwkQ4V1MAE7MCr3hCmu6GceA186DZ3UCjUQQoBE81ddV7AkSQv9oWlELuKcTgT0XBinC26FxBitilBBY2cXJiFRUdf8P+xXehms+FVhcrUr5zjAXHcwxxxxzzDHHHHPMMcccc8wxx19heO/5kz/5Ez72sY/x6quv/qXsc3t7m3/2z/4Zjz/++NSW9ma47547+PR/+zwnTgzBK5TUCKFQIsa6jMo5HnnLnfQWu3S6Cf1OxB996kW895w7s8WFCxusrK6EwL+YsQSlDsTUcZCQmRiyvZr6oELUVpt1FomoiQMIAoSppacIvxtreOnZb7Bx8SSRjhmV2wyHmzin2Ld2EF/lbF46hzMlZZ5TlBkXzn6bOE7Ztx4seL00ZOOMS8U5lpZX6C5ojLUIKQLpqBVLq8sUVcV4NGbn8nl2d7bZHY7ZGZacPj3EGIGODYeP2lAjd6fA+Ywnv/4FHn3bu4OQQQYHAy9CADfUrIZAd9bBsVqLIWWo+Ol8o7xoliuCCUQQZigZEcUpQkiUimm3e0gl0bLLoQMPs7nTZjQ8x/L6vWxvn0KIUG9WSUmrJXnTI29j3741lJIoFdXkUka3169LJ9Q1Rpt2elsH5UIwTQqBExKpNYhgQ++vEXO8XmmC660z7S/XyVxq1r9xzdkbL7tV4cL1xBLXDj7uLb9ZgPR6+7kZrteOG2V5Xev3m+FWxA3fqQPD62nHzNFOiWQ8yDpzvVlp1jLXNwPNTLBeCliNBO9a1tzeU3gTXAKMEmiliTWUhUG3E6JEM9ga0Y4FSoRs6N5Sh8xJxrmjs7DI+XMbRM4w0TGik3DxwjY9LTGJIuq10HGLfe/4MOsPfg9IHQQH1RhXDqlGm4w3zjLcvEyV58H1REqCu0EQ8wglp8c9G4ymzoK8YiBFIHC1i4oLZQsGG9hiTNzuEXkL3iFtheisIr3j2A99hJPVkMGzf4apHKasXROqirjXASGx2QQ8WGMROkbEMR5PUTmcg1grVNpCCEEcaZwN7dHCkUSKSemIYk1pPU4InLEoKbA4lJLEMkJ6T26Cq4pQCqU1Tmq8tyRRsMEXCConmIzGYAp6S0v4JGK4tcniQptIgUi7QTTgHM6YkF2Pp6ImBFygeJQQJDqUUqhMIKKhLiHgoMSTaI8TEMnQmYrKExlH5MN18hBKB3lPpMBUFmMd1jm8t/WzTaGYsbAWjUhrykPRXLmZP4PYpOnnM/eAuGLd5rqHPuGMxVpHUVTsjias397GlRPuOHKErnK8dOIEaRTRTWDj0gUWel1a3R53PfAIDzz8KN/8xtfYF+/y4OqYZ//iAqPdISORUqB46dQF+vuPsLJ8AuEleV7iLFTCUziL1gmvnj7LKxcLWuYsWWEQQlJ4ReFKUhlcfITS+FnrcMKzPRyLDFMFZF1+JNRftzb0NetBakmkNcY5TH3dJnlF5SBRHuccUniMdcG+HNBaI7XE21Dew3qBmxHojCc5mdcIpRgVhnYMZU38NjKRhpz33rOxscGlS5d4+OGHg3PHZMKpU6e4cOECeZ4zHA7p9XqUZVkfX90uKdne3ubo0aPEcczS0hKbm5tIKVlaWiJJEuI4xnvPZz/7WTY2Nnj00UdRSrFv3z6UUhw8eJDLly8TRREvvvgix48f59KlS2RZxu7uLidOnODo0aNUVYVzjuPHj1MUBePxePqZc452u02SJCgV3EeUUpRlyYULF3jwwQdZWVnhM5/5DIcOHeLcuXOkacrFixdZXV1lYWFh6t5grWVtbY1HHnmEs2fP8sgjj3DPPfdw8eJFiqLgzJkz3H///VNXht3dXTqdDj/2Yz+GUoovfvGLCCFI05R9+/axvLwcxIzOceHChTC/M4Z+v88dd9xBq9WaloM4deoU73//+3nnO9/J888/z9bWFgsLC3zoQx/i4MGD/Nqv/RplWRJF0Q0J0r8seMDVYiRBEB9P+ehaaOtsEA15W4Dz9LotMBqwaAVpolE6QUY9IuUxhjCW2Iosy0g9mCqIDNqtDpcuXgJbMRrsoJVnob/EpAglg3rdLrbKiXXEYDAK44oXtLTEmpJEtcI4sr5KK40xJjxTJllJXjryUYaSimJS0mmnuMEI6yVVZdAqYlJW9UgV3EriOCF1YArDQq/PYDimKsYs9VtIDKtLLc5vxWgtiZVEOMdoOGE8Luj3e3Q6bfJiQpxEgKSoDN45dJxw+3qfrJzQSjV4S6vdptVO2d0ZMBpNkECeF2gpiSOFcQYtg8V94zKidYx1UI6GQZhAKM/gTE6kNKWx4TpgoMxAthFah8dxleGLCUpGSKWR3jEtqjDNnLdhUtKQ0qKZU88QzD7M+YUP33PCTwVaTfb6FPV0YG9+tTfX8c4iVQoqBpUgpcany+DK8D6mUryK2XPcuer+EHvPmb1HkLhyOa+d816Na31+PfHx9D65gXj1evu7WamFWWHAlcdYuwrMOAE0LgS++V7zoG7aX396hXuB3/tE1HMx6QXUrjxBkFC/B3t31XHvzePEVFixJ2Kg7kN5ISCJ8GhiKSh9EEh6H6HECOdACYdA14KmsOP/cblBwFx0MMccc8wxxxxzzDHHHHPMMcccf0XhveeJJ57gb//tv82pU6f+Uvf91a9+lRdffJFHHnnkltb33lK5EqUkphYAKBkhRUwcxfT6MSv7Frnz2GGKsuILf/IM584MEUIy2Ml5+ZUzHLvraLDkFnVQTVBn8cuQ2eoDOSbqwAxS1GUa6mCNCJnK4GpDgJoEqANfQgmEF+xsX+bM6efpdjuMBltUpkBHmkgnaBUslq0rqaoJWgm8hn2r+xjsbnHm1ZdY23+QOI6JY40XgkmWoaKItNXGO0dR5WgdsbC8wrEDRxhPJrzw3LNsbm9TmQJjS6JEUDrPwqKiLEZ4K0lTxeZ2ycsvP8vmxnkOHbl9xiY9WEuLmpyyTmJdk+kp6ixgGWqY15mdTcDMOQtIUCKsoyKStE0UxyglSdJWsNa1I1568Y8oyh2sLdgdDvAWlG4jRImOFK32Evc/8HY6nTaVqaY1kIfjIcsr+4K1N3VRizpIK2qRQShH6pFEKAzCCayQYE24RtcJCr5e3FJGvPevCTDeKAPq6kDmtT67epsNeeScm5I6ZVmGOuNaX0FuzP6b/f71ju3qNtzoOK8nZLjWsV1r/Ru152rcLGPsRkKHa23jVgQSs5CzxyRCthve17H7vXuiyQic8plNFmlNzApCnfflSPL9i4pDKWgBRkrabc3CUkphIfKGDE2UxOxeGpDGkkjAztDSWerikpR
RZhBYxpfGuNJidUSy2GNrd0KEYG21Q+4lcb9D++63cfjt70GoQIYEh4MR1eAiw8tnGW5uUOUZ3jvUTHaad6F8iXdBKBDs98WUsKrXavLhmS71e5/hLK4qKJyrMx8r4jCQIgHRXkV5x5Ef+r9yYrjNzit/gbGGREEUa6I0opjkUFmMMTgX8vOl8Ki0hXclcawxhUM40NLjhAyEu/XoWDMaV7RaCYXxZGWGlxLpLKiYtKORSmKNozIWKVUwf3EOawObppXEywjrQl1z7z1SxriqQggY7e5ixwM6sUSm3eDE4ByKCFeZ+vkhKb0PbgYiBOFLa0EEq2LvQvaxsaFPagG5g93C0YoU1jtiKVhoBbLb+0B0KRNIpEgGUYFzDmc9pnLYmixSUobyAc21asYUMSMgEXsZj8L7OmXxSsFB6O97V7vJ4sdT1wwP2a3WOiaTCXme0+u0WV1ZY2eyy4Ejd/B/fOkb9JKYd//A+3j0kQdYWFxiNJoQuTHPv7rJfXcv8X/70fvQxcuceVvGr/7uEsNLE4rtAb/3x9/gQz/5f+LRn/xZit1LvHr2OUSR82pW8uSLr3DXPfdz75vezrHFw5w+dRKtFEpKNA7vGveiIAigdjQIgrv6hq375V4H9pRlFZw/PBQmCAUSKYKoo75uUSQxxs7MJ8JZss245z1JmpLECcPJGGcrrPNMsoLdnV2+/uwLnB9V3Hb3G7BlSVVZvDVcGuYzpz+0SalwDx45coTl5WVGo9HUAUEpRVEUrK+vB8LSuenPhkSHQHCur69TVRVxHE+t/6MoQimFUgpjDIPBgOFwyHg8ptVqBQcHIeh0Oly6dIlz587xe7/3e6ysrKCUotvt0uv1+PjHP87DDz/Mu971Ls6fP88f/uEfArC+vs5oNGJ9fR0hQomIpt1N++666y4+/vGP88ADD/CjP/qjXLx4kRdffHE6Rud5zvLyMkmSTAUYWuupIKMRA8yKGC5evMhf/MVf0Ov1OHPmDA888ABKKbIso9fr4ZybOi0YE9yctra2+K3f+i1eeeUVjDForfHeU1UV/X5/+uwdDAbcd9995Hk+FTQkScLCwgIQxCaudoD5bgsOgFroste/vQvjQvM/EDjvsGVGkWXggtOBijQSkFQICTqOEZEijAiOSTYJlv/WkWcZWTahKIK7ipIC7yVxFJNNJmipKB0UeUY2mVCUhm6nS5ImpGlEWRUs9nsIAVGkUKXFlBU+iZiMR7TbKYNhiVYaKTVaR4yyDN1JWWjHbOzkJHHE2kIPrRVFkSOkoN1OkUJSlZZeJ6YyBik9cayJI01VVKwt9rjniGdhYQGtPGkcyill+QSBp9PusLu7TStNObS2j8kkC/MBB7GC1YP7iOIWFkErbTPJc8rSUFYlvU6bcpTj8GEMtw4VSRBgynAvWCfIiwpbThCyTdzu44wBbynyCrwP401WYbMxSEcUxXgZ4coCcKAjIEJIW/uW1fS0nyGRfXAME414MCgMccIjHPXcv+4RtShB1KT2a/qxaP4zIzyo5z0i7oAXSFtBshA+8xUQ3HeCKE7Mbmjvb7G3bI+Pv3LfYmYyIq7VtqvWvZEw+FpC3atxLWHB9VwNrr+tWYFG0/JG0DnrMtDo+WoBALJ2nGrORC0GaR7Dwl9xjhq3g+YZJATBWYdmW4Bw0+aE0z07L4amzMLU9054QCGFBSkRNmzRItBSkYoKgaEwIGR0pdhAvLZkxevFXHQwxxxzzDHHHHPMMcccc8wxxxx/BeG957nnnuNjH/vYX7rgAGA4HPLVr36Vhx9++JYCsNZ43vzIPVw6O+KJ/36Z8USjRIQUmoXFLnfc3qOsRkTxAmVZsrU1wVah1rgzkuPHzzJ+bIJSIbNfSVVn4IaMDyF9KFEAhBCOQ9Z2oyE7pA7A1GRFbQiAx4VyDQKECxbdl86/SpGNaCUxkY5QAtqtLpHWpEmMKUvwEUtL++gkEQCD3V0WeiucPXcCbypKU+KqkpV9a/QXliiLgnY7QsSSFE+R5UwmGadfOU6716PXbbGwtMrJk68yHk6INciOxFvHYPcy7VabXq/LJDO0UokzRQjM1cG5JqYnfG2hLhR4i3VuGg9TNdHoncMaE2zTfZM51dhRg9IxOoqQUoRAsxLBihRHVUxw1lKWA2yeB4JK1MlOUtNu91heWQtOE76qCSJPlk1wzqOVwjs/Jbb2jF9DIKwhQhunCk+oB491V/Sn/5Gg/+sNlDXr3ygg2ZBA18u6aoKks/tuCIyqqhiPx2xubrKxscF4PCaKIg4ePEi/358KENI0vaJm9uuxlr2VkgXXEhPMfnazcg+v15ngRtu4kTjhWsHim32vgYTXuKU0ZRZsvX7zDTX9auijsiZ0G5J2ORb8wL6Y/SKIdtJU020r4iSiMg7lXQjYJi02Lo/oJYJOLNgdO3orfXwrZZxVRMIzGYzBOOI4or++yPntDOc8+5dSnJSsrK3QWT3Ene/7UVSSBpLVFvhyhBldYnj5NMONy5RZBr52bPAz5WaQe8QsMggI8EEsUN9zeAciYkpQ+/qENQQF1GOuwdSErAcSlcB4E4lEtJaInOHgD/4UG/+fU2DOoFsaqQQmy7Cmvl9McCkQwVsf6wNhVJUO6yoiZ0HIYJ1vHWmsqZxHaUVReUpvyS1k4wKqkrQDxodETITEC4lxHqE11kt8adGRwIkILwVSagSeSAnSKATty7xktLVFW1RULiFWCUJHCOeQOpqW5lAqnJ9w7pghuT1KgnUOUwWr+VhBLsEYR2ZFIHSVQEahfUbUghAlakGBx0kojKvrxhuM8RDJMH5GGlHTDeFeCMR6k8kor74HRSM28NOfoS80PTv8R6lAmLmpSC30ISklk6xA4Fld6tP2JUWk2B4VoaSRF8iFI9z58LuJlMCakv1H38CRM6eIxUW2zEkWVMLSmubu21fYGV7mzIUtFqzlxeefDaKXKkcbz4WzG1za3OT01oS3vOuHkb1V7OY2R5bWeV6GUh3GOZwHW5XEWhHphrqYqoPqw5w97nCQzlqM87TaLXqdFFNk9KVAuBIQVM5ibXCrsEqHcUEEoQoiCMSsDyKRqqqQUmBKF65j5TFVxWBjh7GV7I4zzl08T6oFW7sTzo+K5mzTZBM3pHq73ebIkSN88YtfpCgKkiThrrvumpLoFy9eJMsy8jzHWlv3C6bPjqqqpmR9Q843zwdjDEmS8Nf/+l9HiFBi5dvf/nYt3gzrxHHMt771LR5//HHe8Y538J//838mjmM+8pGP8PWvf50//dM/5b777uP222/nx37sx9BakyQJn/vc5xAi9OmGkLfWTp9rjz/+OFEU8elPf5qvfvWrxHHMz/zMzxDHcShhURRcvHhxKrQwdfmS4M6y93xrRHjNsre//e2sr69TliV33XUXn//856cCPq31tJREI+r4/Oc/z3g85qMf/Sif+tSnAKbttNbSarWmy5IkQUpJp9OhLEvSNKUoCvr9/tQ14lafew1u1ZXoVjArXPTO1fM3X59zRygrYqdzOettmOdZOxWHeULWvJLBAQYc1oxI2j36aY+icAwGu1
RFgUNhfSgtcml7gPeWpcUurU6LSGuyyYjBKCebhJI5w9GYSAva7RgpFzFVSX+hi/AepyLWej2yccbOTiifkE9GKAkbm5t0kog4jkhjw+bOkFYS0ev26PdbdDoJSauNtdTXS2GdI0k0QmrGeU6sJHEUMZnkpHFMK005eijGOI32bYx1DDc2qSqLNYayymi3UvJsTKRhfXWB3dGYONZ0+316Cx2sE8RC1feXo6wMg1HG8mKPfat9TGWYTDKcDwJn732dLQ5KpxT5GG9LUtdGS0WWT8LcV5oglrNFcNGxDp8XqKRASIGrLF5oRJSCDe83ofuEMk/TjPO6L8iGwK5/No/uhsSmJrmbR5UErLh6XtbMocLvzSYFoChxSJzLIVqr5xQCdAdfDcAVSMz0u9MtTkUHe4ICmvewKU9/1f00+3tzAGJ2JH8tbmUeeaO577UEtzcSHV9/Tnt1ea5GtBGe01MpQS1arB8GNO8+U+Xf3o6m2/T1Fphpm/eN8KA+M40wFj/Txvr7fq9tgvBuZ5EoOwGpQCiktxipcSIh8xItDM7mqKgdtikaSYW/opnfCeaigznmmGOOOeaYY4455phjjjnm+CuIS5cu8ff+3t/jhRde+K7s33vPl7/8ZX72Z3/2loKpv/XxP6HdFiz2FSsrMTu7CoRACs3u7jbPfuskR247zL7VNZIk4i1vu41zZ3c48cImoDh7ZpPtnV16vXYQCXgL0iPrbC4/E9DxnhBklaJ2AqgzO31dz9zXYoPazt8LFwgzBEJ4drYvYm1FVY6J4oT1/Udw1lCVE4QzXLzwKlpCS91BZDsY63CVIVYJa2tHsMbSSltYVyC9psxy0k6H3Z1dvBf0FrrEiWM42GaSGdSlyxTZkH63EwLyShOnEQv9Putrq1y+fJrdwQ5etYgihZAVJ49/i6N33r9XHrROyXYO8CGb0zpfk0hNxqYMJRa8r4mLkB0n6iB/KPYazqFzPmTO0Ig6JBCcG6RMEEJPicxpBo7waBUhpcI6S2UM1pk9YUSdsei92cu4aQh0SQimNsHJOsAaan9fSULcSt+8EV5P8P/qbd3IXeBGy6/nUGCM4fjx43z7299mY2OD06dPU1UV7XYbKSVJknDo0CHuvfdeut0ucRyzvLxMp9N5XaT/9Uj5q9e5UWbYrZy3WymfcKvfe73buNXvTQUENIHcus9731QiQc0Eu2e2FizdhaSnPO9e0dy/INgZQBwrOi2FFFA5MM6ilYJYsbMzopMqFruK0aiis9BFdrpsjguUgDzLaEeSSgtaSx1GJpDXnQiiNCLudUijiCPv/Ou0l9eCYMKW+GqMmdQlFS5drLNZHbIppSB1zTrY+jgkohEJ1SFf7yyNzEIozV4kOVjzXn3eBC4MNdZjS08BiHQpOB5wORCprQV6h27ntvf+JGd+//+N9yVlaRBKIZSm8pJhVZBGIF04+3EUys84Z5BiL2tXJ1FtRe2IkhivNJPMIrWirUHEitwrxsZRUtFRiiSOA7UuFV4qjLVBVCZdyOWtyfQ4jsOoXwuzRtubtLVHRW1k0gp1jF0o2RAsXWKEqwBBYSDSYUxKtCBSwcY4VpJQjyFkssdakipYSwVeShb6KcZDVZR40Ti9gNKqPsce54OswNSiulaq8FJSGYeOY2QjMgPwTWmGRkVQZzUy23dnMyq5Qojga6JKylC0IYpiRB4INCGDi451kESa5aU+cbdHOurSzoe896GjvHphh688+S1+5CM/TZTEKDzdnueO9WO47CS7J05zYVwiCsHi0hqyOsuRtRX2H1pne3dAu99nd2dIXBTs7I7BC/qdDoeP3kEcRywIz/r+/XwuUgghmZQe6zw6aWF3LX5PPTjN5Ax9t37OCzEltUxRcP+9d/COh+9DxSkbG5tEccx4PGZnc5PSCUZ5xbdPnKGq+SBrg3tCaSHSGi0l21tbvHjiVVZ6LVIV7pMqG2PKAoBOu42SAl0V4D2XxobKhUaGOuKuvnSBLBZCoLVmeXmZra0tnHOcP3+e5eVldnZ2kFKS5zlpmjIej6cuBUopOp0Og8FgKjbodDqkaTol1KUM/TJN06kDgbWWsizx3nPu3DnW19c5c+YMd955J1VVYWoHkHa7zXvf+16GwyEbGxsMh0M6nQ6dTgeAXq/HYDCYjrNxHNfPdz9t37vf/W7W19f58pe/TK/Xo9PpsL6+TpIknDlzhu3tbZxzKKVI03Tq7jMrYvDeT5cvLy9z991389BDDxHHMWVZorWeig2ac5rnOVVVIYTg3Llz/NRP/RSHDx/m85///PTYm/02+2i32+R5DkBVVURRxGQyCc+G+jrdSLz3P5L1+x1DhBIK4RhcmEdRu3FJURPahigSFFlFVVp8Pf51+gmtdgvnFePxmEJktNsdeotdhBJkgzFlWVFWFatrKxRVRTbJ0JFAxxohNMZZqrKgKgrKqsQ7Qz4Z451Ha0WSaPIip99tk3balKWj3e1w/uw5VpYXscaQJpqDB5a5fHGDrChrEVpwF/Emo6U1adIhTWLGmUEIhRMhI9t7j4oitndGLC8s0EoTLlzY5s6jB2m1NJs7I7yFdqeDouLC7jYqShmNR0SRot3u8fTzJ0ilZ2Wpw0Kni3GWpNWmKD1lZei2JYjgFDYaj5lkBc5DO9GIdouyMiSRJo4VRVWiwiSBKE4ZDTbQKjxrTFHgTHCXiBKFzytMWQT3IanBhxI+KvY4pXBRgtEx3oXSaKGsRC0S9m46ggedlQ8EdD22SBReOLwXSL9XukdOdVg+2PVf3Z3qeUIzpw+kvwvPFVchozY4g1BpWAbIqIMvdnDslbS7YoNToruGDC9ks2UV8OHda3bRXkZ+OPZG0CBmxWXhy/W4HxwdpkunL0fXF/3cSlmFG81pr7jvfX09GrGbkPVrTCMBaWaXtXh1qgkQe8/vRiTv6/VfM67UwsHpWDTVJUwFCEF8XgsDXiOeqB2CpCSNBWNrETLcT05EaF8gPDg0Hk9ugiDRN3MG75mRHVxbAXKLmIsO5phjjjnmmGOOOeaYY4455pjjrxiKouAXf/EX+dznPvddbcdTTz3FaDSi3+/fdN3PfPp5pBDEicAZDSJkxmuZMiktf/Gt0zz40A5pdJ4zp3fo92LuvXeV82dGjAeO7c0Jly5uceTwIYSQdTYcOOnxpi4RUJPuHldnmThiBULqQGTVARc/tb1sLJrlNKPHe481lsl4i/HwAkdvv5+F/iJSQlUkvPrKtzl7boPD+1dJlaacDNjZ3UGgiOIWiUrJbYatKhCCosiJspTJZBO8wPmS7c3zLK6usLuzgyVmPM7Y2TpLke8gqaiqkNHabneYjCdIoVHa0eq0WVhaZjIa8Oy3vsHbv+/D6KgmzkRNWFkXCDvvptkvztp6WQjUuVqQEOKTAiFUIJn20pppAnnOh1rXxgRb3WCz6etYYE1q7qX2BCKtCbbV5zlNUvr9xbq0QiCzZH3uQzmHplZt44Dg6+WiFocIrHXXDA5eL/P9VvB6rfy/023Mrtv8bGpMf/7zn+eb3/wmw+GQwWBAURRTi+zGSjtNU55++
mn279/PQw89xIULF7jnnnvodrvTbM4b4Ua2szdzL7i67TfDdyoMuVHW2Y3WvZVtz8L6QHI7mm67Z4bra8eP2c376X/2fulKwfd0JYei8HevH7PSiwIZrxRRIsFE2MrhvaUbS/YtxAxHJWm3hWh1uDAocFUFZU5HByv+7kKfrcwwGRe0CWR33Gmx2E1oHX2Y1fseDoIDZ/Amw+UDsq2z7F48Sz4eg7N7VLKoXUK8q8VEIQiOiq4ipOsjc5Zg2JsglLqCmG7Oh7/iSx5vLa4sKDZPIboLRJ1FxGgDuXAIHy9x8Hu+j8mJpxk+9wRKeYSSOKEojcVHMR6Dsx7lDWVR4VAo70Jd63pMi4SnqkI5CA+UZUUcS5wJwiWtBJ1OgkXhfXALcIQxTitFZd3UzcVUtWjCeVQcBBjCWzwK5wzdXg87NhgRgQqZrVQGqVJsZREuR4nQ//asjQUiklgFpbEhezgOQjghgh1503/iSOOoBQqRprIe50FFCVY05VZqURieqrI46+oxO3xujGG72CEvChYEeClqgQZT0qOhMGZkBTMXfMo6QS0ac3XWvVSKu+9/M4OnL+LtGAFEOpqSF0hNPhywORiDinjwnoPsDjMmowFlWdFK4/p+EsFpQ0Ar8Zw5XcK4Ynd7hyU1IYt7dBf3oYucvrQs7lviL07scuyRH+CF5/4cbQweePG5Z2hHCWfKiqosibVGuRKAyXCX5VZEFOn6KJsjDs+LQIj4WosXiLOqrFi97Q4efvMDOFsRLz2GTjt4ZxicfYlqtMuTX3+KV08YSifwUoS+6kMJBi2D+O7smfNcvniZ5aU+KwstTLxIb6WDd5fBeSIJ+9bWibMNMufZKPZsrYUIz95mfG9ccpRSHD16lJdeeolDhw7xzDPPYK1lMBjw4IMP8swzz/Bnf/ZnnDp1ijvuuIOyLDHGsLy8zNNPP83q6iobGxssLCxgjHmNAKAp4wOhTMDm5iaXL1/m9OnTPPTQQ5w6dYrjx4+zu7vL5cuXGQ6HfO1rX+ORRx6hLEv279/Piy++yFe+8hXuvfdeXnnlFQ4dOsRnP/tZDh8+zBe+8AU+/OEPT8feqqr42te+xn333cdoNEJrTb/f54knnuBDH/oQzz33HIcOHeLs2bM8+eSTPPXUU9N2Sikpy3I60jQOCM459u/fz5NPPslDDz3E+fPnp8+/qqoApseolJpuqyk9kef59FxHUXTFc8QYQ7fb5eTJk7zxjW/k9OnTvO1tb5s+I5vSR/D6hIf/KyFlcMIKvT88a7wTdUkrhY4iIh1TWUNVVGBsKCWhfHAJidokcYxQUXCBcWEe7b0hbbWQSlJc3kJLSVkUrK/vYzQa1OVawvbL0tLqJICFDPLMU5UmuKSICHwQVOV5SZQUeCEZ7Q4oi5Kd7S2iJKYYDOn2WpTdFFtVaBEhrMMXFd5Y4ihCeItUknYrBgmddoeyMgxHEzqdLoudFuC4tLGNFI7d3R163TatVkJbpHgM2xsX2R5kqATyoiCKY6JYIwDrHcZZhFaIyoE3FIUNAi8lsdYymWSURYHGsnF5iyjaT7ffwrNNp9vBmhItI1AeFbXwCKypSKI2KoopinFdxSxGWCiMxRQ53obSFd57vDUIKdDtPrlQGCIQDqX8lEQP4ukZcVk9VxdTd5eGia7FxPX3VHhahb+bMnRi9rkQforps79ZJMBWKClBd2tGO4y3UgAyRugWNr843f1UUCBgdkIVfpMzqrjmXaJ+N2iORzSihEY00LybiCu/O9PE8LOZMzZZ/VcqR282l7zRPPRG22jeraZH8JrhQNAIMmbdiRqXualSoLlGtdjiyvn5VKUwM3+n3ubse0hTMlA2T8Lp8vDt+lyKGLxFqCjMhYTHiQjlKrxQCJsjasFXWbcMsTcL9N4zew1eL+aigznmmGOOOeaYY4455phjjjnm+CsE7z2//du/zW/+5m9+dzKrZnDixAleeeUV3vSmN910XWNCtm1pQEnqrCyFEhECRZbnXDi7w2Az57//6UmiSJMmimISgslFBufOb/Cwc0RCB2JbihCc9DbY+E/58hCM8dMAVZ110rCN9fKQ/a/q4F2ocgueVrtNPtkhSWPyyTYDDPsP3oYSno3NLba2J9x+pMXa+n6kd7iqYjjYxSFod/tQQVnmIEN262S4i7GGMs+oqpzxeMB4MEREEVm5w9bWJjvbG2RZgcVhK4u1FSdfPYO1hv3rwWI/TWKysmIwzLh8+TJ5ltOqs74aq21nHUyDWjXZ6MW0trnHY2yw/vbOomQUMt+EZK/8RCCipIetrR0+/X98lqe++jwHuyVHD/n6GCaBXHN7ta+ps42aDCjvwznWUUya+jqbSQYBiGxKY9Qkm5A4UWdmIuoAqwNvp9uZxeu1+Z8N3l0dTGyyQW8F12vHrTgIANPszWeffZbf+Z3fYWtri8lkglKKVqtFq9WiKAqGw+E006woCr797W9z4sQJnn76aQ4dOsR73vMejh07xv79+9FaT4/hRtlZ12rv1Rlb1xNQfKdOAzdrx7XWfz0ODrey7dn1av453OkzigI/zVKr1wtbmvZtWW+yqxSPLSnuSgX9RBDHmn5bBYK8lyKERytJkZehREhpWOrFVMbRW2zjohYXc4dwlnwwYrUVhezUpMsgt4wHE1o4kk6KimPWVvtUPubI4+9Hxing8K7EVRnl8DKDy+eZDId4U9HYZ+MlYEMoWwZREbYWFymDVBq0Cz+nAezgLuBcjnARKoqnQfCp+KA5PzNBcGctviooJsNQF1hFML6M6B1EtRY5/L6/wYunnodsC288XnmiSOCsR1hod2KscYjIYSpHS4KTiqo0tGIdhAVJjNSa4bAgSXUQAIiGVBBhDLMOJyVFYYg8KK2pKktWVqFEg2zG+QppK6JIItB4B9lolyQSyFRTOoHqpAipkFKg4xi8wzqHdL52z5mG8kFAljuKSNLuCryGQWYo8gpZO7SoSFIYKJFE3uHsnhQgnE9Zi0OCyMI3JJKouX4Rgv3OezY3tjm1u8mBe86zfuDg3thYi+qmhMUVN8BrFzUfhH4faolLKVFxHBwOfBBxLC/08VEoxzDJcqq240++/DXszhl++NEjSOnJi4KqKhGicxXZYjEmZ3FR0VvW/OFXdvnmbkJvocWosJSXBiwuLvGm+27nqdPnKduPcHL3ebpdx+lXTnDm+Avc9abHiZeWeM+P/y3y8RDZ7uNGZ2n3FxmfP7t339Ypn84HdwI1I5qh7r3jwtCKYja3hxTWcXDBIV0VyGmlAIcrc0rjSVJJrMA4hxa+dkzwCO9Y0oKdouDChcvsbCh2xQ733tPD1Bn0SscsrqxhTp6jtJ6hbfqKvOLZ45yj2+1OifJut0u73WZxcZHV1VW+9a1vcccdd9Dr9Th8+DAXL15k3759U8JcSsn+/ft54okn+L3f+z0eeughWq0WUgaS1FpLp9OZlisApg4CL7/8MidOnODo0aOsrq5yxx138B//439keXmZJEmw1vL5z3+eL33pS7RaLX74h3+Y0WjEJz7xCdrtNm984xt5y1vewqVLl/jN
3/xNkiS5Yqy11vKNb3yDz3zmMzjn+Mmf/ElWV1f5lV/5FZ599lmKouAf/IN/wJ133sm//bf/lk6nw5vf/OapOGLWkaFxNAA4duwYn/zkJxmNRly4cIGPfOQjGGOIomgqVmgcEf7rf/2vfO/3fi9LS0v8zu/8DnEc8/zzzyOlpNvt8ulPf5q3ve1t03IV999/Px//+MdZXl7mlVde4UMf+tC0DQCTyeQKx4PpXfQ6RXH/s6CUDLb+tdAzlAoLohspBEIrkjQljhI2swJXlaRaooRHSYEpS6ypEAha7T7Og4ySOuteYD3kZYkS4XmWtioWl5eoKkteFEglWFleZXuwzdLSIp1uxdbmLoKKWCu6C12KLKPb6xHphEh7dje3gJB1X+QVzjh2hhlKStJWC2s9eZ7TUg4pPI4CnCXLJqATnFfEiSDPKrrdDpvG0IojZNpiUhXsDnbxxZjx7oi77zpMGveI0hbOlgyVIk1idsdD8skkuIe0ElYWupw6ex4FlEWGNZayKNnYHbOw0GW9lZLlJTs7QzaHOVlRYfyIfn9C3OpinaOyFucErW6PLMtotft4Z2ilCZ3OQpiGO4PSMd55yiyjnEzwXqCjCFfZIKRzZRjz2x0QCls0z7rGycDjRT2Jmc5PXBD0+tqRh2Ye1XRQ8E3mPAKPBB/m4XvOAM1Y+dp5tfeOIhsRt/qhLboFNgfdmb5feaFqEXFTqqmeUzZE/JUaAfZI9tlyBHK6+7DZWlWxV4fhKgHD1WT8zFyl3na4NffW2ythcKW44Mr57+y6XLHe7Hm53rIrMbsvMXNdrprjTn8R07lA8350pcB6ZrWp08HsOWwkDM35Ze9aTL9cvwY7ByIFLLFyZFmF1xFKhOcoVQbpEoJipp1i7zJe4zheD+aigznmmGOOOeaYY4455phjjjnm+CsC7z0vvfQS//gf/2OyLPtuN4fxeMznPvc5HnrooZtnW0uJtVXIKmIvIy3U1hYYW/Lyy+dQaLJJTuYlQwQSGch7C6++epHxeEISRTVruJcd45zfI79rskc2XqKzsarg418zjiE846DOcg5YWFim31/BlANGgw2SKCEbD7h04Sy7gzH9hRgdxazsW6ff7bO2dpAzp44zHg1Jkhb7V/chVUxpMrLJmMJYhGhhkpjhrmN7tMvl8RCVthmMM3YGGdvbIyalYHVfihCSdqLRvRSEp9frsLOzRctYNja2KArD1uYm2xvnSW87hiCQGbYyGBMC5EoohKqDYXUg0BEC+aYyGFMFwkaLOgPKB9KQ2oHAC4xxfPmrX+I//9dPMNgdctuqRsoNXHEZR0mSOLy0CC9wPgSSm+83wSjvHGWRMx4NWVpaBUJ9YC9qIUQd3GtCc0GU4KYZug6Jd02Zh2sH+25FMHCzbP4b9d/ZoOSsQOF6+766nMHsOs45nnzySf7dv/t3DAaDqZ1zt9sliiLG4zGj0WhaJ1sIMb3Xx+Mxg8GAzc1Nzp49y3333ccP/dAP8YY3vGGa8XmzIOituDrc6nm+lXP/eksrXCvQeytCiFt1q/CARNBkhc8GpMO6geRtsuRcLabxCNqR4D3rEbdHoJwl1hEARWEQSQSlZTIxxEogI0VZGBa7Ec444jRGxjGXMk+VV1TjCfs6Gus8Ju6SWcdkd0ziPWm/jdCSg/v7VGVF977voXvwtrqZDkyJK0eMN88y2dnCFgV1fn9NGPmp20GTSSdqchlbYZ3FVRVCSVQUo6JQLqVhC5wt8diwnMbNZC9Y3tDbtYwGb6EqCoQco3QMSqOibUTUo3PwGMuPvIeLX/ztkLUqJUmssJUmjiUWgYhisB5lDT7SaOlQhOC71DFJohjljqTbopxMEEJRGUeUJEgfspxFFAeC3znKssJXltJ6HB4VxUip8c7jyjGdOEIDzhRY47FlRpp2qIyndBJVlkRpGhxZBEgVnHKi9gLl4HLoP2IvD9M5T54ZRkgyZ5BC4FxwDzAeMgODwjHOSo5qhQ6PNIQA44Jww3iPs6CloBKCwjkiJTGEkkBWSExluXz5Ikb3MVVZu9XYun876ireU7FSuDy+cWx+rRbBB0LJexeI7Jp8xQu8tRhr6XS6FLXgIc9LqtEui6lirMNYJgVU2YR8MkGIldA7BBjjcc7gqoLhQLK1ZTjsNSvdFVw5ZF+xQ//t38uF3SGffnlC4RVnz52inXRZW1V87Rt/zvkzr/L008+SJgk7RUXcW+adj30vCEE+2GZfJw6lL+qD8dZw/uxFjLEcPLQWXCaaHus9WVGxksbsbO9ybjtn6dDtRLHDO4O3FeBw1nLbwVW6icDbksWOrscEGdw5qoozLmK4EcpsaAnWCtJuj9HOJTzQ7i/hTcV4OCA3ntzK6TjUCJtmSxjEcTwlsx944AGMMbz97W/n3nvvpd8PRN+jjz6KlJInnngC5xzr6+s459Ba8+EPf5jBYMD6+jrWWv7aX/trU3HBY489Fkqe1Pu31nLXXXfxsz/7s1y4cIFjx44hpeSNb3wjf+tv/S16vR5JkrB//34+9rGPcfLkSe677z6SJOHxxx9nZWWF7e1t3vrWt9LpdPibf/NvcubMGR588EFarRZRFFEUBVpr/sbf+Bt86UtfYnl5mTe+8Y1orfnoRz/Kk08+yZvf/Gb6/T4/+ZM/yaOPPor3nqeffhprLW9/+9tJkoTFxUUee+wxnHOsrq7S6XQ4evQoH/zgB/nUpz7Fvffey9GjR7nvvvvq7H7J/fffTxRFvPnNb+YLX/gCKysrPP744/yLf/EvOHz4MO985ztxzvHWt76VX/3VXyWOYw4ePIhSimPHjrG2tsa//Jf/kp/4iZ9gbW2N9fX1qWvCsWPHbnm+8ZcBpcI19jIITUV9/0ulaqGJJI4i0laKVJrJqCJWCdQzPmPBOYGWEVrHpGkL68KcSycR3sPy6j6KPAdvKcqSOG2hlUYqxWiwTbfTY5KVVEqQasH6/mU2L2+hlcA7Q9pOccYQd/p470iTlM3RLhgHKjiSJXEoWxLHMWVVILygHcXkRYm3kGUjChuxO7pMq92ivyDotFtIIWm322gtMKaimEyoyjCvrcqKsrIoCnqLi+QTQxxp9q8tcPqZTfKiYDgYstjvkzvPxu4EbwzWGBYWuug0YffcZaSWJEmb0mSMx2M2BhOy0rC60ANnGAyGxDpiNCpI05g4SRiNC5Kkg7djWu0Wnd4CeTEORLgIjhl5liEEqDgBW4L30xJvSIGMYrACX2bBkAaC5X4tKN4TGDRlkMLI0ggShLyScK5lyOF/vpn/AIRxDMLc+7Wiz1AGQEoBQqPsEFwc3hGad69ygKx2idJu/b6xx0c3IrhGO7AnFGiECGLastmP9l4I5HTRazCrZfDNMYQPGleIcCyzouLmWTmzmauPdypYuPl890ZuXHvuAv6q5TNzTiFCyb96337GqUe8RniwJy7Yczhgun1fLxT1fK+5QkGw0oil9s4CMswYtPIoJZC6hXMVzhZ4O0RHcV1WYaZfzLQ96CO+87FwLjqYY4455phjjjnmmGOOOeaYY46/IsjznH/yT/4Jx48f/24
++9zXyjtebXfu3X0FrT6XRQSjXt+dVf/dWmTXUopXj44YdLklS15qmJiXX/T8+NVGs/URFavK8sFYoCa2352hgiIYniEGtsqcAShbTaijzP8M7RbgVEoSbNLDYv8EXBQrdFFLeAiE6vhXM5LefBFgTKV32kKxInBFrhPETtXrlmU4qoBWFckBWWcZ5zYj5mqR/Qndc4HRAJhZtk2HxAmmmyIiTJNWfP3c3C4gEbN69jXAHaICW0Q5DO8v3X32VlYY5zp9cQOiJodcFbpApL64hqjjW2ABQeQZqmBGGEzcfkWqDDFqMkI1YQBBIdxEShQ2ARgMknpElGKCSdeV1WfONRPsQ7cN5gihwhLEoKBoMU4SGzEm2BIiPQmhPLJ0o1LOkJncdYi1ASXACuBLilL22B5DRYL8r7HS9AWZSXWFvSsp0DIUoSgbEOa6t1u7WVahhIJE5avCztFqQsVx9NjX5FFCj7SiKr/bpmuVQQ6gDpMkS2D3oFdIg3BSIIy/7NRrjhBkFrDmRwBIw3qgMlcdrVx9RcvSXIXj6W1F4QNUGiOrZj67ga6K7W4o0q23EgvflO/Zdm7ffetWGlO1ARE3zDBxNC0G6F9FoKkQuKUCCdJacgcbriDNT34bTV3RTZfZqMUM0hTH2SaZJD9ZxTf19JXwL/VQvLZ7OKNF4xy3+c5cIR0aA+5nrdedQPR1SPqj2ivP5aoecwMQgzOvqESfHZgCDuUCCq69EipMRPS038BWJGOpjFLGYxi1nMYhazmMUsZjGLWfxEYa3lt37rt5rq2j8rvvjFLzIcDvn3//7f/0QVsbP47xMPPvhg5X36VxtKlIXzdf6oAQIDxcJ9DzC+cZONZ15Bmdo2oE6NHEk41kkmLUrQOr2xy85Lz7PUfQpb5GU1fbVtIeu0U1kFLHXQJPScKTjcS1i/ucHVq5f5xnf/5BjhQElBv9fl/gc+wP7ePtsbGzjrSjlbD8aUvuXeGeKoxdrJk0gluXFrA4HixLKkJRQ+nWCyHGsNBkuaFmxsbnBz4ybD8Yg0ywmigCCKWFg+RRSETVVS6ctbJpt8mQnDlmhgBYZLxFQ1jJCArQs4j2gazvtSPUKUCXTnPaa2VKgUD8qCoTKhVZIWmnqhqkeqJJ0sq4bL77kSOHWepaVVNq7doDBFKSFfExcqtQLnHTV1pCoYLZOf3uG9xbkyMY4XYD1CKKyrEqmUZID3Ug6qlt1WaTQNNgNNVearr77KN77xDdI0bWSda4npGsgpiuIYsDMddUUjHFXn1z/r/U8DVdPtsNbygx/8gMPDQz7/+c/z6quv0mq12NjYaDyy9/f3G+BECEEQBAghGs9trXVDdFhcXMQ5x1e+8hXCMOTjH/84H/jAB5pjrQGnxx57jHa7TRzHjZpCDTbdfnxZlrG/v8/dd9/Nb//2b7O/v8/q6ir/4B/8A+bm5pBSNsSHGiSpFRjCMGzek1ISBEEjWf1+JIz3IwNMR90H07875zg8PGxIB+PxGGstZ86caQgw72e1MH1NeO85sAEH1zeIpMThKMYFSWFpK8cdd68y8SHXbh5QDCfMR5LdDFqxZOQEg8yWHvfGsbAWs7w6h/OglEYKQT48hK1DvBREWpC5Mk1uqYDLqQR4rS5SNRAzSTCDIbVSy+H1G2SDQ1R7DuEykCHxwhpCSDAGNx7jEOgoqBLanm5FOGj1A4JQluCnFkhV2qFICUKVFjRC1vStGoyoq/5EOQZYh1QSZz3OZQze+AELD3+Skw9/kO1XXiQdGwpTkZrw2CTDJwNEWBG/fKka4COD1CEyaJWghHe4ImP3xiahKZAqREmBNQbrYPf1G5x46hpy/iRL3TGHg0sY70EpUq3AK3oSisIidICOYpAKV2TkBmRF/pC+QAmQ3qFbGuVapJNRyc1QGi0EYaeNiltY48itwXlLJCW2sETtGBmG+HCZvZ3vshDExN0uaZLi/CHeSUKpOWkOyHybNC9QUnFDShaqcU5YC84w7yyTxJJFXTqLi+Resr+/R54b9nJBvHQGcSgwZhuTJaUSj/MMJhMCCdfXt8gW5pmPAzgcE3QWGacFyIDCekbjBJ84DouQybWrLJ86h+yepdh5E7tzBdM6gUpGBELS7i0S+IQiM4T2AG3H5HmBp+orKZA6ZKnXQuVwQ66yuzdh/fJbtGxGu9vlnl5KR4+4OrjB/+kTH6C9KMn9hPEE2Nvn8NrrnFs+xcVBzrvf+g496zmpIlIl0dYxf/oMc4tLXHnleT756U/x7LPf59Spc5w6scaN9RvcvHqJdBn6/Q5Caw73tvnG7/42K6fP8/Gf/jzGWFqtDoaAVy7dIg4UsRYECpTPwMMXP/8JZBBh3n2GQUU4QCi81KAi0AEexbUXv8tyR/E3P3YvWbQIusMkN0zSgrSwpEVBYUsJc9+YPlUVpd6ilAZZzXXW4vEYY3H5mHNzIcleRlcpnPNkFoYiZvXOexCVks70eDk9L03POTUxq67wv93Gqh7fjDENeW4aoK6tCuoxXCnV7LO2WAAa4kBtd6C1bsbbaQsgKSWTyaSZV6bH806nc4xQcf78+WNWa957lpeXG/A/z3POnj2Lc44nnniiUXKo5xalFA8++CAAP/dzP3fsuB566KHmb9MKAAsLC/z9v//3m/em1SDq/f+Tf/JPjh2T954wDPnwhz98bA1Qk/n+0T/6R03/18e6urqKEKW9Uk00+NCHPtScs7pfbifuTdsZ1fNx/Xt9vqbXFvX5rc/J8e3IBuQVlPZhwnrSNCHL0pKA6hxoCKOYnAxTWMJY0WqVa+LCGNqdPmmakGcFSZYhpaLb6rC0OIcOJFE7QBCTpgU2NYRxRNBqkWYFUgjGg31W1k7iXUEQxeAKHJ4wcuggIy0KjDOsLfZYPtGm3e+TUxJ3g1jTUn2sEqR7I+bnT/PWG6/S7bbp93rk6QCUwIsCrWGxp7i1l/H8m5dZ6HVodQVh3MY7R9Rq4ynvA6UDMAbvIUsmjEcDwjjEWjgcTFBhHykUaZaiwzbGOpAB1mYVDi5ptWKUjvAe4labNEux1iCkQ4hSKUUpQZ7mFGlGv99G66icp4FWu0OoJdYZ0jTDeUsQaJQWjMcpxniE0ijhwecY49HCV/N1eT7LsUaCsM2zkxBgrWns0YyxOOuw1pT2EJTPUcoKrBIIJ0uFs4rwXaqYuaOKfdFA8EhRrs9dliCVxntXzt/5GCkELh8g1Twm2Uf4gmDuJIXJECYHRKlsVI2T+FIVQFARLSsAvFQ8qyvuS2Dc40DI5jnNe1+1tVq/VW3DH1HQ68/VIaCxmqu+OrVfjr5Xr0WpeQnlPWhFScvotiPCUBBrQRFpcJrcGtJaNMdPqQ7UhGoJ3onG9uSoUaJ5NvW1ats0UaBhY3i8K9eTUBx9tyY01EwaQEytH99/jX38mGvywbT6gadUDonEEKk8Tii80HjdA5eh7ZB8vIuMF5tzIKp1s6IkvIijXf+5Y0Y6mMUsZjGLWcxiFrOYxSxmMYtZ/ESxvr7O1772tZ/os91uly984Qv8/u//PlevXv1Lbtks6pBS8tGPfvRY
QvOvKkTlF1om445k6q2zRJ0OKx/+MIMb66hLW00VhkRga4CuYisISnsDj8cWjp1X3iS+cAFz7gLeOaSuJERFKU8qhELI0hKgBLwt6XjI1SsbjCcJzz7/LSZp6UccKIXA4wQ88MF7mJ/vsb25RZoVFMbiXenJrJVgkpoS0NeKTtBhZaWs8H/76jtcffcqq0tLtOI2WocYaxmPhty8dYNr69c4HA9AB7RaLYIwBCe4duOAE6f3WFldRAqJ9KXagHN1crl0iHBNlY1rEnjeu9KLvUoW0iTRZJOsK8kAZcW180fKBbdnjY5c56cTebL5TQpBkkywJiv9yqVgdXWN7VOrHF6/ibWlpG+tuNCQDiqbBTsFqBhTyv4760uZWGtB2PKclfoNKClwTmB/jHvoNLAAR6B3nbjv9/torXn66ac5PDxsLA3qasMa0K6jKAq01o08cr3tGoSoAYUa2L/dduHoej+yaJBScv36dV555RU++clP8sQTT/Dcc881n7PWMhwOGwDHe08URQ2YnyRJAxh1u1329/dpt9s89dRT7Ozs8FM/9VPNPR1FEUEQcO3aNS5cuND0x+3tqrfnvW9AKSEEGxsbvP32282+X3nlFf76X//rzbHfunWLH/7wh3Q6Hfr9Pu12u6l81VofOx+3Kw/U52e6Lbf3Xf3etM1Eva00TZvq39r/uiZTTKsu3B7ToNjk5jpp5gnmA5LEk+YW7Synz8+T65jL7+4x2hux1hIcHORYUdY9jlJDFGic9yz2Q+b7LZAC5SFUkslohNkf0or6BMGEYpRhHdW95qtE+NEdVeH7zV1XJCl7b12rvFIc450dDm5cp33iLDgDKiBaPIWOIrKdDcwoxQFKSaS1dLqaucWQuKsJY43UogQtlEBpiVTlOCh1+bNiZ01V+dXjY0ms8hW454wt7VKyEZN3XmDlng9i9jYYbW2SpQ5jQVLew348QHSXQWrA402OtwYdt6sqyDITng8H7N5YJxK1bQkIpTDWIwu49dWnufBL/0cGL3+HTitmYi07VjCZ5JztOKRsUxiIA12qLQBZ4TDWEre6SC3Qsk0QapRwSKmg1YKa0FYkBN0O7aVVfJFjjGWcpcRxRJ5mSK1JRyMmOwPCNY8POhSTEe3+Ato6rB2Ak/SNZFDkHO5r7k/HnLl4nqffvkanMISdiHFqWMUQK8HdseTgxAKFt2xd3+GHz3yfh7/wK9waZMxLz4c++jFeeul5DrY32CsMWTJkrtel3WrT6XYZGU+WejZv7hKHEd949hWkkEzSFKkUSZJwa/0Wo+GAJE35+E/9NcLF89iD62S5h2LIcqhptfpExRZmYpmMJth8QpGDUJpAS4xzBFJw38k+qenwynDErasbxEWKDUMOBkNwIdc39ljraC6emgOf4VLHrYND3nnzEqt7+xzKVxnsbnF3rJhb7rNbWK6NM8bes3fjBtfWb3HHHXdx+sxF5ubmWFlZ4aEH72V94yaTJMGcf4Rg8iOyfMLC4gLDjRts3rjKH/2Hf8Mkz1m544P86IZhUkikFASqrJz1TqNwKNEmVJ7g5j6hsITSEStHpCHSnlB4Qi3oRpbP/dRHK2UggUdjnSMrPLmxJJlhkjlGac4oMQySnFFqGU4y9oYpuVBMsqSsFlVHVbZdLTjTi2gpAUWGEYDz3Lp8mStBlyfv+kgz34RheAxorse7mgBQzxH1GNxqtXDOkWVZ8536czVgX4+pt89b9Rg8PdfcPsfled7MDfVcWkv7T2+znjemx/caOK8B/XpeUUo143a97/r7eZ43barfqwkYdV/U+54mBtaEiHpb9T7r420qmqfm+PrntD1S/XtRFO+Zc6b7cVqR4Pbq4mklhdsJGrerJk1/ryYy1PMycOxc18dWz+PHAcbqp6Ac030911qsLUjTCUWWgvdIocq5QCmM8SRpQqsVEbdjzHjM3v4hyXiItwUYw0Jbs7DQIW5FtNtdPOXaOoxThuMh40lGL7aoICCSASqQSBUglcJ7RxDEZPkEWSkeJHmBkp6V5S6dfg8VxSjnS3shHRJ2VggEZAYWlj5IYF5n+/o1Ji5DSwjiCC1BKUcUCVbn4M1be9x1c4M7LkbIoIUQCqljbDHBQanGg0LiKFxJVMqygtFwSG9uiaLIiaMYFyiMObrXhCztv6yxtLsLWC+r8yvIs4wszzBWoZQr2dRovFT0+m06nRZxFKCkJIojvDPkRaksVliBq9blge7S7cekaUpelCQGqUCqiiyoZEU6qFUNKiJbtV7weGx171hnSrUgU1Tz9dHaWVqBsOWzkBSliprFI0SFnlfPCrVCQV0HDyBchlQKZEiRHKJb8+AK7PAqPptHd1ZBthBRD4qsXE80axrZLHWOlAqO1jzTRFDffE5ydITVPcbRM02pBOEbVYG6Ur9WE6i/R3MM0+3h6DvCN8fIe8D7UmWv0wrxJuWdS++QJBkry12CAHwhjs5DvXaq1Q68b9pdn6XbhQCmlQiOxpCjRlrnkdN95ms1iEqVoPr89H6O2n47+VpWP31zbEz3kofEBYQoOnrMQarQpAiX4FF4GRJ2F/AiBJeAqY/b4AmOuvQvGDPSwSxmMYtZzGIWs5jFLGYxi1nM4ieK73//+410+Z8V9957LxcuXOCf/tN/+p6K21n85cXS0hKPP/74/5idC8+RT2X5P+MMeJCBpnf2DGsf/wi7m1/BjyxO+KriphbVPEoqSSjVDjyMd1J2XnmVtfs+iF89VclllskrKWQj11+THna2dhnt77G9vcf+wQ5Xb1wtQYJOG1xZ7frkxz/MY48/xre+/Q329w/IC4upFAAcEAjB1Vs77OwPOXW6oNXq0enOs7JikUrz9ltv89YLlytQJ8c7S25yjLWErZh2t0sYhGitUFJycDjh7R+9zGCkeODBu7n//jsJAoVwrrQjEAohPc5bZI2P1/ksUVfBVFYEUiC8bpJtjqrwmFKB3VUqBkJQ2kUgK9UCUdpPSNVU7lAlq2qnVCnKaszlpUU++5mf5qlPGuKozf33f5BTp9b43f/4W2RFRmEMtvKYLdsqjtREPRjrMIXBFBZbVYhaW1Zq1bYLZaWXao7N2uMAeQ1EBEHQABdQ+iLXntC1r/Vrr73Gt7/97bI/nGtA/BqMnwZPaqLBNIAxDYRPAxXTFaDvB65Pgwyj0Yjvfe97PPXUU3z+85/nhRdeOAYuAYzH4wZ4qKsaa/AFYDKZcP36dYwx9Ho94jjmE5/4BJ1Op7nNnnrqKdbW1viv//W/8sYbbzAajThx4gQrKytEUdQQLsIwbPy3kyRpFBWuXbvGyZMnuXz5Ms45fvCDH/CFL3yBPM/Z2dnhy1/+MmfOnOH06dMsLCwQRRFxHDeS3LcTCfI8Z2Njg729PZRSx5QJbo/3A2WmyRFRFDXbrffx8ssvMx6PUUrR6/W48847iaLoWFumz+PECLo9TZoZ8sLSCzxnzy1SqIh3rx8y2J8wLxyjCQwLz0JHMc4srXaMjEP6gaXdiUApnJNoaUnHoxJMGGZkSUq7FZAmRQMM1MIhdc1acyNU1XV11bPJTGlpDLg
i5dZrr3Ly4Q8hAgXeE/ROEM8vcfjGK1hTJuID5QkVLCxHxN0AHcuSZCCpgCBV3UsSoShVDlTthVwpRDSkLt+oH9TEMCEFwjqkhWKwTTzZZ/GeB7DjA5QuyLOyptBbi0sShMvBB+AtOIvAEbTnqnHJ451ltLnO4dYuq6oCAIREqCOSyeDKFsm1N/Gqw/KpE7z+7g32MljD0NYRxjiiThuExJhqLLG+Uj0o25JnKWHYQwDO5BTWIaIW3hQgIOr2EVoT9BdJd/aRdg9XgDMBgQqZbK7jTUF3ZY2TH3iYyc4NnLXEnS5n5ts4FGdTx/fzSakckA5YCpYoNAyN56z0LGFoiYzLE8O402N+rkOwPWB9Y5Miy9m8dYsiK1hYbfPLf+//jPtfYr77J19mZXWNnU2L8WV1qlKauB3jgWIyRIQB4yRF6gCtFNs729y6dRMpPIFS3Lh2jbdee5GPPPkJdvY3CbMdgqiF8zlCB5jUYwpDkuSYwmGtR6uAONDsHw4YDgacCCQqCIk3t1kUOUUoyTzoqIXxgs39EQ/fcQKcxiUhb7+zzvarL3NvllNMEgax5P9x70UWRznj3UNuGscr8y1kmrLajem2BVsmY/vmVZ54+AF0uscPn32Gt99+k8FwwI2bN0nSlG5HEvYXeeQDH+Dd11/h+RdfJF5cxe1NOBxrSglyhzXl9WuMbW4yKQXOVzLQdXVnbSfkS3BW+ACJK4FNYQmkIVaOUFraWhJriCPJXKdFqEqMsZ7KCuP4zltDfvTOiNHBLhbI85ReFOLJcL2I5bbmYFTghUcLEIVhcrDXXOvr6+usrKwcI8LVRISajGCMaca0NE2PqRXUFfq3V9HXxDilFHmeE4bhe4Cpev6bntNsNRdPz1+1JUMN9tcWPNPAfp7njfJBTdpzzjXqCdP2ENMgfRiGx4gT9Vg9bW8wTZyYbvOfRjB7v6iJgvW2pr8zTfqo2z4999frjWnS4fRcVffh+6kI1e9NH4fWGmPMMbugen/1fuq5riZc3E5ocM6j1BR7zZfXf5HnmLwgzxLSLClBaQdhpW5RFBatJd55Dob7DMYJW9t7jCdpo2J19sQCc92QuBURtAK0CpFSYbojyNo4YylyQ6ffYXA4pL+4ghAea8r7SyvQMkAFpeXXJE2JQ0mvEyGUqNSvXFmNryOEjvECbq5v8IM3fpOzvZCV+SWubd/CeUEratEJFMpndCJB7B3Xd3JevHyLpbkeq60+IgixDqSKUVIgRIGTHm8zvHeEcZdkMsL7Ah0InCuwThIGITs7m/S7EYUDLSRCabQor3N0RGENWVaQjkdkeYEOAsJAEMoQawxaB0StFnG7BVojhMfZHGs9prJeMd5XKm4CJyRhEKOlZjhJSAtHoB1aOWSjAubwWJgGyH1J4nXV84i1ljTNyvW9Lc+/x6OkoDClqpGSFicNVlDa1slacm7auuBIkQAqgJ+CIAgRQhF0VzHZGISgtXQHdnKIzw+RnRMVlO+htq6plDeOntaO3wfTr4/u16n71tW2cNXaYOrZASFKWyghGmKB9/7oeKbuOz91f8pKeaKG+H3FzX7Pul3Icq2lJNevXGN/nJLsDwiyIZ0Ly8dJBuKIoFETwEv+Q7UtREPhvv0en1YkaI6h2pZUtYrdUd80egdHfIFj/doc87H18ZFG4O3jYvlUV6paBVqT5Q7QxIEklTHeGQKZk8glYj9BIEtChHcIZ0BTWbr8xfM3M9LBLGYxi1nMYhazmMUsZjGLWczizwzvPU8//fQxGdU/LT796U9z/fp1nn/++b/kls1iOj70oQ9x9uzZ/yH7dtY11et1BYcpTJVE0kitWb73g5x85Cob330NZ8ukjakrUaZyq1JINCUxQVjN6NJNJlvb+DvvB+QR2DgFDgspMIXjR99/gdNn1piME/b2dxiMRizM9YgCRZZbPv25T/HpT36OL/3RHzIajphMMoyt1AGqY1FSsLU74P/z//0v/Nr/NeBDjz1Ku9Oj3e4BiujBNmsnT7O/v8/m1ha3bl7DSc3iwgrdVocaSFcIrCnY3Nnjt7/0VR558B0ODj/H1sYOT3zsYTrdNnVisKyglE3VivMefJl4F1QV3sKXVdF11bIXeBzOi7qAGucA55CVgasUgtrjGjy59RTW0aqAmrrvQSCkIgxj5heWuHjhTlZWT+C9Z3XtDCdPn+Pddy/xo+98iywvKxZtZa/g8eBokqWmMOS5wRSmSq6W14d1HrwpwSJKpQSQpT1FVWV4eHiIc46bN28CR97XQgjCMOSjH/0o/X6/AfONMfzGb/wGu7u7tFotiqKgKArG4zGdTudYZWg9ftVJuxqEmK46nAbU6/eyLGsICDWoMl2dWv977rnnuHnzJidPnuTJJ5/k1q1bjMfjBnwAODg4QAhBq9VqpJeTJCGOYwBWVlYA2Nzc5DOf+Qwf+tCHjoH47Xab06dPI6Xk61//Onme8+qrr9LpdBrrm6WlJcbjMVEUsbi4yOrqKs45RqMRURQhpeTw8BBrLfPz84RhSJZlfP3rX0cpRb/fJ45jgiBobCCmK0KnSRkAZ8+eZW1tjdFo9B5f6vrzdX9O9/l01FWuNdhWn/udnR0mkwlSSg4ODrh58yatVou77rqLc+fONd+v2yK1oBhnTApPJ1Ksri0Qzc1z6fIOO1tDFnxBJiXOeeZaisPEELcDEI4TS3Ep4R4GIBVRoPAOdKeDPRxjdseoQGHy8lou4Zf6PioHsCY/Pl3RVyXAfVWw5z0EWnLtR8/zyC/+H5BBDM4iwzbdkxfYSP8YKQQ6lCjvmVsIaPU0QaxRAZWtgkQqgVCqrJ4UAiryARXm4HxpZQIgvAMvISjJWc6W0t3SO5yQOFmen2z9HVp3PEI8Pwf7+yhdVufZQuKNQZgUwnZZDegNSmuCzgKIUpHA25zddy+TjxNCXabTnS+F65WUOA/WeLae+R6nf+F/Ynz5FWSrRTcbsroY40RU4Qse68EVOck4Iep2kVKjFHhToHD4PMcIjxMaI2RJgjApcX8OoTV5mmKEIpqbx6QTCpMjlWXp5GmsiMh2rrNz7Rqd5VVEGKHDEBXG3NjP6EjBnsoJowgD7BwOiUjJnGdiPSpWnOhrFmNJa7FNZ3WJ4TjBX9/ksMiIDw+5+vor9Ltd3njnMotzfT721CexRcb3v/00rd4C7TAkbLU5GA2x+/t0Wi2CKOZwkpPlKb12m/2DfQb7O2XfOU9uc6x1vPj8Dzh/4SLB3BkOb75KT4UsLSwx9oLD1JFkGQNZMEhyrFNI5wkUrG/tsL9/SNE1RKFmIZYst2ErkwgLWTJm98AzHiasLd+PHYb84EfPo965xCdbbS5eOM3XL1/hF3SPO/Y9ZB4KmM8sjy4tYs/FdC5KHu1u8fzmIVdf+zYdH/LMa2+Ajtjc2sB5wZVLl1jtBxxMCrLJhM7Kae7yOVc3NjGqzeWr28RRhyiI8FrhKsWLGquRVdWt8K66D8tKz3oOsra0RKjnt1KRpCTh1T9BgKtJeKUVkZKlQ3wgPZGSHBYSZw3WFjgPhXck2ZAJgvFKSF8LpIS0cJ
iKAdgPj6T1h8Mhy8vL7wHqoVRBuHr1KnfddVejODCtAFAD1PUcCBxTtQmCgDRNmyr+aQWEmhA2bU2TpilBEBwjvE2TDGoSxDQ5rB7z67mqHt/r+SwIguY97/2x19PjMryXSHC7Us5xkPK9363n7On+mCYO1H1Qz/O372ta5QBojmlaJaEmAfy4KuPbgdXptk73W63MUCs2TNtmHBwcoLWm3+83n3fO8cILL/Doo49OzfdHi2JfKUjkeUqSJiTphCwtFanyLKcQklArlNblOguYpBm3Nvc5HIwZJCnDpLRBOD8XgcmRUhGEndICJ4wRHsKoRavbwuZjDvf2CeMuAkE6HqODVrk21wEIhSev7FoUHkmgK9JFUdqTGVdCskqHCCkorGA8HjHY3WASnOHs6gmi/W0Sm2GFpNPpoXJLoCw6ULRCyVtbA+7d2md++QSxCnHWlvsTIIRGSofWJR6uwg5JMkGpmDCI0FFMkqQUWYoU5VpYK4WSLZRWCC9JTbmudJVyV54OMVYQBHFJALAW5y3euYZMXeQJyJisKMmwWmmss4RRTLvVReuIanBCUKqJOOnK4cb5EnT3Hu9ttR4QU4phFRnBWax1pe2bdSXZwItm/dAUtlfXjrceLzwWW49sja7B9LV6dN844lCS+y5h0MGlA8hHEC+A0DjVRreXEMLh0gMwGSI4spWoK/qPrAPeex80+0TQ2DtREiM9U/d63cya/CsrkuSUYkD5AXdMuaD5W73o8jUNorSW4LbPlUoFR4oSeVZgjSKIHLhqOxyRF47adjuR4IhQ0fAlpsa84yGO/XBU40u1ccFxElOzH1mqVL0fwem9MU1gqJUhKkKKcETaMDZ9rAfsCJXtIMlweg5pEwwKhcfrDl4EyDSh4Uj8xTkHM9LBLGYxi1nMYhazmMUsZjGLWcziz47xeMwPf/jDn+izUkqeeuopvvGNbzAYDP6SWzaLOpRS/M2/+TePJaX/KsM4i5LgS4dzrPOYokCWxtuAIJzrcvLDH+Lw6g1G1w85Ujfwx2pYnIdQeWQF1hWDhOH1q5iPWLSoKnmknALfHeODfZIMbty4wrlza3jvyPOUdhxx6sQKG1sb/NzPf57/6Rd/mW996ztsbG6ys7NPmpvGEgAg0hKtJMZ6Xnr9Kv/P//f/zEP33cldF89z7uwZ+r0O3V6HdrvPufklTp+/g3vvu4/t7R2sLbBFQTJJSjUAa9g9HPD869cYJSnf+eEzrG/d4ouf/xscDoc89fHHWF1ZrGuiQdTkh7I3hJRl8knqqjLJI3zltSnKyhTvLN6VVcd1T3rvwXqkEpWyAOAdmfG4sIUOovJzTdKszi6VXrNCSayzDYghhKTV7vCxJz/Lzs4Wg/GorHyM4tLnuqrI8q4Eio01GFtgrCktGKzDuPK1kBKcwFlbVl2L8rvWGMIw5MSJEyiluHTpEsPhkMFgQKfTaSpDDw8Pm2S+957f//3f5w/+4A9IkoQwDOn3+yRJQpIkjEYjut1uU0Wa53mTHKyBi1KOuCQ8TFdpTgMuNVBQV/1PV0FOS1pfvXqV733veywsLLC6usoXv/hFfu/3fo8bN2401fxCCLa3t5mfn6fb7SKEYDgc4r1ncXGRw8NDVldXuf/++wmCgCzLyjN0G6jx0EMPcccdd/Dyyy/zR3/0R9y6dYt2u40xhuFwSJ7n3Lp1ixdffJEvfvGL/OEf/iHW2sYmYWFhgSAIuHjxIuvr62xubuK9p9frsbi4SK/Xo91uN3YONfAkpSRNU8bjcWO3IIQgjmM6nQ7GGCaTSQNK3U7iqCttbweY6vPhfWmzkGUZe3t7JEnSVALXihdFUTT7mPbX1lojspw8d7TjgBOnFlk+e5LX3r7F+q0hfZuhehHOeCLhcFLQbodMjGNlISSKNSiFdYLAO1yWIcOAve09osEEjEOKEjRHSoJOCzuclFX/TINo04AVVc1ZLW4MUkAQKjYuvcVw8yYL3bmy8k1quqfvQwchKlAl8BJJuvOaIFLoSFbyzEeWClQyzUJKhFbNOAKVTY2v7++jO71Oy5eJZYmMJMJ4pHX4bASTPTonTuGTEUVuUEiskhSJhXyC6CxWqXlH2O4g4z4IBTicSbn56uuIwqAjfZSzFkfXggMO393mxO4tXAFnV08QppNS5jcor6kszUEYbFGgVFlTGCiHMwYtS/DIUg573heoVhu8QfW7yLCFtyUZy6QG5BBjDaHWFNZjsjFhp0e6H1CqKRgy48jdkDhMUCqm5zy7pqyqlUw4mGQcZqVSyyDNmDjBuhUsKcXF86cxznO4sYOhwAkYCcNkMmRxcYlXXvw+3//+Dzl19gKf/tzPcPrMGV74wXfZ29kibnWYm19mb3cbpSSLJ9YwxrC7s80oL9g/HCCdwyEoiqwCdhVJMuH73/kTHnn4Yaz1TIZjtoVnmI8IXIuFjiRJh/R6cywIxzCzJIN9xmnWAFthZUdQFAbvQ6QQhFGbzf0Jdy6eoHso+fYzf0J7d4e/cecdnF9eJtk4oDcouHDHKsIUuCwjxXPdWg4MzH/sYR5V32Ljaspob0J7Yph75Auoy7dY39klywo8cPPmNXrxBW5tbXLniassLC/SacfoMOLa9X3Cgw0+Fbe49/QZfLtDrhRpnpN5SNttEmMYWcdYCsZekACZFKSu4kF4T+HLek9Zke88JfngSAmhIsRNA/WVokIJqHmyQjPKj2T5PR7hoXCOUeHo6VKxxzrLYeYIhSDWJTBej41Xrlzh9OnTJElCnucsLCyQpilSSvI8JwiCxlagJrNFUURRFA3pq1YbqFUJ6rG0HmPX19fpdDpcuHChGWsvX77MG2+8gRCCpaUlAJ588skGjJ8mg92uBjCtjgAcswmoLYrquVJrzeHhIVmWcfLkyWNEgOntTAPv9bw5rTJ0fAzlPXNErQTw49QO6jVBbctTh3Ou6c+6T+v3659xHB9bF0zHdBtvP6a6ndM/67mqVoDY2Nggz3MuXrwIlITCdrtNv99vyAk1QeW2HVfbdVhTkGcpSTJmMh6RJmOKLMG7Au8FSWZptwXWGvIswVnPaDjBZQmH4xGZ9YSBRkvBymKf5eU+SpVENecVWWaIohDjIQhCXJEQx5rdrQ2WT6xxuL9H1O6gVEQQRGR5RqglUpaIv3WuUi6zeAG5MaTGYYVEeIe0nqIwFLZUG9s63GNlaREdhHiTEWlBrDVSRITaEGrBfDtj/dBwaX2XM6cOCcI23gFCo7TAWYNUGmMKlG6V5wZB2OqRG4cVOSXR1pQ2Z0JVBAiNs2lVpe7RSpKlpY2BFKWCSl4UeOeIlSzvb1vOQ86DFJoiS8nyAkHA3mgLZw1zC0uEEmTLY5GEQYTSIWEAQlkMJe/PYctq8grAx4tqnWyb9ZGt1jX1T1cRSZSSKKlRUqGkqpTSjoBt35ANRVM1fzvhoCQClmswmx5i8MT9NVRvhezgJlkiUL1TmNEGut1HCg/FPjI8eYxQOb3d48B5RSqYZiNUKPZRU6eUEER9rwN4vIOp3VTfh5JN6RolgylWQnP88n0IQe8hQXgQSnD+4mmEH
ZNnHhVIVKARWUk+qPvPi+njuo14cPxufc+4daxPqAjirpx7yvVg3U81baB8VRPE62MquUfvJT0w9fmSd+ErQsoRCaETWJJckDtN6McU3uKchqBd8ixUB+MFyhUIn4PUpZ2WqGkrf3HWwYx0MItZzGIWs5jFLGYxi1nMYhaz+DNje3uby5cv/0SfrSWw/+2//bd/ya2axXScOHGCz33ucz82IfuXHdYWlXRlle6oK5trgJYyWRKvrHDyI4/y7t63EAN7lEBqmi2wvgTSlHNlssbA8No1ismYuNutik+OUlrJeMil11/knvvuBrlPliclgBIo7rpwGiHg8z/zKX7xF/4WzsG1G5sMBgdMxgW2Tn4DoZZEWqKEJLeWcZazuz/gT555jm9+/3kEAq0VWinCMKDX7TA31+fU2gqry/OsLs8jnGV7Z4+rNza4cn2Dt69uczAqK9C997xz9Qq//b/+L/z1z/8C49GIT37yQ5w7d7pyQahtCmp7hNKfXQjKimVBVZNSJcKdR1iLMXmlGlAl5KqKT2sFXjiEKIkcVrVYWT5JK4qYTin5qoKorCQsK4YKkzcqBmEYcXKtzwfueYBPf/av8+rLz7O3t0crbmGdwzqDdQprDcprSiuIasvOkhc51tmy2gxKsNCBpwBfElasNXz1q19FKUW73ebw8LCpyKy9oCeTCU8//TRRFGGt5caNG/yX//Jfmqr9Woq62+0ShiGj0YjRaIQQoqnar+WOb6+8r8/PNEBSgwxBENDtdjk8PGQymTA3N9dUpk5XPw6HQ15++eVGxaDdbvPJT36SN954g+FwyNWrVzk8PGQ8HpNlGdZaTp48SbvdxvvSomF/f5/nn38eKSUPPvggaZq+556eJkX80R/9EdevX0cpxdbWFu12uwF+aoWFwWBAkiR0u92mGrVWi3j66acbD+5er0er1cIY09hT1IBIDaCtr69z7do1Tpw4QbfbPQYk1fLWSikODg6aitidnR2uXr2KEKIBjpaXl1lYWGjArCzLGqnu8XjM7u4uo9GoIRbked4AOUEQcOPGDUajEUmSoLWm1+uV12+gEd6zfHqZtTtP88prN7j67i6tPEe1NePUEWmFASaFR2pYW5uju9BGRxFYj608n3UYcLB9QNhqYa/ugtIECvLMIDodlj/yKLee+T5+aI+K7ZxvsAQqKd/jUrtllXaRWdLJgGs/eo75cxcRUQe8JFo8y8I9H6QYfButBZ2+KgkHoUYqiQ5URQyqZHqlbH4igEZZwjSVkB5b2qr4uiqxJG7VCXpHRQzzpUWCOdwinj9BFgWAKysxA4G3BrIEWdWWSyGIF08hdQyUSgP5wRY3XnmFUJTHCR7py+pFvKtryrEWtp75Pssf/wQ6HTGIY4rcEUYliCVkgZkkoAOCKEQ4i0kThADVbuOcJ9ABCkjTHJEMCeIWutXGeYnNC4xzWAvGAsaQpjmDSYazpoFErLWMDwfkKKTMCHodAt2iNzngumoTOU+oYZI7hhnoQJMUOVuZZ31s2S8UOg64tbGLSCxZ1Gaxq1lsx4AnCgPaUcQLL7zA/mBMp9fn3nvv51Of+ATDw31u3rjK/u4+o9GINEuxtpRP31tcZmNrkyxL2Lj2dqkhJKkIT5IoUuwfHPDqCz/grhM95ufnGO0NOBiOWeq1iYMWVgg+eM9Z3MF1/usrKXk6YZJ7tC+l03OdY4wlCjTCSRSSvvKc6i1yx9YG6rkf8rHC8vjDj7MQdMBIro/WOdXuoEKNTxPSdMx6nnLdFtwajPjU44+xku1z6coPCeQIUxQsrZ7l/nsOeOf6nyCVJM8Ltre30GFE3+X88MU3OZiMsXlGnjnonMLdegs33GdxroOaDMA7hHVo71GtNj7PMNaSi1ItyVhDISSZlCTAvhC8cPMWL2QGubbKXfdeILeU/xwUzpNbsB4MNGS/mnBABeA453CFqRcwpXx5RezbTQwnYomSgrRw7GeO1UiAiks7FWMaktTGxgZRFLG5ucnFixe5cuUK7XYbpRQbGxsIIRgMBhhjOHPmDFmWEYZhM7YuLy8jhGA8HjcA+WAwoN1uY60t5+NWq5nXvPfN+Ly3t8fq6ir7+/vH1Mpqy4QabK/tE2q1gNreYVolqJ4na4JCTfpaX19nf3+f1dXVaqw7rho0PTdME8Xq47tdTac+jprgV7+3sbHBZDLhzjvvPDYn1opBL730Eh/+8IePzeVaa27dusXy8jI/+tGPePTRR4miqPke0MxpjWLOlKLR7f/eT5GhJo1MKzDVShJFUTREvTzPMcbw6quvYozh7Nmz/M7v/A733Xff+1ZK+2rOy7OUJJ2QTiaMBvsMh/sUeYL0jqDi4DrvcdZijGN4OGA4TtgZDhHe0dYCKeD06jxry3N05uYRUpNOxkQtSbvTBanodOfI7ZigG5OmBUoJnEnQocYUGVIqHBYtJeARUhIEgiDU5EVOoCVKVao2laoWODCOw4MDJpMJ3sMomTBOxtTgaqA1UmpiHRJHmsgp2uEEAVzfGzEcDFhcWkEoiRcSj8ALjbMZ1jlMkSNMTpEXxK02AJPxmDCKGA3GtFsBwkM7KNWChHeYokDJEONMSdpzEhm0CZxjPBkhwgDZ6eDxKBUQhBFCRoxHCTsHt9gdphyOJmztDhgmBQu9LmdW5jh7apXl5VV6c4uoMETpAFc48qLAWI+VjiCUKCfwqnz4McZjasKuMRhTk3ft0TOKEEip0EqjlUJqjdSqVJKT6gjkr+b4+jnh6DqvLAukxxeGvBBILGG7j+qdxDlDNHcGoTReRqigjZcaGbQQeQ5C1xv/saB+8xpZqRV4pHA0jIhqjEUcoySU26G2t6n0CoQ8rldQEw8aYN41+681CmqFBDEFmNdNPWqbJWLM/PISWt/BeHJIMh4xLESzDV/LJlTba1Qa3ue4p9+fJhzc/tmSXyEIlEIKX9kXCLyolC8aywXf9NXx4xDH9ns0Hr2/qkNtmeBNjhMhuerj/R4ibJEHfQSW0I4ofIQXAdqlOB8QSEPWbO29Fjc/acxIB7OYxSxmMYtZzGIWs5jFLGYxiz8z3n77bUaj0U/02fn5edrtdiOPPou/mnjiiSc4c+bM/7D9O2sQOqrySQq8w3lLiaZTJvHxyDBg9YMPcfjuVfZ/+E4pa1nndQDw2Er+WAmPEmC9ILm1yXhvm87SClKY8vMVENrq9NCB4rVXX2ZzfYPd3U1AsbSwxB0Xz5PmIz7zmU+zvLzEzZub7OxusrM9IM0MYaAJtSyTXRVoWBiLsQ7jPFluULKubpMUpkBKw2iSsXc4wl69xQ9+9CrWgtIaqRRZYUiyopRsrY5qqnaFnb1tfvfL/4kvfO7nyfOcz3z6Ce648wJSeoSrqgCrymAlVeUDX1UzC1UmXJFISvsCJSUoiXeirJgWVYW1d1VRp6AQISvLp5jr9ZiyxWY6mSYqILFMntOAhAJBr9vnrjvuKZO7Hna2N+h0OrRarUaatq5ukkJWwGvp6+sp/26RWFdWIB5V5JTguLGGoiir6GsZ6Fo2ujzVR8B2nudsb2/z+7//+2xsbBxd
g1WCvgbW5+fnmUwmjMdj2u02YRg2ZIFpL+m6ynMaVKi3VwMj9famwZimB6cqO9vtNvfffz/9fp9XX30V5xyDwYA333yTnZ2dxm6hlsa21nLnnXeytLREq9VidXWVra0tdnd3CcOQJEneA3DU+3TOsbu7y2QyAWA4HDIcDhtlgFarRavVao5lOByitW4qVaGsusyyjCiKmmraa9eukWUZ99xzT1PJWu/XWsva2hrtdrvZZ23bUFeTbm5usr+/jzGGNE154403GAwGDeBUFAULCwvcddddnDlzhiiK6Ha7pGlKURQkSYKUkjiOG5Amz/NGqSHLMtbX19na2moqhcMwxBjD6ZHj9Ol5Tqwt88obG1y6tEXfFSwtRKS5JQwlmXWkvqw6XD3ZZ36hR9SOUWFEluR4Z2h1WhxubmO9I9Yh+SQHUcriOl+SEorREBUEINKj5HCd96/YVFXe/VhIASYvJaRf+5Nvct/nfpoo6gAOEXZY+ehnGF5+CZVPiDsBQVzKZkspkLoiGahK9UCXqidHF0i1X+ubPL+oE8fTVZANuUkiyvLN8tp3DjveR8Y9wk67BGcqK4Yg0tgiQ3iDIECHIa2Vc6AqSXhj2Lt6mYNbmwTeIyuOgxCltkJT1edLEtTh5S1WP+HJRkP6i8sMBvuoICj7OM9LIoWSaFWqvSSThHY7wBqD96BtgQ9inNRIoZA6wqQ5UgcIAdZY0BatQnJTYAiIA0+RZqS5IQw1kzTDS4kVim6/hwhC5hQoCmQQcK7j2HExaWEYjCasLfZYmusTBiHnH/gANxNBlqacXOzDfL8EU1RYSV5neB+xOD/PxvUrJFnB4tIyp9ZOMNfvc/Guu3nokYdx3rM/mLC9XVqJDAYDLr1zmfHzP6S1v0cYxXhbYK1DaVkCft6Tpgnjdpv9IiT0EXFrgUgucmV7g0AZ7lrWxGS05yOUTOn3uuwfZEhh2RVt3j3IGaaKO8+tkqwnLHUX+fmzZ3j9tZc5vbnO3UGbE6fvQNJGqBZEAaPxiNPtNsI5XDLkYDJhOxmzbw1jKegGCdlQ8tC9Bi0cb14bMNhY546LF+m1v0talIDsOM24fPkdFjsdiqRFmuUYU2B0wEJ3iD3cZRS1qonCIbwHJXDW4SdDaulzXxiwBmktgXco54i9R4yG9NdvEY1TAsY8/NQ5qCpCPZKiMBgHxguSrCDNLRPjsASM0pyX3t1i6BUL8/MYU6ozaFmSIbHltb03LnDzUTU3QWEdxiu0K5o5qNPpcOrUKa5evcry8jKDwYDt7W1OnjxJv9/n1q1b3Lx5k5WVFXZ2dnDOsbKywnPPPceFCxeYTCY45zhz5gy9Xo/NzU2WlpZ48cUXabVafOITn+D5559nfX2d06dPNwoLAKurqzzxxBPcvHmTBx98kK985Ss8/fTTPPzwwwghePXVV3n88cfZ3Nxkc3OTj370o7z44ousrq5y8eJFrLUMh0M2NzcbIsLCwgK9Xo9Lly6xvLzM0tISk8mkqdLf398niiL29vaYn59vlHFq+5+a2OacI03TRtmnKAqyLKPb7TIYDOh2u8fIB/U8NBqN2NnZ4cKFC42Vg5SyISa+/fbbfPSjHz1mLVEUBV/+8pf5O3/n73D16lXuuece2u12s17QWvPSSy9x3333ceHCBYCGfDGtyDAcDgmCgF6v1xAk3nzzTRYXF1lfX0dKyZkzZ3jxxRex1vLZz362WUMYY/jSl77Eu+++y1NPPcWzzz7L66+/zs/93M/R7Xa5fPlyo0bRKCc4T+EMpsgrhYMBk/GIZDIgG+0jXAbekhcZtigIlaDTa5UKBjYnz3PyrGAuCkisK8kHgaTXjksCjVSIyhJEK42QAqsDTBhik6S5xwSglCDPs0pJRzG3sFQqm/kErTTdVsCoEBSmtFMIpCTwnmySI7xnnA545fLrDJJRvWgiTROcs2UlvQClS0WSKFRI4wkDhVIwzC3jLEOoEnB2zgK6fMawJaHOmhwpJVqVaxTnBUVhSNMMIVxJaHKOSSDotTuIqIMiJ8tzfJEjVJu8SMs5JE8QgFYKgysr3pEIHTIaTbhya5u3t/awAkI8hS04HCXc2h5wc32L3e1NHvzARYTw9OZXEUKglYC8vBad0kCB0wLpAGRJMqjJBpVKmjEGawzOumquFkihSuUfVSpwyUrtrSZCCU95TqmtBKZIApXqkffgK5UIZIyIl0vSVTZGdxdBCExyALqLL0b4sI331VqhqfCfrsav1sweSsWjSiUAD0IerZWFaIB1cdua6IjsIwhDBTkgSiuoEkCnek6cXlCVNgritkVW+btACDf19rSdgyEIyr/nxlJktlSxyI4rE9SNvB3sb4hp7/u3PyV8qZQhpoj45SmZekoUFflTHNkOHtvE+xA8apL67U3wQpAYhYpaIAOs92ipKFSAoGxLLvsoP0a4DCcCvE1xriZ0yONqE3/OmJEOZjGLWcxiFrOYxSxmMYtZzGIWf2p473nrrbcame8/K7rdLnEc/8QkhVn894lPfOITx7zf/6rD1wC7oEpOqakkv0f4KvkkBPFcnxOPP8rutXUGm5UKQPVPQONKqoRAixIYyHYO2btyieWLdzfgvqz8zIVUBGGLne1bXLx4kTdee4V77n6CyaTN/NwCa2fu4ML5CxR5wfVr1zg8OKQVt4iCAOFd451qbA32g3Gll3lhHF5JpBcgSvCjBuw85WecA+s9WZbfBuhP9c/0aw8HgwFf+srv8vNf+EWe/hNHEAacv3AW4UoA3FNWCssK0PCluUJVPV0n/GqPV1+RPUqpVaU0TpT+sM57CjSLyydZmFuoAEp3JO85lcSSSqNUgFK6qR6q5V6FEMz1+nzg7gewzvPqy8+ztb3Nwvw83jusryVgfQO4ltutKpaExHmPNbUKgqj8t8uKLudqz20aaeRaar8GoKfB5+9+97tsbGw0Vfr196Z9mr33xHFMEAQNeF97ZtfHVce0tHT9/Wm1A1GRMWqgo/7cNCGglnG+66676Pf7KKV48cUXuXbtGleuXGmUGIIgaEgASZKws7PD0tJSU7VakxcWFxffUzE6nUSu27u+vk6WZU27xuMxSinW1tbo9/uNssHe3h6j0YjhcEiv12N+fp7FxcWGgFDvI01TkiTBWsu1a9coiqIhk9XAiBCCyWSCUorJZMLBwQFJkpCmKdvb2w2x49q1a43qQX0sNZizs7NDv99vbCGklIxGI7TWrKysMD8/z8HBAVmW4ZwjDEMWFhYapYhaJaGWzA7DEPv6f+L0hWWure9y7Z0tlgKLs4qi3acbTEAJ0rGh3W1x7swcUklUEOHQaKVp98ukuXeeqNtFGEhubuEzSxQJTFoScYrJhIO3LuPykhQzLYNb3+NCTv1pqghRCigyS3e+zZXnX2XrrTc485EVhC4VLzoXHmXpvgdJ336OsKVRWiFVNRZoidSlsoGQAiEVjdG994hSf7qsBq2FDSpvbSp7EyEBLyuCkkRgy01UBCXvgGKC0hIfSigo5XbjgHScIVyBEDGt5ZPo3olqTAJXTLjy3HOYSUZHNvyH8jqtM+h
TSXpTePZfeIHOHXfSDQOSbIywBl+BjeV4BBJHlheEkW6qq4UQmDwjzx0yjPGirKo11hLJUi5cSIXSClekjCcJwjoCLUGU3vVJYthPLYGSzM318IVhcrhPPxlwCHRJ+Mgdi1wvIl65MWAwShBeEMiAjhLMz/UxJmc/zTBZSmEszlpyY7AONm5cZe7sBzl9xwd57pmvcv/jLUbDEUWWErda/P/Y+7NgS5L7vBP8uXtsZ737zX3fqiprw1IFECCAKiwkQEGUSDbVZqQWk0zitJnGJDP1y9iYjdm89JP01MZptaTuoWTqaVJNkaAkiAAIEMRaWApVqEJtmVW5r/fmXc8Wu7vPg0fEPTdRBCG1keyH87fKujcj40R4+Ilw9/h/3//7At8ny+cptGZrZ9iQhsajMaU2aANCebT7S+SjTRAaYyyltfies7wptOGt63e4sbbFhceeZPXwafxCEA+2aIeKIhkzRBMEAcsHDrMjdhFeyKXtLbY3diiTnPb8HO8/d4yPHTtP69ZVRru7XFg4wMEDxxDtLnT7oA3EY3SR0g5DbDwizlOuYFgrC3Z1ydZ4yPWv/RGRd4mFIwFJloAx3Ll5m4snf4YTh1e5enudKAydAkBZspOktDodsp0R66MJ892IBzsDOkIRzi26e7qaPccTpwSjpGSh2yOswDYhhTMMqWS5pTX4FX4TSeXsRwQoTzUgtvAgqKpmO56kDEoQCi/wSScpX/3yy7y1W3D67KOkWYa1EPoSISTKaEprGaYFiTYEFbPGE249sBEXnKwq34uiaMbCa9euMRqN6Ha79Ho9FhYWuHz5MqurDpg8depUM/6dP3+e1dVVhsNhUym/tLTUENYOHTpEHMdsbGzg+z7Hjx9vFGnqsbY+v5TO7mEwGBBFEdeuXWvIcy+99BKDwYB+v8+VK1e4c+cOt2/f5vjx41hruXr1Ki+99BJJkjA/P8/S0hKPP/44ly9f5tKlS5w5c4aXX36Zs2fPsrS0xNe//nUuXLjAV7/6VY4fP06v1+P111/nzJkz3L59m0cffZRPfepTKKX4xje+wfXr1/nMZz7DlStXWFtb44Mf/CBf/epX+dSnPsUTTzyBtZa33nqL73//+5w5c4YnnniCVqvFCy+8wFNPPcU777xDq9XipZde4rOf/SxBEHDz5k2+9KUv8elPf5rjx4+TZRmvvvoqKysr5HnO7//+7/Oe97yHlZUVfv/3f5/3ve99vPHGG7z55pv8xm/8Br1ejxs3bhBFETdu3GBpaQlrLd/61rdotVr8zb/5N1FKcfv2bb7whS9w+vRpyrJkbW2Np59+mldeeYUsy/jwhz9Mp9NBKUUYhnzoQx8ijmPiOOZXfuVXuH79Om+//TZbW1uNWk89Njr1hJwiz8mylHgyYjIaMI53SSdDyiLBQyNxpJw8y8gSiScMg90hG4Mx47jAaDdWzPfaYC39bgBofD/EC1oUcenuXS/A8ySlSSiVT6IFWeEs0pI0ZX55HiF94jih67VI4hTRDvF8HykNURhgSkiTgrLUGKGRxqKUpLQ5mzt3SbKxk86v1tj3tzbc2q4laQUCT1k84eF7BmEknchHyoRMW3bGGVo7hSyhwCAxVqM1znrL9xlPErLCIE2Bygs8CXlh6XcjyiLFD1zlt7YBvhJ4vqQoS0yZUxpFnpeU2iKEot2OiKIQbSqgXQQMhxk/vHSTtx9s44UeS72I0PfpBh4KuL42JgB8Y0iGA5LBFu1WF091EVLh+RLK0q15ywJTgcsgMNpSFpqiLBz5IC/QRYkuKtKBEK46vlIIaEiNVmCtwFZ2CVYLTD2ZP6xKUI+XQoB2ClqqPYcpYsDidZYd0Q8I2osUk02kDKCIERRI2aoOI5rDN2SGhvxQze8CpK2AcekUYox1BMi93fZX6DdkG9zxGkh9T6qgEh6o1jpCYO0e8aBZie9d+kPgfLXNCgbjEse9cYukIjfsDhKs36/3YhrJt/XL7Z7uwNTJptQFpt4Hpq+ptkyo7X2UrIkXFRVT7BEypzqhOX7T9ne7nn3nnvqMFWC1U24QAk8nKGUwpkRLXV0flKJNFAj8bIfCSEqduvWc4MfO/18SM9LBLGYxi1nMYhazmMUsZjGLWcziz4zLly//1Pv2+/2m8mgWfzHh+/6Pyc3+xYetfMJtleR3Ut3WGgcaCAceKOUIBXPHT3D4PY+T/PEPoNwD5QCMFWgjkcIdUwmJjgvWXv0RJ579WcJOD6n8qvrXkRtu3LxDr9NGCHj9tVe5cP4ZMIJuZ4HV1WOsLh8hywqu37jB8RNHOXfuBK++/CrbW9vo0jh1A+PO53sKKTWlgby0rurEmqbKR1TAujGWwhhsRTwAlxi01Lj6jyeIpBAYaymNZWc05Ctf+0P+2i/8N/zxV77NZ//qxzl4cLVJGznyRl1BNJVJo5JQrv80gDQO0bSuYswIQaEt/flVlheXEVJMyUjXx6qTho6GIKqMcC1rb7Qmy5IG6Jvvz3H+3GMURcGbr73Mg40HRKFP4M9VbamS5ogGIJRSoJTElhqsxRiLsQ5A09olxq01tNtO8r+W4Adot9scOXIEz/PY3NxkZ2eHK1eucOnSJZIkaYB8a21jWdButxtbhpq00Gq1yLLMVZ1N+VFPEwpqSeSH1Q7q65hWOPjTEozr6+ssLCygtWZ5eZmTJ0+ytuZUIRYXF2m1Wg3pIY5jiqJoAItWq8XS0hJRFNFqtVhcXGzaME1wmE5kPvnkk6Rp2tg61KAaQBzHdLtdFhcXUUo1Vam19LcQoqnsrI977949dnZ26Ha77O7uNsSGugJ7MBg0fe55Htvb29y9e5fRaNQQJmqlgppcUF9vDYrVXuaTyYQ7d+405ApjTPO7tZYkSWi1WnS73YaIMd3W+nz1dz8ej/nA40dZX9vlxtV1Wrog8BQjAyRjCiUxBmQUcvhgD60NFkXYCdBGgDVIqciSjDLNEZ6PTsfYzSHaugq8PHb3eJkXlNuDCkQRzT2/9+zSkKxoKvxdgltKgdGasB1R3B/w+pf+mIMXn8Sf7ziSTjTH0vs/zdbWVTyZo5RAegrRKBw48pFUjY9DdZ4qSS72FE+qG8WNX0o1wAOiuq/sVCq9Ag6EElCmjnTgOyKbNkDg46UF0pRIpegde9xJLwuFNQXZ4AFvf/dl0IZACqR01+pOaZsKRLAuH45l+63rzD36GLYsCQQkaYr0ffwwAulAtDTNUEoRhH7lr11ikaRZifW8isijKLKMqNMmTZ2kvSdx/eUFCL8N5YA8K9G589NWgUdnvkc8GBFIC6YkjVM60rDT6jAnS5464NPNJJfWAuIkZXuUcu/BCF9oTp44gc0LFgvDEREwRjJIJ1zeHnB1nLBRXuK5pSM8+b4P8fJ3v8HVN37AkbNPcOtmRqfdZmFujjjL2B0M2d7edlYooxE7O7sMh0Oy1BF/+gsrbMYjymyEAFpRiOcFTCYxRTlAKYnWOW/+6CW21u/RnVsgN3B1I2NYhohonsxsc2ekUa05doe7bO8OGI8nCF1gJvCZx8/Reucy8+MMX/rIoycRx8/C4hIkGX
ZrA3YTUgFXo4BX2yHf8ua43FUsjnfpWMODZMLLl9ZoLRZkGaQjeBDDm2+/zZln38+5E8e5dvs+Qgj63RZJVnDmxAmIx9zbHdPptPGkIE5TjJSsZSlgqv8suxtrSOXR7/QIO11UlmKL3AE5NS4lagKju8+UcPYJQgjKIq/mvXocraqKS+1UmLCOoKOd1DlCEPo+Y12AcNLf9fwvBWSFZpQb+sqSlBAbwYOxISkN77eWPM8b5Z6iKFBKcfr0aTY2NhpSW1mWzThXq9JkWUan02kUaGq7Gs/zmrHz+PHjrK2tMRgMWFxcbI6VZVmjzFPL/tdzSK18UBMNjh49yv3797HW8uyzz/LWW29x7Ngxbt++TZIkLC0t4fs+H/vYx7h8+TI/+7M/y0svvcTRo0cRQvCDH/yAJEn4mZ/5GTqdDm+88Qa9Xo8HDx7wqU99ildffZU8z3n88cfZ3d3lk5/8pCOGVXPBk08+yfHjx3nrrbe4du0aZ86c4Z133uE973kPr732GhcvXsT3fS5duoS1ljfeeIOFhQXSNOXtt98mjmPKsuT+/fu0Wi1eeeUVpJS8/PLL9Ho97t69y4kTJxyRq9MhiiLiOCZNU77xjW/wN/7G3+ADH/gAr7zyCt1ulyNHjjTzU5Zl3Llzh6985St87GMfoyxLLl68yMbGBnEcMz8/z7179/jMZz7D0aNHGQwG/NEf/RF5nvOxj32M3d3dfZYLRVHwrW99i1dffZXDhw83c/39+/dZWVnh7Nmz3Lx5c9+6Is0S0iQhiUdMxruMBttMJruYIsazJVK5daE2hlwXrG0mhL5iHKcM45xACA7Md+h2W6wuL4AumZvr0unN4QURVvnML81hEGhr8ESAkB5Rq43vCx7kOWk8oshiBjubzC8eoNQFng/G5OS5QPqSTqfF6lKfS7fvM4ozkrzASkVpDEYXFGWOLnP8wHM2O0bQjkICzyfOUg7NrRCFLRCOaIbQWAqUlCghMAg2d0ckSUy7v4AnPac25IX4WDxPkqVDijymLCa0fGfBFrW7hC3LeLBJ2HLV+r5yBD5tCsAp6AjpYQyUZU6SpLQiD99TCBVircIYy/bukK+98g7fubqOHwacWG0hpIcX+HT9gF6giCclOtf0o4h2K6QsM0bDLYRUhFELaQ0YS2lKFBJryooAWBGcC02Za8q8pCwKdF64sckY3JeNIz1bgdAWIQ1Y7WSDKgDZ1ipwzbLeb9ZMe6C0QNmCErDCx6QDwv5hR2B0veII351l0vEWWKc6JOWevQJyb7VTH9ctb/YIArZmR4i9tQ/V8qMmIDabqnWLlKIau+W+3Sv6xD71BqgVm5wiwh7RuWkCDQmavTW9xeL5AYEHOpCYloewiiAMSGuiBA+v76t1i62ucY+P8GPxk1UPHFFUNFdVa2TVE5ehVjlorvknkAz2Kx84AsbUjvjSooVEGo20OdoojB8hdYz1us1FJDrAF11kAHqygwlNRVDf/47zXxIz0sEsZjGLWcxiFrOYxSxmMYtZzOInhrWWK1eu/NT7T1frzOIvJnzfZ2Vl5cdA0L/IUFI1FShK+XtFIrpO1Li0vvMeBa8VsfroY6y99hbZ/RhtnQi4VwH72oKSOOKBBaxk9+2r7Ny7S//AYee5i0QKQTwa8tI3v8yHn/sYSkGSTLh162363QNgepRlyb3795HSJ/Bb6NKQ6ILlA4tsPNgkyQu0ceC+8lzCXhYacBYLnnFghhDOr91YwNRKAC5hCHUyqvIFhaqqZX+0Wy0sMIljjIUHW1t8+3t/wqc+/tf58h99m1/6pZ+n1+tURIKq+hw7lbRz1gfWuKoZbU2FM8oKOKwTWq4PW905VpYPoKSqwH5HktgriKmTV1TAr5N0zYusAUaSJKYsNV5VKTrfn+fC+cfI84zXf/QSW9vrtCLPKUZY06gM1LKoUtWKCq6txmqMrr2nqbbBQt9VGdUWCWmaNqB/7Y+c5znf+c53mkr8GmCPosj5oRcFcRw3xINa3rn2yE6SpKk+rNtZExHyPG9A8mllAVsBSLXP87tFvf2tt97C8zyOHDnC22+/zb/8l/+SVqvF/Px8c22+7+N5XuO5XbcT3Pf94MEDjhw50rTh3aIGmbTW5HnOwsIC586dI0mSxj98PB4311MrIbRaraby8umnnyZJkkZlYDQacenSJaSUbG1t8dZbbxHHMUmS0Ov1GnWIsiwbMsPOzk5DnphWgijLsiEGRFHEZDIhTVPa7Tbdbpe5ubkGCKoVKurveNobuwbQpn2/wzCkLEu2trYAGiApTVM21ne5cnmd0GiUrxhrWGw7u5fcSLx2wMmD8yggLQx+EGAqb2i0wEgo8xJjLXlW4ksPneQgwZRl81xXziXU8t7Uz3udLa4sC5qo1UmE89aWQBoXzC92eO1PvsV7//pnWX1yGeG3wBpaJ95D/7EPUFz/HlIYp+jiq8pixY0zQjr1k4crGTEWlNdU8HnSIiuZY2vZs+m1NM+kqNuqKlUCnTsVCM9zir+lQUjPjRcmp7V0iOjAWZC+65MiYf3ym6xduYFXGoKgUrWoKxqnCFv1uS2CYlKSPriHXFghWjxGfPsKWqeoVuQkj40jaHh+pUBhDQIoLJRGoiQEoUJb5QBjoUgKN2BLJFkcgx858oIRSCUpSoPnCdLSIqTGE5Yky2mpAHTGdqDQ0mC9ABO28GyJF2R4GOIk5fLtLQJrWcokf/u9z7C0sMjk3Hne2dzi3hf+E8HmLktRyFzksfXgLgdOb3Py1Dlef+37BHev055b4crbbxOGESoIGI3HJJMJUgg2drZJJxPyqrpdKg8rBP3lg+T3EoSAEoUtNKq6F7rtDkHgs7hyEC/qsLOzQ5HF3BULbEuf9MGA4WDI1nCCxhHP0jRBYRBZzkeFovfSDzjhRcydeoT/vt2hc/Q4HDoKpSH3h2wtHObmgRa//eAdhn7Abjyh3QvYKj22Q4+jecHYCt6+N+awkeSjjDKTmF3BhllnsDvk2KGDhBImxtDuznPqzGEeP3ear3z1y8z1unSUIK7GEYVkyfdcFa9wkuoCTahClAVhDdLzUNY4+4P6uat8srU17jHAkipBWRaO+CIVYDG6JM9cFXkD70hJUeQkaYw2Bik9jBANgdZUcvdtKgKCMdzYTckLzWZaUlTkw2o50BClLl++zLFjxxgOh9y5c4dWq8X6+jpxHNPv99Fa02q12N7eJo5jlpaWGsLBeDxGCEEYhs2473leY6kD4HleQ6ibnjOmVYBqG4H652Qy4Z133uHkyZNsbm7yB3/wB8zNzXH//v2GOFyrCrVaLYIgIIoirLXcuXOHL37xiwghOHbsGKdOnWJ7e5uXX36Zf/JP/gnf/OY3efPNN5lMJpw8eZKLFy/y1ltv8YMf/IBPf/rTzZg5Go342te+1igo/Mqv/Ar/6l/9K7rdLmtra81c0O12ee6553jhhRcoioI8z3nyySf53Oc+xz/4B/+Ay5cvc+HCBdbW1vA8j+FwyAc+8AEuXLiAtZZ2u83Fixd57rnnWFtb41d/9Vf5/Oc/T6fTYTKZsLu7y6OPP
sozzzzTED+Wl5e5fPlyQ74LgoAXX3yRNE35yEc+0pAGH3vsMcIw5Hd/93dZW1vj6NGjTT/V6xNjDIPBgF6vx9//+3+fe/fuNWST+fl5Dh06xMLCAnfu3Nk33yfxhMloyHi0w3i4zXi4RZKMkNYQBYrQD1FKEPgB0kpGk4RcCaTRzLXc/bI812ZxYR4VeCjZor+4RBB2Ub6HUB6gsJVsfhKPkGgQkiBs0+71mIzHFHlJXg7RWhBELSaTAZ2uI+oZqQhbXQ4uL+JJyfYwoygMQVs54FkqMlNQaE0UBQ0Yri1M0gRtLcrXlDpDlxoVeiAsUSDIS43BWX5NksRZe0mF59fHsUipEcKgpFs7LM7PM5mkWC/HaA+kIIrcmrsoCqK5LpiiIt1ZjN3D7KMgoCgyJBYpFAiFVD4bmw/4k1ff4oWrGxgZ4FmBkR4I5WwkfI/AVxxYzFlfn6C1xVMSCcTjEUGrg5CCXFvyrCAtNEFg8QOnOmStcMo2haas1C2KPMeUBaIsEZW1m7QWiSMaCCvdH+PeS6y2GLG3dt0jVwVNlb1AYhug32KMQBQJ0dJJTD5B+SFCgEK4SnzhEfYWSXbvYrIYEy0iG+KCCzmtSMB0iOa8VCB9o07AdPumAH5r8SV4siZRNhzKhkghmL6+PWje1kQAtyKaUkLYe572wHoIpCbwJYWU+EqhPcXCXMBwl4ZcvveZasHC1L/ZPTKD+6udOv7+bfXv9b5BTb7Re8cXWGyldlD3X21HJSo7pZ8up1KvBS1KSKJQMREKUU6w0scIp4/iCR+hC5BujjVWYoUitSEaH2FKpPSwD32r/yUxIx3MYhazmMUsZjGLWcxiFrOYxSx+YiRJwoMHD37q/Tudzr6q3Fn8+UftN/+XGUIIjHZgnVSeS5QIMMI0eRAlJaDAguf7tFaWOfDUI9zb+iE6t6im5lZQGkGoQLGHkZVbYzavXubYk+/Zq1DPJqgf/Ef8+9f4oy+UHDl+mDDw+O53v8kv//W/xWAnRuewubWF50U8/vjjPPHke/neiy+wvb1Dd75NvL4LuKpc5dUAogOzrbWVDzrUGVOLUywQ4KpMKxDfWOtIE1ZArVDwUCRJwuLcHAIYxzG5Nty8fYNLb/8Q9eiz/MnXv81nPv28q/S3zoqg0k6oDlcB91TtQmGldSCbdUlFLJTaIP2ApcUVPE+hjVMWqAkMojJ/NVpjrUFJ5ZQQqiS5NdZVgltLkkzQpas8c6CGYH5ugQvnL5LGE3706ojxeOyqs8oCYzTG6CbRVkuYGow7v3HkhL2KIrft2rVrjT1Cu90mDEPCMGQ4HDa+0a+88gr379/fV+Veg9mdTsf5GFdV9O12G9/36XQ6TRVoXcVYg+8PWxbUxwX2+TkDzXl+Ehmg9ui+evUq/+yf/TPW19eJoqjp19obuwaPlFINeUApRRRFrKysIKXk7bff5uzZs/t8oWtCRu0NvrOzw9zcHPPz87RaLXq9Ht1ul9u3b3PgwAHm5uaavqutJur+bbVa+4ARcPY4aZpy//79BuAZj8dYa9nZ2WlsKpRSLC4uUhTFvn6riR1RFNHtdlFKMR6PCcMQz/M4dOhQ09ZaLaGuLp22spg+Zl3tW6snCCFYWlpidXWV7e1tbt26xWAwAODWjU3Ic1q+x6S0eMKZiYStAKstS0tzhL4kiUs830d4HkWe0+q00VlCXnh4nTZWa0xSUGxtUWaadi9CaO2eo3psqIhHpiIsyQpAoBoq6h+2JglUZCGBw/bHw4RD5w7zo++9wyv/8T/z8VPnCJbPgNQQ9Og89Rkmg1vY8brrHyVByqoCrTq60SDBWuPsEwCjrSMhKYEuNX7ku/NK6YB5a5yqgdvbNXxKkYFKPUZgUb5ECg9LiZESL1BIz6d/9hlkNA9CYU2JTgf86I+/AZWcvCc9B4pPJ+XrvoBqHLUYYPDmFRbe16F98mm2b17GWF1VmirKOHEKB0GEH7XQeUpp3fzghT5+u02clCipkcojniRoI/CsxUqJVC1nudBbYLAzpCUlvbk+xm9RjsfYZEKRF4yMI0J5vkT3+5g4ZpCXbA5S1uKM3d1t/EOrRJ5irhXyTG+e/8czz9MSgtHaJj947Yd89403MNIyN9fjyNHD5L0Wu+t3GG9vcujACmt3lrlz4x0OnZSYsmA4GhIEPtY64qBBkKQJeZZR6hIwKC8gjcdIFbCwcpjRYJuo3SWNJ5R5SrvdRSPo9hcxSHaHQ0yZY40hzQu2BveZTEYgPeYXlsBYsmSMFBCFIc9YyfvHMWdXj9A+8yhi5TC9pVU4eNDdX4N1dk+s8GK+y/XhHa6PttkeuOd5q8iRwiDaIRujBC0gHee8eqVkvQ13xnDOSApdsHH3NucePcPhxXkWj63yyPlzpONt+v0uC90uUpfsxCmllfRbbY63fMosr8AWMKVGGOHWAzUKVRFv9oAnCZWPtzEabS3WWNJSE48nNDoFlRe8MZUqTF0ljMZoQxwnFKVGeRG28lUXFaDVPPMIjJDcGuYNec09kQJP7lni1CD0/Pw8YRgSxzGLi4vcu3evIR10Oh2GwyEPHjwgjmMOHTrUkKveeustAA4fPsybb77J5uYmq6ureJ5HmqbMz8+ztbXVKBO8/vrrPP744w2Rrm5XEATNOG+MYWlpiZ//+Z9nNBrheR5f+9rXKIqCz372sywvLzfj8/TYXM+VN2/e5JOf/CS3bt0CaAhmvV6PmzdvYozh8ccfp9PpcOPGDYwxPPXUU9y9e5evfe1r/Nqv/RpZlvHtb3+bCxcuNBYSSZKQJAnD4ZBHH320sYeACjCOoma+OXbsGMYYer0eW1tbvPjiizz11FPs7Ozw9NNP81u/9Vv8+q//Ok8//XRDACiKolEaCsOQN998k0uXLtHpdDDG7CNyRFHEvXv3OH/+PPfu3ePChQssLy9z7ty5hlhdHzPLMi5cuMBzzz3H+vp6Q6Kb7m8hBPfu3ePevXscPHiQf/fv/h1nz57lwx/+ML/zO7/DM888w8GDB/e9N41Gu4wHA4a7G4yHG6TJAKsLfC9Aa0WhDaCqObFSjpnkKAEH5yLmOiG9fpsgFPiBB6K2zTKURYGyEEQeaZqRFxpPQZYlKGHAE3Tn5rBGkyWOFKzLnLIU6Dxk/f59Dh89gdESY0uWludZ6LcZx1ml4tOlTCyeMUic8k2n5eF5ikxr4iQBLK22jx9oKCf4kVMuC30PTxcUpsRYgQekhSXLM6zVDUFO4GTjLRKlJN12RGEspbZYNGmWoxSEYUQcT8izvKlQ12WO1hapHNAupKRUEonGWtx2FHGa8d1Ll3jz3ja9Vo/UlGijKbTAU4GbpzxJ4HnMd1usbUwYxSlFmrk1h4IiS/CDkLJwCLrnBRTaYosSpQQIhdFU94pG6wJblhhdusp3DNIYlHU/pcSRnjXOigiBlTWhryIrI5AWhOg0a1iB3GMG6BSLRkV9ZNDG5jHoAuGFe8sBBFKFBL2DjHe/i5w/zp66QDXm1mvnqTXwnpVWvUX82DrbjdfVu0i98hAWY0qyXDcksr3P1Ad/6GTuhM15GipC
NabX9hUPW6J50jDflXSkomgHTCaa8YYjQtRvWjWxYU8pYXorzf+noyHbs0dymA5jnI2GelhFQOCsMmpu6lQr9o79p6se7P+76wlTkQl0mYHqIGwGwkNYjfEiVDFCy1r9SiAM6GJC2J0nrazn/s9kcWakg1nMYhazmMUsZjGLWcxiFrOYxU+MyWTCzs7OT71/DRDV8uiz+PMP3/fp9/t/9o5/jqHzDAKJ0S7h6ilXeWSNdgn9KvmjlLsvpBEErRaLZ84xeOsaOzd2qwSWA7t0pZMshUU680tkaRlcuYZOMmRPIrFEt1/hzotf5TGR8v974zX6cz2Wlpe4dvU29+/eZK4/z2Q3Zjw/oNMRnDhxHEvA2fOP8C/+5f/IZBKjtSGeZBRZia9UBZi765oqcnHJHSuq6iS3rU7KKCEo7X56wLvB0hYYTsYcXFlCYJkkKZOk4I03X+fY0VNceafkjWOXePLJxxy4aavzCuGAejQWtVfZqQTCuApkay3COJ2Fosjozy80CgDbOwOGO7vMtSOibp+g3YJKStQYU4HZUBvRa20otas6jSdjJpPY/ZsQrg+EYHlphUcfe5okiVm/f40sT4mKaIqwUasYuLZJIVHSgYHGWscmqaSzrbEcO3asqQAUQjSA93A4xPM8wjDk5ZdfbkAOcISbGhwPw7BREahJBrUVQA3o53lOWZbked5IUE97YNeVoLWlAuxZLKRp2gDl08SE6Wqm8XjMf/gP/4Ef/ehH3L59e59iQl25OV3VX4MqSilWVlaaatQ8z9nc3GRubg7f9xvwZ3V1ldFoxI0bNxqgSCnFkSNHOHr0KHme8/bbbzd9ePjwYTzPY3l5mdFo1Fx3r9cjCAKyLANoZKprwGRzc7O5Z+M4ptfrEcdxU1FbkzfqPkrTFN/3m0rRaZJElmUMh0MAdnd3mZ+f58iRIxw8eLAhIzxMOqj7rVZPSJKkIUwIIdjd3WU0GjW2DjXAZoucbiAZ5IZ2ICgt9Jb7jAvDQten2xIkhQUlCAIfjGVhaZ5kPCI3EqxGa4M0kMVjyp0J2oCyTv5Y27o2r376HWFmuiLO5XBFA3DQJMD3ku2+UkwmCa2Wj1CCl770dS5+6nmOPLuEbK9gsx3UwjlaT3yG9If/B9IUVGgD2Io4UAH5zny3Kgk0FmGNg/NLQRQorHaV2sIL0Hgo5R5OY0qnxmDs3uelqCxrHMHBWo2xZUV6UARhSHjkPK1Dj4F0Y7ktUgZ3b/H6Cz+gJaEAPOUkmuuBsh7Za9ZWLQIsEIzv7zIXj/CkwA8CSmmRSiGDCFU4cFgpge/7ZPGE3AhEGKICj0JbMDm1snCpNQoJaLKkJGxFhO02Vka0Ol3yLMYWhlBp5vp9hpsP8JQkjZ3aTb/fodAGz/eJgoDECGItyUvDVhHSbXk8dXae5/OQC29fxpyAtx8knN/c5m6WEkQ+51ZWSVYOMVIeN3e3GN69wfLBkyipiDyfu1ff4sjJcyTDXYSweGEEQqF8n7x0RJtSa6QUDqzKM7QAa9vMLXqMh7vOfsJTFAaEp1jb2nGAVVkQhBGe57EzceOf73koz3eWOVpTFgWHVxeJ7q3xtzo9Hj1wBP/MY4gTF2BpFXpd2B3C/Vsw2CZ+ZY17p9rYIkaaksDkzGtY1obO7ojBOGbXSVqwkOWk1vJm4sb0MpJYVfLg5mXOXHyMC+fOks2fxpeaQiqyLEVKwdYwIc5Lji0tsLS6yocePUXn1ctuLrVg8mwKhKqfQNFMsnUlbP2LNgYN5NpQYNGVl3r9UOpqTG+IBLU6gNbOk96C53uEkSNmObsghZCGxEAhnHe6rsY/WUnBS+nu9DAMG1Wb5eVlPM/D8zwOHDiAEIK5ubmGhFbPXYcPH2ZtbY1ut4vnebRaLQ4fPoxSinPnzvGtb32LlZUV52EfBI3azO3bt4njGCkl6+vrPProo41VkJSSdrvdzD01aU1Kye/93u/RbrcxxpCmKcePH+d73/sec3NzfPSjH23Uf4qioLYIqIlgNbnu3LlzWGvxPI+nn366sUBotVqNHY4QgsFgwM/93M/xxS9+sZkzwzDk4sWLfP3rX29Id3Nzc3zyk59s+q+eL5VSDIdDFhcXKcuSt956CyEEW1tbnDt3jl//9V+nKAr+03/6Tzz55JOEYcgrr7zCk08+2Vxfrd6QZVljafSZz3yG733ve80cXM8vrVaL+/fv86EPfYhXXnmFD33oQwghGhszIQTz8/N84xvfIIoifvSjH/H973+fZ555ppnT6/siiiIWFxcrGxifxx9/nLm5OU6fPs2xY8f4tV/7NZaXlwnDsFH5ARgOthnubDLceUA82UHanCgMCAMPTyk85cZy3/doRT5l3iJQPgJDtx3S6Xbo9udodUK67Q4WURHkNEiJkoLAD5HKx4zGGF0AhizLkdJZC0hP0em2SdICzw8QuHWZ73nE8ZAgXEYYy1w/5PThZV67cYvhYMJcv02RlxRFSSAEUSDodSRRKEgTA0ikFKyudGh7gsh3igFR6NFtK9IhGCMJVIRvLWWpGccpRVm69ZhSzdhZ5glgEFKQxAlKeVihsMYwGU0Q3U71/BvK0jgyrTEoKTHWoI0mLwWlzpGU7t3Fj9Dacv3ONa7dv8d8e57e3By3Nu8z1jG6MHiewvMkngLfF/S6vlPXKdx6ymQJ1jPk420wBYn2yEQHvzVHqHwsypEDK7BZCoESrvK+UBJfSXwlwIhKgcCirHVS+cJZI6ELhHX7WFuReq0j85lawaUeN6cUAkbDHaQuQSmEVKioT5HsIv1lQFYcRDfO6tKtEa10ynLvVlBQmwT8uO2Yex951yKEfQQC1y4pwRpBbfUEsA9or/jcFTuyOl+tLTfFP9g3H0yD8+5n4OX0OpJSBmRejicV82mA2KoIaFNzSd3Y6WM83LZ3IwTs6x+x1xihJFKY/Ts0n90jZ9ZnFhWR5MeP97DCAlM/q7bbSqZPelidOpJopRZlvAhRxuC1sQiKIkWpFkgPWcbV+/hPo67w7jHLAM1iFrOYxSxmMYtZzGIWs5jFLH5i1PKjP23U1bQ1+DSLP//odDqN/O5fVpSjAXI+wFqNUj5KOU/Ougq4TpK4pKpBej6eb1GdNosXzxM/+AEyhTrJUVpZVfLvJV8UluTOHbLhLouL87TXr9K++yrz/Q6tuUU+c/EsL71+hTgv6LRDfvCDF/irf+W/YTTaYOPeOuKoT5IkeL5god3lv/uNf8T/8D/8vxFC0e21uXn1BvE4pizNvgRSXZEjhWh+d3WVorFQqFJRVZUJrsryT0lCSSnRpuTYkYPcvHOXIi/Z2R3w5puv8DMf/CTfeeGHnDx+lE63DVZgoEokVhk3YdhLBkkQ9blslWcyGARRGDng3xiyOObBzRsUrYCFYydRgU+pS6d0gPOzLcsCo8sm0V8WBb4fsLF5H4NifmGVIAjodHr0+n1AsLy8yukzFzBFgi5tBWqYOl/pfjdmrxpc7BFPjDZooSsQ1TZV7zXIrpRidXW1AUq+853vsL293VQu1koFYRhSFEUDSgNNgr8mGdSAgu/7jTR
yFEUNgaCRyK+qVmsQvAbV6/akaUqn46rHHv5+a8Dgm9/8ZqMgMA3yTNsG1NumSQy1B3itKrC1tcWZM2fwfefL22636ff7vPLKK7zwwgsNqUJrzc2bNxkMBiwtLZGmKXmes7Oz0xAYfN9nbm6O0WhEmqYcOXKkqRxN0xStNevr641agTGGLMv2ETDKsmwku+u+roGsGvivPbvrSlwhBHfu3OHy5csURUGn02lIUlEUcf78eZ544gkOHjxIEAQN6FMTPQBGoxGDwYB+v8+DBw8YjUaNn7eUkp2dHdbW1mi328y3PHYmhrmOA9xFt8O4tHQCxXy/TVKCHwbuUZGCsNcjzTOyvMT3Q0bjCUoKbBw7cs24IlkYS27ck+4S+lUVXZ0An1I2cLfgniWK+65FzQlAIvCE+96y3DC30GZzY5vv/rvf46+cPk3r6AIimMNmGv/kR7Cj+xRXvw6mrM7jHi6XbDd7NXGiUjKpQYaK+FCD/TpN8QIPZAuQzgnZlghvinijnG9zXaloPadMo0qL9STB8kHmnvoFRDTnKmd1Rpns8NIXv0Q2HhEVFiUq+fl9OEMlF1xV8e39k8UUUOwMkNtrtOcPk+abDiDOM5TnofMcgMJY0sJghCTwPVAe6e4unVZIlltKbRxQI5UDSj1JWRRIlaM8aPc6qFaLeDQAU7J89CiTdJFosouVIUWWkCQZRZrgeT7Ks6TCJ7HQa4VIP6DruxlpOQOVpHg/d4L0d99AbW2hypKo8PiriyvcSXOudTw++DMfpXvgCJ+/eoP3PvEI0p7n6quvkdx4h2jlIFt5RuH7yFYP6/lYKRFSOfucSmbaq4hUSimS8YB2t088GaH8EM8Y/KhLmiYYC1GrjQH8sI3UGj8I6Ha6WGtRlfS8KRLCnYT/Dsl7jh5HLa0iVg7B8iokKTx4AINtWLsFD+7Sjwdk5QJ9nfPpccnCqCCIR4RaUxjDBrDW8tkuDCLwGWY5xsKKkly3lgNdS7JxmyzLOXj4CHdTKErN9s42N69fYzSOyUrN0XlXdRu02qj5Re4HFfkIgS4Lp3BQ/bF2zzO8noGrp8IpERhLCWTWYFVForG6eUCD7iI2iGC4iS5zZ/djTQNOgpMOn8QjtC4RUhIFPpkuKIWlqFSAPCnxlEBWwJfFrROyLGvsc3zfb4Doekys1WJqq5uiKHjppZdQSnHixIlmjjl+/DhhGCKE4KMf/WijlON5HgsLCyil+MQnPkGaprRaLZ5//vmGLFDPn/U8dfHiRdrtNo888gjz8/N85zvf4fz587z99tscOXKEp59+ulHECcMQYwyLi4u0220OHTpEEAQsLy9z9OhRvvjFL3Lw4EHm5uaaefj48eMAnDhxgq985Sv0ej0OHz7cKDZ897vf5dlnn236whjDP//n/5zHHnussSS4ePEiv/3bv02n0+Hv/t2/i1KKOI75rd/6LdrtNu9973u5ffs2N27c4LOf/SyXLl0iyzI+97nP0e/3KcuSP/iDP6Asy8Ziqe6vr3zlKw3hQUrJ3Nwcv/u7v8vu7i7PP/88n/vc5/h7f+/v0e/38X2fc+fOcfToUR555BHOntApqmAAAQAASURBVD3Lb/7mb/Liiy/yj//xP0YIwSOPPMK3v/1tnnzySS5cuEAURTz99NONFUUURQghOHr0KFprHnvssea6z54926xVTp482ZAHp9cUuzvr7G5uMBlvg06IAo/Qd9X1bu0QEYaKXErarQhpDaHnISkIg4B2r0d/bo4wahGEERhDmiYURUkUhRhj0doSRB28rCSNcxASzw/Q2lDkOVIq8qSgFQWUVpClKUmS019YJk0y0lZGO1R0Wl2OH1zhlas3uftgRL/r4XU7WOssu5QpaHuC/lzEYNcpDiwthhxcCJiPBJFnHQki8JBCM4lLkjzk6IEDbG/cBquJJwkSR2bsRGFFAtZ4vk9ZGvIsw2pNp91ilGSMRxM8z2McJ3Q7HTw/I4xaWOMUBLQFXdmh5GVJXhYUZUG37YFVDMc7vH3nGpOiRBeWwwtzdOIR4yyh1Aakj5AC5Rm8AIgUUeDRavuuTXlOnhakaYYYxeSqS9Brga+RMqjWULKyd5BOCSmwToUlKp1ijS4pcqdcI3DvQNLaimig94iNlYJLrczm1A7k1IJkD8i31qKkh9AJKhtBmUPYRXkhtsgwQYi0zpIqG64hihFh7zBSqH3kASH2lAVqApisVSimQHi7h+Dviz3gfG8J5UnwZcVcnNpvP/FANEJybomzZ9/WHLc6qMXZUlj2rCeEgO2tHW7fi7A6JY4njHaHbGYrzSn2aAz727Bnr+Cuue7PvebtbfvxttfvwKoWF2jeja2tyBnV+g0rmt9FY61H1V8/iQiwd16sdQQDpbDC2Rv6ylLmI6QUWBm49uoCq0sEGuEFgEZaM0Wu/a+LGelgFrOYxSxmMYtZzGIWs5jFLGbxE6OW1P5po67CqiVIZ/HnH/1+/y9dWWLw9iWWnmiBlEipUCrAUz5FmVN7eFrrKpwEEiUUpRAYC/OnT5OvPWDrlZsuSQRNpY4S1tkrCJBYio1tJrev0Y0vEW3fQpiClaV5jrQ0G2+8yM79nA889zOUxuell97g1dde5tHzF4g3h2QLCdtb28wvWO7fvc3JU+f4jf/b/53Pf/4P2Ny8z9zcLuPhmELrxpfZxZ5FgJiSORBSYLUD0EylctC4Ktg/PWGjtWZhYY4szTl/+gRXr9+mLAtu3b7FxYubRMEcL/3wNT7ysx8EphKItehmJY/eKA+YGgYFKu92bSxlaShy51+7MD9H9NhjWK0JqurGO3fuEPgByytLrshZCpSnqqSYoCjKquKwYDQa0O0vUcQTEJJOpwvCSa/PzS0TRB1XXVq6ZF8tXa2NRZe2AXXAEQ8cbcJW+7nE3K1bt6hl9WtQPooiwJGfvvrVrzYV9bU9QVmWjZpDURSEYbhPXSAIggYor9UCWq3WPuWCGtSvz10TFPaSprYhRSRJ0qi5TKst1HYQQgjiON5n01Cft/5Tg1r1ddaqAEXlZ15XRhZFwYsvvtiAFwcOHGAymfClL32J8Xi8754yxrC7u4sQgosXLzIYDHjnnXeYTCYMh8PG4mB+fp7V1VWWlpYalYuXX365USCoZaFrFQQpJb1ej06nQ6/XY3Fxke3t7Qa8qckBNXmi0+lw8+bNxlJiY2NjX39aa4miiAMHDrC+vs6rr77KlStXOHv2LO9///tZWVlp+qMsy4YkkqYpWZYxGo0ahYg8z+l0OnskmbJkEBuiUNGKnFf1aJzitwIOrPRICgg6XSQGv9VCBgE66jLZ3WIyTAiDgqjVIRuPwEpCJYjTAuXVCf3aB7pKMlcVioKpRHAF1lPLBleVeHVfiSop7SuXOH9w8wGrBxfZeDDm9a9/j3Mf/CoXf3EZf/kRpOlhhSB49LOYdIC++0MwpYPs9yWzrRtXqwo9UZFbhKUxmLfWqQfYogQ9cfsHAYLA7W/KvfYDwvMQRoP1kNKC72GjLq0nfpHg4OMIGWCtwRQxm1cv8a3P/wkHFluMb6W0paisFfYq8VwbaJ
LtgjqR7kgck1vrePPv0Dr2FPntdQo3qrq2KIUuS8rUyXwHfmVLYpxaRWEEaV4grQVdkuoS/BClHHiWTGJabYMXSKySmLyFEHD32i2M9EFb2t02E4GzjxDQj1q0A4nq9PDHOyz1WmAtnu/hCcUcBhFI8HpYX5FIQfHkk/hXr9PKNY+cPMTZpy5iDpzCvvM2f3N5jt0i4Jvf+S4n5ufxs5j3tkLeyFJe2Vhn19vmQavDTtRG9eYqrw6BEgrhuWdSGkXY6pDGIzq9PkWWuTlSKTq9HhKLUj5xPEZ5HrUNR5pleJ7PeDgiHu4wX2T8bWN4/vR51NwCImxDXsKVt2HjLkwmkIyxO5uwu0k3ifnltfuIPCfLC2JtGEjJwBpiIfGsYSH0GBY5aV4gsCgpWfUVXzWa8wuWB8OYra1tDq3Os+BL7tzdYm1trSFEHQo8VKuH7MzjR112JynRyhJG22qsLJvnqL6ZatuSaRpLTUHQ1mCFI6p0+j0WllfQhSPWWWMIDp3nZmeVQ8O7jK/+0PmhVywZYw0Ggx8E9Do97lRS09JYWli2yz11AyElntwjHPieT2dxuSFj1WPzw+otddQAfBRFHDp0iJWVlYbQVQPm0yBTDU7X57fWkuc5Ukp830dKSVmWzb612kFZliwsLCClbCrqP/7xj5PneaOeAPDxj3+8Ga+llBw/fhwhBCsrKxhjePrpp1FK8eu//uvNfkEQMDc3h7WWEydOoJTiV3/1VxvSRRRFPPvssywvL3P+/PlG4eETn/gETz31FAcPHmyu4amnnmI8HnPs2LFmPo6iiBMnTvDBD36Q1dXVxoLo0Ucf5a233uLpp5/m+9//PufOnePkyZMIIXjzzTf56Ec/2sxPH/7wh7l27Rrvec976Ha7vO997+PQoUONncW5c+f2EQOFEPziL/4i/X6fX/zFX6TX6/EP/+E/pCxLul1H4pmbm+Mf/aN/tK+/HrZkKsuyWZtPExrr9UH9men1Rv3Znc17jHa20WVKFCgCz8f3PZTnVAqkkvhBAFqTKR8/CGi1AtrtOfwgImi1kL7nxgLpo3xFywspC+3GMT9EeT661IRhm3gyQRjwlVP5SBKIxxlUylBBq0WhIYknZHnJ4oEjJGmMJCD0DUcOLbDa73H3wYijBzssdiParYBMacLQEQSOrUSs3x/TafmcPrJIPyzph4JOpJibD5mb89CTgsEgR4s+hw+cYGf7HsY68L4sc7pK4fsBWZ6jVABC4ymPLM3wvIhCG5TnE0UBQdAhy2IkhqjdxViJEQolBbo0zrLIOMJakmQIFDJsURQld9buMJiMSXLN7s42RxYO0QkjhHX7W6UQnsDzPaQy+IFPGCgCP6Td7eOLHFmW5AYmcYHs+HgqQskAzwuRsiJni4q4KaWzjhCOuKQkeJ6zmdFF5oh42ikZCQzSGoSpSdEWrFtjO36vY/3JWmOoIWdVpOR8TBC2AI0shxidIP0WOh8igwPk6Zgy2UEqgVo6S7F5bW9tU9MYp6rxazC/Vnyydu/3PZ7Cj5MPGiuGmkEAiCl7Jx4C3JuPN+RLsQ+4l3K/1IGtFZwsze+eVBxZsnTDCeNUoZCcPNRitdDcHUBZv0o1hI79za6Wd/u3NSyCh9c8lv3vEoKydMpNtTpE/cy798aa7mD3zsX+eHis2DsH1PJx9fdRWkmLmFLnKE8irCRRAVYGCJ2BLSDdwRIgoh7CWqxQiOp90u7r/P+ymJEOZjGLWcxiFrOYxSxmMYtZzGIWPzE2Nzeb5OlPE57nNZKys/iLiboy6y8zrv7Rf8br9umeOusqcpQDhuqszXQ1jLUapA9CoK0h6vc58P6nie9tkD9I65pYtHGkAyU0rk7DosqUuQevIXMBvoeQkk6vw5kjq3Q9yZsbG5w+cRbpS3a3Y1555Xu0ohaHDiyzdv0OnBEIJbl/9wpf/M9f46Mf/1k++cnP8m/+zf9EkiSUpabUtlEwAEcokLJKsAkHcNQQhwMR60oX1277EwgHAqdoPhyOWVpaIPQiTp44xv1768RJzJV33uSZ9z/H669f4aknHqXX6zvfWrtXAVOheVToXeNr66qfwZSGWzeu8aNXXmmqFqMwIgxb9PvznFpepiwK3njjh3Q7HVZXn6PdahFFId1Ol8HukLzImUwm+L4iDEK67S4rS6uoKoGeF4VTKjAGKTyMFcRJRrfjlBO0LqtKf1ftb2wla1spLxSlq44vixKjLdporly52STQPM/j5MmT9Ho9hBC89tpr3Lp1qwGp6gR+neRvtVqkafpjAE9NNqi3T4PqnucRxzHGmAaIqRUKaiBh2j6h/pllWUOGkFI2pJ96/+nqymnLgBoMmlZQmFYLqKtisyyjKIrGFqHT6TTAThAEDQmsbkMcx801z8/P88lPfpI4jrlx4wbW2sZOorYy6PV6bGxsNBYJYRiysrLC/Px8UyFbkzHqStZaDrveVqsM1NdQ/0nTtCFZ1AoTvV6PY8eONdW/vu8TRRHdbpdaanxtbY3Pf/7zPP744zzyyCNNP9YS2qPRiH6/v89fvP7+fd9vgCDPE+i8hEgg/IBeJ6LbDUlRBL2IqN1muDOASDDY2GJj6xpzrQhlIez0MdmYItd0OiF6NMEUrnreOKeTCgavUrFSIOvEdMU4EtUYZ+pkekW2MsYllatCQCLllAYGGwOOnDmI7wvGccI3/u2/58Aj51iN+qj+UUgMIlomfPyXyXSGXnujqtyr0tXTmfE629+ADWavCk8IrKmS02WJkAqTJ0jlY/FRQVjZN7hj7EkYV2CC5+Gf/hDh6Y8iZIg1GlsmZLvrfPV//108mRNIn0IbwtDJMO8NghUwUM0EjSiLdO3yPEG2Paa4e5v2+Z/BmspeAZCBhzUlaVqQZ2OiTg/peegscwQ3FZDmmiCICKRGJzHWanSeE4QhaZoCgklZoFod/DCETosiz9AW8smIKPTReUIY+QgLrf4cO7s72HZEBhTWgilpzS1w+849Hj1+AG87ZTy/gPz+iGRLYnJNuLZOJCSi30ccWEYdO4O8dgUZFRzpdTlcFJx936MEvo8YHYc458LumE/d22J3OGbXGN5Mxnw5jVlfXMb4EVqXSKmqKtOSIAzpBIK1zS2k56FwCil1V0tVorVlNBwipMQPQoT0SScTRrtb9HXO3xeSz56+gOrNI6QPcQzXLkE6wU5GEI8hTdFpSpZMSPIUgSSVkgTLGEsiBJmQFFaTC0EC5BZKHBlAGktLCDY9+NSSpdAevidpLx7k+uuv8M6brzTP+Hg0YlKWtIlY7CuUNjzY3GJleZnx+pCFiuwj6mef2r18irzSzL+uurM0bi6OreVAp0O72yONR3i+h7WgwhalhajTw1terhQOXJWxv7OFLyGxlnFl36IEdHxBmjvSnJA1qchgrLvffd9n4cAhZGVnUIP+9Rjs+z61TUE9x9SqPp7ncfz4cXq9XrPe1lo3n6/nrGmlnBqUr8Hy2mKmPnYNoPu+T1mWDcmuHkNrhZp6jpu2DXLDid03j9YEsGmiX73ftAJDWZYsLS017TDG0Gq1GquD+pjLy
8scPHiwsUSq55Lnn39+n81AWZZ85CMf4eTJk+R5Trfb5dixY3iex3vf+95GHcL3/aaNTzzxRNMXQghOnTrVqAtIKTl79ixSSj772c82wP8v/MIvNMS/sixZXl7GGEOv12vaW1//tB3QdDtrMuI08Fj3w8P9WX92WuVgmpAy2d0izcZgIQo8/GpOlkI2sv4KCZ77e546UDNs9enPL7u+KLPq+TAUhQYsYRQRhC2k9DBAqQ2mIlvu7OzSqlRhpLTEownjUUIQeHQK6HR7JJPMKWfduUm720fKFsoLmV9Y4tzJw/zxD97gwWbMgYOLdHs90txnNBoS2JSTi5CfilCeTzcsaXsQ+opON2CuHxF5ikFuWdsp8VsLWOtsv0rr1jllniJsSZ5nKOlVCkIeRnqkeUbbDzGmRJeWTqdNnJVoXZImBVrnZO0IFSiU52OLBImHMcZZSwiD8EKU3ybORqzvrIF082GcJIzjAVIqNKbi8jmFE+lJhA8+bozzAp9Wp0crgi6GQlvCVKK6B2jNLaK8EKk8pPLceGQs1nhoz0MpD0965MrH9wPCMKXIUooiwxQlpiwwZVGpNVQqYUaD1mhjMNZi7N4YiZDV24qo/wNb4okMqeaxXpssTVCRgskmJt0mj7dR3RWnRtdddTZKFoRQDZGNCjJ3LxzT9zP7xhDR2AiId1U8aMB5pxHjgHJTV+pXawfsvv32BigeOtZDf2fvOLUqFNaihWI4zjh0bIXAZiTxhHEi2NrZQhAB6l2OVy+I6ne96Wup1Q32N+rHdpvaJsR+0sDUlVakhppQMWXXVR/jT1U62BtT6t21AWQb36aEImagFdImhCisMiAU0hNYKUFnSFFiVIQQ5v+UygHMSAezmMUsZjGLWcxiFrOYxSxmMYs/IwaDQQOQ/TRRJy5Pnz7959iqWUxHr9f7S1c6ePDSD5H9Pk/88q8hVg/hVQlgMZVKMTj/VCFkBSBUFcBW0zt6nEPPPsmdP/oeFAaBpLSCUFhEBfW1WpL3P/8IF1YUXssBrnowBGNptzssRLt86ojHtSsPOPu+U5w5e5AbN2/w0kvf4mMf/TlCpbn/zg3yEyWtVotbt65w7coqy6urLMwvos6fZmtrG53m+67NAko4WwBTJUFdhb5uyBSyypFZW0Mf7x5uH8v2zi7Hjp1k7d4d96wYwfbmJusP1ijKmHiiuXL1Bk8+ddFVVANO60FijQGpXFsqZQOEQNQJa6mYX1jG9yKyPCOOJzzYWGM0GrG6epADBw+xvn6bJB4RBg5477RbjkAhBUWZo3XJzVvX+OYLNxmPJwR+yKlTj9JqdwHnsWtMiTGQ5xkbG3fp/+wHWFqYc+Csdn7XLhFqXGLXVtKxWlOUJWWhMdr501sDX//61xtQpdVqceHCBa5cucIPf/hDLl261KgZTIMq0woHtapBbQkwXQ00TSCo/0gp6XQ6jEYjgH0Az7S/M+xVLHmeR5ZljY1AbdPQ6/UwxuxrX5Zlzfdek4JqwKIGF+q/a63JsqwhVAwGg8bTu/aBru0gVldX2d7epiiKfeeor6Hf73PgwAHCMKTVapHneTOOLy0tURQF29vbjW92fW3j8bhRktBa0+12iaKINE1JkgStNZ1Oh/F43HiGC+H8rxcXF7l7925DAKi/n/n5+QYQy7KMubm5hgjT6XRYWFggiqKmP9I05fbt2xw6dKghHGxtbTVEjqIoGunqWpK73W6jtWZzc5N4UjI/F9KanyeXHq3QwwYRQaDwgpDJaITf7ZFlOQLBUr+DSTPac3OIckJZlIStAOH7yNJZg9Q+7S5hP0WiMsZVJQaKoBOS7iZV3X5FRKp2l5UXsqlK/QUQeBJPQF5qRoOYA4fnuXljm7tXb/HN/+//xqf/+0W6p9vI9jI2tsjeMaKnf438tf8D/eAS6By0dtXeVlTgA1Vy2mCRDUAgsBRGVtV1BpTC6Mq+QgAYjM4Rwil1yIospnODrAgH8tDThI98GhH0HOHA5JTpgLe++sd8849e4BMfvcDdV29QGEsgRVVxuQfMNoQLuzdGqop4YLUbT3VSUN5/Bw8fbUsEoFNNnubkGrwgQHk+eVE4BQdr0AbyLEPrnNbKEpnw0bogDCVGa3SWAII0tShdMucv4nuQ6RCp8kqm2pDnJa2WpNWKWPAthw8uYpUi1gqMZpwVbI3G7I7GYBb4YphwY5xy/NYhbi8cB3Mdtb7OiePHkMcOwMEjiLdfQvgpZAbevoksDVGmYRJDloA0BGXBYiSYD/oMdyfMpwlPa8MfP7jPl3tzjMOWI/B5qqoklUxyi/IC8jTGGo30PDc3WI0tKsUdIZEW8jynKAp0nhJlCX9Pevy3Z8/jteag0BBPIM8gTbDpBJvEFGlCVuSkxpBqTSYEeeCTaU1iDQkQY0kFZAIKAYmxGCEwQlAaw5wQ3Bewiccf31W878JZeksH+eFrb7B+9yZSWMo8Q2J5+tQhpNaIQxfZ3FhDGpiMY8zxw9xd32HeKif/LCplg6nK2WlgqSHiISitcQoaxhBEkQPIjGnmbGkMB0IfM4wxusQYjdGW2g7IWgjClhtbjEZJmA8sk9RVqTZDgQWNRQpBq9Vm8cBBxttbzXxTz0G1ZU09h9Tgdk0CqBVgarugmiwQxzG+7zdKBA8D2bXSwPR8V89XZVk2P2vSW33+migwTeCathWqzzE9f9XHqAkGtf1OTZqoP5tlWWMtJIRoyHG1qkNN+Kste+rz1UpB03O4UopHHnmEKIqa+bHepyZx1GS++nx1v08rHdVEgpoMMt1fNVmibl9NpsiyrOmvmkxRKx49TB6o+2SapDitBvRu/V1HbY328HtWmk7IigIQRNrgKb+aBw1SKJSUGAxS+eD51Tzro3wfpKA0upqiFMoLCaTCWINSPkhFaQ15nuPJACEsnqcIg4jJcJs8yzHWKbuEgSBJU6J2RJxq/KjtSE1lyu0bV1AnT0MvQinLuROHePnSDe5sJBzdHnOw5dOOAnQRkbdy5sdjTi4qUi2IWpIo9Gh1Atq9Nr4HeZqwfn+XrViysNRle+sBVhdoYUiygjIvKMocZdtYIE0T+t2IXFjmF+YqqwJFGEh2BzGjyZilxXmUKNGFxpMFaA2BIPDDaq1b4CtB6Cui1jxS+ewOdxnFI4R0tmbaWDYHu/S7XQfkCkElJYRE4kmPIJAoz8f3fPwgot1t4fkeWWnwtUKEC0g/xAvbIJWzMpJUNkcWrQ2ep/Erm6owCCiilrNpKHKn1JLnzg6myF2/lJnbZixYZ5fmpkaBQe0Dq0U1/xpdooRA6jGKefzuEkU6QQRdPGFQFBTxFt7cYUcOb4bW/ah+A3DLh85Rj0HAtMeSEHvjszvm3v1evydKrLtnqaidDeG3XoNXqjJTpJ59ZMaptjUNr+znGl0Ga1jfHHNBthDCKegYK1nfjLFlilAdHo69Q0+RThuSxTQRQOz1gd17D2zaI5ySg6C5yKofzb5ziIbFsKeG8GPXxn4lBfcR07SxPlYuQub8EZM4J5ASgyVXzvpEmQEmWsWXGlGWFGWGKIZgCkd4EP/11IMZ
6WAWs5jFLGYxi1nMYhazmMUsZvETYzQa/QRm/Y9HDa5duHDhz6tJs3go/q9gr1AkOTe/8W0Wjxzj8IeeR7ZbLgEqJY20JXtJESkktc9mkWd4C0ssXXyc5O49dn90HVF5rUrhKo0PnFzgAz/3NAcPLyKVwiQp0g+QYUQ5GtFqRcRBiyM9wc1bPyI5c5DHnnyc73z7RQajgjv3r3Pq+GmSwZA771zl+IWzfPBnnibLJhw9doyz587yzjsZfuhjhnvXJXB1L671rtzZVgA6lkpGvAbWqyRQ9dl3e2ocQUAStX2SzLK0vEhZFKysLOIpRZomDIbbRP4Cl966wuMXH0Eo6dJcVmMxjoJh9pJv1JYP0lkwSCk4dOgw3TMdPM9HAFmekWYZYdgCLO+8cxmvqmjPssoyQKoqKZ9T5DmD4ZDNjS26vS5aF+zuPGD1wBF83yW+o6iFEBJr4eCBQ2Al2ug9CXpDU6kjhQNnHNCgKQtXZWlNDXiIfcC97/vkec43v/lNrl+/3gAAxhiyLNsHkNTgRV0tWIMG01WDDwMMNdBfEw/G43Hz+Z9EsvJ9nzRNnWd81YbNzc0GWKnPXQMRQAPO1ISCsiybdoADBsuyZHt721UtV6BErVBQKxjMz88zHo85ePAgcRwzHo8fSjqKfaSJ2vd7Z2cH3/dZWlpqlBKklI2qQV31aoxhMpk0/bW0tNSALv1+nytXrhDHMTs7O4zH48am4uTJk5w5c4Zut0ue57RaraYPlpaWUErRbrdZWFigKIqm75aWlppq3uFw2Ow/HA65cuUKJ0+eZHd3l/F4zNzcHJPJhMlk0pAUasJEfb1lWVIaQ3uhSy4VQeRjlIfyPaLePMIajJBOoSMvUJVlgmp3UdKSZwYVhOiKHFNOUlAKTwkHSIp6HNtLIFsrsEISznXJBkn1PDSZaKgABYOzL5FSYI3BV85iAaNZu7HGsbMHuHNzB63hta99nwOnPsezf2eR6GCA6KxgJwLRP0Xw9N+meOv3KO/+EKkcMGQ1YEyjzCvwKhDWESaMNvjKJZAdSODkpZFO9UUoiSktKI1QAVaXlVoDbkw49BT+xV9CtFcACabE5BO2rrzJ5/71vyf0BZGnyMYp2lh8VdVBGtdnElcdbs0UCQgceGtx7UZQZgXlg9vIsIuvDEWuybOEVAuEkHhhxGiSIMocSTUW5BpjM+cz3mlhrZMLp63wo4g4SRnuDvGUoiU9bJFhhSIMQzIlyCVkeUGrpWgFPtZo8qJgUnjsjIacOnIAXwmGccraxjZpUdLPB9zQI154sEuwcZPjx07B6TMkl97kMyuLsLICa28iVIwdF+gjLV7z2hy6dYcDOgcNZDkEJaIQiLhAWcG8B925Hnd3dvnYaMjxNOHftDo86PbxgqAirAiKvHBgh1TVOKExOneAurUoz3eViqYiwFiNX6T8HRR/5+BRPBFA4ogPNksgm4CFIokZTUbERUaBoFCSzPMd6aAiimXWkFpDYiyptaQW8kASl4YS0BaEtRz1Fd8Sjly1evoxnvzA+/lPr/yIYRwjtK3uB825E0f4jb/6Yf7kGy9y1/fd9+MtoyYjYm255cOjqXae5g0JrGau7Kl9NMBMta3UGm1haC1e4DefE5UHiUCw7PmkZd7MFdZqrHFAV6EtRpdM4gnGGNqeoKXcOZQQ6IpAVI/5SngkecHW/ft4YauZG2pVmmmyW/1vNVBdA+k1gezBgwfMzc0142lYKXZMqyPUQHtdpT+tUlDPifWclOd5Y3dT/6nnw7od05+dJtpNKyrU+9Vzc92OaeLANJg/fdx6Tg2CoJkXlVLN77VKThRF++yIAM6cObOvz2p1m/q8Nalt+tz15+u5ZlrxoT7uw2oEdTvqfeu1xLSq0rRq0nTUn6/tHOqYVkyaJinUba2PW/fvdJuKSvbfGkFelJTWVAsp165Sl0glUVKi/BDph/hh6BQMKmUdYxw4XpQlhS0qVSeBrxyRQoYeZakRykNIz63pdOTmKhUyGU8oCk237VMWBSoomMQl27sJ7VAiBzs8WLsLHGKu12F5aYn3PHKCr71yhdsPhnS6ik63S7vlYW1IHocsWkmqFX7Uotdr0e95BGGEr2ASj7l9P0HbOYQx7O5uOuUCKYjTjNF4xIpQeL6PtcK9+4iSMAxIlaIoNcJahCextmSx3yVLYtqdNtaWFFmM9Dy8QjgZeaGRQiLQ+H6A8kIsgu3RLnlpAR9PORrx+nCHUZZgLXgVMc9YgVU+fhjiawCFpyL8IMTzI4KojaiscYz0UVGE8AKk8lGqemYr0NopLmhK33c/K2JrWRTkRUFZFuiioCwKdJGhi4wiyyi8hCJNsEJgyDGlI1YZq6mV2KbXLMIUCCmR3aPI1hwmG6OTbUzeorNwiDzeRfbmkX4bm26CjBzxe4qQNK1A4NRn6q3TlftinzvCvmURAkH1DugePDwhyIqcJJ5AuI9bVhG8JIj99iXNgae2Wbu3SmtoCcI0W6yV5GVImhWuFRVxM80N1skvNYeuVeweft73CM1M/fs08UE0c43jXVhqeZ6t3TFxnCBkp5mv3MfrzrK1cM+PXdfDa/39sadwUK87EYJIabywg7RdTJGh0i1M/ADPj1DSze1aa1oiRQuJ9ufwMJRMETX+K2JGOpjFLGYxi1nMYhazmMUsZjGLWfzEqAG+nzbqJNv58+dptVpN0nUWf36xsLDwY0mRv+hItcQOM27/8R8h220Wn3iv8yVVqqlCFLZKuHsSJT0HRnseeeGS7X6/z7GPfBS9s0t8axsEhP2Ax997iseeOU+7GyG0RuQa4QWYNHUyx1IQBj5hq82Jo6e5vXuNd169wvFHz/LeZ5/i2rX7hJ2QN95+i8fPP0Y2HnDz0tscv3AaIQJu3bzKpbcuce/eLUajSSWB7vpTTSXhHWBsqtyRq8hxFb0Wberqtb1Kl+l6kzqUp3jyqYt0uhF52uL40YPcunaD5QMHOXzwGG+++TpFltKJJBsPtkjimE63W1XVmOYcSL2X4JKi8UHF4pKDxhAn7lpUBU4pCb6nuHPvHg8e3KMVhRRFTpJmtDolutKQz/OCu3fvMD83z2OPPsb5s+cpdUmnM8eTT/0MYavjrBc8nzTLiScTtjbvI8sBxtiKSGCa9gkpEFYiyj3LhToBV2ftrLWcOXMGIZxc+IMHDxgMBty9e7cB3utEWy3hX4PPNTjwbsl8YN/2OsE/DVLUlaQ1QDGd2IM98KU+VqvVaiwIrLUMh0OGw2FjRVDfKw+fcxqYmZZpttYyGAyI47i5T+p+mEwmTZWl1prbt29z+fJlhsPhvjZNt7MGaOoKyna7zWOPPdZU0mqtCcOQD3/4wxw6dIjFxUW+/vWvc+3atUZyWwjB4uIinucxGo2I45jV1dVGAeHq1assLCxgrWVjY4MwDCnLko2NDeI4bipzB4MBQjhP8DiOGQ6HDAYDoiii3+/T7XbZ3Nyk3W438tg16WR3d5fbt283qgpBEDQEjZqcopRqtvm+z7ETi8goIggD8DzCdoew00KXOcIaPFMyHIwxVlAYS9jpYrOELHO
557jslkQq/Xm1XxnzlzhtXVVRYWFhgMBpw5c4Y333yTfr9Pt9udifkaJ4UvfOELPP7447N+bDAY8Pu///uUZcknP/lJvvGNb9Dr9Th16hRxHKO15ktf+tLM3SHPc44dO8Yrr7zCaDTiiSee4NixY5RlycHBATdv3mQwGJDnOdPplCRJUErx5ptvUhTFTLSxuLhIlmUopXjrrbdYWFjg5Zdf5vz582xsbMz6pNXVVT7/+c/zne98Z+bOUJYlVVVRliVra2t87nOf49VXX+XXfu3XOHv2LACvv/46b7/9Nn/37/5dNjY22Nzc5KmnnpqN4z/zmc/QarU4d+4caZrSarVm/f6v/uqvzsR+jWix6V/nxY/N8vNRQlBH1Vh8JJg0Nansq6ADqZAqIElaXtgjHErVdKNwRFFIFSfIMAIdonBYXVHkFUkrQoYhYRTQCROipEU+HVN1p+iymI0vhQoIlUQGIVZrnNVUVlFMBft7E9KWxFlNnLZJktS3eUajhKHIDUEYsrC0QpKk7G5tI4IQayuMztHG0WnFTPOMg/EhUdwijkICpai0odvpYHTM7u4urQSiJKSqNKGEdifGGEuYRuAs02lGTEAYlIhAkI0O6Hb7SBUQKIHWBXGoUJEXAYWtCBV1cVhcOcaUI4QMCAKDs40jRYBSgjBOECrAVBWltoQBpHHs3VJ0yXhcemcc60UocRRTlKWXFKgAqQIEXkTgpEUIM9fzedLa1fS4FMqPI6TAWUltyuOvqXVYW4sOwpAoSYjTNlGUoILgtrhvzlbI6BKz5RBSgLVEKkCkLZw1qHq/RGsZMd3CtY/X47LbgxE33kRMbhL1T1ONNur3/VjnXriX+OCeQoS6nN5vyyGDAJwGnYEUWIyP5hOudj7AOw3MnAfw57MZZczcCW47IIhGbDB7p3aUqP8qSz+elEoiA+/GZ+9wULhLSCCak9Ic622BaOPcdOe5eL/wQCD89bNTnAwQWIRUc3vV7Ontn3OX4459EdzbPULU4lUadwRqB4xa8RAoQ2GaSA3vDpWZkDTIyaqmYMShXI4WMeYex/bD4kh0cIQjHOEIRzjCEY5whCMc4QhH+LGhyQWfhzGG//Af/sP78rz/snjxxRdnE5///4g0TWcTtT8KlpeXefDBB/+K9up/PcIkoX/hfk588q9xsHEDsz+qPQ4gqK0lW+2Ihx48j8tzKlNBUjHd2+DG5S2OnxqQ7R2iAkFrbQVX2/eaSjO+tYmKI6J2G5UmfhJN1HNX1vmJPRXgHOzu7DFY7PtJJuWtul1NeMog9MKDtIXJc3SRE/Z6LETnGW5cJ9/dxBlL3O0TpglBJ6UnFtneqbASIhEwKTOqasrYGuI4JssynIROGHFaVlx584u8t7XBsLVALgXV5jaD7U3uO9Pn1OoqabfLaDwkcIZiNEQIiQWsDBhlU95++5AoSggiyeH2kF530U8mGo01BucMqJrQFn4SsMhLcD5rNsgrhBT0wgCjpzgEWVlQlPovuoT/1XEv2/15wuaHcQeZFwbcXeUPtycB52MU7nZCaMQAeZ7PnA2adc+voxEWzFd2NuRO83vz90/91E+xtrY2cyRoHAaazzZt5+nTpzl27BiTyYTLly9z69YtFhcXeeihh2YuKfOTuE2laVmWfP3rX2dnZ4d2u821a9e4efMmOzs7nD59ehbh0O12+cxnPsODDz54x743+9LYYTfkSHNc82KQ5nMN8dO07Q2J1ogEnHNMJpNZ1Wmz3ma/5y2nm9fmz03zd7P9Zl+bdc/va7NfViXcurHJwfYIhmOOLXdZXeshA2/5HGiDjGKqssBaR9RpY7SfoC2Gu4RCsHUw5nBSkhjLoKgodm9SHExIFwYIthDSOxjURWk4vLCgygqssOiq8lWl3jTFZ3A7gTYObR15/dM4ga7dBmZ3tgDp/ITtB1uK5TRAWufFR8o7wUcWlAPhBDqDmwgmzlEYiIRA1zuWCC8+kHMVfrcn6GuBgBCsDRKOH1/gV/77/wudU48w2h/y+pd+nz//+rf49te+y9/+J/8T3/zSl2B3yOa45HBa8N2ruzx8YoFQShZUiDKOOBBNQjXSMWunqS17JXc7G9yWWsyqBesKQnDoKKUyliCMyfOS0bggDAPSdkpWlBjtcFojpMBaR15ULPRTDqcG4RydSGCEoBV5AklIibCGSDoKpM+DcIZEOKRStGJFrxWiMoWQgrLShFFEN4mxeo83rm2i4jYnT51hdXWNtNUiSWKiMETb5ogUxXCMNRpjarvvVou9/esoIUjThF6n610a0g6TouTKm9+ht3KMw+GUPM94/NGHWV1dp6pK/uD3/yPf+PqfkeUFZ06eoNVfor8wIIpjiqlGBRGtJCHLS4zRtNotWknKlSvvEEQtL67QGikkvcUVyqpEG0tnsMSnf+YXOHb8BNZaLjzwIOPhAYd724yH+xw/tsatzVtk0wm3bt7imedfZDIeIpSiLHJSmSClj/C4eult9nZ2uHrxNU6ducCJE8dJqei0YkZ5xsLSknfX0AYVeFcSoRRbG5vs7+/SWj9JICRf+7Ovcf+v/nVOnD5FjkDkBVZJdm5eo31sndWVZZ744DO88b1X+faDD5Ndv0a2s8WDMmRNWbasZlwqKieopEQutrGmFjvZWgRka6cDBEZXnqSREiV9rndVlVihPCkjBE4YdFl6NxUhQQqE9FXlNO2XFJiqgnpcp4KQQAVorSmKgqqqZgR/0x81LjFKKcIwnLnEzAvmmjatcQto4g6KopiJCxq3mKZfaGIcGtFXt9sljuOZSM05x8rKCuvr67O2NwgCTpw4gXOOF154YdaW//W//te9yMRaTp06NWvrz549S1EUpGk66wtOnz7Nxz/+cdI05cSJE+R5PotIMMbw6KOPcvHiRc6dO0e32+XChQucP3+eixcv8tprr/Hkk0+S5zn9fp9Wq8UXvvAFnnvuuVn7v7KywvLyMv/iX/wLfv3Xf30mPHvwwQf5J//knyCE4KmnnuKFF17ge9/7Hp1Oh3/1r/4Vv/qrv0q322VxcZHV1VW++MUv8jf/5t+cCf16vR5PPfUU9913H7/xG7/BxsbG7Fidczz66KP88i//MoPBgF/5lV+ZndtmXPLkk0/OHH0aAUkjjJt3sWj6s3lxYtOnNffF3S5m2howsiZoPaHshPQ2+8aLUEH5+1Ta2+2oNUgZkKYJSavNOJ+CqUA4wlASRhEyDAhUQhRDmLSJ2wNMlVNOh+hiinCWosh9lJm2WBxKKqIgwEmFGfQ42N/D5jn9Rcf4YB8jvKDmYFiwttoniUImk4zRxIAIcKZCIYmjkG6UkE8nrK6uIpViZ3ePMEo8ae40WhcIFRHGLaZ5QVkUSKFZGrQwZUkUxF6IVhaYMqeVpuTjEWEck3balEVF0gnQlUEJSZblJEmMrEV2zkn//AvpRR2WOnYBhPQiAZzFWj8WpI5jKauSyb530wkCSRjGaOsojaXUGhkEpFGIdSGlC2YdnxQCI7x7grCuJsVt3R/W1fBCIoS/nkiBohlXNt2kRQYBcZKStFLSpE0cp6ggmPWd
zbOLAGtiZCNocQajCyQGWe3jykNIOyACRNyHYh/SpXoQXWEPbiBMgVp9ElPuA3WbSFOpf29BwZ3k/pwQoZEiusb9qNFGCDAlUjrMaIcg7hI4g2sEGMJfk+bwpJD+O5Cwd3gLNOJG7h5T1GO0efcGAaRpQBB44bZsrASsmYkDmpXfFhbMXQduu5x9P3eH2ZJzyzkBQoYk5hBLiKHCEtGEdYl6gObmxkL1qaI5subcNV+F7uX6dnuc509cFEa0laU0OXEgGRlFJCXSlSAUxglKGxDKgsLGRHaEExHOiplQ7y+DI9HBEY5whCMc4QhHOMIRjnCEIxzhB+JHsfNPkuQO0YFzjrfeeos/+IM/+LHsSxzHvPjiiz+Wdf3XQkNI/qh44IEHWF1d/SvYox8PpFSYKGT16Y+wf/ki1/7o8+SFQQhXOx0ITp5YJNYT8tEUGQgcGXvv3UIbTa/bIT8cIU2FFAFBp4UuKrK9A7LRAUI42oMl4oUlgnaClIFXHQgHQiGUIkhi9g8mRElIyy36CSQpaxtVR1M6JOMQ1Wph8xxbFAStFv0TZ5hs3SQ72AVtwAxQ7RZxq83S0io6z9FWY3SBLjKSbsp0cogLEozQVHFCoQ0hlvbmZcan2hRhwnjzFsdbLTqdHir2RG6n00MFCmccsic4GB0w2t9h89qUvWGHj/3UKsPxIdpURKGvPrHaYoxGIv0EWH1cOEuRG6pizM7NGyzkBUmnx0LaYmSZ2X2bpgLmJxzN5PvdZHie58Cdk3gN7v59Pne6IQDujmiYX09VVTPioCG4G2L77ur8+e011v7Nv0ZQNN9mCiF49NFHuXDhwqzKtSF75gUEDcnfOBsMBgM+/OEPz4iMhtyZjy5olnXO8frrr/OlL30J5xyLi4tMp9OZ68Hm5ibf+ta3OHfuHJ/4xCfodDp3CAgacqg5//PxEs32G8eCpn1vCJP5Y2leM8bMoh6stTMCqvncvJNC87nGcaEhsubX35zzJl+7yThvCLcmLzwMQy5++9ts3NimJeGRR4/T6vXJi5IgCCkP9yjDmG6/QzaeEKUJDokVYKschGSc5QRJTF7tUWCxQiFtQb6xSbq+iuBthJAIWeczO9fQB+iqpDgowIExFm29k4HPY3ZoB5WF3EJpPEHvYGZP7ee6fRVbX8KzywlpGlHlFQ6fNx9IiRIOG1j6RrIaOc4+fpY/efUqE10S1JPvOSCknylvquX8ndZQ/n6bofL79r13tjj/lZd48VceozQJr71+ne++fo3FWPGn/59/y9L6gN76Ar/3Z2/SjxSjyZTvvVsSKcHJTswgDDGmwgl/7CJSXnJWN7vzE/dKCpy1M/cZ7+4gsNbrAGrNAaPRiIkKSERJ0kmJCs1oUhImEVGnzWjnAGsd2jhCB0kkMUBRVCy3/HmW0gs1hPCxDXnlUNISygArJC6vsEIjg5hWHFBZRxgGKKlQyhPEE0LGtsva6R5LSytESYxr7n3pCaJQKuIophQhWWWZZjnOWpSEojRsbe1gjSMMI6J2jzwvEQImxnBwMKJzawMVxgRRwomTp9Ha8NKffYk//eJ/mlU4X752jT/+T7/Hr/3m/4HeYJGDqmJ5/QR5eYUwnFJpzXBvmwc+9mlMlXH52nXuO/sQlS5RKqQsK/Is4+lnP8LP/cJ/A8LHYJw/f54b197j1vCAIptw7Php8mxEMdyl00qROFaWV4ie+jCbt26wuXGTXm/AeHTI1rV3aLViJknEpCi4efMy3d4xpnnu7eCDgKWlBWxV1A4DgjAIqIqC/b0DCmvJ93dY6PX53qsvk//y32Bh/Tjr60u8+fY1zvcHrPbbWGmp9m/x3vVr/Npv/Aaf+53/wKtSMHSWcnuTR2TAKganJGOl2EwTbvTbyFp00Ig8jXOzzG+nNZX1MhkjFQjvgqFt/bRIn6uutbehR/qKWCnrtrt24cD6iBlfhe7v/7zMaez5mzZ6a2uL9fX1meCqacebtrdxRWja4PkYhkY00DgiTKdTWq3W+9riOI5nbWxDhDfvN0Kxu0VkrVZr5h7QxOU0fXBVx0Q0yzYOCp1OZyaeaPDEE0/MXBY+9alPzfofgIcffpgHH3xwFhXx0Y9+FOccv/RLvzTrx5r9+pVf+RWMMaRpyv333z/rX6SU/L2/9/dYX1+fXc9Op8Nv/dZvIYSg1WqRpikPPPAAQRDwwAMPsLCwMPv8/Hqb41dKceKEF9489dRTM4Fc0xcuLy+zsLBwR9/WiEPAizDm+/xGDNFcu/n+rem/5wWV88LIu78XSOfJYmfrODHnq+GVDJEohDIY69+3zqDrCBgrHEXpkDKk3e6Sj0fkw5y4qqjyKSpKSKIIWWfazwsklVTYJEGXBSpsYXSJsxW20oBBZ2NUnNBpJcThEuV0SFUWGOvb9FAIAiXZ3R2yuraAkCGHBwdUBgbtiHGeI6OQoix82xK3iNMWq2GE1QV5ock1CCOxpkJbTVFq2r0O0/EYqf1zFWDJxjm6KnDWkuUFcRRQTscIHHG7ja0k1hhUVDs+OEuUtqCOQgCHMRVSRcRJm/HokDQNabUTplODQzEcFSSpI8tKnNXkleQgK1hfXfZRENbSSWO2sikGQ6fTQSlFUYFwClF/TxGyri4X0o8fhHeskL60vv5Xu/zgl0Far4vDR8dIJGEU00patNIOaatNEif1s3+7Jt5fS4k2FQg/bsZoXDlGSoVK+ig7ptp5C5UuoKIuCu9A4XQJ2S5SBqiFB3C1SEJK6UWDDUHfkORzuP33vb7f3hYDNE4HXrQFMoiQGETgI3yM9WI90Sw3f1zNCGZGzntXKMD/7eY2J2ZLz8Y/wjmQ4FyAVH6MImqRh208+er1iIbhv2OlbjaOmT/Ou0UIzt0WXzTfvCQgXUHlIlw1QgqJwo/t3GyAdHtzYna+a9mEaL6PvN/hYDZOri+EqM8tQlAYCUGMNZY0LLHaEISWvNIEKiOSzfXQRGaCEyGGGMhw3FlE8qPgSHRwhCMc4QhHOMIRjnCEIxzhCEf4gfhRRQfNpCj4ydDf/u3fZnd39y/87GOPPcbe3h43b978vsssLy/z6KOP/qVI+58U3Gty8YfBxz72sfe5SPykQUhBvLLM+sc+wc7Ft8gvXqonSj2Z1I5yrN5EdvoICfn+AaPtHY6d6Pps1GlGFEiy7R3MlqXIxxhtCFSAsI586CtunO0h48hXJCmJqKtYVRiyuDrg1s1dFk+f8JMvMvBTRdZXgqEk0llUkmDLCqstotSoIKB7/BTBzjb54R6l1YRuCVlXixmpKLJsNs8kBSRhTBgEGO3IsoyyzCGvaGU7DEb77LiQaQAXzh/n7P1ncKbyk2xKetFAZTgc73Owc5Nsv6SYBjzxcJ9uP+bm5hZhEBOGEc5atK4nl62vjvJMnvT7LyFIFMYUmKKkcoe0Bj0mhcBYW090/uQ/M/OOAXCb3G8m5RuSfB53VxrdHYNQFMWs+nP+/aZda8iHu90M5nGvcyeEoNfrzayrgTuiGppqyBMnTvDCCy/M4gvm3RWklGRZNiNigDsIjUY00RAhzfPfvA6e9BgOh/z
Jn/wJw+GQNE1pt9scHh5SVRX33Xcf0+mU+++/n+eff35WuRqG4R3ntSG05rO3m58N+dSIDprjaMQQzTE3BBnwPjKlqfRssrqbyIVmXxqhQkPQNNdCaz2La4jjmDzPKYpitr0mN705hk4U8OCZJdrdNnG7RTHOCOOYajImSFsk/TZVUWKcoCoKdD1h78oClKDT77J3Y5u1bko5KXAOtDaMLr9NevIUUnlnA6MdTgisgyAJCYVgMs4pjcFasLUgYSYqsA4DPmbBOrQTvmK1uW/n7y0HjySKY2ltxa18FWTjnk1liZQgVILeUos3bu4zzKqZkMDWdga5c8iZ30yzbndb7ACsdhKGpcE5y+/9uz/kW1/5JoWDW7sjolZK71iPK+98l3yasbTQ5bHTK7x2aYPIWWxVMa0cV7Th4fOnaF2/7LePnyd3uJrUuk1ICMDYWoimBNg6q9k3zbMoCGcdGkmWeecWFSfINMWNCvZ3hiRpjFSSNPbte2UMxjgODnJWWopKe8I/kgKnJIEUEMZkkwntCLQxdNqeNJKBwhofyaMLQ7vVrgUFkiBQtNNl+ovLBEGEMZqyyInDEISkrAytMPaiO23QeELYWoOQgkpbtBWUWlPUBK52UE4yAjTC+fZ8dzJFRiG9XptJr8NwknPx8ru8fekqYRBQKo2Qkr3DQw73tuj1+lR5xpmzF9jZvEUnTRhOMq7fuM5nBl02WgNayRaH+9u02j1UpCjyQ6RwfPITn2Bra5sgjHjm2Y8wGk/YH2VYEVFowcaNK7QDwdrKAuXGDhee/ACddsLWjT3SOODsufOUeUaVDckmB6yvrxMpxcV3rvCR5z6IxFJax7TUSBkQ4tjb2vDtgwBrLNPxmPF4ggZcVZGVJW484satTc4sr/KBZz7Arf0Med8Zkr2bdBd79GLJ5/6Xf83h3g4///O/wOf/83/milJMpMTs7/FkBauh4u044k/ShCUVklQl3l28sab3ZIx1FiEdUtq6jfViLi8QwjuQWDcTF2hjAIGr2x2pAqTygjYVBJi5p6ya61OaNj6OY9I0nVnsl2XJ/v4+a2trM8K9eQ+4Yxw97yzTrLMRXjW/N5g5vtTOMneLDxoSfF4oNl+Bf3df2ggk5iNsGmedJElm65gn64UQZFk2a6OVUnf0WfMOQE2/MB/T07T3DWk/38ccP358th9N37K2tjYToc2PkVdWVu5wEwiCYCasmHc3mo/umXdZStN0FkE034c152z+WBoRXXPdmuvYHHPTtzX9VfPZeUeh+fMD+LGrLpDSIazFaIvWFYn0wgMnLGAw2keICUvt6BLgnBcUySCi3eljioKq0ojpFBFEqCBGprEfG7taCKYkQia4IESGurnYWF1gy5yqyKmyjCrLCNKQOGojnCSbHCAwjIZTpJLEkSLLSqwVLC4MiEIv2BG2RIQhO5s7rK50yHJFOB2TTTLiJPSCLOtQIqIoA3Z39hiPC6IkQgqLNIaq0ITCcTDKKbQDZxi0I4y22LB2kMsqwrDEBKBCiSlHRHGCs4KyMiQiRFjt2wIkzmlwmjAOcYeGsoJcG3QjDCLCSkdZOqIo4L7FRdppwnA0pig1URQSBpIgCtFVyeFQI8MOKoqoGW7AIoRECoUVpu4Ja/FJI0RwtSihfk8gcaKpNhcoFZAkKXGakKQpadoijhLvwFJXtjcV7gKBthWCrBYC5oSywhgoK42yEc5MMaMtnL2JMCWUhyQLZ5BJD9k74Z0XAGM01M4vczz+bP9vN1QOz1/fHr/P3Aya43Uwi1bAj0cEFmsrov4ZyvEmTrb8OGB2im7X+ns9QOMKUK+j3vad+yJub3+O/kcIL1JwfnwgLUgpCCQoobntwDAvLqi3PnMXgLsjFe4WHNzesfq6uMbrIcRIhwuXMMUeMky9IHLez6AWGjRCidtyAonAzP6ed20TtV/CbXHp3NVxjqwKaAUloYRYWioLwmmc8cspqUnklImokNEA8N+NcXfGyf0oOBIdHOEIRzjCEY5whCMc4QhHOMIRvi+EEHdMfv5F6Ha7pGkK+AmHl156iX/9r//1X/i5559/nt/+7d/m7/ydv/MDRQcPP/wwS0tLP/T+/CTi7qrrHwZRFPH888//Fe3RjwezCgspidfXOPnTn2F68G/INoZIAiIluO94m/ZyHxUnVFnGZGebIFL0FxaQUQjTMUpFmEpjqhKsIwwClAxwpsSUOeVkiIxCQiWhmYQX3q5ZSElhHG/dcjykLaLJOnXgjK9s8hN5EhmGyCiiscpsJgbT5VVkEJId7KAP9wjEEipNCNIEASRZylgKnHW1BXeBLnKsrShGE8IwobNwgmR1BaXhipR8JwC7n3E+MIRSEKjAW66GIVpXVOOCqrScPtnh9IU19scHVFVJv7OIVJKqLH20AgB1+a5n5jDGUmlDogTgyTNdaIKqgzvcRC+cRKoQpe6dw/qThHniYd5loCFdGiL8bveBe1mMzhMI8xbUzXt3x5v8oOiGe7krwG0xxN3OCc3vg8GAn/u5n6Pb7b5PbNAQKw0p1RxHU+169/oaMv7u2AHnHIeHhz4/XsqZ28L+/j6XLl2iqiriOObMmTMzQUMQBERRNLOCbrbfLNvs6zxJ0pyvppJ1PuKgIaSKokApNcvfBmYW0w1RE4YhZR2dMi9KaIivhlhriBmlFHmez0i3eeeHxkWhsSr3DhEJYdzDCYUuKyrjLYlbS4sIIVHOMj44QAWKoNdBBiH5aAQOolabvWsbLPdjjLO8enmHsyoAJZheucjSz3yUIApQgcKMLKXxz2LcitGVBllbYLvbriIN+S6ErwTVDv95bTBQCwAEVtRFfQJSKXhqIUJqg4sD7nv+EVQc4gwcXrnBzrtbKCcRseS90vK9rd1ZynJTPWnx8Q0FjnBuQl5QxznUFr5pEiKdIysMMgjYOhwRJQlJu8viYkRRlRwbLFNVG9zcOMAVBZ0oZH+ak1uNQpJXFV/bPOBnBiuYwy1vY+88AeZq8lYJMXM8mDdpnkXl+J1HBLI+j468vt+08ZWmYSBI04jpJKfIcrr9HiaberGAtuS5oZ3UpJ2UBNaipPO55EpitSFUXhASKwiFwSnQpSZKIraHJcvtgCAIcc4ipfKZz8I/j4EUCBEgSEiTeCZ6EcK3TcZZrBMYozHaIhBU2tvtg6xJbMmN994jDiOSVstboTtDHAZkhxWHWzAZT+kPujxw/hwLS0sM93a5cPoUH3r2oxyORnz1j/+Q0yt9HnnwQdq9BbbOPUi0cZNwfx9fJ2t4+qM/hdUZV65eRaqQuNUhyyYoFYCQbG5u8vzHXsSJgNIqWp0+uirBVKTxea69+yah1KwuLaKnO+xef5tBJ6JwbZwTHFQF7737GlEofZxAFDOaTLh+a4u1BUMoU0ZZQRhFTEeHHCiIIv9sG10xHk2otKHJNs+yKf3uGlfefZv7T7/AJ/43v8LxC0/w5uf/v3TSiDLPiNI2F9b6/O7nfodiOuanf/az/Omf/BGX1o+x0Rvw1u4WqysdvlJkjKcFaWXIyopEOBCSUPn7SluHUtLHntSiF9/e+n
vV1gJFJSTGeNJFSYW2DosX6AitCYOwFo04XBihlPTRDRZKU87asqtXrzIajRgMBkynU65evcpgMGA8HnPy5Mk72lrwji15nt/hotP0gU1a89K4AAEAAElEQVTf0JDY845ATdvciLKccyRJckcf1rS3TYV+U20fxzHT6fR9fWdD1M/3y80y8wK/+ZiApu1u+tyGkG+OpdnX+ar/xkXh7v62EQU0AoWmr7i7354/D/Pvze9X04fNC/ze5y4wF58gpWRtbe1925gXYDTbbF5rtnH3uGT+HN0tqpxfbn5/hBR1hInDCYuuNFU+hc4AFcY4YRE4rIUg9C4cQiqcEAhX90VIZJQSd/pQZSglvJDAWKzVQFDfF4ooDsH6SnNMhbP1Na6JbOUMwhlM5byrQaQI4wEyCLHlCJwlL0viMMIayKc5euCdvW5tbLGw2CfEsNru+OiGrIK9PRYXl7C6ZH9/iHOWOG2T5yXtWCFlgsAyGU7RRUksDO12yntbYy7tVZxbTIhViTj04qD+Qo8gkOhSkzuNVAolBboy9PsDTD6hKEpUHNb3iiIIEsIwImn3SDoV8X7BCEGea9IWRFg67QSbhByOc6yz7B+OmUxzwiCm0oI49s//ZJJRloJAhQgCT5LPX1vhx+nzMkPvYeCjFYSQ/nzX5L1wDoRvr6IoIo4TkiQljVOSKCWKYmTjmtV0pEIgHBgT1NX9YKuCMApwpUShCZVFdtaRAqqiQAlLaLr+mVQBbrIJYQuiDs4aVO3y4o+j2e/bdH5D+gua7yY+Jkk4OXMkEHPuBM2KHBaXbRF2joFSqLgN5dCLDppxMNwmv+9yGKilxLNXrGucnTx57z87pxbw0jNcte8jcowXdUgJLt/EyRUvsJgJI9zcx9//3M6u4V3fDd7neiCEj0TRE1y46K+LChAmxxJBEDbWBojbNgvNUdX7MRdIdXebMX+Mdf/lj7R5yRIHhkgMKVWEVZKwKqlkC01IS2ZUOHS4SjusmBaQyupIdHCEIxzhCEc4whGOcIQjHOEIR/irQ1OF9YNIuQZnzpyh2+0CMBwO+Uf/6B+xv7//Az/z2GOP8c/+2T9jeXmZa9eu/cBln3nmmR9JBPGTiLsnLH8YLC8v88QTT/zEV6sL4a3/nRIce+o5RjdvMPn9P0BMHceXEk6dXkK129iqohxNycdDusuDOiqgzj21liqf4JwjCiOfgSqkJ9HKimpyOKvqCbttnJDYokRGAUJKKm3ZyKDShrCe4BN1rIIXCgD4ajCZxFBpZkUr2lc8xv1FhAqZ7m5S7W7B4jIqbaHimFa7jxruUhQTiihECYWUgA3or6yTLCzy7fcOKW7cpHv/gyypiHEQ8W1nKA+u8Gg/RUoLZTOBHlJVjnY74tT5U2gch4d7BEFMFCXossI6X0fpBAihaiazroFxEAooRhkjs08kFZ24hag0bvNlzIkHCaPIV5v9hGNeaAB3Pivzecn3+txftL47LEh/iLbsh9nG91tPY/X8qU99ahaJMi8qaKr1m8iCebcE59yMvJ93ZGiI/4a4SJJkJkA4PDyckRrdbpeNjQ2MMayurpJlmSdw85zDw0M6nc6MrG+20QgNwAsHmu3FcTwjnJptN6KB5nw2ooT5qtRGZNCQSvMVtFrrWdVrQ+w0TgdNPE8jSmjiFBoyrrH6bvYliiLCMJxlnIdhyOKDjzHZ20VnU2SY0o5jnDW4IEZYS7a7hRCKzuICKoowRebPcRgxPhiyvtonqxx/+tomUwuVA2Oh2LxG2BoQdyJkJ8YYSzEqiNsxQUsx2c4oy6oWHXgHBFNXnvnKal9BbRyUDi8yqKsFm8nhuhSO07HgvoUWlCUL50/y4G/8BvHKSVR3heu/96/Y+6f/BqHgQMN3tycU1hHjJ+BdvZ7Gllk74Stl/c1VT8iL2TR95eCxsytcvnlIp5/QakW8cWvM/ecH5HlOp5Wwv3NrVnG3vtTn2vaYEImUviJcO7i5u8u3RJ+PSYVxlsoaYoIZP+CFGMKTBJ4Dpm6KPRkvRb1fHgaYGkVeWqDESYlwAWkr8eeyrMjHY1rdFofDjMw4pDEkErb3KqJIIQNBIATOWKaFJ6CEc0glCSNf9YpwCKEoECwttJEYoqi+x4ScRRs4LJWuUCqklaYEYS14Uwqkoiim3jVFQZKkZJknfo11KBUgA4k23g69LHK+852XGXS6rK+v++fGWfqdNmGg2L12hZ2NlOX1VX7+536RP/7Cf2J5aYEkiXjr7euEesrq0oD1xS5Jt89nPv3XkCpASAVS0m2nxGmLK5fuZzwecu7+h0k6fS6/8watdpeq0igVcN+Z+5BBSKfTYri/jSly4iigk/RYffZZXv76V6nyfSoL7713kyeffIJzDz5Op7eIK8fcd98pssmEIPR57a04wBpDluXYQDDOSk522owO97FlThwHKOlJnmKae1JK+GturWFxaZlLb75B+VMfRzqFlIZv3dzi7HKbGzu79CO4b6XPu9tjvvf6G5w+e54HH3yYPH+FPJC85BZJ44S0t0S8eZO9UUa/FTHJJjUJRr392ka7qQwVt6tXvRuC/2ms8YSN8SIahPT28847eFjnyXNhffY7gLYahCBKurN2/8yZMwyHQ4bD4UxUdnh4yGg0YmNjg93dXXq9Hmtra1y7do2lpSW63e4dBHYzDm9I96YdbRxrmvb14OCAr33tazz99NMMBoNZG9yQ+o2wbPZM3tUnNs4/cRzP2u57ueE0jgFN3MK8YLAR+TX7NS+YmHcumCfz5/uJuz8zLyRoPnf3WHh+vfPL3GvZ74f5Ze8g/++x3nvhh93OD1pufv1KKmQoEGiw2kct6IqqKklaHYIgxUgvjHHWV2l7Bw5/bYw19b0qCcIQbXzEidYVRTbBOEiSFmEU4+PK6kZZO5RUdT9eIcMQoQSV0BhhCEOBUwFxFOFUQBAKrI4JkhatqiTLMpZWOsSRt/ivjKbX76GU5Mx962xv75CkKRKNEpLhZEovFWhrEViEsEQhaAuRFQwPc0ARhpJep814VLI1dUQqoN+J0eWU4bik1/eC9yrPsQiUVQSBqDlYyTTPCAOFznYIWwOkAGNzrJBYW1CMh9iqxOHothJA4KQiTBKM9gJGJRVVXqKtoN3ywlEU5HmJQqC1JWkvotI+iACwfvxhvbuBwNa9YK0wrAXTMgiYxS84h63jlBoxt1QBYRQRJwlxnBLHKWEUEYRhLZ7x4jhkQ0KD0aIpmiegJG1HjFWMaq+guitQTKiGN3HZAWW6iEv6hEkIQei/Z+gJbnILMd1DpMv1fVmPBOrq/FmPPRMizD8fcva6dI24UMwMD8DhtCFIFpBxF+k9DxDomUBDOHuXW8JtgYFH0zb4vZHiduU/uFrIOf85r+6cVgHGWCIhEVKgjWNSxJA27UjTLv5wz/Xdy9wtVPajG4sTof9O6wo/8hMhzvoYwsamwDWDuLnju3OfxNz7c6ffgRWuPlw3u1TWOfqxppsqDqdLjCtoyRIpcjrigKJ0JLGiECmxKsAJElVRlIrcJn/hsX8/HIkOjnCEIxzhCEc4whGOc
IQjHOEIPxANIfbDEHXPPPPMbJLwt3/7t/mjP/qjH7j82toa//Sf/lMeeughvv3tb//AGAYpJR/60Id+5P3/ScNfJl7hiSeeYGVl5a9oj35cqCdErEFISdTvsfbs82y9+zb7r1zi1IkeJF1K7bCTnOnBPlY40sEipVMYXRG0WiS9HmGni1CeQBHCVyOaIkepQ4rJAXp8SCGkz1ROYj9hIyUiiIjjCIOv5BJh6F1Blar5vDkDSimQgZ9otaby7zgQ1iJ0TtTtIMOA6fYmxc4G6eoJVJoQpy06/WWq8ZBWuwc1CaGQyCTGJSmXb1yiu6hJr15mdWmFUenQYcAbmaU7vcG5k6d8xRlgtSYvYW19ibDTZWNvm8r4nFjjLEVV1VU/Ph9WBngCRQqscThniYKAsrRMp1M6ZUEaRoDFbO2ze+3zaH0/2prvd+F+onCvSX9gVsF5LxHBvT7f4G7y4W6XhB9VgPD9XBXmtxUEAc888wwXLlyYkULzDgjNsmVZEscxUso7iPwoimaVrEVR3PF5KeWMdGoEAq1Wi7W1NTqdDkEQcOXKFRYXF/nlX/5lvvnNb2Ktpd1uc+LEiRmBVBTFTMA1X/HanOeGHGsEBI2ooSzL2b7Nk17N+/NODQ0xNV9h25Bm82RSs8x8FIZSijRNZ8s1UQxN1ex8dnbjhmCt5dWLb3L65Dq9U2dxuiDbuIWroJjmlHnBYO0YQZZ5IqAsPREeJxRZTksaksEi7128xnKsuOpgZCwLRUW5v4POK6JeFxda4l5EUVlUICkmBdmkwJi5OAW88KC5VRphgXaQGwNCENQV2Lk2NcEgCAQ80Q1wWQkCisOM0a0d0vs+BAvnKKYlBsit5eXdgv3C+ul055B+Pn023yxrN4PZ3So8mSpcQxjArb0JZ86ucfLCSbY3d8mNoNWO2dkf0k1Det0u0lUYa7mxccjm9gis48lza5Ta8taNHUaVRkrB6/tDzi21OFtlWCfQ1hHWYqkZ/3B7bt2LqKRANc+sdciaJSkR7OcaF0sCAWFpMIEim0xJWzGVs1SVAWd9O32Qs5oqDjNv6W/rZ0wpSQHEaURRlIRKEdaO1846AuWwgc+VtsUUJwVxN8ZY71SgtcE5A0iMtThXYZ0mFTFJEqOkJ5oD5cUzWaEpi8zzd0rhzTAUURRTGU2R56zedxLQ3Lx5k5e/+youn5IqQa/X5dS5++m02ihdsXMt5/4Tx1n7jf8dO1u32Lx1k4OdTaSzSOHIxgeAY3H9PoIwBgFGa7IsZ2tzC1NOWV47RVVVvPf6qxhdkPRaSAm9Xo9Wp0crhHdef5t3L11iPB7hqpzz962ztrrEY088ytvf+x6vvnmZt969ipSCyjoCAQu9Dk89/RxbN68xOtxjuL9LO025ef0m7fN9qnxIIB3tWJJNpzijsTpEKkkUBFjjRRlKCkx9/fvtkO2tW1y/tUFUHvDqqy/TOX6WzeEWgYC90ZiFdspjF87w52++y3de/jYiCHnyiSd5/fXXSFstnBA89MijVGfu49VX/hxrS2QQ0laOSuuZ0EdQCwesvxuttfggkpoI53Y7rQQ4ZzHWEdfOF1J463NtLabSKOe8g5JUBEF4hztA4xYwL9paXFwkjmPCmjScTqdcv36dvb09ptMpaZqys7NDv99nMpnw6KOP4pxjPB6zvr5OnudcuHBh1jZnmRdPjUYjjh8/TrfbnUXhzLvaNP1P81oTFxBFEWmazvqheacBgFu3biGEmEUWNEKD+ba4LMs7ohrmo3Ia0UQjariXyGBehDAvUgDucAi4V5/8FxH589u6+70fdTx+9xjirwpSSLSwWAvGN3VYLM5anAUZ3Y7lMFWGtQZtHJHyYlacBGswQiPDCFsodDHGmAocBKG/hmpWUW6wpsIaDc76nyhA4JwgiBKUVDhTIaQijCMQAZUAKwISEaFNRdLrIzDoIoOqpKgqoigmjiN2tzZRwle9a2M4HB2wuNgDJEmakmUZo9GYbDoljb2zUdJqMR0OSUPlXURKzWpL0UojkliRlYKDSUm0NUJJQaebEgjn+z9hCOPIi5rDEOcMVpc4a71ISxucMVRFxXQ08cKaJACtCKKYJAlwCIQKCQLvvJblhlJrnAxJwhCHJZtMMZUh7S8StZcRcRdnxUwo4tBYW+GExQkvTBJSeUFaFBHURL9D+Ggea7BGe5Ec3h0tDCOi0IuRw7AWHAQBgVK+PZLCiw+Ye7bwwoMiG5JNBYQpCImrSlw+Ilq+QJUdQpkhoxa0l3GTLWT3OC7qItrrBOFVRGVq8r+OqZuj/v1fohZwiZlIwNUDkUYA0CzcjE+ad6SKQJc44RBB7AU0zkdD3f78zEBhzrGA2/vhbT3qZ9wvYuzcOH1eGyEco+EQpXyUhayVkJPxEJn4+2LWFom58dMPwO027c6INSHcbOP+/xo/OnRYoXzsn7wdj9CI46hFmnduoxFY3Cn0oNagW2rhabPt+jpEyqBkSV4qjLW0A0s2shSVIkh7qGBC5iJyrdAE4KCfBoTWkZd/eenAkejgCEc4whGOcIQjHOEIRzjCEY7wA/HDOguEYchTTz0FwEsvvcT/+D/+jzOb1XshSRL+8T/+x3z0ox8F4I033pjZu94L3W6XBx988K90ku+/BBpb8h8FH/vYx37iHR78MTk/YSYEKlCkKyusPf8Rtq9vYBdP8+r+AuPvvs2ptTY9obzt62BAtncIZUF/eZkgbRF22nWFofbWoEYTxCEqSVCHEdneNsVoD6QicgvIwIsKhFJEkUQKhy5LXFDnkMoAoaS3nbQOJ/Gvq9qKVgowGqTzE4VaI7KMIEnoHj9Ftr+LHu4j1QphEDLoLXHghM+rjiKsNTipEEnMra1Niu0RSTtFxyktXfDBZ57mMM/5T999hd1ywq91Dlnu91EhHOztE4QpyydPMipyRtMRKgwRMsDTIN5Boslhd825riffXF25GynISkM+nWLTFkJAWIVUBwtsTbYZTTLg9mSf+6Gm0v7rYv5Zn6+mhB/ereBe7cXdhMrdr3+/9dxNjMyvZ14Mcfr0aT70oQ8RRdGMZGmI+obYb8j0piq0ER9orWfH2VT1z1tQN5buVVWRJAnGGNbX1zl27Bi/+Zu/yTvvvMPLL79MWZZsbGzMqmTjOObTn/40X/7yl2dChkYc0FSlSilnRNQ8yTMfpQCQ5/nsHDQODI3Twd152UKI2fHOH38cxzPxQZ7nVFU1c26Yt9GuquoOgUGTld2QaM652e/T6ZRyL6dUU9ZlxrVXvs25Y0ukSUAaS5LeMnmWEaoAnKXUFm0d0lVErqJ3bJ3rt/bodyJyaxkbx3ZpOWYh7scMr14jXVthtLnpM63bGqkCDnemNXkJt21T5m8eZi4G4EUJQsCp5S7vHWSY6nam+FIgOd2qs84DxWRjh1f+2f+LJ7QgHgy4+sVvYKxjV8O7Uz030ezXbeo2wwLSa7GweOcDP4cvZpP3BTAqLf/uz94BJTm73EUmFYu9iEAp1o6tU04OWVrsEccRG1tDnDZEoeTG7pBpXqKNRTkwWIwT/PlUcyKWRNZhXb1z
zcS9aKodQYYCZ/xkuwr95LtztfOAFJRIjFT0Yy8gyCe+MjBKfDuQO187OJyU3NgvWI4kYSipZIiQEm0sUV1V3+QWh1ISSlA1yaBCReVAyZCWEuhQoSuDNSVK+diEIPRRC3YWESK9I4PwUUJSBQTS52RLIbzLQ1lha7t8KyS6Kul0ulTacPXWJnuTgnPnztJptzl96gRbWztcf+8qw41NjLWcPHc/URDSTQ1mOiYNY06dOsnWxg2ef+7DVKpF6TStTo+qKhjvb9FdXKPV6UHij3Uw6PH6G6+BkLx75TLZZIySkiRJwYEMQja29yknB2xubdXOKYZsYrh89QbrKz2mpcUgePzxD9BbPsaxEye5dvUKg0Gfnc0bLC6tsn78JJPhHguLS7S7HQ439ugOltl673VaoUQ5g6sKSlvhKkUch6g0QUlHHAUsKsXO1AsQiskhuwcHvPryn/PC048jkISBoOiuMBlPyA+mLLZjjvVTpBS8d/UylTH0+j3W19bYuLWBEJbXXn+DlX6HXn/AjRvXaAcSFUCgmvZboKSs7esd1jmEUzNXkpn0wJn6wfLErxO1UEpKjLNgJUI6RL0OZ3XtZmFxdZxM49TSuMl0u10uX75MGIa0222MMSwsLFAUBePxmCeffJLr16/jnOORRx6ZCbd2dnY4ODjAGMPbb79Nq9VicXGRwWCAMYY//MM/ZDAYkKYpN27c4OzZswghuHXrFjdu3OD8+fOkaUqe56yvr7O1tcXq6iplWdJqtciybOaA0LSv8y5nk8kEgOPHj8/a4KIoZp9tIoPgtghhPvKg6WeafrJp++8W4n2//vZeosH3NbN3iSXuFih+v+8O97JL/0HL/TDL/jigdYXWFc5aTN3POuPFUM5ZrAMpQ1QQg7ME+MiEoiiQQiGVIogkxhp0pYCAonRoO/VjYaHQ2hIlMXGSooIArMOaCpzGGoMgIAhjZJp4YtuU4HR9f4cIFSJlQKU1UlmklhTlFGcMUgboMicQjjQJCFSIVY4gjpBBwmg4QcqAIiswOkApQSgFFSFx5AUM1gquXd9koRWhYkVlNIN+QhxLjMa7M4QKWv76TKYFrSQiiAOss+jS+SiBUGKrso6fCHFOI5zA6Jx8tIUpJ8hAEmiB0ZqyMFgBgVKYSiMQVEWJtuCcROEjDPLcuzYpIIhjks4yQWsBqSLvzmZAygpdlTgj/RjAWISSKCUJwxgZx/5cOu8Up60GXeGcxFIggECFRGFEHEZEoXd4CpSPAlJS+e8wtcBA1CIRRH3v43BVjiEkEAHCVkhbIVcuUE32CDsr3mnAjLFlhuys4ooRsr0MQvjIAlGPd2G2DaCOgaAeW9x+4bY7gZt70/+wwo9NAKyrcOUIW4xQrR4YjRIg5e11iNnW4I6B1GyN3HYJ8IECftzTKDBn+1I/t9bS77UIowChhY+uCAIGvUVGwo8rZuJnmBH4zVZnz34tEJgJS2fnYG4/Z6LTum0KUgJ7iJAJTudIZzAiunO9d307+36iaekEtvnejd8XYf2hNsIDnKOsJFMdstSLOCgMrdBihQKXIYQlFz2MVUSyIBQlmQ45zARr7YBMH8UrHOEIRzjCEY5whCMc4QhHOMIR/oowb/39g9Dtdnn00UcZjUb8/b//99na2vq+ywoh+PVf/3X+1t/6W7OJx1deeeV9NqrzWF1d5eTJk3+pY/hJwvzE6w+DOI55/vnn/wr36McDb0Ppc5IRAqREhRHtkyc5/bFnOXn2EaJsSJh02b45IllbonviGCZKsHofjCWIQxrnUees/2ddXdnlCclkcQkZhGR7W1TjfYSAIO3gTMD27gGjgz2SQFCVBtp+ZVJJhKizwkVdduusn7AUzlc8IUEYMAYnpJ/oNd4aNRkskO/vgTGoICJNUqzto4uc2ISAwKiAUpdcf+8Kp5dK2ut9xsMhoevRaSV88r/5Ve479wAv//Ef8tLFb/G8lIRZwGhccu4Dj2GjiJ2NHayTRGFKFKV+YhGHcxLnLIEMAOkJDiHBeSLMCQEOTAXZKMcOLCqKCWNL/8RDbHz5SxSFrxBvKpGsu+fUXT03J2bX9L+0OOFebU0z4dZqtRiPx/fMRf5h13v3z/nP34t8UErNKlW11jPCfb7Cf57gSNOUj3/847MqU2DmCgDMiJmmYr/J8m7ahfnqUfDtb+Mu0ORaN/va/MvznDAMOXfuHJ/73Oew1rK7u8sf/dEfsby8zGOPPUZVVVy5coVTp07x9ttvz4ihxi57vqI0CILZcWdZNlv27vN3t6V2IzAoy3LmkNAIL+Yzwlut1kyAIKWcxSo0zg3N7w1x1TgoNMKNRuDQbKcht6Io4qWX36Dfv0rsDOcGKfGpFW9tWxMgOIsDwihmfDCk3e8hi4qo12Fn6EVv41yTl4bcWG6UkscBjSW7fpl0bZXNty8TdbxNbjEtKAuNqyum/Tyvr1Jzc89YQ7pb57AISuvYHGVMS41pJu4RnGsFJAqMtVgDgVNMN7d5+X/+fyKSkPH2PqV1XBwWjJptzk186/rWVd7MF2nrirc6Rtq7MHjHhUqAlX4nnTG8tTXkg/cfIwgU9913gmwyYqHXZm19maI0xHFAECgG3ZS94ZR+O6YyfsOZMVgc1ycFm0lKy5UY63CyuV/qtsc3w7MqcyFAKIGpLBJP5CMhi1oEZUUSCyoDQRhijUVPc6IoJGq3GA2nbOxnRFIihWFSWioHvW5AmZX0WgptBVHg23NrHVEscdZhrcM6Tdhp+TbWFmjtqznDIEQGgRcSWDF7boPQ39vOmprgoBadQep8P2OFIm21KA+HPk88CJlOJiwsLNJrd9g7HPLIQw9ijCWIYvppi95gidOn72N7e4uD/X32xhn9lsNYx1RDXhWE/T69wQIPXHiQJG0xzTICNOgpZVVQTg9pt9sIoXwlq7M89dRT/Pvf+Y84B9PJmLKsKKuKIs8ZLC6zvb3N9779NU6cOFWLnzRGG4pC8+2XX6copuSFhmKP+y/cTxjHnDrVRQlHWUZcuXqF/f19725kfSl2q92i021zLZ+iBEynU+JQeechAWGgCMOAKApZ6LVZaUXsXLxVtx0hQaB4/bXv8vRjD9DtL9DbP2BiLK/v7KPLgmMLbRajkqVBn72DA5YXl/jSn3yRv/E3fpkwCjk8PEBKL2JZXlhgb3+fyWRINwroBHJWo2vx94DAuy3gXP1sOoxthEGeKNK2YbR85ALWoZ0G4UUnIRLpzOw5F0JS1e4tTdvZiLOMMZw4cYKDgwOSJCGKIvI8n7037wKzuLjI4eEh7733HtZa+v0+jz/+OJcuXSKtq8IXFha4efMmDz30EPv7+yilOHHiBEopyrLktddeIwxDbt26xa1bt6iqijNnzvDee+9x8uRJyrLk6aef5o033uDpp59mf39/5sBzeHjIzZs3WVxcpN1u0263eeeddzhz5gybm5ssLCzw5ptvcu7cOQC2trbo9/sYY2Y/m36k6dfmYxfmBQXzfW/T7jd9y3xffbeAYN4pYX65edHb94tNuJfo8IfFX8Yh4UdFWeZe/mLBGYe1oKuKoih
IrQDhq/CDoIVAYHXh7eVFBcLNxs+zUZzwbZQpK6ppialy3PAAFXdJOz06/b5/FqxGOIMupvVnFIHs1FEyIWB8FXoNFYQEQUWpK6QSJGlEVZVgM0wo0UWGcg5dTLzjiy4Zj6c+4kk6knZKq9tjfHBInIQoByIOOTw8RClYX+oSGE1VZvR7LSbTMe0k9BFHhcFgiUKBM4bJOKcdKbAhURISdVqgAqyDEIE1FaYqcFWGCXybV05HUAvkp1nJra0JvW6bIteMhhPa/TYqjImSBJuXaONt/K0uMC4gjAKiVosw6RK3F1FxCirC4VAGtAprVwWJ/4LjI9KCIPBi2sgLThESYxyyqtBOgvH3lxSggoggjAhC3w/J+plSUt7+fsNtnt8BYiaAFJ7ctpag3CWkj+quUo12iFsDXJT6eznoEsgpusgBA6aA0IvRlAzwogZ3p8+BqD0OxNyzVDsNSHfbjWBG2ONQrmbHAWkN1uUIEWInW6i4g7PlHc8zc5/18P3b7bgHOydwqMexbn6MXxPxNUEvpCAKA8IgwFgvUHACMquYxT/MdBL++1IjLnD/P/b+M9i6Kz/vxH4r7HTSzW8OeJGBbgANoAOancimGqTUQ7LZHIojasajUlm2oj/4g2xLVdaXcTmNrZJKlqWqKY9L45kRR2GosTgSqWaTTXZGR+T4Am9ON5+w0wr+sPbe99zbAIjuIa3uqvNHXdz3nrPD2mvvFfZ6nv/zCBfIDDTHDxXdlc83JLUffC0JY43GEAmLqXdItMObKbXLGlKDP7SvEL5TsTpMQmjL4rtqb43oDnNefVcXSkkqGwysZjWsxIrZuGRaSayI8HhKn6CFZRjXzErJ7kywMvjR3/8WpINFLGIRi1jEIhaxiEUsYhGLWMS7xnvNsD937hzr6+v85m/+Jn/wB3/wrts+8MAD/J2/83fIsuDB6b3nlVdeedd92kytn/T4YUkHx48f5+GHH/4TLNEfV7SgW8igCqBkhIpTzj32JMsqIrZjesM+pag5d/wYajTCS8NguIUpmpUUZ0KurmtIAZKgTuAcNBYBcX+AQDDdvEEx3iNxHhklXL64y+qKJBKWsgjZpkJFyDhGNFYKXRl9A9SJoLXtBaAiICzgdWCqbcCDrIczDqE8kdL00x4z5yhcA/rjuHP7OrbOWT+WcPyDT/LqGzcAxerqBlmvx8c//XN84NFH+a/+1l/jK9eucWw24dgTn0T/6V+nGO+wtHMNMdlFzfaIyhzKHFsWCGvwdQXO4gWN13MgT1jvMcZTzzz5TJLFHiElQkp8XTAb73P12rWwmAloAVqCcWEJT4qwIKVk8B8PAG1zK5zHuPC5+9HXnt77EzQHFMAPggBZllGWZQdOz2/zw6qHHN1/PlqgJI7jQ9meLQCSJEmTGXw4g1MpxRNPPMHq6mq3fUsogAMbgRZYaQH/JEmo65o8zztFh7quO5WCeanu9litLHav1+P111/n7NmzaK158cUXD1kofPzjH+euu+4iz3N++7d/m7/8l/8y165dI8/zbpujktjGGLTWAWD1wd+7le+eL785Aqy15YUDokVVVfT7/a6uWzWE9nxtluy8rcS8xHdbb63twrxqBNBZMbTXvJJoht6xHEvuuecUQnpUkmKMJRGeae0Q/Yy8qBiurlKM9zh+fI0bNzeppzNU0qMoDDrWZFJwq7LklcEYmFx9g/UPvI96WuKspcotVVEHlYN5X4NmofjtICkvwgKyxbM7M5guQy/wAs5mCoTAOhDe470lRlFOpph9h/VQGsflwgZ36JCKFxbGG7ChEhA1fZr04cdZ0bRhjxMh27BdLxdA7iHKIoyrOXHyNGVV0u9nnDl/ivc9+hjXr93gzYtXmE1yEq2Z5SVSCDItEcJTOovzUDvLZSe4oNosvFaZRYAI9iBStcoHILVEaEnjNIFQwfZmp3LoREMcs5Ro8tqjbZDCLvOcqN/DR5pbM8OqFoyycC39nsZbT6wJi+w+gMh5UbPS13jrkDqcPxv2kCrC1xXpsE9Z7QfAREq0VHgFtQ1+3Gka1B9UFgCaQIohqAYIQRRHCARJBN4FKxAlFWmvz+3bdzhx6i7iSPHWm2/wqac+zKwoOXviNHv7EzSG5fUlltaOYeqKoizY3d5hdzxBTKZYU+JnM6q65rnnXmC5n3Hs5EkmVcnt65c5vjrCFDO0cBw/dy866bGzvUW/l/HYo4/yzLe/1yiFlEgp0ErirKGY7jHe2+WGCJCINSaMd3iu3rxDKkL7rcoZ/dWS7T3Dti05dXyV4yeOk/SG3Lpxnctvvol0NTvbe6jeCrgajyPRAu8M1ngiJXHWUhSGsqyJ4poTx4dMy7rT3ZGNndJweZXdvT1GoyESh927ST+STMi4vlewMig5e2yFW5ub9LKMqix4/dlv874PfIg3rEPYit29Pc6fOcX66iqv726zOXVE/Yh+IoOFB2F8s66FesK8INRPaCvOORx07ca7YIfkaMmLHutc4Elq1REX6qZvcs6RNc9Laz9Q1zVXr17l2LFjbG9vdyoHrcLNeDwGAumsrmuuXbvGyZMn2dra6ixmWquG+X54fX2dJEnY3d3trLiMMaysrPDggw9y8+ZN0jTlp3/6p/ne977HI488wuXLlwHY2dnp1GKUUly7dq0jx50+fZrXXnuNM2fOkGUZr7/+Oqurq7zyyiuMRiO+//3vs7e3x9bWFtvb2/zsz/4sL730Ej/90z9NkiSd5cI8gewo6eDoWDxP5GuJC+8V5D86HziqlPATFd6itQIFpm6tlSR1OcNagxQS8Aih0FGK8R7rDRAsazzgG4l+rTVJmoEzFMZgzIzEg/AWfI2zFWVR0Ov1UVGK8C4AqLYO805b4USEAJSKgoKMc8083QXFFzxSaqwxKAxpfwm9JNjduYOtalxt0XEfW8/wdc7yICUbjHC2otjfYWV1hZ29CbPphEEvZX19RF1ZytmEJAavY5TyxFFMVZbgHeNpSV5Bolzo6w3UzjPLDVGWYvIZOIOKUmoqtE6a8TdGyZgo7qPTAVVdUZcV42lNZT1RrEGGeVhdOZCW2kj2JjVCSLR0CA95XlIb0HFCli2jogQpI4SOGpl+kCYoIlHrYJFmYqT06EiTRDFKR4EoBRgbSNAOhfACYcMcXUcRkY47Ao9QKliuCYmgtXtp2QZh7kA7h/IGTYU1FXq0gVo+T713h3i0hoyyAGyLJqtf9dGJwhcTKLcQ+hR1MUalA6Tw3btR25QONNPEAfDd/P9AEeFApU10W4QJiBCCdOkcdV2EL63FqzDnbuUFlDhovwfKAvPkIRq5p/Z8h9u7OLR/ay8FWslgiSNCnx/IB6Kpj4MJkhK+IWmHeUxQkmpUDpoXyXZzmm1Edy/aKw4TLYemEglC5DjhcXoDJaNGLcJ35Q3vqXN9V1trQjSbufma/AFqeLdvc2/SBMo6HFy3c0DnMSI9YKx6T2mhthH92CCsoa7etYd611iQDhaxiEUsYhGLWMQiFrGIRSxiEe8avV6vA3TeLR544AHSNOWf/bN/1mXxvl3Ecczf+lt/i3PnznWfGW
PY3d191+Pfc889P/YWA+8lflh7hccff5yVlZU/wRL98YSUspHI9iipCIsiHqEkK6MlRr4mWVtjUlccO7GMThRSeia3rkBd0RsNQMpG0tUjhD6QqlQKoYK0rLceby1RltFbWWe6fZtqNkGlBofDeR8ktHVDfGgX6BvANEgCeKx1OOtwjQ97sF0IC1Oyk9cMmSbOOWScUk+nAVjt95BKESlNXhYY4aHy7Gxu0k8TsqzP2pmzDLdy5GCZq2+9xEMf/QTZqM/SsdPc/ejH+b3/6v9J1U84maV8+Vtv8q1n3mQ06nH89GmWV+/jxOk+Z06t4IoZuioYqJqonsJ0j3jvDhS7+HwWsu/zklmh2CsSlrxDZyloSV3ljC8+x52bN8IimxCMegmnjh/j9JmznD9/F2fOneX8XXeztLLMdLrPZH+f/b0dtra32d/b5+Jbb/Hscy+wtbNHbUL9zmdw/3HGPBAxDxDMAyxpmnZS/G38qISDoxFFUQd8q+55OSiLUop+v9+BKG2GaguofOADH+BjH/vYoT6zLWebnd+C6+0iaAvMtNu0AH4LOrUZ/215yrLsAKEkSbDWcvPmTT796U9z48aNLvs1y4JSxsrKCnVdk2UZs9mMV155hQsXLvDCCy90ANBRwsB83bZA17yyQXsvWlCotYaoqqqzbvA+gCRZljFvgaCUYjqddnXc2jlEUXTIXueo6kOrmACHiVstANfKgT/9iUe48vpbHFtfIuqlpKMlKutIB33qfEqSePbGU4aDHvn2HfqDIZt3dqimM4Zrq9y6uUkSafoSBpHkemHYrR3HSks028c4jYw0dWEpyxprbFBjoa2zBuzploL9wZoubQauAC+wHkzzvUPQFzCQAovAS4GzDnwAOL08WKTfrhybVUMQoukimzapPZQNxq/8wdK8ncuW68gGzQJ+iWfi4UOnlzi1ltEysh589BE+/DM/x+rp+8m//G9YXnqWc2dOcfv2Lrd3J+SloZ8oZCWYVgbjA0h7rajxS/oAkBAiICftOrwUQWJaSoQ6AAZk0+caY9nMK2yvh5aCurYkQlJZS107YiXAWGZ5xcmlFFPVeO9Y6mvK2qCacUD58GznVY2QkiQOme61cfSGGRZBjEMmEXVhoQrPulQK6wxRHCGdJo4i4jhBdPYjkuFwRJ7n5EWJVI2Nj5BIKVBKBy9vD4N+nxubW9z9/scpvWSyt8vly5dYOXGOey5c4KU3L3PptRdIlGD9xDmqIhCPTp48Qblacf3GTXY2c/qyIo5jNkYZg0hhdreI+0uky8e5vHmH2M6oywJb5Yw2TuNkgvVw74XzfOfb38Jag/OefhpjnaGsSna27lDmE+xgyFtvvYl1DgEsLY2QOGozQwiPrQ1lPiPpDahqx5uXr3Pn6kUEnu2dXcaTMdiaSVHywccfZndnE2Md+EBac40KgBLBr9y4kHWrtMU5j5KBPOecRwrJUqZ589JVHrhwhuWVFb7/8kWiSDHQgsI4pmXF+nAV7yFLYpSUTLZvw2yXYS/l6tVNirLgxVdfI1HBBmlclKz2YgahcwGC2gaiVdwI8xbbqG8oSfjbujD/wDdEhKBU4pzHY5FCYuuSsgrXQTN3SBr1lVaxZjQadQSzLMtI05SbN28SxzHb29torVldXeV73/seJ0+e7PpiCHPmeduddvyBA8Wctn89Smhr+822n82yjFu3bvHGG2+wsrLC6dOnuXXrFuvr6wD0+31msxmtvcPVq1c7dSGlVEfEq+ua7e1tPvOZz/D1r3+dKIr4xV/8ReI4ptfrdSS5tkztONOOi/MEsnmywfwcYH7cC/dL/kAGdDvOzBMD50kL7RjXjiVvF0eP2SrnvN127Tnf7vOj8aPOS9p729ZRFOkgxa8EUkR4H6wXggVPsAoSMtggQFAcEc1vKT0Ch1ICkaa0WK4rU3SkUXGMR2ONJZ9M0FqTZX2QEqkjnDMYUyPqAono6kapBKGj0K3bGudqFDqoIYhA1DHGkqR9VlfXKYuCqswBg44kSvUD4TCa0e/38U6wubmNrS2jXkxdG3JjMbUhSyN87XHeUVeOLFVM9i1F5dncrVDSkcigUqKloKwcyUAy3p0wGGbEWXgfcHhUlOJ9mGNFKsaaEiUlRT4D76irCgHMZhVr60OsrTv5/drUxLEkn1Zkoz5VUVIWBcbUjNIMLzTOK5RWwa5C6aBOYWqMCvMUq6NQV1KgtepIpUJKnIO6tiAUzjcKHbVASRdsFLREaYWUCtlm5DdsQ98OrrRk6mZmIADv0FrgXVCf87M7JBv3IHWCxDcEqgbQRyB0D9kDM76NKDYx1uNl3B1SdooH4QPfQN/zM5+5VtD96j5vXuyE93hTUJsCPdhARhnl7hVU3G9IFIfbWPc+2B3Ez8sYHAD8HeHgYO4lGmJ4u2sURSAkUoJSAiU9whfNOURjsdOQ4YRo3gRpOQUdmeLwedrPCPO2ubckQZgCCRwIjYmWEGYGMkJhMXOUjLbOpA/k0Dmmxdx9Fh2RwftWdaE5Y1clocxx5BC+ZlwpJI5EVuQ2JRIGKSpqoua6PNKF404qxTDSYLZ/oH96r7EgHSxiEYtYxCIWsYhFLGIRi1jEIt41hsNhBzi9G/HgAx/4AN57rl69+q7He/rpp/mVX/mVQwt17WLou8WDDz74k5ep9DbRgojvNT7+8Y8fWoD+cQ2BxPmQbaSUDkCaD5LZqdJEsxkiiTn98H0k5RSw2GIPZWs8giiJkZHGmcZOgQAUdfkabQYKHm8dVTlBJxlx1qOYTbAV9DNBUdZIqcj6aQMCBf9UIRTeOkxVYY3DlIKiSCjqAbNKY6xDy5peWpOmOXFcoxUhswe6rCJvLd7YRtZUhwwlW7O/c4e6qlhZWiLKevR6A84+8igvf+dbDE8vY+sSISToiId/5k/xvX/+j5G9mO+/+iq7d51he2vCrVv7XL4+wVrDp5/+AJXM2L6zz9f/4DmUlPQHCaurQ86ffx8/874Y8eV/SW0dVSXZLyJmNkbIiihNCbL7U3pXn+W+fsypn/00n/zUT/PUx36Ku++7j9HSiDjNEEofLHV5j7PVQSKSdZRFzrUrl3nmW9/k9774u3z5a9/gytUblLX5E1E/aAGAee/ntmxA5409mUwOERT+pygdaK1JkqRTNWjPdxRkiKKoy7pvge+wSJ9x11138dnPfrZTQGgBg7quD1krtOBP25+2dgFRFHXqAy3ANA/0QyCAaa0pigLvPWVZkuc5ZVny4IMP8uyzz/Lwww+zsbHB5z73OX7nd36H1dXVDiS6//77+eIXv8hf+St/hatXr7K3t9eBMq0qwbwNQgsWHVV5aIkGdV13xIpWCWFejUFrfUiVoiUaHAWXWgufLMuajOz60OftOdo6b3/aZ2O+riNXc/c9p4kTTW80BAGDfhZ8o8uCsrKkacbt63fYWOlR5DnFZMLa6VNcvnST5ZUlfFXSTzWl9+w7z7XCcnJ7TH95ja1XL5IsDahubFMXNaYl4nRXeUDK8cwlkXX/a5bmAzIwR+DxZDJk2+l+hpuVgAlAiLXIxndGxZq3tktmH
pQMddiSC3wDBiwJ8F4cLGOLg0xD32TQeRGW4Gtg0pAe7ju3xuMfeJhvfPU5jq8PeOhDH+f0h38BPGTL32FpfZ2f/bW/yq23XuP5l/+PTPM9SutQQqFF6P+9h7HxyF6GrovWLadZEJeB6CUOFuHx4MqGiCeD/PfMwa1pzWoGznrK2iKtQ3lP7aCuBbPNGb6uObbUI595qA3GC7IsJtKSaWkojWNWQyIl/dgHIN2CQTIuLIPYUyrJYNRH2KC0oARkWUocJyilAuDiHHGSgD8gD4Vnz1PVJVrpjjDn8QwGA3b3x3gHw8GAfLLHqJ9y9twFXnnxOV6+eIkPrhxDSsnk9lXW1zeY7NxmZWODL/321xgO+iwvLTNcWuXsmVMcP7bO/njMZDIJbVTFaCUQ5YyNVKOXRrz65iYXL13hkQfu5iySE3c9TF6NmU0mfPpnPsm/+O9/k+FgAFKh45T9/X2EUJRFSSQ9J3a3EN4illdQwwH7491gm4DHWMtsNiVKewghmc1ybt65SlVXCCTWFNRVxer6MS5cuMDFF76OqW2QaickQxe1JdYKpGqsCAK5xHtPJAXWemrjiJKU5158lfWNDT7+8Y+hfMkffv2bvHVnnw994DFsOeXSnW3uPjskjSJMXSKlorCe6xdf4q5HPszWzi51XTErau6+/26cqbl89QrjytLXkkSJDsTq+hMOJP6dazzFRVCFUBDICJ4AtjmPxWNtm4ka/metDSQGH1Q5WoD2aIZ/r9cDgjpYFEWsra11Y8JgMGBnZ4fhcEgcx4xGI5555pnOMqFVTGjVZ5Ikod/vc+PGDfb39zl+/DhJknTj6M7ODhsbG4eA7LW1NX7+53+eXq/H5cuX+drXvsZnP/vZboyTUjKbzdja2uLFF18kSRLiOKaua4wx5HmOEIJr167x1ltvkWUZ1lqef/55Pvaxj/GpT32qGzdb5Z75sbTt39t21Cr5tGNES2Zrvz9KAmjJBG8XBxnNP2i78Hbbzscf1/vFH9dxvPcY65p5bLDVEkojtWqe2zA/pXmWhU7QApSUOGfw3qC0B1WjXCC7JVlG0sso9sfM9nex0x0sAhWnLK2s4a2hLnO0DvUukxRnDNZWwTJN9QP5ylRB8UBFSBUHqXdACt2AuRIhGiujdIDUEWl/iBCwv7OLV57VtQF1XbKztYW3jmwwoBKeqjaMRgPK2QThBMW4RHvH8ihFxYLZ/ow4gtp4Rj2Fc57aQiRACo9q0OG0l2E8lLXHUhNFCo8FOwU7w5phGDOVJo5SnKmIcPQiiRK+Ix55b/AIlA7EACcUReUwtUVriQN0lCB1ho5TkjQhiePmPUhgtKJWupnzGby3KCWIdSAoKN3YpjmPUmEg8q6tT4GUNhBEVFBEQB6B9b2YIz02zyCiIwc4F5TRnDMYC73mvUWIA7JAR7Bt/idUj3h0Ard7EWWnSLHRKBkEe4IW2D6YvbiOdiDmMvzb+ZBofQA84IM6khcCU02IkgypQ58l4wyf54QZycFhDq5YNMSZjgYwXxE/QIYWc+VojyJ8sMeQArxUDflRgkzp/AoCHw3hG6sGBBKPFe0Mjo5E2REuRDun8yBcM74QrkWIcN9dTaX7Tf0ppPDUdR4eXtw8v6ARz3q795o55QMaYkLzvBzs3xbOo1RCbQ14Q0LJzCQMoxmlUCTaYWvb1vYhtYxxrUh9/23O/95iQTpYxCIWsYhFLGIRi1jEIhaxiEW8aywtLSGlZDQasbm5+bbbaK159NFHsdYynU7f9Vh/82/+zR8gGEgpefjhh/nCF77wtvtJKbn33nt/9Iv4MYqiKN4z6SBJEj760Y/+RJAtQgZG8NYUUoXsX+/C4pCp0cZQbN2m7IMUDkyNkyVRrKjLsLAqlEJJhbOm+6GxEnDW4V2Q13bO4qoSC0S9JUxVUJua4SDm1u0Jw0SSJmHJQ+oI7yV1WVPnJXUh2c3P8PrNJS7PUop0QCU0k1lJJC1RPWOFKfcu73Ph5D79zBBrgdKCZDSkmkxxxiCiiFbatDYVO1t3UFIRxxlZMiDREQ889RHWz59ndOx4yOpxFiEVJ++5l+Hxs2T5FvcupXxX5mxtvcbGiTNIOUAoiROCk2c2goe01ty+M4bbe1x+c4vXXr7DifQcp/b3qUrPeKLZq2I8Ch1HoBP2S8NuuoE8fh9//vMfZXTXPagoIk4Stnc2qUzNaHmFLOuhdUSbMi11yKIXgNfQS1LuGy5x30Pv41d/7c9z/eplvvjF3+W/+af/lGe+/V2ms7KTUf/jjKMezfOftyBHnuc/oIww7+P8Xs7RZp0ePde8ykErYV1VVadU0hIV4jhmdXWVX/qlX2J5eRkI6gB1XR/axnuPMYbBYNABKdPptPP1llJ2hIrWkqH9TClFWZYd2WC+bPv7+ywvL7OxscGbb77JYDDogKj2vK1VQ5Zl1HXNs88+26kdTKfT7lytksR8XbQEizZzdt76oAXS2j5/Pru0JQoopTqQyhhDmqYdyaIoim7btoyt9UJbjnmJ2TZjtbV/aI8PASxzzqHSlCRSZMMe6Iis36Ou6pCFrCKUN2xdvYxOegxPneLmyy+xdOIkN65vMRyNsFVOliVYGdq2FoKLheX905LJZg7bVxieXWPv0majkhJWaJWSGB+yu4UIWXzM5bq1i8TdenVIK+TgI0FfCuKVY5z86KPc+OY3qcY53lmsdRjj8EJQ146LhaWWEAmBFSCbBWcrwtp1IsDJYCvQrst7F5bxG/0ZrPAYYObD75FW3H0y4YFHnuCZr32XY+uNh3eTLbl08h52J1O+/G/+Gecu3Mex1SGmKOjFgmvbOVmsmRhDCz+oYY++lyQC6nHZEAyCrY0zLjjnqACueGPD982C+a4NRIjjWuCVJAKcD8eV3pMXBmcdg1FGaR3CuEC+cpJUR0yKijTVCFvjK08sJUkUYbyksgYDZM6GbFHXAC/S49F4VzZADwhv6Wcp1gmsDc+aJGTEu/Z5FEE9QSmF0BEIwXJtuSElCEiSFJwhdiU/93N/iq07N7DOIyW8/PprXLl2hTNnzyPjHlJ4nvrM53j+O9/kjTde57HHn6TIC6I4YW1lmbXVFabTKZvjMX3tOTEcBsLQpOKxC6fZKk6wPZ2yMp5QT7YZDpb50u8+yz3nz7C+tkptYW93F1MV5KXFi32irEdlDPtaszErGZQ5UyUbwCRk/rqGIBNsdjzeGaQURDrCe8fW9hhrHevHTmBMkD2vyzKAQL5VsZDUTduWDVlGq0AkjJQgtw6L4P2PPsHXv/41tvcnWBGxtLLGuVPHeePGFjvjfX756T/F7/273+blSzfRSpIXJZFWWBkxGY/Jt66zvrpMMZuSFzk3b99h2O8RKcWkqJlGEQqBVopOgKNVbmna4UHirO9aJwRAM7TdoNogVNOu57JvxRxSdhQAn+8jW2uZkMkeSKW3b9/m2rVrLC8vc+zYMbTWnD9/no2NDQaDQadiAwf2C957VldXeeGFF+j3+wwGg4401vb/3/jGN7j33nu7PjnLMi5duoRSiuPHj3fjUgv0W2spioLJZMJHPvIR
bt68eUihqyWh9Xo9nnjiCYbDIVVV8dWvfpVLly5x9913d2SCeQWCdixtCWRSSowxh0i1rUrB/L5tvJexvSWgHSUdvJ2CUnvM7u79mM2z69pha4MxAaS2zuFMTdYPiiG1qdBRdEDq8BYhIrzwKK+wBoxxSAVCusZOROCNp64rynKGq0p6/QGD0SCMYcaEk4sYLSOEjJFxhHM2ZON7i7cgFRhT4m3dgeZKKByOKFKUXpJmvaC4IDw6gmK6i/UWpMHWNVLGAXiuK0bDHlmWYMqCNEupq4pIS0xZsTyM0dIRRxqBwycVrhdTm4q1UUJdOvJZSWklxjlwBikjjHVIobBOIH0gAznXtGkRIWREbTQyzkB6oshTGYfzliyNmE5mJL2MfJajooTJtGJWelZGQ/KiREeBBGCkRqYDVNSoETTzFN3Ui1QKKTVaK7wLY2Rrc6O16sZZaz1SBjKv9y0xzyO8RKuWyNFB/4Fs4Gky8tveq4XAfTg34T7VeYGSEt1fRY+O4SY3iZZOHxAY5tQBhGhgdZWgVu7CXH0OsX53IFy1D6ekmeMcBvZbwkHT24X/t6fgYB4U+kkPMgbVCwRzVyMEGFMQiwPlI7ora0/TpfIf+t2qAMyrD4ju8ubbOSRJjFISL8N5lAjEdCECecA387egeGC7Mge3P4ECnDi4Ua3NwkEckAhaIoLAYxoapvB1uOcialwD3/t7C831yWZT6dv5ZPtsA64le4aiaKnJdMm0TkFAHCk8gmkd0Y9qJlX0A+f2zXzxR40F6WARi1jEIhaxiEUsYhGLWMQiFvGusbS0RJZlLC0tvSPpYDQacd9991EUxbtaK/zyL/8yTz311Nsu7n3mM5/hH/2jf9RlBc9Hr9fjxIkTP/pF/JiE957ZbPaeSQfHjx//iSFbBAxNNEB8ANh9s5CvdYyQmkhpZFUgIoFKU8oyDwkZtcWMJ9iyQg/6CB2hdFgEcVWNqyt8A2YBoCRCa+p8QtTrE6VDzHSPLJHUpeX0MACiQZ1AYGtLVdZM9wa8fPM8z+cbVCurPPypu1g7ucKt61N2tne5694N8J5rb9zi269e4tVXbvPhk9scW73CAEecxag4CQQID0rFCFlSFjPKsqTfG4SFPaVQQhElKcdOn0dnWcg8cwYhFUmvz8nHHmfrq18k8pb7772X5bVtPvGnnuLJpx4liiTT6QznHRsnVnn0yXu5+Np1blzbZGV5he07u2yOazaQTCeWvJIUKGRU83xR8+3L15iu3UPVO0c1nlH9/v+I92C9QUcxg96QQX/EcDji9MkznDt3geMnTrO+cZyllRWixsaky32SCo8iSjXn77mPv3DX3fzyL/8Kv/d7X+Qf/uN/zFe/9k3Kqv6BLKMfNuYzGOfBgvnfbQwGA+I4Js/zQ3YLRy0R3m4RrwVlsiw7ZNlyVN2g/SnLkrIsu+z6FjhvFVqefvppjh8/DgTQowVM5sH1+XNorTvgvM3qbK97XhZbCMFkMumOZa2lLMuuDHVdc/v2be655x7yPO9IDPMZthAA+ZYscO7cOX7/93+fv/E3/gaXLl06pFIwb+/Q7tMCQi0BwXvfyWcDHZDVljFNU4wx3fXVdd0RCOYVDFryVQuOJUnCZDIBAvDU1nl73ANZZ9UdowWxWsJCVVX0hz2iOMJ66Kcp+9t7ob6rEuk9dVlw7NRJspNnuPzSC6yvLLOzM2E4SLFVjo4UdeECyNxk071VOPZrS388JYlLvFjHqwCeGhsW1YUPQDJOdJlx4Vlrst+c6xaA5xIEwzaE/jOWsL875vJLF0kHGbEU1JMC7wOJxTrLlYnheuFwwpNGEUoISl/jrEfhCXlpzYlkC04QFvgb1oMTQeFg5jy1gB6CURpx991nWTp2D488dIE/8/lfYDrewhsT+loPO/sFz738hxTF75Nvb+OtJ1GKLNaYKhAHJCKADErQG2YkSmJjjfCeYm+GN4FEhvM44ZHe4x3Ipj69gFu5obKKojSsrA3I4nDPi/0ZO7sFznh6sSRNInYmOThLP1GMYpAS4kiTDvqMpzlxZOglntwYsCErVUhJPyaQ3OIYYyzlZEadF1SmIusPmucteFiHDFmDsRatJEJA1BAMZJd16lFKIqRk0O+RJDFlWeEkRHHKpUuX+Pyf/bNcv/Qa12/u8u1nvsHpUyd5/0MPk6QptXG8/Ox3uP+hR/mpT/wM3/kqLC8tURnHN77xNZQznLvrAv3BgP7xY1jv2ak9qfIsr29Qjneo9u4w6I+YFjWvvXmJpz55gfPnzvD1b34TW9c8/PAjXL9yhStvXuTYmQtUZcFoaQVnKpbuOo1XmqWz50icYLy7GchTNoCNSZqilaKoCkxVkPUGeG+ZjsfEcczW5g4f/ORTzDbfYjaZIJxr8jEd1jWqR0LgrQ/ZyFJS1RYpBYmS4C2zokI2qhwnj61z59ZNzmyMuOv8GXrPv8btO1vcGU956JFH+dIffI3xbIJOUyKtqCxMK8vVSxe58P4Pcf2mRumIG7du0+9lyChmWubMMk1P+gbQatqqkE12uAte3S005uhIEjQqGR4aD/CgiBBUEsRBw27BwiNWAPNjWKuEk2VZp3AAsLKygnOO9fX1blxoSWtJknQE4I2NjW4MsdYSRRGf/OQnu3PM95Wf+tSnuHr1KkVRcM899wBw11138dWvfpUzZ85w4cIFzp8/T6/X6/rnY8eOMZlMWFlZ4Xd/93cRQvDYY48BsLu7yx/8wR+gtabf71PXNdevX0drzT333ENVVTzzzDM8+eSTHaGsJa3NWz/Mj4ct8aD9vv18Xs3gqKXC0Tg6P3g7csHbzSPeiYzw4xBVWWIqg9KQJBoda+rKUxtDjAtqA3VELFo1idD/OOPwpmpICB6cxdRVsGQwDustKopJe0NslBCnGa6sMGWJkzFR1gvz9SRk1wshkS60VaUjpIyaZzso8XTPtQApdZOxHwD2/mBEPtumqqaoKCbWEdZYsAZblygMg36CNZaqLEjTCGMNpi4QypPEkjTq412FyadIwvnqsibSEisl01k4r7CN0ppW5LOadBAhnEAKQVnWKJ0wnRbEvT49meClREV9bLUEIqGuAuGiMlDVjqQXYy3Y0uP2CzyCNImQWiKFpyhLVJQwXFlD6vaZDuCyFK3CiUZ5UMpirQR0yJqXIpATGkUJPJ2qhe3IPQ5EIFEoJYLKQYeihx/vPaHX8ggvuu+lELgm296UU27duUm/t4yQGhVnOO+w+Q66t9YRjuGgHTT6btRFgfcBkD/Yhk4R4aAwTYhDvwIBwQeVjUDQ8rSqB8KHuR3egTXgKkAGsjntpcgDwmaolEAw+IH27Lp92nl7++/562qLW5ZlGL8bVQ4pQLiSwKY4IEe0FgXz0RIqZLONnSdGCH+kvAf7CCR4h3AVwrtgWWEKEJoDZYID0kg7L2yVeH6wyv0BgcOHN+QfJA54rPMgZhRuCU+wkkAopAQLlDaiHxvGpWyICw2ZxfuOaPejxIJ0sIhFLGIRi1jEIhaxiEUsYhGLeNcYDAasra2xvr7OG2+88bbbnDp1ihMnTjCdTg8
yZY7EysoKf+2v/bXO43Y+hBD81E/9FA888ADPPffcD3w/Go06z9mf9Hg3JYijcebMmS57+sc/ZCAZNIsh1jm8d3jnkBEk/RHS1AgK9MoS0uTI3eCjigCbTzFbd5BpRry6hur3wXuKvb1gaeAaUoMMmUMiEWBqqukYnfYCOIBH6CCBjw2ZJ7Wz1HnFeLLGN689wK21c5x/6jw6i7hyeYevfeVNZtMSYywvv3iN0VLG+584y9n7P8JzX3uLL7x+iY/mCWdPvMKypAF/K4IkaVjcq6oS50M2p5AKKSNQDfAcxXhrKOuCQVjJQUjJmUceZ/LM7zCb7ZNSkfUyvvbl57n7gfPcfe8ZijJnb2+K0pKPfOwR/oPP/TR5XlHkBf/5f/b/4dYe3K00pakoU8PEj7kUOXZsRDb2MLvGZDwhL2Z44UjShPvvez8PPPAIDz/8KKdOn2VpaZmsnwUPc2v4+je/xMXXX+Heex7msQ88yYlTp1E6OliTFAKERkjF8toGv/iLn+Opp57iv/vn/4z/xz/8R1y8dPW9Zeq8Q8yDJkez3NuYX0RM07ST+S/LsgNhDhYdDzLy22zAlhRwVPK5PWb7WZtx2cpDAx2JoN0miiI+8YlP8Mgjj3T7t6SAVgq7BVmklMRxTFVVXRlaIkOrYJAkSUcOmAddWkWAVjmhtVyIoojt7W2efvppXn/9dZIk6YD6liAxT4CQUjIYDEjTlGeeeaYDn+b9tVs1gtZyYV4FoZXebpUf5uuz/az9XAhxiLDQgmMtOaCt27quO9JAFAWiUZqmFEXR7V8UBa3UdnvdLWGjPU67v84TpvmMXhyzc/M2SieB7OMsdVmilaJ/bJ3Lz34XGfeZecVglKHqGbKfUllH0u8xzCIqH3ykty3cqCwrs5o0jSi3x0RpxGxadouzYYE6NBQlG893HwBM0ZAQ2gXcBpMPiX7NorzwoBAUxYSbr73O0sqIlY0hcZYy3dzDVIbKwVu5pWieV2MssZZEQlDgyWSANJouOEC+wuPEgeyxFVAAUx/IBwMEkYRYQT6t2H/zmzz08L2cfvRnefUbv8uNF75OMtrgxa9+ga3dKWVRc+3OLrEUJCri4x9+gG8//xbblzcRHmrv0VJiC4NczqhmBUmmMXndgAVdYw8L5XP8OxlrLJ7X9g07Rc1QS9ZXB/T6KbGWOCeot3LSuFHSmExYixSlEWwsaZJIsTmuSaRguj+lNI7lTIM37OeWYSqIez1kXVPUnjSWiLrGlDWmKLHeUxmIdPAmD+0oyEYrKRCE9iRVoKNopZBEIVO+6XecNSRxxNJoxHQ2Q3qJknBtc5dqvMtTH/4Iv/Vvf4cr04IbV6/w4MPvJ4oj4hjevHaLj3/qBEQR2XCJ3Aims5zL129y6vgGz3z3O8TC8eCDD7N+7AT94RAQ7M1ypOrRHwxw9ZRid4odneSl73+be++5jyIv+Xdf+LfsbG3TGy5x8c2LlPmYtWMnWT57gaqwOB9kxsfjfXZ29zD5PkVZU5UFUilm4x12N68zmYyJdMT9997DrVu32N4bU1c1Z+59hBPrK7x0+Vm2b90J91MKvJf4xqICKZCCAErKoJrgLPRjxZ1ZzWD1OOsnznLffdtsrG/wla9+lV//tV/l7LnzrC8N2B4X4TmOFB/90CP8/pe/wp2d3UDWuXOHwYkVlk1Ftb/J6tISt+sKZyDrZfSjNd68fIW89kwbkA5PyHZ1BiEb5ZamhXoczh1k8/qmgYsGkJE02aVzGbW+bdxzY0nbF8+PSaFuZDdGtH1mkiScOXOmG3/afef78PYYLUGgPUbbZ7cksPa43/3udxmPxzzwwAPcd999OOc4efIkn/vc5xgMBly9epX19fVuXu6956GHHmJvb49+v89jjz3GysoKaZqytrbGgw8+yMWLF7nnnntYW1vjt37rt7hw4QKTyYS9vT0+//nP88orr/DII490pLNWcaE9fkusa8ePt1MXalUX5pULun7iHcgIRwke8z/tuY+SC9pz/DiSDqypm2dDYq1CqYR+X6OShFhrtAzPQSDz2EYpLCiNOSQg0TIocpja4Gxoe0JpdJIBAlOVFLN9hC2JtcLLmMLVKBnar4qSbkzTIkJ50ZCJbQOKChwGLTUSgbMGGSmk13hnyfMxvUEf4VKMrZiN9/G2JBsM8c6T7+6hJdjaMMsnpP0BWkX0Bj2yNKOqJghAe5iVM7SOyfMS7x29WFNUgdxsvKMwFudhb2Lo9SMGUYSKgopAksRB+UArHIR5VhrsIJytwQdVBCWCAsGgn+CkQEWarN/DeEGSpuzsjilmBussUmlGqytEvQyUDApu3gbCdUP4kBKEUCgv8Vo2YHJQ0gl9QFAv8N4jZLClco1iknUGMIGoJcP7hhCim4/7xs7Ie9EB5QcaAx0VCltXjJbW8XXdqCdIVG8JO76NNzNk1A/Pf9sGhMc7T7l1Ea0gGR5HStWIIojuvnfR9H0/iE8fsBSEaK68mZeIOSBdK0G5fwWlU2w1QXob3qW67P0DoN83jM2OEM1B+z505ncgKbXviUvDjFhJStkQQJRAKYNtyJm+URLwDQmuRfe7crcEMyEaPqfvyuV/oHxzJA09IHYTwhiTU7sKoXvhnO2cqDlWa3XQ3e+OzdD0Z93/ffdXwz9oSuFafiemMphmnppGnsJIEIFkUNlAgOlFjlklunMIAYP0R+8XF6SDRSxiEYtYxCIWsYhFLGIRi1jEu8ZoNOLcuXOkafqO2zz00ENkWcbu7m7n5300PvvZz/LYY4+94+LeysoKn//859+WdLC8vNzJyv6kxw9DOlhZWXlH79ofu/DBTgFCRkcgCoTFb6EkcawQpk81swxXVuDGLkoHf9psdbnJFNaYfEZx+xbJ+gYySXBlFY4lgvS6syYAPjom6o0w411sVZKkfeqiIOllXL9Z8L6iwmuPt468GPLta/dw69jd3Puhu7h5a4/n/vCtDuCtTch+3d2bsT+eUVvDhz9+H+//5D1UT5zh8jdeRd2ugFdYWRogtcZbh1BhOciaOiwAYpvsJIGI4m4BcPP1l+ifOoW3BnRIxz191928ojx5MSPbv8NolHLp6g7/+O/9C5786Pv4qU88SJFP0FFClRXM8inOSy5fuYPxmqvXx1xxNW/u5bxReV7xlju5R00l9Y3LzPK6NSMFCQ8/eB//67/xt3n0wx9CRZq5nBnahbHvfv8ZvvHtL/Ot732d/++/+Q0euv8RPvmJp3n4kceIk3RujwBQqVRy/OQp/tJf/Et89CNP8X/4P/2f+be/87uYd+gD/uhH6DC54N2yFYEuK7LN0J8/xjzp4O3O806Zj/OARQtk9/v9Q6oEEMCeD37wgzz55JNdm9Za0+sFAkxrKdC237qu6fV63d+tL3dLUmgB9zbLHyBN005xQHUgaFBeiOOYvb09hBBcuHCBf/kv/yUbGxvcunXrHQGeFri6cOECX/3qV3nyySc5ffo0u7u7nYVDS4ZoQbH2vG12bkvYaAkJre1EW1/T6RTnHP1+n/39/e4+tYSBthzB39h2QBkE1YSWwGCtpdfrET
Xy0caYrkwteaEF3+aBqf3JhChKMLVBSYWta4SX1NYgnWXl5Am2r19nZXWJKl0ijhV+d5v+Sh8Vp9RC4zfvECeKWIZ8vxp4Zea4OzHMJiXGbhEPU8T25GChfQ5x9PhmcTtkUs+HIhAOZANcdii8CNnX2cqQyfYYtzNm+f0f58S969x85kuMX73Ofmm5WgX1GOXBWkeSKGoLO96z6Tw9AQmgRXt8sN5jPBR4Sg8GTywEaQOaOgFplrK9PeHU+ZzRxiny7SuA5aVv/DZV7fj+d77HrZ0pwlpiJTk27NGPIz70qZ/lldf+a4qqbgAQGEQJovbsXt0l8pZoY4ApDc7YBtfwB2CJDwrPKlKISDGeVbw5qSlRWOe4en2XqqxYO7EEWrNxbMRkb0Y/CYSANI5IVUwkgtd0T8HWzKC1ZJQq+pFnvxLEgx5ZLChmwRJCoHBFiVcqEOMkGO+ptaaXZSETXkpMXQM+kFe8J2DnAoREaokx4RlxUtGiC0prThw/xtbODsZ6vFM4I/iDr36Dj/3Mn+Le++6nrD1vXLvJcDDk9NmzxFmP9WPH+Te/+RuMjp9gc3OH2d4dTh7b4Fd+4ee5cvU6X/vmN3nqQx/kK1//Bnk+474LF7jvvvtYWT+GT2Jm9Qr723v0lYX9m7y2fYtHP/gJfuqnf57vPPcy48kUYyv2Nm+hXc7WnZtEUcLqsZPhepRkZ3/CZG+bqq6ZznJu3rrF+voqUu0Rx5rVpQFx0uPWzau8dfkaxjqW1k7yS//hf8T3vvzv2NvaIs9zcA6lFc46qsY+wxrfpmVinUM1JIRBrBDAW2+8yk8/9SG8tbz04vPc3tzkT/+Zz7K8ssaJtSWqsuT69Ws8dPdZdpzj3gvnufG9l0EF+6Xbu1N6Goa3rnDywmPc2tzEGsPW1g6jRBFrxaQo0V6SRY1wtg3gqXdt3mp4Hp0Pz6gU4G2TMdr4lUglUVKgEDgvMNYHSXREl5Xa9rlHx5R2nGj7xXnbGqDr71rrgfnxqe03W+La/FjU9o/zpIOWKJCmKadOner62Dacc+zu7rK8vNyNQ20fvLKyQlVVPPnkk0RRIHLGccyjjz7K+973vm5M+HN/7s91KgnGGJaXl/n0pz9NmqZdv9+S9tpytXXREuHm7Rfacau9rvbvo6oR83GUtDA//s3X+zuN9+80T/j3HUkEQchdYWoPpsTKmkxHGFMhVBTstAgKY0qEDHxkDNpijQNvENKitCAmDoRcD5GO8XHCbH+XvKqoiykuUsSJQ+ooWDFYj44VWofnXSgdfOYbzTEpQ4a2VkH2w+MQwa+kQUodwnnqSoErsaZACs+gPwqqC97QG/TBVLgqR2lwdY63FiVj6mqGlpKsn1HlAp1keGPAgRYKX5fYskZLgZUSqR1aS5AC6xRVach0grEgE5AiItIxOIczFSCw1YyqGFPXJVoL4liwLGIiBVKDiMO8yNjwrGRphHAeL2KGKysMlpfxqNDP+xpnS7yr8bbGuwi0RiqQqCY3no48EJQQ2mc2GB85J4l0hLMOqxXeKpw3DWjvAwmq6WN800/JlvUkBI5mrke4D0KCKfYBT5T0cNU0AP9CEA03MPs3YZQgVAMRC4GtS2Y3nicbHSNaOovfuoKULdmg2+wgxOG3ie7jeTKWb6T6hQi2AEIANjyvrg5ztnIPHWWY3LaXcyA50EYrCDD3ectNeDeiwaG/23ugQruRQiK1BjUIpDIBsiVDNucTkoZ01syrRPd1ICc0ldCdp9mWgyGvIc/MEAJsNSWWHi8luAKHxaMOrmeOjNleqpir+AP1g6Z/86G+vT8gf7QkBqk0y6nGTQsKp+nFmp0xwXrEOxCKooRhCrFyVCaQK/pRTS/70akDC9LBIhaxiEUsYhGLWMQiFrGIRSziXSOKIp5++mlee+01gC4bdj4effTRbuHz7UgHWZbxF//iXzzk3Xo0hBB8/vOf5+///b/Pzs7Ooe9OnDjxrvv+JMUPQzrY2Nj4iSEdCAIIRgO6O29D9qeziLIAYUn7CcXYUN+8hPY2ZG5EGluUJKMB5e4O8coq1fYW5fYWycYxdJZS7e3iRZBJFUpjbY0ppmBrdH+Ene4Rpz2iqqDXk2xRU9Y5tq6wJbxx5wG+MxmRrORc3b7Cte0bLF9w/OIvfYwk0ty6vs32VsksTzh53zFW1od4Ibhz8Q7jyRR79jhvPP8Q6s414ihnMMiQSKx3TUakbaSWQxakdQYRH8itbr/2LLqfYEyFjFPwnuHGSUgS8rxAbV3j2Mnz3Ly9wyyf8Y2vvsD21g6PfOAuTp8ZcvvWFl/72it8/7tvsb05ph8Lyhe/zu/dfJ7LheGlyrFVOWoHmSjwAhIJSgbBh6WVVf76/+p/w30PPhwWppxvFvTaBaxQ0q3bt7l27SrLyyvsjXe4dec63/j2H/Lw/R/gs3/mV3jfo4930shtto/UmiSLeOT97+Pv/92/y//l//af8//+J/8NRQOc/1DPUFOeFvho4ygZYV6C+ei+R1UC5oGGo9u/ndLB0fLUdc1oNOpsUdrF4gsXLvD00093mf9tZmZZllhrybKsA/DbY7U2Cy04ZIwhSZJOQaH10m6jrYe1tTXKsuzsCFqrgytXrnDy5MnOo/uJJ57gxo0bh9QeqqrqFAVmsxlpmtJaJHzlK1/hscce49vf/jZJkhy6hiiKOqJBS0KYJxy06gLz1gYtKNbWaXtNvV6vGzPa8aFVTajrmqqqMMaQZVkHftV1zXQ67Y7RAlatAkOWZYiGiNTeA6WCT63JZwFEjSJ0BONZjsynpL2M7Zs3iWONy0b4Wc7mzV3WV/rMxlN6KzF1XZCkGYMsIVWCVEnG1vJyYfm41aS1I45qlIxQUlB32W0AISs+qBrQADC+W4BWIqgK6ENMhGZvH57P8aymch5nLTeff57x1nnWNs4xqa9wq3LsumCjIL0gliIsIM9q+gQFg5mDfXGQXddm3Hk8GogR9IPQLy3hQIqgsjCdGWSUMSsMX/3t/wEvIm7d3sIZw9VrN/G9ZcTeJpHwPHhmhTVf8+y/+G+Z3b5F7UP2eA/JsSyhGk+ZUjOIJJPb+4Fc0NQL1iMaT2jRtAePwNWGGzNLgWSYapaGKTHga8vW7X2GywN8EiG0ojI1SSSC2oOEvDIIK3ENMBQLz3ImqLynl0QsJwlFnhMrjzEBHJNxkAIXdY2xhqwfMy08ztSgFK7JQPfOoqO4q01rQ4Y8QjS1CsYavLVoHaEjzcbqEoN+n729cQAmdMTFa5s8cO0KTzz+OHlpuHn9Bhev30GqiLPnz3HvXef4t//6e3x4Yx1FzYvf/z73/MIvcu3mLf71v/m3PPTAg5w+cYzvfvd77Oztc+n6Lb7+ne9x16kTvO99D3Pq7Dk2LjzAbLzLZHeboRbs33yT1bXjPPXUT/F7/+63mE12SSLBles3OHVsg2sXX2RldZkz5+5hbdTj6luvUU52ifUQWxcsLw3pJxG9WCIlpKN1ZnnBK69epKgNWW/Ir/z5/zk7199isnubcjpmOOwx3p9hraOoaqo6gEhaSuJIIhBBiMh5l
IQs1mgJO9vbfP3Lv8u1O/vUdcXm1jYvvvIqT9x/hrvOn+fa1pTf/4M/JM8/yIXTxzh/4R7evHqTO3tjahxSSZJIY8sCUY4ZDYZYU+Ocx2kNUjLLSzKdYJtztwoGSvqmf2n6MOdxrm2bLoB7TfatbHzAQxsLWbwBXArt2rrDQHfb53nvD6m6wAEQ3/abbdZ921924+0cEa5VPmj7yZag0JIV5se5Bx54oLMvaPv3uq5pFXbOnj1LlmUkSXLIoqgdf621HfGsHbPmx4JWxUAIQa/XQ2vN0tJSV97WZme+j27PMU9GO0oUiOO42/eoHVk7DsyPJwHAld0x5rdtzzc/D2i3ma+vecWD9hxvR0Z4JzLkjxJHs7TnjxNFGu9qiqKirmrSWIEIfaXSGVFykKHeKoApHazHpFYoFyNcIMLGWSDWOt+eU+KMYbC8xvL6BnUxA28DmSZKUHGGkBEqitEy1Jl1DmungETJBBGHMS6URzckY4twInzugx1UUU6xdQEuKDthJjhj8CiEUIHgIwRIxWxasbO9zcp6n+WVYaMO5DFVgcBhrCHNYuq8pCol4fY6auNwXpPXHiKB1IK8Dso/VVEjlERHElcHAkYx20MmY3SahjmaD+omaRZhnMH5GuEUEo81NTqKiSKB0AnWOLRK6C8tEcUp1licN1hvwNU4U2JNgjURUmmU1AglUI09QqsYEAgH4fqcB+FEUF7xCmuD9YIyEm8aUoIPMwzfkJ2E9wgXlIyapz08C81/iDDGmnKCKyaI3ohIC8q9qyRLZxBRhBqs4aa3UaOT4KGa7lBtv8Vg7QJ6sBpmNz5Y7IRnjfbo0AL00E5hDp5jDogIzeUiCQRI0RBSsBYdJThniPprqHSIq6aQDkH4DqxvSZxtBn97XE9roXCYWNDN6xuo/khrBSHCPfGB9CFlU5fNs+Zxjf2O796NGrYNrX1Cp+7QfCV8a/AwR8DwB9s0BcMSyItex1RmjJExSrmDfQUIL4LKQUdiO1x+39b1EULrQV/Sqh4EYoKxnik6qHUoRy8y3HGNvQcW2xhFTHJY7nt2rWeYhSSC2/s/+vrDgnSwiEUsYhGLWMQiFrGIRSxiEYv4I+PXf/3X+a3f+i3+yT/5J5w7d+6QzYJSiieeeKJbKH070sGHP/xhPvzhD/+R53nggQf45Cc/yb/6V//q0OcnT578iQHf/6gYj8fvedtz5879xFy3dRZryoMVFu/xzqCcJbEVtprhqxn95QFMN7EieJh677FljasNKglWBMnKOsXuJnaWo/o9VJlRTfZxziJ1HBaikZhiFjIqkz4oRdpbIqsnOOWpCQv2s9kyz+6uUwwVYvkWr7xxC20z4jTju994jfP3xqytr3Hy3DrGpuzuePZv7LJ7a8rdd6+xi+Qr33mVnW3LKHmA0f43SbOEJImwtcM7T20N3oExJaWpoZhBFMACZwz7b73K6sYp3P1BtUFISTpcQfVG5PtXETfe4q5HnkTpiNm05sSZDV568S2++qVn0cmrFHnJ3n7B3u6MY+sjTpvXuP3ms7xeVVw0jhooXchuroCeIEhZa4Hxip/70/8BZ+++l739ffbHY6yt2d3f4frNa2xu3WEyHjObTvniF36bt167Qn+4RdbP6A1SoniXnZ0tnn3xWzzx6Ef45c/9ee598OHGh9Y1qE3IfFtfXeF//7f/Nuvr6/zdv/cPmM7yH/o5ahfw327RvwVo2mzJozLKcJBBdjR78d0yT+cXLFs1g3k5662trUPAwurqKk8//TRRFHXy0e2+LWheVRXj8Zg0TanrugNXsizrgKcWjG/BqDRNDwEtrex2XdcdeNJef1EU3Lx5k5/5mZ/h1q1b1HXdlXf+mlvwqrVWgADUnDlzhmeeeYaPfOQjLC0tdWSooig6tYa2nvv9/iGbhXl575Zc0J6r/TuKIsqy7KwS5rNZW2JFSzwYDofdNr1er9sPOATS1XXdjS95nnfA2O7uLt775jgViTFYHTHqpZgix0zHLC+N2J0WbCz3cXEfbwqEyTlxaoOqLNBxyIhUSlGVJf1hn/VBzCu7BQ7YMY6LhWU5sngZUU1KdKwQlWV+IdyFSukW2QUtsB6y5ySNtQKHfzxBmGQ8KciyiP4wZbx7lZ2t61wTksms4lbpqBxIL4iE4MxywnZekyUxoqzIvAj96nw2YHeOFiYQ3WK4E75xTA73QkSanaLme898hbooOH58g2E/JZ9Nqb3ggQ99hltf/uc8dveDfOZTn+T67/x33Lgz5k5pGDtPjGcoJcuRxOclaPBOYMsA9IQ2CQiPI5C0dIdMeIyD1/drlBREWhJryVKqqGoTVtl3JlS1I/WGfqqoHQxigTWOqVck1oJ0rK2NSCkDoaCyCGe4vVnSiwRaAV7QiySmCtLlOAsCitqyPOhT+CCtr4RERxpnAzEjiiMEgsrY8Fw2teqsCxYV2aAh/DiEt6yvrTKdzSgrh5QK4x3f+Pb3+bmf+wxPPv4431MR+zs73B7P2H3hZU6f2qC/ss5KP+P06Ue5+NYlfu9Lf8jLF9+iqkruOrnG629cJO4vYXf3wHsmec5zr1/krWvX+Q9/6RcYDgdoHXH8wr3UlWFnUnLnK1/i5vYus+mUsipIooxZYdnf3KafxHzvG18m0hmRP8Fkd5vdnS0uvv4KZ04eo5fGGGPZ258ilGZ25/UwjswK0t6Qz/7qX0DbiktvvMju5i1iLen3e1S1YTYtAgDfNAmlJLGSOB8AFescxob2kWnFXmmY5iXDXsKVm/v0+n2ef+45Pvj++zl/7gwvvXWDKFY8/9IrDHoJJ9ZWeeDCWTa/+wJKCu4/NmA5lVhv2b1zjY3V8+yP96nLKd4nHFtZ4lJxi7y2VFaRycb+ALCNvLpWMsijO49qlE4cAuEan215IKcd2mxLKALjHd4HK475mAfJW6B9XuWlHWustYcICu24ZK3twP08zzsrhXYMa481P1628/FWFaYlsM0TCFoSW0teODquzoP486SI+XO227X9+1FFh/lxc/6Y86TA9rujcZQs2B67Lcs77dfGUUWDo8ed//c8qe0oeeHtyIj//4iqMDhvwVdoTQfElnmBTnKy4SpxnBAnKUrJxt5LIFENiQ2MybF1iRcG70NXZ63DeairEuEMtq4bMoIEHyGJEFISZxlxrx/GaaEQOLAG5xowVyhCT27DPFxGmNohpO8y7r2H8d42aaKJE4XGYoVEZz1q44ISEcG6wTtHnudcv7PDazd3uffsGqdPrGDrKYkKWf5JHFMaQ9rLqMoK6R1KaypbkVeOXhYTSYGemwOKpI/XAh3HTIuaREZY5xjv3yE2a8wqj1MplZlQlI5ZXpFlKVkvJrceIRXGWKwNbVvrhCgbEscZoBogWiBwOFdjTYmpC6TWSKWRUjdtM1hWdESR8DThvQ1ExcaORkqHUhIlWnJgeM/wvs1wD/MKnG/Aao/3AqREeHEAxDeEKFOOwc7Q6SmEilEqptx6FaF6RNkAvMPs38LYGlvsMzz+ADLth4mD83hnkVIFQBzR/PYN0aBVQGj0LxqyY5fhLw6Af+c9
Et8Qtpq+rdxFRX1U3EORks+20EmvU1bgXdtwqL+Dv48Qiztk/gCEb/fwzgSyZdPetQ4kLhHuaDdmHWrqzTkddPOm+eOCb1QGWtLAgdqBFzT2GxXGR6B7WBuINAIXvhM6vDd3VIWgftGWob3/LcntB0K049JhywkpJVILlPFBAcQITg+mTD1MfEmmXbinQmCt4NigpPaK3VmCMT+aah0sSAeLWMQiFrGIRSxiEYtYxCIWsYg/IoQQnDlzhmPHjrGxscGJEyd44403OHHiBPv7+/R6Pe6//36AQ6DQ/P6/9mu/Rq/X+yPPFccx//F//B/zW7/1W4fUFE6cOHFoofEnOfb29t7TdlJK7rrrrj/ZwvwxhrM11poguerbLA1HZEriNIXcYGYTkrWlsMhrKsx4F1dW6CxGJhHR8grl7VuolYyoP8SWBfQydK9PXVa4fIIrclQUBV/aOMXkE+hLlE6J4xSxN2Otl6FFgi0cN/ZO4s6f5/1POpaGDzEcHEM6GE8rEjkmTad4PLfu3OTJpz7C8dMneO3569xZ3qUuLEk/5qOffoDf/dfPc3lyguXtmOVlg44ViEC2cM7hLTgbVnWN9cg4RUqF8wXVdIrb2SQsDjqEF0RZBtky49lFkp0brEUTVh46hUoHIfPEgzGG19+4xsZGnzPnVrl1cxO/+yY3n/8GrxQVr5uwWFQ5H4BMKViKJVXtmBqPMx6pBLc2N/ni7/0Os9mEty69xs1b1ynKHGsd1hnqylDMcqaTCc55is1d/O1dolgxWOoxWhkwnk7Z2v0f+f7z3+ZPf+Zz/MIv/RrLq2vBy9aleO+wxjLIMv7GX/1rKKX4v/7f/x6zWd4QFBqv2h8i3i5jEQ4WH4+CHS2QMm8NcJS0czQr8aiKQpud2WbyH5VkzrKMn//5n+fcuXMURdGpAsxn+idJcuiYxhiccx3hoO3b9vf30Vp3ZIM8z7sMz7IsOzCkzexv1WRaOeu6rnnwwQd54YUXWF5e7rJnW/BpHvRp7RDa67LWMhwO+f3f/30+/vGP87WvfQ2lVCeLPQ8ozWazTmGgJR60mbFJknTXNy+lLaXszhXHcUcSaK+hVYRIkqQjXLTHbjN3kyShKIpOuaEtU3uMoigQQjAcDqnrmv39fc4UOQ5PlKY46zDTCevDlHFeEEkgHWCrCq0jRktDxuMcvbQMAnQSU1YVOkrIegm9QYJQElk5KuDZmeN9fUdVWhLtUI3PsfNtVl2b+Sa6VWuBRIiQkdgSDJQIC9tHiQeV99TGIQ2cfPhhZnduc+mVy0yKgsqFDDTvA1Hg9Cil18+4Nd5pVGB8k3Uo0PML0r7N+gMD+COL1YJAhHDO8L7HP4BP1nnr4hVW1pYwTR9kjWFpeYWTGydYP7/B4x/5KfQsp8wrLo5rXp5UCClRznMslqxFIGcOUCHLXAiU88gkZMTawgS1hiZDUCgQSjA1nsu5BRnRixRZrEArXGnBOpwRCGsZDjTeeUaZII0U48qghEJ4S6QkkatxcYItchCCfSNR0iNxxFJiRVCA0VriTFDnkZHGG8lQQdUAEVEUhaxIoVFSopXqMoqlFFjrUAi8Bu8s+AZQdhBHCac2VrmzuYUQjVqIkmxNCr77ne/ykY99nORDH+TVN95k+84dVlaX2Z4Z7rnvAb7zwmscOznj/e97jMHaCab+D7nn9Abv/8AHOXF7B/Xcc0zHu9R1GdqHsWRJxO3Lr/Hazi55XrCyusrZu+/l+Jm7UEmPN69dBuFZXlrlAx/6GC8/+w0evvceVpaWeeLjn2bj5Cn+x9/4L7lx7S0mec7W3pSl0Yz11ZC1bqzn1p1NbtzeJksTNo6f4elf+nWWM8XX/vDfsXXjEtQ5UgrSJGJ5aYA1lqo2aCWw1hOpANp56xpLghbUhVGq2C0Nt7d3eey+u7i1uc2g32fz9nVK61lZXWN1ecjp42u89tZlrm/ukWrFaHWNQZpwZ3/KpDIspRJjHRGS1dEAIRRlbSi3tshUAL5mpaHyMb12PCAQC6yzFPUBgUi1maYN2CMbNO8go7VNxQ0qCJGQGHegfDJPbpvvU4GOXNb20/OkhHnQ++h2R5UPlFI45zpyAXPnbVUQ5rdvzzGvPtASHtpxc34cbaPdpiV/zVv2tPu255k/xvwY3BIC238f/Wwe5J8f74/WzfyY/E4EgKPfvZ21wnzMKyj9uNgtlLVBekckNUordBwTJz1E3AOVMJ1OsQ6WlpaJozjUkwCtZCfDjwiAsDWBJGiMxViHrSzSe6w1mLKkLKaYosDZGuEdSkuS/oi4N0TFKTpKiOKUOElI0h46SYiiuLm/Ye4pfbAtsHWNczVlvgfK0h9k4AzelNRmhgCKyiC1CuSBvCTrjyiLGcOljJVhzPaNCddu7rIy6uNjyaycBuKCBuMsUgmSXoZzjtnMhCxyKZE6pkKSJRG1E4xnhuEgwnnJpLAIPcDLCK9SvIA836Oq8pBdrhKyuGBGTl1VJNkALTSzSlJWkOclPR0IlXGSNSQBB7jw21k8FcYUiFoHfwaCOkV4zptnvSXfNZn7bQgRFCIC4N6Cyw5nHb6Z47aMwXlguVVh6ezUmoz8FsCWvsQ4RzXdQSVDyriPlH1MsUtV7CO8we1fQvdP0t+4K6gQ4RFC42U4vxQC2QDgneqAaA0jmnbcnjd80JECDpEQaOY7QmKtAS+JsuVuG6kUwlbI9sjigDD5A0oGb/N31845qIOWeDBPYpCqUZ1o6lsJ0JShzpqb007jAlHTd0oOUogDC4OmjFICLkxowm0VHQngoCgerIFoiPcQSYFDI+o9vOgjVLu171QKOkLJkfeUltzhPYiWqHDoeukK6HxQ7CqM4FivYiXx3C4ltZVE0qAI5CHhHUupRUhItKTXi7B+Ya+wiEUsYhGLWMQiFrGIRSxiEYv4E4wW2LnrrrsYjUYAHD9+nLquOX/+fOcX+3ZKB8eOHePpp59+T4t4Qgg+8YlPcP78+UNqCqurq3+MV/PvN1qf8z8qlFLcfffdPxaLn+8l2iw9KaOwAOfDIllUjNEbx3G7Fu8Mdn8bGSc4s4+d7COcQ2U9VJyEzB1rcVWF7vVgNsOXJTJNg+em1DhXYIsJWqcoHaGSjHxvi7LMkXHGeHeXk/0+OIetI275DY59YIjUfaJoyPbVXXxd8+bFPaazXX7519/HrKj4wu98C+eO8fAHHGfuWeP8/Rt884vf5eXnJ5w9t8RjH7qb53/vZXYm6+zmbyHkgDTNsMbgrWuSUhzSC6T34XrweOcwZYXb3QzXZiuESBBSkS6fZLxvOLtSsv2v/xGXd6FOlhke22B07ASjjRPc9cgINVhip1ZcevO7+De/w406DwoH/kD+03hIgXHlqNqsKABn+O0vfIEv/eHvI6iQSqC0apfx8I4G2HYIoaiqGtvIv4upYW+vYvvOmKWVPv3llNlswn/9z/8Lnn3+O/yn/8lf5X2PfgARpcQDSb23R20MWZLxV/4X/0vKsuLv/v1/QFlW7/k5ersFxPbztwMOumy2BrwwxnT9UGtb8E7HPEo4gAN1gDbTvv2uBSE
/9alPce7cOabTKXmed0B/mqZUVUVZllRV1WXhK6W6jP4oisjznHnFgxZYaq0G2vO3KghpmmKtZTqdEsdxt93t27dZXl7mxIkT/MZv/AZra2uHgJj2OK1v+P7+PlJKiqLojnPq1CmeffZZPvGJT7CyssLm5uYhJYl5CeuWDDGZTDqP75Zk0KobtHU/TwDp9/tAAK3quibP826fOI47ssZ8xuk8kaMoii47tz2n954kSbrzlGVJHMfEcUw/i1FxhheS6c4Oo36G0TFxuUeyfoIyzwFHXTswFcJ76nxGvLQcrEKUIDcThidPUOiLROoAeLtUWm5Wnn5lGY56lEUeMqOtC3LxTZua0xQIhINmwV14kM3fdH+3GXkwcQ0oWlS88r3XGfU1Zx86zwvfe4PcOaYukAqWYs1Sprh0azcQBvA0ivBNdl4LhoqG+xBOKJmXYgbhwKvQdU1mOacuPMLm9etkWYxWElwNJmRZnj97jgsbq5z++c9SXn6Jay++wqXdimd2grJHgmXFCy6s9OhJT6QkXcK3n1NYMK7xRA4L6VIGoAUB12aOO04gFBjn6Y9STOmBihiHqTxpBFpBL1N4YDwtkRJ0baisx0uopiVyVpFKBzoK57QVupciECRakA561MZDPQly29aznGmKfIZUEq2CXLiQQa+iBYwC+OxJk4Ta1DhHkPhWEqWjUNe1wzhDL405fmyd1994AwgkBSEEb167xejZ7/Pokx9kMHiEze1drly7jjAFd5+/i0FvyNraGlpr7r//Xp54/DFGieTW5i5705ql0YgTJ8+wt30bLQVVVaOc4dWXX2nvLOPxPteuXmE4HHL67HmW+z0eOLPKqbsf5J67zxH7KZiK2hpe+taX+Nr+Pt/61reY5TkISZZlXL9xh63NHSIt0VHM9TvbjJZWeeD9T/Chp36a3TvXuHjlFcbbN5ntbQUQxXucd2itSNOYyTTH29AylBQB8HQHktECjxKS5TTi2l7J1k7opxIF+/t7nF1fYvPWLYZxxlIv5uXXL/LAvfeyN62YlYblQcqZU8eZVteCb7XzCJXikiUuX73KyvKQ2WxCns+olEJJRVUZJqVlJdKBBCgC6OQceNe24iB5rlqQkGCbIOaeaetcAx42ZMIGIDtocwdxdMxp+9j2d0t0Owp8z4P47RjSkglakls7zrSqCfMqNXVdH7JyaM/ZWu7MEyHavrXtw+eVc+bH2fbvdnyYt1yYv76W0ND1N+8C5s8TM+bHg3Ycaq//6HfvpGDwTud4p2iv550sFf59hGgsyNL+iKw/RClJXVfUswmVm4BKqcpA6lxaWiGWEi1UB5wKqZA6QpA2Y2UYa0VrIyICeQpnqSqFFzJYJNQVzil0XOOtxZsaLxXOVNSEkcRUNaWWyIZ4J7xDCI/SAhVpojRDxQm2HhDle2zfukSkHJGO8F5Qu4LIeWztkVGMUBHZcAXjBBsbNWl/QJxERFqQxIKo1w/vFK4i7ffJZwVCGawTGOOprCCSnrKuQUVMpjVSx/S1oDBgRYQzFYMVgZCBuGCdR8uaSAsmk5pxXiFcAHElAuFlIHDg0Cqidp44itFxEsDxhljtGxs57xx4g6XAo7Be4b1oVH0OQHetdTOuHKh0BY7IHFmxqVPnbAPOtxA0bXJ/IG01xBKazHrJHJFRCLxzjAYx1+9YVtbO0R+thzkLjjhZZ7J9DSU1yfr9yKiPzJax9RQz2wzvNULg67K53vbYoiuDEHMaSmL+OkVzHQdMA+nplBFaAoWIeoFsgKSuZyidUuV7RMy9CxwhZsz/uyNiwByRg6A04Fv7BcF8kxbAyuoyQkmEaS5FAjLuvveiJQsI3NvKCrT2D/Mks5ZH4g/VRfsuJgEvgr2Q8AYrJN5aEDFgmzlhSyoJBxXzxJTupS7Q5Hy3UVekQ+XzDVHC+aBukkWSpaTm+lZOaSVSKCrjkbak9gmptlRaUPqUgY4w1uIXSgeLWMQiFrGIRSxiEYtYxCIWsYg/6RiNRjz++ONcuXIFOFig/NjHPtYBdG9HOvjgBz/IuXPn3vN51tfX+djHPtaRDoQQLC8v/9gsBP5PCe/9eyYdjEajH6re/n1HVeSNZKXACfCN5Gq/2sfdeQNZz8KCfh3kSFWi8XWFVgKpFS1C5azDVjVquYeih7MN8CMFSIGQGmsNVV0gTYXWMXHaZzLexcgpdV0xHA4IEvA9buc9Lr/8DPXuGhv9dR486Tm/lqPPnuDf/O6E/9c/+A5oR7o24jf+y+9x8tRFPvnJ+3jjrTusr0mkzNjennHnzpSx85SsMdl/hZ6KiXWE8x5jwuqS92CqAnSMjJNGAtUF0Gs6wZsaU5cIoXDW0lvdYDoLi/wbmaCcbFHs3cFuvcb4ebhTCYyVTPprfDteJjW7RNU+L1WO0h/1+oTCvv2iet4A4SFLCaSoG8/SdkFPYKzHe9vJih6Ep6oN4+ke/Z0pK+t9qsryvRe+w9bf/8/4s7/8n/Dpp/8MSZIwWFpivLODsxWDfp+//lf+Ktdv3OC//m9/A2v/aJWD+exJOAwivJ3k83y04EmbXV/XNVUVyA5ttuV85ulR8KI9ZkueOVoGpRQf/ehHefzxx7tzJUnSfSelJEmSQ/LMrcVAa1dgraXX6/0AYNQC9vNAU2vbUJblIeBFCMFkMuHKlSvcfffdVFXF9evXufvuu39AHrtVSbDWsrKywnQ6RQjRHVMpxfr6Ol/4whf4zGc+wze/+c3wvOR5B+a3Vgmz2Yw0TTl+/HhXjul02gFU8/enKIqOMNGq36RpSpZlh+TD5+0y5sGzNuMWoKqqrqxpmpIkCePxuPs+TdPuvqVpSrK0gjCW/Tt3GPYi9HCFaneTbH2DPJ+R9FOoa5wTiLhPjKc0nvH2Hr4fEaUZK6fP8PxzL3PnzpjaWJQQWOeZ4nl+ZjibKaxURLGkKgUWgRN+Ts4XVJPxppsFeusDASESAi08imadu/vx7DmoRFAqGG9tYWYR41mF8VA4KD1kSvJTj5zmpcubSN8QGuBgXVq0rXZ+vBQd8N/qBbW5f96HBL1pXnPj4pucf+hx4uh/YGNtiZVRn4GGYyvrjM4+wtJqj3S2yo2Xn+O1N27yBzcmbJeGjX7M1n7FCSVYWR1gtiZNVqTowBDvAScQSjSr7w1I0PTrXitenZYYL+gBkfCUeY0r6rCILwWJgBRPXtT0k4QiN0igKiz7paefKWSk2RnXLPUkMlZMKki8DTL4ZYWLFFongaBQm3BfK0eaaepZyWZlWdJx8CbHo5VCNnMd1/QdUoaOVCuFThPK2mBqE8AC7xFKEUmJsZbzp05w4/Ymuzu71HVF7GO0lLz85lWsh8eeeJK77zrH+bNnKGuDtY4L99zD8ePHGI2GgKAqcl568UW2d3aZjMeYuubUqTNs3b6OBDaWh+h6FmTPmzpXBOB5b3efvb3nqD3k1jOZ5pT7m4H05GGaz7hz5waurjh1YgPrPTu7eyTZkDjtMcsLjIPl/hoPHrublUHG6VMneeE7X0aYCbOdm0x3thHCY5q2LQUIJRn0M3b3JtTGopXEOoecS9
oUBEAED6mWpJFkVhRs7u6zvjzi9s5V8rLkyqU3eOR97+P8uTN898XXWF0ZYfMb3NgsWBqc5NSJE9zYnvDGZoE1Ffc//Chb165wbKnHysm7uXbjVrhnXpBEEVVt2M8rTE8HKwTfjj2NZUEz3jjvqU3jZ960GmODlZBpJOo7uXR81+ZagYD5PrHto+YVZCAQulpSVTsmtKSBdixp59rzigTz/ei8ZcHRvnjeNqCqqo6cEEVRV6a2j9VaU1XVobGovY52bJ4fo48SEebJZvN9+lGrpLdTXzi6Xbv/vFLE25EW5sfMozFPmHgnosL8cdq6/3GJpY1j1MZiioLZeItISWovsFaBiInjlDjNsFXJdDrBZ31UliGEIkj2K5RQGKGQKsZ7i/YeSwDHnQ+jkNQRcRKUgWpX460BBNZYTFkGUpVQ6ChFKt3YK3i0l2A9xtVYWyKwqEbhyKkc70qMy8EaljZOYsqc6e5t9vfHmNqS9WOWl2KqsqSY7RDFEdbUJIOU/vIgXJ+Cfqopi5xyOsOWhjRTxGnG7vaE/VlJWUOWROTGURiLMAnGOaIUfOGw0hKpnMEgYzYtoa9BO4yb4hprEZ1okjQlryqcFwjvMcbR6/WIqpw8r1BJEpQesj5K6wDoOo/1gZDknUMIg/NgHAgrqK0ncR7n5sB4WjUq/wNtoP2ncw7rHdbWGGuCQkw7T6W1dfHduD5PRph/gr2tKGc5cRJhqyKoGMQpztbMdm8jpCSf7qJ754iSHrOtK2QrJ1DZCOEVztUEUYDGrmJOuaAlTh60mQNwP4z9ByTIrt01JAmBwDtDkg0xJkdoGUhpddnsO0cUaEiTHtHU2RzB4NDkp+27g93FgZJOS872HbDvfaPwJFRQcRCyUTDw+IYY2h5etlh/97Z0UPcHvf5BHH6nOKisxogHzAQpFA6Nd+ODsnjXTeDmeQRinnbRqhm0Ugv4t+37us2buddyUpKoBNkbYXctKkpJtKXOS/CaodoGr8jNEl5odiewuqzYnL3zsf+oWJAOFrGIRSxiEYtYxCIWsYhFLGIR7ymOHz/Opz/9af7pP/2nAPR6PYbDIZ///Oe7RYej9gpCCJ5++ulD2U5/VEgpeeihhw4do1VX+EkPYwyz2ew9bXvq1CnW19f/hEv0xxemCB6mCIL/q5eIumRQ7iB37uBrEzy0kyWilXWEhqSfQZ1j8xmukZw3dU2sZcjSSgXChAU7Z11YOBUKhcBUBaauqMucJBuSZSP2xjtIUdPvaYS31C6h0j3SdMx4ZqljT3+YUEpNOlri9Mll9vanFCF9C2dhZ6tAqph7zp8mTSO+f+UmdWW4fm0XGxcUlabMLfUgPOu1qTG1Q3nw1mOrimQ5Q8ZJ8GG1jgRQ+QxvTFA7cC4A0EsrVBWMpwWDdMRyL0L3BZVxeOtQSjMxij+wa9SXrnBrus9MCcb1D2tTAKbJyDlYIJxbRusWrQ6+80f2tx729gx5vk+R1zgrSOIB//2//u/Y3triF375P2S0tMRwZZlqNkYIxeryKv+7v/m3uHjxTb781a+/h3IenHUemHgnUKH9vv2uBV1a6eiqqjrVgVbx4CiYcpBpdhiAOUpQ6PV6PP744x1o08Y80A8hi208HneWCUqpziqhBWFasL4t72g06sophOhICEVRADCbzTrf7RYYGY/HPPzwwx05azQadYoKbfmklCwvL3cS3a11Qnv8PM85ceIEzz//PJ/61KdYW1vjzp07VFVFmqZdnTkXFt9bgCxJgrRzr9ejLMuO4NGqDZRlida6s9SZzWYYYxgOhx3pQUpJv99HKdVdZxRFh6S5tdYMh8OOONF+PhgMOrJDW4+tZcNkb4bZ32F5mDI4fpy9rV36wyH74xmjjTUwNdOqZDoZs7S6jMwyluKYaJZTW8/x8/ews7fP7au3OLWccX03D+ohTTt4cWb50MAxmllOPXSWy9+5iHEO6cQcYNmurbcr401WnAs2CC3xQDbZfwqBxlN7uFo57u8prHFMZxV2toVxjlmzWL/a11y5M2Z7bxYsGmSD4bfr7u1iuGjXrxtkwIlm8TzIwzuabZp9a2P5yu/9LpmyrPc0pq7ZWF7i3NkzjE4/TLp2nu3nv8HN577NlZs7fPXWhEuTkpUspqgdGx6GSjIuYGk2Q/ajg6zIpmcJ7ckdrJfLkHUqlWA/7nMxnxJpySBWJFox3i8ZJpJIeCI8gziQo5JYMZlUDREApAKDoDYOMy1Z7cf0EkFuwBuD9y74YwuaDFOJmZVUVUlRWeIowdaWrWnF0sZqZ5Wgo4gojgMY7Vq55RZACmQ4730gmCgZvNIbcFRKiRZBIeHBey/wre98n6oMZKJaaYrK8Nb1TfLqmzx4392cvXA3y8tLKB2RZRnDfg+lFVeuXObFF19he3fM7s4ut27dBBzOlFhTIgT0tKAqLa6pbSlkR95xzuOcRQE745ybz77Ey2+8xaifcnJtmdOnTrK2vkGc9qgbZZF+r4/wjhMnTrO0fhIhgzf5jWuXuXr1LZwriKRnb/sOiYSs3w92SXWB60BxRxRplpf6oe7wDVAmOUiU9UhaoByWEsW0cly+cYcH7zpFGmn2JjNuXLnEsdP3cOrMBU4f+z7f/Nb3OH/mFNZbirJiOBqSCEvpDb0sod65yoq2uNzQTzRpllHkU2ZVhS0rjLXktcAAolM2EB2xBN9IaMuQpeys7cpsnesyUb33WNcQD1poSHi8OwxDtf1TSwRon4+2b237ulbtYN5mpgXe5/v1+bFmfvxqj3EU3G/nG62NTasac5Ss0B67HSfmx0ljTDcetNu340pbxvZcwCHyRHuseeud1mLo7dSG5sfe+XK0x2qVG+YJF0fJAvPHfTvC4jsRD9rx/MfBzi2flRTTMUoIYiWQsaKuPXlZohNJIhRJ0p8DaB3GtPOIMDBINEp78A5BRFkXeFPgjMO60ENrnRAnGYLQR1be46qSyd4uk/19oixjuLQa+hQlQbeZ3ApLQ/yQmiiKQzmUQsUaJWMSMmxV4V2Fqwq0ioiVZLw7xauIWbHL9etb3Nzc4oF7TrE+6pOmEVmWEEUp3lRM92dYb9FpynBlFRVFFJMJUil2JhbjBbFWjHODUxHTqQ0qCdaxPZkxSC2rSzG1EUgl0OkStvIYC1kvJq8k3hKuTQqSRNHrx+hYITWoSGFtgfQxaZygVYQXMqi6uEA8OACZHdZVGOPxlQ+qN40MkhDBgs3EEVGkD8iwEEjLeBAO5xsyprHNO3WNFHEH+MvAOuhUi1oeXwD9mzmHEHghsHURrjtdRveXSJdOYkyFK2akyyeQUrJxYYNytosQEelSjPCCpBfUBb1zQc2ia2et2ssBoA+tZUSYeHgcHBSj2e4g87/dz7sarEEKKLbfDGO0r/CuRsyNEYgW3BcH+/qWNtnxGLo3mgDEt7Ovg/mXB7RwjPSY29emgdAuwvOrhScxN5k5ENFJwojZzpbad6Q5qwhPRwDwHQHkoAQH4Zr725ATVELsZmBD8oH1BbVPmvYpAuGB1rKrDX9w0kPvQQdGP8Hmwx85u++uIE5Tbm95k
kxS+RRfGYyH2oCXEYYekbIU0wIjBAjNOPfcczblR40F6WARi1jEIhaxiEUsYhGLWMQiFvGe4uzZsxw7dowvfvGLQFAk+Ot//a/ziU984gcWBdvIsoyPfexjP3T20IULFw6Be0d92X9So6oq8jw/9JnWGq11B7y1cf/993fS5D8JIcoclawEuUgE3lr0bBtZzhA6LKrEkUCpHG+2qXODqSukE2Bq6p19oqUleseOE/UTZKRx1oEUuKLE1AbnHQKJ1CnCOjweWxpm+zuoJCOKUyJVolUEXuC8pq492g4Ybgguv7nJtVuK5dU+T3xY8Jd//SwvfestXqogt5aLtzxVVfKF3/k+jz52hrqGmzc3efjhk4zHJTKx1FNDXQYZfGsM0+kEWzucbeTUvae/vIKQQdrU1hV+NkazClWFTQw6Ch6tcX+ItYLdccWpDcna0jre1gEAEYrcJjy3bdnd2ePE6RWKGyW3d3KchyxJyMvyh7pHLVDy7lu8w6cNYFlVns1bM+rSoYQmiRK+/M0vsb27za/92T/PxokTxL0BpghA+/lz5/hb/9u/zX/yF/5nbG3v/NFlfBuSwTsBBEezF48SCpIk6UBx4JB3drtfC2a3AEeb3T9fntamoCxLRqMR3vtO/r8F2tvsfqUUWZZ1/WALdrRtvKqqQ37Z1trO+kApdciioM32XFpaYjqdMhqNUEpx69atzn7lN3/zN1leXj5UT/P9ZnvMsnlW5uW6WxBpZWWF3/7t3+aXfumXuHLlCnEcH/L8htB3JUlCVVXUdd2BRu15WpJCC6AVRcHe3t4hy4v5eyCEYDabdXXovWc8HrOystJJf7fkhXkVhJaA0R5nnhgCsH37Divak66cZGdnTC9W7O7PWD2+EfresqKcThiuLhFlPbLBgHx3C+cEq2fOono9XvidL7C8NOTk2oTVG3vczmukC3W3bR0vzAzHxzPkYJnBakZ9e4rzDi8bfN8DhMxp2bSdAAaHDDolIBJhUdbgm8X2YH9ws3LEEu4dRJSlxTqHdVB5gRaOWV6zW+wghcB6HwCII4vdB2SCVuo9fNsqG3hx4Bjc7ue859adbV767jc5Nkoxacyx02dZv++DRPEy+2+9wvVv/Tu++52X+Nqbm1yZ1GRxhEOQ5wXnlWTU71FWnmUNsoHAuwzCFh1wAcRxxiFxqEiCErx0dZsZ0EsUWksqB1hLWTrSSJBFAq0FUmu8dTjjyK3DmPAz6MX0NMRpBALGs5qiBmkdOhFkqcAhAkhU1mSRpFaK0UBjyprN/Zp0dZlBLBgLgRTBi9v7NrMwoCBKSkTTLoRvAH2lUFpRlCVVVeMJoHXI9PSsjoacPHWSi6+/gXOOonmuizLizs4+0xde5a3rtzmxvsry0pCs18caw9buPpu7Y2aznLwsuXnrBuPxHt5ZXnv1BaQQpElEnc+gBfOVQsqgHFRb23g/hOdvqZewk1dY59namzLen7C7uUmkNb1exvLKCuvHj3PqxHHQMd7DdLrbgU7DfoQ6sUqZT+itrDAYLTPKEsrZmOl4l7r0jYVEQEyUlAyHfZzzTMazBlgJZBtPC5n4DjBbyWJuTWrubO1w//lTDLKU7b0xRW1YXh4y1IY0ipnMKirj6MeKyaxgsLrEaDhgf7zP2aWUfhSIj9Y5pKvoZwm7gHeepUGf3X1LbVywWEhF13CsOwCywkwjAHzWuca6pJGt7wgGAYaSTVsKfASPE6COjGPzffO84kHbX7ZKOC2435IMtNY45zrrnbaPTdP0UP/cEgvaMagliLX/bs85T15oLW+iKOo+a/dp+/yWUDBPfmivo7XwmVf7mSdWzG9ztO9v66S9xnb/oySC+fFqPo7OD+bfO9rzzs8b2muaL9/ReUV7jPnv300doR17jr6fzM9HjpIq3m7+NX/s+X2nsxxrPP00ZNXP8pKq9lQ1CG0D2UUpoihBRxrVzm9aQFYAUiK9AqkwBoQzjSJBIBx4L5snXqKiBG0tzloqY6nLaUOqApcV2LoKpBvnQFiEd+hII0WEIMa5CmtLjCnBRqSJRMqaJJZ4K3HaYESBr6dIb8gSxaAXo03C3o7g2o1thLUcP7aGJSYSGpQnHiq0AKl1sOKxlnS4RJRuUxjPlZ2S0+tDaiRlLaiAYlaSxhYpA7BqnWVpCFGkGY9L+qMelbFU1RhnI4rJDFvXIDRCKXQkiSLNbDLBec10VpJYh6kLZNrH+WCXY21Q97ONMo5s0vu9KalcjatrqrrE2jCHpO+RsodSousHae6Z98HuzDdEBmMMpq5x1uI1ndLA3JDfgfi+fZ6FD+OWCOB8XeXIKEWYEuEFk92bRDqBKEFHKb3RKkIIsuEG5WQT7ySVmaDTHjrO8NLhG+BciLao889wSypsKG9N39/yHVtywEFZPcI3BAJn0XEPa2bEvRHeOgQxwu81h3YHLx9BuyBcK74hO7Tf+bn6aNpYa48wX1d4BgmsJo5TJ1eJIompD9q5ThN6dU0pa5w7sL841FabsgvRjKu+FW4KCn8dCeEH2rlv5n8lWivqGrQwOJ2hrcF4c1DeRklhzlCjI20Es4/W8OGA1Dp/Lw6RJSCMUDJGR56diae0CuVKTA3e5JAtEyvDzCQ4GaN9jsdQ25jLN967Nd7RWJAOFrGIRSxiEYtYxCIWsYhFLGIR7ylaMO2DH/wgy8vL/Oqv/iqf+9znDi24tYt9bZw5c4a77rrrhzqPEIKzZ8++7ULjT3rUdf0DSgcf+tCHkFLyla985dDnTz311A9N1vj3GftvvcnKg32sUiilMXWF2t/EVRVymOHlwSKyyXPqqgIpkFKDUvjxhHJnj+zEMVSa4I0J1gTWU81yrDXd4pG1hlk9w9R1A2wJ7O4YtGI4XArSwUKgpCHTmmq6wtkHZqRyxqOPnWVzq0THU+w0Z2ld8aceu4+VjeN880tvsHFsiVPnNpCyxhjP/uQFfu9L34P6AsOhQZmSKAoZpJWpGU/2cMFyFa0Ug94SK2unwYfs2Ho2g9oRxTGuLPE904EARBnGS8ZTR17mrC9vAJ7KWC7XG3zztS1u7G1x4sn3c/vObcy4pGoqoah+9MWg/ynhPBgLu7sF1l7De4WxnqqqmE4n/IX/9C9x/ORJVOSwtUFrxcd+6qP8uf/oV/mH/+i/ONQ/HI35DMf27/dKQJj/fD5zswW0j0pGO+c6okD7eWsP0+/3u3JUVdUBPZPJhI2NjQ4Un1d4mQfBW8n/1sagKIqu3WdZRpIkhwgPdV13YEOWZezu7qKU6vy7vff0+316vR51XbO/v8/Jkyfp9Xq8/PLLnD9/Huccs9nsEEAEQV2lvYb2u9YbvJXrPn36NM8++yz7+/ucP3+eW7duUZZlp0KQJAlxHHfS21JK8jzvAK5er0dRFCilunqCA9uI9p5kWQbQWH2Ijlgwf8+2t7cPZfm2dhQARVEQRRH7+/udZYWUslNOiKKIzOQMTp5lb3dMv5dQ1JYT506zt7mFqw3lZEw6HGGNRwrBdPMWydIK/SwjXdngrRdfRtSWdNTjxHrGShaR7pfkwmOapfXnZoYPDC13
Xr3GiQfPk+++gi89whGyp2kXpkH49ncD7YhgHRALglIBoHz4tyEQBt7KHbmtuXcU40pP7hyV81QOKuE5e3yVvWlOXpScWF3m9tbOXKYlzWL/fPsA51oJ5ha2OMgDbHMCb27P2NqZ8MCF49Af4IqC7TdeIt++zVuvvcaVSxf50qUtrk5qepGidOBqw3kl6SsYHltl89INekOJEpJWhlgQgBSEQGrRqUEErWfJ1MKzuxVCKIZphEfQizWpEkgsw0STSE8vFpQWisJReUFZg/SCXhLRjyVJrHB4KuMoUcQy+GxHcYRII2KgqEBWNbmVKILX8eZuAaMhS4MU6wzq/8fefz7rlt33ndhnhR2fcNLNt7tv54BGHAoACYEEgyRqFIvSjDSccZVl1UimPSXJfuFX8w+47Cq/cLlqqsYzNkulqamZMWwZTCBFQiBhBoAAiNBANzr37e4bzj3xSTut4Bdrr32ee3CbAgGYFFzPr+rce87z7GeHtfdeaz/rm5RGykhKckjpkULiHL0VM+BCFI2Q4Sx6Z1AS0kxjbW9/LQRKCqR3PP/0kzRtx+1330EqSdXUIINaVkjJLK1pzCE37xwGwpr3wd7cdDRNzWy+YDY/JctS3nztZZbzOdPJiMR1qCSlKIoAUNueqGIdWoM1LV3TIoVnkml0UVJXFW3n6FwfwdK1dKcdp6czbr71FlmaUJQFo3JEMSrZ2hoRVNQeJySXLuySuwrTLTg6vYfpWow1IX6ivwiDstSRpQm7O8HhpF7VAUTza3nkRD6HoEwV41Rz2hjuHBxzeW+b6s4hWT7iO3/yB8yOj2jalqsP3eDk9IRb+ydcvXKJvUnJ9taUd46XvLMUJMenXJwWFEXJyemK7cmEu0lKVdU0EsosoWo66s7iUtXzMiLYKwewJ9rIu55IJOK5B5SQ97mLxGOP/JTYv6+PYRFgj2PLepTMenxB/D0+/8b1rLvqxHE0Av6RcBCa39/nSAAMy6xvN/bn8XUhxDAmnHcAimSF9XWukx7Wx+l1YkEkzq1HTJyv9XXE/88TA9ZJAOvkgPO1/vywThZYP54/7bn6QWSE9ff+PEspgXAeZ1qQoa81nSdRKdZ21E1N6UPMU5KmKCmRSvTEA/p+PlzPEg2dRGhNluUY69EkgAwRDl2HNRbvHSrRpEWO6xrapsHicM7ivcV5g5YZSmkEIjyDS9v38R6tM7JEkhcJeZ4FsocNLig6SUmzHJFodHbMaGtCkqb4ruH6pRkn8w6ZZMxnFe++c8B4OuLK5W3KIsH3fZsUKUk5xTQVo0lBmiXcWbXYU0OWKFbG0XpDrjUjLKkQaNNRr6Bqa8pcI8SCatEhtCRJFU295Gi2oK1ryhTGMjiqdZ1jNu9Yth2NcVjfMF8sKHWBQ2JdcAjqug6BRymJEwwkoVXV0hmFLCYgFUmSkqQJSkuk9OATlPIIrxAiQNfx/rfWYUxHZzusCyCzD7L8YMPfP2NIIZHIEOmiwMuzCAQBmGaBVCmmOWR29zWuPfkRrLfk5ZSsnDAA+sKTT/ZolkdYI6lP7jC5dCM8t3gThmp8T6Y+g7fDjeGH98PfnDkDiPiUETYlhO+dmYLTgbMKAaSTK5iuxlfHkI7PnA4GHP2MSLDGIriPbDAs1mPxcbsDcQjBvBa07ZgPOdc/o8lhQ0bt0bodWp+FY+r7/BjTwPA8E9v/bLvxfgt3XHiO6x+4AgHAR/A/wwrwmca4BqzBqzSQCdYjE/ptxaPwPZHDrbXlGUfcxwM+289+PBK+dxgSnkmhuHXY4PFYkTKSB5xaRaEclcsCsU4IOlGSa5gkhpPVhnSwqU1talOb2tSmNrWpTW1qU5v6cyghBL/wC7/AQw89xCc/+UmyLLvv/agIjvXkk08OKtw/S126dGlQ1QLf5Q7wo1pN07BcLu977e/+3b/LN77xjftIB3me82M/9mN/3rv3A9Vbn/8t8u0d8qsP4YWnrVbow1thwtQHVwJvHdILkAm+cySpHmyMs60ptmswVY1tDTJRSK0AGyboOMvydISs03o5p+06lNRoqWjnFSMvyCbgnCVRCy6kDdeffz9LcY/m2j1efXXFT/+Vx9m9mHPznXe48cm/xO6FXSY7U/7jf/wInQmT4qZr+epXvsTLr32D+WKXqzdWyGOB6A5JM4V1htmiYTmv8Z1AJTAZT9jZvUSxvUNnW4QVNIsZqVYI5XFthTUTlLV0xqDSFO8ly5VjNluwO94hL0ruVI7feukYjt/ioZ/4KIf1ije+9RqrzvVqzPNqoz/fct4jnGC56Hjjtbdom6Dgr5ua/+b/9l/xn/+jf8qlK1dxwuCdo8hyfumf/lN+5Vd/nbduvvOe611Xf8Z6r4n+80SEdXBh/bUIRq9nVZ9XPUYwJSpJ0zQdSAla6+Hv/f19tre3hz5NSjlEC2RZhrWWNE2pqmqwuo4gS1yv956qqr7L0jqSDOq6Js9zxuPxEKcQ17NYLNBas7+/z8c+9jEODg44ODjg+eefv6/9ImATCQtKqaEvHY/H9wFPEch66KGH+OxnP8sv/uIvcu/evaFd1u2/Y5sopRiNRoODwWKxGJS2EECyoigoimKIrGiahsViEQCSJGG5XA7tFdW13gf7+clkQpIkg0tFbOO4/O7u7vB3URSDk4W1lmwy4u2bt3nkxmWs0Fy4dpnTw0PyomBeHVNeuY53HakKbgyj6TZJXpJs73H37bd441svsrU14eLFLZTvKIpbZFpB57GEKeBbneeFeculu/e48vzj7D68zf6bx/jOInuwMioDhfBI36vcBCgvSARkwpNIUE5gCREL0nsUHgPcbh3HRzWTXg658GC9YHda8oH3Pcm//cOv4RBcf/QR7hyf4E2Ylb5PAderDte7ijhXLfpJ7AgKCGD/cE45Kdm78STTS1f4yud+h3GWUlc1X/72TW4fnXJz3jDJEqrO4pqOj2zn+KqFouBW5bjmO5RM+kn+eH8CUobjTzSuc0G1lwi8gNeOWw5dYGEsO0epBKNEkOY9LOY9UnqSRFC1Bqkk7cqi8CS5IlWCrEgCQNdZGicRSiOspSw1OtGgE7wxmNUKlWts09E4OF0ZljrlkZ0J0huquiVRisHjWABIhFAE9wqBdR7jPFIKFKExpVAkUqKUpjWGrrMIGcBkKSSjsuAj738fs9NT6uUCkeQ9cScQb6q6Yjwa410gDcR7uWkaltUK03aUoxEvffvrvP3WG6RZijAdeZmzu3uBYjRmPJ2ytXOBoihZzo65/fYb3H77LTofoiCMNVy8dJ16NUMrz+07x7TGksgAWgkR7MLruqVuWmYnM8oyw3fbaCWHfmr/ndMe5LZoFdpXiRAlgY+6zzMQWinFztaYOk84PVlQ1WdAhuvVvdG2e7fUnDYdb9/e56PPP8koPeWll19mNdujs47Tw2P2rm0zX9W0nQ2kRSmZTMZsJ47T+YyTZcWtNuGpp64x3bnCRWF4692CxXLBYlX3Lhye2jqsDUCeEGs56YOCNLgxiGEc8hBJRIDtkSDnwLje2QAGF4wHgfQRNF8f1yKpIILoMWZhHfxfJy7
EMSG6ZEXy2jp5LsYbrMceROJZ/InLwVlUQ/x8HCPWx5/1WIbzjkTx2jgfkRD77PVIhjhGxXWsOxOsux6c38b6uH1+vF+v9bZ90PPDutPQey3z70OVqaTzmrZtsK1FOI/pLM4FZ4L5bEZaTsiLUR8DE16XIjidCB+uQyEV3mtUkuNsF/T4OowLxoTrSXiN9KBVjrOaDk+XN0gPSZZDkmGcIzEtIi3QiUKnCVmWo3QyAMreGbTySBEcLqxpEQjyYox3OUqCEiHCQyeBnKkZcf2hC2zNO3SWsljU3D2c8datA0xT8fTTD5NkBUJ4itEWzkGzXKCSjO2dCZOs5q2Tiod3c6wXdI3jRiG4XMjQb0mF9Y5VW2Erjc00xnZ4lXB4aDhcLlhUK5xz7JSCbAxCSdqm5eBoQdV6hNRYPIlyaOlBZ+E8iA7vHXjf938B8HXW0rUdTdugvKdNE5o0IU0UOroc+H4cVH2fskZWsqbFmA5vDMK7nsQXiE5xfBAyfHeSSqCioxA9OSBkKyHdEu8diZJsX32CzjnGuxdI0pJACoh9XVg+H+8h5Qn17JD6+Db59kWsU8Exo49ycv0DxHDXrJEr4wtuzY0gGBz0fSACKUMsjekqTHXC6OKTIAQ6yWmWLYnWyP4BRRBAf9+TAAJhsXeTWNv+Gso/OC34SH7oH4Xi/lmvAhGn78dl/P7gBMbJYR3xaSl8CxW9015/XPf1O354zovbPONF9I4FIhy3FpbWKrxIEcIhhEE401OD+ugev+aU4O/fHTHslu/bYD1qwoML34/pnyfDECXQMsT1tV3oEyZpQ8cUxR2MTzBuaEjwUHeexii2x3+2KL/12pAONrWpTW1qU5va1KY2talNbWpTf6ba2trir/yVv/LA9yL4E+v973//9xWNMB6P2dnZYT6f473n+Pj436lQ+lGo86SDoij41Kc+xd27d+9bbm9vj2efffZH6njvffWr5Jcv8eRf/bvI0YSmWUK1PDv/Lk5SeaSWKNWrsggWsbIscJXCdi3SedDB3looyLYnqCyhWaxoqipYmQrd52ZLbGdZ+WAbfXr7mFqNeOjyBZRackUccGfWsXflIV785j3KS4e8+XbCBz76Mzz2zBMBiEbQ1A2Hh0fcvXOL77z8Ejdvvs3rL59QjJ/l0fcbFvuGi1UG9h2yVNJ2DfPFCW1l8QjG44Td7QuMyjEizzFdi3CC1dERW+OQbe+aYLNqTEfXNr0iVNB0gqb25GlBkuR846Clnb3GhQ8+j5rmfOcLX6TrLN5DLgTLv0DCQaxIflitOt55+y7GWDpTY4zlv/5v/yv+V//5f8HO3m6v7lQ8+cRT/Ge/+A/53/8f/0+9gvTBtW4vDNyXkf5e9sTr9SAgJK4nAtXRlSDGKESgIyrwvfesVqugnO/V+ZF08Nhjj7FarSiKgqqqhn1bLBYYY9jZ2Qnt0ytP8zxkosZ1R/BnXc2ZJAmLxYK2bWnbkMkc+4nodpBlGVrrAbx/5pln+Pa3vz1EG2itSZJk2F5UmK67KUSwX0rJarViPB4PgM/169f50pe+xN27d7l69Sq3b98elLOj0WggUsAZeBZjH+K6ok14BMwioSF+LrZ3BKIiCeP09HTIGo/gV7MWHbLe92dZNgBfzrmBkBa3PVu1XNwb01jBhZ1t9m/vMx6XtE2L8bCaVeSFRtYV091tZJrhpKJrat78k68xmYzZ2tlm68Iex/uHjFPFSAu0hM56rIdawFeXhg+MDdNvvs5jP/4csztfxFqHtwIrRD9pHB0PgruAkmECXQlIpSB14XfpA+lAIbAE8MI7WDlY2qgaD+tqreLQFDRtACRniwrj6M2N40R8dDCI9wT4Pm+51w0P/w72wAKO5ysYXeSRZz9MmTrevrjLqy+/wcmy43RZY5wk1YpV3ZE7x2O55JlHL3HrtIHxNt/5+nf4yDRcz3RmOHZH7/QQZ/m9QyegUs3SOL5y0HDSCfJckgvJTqHZLjV5psidY5JLspGms8E5oOkcZQZO9+o8KaiMRxqLkhLjBDkdRSrQWYrQCmE9y3kdQGJj8UJwsuiY64QLe9uMEoEzHiMUynu880gZgFshZYQOAgAyKMlVAH1kn//eA2+JUkgh6KzDmg4nIJGScZHxcz/9k/zRV/6EO2+/TV6OaTtDZwzWGpaLRbhPU02SZAPZRimFNYZ333yFd2++Cd7zyJUL3LjxKE89+zy7e7sUeY6Uino549WXXuLmqy/zzjtvU1dLZK/WtM6zOx1T5ZqLhefGtYu8+O3Xcc70JJS+74wW2VqihEApCYRojrppSRM1jF0IEa5h2SvUlQxAUQ9eSKXCelNPUWbkWcZsvmKxWNE27QCuBDBFMM00uZYsVjUHpwu2JiNev3WPva0Jly/ukWZjOmtJuyoAekqhpKTIC27slXjnad2UrQuX2J1Iuv2XSK4+iZISLQVGQKYVVWPD/UwgtQxjB2INzDmzQI9K05i5jR/oOz35pFchu/5z/XNPjI9ZjxJYdys4H7+wTgpYd0eI/e66yn/dzWZ9vIzrOx/jE7e9TlCI/WZcdxw3IjFAKTWQ7tbjDyIhIi4T/z8fJ3Cm3LYDEWJ9/9aPf93dYP1Yz7fRep13Rjrflufbdf1zwzl/wDn496HatkU4R54mmM7TtAFOxFvGeU42LtFSYrqWujojLoo0RQ9tHcBJh0BIjU4KLBJhW5wz4Dq86YJaPkuRSuGdIy9KppeuonXSg8yKNElIixwhFUIous5gTQWswnhvDUJYppMRRZGghcd0HcYZ2naOkgrhHE4mTPYeRskEZxuUzpg2UOQVUiUUicBUJd95a0Vdt7R1g/enCKlBZgggG42QieLRGw1Hsxb71inH85atNOXRQnE116TKk2qFVALTtuTaYvA0pzNWUpFtTVnNV8hmhW5qVp3g1GuuTVPSvCApSrZbRbJoyfMEmSSkaXgOc4Rx3veENIFD9cp+J+MzLHhvwNT4doXrCqxJ6TrVu9mAR6K8CP2kIESyufDsaGyHdf04SnAOct6hnAjjOdF5RwbFPrJH2gP3SyDC+OMlKkkx1YwLDz2BSPIzVwJgeB7oEe2k3EZKzeLOa2f3tBAo4VEITFTki/X/Ql8Yf4/rOl+ydzow3oMzlDvXwzUH4CxKSHDdsC6JDwSGwGwIK4nksPvIlH5tu+EtFRdZd1sQkKaSrVFwlxCi/w7qYVRoZk7iTGQ3xE7f9eSEPhqif7iKTjjDFvtdjOdG9OOIECC8C8QCZ7AiBQIRFRkiTmy3BB2IKEN7RmJFP9YEUkKkMbiB4CA9/Xh1RiCN++XxOGdwzZLO7IG3bOUNK5NjncA7HyJTUGHpoe8L1+fR/Pufg9iQDja1qU1talOb2tSmNrWpTW1qUz+0Ws/5BnjiiSe+r/WMRiN2dna4efMm3nveeeed/78gHaxWq/tcG65du8ZTTz01AJWx3ve+97G3t/fnvXs/UNXzirc+/3vsPvw406eeQ6mENJugbSSiBGNIKYKKKPiBBsKBMx3eWKRWqKREpAqhE7yxmK7F2UAyyEYlSZbRrla0TUWrFd
p5urYZJtIrK9ifOa5cMSSZ5mLxBrfefY7s0T32Rg9z86Xb+Pk+33nqRZ754HMs5jMEis//29/i87/9JRbHE4TYYevCmHJ8Awx86NkP8O2799hafZ1peUiqd9g/uMPxyQxjBKNScOHiRba3LyCkwCiNbBq00NQH+2xtT/BeYHo1Vdd2GGNxtiVTnqoDITRCJbRec/tkRnrxIttPXOdLv/UFVssGA+hhouvfj3LBc5qqMtx65x7O2T6LtuWX/9V/yz/5x79EnhdD7vkv/sP/hP/7v/xX3L5994HrW8+CXicfnAdKztf51wY71bXlI3ASFaJKqQFwWQfLQ0zEkjzPB8JBXMdsNmNnZ+c+VX+MPMiy7D7QJoI1y+VyAOEjsWE0Gg3LOedI05TxeEzbtgPQYq1lPB4P64mgybvvvst4PObatWt8+tOfZnt7m62trQFQqqpqIB5IKQeV6rpqNboVtG3LhQsXqOuaxWLBlStX+OxnP8s/+kf/iLfffntox6qqqOsaIQRlGQg0MTJCKUWe55RlOQBT0VkiRlDE9o0KXiHEYAe/TsZYV7wuFgtGo9GwjnhN1HU9nJ/Y5vE8d13H3vaIfDIlm0yZn86Zbo+p5ks6ocnGY5qTE05XkocfuYIuRnRC4JOco5de5MojV8nzkmwSXCaKIuXGtW1e219y3JhgR0+Yh75tPF+bd1w6OGB2eIPLz17j7a+93Ss3Ra8cA6RHuwBmWh9AAiU8ifDkEioHtldZayGGiWMfmAYYAtEhTuKfLmb8we/+3jDV/Y0XXg7Ku355IcJEMv4MmBDijGAAPS4RAYOwKN6H7OvP/f63+Rv/6S9x+Pbv8YEPP0tnDN/63Ne4fVTRGcPEezLvuJgInn14m0sf+Qlmr77Kn7zwJg8JS5kkSO/RsreiD/PqSCDJNa6zgYCWKSyOV48bXm8cTipa48m948IoRSUa7zxZIlCZQGnFsrU0xmOcCyCa9VQGEB0aidaCRRcOyBmL0cG62VQ1zapFAKmUSKVw3tEJQT4ZcSEDTMf+rGU+q3lM6fsm3+0aOBoVrUrpPlohTvb32ICQCBmAlUJr8izBGDdAAYWU/MRf+g/4sk5567WXSZOEbDTFOEfbdT15y6CUIc1SRmVBtVzw+isvcXjvTnD6GKVc3ym5ujNimhgyV1MdHXP3zi3qpsWKjHwyxSGQOsHZ0C9XraHrDLvb22i3RAnPpUs7HNw5wPpgw6xkiCvK8pSd7THRgjrV4VgjWGU9KClRa6RS79fUo0TKTbj+lFLgHKNRTlmkzIqEo8MZy6rGu+AaIAVoKdktEt6dN7z57l0+9v6nqNqWm7fvMRqNeOTGo9y5d0iVjUhTw2RcBnxISrwXpBJS6fGn+xye3AWl2Xn4md5RRQNNr1jtFbtr/XuwKw/uBoFk4HvhZ38ti0ggWgO0e4KBVEG9KyTY/t6PRLc4PkQgfd3ZZR3IZ2298b1Y0W1mfWyMxINIolt3gzkfZRD7++iOEK/nSBSOY6L3niRJhvEWGFxn4piyHgERge51ktn5eIP18TJuez1OIrbB+fF9nRBwnjSwvtz57wbfCznhQU4N55f/i/6+Ya1FO4tQCTpNQASyVGsExnpKlVCUI9IsQcoAWAaVvKHnAyEJETZCSKROiOb7rTfgPFor0jQhzcck+QShNNY6uqamXsxYzU45PTxkdnyC1oKtC7vkRUmalog0JStGKJ2gtEJriZIRnHVUqyXOtHhngoOL0EglkSpDquCSZ5xB6jL0deMSKTXC3ebiXomz2+ztlBjrkJ0hK3OkFAiVkWUj8tJDMuJDagcnXuaVdw7IlePSqEA6j+8cHgteoL3Fe0OCJ8XRWYFcOJQzGFHTplAlmlonbG1NGG3v0BnHdGtCknVhv6Sga2sSZ1F5ircS43ryj7dIIQAXHCakQCtFniqUlihpkb7D2zY4usVrqwfENelwb1ljMdb03xGCQp34HBwdQQguO9EpjvB1ihipQU90FLbFdC15sUWiNd4F8LtnOMQeeiAriJ68KPMRk2tPUL3zDVLaMH7LdTeYflkR3AsiGS+MBy4K8c8cBwjXhR4MCTw6LTFtg87GgaZma4RSdMsZ+dQjOYuk6j+yDvGfEQzcGnGt/yUSA6Dfjx6gBzCtxdn++0FPjPTOY9oOnOtbtqcZRv5B724g1gkVwoNjzZXAD2RTuU7rGJ67XPieqyC4/4HE4bwcthmIKv5cf7fWd3lPdNYYDm5omEiN7EkK3iE8GAdCtCyWHVt5x7JLaTrBOPGshCeRLbXNiVfDWRzGD/Zdc0M62NSmNrWpTW1qU5va1KY2talN/dBq3elAKcXe3t73NXFXluV9oPsrr7wyWNX+KNfJycl91vEf+MAHmE6n3xVB8WM/9mODKvhHpTon8feOeed3/w3X85Ti6nVkmiHqfgIFgU5TdJ6h0gyh6jAxpiVKJUNWqPMeYyzt6Zy2WmGbNkyYKoXQGqUkOstpOsnspCJLJbNTT9tJRpOAcL16sOLDT19GK8X2eJ+L916hPbnOJ3/mcT77mYaTOwlf+De3SbKUrEzxZHzpd09ZHTyPFgVKJrTHGn8q+KmffZxHH7vM/u+9hVp9hYefGWNsx2yxpFl5kgR2d8dc2rtCkiYsl4swuVdVeJ3C/IiizMOEVl0jhaTrWrrW4F1LWXjqBnQawICZS2ltzUMf/yCvfePbvHv7GC1grGBp4ftP2Pzhl4cBMOo6x93bx4geiPrDr/whl/Yu8/f+/j/AWEtelDz+2BP89E99kv/+f/j0e65zHUx5kGLx/LL37c8DwIf42rpis+u6AahZt32uqoqu6xiNRgPhIK4XYLlcYowhy7JhfZE4EJ0N2ralLMshciFJElar1QACRUA+KkOBYbvAQIowxrBcLinLcgBn2rbl9u3bPPbYYxhjeP3113n00Udp25YkSe6LbfDeD5ES8Zhj5Xk+AERHR0cDKeDKlSu88MILvPnmm9y4cYM33njjvpiI6EYwGo2w1g7xE5EksO6sEAkUUd0a22m1Wg0AWiQNxCiHdZXr1tbWcP6MMZRliTFmiJyI7g2xTeP6ywtXUEqSliVaNcyPjyArMG2Ld4KsLBlJhRUSazvIJ7zx1a/w5LNPYVYrsvGEpmkQQqIVbE9zLk0y3lk2qDbEH0QF3VeWhmdrQ/HNF3nub32S2a0jju8uCKm/Dm/dAGRKgiLciT47XULmIJeOxoYpZ01wLFAEm94gthNnqr7+0nZwpv6DEFnTg6P9lR8Fej0XQfTEhLW56v7HSrAw5NZ/68VX+B//m/+a/+wffJJ6do+7+8dUsxWXnAFvGWmF95JUC144qLj7la+yaAQnt+7yqV1NIkVQLPZxCqHvlygdCBWudahUQqJZdY6vHLXUPWhReMdIepadJUkkRRbU9h2Ko5OWo9MW5SHVYdzwFiaJZHuU0XrBvWUA16eJYJxqjJLUxlNXFtt4Ug1JLjhZNFQt1EnKw4VG4jmpPJmEhz/xfhqpzibe8Ugh8S6qFkV/bGfJ2c71Z6MHpBEBzBNCIqUg0R4hJK2x1K2hKEo++
eMf4/rDD/FHf/gHHO/fIs0zinJMWaS9yNGyOjngzf277O/fDmByormyXZAqODo5Ybb4Bq+9/CJahXte6ZAbvqoa7tw7YrFcsDstaTtB23WUWUKeSiaTETdffhOlE7anI44PjxEItJZkWUKZZ0zGOYnqIzGkwDsXgCUCKq9lD2TK6AARbaol0f4awPkQLyHwSCXAerwUjMo8XKOHsFhUwSFAgJaCvTLlYGWo6ob9o1OubE+4tX/MvaNTdi+seN9Tj/PuuGBydMzl3S28cxhrOFnWXBglYX8jMNb3c2kSQPVEJxjb4ryn6QytEeSJCm0uQtRDiEpgAI6sCzdSoiQyIHtE22+pIwkB7JradJ0o1zTNfa4E68400fXmQS4AkQgQx4Q4FkXCXOwHY2zCutNA7FfPj3Vd1w1jRFx+3Ykg7kckAZyPaojjRiTprY/Z6+4EkVSQJMl9sUmRfBDJDXGMiJ+Jy8Xtr5MD1l9fH5vXx/33IiCef+38d4kHuRv8aQ4Lfx5VpBLTOuq6xVuD1ClaJ3TOslwtcEIymuxQlmMECqlksKaPbdQPG1IK6B1HUAqVZxRMwj3tAknBWkPXNXTzU7qmpq1rumpBU6+w7RItWhQJ1ckRwjrERJNleSCeGhuAUC3J8wwpA/E0NKKlWc7o2gpkRjndIutjpLpmSZplCKkYb18KUQzCM9m9QjppScfH5FlGkWlmx8foJOudADqSXOGRpONL7F7LebpTrLqXubN/wN1li/QduVJsp4qx9iSY3sClRQkFVuCaRYju8aCEpEhz8kIznWQondJZg5SeyVizWNZ0jekJHilJUqASjU481hu8d0jvsNZA09B5hXEK1d+neZYG8zZvcabFCInvmQLBXUUOzzDGGow12P7ZTanQp1hnUc6FqKL+upSRUYhAhou175/CK4vZCcvTU7YefR/l7lWqk9tMLjyEUElE4s8p44PyHyBJS+TlR6hO3kJJyPv4o0gEkCL0s64H0AUikC9kiJHy4kyBH4hovndV6qMSbIvAY5ol00xgtWK+qPFCIfEo0Ts2+aj672N74Awc70mNbnhE6tkOPoLn6zQFhucoqVKGA+mfO9uubzfh6alnxIcuN/QF5/qStZa7v+Ln14hSXuBEinYrQhBXIIF40/aOQG7gEEQ26NC/9cc/UCFFJFusbdeHc+Aj46Nvo0A41SRiydwkNF14p0glCw+d05S6YdFlDBkWa6SL77c2pINNbWpTm9rUpja1qU1talOb2tQPrc6TDiaTyfe1HiklV69eHf5++eWXmc1mP3Lq//N1nnTw4Q9/GK31d5EO3v/+9/8579kPXh5JiuPohZcw45LHfu5vIGUS7I67BmEsAoNMEoSzyCLDzBbgLUIn2M5RzRbMDw6Y3ztidbLsFSm9kjmBtExIy4Tx9hbOOU6Xkmk/caWFpKkc1sHBrGJV1yRFjhIdl5Iv88of7jC++BP85U89xu/99qvcvjln//Y+rVP8zr9+g+ODEoxDqACYZLngEz/5OI89dYkvfvqrqLf/P1zYus1ktMfd/X1WS4NOBHu7OVcuXqMsC5z3zJczpjqBukFqx9hXCC+pFnOypiGnVzNZg/JQZFAoyBKFQHLv5JTtx66yXMx55cW3EHgmStB6OHUO++8+Ff8/qftUgQ9433lP11nu3TkmSVKElHzmt/41jz32JB/+yEewxpCkGX/nb/0tPv3/+gxt233XOtaVnw+yTV5fLv6/7oaw/t6D9j8CPOsxCxHMj1EJ0+l0UOPHz8X9iSr7CPZEVf+6IjQSD9I0HZT96wrXqqrIsmwA6ieTCWVZcnJyMkQgeO/J83zYZl3XTCYT2rZlPp/z7LPP8sYbbzCbzdje3mY2mw3H0zQNWZYNDgJxfRD65+jmENWp0Z0mWrlfvXqVz372s/zSL/0Sr7zyCmmaUhQFzjnquh4IDFJKtNYD8BTXZa2lLMsBdFsHyfI85/DwcGiLuM3YVkVRDO2tlBq2F7cR4y6aphnAqwhUxdeqqmG6M6E+OaKp6qAyN0uycgSpDrEGozFpWSKykq/9/h/x+LNPUy+WZHkeAL4kQSSK6dWr7B2fslUmjLRCSoty4VxbPIfW88fHDVeyJbe+eZPHfvbj1J/5Am7Rhkl32U8auxCf4PA4IYJjCZBJT+kFjQt/ax9cD7QPywZXBMLkc1TXxX99iC3w/YT7+i0S74AACoT71bl+eXxPWgA3EA5ETzrwOOv59Gd+kxtXxzz22EXyIkXi8M7x0G7O1cvbvPD6PidKc/Wp55jNK776J9/kxwvBTiZIZAA/lJbIPhpCACpR2NoEG/4kKLa/fbfijdaTpgmtcSRSMDeObFkzEoasLBCJ4q17NUdLS5IqruYSL0BJSBLJqJAY7zmYt6x8ysg3KCHpvGBVGYz1VI1jJ1dsFUH1W7cNp0awvTei1NBYaOqOLBW0+/uIZwJgJ/vcCWujshvo+++hb5IBuBWyB2t6twrrHBIDOsE7iwdSrcnyMc552tbw2LXLXPvbf4sXvvMyv/PZX6OrVwAoJbEu3PsQrJm3JzllqhDeImRCkqbgg3uBd24469bNcc6TJoqdnSnz2QIpBFppLu5NeezaHrWXvHP3mOm4ZGs64uKFbVIdYgq0liQqKJZdj10EsgH9vp25OygpEFINjhY+9sm9dTgitImWwXFCyh7ech6vNVmWMCpTFsuqJ7A5DIJUSy6NEu5Wlpu3D9h75gZXdibcOTjg6pUruKtX2B6PKLMzBX5TN9w6WTHOp5T99iWhHTvnSHpL9LbrGOcJQof3pFSDYjecU9Gf9wDAeYJ6Nx6b8bYnEokz0M8H4p33vTI3ElDEmctLJA9E4D/2fVJK0jRlNBoNxILz41kE8GOftw74r6tiY98Zl4vbivsR/14H8Ne3GQkR0clgnRgXx4j144rj9XkHgrg/MXYh7pvWeth+HH/OOxnEcfX8uB6XidtYr/XX42fPRzOsL7MeQRG3EY/zvPvBe/2+TohY3//198/v63u10/p5XW9vj6DpOk7nNd45JiOH0hlKSLwQONMyOz0gzXPyvAwAt5c4FEpIdBIIVUhIoiuLc9i2oWtaTNtgTItzLX0ICgKHVIokz0BYLI7ce5SE5fyUurEgNSovUW0LQqKTBKU0SoS+yjpJZzqa5SHV7C62qdBJQZLC6vSA+VF45ijKMda2/bVqEM7RrGZ4Z2haz2g0JktzkjxnKjTOOpwLcSbeNKTlFClSJrsFu43noaOKRWV47d4hp42hsS3XC8VHthXbypEB3jtUIhDO4zoTiEZS45UAabh6ZY9iuovQOa6esZgv0GnKweEMkhRZanLjsXWINbI9sU8rjdQKLTWlTFA6I8sKTBeONUl0eC4W4J0J7d6PuzGiAcA5i7EdxnRY2+Fd/5zTXycu9vUi/kR3gvsrEBAC8VR6AmElyxCyoNp/g+LKE4EgxpljweBM00PV7fIEUc3ZmY5IFIwTz6L1JEKgZegLnQMvBr8EfKRSSIHxwUFA9oOD6A0ZrO9pme2cxlpyITg8PQ1xA75F+uAGJYVnwNX9mQvA
AOb34HgA2++PXBjiFuIyrAHwWJp2zRWgJ8wp6YOFT38fBNDfDuuPx9B/RcSuI/IiEBKEi/2AW3MN8D0HwCOURGPJ/QzjNVLW4Cqsz8IDYNzb4Tjv28S59a0vt0aq8Gvv90dzMltwWl2gNYFKp4SnNQ48OCfphGCUGpatCue/H8se+GXve6wN6WBTm9rUpja1qU1talOb2tSmNvVDq3XSQcwa/37r4YcfHn5/9913eeWVV37kSQdHR0fDhKdSig984AMIIRiPx8MyZVny+OOP/4Wpq77fkggSKTGNYf+Pv06ys8PDewXHhyd8+wuv8+yzV9jd2UUuZ3TOsjicsz0eI5oa6z2zgwMWh3O62iC0JskSsnGOEBJnO7xxmKpjflqxuDdH5JKLl1KqhWV7B/I05fZ+xWkt2EoctBU2z1FKsj055vr+v+XtX1Vs/eTz/NRffYqvffklvvoHN3nrZUG96m2KZUKWap5+7irPfeAaUsIf/E9fZfzmH7I1+gYfeu4hWlNxdDJDKs/WNOHShctMpzthgriuWVUrxirFmA7ZNeTO0lQ1xydH7LYN2jg6Y7HW4IyhzARVainyDOssbxwekz39PC/94ZdZVC07OqiaKwfVewDqfy7nVwoGn1S4D6QBgksFUNcdd2/fI8sSwPGv/odf5pEbN9jd3SUXgo999ONcuXKJmzff/a5tPIhocN69YB1EiGrF9yIarFcEAiIJIIIfVVUNQP14PL5PAXleYdm2Lfv7+1y5coWu66jrerCznk6nZFk2kASqqhrAmwjsRMVqJB3EdQCDo0D8XwhBnuecnJwMAE8kPDz55JP8xm/8Bkoptra2KIpiIBdEN4eoKq3rmjRNB1cGpRRN0wzgxnrERCQM3L59m5deeomnnnqKb33rWwCDYjY6GUT162KxGEAxKSVN0wwkC601ZVkOy1hrKYpiiJkpy3JwO4AQ2RBdC6qqGtxepJTDeYoOFNFJIv7Ec7l37QrN7ASSFIXAtx1Cp5BobN3iRwUqTfEq5eVvvMijTz5Oc3LEZGeKMQ1SpYzGI6zKcDLh8Q+9n29/5zb65gmpFHS2n1QXEuc9X6ssT81axq+8yt7Tj3H9g4/w5h+/Pij/vLN44XBeDDEDkgCap0DuBYX0dLYnHkAgJvTKNicE0nvcQDwQZ5Ptvdr6jI/Qq/LW1HgDGBBdEnq3BdcrBp0XPWB6dq8cnS74l5/+PP/oP/or3N6fcfHaBd5Vxzzz8adonWPbwvuuXuZPvnPCN779HS4LeHakQnSBkAF8AoQMLgxKCVznEN6jUoXUgsOF5Q+OGhZekROiJzohsM4zUoJpEhwT7h4ZlhZ2L04ZYZHOohIVrKu1pOo6bGuxSNKupsgFKtGsmo7WgvGwN81JXIuXkuN5R+MkYlqyM0ow1tKaDiUcCs3i4ISx7/OWe1DIutBaorcpN9YAor+vfewMcf25JZ53GQBOgUApGbLQvcV2HQIX1KOp4kPPPc3x4SFf+J3fpKlXPdgr0UoyylPGeYJS4ozcIATOWqDvG2V/PfbsE98TVrbHOXVV09QtWaoZFSl5mnJ4vMQLyfGi4WEX2lzrsD0pxQBhDIYF3vdKT4FWZ9vUWg6kksh68YTYHe88ot+vkCvtw/EjETIkSCdJglS675egM2dkjr1RxmlXU7UNN+8ccu3SHidvvstXv/kCk1HJ1mTU93Ohf57PZ7QO7s4aHt0tAhgmA8xknSDRmjxNmXlYVA3bZcjV1r0jRxD9SqJHgvchS9z2IFJol0AGwof4E6lUfw862s4G4oxWKHkGIK/HEMSxJfbxwBD5U1XVQAhbXy6C8HFsiYS4dcV/dHyJ40SM/ImEhvUIh+gYE/vWdWJaXCaSatYJAHFMif1z3O46uL4ej7PuzhDJC+vEizgerhMMovPCe8UorI8z55dZf15Yr/j3evvHz6wTHOOYdd45YX396wSQP41MsH5M6+SH9eXWiUvxc3E/4u9NH7XiXNevJyXPNFJLuhYSLWiWpxzdU0x3L1KWk0A26B3BIiHSO4ftzkBs70y4frVGCY/oPNgO50Msg1QC70OETJKkfZ9VIWUgjtXLFUl6CkAugli8844kKXAofGdxtgEBWxceRjjPcnZAvTrGNCtMY8inuzSLGT1ai2trRJIy3tqirVscnjSfIJTCekVSbJFmKWm+hUpHyCTFCQ1eoFPBaLLNtUeuMzudU3cGc3TKqmt5aW4Q3vPxHRH6eJXiRYKzFd5bLIrag1U54+1trty4QTEd09UrVlXL0eEpW9vbCG/Z2yqRGuquBRvGzkQloT/2ls6Gvs4Yi7EBrM7zDKWCo4mSEucFxnk6Z7A9xi2EHNxgwnNSF74fODsQBb03eK8D6UxEopC/T3nvxdp4j0B4KIoxtq5Zne6HWIxygt66yOrwNuMLV/sxrX9aEX18gIfq6BZaCbK9h0m7mzwygSKF08aTa5i3DmMJn/MxckCA8ER3J+NdH7MD9GOSFqInHVjGW5exzpHoFKcBleGbU4StAgGgD7MSkiELx0Ov+j8jAvi+Tw6cQB8+FkcxEZ0S+mcgDwkNpmpwnQkEsX5/c3sX6XOQ03C/RnKHj0QMP5DNQsQBw9EN5Ie4Pc4IAmGcDrsmhUFLhelyhGuQKkMkIGM/AcT4BO/Pxt9ANHDftd77iQn3v+b9GQmjaSyJb2iFxwhFlnqQaSAICk/nJFoaytRRNWfUkx/kK+eGdLCpTW1qU5va1KY2talNbWpTm/qhVQTygGHC8/uthx56aPi9qio+//nP8/GPf/xHDoyP5b3n4ODgvmzbGzduAMHuPE56FkXBlStX/iJ39fsqJTyKoDBdzhyvfv6PGP3l93FlPMI28Mqr9ygnNfPWk7YNj1xIOK1qzOkJrnPoNOfyjYfIJyPSJKg4bVNhrcH3E3PWWqrlisXRjNVsie8aRpOcatmyOGx562jE1xYNn7im8U0Acz3BReDqhX38vV/j1mePubPzCI+9/zpXH32e555eUNcdh/tzrlzf5s1Xj9nZHvHWt/bxb91i7+BLTNQXefbJbfCOw8NDpGwYlZILuxfY2do9A5Dbmqbp8Oigfqwq/PyU1771ErP5KdutwVhL14bsbtO0JEqQasjSnJP5gtXlhzjcv8vbb9+jlEEFnUhYOU/9F3Ruo2241gopRQCVfJjQC2SDcM87HyxEq1XH4b0jiqLktbde5dd/41f5B//xPyRNEy5fvMxHPvThB5IOzisQ30vtuD45v/7ag4gK5//33g+gT1mWaK0H0P5Bfct50OPu3btAUPZPp1NWqxWz2YzooBDBhBixoJRiPp8PmdpFUdzXBzRNQ9M0lGU52G13XUfTNNR1PWSBe++ZzWZcvXqVyWTCCy+8wHg8HsgJeZ6zWq2+C8SJgHxU1hZFMYD6cfvRaSDagd+4cYPf+q3f4p/9s3/G22+/zcnJybDe0WhEVVUAA3gVCQDR+cF7z9bWFsvlkqZpSJKELMuGeIrlcglAmqZD3nkkPpyPUIiOB1El65xjPB4PxAytNXVdD6DXanFI1xmsl+BakjSjsQasAe9
JswxjBXdff4tRrnDVgnw8pW0dRZ6Rj0aoYoRWks45hMu4cGHKxfKQg9pgtcfYeM17Kiv4o1nLtUKT/d4f8fw/+HkWd4/Zf/MI50K2ryMo95D9/SIDmJ0ABZ7WQ+ODajLaBAd9oMMFyWJQpNHPpw9T7msT8XDf3zGe4exC/u4/orGyJJARInHB43n1tZv8X375V/m5T32Uxx/fIxkfc2o0i2XDwczypRe+wdF8SYHgx8aKiRZoJVFrJAgpJdL3QAagEhlcDoCvHtS80ji0liyaDq0k1ju2soRpJtjdSpnXHYcnHTsP7XJhlDM/OsULRWsAHMuuRUtB7SWTRKC1wgFt5+law6RISfMUOoP1ntOV4XBhqKTg0rhAmo4kUXRGMEolzhqsdb0C266B7cEBAAHOBWAu3OduaOMA0Du8kEE1qTXOhriHCAJqndB2Xa/4FEglsMKjlebnfuoTPHT9Mi9+65sc330X1zWETa5naMc+8AyA9d73jgP9lSYioGsRQjIeFcyWNYmxWA+NlSxWNRd2JrRO4aSkbjomRToQF6QCehAsksu8CJYanqA0VjFeYY3kENWcYfmwo8YZpOtRfQOyd0ZQUuK1Yms66okK0DQtJ6cL8J5ECi4WmnfmDXcOT9jdmnBpb4/90xVf/tarPPfEI0yyAPLNTk74zmtvcWFaMqsavFR43yE8WC9AJf0FKUi07EFBjxK9I4XoHUXwEdsa7iklRLg3CNnpwfg7kgo0SIGUmgSFUMHCPQ4Z6wB5/HHOMRqNSJJkIFFFQDzGDaxHEczn84HwFklbkdgWCWNxG2maDqSwSGZYV/ADA1EuRuU0TTP0rXH7kTB2Xs0frj0/9LWxomPCOkEvHtM6USB+P4iEhnXwPm4XuC/OYb3iuuM619e9XnE/4u/rr62D+w9yUXiQm0Fsz3hcD3JcOE8uiJ9fb6f4mfXtno+LWK9EJ5Bm2C4+MxqcNUj6WAwrMW2NcSfINCMvx6RF3o/BUDcrBBItVbhmpQxKc9bAYA9oAgkh0PSgs3jlUTohyQOwnZsuOA14sG3HanaC8x6VJCRpRj4qGW9NkUKQJIok28V1LYuDmyxO30bgGU8m1MrTagPeYTpD19TU8wXGtkx2L4ZIIG+wzoNLEFvbpCIhL0fItKCqGxLfIaxHpwqpFLZz6CShKMc88vjjLJuOw8WKsgv9+a3actwqiiIBFF3T4hwYJ1kIySwt2L1whYefeZzx7lW0FoGsVVm8FSgpuHBxhwuXLiKzkrTYpbaCtrVIqXoXkfAw7J3H2QpnO5w1eKeQWQEE0plEInFIR3DQwdB1bXDMCbLzIe7COReU+n2ffnbCXB/MIL97kB8upR6odp6uXtLWM9rqlLSYkOZjbNdRnRxQbO8N/gZ4j7eWxb3XmW5foty6hPIVWer52x+C01ZQ3NS8sN/SuTOKHeIs+CBi5ImUdCFkql+/RxPGKpxHuDaMV7ZDjSZovYWtl9SmJUvSwYHB+uAeEI9PCH9G/OoPWRKjACJKH5c+I/p4XCDS4MmZcfHKo2itQhyG8Hgl0VlJOjvGJmMCrTCS787GX9lvz4WvP/3xEUh2ItxV97mixLPRnzvrQ6RC4xO0ysDO6MSYRHdhEByezdbO4for/TE+6DvR+s/ZVRBIJKfLlmSUop1DCsM4Tzle9KRAb/FIqk4zLcM9XrVhk8kPwBzYkA42talNbWpTm9rUpja1qU1talM/tJrNZsPv6xaw309duXJlAJ4APvvZz/LP//k/pyzLH3g//6Lq3r17w+9lWTKdTgGGjPg4IX0+buFHoRIRAArroXISeW/F3Rdf4QMfu8HHP/4sb792lxdfPeLusuOv/6WLbJcjJIL06jWyUUFS5GHizQQrb+8sPklom1UADAmSqrJMGY0u0DXbzI5mzJdLxkXO3Tbli/NjaudQQtPZBmkNTumgYkkl1y8eIQ9+hfzWU8xnH+L11x/ClZqk1GwXinuv3qV5e0Z18xaj1ZuI2R9wbe+EG9e2kMpzOp8jhafINZPRmL2dyyRJSmcMjTFU9RLvOiQeayzLoyNe/cNvM793wnPPXiJNM+q6wXQdXdvRrRZgIU0EWiluNR2riyNufvNFus7gZUKXb5HVh9Tiz646WZ/K/n4FK1IIslSTZWmwyZYBzbTOY6zFGDsABi4SEbzg5GRFMTpgd2+Hz/7Or/KXP/FJnnzyKbRO+cuf+Ak+86u//kCHgnUg4UHAw3lr4/dSJgL3LbO+bAR0mqYZwJcHASsPsk/e398fwO+o6o8xDU3TYIwhy7IBPLfWDk4m4/F4+AzwXWpXYHBIiH1CzO5u25bDw0M+9rGPcXR0xLvvvssTTzwxOBtEF4R1BWvc9zRNBxJEXddDX7NarWia5j5wJeaNA3zta1/j6aef5qWXXmK5XJKmKd77gTgxmUyoqoqtra1hXdHtYLVaoZQaiAXxOKWUbG1t3QfirLtBxPMfiQ2RvLZcLofl4/rW/zbGhLYvS7rjU6wzZKMxJDm6W0HTMppOcTJhuVggXUdRpAipqJfHZFmB7yzelygVJo+LRHI6a3nimUfZ+ZObbGlJ7RxWgrcxMgHebBxfPmm5ODrlrd9/gcd+/qeo/qffxNxboV0fs+CCQ4ISoEUP+PfmISMJnQpK+V7khhMBMJU94OvjDHuv3oNA8ImWwcO1G/EDCNbR6+8G9laYPAek8CFeQfQZzEQlYfj97v49/vtP/zq51uSpIpGCtu1oWoN3ngR4Phc8Wso+776/XzzEWfmY/SwTiUwkXnhunnR8/rBh5WHiA6lKS7iQJ5RaMC4ViRa0TmATxShNOTqaYy3sjjW2M6FdZCCGZFojnEGKoNivgCTP0dKhhQMlaI3gcGFwQjLZ3UKZkNftRDiHsr/X01HZxwMoTNf2ZKszgBPA2kBGcE2NShK0Uj3Y4nt1agCFpFQIGe3SA+gqBSRFiTVdcCaw4ZrPUsWH3vcM73v6SRbLJSenM44P73FyeMByfkJTLYNa2XQhi933YyQerYMTgnMOITXGOhASrRVbe4pkvIdOMy4/dJl0uoO9vc9HP/Ep8qKkWs5Q2QSzOiEfjciLLNiutzW2ayH26xCUz6nutaF+yBOXBNKDElFh63uCRg/Y9ipQrA1qV63oTGjPLE3Ymo4w1jEaZTjvOT1d4r1nkknGjWbRWt68fY9P/viP89EL1/jWy6+y6AT1co5Y7NNZy7w1FJ2hSBPunKzYyoMTRu0USZ7jPIH0YQxaCsZFGiJ+RA8EI5AqEAoC6K0w1lK1lkSH8U9IUFIF8FYqvHcoqfDOkZjeEt0EhbLt+7f1sSP+RIeAJEkGcD9N0wH0X39ujqSBGJsQwfZI3IrjT9d1LBYLkiRhsVjcRzyL24lOCt4H2/XY5wKDK9l5N4G4vUjWi+9FIH6doLDuiLBODlgnFsRaJybEMfm9gPt1F4boTnCemBjrQWTD82DcugPB+jlaJy+sEyji9s4TSM4vu17rTgoP2rf19a5/dv1YlASyFFXVeKnAeZaLJVXjqDpLko+YTnfZuXiJre09lJBUyyU20aRpjt
ISIUIkwQBaCoGUKjiiiH6wEIDS4C3OGaS3JFIjpe5JQgopNEJmeAH1YoE1HTpJ2b14mWIyQUqFSlKSNEPiaKol3fIQSct4MkVIhe0Mudpjmm9xfPcdbHuI7yz1aonWKQjN0f4BSli8teisorOCfDxCqhRlNUKmJOkoxNYIQWcc0lu01Ey3dnFOsjO9w8XJnLoxJKKlcZ6lBWscxjaYzoL21EJxmpSoYpvL16+xfeESWT7FmFUgXwnLhUtjykmGVClpMUUXE3Q2hs7RmYrOWIx1aCVDHygl5WhCkmbUTQ3OBHV8D+gbZ+iMobMe53VwCZCKrmsH0L9rW6yxeOd6txjOVPSDcv4ccfAcccUHtBzb1cF9QKSkeUkzPyQfbTGa7jI/vE23XJCWY4SAtpqz3H+d6dUnkMUW1nl2Us+PPXnA8z/1OEd3DZ/71tt0JhLhXLS5CY4AoRfF+0BCTwU0hEAngSBRgTSAB+EcZnmALHZQOkSJJWlBg0WrNJBifFD3SwGlgnnnkJ5+9OkBfx8IBeuEg/MEogjUh1iBGq3DeOWFDOQ/AYkALzO0duRUrCjItKPqeleF/vlLEZ6bjPCBPOHjfkKpHbWJhIgzUmh/ogBLKg2N1TgSOhdiAZVwCN+AyPGotRO4/uH7jyn+fr4vWr8AotOBR7A1Tlk4ASicF4xTy0HvVBGYPoHQOlvBpe3w+Oac56FL379b5YZ0sKlNbWpTm9rUpja1qU1talOb+qGUc26w4oYfzOlACMHVq1fJsozVKuQcf/3rX+f111/n/e9//w9lf/+8KwKWsaLlOZyRDrquY3d3dwAOf5Qq6cEm4yUeQS7g4K1T5s/OuXJxl/QJaOpTynsrJiPFzqVdhHd4a7BtE2xGpcb3WaVCJeg0pdAJtm1xtrcdNQbT1XgBkwsTphemiCSh2DHw2j2010jnAhjeVaTlRdpqiRJQpAmPXDZsT17i7tG3OX35Ao2/ROsSnFCU3pCbfbS/ze54xfUnR0zKCR7HbLEACUVZUDcJl3avkKcZTdexrFcs6yWr5ZymDkBVs1pw59273Nxf8DMfepjxZIxIEpqmpqlrcA6zOAXvyDNB19XcHV3n5PCA/TvHSASm3MV7z8KFn++1BLA1GjMqUuquoe066sZgrON7X0tYT5EnTMcTynLUAwVhKqluWqw1VNWKtmsxJgAt3nucd3St5/jwtM+Ivsuv/8Zn+Kf/5H9NWUo+8P4PkGUpdd2857bXQT64fzJ+/e8/zZ1gfVIu5kdHckAkHETF/IM+v15xPaenp8xms8H6Os/zQeUf7aPXVY5R0boOXERiQYxayPOc5XJJlmVsbW0NalRrbVA3OkdVVaxWK55++mlefPFFqqpiMpkAgagQnRZWq9UAXjVNQ9u298XeGGOGScqiKMiy7L7tRIDqqaee4vOf/zz/4l/8C15//fVh32OGegSskiQZHA+klBhjGI1Gg/tAHAestUNEQ9xGBL+idXh8HWAymQz2203TDPtsjBkcJKL6NwJ1QggWy4ajxYo0S/BNg3Ygk4zxhS2EByclspkx3dnCm46ubcizlPHeLkmikEmKTjRCCuZHh6RJUMs/cW3Km4cVZSeonUdLH5wJAAN8edHxyGlH8vJ3GF+9zFP/4Sd44f/xeZbLFu8CaSAowgMQr/B4GeaWMwkjLzDeY3zIPFYIlOijFUQPwLmeDLN2F0cVYf9Hf7GGbbizP4f3A8wUHBgUgqBxY4hfED0xISrV8dCYjta0A1lB+ODS8EQi+fBEkSl60kFQBapeJa9lUJArLUlHgRyz6Byfv7Xk9caTJ5LOO3KhKaRkpCV5ojA+uArcXXZceOgy3WJFqkBNRoiuxXlHnmg675E6I1EeYQU4yatHLQ8/8wj5YkHiKhZSU+Kpm4ZUS0SaMS5TRq5DKZjPw3E56XFK4pqOjKBmlFL2rg19m/TNmGiFVBpr3aBwjersXluIFOEYpJQoqXDeoXXSk7Y8SirAooUOAHAEl1NJnu1wYXcH8cRjyB787oyhbVqapibaLHdduP6lCK4YUkCSZAgpSXQg22mtaI3DWEeRJdzbP+T1t97hwx/+EFJK6rbhxuNP0NZVf7E4urahXi05uXebk4M7ZCpBJwk4i/cOIfurrndp8JxZXgcIyOGsx1q/dgEGBaWxHa2xhGYIGfNChPgD6xwX97bAeU7nSySCC6Vi1Vnmy4r9kwU//dc/zvMf+Rh/9PtfoF4tmAuBPLnNI1cu89at2zx9dY+DVcvhYsVD2yV1WlAWeej/ne3BOc98VZPpYPmvE41OUtKsQKcZaZaRJClN19G2DiEI7Sk9ru+XAZzpcM5Cnz8uOAOUuzVV/bp9fyRuPUjh/qAxbz0eYH2cCeOqG8gLkfAW3V+stSyXy2Gb0fWgqiq6rqMsy4HUu1wuGY1GQ3++7lizTiaIz6VxLF13AYj7EcehCLbH9ojEBOfcQKZbJzbE+IUHAWlxrFwfQ9fJBudJCudJh+sExvXXzgP/590Q4H5SQmzXB+3j2f3Pffv6Xs8w6+t4L6cDJS1aw1yCUyL0OwKEsIymEybblxAiYXZ6ikAx3fKossT74LIiou2G7I9lIBmo4EwiRO+kEkiTnmDzDxZnDSqqx/tlExyFm1BOdti5fBWVhegkLwXTrV0Wp0u8PyXPFEp4sqLEpxLsiGY1R+cTsnzC8vhdpGzJsxSrOsw4x/mM2eERznbsXbvC8d07zI8OyMuc2te01YJstE0xuYDOQdie4IqgGO1A0uFlTdmNufzwdQ5nS+6cHHEx1WTAXiox3mA6M0RVHOuMZLLDww9dY+/CJdJ8iiVE/STpiK2dCwjtyfIRbdfghUSpFPrIHK0TPAJnHcZavO3wDujvhaIYA25Q7TvvsM7SmYa67mg6EDojyx157sE7nLXhuaYL5CBEdJSRIfJHxOtlGMzPrFXCVRavsHCvtgsSpRB2SXV0i6SYMr/3JqPtq0x2LnF6cBupNM3yiGa+z+5D70MlGc55DJajlUGMc0S2w3z/a+yNJPPWs2wFJpoL4HoupEDIOA44cuXpzNk9UipB40J7SNdSlFuIcmdw1BHekEiFtx0SML4nMnhobVT998873g8xBvi4fcL4RPQH8Hgf/3IognvRKBOBeBJJSAiscXRWooSFRJBZQ64l3kFjY8t6lHCc7YI/25bwdDZ+JwnjoYhcAd+fLm97QmkKPZlC4nEyRZoZ4UlSRcbIGvHg7PcHEbGH/iQSt/qx2PV9s9aKPEs4WgjyxLNCsjKKVKxAOaxscSrBOrBOcG/mePKhgrIsuH373ndt63utDelgU5va1KY2talNbWpTm9rUpjb1Q6mmaXjxxRfve+287emfpa5fvz5YhgOcnJzw27/92zz//PPvOUn373M55+5zOogKZjgjHUBwQPhB2u0vqrSUQR0BvVZDcFoL7h7Ouby3jRCwtzsmT2qO9/dZnc64cO0Ku1evkXiBazuEUr00Nsh7vPeoRCN7i3h6JenBnXdZLBcI4RFegLcsVit+fE/y1Xsa00BnLNXilPHFSwhX0rUVOIdSku1Jz
mTkaNpT5qu7rOoWYx0SQZkrtiYlabKLwGG8YVl1pGlOVmTMVqfsTndJsoK7xwcc3Dvl6KhitoJFI2iM4uKJQcoa11Q8/9iYxeqUl19/l6d/pkF1bbBT9QK7PKXuHErD3dWC/e2Cw5e+zbIxWJmwtT3l8N4tFt7Tfo/nQQCjLOW//N/9l3zyUz/Jl7/8FX7/97/AV/7kj7l56x2azv471zGcU60YlSWTyZStrV3SHpBVUrFYLug6w+npMbPZKUJYbNPgXJhAVEJQV4bFfI5UY37393+bv/03f4FHH32UG4/c6MH1/Qdud91pAPiu39eXO//7OuBwXjm5DuBEoD0CKQ+ayDsPPkSQPAI5EUgZjUYDGB+jBOLvUYUqRMi5nk5DXmxd17RtS5IknJ6eopRitVoNZIX1vGmtNcfHx0wmE65fv85nPvMZlFJsb28zHo8HlWvsOyIQpJRiPB4P2xuNRkMEQ9u2TKdT6rrm+PiYLMsGx4Zo+V2WJV/84hd55plnePHFF4d9McYM/XKapszn8wGUGo1G91lXx9gG7/3Q1tHSez6fDwBNPAfR8aFpmsFNQSlF0zQDIBTbzXtPlmUDqWK1WvH2G7fZ3p3StS0qy0mzLOQg40lHI5xpmTz2JIe338W0LYlOmezuoLMCYwxFkoF31IuQ+Ts7PgEPV67tcvn1I2adZW5tIAcEXR/Wwan3fP7eiotlQvZHf8Tk7/8dnvjUB3j5c1/H1QaHwFuLFx4lRJiZ9sEVJMFTElwHOu/pvED5QAzoobt4RQZl3cAz8LgBUxK9FXOEfoNjgRCBWBDXEgEQ0dshrGcuRywjTNYD8uwzg9LSQ+bhhhZ8dCyZ6EA40CK4OATloCCRYd/TMkEqCUrQWfjK3YovzDpqD5N+gxbYTgSFDp+dlJq39is6maCsBywoTaoVrhOMU4XUAuMUWaJYrhrKXGIbi5qUeKGCi0XdMfKwWjacNIataYlKM1LTkUrD6QpcZxnnQeGeKE0qPU1/DQYQpScMyGDl7axFyOAioJTuVe1hwr8Pnx5A23hdOx/AeSUlzgW1OiIoLJ1zJD0BR0iBEhLrQpK3MYY0kWgVQAMxLrHGolRUXPcAaXQG6q8NrROUFHRtg/cO5wWrpgtjNAKtU8os9E1lWTIuRxgXohmCyrbj4NZbvDM7pu06kqQkTdJgeS4VSgpM2+D6bYoe+HI+gPLG2PD3oHj3OEK8hNIJ3juarsZ2vY06hGMgxC7s7ExASY5PFhRKsVskHFSGm++8y72DA37qp3+GG488xG/86q/Qti2nBm5MSlZWcLiY8/wzT3DrpOHeasWVS9fJlKI1NjgpOYdKFOMiA+fQPfCDNbiuwXpLa1pskmC9CNb03tPWHd47rDU0dY1xFokPsSn095Fz0INesQ+PrjrrbgCxr4v95LriP0mSgdQQr6P4d9u25Hk+jG2RxBWfJSMhK7r3pGk6gPnx85FkFte9XC6HcS26H8S+PBLf4rrjGBGXieWcG4gGcd/WCQQxPmc9XiEe33obrb+27rKwvr742vm2jBV/jySC9bFl3d0gEg/iutZdJNYdFtaJBnBGojhPJlh/Zj/vxPBetU5IWV9XLGslWkJZ5HhXk49GbO/sIXXJyckJ9+7dwThPMZoyHo/RSYpOUoSUIHtXJ+GRyEEtH8zpI3AtAwGhJ7/hw4AQkoB8TwAWxMiWrBxz6aHHabuOxekRCTVlkXN45w7777zDdHuX8daUViYkUuCdQQHWWLKty0iZsTh+l66ekZVT1Gga4pcmwSXn9N4dLl69QbWak5UZUgnSBLrVjNaCFJpyvItpmxAJISVCJSxWFc46EIo0y9i7dIW923fZShMuS8FuoUm9w3WezlmEUixVykxkPLm3w4WrlxlNt5EqxTuLNZbOWVSRo4RHqoRUKJyHul4iZYYTGu+D+4uUDmcsnQ1kDecMTeNIdEaWFYGwpFToo43FWY+xhra1uNbSmeB0VWQFQoAxDcYGQpPs3RMCgeR+p48I9J+RSXrnI3/mBlAWiooxyegi2e7DeO9p6pr5G19HeEcx3uX2za+SFBN2H/4A3hg8CqE1UkgulZ7bBynf+M0/4vX9lDeOLMYRYhJ8GNtw66TFMAwa69nJoGpDlJQQnlHmWS179b23JGmKsQ1Kpwgszjuct3inQPj7QP2eY4LrCQ7r6n/RMw4CSYP7XA8YCBEe7VuMyNiZZrRNTVu3ZJHfL0IfnKWapVfksmFRZ8MzjwekcCjJMGaJ3n0u3rGm/+4jete/vndCiNBO3rngstATL7R0GCMQOBw6kCcGauP9xxcO2X/Xz5mLAwP5ZOgN+wc3Yw1V3eG8p+o8lyfQdjWTpGGpFIoVSiuKLHz/HY1THn9kwsHRHHV1+p7917+rNqSDTW1qU5va1KY2talNbWpTm9rUD6Xm8zmvvfbaD2190+mUixcvcnR0NLz2a7/2a/zSL/3SYAX7o1TnnQ7WJyTXSQdRzfajVgKJwWG9RCDovGRlHbf2Fzx3oyFJNFcuXaIqMqSWuNYwv3uAEJILjzyGztKQcdoaXBNARCEJlqVZ0oN0HlMFYNu0DVhLtayp647OOh4dCU4WklUrWK0aVGeY7J0w3ruMP+3ofJgQ9NYhpCRPJVlS4LdyQtqnP7OadB2dtaR5yXSroLUtp7NjTmcL6sbzndff4uDYsqgVncvxeCa55Ykrmotjxd22xVY1+28fUC0cyfY2qs+i904gtUIYQ9N5MuU5YMzhfM7p3VM6D0ZlzJYz6q6meQAg/uBzAKmSvP+Z5/nF//n/jKuPXONjP/kT/C/+l/+Yb3zlG/xv/rf/BX/8ta99T24HQsC4LNje3mV39xJ7u3sURVD1O+fYqhtOT05wLuTGLlbzYHE/7Kuk6wyL2Yo009xz+3zxj/6Qq9eusbO9w8ULe9y9ez/p4E9T/cX3z/bvfjXj+WXOLxuXiWALBIeANE3vU0I+aF3r5IMIio/H4wEsiZbVVVWRJAmz2WxQc0bQZDweU9f1QCxYd0PI83xQSq67Eqw7Iuzv73Pjxg2cc7z88suMRiNGoxGLxWIAWKJbwboNdHQ+iH3KevZ2BJwikCSEGED8pml4+OGH+YM/+AM++tGPArBYLNBaUxQFp6enwzqiynXdUaIoCgBOT0/vc16IwJUxhp2dHaqqYrFYAGdq3hgBEfcjAl5Syvv6xwh2pWk6kD6W81PyVDDZmbB7cTdMxBvP9qWL1MsVBsXRwSmr2pMB450tVJpjTUc9mzHaCrbCi6MDtJTs7k0R5ZTlwRFXLk+4uTKMjOO49X0+9hkJ4K3W8fm7Kx6+cZmbv/VveeoX/hqPzpe8/sVX8Y3F9UCz7pd3PVDv+59SglGE+98xgAfDDSvOyARxJjza/g5BC76/N7zvgYDeyYCQrSzWJsm97x0R+tVFUWyY3A8Wu4iBx4DyMEJwIxF8YCTZyQSJkiR9VnggHfRODkqQZopkkjN+9BqnL9/kzaOGz96pODTBRNj14FcmBVoK0kSyM8nYGmleO/TsjjOq0xm72wUqy7Ao8kyD6/CqV+V6R5ZK
HJKVMWxvjchXM9quoWtaNGCN4cJOiREKkSSIZsVSgukceSKoVy2pluSpAiRKa856gXBynPN4gqtAcHuJzA+BkmFfhAwtHYAhgZSBtKD6trU9sO6sJVpOgwjOARG8B7TSWGfRKqCBUobtJ1ojE9mDpwFAVVKCVHgfQFljzdCvIATeiV6hKTA9MBbIKgqvUoR3JFmG9h4loMXw9ttv8J0XvkbdNOTjLSZ7V9i7eJnJ1hQpJLatqVYL6tUS06yolgtMW9M1LXUTgOWgyg6OEEmekeYFSZKHGIiuZTHzGOPwwoXx3EZimKVpOpCKrMiZz5dMU0ltU4y1/N4Xfpf3f+B5PvKRj7A9GfHL//JfMV9WHKxabtx4jLv7+9w+mHH5+sOcLFY88+wztNUqEDlcAG9WVUuuJdtlSprlPQkkkD3aajUQJ2zPvFGRBNlfD8aaQK7zHq0CwQ7ZkyRlcHqKJLPzIPg64B6B9OhQEMlwkYgQx57YZ6+PSxEkj04zcblIHoiOCnH5OAacJz1cvnx5cChYJ0AA9zngeO8Hx5pI/Ir9byQtxOsurieOfXmeD248cTtxf9bJtzEKYp0sEdswrj8uH7exPrav/x/baP1YYnvEMT8e6/nYg/X/4z6tn5MHuVSs13lHhQfV+eeMB7k2WBeeq7NiSjLaQyhN17bQHpAIuLi3ixUSZIbSmgGo9OHaFIKecBBB60g360FLQd9n0V+jFucEQiiEEjghUDKhGKXoLKGua5azY9JEMZluce+dtzm48zbWOdJ0hFJhABkzRZcprTUUOmGy9zDL5Yzm9B7WeGQyQusUoTSTcUpTL1kc3GayO6VrlyRFSVq3lNMp6WjCeGuXul6QjcfoPAtRE9aF+BdpQCQIpfE+XJtFlrOze4kizxFmhugjYprO0vlAupqj2d7Z4uLFi4ynO6g0B9+7DBiHc2C7Duks1XJFkmW4pkNnJTIVWGcwXmJdfz1IifQJpjNY4xA4rG/pAOFzRN8fa5WglCUEDwQCQtdKQIXvJf13D+9sOD8qXo+9O4cK8S6i/1v0fVMkBoIYnkc8YL0iG43I8gzpDd4Z0IrRdIemXuC7Y7auPoZAYU1LU83wpkE6yyh1vC4T7skJq2/LQAjwblD++57U6AX3PWPgPcY5EqUYJZ5FB6n0jDTc7V0inLVkRQ5tg7SrsK8GbGfQqUAJh2H92gx9d9zOGibP2Vi8RjDu2yO+k9BgREoiDYnUdK0PHEER9tkASSJpO0UhOpY2IVUtrdWBdONdHxkhqHy8zxjcFsJ/DnoiahjaPc73fS0gHHQ+QYka0HRGkCCQvsF6PUTXrR9VvFX/XXUfganvrxECoTXCO8pRwmIJmJaLI8vOpW283ObmG47aSPI05XDWUNWWcZnyrZfu8tSTF9jrCdTfT21IB5va1KY2talNbWpTm9rUpja1qR9KvfXWW/cRBH7QKoqC69ev853vfGd47atf/SpvvPEGzz333A9tO39e1TQNJycnD3wvAnHwYMX1j0KFybKg/nVA7aD1njsHDfPFnK3RhDTVqMmEtmvRRcnO1StYBKujo6AQ6ixKJugiRecpKkmDSlYCCGzd0NUNeV4g2MG2FWmiSJct1bKm6yyPlR23FiNmJy2jbZjt36OYbFNMRqTG0lQh3qAzbQCAvEOqBGfrXnnqUUqj05Q8zZgt57x7a5+j447DU8+iljRGAQIlFIU2PLLtuHYx4dLFHYoip0wErmppT044PrY8/MiID/zY89iyoHUOpRWqV5sb79FWcFJeYH7vgOWqwwDpOAsgtrN8r1eEFIJxmfLzP/83uXz9Sg+mQDkqef4jz7O9N70PQP/TSivFdDplb+8Sly5dZndnl1FZBjtxfG/dbKmbmratqduKNE1wfc41Pky4rVYtyXwOlPzuF36bv/rzf53xeMLly5d54Vv3O6O810T+g1wOzr8f34sAyYOUhuuvlWWJtZaqqhiNRveRDt7LYSH+HckG62B3URRYa2maZrCvjiBM0zQDOL9ORjhv8xxdEbquG9YXgaiTkxN+9md/lps3b3J0dMRjjz1GURQDQL8eN7BOOqiqirquB+DHe0+e58OxTyYT6rrm3r17ZFk2ODhEMCxJEr7whS/woQ99iC9+8YsDKSICNxF8iqrbuM8xDiJJksFdIb4e7cC9P8s3l1JS1/V9ytQ8z4ftdV13nwW31po8zzk5ObkPTLp47RKTccHF69fwHpquYzSZUq1WWGOYH97DNA2iM+w8/hh5mWOsQ2DIp1NOTxco11LkmrYxFJMps8N9dKY4WhlyGfo3IehR+X5+WQgcni8vGh7dF/y1a4rv/OvP8f5f/Os0i4qb33gb18sBDQ7lAN8rUgmfzRCUHjrlab2gWbtNB3Gb6EkEnn6KP6Y6n/3re8AJ4gR9r4b0flguzosnYZ48EA9cn1lMIA5E8avAoxGMBVzVgicLycVMkimCRb0IfY8ggOBKigBAAemFC4jyAoer1/g3t5a81EZFe9iJQgm2U8kok2jh2Z0qvvXOnK3dMXLV0SkoL+5QLzvsqkYVirZzKCdQErIsYTV32Nbi04QtYUiahkVtUc5jnUNoiXFApsisoUw9i9YyLjWuMSFyA491llVlyPu+K7RliBMIit/gdhD6meBzoZQkQiFBkUqfmS7DdWUtQkiUVv15DACYtSacPe/RfTTImRq7B6+9x/mQDi0RvauCwxqDEKGf9T4YAzkfXICUEGilMD5EDPkefZESOuOD+tg3NKtFIDtISdK7D7z76ou88u2vczxbUkz3uPG+x3n4xg3G5Ygwsp9dE13b0NY1XddQrVZUqyWr+ZzF/JS6qvC2AWdJlCDLUpJM0dQLTuaLAKh5gnOEC0BbdJFwHqQOWedjpTFecPfeMaUOjhknx8f8v/+fn+bqpT2uXL7E3/uFv8ev/+Zv8torL5Nowd6Fi1R1zaJqePLpZ3jqxnW++cdfDEpTfA+CJyQ6uGQEIE8PzhNCQGtdaGdre+DX40U4t1HhG5e1LgC3wno657GSoZ+Pyv51Jf36eBYjD2J/HR0Ruq4jTdP7lPkRxF93wYnrij/rAP75MXPdiSCu0zk3kAriuBQdEiJxLDrVxH2I42Dc9/h/kiTD37FPj6SDSFSM24xEuvUxNrZRPM5YkVARQf/YXnGZ+N55EkF0/DnviLBOBoj7/aDnhAfFI6y353s5I63/H5d70HPL+jPKg8p7j0hLRpMd2rrl9Oge1XJGoiVplpIWI/LRCJ1kCJUhlaZtarRUOK1BJCRCoodrJXT0w/Z8dIFyPUmmd/wgjDFCKPJkhM4SmrpiOV+QJJrx9g5Hd+4wO7iDaVdkWUbbtBjbUS0WbO3ukWUpq+WKyfYWSVGyf/smzeIAnZQY05LojM7BaLpDvTym61paJ9BIsvEYgWC0PUYmRSBcaI3OM1Q6RviEqqpQOiVJC1Sa0XXhvuiMRaoUlYTvVGmWYiqPkRZrHcsuxF21ScbKwTMPPczupSukWYZ3FiMVTgg60zE7OqSen4R7BkOapaikZLqXI5wYMu/DM4wdAHghw3eD2O8
aZ/BdHZ6Nek6zlJI8yzHW0niH9TZ8H7EWcHhveiW8RMsEKeNzY4KSGtWr8qUM7jmiJxqIfqyGM7Lialkx3dshzceUkx2QgXZSLY5gViEmOwg9IR9NMNWcyc4VlApkQo/EtDOulIZbsz42zEe+YxgNnIvuSz2pUfRxEs7TGbgylrxxbBingjyJ++fBtWTOkqeKw6ObGNf3z90clYzDVz7vUT3Ab3sHh+Dk0BMQRHwOOntt7QYa7iPpbBj/vKSQSxq22NspsF2LUCr0+XVHVVsqk7OdVrRqCs6ReEfnwjishAgRF/TfMQAnBN7FZ7CwXSE9mkgmDQ8SYRkXxnIhKMSM2ongqNBB63QgdfQPdyIeQn/86/38OpnsjG5B2I61YE3/vBi/C4HtPI9szVnUCTId8dIrpxgvEaYjzTP2dsY8cnXE6XxGOS4YTSXv7LeMigd2T99TbUgHm9rUpja1qU1talOb2tSmNrWpH7i897zyyiuD5fYPo7Is46mnnuJzn/vc8NrJyQmf+9znePbZZ/9UpdG/jzWfzwdF7/mKyi5gsBH/USvrHZ0H66HxjpVzODzLFdy5dwctBGWWo7IMYUOUgnGeen6KdZYsz8nHE3SeodM0OAGoXv0BQfGhFdkoR2mJaUq6tqFpVuSjjmK5ojpdImzH4QqsUGxf2CLXmurkiNGFS6gsYVwWlNZi2o6uabHO4kxQlUqlQAg659i/d8Srb73Lu/sdVSPxXiGFJxGWrdQwyS07E8nliyU7O1OKfERnDYvljOT4ED3eoV7NUWPHjceuMbl8jTueMIGoU3Ce1jq6DjqRMtMZ1eEpK+tAasoywy7neBkUb99LpUpwce8iP/dXf34AxwAQgrt373B8dMT3etdkSUKSJOR5wXQyZntri+3tLTwCbw3OQlk0jEc1Td0wm58EAAlB23V91jkY41jOG9JE8vJrL3H39l2KxwuuXLn8wO2en8z/Xp0PvlcQIFYEbeq6Jk1T0h74i8s/iGwQgZTZbEae56RpSlEURIeCCLbE9cR1RoDDWjvYVsf73FrLfD6/j5AwGo0GZWhU+6dpyhNPPMFv//Zv07YtW1tbnJ6eAsF1oaqq+1SgEUiKQJH3fnBggBD90LYty+USYwx7e3sYY5BS0jQNZVkC8PDDD/OlL32JT3ziE2xtbXHr1i3yPCfPc9q2pSiKwcWgruv7HA2iu0MkaMR2n81mQ/vneT58LrosSCmHfYkZ5PHYTk9P2d7eHqzEY4TFarWiKAq2JmO2drbxzpMUZZ/VnmCaFW29YjwZY/OMpCjQOoC2ZVkwP7FY6Znde5erD1+jnlU9i8oyGpW0V67x8aducjJbMa4llfVYFwgAAc7xOKD18Bvf/g7p4ZiPXsx46TNf4Lm/8x9iu8/w7ot3qFoLUuLx+D7SRUvwIgAVmfQUDnIB83htr/2L7zXug7pRDGo91sAA4UPMjV0jGRC19V4M6nsdkg9IRchoNj4o67UPE+dKBDCoEHBBwY1cspcJUhVcVVTgTSBlv2xcX5kgtWBx812O377FF24t+YOFwXiCuk+Gz28linGmaRxcnqYcntR0ScZ2a6jaju1HLnD9iUd541uvkueCRHuSfISUClctaRpDY8K5UIkmcYbaeFIZJvc7PI0PavXWSS7rNtjrC4FQmlXV9UCyZzFrSDPVc0nCOUIEC22ldE8y8QjkmULYu57LIbC263O+Pd4ajOmQSgdKibchZqAnRQVANJwT30cuqN4hwfpAaqAHup2xIC3OnVnRW+fQyqOdH4hgOtH9GfZrxKEO6zxaKVrhgtV5lvHSt77N8x/5MEoIjm+/xduvv8Kt/UMuXLvBxz/2NHu7e6SpRgrAWUSEEIQPbZ9ndEVJ13WMpwGYbpuGzlictXTNinq5YH5ywGJ2wup0husBaSkFUshgN973sVJ4hFZBoS1lICBIx9akwON5d/+EzDYoJbj59lv8y1/+Zf7m3/ib7Oxc4Od/7md446mnePnllzk+vMdsueTK1es898RD2MUx+yczQIScdDzORZX/VXanJc465qfHVKs5znR0JkRmuAEY7sFbEQhCXhA9KnDeBZcGencHGQAmBwPxABj6/kjmWlfqr7vcxN8jsWCdlLBOrIuA+nkQKlYE69fJLHFb64B37FtjfxzB/0gSWAfgI1kgEsLath22GwlkMXpo3Q0gkuzWHQ3ifqwTIaJzQ3RBiGNmXMe6I0/cbmzT9XE2HvM6oS3uf1w2tus6keIvgvD7pz23FMWY5fyE1ewkEFKS0PZ11dK2BqEbVFKgsgKd5GidQDlCJwlpmoVIF6WCc0vvjBPB8AiY93YH/XY9SklUkpCkKV3bslqs0FpSTraYnx6zODmkbSuSPEcpiVANTmqyfMTVG4+TliV3bt/myvWHcUjuvP0K7eKArq5I8wapEpRKkSLn7ltv0C6Pw/PBaEQxnuIFJFmGTENUVFZsI1DobAuvc5AZmQhEIWM6mrZFJxm+d5wxtgttlWny6ZTb+3dIeiJQhWacZhwbKKZbXL56nbycIrUO8T9tB66jmp8wPzykWswZTSaoRIOXGO9oTEeZB/cI31+b8Z42zoRxt8+rcN4jnAfC/YSUOAPWh75uPJ4iq5a2syFuwXXBBccZlBTIvAjnQ6vhR+rgdCD7uB8h5OBMFEd/fA/tS4FOC9I0ZzTdoZxsg3fMju5g6xVXn/gIOi9o6wVd3ZAVI6St0cU23vfRCYFJSNWF/iSREtG7DggIzhsEBpZSGklQ2hvvOFrAcxcL7pwYro5ThLdk2iOtZ1zC1njE6apDFTvYdkmabuOdRfuOtqlB5T1w3j+K9d8vZO8MFMkGw33Ug/DCh6E6vq+6fUSSU9BQijmHR3C0v+Sp9/VMDS/CveMDGa6QM0ZlzqrzWLvi3rLAi/D9SkkRttnfR8K7/tf+OUs4FIGc5pzDCz/EMUjpUCKQDVurcGYJeQJKkdgGQ4goCsdzfx+x3s9HEsJZ/9ETE2wgJOLidRmeE1z/nHFvWdB2UN1zNJWg6sB1HnE6RyZTmsZzPBM8eaPj4KTj1r2GNTrFn7k2pINNbWpTm9rUpja1qU1talOb2tQPpb75zW9+1wTaDwKeCyH44Ac/+F3r+5Vf+RX+yT/5JwOI9aNSi8XiPtLB+iTCerzCYrEYJlp/lMp6j3GCxnuWztJ5TyYkWSaYVUve3r/JU489RyKDNbUHcJAVI7JRiS4ypFLIRPVK0V5KKMP0vndBfdi2HavZjLpeYboWb4NCVSUJ6bRgAjxsWopxgkwV21euszw6wrYNKs/xziGUIh1p0jKHXnU6X6zY3z/h1p1Tbt2tOZwHi/NMeqZpwyi1jEvBZJSwNS2YjgrK8YgkSfFSM5+fcHD3DvPTGnvzHcqPPczFi2PyhycU5RixdwknNFoKsjQNCkKhmNfAeMqpaTg6WdJ5D0JSNx2u67BakYqUrckO+0e33rP9lRDkmebpJ5/nfR98blA6Q7jWXnvtZQ4O73JOD/SeVRQ50+kOk/GE0WjEeDKiHJUkSUpTV6xWQbEeQJeOIitpmpa26wJQ1i
t2vfVUK0uetZzKE96++RYPPfIQuzu737XNCIzEfX7Q7+vHdP7vP005+KDPR6VnXdf3Afbryz/o77quMcawWCwGkCQSF7z3TCYTIAA/6xbbbdvSdd0ApCulBuJCJC1EdWwE04UQvPXWW1y6dImtrS2+/vWvo5Rib2+Pra2twTFgNBrdFxOxDtbE/UjTlDzPh7iDuI3pdIoxhqqqaJqGPM8H0Ony5cvM53N+53d+h5/8yZ/kzp07962/bVtWq9UQO6G1ZjKZ3KfeLctyAKsiUQAYHCZizvhyuSRNU+q6HtaR5/l99uLx9Qhkaa2HzxhjmG6NkNKTZGHCWiiFMR2r2ZLxtMT5M1vi0MVIjPWkoxGnR4dcvn6V43tHFKlgeXRAkSfkW7uU1nP9ietce+WAZW056ixKQRdckINLig8RBCfW8W/256QCPsTrvPLZjKf/o7+P+x8/zbsv3aY1Ll6E+D6iWAFWgupJCK4nMbj+jg1gZyQerDsc0Mct9G+Fi7VXuoVX1u8UAUMUghIeJSEHxiIQASofHGoEoIUkFTAScDmBq6lkmggyJUiVJNUK6X2IUyD0QUJAPk4DCKEVne/46p2Kzx42nLqwnJaQKTmQFDrnmKYBsN2vPJemmsVshcgzli7lpdfv8fSzT3L00newzlJkObauAThdtSwbTycFFwJCj1IwKVKWqxbbgS4zjJSMrCHNJW3tyDJF01qKMqNrOkZFxmljQUQVfmipoEyUwSlCgTEWoQKwjPNY3+czJwnOBnBZ9oCH94QICDzW9pYSIvQlWqhwffZW004IpHMBTBJgfGgnhAxXgggxEdEmPkmSgfigdAI+EFgC2cwTLKH7vssG5aywDd40aCnQiebNb3+NW++8hUhHXHvsWX724z/HuCiQIjouCLwPcQzWhWiJs34w9NVZnuOsxVpDlqYgJDpJBieE1d5FlosFq+Wc5ekxxWqO62qs6UBo2tZgjA1hE/11a52jqVtwIXpiazxC6ZT9WY03LXUjefHlVzg6/u/4+F/6KE8+8wwPXbnIzqRgtVrRdS1aeHxX88UXXqRuOhBQVSvwnq1xwWiyxTPPvQ/RVhzdeRtTL3Bdi/f9dawEXimkStBJIIV6Z+iaBmNDXyckKGQP0MVnOtH3K/dH+kQgP/Zb6yS29ee9CNpH14P4fySexTEqOhes943nHXriexHcX3f6iY4C3vuBJHDmtMEwfq1H2awTE+K4tk6EiIQxOHOmie+t79d6rZMA4nPwOiFgnfhwPtZgnWwQj8uuAcHry60f23rkwfnj+osiHzyo6tUJCsgzSV0b2iZEfnkHre1QqmZUWBLvcM5SVTnFaEwhx8GtSye9DX/s9aNqOjjTRCG1J7iN6DxFKY0xNrgUaUU5HrNaLqhOTzBdh0xSUiHwOsPoFlTC9MKYS9ce4ejomNmdO+xdvsrs9ATbLNGyw6EYbV3Am4amWiJIOL71Yq80lwgk1jqaVUs5mWB9gkeRl1MWlUEpSTGe4FwgaCETGtMTb3qiphcSY3wgLxkYjadMtnf4mhEkHTSuI1eaykpm1vHxx58gH02RKri8dFUV3Ge8oV7NwLcUuUL4GimDS4AuxqATHAKdpHgMbdvhXPgOYUyHsw6txfCM420gYdG7IGgdnhElCqkT1CSnrlsWq4quazBdg7MGrRWpy8JYoSLhJjikKa1ROhDUfCQfijjSx6ienmCoM6RKEd7jupqT/VuA48JDTyF1gkeQZmOUlNTLOa7zIDQqL/F4pDe0RmD62DlcHz2BxxpLtVwgBbR1hbUhNMgaAwIyrVlslWzZlgvJiNuLAmXgka2Mk6OO2/dOsLZFl5eYVZbl0R1cfUo5ylgulqRjRdeFqBtjLVonOA+JjoRq37d9GIu86HX/PaFGBAYYiWhQeDQNmWjZTjswGUoY8P2zaKa4PK5YzE/R0jNRp2in6KTjWGSBnNpvVggIfle+fw4L/T4iOBIJIA1mb8HBwXtCPAN0LsHhaQ2oZI+qOaYjQ0jfO1VE+kh/hD4Q2oYHuv4n9GmR7hr7t0BYYSBqhGcFB3TW4/AUuebeUYO3nlHuMCTsTgWzRcfxrGNnS/LazRVlLnnykZI3311+3/3XhnSwqU1talOb2tSmNrWpTW1qU5v6gcs5x7e+9a3vei1aj3+/9cEPfpCiKKiqanjty1/+Mq++IE/90QABAABJREFU+irPPffcj5TbwXw+Z7k8+wLfdd0A1K2TDg4PDwfQ7UepjPO0XlE7T9dPhiRCkOaOynfI2nF0csTudBsrBKnWFNMxIT8WkATCgUr6CbSQfd2tGurFinYxZzVbIqUiyRLSfIT3is7WeO+wOFRekAjBtq+wPmOytYc1hnw8pZnPyIQCOkSSIBONtYbDkzkvfOcOr72zpF05NB4tPbuFZWdsGeeOUanZ2tplPBqTJhqpJFImeKmp24r56QHHd+9hO8fW9ogxDfl0h2LnEvrRD1JsTfGXH8J6C51FFBLvoCLnVuPZ297l5GTG3aoj8+ClYrlYknuPV5pPfOBT/Acf/Qj/5//r/wHTW2eeLykgSzN+7Md+nK3drfvec87y0ovfZrlYPfCz37UuKXowNyPLAiheFCV5XpAmCV3bkiSaNFGMx2MODw9A+AFk9r2jg7UuAGjGs1oa0rTl5s03+Uvm41zY+27SwXsRC85bH99nL7rmTnD+vfOfjcufJzFEQKcoigeu93zF2IEkSQbgPIIuERSMQA8wKPQjsWEd5JFSMh6P6bpuyOIWQrBYLAY3hJOTEz7xiU9wcnLC22+/TVmWjEajgTTQdR3L5XIA788rYMuyHLK30zQdXAGUUtR1jRCCpmmGz8Y4Buccx8fH7O3t8cILL/CpT32KRx55hNu3bw9t1LYtWZaRZdmwvcVigZTyvtzuSHyQUjKZTL5rm13XceHChQEYWlfCxvPQdd1AQtjf3ydJku+y7q6NYXtrl2QyDY4o1Yr9W/cQtqMYjRjt7fUAbodraqxMgvXy6RzZtVSnJ+QJSJ2ys7dNMt0hHU+YSol/9Q2efGiL2yc1YyVprCWRgZzkfADyfa8yfNfCb99bkMgRz/MS3/m1hOf+0/8E/elP89bXbg5W+F54nA/q9pC1DY3zNL2jgaVfp/e9wrq/Ln2wgw9tE28OoAceovDODYbLoWR4NygpBaRALiQjAakUbBMAb+sFWsBIwY4SbCWCSSLIFeSJokg13gTb4kQItBAoIRhfGKPS4NJS1Y5vHzb863dX3OrOSBIeGEnBSEtMf0TbaYiv2J0UnBytKMuEo9azfPeY8t4xj+6OMc5iqo7tixorco5mM7rOsHTw8NUL6Pkc2wUWx3HXIp1DpxqpFZlO2NIdIMiScL60FFSNxeOprR1AOOd7YoaSGNPhjRuAFCmg6wx4BivtCCpDtNw2AzEA77C9dbQU9DbLARAR9HncvXJVEEgwzju8d33sRyASKNXHsFjVn+/gcCAQeGcC8M+Zzbzo878zndI2DVW15O6br5I1Fe3sHq8f3AYpme5d5id++q+xvb3VH4ejM11/KVmcNWE7IpAjhO+vo
F4JmwrCtkSCseH+TpUiHY0pipLxdIvlYsFyfor0FuEMLRYl6e30Jfm4JC8n6DQny1KUgMViydHREauqAqG4MprygckOtRHMq5o7d/Z59/YdPvd7v8fLL7/MU08+zt7eHuMsocVycrTP17/2Dd45PKXuOqQQ1HUVCA1tx5NPP8PFnS1uvnqbw3v7NE0dstgDohSeM9KMNP3/svenwZald3kv+HuHNe3h7DNlnpwqKytrrpJKExqQxGzLl8AG2jfwNURwiQDb3e1wRNsN0d0Rph1utTuM3R8goCPctAHTxtfY1zgsjGWHGCRjDAgJSajmUmUNmVk5nDzz2Xuv8R36w7vWyp1JliQKMHL0/kdU5Tl7WHvtNbzvOut5/r8nCcKksxR52Qo7oQNYeEEXXxL2ncc7j+X2PNR10odjw90h/C+aDhZNAh0VwBiD1pqiKPr9uijkd9383bzSjaXdWLhICujmte4zOqG/++wuQqGb2xY/H7hDzO/miG5cXqQILBrfFukGHU2hIz8sPncvSsHdpKFuHlk0Ryxu10ViQleL235x7u/e2/3brfei6fC/ZS1+5uL1htYRTVHgfIhMiSLdCp4CWYcxwtsa4VLiKCKOonZOcMGQC0Q6wknfvo/b28mHY9VJR6wipFaYpqEoSpTWDIZD6rJkfnzUztsapQEhcVLjjEWoiI1Tp4nTEdvXr5LEMSsbJ7h18yZ4R5ZGSKlZ3TiHEJAf3yQdrzLb3QYtifQg0GjqhqZo8Kagmc85lJBkY+I0hygmHaxQG0GcZEilsTYYDL2QKC+ZTucIqfrxsjYGHUdsnTyJUYobxlE2goEG29S889EHOX3mTE8Qch7iOKOq59T5jDTSMBmjpaApSlAOpSWD8RoqGlDUFXVdo3VCFEkaU2CtC9cULphC8B4dReADBU5IhW6NB2kUI4XGIxHOEaeCoQdTN9StSUFIhRKyjVaQ7TkeoVWIWJAiUG0EPewoHD/t791cVRY5R0fHTHNDpmG4ssp4bYuqqtHWIkWYd5SKyYYr5Ed7mOk2Q3kKoWKwNXXV4HQYWwyEOcIanPUYYxlmCeloTH60RyQMjbTktcC6iFdvef6Hd2uGI8v0pSmPPTDmlVtTXrx8i6IsGQ42GSYNSkhyI4hsgfSCvMiJ0kEYI4RAyJam5D0Cg7HB/BCiy4pgUFKyvW61yHZ8886BiDioJsSyQskSP95iczOYWfAeiaMoK+ZmwNVyi01/hIxXKawkYob0nkjAOIFZ5cFbhGtjG/BUxoRYPoI5Q8WaYRbhvaBsWlMCrRGyMcwqgSfCWILRsI1sUFgMraHPhzkmkoGUUNk23qgjlVgbzuFuDOn/1xmJ/B0/l6WhkYYsUczLBuEhiSUn12Ou35xRUjEaQFEIZkXDNIfDec2jD7z1+xBL08GylrWsZS1rWcta1rKWtaxlLeuPXbPZjMuXL9/xmLWW4+PjP9Zyn3rqKT70oQ/x67/+6/1j+/v7/ORP/iQ/8RM/QZqmf6zl/7es4+Pj3mQA4WZJZ8rQWvedMfP5nFu3bnHixIk/k/V8q9V4qL2k8iGXORbBdCDw5E3DIBHsHO4wzoZ4F26seSFAOoQMwpD3UJclVZ4zPTimmOYoLRmvThitrTNa36DM59x45Qq7N2doUZFoiY4kQomejrA6HhFFEcbbVkFzVMU8dCk5R93U7B/PefFazovXDXUNq7Hl9KRmfejZWB8yGKR4ZxgNRgyygKz1SFCaxhiKuiYvdqmrgmo2Z5ClJGsxWTpk7B3GNdz3oW/nvvd8CL9/i6s3XmW6s0OEYzheD4KuUVxpPOO1VZorrzA1HgUI2XYLAaVp2J3t8Knf/jXSNCUvijclYURRwte9771IJe94PM9znnvuacqq/or7scuPjaOQJRtFMZGOetHLWItSmtF4zP7+Qdt146jrOuSeSo0Ugto0vRjqPNS1p6kNV964grWW1dXVP9RVeHesQVd3GwnuJTB0z9/LVHD3MoQIpoqNjY2+c7+qql5s+UpmpjiOGY1Gd+R2dxEBxpjeAJBl2R2ifl3XrK2t3dGZ2a1P9/rOuLC5uYkxhlu3btE0DQ8//DDPP/888/mc8+fPs76+foegsyhyLSLc4zgmiqLevDUYDLDWUpZlL7LMZrNegOmQ2RCiD6SUzGYzzp49y6//+q/zkY98hKtXr/ZZ351hYzgc4r3vu1M7wwGEKIfOVNEJW10Nh0MGgwGz2YzhcNiLaB2qe2VlpTdf1HXdxzFAmHeyLOtFo6ZpkMmQw8MpsfEUh8fke7s4W3P+sUdYvfAw83nBG1ffwBYFDz54DqzhYO+I+viQrZPreGMorCdVkKxsIZREqQjv4Nz5c0Ra8+qNY67PayrnmTWu7yoMB2646R+MB4JP7eRIL3jcPcNz/87w5Pf+ANH43/Dqb7/EvDItHaHNkycIuXMHlfctIri9cSxaw4CnFziFECD9bSNCvxK+Qx70HffdCzpCghIhPiFBMgASCYl0RK2QLRHEAlZixUAKUulJlWc0iBikCU1et9ELIetYCUGSavCeZjpFZ5rX5p6PXct5qQ6IX9WK+ZEQJCpQFLSAUazIYolIE2RZo4SjQXD65ITnX7vF1z12P/PdbUSc4oqKYl6glcbUjtLC+uYqfpoTS0KcTmOpcovyHp3FNAg2RdUilD1OKGxtKeuwDaNYg4RRKtk7dqx2d+9FoAhYa/DOtYLyQqd0aw5wVgSho+vQbgV/T5vIgEdJhXOmNYZ06eldRIYPHbxC4r3BeRd2eiuchP3WiqNtW6v3vhVUwgFnFjpqB8MVnLNYWyOAKI4RYsjG+YfZufoa5x9/D+O1DYaDAaPRkCSKKMsiCO1RhFYqfGfvAzJbCpq66kXopqXZ6CiiMS1FgdZ04xzOe6R3REoiBaRpgjcZ1WQVoSKE38A2FbO8ppjVCCkZTjY5e/8DxEpQFjmjec5kfTMYE6KEdDAiHYxAKmal4eVXXuflS5c4ODrAWMu17R1iLYikwDQNs6Ikb8dc6zy4mqpuiKOYzbVVHnngPJ0Fx1oXoi/680sERHhjUFFKpCOsaWWctvNUtJ3j3nusDwKftW3WuLotnHfifTj97kT8d4L/3YQdY8wfel0cx7243gnk3djfva+7LuiOz27ZnaGgW95i9EL3XLduHY3hXia+jmTQmQA6E0P33s5Qt/h53c+LtINFI8Eioehuw9nd5IZuPRaX2xkt7o6P6MgSi9/j7muLL2cs/Fqo2bwI0V8CtNKMRrq9xrB4HHlZE3lHFCm0kq0JMUbQxna0wnPfHQ2EYz70aKtIEbXnelNWeCAbZDhrmM+OccYHYVy0tAShEUJiBURxymRtlXmec+vmDSar6zTGsHvzekDRS00caQYrE3QUU0x3SAZrlPNDZJowGY/BeYo8Jxus0JQlzlRE6TiQFqZTirxh4/yDbXRThYorsuEKUmlUPMDZ0GkfYtgUzoGK4tasUZKlKVk24Mr+PloKSgNvP3+atz32GHEyQGkd+tW9oyoKBC6I+UnMMFrFewtCIZVACEdTz4iijOFwzPFsRlVXKBWM
j1VVYWz4WwMBjQ/jThSFdZNSoXREFCVESYrSMd6BNAbjarw0xEmMNTFFZRFaoSKNjjQ6itA6QmndGtBuR4KEfdnt2fZ4pv1byENTzsi9J00HHEWrHFYzslkwTsWxDLFEMph5tk5tkozXmR9sc7RzGTU4yUpsaJyiqmpwrh37BEVZEScJ2XAQKBWmQQ1GZNpzYSVhK20YjVJqJ3jk8RHj9ZStE8d84SXJK/vgtSQZnwwmsybHlDVaCkapJo4kyqlwXeo9xjnSJCZOEoQMf9J1Za2jaWrqOsxVTV2TJHEws4tAnVFSECtLJD3DQfgbUHqPaxqkjnsBXjiPqSoa5ZiXNZMIjK/ACiSSsfbkpcXVDbayvRFUWhuIT1K2x3TNno8ZZElPRJB4EuU5LBoMgbglhAumFaeQ3uFsBbId9wlXCUoGOomWIapOqWC+DX8fhotGb11rMhVhfpUO21Q9DcF7RxRpkhjq2uKcQUvBZBwxnwXjb5JaIGJeWlrrCnVlee5L+295/FqaDpa1rGUta1nLWtaylrWsZS1rWX/sKsuSnZ2dOx6rqopnnnmGb/7mb37LN/XG4zE//dM/zd/5O3+Hj3/84/1N2p//+Z/n/vvv50d+5Ed6sf5rvabT6R1CaZ7nHBwcAOEm8XA47F/3pS99iSeeeOJr9mbovar2isbTUg5C16wSoTMlVZqjqsJ42J8dsJKMcR6asoL2BnFZ5OztHnKwX6AjzemzJ9i6/yxREuO8pylyDm7cYOfKDVQy4Kn3P0mUxtTFLGCOowghFUIGsceb0KVpihm1bZiXJYXdoywKXt+e8fR1xVGlGEo4P6m5eD5mffUEwywj0oLj431WhiuMhhN0lOKkQKuIqirZ2bsR8qGtQQrJymSNwWCENzXD8RoRlnJvmxMffi/xygp1lnD1mc9y9fVLnFyZsHH6fhAwkxFTqTEafBE60SsEvskpfRAFXW145sXf/4rbXyAYDcY89OhD3H3Y7Gxv88orl6hr+xXDFaSQRDp0eUoZOmydC7mldUs4UDoI28YZECGfW0pJGid45zHO4upwsw3CPwJBVRmuXrmCNZZ8PieKojtE7jfrMrzbRPDluhEXX3svoaGrJEn40Ic+xKc+9Slu3LhBXdeUZclgMLjjhu69Pk8I0QvpncCzurpKVVUURcFkErqG5/M51lqyLGNtbQ3nHIPBgOl0eoew0wnxHYGgMwIopcjznMlkwtmzZ/mVX/kVnHOcOnXqDmHKWtsLW3fngndGgo50UJYlZVnSNA1ZlvX47o6SkKYpSZL0URDdeq6urvL000/zjd/4jdx///28/vrrZFlGmqY0TcPh4SErKytAMBkURdEjuNfX13uBqTMMdJSIOI6p67qnIyRJgpSypyA45xgOh0yn0zu+V0c86L5HNw/M9nbZOn8WW06Z3nqDKIo5fd95hhsneekLf4BUjtdeusTjb3+C/e1bQYypS06e3iKOJfnUkEWgkgyZpigdMZ9Og6gTa7I05typFS7v5jjnKa0PYqMXXTQ2jfdoJXHe83LlcbfmOO954pXn+cI/q3jqf/4eBpu/wbMf/12mcxM6FU0QD3LnKByBftBHK9zuZJSi7VqVnXwUbhDL9iexcKh6wk3xjnogW8OB9G2MgxCkwECGGJlICmIhUB5SBZmSDLUg08GAkMaaQZbSzAOuOBIioPqlJNICqTzOGkQk2Bmv88vPvs4zhQ1ibtdpKwSDSOIIEQvDWLGaKGoEZ2PB3oFlZTVjMBmzVxjW1zO2Tp6gODggGw+YHx5T5gWTrVPMasvKSkbiDbFyZCpkIpeVI9ISKSReSiJrEKnA1AapJNZYjPUBn+89SjrSVHNYmP5GfTD/hM5973wQoDw9daAxDa4Jx6dUCmuCySGKomDMIhhIhA9UA2ODSUfgWxNLOHeVlCCDaUEIh1wYE4LRq8Oj+2A0QeC9C92WUvXri2jBzM5hqjnWhygIrSVRKyRvrK9w8sTXIVUw6BhrW+NCENAFgrquunbKYFBRuu8c7dYjjDkOJSVxFETmcK76Pk4CoLFhm2mlSLKMzZOnmay3EQPWtGj0iKoOBIrRcIgQHlSEijNGq+uBuqOCYIdUWOcZDiBLHkYpyWuvv96OHQIlb3fKO2d7mlSaZVx77SoA42FGSs32S1/AnznPaGWVlfWTmN2bUFctJwS8EEGoSRLiOGZWFa2pI2S2C9HKb0LgjMPLQKlACJy7U8xe7KjvaC93i/B3kwFCJ7XuiRmLpoKu419K2Yv8i4a8btl3xyp0xgFrbR+x043L3djakRcWqQN3mxIW58du/ujmos5Y1pnf7hb+745IWDSmLb6+2w7d9+i+/90GgsU5ufsud5sU7xXzsLh9use+nJHxXtfifxTD5N2P3/3cvUprhYoliZQ4a6irCtee71qFzvlwvAVBsq6bcIzSCYft5y58vmjJJErqls5kwDmiOEUIT1WVmKZGtPOZ7/afVGEsQ5INByRpxsH+PnVVsnHyJNPDQ2ZHh2GMFCF+YDRZR+mI6fEeWTahyacYYoarp7FNQVNXjDa2sNZTFTdJRhOUSrC+YriyBlJSzeZYZzGNRSUVjTGBSlDO8chAP0BgqgYvBGY+J45TiqqiqismowF+/xDpBWcnK7z9kUdIh6OWzAWNqUOnuRNUdY7CgNSItkNcZwkSSW0soqnIqznCOpI4ZZ7PKYocpTRpFuLFGlPjHYG8JkAqSRynaB0RxylJkhGnGUpFYbxUDY31iLLCWIuQkkGWEemopWnFRFGM0sFMIZUOJjQRxuLbFwh3XSf79kHboJXGx2sgUoSQmKZpPW3hGqVxYUyo6prxcICOY8rpAWb3S1TjCJMZrs6uLRxPIhjjCHOhdRYpBVkS0Qwz9qbHXBGG9z1Q85EPrbD5xIfQ60/yyvM/xqc+N6OSQ6J4hNIRo9GEqD4mExXW1GyMCAY8a5FJTDkvcKamKAR1FOhyTkiaxqKjmPl8hrW3DbBCCObz8PWTJGY4SFlNDhm5A0Q8ZC3L2KtqbOMoy4JsGBAKUTpBa4Wpa4YrDXu7N7h43lCQEKHw1pKicXWgBrjWFxiipRxCBJOosZbGeQ7nJdobtIgoqgYpIFIizKN4nDBkkQAriagxXoIXONsEAoII83xDIB0418277bjRjgOO7uc2nqG9Xpc6whuL8IEl5Zxhc6S5dKMJhINJxNXtilg5tK8RAg6npqdndePGVxiivvz49dbfuqxlLWtZy1rWspa1rGUta1nLWlaornN4sbz3/ORP/iQf/OAHec973vOWBHQhBA888AD//J//c/76X//r/Jt/82+AYGj4B//gH7C+vs5f+2t/7Y4blF+rNZ1O7/jdWsuzzz7L+9//fpRSvWDnnOPTn/403/3d3/1nsJZ/jGoNBxZHJCSxECTCoxWMooxagnWW/fmUSCWUdU19a5v9o4IruwV7x46NQcRTT57l/IXTqEiBkBTTYw5u3GS2vYv3klMX7mfz9EnUIA0dkG4csKbG4Hy4OeOMQeqIJEtQWmM4olYzrr12ixvHnqtHCaUVbCQNFzcbTp1
ff8RqlVBeR7JP2OphJz3mBm715of1zD2qeGOX7ywrm1L9vQ7IHN9nT7wiz3z2beWoXOuQkzG5dPzJHW0PbSgQNtl1fldWTv56SkPBrZ6TawElMMLaT9WPWVRSQcSKt7rFqq9vd0h5lFzi1qCNTMz3BFP5ts2qd/yv7yKS9+Wr6aVJD9/V9JM5MO1mS/f6q6/OKS7v7SDr3zz47pU5+f0N4DiQbXhXrWlSX97FsGdP21S0s4kKR//8BW3fjVSd10W037DsY6fDRRITKdc2ZBz7qipO//7l694vr5C1aMjqX690/70Tk//IYVy/PEMui1AT3XvUy79aCOaL/qqqqgkvq1Tjt0rtZbZ7tp2br0HPdS7dZDOqS9qmlCoSL1akDbdbY22+zzIru2s1nNBYq9tJ8kxerp7d6i5176Vu3e91UdGd6lemNMhahL/T2naceW52l9/+wBuZWyc9sL1V3ZoKHRx1WtDflqCC5RsdCjnsomDQ6cq9M2XqUoIhlqLTrLLlS/W68n9bBGNNQaEbpBW7VTF6hknX9u22ynelyfntAuHdMRNdVQSRUNarN26gJ1Wc/CK8GatNaOZTi19BU26IUbv0+PjX9Lh+q7VU8mVAzK6i9s0s7uyzVYOpV+SmItWIvHssv1fA3pkEZ0VHVV1VBdpkAV9ahf67RFO7ShVagXJ9taO47xm2xxSDrAM80ZbZdnr/PUxnx9lOevXHNaniM/vcFilss1JN3T4fO1vw/3zzO/cLsXLmKZ9sB2Uf69W/agsHPuwbbEg+3yoxj/28xe6Zz7ynI/n9budjMr59whM3tKU0Hg50haTPDpOQsvcsJuVFvSgZn1ampk7recc8OzLK+25f9FPtlE8rmEs9cdXVvaa0kuNunlJimrPSat2fqmWd/xfjO7RdJdmgqkv1z+NZyKbpcfMS5J15hZwTnXnO8BmfZ9/g7nZq0h2l6FZP0s98/m0kUsc5Okl2WXnyfpLxe57sXoZPvt9HE3yQdnJekKM6tkiS4n2+2Srs0uL6ovN7NImpb6/s3lbtQiLOlYlVnMsX2tuL3tclnSVVrc62z/DBNJdy5jmxbknHtM0s9nySz/O7v5XDM70zn3eNuip0S/fyrZsinSu353o971uxsX/Zhk/9yjjr/rZd36rped2CwYfb2Bxh9fSi4TVlPJyjpfV+h8XbHox7zUXr/gMpEVdI4uaZWZXYxr7LpFL4u1o1Ts1fk7X6HzW18tF/bS58w/S98Lr/y5jtrSXdmo7spG7dx27cILY00atM0aXGJe4jbbqW3aueByfbZel+r4MukLeaEtftvG6lhLxzJJOtsu1tmzlMLG2lQKu3Vh/7W6UIs/dnzH1v857/3Xbfrhjttzbu+zdW4vI85PZWvtWLbRtmmjti3pMTi51tJxjN9ki/OMn/MTzziLHVma+w5Jpy241Il748KLSJK+v+3yLc652eZXX4wlvQ9mVpC0mG+Ft0s60nb9p5byPEvhnHtYvnzxE9lNPZI+bWYvmftRHVur28182qsxvD77DBfyAyvVmDbtSQQ75asm5AlwX5i5sHPukKR8AqbnSXpl2933Zfevde1JPZVFPubTbZdfaGbnL2N7lp1z7gFpWo3EpdW2Wlva951+Sa9e6AFmtlHSd86xjnZPtF2+bBHrvVrSYuoVtm8vrzWzxSY0LEYn22+nj/u8pDzBoyTpB5fwfMup/fN7qdmiarW9UlL7BPGLqYiz3JZ6bL9Ip1Zw+1ZJ7dWTfmiRj2v//nK7c258+Zq0JP864/rM7eqU6vcBAAAAAACAdiQd4JlmX9vlF823oJl1yZe/PRmuNbNXzreAmX2v/Ki+3N+ewPO1vw+Xmtm85bsl/aYWEUTPyhy/p+2m15vZggG7TmUjBF8sKZ+4q0vSJ83sZXM/qiNrdbuZz9+3Xd4m6WfmW9jMrpL0vSvaIrVGfLYHXn+97fJcVTHy20uSfmERy6817dOOnLvIx/yHpIeyy4Gk9y0ycWTZZBU7lqK9VtnQcrblJLtR0qNt13/PbMEab++Ur+4i+Qoc75tjufbR768wm7tWafb+/8ECz5v7W029572S3rvIxy1GJ9vvzMedvZjtyTl3WNIH2m76XTNb+iSRJ+5vNVWpoSDpD+db2MxKmv5ZPSHps3MsvpKWcqwKtLzbyYrLkgU+0nbTW8zsgvkeY2Y/JOnKtpv+ejnbtMR+cub+PrOfXPV+HwAAAAAAAOgUSQd4pmkPUr7ezF4120JmNijpk5JO5iizfzCzZ83Rnmsl/U3bTY9I+ucTeK4vSa1JaEqS3mNm4SzPa2b2c5J+YwnrfrekPW3X/9nM5q3kYGYbzOyXl/AcLc65J+QTDx7ObqpI+o+FkjiWaC1vN3O5UdPL2/+Bmb1utgXN7DxJn9DJOya0T4mQj/RsSPraHMu3v/9b5rh9LWsvCf4yM1uwXH5Wmv/nNLWfXivpM2a2YM0vM7vIzN5jZr/UUWun/ICZ/ZOZLTgS2szeKql9YtpTYdqLWTnnnKTfbrvpfEkfmy1BIOsjf13Sj7bd/CHn3CNzrL59pPOApHfNtlAWaPxLSd++yDaPaXoCz/eZ2UfMbGChx5rZs8zsg2b2/XMs0r79Xm5mL11Mm2Y8br2kH1nk435LUxVzNkn6spktWB/PzDab2a+Y2YcW+Txzcs7t1vTkhx82s9+cLcBsZt3yx+ML227+Hefcakze1t4nPtvMZq02lCXH/aOk605Go5bZH2qq2kFRPtFw1kmxzezbJf1V200PSzrh7WOGfzKz3zCbf5LH7D1v71f2aup7i6Q10+8DAAAAAAAAHYkWXgR4WvlrSb8iP9oskPQJM/sHSf8p6aCkdfIneX9UvkzyqKRPafHTH3TqI9lz3JS151OSDkvaLOlV8mXv86SAWNKbnXO12Va0GM65PWb2MUlvyG76fkkXmtlfy5dIL8gHUH5IUp4I8ZeSfnIR6x42szfIBx0r8vMuf9jMflbSv8iXyh+VL1t+oXzCwMskVSX9UYev5ykze7F8ef4L5RMpPm5mb3DOfaKTdc6wVrebOTnnnJn9mPzI6i75z/Rfzezf5T+HJ+Xbfb2kH5P/rL4q6UxJ21e4eTfKT6vQ7hvOuck5lv+S/Kjj9qSIRKtTvrwT/yofXC7LfxZ3mtmd8qOS2wOTb2mfLsI59+ksoJ2PoH6JpMfM7OPy7+EeSZOS+uQrkVyRLZOP/J1/QtiFRfLVL77XzJ6Q9BlJ35Lfdkblt5nzJL1WUnsg+ib5MvmnLOfcB83suyTlk4C9StJ9ZvY++X2qLv/a36Tp5el3S/pf86x3l5n9q6Tvzm56c1ZC/W/lK7Z0yVe0+VFJZ8tvI/dpEckHzrm/MrMrJf1EdtP3SXqVmf2zpK/IBzkb8n3vDvnR398u6Yxs+fkqjeyXtFWSSfqcmd0rv/0125b7DefcvW3tecjMvinpmuymvzWzX5MPtDbaHvdnzrkb2x73lJm9XtJ/y/flOyXdbGY3Svov+WPUiKRuSRslXSrpBZKeL99HLFe/8LPyx6c8oP3b8u/n++VHoxfkP6u3ZG3M
/btz7kQqEZ2Ij0n6fUmnZ9f/PKv881FJT8lXwXi2/PZ1uvzn90FJbz75Te1Mtl39kqT/l910tqS7zezv5L8DHJOv7vMa+e84eaJIQ9IPnsh3pzlszp7nt8zs6/LH0Xvkv8PVJG2QdLV8X9FeteN3siSDadZAvw8AAAAAAAB0hKQDPKM45w6Z2ZvkRyVG8gGKN2V/M03IB22ecxKa9hPyAayr5UeDzjUiNJE/af6VZXjOt8oHnfJy2VdK+os5lv07+dGFCyYdSJJz7pYsCeA/NDUy/dnZ31yqi1n3PM+538yukw86XCI/AvJjZvb9zrl/OcF1r9XtZl7OuQeywOl/ygczJR+Iec0siz8mnyTx9ZPQtNmCm3NWLcgSWe6Q3z9ydzjnhpe7YSvBOXckG3H8Pk1tP1dp+nQpkg9yznzsO83soKQ/l09aKMl/Tic7oeUMTQWz53OXpNfPFkw7Bf2g/Ijj78mu75D0O/Ms/6Ckly9iu/xpSZdLOie7/sLsb6bDkl6teZIYZvFT8kkhvy2/nfXIB5Q7Dio755pmdoOkf9NUP3JJ9tfuXbM8/MflE1AGs+vnaOp15/59luf8clbh5+OaSoJ6SfZ3UjjnRrPj2GckXZzdvNBx7OPySXyrwjlXz5L+vqCF+/ym/PaS6BRKOpAk59x7sikt/lg+qaBbfj+Za18Zk/Ra59ytK9isQD758NpFLPtHzrm/muvONdTvAwAAAAAAAIvG9Ap4xnHOfVx+VO69cyySyM/FfJVz7tMnqU1j8iM13y0ftJ7NLZKe45w7kWkV2p/zqKTnypcanqsM9GOSbnDOLTkg4Zy7Tb7qwO9pqlT2rIvKj57+taU+xyzPeUjSt0m6M7upIOkjC03vsMh1r7ntZjGyEcRXSvqcpko2t2tI+gdJVzvn9p6kNu2VtGvGzQtNlfCFJS6/pjjn3i+fNPFeSXdIGpavWrKYx/69fIn/98qP8J7PuHyVjTfJB+ROxI2S3imfSLBQEsEe+X34Oc65/Sf4vGuCc64uX+nh++RH18/lqKS3y+9De+ZZLl/vAfnA5Ec1+z6ZyAfir3DO3T7L/fOt2znnfk9+9P+H5EdEz+eYfOWT75b04XnW+9lsnX8kfyw6qulVDuZ63J3yAfu3y48AP6zpVQ7me2x+DPkVTZ+yZzaxpJuzZZct6O+ce0q+2s9vyLd9LrvkKxK9PttuVo1z7hvyx/ab5lnsZkkvXMWKDCfMOfcn8sl9X9Dc/VNNfpqMi5xzM48hy+Wd8lNVHFhgOSdfceR659yvLLTSVez3AQAAAAAAgI6Yn7oYeObJ5ma+Sr7086D8SLj9kr6WBYVWq13d8qM5d8iXQj4o6Sbn3EMr+Jxb5ctI5yWZD0h6wDn3zWVafz6y+xL5ctgF+fLsj0m63Tl3cDme52RYq9vNYpjZGZJeJF96uipfbvtLzrmhVW0YFs3MQvnt7yL57a8in6h0QH6k/X3OuQWDwR08b698Ge+z5PfhinxA+6B8UsK97mn+hcLMzpMf5b5JvpLKYUn3S7ql08oOWd/7bfJl0hP5ffKry5W4YWZF+cDsOfJl3gvyAcq98tvLA6dKVQozO1e+390gP0VEVT75YZeke7LkvZV8/kA+AeFi+X0glnRI0m3OuQdX8rk7ZWYXyk87sUn+/dov6Vbn3OOr2rBlZmYbNXVs65U0JD/VyVfmmbZnJdqxU3772CFpQL4Kw2jWlm92+h1hNfr99MC5T+v+HCvv5duuWO0m4FT37EtXuwU41d16z2q3AMAzXLRl82o3Aae4+MApc7ocwNPU59KP2cJLTUfSAQAAAABAEkkHOHEkHeCEkXSAE0XSAYBVRtIBThRJBwBWWydJB0yvAAAAAAAAAAAAAAAAOkLSAQAAAAAAAAAAAAAA6AhJBwAAAAAAAAAAAAAAoCMkHQAAAAAAAAAAAAAAgI6QdAAAAAAAAAAAAAAAADpC0gEAAAAAAAAAAAAAAOgISQcAAAAAAAAAAAAAAKAjJB0AAAAAAAAAAAAAAICOkHQAAAAAAAAAAAAAAAA6QtIBAAAAAAAAAAAAAADoCEkHAAAAAAAAAAAAAACgIyQdAAAAAAAAAAAAAACAjpB0AAAAAAAAAAAAAAAAOkLSAQAAAAAAAAAAAAAA6AhJBwAAAAAAAAAAAAAAoCMkHQAAAAAAAAAAAAAAgI6QdAAAAAAAAAAAAAAAADpC0gEAAAAAAAAAAAAAAOgISQcAAAAAAAAAAAAAAKAjJB0AAAAAAAAAAAAAAICOkHQAAAAAAAAAAAAAAAA6QtIBAAAAAAAAAAAAAADoCEkHAAAAAAAAAAAAAACgIyQdAAAAAAAAAAAAAACAjpB0AAAAAAAAAAAAAAAAOkLSAQAAAAAAAAAAAAAA6AhJBwAAAAAAAAAAAAAAoCMkHQAAAAAAAAAAAAAAgI6QdAAAAAAAAAAAAAAAADpC0gEAAAAAAAAAAAAAAOgISQcAAAAAAAAAAAAAAKAjJB0AAAAAAAAAAAAAAICOkHQAAAAAAAAAAAAAAAA6QtIBAAAAAAAAAAAAAADoCEkHAAAAAAAAAAAAAACgIyQdAAAAAAAAAAAAAACAjpB0AAAAAAAAAAAAAAAAOkLSAQAAAAAAAAAAAAAA6AhJBwAAAAAAAAAAAAAAoCMkHQAAAAAAAAAAAAAAgI6QdAAAAAAAAAAAAAAAADpC0gEAAAAAAAAAAAAAAOgISQcAAAAAAAAAAAAAAKAjJB0AAAAAAAAAAAAAAICOkHQAAAAAAAAAAAAAAAA6QtIBAAAAAAAAAAAAAADoCEkHAAAAAAAAAAAAAACgIyQdAAAAAAAAAAAAAACAjpB0AAAAAAAAAAAAAAAAOkLSAQAAAAAAAAAAAAAA6AhJBwAAAAAAAAAAAAAAoCMkHQAAAAAAAAAAAAAAgI6QdAAAAAAAAAAAAAAAADpC0gEAAAAAAAAAAAAAAOgISQcAAAAAAAAAAAAAAKAjJB0AAAAAAAAAAAAAAICORKvdAAAAAADA2vDybVesdhNwinv1/UdXuwk4xX3yWY+tdhNwiktXuwEAnvHiAwdXuwkAAJx0VDoAAAAAAAAAAAAAAAAdIekAAAAAAAAAAAAAAAB0hKQDAAAAAAAAAAAAAADQEZIOAAAAAAAAAAAAAABAR0g6AAAAAAAAAAAAAAAAHSHpAAAAAAAAAAAAAAAAdISkAwAAAAAAAAAAAAAA0BGSDgAAAAAAAAAAAAAAQEdIOgAAAAAAAAAAAAAAAB0h6QAAAAAAAAAAAAAAAHSEpAMAAAAAAAAAAAAAANARkg4AAAAAAAAAAAAAAEBHSDoAAAAAAAAAAAAAAAAdIekAAAAAAAAAAAAAAAB0hKQDAAAAAAAAAAAAAADQEZIOAAAAAAAAAAAAAABAR0g6AAAAAAAAAAAAAAAAHSHpAAAAAAAAAAAAAAAAdISkAwAAAAAAAAAAAAAA0BGSDgAAAAAAAAAAAAAAQEdIOgAAAAAAAAAAAAAAAB0h6QAAAAAAAAAAAAAAAHSEpAMAAAAAAAAAAAAAANARkg4AAAAAAAAAAAA
AAEBHSDoAAAAAAAAAAAAAAAAdIekAAAAAAAAAAAAAAAB0hKQDAAAAAAAAAAAAAADQEZIOAAAAAAAAAAAAAABAR0g6AAAAAAAAAAAAAAAAHSHpAAAAAAAAAAAAAAAAdISkAwAAAAAAAAAAAAAA0BGSDgAAAAAAAAAAAAAAQEdIOgAAAAAAAAAAAAAAAB0h6QAAAAAAAAAAAAAAAHSEpAMAAAAAAAAAAAAAANARkg4AAAAAAAAAAAAAAEBHSDoAAAAAAAAAAAAAAAAdIekAAAAAAAAAAAAAAAB0hKQDAAAAAAAAAAAAAADQEZIOAAAAAAAAAAAAAABAR0g6AAAAAAAAAAAAAAAAHSHpAAAAAAAAAAAAAAAAdISkAwAAAAAAAAAAAAAA0BGSDgAAAAAAAAAAAAAAQEdIOgAAAAAAAAAAAAAAAB0h6QAAAAAAAAAAAAAAAHSEpAMAAAAAAAAAAAAAANARkg4AAAAAAAAAAAAAAEBHSDoAAAAAAAAAAAAAAAAdIekAAAAAAAAAAAAAAAB0hKQDAAAAAAAAAAAAAADQEZIOAAAAAAAAAAAAAABAR0g6AAAAAAAAAAAAAAAAHSHpAAAAAAAAAAAAAAAAdISkAwAAAAAAAAAAAAAA0BGSDgAAAAAAAAAAAAAAQEdIOgAAAAAAAAAAAAAAAB0h6QAAAAAAAAAAAAAAAHSEpAMAAAAAAAAAAAAAANARkg6w6szsBjNz2d/u1W7PyWBmrzKzfzazR81sou31OzN7zWq3D8Azg5m9v63vef+ptv6VZmbXtffPq90ePP2Y2c4Z3wF2rnabnm7MrN/MfsnMvmRmh8ys0fZ+D7ct94z7PgoAAAAAAAAsl2i1GwA8k5hZIOkfJb1xtdsCAMBczKxf0n5Jlbab3+Kce98qNQlYMjM7V9IXJJ2+2m0BAAAAAAAAns5IOjhFmdkNkv4+u/qEc27n6rXm5HiavOa3anrCwYikeyRNtN128KS2CMvGzN4h6e3Z1S87565bvdZguZjZlyS9OLv6W865d6xea1ZHVqXgTdnVDzjnbli91iydmV0n6Yv5deecrVpjTh1v1PSEA0n6UUkkHeBU8mFNTzh4VNJuSXF2ffxkNwgnT93VtFsP6oj2q66qIhXUp/XaoXO03jZ3vN7YNbVbD+mQ9qqmSYUK1aN+bdfZ2mzbZ31M6lIN6aCO6IBGNKRJjSlVooJK6tM6bdNObbLTOm4TVsbo4bo+9749uvdLRzRysKFKb6gdl/bpuh/ervOft37J6xsbaujuzx3WQzcf05P3j2nkYENBKK3bWtZ5z12n6354uzae0TXrY//x1x7Qrf9+YFHP85zXbtEP/P6FS24fll89rerx5j06HD+puptUZEX1BRt0RuEiDUZbl7y+1CUaSg5oND2ikeSoRtMjqruqJOmq8ku1IZq/HxlLhjScHtZockQj6VFNpMNyctoS7dRl5RfP+1isvqPuoJ7UIxrRkBI1VVJFG7RVO3WBSlY+oXWPuiE9oV06piOK1VBBJQ1qs3bqAnVZz7yP7eS4iNXBNoROrKXv1O2cc9qn3TqoJzWuETXVVFEldalH67RJZ+g8hRZ23D4sj7W2/XzN/Zdqmpx33efqUp1h53fcNiyvtbYNSdKkG9MePaIhHVJNk3JyKqmsfq3Xdp2tdbax43Y9HZB0AJxcb2m7/GlJr3PO1VarMQAAzOHNs9z2XDO70Dn3wElvDbBEZnaVpGvabnqTc+6Dq9UenFxjbljf0lfUVEOSFCpSQ3Ud0X4d0X6d4y7RTrtgyeutuUndri+rmuULh4oUq6ljOuz/3Fm6wK467nEP6g7t0+Ot6yZToFAN1Vpt2uRO0yV6jgJjBsS1YO9D43rPDXdqYrgpSSr3hBo/1tR9Xzqq+798VK/6ubP07T9+xpLW+ZsvvklpPDVbU6krVNxMdfCxSR18bFLf+Nf9+v7fu0BXv/L4k2eV3ki9G4pzrjtpppoc8flU2y/qXVK7sDLGkiF9s/pZNVWXJEUqqOHqOpI8pSPJUzq3eJXOLF66pHWOpyP6Vu3zHbfp3vrXNJYe6/jxWD2Puwf0qO5rXQ8VqaoJPalHdEBP6mr3IvVYf0fr3ud26wHdLiffP0UqqK5qK5h3uXuB1tumWR/b6XERJx/bEDqx1r5T5+quqjv1dY1pWJL/bh0qUl1V1VXVMR3WNp2hULMnc+LkWKvbj+T7qWCOmedDQqZrxlrchg65vbpXtyhVKkkyBQpkqmlSNU3qoJ7Sme4CnW2XdPiqT33sQVh1zrn3S3r/KjdjxZlZRVJ7b/PHJBwAWE1ZxYIbVrkZWGPM7FJNBWsbkr4m6SXZ9TdL+sXVaNfTkXNutyQqb6yMZ7dd3kPCwTNH4hLdpZvUVEO9GtDFepZ6rF+xa+ox3a89eliP6F71/v/s3XeYJFd1sPH3zMzubM5BWStplSOKSEJCiJwtgzEmmBw/ggGDDCbbBhtMMA7kZDLYmJwREkFCEeUcVnHzavNOvt8fVT1T09M9093TM9Mjvb/n6Wd7qqtv3a6+XVVb99xz0yKWxl41l5tS4lr+yB52MYs5HMOpLIpl9Kd+7uV2buc67uNO5qdF7BsHD38vA3Qyi304iBXsyzwWEhF0pz3cxc3cxx1s4H7u4HoO5bhm7xLVqaern8++9jp2be1lvyPn8cJ/OYq9D53Lnp19/Py/1nDBF+/lRx+7k/2Oms+RZ9ae8WCgL3HIyQs5/Vn7cMSZi1mwvJOB/sSaa7bxnX+8jftv2slX/u4m9lo9l30PHz4q9FnvOJRnvePQqmX/5kv38n//cjvtM4KTKwQtaHL1pz7+1HUBvXQzv20Jx3Y+innti+lLPdzRcw13997IbT1XMb9tyZjZCcp1MJMF7UtZ2LaUBe3LuKbrwprfG7Qxv20JC9qWsrB9Gev77mZz/wN1fjpNtk1p7WBn8QEcysEcRUfMYGfaxvVczk62cg0Xc3p6Am11jurdkbYOdhbvxQEcxvHMjE72pF3cxJVsYQPXcglnpCcxMzqHvXc850VNLtuQGtGK19SQjU6+kovYzU7msoBDOZYlrKQt2uhP/exiG+u5nzbMcjCVWrX9lBzH6VWDodQaWrEN9aRubuByBhhgPos4gkewgCVEBLvTTm7nOjZwP3dxM0vSyodtxgOHUUiTZwnDOxbunaqKSJI0imKWgx8CHy/8/cKImDG51ZEasrTw3Guuh5H7uTNPj9jB8Zw5OGqvI2ZwWBzPcvYB4Haur6vcjTzAdrYAcDxnsCiWAdAe7ayKw9mf1QDcwY0MpIFh792PQziTJ3NIHM38WERE9l+CzpjNEfEI9iYbMX8vd9Cf+hv85GqWP3zrAbY80EXnnHZe+cnj2PvQuQDMntfBn71tNcc9dhkpwQ8/ekdd5b7hvx/BG79yIqf+2V4sWJ51vLS1BwefuIjXfu545i+dwUBf4sIv13/Iuuz72dQLRz96KXMXe5qeavf13kpX2kU7HTxi1rnMa18MQEfM5PDOU1jRns38c1vPVXWVO79tMY+Z+1xOnv0EDu08iZUd9WXbOG32Uz
h9ztM5etYZ7DfjMDqjfCYttaLS+Wo5+3BYHE9Hfik+LxZyAmcMjli/r5BRp1Z3ciOJxAIWczSnDHYKz465HMcZdDKbPnpZw80j3jue86Iml21IjWjFa+rS9rKAg/mcwmNYFnsPZgprj3YWxBIOjWNHBLlocrVq+9H00YptaBNr6c9n6zyeM1gYSwf/bz8n5nEMpzGbLHh8A/c38rEfEgw6kCZP+d2fvoprSZI0RSJiJvCCwqIvk00HtCH/ewXwtMmul9SA4nWX11wPI+u4B4C92J9ZFTrUDuQwAHawlV1pR93lLmEl82NRhXKzeT976GLL4CEzszCWjDpycB9WATBAP7vYXnOdNDGu/NF6AE562koWrRx5w/rclx0AwH037mT9XaPPCVu0+pRFVV+bv2QmR52dxUrde0Pt7RLg/pt3cv/NOwE47by963qvJsbavjsB2LvjYGa1zR3x+qqZWQLEHQNb2DWwreZyI2LwxmYjwulbpp2daRs7ydpI6TxTNCvmsBdZEEvpPFWr3tTDJtYC2ej38rbVER3sx8F52feSUhr2+njOi5o8tiE1qhWvqXtSN/fnwTGHctxgAI1aTyu2H00vrdiGesiSls9gJrNi5PQtbdHGPLLgiH4evoMJmvY/joiYGRFPiIgPRsQvI+LuiNgVET0RsT4iLo+Ij0fEKc3aZtn2z4mIVHoUlu8bEe/Mt78+IvZExF0R8fWIeGKD2zozIj4REddGxKaI6I6ItRFxSUS8LyJW11jOiwt1XlNYvjov5/K83L58nVURcWH++b5YKOrA4mcve7x3vNusUO+TI+L8iPheRNwSEdsiojcitkTEjRHxxYg4L2r8H221OhVeH9dnHo+IWBARr4uIn+RtendEbI+I2/I29NyI6ncQi+0SRoQM31Wh7i8eZ33bIuKsiHhPRPw4Iu6IiB3597MxIq6JiE9FxGPrKPNLhfp9qbD8uPx3cEPeBnbl++VzEfGI8XyOMepzVES8MSK+HRHXR8TW/PNtzbf/jYj466hzJG5EHBQR783b27qI6CqUe1NE/DAi3h0RJ1d475r8O35PYfGjR2mjLy57/3sLr11YWH5CRPxrRFwdERsiYiAKx7cK9ZjoY9NeEfGOiLgsr09XRNwXEd+NiPNqKTsvZyLa6Zp6f0fV2nb+2qrCb/fRhZfeM8r3uqrW+pZt67ZCGS8ZY91ryrb5mFHWjXx/ltZ9ZoV1qu6D/PXSPnhRYfGLRtkH59T4mSfl+FH6fMBvKn2uCo8v1Vjuooh4fUT8Pv+NlX5rP42Il8Qo54VRytw3It4aEb+Kkeebr0R2Xp2MNPzPZGiE+AbgpymlPuBrhXVeWmthUf36bGVE/F1EXBrZ9VlPfjz5elS5VoyIYyPikxFxc75/dkV2fP5IRB251IbKmx0RL42I70TE7Xlb3BMR9+THptdGVPhfxMhyVkWFY0FELI7sGuKC/DvtirJjVLX31rDNzoh4Yd42bo6IzTF0zromIr4cES+IiHljlNPsa7pq57ODI+IDkZ3PtuT7eU1EfC1GOY7VKwrHNGo4J49zW03dd2VlL87LviSy43hXZP9/+VFk154z8vVGvYZ+OOpLvWwnm698KZUPCwtZSkcek1LPjagH2ZiXWzl1/ayYzVwW5OvWd4NrBjMHn5fmQ9bU6NrVN9jpX23qhFXHL2D2/GymylsvebBp2567KGuXA/31tYFLv5d1+MxbMoOjzqp9ugdNjL7Uy/aBzQAs7din4joL25YPHoc296+dtLpp+imdezqYwUIq/76X5Oel7WyhL9UeZ7mVTYPnnCVVzm2lc2kPXexi+A39iTwvqnlsQ2pEq15Tr+c+EgPMYGbVemnqtWr70fTRqm1oFlkwcS89dKWRwecDaWAw0G8Bi2qu00NNRzMKiYinAf8NLK6yyor8cTLwxoj4P+AlKaXaQ7obq9ezgC9A3kqGrMoffxUR3wVenNLY4TARsQL4HPD0Ci/vlT8eCbw9Iv4dOD+/UV9Pnd8MfBAKd54mWK3bjIj9gYuAg6qssjh/HEk2R/gNEfEXKaWbmlfbyRMRLwQ+Ciyr8PJ8YDXwV8C7IuKlKaVLJ7N+5SLrDP8hVL3qWpY/jgNeFRG/A56bUqprEsfIOtPeB7ydkYFLq/PHSyPifSml99VT9hjb7QSuBI6ussrC/LEaeC7wDxHxvJTSH2oo+13AO6n8GyiVewTZ6N73RcRTU0o/qf9T1CYiOoB/Ips3fczOikk6Nj0H+Azk4XpD9gXOA86LiB8Df5FS2jNKOZPSTqeZ3wClgJBzGR5gNSgilgHHli0+l7IO9YLjGTp+9ZMdv6fUVB0/mimywIqvkrX9or2AJ+WPV+fHiU01lNdBtk/eBFTKcVs637wAuDIi/iqldFvDH2BsxakVvlY4Vnw5ryPAkyNi75RSQ3fJI+LpwJdgxF2vfcnOq8+NiFenlD6Trx9kx8TzGdlujsgfL4uIp6WUfl9jHZ4PfAio1Buwf/54CvD3EfHKlNKPaym3UP7jyfZZ04ec5tcnH2RkG4TsGH1c/vhrYGf+Xe0sK2PSruki4vXAh4Hy4cIH5o/nRcRngdek1Pp55Sd630XEU8n+/1I+weSq/PFU4JKI+Mt66/5wULyZPXfEfwEzEcGcNI/tPFhzVoGe1EUvPQDMq1Ju6bVdbK87W0HpxkcQzGV+Xe9Vc62/YzelgZh7HTpyhDpAW1uw4qDZ3H3tDtbdsatp27798q0A7H3oqPFiw/T3DQxlZnjqStpnOJJ9qu0a2Dr4fF7boorrRARz2hayfWDTsPWlcqXzyVzmV81yUTwv7WY7C6p0LI8sOztnzmRW1TTkxXPpLrYPbmuiz4tqHtuQGtGq19Tb2Jy/vpDEAHelW1jHvXSRTWm0gMXsxyEsj8pBf5ocrdp+im7lWrrTbvroZQYzmc9i9uYAVrJ/1WOlJk+rtqHl7M1MZtFDF9dwMUekR7CAJUQEe9IubuM69rCTuSwYzGb4cNSUoAOyG2DFgIPtwO3ANqCd7Ibraobmsz8PODgiTh+tc2o8Isti8J18mwm4kWzE3l5kNwFL/hxYHhFPHKOj7ADg1wx1DEHWiXMDsIXsxu+h+fIZwJuBIyPivJRSd411fjPwkULZ1+dlryC7oQ5wGdCVb++YfFkX1TuTbm/CNksWMvwGa3de/hagl6xz6wiGOm6PBv4YEaemlG4ZrR5jaOpnrkVEvIOsg6NoA3Ar2fd7FAzeETwKuCAi/jyl9POy92wBSstmA2cXXvstUN7mxjPZS6lzuWQ3cBuwFRgAVgKHk/0mAc4CLo2IE1NKG+vYzn8Ar86f7yT7DewhaxulSSUDeG9ErC11GjXBDIYHHPQBdwCbyNrDYrL2VxqVegDwm4h4fEqpamdrRLwTeH/Z4nuBu/Ny55F9rmLHUfndvIvI9u9q4JB82YNkbbeSsb7njwKvz593k+3j7XkdhuXDm6Rj03OBb+R/9uVlbyY7ThzN0LH9qcDngeeNUtxktdPx2sPQb/dUhs5xd1D9GNPo+ewC4BX583NHWe8xDO3rknOBd1VZv
1jWn1JKWxuoW2kfHMtQ5+wDwHVV1t8yRnmTffy4juwzLAGKo+fLj9XF9auKiLPy984ku7a4CVgPLCLr5C2121OB70XE2SlVn0QuslHo/ws8oeyl28j28wyy41rprs9JwMUR8diU0rWj1bUReWfq4wuLvlR6klK6JiKuBk4g+5wvAv65gW08Dvi/vIw+sn2+layTv3QcC+BTeRv4IfDvwP/LX9tBdk3XRbZvSqHJC4EfR8SRYwUpRcQHyAJfitaSZSTqJbuuLbXHfYDv58GF/13jx3wkWcBB6XroduA+siDYkTlN6xARHwP+pmzxHuBmsvPOPLL9WGoz86h8vT8p13QR8XdkARKlbVxPdj7bj6FzE2THwE3AO2otu4rSbx5qPyfXa8L2XUQ8A/gfhk8NUWrz3WSfaR/gdOBXZL8NFXTn6Q4BOplVdb1OZgMPDlu/9nKrz4E+M99mreUC9KU+1pA1jRXsa5rYKbZt49Dl8cIV1WPyFy7vBHawfWNPU7Z77a83cs/12c210/689pF7N/5uCzs292bvO88Rf62gu3BbqXOUhEmzYjbbge6BCbklpoeI0vlk5ijnnuJ5qZ7zT3f+39fRzpft0U5HmkEfvcPKnsjzoprLNqRGtOo19W6yWPp2OriCi9jOFoKgnQ566WEz69nMevZPqzk8TqipTmq+Vm0/RTvZShvttNFOD91sZmwwcqwAAQAASURBVB2bWcd93Mnx6QxmxKSNB1YFrdqG2qODE9KZXMPF7GArl/MbgjbaUtBPPx3MYD8OYTXHjDq94kNds4IOAP5EdoP1xymlER0ykaW9fQPw1ny7x5N16r65iXUo+grZTevvA29MKd1dqMshZDfpnpwvOgv4AEMj+Mrr3k7W4Vbs1Psc8M6U0vrCekcB/8VQKu4nMzRaeSwrgH8h63T7Z+BfU0qDuRojYh/gwZTS2/K/X8zQaNj1KaUn1bCNhrZZ9p4HyEZf/RC4qny0dGRpiJ9HdoN5GdkN9q+TdZQ0pMmfeUwRUfreSu4j6+z4UanzKCJmAS8n239z8sc3I+LYlNJ9hbpfSzbqlchSJhenWHhRSmlNk6t/G9k++hFwQ3lnV0Qszuv9HmAu2U3/T5MF39TiqWTf62bgLcA3UkqDd9oiS4f/dYZG530oIr6WUmrWEKAtZFlVvg9cXNx2vv0ZwJ+RjV5dRXbT/usRsbpSUFGeIaDYYfsT4C0ppZurrPtkslHAw3KeppRelK/zXobSOV/bYBs9kewYsgf4e+Azxf0XhSkSJunYtIysTfWT/a4/Uuy8zuvzVeC0fNFfRcR/jpFhYqLb6bjl+6/0272QoX331ZTSe5u8uQsKz/eJiCMqtUGGBxHsIQtkOjUi5pWPYq6w/gUVXh9TqQ1HNu1AaYqFX6aUXtxAcZN+/EgpfQT4SJ6d4DeF5Y2eP75L1pH4aeB9xZH++XXO58g+J8CZZKP2v1ZeSMFnGQo46CcLOPp4sdM8stTszwD+k6yzcRnwnTwQp3nDKzMvZiio6uoKgQ1fJgs6gGyKhbqDDoBv5tv4Z+Bfyo4nZ5IFYawku4b7YB6Y8f/IAhPeTPYb7M3XD+CVZPumneya4x8Ynq1hmIh4NcMDDn4AvCeldHXZeieSBcmcnpf96Yj4U0pp1MCU3GfI2sn3gLcWr4sjYi40Now5Iv6W4QEHd5N10n83pdRVtu5RwF8CrxmlyIm+pjuWLNhyT17Pz6Q0lH8u38ffZCj44K0R8eniNXu9Sr/5vPz3Mv5zcjVN33cRsTfZb6zU47wTeBvwhWKQYH6s/CRwGPDuJn2eh4wBhr6KNqr/J789f62fvqrrFPXXXG5HXeUC3MxVdLOHdjpYPSKhkSZbz56hy9IZndWzBsyYnbWD7t11JQ+raOv6br71nizw5Jhzl3HUWUvHeMeQy763DoB9Dp/LfkeaJaMV9KfajhdtUf/xQg8/pfbRPlpbKrxWT3sayOcaHq2dlrbdR++wsifyvKjmsg2pEa16Td1HFmi5iexWzEEcyYEcRkfMoCd1cRvXs5Y13MvtLEiL2TsORJOvVdsPwHL2YTHLWcSywQwtXWk393I7d3MrW9nEdfyRE4eNG9Vka+U2tCAWc1I6m+u4jB08SBo8G8IAA/TRSx+9g1M/PBw1K/fel1JKJ6aU/q1SwAFASmldSukdwAsLi18ZEYuaVIdyy4FvAeeV37xMKd1Blob8R4XFb8hv0FbyMuCMwt//nFJ6RbFTLy/3RrLOg18XFr85Imq5ezSbLBjjZSmlvy92/udlPzABWSHq3eZtwKqU0rtSSpdVSs+eUtqdUvocWWdLKf/IiXma4ZaXp7n+VGHROuCslNIPih2jKaWulNJ/kHVwl44ri4CPTVJVK7kAODyl9MGU0nWVRtemlB5MKX2YrDO1VO8/i4jDatzGMrKOnzNTSl8u7/RPKf0aeFZh0UKa11G8G9g/pfSmlNKF5dvOt9+bUvoOWQf4PfnifcjSklfyBIZGIt5Fdryo1NlLSmlD/pnPBn42ng8yhvlkgUBPTyl9rLxTsewYOxnHprnALOD5+W9/a4X6PBkoZiEYbb73yWin00pKaQPZiP+SatkOSsu7yDq3ITuGj7gSzgNSziosaijooMmm8vjRLMvIpid5dSqbWiCltI6svsUU6lV/C3lq9Ofmf/YCz0gpvS2VjdJPKQ2klL5HNnq+9Ns+DHjteD5IhfoE8JLCoi9VWO1reV0BDs0zP9RrKVka/bdXOJ78AXh+YdHReT32AOemlL5YCjjI108ppU+TBZqV/GXe4TtCRBzI8PP0P6WUnlkecJCXfRVwDnBhvmgWtQdZzM/r/efl18UppV15W6lLfvz7YGHRVcDJKaWvlwcc5Nu5MaX0HrKMDZWmEJuMa7olZNksnpBS+ngx4CAv/yqyKSxKHeodZFNCtLqJ2nf/CIOT/vWTXQd8MpVlJcqPlWeTBZ0sb/RDRMSV1R6Nlqn6rUk3sy6/ZD2Kk5gdldP566Gre1cfn3vddezY3MuSfWbxvH+sPSHOrq29XP+bbCanU//MLAeSJOmhbmgM2F4cwCFx9GCWsJkxi6PjZBbkiUrXUPH2rh7mDo8TWBH7DpsSZlbM4dA4jsN5BABb2MDm+m/b6GHivnQnl/ALeujiGE7jLJ7Ko3kGJ/Fo5rOQddzD5VzAnqaPE5s+mhJ0kCqPsKy27jeBi/M/5wJPbEYdKngQeG1KKVV6MWVzxr4CKNW9jaGUz+XeUHh+HdVTWZN3oryErIMUspF6b6i2fpmfpZS+VOO6zVLzNlNK3cWb/WOseyvZCMGSVus4quY8srT8JW9Mo2QjSCn9kmwE+eD783T3ky6/wV2xvVdY9/dkQTmQtdHz6tjU29Io6YHzsi8pLGqkU6pSuQPlHRajrLuB7OZ9SbX2t3/h+WWVAhmqlD/Rc05/Ju9UGMtkHZu+nlL6VrUX84ClzxUWVf3OJ7GdTjfFoIARQQcRsS9ZRzNkv68fj7Y+2VQCpcmpeoHfNaGOzTAlx48m+kNK6UPVXsx/Zx8vLDo9DwCp5O8Kz/85pfST0Tac
UrqXLFtUyeurrdugxzCUMr6XbFR2eR02Aj8tLKqaUWAUF+SBAhXlx77bCotmAh9MKf1plDKL5+G5DGVjKPcmGMzL9ruU0jtHq2j+fb4UBkObn5xnyxrLRuD1tR7ranQ+QxnKdgHPTiltGutNKaU9lc5Zk3hN9+H8d12t7NvJMoiUtNpvfoSJ2Hd5EPZfFRZ9KqV04SjlrmPkNBsC2gqJ/IbGG4zUn7/WXmPiv/aayy2NJhy73PvSndzO9QAcynGsjP3HeIcmw8zZQ7dIerurzpBE756sHXTOaTx5ZG93P5993XXcc/0O5i2ZwWs+dzzzFteeyvWqn2ygvzfR1hGc8nSDDlpFe9R2vBhItR8v9NB1Wfo1v00/HPG4O/8v09BIu1HaUuG1etpTaYTfaO20uO1i2RNxXlRjbEOaCK16TV38+4BhSV8pLM8S6e1ix7ApjzR5WrX9jGU/DmZWPmvzRtaOsbYmUqu2oa1pEzdzFUEbJ3E2e8X+dMZsZsRMFsdyTuLRzGU+3XRx++gz+D6kNSvTQb2KHQqnTtA2vpZSGnVu6fyG3XcKi55Vvk4+sqw4j/y/VRrRVFbuvQx1lEE2Gr4WVW/CT6CJ3OZkfM/NVuzUvIfh7aOajzIUatlOlgZ7Omjk+9lJNr3BWC4qPD+66loTq5bPV7z6PG6UzsHJNubvcpKPTf9ZwzrF73x1RNMmv5qOx5FG/Kbw/DH5qPOiYmDBr8mCCEqdXo+tUF5x/UtrDdiZYNPp+FHNf429yrD6zwYOLl8hIk5gqGO8F/i3Grf/bRicUGz/Jmf/KAYQ/DQPMKjky4Xnz46IenM5f6aGdf5Yz3tSNq3RfYVFIzJX5dNUFEfR/2sN9SCldBdDQTtB5d9bua/XE5A7ljwL03MKi76U12syNXosrvc302q/+WaoZd+dC8MmFPxkDeX+ALi30UqllE6q9mi0zFZQnO9xtHkda5mDeHi5xbmOq9+87Mm3OVa5a9Pd3MxVABzMURz40EzmNC0tXDE06mnbhurxyNs2ZklIFixv7JK3r2eAL7zxBm7941ZmL+jgtZ87npUHVUzUU9Vl389uhh75qCXMX+q8s62is5BwqXuUS/CuvCOks636fLJ66Ouhu+KjL7/ZXTr/9Ixy7imel2o9rxXLHu182Z/6B9OZF8tu9nlRjbMNaSK06jV18e85VWYtLC7vGmUbmjit2n7GEhEsYAkAe3j4jlJvBa3ahu4hS2a6jL2YU+F2aFu0sx/ZWKWNrKW5Y5Gmj6aHCUbEcuDxwPFkac0XAJ1lqxVD0fZrdh1yPx17FSAbLVpKJ7xPROyX37wuOb1s/R/WWO73C+Uui4hDU0q3jfYG4Lc1lt1MDW0zImYBjwMeARxC9j3PJrshX7Kk8HyivudmK37fP65llGJKaU1EXMNQB9LpDB/VNukiYgHZ7/AEstTK88lGdxa/n30Lz2v9fq4oT/NbRfE3tKjGsmuWd8CcQzY38mFkadjnMvzzFe/eLImI2WnkFCWXF54fCXw1It6Wd85Ple3ANTWsN1nHpl6G76dqit95kH0n1Tots5Umrp1ORxeSTavRRnbsPAEojux+TOH5BSml3RFxKfAo4PiIWFIWaFcMOmiFqRWgRY4f4/SHGta5r+zvRRXWeXTh+Z9SSptr2XhKqTsibmbofHMycGst7x1NPsq6OAL7S6Os/iNgM9k0CXPJpoj4bB2bu2TsVSjmsbsrlU0ZU8Vaho4Riyu8fmxheWL4dDNjuYah3+DJjB040ezruZOAeYW/v93Mwifwmm5NSun+GtZr5d/8qJq4704rPF+fUrqhynqDUkoDEXEhw6eue9ibW7jJuIvtw/4uSSmxO092N3cwKdDoZkYnM9JMeulhJ9tZSuVR5Tvz2TRGK3d9uo8buQLIRmIdXHWGP02FlQfPIQJSgnW37aoYCDAwkNhwV/bfir0OqX9KjP6+Ab78tzdyw0Wb6ZzTzqs/dRz7HVlfDN+6O3Zx97XZ7DlOrdBa5rYtHHy+c2DrsL9LUkrsHtiWr79osqqmFvSoeMqor5fOY7vYQUqJkbHpQ+eebP3azmsA8/J1e+iiJ3UPSzVdsqtK2c08L2p8bEOaCK16TT2XhWwadrtArahV24+mj1ZtQ6Vz2myq/x+w9NoA/fTQ/bAMmmta0EE+T+6HyUaK11PuombVoUyt+SvK1zuM4Tc/iwES6/K07bW4tuzv1QxPF1xu21iZGSZA3duMiLlkKdxfA3UduRfVs52pkHdkryosqqXjt+RahjqBKud3mgQRsRT4ANloznqOaItqXK/WK7tiOGB9Q3ZGEREzyNIJv5X65zFexPDMBqSU/hARFwNn5IueCzwnIv5I1lF7MXBJKpt3fILdVWNK7sk6Nm2uMY10eQho1e99EtrptJNSejAirgZOzBedS+Wggx0MBYFcQBZ0EPnr/wsQEZ0MtenSeq1gSo8fTTLmZ0gp7Sq7kVPpMxxXeH5gRPysjjocWHje8HzuZZ7H0G9xM8On7xgmpdQTEd8AXpcvein1BR3U0g6KwwJrbTfF94y1z/uA/610w62K4vG2ln1+Z60F1+jIsr+vaEahk3BN91D4zVc0Afuu+Lu+qY7ybqxj3YeFjpjBgrSY7TzIZtazYljsYmYbWwZH3C1hRc1lL2YFG7iPLaznQEZmJuhKewZvQlQrd2N6gOu5lERiXw7msDi+5u1rcsya28H+x8znnut2cPPFWzj+CSMP+3dfs509O7IRpIedXinOrbqBgcRX334z1/xyIzNmtfGK/zqWgx4xslN6LJd9LzvEzlnYwbGPWVb3+zVxOmIGC9qWsn1gM5v7HmBlx4Ej1tk2sHHwOLS0fe/JrqKmkcX5+aSPXrazhYUsHbHOFrL43IUsGTa9x1gWsYwgSCS2sIG9hs0+mdmcl93JrBE3/JtxXtTEsw2pEa16Tb2EFdxNNnXIbnYMjkov2s2Oweezp8d/Lx9yWrX9jCWlxHayrrLROpU18Vq1DUU+vqSLUbKZFV7reJhODdSU6RUi4hTgauAvqD+QYWQYZHPUNGqwwnrldw2Kf486ardM+bpj3Y3YPsbrE6GubUbEMrJRnudT3w1WyOZkbnWLyv5u9Puu785Tk0TEwcBVwCupryMXav8d1jJKuVzNvTqjFhIxG/gJ8CEa62ir9hmfDVxZ+LuNrMP2nfn2NkfE5RHx1ogYeTXbfLX+Lifr2NTIdw5VvvdJaqfTVTE4YDCFe77PVuV//rYwjUbF9cnabynTxx5qG1k+Gabs+NEsNWZqKFfpMxTv9KwEnljHo/ibrb+XorLi1ApfTylVzyed+VLh+SMjorxTvKoayi5X7/ow9j6fQX37/JDCe2vZ582+piuee3Y1Y7qUSbqma/T80dImaN8tKjzfWkd5D9a5/YeFvTgAgHXcU3Ee17vzBDHzWczcOmaIKd1I38x6dlSIR70nL3cmswZv8BdtTuu5jj+SSOzNgRzBI2retibXSU9dCcAVP1rPtg0jD2UXfDFLirb/0fPrmhIhpcQ3330LV/5oPe0zgpd94hgOO63+/zoODCQ
u/2HWiXPiU1bSMXOqZs1UNXt3ZLNrre27i+6BkaftNT1ZQpsFbUsrZkKQSubFAubll593V0hw1p32sC6fbal0/qtVR8xgGVnQyz3cOiIFcH/q4/48lnYl+48YIT/e86Imh21IjWrFa+rFLB9Mj15Kcz7y/dm4qgUsZmY8/EYYt4pWbD9jjfG7nzsHO4yXVRkBr8nTim2odD7dxLrBqdKKUko8wBogy5JQTyDfQ8m4/3eaj/T5LkM3y3qBr5KNGD6W7EbprJRSlB7A+8a73RrUepO6/C5CeadW8e96bnyXlzvWWW6gjrKbpd5tfpZs2oySC4FXkaUaXkk2Oq2t8D0/ZkQJra38u2/0+570K5p8rupvw+D/EBJZGv0Xk2VgWAbMLvsdvqRCUa3sA2QpjEuuAt5INs3APmRpvtsLn++gWgpNKa0lS2v8UrJ5xMuvQNrI2viHgDUR8erxfIga1Pq7nKxjU9M8TNrpeBSDCM7KM3tA9akSLmEoe0dxneLzPzTQyauJ16yQ6WZcxx3PUIYNgJdExKbRHsDPy4p5Ga1vMvd5s6/pisfp6pPZ1eehfk03kdx3LW5fDmYWc+inj6v5AztTFgfUl3q5LV3LRrJZP1Zz9Ij3/ir9D79K/8MdFWa4WM4+g6OpruUStuUz4wykfu5Otw7e4DyEo2iL4YeKrWkT13AxAwywkv05ipMrpjdWazjzL/dhyT6z6N7Vz6dfcy1rb88SsnTt6uP7H76da36ZxfA+7U0Hj3jvG478DW848jf85D/uGvHadz94O3/837W0dQQv+djRHHXWyNGmtbjlkgfZtj67pD/NqRVa0n4zDmNWzKWfXq7q+jU7B7YC2XHo1u4r2NB/DwCrZ44MPvrFzi/zi51f5vbuqyuW3Zu66Uldg4+SvtQ7bPlAGnk50p/6hq9DPwADaWDY8r6aEt1psqzmGAA2cD+3pWsHv5+daTtX8wf66WM2c9m3wi2QB9KawXPbnjRyfuqDOYog2M6D3MDl9OQx1l1pN9dwCV3spoMZrOKIEe8dz3lRk8s2pEa04jV1W7RxKMcCWUfkHemGwfbck7q4MV3B9jwu+2CcwmwqtWL7uYWruSVdzda0if7UP7i8K+3mtnQdt3A1kAW3LAszUU21VmxD+5H9/6+fPv7E79iSNjCQBkgpsSvt4BouHjwG7T91ydCnXDNCLV7C0PykvcDjU0oXjfGe+iYsbMx8ahv9Uz5CaVvZ31vLyqxVeblbK600XUTEMcCfFRa9I6X0wTHeNhnfczNtLfu70e+7vJzJ8BSyOZ9LXpBS+voY75k2309ELAb+X2HRp4HXjDENQc2fL6XUD3wR+GKe+v8s4EzgHLL9WrorPB/4ZERESumTtX+CCbG18Hy6HJtarZ22T2DZjfgdWdr3DrL5208lG01bMeggT3P/B7JgnMMjYt98/vRqQQpqHVsLz3+QUnrmVFWEkQED8/JHPV4YEW+vcSqWqbK18Hx7Smk6DSssXs+Ou94Pk2u6CTGB+25r4fmiOqo0Jdm1Wl17tHN8OoOr+C072Mof+QXtqYN++gbXWc0xLI36OmsjguPSI7mSi9jDLi7nN7SnDgboJ+Uxq/tyMPvGyI7oO7hhsHNvC+v5HT8aGeaaO4wT2CtGpifW5Jk5q51X/Oex/MdLrua+G3fywadfxqx57XTv7icNQEQWcHDkmbUnQdvyQBcXfSWbxTECvvXeW/nWe0eOOC35p9+dWfW10tQKKw+Zw4HHOVdtK2qPDh4x61yu2PMLdgxs4eLd36eDGfTRR+nHf+jME1nWMTJV7Fgu2f1Duip0/F3bfdGw8PKTZz2RJR3Dj3N39VzPnb0jZ5Hc0H8PG3bdM/j3Ph2HcMysR9VdN02MZbE3B6ejuZMbuJvsZnhbah88r81gJsdzBm1R/39v58cijkwncRNXso57WMc9dKQZg+mK22nnOE5nZoxMOjie86Iml21IjWjFa2qAveIAdqZtrOEW7uIm1nAzHWkGvYXxWIdynJ3GU6wV208/fazlbu7Ns2R0pBkk0rA6LWIZx3F6Ix9ZTdaKbWhRLOPQdBy3cS272M5V/JYgCNoG/7+fvf8g9nsYn7uaEXTwpMLzb9QQcABUmOSp+Q6itqCD8m9/fdnfxXnSD4iIjkJq69EcUvZ3rfOtt6ri97wG+Oca3jOt7pbl83DvZmg+4fLvcDTFdafiuy5+P7+toSMXptf381iydNiQzdv9ljECDqDBz5dS2gx8L38QEXsDrwDeztBo0w9GxJebkeJ6HKbjsWki22lxNP+MqmsN11KdNSmlnRFxOQxe3Z5LFnRQGiW7CSi/S/hrhjKAnBsR/wecUnjdoIPWVJzrfuVUVSIiOoHnN6GoFcDTgP9rQlkTpbjPF0TE7JQq5EJrTWsLzzsi4uCU0p3jKO8hf003gSZq391deF7zdCXg8J1q5sciHpmewBpuZhNr6WYPM+hkIYs5gENZEo0demfFHE5Lj2MNt7CB++liF+10MJ9F7MchrIz9Kr4vFSIMesdIUFW8WaGps+8R83j7D07hl5+9h+sv3MS29T3MXTSDA49dwDkv2o/DT69v1rXif136exM7NjWWiGrPzj6u/VWWaeHUZ5rloJXNb1/CGXOeyV2917Gx7166025mRCcL25Zx4IyjWNphZ4hqd3AcycK0hHu5jW1sGRyZvoy9WcURdI4jhfg+sYp5aQF3cysPsoleeuhkNktZySqOYE5Uj0du9LyoyWcbUiNa7Zq6ZHUcy+K0nHu5g+1soZeePBX6Mg7gUBZGY9mk1Fyt1n7242Bm0slWNtPF7vz/ZYlOZrOAxezFAaxgXzPStZBWa0MAB8ZhLE7LuJc72comutlNGmxHS9iXg1hWZyDEQ00zgg4OLDy/bKyVI/vVntGE7Y7lNLL067WsV9IDXFf2enGu91lkKYjH/JwM/4z9kOdnaZ5irrzJOBIWv+craujwBWh2aPxkfOYryUa5Q43tNCI6yEYkl1zR7ErVoK7fYW46DV0ofr4bU6owtGOkpny+fPqF90fEA2QplSEbaXoa8Juy1Sfzd9mqx6bRTGQ7Lc6hXutd4GNrXG8yv9cLKAQdRMT/wuBEYhdWOPYWgwrOJQtMKAVdbKd5x6PJPuc027D8snm2klrOYxPlYuB1+fMTprAD/M8Y/ns5KKW0ptY3R8SPgKfmf76U1g46uKTs70cy8hjeqsrrfg4wnqCDVrimm64mat9dWni+MiKOTqlCHsCCfMqic2oo+2GrM2ZxOCdwOCfU/J7HxbPHXKcjZrCaYwZTFdfi5Din5nXVOhYs7+RZ7ziUZ73j0Jrf84mbKs+osnTf2VVfq8fseR185E+PHnc5mhydbbM5ovNUjug8deyVc0+Y96JRXz977tjHqWpWd57A6s4TGn6/ptbSWMnSOuOV94lV7MOqMddbEEs4lkc2VK9GzouaGrYhNaKVrqmLlsZeLOXh3bE3HbRS+1kYS1mIASnTTSu1oZIFsYSja+5+ePhpxoRItY4oLXkSUH8Oufr9VY3rPa/w/NKUUvl855cxfP7cF9ZY7l
8Xnl+ZUtpZ4/tqVex0nd3ksiup63vOU9T/WZPrMBmfuZip43ERNYUlPRWGnbFqyfbRbPV+P0fBtMoVVO/nm8Hw32Az/G/Z35XaxmT+Llv12DSaiWynxVGix9VQ9knAATWWPZnfazGI4HSyKSlKfl1h/SsZmhboXIZPrfDbfOqQZpjsc06zlQcqTfVn+BVQmoqgE3jBFNWjOLXCpfUEHOS+VXj+5DwzTEtKKT3A8EwhL5+qutQrpbSe4QFirx5nka1wTTddTdS+uwAoBh69pob3PAMzUEiSJEmSJKlFNCPo4IHC87NHWzEi5gAfa8I2a3FWRDx1tBUi4i/JRgeXfL58nbxD7huFRa+MiCPGKPeFwCMKiz4zdnXrVky1uzwiJnpu4uL3fHo+un80H6P5nTqT8Zk/z9CI2BnAv4y2cp6aujiX793ALyagXmOp53fYBvznxFan6Yqf79iIGCst/ruoIbgp6suXVJ4PbkuFdYpt9JA6y69LCx+bRjOR7bQ4ov8pEdXz9+Xfy1hzcBcVv9fah7o15mKGZmPtBN5SeG3EVAl5UMFv8z8PYHjndTOnVpjMfTAR1pb9PaWfIaW0EfhyYdE/RkStQTBNkW/vsYVF36q27ii+z1DwUzsw+tC8qffhwvPnRsRTqq7Zej5eeH5KRLx+HGW1wjXddDUh+y6ltJXh5/RXR8Q51daPiJUMbxOSJEmSJEnSlGpG0EGxU+PZEfG0SivlI31+BBzehG3W6isRcUqlFyLiLOBzhUW3U/2G+78wdFN9JvCjiDi4SrmPBz5dWHQb8LV6Kl2jaxkaJQnw5gnYRlHxe94X+MdKK0VER0T8K7WPuq7HhH/mfJRnsSPoryPiXZU6jyNiLlmbKc69+w9NHFlcj+L3c2pEVBwhlwf+fJXpl473QhicDLcT+I+IaC9fKTJvAt5ZY7kfj4gPV/s9F8rtYHgAShcj013D8CkPlgAvqbEejWrFY9NoJrKdFjNRLKJKZ0yeBeNTwOPrKLv4vT4hImqdlqFuKaXytlXKqHFfSunWKm8r7te9qiwfr+I+OD4iHtfEsidcPsp9XWHR31Q6hkyy95FNhwGwArgoIsbMRxkRKyPi/IgY7+/3JQxdBybg2/UWkFLaDvy0sOil46zTRPsG8If8eRvwnYh4yVgBYhExJyKeHxFXjrbeBPs6w3+HH4+IN+UBWhVFxLyIeEt+vVLUCtd009VE7rt3Alvz5+3ADyPi1XmAa7Hsc4HfkU31sLGO8iVJkiRJkqQJM9bonFp8BjifbBRwG/D9iPgK8ENgPbAYOIvsRvRSsjmmf0zt0x806hv5Ni7O6/NjshtzK4GnAc8nu6EH0Ae8LO/sGSGldEtEvBX493zRIcC1EfEFsnTXDwL7kKVPfQ5D8133AC+oVu54pJR2RsT3gdIEJe+OiJcCNzI8Pes3U0rfbML2fh8RlwGlyQjPj4jTyDro7yQbxXU8WSdGabT1pxh/CuBiHSbrM/8N8Gig1Hn7fuBpEfEl4BayDAgnAq+EYRObfS+lNCJbxiT5DvABhtLs/ldEPIGsE+k+YD7Zd/fSfJ1e4L8Znlq7ZaWU7omI75D9viCbFuXIiPgMcBPZd3Ik2c39UqBRLe1vIdnI3L+NiCvIpsa4muzYtZus8/q4vNxicMnH88628nrekpdzcr7o8xHxdrIO/p7Cqp9IKY27Q7gVj01jmLB2mlK6NSL+F3hWvuhlEXE4WfaSO4E5ZL/bl5LtpweAG6gt+OB/yYIYZuXlXB0RV+dlFIOMXplS2lBDeWO5gJEBF6O1l0qvbSIL1GqWC8iyBexN1o5+GRHXA/cwPBjsnSml65u43Wb6KvC3+fMXk2XEuA4oTjFyQUrpE5NRmZTSfRHxbODnZMFUq4BLIuIC4Cdkx7ZtwFxgOXAscCZwBtn1VsNT+eSd7MWgqN+nlO5vsLhvAeflzw+NiLNSSr9rtG4TKaU0EBHPAi4l67CdA3wBeGtE/A9wFbCZ7JyymOy4fypZRog5U1LpXEqpNyKeQ1b3ZWRt4KPAKyLim8CfyI7588gyeTyKbPqnuZRl8mqFa7rpaiL3XUppbUS8CPgfsjY4D/gk8C8RcSPZufsQhjI53Up2/i9dA5RPESdJkiRJkiRNmnEHHaSUNuQ3yL6Vl9dG1olXKcXuLuC5wGnj3W4NXgUcBpxEduOv2ojjfrLOt99WeR2AlNJ/5CONPkzW4TIXeH3+qGQHcF5K6bIG6l6rN5F1bq7K/94vfxRd3cTtPZ8s9ffy/O9zqDwSOZGN4LyI5t+gnvDPnFLaHhGPBn4GHJ0vPpWhG8yVfJesI3xKpJS6886IXzPUMfJnVJ5HuJdsruB+pknQQe61ZFMDlNKiP4LsZnwlXyDLAlBP+zuZoWCB0XwNePcor7+CbL72pfnfq/NH0ffqqNeoWvTYVNEktNPXkXX2lPb3o/JHuY1kc2HXlJ48pbQpz8rwWYbOcycyfHoeyAKWmuECsmCn8mXVXEf2mZYXll2YUkpV1q9b3uH5YuD/GPrujskfRR9v1jYnwPvJOo9LU4ysYPj0AjA0ynhSpJQuyjMvfZehc9m5+WMiPZas071kPIF6PyS7viuNpn8Z2SjslpRSWp93En+HLCgWsuCCd01drWqTUrozIk5neOawI8muuerVCtd009WE7buU0g8i4s/JrmNK5S8AyrOg/JEsmLA4ldy2WrYhSZIkSZIkTYRmTK9ASum7wOOAaqMb+8nmuT8xpfTTKus0VUppB9mIwH8juxleyaXAaSmlmuYxTil9hCxg4tfAQJXVushGOx2VUvp1XZWuU0rpPrIOtrfkdVrLUKr1idje7WSdsj8ZZbXrgKemlBq5AV5LHSblM+fbOYUs1e1oqWtvJbv5/OyU0pSOMEsp/ZHspvTFo6x2CfCoKczI0LCU0mayz/c1ho8uL7oTeHFKqdZO6k+TdSSvqWHdq4DnpJRekFLqrbZSSulqsmCV95B1vG1keJaDpmu1Y9NoJrKdppTWkXUifpuh6TiK+skCPk5IKdWVJj2l9CWyILb/JBtRvJUsS85EuIzho+9hlKCDPLjgN7Wu36iU0i/IRtt/iOz8uZnhWQ5aWn5dcDpZUOJPybJr7Bn1TZMgpXQ5Wcfx+WSZI0bTR/b7OJ/xBboVj5H9ZCOrG5JS2k3WCV7y7IiY32h5kyGltJ6sk/i5wBVUPl4U3Qx8BDhhQitWg/xa7ASyrB33jbH6LcDfM/J40hLXdNPVRO+7lFIpqOTtDB1ru4G7820+Dzg7pXQvWQa3EqdakCRJkiRJ0pSJJg6ELKXrPZHsRtxSshG1a8nS9q4b7b1N2PY5FDpdUkpReG0u2ajBA8jSd68HLk4p3TKO7S0HziZLXT4f2ELWcfnb/Ab8Q1pErCL7/HuTdYKsBa5OKd04lfWaCPl8yaeQdSQvJ/u8G4DLU0o3T2XdqomII8lScK8g61RbC1yWUrprSivWJBGxN9kUGKU0/euAm1JKV4yzzGPJsmgsJhvRvpOsE/CqlNLd46nzZJlOx6aJbKf59/kYsjTU/WSdc79LKa0db
9nSRIqIQ8muo5aRTQGzh6zT8Vbgujx4Qk2UHzfPJLumWUx2nt9KFsh2/URfw45HRBxDlr1jOVlq/x1kndN/SimNFcRSKmMVD5Nrumab6n0XET8Bnpz/+U8ppXc2o9zHt/1F8/6DqIelZ9y4eaqroGnuR6ccMNVV0DQ3sKva2CNJkiRJtfjlwHdi7LWGa2rQwVQaLehAkiRJkh4qIuJA4HaGpst7ckrpZ80o26ADjZdBBxovgw40XgYdSJIkSePTSNBBU6ZXkCRJkiQ1Ls8aV8t6s8imTCoFHNwL/HKi6iVJkiRJkiSNxaADSZIkSZp650XEjyPiORGxsPzFiOiIiKcCl5JNM1Xy3pRS/6TVUpIkSZIkSSrTMfYqkiRJkqQJ1gY8JX+kiLgLeADoBhYBRwJzyt7z1ZTSFyazkpIkSZIkSVI5gw4kSZIkaeoNFJ4HcHD+qKQL+BfgfRNdKUmSJEmSJGksBh1IkiRJ0hRLKX03Ik4FngScBhwG7EWW3aAL2AzcCFwIfDmltG6KqipJkiRJkiQN85AJOkgpXUg2IkiSJEmSpp2U0uXA5VNdD0mSJEmSJKkebVNdAUmSJEmSJEmSJEmSND0ZdCBJkiRJkiRJkiRJkhpi0IEkSZIkSZIkSZIkSWqIQQeSJEmSJEmSJEmSJKkhBh1IkiRJkiRJkiRJkqSGGHQgSZIkSZIkSZIkSZIaYtCBJEmSJEmSJEmSJElqiEEHkiRJkiRJkiRJkiSpIQYdSJIkSZIkSZIkSZKkhhh0IEmSJEmSJEmSJEmSGmLQgSRJkiRJkiRJkiRJaohBB5IkSZIkSZIkSZIkqSEGHUiSJEmSJEmSJEmSpIYYdCBJkiRJkiRJkiRJkhpi0IEkSZIkSZIkSZIkSWqIQQeSJEmSJEmSJEmSJKkhBh1IkiRJkiRJkiRJkqSGGHQgSZIkSZIkSZIkSZIaYtCBJEmSJEmSJEmSJElqiEEHkiRJkiRJkiRJkiSpIQYdSJIkSZIkSZIkSZKkhhh0IEmSJEmSJEmSJEmSGmLQgSRJkiRJkiRJkiRJaohBB5IkSZIkSZIkSZIkqSEGHUiSJEmSJEmSJEmSpIYYdCBJkiRJkiRJkiRJkhpi0IEkSZIkSZIkSZIkSWqIQQeSJEmSJEmSJEmSJKkhBh1IkiRJkiRJkiRJkqSGGHQgSZIkSZIkSZIkSZIaYtCBJEmSJEmSJEmSJElqiEEHkiRJkiRJkiRJkiSpIQYdSJIkSZIkSZIkSZKkhhh0IEmSJEmSJEmSJEmSGmLQgSRJkiRJkiRJkiRJaohBB5IkSZIkSZIkSZIkqSEGHUiSJEmSJEmSJEmSpIYYdCBJkiRJkiRJkiRJkhpi0IEkSZIkSZIkSZIkSWqIQQeSJEmSJEmSJEmSJKkhBh1IkiRJkiRJkiRJkqSGGHQgSZIkSZIkSZIkSZIaYtCBJEmSJEmSJEmSJElqiEEHkiRJkiRJkiRJkiSpIQYdSJIkSZIkSZIkSZKkhhh0IEmSJEmSJEmSJEmSGmLQgSRJkiRJkiRJkiRJaohBB5IkSZIkSZIkSZIkqSEGHUiSJEmSJEmSJEmSpIYYdCBJkiRJkiRJkiRJkhrSMdUVkCRJkiRJDw3ffd0TproKmuY+dP0np7oKmubecdCpU10FSQ9z7UuXTHUVNM31b94y1VWQpLqZ6UCSJEmSJEmSJEmSJDXEoANJkiRJkiRJkiRJktQQgw4kSZIkSZIkSZIkSVJDDDqQJEmSJEmSJEmSJEkNMehAkiRJkiRJkiRJkiQ1xKADSZIkSZIkSZIkSZLUEIMOJEmSJEmSJEmSJElSQww6kCRJkiRJkiRJkiRJDTHoQJIkSZIkSZIkSZIkNcSgA0mSJEmSJEmSJEmS1BCDDiRJkiRJkiRJkiRJUkMMOpAkSZIkSZIkSZIkSQ0x6ECSJEmSJEmSJEmSJDXEoANJkiRJkiRJkiRJktQQgw4kSZIkSZIkSZIkSVJDDDqQJEmSJEmSJEmSJEkNMehAkiRJkiRJkiRJkiQ1xKADSZIkSZIkSZIkSZLUEIMOJEmSJEmSJEmSJElSQww6kCRJkiRJkiRJkiRJDTHoQJIkSZIkSZIkSZIkNcSgA0mSJEmSJEmSJEmS1BCDDiRJkiRJkiRJkiRJUkMMOpAkSZIkSZIkSZIkSQ0x6ECSJEmSJEmSJEmSJDXEoANJkiRJkiRJkiRJktQQgw4kSZIkSZIkSZIkSVJDDDqQJEmSJEmSJEmSJEkNMehAkiRJkiRJkiRJkiQ1xKADSZIkSZIkSZIkSZLUEIMOJEmSJEmSJEmSJElSQww6kCRJkiRJkiRJkiRJDTHoQJIkSZIkSZIkSZIkNcSgA0mSJEmSJEmSJEmS1BCDDiRJkiRJkiRJkiRJUkMMOpAkSZIkSZIkSZIkSQ0x6ECSJEmSJEmSJEmSJDXEoANJkiRJkiRJkiRJktQQgw4kSZIkSZIkSZIkSVJDDDqQJEmSJEmSJEmSJEkNMehAkiRJkiRJkiRJkiQ1xKADSZIkSZIkSZIkSZLUEIMOJEmSJEmSJEmSJElSQww6kCRJkiRJkiRJkiRJDTHoQJIkSZIkSZIkSZIkNcSgA0mSJEmSJEmSJEmS1BCDDiRJkiRJkiRJkiRJUkMMOpAkSZIkSZIkSZIkSQ0x6ECSJEmSJEmSJEmSJDXEoANJkiRJkiRJkiRJktQQgw4kSZIkSZIkSZIkSVJDDDqQJEmSJEmSJEmSJEkNMehAkiRJkiRJkiRJkiQ1xKADSZIkSZIkSZIkSZLUEIMOJEmSJEmSJEmSJElSQww6kCRJkiRJkiRJkiRJDTHoQJIkSZIkSZIkSZIkNcSgA0mSJEmSJEmSJEmS1BCDDiRJkiRJkiRJkiRJUkMMOpCkgog4JyJS6THV9WmWiHhx4XOtadUyJUmqJiJWFc/REbFqqus0lR6q1yySJEmSJEmafgw6kCRJkiRJkiRJkiRJDemY6gpIkqTKIuIc4Delv1NKMWWVkaSCPMvAXYVFB6WU1kxNbaSRulMXa7iZTaylmz10MIMFLOEAVrMkVjZcbl/qZQ23sIH76WI37bQzj4XsxyGsjP1Gfe/utIN7uJ0tbKCL3SQSncxiIUvYj0NYHMsbrpear7t7B/esuZDNm2+mu3s7He2zWLBwP/bb/0wWL1ldd3kDA31sffBOtm+/jx35o6dnBwDHnvBili49vO4y773n99xx248B6Jy1iNPPPL/uMjRxNm3o58v/tY3fX9DFxnV9zFvQxlHHz+S5L13AqWfOqru8Bzf385uf7eayP3Rxy/W9bFzXR1t7sNc+7Zxy5iye+9L57L9qxqhl3Hx9D9/64g7+dFkXm9b3ExEs36udR5zayV++eD6HHT2z0Y+rCdBK57KBNMAD3MU2HmQnW+mmi166aaOdOcxjKSvZn9V0xuyG66Xmsw1pPLoHdnPn7qvZ2HM33QO76YiZLOxYzoGzj2XpzH3rLm8g9bOl9wG29W1kW99GtvdtpHtgNwAnLngyy2fuP+r7d/RtZmvv+sH37+p/kERir5kHc/yCxzX0GTXxWuk4
dEe6gbu4qabyF7Ock+LRDddPzWH7mX4MOpAkSZIkPWTsSFu5it/SSw8A7XTQQzebWMsm1rI6HcOqOKLucrvSbq7kIvawa7DcPnp5kI3ZIx3MEXFixfduSPdzPZcywAAAQRttBF3spovdrOc+DkpHcEgc0+CnVjPt3LGWq//0Ofp6sxvh7e2d9PbuYvOmm9m86RYOOuQJHLjqnLrK3LVrA9de/cWm1bGraxtr7vxl08pTc912Uw+vfd4Gtj2Y/ebnzg+2bhng97/u4g8XdPHaty7kRa9dWFeZTzntfvr7hv6eMzfo7U2suaOPNXfs5Aff2sU7P7SEJz5zbsX3/89XdvCR9z5If3/2d2dnFs9839193Hd3Hz/57i7e9g9LOO958+r/wGq6VjuX9dLDzfxp8O8gBt+7g63sYCv3cSfHpdNZEisa/NRqJtuQxmNH32Yu3/YjelM3AB0xg57Uxcbee9jYew+HzjmVg+ecUFeZO/sf5MrtP224TtftuJAd/Zsbfr8mX6sdhzroYCadVctNQC9Zm5/Porrrpeay/UxPBh1IkhqSUvoS8KUproYk6WEiz6RgxheNqj/1cw0X00sP81nE0ZzCvFhIX+rlTm7kHm7jdq5nflrE0tir5nJTSlzLH9nDLmYxh2M4lUWxjP7Uz73czu1cx33cyfy0iH3j4GHv7Und3MDlDDDAfBZxBI9gAUuICHanndzOdWzgfu7iZpaklWY8mGL9/b1cd+1/09e7m3nz9+HIo57D3Hkr6evrYs1dF3DfPb/jrjt+wfz5+7Bk6WF1ld3RMYt58/dlwYL9mL9gP2647msN1/P2W39Af38P8xfsz47t9zZcjpqvq2uAv335RrY9OMDhR8/gvR9byiGHzWTnjgE+/4ltfO2zO/ivD2/j8GNm8sizax/R298Hjzi1k2f85VxOO2s2y1a009+fuP5PPXz43Vu49cZe3vuWzRx82AwOPXJ4xoI7b+sdDDg47axZvOndizlodXZL8I5bsteu/GM3H37PFk45s5P9Dhw9Y4ImViuey9poY39Ws5jlLGQJM5lFRDCQBtjCem7lWnazg+v4I2ekJzEjzJoxlWxDGo/+1MdV239Ob+pmfvtSjpv/GOZ1LKFvoIc79lzFmj3Xctvuy1jQsYxlM0fP9FWuI2ayoGM5C/PH1TtqD6CMaGN++9Ls/TOWs777Ljb33lfvx9MkacXj0IFxOAdSPbvYhnQ/13IJAPuwqqHPreaw/UxfbVNdAUmSJEmSmuF+7szTI3ZwPGcyL7KRxB0xg8PieJazDwC3c31d5W7kAbazBYDjOYNFsQyA9mhnVRzO/mTp9u/gRgbSwLD3bmIt/fQNvndhLCUii5+ZE/M4htOYTTayeAP3N/Kx1UQP3H8p3V1baW+fybHH/TVz52VpOzs6ZrH60KewbPlRQOLOO35eV7nz5u3FmWe/mxNOfDkHr34Sy1c0ntVi08Yb2bTxRpYtP7ruwAdNvP/72k7W3t/PnLnBRz6/nEMOyzrO5s1v441/v5hHP2E2KcF/fWhrXeV+6lsr+PS3V/LUZ81j2Yp2ANrbg+NP7uTfv7KCJcva6O+Db3x+x4j3/uqHu+jvzzIu/MunlnHwoTOICCKC1UfM5MOfXc7ceUFfL/zuV3vGvQ80Pq14LpsRMzk8TmBF7EtnzB48j7VFG8tib07gTCAbzb6JtQ1+cjWLbUjjcW/XTXQN7KQ9ZnDigicxr2MJAB1tMzl87iNZMXMVALftuqyucue3L+XcJS/ilIVP5bC5p7Ky86C63v/Ihc/kjMXP4pj5Z7P/rCPpbJtT1/s1uVrxODSWtdwNZKPUS/XV1LD9TF8GHUiSJEmSHhLWcQ8Ae7E/syrMCXwgWQftDrayK43smBur3CWsZH4sqlBuNuKhhy62sGHYaz10ATCDmcyKkTdH26KNeWQ3Jfrpr7lOmhgb1l0NwIqVJ9A5a+TNov0POBuAnTseYPeujTWXG9E22MEyHn193dx2yw9oa5/J6sOeNu7y1Hw/+342LccTnzGXFXuNTDD6wlctAODm63u5+47emss98bRZVV9bvLSdM86ZnZfbM+L1zZuym6b7r5rBnLkjbwXOm9/G/quyuu7Zk2qukyZGK57LxjIn5tFBliGjGwNXppptSOOxtvt2APbuPIRZ7SOn7Fk1+zgAtvdvYlff1prLLQW7NSrCrqzpZLodh3pS92DA094cWPP7NDFsP9OXR2pNiYiYGRFPiIgPRsQvI+LuiNgVET0RsT4iLo+Ij0fEKRO0/XMiIpUeheX7RsQ78+2vj4g9EXFXRHw9Ip7Y4LYWR8TrIuJHEXFnROzMP+tdEfE/EfHXETHmVCej1HmfiHhbRPwhIu6LiN58nXMqlHFQRLw3Ii6MiHUR0ZWvvzUiboqIH0bEuyPi5Bo/W3tEPCcivhoRt0bEtnyf3R0RP4uIN0ZUOHpXLutLhc/3pcLy4yLiExFxQ17+roi4LSI+FxGPqKXsyRARZ+b1vDYiNkVEd0SsjYhLIuJ9EbG6lcqtYbuH52209J2sLd/fEfHiwutrRinrvYX1LiwsPzgiPhARV0fElrztrImIr0XEYxqs879GxPV5W9kRETfnbevswnoV21ozRMRRebv/dl6PrYXf2G0R8Y38Nz9qvtJSHYHflC1PVR7j/hwxxcflCvXpjIgXRsRX8u9xc2FfXhMRX46IF0TEqJPORsTJEXF+RHwvIm7J20Zv3uZujIgvRsR5UeP/Xkdpz8fk++f6iHgwsuPrzRHxsYjYt0pZT8nbyj35fn4wIv4YEW+JiOqThFWv274R8daI+FX+/e2OiO152/tK/jnH/B9+td92RKyO7LhzeWTHhL58nVUVymjqfh+PiFgZ2XnygsjOk135vr4pIj4fEU+to6zi7+6cfFlHRPxFZOfQu/LyN+Xf5TsjJia8eap+s6XfAHBX2Ut3ReXj04Vl719V9vqqfPniyK6XLsg/S1f++ovHem+FOk7mdV7Tz9MRMSMiXp5/rw/k++KeiPh1vtwhPVX0pV628yAAS6mcYnEhSwdvaNdzE+FBNublrqz4+qyYzVwW5OsOL3cW2c3aXnroSrtHvHcgDbCTbQAseBjP/dgK+vq62bHjAQCWLD204joLFu5Pe0fW+fvgg7dPWt1K1tz5S7q7t7Fq1bnMmrVo0rev0e3aOcDN12Wd/o98dOUggWMeMZN587NLsssv7mrathcuzi6r+ivELu2zX5YZ4d41vezeNXLU1s4dA9y7JsvIcsTRpjSfSq16LhvLrrSdPrIgmtmM7KTU5LENaTz6BnrY3pd9z8tm7F9xnUUdK+nIp7/Y3GuWLo00HY9D67iHRCII9uKAmt+n5rP9TG9jdnRKzRYRTwP+G1hcZZUV+eNk4I0R8X/AS1JK2ya4Xs8CvgD5UWXIqvzxVxHxXeDFKdUWPhURbwLeDRXvHpbKfRbwzoh4YUrp0jrr/DzgkxXqXGnddwHvBCrdQViYP44Anga8LyKemlL6ySjlnUK2vyrlBT0gfzwReFdE/G1K6Utj1bGs/HbgfcDbGRkgtTp
/vDQi3pdSel89ZTdTRKwAPgc8vcLLe+WPRwJvj4h/B85PKfVNVbm1iIjTgR8CS/NFtwFPTCmVdzCNZxuvBz4MlHesHpg/nhcRnwVek1Iac8hfRPw92W+tvH0fnj9eFBGfA14/3rpX2X4ncCVwdJVVSr+x1cBzgX+IiOellP4wEfWpV6sdlyPihcAHgUqd9QuB4/LHXwM7I2LvlNLOsjL2By4CquXrW5w/jgReDNwQEX+RUrqpzroG8PfAe4H2spdL7e8lEfH4lNLl+XsWAl8HnlK2/iLgtPzxkoh4XEppXQ116CA7Xr4JqDQx8HyytvcC4MqI+KuU0m01fcChbbyZ7DsZ9S70ZO33WkXE3wLvAcqDUzrJ9vcRZOeSS8jO77fWWf6BwDeA0yuUv5Tsu3x9fk69ov5PUHW7LfWbHa+IeDzwZWDvCdxGU6/zJvD8fxTwLUZeX+2fP84F3hIRfzFWWQ9Huxj66uZWuTyOCOakeWznQXaxvaZye1IXvWSdiPNGueyexwJ2sX1EucvZm5nMoocuruFijkiPYAFLiAj2pF3cxnXsYSdzWfCwnvuxFezetQHIYpXmzq18IyqijTlzlrFj+33s2lVfh8p47djxAPfddwlz5qxgvwMeNanbVm3W3N5LysPdDj60cpxxW1tw4MEzuOGaHu66rfZMB2O56tJuAA45bOR2n3TeXD73ie3s2pE4/9WbeNO7F3PQ6uyW4J239vKv732QXTsTp501izMeU+lyUpOlVc9llaSU6KGLB9nEHXl641nMYVme8lhTwzak8djZv3Xw+bz2yv/djAjmti9kW99GdvY/OEk103QynY5DJaXU+MvYm5n1jwNSE9l+pjeDDjQVVjH8Jvl24HZgG1mHzd5knSOl0ZjnAQdHxOkppQnJr5WPbvtOvs0E3AhsILthfGRh1T8HlkfEE0erS94B9HmyDrGiuyHP4QKH5uWXnv8mIp6RUvpVjXX+c+Br+Z8JuAlYDywh60QprvtO4P1lRdyb16eLrDPmQIbf7K86+jTvHPg/GBZ6vItsv3WRfX+lspYCX4yI/VNK/1DLZ8v9B/Dq/PlO4AZgD1lHVilHTQDvjYi1KaXP1FF2U0TEAcCvgeIoxn6yum4h6zAtDZGaAbwZODIizkspdU92uTV+pmcA32So4/Iy4KkppU3jKbdsG39H1nkJ0A1cT3Yc2I+hzwXwCmAT8I4xyvtn4PyyxWvJjiszyX7DC4CXA3OA5t3ZGzKD4QEHfcAdZPXvIjvmHZFvH7KgnN/kHdEXVSjvOuDnZL/n4ijlapMHX9d41YEWOi5HxMeAvylbvAe4GXiQ7Hi1mmzfkP9d6XpmIcM7vrvJPtMWsjawjOw7KXWiHw38MSJOTSndUkeV30sW8ALZfrsx39aRZJ2+pbr8IiKOBTYDv2Toey211XbgeIaOq0cD38/3cdVJxCLL9PC/wBPKXroNeICsbR7B0P46Cbg4Ih6bUrq2lg+YBxx8JP+zn+w3uyX/fEeUrT5Z+72Wen8aeGXZ4vvIfptzyTp1S0MQTwf+kJ/fr6pxEyvIOspLIcz3AGvI9vlxDH2XK4CfR8TRtQSR1GgVU/ebvZ3sWDQbOLuw/LdQMQ/qWO3skWT7sdQmbif7nhZAntdunCbgOm+izv+Hk2W4WVFY3EN2jN9J9ts6gOw3dAEjj5UPe90MjRjuHPx5j9TJbODBYevXXm71zriZ+TbLy22PDk5IZ3INF7ODrVzObwjaaEtBP/10MIP9OITVHENblMevaTL19Azd4JrZWf1GVGfnAnYAPd21p/Icr5QGuPWm70Ia4NAjnklbm22lFW3aMBQvvWxl9e+o9Fpx/fG46Be7uena7Cbq0/9i5AjhlXt38KFPLeOdb9jEpb/r4rmPX0tnZ3aZ0N2dWLKsjZe8bgEvf8PDd/7ZVtGq57KiG9MVPMCaEcvnsYjjOI12z2VTyjak8egZGMrK1dlWPcFaZ9tcYCPdhfWlkulwHCrambaxg62AqfFbge1nenN6BU2VP5HdKD00pbQwpXRSSunclNKjU0qHAfuQdUqWRoQdD/zTBNbnK2Q3or8PHJRSOiavz1FkN5R/Wlj3LOADY5T3jwwPOPgi2WddlVI6O3/sTTZarTS6czbw9YiodZTfl/J/Pwvsm1I6Oq/zCWT771oYHIn3rsL7fgIcmVI6IKV0Vkrp8Sml01NK+wAryUae/o7SEJ8ykaUJ/yZDnSldwFuAFSmlU1NKZ5PdbH8K5CFemffnIzNr8VSygIPNeX2WppQemVJ6TEppFfA4GJbf5kMRMam51/JMDN9geIfD58i+i+Pzuh5G1qFW7FR+MqO05Ykqt8bP9ErguwwFHPwEeEwzAw6AY8l+P3vIRmUvSSmdnLfdw8g6RIsjsN+ajySuVucnMzzg4E6yDBv75r+zRwLLyToedwDPY+To8mbZAnwceAwwN6V0RErpUSmlx6WUTiIbVf0cGPyf9Qyy3/yIq5yU0kdSSk8C3la2/ElVHh8pL6MBU35czkel/01h0d3A88nayYkppcemlE5LKS0l+w28HxhtMuUHyI7HpwHz8mP72Xk5x5N12paCWyDr5Px6HVU+juz4uhV4CbAsP56eQ9aZ+SoYnJx7EVkAzUfJAg5uBM5NKe2T1+lMsmNwMYDq1Pzzj+azDAUc9JNlENk3pXRYSumcvNzlZJ3OD+TrLQO+U+NxcwXwL8AA2W93eUrphLxtHEPWAbq+7D0Tvd/HFBGvYXjAwY3Ao1NK++f75RSy/f1+hr6jZcD/RMT8Gjfzn2Sf/yLgxJTSgfnv5Yy8rOLvYwlQT+BdLabkN5tS+mp+fHpR2UsvqnJ8elulcgo+QxZw8L38sxyan+tOIvuOfjbeOtPE67wJPP935OWWAg4S8CFgZX6ePCeldCBZoMdNZL/rj4/5yR9mBgabO7SNSD4zpD1/rb+w/mj6ay63o2q5C2IxJ3E28/N4ocQA/fnhZ4AB+ugdTCmsqdPf3zP4vK2t+hiNtrYZI9afaPff90d27LifFXudwOLFB0/adlWfPXuG/hvdOav6rFazZmev7d5V8b/dddmwro8PvmMLAGc/bjann1P5Jurp58zm37+6gv0OzNp2d3eiuzvbfk93Yuf2Abr2jL8+Gp9WPpeVdDCDmXQOpjWGrLP4CE5gTs2X0pootiGNR18hOVvbKLMBt+ev9Tcn6aoeYqbDcaioFAQ1g5ksm7gEjKqR7Wd6M+hAU+FLeefRv6WUKk6CmVJal1J6B/DCwuJXRsSiCarTcrJUtuellIod5aSU7iBLnfujwuI35OlvR4iIRzK8o/CVKaWXVvqsKaXfkI2uLAUeLCebAqEW84H3pZRemVJaW1bulpTSlvzPJzA0evAuss94c6UCU0obUkpfzgMHqt3k/zBDI2YHgD9PKX00paEJalPmp2Q37ouTe306xpjLPreMrBPvzLw+w+7mpZR+TTYtRclCstGJk+llwBmFv/85pfSKlNKwzreU0o1k38GvC4vfnI94nsxyRxUR7wc+zVB6+C8Czyx+r02yhKwD7AkppY+Xl5
+yEcZPIRsdDdkI9vKMIaU6B/CJwqJ7gbNSSr9IKQ3eLUsp9aSUPksWzNJH9jtrtt3A/imlN6WULixvs3k9elNK3yHriC1lPNmHLOX9VJvy43JEHMZQBgyAq4CTU0pfTymNCC1NKd2YUnoPWeaTSsMMbwNWpZTelVK6LFVIa55S2p1S+hxwJgzmzDoxz+ZSi8VkgVfnppS+lFIa7C3Kj4OfKftMLycLRLiJ7Pj2m7L67CILuLq4sPgl1TYeEX9JNl0HZFkEnpFSeltK6YHieimlgZTS98hGlJeOJYcBr63hM84m+x2+LKX09ymlYbkTU0oPpOEjwidjv48qIhaTnatKbgQelVL6bVk9tudtqBiccBBDmSvGsoxsKprHpZT+VFZ2V0rpnWRZj0qeWynIqEFT/pttovlkgZR/Xv5ZUkq7UnOyQzTtOo+JO0+/EnhE4e+/TSmdn1LaWlbu78gCD25nHOeziLiy2qPRMjW6+9KdXMIv6KGLYziNs3gqj+YZnMSjmc9C1nEPl3MBe9Kuqa6qWlB393buuuMXtHfM4pDVExU/q+lo964B3vrKTWzZNMDe+7bzzg8tqbruZz66lRc/Yz0zZgYf/cJyfnHVvvziqn356BeWs3yvDr7z3zt5+bPXs31b1SRbEgCHxfGcHU/nnHgm5/BMjuE0+ujhCi7k1nTNVFdP04BtSFKrSCmxjnsB2IsDaAu7TFU7289I7gFNulQ27/YY636Toc6XuWQjmCfCg8Brix2VZfXoJxuVWap7G0Op/8v9HUPpjL+Sd3ZWlbL5lV9VWPTiGkdZ3kBtoyb3Lzy/rFJnaJV6jcjzmGdheHZh0afz4IJqZdwLvKGwaB+ykd61eFsaJdV2Sun3wCWFRWfVWG6zFD/XdQzPJjFMvs9fQtYxDVn7eEOV1Seq3Ioioj0iPle2nQ/kgTITFa784fz7qyjvdPpuYVG17/bxDB9p+ubyztaycn9HNjK56fJO3ZoCNFJKG8hGgZdMdsDMCC1yXD6foWkSdgHPTjVk2Ugp7al0vEopdReDAMYo41ayKV1K6vlO/rm8w7nMpwrPZ5D9Tl9V3pFYqEsC/quw6JH5yOpK/q6sHj8ZraL5MfmthUWvH239gp+llL5Uy4qTuN9H83KGT//z0vJgibJ6fIEseGDw/TVmgdgBvHiMY+WHCs/nMbxTuWEt8pttlo3A66tdgzVJM6/zJuo8XQwC+iPwsVHK3QS8ptrrD2dthdl2BhhxahhUyjDQXuNsg+01l9tXsdytaRM3cxVBGydxNnvF/nTGbGbETBbHck7i0cxlPt10cfu4Z0zSeLS3zxx8PjBQ/fA+MNA7Yv2JdNstP6C/v5uDDn48nZ2OAG1ls2cPZTfo7qp+aitlFJgzt3o2hLF0dyX+9hUbuenaHhYvbeMT/72CRUsqXzb+7Hu7+NwntrNkWRuf/vYKHnXubBYtaWfRknYede5sPv3tFSxZ1sZdt/Xy5f/a1nCdNH6tei6rpiNmsFfsz8k8hnY6uIfb2JDuH/uNmjC2IY1HRyG7wcAo/9UtZThoHyUbgh6+ptNxaDPr6MnT6JsavzXYfqY3gw40HRQ7lk+doG18LQ1lBqgoH2n3ncKiZ5WvExFLyEbLlfxrLRvPO0Pvyv+cQ5b9YCyfq9TRVkFxBOpxo3Re1eLpQDFTQS0p3f+PLOV9yXk1vGcn8N81rFdMW3x0Des3RT4iu7i9fxurgz7v7PtWYdGfTVa51UTEHLJU0y/LFw0Ar0sp/X2tZTTov8ZepabvtjhdxzqytjaWCQk6aMBkHNcmUlPrn6cVLwYkfSmldFe19SdIo5/pM6O9mFK6H/KQ18zN+TF/NH8sPJ9NNvp+mIg4ATgh/7MX+LexKpr7NgxOSrZ/ftwZy6drLLsRE/FbKJ5nfp9SurSG9xTP14vIpkkZyzdruHa4lez4VDJp56oyrXzM+Xo9QRQNatZ13kSd/w8vK/c/xgrCSCn9iqFMWXVL2XQcFR+NltkKivM9jjb/Ynd+eTza/JDDyx1KUtI97NJ6uNLNhvJy7yFL4rGMvSqmDG6LdvbjEAA2spaJjcHRaGYWOvR7urdXXa87f23mJAQAPLjlDjZtvIE5c1ey194n0tfXPeyRBob+S1haNjBQy38TNRGWrxz67/am9dW/h9Jry1Y09t/z3p7E3712I1dc3M38BcG///cKDjykelLBb34xSwz2lD+fy6LFI7e5aHE7Tz4vi7n87S+rH+c08Vr1XDaWWTGbFewLDKUZ1tSwDWk8OtvmDD7vHqg+tqZ7YNeI9aWS6XQcWpvPDj2XBSyIxTXVQxPL9jO9GYqmKRURy8lGKx9PNgp+AdBZtlpxJPN+E1SVqqP1y/yYoVTX+0TEfiml+wqvn8VQMM+GlNK1ddThGoY6lk4GfjHG+r8d4/WSywvPjwS+GhFvy2+C16sYDHFTnpJ4VCmlFBE/YGie9loCKq5IKXWPvRrFfb+ohvWbpfwz/LDiWiN9n6H2sywiDk0p3TYJ5Y4QEcvI2nOp86kLeH5K6bvV39UUa/JO2LHU8t2eVnh+US1BOCml2yLiXoZnAGmqvAP9HOAkshT2C8lGFxeHMRVTrC+JiNllKeqnzBQdl08iGwVe8u0mlDkoImYBjyMbZX4I2WeazfDvpJiLttbPdFeNqd/XMdTmLhltxdzasr8rXbU+uvD8TymlzTWUS0qpOyJuZihg4WTg1jHeVuv5ZpgJ3O+jbXMmcGJhUa3H0d+RjYYv7evTGZ5uv5I/1Fj2fcBe+fNFNb6nZi10LdWohtpXnZp1nTdR5+nTytavp75H1rjuw8JchjqAd7F92N8lKSV250kt5rKgpnJnRicz0kx66WEn21k6+JMebmc+Y0x5ubvy5bOpnkSl9NoA/fTQXfeNejXHnDkryE5TiV271jNn7shZTFIaYPfuLBnT3LkrJrxOXV1bAdi9az2/v+h9Vdfr7trK7y96LwCHH/ls9t5nWscQTVsHHjKDCEgJ7rytt2IgwMBA4u47s2wZBx1ay+yDw/X1Jd75hk384YIu5swNPvbFFRx29OhZN9bcnm1vn/2r3wbc94DstbX3OT/3VGrVc1ktSjfz9zDR8aQajW1I4zG3fdHg8539DzK3Y9GIdVJK7OrPsuLMa7eTTSNNl+NQb+phI1nS3H0cpd4ybD/Tm0EHmhIRcSDZfMvnUV87XDQhFaLmPKbl6x3G8M7R4wrPZ0XEz+qoQ3GO31rm6L1z7FUgpfSHiLiYofmHnws8JyL+CFxAlnL5kmqpvssUOy3qmWStGHyxTw0drLXO31yc9HYyQ2uL+2Fdni6/FuVBKKvJ5j+f6HLLzSf73g/N/94KPDOVzXc+QZr53RbP5vWM9ryRCQg6iIgZZME1b6X+ebYXwSghlpNgio/L5Z1mVzShTPIU+e8iS0Nezx2PRTWuV2t7Lg4PGPM9KaXdEcNS7Vb6DRTPNwfWeb4p/nbGaqvbxhohXm4S9vto9md4h3tN56o8QO464Ox80erR1s9N6bmqBa+lGlXT9cw4Nes6b6LO08WMIw/U8ZszD3+Zj
pjBgrSY7TzIZtYPjpYr2sYW+sg635ZQe4fxYlawgfvYwnoOZGSSmK60ZzC4oLzcyGOtuqg+Wqz4Wof/TZ8yHR2dzF+wLzu238eDW25n+YpjRqyzffu99Pdlo18WL67ldKGHk7nz2jjyuJnceE0Pl/6ui8c8aeSp//o/9bBzR5bR5JQz6gswGhhIvO8tm/nNz/bQOSv4188u57iTymMNRypNL7vu/upx2qVggznzTIo6lVr1XFaLPfllb61pjjUxbEMaj462mSzoWM72vo1s7r2PlZ0jEi+yrW8DffnsvUtnjGxf0nQ5Dq3nXgYYIAj2stO4Zdh+pjfP4Jp0EXEK2Sj+RQ28fez/TTemphGiFdYrD+dcWni+gMbnTV5YwzrV832O9Gyy0Xil4S5tZEEIpUCEgYi4imx08edHudld/Lwb69h++bqLGb2DtZYsB+Uanwyzfs3cD5NRbrklDB9d/PFJCjiAxr7bahYVnm+t431V53ZvVETMBn5ANqq7ERN1bKtJCxyXi+1xV0qpeq9MjfJsHr8iG/1dr1onaO5poOxG3lPp+FY836xk4s439ZxrJmu/j6b8+NfosbSW4RpTdq5qgd9sM9XVxhrUrOu8yTj/11rXetd92NiLA9jOg6zjHg5OR9IZs4e9fnee3GU+i5lbYaqD6uXuzwbuYzPr2ZG2Mj8WDXv9nrzcmcxicdnNiXksZCfb2MQ6utIeZpXVKaU0mEZ4LgucG3eKrVh5PDu238f6dVdz4EHn0tk5PH7u3ruzWZLmzd+3YiaEZtt7n5NGzVpw152/4u67fk3nrEWcfub5E14fje2Jz5jDjdf08PPv7+Llb1w4YgqFr302O/UdcezMUadEKJdS4gNv38LPv7+bGTPhQ59axsk1Bi0ceuRMrr6sm1/8cBcvff0C5swdHliwe9cAv/xRdgl+9AnNuCTTeLTiuWwgDdAW1QNSdqcdg6P9FrGs5jppYtiGNB57d65me99GHui+nUPmnDRiCoW79mSx/Qs6llXMhCBBax6HypVS4y9hJZ1hprlWYvuZvgxf1qTKRz9+l6Gb5L3AV8lG3x9L1vE0K6UUpQdQPYdk89TaCVTewVB+4756ztT6jPnbTCkN1FpYSmktWerel5LNFV4+UWwbWYrtDwFrIuLVVYoqft56Os7K99t0PwpP1H6YrP27geHTbrw7Il5Wx/Y00gcYHnBwFfBGslTc+5AdG9oLx7WRoeJTpEWOy8U2W32yrvp8luEd3xcCryI71q0kG3HeVvhMj2nSdifLZJ1vaj7X5KZ6v5eflxs9lrbseapFfrPNVG8ba0SzrvMm6jxd7N0ZT7kC9uVgZjGHfvq4mj+wM2Wde32pl9vStWwkm+VpNUePeO+v0v/wq/Q/3JFuGPHacvZhQR4jdy2XsC2f1WYg9XN3upV78uQVh3DUiBvq+3EwAP308Sd+x5a0gYE0kKWmTTu4hovZnsdE7l9TohVNpH32PY3OWYvo7+/mumu+zK6d6wHo6+vmjtt+yqaNWfs4+JAnjHjvhb9+Oxf++u3cdeevKpbd27uHnp5dg4+S/r7uYcsHBsacNUwt7Lznz2PvfdvZtTPx5pdu4M7bslFYu3YO8IkPPshvfpbF37/2rSNjP09ddQ+nrrqHz3xs64jXPvb+rfzgW7to74AP/McyTj9n9oh1qnnW87OZzNbd388bX7SRm6/vob8/0d+fuPn6Ht74oo2DWRD+8sW137jVxGjFc9mtXM0t6Wq2pk30F2Y27E09PJDWcAUXMUA/7XRwwGBSRU0V25DGY/9ZRzKrbR79qZertv2MnX3ZdWrfQA+37PojG3rWAHDonFNHvPfnmz7Dzzd9htt3VU5g2TvQTc9A1+CjpD/1DFs+UOG2d3/qK1sna0cDDAxb3pd6x7sL1ASteBwq2pV2sI1s3KWp8VuP7Wf6cgiFJttLGJpLuBd4fErpojHeMxn/451PbaOfy9NEbyv7e2vh+bUppUZGek6IfL77LwJfjIilwFnAmQzNPV8afTkf+GRERErpk2XFbC08r+d7Kd9vWyutNI1sLTxv5n6YqHLL7SHrIP8ZWad4G/DZiJiRUvpUHdudalvJOjGhvtG+TZ1wLiIWA/+vsOjTwGtSSuXBPUWtdCevFY7LxeNvLZleRhURxwB/Vlj0jpTSB8d4Wyt9J7XYWnj+g5TSM6eqIiUtst+3jqP84rG0vJxW0gq/2elmIq7zmnmeLmZ7GE+5AtqjnePTGVzFb9nBVv7IL2hPHfQzNEf5ao5haVSev7GaiOC49Eiu5CL2sIvL+Q3tqYMB+kl5PO++HMy+cfCI9y6KZRyajuM2rmUX27mK3xIEQRsDDN1035eD2K/C+zW52ttncOxxf83Vf/ocO3c8wOWXfpz29k76+3vIYreDgw55AkuWjkzHOZYrLvsE3V1bRyy/8fpvDPv7+BNfweLFtoXpatasNj782eX8v+dv4Obre3nu49cyd36wZ1diYAAisoCDR55de9DAuvv7+OYXdwDZ+z/4ji188B3VZ+P52RX7Dfv7ic+cyw1X9/DNL+7gmiu6+eunrWNmHvLWk4e7RcCr3lJfvTQxWvFc1k8/a7mTe7kdgI6UZekopTaGbFTgcTySWTGZs1+qEtuQxqM9OnjEgidyxbYfsb1/E3/Y+h06YgZ9qY/SOLZD55zKspn7jV5QBRdv/V+6BnaOWH7Njl8P+/uUBU9jycx9hi27a/fV3LHnqhHv3dCzhg1b1gz+vU/nYRw7/5y666bmasXjUFFplHoHM1jOPqOuq8ln+5m+DDrQZHtS4fk3arhJDhMw93oFB1Hbzejyo836sr+L8zuvpEWllDYD38sfRMTewCuAtzM0+u6DEfHlsjTnxbmLD6ljk8V1e2jtzpxaFPfDARHRkVLqq7r2kPJ9Vj4X9ESVO0JKaXtEPAH4CVkASpAFm8xIKf17DdtsBXcz9Ds7so73HdXkejwWKOVF3Q28ZYyAA5ic41qtWuG4vLbwvCMiDk4pjWee9+JnWgP8cw3vaaXvpBateL5phf1efvw7BLikxvcWj6VjHkenUCv8ZqebZl3nTdR5urid/SOiPQ8WHYs9klXMj0U8Mj2BNdzMJtbSzR5m0MlCFnMAh7IkGjtszoo5nJYexxpuYQP308Uu2ulgPovYj0NYGdVvvB4Yh7E4LeNe7mQrm+hmN4lEJ7NZwBL25SCW1XnDRBNn3vy9OeW0v+GeNReyefPNdHdvZ8aMOcxfsB/7H/AoFi8xI4VGd9hRM/nGz/fmy/+1jd9f0MXGdX0sXNzGUcfP5K9etoBTz6wvqdJAYcBnXy9s2VR/oqA3v2cxZz1+Nt/7+k6u+1M3WzZmp5q992vn+JM7efYL53PcSa02C9PDV6udy1ZxOHOZzxY2soed9NDFAImZdDKPhSxlL/blIDqi9ilDNLFsQxqPBR1LOXPxX3Dn7qvZ2HM33QO7mRmdLOxYwYGzj2XpzJFzrEvlWu04VJJSGuw0Xsn+tEX7qOtrath+pieDDjTZirlGLhtr5YgI4IyJq86g08jSodeyXkkPcF3Z
6xcXnq+MiINSSneNt3ITLZ9+4f0R8QBZamzIRhufBvymsOqVwFPy5yfnHdS15Kwqfod/qmdqiBZ1ZeH5LOBEamjPDN8P/cDVk1RuRSmlnRHxJOCHwLn54k9ExMyU0kdqKWOKXQqUcrk9upZOmog4lOZ3vhWPazemlHZVXXPIo2ose9hvJc9AMlZAQ71a4bhc3il8DjCeoIPiZ7qixn1W63fSKi4GXpc/PyEiZqeU9kxlhWiB/Z5S2hgR9zL0Oz+DbOqBUeUZgA4vLKqcC7I1tMJvFkZOixAV12oNzbrOm4zz/2zgOOBPNZR72tirPHx1xiwO5wQO54Sa3/O4ePaY63TEDFZzDKs5pu46LYglHJ2nclTr6+ycz6GHP51DeXrN7znnsaMn+Dn9zPPHW60RDjr4cRx08OPGXlGTbtmKdt7y3iW85b21v+eyNQdUXL7P/h1VX6vHKWfM4pQzWnYWKZVppXPZ3FjAXBawiiNqfo+mnm1I49HZNocj553BkXX8d/KJy1456uuPXvK8huuzeu7JrJ57csPv19RopeNQSURwFk+t+32afLaf6WfMeeOlJqs3XPVJwGSETv5VjesVr4wuTSmVz6V7ObC58PfLx1Wryfe/ZX+XD7cqjqZcCDxjrAIjYjnw5CplTFeXMXze+RfW+L6/Ljy/MqVUnk9sosqtKs9k8TTgF4XF/xoRf1drGVPoR4XnewHn1fCe/zf2KnWr67gWETMY/p2NpjyAYSJynU75cTmltJ7hnXCvHmeR9X4nSxk+LcB08CsYzEPZCbxgCutS0ir7vXieeXZE1HJn/QUMvy7+bXOr1FRT/pvNTcbxqVmadZ03Wef/MesbEQvB/+VKkiRJkiSpNRh0oMn2QOH52aOtGBFzgI9NbHUGnRURo964jYi/JBvRVvL58nXyFLvFOv9NRDyiOVVsTD7CsVbzyv4unyTyAuCOwt//VENnzj8D+WyRJIYyKUxbeWdBceLVV0bEqKHaEfFCoNgWPjNZ5Y4lHx39DODHhcUfjIh31VvWJPsl5JMBZj6aTxVSUUScxcQEHRSPa8dGxOIx1n8XtXcAri37+9Caa1W7Vjkuf7zw/JSIeP04yip+ptMjYqzMTh+jtTtMR0gpbQS+XFj0jxEx/uFv49Mq+714nllONnVQVXnww98XFv0+pXTjBNSrWVrlN7uV4R3lE3F8apZmXedN1Pl/B/A/hUX/r4bf83uZZsctSZIkSZIkPXQZdKDJdkHh+bMj4mmVVso7AH7E8FTHE+0rEXFKlfqcBXyusOh24FtVyvkEcFv+fA7wy7FudOfbWBQRr4mIX4y1bp0+HhEfjohR5/3NO4f+pbCoi7KU53mq7PcXFh0OfCciyoMViMw7gJcWFn8tpXR7+brT1L8w1NkyE/hRtX0cEY8HPl1YdBvwtUkud1T5aM4/B75XWPz+iPiHRsqbDHl7fENh0f7A7yLi8cVgm4iYGRGvIAuq6AA2NrkqF5IF1EA24vw/IkZO5pT/Jt4EvLPWglNKDwDrCov+plLZ49Qqx+WvMzzF+Mcj4k0RUfVaJSLmRcRbImJu2UvFz7Qv8I9V3t8REf9K7aOVW837gE358xXARRHxyLHeFBErI+L8iGjoeDGKltjvKaXfMnxqoHdGxEuq1GUZWbteXno72X5tZS3xm82ns7m6sOi1NWaVmCrNus6byPN/KXvJnLzcioF0EfFa4G+qlCNJkiRJkiRNurFGoEnN9hngfLIR9W3A9yPiK2Rzyq8HFgNnkXVULwW2k3UU1poWt1HfyLdxcV6fH5N1TK4kSz3/fKDU0dcHvCyl1FWpoJTSjoh4JvAHss+zlOzG8eXA94FrgQfJ5gJeChwNPJJsDvMZwN1N/mwLgRcBfxsRV5Clnb6abH/vBhaRzR38QuDIwvs+nlLaXuHz/XdEPB0oTY7zNOCGiPgs2RzY3cBh+TZPL7x1DTCekcstJaV0S0S8Ffj3fNEhwLUR8QXg12Tf8T5kqcOfw9Bc1z3AC0ZpPxNSbo2fqSci/oLs91D6ft8ZETNSSi053UJK6acR8SHgbfmiQ8iminggIm4n+00dRfY7gKxju5esfULWXsdbh3si4jtk3wdk6bmPjIjPADfldTiS7DdW6vD6FLVPIfBV4G/z5y8GnhIR1wHF9NwXpJQ+0eBHaInjckqpNyKeA1wKLMvr8lHgFRHxTbL5zR/M63ko8Ciy1OJzKRuRnFL6fURcBpyaLzo/Ik4jywxwJ9no4OOBl8DghJL1fCctIaV0X0Q8G/g5WcDLKuCSiLgA+AlZ+9tGto+WA8cCZ5LNL99Gk6e7abH9/mKyNrOE7LN+ISL+iuz4dhdZp+4ZwCsZCjgA+LeU0q8moD7N1BK/2dxXya5hAJ4ArI2Iq/NtloKxrk8p1RxsNUGaeZ03Uef/6yPin8gyGED2e70hP5f8juyYf3Be18fm63yd4VNCSJIkSZIkSVPCoANNqpTShoh4EdnosQ6ym+UvYqgDsGgX8FzgtEmo2qvIOspPIusMqTgiEugnu2E86lzPKaWbIuJUslHjR+eLT2Gow3GqnJw/xvI14N2jvP4Css6Ev8j/PgAYbUT8zcATU0pba9j2tJFS+o+I6AQ+TNapMJcssKJacMUO4LyU0mVTUW4tUkp9EfFc4CsMdVCdHxEzU0pvHm/5EyGldH5E7CCbtqA0lcc++aPo88DrgC8Vlm1rUjVeS5Y+u5Re/BHAJ6us+wWyEa21drS+n6yDqZSeewVDHU4lW2utaLlWOi6nlO6MiNMZPjr7SBobef584GKGOpTPyR8jNpuXfxHTLOgAIKV0UT5K+7vAfvnic/PHVGiJ/Z4HAz0a+BlD05k8Pn9U8x/AW5pdl2Zrpd8s2Sj+pwNPzP9exMjve9EEbbsezb7Om6jz//siYi+GfhOLyQJMzq+w+mfIgikMOpAkSZIkSdKUc3oFTbqU0neBxwHXV1mln2yk8okppZ9OUp12kI3+/DeyG/SVXAqcllKqlm63vMzbyeYGfjVZp/uoq5NlH3g/2b5ppk+TzW+9poZ1rwKek1J6QUqpt9pKeSr+vyTryLhplPI2A+8BTkop3VNzjaeRlNJHyDpzfg0MVFmti2yk71EppV9PZbk1brufLLCkOF/8myLi34vTFrSSlNI/kmXs+BhwI1kHzy7gVrLP8eiU0svzEaYrC29tylQLKaXNZKN9v0Z2DKvkTuDFKaWX1Vn2DrKsIa8CfgrcB+xpvLYVt9Eyx+X82HkCWXaH+8ZY/Rbg7xme9aFYzslkI/6ruQ54akqp1dPpjyqldDlZcMb5wFjH2j6yqXPOZwI6K1tpv6eUricbLf4RspH31VwJPCWl9PqUUrXjbUtpld9sSqkPeApZW/o/siwSuxjKctASJug6b6LO/68hOwdXO/7dD7wqpfSqWsqTJEmSJEmSJkNkU2JLky/vvDyRrHNiKVkn4Vrg9ymldaO9twnbPofCfM8ppeL873PJRogeAMwnS1V8cUrplnFuc3+yTskVZKP+uslS8N4OXJdS2jKe8musw95kHTCryEbPdZB11t0
DXJVSamhqh4g4jCyd9gqykeYbyTp+L50uHTjNEBHLgbPJRtjPB7aQBXv8NqW0u9XKfTiKiA6y/Tc/X/T4ZqdSz39njwb2zxetA25KKV3RzO1MhKk8Lo9Sp2PIsjwsJ0vPv4NsGpo/1RrMFBGryH5De5N1uq8Frk4p3TgRdZ5qEXEo2Xe4jGxqkT1kQWC3kp1vdkxSPVbRIvs9ImaQdTofSrZf9pCd3/8wnYPiWvE32wom+zpvIs7TEdFG1maPJrtm2wjcBvxuoq+tHt/2F/4HUePSd+5JU10FTXMf+ny1hGFSbd5x0KljryRJE6h96ZKproKmuf7NE95VIEmj+uXAd+oegGrQgR6WRrsZLemhKyKeTzYHOWTza69IKTVrigVJUgvwOm98DDrQeBl0oPEy6EDjZdCBpKlm0IHGy6ADSVOtkaADp1eQJE1rtU75kI+6/mhh0f8YcCBJkiRJkiRJkjQ+Bh1Ikqa7f46Iz0bEY/MU6sNExLyIeDVwBdkUIJDNs/2ByaykJEmSJEmSJEnSQ1HHVFdAkqRxmgu8PH/0RMRtZHNfQzZ3+5FAe2H9BLwhpXTDpNZSkiRJkiRJkiTpIcigA0nSdDdQeD4TOHqUddcBr00p/d/EVkmSJEmSJEmSJOnhwaADSdJ097fAT4DHAScBB5NlOJgFbAc2AVcCvwS+llLqmqJ6SpIkSZIkSZIkPeQYdKCHpZTShUBMdT0kjV9KqQf4Wf6QJD3MeZ0nSZIkSZIkTa62qa6AJEmSJEmSJEmSJEmangw6kCRJkiRJkiRJkiRJDTHoQJIkSZIkSZIkSZIkNcSgA0mSJEmSJEmSJEmS1BCDDiRJkiRJkiRJkiRJUkMMOpAkSZIkSZIkSZIkSQ0x6ECSJEmSJEmSJEmSJDXEoANJkiRJkiRJkiRJktQQgw4kSZIkSZIkSZIkSVJDDDqQJEmSJEmSJEmSJEkNMehAkiRJkiRJkiRJkiQ1xKADSZIkSZIkSZIkSZLUEIMOJEmSJEmSJEmSJElSQww6kCRJkiRJkiRJkiRJDTHoQJIkSZIkSZIkSZIkNcSgA0mSJEmSJEmSJEmS1BCDDiRJkiRJkiRJkiRJUkMMOpAkSZIkSZIkSZIkSQ0x6ECSJEmSJEmSJEmSJDXEoANJkiRJkiRJkiRJktQQgw4kSZIkSZIkSZIkSVJDDDqQJEmSJEmSJEmSJEkNMehAkiRJkiRJkiRJkiQ1xKADSZIkSZIkSZIkSZLUEIMOJEmSJEmSJEmSJElSQww6kCRJkiRJkiRJkiRJDTHoQJIkSZIkSZIkSZIkNcSgA0mSJEmSJEmSJEmS1BCDDiRJkiRJkiRJkiRJUkMMOpAkSZIkSZIkSZIkSQ0x6ECSJEmSJEmSJEmSJDXEoANJkiRJkiRJkiRJktQQgw4kSZIkSZIkSZIkSVJDDDqQJEmSJEmSJEmSJEkNMehAkiRJkiRJkiRJkiQ1xKADSZIkSZIkSZIkSZLUEIMOJEmSJEmSJEmSJElSQww6kCRJkiRJkiRJkiRJDTHoQJIkSZIkSZIkSZIkNcSgA0mSJEmSJEmSJEmS1BCDDiRJkiRJkiRJkiRJUkMMOpAkSZIkSZIkSZIkSQ0x6ECSJEmSJEmSJEmSJDXEoANJkiRJkiRJkiRJktQQgw4kSZIkSZIkSZIkSVJDDDqQJEmSJEmSJEmSJEkNMehAkiRJkiRJkiRJkiQ1xKADSZIkSZIkSZIkSZLUEIMOJEmSJEmSJEmSJElSQww6kCRJkiRJkiRJkiRJDemY6gpIkiRJkqSHho4LrpzqKmiae8dBp051FTTN/f2dV091FTTN/dPBJ0x1FTTNDazae6qroOlu85aproEk1c1MB5IkSZIkSZIkSZIkqSEGHUiSJEmSJEmSJEmSpIYYdCBJkiRJkiRJkiRJkhpi0IEkSZIkSZIkSZIkSWqIQQeSJEmSJEmSJEmSJKkhBh1IkiRJkiRJkiRJkqSGGHQgSZIkSZIkSZIkSZIaYtCBJEmSJEmSJEmSJElqiEEHkiRJkiRJkiRJkiSpIQYdSJIkSZIkSZIkSZKkhhh0IEmSJEmSJEmSJEmSGmLQgSRJkiRJkiRJkiRJaohBB5IkSZIkSZIkSZIkqSEGHUiSJEmSJEmSJEmSpIYYdCBJkiRJkiRJkiRJkhpi0IEkSZIkSZIkSZIkSWqIQQeSJEmSJEmSJEmSJKkhBh1IkiRJkiRJkiRJkqSGGHQgSZIkSZIkSZIkSZIaYtCBJEmSJEmSJEmSJElqiEEHkiRJkiRJkiRJkiSpIQYdSJIkSZIkSZIkSZKkhhh0IEmSJEmSJEmSJEmSGmLQgSRJkiRJkiRJkiRJaohBB5IkSZIkSZIkSZIkqSEGHUiSJEmSJEmSJEmSpIYYdCBJkiRJkiRJkiRJkhpi0IEkSZIkSZIkSZIkSWqIQQeSJEmSJEmSJEmSJKkhBh1IkiRJkiRJkiRJkqSGGHQgSZIkSZIkSZIkSZIaYtCBJEmSJEmSJEmSJElqiEEHkiRJkiRJkiRJkiSpIQYdSJIkSZIkSZIkSZKkhhh0IEmSJEmSJEmSJEmSGmLQgSRJkiRJkiRJkiRJaohBB5IkSZIkSZIkSZIkqSEGHUiSJEmSJEmSJEmSpIYYdCBJkiRJkiRJkiRJkhpi0IEkSZIkSZIkSZIkSWqIQQeSJEmSJEmSJEmSJKkhBh1IkiRJkiRJkiRJkqSGGHQgSZIkSZIkSZIkSZIaYtCBJEmSJEmSJEmSJElqiEEHkiRJkiRJkiRJkiSpIQYdSJIkSZIkSZIkSZKkhhh0IEmSJEmSJEmSJEmSGmLQgSRJkiRJkiRJkiRJaohBB5IkSZIkSZIkSZIkqSEGHUiSJEmSJEmSJEmSpIYYdCBJkiRJkiRJkiRJkhpi0IEkSZIkSZIkSZIkSWqIQQeSJEmSJEmSJEmSJKkhBh1IkiRJkiRJkiRJkqSGGHQgSZIkSZIkSZIkSZIaYtCBJEmSJEmSJEmSJElqiEEHkiRJkiRJkiRJkiSpIQYdSJIkSZIkSZIkSZKkhhh0IEmSJEmSJEmSJEmSGmLQgSRJmjARsVdEvC8iLo6IzRHRGxEpf1w91fWbLI3sh4g4PiI+GRHXRcS2iBgovOfjk/sJJEmSJEmSJEmqrGOqKyBJkh6aIuIM4EfA4qmuy1RqZD9ExOuAjwPtE1QtSZIkSZIkSZKawqADSZLUdBHRCXyb4R3tNwIPAP3533dMdr0mWyP7ISKOA/6NoYxUPcA1wINAypfdNEFVlqSHhO7UxRpuZhNr6WbP/2fvvsPjusrEj39fS7Lce3fiOI4dpyekAAkEQgtlKQsLLOzChs7Sl96WXhd+hLp0lt57L6GFlkAKIb04ie3EjnuXbdXz++NeSVfSjDQzaqPk+3meeTRz594zZ+6cOTOa85730EgTs5jHClYzLxbXXG5Hamc9N7GNTRzmIA00MIPZHMExLI4jSh7TlbrYzO3sZTcH2EMrh2mnlUk0MI0ZzGcxR7Ka5phac7008ka6DXWlTnaznb3sZh+72Mdu2jgMwGncnwWxpKJytq
Y7uZNbOcBeOulkCtNYxHJWspbGaKq6Xho99dYP7WIrO9jCXnZxkP100UkTzcxiLstYyaJYXnOdNDp2be/gqx/fzaW/bWH7lk5mzJzEcac28y/PmsMZ95tWdXl7dnbyh18e4Mo/HeSW61rZvqWThgZYtKyR08+ZxhOfNZvlKycPWsbN1x7me1/Yy9V/O8SOrZ1EwIIljZxy7yn8yzPmsPqE5lqfrkZBPfVDAPvTHvayk33sZh+7aWEficRijuDkuG/N9dHoaW0/wO13/ZEde2+htW0fjQ1TmDV9OSsW34f5s1ZVXV5XVwe79q9nX8tm9h3czN6WTbS1HwDgXmv+nQWzVw9ybCebdvydfS2b2H9oC63tB2jvOMikaGTalHnMn3UMKxbdh+bJM2t+vhp59dYPdUspsZn1bOUODrCXdtqZTDPTmMFcFnEUx9IQzgMab/XWfvwcG1qklIbeS5IkqQoR8QTgu/nNLuChKaXfjWOVxkUt5yEiPgK8JL+5Cbh3Smnz6NVSkno9bNKTJvw/iPvTHq7kD7TTBkADjXTS0XP/ak5iZRxXdbmH00Gu4GIO0dJTbhedpDwe7AhWcVycPuC41nSYP/KTnttB0EAjHbT3bGukiVM4m3mxqOp6aeSNRhvan/bwV35d8r5Kgw5uSFewiduBrB1NoqGnXlOZzpmcZ/BKnai3fuj6dAWb87YDA9sPwCKWcxL3YVJM/JVY33jbVeNdhWG79YZWXvHvm9i3uwuA6TMncaili64uiIDnvHo+//aC6hLKPXTNOjp7X3KmTg862hPtWTNlcnPwmvct4iGPLT1g94Mv7+Wjb9tOV2fv/gBtrVn7m9QA//X2hTzm32ZXVa969K5Vp413FYat3vohgEvTRRxg74Dtd8fBmjjjxPGuwrDtP7iVK27+Iu0dhwBobGimo7ON7rkQq5c/hKOX3r/KMrdw6fWfKnnfUEEHre0H+MM/PtBzOwgaGprp6Dzcs62xoZlTj/lX5s06uqp61aN0xXXjXYVhq8d+CKA1HeIq/sx+9gCl/z+7P49iSlQf4KeRU4/t5570OQZwUde3o9pjzHQgSZJGw70L1/90Tww4yNVyHorHfM6AA0mqXGfq5B/8hXbamMkcTuQsZsRsOlI7t3E9G7mFdVzLzDSH+RXOLIdsJszVXMohWpjCNE7i3syJBXSmTu5gHeu4hju5jZlpDsuj76yvSUziSFYzl4XMZh6TmUJE9Mw8vpmrOch+ruFSzkmPoCkGn2Wq0TVabQjIZ+bMzS/zuJpLKj72znRrT8DBGk7mSFYzKRrYk3ZwLX/jEC1czaWcxYOqqpNGXj32Q4kumpnCMo5mEcuZwWwigtZ0iNu5kTu5lW1s4lauZQ2njPQpUZVaD3fxxufexb7dXaw5sZnXX7iIo49tpmV/F1/6yC6+9dk9fPb9O1lzYjNnPaDyAZHODjjl3lN41JNncdYDpjFvYSOdnYnr/36Yj7xlO+uub+M9r9zKyjWTOeb4vhkL1t/S1hNwcOb9p/KiNy/kqNVZdpXbb2rjo2/bwVWXHuIjb93O6febxvKjzLwynuqxH4LsO9EM5vR8Fm5nEzvZOpJPXSOks6udq9Z9nfaOQ8yctoSTjn48M6YuoqOzlds2X8yGrZewbtNvmDVtKfNnH1NV2Y0NU5g1bSmzpi9j1vTlXH3rtyo6blI0smLRfZg78yhmTT+C5qYZ2Xfqrk527b+Nm+/4FS2Hd3D1bd/mfie9lKbGKbU8dY2Qeu2HOlI7V3AxBznAdGaxhpOZx2ImxSQ6Uyct7GUrm5jkaqfjql7bj59jQ5v44cuSJKkezS9cv2PcajH+ajkPnjtJqtEmbsvTIzZyKvdjRmSzLRujiWPjVBayDIB1XFtVudvZzD52AXAq5zAnFgDQEA2sjLUcSTYr61aupyt19Tm2KSazNk5jUSynOaYSkU0WmBSTWBBLOY37AdBOGzu4q8ZnrpEyWm1oBrN5II/l9HgAq+PkqlLZd6VObuN6AFawhqNiLZPydK9zYgGncDYAe9nJdmMVx1099kNHcAz345EcEycyM+b09EPNMZXj4l4s5SgA7uBWOlMnGl8//to+tm7qYOr04F2fXcrRx2YBANNnTuIFb1zA/c+fTkrwmffvrKrcD31jOR/+5hE8/F9mMW9hNg+toSE4+cypvP9Ly5k7v4HODvjO/+0ZcOzvfrKfrs6sDm//5FJWrplMRBARrDqumXd+einTZgQd7XDJb1qGfQ40PPXYDwGcxYO5bzyUE+IMjohVTMZB4Xp15/YrONy2l4ZJkzlt9VOZMTXLxtXY0MyxR57PwjnZ7OJbNv2mqnJnTF3Meae9hjPW/gdrjngoi+ceX/GxTY1TWLviESyaezxTJs/s/U49qYEFs9dw2uqnAtDecYjte2+qql4aefXaD63j2jzgYCZn8SAWxNKeLE8N0cCsmMeaOJnJ4XJB46le24+fY0Mz6ECSJI2G4tSWjrJ73f3Vch48d5JUoy1sBGAJRzKlRJr5ozgWgP3soSXtr7rceSxmZswpUe5aANo4zC62VVXnaTGDxrzrb+VQVcdq5I1WG+oenKvFLrbRRisAK/LHL5oVc5lHNhjQXX+Nn3rsh2bHvJ5AlVKWsRKALjppYV/FddLo+PUPs3bxkMfOZOGSgUlq//V5cwC45dpWNt7aVnG5p96n/PIrc+Y3cJ8HZVkTbr6mdcD9u3dkwSjLVzYxdfrAn5Onz5zE8pVZpp7DBwf+SK+xVY/9EFDz56DG3pad1wCwZN5JTJk8a8D9K5ecA8D+g3fRcnhHxeUO5/vQUKZNmUdjQzYA2NpWebvW6KjHfqgttRYyh51CY5iVp17VY/sBP8cqYdCBJGlERMTkiDg/It4TERdFxIaIaImItojYGhGXRcSHIuKsMapPc0Q8PSK+HBE3RsTOiGiPiD0R8Y+I+GJEPC0iZpQ5/vcRkfLLWyt8zLcWjvn9IPsNKDsy/xQR346IdRFxMCJ2RMTfIuINEXno5SiKiPtFxEci4ur8sVsj4q6IuCQi3hYR5RfX6/e8gAsKd11QeL7dl/UjWO+GiHhCRHwmIq6NiO35a70vIq6PiG9ExPMiYv7QpY3PeShug3yqWebzJY75whCP/4D8vfb3/L3XFhHb8vfgeyNqWPAsK/de+fv7bxGxOT8vO/Pz9OGxeG9HxKqIeEtE/LlQhx0RcU1EfCwiHlBhOSv7ndOV+fapEfHMiPh1RNyRl78tf01fFhGjEsIcEdMi4nERcWH+WJsi4lBEHM7b3p/z127taDx+oR5HR9aP/T4ituSP391v3hARP46IN0fEmUOUMykizs1fq59GxK0RsT8va3tkffAnI+IhVdRtfeH1ekaFx3yh0vdNv+POjoj39WvrLXkdfhYRr42IgaN+pctqjIh/jYgvRfZZtCsvb1NE/CYiXh0R8yosq9hmz8u3Tcvb7M8i4rbIPjsq/ty6u+lI7exjNwDzKZ1icTbzewb4qwkO2M32vNzFJe+fElOZzqx83+qCDlrSvp71Q6cyvapjNbJGsw0NR/fjzGB2yR/do
Le+u/K2qvExUfuhJnqXdelez1bj4+CBrp5B/3JLJ5xwrylMn5n9pHvlX0YuWG3WnCwwpbNEzMCSI7I2u2l9O4daBu7Qsr+LTeuzAIg1Jzk7dDxN1H5I9aOjs5V9B7PMSfNnl/75Y/b0I2hsyN7ru/bdPmZ1G0zLoR10dB4GYGrz3HGuzT1bvfZDW7mTRBdNTC5bL42/em0/qszAcFlJkqoUEY8GvgSU+1a/KL+cCbwsIr4PPDOltHeU6vN04D1Aqby1s4FT8st/AAciYmlK6cBo1KUSETGX7Pw9ut9dU8lS7Z8FvDwinp1S+tEoPP4i4LPAY0rcvSS/3Bd4fUR8FHhtSqkuZuBHxCOBD0GJaX8wEzg+v/wr8JGIODWlVDLP3kQ+DwCRBUN8Eig1kLswv5wJvCoi/hd4ZSX1z8/L/wJPLHH3vPxyMvDSiPg68NyU0ojmVI2IRuDdwMuA/oudz88vJwEvioifAc9KKVW1qFpEnAJ8g6y9FC0EHphfXhwRD08p3Vb9syj7uM8DPgiUW5C3u+2dA7w6Ij4LvDSlNHAK2vDq8Sbgvxl4fiHrN2cDx5H1U2+LiH9KKf2sRDlnAj/O61zKgvxyCvD8iPgj8JSUxj8feB5I8HFKv4cmkwUEHQU8EnhvRDwzpfSFQcp7OPBRYE2Ju5fllwcDb4iIV6aU/q/K+p4OfJ3S/d89Ugu9Mxy6fyjoLyKYlmawj90Vz+ZtS4dpJxtImVGm3O77WthXUbkpJdo4zG52cGueEnIK01iQp4nU+BitNjRS9SpXp+y+mQC000pbajUl7DiZSP1QUfcPsEH0tCWNjw3r2kh53MfKNaW+lsGkScGRq5q48R+tbLil8kwHQ/nH37IAhqOPHfi4D/vnmXzpI7to2d/Fm//zLl705oUctTr7sX/9zW189G07OHggcea5U7nPeQbQjaeJ2g+pfrQc6s1cMGPqwpL7RATTpixgX8smWg6NX8BjSom29gPsPrCBdZt+C8CUybNZOMd/kcZTvfZDe9mZ3z+bRBe3p5vYwh0cpoUGGpnFXI7gGBaG/5ONp3ptP6qMQQeSpJGwkr4BB/uAdcBeoAFYCqwGunMQPR5YFRFnp5RGNI9wRHwQ+K9+mw8BNwK7gRl5Xbpnls5gfD8PG4AfAufmt3cBN5BlIzqO3vO6APhuRDwppfSDkXrwiFgB/AYohq93AtfldVlO74BZE/AK4PiIeHyJQc+/AYfz6ydDz8jJZuCafvtWNSBcpu6vAN5P38xN7WSv9XayQdyV9A5+NpMFcpQqa7zPwy8L2x4IPYuCXQts6ndM/zKIiLPJBnqL2RwOAdeTvQ/nkQ3KN5K1uZcCayLisYMFHuQz638Bec7dTHte7k5gVv4cu0c2ngqsjYjzUqoiv9kgImIy8D3gn/rddStwBzCH3ucG8CjgLxHxkJTS+gofZi1ZwMGc/PYtZOd9GnAqvc9vNfCriDglpXSw2udSxrH0DTjYCdxO1o82ASvozX4xCXgesCIiHpVSGpGpiBHx38Db+22+A9hA1pZn5HVYWri/XMa07iCJbgfJzuceoAtYTHa+u3M8nwv8NSJOTymN269VeeaA79PbBrqtI3vvBtl7eRW9n2X99y2W9wKygINiLuudZOfiEHAEvX3KHOBzeQDcuyqs8irgA4U6bATWk/Ud99hf2Fp7ul5oHmRtxWamArv77F95ueVTU3ev5zhYudeny9nM+gHbZzCHU7gPDYOkP9foG602NFzdjzN0nTJtHGYyBh2Mh4nQD/XXkTpYTxaTu4jlphoeZzu39X41X7C4/L+p2X2t7Nw+MjHQf/rVAW66Ovu35pFPGhh4snBpI2//5BLe8dKtXP6nQzzz/I1Mbs6+ErW1JuYuaOBpL5rLf7y0ogROGkUTsR9SfWlt7/1XvrmpfCBa932t7WM/h+e69T9i846/D9g+c+oSTj7miTRM8rNsPNVrP3SQrK020MjlXMw+dhEEDTTSThs72cpOtnJkWs3aOK2iOmnk1Wv7UWUMOpAkjZS/A18EfppSWtf/zohYQjbQ+Wqyz59TgXeRDd6OiIh4FX0DDjYAbwC+l1I63G/fE8hmv79gpB6/Rs8nm0m9m2wW99e7B4Ejogl4Otks6Flk5+1LEXFCSunO4T5wRDSQzZItDrR/Fvjv4izx/Fx9nGwwHLJZvu8CXlUsL6X0msIxX6B3aYGLUkrPGG59+9X9iWQDbt12AG8BvpJS2tdv36PJZum/sExZ434eUkqPKByznt5B5g8MNpM63/8I4Ef0BhzcAbwW+G5Kqa2w31zgdWTvwcjr/1ay2e2lyp1JFsiwMt+0K9/3S8VMBhExDXgR8E6y2eCnA58AnjZYvavwDvoGHPwZeGFK6epCHRYCb6P3/bwK+HpEnFthNoqvkA3efhd4XbEPi4jZwIXAs/JNxwAvJ3vtR0ICLga+Bvw8pXRH/x0iYhXZa/q8fNMjgJcAHxnug+eZLN5U2PQzsiwYN5bZ95HAs/N6l3ML8HngJ8B1KaU+OXjztvgcsvfsdLIB+E8BT6j9mdQuIo4BfkCWzQGgg+zcXphS2tRv37nAY+l9LUqV90iy7CDdwQl/Ivss+lMxUCQi1pC1re4sN++IiMtSSr+qoNofIsvm8kfgZSmlnl/c8kCde2SuyC563+6TKD9435Df10llAzWdFZfbOGS5jTQxmWa66OpZUmEGcziO05gWzi4eb6PVhoaru16V1AmgY4zqpYEmQj/U341cSSuHaKCR1Zxc8XEaHYcP9X7Fap5Sft3g5qnZfaWWOqjW9i0dXPiGLPbznIdO594PLJ2p4N4PnM7/+/Iy3v7SrWze0E5ba29d21oTB/Z10Xq4i6bJBtCNp4nYD6m+dHa191yfNMjgfffAfmfXyGVcqVRjQzOTG6fTlTp7llSYOXUJa1c8kulTKlpZU6OoXvuh7v+/dnAXAEdzPEdxLI3RRFs6zC1cy12s5w7WMSvNZWkchcZevbYfVcagA0nSSPhCSuljg+2QUtpClkL6arIBXoDnRcTbU0p7hluBPC32ewqbrgQenlLaUWr/lNL1wFsi4r3A2P+H1Gsh2azX81NKlxfvSCm1A/8XETcDvyWb9TyTbHb/U0fgsZ9NlrK923tTSq/vv1NK6fqIOJ9sMLI77fgrIuKLKaUBs+5HWz7o97nCpvXAQ8qlvE8p3Q68P8+CUeq7z4Q8DwWfIcuEAVkWhPNSSrv675RS2g28NiJuJRvgBXhNRHyi/8Bq7n30zsS+Ezi3VOaAfMb/+yPiGuCnZDPg/z0iPpxSumwYz4uIOI6+QR2/Bx7RP7tEPkP+hRGxg94B9PuSBfX8bwUPtQD4REppQGBKvgzMs/NsGA/NNz+LkQs6eNtQy7vkbfv5EXE7vf3cKyPif1NKncN8/PPpXVLhduDxxWCVfvXYRhZc9sU8WKeU3wJrB8vCkLfF90fEJWSvaQPwzxFxbErp5tqexrB8nt6AgzbgcSmlX5TaMa979zmY0f/+fNsX6Q04+BLZch8DXqeU0i0R8Vjgy8C/58dcSJa5YygzgYuAR/d/
vfLbGwc7OCKuKHffQ6PUSioaKcfGqRzLqUC2VuUOtrCOa7ic37MireHYOHWcayjpnmR9upEt+UfGCZzB1DAt/j3NoZYu3vT8u9i9s5PFyxt5zf8sKrvv5z+4ky99ZDdHrZnMuz+3lBNOy2YCXn/VYT713p384Mt7+fulh/jot5czc7aBB5JGz9ojH87aIx8OQEdnKzv23sItd/6ay2/6PEctPptjjzx/nGuo+tT7M8USVnBMnNhze3JM4UTOpCXtZR+7Wc+NLMWgA6la5dKiSpJUsaEGzPrt+w3gL/nN6cDDR6gar6V3QLkFeGK5gIN+9Tk0AoN2w/W+/gEHRSmlP9F3RvO/RMTiEXjclxauX0Pf2c7969AGPJMsVTpkg2MvLbf/KHsx9Cy+1QU8tVzAQVFKqaN/xovcRD0PRMRpZLPeIVv24MmlAg6KUkqfJhsYhiyQ5fklyl1C9jy7PWOopQryQdovFDa9ZLD9K/QSer+vHsrr0X85i6K3AlcVbr80IspPEeu1noHLsvT3/sL1VREjs8hfNf0nWSBId5aTFcCZI1CFIwvX/1Yu4KC/cv1mSulgpcs+5H3bN/ObQbb0zpiKiHPpXd4G4E3lAg76K/PaPYcsmAyyJUCeN9hnTH6uXki2/ATAiRHx4Aoevh14dqWv1z3FpEJcWRflP9o78/saKozBb6i43I6qym2MJpbEkZzJg2igkY3cwraSMWAaK6PVhoaru16V1Amg0fkl42Yi9UN3pttYx7UArOEUFseRQxyhsTBlau9X19bD5b9SteYZEaZOr/2n3bbWLt74vLu46epW5sxv4H1fXMbseaWDBS76wX6+9JHdzJ3fwIe/uZyzHzyd2fMamD2vgbMfPJ0Pf3M5c+c3sOGWNr72id0110nDN5H6IdWn4tIEXYWsB/11Z0RomDS57D5jobGhmSXzTuKs455FY0MzG7ZewrbdN4xrne7p6rUfKt5e0SfZKYXt2dybFvbTOrIrAqtC9dp+VBmDDiRJ4+GSwvV7D7ewiGgEnlzY9IV8dvtE0EmWsn8oH6U3JLeJLL13zfLMECcWNn14qDT0edr3bxY2/fNw6jAM/164/vOU0qW1FjTBzwPAMwrXf1IqJX4ZXyxcf2iJ+58CPYtB/z2l9Jsayn1I2b0qVxyE/m5KacNgO+dp/D9Y2NT/9S3nMxUM3v6JLMilWyXljqj8+f21sGnY/SdZMEe3UwbJYDBaRvTzoAbF/mQHw1+y4hmF6x8ZIkgGgHxJmO8XNpV6T/b3s1JLcVQqpXRGuUutZdaD4nqPg62/2Jo3+8HWh+xbbu96j62U/+GpLX/MSsvtNiWmsojlAGxmfVXHamSNVhsaruYK1hUtts3JY1QvDTRR+qG70gZu5EoAVnECR8WxFdVDo2/B4t4fuHdsLf9vSfd98xfW9oN4e1viLS/cwt//cogZsybx/i8tY8Ux5QcOv/f5PQCc/4SZzJ478Ovi7LkNPOzx2TJBf76oZcD9GjsTpR9S/Wqe3LvkV2v7/rL7dd/X3DQgAdy4mDJ5FgvnHAfAph1XjW9l7uHqtR8q3p5G6aXtitsPD/IYGj312n5UGUM1JEkjKl9b/WHAqcAyshnpzf12K4aTHjECD3sGUPwv51sjUOZYuTJPWT6olNKGiLiO3rTb9yFLq1+rs/vd/nGFx/2Q3hnwCyJiTUrplmHUoyp5hoe1hU3Dfa0n5HkoeGDh+kVVHPePwvUzIiL6zU4fiXKXRcSylNLmKo7vERFHAUsLmyp9bX7U7/bZkE/jK+/PQxWaUjoYEbvoXcpiToX1qVi+hMODgVOAxWQp9Pv/+ltcbHkk+s/iEhjHA1+JiNcMZ0C7W0TMIvs8OA04iuz5TKF36QEgH2nNjMTzqVaxrf+4TDaUikTEHPq+PrW+dyrJYPGHKsq+x5he+IGohX19bndLKXGQA/n+swbcX8rkaKYpTaadNg6wj/ksKbnfAfZVVW5R9w8gh6gm+YlG2mi1oeGawSx2soWWvI2V0kL2w38TzUyO/l+9NVYmQj+0Nd3J9WRJ1lawhlVxQkV10NhYccxkIiAlWH9LW8lAgK6uxB23ZTOMj1pT/Qzjzo7EO162hUt/e5Cp04P3fn4pq08YvN/YcGsWn7vkyPLruy9dkd235U7XQB5PE6EfUn2bPmVBz/UDh7b3ud0tpcTBw1ly0elTFw64f7xMacra+6HWQRNAapTVaz80ndnsYEtFj6XxU6/tR5Ux6ECSNCLyAcL3k81MrubzZc4IPPzx/W6XXaqgDl1T5b7dQQfDnY5UDPzYUkngQ+7qEuWM5WD7SL/WE/U8kC8bUFz7/TkR8bgKD59auD6ZLDhob2HbKYXrj4moeZHxhUBNQQcwINfdP0ru1U9KaU9EbCRbfqBUOaVU+l9nC71BB9MqPGZIEXEScCHZDPdKloPoNme4j51S+nNE/AU4J9/0FODJEXEp2TIcfwEuSSntqbTMiJgPvBv4D6gqNHxOFfsOW0RMom9fOtz+5GT6ZpL7SERUunxPMfiikl/thlxS5p6oMZqYleayj93sZGtP9oCiveyig2ygZh7l163uby6L2Mad7GIrR5X4CD6cDvUMCFdTbrdDZLNCTeE4vkazDQ3HXBaygZs5wF5a0yGaY+qAfXaydUzrpNLqvR/anjZzLX8lkVjOKo6t+SueRsu0GZNYe3IzN17dyhV/OsgDHjFwBvENVx2mZX+WgOv0cwb2B4Pp6kq851Vb+eMvWmieErzrM0s58fShy5g0KYDEts3lU61v3dSeP4dqvs5qpNV7P6T619jQzKxpy9h3cDO79t3K4rn9f4aBvS130tGZJXWbN+vosa5iWYfa9gDjv+TDPV299kPzWMQGbgLgIPuZxbwBxx+kN7vH1JH72UdVqNf2o8r4i4Ykadgi4izgV9Q2YDQSU7GK3xJbUkoHR6DMsbKzxn3nDvNxi8dvr+K4/vsOtx7V6v8fQaVBAuVM1PMAMJu+3+VOH2ZZxaCD+YXrxzMw2KOacmvV/5xW+/p0Bx1U8toMmQK/hBH5NTUi/gn4LrX1hSM1lfWJZJkkulPrTyILQugOROiKiCvJMot8LqVUdtpIRKwCfkfv+a/GWE/NnUPfIIHh9ifz+92uZJmEUip535Sf7nwPt4QV7GM3W9jIqnT8gMHZDdwMwEzmMj1Kp9QsXe6RbONOdrKV/WkPM2NOn/s35uVOZgpz+/040ZW6mBTlVzY8mPazPY/PmsPAmWQaW6PVhoZjHouYTDNttLKBmzmWvgPF+9MeduVBB0s4ckzqpPLqsR8C2Jm2cg2Xkkgs5SiO415VPjONlYc8biY3Xt3Kr3+4n/946TzmL+r78+03P7MHgGNPbh50SYT+Ukp84PXb+c0PD9A0Gd7+ySXc6+zKBlSOOW4yV192mN/+6ABPf/E8pk7v+7l2qKWL3/04m3F4/GmmIx5v9doPaeJYMv9k9h3czF07r2HV0gf2WXIBYMOWbJW8mdOWlsyEMBqG+k7dcngn2/ZkK07OmVnLv6MaSfXYD81lIc1MpZVDbGQdJ5VY4XFjPp9
oFnOZHH6ejZd6bD+qTPleWpKkCkTEdOB79AYctANfIZstezLZIPGUlFJ0X4C3jXA1it8Ca06NPU6GWke+qDgwOtzBueLxtdYBqpvFPBL6P95wX++Jeh4Apo9gWf2/E45U2cP5rtm/jdf6+tTtf4kRsRz4Jr3P9SDwSbKMMceT9avN/frPL450PVJKd5Et2fIs4FIg9dtlElnK//cB6yPiP0uVk2cO+Ba9AQeJbCmSZ5AtsbAAmNrv+TyzRFFjZaT7k7F833SN0GPd7SxnFVOYRicdXMWfOZCy+IyO1M4t6Wq2swmA1Zw44Nhfp+/w6/Qdbk3XDbhvIct6ZsJczSXsTVkcYFfqZEO6uefHqWM4YcCPoTdzFTelq9iTdtCZepNftKc2Nqf1XM7FdNFJA42sYM0InAUNx2i1Iche87bU2nPp1kl7n+1dqe9bfFI0sIosBf5GbmFDupmuvC3tSTu5muyH/9nMZ2EsG+YZ0HDVYz+0J+3gH/yFLrpYzJGcwJlkSbNUjx7zb7NYvLyRgwcSr3/2Xay/JfsafPBAF598zw7++IssO85zXtU/3hEedPQ6HnT0Or7woYGx7f/7jh387Fv7aGiEN390Cfd+YOVfXR77tCwmcuvmDl7zjM3cfO1hOjsTnZ2Jm689zGuesZmtm7NlFZ5wwZxqn7JGWD32QwCdqaPv513+lbaLrj7bO5JLdIy3IxaewZTJs+nsauPv677OgUPZHICOzlZuvuMitu25AYA1yx8y4NiLLn8bF13+Nm7d9PuSZbd3HKKt/WDPpVtHZ2uf7V1dfZPG3bTx59y48efsOXAHnV0dhfIOs3nHVVx+0xfo6uqgYdJkjlrcfyVNjbV67IcmxSTW5CsibmEjt6br6EjZbPm2dJjr0+XsYzdAz3dvjY96bD/g51glzHQgSRquZ9K7Dnc78LCU0sVDHDPS08J2F64PZ2b1cDXUcEw156K4mNTesntVZs8I1KF/OWNhd7/bs2FYC2DvKVyfSOeh1GM+IaX0/REsu3u6witSSh8coXKrrUPRTCp/rYuvT/9y6snL6R2o3guck1K6fohjRmVabUqpE/g88Pl8eYRzgfsB55FlQOgemZgJfCIiIqX0iX7FPIrebAkAT0spfW2Ihx7NacJD9cml+pPh2NPv9ryUUv/H0ChriAZOTedwJX9gP3u4lF/RkBrppPef/tWcxPwovX5jORHBKem+XMHFHKKFy/gdDamRLjpJeZzOclaxPFYNOLaTTu7iNu5gHQCNKVvzujsdJGQzKU7hvkwJU3iOt9FqQwB/5dccZmBCrmv4a5/bp/OAAek8j4hj2J/2sInbuYWrWcc1TEoNPfWaynRO4b5V10kjrx77oVu5ji6ywZtdbOWP/GRgiGHuWE5jSZgxYzw1T5nEOz+9lFc+bRO3XNvKM8/fyPSZkzjU0kVXF0TAc149n7MeUPlnxtZN7Xz389m/kBFw4Ru3c+EbyycS+95lfdOlP+SxM7nhqsN89/N7ufbywzz/MXfSNDn7etjelnrKfdYr5lVVL42OeuyHANZzE7dzw4Dt29nck/UJYClHcSJnVVU3jayGSU2ctvopXHHzl9h/8C4uue7jNDY009HZRvcHyOrlD2H+7GOqLvvS6z/F4baBP2ldc9t3+tw+49gLmDdrZc/tzq527tr5D+7Y9jcgaGzIYvc7Ontjxyc3zeCUVU9iymTXYh9v9doPLYkVHEh7e/qj9dxIY2qivTDPZQ2nsCCW1vCsNVLqtf34OTY0gw4kScP1iML1r1cQcACMeN7XuwrXGyNiVUppuOtdF2dVN1V4TC0p9qtZ/K74jWdrDY9VVEwjviIiGlOqKAyz/3+Uw01HXq27+t1eC3l4a20m6nkgpdQSEQeA7oVeF49g8VvoDToYyXKr0f+cHsPA13+AfLZ98X015q9NFYr954crCDiAke8/B0gp7QR+kF+IiKXAc4HX05sd4D0R8cV+y9kUn88fKgg4gMqfz4j3ySmlQxGxl95gg7UVllvOln63FzMwsEFjYGbM4b7pfNZzIzu4i1YO0UQzs5nLCtYwL2rr1qbENO6THsp6bmIbmzhMCw00MpM5HMExLI4jSh63krVMZya72M4hDtDGYbpITKaZGcxmPktYztE0RqVNW6NttNrQcB0fZzAvLeJObmM/e+iik2nMZBHLWcla21Adqbd+KBUiDNqHSB7VHZyg8bX6hGY+/8sVfPXju7n0ty1s39LJrLkNHHdqM0981hzOuF91A/upEGTS0Q67d1T/Or/4zQs556HT+cnX93H9lYfZlZex5IhGTjpjCv/8H7M58fSpQ5SisVJv/ZAmnpnTlnD2iS/k9rv+yI69t9Dato+mxqnMnr6cFYvvy/xZpQflRsvRS+7P9CkL2L1/PQdbd9HWfoCu1MXkxunMmLqIBbPXsGzBvWhqrNtkh/c49doPrY6TmZsWcge3so9dtNOWp9NfwArWMDsGZhLS2KvX9qPBGXQgSRquowrX/zbUzpHl8TxnqP2qdEm/2+cBww06KK6XPa/CY06u4XHOiohJKaVBU2VHRCNwemHTFTU8VlHx+Cl52UO+fvR97TqBq4ZZj2pdC+ynd3b0ecBvh1HeRD0P3f4CnJ9fP5ssNf9IlXtSodzxcA1Z9pTuEZRzgD9VcNwp9E1zf/kI12skVdt/zoB+C3mPgXz5hbdHxGbgM/nm2WRLMvyusGtVzyd3/wr3G60++RJ6gyXOq7Dccq4GWuhtf2cDNw6zTNWoOaawltNYy2kVH/PQeOKQ+zRGE6s5idU9XeTQpscspjOLlRxX8TEaf6PRhu4fjxpmrWBxHMni0Y8/0wiop37ozDiv4n1VP+YtbOQlb1nIS96ysOJjfnf76pLblxzRVPa+apx+zjROP8dMBhNFPfVDAMfEiRxTIhW26ldz0wyOW/FI4JEVH/OwM98y6P3nnvJfNdVl+tQFHD31/hy9tNJ/IVUP6q0f6jY/ljCf6jOXaWzVW/vxc2xow1lnV5IkqHzGabdHAMtHsgIppa30HfQtud54lTYUrp8y1M75TOBactouobKBrkfRd9ZuJRklBvM3+q5f/vQKj/uPwvUrUkrDWdqgankWgt8UNj0rIpqHUeSEPA8FPy9cf3xEVDoYW02594+I4c4Ar1pK6TD0yTf9tKhs8eELCtfbgEtHtGIjq9r+8+nA5NGoSIW+2+92///Qq3o+EXEClQe1VNsnnwGsqKDcXxSunxsRtf1iAaSU2oFfFzY9p9ayJEmSJEmSpInEoANJ0nBtLlx/wGA7RsQ0YLTWhv9Q4fpZEfGSYZZXnB1934gYavDqHdQ+GPjuiCi79nhENAHvLGy6jb6zi6uWD5J/vbDpeREx6BTMiHg6cK/Cpk8Ppw7D8KHC9eXAu2staIKfB4DPAbvy6zOB/x2hcn8E3JRfnwR8Jm+HY+0zhesnA88YbOeIWAO8oLDpWymlPSNfrRFTTf+5GHj7SFegwkCObjP63d7V73Y1z2cS1bXXYp/8qDzrQ7myA3hPheV+HiguKDrctv7+wvVzImIkguAkSZIkSZ
KkumbQgSRpuIqp7Z8YEY8utVNEzAd+wvDXzC7na/RNlf+hiHh5PrBVUkTMiIhXRsT0Enf/DDiUX58EfKrUQFRk3gQ8exh1vw/ZQNeAoIWImAJ8mb5pwt+dUnFVzpr9D72z/CcDP4mIkovyRcTDgE8VNt0CfHUE6lC1lNLFwI8Lm14REe8bLONBREyOiGdHxMoSd0/I8wCQUtoPvKGw6SkR8fWImDPUsRFxVkR8KSL+rUS5XcDLoWcB4HOBX0TEsgrKPSEiPhYRr67oSQzum/RNT//xiHhEqR3z1/ZnQHc7aKXygefxUuw/XxQRZ5baKQ96ughYMAp1+FBEvL9cmy/UoZHsvdLtMAOXtik+n3tHxAsoIQ9A+wrVLWdQzLIwh77BR8Wym8iWGXlYJYWmlPbRN5jjvsBPI2LRYMdFxMMi4qElyvsz8I3Cpo9FxGvz8zdYeU0R8diI+F1EHDXYvpIkSZIkSVK9GfTHL0mSKvBp4LVkM2AnAT+MiC+TDQpvJVsS4FzgWcB8snW5fwo8dSQrkVJqj4gnk6VjX5DX5ULguRHxDeDvwO68nmvI1hH/J7K1tz9Xory9EfFx4JX5pkcAV+TbbiAbnD6JLM3+acB+4FfAv1RZ9R/kZT+TLKPCZ8jWsg+yFOLPz+vb7VcppQH1rUVK6aZ8YPij+aZjgKsj4v/IljDYDSwD/hl4cl4nyFLWPy1Pfz9eLgAuI6szwKuBf42Ir5G1gZ3AVOBosvTtjyVri/fqX9AEPw+klD4VEfciaysATwEeHRHfBP4AbCKr62yydPP3IhuQ7R7Y/C0lpJR+HhFvoHfg/sHAbRHxvfyYjcBBYBZZxonT8n26M0W8bQSeW2tEPA34M1kwwRTgZ3kdvgfcSTYA/SCyVPbF2e+vTSldP9w6jLIPkWVvaCDri/4YEZ8lCzDYBSwCHpLvMw24g6x/GP6i4L1mk72fXhURl5Mt3XIVWf99kOz8nkK2tMPxxbrnA/ZF3ybLPNK92PjHI+J84Ftkr9VM4N5knwdHAu3Al6ggaCuldHNEfJfePvbZ+bIfnyPL/jINOD0v+xiyrAvXUUHwQUrpwog4p1D2w4BbI+LrZFll7iJ73y8DzgQeR9a3vJy+yyl0ezZwbF6fBuC9wAvz9+TfgO15eXPy/c4Ezid7LaC3j5EkSZIkSZImBIMOJEnDklLaFhEXkM1IbiQb7L+Avuuqd2shGxC9zyjV5baIOJu+GRWOp/bBzzeTBSd01/dk4BMl9msB/jXfr9qgg3+QDZ7+H1ldLxxk30uAJ1ZZ/qBSSh/LMwS8n2ygazrwkvxSyn7g8Smlv41kPaqVUtqdDxL+gN414VcAr6uxvAl5HgpeQDYg/Xay9+AMsoHP4WTgIKX03ojYCnycbMC/mSxgaESDhoaowxV5doMfkA3KBtn7rNx7LQGvSyl9eGxqWLuU0rUR8Qqgu65TgBfnl/62A4+nfJscCWfml6F8lax/7CMPEnkyWbDOtHzzP+eX/trJ2m0nlbfTFwOnAqvz2/fPL/1tJws0quZcPQX4GL3BOzOA5+aXqqSUDkbEA4Ev0NtOV5AFR0mSJEmSJEl3Oy6vIEkatpTS94CHAteW2aWTLAvA6Smln49yXdaRzbh+FdnM2sHcBLwROFCmrINkM14/CXSU2gX4PXDmcJ5XSunLZDO1ryqzywHgHcCD8nT6Iyql9AGygInfAF1ldjsMfBE4IaX0m5GuQy1SStvIsmg8B7h5iN03ks02vnWQ8ibkeQBImXeRBcZ8lWyG+mB2A98hGxD92hBlf54siOd/gb1DlHuALJPJBfRd235YUkq/B04kC84pl1kikc1KPzul9L6ReuzRllL6CFkw0foyu7SRZRA4JaV0RZl9huNTwGcGefyiK4Enp5SellJqL7VDSulSsiUK/jJIOZcA9682a0tKaQvZe/5b9C79UdRJFpxyWrXnKqXUkVL6T7JsHRdTvg+A7H3wBbK2Xq68AymlJwKPJPuc6ByiCuvJgtrun1JaX2m9JUmSJEmSpHoQI7MktCRJEBFBlk76TLKlFPaTpaX+Uz5YNB51OoksnfxCsnT7+4ENwN9TShurKGce2WDUCrKMDpuAS1JKt9VQp98DD8xvvi2l9NbCfSeTBU0sAw6RDZL/NqV0qNrHqUVELAQekD/+TLIU7+uBP+RBGHUrIo4hS92+iKzuLWSz/69OKQ0VlNC/rAl7HgAiYjJZAMVqsuVGmsgCAjYBNwI3pJQGG1QtV24D2Xv8BLL3+FSy87wlL/e6coPRIyUippK9NkcD88ie12ay12bbaD72aMrP7X3J3v9zyAJDNgEXp5T2jFEdlpIFrqwkW46kkez8bgSuTCltqLK844FzyN6Th8g+D/6WUrp9hOr6ILKlPTrJgsz+mFK6a7hl5+XPJwtwWEZ2LlrJlpy4HrgqpTRUEEH/8mYD9wOOIHvvJLLghfXA9dWe29H0sElP8h9ESdKE9sbbrhrvKmiCe9eq08a7Cprg4owTx7sKmuDSFdeNdxUk3cNd1PXtqpf/NOhAkqQxNljQgSRJ48mgA0nSRGfQgYbLoAMNl0EHGi6DDiSNt1qCDlxeQZIkSZIkSZIkSZIk1cSgA0mSJEmSJEmSJEmSVBODDiRJkiRJkiRJkiRJUk0MOpAkSZIkSZIkSZIkSTUx6ECSJEmSJEmSJEmSJNWkcbwrIEnSPU1K6bzxroMkSZIkSZIkSdJIMNOBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJq
olBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJo3jXQFJkiRJkiRJGgnvWnXaeFdBE9wvN1813lXQBPeoey0a7ypogusc7wpIUg3MdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUgaNxGxPiJSfnnGeNdHGm0R8dZCm//9eNdH9SEiziu0izTIfs8o7Ld+DKtYdyo9Z5IkSZIkSZKk0WfQgSRJkiRJkiRJkiRJqknjeFdAklSdiHgr8Jb85sUppfPGrzbS0CJiJXB7YdPRKaX141MbSdI9QWs6zHpuZAd30cohGmliFvNYwWrmxeKay+1I7aznJraxicMcpIEGZjCbIziGxXHEkMdvTXdyJ7dygL100skUprGI5axkLY3RVHO9NPLqtQ2llNjMerZyBwfYSzvtTKaZacxgLos4imNpiIaa66fRsTNt5Q7WsZdddNJOM1NZwFJWchzNMWVYZe9Lu9jAzexmBx200UQz81nMSo5jWswY8vjD6RB3sI4d3MVhDgKJyUxlFnNZzJEsimXDqp9qV6/9UNHGdAs38w8ApjCN+8ejaq6XRt6WbR289yO7+emvW9i0pZPZMydx1r2aedlz5/CQc6dVXd5N69r4zo8P8LerDnPzre1s39nJgZYu5s5u4NQTJ/PUJ8zk6U+cyaRJMeDYw4e7+OmvD/KL3x3ksr8f5rYN7bR3JBYvaOS+Z07hPy+YxXnnVF8njZ7WzoPc1nIF21s30NrZQuOkycxuWsRR005lfnN1fQVAV+pkV9sm9rZvY2/7Nva1b6O16yAAp899NAubVwx6/P72Hexp39JzfEvHbhKJJVNWc+qc82t6jhpb9fZ9qCt1spvt7GU3+9jFPnbTxmEATuP+LIglw6qTRl69taE/pZ/l35+HdgJnsixWDquOE4FBB5IkSZKku439aQ9X8gfaaQOggUbaaGUHd7GDu1idTmJlHFd1uYfTQa7gYg7R0lNuB+3sZnt2Sas4Lk4ve/wN6Qo25TF4QTCJBg6yn/XcyFbu4Mx0Hs0xtYZnrJFWr22oNR3iKv7MfvYAWTtqoJFWDtHKIXaznWUcRQMO2tST29MN3Mp1PbcbaOQQLdzBOrZwB2ekBzAjZtdU9ua0nhu4gkS22lQjTbRyqCcw5dR0P+bForLHb013cj2X00kHAJNoIIBDHOAQB2inlUUYdDAe6rUf6l9WsW2rvlx9fSsPfeImdu7uAmDWzEns2NXJTy86yM9+fZB3vX4+r33J3KrK/MHPW3jz+3b13J46JZjcFGzb0clFFx/ioosP8X9f3cePv7KMWTP7Jlh+7AV38Zs/HOq53dwcNDUGd2zu4I4fHeDbPzrAS58zmw++Y+EwnrVGyv72HVy260e0p2wAtjEm09Z1mO2tG9jeuoE1M+7LqhmV9RXdDnTs5ordP6m5Ttfs/Q37O3bWfLzGVz1+H2phP3/nT7U9IY25emxDk2mmi86y5XbS2fM9exbVfeZOVAYdSJIkTQAppS8AXxjnatSFlNLvgYHTZyTd43WmTv7BX2injZnM4UTOYkbMpiO1cxvXs5FbWMe1zExzmF/FzJWUEldzKYdoYQrTOIl7MycW0Jk6uYN1rOMa7uQ2ZqY5LI9VA46/M93aE3CwhpM5ktVMigb2pB1cy984RAtXcyln8aAROxeqTb22oY7UzhVczEEOMJ1ZrOFk5rGYSTGJztRJC3vZyiYmYZaDerIj3dXz4+gK1rCKE2iMJg6kvVzLZRxgD//gL5ydzmdSlRkq9qc9PT+OLmEFx3Iqk6OZQ6mFG7iCXWzjai7hnPQIJkdzybpdy19JJJaxkqNYy/SYCUBbamUPO3oGpjW26rUf6u8mrqKTDmYxj33sGnJ/jZ1Dh7r45wvuYufuLu51UjNf/NgiTlzbzL79Xbzjwl1c+Mk9vPE9O7nXyc2cf17lgWonrJ3Mu98wnwecPYUTjp3M7FlZv7V9Ryef/8Y+3vQ/O/nT3w7zirds57MX9s3G0dGeWLOqief8+ywe/bDpHLdmMgC3rm/nDe/eyXd+fICPfHYvxx4zmRc8o7ZBI42MztTBlbt/Tns6zMzGBZwy+6HMaJpHR1cbtx64jPUH/8EtBy5lVtMCFgyRnaC/xmhmVtNCZjctZHbTIq7a88uKjw0mMbNxQX78IrYevo2dbXdU+/Q0Dur5+1CWRWhufpnH1VwyIs9ZI6te29C94yGDln1V+jM7uIuZzKk5IGKimTT0LpIkSZIk1b9N
3Janim7kVO7X8499YzRxbJzKwnzG7jqurarc7WzuGVA5lXOYEwsAaIgGVsZajmQ1ALdyPV2pq8+xXamT27geyH4gOSrW9vwQMicWcApnA7CXnWxPm2t52hpB9diGuh8vCziYyVk8iAWxlEkxqaeMWTGPNXFyyR9TNX6628lClnFsnNqzjMqMmM1pnNMzQ+vOPiuRVeY2rieRmMVcTuSsntd+akznFM6hmal00M56bhxwbEdq5/r8x9WVHMcJcWZPwAHA5GhmUSznqDi2lqetYarXfqhPWWkz29nMQpYxn9qXetDo+PSX97Hhzg5mTA9++KWlnLg26x9mzZzE+9+ygMc9YjopwRvfXd2s8cecP53XvmQuZ585tSfgAGDhggZe8+K5vPbF2SzOr3//AO3tqc+x73z9fK77wwpe9cK5PQEHAMesbOIbn1rMg++fZXv6wCd21/ScNXLuOHgdh7v20xBNnD73UcxomgdA46TJrJ11PxY1Hw3ALfv/WlW5Mxvn8+BFz+KseY/l2Jlns3jKMVUdf9/5/8I5C57MSbMfxJHTTqR5kpmdJop6/T40g9k8kMdyejyA1XEyi2L5MJ6lRlO9tqHBtKVWdrIFgKUcVXW9JiqDDiRJkiRJdwtb2AjAEo5kSomlCo4iG0Dbzx5a0v6qy53HYmbGnBLlrgWgjcPsYluf+3axjTZaAVjBwAG8WTGXeSzq8zgaP/XYhtpSayFTxik9P7Kpvh1IeznAXqD39S2aEtNYwpFA9e/99tTGDu4CsmCmiL4JoBqjkSNYlZd9Byn1HfzbzHraOEwzU1nFCVU9tkZfPfZDRR2pgxv5Ow00sJbTKn58jZ2vfS9rF099/EyWLx2Y6PhVL5wDwJXXtHLTurYRe9wzT8vW0z58OLFrT9900+ecNZWGhtLJ6iKCpz0pC3y6fWMHu3aXT1Wt0XfXoZsBWDplDVMaBq5jvnL6aQDs69hOS0flQSIRMeDzqhoRDmVNRPX8fWi4bVJjo57b0GC2sJFEIgiWUF1WmInMnlqqQURMjojzI+I9EXFRRGyIiJaIaIuIrRFxWUR8KCLOGqXHPy8iUvelsH1ZRLwmIv4cEXdGRHu+z3llymmMiH+NiC9FxI0RsSsiWiNiU0T8JiJeHRHzaqjf2RHx2YhYFxEHI2JHRFwVEe+NiDW1P/OKHvsLhXPzhRJ1ujEi9uf1ujoi3h4RAxbUiYhJEfGUiPhJRNyVv7Y7IuL3EfGcqPKbbkQ0RMSTI+IrEXFzROyNiEN52/lFRLwsosR//X3LWJ+/3m8pbH5gsS30uzxjiPIeGRGfiYgbImJ3RBzO283vI+J1EbG0wuf21sJj/r6w/bSI+H/5a78tIrqK7bVEObMi4vkR8Z287ezJ2/CuiLg8Ij4VEf8S0Xf6VkR8tPD4N0cV3xYjov/5O22I/Wuq43BFxJqIeHNE/DF/jQ7nj31DRHw6YohcTsN77CkR8fTCc94fEZ2R9Xkb8zp9NCKeGBHT+h371vw17x9menuZNvv7MnVYEhEXRMTnIuJvEbE9f08eyOvws8j6q/kVPqeV/R53Zb59akQ8MyJ+HRF3RNYfbsvfEy+LiClVnrumyPqLiyJic/66bYysf31O//NVQXnPKNR5/SD7lXtProqId+fvyV2R9UHrI+KrEVF1Pu+IWJu/x6+NrE/bH1kf+4WIeEBhv5L98nBEmc/BEvuVPGd5m3pD3p62RW//972IePxI1LFMfUa0LQ+zLssj+wy8Mm8PByPiloj4VkT8U0TWl5ZrT2XKHPHvR/n7r/vx31rY/qjo/f6yp1z9ImJaRDwuIi7My9qUt/3DkX2+/zmy7ycD/3OtrH5nRsQnIuKm/HXcExHXRcTHovCZUu55DFF25K/Fp/L32Y78XG7J6/2WiDiilnrfXXWkdvaR/fA5n9Lppmczn0ayAdvBBlT62832vNzSszmnxFSmMyvfd2DQAWQzaUoNHhXruyt/HI2Pem1DW7mTRBdNTC5bL9Wf7te8kSZmU/rf6nl5e9jHLjpSR8Vl72FHz5qz88q0qe620sZhWug7MN39g+wilvdkzFB9qNd+qOg2rqOVQxzN8Uyp7l8ajYH9B7q44uos2LHc0gn3PWMKs2dl7/3f/unQiD32JZdnZU2bGixaUF166/lze/fvNOZg3HR0tbGvI+srFjQfWXKfOU1LaIwsW8XOtk1jVjdNTPX8fUgTw0RtQ3exAYAFLL1HZaMbGOooaVAR8WjgS8CAgercovxyJvCyiPg+8MyU0t5Rrte/AZ+A/D/Eofd/OPBRoFQQwLL88mDgDRHxypTS/1VQZlNe5vPou9b2VGA+cCrwX3l5/1tJPYcrIiYDFwIvKnH3yfnlGRFxXkrptvyYI4DvAPfpt/984IH55WkR8eiU0oEK6nAW8H/ASSXuXpFfHg68KSJela/bPmoiYjXZuvD3K3H38vzyQOC/I+IdKaX/qbL8RuBdwKuoILgtH9h6OfDflH5fzQXOyC/PAzYAKwv3fwJ4cX59DfAg4LcVVvf5het/TSldNUp1rElEzAA+ADyLgZ/ZzcBs4DjguRFxEfD0lNLW4T5u4fHPBr4KHF3i7mn55Ujg/mSvwbeBJ4/U4+d1+D/gAkq3pSZgel6HRwJviYhXp5Q+UcPjnAJ8Azi+310L6X3fvzgiHt7dVwxR3gnANxn4vj8yvzwYeGVEPKnautYiIl4CvJ+s3RQdlV/+LSI+A7wgpTTkTzwR8UbgzcDkfnetzS8XRMRngZcMt+4jLSKeDHya7P1TtBx4PPD4iPgp8KSU0oj9AjdWbbnCujwL+BAws99dq/PLk4AfR8QFVZQ5Jt+PImIh2WfYoyrY93nAB8n6qlKW5JdzgFfnbfalKaXWCspuIDuHL6Lvdx7I2tYJwAsi4t30DRasSP7d4RNknyv9Lc4v5wCvjYh3ppTeXe1j3B0VfwCYXuYrcUQwLc1gH7tpYV9F5balw7STzQKcMchX7RnMooV9A8rtrle5OmX3ZW/HdlppS633qB8l6km9tqG97Mzvn02ii9vTTWzhDg7TQgONzGIuR3AMC2NZRfXR2Oh+Haczs+wsumJ7OMg+ZpX5IXVg2VlbncyUsv1FsQ23sK/nsTpTJ/vZA8BM5tKS9nEbN7CLbXTSTjNTmcdiVrKWqTG9ovpo5NRrP9RtX9rNHaxjOjNLZu/R+Lvhlja6J2KeuLb/v2uZSZOCtcc08be/t3L9zcPLdHDoUBcbN3Xwte/t5/99fA8AL3zm7KpnD//hkuxfr8ULG1gw32Co8XKgkLlgRmPpz6SIYHrjHPa2b+NAx66xqpomqHr9PqSJYyK2oQNpb8/37XvS0gpg0IFUi5X0/UF9H7AO2As0AEvJfrDv7gEfD6yKiLNHcvCiKCKeQDYwCJCAG4CtwDyyAcn++7+ALDigGHa8E7gFOAQcQW8wwhzgcxGxNKX0rkHq0EA2YPeEfnetA+7MyzmZbMDrYxExcvnbBvcZ4D/y6zuAm4CuvC5z8u1HAr+NiBPJBib+QO8g63qyAeSpwGn0DrA9EPg82cBMWRHxMOD
7ZINJ3VqA64HDZG2lO6PAfODzEXFkSukdJYq7mGyQYTXQvfDZbuBvZR5+QLhxRJwKXEQ2kNqtDbgW2E/Wvrs/CacD742INSml55R/lgNcSO9AYytwHdn7ZCn0zYGUB6p8lYHncR9Ze9xLFkizlt7BsTnFHVNK10fExWSvCWSBBEMGHUTEAvq210+W2W/YdaxFRCwGfg7cq7C5C7iR7P09lWxAuzvX3cOASyLiASmlO0fg8Y8DflUoH7I2chOwh+y9vICsPXbn+O3/y8A64Jd5XR9Q2P4Hsr6mv6tLbDulX7kbgc3AAbI2eizZe4f89scjYk5K6T3ln90Aa8n6rzn57VvI3j/TyIKlur91rgZ+FRGnpJQOlissnzH9O8hzZWfagGvyeh9NFmx0HFlb/a8q6lq1iHgd0H0+Wsne7/vo29cDPJesn3zDEOW9F3htv813kb3ek8kCN2YBzyE7h+3DewYjJyKeAnw9v9lB1j/tJHutTqT3s/ufgM8B/zaCDz8WbXlI+XeAj/fbvJPsvZ3I3g8LgMcAPyT77KnESkb/+1Ez8FOgO1PCLuBmsja2qsT+x9I34GAnWeaVfWT91gp6P/MmkQWNrYiIR6VBcublgWhfAZ7S764NZN8bppH1z1PJgtWqmuYVEY8ja6fFKfH7yb7fHSD7LnAC2bmcCrwrIlamlJ5XzePcHbVyuOd6M+WT0zQzFdjdZ//Kyy2dqQCyHyr671+8PXSdMm0cZvKAGDGNhXptQwfJ4pwbaORyLmYfuwiCBhppp42dbGUnWzkyrWbt4Im7NIa6X8fJg7zmxfZQaXvK9j2UH1++nTZEA42piQ7a+5R9mJaeGV0H2c+NXEkXnUyigWASh2hhE7exhY2cms5hXiwq9xAaBfXaDwGklLiRK0kk1nIvs2TUqbu29s7wXLak/E//Sxc3Aq199q/G5CPWDchI0NgI/3nBbN75uuoSt226q4NPfSmLRb7gX8sPKmn0tXX1/tTRPKl84Fn3fa2dZX8akYD6/T6kiWMitqHNrAegicks6Bn6uWcw6ECqzd+BLwI/TSmt639nRCwBXgq8mux9dirZzO9XjFJ9vpD//QzwlpTSXYW69AnriohHAv9L74/+fyIbYPpT8Qf2yJZBuBB4dL7pHRFxWUrpV2Xq8F/0HcD9K/D8lNI/CmUuIhv4ehbwEbIBn9H0aLIBnDvJZmH/OKXUldeliex5vzXf9yiy2YoPIhsQ/AvwkpTSlYX6zyfLWPDYfNMTI+LclNIfSz14RCwnG8js/pZ+GHgj8MnuAct84OIRZLMZuwc+3h4Rf08p/aRYXkrpgvyYt9I7a/LqlNIjKjkZ+az579IbcNAF/A/wvpTSnsJ+ZwOfIgvMAHh2RFyVUvpYBQ9zOtng/6H8uX46pdRSKHt1v/0/SN/B/GuB1wG/TKk3F1Jky1mcTjYA+MQSj/txeoMOHh8Ri1JKQ+WZfAa9g8l7yGallzJSdaxYni3iO/QGHBwC3kHWdnYX9msCnp7XcRZZ2/1qRDyou60Pw3voDTjYCrwQ+FHxOed1mAycS5bhoM+3v5TSV4CvRLZ8QXGJhQtSSusrrEcb2fvoO8CvS82Kztvse+kNbHhHRPyy+P4dwlfIAg6+C7yu2K9HxGyyvvBZ+aZjyLJelAzCyl+7r9MbcJDIMgy8p9/77Fyy99nxZLOlR8vJZOflEFmf9+liwEREnE52fruDD14dEZ9KKW0oVVj+GVIMOLgNeAFwUfdnSN4mLiDL0vFvUDe5wheQBYt1krXvD/R7TVaTtYXuLDdPjYj/TSn9eYQefyza8qDyjB4fLmzaRvZ95TvdGS7yIMJ/JvuucC4lghcHMdrfj15EFty1AXgZ8JNiZo4SnzGJLGjia8DPU0p3lKjTKrI23T1g/wiywLmPDFKP/6RvwMHfgf9MKfUEAUbE9Lyct5N9ZlQ0BSiyJRm+AT3/8XZ/5vyi33NdBrwTeGa+6bkR8beU0mcreZy7q67CV8tJg8R6NOT3dVb4VbSz4nIbS5bbXa9K6gTQMepfkVVOvbahjjx+r3u90aM5nqM4lsZooi0d5hau5S7WcwfrmJXmsjTuWbNp6lX369gwyGtebA+VtieALjoHHF9KAw100N6n7I5CPOh6bqSZKZzA2cxjMRHBnrSD67mcgxzgGi7lnPQImqL0bGmNvHrthwDu4Fb2sZslrDAYpY61HOyNnZ06pfzg/bSp2X0HWmr76WDJogY6OmDf/i4OHc4e8z8vmM1rXzyXpqbKgwY6OhJPf9EWDrQkVixv5HUvKZc4TWOhI/V+RkyK8kNHDfl9nalu5hioTtXr9yFNHBOtDXWlrp6lzJaw4h4XpHnPerbSyPhCSun0lNKHS/2gDpBS2pJSegPZYGC350XEnFGq00zgbSml5xUDDvK67Eop7YKeQecv0htw8CXgvJTSH/vP6Esp3UI2uN6dQSHIBt4GyFMdv72w6a/Ag4oBB3mZ21JKzyYbYJtC3xnUo2E+2YDK/VJKPywOwqaU2lNKbyMbYOr2DrLBhouBB/cf5Ekp7SQbWC0Onj6T8t4PPbl8uoAnpJQuLA74pczPyQZ2ipkJPpUPKI+kN9CbIQHghSmlNxQH3fI6XZLX55rC5vflmQGGMpPsuT4mpfTBYsBBXnZxMPfB9F324pfAvVNKP+0/sJ1S6kopXZ5SegXZ7M7+vg90t/0mBn9duhVnhH6p1EzbEa5jNV5JtmQBZDNyz00pvacYcJA/Znu+9MkDge529QDgX4bz4PnA+SMLm56eUvpe/+ec16EtpfSblNLzqey8V+v8lNJTU0rfLZeGPW+zDyGbAQ3ZrOJXVvEYC4BPpJSe2L9fTyntzfutXxc2P4vynkff7BSvSim9tsT77I9kr9U6+mYeGWnzyAK8zk8pfah/hoa8n3sUWQYEyAaC/4MS8iCp4kDsHWRt81fFz5C8TXyGLFtAB6P7/Koxneyz599TSm8q8ZqsI2v3xSCJwV7rao1FWx7KB+nNTLKf7LPum8XB7JRSZ0rpu8B5ZFkKKn39xuL70Uyyz8ruz/U+c6tKPO7bUkrnpZQ+XSrgID/mtrz/en1h8yvz4IsB8u9SxewTVwEPLAYc5OW2pJTeSxaAE/RmsSgrD177Kr0BB78Bzso/c/o/180ppWf1q8t7IqJ86H/fx7qi3KWS4yWNtd5/1ZawgmPiRBrzfxUmxxROjDOZlSebWc+N41JDTRyJPv/6cyJnMT+W9MwsnhMLOIWzAWinjU19/v3VPVVrOsStXEsjTazhlPGujurAxiuPZvPVR7P/tlXcftlRvOI/5/DJL+7l1Adv5OK/VJ7o9aVv3M7Flxxm8mT4yscXM3tWVUnCJEmqK7vYSlv+M+s9bWkFMOhAqlpK6UAV+36DbMY8ZIMdDx+VSmXpoUul4+/vOfQOHtwKPK//j9hF+SDSCyFfgAZOzAdh+3sGvemLO4FnlxrALXgd2ezYsfDqlNLGQe4vrpc9maz+z0xl1nPOt3+usOn+pfaLiKX0ne3+qTy4oKR8MOSlhU3LyAIcRkRETKHvIPvPU0
qfGqQ+e8kGkLt/kZpKtmxBJT6dUvpNBfu9sXB9M/DUIdpNd90GvAdTSu1AcXbnc2OQfHwR8SD6ppUvubTCSNaxUhHRTDaTvtvLU0qDDgKllK4imx3d7SVldq3UQuiT17lkNo8S9Sjbn9Sq0nOZB0QUz9tjyw0alrCeoZc4eH/h+qp8lnEpLyxcv5RskLeklNIOsiwBo+39KaU/DVKPdcD3CpvOLbPrw8jS43d7RUpp8yDl/pFstnw9+VpKqVxWE/LAnmJfUu5cVG2M2nJZ+bIfxc/wt6eUrhukHjfSmw1oSGP4/eiVKaUBywcNt07A+8gyI0G27MKZZfb7N2B24fbzUkr7y+xLSunrZMtUVOKx9Aat7QaeklIaKnffm8iWhIEsgOqpFT7W3dKkQiK/7hkLpXTm9zVUmPivoeJyu2dg9C23u16V1Amg0YSE46Ze21Dx9oo+H8UUtmdfbVvYT+vorOqnfv6WfsMf0o8HXDakm4DijPHyr3mxPVTanqB3NtZg7an42MWyi9enM4t5sXjAcTNiNvPItu9iqARyGkn12g/dyFV00sEqTqQ5yqcg1vibPq33p5DuDASlHDyU3Tdj+vCGByKCFUc08f63LOADb13Art1dPO1FWzh4cOgMCm98904+9aV9NDTAl/93Cfe7d0XxsxpFjYX5T10D55306Mzvaxjx+VKaaCbq9yHVj7tbG9pMljx2BrOZFfe87D0GHUij75LC9XuP0mN8tsLBvmcUrn+k3MB6UUppH9ks8m4PLbFbcVmF3w02iJGX2UqWVny07aN37e5yLoc+nyoXpZSGmspxaeH66jyVeH+PoXc2KWRpxofyffoGYzy+gmMq9QD6zrIcsj75QPfva6jPkK9tnmK7OPj1wf6z+GvwKXpfy2Mo3Va7FQMo/pBSumGM6liJRwLdv/xtI8tIUokvFq6fHRHTyu45tP6/VN+r5F51JmUZWnbmN2dQecaJz6SU2obY509kWTy6ndh/h3xQt7j9Y8UMAKWklH5Ntk77aPp4BftcXLg+4LnlHl24voW+nw3l1FvQQSX1KZ6Lcn38qBpGWx7MPxWut9E3gK6c/2NgfzBSavl+tINsGZQRl7JsSH8tbCpXp+L74PKU0mUVFF/J8kTQ93vaF/LApEHl3/+KWZsG++wrHndGuUuFda1LxXUYB1trsZJ1G/uWW1wfsvxboi1/zP7lNg+yPnapcidXWC+NvHpvQwDTmFny2OL2w6PWdauojdaSl+4lUrpf97ZBXo9ie6i0PRXLHqyddqbOnqUUimUX22O59gQwPb+vlYNl99HIq8d+aFfaxnY2MZ1ZLOMoOlJHn0sq/JvUva1r2Kv9qVbLlvQOiGzeUn7Q+K6t2X1LF4/cINxznzab5uZg85ZOfv7bwfuOd39oF+/96G4i4FP/bxFPfPRoJ0NVJZoben9Kau1qKbtf933F/XXPNFG/D6l+3J3aUHtqYwfZ/Kx7YpYDwNAeaTjyZQUeRrYm8TKyNdWb++1WnIpyxChV5Q9D7ZCnLj65sOmiKsovLpPQZ+ZfPhhTHJAsO5u/n58C/1NFHWpxRT4DvqyUUltE7KI3A8Qlg+2fKy5hEWQzHvuvWX524foNKaVbhyo0pZQi4kf0zrg+e5Ddq1Us6wDwuwqP+yHwoPz6qRExrX969n720be9lPPAfre/VWF9ykopbcrPX3dwxPMp0c7z920xgKJckMSI17FCxcf9fSqxpEEpKaWNEbEHmEP2+X4avTOJq5JS2hMR6+jtv74WEc/NB8jHTUTcC7gf2QDsPLJU6/1ngE8vXD+CvsuElPPnoXZIKR3M+4ruZUbmlNjtPv1uV9MfHl/hvtVaX+Gs8DsL1+eU2af4/C6uJNgtpXRLRNwBHFlBHUZbO1DJAHHxXJTr44dlFNvyYIqv398rCaJKKe2LiMupMuPDKH4/uqTSPrFEnVaQBZKdQhbYNZMsw1FR8XtSuToVz2MlWX0gyxbTTt9gxP71C/qe5xH5nnZPM70weNbCvj63u6WUOMiBfP9ZFZU7OZppSpNpp40D7GM+S0rud4B9JcudwSx2soWW/P5SWsgSZjTRzOTo/3bRWKnXNjSd2exgS0WPpbFz/3jUoPd3t58W9pNS6lm+oOhAoV+otD1B1q9ANkDcllpL9hstZcqeHM1MTs09aV+HVvna7Bq+euyHDueBJy3s4/eDJHA6zEF+zw8AOIEzWcbKiuqmkXXc6slEQEpw3U1trF09MIa6qytx063Zz2UnHDtyMdbNzcH8uZPYvKWT2zaU/znuQ5/aw5v+Z1d2/R0LeOZTKu//NLqmN/TOyj3QsYvpjQNn6aaUaOnYA8CMxnkD7tc9y0T9PqT6cXdqQ1u5gy66CIIlrKi4HncnBh1INYiIo8hSbT+e6t5Hc0alQpUtVXAyfbObfCQiKk2Fvrxwvf/azivoO5BQ6aDITQzxA/wIqPSXueIgeiXH9B90LxXWWxxMqWQQvtvVhevLImJqJen8K1Csz7X5jM5q69MIHMXgs7JvH2pmd644wLp9iCUwqvFxegMKHhcRS1JK/V/TZ9I70LQD+M4Y13EoxQUy7x0Rv6ji2GK4ZaXrsJfzPuDT+fWVwEX54PEvyAbQLs1nY4+6iHgs2fIR1Q7Mz6lwv0r7ihZ6gw5Kve+PLVzfnFLaVWG5wx1MHkw1z61buakKxRDdarIzXE99BB3sHCoQLdd/OseITd0Yg7Y8mOG8fhUFHYzB96Oql2aKiJOAC8kyAFQzajKgTvlSRYsKmyo6jyml1oi4DVg7yG5HkAWgdHtjRLyskvL7HTfcvn9Ca4wmZqW57GM3O9nKoj5fYTN72dUzS2Fen5dzcHNZxDbuZBdbOapPd585nA71/BjRv9y5LGQDN3OAvbSmQzTHwNTBO9ladZ008uq1Dc1jERvI0oseZD+zGPgD/0F6V3qZOnIfXRqGufnr2EE7+9jF7D6J5zK78vf+bObREJV/dM5hAUGQSOxiG0tKfNXq7leamTJg4Hoei9nCxj7tpr/u9jjF9jSm6rUf0sQxc8Ykzjy1mcuuauXXfzjIE/5pYAaBv155mL37sp+FHnz/kVvS4EBLF9t3Zj81llu24RNf3Msr35ol9HrPG+fz4mfPGbHH1/A1TprMrKZF7Gvfxs7WO1k85ZgB++xt30pHnixy/uSBfZRUVM/fhzQxTKQ21L20wnwW32OXozLoQKpSRJwF/IraBgBGa9pS+WlTvfr3xhWl3y1hdr/b/UNed1KBlFJHROyldwBvNAyVLn2kjik1iFE8L9XMkO2/71xGJrX1SNZnMJW0Reg7QDKSi4T+BriZbPC3EXgW8O7uO/OZpM8t7P/5QdLqj1Ydh1J8r67ML7Xo/16tSkrpMxFxNPA6etv4kWTn77kAEbEZ+BHwuZTS5cN5vHIi4p3AG2s8vNI+t9JpXkVDve8r6gtr2LdatTy3cuYUru+p4rixWJakErWeixGZ3jdGbXkwcwrX91RxXEWv3xh9P6r0MwaAiPgnsuUYajl/pY6Z0+/2nirKG+o89v+edr8qyi4aVt9/d7CEF
exjN1vYyKp0/IAB/g3cDMBM5jI9Kv/RaQlHso072clW9qc9zIw5fe7fmJc7mSk9P4p0m8ciJpPNKt7AzRzLqX3u35/29PxIUupHDo2temxDc1lIM1Np5RAbWcdJJVaA2UgWCzqLuUy+h/7AVW9mxCxmpNkcYC8buJlT+iWya02H2MIdAFXPhGqMJhakpWxnMxu5mcXpiD6zvjpTB5vyWL3FHDlgRthSVrCFjbSwj51pK/NjcZ/7D6S97Mr/BVpQZka8Rk+99UPLYuWgWQtuTddxOzcwhWlDzlbU2Hjq42dy2VWtfO17+3nTK+YNWELhA5/YA8AZpzSXzIRQTkdHorGx/L9HH/nMHtrzMO/732fgZ9EXv7WPl7w++5npTa+Yy2tefM9b63oiWDplDfvat7H58M0cM+NMmhum97n/9parAJjVuLBkJgSpqJ6/D2limChtqCXtZx/Z/LOl9+BsT6VDDiWVFBHTge/R+6NzO9k6uk8hyyQwD5iSUoruC/C20a5XhbPWpw+9S0X69xv9/zupZtB+JAfD6k1xsGI452SkfjEcq/pUmkGhWE75RZOqlGdZ+ERh03MjothmH0xv1odE70z+UkaljhUYrfdq1VJKbyBL1f1VyPN39rUM+E/gsoj4Vr6My4iJiMfRd5B2E/B2sqCpVWTp0Rv79bkbRrIOVSr2h/aF6jEB23JVxvD7UcWLA0fEcuCb9H7+HQQ+SZaF4fi8rs396vTFGuo0Ukaq77/H/4qynFVMYRqddHAVf+ZAymJVOlI7t6Sr2U624sxqThxw7K/Td/h1+g63pusG3LeQZT2zy6/mEvamLF6sK3WyId3cM+B7DCcwKfp+BE+KBlZxApANDG9IN9OVr1CzJ+3k6nx1r9nMZ2EsG/Y50PDUZxuaxJp8BZgtbOTWdB0defKetnSY69Pl7Mtjm7rbmurDak4CYBubuCVd3fO6HUj7uIo/00kHU5nOco4ecOzmtL6nTR1KA9fVXsUJBME+dnMdl9GWsq+Uh9NB/sElHOYgjTSxkuMGHDs/lvTMZL+ey9iZttKdsK7YL01luinyx0E99kOaWJ739FkcdUQj+w8kHvv0u7j+puzf0/0HunjtO3bw/Z9lfco7Xz9wtmjD0nU0LF3H2/7fwNj4kx64kY99bg+3rm/v6TMAblrXxn/993be/L5ssOWfHzmdk4/vG8P73Z8c4Lmv2EZK8KoXzuGtrx742KoPR047kSmTZtKZ2rly98840JG9rh1dbdy0/y9sa80G4NbM7L/CJPxyy8f55ZaPs27/30qW3d51mLauQz2Xbp1dbX22d5VYzbEztffdh2yfrtTZZ3tHVyUJDjWW6vX7EEB7aqMttfZcunXS3md7V8VJgzUa6rkNdbuL9QA00sRClg7n6U5oZjqQqvNMetf4bQcellK6eIhj6iVvz55+t+dVsp5zBfrPOqzm+d6dF1LaU7g+nHOyp9RONSiWUw/1Kba9kZ6V+QXgXWQp0VcC55MtCQDw/MJ+v0kprRuknNGs42D2FK5/JKVUaXrtUZFSuhJ4WkQ0AWeRzb49FziPvm3pScAREXFuSiX+O6zNmwrXLyPrc/cOccx49rnF/vDu2BfuAbqnwc2p4jinPtRHW95TuD6niuMqef3q8fvRy+kdyN8LnJNSun6IY4aq055+t+dUUZ+hzmP/sk9PKf29ivKVa4gGTk3ncCV/YD97uJRf0ZAa6aSjZ5/VnMT8qG7mbkRwSrovV3Axh2jhMn5HQ2qki04S2Y/uy1nF8lhV8vgj4hj2pz1s4nZu4WrWcQ2TUkNPvaYynVO4b43PWiOpXtvQkljBgbSX9dzE7dzAem6kMTXRXohzXMMpLIh77g9c9WhBLGVVOpHbuI4NZIO6xfd+E5M5lXOYFA1Vlz0z5nB8OoMbuIItbGQLG2lMTT1p9xto4BTOLrk2LcBJ3Icr+QMH2Mvf+SOTaCBS9NStmSmcyjlVpajVyKjXfkgTx9Spk/j+F5bysCdt4sprWjn5vI3MmjmJAy1ddHVBBLzr9fM5/7zqlk+55bZ2XvbfO3jZf++guTmYOT1oOZg4dLg3AOERD57GFz+6eMCxr33HDjrzXwq+/O39fPnb5Zd3+c7nlnDOWSO37IOq0xCN3GvuI7l814/Y17GdP+/4Bo0xOR/ky17rNTPuy4Lm6tcr/8uOb3O4a+Br/4+9v+pz+6y5j2Nec9+lG24/8HdubRmYZHNb6+1s23Z7z+1lU9Zy8pyHVF03jZ56/j70V37N4QErKcM1/LXP7dN5gEsPjaN6bkMAKSXuIluZeQlH1lSPuwv/c5Cq84jC9a9X8IM61Mc61jBwXe/FjEzK6639bh8N+bSIQUTEAuonIGM0FNPxD1wArbzivm2M3CD/SNSnfznDcVfh+oqImJJSGpFsAimlPRHxdeDZ+abnA7+IiEXAPxd2/eR41XEIxffqwP/Ux0lKqR34S355f0RMBh4NvJPe9enPBv4V+NpwHy8iFgJnFDa9dqhB2oiYQW2p3UdKsT88MiIaKgzAmCi/6m2gt00eP9iO/dyjp1zWUVveAHRPRRnp168evx8V6/ThCgIOYIg6pZQOR8Q26PmloaLzGBHNDP0+L/U9TTWaGXO4bzqf9dzIDu6ilUM00cxs5rKCNcyL2k7vlJjGfdJDWc9NbGMTh2mhgUZmMocjOIbFccSgxx8fZzAvLeJObmM/e+iik2nMZBHLWclaGqOppnpp5NVrG1odJzM3LeQObmUfu2inLU+DvoAVrGF2OGu0Hq2K45md5nEHt7CXXT0zsRawlJUcN6z1XpfFSmakWWzgZnazg3baaGYq81nMSo5jWgxcy73b5Gjm3unBbGQdW7mDgxwgkZjOLBayjBWsGfTHVY2ueu2HNHGcemIzV/9+Be/9yG5++usWNm3pZP7cBs66VzP/9bw5POTc6gIOAH7wxaX89o8H+ctlh9m8tYPtOztpagxWH93EWac182//MpNHPaR0Aq+uwiThrdsH/ze5rT0Ner9G36ymBdxvwVO4reUKtrduoLWzhcmTpjC7aRFHTTuV+c32FapOvX4f0sRRz21oF9tozVfIXspRNdfj7sCgA6k6xR6jdJ6ognz9+HNGrzpVuRpooXfW39nAjcMtNKW0LSLupHeG432obMBxYA6uu5crgO7FDM+MiKZ80HYoxfby9zJLZxS3VZpG+YrC9VURsTil1D9gZKj6bE0p3Vnh4w2lGJgyOX+c345Q2QAfpzfo4NERsQx4OtD9a/4W4IfjXMdy/kI2mA/0W6SqjqSU2oDvRcSfgevoXY/84QzsA/q340rabf+Q+SH7XLLXaDzzgBbfZ1OBU4BKZipPlP7wr9CziPQDKwmqiIg11E/w3Xipl7b8V+DJ+fV7RcTcoTIeRcQssiVWhlKP34+qrdMM4NQKyv0r8Jj8eqXTZ86l9/OnpJTSzoi4GTg233Q2vVl6VIPmmMJaTmMtp1V8zEPjiUPu0xhNrOaknvSO1VocR7L4Ht8tTgz12obmxxLmU93sZo2/+bGY+VXGky2LlRUtbTAr5nFyjZlS
JkUDK1nLStbWdLxGV732Q/0dEydyTImlHjT+lixq5EPvXMiH3rmw4mM671pd9r7HnD+dx5xf26pgt122sqbjNH6aG6Zx/KxzOZ5zKz7m4UteOOj9D1z09Jrrs3rmvVk9895D76i6VY/fh+4fjxp6J9WNemxD3fV6KEN/B7sncIEuqTrVTj96BLB8yL3GQD7g/evCpueMYPHFGY1PzNOwD+XfR/Dx61HxnMwGHjvUAfls2EeWKaOouPhQpfnmimUF8LQK6tNMth73UPWpxRXAjsLtF4xg2d1LAnTnwWoEnptfun0updQx4MC+RrWOg/h54fqKiHjYGD1uTfLglT8XNpX6Fbz/glmVtNtapns+e+hdRtXfgGI2jKcOdUBEzAb+adRqNLJ+Uri+BHh8Bce8aJTqMpHUS1v+aeH65Aof41mMzvt1LL4fVVunp5Odl6EU3wdnRsRZFRzz4grrUOz/L6jw+5QkSZIkSZI07gw6kKqzuXD9AYPtGBHTgA+ObnWq9v7C9XMi4j9HqNzPF64vA1462M4RcTpZCva7s98CtxZuvytiyBw/76V3wCMBnymzXzHt/zH5jNFBpZRuBX5X2PT6fImLwbyGvumdPz3U41QqH/D/WGHTEyNiyMCMKn28cP219C4V0UX5c9tjjOpY6nGvom+A0EfzgekxU0mb6qeYY2pXifv30Hcwfk0FZW7ud3uoPvchwJMqKHfUpJT2A98pbHpRRAy1yOFbqTx4aLxdBKwr3L4wovzC0RFxLgYdQJ205ZTSTfTN1vKmiCi7dEJEHEfWPitRj9+PqqnTYuDtFZb7NaC4PManI6LsclER8RTgcRWW/SGgOyvSUcA7KjxOkiRJkiRJGlcGHUjVKf5Y/8SIeHSpnSJiPtlMuLrKUZhS+jPwjcKmj0XEayNi0KVWIqIpIh4bEb+LiFKL0vyWLCV8t/dExBPKlHUsWVr7u3X/k1JK9B3AWAt8O0/f3Edk3kA2o7TbV1NK6/rvmyumcJ8HPLPCar2dLJgBslT4P42IRaV2jIj/oO9g0x9TSr+p8HEq9WFgY+H2NyNi0JnhEbEgIl5TYfnfAnbm14uDur9IKW2okzqW82p6B+nXAhfnA4CDioijIuJdEfGBYT7+AyLi5xFxfkQ0DPGYjwHOK2z6Xf998hT8VxU2vXCoIJyU0kb6Bu78v7xvLVWH84DvUvlyI6Ppf+gdNJwG/KTcwHxEvBD4rzGq17Dl/VoxqOxI4I8R8bBioEpETI6I55LNrG8Eto9tTetLnbXll9PbPmcBv4uIJxff5xExKf8M/x1Zpp5KXr96/H5UrNOLIqLkMhF5YNBFwFCBeACklA4Ary9sOg34ff+MBxExPSJeC3yJ7LN3J0NIKa0Hiv33ayPiwogYNDApf80eFBE/jIhBAywkSZIkSZKk0TDoQKOkAT5NNmN6Btmg+Q8j4svAj4GtwFyydXufRTaou49s0GXIFNtj6Nlk6wWfDjSQza5/YUR8kyw1+HaywY45+X5nAueTDTxAiYGQlFKKiOcAl5MNsjUB342IH5DN+r2D7Nw8hGxZh6nAH4GjgSNG4TnWhZTSl/IB2e4FfR4NXBcRnyE7V61k5/gCsrWbu60HXjJIuTdFxOX0rrP9uYh4PXAL0FbY9SMppd8Wjvt9RFwIvDLfdG/g+rw+fwb2AyvJllR4RKGcPcB/VPasK5dS2hMRTyYb2JoKTAG+FhH/RdZuriN7D80GjgceSNYWDwHvq6D8wxHxf2QD+EWfrJc6DvK4V0XEs4Evk/U1p5K1nZ8AvyIbwNwPzCRLc38q2Uze0/MivljrY+eCrA08AtgaEb8ga7O3k83wbQJWkS0L8Dh6g4huBb5SpsyvQM/CWOcDd0XEVWTnrzsY5tqU0n8XjvkAvRkrTgCuiYiPky2d0UY2E/hxZGn+A/gZcBIwVHaBUZNSujYi3kVv0M7JZK/dp8n6vQNk5+7f6V0P/mvAv41xVWuSUvp5RLyPLBMKZBlEfgVsjoh1ZG3jBHo/M75GNsh9QX67dQyrW0/qoi2nlK7O+6//zTctAr4J7IyIG8nei2uB7kVf/0i2tE73+7Lc61eP348+BDyD7LvOdLIAmc+SBRjsInvuD8n3mUb2XeUaoJIFHT9J1t93Z206HfhbRKwn+wyfRvbe7w4WeA9wTn4MDP4+eCNwSqEeLweeHhFfBy4BtgCdZN/TVuWPfT69mYnqLcuWJEmSJEmS7gEMOpCqkFLaFhEXkP1A30j2w/oF9A6mFLWQDd7eZ+xqOLSU0sGIeCDwBeBf8s0rGDgwW225N+QD7D8m+7Ed4J/zS3+3kQ00/LnEfXc3TyMbxOlOlb2CwdMl3wg8PKW0Z4hyn0uWgr97tuzq/FL0gxLHvZpsAOa/8tvzgdcN8jh3AY/IZ1+OuJTSX/P2+COywXPIgiHuPchhh6p4iE+SBVl0D4rfQTaYV091LPe4X4uIPcBXyQaXJgGPzS9jaTHl+7mijcCjU0oHy9z/KeAxwMPz23PomyGhe1vRJ8kGBbv7qqWUf/9cSTaQf9UQ9Rx1KaW3RcQSoHsJm7lkA7KvLbH7p4GvM0GCDgBSSq+NiP3Am+hdEmZZfin6HNla9l8obNvLPVPdtOWU0scjopVsUL47+8584H79dv0JWcBZsd2WfP3q8ftRHgD0CrKMNZAFjb04v/S3nSzgo2zAX7+yU0T8O7ADeCG9AZkr80vPrsC7gbfQ9ztP2fdBSqkrIv6ZLHige3mSBXndKqqfJEmSJEmSNNbu1unNpdGQUvoe8FDg2jK7dJLN+jw9pfTzMatYFVJKB1JKTwQeCfyerM6DWQ98Arj/YIPP+az6e5HNIkwldmkjm7l9RkppU9UVn4BSSq1kMyGfAtwwyK47yQYlzshTcQ9V7lXAifkxfyQbMGkb7Jj8uJRSejnZrMjLBtn1ANmA1EkppauHKnc4UkqXkWUJeBfZAE7ZXckG414/yD79bSCb0drts3mq/3qq42CP+zNgDdks2W1D7N5Klk78RcArhvnQ1wBvBi6lNxV7OdvJsjqcnFK6sdxOKaUOspm7/wZ8nyxrQgul+4ruYxLZ++fNZDOjS9lNlrHl7AqCdcZMSukFZEFHd5bZZRPw/JTS88euViMnpfROstnYHwSuJ8u80QLcTJZp44EppeeklA7TOwMb7qFLLdRbW04pfY6sT3snWXDDHrJgqVvJsrg8BnhsSmk3Fb5+9fj9KKX0EbJsQ+vL7NIGfBs4JaV0RZl9ypXdmVJ6MVkA2qfIsg0dJHt9ryfLbHFGSum/88+dit8HKaX2vOyzyYI5h/p830K2jMMjyL4TSJIkSZIkSWMqst9AJVUrX7/6dLIU9/PJBlzuAv6UUtoynnWrVkTMJpvheATZc0lks/DWA9enlDbUUOZRZOnel5ENZNwJ/D6ltGvQA+/mIuJYsgGKRWQzhLeTDU78NaXUNQ71OZLstV9ClgZ6J9nAyZ9TSkMGMYxCfSaRva9OIkvv3UQ2gHMbcEVKaWuV5T0W+GF+swM4KqW0uZ7qWMXjBlm67lPIZr3OIBvk3Q7
cRLY0wbAzLJR43KlkyzesJmu304DDZMEX1wJX5QEFoyoiZpD1KceStdXtZH3UxSmloQIjxk3eXu5HFiQ0l6zetwB/HI/3/FiLiEaywJ+Z+aaHpZR+PY5VGncTrS1HxPVkAQoAz00pfXaI/evu+1FENJAt73IaWUaV3WSBPxePRbBSRCwmCwzotialtK6K46eRLc+wkuycTiI7rxuBG1JKt4xcbeFhk57kP4iSJOke7ZebrxrvKmiCe9S9zh/vKmiC69w61NwjSRpdF3V9e8BS60Mx6ECSdLcVET8nm/kJ8L2U0r8Mtr+kkZWnoP9KfrMNWJRSuqcusTDhRMT9gD8VNp2QUhosa49KiIg3kmWVANiSUlo6nvUZikEHkiTpns6gAw2XQQcaLoMOJI23WoIOXF5BknS3FBGn0xtwAPDR8aqLdHeSz2SvZL+VwIWFTd8x4GD8VfH6zQOKWQ3+YsBBryrO4xnAGwub/m90aiRJkiRJkiSNH4MOJEl3OxGxDPh8YdNfUkq/H6fqSHc3742Iz0TEQyKiqf+dETEjIv4TuJxsSQ7IluR491hWUmW9JCK+ERGPztP29xERzRHxVOBK4Lh8cwLeMpaVnAC+GBEXRsTZ+fINfUTE/Ih4LfAHsmU0IFuW5iNjWUlJkiRJkiRpLDSOdwUkSRoJEfGD/OoCsvXEuwd5uoBXjkedpLup6cBz8ktbRNwCbM/vWwAcDxQHYRPw0pTSdWNaS5UzGfjX/NIZEeuArUAHMA84Id+n6F0ppV+PaS3r3zzg6cDLgUMRcTOwi6ztLwaOBYrZENqAC1JKW8e6opIkSZIkSdJoM+hAknR38bgy29+QUrp0TGsi3b11Fa5PBk4cZN8twAtTSt8f3SqpCsXXrwFYm19K2Qu8LqX0yVGv1cRTPI9TgVMH2fdW4BkppT+NbpUkSZIkSZKk8WHQgSTp7iYBe4DLgA+nlH42vtWR7nZeBfwMeChwBrCKLMPBFGAfWQr5K4CLgK+mlA6PUz1V2ofIlr54OHAWsBpYSDZwfpDs9fsH8FvgSymlveNTzbr3VLJz+CDgXsDRZNkPmsiCNbYBfwV+DnwnpdQ5TvWUJEmSJEmSRp1BB5Kku4WUUgy9l6ThSim1Ab/IL5pgUkpdwB/yi2qUUmoBvpdfJEmSJEmSpHu0SeNdAUmSJEmSJEmSJEmSNDEZdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmrSON4VkCRJkiRJkiSpHjx82WnjXQVNcL/c/KvxroImuIc/4T/Guwqa6C69erxroHsgMx1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBpP/P3n2HSVaViR//vt093ZNzZGAYYGDIIEEEEybMWVfdVcG46urqrquuOf7UNa3rumbMcc2rGFEBUUSigEhmGJicQ890Pr8/7u3p291V3VXVqZr5fp6nnqm6de+5p26de29Pnfe8R5IkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJE
mSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkqW5FxJMi4jsRcWdEtEZEKjyeNtH10+iLiJUDvueVE10nSQeHiFhTuPZcMNH1kSRJkiRJkiaLpomugCRJA0VEA/B14HkTXRdJkiRJkiRJkiSVZ9CBJKkevYr+AQe7gBuB1sKyTbUUHBHvAt6Zv7w0pXRuLeXUs4j4MnB+/vIrKaULJq42Gol8tPWX8pf3pJRWTlxtpMHybCR3FxYdkVJaMzG1yUREKrx8RErpkomqiyZOe2pjDbewlQ20s58mpjCb+axgFfNjSc3ldqVO1nArm1lHG/topJGZzOFQjmJJHFpymzvTX7mbv1VU/jwWcXo8vOb6afTUUxvqST2s5252sYO97KSdNjppp4FGpjOTBSzhMFbREtNqrpdG32i3oZ7UzQ62sIsd7GY7u9lBB20AnMpDWBhLKypnU7qP+7iTveyim26mMp3FLGclq2mKKVXXS2Onnq5DvdrSPu7hNraxkTb20UAjM5jNMg5nOUcQETXXS6Ov3trQnrSTXWxjNzvYzQ5a2U0isYRDOSkeVHN9NHY2bu7ig5/YwUUXt7JuYzdzZjVw5gNaeO3L5vKoh06vurxb7+jgez/Zy5+vb+O2OzvZsq2bva09zJvTyCknNPO8Z8ziBc+aRUND6WtJ47I7ht3Hdz6/lGc9aWbVddPYaO/Yw5p1v2frjltp79hDU1MLs2cuZ8Wys5k/56iqy+vp6WLH7rvZtXcdu/euZ/fedXR07gHg1GNfwMJ5Rw+57frN17Fr733s3beR9o69dHbto6GhielT57NgzioOW/YgWppn1fx5Nbrq7T7WK6XEetawiXvZyy466aSZFqYzk3ks5nCOoTEaa67fZGbQgSSpHr288PznwDNSSm0TVRlJkjR57Ek7uZbL6KQDgEaa6KCdrWxgKxtYlU5kZRxbdbltaR/XcCn78xjIRproopMdbMke6UiOjdMGbddEE820lC03AZ20AzCLuVXXS6Ov3tpQJx3cwnUHXgdxYNs97GQPO7mPuzg5nc38WFzjp9ZoGos21MoeruPyEdXrb+ka1uWxgkHQQCP72MMabmET93JGOtfglTpRb9chgG1pEzdwBd10AdDEFHroZhfb2MU2NnMfp6QHH7Q/stebemxDf+Uq9rKr9g+lcXXDze08+lnr2LajB4DZsxrYur2bi369j59dvI//9+YFvOk186oq80c/b+UdH9p+4PW0qUHzlGDz1m5+fel+fn3pfr74jd385OuHMHtW+ZnBF85voLGxdGDC1BaDn+rFntaNXHvzl+ns2gdAY2MLHZ372LrjNrbuuJ1VKx7FyuUPq6rM1v1buO5vX6upPp1d+7nl7p8ceB000NjYQld3G3taN7CndQP3bbqKk1c/l/lzjqxpHxo99XgfA2hP+7meP7CHnUDf/83a2U87+9nBFg7hcBqpPjDr/sCgA0lSXYmIacCJhUUfNuBAkjTWzCRy/9CduvkLf6STDmYxlxM4k5kxh67UyV3czFpu5w5uYlaay4IKRwVDNpLhBv7EflqZynRO5IHMjYV0p27u5Q7u4Ebu4y5mpbksj/4/UB0eqzmc1WXL3pzWcQNXAHAIK2v63Bo99diGGmjgMFYxj0XMYT7NTCUi6Ek9bGcTt3ED+9jDjfyJc9LjmBLNo31YVIWxakNAPrprXv6Yf+DaUYn70p0HAg6O5iQOYxUN0cjOtJWb+DP7aeUG/sSZPKKqOmn01eN1qC3t40b+RDddzGE+x3E6M2MOPamHLaznb1zDdjZzG9dzHKeP9iFRleqxDUF2P5vJ3APXsS2sY1ttSSw1xvbv7+Fp529g244eHnBiC1/55GJOWN3C7j09vPdj2/nYZ3by1g9s4wEntXDeuZV3rB2/upn3v2UBDzt7Kscf08yc2VmQ0pat3Xzp27t5+39s4/I/t/Gv79zCFz5WfhTzlb84jJWHmZ2nnnV3d/KXW79JZ9c+Zs1YxgmrnsnM6Yvp6mrjrvsuYe2GP3LH2t8wa8YhLJi7qqqymxqnMnvmIcyesZzZM5dzw23frmi7hoYmDlt2NvNmHc6cWYfSPGUmEQ309HSxfddd3HbPL9i3fys33va/nPOA1zKlyUDMiVKv97Gu1Mk1XMo+9jKD2RzNScxnCQ3RQHfqppVdbGIdDRy8AZjlw8UkSZoY84FiWPK9E1URSZI0uazjrjw9YhOn8GBmxhwAmmIKx8QpLOIQAO7gpqrK3cJ6dpONyjqFc5gbCwFojEZWxmoOI/uh7E5upif1VFX2Bu4BsiwHvfXVxKnHNjQlmlkdp7I4ltMS0w6kL2+IBhbGMk7lwUCWEWErG2r85BotY9WGZjKHh/MUTouHsSpOYnEsr3jbntTNXdwMwAqO5vBYTUM+Gn1uLORkzgZgF9vYktZXVS+Nvnq8Dt3DbXTROahODdHAkjiUYzg1r/vdtKbdtX1wjZp6bEMAZ/JIHhSP5vg4nUPjSJqZWvNn1Nj63Nd2c899XcycEfz4q8s4YXWWtWv2rAY+/M6FPPVxM0gJ3vr+bVWV++TzZvCm18zj7DOmHQg4AFi0sJE3vnoeb3p1ljnhWz/cS2dnKleMJoF1m6+irX0njQ3NnLL6H5g5PcvG1dQ0lWNWPo5F844DEnes/XVV5c6cvoSHn/lmTjv+AlYd/hgWLzi+4m2nNE1j9crHs3jB8bQ0zyYi6x5taGhi4bxjOPXY5wPQ2bWPrTturapeGl31eh+7g5vygINZnMkjWBjLaMjbUWM0Mjvmc3ScRHOUz3R4f2fQgSSp3gwMVe6akFpIkqRJZyNrAVjKYUwtkSL8cI4BYA87aU17qi53PkuYFXNLlJtlMuigje1srrjcjtR+oJN4GYdXvJ3GzmRrQwDTYyZN+Z/Q7eyvaluNvrFqQxFxIOCkWtvZTEc+jcuKfP9Fs2Me88k6A3rrr4lTj9eh3tHoy1hR8of0ZaxgSj6VkG1o4tVjGwJqvoZp/H3zB1m7eN7TZ7F82eBk2f/2qrkAXHtjO7fe0TFq+z3j1CwQpa0tsX1n96iVq/G3ccsNACxdeDJTW2YPev/w5VnQ7J7WDbTu31pxuRENY3YtmT51Pk2N2TWzvaPya6NGXz3exzpSeyFr2Mk0hdlWSjHoQNKwIqI5Is6LiA9ExK8j4p6IaI2IjojYFBFXRcTHI+LMMdr/uRGReh+F5csj4m35/jdFxP6IuDsivhkRj61yH8dHxGsj4n8j4qaI2BkRnfm/t0fEtyLihRHV300iYlpEvDoifhcRGyKiLSLWRsRvIuKlETF9qM9ZQfnLI+INEXFx/t3si4jdeb2/FhFPjzH+n11EzM4/489K1OGbEfHciPITOxY/O+R37z53F49L/righjquyct/Z2Hxw0uUXdE+RnrcI+Ks/Bzq3d+FFX6O7xe22R8RJxfe6z2G5xc2OX+Iz3huJfscpj5LI+L8iLgwIv4cEVvyz7U3b+c/y4/TghHuZ35EvC4i/hAR6wvn0Y8j4u9qaeMRcXJEfDAirs6vIR0RsTkiromID0XEAyos512FY
3pJhdsMeb5HxCX58i8VFh8+xHf5rkr2O0ydGiPiGRHxjYj4W0TsioiuvG2vj4grIuJzkV0L51ZYZlNEPCcivhoRt0TE9ohoj4h1kV0D3xAR82uo66PyMu/K67c1Iq7Pv8+jC+utGe6cjoiVA47lynz5nMiua5fl9e2IiI0R8aOIeMwQZX04Im7Iz4H9EXFnRHwmIqrLF8joHr9S535e/rMj4ieR3T/b8mP5p8jur2WHXPe2eyq7Zpc9N2KUriERcUGZ8+l3ZeqzpkQZw7aXEtssiYg3RsRvI+K+/BjuyM+hCyPiiZWUk5c1qt/RwagrdbKbHQAsoHSKxTksONA5W03H7g625OWWTvM6NaYxg9n5upWXu5G1JBJBsJQVFW+nsTEZ2xBAa9pNF50ATGNGVdtqdI1lGxqJ3v3MZE7JH26hr77b87aqiVGv16G2fN7j6cwquW1EMJ2ZAGwbp3at0uq1DWny2LO3h2tuyALVyk2d8KDTpzJndta19NvLRy/g8Yqrs7KmTwsWLzx405NPdl3d7exuzQKry02dMGfmoTQ1ZkEm23fdNW51G0rr/i10dWdtcFrLvAmuzcGrXu9jm7iPRA9TaC5bL8HgMDVJKoiIJwFfBcrdaRfnjzOA10bED4EXpZR2jXG9ngl8ERgYKrkyfzwvIn4AXJBS+XC3iGgBrgFOKLPKnPyxCngu8N6I+PuU0h8qrOdZwDeBgZMAHZY/Hgm8PiKeXUl5A8puAt4N/AtQ6pejWXm9nw9cExHPSyndXu1+KqjHC4CPAQuHqMPzgLdHxItTSleOdh3G02gd95TSlRHxNuA/8kUvjohfp5TKTkQWEa8CnlFY9PqU0g01fpQRi4gvkgU4lApinALMIGvnjwfeGRFvSCl9uob9PAz4NrBswFu959FTgFflx3rYnMIRMQP4JPDCEnVflD9OA/4tIr4OvCqltLfaek8mEbEa+A5wSom3p+WPZcCDgJcBVwEPHKbMxwL/DRxd4u1D8scjgbdExOtTSl+soJ4zyAIxBl4zpwEL8vq/Li/vf4Yrb4j9nE3W5gb2AC4Bngo8NSLen1J6a2GbVwMfAQYO/ToS+Efggoh4QUrpuxXWYdSP34DyDwe+BXk+5T4tZMfyLOA1EfHElNLV1ZRdRR3G5RoyViLi38gC2WYOeKsFmAscS3Ztv4Ls75Hbqix/wr+jyaaVvj/5Zgz6EzETEUxPM9nNDlqpLP1zR2qjk2wE18wy5fa+18ruisuFvqkVFrLsoE7BWC8mUxtKKdFBGzvYyp15WtGpTGdhnmpUE2Os2tBo1atcnbL3ss7kTtrpSO1ekyZI/V6HshjvRPnxEb3vjVe7Vmn124Y0Wfzt9g5SfqqfsLq55DoNDcHqo6bw5+vaufm2kWU62L+/h7XruvjmD/bwkU/tBOBVL5oz5Gj25758I7ff3cm+/T0sWtDIAx8wlRc9bzZPfLTBl/Wgdd8WyO8JM/JpFQaKaGD6tAXs3ruO1n0TF6SUUg8dna3s2L2GO9f+BoCpzXNYOG/1hNXpYFev97FdbMvfn0Oih7vTrWzkXtpopZEmZjOPQzmKRXFw/3/MoANJw1lJ/4CD3cAdwC6gkawTahW9/wOFpwNHRsTZKaUxye2Zd8R8N99nAm4GNgNLgeMKqz4DWBQRjx2iLlPoH3DQBdwJbAXayD77sUBvaO8KslGTj0kpXTpMPR8I/Br6DQVoA24C9tIXIHEs8FuyTuyKRMRM4PvAeQPeuh1Yn3+uY4HeEbCnA3+MiEeNZid1RLwF+H8DFm8GbsvrcDx9n/944LcR8YyU0i8HbLMd6F02DXhY4b3LYFCe2HU1VPdSsg7DVcBR+bIdwJ/LrD9oH2Nw3D9M1mnYm5njsxHx55TSoBDfiDgJ+Ghh0Q9TSp8asFrvMTwJDvzivB64scS+gXwSq9qdTP/OwrX5/vaSdRYeQ9YxRv76UxExN6X0gSr38R04MNnj7WTfzTzgRLLrEMDDgd9ExMNTSmWHZ0U2Qv+XDO4w/xuwiSzg4Hiy60sALwCOi4jzUko7qqj3SP2Z7HqxnOxzkr8ud925o9YdRcRC4BLoF6a7H7gV2EbWrueRfZ+9vz4PmS0rIl5J1mFeHJqwjez72w8cSl9n+lzgwohYllIaeD0pljkVuIjsuy7qbRNzyI5VC/DJiOgcqo5DOJEs4GAG0EN2j9lCFmBXvF+8JSI2pJQ+mXc+fzhf3nud30N2rekNXGgBvhkRt6eUrh+qAmNx/AZYDHylULe1wBqy7/rk/LP3rvfLiDghpbRxQBl3kJ1LlVyzAUpdA0fzGrKOvmtgMdvRVZS+1m0qsaxiEfFZ4OUDFt9H9jfEDLJ21HvdOhv4Q/73yLUV7mI0vqODTjttB563DDFHcAvTgB391q+83NIjhIED8xJXWu7etIs97AScWqFeTIY2dHO6mvWsGbR8JnM5mbNoLJ9cTONgrNrQSPXuZ/g6ZTpoo3lQHKXGQ71eh6YynX3sKfvDfk/qYT9ZnHY3XXSlLprCn50nQr22IU0eGzb1zXJ6yNLy5/GyJU1Ae7/1q9F86B10D5hBoakJXnH+HN7370Mny7zq+nZmzQymNAXrNnTzww2t/PBnrTzryTP52ieX0NzsVB4Tqb2zr9O4pbl0hhyAlimzgXW0d47/OJ+b7/wR6zcP/u/5zOlLOfmY59DYaOr8iVKv97F9+d85jTRxNZeym+0EQSNNdNLBNjaxjU0cllaxOk6tqE73R/71J6kS15H98H1RSmlQx1ZELAX+GXgD2XXlFLKdZBkXAACs1ElEQVSO6H8do/p8jawz8MfAa1NK9xTqchRZR83j80UPBd7P0B3628myOfwY+GNKqV+IbmRTKjwN+BBZkMAUso6jVeWCGSKbMuFb9HW4d5GNjv94ccR0RJwOfIYsU8R/DvO5iz5PX8d3N1mmgY+nlNYXym4gGwH+P2Qd0AuB70bEaSml1ir2VVJEPJ7+AQf3Af8E/DSl1JOvMxV4Kdlo/un549sRcVJK6b7eDfMO+cfl26ykf7ru81NKa0Za35TS+Xn576JvioUbUkqPq6KYUT3uKaUUES8E/kLW4Tub7Pg8OKXUWShzOv073tcCLynxGXuP4Zfpm2Lh1ymlC6r4jNXoIOuc/R5wcakMJ/mI8Q/S1yn53oj4ZRUdb18g+9yXA69MKd1UKHspWdt6Yb7oOOCz9M8GMdBn6B9w8H/A61JKB9pcRKwg+26fmS86A/gcg0fXj5mU0hvzulxA3xQLm6psr5V6K30BB3uB1wHfSCn1++s6silSHkh2XEplROhd7/Fk7b/3f/mXA28BLk8ppcJ6R5Md5yfli94bEVellH5Vpuh30z/g4DfAq1NKtxTKnAe8jez+819k195qfZmsQ/eLwNuK2TMi4gSywKPekPd3RMStZO2wPd/3p1JK+wrbPA34Btn1r4kseOhR5XY+hsev6H/Irk2XAv+SUrquUO7U/HP0ZnGYD7yXLMPFASmlrwNfH+E1e9SuISmlX5MF+hH9p1h4Y0rpkgrrU5E8KKQYcHAz2fXpssI6s4HX
kx3HRrLj/b2IOGWoDEwFI/6ODkY9hVO+gfIdr435e90VXiK6Ky63qapyezuOp9DMwkHJfDQRJkMbamIKzbTQQ8+BKRVmMpdjOZXpUf5HXY2PsWpDI9Vbr0rqBNA1TvXSYPV6HVrAEvaxh43cy5HphEHTdKzn7gOjB7PtO2nyZ+cJUa9tSJNH676+/05Nm1q+8376tOy9va09Ne1n6eJGurpg954e9rdl+3zF+XN406vnMWVK6f2+8O9m8dynzeSs06Yyd07WDm+5vYMPf2oHX/72Hr73k73Mnd3AZz9SenS9xkdPd98YkIaG8veC3o797u72Ma/TQE2NU2meMpOenu4DUyrMnL6UY494ItOnjWiGWI1Qvd7Hev/vtZXsZ8IjOI7DOYammEJHauN2bmIDa7iXO5id5rEsDs6BBUOOUpMk4MsppdNSSv9VKuAAIKW0MaX0FrIRwb1eHhXO+V2DRWQdsE8vBhzkdbkTeDLw08Lif46I48uUtQ84LKX0LymlSwYGHORldubpsM8i6+yFrDP5+UPU8fX0n1LhRSml9w1M0Z5SugZ4BHBt/rmGFRHPIZvqAaATeEpK6Y3Fju+87J6U0o/I0qH3jug8BnhVJfsZpg5NZJ23vTYCD00p/V9vwEFeh7aU0ifJgjZ645fnUl2ARV0Yq+OeUtpMdu70HrczyQJlij5BXxaPbuAfxnnUfTnnpZSel1L6frkpVVJKV5B1sF6UL2okOz8qtQj4PfDoYsBBXvbGPJikmG796Xk2lEEi4jzgOYVF3wSeVgw4yMtdSxZg8NXC4mdFFXOzTzJPLjz/15TShQMDDgBSSt0ppStSSv9GHiQ0UJ4N5Cv0dZh/FTg3pfT7Yod5Xt7tZAE63+jdnKwTvVS5K+nfbn4JPK4YcJCXuSOl9HqyTvqpDE57X4kFwAdTSi9JA6brSCn9lex61vs/j0Vk95sguyd9pBhwkG/zI/oHvj0iT5s/yFgdvxIWAj8hO6+uK76RX7ffBlxYWPzciDITQI/MeFxDRlUe2PLhwqKbgYcUAw4AUkq7U0rvpH9wwhHAOyrc1Zh+RxFxTblHpWVoZFJKbOReAJaygobwv+aqzDFxCg+LJ3NuPJVzeSonchZddHA1l3Bb+stEV0/S/dQKjqaRJnro5jp+z/a0me7UTUdq5950J7dxA0Gxk9BRxpKGtvbaI1h/wxHsuetI7r7qcP71FXP5zFd2ccoj13LpH0snrP3Sfy3hsY+YcSDgAODYo5u58D+X8G+vmgvAhd/cza13jGzKB93/HbPycTzsjDdy7gPfzLlnvoUTj342Xd1tXP3XC7ltzS8munqqS30/yy1lBUfFCTRFFjjTHFM5Ic5gdp4wfA23lCzhYOAvG5KGNLCTfJh1vw38MX85g/7pjUfTDrI51ktOJphS6iYb7ddb9wbgFWXW7RnYQVRO3jn8vsKikqOp89HAxU6Gi/IRoeXK3UuWDaBS/154/sGU0s+GWjmldC9ZFoper6liX+U8nf5znb92qJGt+QjU4lQAT89Hk08mY3bcU0oXk42U7vX6iOjNWvAc+mc1eHdK6fKKaz2GKr0+pJS66N/p+pT8PKlEJ/DSlNJQYc+vh7znJvNPZdZ7beH5JrKRyeWuIykvpxhU8rphazs5HVZ4/vtKNsivs6W8lL4AqjuBlw+xbu9xfhXk+cXhhIh4ZIlVX07fVANtwMvydlXOf5Bl6anFbcDby72ZBzr8rrCoGfhSSunnQ5T5VfruSQE8uMx6Y3X8BtoDXDDMMfxQ4flM4AEVlFuVcbqGjLaX0je1AcCLhwoCSyl9kSx44MD2EVHJRKN18R1NNg2FEZU9lD116M7fa6xwBGZjxeV2VVzuNjbSkadrdGqF+jGZ2hBAU0xhaRzGGTyCRppYy+1sTrXMQqbRMlZtaKR661VJnQBHqE+ger0OTYsZnMRZNNJIK7u5lsv4HT/kMn7CrVxHMy0czjEH1p+CaaknSr22IU0eM6b3BQ31ZiAoZd/+7L2ZM0bWxRQRrDh0Ch9+50I++q6FbN/Rw/P/aSP79lWXQeEdr5/PtKlBSnDRxSNO8KoRaChMTdDTU/6/tN15RoTGxomd0qmpaSpLF57EGSe8lMbGFtZu+CObt908oXU6mNXrfaz4egWrSm67Ip8JtZU9tI/NzON1z6ADSaPtisLzgXOmj5ZvpJSGnIs+ZfMaf7ew6Jnl1q1SJZ/vFLL5tnt9usx6B+SjGP8w3HoRcSpwav6ykyx9eCX+Fw5MRHRYRBwz1MoVeHrh+Vr6H+tyPkZfSGAj2QjdSWGcjvs76AvaCeArEXEOWVr/XpfQf0qLSSMflb0tfzkTKJd9ZKBfppRuG6bs/WTTMPR6fD4lxQH56/MKiz6fUio9IWlfuXvJpmvo9cg8Zfr9TfGv4JF2Wl5QeP6JYYJFgGxEOPDDwqJHl1jtSYXnP82DeoYqs4f+gU7V+OIwHb0Afxrw+nMl1+qrTxtwfWFRufZ/QeH5aB6/gb5dwX30NrIsNr1OqKDcMTOCa8hoK97/Lk8pXVnBNh8pPJ9LluFoOGP6HaWUTi/3qLSMelSc73GoeR3b88veUPND9i+3L4lEO+V/OOioYM70XhvIknXNYDazY15F9dDYm0xtqGhqTGMxy4G+aTs0McaqDY1USwVzrBfbZvM41UuD1fN1aGEs40GcxwqOZhZzaWEas5jLSo7lLB5N5D8ztzCNhgmLD1U9tyFNDocs7etYW7+x/H+NN2zK3lu2ZPQCTF72/Dm0tATrN3bz899WNEbsgBnTGzjx2GYA7rrH6T0mUktz309n7R3lZxds78x+lmuZUkuSytE3tWU2i+dniWbXb6l0VliNtnq9jxVfTz8wo3Z/xeVtQ+zj/syQQ0kVi4hFwGPIOtUPIZt/fmAoYjHM61DGxlCjSYsuAl6UPz8kIg5NKd1XbuV8yoBzgdPJ0uHPIRvNWMwLWExdPD8ipuUdnkVnFZ730H9E7FB+S/nRr72K85lfl1LaVnbNgpRSe0TcQl/H+Rlko3lrdXbh+UXlRosPqMOaiPhLoQ5nA58cQR3G05gf95RSV0T8PVnH5FxgMdmo894Awa1k0yrUNlneGIuIB5C13+PJ5hefBYMmyCqO7j0UuLGCoqs539+dP28CTgOKGSHOpP/fPcWRx0P5caHcBrLz+9cVbjtZXEVfR/UnI6IT+OFQI+xLyafUOamwqJrjVMwHfcaAcmfSvzP1NxWW+dsq9l90xfCr9Ovo7QAqSUlfnKphUA/jWB2/MoYNcsvdByzNn8+toj5VG8NryKiJiGaya0uvSq8jvyfL0tT7vZ9N/2mgSqm772gymFH4D34ru/u97pVSYl+eeGQGlcWRNUcLU1IznXSwl90sOHDI+9vL7orK7UwdbMkT6RxiloO6MlnaUCm9P6LtP5BYRxNhrNrQSM1kNtvYSCvlY25byToFptBCc0zsiMODWb1fh6bFDI7hlJLv7Uk7AZiLc2FPpHpvQ6p/x65qJgJSgr/e2sHqVc2D1unpSdx6ZzZK/fhjBr9fq5aWYMG8BtZv7Oauezp
HrVyNrxnTFpL9nJ5o3bc5f91fSj3s25/9vDpj+uLxreAQegMm9rfVw6y2B6d6vY/NYA5b+/0cqFIMOpA0rHzu6Q+Tje6r5roxd0wqVHknw8D1jiH7cb6fiJhCljb9DfSlta7UXBgUtlb89XhtpdM3kM0LPZyTi/uJiGommSrWq9rPeUAenLGysKiayWNvoK8DvnQeovo0Lsc9pXRPRLwE+H6+qJiR6IKU0voSm02oiHgK8EHguCo3nVvhepWe738ly6TRGyR0DP2DDortLZG1xUrcDHTRd+1bxf0v6ODDwKPIjt18sswlW/J2/nuyUf03VRBcdBL92+wnIqLSwIXlhecDz5NDB5T7twrLvJss00i1Q2wq+R9E8bq+rYLMCAO3mV7i/bE6fqVU+r+kYk7KUnUesXG4hoymw+gfbFnR/S+llCLiRuBh+aJK7n918x1NJk0xhdlpHrvZwTY2HRj5XbSL7XSR/YA5n8p/3JrHYjZzH9vZ1C99dK+2tP9AZ95w5W7iXnroIQiWGnRQVyZLGyplf345MJ31xBrLNjQS81jEPdzGXnbRnvbTEtMGrbONTeNaJ5U2Wa9DHamd7XkbWtJvJkaNt8nahlQ/Zs1s4IxTWrjq+nYuvmwfz3ji4FHoV17bxq7d2ZiYRz5k8D2lVntbe9iyLftvcLXTNrTu6+GmWzoAOGKFfw9NpKbGFmbPOITdrevYtutOFi8YnKhw19776OrORpTPn3PkeFexrN5gg8aG0QumUXXq9T42n8Xcw60A7GMPs5k/aPt99GX2mHaQ/kTj1VfSkCLiTOBX1Pbj/lgNj6holHmJ9UqNLJ0G/B+VpaMupdRnnFt4vrOKsioJoSwOGVgCPLaK8ovm1LgdDG4LW6rYtrjuZMolPG7HPaX0g4i4hCzrRq8LU0oX1bjPMRMR7wPeWuPmlV4fKs0q0RYRrWRp12Fw+yq+3punu6+k3M6I2EVfG5hM7bYiKaVfRcSrgY/DgclXFwEvyB8A2yLiZ8CXU0rlMggMHNJU63V14Hkyd8DrnZUUknf07oQyocvldYzx+tA/g06vsTp+pQw7bUMJpeo8IuN0DRlNA8//sbz/1cV3NBktZQW72cFG1nJkOm5Qx9o9ecKhWcxjRpROiVi63MPYzH1sYxN70k5mxdx+76/Ny21mKvOG+dGjd2qF+SyhJUw9XG/qsQ31pB4aovwP7/vSngPZM+YyeCSZxtdYtaGRmM9immmhg3bu4bZBI9X3pJ0HOoyXcti41Enl1eN1aCgpJW7jenroYSZzWMSyirfV2JhsbUj153lPn8VV17fzzR/s4e3/On/QFAof/fROAE4/uaVkJoRyuroSTU3l/9vyic/vpDNPcPCQs/r/nZxSIqL8tu/7z+3sb0tEwOMfOaPsehofSxedxO7WdWzcegNHHnouLc39rzX3rM9mmJ0145CSmRDGQk/qHnL6n337t7Flxy0AzJ1tcPhEqsf72DwW0cI02tnPWu7gxBIzb6/ldgBmM4/mg/T/+tWFi0k6qETEDOAH9HX4dAJfB55LNiJzPjA1pRS9D/rSkI+lSjt4Bv5gX6qD4v3079i5FngtWerjQ8hSOTcWPt8RVdZ1tI3WX80juf4PPI7VdLgVv5PJdOcdt+MeEefRfzoHgMdE1NeEzxHxVPp3Fq4D3kN2Ph1Jlhq9acD14Z4adlVr+xrYTouvq+0knqzttmIppU+RTWHwGaDUPPILyAIQfhMRv42IUtPn1MP1aTI7qI7fOF5DRtPBeP+bdJZzJFOZTjddXM8f2JuyUQpdqZPb0w1sYR0Aq/rN2pK5OH2Pi9P3uDP9ddB7izjkwEiGG7iCXflMSz2pm3vSbQd+XDiK44fsHG5Ne9iVX2adWqE+1WMbuo3ruTVdz860le7C7EedqYP1aQ1Xcyk9dNNIEys4ehSOgkZirNoQZN95R2o/8OjVTWe/5T0DZmRriEaOJBtluJbbuSfdRk/elnambdyQzy41hwUsikNGeAQ0UvV4HQK4I93ItrSRrtSX8nx32sFf+CMbuZcGGjmeM4bsFNT4qNc21J26+l+ryK5VPfT0W95VUSI5jaWXv2A2hx/axJ69iae8YAM335r912fP3h7e9N6t/PBnWYal97158HQqjcvuoHHZHbz7I4PHkJz48LV88sKd3Lmmk2IyxVvv6OB1b9vCOz6U/Z38tMfP4KTj+v/36zkv38jbPrCNq69vo6Oj/7Yvf/1mPvTJnQC88O9mcfxqR6lPtOWLz2Rqy1y6u9u5/pavs3ffZgC6utu5/Z5fsmV7lvB31YrB4y0uvuIdXHzFO7jz3tJjXjq79tPR2Xrg0au7u73f8p6e/skjb7v7Z9x690Xs3LOW7p7OfuWt33wdV//1i/T0dNLY2MKKZWejiVOP97GGaODofFbUjazlzvTXA38TdaQ2bk5XszsfU9r7d/fByEwHkobyIrK01pAFHDwmpXTpMNuMx1CNWVSWFWDghD67ii/yTtx/Kiz6LPDKYVKIV/L5dhaez61g/V6VdCoXy/6/lNJTqyh/tOwc8Lqa77z4nQwsp57tLDwfs+MeEUuArzJ4tOoK4PPAs8ZivzV6e+H5VWTXh13lVs7Vcn2otX0NrMvOEdRjLNpt+dDqCZJSuh14ZUT8E9k0KA8mSwf/CPqPwn8E8LuIOD2lVJwYeOeAIuenlEZjEryB5c6tYttq1p1oOwe8Hq3jV6/G6xoymnYOeH0w3P8mncZo5JR0DtdyGXvYyZ/4FY2piW76frxexYksiOqSoEQEJ6cHcQ2Xsp9WruJ3NKYmeugmkf3puJwjWR5DpwbtzXLQxBQWYcdeParHNtRNNxu4i3u5A4CmlCUm6k0pCtlonJN5EFPj4EzjWU/Gqg0BXMnFtPWbsSlzI1f2e30aDxuUEvbQOIo9aSfruJvbuYE7uJGG1HigXtOYwck8qOo6afTV43UIYCP3siZPK9yYmkj0HOg0nkILJ3EWs+srVv6gVa9taA23cneJ2fK2sP5Axh6AZRzOCZxZVd00uqZNa+CHX17GY569jmtvbOekc9cye1YDe1t76OmBCPh/b17AeedW93fH7Xd18tq3beW1b9tKS0swa0bQui+xv63vp9jHPXI6X/nvJYO23bqtm+//dAcf+MQOGhthzuwG2tsTrfv6tn3mk2bw6f8wy0Y9aGycwimr/55rb/4ye1o38Ke/fJLGxha6uzvonR111YpHsWBu9bPvXnnDp2lr3zlo+Y23/2+/16cd/yLmz+kbP9jd08mGLddz78YrgaCpcSqQDkzzANA8ZSYnH/McpraMJEmwRqpe72NLYwV7064D97M13EJTmkJnYUzK0ZzMwjh4sz4ZdCBpKI8rPP9WBQEHwLjkYjyCyoIOBt4dNg14/Sj6UonvA15fwZzllXy+4kjMFRExPaU0+JehwSoJgSvO8Tz4L/BxkFJqjYh99M0dfVQVmxfX3Tx6tRpzY37cIxsO8rVC+buA/wHekr9+ZkT8Y0rps2Ox/2pExCLg9MKiNw3XWRgRM6mtA/gI4LoK6nQofeczDD7fi+2tOS
IOTSndV0G5i+mbsmFgOb2Ko52nlHi/lLr9NS6l1EOW9eVa4L8johF4JPAu4Jx8tVXAq8myxfQaOAf9Eiq7Vg/nPqCHvhH8xwHD3o8i4kgm14jysTp+dWecryGjaeD5fxTkQ0OHN1nvf5PSrJjLg9J5rOEWtrKBdvYzhRbmMI8VHM38qO1WPjWmc1Z6NGu4lc2so41WGmliFnM5lKNYUjIJTJ+U0oGggyUcNmRqT02semtDK1nNDGaxnS3sZy8dtNFDopkWZjKHBSxlOUfQFJX+GaKxNlZtaKSOi9OZnxZzH3exh5300M10ZrGY5axktW2ojtTbdQjgCI5jKxvYw046aKOBRmYwm0UcwmGsYko4srie1GMb0uRyygkt3HDJCj74iR1cdHEr6zZ2s2BeI2c+oIXXvXwuj3po9YGOP/rKMn77+3388ao21m/qYsu2bqY0BauOmMKZp7bw98+cxRMeVToJ4L//83xOOr6VK69p474NXWzf2UNDwBErmjjr9Kmc/3ezqw6C0NiaNWMpDzrln1iz7vds3XEr7R17mNI0nTkzl7PikLOZP6ean5RHbuXyhzJj2iK277qL/W3bs2wIqZvmKTOZOX0xC+Yew/LFp9HUNJl+Srr/qtf72Ko4iXlpEfdyJ7vZTicd+XQMC1nB0cyJwRlgDiYGHUgaSjHn65+HWznvND1nuPVGwVlknWGVrNerA7hxwPvFz3dzSqmV4T2kgnWKw0wayEYFX1TBdo+sYJ0/knX0AZwaEdNSSvsr2G60XQM8NH9e0XceEU3Qb7Kjq0e7UhUq5hqtNO/jeBz3NwKPKbx+eUrpfyPiOODp+bL/jIjLUyqTb7VPLZ+xGisGvB72+kDWTmpJ+34W2TQvlaxXdM0wr88B/pfhDWzfpdptcbT//ArKBPJ8XMMb6+9yWCmlbuDXEfEHsgCQY/K3Hkv/oIMbgFb6pgk4G7hlFPa/NyL+St8xexTZNBDDqeSaWk/G5PiNg54Brytpp2N9DUmFeozaeZNS2hIR99IXgHgO2bRTQ4qIBcDqwqKJuv8dVFpiKqs5ldWcWvE2j47hEwo1xRRWcSKrOLHqOkUED+WJVW+niVFPbWhGzGYGs1nJsRVvo4k3Fm3oIfGEEdYKlsRhLBmXsQIaqXq6DgEsjyNYPuEzTqoa9daGjooTOKpEKmzVr6WLm/j4+xbx8fctqnib7g3lR64/+bwZPPm82mYWPO/c6QYVTEItzbNYfcQTWH1E5X/DPPrs9wz5/kNO+9ea6jJj2iJmLF/EyuUPHX5l1YV6u4/1WhBLWUD1WcsOBpNizldJE6baYQ6PA5aPRUUGeF6F6/194fmVKaX2Ae9X9fkiYgrwwgpW/QvZyNxer6yg7AeQpTIfzsVwII9qC/D8CrYZC8VRxo+OqCiX0RPpn6K9kswZY6EYXDKtwm3G9LhHxFnA+wqLLkwp9XaIvwRYmz+fBnw7IoYLua3lM1ajliFQL6lxX3+Xj7Qfzj8Unt+bUrq7+GZK6S76n5cvqHD/5xeeb4I8n2h/xewmqyIqymv89OFXAcb+u6xYnrHlF4VFSwe830l2rvR66Sju/qeF50/KM1uUFRENwKtGcf9jboyP31gaGLBXSTsd62vIWJ43xXvXsyq4HkN2zyj+v+uy0a2SJEmSJEmSNLEMOpA0lPWF5w8basW8k+0/x7Y6Bzw0IoYcJhYRzwFOKyy6sMRqxc93UsSwkw++nQqCKvJRwZ8rLHpiRJTtpI6IGcDnhys3L3sL8JXCovdFxMARo+PhQvpGt04B/mOolSOiBfhAYdE9wK/GpmrD2lB4flSeoWNIY3ncI2IO8C36sg/dAvxzYd87yDrUu/NFJzL8uVb8jEePRj0HWD/g9XDXh0cBz65xXyuBVwxT/tnA0wqLSp3v0P88e1JEPGKYch82oNzPl5mCpZhFoQl4xjDlPp/KMx0Uv8tFeXsZNZW0/wGKU01sL/H+hwvPz4mIIb+7KnyOvnNgKvD5PHtKOW8EHjBK+x5PY3X8xtJOoK3wupJrzlhfQ8byGli8jiwC3jzUynmWg7cWFl2eUrp5lOskSZIkSZIkTSiDDiQN5beF58+KiCeVWin/Qf2n9E8dPNa+FhFnlqnPQ4EvFBbdAXynxKqXkKVghmz0+idLjaiOzL8Ab6uifh8D7iq8/lJEvDWfk7pY9mnA78jmtt5SYdnvBrbmzxcDl0bEg4bbKCKWRMSbIuIbFe6nrJTSGvp3wr8wIt5eqgMzD6r4Dtk87L3emwdnTIRiB/F84EUVbjdWx/1zcCBHZTvw3HxE+QEppcuBYm6xV0TEUB3bxc94SkQ8erh6ViOltBa4s7DoI/l1YJCIOBf4PiNLcf7RcoFG+fQTxfK3A58uU86n6H+efTciTi+1Yp595HuFRduA/y61bkppPdkUHL3eHxElA5Qi4glUNjVArxvoy7IBUFsOufIOj4g/RMQzI4aehDW/5j63sOh3A9dJKf0B+HZh0Sfz9j/klF4RMSUinhIRv4uIwwe+n19zPlpY9Djg5xHRL890RMyLiI+QBTm1AXuH2m+9GavjN5bya/n1hUWvGm70/zhcQ4rXwBdXEFRYsZTSZfRv+2+LiJL3kYhYSPb3UW8u0kR2L5EkSZIkSZLuV4b8AVPSQe9zwJvIRrY2AD+OiK8BPyFLMz4PeCjwYrK0+buBi6h8+oNafSvfxx/z+lxE1pG4BHgS2ajw3uCBLuAlKaW2gYWklNZGxHeBv8sX/T1wXER8Dvgb2Qj+48jSsPcGOHyGYUZd52W3RsTzyFJlzyK73r6PrHPiRrLUz4fT19m8BfgX+uaG7hii7Psi4lnAL8mCJVYCV0TEb4Gf5XXfRTYv+CKyEdUPpm8+7NGa1uB1wMOBI/PX7yEbPf5lshT0U8iyTbw8r2OvH6WUyo1EH3MppVsj4mrgjHzRhRHxZuB2+h/3T6SUflvYbtSPe0S8lL72B/CGlNJfylT9fWRz1D88f/2FiLg677wb6LdkI32XkXXU/ToibiKbpqHYgf22lNJNZfY3nI+SdeIDHA/cGBGfAq4kO46HA08lm0YgyI7RiQyey304vef7TyPie8APyaZJmA88huz6U+zgfE1KaXOpglJKWyPiJcCP8zotAP6UX0d+Bmwm++4eRzatQm8K+AS8rFy5uQ+QXRshm+/9uoj4BNnx6CQ7159Bdo0C+DJwwXAfPqW0NyJ+DPROSPaOiHgxcDOwv7Dqt1NK3x5UQGXOyR+7IuIXwJ/JzoedZNfSw4BHkwUc9AYmbKdMEAZZGvxjyM7/RuCDZB3R38nL3kJ2/Ofm650BnAf0ZnEo17n8TuBB9I2KfzTwt4i4lWzk/Byy8673e3st8Bb6sjMMnGKnXo3V8RtLXyf7bsjrsiEirif7u6A3uO+mlFIxeG8sryFfpy9A5mTgvoi4lqzd9tZnc0rp5VV8xqILgOvIrkMNwBfze/63gLuB6WTn1MvpCzgA+K+U0sVIkiRJkiRJ9zMGHUgqK6W0OSLOJxul3kT2w/r59J/jvFcr2Q/8Z41D1f6RrKPldLJR6uVGq
ncDz89HJZbzKrIU3L3plx9A+VHSXySbRqCidNcppT9HxHnAN+kLLphKXwBDr1vJOhSLc5TvGqbsS/OMDj8obPfI/DEuUkq7I+LhZHO8n5AvfmD+KOcHZMEdE+1lZAEhvSNrV+WPoh8N3Gg0j3s+Qv+/Cot+klIq14lLSqknIv4B+Ete73nANyLi3IFZI1JKnRFxAVkH/fR88Yn5o+jj1da74DPAo4Bn5q+XAe8ts+61ZMFA19ewn7eTdaY+gew8edYQ674hpfTNoQpLKf0k7xz8KlkHehNDX0c6gQtSSj8cptyfRsQngVfnixZR/ni8iywI5YKhyiz4F7KO5ZX560Ppf72A2o7tQHOA5+SPoewAnpJS2lDqzZTSvvza8GX62scK4A0jqVxKqS3PFPFl+reD1fTPtNMBvD6l9LmIKGYIGfK6Wi/G6viNsc8CTwYem7+eC5w7YJ25A16P2TUkpXRRRFxIFsAB2XXwIQNWu6eSssqUv7Zw/+vNavKY/FHOJ4HX17pPSZIkSZIkqZ45vYKkIaWUfkA2mrTcaOhu4FfAaSmln49TnfaQjSD/L7Jgh1KuBM5KKZWaVqFY1jay0ZnfoG++8IHuIut0fEmZ94cq/09kHfL/TNbJuJmsQ+w+stHoLyc7djeRZWroNexUCymlq8gyMbyJbAT7ULqAK/J1R63TP6V0H1kQxdsYus63kXUYPSulNOGjjVNK15N9L+8Efk9W97LZJQZsO+Ljnqce/zZ9AQHrqWCah5TSOrKR/b0ekn+GUuv+imzU94fIzodt9M9yMCIppUTWOf0OstHMpewgG6V9dkppZ4276ibrzHw72SjlUm4BzkspfaSSAvPrwslkwSPljkkXWUaEU4YLZCiU+xqyDCA7y6xyG1lnfVXp1fPz7BSyDsvfkGWxGJS9pUabyDqzf1dBmbvJOpePz6cBKCultDel9Czg8WRT2Qw3ncoasoCvh+RTKZQrtzWl9Gyy+9LX8+3ayNrGDWTt/cSU0ifzKQnmFzavdAqbCTdWx2+spJS6yAKD/p4s2OlusvtzGmKbMb2GpJReSpZZ5Ftk596eoepTrfy+fRJZxoZy9YdsqocnpJRek1LqGa39S5IkSZIkSfUkst/7JGloERFkqZ7PIBtlvYes4+vylNLGMd73uRTmT04pReG9GWSjzFeQTWOwCfhjSunWGvazjCx1/WH5oo3A31JKV9da9yr3/ynglfnLb6SUnl/l9keTfT8LyUYs7yfraL4NuDEP1hgzEdFAFoBwAtko7y6yIIurUkq3jOW+J9JEH/d6EBEzyVLeHwNMI+vcXQNcmlIatUCHiGgmGz19BFln8hbg2pTStSMoczZ95/1cstHw95LVvaaR8RHRkpe5miy1/yay1PJ/rrWe4yEippB1oh5NNup8Jlkwznay6RyuLTVVTYVlzyELFjuU7B6SyI71GuDmlFLNo86H2OcDyQJuILsezU4p7R9ik7o1EcdvPI3XNWSs5OfOg8nOnYVk94FNwB/KTIFT1x7T8Gz/gyhJkiSNwC/XXz/RVdAk99hnvHCiq6DJ7k83THQNNMn9uue7VU/hatCBpLo3VNDB/UXe4bKWLGU+wCtTSp+ZwCpJ0qQWEZ8HXpq/vDKl9KCJrI80WRh0IEmSJI2MQQcaKYMONGIGHWiEagk6cHoFSRojeXaIStZrBD5HX8DBPrLU+5Kkgiquq0+i/1QkF45NjSRJkiRJkiRJBh1I0tg5LSJ+HxEXRMTigW9GRENEPAz4LfC8wlsfrXbuakk6SPwyIt4TEaeUCkCIiEMj4oPAj+j7O/c24OvjWEdJkiRJkiRJOqg0TXQFJOl+LICH5A8i4l6yueL3AbPJ5nufM2Cb3wHvGcc6StJkcgjw9vyxJyJuA3YCLcBy4IgB6+8CnpdS2j+elZQkSZIkSZKkg4lBB5I0dnoGvD4sf5TSTTbFwutSSl1jWitJmryK19VZwOlDrHsd8A8ppb+NbZUkSZIkSZIk6eBm0IEkjZGU0rURcQLwROBs4FiyUbozgE5gO3A7cCnwtZTSnRNVV0maJB4GPCn/9xTgcGAu0EiW8WAD8EfgJymliyamipIkSZIkSZJ0cDHoQFLdSyldQjZVwaSTUroZuHmi6yFJ9wcppZ3A1/OHJEmSJEmSJKkONEx0BSRJkiRJkiRJkiRJ0uRk0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSapJ00RXQJIkSZIkSZIk6f7giQ9+6kRXQZPcL//w1Ymugia5xx5y6kRXQQchMx1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJ
kiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSJEmSJEmSJEmSJKkmBh1IkiRJkiRJkiRJkqSaGHQgSZIkSZIkSZIkSZJqYtCBJEmSJEmSJEmSJEmqiUEHkiRJkiRJkiRJkiSpJgYdSJIkSZIkSZIkSZKkmhh0IEmSJEmSJEmSJEmSamLQgSRJkiRJkiRJkiRJqolBB5IkSZIkSZIkSZIkqSYGHUiSJEmSJEmSJEmSpJoYdCBJkiRJkiRJkiRJkmpi0IEkSZIkSZIkSZIkSaqJQQeSpLoSEXMi4g0RcUlEbI6IjohI+WPnRNevWhHx5UL9v3yw7b9aEXFBob5rJro+GrmIaIiIf4iI/4uItRGxv/Adp4g4daLreH8RESsHHNuVo1j2ucWyR6tcSZIkSZIkSZNf00RXQJKkXhFxNPAb4LCJroukkYuIacBFwCMmui6SJEmSJEmSpLFh0IEkqZ58k/4BB3cCa4Cu/PXe8a6QNJyIeBfwzvzlpSmlcyeuNnXnvfQPONgC/A3YX1i2e1xrNIryTAJ3FxYdkVJaMzG10cEgIi4BHp6/fHdK6V0TV5v61p7aWMMtbGUD7eyniSnMZj4rWMX8WFJzuV2pkzXcymbW0cY+GmlkJnM4lKNYEodWVdbadDu38RcApjKdh8QTaq6XRl89tqF9aQ9ruYPtbKaNfSQSLUxlDvM5lKOYF4tqrpdGXz21oTvTX7mbv1VU/jwWcXo8fPgVNebqqQ2V472svtVjG/JeNrm0d7Vy184r2bzvLtq799LU0MKclqWsnHMaC6YdXnV5PamLbfvvY3f7Rnblj/buVgBOX/oMFk0/Yohte9i2/x627LuLnW0b2Ne5g+7URXPjNGa3LOHQWSeyZMbRNX9WjY2Nm7v44Cd2cNHFrazb2M2cWQ2c+YAWXvuyuTzqodOrLu/WOzr43k/28ufr27jtzk62bOtmb2sP8+Y0csoJzTzvGbN4wbNm0dAQg7Zta+vhoov38Yvf7eOq69q4655OOrsSSxY28aAzpvKK82dz7jnV10ljp57uYz2ph/XczS52sJedtNNGJ+000Mh0ZrKAJRzGKlpiWs31uj8w6ECSVBci4jTgjMKi81NKX52o+kgamYhoAF5SWPQF4BUppe4JqpKkg8SetJNruYxOOgBopIkO2tnKBraygVXpRFbGsVWX25b2cQ2Xsp/WA+V20ckOtmSPdCTHxmkVl3Unf626Dhof9diGNqd13MSV9NADQNBAA0Eb+2hjH5u4jyPSsRwVJ9b4qTWa6q0NNdFEMy1ly01AJ+0AzGJu1fXS6Ku3NlSuLO9l9ase25D3ssllT/sW/rzhf+nsaQOgKZrp6N7Pln13sWXfXRwz/yEcOfesqsrc27GdazZ+v6b63Lz1Yu7bc+OB10EDDdFEe3frgTotmXEMpyx+Ag3RWNM+NLpuuLmdRz9rHdt2ZOf87FkNbN3ezUW/3sfPLt7H/3vzAt70mnlVlfmjn7fyjg9tP/B62tSgeUqweWs3v750P7++dD9f/MZufvL1Q5g9q//s8k85fwO/uaxvDExLSzClKbh3fRf3/t9evvt/e/nnl87hP99r8FM9qLf7WCcd3MJ1B14HcWDbPexkDzu5j7s4OZ3N/Fhc46ee/Aw6kCTViwcWnq+9vwQcpJQuAC6Y4GpIE+EY6Per9fsNOJA01rpTN3/hj3TSwSzmcgJnMjPm0JU6uYubWcvt3MFNzEpzWRBLKy43pcQN/In9tDKV6ZzIA5kbC+lO3dzLHdzBjdzHXcxKc1keRw5b3q1cTzddzGY+u9k+7PoaP/XYhjpSO3/lKnroYRZzOZYHMJv5RAT70l7u4EY2s467uYX5aYmjRCdYPbahw2M1h7O6bNmb0zpu4AoADmFlTZ9bo6ce21Ap3svqVz22Ie9lk0t3TyfXbPoRnT1tzG5ezEmLH8+s5oV09bRzx44rWLPrGm7bfjmzm5ewcPrKqsrOsiUsYXbLUua0LOX6Tf9X0XY9qYeWxpkHMhrMal5ERNDWtZe7dl7J2t3Xs6n1Nm7fPpvVC8zYM9H27+/haedvYNuOHh5wYgtf+eRiTljdwu49Pbz3Y9v52Gd28tYPbOMBJ7Vw3rmVZxc4fnUz73/LAh529lSOP6aZObOzAJMtW7v50rd38/b/2Mblf27jX9+5hS98rP9I+K7OxNFHTuGl/zCbJz1mBsce3QzAnWs6ecv7t/G9n+zlE1/YxTFHNfPKC+aM3sFQ1erxPtZAA4exinksYg7zaWYqEUFP6mE7m7iNG9jHHm7kT5yTHseUaB7twzIpNAy/iiRJ42JB4fm9E1YLSaNlwYDXnteSxtw67srTIzZxCg9mZmQ/FjXFFI6JU1jEIQDcwU1VlbuF9Qc6VE7hHObGQgAao5GVsZrDWAXAndxMT+oZuqy0ni2sZxGHsIDaU0JqbNRjG9rKBrrz2cZO4RzmxAIispSx02MmJ3IW05gJwGbW1fKxNYrqsQ0NZwP3AFmWg976auJMhjbkvay+1WMb8l42udy75wbaunbTGFM4benTmNWcfddNDS0cu+BcFk/Pvuvbtv++qnJnNS/iUYf/E2cuezar5z+UpVVMh7Bi9ik8fMVLOHr+g5ndsvhA+5naNJPjFz6K5TNPAGDt7uvp7umsql4afZ/72m7uua+LmTOCH391GSeszjIuzZ7VwIffuZCnPm4GKcFb37+tqnKffN4M3vSaeZx9xrQDAQcAixY28sZXz+NNr84yJ3zrh3vp7Ez9tn3fmxfw18tW8G+vmncg4ADgqJVT+PZnl/DIh2Rp8T/66R01fWaNnnq8j02JZlbHqSyO5bTEtAPXoIZoYGEs41QeDGQZEbayocZPPvkZdCBJqhdTCs+7JqwWkkZL8ZwmpeR5LWnMbWQtAEs5jKkl5lI8nGMA2MNOWtOeqsudzxJmxdwS5WYjiDtoYzuby5bTlbq
4hetopJHVnFrx/jV+6rENdZClNZ5CM1Nj8EiwhmhgJtkPcd2YVGii1WMbGkpHaj/ww+gyqp+fW6Ov3tuQ97L6V49tyHvZ5LJ+798AWDbzOKY2zRr0/hFzs9lRd3dsZm9H5ZlOIuJAR1215k5dRkOUT9y9fFYWdNCdutjbafaVifbNH2TXluc9fRbLlw3+3v7tVXMBuPbGdm69o2PU9nvGqVMBaGtLbN/Z/1pyzpnTaGws3f4iguc/O2vrd6/tYvsOr0MTqR7vY8OZHjNpyn8KbWf/MGvffxl0IEkiIpoj4ryI+EBE/Doi7omI1ojoiIhNEXFVRHw8Is4c5f1+OSJSRCTgnYW3Ht67vPgoU8YZEfGmiPhRRNwaEbsiojMitkfEzRHxpYh4ej6/fC11XBIR/xIRP42IuyJiT35ctkTEFRHxXxHxuIjSE8YVP2NEfHmYfU3I91CtiDi31PcSEcsj4m15PTdFxP6IuDsivhkRjx3hPpdGxFsi4s8RsTki2iLivoj4QUQ8vYbyGiPi7yLi6xFxW95u9ufH/BcR8dqIEn999i9jTaVtN39cMEx5j4+Iz0fE3yJiR+EzXhIR/x4Ry6r9nNXK2/sbI+K3+b7b8rr8LSIujIgnDrP9BYV28bsB75U6JueOsL4Tde16V/4Z7x7w1t1lPuclZcpZGhHn58f2z/l1pSMi9kbE2oj4WUS8ISIGZo2otr7zI+J1EfGHiFiff69rI+LH+XlQ268+jOy8j4hnFo5Rd0SsqGK/MyNid2H719X6GcqUf2REvDkiLi60q/aI2BgRl0XEf0TEQ4cpY3pEPDUiPpafx+vy49MWERvy7+ODEVE+33VWzsrCeVXME/rOIa43K0fhMEw6XamT3WSjUhZQOsXiHBYc+CGgmh8RdrAlL7f0aM6pMY0ZzM7XLV/uXfyVdvZzBMeV/MFdE6te29BUZgDZqJm2tG/Qtj2ph73sAmB2v5mNNN7qtQ0NZSNrSSSCYCkV34o1RiZDG/JeVt/qtQ15L5s8uno62N2+CaDs1AlzWw6hqSEbub59/9rxqtqQpjQWOiZTyZ8QNU727O3hmhvaAcpOnfCg06cyZ3b2U+1vLx+9Dtorrs7Kmj4tWLyw5E+1ZS2Y17d+tzEHE6Ze72PDaU276SLLsjItv+cdjMqHhkmSDgoR8STgq8C8Mqsszh9nAK+NiB8CL0op7RqnKpYUEYcBlwJHlFllXv44DrgA+GtEPDul9LcKy58CvBt4LVDqL+SF+eNBwD/ndTm38k8waH+T8nvoFRHPBL4I+V9mfVbmj+dFxA+AC1KqIgQ1K/vvgM8BA3O9LgeeDjw9Ii4Cnp1SGvZ/KnkH9BeBE0u8vSJ/PBZ4e0T8W0rpy9XUt1oRsQr4MuR5uPpbnj8eDrwtIt6bUvqPMarHv5EFUMwc8FYLMBc4FnhxRFxB9j3eNhb1qNT94Jz5InA+pYOApwAzgMOAx5N1Lr8hpfTpGvbzMODbwMCglcPyx1OAV0XE81JKVeV/G4Xz/sfAeuAQsuPwUuAdFe7+74HeITf7ga9UU/dyImIW8BHgxZT+v9KS/PFQ4I0R8ZWU0gUlynk58J+Uvn8ALM0f5wBviIgvAP+cUmof8Yc4iLXS18xmDGqWmYhgeprJbnbQyu6Kyu1IbXSSjb6ZWabc3vda2V223N1pB/dyBzOYxYp8ZIbqS722oUUso5mpdNDGX/gjx6a+ebD3p1Zu50b2s5cZzOYQVlZUJ42Nem1DQ+mdWmEhy2iOloq309io9zbkvaz+1Wsb8l42eezt6Et3P3NK6fj3iGDGlHnsat/I3s7q0uOPle37sxkdgwamN5f7mUDj4W+3dxyI+zhhdel57RsagtVHTeHP17Vz820jy3Swf38Pa9d18c0f7OEjn9oJwKteNKfqrBqXXZH9pLhkUSMLFzhee6LU632slJQSHbSxg63cmU/1MJXpLMynfzgYGXQgSVpJ/0673cAdwC6gkayjahXQ+5fa04EjI+LsSjp4h3Ej8Mv8+SrgqPz5DuDPw2w7h/4BB+15vbcDnWQBAccCvX/dngD8KSIemFK6daiCI2IOWWfYwwe8tQ24E9hLdsyOBXpDqecOU9/hrGTivocRyUczf5esbgm4GdhM1qF2XGHVZwCLIuKxldY5Ip4LfCt/2QX8lex7WEz2nfYejycCF5J1RA5V3mOAH0K/kNPWvM5tZMe4t3N2AfCliDgspfTeEsVdStb5WGnbHTQxZUScAvwaWFRY3AHcBOwhaxe9eW5nAB+MiKNTSi8t/ymrFxGfBV4+YPF9ZO19BlmAxtR8+dnAH/Lv8doB26yj75yeDxQzDPySwUaS83AlE3fO3EH2eaYBDyssvwxK5lC7ocSyk+kfcLCWrAN+L9kxP4asDZK//lREzE0pfaCKep4MfIe+7+52su9oHtl32hvG/3DgNxHx8JTSlkoKHo3zPqXUFRGfpy9byEsi4j0VTsXxj4Xn/5tSGvGkixFxKPAz4KQBb20A1pBdI3rvLb3Th8wtU9wx9A842EaWGWN3vu0K+s7tBrLzb0VEPCGlQcNy9tN3/jyQvnZ/J1lbLOWgzOXXnqftBWg50OwHa2EasKPf+pWXOzi1Y6/mfJ+lyk0pcQvXkkis5gE01JaASWOsXttQYzRxanowf+GP7GEnV/E7ggYaUtBNN01M4VCOYhUn0lA6+ZbGSb22oXL2pl3sYSfg1Ar1op7bkPeyyaFe25D3ssmjvbv1wPOpTQPHJPRpaczea+9qLbvOeOnq6eDundlPMUtmHM2UBoPoJtKGTX3/pT9kafkuyGVLmoD2futXo/nQOwZlJGhqglecP4f3/Xt1CSPXbejis1/Nxqic/5xZNU8DopGr1/tY0c3patazZtDymczlZM6i8SC+jxl0IEkCuI5slOhFKaVBHRgRsZRsNP8byO4dpwD/D/jXkew0pfRR4KP5Pt5FX8fTDSmlx1VQxHqyUbY/Aa4d2FEVEdPJOqE/QNZRNBv4JnB6uQLzNONfo3/AweXAW4HLU0o9hXWbyEapvoDBnVS1mJDvYRR8jazj8cfAa1NK9/S+ERFHAf9NNlobstHB7wf+pYJyFwJfArrJvsOPppR2FspeBXwdOCtf9LyI+J+U0h9KFRYRy8lGfPcGHLSRfa+fSSnL75h//48DPk1fh+B7IuK6lNJPi+WllM7Pt3kX1bddImIm8H36Ag56gP8APjTgc54NfJa+NvaSiLg+pfTJSvZTQT1eSf+Ag5uBV6aULiusMxt4PdnxaiT7br4XEacUR7CnlH5NFkRBZFMn/K7wXkXHpUoTde36OvD1PIV9cYqF81NKayospoOsPX4PuLhUBob8u/8gfYEN742IX5YI9ijnC2QBB5eTfac3FcpeStbeXpgvOo6snT2jwrJH67z/HFm7aiLLePAk4EdD7TgizgBOKyz6TIV1HqrMlny/xWv5/wHvSildV2LdR5Jl0Sn3P8lEFpj0TeDnKaV7S+zzSOBN9J1/jwNeA3yiX0EpbcrfI7KpOnrvT19PKb2rgo930Oih78+AhrJfDTTm73
VT2Y9b3RWX21S23Hu5k93sYCkrmB+LK9qvxl89t6HZMY/T08O4kT+zhx0keg7MeN1DD1100kXngTSjmhj13IZK6f2xdArNLByUFEkToZ7bkPeyyaGe25D3ssmhO3UeeN4Q5buPGhuy97rSyEapj4a/br2Ytu69NEUzx8wfchY8jYPWfX1x9NOmlu+8nz4te29va0/ZdYaydHEjXV2we08P+9uyfb7i/Dm86dXzmDKl8qCBrq7EC/5pI3tbEyuWN/HvrzFTxkSq5/tYryam0EzLgXsXZAEHx3Iq02NW2e0OBgYdSJK+PFznZUppI/CWiLiBvlHnL89HpO4c6wqWcTuwMqXC/4YGyDuSvxARlwFXkQUdnBYRj8k7R0t5EfDkwusvAS9LKQ2azSsPcrgMuCzvQB6Jyfo9QNZp/h3geQNH6KaU7oyIJ5N15j0pX/zPEfH5lNLNw5TbGxzw3JTSdwa+mVK6IyIeD9xKX8f9i4GSQQfAh8lG30PWwf+MlNLPB5SZgJ/nc7VfQTa1AcBnI2LI9laDt9CXIQHgVSmlzw5cKaV0RV6f39PXIfqhiPh2SmnrSCoQEfPIjkuvm4GHDBw1nlLaTZbi/x6yjBKQZRp5B1mH/kSYzOcMwHkppb1DrZB/948iO3+eSNbB/XrgHyrcxyKydvOYgWn782NzfkS0Aq/MFz89z0hQKitFqbJHfN6nlNZHxI+BZ+aL/pFhgg7on+XghpTSnyqo73DeSv+AtPeklN5ZasX8WP6c7FpR7tr/7gq+37uAf4yIu8kCqwBenwdPjekMkhFxTbn3Hh3PGstdH5Ta037u5CaamMLRnDzR1dEkdV+6i1u5jmamciJnMY+FNNDIXnZxBzeykbXsYDNnpEcwLQ7eOURVuZQSG8li4paywlHrGpL3Mo0G72UaC3ftvJINe7OZVE9YdB7TpwycmVP3V2uvzRLgppS4d10X/33hLj7xhZ188wd7+N4XlvHwc8qPZi/657du4dIr2mhuhq9/aglzZh+8o9RVmWPiFI7hFAC6Uidb2cgd3MjVXMKKdDTHxCkTXMOJ4/8oJOkgN1ynyIB1vw38MX85g2ze+wmRUmqvtAM4n3u+2DlZciRvRDQA/15Y9BfgHyvp/KnmOI50+3r6HnI7yDrMB6YEByA/fi8jSxsP2d8fr6iw7G+WCjgolL2DbDR3r5Ih7RGxDCj2pH12YMDBgHLvJRsh3+sQ4O8qqnEFImIq/bML/LxUwEGhPrvIAmJ6j/E0+ne81uql9J9q4sVDpalPKfVmFjmwfcTE/Bo0yc+ZiuufBzcVMwQ8JaLiPG2dwEsHBhwM8HqgOAr/nyosezTP+08Vnp+XZ5AoKSJmAc8tLBqNLAezyDIM9PpZuYCDgcp9j1XeEz5ENp0JZNMunFHFtipoKMTU942bG6w7f6+xwhj8xorL7SpZ7i1cTzddHMkJtET59JCaePXahnamrdzCtQQNnM7DWBqH0RLTmBLNzItFnM7DmcEs2mnjDm6sqE4aG/XahkrZxkY68rSxTq1QP+q1DXkvmzzqtQ15L5s8GqMv00TPEDPfdfdk7zVFc9l1xtra3X/htu2XA3Ds/HNZNnP1hNVFfWZM78sy0JuBoJR9+7P3Zs4YWTdlRLDi0Cl8+J0L+ei7FrJ9Rw/P/6eN7Ns3fAaFt75/G5/96m4aG+Fr/7OUBz+wskAFjZ16vY+V0xRTWBqHcQaPoJEm1nI7m9OgGXYPGgYdSJKqdUXh+QMnrBbVq6TeDwSOLrx+3yiPbB9N9fQ9fCOltH2oFfJR1d8tLHpmuXUH+J8K1rm08HxVRMn/8T4Z+uVo/GgF5f4QuKvw+ukVbFOphwHFCeaGrU9K6RrgklGuT7GMy1NKV1awzUcKz+cCjxiFeoyHejpnqpJSuh3Ylr+cCRxf4aa/zIOuhip7P/0Ddx6fT00znFE771NKvwVuyV82kAUrlPMPZMcAsoCGr1dQ1+E8kawt96oo4GC05NP2FM+9MW+fKaXTyz3Get9jqTjf41DzL7azf9D6Q5fb98NT77al9HbeFcvdnjazhXXMYDaHcDhdqavfI9H3Q1jvsp5UW3pRjVw9tiGAtWQzCC1kacl0nQ3RyKF58qQtbKBMPJjGQb22oVI2kM2MNIPZzA7TCNeLemxD3ssml3psQ+C9bDKZ2tiXzK2tq3wsdXt39l5L08RkpVi352Zu3vobAFbNO5uVcyf1f2XuVw5Z2tdZu35j+cCVDZuy95YtGb2E7C97/hxaWoL1G7v5+W/3Dbnu+z++nQ/+9w4i4LMfWcyznjTSJLYaDfV6HxvO1JjG4jxhbu8UZgcjp1eQJB0QEYuAx5DNe34I2XQELQNWW1V4fug4VW1I+ajxRwMPIEtXP5tsJHhxAq/5hefl6v3wwvN2svm8x90k/B7KZgwY4CKy0foAh0TEoSml+4ZYv5NsWozhFMsIYA6wZcA6Zxee/y2ldOdwhaaUUkT8H/C6EmWMVLGsvcDvKtzux/R18p8SEdPzaUSqlgdnnFZY9JNy6w7we7JR7r2/Tp8N/LSWOoyWSXjO9BMRDwAeTBZMMB+YBYMmmCv+knMoVDT8p5pz89358yaydnH5KJZdyXn/aeC/8ucvjoh35lkeBipm+PhWSmlPhfUYSvHaf3dK6epRKPOAiFgBPBI4GVhC9v0ODI46qfC8rtrnZDKDvh+wW9nd73WvlBL78gQcM5hdUbnN0cKU1EwnHexlNwtYWnK9veweVG4b+w7U5xJ+XHYfbezjknxmkeM5g0NYWVHdNLrqsQ311gVgGuV/1O99r4duOmiv+kcyjY56bUMDdaYOtrAegEPMclBX6rENeS+bXOqxDfXWBbyXTQYzmvt+PtvbuY2ZzfMHrZNSorUzS5I4c8qCQe+PtY17b+WmLb8AEivnnM6qeeeMex1U3rGrmomAlOCvt3awetXgsUE9PYlb78zGeR1/zOhly2hpCRbMa2D9xm7uuqf8OLKPf3Ynb/+PbBzFx9+7kBc9t7JrocZevd7HKtEb2LCf8gFb93cGHUiSiIjDyeZ1fzrV3RvmjkmFKpSndX872Xzk1fwlMLfM8uMKz29IKXXUWLWaTNbvgco6P0utdwz9AwYG2lZhponWAa9LjdIudjj/pYIye91QeH5IREzLR4aPVLE+N+UjnautTxNwOPC3GutwGP075is6Lnkwxo1k2Rqg/2cZV5P4nAEgIp4CfJD+155KzK1wvUrPzb+STd3RG6h1DMMHHYz2ef8V4P1kwRVLgacC3y+uEBEPBE4tLBrx1Aq54vEftYCDiDgR+BhZUFwMs3rR3NGqw8GmKaYwO81jNzvYxqYDowyKdrGdLrJby3wWV1z2PBazmfvYziYO55hB77el/Qd+TK+mXNWXem1DkV9Cejv+Sim+1+RPPROmXtvQQJu4lx56CIKlBh3UlcnShlS/6rUNeS+bPJoampnTspRd7RvZtu8els44etA6O9s30NWTzeI3f9qKca3f5tY7+cvmn5FIHDbrFI5dcO647l/DmzWzgTNOaeGq69u5+LJ9POOJgzMIXHltG
7t2Zz+FPfIhozelwd7WHrZsy1Lnl5u24dNf2cXr37UVgA+8dQGvfsncUdu/Rq5e72OV2J//RF3p1Az3RwfvJ5ckARARZwK/orZOjoEjicdNRCwELiYb2VytciG0xfDtzTWUW7PJ+j3ktg2/Ssn1hsvjOtQ89EMp1blX3NfALAhDGbjuPBgiB1flRrM+o1GHkdRjQvLxTvJzhoh4H/DWGjevtP4VnZsppbaIaKVv2oJKvtNRPe9TSrsi4pv0Ta3wjwwIOgBeXnh+VUrp2grrMJxRv/ZHxBPJ6l9LW5vw9jmZLWUFu9nBRtZyZDqOluj/A9Y9ZDOOzGIeM0qk9i1f7mFs5j62sYk9aSezYm6/99fm5TYzlXmFHycOiZVDjvS8M/2Vu/kbU5nOQ+IJFddHY6fe2hDATOawl11sZSNtaT9TB9QppXQghecMZtMY/tQzkeqxDQ3UO7XCfJbQEo4krjf11oa8l00+9daGwHvZZLNs5rHsat/I+r1/46h5D2JqU/9O4zU7s1jt2c1LSmZCGCtb963h+s0/IdHDITNP4PiFjxq3fas6z3v6LK66vp1v/mAPb//X+YOmUPjop3cCcPrJLSUzIZTT1ZVoaiof0/+Jz++kMx++9JCzBv+N85X/3c1r3pz9pPX2f53HG1/tFFP1qB7vYz2ph4YoHcgCsC/tOZBJbC4LK67T/U35IyRJut/LMwX8gL5Ou06y+bGfS5bqeT4wNaUUvQ/6UnBPtM/TP+DgErJOqjPI0ldPBxoK9a5k3vniX6PlJ40aZZP8ewCoNCPEwCCC8exYK+6rmgwWA+s8Wr/K1kN9Bh7/Wusx7r9UT/ZzJiKeSv+Ag3XAe8hGxB9Jln6/aUD976lhV7V+p5Wcm2Nx3n+q8PzREXFU74uImE32/fb6bIX7r8SoXvsjYjnwHfo+6z6yrAxPJ8uqMBdoGfD9fmWk+1VmOUcylel008X1/IG9KRul0JU6uT3dwBbWAbCKEwZte3H6Hhen73Fn+uug9xZxCLPz+JQbuIJdKYun6Und3JNuYy23A3AUxw/5Q4TqXz22oUM5EoBuuriO37M9baYn9WSpjdMe/sIf2U2W4viwiUtApFw9tqGi1rSHXWTphJ1aoT7VextS/avHNuS9bHI5bNbJTG2aTXfq4NqNP2RvR/Zdd/V0cOu2S9m0L/uuj5n/kEHb/uKuj/KLuz7K7dv/WLLszu42Orr3HXj06urp6Le8J3X3225H2zqu2/RjelI3S2es5qRFjyWimoRyGk8vf8FsDj+0iT17E095wQZuvjX7CWHP3h7e9N6t/PBn2Yjw97158PQcjcvuoHHZHbz7I4PHOpz48LV88sKd3Lmmk5TSgeW33tHB6962hXd8KPsb52mPn8FJx/X/+eH7P93Ly/51MynBv71qLu96w/hPDaLK1ON97Dau59Z0PTvTVroL16fO1MH6tIaruZQeummkiRUMzhBzsDBkUJIObi+ib+7oTuAxKaVLh9mm8vDBMZKnrH5aYdFbUkofGGazSuq9o/B8TrX1GoFJ+T0UzKL/sStn4BQYu8agLuXsLDyv5tgNrPPOUivVoFjORNVn4La11mMkdajVZD9n3l54fhVZ/Yc7H2qpf63faSXn5qif9yml6yPiCuBssowlLwP+PX/7+XBgAthdwLcq2HelRvva/y/0r+s5KaWbh9mmntrnpNYYjZySzuFaLmMPO/kTv6IxNdFN14F1VnEiC6L0/I3lRAQnpwdxDZeyn1au4nc0piZ66CaR/di1nCNZHkeO6ufR+KvHNjQ3FnJ0OpnbuYFWdnMtlxEEQQM99P3gtZwjONQ2OOHqsQ0V9WY5aGIKizikyk+n8VDvbUj1rx7bkPeyyaWxYQqnLXkqV234Lrs7NnP5fV+mKZrpSp2Qf9fHzH8IC6evrLrsP6z7Gm1duwct/8vmn/Z7feayv2PBtMMOvL59+x/oTlkb3rZ/Lb9bW362veMWPIJlM4+tum4aPdOmNfDDLy/jMc9ex7U3tnPSuWuZPauBva099PRABPy/Ny/gvHNLzZBa3u13dfLat23ltW/bSktLMGtG0Lovsb+tLwDhcY+czlf+e8mgbd/03q1055ebr313D1/77p6y+/nehUs558zRm/ZB1anH+1g33WzgLu7lDgCa0hSAA9M8QJYh4WQexNSorl3fnxh0IEkHt8cVnn+rgk47yOaBn2jFeq8hmxN9OJXUe0Ph+epqKjRCk/V76HUElXU+DvyLbdMY1KWcYsr0o8quNVhx3Q5Gr4N9NOozsJyR1KG37CtqqMe4TkWSm7TnTEQsAk4vLHrTcAEHETGT2qaROAK4roI6HQpMKSyq5Nwcq/P+U2RBBwAvioi3p5Q66T+1wtdSSuUng63eaF/7i+3zvyoIOIA6aZ/3F7NiLg9K57GGW9jKBtrZzxRamMM8VnA082PwD1CVmBrTOSs9mjXcymbW0UYrjTQxi7kcylEsiUOHL0STQj22ocPjGOalhdzLXexkK+3sI5FoYRqzmc9yjmBhlT+6aezUYxuCLH15b9DBEg6jIRprqofGXr22IU0e9diGvJdNLrNbFvOQQy/grp1XsnnfXbR376W5YSpzpi5j5ZzTWDBtfLPl9HYIAnT2DD3rZW9wgibWKSe0cMMlK/jgJ3Zw0cWtrNvYzYJ5jZz5gBZe9/K5POqh1XfM/ugry/jt7/fxx6vaWL+piy3bupnSFKw6YgpnntrC3z9zFk941IyS2/b09D3ftKW75Dq9OjrTkO9r7NXbfWwlq5nBLLazhf3spYM2ekg008JM5rCApSznCJpiSsntDxYGHUjSwa34P4Q/D7dyZHnLzhm76lSsWO+rUzGfVnmDc74NdgXw2vz5YRFxZErprqprV73J+j30OguoZG71swrPO4Abx6Y6JV0D9E4wekZETMk7ModTPM7XpZR6SqxTXFZpbr9rCs+PjIglKaVKOnqL9dmUUrqvwv0NklLaEhH30tfZeQ7ZFAVDiogF9O+YvbrWOoxAvZwzA9tDJd//igGvh60/Wd1ryXF7Ftk0FJWsV3RNybUGbzMW5/13gY8Bi4DFwNMj4h76T6czmlMrQHbtf2r+/IyImJlS2juC8qptnzPp//mGUsv15qDUElNZzams5tSKt3l0PGvYdZpiCqs4kVWcOILa9TkqTuCoEikhNfHqsQ3NjvmcwPjNm6yRqcc2FBE8lCdWvZ0mRj22oVK8l9WvemxD3ssml5amGRy38JEcxyMr3uZxR75+yPfPXfGymupy1iHPqWk7Tayli5v4+PsW8fH3Lap4m+4N5adYefJ5M3jyeaWDCoZz11Ura9pOE6ee7mMzYjYzmM1KzKIyFCfokqSDW7Whd48Dlo9FRapUVb3zTtKnVbDqb4FiR/Qrq9nPCEzW76HX8ypc7+8Lz69MKQ2c630sFUfCzwGeMtwG+Yj0x5cpo6i18LzS3GvFsoIsdfxw9Wmh/5z2lYzur6Yez4qIqWXX7PN8+v8Nedko1KNa9XLOtA54Xcn3X0vI80tq2Abg7yIqGsL4D4Xn96aU7q5gmzE57/P3
v1hY9I/5o9cfUko3VbjvSv2i8HwacP4Iy6v2O34B0FzhurVcbyRJkiRJkqQxZdCBJB3c1heeP2yoFSNiOvCfY1udihXrfXZEDJe55z+poHMmpbQF+EZh0Wsj4vRy64+iyfo99HpoRAw5ZCoingOcVlh04dhWaZDfAncWXv+/CjrYP0hfR2ACPl9mvWJq9qPyUfVDSindCfyusOjNEbFwmM3eCBRzh31uuP1UoPiZFgFvHmrlPIDnrYVFl1eYOn601cs5sxNoK7w+uoJt1g94PVz9HwU8u7pqHbASeMUw5Z9N/6CsSs/NsTzvP0PfiP5H0D9wofzEnTVKKf0FuKSw6L0RMTAjRTWqaZ9LgPdUUXbxelNJe5MkSZIkSZLGnEEHknRw+23h+bMi4kmlVso7Gn/K6Mx1PRqK9V4OvK/UShHRFBEfIRtFWqn3AL3zq08BfhURjx5qg4g4LCJeXcU+Bpqs30PR1yLizFJvRMRDgS8UFt0BfGdcapXLp+AoduytBr6bpzXvJzJvAV5cWPyNlNIdZYovpqKfD7yowmq9Bw5MSrgAuCgiFpdaMSJeCLyrsOj3KaXfVLifslJKl9E/+OFtEVGy/nlQxE/JghMgq/u7R1qHGtXFOZNS6gauLyx61XDBLCmltfQPgPlIXs9BIuJc4PuMLI3+R8sFB0TEcQPK3w58uoqyx+S8TymtAX7eWxTQkj/fBnyvivpV49/py3QzD7g0Ih4w1AYRcVyZ86XYPv8pIs4os/0K4NfAcAFHRcXrzXkRcVIV20qSJEmSJEljYriRoZKk+7fPAW8CZpIFov04Ir4G/ATYRNbx8lCyztcFwG7gIipPqz0mUkqXR8SfgQfmi94UEWcBXwHuIstqcApZ52/vREufYZgRv3nZd+edSN8FGsk6kX8dEb8BfgzcBuwjOzYnAo8CziWbp/yTNX6kSfk9FHyLrC5/zOt9EbCFbFT+k8hSt/emeO8CXpJSaitV0FhKKX01Ip4M9E7u9STgrxHxeeBqoB04hiy1+tmFTdcArxmi3Fsj4mqgt2Pxwoh4M3A72Rz2vT6RUvptYbtLIuJjQO+Ehw8Ebs7r8wdgD9lI9eeSTQ/Qayfwwso+dUUuAK4ja+sNwBcj4nlk3+vdwHTgHODl9AUcAPxXSuniUaxHNerpnPk68KD8+XnAhoi4Pt9nb1DJTSmltxW2+Sjwqfz58cCNEfEp4EqyNnM48FTg6WSd7j8ju95UO/q+99z8aUR8D/ghcB/Zd/0YsuNTDJJ4TUppc5Vlj9V5/ykYNOn0V8bq2pFSujIi/g34r3zRSuDqiPg/suO/huwasZDs/vJY4Cyy+8KXBhT3cbLzqhGYAfw+Ir5AFmCwHVhMdu+4gOz8upfsHvKECqr6/bz8qfm21+ftbT3QXVjv5VV8l5IkSZIkSdKIGHQgSQexlNLmiDifbPRpE1nn3fmUns+6lazz86zxq+GQ/gH4I32doOfmj4F6R2NfSgVBBwAppR9GxFOAbwOz8sWPyh+jbpJ/D5DNt34McDpZoEe5kf7dwPPz0fUT5flkbaI3Xf0K4L1DrH8L8NiU0s5hyn0ZcDFZBzfAqvxR9KMS272BrGPydfnrBWQjrsvZADwuHwk+KlJKayPi4WTz2i/PFz8mf5TzSfqCJcZdnZ0znwWeTNYJDTCXwdeiuQNef4bsevLM/PUyyrfDa8mud9fXULe3A3PIOrOfRV/ATSlvSCl9s4qyx/q8/wVZENmRhWWfrbKMqqSUPhERrWTZHqaQtaun0X/6iUrKuSki/pW+AIapwKvzx0BbyIJLygY2DSh7a0S8kmxqlN62fxr9p7GAvmuKJEmSJEmSNOacXkGSDnIppR8AjwZuKrNKN/Ar4LSU0s/LrDPu8lT3Z5CNQC3nRuCJKaWqU8CnlH5GlpL9f8hGnJfTDfwe+H/V7mPA/ibl9wCQUtoDPJisg621zGpXAmellMZ1WoWBUkrtwHPIOqH/NsSq24B3Aqfn6fCHK/d64IR8m9+TdSR2DLVNvl1KKf0L2Qj5q4ZYdS/Z6OYTU0o3DFdutVJKNwEnkY3A3z3EqtcAT0gpvSal1DPa9ahGvZwzKaUusk79vyfLJHA32XmQhtgmkbXDd1D+eO8APgicXUHQSzndZAERbycbYV/KLcB5KaWPVFPwWJ/3efsqtvXfpZRuq7acGvZ7IVn2iW8AQ2VV6AB+SZkMNymlT5AFeawZYvvvAienlK4ps065On6ZLNjjf8iylOwkyyYhSZIkSZIkTYjIfvOUJB3sIiLIRkqeQTbaeg/ZqOrLU0obJ7Juw4mIlcDDyEYLd5HV+/qU0s2jVP4UsvTpR5NlVmgg6+S5A7g6pbRjNPaT76vuv4d8nvnf9b5OKUXhvRnAI8kyCMwiS3X/x5TSreNby8pExDFk0xosBprJggVuBq6ciE71iDiMrCN3Kdk0IdvIpmn4Q0pp2CCGUarDlLwOR5Olkt9P9j3+oZIAjPE2Gc6ZoUTETLLr1zFk3/kWso7qS1NKnaO4n2ayDAxHkE2vsAW4NqV07SiUPernfUQsBNaRnZcAz0kp/e9I61plHaYCDyE7ZguBHrLgjdvIrv3lgi2KZTSS3T9OJct4sYPsc106gmCSMfWYhmf7H0RJkiRpBJqOOHyiq6BJ7qI//Hiiq6BJ7rGHnDrRVdAk9+ue78bwa/Vn0IEkSarKUEEHkjQaIuJNZJkeADYCK0YzCEPlGXQgSZIkjYxBBxopgw40UgYdaKRqCTpwegVJkiRJdSPPnPDawqLPGnAgSZIkSZIk1S+DDiRJkiTVhYiYBnyabLocyKbL+O+Jq5EkSZIkSZKk4TRNdAUkSZIkHbwi4n3AicAM4BRgUeHt96aUtk1IxSRJkiRJkiRVxKADSZIkSRPpIcDDSyy/CPjYONdFkiRJkiRJUpWcXkGSJElSvWgFrgb+GXhqSql7gusjSZIkSZIkaRhmOpAkSVVJKV0CxETXQ9L9Q0rp3ImugyRJkiRJkqTamelAkiRJkiRJkiRJkiTVxKADSZIkSZIkSZIkSZJUE4MOJEmSJEmSJEmSJElSTQw6kCRJkiRJkiRJkiRJNTHoQJIkSZIkSZIkSZIk1cSgA0mSJEmSJEmSJEmSVBODDiRJkiRJkiRJkiRJUk0MOpAkSZIkSZIkSZIkSTUx6ECSJEmSJEmSJEmSJNXEoANJkiRJkiRJkiRJklQTgw4kSZIkSZIkSZIkSVJNDDqQJEmSJEmSJEmSJEk1MehAkiRJkiRJkiRJkiTVxKADSZIkSZIkSZIkSZJUE4MOJEmSJEmSJEmSJElSTQw6kCRJkiRJkiRJkiRJNTHoQJIkSZIkSZIkSZIk1cSgA0mSJEmSJEmSJEmSVBODDiRJkiRJkiRJkiRJUk0MOpAkSZIkSZIkSZIkSTUx6ECSJEmSJEmSJEmSJNXEoANJkiRJkiRJkiRJklQTgw4kSZIkSZIkSZIkSVJNDDqQJEmSJEmSJEmSJEk1MehAkiRJkiRJkiRJkiTVxKADSZIkSZIkSZIkSZJUE4MOJEmSJEmSJEmSJElSTQw
6kCRJkiRJkiRJkiRJNTHoQJIkSZIkSZIkSZIk1cSgA0mSJEmSJEmSJEmSVBODDiRJkiRJkiRJkiRJUk0MOpAkSZIkSZIkSZIkSTUx6ECSJEmSJEmSJEmSJNXEoANJkiRJkiRJkiRJklQTgw4kSZIkSZIkSZIkSVJNDDqQJEmSJEmSJEmSJEk1MehAkiRJkiRJkiRJkiTVxKADSZIkSZIkSZIkSZJUE4MOJEmSJEmSJEmSJElSTQw6kCRJkiRJkiRJkiRJNTHoQJIkSZIkSZIkSZIk1cSgA0mSJEmSJEmSJEmSVBODDiRJkiRJkiRJkiRJUk0MOpAkSZIkSZIkSZIkSTUx6ECSJEmSJEmSJEmSJNXEoANJkiRJkiRJkiRJklQTgw4kSZIkSZIkSZIkSVJNDDqQJEmSJEmSJEmSJEk1MehAkiRJkiRJkiRJkiTVxKADSZIkSZIkSZIkSZJUk6aJroAkSZIkSbp/aHvyAye6Cprkpv3qLxNdBU1yqb19oqugSa5h+vSJroImuZ45Mya6CprkHnvIqRNdBU1yH17zp4mugg5CZjqQJEmSJEmSJEmSJEk1MehAkiRJkiRJkiRJkiTVxKADSZIkSZIkSZIkSZJUE4MOJEmSJEmSJEmSJElSTQw6kCRJkiRJkiRJkiRJNTHoQJIkSZIkSZIkSZIk1cSgA0mSJEmSJEmSJEmSVBODDiRJkiRJkiRJkiRJUk0MOpAkSZIkSZIkSZIkSTUx6ECSJEmSJEmSJEmSJNXEoANJkiRJkiRJkiRJklQTgw4kSZIkSZIkSZIkSVJNDDqQJEmSJEmSJEmSJEk1MehAkiRJkiRJkiRJkiTVxKADSZIkSZIkSZIkSZJUE4MOJEmSJEmSJEmSJElSTQw6kCRJkiRJkiRJkiRJNTHoQJIkSZIkSZIkSZIk1cSgA0mSJEmSJEmSJEmSVBODDiRJkiRJkiRJkiRJUk0MOpAkSZIkSZIkSZIkSTUx6ECSJEmSJEmSJEmSJNXEoANJkiRJkiRJkiRJklQTgw4kSZIkSZIkSZIkSVJNDDqQJEmSJEmSJEmSJEk1MehAkiRJkiRJkiRJkiTVxKADSZIkSZIkSZIkSZJUE4MOJEmSJEmSJEmSJElSTQw6kCRJkiRJkiRJkiRJNTHoQJIkSZIkSZIkSZIk1cSgA0mSJEmSJEmSJEmSVBODDiRJkiRJkiRJkiRJUk0MOpAkSZIkSZIkSZIkSTUx6ECSJEmSJEmSJEmSJNXEoANJkiRJkiRJkiRJklQTgw4kSZIkSZIkSZIkSVJNDDqQJEmSJEmSJEmSJEk1MehAkiRJkiRJkiRJkiTVxKADSZIkSZIkSZIkSZJUE4MOJEmSJEmSJEmSJElSTQw6kCRJkiRJkiRJkiRJNTHoQJIkSZIkSZIkSZIk1cSgA0mSJEmSJEmSJEmSVBODDiRJkiRJkiRJkiRJUk0MOpAkSZIkSZIkSZIkSTUx6ECSJEmSJEmSJEmSJNXEoANJkiRJkiRJkiRJklQTgw4kSZIkSZIkSZIkSVJNDDqQJEmSJEmSJEmSJEk1MehAkiRJkiRJkiRJkiTVxKADSZIkSZIkSZIkSZJUE4MOJEmSJEmSJEmSJElSTQw6kCRJkiRJkiRJkiRJNTHoQJIkSZIkSZIkSZIk1cSgA0mSJEmSJEmSJEmSVBODDiRJkiRJkiRJkiRJUk0MOpAkSZIkSZIkSZIkSTUx6ECSJEmSJP3/9u47TrKsrhv/53SYnMOG2ZzYvOwuOYogCAYwgCigoGBOjzw/gj4qmADBAKKoIIgIpkcRE/CQJLOSdmF3YXOOk3dyx/P741ZPV/d0rOme6WHe79erXl1177mnTlWduvd2ne/9HgAAAICOCDoAAAAAAAAAADoi6ADgOFFKeUkppbZudxzt9nB8KKU8pa3f1SnK6Z8cUaWU17b1uU8e7fYcj3zvD18p5d1t7+G7j3Z7AAAAADg+CToAAAAAAAAAADrSc7QbAADtSimvTfKa1sNP1VqfcvRaM71SyplJbm9bdFat9Y6j0xoAIEn66oHckRuyNfenL/vTk96syrqcnnOzrpzYcb2DdSB35MZszr05kH3pTndWZHVOzTk5sZw6q7ruqjfnpnwtSbIky/LE8l0dt4u5139gd+6++RPZ8eA303dgV3p6lmTF2tNyytlPzJqN5826vuGhwTy07dbs3nlP9uy8O3t23J3+vt1Jkosf+9KsPeH8Kbf/0kdfn779O6Ysc+ZF351Tz/22WbeN+dFX9+f2weuzdfje9NV96cmirOpan9O7L8j67pNmXd9wHcr24Qezq27LruHteWh4W/qzP0lyRe+3Z0P3phnV8+DQnbl76ObsGd6RoQxlaVmeE7pOy5k9F6en9M66XcyfhXYs+2z9YA5k35R1n5dLc0aZen/GkdM3vD+3D1ybLUP3jO6HujfkjN4Ls7775FnXN1yHsn3ogewa3paHhrdm1/C29NVmP3Tl4qdlQ88pU26/e3hHdg5tbrYf2pq99aHU1JzUfWYuW/Lkjl4j86tvYE9uf/Cz2fLQzekb2JWe7iVZtWxTzjjhMVm/8uxZ1zc8PJjte+7Irn335aF992XX3vvSN7gnSXLlOS/IhlXnTrHtUO7dfnUe2ntfdu9/IP0De9I/tC9dpSfLFq/LhlXn5PSNj87i3pUdv17m1kI7ju2uO/NQtmVXdmRXdmRvdqWm5sScmkvLYztuD/Nn6+ahvOttu/OZj+/P5geHsmJlVy55+KK84CdW5DFPXDLr+rZvG8onPrw///PZvtxwXX82PziUrq6Skzd159FPWJwXvHRlTj9z6mHzW28ayN++fXe++IW+bN08lOXLu/Kwi3rzgy9Ynmd8z7JOX+q3DEEHAAAAfMvYXXfmq/l0BtKfJOlOT/rTl625P1tzf86tl+TMcsGs6z1Q9+Ur+VT2Z+/BegczkB3Z0tzq2bmgXDnjum7N9bNuA0fG3ofuz7Vf+MsM9jeDa909SzLQvzc7Hvxmdjx4Q8648Jk57bxvn1Wd+/ZszvVXvfOw29bTuzSlq3vCdd3diw67fubG7uEd+Ur/xzOQviRJT3qb/dDwvdk6fG/OrZfnrJ6LZ1Xn3vpQrh7478Nq1zcG/if3Dt2SJCkp6Up39tZduX3o+jwwfGceuejpWVL8WLoQLORjWU960zVJ8txuPzUvGLuHd+TL+z9y6H5o6J5sHbon5/VekbMWXTqrOvcMP5Sv9n284zZd1/fZ7B6eOoCOhWP3/gfz5Zvfk4GhJrCkp2tx+gf3Zeuum7N118057+Sn5qyTnjirOvcc2Jqv3vp3HbVnYGh/vnn3Bw8+Linp7l6cwaED2b3/geze/0Du3vrlXH7WD2XdyrM6eg7mzkI8jl2fL2VPHur8RXFE3fTN/vz0j2zNzh3DSZIVK0t2bh/Opz9+IJ/5xIH8witX5Sd+btWs6vzOR9+fwcHRx8uWlwwM1Nx+62Buv3UwH/jHvXnNm9blWc+Z+Hz4g/+6L6995fYMNN06K1eV7NkznC9+ri9f/FxfPv3xA/mdP1qbUkpHr/lbgT
NBAOCoq7W+O8m7j3IzADjGDdWhfC2fz0D6szJrcnEelRVldQbrQG7LN3JXbs4tuS4r65qsLzO/0rjWmq/nquzP3izJslySR2dN2ZChOpS7c0tuybW5J7dlZV2TU8r0V33dmGsylMGsyrrsyvbDecnMsaGhgXzji+/OYP++LF+9KQ+74oezfNVJGRw4kLtv+ljuvfXTufObH86K1adk7QkPm1Xd3b1Ls2L1KVm55rSsWHNqbvjy3866fRc86seyZsM5s96OI2eoDuaa/k9lIH1ZWdbmkt7HZ0XXmmY/NHht7hz6Zm4ZvCaryrpZX2ncZEtYl1VlXVZ1rc/XBz4z423vHrypFXBQcl7P5Tm9+/x0le7sHN6Sa/s/l/11T67t/2wetfgZs3zFzLWFfiy7LI/LunLCXLxU5slQHczVBz7R7Ie61uXSxU9s7Yf6c2v/13Pn4Ddy88DVWdm1Pht6ZpYlZcTIfmh194as6lqfr/V9asbblnRlZdfarOpan9VdG/Lg0F3ZNnTfbF8eR8DQ8ECuvu0fMjC0PyuXnpRLz/i+rFh6QgaH+nLrA5/KnZuvys33fyIrl52cDatmd17S070kq5aenNXLN2XVsk352u3/d0bbdZWenL7xMVm74oysXn5KFvesSCklw8ND2bb7ttx070ezt29rvnb7P+eJF/1ientmfxU0c2OhHse60pUVWZNVWZtVWZstuTfb8uBcvnTmyIEDNf/rZduyc8dwLri4N7/75nU552G92bN7OG9/y6787Tv25E/fuCsXXrIoj3vyzL/rg4PJlY9ZlO97/vI87klLsuGE7gwN1Vz71f684Td35sZvDOQ3X7495zysJw+7cGxA9zeu7c9rXrE9gwPJk79jSV712jXZdFpP+vtq/v2f9+aNr92Z/3r/vpx1bk9e+vOzC4b4ViLoAAAAgG8J9+a2VorNnjw8T8iSsjRJ0lN687A8PPvr3mzJfbkl12V9Zv4D15bcdzA44OF5fFaWNUmS7tKdM3N++ur+3J1bcmu+kZPrmekqE18BmiRb6n3ZkvuyMZuyIqsFHSwwD9xxVfr270h396Jc9Ogfz+Klq5MkPb1LctbF35P9e7dl+wPX545vfmhWQQfLV52Uxz7ztcf1VS/Hi3uGbsmB7E13enL5oqcczBzQU3rzsN4rs6/uzpbhe3Lz4DWzCjpYUdbmKYufO7YPDcxs2+E6lNsGr02SnN59fs7suejgujVdG/PwRU/O//R/KDvrlmwZuicbu2c3XQxz61g4lrGw3TN4Uw7UZj90xeKnZknXyH5oUc5f/Mjsr7uzeeju3Dzw1VkFHazsWptvX/b8sfuhvpm36zFLnpXS1q92Dm+Z+cYcUfds/UoO9D+U7q5FueLsH86SRc0AWk/34px/yjOyv29HNj90Y26+7xOzCjpYufTEfPulr+jofKi3Z0kuOPU7D1ne1dWdjavPy/Il6/PZb/xpBob2Z8uum7Jp3WWzfg7mxkI9jj0qTx3T9x6q2w7zlTJf/uV9e3L/PUNZtrzkLe/akBNOajK9rVjZlZf/+prcc9dg/vv/Hchbf/+hWQUd/NU/bcwjHrN4zLLu7pLLH7U4b3vvhjzvGQ9m+9bhvO+de/Jbf7Bu7LZv3ZXBgWTTqd1509vWZ9Hipi8tWlzy3BeuyLatw/mLP9qVd/3Z7jz3hSuyes3xeR51fL5qAAAAvuU8kLuSJCfltIM/brU7I80g8e7szN66e9b1rsuJB3/cGltvM391fw5kezZPWs9gHcwNuTrd6c75uXzGz8+Rs+Xeq5MkG0+94mDAQbtTz/22JMneh+7Nvj2Tf9bjldIl4OA48cDQ7UmSk7rPnHCqgpEB/911e/YO75pxvaWUjvvQtuEH0p8DSZIzei48ZP2qrnVZ19X86H//0B0dPQdzZ6Efy1j47h9s9kMn95x1MOCg3Zm9zfQuu4e3Z+/wzFONH85+qNneUMSx4v4d1yVJTl57ycGAg3ZnnvD4JMnu/fdn74GtM673cPvQVJYtXpee7mbwsW9g5vtG5t5CPY45Fz92fPADzTR3z3rOsoMBB+1+7KdWJkm+ed1A7rh1hlG4ySEBB+3Wre/OE7+92Yd889qxdQ4N1Xzh002U3fN+dMXBgIN2L3rpipSS7Ntb84kP759xm77VONIDC0YpZVEp5RmllNeXUj5aSrmzlLK3lNJfSnmwlPKlUsqbSymPmqfnf0oppY7c2pZvKqW8spTyuVLKPaWUgVaZp0xSz9mllNe0yt9XSukrpWwtpVxbSvnTUsqTO2xfdynlB0op7yilXFdK2dJqy65SyjdKKf9QSvmpUsr6zt6BQ57v/FLK7W3vyf2llCumKH9F67P7Ytvr3lZK+Xop5S3TfW6llDta7/tr2hZ/W/tnMu72kjl4jUtKKT9aSvnnUsotpZTdpZShVr+7q5TymVLKW0spzy1l7C+GpZTXttp7+7hqb5+kvZ+cpA0nlVJeXEp5Z+u929Lq83tabfhgKeUVM/1cSylnjnveM1vLl5ZSfryU8rFSyt2tz2dzKeWTpZRfLqXMKu9cKaW3lPKy1nf1vlLKgVZ7P95aPqvJYEspL2lr8x1TlHvtRO9p63v3ulLKNaWU7aWU/a0+9b5SyuwmXc7B/v8Hre/aQ62+cUMp5d3t3+HW45H2vHu2zzPDtjyhlPInre/S1tZnd38p5QullN8qpZw7w3omfI9bffDXWv1vc+uzvKeU8v5SyvfPx2tqe9456/vzqZTSVUp5Umn27f9VSrm11ScGWu3+WinlL0opT5tFnXeUWe7PZtrfJqu7lPKsUso/lmZ/t7+UsqOUcnUp5Q2lzCKn4Wh983Jcmos+Odl7VUp5cmvdDa3PcGfr8/u9UsoZs2jjvJ6zlGZf/rpW23ZOtg9irME6kF1p5gme7IqZ1VmfnvQmyawGVHZkS6veEydcv6QszfKsapWdvN7bcn36sj9n5ULzpi9Ag4MHsmfnvUmSNRsnzmKwcu3p6W6l631oyy1HrG0cGwbrQHbV5gq89V0TZzFYXTaM7oeGHzgi7dox3KQOXlHWTLrvWd+16Yi2iYkdC8cyFrbBOpBdw83Vu+u7J85isLpr48E+tG3o/iPWNo4Ng0N92bWvmfZi/SRZDFYvPzU93c3g3bbd438WOzr2HtiawaEmwG7pojVHtzHHMccxDtfePcMHB/0ny2Jw2ZWLsmJVM/D/xc/NIuXONEayEwwN1THLd24fzoH9zbIzzp54AoHlK7qy8cQmQOKqzx6YszYda0yvACwIpZTvSfKeJGsnKXJC6/bIJL9cSvnXJD9ea515SHZn7XpBkj9PMu1EPKWUniSvS/LLSRaNW72+dbskyc+XUj6Y5CdqrTOaOKqU8qwkb04y0a+fK5Nc2Lo9P8mflFIeXmu9cSZ1T/J8j0vyH602J8nNSb6z1nrIfxKllBOS/FmS505Q1brW7dIkv1RK+fskP1lr3dtp2+ZK6zW+L8lZE6xe1rqdluSJSX4hyf9N8kNz3IZ3JXlxJg4C7E2yvNWGZyV5TSnlFbXWP+/geS5L8g9p+ki7j
Um+rXX7hVLKd9Zab5tBfRcl+cc0/bndaa3bU5P871LK82bb1k6UUn4xyZuSjA9XPaN1e0Ep5R1JfrbWOjSD+v5Pkt/Mod/j81u3F5dS/irJLx5u26dpxwlJ/irJ906w+qTW7bFJfrWU8tYkr6q1Ds7yOX4oyduTjL+U85Qk35/k+0sp/5XkebXWOQvTPVJ9fy6UUh6ZZn842aD8htbtsiQ/XUr5TJIfrrUumMlJW4P+f51D+9KSJJe3bj9fSnl+rfWDM6xzXo5L89UnSxNY9dYkL5tg9WWt2y+XUl5ea337NHXN6zlLKeVnkvxhmuNQuyO6DzoW7c3oVTLLJzl1K6VkWV2RXdmRvZnZFcb99UAG0p8kWTHFKeGKrMre7Jq03l11R+7OLVmelTl9wq8OR9v+3ZuTND8mLVs18Y+ZpXRl6YqN2bPz7uzbfeR/zLz9uv9I34GHMjRwID2LlmbF6lOy8dQrs/GUh7uCdAHY27arXzHBFXhJaz9UVmVX3Tam/Hza03qe5eXQ7B0jVpRm/zaQvvTXA1k0u5hk5shCP5YlyU35evrqvgxmIL1ZlJVZm5Nzek7Maa4iXQDaMxes6FozYZlSSpZ1rc6u4a2zynTA8aE9c8GKJRsnLFNKybLF67Nr332zynQw12qt6R/ckx177srN930iSbKkd3U2rnaufbQcC8cxFrbbbxlMbY35n/OwiYewu7pKzjy7J9ddM5Dbbp55poPpfOV/mgCGc8/vHbO8/fRmeIpflkeCFW67ae7adKwRdAAsFGdm7I/3u5LckuShJN1JTk5ybpKRXfz3Jzm7lPK4uRwIa1dK+YE0g9JJ8+vjN5M8mGYQ/YJxZRcleX+S7x5Xza1J7k6yJs0A7ch+97uSfL6U8rRa6x3TtOPlaQZU239FHEhyQ5ItaQYlzszogNjiJIfmrpqhUsqz0wxQj9TxxSTfXWs95L+IUsr5ST7cev72tn0jybY0wRqXZnQw+EeSnF9KeUqth+TP+lSSE9N8ziOh1Dtazz+Re2f+qg5p9wVJPpJkRdvi3UluTLKz1d4NrbaMnGWM/xX3liT/L8371H7V6aeTTNQnvz7BssvG1XtXkvuS7Ekz6PqwjAZ+LE/ytlLKmlrr6yd/dYc4P83nuab1+OY0792yJA/P6GdzbpKPlFIuq7Xum6yy1mf+32kG1Eb0J7m21e6zkpye5jvyiST/axZtnbVSyquTjLwffUmuS7P/ODXJeW1FfzLJ1iS/Nk19b0jyqnGL70/zeS9KM4i6Ks3A5bLMeCbd2SmlnJ7k42k+lxFDSa5Psj3NAOzI6+tN8vIkF5ZSvr/WOqMQ31LKDyf5+9bDwVbd29J8thdndH/73UnemeQFnb6eCRyJvj9XRgI8RuxL8z3amWQ4zX7r/DTHqiR5UpL/KaVcWWtdCJOULkuzv7uy9fiBNMemmuZzHjn2rkjyr6WUR9Rar5uqwvk6Ls1zn2wvvzvNcao/zWc3sj9bnuQvSymLaq1/OkVdZ2aezllKKf8ryR+PW/xgkpvS7IMuTvNZvSzNezqrQKNvdX0ZvZpgcSYfLFucpUl2jCk/83on78qLWs85Ub211tyQr6am5vxcYZ7sBar/wOjp6aIlk/+YObKuv+/I/5i5d9d96eruTVd3Twb69mTH5huzY/ONeeDO/8lFj35xeno7/jeAOdDXtptfPEE64THr6tjy86m/9TxTt2k01q2v7hd0cJQs5GPZiD3Zma50pyvd6U9ftuWBbMsDuSe35eH18ekt42O3OZL62v6dXzxFVqUlZWl25cjthzh29A3sOXh/ce/KScst6V3Z9KGjMJXB9Xf9R+7ddvUhy1cuPSmXnfmD6e7qnWArjoRj4TjGwrZl8+io/kjmgIk06wayZfPwnDzvf39kf77x9eZn3mc/b/mYdavXdmXpspL9+2puu3kgT3vWoX3woZ3D2balacvWOWrTsUjQAbCQXJ3kb5L8V631kFylpUn9/EtJXpFm//XwJL+XZrBtPry79fcdSV5Taz2Yc66Usm5c2d/J2ICDzyX5uVrr19u22Zjkt5L8bGvR2Un+vpTypMmuTi6lPDfN1Y4jtqaZfuC9tdZd48qelSbbwM/N6NVN/Hw/leRtGR08+2Caq0kPGYQupaxMc/Xvma1F25P8epL3tGcyKE2a/Z9P8rtpBkyuTJM94kXt9dVaX9wq/9qMTrHw9VrrMzt9PVN4fUYDDh5M8579+/jPoRVM8qQ0GQ7GnE3UWt+b5L2lmb6gPQPEi6cLJGnTnyYg4J+TfGyiq2BbGRnekNHAht8ppfy/WutXZ/gc700TcPAvSV7d/t0qpaxO8kdJfqK16Jwkv5Lme3WIVjaPv8/oAF1NM/D4+lrrzrZyT0ryl2kG6N88w3Z24tI078v+NMEEb2/vq6WUK9O8vyOD868opfxlrfXOiSprXbndHnBwW5rv60drbWJsW33ixWm+ly9IMueDyqWU7jTvc3vAwV8l+fX27CitjBNvS5OpImmyAvxekv9vBk+zIc2V70Npvg9/OO4zPDdN33lMa9GPlFL+rNb6uU5e0wSORN+fSzeneb/+M8n1tdYx/z2UUtamGQR+TZqB61PTfAd+4Ai3cyK/lebz/lqSX661fmpkRauv/VyaQe7uNPvoP07y9Mkqm8fj0nz2ye9Kk9nlQJJXJ/nLWuuBVr1dSZ6d5rs0kgf7zaWUz0/T1+b8nKWU8uiMfW/vS3P8/PeRPldKWZpmv/S6JC/MPOyDjmXDbTEYXZn8x4nu1rqhGcZsDM243p5J6707t2ZXduSknJ515YRD1rMwDA31H7w/1Y/V3d3NuqHB/knLzLX1J1+cVevOyuoNZ6d3UfMj2IF9O3L/7Z/Lvbd+Jru23ZYbvvzeXPK4nzxibeJQc7G/mA8jz9M9RZva23uk2sWhFvKxbGM2ZW02Zk02ZFFpYtcP1H25O7fkztyUndmaa3NVrhwTk8+RNtPPuuvgZ338Xo3JxIaGR89vuqY4HxpZ117+SOnpWpxFPcszXIcOTqmwculJueDUZ2b5kqM+Q+NxbSEfxzg2HNg3OrXB4iWTZ1Ba0lq3f9/hD/BvfmAov/urzbQg3/b0JXnCU8YGzHR3lzz6CYvzqY8eyD/97Z686GUrsnTZ2AsJ3v3noz+J7d17/AYduLwCWCjeXWu9stb6lol+vE+SWusDtdZfS/KjbYt/qpRJ8lYevpVJfqvW+lPtAQettmyvtZmss3XVfPsg3yeTPK094KC1zZZa68+lCVAY8dgkPz3Rk7cGsd7ZtuiOJI+ptb5t/MBOq/7ba61vSjPAesPMXuKY5/vtNINkI2duf53kOVNc9f7GjA7m3pPkEbXWPx8/dUKtdV+rXc9Jc1Vwkryw03muD1dr4PxZbYt+tNb6/okCP2qt/bXWj9dafzrJj89Dc55Ra/2RWuu/TJZ2u9b6hSRPS/JfrUXdSf73LJ5jQ5I/r7U+d/x3q9b6UK31pUk+1rb4JzK5n0pyRdvj/6/W+qr2gcFWvZ9JM1B8S5qBvvmyLs1Vvs+otb55fF9tDRh+V5oMCEkz
8PdjE1VUmjygf9K26O4kT6q1fmQk4KBVZ3+t9R1pgowGMz+v76VJHt/2+A211p8cPx1LrfUbSZ6RJiPCiJeXUi6dwXMsT5Na/4W11t+Y4DO8Jc33pH1Ac6q+MVtHou/PlU8kOb/W+vpa67XjAw6SpNa6o7Wfe2aaQfMk+b5SykLI6bghyZeSPLE94CBJaq1Dtda3Jnlt2+KnlVLOmKiieT4uzWef3JgmSOp5rfOMg5c81FqHa60fSPLtSWviyaav/dkU9c3XOcufZvT/s+1JnlJr/UB7n6u17q+1/lGSH269po72QaWUr0x266Q+ptZX9+fWXJee9Oa8XHa0m8Mx6uxLnp0Nmy49GHCQJEuWrc1ZF39Pzrn0OUmSnVtuzo7NNx2tJgLf4s4vl+eEcsrBgIMkWVKW5bxyWc5v/Zu4PZuzrT5wtJoIHCfOP/UZecql/ztPveyVeeplr8qlZ/5ABob250s3vzs33vuRo9084Biyb+9wfuUnt2b71uGcfGp3XvPGiWfSfOnPr0x3d5PF4OdfvDXXXdOfgf6arZuH8va37Mp73r4nPa04ra7jeLYpQQfAglBr3TN9qYNl/yHJ51sPlyf5znlpVJPW+XemLdXMqTyyP92f5CV16vTmr01yTdvjXyoTT3z4C8nBSaqGk/xIrfW26RpTax1sH1CZTimluzU39G+0LX5drfUnpsjAcFLGDsK/ZLqr+2utH85o9ojk6M1FvTGjUwokyWdmslGtdYoZmzoz037f+hx+pW3Rs1tXKM/EHZl+ioM3td0/u5SyaZJy7VcrX5VDU4AfVJvpOH52svVz6E211s9O0Y5b0kx9MuJJkxR9esZmFnh5rfW+Ker9TKYelDwcv9R2/9qM/W6Ob0d/mu/iSMBFGbf9VP6u1vqPU9S9I02GhRGTvXezdoT6/pxoBU7V6Usmrb448p6WNGn1j7bhNMFVU73nf5IczD1YkjxhknLzfVyazz75nlrrf05R941JfrNt0WNb2VImKjvn5yytQLz2YLz/U2u9eYp6P5Dk72bajuNFV1siv+FMftgeaq3rnmHiv+4Z1ztyJfHYem/INRnKYM7OxVksXfmC1t09mhJ8aHjyKz+Hhpp13T0LI4X4SWc+LouXNT+ObX/wG0e5Nce3w91fzJfRq/4mb1N7e49UuzjUQj2WTefUnJ0laVL5b8n905RmPs30sx4++FlLQ89Y3V2j5zfDU5wPjaxrL3809HQvzslrL8mjz/vx9HQtzp2br8qDO2d9LRZz5Fg9jrFwLFk2OkzSd2Dyn+MOtNaNzzgwG30Han7lZdvyja8PZO36rrztPRuydt3EPzteesXi/Prr16anJ7n6i/350edszqPPuzdPf9T9+fM/2pXzL+rNc36oCRBfuer4HXo/fl85cKz7Qtv9R8/Tc/zVDAea2weV/qVOkrp9ROuKxfbB2oelmaN5vBe23f9QrfWqGbRlVlpTH/xbmiurk2YQ6Rdqrf9nmk1/OKMD91fXWj8+VeE2f9N2/2kzbujcGj9h4RUTllpgWoNP21oPVyS5aIabvqM1MD2Vz2Y0C0UyQX8spZw/bvmfTjcQW2v9WJJvzrCdnXrbDMq0X9090XctSb6n7f4DSf51BvXOedBB68r49ja+ZbLgnxG11rszOtCdJN83w6ebSfvb37tzW9NLHFGH0fePliNxfJqNj7cG1CfVylJwTduiyb4n831cms8++dYZlHlXkvaAgh+cYd3TmUmfaJ+KY3fGBulN5k+mLzKxWusjJrt1WudC0D5n6FRzePa1TgWmmmN0bL2jMyz1HXIaMaq/9Zzt9W6vm7Ml92Z5VmVTzshgHRxzq22H35Flw4cmVOEIWbRk1cH7/QcOSeByyLpFi1dNWuZIKqVk5ZrTkiQH9m4/yq05vrXPnz7VPOkj6xaXyecknksjzzN1m9rngT8y7eJQC/FYNhOllKxKMwvl/uydpjTzaex+aLLElcmBI7wf4tixuHfFwft9A7snLXegtW5x78p5b9NMLFm0KiesuSBJcu+2q49ya45fx+pxjIXjhBNHB/23PDj50MzIuo0ndDbMPdBf84qf3ZYvfr4vK1eV/PnfbsiZ50wdiPd9z1+ef/jQifnBFy7PeRf05qRN3bn0ikX55V9dnb/+lxPS39f8VH7aWcdv0Mvx+8qBBauUsjHNVccPT7IpzVWVi8cVa78i+dR5asqnpyvQSkF9ctui/5hh3f8+7vHjklzXVu+JSc5vW/9PM6x3xkopG9KkLR8ZADmQJq31+yff6qBva7v/0Vk87dfa7m8qpWya6mry+VBr3VlKuSWjfejvSik/2RogP2pKKVekubr4ojTTBqxMDpmkbHnb/VPTXAU/nWnnO6+17iulbE+Thj1J1kxQ7DHjHn9oBs+dNH3swhmWna07aq33zqDcPW3310xSpv31fWomAUe11ptLKXcnOW0GbZipx417PNN9yr9lNPvIhlLKeVNdJZ1kIE3K/em0v3clyerM8Rzy89j351wpZVWa49PlSc5I09Ylad6bEae03Z+v49NsTLsPaJnye3IEjkvz2Sc311qnnTagtS/8ZEaDkMbv9w4xh+cs7c/1yZlkhqi1frGUsjWj++7j3vKM/uC5N7vGPB5Ra82+VmzJ8sxswHhRWZzeuigD6c+e7Mr6nDRhuT3ZdUi9B1qJaPZmVz6Zf5v0OQ5kXz6ZDyRJLsojsylnzqhtzK2lKzem2bXU7Nv1YJatOOGQMrUOZ/+eZrezbOWh6zm+LS+j3/89deeE+5laa/a1ZiVaXlYfoXatztbcl70Tz2qVJNnTalNvFmeRrCxHzUI8lnFsWd41ul/ZM7xzzOMRtdbsG37okPKQJMuXjP57sefAljGPR9Ras69v2yHlj7aRAIj9/TumKcl8cRzjcJ15Tk9KSWpNbr1pcMJAgOHhmjtua67ROvu82WfsGRysefUvbs9nPnEgy5aXvPXdG3L+xTO7puWch/Xm11838RQMN1zXZIB5+JULIyPe0SDoAFgwWgP4b0qTOWA2+6c189KgZNqU0Rk7kJCMHVSfVGvg+64kp09Sz/hB2i/PpN5ZWJkm3fN5rcc7kzyn1jptoEVL+4TE31tKeXiH7diY5IgGHbS8McnbW/fPTPLR1uDxh9NMt3DVNAO2c6aU8uwkb8jsB+bXzLDcTCfU3JvRgatlE6xvn5v+vlrrTC+jm8/B4dm8thETvbakGUAeMZvsDN/I3AYdtO8LHqi1bp7hdl+foJ6p+vC2WuvkeRJHjb9MabL3b9aOQN+fM6WU9Ulel+THklmFy6+ZlwbNzlx9T+b7uDSfffK66YscdG1Ggw4eNlmheThnaX+u2ew3r03y7bMo/y2tp/RmVV2bXdmRbXkwJ4yJAWo8lO0ZTNPV1mXmA8Zrc0I2555sz4M5Y4KucaDuz97WD1yzqZeFpadnSVasOTV7dt6dnVtuzoZNlx5SZveOuzM02MQFrd44/hT+6Ki1ZvfOu5MkS5atO8qtOb71lN6sKuuzq27L9uEHcmL36YeUeahuHd0PdU38g/lcW9t1Yu4c+mb
21IfSV/dPeGXz9uH7j2ibmNixeiyrtWZXmn8Rl46JF+ZI6ym9WdW1PruGt2Xb0P05seeMQ8o8NDy6H1rfffIh6zm+9XQvzqplm7Jr333Ztvu2nLjm0H/ZH9p3TwaHmpll168860g3cVL7+3cmOfpTPhzPjtXjGAvH8hVdueiy3lz/tYFc9dkDedqzDj1vvfbq/uzZ1WQVePQTxl/3MbXh4ZrffPn2fOLD+7NkScmb/2p9Hv6I2dUxkVtvGsjNNzT9+pnPmbOfT485gg6ABaE1l/FH0tkAzeEfFSY2eU7VUePD2mZzBfCWjAYdjK9n/K+FMx14nKl1457jzbMIOEiS9W33L0znV7IflZD6Wus7SilnJXl1Rq9QPi3JT7ZuKaXclyYjxTtrrXM9uJbWc/xukummspjMTPt9Xwd1lwmWtffRbROsn8xsys5WJ69tMmva7u+cxXZzHT7f/j7Pdn8yWT0T6fS9m6hvzL6SI9P350Qp5ewk/53R/fVsHNG2TmKu9gHzfVyazz7Z6T5rwu/RPJ2zLMR97DHppJyeXdmRB3JXzq4XHjKwdmduSpKszNosLzNPBXtSTsvm3JNteTC7686sLGvGrL+rVe+iLMnath+4NpUzp8xacGu9Prfnm1mSZXli+a4Zt4f5s/GUy7Nn593Zcu/VOf387xgz5UKS3HtrM8vLitWnTJgJYT7UWlPK5Lu7B+68Kn37mlOStSdecETaxORO6j4zuwa35f6h23N2z6WH7ocGm/jWlWVdlncdmavw1nedlEVZkv4cyJ2D38zDeq8cs3738I5sG27iFE/uPvOItInJLbRjWTL9fuje3HYwu8+GSa4+5cg5uees7OrflvsHb885vZdlcdfYwY87Bq5PkqzqWi/TARM6ee0l2bXvvty//dqcc9KTD5lC4Y4HmxnkVi09+YhlOhiuw+kqk6dR33tgWzbvvCFJsnZ5J/++M1cW4nGMY8uznrMs13/toXzoA/vyU7+0KhtPHJsQ9T1vbzJlXHhp77RTIrSrteZ3Xr0jH/q3/eldlPzBX67Pox5/+Bm+BvprXv8bzf9jT3jKkpx/0fEb+NTZZBcAc6iUsjzJ+zP64/1Akvcm+eEkl6YZ6FhSay0jtyS/Nd/tqnVGE+qOHzzon8VTtA+wjD+6jX88bZrlWdqcsWmsf7OU8tJZbD9Xly4cteNQrfXXkjwyyfsydg7vEZuS/EySL5VS/qmUcWeyh6mU8pyMHXS9N8lvJ/mOJGenyUbRM67f3zmXbZil9rOlTvs502vfpxzO+7xgc+IeS32/lNKVZhqBkV8sapqpLF6SZoqFDUmWjmvrj09Q1beC+T4uzadOv0uHBAjM4zmLfewcOSVnZ0mWZSiDuSafO5gufLAO5Ob69WxJMyvPubn4kG0/Vv85H6v/nFvr9Yes25hNB+eq/nq+kIdqE+8xXIdyZ70pd7WSy5yTi6b8MZSF76QzH5vFS9dmaLAv1//PX2ff7geTJIODB3L79f+Vbfc3yVPOuPBZh2z72X9/ZT7776/MnTd8ZMK6B/v3ZaBv78HbweUDB8YsHx4eO8vTbdf9W2699t/y0LbbMzQ0mhSmb//O3PGND+bWa5upO1ZvOCfrBB0cdad2n5slWZ6hDObq/k9mTyuF+WAdyE0DX83m4SYrxXk9lx+y7UcPvC8fPfC+3DowPolVY6D2pb8eOHgbMZiBMcuHx/0r21W6c3ZPk7njzqEbcsfgNzPcmk1s5/CWfG3g00lq1pSN2di9EGaHOr4txGPZjbkmN9ZrsrNuzVDbTHQH6r7cXK/NjbkmSbI2G7OhuHL+aDu152FZUpZnKAP5at8nsmd4Z5LWfqj/K9k8dFeS5NzeKw7Z9iN735OP7H1Pbum/ZsK6O90PJclQHRxXpulLwxkes3xwRgnQmE+nbnhElixanaHh/nz11n/Inv3NdQ6DQ3256d6PZvNDzeD+uZueesi2H7n6t/ORq387t9z/yQnrHhjcn/7BfQdvIwaH+sYsHx436+UN93w4N9zz4ezcc3eGhgfb6juQe7ddky/f/DcZroPp7lqUM0547OG+BRyGhXgcS0b2QX0Hb8Np9lPNPmh0+WAdPGRbjqwffOGKnHxqd/buqfnln9iaW29qjgt79wznza/bmU98eH+S5BdfeWjg3BVn3JMrzrgnf/HHh04r9ge//VA+8I/70tOT/P6frc8TnjK7n0/f8Bs78tUv9mX/vlbfGa756hf78lM/siVfuao/a9d35f+8bs0sX+23FpkOgIXgxzM6x/FAkqfXWj81zTYzD4OcXzvHPV6ZiQewJ9J+Wcv4esZfPb16FvXOxP40A3wfTjOHfFeSd5RSemutfzGD7XdmNBX/y2utfzyHbTtiaq1fTfKiUkpvkkelmVf+SUmekrF97HlJTi2lPKnWcf/1dO432u5/KU2/n3yS1cbR7PftmT9m045jZRK1nUlObN1fM4vtpsso0Ek7RhzO+7xzokILxLHU978rySPaHr+o1vp302wzn23tnr7IvJnv49J86vS7NFG/nK9zll0ZzSbxrbiPPWK6S3ceXh+fr+bT2Z2duSofSXftyVBGfzg6N5dkfZndVZillFxWH5uv5FPZn735Uv473bUnwxlKTZPW8ZScnVPK2XP6ejjyurt7c+GjX5zrvvD27H3o3nz1v/8w3T1LMjTYlyb2rOSMC5+ZtSdMOgPLpK7+1FvSt//QJEk3fuV9Yx5f8vifzpoN5xx8PDTYl813fyX33/65JCU9vUtSaz04zUOSrFp/di545I/Ouk3Mve7Sk8sXfVu+0v/x7K7b84X+/0xPejOYwaS1vzi35/KOUppf1fehHDhkpqHk2oHPjnn8iN7vyLruE8csO63nYdldd+TeoVty8+BXc8vgNelK18H949KyIpcueuKs28TcW4jHsqEM5v7cmbtzS5Kkp/ampo5p05psyGV5XCcvmTnWXXpyxeJvz5cPfDS7h7fn8/v//ZD90Hm9V2RDz6ZZ1/2F/f+ZA/XQ/dDX+8Ymz3zkkmdkXffYPnr7wHW5bYKgqs1Dd2XzvrsOPt7Uc04uWfyEWbeNudPd1Zsrznp+vnzL32b3/vvz+Rv+PD1dizM43J+Dfejkp2bDqnOmrmgCX7jx7TnQf+i/Wl+/41/GPH7kuT+WdSvPPPh4eHgg92z/Wu7a8sUkJT3dTYz44NDo+dCinhV5+FnPzZJF/k06mhbicSxJ7siNuX2CGVW35L5saZv99+SckYvzqFm1jbm1ZEnJH79jfX7mBVvzzesG8tynP5gVK0v27a0ZHk5KSX7hlavyuCfPPGjg/nsH83fvav2MVZLf+7Ud+b1fmzyB7ce+fOgx8h/fszf/+J7mGLhyVcn+/TWDrTi5Tad25y3v2pCTTzm+h92P71cPLBTPbLv/9zP48T6Z2znUD8f49NLnJLl/uo1aV8+2T3o2vp7xdZyftMJA50itdVcp5RlJPphmoL0k+fNW4MFbp9n8gYwGHZw4VcFjQWse8c+3bm8qpSxKM6f372Z06ojHJXl+kukGHKdVStmYsQOZr5pu0L
WUsiJHd374B9vun1ZK6Z5hAMaxMvpyZ0b78mymC7lojtvRvi84vZTSU+uMQqzH/6c/16nv58Qx2Pfbj0+fnkHAQTLz41P71ewzzQU310EuszHvx6V5NJtJRtv3WQ9OsH6+zlkezGjQQaftpWVlWZPH1mfkjtyQrbk/fdmf3izO6qzN6Tkv60pnpy5LyrI8pn5H7siN2Zx7cyB7052erMyanJpzcmJxdfC3ihWrN+XKp/zv3H3zJ7LjwW+m78Cu9C5alhVrT8spZz8pazaed0Tbc9IZj03vouXZtf3O9O3fmcGBfam1ZvHSNVmx+tRsPPXyrD/5khRZNhaMlV1r87jF353bB6/P1uF701f3pTeLsrprfU7vvjDru49O+vmLeh+TdV0n5Z6hm7N7eEeGM5TlZVVO6DotZ/ZcnJ4y8/S0zK+Fdiw7NWdnURZnZ7blQPZlIM3A4+IszaqszUk5PSfklCmnYODIWtm9Lo9f+uzcPnBttgzd09oPLc7q7vU5o/eijgKfOL6sXHZSHn/hz+b2Bz+bLQ/dnL6BXentWZrVy07JGSc8JutXHtl/Rc468QlZvmRDtu++Pfv6dqR/cE+G63AW9SzPiiUnZOPq87Jp/eXp7V6wiR+PKwvtOMax5/yLFuX/fuTEvOttu/OZj+/P5geHsnptVy55+KK88KUr8pgnzu673p6AZ3Ag2bZlJkmux/rlX12dL33+QG69aTDbtw1l+fKunHF2T572rKX5oR9bkSVLnAcJOgAWgjPa7n9xusKl+S/28fPXnFm5Ns2VjiO/zjw+yWcnL37QZRk7RcGXx62/LsnujF7t+JQkn+i4lZOote4ppTwzyX8kGcmJ9iellEW11j+cYtPPJ7mkdX+uL2VoP+IflSN1rbU/yftLKZ9Lcn2S9a1V35lDgw7Gn6HMpM3jJ5ebtt+n6VtH85fkr7TdX5qmD189g+0eMz/NmXP/k+TRrfvfNpOgilLKeZn7AKj293lJkisz8/4xYihp5TddeI61vj+r41PLTC8RbM8esm7SUmNdOsNy8+GIHJfmyQWllFW11l3TFx2zz/rKBOvn65zlKxkNeJrRfrOUsjpN8AcTWFyW5PxcnvNz+Yy3+Y7y3GnL9JTenJtLcu7B06DDc065OOdMkFaUo2/RkpU559LnJJc+Z8bbPPHZb5xy/aOe/qsdtWXVujOyat0Z0xdkQVlcluaC3kemmdFtZp6+5IVTrn/Sku87vEYlOan7jJzUrT8dCxbSsWx1WZ/VB/8t5lixuGtpLlj86Fxw8F/d6T1j+Y9Nuf7Jy36w4/acu+jynLvo8o6358hb3LsiF5z6zFxw6jOnL9zyjCt+c8r1T774lztqy/IlG3LWkg0560RZMI4VC+k4lvjf61i04YTuvPK1a/LK166Z8TZX3zlx4Mmm03omXTdTL/mZlXnJzyyUBNwLkzB4YCGY7eUUz0xyynw0ZLZqrQfSDFaOeFGZWWj/i9vu9ye5aly9g0k+3rboJ0oph8wtPRdqrfvSXNXfPvnsH5RSXj3FZh9qu//EUspcDnq05+lbOof1zlqt9cEkn2tbNNElSePzCs6kzZ1cQvTSDraZS1/M2Dncf2S6DVoDYt89by2aW//Zdv+kJN8/g21+fh7aMf59nmme5PZfhr5Sa12oae+Ptb4/q/aWUi7KzAOx7my7f9kM6n5EDg3aOGKO5HFpHvQmmfaXi1LKZcmYXy0mymIwX+cs7c91WasvTef5EUQOAAAAwAIg6ABYCO5ru//kqQqWUpYl+eP5bc6svaPt/qVJXjJV4dbV0T/btuifaq07Jyj65rb7pyR5XWfNm16tdX+SZyf5r7bFry+l/MYkm/x7khtb97uSvKOUOcvF2Z7C+5wZBnHMWAf1rWi7v32C9TszdpB4Jvl27xv3eLp+/7Qkz5tBvfOm1ro7yT+3Lfr5Usp0A6CvzVEOHJmFjyatCUobf1RKmTTfZCnlSZmHoINWsMDfty36qVLKBVNtU0r50SRXtC16+1y3aw4da31/NsenriR/Nou62zPcfFdrGonJ6i5JXj+LuufLm9vuz+txaR78Zill+TRlfr/t/u4k/zRBmfk6Z/mnjA1i+/3JCrbqXpFksmM0AAAAABxRgg6AhaA9PfNzSynfM1GhUsr6NFcjL7RUwv+Y5Ia2x29rTVlwiFLKmUk+mGTk6tC+TDKQ1Jon+j/aFr28lPLGqa4sLaUsKqW8tPU8s1Jr7UvyA0k+0Lb4t0spvzNB2eEkv5KkthY9KcmHSymbpnueUspFpZQ/LaW8YpIi7ems1yX58Rk0fzaeXEr5UCnlGaWU7qkKllK+N00K8RH/Pb5MKwX/NW2Lfq6UMuWkUrXWu5Lc2rboD1r9e6I2PCXJv+QoTTUxzu+nmU4kSZYl+c/JBuZLKT+X5H8doXYdtlprTfJLbYtOS/KZUsrT2wNVWt+xn0wToNOTZMs8NOf3MxrIsijN+zzhZImllKcn+cu2RTcned88tGlOHIN9v/349OhSys9OVKg1uPzejN1fTOdf2u6vydgB/fa6e5P8RZKnz6LueXGkj0tz7Iwk/1xKOSQPXimlu5Tyh2myEox4yyQZQ+blnKU19cNb2xZ9T+u9PeT/tVLKqiT/msRklQAAAAAsCNJxAgvB25O8Ks0V5V1J/q2U8rdpBjYeTLI2zaD2TyRZn2Ye7P/KDNK7Hwm11r5SyovSpOFfnGYe9g+WUt6f5P1J7kkzoPTtSV6WsVfOv6rW+o0pqn9xki8lOaf1+BVJnl9K+bs00zpsS3Ml+VlpUno/O837dcWhVc3otfSXUp6X5krrkVTUv15K6a21vnpc2Q+VUn4to0ETT01yW+t1fyLJXUn2JVmV5orYy1tlRq7a/q1J2nBjKeXLGZ389J2llF9NM5ja31b0T2qtncwnXtIMLD0zyYOllA+nueL49iQPpUmdfXaaaQGek9EAvVvTDCpO5L1JHtu6/4wk95dSrknTV0cCM66rtf562zZ/mORtrfsXJbm2lPK2NJ9rf5oBsuekSfNf0gSrXJKjm179ulLK76XJYJA0mT2uL6W8PclnkuxJ8969MMnTWmX+LskLjnBTO9Lq029M8srWonPSTDtyXynlljR946Ikq1vr/y5NEMbIdCl9c9SOG1tBOSMDkOck+Xop5V1p0tvvSLIpyfcl+aGMDsr3J3lRa9qXhexY6vv/N83V/Ke1Hr+tlPKMNFel35NkZZJHpzk+nZamP7wnM5gSotZ6UynlX5KMTIr60tZUNe9McluawJ4rW3Wfk+YK++tz9IMPjuhxaY58LM0x6Jlp9ll/mWa/P5DmmPTSNO/1iOuS/O4kdc3nOctvpen7F7YevyLJU1vf/RvS7IMeleSn0wQcbEnytSTfMYO6AQAAAGDeCDoAjrpa6+ZSyovTZAzoSfMj/oszOpDXbm+SH07ymCPXwunVWr/Sym7wgTQDkiXNQNIPTrZJklfXWt8yTb07SimPb9U7Mk/46UlePelGh6nWOlhK+eEkf5vRQZJXlVIW1VpfPq7sG0opD6YZQFySJujiR3L4ASE/m
WaQaOQK6HNbt3YfOMznSJITM3lfa3dXku+pte6bZP1fJvneJN/Zerwmh17xvGbc479IMzA/0kdOTnJIVomWr6YZyL9mmnbOu1rrb5VSTkryM61Fa9MMwL1qguJvTxPAckwEHSRJrfVVpZTdadKWL2ot3tS6tXtnkl9I8u62ZQ/NYTv+tHX1+JvS7E+WJ/nF1m0iu5N8f631i3PVhnl0zPT9VlDZD6UJ9ljWWvx9rdt4A2mmzhnKDIIOWn4hycMzun97Yus23pY0g/eTff5HzNE4Ls2Be9NkEPlAmuCQyQIKkibA7Rmt7D+HmM9zllrrgVb2kk9lNKjjEa3bRHW/IMmLZlI3AAAAAMwn0ysAC0Kt9f1prtS7bpIiQ2muOL6y1vqhI9awWai1fjLJxUneldHU6IcUS5Oi/3G11jfOsN7Naa6afFmSm6YpfleSN2Rs+vJZa00Z8KIkf9O2+FdKKW9tTzXfKvvXadJH/1mmH3Tdk+aKzxenGUyd7PmvSfNevibNFfRbMjbLweG4NslvJrkqo1MFTGZLkjcmubTWesNkhWqtg0m+K80A0L+myZqwN6NZDibapiZ5fqstuyYptiPN5/m4WuvOadp6xNRafzZN/7hnkiL3JvnpWutPH7lWzZ1a6+8muSzNXOzfSDOgvzfN9+9vknxbrfVlrYwCJ7ZtOqdTLdRa/zDNYOXHkwxPUuxAq00X1Vo/PpfPP1+Otb5fa70qTSaTz09R7AtJnlhrfecs634gzf79nzLx/mIozUD55bXWr0yw/qg4Gselw1Vr/ViarBSfmaRIX5oAukfWWu+fpq55O2eptd6bJsjgLzJ59pTPJHl06zUBAAAAwFFXmt99ARaG1oD2lWlS669PM9h3f5LPtgZnjgmllKVJnpwmvfS6NIPt9yX5dGuw5nDqPifNwMkJaVJ7701yd5Kv11qnG/yZV6WU7jSf30VpPr+ladr3QJrU0NfXWqcb6D9iWp/TyFXGJ6S5kvlAkq1pBpOuaQUUzHc7VqTpLw9L855tSXJHkk8tpPdrvNZc409IEyCyNk27b07ymVrrZIPk3zJKKT1Jtqf5HibJ0+drELCUsjFNH9nUer7tafrIp6fIwLHgHWt9v5RyYZLHp9lf7E9zfPpirfX2Oaj75DTT8JySZtD6njTfpSkHwBeChXhcKqW8O6PZB/6m1vqStnXnpAno2ZTmvb4zycdqrZMFwUz2HPN6zlJKWZ0mK8gZSbrTnEf8T611XgM4nt71PP8gclgOfO+jj3YTOMYt/cjXjnYTOMbVvjmZ9YzjWNeyZdMXgqk87Myj3QKOccPXTDUbL0zvTXdcdbSbwDHu8tPvLtOXGkvQAQBAB0opL0zy3tbD/iQn1FrnbIoFoHNTBR0wNUEHHC5BBxwuQQccLkEHHC5BBxw2QQccJkEHHC5BBxyuToIOTK8AANAyfvqQKcqdmeSP2hb9s4ADAAAAAACOR4IOAABGvaGU8o5SytNKKb3jV5ZSVpRSfibJl9Okkk+aKTledyQbCQAAAAAAC0XP0W4AAMACsjzJy1q3/lLKzUm2tNZtSHJhmrnVR9Qkv1Rrvf6IthIAAAAAABYIQQcAAKOG2+4vSnLxFGUfSPJztdZ/nd8mAQAAAADAwiXoAABg1P+X5INJviPJI5KcnSbDwZIku5JsTfKVJB9N8r5a64Gj1E4AAAAAAFgQBB0AALTUWvuTfLh1A45RtdaXJHnJUW4GAAAAABwXuo52AwAAAAAAAACAY5OgAwAAAAAAAACgI4IOAAAAAAAAAICOCDoAAAAAAAAAADoi6AAAAAAAAAAA6IigAwAAAAAAAACgI4IOAAAAAAAAAICOCDoAAAAAAAAAADoi6AAAAAAAAAAA6IigAwAAAAAAAACgI4IOAAAAAAAAAICOCDoAAAAAAAAAADoi6AAAAAAAAAAA6IigAwAAAAAAAACgI4IOAAAAAAAAAICOCDoAAAAAAAAAADoi6AAAAAAAAAAA6IigAwAAAAAAAACgI4IOAAAAAAAAAICOCDoAAAAAAAAAADoi6AAAAAAAAAAA6IigAwAAAAAAAACgI4IOAAAAAAAAAICOCDoAAAAAAAAAADoi6AAAAAAAAAAA6IigAwAAAAAAAACgI4IOAAAAAAAAAICOCDoAAAAAAAAAADoi6AAAAAAAAAAA6IigAwAAAAAAAACgI4IOAAAAAAAAAICOCDoAAAAAAAAAADoi6AAAAAAAAAAA6IigAwAAAAAAAACgI4IOAAAAAAAAAICOCDoAAAAAAAAAADoi6AAAAAAAAAAA6IigAwAAAAAAAACgI4IOAAAAAAAAAICOCDoAAAAAAAAAADoi6AAAAAAAAAAA6IigAwAAAAAAAACgI4IOAAAAAAAAAICOCDoAAAAAAAAAADoi6AAAAAAAAAAA6IigAwAAAAAAAACgI4IOAAAAAAAAAICOCDoAAAAAAAAAADoi6AAAAAAAAAAA6IigAwAAAAAAAACgI6XWerTbAAAAwMLgH0QAAACA41fpZCOZDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACA
jgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCOCDgAAAAAAAACAjgg6AAAAAAAAAAA6IugAAAAAAAAAAOiIoAMAAAAAAAAAoCM9R7sBAAAALBjlaDcAAAAAgGOLTAcAAAAAAAAAQEcEHQAAAAAAAAAAHRF0AAAAAAAAAAB0RNABAAAAAAAAANARQQcAAAAAAAAAQEcEHQAAAAAAAAAAHRF0AAAAAAAAAAB0RNABAAAAAAAAANARQQcAAAAAAAAAQEcEHQAAAAAAAAAAHRF0AAAAAAAAAAB0RNABAAAAAAAAANARQQcAAAAAAAAAQEf+f8h3qkK2SxlYAAAAAElFTkSuQmCC", - "text/plain": [ - "<Figure size 1440x1008 with 1 Axes>" - ] - }, - "metadata": { - "image/png": { - "height": 796, - "width": 1038 - }, - "needs_background": "light" - }, - "output_type": "display_data" - } - ], - "source": [ - "count = len(descriptions)\n", - "\n", - "plt.figure(figsize=(20, 14))\n", - "plt.imshow(similarity, vmin=0.1, vmax=0.3)\n", - "# plt.colorbar()\n", - "plt.yticks(range(count), texts, fontsize=18)\n", - "plt.xticks([])\n", - "for i, image in enumerate(original_images):\n", - " plt.imshow(image, extent=(i - 0.5, i + 0.5, -1.6, -0.6), origin=\"lower\")\n", - "for x in range(similarity.shape[1]):\n", - " for y in range(similarity.shape[0]):\n", - " plt.text(x, y, f\"{similarity[y, x]:.2f}\", ha=\"center\", va=\"center\", size=12)\n", - "\n", - "for side in [\"left\", \"top\", \"right\", \"bottom\"]:\n", - " plt.gca().spines[side].set_visible(False)\n", - "\n", - "plt.xlim([-0.5, count - 0.5])\n", - "plt.ylim([count + 0.5, -2])\n", - "\n", - "plt.title(\"Cosine similarity between text and image features\", size=20)" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "alePijoXy6AH" - }, - "source": [ - "# Zero-Shot Image Classification\n", - "\n", - "You can classify images using the cosine similarity (times 100) as the logits to the softmax operation." 
- ] - }, - { - "cell_type": "code", - "execution_count": 16, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 102, - "referenced_widgets": [ - "1369964d45004b5e95a058910b2a33e6", - "12e23e2819094ee0a079d4eb77cfc4f9", - "7a5f52e56ede4ac3abe37a3ece007dc9", - "ce8b0faa1a1340b5a504d7b3546b3ccb", - "5e6adc4592124a4581b85f4c1f3bab4d", - "4a61c10fc00c4f04bb00b82e942da210", - "b597cd6f6cd443aba4bf4491ac7f957e", - "161969cae25a49f38aacd1568d3cac6c" - ] - }, - "id": "Nqu4GlfPfr-p", - "outputId": "ca7a0e3c-e267-4e6e-8a1b-bbab3c0a2462" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Files already downloaded and verified\n" - ] - } - ], - "source": [ - "from torchvision.datasets import CIFAR100\n", - "\n", - "cifar100 = CIFAR100(os.path.expanduser(\"~/.cache\"), transform=preprocess, download=True)" - ] - }, - { - "cell_type": "code", - "execution_count": 17, - "metadata": { - "id": "C4S__zCGy2MT" - }, - "outputs": [], - "source": [ - "text_descriptions = [f\"This is a photo of a {label}\" for label in cifar100.classes]\n", - "text_tokens = tokenizer.tokenize(text_descriptions)" - ] - }, - { - "cell_type": "code", - "execution_count": 18, - "metadata": { - "id": "c4z1fm9vCpSR" - }, - "outputs": [], - "source": [ - "with torch.no_grad():\n", - " text_features = model.encode_text(text_tokens).float()\n", - " text_features /= text_features.norm(dim=-1, keepdim=True)\n", - "\n", - "text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)\n", - "top_probs, top_labels = text_probs.cpu().topk(5, dim=-1)" - ] - }, - { - "cell_type": "code", - "execution_count": 19, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 931 - }, - "id": "T6Ju_6IBE2Iz", - "outputId": "e1a155dc-474d-409c-e03d-d41b804648c3" - }, - "outputs": [ - { - "data": { - "image/png": 
wiIJUyVcRqZdGK33TCNAxKE0xs3uHHrJicnKxbJGB69w5h6PvnxT7LsEzdvf5aQhJdOT1maMT18zFFQVvNEuvk8y2UHwHzRc+30lL6H9eUZOmVOFitmfWAYCxKhSy7qpmLshpFSPBE3X8yJAR483kGpfx+NsBsmlvMj+m5e5W90mUUCyxCEbBMSIIbuIItBUC1YMK/+FT1MhHqaLh0UpG8gGkH2Rbx7uWhVTlYRXq9Bq9HB/d8DRdJBWruE8+v7sB9pdqhxjeKS1DSwWHa8/5WXeOn5G1geMQK7rLxz5yFfefsBn/7sm5yfPeabXnuV05MjTJUQhBASUxkYykCIfmy5UKtXjS65wNNigEtcK6AqXstcBNWaH9TMOO0Yxx1TzqQuucsTqd9XAmpQ1CVkip2ngwmk2LncVas1s9RkZPRzZAJFa/2qi/06uVnF5n5D0wgiEKJ/m9BSv10IIcS6X6n1nvaktWbq95C6q9loNBqNRuOZocnHRuPrBDNjvV7zp/7Un+L7vu/7mKbpaR/S1wXn5+f8rb/1t/hX/pV/hb7vn/bhNBqNRqPRaDSeIaZJefDggi9/8ZyryyrUVMBcuKR+jqlS8kjsehd0qr7/KIJWESdV0Lms8f/2dcUI9VezWeDV99/gEx99kVff9xw3blxn1nXEYIRQ5Z4ECC6gJBSvdg2BaEaRiEmhlEzXdSjqG3jme3EmRiQgBkULsW7jBZJvMwogsVZi1l0+EcQ8/WRo3at0mam8Zw+OjKl4JWyInqwy/5NixWWJRKgiqthE1ECssTo19XNjdkhh+u6jJ7ZS6Fkt4WPf9Ot4++03+J9/+ifY7QbyoHu/y1T8uc5n/msrRozCLHWslitCSEjsXF6VTD87JqaEhOhp1jyBRPr5nBg9UaoW2G0uWF8o0zZ7XW3ylJkoiAXKOPiOJYrmEY0uXMuUsVLQMlFKJqTkqbJ+xs3rN+iSv46lKK99+DVuX7vBt37sWzk/fwyq2HSJ6sQsBGaxozvq6LueSOBodYREo+9nLOYrNpsdq9WC5TKyGwt58MskBCFGiEGYphHDWM6PSUGIJmxHYxq9wvP9H7zFhz/6GjHs44dKCImSa2oOJYbusEMoQQj7NJzg16h6HaqpYmFCJKI2ATUhW/cGVbXK8OSSU6qYNxANnuir0tnqzqiLyb1R82tVzdhviO7TwN6ALDWVm1jMF1xbXWOxXLJcdkgIDMPI1dUVJ0dvcrXZ8OZXH3D/EVytNxwfLRARUuo4Pjri5GjFcjFn3veYPWBYTwwTDKOx6Ov6aTIXqyLu3T2EiQRDEa9AFSilsBs2TNPIfDar1bP+pgBTF3+lbmSCi8MQoFiulamGmY9Fes1s3ZbU4tpfQMwopgQSIlB0AlI9N/5mgSCQzR9XZMKPVvfB08P3g5IVkYRJrunuNhfTaDQajcazRJOPjcbXCcMw8Of//J/ne77ne1iv10/7cL5uUFV+/ud/ntdff51PfvKTT/twGo1Go9FoNBrPEOdnF9x5+5x3vnqOZiPE5GmrYqTU1Y020KDEJISYsM7QzbbKN5cI8T1/dTcmjIgQfR+SQN8ZL75yykc/+hLve/UFbt+6xWKxQEIgSPJkoBgmSrCIBfO0oQcUqc2PNV/mYpQgxLBPKboMVCkEvCaTw9HVrUfzpJqJeU7TcNmoxb+4xJrgmqrl2Hsg/3wJ+9pG37nc11SKPBGW/juKBBcoTySmPNn6q8kuk4JaqcnJQkgdpyfX+e3/t3+J02vP87f+5v+Ph/fvIdNI6g0txryHVOteux66mJh1kflsge/fef1q1y9c5KWeEBKxn7kozEoZPbFYxomshbzbYlMmxjkpCUaAEOh6QRi91lVcLMZZh3QduXjNqQDHR9dYX01cbbaUUMjDBpvPMImkfkaKUPLI0SoxSz3H114A6dCirK8uUUZu6guUmsY7OT7m5PoxJgW5gsXqmP5yzSwr8z7RxYlc5XeKggSjaGG7nZjGiaNrx8znC4puiJ1vZaaU+MhHPs6rr3ykRvmoaVZPo6aaUlUzT9vW60lSOOQWPYmYDteTy+R69RkeofOL1KtWLdbXu1bWmvi18p5kYy0FxkSeiLF9oahRa1i90tX2+4TBBbcn/QKz5YKT01Nu3DhmebTAgDyMXK2W7IYtt976Cm/fiSwWc9/uxFBTFvM5zz//PCkk1JSzx4+I4bO8/da7TBmG7PuPKfqxRwmYZRekoaYu631k5tdkmTLjbkOZpipO/eNUswtbc4Gu+73Net6jxLp96W8cwHwD0uoep4nWROi+OjkC/ndpU3+99qnr/Rsi/BuX1DRy/b5BAKbDayYhYJqRKjs5lLk2Go1Go9F4FmjysdH4OkBV+bEf+zG++7u/m4cPHz7tw/m64/XXX+fv/b2/xzd90zf5u5AbjUaj0Wg0Go1fBTa7kbsPzjk/H6sAeVIR6gmwur0ItSY0gFITY6NvrEkihI5YOvIh/Wg1SQhJIjeuzfnwh17i1Vde5NatG8wWM89KSkC1eKJQkj9m8IrJEoN/JS3+saFzMYGLi0hXreR7ZJCI1zzaRBRvFVGt9ZW1UdHTjdklW5VBxSavYD3UN+71jtaPyQguFIPtxUvd/jOjHmkVRV4x66fMavqznlMAc5mktYbTw5E1FSeJa6c3+DWf/DYePLzDFz73ae69/RXGPB3SWl0SYoAYApFEnyLzvkMkEgKk0NF3M2KIh23MIELo5xCVnCNTjmgSZhKYz09cqpnWnUJAFM1bhqsRKwXpYt3jE7RkrHhFa4wzJGzo+p6wuULUKNNEP1v4ll89JzEEuj4R+xUhzNC6w0cnWCek+YJxGgFjuVxxfHxMLhO77ej1nWLECIt5T0pbkrtluh6MQJd8e7OUiSlPzOcu83Ybfz7PPX+db/9nfiuzxapW89Y9QRN3kXUuRA6XSfQUrgTE7HCN7V/D4LrK5aJGjAwBgkl9XaVW8gJVhO85JG/r41uI7HdK1bJ/dZH6kLHuENpBXu43EYNELCb62YK+S6SY/POAGDu6LnFydMRLz9/mcr3j2vERy8WcGBOYMetn3L75HMeLE3IZOD064eziIe989S6TKjl7q7Ilv879VttX77rb26c3wZOZipKzMuWJPE2k1NV7ylwi1jcAuACtUr5KYL9XSz1X6qLW8H/f1zpbPtwrpX5N8GSx/3s8SE5PnwZMasWtCabFj9PqvVqmw5sM/NJvtauNRqPRaDxLNPnYaHwd8LnPfY4//sf/OF/+8pef9qF8XXJ+fs5P/uRP8rt/9+/m5s2bT/twGo1Go9FoNBrPCN2s49qNJcenPevNzncdQyRI9ApCzUjqa0tlQYKnCi12kIxp2lWpYITQIaXWrspesGXm88D7X7rN+19+nudu32CxXBIl1IrF4jKkfnzYpxXFRQ4CpfoAF4j+WJ6q8ipMVweKBSGooeISQuuGm4hLsyKTi0DztFusu5VmekhAeo1mTSkiVf6Eus3oMqmQCfukWhVUIC5mBEJ9/pjvZwZjr2/Z7/u57LMqvvaVrS6VMOPmzef5zd/xO1kuVnwGePjg
HrvNmqMVUAzLRhdhsUj08963JGu2y4/X9+wkQIjxkPkKs44ggT7OsblilgldhxWtta2CqTGtL1mvLyg5EzDS/vlg6DgwjmsKEyUrOWfUXBJqTVcKxmI2R21CFUKakWRGHkeyTpB9K3E+O/Yq2Nmc3W5HmSa61DOOBbNCii6v+m7JZIXVYkYQIXUgRehTIojV5ypM08Ruu0W10KWAofRd4rf+9n+BT/yab6Pv5zUgV+ruotfXhlCvGdOaVKxVvlZXG2v1qYSAlX31qQtEpbhiDrU+1Tzti+y3GjuMUn9dhaKZJ/TqVqhZqPdMOIg9BNQmvyPMBaDUxwgWIAZi6pn1PalLjLkQRsWskKeMGSxmM1558Tlms56Acnx0zGECVYy+62EJRSPDODKbzyF4WnQsMGVIyTceo/gbEvb7liJPthcl7KWiUMrEOOwoqkQmQAkh+LGzf16e+ixWvD7ZXPZbffNDlEQxA0r9HMGq1DWxutHob1LYr876BO3kbyqoCUkXl54QRSAmIZdSa25LrYyu6ebgFcyNRqPRaDSeHZp8bDS+hjEz7ty5w7/1b/1b/OIv/uLTPpyva374h3+YP/SH/hA3btzwv8g1Go1Go9FoNBq/wlw7XfCR117gzjuPOb9YM+yEECJak4QSqh1TO6SxLAZsfJI6AoghIeopyAJV4GSiDFzrEy9c77l5Mme2CIRgNXXmFZJegyo1WeaPV10XYC7mUEwyQQIFq2IiEIio1G03qOJR/TlopgCRzj/XIBIotZ7UZZOnCU33hYueuFJ8P1KtPNncU8PDkrGKw1BTjPtk6L66sW7KhYhZQUMhSPIEZq2O3Qsgf8qe2vOknGIixC7xyqsfJHYdmPLOm1/m8YO3sOE+41bpZ0JIQgrC9esvkLrZoUJUBHSakDjzqcaopLh0OSwJSRnq/qPW1FoMc6+flIjmHdNwSZ52YEZMMyT2TGUk6AxRRRSCBiadEAmUSUmpo0gmDwPTZkvpZ6TY03VVBlEI8xnzeIQBwzgwSUbpPBmZEsN2yzjuUDM0G6UmPotMdKHHVH1f0M8yyZ0tUzaGwdhudszngXEYsWAkgdc+9iF+x+/8v7NYLAgpYkXrdRLAis8BHqp7IxK8rlNwkahBCdKh4iLLgnrC0VyEBQn77B9IFdMiUBO3RvELrFaFGqXK71grWP33/F9CvSalpv6e/L3Q5aQiEim11rXv5sTUMRVj3AxcrneoKlGULkVi13Hj+i263sWvSGAcR6+XVa84LaVWK1vZxzqZVBhHyFnc8wmY6iERrAWkA9SbTVVBav3pNA1M045p2BBkhqmipcDheRmm/vi+5ygUzVVOunwvpi5+azVrUU88mhjBArlW0Honc6jfi9yExuASWfSJ7BQTStBaqVtVfL1HsQBh9DcuvOd7WqPRaDQajW98mnxsNL6GefToEX/iT/wJfuqnfuqX/fCh8Y/PnTt3+P7v/36++Zu/uW5xNBqNRqPRaDQav7Jcu37E+8otvvXb3sejszN+5u9sGUbfMyw6eWVmTCiKaf1hv0FIiTKNNZVVJUHdMxTzysqEsGLLkRlHtqPXjCiImifsENSUropDqz2OVh/DkVo3Ki4sQ0BKIeCyxEL0VsoqL6XKG9tXmbKff/PazEzBNNeK2brTaHWHEXGTInrYn4N9JWZBgWS++ee6MXoFZpVCLqHkIBiNXHcxYxU7hm9hige6gme3PIWVkFBcQEpAopBCz4svvp9PfvKfJo8jpycr7r45ME8jq2XPMO6Y9Qtu3HiO+XxOIJEk+hZj8nrUSELHgRISIUVCDBB7chm8CbYoyoTZiOaJ1C0ouiPnkZIHl319QgVS7AmmBIkUE7SMoJk87MjTgEpgPp/Td4lSRvI4QMzE4NWvBNfFIbhg7VIghjl917Ebt2zXE1Gg6xI2TWSdKFowqUI8QlYj1mpUU9huM9dPe8owkUsmRGGzPme3HUgh8uoHn+P//f/5A9y6/TyePzVCSFCrfC36jqEW8dcvGFqrWEUEohDMa1klRtAqw0UQzVXQh1qX6onWQICgWK1MDXtBHapcNheLhxJXrduIWkACxQqhyjRXl+YpTDVPatYGUr/kI2bCZjuxmQY22xFMOVp0HC87UoC+T6xswa4mQ82UkjOlZJfjlv1rI4To9+5khalAzlCybzqGGNCSQcy3T6uw3+8+FgrFQDGmcWScBrp+5oLd9FChavt9RoNcJkoZQe0gXcHqJiQg8T2JR/9e40lHrdeAHmpWbZ/9rb8mBMQUVf9m4hLU07v76lpKxmyqr3Uk1K/eaDQajUbj2SA87QNoNBr/cDabDX/+z/95/upf/atM0/S0D+cbgu/7vu/j7t27T/swGo1Go9FoNBrPCIJwenLM+z/wAt/ya1/l1Q8eEeqenJiglil5QvPkcgRPeoUQiam+V1gEal2i7LWcCUsCJ2IshzX66A678/uU3QYrxStdzUWOmc/CWa113G+8SXiSEjQrJOlqYrAmlErdiKtbhVKNTpIOsbr/aKAyYVWemtVa0pqwEnnvjxwUq1WZhtd+ejmm7zbKfq+uOlgRReJeVnrq0v2KESXVQJ1/Nap88zinvqeG1fcuQ/TNS2r4K4YIAn3qee7Fl/jW3/AdfOij38zJ6U0+8toneO75Vzg6PqWPQt91HB2fYjHU1ymDKXnYonlk2q2xaaK2z6L4rp5OIyVvyeOakgc/A+Ln07QQYiT1nqyjFPKwYxq3lDISuuibk5IIKeA+zuhiIgUo4+BSqXhaVQxsnAhmSJVrSSKdGGKZJELXdS64FHSq9aHjiOaMWPQ62GmiuFdCBIbR1ztjBEzYbtYM28xmY8xnM77z9/xLvPbxbwFcLnpDrrpITEDc1+AGl1UxEkKs6d8MQWva1SVk2H8dgRgTMXYuNCX6OQjiHy+BlLpaYyyElNxtUytVqdd6lYjIPtkoNenLYYsQXLwbULQmAsVrXFULuUxMZWQYMtMwkXNhnBRVF29eRxrQoozjyLDbMo4j4zSSc6mVp34sMXSeZAWmAkNx+eg1seodyFrlb5XAoc5h7tt7TI08jeQ8ebKx+LHH6HXOgUgxP27fexRPuppvXhat9b5mqGZMXTS66I/vPWEuKWtVs9Qa2zr7WiW138vFcv1W9SRybFo8zVmltF/3+f/cN9JGo9FoNBpfl7TkY6PxNYiq8pM/+ZP8F//Ff8GjR4+e9uF8w/DGG2/wl/7SX+KP/bE/9rQPpdFoNBqNRqPxDHDnziP61NGHnhdvXeOjH73Nu+884M7myuWgxSruCjkPSEwuKGpiDxRKlQTmJaeBREJZhh1HMrHIkd3dh2zu32W49TyLxYKYBJgf6k8D4tuLEgg1/ST164lFf7xQkCIuLzSTTUnS15RiTSwZFFGCim8B4kkysVSFZ024mfompOaD4BFJVQ7W/cXaiRksQDAXUlVyBq3mRTJBuoM8Ql1wmJontepWZgy9/1mILrv2Sc6654eYJ+asii7xjwspcv3m88znK6IZ5WO/lh5DYuDWrRc5P3uI5oHj1RHTuIWQoKa/UnD5FefHLot
yRmJBJBH7FZQCWRDpXLjmkbwbKTUV2PdLjEAuGSvFj6kYpgVlInVzpnGDWGCxOGK73bHbbQg5M+8XkHPdD1VMhZTmQGEatiBC7JeUMriUigu0FMYQCMXFaVattlSwrIy7HbvtQMl7W+0vl04uyJTC2aMti4Ugsec3/9bfyW/8Tb+NbjY/XB5i1NSrv1Y+uOmyWOueIeJVwInea1HNa4FNDCFipWAC0SKF4sLSjCzZa20luGfGkCCHKk8xq7ugcvgzU/WtQbyyV+qW4T5NWy8NrGSKuBBWnQixQzWTVUnm9bLzTujCkwrbIPujMErJ5JLJ08SYJ989RNg32PqOpR52SwuwKzBkGPfn2QwNnlv0eUdPQGK8R8x7xepu3JCnmq4ETzDu09FiT/7dDDGvX0X0SQK5bqpq3WRFn9xL7hg9TRpCfdODxDoD6X9uWN3NjDXhrPV18/O6/xjEvx+oFn/TRGoNRI1Go9FoPEs0+dhofI1hZrzzzjv80T/6R/nyl7/8tA/nG47v/d7v5d/4N/4Nbt++/bQPpdFoNBqNRqPxDc7f/qlf5PRoyWwWuDw/Jw+ZFENNBAZM9EkCy4OPqBVClWioEPfjbwhiXiy6YuJUJo4Eggi7y4n123cYnn+FfHJC6ueEoIhBlCfbbl6cGOpjJCK4NKy7iyl2NU1Vqpio0kPERYQUxEAkeZ7JRyo9tWW6d1YuHkM87EpKSJhlQvBKzCCp7uvtN/vc0gQzgnSecBOtcorDsZu4MPXkpICV+ql2kFIun/z5hioohVQllO/TSfA0nkggxp7l8SmvfPCjhPV9dHfO8vSUUrbce/ttSlaiGNb1Ll+HHdbPASPERAi+codAmB0h3Rwrkz/uuKMMWyiFgqI5oyglF5dwWiglo7lgliEGYhdcjJo/j75fMIsTl8M5WgoyjyyHiWnMdF3y+tfY+VakRLrQM+WRMm0wVQoZSFjxClKvzYRhtyOPYzVdcLU+x8pIlP3V5gnUaTLG7HWpMQa2u8Bv+I2/jd/2z/9+Tk6eQ1Ak7q+tWNO1LodVlH3hVjRcnkVDtGBWK1pr7SpkwHccQ703goVDYjdK59uluAwPCKpUQeYJvGCejDwMKdYNxUjn9aJBq1gT71dVdWGNsB8KDcG3TlNa0HVzYuiIXUff9xRTYhC6GD1NCyAd4wilKNOkxDjQzxZ03YyQOvJmwMzv94KLYvAdx5zdUU/F6Drfddwnleu8KmW/CWlee7yvYZ2mkWma6FKHmeFNqla3HHnS2XpIAR+GXgHq7qNLUi9VjQfpiLggZy8dq2z0e1qJEimqZJ1qHSuYdJhlr7yVQIqBkqf6+BEL+p5jaTQajUaj8SzQ5GOj8TXGxcUFf/AP/kE+85nPPO1D+Ybkrbfe4nu/93v5o3/0jxJjfNqH02g0Go1Go9H4BuZ//KFf4Phowem1GTEVHtw75/JypGSvy0RcNqCgoaBlIsU5HPJE2TcKzfcPI4EFyo0A1yVwhG/gjZNw8fZ9Lp7/MsvjY7rFkhgTxA7Ud/MMJYQnP/zXKglcgAqiVssnBZFI0IBpQWLyFJYVT5OZoezrEwNCdlsS4kEixn21ZdAqRHzfci+OEKsJTNdJhrk4kuhCRCJRYt2hrB9RdyXVCsAhySkSvDpSElbrHiV6ra1UcVVnBF0QSqx1sxETF72Wlflyxe33fYiwuU+3XIKOzNOcYdwRYiAaLg81HZbrLI+YdF7/qupGKXWEfo50HawvyNst2/Vjhu0lu6sL4mxB6mYIvg2Yp4EgQgyRTryCNcRAnjJRoQ89UQWdjMurS3QydMp0KdF3kb6bkVJfqy4hFKu7jx1dN8emHaUUdNyCKmUa0VzIOVNKoVihmLLbDXTBZTYGnYc8GafMbjBSl7hx+zaf+KZv4Tt/z/+L559/mRCs7oA+UVjiJx0ItdK3VvHuBVp9TT2Z55uewYyivj25/3hDsOzXie7zs2Lu0OowYwz1z2yftvN7iRBrra/LZtsnZw2/D0RqknK/eRp50tEa6mNFum5G33ekrvda1lIIgle+RqrGDJSFCzs1Q0tmmiZm/QxDyWXyZGFNMErw5zKZS8epVEFuvlkZzerion9OUNC6Q7lPNaoqJU/kMhLjftPVU5FS08dqdW9UCjUyfBCF1FpmMz+/Vu9vf+19X9P2tc2SPbVaBaeIv6lAy/hENO9PXZXEavvKVaoY9j/zaulGo9FoNBrPCk0+NhpfQ1xcXPCn/tSf4m/8jb/xtA/lG5bdbsf3f//38/t+3+/j4x//+NM+nEaj0Wg0Go3GNzCPHoycP5x496sRI7PZXjFMXv2ppbj0i54msrqvtk8sSa1YVR0QK0Sgo3AiA9ekcF1mRAKlSp/tlXHx5a9yfO05uuWCOOtJ/QysY79uZ/sEpQTE1IUiLs4kJKIaFiNqntBTVUhWqxNrPSMcxIQEMO0wySBGrEmzalMQiYgZtq89rXIqVGHoArbmoUJ0cRhqolE4VGiGKqNQPCGIEESQsN+oU5eohBoZq8/XfC/Rf8/llkurhH+xQDFP7nUhsrr1EnKW6fpEKT3Pvbgglx3ri0tEgq9Jdr2nxFJkLFs68cpU6xWddoQhoVoom0u2Z3e4enSXYXPF+vIMNLK8NkNMibFK2JBqNqynTDtiCpgGQoAYhJKtHr4hUegWPeN2zeXZBakTYpeIfU+04DWsMdJ3x34+qXIuGawvsQJaqnTME1MeyTkzlYndbkMSQbPVilQhBBdkpRgvvnSd3/DP/LP8lt/2u3jx+fdVQVhfJ/WhyL1I3lemAoTQuYyUWn1aa3ZdBsZDTWcMnvANFqGea02ewHMx57ue4v/Hv2aAYP66q3nCFwIihhJc9O3FqJWafk2e0azpwmKeEFZ8/xRVNLjoCyLElIhdOKSS97W+1M1W1QwUUozE6HuZZgXVzP5uMPFfmzwRcUWEKYNmUN1LffPZR3WXL098Pha0SnZ/Tl7Xq77LSt2qrEIRgxA8qRqEg8AXEVStpqpLTR6zV52eQBXZZxk9LVrFOME3Mfcbk/tks9XYtlBc8GP43qvXIKN53xv7y9780Gg0Go1G4xufJh8bja8RxnHkB3/wB/lv/pv/hmEYnvbhfEPz2c9+lr/6V/8qH/zgB5nNZk/7cBqNRqPRaDQa36CM05Y+zjELlKxowSWjCBLDIS0XJLh8UZc+IsGrV02q3BjpURYycBoyx5Lo6QGrkkcwjazvrjl745dIR0u61RFdP0djRKQjhoiWQjCDGLCaOqzGwBWm8B65ohTNFE1YkLqTB3t5iYEWl5l7mVe01DRlFXI1XSkiaFFE1OtE6/N08WHVfEwuK4OnycSk7kC6UHGhVMUJNR2nVsXKk6yoWKipR/OazZrClCozU+xcVNo+cSYHkTZbXKfs1nSyJUpAusycHlHh7PKKbEYphTHvkKB0IZJMsZCwUii7S8r2nJIz07Bje3HG+vyMq7PHbLdrTp97ma6bU0pmPpujJq6DqzhM4YhuNoMoZC1I6lDZIFcgoaPv58y7JduxcP/u24gU0qwjzZb04vWiEj2yKAgh9kQxyj
jVszNBEIplhmnn4nHyjUIJeM1m0Zoo9K1HAW7fvs7v/r3/D37Db/xnuXnrJfZbh/4CB4IbN1QgENAqgIVQhVsh4OL3cD2YV6cGiVigpug4JPISgbKXh1Kv1/2eIaEmHs1l4f4aqnuTXg1sh2vZj1UwifXD/HExCFbPvybQ4tuTaocqXcHFudWEpe4XJ614olRLTRTuHyugJfv9oVYTf+qfoy7o9tWpuXjtaslGSS40S31aZkBxr5vFKFpQFDPfuSy5eGo29lUCykGQFpvq/SyEGInBU4+qdhCxEgKiRlElxFjfwmDs76aaWa7PfZ8LVb/HqDORQTD1O1kpVVk+STIHgSIusdFQP7PRaDQajcazQpOPjcbXAGbGz/7sz/Kf/qf/KXfu3Hnah/MNz3q95q/8lb/C7/k9v4dPfOITT/twGo1Go9FoNBrfqBhYoe4RenpLCISQyOMEIRx+wK9akFJqaK92sZoCI52NLGXHqWSuSWQpsyrhBDGlWEElMI2R8y/fpz95g8XymFk/J6aOFBIWfautxgz9+NwhumAoXr8pvnhHkeyODyGiIJFiuf46olKqNPNKRoIQSS5VRTHdGxQXQjG6XBE3nId0lEmVjQeJuN93rLLDXGZAJIjVDTp5cp5qjatXegohesWny5CuiqgqTDH2e5Ceris1fRlrraUSjm8zXHyFxSwx71cEgaOT6+i7d7k6v8cwbEkpkUIkBGUi0JEYt5dI9OrTabdh3A5cXTzm7OFDLs/PWVw7ZrZakgJQgu959guSiIsjoF8esXrpQ5T1Obv1GUhgHmfY4/toHonWUcYRM1hPO+48uENBCaHj2rXrXnNrmZILqVsSoiEWKeMFRUfUjDwVSs6YBop55jLEyHK5ZDlLvHQLNkW5f5ZJKfDy87f4zt/z+/ht3/kv0XWdJxdtX4PqadYgQjGtoqpUgUjd+CyeygtPEnUShGh115RSq1ddBquWugEpROMgtOsjUTRDsDrt6BWuscrwUhObiFapXLtOq8DcS1KR/bVUiBZd6FFcUJrW66H2zlZ5bfu9xH3Cst5HLjSnmnY0t4UWahLR5bwLw31drFMwJhOGbOQszIrtA6L726MiByGrZpQyYVrIZWKaMjENfjuL+q7ovgNVzO+FvZiFKuM5iEgLHCpaqz6tgrIegypmXoVr7O8jq6LTk9G+oRoIap7iLtm/r1mg6Oi1twYmNdLZaDQajUbjmaHJx0bjKWNm3Lt3jz/xJ/5E23n8VeTv/t2/y4/8yI/w4Q9/mL7vn/bhNBqNRqPRaDS+ARG8mtFqlaRYJEgkxkSWJ1tv++01zLfcoiQsF7CB3kZPPMqGGyLM8LrVqh6rbKlJNYTdGs6+9BaL4+v0qyO6+RIJQgozLB5UoEsiIkHVK1b3G294agz1jbtSlBiS77WZiwyTvXzyDUXxSUok1FyYBd9FFPUeSg9nHmoe2dd6iksKf+p1f7Gmsrzi1cWW1ZScIcTwZJfPa0vj4bgg1DpIF3Khdrq6NIn18eqnihGInrYEikQ/l31C5zfZXr5DXEaOjo4JKTGfzTlX0GLkcaLE5MdZDa5UKVryyDBsGIeR9cU5Z48fIWnO6Y3nmfVzyjSRh5EpJlJKIIFpHMmidGXO+v5XsVKYxpFpnNhNO84enlGy0fdzht2OYdiCwThNPD4/R+QrRFOOjq8RI5A6ch6Qksmq5KkQ45wyXvix5yrGTP2cxo7T41ssPjTn8cUjvvjmA1584QYf+9D7ee0jH+U3/pbvZNbPCeabm4YLP6NultbEnJ/3VHcGa9Vp1dWm5vWd7JOA+0rWSLBw2HaUoAR1SV3USLGv8ktd7gFihpBqja4e7oQQ7JAaFIkuFAViqLW9pngPaa0mxa935JBn9LRmldSesjSe3Gl+TZrV2l8pLrIRTFxWqpWawgw1mGkuVItvMu4Tt+Apx5wha5Vzmff0rNZ/DVX2KVVsPtkynaaR1HVISDUu6R/j05iBIIEUI11MvhNJqDW3hVAlotZKZKv1tJhiUncvcWG7Tx+bBQKRghElEYK5uFTPQ6oWTynz5JqQg3QMhxRyo9FoNBqNZ4MmHxuNp8x6veY//o//Y/76X//rT/tQnilUle/5nu/h9//+388HPvCB+q7bRqPRaDQajUbjnxxiLiYU32sztO4U1h/qa0HEJYrvMAZ0HBFRbNrR244lO66FHacirGSGIuSaREqC5wRrsqqYkNXY3Bt4/MXPM18dMVsskBQhBqL0SIyUPPp+ngb0PSmrIC4NU0jkMoFErBRIycWHuCR6omNcggTz5KKa15m65kj/f/b+PNqy6yzvhX/vnGvt5vTVqRo1pV6ysC1btjE2tjGNMV9yAzGQBvIlQAYhAQYjoUm+NCOMwPVNriFAwsjFEGeAgWuDAeNgHDAYYnCDOxn3kq2+V3WnTru7tdac7/fH+659SkZyh0olS+uxq07VPnuvNddcc23VPr/1PI9BUU1oCLRxslEsNrKNaSQYeM0aDQy5M1RjaKsaDV6hiBbEYE4sgrq7EoMv4AGRDi8BNYMj0hb8OZCxf/vbQWetEY8EtRRRpVw6wKSecebsCWbjMYPhArtbZ2maiqqKlEWmrhNZGnJWmlARGpAQqKsZs/GInY0znHz4BKnocfW117I0WCBGJaUGorK5dZqduzcZ7e4yGA5Z2r+P1GTK6RjNmbpq2N3a4uTpB1k/ccL2169RUZpUk8mEHMkibG9vMuhFkMzCYBWaTCyVumnIQUhVTZrWBpNztsebRFNZ5GqvKBkur1D1B2zsbHPDM57BV1x3A0sLC1x508s4cORSNPk5cxhFVojez+kuRkN2eR6BqqqQHdkF6+O0ntG2t/EcZ54IQiZSQMHcgajZvKqogTQN2UGnei+oQ62shg1FkdBzwI0toDbwU6KtV4LBPlUyjTl4pXDXZe0kHWIsfe21CvPrxLoV225E5pZFRdCcSanxeFchqbkL8xyS2/JrgKQWvWqVr62jc87ez2GRe45Tc1EqKTc2TzmhfhNDG5lq4D5CtmtTCISU/aYAe64ECBrI2jgYVOvubPcree4mbd2Pdp6YH4u5lD0OWdXWR0qoZIrQI6lF+orqOfPYqVOnTp06dXo6qIOPnTpdQFVVxetf/3pe97rXXeihPC11++2388u//Mv85E/+5IUeSqdOnTp16tSpU6enoDINIJgBy+CLouSUDGqkGgqDOjEEUlUhDlBCs8MiOxwKI1YkM5QhIMy0MWeW/zA/eNdbu20QNEV2H9xgY+nTFAtDpBcZFocN0kWLsUQcSmSPQw0GHgNCCmEOIutU08s9i9FsnW7u3LJ+RoMWOSdiKAyKOcQz3FfaPCjmJhN3geaEklB30CHmxIsSEAoQd805xBLvsGv7JEHJAaIUuG3Nto85OMUfNHhkx2MmRWXeOCnuftPk8ETcwRhY2HeMZrDKmYc/TbN+P+PJBAjUmphVI3Ku6ZUFdTMlUqBVQ65rqqpma+M0Jx++lzhc5tpn38jS8hKF4n2fBaEHvXKByakTbO3sMptVVLUy2p3QG1iH33i8y+bmBts7W
4w2RkzqCbvFDgvLy0gBsRwQi0CvKCn7Besb6/T6A6KUxH6fLEJdz0hNoqorZrPErKqo64q6npByok4VWTPDfp/F/hJC5MC+QyyurBBCZP/FV3Ds8uttLUj2+XS/YFCCQA7BEoLbCFwwGC62mgjisafRYBnZGLMKSStiLBACSZO91ret2aNAxd2NYv2nc/gdlUh0R14mRAOKZqo0d2AIhSUXa0AjrT3X403V143HgpLNIasRCeKuSnUgKCSsu9KMnoXdIEBGhb04Vx+LeTGDw0w7LlV399K6/5QaoU7mfkzJYoidp9u1M68mVTRZ3+M8dVjNZZhz9tpWe0/AQaOqkFJCydSpsfH4PIq7J3OundCLu1EDMUZ3MDK/htSspAZ9RS3SlsYBrIN+UY+0lT23pOx12CbN8+d26tSpU6dOnZ4e6uBjp04XSKrK29/+dn72Z3+W6XR6oYfztNUv//Iv823f9m085znPudBD6fQ0loj8GfA12hb+dOrUqVOnTp2eGgoel+jxiYa8lFAEUmUuMJJQSJ9c12iqEWoKrVnM2xxkm8Mhsxh6ZALj7M4lhNwGqEqmIJJREuakBKGZFGzf8zD9ffvpLS5SDBYIZUERAhqV2EY5SnLXpXfLiSIkJEQ016hCyo3DHXM2mfMreUTrOQBGLPJRKAyg5GTbd/gELYxQLB9UPCmy2YswlbZ/D2jjaNuKR7M50vLP6JGpbuNEWkdXGxtpBMq2HbxvUKKB2nkfZdvl55xMWiNeoL+0zEXXvICUKrZOPcjWg58gFgHpFaQmMZs2hGDRoGk2oxpN2Th7ipMn72Xt8CVc+RXPZm11xVx5IdDkTOj1SXUi5swlV1/OxdeWBCLVtGL77AabZ88ynUwYj8dMqilS9OivLlEwpMk1CesplKCUgwFFf8DCwhAdDtna3KYoeiz2ItSJ3CSIfbSuGU12GI92mNZTGj/PMRbEnBgOFlhcWPZYVEhZWT54mCu+4oWUvZ4BrkIJqXUzZo/vtA5TM+XJ3AmYsefMIbUEdzziZy7M+wPRTJJMlIJMjWhBTtndsAbWpEXG0SBZDMU87jUIe+cUIQcHfBJAG4jRgHK02OPs7sEW3AURiNYlqSlbp6HgsarBQX07blsXKtn7LA1LtkTeGZ15gv01reMx52TXaG5ofbeKUmeLX80IUXXv7QLIAQe79tykjYNB61vM2SNsbRJ8Lbd43V/o7xNtjmtmD1iKWAyrda5m76l0p6jfWKC5sc1kczuqR8u2vaztldle1+rnPfgc2z4U0WzXdacve4nI5cDdwK+q6ndf2NF06tSpU6cnszr42KnTBZCq8slPfpKf+Zmf4Z577rnQw3la68SJE7zmNa/hV37lVxgMBhd6OJ06derUqVOnTp2eStK9P4hGgkYkW2RliAUxlWZ1agxIiFaUOmUl77DGFgdiYi32KEWYuVErEPYiSwXrf3SokBRqtThWCMy2la3bbqc3GFIMhsSiT1gOxNDzWEvfiFhvm7qjytyMdVuNh6r19sm5/4slSDJIZDmQ5jAMJeSEAQxBKFEai170beCJlSEUoAmR0gBFGzWJwSgV95BJG9sa7TERNKc5bHRrqT0ngEq256q/TiNBink0pkhAc4uAIiJ7MMYyJ20M+GNlGFAWJTGWBI+ltM0FQgyk1NCIMqp22do5y6XXPZOLr7iefr8wB16vR6CkCAZzYrFMXU8pokXQFsWQYtAjFjCIBdtbGxaJ2VgsbkqBqq6QWohlQRkjZdmnVwzp9QaEUNDr9WE4ZHt3h9Ar6A2GIJGUGqbTKePJLlXTuPtQUW3clRfIVSYNGpqmARGOXHkDVz3zhQwXV82pGMXqEinMuSvV/BzSgix8LSHuWg3+Z2w/bueTYH2lqhmIBntVSVoRQrC+SE3mLmz7NCVAyqhkYjD0bufRo4yJ7viL9BxutrGvbWQvuDsvp7l7MAQhJxtjEEFDcCAZiSEaHMUAaQgG2I0zesekqEUKexxx6xLM2Ts1caCvCc2ZnDIp5/nbgvr12iRo0l4kqyewzh2QZlTWeVxwm/japClN06coBE2NA+AWVQopNxaFDHOnNCgxeDysg0fNbdeluY/t2kn2uNj7i580NCfvZS2QbE5ju0kgEGKPhH1fxbsp/fqTnGlvFejUqVOnTp06PT3UwcdOnZ5gqSqnT5/m537u53jXu951oYdz3rSwsMDa2hqnT5+mrusLPZzHVM6Zd77znbzlLW/h7/7dv0uM8fO/qFOnTp06derUqVOnL0BBov3wX6wnLnnqoKo5BINEUp6SdEYZSkpmrOQd9rHJamhYCSWDYHGIoY031EwiO9qxxyLQOHycV9EhaI5MTlds3X0n5fIyZX9I6FmfYiwNMBgeKEhi/XEhRFAhxkBK2Tr31NxV6tGnSLYoTvUOPwl7kCclshjAMc+TAY7QdlqiEHy7WR0UmjtLxOIrYxAfi+1D3DMaJJNRRD2SFcgeA9kCE4ObFu9qQBGkMIgX5qDIARhKJEBosYhHu5qv0sGKwbVMJuUapUcIBTEIQQNlr0cIQmpqZrsjjl15DasHDpBTRV0b3Aw0lGXpsZnu3it6FGWf3nCJUJZUs4m5BRPExT6LVUXdVKS6odHEeDKiqmskBHOUqhJI5FRR1RmlZGFxgDTK9vo6S/vWkKKgbpTpbERTz5jMdqmmtYPVABoZLiwR+wXTumJp/2Euv+65XHz5Myl7PVQbj6HFx97468xJKipIMIjVulJBCRqt69OdrsEuADsvbUSwmBuOYPBaSb7GA0HUXXQgYluXMD9D7prEooqzt4+Kbb+F6SqRUvrm9NPaz6UBRetJ9LXhTtsQhJSzO3hb2C0eIxrJOdJCbnMItyDtnAhf8WP1x7Oaczin1rGbaXkm/qVWcz/mc67dtpI0ZGjcoDx3Oir+AGi2DtEYe+2dA9a36A7GLOY2zLlxN7AfizZzVyRwDsRX/75d38Qwd7kaMrX5FT/23LpbNTNnpyIIhe/TbhLQPfL8Jb6TdurUqVOnTp2+HNXBx06dnmBVVcUv/uIv8oY3vOFCD+W8qSgKXvWqV/Fd3/VdvPWtb+U973kPt956K7PZ7EIP7VF16tQp3vCGN/DiF7+Y48ePX+jhdOrUqVOnTp06dXqKSLOFoWaVPUdQth/gZ81oboAZQWFAZp/uckB2WZDEQigp3ecYSESEUqCSQNA8d0UFDB6ENvrRkU8bcZrqgt371imW76QYLMAgshAvooglEiNIRN1JafBPEDVnJlKZk0uzgaYYCcGhkVhPZNDo8BAM2O312okEc5oF2XNwqTvigvo+o8dbWixrCHEOrVAhesdfe2wRi39FIKHeQGgQFYnmOkORmIgOXKRFtRL34jRjG69q2/ZGS3OASUbVOwVb0KKZlGpEeogEYiiJUej1BxRloJBV0ixR9vpU9ZR61jAdjekPM+WwIKUGgLIYEns9RKCMfcr+gP7ifhaWleXFA2jOVLMx9WxM0zRMZ2PGkwnj0Q6zekpdNTRNbTd4KtSpos41u6NNNjcaDh++BEmBpA1FLNFmxrQaM6sqczYG
wQyPBpEOX3oVX/X1fxtVpSz71qEJDgiDw6wWxIX5eZ3HoSaL4jX4lb11U5HQxuwCEt0pmR3OGVHToKRk0brtCoJMzlBEOyOIryZxh6Pi8bC2nyhiXaZO7EQFDQan2z7ItpdUHNy34Dl512ekoGmjhbNnhp7jNMwe8Wps08CmtuPV5EBW5/Gnqu7IlHP8oOIRta2712GdGZ+VVAu9AnMgu8uZaF+TXwu27CPtLPsModqQNKHiUFQVzQ2SzN/oJZe2vlXn8FPaa0vbc9rCY7teck5IFEQDoskwcgByJOVECIGUkxlN/fwHlCRQxDiPdw1AjKX1Pnbq1KlTp06dnjbqbjvq1OkJlKry67/+6/zsz/4sVVVd6OGcN62urvKDP/iDvOIVr+CnfuqneN3rXsd//s//mW/+5m9mbW3tQg/vr0hVefe7383b3/72p/R5eapIRL5SRN4kIg+KyExEHhaRPxaRv3vOc75bRN4sIneJyEREtkXkvSLy/32Mbf6ZiKiIlCLy4yJyp4hMReQzIvJPznnePxORT/g2HxCRnxB59Ft4ReSFIvI7InJCRCoRuV9EfklEjj3+s9KpU6dOnTp1ejJK2YsrnHejhUAgQGogV4TcUOqItbTBYXbZHxMLoSB4L54Fre71qmVV74jLtFiu/cdIQ6ZWgyW2R4M6zbRgdM9D7DxwH+NTp5ntbNPMpuSmdSVZtKlmT7wUJagYiHLDUz4nmjTEEpFIJBKCOFyRvejSc3oXEd1zg7VORBGPqRSCFAQK2hGrwxELU3XP1Hz+xOMcC6L0zJ0lAiFiuZgerRl83HJOR2QIhBiJsSAEIcZICKXhxhgxw1uYm7MMQjocDULZ6yEqpFTZ+YyRsjeg6PWIRUksC1b27WdxZZW1/YdZXFxmd/0sf/nOd/KX7/wL7r3lM2w8eIbtU+uMz46ZbUypx5nZ5pRqZ4I2kRgXCMWAXm+JXm+ZKH2C9pFKKKVHyRDJkdQI9TTTTDPNNFGNpqQKskQGywv0BwOHdO49TYm6aZAQCWVBKKOBcRFW9x2iKHoM+ot7vZ0S5uAMCR4BuhcLigMt1UzbWC4+V23Pn53jYG7JtgxxHsUqnitqDmBzB9tY1ceNyvxp+H6CRHPmOsCzbSk5Nw4Z1WNdo/stLQ5UpHXDZmIIDjrFrjGHlEGDQ8Pkbk9xC3GYAzxpY4CJFrnaXpctmPUbAJxuei+lwciU6vl258P3a9RiVy2CNfvbRXZAaUvfLJHZo3I1JwODCAlzEDvdpb2xwZyh0eJQg0FFu1xazOtjbQc7t2TOd2pwUrHI1yA+t/b8INbZak7o1ilaoOKoX/ySDGI3DJA9xrbTU0kicr2I/E8ROSsiIxF5j4h846M8ry8i/9o/S4/98/m7z/0Mf872VETe+Tn2+QkRqUXk6Pk4pk6dOnXq9Pipcz526vQESVX5wz/8Q/7jf/yPbG1tXejhnFe99KUv5fnPfz4Aw+GQ5z3vedx4441827d9G/fccw9vf/vb+c3f/E1uu+22CzzSPW1vb/Pa176Wb/qmb+Kyyy7b+7Dd6UklB4GvBRLwVuB24CLg+cAPAL/lT30t8CngXcDDwAHgbwC/LiLXqeq/f4xd/CbwQuAPgBr4duC/i0gNPBv4LuBtwJ8C3wz8ODAGXvNZ4/zHwH8HZj7O+4FrgO8F/paIfJWq3vfXmYtOnTp16tSp05NfFldojiEUohTWfwagCdFEyYxVdjggDQcj9CTQKA4QZZ6OGFsHnm/XXIatx3DeIkdSpQZ6LbxBQAOT9czWnXfSW1pmsLBE3etTFEoM7nJUcaemEnNEggMGrUlNTc59iNZnJ0WPQixuM4SIJnM0tS7CEIPHY4JqdNeWONAzv6KoORlTbmwbKhTB3VLBAaYDwBAccITo0MviVsGAVOv0VJQQHFACSLReSRFCaHv5bJ5EfNtaGnwKYe6Ui1KS8O5B31aMfbJDIIke7amQVShDoD9YZNBbIqWGuq4Y9Bc5eOxSPvq+9/HRj9zCdc94BpcePsrq2n6WDxxiUA4Z9lfoDfr0epsUZbSTLiBFyXRnRJVqqqpmUo2ZTEZs72wxHm0znUwZT0ecPPUAp9ZH7F8sec6Lv5LLr7uGldUVtsczFlfXoIDZbEZvsESIO2gDaKYIBRXmWFvZ5y7YIHsxmqhF6radfiKQ9yCXSCBpnjsTzQOZ2kpNXwctDAt7cZttD2PLIjVAyG34J6DEGG2taDbHrOR556C0RYjuUW2diebAFTNfkoghksxaaa/F4JlmA4MhmiMXDTSpNkomIFl8qA5agwDegSjmrp0DQcOW7mhM7d/23JY+L+aW9fUc9kJZ2+PNQJ2hUqWfhXzOd3EYCdabab2MyZ2KQpMTMdWUsWdPChDFIp4bDLaqnxBxoi8CUaK/v1jPqgaLLM4OawnqDm3mxyLi8DLnuWNSc+sqTe64bGgjcO2GgZYRi/Vr6t7Rd3pK6ArgfcAngF8CjgJ/D/hDEflOVX0TgIj0gD8Cvgb4NPD/AAvYZ+03ichzVPXfAqjqpx08fq2IXKuqj/ihkYi8GHgm8GZVffiJOMhOnTp16vSlq4OPnTo9AUop8aEPfYgf//Ef5+67777QwzmvCiHwAz/wA5RlOX9MRCjLkqNHj3LkyBFe8IIX8KM/+qO8733v49d+7dd4z3vew/r6OuPxeP4h5ULo4x//OK997Wt59atfTVF0b49PNonIDcAvANvAS1X1U5/1/UvO+eszVfXOz/p+D/hD4F+LyC+q6oOPspvL/LWb/pqfwT4g/RywCTy7fZ2I/AfgDuDHRORnVLXxx68FfhG4B/iac/cjIl8P/DHwX4FXffGz0KlTp06dOnX6clKQ0uCNGmYQhEILkgZ6ESTNWA4TDrLDPikYSmHgQL3nzn830NLCNmh9U4pByYgQNVBpIgsGjgSQNoJVEI1MTozZXrmD/tIyYdBnoSwRSqRfkMk+TovZDFIAyWNXgZTRaHBIcyYHAwooBg+D+n4E1YYgBfpIW5wbqqKDJ3c/ijstCaScDYKph1oGc62pZObBnP6bYi7LjMWtIufGg7pjUQz0hrC3/xCtVxAyzjR9nDqPwUyayOqddZoJBMr+IkW5AJopewuUsSRpRWwKpLdosaWDITFbJG2azThw0RFe9oq/CTHCwYs5/qybKOqK/VcfZfNT91H3KmbjXeKWUJY9NCWa2RRiQZUys6ZiWk+YjEdMRrvMZlNqnbGzvsmDpx/i1rvuJRYDnvW1L+LaZz2Dtf37yLmmv7LE2kUXM9o9SyxnILuoBIJEmmYGRUTJFGWf5bV9ZE2EHBEpIDeotDC5ICfrl8ze3Zk9OtNMpQYV7fcCDZmoDmYxuFVISetozRqIhUcOq86dq5qT9TzO43rFo00bW0+iSAuSVb3b09x85ph0m52DSFWZg9EW/gsZxLoeA3HeixrdSZkFpCjQbEC9kDiHnhnrXbU1lck0iEZfh+ohqHsOYFVITYM2dk01OZHUfi7Qujv9wgAgqTsfo88reN8l832gmawW/ap4P6XUpBRoUuVPzt4d2cbd2utat7G
oXVttg6ONOsyPAqyL1UC/EMxuavBS3cEcWuiYQSzG1dZEgHN6V80Ba+8nhIgm8Y7ZTk8hvQz4z6r6L9sHROS/YUDyF0XkD1V1G/hRDDz+IfDN53xu/gngg8C/EZG3qepf+GZ+Afha4PuAH/usfX6ff/2l83RMnTp16tTpcVT30/VOnc6zcs586lOf4sd//Mf58Ic/fKGHc9519dVX86IXvegxvy8iFEXBysoKr3zlK3nlK1/JPffcw9ve9jbe8Y538JnPfIZ7772X6XT6BI7apKq89rWv5du//dvnzs1OTyp9P/bfrf/zs8EjgKo+cM6f73yU71ci8v8AXwd8PfBrj7KPf92CR3/NXSLyHuzDz4+eCxJVdVNEfh/4buBi4N5zxlkC//yzAaeq/qmIvBVzPy6r6s4XdOTnSEQe643k+i92W506der0eOrP/uzPLvQQ5trZ+aLfXjt1Oi8KBFTN7RQQYpNRzTRkSq0Y6g4HZJf9EliMkSiAA0eBOaRogwsLjzT0IEhUM5mIIN6bZ9GryYNak54L7KCpIjv3rdNfvY9iYYlycRliQSgKYlFYJ5tRPY9RtOzHnBtyav2FmUjPIFDGACGJIMUe+PAOR6UhUJBzY+4niQb1dG6PMwCS7U9BzH0o2UxzoY2BnAfLhj0XlzsrkdbpKHP3ogFRsajIaPMTpYWlmSARCW00J46PDA61u5rHcbp7tNdfoOgvIjoh1VN6ReGRlh5b2c5N2aNU68lTbTh05SW8VL+Bk2fO0j90gEEZOHT8CobSY/myKxg/eIrZzoiFiy9h98SDnL31VnqrA5b2r5LPbpDXM6uXX8zo5GlkWHBm40FOba9z/5mzHL34Ml7xsq/k+HVXMFxeRCUAkUMXX0nZKxhtJ1JKVFWDSoQQif0hVT1lNpuxdugYi0v7aGEu6k5W9RPgzris6oDW3J57fYbW/ag5zZ192TGciPcEtmA4tTDKz53/z1yn0RysWUlZ0ZAtLpXCYn79mkDd3SqFw2Tvb3S3YiQa2JQ2RTTZPsVyTItQOnjLSLao0eDdhCG4YzE4FG39tNrGATvATpkgBRaB3J562dsP6jG1wfaVE7lJNsMiCHG+Wmz7QpOhaSAV9iFi712gBYTM30csFTVBjmQRUsqklAx2Zo+wxQCoZgOGInsRqi10VzGQnFOau1BbYKu07wPBuxzt/SW561P8ZgEN3vmJOTOjFBAKi4T1zFyhsBjY+btap6eQtoCfPPcBVb1ZRN6AJQa9CvhV4B9jK+pHWvDozz0lIv8n8D+whKAWPv5PLL3ou0Xk36nqDEBE1oC/C9wJ/MnnG1z3ufmJ1xfyWaD9N/qT6XPD00Xd3F84PZnn/nx/bu7gY6dO51m33HIL/+bf/Bv+5E8+77+NnhL61m/9Vvr9/hf1mssvv5wf/MEf5Lu+67v46Ec/yvvf/37e+9738u53v5uzZ8+ep5E+unZ2dnj1q1/N61//+idlP+XTXF/lX//w8z1RRC4D/n8YZLwMGH7WUy5+jJfe/CiPPeRfH+3DSwsXL2EPPrb0/WtE5AWP8pqLgAhc+xjb7NSpU6dOnTo9RdS6HgFCygRV+qWyPCxY0wX6O4HerjKkpMQiE5PHkc6dVOw5IM+FOrb9NrDSQZ7HKqa297F1Ps5pTKDaSWzdeQ/l4iLF4iIrZSSXJQRBxJxPIhbBacmTQs5K1hphYFGs0oIk8QjGSNCIoISQzInYxqa688riIhWR1vnlkEpbIIU7tfygyWgSJJbuKAtzEIMY3LEuO3eFavQeOvWkz2BdkHPI6Y4tEQgenxrwcZq7LQQDqtmjLS3UNoMqsSgYLK1Sb+0ym07o9QYU0ZxeqpmkM2Iu3WWZCbEgJSVQcvCyy+j3B2zddw/ThR4XXXqUxUsvJfb6LFx8hJWiz+LRq9l/+dX0+yVrl19LMeyzdfohtk+fplxdY2PfMsXCAnf90W0sLi3xN7/u6zh++eUMVxeI/QjZoNvi2gEGiwvMxjvmHsxKNZ2RmsogFEJO5mLcd+gow4VFQijmoCvnDBRo9n5BtRtq56dIgrv+DOKpqvHaYGshhthmBpNbNyLBIbWfB01ILNGcaGM5DfyFOQ4WB40igRCiO1wNRNpQWgBqx4J4Uuq8X1AcihkoCzgcDIGUmddsZNQhe5gfk3hpofr6t+0rQiYGd+tiMFs1OKBWB87qoN2gqoSISPKrVG09+UTMkXUWUoYmm9Ew+hoXW8rUQKM1VnPnsgABAABJREFUKSVzLmqyuWjnNBvyNfDuUbnuXg5B9mJrg0B2x+jcHeo7C3b8NiS/OSCfA4wVRKy3MWeZg01i9EPKiLYhsfb+FCjmju8kQozdjyCfYvrLx7iZ9s8w+PhcEfld4GrgQVX99KM893/71+e2D6hqIyKvwypOvg14o3/rH2Kf6/+7XsjIrE6dOnXq9AWr+y9/p07nUR/4wAf4kR/5Ed73vvdd0DjRJ0rD4ZBv/MZv/JIiS0WE5eVlXvrSl/KiF72I7/iO7+BTn/oUv/u7v8vv/M7vPGEQUlV517vexVve8ha+53u+5wnZZ6cvWGv+9dHiUucSkSux+JZ9wLuxmNMt7FPw5dgHoUcl5Kq69SgPt3dnfq7vlec8dsC//ks+t5Y+z/cfVar6vEd73O/svOlL2WanTp06PR56+ctffqGHMNfy8vKFHkKnTkAb75iMKpApe5mDx5a4+MhBVtgln87sfmpEM2pjRXXu9YMWLraA0UCAuNMKf16LJZ2XzMHlXnecgRAlOEQomK5P2b73XsrVVXoLA0JvQChK27aDQpHCzW8yd2/lxgCRRHdRmfXQxqHZYi5FiKFvMEkVyW1gZzuygqy1OyfNLbfXBRhIuUEIhBjdrdUAHgmqdsRZPCzSnYk22dHdVhkkzuETmlEp5pOq7W9ijjFV67drI2KbugaEECJNqhG1KFgR6C3sY7R+H0N69jrxmE8RcpPQQklpgoQCgtMkj4EdrC2TZzX1eMrmLbexdtnV9I/uR/olIQTSeAdRZfn4pfRXVhGE/sIa/dWaUJakXLNx52fYVwy48qobWFhcQntCTaLsL0Io6JULLCyuAUpqEqnJVNXMzk0IkDNN09A0DRIDB48cN4dqVgOuBIRA0sZctZk5gLRIT3eOkm1tq/WCisNom+AWzuGr0hQlkNR6EyXK/Pni20HU415tNRu0a89fILhTT8VicNvYVSVan2BKdq2JzEF4DgbvUm6vKJkT+qCQfY0o2devGHzO2YGeOQ51/tKMGwtbRIpIe6X5letu4RZu2p49vjir94Y+8n0iAakx96NXlNrRRWx/iscZq4NVj0pVi3OV3JCSemRqC1UTKpmkjQ9W55A2a+PnG99BtE5MO2Cbc7VzH2JJ0EAWNRdksK7PrGLnU9PcIa3awuaISm1zqgaNIX32YXf68tfJx3j8hH9d9V9gTsZHU/v42mc9/t+Bfwf8U/bg4/cBFfArX8jgus/NT7y+kM8CrfPryfS54emibu4vnJ7Mc3++Pzd38PFJqBZSffbXVu0/YmX+r/tH/rnThZWqMh6Pectb3s
JP/MRPcOeddz4twCPAM5/5TC677LK/9naKouCSSy7h2LFjvOQlL+Gf/JN/wk/91E/x+7//+09IHOvGxga//uu/zktf+lKuuuqq7vp68mjTv16M9TA+ln4EA4Dfo6qvP/cbIvIdGHw8n2oh5ap3XHTq1KlTp06dnqaSIFBZfOhgIBw9vsz1NxzhkqOr9Jmy82CPk9UGW7c8BJW7FTWTzolynDsEJcydiHMXmvmdHhFpmBWSZJrWPSl7DknciVhXJdv3n6a3727KpUWKhUWkDBBLj7+EHDIxBEIBuVZSbki5Iar14hENENkIQUNEJICGeZ+eOrhDIWd1uJrmY5W454zKNAh9imhdkWa+8v7GEFCsR9IgY54DFdxdZiV5GKScQ1FHYJrICZpkQG46mzAdT5mMZ0wnU6bTKePJmGpSUc0q6qamqWpUIAaIhcWrDsvE4RVlYdDQpIZQRJqciWRiiORcgfSQkMm5JuXKwEsIhEFJuW8RbZTqxFnWT/4ls6OnKJYHFEsrDFd2kVhQT0fM0hnSrGa8uUFTTcipYXrvQ4TtCRetXkQTYHO6QZCSxf4KEgsIQtnvo6mhmo6ZjMaMRlvM6sb6AYP1KNazKbO6prewxoEjx0iptshcFNXaHKZqa6btGMzaRt/mOQxUh0p7MLdd9Qa6aSGZlWqiZNqGSBX1KE53LEqAbHBaNYG761pXnuRkzwGDpXPwjUFLDe7qs++DdZ9qVouiFQdwWcwxmBu0hYO57Tu09WoH0/Y72soMGjxS1W8mQOfxvmiYH0eIdo1KMFCacutQDvP1ee5PBtopSxnqBE1WCoXUgvO0x+7aCNc2BlXVHMkBg+A5mQM3p0RW+yUCRXs9iP2sQrU9Q97z6udQ3N1ozl/23MbZt+sO4SiFOapDImZzhKZsXZOg5pQmITmAZBu3Bnetxi/sjbPTl4sOP8bjR/zrFnufjY88xnOPnvPcuVT1Qa8reZWIXA/sB54JvElVT3/pQ+7UqVOnTk+kOvj4BMr+oadMp/bhZjabPeJXVVXUdU3Oef730WhESonJZELOmeFwSIyRoigYDoeICEtLS/R6PcqypNfrMRwOGQ6HLC4usrCwQIzdP/CeKI3HYz7zmc/wS7/0S7zxjW982vUNvfjFL+bQoUOPG6wLIbC4uMjznvc8fvM3f5Pf+I3f4Kd/+qe55ZZbqOv6cdnHY+l973sfb37zm/nn//yfMxgMzuu+On3Bej/wfOD/w+eGj1f71zc/yve+5vEe1KPo/cDzgJcC/+sJ2F+nTp06derU6ckqVSJCrwgcuXSJZz73OFddfTH71xbIs10CM2bbm+TtCaN7z6JZ5mhurxsvI5IJ2ZyUhjqEZLiI6FBSdO/xrNY7l1X2wAp+06oDnWYc2brrAXpLa/QWl839uKAQC2IYEIgW7+iwSBASCQ0WJ4m0zZMGRufdiyLzOFYJ5ipEhBBK0Aa848/iPVu6YrAwkcwGFqwz0WyNLWQy26dFrbaAMRDcKmbHZjGcIRRoVprUUI1nVNOGna0R25tb7Iy2mUzGTEYTqromNYm6mtKkhlRXNNkAjuZMSrVDNtCcKMtAvnyNQbnIcFgQg6ApEUNEEgQpKKW28yfu7DSKgxAoFhbQA4LuzIizgNTCcHgQSUL98DqznW1UEzMephlPmWxtkmNkNhoRKgOeu9WEJjSEfo++9CjK0pyV/WXKYoAmaKY11axiNq0YT0Y07k5LdUM9a8hN4PAl17KwuIoCicaglLZ5oDqP3XXv4Pxxx1IOmHFHYvKVCS1es/MRDJaLwTIJwf29GUkC0dyA+Bzbeo8oSqMNkcLSP1GCZsTjVy0h1KAbbSdjC9tz46zNIKRmg6XtVQUWT2vOvtxepgYxM7aeAItzVV/XNjdtbK3hueAuW/XjsmOX1rbo16R6BG27vyCffWOyXUVNUlI2R2Gr3DqRFUh57tBUzCWq7PU2WhSrz7u2wL9x12rb52iNl6GNwXV3Z/A1qhJBGjvG9nUCSWtEortgs1/nbYSuwdaoBkglBEIWi7fVBpGCnGuESErn9zN8pydcN4nI8qNEr77cv35EVXdE5E7gShG5RlVv/6znfq1//ctH2f4vYL2R/xRLNQL4pcdh3J06derU6QlSBx/Po1SVnZ0dHnzwQR566CFOnTrFyZMnOXHiBOvr62xsbHD27Fk2NjbY2Nhgc3OT3d1dK7n/IrW0tMTy8jJra2scOnSII0eOcPHFF3PppZdy9OhRjhw5wrFjxzhy5AhLS0v+ga3T46XpdMonPvEJ/uAP/oA3vvGN3HbbbRd6SE+4lpaWuPHGG1la+pKSJD+nxH+Y8Pf//t/n+uuv5zWveQ1ve9vbmEwmj/u+Wk2nU37t136NV77yldx4442d+/HJodcC/wz49yLyR6p6y7nfFJFLVPUB4B5/6OXA75/z/VdiRfbnW/8Ni4T5ORG5XVUf8YYgIj3ghar67idgLJ06derUqVOnCyghUwRh38EB195wjGuuO86xo4coysxop2K4fx9rl16Ojiek6UeZPrxDVgMlbeOjiOEhb4JDUAcC86BHAlAIRI00JN8z862oWKCmYx93MUVm6xVbd91Of3WFcjhgEC8iDBaQ7A43EiFAJKIIyeFTzu70EzUg6qBJKA3QRDVHmERy0xiLycmOTfaApNnFLIpSvCux7Q4M0TobLdYzIEH3nHQowR2FEgyA4ewxZ2U2TYx3p6yvr7N5dpvR7ojpZExdTUgp0TRT6qpx4FMjZOqqoW7MJZhSjSb13kODRYFAqhMPnNzl4P6ChYUJvV6PGEtSqohi/Y85t/GYEMsSHBrXdQXAYHmRYuUQYZYZLhxg8eARwuKQ6tRptDEYND79MGnaUA4X0aJH3SSqasRUK6qQySEw6Pcpez1CEGIxoOz1ISt1PWOyNWYy2WEynVFNZwQJNJqomoY6NUhRcPm1zzWH3jyS1FbSIxKYxABbaKNUFbSFcXjnn7bzH9wpKKgmYmj7PtWBmsd+OsZua0iDQ2tL+0xz1tyaLbWFanif53zdq/c72voJ/pkx20UyvwLFwaHm5PGzvi918K1zryNoIudECKW5Pn3dt9DSYlkd34mg6oBTgfk6NRfnntvQYKGIUIRoWarU862CeoQpFnOblCRCsCVtLmjR+fzNU5XUnNVtk6TFB/s1pVhsMRENe5eajTOAJLvWUEK0NaDZ4oYlCDlbfC2aydm6I9sW2gzgPa4SsB5VxCJzs7sx3W2Z/WdbbURu95H6KadVrJdxXjciIs8H/gHmZHyLP/zLwP8F/LSIfJuqlRaLyEHg35/znM/WnwK3YclFA+AzqvrO83AcnTp16tTpPKmDj4+z6rrmrrvu4uabb+bmm2/mrrvu4tSpU5w+fXoOGO0Oz8dXu7u77O7u8vDDD3PrrbfOH48xsra2xoEDBzh06BAHDx7k+PHj3HjjjTz3uc/luuuuY2Fh4XEfz9NFqsr73/9+fuu3fot3vvOdT4gj78mqyy+/nGc84xnnFWzHGLnpppt4zWtew+rqKr/yK7/yJcH6L1S33norb3jDG7jhhhvo9
XrnbT+dvjCp6i0i8gPALwIfEZHfA27HIlZfAGxjd07+AvA9wG+LyO8AD2ERLd8E/Bbw987zOD8tIv8Y+wD1KRF5O/ahqQQuwxyRp4Hrz+c4OnXq1KlTp04XXkWGwULg+LX7ueq6Kzh27CKGy33q6RgQ4mDA4uFj5GpCPdqm2vkEeVu9803dryQEIlEy0RGMzP9nP9SPEggq86jEbI1v/nuk7eHLHojZ4qacIuOHt9jadzdxsEAoB/QPDdBeArF41YABFZFo8ZNq8ZKqioSeAQ6B6DGV2WM2Rc1hJsEiJyVYF59qJkueO+MikSx5LzIzOKhyR2MILV4pDEqG8EgXp7a9hMJ4XLG1OWH37A6bm2fZ2d1hNhtZwlDKpFxR14lcJ6bViOl0xmQ2YjYeMxqN2B6NqKop4+mEejZDNRmAlUARAmUsuez4RVx5/CaWxwVLi0uE2CAaabJQzWaEvkWAhlCSQ01qZqBiPYcJgvQoe0MIiSbO2DpxB7lp0HGFSrTXiVjHXq9Pyg1NSGgftOhBsrkeDpcYDPpAoCgjiFDXE6pJxaQaM51OGE92Lb40BKp6l3rWUNfKpdc8k/0XHfXozoYgkaQNIZQG5TCn4tzk52RXJJNzIoqtwta1Z/Io1bb/L4eWThEtEXUesduuYJU070BUNddj1oYgQoh4bKk6dlSPeFWiFO7KNdetdZqe4zTNti8VRbKStPZrQNDGQ0VV5hGk5ERSsXhQbWycEtuDt+fO+ysVDe1VJB5b2rR+T583u+ZEAjnn+WdWifEcMIq/AnefCk0DKUARDUQWwW4qqNTdkZo9GtXjUzV7VGzw9d3YnLXnTVt7InP3poHDwoCjyhyMWherojQOCVuI7j202fbVeqlzVhAlxh4p13YtR59XMVAapCBldddkNsd0p6eS3gV8r4i8EHgvFqH697D7Yf7pORUk/xlLL/oW4GMi8gfAAvB3gIuAn1LV93z2xlVVReQXgZ/1h/77+TyYTp06der0+KuDj1+i2gjVnDPj8ZiPfOQjvOMd7+DP//zPuffee+cw8EKDqJQS6+vrrK+vz914ZVmysLDA0tISBw8e5KabbuKlL30pL3nJS7j88ssJIcydZp3b65Fqz/tsNuMDH/gAr33ta3n3u9/N+vo6VVVd6OFdUF155ZVcd911530/IsIVV1zBa17zGgaDAb/wC79wXoA+2Pl+3etexw/8wA9wxRVXnJd9dPripKqvE5FPAj+GORv/NnAG+DjwP/w5HxeRrwVeDfxN7L91HwO+FeuNPK/w0cfw/4rIx4AfxYDoNwIjDIT+DvCm8z2GTp06derUqdOFV1Eohw4PueKKI1x0ZD8Ly4uEoNRkVDJFWcLyInrkGNXuiNGpk8xuPwm1QYMoikhy95h9TitEiIrjDqMK2V1JQYSgFgGZ1eBCFiUKyDnUQ+bQKJKnyu6999FfWaZcXEIWhoQizjv2QAghotTkXJJyIsYSQpjbLkWiby8j7piznkZ1bhWRHNBgsY2B0mJVPe6RbNGVLSkxSGPxqeZ2xAGoP6+NalVIWZlOKkbbNWfOnGZzY5PR7i51NaOuZzR1Q8pK01RMplPOnj3F6TOnObX+MJvbO4zHI0aTXaq6whMy5zNl82QOvyhKEaDsTYnhK5mMp0wmMyTaXEWBlGsyAwqEpp6YazNEUChiQS4DuW6o6xHR0fKsmVJPRzTVlBCHiAp1UcNSgYbMbDaGpYKyt0Q5UZpZouz3KcpILCAUgUQmacOsnjGbjdmdbrIz2aXODVoITT2jntVUVUW5sMxVN7zA3IrZokY1euufAz1zvgWCFtadqCCeAdpG7pqrMc/P73zm5qcmGRyXcwClqsHn+dzaOVePIk3aEBCPQ40G4YO5JbPoIxyYSPB1r3NQqpJpjYFO5/ycGuJr+ytVfWwqe3xOkwHDENxRmBzs4ZGmrevQ5yf0DE57r6NIm9iDQ3IApW67GN2t215We35Tu35TFlKClEAzUBgwTODzI6g21iWpDfncHszs16KReMQ7K7M282s+SKTWmnl/rBhIFGm7Lu39hSzziNUYIilnB48NeCSuaCAEJWWARCAQQiarwVZUHSwbCNXsEbDn6XN7pwumu7Fkov/bv/ax+NSfVNU/ap+kqpWIvAL4EeA7gR8CGuwz+r9Q1d/4HPt4PQYvK+BXz8MxdOrUqVOn86gOPn4BUlXq2v6hPp1OmUwmbG1t8d73vpc//uM/5j3veQ/r6+vknPciMJ7Equuara0ttra2ePDBB/n4xz/Or/7qr1IUBcePH+clL3kJL33pS/nKr/xK1tbWWFpaYnFxkRjj0xJGqipVVbGzs8OpU6d429vexpve9CY++clPUtf1l8U5P98aDoe84AUvYG1t7Qnb5759+/hP/+k/sbS0xM///M8zHo/Py362trb48Ic/3MHHJ5FU9X3At32e5/wF8HWP8e2/8kamqi//HNv6buC7H+N7/wH4D4/xvU881uu+mP136tSpU6dOnb58tW9fjyuvPcTxy4+xf/8avV5JTnbTYgiRECLSK+ivrrJ87BKm117PbHuH8QMT1ONFRQUkkT12NUggeMSnqkLwCNI2klIdyYh6WKQQ5imQcg5cc1yokenZiq2776JYXKZYXCD2CjSsQDSHWnRHo332KSwtNRTuQAzn/OOqdch5NKrhDYtvFEVCJKjFvqqoHweEWBocVUE9XtX6HQuPnPS5iDYHUSIZqKvMeKfh1MkzbG5sMJnsMpvNqGYzqrqmmk2ZTMacOX2Ku++/kwdPPsz6xhmmVWWf33n0z3L6iD+1gKoFS5leTyj7Q0ajEb1eD8kNgQVCcAcaBmrQQFEW1PXEeiELITcNOdfmZJvNzB3Y7yFFQDRSzyY05QyRwvhZhDSrSXVNbhr6/SGDYY8iun8wGMZsmoZqUrG7s8327g6zWU1TmVNxVs2oa4uZfcZXfBX7Dx5xp6E5S+3Mic2zerKMZlSSwT8FHBmCrbHg513NZjh3PFr/ofkcNSc0tF7HgGptW/HSxWyr02fa+zyzdWe2Z8G5m8M2O//mrLPuQess9E5JdVCptcNHIWszP8/qALWNhrUOxoj1moqNR8TGhyDq0bAhnhNH+0g4bWvdOyAJDvYc6uZMlNBi2DlrP+dqoUWjWaFR5gAyRk9vpUXyAdT6NWl7MtWukeyuU7v+cDcmFPGc9B5VilD4cVifZ3t95WQuUXK24/FI2uzRrNnhZOvuxN9LosfKtte84Ddsi8HcECOpbhzKnuu57vTlLFW9h0d+pv6WL+A1U+A/+q8vRjdiF9fvqOr6F/naTp06dep0gfW0hY+te206nc6BYvvn9vHZbMZkMplDpxMnTnDXXXdx2223cccddzCdTi/0YTwuat18VVVx++23c/vtt/Mrv/IrDIdDnvWsZ/Gc5zyHZz/72Vx55ZUcPXqUw4cPc/DgQcqyvNBDP68ajUbcf//93H333XzqU5/iL/7iL/jzP/9zzp49e6GH9qTT6uoqL3rRi55QOC0iLC0t8cM//MNMp1P+x//4H+zu7j7u+1lcXOTZz372477d
Tp06derUqVOnTk99HTu+zPGrj3HwokMsLPYoYqBKSiQSiB4ZGdBeSX//GkuXXMrq1mmqzVuodnTe96ganTUYPGmjVzNt35s9FmiTFq2hrUHoqc4hDf6cLC0QMgihuWB8cofe0l0Ui4tIv2AhlMig9PhTi6O0rkiLWgQcOtjnybYXLhAN2hAguMctFOaIzApEh1PZwVFExRx3FqNZ7Dkd/djMQCdz8JNyYjpLjHcTDz/4MGfOnKKpDDjVdUNd14x3R9xz32184tOf4MSpk4ymY9IX6bxqO/nEHXIpWwdfLHrsv2iNkw+dZmFWE6SkbiqLaI2RsuwTQ0FONYX0KYoBuamIse9zIBDE3Gj1FBKUZZ9mOmHu06tqGs3kOlFPR1SzGWVZUA4C/X5Br+gZZMoGj0bbm0xHI3Z2t+YuToN2SlMl6rrhwLEruPL6m+ycqp0BEdCUHSUE92Oqx9ta72Xr+hOs8zDTWMxu2ItfzZqIYgDZGLQ7Wz1qNTukCmqQTyXTVn/K3Jnonye1xVxCVoghEFsYaN8wtK223SSNw83sJ03mLkt1qJ3970g2Z6FYbHFKzRx2R+I5cbMyXwHtNgUxR66EvThSh9JzcCsGPffSpNprwdZwnDuKW7TdIlx3P3rvY3udJx9CSsmvP3tl1kxQi7U1qJoNBHvULWSyX2eWZJz85gMBIkGSQ0tzPwYJJK18vhySuqsa0fbogZJMY+9FYv2YFuVqcDIGJWWDuikn79xsDEzHLna10xetf+Vf/9sFHUWnTp06dfqS9JSHj6PRiIcffvgRv86cOcP29jaTyYTxeMzY+x3G4zGTyeQRv0ajEaPR6LzFOj6ZNZlM+OAHP8gHP/hBRIRDhw5x6aWXcumll3L55Zdz9dVXc/3113P11Vdz9OjRL/tOvKqquOuuu/j0pz/Npz/9ae644w5uu+02brvtNk6fPv20XANfqA4cOMBzn/vcC7Lviy66iB/5kR9hc3OTN7zhDY971PErXvEKLrvsssd1m506derUqVOnTp2eHrrmhou59NJjrO1fpix7qDYORLBuNaBFEMXCkMXDR2gm11Gtb7B560PkJvhzW+eRgYq27zG0EHHOSYRAIGly15I1RyKt90vapFQyFukYUYJkmMLuvScJS8vIYEjsLdALq8R+JBSBps5ATdNU5DygKAtUoo0mGEyLEqz3Tw08GkMyR1/WyuMoBc0GKEQhuyFKJKDBXJYSgke3RkSsf9JGb9GUuQlMt2bcc8dd7Ix3rRJFE9UsMd7d5a57buPDH/8A9zxwH0n/+p/jsrtKs4iBIUn0+0P2H9zH9tY2sbdKrjLEPlU1Madn6BEQQgyUsUfsD8lNQ284IDUGbTTVFEVBIpObGjQRQiInIVMhGUJUikGf5YUe0eyOhGjnuWoaSA1ZlGo6ZTweMZ1NyJqpmorpbMrM3Z9xuMJzvuqVLCwskx1kwZ5TNkhwbiYedeqgS2QOnywNVQi5RFFSThb1GwQkepRu25eoFs1J644DVEmaLZI3txGkSs7qYM9gqbd6EoJ9TVqbsy7YeQjq0HsON6PNuaY5CEOFLHattT2VWQOaG7ImAqXPgQFcHHhbz2dApfGrJKMeAQvu2gwgGrBuSDXYN3eMKrhj0PpXa8KeIdE7Tc+VI0ixvTXZo1c9AnjvTCQ/RgOyONANajCzSdZrCo1d7zkjKgQpPCrX5ra9RpEA2cClIKSsvh+//nJNG4wbo3VEZk1+vMn5uYFqFYuyDUQ0CNn7NM02CmggAk1q6NTp80lEngX8H8DzsK7It6nqBy7sqDp16tSp05eipwR8PDf2sq5rPvShD/EXf/EX3HzzzfP+xRYyjsdjptPpBe9i/HKTqnLq1ClOnTrFhz/8YUIILC0tsba2xsrKCgcPHuTaa6+duySvv/569u3b95hOuCfCIfe54lBVlYceeohbbrmFW2+9df7r9OnTbG5usrm5yWQyOe9jfCpIRLjpppvYt2/fBRvDpZdeyk/+5E9yyy238MEPfvBx2+7Kygrf933f92UP1jt16tSpU6dOnTpdGF1+1WUcPHyIXn9g2E88WjJ4l11Uc1iJ9az1l/exdOQSZtdcz/TsFtVDO7gNy31WAQtxbOMY89yp1oKCgNDQRjY6nNTWgWXepzRvifQgVo2oKNUosX3v/RTLy/RXlpFBDw2Rwl2HeP+bhAgS51DFUjkNBokYnLHf2l46i1i1LsECyIgUdhTeLxfcgaXS2q72eimDxzamOiHaZ2djmztvv4Ot7R1SUwFKUuXUqQf4wM3v5ZO3fYrpbPaYsapfqBzFGGjxv6NKUzfEENl/8CCzyYydrRGLKwvEOlFTQ1b6ZSAUBblpkGIByWrOvRCQIhBCSbm4Sko1qWmopyNyr6SuRkhs6C0ukVJNkxpkavGsUkbv3lNm4xl1U5GzQmqYjCdMJiOyNJCEpknUs4bxaMqsyhy++BJW9x12l5y6e9DBoFovpXoMagzBI1EtijVrAtym6JDK4lANOmfN5hr0tRnE/qxqa884lJLdEWj2XIs4NReizpNECQlV75T0rwZDHTKKkrUh0JYnejOntuvF3JmKHWYgtMvJ4GIyaD+ne7RmTYd5YscsEn1tKtFduoh3WCrew+jnlNblOCfp5sjMjY85E0IkxuJRfxYSrUaTJjl8xL5KPCfbMpt7EzV3sd140N6+oJ5oZY5Ei1P172imjS4OwVyubY+mqtrbT/s+gF/nqI07KJKDg1Ol8AxcCcU8blWTvzMFc8cakMUgMwEhAcnfobobujt9QXoeFs+6Dfw28AMXdjidOnXq1OlL1Zc9fEwpsbOzw+23385v/MZv8Hu/93ucOHGCuq4tlqJzq50X5ZzZ3t5me3t7/ti73/1uYowURUFRFBw+fJjrrruOG264geuvv54rr7ySSy+9lKWlJYqiIIRAjJEQwvxXG00CPOLP54LENiZWVec9m+2fc87z855SYjwe89BDD3Hvvfdy1113ceedd3L33Xdzxx13sLW1RUrpEb86ffESEb7pm77pgveBXnLJJbzuda/jJS95CTs7O3/t7YUQ+I7v+A5e+MIXXvBj69SpU6dOnTp16vTlqSPHLmJxaYHoWZGqwWMa3bno6DBjUZSxLBnu28/S0UtZveIUG5sfRydKbh2GZHPTiZrjjNYVJZj5UOagUbEOOZUWKbVqP2M54BOLkhQCZKU6M2b3/vsYrh0gDhYJRR9iAcmcU3WoySlRFNHggkTaDrkW1qlDDAnRyY538oVgj0sgivXoZY/4BEGDO8zOgThqpXtUVcNCb4mzJza5/TO3sb27BTmQMzTNjM985lP877/4E06fPfP4QEeZtyASBMoQvB8x0FQVqpkYIoePXcw9d96F7rSxskNUM0Uc0o9Dcm7IKdMrBzY3KUPKxF5BlMJAXxEJvZpUK5JLCinJKkQiZQll7NE0NYkKCQXNtKKe1eQQqHPtVTJTqmpCrzcgN4lqMmUynTGrGigCBw9fSgzR5zUSVUjamFvNYZnBMeYACmk9tkJSaXmWzZEEh0nmjFRRA4ICmeRu3AA0HoH
qM5sVDeo9iphLUNtYV+tKxON78aUgDkJVDWwK2X9GECAnVOIcdqlk+7MaJA2a3eHYBqWa89b/6NttI4DDHCS2IE70nGNVs+pKdMefA/Y2PlVCMJchMp8jcx9bL2sR4hyptwoxsLhUEEKm2U1kdz7mbMspiBsrfR9k5j/vKIre/Jho5wMh5XZMDl5V59eEit1AYPObrc8Vd3T6WO1MKSk1RkCxmxrA1o+IIu50zSETCWRf/qLmuoxEsjSIBqKaG7v48v8RZKcnQKr6euD1F3gYnTp16tTpcdCX7X/567rmvvvu4wMf+ACvf/3rede73sVsNrvQw3paqwV4VVUBsL29ze23387b3va2Rzxv3759HD58mMOHD3PgwAH27dvHysoKq6urrK6uUpYlMUYGg4F/CGUOkeu6pqoq6rpmd3eX3d3deTTu9vY2m5ubrK+vc/bsWdbX1xmNRk/sJDwNNRwO+Zqv+ZoLPQxEhOuvv57v//7v56d/+qc/p/P1C9nW85//fL73e7+X/fv3P46j7NSpU6dOnTp16vR00r59SyBC0myddcyxBBIDBHNLGVwQiBnpFwz27Wf5+BVUZ88wveNhUuNuMcwdGB2KqNOgc4GG+9Ictaj3QjqNbN1aDjDbV4f2TxJJdWB03ynKlTsIwwVCf0goDaikKtEUJU09o+z1/WCyu57wiE73UYnO4yzVXWbqbjcDWwUSIIq4m1IIoUA1zSGOqjknZ7MZC/0ltMncfsft7I62Pbq1ZlZN+NDN7+bdH3w345ml1xSxsEjSL1HmCLT5ST5tjWZihsl0ZrG2qSHnhsGg5JIrLuHO2+5gOp2wtrLCcDCgiBVlUxOykmIilw7hcqKpKpBAKHqE4N2fuSBpBSmiaYbQQ4KgTWPgSyBIAaFkOhtR5YqkkfF4wmgysvjMItA0ifFkl/FoxGRaQwyEWHDw4DGISvZ7bkUslhOH2Bkl54YYCnPqEeYOP+PULXVs11hwGCbeTapu1cuoJsgOpduOx9adJ0LOeQ6f20hhBDsGkT3nY/uaEJFsQExitmjfEOYRq751BEh+g7IBNSX5NkSCx4Y6fPe84qwOFQNorskSiQE0t25Pj1Gdr2/7+YTdRBDmnYkGTP3aa2+Y9uhV1F2gIsQY525hAYaDyL59+0EatppN6lGizpCTQ8fA/Hia3PjNAmHuEFas/3H+VYHc+OlSB+iRLHb9BSmss5GEEP1txY/B51yk8PkBNPv7jDtHBXek2vcNPCqaKpCIBAOVKSXQdnzJAG93T2+nTp06der0tNKXHXxs4zLf+ta38ru/+7u8613vmsOuTl8e2tjYYGNjg09/+tMXeiidHgc9+9nP5siRIxd6GACUZcm3f/u388Y3vpEHHnjgS97OwYMH+Rf/4l9csB7LTp06derUqVOnTk8NxaIkZ/sB/R78CyiJpI3/cB40KiT1iEohLgwZXHQRK9dcS9reYfrQNrl1HbXeQpn7zmCOXtrfzVGYsThL/LnxEfjTFB7xNyMK1Tgyuu9B+iv76S+vEJf6SNmnaRLMpuTc+BgwsJILkIjiLjON5sGK4h2Q7n6cx1MWHltpTkcpe4iqOSUJe4AyFGhqiEFYXlzifX/+QXZ2t82AhjKrpnzww+/iz9//LqbtzwUE6r9mqs25jETcUVqIEKMQCyE31n25EBZRgZXlNS697BLuv+8BzmxssbqS3NkqlDFa5GYQYjC3WCgiIXo0ZsBcZEWBNIGyHBDjwHobZzOP0RSQgpQz4/E2k2pMkxLjasxod8x0NqUoCwp6TKY77I62mUxqVISiLAnFgMHCkrnQBANuWYmhRX/Wy6l+TjPW34e4cy7j7jmHdjnP4Z4GdYDpIDHr3HFnsFHN+SYWQWoRvdbPGIL1Mc6jYFE0u9VPMLCHQ2uCwz4bcc6J1iEpEpGsDtTMVZtymj9HEQdz6hzRHMQ5t0A1zJ2PiPh1Y/ZDBYfo4n+3q84cmgnNjUFazTRZ5zDTImXNDpgdYqrsdVvO7xcIgaWlVQjKeHdENU6kBCl7TG0LPdWcx01T0aSaQiNuLzawp+0NCuZCVNSvezuG1uGo3k3ZhrYmd2UmEm1kalb7s2qyHk8V34pFLkuwjlAQJAuqDSKFpVGpWpS0Wn8mYnGv1pv613Mkd+rUqVOnTp2+vPRlBR9VlT/5kz/hZ37mZ/jABz7A5ubmhR5Sp05Pe33jN37jkyaWVEQ4fvw4L3vZy3jjG9/4JW0jhMD3f//38y3f8i3EGB/nEXbq1KlTp06dOnV6WkmEGAIpA2RQmddPFFKSaECTuSKlNGcS1u033H+QUDWk7W2a0SeoNhKFBIOIEojzqFOhDYGMYiGujoksdjXI/HnmvmqdkWCBlhA0EKRBJJM1ICpUZ2aM7r+bwdp+4uICYXWFlBNSBFLVkFNNGQfMC/V0L4JT3AGJRkSy9/e1MZDikMtBZCwtWzN7nOM8D9O6+WZNxaUXX8JHP3g7J9dPk5pE0kRd1Xzi4zfz5+9/F5PqnBQknf/2uU9NOye+vyIGytLqQHpFSc6J1DSEoPSLQBGFKAWLCz2CREgG2wrpU4SC/fsOUg77PHD3/Zw5tUFaS8Six6AsKOMQdIhIJPYHFCKEokcsBwaIqspccb2hwcbKXGIpKVrb4TRpSkowGU2o64bRtGI8HrM73rV1FksmszE7u7uMxlMIgf6wRBCWl/bTGy66QzFQhgINdr5E26oac6dGh1n23Ayte8+dc6hBOlVztUYJ3kMY8D+Cu/8MSoo5Dh1AFxJo1N2DWR16co5L1sbSOutoozzdTdvGoqoKQaJFvSoO4YBcm9vXo17NxerrT9rlIe5o1bn7WNpoUgVCmN8IYKBbkJwNxuJzI96jqhEkoS2oxcBcINrnyTar1qOSYxHnB9Yed1H26PV69AZ9RjKlUY9eTTgsVhurH1N2sJpz9sfSvJIGTfOeS81qbllpb0wIHgWb5s5MO3/2uTe17wpqkLIFlO28WNenQVVPW6bJzXz5zF2VweYlawXe81nEiHa1SJ06derUqdPTSl8W8FFVGY1G/Jf/8l/4+Z//ec6cOfPXilTs1KnT46MQAt/wDd/wpIGPYK7Fr/u6r+Otb30ru7u7X/Trb7zxRn7sx36MhYWF8zC6Tp06derUqVOnTk8nxRDIydxYGhwe0NiP+D3CUdV7Df2xjIGn0O8xPHARK8cvp97cZHN0N7lWRPNeh2TrNiNaFyRKkLa/zZ6RVckiFLLncjwHKTjMyczb6dwFlpvA7ORZdg/dS7GySr+I5H4P1UzKzbwaYw6lACFaJ1+I8565IKCx8OhLg4pRWsAo5gBE7JhF591/iFJXDctLy6yf3OHk6dPMa/ca5aEH7uE9H/hzJl9g/YpgDsYY21hYIQSh3wtEKegVgdiLSBZSrlEN5LIgBsu3TSlBaMzBhsWughJDIERhuLAAAa64+moeuO9ezpzdtLjTA/sZ5ExVV0gUQtEnhh6h7Fl3YA40eUxqaptHB3vaKEUcEGKfqh5B01DNxtSNMp3VjEZjxpMxVdMQY2Q6GrO9s8O4mhDKSFk4pE
NYXNtPvz8ghKINS3WHHvPoTjQjuV0h0aBky/VErfMvBFKqDfKpWgxn6wQESykNwdZZex51L/azdeKJA7ncAmss3jVnu1YM8qnHjZr7T1s3pl8qQeIcJoqog62AthGuTi+zR8i2QNPWqjkGIfi2s4N5G1dw6OhVkbaGbdbmrk7Bncot4SQBBq8lBOeLirjjFQ3EWHrMbtuBad2OoAz6fYYLi2wUW6QZpOzOUT8jdUpk2r5Fv9rbmFX/PG7MX2x95jaWFZ8kW1u4G7Wdx+zAUrGxpuznJ5uTMueEhGhOUApyauZxs+p9mkK0zsecHYLa90OwsOmUayTb9jt16tSpU6dOTx896eFjzpm77rqLn/iJn+C3f/u3u17HTp2eRLrmmmu47LLLLvQwHqEQAi9+8Yu58cYbee973/tFvXZxcZFXv/rVLC0tnafRderUqVOnTp06dXo6yWAetNACzP10TsseUQqDXVgfoqigEgkxUywvsHDRUZqrJsw2NhjfvwEp0G4BEd+eORjxmEpR2XN5oRbJKG00q+nckFZUiAGi6DzWUgik3Zrp/Q+wu7IfHfQp9+9Dk/W3kQxstrgqSPDtWYcjrfnRoc2eQ9KjPqN1HZopqzYwIgXifX6aE6mesXTgAO+/+aPsbG4YyM2R6XSbd7zrDzi9tfF5z4GIUMZAlMDC8oArrzrOcDAkxsjJE6c4u36anJSUEvWodjBkYw4SqCulyTWoMuwXNFVNTrVFz2qgkEhR9lAViljR62cuuewSgggnTp8BAkFK6lyhumROuZ5AymQRstY0s4rcuBOwJzS5pq6nhHJg8BioZjUbW2fYGe+ys73FznhCVdVkVZqqYjwaU2tiuLhAv9dHCovFVA3sP3AxvaI3T3bJuXGAtefoU7X+PwliLkCP7hTdA3OqiVL6NCSU5BGebQ9gsv6/5FBSk8WfogRRi+f082EMzJ6bswFvA9fmoNXc2DYxe13KFgerIXpMbcvbfD2prdvswDun7E69bMA6Z1RsfkMwgJnx/keJdrzSxpF6F6V6cLF6T6aYsy9RcU4h4tzZaNDd3byIRZPGghAigYa6qdGcvGt1L3e1dZAOh4usrKyxPjjJrEpkNQAZs9+kQOvU9RsHfHzBo2gF5jcFiATrCxUDi5ARKdwNmRFLx507IvH3CTt+nTsrU7Kuypwqh717/ZJBWwCa/PqArA0Z67jMWhNjAdnfI6QwCNupU6dOnTp1etroSQ0fVZVbb72VH//xH+etb30rzV+jML5Tp06Pv2688UYWFxefVM5HMCj6Ld/yLXzsYx/7gt2P/X6f7/iO7+BFL3rRk+54OnXq1KlTp06dOn15ymIPIykkNCcHDRYVmTUZg4jmoCJjMYaa0FxbK11RUKyuMDhyhMXLjzPd2qXagKTqbsXQ7gloWYL9Wzapek9bGz2JRaI6gDRs1EINJUiynkZR6yUkowmmp7fhoXuRpWWWFoaUwwEpVaRcU2b1mFV3wLnTLkjcg43swRYhI7HtrjRQpGpzhDuwRCyeNjUVISjrp7Y4dfo0s6omJ4uX/OjH3s+9D33ujvcgUERzxyUf5/U3XMN111zLZDxmNqt58P4H0CaRU6ZJCU02ZykbfoVEymrwC+j3rHfPIIqQSWiAouiTtaHf70MlxFBw4NA+Us48dPphptMxhy86TNObUg9rirKkPxiiqTHI2mTQAEHROpKZkUNmNt6hyQ2z8ZTN7Q12t3fZGm2ztbXBZFaRk1KlGePZlFj0GfT6FIWAZOsZDYGcleW1/R73mR00F+RcQ7Bz1a7FoLYMDe4Fc7dKaBczENy5mD3qVxxEyfwzVJA4d8K2Llb72oLCbA7MECAlXwONOwgdqnktorn1xECXtPvIiOxBLPMxqvcZ5jkc2+tejO6CDCDmWDWXHyA9d5vOcb77OC2ANLs9MYiiIt6LuHfJ5Rbnq9FBzdmmyY+jdRTOZg3jyTaj8Q7JHbOtUspoygz6A9ZW9zNYGjDd3aXJQsp+ravNZW6Svb69oDNoVIO47mRN2daseqwtWbAuVvHz4D2b+HuOWixuG9uqOWFdtOa6NIezkHKD0oAGXxLyiFhbc4nuxS9HKfbOu+XTkrT6nNdsp06dOnXq1OmppSc1fLz33nv54R/+Yf70T/90L9Llcdba2hqz2YzJZHJett+p01NZLXx8sqkoCv7RP/pHvPe97+X3fu/3Pu/zDx48yHd+53fyQz/0Q6yurj4BI+zUqVOnTp06der0dFDrPoxEkpiLzFv0EClBK3N+ZXXnoP00P6kS1VyTxXBAb98+li69imp7l3p0B0wjEeutg3MDVFvPYqChQUVpVOm1MGcOWOZ+RPYsigUitcdypvmz0xRGD59C9p+gv7aPYnGB3DRkzaRcQ4wE75Vsf8tkghR+nNbFF4z2mPPM0YY5Hw3sGM0xd1wQ5ezph7ny6uv4+IfuYDqryFnIKmxtneUjt3yE9Dl+RjAoe6CJpJm6sdk5sG+Fi48eYTKZsTPapZpUTEZTmkZpstI06vDFxpY8cjO1QA5zi1VVos5i7r6UHcYJ/f7S3JmWmprFxRUOXWQxs2dPb7C5scWh/QdYG26xuLBIf2ForlV3gwmREAMqkSZXpKahmkyp6jHbWyO2Rzuc3dxga3eTyXTCZDYlpUQCyn6fwWCRGDC3XYyICDkLTQ233/YAi0tHuOjwAY/lxHoNcSeiu/eEQPB5zZoJYp1+QTwaVzMSIOQ2Kljmbls8MteI9l50qa2D5POXSG2WqeBu1zYCtIXYLQ73vkLaflA7NxLc0QkwB5a28lUTKtZJOF/n2roD3dkoLXJ3Z2dr/sVdlyQkFCANbRqy+vwwh6iyd4E7jK5U9p47j0E1x22TaqbTKZPRNqRMG3tsh2Cu2xgDq6v72L/vEOPNETo1jpczFJazbDcTtFGofgND1NLditmnwzogWxiatPHn+lxJMHirWMRzzn4tJ7dD+jWQMCDu7tHs51/VHL851+x1SII46IVM0uw3OtjrQ4gWU9y1J3Xq1KlTp05PKz0p4aOqcubMGX7oh37ovIHHtqvu3/27f8dDDz3EP/gH/+C8Ac5OnZ6KWlhY4JprrrG7e5+EOnz4MD/3cz/H2bNnec973gPAVVddxb/8l/+SG2+8kZ2dHU6cOMFwOOS6667jkksuYXV1tXM9durUqVOnTp06dXr8JHsddeLAxaBNdmAjDmYM2CWtSdl+SC+BuT2xt7LMwtHDpNk1TNZPM713x37Yr62jcC9E1XdmrkjdAycZIXjEYxuVan81FBkDFMGcTo07yyxqMzBdrwkPPcTC/sP0V1eNyDXesCcFohBCYYcqBoJaN5nVPEYI0Uap2WFlcPjUgkt3pmlgNp2wuX6G4tpnc+rkKeomGQDRxAMP305qdnxsj5zuGCILvYKcM5MmzyMtB72Cr/7q57O6tszDD59k/exZmipTzWbUTU2T3IHWdgU6G0uqZGdjUYQgMJlVjMYVEizmch6JKUJvsIiK4eXh4jIaLNYzBDh18hR33ncvEWHf4hL79u9jMOgTix5FLIix57GYSl1Xp
JypZxU7o23Orq+zPdple7TJbDajSjVNyPT7Cwx7Q8rYo4gWtWmdlqBZqJoZSRY5fXrCu971l1xzzcVc/4yr6Q9LohbMux4JRBHr7wvR4JaIx+A2zJ2rLbjzCNLcOl9R0ECWRMwA0SM/szl4JZC0MrAcAk02B9y550+kAO91VL8ADER65CpWvqgITW79jt65iEM3WretrSvbloN07zIVbdFmRmkQLVB3BIcYyVnceVmcE6/KI8Zh0DgYrJRIJs0ZfuvQFLDI1WDXlqhShJLeoKTsBbSynz8VMVAUkeFwgcGwz7GLj7O9dZbdE9vU2WpGwd2NycGeWsxywK7X5OdQ1cBmDCUqjR2TFKhmYihJouTUEKUk0xiQbO9C0Bb8FyStURFyqhGH4TlnJBQOa63X0aJqzYGt4nASpSgi5EBKiSzejenwuFOnTp06der09NGTEj6ORiNe/epX8453vOO8AcHnPve5/NzP/RzPeMYzmE6nFEVBVXUREJ06faG67LLLuOSSS57UsO7yyy/nLW95C29605tomoZv+7Zv48iRIxbxA/MPqnsRPp06derUqVOnTp06PY4K1u3XAkcDW96R51GjFktof045mxPSQYsUBmBiXxjs3wd1zeiqKxitf5LpVkakjQd1IxnWr3YuhIQ2mtKcU9EfbDFARoii7iQzDGrQpg1tFXIdGD28zsLhEwz2rTHdv5/FtcZjF2tyDmTv7UMhxAAt2PD41SDugqT9O5g/Lrf0DsSCZO+65TP0FkrOnt5idzJDm4asDUkT991/t0daPnKqy7Jk3+ICs9mYiVe2xCgsDAMv+Krn8u2vehXvevc7DYjkhu2dbWZ1NY/XVPGYzc9yPO6dSrOPzurMXQ88zAtuegZRCkgNualJsaDo9SjKHmgk6S69oiQN+xy46CLK3oAH5F7W17c59cAWcv89LPQX6Pd6lEVB2etBzmYEFCWlzGRicatVVVOlirqZILGkGPRZHg6IsSAWJUUoUG0oytLOfwg0M+v/a0KPngjTacUnP3En6+tnufE5X8GB/fsJIaAhO8R2CiUBCYW75yziNGogi6CSPYpUgUQQi6GV0PpqC9oJ3AOLyaBm63J0mnsuOJQo89hOVXc3ZkVF5+smY9dGCGIRo+KvIRiQy3jcqzhql7mrUv06kXkqqBgwR/x7Bs/IUIQBSWq7MkLrCsVBZCCI5SOnnByAq/dH2ljwyFkJ1m1Zlj36vZLlpWUkKGsHD3D42Aaj0Rg0srC4wBXHL+fiY8dYXlmlLEu2d85y1+4tpEkyB26CJuMuaeva1NwgcTC/xlonsfrx5/ZGB4fIqnkes9zCxJSyxf7mZC5LTR777BnQKqRU7zlLc/L9FO4ytfeKHIBszkdbC+1+dR4xnRvvTu3UqVOnTp06PW30pIOPVVXx5je/mTe96U3MZrPzso/FxUX+1b/6V9xwww0APPTQQ6SUPs+rOnXqdK6OHz/O0aNHL/QwPqdEhAMHDvADP/ADj/n9Tp06derUqVOnTp3Ol0QdvISANhYducfjsrulAsJsDgcVA1BRCnMIhhJUicMCOXgRK5dfxe76WcafvI9URQo8AlMteFXmMZbmaczuHwOHLXOnZBtH6a9zZ5dBmtwGW855Ub3VMHrgPoYHDrB85BiprhwCloRoTsaMxUGSLX5TsP5IcwN6LyQWtcp8n5F5viUF29sTPnbzJ/nGb30Fp09sUM0MlGQyuzs73HHXXZzenMyPSYBBf8DFhw9x9uxpxrMaERiWwnApctPzb+J7v+efsb5+hgcfOsVkMmJ7+zR1FamqtBe16i7HVplz/gKEgDX+JeXmj93By150E/v3HUSTxawWoUBjQa+/RJTKozFbGFsaGIwwGJzm7HCLze2znNzaZLS7Qz2dobmmzXoNwfofc3InWa+g7Fm06sLiIr1+jxAiMQR3E5pjLYg5/LJHadZNw4c/eTMLqxs8+yueT3+wyH33nubMmfdx4403cNXVl9HvFw7t1IFesjEQUU3Wx4n3QaoQJbpX1s6nwabgZ1OReayq7DldvQfS1kiLx9o4VYO+0q7L0MbPZo/vDaiKr93sQDig7vRzOzFZkwPMPF9brStPxPZFFiTsXQ+t41EEihA9QleJGkECSZPFlmrwa0bmHZHQui3Vx9OGwQYkBIJEh8OBhaUFFhaGXNy/hOXVNSSUNMk6O0OMrK6scujQUZaXlzh+6WWEADtb65y572Ga2sB4SkrSTNLE/OpUc02LukdUWzuwujsR/7PF0RrcNTdz1rZbVckZg5DuyG6a2h3S5o5tu1lVs2HdkAltz2Ww6NYilm6pVvbslJBz41HG2aOXO3Xq1KlTp05PFz2p4KOqctddd/GLv/iLnDx58rzt5xu+4Rv4W3/rb83//qY3vamLXO3U6YuQiHD8+HEOHz58oYfSqVOnTp06derUqdOTVimlcxyF5mIMiMUQZsV/6k/K2Rx5jcWZinuUhGjuKrHevXJhicVDx1g9fjXjU5uMHtql7dzLYMBgzszMfWYRkm0jpMU/GhhyEAF78ZYEoiR6kqnVAE7w15Mj09M7TE+epDk+ItcJzWJOTiOYSIzmEET3GISIRbJqC9bEx0CbU0k7kKqqufszd3LyxAkWl5Z56I4HqauKpJkowtbWGbbGI3ffQRkN9Fxx/GLOrp9idzJDRBj0hQMHlnjmjc/kW1/1nZSx4N3vfi8CFDHSLwfMdmY0yQBNu73WrbeHp86RKuKGy3sfOM3b3v5uDuzfz/FLFkk5kXKDpJoQIkXRpyxqmpQoitI6PfsL7FuFIhYMBwvsO7Cf3d0dtnd22NncZGtni9l0xniyS2qmhBAoix69/pBhf8BguMjSwiJFGYlFNGNasPMYCRCsK5MsBjFFmM4qbrvrPh5av40HHryfl73oFew7cIjxuOb9H/g4p8+s87znPpPFpWEbVmrGRI/PVMTNg7nN791bl4j3QkZzK6rOX6Momg3k2fexcQXmca5BDb7bT2Ia6wJtnY8YfBQNtMgcdymqr18DxuLQWCFb56MBz8Ywt1i0r2p20BncjeeH49dHlOhGPQd1GmgSlLldE9Gvn8bWRqbNb/WbBxTRaFe3BFBb8zFEirLHACVqYN+Bg1x00TEGw9KOujEUuzgYsu/gYRaXllk7MGJazXj4xINMRyNGp7dICVJj3ZA2R3v9jhYNbANV77XMmufXu7kYW0BrKzvlRGoyOSdUrY9xPu+qBEpwxyJqjkg7tmhzlAxThiBo8jjakG0Oss21nywba/6rTuVOnTp16tSp01NfTyr4WNc1v/zLv8zNN9983vaxtLTEv/23/5bBYICq8rGPfYxf+7Vfm8d9dOrU6fNrOBxy1VVXMRgMLvRQOnXq1KlTp06dOnV68kqxLjws5r9FfAKIRuu5k8YBnLmEkjTEUCASLBZTvVdRAiHCwv6DrFx8KdOtdZqdT1JvO4TxHUYHi/MhzKGQwyRpUaTuxVGiQCTO+yiVKNmck7nv4w3k3Ux15iyznR3SLM2BVKB0Z1UmhJ7BoBBBhJwzMcyP2gAXEM2uZ7ACRUU4
dWbMXZ/6JJOqpj8YsLWzQ04JCUKTMpub6zTJcFUZhV4RufKKqxntnOHs5i4B6PWEY5fs56YXPJ9v+Lq/wfFLLue3fuv/5eTJ0yiJrc116iqxuT12p+O54PFRsSMC9MqIRIFGGM8a3nXzrTx8aoO//6pX8pKv/iqKckhISmJG7A3oLQzd3VZQlBZ2GxdLeuWQstdnMmk4uP8gdWoYj3bZ2R0znkzY3FhnY/0Mo8kOCwtLLC+t0B8OKHslvWD9kFkbNFvnniZ1yKZIdpialZQaxqMJm6OK8XTGBz/2l9z/0AO88mv/JldecS2a4dO33sOZU+u85KUv4NCh/bTEWD3ytF0rYH2O4r1+sAcJbQVZOam6E88ckHhHpM+tGLQMIZDEvo9CmLv2vMVQlKTJI1HFnJxkh4UtyIwQxObBHZQ6j1Q1Zx+0rsTk59A6RTWLRRrTdhbSvoiQhSRQ5YxqosiJXhbKFpb7ag9B/DYCJefGD9BuNBDvMTUTZ0kZ+2hOkJWiDCwMhywtL6GaaZI5b8uyoIjCwuISicyBQ0e4/tpnsbOzxS3Tm6k3ZlQN1HWDJoeOTUUMEUJwx6PMr+RAQaKew9ykiSiFrw3IKZk7NmXqunLAmP0cGMTWtjM2Z9Aw71y1NaAeaRsNEmMAUxw24hHMtj/rpEwpkbX5wt43O3Xq1KlTp05PCT2p4OMdd9zBL/zCL9A05+8fJK961at4xjOeAcDu7i7/9b/+V+64447ztr9OnZ6KWltb4/rrr+9iSzt16tSpU6dOnTp1+hxKCkJDkNJAiCagIdOgofEf3OPxhwnN1vUXQ7boyWBxiNZfJ4QYkX5keOgg+664ltnZdc7e+iBNVXpsJe74akFM62k0ibshg8egGghUf6UBlKCBIN41p8Gcj+5+1KTMNs8w2zzLdLRNkw5SMkAloyKIRBLZYkPtwBBjGIRggMcAhEFHi9Q01+PuqOHEQye587Y7KBf6FBJJqe2itHkajXcdVsKgX7K2usba8oC7791EgIWlyPErj/CVX/kiXviVX831197A+//iXXzsk58mUaO5YXN7g2YG0+nMwewcPz42eCwMOKXGGwVVqerMHfed5L/98m9zx90P8Iqv/2ouufhihoM+CzlT9AcMFpYQKZhOR2hWYq9PURQQYGERmlRTzyqWl5ZZ2TcjJ2U0uojNjQ02NjeAQDkYUEYxqGy2PlLuQ1aarPNzE6UkUZG1IUsmp8TJ9W1GkwpQUlYePHWSN/+v3+LFz38Jz7vpBfT7i5w+s8EfvePPeMELbuSqK48TQ0EIce5UTTSIel+nZlTduRqiuwQNSFkPY4vX8zlrynojCcGddY2BObHoz6DuePSg24B1TuJdoNlO/15Mq49KiLZ6tT1L9iXn7H2RPlYxIJ9IRCIiydyQWRE/htbRmFWpkzDLSigFr4ecX0etsipFMHdhQMh+3KrmgLW4Wh+VQAiRWiuquqFxWJncjWlxpob8x+MRqalBExcdPsJ11zyL7d0N7pnezizVzOqaurG4VuOl2WNusztAs5+jRM4GkkOIHp3svY7Zo4aTdT6KYl2Mea8rs2GvWzKlvZhZc0KqdbNmNYgcAuqA17YXgAYJ2aNw2whY7RLHOnXq1KlTp6eZnjTwsaoqXv3qVzMajc7bPlZWVvg7f+fvsLi4SM6Zd7zjHbzjHe/o+h47dfoitba2Nof4nTp16tSpU6dOnTp1enRljyAM7iKT3EahBneOGfBo5k6k1Iajzt2QOJaJZCREYlHSX1lm8eBBVo5fyeT0GbYfrt35tAfHtO1VbLvexACHecLAMSVgXXFZ3ZUZlJi9K1LN1SZEd6dFmq0Z1eYZqsmIlNyRxrluS4/FxCBFjOU8hTWE1kkHEoTUZJp6QpCCE+sND913F2d3djh++AiaHFKiiCYkhLlTbGHYZ9AruPLyy7nvvvuZzRqGQ+Gqay7hq170Em567ldy5fFrEFXuvOcuNJRoSkxnFdWshjygqhs/phY9/lUJMCiicTadnw40Q1VneiGwvVvzprf+Ke+7+aNcfsklHL7oEJdefITjxy/n8EX7WV1ZouyVhBAJsUBCQSxKqromk8gLmaZO9KsxqpHllQXW9u1j/9YO47G5MzU1HonaGPisDECJRHfbKSnX7lQ10Lg7mXLr3Q8zmTXzY1NVtnZ2+JP3vIMzG6f4upd+I8vL+5mMat73Fx9ltDPmK555Pb2iRLxzUVTmUaqouyEFA0o+SwZx23ZRc71JsMjVTLJo0xC9R9KBnwSKkM0ZSySrICHsxbtKaU2KCsleaeBMFJECzcngtYO/OdgSvMsxOFgECQHJSpaaQAR87jTtxcpmSCpUTSJLSb+/SK/sW5ehYjCzXRSagdKO2DJHzcXsyFW8AzJEcy9LgJDCPC627bJM2pivNBSEaC7SJivEguHSEscuvpRrd57N7s4uZ04+xGg6NgBZzxjkPlmVqOYuzrkmZ527H3NKZE00qcZPnT3euhDRuRPRolnNQanZoG1uX+RrCswpidh7BKKIKJqSv99AIJIlo9IQNZCadqce5Zu6xLFOnTp16tTp6aQnDXy8+eab+eM//uPzuo8Xv/jFPOtZzyKEwKlTp3jzm9/Mgw8+eF732anTU01t3+Nll112oYfSqVOnTp06derUqdOTWkpyp5g36jm4EfK86lD8h/gtFEAzMUZ3eRmkCXPXmaIkQgyUS8ssXHSE5SuuYLb1GZoRQAQSLeLL7utTyVhvHSDqvrS9Lj2ATEHWiiIo84AT8VK8c2JcqSLT9XXq3V1SVdv4QjR3WIjmmmtfLgZhRJPFz3oUZxAY74z4xEc/yLXXXc3K8gparXPJJSVXXnuYgFLXNRKso1GxuM+cMjEEjhw6hEjN6tIiJ06dot8XrrjyYr76q1/ODTc8m6OHj9Lv9fjEJz5G0/To9wZUs4rJZIe6aahnU1LeO/ZHB49CrwgUUby70II2k8OtWZXo94ScIAS484FT3P3ASYNsURn0FlgY9On1C1YWFllZXWJlaYm1fStcdOAABw7tZ9++NVaWVxgMB/QHiwQJ9HolZW9GkEhZ9pg1UzQlc6alzLSaEkg0ZHJjMM5gtjCdTdk6u829D57gM3ed5DP3nP0rx6ZYfOeHP/4RNjbWeeXX/y0OH7mEatbwoQ99gs3tbV74wuey0B8AwdNu1NZw8BWj2UNGmTtZrfvQvLSiModThZRkwc6/tN2LERGL54xi6yJ6VGfGIkJDCKSsNCkTYqSNhM2a0WzAMHv8p+S2OtTxupjLOHs4as52neHPDxrJ2rRI1foQESZNolZhYWmVpcUFhoOhAXOJBG1jVVuP8Ty8mKzZXJwEJAgxBoLE+ZWF2nVZVzXT6ZjlZolY2I/jUm4IuaGQyMJwAQnCeDhhtLDLbG0fV15xHdVsxidVmc5qtre3mB44wGC4QFH2PPZ1L3YZxGChQtPUxkkd6uZczZ3G6g7IlPxcara5Qt0l6V2SiPWZqjlFW/acW9cnCRXbp+J9jxIsHjlDo40BzZwew1vcqVOnTp06dXqq6kkBH+u65td//dfZ2to
gnWElVZhIbF7Cb26CZbNwom5YDt06c5/+hVLp6/yvHymOAbFssN16+9wHKxZF03xJAieQVlLHBKkry8QAltA1mGWGhrBRoyaxArhAhtVCbGMBGh0IiJkKkkwTsqK5R7ogSvtF2Man5/a6a/tZO4rcDQGrYKh6ph0wZMteacMTxiLFMyKglkTnnbHlzZM+xMHA6XXKht6gWN0dMIjEZjytEUsZ3MqZEyL0E9ojkYh6hgLRAj+AYNLjkijcMHT8WaNgTaegMYCpdT5jlFnhGripgVSZhbzHE3Pol52xFaPwHkxBiSmK+CRk37fHcMZc4yKDNsRor1VQMh4EXJTepDtcYQQ4oP1hAx0WFUcGaIyct0w3XbEELat7PM4hvQNkXWLpZrNnXFvf17uNxixdDUnuW6ZVUHfDKovuQ7A2bzmmeefY7bd25jjCOEFh88IQqt94SgaVOF5IRUVWLU5J7UF3sZtRO0e3p6enp6enoeBK+ryvZK3YivhFcau2qtfd3G+GowHo/5q3/1rz7oYXzdiAhFUXD27Nn/Kj42FcMrbdvSNM39xxACIQS893/osWkalsslx8ept+D4+Pi+e/KlgmVVVanXoKruzz+Zt16v8d7fX/9X4o0kdH7v937vgx5CT09PT09PT09Pz1uGgEniHYoh4ESxxmNNjliDagahIWgSEo38YffZSfTqifMxRXMm4TGRfnMOrjy0xyQ3OGMQExCSmKd1ncSJapMcjCRhR6N2YqNHyQjep8hL26JxhR2PkiDUrTM2G6QYkHojW8QaTJZ6GLGOGGuyQZbiPnEg2ommEVXF+wAqZEVBU1eoRGxZgBhirFJ/nW/Q1mOMQ2I61xLjcFlO8E2Kcg0ZYFACNs/QGFAjqHVQe0LTIiYjzx11tcBoYHr6FEpG23iGO1MWywXBByQYdoeBR0/lTE+NKYYlIQbUGKJxVHXNbHYHpw0TW/DBqxOuTJZ85pby7LolGwxY1i2ZQmbBi2Izg+ZC20buhZo785pidkTxzFPs7O1x5uxZTl06x+mtPS5dOE/T1tzbv8dysWa1XFJv1kyNIQuKhhbXtmCEYjxhM19xvN7gjKV0kZwkcIkRckkRqQLYrhfUR2UTlRckcthE1kEJJAetdjuZg+ROLCwjZyiNwathVkeWdcVU4InM8a5xxqQwNMayPTZc3TVMC0mio9RoWxHagMGgKK0RHJZBuUOeF+kG8TxD1Ka0qRDTPqJgxIDNUAkQA6hixZDnSSCvGg8ovm1wdkjjG5wIecxRScbbLMvZKUoGB45hlnPg76DNGYzknTB30mKZjswiz9mb7vLQhYexCtvjLa6/8CS3bt4l1C0hKIU1NJBctXZINigYDHYZDXfIi2FyFitU1Ror4IxnUNjuaK0wjU89mk2KBj65dtBGaEK6mSC+zPWEOih3jzbo0eb+iKHvYezp6enp6el5Y/HGsvh9DTjnXpGDcbPZEEJ4DUb02vBDP/RDXLly5UEP4zVFRO6Lk19NJO7J/9B/ucevdl5d1/ddlyfOytlsdt9pOZ/PWSwWLJfL+9OPj485ODjg6OiIuq6ZzWYsFotXYSt8/Vy9epUnnnjiQQ+jp6enp6enp6en5y1DClpN/Y7OBAxgjMXY1IXYxpYYUxekdqfncj96tet47KaBIJ0bTjo/5MnvRS6c3hvhjGA0iXyhbaH1nfC4hAja1qgJyeUlHk7iKH3bDVgJoUGsEDcVhLyLxYxoBKETIF2WxNHgUQ2YwmGdQzJBjAFnIMb0O9LFX0bauqXIC4y1tHVNVuSEdpNWbZL4qk2NcVl6z03EuBxjlaYTFk2Wpf7LPE89k3lG2LSE0BJ9Q2xrsnIABMrhNm29wtcVFJHY1uyeOsV4usVmsabxSuZasqLAOIvYghha2hCJCNE4wnAHHzytRpbHEdc6vm2n5ULuua2Wo/GAF+YNi3WFNcKoMKhAmxuaCE2IxGiY+8jR3bs8e+cuoyefZHcy4vSpXc5eusQjZ8+TPzpmVW3wIVDXnrbZ0FY1eV1jJcPHlmy0Zrha0TQ1BE+GxYSAxIDHUAOYgBWDSopXneeGhXoyWqxGCiOUme16R6GwDiMOQ6SuG+60LauQ+hGnIry3dHzgjOWxsxk70zF5VmIItL4ithUamtTHScT6tF4VMBiyzFEWBblLMbyxqRiMtrGZTQZckttWTBKrkdSBSWgxRUnm8q430WMsoEJROMbjFOPqnIPoiesNNirTcoy/d4jZL5j536cw7yV3u0BMx1UXW6xEnDUMRwUXLpxjMh6yPZ4iZMwWNbcX91AFZyNiSEK7jeQuYzAoGU3GDAc7uMwRNFDkDmsjbRMJftPVzxisrWlaT4yBEGM6BuOL3Y4hghG571bUl8iLJ9HLL/27p6enp6enp+eNxJtWfHylsauz2Yymaf74J34DMBqN+MEf/EGGw+E3ZN/jg+JkW3y926QsS7a2trhw4cLX/NoQAgcHB/yLf/Ev+Cf/5J9QVdXXNZZXg4985CN912NPT09PT09PT0/P60irgYDFSiQTMMZhjEU6wYFoUPUoBiVisJBCS++HYZr7HsiX9DwKnWMs9SpOtgomoxxtKmLwyeUVJLnIYgQkOQRjElNQD9qJG9aARrRtwIBGIVJC0yRXpJVOgLSdCbJGbFqv+oDJHeLAjgyIRYxDxBDobuqNKTrVlQXVaknMc4wxLGfHDKdTTEjONbUWMSFFWzpwWYnGAFZRI4g1NG2L84pKRNUjzqBt8pYaUYKBoJEMICgaG3I3oCiHaC4EnyJM88JQuDGqyUEZVLCZRWwGZkC7aWijR0zAmAF1s6JVw2A6YL6GsBLOWmEnBlZs2NnKuTGa8tzdOe3a45xgjFCKMMotAaFVQ9UEah+pQsu1oyNuHB6SP/0chbOMBgMmkzHTrT3UZbiipCjPMd4VNusFEmqunDvfiVKBar3BWIsTg4YABAbFkKquWK+WeB+JxjPOci6sV1TrDXUdCK0nMxnBJ8feuvasfE3tG0InjDlgIMJ3jC1/8lLGpTMjpqMSl5X4tsZv1rRNRWw8GpQYUk+iwRBcJFjIbMF4PKTILM46vDZJPMwy1BiCVYSAEYtIUt/E2Pv7ZhIMIy5zbFYbbEguWt/W1JsKcckl6YzBtAY7GhArRcOG5s5z6HQOmUG6/TBFkaYO1RR1rBhrmU53KbIh1aahGD7Hso4cLtN4MqMYq9gssmkDHsG6IVk2wpmcEDNCVNqmJsQarGKtpVRH3RqCGjBC9BYfQVWSO9QoVhWjXeSqwMvlo57MevEboKenp6enp6fnjcObVnx8pc7HF154gdVq9RqM6NXnve99L+94xzveUDGxbxWstZw5c4bv/d7v5Sd/8ie/IcTH7/qu73rD9Zn29PT09PT09PT0vJFJ/ifFScCKYK1gsF3/XJs62vjDwkKSHPW+2zEQUfS+8Khdl5uR9GiNcvnhHcZlicSWWFdJZAwR1YhETe4w71MfoyiigtiMGGoMINah4oCAqhDrGlyGRjA+gmZIYSE2oGBiildV5yAzmAzE5dwXOX2STwkGjTF1SIohqhJD
S14WHB0csF7OGRYDQl0RjceJRdUDGcZZNCjGCdaVqK7x7Ya2tmQUiCtAHRronJktWV5iokFtxFcVsQlk5QCrjmgyrMux1mGygrZeETXQrjfU6xZBKYZD8mGJFA4bHZlzZFlL4RxVs0I1cvbikFDV5N5hJWMwHPHoouGzN+aML2zx5N0FdRsJGhEBqQWskBllUGSEUmnaQBNT/WbQyKoJLOuGm0czgt5ARMiswbkhg62zDAY7FEXJamJw1pNlGc4MCF5pYsA6IfiaFofqGDue4ESo1w1NGwk6YlUdsVocUVUNtZ/hvafVJHvnwBYpcWiuSmGE7z2V832P5Fw4dZrcGVSVuprTrtfUVUNok3iWhOi0s6sBnCV3sD3ZYzQaJRerDWjtGUz2sC4jApGABgtZ6isU03WZepDYxe3mJWUxYNbeBRQJSlGUFHmOxLRvuFGJzYrUoxpTK+r8S19Av6lG3RqNI1SSsVJjRLCoRqIqBouK4GNgvllz++CAa/sLjjojsOmOX1srYlqO10ccLRuGx0cMhxOyfIARS91URF9hneIk0rYrNpuWpq7wPnVt+qC0QYkKPihBwesf7nv8csQ//ik9PT09PT09Pd9wvGmViFfa+fjMM8+wXC5fgxG9ulhr+dCHPvSmj1x9o3Pp0qVXtB++2mxtbfHud7+7F6p7enp6enp6enp6XkeSOzGQWzo3nKaEFjGIsSgtQSFoiqU0kmyAyZf1UufTi6kukZOIxjTNOtjeHiJtm3oZW4+GFGVpxaA0aQnGpFhLLGIgEhGxXedjkllCswYsZAWmKJOYY0yK1WwbTFGgvkJNJKpHiiK9cjhAvQdjXhy1ANEjYhBjMLkjyzLq9YbxeIIzls1yyWi0jQ1ClAgYJHrUR6zL0wZ0GUiKoNXQoKGLjbUWDSe9kIJ6QaySlTnVco4ag5QGkxXpfcUAA0kdlSGSFUNQyN2AfNoSfVqOANp4NCoWwyAf4IxjkA/J65rFZsYqKq3CsMyYbm1xaq/kzM4hT945YNuWfHERuLfy1JVPQnJURCDUDR7FWqG0QlShyC2V104IhDYqXhVMpGoWrO7O8QqKxdoCbIZzBdZYnMtxWY5F0Jgcfd63tKHFh4D3LSF6YmjwsYGu8dOQ+iHHCDlCIWkfm6sytcKfulDyZ5/Y49RkiLGWul5Rr9c0mwVNHfBtss0GJcW3OsGKIcsGFKVDiYyGI4oip9WABkOejzHEtIvETliPmpybGsE3iHNJVI8eQkxRp3mGMUJs6+RaDA31ZkPhHGJyjMvBWAgeQkuzXlBXBe4irMw18q3TneNRQZUoPh05KQuY4BsWi2Nu3b7OU88/x/GsJnYxqAFQSb9IVFat57iaYY5m6ZqTEayxKSY5RnJnMaKAJwSIIeIj+JhERtW0zVLXZVr/H6c99o7Hnp6enp6enjcqr7v4+HrFgzrnvuaIyfV6ze///u+/IcTHCxcu8J3f+Z2UZfmgh9LzFTh9+vQ3hPj47ne/mzNnzvTxvD09PT09PT09PT2vJxLJJZJZhzWm63CMKe5TFdUUsfrSeFURkjMS7ZIY5b74GAGbjHQIijWBK4/tsbs9haZOUakoIXps9KjJu3jV1KdHVMSeiJDpx5gMrKBtwNgiOdhE0OCRzCRhUgAnRO/B5cQQiDFi8ggmEhrfCYKKuBQ1KQgqAYxDQkCMoxiO2axv0zQZo/EEtRaTkzoqmxav4IxFQ0Qye2LzxNiuA1It0Xs0eIgFGlPvpFiLaJsEsTZC64l1TTQRIxkqgnUWYy2xqVHxmHKAKTKiB6sWlzuywRC8p1rPIEQ0KiFEjFjyYUaRF+ROsGKpXM1ysyZbzSh2ci6ePsPZvVM8ur/P71y7y++2e2yixRwsiYsNlUJAOQ6BJgR8iHhVQpscdg4ondACgRTbOi4MUQScSZGd2lK3NRLXxJBcgk2VRDztxEvhJDJUyAERJcsFjQajYAJsGWEbQ6vKUYzMVGmInM4dH3til+99eJsyM4SmYbU8olqtqDcNPipBBd+mfs68sNg8CdulyRiMxoyGQ5q2wRhDVE/hMgZbO6yXC6zJMMagBqSV5J7VgCigJj1meZczGhA1ZCbHZRlVtUl/ZyPK4RAngjU5saoxpWKyDDcUWC9pVi12tgd711B9F4JNzkhNzsckxgvEiK89+4cHPHftea5dv911sAoprPgEve9QjCc2xOC7X9qXHPAv/v4Vo1KVrzS3p6enp6enp+dNwesqPhpjGA6Hr8u6Xonz8YUXXuCLX/wi+jJZ+99oPPHEE3zHd3xHLyZ9gzMcDr8hok7f//73s7e396CH0dPT09PT09PT0/OWIjNgjcO5LHUrxuQ4tMaj/sXIxROXo01+RKSTIjsJslua8EdzTKwT9s5uE31NUy1xzTqJiTGCcQTf4DKLtgEVMGKT4BN9EvZECCGgXrulp8BXjAUxxDZgHIR6DY2CzZAYwSuSZyk31Bhi0yAuQ2KKlDWZSU5IMUlEInU2ihjEZmxqT+EM6+Wc6jjDuQHNfIMpHMaCNWm5SbwUjDWYLCe2Ad+2hBixnZVMrCDabUFj0VijFtywpJ0vqDf72PEQW5YEUWIMiBiMJAGYANF7xCoMIsZlWOOIUbGDDF9XaBYRFbAwHI5RMbhqyVos83qNWy3Z2dpiOBzw0IXzDMqS+Hs3WbgSfd/bqO8taK8fIpuKVausYmAdAwv1eEL3+SvRKxVps8UW1jGiAmoDxgjGGEpnyJ2h8YGIEBRySR9FhhBj55aNinS/u6iMjXC+NFzMYRKFGCw3a0NsG2rxPD4u+PPffJZvOTfFmMjy+IjNYsm6qljODKIGcSkutMiVcmjAwWg0ZDCY0GyWFPmQ4XAbWR0CLUYdk50domS0TctwAMZZom/ui+GxDfcTelKccCfbxbRPOleQ5SUbf0RQwVhB1KMRos8xzuGKkqggLpIPB4TVAfGOp3i8og1LnB0TiRhJ3aVRG4iCqrDerLh77w7PPP8s83mVukwhuSRJNwK8Er7xryr19PT09PT09Ly2PHhV5DXia3U+xhj5whe+wBe/+MXXcFSvDkVR8OEPf5gzZ8486KH0/DGICFtbW9y8efOBjWE8HvOud72L0Wj0wMbQ09PT09PT09PT81akLHOiT1GTGkHVY0yGCSmKUTWg6rrYU4UXmx1RVSKKwWA6lyLyohgpgPdCPV+yzhoGREQh+DoJODEAHoKk9WtEJSaRRgX1PsWVdlGsmpRR1CQBKDRNJ4I1SRSyRRIUTzooXXJQavBpROLA17joiNEm96MCJgmEJnPYzDAYDDjePyQbDGmbmjYq5WjIOM+pVgs69bETB9N7MkWOMRlt2CTnXBuSwOocEkhRrTFttyRqeawbYLZ2qOZzmqMjJEK+tYvaLMXdxgYng+Qu1YjLSqwrIEbEGqwRXJ7dj6W1MRBWzf3tPyiGlMWQ9doyr1foQtgxluF4yqXRhD81HvCFL17j1rOfJ1y5QnziXbCqaZ5+Hn/3HhKVgCOSYkojhnmjHDWgamgI3KPmIEasgSceucCpi5dp1bOY3eN3P3+N/Sp
gncNljjMTx7svnWeEwHxOc+8IQ8S5zumYg4hhuRHuNcqdGo5CS8wif/rSLv/NO85yZpyzWs5ZHN9jPd9QV8J8kdG0hp3dSDFScpcxGFlckYEKhRuws32GhXO07RrvBygBEw3j8TY2K1kv5rTtBjG7yeNrHGI8GCFq7IR2g0SDWEHbFkOB2gxrNxSDIWoc2kSqJrIyDSbUuKJEDQRVYsozxQ1KsiLn6PYxO/UFvL1LJmOsuCSti2BwnavVs1gtuXPvLs/fuIP38X7k6skxpvcDjnt6enp6enp6er4W3rTio7X2a3KcrVYrfvVXf5X9/f3XcFSvDqPRiO/7vu970MPo+SrZ2tp6oOu/evUqTzzxRN/32NPT09PT09PT0/M6M33kMZbPPZuiQk2O6cJOoxFUI0GTlAedW006we4+0v2b5MeoYDtnJCjOKUWeEQVcOSAzBgmKasTlBWiLBE02Oo2EGFOnpETI05KianJEErFFToyeKNp1NoYUw4oQY0guRq+YogDviVnEyiAJck2NRiWqJsEws2CS0CkmxzrF5gVZPsBax6ZaURRFckOKIHmGXQlt22DtSd+kgAbgxM6XRFTfVNg275xsMfX32RROa/IMWTp805DlA8rpNo0I7ewQESWf7oErIAT8ckWISpQW0zS0myVihBAi1hY4lxFoMNYmkc97LAYfIstNGn9ejjHGMV/OURFCUMajEbt7u7zn3QO2n3maO3e/xOrWC/jzV5h+6zcRmobi6HlGi0MKMWCBqFRNy/5Ry91ZoGqVQe3I2sjuuW0+8ie+m9MPPUFdt9y89jSr9lOMj5e8+93vxNgc1UDuW77psW+mCJ7n/93/iL9zD9emXsz9dcY6GA69chBq5tqwN8354W86x/uvnMHElnu3bzA/WuArT2igahyo5cL5jLMXp6iF9XqOcY6zu5dp2orVaoUxhiIfsFmu2ciMDMNkd4t8OKRtPMvZAcSIFYMhuXKttQSbiiMFQQzp8yY5ZDUoWMVmjizLU5eiMV3XpOJjoN7UDDYNzhZdgpViXcZwOODGrTvsLAb48hCRx1NvaMoUBknri3VgMZ9zZ/8us9kKz4kLWf/QEdi7GHt6enp6enp6vnbe1OLjVyu2qCr37t3j3/ybf/OGiVx9z3ve86CH0fNVsr29/UDX/9BDD/H4448/0DH09PT09PT09PT0vBU5+54PoNWS1fWbEFtEDHRCX4xtitrsnivErmnOdU7CE+ExCZQW7YRHSS5IhDwT8iyyPDhm+9wu+WQHiyG0DaZw2HKAeI+2nugjGgPiLOIclEXqdKya5CS0Bg0Rv9wQW4MrMqzrBESjEJQYU1asqAVjMOZEKGpQjYAhth5rXTq31pC6/EQwzmHzDOcsw+GI44M75KIEDYgzqcfRCpvjIzJ3GpPnKd5VHMZkiM0x4tCQtl1aT3o/JrfE0CCqmCzHFgXtZk2MHpsXDHZPU2mkOTiAECl2z2DKPG3fGIiVJ7YbqmqVHKcasC4nNDneR4zLMXmGsYIaS1EOWK02DPOSqmnIXMZ4PIbo2T+8zXJVcGrvNMNByZXLVxEXaDYtsr5N88wB8+ku7YXH0fGIwfKAU/Wa3AhtaLjcVrxw/QYHB0umKxgvMx562zt44p0fZO/yw1R1xdbpCzR2wJ3Du3zzN72DwXgPmxd86pd/ERmMuXjlMezxLW78h38LXoitoOpBCwyR0iqX90b82fc+xKN7U2ZHd7h35w6r4xofUtypbwyDUrh8dsD23hbRBDbNGiNC6QrKfEBmSjbrDYvlDMFDbGlnnsnpMxTjMVLkVIfHVJsVw3Irfc56slMn0VFjhEzQGJLHUEjxucEDSWDOrcNmOVLXSPQQHEYNooHYBkK1RlUwWY5khrIYMjCR+Qt3GJwlraMTuVV80rOB2rcsF2vu3r1H1YT7x+1rfVWoFzR7enp6enp63gq8KcVHY1KUy1fbh6iq/Pt//+95+umnX+ORvTp8/OMf/5oiZXseLA/S+VgUBe9617s4derUAxtDT09PT09PT09Pz1uVrUefoDo+pDqe45cV2AIMhHDirlIMSiQggBUFIoKlyxHt0PuCReyCIAUlyyOD3BO9UmRZcjgai3WCcSnmNToFlxPbFokC1oITXG7BOnyWHGUmc4TFnPm162yf2UEqQW2GGWRkwykmK4khgg/4dU1db3CDEkxMYmQQlIhknRBIjrEpwzK6GsRhihI3qhkxpVosUV8joSV6D61iTIazBXVd4fIc1TZtjzxDuh5KldT7mPkWDRFxDlUBzdCgqI2YokSalrZZJ8ddWVLs7iLG0C6OiVHJtqa40VbaHkWORp86K31AohKrirBcEzVJvzWRAGBADYiNuCyjzDLqpmJUlizWM8pywKZecOdOYDwYMZ5MGQx3MG6FU+HyZIcY4fnnfp8DW+IffweLrfNckA3b7YaxbxiUQ+7u3qNuarZvHbB1cZdTZy9w+uIl6sYz3DlF4z3myT/gySefQfgSuzu7rKqWFhhMtjj/LR9g/9f+LYMCqg3oIjIUT8iFd145xXc/foFSG55/5inu3FjQtpGyzMmKgqIMjEcDxtMR2SAjhIaqrhiWE7QQQmwI0eNyR5FnLOsFuVhi41mvDeevDpCywPuW9WaO9x5XFBiT9rkUCWwI3oNG1DsISXgUEdQKGhWjEWMtWT7AlQVxUyGioG3n0AXEEuqQNP3MIZKR5YbpeMzdwwPGdpcmLMnsiPCSYyoq1KsNB8f73Ds4wofXUw7s5ceenp6enp6eNz9vSvHRWstkMvmqn3/79m3+6T/9p6/hiF49xuMxP/ADP/Cgh9HzNTAejx/YuqfTKR/84Ae/aiG+p6fnjc3VqeHJ/66P5e7p6enp6flGoTx7lq1H3sZm/y6LJ59E64YYIqq2kx0FryQh0XhM8o8hXS9jCldNHYzpv4R08mOeKYUFmUyweYHNC8Rl2NzhBjkmk1QjaWxyP1YNqGJyhwlpKa7IwJSEpmZzNGe0NWG0vUW7XhHUJ/dk7oAWaxUc2CzHVEo+GqNGQT1isjTaLh4VBVUQVdSDmAgaMEVOFmH73Cnmt+9BiKARk1uok/tzs5xTlgNskad3KiCSQjEFgxGbIl5DRDQiYpHMEH3ASAF5gSs97aLGNzXGZdiioNw7jYaWan5MW6/JJhV2NEbyAskcgiFI0wmOGUjonJ4R1EHTEGipfIXHMp8fIVgyWzAqS9brDUM3wIckrm3aGlstEA2Mywmz4yPqUtnZ2eIhI2zNj4k3vkizfYY7u+fZn26z3VTsFCMm6pm0Laot+bBlPBoxGowoBhFxlrPnL3P9xgvcvn0LsYZb9w7Z2jtDEypUhOHuaayD6diwNc24t+WIbshD45wPXDpNu5zz1LO3uHMnMJqUPPTQmJ29LZzLgEjUSPQtra+pqxVODLvTU2yqiqP5huPjI3Z3dhELJgaaNjA/hmGZ4QYDxBmaZc2mXmKcw7ksuWTVp2pOa1DRrqNUsdpFraKIgBKSG9g4rDFkxYDMrfB1pJGKGD3jsdBsVhjryEcDknCf0lmHgyHtck6sCq
I5IDOjtE9KJ3y2ntVqzsHxPrP54n7Xo74OouDrsY6enp4vT3/e3NPT0/P68LqKjyLyNfUwvlKcc1+T+Pgv/+W/5Mknn3wNR/Tq8aEPfYiLFy8+6GH0fA0MBoMHtu6dnR0+8IEPPLD19/T09PT09PT09LyVyXZ3mDz0ELHdQFOxeOopfNMQVYgBoiqKwUpEJLkbzUkViICIpp46JAmQnfoYuxbI4TjDlWNG45LRmfMU5RhxBcYIOEdkQ2wXaIjceeoGzz5zm3PnBlx+5BFcPiAaxaglSiSuK/yqZnrxLKbIkLoh1mv8xmMHyU1I9CkO00SK7RHGObACuUtdkgE0hBRziaLRo2LAuCScGocYD5mhmAwZ+22a5ZpqtSJ3BX61wbiMsiioliuGWZZccpEkMFqLhpiiUWNMbkUpEKOoNakvUxSTWUyRY6oCbRpCUyNGsIMhgzMXkf3bbFbHbA5uw7ogG2+RD7aSY9LluDxPbkxVYuMhKMYIJs/xwYM3tD6w2SzT+44RS0vmMrLcMpUR+4f77G1t4X1LtAGMweXCnf3r5M5RFDnDQc6gGDHdHtDqnFt37vCCd9zcPc3w4juZrPbZyguag5u4Zka9WlFuTRmUBdu7u0ymE8pBSdM0GGsQUTabNVW9hIM71DUsGSIXHyYTy7eZyOUC9m9cY//WnCwreec3n+LU+W18FK7fOCQ3LRcunEuiY2ixkrMzGVGtZuRFjm8brIGqXbJYQmg9w+EOVbXkaLbh3MUtsuGAGJWm3uB9wJkcJwaNEYk2RQ9rm1yxIRIJGJPqGOkihYmdw9cK1jiKfIiITdG3GgjqWa/nZNYxnG5jrcO4LH3OrmBUDGD/Dtd++4uc+eYxhTvfmYkjIgbfNixWKw4ODlmsqiQ8au9H7Onp6enp6el5tXhTio9frfNRVfm93/s9fuZnfuYN0fUI8P3f//2vyzbsefV4UJ+XiPC+972PM2fOPJD19/T09PT09PT09LzVcYMRo3MXsR7ao2PqwwP8nTudvuJBtOtvVAx0kqLcD1aNCgZDIBBVid15qwGcCVx86DKX3/E+SrU441CxiBeEJNBprBBraKoVhzf3efr5Y7ZH4DcrsqLEEBGb41dz6oMjBnvbZINB6nnMBPEuiUChRWyBsTnkOdiYfrdJGtUQ0aAYJ2hr0djef0dAEouiIljE5NjcEoJQbu+gAerlgnynpNjdwSwWNG1Fta7Ixy1WI9gcMRmiBk5EWNUUK6vJ7SguTSNEJHeYIscORoSoBN8grcW4iMlzylOnwRrWxwe0qxU+1PjNhmy0ix0WWJdhMgcIYg2EThj2IJnB1eCsIrJhsZrRtA1hvcA5R20iNhtgBiUByygb0gRBg2ddV/ioPP/8DZ54/BGybEAbA9V6w/aZc5S5w117gdXNQ1bFhNvbZ9BzZyhCxf4LT3N27zL1UUs52WI42mE62WEwGNKGgFFFxNI0DbPZXW5++rdYXf4m7OUz7PgN7x5ZinrBvevXqNeRR9/2MOVgwKpZ8dQztzmeBy5c2OHyQ5fI8gy3WuPbyKgYYR1slgtU0v5pshyMJwRhONpCnOPG9RniDDtntpEip6lqNusVMYLJLcZmYAQ0JNdvjJ07NuK9B/G4vKuXEQO47vM22MyQuRxrFdUaVSVoJCjEk/7RInV9qoCJhizPKBWuP3ONrUfejg4VIwaNFhS8b6lWG+aLJXUb/ivX44nL+I1xpainp6enp6en5xuP1118fD3iH79a8XE+n/OTP/mTPPfcc6/6GJxz7O7uoqrcu3fvVVnm9vY2H/rQh7DWvirL63lzY4zhh37ohzDGPOih9PT09PT09PT09LwlESNk0yn2gmF3OWdzcI9mPidsakIUgnY9iV2cKN1fphPXrAgqLzqyRKQTQyKnThW84+3vYMgQ0Yj6CHjwEc2EYGpEW3xV09w9JEbl9B6cPbODZA47KpHc4GdrVjfuIJlhdHY3iY0xItZ25xKK+s59mDkkM11EqSaRtOvdw0QkKOoiMUgqRgSIPglPoil2UzKChhSzSmSwu8Xszh02myVOcppNBUSK7QHNZk3BHgBGIuIEQkzO0RhQtSm2NbSASeJYjBgUm2VomRObnFQT2BLMGmdHmKLohM8I82OaTcu6OSRvA1kzwpVDsmGJGIMxGdEEQLGZxUTBZ4JrWwpj0cGUY46pQsXUZUgV0bBhKgVNtcZrS8CQ2wzBMJkMOT44pgktWZYDwnpxTG6FfDzh9O4eO01LOSp5+oXPMZOS4/GEzz/zBZ5brtg6fZHhzi5bO2fZnm4xyHLqoiB2DtEQIjevPYMfOkZbAw5/8w8YZp7bO4bSBXKXs7M7Zj5f88LNW7QaGQ6GvPc9V5ns7CSHZ4y0UajWS8blMImEROpqAxLJ1JBlQ8rRlDYE5sv/P3t/HqxZdpdnos9vrbX3/sYz51iZVZlZs1SqUlVJKomShJCEoJGNLbgM3TZwjbGDuB64doBNOHC05+4bxA2Hb1zbgd3hiy0cHbaBNkai2wYLsAUIBEJDoamkmivnzDN9wx7WWr/7x9qZVYWEqCEzS1KsR5FVmed8397fdFJ1zvO97zvj7AXPTZsTxqsrxBCoZ/ssmjkaWgqzktKp/X4mekU8CuDwvqZwJYSAdhGcIEaI0SNRMGIoy4qyrJAgBL9ATUEIJBltbQpLxg6RAlXFGMfKeEKYXSC0e+k1bUqUDlVD13XMmzn78318iL1lfG6LMUvHTCaTyWQymVfGDY9kOecoioKu667rOdbW1r7iZZqm4d/9u3/HBz7wgfQuu2vI+vo63/7t386P//iP88EPfpAf/dEfvSbHffDBBzlw4EDe7/sao6qqV+W86+vrvPOd73xVzp3JZDKZTCaTyWRArMNah1ufMrnpZrbuvpf64ln2vvA4+A76hKOViBEIUdLeI+mXAZKiEywCmgpXi1K4677bmQ7X0BBS3WgIEAMRRdwAzJLYKu3ujMX2Ls8+u0csIbQRCYKGjvbigv2nnqU6uM5oaxN8IAaPNi3iCsTUKQVpCxCT9vKigjcp9SgmmVGTalGVtMtoKIEkm7Au1WlqL3aEJCERRAymsEw21phf3qWcDhmuTIkhEtqGTmskRHCOFG10pCJa6TcfPcQqiVkjCCbtM4YIxiDWYKsC9R2hC+A7bNchrsKOJgzUYooC2T5H7Fq87hDinMJP0bCKG47QwmDsFcElGA/WOMQC6dFmWFbMQmDpWwbViLIcYiJIUOrgCd6zFMNgOKawjpXNNS7vX2R9ugoiRBOYz3fTzqJEvG8p7QqHN1Y5GAxnLz/D6cc/xWk/4MJNt1Kdup3tZYetBtAsGY+GeB/Ta06E+dlnsb//GwyaGVu1IfiSXVMSB4bgAl23z3R1wvEDxxhUFYUIk+l6qrMVR9fU7F0+S6yXGEza6jTKcu8ylbOsTdcoyopF27G32ObJJ/ZpFp7D9xzAVI6ua1gu9+iaGUZKqnKIEQMeMKk2NQ15po3PEBQtXBKJrkCcIXZNLykFW1YUVZGqZbUjKPimQwvFFRZXuP75T8nK9HXiWSkHDAI09
YyoLR6DSkxJXB9pupa6rtGoaWg1G8dMJpPJZDKZa8YNl4/WWqy111U+lmXJ4cOH/8jPqyqf+tSn+Of//J9z6dKla3rugwcP8lf+yl/hh3/4h1lZWeHixYvX7NhveMMb2NzcvGbHy9wYXi35+O53v5uVlZVX5dyZTCaTyWQymUwGrDFYaxFXMNg6wOTYLazc8Rrq3V3aZxeIRKx4jIARIYqiomgEg2CFfvORfu8x/X5ja4VTt9+BMQbtGmLqjERDk2pIY0S1xTcN7Wyf/cu7nN9pOHXzgNC0xKD4umH36dMMN1eZnjiGKMRFg6pHrMEMS1wY9TLTEbsGjamKFNNibJHEHQrGgKSNRykq6Lf1CAo40Ctv+DWpUdNaRIskEGML0WDEsnf5PEU1xmrEOIuTMonFqEBETGp44cpsStS+8jWAGMT0LUEpOor0O43SLJFgUY2E0GG1ABHceIipipTg3N+lrheEuiXIgsaDRqGc2FRnq4raJMlMiGDAGUENjGVM5wP1csmibSiqEa4QjBlA6FjGhk4jPni6ZoaoYeYD42LEYFxRL1rKyQhUWMz3cNUI72tiaPFqOLC+QVUKT509w87nf4fZ7ALu9jfQeU9p4OgtN9N0HiOGGAOxEM43EY3K1iYc3BixOh4zHU8ZTyaIEzRA3S7xQWm7GiZJhBIDbb2gWdaIGaRK2tBibIV1A4rxAFcO8SEwW+xx/sweTz4pHNmwrB1YBQzdMrBsG0JQiqrCFUlex+gx2id5Q+j3OyPQEUIDDPrKVdO/4NM+IwZiCMSomKD42NG1IManRKUEIoJoLx9F0samEQZWaJvdlIq05dVEbtBI13mWbUfU515SmUwmk8lkMplrw6siH6/3Bt5oNOL222//Iz8/n8/5B//gH/CJT3zimp53fX2dv//3/z7f8z3fw+rqKsvlkkceeeSaHHt1dZV77rmH0Wh0TY6X+frnfe97X65czWQymUwmk8lkXkUKV6bEnBrMaEB18CCrp+6g2b3EcucCpp1hDVgpACWogAoiimhagLwSx7pSCGmMcsvJQ0wHI/AevBJih1hLrFsYCBqWhHZOt7vH4vIOZ5/dxxrlpiPrqII4odndZ7mzy8rBLeLeEjMaImWBs47oakLTgtLLokCzW+M7jzGGoirBgB2MMVYwZYU4kyovQ5+OlNjfct9XbIJc3Wi0RGvTNIs6xHpCiOxe3saVLQcOH0JEU5pTIkpE++9tIhGix4lBg+/PlVKYIkI0JgUsrcFQogZsMSJ2M6J6fFgitcVUFZQOOxgxPHAEW42xOxdZLPcIdYcWMzpjUW2x1QBXDjBFhYiiNqVRowEzHFBZR9V52tYTJRJQJED0IaXxRPFdQ+gagl9QuDFCZL+es7K5yTAE5rN9JoMxZTlmfzFnaEvG1ZhL8z3GxYihqzi8sUJhZ/iw5ImdbfZ9xMSOp86fpe5atOtwwHB7D7Y9kzWoCmE4GhK8R7vAYJCEcqstxhS09YxutsdkPEFRuhCo6xnDySrWGrwouIphMQBgb9kQ9veYz3c598ycp86WoMLRw0PK0QjftMyXc5aLBdFDMa0wRdFHeSVtkaqm9GFQ6GtVkZg8tYT0XBpJtayxS4lWoxA9oW1pmkhbK13d0LSRsm5BHMYajOs3IMXhioKV4ZhlaBEEY4r0FRWUECMhRnwISd73RcKZTCaTyWQymWvDDZePRVFQFMV1Pcfdd9/NwYMHv+znuq7jJ37iJ/jgBz+IXsO3tq2vr/PTP/3TfOu3fitlWV491+/93u9dk+OfOHGCu+66K1eufg1yvV/vX47jx4/z4IMP3vDzZjKZTCaTyWQymechJIlGSuy56YTJ0Ztp9neYXThDvfg0plWsUa7ID5GIxpQkjJpqNI2m7wMNwmRsufmWY0jbEHxHRCEGdLkEIrYcEf0e3f6cdm/O7PI2F3caNg+MKawwmExxVcXuM2fQrqOdzcEUFCimsIgrU2LPOGJQwrLDLxe0+0ua5S4aI7ZwuOGYctxQDAfYCMY6sJr2HWNI9926VLGpgqghxhZjHCoGgkc1pRMRGK5OOfvMBS6fvogRz2g0YTgdoTEADrEWsQZXDIjSoGJBIhGPjTE9drbAxNAL0NCLVotUDhMqYicQIXYBcQFplVik+11OJxjnkN2CxewiwUdC25GuIIhaBEOIESn7x0kKNLYQldFoTFCPRiWEiLGOKOHqgmD0Nd7XaDSIMZSDEXU959zZpxmNp3QhsD/boRwOCKGl7mpWVtcp24YgSaCK7DIaFtTNgmL/EqNyyqXFnPPLmunaGruXzqIXznJ0r8V2lijQRfBeCV1Da5e0XYeq0jYtXdtQGEe5sobvYpruRBgPpzAAxBJjR1RDG2rqekG9XNAsZmyf95y7ULLoKg6utRw4vIEYoZ7NmS/38N7jbIFzJUYMYhxYk5KOIaSEpmgS7VcqdWNKXprCEWIaO01eMNURp6+JyGKh2CgEjdT1ErsHg9bjqgpbFpiihMJQVhXTckCrEDUQowdN6d2omn6FkLVjJpPJZDKZzHXghstH09fOXM/jf/M3f/OXTXwtl0v+xb/4F/zLf/kvr+nO44kTJ/gn/+Sf8N73vvcF9+0zn/kMZ86cuSbnuPnmmzlx4sQ1OVbmxjIYDG74Od/xjnewsbGRZXUmk8lkMplMJvMqIv16oxLxGsFZBge22Gjvot3Zpbm0w/L0GVTBx1QdCZrm8DTtPZqrC5ApEXnT8QOsVI56+wKGPuFnLOpb3OoUHxdo2Me3HcvZkr2dmkUQjq0XaBcoigGxC+xf3mZQlczPXyR2gdBOMVVJMR6CQKxb/KKmmzf4ekkILTFEunaBdEIROggBbSOujdjRELEGrQyCImogxFTj6gqUQMothr5CNtW5aoi9pIX1rTXOnN5juWhYO3AI40yfBO23HFWJMaAEFIeYVJmaNh9JKUm9st8niEQoLBILTOmT9AohSdAO1FSkfKlAYXHFhJG1GLHU+zt0zQzvLcS++hZQZxAp03NjC2K0mAIGMsZaS123qHqCdnQxIKYiaofGCMETfUMwQjVepxiP8b5muRcobUkzm1PPdzECe3uXabuGKJEmKFYsIULjG5Yzz5hzbN52E5uHN3GXdhmsbTJSYfvsM9itdcyFfSQK2jmcUQaTCeV4RNstIQrOGorRGLEFIopxFhXQ2iMGQujwMdJ4T90tWS728HVDN1uyu9Oxt1tSh4LKKreemLJ+cAvvPfPlgrqeoSi2LCiN60WwBy2SUNQ0yxlDl2QjYMRefa41KpgCtAVNglI11etGjcQ2EqMw319gzBmCXyH4VcqmpqwGGOtwkwlFUTIpC7a7Nq2miiUSU7Uxafsx5Wqv5IozmUwmk8lkMteKr7vko3OOd73rXV/y8bZt+eAHP8g//sf/mMVicc3Od+rUKf7e3/t7vOc97/kSqfqrv/qr1+QcRVFw++23573Hr1FijDf0fEVR8PDDDzOZTG7oeTOZTCaTyWQymcwfwgiCJcaUiEMsdjhgePAg05O3snLuaZq9XZrdfUKMkHQiURUwabowHSa1VhrD
0eNHiMs5rd/H2Zh2EW0FLtAtu5T884FuWTPfvcRyWTOdWKajAdaVWGcI9ZJ22bK5dYR2b5tmdw+NYF2F31+kSswu4OslvmuT8NNkQ4tyBDGlxWLb4AFiTOk0J8QYsK5M4g+SCOy6lHAEjAqCwZiKGJZgFGMt0QSma2MOHVlje3fO6NJlhuOCarpyVeImFasYST/KUE3VnDEEbLREjekSYlLqMnrQiHEOLQrwHulFbegCYtOvZHvTGdygYrixiS0Lmt3LdO2C0Ozh1aOhoyiHiJL2Jf0y7QiKYF1Ici0oGoToA1aUQEA1oCZJNKJJuksMhSsQVxBDoCorRByNn6OhS3JtNkNNpG09EiJ+uU+37OjmSrl8imI85onVm1jWHfe/9nU8WRb4ep+Vw0cofv+j3H/LcQ6srzEoS1xh6RtpkdjbXiP9G1ZTda2qoNje3Sr1bJdZs6RbzPH1glB7ljNPszQ0vsQIHDsQOHHyMLYomM/3qf2SGD3WGgo3SGlSSb27Gj1qDCGSEqBikkz0nmhd2gDViDGCeEXpq3QBMQ71AgqDgaPpAl4DofPE4AkhEE16basr0nlHA0bllMI7okpKVqr0adr02jEYsnjMZDKZTCaTufZ83W0+njx5kttuu+0FH4sx8rGPfYx/9I/+EU899dQ1O9fhw4f5O3/n7/C+973vy6bbrpV8nEwmvP71r88ptq9R6rq+oee7+eabec1rXvOq1L1mMplMJpPJZDKZ5xAMKjFtGxowzmKkgKkwPHSElVvvZH7+PPX+p5GoWBR/pWqyz/RJ/yeD4JxQ759m7iyTYYUtBrihIMWVa3hQpV22LPd2qfdmGBM5dHTKZDLBhIAUlm5RY2yBqwboYERbLwj1HC0DcRnwXYcPLaGrCSESYsC4AmcKnAHUEroWH2NvRi0sHbZ0GAwhJqklkkQjhaI+1bBGY8B3oHI1oahJ+2HKkoPHNvj0x2dcunCZm8ZHUWv6c/TDkWpBiiTSYl/pKkIMHcYVqAgi6Y3BYmxKXhrBOEssS+Ji0W8KFqAGrT1aJsmFcYixuPEYcQXGDbG75+mWC+KyhpiElzHpuRVroegwjiRfjaMYCLEjCdAgqPegAdWUsBRrUXVE3yHOYsXhDAgBYy2FlARAxGDE0nZLrCsI2uKqglEXCYOWGMCee5phtUVYeD75Wx/mzOlnwC8ZnGk46hdsbpxkvX8Tsxghdm2S4NYQCRiT1FuSrwaiolg0emIHi+WSxd4F1Hdoq3TLQOgMnbfEaDk09dx99xHGK1PqZslssUO9XGIowAScK7DGYGyJqKABYvTE2O+UKqnGFwu49BzH9HpS4xGVJHmRlHyMkdYrxUCgcKC+/zrpUn61P4eKxy9rikFBZS2D2kHoa1Y1pt1QLMY4bN+alfVjJpPJZDKZzLXlVZGP17N29V3vetcL5Kaqcv78eX7sx36MT3ziE9ds59E5xz/8h/+Q7/7u76aqqi/5/KVLl/jUpz51Tc41nU65//77r8mxMjeeEMINPd8999zDrbfeekPPmclkMplMJpPJZL6UoB1WC1QVYwwRnyRVYXDTKaMjN7Fyx10sdy+wfPocMSTVmDKDeqUQlCtVrOo9FXNGoy2q4YBiPMaUNkkzYkqANQ2+qan394g1lKOS9QPrWGuIEdyoYnFpm8GwTIKrcJgmpS6jj/iupWtbojb44NEYcGaQRKJGkLS9KAE0BELbpASZsYgrkAhprDKCS5uPEpI3NApoSPfHuLSZaELaczQeI0JZWqqhY2d/yer+HhshoCEmCWXow6G9bBXSpiTJn6WdSQukmk+hr/EUBZMEo7EFURuUQFSDFM9tampo0+WMxQ4KYITKJmqEuKwJPoC0iG2ICFJVSASNNl2ncERtkKICG5HOEKNS2AIRhxiTKmM1IiY9RkYipatQlBjBmBLBoxGQDo0NMfZiMFhC6NAY0s9VxmNuq5RDRcXHz5xh59wZQt1Qx44N69NmZYyYyqHBo0TQVIWrYtNryyRZqyQpCYrgCCxYLHbxzRKrQmih7SJ1v5t5YKXh1tsPsHH0EFEjTd2wrBu6rkEUyrLCWosxAkaThAeiF2KI6TXSpxyjeqBDjOmfSL36hKpqX5ebEo5Nm3Y0VzdXcc5SOJd2RJ9/X0KHj0tMO8IOCqraE5sGKSMRSbWv0oGlT+Tm2tVMJpPJZDKZa80Nl4/OueuWyBIR3vOe97xg73GxWPDX/tpf48Mf/vA1O89wOORv/a2/xfd///f/kSnOj370o8zn82tyvkOHDnHXXXddk2Nlbjxt296wc1VVxetf/3oOHz58w86ZyWQymUwmk8lkvjzGuKQ1JNVciqRqUFWgMri1FcbHjrOx/3ou1r/N/OwOsZeO0Hs2kV7aweqa4fDWCsPRiHI8wA6rq8JGoxC7hth5mv09/H4NRhmvr7CyskK9PaMYjjCFo1vWpPcEK0bBSC99pFefomhQrLEYV6ZqSmMwIhhJyTgDRAlEXyPGEboWaZZgPE5G4EwSjZqMocSYspyqaafSmqspPOn7QGNQuq7j8tywt1uxuRXQ0EshI2mfkDqlCDWm9CIGfCTaiHFJaElfpalEUkTSIlYwrgNn0OiuNNz2gcpUx5oeA0ts2r5K1VGOVzBi6WQXXc6IXZ3SlTGF9Gz/3CKxr9YFjMG6vr7UBXwosFYQlauPt2JS/ayz/W2MhNAROk9bzzGmQmnQtiUE6JqGum3oWljf2ILRCt3kEFEMG0XkTUdW8fMdFhbGky18s89nu5Kdfc+J2GGiJ3ZXNjEFQdH+3Gq5KnTT42JQVTREYquIKl2rtD6iwTCdRA4fHrJ1eAtbVTTLBXWzoK0XIGBsSVEMKAqHsTaJcyKhS3WzKaEbkowEjAgxptYqcennOYKmP6fgI6ghBIWgiBHaeo6djLFuiDVlur3eQ2FQCcQuPXZ2MGIQOhbLfZisg9AnilPtq3UWI5rdYyaTyWQymcw15usq+bi5uclrX/vaq39u25af+qmf4ud+7ueu2TmGwyE/8AM/wF/+y3/5K9bHfuxjH7sm0klEeOihh65rVW3m+nIja1cPHDjAQw899AIBn8lkMplMJpPJZF4dYtQ+WRXRGDAIiMNrSDuFwxHjI0exIkTfUs8+jNmLWNMRtcBrknKCUBjl1M1T1lanFOMSO6gwJsmjGAMxRkLb0cwWLPe2iepxk4K1jQ2GwxHzszusHJ4QW6Vta6pxhbi0oSeuIpJ2CtPOXqqHFdF0jj61Zm1J9F3aUhSLRpAY0vajSEpEWkukw8gACIjrJSNJDGL6nb1oQLokltBUrWoCvm15Zl+Y1SW3LGOSkzYlC82VKtfe8YkkMQY8r5aVPpkZUbFoiJDybkhRIl6RmLYjsfR1qebqMZSQNhltiVqLsUJZTBFrUGfwi31i2xBCoEB7mSZEBVsWGCOoKOoVsQ5bWFy0FK6ktoqxhhA9MfYJvAgaO2L0hK4l+oiECNJAAAK0+wsWbcdgNGTrpiMMV6Y8tr1g0XgOv/EtNGefhqDccd8BdtuW4XjC9qWLfKzrsBca7tytubMUNipHYdP
OopNIRFHtX58iGOdwZQHWJgVuksT1Xggh4hSGU8PqypADhw5QTsb4rqOua2bLfUJXg1S4wuFchZMibTUCvo3gPYghhPQYR3F9yrevzzUuiXljEY391idoVFQjiMEHcFaJPtLNa0xwmJHF2ohXjzUVIoKGQGxanE4pgqfbfhJz4BiqyTobKXG2whVlenPAl7GPOQ+ZyWQymUwm8/J5VZKP10ukvf71r2c6nV7dRvzwhz/MP/2n/5Su667J8a21vOc97+FHf/RHWV1d/YqX/cxnPnPN5OODDz74io+TefW4kbWrR48e5Q1veMMNO18mk/nq4sm9yIkf/+CLuuwT/+t7r/OtyWQymUwmI8LV6Q+xDo0BjRDVE0IgGmW0ucXQDtBFy84zj9N+/jE0mLSdiGBEKRAmQ8vxw6tU4yF2MMQOB4gUxNChHcTOE2pPO5sR6hacMpwMGE5WMNZSL5a4wYjQdvjWM16bkPJliopL1w9JgomkLbw0nSiIOIgQfZNknXXEtsFoJMZI1BbxBu0KPEs0BgqAykFskaJMFkeThIyh6yWhpupVCYg2xBAIQQjRMIvKrCGJTk1Vn0QBSohdElR9ZWbUSCoRTWJJg0dsAcQrLa/pcxrBWoyL6X5x1VciEcReOYZD2xZMh7gSEYsbTRDjaIyj3dshdjWd7hFioBhNkcGIKC3iHBIlPfdGMEVJESPDSlkOltTdZfCRsFzSEqBQCtu30caUAhSTKnA1BELnsUXBga2DjFamuGqIG08598VtGrOL+ewnCSib97+ZwweP8Mnf/SjbiyWD9U1iaIhBeaxruBwa3jC/zJHxEFcUGAlJmsYIPoKm7ctoPEYMzlqslOAtne9QoCgt01HF+tYKo5V1KBxd0zJfzmmamhgDrhKsKyj6xGOMgRhiuj9oLxUVUUukTY9/5LmkbEzpRu13Q1OvbXq2fIx9slKRGIlNpIl7SAQ7cdhK8KGmcCVCS/QdBKVUJVz+JLG9F+yk3wUVrDNURYGRL//1m8VjJvP1yUv5vvmlkL/HzmQymRdyw+VjURSUZXldjn3//fczHo8BuHz5Mu9///t58sknr9nxT506xd/8m3+TkydPXhWcX466rjl37lz6j/hXiDEmy8evcZqmuSHnsdby5je/ma2trRtyvkwmk8lkMplMJvOVETUYSSt6oqR6VCLEiGrAGkM5mOCKFdq6Zv2O19Bs77A4u0fA9PWQUIhweKvk8OYYYwDnQBwRT/RJ4OAhtIG2XaIEinJAMRjiygJf10SxGAthvkQ1UpRDNEDXdbTNguBDSuxpwFIgLm0fihhMv5Mnrki7hdGjwRNDkkLa1USULkbEl6AjjFisAVO6dN+tSduNKBq6qwk3SNJPrCCaUnlRlCZGmmBSGtO3aIxpNFJDXxsqKOaqnFIfURv6qUCTtiA1Vd2KSZJNgvS1ngV431eQJpGFtakhNqakndF0e4kd2FQP60YDEIMRRzvfpWtm+MUexICNATccp0QoSTymSlyHCRXOtgyqCc18hteaThfYKLjo6DRgpMRYQWMayEzJTmE43aAYD6CoMMZhqxJTWk5fnuMGHdPz5wjWcPazj+A/9xk2CDxxYZfp1hZlWeJKixkPaBYzvvDsU4zoWJ2sYMsSJFXrYoFYgga0U6gKjLU4S5+OBFCKUpisjRhvbGHHY0JU6mbJfLFH7DpMUVDYMYUb46xDMKj3xKiE2KXXE4qVXgSLYpxgTQV6JYXZj4OGlHRELSodUSOFGAZWMFHSa1c9xoMNc1oLEgdoUWGRVL0bIxo8paswlx5nsf/bjNbeBRiMM7jCURRl//OdrBozmUwmk8lkriU3XD5WVcVwOLzmxy2KgnvvvZfRaATA5z73OX7nd37nmqXOJpMJf/tv/23e9KY3/bGVlrPZjOVyeU3OO51OueOOO67JsTKvDjcq+ViWJX/yT/7JryjGM5lMJpPJZDKZzI1EASGiiFiUgCiEGIkKiEGsA1dQbWwxOX6SlcsXWe5+DF0k9ahECms4fnyV6XSILStQIXpPCC3aRULT4ZdLunZJt2zBFZSTEdVwjCkdbV1TjEeYqmR58RIQEaPgbBKKvsCKIhSYwmFdgTh71cdIv1eJkRRCbGsKNcS2oWkXiLGEGIhtjUExmmpbjTNEA0ZNkntdh7gCNTY9JiH26TaXRKR1lGWBc0JA6aJP0lEsYiKCRWyBhpA2IEUQV6ImPU4aYqoKNaCh68Vj2pwkSjpH8P2OYAHRp19X6z2T+NOohJiqQQ0Rg0FEUcAMCiq7iikLdC+mx71ZEkMg+pZSNqEosL1sVSxSKK6oKGLLYDpht15irKWqCnAGsQPESEqBEpCoED3WjnqJCdEY3GCIHZQsm5pLuwvGEvAh4ESIp59ifbrKeDxiYzigjZHlYo61DiNgsVwebvDM+c8R28Osr69RDiqMSbW4UtmUFNQkx0Gv1p12HRhrKKuC8cqUajgFjTRtzaJe0nYtMbQU5RTnBGtSva5XT+giqBIIOECMTVWrYhA1YGzaNAVQf3XDFGevdNImKRqhLBxmLHjf0rWBEMBG8Mbjuw5rDINqmgS/BjR2EDrECLYt2T79IdzoDoryZNoeVcEam7+HzmQymUwmk7kO3HD5OBgMrot8PHbsGDfffPNVMbizs8OlS5euybGLouAv/IW/wJ/9s3/2Rf1HqTHmmm3uPfjgg3nv8Wuc+Xx+Q85z+PBhHn744Rtyrkwmk8lkMplMJvPHoxKIpE1ABDQKCqna1CjGlEn0ieDGY8ZHjrIyv4PF5Qtc+MzTxGCwGAaFcNPRFVw1QkbjJPuWNaH1aKf4tqVbLOjqGh8axKWaV+MK7HhIt1szXFuBrqOe17jhBFeNKMZD3OAow6YhtM1VwSiSko5Yi/QDi+o9GIt6haIk+pZQJ7nXtXPUt0lW+o7IAt8IOINVwVUpUiiuIBAgeFRtSvsVDukTkpAE0+rAAR3eCzEaxEr/Pbb0slARUyaBKjHVsmq/G2kdqh4xRapVjaASwQZAkABCgToQL0nwqqJoStpJkn2KxQAqhhg7pI7pMR0MkFEBhUMEGrtPvb9NaGpCaBGxlCtrmNISjUEkIlFwZUEVxnTDDlfsYigZDVYpbIFaQTWJ3RDTzqSJEaFAHOAsahyuKlBr2Dk/Y2e/JpQQomBjCmemmtaGm1a3eDYKezsXWFnfghggROrBKs/MOvCnKasSay1SGGKIST4LKS2qBo0Quo4QoG6F4cAwma4yWtlEnaVtO+bzObPFgi54jBukFKGzqEiqahWhsEWfcjQoFjSianoXLGgYIKZNVazYJDxjv/colihJCsfQ0QZPaFusMTjrCCHQeCXaDts02HJAECWEgDGCWJuSqoXD+oLZ40vs2i+weuQ7UV3Ft55luyDEnHrMZDKZTCaTuda8KsnHwWBw9c/WWowxr3iX8eTJkxw9evTqn2ezGXt7e6/omFd417vexd/4G3/jRV9+MpkwmUyuybkfeOCB/C68r3H29/dvyHm+7du+7QVfW5lMJpPJZDKZTObVRbAgIDESNEmz9O+IaErTCYIxDi
kMg7U1Vo/eQn37ZfYvXmBxukEQVlYHLHzJ557cY7yubB6YUvVbhMa1hOWSLnraeo52HmsKnCmohhNkNKKZnaVcLYnes1zMmW6sYwsHKLZIAtRNhkk+ihB9i+CS0SK5PXxEY0y7ld4gnUmCr3CYtsQsZvhuCVEJXUOMfQNMnzi0Cmq7VLUpkipeuxasAytoNKni1RYUhSeoEiOob5J8ih2pBDTt/mFCSkwGMJVDxCR5FgIifT0sAY3pz5ji6uZjEmz9hKQxfcVnSnhqUCIgpr9NChoiMQYMgmlbFIeIUoxGiLG40rHc38V3C8Jyn1BUWGMwZdFXyYaUHCwtAx2zcuAI3d4ugcigGqRkY4hJUpskU9WCaECcw5YVsXBIUdBFzzNPPEvoIot9z7JbIlIS3Zi67UCV1XFgdzAiKkhoaULDZLTG2Uvn2e8KXJixVc8Zj6cYr0QfcM5gy0F6fkVp25q2rqm7SIjCeDJkdWsDNxwQBJquY97MadsFIo6iGoEp+prWiBqHKsSoEEnJURPTDqYYiF0KNhJQMX3KV/sRziS8U70uIJagIMbgnAUvlGIwpdA0AQL97I0SQ0u48oZw34tMVYwqs6eU2p2nDr9FOXwrs2aP2WyetiRf8HX73D8117FmMplMJpPJvCxuuHwcDodXq1HhORl5+fLlV3Tc58tHVaWua+q6fkXHhLTz+Nf/+l/nwIEDL1oClmXJ7bffTlEUr1iq3nfffVk+fo1zrST4H8d3fud33pDzZDKZTCaTyWQymRdHjPGqaBOV3qt4VCPE0MtDg2hKRtqqolxfY3T0JqanTrG//WlkKZQrE87Wa9QtrK0e4alna3Ye/SSHtoYcP7LOdDhE2yXqLIFI6SoG4wmmsIi1NG3D0I2ITUfX1QzHI4xzSXIREWMQ43rRY7BVkfYOUZArCcBUaarWXN0zlBKkLbBFm75vnUG3nAFJ1uGWYAuwgmgFmvYfozOp0tQaQuww4jDOYkuLNEphFRXFh4j3/nl1qEnQGSMIRUpiWpseX2sgKqodOEl7f0RETEpuhpD2HUVQudJUZAGTpic1EENIKcxIklbGELWvfiU9d6HxSCwwZYVxhnI6wY0HyGBMs3OJUO8lAenS4ynOYqxio6WwJQMX6IYraPAslw2DgaewA4xL0s5dkWgYiBbrStSm6l4NHRfOnOb82R0G6hg0lq5rsJsH2LjrHmaffYTWNwwJjCvDqXe8gwuXL3Lp4jmefOxRnvri4wxMybPtiJVlYLqYUxUOixCjw/aCL8TIYrZH0zY0tcGIZXNzjeF0FayhqRuWbUvruyR3jUFIG47ptWyJmpKmsdeK1hSI8agaVGL/WpO010lKAwdN9z3to8KVTwr986VgxCDWEkPAkNxx65W2C/i2JZQdFAViLIhH+kRrYYQqWox9PZW5h7rxzPb32dtb9JuWz9HPswIk+Xwd/m7IZDKZTCaT+XrnhsvH8XjMdDq9+ufhcMh0On1F8nE8HnPnnXderXONMTKfz9N4/CtgMBjw/d///bz1rW/FWvuSrvv2t7+df/Wv/tUrko8iwt133/2yr5/56uBGyMeTJ0/y2te+9rqfJ5PJZDKZTCaTybx4UmWpYtSgIqlyNBqiSp8Ac6jIlZBVSkEOS6qNDabHTzE9f479xy8zOnyKjVN3IBoxZcGZzzzCpXN7nLmwx7luyM0H1xkuDZuHj7GYzSmmE9x0Bbu1TpzXdN0SW67TzhtsMaQcDTHlIKW6jIAxCNJvUiZZoxrQGDESURWi71JaMYIRC0WBiR76Wlk3HIAVjLF09R5qDKFtQLeJRDQqdlBibZlkYoAQA1IYsIaokbYJgGF1VFCKp/MR33YYMQRMklukilXVJLWM2L5bNWXUxBjQAGoQa/ptP3rhZVNyEvpEJEmO+pCuIzaJRtOnKImIN8lwCcSgoB7T70KqtamyNVpcVWE3DlFvg/cNvp5jjcU5l5KYBIyLFBQMNCCjVbrFnK7pKLBYZ1FjMFHQSLJezvUbmQIG5rM9nnnqaWIXOOhKhkWHaTtm8xnxC49iuw4pHb5pWD045d6H3oQtRmyfv8AnPvkRnnriGc5f3sWI4YNPXGTFRQ4PKybVSnrOowdjaJuO+f4ezSIQVLjp8JitI0ew1YDOK4u6Zn+5R9cpUcFZg5iYJHLfbmWsIDFJSWMNSERwRFXQSNSIRfAEjDFYXBLGYsCZXn4DxvTXUSCgBpCYPubBGiGq0tWRrmzobEllq6tCWkoHBpwtGI3GrN31Roqtw+w88yw7+zvM6vmXyEfpH34A7b8ysoDMZDKZTCaTeWnccPlYliUbGxs45/DeUxTFK96AXF9f55577rmaEPTev+KqSxHh7W9/Oz/4gz/4sqos3/Oe93DkyJFXdDuOHj3K2tray75+5tVHVdnd3b3u53nHO97BaDTKKdlMJpPJZDKZTOariBgVZ11K3iGIBrTfJxQxKXmoETEWawpwBmnBjIcMNg+ycstJpAscvusOJge3qC9fxsz3WTNKefgQsSpYM4YnvvA0hzYqyqpg3w8ZuQlBHIwHNGd28G2gsI79nW2KqsQ4R4xt2sTDorEDWyKkRKIgiKTf65XEYUxVmKKKqgc1EHySlUWFcxXGVRhxYAy+npFqMAPS1ESRtH9YCRIkJQvFoE0g4tm9cJnzp59iOpkwHhpKC0RSDWoUxEhKs5n+8bOCtQ4xSXhKCtz11silVGR6FnrhKKARuSIYlZTgjKlyNSJJVonpP6/pObP9HiSS7q9NqVC1niiGGD0aAgLYqqAcr6HznT6ld+W5FoyQBJ8rGcb+4+MhTbNPaQyVGfaPbTpbxIF16XFzBfiaC+dPM58tKQo4uNJSFsJoUrEXFL+7ixgIYlns7bE1mfLp3/sdvulPfBerq+usHzyEM0M+9MFfZH/nMjF4ProbeAsNRvdxRYGYChHDYrbLfHeHplZW1yqOHjtMMR4TEJbtknldU9dLNCjWWqwdYKzD2gFVYaF/PpwRTC9+U8pUkjiMNtXkEnF2lPReaFJ1qgpEQUyBaoeSkochtmneUwUfPRrSEQmpQrdtoVl2DMqWGFrQEcYMMM4RgKiGtSN3Mtw4QBthuZyzu7tLvUzP3RW5eCWRKfTtr/2c6MtBrh75yvfpLzhLVpqZTCaTyWS+rrnh8lFEOHDgAFVV4b2nqqoXJCFfDuvr6y9IfV1JPr4Stra2+JEf+RGOHz/+sq4/mUz4wR/8QX78x3/8Zd+GkydPMhgMslD6GqbrOpbL5XU9h4jw8MMP573HTCaTyWQymUzmqwxF8DFgU/kjkQgmohKIKFiDMRYjkipYxSKkj5WrY8ZHjrO2dYSDN59AfYc1lrWtFVbNUZajitDUzHd2qbqGU+sHqazBHbudtTvvxvkl9cVzbF+c4YYDHAV7u9tsHjucRB0GxCJGkkjUiOLRkCRjjIpG31dQWkw1QH2LBk0VptqBOMQoxro+dZgkITEionT1EmJAQ41vQkq/CYgpIXZAgYpj2e7x6Y8/SrdcUN4mWBtxIsQo+BBQ7fcjRYga02agepAi7TVaCzbdn
yQV+4tjwAiqoa+9DSgmpSPRVBVKSkcaWwACJiQJG5R+NDEJx363MRm+Pg0aQxKRncc6gziDHQ2x3ZLYtuA9WpZJ5gLGOKxGovVUxRArJZ1CJxbjO5xzYBzeR6KJGAM+NEhQ9ne3uXDhHBoj4xWYIhhjOHrf63jszD7N7g7OViCWtvGsTFf53c/8Fm9557exfuAw48kK733fd/HWb3oXuxfO8ts//7/z6KOf4RP7+7yuizhTIM4SOs/2uYvM9zrsZIVb7r2b6WgACMvFkr3lkmVdE4JHTIErxqm2lgIxeuVRx5gCY0J6LajpH2fpk7VKlCQmLSYJatunbTWm7cjQJhlrQIzFR6HVSOlKCgxqAtomGS4aiDG9XtQL0ad9VVM6xFi07SB0YNNrJTSe2WzG3uVduu6KHHxOBBquTk5e/ToOz/v885XiczrxD//cRpNwfp6C1OdfRtPnsoDMZDKZTCbz9coNl4+QEn3D4ZD5fM5oNHpF6T4R4cSJE1f3HgFCCCwWi1d0G7/3e7+Xd7/73a/oGH/mz/wZfuqnforHH3/8ZV3/inzMfO0ym8364fvrx+rqKqdOnXrJ1cCZTCaTyWQymUzm+pKkoknKIab0nYaUJBSSKBOVJDmMYm1K7IFgBwNGGxtsDtcpq5K6XTAwgaJt8M4wXZ2wfHaPi3XgloNrbG2sYEQYjkdM1lYoJgfx+xOa7c+wfsIRuxYfPOOVNTAGVTBX6ywFVd+/8VXRkOpUxRTpjkgv7sSkalZnib6vlY2gse1vt8XYgnK6mrb5YqRtGzQEfFDAEiO4QjFG0K4FY9jf2efs2cihAxWDwQh2d3A24KPSNgH1aRMwhvS4GTEY61Jy01jEOURsui2iSU6iKfGosa8xFXqrmqo9rQU1/RZkv30pko6RIp6IM8QmbUVi7VUjpVaIbUiX1VRDqwixSzuRtqjQ0IvbGNArsleFwrpUn6uxX5wcEEKk67cOY1SigUCfgrSG2LVcuniGrg1YC1UJthRwBRsnbuOyvcTZ/X2KrYO4EPH7+xRFxYNv/xY636Qa1NIynkxxRcnWgUNMjTL7//wDLnaOT17eR4zhoHUsZrtcOH+Opbec+rb/Gwff/ScYLC7jzz5J++SjLJ99gtbWqIugPm1wupLCWoqixLoBllR5arAECan6VvoqWwWNFukTtkgKO+pV4R37tChgipSkjQtELD5GnAYcBg3g20DXKG0wxAAalej7lCtgjAFCSqaqEvb30bajbhpm8zk7u3tEfaFIvCqK5bmPJUWdNlt7n4j0haxcPduV32iflLySdn3u80Ly2ZDuL5AqmDUryEwmk8lkMl9/vCry8eTJk0wmEy5evMhoNGJ9ff1lH8s5x0MPPZTeIdgTY6Rpmpd9zBMnTvBjP/ZjlGX5so8BcOjQIf78n//z/MRP/MTLvh1VVb2i25B5dZnNZq94e/SPY3Nzk8lkkhOymUwmk8lkMpnMVxlRPU4cKg6RVL+ZRFdKrYlJe4mxr+g04jDYVPtoDOOVVVZHaxSlwwuMrGLbFO0Ly5q2aakmJccOp2mTqIKzSpzvEcoOsQ7CnMHKGvvPnqWajChHVRJyPhIlItYhLglPVVIVbL9DKeIQEWL0SZ6mGGRKlqmgvusNjaTUW4ypurQqsDLFqRJ1m7YNKQXatli5UqnZ70waaJae2jsG44gzAwqjFBZ8hK4LaAzE4IkxYmzRpycdYhx2UEIIfX2tSUk3tJe7Hu3Fooma0qZieqPU70KKSQZRJW1Rik3Nrcal9GMvHSVEQHsTZZDCpvRnTNLMqIBLEskUDqsTBEPsPBIA179Z1II1DowHMbRer1Z7dsnoEjSixiaxibA/22ZvZwcNSlkJw3KIKR2tUYbrGxwwE05/9pNUm5uws5Nun49sHbyJ4XCUkp1Fnxjs9zGP3/Mm7n3gTXzol36B7cGUT17a5XWxZefMJZZ7gQM3HeD4Aw8QNw+zs7LJ+OTruOVdI44v99k/9wzNxdO0ly+iiyXNhScwixk0i96qWSKBECOoJYpiRFLKsa+5FU0CXIlEHFbTrmXwLTFEjE1bnqqBGD2GCCJ4b8AHuoXHN9B4aKLBomiAiAcTkgQtqiT2NKAiRL+kbVvmzZLt/W12Z7MXVKra3hQakny0cqXy9Tnp2H8J9GnmZCOtpDfGi0JQSQ9B7KdCJb3kksh8nmjsz9uF57600t8Z1+tvo0wmk8lkMpkby6siH2+77TbW19d54oknqKqK0Wj0so/lnOOtb33rCz6mqrRt+7KOV5Ylf/fv/t0XJClfyW3703/6T/MzP/MzfPazn31J1xWRnHz8OmB/f/+6Jx9Ho9ErFuWZTCaTyWQymUzmetBnoyQSNaIaCXplU/GKiAxpCzAqUQNRfJ8Is5SDgtJZhsMCJiUD79DgiSiLvX0oSw6MCibTMaoQmhaNSnvpEpRj6otLNNZoF1jszlnd2kyVozGgoQVxRFHEh+dSaP250/ah7/NdgtoC5YqE1H5bESBJVQ29iAsBqQRblMh0kpKGQemiRwmo90RpUJseGwE0Rjo1BDUYC0Y9Ywt4+n8ogqEPNCZhai3G9ZWWJslasS5d4Mp9jOn3xjiIgpHnRvyijymRqCTB2O8J9v2x6YZZgzH0tzmCOtRoEngRCCE9r5A+5jusqzBFCaFNEjEkoWSMYGxfwGstRpS4WOIoidoRNNKGLm1VGun3DTtsKNi9fImm8xgRBq6gcII6pTAFrhxz5LYTfPpDFRc//UnK0QhrLV234GP/7Zf41u/+QWJoMa4EI7iiJPgWaytue/v/wG/+h1/grN9nsbGKPHua4a5nc2vK6tGjBBXmsxk//VP/nq6O3HL7MY7dcoStA5tsHb2X9Xs3KAuHM1BYQZsF/vI52kvn8ct9uuUMv72NtjXUNX5/G13MISwxIRB8l/ZENeJFcfTborEBN0hv5O3FXgie4AORBe1CiD5VodbRUTcllesIMb2OjSuwZYmU6c3cMQZCbInNkm65z3J/we7OBRaLZS9C4WpakiQeSyNX3zDgrab0bLKlGAu23xytyorJeExVloQY6HxH5wMhJKsYiSjpNROj0vmur4hN2VwjKf3oQ5KcRvqt0d5QXt+fJmQymUwmk8lcP14V+Xjw4EEeeOABHnnkEba2tl5Rum86nXL//fe/4GMxRuq6flnHe8973sO73vWua1JhKSLccsstfMd3fAc/+ZM/Sdd1L/q6KysrbG5u9jUhma9VbkTt6traGsPh8LqeI5PJZDKZTCaTybx0REySXxhUAoG0UQekek5JcktQwnMDc2AU44T1coLzHr+7w6ioqMYDvBHEGqqDh7HNnPWRw7iCGCNSWKx1mNJigrD37BMMCwtdi/qG0crBlAp0KT2oXV+dGdM2YqrGdFzpx5Qr63c2VagacahLt1OjT5ZGU92lRtOLpICpW8Q5TOEo19cgBJht02kg+BaNYMsCYwW0xRURY5TOR3xoUvITwAhFNUwSUMH0ibokPEFDSKnS0kEMiEaUJB4Rm1KYMdWXpkrP9Nj7tkmpOlMSvcdYkypdDYSmQyrb
y1i5KqPEFn0KNCDRoeKThVJBTOifOpPslVqiEaIPqdYzCsZ4TFGg1iIS0Jg2ILGp1NP79POCoEmaikZcOaatFyz399AglIUwKAdUZUWwMVXOFnDw5uOsrG+wP9vjwOseZFBNkOGAp3//E+xevsTq+sH0mrMWQ9HXxEa2Tt3F2m2v49yv/hpD0/BoLdw5Ltk8tE7TeXZOP8vwlrupF8KFC54z5x4n/vrnKKxlMK5Y31xjvDJksjpgZXXM2vqUyWTM6vomowPHGI0HTIYDBoXFqmJ8A22LLvfQxYx29yLd7jbdbA+/cx7ZuQRtndKqGlIyWFLCMMTwvFlGJYSUeFx6Qx3BRkNQDy69jm3lMAWoMYQQCSqEukPPPsmiFXZ3dvFtQEipTNUriUaSTC2ufG0KNhrG44KtjS0OHTrCocMH2TiwwYEDRzh69DgnTp1ivLJCCC11PWe+v898tkc9r1nWSxbzfbb3dtnZ2WN3b5d6OadZtuzsbnP67BnOX9zGdB4fIaRxSK74bdHn9b/mhchMJpPJZDJfQ7wq8hHgu77ru/jZn/1Zbr31VpbL5Qs+JyIvuqrygQceYDqdvuBjL7d2dWtri+/6ru/i0KFDL/m6fxSTyYRv/dZv5ed//udfUvpxfX2dlZWVa3Y7Mq8O+/v71712dWNj4xWlhzOZTCaTyWQymcz1wZhU8ZmqJdP3qvTbiqop/ahRiNL1qao+4YfBqmVoKwa2IopQlcKw2sAdWyH6lvVTS8LOZSaDAmkbQugw1lGsrqDNLrNnHsfvXWRw4lhf9SiURQmSNhBFCsTGfseRXjjafgPRIDYJScIVCyJgFYmCuILYBlQlJf7kyrBduv3qU+WmRItxBcXqlBBrwnKW7qMGQquoc6ne0lgGVaStAyEIQS1L3zEphMF4hBhHkKaPqCnG2H4r0/V7j31VrGoqwxRBfYDoCf02YAgtiCWGJPbAEFkm0WcFMRZpY6rBrTuMTUIqGptqS41BItjSYSTdBo2gBmJI1Z4ipMctKDFAaJokPIsqtbz6VKuqIWCsYF2BxiZtWCJEsaAtEcVh8V3D3t5llk2LmEhVlAyqEuMKxCYpXLiSajRm85ZTzD79KcQZlr5l2dWMxxOa+R4iKXknWKTfFFUfGa1ucOc7vonHPvJhNuOMmSl5qlplsPTc5hq2n/kiY/0mhuMBXIpoDHTe0DSexjuWyxo1C7rQ4azFENjcHPLe73gH+3s1IcDpJ5/m6afOMRpWTFdGbB3a4OChDVbXt1g5cZLBqGBtNGbgDNZ3xMvn8Ke/QPvp30R3d+i/gIhEQlCM0VTZGgQfhLpzhOjwMe2WGok4Z9Lzc+VnS2IIRHwzo/zCbzGSVcLeLkS9mr61/Uu4EFgbjzlx8gS3nbqdu+66mzvvupNbbj3J+sYmk+kKg8mYohqmJKsxmCvjjgDaJ2M1ojESu46uXRJViRFC1xJjxLee/fmMi5cu8PHf/z0+/OH/xqc/8yjPPPsUTdOgCF1Ilcyx/xK8kqSO1/lnDJlMJpPJZDLXgldNPr7tbW/joYce4o1vfCO/+Zu/efXj4/GYjY0Nnn766Rd1nHe84x1f8rGXKx/f+MY38u53v/sF+5HXggcffJC3v/3tfOELX8B7/6Kuk+Xj1wc3Ivl46NChLxHwmUwmk8lkMplM5quAaJIkEkWu1jYKooKzrk8aArFAaYmEvu3T0rIAp5SmpJl3xKZh3rUMq4rBaATaYQeOYlCiIrhB2oHXpkZpaS5eZKWvWQ3zJYPpFHFJkoi50l8K+CQ909xjTHWr2qGBlHS0cDWKJanCUkKSHypJ4hFCv1NJ2pKMHg2KeIO6iBQFxXCCb5YQA0FbNAhiLSG0GGOZjgxN0xFiB9Gg0TAshLIqr0paY23anxTS+RwgEaFEiCgBUUmVqkEJXSCKAAaRlA6NnQcsKhYxQ0KE4IUY+lSjSXLSFgGJAesAAtElURW7VMdprWD6UloImH4zMypgXUq4iqAmYiTdHlWQGNNzgGCMTfepT2waUcARo8e7QGgaZjs7hKAUrmBYVbiiwBQmJeSspP1IgSN33sEzf/BJHv/If2Mw2eDyhQkbW0dS9acqEn2qpRVJKVeTtizveOiN/PrqFFte5sjIcmhccma/5olLM+4aneFw5ynLEo01Iba03RKwEANGAmggxJBe2r7mHd/6TZy44xi/99ufoRqOuO2eu3jssYt8/GPPIFHAKM6CK4TBoKRwjulqxdahVY4c3+TYsUO85p6HcdM19j7wfobOEUPAe4gIJipdZ6iDYemFOhaowlCT/LPO4aoxUrg+mavE0KbNUA0UXcuW2eEm0/CIQGUdw9GQA+vr3H3bHTz8trfz4Fse4tgtN7N58CDVYNAnIPvXnHEp8XplBPJKFvF5QlB6oa94QmgoByPkartWP/CpkU2FW06e4t57X893fOd3c+H8eT77+c/wG7/56/zuR3+Hp58+y+7ePovG9zW8z/3V0jfAZjKZTCaTyXzV8qrJx+FwyN/7e3+P2267jd/+7d+++vH19XVOnTr1ouSjiPCud73rSz7+cjYf19fX+Z7v+R6OHDnykq73YhgOh3zf930fH/jABzh9+vSLus7m5iZra2vX/LZkbiy7u7vXVT6KCEePHs3yMZPJZDKZTCaT+WpEAkKR9uLoJzXkirAw/a5iquYM0aNiQCRJyGBolkuiC5QaCZ2n3Zsx3/k8K4fWWFudYoA4X6LqkWDQ5QLBw7REgaIoMSI0dc2gshDAFAXiHISAakTT6BwaAuIcGkNfeekIdP1GpKa6Vr1SG2tQMVerSRVLspWGiBBiS2iWGDHYYoBzY2w1xg1rdLlEQ4fagMYaiUOMc2ysWs6eA99FxEZKC5ORoawM6luuFsHaItWpFmmbEoXYtYgtULUEH+iWdd9SKUkCRfB1JMYBwR+h0w3mdcliUbBohLrz1L5LnZsaKUWoisja1DJwHWW5wLrLuHKX0tVgC7yPGAzWCFJU0HlQAR8Qa3CTMdaPCF2TtjP7PUzjTIrZxZTShCuVn0IUkL5etGsalosZs/kMJNWtFsUAMYI1A4x4XDHAqiJiOXT77dz6prdw5otfhJFhdOwop26/A+sc0QeM9XA155dem8TI8bvv5cCJm9n93GV0NRLDkneePMKTO3N+6zOf44l/9S+4sA11W2BkgEhFjH1CVCMheGJQgnSUzjBdn/JLv/gbHDl8lCefeIpnHq94+J0P8Oyzv8xy5gkeQjDUjTKb1aDK+Ys1X/jMJZQvUhbKX/yRkpu3Sp45e44T62v910y/k6hC2xk6b2i9oY0WIwYxHWJjX5WbEomEgAah7VrqesGyrRn7hmpQcf/hQ8zcBidf/ybe+s538qY3v4XNY8cwVyRhL+STcQ6A9Juipk/gypUvcp6vAdPrLhJDh4ilnKw/9/eB6nOX77df0YAtS6rRiI2tA9xx12v4H77tT7K7s8MffOoTfPg3/jsf+chH+PRnPsOFS3sEwCh96pg+1ZzJZDKZTCbz1cerJh9FhDe96U00TXN119Bay3333fdlU4vT6ZTlcvm
C5ODtt9/OyZMnkSvbGD0hhC+pcv3jeO1rX8v73ve+LznWtUBE+IZv+Abe/e538/73v/9F1XBubGzk5OPXATs7O9dVPq6srHD06NHr8rrNZDKZTCaTyWQyr5SUjVMCxJAW20Qwzqa+TiTVnsYOjTHtFvY7g2hEfUQlUhJQNThj0HrB8sw5hu0+JYDviHUDvsM4gxuVxKUntg12UmELR2hbBpNRqgU1FtDkUKLBlhUaIpEGMERJCUeNSaZpCGjwfWKuly/4dHsFCEmeAuk+akAiGLFJ4EQlxg7EUAymqA/EmHYdVQ3Be8SWrK4WPHXaUC9qYhCCCpNRkZqJRFIak5Q+NGKAtLeoPmDckOBb2uUSEdsnHPuNx2KC7w6w2N/izCXPxUXH5WXLLHiaomLRBoKp8MaCWHyzxOEx0lGdbam6JatDw8mtW9icdDg5w3R1n2oQcVYJIWCkSFuBAFHAgC0c6hRECXVKZUbfghFsWaUEp+0FdIhYSXWorQYiStcF5vt7+C5gCoeFVNVaVDiT6lOlGKRVTiMMh1Pueee7uemee9jbvsDt938De7vnACV0HbZwycKaJNfEWFQjzpXc+ba38xuPfIx2EamrdFvvOrTFdCWwffw4p89+nGef+jzOrVIONzBmwMBsYawjeCWKRxA6hb3ZguO3HOLOu05x8vZDfO5TT4IYVtYmzPYuE4PiLRAjwUeMVbSzBN9SFkNCNDz2xdMcnK5xcfsSh4YVw+EAHzuiQr0w1K2lCbAMBZ0ail4EWiPYosQUZbqfxhLrFu+VuvEELSmHq6wcPsUt972V9973RjZPnGAwnhIRus5jYkqzGmuuikjt91CR5wvH5yPPfUhTRNHYIgnQK18b8nxJ2dcEI0lYa0zPp0l/J9iiZDAYceDAYR5688NcvHieRx55hA996EP82n//NZ544inmy65PK/dp25yEzGQymUwm81XGqyYfIUm5oigYDAbpxjjHQw89xK//+q9/yWUffvhhPvnJT74gOfi2t73t6nWfz0tNPhpj+KEf+qHrKvuMMfzIj/wI//E//kf29vb+2Muvra3lNNvXATdCPh45ciTLx0wmk8lkMplM5qsRBSVJgqgp0aYaiFExNtVuqkYISUCApmShpmpGJ6YXbSkRZxWikbSZuL9NqEqMSqrrNH16zkBYzHEWxEpKVgaPGCGGiPiAlEmuYEg1sMaCNSmZFQJgwEMM/ffVyVr2CckORNEuJIEWU3orTT6mdKRKSOmwGNDQEpYK1mIKiykcpnNEwIcOsRaDMJmUOAvzWcNoZBk5YXXoelErGBWipqrWK7cn7TUWtPUc33ZE7/tzKl3rUXMz+7uneOJszendOZfNmLkZsxBLKApcZbnt9Uc5c3qPy89uUwwHlENhtj/HDYRldNTdRYpWeOyZfaZ+h5PTFW47coLx6AzTlTOUBQQTiH0SVGJAO9J2p40YK0QL6jUJrH4LMJW2prvijNAFTYlCVbrQ0NR7LJdzYoTSGqw1GFMgajG2wIrgTIEBrHGIMRSDAeuHD+Eqx2A8ootTCjvE9Albjf657x0FNEY01tzx0Fv42L/9Z+zXyrJeIiK4omA089xy3+sZHDnFM09/gP3ZRXZ3nubUqVvYPLjFmXM7CMJwaFnWgRgj872G1Y2KRT1ndW2MLTuWywWjlQofPdrvbnpfp8SiCm3b4KzDlAM0djzx2Gne+faTqLFEVXz0+NbTzCyhdnRRaCM00eJVsCKg6WulKIfYsgQxxBhpJbBnS+TUG9lYvYnl9CDN+jrb1SrxC48SH/0sGpXFcoa1LqWFjaUalAyrIYPBkOlkjY3NA6xurDNemSYp+bzvwa9Usqr2y4zGpK+Xq5+8csHnX+fK83DljQikNyfESD+yiikco5UVjo9GHL3pZr7hG97On3nyUX75v/wyH/zgB/jUI5+m6QI+XklcXj1KJpPJZDKZzKvOqyofIUm56XSKcw5rLW94wxv45V/+5S+53C233MIXv/jFq38WER5++GGqqvqSy8YYqev6Rd+G22+/nW//9m9/eXfgJfC6172Ob/7mb+bnfu7nvuLljDGsrKxQluV1v02Z68v1rl1dW1vj6NGj1+34mUwmk8lkMplM5uWjEhBJ+3+hr10VLM5pL5/6KtMrqai+SjEQUAPFcEA1GBGbJdamj1ljMAYKa7E4TF81KYMyJaiKkma5mwSItYTWE4Mk0dPXPKq3RNF0GWNSGaeJgO2FaACr2FigGlDi1WpW1YhGAWfBA6FDo6LRYzSN4xnn0JiqJzV6pA3YaoAYhxtMUO9p6n2cGIJGlIgrB6xNlO29lsmw4KaxYVLaKw8LatLWISEmCRcVNeCbmui1vw2W0AQia8yaEzx9fsLTy5oLOmZ3dJgmBlbWhtxz9yE2D69ipOD2+47y6Kcv8t//z0cxMuf7/vrb+N0PPc0td29x8MiUD/7r3+XyuV3quuHc7DI77Zw/ePQJHjh8nFPdMSbTxyncBaqh0HUtahzWQQFpZ7JQrFpU52iIaOwTr4a0CSoWU1RoXCYxqYrvWtp6TogRrMGKw9qqL031CAFxg+StNG1gWmsRBFuN0ZVIqGvG43WaxSxdBocGfzVdB6ZPwVo2jp9i85abefqzT9F6xbohUcGGhnp3h82tgwwna1BMWd08yXd9/7dz+90n+OTHHmM5W3LyrkM88/hFVtZXWdsYs741ZTCoKIqCCKxvrDMaVxjraJuUYFUh7YKKQRWKYoK1JW3XcPb0JdoIsXB9clZo2kDbGEIQFKWJBa1aAoISUROJBvZDw3Jvm/3FHqfDgGcZ0o5vQu2YOG+w3bNw+TQhBqKG52qPtUsJ4D4d6pzFuQJjlEE1oiwGFNaxNl3n0IGbOHLoJg4ePcLK2jrTlVVclVK60jd7vSR6KSkKGItiEfoq5PTk4oxlbWOde1ce4I7bX8N3fud386u//iv8Hz//f/D7H/8k27szYtTnUsiZTCaTyWQyrzKvunwUEVZXVxkMBmxtbXHrrbd+WVlTVRX26kA3HDlyhDvuuOMFH7tCjPElJR9/6Id+iNXV1Zd3B14Czjl++Id/mP/0n/4TXdf9kZcrioKVlZWcZvs64HonHzc2Nrj55puv2/EzmUwmk8lkMpnMy+eqiJC0T6hXN/dSUguFSEgiJAAxEiVipACt6dqOaDpUDFEjxjowJRqXqFdCu4TSYq1DQ6r6ZFAQLzRpRxDBL1vUe4wYYtvglwsQg5QlbjBEStcntwxiUuJSxKESESOIWqL3SSSqAZsurxrBRNQ4xAHBEKMioqjatMmokdjWEBRVxVmDKSpMMcC0HaqhT7XVSDVkZWXAU095/IawMWgZDGySiiH09aRpTxEUxPbyCGKIhFbpWgjdEc7vnOTRC0vO+Mh2sUUrBWuHJrSXZrztT93LoZvWOXhsymc/fpGdS0sOH5tSjhzHbj1KCHDHfYfxTcQaw2vffAsr6wOe/sIlLp3b5uIz+1w+d4CP7mxzdmePew7fxubKKkGeoLKRaKFQpShAQ+wTrb0gNgYVRTUQIggxPS+2wBmPWgOxIQTFdx
0xgkTFWIuxEbGKMS5l2zRgGaR9Q7F90i5gbcFwNEEEympIPd+nqecUw1GagOnrcFNMLj0v4/VNNk/cTvj8s5RGiFhCCIiJtPu7TI6eZDIdUrc1ivLYY2e4695TPPzO11NVFXW94LX33c1wUPIHj3yW82cvs3lgyiMf/xwnTt7CytqY++6/k0Ex4rEvPENROC5d3OHy5TlCZDyZsHFgjbPnz+Lbht3dljYI5dphQqgJ3tPWgX4UEx8FryYlYU3ADjrmVcun55HFs7vs78xoVzbRyYTh2mZ6fTf7aaNyrrQ+HfNK0lSsQ6PHxw5jHc4VWFtQugHD4YAYO5omohrY3t/lqfNPMXx0wMp0ldWVddZWNlhf3eTQocOsb2yxvnWQclgixnzpz3WuzPB8uZ/3XJGQ/T+vvtY1pESypOrlkTHccvMt/E/f83287RvewS//yn/mZ3/+Z/nYJ/6AtmmJ+txpsovMZDKZTCbzavGqy0dI1ZGDwYC3vvWtac/hD2Ftv9nwPInzwAMPcPz48S8r6F5K8vHQoUP8qT/1p26Y6Hvta1/LN37jN/Irv/Irf+RlyrLMe49fB6jqdZWP1lpOnDjB5ubmdTl+JpPJZDKZTCaTeWVI36upKBrD1XE2FQUjQEQ1fe+g+F46aBJ7GnClxaqixvQVlAYxYNwAdYHeFaK2wLgkAaUaQEyJOJG0iSiidHVLt9glLBeIMZjhmGI0xo3HSFVgijLJTREQRaRA1aMh0i1rYtv2G4XJhSj95qNzV+4oYtJ9ISbHJX0daPQtoa3BCOW0wFYDXLugqdu0d0lBYRzTtTHxyX2WzZLCOgonRDEYLBpSJaUYS9BAbDpCmwSdKnReaOo7+eLTYx7br3nWTJibEUePH2Bne8HD3/paPvWRpxhOKs6e3uGJx3a5eHGXvUsLFvOOC+cvMJvt8tjnzzNZLbnzNYfYb5aoiYzWK246tcVb3nMrF57Z51d/8RFOPz7h8WaHnfOXeYvezEYzZTH6OGtrgRgF03qcM9jSIppSphoCGrXfuTQYVxI1YGKf7uz/F7oajQbFgBOsxFTpakrSvqBgjEkVvLYgkuQsJj13RTlI25KuQIj4tkmSjZhqPY3tpWh6DRbDERt3vgb51V9jVIKPXS+YOxaXznFsdcpoXMDlJLUf+cQXufOeW3jNPbdTFAaRIaLCbD6jC54nHz/N6vrtHDx6iMFowGKxYH1zynd/37fgu0iMykc/8nHOnt5hbW3K7XecYHdvm/f/b79EHTyzJVy4tKA8fJj6iT/AlatJhtpI9EKnBikDq9WcWHj2Os8TYtgfWJZVkqdTHbNSDJgv9+l8Swi+T1FGlu0MJSC2wNkK1Yi1BussRI/GAjXpsiEo4lsGowmj4Sqj8ZjxZMJoOGAwHmGsIaBc2DvP+e0z6XUKHFw/yOGbbubwkWOsbW2k+uAX8LyRxq/48yhBxKJXrh89GHBFgTWWkydO8n3f/3/n7d/4jfziB36Bn3n/v+XJp88QVFGV9PWYyWQymUwm8yrwVSEf19bWGA6HfOM3fiPAl4jAyWRC13U0TQOkFORb3vIWjhw58mWP91I2H7/lW76Fra2tGyIfRYSVlRUefPDBP1Y+3ogkZub64r1nNptdt//YL8uS++67LydkM5lMJpPJZDKZr1bUIGoIsU2yjpCqVoNBTCSI9pIHwFwJdoEqMUJoW8xoiBYWWwwwI4Ah6jsiNXZQYpxFBKxzIBBjSzGeErctommX0bgCAYrhGOcsvq6J813aZknoGtxojB0MoRqAtYizoG3aTlwsaPZ2QVwSVpL2DDVGjHFEAqZ0aOgQU6ZcpxMkOGxZgRo0ztDQENslfm6x5QBxBcICayp8iEQfKAaCKxy7+x1VUIxRNGjanhSTzqeR0EGoG0JMCcwQChaL2/j8UxOejAMubR7GGOXgeMTbv/11fOzDT1IvPO/+7tfxkV95nGef3uPss+dp2gYhbUmqRrrOE7Y9ly4WPPP4RYw4iqHh+C2bPPSOO7j47JLFIvDQO1/D3oUZv/Vrn2N7e8xvbO9w/2jCkfggF/W32Vq3zBcLBlVJZQeIMThX0EVIitGlTUiB6D2iihWDJyUeu67rdzP71xAO0SK9pEQQdVhTYMWktKQqV3YCnz8y2C7niDU09SLtTMqVqt+U4rvaaQts3X4f64OCwnU0fgFS0nYd7cVzVNWA1ZUJzi5ovOfSpR3+08/9d6YrU+LNBylcCUTq+ZLxcMLmwSn7e0uq4YD5vGFn+yLbF5esra1w9OhRQgi84aF7OHTkCL7raJuGJ58IDMcV+/stYDhzeo+bDh5n+egnKZb7dF2HV4iVYssGibC9EC60sDuGeRVYRiXOhfF0FXEVu/sz2m6J9x1NU7Ocz2malqABDBR2gDghhJqt9QNMVg9RWMegGnPs0M3ceuoOTt12GwcOHWayukJVDSjKEudsqrk1kvKjqsQYCCFQLxf88oc+wIc//t9pP7JgdbLJ0QM3cf/r38TxW08xmkww1r5gCvJ5g40vFJHy3G/kys8VjMUUg14ee4o+UXvq5Cn+/J/7C7zhDW/i/e//N/zyr/wqu/vz/o0CmUwmk8lkMjeerwr5eOjQIY4cOcLb3/52jDEURfGCz6+srBBCuCofT548yTve8Y4vudwVYoxXL/uVGI1GvPOd72Q6nb7yO/EiGQwGnDx5Mr0D9Y+QUkVR3NDblLk+zOfzl1T/+1Kpqor777//uh0/k/lKiMhPAz8AnFTVJ17dW5PJZDKZTCbz1YlIn3JU6VOPNi09SkiVrEHTHl+Ifb2oJ/ZpSEShcJTjKW46Rr3S7V6iqZepyjQq7shNlKdug7YhPP0Y8fyzqHTY0STJQpO24wYrY8q1VYS0w1i2AT9b0M628bvbaNuiU0V9xAxLjAySlOs83WxB7AJiFBFBsERtiQF89IBHaxBbYmyNMQYJikHSnuRggDGKXyix64izGeVaQTEYEZuG0MwRY4mhw4llMjFcPF8yshEVQTuPGJe2+Yyl8x2CJURFEZrG0zV38vknJjymwoXqIIvlgu/6f7yF0WjM3uWG93zva/n93zrNv/n//jbnz19Co4egGJfSgp2vsaZAIVW5qqNuWowNzBcdi/2Gp75wkbsfPEr0woNvOcnpC3ss24gblmyzxu8td3mDTFj3r+WyfoKVaVKMzGuG0yFiLcZ4YlAk+pQSjSXWFERtgSQgVZPYDaGDIIjxBK2JUiEYYgxIAfS/x1YYcamJVtPrTYHYNVx66nHWT5xi99IXaZdLbFEiMaC2l5T96wPgyNFjnFyxaNfhvWKsp+laynoXXy9Z25xi5SJWBBXLuWe3+Q8/8yu8+a2v4dY7TrC2PqGtGyBw5PAWOzv7XLp4keFoxHRlwtZdBwhdZLmsaTuPK0vquiGEjvFoyHR1DTFDBgNFJPLM05c4de9N7AalaGsaDYSpp4nC7ky42Ah7U+VsEZi1inqDWGG6uo5KxbOnT7OYN8wWNctZTWjTXmlZCLGXhpY9jIO19Sl2xfG2+9/JA294M3ff81oOHjuILQvgxb7ZNz2Oo
/GEleka29uX2du5xDlzli888Tl+46Mf4ubDJ7nv3jfwmtfdx5Hjt1AOqudflee9E+F5f4lc/cfVD4ik9CuFoF2DdY6BDHC24M1vegt33nEXD3/DB/jX//rf8MlHPkcXYhaQmUwmk8lkbjhfFfLx1KlTfPd3fzeHDx9mNpt9SeXo2toaXdexXC553etex0/+5E/ywAMP/JHHe7Gbj3fccQf33HPPl616vV4YYzh8+DBra2tsb2//kZcpy/KG3abM9WGxWFxX+TgYDLjvvvuu2/EzmUwmk8lkMpnMK6WvIBXp/WOXijUNaVuRFGBSQpKN2v9ZQ0qqGUvoliARA6lyMXja1uN9y2B3F7eYwdln0O3z4D3Y0Fd6GsQobjDEFA5TGEITiHWHhg47rqjKAzSXLxOXSzpjkiAVgIhxQ9CY9gZJ9a1YB0YQNYiElCATkBgJcUlsIZISkcY6rHXYwiFugC09IUa06wjzGXa6RjEc4buWECF4jy2HTFYGfO5px9L6tOXoQ5rHFEPQgCCo79CY9iY13sajpyc8Qcm54jCbx8fcdewWqsGQvZ2G8xd2+ZX/81OcefoCrfeItQiGLrQ4WyAqqZ5SlGgickVCihKjxzqLCtRhyfbZOYWzfOJ3nsBR8PA7b6UsK/a293n8s2f4gzPnebO9jb3dBtWPI8ZQWoOZGapJhRQFEj0QICb5jDMQBCOWED1BI6p9nawAhiSaNBJCg7PllVcWEYMpB6gGjFiQVNsqQGg7zj/6CJsnboUY0eiT8FZNKbrUlJvkOIbh6hqDrS0WZ59m2S7BDghBafcus9i5xIGDK6AeEUM0IEF54ounefrJsxw8cpBv+/Zv4PY7D9F0kcuXLxOjoXCG8WjIhfOXcIfBypCzZ84wmU6pBiXBB0Lw1E3NY4+dZjavicYgCs88fQH78L0sqhEXnvgC28uOnZlhVhu2rXJx5DnvlaZRVAyNj5RVQXt5j8XsPIs6SVSN6SFUoBBh2UITI6VA6YTKOpypuOvk/fzZP/fnOHDkMLZ0f0wV6pcjXd4YwXeeixfP4buWoiiJcYExls888QhfePpzfOjD/xd3nLyLN77pbZy67U6mG2vPazTS5/37eR/7w5/u5bEpB6lK1ysFijDgwOYB3ve+7+SuO+/m/T/zfn7hg/+Zvb1ZFpCZTCaTyWRuKF8V8nFjY4O/+lf/Ks45QggcPnz4BZ9fW1ujaRpe97rX8TM/8zMcP34cY8xXPOaL2dl7zWtew8mTJ1/RbX+piAgHDhzg8OHDf6R8zHx98FKTj0VR0HXdi778vffey9ra2su4ZZlMJpPJZDKZTOaGICbtJqr2UjF9nyoqGGuBiIa08aiqKYmmSSBhACNoDHS72zgJGGMYTgZQWGYXl+xduEg5GWB2LqLtAoh9bWoAawFBJYITpEobcSg0lxfofMFgYwPZ2KTZ2SEsZ/j++2jVAUwKxBpsWRLqGvUtkYC1ghrFqKOoHL5bpmrU2BEDYMG3HSLCYDTGuCJtRVYjDEI0C4Jv8N0SNxzh6gZf1ygR6ywrk5JaWrooiCVJRhFaDWib3qwrCFEs7XLEzv4pnm1nXJ7exLJe8sb33MnnPnaRyxfmfPoTT/P4YxfZ250TupagEWLNymrFcCJ0naZ0pQpiU5pMo17dS0TTw2gkScRv+hOv4aaTm9hBweqhKdEJdEpzacFH//MXWezexOd/+1HuG76Oc+efoKxmmJHBhwK6SFVWoCShKiBGQLuUPrSWaGJ6LYhgRIgoVgWJBsShGEQcYgURxWvAVumYaiIoOJMqUGPw7Dz+KERPJNA2SyZiUY1JdGlImrKXXuV4ldWDx1mce4rGN5RSEsQT57ssti9z081HseXHoelADVEU7zs6D2fOXOTn/v1/5S1vu4sH3nAXg0HFU0+d49bbbqELgcW8wRpLFxu6fc9wNEbE8NnPf4HFvOaJx8/we7/zJD4IGj1t0/LM02d4/OOGC8+c4cwTe8waQ90Js0HkGTznl0rnYRGUWRfxEUq7ZEz6WgsKUZWIXGmWTa4XiOlpgwCVWo4dPMl3/o//E6vrqxgjfXvty5s3EYTgW+azfTSGfqtT6LoGBRo6Ft2C0xee4uOf/hi3HrudN7/lG7n/TW9mMB6lY8jz63O1T0N+6e2R/u8YLFipUGMR00EbWZms8Pr7HuDwkZs4dvwY/9v/799y/sLFl3WfMplMJpPJZF4OXxXyUUSuVqiOx2NuueUWrLWEEIBUu3rnnXfyoz/6o9x8881/7MZdjJG6rr/iZUajEffddx8bGxvX5k68BLa2tjh48CCf+cxnbvi5MzeO/f39F1X/C3D8+HHuvfdePvjBD77o47/3ve/Ne49fg4jItwM/ArwG2AAuAY8C/05V/5mI/O/A9wJ3qOqjz7vevwa+H/iQqr7reR+fApeB31LVt/+hc/2PwF8E7gcGwOPAvwV+UlW/5MUpIncBPw68CzgEbAP/Ffi7qvq5513u+W+affx5r8MnVfXES3w8fppU33or8Kf723sCuAj8B+B/VtW9L3O9Y/1t/TbgJmAG/Abw91X1o3/osn8H+J+BbwJuAf6fwF3APvAB4G+p6tk/dJ1fA76R9Lj9beDPAEeBZ4B/A/wvqvol7y54sY/hl7nv7wX+AnA78Nuq+o4v/4hlMplMJpP5WkI01ZQqIdWrqkXVI321qmpESb8EJXqPxyfZ4BXbdkRVTAyUq6OUenKOQKAaD9HQos0+WloIBap1SlN6EOvAGWKIaBcoTYEZCGItfjGn3ZkTfcStTFFV2p2WUM9RcThAnMFWQ0xZYquS0DZAJHY1YotUKRvBmAItAxILYuiIwadly+jp6gXRt9higCtKhCEi4MM+oWmxxZByNCL4mhBrNA4ZDh2+rNluNMnSGKh9ixpFQ0FURWOkDZZlfTuPPLvN6fIIp8/u4EaO8+e2uevegzxzbofPfu4MQVqKlRYToKLjB//St3L82BrLxYy93YZP/f7n2d1pOHnyZp5+6hKf/9w+opbptGLryAqnbj/Midu3WDmywvjgBBXhyU9c5j9/4PfY3t7FWTh5YoMjt6/hZ0NqH3js449z8+itnD7zQY4fC5Q24psOawXrDCIVMVx5w7RNu4H49MSpoj4QI0iQtOQofdpU6dOLhhih65a40Yir240KIXZY64i+ZX7+SfxiRmhr5pfPs37klvSaiyF9LxkDveHFlQNGB44SUTweH9IvG/aYnX2WQw++lfG0YHtvns4nNlXDCkDF5Ut7/Mp/+QTWFrz2dccoCmE+qzl0ZIv91Tlt6xHfEaLhsSdO88UvnuV3fvNRLl+4nFyfKbHWYRAGlXIoPs1n//0HuHj6MrWBZQuzynPGK2eXyrxTllHproY4hQqlBTqg69OdlQXbP9IGMAYK0mM1GBa84YE38pd+5Ed5zX2vT6+3K6i+LAGpwHy+YG9vF+MEa0sQJRKJMab76g1GhIvb59neu8invvAxTv3KnXzze97La1//AJOVlfQ8p79FvsKZtP9duowYg1gwDiRaREqOHD7MD/3gX+TUqVP8r/+v/zeP
P/nM1Tfr5yRkJpPJZDKZ68lXhXx8PmVZcu+993LkyBGeeeYZBoMBd999Nz/wAz/AkSNHXpRsUdWr4vKPYmtr6ytWt15P1tbWXhXpmbmxvJTk41/6S38J59yLlo9FUfDN3/zNr+TmZV4FROQvAj8FnAV+kSTYDgL3An8O+GckUfW9JHn16POufkU4foOIDFT1yjssvpH0d/l//UPn+lf9MZ8Bfg7YAd4M/H3gXSLyzarqn3f5bwV+Hij62/YF4BjwHcB7ReSbVPVj/cX/LkkU3gf8k/7YPO/fL4d/DLwd+PfALwDfQpKEbxORtz7v/iIiDwD/hSRv/3N/u7f62/RhEXmfqv7SlznHXwPeA/w74P8C3kp6jN4hIg+p6oUvc51/D7wR+FnSzzH+FPB3gDeIyLfr88Z7X+Jj+Hz+CfA24IPALwFf+f/AMplMJpPJfM2gpJSZj4EYk/RJHwejkn4nBiUmSagBYkrfWTFYH9I+Yezw9RznKsRa4rLGb28jtOiqQdsWU1VojPhmiRaKmCQQMQ4/X+CrAXZUYsoCKUsUi68X2FGFG42IzRq63EN9g68VU1qkKFO7Y1niBiN8k/Yf8R7EIM72DiRiXYUxDnUdNnSoj0mmhgV4IAq2KhFVxDUp6RkVUw1x1RBZRAie0lqcDezHgCrU3YJWwHpH0A66Dt/URHcnT54reKoLXDYOnKcNkf/6ix/npteMKSaRO9+uDMYVRTFAxDDf3+fW1x9iEB3luOCW16wzXnEcOHaAi6cvcNcbDnHT75/m6MnbGW0NGW2OMIMC7z3Pfn6HL37oSRppOHVig7vvnvLIRy5ibcEXP/k0j3zkadQLe3szimVga7iOxHu5vPu7FLZmECzdYs7K5lraf+yTjVdTbT6l3VRDv9uovRwyBCIhdphYEoNP+5fSgCRBnDpYI0nBQQyerm1oLl3k9Ed/jfXXvREfAxr66lXpX3twdWPQOMfkwGFsUdEFoYkNEaEwcPnpxzj5je9hulIxuAArq6ssmpr9XU8IBjGCRKWed/zyBz/CU4/fxM0nj3Ds2IC2aVnbWqFrGurZki988Sy///vPMNtr0aAs5zXGOsqBxVrLdBTZCqdZPfMxnj23zZ4HuwIXfeTJOnK20SReNYnHKz8hqvq7VJP+Y7oQoTTKoJCUPA6AgTZCiIARjh05zHv/5Pu4+dbbEIHOB6wPmNREi1j7nID8ku3FL/sFj6r+/9n783DJrru8F/9811p7qKoznx7Vs6TWYMmWJ9mWbbCxjc2UAAFMMJDr5GYgwxPg5pKQC+SS4ZI8JCFwk5tfngyEYMyUGDMZ4hjPsy1blmRN3Wr1PPeZTw1777XW9/fHqm612t1Sy7aklrw/z3N0ztnjql1VrbPrXe/7snTuHCeOnSAvLXHeUHYKrBWCBmL0CAZrLGIqjBGquuH+A1/i4H99lBff8lLe8Ia38aKXvpSi03m83/HCvxyPn+fx5zF96TgG2ThL9B6JSp47Zmdnedtbvx3rLL/8K/8fX37gkatKC2u5NhGRtwN/h3Q/npPuO38T+KWLJxmLyKHxjy8h3cP+BdLE3f9HVX9eRK4D/irp3vsG0v31OeAjwD9T1QcvOe9u0oTm/zY+3r8A3gJMAF8Gfl5V//gy450mfYbw/aT79kPAfwR+HzgA/DdVfecl+3RJk7Z/kDRBV4H7gf9XVX/r6q5US0tLS8tzzTUnPgK85jWv4e1vfzu/8zu/w7d927fxYz/2Y+zatespo1bPo6pPKfrMzc09Z31509PTTyo+qire+yuub3l+sLa29pQOXIAtW7bwIz/yI3z84x+/6mPfdttt7Ny582sZXstzw98AauAOVT1z8QoR2TD+8UPj728G/sN43c2km4QPAN8KvI7HxcY3X7IfIvJOkqj2XuCHVXV40bqfJ7kA/zZJ9EJEZoHfAgbAN198kyEitwOfAf4z8HKA8Y3KbtLNzi+r6qGnfym+gtcBL1XVw+Pz/kOS8/EvAD9FEk0REUcSBCeAb1HVj1401uuAzwP/RUR2X8bd+e3Aq1X1nov2+TckkfNfAP/7ZcZ1K3Cbqi6Nt/8Z4MPAdwE/ArxrvPxpXcNLeDnwMlU9+OSXqKWlpaWlpeX5hipEiWgIaEiRmmIEjEFUkl6kcexcGvuXNCaDIzXdModhH7QhDmuijQgBLHQ2zmP9EB2N8NUQC0hZwjASwhoqEY2pDy8GJawNUSL5lEXGnZMaPNEHxGVkE9NApKnXIdT44QgpOpjcYjKH65RoaJKQGmJyPjZKiB5rDCJN6qgMiogFZ5Hg0aD4OAQJiJtEjMUUE2jdJ8Sa3M5gi5KibvCqqDF0Csto1UOMeGtpaCgnuzBMnYBZNsXS0g0cXh6y1tlJ7SOS55RTgY0vCsTuYZYHDf1HhWo9EoKlKLpUozWWjv8W3/kdryPLDUcPH+b4ocMsLGxi05bNKBHjVtBmmc/9rwOcOtcw0SvYduMUN+zdxPbNOQ9+8SxrCjteeh0337iT/qAhDAIHHzxJ1a8wmcGocG+ouGviFhbPPshUUZNPT6TIWmtw1hJ9INZV0rNMis6kNuMOzBShGlUwRGJoCKFGokOCR0KDE4PXHFMWXNwTGOO4/zEG6pVFFj76QeZueimuVxKDTyKwyRCXpVedwnnRcnLjNpwrifWIaJMzt/I1o+MHMUbYtn2OpYUht71kDxjh3i8+xrYdW5ienUEIPLr/BMsry3zx7gPc/6WDPHz/UV72mpt48P6TnDp1lmo0YjSEqmmIPiJiqZoBHTtF4XJ27+xRrOzD7P8CJ86ssFJDtJEza8rRRllsIoUxqCg+nn/ESX0xKKOxOFgYmHTjKF2FbicjamB9pISojFTZPD3Dt77lO9h7+22EqNR1g7MNTZOihUMI9LpdbOFST6gxX6E9Ru9pmgbfNHjfEEOkqRuOHTvK8tkV8jInVKfI84y8U1J2SmxuEPVETRMEnM1wJjlRm6bmk1/8KA/su5eXvuiVvO71b+FFd9xB0e2Mz3iRaHzeuapjBfbxOZHjPlmTOjrVkGUZU5PTfNtbv4ONGzbzL//1v+JTn76bxrdzHp9viMgvAP+QJBL+JikB6NuBXwDeJiJvvSShJyd9VjBHmsC7ShIQIU3+/WnS/e17xsfaSxIJ/7yIvE5V773MMHYBnwMeI90Pz5FEwj8Qkbeo6ocvGm85Pv/LgXtISUzTwM+QJuBe7jHOjPd5GfBF4FdJxuW3Ab8pIrep6s9exeVqaWlpaXmOuSbFx7m5Of7JP/kn/PRP/zSdToder3fV8ZKqeuHrSogIN9xwAxs2bLjiNs8keZ6zcePGK3b81XXNysrKczCylq8nq6urVyU+/rk/9+fYsGED8/PzdDodhsPhU+7zrd/6rReiilued3iSg+4JqOq58ffHxjMUv0VEZOysOy8w/iPgTePfLxYf+yRx6zw/Pj7PX7lYeBzzT0mzJH+YsfhIinOdAf7OpbMbVfXLIvKfgJ8QkRdduv7ryK+cFx7H540i8lMkN+NfGY8bUjzpDcC/ulh4HO9zQkR+Efhl0nW51P34rou
iu6MHB8Ipx2Nb0awtZ7ZoGJePZtzUHVM3OZqjXs7O1gfUXXb/B1RdJIxmAy+KpGVIgpUYmnH3rWy1NUwdcBI6ZEoVohp4iYgDGuxDJmJYVEGgKqCUEw1iGVYCQh2UJlkCyQFREDbUuOHViPcQ7ja7IRcojkFNGQUTI5JjRlINO0U9q2wjUWEYOqIDZickKsFqeoKBaPSmn2UyO4bNCYQAXfL7jygfdw5QUv4ew9F3GTGX9Si0zRIA1qLWSHtYq1lhCFuml56Z9+HfNz53jbv/8hXuQ37OSeiW+Y7s7BCjFnjLNUzQwkAYYUhuICdIZuscb7GfVsTQg9OQ8ldjZFNAS6IfL0lSd57+NXuXzFcZoycTB87utej959Jz/1//zX3HjmCum0LJYUIicn1xn6DQBnH7oPsSUbV1VZL075T//h37JZb/g//4//Mx/83cf40Z/4Abr+fXTrDs2KsRZUIJf1WDU1Z+68g5R021NaxO0E9KoYEaJANXW4Rfj/s/fn8ZJdd33v/fmtvXcNZ+xJ3S21pNZsebZlPGNbtsEMBsyQEMDcxHHIZXiA3AwkJDckJiEXuAkJEAIkJAQuQwATJhssG7DleZRk2dZkTS2p5+HMNe1hreePXUc6LtV0zqnu0y19336dV/XZtffaa6+qamv1t35rUxQBM6hgOKBDGa7vcK3hRWX90+5DIM89+JTCF0wzQ606RZ4XOINaDdrtFs45oijHtwoMqFQSsiLHfEHe9rRbLVyUUKlUSNM2zdYaa6srpGkbfLl0c+EDFjmKNCeJ4+59PSHLcsyBmXWXZC0rVMd5vaLIkcQJa2urfPPXvRVnjquuuRLvPcsryywsrDC7Z4Yi70BUIYkrXHnFofKLDDtdfSoiIiIXnBu9i4iIiIiIiIhsVh7Kf3TvFucRW/mTWIEzX4YEweFDeCqIDGVFWZZ5CssonCP3nryT0WwWrK0lZCGmCOC7/3MYdYuZcoGpJGd2Nmfv3gouqXHi7Ck+d89DfPHBIzTTFnkoykrKNCWuVKnUasS1GpW4RpHm5O0WebNF1miSN1rkjTZFOyVvt8vn2m3SRotsrUnebBKycmlOck/IPEXb44syaLAkxmpVCufJ05yQOaySYJUKwUUUaYb3AYLhg8cX4L0nWEFSrVKbionrSRmoWbkcqoscZoZFMS4qq7HMWblPBDjXDf7KGxPG3pMtnOHoww+wcPIEIUuHLMPZDSWdw+IEi2OiOCrvjecC5iIO3nATz3nr27lzxdG2iGgqwacpRRoIIRAnEeXSsAW+CBTeyDpt8nZG1mlTiavUZubIOh3yTgdzAYIn9wWnzpzi3kfO8OhxOJUGzuYFe2++mQOvuJFPffQ9nD1zhmZaELsyfExzz9FHHuTzH/kr0narm/oFsk6HM08c4c9/879y5sQxjh87wu/9zn/mj//yv3Pvo/exvLZMVmQUvrzvn6d8DSwYsYuZ3XsZqRmFz4nMCHmgk3s6GLGBi2Pmds8TdYNOQjlyVWAKqFBW4m7F1o66NHhfho9pnpfVwgZFXpB22qRZjg8BQlkJG7uEIvNkec5ao1kuxZwWZHlBEcqgPElifPCsLK9SiSr4UL6e1coU09PT+BCInFEED2ZELsJ7Xy5x7Mvg0Rn4YnTVY3kBRitt8eAjR/jt//W/aKy2qdQrZHnK0aOP004brKwtURQe52IK79mzdy+VSr9yYxEREXmmU+WjiIiIiIiIyHnwVCVjGUBGVt7v0daDR6wbIpY/FqxcyjQYrSwqlyNNEvKiSbPTpLUWaKUVfHAU3TYdUDfHjIuZdjmVKFCfcVQrMWtrK5w91+T06Yzl1UX27K5z6MBVxJWITtuTeI/Dd5dWddRcAnlBIIbYUfiy8i34rFxitMgIeY7PCiyKCQF8kWGUIWvodAhFBiHHojK0K7KUbK1FZDB98DJctQrBkbdbZdiClVWPRRmKmBlxtYKLYuIkLu9fGLoL1wbAJbioO05FDkWOOVdWVhUFAShCBK7AvOF8Rj1d49gD97L30BXsumw/sweuxCz+yldqveR0/YWjTHiiKCFOcorCk5vHXMS1L3wxncZ3ccd7fovXTZXVp0lwxN1wJwQIheEiCEVBlmbESZX6zG5i56gUddIiI83aJPUaFsesNFvc/eVjPHjcOJsVLPuCmT27ueVb3sznPn07Z8+cIQ8RUeJIkpzIeYoikC0u8t7//P9w8pEvc8Vzns/c3B4SH/GXf/IbPHz0Ab79nf8IixKOPfE4x44/QaudYRaT5gVpJyPLUnzhcc4BjiSpcuiKK1hYWMAFOHDZHKuLq6wVnigEpg2mZmeZOXiAh48vAGVVZEYZPsbdQUy2WPv45C04n2FC93Neq7inPi8hI0ocURxTFBkrq0sURUGe1cA5InO02m0IZVWq9zmFh2qS0Gq3aTabtJvdWtO4O24G5gKdToqziKQW00lT0jQtqx19WckaAOeMPB+v6hGgVo+JY2N2aoZd85dx8NAhWs0mq8urnDm+wPGTZ/jUJz/N3st2E0UJEDE9PUUcx3Q6WoRXRETk2Ubho4iIiIiIiMh5EMWGFUZMuTxoxaBiBRGeIkT4YBTByuDRnqp+zAiEDLIcioon5J5OmtNqRXgflcucAoajYgkxxrSLqbqUSuKpViu42LG0usK5hYD3Rq0aaHVWKfIMn8SYc3TSdnnPP28kSZ3CZ7hQ3r8x5BAsJ7gyTCryDoZR5DkUgahSw5yDtMBTkGdN8qxFyFJ8nuFzsFAQsoLaVJ3pw1dRmd9TXnOngy+KsuIzBPBlZZZZuQyoS8pqRxfFTy7haa5cHtT7gmCGrVeUOiP4HPNGIKeMY8u6uyiKqFSNgy6wtHyGx++/jyuuuY7a7C6SqRl8UdBqrbG6dJa1lRWmZ+bZtWcv1WodC2VFopkRRxF5FBMlMYUv8JHjultu4eRjj3DvAx/lxZfNYUUHSx1FCLg4wXwgzzPyTkqwGEdEnATyTpskTgjVKXwe8IUnEFhcXuTxM20eTjMWfUFK4FVveC1La2dpNjrMX3aYPZdPlffcPHWCYu04zTxQt8DKiaO897/9F3y1yvzuXXzd93wfN93ySu740l08dv/93PKmNzG/5zKSWo2HHn2MpZUVWq02nXaTtNPGRRV8nuOcI44rHDy4j/vvj3GR4+DBA6wsrZF6yAFvMDU/x569e/Ghm9ZaIMfIgKT7ExlbWnvVPwODx3UGFIUnistq3SiKqM3EVJM6UCEET3WKMuwPGXF1ilqtRty95+pqo4P3gVajSVHkRC5mdnaaZmuVdrPFrrndZL7DqZMrzExPE0URrXaLosifqnjsJrshBIpic4Ndr1XJi4y5+TmuOHQ59Zkpjj7xGBSB177y1ezfv5eHH32YI/c9QfKCKTJfUBQF0YZ7kIqIiMizh8JHERG5pJnZO4BvBl4KXE75xesvAr8SQvjtnn1vB94A1ICfAN4OXAEcBf4/4KdDCGnPMQH4MPA9wM8CXwfMAvcCPxdC+N1N9HUP8GPAtwLXACnwOeBnQwgfGPuiRURE5JJQu2I/zcdOE4VyRdCq
BZyV1XkeKJ6sYCyXvDQgC548BIoM0tThawV5p02WGVkeUQTXXcbVcBg1S6iZY8ZBHOXE1UClGpNlBcvLgbW1BBcFqtUM8Ky1U8wZcVwl73RwoU5cjahUKxQkdNprxAb4QJ7l+KKN9wZ4vM+BgiiqYEVW9tkH8qxDu7lK1m4Q2nn3vpBGHEdMze1m13WHqezaVVZ7Fh18KO8VGTxYlIAVWPCQJFgclWGFWVmNFzxGUo6Y0b0xY9mf4MBcguVgkSNkZWAYKChCTrCIJArMRoGbQuD0mZM8eM/n6RQZU9NznDl7ljNnTrGyeI5Wu4WLIw5deZjrb3wuhy4/TJJUyn5EURkkZxBFMd5DpRp43mtfx1/ffwdXNhrsTWo4V8FFhlWAEGg3VulkOUlUgyinKArS3DNdSbDcESwh+EDuPctrbY61c46FApcEds/v5qaX3MzJJx7l2ue+jOn53cRJBbzn3JfuYvmhE9RdoO6gk0O7nVK0Umg0+O2f/2ni6SlWG8s8cu8XePHrXofFVQ5eeRXXXnMTH/noB6kldfIsw/sC5zzORUBZGbdn1y6qlQrmHFHkqE9FtDLDdwIER1KtgquQ+/UgqwzM14DUYB7rLr1aVkXKUwofyNOCzAqm53z3XoyeJElI4ipTU3Xa7Q7BCqr1mOmpGnOzc2CB6kpC4QNFnuFcRGOtwdlzy1RrNea8ozpVZXGxgzmjk3XIGhlplpXhcOFZr+aNo4iiyDddYepcxMLCIklUp5pM0WitcMddH6fTLKjENWbnp0jbOc97zgtYaZ7jM5/5LLV6/Rm9lO4zkZldAzwK/CbwbynnwG+kXFH5k8A/CiF8ycwuA/4d5Vx8N+Uc/J+GED60oa0rgO+jnENfD+wBzgK3Az8VQrh3yLnfBfwM8DXADPAl4F0hhPdO/qpFROR8UPgoIiKXul8B7gE+ApwA9gLfCPyWmT0nhPATfY75A+DlwB9ShpVvo5zcfJWZfUsIT5uK7wY+ASwB/xPYBXwn8DtmdiiE8O9HddLMDlNOsq4BPgrcBkwD3wTcZmbfH0L4tXEvWkRERC5++17wQtrNT5OfWiUyR2QZZh4wCg8Foax6pFyatVyKtbznY1FAXjiyvEPe8eRtI8ui7h0eHZinYo4KMdPOqLlANQnEU0ZkxkqzzepyQlpUmam1SOJA5CoQOTpZShwnJEmVJIpxUUwePC5yTE3PkeUZeTulSFtkWZs8S8tKq8TKNkJ5r7+i0cZnHdJ2m7yd4r3HcqhUKtRmZ6hftofZKy7D1acItl41WRByD8FhZJg3rHuPRRcl5Z8tgLluzhgRfM6TC9gaBO8JwTBXrjVpOHxRUHij3enQ7qSkwZPlbfKirCgtMKbabb7wkRU+/dk7SCpVokpZzRh8wAdP4eHMiVOcPnGc577gFq697kamp+bK+3ZGMUlSIYQOeQGxJew/dCWHnv8KPvfxP+UNh/cThYjp2jQWAmknI1jM1Mw0kYsgBKLYkeUp5j2kZUiEi/DBsdZMWckDnRCYSYwrD19NXIm5/nm3sHf/AWr1Okm1ylpjifzsIs3CSAgksdFqQpoHogBF7mm1W6QrTbIAD9x7P1mnw1SlzuzsLq69/nruuuvjzFarpHlKoMAHj7OoXB44clx++RVMT02VEa8L7Nozj6vVWDy6TMUgLXLWmqvdZXOfWl44BdIQyAnstTKAbF3gz9zFzD95/1cjco7CZyytnGN2ehdFXoWao5PGYEaSJGRZQZzEZFnG3OwsM1fMs7S4SLPVoN3qkHbyMmwPjqmZGTppk3a7RZpmtDudcmlX75+8R2RZYBy6n5/N9d05x+7dezh+ooX3BXPzUyyvLvHFe77EudML7J7fxytf+QoeO/UEv/Zbv8z+g5dx9OgJrrz6cjpp57yMp5x31wCfBu4DfqP7+7cBt5vZqynnsyvA71OGit8FvM/MbgohPN5t4/XAjwMfAv43sAbcCPwN4FvM7LUhhLv7nPsw8BngEeC3uu3/LeBPzexrNgacIiJy8VL4KCIil7oXhBAe3rjBzCrA+4AfN7NfDSEc6znmucDzQwiL3f3/b8oJ0TcB30s5wdnoRcC7ge8KIfjuMT8D3AH8OzP73yGER0b08zcpJ1HfHUL4vQ193UUZSv6imf1ZCOHUsEbM7I4BT9084vwA3H777ePsdslaXV0FnvnXOYzGoKRxKD1bx2H9ukV22p4bbqK9ukzWuIuiWXSXoozIgTw40mAUwRMD3gxPoAjQCR48tHNjxkNWQJob7SIiBIezQAzUrAwaZyKj5lLiKFCpGt6Mxpqn3a6BOWLncUCepaSdBpbE5HmFerWOiyPytMCchxBhFhEnET6L8OYwT3kfyADBO7Ay6PNpm+byIkWje6/GEIjiiEpSZXbfLmYOHKSydzdRpYovcrzPCUWOz/yTaZW5cglZcw7nYlyUgCvbejLWMocRlYf4jFBAsO79H4MRiowi97TTlJXVRZYayyyvrdLs5DQ7OZ2irMxzMVSqCQvRKY7ZNLO7L2NufjfVep1abZqpmRkqUZ0077CytMwjD92H9wU33vQC4jh+8h55zkXEUUyeFyTVGjd91Sv41G3v4ablNa6eC1QaABEhQH1qirhWKYNSc+ALslBWo8WuvNdf8J7CB1aaTdo+kHoogrH/4H6uvelGatNzzEzvKZe4dVCZqlKZn2fFO6YLiDw4D+SGGWShe39BMwqD02eXWDh9hsrUDAQ4cfI4L3jei4gpCBk4i6EIeMrw2UJgdn6eOI6Zma3TbpRL89brFc5ZufTqaqNBc8FwBII9fanUNrAKVMxoPRNv4LgNoXxLEwwi58A70jQjsgqtZovgjbm5XVQqdZI4wRcQJTGNRod2uki73WFtbRUKA1dWNLaaTfIiZXVtmbXVVfKiIC883nuiyGHdzxG+vHeqL6czmxJHEUnsyHJPtZ4wv2uedruFhZiVRk4lTsFBnmfc/9DD7Nu7j1tf+UZW0lVqtTrttu75eAl6A/AvQwj/bn2Dmf0E8G8oQ8k/AH5ow/z4LylXE/qH3R+ADwIHQghf8R9mZvZi4OOUlY3f0Ofct1JWOf7khmN+lzLw/DHKuftQm5k3P9v+O3knPVvnJhcDjf3OuZjH/nzPmxU+iojIJa03eOxuS83svwBvAt5MOQna6N+uB4/d/dtm9s8pJzHv5OnhYwH8s/WJVfeYR83sF4F/DfwfwE8yQHdy9QbgDzcGj912lszsXwN/AnwH8MvDr1hEREQuFTOXX83eRovW0gIrDzxIyMrlVvMQkWPkoVx01VNOzssKve6jDyyveXbNQJ5Dp+UoQgQGgUBiUHGOxBzTrkU9SalOeapxXC7v2TEyX8HIiCNPFEV0sozFxjK1KCJPc2oHq5hLwHLSVoc4qRG6/0qQtdqkq0tYXKEyNUVSeLJOg3ZrjZAHimZBupZjwZFUHZVqTHWqxszB/Uxdtotkbg6LKwQcPlDe29AHsO7Kqc5hSbnUZ+QizEU4V9Z/EgLmXHl/ulAuU2tm4Lr3ejRXLq3qPa12h6XlJY6ePs3DJxc5cq7F6UbBmU6
gVUAnBCIzDlWMG/cay3syVkNGsIQ9ew/wvOe/mBe++FVMz8wzNT3DmZPHePjh+2i0Fjn+xBHqU1NcdeV1mOtWaLruMqyhwMw4ePXV7L3uJj72wGf4zhddQyfLqFVrVKaqWOS7y8aW99jzeYZ5j7kAUYT5jBASWq0mS40GWVFWweIcBw9dwb6Dl7O0cIbMT+GIybMMgufK59zA1FWXs/bIEYrMURQBH8ACtCyQhfJ9lLgyKHz4y/dz8KqrWV5a5PhjT/Dq17ySqVrMVLUOBnESY8TltZkxVZ+mmiTMzs1xbmGFqXoEzpFThpud5hpxtVwuuPSVAaMBjQB1U/DYy6wctdg5arUaleoUU7UaU9OzmJXvkzLkDlQqCd7nHD91lCiKWVttlYGlOYo8Y2VljSJPKULO6TOnWFhaJHIOH8rgEQLel0l/eDIhDpuuegRwUUQnTcnSjPp8nanpKVLfYt/egxw7cZZmq8Wp02doNTt08oIvH3mE/bsP8pKvegUPP/EYH7r9w5MbRLlQjlCGgxv9JmX4WAV+bOP8GPhd4NeBl6xvCCGc7tdwCOFuM/sg8BYzS0IIWc8ujwE/1XPM+83sceAVm78UERHZCQofRUTkkmZmVwP/jDJkvBqo9+xyqM9h/Wa/H6P8d5qX9nnu8RDCo322304ZPvY7ZqNXdx/nzexdfZ6/rPv43BHtEEJ4Wb/t3W923jLq+FtvvXXULpe09W+SPdOvcxiNQUnjUHq2jsPs7OxOd0EEgGR2itqBPey+6TlY2qL58ONkRXfJ1VBGNj6UgVx5D0jDh9CtgAysrEJrT06RQpY7ouCAgLMy4Km4gooV1KKcWgS1uiO2mE7WIU0TLBiRgfMO854oBJYa5/B5YDpZYWZmml27auUSkJHRaTcoXIyjYG3hNKHImJ6aJapWCXmGb2W0l5qknYKQOeIoYWbfFLXpGpX6FNVdcyS7p4lrU7i4AhZRFAU4g7ybeESurHg0AwdRXMWF8h6OuHIJ1UAgUAZ9vijArLyPJBFmgbzIaDdbnF5c5uGjx7nryBL3nmtzthPo+BhcDW9G7jM6vkPhPaGA3U2jUTXiXVPUqzPccN3NXHXoRp7z4pcSxTHNxgpZsYuX7X01p48f4+iJhzlx9AlmZubYs/sywMrwyAJRHOOA2vQ0z3vxS/jDT3+Ms42UK2fL+z76PIWsrG4z5/AhI+t0CKHA5xBbTBQleByrrVWW1gJpCN3FdyPm5mcJwdFJc5onT1KpVknbHWbndhFXHNN7Zzj5sLHSgrSgO4Zl2BpjVCxgAdI04/ix46Rpi4995CPsmt/DlddeRdpeozZbwxGVY+uKcuwtolqvMFWtELmYuFolrtWZma3x6KNnyHNP2uqwSkHmA0+/W8F6iB5olW9tdiKC3KnzDmNmxFFZrRxXHet3xQxAu71GCEajscrCwmkMKLyj0WjQbjWoVetcffW1VKoVms0GS4vLrK6skuUdmu2UTifDF57MiicDzOB5WpXjVgtRI3O004wQYHp6hlqtRtHKSNPy3HGUc/ToUTqdlFa7w0OPPM6BXVfzlre+jW/75rdxzxfvY2lleZsjKBfY50MIvbdtPd59/HJvNWMIoTCzU8CVG7eb2VuBHwC+CtjH0/8teh/l7VNGnRvgCZ6aWw+1mXnzs+2/k3fSs3VucjHQ2O+ci3nsz/e8WeGjiIhcsszsOsp7QeymvI/iB4BlyhDxGuDvUH4rs9fTljYNIeRmdhbYP87+XSe7j/Mjurq3+/i13Z9BZka0IyIiIpeQwmVEs1PMXnWYqIB8rUHj2EL37oUbf+zJwCbvRm8BaDQj2lmOFWAuELkMfExsjsQyai4jcZ4k9iQVT7QeYuYRRWpEVi7RGrlA5IwEw/JAs5VT5A1aWcpcKKuaakkd5zq0W23aKwt0GivM7dnHzJ7LcHFEZ22FdrxEbXedeqhQq9Wpz9RIatO4xLBaDTdVxSUVXBRDHJf3o/N5eY9IA1wZwDhzmEVlNSHdikbip67cPISEEFIscoRgYAGfe9rtJqfPnuFzD57gjieaPLSUci7LKEK5nGtBDkXWvS+mI3HdZWoNWrHDW8T+Xbuo1+t8/MPvo5JUecUb3kxrrcmX7ryTB798D/sP7OPmF72ETt7gxOljnD51gtnZ3bgowrkE5zw+FHjvcc5x7Y3PoRNijiyscOXcLHmeQx6oVCvgXBkmFylFlnWDyG6gGkW00g5nlhdZanoCZZWmAfXpaSpJhQMHryGOYqq1GkWRkyRVWo09HLz8Gu4N97CUlvcRrFEuieqCMQ20AxTOqBo0z5zlzs99kr9672187997B1MzMzRWz2HmKaPuQGTl65VnHQqfQ1l0yoHL9nLq9AK7ds2WfSaQ5QXtjh+ZZPVLDi6Uiy14hLKqMS8guEDRzDiRncbMmJ6eolZLiKOYdtqmudam1eyQ5U8da7bGsRNnAEgSBxhFESiKniVUuxfue9fC3SYDGo0GUeSYnq7iIkcI0Epb5HlOFCVAgYtctxuBM0unOX78GG/+ujexf/chPv/Ffrf2k4vY09Li7py573NdOZCs/2Jm/wD4eWAR+EvgcaBJ+U79VuDF9J+vLw1p343RdxERuQgofBQRkUvZP6IM9v5uCOE3Nj5hZt9NGT72c4By4rNx/5jyW5crA/bv52D3cdTXeNef/wchhF8csa+IiIg8UwTDVWvM7Lscl0F7ZYmseTetxSYh+CezG7P1JSxDGQYFcBhZYSwve3ZXIHIBhyOygsQCVZdRjTOcGbELxHEgqlq5vGeAyCCygDOHA5wrf2YsppUUmINWmpXncw7wJJZQRDltqzA9fxnV6XkwT5HltBeXyBsdpvfsIqpUqE7VSZIqeI9LEkgMi8rlU4ki6EasFiJ8kXWXUA1ELu4GFGVlI2ZYMEI3iTVvBBxYAYURIof3BVmrzdnFBe788ik+80ib+5Y7LOcpechwDubiOnuSQJFldPKMeMaIZj2nTntCqDJTDazlOcVKg3Mnj1Kdb5LEM5xeWGBxZZGKxTxx/ASnFxbJ8zaVeo3ZPbshOBYXFml32tRrdcxcGXT6cqzxgX1XXcnU1AxPLHfIDSrBU6nExJWEPC9IW218KHBRWRUZkZJ3coq8YKW1yrnlJssdIwSjSsC5iBA8uS+YmZ0jSZLyXoG+/Df3+swsL3r5y/nAn32ApTyjasau2NhbT3jufI29c1UqlZgkMYIzlhtP8Be/f5SvedNbOLD/AElUIY6S7q03A+W/5Yfy/eoMC45pn4F5ZmbqPPzQMmer5b5ZgOA9vrCnvd1ltPUAEsKT4WKruYIZuKgcU188fWnUjb+n6ebv2bhdURyVlb7VhEpSwQyWVpY4e+YsZoYvPLGrMFWvkqfl+sGnzpzkM5/9HK949au45RUv5cbn3XTB+y07pzu/fhflF3ZvCSGc6Hl+rApGERG5dCl8FBGRS9kN3cf/3ee5Nww57g08/b6OX0259tFdffa/2syuCSEc6dl+a/ex3zEbfar7+DpA4aOIiMizhLMIh8NqjtrevcxdcwO+3eHsF+4hX0kJ3a
UxXVmnRxFCd3XS9dpHY3WlyvyeHDzElhOZJzGjEuXEkSd2EMcBF5fn8yHgKIhCGRY4Ipx3RN7jXEzNRcz6gk73How+97gYsjSjsXiKtN2kPruXWn2aOEkospTWyiqrp86x69AVzO3fS5Gn+GCkaUoEUImIXFTmjc5BMCi61YFFTig8LkqAHBclWPBgZfWWUQaVoQjlkqwWMJLynnUB8k5GY3WNex85xqcegQdWZjmx2qJtLWr1hEOXXc2+PQchd9Q75yia5zi+sMC+axwvftsUC0fr+OZXcfzxx3n0oftYbDVYPnGGuUbOG77hddRnd/GR297L5ZfvY2X5LHnmKYg4ffJxlpurxHGV1dVV0rRDvV6n8AXBe4qsoPA5gQJXqzA9Nc2ppRN00g71ahXvA63GGkVaEEUJSVJ5MkBycULeyfA+p9leY3E5Jy3K8DnuxtCdfI1zZ45Rm57pjpDH5wXmIoIPXHH4Oqbn52mePcN1sxEvu3aOqw9dxuzUFDijyAqazSWOLq/y4ftO8YZv/zu86mtez+OPHqEytZ+pmXl88BSpx1XjbgluuU6qM8MVnoQKFqBWNY4+doI886SA81B4371H4YX9TD1ThQBFfnEOphnMztapVRJm9x/kqsvLVTU7rRbXHj7E0vIaaScjshpmDu89laRClqUcP36CY0ePk1TqhB2thZUdsA/YBfxRn+BxhjFuGSIiIpc2hY8iInIpO9J9vBV4z/pGM/s64PuGHPcTZvbeEMJid/8a8NPd5/5nn/0j4GfN7LtDKG+cYmbXAj9KufTLbw/rZAjhc2b2UeDbzeydIYRf793HzF4InAohnB7WloiIiFw6PB4fMnzwxLvr7IoPU02qBB9Y++xdRCnErqxM9JTLrnoCBeHJJVmzLKbTcWXIZ+WKcwFHoKyGrNYL4igQ1wLe2hASfB4B5a0WY8uJooA5j1HgcMwmdZzLcFEgLwryLGX17ClWT58lDzmBmEocQ2y0WmusnjhNZWqamX27CT4nSuo4X5AXAavEeAIuKpdUDQGCBULh8YUn5OUSo85F5TVZwKJKtzAyIpgn5L671KrhgyfgKPKcVrPFY0fPcseRDvee28PxRoPFzhGuvfZa3vym72dufo5Tx45SBM9aq83Jo8egXmU2rnD85Crn3u2pzaasLn6SfLnN7vndpHGVc6srVAu47MorqUxP02o0OXviNDHG9TfeRKuxxLkzxzh57z1ccfgqqvE07Wab2ZkyHPI+UPgCXxQURUZe5NRnqpw8mhEwzBx55jEgqVa7y82C90X5qhYBM6MIsNpaZaVp5AFijClXBs+rp89x3+fvYH7fFUxXDeccqc9IG2uEwjM1O8ULX/Q87r/9oxzeE3P44D52TU+T5xmN5SUWlpc5tdTk4+c8V9RneMvXvJpGZBx99BECOVccvpI4SQiU/SzyAkcEDkIwOmmHWujgQ8Zqo8O5Ro7D6JhRA8jL96c884UArXYbT8H+vQfZvXeGTrvFgw89yNlTZ4hcWZFbSRxR5Ci8x2HMzU9TqTmWFpdoNVeIYq2W+SxzmnKJ1ZeZ2UwIYQ3AzBLgFyjDSREReQZT+CgiIpeyXwb+LvBuM/tD4DjwAuDrgT8A/taA4+4D7ukekwFvA64H/pynV0QCfAF4JXCHmX2A8huc39l9/KchhIfH6Ov3AB8E/oeZ/Sjwacp7WVwJvKjb71dTTtJERETkGSDLOxSFL+v7koSpy3ZTtRrtxirnjh5h5YkFXABnZcVjHsCHMnwyjMjAfESax1Qso1x41XAGlbigUvHEETgXyvvIZYYlERYczsBREIKRpQ5fBIyAOYiISHxGEsXkWUpzeYHGwgI+L5ianWF6dg7vobW0hM9TknrM/L69ZaBoHmcFFhu12Vksiig6KSE4fJE/uZwnwT9Z6WRmWGQ4F2OhvFdcCAXg8XmA4CEyfJFDiMjSlDNnTnL3w+f44tk9HGns5uzqUVrhNNdf/1z+zvf9X7zl7d9M2uxw78c/zyc/8hEay6d5/MwjnD5zgtXGKo0spXMsEAH1CGYSY3p1hfm9c5wtHCcXFvnSF+9m9979PPe5L+Ilt7yWrEj5zGc/weKpx1leOcf9X/oCq40VDl9zU1nxSBmqFnlOlmfkWU4o8vI5F9PKPZ4Ii6BaqZeVhCFQFDkhgM/z8vXIy2Vo2+0mjWbOWtvIQoGnXDJ3Cs/Zh57gikM3sLKwTFrvEAI0G6tknQ6B8tiXv/q1PP6ZO4CcYAXNRoNmY5mz5xZYWPJ8vu2Zrk/xN24+RLK2zDIxs7tnueuzn2X+st3UKgm+CLgoInJxuWyuOQjQyToURU4cJ1STmIqDOEDRXdI3eJ5cKlieDRyrKw0+c/JOVtdWeOs3fxPXHr6B5930Er786P185rN3UITuPShDIElinve855K22yycXaDwAZ/no08jzxghBG9mvwj8OPBFM/tToAK8EdgDfKj7ZxEReYZS+CgiIpesEMIXzOyNwE8Bb6X8/7W7gW+nDPYGhY/fCfwE8HbgCuAY5f0ofiaEvotHLQLfAPy/lGHnHHAv8B9CCL87Zl+PmtnLgB8BvqN77ojyHhj3Av8Z+OI4bYmIiMiloQhlYBXFFWrVOWr1GcwZ9eXL2XXzc8jbd5Oe6ZAFyEJZ8bi+4Gr3jokQjE67RlItiIKn5mC64qlWAtXEQxyIIwgR+OAIVhBXIIkD7awMs0KANIOqd0QYFgK1yjSV2jSdNKe9ukrRyYhio1avk8Q1fJbRXFwitHOm98wQVRIKXxAnESHPcLVpzDl8keLqVcxVyJtNcmsRJZUn7/HoXIw5sLJWE5wrQ0wfsCgCK+9xGfKCPMvptBo8cORxPv5Ah2PuZk6tLnNy6U6mpqu86Hmv5g1v+ib2HDzEwtGzVHdXObN8irvv/Qx33/k5zq4ssJbnrBHIKYNcgHphFBiFFeTLK8xP1zmx0OLTH/sor37jW9h72eUcet51xHHMI1++m/vvfIzP3/MlTp44RSfrsG/35RiuW+lXVmdmaYc8zQi+oChSslYbfKAoMgg5vshYX5fU5znBjOAhFB6jvG9kO22yspax3HG08DQDdIBrDlzF1dfcRDWv8vhDj7LWOUen06LdaJShZ5GycPYczeWCF73itaze+yEajVXW8g5rq03OLQbuaQQ6tRp/74WHuWJumta50yzHCVddez1pM+e2P/ozvvZtX8+ePfuJKy2SuEIInogYzNNsdtiVFkxPl69zCFYGo1beITIJ0NSaq88a9XqNvMgJ1mF1bY12u83VV11NfXoKq2R86Z4v0klbuO5yys12G58WHD50JZFz+NzjTe+XZ6GfAM5Qrkr0/cAy8JfAvwR+cgf7JSIiF4DCRxERuaSFED4BvGnA031XgwohdCgnPP9yE+c5DnzvGPvdPuS8q8D/0/0RERGRZzjnHc5ikkqFJKlgzkHkqOyeY+7qaymaHc6l99Ja6jx5r0frxo/uySrHgCsSQhERxxmx81gccEkAB1EUcJFBZIQsUBQB5zwuckBBgZW3YMxzssIwn2DOiOMKMZClTYpOE/NGV
DXiSkwIBUVeYEVgevdu6rOz+CIlqc9CFOOqgAtkrZRougquiuGIa1NknUa5zGqU4FwC5soMrrvcqCcneP9kdaQvCoosI8tSFs8tcveDj/PhJxJWq8/h9LmjNPPj7Nu/jxc8/+Vce+NzqdWnybI2t737D/jSF+7ks3d8lsdOHmMpbdMKnrKecj26LWvzsmAsFYZPy3GqRykz1QpLi2e5567P8NKX3MLjDz/A/Xd9no/91V/wqTs/x7mzC1SqsLa6gveeWrUGZngK8jyj0+pQFBlFnpF3GiwvrxLHjthCeR9FfFnxaWAhIgRP8B4Xx2RZSgjQSTPOLVv5unTvfjlrkJ46wV2338Z1N7+UYtcsH73zdrLck2UFmQeXF6SZZ2Z2mre95ls48dB+jp8+TRIV5KnnMR9zNol45wsPc9XeeQqMdHWJaO8hkjjmxa96GXEUcdu7/5xbXv9Knv+8FxPPJeA9RV5Q5DntVpss81RrNcDIAhRAYpAA+81omwLIZ4ssy6gkEVP1KktLaywvLzM1PU2URDTW1rj5+uu5/trr6aQdbv/4J2isNTm9eJqTp09yww0vxWOkndWdvgwZQwjhCENWVQ4hDHvump7fc+A/dn96vaP7s5lz3zroORERufgofBQRERERERE5DywyYoupVOq4KMIH8KEgqlWY3X85ISvorC7RXHuYolPezTEyIwrgusuuJmZEGHGoElxGYYGiMNLUqEx5InPEQERMYRm+AIs8FlPWUIaIonDkqdHsZMTVgiieJUqqFHlB1lnDt8FCoFKpUknquACxM6oHLqe+a5qoWiMEj6tWiCoRIQS8D3grCHmEM4+rGCEYSVwja3fIs4y4UsfMg4EVYC4CDO8hhEBI22StDs1mg+Onz/CR+0/y2bMzVPY+n7MLj5DbOa699nquv+H5zMzN00kzTh4/wv1f+hx/ddt7OHHuFAt5myZ0F3h9cuQxwJlRpUqFKrPxLqpJnaqLiGyZYGcxjBe88AV8/K/fy21/9DukecaJUyfIspzZ2RrN9hoWHHv3HmBqpl7ek9J70rRDkae0m018lrF67hhLy6vsSyJiZ0RRhLMAzgjegy/K/gC+KAjmyYqc1VaL5dUKZuU98ipmHIgjDkcZSytLHH34fp731V/PvQ8tkgXPVK3C/Gydg3v2smv3HLPTMzQthYOHeOj+4+yvGatJxIMdzzdeu4crd1XIvAdzkHaYqk8TJ45arcbrv+FruOq6w7znj/6IvFHw2je/gazIiF25lGqWlRWxUzM19u6a5eSpFeIALhgYxAQOOuOYh44CyGe8rNPhsgN7eOLoSbKszcmTx9i77zKSuEqeZ8RJQm26ijffrd0ul+bNQ0HwOVCwuLKwsxchIiIiF5TCRxEREREREZHzwEUJFkEUxUSJA99dUtVBMlennu9h6vDV1M6eoXN0CXyEI5B06z4cRkw5cXdFhOUJrp4xOx3YNZOQRI6iyEmLlGbIyAqI4oALRpIEYte912CAPDesHcimMqK4TVKp4YuMbLWNbweqtYT69AzVqWmiaoWoEhPVYiyKy+VD84IQoCgMs3I5VTcdg3e4OMH7lAD4jsdnnrSVwbTDRRGBoqyE9AW+G+DlnRbtRouz505z7xNneP/Dy5wo9rNr782cXXyQynTghc95I3su208UOXCO5aVz5HmbT338Qzx85hirviDtjvXG+w8aRkxMjWnqNk3VzVGNZ6hEdapJjUp0Fa6+APXTNDpNHj/+OPV6nQOHDuHNs7hwhrzTJnEVLj90mOe+8AXUalMEX4ZyeZqS5yl51iJkGWeeeILVRoubD8xQrdaI4kp5z8vC4wmYD2ABX3iIIPiCNG1y9mzGQsexUHSoRJ7rphJu2mPsmY/4wpKx57k3c+CGq3jdi5/PWrHK7t27CBbwRU7abtJq5RxfeIRjZ4/g16DlYh4vAjdXjMNTjqpLqLqIDkZEIEpcuYRqEQDPC255MVOzsxRZTlHkOBfjC0+WZ2R5xtrqGpftnWH3nnmS6DihCDTxzJZ1nSQhMI9xWvd9fMZrdTqcOn2WTjsjBHj0kUeIk5hdc/uIXUK7tUY7zfChAAIhBIqiII5ivA9keYf77r9npy9DRERELiCFjyIiIiIiIiLngxlxVCGOE1xwFL5cZtTMMJfgajVq+y5j9rprSRv3ERbSsrqOsqIxsjL3MzwWCqKigg8FSb2gVosxCgiOkAdy7/B5IPeeqOao1gL1JKMoHIU3imDUokAnK6hUjNgl5K022UobF2Bmzwz1ud10fM7yuSV27d3NdGWmDB0LDw4s5FieEyURZuA8WBwoQouileELo0jLSrtKtUaRZfiQQuEJcUERjLxTkHVaLK8u8PjxM3zy0WU+fqbDqtvN4StexHLjCAevvIxXv+ZrsbhGq7OGLzJcAmtnFvjMZz7BQ6eOsVYUT1Y7BgALT8aPUTASEhKr4agBjizPSbMFGu1AFBmzU7u5bNfNxEwxO72HkycfZ3pumqQSqFUSKrMzTE/v5vVveAvPee7ziCyiU7RI2y3yVpuinUInJ2Qpj3/5UfI057o907gigyinTBnBmcdiR+EDzkUQCoospbXW4vGznlN5xrWzMa+8dpbpugdXECLHnqjO3J5Z4umYw9dfyT333sXq4jmajQYrqw3SVkZaeLxF7EmissqyVme+scbVLmImqlKPE7w58qIgFIEoioiclcvDxjFxpcK+/QfABYL35FmKzwuyLCfrpKwtLrP/usuZnpmhVo1Ya2a0MGohkHTv4lmFDbGvPBPtmp3l+huu4r4HHirvoxqMU2cWuOKKBt7nOBeoJBUIgeDKhY/NOfI8w0UJoYBOq83nP//5nb4UERERuYAUPoqIyLPGVu4RMeyeFiIiIiLDJHGCc47IxaRFis8K8jwl7bRI0w55CEQzU0wfOgRFht37CKsLTUIo7/8XGcQGziAywzxkzQonF1uYNZitlFWJcRLhAKKCgGGRkdQi6nM57SyiCBF5GkPw5HlG6ttg0FpewvuCuf1zzOyfZ6XT5ssPL/DgYx1e8IKzPOe6G4ldVIaMZhA5cIE4inGJw0UxVhjmHFk7J+t0MBy12RmiSozPHO1mA8tTCDlZnrK2ssyZpWXuenyRjx1v82Arx8UzXHnZ82m2z3LTzTfx1W/6NvZffTlnjh8jXkk48cSX+ehfvocTS2c4naVk3YUd13+gDMA8YN160ciqxFYjp8D7BpF5fMiJLcJcjWa7w9lTxvz+hKn5KQ5FV1OrTDNVn+HAgWs5cOAKXvLSl/HKr76VmblZsqxDlqa0203ytEPeaRJ8RnN1gXu+dA/TiXHz/jkqcb2M5aKAezKVM5w3zFFWQ2Y57XZO4SMO1xLe/NwpDl+xmzxAp9Ok4Y1DB2/i5Llj3Pbbd3Ls1CJPnF7CW4DgSCoJu3ftoTZT48SJ07RbOT4rmFtd5ZCPMIuZmZuFYGARed7BooipmWkMh4tjkqiGBcACeMMHj7cCMNJ2A5dlrCwu4bxRq1aoVWNWGxlLAaoWmLNyvHMrA8i20sdnJDM4fOVubnnpC1hrrfHg
l5/Ah8DqWpul5SWCzymCBxcRR44oqjFVm4Kigw8FIQdPwcmzx7j3/vt3+nJERETkAlL4KCIiIiIiInIeBAMzRzBPWX3XptNu0W43Sdsd8qwDiWNq756ykixt4+89QnvNAxARnqqeI+AxirzC6lpB1aVU9hRUXEQw8Bi4CDxk3ogjR6WeU6l42h0j90YnM+pVAypYmkMjY9fuOXYfOshqu8ETx5a482FPMKhUElYbK9SqNfA5FmK8FRQhx+dped/DoqyA3LV3nunabFkNGUVAUd7vLU9JWw2WF09gRUyj1eaxUyt88nibz61krBSBamWWyw+8FCPn5a95Gd/29r+Li+ocP3oCZzFHj9zHh9//x5xYXmApPLXMauhTbmc4YhJqNsNMfICp5ABF0aYIGZVohmoyw1R1mkolxhdtWu0VHv/yMtc8v8Llh65k7759XH34Gq6//jlcc8NN7Lv8AHEckbUyisLTaq7RaayQd5r4Tht8wYlHH+OhI8d4ye5pLp+tEhGVnckDwRllLEpZVVi0sOCIo4ikknLdfEa1WuOqA3NUKlUcMR7DNVYplh5h6cwizYUWzdWMNPXsu2yGud3TTM3OsXvPXlwcsXzqLI12QQzUWuCTBG8xIRi4mDx4srRBYuV5iwKKIsOHDp4Ilzh87gm5J+10cK4gazeYLwpOLC2DL6jXp5mqV8HahBBYBaoYDqhGsD9ynE4Dba8E8pmmXosIYZU8W+Xaw1dy7ImTrDVScl9w9PgpXtBqkKUdvM/LMNtBtZLQaqblMr55QZZn3P/APRw/dnKnL0dEREQuIIWPIiIiIiIiIudBZBGYo8hzijzQXmvSbK3Rabdpt5v4IsdFRn1mlmqlThxViKhw6ov3U7S6wZVZeZ8/Ag4jDxGt9jQLkWd2LsOigmCOAoOiXALR55BbgUsgqWd0UsgLKDpQROBmIHYRu/fNMbtvHyutVc6cW+CLj3iOt42XHsqYmdrNSnONRmuVkGWkeWB1tclSI+fcamCp6WlnjsQirt1/hpe96HL2zu8jih1Zp03RDqycO83K4gLLK6u0Wh0eOQsfPOt5JM2IIqhW6hy64lUEn/O8F1/PO//RP+TGF1zHicdWqFQi3vN77+ODf/GHHF1bYiV4PBBC9/6OFij/Z+UytkQkVKgxzZTbw1S0l1qyB59kGDFT1SmmKlPUp+vUahVcgE6nQe4bJGmderGP2egg1179XF7yiudSq0/hDPIsI01bLC0usrq4QNpokDVahMzTbq3wmU9+Gp9mvPWFN1CrJlSqNXyeUxQZhpWFhSF78j6ZRZaRJFWyjrFvPuHKq/ZQm5om8zmdLGO53eDM8jmytGD3Wswjyxl5nDC3y7Fr3zSVWp3gC1aWzxEC7D4wy1ylQjVNiWNjtTCs4mmnLZqhDsExPTVHdW43yy7G+wKCEbwRuYjIHOAxH3A4CIF0bZVZck4vr5K3WlRrEbWpKp1utWMrwAqBuchIKuV7/bIqnO0YrWdYAGlWvueejfbN7eGG5xzg3NknOPLoA1xx5XO4+qoDPPLIMfIi0Gw0+cLn76JSiwnekRces4CZMTNdpVarsn/vXorQ5vaPfIQsy3f6kkREROQCUvgoIiIiIiIich6Yd0CgME+n06DdWSFtN+m0W+SFhxDKZUBjoz6zm0pSw0IgazRYePAJfBYRggcCPkCGIw1G00NjrUZ9oaC214gq5bKZAB6HEeikBZELJFWIEk+eO1qtmBAKpnYblco0szO7aOdtFpeWOX6sxaMrVRJXsHeXo5F2OHN2kcWlgtWG40wbTrdzmnm59GfNlfcaPDDr2DsXk+aeVrtF7sulXddWV1g6c4pW1qbTSnlsyXjfGc+JzBPHEEV1Lj/4GlwcceVVe/nuv/8jHL7+GnwamKk7bv/EX/Ke//1bHF1bYqkbPHYLQbuLrgJmGIYjosI0deaou1mmKpdRre4mjiKwmHpcpxpViSsxzscEH+GSCrtnd5FUjGpcob3kuP/kWR76wvtprC7zxm96Dc4i1hqrLJ5bYHlxgdbqMu2VFXy7jc9SHvzSF/nClx7kLTddzrUH9lCfnsIoqz9dEhN8RshzzJdLUqZZwFNQqVWZ2z3PgQNTVKsxjbRBFiKqlSpTzODcErvrCTdcfgVPRCvMHNjDwuIitakpfAgUviBN0/JcMdieCrgpGlnKWpqzPwEfOxwRcbVOURRE1Tqh8ISsgBjMBfIsJ4SARY4oqRD5Al942itrJHnOZQBnF6lcsZ/5+TpJZGRFIAdaBrtqELmAD0bsYU8Eyxhr3QByve7TX8gP3YQ9W4PHJE649fWvY35fhU99eonHHztJmsHc3BS7Zit0Whkzc1OsNlYoVgN75vfhi4A5mJ6uU9+7jz1ze1lbXeThI1/m4YeO7PQliYiIyAWm8FFERERERETkPAgWIATyTk47bdNudYPHLCdQEFmEIyZOKiSVKSrzdcgK2jcsUTQarB1dBF8GkOZcWV0UIAvQKiKOr1bZN99mthZRZGUwF1nAecO5crlPiwJJzUPbCIUjTwM+9VSnahQhsLCwyOryCscWKyzlBbsT4+xyyrFzZzm26DjWhiWfQgjMuYi9lZhD0zFXziVcdaDO/n3zzO2apzBYbazgV5ZoZy0Wz5xlZcXjfeBEK+KDZwseTT1TCWARB/bcQiXexa49Hb7+276La55zE2kaSLOMv373u/lvv/TvObq2yJL3FBvG9MnVVrv3UjRzVJmiyixVm6YSzeNcHe89wQJT1SmSao1aPEUcJ7gkolqboj41TRyX45RmHc6dO8Hy0mOsLT1BUjvHzS+7nko1ZnlpmZXFc6SNBulag6zRxudtVhbP8he3fYgZC3zNTQeZqdVx3hEoKDJP8B7Iy/tiurKC0ucezEiznKhaIzUjthhcTBxialGFVuiw0uiwb2oXlWrCwav2cH+zQxzHRFGF2Dk6eRsLMUXexqzC0soKa2srZJ2UJDFmZmMePXmOqek5pi3BRUaRxORF2h22CGeOEAoIoexXHjCLcLHhWy2mXeDA5UbUPMdqfCW7ZueZryfM1eostxrsIqfqADN8KCtrI2CXg8xDClSAqhmdEOhcqA+dTMQNh6/huS+6gUZjhd2793K00eCJo6eoVmpUagnOrFzlufslCiv/usF7D+apuDpf/erXcN+Dd/P+v/4wzWZrZy9IRERELjiFjyIiIiIiIiLng5X/OJ+mHdrNVdqdBmnWxgdPeTtAj0VABAFPERW4uWnmr7yGyDti7qZ1bKEMMENMFhydAKn35CGw2Kxw9FzB9dPlPRADhi8MM6hWDR8MHwWqNY/PrawaDI7mapvVtUVCnrG4ssjisuehpudMSFlMjUcf97SCJ/OeyOBgnPCcuZjDu2P2zcdctmeO+bl5pmp1ojjC4ogsbdJcOUdzdYUsK1hZzllYiXkshU+s5pzxBbUYChx7dj2fmd2HWF35Mq99ybdz8JrrWFxaJlsx7vvk7fzyz/8UDy8eZ9F7cp4KHM3AP1n06IiIqVCnxjxVN0093k0l2UUcVYlcTBLXCUS44KhUa1Trdar
1CoQyJMwKR9FqsLx0itNn7mdl7UF80eDUwjwP3H8PtXqFvJNRtFrk7TZ5swkho7F8jtveexunTpzh7730Ci6freIKT+ZTsIARESURLq5S+KI8NmtTeI8Pgcx3KPKUenWOmXqdxsIqWZEzXamx1ljj3JqjMzvDg03jidwBjrhaofAFIW9hgbJaMU5wgA9GmmcUwZO1Al9uFCQrS+ydOcahK64ktgpxUqHo5LjIPfneK/IcfCAy8ElEnHmKAqK8zdX7PXP1CqvZGdpFzq75efbP17nh2pfy4JGHWTn5BMGXtxk1B866r1GAmhlZKJcJLghP/sOTAshLQ71a5eabr6dWq9FsrTIzu5up2iKrq6s0m53yHqpJjSiqYhbKJYZdwLny/d1pN3no7L38p199gMOHLqc2VYfFVZ52k1YRERF5RlP4KCIiIiIiInIe+CLH59BpN+k0W7Q6LbwPhFBQ+EAS1XBRggUj9zk+LwgxTF12GRWrENIOvnEn2UILD6QhkIZAgSdQBnknV6aYXVmFOBBnRi3xEAcyH4iiKaIkJcoLKkmgiHKch07T8cjRIySRY22lw9JynWN5yhNFh5iygq1qxqxFXFdJePHlcNNVs1y2Zw+1So1qUivDNWd470izFovnztBYWaLZ7JC2HI1WxGdXA59qZ2R4ppyR+YBL5rls702cPPsFDl1xiIOHb2RlcYUYx8LSE/z8z/5r7jv9BIvBkz+1wOpXVjtixCTUmGfK9jJV2UUlmqUSz5JUqtSiOrWkRpQkOIzaTJWp6iyVeqUbCOcEc3RaKywvnOHM2Uc4t/YQWbFIvRIRzHPvF+9maqpO4owEI/KeyApC1uFDf/0hPvP5B3jr4XluOThPhFEUKc7FeDLMh/L+lA6ytE077WDBk1SncBbwzRaRS4gdmEXk3eVOO502Z5aWWZk6wMq+K1lcXaKzvMKppQbze+eoVTwZgPdYKENmYse+fXOsrKyx2goED4XBY42CB08vwpTjiUXP816a42YLMEccx1j3LqIuivA+gM+wqFwyNiFlz0yVqTgmtNeotlap1afZt3uGu++5k7VWSpyXFbhxFcwZEYEiKpdYrXiIAmQEKkBkxiwQQiC9sB/BHVWrJ6SdvBzfS4SZcfXl+7l8/2U4cxiOalyhVquxvLZCkXeIoph6vUa1Nk0IBWtrGc658gsQwRO5Cu32MnPTs9xww43MLZ7i1OlzuuejiIjIs4zCRxEREREREZHz4Ou/+2/aTvdhXD92Htr8h5s+4rm8/u33n4eeTNb3/IvJtnfzhj+/ZrJNb83XfhvwazvdCxERERG5hLmd7oCIiIiIiIiIiIiIiIiIPDMofBQRERERERERERERERGRiVD4KCIiIiIiIiIiIiIiIiITofBRRERERERERERERERERCZC4aOIiIiIiIiIiIiIiIiITITCRxERERERERERERERERGZCIWPIiIiIiIiIiIiIiIiIjIRCh9FREREREREREREREREZCIUPoqIiIiIiIiIiIiIiIjIRMQ73QERERGZjMNzji//zFt3uhsiIiIiIiIiFyXNm0VELgxVPoqIiIiIiIiIiIiIiIjIRCh8FBEREREREREREREREZGJUPgoIiIiIiIiIiIiIiIiIhOh8FFEREREREREREREREREJkLho4iIiIiIiIiIiIiIiIhMhMJHEREREREREREREREREZkIhY8iIiIiIiIiIiIiIiIiMhEKH0VERERERERERERERERkIhQ+ioiIiIiIiIiIiIiIiMhEKHwUERERERERERERERERkYlQ+CgiIiIiIiIiIiIiIiIiE6HwUUREREREREREREREREQmQuGjiIiIiIiIiIiIiIiIiEyEwkcRERERERERERERERERmQiFjyIiIiIiIiIiIiIiIiIyERZC2Ok+iIiIyDaZ2blqtbrn+c9//k53ZUetrq4CMDs7u8M92Tkag5LGofRsHYf77ruPVqu1EELYu9N9EREREZGLg+bNO+vZOje5GGjsd87FPPbne96s8FFEROQZwMw6QATcvdN92WE3dx/v39Fe7CyNQUnjUHq2jsM1wEoI4dqd7oiIiIiIXBw0b95xz9a5ycVAY79zLuaxv4bzOG+Oz0ejIiIicsF9CSCE8LKd7shOMrM74Nk9DhqDksahpHEQEREREXmS5s07SHOTnaOx3znP5rHXPR9FREREREREREREREREZCIUPoqIiIiIiIiIiIiIiIjIRCh8FBEREREREREREREREZGJUPgoIiIiIiIiIiIiIiIiIhOh8FFEREREREREREREREREJsJCCDvdBxERERERERERERERERF5BlDlo4iIiIiIiIiIiIiIiIhMhMJHEREREREREREREREREZkIhY8iIiIiIiIiIiIiIiIiMhEKH0VERERERERERERERERkIhQ+ioiIiIiIiIiIiIiIiMhEKHwUERERERERERERERERkYlQ+CgiIiIiIiIiIiIiIiIiE6HwUUREREREREREREREREQmQuGjiIjIRcrMrjSzXzez42bWMbMjZvbzZrZ7k+3s6R53pNvO8W67V56vvk/KJMbAzL7WzH7OzP7azM6ZWTCzj53Pfk/adsfBzKbN7O1m9rtmdr+ZNcxs1cw+Z2b/2Mwq5/satmtC74UfM7O/6B67ZmYrZvZFM/uPl8LnASb390JPm683s6L72fipSfZXREREROR80rx552ieunM0L9w5kxx7M7ul+/4/2m3rlJl92Mz+9vno+4VmIYSd7oOIiIj0MLPrgU8A+4E/Be4HXgG8EXgAeG0I4dwY7ezttnMT8EHgs8DNwNuA08CrQwiPnI9r2K4JjsGfUF5vG3gIeAHw8RDCV5+fnk/WJMbBzL4eeB+wAHyIchx2A98CHOy2/+YQQvs8Xca2TPC98BCwBtwNnAIS4KXAG4AV4NYQwl3n4xomYVLj0NPmLPAFYB8wA/y7EMK/nGS/RURERETOB82bd47mqTtH88KdM8mxN7MfBn4BWAT+HDgG7KH8N6ujIYTvmvgFXGghBP3oRz/60Y9+9HOR/QDvBwLwIz3b/2N3+6+O2c5/7e7/cz3bf7S7/badvtYLMAavBp4PRMA13WM/ttPXdyHHAXgJ8Hag0rN9Frij284/3ulrvQDvhdqA7X+/285f7PS1Xohx6Dn21ykn+/+i28ZP7fR16kc/+tGPfvSjH/3oRz/j/GjefGmP/aU+T72Ux75Pm5oXXsCxB94C+G57s32eT3b6Wifxo8pHERGRi0z3m1QPAUeA60MIfsNzs8AJwID9IYTGkHZmKL+l6YHLQwirG55zwCPA4e45LqpvcU5qDPq0ew3wKJdI5eP5Goeec3wP8DvAe0MI37ztTk/YBRqDeWAJeCiEcON2+3w+nI9xMLO3AX8C/B9ADPxP9A1XEREREbkEaN68czRP3TmaF+6cSY69md0N3ABcHTZZpXop0T0fRURELj5v7D5+YON/zAB0J0IfB6aAV41o51VAnTJoW934RLfd9/ec72IyqTG41F2Icci6j/k22jifLsQYrE9mv7
CNNs63iY6Dme0Hfg34kxDCb0+yoyIiIiIiF4DmzTtH89Sdo3nhzpnI2JvZC4AXAR8AFszsjWb2T7r3OX1z90sPzwjPmAsRERF5BnlO9/HLA55/sPt40wVqZydcyn2fpAsxDu/sPt62jTbOp4mPgZl9n5m9y8z+g5m9H/hN4DHgx7fezfNu0uPwa5RzgR/YTqdERERERHaI5s07R/PUnaN54c6Z1Ni/vPt4Grid8j6z/x74D8BfAZ83sxu23s2LR7zTHRAREZGnme8+Lg94fn37rgvUzk64lPs+Sed1HLo3OP964POU93i4GJ2PMfg+4JUbfv8s8D0hhIc217ULamLjYGbvBL4F+FshhFPb75qIiIiIyAWnefPO0Tx152heuHMmNfb7u49/DzgGvBX4GHAA+FfA9wJ/bmYvDCGkW+7tRUCVjyIiIiLPQmb27cDPAyeB7wghZMOPeOYIIbwqhGDAPsobvQPcYWZft4PduiC69z39eeDdIYQ/2NneiIiIiIiIPOXZPE+9kDQv3FHrmVwEfFcI4S9CCCshhAeBvw18jrJ68jt2qoOTovBRRETk4rP+ban5Ac+vb1+6QO3shEu575N0XsbBzL4V+D3KZT5uDSE8spXOXSDn7b0QQjgXQvhLygCyBfyWmdU33cMLY1Lj8OuU1/pDE+iTiIiIiMhO0bx552ieunM0L9w5kxr79edPhhA+ufGJEEIA/rT76ys22b+LjsJHERGRi88D3cdB68Tf2H0ctM78pNvZCZdy3ydp4uNgZn8TeDdwCnhDCOGBEYfstPP+XgghLAGfBC4Dnr/Vds6zSY3DLZTLvJwxs7D+A/zP7vP/d3fbn2yrtyIiIiIi55fmzTtH89Sdo3nhzpn03zlLA55f7D5erF+MHpvu+SgiInLx+VD38S1m5kIIfv0JM5sFXgs0gU+NaOdTlN9ke62ZzYYQVje043hquckP9Tt4h01qDC51Ex0HM3s78JuU9xV44yXyTdIL9V441H3Mt9nO+TKpcfj/gKk+228EXk95X5U7gLu222ERERERkfNI8+ado3nqztG8cOdM8u+cBnCNmU2HEBo9z7+g+/joBPq8o1T5KCIicpEJITwMfAC4Bvj/9Tz9k8A08Fsb/wPFzG42s5t72lkDfqu7/7t62vnhbvvvvxj/w35SY3Cpm+Q4mNnfoZxgPA68/mJ83fuZ1BiY2dVmdqDfOczs+4GXA08AX5xc7ydngn8v/GgI4ft6f3jqG65/3t32X87bxYiIiIiIbJPmzTtH89Sdo3nhzpng2DeB/wHUgJ8yM9uw/wuBd1B+KfoPJ38VF5aVy8iKiIjIxcTMrgc+QbkMxp8C9wGvBN5IuYTDa0II5zbsHwBCCNbTzt5uOzcBHwQ+AzwXeBvlfRRe0/0PqIvOBMfgq4Hv6/46Q3nT7tPA+9b3CSG843xdx3ZNYhzM7I3AX1F+8ezXKUO2XkshhJ8/P1exPRMag2+lXMbnk8BDlMv57AVeBbwQWAO+KYTw4fN/RVszqc/EgLbfQTnR/HchhH858c6LiIiIiEyY5s07R/PUnaN54c6Z4N85c8CHgZcAnwY+DhwAvp1yudX/K4TwC+f5cs47hY8iIiIXKTO7Cvg3wNdThiQngD8GfjKEsNiz78D/mDSzPcC/Br4VuBw4Rxm8/asQwtHzeAnbNokx2PAfzwON8x/hO2m74zDOGACPhRCumVyvJ2sCY3A18KPA6yi/qbgHaAOPAH8J/EIIod9k96Iyqb8X+rT7DjTJFBEREZFLjObNO0fz1J2jeeHOmeDfOTPAPwf+JnCYcvnnzwD/IYTwgfN5DReKwkcRERERERERERERERERmQjd81FEREREREREREREREREJkLho4iIiIiIiIiIiIiIiIhMhMJHEREREREREREREREREZkIhY8iIiIiIiIiIiIiIiIiMhEKH0VERERERERERERERERkIhQ+ioiIiIiIiIiIiIiIiMhEKHwUERERERERERERERERkYlQ+CgiIiIiIiIiIiIiIiIiE6HwUUREREREREREREREREQmQuGjiIiIiIiIiIiIiIiIiEyEwkcRERERERERERERERERmQiFjyIiIiLPcmZ2q5kFM3vXeTzHNd1z/MYmjnlH95h39Gw/YmZHxtlXREREREREZLs0bxbZHIWPIiIiIvKM1W/CJSIiIiIiIiIlzZvlfIh3ugMiIiIiIgP8MfAp4MSE9xURERERERF5JtC8WS5KCh9FRERE5KIUQlgGlie9r4iIiIiIiMgzgebNcrHSsqsiIiIiO2zjfR3M7GYz+xMzWzCzhpl9zMze0rP/k/dpMLOvN7PbzWzZzMKGfebN7KfN7AEza5vZopm938y+ZkRfXm1mf9Vtb7V7zFf12e8KM/tXZvZxMztpZqmZHTez3zWz5404x8hr7L3OMcbwK/Zdvx8HcBg43H1u/ec3zGy3mTXN7GEzswFtvqe7/9OuX0RERERERC4czZs1b5ZLi8JHERERkYvHtcAngT3AfwXeDbwMeJ+Z/a0++/8N4L3AKvCrwO8DmNku4BPAj1N+q/Hngf8NvBr4gJl9/4DzvxK4HegA/wV4H/Bm4KNm9rqefV/fbX+p2/Z/oly+5W8AnzGzF0/oGrfqCPCTlNe/3P3z+s+fhBAWgd8DrgOeNrE0s6uAbwDuCCF8boL9EhERERERka3TvHlyjqB5s5wnWnZVRERE5OLxeuA/hBB+bH2Dmf0S5aTjV83sfSGElQ37fyPwjSGE23ra+VngecB/A34ghBC6bf0s8DngF83s/SGEIz3HfT3wIyGEX9pw/rcBfwL8upk9J4Tgu099EDgQQljd2EB38vRx4GcoJyHbvcYt6V7bu9a/0RlCeFef3X4Z+LvA9wN/2fPc3wMiyomeiIiIiIiIXBw0b9a8WS4BqnwUERERuXgsA/9m44butwd/B9gFfFvP/n/aO4EyswrwvcAa8M/XJ1Ddth4EfhGoAH+7z/kfopxYbDz/nwIfBm4AXrdh++neCVR3+92UE6w3mlkygWs8b7rn/RzwNjM7uL7dzCLKSdQq8L8uVH9ERERERERkJM2bNW+WS4DCRxEREZGLx539JiaUS7oAvLRn+2f67PscYAq4O4Sw0Of5Dw5oC+CjG76hOfL8ZvbW7v0dTphZtn5vCOCbgSqwr09bm73G8+2XKVcDeeeGbd8IXAn8dghh7QL3R0RERERERAbTvFnzZrkEaNlVERERkYvHqQHbT3Yf5wds32h9nxMD2lrfvms75zezf0B5T4xFyqVXHgeaQAC+FXgx5URqy+e4QH4P+Dng75vZz3Qnkf9n9zktHSMiIiIiInJx0bxZ82a5BCh8FBEREbl4HBiwfX1pk+We7aF3xw37HOzzHMDlA9oa+/xmFgPvopz43BJC+IoJm5m9ekA7Y5/jQgkhtMzsN4B/CLzFzO6hvOfGp7tL4YiIiIiIiMjFQ/NmzZvlEqBlV0VEREQuHreY2Wyf7bd2H+8ao40HKL9J+WIz29Xn+Td2H+/s89xXm1m//z7sPf8+ym+AfqLPBGoGu
GVI/yZxjZtRANGIfX6FckL6/ZT3rIjQtzdFREREREQuRpo3a94slwCFjyIiIiIXj3ngX23cYGZfBbyd8puNfzyqgRBCSnkT+lng3/a0dT3wo0AG/Fafw28EfqjnmLcBbwAeAj7a3XyacqL2su6kaX3fBPgF+t+zYt22r3GTzgGXmVl90A4hhAeBvwa+CfgBYIlyWRkRERERERG5uGjerHmzXAK07KqIiIjIxeMjwPeZ2SuBj1Mu9fK3KL8w9v0hhJUx2/lx4HXAD5vZy4EPUU5svpNycvXDIYRH+xx3G/BzZvYNwN3ADcC3A23gnd37OhBC8Gb2i93zfNHM/hSoUH47dE/3fG/s0/4kr3Fcfw28HLjNzD4CdIC7Qwjv6dnvl4GvoVze5j+HEFoT7oeIiIiIiIhsn+bNmjfLJUCVjyIiIiIXj0eB11DejP4HKCc9dwLfGEL4/XEbCSEsAK8G/l9gL/CPgL8JfAb4+hDCLw849NOUy7hUgR+mvIfDB4HXhxA+2rPvTwD/GGhRLrvy7cDngFcAj5/va9yEnwJ+Fbge+OeU32r9jj77/RlwtvtnLR0jIiIiIiJycdK8efI0b5aJsxD63W9VRERERC4UM7uGcnLxmyGEd+xsb56dzOw6yiVyPh5CeN1O90dERERERESeonnzztO8WTZDlY8iIiIiIvBPAAN+aac7IiIiIiIiInIR0rxZxqZ7PoqIiIjIs5KZXQ18D3Aj8Hcp79fx7h3tlIiIiIiIiMhFQvNm2SqFjyIiIiLybHUd8NNAE/hL4AdDCH5nuyQiIiIiIiJy0dC8WbZE93wUERERERERERERERERkYnQPR9FREREREREREREREREZCIUPoqIiIiIiIiIiIiIiIjIRCh8FBEREREREREREREREZGJUPgoIiIiIiIiIiIiIiIiIhOh8FFEREREREREREREREREJkLho4iIiIiIiIiIiIiIiIhMhMJHEREREREREREREREREZkIhY8iIiIiIiIiIiIiIiIiMhEKH0VERERERERERERERERkIhQ+ioiIiIiIiIiIiIiIiMhEKHwUERERERERERERERERkYlQ+CgiIiIiIiIiIiIiIiIiE6HwUUREREREREREREREREQmIt7pDoiIiMj2mdmjwBxwZIe7IiJyMbgGWAkhXLvTHRERERGRi4PmzSIiX+EazuO8WeGjiIjIM8NctVrd8/znP3/PTnfk2WZ1dRWA2dnZHe7Js4vGfWdcKuN+33330Wq1drobIiIiInJx0bx5Cy6VOcDFRuO2NRq3rdnKuJ3vebPCRxERkWeGI1dfffWeO+64Y6f78axz++23A3DrrbfuaD+ebTTuO+NSGfeXvexl3HnnnUd2uh8iIiIiclHRvHkLLpU5wMVG47Y1Gret2cq4ne95s+75KCIiIiIiIiIiIiIiIiITofBRRERERERERERERERERCZC4aOIiIiIiIiIiIiIiIiITITCRxERERERERERERERERGZCIWPIiIiIiIiIiIiIiIiIjIRCh9FREREREREREREREREZCIUPoqIiIiIiIiIiIiIiIjIRCh8FBEREREREREREREREZGJUPgoIiIiIiIiIiIiIiIiIhOh8FFEREREREREREREREREJkLho4iIiIiIiIiIiIiIiIhMhMJHEREREREREREREREREZkIhY8iIiIiIiIiIiIiIiIiMhEKH0VERERERERERERERERkIhQ+ioiIiIiIiIiIiIiIiMhExDvdAREREZmMx1Y81/z4n+90N569btPY7wiN+864QON+5GfeekHOIyIiIiLPDpo3b4PmXlujcdsajdvYLtZ5syofRURERERERERERERERGQiFD6KiIiIiIiIiIiIiIiIyEQofBQRERERERERERERERGRiVD4KCIiIiIiIiIiIiIiIiITofBRRERERERERERERERERCZC4aOIiIiIiIiIiIiIiIiITITCRxERERERERERERERERGZCIWPIiIiIiIiIiIiIiIiIjIRCh9FREREREREREREREREZCLirR74qU99Kgx7PoShT3/Ffmb25OPGY3t/X7e+/3b1nmfUc/360btvvz+fLxvb73cNg/rWb59Bx496frPtDxvPfsest9vbh36/rzMzvPcDX9dh17rxNd14TaPGetR7dth19mtz4/atvJdGjc9m3qf9xmxUv3ufG3W+zV5jv/1HvbZb0e/vp/XtG/sC8MY3vnGyJxcRERERERERERERuQRNtPJxPSTabPDYT7/t/YKd7YQN68HOOOcfdU29YcigcGb9nP22b9aw/g/ap98xzrmn7TNsjPuNxbDx6Q1thr1HNo7PsH2HXUMI4WnXNOi4ftfR77zr7Q0LHjce3y9s7WdQmxu3byXEHvez0u99Os57c5z33mb2HxTsD9t/0JcVtmKcY3vHcrNjICIiIiIiIiIiIiLybLDlysd+NltRNSzwGNbuus0EnVsxbv+GGVYN2Pv7Viu3xglqRukN0Mbtx7Dq1HHP3a/CcLPh7LDKyEHvxfWQclA7G48bVEnZr2/DgtV++4z6rGy2KnDQNWx87H19+gXnmznPxr6OMirAHbRfv2rUccLdzb7/B/19o6BRRERERERERERERGS0LYePo0Kn7RinYm27oeNWlrIc1tbGx3XDKrL6BVHOuW2FeMP61y9c7LekZL9galTf+wV4g/o/btuDArR+wVC/AG19LPuFYxvHu7ffg17LzVb59Xscte+gpT0HBe/b0XuuSX4etmIzoeWw16ZfqL9Z4wShIiIiIiIiIiIiIiLS37aWXR1V1ddvyc9eo5YEHWbj0ofjHLvdpSw3q1+fRlWY9R7Tr7pzq0HIoFBsVDjWb79hx46zbdR7Y9Dvw8Zv4/tg2NgOWkJ1WGXjsPfYZt+3g47pV7m5lfdsv9d32Gdl1J+HXfeFNuzvnH6B6iDjXtuw94SIiIiIiIiIiIiIiDzdtpZd7Vc1t769d791zrmnLWHZr3KpXwVc7/O9IdOgYzYaVNnXrxJwq0thrj837vKlo8avX7ubNW5YMk4oBaOXad3MMprj9nHUuIxz7lH7bQz9Br1+o57r1/6w9+Og8w87tt97fStVf6MC5WGfsUHtjbP08qj9hgXIwwx7T/V+rgdd5ziv+TjXKCIiIiIiIiIiIiLybLSt8LH33muDQoR++w7S77lhIc6wc44yKCTdTEVcb/82/j4qnBrnPKOCyXHC33FDqFH79Qtl+4U7w84x7NzDgq3NhMH93nO9S9r2u45xw+JB1zLsvTloidtxzz9OGDdoDMYNBAcZt+px3NB30HGDxmvcdkYZ5z3Ub/9RVZAiIiIiIiIiIiIiIvKULYePGyv7hgUDw8KDcaqoxjFOFdhm2xy17zhBy3YCk80EoMPC2VHB7MawZ6t928o+gyonx612G1a9Nqj6bVgVXL/2N4ZPw/o77FrX7yvpvR/Y395+DqqGHLZPvzYGVXBuJjgc533R+/fAuO/BzbxHRo3VZv6u2EwAO2isFESKiIiIiIiIiIiIiPS35fCx3z/ir1eX
9f7j/KAQYViV5LAgY5wqsN7fxw09x+l3v/2GnXtYfyexZONmqg8ndZ6tPD/Ocf1e737B36iKv1H92fj+6vee7bdfv/P3q2YcFlaOuv5h2zfznur3+Rn2fhtWNdlrWKA97EsG/Y7t149BfRtVNTroevq1v53P3qAKSRERERERERERERER2eayq/CVYcewEGfYsZs9X7/z9uq3zGa/Pm80qkpwUBDaG66sV7tt3Geca+rXh80Yt2pwVD8GBTvbqeTcSr/GHbveZTuHva6jwrFx+j2sWnLUefrtMyzYHnWe3v37BXT9gtFxlhPt1+6gSsNxKoH7Xcf6Z2XY3x3DqlwHhcCjrmtU5Wu//Qddg6ogRURERERERERERESesu3wEUYvoziqgmvU/ps576iqyt7fR4U+g/qyHpz0BiObrVbrF/BsDIiG9XXUNY5jMyHZdtvdahuj2h1UGdcv3OsXZI0TRA0L3oa9nzf2Z1iF7LAAelCoOqif/c69se1xzrlx33EDvWF93vj7+p97vyAwrN1hVZmDAstBn71R4zss7OzXP4WPIiIiIiIiIiIiIiJP2dayq6PCsGHbz5dxKrL6VXH1u5ZhwWO/djZuG9SvYeHQsLBjUFXXoAByqxWlg34fdsyg0G9YW9utphx1jnHbHRWyDgqsxunTuM9tbH8SYVZvyDfOWAwKV9f/vHFJ5WHHjHqfbmyvXx+G9XOcIH9928aKykFtDvqcbgyz+33BQEREREREREREREREBptI5eO6QcsZ9ts+KCQcVXW33ZBq47ZBIck4weOgdjZj3KqvYc8PC30Hna9fuDqo+m+cbeNU/o3Tv36VeMOuYxzjjOEg4waTw8Zj3L5u5j00LHgeFn4P+/z12zZon1FVmP0CyN4xGVXVO6pvmxnXcQL6jSHjoNd11HnkmcvMAvDhEMKtO90XERERERERERERkYvdtsLHYZVKg4KMccKoYf/oPyy07DVoic1h5xg3RBh1HeMukznIoIq1ccOa3n0G7bcxTBon9Ont2ziGjUnvazROuDUo7BsVbG/WZl+/fucd57281arKYa/nqP36nX9YX8cNm4edb9DzoyqIe7+80Pt3yTjB5bAQfdCXEfoF9IOu5UJWd8szi5ndDrwhhKAEW0RERERERERERJ4Rtl35OKl/dB9V8TjsuM08N6jacdi2zfZpM/uOWrJ00P694cio/o4TSPULoUYdN25IvL5tWFC4sep0M+2O89zGfbbznh0WVg3qT2941rttWBtb6euwNocFlsP62C/869fHUdfTe85h/e431v3GdFR1Y+8x/dofdq7e50VEREREREREREREZLBth4/jhBvr24cFKeOETf2Wcp2E7QZavftvdVnQQRVhw8Zgu8ZZUnVYdenG5SpHncfM8N6PrMTb+OeN5x7Vl3HfK72B0ySXcd3M8eOEXOOEzNvRb1yG9XFUf8atRuw9pt9r1y807Heu3j9vDFI3bus9T+/7YNT7pF+ILCIiIiIiIiIiIiIiX8lt9cBR1VWDAoxx9h0nnOptd9y+jmtQP9a3Dzr3JCukRgVs/fo1rK3efg8a941/3mxF5aQMq4zst9/G5wcFlINeu97r7v0Z12beu4OuYdCx/QK5zZx/UH+2ep39zjvu57jfscOqJzeGf5tpc2Nb43w2eq+l3/toK6+vTIaZXWNmwcx+w8xuMrPfN7PTZubN7FYzc2b2A2b2WTNbM7NG988/aGZ9/7/OzG42s183syNm1um291Ez+8Ex+/Rj3fN/3Mz2bNj+SjP7QzM7aWapmT1hZv/VzK7ovR7gDd3fw4af27c3WiIiIiIiIiLnj5m9ojsvP9adT58wsw+Y2Xd2n7+1O79914Djj5jZkZ5t7+ge8w4ze6uZfaI7t1/szrFvPP9XJiIik7Llysdx7yfYz2b27bffZpd5HOeedtuxmesZ9/ybDTM2UxU6KFwcJ+Ba329jKDROeNb750Ehbb+lMHv7OiyA3Nif3v4Nu1/gqHCq91z9tg3Tb7z7tT/OZ2bU/sOupXcMx3mfbeczM+64jrtf7++TuA/jOF8Y6D3n+fq7RMZyPfBp4MvA7wB1YAX4LeB7gCeA/w4E4NuAXwa+Gnj7xkbM7K3Au4EqcBvwv4BdwIuBfwr8yqAOdMPMnwd+BPgj4O0hhHb3uXcC/w3oAH/W7c+NwPcB32xmrwohPA4sAT8JvAM43P3zuiObGRARERERERGRC8XM/j7lnLmgnPc+COwHvgr4IeAPtnmKbwe+Afhj4HbgJcB3AG80s9eEEB7YZvsiInIBbDl8PJ+VPqOCkc0GLuOEF737jBNYbmUMNrY3KHybtH7XsJmqrn7H9Wt/UCA5KnAb16jAsLfqcTPBYr+lRPuN2zjvve1+Nka95zYTHPY7Hp4eIA8yzrK8mzn3pJe4HfS53fjcsLBwPdweFVD3+5yMM35yXnw18NMhhH+xvsHMvpsyeLwLeH0IYa27/V8CHwa+x8z+PITwu93t+4Dfpfz/wDeFED688QRmduWgk5tZjTL0/Hbgl4B/EELw3eduAn6VMjx8Qwjh2Ibj3gx8APgF4NtCCEvAu8zsVuBwCOFd4w6Amd0x4Kmbx21DRORScfvtt2/puNXV1cl2REREREQws+dRfsl3BXhdCOGenucHzqc34ZuBbw4hvHdDu/+A8kvAvwy8eYx+at4sIs8at99++5Nz4M3Moc/3vHnLy67C6HBgq/8ov7Habbv6LdU5zrk2bl//c+++27m+Uf3pd/5RbfbrX2+Qsl2D2hw3KB52LYOuuV8ANk4ovJ0+bTVQHKcKdFRbw9oY9V4etG39vM65Jx8HHTOqv9t5Lw36LPXus96/cYwKQ/t9Jrb6xYZJ/R0g23KKr6wSBHhn9/HH14NHgBBCA/hn3V+/b8P+fweYA36lN3jsHne034mtXFr1rygrKv9ZCOFH1oPHrh8EEspA8tjGY0MIf035jdBvNrPZ4ZcoIiIiIiIiclH6Qcov8v7b3uARBs+nN+mDG4PHrl8CHgbeZGaHJ3AOERE5z7ZV+bgxBOpXOba+32aXKdxshdSwY7cSnIyqZuoNH4YteTqq4mrjOcc517B9Bx03bNtGvRV1m7kX4aC+btxv2LgNan9Qv/u9RpsJCHuXdN14zc65J/u2HoL1O992KjrHfR+Oa+N19Dtuu0uF9nuv9mtvnDHazDm3UwG6mb8Leq+v3++9f8/Jjrk7hNDp2XYL4CmXY+n1YcqlYF66Yduruo/v28R5DwAfB64Dvne9irLHq7uPbzCzl/d5fj8QATcBg76FOVII4WX9tlv5zc5bttquiMjF6NZbb93ScbOz+p6HiIiIyHmwlfn0ZvX7knBhZh+jvBXLS4HHhjWgebOIPJvceuutT1Y8bmYOfb7nzRNZdrVfNdC44dCgAGFQcLDZsKPf88MCwa2EC8Pa6/f8erjVu8TjZsPZQQHJZo1bOdYbMA9a4nQz19F7/kH96H2f9Papt2+j+jFuwDru69rv9diOccZzM5+zje32C+4mEUpuZb9xKi8HfSZ7+947VpO6pnHOLRfUyT7
b5oGFEELa+0QIITezs5TB37pd3cdjvfsPcZCyWvIo8LEB++ztPv7YiLZmNnFeERERERERkYvFru7jZubTm3VqwPb1fw+YP4/nFhGRCdnysqsbKx/7/awv77j+542/9/vpbXvj47Dzjvt8b5v9+jrounrbGDYe/f7c2+6wwLXfuYeN0ThjNsqosdxoVIXmOBWc/drvN07j9rG3/VFjMep13Uo138b3UL8+DOr/IP1e/0HvjUF92rh9O++PYX2b5L7D2pjEPtu1mfelnFf93uzLwB4zS3qfMLMY2Ed5P4p1S93HQ5s4792Uy7UeAj5iZtcN6AfAfAjBhvw87VucIiIiIiIiIpeApe7jqPn0+i1KBhW+7Bpy7IEB2w92H5cHPC8iIheRbYWPWzmm97h+oeTG/XuDy43nHhbUDQoCewPRjffA6xecrvdx4z3yevs47KffWI0bYgwLUIeFHqP6NOy4YQZd92b6sNGgZSzNNlfBubEKcyv7byVs7N02bAnQccZ1uyHWoBBy3HYvloBvK/q9L3ufH/czMIk+yI64i/L/z17f57nXUy51eueGbZ/qPn7DZk4SQvht4LuAKygDyJt6dllv93WbaLYAMLNoM30RERERERER2QHjzqcXu49X9T5hZjcwvHrxDX2OiYCv7v5614hzi4jIRWDL4SOMF3L1+8f+jaFfbzv9wsBhz2/sS2+b/fYZFDwMCi03ttEbVm62mnNYcDnutt5Ac9h5hwUi4/R70HUMMyrYGbfNzYRho8Z90Pn7jeeo/Yb1rzfUXA9Rx7nv4KgAdNxwa2N7G5eE7RfQjvseHeeYUf250DYTMp7vYFIuiF/vPv60mU2tb+z++We6v/6PDfv/JmUl5A+a2dMCSzO7ctCJQgh/CPwNymrKD5vZ8zc8/UtABvwne3owiZlVzKw3mDzXfbx60DlFRERERERELhK/AuTAT5jZ83qf3DCfvp9y3v02M9u/4fk68IsjzvEmM/umnm0/THm/xw+FEB7baudFROTC2fI9H7eiX3Ax7j0W1/cd1eag84zbv0H37hs31Nt4XO9+69v6Xcug/dcrLjdTIbjd8GTc8ww6dpx9xwnhel+HQa/1qPv8DXqfbXw9xjnfOOfv9/yg8w1qZ1Cw2e9cw8LLQZWlg+4XOq5hn7l+r99m3j+9bY4a2+3cc3Grf+9s97wyeSGE3zWztwHfCdxjZn9CuTzrtwLXAr8fQvidDfufNbPvAf4Q+JCZvQ/4AuU9HV9E+c3Ma4ec78+65/tj4HYz+5oQwt0hhPvN7J2UYeg9ZnYb8GUgoQwXXwecAW7e0NxfA38T+CMz+wugBTwWQvit7Y6LiIiIiIiIyCSFEO41sx8CfhW4y8z+FHgQ2Au8nDJwfGMIITOzXwB+orvfH1P+O/TXAse7P4O8B/jj7jEPAS+hrLRcAH7ovFyYiIhM3JbDx2H/cL+Zf5jfTNXboFBm0D7Dwpxx+zPOOfudtzesHNWXUePQG5CNCqqGtduvj6Nes34BVu/2zS5/Os55Rm1ff25QhWG/ex72e3027tu7f792BvVt437bGYvNvB7D7rM5Tjg97ntvlFGv0UabDTw3c/7NLsG7metX6HhR+27gw8A7ge/vbrsP+DnKb2Z+hRDCn5vZVwH/DHgz8BbKZWHuB3561MlCCO83s2+knBR9yMy+LoTw2RDCb5vZ3cA/Bt7YbbdBObH6Q+D3e5r678BhyuVc/ynl/y9/GFD4KCIiIiIiIhedEMKvmdmXgH8C3Er5xd+zlF/q/e8bdv3XQBP4+8D/CZwEfg94F3DvkFP8EfDfgP8beCvlCkN/BPzzEMKXJ3clIiJyPk2s8nFYoNMb+Eyqkmgrgd4kjGp3nNBnUBgzKAwZNmbjVLGNqpjr7fOowGUzQdOw8/Q7dqtVcuNu7zd2o/q3lQBuWNC5bpzwa9T2cQPmjX0a17D32aD9thL69zNOiL7dysTNhqKDvlQg518I4Qgw8MUJIXjgl7s/47Z5D/C3x9iv73lDCLcDs322fxF4x5h9KIB/0f0RERERERERueiFED4JfMeIfQLlrVB+ps/T14w49r3Ae7faPxER2XkTqXwcFTz2buu331b7cD7+8X/YtW011Nhu9Vi/MR3V197xGRVwOueGLlu6mX4NamNUpepmq/i2qrffo/o3Tjg5LLQcdV0b+zCqzY1GBYzDwsphQfVmtq8/19v+MJPap1+fBv0dMyhg32zfN+53vr7gICIiIiIiIiIiIiJyqdpW5WO/IGvUP/xv3DYq0OrX5qh9t6I3rBkW0I0K9/ptH7cPG42qSBwUVvUev/G5fkHluJVxg0K0zVbR9Z5/0Lk3vgbjVN5t1bCqxGF929iHcY7r93oN68OgtsddrnTY+fqdt98XBfrt19u3UZ//Qb9vtmJx2Nht5ksMoz7TqmQUEREREREREREREdmebYWPW6kC7Bd6DQu0xllucRJVcf1Cvd4/j6p4GycEGbevwwKpfvsNCupGVVL2GtW3YcdtZayGtT/sGgZV741b4TbMOCHhZvs1qu1R5xrU5qD2+n02hh0/6loG9W0zr+Oo/QZtX/+iwqDAcn2fcb80MIyCSBERERERERERERGR7dly+LiVcGfYsp6D2u63X2+wst2QoF/42a9Ss19fN9v+ZpfV3GhjGDessnGcAHFUQDootBr12gzr48bnx32NRxkUCI4K/oaFqFt9Pw0K3EYFWsMqBAe1P+j4ccPbfu2OCv0HfSaG9XHY76NCwn5h4rBrHfR5HaeacliwOci4VagiIiIiIiIiIjJcCOE3gN/Y4W6IiMiEbKvycd04gcqoQKc3+DAzvPcDlzWdpGEhwrhVahuDun7Pb7a9cYyqehxULTbuUpz9tvWrMlz/87Cqy2EVa72GVU9u1zhB3DjjM4mActB5x60eHdTuOJWMo6oI+7W12UB4uxXAg5ZlHXStG5/vFywO+ztpK18KUJWkiIiIiIiIiIiIiMjTbXvZ1c2ECJsJD/tVJI0TsPQap3/jhGSj+tnvnKP6OM5SkZutruoNmIbt09v/cas8R4WMg5b3HKffw9odp+JuM4HdOH3pPf+g9+UkwqfNvF6DwuZh7fS2N+g6x+nfuO+RUecfpypxWJvjVFYO+pwNqnDeymupCkgRERERERERERERkdJEll2F/pWOw8KFYUs8DqtOGqfaaFQl4jhGBRb9+t57zvVlZkcFjMO2rVdxbWy3X2VXb99HnWM7VaT9rmdjn9Z/+i2z2/ue6G1jVCXkxvHYeB2DXpdJVaT1BmbjvA+2G0qOCiMHBW+DKmA39m+cCspBfRg39B/HVr9AsJmxHRa2jtOvUeOn4FFERERERERERERE5CnbXnZ1WMg3TvA4bL9xjh1kVCC3mXNtDBM3Ptcb9vQLGgc99uvjZio7BwWxvcHkxuf6nWucAGoz1s/vvR/YVr8QcVBfRvWnN2geFthttSpyWGA6bn9HvZbjmkTwt9nKx0kHuf3OOU4IP2rfzfZvnM9kv3OP2l9ERERERERERERE5NlsIvd8hMktPbmV9oYt0bjdfm0MO7
z3Y4Vd/UK1cZaHHNZGb3sb2+291n7n6rdPv31HBcYb29oYMPYGjhurHtfHbWPY06/icVTINaoScKvHbPU9Mup9N6gPvfuMe/6thoCDXuth59iurVQmbuUc6wZdX29oPIkKyt52RURERERERERERETkKdsOHydZ+bPVpTP7LTPZ+9xW9QuOxl1yc+Pzg6okh51n3ErIzYaX44Reg8LAjYHi+tKqvRVkG5deXd9n/Zj1/fu1v3H7oOrNcQLKrVZQDjpXv7b7VUOOEzyO2raZpVA3U8k5ic/pOBV/2wn3NtuHQdtHBfDnsx8iIiIiIiIiIv9/9v7sWZbsvu/FPr81ZGYNezhDj2hiJEgQ1DxQEkVZ1NWVfa9lSbbDYb/4wQ9+tZ/84D/h/hd6oRUhyoqQKV7yUlKERSkIQRRIiSQIUiAxXDSA7j6nz7T3rqrMXJMfVmaeOoWqvWuf0yBN9vogGmdXVeZaK6eKqvrk97cKhUKhUPi489Ly8boU0SERsE/WHJI2u8sf6vNVedk2t8d+nRi8bcrwUEnMm2THTfPP7SY29y27K/tijMQYCSHgnJvE4Sgex+WNMYgIxuTTaXu5EMIkHbfL125LyUPbcN25cUgW7pOU10nDQ/vq0HPbx3z7tZv2/7Ecm0TdXfZQWdjd7f9hcWj/Hpu43G7jVTmUar1uf71sSd5CoVAoFAqFQqFQKBQKhUKhUCgUCi/ySsnHm9J6u8vsCrBDou0m4XJTadDbcKhk6XXS9FA7123/oRTfrsy6SdYcEpn7RN6h7blufNvSMaVECGH6r+97vPdcXV1xcXGBc46UEsYY6rrGWsvZ2Rmz2QxrLZDFZN/3VFU1lWU9xD4heegcO3RMdrf3mLTebblOjr6M7HvZtOa+5Q4lag+Jt4+Km/o8to3bSOKXGdtNz+0T4FDKrBYKhUKhUCgUCoVCoVAoFAqFQqFwLB/ZnI9wnGw45sf8Q8JoVygdShPeRhhcJwdvW9rzkOA5JB4Ptb9P0o1tb887eahE6a44OaYE5bbUHP8eE4tKKWKMXF5e8vjxY/q+R0QIIbDZbHj8+DEiwp07d3j8+DFvvPEGy+WSEALGGFJKtG2LtZaqqqZ+RtE49rcrHvdxTCryJgm+/fjYcq77uOkcuEmg7Yrnj0J03eYa3Mcx18+x2/UyHJNwve75l+UmYXrTjQhFThYKhUKhUCgUCoVCoVAoFAqFQqGQ+Ujl4z72JcJum+bafrybmLwuBbfvuX1lYXfb3x33vnZuKzh3S3VeV5rypvTcvlTjofWOSQnujnW7rGqMkfV6zdXVFX3fc3p6ymq14t133+Xq6or5fM7JyQl93/Phhx9ijOHq6oqzszNOTk5QSlHXNUop2rZFKcXJyQlN01BV1SQhtdZ7Begx+3Z3v92UoLtuH7yKzL4uObfb5k3clKT8KJOB+/o99Pg27bzq+A5dt3+cou8mOVkoFAqFQqFQKBQKhUKhUCgUCoXCx52PtOwqXD9v366AvKks6E0lNrf7O3Z81z23L0l3U0pst53d545NiO2WYr1pnWNSljexT4iObY+Jxe9973s8ffoU5xxd1/H06VO++c1v8uzZM1JK9H1PSomzszOWyyXee4wxhBBomgatNVVV8dnPfpa33nprauPu3buklNBaY63lzp07zGazg8f7ppTh7uN9CcSXSardJMCPTbUeSmredD7uXkfXJTavG9u+MRzap4cSpfvGeIgfpix8VdH3wxK3hUKhUCgUCoVCoVAoFAqFQqFQKBReQT4eSh0ekoNKqR8QQIf+PpQeO5RKvElIHpscuykheV068dBYd7lJgF0n2W6TfrspNbbv37Hsad/3XF5e8od/+Id8+9vf5vvf/z4PHz6kbdtJGD558gQRoW1bRITvfe97LBYLtNZcXV3RdR0iQtM03Llzh2984xv8zM/8DJ/4xCfYbDbTubJer2nblouLC9555x2Wy+VBQfYqMusYaXfsPrwuAXtsOvaYBPBNcvWma+82km5s7zbn303tHfPay0rg7XX3yeWblt3X5jFjuU1yu1AoFAqFQqFQKBQKhUKhUCgUCoWPI69cdvXQD//7pNGxiaPblDbcJ39205W7fV83nu11b5J718nJm5Kd17W7O+ZD0vWmfXHdNu07XiEE1us13/rWt/jN3/xNvvnNb/Lw4UOUUtNr3nu89zjnSCnhvZ+E5Hq9RmtNjJEQwtTfw4cPefToEU+ePOHzn/888/mcb3zjG9y5c4fPf/7zNE3Ds2fPCCHwmc98BmstIjKVYt0d56twSNAdIwoPJXD37c9Rto/LHjoXbrpu9iVjD23LIZl+aNw3bcOhc/pQ24fauY283L12jxXsxwrB3aTqsbL4tv0UCoVCoVAoFAqFQqFQKBQKhUKh8HHllZKPI9ui5TbrHpIiN71203gOPX9MunJ32etSVddJnetSnvB8nx2SSode2339kJy5SQiPj2OM01gePXrEr//6r/Otb32L9957j6urq6mE6liGdXs+yFE0bj/X9z0xRrTWOOde2BebzYaLiwvqumY+n3P37l2qquL09JSUEu+++y6np6c0TUPTNFMZ1jGRuW8f7Ht+e1uPFd677V53jm7v+0McEoGHzqObUo7XjfXYMVy33nVi/7p1r5OyN7V3nZA9JGhfVj4f2u+HEsyFQqFQKBQKhUKhUCgUCoVCoVAoFF6OV57z8aZSjYfW2U0ObvOqJRR3k1OHxrG9/LHyY1//h0TVoQTibhs3jW9f34fGsr3uISm5m/7abDZ885vf5Fd/9Vd5+PAhAJeXl6xWqymFOJZXNSafMsYYzs7OAOi6jq7rWK/X0/7QWqO1JoQAMAnKq6sr1us1m82Gy8tLPvzwQ05OTrhz5w53797l5OSEe/fuMZvNuHv3LvP5nBgjkIXt9nlznfg6Rpwd4joR+MNgNyV5qO9jBeEhDknOmwTmdUnG8XgcSisec6PBTcdq9z3jNlL1Om669q67fguFQqFQKBQKhUKhUCgUCoVCoVAo7OeVy66+DDcJk2NeP0Zm7BOQ10m/Y0XlTWnJQ+O8DfvWPSRVb+rjUHovxkjf93zzm9/kF37hF/jOd74zlUsNIUwlVpumYbFY0LYt1lrquqbv+6n8atd103yQIsJsNgOY1h0lZN/3aK0BaNuW9XrNxcUF77//PlVV8elPf5o333wTYwzPnj3DOcebb75JXddorUnp+byUxyT3doXVTWU893HMeXLo/Njd/4ek6THH86MQoIdE5r606Pa4Dr2+2/ZNsn5fX9e9flsxv/vaof147P697rmXTdYWCj9sPnWq+Pr/8Pf/uIfxsePf/tt/C8DP/uzP/rGO4+NG2e9/PJT9XigUCoVCoVD4k0z53nx7yneAl6Pst5ej7Lc/PXxkcz7etMw2H8WP9YdEx01931Y67iYF96Uir9ueQ7LiUFJzd9v2iZnd1Oh1Y9qXSBvFY9u2fO1rX+MXfuEX+MM//EP6vufu3bsopWjbFqUUWmuePXuGUoqqqqb0o3OO9XpN13WTiAQIIWCtncY2pha995PQHB+PywNor
WnblkePHvG5z32OT37yk5PQPDs74/79+3v30U1cl4y8SYrtW2673dsKq2PGeRvJ+FGJr2O35Tq5fqwQPOb1j0q+7rvWCoVCoVAoFAqFQqFQKBQKhUKhUCj8cPlI5nzc93hfsmlXGF6XeDqmhOkhPiq5eay4OCRhrpM6+xJex2znrlQ8tNy+Psd/QwhsNhu+8pWv8Ku/+qv8wR/8AV3XcXJywmw249133yWEgFJq+m+xWHD37l2ePXvG5eUlzjlWqxV9309zQo7ScexrPp9zdXXFarUipYQxBq01m81mkpIiMvWx2Wz43ve+x9OnT/nGN77BG2+8weuvv85P//RPc3JyQtM00767TgretE9uEsfbx+a6Ep+vwq4Q3jeufcscaudVx7L978usu6+tXeF+XWp45Lrz/5ibHK6T/LvL7dv3xyYarxPZhUKhUCgUCoVCoVAoFAqFQqFQKHyc+UjKru4TbtfJhevE5XUyY3f9Q4LyZZJm++TB9lx8h8Z77HPHlJC8KYF2nQw9Zt1xW5xzfOc73+GXfumXePToEev1mvl8Ttd1fPDBB1PisK5rAKqq4s6dO6zXa2KMU9nVsUTr2dkZxhhEZCqP2vc9fd8TY5wE4zjvY1VVbDabaX1rLVVVkVKi6zqePXvGarXiwYMH3L9/f0pcvvPOO5ycnFDX9QsCcvu/Q+fedefNoXPnunPjh8W+bbjpXL1O/N9WjN1G9l3HTXL8UD/HSPTryqkeK1BvOs63LdE7lgMuFAqFQqFQKBQKhUKhUCgUCoVC4ePORz7n423LRo4cSkduv7b9eF/67Zg05HUS8KY57XbHcmy5yGMFxnUi8lAidN+yu31vv+ac49GjR/zyL/8yDx8+5OrqihgjFxcXxBjx3qO1pqoqtNbcvXuXs7OzKeV4fn7Oo0ePiDGyXC7ZbDYopVgul2itefr06ZSuHMc2Styx3cViwXq95vLykhDC9JoxZhKW41i6ruPx48c8ePAAYwxd17FcLlkul5Pw3LetNwm6lz0uu+xL2u07dte1e1372+O86Ry+TTLw0OPrlr8N152PL8sxSdZj23mZMrfb6+4e35J8LBQKhUKhUCgUCoVCoVAoFAqFQiHzkcvHm7hNmvCYZV4m6XSbto5Z71Ca85h05bHj3ff6MWUht9uJMfLo0SP+yT/5J3zlK1/BOUff9zjnAKjrGmMMTdNMpVbPzs6IMXL37l2ePn3K06dPiTEym82o65qTkxOMMYQQuLq6mkqq1nWN1hrv/ZR+rOua+XxO27Y4514YY9u2NE3D2dkZFxcXU9nX1WrF7/zO73B1dcVf/+t/fSrd6pzj9PR0SmfetO9uSjYec4x2l9tOKO4Kwn1lU4+Rby97jl9XLvQ2SeNj98Whvm7ipu07Zj/d9gaH3W3at3+uu1YP7b9jS7QWCoVCoVAoFAqFQqFQKBQKhUKh8HHileZ83Pfj+7E/xB/6kf9lxeQfF8dIikOPbzOP3z6ZeCjtt2/9UTz+3M/9HP/pP/0n2rbFe0/f90Cen/Hu3bvMZjOqqiKEQFVVXFxcoLVGa42IsFqtqKqKGCN9309JSRFhs9lwenpK13U456Z5Ha21xBhxznF5eUnf93Rd94LAc87RdR3z+XySmSEEuq4jhMC3v/1tlssldV1TVRX379/n6uoK5xyLxeIFaXUoRXtMwvC6dOAhaXfoGOx7/dDxeVWOkeSv0vZ1wu6Y5W/T1762dqXvbd5jdtvYl17c7fvYc+bQuAuFQqFQKBQKhUKhUCgUCoVCoVD4uPJKycdDP97fVPrypiTRsX0fKyB+mKmkV5EOh0pz3jZ9tv33Pvk1isJf+qVf4stf/jLr9XqSe5DndDTGTPM8hhCIMbJer6e57B4/foyIMJvNmM1mPHv2DGvtNL8jwGw2Yz6fA7BcLun7fmrXe0/btqzX6x/YtjEdCUzraK2nkqwxRjabDb//+7/PfD7nzp07aK2nPsaSrUqpvXPv3ZQE3N1/xwiwQ+3tJntv6uvQcy8j9G/ajpHrZOkhobY7hkPb+FGVID2UUN3te9/YrmvvpmN8aL2SbCwUCoVCoVAovAyvnzYpIUBCiULkBz9LKyWIgBJBi6BEyIskxiWVev4ZVkn+zjF8OgYEEJSAICj1/HNsTJAbG1sSRMZ1dr935NcVELe+J8QYSUPb2+yum4a+ps/Q04KgyTedKsn7AhlvKjzw3SLlMZzVis+f1LylAvMUqZVgFSxmmkoJ6ytHCJ7GCM3McHo+Yz6zaAIohWpqohK8D/iQ0AIa6DcdfRdpFg3zucX5gOt6Uu/RWuF9IpDwAera0jQarKWPmj4ldF3T1BXrvmN9tSGuNsTeQ0xYrQgpcdUmNiGhraU2Cr/u8T5CpUlG4zrP5bonRsW8Trz92ow3X1ugrMbMGkRg0yViEBJCXWusRELb0rUd0SeCj3StQyvBaIWuNMZaYky4TQ8hkBTYph6+q0WUKOzwS5DvI4mANpoYEp0LBJ8QEkry9CjKarRWKBKEgAYqK5hZRURwnSMmhbEaJYnoAyEKsVnQYtlcXmL7DSczg60NujHYpgEUKfT59wKX2xWVzzcxmoSiax2+c1R1hdWCD5G2cwQPy5lQ1xUhQe8TCChriBE2m4BWMJvb/B09JGK+CkGpvH9cxIWE0YHKVoitiFHwvsfYhNYV0TsEUMoi4zkrCVIkRUg+kEIgaQ3KEgG0IqKIIWJUpDZ5neACvg/DuW/ROmGtwhhQukFsjYgiiSImwYVEjAmFYHVC17Nhn7R41yNKIwKSEhDyTePGorUBpUBrsBXaWKq6zsc4Jny3zvsjJnrncjumIqVADB6lNVobRGtUZYl5cwkRUsjj11qhjUJEo41BW42tasQ2edngiEmI0aMEKqMw1qKs5e3/5f+j3LlbKBQKhcLHhI+s7OrLlouEHyxheWiZm/rdl1jbJxo+SpHwRyU3j02Z7dvWlBL/5b/8F37lV36F9Xr9Qipx/DI5Pn7w4AHr9Zq7d+9OKca2bacyqN57nj17BjCtp7VmtVpR1/U0/+OYWlytVnjv8d5jraWqKvo+f8FQSr2QjjTGTPM47r4uIlxdXfGVr3wFgJ/6qZ/ijTfeoGka6rqelt+3X6Yv3kP51+v24XgeHjp3Dq2zT4xdl2Ddd6wOtXubdPA++Xyo/+uWu2ndly23esx6u6Vqt9vYt/5txnJo2d1E5aHrandMRUoWCoVCoVAoFK4jjeLvmpvrYNSH42JxEHfZ6GUPOcrCcT0hpTCslbIYAhICSZ6Lvxc+r+ZlR7H5wjhJkLIETVtLT52y/7vGdJPf1kakrQZGYZkkoba2VHi+HTKOe2w2CVrgnZOKz88Vd1XEhohKEYOgA8QevCQ0gaoSTk4rFouaxTxLnZAMszunxBhYXXXEEPFdIIYARnA+gdaEkHjwaI2VyLJRYDXt2uMSoIX5zFLVFd5YktYEBFGKEODJ00tS7LHOIwpipQkBYoS1C1x1gd5FZhFImiRCMpqoNE1tiFFhLTQ6cv9E8/a9BcoaTFPjXaLtHT5okrFUlcZ5T2hb8A5JoLUi
RagbQ/CelNUavusRJSiViFEwRiMkEoEUwdRCSpG+80hKGJNFWdtHUhLqKotGpfJxCYM+z4JT0KJQSgghEkMixQQpEJ0QCaAVQSku+8DGBSrvmc8tymhICRUTYbPB+4gIaGtQJELvEKPQVY1zHtf3WKOpZzUxRdrW432gsprZskZpxabt6TuP0oK1huAirYtoI9TWkJKCJCijiC6SUv5NwFiFqSusj0Tf5e9/EVL0WA1KLClJlvcxIEmhrUJIBO/yeRwTKQQk20NCgogQA0DA6oRVw3LI0J7gfcKqnlnVYI1+fuWKwieh7Txd77AiVE2eZiZEQYVAkoSWhLZZ+omAd/m4KWUBiDHfwE0SjE4YpSEkvFuRogcRRJl8U0EahGLqAZ23VxIp9aiYzxttK6TSaBSkGqU1MSVScKThrgOjNLZp0PUJfe/wbSCON5lLIqq8/Sn94M3ihUKhUCgU/vTykZRdfRWOlR/7ZNIxIuXQevuEznUic/e5fcvdhn1i5TpeJmE5pgafPXvGv/pX/4onT55MacLxNRFBaw3AxcUF3vspUfjBBx8gIlRVxdnZGavVakokxhix1vLOO+/w3nvvcX5+zt27d6cyrNbaaZ7IzWbD1dXVtJ+Xy+VUhnV728axpJSYzWa0bTuNdUxprlYrfv3Xf53Ly0t++qd/mrfffhulFE3TAEwJyGNk4PZxvSlleJNcPza1eF37x5R3va6Nm167iduse1tpeSjZu8tuedRj+t233465WeGm548V1YVCoVAoFAqFwmHGJF8iZ6+eP/cD32cB0igQmWTe8xzhYPamz6RqeD5N+lFgkEVZMG33sPvRNQ3jyv+MgvR52jL3kMan842rL9wcmVNZgz18QWeKCGq7HRHkuYp8Pob0XLyOSy+s5sfv1HxypqFtsSQaJagElQoQBRVAWVjeaTg5rbGVwSogelxQmEVN7zzeBXrnSQGSc4itOX3jPvHBh6zXnsu+p6o0d+YWEwNXrcMlhakNzdyibYPXFVEUSERrod30bNYdjQ7MrMJUFmUiF5eO1kXakLhqIz4mZk2FrQ29CD56kk+oFMEnkoLXzg2vn1iszclKpTTry4515wk+UjUV2gjdpkUHj+o9kgQYxB2CCwGlNE1TQYoogeAjIgpb5QOvFUjSaCV4H4nOY5QiSRZXMeZ0mjYGiYFEQpQihYhOCRUiRmmqygKC84EU8nmmRBGJ+bxQihihJ9F3LTZElpWgtUFpQWshJCGGvL5WQooRHxIRwSqDx+BSQhvBWkVwkb6PxJRoGkMzq9G6YtM7XISqsRgFvQs477HaokRwIWKSoJQmukTwgYSgUmIIQOaAoDWkJIiKaG2I3kH0WeaJgChEgaQ8BhEFokkxoI2gjSKpiAoKF0EkYrXKSeThBgFRBq00MfZYq6ls/t3C+ZgrOGlN7zydh97l61aZ57qelPCuAxRaoLIWNDgvgEeJRhBC8MTkSWLRJHRSSMrblEJPDJC0AuKQ6oygNFEgxTAlFrU2iJrueUCJQmsLSufx2ooQwLdXEHog5n2qFaKG9zelsLZGtCJGh3MOlc1soVAoFAqFjwkfWdnVmxJBhwTeqwqPm0TMbdoauam9Y7btpu06JtF1LNclO/u+51vf+hZf/epXcc5Nr42pxVGgdF0HwMnJCVrr4S64/MXSOcfV1RV1XXN1dQXA+fk5TdPw9OlT1us1b7/9NgBd11FVFev1msViwWw2Y7FYIJLnhOz7nhjjJDC35ef2HJLGmCnRGEKg7/sppblarfja175G13X8zM/8zFRuVSk1zVU5SszdfbRPOO7uu33H47pjfp1I27fOsSnFXY6V1dfJ9Nuk9l7mvNzt+5CQvW79Qxza17sJ0e306rHbsZ2+3k297uu3SMdCoVAoFAqFwjEk8udJNUUDM3tvuBuSggnIpVHzk2lcVYZUY0yTchwWnv4WGWVeIFdXVblmoqjn7Q2PZUw5kYikSRaSImypQtmSkvnz8vO+o4CkPErZik2OJWDzU0PbwzZJktGhDkI0r6dEuFNr/sK9OW83oF2LmIBNClLKZSoHgTVrFMvzOdXckpLH1gZCwDQzVF2DAhHFauXxomk0dAFe++R9KiOIEVyMnM4Vd5eW0DuuuoBPmmZZ08xrxFp6DL2PaJPHf7Vu2Vy11CQqSaiQFe3lynHRRTY+smk9AG/cOyVVhk0fiF2PCjmRZzQQPW+dGJY2Szk7rxAMXR9p2x7nApXWhE2LX29y2VvJ+z+4LAeVFoIHUQbRiqQUCiF6j6pMloIhQvBo0RhNLrOZhNpoUEKMiZASWoE1WUaGlCDkk1elnLQzWmepGSERQCWU6HwOxIiIzuVGUxjSbYp5raiiRykwlaWyuUwoWuMT4HtS8DgHuq5AeTYbj8dRWwto1uueFCKVUVQzizUWF+Fq0xO8xxqFUYJzkZgidW0JCZwPKG3yZvSRENIg+jTgMUSsaIiRJII2Bowhej+cy5oUPYqU5SRjKWJFSmG4BnOZZI0QfEBCPj+VKIghi00lRBQBh0IxX9YobehDout6RBKVUrgodD6PUWuNNfk9I0WPKAMqXzPWaOywH0NIxBTQyhBR9M4TogeThXhMkdj3RCWIKFJMhJCvR5FIiokY0ngB599ovMPYmpTA+wBak1RAi8U0Fagqj0kidT1DE/FtIoREcB5kQwoRYyuoQLSFmAh9wrk10hf5WCgUCoXCx4kfatnV3dduK1uO4RiZuU8o7LaxT+xcJzv2sd33bUTkqyZId8ULMM2T+J//83/m3/ybf8OTJ08mkTcun1KaROMo7/q+p6oqmqah6zoWiwXOOU5OToBcHlUpRVVViAiXl5copVitVpOwbNuW2WzG+++/j/ee8/NzjDFYm8uAeO+pqmoa+5hUHP8e05XbpV63xw6w2Wx49OgRX/va15jP5yilprkivc9f9sZtuk1KcJ/I3ZVZ1z1/qJ1D/W+3d9PYbsOh8/m6Pg5Jtusk3iHxft24XjWZeexNCLvX8T6heChtet32FPFYKBQKhUKhUDiWUdhNjwfpNn2fYBBwg8RLknLKalg+kpCockBxSheOn1fHVOUoCcliEZUF3xClHCXfi5+Jh8jhIP6mTGJ6/hwSiSheLJQ4jmDre8CwAYJMSanhW2lOxo0CNSWS5G2EwUUO+6AywltzzWdmmnesZxnDkBaVXAUn5B1greb8zHLvjVN6F2k7TwwBUZ7KGjAarUBpTdc56kphnEdE886nXkNiy8WHK2LX8eZ5RVMbNq1jvYlEXTFfGGazGd7WdEHovM83wQbP1XqDxMBJJSQX8H2k7wJPV44uWZI2RHJ1n7qpCAKu7Ym9R0Ke89NqYd5o7tyZoRRE76kqS/CB1nX0vkdrOFtWtG0uv6mSICmRjMILRJ9IKWDQjHMEKqXwLo+1ns/QWsB7upVD63w8ovNYpaGqhhMiZfkVIXmfE2vGIsHjO0foXZZ7lcVayem4lIa0qpBinj9U2VyiNElEoYk+UhtNjCCiqRqNthbRBqU1VJZKG0K7IrRrNLl/1wcQqJTN+7bv0ALzmWU2r1B
VhQVRXXr1/nm77pm1gsFvdAzKEbsAPRH/7wh/ngBz/I2dkZRVHc44TbjsEdAtz7QebtsRaRHuZ1NQm/+Zu/mRdffJGf+qmf4vj4mOl02h+Pv//3/z6TyYQPfehD/bh1Y9qN0TYY3wbnw353nw/nZzf+3fvdcsA9sPI8t+t58+oibY9Ndx4M+3Ne/cwHaTg3hjU0O7A6fB1jpGkaNpsN1lqstb2j9n5xt/fbv/O+g85b97zzc9SoUaNGjRo1atSooUTRAsgEXq5k8KEdyztmhpmGEA27OyXGOU7OUi02Y3MODyumU0W1Dmy8cONtV/jGb347YXXKpz//GjePNlzfL3ny6i6XHrnM0eEpTgnvePo6RzdPeO32GTOdolBDFOYTS24VUeCs9tjZDJxjdVZRHjhWp2v86oy8zJF6k2q7RUFpQyU5J0eHFEZTzi11E1g2AWUydi4fkJWKpmnYvbLLqRPWp57ZPMcWGcEYnGvIM82lvZIpgc0XXsAuHT4qvNbYLKOwCuoNzamHIPjGExJvQlRbcVElt520TkgArQ1BYnJDKgh5RrbYQU2vQdigJEWpwt2YUCFATI666BzROSTEFg5Lx7YARRe92uNOkRagJQiWnI4dlKR3t0lsiJsVoV4jzqGtJbOW5mzD5ugm+STHu0h9VuHrAMrAuitDIohoogjBBXSmkA4gtagxSrqu1qq7PlFp/1q6mhJSFVonZ53WmmgCWkPUCi0Qgyc2gto43LLGZCeo3JLN55j5Ap1NEZOgljIGIihlaP2n6Ucr7kazahQBUARXszk5IoTAdO8qdrJAJEWVDh2ptI5NidK3m5hrgOCR4FuIG+4ex3b/JHqUBGKWUx48xuzmcy0wU4ika3yvc85efo55WFFmGu8CPnhmuzOK2YS6drjGJ7gYApDhXCDEBms0WWYSxBZBIgSJeElOw0iqCSkRXKiJIY1HjIIPQvARFT1WKbRKtRFNZpBo0M5h2ikdY4LiCsgyQzGxlNMCpQ2rpmG9qZhMcxaTDBUDIbb3obIc5ZPbNcvbUjftKAUX0LadRxGauknQWreg2hiMzQixQUlAW01LVrF5SXAOTUwRvSESvUdJQz6ZEmKJXx6hCCnydX1KNilRpiRKIPhAVB6jbYLnRhNDcrSOGjVq1KhRo/7w6E3Bx4vcTeeBsYe50X8/5yMkCLGzs0Oe52w2G+q65sUXX+Tnfu7nWK1W98CIoTNvCG22wUi3/WEfuvWHAKdzkg0jNy+qG7nt7hq667r2htoGGecBzu3lz3NpDven69/JyQlf/OIXOTw8ZDabYYzhW7/1W/n1X/911us1ZVneAwo74DjcTtM090TJngfDRKQfo25cOrej1rp3p3XwsYsHBdjZ2bknxvXRRx/lB37gB/jMZz7DZz7zGc7OzvjgBz/Ie9/7XpxzTKfTr3D2iQjz+RwRYbPZ8IUvfIGnnnqKGCO7u7tcunSpdzpuuzK33anD49nBq+2o3u1tV1XFrVu3esgqIrz66qt88YtfZDaboVSqBQpQVRW/8Ru/wcc+9jGm0ykhhL6GZVmWnJ6e9q7TDk4Pa1x2kLRzMnaO02Hkavd593oY4bs9R4Zz+n7n6Xnzcftc7/p03lwfAtUHtbsNlIc/w7naxfNmWdbH/XYAMs/ze/pyP23DyYu+w+63/6NGjRo1atSoUaNGbUvr7u/MSInw9qnig7sZ1yeGTEUyq7l0dRfb1Dz//AqbZWAi0USmE8OlnYJVZri0N+Ejf+Sd1CdHfO7TL/Hsa2ve+dgO1y/NKMuCZ1+8zXy35NL+nNsv3ubWnTMmwK1ba7xSXD6YYAWWTSCIMF1MUDqiUZyt11SvvIopLZPgUOs1hTWEqmFVB9YbB3LGbHeGzRrUZIY7XmKVxpQlKEdwmsXlSxyfHbGqHZnWmNzijUEDu219umxVEV47IdtEGgFvLZnV5Daxt0QZBe88PsQUHaoghEgAjE5Zo6p1PiLSQkfdsixFMBabZWi3pHMkJglKJXdW70SNgeAdMfi27uIAiKUF0C3Y6yNVaa8bRFpuJqjYxohKSHUdmw310SFutcYWJfVyg1s3aC24TQJdzWmDDxBCCxWjJ0b6iFhBCKIIAXABrRLUQrexs2hEBRQQfCTPTXI8klx7WmtUjL0LsquWohUYrTCZwhhNUB5jVYrlbEBXDr+qsadnmEmOXexhJnNUNiVqm+JeVaraKICKir4WZAtnURoliiCO6uwEgjC51EWwJlDZDXPyjapkhGwhfRRJUbUxEqNrYW7rQhVJ4ywhRd26BqUzssUumS3I8gKURhuFzUqqpiKc3CbPNS7miHjKSYHJLevVhuCFxnskpGMdfapf2cXQaqXRSlCiwQiaDCPJ2ah1hngPoUnX3yYDQjpmPqa6iSqBUmMtNjNgQEJA60hw6f6INmBMRpYZsiKjmJSYzLKuA9VqyTzTzCYWgsM7IUpEQkAZi53O0dJg2nELwSM6tPVETXvtHDEmx7sNmVZoItFXiFbo6FHG4EMk+g2T9oF/azOUdhhpwCiCpIeAxTmMNfimRnRbtzSm42PLgqqqceszlEquUWUtaIsuSqL/moSvjRo1atSoUaPeovqq/c//Zm7C3w9QDt2AQxfXYrFgOp3ysz/7szz//PN477+illwHJzpQ0cHJDohsw0egd1Z18GMIGzrH1XluryG82m67W6br07ZzbQiIzoOI50Gj4T5eBFeVUmw2G7785S/zyiuv9LUGn3rqKb7t276NX//1X++Xq6rqnvjU4X4O6xluuwW737vPhjCs2++urqRz6SnOrtakMaYHolVV9ZC2e/9jH/sYi8WCn/u5n+vBXPn/Z+/Pn23LErtO7LOGPZzhnju+Md+rzMqsUg1CoqViqKZbTQGW1CDZQLtxY1vdONp2hP8E/wc0Ee0gFOHgF4uwIxxuNzjCQAsBomUhoJFANQ+ZWZWVw8t84313OvOe1uAf1l777XvqVUGXoBHS/iqy3rv3nLOHtdY+evt8zvf7zfNr598f3zt37jAajfjwww9Zr9dcXl7y/PlzvPfMZrNrYKzvyIzbicceYVwfqDZN89L1En8ejUY0TUPTNNy5c4fVasW3vvWtDixGSBbH9qOPPuL8/JzPfe5z12JfvffcvHnzGoiez+f8w3/4D7m8vGQ6nTKZTMjzvAOdWZZ1cbRpmnZRun1ItwtW+9dJ/N3ucfTP7wcBuF3tjtPL1vPu/l7mLIzXaby24znFKOCmacjz/Bogjuutv5b7+kEO1n+V97ABNg4aNGjQoEGDBg36HyMBCO/ZU44/MEn51FhwkEsmmWCxMGylRDy7YpJrtBbszTSjiSaf5TTbivW25I2ffI3Downrs2csLgrWheXO8ZS7N/dJteK7H5wxO54wGWU8+fCMqqqZZYrn5xUFmmkuwViWpcHKhP1bR0wnAlfV1GvHbJqTZym+2uCtAwPnq4IkT1huGq4WJcevnJDfPqa8WnFxsWZbeU5OjljM55zcvsfebMTz0+esVoZUCCa5Dq6ppmCcpeQ4cidQZwtU7am9QO9PyYTDVTUySRDeYmqD
MSF+0wuBsz70Ooowmsa6LkM13NcFgCVjgidArpHK40yBVDkgX2StygSsI3wUIvDOYZsQverjlzK7+5Y2drVzPXoECo9rQVj4U0To6BpoKsxqQbVYBcfoomR7cYlOEnQqMAK8DVDRGI81IUbVWo8xLnQsCnDC42y7bQFCCXDBteoaGeIrvUdKhZCA8jQOhIhfpJVgBM6Hn5WA2jkUAi3BSo9yEqUsWoVxlipsS0qQymPKEr2psOuCZG+Emh0gR0chMla2kaxOtP2S7RBLB0IgWreo9x5vPeVmCThGRzfR+exFDi6+A44vgDBhY04EIGsbnKtQzuCdR6jgTA3/GTAbhEjAVDTlltTneJcgvCOf7HP+fIF5/giXgsw0ySjFe9hug8vYCmisRxLGwViDkAqhE6x11M6T5wlCC4SSSKlwVXBnKuvw0mK1RNjW/ekciQwORmMNTgJSorTEO4szoePTGE9dG9JEkWUpSapJE4HORzg828pijGMvl+RaYJ3FWIG1DieCSzMXHpkm2MZSm9BX6qG7bry1oVvUOYQSSBuaKaUQYGu8AZUkeKmJ8bemcfj1gkQrcBbrLE6moBQgsWUBssar4Ob0BChuGwfFCleHCGOER5gaPHglEVLixMtrbgYNGjRo0KBBvzf1O4KP/ypuRiHENVdShGPRXbe7vRiNGR1yMdIyOh7TNO3AxNXVFb/2a7/WxSv2YUfcf3w+fC/4e5nTMIKZXZdV3022CzH7z42/i+f4sm3Di27BXZDVh38vO97+ce/Gs+6CVwhOzcePH/POO+/wxhtvkOc5aZryp/7Un+K73/0uV1dXXWRo3E/f0dnfZ4xm7QMja23n6OsfWxwr7/01R2o/OjMCpPl83rkSx+Mxo9Goiyk9OjriZ3/2Z6+58/rrJY6PtZZnz56xWq2646vrmvPzc4qi6MBcmqaMx2PG43EHFOPzsyzr1ksfVMZ99cFrHJ8Y9ZtlGffu3euiYR8+fMh6ve4cjHE/MVp2Pp/zt/7W32Jvb4/Dw0NGoxFXV1ekacp0OqWqKi4uLliv13zrW9/i+fPn7O/vM5lMOrgYuzrjeimK4trY9NdGH7K+zCHbnxd4AST7899f6y+bi/72d92/L3M2/qD5BDo3bITXcZ7n83kXFTwajSiKgvF43P0cx3s0GuGcY7PZXFvbu7D/B53L7jEP8HHQoEGDBg0aNGjQ/xgl3nM3gU9kituJIxMK4zxF7fFCsS0aJnmOdXDj9pTpLGE8SamLBpMofvyPfZLpnmZ1Occ5WK22jCcJt+4cYT18+GRJBWgETx+co7UnU5KHpxtWheP4IEd5x7ryiCRjtj/h+HiEb2rOtg6XjpikCdvVgpFQNMZwVdYk4xFXm4rny5Kb925w/MptLosNy9KwqhxVbXh2seL+J15FK8tmeYWtG+ym4DhxTMcZqZZopdHeMZIS8WSBajw1QJ6RaLCr8CVY6qb90mHoyVM6xVdNG+zpQMoAtJTsXIeBR4Z7COsdEhkiPdvYzSAfci3b56JScAZE6ODz3uJsFVxkUoENX5gV0FKwdhsE9Oi9B9fCR28CHHQ1Zr2gmS+p11uSSU61qChXJUkiycdZG03qcbWjbhx143EOGuM6H6YXCu8CaBUidPFZ50IcaB1coA6wzrRxnQKHQQrZkVepRHDbyRALKtrHvJQIPFYE8KmcRzqHaEArgVICmXi08kghkDbcDxoncKbGFA3ppiDZL2D/EJHOEDptI1QVXhiE0HgnAgyFAGlF8DXiDOVmiRee8aEIDkgZQVQvZah1k4aDtnin8d60425bx6Nr594hXB3mU84QSqF0igb03oxiU2PNFoqSvN4imjXZ0RE+UTgjQHpsVeEILr/GeKypEVLiURjT4Fy7dqQIjkSdt5GlTXC6WoMQCoRDqUDCTW0QonXt+hbGSoW1jqKqqZxouyBdcPIqCUKgU43OM4zzVFaAlOyNNcrY0JnYzolM8+BsFTKsWe8xBpypUDpFIKiqAqkzrPdIoVBSkCjZOVaFIESiEu6xJQqdZiAEUgoSlSBc3X5m4hCpRqc5Snp0miCzKdYqfLMJ7lMBXgjqosQ2dbDZOjBN22GpBM576pd8rjNo0KBBgwYN+r2rHxo+/svcT/E5ESh678nznLIsyfOc1Wp1DU5AcDhFF1fsz4uwTynVwYQIx7785S/z1ltvvdSRFqFlP54x7qu/313A1ndFxojOuI2X9cd9v9fCC6jTP6a4nT7U6sdn9vf3MlDUf83L5mI3XhPg8vKSjz76iOfPn3N8fIxSips3b/KTP/mT/MZv/Ebn2HuZ27N/Ls45qqq6BmLja2LEZZyrvruxD5u9Dz2TsUOyqqou1jXLsg4AbrdbHj9+TFEUTCYTXnnlla7Hsw+B4vHdunWLn/3Zn+Xy8pLVasVbb73F0dERZVmyXC4RQlBVFVmW8dnPfrab37quSdO0O69+jGf8rw+9+4C1qirKsqQsS87Pz7m6umJ/f5/NZsPDhw+pqorbt29zcXFB0zSUZdmNlxCCt99+m4uLC+7du4fWmjzPr62Fr371q7z99ttorTk8PGQ6nZKmKaPRqBur0WjUjUWSJIzH4+9ZW3Gc4nb7oL5/DUBweK5WK9brdfe7+LrDw0PG4/H3rLndddJfF/C9IPNlr+uv7d21Hp8Xz+HWrVskSdJ1PPavszinVVVRFAXr9Zp33nmng68R3MbxHo1GnJycfE9n6Q+CkoMGDRo0aNCgQYMG/avqR0fwsUxwoAR7I03qLXt5+De6SiFPFMIbkiTj6CRnPFKs1zXZbMqdO8dobVlfrnC15/TxJT6fcO+VA6rNlodPr0gsJA7Oni05nKY0TmDUhFoa8tRSFQ1ZlnDwsbtkqkbaGlMXrNaWhhDjWCw2pFKyWW8xecbs1gGXF1uWRvCpn/gREq348PEzmsrQlA2Zc0zHOfn+Hr4uqRvHdl2wvlowkoLJSHN4MAI8wsHINLjHV2gr2FgQWcoo18jGYhF4YzHG4pxHjsdMj/ZYPzvHOg/etf2GbdNh6+Rrw0VfQDd6LkiVxAzXF45HPKAgnYGtEMjwWxfiKxHtvTjhqb7tb4T2nkXE6FUXCvoIzkOcxRUbzr72XZoyOBmFmjOaZqSJBO8w3uEbRV0bjIXGgjFhmw0KL3zrcJQ03uKsx7r2vgSBFx58cL5J2fZAEu6PnBcoacPhiHjP4vECEtm6KL0nTRRJokJ/X9t6qZQOka3Oo5xD4wKXFR6lQzSrlCLEedoQh+uaC/R6jZrN0Ac3Eemoi711sgkgyito5+NFt6MDB9V6jZTn5PsClY3D8zsbZDvOPnR6CtHeu9owR96HmFcZJiU8T6btnJngLkwz6tUKRkeMxxls5xQffIljUzDdn1JcnCMOjmi8xlhDYz3WeWxZBwiZJCEW1RiMswipETLBAlpKGmexXobIYGNbwNgmT+ERsk1Pch6LxYm2KdRYPGBUwrYOHZCjPA3xsNIhlcBj2RZbGq8Yz2bkeYJsKqzVWBxOSBwgtULJFG8sILGmDnHBztE0AVJ67/FN3X6GYlEqQUqHVx5
nQxejVBrrCecnFNjwmU2apdAUAcy7EP+rBcj2GhQqQUiJSiRevviMTiBwtgk9rbQ/G0PYtKMqKxjg46BBgwYNGvT7Sr/j2NXvB8j64KJpms6NVFUVxhiKoug+zI+ApKoqptNpBw+yLOu626IbMkKyzWbD3/ybf5P5fB5OROvOmdcHK3Vdf4+DrQ8R+nGU8bV99+Kuc7EPEHfjGl/mcuyDkT6cjPuLUCQCnnhM8fkRqPRBUR9Ofr/eyb57c71e89FHH/H06VNee+01ptMpSil+6qd+igcPHnB6evo9ztHoNuuDWq11N3fRJRddj334FM9JSkmWZdfgVoxfjce82WxYLpd4H6JRV6sVX/3qV9lut+zv73Pr1i0+97nP8dprr11z7+06+27fvs0v/MIv4H3omvwrf+Wv8PTpU6SUbDabzlk4n89ZLBYdSLt582a31rz3rNfrbptpmnawNE3Ta445gNPTU46Pj7l79y6np6ddXKoxhps3b6KU4t69e7z55pu8995719ZakiTUdc12u+0gbQSh0Y0ax2U8HnN8fMydO3fY399nPB53DtZ4LBGqJUnSrZ8IMncdiLvr1XvfXZ/WWrbb7TX4GNdY7NR8mfrrs+/AjetpF/L/qzoP43j232NiF2Y87zgWcWycc+R53u3r0aNHHZTVWtM0TdcLmWUZk8mE8Xh87frrj1f/d7vn/P0eGzRo0KBBgwYNGjQI4MdmkqkSTFPFOIX9SY4tLcuqYZaHCMhRqtg7yPHGsVhZbn78NtO9CeVqTVlZNvOS5+dLsr09Tvb3Ka6WPHsyx5YlQmpGezmZViyLBn14iMUjbevMEjL0zFVL1CRESV4sGoxMWBQbEu/JhWNTlEzvHJGOc5589ByZJfzYJ+5RVQ2PHp9hi4YcoKrJpCCfpsz2U4qmZnW1xGxKMgRppvBCUteG/dmYPVuz+egKUToKr5Bak2mBbELPHO29gvMCmaRk05zi4pKmMjgX3I0OgXC0DYMEQCXan3rGRt8+5nwLBv2LLFYvBIIEkWhcrXnxQIxQDVsQ9JOJBB6HiBDQmwAqncdTByees1SXV5jChghUDTqVwaFH6GusK481lto4GuvxUgYoJWRwIlrfQZ4AjsDF8/KeaGy0ju4zBkeI1kS0KbI+gFQpZXBLAk7S9lVC5Sx1bVFKoiToRKAFKBW7FwXehghSJYIX0QmPVvHeyIVj9ODqElXXuHJLcnwTOT4OubcOhAxA1gsZNtLG5QoRYnSxjmp1hVSatHXxgerGP0JFfIgVVdKATfDOdk7UiE/DxgNkDv2PgiTN0dWWVJao5grcmnFSM371E1SPv8N4b8bm6pJGj2hkQllUlI1DKkWeKVSicD70b3ogzTVJlqGSHCMltfFYDN654FpteyKF84gWrDlrg/OxjcyNsaS18RgJeSIZJZCnIUZ3lCeoRFFuS5xUjMYjUlehhMbplKqssI1DKI1Mc5IsdDI2dUNdGrwAVBqgaFkipUYriRQufMm2bdX0zpIkkkYIHA5jA5zP0hShUkxTkUpHOs4wfky9nON86K105RZHgx9NcaZGSEgzjbECYYOrUQiJdA7pwzlbF9axdA7bAmSa6nfwbjpo0KBBgwYN+ndNvyPnoxACYwyr1QprLcaYrtevH2+Ypuk1uNF3FUaAdnBwwKuvvtp188V/8Ee3Xey3U0pRFAW/9mu/xle+8pUuzjW61/pRmhFcxDjKCGai4vH1Ow1fFr+6C1O+X/TkLozchYfxfPrPNcZccz3Gvrq++l2TcL0Prw93dp2X8TjruubZs2c8ePCAz3zmM0wmE6SUHB0d8dM//dN885vfZLFYXOs67G97dx/Rxdd3X/ZBa4RpcRt9MBrjdKuqYrPZsFgs2G631HX4Vl5VVWy3W9I05S/9pb/Ez/zMz3B0dHRtTHbHfhcsK6X4+Z//ef7aX/trnetwNpt1/YBPnjxhuVwym824urrq1q4xhoODA8bjMUmSMJlMWC6XGGM6AD4ajbp1dH5+3nU9RtdhURRMp1OOj4+5uLjAOcd6vaau62tjEHstP/nJT3ZjFQHiW2+9xePHj5nP5yRJwuHhIScnJ+zt7ZHnefe8foxoP643rqfdddEH6P2xi9dx0zSkacrR0VEXlVwUBdvt9tp2fpD6EL0/9z8IWsY/v981Za0lTdNr0b19h3AE3rvjEc+rKIpu7uD6dV9VFV/72tcYjUbMZjOSJEFrzXQ65ejo6AeCxQE6Dho0aNCgQYMGDfqX6c5Yhho87xmlKTSOsnYICXmu2DvImI5TVusKRimv/+jrSFezuZojlKBYNzx6esWtu7c4PNln/ug5V6cLNouS6cmYo0lKWTYst5b89m3y3LF+9JwUiUgE49mYyZ4mT1KEdVyuas62hm1ZMEslCsu2cWQHEy7nG8rzNbdee5WDScrpk+dcXmxxjUOXFaY0SO8ZncwYnxywXG0oFiuoLXupRGtJoiUoxexgj5mt2b57itk6aicRmSbFI63F47HdF2Bf3DuuzxY0jcFH2KhkgH9C4D1tvCPhL/E2PXYOCoEX4GpD6L5rqZzwbR9h+zqZI6RuXXaRd9nggnQh1hN0cJMROh+9d2CbACqdCf/hcdsVxfNLtJbB4CeDQ9I20FhLXQcXo7GOwjgQCuEk1ofzNTbeu8u2KxFwHiliA18bz6lACo+Sso1cbeM6Wydja3HEOY9qXZrOh3UnpAhQ1UFtDVorrBXBpSkFSSoRvt2u92ghUL6NTA2prwgEUnqUAG99GN9mA+YJyUmDnNzAq7Q9FxXgsCT6UxHxXlKE8ylXV8gkQyMROutAcpj1dmJ9cPO5bl7aqNz2eLp7SG/DcwE9muLLU5LqQ/Txqzx7Nqcpa+79wv+BZ3//71J9558z3d/HXa2p8WxLB8agRxIhA9KsGxccj95jrUMRnJiuvTc11oZ+SNr7bG9RCqyToc9RtDU4DoSWOCdorME6Q6YUo1SQZIBUSDyJhk3RYJzgcH+PVIZUMLPd0nhPXVQIa9FJRpJolPAYa3E+HIuxHi8sQrSfAQjwUiKTDJRASYmzFUqNQqyqF6h8RJIfUK+vwlqTAiUVifLgK1S2j08n+HKNsw0ei5AeIYv2mgKVZAghEVohvMc5i9AJWkoUEofCVCXWNEgBWknKqhexO2jQoEGDBg36Pa/fkfMxwoXlconWmv39fR4+fMhqtboGA+IH/VF9OBf78o6Pj5lOp912+8Ci7xQEuLi44O/8nb/DYrHofqeUYjwes9lsqOv62r6stdcAZPz9rrMrQsj4u37Uavxd/+8Rhuy68OL2I6jr768Pyvouxvh4WZY457o4yf559J2i8fkRbPZB4O6xAJyfn/Pee+9xdnbGzZs3O1ffK6+8wp/8k3+Sv/W3/lZ3rEAHO/t9jfEYontstVpdi1XdhaIQnJ0RKmZZ1sViAhweHvJH/sgfYX9/n/PzcxaLBfP5nIuLi85xeOvWrWvO1ZfpZY997nOf48/9uT/Hm2++yaNHjwDY399ntVqxXC4pigIpJUVRUNc1q9Wqi/mdTqccHBwwmUy4uLjo4j
1nsxl7e3tMp1OEEMznc15//XWurq46AB/di6PRCKUUm82Goii62NnQYxLmdzKZdB2UEbBrrTk7O+P9999HStlB06IoKIqim5Po3otj018fuwBuN3a4vwbjfK1Wqw7iHxwccHx8TIyqjfD0XwYfdwHjLuTsw/n+cew6p18GSPv9oWVZMhqNrn1pwFrLer1mNptRVRV5nrPdbkmShPV6fe1LDP1+04ODA548edJF7MZxu3//PoeHhwNgHDRo0KBBgwYNGvQ7UlOGmEbnQWtH7Rp8KjmaZRzdnKEElI3j+PW7HBxNKRdzim1JsW5YrBsslo998j7TNOX0/Sc8fnDJtvHUQnEo4Px8g81G7L1yQiobNk/OEVYipWA6TTl55ZBEwdXlhsW84KoU1FJwOE5wRclWBIfSs8cLyDNObszIpOTBB09YXK5QjcEUDaIx5KOce594hVo6njy5YH6xZi9Pkc4zHWUkmcbJhIMbB6Tlms07T7CFY2tAZ5o8k9A06CTFOos1wfHocTgPprQhwpM2qVPKNg7VX7s/8L17084BKcBbh9Saum5wxgDuhTvyBbVE6DEkI0S9ae2FrnVABqgo0SHWlLZbEIF3Dd4ahHd41yCQuLqkPl9it03oCXQe58EZR2NCvCpCUBuH8R6hFNYJnLU4J5FKkuah5xEbomdbjIhUIbZUQnCFSkmeCoRsYaySrSs0OBLx7ecVPsBI7+K9V0xiCoDJdeZRgW3ClzqdcUgJ2skQ/yk9VsgQP+o9rgnGRqWgacKXp/ESbxxQ4c1z9FGJ3r8DOscpjyDAO48Mc+Q8CBtgpICmLigXl4yVQgaSRT9+NfY6OmfxzryAwt63UagpXUZuG4crAKUTttsKc/dV1lc13/rtbyOfXKFFxd2f/4s8S3O23/gnHJwc4M4XmATqJKV2LiS34rDGoqVDSYEQKdYQnJdSBejnQgxwoj1SelzjsLYdWyGRooWzIvxsvUUoxTjRpFIgpUMIh9YJztRsNiUyG3F8uIfG41xDOtkLa2VbIH3opJQCcIbGGEzt8F7gXIh/td6gkwyhNEpppE6QiWqBsUXpBGMajHWgNEJpdKphPCbPUqxpQBpQiqYoSKRmPJ1gUoWrS7wt0aMJwoMr1lgxCuBeCNAZOkmwRmKFBePDZxvjKcV2RDU/x/twjaPTf/1vsoMGDRo0aNCg37X6oeFjBBuj0YiDgwOEELzyyitd/912u70GC9brNXmeM51OOzAZYcTx8TGHh4cdSNh1GqZp+AdKXdcYY/ibf/Nv8vWvf52yLDsAAwEsTqdTiqLo+vX6cZ8xLjT29/VhSnxe3zkZFY9n19EXt9F3Tfa39YPUf318fpIkbDabDtTFDsDd4+yfU5yLCAjjvMRjise7WCx4+vQpH374Ia+//jrWWt577z3efvttPvaxj/EX/sJf4Fd/9Vd5//33qeu6i7iNTsY+5Ixg1BjDfD7HGNO5KfuO0xiRmaYpxpju71mWMZ1OybKMy8tLRqMRH//4x8myjM1m0wHk27dvI6XsgDG8PDo0/r4/Rnme8/M///N84Qtf4O233+ab3/wmFxcXAFxdXXF1dcV2u+2gdewTjevs/Pyc+XxOWZbdeFxeXnZRs1JKPvGJT/DJT36SX/qlX+KrX/1qFzMbn7O3t8d2u+3clXF+IkiLMa1nZ2fs7+8zm82Yz+fM5/NrAD/Lsm6t7O3tXYPmfUdwPy64f13srnUIEDS6V4UQXexodAtGR7FSitls1l0bZVl28G53/PvOzjhP/Tjf3Wuj36F67cOEnS8dRCgY112M8o3O0Aiq+7A1rrf1es3+/j5pmnYgPEY49/cvpeygeFy3A3gcNGjQoEGDBg0a9DvVjVszHp4VzFcVWep49e6UdKQY5Zq6KJGTEfd+9HWUN1w9fY7xgqaC80VDvp/zxht3ceslH37nIVfnWzaNwKeK/URztjSMDvaZ7mfY+RWrdU3VGEYjzclxyuzGCfk448MH5yxLg3GK8URzKD2rxZpkkiOrhqfzksNXjnj19Y+xuLzi3Xc+pC5rRNmgPCgHqITJjQNW1rA6v2K7qtjLNHvasX+0RzrOQStGkxFis6Z49wkUjqIBlGA0ThC2ae9/Hc627i0TOgm7+4H4OYEU6DRFJeHf9q1VMEAgRAsVXZeeGvCiwHpHXW5xjUF4iYiAhNa9SFsziAiuLSkRIgBA4UOsqJcKXBu32kafetOAdXgsCI3ZzqnPFiweXqBV/DKzx3pwTmCcx3qPRWAE2OCfw3lQqWacpzjjsY2lqhukECRa4vFI5VEquBulEKgkQWpNOsqRWajeSEY5aT4iyTOEEnjrKIoyxJXWJU1RUS431FVFU1uwpg3fFDgL4LCNwyHwro2nFQ6PwgmFkJLaBaej8h6t2oEzIabWKYeWAl+CNxXOXgUH4eFNyA8Ag/cS6R0IiZe2dSsCUiCQNGVBvV6QTgXC54gWQIoIIb1DeNFCx+hI9b35p3XChjML7lfJ1ZOnlLNDNoXg3a+8xccvHvH8V/8/3Pzp/4w7f+Yv8CzNWH/pV7lx6yb64oqL0mDTHIeiqRqkcKg0QacJIklwKJwLcbvGOJy3JFohZYC8SIFp2oQeBE5AsH16hDekKswh1qKCzRCdj/EugGmVJRwdTlFKUGy2+MaAX0MybufLBfLrPXVZEFizoqoMTVURMbSVDaNxqEqRGBKtEFIhncVJhbUVUgqEkrjG4Islynt87cIcehmOHY+oKlSqySdj/HSPejUnSTRSasrNFluUqMYjtEJlAqEVOk3Appj2CwXp3j7Z0Q0u7dvoZktlHbl+8dnOoEGDBg0aNOj3vn5o+Hh4eAjQwYmmaRiPx5ycnHQwLEKw2MUYP/DfhUdFUbBerzk6OuoiRl/mjLLW8o//8T/mb/yNv8FqtbrWBdd3eI3HY5RS12JYIxCLz41Asx9V2Xdo9h1a/fjI6MKKr32Z26vvOouv7YOOOBb910fIFJ2kVVVR13UXsxm3GV/bP8bdaNSofqxtURRcXFywWq346KOPePDgAVmWcfv2bU5PT7lx4wa/8Au/wD/4B/+A3/zN3+zGIfZ1RngWxy4ekzGG5XIJ0EG5CChjxGoEWREepWlKmqYopbi4uGA+n3Pjxg1ms1kXjVlVFWVZfg8024VYL4vtjNA7unE///nP8wf/4B/k9PSUhw8f8vDhQ958802KoujAndaaxWLBarVis9ngnGM0GjGdTtlsNl03YwRaRVFwcnKCUor33nuPy8tLkiTpnJF1XXN1ddWN8Xa77dy3aZqyt7fHn/7Tf5qjoyO891335s2bN7l79y4fffRRt7379+9zcnLCdDplNBp1IK0PGftrLs57hJd9p6P3oXe137nad/hGwNyf77iG4pzFtbnr+o3bivuNbtg+hNyFx/053FW81mIkbHyNMab7ooFSiu122zlMYyzzYrFguVx26ziuw3guMboVXnTO9q+z3feheH3FDtv49/7YDho0aNCgQYMGDRq0q7ceLlmXDZ+4OeKVGxmjmUYJybJoOHntJrfuHLG5WrM4X6ITzXxZUzaeG
/dvcetkwvr8nOXzFfWmphaKkzsZ3ng2jeTw3jEJFZsnz3G1Q0rFbD/n+MaU/eNDtqXjrbcfsaxDv+BelpA0JZuqYnowZb6qKZzg3qfuMdsbc/X8OVdPL7DrmgxBU1saIMlSKq24XG1RixWiqZkJz82bUw4OJqy2NSIVjDJoVlfYD56TlLA2Hq80o0ySAE6EaNCmbnCuhV5S4ozpnI7eOZASJSRCCYQU6ERD4nEWmiq435yPsCkEe0oR+/XAlRV1ucF7gycLE9G6xrAW8HhjgPglzdBTiFD4FuUQGya9AG8QrsHhQrRltaE6u2L1aIGSAmsdzgmsD4zU4nESkBqdpthtRVMaHJCkGiUF1abAGYMQkizTATw6S+MceZ6S5BmT4xl7J0dMD/eZHh0z2RuDtGSjGbYuMMUamWTBtZhkNHXNdrNhb/+AbbVByxFlWbOaX9EUlvMPHzI/fY6rGrANJgEpEkxj255CgWts6DvUGiVliLGVogPE3geHpxbQeIcWod9SFFDbOcJZ5IlCpNMAkrGEqRF4H6JBu3hVZ6g2K1SaI1EBO0rdxq3a1sFpcK7pQKRAILwBFELoMCcvKjpBeJrGklvB8mrN4oMHOOV4/1d/G6kUJ3/iP+X2/+zP8qxpWHz11zl65Sbycs1Z2VC69h5UgtICmWbBeYvEe4m1DmMtXggyNM5arHUhslaFcTEudHtKQsyoVIo0USHS1RuE90gdnL+NtYxnM3LtkK7Gk6FUSl032LKGOgBi4T14h61LvJSgc4xxeGNQwiOFRKDa+NkGLRPSVCGkR2kVIoBrj1IarSVCKbwziMYjpcLYKsQgI5EWkALnQi+rFyt0vgdZhrUGhEFkM2xjQvRsY3BuhdQh5tk0AArvPLZYorxkNB1jbcpEJuBepG0NGjRo0KBBg37v64eGj0+fPu0+pI9gAuhgQQR98UP+6XTaOdj68aYRtn3lK19hs9lwfHyM975ze/Xh3DvvvMMv/uIvcnp62nVIRggSHVARHkRIU5blNbdWP1o0HnN/HzGWMQKOXQgawVY/OvJlMZG7QKUPHfvP6bvYIAC82WzGer1ms9lQliVpmpLn+ffEbPa319/H7t+ji/Hy8pI333yT5XLJYrHg3r17/OiP/ih5nvPee++x2Wz4mZ/5Geq65utf/3oXAxrhXHRCxujQOA5N0zCfz5lMJnjvGY1GnctsMpnQNE0HuCKIjnBstVpxenrKo0ePuHv3Lnt7e0gpef78Oc+ePePTn/40Wr98mcax24Vg8Rj7zwE4PT3li1/8Ih999BFf+tKXuuOPMHR/f5/pdMqNGzeubSMquhKjq7coCq6urnjy5AlSSsbjMePxmCzLutcmSUKWZWRZxuHhYXfek8mEBw8edC7coii6yOKyLLl79y5KKQ4PD7vo1dj12L/udv+La2MX4MffLRaLLoY0rv/YM9p3rfbX6G6ka3Tkxi7I/rXR7ybtR+zGfTRN060HrXXnMIwwfhc2x+jX6Ho2xnSQO7pMY4RwXdfM53MuLy8pioKqqjrA34ex0U1b13X33hDPM45rXddUVXXtnCIM74PIvvN00KBBgwYNGjRo0KBdaS35kaOM28cphzf38FXF2jhe+dQ9pqOUy2fPkVIy3Rtz+nyLGKe8cXOfzbzg2YdPSLXg9GzDsvIcnUxxxlKkE2pfwtlzXG2QFnSaMD1MuX1zD6Eznp+vOD3fUMsMlVgyKfDbJV4rptMxj87X6MmUUaJYLbbMLxfY+ToAifYWyniPUhKXSiSeerNlP1FkSiJzRTZK2daObDJGeIOvG8STOUkB29phhWKcKZRtqLcB9sVIUOfBOgdKhD8Bbw0+BJoG5ueCay5U6wW3lmjjQKV3eBu6HGMFZFsuiClLisUSby1Ce6IrDmGhKQmFe8HhKFSAdxC/0BocitH56PEI13SPYSvK01MWH12RKoVD0DQOa6FxbcuklHgpGB8e4WzDZrFFaRV6FBtD0957pIkiSRRSCaQU6DRncvOIo1ducuu1V/GuwZYFq7PnXH33IReNxTQ1TWXAtB178YvfHoSSeBw6TdB5SjadMD6+wdGtO4z2T3jl06+T6JTNcsXjt97h6uFDqtWSphI4K3E2RM1aR9tpqfHxPhFoDHjlY3pt2yXoEATnn/KCer4l4THi+C4im4W8Vg/4ts9Stm5GIZBCYk1NtV2SCk3gtRH90kbhEtyrzgW/o2i/1OrDcQgUAhPmzIduRlPXzI5u8PY3H8J8hThOKJaWd3/5NzFVxa2f+c+4/TP/Cx5by/wb/4jDT3wS89Fjrqoan4xxWGqjYFuR5ileSuqyCj2HWiGFaGGiAN9W0HiLIrhohQqAVXqQMoyT8Cb8XmmskGzLOlwbrqQpGrxQIAqapm4duAE4JqlCSI0zLrgSVYqzYKoG4cJYSh2ANhgUHq0sOAcuwzSWpjZ459q+SFDaY53G2zpEDXuwPkB/51xYq4lECIdvLF5tg6vYGqxxKJ0g85xsuo+1Hl/PkTrDGAGmwrmQF1zXDmGXJJMpid7DiDM0LyqSBg0aNGjQoEG/9/VDw8ff+q3f6j6sF0KQJAk3btygLMuu/w7oIFWEB/ByOLfdblkul9cgZnx9mqas12t+9Vd/lXfffbcDjhEE9sFB3H7fBRnjTCMI6Uer9rsd+32FxphrMat9OBN/3oU28fe7UZS7EPVlzq++EytClfV6TVmWbLdbqqq6BrIiSIEXYCduv7+ffoflcrnsug3LsuStt97iwYMHnTMyz3Pm8zmf//znWS6XXXRtdISu12uWy2UHReO++xA4/jedTjuolyRJN+YRYMbXr9drrq6ueP78OWdnZ537c7Va8clPfpIvf/nL1+BX/xz7Trr+WPaBbF9f+tKX+JVf+RW01nzmM59BSsnV1VU3xsvlsuuZjG7c9Xrdgci+GzBCqqurqw6IRVjdX0+7Mbn9Mfp7f+/vsVwuuw5J7z2np6fhw4c2nhigKIprsbb9Od4dm6h+PDCEKNl33nmH09NT9vf32dvb614X99OPL+1HmPZjW+PPcftFUXRrtz/u1lq22y3n5+es12titOvl5SXGmA7AOuc6aN0HrP35hOBcjN2b0ZUY/4sRq5vNpouTraqq+zJBvJ7j+UYXZby+d4FpkiSsVivef//97ksNsRdUa905QKMDNR7nT/7kT7503Q0aNGjQoEGDBg36/atbY8XhQcrNVw+xZY06PuETt2eY7Zar0yv2j/fZrgoen685uLPPrYMxjx485+Kq4tbBmKdPr9hYyFJNUVn0dIysDWmxRdQWmSYc3hhzcntKqjSbRnI5r9kWhoYErCErS1xZIdIEpmPOtw2TwwO8syyXFdWmZOItyobo0JqWFyEwUuGlRHpLpgVHxzn7h3uUjcNKwXQyAmtwTuNOrxCrhtJ4jFSMR5pMBReVtx7r2ntL70EE96IzTYg6xYcIVNr7POdoqgbvHUKFTsDu3se54HyMEZEIXLtt5yz1asXy9Dl3rAkxqUha7x2YCvQIlAZLgI8qCxGZgJOh38+3/xdAUBv36RvMcsnyyRytAmRq
KkvdhIhVByBDlKtKNOvLS0xjQ4+itQGgeYlSkCiBVBKtYHbzmMnJhMNbt3C+YXN+zsMvPqbZVjhrsA6ECBGoIp6NkigdOgpRDikUVrgW1DX4raEoSuxqS7NYMvkYjG7fx2nJ/mt3mdw4Rvk/yvzZGU+/8w6XDx7QFCXOWKzxmMaCCLG23oWeRdvG5QopA7C1vnOeBuuqQlQGMV8jxBPUsYBkArQPCxU6M4VAeNm+DkxVoJJtAM1CIqQCQryrkIBzXa9nRz59/NJqcMIG15/DOxv6PgV89K1vMxYWj8Yaz3Zp+OAf/AsAbv/s/5q7P/1neVQWzL/729x440dwHz5g1dQYpbGNgURTO4mpa2xVkaYapVVwv9Im7DiPcz7wceFApyjvSaTAOxt+70NUsFAJlXfo0YjDvUNUucKZBpmMQAqassDZ0AXZtn0ipKAxjqY2iCQBB1iDFJ7GhU5P1xiSBFQi2/0RIomdozEGZw1pqhE0AVZ6h9QKZwTeOGpr8SikMiitkQIkNVokyDQLn+MY156nw9kGJQS23iBlTjI9ZHz7Puh9tosz6tJQLp/hZBrWZVMjRckoTxFiiF0dNGjQoEGDfj/ph4aPRVF0YCV+yD+fzzuAET/073csxnjTPhjZjTaMzrgItSLM/OIXv8iDBw8oy7J7rO9ErOu6c1buOqgiJKiqiu122zmotttt93gfCvZjI3d7HqPjqe8IizGVUf3H4nnvRk7G8+5HqUagFcdsNpuhtWaz2XTbH4/HL416jOAnHlsfpsYx2Ww2ZFnWwSIpZTcG0ZV2dnbGnTt3ug5OIQR1XbNcLnn69CmXl5cdRCvLksVi0R1Xvxsyussmkwl5nnewJyqO82w2Y7PZcHV1RVVVzOfza92V8TW70Lb/cx/s/qD+zYuLiw4UHhwckGUZH3zwQReZ2u+xvHPnDn/0j/7R7mfnHI8ePeLi4qIDXjdv3mQ+n1+b9zgXcez60aPRdRsfv7q64tGjR3z6059+adRwvB4uLi7YbDYcHR1xcnLCbDa7BgD7ehmotdbyzjvv8OUvf7nr0dwF9fGc+v2R/S8XvAx09mFoH8JXVdXN6XvvvcfV1VUHG2OkbXQubjYbptMpd+7coa5r9vb2OtdzVHSTRvdtXF9lWZLn+ffEGkcwGNdj3yn6srmKcxSve+cc6/Wahw8fduflnGOz2SClZDKZMJlMOkdyVVXd+9agQf+29epM8s5/9XP/tg/j951+4zd+A4AvfOEL/1aPY9CgQYMG/e7T3VcnnOzlbOYFezf3me3nLE+vEEBVNjx4cIXFcOfePsI43v/2MzaLNZlM+eD9c8ZHY06OM84WhnE+wa3XbM42FLXl1o0Rr9zb5+jGBO8Ezy8bVo2h3DasiwbpoV5uoCwYzfbIbhyzKtch3nO9pTGgyppDb0KfnFbokaYuquBkG+XcuH1AU2zQpWN6PGX/ZJ9ApCSTyQhnarSU+Ks59ukS20gMkvH+CF0V0EhwATyCwHkXWJEH51rAJ0QX5ymkCIBJiK5TzzuPFBadKixgm7b1r43zdO5FFYcHmmrD1cPHuKbE+2lwBbbpqjH2UegxotkikxStswCUWugZjou2b7BBWBuAXlVRXy4QNjja6trS2ODAcwiQCqHAedpo2dAz6RsLHpSENBOhx1Er0nHK3tEe+7eOcNWGxYfvUW1KhAgxsgkekUpUolAtDg7OQIEQLaxtOw+FD92ZIZa0PVUB0jf45XMWX33MXCnS2TFieoSa7HHnUz/O/u1D7n7i5zh7/IBnb3+Hy/c/pFyvaSqJD9QT6yzCG1Tr+PPC41ABILfmVN/OgzUAFrFYIdRT1NE9EHkYMFwLD8PfvfUhAtRYbFUidXAZCkQXdxv6KF14rZB4oVqnazvn7bhE66tSCmcaHn7wAe988Sv8gST0eTrjEVoGAPn3fhtb1dz903+Buz/35/nov12w/Ohdbr7xKfz7H2CNxQmJSlKctbi6IJGSTAiEtzjvMMaCJDiDDRjryaYTZJLjmwqBQzoBLjgeZZpRNB6BZ5xqsNsQYyoVQimcD4vKK01jPQiLVCGC1jhHaQXCGRJlUYDSOoBa73GNQSlJmmfBvWoayBKapgIvGWUZWoaO1XKzJbUZyTjFtdGvjfHtekrB+dDNKjTJKMWrBC8SdKIQxaod8+BstHWNMSWukehFQrYvGE0nKFXizRjXlDilwKXglwiVU69X/4bfcQcN+lfTcN88aNCgQf/T6IeGj9PptHOx7YKQ+HOEYP3Ixehc2nWr9cEbBPjYNA2bzYa3336bR48esd1uaZrme5xtu6AvQp/4GHCtb3C9XlNV1UsBZP/5EWz2wcYuHOqDst199gHj99tO35kYt9v/czQadT12WZYxGo2uuRojiOk77HadgfEY8jwny0LETJZl7O/vd1B2vV5jjOHi4qIDLMvlkidPnnS9jNFZFiFujN2MzrLoyuwDnRivGZ18EQQppTqH52Qy4Vvf+hZPnz7tYK8QIaq3H3HbP6fdcXqZ+6//eARKu2smukkjrHLOda7Hs7OzzqUZnxtjYcuy5N69e5yfn3dzX9c1WusOesVozwi14pz1Idp8Pv+e89l1N0YXYVmWzOdz3njjDcbjMUDnzOu7eHevw6ZpmE6n/If/4X9IkiTdf3HdKaW6cwQ6QA8wmUy6a6F/fHVdc3l5yXq9vha7XBQFRVHw5MkTzs/Pubi46K61zWbTdUbGMbi6umK5XOK9Z39/n6ZpODo6Is/z77lWIrxcLBYdoOy/v/SvxdFoxGKxuNbh2n9uv/8xwv64lqPTMa7h+D4Xt2Wt7VyYQoRY2O/nth00aNCgQYMGDRr0+1vHk5SLy5LZrSPq0vFwNedoT1NsHetaMj3MuHV4yMXTS55dFWRC4GrPyhYcnYxYrStWasTocA+xXOCKGovn7t0p9z+2x/5sj8pqzpc1q9KxXG1wQjPam3DxwWO2hUVpTWUE6WKOKwrKKoCaVEoSKUiyjK0TZOOc7WKNaRw60eztjfCrJbpxjKY5e8eHrMqSfJSTaUezLdCJwmy2mIcXOCsprWN0vMfB0ZT14y3O2xZQqdCNGJNNXAuORHtPHO/nfAA63gfoiBOhC7K2eBMcWzLTCK+xdegC9Ly4Pw+xrYLzDx7QbNck0yNQDlCBkrkmONH0KIAfnaLyDCFUgKIujI1w0VEHHgcONs/OkS7cd1W1xxgf3IAIUKKNigVkiKk1psFbh5KglSbRIYZXp4p8mrN3vIcp1qwffYSSAqUkk1whpUNqgRDtF6t1e08kQLYU9cVtX4il7SCtVIHveRdwnPN4JD5JQiRsOYdqiZ1rni3P0PtHmNtLbr/+Ojdef4P54yc8/sY3Of/uuzTrDdZ6rAvboXWDIiQumh2lCA5IHHH0QSAqEPM1MjmH2S2ESFsnqaOlxq1xNWzT1AE+CqlxnVM1LAjvXdsHGboTQXbrRbRzRAu1t/MrsJZnj69gsSKdBOcgEoQMVRzlxvLBP/wyvqq5++f+t9z78/85H/63/zeKZx9x65M/gnn7TayX+LpGCE+KaN2IDls
bTDsWOlEhRresQr+laVBZik8U1A3OGVSbCVw1DUmSkgiwRfhcBetQiW7nVSF0gpcKawTelAF86wDzPR4tJSpN0Aq8kMi6QQmHSEN0bxwTJxSuapDOoJME5QXeg6mqEEXsLabaUtkwr84apJA4JRHKkWQZ6ThHZhNMU6PyFK0n1L7BWwMqCdeITGhoKNcr8M9CP2U2wltPmqYY26ZS2RqZ5lgvuXp++m/kfXbQoEGDBg0a9LtTPzR8jIBpFw71H483APGD/gh3YtRhhCN9iNZ3jTnnOD095b333qMsy2txq33g1ocuu7/vxzbGx2ezWefO6gPIGPnYdzTCixsZuB5r2tdu/Oru8/pgtQ9ed52f/b/Hc4rQMJ7P7r53Y153xyjup+/Yq+uaP/yH/3AXMfrNb36Tt99+m6IoeP78OWmacnZ2xkcffURRFGw2GzabTdelt+tijECmruvOXVnXdQeMjTEcHBx0Y5HnebceRqMRt2/f5t1338Va2wG+4+Pj73EwxrHZdTdGwNsfvz4wi0BJCEFZlrz33ntorbm8vGS73V6Dg4vFgvPzc7797W9362AymTAejztILaVkvV7z6NEjjo+Pubi4oCiKa3OglLoWlRshVgSSSZJcg9L9dbS7TuLPq9WK7XbbxX/GsY/XQLye4pz3Hcnx3GOvaRx77z3b7ZbNZsNoNKJpGuq65ujoqHME9q/fxWLBYrHg+fPn3XUT4e7t27dxzvHhhx+yWq2urb1+PHDTNJ1b0BjDs2fPWK1WzOdzrLXcvXv3GrSNTtrYsxjhZIzojVCz/14Q33viOFVVRZZlHQyNz0nTlOPjYxaLBcaYa/MUr9sIH/su5/glihjxOmjQoEGDBg0aNGjQri42lr2TKVVVkY4VN2c5l883NEhunozYn2RcPLugqD17acr8asPkeMJJpri4qli7FF9uYbnCNQ0q07zx8QMOj3NG4zFnS8vVtqbcFMyXBXo2YiI964dPKNYNRiYkCsYKfFljjMTVDdJajFQksxGFTjDeYxZLmtKSj3OKoqRZr9Fasn9rxt5sRFU3TPenOBO6+LI8oVwsqb77hLwSFLVFZSmzgwn15WXoArQtYLQR5oXAyhhrGm95hA/P885jWzClpMI6G3r/hMBZkEqiUg2tw882TYiCFAG8GWtJtObs/Q8oLi8ZHd9F6CTEdAoCeDQGVIqQCSLJSUZTpJLgLR7beuhcG11pQUjsZsvq0SVaaqraUBsRYJ4QeKWCA9F6hAqdlWUROvm0kqQK0lSSpBKdSPJpRpYq7OqKVEvURCGlQKr4xWGFVDLUXcqwbSHAe9v+XQUg613gTe2+IX6h20J3X+rbwFkV+hmlCqDMW/zmFLM+5ezpd1k9eIuDj3+KG5/6CY5efYOzd7/Dh//iN1k/eYptQtefqV98BiBjbK7v1To6EXoJhcc1DqssZnGFThMYnyCVDsAS0cHeCCOdaTB1ieq+JKsCQBWi/VMG8ChU+zvfsjaPF60z0lmq5RLhJe9+89vkTUWiUpz1GA+idm1fo6QpHY9+623S41/h5s/+Ah/7X/0f+eCv/9fo86fc/PgnMO++w9oaVJaikgwkNFVD09ToNA19l2VD1RisdSRS4sqC2tckOgHrkRiQCUVtkNKTpxJXW4w1SKWQicYaUM6gdIrTCVRbhJRY77HGhyWIJNGSRAmSPA/xv3UNrkbqcL/uWiAqZLiWnBMoJUiyFOuhqWq8ECSjEcY0ONfghQZTI51AKEuWScaTCXo0xiHwTYNXOVIIvHfIJMMCOsvCNe1VAM9JhvNgaou3BmdKsvGEfG+KsZrtxSmmuaAoPb7Z/k/xtjto0KBBgwYN+l2iHxo+RmAXQUt0cvWBTx+2rVar7jkRHkT3Ur+vsP/6+XzOF7/4xQ5g9UFNHzj13YPR/dR3gEXA13eIxRjHoii6zr/RaESapt8TObnbI9iHe/1jj46qvgt0Fzi+LCq0/7y+i7MP+Ppwsw/fdgFV/1z72wM6d95rr72GlLJzUZ6cnPD5z3+eo6Mj3nrrLS4vL9FaMx6PWa/XXF5eslwuu/n7QYpQK0bb1nVNURRcXV1xfHzcjXGaph2QW6/XfPTRRxwdHTGfz2mapnNm9s+hf867ztmXHdcuAO7HyL733nvX3KNxO3EeIjCP+45rtu9wVUphjGEymXTxrBFAxnWYpilFUXROw/F4zGQyuQY1++ofS4TH8bjhhSM4OkqbpuHZs2fcuHGDGO0aweTV1RV37ty5Bq4huGn74/H48WO01ty4caNzKEfnZtM0bLdbpJQdjJvP512XY5qmHaw0xvDmm2+SJAnn5+fUdX3N0dw0TRc9W9d1d44RxMa45hi/e3Jy0kWzXlxcXPsywGq1YrFYdO7oqPF4zN7eHmma8uqrr7Jerzk/P+/WYpqmPHv27FqvZZ7n3Lt3r+s1jWsg9jn2e23771fxvI6Pj6+93wwaNGjQoEGDBg0aFJVrzWJVc/POjImE9x7MSUeCO8cjzGrDR+dL0nFGuVizrQ33Pn6D5cWS00XDvPZIU5MaS+1gNEq599oBN27ssdlYHp43lLWnWm9ZGs/ezUP0dsnm2YKqVog0Yy8RHB9mXM4rGuuwbYdfOspQkymFgu1iy3q95cbJPvs3J8zPLrl3PEIlihv3byKlZ1M0jCY5OIdOM/I8o15cYp9ckleSynq8UmS5pD67xDUWa1snoQCwhMa4Niizc8AFh5sQAodrnxsU3XxeWAJYE9gmxFMmkxEyT9FNg6trTGmCM9KDxLO9vOT8w0ccvvYjkOa9GfF4WyD0DFSGUGP0ZIbKgrNN4PFegAuOOy9Ampr1w2coAvTyPoBB7wkxoVLhrUWp4FZsigYlQGpJkip0IsgzSZZrsokOUFIKRhMNyqOUDF2CBCehED6MixSho6/drnUeKUUHI0OWLKGvUmqcCDBS+rZiQ0rwhJ5GIfCJQxDibb0HJ11wMDqDXZxx8Y0rirMn7H/809z6xCfYOznm6Te/xrNvfAWzrTBK0DQ2xOESKg6lljjvcF4iRDt2PnSHisYjiga5XqDSEV5MQIZ7LO9F60T1oMJ5OlNhTYqSCuddCxsFyAAMhUwQQnVgtd1QOyeh93F1eYWxkgff+A4nWhA34ZzDIBCVIctDZGm98Xzw934LgeTGn/kvufvn/0s+/G9+kTufHHPr/muIxw8oSRB4XF2BtWgZ5s/UdXD2AdJb0lQADioTXIpJhiOhdp7x3gSFDV2l1iClRsgEZLgiwOGVwEuFKSTYGqEkzjpsXaOVIk00ehRA+eryKVKAyscIb8L4CIk1BmcbhFCkaUaepZCO2c5X4IKj0VpojEcKE9yq1iCThHw6YjSbofMxqBTblEjhkErQ1HXIDJYaSYFSHpFNaSqLkjVqMsE6R7lc4Nw5Qgb3bj7KSUZ7GKfYPn+GznNG0/G/mTfaQYMGDRo0aNDvSv3Q8LHv2vpB3XN9MBgjPyOUSdP0mqMqfrAf3UTf+MY3ODs768DByxx9wDUAGUEBXIdSL+tdFEIwHo9RSnWxln3IEI81bis6uHZjTvsuvP
75xsf7rryXgcxd+LoLoPrj8jL41t/X7nHvOrIePnzYOfLW6zXb7ZY8z9nf3+fWrVt8/etfJ01TkiRhtVrx7Nmza0CmP7e78aBREYLFyNKqqjpXW5ZlHXyDENO5Wq3IsoyjoyNmsxlPnjxhOp0ynU6/Z03FP3dhTwRB/ePadUjG10VoFwFjf55eNmbwYo1F16JSildeeYX3338fay3j8RjnHMvl8ppzt+8AjjG3zjmSJGE0GnH//v3vu884lvF62I0ireua58+fc3p62sW3VlXF/fv3OTo64vT0lI9//OPs7e1dWx9xPUa4fXh4SJZllGXJZrPpgGmMqV0sFiRJ0vWtrlarDkRaa7t4UqUU6/W6i0bun3v/mozjF6FuvA7H4zFVVXUxrKPRqHNVxi8H7O/vY63l8vKym+O9vb3OYRodtgcHB1hr2d/f70B+hLJ37tzpHI/xdfv7+xweHnbj0o+I7js/43tRBPdKKfI870DloEGDBg0aNGjQoEHXJAX37xxiLLz7fIPKEk4yz+LpBiMESaKYP5tDlnP7cMLF0wXzOqGyDcVqzVgISm8ZjxNefe2QG7cPuLiqOZ03bIoKU1SsC0M+SWmePKWuDJ4UIx1Z5pmmknJbM84U1kkuVjVGSORohC0Lik2FtY77928gxwn1asXtPUU6HnP3jfssLi8oasjHE2rTMNs/QNFQbeek2wK5qqmtp3aQTzSJ8LjS4gh9gJ7Q7eg8OF5EpIbewtjUx7V7Mu8JnXK2vQf1tP8e94i2O1IkDeM7N9CjlOr8HH+2wtgGj8A2hsZYzj74gNf//Q1yHHofO2rnanAOITNIEpLJAel4RsEznPd41zoe8UhvKE/PqJdVAG2NoG5McDsm7ZeVrUFC6LY0Dik8aRZiVke5JM016ThBJzIwHCGD803TgkZaR17og/SCABlVgHNChf5LSduHGB1uRPjowEukUASAKtoxDdttzY6EQkpal5wEJN5brAnOVOEN9ekDzi+fMXv9Uxx86nN8/D/4j9i7ccLzr32J9fNT6kpg6jY+13uEaCNzJaGzkNA9qYTAW4tvwK0KZHqB2M9AKjwW58PRh75K18aW1jjTILUJHZE+CYsDh9QaZAKI0DnYxbLSwkeHMzWLZ49ZbVcsn53x8UQihegqIT1grEU2kKQa56BaeR78/d9EjnKO/8T/hls/8xc5+9X/J3f+0L9HXd/l6uqMohY461pgLnEOTG2C6xVPIjxKBCemVEmAd0JiTEOuFbm01MZRlw2ijazFNGSjHJlpjGkQXuIJIBsULlBNpNM4YzEYdFpj6yVJopHSkk33wnFV2zAWTgAKqTU6kTgEtk1/ElJgbENZW5RzLTT2KCWYzKZk0wkiybAehGlAKJzzNNsNCIFOLVKFnkhnHa42uKYgySQqn+D1mKvHH0FlkXaBPjzAlDX18hFNscK1KUI6u/4Zz6BBgwYNGjTo97Z+6E/Mx+Nx53KLIDK6/rz3XURihFCj0ajrcutHrRZFweXlZRdpGGMRHzx4wHe+850OEu0Cxr4brO8u3HX89fsY+3GU/cfTNEUp1cWK9oFfVB849uHlrrOxH4MZf+5HNb7M9dh3t/VBWT8Cdhda7jr/+s/bVR9mPnnyhMlkQpqm12IoHz9+zPn5OVprsixjOp3y9OnTa+BRCEGaptciSPvjX1VVF70aAVQEOUIIlstl1/cYj2k8HnN0dMR0Ou1ccqPRiFdeeeUa1O6fV3+c+mBrFwr3nx/3G7e5Cx77ayWCzP64xjjO6JhzzvHKK690+1ZKMZvN0FqzWq26HsC4rqOjMB4nwPHxMbPZ7Hvmbheex17E27dvX7t28jzn9ddf57XXXuPNN9/stjmdTtlut6xWK4wxHSRsmqaLII2RrM45ptNp5zBcr9dsNpsuVnaz2VDXNUmSdI7Poii67cYx78eiRu3GKMdjj+PRX+tlWXaO1wieY1zs/v4+n/3sZ7vo5dgFOh6Pu3UWt1/XNZvNpht/pVQHsaOzUinFaDRiNpt1XZ91XXeQtu8u7R9njGuN8a5xPZVlOXQ+Dho0aNCgQYMGDXqptBBcLBxuOmKSbXGXBc/OHBZPmnpOt4ajW4eopuK9Dy5x+Rjva6rlhtSBl44bN0bcu3eIkpoPHm642FTYxrBelxTbhkyBXpWkKuGiclTWMBpppolHCVhbQekU26LCSEUtJHJToPAcTnImRzMqU5Baz8H+hDRLGE1yzk4vkElCosE4y8nde9jNHKTg+OiI5994gLCCGkd2tMfeyGMWJdZD5IYhmtNjncP5ABzDbY/oHu99jTX8u7p1/HloOwHBOxv6DJ3DGoNCMLl9GxKB0oJmXVKXNd47rLMoqXjy9rtUmzV67xgvkxe4yjVg6xZmaWR+wOjgBqtHD3C2wLeQVHiPLZZsn5yRJBJTewwuOPV06B8UCHx7Ul4IslSSKEmWKbJMkI0S0kwhE4lqexzBoaRA6ADgHD44H4UP3X8qQEEhQqapVBovPFKEXkMp20jW1sWIUIBAKBHiZVE4Z0DEzkoRuiKFxFmLlkk4R+sCyEsV0jraski8a1i9+ybN/Iyjz/4hbn32M4wPZjz70j9n/ewRTWFoaot1HmPbL3q7FpbiX8BIwDUOV1ncco0cbUCq0LUpPB4X5tkHt6dzASA6k6J0CiI8LoUO0ac6ASlbaB1AYGyixFtsXbE+fc7p6QV51ZDPQk+kEr0vpVtP3dj2i7AKPGyXlvf/9v+PZO+Am3/sP6FZzTn953+Huz/1Bfx7jma+pqxNu14tKkmQSuOcRUtBmoQYXKESrIDKNGjvGUmFFp66KGkah5UKqTRaq+D8VLr9nMtjjAXRQnvrETpB6wzbNNCERCGcwxYrlAxfJs8zhZreZH1+gd8uSLTGCYGQDttU1CJAU0To0bROoIQP61ALkjQnyVLkaEJlHUnjwlJq45AdDi/Cl7dNVaG0xzmJrxuEK6Bp8EKhUo0cH6ImK5RKMIsl1jQ0rqLaLPG2RGRjaguY6l//m+ygQYMGDRo06Hetfkd2nQhRIlyLUCGCiAg54of3MXKy7ybcdVlJKSmKgi9/+ctst9sOBFhrOxgUYWcfEEY4F2EEhNjE6EiK7qUIMuNxRogQY0YjgMzzvItg7UOhl0V+9mFYkiQd+OjHt0YA1Fd/uxGUxH3Errr4uu8Xd9qHVvHYdp2h8XkQ4i7Pz8/5/Oc/z4cfftiN0WazQSnFj//4j3N0dMTFxUUHf6JbcG9vj4ODg2vRtHGfSqku8rIPHyOsifA3QuQ8zzk8PGR/f/8atGqahiRJuH///ktdjPE8+x2a8fF43v3ux76j8LOf/Syj0YjlcskHH3zQdQ7Gdbf7/P7P8TziXCqlOlddf51PJhOSJKEois49F19bVVU3jnfu3Onib/tu2f7+rbUsl0sePHjQAdCbN292rsfYVZmmKXfv3u3WdYy8dc4xn8870Bwf11p33ZqxKzGu+/V6zePHj3nllVe4vLykKAog9I7GeY5fNOjHDkcg6pwjTVPKsuyutf65xefsAt6+u3A+n7NerxFCsF6vWS6Xn
fMyRv/2ry+tdTf3aZp26253HmMEsPe+226WZd3x9t8/ttvt97i6I0iNDuk47zFC9o//8T/+0mt00KBBgwYNGjRo0O9fnYlDDvczqsdPePZkSaYV+9OUctuwKRzjSc7V6YJGJtQyx2+2KGvJJUz2FDduTLn5ygm1Uzx8tuZyXVEXFU3jqDclOY6jaU4qM06vSkoLo0wxzkKs5bwRrI2jqmuEBa0hV2BMAFu180wouXsyI8ly0lFCU9VsyorJwQGuqQDJbG+P5aP3kVoxne1x/rW3kFawbgwozdHJhPrsDNeIkIQJICXWmhYQxS8Ot18eFSL0Ksb7H+GhBY7CE/Ff7/6o99mCc9htyfLBA0Y39jHrVWu8k0CAVYkUnL73LuvHTxif3MbrFLzuwJW3W4Qeg9Cgx4yObqJHObYuEG10KN7SXC5IUoUpXOiqlMHJaFt/oXMeb0IUaaZBK8F4kpBlCpVAOk5RrcNRquBdRIb+whihKkVwPEqpCPDOIiVIpcPPbdehJLjthBB41375to1oxYf4VusM4NEqwTuLULL1QbZfGhVhH84FMOe9wPoAUqH9zEJYvBM0V8+5evO32f/UT7J34xb6j32Bp1/6x2yePCZtGprGY2uwzoX1JNo5UwKEa52BAmssohbI1QUqyfEyx+Pbnk+B8OHvEvDG4k0THJJS4r1AaoVMxwipw5zJAOK6jF7vwFvq9ZLl83POzlaMadAiRdD2R4blCApMU2Maj5Rtcq0XbC8ND/7O3+dThze484Wf5v0n73P5zS9y44/8Key33uSsPqOsDV5IrGmQQuKVxEuJVVBLhVQpTVWjtWCkJVJ4qsbgrUclCTrJEFKhlQzg0QEmrDPnDN5LmqpBAFmaIZM0OButQWJCnKmQeFuA97iqQKg5ea4wYoq0Fc5a6rqkqRtEmuKFQqqQJiRME1yw2RQhLPl0ipcKC0iVYpsa4RUWibEC5yqyNIE2ErZ2Hm8caSJIZ8fUmwLhC8bHt+DgR9l89wOKJ++gncFkF2SjGp3m6HSPqrHYyoQI10GDBg0aNGjQ7xv90PAxuqb6cKHvfuz3vMUP7JfLJWmadqAB6MBfhITWWr7xjW/w/Pnz7ue4jX7saAR1MWoxwpZdUNV3EEZI0YdjMQYy9uJNJhO2220HmvpwsQ9aosMxnntUBFP9iNYIffoQse+gjOCs//s4jvHxl7n9+vAt/r5/HLvuPe89dV3zxS9+kS984QuMx2OyLCPLMu7fv8+nPvUprLU8f/6cX/zFX+TBgwdorZnNZsxmsy72M8KfeHx9J2MEz0mSdDCo7xiNYxLhbj8WNwK10WjEnTt3unN5Gex9mduzv5/dcfHe85nPfIZPf/rTLJdLfvmXf7mLKp1Op9y5c4fT01OECB2A6/W6g2t9x1vsbowOwwjY+vMYHXZxzcQY2qIomM1m3LlzhzfeeIPpdEpZlh187YNc7z3b7ZYPP/yQ1WpFnudcXV1143pxccG3v/3tbm01TcN4PO4g+tHREUopvv71r1/rY+07daMTuSxLnj17xr1793jy5AmbzYZ33nmnizoGupjVfr9rXOfx/SCCuqIoro1JjDKNX0hIkqTrg9y9VmIHaPyiQuxh7LtS+3Gtu9A4nmO81voOxf41G59fFAWbzabrrdxsNkwmE87PzzvoGc8ldtRGgBnfs+L7z6BBgwYNGjRo0KBBu0rtkvW7Sx4+LfBSoRAsGs/hzT3Mtub0+QY9HmGBerVBCxDSceN4xK1X9pkd7nO5dTy7WHJxuaY2grKoSKqK46lmfzphuyh5vK7YGAdSkmehP/Dx2mG8I8Hhjed4llEaz7ao2ySRlBtHY27dmqEnGUmSs7xagNScnJywXa8YTaaYpubyySOUh6ZsKD98RqZOGJ3U+NUzRpmifHaOrQK0Cg5HiXWt884F96LFtz1+wdkHbQpqRysDuBStYy8+1gvYREiB946mqlg/OqO+WuFtg60M3nmkkDhTkY1z5osFD778dY4++SOk6RikxbfwDVuBykAlCJWSzG6T7x9SLefgLdJ73HZLfT7HNyE2NgDSNv2FEJ+KaUi0IMsUaSoZTTQ6BZ1qklSiEh3OB1A6ujnbzxCER6UKvAt8UYY4UalkqMNsxypUYwpAt32HbRxrdF56C0IipET7dvwBoRO8bA/bg3AuuCoB6RzOiQA5PXgT5kio1k2pFM5bzOKM5Vu/zd4bP8roxqvc+onPc5l+jfWj90lSh0kspgFjwhRa10bItlDRO4tzGlc73KZATZbBceoVtN/z7H9t2tka79JwTlaAkugsRyY5QurQGSkUXjQhStW3J+cF1WrL+vyCi9NLphq0DFGoyDaIVoQ5lFLhAGscQipE68BcPlzy0d/9O7z+X9zi3v/8F/jw//5fUzz+DoevfYJifk5ZBSSulULnOdvKsDaOpgHnBGm55Sh1TLMcKQV1YzGND7ARSNMEL1SAwoROTi8lrvE0RUlV1SjpyUdTnBfgLDLRSC1RcozSCqU1dZUEz6dOcMYgZEI6GeNKh2/AVBKZTVsHbejhVMqDGmGbitE4RadjEOBMQ6pDDG1VeqraUrkwRrqNQDZNgxXgsCQSnFdURUGxWqHqS8Y3jpCjJZdnH+KKksRYpDpHnSQkaUI6nqFFxmZxBc2QGDRo0KBBgwb9ftLvKHYVXoChCB77wCs6jfqPTyaTDjY657poyPjz5eUlX//616nruntNHzhFuBA/+I8Ritvt9lqU5veL6ez3uDVN00WFNk3TAbE8z685ICPw6cOkCDb60ZHxT601dfuNrt0Y2AgN+069/n99994ugIzb6zvt+tCvv+3dfcbXWWt55513mM/nXddjlmUd8Irg8R/9o39EmqYcHh52/XYRdO32P+6Oc5yb6AiLxxddd3E9RAAV4Vx0o/3ET/xE57SL290Fiv399selPxbAtXEqy5K3336br371qzx79qwDvgcHB7z++utdjOdoNOLhw4dUVcX+/j4Al5eXNE3DdrslSRI+85nPdMC036EYHcC7+4/r6hOf+AR3797FGMMHH3zA3bt3OTk5+R6IVpYlDx8+5Ozs7Brs2mw2lGWJlJL5fA7QOS3n8zl1XTMej/n4xz/O48ePuXnzZhclHJ2Y2+22cwju7e1RliVKKRaLRRd9HKNy49qP4DCCvLi+d9dYhIlaa0ajEcYYNpvNNRfkbudjkiTked6B8Pj+EB/vO6r7ay3Czv7v4777kDEC4fjYy4BlXJMHBwfUdc2dO3e6jtrVatUBXe89y+WSy8vLa/B819U8aNCgQYMGDRo0aBBAcXqFkbA3G+GFQGaSg6lmPl+xLT1inFOaClVX5FKhU83JyYh7dw4xQvNs5XjyfMFqWbApLfW2Jq0Lbh5OUMJy9XzFxdqxRZIowc39FJ1IrtYGUxsyBaORRmealRMgDOOx4tbNffbHKRS+zcAAAQAASURBVItlTWUciYPL8yuk0iTesrq8YHZ0g3q7DHUdkwnbqytyKdleWW594TM8/e9/hXQyYZxJ7LrCuwCf4v2ixePxWO8CHGudcS/u1QNshJgG
FNxovo3/FKLX1+eDd9K7EG8K0NQNTVUjZXTAtfuzDulAOMv7X/0an/mZP0EyPUCoELMaKFQT4iCFxssMPTlmeus+69MnuLoGDNXlOa6yWAfWidYtKEGBRuAbQyI9o0wxmiRkuULnApkIpBIkMhgrhQcUKAUIFVoLpQhcUQiESsEbtEpwonUEShBChnMTrSNUtk5I5yAJEDIMtUJJiXM2AEil2qrEthvSe7x1IQpVyAA7tQ7QkRYQC93euyqctXgvED50RJrNnPUHb9IUK6av/ijpH/wjPPeW9bMPSBJN0xhs7bEIGhNAoBQqgD8bP8cBVxv8ZoXM90EkeOcQyncRqsRjsaFXUGKRQqDzCSIdtY5HC8K0UbMtWHUGV685fetblJuSzdWWEyVQYYBbuBvALgJUmuCMDT2k1qEUCBkSfs6/9Zjsb/833P+L/ydu//x/zqP/9/+Fe3/8Jgf3P0nx3W9SIUhGCaXMuHCGwnpoDLktOZhI9vMMBxjn8W23J8KhkgShBM56rAswVAgDSlCXgqqs0GnCaDxux8YGx2QmQ2eqs2AsQnmSbIxwNUrrADB9gMYkOU3jccmIdDTBI7HFGm+2CAw6T8jHe+QH+yTjY0xToq3D1RXlpqSygm1R433DZG+GQNFUJR6JTgVaSWwTomGlXVIurzDrK5R8i/G64NN/8MfYbn6cs3e+jBIlOIOpS1Sd4WiQAmTyOwpfGzRo0KBBgwb9O6Yf+v/z97v7dqMn483E3t7etchUrXUHQuLP/a63zWbDb/3Wb7FYLK45mIBrgMMY0/0cQeJoNOqiHvvuwe5EW5gRgUOapiRJ0kVSVlXFdrsFAsyJXYh919TLHFbwAn7suhj70ZC7EDC6veK24vb6kLVpmms9hT8IrO66A/vH1QeSzjkeP37Mo0ePeOONNxiNRt18vPfee/zVv/pX+ft//+8zm83Y39/vIizjMffHPu5j9xj6j8U/dyNR4/wYYzrH42KxAOjiP2OUa/9c+5Axnk/UbhRsH9g+f/6ct956izfffLNbbxGGTqdTmqbh5OSEoih49OhRt//ohLx9+zYXFxes1+uud7Cu6y4itO/8jesqrrkI5eK60lpTVRXL5ZIbN25cm5v43KZpuLy8ZLlcdo7VOGYxLjZeQ/E1cf1fXV1x//79Lro0RrHG9WSMoSiKzsUZ52G9XnfnHeNx++s9Xk+x6zPGyMbn9+el3+HaP/64BiJQzLKMvb09RqNRt614Tcf3mV3XY1wLuzCy/+fuOoxzGf+++9z4c7+jcjabsV6vO3f1er3GWktZlt3z1+t1954zaNCgQYMGDRo0aNCuRKqpK0+F4GQqUd5xfllhVUKSeRYXGw5yzXiaM95LOT6ZsDfNuVgazoqGolyzXFZsSwNNw74tmUxz8HAxb7BOskGAgoORRHrP1aahqC1KQjZOUXt7ZF4gtmsyJbi5v4cQMN80HN05obENi1VNlk/AW5JUgW04f/Qho9EIleasNmtG4zHl+0+YTmbMv/QvMC7naH+ENjWN87g2RVVIgXW+7XsERwvAiP8+9whkcDzSQiH6CTcBQIa/h+coIZGJwhqD9ALrXQcnvYv3oeF/wv4d43HCw+98m7P33mNy+y4iyZHC4qUK0aquApWHDsJkzPjW6+T779CsF2BqqosFAoW1BhuSS5E6xKdiPXjDaKoZ72VkuQoJrolGqOByDLGq0bnoW5DokEqFuE8hUBEokoAUaClCJGnrrkQG4BlOTQAOlAzxq8jgmMQjlEK2Tr44hqLdFvFeqe2KtFZ0nZHOt/GlEoQL9+xKqACFncCrYCy05Zrm/AnN7Ijs6C4nP/oTmHKNWc0RQuJ1gL5KWFwsfMSD0njnwFmctbiiRDXb4DiVOjgghUdg8T40YAYnow0vTzXJaB+hsxacNgiRdPG83lu8M9iy4Onb32ZTlIjGMh6r0KspPAgZ8KYI8bb4MIcej3VhXqQM8ay2djz9rXfIjv8GN3/uf8/eT/40z/6HX+OVP/OfUF484XKx5KqwnNsVhVe42nCI4fYIprnEeYmxHussSZqiplnrTnU447C+dRNWdXAeOjDGkqQZOk1wXuKcIW0/S7BNQ12WaCFQGnQyQ6ftZwfW4F0A066psc4hlEZrAWmGrS0Ch3PgpESmCel0BiqjLtboUUp+fIvtakO1XOKqCiEaEuERtsD5FF9XOJVDXaGmU6yDZrXF15d4JHjJ6myJSh5z48d/imQiUO6zlBdPQYvQEelAaE0+Sdo5HjRo0KBBgwb9ftEPDR9PT087R1Ge598DC/pxqn04FR1VEVYURcF2uyXLMr7zne/w7W9/u9tHhJZ9l2HcBlwHikmSdO6uGKkaI1u7k9W6Awv910VIUhQFZVnivSfLMqy1HYCMLqe++zHqZcAjApl+1OguLOnDnf7fIxTbhZ/xdS+Da/F1fQj4sthV7z3n5+d85zvf4cd+7Me6uXjzzTf5y3/5L/O1r32N/f39zuHY7/KL24hzF91zcd99Z1ns/ovj3gc0fTgcf59lGbdu3UJKyWaz4dd//dc5PDzkc5/7HNPp9Nrr+rA2QjKgm+sI6Mqy7CI1o4vxp37qp67FZTrnePToUQdVkyThx3/8x7HW8vbbb1OWJVpr8jzn1q1b3bwcHx+z3W5Zr9csFovuWkjTFCFE14nYd82madq56WL34+44x3muqorJZMLNmze7MYrrN7oho5MxuhCj+zeuz6Iorrke47qM62Gz2fDs2TOOj4+5urpCCEFZlozH425M4+shQPk4ZvGY4vNi1OxoNMI51+07gt742nj9jMdjTk5OyPO8e7+I12IfOO5e5/31/LLf9a+xfixy/32pf529DGDGtZYkCaPRiP39fcqy5Pz8nIuLi86VXdd1ByIHDRo0aNCgQYMGDXqZlhvLUkk+eSPHrAueLwylEMzGElOUzCaaPE/JR4q7d/cZTSecLQ3nleXsYsmmsNS1QRUlxylIpbjcGoRwpEkCzjGVnnGqsc5xsWpQqWJ/miGVYO0VWnhEXXD/3gFHE82z0wXpaMzRXkZRFiA1o/EYU2/YPz6m2S5ZLbfsndzEeUttYW92wPrJM7gqqHxD6T3jaYZfF9SNwVrfgiyFax2IL/yNkhcWR3CetocQfAsRnXsBIq9/4bT94qtouwSNCFGsQvQckeFxR4zhFNSVYW9vwtWzNe/+sy9y50d/FJVNA2j0oezPOwcYpFB4oUn2b7F391U2zx/TzEuwwaGolAqAU0uUlHhjwFhGk4TJgSYdSSQelUhUIhFaIIQLkJLQNSiUal2cDqXb/kWlWiAZXH9ShDGTStGdvo8dke2YeYGQGi/CNoT3CBFidn177ymFDFDW2LAdKZBpGjo2fYgNxTm8D8fovWijYAnGSmvb/YC3HkfotLSbK4pH76DHe6QHh+y99glW734TVxbBFSpCl6ULu2l7HQXOCpwFvMM1FlGtId8LMNq3tlAAHMKD8KZD0ul4Dz3aR8q0jaD13ZryvsG7BryluHzOk2+/Q1kbUiBVAfhGaOtcGKsIwomrrY249bFT1HtMBU/
/6VfY//Rnufsn/2Peef/bLL7+P3DjJ/8jVv/kV9mIJOxgXbHvGz52mJBlGidSjBUYa1BKIFXsmdTYukF6jxSCumkwdUNlLEI6kjRHJynWW/AOJSRaBzBs6wpXV6AEOt8LEL8pEYA1NdZ5hNRYLzBNidQpSmmoK0S5wZkSpQXZZEYynuCVpi4LvLUg9yhWVzjvmdy4Sb0tkSK4JZu6xosqxBh7ibMNOk1J9/dYrbesHrzDOFM4L6lqy8XpFeZbX0JlE6RUJHszGtPWvwDTvQmNgaZY/2t6Zx00aNCgQYMG/bugHxo+fvjhhzjnWCwWnJyckGUZ8/m86wicTCZdXKfWunOC9fsA44f/1lq22y1f/OIXO/gX/4uwoO8S7Dv5+gBht4Mw9rjVvVLrCCH6QDPCjclkQlVV1HXdwaLoJoswtQ8oooMqqu/8i/Axurz6jsjdONFdN19/2xEwRbi6646MwCWOycuiSvu/i1Ghb731Vjfuv/Irv8Iv//Iv893vfpe9vb3uuPtOvAgcd11ocT6isw7oYFMEm2VZXgNWcSy01hweHnZdhf1tJ0nCZDLpokRjbKdzjizLGI/H3bjGHsL1OvxD1jnHcrmkLMvOcVfXNev1mtVqxWaz6aDkdDrllVde6Y4pnvP9+/d5+PAhl5eXTCYTjDGMRqPOpXd8fNzFlUbIGuFidFX250cI0T0WQVl83u4a8t4zm81QSvH+++9fc8lut1vee+89bt68SVVVSCm73sK+8zaeR+xtjGMf13+UlLLrnhRCdPC+aRqklB187V9vZVl28bPT6RTvfeckjvuIkDpeO6PRqHNb3r59m8PDw+55/fXdB49x7PrO4l2H7a4Lt7/ed53J/fjWvnN6d1txDiJMjvM6mUw4Pj7m2bNnPH78mKurq64vdve9YNCgQYMGDRo0aNAggHyacEN55o+vKG0wzKVC4raePFGkueJgf8yNkxnj40OeXa55/HzJ5XzLauswZc2+r5loidaKZDYiXVc0mxprPEpJ9kcp68qwLi1oyZ6UbKuKRCkSYfCF4ROfuodoSk6vCsY3DhjrhM12S5LlKKVpyjWz/RnNKsSsHt25h6/XVFuDFJp6s8Y9vmKsUralQWaa6SzFbyqq2oZ+RyFxeLwSrbPQBYDnCXDQ+14EJrxwPYrvuR8Wbc+h8x7vQieibeKr+jGrEVi2Dj4C4HHOM9pLSOeC73zxS/zYf/wnyA9vIpIUiQbaGFgfnJBIhUinTF95ndGH32H77ndREZEKgjsuUeHYnSOdJYynmmwcPieQSiC0QmuJECFGVIaCPJCgdOt29G2UKinI4OwUqr2fERKPCD2E7SmGfkIDtA7BeM/jLLhwjx66/egcdt5aUBKhNa33MTgdpQ7xq3hcmCkkogOj3oVOTCVFMB5KgZChDxHAWY9ZnrN676vsvf7vsXfrHvX5M+qLx0jjsEIgvQcRqjhs60iVSmCNwJkGZwyuKNGzGifyNvbVIETWzq0HV4NrUFlGPjtCZJO261EAqo3ddXhvwBl8U3Px/gfMn51S1Y6xkui25zEAzvBS17ochQgrSIrgHPXeYaxFSdHCV09x3vDwv/tlXv/f3eaVn/tPefj/+r8y+vhjbn76J7j49teZr2smVcFrRyn5KMFlOc5pMJZEC4TW+MZiAVeHGFunFN4W0DTU2wrnIZ+M0EojJGivSNIcIRzWGbwNMDRJE7RwaC2wdYl1Fi9ByhQnZAC2OkGmOdY4pGmg2iDw6GxEkiUk4wlC5yDBG4GQCU1lqF0J3pPJNIyNljiV4Jo6QPAkg7rBYvC25vjVj3Hj4A/w3fL/i7/8Diof48oKj6TclORCY+K6JFxXCIlUobeyWi//dbytDho0aNCgQYP+HdEPDR+Pjo44PDzk8ePHKKW4efMmi8Wi60e7vLy8BoWia0trzf3795nP50wmEw4PD3HO8ZWvfIXT09NrTjtrbeea67u/orMxRlzuwog+1IhOtaZprkGSfpRpH2amadpFWMZYyAjjIliJ++r/GeHcbhRt36XXd53Ffff/Hv+MAClJkg5w7XY/9oFM/8+++rAybjs6tr785S+zWCz40pe+xC/90i+xXq+7MepHmUbXZjyvCGXiufb/7EdWRvdhfy4jSM3znJOTE2az2UvnIm5jb2+P8XjczbMQgqqqWK1WnJ+fd5G9WmsODg66ONJ+VO0//af/lNVqdW0O4zo8OjrqoGd09DnnKMuSb3zjG3z44YcURdGt6T6gisc1m80AWC6X3flGsNoHadFFF9dr0zScnZ11r+87aWMP6Xw+75yIRVFcG8+yLCmKojvmCH778C/up98xuuuyjOvIWtsBQKBbC/3/IoDNsoztdts5ACOc7sP+OE5VVSGE6Ob84OCAPM+vgeT4hYT+tbXr9t2Fg1G7j8W1E9dwH2JOJpMOiMbtxLXV76CM4xmfE92Nq9WKpmnY398nTVPee+89ttvt9xzToEGDBg0aNGjQoEFRaVVxWVka4wMMlAK0JEk1Saa5e+eIG/fvYBrLB0+vOD9fcH6+YVVYfFFwKDxpAnqcIPOU86sNiVAc7Y+pijKAJSXRieLO/ojt1nC5qcjHGaM0BWF440duU2w21CgObxxSVRWX64LpeIqrCrxQ5Jlm8fSU8eEBeyc32S7O2VwtWVeOcZ4wKgvyqsFKhXGWHEl9ucY34L0KwK11zuk8wxmDqA0+lPIFR58MHY8vXI7Xo1Zf/Lu6f28bAKBvoZ9vkZKijTNtPZZtix6qBYW+NiRJzt6e4snpGd/9zX/O4cc/zijL8EIG4IYADLjQ+edlRnrwGoevfpKLL/5zRCBTKBQqCfCXxpBMU8Z7ijQTqES2Dkffut1aeEiIOUW2wEvG2FMZzlVKfHTmIQJYbDsKpYpuz+BglF4FUOkFzluEVCglEGmKs/bFvgRIBKHwUnR9mcL7dhQVQmm8MwitwnPbfk2PR0qNEyLMmRJIJ/DW9sbKAQ5XLGmuHpMc3WV65x6bckFTbFAOnBNUVY1OJNJJfGtuFNJjkXjrcFWJrLdYPUGKAEiDodF3YFlgSSdj0r2TF5GrhKhZEC18NXhnaLYLPvjy16jLinLbcEOG7lCpWkdo+0qPbztD+6lRrouldTZG43pM47l4+znjf/h3uftn/xLTH/sPOP+tf8qd/+V/we0nj3Dbp+T7YxAOR4K1GmMtaapI0jHWNDSuwRV12xOqoNngrcN6h9QCJROk1nhvwTYkSYaSFtPUOGuRMkEKj0o02DZJSimMAyFk9xmUtRZlDWo8RWU5zjahW1EphFYkWRacqSbc40raz4a8B9/WxyyuSJKMxnuaukQ7h0olo8mUQlTU2zllUVDNT8nTY7LphOW5RDtQaY5snZpIgXeCujZ44UAqzLqgqKoAUIcv7Q4aNGjQoEG/r/RDw8dXXnmF/f19bt++zWazIU1TRqMRTdMwmUw66JEkSQcqHj58yHw+B+Di4gJrLXfu3OkiLvvQK4Ky3Z636ByrqqoDaXmef1+nXwQLEYZFgBPdarvOM6BzakXXY/wHXVT/mPrHDHTH3FcEjx
EM9cFIHwz1QWU8DiEEdV2Tpum17sT+a3dB58tAZPx9HNfnz5/zla98hb/9t/925xDsA9J4fP1z3N1P/zkRGMVjjFG8cdyiMzLLMo6Pj6/Fc+4Crgi+ttstFxcXHBwcdO64GO05m82uQb5Pf/rTnJ2ddeuiLEt+/dd/ncvLy2uOy3jc4/GYLMu6TtL++lkul3z7299mu93ive86HSO4i+fXd/bNZjO22y3L5bKDdNHtG2NeJ5NJ17OYJAn7+/vXnJBxblarFU+fPuXq6oo8zzvgG12OEezHdRhhX7zm4mP9sew7UXeBXd8x2V/Hu2t7MpmQ5zl1XXfRrP1rHegAZDyOeAyxKzO6YCPAPTw85OjoiOPj4w42x2ON62rX+dhf3/2fd6F+XIt9sBhjniN07scFR4haVRWXl5cAlGXJYrEgSRLm83kHWa21nfs2nuegQYMGDRo0aNCgQbtaNo7ZwYhyWVFb8EowmWZMpjn3Xr3N4Y1DFouCDz4848mTS7Zbw2ZTIaua/VRhBcxu7rFZlSzPC5JEoZXCNoZEB1g0LyqU1lytKpRU5Ikm04LxRLJ3cMRyXUGWsDfKqLcl1kn2RhNMUzOaTKk2S2oPB/deRSnL6tlT5tvQ3Tc73EPZmuLdKzKhqS3oLGUySXCbGtv220klaawlv3+PG5/9JFfvfofVR89xtsHLgNici/fUouU9XbZo+3iAlCGtVPZiWyOYpHXKeTztvTCEbEvvaRsDEQRIuZxvODya8fTZGd/6jd/ijc//Ye7sHbSgLgsA0IPA4EXoIBTplL2P/TjHn/oDnK5+G9cYklQgtUS1x5zPNGkmUUqgdXBEisBxQsSqCOALJYIrrY0+FaI7VBDheAP0DMGjXsi2i7F1K4rwN6l1B9EkMsSHOod3BtmOq1BgTYPyErQG4fEuxNFKIfCiBXDOgQhxr76N/0xGI0IuqguVkknajrUBrRAOHL6bF9dUmMtnJKMZoxt3cOs5PHuAqSskjixrv5CtQ1+l8yBFiD2NANLXW9TY8SKg1RHCax0ChdCa0dFt5OQEZLg/9ITBDdG6oeuRpmT9+AHf+dLXUFIhbckoU6g2ltZDcOAS7y1b6A2hV3L3y+DEn8EZwZN/9nUm936dOz/1J3j3u99i9bV/wsf+0L+P/M1f47xusEJhHdSbLSQKqVK8kDgk1ksit3V1jW0aLAKdpoynGVVtaKoi9D1qjbcG62pMYxF4ZKbQiSZJUpqynd9sQqoD3Pc0OCTOWVxtgDlKaYQX6DxFZcHpKJMc00jMdoEWEqEEjXF46cIXAUSCs46mXJFkE2oP1lSIbcOyrFmtt1gaZJLw5N0HuO+8Ay4FlWFtAzJ0sWpS6qKgrB3GepzwOCcAS5bn5EmC0C8+Vxs0aNCgQYMG/d7XDw0fp9NpB8ImkwkA+/v7HWyKUKUfnZqmKdvtlqOjI4wxHBwcsLe3x9XVFRcXF9fg4feLEY3b6bsfjTHXIlF33YTRCRedhDFS8mUgo/+aPM/Zbrc0TXOtwxKudzS+zMkIdJGyu9vtR5rGx/qOrXhsMdKzKArquv6ezrrdCMrd/fePMf4ctVwu+et//a8zHo9Zr9fdY31nXB90xsciIOuPXQRqfYAV4VJZll3PYQQ0xphuXOPPsR8wAj3vfeeWe+ONN66Nbz/yNR5DP06zLEu+8pWv8LWvfa1zNO6C2Ri7qpRivV5z586dzs34/PnzDjzG6NKiKDrofXBw0PUoxjFWSjGdTkmSpHMA53lOmqYdqDs8PCTP864PczwedxC8r9gtGMeyqiqMMR3822w2lGXJZDLproF4DP0x7h9fHIf+OMW1HMd0N9K3/3MEqf1rNF5z0WE5mUyuQb+6rnny5Ann5+es1+vuWo1gut/DOJ1OOT4+5t69e9y7d49bt24xmUyuHeMPAo+74LT/c4Ta2+2WBw8eMBqNuq7JfrerEKGDdrlcdpBSSslyuWQ6nXYRvuv1mo8++gitNZPJpFsPgwb9btCHS8dr/+df+Vd67oP/6uf+DR/NoEGDBg0aNGicZ9SrBmsFTmlm+zm3bh9x57VbKK159GzFBx895/Gjc0zlcLVhYiuO9jJqDzLRnF1saJwgT4MTSkpJYT3ryjAdpTih2ZaWNEnIc83BrSn37x6zLWtKD0eHezhjWK0LppMpSVOy2WyZ7E0oyw35eMJoOqVYXVCvN2wKhxSSvf0JVimqR3NU4aitxOiE0UiiJdTuRbdjYywuURz8yKdRB1Om9++weX6OrwPwCg4zughMYrcjrTNOBVAl8DgfnGtCCpyxL5JNvA/9hVIRuiLbOpWw2Y5lKqEQGDbzDfuzY7JEcvrwCd/+jX/KwauvME2yFwBStDTQWUCBzEgO73PzD/8Um6eP2Dx6hEwkMtEoa5FakWWCJJNIER2NINq4UiTEGkelJaBemBAl4fgRCByif28fP58AvLdI1TriYsKsUF1kqseG5/swDhBApVIab1wXc4sIY9850rrI0RbKKUkyGQECKTQIiWjTgmjHW0qBEz44DtveRIHHFEvc9hLGx5BokjxDeIszDq/C/aeQIjgoPXgd7nu9a3DW48oCZbd4vde5NgPx9AipyKb75Mf3kckoxMWGwQuT7B3eNXjbYDaXfPef/WOuHj9lL0uRHlIVQK+SIVY17JjWGxuHNK69eH9LALY+vNZLMM7jV/DwH/wGn7j9Crf+1M9z+t/9P5h+5g8zvfMaxaP3qLygajxSKHSWIXQWMKoMLkhbe1xV4a3FNg0iSRAiwFw9yqDwKBxKKsqiwpstaTZCKBU6QL3AIakrg5QelTj0OIc0wVaeyjpQCTQlwglQEqklQicgNQ6HqyuEFSidtlC2wUuJsQ1Yi9Aeq9P/P3t/FmPZlp93Yr+11h7PEHNERg43M+881MgqFkWJpChKbEktyzIaFuSp0W4bRhvwi2G/GDAMWPBDwwb85LYfGjagbhiymlJrosSiKBZZRbLmqlvjvXXHvDlnZGTMZ9rjWssPa6+dO86NWxKuqBZV3B+QGXFO7GFNOxD7/Pb3/dGLgnyyRyAFQRijLWS1YF7CQBpsVTE7OnbXpBIEQdJcgwapAnRZk2en1Noiw4TKKozVhIFCIQgC5YBxr15/AnR3Yv7VG/Xq1atXr39jfWz42IVqXRjl3Y7d+mtpmmKtZWtrC3DQ4KWXXmpdW/fu3ePk5IQois7FF3bBmlLqnJsyjuPWReVhw08DAP4Y3rnlIc0yvFh2I/padT5isht72gV0ft8uQAQ+BHT8dl3wuAztutuEYdjWofSOsK7z0h+zGwv70wCk38dv70GsB6Ie9nSjKbuwqlv7EWi396rr+lxflFKtE67rwPPjZoxpXYhlWXJ2dta65LIsY29vj0996lPn+uDb4M8xGo1aIF2WJd///vf56le/ynw+P7eeuvKuS+8mnM1m7O7uMhgMKIqiBWu+1qQ/lo/97bpyu/PpwdrJyQl5nrdr9JlnnmF9ff0cULx//z4vvvhiOw9+buI4PjdHw+GQ2WzWwsXuHPgx9e5H7+jz67U7Vv7a+ag16SNHu9e1f
68oCobDIYPBgMePH7fwM4qitp5mkiRt/cOzszMODg64e/fuuTjfZVevf53nOffv3+fevXuMRiMuX77Mc889x/Xr19uoVh/LunztfZTTd3kb76aFp85XD2m9y/ns7IyiKEiShKqqWue0B+ldYAquBmdRFG18bq9evXr16tWrV69eXR2fVWgDaarY3hly+fIGK+vrnJ0seHyScffhEYf7Z5jKEJmSjVAyHqeko4iyqMkLQ6wkg7DxwwlLVlbk2qBCRVZbFlXV1I+UXL6xziCKOZ2XJIOInWFKXtWoIGZze8jkZAJhymAlYrHI2Ly0gxKWbDKhXhRM5jUqcHBsklVcvrpN9vYdSgNF7WDYIBlBVTrKpi3agBWWKEmY790Fs0U9ywjihFJUnuy4WoxNBKt3O1ocUHSORoPA319p0BIppIvs1E0tR+vORXMMD/ae3tc0Qa1CUJYaCyRRwKwo+fFXvsHNz3+GZ//sFiJUzumoIlfr0BoHRYXEBinDmz/HpT9zj73sX2DKHKlAVhBGijgNkGHjZFTuvlFI5byawiKUbIAkzXEVQgqEdPUQBSBUgJC2OWfjFGzqUFrt2i+EfArLRBOBKgQIBVisrUEGCKOxde1cjqFy7kYrkEJilHC1A5EY3TzoLJXbB4s1GoFqnJEO/JlaI5XCNFGoKoiwusZKEFagjMXqisXeHZLLAVGaYNMh1BVaaAyGUDk3nYUGuEqIA4x2MFQXFYEuIbDe/IgUBms1KlSMdp4hGG0jVIyVgYNc/r7P4vpYFywOH/P+628RCklZVERYIumANE83d6C4+epqjlrPOpt140ba2rZMJ43JltnDGY//6Pe5/rf+N5w88wmefOWfsPPX/jPmT+5RVwYrLWEUEERRe1IlQ2xVoIscWxlkGBAIIHAPbmM0gbREowEAZV5Q5lVT/1EQKImpC4SIqcoSoZrPcaqMiBQjoDQKIyOMyRAqxMgQKUCFMRBQLxYQSGSQgikJpMRaTW00dWUxxt0nqxjSNKGsLcVigowURoXU2l2X40FEYCx1VaE1aG0pspwwrEjiyM1nXiKkw7tWgMlKKgtBoAgGA4QwBElKWfTwsVevXr169frTpI8NH/f29kiShDAMmc/nLazyENDXyPP13JbjU/2/LMt48803W+DTjR/1Lkr/YX8XYkVR1EIRX/evC/G6MZsegvljedDlY1s9QOs6C7tOxeFwiFKKxWJBnuct8PFuse45l2vO+dfLLtBlLW/XdUB6AOmdev5ny6B2GWqCg4NxHLeA0UMuX7POuyuBc65D36YuaOnCw+45ujGwHgZ3HZQAZVm25xBCtADPu9K8i8xHcZ6enrYA8oc//OE5x56fVz8Hf/kv/2WMMbz55ps8fvyYr3/960yn0w+Bx2WHarcPs9mM/f19dnd3kVIyHo+p67qtYWiMYbFYtHDS99HXA+y68JRSrK+vt3P2zDPP8IlPfOJcfVKANE0RQrTA1a8PD/f8vPt17UGZdxB2naNdd7G/5qIoal19aZq2UNKvW7+ell2Cfu36uffbx3HcXnc+7ng5ElUpxWQyYTKZnLuW/bG716bfL01Tdnd321qeJycnTCYTPvjgA3Z2dnjppZd48cUX2dzcbCOWu3PXbXt3frvXloeuHoR6B/Lp6SmLxYIsy1BKtePt62762o/+Xzeq1bffGNPGSffq1atXr169evXqdU5SMBgE3Ly5ydbOKkIqDidzHu2dcOfBCYtFjagrthPLQEmyHOYVVLVEWkGoBGEoiUYJVDVHJzlBFJLEIZUVREnECgE7V9bY3FljcjZnXpQMV0YEUnE6WTBaW2e8Oubw8UMG62OoK2aznO3LO1ijKYscg2CaFYyGA6y0ZCZi95krZLffx5xlVNpggEEoqGcLTG3Qpqm2KNzXuq4oD0+gKtB5jjQu5rKGBqY1bjzvQGugo7C+Jh88rdDXONAaSBRIV1NOCImxPjbz6TC7Y7v3nJHOUtdgTMDaaszprOTo8RE//K3fZevmTVbjBESAlBFCBo3z0cFDCBCDHTY+9ecpjp8wffsHUCwQkSUdRahIIgLn0pRKNnUITePgE0twUbp/8ukDxrJx+gkVuIhYq10brMaKACEDjKlR0rn9RFOrUAYSqzVG484rhIt6FYGDaLbC1s795oBqDVI1Q9rUQfRRs1KCdu2zWIQKQGuM0ajA7S9V4xXUFUK4mpFW18hmH6sr6sUZMopQgUQHEiU0QtM4FRtfrAwwOCdiGCu0rkFbKDJEbLBCIoVtnZbp+jbJznOIaNiAVgCFoLm/NBXokjqfcO8H3+Pe2x8wDiVnk5xUCZRwdTWNh97N+hC2qYlpbQMZmxqP/t60OZPVFqtc7VCtLWjB4XffY/OV3+fqr/8V3v87/w/0/pusvvw56p98gyIaoKLYjVWZkaysk8/n6Kp2scOu5QRh7NynSjhXJmDrgtoaSm0d4A6Uu56auqi6LAjiZr0IB13z6QQdROSLChVYgsEYqyu0qZv6pIqyqskWBUEUEqUapSS1tuiqokZidYVqPkMT1kI2RVYZQkG2mJKVDtSmgUQFChk6mCytgDDAlCV5kx5UlDWnC/eg7jBNqLEEQeRgp7CYuqZcLMiDEHPBZ2G9evXq1atXr59dfWz4mGUZdV2fAyQeQFZVxenpafshv4/d9B/ae2AphODHP/4x+/v752rvdR1Sy3GZ3QjNKIpaJ1hZlu15uuDhIiDhgV4XknUBYndfv48HeFmWtfApiqJzUHN5/y7UWa6bKIRo4VsXil3k/AyCgDzPKYqidWJ1HY1dt1oXfIZh2EZh+pjL7rH9uPn3u85Nf15fV7ELQ/1XH/np++kBp3+9HFfrnXhFUQCcg8t+LPyxR6NR255bt261QK07tz5K96233mI8HvPgwQN+9KMfMZ1Oz7lUl8fzIvk4zUePHpGmKRsbG21dx8ePH3N6ekpd18zn83asyrLk8PCwbYfvq69NGUURQgiuX7/ewq8u+EzTFK0177//Pnfu3MEYQ5IkpGnKcDhsQVcXPI/H49YZuVgsSNO07UMcxy1YD4KAzc1NZrMZeZ6zu7tLnufs7e1duMb9taC1bte6H2N/rfnrxp9/Pp8zm81a6Gmt5ejoiEePHrURsZubmxwcHLS/G5bnwddXHY1G7QMGfp1XVcX+/j4nJye89957vPTSSzz//PNcunSpbWMXQvp++LnvQvplZ7OPYd3f32/bFQQBSZK0Eay+rX77rjPUXzPd93r16tWrV69evXr1Wtbmesru9XV2dzeYTXNOZgsODifs780oc81KYFhLJYGCykgKDNNS8KkXXoZH75PPCwarQxZ5SVlZ4lHM4dwgQxiOY3avrLC5lhCIgCdPJmgBm5urVNqSFTXj1TWKxYzDfMHmlRsUixOsUuyurZHN58goJU1XEZMjLl+/hhEBebaArOTWD35E+vCYUQW1FahAksQSSo21oE0Tx+n8TpiiYnF0Qn56hlQSayymNm3NQR+z6v6Gb1yB4JI0HT1z1RrF00hM21gbdVPxUDTbCSkbx1zzuQENWBIQSIEUYGrDw4dH
vPjpq+wfLZjONe98+wfc/Mw3+MTGJokKsSKE2NV7tKZ6CjSFIth4jq3P/wXs4oTF7feIAkmQSIRqajpa+zR6VSgHiPxD18IBQqRCNG10tRYddHX2MG+xa4JjZQMxlUQa70psIlKFRcgQa0GJJobV1lDVTU1NwEpXyxHnxLTWOFArXaQtUjx1eCIQQeAiTLVFGItQCqVkU3vTuGNUNVIoRHNfiFJYCzKMXG3GKnNdaz5z0EajIoWpDYLAAWPpanEiAoglutRYrbFljsS0blWLJUjHjK69TDDaQciwyehtyBsWTIXVBbrOyI4f8+Yffpcyr4hXBhSVYT2SSOkBtmznUwgHisHFhLYA3DqXqVubHni7ZWUa+6M1kE0M9//lH/Dyf/Yy65/9Jfa/8hWu/M//98xvvYFQhgqBKXNCCWU2w/dIhRFGN85YKTG2RlmLEgJjBUVRgYQwUERpglJNvK0QWBFg6sq1R1uCZq1kRY3QAmyNMQolLDKKsKXGIlksKrSxECZI5SJqtRVUtaAqKhCCMFCEaRMRi2jqh4JIR0xOp4i6YjhI3QPXi5wwCIlCB8aLPEcYg9A1tipAKcLhiPkspzybEMUx0SBgMIiQKnSwWhvmk7OnhLdXr169evXq9adCHxs++ijQrgPvIpDmYYKvmbZYLNja2iLLMh49esTXvva1Nk71olptHpJ1oWQXyvm6jB4OdAGb1zK47Ea0erjgQRhwYaypd0wqpVoQ6Nvgax767ZbHQWv9oVqN/usyXF1uM9C6zzxo7bav28busXxNwa77bPmY3VjRbvu11i1A67ohPWTpRq0uOxKXYU93/rrz6NvUdXL6qNMuPPJt7dYI9O/5GNRu3b3FYnEO9F4EHC8CRV0otbKywvXr17ly5UobzzuZTM45DpfdcH5t+GP7MfHvLwNurTWHh4cMBoO2vqRv72g0OreuR6NRexxf89Efx4NdKSUrKysthI+iqK1r6cHf1tYWBwcH7b5+PLtrpAtRu+vBz/twOGR1dZV33323rUfqAWmSJOzv7zOdTltXp3cq+uP7MbbWtgDRuzJ9bdbBYNA6co0xrZvy29/+Nrdv3+a1117jxRdfZG1trXWTegju42erqqIoCuI4Jk1TgiBof2dZaxmPx+06DYKgrYnp10F3rrr1MrvXjB8n7zzt1atXr169evXq1WtZr376OsO1IaeHp0wXNU8eHvHoyZx5VpGiGQqFtQHGSs7mFZWS7CSSydtvEEcBMgg4OcuQwv2tPjeKKK2JhWZnO+XG1XUe3nvC2axmkMasrAyoS42KQlZWx0wmpwRxxOrKCmeP75OMV5EBLKZn1CWYYoFZ7Lt0E2NZ292lOqvIZhNWgaA21FpQG0hjRZomFMXCwQ2gts29nYBK1y4q1ZfwMwLdgDfpMi+xpnNvap/CHieBkC4W0zauNQ+FGtvX04Ft3Gut3dE7J60lUK4eI8KSzXKEiti9vMLsgyMW84Lv/tbvs/3yC1z5TEooBUKFECQgQ9Bl0xQJKiG9+mk2P3eKnc8Qi8eoMGhhnov2dI5HISUW0wDHpl2NA08gQUoXbSoaEGiMz1Nt3I9ujKwR0ES02rpxhwrw9S2d49M0TlIDgWzHQdgmM1SAtTVCBs19jQFb4/Csck7Lplafta62ozEaiYvlFMim5mRzjyo99BUOSvrbaWlBF4gwdi5PBSoKXSSqNE0sr8AajVIRVrrI1iAIHGS2FmkrLAFCKIRQjK8+T7LzvJsPodw8NO5Ha7Sr9Vjn1LMj7r3+Pd79wXuM44AyrxDGEkrrPJKiYbk4B6NsAK1p+iuaMfWrTwh1bk5sE+krlMBo5849u3vK49//Iju/+rd498ffIf/JV1n51K9w+v0vokOL0Zqq0piiIh6NUEGEFBobCISSWATKCqJAUNeaPHewOwhV+1mCUAqhJFjdtMuNnwhCjKmoipLaSKy2qCAgHI6RGKIkQsQBVe4elA/isHk431KVNUVZUZYahQBj0GWFkhYZpe6aNBqVDAi0QsVjlJigywIRhmAkdVGidU2NxNiAQAEohJSEYUgaxCTDMSeHJw6Y6opIpWg01hhqqxBaYvv75l69evXq1etPlT42fFyON+26gTyQ6UKzjY0NwNV2G41GSCn55je/ycnJSXsM76Tsxip6x5d/3zu0/Pm78ave9eTP2wUeHlZ5ddvYBWtdF5/vX9dF52NYgyAgy7JzLsHu/l0Xoh+b7jF8G5YBZBcKdR1VYRi20PMiN5c/dpIkrdux61L0beoe04/rYrFoIVZ3Pz++fj8PDT0A9YDKw0v/tQvmfBxuVx72dl2cXQjq3ZFRFDEcDluY5t2qXh6+ra2tsba21saBLp/PQzAPJ7s1SpcVxzE7Oztsb2/zyU9+krIsOTo64vbt223Madch2nV5dufF18/c3NwkDEOyLDsHYL2LcD6ftxDNt8fXIxwMBq2z0PfVr/c8zxmPx0gpiaLoXI1CKWUbRezn2R9jY2OD2WzWts9fAx6ie1Dp563r1PSxpPP5vJ0H326/RmazGcPhsHVQhmFImqbtmvfnXVlZaeGjX2P+e19r0l9XKysrCCFYLBY8efKE/f19bt++zac//Wlu3LhBmqbtwwc+3rd7XfjfLcfHx63jemdnp93GX+/LDwX4Pvu4Yl9zUynF888/T5ZlrKysMJlMGI/HH1pLvXr16tWrV69evXoNBgmP7z1hOil4vHfKk+MF1mqurUQERGSFJtOCIq+JBiHbiaIuS5QIyDVUZUk6iCmtQaYJKYa1OGJrc510GPLoziHzwrC6MiQMFVlWsr69g0xiJqcnDFZW0EXO8d4eG5cvo+sMXRryvKZalChh0UZTVDWj9U2OH++jDWzs7HB278dIIyiMg2NxrMgmM1eT0Dp4I4zBNvdGWJfyqY1pXGcN2JEOwlnhAKEHX3ho1hgBG+MiRpvmRfNwqy++10ayPq3V5yWbHFaLdeCpcdNRa/buH/HM81d4+OiMRaZ59MFdvvtP/jm/trXByvWXCGToajeq2EWfGgNoB72iNYbP/SI2XzB/43ehmKIECGU795QSESjXNhmA8DGqFpREdECSA0uiiUO1DlDKBsgIB26tce/LOHKbWI23jAmJA1HWgGySn1omW2NN024rXYKs8BAXqF3EKUKCjx9tolfBPyDs7rOl6PRPWDCufqNoomCxBmENtqowYubGPwqxZe3iTgVN/q1EV6WbF+VAuwhKbAm2qhGmBBG7B3c3LzG6+WlkuoGQEUgXWesWiAVTQ11gqjnTvft894tfZrGYszVOWCxKYiGIlHN/CuGCfN3Dtg2gbgAqT1dRu+awzRr0axbroGNLxwWmhIPvv8f65+6z8YVf4ejrX+Ta//avMXnzm8hqAkZgjcSWObW0yGSIxqCEczMGgxgpBLpYkJc1VVWihECGzVwg0LpCydDV2zS1w6DWgAgpjKSuK6yQGK0J45h0kLjtJCAT6nmFqXJUqNy5hKTUNaW2GCS6rpCBJCCiWOQEWqM1BHGMkBH1bEIaGiwJRhsqXbqIXxkxyyu0qYgCt460dVG8tXW
flaTlokLE2LcNJDMSuhA48Oh7V4cOIAJ7D4uocg2Jiv07Q7GOsdahZBaxytgwaHNh5IhpqP4WfXoTufNpP+Wkf1/HwNSOlAARrYNVBg+WiRci6TZIIuXWsHMq0gV6CtoK0rdl54icspbEnJoYOFgQPj0DjqDna5rj7jsjFo6a/srGUqfAywAiUgQaKAxHmAppzrvKdHqVH9N+PBrwHReiCLCD7Vo0MNtnMCHrHLkDY11MJMU8WsSEAqDrRl78ECLRRZ4tPUCiSJEuRKsLGWcuXKOZ546jxV2bC3V/LWjQdgYPdgB2Ed2oKhcyvmOetntyARVGWFdI62SGCaMxGOJy+cw26eZvmlr2J1jVQKbVrfr87xqZDd2vCOUkOXdasDrMaFey0+rkIKBAJtDCpRfs6MPXbPQoo+a2rn5vWT7N/v7jH1Tkm/H066Y3xeYf+zyhK/pqzFSenfTCStkewvVjQoZnlKImoanXT7aonBUVqBFA69WHJnv6ZIE6aTDKMNVkNjNeu5JEsSlqsSYw2zPEEgSBKFFBYpVPcAgAA0WZL4z5zWHCxbSg1pnjLHgTZYo1lpByrxDyMYR5EJklzhVE5T1TgpEVgmiWCndBSTBJGmfu6qhqJIWT+9SbVqaKwhERbX+ocRKq1ptSMbn9kdNWrUqFGjfqj0XcHHk1yGwLGb+Se5B387aQpPglHDfsTuxxjSxekRHwVG4SgtZQy4wvlDMBpDqyH4PKnu2xCeBpA1BJNDgBigTjg/Bk1DEBnDyTzPj9VDXCwWfa1AgI2NDfI8Z3d3l93dXQ4ODvrYBWdfgDYB8oRUlScpnDt0m8Xjj510of04HeoQLp7kUI3HHVJlPmpO43gEsDZsL253e3ubz3/+83z6058+BoNjQBach+H3IYgMUAqOagzGoCweXwy4w3jjdL0xJIvTmVZV1a+DuK5haMtay3K5/LbrxZ/LAG7DOhwC7uBQbJqmj1tRFL1DMbQJHpgWRcGdO3f6eo7xXAUwGH/Faz7UnAzjiIF7PKZ4DcWO5JACNXYeBshaFEUPbOOHHYwx/TyGfzvCfAXAHte5DDGKQXX4fMXrIQbJo0aNGjVq1KhRo0YFteWKrQuX2bxwls3dPdzbd9lfrFiuKhyCBzuHLPaWnMsgrw1bs4xnnruCblrK2jFbm3C485CLH/gwu7dv8PDWHaqDBafPbkOmuHjpEvXBPovdfRojKMsDFILT585Tty3LwwPma3NENuX6qy8hpaKqNAd7B5y7dJFWpty/dcDSSM4gUc5RrxrWN9eRDszedRa37rO1cY7ppcuITCLqDrxYR9Nq7wQTEon1AMbavvbd0Z7Zwxsh/JcTClKBKnLSmWJmYX93icOitf87HCmwXS1Ia1wPMI0NyUZh/3AfqRRta3xNR2fR1qdbNa6DlpYeUhknfBm+rvKfIxjcPHUSHRCEzvAmvANRO8+VzinBE1PJPPMgy7luf+QETkFqHVp40JbjmCjJppWstGO/New52DWSAxwl0Pqo4ARIK/yYHdTO9zCUXUwcpAgPIa1P2Zp0/ZWizy7K8bsnIVVocPB16UGJQDA+zawHj742pgOyBNZmOUnS1bsXjrLRpFIym6TMpxnz6YQ8TygmGUmRMF2bIVD8xJ/8E/zmP/w1zj/1LL/yX/0DD3LpruEcclogphMeLmsWjSZJUlSicZVFCTDGkuUt2hg2PvAEyxdfpm49bJWyK7kjfBpe6XwNT+t8itwAkYXw8yk7Z6iN3I2iA+C2W7fBlSuP3V8QPo2w8AtHOBDCdWOwvUfW73+75S6I9vKGdJLhbHBeKlp8vcXZJIfWIpxFNxZSQV3WZFmCdYJiMsPUS1SRM1tqFJZEa6RwmERQoBDGsqwqrLNIC9poHAnOOrJEUteasrZkCopMeiewtlSNY9U2ZGlCrhzTBBph0c4hpaUyghyYZr4uZF37UieTLKV0Aq3BakE+SVFpgjaOFsfa1jqpcEhdg7bMM4FSCW0Ly8UKmSRMJgnT5J3v740aNWrUqFGjfn/pu6r2/Cjwc9J7MSiLQeQ7QcF3c80AEEL6xdhlN7xWfM34Kz4vnBuDp/h7fN2T+n8SLA19iZ14oY3YwRYDoLivw7EPxxJ+PnXqFGfOnOH69etUVUVd16xWqx4EHRwc9PCxqqoetNR1fcylFxygWZZ9W1yHYwzxj1N0ngRWA2AD+lShAQ4FiBRgXjyuIZQMAPOkuAxjH0Pik95L05SLFy/yR//oH+Xpp5/u3XrxnAxhqBBHKViDyy5OKRrXAAzxDLAxxChOiRrmpizLb0trG6B67FCM21NK9bEI6URDe+H3cL0sy/r4h3aH0Ds4+gJEDU7CAHyTJCHLst41GJyQeZ4fiwnQ15cM1wpwP06RWlVVv7bi2otw5M6MQXs4TmtNmqbHQGFVVf3cBWdmDBGDO7Vpmh4cDtOphtSuYQyx8zOsl9glHeYk/syOGjVq1KhRo0aNGhVrbesUrTNMN89z5olnKO0/5d5XX2F1sKK0Ccv9klmtWd9aZ/1MwqXHz2PNitZ5cKlnM7afeJLrr77MxqnTbJ45w839FWkx58yFLe6++Tq2qRDJBOccs9mUrJhQ1iW2adg+f4GDvV2qu7fYPLXNvWvXsGXFqVObPHywy527+xwua5wTlEYzFQ6rDdWiIRMt5uHbLN64yqnPXKY4c5FsnlMf1pgjdBelpewcYAKsiWAPAf6AsQ4nLImxzGdr1MsV1cIhnaBartDGO9OEEBht/PFIX/OR4Ibs9gg4XOtr0BkLxllaHK0VaOuw1mFMB72ET7vqeyywoksJi4dalrDv8/MmHb6uII4ESYvlnIBPzFO2C0kqRU/7rOtScgrvWFMdGAypYbWUFEIwFYq5tMxay0MD+0ayFI7GOYwAJQUqlTTGUbVQKNGDQY2vb5nhb+IoCwaBFHTpYTvQBn3MuySrPmbCg9Je7ghUGufrX3r3oKNIFWdP5WyfmrN9Zs5kmvPUh55luap545VrCAfra1OMtdy9u8PBcsnETtjdX3K4mPHq//0/5dOf+wk++/N/kYPdh/zjv/drHpIpxebmBJnnPCwb9huNwbFsNc+cP8WZ9SlVa9FtSds0OBRWKeRsSnuwYpIpjLW9i9Pv2QVSeSgpEAg/EX4vF1LSSl8nMjBmCPdwvF1WCtnVquwsuXTpdEXXhvBuVBfuCSAJwRSeiCJSgbAOpz2UFwhEpkB3GX2KjKqsmCBptGZ9kuOkoixb3LLECcnesoI0Q9dLZlNBWVsmkxRpDVZbmrJmOs2pnWNRVj5tLP5emGksWmmEVrQKjDYkUjKZpKRZwnLVYAzsVxppIUNjraM0vm6rkN0Dzg5c2yKQLErDJBXMEunT9lrLftuQZSmZFDRao41mvrFGPslQ1tE0NVMFiRSILEPrkkQ6lLJsTjMSNWYMGjVq1KhRo36Y9J7hYwzdYmAUA42hO2/4Wvj9u1VwSJ0Er4D+veE1YzddXH8xOPhiJ1Y4Pv45dmYFqDLs19DFF6d0PAnoDeMaHxs7+oay1lJVFTdv3uyhT9M0bG9vs7a2xltv
vcViseD111/vwVZwK8YOu+ACjeslDvsSj29tbY3t7W1eeeUVVqvVsdgMQWrsQAsxDi694RqKQW3sRIzrIQ77dtLPsYMtfq0oCj7xiU/w+c9/nu3t7R6QxXBsCB+TJGE2m/VgLBwXxhWgWEh3Gn4OaUEDLIw/AwHABjdjcBzGsC3ue7heHLvQdhhnqCkZ4Fye56Rp2rcfpz8OUDKAwRD7PM/7+Q6xiT8jIU1sAKPxHIXPUNu2pGlKnuc9OI4/B+G64eGBGJqGtgJojSFpXMc0zFld1z2QjD+bca3SMA/T6bTvS9u2fRsh3W0MTZMk6SFpHI/w/ggdR40aNWrUqFGjRr2Tlq1jtraB05Yv/P1/yGuvvgnGIWxCef8hp1LFdKNga2vGmVNTbFvTGsnmqS0aaylXC25dq7j45DPcePUF7t24zfbFx9g8d5pbV18jSTKKtYK2rkmLKSIvKA8XOCeYbp3hYGcHrP+7+u7LL5FMJlx88gleee1tDvcXTIqcndJy6/CAC4VilgBOUJcN6TxF2gU3/vHf5fEf/9dINy8w297k8PZt7/qj298KujSoHsz07joXfGjghPQwEMCBbhr0vQPSzDvHlEpBSKzzf+db52s8ui6NqweOogePASBZ59Ddd+Mc2gkqa2jdEaS0iO594V1y3e/gGZITR+25Dlg5AbIbSCIcUwefnkreN0/IU+XTwnb5SZXzkLLjXH5v5XwsjHO01pHg+lSpGYKJEKS4vq6gU5LGOdrGoDssqLsal66z6TXO1+ZMJWRIfLZP78oLuxJv+AuQLJLw8xo9wguRC9QCwjmkFJzeSJkUKdMiJUkTPvnTf4BP/bH/Ea44TdNUNFWDbh2rvQfceOFXePj2m9y6dZ/rb93hrbf3WT3Y49M/9mnm24/xqT/6x3jpay9DKqhUyrUHS/armnmWopRgr2zZXTZ89c17PHZ6jfMbUzYmKUljaW3L3v4BH9w+g95/i7LW5Ino0836vbrr14YUR8BRipBmlaM16BwSDxqFFL4+JnTQ0Tsbe3zbveahtPMT28cygPcuQ49wGO26+psOkUlUItFl5Q9Fkp+7QPnGNVb7S7ZPzTBSoA0Y488zbUtZWfT+HvP5lMXSMV2bgFAkWnOwvwQrMK3FtAYlJRjtgbEDoSRKCJLEsaw0qdRsbEzIJlNWZcvKwH5lyQScnkmmqWK/1d6yKaR3QqaC9Syh0T6tcZoo0kRinGW5XCGUYG2asmoNiVAIlVCkCVhL2zS0xjLBoqQHmtY5EinJMsXavCDDkRaT9/YP6ahRo0aNGjXqB1K/Y2lXY4dbeP8kgDOEj4+CaXH7J7kJh8cGlxZwDAqdlAI27mvcdgAmMcAY1o6M+x2udVJqz+E4hpAzjln4eRiLoRszgJGTXJjBfaiU6iHT3t4eWZaxXC45PDzsjwlf4J1mRVH0Dr4AWYYO1uFchvE+99xzfP7zn+fFF1/kl37pl7h169axMQ7da8M+h2vFbrMQ1+EY45qS4byTnLBD2DsEyKdPn+Zzn/scn/jEJ5hOpz24Cs7CeKzxPMXfpZS9ey44RdM07V2BRVH0MQ7ttm17LG1q7ORzzvUO1NDn0KcA+WLgGByKYU7Dz6GdcE5w+QXIFuY+gDvgGAS01jKZTE5MORrAZQCTAUjG4Dq4R8P74ZphrYU0p6HtkCY4rLs4vWr8MEGIYxhniHVwIIYYhPiH92OQGOYtdunGDyBMJpN+TgLIDPE/6eGE0E5RFP1aHjVq1KhRo0aNGjUqlmkrXvz1X+fqK9eoG8Pa9ikO7t/jzDTjg89fRACT2RQtUtbOnmbv/n1smmCFpZhO2dvdRyQF3/jVf0SSpHzkU59E5ilvvvQtzpy/QLm/h25bZutbiETSNjXIhKyYUS6XzDZOsdi9z/2bd9jYPoPIC1761lWcg8euXOCt2w8pqxXOGUoLFkWqBMYahEhRqcK8/jVTsX9rAAEAAElEQVTag4Zk/Rzrj53l3jdvI03nHOzcgf0DlP3WS/QA0glfD7FLaur/xhbQli1VJUgSibWlj5cOD2t6iGk6YOmcd+hBB4uERCpBrU3wX9J0gK61Au18bUdDAGyuc7AJEB5Whq2kQ0SZSbv9e3fNkNL0/ZnkoxspG7PEQyktENKn7MR1qTk7wOchZPfdCSQ+HaxMwjV8mw2OVQvb507z4Q+f4x9/8XXu7Ddo4WFZEqCiC3sih8ankLXCkTuB9LbHHjjSzQk4hPPpXMNYBM4720SXMpbgBD3a508ySVEkZHnCa9d3mdxfcOrSGzzxkdc59cwpJqeeZprnGG2oX7vO4eq3ePX1e7z+xjXu3zug3DlkPRO89a1v8PqXvsDLr7zN9uWznD0zJ5tM2P/CK+yXNXurmo1JxqX1CQ9XNavWcuP+Pvd3Dthem5FIQd02NHWL22r58OkNzINdtPa1EbVxKKyHcHgX3iSVCCk9eOxAbyho2adUFT5la4CvIa4CgRLSOxkB62zv4g2r2YVfwz0LpG/GdpPaHSdROONI8hRnwBnL29euUmnHWpqwv2xJEuvrJaaS6XyNw90DJmpFm6csVzXZdMJyb8H5y+epygqlJI1o0Ra0sVhjoHMRN60f02yS0NSWXEJeJKRZSmMMe8uK/aVmmjhm0jEtJMta44SibDRnN1N0Y1AC6rZBqhSJQCmLSHOcE1htUVlK29SsTQoaY8nTFK2NB6JWM0sSbNOgrUZmivn6nLpp2VyboFsNecJqLFcyatSoUaNG/VDpdzztagwrhi7Id/oe/zwEX4+61rAvAU7EkOxRYBM4BgwCVAgQZQi5hqAyBlQBmJzU7+HxQ8djDLlicBZ/D/DmUbEK4Mg5x+XLl9nY2OBrX/saxhju37+P1rp3OcbXSpKEPM+POd6GoHAIAePj1tfX+ehHP0pRFHzsYx/jiSee4Dd+4zf4jd/4DQ4PD4/FZ5jGNvRhCG0C+A0g61HuxUfFeQhmY5CslOLixYv8iT/xJ/jABz7Qu0Wdc73jcJjuM243BodA7y4MzrmmaVitVqRpStu2PcQry7IHZnFtwLheZqhVmOd5DzMDqAuuyjC2UIcT6Psd4F1IxxpDzbA2goMvgM/Q7hDOhuND6tQwB3F63PB+WA+h3bIs+7UVjguOxhiihs/CcrnEWsvGxsaxmpZhfpfLZT/m4Fhtmoaqqo6lVY0dxSFWwREZnJpVVfVxDesifnDgJIAYP6QQjgsOybBevtPDEaNGjRo1atSoUaN+ePXqb/4WSZrwoWcu0a5W3Lmzyyc/+BjzNYXKZ0hpSYsNpJLs3n6bsjJsnr+EcJb7b17n3JWL7N5/gExzPvoTP86N115i5+4DHrv8GPeuXSVNck5feQpdLmhXC/LJDI2mPdwjUYJ7b7+NrpZcft9TrJZLbly9znw2JZlm3Nk55GDZMM1z7FyzaFoMkjSRWCxWKJRy6IPb7LzyCmc/tM3kymVU+nUaAyiJsA4pFKZzLAoZ9sk+zad1FotAKokQYLVFKAnWesBjHLoN2TE9VALZ13U0nYPRdq40YxwWh7EGZ46ckNZ5KKexGBytcWh
fJRBCklhf2BAIvjXX/er7KxDeBen8tSS+3uMpKfjMesLptYSLz15g58Yuq8MS4a2HOKuQskst6zxiVVJB1y+Ew0kPCFMlsM6grSNXgotrU5777NPc3q/YKQ2uc2I6PGzt60p28AvHETAUkFkP1ujO8d0Pg3Sdc9KfZwNzi4LgRHefApgmkiJV3NmpsLsV1glWxvEr/+QFbt/5K7zvw89w6clnSYp1bl2/xTe//CWuv3WdShtu3t/DHizZnORceOI0D8qa//Tf/8vYtiVJBDfv7PGBZ2b8zI9/iF/58su8cO0BD5YrJlIxTRRJ4lPlpk5wsL9CSO9ElAjeuLvHcz/2Iyj9KmZ/D4fAOOc9jAJct8aMhT7hlThKAUwHk5XywDq4dK3xC0JK2cfVOYcUrg+a6FO34us3cuSY9ODRIZXw0NL4dSPQCCR61aCUwAqBM4ZiMmNRapxUFIllgkFOZtRNy9bWnEWRcnjnkGVpmLkSIeDWjds89uxTuGKN6s2bmLZBYPzApCSRvo6pkqCtd85OcokzluWqxUiF1nA6E2xOJFZbKmMQRYouDVmmcEZ7hyO+3mYiW7QRpLOMFQn7D/cpUkhNTV4oVk2LUClSG9ZSRWstbePvOa3PFMvaMckyDpa1h75tS16kVFrTivR36F/WUaNGjRo1atQPgn5H0q7C8XSccDJIe6/X+E7gMVwvdoENXYVDGBlDmbjOYQBCw36/E/AK7z8qXWcMs+IxDcFsfI04XekQBg4hZIA/i8WCa9euIaVksVjQtm0Pr4bXOmksQ5fmEBDHEDZJEp5//nkuXLjQv3fx4kV+9md/lmeeeYZf+qVf4s033+wB1knrIygGXHGc4lS5Q2g7jP1wvYWfQ+yn0ylXrlzhZ37mZ3jqqaeOzXlwisYgazhfAU7FbsYAn2JIGNLZBsgV+hFAYAzeApAMDsI0TfvrxKBVSslkMumhWvgKjsHg2A1wryzL/v2iKPrxhJS6aZr2IDK4MoEefIZ0rDFwDeOeTqffBu3CdZ1z5Hl+bK4D6Av1Fuu67qF3iHmWZb1bMs/z/pymafqUrUKIHuoCfdyBHv4Gd2lIqxpiF75ms1nfVlVV/b8T8TyFOYzHGL7Cgw0h5gcHB31K2+DcHTVq1KhRo0aNGjUq1sbpU2xuzlnu7VLMEs4+fwWkYGP7FE1Tk8/WWT7coa5KVD7l3KXzVA/ucHi4YG1jjfu377B16XEeP3OaV77466yff4wrzzzNjZdfZ7p1mnNPPMbdN14Fq9k4c47dhzukWU4xnbN39xbTImH90lPcvXGTw/19Hn/6cWqVc/WNa8jGcCqR2GlK7qY0O4fU2pAJyBJJtWqQucSudtl/+Ruc/dDPMH/y/cjiv0FWzgOXRKHbUF7DV1AUXYpVi8UicUA6TZnkKYf7JUhYLTXCZxMlFCbUxvZuRBHMiL2jz8O73qUXXJR4SNd2bdnOLRmuq53BiEGb3dmmc0N2hOrYvIX2p8AnZ4on11MKoXjw9i71ovLOOiWwnkohpPR17pA+DhacCak8PSTyDkSLlhInLfksZ+Px0/zm167z1Tfv0zYGha/haMSRo851aV1tNwABCCdo8XAx1JcUdPcfnOtHZLuSnMHlZwd7atvDSMH2Wsals+u8cGMHayFPBMY69g4rXnzxdd56422S9NdorKXShtOnNzASrt7cQQLPP/c0+SxBO8NBXTEtciyOxaoB5/jWq9e5cH6bT7z/cVKR8urb92gaTVk1Pr1tF3PwEMyn7VQ4ofh7X/waf/pn/iD6K7+F3d/tHZ7hGHAdqPaLxXbgUdLB2W51um7KnaXbQ2roHK6uC5bpUrj6BSN6EO36Nen6JeOnpHNZymgVSYUwFiETrDXU2rty66ohyzOMkNSVY9ms2Nxa57AyOJEwWZtiTYvRBiGgaVpufOuqv5YziERgashThZKgkpSkbj3Q1pq1SUprLI12rNqW1jSs54rtWUptHAdlQ1GkVE3DNM9Q0qGUj8XKCpy2SJUwmybIYkq7av1n1Ammk4Sq1dS1I08F+WyCEI5pKjGyW1/SMplNWJYtWSYRGNRkws7eATJJKJLmXf/bOWrUqFGjRo36wdd7vmMeu43g2wFQOCa8Hm7mx3X/goYOs1hDp9w7KUCD4K6Kzw/vxUAt/BycZEM4OBxb3N6jgGIMFuM+BJA2hH8nQbegkKYy1jBN6zBeOzs7Pbw6CTgOYxuno3yn2A9/X19f5+Mf/3g/r2E9KKW4cuUKf+bP/Bm+/OUv8/Wvf52bN2/26WxPgprx/MRrKrgQgWOuzfjck5yiw+/z+Zwf//Ef51Of+hSbm5sYYyjLktls1te3DDUMA9wargFrbZ9KNQC6OI1ouF5I3Rm7KkPbsVOwbVtee+01mqZhOp0ihGBzc5PTp0/3cxFAegBiAfAppaiqqo9pkiQURUFVVUgp+zSgof8BsIb5CW2FcYaYt21LkiS9azIAtXBsgNmhHwHazufzPkZh3YUxhvkLKWTDuWVZ9vUcw/wHuBtiFJ9rjDk2RqC/RuhngI/hGiHVbPhsBTBcFMUxp3NYX/GDAnEdyhCn+PUAMwOcDetm1KhRo0aNGjVq1KhY589vcXD/Dqe3TwEGITNU4bNyJCpl7949EpVQ1gayKQ9vvMWlZ55isthnsWy5eOkS1eKQ229d48kPP8/OjRu8+dIN5qdOk2K59qV/ytrp00y3z7HaP2Bj+ywSx51r19g+s41SiutXr1JkGY+9733s7R1y9ZsvkyaSYp5TnC5YHK64vrviVmk4VwhmqUBrSHJJMs0RhyUPv/abPP3f/zyzJz/K5Mw65f4+SiWITHnXYWOxHSjUYR/agb2iSEnyhHQ+IylrtHW+Jl7TPWDapSr1cr0jMaS5tB006vGmlBgL3hvpZb2JkmmRUa0aLKFmJN2xHbjEpyJ11sM648BF+TnFEVdCCXg6F3xgXTFLJM46qoMVzkGSCKZbE9JZhm01prG4sN/BgoFyv0I3Bmd8TyUgujSpOk3Q8ylfemuH1+4edk6+rrQL3qtphOjrSYoOgoUYgMBaQSMA6Ujo9rARYIMAy47GZ0XURr+9FyRSUGQJWZEhpaA1FqUkmfT1C4ss47Bqmec5i6rhwpk5rVC8/uZDLlw4y2OXN7BtTdlqjPaxeOb9z1JsXuLLv/IPmOU52mheuXoH21rqsuLUJOcQQaIsq7qhdgbrPCyUQoBzaOtdfgsL/9l/98v8D376p1AvvUhz7zZCgbJgpUNJX88zrC1kSKsrsMZ2HFH2KVP97w6lxBF8xENd52x078OhOqh7dNBR6GQHH0F6p6ZKkAKkFF2aYA/ID+oWO4fpJKNcNchpBi5hmqesSo2QnYOyrdncnLP7cAnOIgXUTYMUkrVZzvTUKQ53D3C1T1OcCEM6TVhVAmMatPHpduvaUraW9Ylkc5picKy0YT6bYJxjmoJxFiUc1gqEdGSJQsmcPAebFlSrCtFqJoViliW0pqXRjkRIpLUkQpMqgbaWRII1BkFKebBkkkomxYRFk3H73g7Gwtk1ySQ9esB81KhRo0aNGvX7X9
+VXSdO+Rl+P8mpeBK8i1//btMWDt2EAUjEMCF2QgaXVHDwDR128dji14fux9jhCBwDPo9yTQZYEQPHWHF7sSstdswNjw/QJq5L9yid5GKM34v7FtoK8QpK07QHeQEUFUXRnxtq/f3UT/0UP/qjP8orr7zCl770Ja5du8ZqtTqxfwFcDd1qsUM11jB28TjCuUVR8Mwzz/BTP/VTPPvss9+WCnToMgwgMYZkoa8xuAoOxQDuQirQAMm01n1aVPCpUZVSXL16la997Wt9fA4ODlgsFn3slFI8//zzfOQjH2FjY4OdnR201lRVBdBDvsViAXCsDmTTNH0txTh+TdP0cxbDwbhGaAzmg4syALZ4DQzTFAenY1VVPeSL5yOuyxhiENKzxm2EsQTQqrXu3aABhgK9WzOMIdR31FpT13W/dmKIGdd7jGtvxp+nGHAHd2ioHxn+LYnXYDgv1Et9J1fuqFGjRo0aNWrUqB9uGaO59P73cbi/z2xjG6M1Td2CKmi1pphtYK0mK1ocmnNPX2FvZwfTOk5tb3Lw8B5JMefSY4/x9le+hNaWrQuPMZkmHD54yIVnnibJcurVkvUzZ1jt7dIc7rN1aoO61ZT7e2ycOs3a1hmu///Z+7MfS648zxP7nGP7XX2L8NgjuDOZZDLJ3JhZnbV0TU8vM1IDEjQlYDCvetCrXuZJz/NHtAChAQmQgMGoIVWXutRV3VlZ1ZmVCzO5M8ggGYw9wsOXu9m17ZyjB7OfhYWnB1mZWUCjOu0LON3dri1nu84w+9zv9/f5DdZ7D9mdhlilmWxv4oUB1/czPtxbsS4MR4HPjlWgHaaoqCofP9Csb11jfe8+w4un2H7xGWaf/hylFeE4RPtgCkWWVpSm9iM6NFgIByF+qInimGK1orKOPK+oTF2nzjpdAzMrqLGJaxUXnKrBYh2v2cSaqjrCFVvXczTWYpymwmFMRRx6pOsK07gGmyzS+pxyy6Ia118LnajjMZt/12scW57i5aHHM5e2KeYp2bqoHYaeYvflp7j4e18nOX0KbO3Y1H6E9j2yg4fsv/cht/7T+5gybyJX6wjard0NVg9XrJ3H1cOUDw/XVDjQrq4PaUG39TKFdT0CZDQOT9W021A76pymrm4pMaNtsqr0p8WrNXxs4WQDYxUUxnL5wg4PZituP0wpK0sY+gShD3HC8mhGVBVcPLfBnYOMDz6+xXe+/QobU5/ZfAa2AaheDXo/+fhTpptLkjjAOEtpwZQVG4MYTxvWadXWUvRoanwqh3EWh8JXurEbKrSDZV7wr//8P/Dffe8NgvkRplzXkNk6KgW+q6NbfQueVniefhSVqo89P5CJF4yoFR7q0eAqMeSqmjk618Tb1u8N1awd1VByU1WAIvQdaI2x9b2zNQaDItc+KozIixKLI8tLJoHHal0CBfEgRisI44AiK9nYHLJcZuR5AQa0D6tlRmn3iYfjOpLXVWR5ie+B8jxQmlVmSLOKUaw5M/AYxPW4PpxnBIFHbmCdl/ga4sgjCHyydYnvayLtyLKCmfEIowI/8MiykiKrWJqC2A+IPIUfBXiqBpdVYfDiEFMU+GFMlhUEQBT4WKdYrZboKGInDkh8g/ktn/316tWrV69evf5h6e8tdrUL957kVjwOvL4IvJ0E5p6k4w46Udch1W2D7NN1SMp5jsPHLpA73p7jtQG73wVedSFeF8yeBOC6zsAuNJFtVVU9FhX7pPM9CWx2X+9+7/b/eOTo8XNrrblw4QKvvPLKYxBJXGQCbOQ8g8GAN954g5deeonPPvuMH/3oR1y9erWt79e9tvStW8PwJOD6Rf2RKNXt7W2++93v8tprr7G9vU0QBFRVxWq1IgxD4JHTLwxD0jRtxy0Mw18BybK+1+v6E4YCqmRM5Jx5nre1HbvRpHme8+6773L79m0mkwlPP/00k8mE27dvt5GmxhgODg5Yr9dYaymKonVLOufaqFABXt3ajt06lPKzxJGWZUmWZb/y3hMYK8BOxkOck+L0lEhX+b0L+SRGVRyBsi67cynHy9zI9T3PI01TjDEtpBSwKPNRliVFUbRrKwgC0jRtY1ulDxIlK20T5Xn+WJ3LwWDQrqkuaBWHoxwr9Trl74X06bhL+CTY3KvXf05dnmg++p/+m//czejVq1evXr16Nbr43T8iO7hNHG6wXh0RxSPK5UO8IMJqj3iywfrgHsPNTQbDkIf37jPcOUsyHnP/8+tsn7mIXc+5c/V9ojBARx7p7BBbDjn3/IvMDu5jLYy2dpndv4nvewzPXWT/zj0qU7Fx+hQqmXDzs0+xyznnzm0TDwegHItlyU9+eZ2Pb88IPY8ggJlz5NYR6frf9gZNEPp4qxkP3nuby2d+n62XXubWX75J5SzhMCIceawPcnyrKFdFDfy0w4t8kmlMEPtUVUlhLUVRURYV1jT3s0ALHaGtbegkVtU1PzWErI5StdRRpArnTO1wVLXTbZ0bdBTWILHBoOKi1I0LDeq4ztoFqFo3HJ3750jBK0OfS0Mf7fso38dRoj3FYHPEhe9/m+1Xvo4/3kD7CSoaoYdbEMawustw9685/Owey9md+h5DgUOTlnBgFVeXOR8uSnLrOiDMPqKK1BDSAu1HpRtHY3OnjFNSDxIqB76qUV4NUjuuR1VH1raeR7knbAinc5rKOtaV5WiW8voL5wjcHdalJUoiJhsDirLi4rO7aE9xf1bw0c09XnzhPJtTRWXzei6NJYl8/DCiLHLCQJGvDtCeV5Ne5TDWsbkxZGyHLBZ7HK4zbGVQKCLPA+UoTd1v5RRG1XNlnMVXiqLK+X/86D/xj586x5XDB7iyaJ6LNO5XDaUxbc1NZB1Z10DEeks9rvVPWntgm5qdLXKsQaNq1pWQX+tcHU2qVe2SVPV6qp8NPJo7hQbP4Yyj1Brr+awWBThTO0qdYrnMUZ4lHngUeUaFIgp8BkkEQUxlIAo9itKCNSjlKLMcV5WMp0OMGoBbs8hKbFGA1qyqEmssuwOPQeJxf5axLjOsgtir0IHfuDghKx1lVZD4msBT5KUjKy2hUqzWGWGmUcaQ+IooSTBlgdYezlb4sU9RGkBTpFm92Ko1Wvs4P6DAY51XeKMxoypjayvmaFWhq+y3/6Paq9ffgy5P9Jfv1KtXr169fmv9xvCxW5Ow+11+flIMa/fYrrrn+XXA4/Hju0BQwMpxN1/XGdZt10kOzC9yZXYhnzious49uW4XKkqc5JOiXAU6iaNN4IZAme74nOTAfBLYPak/T5ojgbLd9slYxXHMG2+80QI1gaLyerf2XRRFbVzqcDjkG9/4Bl/5yld4++23+cu//Etu3LhxoqOsCxyPj9Hx37tA2/d9dnd3+drXvsbrr7/O+fPnW7AlMHc4HLaQTWBU12Ep7jxpd9dRK/0TuCowquuuFegs9RaNMcxmM65du8bbb79NURRYa/nxj3+MtZbBYPBYHUVpp4AxcfhJu6RtSZK0EF36lyQJURS1a1/Ao8yNtFHmS6JYu7G5x2F7VVWtq1MiXsVhKf3uxs52ATnw2HqPoog0TdtrSLu74ycgF
mjXVxzH7bblcvlYTcYuNK2qqu2/zIW4LrsQUeJ8ZQ5lXXme1+47n89boCzrQuCrzEk3vvbLAHmvXr169erVq1ev303t3bnOR1dvEHkezz9zgfs3P2dy+izJxhhdrjm4+4CNndNU6zWHB0fsXrrC7c9ucO/6Lc4/+ywP796inM85dXGXvLR4xuFby3g84u6n19ja3cULE5Z7txkOEwqrmR0eECQR26efZrlcMr/+KUmVMdyeMDp9FothmeVcvX4DZzW744jtUHP9YM3n65xnAs04ULgKyqIijBVUKQdv/YIL3/sO46e/QjhNyO+nVLlleHqIsxZTWYrSg0qjfI3WijAeYExGvs5ZL0rKvMLaGkIBoBS2idYUoGica+s5ugY4WudaMAmqqdfY1HikjlZ11PUf11lTKsOaBlzWMZi6OTdNtKfCtgCv8RcCNaR6NlS8OA4YhZr53RmuLuCI5/lY61jPMvJlik7GeHGEGuzAzougPdx6H2MslTE4VddNRENlYe9gyYfLko9WFVkl5rsadonbUSuFh2vdnq6x4ak2PlViWOvXVAPVrKKJaaXpiWsBZNfhWZtG6w3KUdc3dHCwyPjs1h5//Icv851JSFpq3vrgJmVliJMIXMUgGrKXrrl8dps4gfv7M6w1KAeDKODKU5dJ85LF/n1C38PXPp6GPKvTag4XKe99eg9jFPNlVtdeBDYSjxfObzFPM+4vcpZFRVY6nLVUDUA2zjGMYhZFzl9+cot/fGGbpxYztK1QFrQG3TgorQOsQTdRq06pxsVYD4ZSCmcdSqvWuapUvc48rzmmLrpZX93W2zUOp2tYjrhxqa8nkFir5p7aaZyyrFHcWWQkXsC6tIS+I/YhVJbhwMfYGowrX7EuDYtFgTMLgkAz2tqE+RxTWFAehalQVlHkFfHQZzJNyCuLrWB/lUEFW4mHVoqDRcY8t3jaw1eaIPJRWCpK8CKKrGAS+ySxj/N90iwniAKSRONVivU6ZxB6aD/AGAc6QDmL7ymqyuGFIVmaobXCVw7fUxSuJAgj1s6g4wletsZ3FYezlKp0RF4PfHr16tWrV6/fJf3G8PE4/ILHo0lPigd9kuNOth0HYb9OW076+SQ3oHydVK/yeJu/qE1d6NQFi8cBz0mg8HibZLuAHmmLwJeTHJfHz3d8bMU9eBzmPUnH42m77q7uOb/yla/wzDPPtNcWyNOFU3meAzwGJQVSOed49dVX2dnZ4d133+XHP/4xs9nsiTUdTxr37tgJRNvc3OTrX/863/zmN7ly5UoLo4IgIAxDlstlC/W6TjipC5gkCUVRtHCrC0K7sZ0CObvuOahdtFEUtXAwCAJWqxU3b97kJz/5CXfu3KGqKra2tviTP/kTHjx4wA9+8AOMMW0dwzAM2dnZafsYRVHrzJN1ZIwhiqI2BrYbGyrOSIG+WusWxllrH3MHdsGajIsAwqIoHnP+JUnSruluzUMBnlrr9rwCCrtju1qtANoI2KIoWtApMFOgu8BbgYvdmNs8z1tHpqw73/cfczxKf2QMxDl5fH3LPMoY5Hn+2LjJvHZr28p1JE5WxkLes7169erVq1evXr16HddP/vqnOB3xjW+9hgs8zjz7PMnOBTbOneHWB78k2NDs3/uMIIjYOn2amx9fY+P0KXZHAz765S/Q8YgXv/E62TojKgtcWaDTBbODI84/8wyz/T3M0YzNM2dZLJes0pRkOCaKIg7u36aYzYh9GG1P0b5PnmcsS7h98y7jJMRbZ3iB4pOl5doyI3FwYBybzhEoyBY5gT8gGcYsr75Leu8B4/NnmFw4zfze51hTYhkQDAf4WUVQacgtfuxjTUW5XlMWOetZTp5VteOwyTsVT6JWPtbVVRGVOPZ4BNvq+4Ma4tWA7pEzDWwNJpWiMhbjFKb2zaEB7WmGvkflLH4Qcn+VN9dqLIMNorMNlEMpNhW8PAnYjDTWgtIWpTTNR5epVjmf/H9/yPz2Hue+/Q12XnoFP97GVQZsRnn3Ovd+8S5Htw/raE5PoXyPPDd8tiy5tjKsGvDolGuhp5ImIZGfEjfbAENqN2Dr4FOgGpoo8aWuM34ATj2qFSl4VTmH5xqXZLPNAKsKfvLZIb+8+2OeuXyK519+hU9mn1HeXfDyxU2izSE/vn7EurRsDDRZZnGqQGHZGAd4geLmzZskYYjvB0Shz3g8oCgKVn7FYlUA9TxZU89B6Hs4a5iOYp79yovcP5hz96330UqjqcA5fBxWKyprcDhCpVgbww9vP2Tr+YucuncH6yzG1a5YhaasKkLfF+NnPXPWoZrHQFpiZ7XuQN0aUrdLS9feSJyqIWVzLu17tStSyt4YW59DUZ+/mQ9rLE4rlllFEEessorJIMRZwyDwMNZSlgZTGiajsF6x2sN5DuMc0XDAarkGdPPB4nrRlMZRVpaoyFkUinVuWecFGtiMNZsjn6VzpIUiDmvw6Hk+65L6QwJOEbmKMPTJrSEwGmM169LhJwp0gHUVWkFWOXRVEHqa2NeEQe1KrVAUWUHge4SBxpQlTmsCz6fEEQ02SLTFRgHrRUagHHGoKR591rhXr169evXq9Tug3zp29SQAKTrJmff3/ZD++PkELgggEnfaSRGaJwHRkwBpF1h0+yP97YKNruNLjhX480WOT4GNx+sbHh/DL1K37d0YyeNOS9mn6/7sgtDj55SvU6dO8fWvf70FREVREEXRYyBQnGwC91arVRttKnAV4PTp0/zxH/8xL7zwAj/96U95++23WSwWv1Lj8qSfuy7Dzc1NXn31VV5//XXOnTvXtquqqtal1nUvCkSTOoFRFLUxpkD72pPGfbFYtPDJubrmobjrBJ4ZY7hx4wa//OUvuXbtWgsQJQp2c3OT6XTKxsYGy+US5x5Fqj733HOPrUu5lvRFHJXiIOzWp5To0qqqmE6nlGXZAmyZZ4Gvsl0A9WAweKxuolzvOLCT8RQ3psS8CkQMgoAkSVgsFsAj4CguRVmX3RqPYRg+tg67dSsFygvMlG15nj/WToGGMl/iGO6+vwS4Sj/SNG1dkDLGWmtGo1Hb3iAI2jmVvsg4SzsEpvbq1atXr169evXqdVzOG+APNvnJ29cZj2J2zlxkW8/42Q//Pfs377I6WPD1b3yNSDkefP45Z65cYZ2mzO7d58VXX8Ifb3Hz8xuUacb25ojF4RHJeMzupW32b15H65Dh6V3u3LlNllXsnNnG04r5/h4uz5hMBiSjIclgiAsC7ty+x96DOcNhwsgPyCcxH/z4Ovvrit0kZr4uuJdbzkc+SlmUguTMDuOoovjsiDs//xnPnvlDNl94ils//YwqB1spbGXRvk84dGjfYC11vKvWFLMCayQm9JEvT77ZhqQ5akdkbSxr7qORaNQmLLN1KjZ1IgUaUgMiZ+t6g66hicM45PzpEYu04tbRisrVYEfiMl2nLQ5FALww0JwbBXVNSutw1uB5fg2gXI0r1/szbvzgp+x98AlP/eFtLnzvW+hbH1IsD9l/62d89h/fpEjLNvLVKsV+6bi2NhyaOg62DlZt9KhIo/QYJxUblcMXWKsa
CAmNg8+1wNY2QNIT2viocOEjOEZdl9Gquq5m6cA0Ga0VjgIw65J7H9zhbz68jdaKnSTkILP88v0H6NDj0qkhZVkRR/WHZOMg4Ht/8I95/+1fUq1XoA3a8wjikGSQMJqOYX/GRlayWK2wRuH7UJSGytb3vLO84k//6qdkWUFlLJWDwtSOSqUU2jo8FEVVoT0NzrKoLH954z7/fHvCZHaAcrpeP9rha79xjTq0aiCi7rpC67qQuNrJqGwDal0Di2nug00Dv5uJVErhBT4oh60cyrimdqnBqSZeNmyAujEQ+hyuCirtE/uaOPAJPU1WVfhhzHyREfuKLFX4VpHnKcPxAH8wRtuSyWjM/QcH+EphnEYpix9oSmM4WsFnhxl5WrIZanYSn3GgyE1FlESU1lE4RWUtOtTo0hGEAVleYQ0EkU+oFZGnOVqVJIHHeJSQpWtiTxMmAevc4jvLxiRmvc4pKvADjasqkigABYXTFNrHQ2MrSxzFxDZHez4GQzIIsEpj8vLEZ069evXq1atXr/9y9RvDx+NQB3js9yfBOjm2q+Ow7df5B8lJ+3ZBk1yvC6267e06/LrqRirKdU6CpwI3u20XgHMcZB6HadJOcV4JLDlp3L7MCdj9WQBPtz0n7Xu85uXxMeweE0UR3/nOdzh9+jTr9fpXxk3GSgCP9CVJkraGokC6PM8Jw5AwDLly5Qq7u7u8+uqrXL16latXr7K3t9eCsON9FXflzs4Or732Gl/72tfY2Nhoz9eN4TxeP1Be746NjL2497qQvLtO5VzD4bCFaOKsk/NZa3n48CFvvfUW77//PmmatmP31FNP8e1vf5uzZ8+2IPrixYuPwWGJLz0e6yrtNMYQhiHz+fyxOoVyzHq9JgxDlFLMZrN2DLrQbzqdtjUlpX/iDJQxkHkU92z3/STAsvteEnchwGq1as8n7y8BiAILkyRp4a9ATJmXqqpaUC3rP03T1lnbdWB2378CuMW9KM5FWecyh924aFkPck0Z0zRN23mRvsu4yhx1/750Y2F79erVq1evXr169erql+9/xiL9EN/XJFHI1uZ7hLbk4DBlJ/Z57twWy7t3GF06w4Xnn+bezQc4U3DmwkXmy5TDu1dB+zzz0gtcf/9tgmDAYDxk7/o1xjvn8DyP259+isFy5SuvUGYLDm7dZDBICEcJfhii44jcj3n4YJ+jWcZ4HBP7htVa89dvfsbBYs125DNLc/Yrwyc5vFg5RomHVh75ao2vfYKhT37jE4qD1xk/+xX80Q9IZ0vinSFg8bwApQxaO7zQB6dZPZyTZ4aqMjj7CAoqrR+BR8A2dRyVA+MU6JbD1QDPNd49AUUS29r496yzyCbnakBnnWNRGlLr2JstyMsa+VkUxlmxDrbnxsGpQPH8NCLxNKYyWGxde7AyaA2+59cuQ+dQzrK+t8dH/8ufce9nbzHc3SSbzTm6fptssarboxQVjqNlztVZyf3KYpsal50CgdAAQaSfuDZaVupfyscdxa9Y13esx6PuscZQw8j69A3CbOJpawdl7Vxztq4TWVG7Ly2Pollx4KlHLsG9NOfuR3dJBgnfu3SOytX3boGvuHT+LPuHR7z75ptA7fCLQ48kCRgMY8bTKfl6RTRIcGpBHEVkaUFpKsbjBLUuyLKS0lrKymCcw1d1jU9faypTO3BLZ9v4XVzdfu1p7i9z3oxCfj8Z4vJ1E88LZWkIQo1STb3NBsw618BZpbAWPF+3ebQKh9Ye2qsdkPU66zxP0eB5dZQqTteOR1sjZAco7fCCAGcNzrPgKpKzlzl6+Ak6VGhr2IwdKwPGOLyqYjQOUdZitEeVFQQ40vmSwXhIMBpS2ZKdM6dYzZfEgcdqsWaVlzhjOcwLCjwmwwFDCjwsqxKUpylWOVmpqLBMBj6e75FlOeXaMkkgGYSMhj5ZqVhXEAQ+YexTlSVxU2fSWUcY+MRa1xHCTuFshdEezjrW64zS6joedjxkXhQMkyFBmYOBLE8JQp/CWKqijkDWAtl79erVq1evXr8T+o3hIzwCLsddUcdjQY8Dyi4AetJ5f5O2wCNYdrwem4CrrjOxCya65zgOQ7tAqvvVdekdB3zippKfj7dVgExZlm3s5JPcdie5O5+07XhEaLdNx/fvurW6UOZ4PU+tNc899xwvvPACQAtiuoBJnHhSD9A5RxAEj0W4FkVBHMdtbCZAWZYkScLly5e5fPkyf/iHf8iHH37I1atX+fjjjx+Dsr7vMxqN+OpXv8p3v/tddnZ22vZ1a2MOBgOqqsL3/RZurddrJBZUnIae5xGGIVmWtVCp65w7Dm9lPKSWokCrMAw5Ojri5z//Oe+++y5HR0ftWptOp7z++ut85zvfaWNUoyhiMBi0bRZnZLf+osCx7trotkkcj/JekthWmQ8BhN05lmhVgX/iVJQxkde7708Bxd1+d+GsuHUFoksbBQDKOIsjVUBdF9DLuZIkaedC1o1EwEq7ZF9x3MqcdN/vApoF4sp4iDtT3J+yTYB2d17l2sPh8LH1Jdc8/iGGvuZjr169evXq1atXr5OUZyXDwMeLQrY3B3im4sHegrPDiEvbAwaJx4WnLzKcDrh/4xZxkhAPTvFgb5+iyDhz4QwYx8e//CWnzp4jz3LKdcru08+QpRkP79zFC2POPv00i+WC5Z3rbG5uM5oOwRRkhSEtHQ9vf87s4SGT6YhzTz/F0WzOu7/8KWe3x5w/tckP3r7BYW4Y+ZqZceyVls1Yo01Jdrggnm4TTxPS2x+xvHWL0c4myeaY5WeHrGcrgsSnygqqrMQa0M5RrlPKQu4RAAcaqFpKCM7ZxudX37doXcdjuoaECRekiSc11rZ1H9vsTOdwtj7OQieOFVa54eqNfXC1Q6uNXG0chM2ZcDh8pXgq0ZxO6tp2dc1JiS61WKspMdRRsRpfa7TS2Dzj8KPP2L/2Weu6tK6uUYhylFbxMDN8nlWsmpqMSkBf034ZEKdq56bEq5auruWoFXiu8Uq6Tryqc41pUjVwB+QC9b3to+tIzUzXnNPiHoHHBmhqV0e5Wuo5cDhKB7lSvHRug7TIsU6hcWxMBzz/1a/y9ltvY/MVo/EIpWA6SdjZ2WC6OSGMRsznCevyAdrTeJ4jinyM9cA5To8HzHTOYp2hceyOhzz31AXefO868yxH4QgDjc2p43SdpTRN36zFKfhktuSFCzucz7PGAWpRWlFaW7sYna5rfIpjtoGwqOaD3wqCIKQqy5pH69pdW0fD2tbxqJRGBRovDCnma7AOa2tQ6fs+Std1IoPm/rowBXfTnEz5hL5iEATkWhGGEeiSwIcwijHGcbjK2BiG6LJEl5ZsuSZbpky2xuSlJQg88qLE8zUm1xxkJffSnJHvEw98wkCTFgWjJGSxrtDK4XsevlJMhjFGaWxsUCHgciyWo7SiyEqCoAbqZWlQzpEZQ55VKAWb44jAV9jG6WqtT6UBapfnaBARBFBpx8ZkglundS1VDeOtMUVeYYqKylmGg5A+L6hXr169evX63dJvVfOx6ybswr8gCH5lf3lYLzXuZFv3fLLt7wofv2y
/LjQUcNB1/HUh2/FzdUFGFyzIz+JK68ZFds8lx0m0pPwuXwJnxKF1/NiTdHxsTpoDOfZ4jGX3eJkj3/cfa3cXZsk5lFKcPn2a3//932c8Hrd1+MTxN5lMyLKsBVAS59mFqVIrr+sK7dbm69YRHAwGfPOb3+S1115jNpsxm824fv06RVFw8eJFzp8/z8bGRtsPcc/J9QV0dcdInHMCr7oRm+v1mvl8zttvv93Oxf7+PkEQcOPGDd555x329vaoqooPP/ywBVeXL19mMpmgtebmzZv86Z/+KQ8ePGjnUGvN888/z7e+9S12dnZasCiATmCsjKeAU6UU4/G4nS+plyi1DwXeictRIJm4/wRmyzEyFwI4ZX5l/mWdyHoXgCxgUmJJZUzlGsfXmZyjC1SjKML3fYqiaIFid988z9v5EnAqbkTf91mtVm3NTmm3wGUB4DJn8/mce/fuMZlM2jqX0naBrbJNrinnkvaIK1eOF0g+n88fi1qV47s1KHv42KtXr169evXq1eskBQrGGMYhmNkSz1S8sDVgMoo4dW6bS88+w2L/IavlIVunTuG8gIODGaGv2D17hmx+yGq+ZGtrh6P9QwaTIfgRt2/ew1WGyaldJqe2uX/9OnlRMJxuMtoYkwwHzBZrHhw9IF+tMOuUZ5+9TDAcce3jz/j4vU946ulzWGP5jz/9hMrAhYHPXmZYVIabmeFC4hMEjjIrKdeWaBzgDg+587Of8NR//XvsvHCZvU/3yY7WKBeSr0pM5XCVA20p86q5TxM3X+3Ak0J8NSPTtIY/aOJIXcMba8hnWjTWpNdY8Ts2lj5qV5pqzu1wmMZV6VwN/5Brt1UOG8elfQTeJgouJT4eYBygdR27aSxN4ira1Y5A61ztivQ0SqumlmXtInzUg/r8WWm5nRkOK9dGe9aNFoBI64VsOCsKhw+U1A5F1TgYPRk7RRMpKgCWJhpUtcDWCugECXDFNTUi61qS7pGLsqW8TdOUxjjH2sIax9YwIdKOWZqSRDGXzm7y1Vdf5523f0m2zhnFIWEYMd5I2NoYc2r3NGcuXCYIIg4e7pNmC3Z2xmTrjHmRgq2IowhTWXyt8LXCGLDK42fvfcI6Nwx8ja/r9vq+pigNCoWnwHl1P51zLKuKa2nB+eEQspQmNLV20ipdw2lxfupHa057CqdBW4e1Bu17j0Nh1awyBZ5Xu2S90Mfaqh4vVQNMh61rVPpe7di1FqscycYmn8xSSqcYhD7GWdalY5x4jKMQpQy+DjhMC8LIR1HXn1S+R+BpnDGs1xnRZFL/LUGjVcBqUbKXGdZl/cZZZBVV6ZjEHotlhnGKJNL4HgSRR5pXFKUjpMTTisoolPaRGpWRr8Ep1pVjta7QriL0avDojME4XddM9R1K+xRZgef7jKcxXhCSGUXg+YRUVGFEtpqzsznCxmPMeoZzEEdNjUvTx6726tWrV69ev0v6reAjPAJUXRdg19HUdfMJYBLQ8tvqSaDyuBtK9uvWkuvGNnZjJeFXnYqyrRs1e/z6x12f3TGQ18VV1Y2NfJLDsXuNk6JVv2w8jkfMSr/FDRgEwWPOyOMuSdkWhiGvvfYaW1tbj0VnCsyZz+dtX8VhFoZhC1hl/AWsHa/HKYBKQJG42qy1jEYjnnrqKV566aX2GHFYRlHUgkeBjwKWpQafMYb1et06MKUOorQhiiLSNOXevXv8+Mc/bkFVHMeMRiOuXr1KnueMx2OCIODWrVvs7e2155xMJiyXS9566y1u3LjRjulkMuH73/8+X/nKV/A8jziOCcOwjauVepgCrgQcep5HEAREUcRqtWphZRdUdmGgxIN26yjK/IVhSJqm7TnFSShrdLFYtE5QUZIkLQyW+NouMJYvmZ9uDU1ZYwKgxVUocakSpXpSDVKpO9mtd9qF1fJhhS60z/McgI2NDfI8J47jX1lrAjGzLGvXorwPxbG6XC7bvxUyN9LPPM/b9SJRvhK9KmtPYl776NVevXr16tWrV69eJ+nV8xu8/off5t7Vq7z73k12tkaMIsXOuTOcurDLrevX2d7eYePUDpnzONzfZxSFjCZjrKcZ717E6PvkhzM2xmOizTG3P79NEoZMz57BmYqb771FkEwYDRLi0CceJOztHbB3/5AiXbGxMWTnmcvMlxlv/fwdAmt5442XOTyY88v3PmU7CZgEAR/cm6MB39Ncywwv5CUjr76XKNIC78IZ1N4RB++/y/bXX2X329/ms795h3RZEMQ+pnJYY1lnBcONERQlpjEa1rGq7hFdE/cZNRQUM6RSqnEa0sRlGkA38ay15c9TGiPORF3Xm3SOxsHXVY3zrK2hn3FNJGnz2iMnYI3gdiPNNFBU9hFAdIr2WDBYBx4K39M14DIOp2sw6GwdHQvUUFApyqriYVFxK7cUDUQU6qjbe58a/nlO4TVjYhqHXgAYagckugGFTR+0k/hVUE3Eqm1wonMd4Nj02LU/1Y7Gx58t1JmkDqkF6UidI6eGdt98bhdXrnDKZxhpBsMxR8sMa2E0jJluTDh74TyXzp9hY2vKeLrNqcsvoMMp7urPiW7dQqsDxsMIVZUkccL2zjb7+0s+vfkAUBgHh/MVSkHsw7nNIffnKXlp8YBQaSq5h0RhVO0wtdbxyeGSV89vcEqBVrqNjDXygW2nmrhUVb/WcGttHDrwcc6ilEN7uoaUrl4z2imUdiDPd5RDex4V4GkPnAFdO1Nlbq21DZyG2w+XBKOEEoijoH4m5gxK+VhjybKcQegTej75uqCwFmM1njEoT6GNo8oy4mFCUcJiXVIUJVuDkCiqP/y7NJZYe5SlYRj6eH4NS4uiJF1XhH6Friza15SmIgw9tDVUxtWRqmVBWcE8r5qoYM10GGEd+J5CaQ/fWQpb3wv7WjEaRQyHEUeZQTlFRInyIrLVmvEwQfkJy8MFRboiBIrMYqx57MPLvXr16tWrV6//8vVbxa7C4249gVDdiNMuCBR1o07lHN2ffx33Y1cnHdd1A1prH6tv121vtw/S3q4rsAtGjsNVAXwCR7rRl3K8wJWui+x435/Uh+M66fVuP7qQVYBKEAQn1ng8ybEp233f59VXX+X5559nNpsxGo0Yj8esVivCMGS1WrUgsOsoFcg6HA7bsejGW4oTsqqqFlbBI+ecACoZU5mX1WrVwrduf6StWZa1YE/cgN35lLqIApHW6zXvvvsun3zySeuAC8OwhYhpmnLr1i2SJGnnbTweMxwOuXPnDvv7+6zXaz799NN2fl9++WW++c1vcuXKlcfg9nK5bJ2Z3XGTbcfHQ8B5HMety3EwGLRgT8ZS3Hxyjb29PZ5//vl2fmXNR1HUroeuC3g8HrdwE2jHWOajWwdS+gO181LAZje+ViBjURSsVqvWiVgURTuuQRCQZVl7HXlPrVYrPM8jSZIWEAr0k3ZLhK5A6zRN2zGRWNuuA1aAbVEU7XjKWl8ul+17Uto1mUxawCnt67pLjTEURfHYhwcEdvfq1atXr169evXqdVx/8M+/z8dv/YK7tw958SuXIF9x+tJFglBz59ZNLj91mXgwYJGu8S
g5d+4MaEU6X5EMYu5du4YzJaPNLYzy+OitD0k2thmfOsvBg7t4tq6zVuZLfOejk1Pc+OwuR3t7eFg2N8aceupZ5suUjz+5zsVLZ7h48SzvvXWVW5/f45lL2ywXOb+8dp/tQUiRFtzPC1ZOcSe37EbglZZ8mVHmlnhnSPb5Ibf+08959o9fZnppi+U7e2TLAs/X2MpiK8N6scaWFbYybR0+p2p34CM9isDUymFdXYevGxfq2uKPj2JDa/DosFbhrGsjNGvI2eSOisOysRUaaqBnpC2drxoEwtSvbW+lq512WIF9dZymq22StbPNKSrd3FehQRJ+rCTh1DBvVTmupxV7ZQNcVQ1ANQ10tNK92mFH4+ZDQakcodMkylGpR7GonmsiQwXgqjo+9Rh5feRjfERyZSBrl6QC0zj9XLOPdZA7R0o9DkrB5mjAc+cTnjo75cPPVih/yHw2Z51exdOKZBAxnYz5w3/6zzj13DfwhjuAj/IDrDFU1z4kCCKCKGJjc8JTVy7y7X/y32KIePuHf87RfAFo0sMKrRRh4IHyuHmwYmsQEfpgbIGxjspYjDO1MxXasV5VhrkXcVp7hHGAK8t6XpAxAq29Zu4cnhJ3pKMoDJ4Hvu8BCmcqQINtgKTuPDYzCmsMaEVlKrwWoT+C2FDfh86XGQdlgck9RmhCzxEEtUswNyWLwzVx6OO72nlZlpaycihlSS34BobjkNLCep6R5obFKmeII040REMKa8jmOcpaKgdG+0yGPqWpyNaGycAjSULyrMRaR1WBdpqsNAxij2VuwEBhLOPIJzOGJPDwKBgNhjhjKau6/qMxBotmPE4o0dw5zBhEIbrKcX5AURRMhwF+qHE6wFMlPo6iqLDGEnmK0ei3fgTZq1evXr169foHpL+X//Mfjx/twiNxu3UdT103Yvccx+EXnAzaTgJ2T5JcswsLxdElcKXr/DvuyJRjuhBNIEQ3RvV41GgXRB5v85fBxeNj0XWDHX9doInAJdlH+tadl+6cdM8j1+hu11qzvb3NG2+8walTp0jT9LFxLoqiHQuBhzLWEmtaliWr1Yo4jh+Ln5Uagt2aebIm5JxdCChzIoArCIJ2bKW+o5xb9u2CuzzPGY1Gj9XY1Frz0Ucf8Wd/9mctPJI14fs+YRi2caiDwaB18W1sbLQut+Vy2dZ33N3dZTgc8sd//MdtPK2AP6B11Enbu07Qbi1BgaTivJMxKIqijfocj8e/AsKNMSRJwvnz51tQJgBvMpm0MFNicYOgvjkQd6Ss/TzPSZKkbbPEr8qxo9GoBXoyZ8vlsnUVytjJeYwxpGmKMaYdA3EUyrkFMDpXxyOLK1PWkXyX95eMVxRFrZt0sVi0+8Vx3NaqlP0FTlZV1UJfGQdpm0TGypjIPrIeu2u+6/Lsxhf36tWrV69evXr16tXVtZ/+nDNPX2F7a4syL9k69xyH9z/HmZinn3uGypYcHc2ZTkeEUQ0JinWBHwSsDo4IQ004OcVyllKVFWcvnGd4aofPP7hKkRtGSYT1C07tbKDCmFs37pItU7a3EiZbG0zOXebD9z7i0/ev8fxLz3H20kU+ePMXGFPy2mvPsnfvIbfvHLI1injv3pLb6wqtNIWzfJJZnk0qfK1RsxX71++z++w2QTjj4J1fMv/KRTavXOHOu/cp84owjFgVJTjI1wXOujZ+1CF1HBvgKAZApWq/nlOtc1H+ae2EDGIf/d4AntotaFECzxoXoKb58C+POyFrh2ATQFobKBuUWe8UKhj7tcPRaU04SrDGUqxzXGWbttawUOFqJ6cFpRWeqx2aSnkN6NN4kUYpzYN5zmeZI3XgadU4ImvXnWtqNPqNa7NJGEUrRUUNTHMF42awWmDqaONABd6qBliKkxOlWqjZRsy23rxHjkjnwDR81zhFYR25gkoORfHUmR1cmfHG/+p/wP/bd/nB/+8/oT0PbTXKKxnuTtnaGrLz7OsEO88gk6IA5Tump84ThCFhMsCieearr3HxW/9rgsGIZJAwX6744Q9/0WTgOooS1vmaSDvyUpOXCs/3cEWBr6Bydcxt2xvryFzFXMWoQV06pDIGXyuUPA9CQLU8/2hgLwqlHNi6rmE7rvVEtABbNS5YWxhsWULVrOGG4ipfozzwAg9UDdHvzEr2c8s4dOgIosDDVIbKaNZlvXYcitUqI1D1Ogo8yCuHdZrMKrKVYTJJODhcsTdbsBUFJAMPPw5rF6azOB8ip/GUoigtWeGwpmQYKUbjQdN/SNc58TBinVUEoc9sUZE7TVYZfByRp5jGPlVZkkQRRZaB1WTWEgUheZGjfZ/cwCIvCaxBu4poNGS2zBhEPsQhTnuY0hKGPsQBpbX4TuEHitmi+I3/lvbq1atXr169/uHpN4aP8rBdnF3HH753HYwnwcLjEaEn6cvA40nXPEnipuvCPIFDXYegtPl4vKpI4IZAopP2+01g40n9+SKY0T2f7/sMh8O2Bt5xJ+dxeHp8rk6aG611Gx2aJAlZlrVOzuVy2UZzisNRAJKAUDn/bDZroV0X3grkEng1HA4fc+et12uWyyVxHLcOM4FRci2BdxK3KTUN1+t1O0ddh2Ce52it2/p/RVHw3nvvtQ6+7liEYchoNOLcuXOtC85ay2AwwPM8BoNB2+7JZMLVq1c5ffo0Z8+eZTQatU6/rktRQJloPB63oFQce8fdgeKmLMuyBYLiRHTuUZ3ENE2pqoowDNu+LJfLx+CmrH8BrQJiZZu4/DY2Nsiy7LE6lBKbKs5UmUuBo+JQFMArtSi7X7KupF6o1PeUmNxunKx8SbtlH3nPRVHEZDIhz/P2fSz1KWVtAi1MFFApa0LaLyB1Npu1azZN07YdAli7a+J4zVqZ09/Eqd2rV69evXr16tXrv3xdfvVrPLhzk0EUs719iuVin+nWFoPpNmlZUhWW6XRMkCTkWUaQxFTG4aqK0SSmsCPy1ZwqTYkHI4oi45Of/YLVIiWMAlTkc+4rX8HH8Om7H5AezdmYJmyfO0u4c543/9Pfspqv+c4ffZ88XfDuL95iZ3PI1vYGd24+YL1as7U14rP9B6wdbPg+s7KuDXffOu6WllGg8Y2hOExJZxsYp7CLI/be/ZhzXz3LcCMmPaooswpnwNjufXEdjyqwEa3Q7pG7UVyRHfNiEw8qTkCLUwpnbeMwpGvVwzlLjRwbC6FyrcGvPr75gK2c19Vxml1g6ZRjqGCgFNYpwtO7XPjmK9x9++dYDdW6xFWGtoBi46w0xoJtXJE4HBVae6AcldKYKORGZtm3DqMdVml009oaMLqmniMEromfVbRtAvBwBLqmYU3pyjbK1qmm76p2/zkczvIIQrbgVz3edFXvaxvnowUqC4WzFAIeqaForBUTvWR7OgLreO07v8+P//KHzNclgac4fWqCsSWB7xNEo2Nuy/qHzQsvMtk+y4dXr1IVa+7ev03xb/4vPPfKN9g4dYbnvvp13nnnE0KTY5zi7uGK2Fe89txp7j9cc/3hkrQ0xL6mArLComiiV6mdpJ5SLIsSPYoIIw9PQxx4NZStKqhcXV/SC+pBU
E3pncZZ65TFVLUj0ikNytRRqvJMRbk6utWaxv1q0F4DLnlUV9Jai/U1Jgj4PF1hggAHjOP6PtSPfLLKoYwhSWJmBwviyMeLfPKswhaWwPdQyiMtK6gURwcL8nXFmVGM1opolFA6hcbhjMYVFYWp8MOQyhpma8soiYhDn1VhcaaiKks8PwAUO+MQrKE0BmVU7V5VltCrAWriRbWjM4xYLdeEUcDhck0SBayznMxpYu0IXUWUDJnPV/jOYUqHtQnaU8SRJl1WRMmArFCYsmCdFoySPjGoV69evXr1+l3Sbwwf5SE+/Coo67oHu/UTu8Cy647s6iSQ2XXtdbd9kXuy2zaBjV0AKdcSoHH8HPK6SICNbD++z0nHHO/3kxybfxdw8SS463leC7i6r3XbeRwCH5+37rgopUiShO9///u88MILLdTLsqy9lu/7LJfLx64p0ZhybnGXCUwUOCQ1G49H2grkE1goNRLDMGxBo7jg0jRlMBhQliUPHz5sAZjUeQyCgMViQZqmrctSHK8C87Is4/PPP2/b3R3nJEk4e/YscRwTBAHXr1/n7t27LZR97bXXsNby8ccfc+7cOZ5++mkODg5IkoQgCFp3YRRFbfsFIEqdy+Fw2IIucQYK7JNjJJ5W3J1dCCiAdjgctu8FgYEyHhJjKtff2Nhox1FgrtSclHkQp6j8PhgMWseluDZlvKQ/0reusxhooXUXLK7X69bxGMcxURS1DtUoigDaa3RrPXZhcnc9j8fjdl6lVmbXxVqWZTvO4n4Ut2QXZnaBooBdOaeA2i7sFJgq9THF4dqrV69evXr16tWrV1c3r37CzvldohD279xie/c04WSD+SrF17B77ixae8yPZoSez+H9+wRBRBQnrFYL1rNDNAWb21PSVcbiwQPGscfGeAsGEy5+/TUo19x+/x18HDtbQ049/Rz7i4z3/s2fc+n8Ds+8doXPP/2UIltz+fJpgijm5ud30M6ysbPNx+9cx6A5M4y5cZQy8j3mpeVebvg48zjrW8II1ssVD67eYjDU+D4cvH+VUy/9Ezaf3mX55i1s5Qh8H2MKDALQVAPD6tQRh6MyBmua+9Gm2KNxjbNRgVdjFUynWiFyn65UjdKcQErECohtakKCqkGmcg1wpAWOra0SoXz1t5Gn8BRkBna/+085863nuPPRB4QW/LCgTAuqvKzdnM6icXiextk6BtYAVWXQ2qE9RT7P+HS14pO0JHNglSJQilCDs4qNSDMvFakzWBye0mDrPhhVu0A1jqHSTW3IhioCj5JodQtEG4Meuk4L7YBGHo2FQFfA1pfDOKiso6R2WRrq/WMgUnUNyn/07Rd57kLO7pUX8QeXSIZDVvmSb7x0jt0z26yLuo7i4e2P2RrvPja2AOF4gj85Q1lULJYp7139iOWb73P+zZ9y/tJl3nn7PY4ODjg9GVBUjgdHKwZJwHyeE8UBUVDX/NwcDzhcF3h5hVH1Mxy/6YPF4YKAYDgmHoD1FcoUjCZDbFVhVxX5OsdWFj/wUa6OErXGtXU0nZN6o7UV1WqF5+lH4NqAMxZnatirdfMMRTfrUtXxu84oDgvDgatrUqqmfZ7WVDhGw5gyL8EpphsjjCnw4xDPCzBFjikNeVURa1iWlrwCjWEyHkAUU1nLKisp1xkxiiTygYA0K4gCj9CHONR42hF5jqXRZKUi9hwbwwRsRbE2KO1hrEVZiIIAawrCMKqBv++xWJf4gc+6MES+Jgw0qQkYKsswhDgakOcVwzjCqPqZkETaFnmOcZb5UUqVFZRFxXTokxlB27169erVq1ev3wX9Vs7H4/CvG6Mp37sATI7rfj9+zpMch1/mnDzpHCcdKzDsuE4CpE/a5zjcfNK2L2r/8ddOgpDHt50EZbuvH4+6PT6OMmbdvp7UhyAIeP3113nppZdaiNQ9JgxDrLWt606caQJqgDbCcjgcslqtWK1WbTSoxH8KFBTYNp1OqaqK+XzeOiC79fhkXwGg8MgxKK5BceBJZGY3dlbaJsceHBy0cZzHx0Jcbg8fPuT06dNkWcZisWhdf4vFAqhdclmWsbGx0TrpBMB2YzxlfCQOdLlckiRJC1KrqmprUWqt2xhSqU8oAFCib7vj0IXP4lIU+C8xp9Za4jjGWkuSJBRF0cK2LMse29at1SnjkmVZ62JNkoTpdNrOyfEYUqnnKBK3cLeGqsTPLhaLFgqKG1LWarePsr8A6e78CqgUd203iljGWGCiOBXlmlIn1DnXRgPPZrN2vmReBDxKX5MkYT6f43leG/PaOx979erVq1evXr16naRLzz9NWa6wxnLpq19lPp+R5iUBOafOXGS5SJnvPwQUmdbEoylhGDF7cJ8izxkNI/zBKfbv3qZcLhmMxmyfPYM/GDC69DS3P/qQ2x99TGIzJqOE0y+8xN7DOfc/u87LrzxLMoj57MMPSQYxV158iixbc//GHTZHMau84OjhkmHgo6uKRVrgKU1lLevKoZXi07XhxcgxCUOUtZRZTrC7hYdh8fCQu29fY+v8OfQ7N6kMeBpaoucaN56rYU1ZVbVDrONslOhLDaDBSG1CVUdl4izKgaZ2/tWORYGO9bFim1TqkcPSta/L7829gqt9gJoGtKl651ArLI6qMrz9//qf+fxHW5jFQyYbA6LRAB166LVPlVe4omoglAXtMIYaqDpw1lJZxVFpuL6qODBNn3A1pLQKH83OqR3SWw/Qru77IPZZZVVdE9PVcaybShFBGyGrGtemroe2Ya5CGAVNarRnseZR5137Wj04VtWOy8rVdR1LoGzGLHQ1dAxU7c7E05waaZLRkHjnKZYHis3xCGUtf/hf/QFnX/wa965/zt1bt3j4+VWmV76BlwxaxynUzxMWywWlhTRzrBcLHtzf5+aNu1Q/eZfD/QM8Y8jyAucUhXH4nma1LhmPQ3yt8bRlXZRoVc+TAiLlUzpTu10taGcpixJiRTRN0JWPjjReHIO2KE8TDeoPLa+PllRFha2qdsFIPClNNC/WYeVz9bYbG+xQTU1P5ena/ajF/agoneWzgzU2GeJSw9EiYxr5bG0GKN/HKBhNE1zlwFh0kIAfUhzMWWYVma0h56KowflG6KPxKYuK8VBRGE3oa4ynOVplbEQ+vucxin2SUBFQEVIRxjGHyxKlNONQMd2e4DyP0gUs04pkEKHTkkQZogAGSUBpDZmBvQcrtiYj1nmONWA9xarSBM4SK0MY1v3wPI/SOcIoQimH72uWy4zAU6wXK1xZYEqLryxHq7wt2dOrV69evXr1+t3Qb1Xz8SRodhJwlH27QOz4sV9WM+2LoGX32l+mk67/dznnk4CmwI4v0pPA5ElOxC+Kaz3p926dye72LrA7qY1dUNMFSV/96lf57ne/y3A4JMuytq6d1pqiKFrAJPBHauENBoP2Z4E13esI5BsMBi3YkdcFrEENCQUKCaSU63dr9YkLUuIxxc2olGI6nRLH8WNu1aIo6gLo0ylFUXDv3r1fmXv5fXd3l/39/Tb6dbVatf1ZrVZ88sknbU1KOe9wOGQ+n7O9vf0YKBRI6/t+G/8q
9RPzPG/nXF6X2NX5fM5gMGAymXB0dNSOjcSdRlFEnuet01N+F2deWZYtFJPxF0egRM865x6rYwi0UbhlWeKca8dJ2i4uyO58SDSvgEbpk7hepV/ilhaI2XUUSp+Ou2fFgdmNsRVQ2YW16/W67ae1lsVi0bZzMBiQ5znr9bqFjNJHWbPdiFYBleL8zLKs/S6wUvouULSHj7169erVq1evXr1OUpGlKKXZuXyZm599Sjqbc/bMKcLxNnu3PufoYIn2I8bTCVESURrLYu8hPobBIKa0jtX921TLGWEyIZqMCcYjwtGID/7yz7n1yT2wlulzF9h+8TlufH6TgwdHPP+1rxIox82rH3Lu4lk2NqfcvXWfIl2yfWqLzFh8rZlOYT5bEWtIwoDr6zWlhaEPawP7peXjXHMmMmz4dc26+cOM7XNT8tkes/c+YvPM62yd32b/xhHG1HUerTWgagejAqzt/HtZ2KRzHfjYAB1TQ0Hn6rqKiGNRLHsIqqzdgFpJPGsNMhWgXe1e85q4UQvNzzV8c058hPXrGvAVGOeotILFQw5W+2hPY0rDcDIkigPCSUBYGaq0xGS1E7IyhgqNsbZ2jKHIHNzLLHcKh6AWhcNaKACNZT2f4ynQ1jHwNdbVNRcV4GnHQIHfgFEtNFVJrch6QFTX0dlepamp2QyRco/gpVVgcFTU0LFwdX1HqCNWQ6Xw5boNzE2CAOV5ODVhceMTPvrFL3nl5SscHMxRQcDg7Fd56vx3KH/4P7N36zrb195kcP55/MEErTTVOuXux7/k43ff5HCRkhWO5aogiSNmy4yD+ZzKOAKlCdGUVqOCmFFo+aNvvcDePGW2yFlmiwYEahSuqZPp6hqdOHwUgyhgMTvAJSPijSFUa6q8wleQr9dEg4BgFJIvc3QYoI3D82tQ66jLNzrbPCNo4KsSOy40g18Dx7qeab1NRXUcsOfVzsZMBXyyrHiQrjG2rqX4cJaCdZwbbDPw6yhWL/JRSuOMJfY1TIaY1AAl86xgMgzwnSP0HKvK4BeW9dEScJwahBz5HtrTuKpic+wTBh6Bs/gqIJwOuL23pqwsk8AjiXxMllEqn8O0oFwXqCgiCkMmLgdnKSvwopD9wzrl6mi2ZGvk4w8iFirBZgVDrwLP52hVkcQ+cRziBRHpfMXGyMMan3Jdkq6XeL5GoQgixWxeURYlcRL+Vn9Pe/Xq1atXr17/sPRbwceuG/BJEFHcTAIjRAI5vgz+ib6oTuEX7Xt8/y+CgCfpy671Ze3/dcDkl7XzScd2x1/gTRcAH4ePJ4Fcz/M4f/48v/d7v9fCMQF94lIEHnPVSZxmVVUsFgt8329BnEAicZl1AZi1tq0xGAQBQAuPJA40y7I2+jIMw7ae3+HhYVs3UtaUOPykNqO40eBR/KtEgMqaffDgQdsXgXISwTkcDrl37x6LxYK9vb0WiMp5bty4weXLl5lMJpRl2dYZTNOUo6MjhsPhY5Glcp2joyOiKGrjSsMwpKoqoigiiqIW6gog7AK/7njKfMkciCR6VgBZt96jQMnVatXWpBRwJo5JcQNKBKrU08yy7LG4UnEZShSrSGJyBYpKO8UtK/0ry5LFYvEYjJTYUll78poASIGbAj3lNXEfigtVxjvPc46Ojtp4XmmD1LyUcfM8j/F4zGKxaF2mArblvSTgXIB2d/1L+8VZ26vXf259Prdc+R//9LFt1/+n/+Y/U2t69erVq1evXsloiFE+195/H+0cTz3/PPl8xv7nnxOPhvh+XP87Hk2eFszu32Vzd4thMuXu3T3iKISiYDAYMj61TbS5idGam+++Rbl/n53NmKe+8x3CyYjr772Db+Hl115hsX+f+ewhV56+RBjF7N2+zTAJGW+fxyifuMzAGGZy3+dr7qRrEk/jO8esslQWPE9xLTM8H2tGvkZXFevZkpmv8CIPla7Y++gW4zObHNw8xFiDoqmP51wL+Gr8BR0bXuOMpHE61t8kNrQ2P+p6W3u0gKD6jJ5SGEPtjqSbqOoaPFm3Q3WObI2VAkVReK52+lk8wkGIqyryssK3sFqsKXNDMgoZbU7ZuXIZszgim83IFiksc/KqpOk1xikOC8etwnHkavSqcGhxJzqocBjq+52NyEN5sMwqHDUMjFQNKY18kPSRubF22aFaZ6Fyj2pm2s642iZu1joolMPo2ulYAKVzlM2+gTgddV2/sHVINm7RyTDgYDbDOov/V/8LaZ7y2u99n/d+8T7L+RKsRYcDzlx8irufX+fDN3/A9p1PwY+xTpMt59z45APu3b5BnlfMVhl5URAri7K2cTKCH2jCQUy5toQovvnKWf7Jf//f8elbH7O3/+fcny1Ji5LCgodHpSwlFtOAbGth5/RpuHaf1XyNHwRgDSYtsGXBZDIgiCPKdUWxWlOltXvVVjXV1kKhpf/iKm2co9gaRjpjUaput+d5qFARbgwplznOGpz2uHtU8OnaUuiCQGuc1dhYkxYVD+8fMj8KmG6OGE+H+Nria4fTHipQTDZH3L25x0aoGMaavIAHi5RB6DOMFOk6Z2sjAlcyCDTJJCL26jU/DjXOQBAoSqNQlWWqKzzlcW9eMogrclu/b6yniKuSYaDxtIcpQPs+N/cWGM9HhyFbI59xrJlZH7ss2NAVXuCxLixJ5GMrS5qVqMyQRB7rrMKywjkIvPoZhPYC5vMCT2vCccza9DUfe/Xq1atXr98l/dbOx+OxoCe5HuV3iYQUEPWkyNEv0q/jMPoiMHhSlOtJsacn6SQn4xed78uu9evoiyJaTwK03f27x3RBsNaa3d1d/tE/+kdsbGwAtVNQoKH8LjUMxb24XC5/xcEo9fwkwlMgnLj8BDICDIfD1k05GAw4OjpqHXECtcTl1q2BGMdxWzvSWst6vW5BpFKKo6MjxuNxCzMlArYL2Var1a+MndRX7MKzqqrY3d3F9/0WUE0mE86cOcPdu3cpy5Ld3V2UUoxGI9I05fz584+BvzRN27FbrVZtJKu47GRfqbcYx3EbH7tcLlsoWJYl8/mc4XAIPHK1St1GAXBdWAw1YJtMJm0cqsA9iReVsZZYWFkb8h611raxuscdh77vM5/PAVqXrDgdi6JoHYZVVbUuS4F24k4Ut6c4IGXcBXoKaJWalMYYptNpCyWlHbIOJQJX5jYIghYqSlSvuD6VUi34LYrisdqjAnPlHDLm8l4Q928XMPfq1atXr169evXq1VVpHfduXacCrjz9PMujQ2xeEAxHWOuIIw+nYLmYUyyOOH3xMvFkwI1PPkUrDWVGNIiZnjqNP9ngwa2bpAeH2NWMM08/zfTKczx4eMRnP3uLZ159mc1Jwq233iHy4MpXXmQ1X7K/t8/u08+A77NMc2KtKcuQbJGyuZmwmAVUVnFmEHCQGm65AnTtgDMWHlrHh6lh26/YDOvahqvZikEcoLRl/ukNJm+8SLIRstjLULqpuWhd63AU8ucA1dRlfLS9BhViblRK4WtFJR92lphWNNYZxLNoGw7nKYV2Ds8pPOXwXQMCFWg0ylkkSrPFl6qJI+24LysUm+d2SA9n7D+
cUzmL5xyVNVhnUZf/gD/6P/2P3P+P/4YbP/y3DKdDuH/IulqCc1RlRW4sD3PDXmUpWqdmc2/uFHV5QMfhfEnkKc5sDPh8f4W1jiUwbxySXgMFQ8BXDo/ananFHakeZcvWY6wax2gNHA01aKx07XS0tnY91i5QiJ0iUuDr2jnqnKvhpXuUmOpQ7Jw6xWg0ZV0YPr5+g5df/Rrj3ac5dSXn3udXef7gFmEyR1ESDxMODx+SZWu8ICIvCvYf3OPh3iF7D2fcvLeHFw4w1sOnwtOKwFN4KuDp7RFG1ck5l7YC/tk//wPc4BLp+mPK1RpPaYqqonR1fC2qee7UuB5Dp9keDLFFiTWaxYM5qnIoYwgDMKWhSAu00ig8rC3riF4cWjfLsHHnSqRtTakdunkuYitTuyFRtUtXOfzQB6tQ2lFZx8pYPjhMWdp65rVqPoxvDL72WKQVOrcsVzkbRwu2t4d4owRTFmAdfhRy6alTrNKC1VGKMZZzO0MGAeSlIQSqoiBMIobDmCJdo4IailaVZRBC5ftkRcUk8rGVYW0t69Jg0CRRgNaQ+D4jbeq433WJjnwOlhlFmDAdRpyZhGTLlHmmWM2XDEKDDkNMYRkOQvKqJA4iqixjmIRkee3exRkCXddBHQ4CHsxLjLEMhyGEIZ/cOvx7+bvaq1evXr169fqHod8YPnbrC8Ijx103NrELdrpQUvY7qU5hV8dh2pNgYtfh9yQ9yTl5Un3E4/s86XpPgovdbV/mtPx1IKSAkG4dxO55Tqpnefw64vCT8VdKMRwO+d73vse5c+fwfb8FU0EQUJZlC8lkbiWmsjsOAtC6dfnE6eicY2dnh6Ojo7bdVVWxWq1acBOGYQvRBGJK+7rxst0IUAFbSZK0LkLP89rzaK3baxRFQZIkRFHEarViuVyeGJe5vb1NmqakacrBwQGj0QjnHOfPn+f8+fMopbh37x43b95sQebHH3/Ms88+2zo2V6sVo9EIgOVy2YKxxWLRQixx0+V53kakiqNzOByyXC7b8e3CRK11218BddLP7tgJ9JXoU3EWTiaTFlRGUfRYxGr3QwIC4wQiy9rpxrfKGui6bgVYyvyIusfJekySpAXPWZa1IFQp9VidyOMORKghoIDcrmO26x4ViNh1045Go9YBKeN9cHDQwtQucPR9n8Fg0EbrymsCT6uqauGpOG179erVq1evXr169erq/v0HeKFPHETM7t/CVz6+71HkGePpBGNhNVsQ+rD91DMcHi24d+sGqjIESYS1MDy1yyKrmN14D70+wg9jtl9+nXA84qN33qMqCr76jVexSvPTf/cXnN6YsPPSC9y/eYNsmbJz4SKL2QJjSuLpNqvZnKM7N9g5d54sj/GDQ04NfPaXJZk1jLQmxZFVtoVY7+eGpzJV1350BmsdRWUZJAF2XbD/6T2mZzdZH92jyngMOtaMR7WmRXFCaq1x2AYi0b6uVBOLqhVYW7solcJZi1a6dgSiQCk0Fk9D4BSVc3WUKU3tRwfO2XYulKrP5xpXn7PyvKI+pjKOZVaRF5bKqfo6xuKUh28M6+s3+en/9d9z+O6P2Hj+G+xsVyyP/prCLsiNIzOWRel4aBxzW4M+jW3a01wL8FwdM3tpe8qD5ZqqsqAgcjBFkeOoFOQ4MtfEqIozlMagpxq3nvQNh218iwIZVTMOGoXvHLECH4WnpM+NFbMmvjUklUcKzbk9a9l7sCQeBGxuDhhtbLFeLDh15hJv/+iH/Pw//BkXn36aysCDvRkHsyVaHxGHPp7SzA6PePDgAQ+P5hjtY7wAl6V4oSYrDHllmQSWMxshRkUk1ZrvPH2Ke1evceP//Rcc7M+w6xWmMtimfmjDDFEOfKdIlGJ3Y0po52RFTr7yKKzFwxF4Dg+f1eEKP/TQvofJK2zl8HTtZtS6A3Kb8a4hY73NWduMD2hPN7DaYa0CL6BYpCjPYtHsrSquresPUysHkefYHsVsDCMCT9X1E40lSUK0U8wPVui8wk8GBJ4iDjVGB4Qjj2VaoVyG9jRHq4Iw8EhCD6Xre+OiKZMTao+qtGS5wfNCFplhEPh4w4ijWUGWlYSBzzAOqMqSJNRMPEMyCDFU+HHAepWzMR2xvX2BePWA0KvItYcpDNsjRV55rNcFUeCRlyUoD1OVbG6OWa5zirwkjiOwUCyXxLHHMi2osoKNrSEVmnv7cza3xr/dH9RevXr16tWr1z8o/dbORwGIXTDYBYECF4+DMXFACRSR474I1j0JLj4pvvXvEnl6/Lq/rhPzSW35+zjXSTrpXDLG8vNJcLMLJ7sQKAgCXnvtNZ555pm2rqPv+wyHw9YhB4/iMLvAeDAYtG5AATxlWbbnldp54lATN2XX9SrxmcaYFtKNx+PWMbharaiqiuFw2ILTxWLBaDRqHW/D4bAFjFJbUOCTxHJ2nXZaa5566inG4zHXr1/HWstkMiHPcw4ODsjzvHVGrlYr7t+/z3e+8x1efPFFbt26xZ07dzg4OOC73/0ueZ5z9+5dhsMhp06dauHharVq+yrRpdKWMAzbCFWBi+PxuK1bKTBMa83W1hZZlrXt3traakGZjPlqtWqjbKWeYVVVzOfz9loydjI+ZVmyWq3aOdJaE8fxY+cpioI4jh+LIpXjuwBcxl8AqKwXuabMtzg4ZQ12I32n02kbjStrKAzDtiZj95xKKQ4PD9sxkjEVwFiWJcPhsIXLEtMq/RBXaZIkrbNUziXXkWjc5XIJ0K4hcUnKdcIw7CNXe/Xq1atXr169ej1Rp89fwFY5D2/dZbCxgSkKlouUwcYG6Solz3K2d7YJQsW1a5+xtjAdhIy8ooZWQci9O3fQpmAyjvHGuwSbu+TG8ukv3mQ8HDI8tcXdj65ytPeQ8xfPc+bSWe59/jmh7zM6dYayKgijhMnOKe7dvI6nAy5/7TWOHu6z/2CPjUlCGPoMA8duEnEvzUhN7YQrABwclfBhDqcyy+nIpywNCoVNAqLIZ33/Icn4PNEoxBpLnlt0k4Xaug2dRSkNymHtI6gjoarKNTAMRWMsq+NTFXW0qgLrnGSyonlUqw8HnlZ4roFyYnNszJUaVTv7mkcE2tXOvkfeSoezjvu3H+IpsEqhNWhdOyjTyuCO3uHd/9vbWKVZMsS7HHH7szus04KqsqRVxWEFDw1k1EDQc4+CZxX1A5jEUzy9PWZZlBwus6bupCLCESjFoHGFWsCg6phWVzsYHR1nXiN56qGbkNfAgW5AYw0ZHZ6q3ag1N2tgm6qvUc+Ba2Jd5dp1Zc3QUwwmG2TrQ+LRGSoV8+G7b7F9+ix3D1L2fvRLXjxcUhYVZWVZLpYMEp+80qyWS45mK1arNWlWEGzt8vw3/phP/urfEuol67wkUoZXLu3we7/3HT74xTuc3wg55cO9d98nn60oFgUhYD1Vg1htMc4RGE1kYRz6RL4mUJAfrfCVxlYVylm0rt2yWIeroDIVWlc1XPSEcKtHjscGajtj0b6MbU1qnbV4uh5hYbZe6FGkOX5Qr5e0sHx8mLN0HrGn8H2FVookCfEij9Eoxq0LPGwNzX0f6z
yWh2uSEgoFtoqorGORlgxDn3WuyDNDZSw+ljR3+JFHUTq0cmxNYlbLHE0Nv/OVQTuP3ORUzpFVBRubU7Qf4MoCP/QYJorBIORgtsa4erw8ZYldhdm/wTwvcdTpU0mg6jVYmRr+hwqlPDAVyTDkYL7CVYo4CvA8Tb5aMRiEFKUhS3POnh6SOc3hbE0wGjGM+g/t9urVq1evXr9L+o3hYxcunvQz8CtAsgvlxKEmrquTzi3n+G30d3EwnnTdJ4HQL9vnpN//PvpzkkOz6yAVd+AXHXfc8am15plnnuGVV15pjxW3WRfKyLFxHLcuOHjkwhPoJ6473/db11k3UlNA2NbWVgt14jhuXXZRFLVwSACYxFuKW0+iQrtuN6lPORqNWlgq9R/FwblardpagM45Xn/9de7evcv+/j47Ozv8y3/5L4miiPl8zr/+1/+avb29ts+DwYCLFy+yXC5b8HXhwgUGgwEHBwcMh0P+5E/+hNdee413332Xa9eute0W8Co/x3HcgkiA0WhElmXM5/M2AlSAINBGlkrc53FQGIZhO08ybhJ92/1QQJqmj9U5jOOYoiiYz+ctKBZIKK5UAXty7jzPSZKknYNujK3Eooobs3suaWt3/LvrtgtjZc6VUkyn03abwE9ZC/I3RGpISpuGw2E7XtJ3ceVKm4G2/mVRFAwGg8faK3BWjpHf0zRt3ydSX1Kco90PUfTq1atXr169evXqJZo93OfG9Zs8ffk8+fyI+VHK6NRp8rIiiXy2tqZUlePOZzcITUXiaxIs8WBAuLXL1Y8/YWMQMZ0kxMMx/niLw6Mj8uWC8xfOEA8m3LlxHd9lvP69b5Nby93btxiMJ3XiR2VIwogSn6PPP2cwHhGPNrj9ySfEg4gzly5z+7MbbI4jTmdDHt5fUliHQVE2IE+7Gvp9mFVcDGAcaAYaKmPIiorxyEflJQc39hnvjEiPMpQG11YmqCNSdQNubOce2T6qxtiJZq33V9A4HZv7XFXXcKwhkMM0sK2uVdhEr3oKbZt9qWNXbQMXa5Sk0M7i2nt0h3F1bKl1jrI02Ob+Jgg9BklEEECxzEnTDGMUaMX9t/6cez93GFtQWUVpLalTHFWOpa3vn7XrgEFXR5xOAnj61JjUVtw5WKM8zdnTO9zf2wOnayDbjJND4RwYVbevK8GZqjN+tYlR+lnvJCG1mho+2seOryHlo+c09QuB0hTOYXEMRhOWszm5WXPmq88zW6X86Ec/x+Lx8P4hYeDz7rsfgdJsbG5w/ux50sV9TFnhAePJmBv3jlBewNe//8+IN77Cw+jfExnYGkQ8tT3mv/8//B/ZfeG7uLv/Z6xaUs0PKdOcqjCsjeFGbrmTVaybeoW+cjjl0NoS4PCs5dmLO6wfPGTiDNYqfJlfJ2BZ4RqXp9K0bk9N47ClgdrO4el63zQ3BFoTBV7zXAW0s/Uq8h55TZX2KAq4tyi5uizBC4k8jbGGaBCRFRUDbYmnA9wgxHcFWWGZLzKSJKS0jrHWZHmOWlhWuQVrWVVLBqEmCTTRMKYwhmVuGQxjVJGxNU3I1hWhVqyMZW01pzZGqHzNwSxnlASMRhHj6RBnFVWZsrHpY72QPC/QgY9vHSryWaeGMPS5vb8iKxUKwyQCpXwcCt8HYzTWBYTGMh5H5JVBW4fSdXTt8mhOEtSQd7HKGY1iSqPYP1pigpjpYMjIe/Q8qVevXr169er1X75+K+ej6EmxpSfFpnZhVhAEj7nlusf8OpDuSXGs8OVxridd6zike1J/TtrnpN+/zLn5ZXrSmHTh3kl9EvAi5+jG0yql2Nzc5I033mA6nT7mUpRITaAFQwI3xR0p4EcgkkRRCkyS17rRoAKpBJ5lWdaCOWttC8K6be5GaSZJ0m6X60uNRHGfda8vEa2+77cgSyI0BUjGccz29jYvvvhi62z8V//qXz3mutNaM5vNsNZy7do1qqritddeY29vjzRNuX//Pu+//z7f+MY3eOGFF7hx40YLCsWFKO47WfMC9CRyNk3Tdg6kvqXAU3EM5nlOFEUtnOu6UKVmpLRZXILSDomlXS6X7fmhdqhKFKxEi3brTEr7xQkr60L6YIzh4cOH7Rrori+lFOPxuIWu4sSUuopdR2sYhoxGoxYyTqfTFiBCDfsEYso2Wfsyp9I2AbsCW2UNihNSgKqsA4nuraqqHf/u3zSZB4HvsrYFqp/0969Xr169evXq1atXL4DZ/gGXzp0i3XuAHwQMxiOKMmM8mjIYJiznc5YHh4QKhjFoDMnGFoXVfPzu+xhjcYFmsHkGojH79+9TlRkXn36Ksix58Nk1pqMBo/Pn2N97yGo24+LLX2OxPMKsc8YbG1RFRpml7Jw7Q1YY9u7eZLpzmsFoyM2PP8TzNBtbQz6+N8fXUOIoLGgLWjlM4wScG8dbmeF0aLiUeHjWkq9y4jAgigOy1Zp8oQgin7IsmhTVGug0eOfRPTINFBPjmZAvFM6pJi5V199dDc+cclgl2x+5/1Tj9JMoTqn7aKndYepXXIL1f3UD9SyOwjmcgsqAdRY0KOuRbG1w4blLBBg+/PE7HO4vwUKFoShNDVKB0sGyciycI7c15ZNbdO0UvoINX/HN58+wKC0ffPYAz6s/WDmfH+IpMO19cNNg59CewkMhJtFfebbBo+cASkt/6h5a2jTRFkrCo5qOKOo428ZJWrtT65qQujEGZvmKr73yTe7N9zj/8h9w95dv8/D+EUfzRf2B0Y0RpbUMx0Pi4YDVckZRWLB1NG9VFsSDmEGl+Oo3vsc7P/mccxsBSbBDmd7gn/z+t3nxn/4P3PrrH+At9tEKDlcFq7xiLze8vzS8M88pmnnyUGjlAYZYKTY9hXKG81tTsg/u4zdRqvW6qIFyHdjarCGonZBNH3Wg8SKNLVwNe41t65KOogCtwFRWSpOidB276gWKulhkDa/TyvLZvGCBJlB1rVTnefjOsZUExBpW8wXDyQC8CMoSXxuWWUUURxRFRRyFZGlGqD2y0jKIfMYDTRh6ZJmjMJbNgQ+2YHMa44yhKiuKymCUh/Y0sXI8XOUMQkXU1JP1qhxlFaNJRLBzioe37xNoRaDATwIeHq5BeTxYFhCPCM2aSQzGKYp1RTzwCUKP0kBVlOBr0rTECzx8ZYniGrCGvkZ7kC4zxomHCjzm65Jl7ji7MSAOIPD+Xh5B9urVq1evXr3+gei3jl09DuGeFIHajdkUaa1bt1s3FvQkiPckJ2J3vy9q5/F2nKQunDtpe/f3bv9Pev2kaxzvx5eB1m4Nva7jsNuv423onuskJ6QoCAK+9rWvMRwO2/qK3Zp7EpO6sbHROuvEASbxmOIMS5LkV6CT1I4UWNN1q6Vp2oIzAaRxHLfwSFxlEscqoEdAnMAjqWsozjdpXzdKtOu6lHMNh0O01ty4cYM4jjl//jyj0YjpdMpf/MVfsF6vGQ6HAG39x9u3b5OmKW+++SbPPfcc8/mcw8NDzp8/TxRF/M3f/A1vvPEGr7zyCi+88AI/+9nPWmgVhmHr7hQnnUArAa4C9obDYQvxxH0o2zzPa2GoAFZjD
FEUtfPRBY3d2FyZY8/z2vMKzI2iqK25KO9DGW+Zp24NR4G2SimCIGihobRBXJXj8RjP81gul2itW4ehfOBA6lxKvUTpVxcMdoF11w0q35MkYWtr6zFA3l0nRVG0saxdh2eapo+BewG30m+5hvTRGNOOt4yTQFugXbu9evXq1atXr169enWVeBqzXDKZjCi0j7EwThKGccz8cE56NGMQx3jlkng0xRuPOXx4yPLgiK1hxMbpHUanTmNQfPr+B0RByO5TFzl4sM/B/TucuXCB4XDIjQ/eBuWzeeFpbn1+E61hujVlfngAVcmF559jNV+QL+dMt3YospLPP7jKZGeDyamQzDni4QGvnNli/cE9VkdrnK3dj0bVQMoqxfUcrq4Nm75iI9QY68jTkulWgq0MxWJNGAWUfoVytYPRKTBOavXV0at1/b4aLNZpqErSLxt/omoAWh0d6mgpZeNjtGh0cx/s2lqSSoO2uokQpbH6OZTTtUNSqdbh1gSNUjlIm2tbazFOYXJLZUoe3Jtz+rkNLr54hs/fu858vmY4ilksClZ5KYGtFBYyp1hah7FSd7KO7PSVYxIovnblFLnv894nd+tI1ib21VRVzbG0whrXuhJbgOlcHSGqmvqPdGBiyyldB8Y+ehZRx9bWk+Aap2grB8q69nxOAKRwYODm7Vucvfy/46sX/oD5/UMCFaOVaz7o67HOKkZjRVWWzI5mbG9MccY192I5VeXAVJw7f4GLV17gvT//j3zj5SsMQkeQHZEEPvnBklt/8/8hX61Yr3LmacVBAe8uDe/OC0rrmCQhaQ6xB3lRMQDO+R6nk4CdUxuYYo2fr9Gxh1INZnzEs8GBtQ7P12hV171EgQ4VWntYZ+qx1fJMRWGNRXkK5dUAWanakYrnMMbWYFt7lFazt674cFVQ+DHWOZLAR1lD5HmMhxEbGwO0dlSVpchLbGUpnSX0NOU6J0p8SlsCitW6YGPgkwQQRz6FsVRVyXQYolzJaBKTZgbKeu0OEx9KAxZmBwtiXxHEMYu0YDz0qMqCMjeswxB3ew9XGnAVQRSQrSumwyGzLKMMhqz2l4zDihIPTEkc+tiiIopDnLVEg4AwDPCCAOsqdByzmK/xMYSBpsoLpuMY61kyp1mmGTubCaNYUa3XFF/wjKpXr169evXq9V+efmP4eNxF13X/nOQWPL5Nfg/DsAUf8np3/+7PT3Ix/jquoycB0y875iQdj0DtnvuLIltPGovuWAmgERgrIOS4xHUoLrfj7TkJggqUu3DhAt/97ndbZ1y3tqPEeQpckShUcUEKfAnDsIWGcq3lcsl4PG6dbeLsExDabVMURS2cUkoRx3Hbn/V63boeBYjJ+STSdDAYALSONKltKJGZ8rtcUwDf5uYmDx8+bNt67tw5ptMpzjnefPPNtu3imEvTlOvXr3Pv3r3WOfjgwQOWyyVf+9rX2N3d5ZNPPuGHP/whzzzzDM8++ywffPAB+/v7LYgdDAat21FqTEokK9DWvJTYz+54CMAVx2MYhgwGgxYIpmnaQjKZJ601o9GI2Wz2GOgV+ClzK2C7qiqSJGkBXp7nFEXxWG3NyWTyWBRpURSs12uSJCHLsnZ9DAYDnHPMZjPCMGzrVWqtWwfjarViNBqRJMljoE8idWVNjUYj1ut1634NgqBtrwBMcVPLWk7TtHVUdsdR3LVRFLVrJ89zqqpqx0PGUeZitVpRlmU7Bt33lcytvBd69erVq1evXr169foVmZydc2cpK4MiwAeCQcKdm7fwTcl4OMCmc0bnzqHHG+zdu0eRrtgaxQTjKd5ohwcHc9L9B+zuniYYj3l46xYuyzh95gxllnHj5mdMd05jLdz9/GN2z53B+jG3P7/N+csXuPT8Cxzdvc3i4R6TM2expmA93+P0xXOo4RBTlRCF3Esr5vMFi6zEV3W9R4XCawCico4Sx3trx4VQMww9tDUUeUmeB0x2xxzdnZOvS7TS+B5kVVVHfrauPdW4IR9hMKdqsKgau52jAWmq3lsiRIHH7m1QDmVpnJF1vcTaufjIXVnX52tqHzb90OpRrUfRUWXJA49Ia0pnqSzYymIOZ/zi3/8Hrv9ig2I+Y3xqm5d/7yXe/uFbHKUZtoFahVOsceQNBNTUsDNQis3I46WzY24dzLl9va7Pp6nBomocl21tSi0OUVBe812gYBMxSxOhWo9dM5aPfchZjqv7+GgcHi1LOV9rjWy4Wu3iFACqWGeG6zfv8rV/8id88OP/wCiEeBCTrDMGgwTta5SzBIFHFPhQlfhUVFWJ9gJsnjMdxTz7+rewa8fFkeJb/+3/nmy1ZvH5TSK75v5/+r9z+NlbLFcFB4uC/dzywdzwwSyjsJaNOMD3NUWlKI0hdI5dT7Phe4wSj6eePc8H797lsg++77X90DiEF9ZrpxNS24BGU1gsZRNNC87KMyLwfB/nTFuLtL6nVVilm7jWeq2tKvhkWbLQPoWlbqsxDDzFojAczFI2pyFhHBM5j73ljMIYkiggzUq2hyFZZbFWUZmSceyhnMH3E0xVkgSa6fagTgkKYxaLkjTNiIKAMPJZpjlb0wF7DxcoAjJj0bbA15CVlso4rB+yOFiztZVgyxLf05S5I4w1y6qiJMDOF4y9Eh9FnmZMJgkbZ88wv79PUZYoX6EDH+f7aE+TTKYcHaYEDWg1xqB96lqTuWNVrdnZGjMIFfkqxdMwX/exq7169erVq9fvkvRvc7A8iO867J4E9rpwUn52zuF53mM1136TNnxZ+47r7xp/+kVOy+7Xl+lJ+3RhZBAExHHMaDRiOBySJEnrgHtSH8QZ1o2HlPnozo18ybbhcMgbb7zRApOus03gj9Swk5p6aZqSZVlb9+/06dOcOnWqdU6maUpRFC3AEmAmbjxxOOZ53gIdAT4Ch7p19LoOOdFyuWwdgwIZ5bwCr4fDYQvvxNkm142iiNFohLWWO3futHGgly5dYmNjg4ODA65evdrCLXHRlWXJhx9+yL1790jTlDt37rBarbh06RLPPPMMly9fJo5jfvCDH3Dt2jXiOOall14iDMN2PObzeQuqBNKKc8/3fcbjMRsbG+2cSI1Hay1HR0etWxHqG26pPyhRsgLgxEnc7YNEq8ZxzHQ6ZWtri8Fg0DpHwzBkMpng+z5FUTAcDtsvGQOBcXJ9+bk7RwJWsyx7bG2K69QY08JrcWuKu1Leb0VRsFwuW8g5n8/bNRnHces4lPdKWZas1+v2eIGWR0dHpGnarh0B2L7vs1qt2vb5vt+uGQGwUH8oIooiNjc32djYYDKZtLB0Mpm0oFdgvvShV69evXr16tWrV6+uXnrjDZwfUJSGOPLwg4A71z4h9H2mkxGqSDn1zLPo0Zj79x5QrZZsjBPGZ89i4gEP7txi/eAuF566TLQx5s5HHzAIPK589WWMqVivFmyevYBRPkEY8tTTFynWBcuHD7nywkv4wy3e/uHfcPfD99m+eAkvSFgdzhlOd0k2NsFa9h4c8NO3PuPgKGX/wZzK1DUfta6jKRuzV40NNTw0jvfWFftFfa+AhnSRYaOEwUaE
chbnLForPNW4EztuRuXkXri5T3Yd+KUcnpLrOjyl8IBAK3ytGhjpPapj2NAlX2u8Jn7Vb+CeB3jS/gZESVCpvC4QcGkdD42tnZNN26x1WFNRrFKO7t2jKEvW65LPPtrn8DADpXGAcZBbR2Ydxqn2GqFWnB36/G//q5cJN8fcXeT4yuEpVwNIRx0j2hzTukAbmKgaaNvCNFU7IB+HhTUYcxxLcWrdi/U41fUn65hbX9WgTbum/w3MbIEmMi2ORW648/ld1g/v8vRXXuXcy/+IcTLEOfADj9Eo4dTWhLOnT2PKAm1zxuOEjemY0xtjLp3Z5dR0g5df+hru4CHPXtxk56XfY71f4a0OsUczPv6Lf8ve7X3uHq75aFbw5kHOL/cWrMuKjdhndxhA82zBM4bTWrHlKbQyXDg/pvIGmIMZkQblbPN8Sua7++HRel1Ya+u4XEcLbpus1XrdNaDWOVO7ThVoz8c4h23djwq0xno+e2nJ1VVO2USuYiyJp1DOEtUXrJ/HVI7lIqVa54yi2gXtK4ezFcpXpEXJVhww0vV63ztacbiqmnvOitIp7j5I0Q7iMEShePhwgakUh0cZhdEcrguKyjKIPQaDgDAKSY3P/uGayhryvKhhv6cIAo/ZypKXiior2IoVwyQAaxgEProoObjzgGKdoQMP3/fA97E4rKdZLkvCJpHK2RrEWqdYrAoWq5JhFDEIA0xuyZc59+c5D5sPePfq1atXr169fjf0W8WuHnfaiY47+U6KLO1CJqm31q3xdtK1jsPM7nV+Xcj4dwWQT9KT+vek+NWTHIjiQBOn1fHzixvxuCSuVhxuQBsLKc7F4w5M2c/zPF566SUuXLjQur4EEHUjPbvgUkCegLGqqpjNZhwdHbVxldJWaYNEpkqtPHGkCZwSkClQSdxx4oLUWrNardpaj845ptNpC/Hk3BL3KbUKZSyqqmI8HrNarVpwFoZhWzvx4OCAIAiYTCZcvHgR3/f54IMPePDgQQtpZcyUUiwWCwC2trbY3NzkhRde4MqVK22/n332Wf7yL/+Sv/qrv+LKlSs888wzfPrpp8znc4DWxSlzsV6v27ES2Ouca+NR1+s10+mUxWLxGNAVkCfHAm2k6/HzjUYjRqNRCysFuAlglH1lvLtgT7ZLFK/s340dle++7xNFEWVZtm7ILshTqo7xjaKohXbdmpECF8XV2AWYxpi2PXmet/UW5b0lzkiJWs2yjNFo1O4ra0XW/mw2a6GkrB15n0i8q/RD/ibJGAs8lfXWdQnn/U1Ur169evXq1atXrxN09d1rFPmSy09doUpT7n5Wg8eJV2KzlGR7l3lmWa8WBGXKxvYAf7zDMrfMHj5gOh2xc/krLI722bv+OTu7Z9m9combH31Cnq7YPn+W5WzOdFp/MG+2yBlOJ5waDrh342Pu337AlecvMjz1Irevf46H5vSVSwy3d/CHI9bzI9755btsj4Zc3MhwZkm5NgyUZqkspnEeVg3c0g4KBx/mhrNpRaRDtrFUpeHo7oLzL+5Srm+TzkvKyuJ5Nf5yTS6qbT4c2vj4amimm/qNzjVRq7aGao7GxkftjGwAEsqCA0MNPHQDMX3tCJymUpZQ1XUYxVEowFGDnB3VQEqJHL1TOIbasRl6KOOonMU6qKxBlVBUFreeMTucU1mHcQ5jHaWFjDq61TawKvLg6a0B//j7L7K3yPjp1ftN5Cx4aFxTi7Jug2rqVbqmqw75eLd2TZ1C9ej+XktQautmVPUY2k7yUSd1FDm+2dc14M21u9bXdaoeBwDbzJdzjnfe/5hf/Mc/49U3vk8VRWxNB9y5q1gsMqbTIVubU+JIc3Z3i61BjLUVORmXrjzDMvd4NvL5yje/xy/+9C+5eGED7cUsrv8MypL79/bY35/zIC346Cjj02XJw7zE4thKQk4PQjytWZcFvnGc1orTvkegFFEA8WiTq1fvMnZFPbsCVrXGKepoX13Pcu2ItQRaNw7Sehy0lgWosLqJ5FW069MphdIOT8AvCj9QlNZiN3e4+tlN9ipNiSPxHUngUVWOzUizEXsEgccizclzzWo2Z2MQME8LPA2e72G1hzKWs+MasoZakRYFo8AwGCQ4a6lsQLrK2Ix9Qr92bebrkq1RhKc188wyyw3K89mdhjil8LTizrxg/zDn3GbEJAlBgec5oshnVVSoIELlBRuRR7rOSAsIlSX263vdRBnyQOF7Cqc0rjIEScRqscZVBl8pPBx5ZcFaSqVY5pqdaUzkObJVSjpfkWvNYW45d/7038vf1V69evXq1avXPwz91vDxSXDtSfvJ7123kERJivPty46XY46DTPn570sngc7u9r/r8d3fxW0l3wU6ngRoT5LU6pP6dNIucZ8KxDru+pRrX7x4kW9961tkWdbCGamnJ7UXlVJsb2+3bkYBPgJkBPCEYch4PH4sqlPiPrtRuhLFmSQJeZ4zGo1I07R1zYkDT5xvUNd99H2/7afAqI2NDXzfb0GjrCWBTavVqgWqEqcpcZwSfbpcLnnw4AFRFHHu3Dm2trbwPI/333+fw8NDgMecft1Y2pdffpl/8S/+RTtvxhim0ymTyYRr167xox/9iO985zt8+9vf5sUXX+QnP/lJC9kkDrYb6ynjJA4+iQOVsek6PBeLBVrrNoq2CyODIGA6nWKtZblctrUdBdIK6F6v1yilmM1mbeyrjLNAwfl83rZZ5k/aFMcxURQ9dqysEamxKPGwXbApLlSBlAKOBVIrpUiSpD2/RJ4CLTiVKN7pdNqua3GTiutS1qj8fZH15JwjTdPWaS3vJVk7URTh+z5ZlrXvKWmnrG2ZJ3GryjqVWqK9evXq1atXr169eh1Xun/EU6+8yOHD++x9cp2Br9nYmbJxapNg4xSL+RGLe/sMfMXGzhYujLl16x4mzThz5QKDzW3u3bqJZ3KuvPgifjLmrR/+NcZUjLa2uHPjHlunNiiMIy8sW2d2GIw3SJt/07/6/e+SGcP+vftoHXDlG98iTGI8T/Hmj3/G5x98xPLgCF1ZTm3GPFxmlA6qtCBydZnFzDQAr4FiGsXKwLtpxbanGAcRIYZylVH4Wzzzj7f59D+8w2JWYCsH2lE19Qs1qgVcNVtsYBhNTqgCZcUjKTvJDxKL2UA3B1rpOhqzIYza1axJO4ePbmFgY2xr3X5tNmlzXgusLXySGZ7RmtORhzWO0tQuudKYutKkUzhlcdT3zxZHgSK3joraNRn6jlHg44chP3jzBjfuHmKsxUM8kXWbUOBqytVCUdc49vSjqo5tU5sSkm2b2wESQKs7A9a6SZvnGErVMK4Tw2qbvZ2SDfXgdKpOYqzlxp2H/PInf4tWitF0i8iueXp3yu39FctFymyxJPIDJpHPS6+8yt7DB4ynu5z9+h9y7ed/y1OvfQ8dDchv/IRw6zI3/uL/yd23/5b1qmRtCvZKzVuHOR/N1mTNQok8xUbkE3o+d9c5eVayqxW7WhM3jtAwiNkvNQ8+u83FocLXGl97KGtQzuIpjVK6hoZNnGpbJ9SClXGvaoejso/qi9bw2+J5Gqc0tjQ
431rOua0WjkroODgwPn4uz1eg6WWbfcarVy8NfG1+7D3bfffpvf+73f43d/93d5+PAhb775Jh9++KFbP2MMt2/fJk1TB4jt8fajVJVSDoJOJhN3PdooWnuNWMemdb7adbLXuY3NzfOcqqqca/Ly8tK5eHu9noPCVVWxWCwAnDt2vV47J6uN+rUxxBaUW3gKz6NN0zRFKYVSiuFwyHq9ZjabcXR0xO3bt9110O/33bHs+yvLMs7OzphMJoxGIweJbZ+oHZMFkZ7nvdADqZRy52znwhjDer1214sdb6dOnTp16tSpU6dO+/KUx9HDu5RV24Pmq4CyKJicnIKCpsi4/cqrhNNbFNmWbHaFrwS59GiMJFaK/uSA0ekd1rMZTbri6PYxXiAp9YQeKX5zQaCgERGN8RhMR5gywyAJhhOKokFv1xhTESQDlB9RViVGa4TRNNmGMOnRNAZd5jRVg6lK+uMxy8szstkl2kii0QjPl0hTkc5TPF+SrzKie1PCyMN4A2QY4YURui7YZhlF1vY9XqY5oZLkdY0whkS1sZhaSHTTtK49bXsMW+Dl7Rxs+zGrrSPyuftx9xHo9v9NC8uM6z9s0aO9K6CNYY5gURke1RXHZcPdomkhZBCQJAFJFHMyVWRpyWaTs3yaco1CBAHjQcLpRLLZ5myymrysabTA6F1crHVB7sYXKNvzKAhFG8GqpMCTsoWWpvXmiZ1VUuwyXG17ovIFurb3UHYYVbTuPW3A29HY5+fbQs/dBO4clG0npdxBSMkOgMrdHEvbxSh3/ZWmjS3ddVlCuy2m7XKUQu22kRht2h5PKVCeRKKJBxH+MGB4Z0JyPKGuFUUhWHz4GXrzOQJoMFQG0rxmljWcpzVPtxUXZetUvdKw0Ibc7EDiDjy2/Znt/HoGyh10lkLsgGn7XPtRVeG6M5UAX7Tdmh7P+y4HHkwDxWHPoxcphAQ/UPiBYpFWZIXmKq8YeALpwev3jgmEYVbVHB5NyLOaTVYileZ41COSAqVgtt7w2pu3UHFMMd8S+IrTRPOr/+HfQoeHFB9/SJNvqfOyhai+hKbBKEMjA5o8x3ghvTBikeb0Yp+mqgmCgMnAI90WJHHIoy8v2VZtb2fsSYY9n0Yocl3jNZpGQ9gLCQOFp2vCQHK9KRnFAY3wuL5aEUqNqBsqregnIU1VMx6H1I2i2GZEvmS1LcDzGQSS4eGIn3w0p1qteeP+mMtFxexs0faaehD4cP/OkPNVwfUs5c44JKsbqm3B7YOQxb/pX7CdOnXq1KlTp7/W+ivfMf+qmNWbUNJue/O1++5B+9g+XLzZ//gykHnTUfnfpv39WxfifqzrvvNwP351/5z24dTLoKoFhlVVUZalc8bZOMt98HhzPix43AevxhjXdXhwcECapoRh6ECNMYY4jhkMBqzXa+dytADGusss7CqKwoEh61p7/Pgxx8fHLzgU7Wusa89uu1qtyLKM4XDoAJ910NV1zXA4dFBHCOGiV/M8Z7PZuDEPBgMWi4WDtPY4gHMh3r59m8vLS5RSbDYbByEXiwW9Xo+Liws2mw2e5/Hs2TPn6Ds9PXWxnO+9957brwW0FjLarsHZbEZRFAyHQ8Iw/AvXrV3foii4uLhw29nITuveszC31+u56NvpdPpCz6CNTG2ahsvLS7d+gHOG2ut9NBpx+/ZtfvjDH/Irv/Ir/Nqv/Ro///M/z2//9m+jtXZrbIFgEAQuDtc68uz+bN9iGIauM9NGACul2G63zoVqt78JCBeLhYOXYRhydnZGv9931y7g5taes+/79Pt9139p4Z+dU/sems1mDkBuNhuGwyF5nrt1GQ6HLJdLF3Fq3bFSSp48eeLAvO0ArarK9VtaIGpBs50nC3yta9HzPNcZaV2a1oFp505K6d4nk8nEnYuFrJ06derUqVOnTp067Wt8NKHSbUxpUTQoryQajqgAWeQcPniF2vhkyzkmWyKUR4mhqRt64z7JcERdw+LyGWa7YnI4pRaKHA9kTRQExCcnFAXU6YrDk/v4foBQkmBwQLa8Aq2RnocX9PGiHkW6RkiJrmpMXRMmPaqqwegGYTS+MjSEPP7zDxj0fFbXl3j9Q3polPBo6hqMxu+N2C6X1LWkf+chVZZhhKCqQKsRw8mUuoG61rxVNeSfXZPXBqPbqMwSA8JQAFv93MkmAbkDbIYW2gkBtunAugbdX9R7Uae2B1DsIjWFaV1+GjDC4UtSA5/UDY/rhkkhOfVqbmcFB4FHP47o9RTjaZ+TezG33v46H//Rj3j8aM6ibuNmx7FP77CHpwxVXlJVmiyvqGuNRBAFilCArDVSGzwhdqmw0sWVwq4zERBC7gCkdGcmzPPeReuxNLsJMLZw0oCRZofZWieilO38CvM8GlXsImIlLSBToo1LFaKFoXLXNWmjXdvX7NyOux1IIVHC4Kt2Wz/wQEGYeAxOx4zu3cIfT6kaj83TJed/fk5xfY2oNdpoKg2VMSyymqus4mxTcZY3LBrDzMClhpUx1LqNkG1BqvO0ogUoY/AR1DvwiI1+RbQdmwgabKQtNFogtMHzWgiphEQaw0EsOQgUD8Y+o1HM5aJgGirSyvD52ZbCQKUFgTCoQPHqrSFaN6zrmjceHKO05HK+odfzGCQRQ1GxKATrquTNr9/B643YrNeYpuLYq7n14JD1JkOkFwjTIGuN1iU0FWHkU7G791I3SOUhNFzPlgijyfOCXuAThYrluqCfSM4XGSwKjm4N6IsEozWboiFfZ3i9BE/4jJVBhOAlMZQlhfQIQlCeYJvmhIEi2+Zo4aECnyov6ccefhSyPl8TeYZSQrYFv644fXDEFxcbZhczXjmK2WQVZ5czGuHjCYOUmqPDmCUBXzy5YBh6ROOYL58tuRVL/EHCv/7wmn/0b/Q3bKdOnTp16tTpr7N+JvgIzx2P1q12E8zd1P7zN7exoO9loPHm9y8DePsQbz/S9WWg7+Y+LHi0DkX72H4E583x3QSt+712Frbsd/xZh9t+1Or+PvfnZH+/1i1Y1zVN0zhYtd1u6fV6zglpo1YB19NnHVm2E9J2F0opubq6IkkSjo+PWa1W3Llzx53zrVu3+PGPf+y6BO2Yh8Mh2+3WASDrtNxsNi9EWFqnou0hHI1GLJdLNyfWjVjXtXNY2rGcnZ2RpimPHj1iPB67bkEbsel5HrPZzEW+5nnuIkZ7vR53795lMBhwdXXF559/7kDzZDJxENe6+yywsudgOwjtmljnq72eiqLgyZMnbg0AfN/H930uLy9fiPO8vr52bjkLWK3j0HYiWhhqo1otgDw8PCRNU9555x1+8zd/k+985zt885vf5N69e9y9e9fBrqIoODg4eMGJa6/Lq6srF9dr40OjKHIRuRa02zno9XrOnbpcLh1wta7G09NTNy+2zxRa4FhVFYPBgDRN2W63zsEZRRFZljl4Z+OGZ7MZSikeP35Mv993jmL7fnn69Kl7D1oH7b470QLTIAhcRKp9bwghXM9j0zQURcFoNHIORt/3XXdlnuccHR05d+92u2U0GjGfz53T2QLULMuI45g0Td37
a7VaOdjfqVOnTp06derUqdNNzdcpw2GfKs+oS40ajdHa4Jma4ckdtpuMxeMviQKNFj6NEijZ4AUJUdJneb2g3CzJ5lfcfeNrGF+wuVqSRJrIh8ZElI1BBAUHB/dIxiN0neH3jrj48x/jBwHxcEQYxxC04FF5PlW2weQp4WhMlWUIs6sVCEKqskRXa27fOaLIKwaDBBMEBEqQZzkGGJ7cIVvOOXzwKrLXpy5zhDQU25RqPUMZgw7HJEPNaLpmsCp47aTGY01a1qRVQ9EYMq0phSDwoKzBFy30aYxB7uCbFKA1rnsQ2shRwHU+2qZHsXMAOpfcbh20aIGWBw7iSSOogDOtuSo1n1QNJ3nF6bbkcK2Y9EJuD+6ho6/x5i9HJP4PWV2v2OQVVdWwuSrBU/THMQfjkKPTEYOe4tbrD3nwC79IrRXLD9/j+uNPmX95zvXTOWXeYBoDRiK07S9s/6d36FEjnKdTiJY+6r17Di1c3HM8CuFIrJBy55t8Hu3aQs2289C6L+XOfentNrIxrEK2IFQY8JTEE22eqbdz9YWDiMHpEL8XMX5wG396gFE9NlcpV5+ek/7JRzSbOYqGRkBtIMsbtrVmXWkutzVP04pVY1jUhittuNaGtYGa1rVo+zTtPQhp2iheH4GHoNldDzaeVpi2D1QZgRamdT5qe7Ho9rylDcbVHIYeE19w2lOMJzFfXmb4BmaZ5HJTYpCkZU0kBeNRwKTfZ53VqB68+fAEWTY8vppxevuApmromRqhJRsK7t47YbOqadYLaGqiquL4zojBwTH33/k260VKnZUU2zOUFPiDiCzL0UYQxQFVqSm2Bdt0yygJmS8zPE8SBYqqLOknHmlWEXuKRjQcD2O0aZhfbyirEu17GN+npwxGVyzLmvsP77J6/ASRr+lFAWXd4AeKdZqja4nvK/KsotcPEJ7i8nxFpMAEijTTiEbz2r0xy6xkPc847HlEvZCrqy1XqcH3GwJTcToOaaKAn3xwgd/U3H9lwkfnawZSM54OeO/xmvmq/tl/qXbq1KlTp06d/q3RXxk+Wpfey1yP8PIeRqt9wLb/8368pX3+5rb7j7/M8WjHtQ8gvyqO9WX73IeWN+NQ92NRLajcP651OlqXm5SSKIpIkgSl1AtjeJlDdB8eWdBmgZSFdhZuWrfWer0mCAKgdbDZfrrlcukAje1WtJ2OV1dXzrmYJAm+77Ner537LUkSsizj6OjIOejqunbnsVgsHAy0YMqO1QJPGwXa6/U4Pj5+wdVpgVuv13PjtrGwNm7VdgfaSFh7rtbZth/dad1+QRAwHA45PT1FKcXHH3/MfD5/IS513/1mnYiDwQDf95nP5y6C03YL2i5B627s9XokScKrr77K06dP2Ww2FEVBHMf4vk9RFC+ARgsfB4OBcyeWZYnv+y4i1a6BdZ2uVit3TsYYHj58yJ/+6Z/y3nvv8Wu/9mu8/fbb/It/8S9YLpfOaWrnarVaMRwOqevawVm7hkVRkKapu3bsmGz0rQWjtnvT9qLabWwv4z6YtbDQrr2FiNBCYvs+6vf7HBwckGUZjx49whjjIn5t16ad38ViQRRFbv9xHDunpn3PTSYTgiB4AUAnSUIYhq6X0YJG68D0fd9FzdrI3f3e1NHoef9Ev993172NpbVOUQuK7fvdHq9Tp06dOnXq1KlTp5tSuqYsCnxP0AgPpGA07jG++wqLq6csv/yUMOpRlga/HxNIKDYZk9Mjrs8vaaqCQRxy8s7PU5Q1V4+eIMIIz1cIfDzfI46TNjLSD/F7IwwB5TZleDBGIgmnE+pGUKcrPM9HVzmer5DJEWW+pSkyRFMjfEVZVOhsix8G1HWF8gOS4YBaK4xo3ZXh5AQQDI9OUHEP6YcYBMV6iTAlHhUfv/8B/UGC7xn8UBAHiuNpzO3jAZ89uuJqVXKxLqgKQywkhgapIN8BuRY0CoS0TrydLbJt/2u/CrMDiTuX3E7StMDKMrlqFz3q7fohsX+D7yJMlWlR38bAp9rwqK7plzVHWcXt1XuMf/JTpr0E36uJpWIYe5jIIKShKiry5Zb5fMvVozlGSQ4/g8cXdzl951c5+eW/xy/9Fyf45SOyT37A9skXLD/5mPXTK9YXc7J5xnZdUGQVxuzuVRiBQbXnpndwcnd/odGAtt2H7alow67PcXdvZfezjWIVQuBL2ToXPYEnBEK2oLPtFWzBpKcUSINUEMaKIPZJRgnxYZ/4aEo0PSA8OKJoQjbPliyuN6x+8jnbswtkkdLUFU1tqDAUjWFTNSxyzSJvuCpqlrVh1cBcaxZasDSa0giqHXhFQAM7ALu7J2NaGKl2kao10Oygq6DteTRi53gU4PHc6alp8KRESaiNIJGCA7/td4wCxaYyPH20RdQNKlB8OdswUAqMJpZwZ5IQxyFnmzWHBz1evXdM0GguFiuSQUSkBKMoZL6oSLXmYDrBU4rzdE1UVATCMJz49I5OWTUhP/ytf8nBJKGpGhoDoikpGoMQPlBTNRqlNaZs8BVUpmY47lFtc7IsZ9hPSIsSIxSNLznoJ1R1w3yRoStDLhRbQk5qDaLhOtPkwPLpEzwafB/qnbW4rAzCU4hGkGUVSajoDRLOL9dEvsQEAR89WxIZ+LnXjigQbFZbhgHEccKziw3rrGEQeSgpGAQhB/cn/PjjBZ7WPLw94mxb4ZcNt08GbBq4uN4SBOr/n79uO3Xq1KlTp05/zfQzFZXdjF610MGCuv1tboLJfb3MJflVUapf1fVov99/fr8/8WX7f5nb0EIXwEEWG7t40/m4H89qHYAWjlgXno1q3B/Pfkzl/jnYx/f3bY97eXmJ53kOMhljHLixEC4IghfiMy38sb2J9jzG47GLSbUuMSklaZqyXq9JkoTNZsPx8TGz2czty4Kne/fusV6vARxAtPGrm83GwVIL8CwUtfGUNjrTvt4e//T0lLquHdRJksTNh4Wjdk7t/pIk4bPPPmO5XDIej3n11VedC+2nP/0pm80GgNFoxMHBwQtRuGEYuj5Eu2bWEWePa78H3Pe+75PnuYNitvvQdixOJhN3/Vjn7P5aTyYTB4211g622fmaTqcOwvZ6PZRSfP755/zO7/wOX/va13jw4AFHR0fMZjMXZxtFEVJK3nrrLS4uLpwjbzAYcHl5SRiGhGHIfD53PZ2e5zmgZ68z2xUax7HrPrRRvZeXl0gpOTk5oWkaptOpu6601ozHY6IoYjabufWx11VVVVxcXLhryEb1xnFMFEXuWrTgc7vdOuhc1zX9fp+qqhgOh+4auLi4cM5MC0ot/Lexq7YntCgKB5LTNGU+n7vYWgtor6+v3fspiiIXDVwUBUEQMJ1OX3AU2/jZuq7d+6BTp06dOnXq1KlTp335QUjdaFSckMSC0dEx/ZMHXJ89ZfnkEcPxhKurFZPTU5paI5Vkev8+6TplPZtzcDhmeHqX66tLtvNryrRg7Ad4foQIFFK0MZ5eGBL2BmgU6eyaIPARUhEd3me7WVOs50RJTF1uEVIhkzF1lVPlOdIYjPRABohige/7NI1BygAZAv0+Jqupygp
dN1w/eUY4HCONpN4WeL2AusjQTUM4OCbsD/mGbLi6SBnfvcsrv/QrHPzoh1ycr1DUTA5i3nv3C8qqodAQakMoYVE3eDuqJqQN27SgsP0/7ex+zyGl2JEoYWjdkm57aOQuknMH7DQWWLay0FLQ9gsClAbmO0j2WVUxUBVHWcaxEow9yUBKfKXwPUEcevT7HhiNNFDVFfrZJ3z55HOe/fP/hrg3xBueEh8dc/q1+wxPbzP42mu88etjAq9A5yua9ZwmXZHNFqSX12yeXrGZr8iXW+q8wtSGumlAS4qiwdQNxrQxrY0xNEbgCctSn8entvdoDFIofE8ilcH3dukuocDzFV4o8aOQYBDTOxwSjof4oxHBcIyREcW2odi0gPX8x+dUVx/TpBvqLENXFXXT0GhD3miy2pDVhk1Zc11oZkXDpjGsG1jrNlp1qXXb5UjrRNQY9M6Z2bJlSbPzKLYuSPCMQAiDtk7HvS5Paa8GA6WNsN0tqCcksY2oFdD3BLEHo1AwiDweryryuqICrtOSQAgyYRj7koN+QAOcpSn37h5yaxjjlzV5UxH0E8bDmL7v89HjK4wvuHU0JRCGq3nGYVMynoaUvk88SrhartnMLhn7BXVQUpUapUsqDWVRE4QSISRZXqBrTRQLVrUkCiKU8tptFCzWObHfdpYOAsE8Lbh+vGQ0CBChh/YT+nmN35Q8WZc0KO4c9FD5Bs+DTV4jA5+myBFeiBA+QaSJI4+kF3J1tULohkYpPnu8IpTwxt0Ba61ZzHJiaRhNeiyWKY0WRIGHLwxJ7HHr9oAvLwvKTcH9w4QUzdXZmoeTiN6kx+PHKb70nIO3U6dOnTp16vQ3Qz8TfPwqd+PLXIn7AMb+vN/BaLXvUrSvuRnD+peN5+br/jKw+TIIaL9aALkPAO2+952L1pFlnU9CCIIgoNfrOUfiV431Zc7Lm2Oxx/z00085OTkhiiL6/T5hGLJYLKjr2vUqAq4PcjAYAK2rbjAYOMg0n88dlArD0DndiqJwgGqxWJCm6QuxoHEcu5jS9XrtYmWtM9E6wHq9nju3sixZrVYujtO60ixkquvajc3GadouQ+s2lFJyfn7O9fU1cRw7h2EYhg6M2qjXMAx57bXXXLzmRx995IBrkiQuetY6Iffnv65rPM9zDlXbCbm/BvZrXdcOWI5GIzdmO644jl20aRzHDsQmScL5+TlpmrreRevA3HfSbrdbB4hthOmDBw947733+NGPfsQ/+Af/gG9961s8efKEoijwPM/1eW42G7TWznlqr42maViv1+46sS7FxWLxQlSpdauu12vqunbxqvsxsRY8F0XhInyHw6GLVz05OWGxWDCbzVx35PHx8QuxqkIIB4Yt0LPHsU5RC9fLsnSuWNvxeX197dbLHtdec/uQ2EJHG/dq4ed4PGa1WlGWpXOYDgYDtNZcXV2RZRlXV1fOaWujdy3YjqLIOXGtw7VTp06dOnXq1KlTp5sqy4JGC/rDHoenp6jBEcuLM5ZPviAejbm+XtEfH7C4uibpJ4S9MZfPLjj/7DPuv/Yag9M7/PTd96mKlIHX0FMNwsQYL0Z5CpoKqXzCyR1Ak189Q4oG34+Q0ZQiW1FuroniHkpKvKhPrQV1U7O9fEZdFgwmI8ptikkz/DAEoaBpyLdb4uGU+CBELBaUecFgmrD8+AlZmhP0BwRJn838Gl1VhIMRQnnQhPTvvkUwugJ/RN1oDk5O2Sw3xMkRVVXxzjcamj97ghI5m0JzURg8BEaCJ6AyYKNUpY0l1S1gNOa5602I1vm3q4Nk/w5Ds3PHSftn3c4SZ9Gl3O1HC3ukVi2kbJ8vBVw0huvG8JmBgW8Yi4apqhkpwTAviaTAk4LQV/hKEfmKQGp8USOrGc3FFZsL+PgnCi8IkZ6P1xsSjscMj4+IhjHBqM/w+HX6b3+b6d/yCAKNNDWiLqHJ0U2NztdU6y3lek21TVunZF1TlxVNXmGaBm0avCDAC33aGFqB9DyUr/CTEOVFeP0efpIg/AgjFU1hKLKKYlmwWW0pPlmyvf6A4npFk66RdUnTVNRVg0HTGChqTdEY1qVmXWpWZcOqMqSNIWtgrTUrbZhrw0ZDCVSAMK1D0dA6FqXZdXPaddi5XNVuTT12905oo1Vtu6XY2T61hcxi544U7f6kEfiqXVMfQU8pJr5kFLbO1UWp2eQ1vidYVQYhJL6SxJ4EBVljyHzN6w8POen3MNucjW4IBn1OYkkUevz0s3MKAQ8ODolCRZqWjJqcOFasGhiMEoJggCozplOPZHoIWhDnW1bzgrLWSNNQVWV75SqJNgLPE/Qjn6LQeNuUYSDJizYy9tmyYpBErEtDUQmWRYPwGnpJj4H0CMKGrCwxRnJrHGG0RnmQZpq6FphqS280gEZgdI7QmtF0zGqxZtCL2eQVs9kSz4fxQZ9yMObq6TVeVZNMe1xsKraZxuiG4TBCSZ/jacjZqmR5vuZW32MwDPjg0ZJ745Djk5hny5zZbEvsC6qqg4+dOnXq1KnT3yT9TJ2P1tm0/9i+bkaX7n9/s5NxP8bxJjzch31fNY79fe0f/+b3+xGxX/XcviyMstBp39FZFIUDZtBCFAvILLy02950aN50g+4ffx+CAs4dJ4Tg4OAAIQRVVTkgNBqN6Pf7DmbZx6uqot/vO8fXwcEBvV6PoihYrVZkWeZciNbhZmFTHMcOPllQZyFrEASMRiP3vXWq2Z5FpRRpmqKUIooiwjBkuVzieR6DwYDNZkOv13OuOHt+FgxawHR8fMz19TX9fh/bTej7PrPZzEWjpmnKZ599hu/7HBwcuHN89OgRn376qYurtTDLHtuukYVRtqdyPp+/9BqzyrLMgU/rCLTdltvt1o3JOu6yLHOw1ToJtdauS9O6Sq1z1bpWbVTo6ekpi8WCV155hcePH/N7v/d7vPPOO9y9e5dXX32V999/n/Pzc4bDoYuOhTY2tNfrUVW74vpdbCrA0dGRg9G3b992sNRGlF5dXSGldPAuSRIHqPv9PmmaOgfibDZz71vrTL28vHT9plJKFzVrwbx1H9qxLRYLByCllIxGI9brtXvNdrt1YzfGMJ/PnbNRCOEg5L7j2vafWiC5H9G62Wyo65r1eu2uqTRNOTs7c65KGzU7HA4dON/fV57nrsf17OzsL3y4olOnTp06derUqVMnAKV8htMBJ6++xrZoyM4vIF8gPMU229IfD3n86edMjg/ww4ir8zOaPOONd76JjPr88e/8S+JQcXw8ou97bJqA+M4rGGooc4TxCMYTivWKYjMDDIPDIxotSDdrTJnjRwnSa52N6WYDTY0QEMQB0XBIvl5QzC/QBPTkBCEr8nRLut7ixwPKoiRbLCkqyfGD+/SvFxSbJfL+fTbza0JfMD97RrEtGR/0UcmAsH+A3zskX12BaJDRkFd+7h3WlxdMygMCI/iWMTx6suCP/vyKUEkkglI3FLWhoXW+SSPQoo2LVEIgJWhHCw3GtHBS4h5qOwF3HYaYnUduB7CkacGi4rn7cf/jyhaKKVoXYb23TQlcNHBmDF6t6QMTBSMpmS
jBoGgIJSRKEihBoAS+EISeJFYCX0JVZagqo8lW1LOnZJ+2MLTRBhVGhEmCUD5BHGGCEKkCVBIRDQfE0wHhoEc4mBIcSKRSKGEIhKGpNbqqWvDmeWjA1A26aqjqmm2ewVxTr1OKzYKmtd0hmowmz6nSAp1vEabBaEOlaweKam3Ia0NaNa2zsW5YV5p1ZcgbQ64FW63ZGEiNIdWQGUgx1BqUaB2rUuwAopAYo/F169jUBupdd6eN0LXxqwKxg5LteoAFzMbFs2JaeChF2+kppSCQ7XURIBh4gknkcZwoyqricq1pGhjFPpk2lHVNoCTKl4wSj0RJ4p7P5GDAg0mfwBgKZUj6CdMYaqP54tE5Qexz/3hMEvk8u845kCXe8ZAnZzPGiaLIMk6PRtx555ts8ob1co1eb0i37QdvA1+ihaCoDYFvQCVUzZbA8ynXBT6GKPLJthlCemzKgij0KAwsrlMObh2T1IZSaF6bDNlmGU2aM+0nDHuC9bYgDjwKIQlV6/EtK41uagKh8JOAftxjvcqJlCHzY9bznFAJmsDnyVXG6OiIXi9gEPW5WuRkWUNeaDwhKMuGu6c9nq4rVlcpk8Tj1lGP959u6QnB3ZOYPAo5++wKYQyRBN/7yw0FnTp16tSpU6d/t/QzwcebX/edhl8Vs7oPJF/W/bgPKvf3d9PB+FU9kvv7tFDkZufjfhfkyzoe98/HurUs8LP7sZGbFoJZ16CNOn3ZfN3stPyqc7n5uAUmm82GyWTCdrsFcBGRp6en7jHbdWfHdnh46OJDy7JkvV67MdiozsFggBDCucn6/b7rubMxrNbdZaHkdrulLEvn/GuaxvXu2T5HC5eghbj9ft/BXCklVVU5qGR7Ji1IapqGp0+fAi1IsmDMug6t29JC0TAMOT09dY7ODz/8kM1m41yFFhJa+JXnOaPRyEV+2r7Dv+zasg5KC5btay2UtjDRRuNqrR3U1Vq/ACuzLMP3ffd4HMdUVUVRFAwGAxdNap+/f/8+77zzDu+//z4/+MEP+Mf/+B/zC7/wC3z66aeUZUmWZWy3W6qqYjQaEQSB6/eMoojRaMRyuXRxp1EUudhV25UIuO5J+/61jsEwDBmNRiilGA6HLhY1SRIHcO2a2LXt9/tu7S2ks3Gu1nV4eXkJ4GCrjZy1jk8ppYPPbfxT41yK1jFbVRVlWRJFEXEcs1wu3XwCDpSXZcl4PMbzPNbrtTtnex31ej0ODw9ZrVbO/WmdpXa9rUvSvi9tVK29djp16tSpU6dOnTp12tfkeMLkwRvMr2ecf/Ipw2EfEYVE4wPKvOKTH/+Uw9MT/DDg2aNHmKbk9oNXEWGPP/7d79BLQl55cIwp18xXmqDfJ5svmN4+wRcVMh5T6QZjCuJBgh8PybOMzfyaIAjxPQ+pFAZJvr5GGE0Qx6ioh9aSyy8/YdQPMHFMVQuCfh9EQ12sSGIP5QkCLyY6HrPdFCAkJ29+HeMPSGeXhL5HUTQMpgNWV+fkiWJ8/BCtPKr1kmBwSJYu6Z/e4dmf/QQv7DM5MoSBQocx16ua4+GSYaHJ8prLXNMIQx9omjaWsxGmdTvaWE0BiNYJ2XYaglKgmxYWyl2Ep3UzNrDrR2zdlQ5UYjsS23007LoGd18bYZC0MaCGFpA1NK3DTkMu4FkN58IQaENgDD0BAykZCEFfQSIFkYBQSjwFvhQEchcJ6kOoBJ40LVSttuhlhhKCaiFQaufRNIINBikVIJG+h/BUe57Kjn+vABJo6gqJvb/RPqZ2Jyak7YzUGCHRxlDWBo2matrvs7phW2mqBjZ1Q1rTgsbGkJkWLubakAFbA7mBTBvnWDVCtmshbCfnc0cpOygod2spbE+jaTsboe3hFLtOT7t+GrODx6J1yYoWVEoB2rTORyUhAPxd7OzAE4wDSeTDeVaTYBjHHqmAeVmzMa1rNfDhjTsjJklEUVZMpzF3I4mqG1ZpRT/xCTzDVVpS5iXCV4wGPUJhOL/c4ucpeSC4WJQMBkMCv71uw2GP/r23aK5XrK9/0n5QOSuhKQhCn2XhURcVKmivM6VCtuuMyDMEUrFJM+qyASpuHSSkZcNykTPp+/gmJ1INh8OI64tLxqOEwcmAbV6ji5pYALpmECcoz6Ne5cS+QlQlhYbj22MWixRFhU5CPv/pM0aBwvcDrjcpWSMRYQ8iw5PrGXXTUBQVGkkcCO4e91nWhvnlmthXHB3EfLrIqZY5r70xxJv0+eyDGWVWESpFrhuCDj526tSpU6dOf6P0M8euWu3DtZsQcX/bm/2ML4tT/ar40f1jvSzudf/5lz1mgcp+9+LLQNP+tjchqVLKufKiKOL6+tpFRd4815vzcxOq7o/jZo/k/jgA1us1s9mM09NTNzbr6rNQxPd955C0TsSLiwsH/CwEsy68oigcGLJutf3eyCdPnrzQV2ldnhYGWudgkiQkSeKiWG0Up+d5XFxcvBDFaqNeLeC0gGmz2RCGoQNpgOs9TNPUQVQ7Fuv2/KM/+iOgBUN37txhMplQ1zXvvvuug0pKKSaTiYNkQRAwHA6RUhJFEZvNxkEt3/fxPO8F6LwP0m03o4Vltt/S9gvu9yna87HOUAurjDFUVeUgpY2EtVG3s9nMuf5Wq5U7zu3bt/mTP/kTvvvd7/Ltb3+bBw8e8NZbb/Hs2TMWiwXj8ZiyLPF9nyzLnDvv6OiIIAioqoooitz3RVGwXq+Zz+ccHh464FxVFb1ej8FgwHa7Jc9zgiBwEb3j8diBVtv/mGUZ5+fn9Ho9N89N05AkCf1+nzzPHTi37sF9N6Pt/FRKMZ/PXRfoeDx2153tWLTw00YEW6ficDh0bkYhhItFteuyXC5dbLGF44vFwq2v7QC1Tly7HxsTCzjwbJ2ks9nM7bNTp78OejCU/Pn/7j/773sYnTp16tSpU6ed4oM7nH/+CfMvPuPwzm1U0kcrxdnHn1Bs1zx88xUqo8mXV4RNzfHDVyhrw2c//hGDOODk7gl+GEG/x2TQEI8OCZKEIIopm4A6zVASwriH8GJmF2foKieKAtAaZAuX6u0WP0gIkrh1tpU1q6efkYQxxk/QfsVg1EcjQESoaASeQAURUhgqlVA1OUkQI70e2WJGlW5Jiwq1g3NCN5TrLbqu0dkSLx4hfR+Zl+SLOaOh4PpsRpAM8Yc+1ZMLDiYxv/qte1w/W/Dnj1ekVUMgPNKmIcdQ7AJGbG7RzvyGMSBlC9QkoCU0DahdV6TWthmyBVu12d2DoHVNih2Q07t+wZ05so373EEvhcAI3fZQ7myVvhFIswN5tpXSQKkNFYJ1Y3hmNJ5swWMooCcEidAk0hAjiSQEsnVFBqIds8QQKtWej2gBZeBJfCnbGFKpUbJBIRFl0QJE0zpBlRS77s+GRoM2YnffA5CapmkBpnV/VtpQ1praGEoD28pQNhptDEXTRo5WWlMZQUXbgZkZQ2YEuYGN0ZRGUOoWJprd+uvdlAjT/n0ld0BXYvs0BcZAYOzMCRpt3CoJYeN0d6tjBGYHL+36GAG+Ab0D0cYYjGijej1p8BQ0RhAYw9hXDHxB6Euys
kEYGCQe2xpSbVjXAikFnicZxh79JGaZptw/nXIrkkwmPRbzDb6C5SqnCQIC0+AlPuNBj8ALma23TGgw/YjPLtYEvk/QaJJpj+us4vOnc9LmPY4PB2yWK/ymwY8UVeFTlm1YsOgFXGxrDsMakefopiGOAvJG0OiG/jDEV1AHIevZnFuTqAWyStI/6JGVDXVWMLpzQOklZPNHHEwSrhY5oQIv9Lm6TqHShJEk32qm05jFYkMoG6KDAe9/OmMcB8gqZ57WhJHifhhz/eefkxUlV6uCJPRpBISy4fV7E0ov4OqzCyJPcnsS82hTsb3a8vaDAaIf8OMP5izmWyJP4nuCshF4/ldXE3Xq1KlTp06d/t3TvxHn4z6c+8tcjvtOxJt6Gch8GZz8yxyNN8Hmfgzj/pj340xvHv8vg6H7cPD6+prhcOjiGF8WmfpVEbAvO85N6HjzmFVVubjR/YhPpRSz2QzP88jz3L02CAL3vO0atGDQgiQbsbparQjD0EW5BkHgolXtz9vtloODAzcWGzUaRZHrgrSQx/YZ7ncYWsDk+75zQPZ6vRe6+WxPpj03C90ODw/detkuQwtYbY9hkiQ8fPjQuf0ePXrkejin0ykHBwfOoXhyckKe5+R57qJlx+Oxg1E318g6MT3Pcy44CyAnk4lzoFpQaSGs53kufnUwGHB2dubW7ujoiC+//NI5RT///HOUUq4j0TpaPc9zrtK6rnnttdf44IMP+MEPfsDJyQlvv/0219fXzoF7cHDA48eP0Vq7tbOuXetMrOva9TeOx2OMMWRZ5lyMtuPw8vLSdYha4G7XYbPZMBwOmc1mDAYD59JsmsbBbQt01+u1A5cWAtv17fV6fPnll87JmOe5i6W1cz2dTlmtViilODg4cPNr4aaNeLVRwr1ej9Vq5TozrcszDEPW67V7rY25tf/seVVVhdaasiwdsLWRynYubQSrdVTaa61Tp06dOnXq1KlTp31df/kF+fqSg/sP0VFMutpSrpf4SjO+e5d0W1IXBZNxn970gLSqKbMVB0dHZFXD9N4DQKD8gCgMCZIh0vfItynFZoXQhuTgEN00LJ58iJKCpN/HoBFKoY3Ek5IwiRFSopuGdDGHKqM3HKHxENQ0mzmF8vG8unWZ1W2kqzQ1TVmilCTsRTR1gW4EQdyjSeeYbIP2oEy3BHFMsclYPHvM9O49RJCwnZ0j6i3Do3sUcYBQHnlakm5yJifHxOMxui6YTmdo8yUKw/mqaAGaJ9FVgxRQ7UBXLQxStrDK28GqGkPTgKdaIKf1cyC2Q1fOUaetU3K3PmIHugwG5SCj2AFJC+123ZEIJK0bc7drdxxl7N//onVhGtg0hjWCa3QL3ZRAGY0v2LkhIRaCRBhCIYhEQyCEG58vatdvKKVs3X60Lkmx824aQAqDEhLZ2gxptKbSGr3b3hjTAkFhaEw7l5XW1AhKY2i0QBuBFoZiBxpzY8iB1EBJ62hs9u5vyF08agsHd5jXWEdlOyvNbs4wrQOyFs+jUts52+2rtbIi7BrsXmfBo2iJJsK0MbzSBrSK9vyUEPiy/cpuT31fMIkkkZJoJIk0HPd9NqUhawwb3Q7K8xW+kighKMqa+6cHHIQC3RjOz1f4oaAWEp0EHPZCtNBozydMYtJlwagqMJ7kclOQJBF1aVhstnzjb7/D8uPHVLMN6fJPudrcJg486qLG8wRVrVgsc1AwS+FWT9HkGb5q12+bV9SNIQgDoihkm1fMns45nvZYbHJ8TxIHHllpqIuS44OY68WKg5OEycGYs9mSQLcu5ydnSzwBw3HCdlPSTyS1EagqY3R7zKfPtpRZyUgIlgYyDYeDmFgL1kWK1JpRJOgn7VzdP/Rpkh6f/PQCoeFkHPNonXFxVfD2rYSt1Hz66RJdG3yjGSUxl+sMIQPSsvo3+Nu1U6dOnTp16vTXXT+T8xHaeNWbfWd/WZzqzZ+/Cl7eBHdfFU1qn7dA8ub2LxvLV+335jjtVwtUtNbufNM0JYqiF2DhXwYt7VfrxHvZ/Nx0QN4cz9nZmXvMglUbfzkYDPB9342pLEvnhrx16xbL5dJ161lXnXUh7kdsWgekjba0EKvf77/Qewc46GLjZpVSxHEM4CJULdAajUZMJhM3LhuX+uzZMwfnrq+v0VozGAw4ODjg6dOnKKVYrVbOpWhBpRCCpmk4Pz/H8zzG4zG3bt1CKcXnn3/O5eWluy5v3bqFEILhcOhiUler1QtzbuczSRJ3Xd9cV+v6tEDNQjobRWpBo+3UHA6HDshtt1vXFai15uLiwsXaWlhszyuKIhejatffwrgHDx7w7Nkzvvvd7/ILv/ALvPnmm5ycnHB5eenice01at2Cdr6DIGAwGJDnuTuvIAhcF6Z1wh4fHztnpp0rG9Fb17VzitrXLRYLt252rBZQWyhpQXee5wBsNhvG47HreAzD0EXvJklC0zTuvbHZbBw0Xy6XCCEc2LVz7nkey+WSIAi4uLhw62hhuYWIdmx2vZVS1HXteiytm9h2c9r3gP1qHb3WCWzfi506derUqVOnTp06vUx1uiDojaiQLJ6eUSxXnNw+QIR91ustkYCD6YBoesDTJ5foxtAfJJim4vDkHl48oC5ShFSooI/qT1g+/ZxstSYMNL7nUW9TsnTLYDrFj3xMozEotAGtWzhmyhyNItssW0fdcIyRHqaoUEFI79YpXjTYxXeCqTLi0RFaN62rUknqpvUDKk+BbvCHh4wHQ5r1kv50ggojhKcIp0OQHstnn9FkG3rTI6QniUa38aM+1eqcYJCwnS2YX14yuywxUvHw1SOkMZTVnCKtqYxG+JLKaKSByrFAg9xZIIsd2JNSYHQbv1nbTFXTOu6sm7EBfASKdjthN4I9cAaeEQ5SagNit63YuS0Bm4bqHHz2b0cpdvuTAt20/JKdY6/auSMzY9gI0I1Bm3Y8Uhh8WojmSUFIGx3qI/CFQdHggQOucneKbXQsSFrYCO0+WydiOzf1Dq+WaKrdPDYYSlrHZg2URrfXi4CCHSA0ZocvW6Dn7War4flc6d0Uyt1rcc8Jgl0KrBa7ONvdrZd6F5daCo2yVtZdpGoLU9tr0EbuQgt3pdn1gIqdk9K07lRPtnPl0R4vFnAYe3hKUWmNLzSvTCJKbYgjwdmmotCa2A9ojCb0BVIbJqrmsCc5mo4oFiuqqqHG4A19bvUS5uuUyA/oDWPS2ZYjUVGFcJXnjE5u8/nnl2yWGw5u93m2LDCFYFyvMUKSPn1MmPhIFGlR8XhRMFsVDGLFw8M++bbEQxNISdpoqkYjPEESBiy3W4oCjg77ZFXFIPHxPckmy9AI+r2Qs3VNrCp8VZOFAwK9ZDzpc3aZ0mjNcDpktcyZDCO0ktSF5vigz6r2Wc2viZVmoT0u0oph5DPyDItNzjzXDCJJMgkIg5DToY+aJLz7p2eYsuD14z4rrbmcl3z9OOHgTp8/+2xBXTaYxhAFik1WIRvJqqrQsvvbuVOnTp06dfqbpJ8JPu5Hf+6DNnuT3v4Mz12GL4s5tfBn/+d96HbzdTf3/TKwuL/d/j5u7ucm
5LPj2I9dhbZL0XYUWsBkOwW/am5u6iZkfVk87c352398Pp+zWq3cY9ZtFUWRg2H7bkG7Ptap9corr7wALG0EpwWKFxcXrmsvyzImk4mDaNvtlu12y/HxMev12h0P2ijKqqpI09TNmwWKNx2Ws9nsBcfZyckJ0MLKwWDA+fk5t2/fdtDu9PTU9VWORiOapuHx48eEYcjl5SXr9ZrpdMprr73mnJ4ffvihg4u2i9D2/KVp6uI4rVswiiJWqxWe57lzsu7AffdsVVUuEtY6Q63zEnBRthbozudzfN9ntVoBLQS9uroCcHNjI2nDMHQAfb1eu/hYC3PLsmQ4HBLHMW+88Qbvvvsu3/ve93jw4AHvvPMOn3zyiXM39vv9F+Z4Pp876GfBW1VVrlcSWihYlqU7nl3XNE3dc9YBOB6Pqeua+XzuxhxF0QudqDZ21R6naRqWyyVSSqSU7hqyztI0TR3g9zyP4XDI+fk5vu9zcXHxgvsQcG5HG9MbRZGby7IsHdwcjUY8efLEdYHaSFULyy2It7G3RVE4SGm/t85T65i0wN1CXgtKO3Xq1KlTp06dOnW6KW96RFU1VJsVgck5/foraBlwfX5OiGE8GRMfn3D59DGiKRgmMaEHyfFDVJxg6hJTlmgZwuCIRx//OdXyjOnRIVEcIoKEqiiIhgOEgHydoaUHosHzA3SV4/s9aiBbnJP0BwgvBOm18Z5xiIpj8s0SbzBESU29nqGCHjUS5YeIKEZXJVK1//0rVIyhwI9C/GiIOjptQaYRqP4UIQOW54+ZP/6cYawwSUzjeUg/wkumePEEL37Khz/4E6b3HnCgDeXW5zzLODkdkxUN8mLD1bZm2xg2NRhlUNqQSImSkm2l2RrTHtMzaG12sLWd9xYmCvwdeNS0UakKg9HswCPP8aN4/pgxu1xQI1EWvYn/H3t/HiRblt/3Yd9z9y3vza32V2/p16+36Z7BdAMDDEAAQ4aCDJqL6KAsWrRsURZtKBwOy4ZMk7QpCaJkmqZlinYwFLbDZNBWkEFHUAvMYMgQidkXzKDZszV6me5+e22ZlXveffMfWb9f38rO6hl2z2AG6PONqaiqzJvnnnvuyeqp96nv91tDEQJVfQHjFAFV4GIsoIKKqi6h4AKE8r9R1KuI1PICrCmUCnUBJmsgF4BSrYBaUgJ1uQKfQlzMhWJKL/65g7oS64vJC6ycj6pyARBrQFUuwF+1mnOFEhWADO/Cy/KiixFYvU5dZbmSr5BOh0oAZgWozT/eFgLlhdtRACgUAdQ1x+PqAHLUKJXV+ZWL1a3Eqs+R+htXXZzvvl9qApWCAOjqH600vBu5W4lVpKt2sc6aAHShoq4rdDUVgSHg6gqSsgLKCre2HWRpgWkpMK8EkrJC3zKQFQUq24BWlHhuy8QT+wG2dnuIp3PMFhHKsoIRWOgaNoazBGqtQlUEqkWEHcSooODhaIqPf/pncDRXsFw8gtWx4XX6OHvrAfr5GIkq4BoWVFFDB7BMCjx8PMYkyrB/fQuHfR9JGEIUORRRIUtLlEIgFzW2AwtJXqKsFFzbdzGfp1AUBaZl4HyaQggVwtQwKQHHcxG4Ko7PFtg52Ia61cNsNkdRZOj3PORpjLarIxeAgIK2qyFVDAyOZ3BQYyl0PBwtseXZuOXrKAHMEqCuKghFRRqXuNZSYHRcfPv1ASwABzstlKaGB++McCMwce2whaNZDiWtoKvKKn65qFHUNWZFCcXScGs3+EA/R6WkpKSkpKR+b+pDdz5eBQA3uSHX4V/z8fUOxk29ic1j1nsbN7kjNzkbN82Z5kU9f9SL2HQ+UmwkwVZd1/nr5ljvt0br17N+/VfNk9YziiKcnZ2h1WrB8zx+jDr1yLlHr6WY1SzL4DgOVFXF2dkZAxqKkCQA5Ps+fN9nmLRYLDh6kjoO5/M5sixDq9XimNIgCDCbzbinDwA7LOt61aNIMaPdbpfHEkKg2+2ufmG4iOns9XpQFAXD4ZDBGbACrOPxGGEYMhQ9PT2FpmmwLAuHh4ccHfv6668zZHNdl7v5PM9jR1xZlpfiYWezGQDwfV/fA+QkXC6XsG0bdV3j6OiIx/I8j91+5NKjr03TxHK5RBRF7Iasqgqe58H3fXaQ6rqO5XIJRVHgui5GoxFHiFIPJQDs7e3h1VdfxVe/+lV8+tOfxtNPP42nnnoK3/72tzku1HEcdimSC5Uce4qicGQuXV8QBBiPxyiKguFsmqZotVocDUtxqeQ8JJhHcbp0Hx3H4TEosnU0GnFXKDlod3Z2MJlMOEqVwCTdB1onumemaWI+nyMIAr4nzWsg56XneYiiiGN9CTyTC5LeG5qmsYOW+lKpA5V+fhGQrOsanU4HaZpyj6dhGHBdFwAYSktJSUlJSUlJSUk1NV+GQF7ANjS4vT3kionxyTEcXcBvBRCej+l4imyZot3x4AY2cpioVRVFvEQZLVCmGbaeu4X73/inmJ6eY/fwGnTLRKlbqPMSqm5AUYB8MUeNEkWaQ9VNKHoNvdVGnqfIoxla3Q6gGKsoVsuFatlIwwWSZYLW9nVUZQlUBaoyQ5aVCHY6qPIMZVGizFIoqgKlLgGUUE0XmuOu4JuqoSxyCNWAqtlYnD1AeP4Yi+EZgttPQqBAXeYQdgtQNFTpEopu4flPfQzDew8RzZdQNAOeH0BAw+52DkBAH8c4XaQo6wq2piAqaxgA8rpGrdTQsHIGQgCpuIg81WqI8oLTKQLFhfPTUlZIrSyBiyTQdwEYVmANF7CvvoCNFNlKjr2qJrfhiphdsEMIpUZdlVDEqoNx9beJqzEEUUOxOl9dr+yUFY1z0S9JpK9u/DtAUa+OIfchqgsAdwHsqhrQxKofslaAUgiUdY2qEhcRqSvIV1+cP7uIQqXUWO1iIE0oDP1WUPKiR1OsYmn1moDnyiW5ujv1pfkruIhYRQ1DrHoXy4s1Vi/OIy5iay9WZwURL+JbIQAFCkrUyMW7PZwaBDT6N5OLmFlV1NBVgaqqoYrVGmhidY8DQ6BnaYhLQCkrHHTMVWKOquB0kSNHDd/Q0DI1JLaOJCvxZEvBndt9bB9sQdVsDKJzVIqCVFGx43mYpynyuMDWtgsBBX65AAzg1UGKKWxkmofT0xNYLQO+7yM+HeOJtoJCUWFbOpS6ggIgyiocDeaoRY2tnov9wEYaJqizAp4lkOcX14cK+20bWV7AUFTsXg8wnEerOyMUHI8TTOYJOtttqFWFlmfDtSycnU9gFDmUHR9xpUOpa2z1WphPU7QcBXGlIJvFuH64gywvMDwao84LQFfxzqNzbLU7eKqjoyxTHJ1niJIajmmghIonthwEhx1887Vz5IsIT+53kGoq7j0Y45pr4tk7bZyENRbDEJq6cjv6loZSEVgucwhdQ9e2UCWyrkRKSkpKSuqjpA8MH5sOp3UQCFx29216fhO02+RgpMfIUfdBIg7XXYXr2uTOJMdbXa/68CgSkwAkQZj1+V3lwmzOZR3CXhUb2wStTXfd9vY2NE3jzsDlcsnOr6IoMJvN0Gq1OJJyPp+za4uukfoZNU3DZDLBYrF
gUNRutxnOOI7DMZjUi0jArN/vI4oi7kQkp5/v+7wmuq4jjmOMx2M+L0EgRVHY/WfbNgOis7MzpGnKPY1lWeLBgwd8TcvlEkVR4N69e9zX2O/34Xkejo6O8M4777Ar1HVddLtd+L7Pka26rmOxWHDMKMWblmWJ0WjE96cJpZtQLIoi3L59G/fu3UO/38dgMGDYSKCKQKKu67BtG1EUsWt2uVwiyzJsbW1hMpnwHms6NclxmqYp92kWRcFuvhdffBHf+MY38JWvfAWHh4fsfqR9SBGpdI2WZbH7EVg5N8n5R05NirYlqKwoClqtFu8bimY1TRMAGPKRm7DdbsMwDMznc0RRhN3dXY5ubbVWEU6np6fY2dmB4ziYz+dwXZffWwQgCZZSP+j169cRxzGOj48RBAE7FeM4hud57IwlEB9FEcNARVHg+z40TcPJyQk7XSn+l1yc1I1p2zYURUEURdxZads2ux+pX5TeGycnJ2i329xPKSUlJSUlJSUlJdXU4nyKftuDbuoolBrx+RkMXcX2tX0kBZDkJeJwAV1X0d6/hrSoICogWcygFAlagQPj8BaWoyO4JuAdbqNzeA2FqqNMC+iaiiKJoWoK8rKG0Azollg580wbcThDnSdwenuoqhLRbA7L86G1OsiWC0TTKZxWC9F8BqcdQIECzdCQpRXCaQTLMZDFS4gyh2E5KCtANQwIoaKuFQhNQ5GlEJoG3XJRZhEUtYalA9t9D9H4HIbjwGibK2hWFAgXIXTTQuf2J6CpKkxngMV0gSwWKE0FtmNhe0tl156VaRgnOXylgqqoiErAEiXUsgZEhbJUkNXVKhJVrKCfqgrUdbXqAxQKFLVCka8iPWus4N0qulRAU1dxqnUNlOWFy1FZAbwLpomqXoFFRQBluXLeAQLKhX1v5WhcxdzS72QQQFnVICMlxMptWK8sjSv4dvHcqq5xleUqxEW0aY0L5+WFEVOs4F6JFWikiNLiwgFpCQFDWbnmygY1LcoVWKRx6HOJGoZYAcoVeBRY/RZ9ATtrrMDjBawtLmJR64v10C7A5mreNYPZolqByxLvQkbtoqsRdY3iYlGViwhcIVZjVBc2UoK9KgR0BahKAVWs1gsC0FUBpa6hCXEROVujqyswlRV4LGtAqyv0AxtpmgG2hbenCcIa8A0VnqEg1QSSpMDTgYqnntiCZnmoCiDME+RFCajAXruFeRxC0wz4rdW/eXREgrwq8XgBfO17Y7zw1A6G8wpxeIYgaKEYz3HDE4gLwDV1GLoBQymRVgqGwznqukKSV7i1ayNOIogUUESOMMmgaQbqusB2z0ecZSjSHJ0tDyeTCFWy+gOG81mGOEqx7VvQqwKOqUBHgflsAUcFuoGDweMRTNOC6dmYhzF6PRtpBZRJjq5rIE5zDM/G0OpVl+fZMsJWu4VbgYFaFBgsauRZDcuo4ZoKdrsW9I6FV9+ZIZ4ucXvPg9Yy8cZbA+z6Op44bGNU6ji5fwZV1IjyAqapIoWC8SJFKmp0PAuWUiPJJXyUkpKSkpL6KOkD/4t5063H/+e6oaschld1Oq4DuCbEa4Kg5vObYN8mB2QzOnMddjYBX/P45ryoj4/iKqnnT1XVS2NsOnczlnY91rV5HVc5QNfX4Pz8HN1ul2ER9eY1XZme52Fra4sjQKm/Dli5x1zXZZdhURRQFAWe50FVVe7nq6oKpmkiiiKEYQgAmE6nSJKE3XDk0KN4UM/zsFgsGJRpmgbXdTGbzRi6+b7P0JNcapPJBIqiIE1TdDoddv8RDCLwSWCJYmCbrsputwtN0/Dmm29yHC459ChWNggCtFotzGYzdDodJEmCKIo4WtM0V38Vub7n6F6oqop2u83wst1us2NUCIEwDLG3t4cwDDEej9Htdnn98zzHcrmE53lotVrsdiSHKDk4m/2Wzf24tbWF+XzO9/H27dt455138JWvfAUvvvgiXnrpJXzsYx/Dt771LXiex/sjiiJ+b1EsbJ7n7Hwl6EpQuK5r7moEgLt373Lkqed5DATLsoRt2+yU3N7e5vfDbDbjSF/qtqS9QXMhZ6mmaZhOp9ja2oIQAkmSYLlcwnEcmKaJyWSCTqfDYJxAPEUf27YNVVWhKAp3UVKcLMXwBkHAe42cjzQPcnjSPsjznN8vWZYx+FVVFaZpcnSsqqpIkgSGYWCxWEBKSkpKSkpKSkpqk7Z7LkzXRwEgXYYwhI7t6wdYhDGqskKxmMJWBNqH15FVCsL5DFlWwhAVenv7gNPCbDJBFU3huB0YfgeVoqFMEggI5MnyImrTQJYk0EwLhteCquuYnT6CpihQLRvLeYgqDWH5XbhbB5gcP8L8+AE6+/tYDB7B6+2hLoG8rBDHGU6PjtHe05AmBqL5FO1OGwUMVEUKw/VRixpQNAgAquFAs1uAIiAUDbonYJoqTs6/CbW9B3f7OoRmQSgC0fgYSGPo3j6KSod74xNIs1eQ5wXKEmhf20VW3YUiligzA4YhcDxZotdyEBUVphlQZxWysoSi1MjL+qLTEFDVVfSqrq3wmaoAmgKUdY0ix0WQ6Mp9KBqveTcG9eLfG8S74FEoYgUPL6JMhahhq6vI1UoARUXJTyswp6pYuQGrehUBWyt8LuDCsXhB16jDUIiLiFSsomELXJwPANE46mCsL+CcUtfQ1NU5DUVBUdVQVAWGoUGvKkR5voo7VQB1dVLubBQAcgioqFGKdx2d9UUMK/3LiF6vJrea2wqiQlyc+yIDNb+4NgK5pahRXQDNi/9xXmp5cWx9ce0cH3vxHDkjlYt56Gp94fasUQoFQtSwlFVEraYo0FUFoqrgawpMVaBtKnAMHfMohWPpiNISqqbhO+MYaSWw7+hwVAWzqgaSAs8FGn7qhT3sPfMcHn7ndYxQIKwspHmNvu9gEmXo+g50VcdsEqEvcqRFjneGCX7ze0P0fAunkxjV2w/QabWB4Qi7AVCbGkSxine1oSDKSizmIVRVII1L3Ow7KKsVNK8vknWcwEO0CLHdayHMC2RRhq2tDk6nCaJFjBt7bXzvaIKiqLHl6vBbJgABTVewTGu4dYntLQfTeYp4vkBw6CKMcyg1ENcK4iRF21JRaSpO7x+hyCroLRcn0xi+5+JGx0YtUgwmNcKoRFXWcHUFB20dxlYLr35vAFEA17ds2B0X33lrjL4u8MQ1D7OqwtGDEfKqRFKUUBSBvBQI0xLLqkLQ8tASNXRTQRxvTgyTkpKSkpKS+v2pD23XaQLBdSdgE+I1tQ7XNrkhNzkpNwG55vGbvm9CxU3RqJuckE2oqigKR4KGYcguK4rBXHc8Ns/1/WJgNwFLgivrPZpNSETw0Pd9lGWJ5XLJUZbkSCTn13Q6hed5cF0XdV1zv57ruphMJlBVlbsiCSIlSYI0TQGA3XqWZSGOY3Y10nO0HlVVQVVVaJrG3Zg0puM4iKIInU6HwRgBSjo/uScpLpPWi0DkrVu32CGYZRnG4zH3IB4eHsL3fRRFgd/5nd/hCFpFUdDtdhkgAcBkMmHYSJGyvV6PYRIBLto7BHQJmBHsanaA5nnO1+e6Ll8Twb
V+vw9gBaJbrRYsy+IYVsuyOF7VcRyOwe10Ogx9x+MxVFXl+06PP/vss/jiF7+Ir371q3jqqadw584dvPHGG0iShNc/CALuKwyCgF2wQgjEccygtrm/Tk9P0ev1kOc5uyN938dkMmE35vn5OT/WbreRJAnv5b29Pcznczx+/JijWl3XhWEYfJ80TWMwalkWFEVBEAQ4PT1lEJqmKcqyxPHxMSaTCUzThBCC3b55nmOxWMA0TQaBSZJcgtuapuH8/By6rnPcLHVV0n4uigJhGCKOYx7ftm2+prIsGYBSn6TjOOxKXSwWODs7e8/PESkpKSkpKSkpKSndbiErKggAnu2g1e9iOlmijGNkUYRupwXD91EqOso0xejoCEHLRf/O04DhIJpNgbJAq+VDMWwIw4SiCGRRgbosoRoqirRENptDVTTohgMIHcvzAco4htntoyxyOI4Bzb8G1e1i9vAukukZ/E4HKEu0dg4hag1KXQF1hcXJMWxdRR3PsDidooKGudDR7mrQdGuFhqryImK0gu60URUlZqdnULUaSV7D91309rdxfjxBvEjhBCbyPIOqmVBbFsosh2E7qCsBf/cWdMNBK84gUAFFiXQeYvT4CFUt8ExRYrxMcXS2RH26QJKnMOoaOQRyCOgX/YcFcBFzSnBPQYkKRfmue6+6cPJVNaCq4gJY1qgKcSn+tKpXgEzUF72NF4BSU9+NAa0E2fRWDsBV7eFqPqUAqgtHZFVRpmoNCHEBFVcoVKFuxov4UYpSrdlVCM5DLQWg1isHpCpqmIpAWQNpUcFQFcRljbamYRYnKCoFQI2qFqhEtbpWrDoni2I17boGUtTQLtyLCi7AJvAuXBWrx8mhSfNTUSO7cJnScyuz5QpAUvejqFcwkaDpqvdxdTMUdk2u1r0SWK1dDejq6vmVi1NARQVTWY1nKCsQXFQFdk0dgWXAVoBdT8M8zND3LJwtU+iahvMwR1bX6Fg6eo6BaV7AqBRct2p86qWbMLf6qGChMC0opQozjGDqNaqigG/o0KAgnKXoihxFVuM8KvGVuxO0PRvttgvVsDEfDGEpAk/fsKBtbSEOC6go4RsGJuMJ0iRFpapYLHPc3vWRZBWSMEeta8gqBYpmYLHIsLvVxjxJkMcV9noe5lEJkee4vtPGcB6iZVjwOxoUXYGmANAUjMMKga5gq9fCaB4jCXO0Ag+nJ2M89fzTeOvuGdLxEp2eD5gqRoMxoriE72goDBNCq7HbsgG1wPkMWCxzVMrKQXvQ0SE6Hbz19hQGVHRbKlptC28+XKJVlXjqiQ5CXcHR21OUpUBWXnR0VkCalEiUAl7goaNrsNQCUVIiT8of0U9aKSkpKSkpqZ9E/VCzAq+KLaWYRGCzM7F53CaQuf7cOvCkx9bHah7XhIwEk9ZhZvPr5pwBMLyLoghpmjJ8XJ9H83xNgNpch3VX51Wux3WnJ83ZcRwYhoG6rnH37l3s7u7Ctm3uNYyiCLZtw3EchGGIuq4RhiG731RVvQTMyG3XBD6apiGOY3agEVBLkgRxHKPVamE0GmF7e5v7CsnlR646AnAUZUngjjoEKaqV4Far1UKSJMjzHAC4Ww8AlsslFosFgiCAZVkYDAYMip588klomobxeIz79+/zOgkh0Ov1oGkaR59Sz2MURQyswjBEq9Vi9+W6miB5sVjA87xLvZe0prTew+EQnU4HRVHANE08ePAAVVVxZOjR0RGSJGHXKAEucngahgHLsjgidHt7m2NAKbK11WrB930cHBzgK1/5Cn7u534OL730Ep5//nl87WtfY8BM3Ym05wkYn56eMnD0fR/j8ZjhJzk5CeoBwNnZGUzTRLfbxWg0QpZl7BrM8xyWZWE2m3G0LUF0ciGmaYrpdArbtjkCNggCjgkmJzHtFeprpJhcgrLU80jxwwAYAtu2DcMwMBqN+D5Sb2izx7HdbvNcae+p6uqfANI0xXK5ZDdlGIYcF6tpGvb399FqtTimOAxD3j9SUlJSUlJSUlJS6/q5X/1/bu7juEJP/6gm0pD33Ps/3/nZf/2DjfvMex/rf4BxnvhAZ5eS+t3Rv/XjnsC/gH76RzDmv/QjGFNKSkpKSkrq96c+FHwkd1mzG6+pTfCt+dr3A22bXIrrxzWPXYeHzZ7ETePT3DbFwDZ785qgkAAkOeYoCnL9mjdBzU1r1IS16+7QTddEEDFJEn6tYRgMpch12OzAo27FNE3RarWg6zq79gAgjmNEUcSdfOSEa7ryOp0OHMfhDsHFYsHuPYptJdcjRXZaloXz83N2n9E8CIBS7Gmr1cLZ2RkWiwVarRYMw0Acx+yKpB5I6twjGHV+fg7DMNDtdtHtdmEYBt58800cHx+jLFd/TWdZFgNBWvc0TRnudTodjEYjTKdTBoBZlkFV1feAcAJptH6z2YzdhKqqYjAYwLIsBnej0QjXrl3jeRZFgfl8znGsNN75+TnDxSiK4DgO6nrVe+m6LqIoYqfofD5Hq9Vi8N3r9XD79m189atfxZe+9CU8+eSTeOaZZ/Dqq69iNBqxM1DXde6gJMDbbrfZQVlVFe8VioQlwN7pdLhfkcYrioJB6vb2NoPO3d1dPHz4EOfn5/A8j7tDkyRhuDeZTBAEAQzD4H1zcHDAcNlxHIa71Du5WCzg+z6/p23bZuhrWRZOT0/ZFUrwmtyVtFfJwUlRvdvb28jzHI7joKoqOI7Drk46D70Put0u75skSfhe0GvJkSklJSUlJSUlJSUlJSUlJSUlJSUlJSUlBSjf/5DNWgdsTai2Ds2a/Yab+hfXj1n/WHcvrsepbgKJzdjVpgtx/djm65tORpoPQSIhBHRdZ5cUAZr1sdfns8nR2TzHVXNpQtvmWuV5jjiOEccxqqriiFKCQYPBgOMyTdOE53nsdivLEovFAmEYIssyxHHMTr7ZbMbQpSgK7r8DgPPzczx+/BhVVWG5XEIIgeFwyH1/QgjkeY75fM5QiIBOv99Hnufcu9cEtoqi4O2330YURdyBSL2IBIcompPAWF3X7BzUdR23bt2CruswDANvvPEG5vM5r/ne3h56vR4cx4HrugiCgO9XlmV49OgRiqJAu91mqOk4DsPL9T2epinm8zmm0ynSi26G4XDIc6FYWM/zuBPT8zx25OV5zpCM3Ku3bt3iNaM42clkgvl8jkePHrEzlCJb6VooXtT3fbTbbXzta1/D7/zO70DXdfzUT/0UyrJEmqYMQz3P43UnOE+P+77Px9G10/VqmobZbMbvjzRNoaoqd4oSiA/DEEdHR0jTFLu7u/A8jzs1qZ+R4DdBPbrvFG9LTlGC7DSfoijYcej7PoNK0zTZHbtcLvl9Yds2d4Tqug4A8DyPuyqXyyXSNOX5CyE4epUAJcHL69evc+xyHMc4Pz9niGuaJuI45jWRkpKSkpKSkpKSkpKSkpKSkpKSkpKSkvoQ8HFTJ+EmYHiVg/Gq59YjWtfB3Pr5m8Bwfcwm3LwK8K2PvQ4oaXz6rGkaWq0WqqpCnufI8/w9rsnmmqxfW1mWl8DpujuzOWdy4DXnp6oq2u02u9moC3A+n
3OUJQB2DlKUpud53Plo2zY0TYOmaTwGubgITFGUahPq3Lt3D6PRiIGOaZrwfZ/HBcCwiEAVRVLmeY6trS0YhsHnCIIA7XYbtm3z9Xe73Uvdfr1eD/v7+/B9n+NTKTLTtm3s7+/D8zzkeY5XX32VgSmwitd0HAcAkCQJlsslux4Nw0AQBNz5VxQFRqMRd2WSS7UJsRVFge/7/NqqqrhPsdPpYDweI4oidhIOh0OUZcn3mADjZDJBFEUMaXVd5/t3fHyM4XDIa0NrQRGf0+mUI3QnkwmKosCzzz6L4XCIL3/5yxiPx3jiiSdwcHDA+426HQGwq5RiX1VV5f5EOoeqqgxSp9MpX5Nt2yjLEq7rIs9znqthGPw17b04jnFycsLAmiJXFUVhcAqAY3AJ4NHaUBQwda4SKKe42DRNcXp6ijzPkWUZgiBgRyjtxyiK+N4ROKZ7QfHBdV0zQC/LkuG+EKsuymY3Ka2baZrQNA1pmnK88fv9nJOSkpKSkpKSkpKSkpKSkpKSkpKSkpL6KOkDx66+X0TqVa69dWBJcIeg1zq4pJhO0nrX4zrgbDoGm/MgcLQejXhV7GrzcZpb8zlN01DXNQMbihxdP/e6+7EZu7q+buswtHm963PM8xzD4RBCCIZpYRii1+shjmN2b5GrLM9zGIYBXdcRRRF39rXbbdR1jeVyiU6nA9u2cXJyAk3T4HkeFEXhuFSaI8VM1nXNLj5yDQoh0O12cX5+jqIo4DgOhBCI45hjSukamp18FHWapimKosD5+TkD0F6vx+45goZpmjII3Nvbg+u6ODs7w4MHDxjuUgcgQT5VVTleleJTy7LktaWxsyy7dH9oznVdwzRNAKvoUIKnVVWxY47cphQzSn2H+/v7ODs7w87ODs7OztjRSetAc9jZ2eH9ZhgGPM/DcDhEGIYM/uI45v7NoigAALZt49q1a/jGN76BT33qU/jMZz6DF198Ef/sn/2zS5GrWZZxnyFBaOr/JKhN+2E2m6HT6UBVVYbHBFUty0Jd1wyN6dp7vR6DRgDcp0mdoWmasvvVMAzMZjMkScI9pZPJBNevX8d0OkWv1+P9u1wuuYfUNM1L++Lhw4ewbZv3OQFL27YvRQCTY5fevwQm6X0MrNyR0+kU7XabgauqqhznSu5c0zQZaFL0axN6S0lJSUlJSUlJSUlJSUlJSUlJSUlJSX2U9aE6H4H3uhebHZDr8A14Nzq1CeponHX4dhV4XH9sHeqtw7pN7shNsK85ryYcXZ8vRcdSlGlRFJeg5KY5bIKN3++ammPQOReLBc7Pz6HrOsdukqOL5lCWJSzLYnBmWRYmkwnHlnY6HXbCkbuLwCLFtDqOwy4v6sbLsoy/D4KAXYJCCJimiTAMGTBR3C1195EbjkAoQUsCdBSLSm5Ay7IAAFEU8fUQtMuyDIZhoNfrYWtrC6qq4s033+To2LquGd7N53O+10VRYGtrC7quY7FYIIoiBkkUkdqEzU0oTmr2IjqOw3AuSRKOHnVdF2maYrlcwvd9PHr0iAEYAIZ6BALLsmT35vXr1xngEQAWQnC0K8XR0v6sqgqGYeCnfuqn8Bu/8Rv43Oc+h+eeew43btzA9vY27t27x3GrmqbBdV3+TJ2KBAebAJV6NKMoQhzHvP7UwUjQ1DAMqKqKbrfLsDkMQ35f0J6Kogjj8ZjHVxQF8/kcURTBtm20Wi1eE9qTdH9ob5HjkFyKBKBN0+T1plhXun+0HxVFwcnJCfb29lAUBeq6xrVr1zAejzEYDLC9vY2yLOH7PqIoYmC7XC6xv7/PccHUa0pjEKifz+fv+VkiJSUlJSUlJSUlJSUlJSUlJSUlJSUl9VHUB45dbWrdKUjaBPk2QT9yo63HjhJIuirWlAAXqdkJuQ5AN815fU5XzXMdQDYdc+SGItCyPj5dm6IoHE3ahJvNtWqe4ypXJwEgVVUZrsxmM4Z8BEGou09RFIZH5OSiTjxgFcOpaRrCMOSxq6rCyckJBoMBwjDEzs4OTNPk3sIgCBj82LbNAI+ceQQzLcvCbDaDqqoIguDS4zR/Gpc+6J6WZYl2uw1VVRlazudzHB0dIY5jGIaBGzdusCPw9ddf5xhNgqGtVguKorAzsenk1HUdrutypCYAdvU178k6iCZYeHh4iMViwc/Recl1SsCNXH7k3CTnXlmW0HWduwaTJIEQAsvlkmNDya1KrkwChN1ul9fO8zxey/39fXzzm9/Eq6++CgB44YUX4LouuyTPz89h2zY6nQ4sy+JeyjRNMZ1OEccxO/zCMMRoNOIY1OFwyGtW1zV3Y0ZRxDCZHLU0V3KK0rpZloX9/X2oqoowDLmfk4CyEAJRFLFztN/vc+wswVl6n5MDlO5lURRotVowDIPvHQF4inXd3d1FkiT8Pjg7O4OmaTAMg0EjOTqpy5JAPMFHiqqldRqPx4jjGGEYvudnjJSUlJSUlJSUlJSUlJSUlJSUlJSUlNRHUR/Y+dgEbaRN7sKm8289lpQeJ3hCjzdjV9fdgk1nWhMwXuV4pLk2QV9zPs2x1x/bFBu7LuqJo2togsz3i3ltukI3uUU3rSOtFUVFWpaFxWKByWTCgEZRFHZrUfQq9fTVdY0wDC859poxqLqu8zWQizEIAoxGI3azGYaB09NTBnlFUSDPc+zs7AAATk9P0e/3uUePXG5VVbGrcTqdskuOQNxsNmM41+/3OT623W4zvC3LEicnJ1AUBZZl4datW1AUBbPZDG+99RY7PYUQCIIAlmXB8zwGrePxmPsSqasPAIIgYHC4WCzes08IOAMr96DjOBiNRmi321gulwzMyOXXarUAALu7u2i32wyymi5CgnedTgeLxYJjYs/PzxHHMVqtFsIwxGKxgOM47K6k7kHae+Q6tG0btm3jN37jN/Cbv/mbeOqpp3Dr1i1cu3YNr732Gq9ZGIYcIUsuQdpPBCIJKiZJAtu2+T6Sc5NcpRRTSuCdwGNRFLAsC67rYmtri4EpOSvpHhDMo45Filrt9/sYDAY4ODhAHMcc6Uv7bbFY8Nxo3MViAU3TEAQBbNvmfU7vieaamaaJs7MztNtt3kd0zQCwXC65DzQMQwyHQ3Y5ZlkG3/cxmUwQBAHCMGTALiX1k6AH8wo3/9I/wf2//sd+3FORkpKSkpKSkpKSkpKSkpKSkpKS+ojqQ8WuNgHaOvxb/3pTH+I6mCStj0fHrsdgrp97Heatw0jgXRC5HqdJ4zW1KSYVwKXuQgI3mqZdij5df33z+/V5rbtFmw5JGpPOT640GqPVajEABABd1+E4zqVuQMuyYNs2kiRh0EOuQOrES9MUcRyj3W4jTVO4rss9kIZhQAjBUZQUE0qOL0VRcHp6yg5Qmg85RMn5RhBSCIH5fA7TNNnxRn1+SZJwZx9BJXKw0feGYWB7exudTgeO4+D+/ft48ODBpXU8PDzkOFc6RxAEvL7U1+e6LndKArgEqZuuW+qgJEhHIJFkmia7Jmlset14PIau62i327x2VVVxbG6zH7LT6cAwDI7YTdMU/X6fxy2Kgp18nU4Hu7u7UBQFSZKg2+3i5s2b
eO211/DKK69gf38fn/jEJ/DWW2+xMzCKIhRFgeVyCcdx+H3iOA7yPGdIKoRgaLtYLOB5HseOUl8nAWSK493Z2cHJyQnyPGc4mGUZNE3DfD7HbDZjGByGIba2tlBVFW7cuIHj42MG0LSvp9MpFosF92v6vo8sy7i7lDorae96nsd7lxyx9LOAOkgpwndrawue52E2m8G2bYaLTYcp9X/SXrBtmx2ttCeBFYwlkC0lJSUlJSUlJSUlJSUlJSUlJSUlJSX1UdcHho/rrkHSOvRbB4nN4+i19DzBKXqcnI7r466P0fy++dimc6xD0aviTtevpwk1m9dMUJTcj+RMbALOTQ7KTfO56roI2DVdbxRpWZYldnd3MZ/PGW4VRQHP8zAcDtmhdu/ePbiuy04xcteVZYkwDLFcLmHbNkdPAmCoRF19BAKpN5COpejPpvMtTVMkSYK6rtHtdhlaUbQnudlM02QgZBgG4jjGaDRit59hGAwcx+MxqqqCrus4PDxkMPr6669juVxe6nsk0DidTqHrOnzf507O8XjMcySomuc5+v0+Tk5ONu4jmqthGAjDEEEQIE1T7iOk9Wt2OMZxjKqq4Loux4fSuYqiwHg8Rl3X2NnZwWAwQJ7n8DwPdV1zJGkQBAzlOp0O0jRl0Dwej9kZOZ1O8dRTT2FrawtvvPEGvvjFL+LFF1/E/v4+nnvuObz++usMCxeLBc+VXKjNcxOsXiwWfA2KonAPJbByuNJejqIImqZx5yYBwclkAt/3Udc1dF3nuNKiKDCdTtlVORqN+L3jui47YsltS+eIogiu62JnZwdFUeDhw4dwHAemaaLdbnN0r+M4qKoK8/mcHb2WZWE0GkHXdUynU2iahuPjY+7PJJekpmkYjUZIkoSdvuTqJMCapils2+brpveSlJSUlJSUlJSUlJSUlJSUlJSUlJSUlNSHdD4CmyNJ192F6w5G0ibn5Cb3ZDNqtfnaq76+Ct5tik296rWbHqeP5nzoOXL8UUTo+njNOWwCW82vN7lESdTP2G63GXgOh8NLri46l+M47PojWEn9hwQaCRBSv91wOGQARIBxsVhwpx05JlutFrIsg+d5DPKKomD3ouu6UBSF4z0JJHmex44/OobGOTo6gmEY6Ha7iOOYoWNd11gulxgOhxwfure3x3P4zne+c8ml1uv1GFxTbCvFfM5mM/T7fcznc5RliSRJoGkadF3n/s7mvaF7B4CBqmVZfOzu7i6D4TzPIYSAbdtotVqYTCZ4+PAhfN/ne6nrOkfFdrtddDodAOC+SnKXGobBMZ+0tyaTCdrtNkzThKZpaLVaGA6HmE6nHN/a6/Xw9NNP480338Rv//Zv44/9sT+Gj3/847h37x6SJOFYVdpjTz/9NB4+fIjlcgkhBJIkgWmal67Ftm34vs9do9SjOJlMuDOTALzrukiSBL1eD+12G7quYzKZMPAjl63runAcB5ZlIc9zdLtdDAYDzGYzLBYL3ktpmsL3fXYxUkQtdT12u114noezszP4vo80TTGfz9kR7Ps+uyIJeNLPpjzPuTf09PSU971hGOzYpON7vR6SJMFwOEQYhnxv6J5OJpP3/GyRkpKSkpKSkpKSkpKSkpKSkpKSkpKS+ijqQ8HHq6JNrwKPzc8ALrkc16EkPb8JGK6f7/0iX8mF2HxN8zGCVN/PmbmuTaCS4lfJ/dUca31NNo25vi7N66FzGIbBrjRyBpLTjOIwCQSSa5DAH0WhpmnKUIscX9T/RxGgeZ4zKAvDEKZpoq5r7h1MkgR5nsP3fY6tPDk5gaqq7FxTVZWdjVmWodVqwbIsbG9vI45jTCYTWJaFuq4xmUzgOA470whikSNxPB5znKnnedjf34dlWRgMBnj06NElgG3bNrszx+MxdnZ20Ol08PDhQ3Yi1nWNKIpQ16seTM/zMBgMGGI2YXNzbIoQLcsSvV6Pozvp/juOA8dxOOaU3HOmaWIymWBnZwdJkiAIAiwWC5ydnUHXdXbdRVF0ae3JaXhycsJ9h7du3eK+T8/zON50Pp8jz3M899xzePz4Mb70pS/h4x//OO7cuYPbt2/j+PgYiqLAMAyUZYmyLDEej7lnkhyzFM1qWRZD1TAMMZ/P4bouz9WyLAbPBJqrqoLjOLBtm2GtruuwbRtFUXC8bnONAWA+n2N7e5sdi9TLef36dcznc2iaxl2LrVYLQRCg1+uxc9F1XeR5DsMwOPaVnIrD4RD9fh95nnO0MLl2aQ/cvHkTSZJgNBphuVxCVVW02212+5K7saoqzGYzBvrL5ZL3k5SUlJSUlJSUlJSUlJSUlJSUlJSUlJTUDyF29f1iQ+nzelzperxp8/h1OHmVW3BTN9+mGNNN8HI9VnMTSNwETsnVSC6t9e5HAnwUIdvsa6Traa7FOrBoznX9+pqAkzobVVXF3t4eyrLEvXv32P0ohGDHF0Esik0NggBVVcGyLDx69AgAUBQFd/wZhsEQhpyUBA8pOrXT6bCzjKJSi6JgiNbsJkySBK1WC47jcJSlaZrckZemKUe1zudz+L6Pvb096LqO0WiEKIoYoBKg7HQ62NragqqqePPNNzEcDvmeqqqK3d1d9Pt9drctl0sA4I7D09NT7O7ucnwpwTTqX2zea1r7qqpQVRXSNEWv14OiKIjjGNPpFO12m7sl8zzH+fk5nn/++UsxqxR1O5/PkSQJw7Hm+lHv4M7ODubzOR4/fowgCOB5Hnq9HsqyhOM4HD8aBAHDMyEEg05N0/Cxj30M3/rWt/D1r38dh4eH+MQnPoGjoyNMJhN2f5qmibOzM3azAsB0OoXv+9B1nWNMKap1b2/v0l4kJytFn+q6jsFggJ2dHSwWC7iui8ViAUVREAQBptMpz4+g5Gw2Y0BNa9KE4r1eD7quo6oqhGF4qd9xsVjwWOSCLYoCQRAgjmOG6L7vc68n3d+yLNFut6FpGrspyXlqmiaiKMLx8THiOIZlWQiCAJqm8fuv+YcNBDulpKSkpKSkpKSkpKSkpKSkpKSkpKSkpD6k8/Gq6FPg6g7D9+twXI9H3QTtropmXf9+vVexCRyv6nVszuGqOfq+j09+8pP46le/iqOjI+4ZJDWjVwn8NeFlc72+3/Wsi+CU53kcRUlxlkEQcO8jRaGORiN0u10+L82HQNvOzg7HnE6nU+47pJ47AqrUmViWJaIoAgAEQcCxrdSd2O12UVUVQytyXJIIggLgaEvXdXF+fo7t7W129pFzkVxrk8kEZ2dnDENv374NXdcBAG+88QbiOOZz6LqOXq/HrkCKEDVNk0ESXQfdAwJaBJ1oH2wC0uTOJMddFEWwbZvhJLDq4zw/P2doR2N7nseglaBesyORHJSe5+H8/BxlWWI0GiHLMvT7fe4YbLfbODs7g2VZ0HWdAWRZluy8ff755/Haa6/hq1/9Kj71qU/hiSeewI0bN3B2dsbzJRi8tbUFx3GwXC7R7/cZomZZhk6ngyAIGKTSe5FcmXEcs1uUnJGLxQL9fh9FUaDdbiPLMu58nM1myPMcrVYLURTBNE2oqoooijAej+H7Pg4PDzGfzzl
ulq4vjmNsbW1x3yYBdYoBJsicpin3n1KcrG3bAMAu3a2tLUwmE3ieh36/z6Dctm12ZxKUJ7BLQH57e5tdxHmes3tXSkpKSkpKSkpKSkpKSkpKSkpKSkpKSupDwMdNMan0eBP4XQXT6BgCNk0wR89f5Xxch4nNx9YjVNePbcaevh8wbYrGrOua+wZ/8Rd/EZ///OdxdnbG4InOn+c5gzZVVd9zTvp6UxRrs2ew+Vqaa5ZlCMMQvu+j1+thOp0CAPftkYvt6OgI3W4XrVYLYRhyL+BoNOL+w8PDQwaI1L0XhiF0Xef+O+ovzPMcaZpie3sbjuMwKCNwRvGc5LykvsZer4e6rhlAlWXJsZXUW0jxmTSXwWDArj+KcSVw6Hkebt68CVVVsVwu8d3vfpcdqATIyA0ahiHvKzon3Z/pdMouTGAFlZMkuQQymyCbeh2BFWDOsoy7JGkt5vM5OwqpB1FVVY6gjaKIY0kpSpSAsKIo6Pf7HO1pWRY7Rum6p9Mp0jTl/UW9kQQO2+025vM5R+XevHkTr732Gr70pS9hf38fL774Ir73ve9BURScn58zaCMQ2Ol0uP+TroccrATvyNXq+z6GwyEAcFxslmVYLpfIsoyBJfWh0jFFUSDPc17DOI5h2zb6/T7HuJZliaqq0O12MZ/PYRgGx86SA/Lw8BCPHz/m1xEYHo1GDCyzLOPI2yRJeN2LosDJyQlM07zkGKWo2cFgwO9duh/kjqWoYOoipWui9ZGSkpKSkpKSkpKSkpKSkpKSkpKSkpL6qEv5/ods1jooa0af0vNNoLbuIrvK7bcem7oOBK9yMDb7I9fHaT5HMGt9fIpKpc7D5vkIPhGMo76/n/7pn4bv+5fiVQlwlWXJIKXpfmye7/26H9fBF6ksS9i2zQApDEPEccyRnvP5HIvFAkmSwDAMfk2324WqqrAsC67rXuo3dBwH3W4X+/v73F+nqips22YHHAE+cpRpmoZ+vw9FUTiu1HEcBmamaUJRFO6ItG2buxDJWZkkCSaTCYQQ3NG4vb2N7e1thlUEfKMogmEY6Pf78DwPnufh5OQEDx48uLSmnufBdV0GxrPZDKqqcldjHMfQNA2u63LMrK7rmEwmmE6n7LBtdnbS2HQeivCkNSEXYxAEDL7m8znOz88ZMmuaBtu2eZ2yLINpmgz2bNvmLsjt7W0EQYBut4t+v48wDHmPU9xov9/neNSiKBicUoQrAOzv78M0TXz1q1/F3bt34fs+nn/+eZimCcuyeM9Pp1OMx2PuCK2qip2KqqpiMBhw9C1BTsMw4DgOqqpCURSI4xjL5RJxHCNNU9y/fx/D4RBHR0cMTdvtNkNYgq17e3tIkgSz2QyGYaDVakEIgRs3bvD7pSgKhsRxHCPPcwyHQ4btrVaLYffOzg4sy0KSJFAUBVmWIcsyvqfkWrRtG0EQwLIsjk5VFAXL5ZK/p/MahoHlcomiKOC6LmazGcIw5Chi6kuVkpKSkpKSkpKSkpKSkpKSkpKSkpKSkvoQ8HGTu7EJGjfBs2bkKgHL94tuvUqbuhIJpKxDPgCXHqfjCTBuij5dd082z0HRnFmWYWdnB5/85CcZKjXPQ/CCuvzW53SVI3MdnjaPazr3kiRBXdfwfZ8BFsV9jkYjft1isUAURRwbGYYhTNNksOi6Lvc0KorCEJDA5GKxwGw24z4+iqck52GaptB1nbv3COJQdKoQAnfv3mVARRGk1K/n+z6WyyUWiwV0XYeu6xBCoNfrodvtotvtsuuN4BLFl7722msIw/DSOvb7fYzHYwZn7XYbiqLg7OwMs9kMiqJgPp8DAMIw5E5JcvQRLKaPdUVRhOVyifl8zo7MMAwZTFEv4Xg8hmVZcBwHrutC13XYts3dlk888QT29/d5vxGA9DwPg8GAQTety3w+h2VZyLIMs9kMy+WSYWen00G/3wcAhr8EYu/cuYN79+7hq1/9KpbLJZ577jmOCiXH4GAwYLckAXjTNOH7PmzbRqfTQRzHqOsapmmiqiqcn58jz3MYhsEwmoB0t9uFpmk4Pj7GbDaDbdsIwxBhGEJVVfR6Paiqyntvf38fwMqdOhwO4XkeqqrivUvu21arhU6ng729PXQ6HR6X4lYHgwGGwyHKssRwOGToPZ/PMZvNMJlMcHJywvuMHKqLxYJjdMmhu7W1xVG29N47OTm5dOx8PkdVVciyDL7vv/8PLSkpKSkpKSkpKSkpKSkpKSkpKSkpKamPiD507Cp93gTz1sHapq+v0ianYPNcV0WuNh9rQsSr3I7r8a+k9f5Jku/7sCyLx7l9+zZmsxl+53d+h6EEgdVm/KqmaRuBZhMsbgKOzXkCYIdbp9PBeDzmYzVN4/hIipecTqfsXKQeO9/3MZ1OGfDM53OOQ6UuSQJ0BPnIRVlVFeI4Rq/X487EJEm455FcfoqiYDwec/xpHMeIoojXzvd9jMdjXoc8zxEEAQPU0WgEYOXcG41GHIWq6zp2d3cRBAGSJMG3vvUtZFnGa6MoCra2tnhNxuMxAzJaszAMkSQJOw+73S479sgx2Fzz5j0gx2C/32dXJUVwktuVAC1FrY5GI+6d9DyPo07TNEWWZfy953koyxLn5+cwDIOhWVmW7NZdLBa8lxzHYTBJkbRxHCNJEn4syzI8+eSTGA6H+NrXvoaXXnoJL730El544QV8+ctf5v5K2jMEhdM0BQAcHh7yealL8+zsjJ2LBFzLssT+/j7Pj95PW1tbyPMcURRxtCsABpLU13h2dsbvlyzL0G63Udc1wjDEeDxmKKhpGgzDgOu6yPOc4S/1jhIUJxif5zlfn6Io/EcDruui0+mgKAq+9+TSpXHpXpLb9+7du1BVld83NDatxQ/6hxNSUlJSUlJSUlJSUlJSUlJSUlJSUlJSv9/1geEj6f3AH/AuKNzUtbgen7oJXDaPu0pNyHgVcNzkKtw0/lUQgdySFNVI16TrOl588UXMZjM8fvyYARb1QBZFwa9tAtr19Wtef/Px5nM0phACWZbB8zyYponpdMqddbZto6oqdDodVFWFNE15DIJhTYCSJAlHxFJnI0WmUiSlZVnQdR3L5RJlWXKXnmVZ3EtJDsgoitBut9Fut7FYLBhWOo7DEaUU8UkuPgKzWZZhMBig1WrBNE2Gh9PplIHb9evXoes6zs7O8PDhQ14b6vzb3d1Fp9Ph3kLquLRtm8+paRqWyyUDW8dxGCbRWM2oXroX1G3pui7DRYoKPTo64ghOwzAwnU5R1zXDwOl0CtM0ea0p6pZcn+QsJeiraRo7UQmO0fm2t7cRxzHPnSJty7IEsOoFzfMcpmliuVzixo0b+O3f/m18+ctfxp07d/Cxj30Mb7311qXxoyhiuKeqKoIg4J5MuqbFYgFN06CqKpIkge/7DAIJ8k4mE3Q6HXbSxnEMwzDgeR6/X5bLJQzDYNC6WCwYBh8cHHDsq6atfjw1nb4E1Wn/UsfpcrmE67pYLpcc3Uoxy9vb2wjDkK+FXLjUIen7PkNMcvzGccwxxUII7sOs6xq9Xo9dotSfSvtNSkpKSkpKSkpKSkpKSk
pKSkpKSkpK6qOuH0rnYxPYNWNLNzkMm68lx9B6X2Nz7PXx6fnmZzpmk4OwCY8I3l019vqcm12OANg9SB1+FN8YBAF+4Rd+Afv7++z8otdQ9Oo6nLjK3XjVtdO1kROu3W6j1WohjmOeq67r7CZTVZVdXv1+H3meYz6fc0xmHMdYLBbc10gdf2VZYjwecycjudEo6nOxWGAymcBxHMxmM+R5zs4913XR6/W4E9PzPIa15MrUdZ1BrOM4SJIEURRxryCd23Ec9Ho9ZFnGkGpra4sjPd9++20cHx9fWlPXdTkWV9d1WJaF/f19ju2sqgqKouDg4ABBEPB6R1GEOI65I7HprN3k2A3DEIvFAgcHBwzfqJsyDEPMZjPs7u5yjGpZlgiCALquI8syPHr0CCcnJ7BtG91uF2VZIooipGnK93g6nbIzj/aEaZqYTCYcJ9putxGGIQaDAbtDq6pCq9WCbduo6xqe52F7exu+7+O3fuu38Prrr8OyLHzsYx/jKFUA6PV6HA07Go04olYIgXa7zZ2LBOKuX7/Oe4x6ISnel6JKKXKWnLnb29vsntU0DUEQ4PDwEM8//zwsy8LOzg4D6vPzc4a0FI1KILPptqV7QmCZALJpmvzeOzk5QZZlWC6XqKrqknNS13W02210u10AgGVZ8DwPRVEgDEO4rgvDMNjdqigKDMNAmqYMWJfLpXQ+SklJSUlJSUlJSUlJSUlJSUlJSUlJSV3oQ8WubgJ+wGWI1wSL68c2n1sHkHSO9fNtikJdd15u6pLc5Cpcd1g2tQ6dCJaZpnkJTNLz29vb+Nmf/Vl85StfwdnZGbuq6DrJ8abrOo9LTshN/Zfr86I1Imji+z6GwyE/P5vN2A0XhiFDGIqFNAwDx8fHaLfbAMDzSdMUmqah2+0yJDVNE61WCwAwmUwwGAwYarmuy51/VVVx3Cc5Hs/OzthNV1UVdxMSOGvCujiOoes6AzuKZBVCcMchue8Mw8CtW7cYXL755puXgJsQgvsqkySBpmmwLAtRFKGqKrTbbQyHQ3ZgUrckdRHmeY4wDHnMq1RVFXc8npycMCzd2tpCr9eDoigcK0rOR3ITTiYTLJdLtNtt+L6PyWTCIJs6NIMgwGQyYXef4ziIogjz+Ry3bt1CGIY4ODjgeFG6DnJ0lmWJyWTCca4ECJ955hl87Wtfw1e+8hXcuXMHTz31FL7zne/A8zx0u13uv8zzHFtbWwwVd3d34TgOJpMJfN9nZ+94PMZyuWRnoKZp7Kqdz+c8Bu39Bw8eoNPpYD6fw3EcLJdL9Ho9zOdz1HWNZ555BpPJhNef3mt7e3vcnUlRup7nsXu2+X4oy5JdmK7rcpSqqqrs7LRtG2VZXnIRE7jsdrtIkoRdjjQPckQ6jsN71vd9JEnCLk5ynUpJSUlJSUlJSUlJSUlJSUlJSUlJSUl91PWB4SNFDm5yhgGX40+bz29yK25yUDbH3+QObOr9oF3zdc3nNzmV3m8uBFiazi6ChzT+4eEhfumXfgmf//znMRwOGUDSa5odkJvm0lyjTbC1rmt2c5ELjMYlJya5zFRVheM4KIqCXXTb29tIkoShm2VZePToEWzbRpqm3F9IUaEEtlzXhe/7CIIA5+fnWCwWCMOQOxypb+/4+BiKosA0TY6lJPcYRcNS/6EQAo7jwLIsaJrGHX9RFGE4HMIwDOzu7uLBgwfc83ft2jVYloUkSfDNb36T3aRlWUIIgXfeeQe2beP27duwLAvD4ZABqaIoHAebJAkURWF45zgOw90mmKL7TIqiCOfn57hx4wYAMKgajUa8nsPhEHVdQ1VVnJ6ectdhmqZwXRdPP/00FosFQ9wgCNg5R3s+TVO022125SVJglarhcFgAN/3OQaVYF0cx5jNZuza3NnZQbfbxSuvvML38M6dOzg9PcVv//Zv48UXX8RnPvMZ/OzP/izu3bvHPaLn5+fwfR8AsL29zQBzPp8jyzIMh0O0Wi2GtEIIdLtdpGmK5XLJwLWuawwGA46PpTWmGNPd3V2OsV0ul+xkJGBI90bTNN6/tM+uX7/O3Zh0nwhg7+7uwjAM3L9/H0VRXIoFJpcvQUJd15GmKRaLBXeNkit3Op1if38fvu8zbO73+9zlSfHK5GSlnwNSUlJSUlJSUlJSUlJSUlJSUlJSUlJSUh8idnWT05DAw1VuSHL4rbuENkWONsEPjdPsj2yeuzlOU5ueW4d563Ncn0szspWgITkQaY7NKNf9/X38wi/8AtrtNkPL5vUQ3Fq/jnURQFxfY3qO+h0Nw+A+RIripG5K6szr9/twXZeBH8VSttttBi/klhNCwPM8jkG1bRuHh4cMNmmM7e1tBEGAqqo4ctX3fRwcHDAgJMchgUPqqDQMA51OB5Zl4eTkBKPR6FJ8pud5aLVaWCwWmM1mDOGCIIDneTg5OcGDBw+QZRmSJOHo1vPzc7z88svs0ut0OuwyJOdmszOTnJ/U9zefzzledFM8L4Fj6sCkXsIgCLg3stfrodVq4eTkBDs7O7AsC0EQcDdhmqYMjGkPEDyu6xrb29vsJiUXpa7rmEwmSJIEruvi9PQUDx8+5KhSx3H4umhOYRhy/GoYhqjrGk8++STG4zG+8pWv4Pz8HNevX2cHJx1DvZZJkiAMQ2RZxu5EWoc0TXmNJpMJd1Zub2/DMAzcvHmTXaie5/F75fz8HKPRiMEgOW3jOEaWZVgsFuxujOMYYRhiPB4jTVNeo6qqMJ/P0e12YZrmpX7Muq7hui663S7quuZzkXMxSRJMp1OGqY8ePUKWZXjzzTfx1ltv4ezsjNdgPB7j7t27mE6nDHcpKpfeb0mSwLIsjMdj6XyUkpKSkpKSkpKSkpKSkpKSkpKSkpKSutAHho+bYk3XYQ19UFRpEyJ+PwfjustxvYePPpqAjwDM+tjkYmueo/nadai5Djfpg1x6dA5yNjZ7LhVFwc2bN/HzP//z7PYiRxs5FwncNsduno/mvA4pae6DwQCj0egSkFkul4jjGEVRoN1uwzAMKIrCMaZZlgEAut0uDg4O0G63URQFut0uR4ZSdCqdh6I/CRwToDJNk6+BeiaDIOBYToqiJAC5XC4xHA7ZIblcLnH37l28/fbb7DhbLpdotVrcD6nrOoMfTdPQ7/fR6XSgKAq++93vYjKZsFOSnHsEhQ8ODhj0qqrKIJdgGUW5ktNzb2+PY2YpFnfTXtR1Hdvb25hOpzg6OoLjOOxCnM/nWC6XDHypU5OgInVQttttqKrK+2C5XMI0TQa/5G7c399nUEndh61WiwEwjVtVFWzbZjBL/YwUiarrOlzXRVEUEELg5s2bePnll/Hd734Xmqbhqaee4rhX13WRZRls20aWZTg7O2MHYxO4+77P/afU/bm7u8tOztPTU+5CTNOUQbOu6+j1ehBCMFQUQjCAVBQFtm0z4LNtG7quc/dokiQAgFarhWvXrkFVVbiuy9cchiGOj4+haRriOObeUdpjlmXh+vXrCIIAURQxLKX3CzlUCdrSHh6NRjg7O0Oaprz+URRxvPHh4SGCIFj/ESklJSUlJSUlJSUlJ
SUlJSUlJSUlJSX1kdQHjl0lNeFYM4a0+Rw5pZruPYrKJLDVfC1Fmr6fM3CTa3HT1++nZjTsJqfm+jgEmppdjc2x6HWqquKJJ57A2dkZvvOd71xyu+m6jrIseYxN7kxFUfiY5jzoa1VV0ev1GEB1u10GdlEUcfymqqrcAxlFEXzfx2KxwPb2NkzTxGw24+5FAFBVFYZhwPd99Ho9nJ2dsSuwqioURQHDMBi+6LqOVqvFsZiz2YwhGcVm6rrO0afkYJzP52i1WkiSBI7joCxL2LbN8acEwE5PT1FVFbvpHMdBmqb47ne/yx2XBGib3YkE9WitmxG1BLmCIODzU3QnueWaMJscr7Tf5vM50jRFEAQ4OzsDAHZN3rp1i913rVYLpmkyBDs9PeWI0q2tLSwWC3iexy7AOI7Z7WnbNoqiQKvVgmEYDCaDIMDjx4+xXC6hqioD0SzLuHuR3lcUuxuGIV+r53m4du0aHj58iM997nN4+umn8cQTT+DOnTu8H6mzcTAYQNM05HnODlEC2bRHCdyNx2O+zwTl67qGbdsMosntWpYlRqMRR/3O53N4nsd7wDRN9Pt9DAYDRFGEfr/Pe5OAZBiGGI1GMAwDcRwzpCSgO5/PMZvNONpW0zRsbW1dcpu2222Gwnfv3mV3Ku31uq7heR6WyyVc1wUAZFnGEa51XcP3fURRhOl0ynG1UlJSUlJSUlJSUlJSUlJSUlJSUlJSUh91fWDn4ybXYtOpR66s5vFXOQvXex03uRzXx6Dvm+dYP6bpkGw6IjdFajbntakjUlVV2Lb9np5H+p5gEK2Dpmn4+Mc/juvXrzNEbPY/Nl2a6+sIYCN8JTdfWZbQNA2WZbH7kWBXq9Xi6yQ4SD2AQRAgz3MGc77vI0kSzGazS+cm8EndjJqmwXVdbG9vo9PpwHEc7OzsYGtri6NhT05OGO4VRcEuMup5FELAMAxkWYYgCPjaFosFVFXlqMswDDEcDjGdTjEejxm+HRwcQAiB0WiEd955h+em6zpM02T3oOu6ODk5wWAwQJIkKIqCwW+WZTg8PITneQDAEbUUtWrbNvdD0j1ogkhd1xEEAfb392EYBgDANE2O+iQoNZvNkGUZd2uWZQnLshjgkksyiiI4jgMAfJ37+/uXolxN0+QeT4qYzbKMoSy5HjudDqqqwtnZGU5OThiK0Z6xLItjXbe2tvCtb30L3/rWtwAAL7zwAk5OTtiBaBgGLMuCaZrsZM2yDGVZwnVdzOdzhny0LyeTCTsZsyzjnkdyD06nU5RlyT2bpmmi2+0yJN/d3eW4VwBwXRee52E6nQIAQ9DRaMQwmLpALcvC9vY2dnZ2OLJ3Z2eHwTq5JhVFYQcjwcw8z2EYBr8nbNtGnueYTCaYz+cYj8dIkgR1XWN/f58BKTlcaf+Px2NISUlJSUlJSUlJSUlJSUlJSUlJSUlJSX0I+Ph+2gT/yDW46dh10Lc+Bn2/3i3ZPHaT+7IJGd/PJXkVGF13M5Lzi6Iwr+q9pONbrRZeeukl+L5/qS+yCU2b8a/rXZRN+NX8TEALADqdDuI4RhRFiOOY+x1brRZ0XecYyevXr6Pf78P3feR5DsuyAAC7u7t48skn0ev14Louu/hOT0+5p5LcbFmWwTAMduQRZKTITwKA1IdIjjGKq1VVFVEU4ejoiPsVKQ6WolnJxTgejzGdTqHrOvb29uB5Hmzbxt27d3H37l2GrqZpQlVVhoCGYeDtt9/G22+/jfv373NnIkExAmZ5nmM6nWIwGDCIJRjZvC8Um0vxwf1+n6NuqdMQWLliPc/jcxF0y/Mcqqqi3+8jDEPM53O4roudnR0GyO12G7PZDMvlkuc7m80QRRGyLMNyucR4PMZoNEK73Yamaej1ekiShHsK4zjmfUyOUALHYRhCCAHf9zkmNIoifP7zn8fR0REODw+xu7uLPM9x/fp1jlYNggCKorATNU1TnJ2dYTweYzwew7btS3G6aZqyIzXPc/6IoghRFOH4+Jj3LwHSVqsFVVV5bymKgsViwetI8aeLxYKvg9yltm0DwKWeTorApYhg6p6kSFfTNBm6h2HI4HNvbw/7+/sQQqDVaiEIAhRFwe8hcvGORiN+j1ZVBd/34Xke7xspKSkpKSkpKSkpKSkpKSkpKSkpKSmpj7o+1L+Yr4Ox9cfXoV5RFJeOWQdsdFzz6+brmzBz07FNEcRaf6x5rk3PNz/WXZMEOwhAApthYhNC7u7u4hOf+AQ76qg7rwkv1wFo0wm6aX3J1TgcDlFVFQM4cq4JITCbzbBYLPgzOdgcx4GmaRgMBuwes22bXWMEBKlTUlVVjEYjhGGIOI5xdnaGR48eYTKZMIQqigKe58H3fXQ6HY6w3N3dRbfbZTioKAoURUGr1cLBwcGluNN+v4+dnR1UVYVer4cwDDna8/r16zAMA7qu47XXXsNsNkNRFAjDEGEY8pomScJdfKPRCPfu3cObb76Js7Mzds15nscgkUAqgaqiKGCa5iXo2uwFJWenEAKu63KsalVV7OKbTCYoigJxHDPcIuenbdvwPA/n5+fsLqWI0yzLOJ63CfjG4zHH4pKDkYBeHMfIsoyvLQxDZFnG962ua449bQK0IAiwvb2N1157DS+//DIA4MUXX0Qcx3w8xfUSpO73+7As6xLwo2sIwxDL5RLHx8fsMMyyjCNtDcNAv99HmqYYjUZI0xSO42C5XPLPBHJE0rmBVT8pda2SQzTLMnZ/apqGLMswnU758TzPsb29zeemtSInMQFIOhdB07IsMZ1OoSgKQ0fHcWDbNg4PD3H9+nXEccygc2trC57nMVAnh6aUlJSUlJSUlJSUlJSUlJSUlJSUlJTUR10fCj6uA7J1F986hFyPZG2CRQJT6+OtOx7XX3dVL+Sm566Kgr0Kaq47Km3bvhSZStGLm2JmSRS/evPmTWia9p741ebr1se56hqSJIFpmgiCAGVZMkh7+umnOe6UYmC73S7HXJITEQA7wCjSczqdYjQaoSgKHB4e4ubNm+wao+Pa7TaAVUSo4zjs4CvLEo7jwHEcjMdjhpjL5ZJdmkEQwLIsLJdLBEGAdrsNXdcZmL7xxhtYLBbsaDs/P+e4152dHe7Xe/nllxn4UYxpmqZQFAV7e3uo6xrHx8c4Pj7GaDTCYDDAyy+/jC996UtYLBZ4/Pgxx6KSwzLLMhwdHWE0GmEymfCaU3ch3X8hBCaTCZbLJZbLJUedGobBjj+KcW21Wuh2u8iyDOPxGHme85oTdCQ4fHp6yq7B/f19tNtt2LbNPZgE0Kjj89lnn8XW1hZ6vR5DcIquJdBHLk1yMFKMqWVZcF0Xd+7cQZ7n+OxnP4sHDx7g4OAAd+7cQRRFsG0bW1tbMAwDdb3qFKX5Ul8jvV8Mw8CtW7fYHTuZTKBpGoqiwLe//W1MJhMoioJOp4NWq4V2u41Op4Nut4sgCBBFEYQQSJKEofZgMMCjR48wnU7ZKUkgkEA7uUypB5RibBeLBe9vXdehKApu3LjB0J1ckORkdl2XQevR0RH3oDqOw72b1MFJfzTgeR4WiwUmkwkeP36M
qqo4Llbq97aEEH9OCFELIf7chxzn1y7G+cwPZWJSUlJSUlJSUlJSUlJSUlJSUlJSUr+HpH2YF6/DQdL7OfqaXYdNhyDwLvRrvv6qc64/T86mTb2N65/X57tpvHVXJQGLqqoYlK7HwNJz9Br60HUdn/zkJxmGUWyrqqoc63nVnJtzacJZXdcZME4mE5imiSRJUFUVptMpWq0WgBWU0nUd0+mUgSC545IkYdh3enoKVVXhui7SNEWWZTBNE/P5nAFkmqZ8bwjC0DkIzFCv3mg0YucarRWBujAMMRgMEIYhuzBVVcVkMsH5+TkMw8BkMuHo2q2tLZimiQcPHuDo6IihLd0HIQQ+9alP4VOf+hTKssTLL7+M+/fv4+TkhCGt53nszuv3++j1emi1WoiiiCHUcDjEa6+9xm5C+iAX4Wg0wv379/Hss88CAPdIdrtdvhbqNlwsFgxdCcrXdc2vEUIgiiK+l7R2o9EI4/GY3ZW6riNJEgghYFkWuwwpijSOYziOA9/3+V5QzOtwOGRIR32No9EIrVYLVVXhqaeewptvvolvfOMbODg4wMc+9jG88sorOD09ZUhI/Y+TyQRhGKLT6aCua0RRxPuJuiSHwyGvt2VZKIqC3agU9RtFEdrtNv8Rwfb2NhRF4eOpV3U6nWKxWCAIAnYX0vtiPp8zxO12uxiPx+ySpQhi2seapmE4HCLPc7iuy12hURQxxHddl+NW6b22XC4ZuNL7mgDvZDKBbdt46qmn8ODBA1RVhclk8p6fVVJSUlJSUlJSUlJSUlJSUlJSUlJSUlIfRX1g+NiEgE3otw4e6XmCjqSm43BTZ+NVXY1NgNl8fh1YNkEjxZ2uA8Mm7Lvq+uh5wzD43DRHcp01ozmb1009jwCwvb2Nj3/84/jSl76EJEl4TSjicf216+vTnOd0OsWjR4+4SzFJEti2jTfffBNJkmB/fx+z2QxpmgIAO9Vmsxk0TUO73b4Uu3p6egrP81AUBTqdDh49eoQwDLG/v/+e+0XRmxTlmSQJWq0W4jhGmqYwTZOjYDVNQ5qmKIoCs9kM3W4XSZIgSRLuWTQMA77voygKnJ2dMUCK4xjtdhvb29scafq9732P90ozrvfGjRt46aWXcHBwgMVigT/0h/4QZrMZXnvtNXzve9/D8fExQ7EgCLhzsN1uc3wquQz/8T/+xwwp6R5qmoYHDx5gMBigKAo8++yz3CG5u7uLxWLBUaM3btzA+fk5hsMhsizDE088AQB8zxeLBVRVhWEYaLfbePToEcbjMfb29vieUB/m1tYWu0rjOMZkMkFZltA0jWNd0zTFdDpFp9OBruu4d+8ex9kWRcHno/WlSN1er4fbt2/j3r17+NznPodbt25xJO1kMsHDhw9x584dPPXUUyjLkh2tAPh+ACtI12q1OKKVui9VVcXe3h50XcdyuWRAnqYp8jxHXdcMmNM05e5EciyS09W2bQwGA3ieB8MwUJYl/xFAVVUYDofQdR1xHHN/ZJIkHOmq6zq7Rh8+fAjHcaDrOs+HeiSDIECn08FisWDYS87i2WwGAAxeCdw/evQIeZ7DMAwEQbDx54jUR1Z/G8A/BPDwxz0RKSkpKSkpKSkpKSkpKSkpKSkpKanfbX1g+NiEePT9JtdfEwauP04QsQke16HjOizcFG3aHH99fs3zEixch46b5rkOS7vdLn9P4wDvAkaKYaXX0uM0X1VV8cwzz+DevXu4d+8e8jxnF9am3ssmjGw6Huu65m4+gna6riMMQ8znc4Y0BCTn8zkAwPd9zGYzeJ6HJElQFAWyLEOv12M4eXJywl191OlHcZJBEHA/YxiGyPMck8mEgZtpmnjnnXcQBAF830dVVezQUxQFeZ5jPp9jMplgd3cXg8Hg0nouFgt4nocgCHD//n3uSrx16xY0TUOe5zg7O8MLL7yAR48eQQgBVVVx7do1/OIv/iIMw2AQlWUZWq0WnnvuOdy+fRuPHj3C/fv3MRgMMJlM0Gq14Ps+Q9DFYsEdfhSJG8cxptMpXn/9dViWhel0yn2PBHXb7TYWiwVHjlJE59bWFl/X1tYWlssl9wlmWYZOp4M8zzkGtN1uI89z9Ho9duwZhoHz83MEQYAnn3wSw+GQAa9t22i329zDScfrug7bttHpdDCZTDiWlNaXokcNw0Ce5yjLEp7n4dvf/jb+zt/5Ozg8PGSn6Ww2w2AwwJ07d3B2doYkSS7tAXLejkYjdDoddnPGcYzHjx9jf3+fAZ9lWUiSBP1+H0mSYDwec3eioihwXRdVVWE2m2FrawuKosDzvEuOybIsGb5SFK2maRx5Svuy3++jKApomoZOp8NdjHVds6PVtm2oqsr3gqAr3VdynNLeIPhLjmD6+UEOTOoOlZIi1XV9DuD8xz0PKSkpKSkpKSkpKSkpKSkpKSkpKakfhz505+M68FvvKFyPEiXY1gRuTdC2KXp0Pd60+XgTRq4DxE3OQTp/c85Xxa0259xqtS7FZzZ7GZvuy6bDshnXSF2An/jEJzgycr37cVO/46a1pdeYpomiKBiWHB4eYmtrC0mSoNfrwXEcjticTCaoqoq7BauqguM4mM/n3JvY6/VQVRXCMMRoNOIoUeq67PV6AMAuvMlkgjiOce/ePYzHYwBgl5thGJhOp/B9H/v7+9B1HcPhkKNFadwoihjWkkPy6OiIIdj+/j40TcN8PkeSJByvSb2Bzz33HG7cuAHf99HtdtHpdHDt2jVomoatrS0EQYBf/MVfxJ/4E38Cv/RLv4StrS1Mp1M8fPgQR0dHmE6nODs7wxtvvIGXX34ZmqYxnKQuTYLetDZf+9rXcHp6ivF4jCiKoKoqrl+/Dl3XMZ/PcXp6Cl3XUVUVjo6OcHp6CsuykGUZ4jhGnucMBSnqVtd1BlgEAW3bxvb2NsqyZNgWBAGm0ymWyyUAIMsyuK6LyWSCe/fuwTRNRFGE5XIJXdcxm814/0ZRhEePHnH343g8Zij72muvYTweM8ALggDD4RDD4RCWZeHw8BC+73MELTktl8slBoMBkiTh7sTz83N885vfxGg04scty8Lx8TFHnFI3KcXKUqztyckJJpMJHj16xH2Ys9kM0+kU5+fnyLIMaZqiLEvcu3cPw+GQ3x95nuPhw4ccjToYDDgOmHpQ6d46jsMQdjQa4fHjxwDALkqKFSanLgFp6umkdaJ9ahgGpH7yJIS4edG9+PeEEM8IIf5rIcRYCBEKIb4shPjDP+A4f1AI8f8QQrwmhJgLIWIhxKtCiP9ACGFtOH5j5+PFY58XQvQvxjsRQqRCiN8RQvybP5yrlpKSkpKSkpKSkpKSkvr9KiHEvyqE+KIQYnbxu+l3hRB/WQhhrh13/+LDF0L8zYuvcyHEr108vy+E+PeFEF8RQpwKITIhxLEQ4h8IIZ7bcN7m79c3hRD/UAhxLoRIhBAvCyH++BXzDYQQf0sI8fji2DeEEL8qhHiCxtvwGufimr518fv7UgjxNSHEv/bDWUUpKSkpqd8NfSjnI2ndSdh0IW7qdGxCxOax9H3TWdh0Ija1fs71uTXPuclN2dSmGNf
2MJ/ZQqYUztDi35W22W1FSYUVCoEuxsn7SaT1I9iAiKUPeyeiNttbGTfKonpeNSaZyzKK2gXCxZa/FiOLBvD+uDTRbne4w2J1C26byQ24JCLGtbY/CeLMswSui1Is6trzMaTUjjmK1JRhIZnHNMxkMuXVzGO08V7jrt8rwCnlZjokApB4w5uzyg1W8zu5jw5IsDNlcnLEYpi/MLaH2e1U3HW24+BsMVhtmESydfZnT6ZXKd0J3bzfjAIXalMY+dzpk8PYTrv5ebWnfz5G99nNZcm1HbcOrUBSaTEWnco5s6FhfnObe2wflTL7JyaZXzl1ZJ0oQnvvo8adoliiN02sLHMYIh7s7S68+wuLDAYgLtNEGpCVopWq0WF186x/rEEWdb7JnrEhlTR6eKkgCWJQyIAN45RuOMPC+wRYF1wvz8DGm8vVgOE0uXEFembh8A8UKeF4ysY1evi1HC8qVVXrywhaDInWNpzxJ37y1BI2Eeeu9ROgDJuq4ivoxVdeSFY2wFIkXbKHZ1DfMpWKs4vyV4L3QiRS81pJFGAa1ImDeGjqksu0I/cRxYarEyzHl1kONd2PfIeebiBGPCMTjnQ91JY9AmxnuLzXPqbFpAKcPcnsN0mdCemWVtbR0Rj6gASYeFMJIQ3RwGJwBVJWWNU9m+DpX+9l5EN2rUqFGjRo0aNfrWqHIk7owDnY7ynAaSsA1rKgA4Df6uFh9ZuQ+nQVzlkpwGZzsjJXeCq6pP0+Btpytuun/Trr/p44uiqP78tPvQljcbVwA0iqI/0dY0gNwJI6vH6SjSa/WvanfnsU+D2Z3xr1dzFV4r3nMaou2EgVeLyr0WmLvWa9eClzt/v9YYfCN6I8DtaoBxun/VZ77doGMDGRs1+tZJKfV3gb8DvFtEPvWt7U2jRo0aNfrT1OuGj0YFsKV18AjqMsoxMZrMBZhlEKLScacR+gqMC86oFgrRmthoci/YwuGdQ8dxiHWkQBtNqjUFFh9YJSnQM7BROM5t5uztJ1grpLHCIxjjSAzkViGiiE2oC5hbF6BhFDMaT8iyHBFVuiUFY0qnIAqvQXmCq8s70iSiKCMinROMltLhpWtXpHhBnIdsjPUKH8XcsP8QSdQrv4gT9iWCVItCVZFAgAoCSQkz9DYkmYJ/UkdGeqp3tx2P05r+Ml1+6Z+mTKXLcbvOI8EdqcLCoNVuMSpriVAvKDSqdKa5PLgdvS/hrtb0ZvpcWFkh3TuLyDAcc8isZbg1YpMt2n5Aog13H97HsC+gHL12i9EkY3e3zdmVDYxRiNL0ul2Gy6eZjEd0u61yFsmUy297/ERCVGZeWEQ8aaQ5Ot9itCHYDU++bklGDhdHHNsc8EA24YJ45noLLG8us7KywV1zEW89dojkxrcwf+M9DMY5Z8+d4ekXX+aFf/8IPttieX1CO+5w/EufZXL2eWJrSdotbjh4EF90ac8vcP70cZ578jFasSbLMyZ5xmSyQieKmWsnDCcTom6PC+fPMykshTLMzS3wgx/4CyyfPc2Bm+4Ip1RryAZ0dcr8wizzczFG76g1UQ2AAiWCFAUr5y/WdUhB0em2SaOkml0oUVPntQTY3oH3eGsp8oJxAbOzPYwS2t0Wu+YKVtcnaDTWBnjulQlz0jtEHFppbDYuz5HbXmT6cH3s3z3DW+67nTaOZ194GW2glRhune/gvXD3oR4ojTEqHLsPrkytNYkB7zxnL2ywkQmJ1kRKU4gn0prUQCvSpJFGl5PcJC1M0mbf0RtClKuzmPJ4vQiiFQsL89iLJxlfeJUvPH2aJDYYG2Doega2WkRXxF6Ce1QrDb4MZFVsu0cb/VnqI8AfA+e/1R1p1KhRo0aNGjW6lnY6F4EaiFV1/a61nbX2CgBYQb4KyFXvT9dFrGoaVpGuFZCstp2u6VjVbqzAZbVf4Io2Kk27/ZxzNWS01tY1Jav9TH++Ao7VNhV8rfq7c5ymHYfTbsTqc1V7067MaYdpVdtxJxCbbn/a1Vnte6e7sXq8Fojc2a/pmNbp7ab3czV3Y9W36X1Mvzf9+/S+rwY0r+V8fK1Y1p1QdSfc/lrbTB/bThi58xx8Pf1o1KhRo0aNGjVq9J+G3kDNxwCBtI7wAlppnAhOhXfBE6lAu2wJiowCJZ52HFEojfKgDeBAJLjhIqVBQs0G7zyR0RQorPMkOkCJrAR5eWZR/ZjIAFLWvnPgEPLCYaKqNmRwG+Z5js4z8skY5zxRlOC8x4tHO7C+PC4d3FmdSFE4R0wEattn50SRFYL4gtxR38GJCAd3z4CKWC4Me2+7m421IVFkSkgLpW0tONfq79bBZYkKji9VujClhENSx4uWz0v4GOBkjSDr5mtWWfVYyvp0orb5iVTvVdtsA0atFVGUUKEtQbDeEZvQRxFwNiweCxsiQNtpjCiFUZ60FdWM05cLQOssg8EG/cVdKAx3LM6wtrLAyRMvsLCwwNbYsmduhrwQut0Ub3JIHOPJkK2tTRYW5nAI3lmszXHWIs7Wd+aOJxNeeuUca+trvP2GMyztb3PDngRmZ6BjGZ9LGZyKWP7yGkdPvsDt4yHvnVW88srzPFaMOHJkLze9+R0cvPu72djY4omnn2ft0gUmmXD8gmZ2dh61eZrzJ17lvtvv5u4bPTccVhzp94nnD3Lh7KusrGzx6plXOOssK2sbYWC9x6PpdfukkeHCcABJitEG1e+RDAa4cc5MK6LX6cKBQ3hRWBdg9tzuA1gixhee5/xklf1HbwgeR1Eld/Z1HC0CaEUUGZwK8F3p0gGrdWUerOcg0+BSB2RXFDlZbskEZvpdlDiGW0MGw5xOy+BHBcokRCbClJt774PPVhnEFWilUDoCBK8tUrpwDx47xDve/wF6OqP9yCeYZFlZJ0aT5xlFXpBX88qXsUEiHL7jbu664Qj5cMDvffj32LwwYGFuhkvZGqNc0CLcEBmMgg3xOB8AY5q0KIabXFxe4eCBfbiiQJfz2YsQtTrs78ELr2Z8z0PvY/nRj3DqKxNi7dFO2HDgvC+NpcG1XF4kaFRwM5dj37DHP3uJyAaw8a3uR6NGjRo1atSo0Wtp2sUIwe037cabdshdK8azAmk7XZLT0Gwa+FVgLYqikFJTbpMkyRVxpVWfjDFXtFnVT5yGZldzSU5vV32+anMamFaPVR3LaSg3DUmnoWDV/2l35LRzcRpqVWBz2s14Nbg4fU6ma07ubHcnIKv2s7PWZnU81VhO72MaZFa/V+N+tX1Mx7DuPKc759PO+NnpPu7U1ZyXOwHj1eDg1wKQ0+NwLRB6tW2no4WnwevrdUZezW15NRC6Ewi/EV1tbK7l+rzW+6/12T9NXasf327O1EaNGjVq1KjRfxp63fCxXkwIFHlBGkd45xjjERVAhpPgkHRlDKIoBQUkkQr1A7XCefCoGhgY8cFd52zgYkpRTC2ElFKkBq7vGbrtGK3DXZdehMAxNd6HCFhlIqTweC94QiwqkyHeC0YbokghLkRDpgYSXRoDyzqPVZ05yhp7WghRrj44PqNYgw2RsJUjr9XtouI26VZOnKa0etXisRqzHXc6
CnhF2IfSVPXkKgV3ogquxOq5D1GbymzHjsoUKFSiSmCiUGXtTNmu9lj3ofq8qAA/jS7vZNXb/QyfLqGlCOARH8a7iuhEa5JuB1dY+t0WkTHbzljAlItKn+d4YqyzxPkYOxnibYHKx7x6+hwXLlwkLywkhpbOuf2OIxy47ijF2inW4hylhGIyIJ8Mg1OvWshbS2Edu1PD7KwnVus4VbChPUoDW5AZy2jomTmt2VjfhK0xLWe55cLLXK80y8WI8499gbOPf44XVjc4v+829lx/Ewv9Fu75L7IyuERXxmRjw0vndpNHHSbasbKZ0hufYnTxFcYb6/gsp8gLYi1keRHmtnNM8ozRyGKdo2UiYh2hZhcoOj10XmA7fTbGGRcvXOS6o31MZEhbKXk2QZIekRtjx1sYbcpzXs6B6fOqSnisNOJ96WYM4yMkYV6oilKWM0cUaAUuOIOLvGCU5WQeut0OXjy79y6xa0/4fJFbpDNbgsfgYvYaEI02Ed5atDbBmasEtMY6ByZi35FjtPbfQr8f89CR25E8w9siLLCdwzuLx2Bdgc0yimyCLXK6i7uY6XUZrZ4jm+SopEOiFb1Oi05L6A3HHLYemQi7tKKnFTOxIm23yAeXmWxtUOSLiHOlYTZcH625XSwdvpVnn3sF64QLFy9iCAv9UVaw6aIw+0VhULgy6FercoHvpeK9U+eh0TdDSqlbgH8IPEgwvH8Z+Hsi8vGpz/wM16g3oZQ6CPx3wPuAg8AYeBn4fRH5JRWKE58EZoH9IjK4Sh9+GfhrwF8Qkd/Z0bf/DvgeYB8BgL4A/KaI/K9fx7FFwH8J/GXgNsL/D78A/H+AfyEiV79dvVGjRo0aNWr0HakKrFQuxQpE5nlOHMcopciyDBGpwVwFIiHEtooIaZoSRRF5npMkCc454jiuaygCtZOygoBRFJEkCePxmDRNcc6RZVm9PVBDOGNMDSq11uVNguaK7aZBYnUs1T4qR2YFx6p95Hl+BfyZdm9OA9ZpeLoTTMGfdBtWj5V7cxo6TvexGgvvPXEc/wnHpVKqBojT4K8CZNMw9Wr1L+M4rs9f1YcK9lbwdCc4rdqr+notuFydm6sB6uq1aXBXHcv0vq4GFqs2qsfqZupp+H01TTtSlVL13Jzu62udq2o8K/dr1ddqvryWdo5NNYeq66Aa/8qJO70NUF9r1bG+Hvfl9Bx4rW13gt7G0dno21VKqSPAK8AHCevv11yDv0Y77wb+IvB2wvo7Bo4D/z/gfxaRydRnTwKHy6ef3PHvn2rqcx3gbwA/DtxI+Eesp4B/LiIfej3H26hRo0aN/uz1hpyPlRPIO0eUpmgl9Ixm0wYogROUCS4jQRCtuZgJE1+wu2sQX0Z+ImQFIbZUa1QJ4byEGpLOC4ULeK0ofHBCKkhFUNYGJ14JVrQBURoTK3LrGeeOSW5xTug5IVIKXdbNE6niRhXBNFmiNvFMnEJHirzw5EXpMqx+dHBEFVbILUysxXtBaYM1KSbphpGVAN62Wd6URUqpKyJDw56F4BrdThUVL3jtUZjwXDxCqGep1PQX32k33HbVxwAjCZGlV3y/LhckUjpYVeirmnZoTrsolUKUwqIZ5pZ+uwUmoig8Ipqk1WF9MmJ2tosyZT3BsiaheMFaxyTPmWlprDPISCAfsdBJiVyL4XCIAIvzsxy5/hiHb7yF2aVdxEkL8g22lifESVgwxIFoBtusAGmMCJioxWhkiIzHaE+v40ArjGgmicHdOsPJ9Ca2tm6A469gv/IlZtYHaGvxa+vcEBtmZ7rc3zF8WIZE2RmO7p1l15s0zz5/gi+fWmWmtcThfQfYv2sXW2ONRbNx8VMcahdsrFkmuWUwzjHaECee8ThjUnjSxGCdC05bBB0ZZDDATQqipE23N0NRTBhlIa42iVvEUUQrjsjFsWvfPvZmNsxdBBFX1n3cdsRCNbdtcCDGKXEU6oqq0t2qKsejl9rJWybvIoAtCrLMknnICuHFM6scXuyS6LAP3Y7RnRRlDLoKBFYOHcUYE+FsUU7r4DFUeIosx5mYhcUl0jQpb0ow6ChBtzpUDlHrBD1zCBXFJfCX4Oz0DimGsLnGzEyfpcgxJOX+Y7eQjwbkT36FZHOE94IuhANKcbAXYWbaLPdmaEVtnLPgXRg7H/62OJvx+BNPcnF9i88++kkunb/IjW+6l4PxhE99/im2ckuhpeyH1BGrYbG9PX7SgMdvto4CnycsLP4lAfD9OPAHSqmfEJH//bU2VkrdD/whsAA8Cvwu0CGAvr8L/JKIOKXUrwL/D8Ii6Vd3tNEGfgq4APze1Os/QFhApcDHgA8Bc8DdBCD5mvBRKRUDvw+8hxJYAhPg3cAvA28B/tJrtdGoUaNGjRo1+s5SURREUVTDjwrYFEVBHMe1aw+owVflzJuGXBWkqZxj0/UVjTEURVEDtQqwTAO+qvZk1cZO0FTtq4KJsO0orGBcBbp2gj64EtpUwK+CPDvdkNNwqihLfUwDNuAKODgNSqfbmIZrOx14FRzdCfumHYZXg0I7a0lOQ9HqfE3DrQo8To/n1dyDrwWgpgHiNNyaPradEHH68WpAc/r5dDvV56fje6fb2vn8avudHqvXAnlXUxzH9fjtdIu+1hhZa6+I+a2ug+q6mgaf09dQdc7yPL+iFurVjufr0ddymH69rzdq9G2mN7QGB34RuAX4HPBRoAW8jbD+fpdS6iERceVn/ynwQ8A7CdDz5M7GlFJzwCPAm4AngF8j/CPTe4DfVErdLiJ/63UdaaNGjRo1+jPV64ePGjRV7UICbCxdQUoFj1BsIIk0hfUYpUm0oms8WjS75nuMRxPGw0lAFCJ4GxZElTNJaY3TmomHWG07jGJgwwpMHEttEzojQuHBFQVOBOuEUeHIrGC9EGkd6g8qgxOD9xPEgS0CdYl0wHVe6iDLgHRESCITHJBVjKYLNR9jrfAlZBQvKPGkkhHrFtoXwb0p23fmiciOjMap+JhAFkFVHkqp+R9CcIrW7sMAOrdLHqq6r+EZUC1IKBc75REFvFlBk+qLftWerp9LCR3Dg6ofhyR86ZnzvPeBWzHe4L1DRxFxp0e+dZFuv0fhPFluURLiY611TLYGjEeUEb0RW+sbrK2tYW1Bu92hk2j2Lc1x8OhRdl93jP7sPGk7JWm1idvtUF/TGEwUoXSENlFwtgoUzmOtI2518ION4HyV6u5Vz3jsGGx51lvv4eAP/iSbG1tsXTrL2TvvYLKxQTIYorxi9nvew0ox4vlnn+Wtd9yHuALEMnf0Vg7dvsy7VQ/TmmFm6TpmF/ogwukXhS/+h4Sjd9/LBfUcrdOX2Ng8zchaZnodRDvaqWKmP8NkOGCUFxiliI3CJ21men127drN3l27mJ+do9NqAYL1IYbYmpRub46FSBieOBEWS86HM1LC8CsidBWYKAIpiHu7iOMI5waIKp2xCpS/0glbr4eUMJmMGWWOTALo3trYgqVuOXt8mD3O4/McMVG4JlXAkMVoA1dkRFLB0HAl2cJSqIhBFnNxU9jfDzDfRC1UFJX2X8EbBXEPbUJcMlL
jUYosR3REe6aLHm9inafwDqcUk9lZZpYWkI1NBpfWQ11WJ/goJTIpnX6HldEGRsLfJYvgxTNev8Qf/4d/x5iE5ZWL6HyI5zrSWcc4d+S2wGmH1lFtFFUotI7CNSEl6Ffh70ajb5oeBP6xiPy31QtKqV8hLIb+N6XUH4jI5tU2VEolBDi4APykiPzmjvcPTj39VeBvAz/HDvhIWGjNAf+jiBTltksEWBgB3yMin36Ntq+l/56wYPoV4P9SLcBKJ+b/G/g/KqV+R0R+7zXaQCn1pWu8dQvApz71qa+jK40qbW1tAc24fSNqxuz1qRm3b1xvZMyqbRs1+lZrum6i956iKGqnYOVkrDTtApyO7awcYxU4m3Z9TbdTAcLpmorTwLJyxlXRrxX8mwZs1fvOuRpopml6BWia/n0npJqGitMRr5V7E2A4HGKModPp1GNSQaOrAbdpmLdzfzth2/RYTsNb7/0V0HDajTg95jsh4nSfrLX1+TLG0Gq1GI/HTCYTer3eFf3YCXd3Ojp3gt/p46vicq82rlc7zunP7HTmXQsS73RMvhZs3Akvdx7Xa7kBryZrbT0Xqn1Xc+1aQHDaWVsBx+o4Kgdunuc450iS5ApX6M46o99of79eVefoauPUqNG3uV73GrzUfw28Ijtou1Lql4C/BfwY8L8DiMg/LeHiO4FfF5FPXaW9f0oAj78oIv/PqfZawL8F/u/luvnJ1zqor7VubtSoUaNvF30r/43gT3vd/PpjVynrEyqNUWU9RC+MncOLRvCM0bSVlKUGBa2gpyE2wp69iyxfHiDO4jKHRyhsiNJUxoRo0fpf/FVw0ImqHXiKEOvqUQRzkkLEY5QijTSkEBeaoYVR5ogUeO/KiFaPEnAenK/uugtHVTFGo8NtNZHRJK2U0TgHEYwWtPKkkQEUUoS4SmctWgvaa2KjSGON8o7NSc6Zi5dIOn28E8bjIfjQh3CMCrTGRAlRnGKiCKNNHX+qlcEYQ1R9ERePUhCJQlcxqaocm0AgUVpjRMI41YBEEAmgimqxVbrf1PQCqxxvVYLSGloRnKVOGbZGOSdOnOXYgXm8s6TdLiqKWF5dJ2nHtLXBWVeVmCQymrSVMhmvIBKTGs2xWZib3Y3XEUnSQkcxURKRtNpEMiD2HcTGFJMM5zSiFLp0Ow4GW2R5jlIa7yzOalppDx3F2GxEd87hbEaSWEQ0WiCbCONxhFFtRm7IuLXA0v3fx+ULpzl34RxHbr2b+VvvwOdj1PxenHfIeIOt1TUcEabdZ6a7wNyuPcEpKhmnTr7CV18+wamh5/gjn+PihQusbQ0ZZgWT3DKxDu/hvnvvw7iCtdVL5JdXEeDmYzdy7OA+brrxFg4fu4nnXjzBiyfPM8iEtg4xuFrBKC9ItGG0tcXg8uXS8boNpqtzE1yQAUoaDTppEbV7ZOtnSRMzdR1tO/iQ4I5FlXceoykmGVnhcGhaZWwN4hEV4nxRIOMN8mwA2tR3IisTI+LR5fxSeCrE6WyBigxSxGxsOA7lXyUaruD33M+kvUgaK3zhWRlEzPcgLSGhlK5khSLPcrLBFoPBuJ67rsgZj8a4pXkuKxgB48wx2hyzMDNDq9PGnT1B1F/E4dijBZNqTo+DE9pHCmsLPJ5sWBBhefX4i7QX2wwzhyvvFfClY3L7j1+Al55QrRUR9NQdu43esDaAvzf9gog8rpT6N8BPAz9MuEPyavpB4Ajw73aCx7KdM1O/n1dK/Vvgx5RS94nI9MLk5wDPlVDyp4EZQszLFeBxZ9tXkwpW9f8zwU35C1N3flI6Mf8b4K8AP8mU27JRo0aNGjVq9J2tnU66Cu6tr6+T5zlpmhLHcQ0Op+HQNPSZfl59B6+iPavXpus4VrDGWotzjjRNaxfmtGuyig4FakBTbVe5zXbGd1b72wm2Kllr631VzkwIrrdWq1XDo83NTbIsY3FxsT7maZdndZzVMV4LSE5r+v2qvarNCuhdq+7mznauBuWSJEEpxerqKrOzsyRJQqvVugJS7oSPOx2QXw+Umt7vzmOe7uNOOHkt8HU1ADl9jNfaj9+xzpk+Nzv7+fXUbqxA+LQDtzofV5tnO9uejiqugDxQxwRXoLG6NoqiqGHn9DX2esHg13I4Trf9Wq7RRo2+zfRG1uCIyIlrvPW/EODjeyjh49eSUmqRkEL0+DR4LPczUUr9YtneTwBPfj1tNmrUqFGjb51eN3w0CpQKTsfYhDpzKCHViomrvjQKuQuUJHeeTmyIjSYxilZkMHFUQxTnIS9cADvGBPAgQqyEtiYURlRgEXLrmRTBgeS9kMQGrSREr9ZuQY3Ck0QRzoeoRZQq+w1eNE4kuMgEWnG5eCnBTOEEr4LrUpQmTWKKomCUWdqxLuvagSKAorzwaFwANOLR3pJoj1fCCy++QNqdQURh8yy4xbQiilKiOEZHMdpkaDUAtb04VVqh0GVc5vbiResw3gFQmhJSBseo0aFvJoowSgE+OAZLoGl0hDEaEY93Fsov8FFUxfeEupa5dUzXkQzxrIqicKRJQjYekY1ilDK0urMUheXipVX27VsEKc9jub0xhrRliCJw1mPimDzdjZnZTUt5tHJEWki0EEeaxOTo8TlMVgJWDFV2rXOe7niLthdMFJdjEKN9GtxygF9VnM0zTORAutiixWgYM9o8w8R/jnFW8NJX/pj1lUvkkwn5ZMLF57/MxRffRGd2iajdRWtDFLUZEZN251DZOq1uD9qz5X4NJmnz8rPP8OZbb+TlZ57gos2Zm5snHWxyIdtAiWGm06bbn2OyfpHBYIBH46zlqa8+w5nlLba6Byl2OS4uL7OxtowdDVDpPrRSRFHMpVMvotMOR3ct0JrrY7SmRo7iUSoqHa2lG1KEKIpxbsjw4ssgQprMUHtkFYBGxFFmsaIkwGmAosgZFw4VRdx+eJ6blxIiNeWTrBZQvgAcIh7J89pDa3RwTG5HuQqj8QTSFjfecR037TXo8UyoCysKO7Y4F5FGmkSVEasAuNq1i4oQO6YYbQXo6IXxaIie5GwOB2xlBR0l+Dwnnp8h7s+w59BBNlsdJhcsmIh8MsJNMo62DesTy1AEk7boRpZYJxzdPcvp02cwJqbVbqH1Rhgv2HaWlrI2x6MDlFRT0c2Nvll6QkSudtvNpwgLnzdx7YXPW8vHP/g69/UvCHdh/hyhDiNKqTvLdv5ARE6+gbZ36iaCI/Ml4G9d4x8gxsCtX6shEbnvaq+Xd3be+653vet1dvE/T1V3mDXj9vWrGbPXp2bcvnG9kTHr9/vf3M40avQ6Za2l1WrVUalxHDOZTHj55Ze5fPkyURTVdRhnZmY4cuQIe/furSFipWmnYwVXqtqMlcuwAo3Tzq4KHhVFQa/XQynF8ePHOXv2LLfeeit79uypP1/FrE5HW06DSNh2cl4NYE0f8zQwjaIIpUK9vc3NTeI4Zn19nY985CNMJhN+5Ed+5Ip+TEMbke1aflXfYBuQ7YwpnQZw027H6vheK451p+tyGiBWbWmtWV5e5uGHH+bAgQM8+OCDzMzMXLF9FVO7U1dzQl6tD9
PHP13zcmfNya+lneflat8/px2gO+NIpwHtdJ93QrWdDtJrQWmAPM9ZWFioz8tgMGA4HNZz82pwbxo6Ts/3drtdv761tVXPtWl3cTVHpmt5vl4AeS3w+FrvN9Cx0XeI3sgaHKVUl1Cf8YcJ694+U5lvwIFvoC8PAAYQpdTfvcr7cfn4htfN30CfGjVq1OhPVd/KfyP40143v274KCrUYtRKhYhODXghNgbjFApHB2gbxaj8nC6BWhUNGXVa5M5hNOQeijxHvEebeLtOpAoOx5JGIigSo5kBkig8dwSYp6kKlgv4shakzdEoRIMyUagrFxkEH5oso0+9lE7K8gtjJorNCRTrOfO9CbGB8SQPXi5FcMaJwitdRrIG/hAWfBYlgncFnVZr+9iVJun1Q8SpBKBYpa0650q+GhyPSimU14iyaGVK0Fs6PwW88oQDL0NRA4+qa0Fe8YVfhfc1lHCzOovbkFOrALXC7+EuUQU1HA7xsZokifDe0e3NIEoRGUPS6ZPnOaPBEI2ghTJ2N0Slpq0OmxsZeMPew7cwLgpWBwNGZgZEcEWBeAv4AGYlQC2tQnSvwaPFYXCIzzFEGCVoHLHyROKJ4oxYC7GG1CjWx3PBuaoVkVEoccx3NWrzSebx7L8xRt2wH23i4PzUBqXWcH4VNxQcGusE5yFbOcXmuGB86ThZ3KLdagPQmkz47luvo7Owm0gs3Xabzp6DvHz8OK0k4ZZD+3j3W+4hEvjkhQneB4fsDfsWOTSb8pWXvsRvffmP+a20y8z8Ivv37efQ/gOlI9eT5xNio4h8wWQ4KoHy9vjUixspz5UKjmMBcA7RGmMiVBlljIQ7Q6sKo5TuVlBU9NDmBUPrieI2CzMdut2onD6qnLOU+yjdlrU7sZpSwe4qXlDKopRhOAqRuCbtELdjfHojImBiTTckWuKsp9s2yHAZb8CKRkQjymCMQklBNtiikACgBY04ixVwSoV6jjpC2h12RYqN1XWGuw6Q7j+KJG385gqjScF5AmD3Ev4GTbIJ3YU5NIJRgrUFk8KWjmFKe6miCkMGQZzHlXA0nAKHp3E+fhN18RqvXygfZ19j27ny8ezXsyMR+aRS6jngLyql/ptywfVflm//yzfS9lW0WD7eCPyd1/hc7zXea9SoUaNGjRp9h6ly/lVgr4qHPHXqFJcvX2b37t045xgMBrz66qucO3eOe+65h/3799fQDq6EX1VMaRXPWTkop91jlfOvAi6V82swGPDcc8/x9NNPMzMzw65du+rPVdBxOuJSRGr32HQdSrh6HOr0cVdAq4rBdM5hjKlB7MmTJ2snXFW3cifAq8AjUP8+/TO932lQVz2frpE5/VO55qZh2U4nYdWP6jgqV101hkmS0O12r/jc1WDs1RyJO2Fl9XzaEVhpZ592QsKdEHZ6v6/1fKeztpo7O8dk+vNXc1tWc23n8VxNrVaLjY0NPve5z3H06FEOHz58hfPxapoGo9ZakiQhSRJOnz7NuXPnuP7665mZmannl1KKV155hUcffZQjR47wjne8gyRJmEwmtcv3m61rOWffiMuyUaM/Q73uNbhSKibUZ3wz8DTB4bgMFOVH/g6QfgN9qdbND5Q/11Kzbm7UqFGj7wC9bvjoVHAOAmReiEs4GKIYPUoErRVxGRNZfRUrvYUgnla7hXNCFBlQQm4dtgRoFRg0GBQwcaB0Wa1QhS/EEeELdOElQEtFCd7KL3giaKOIJMSppq2UHIV4wVuHjhJEckCw1pPosB+AVMGMFmYioZ1GaKVwVhDvsIUPNfFQWMCV8ZVxHIeFAmDiiCzLiVpd8A5duRKNrmvvFUWOrwdmKrZEgaYERghKmdKCVd31l5DEcQ0c6wbLqErnitqpqEpgqdRVFocluFQlPFIqZM3qerEUzl0AT6E7wRkZoG5kwgI2aaesL18KwFWkjEg1mCTGZJ6Z2SWUpLTbnrH1DLaGZHmOZRRiQL0PoNl7EMH7sOAN8yT0w1pwXqNUq3R+EuJzfbn4UaVLUgHeo5QvYVl4HXFoPEYrIgVGtRFxNXQyBiJVECFERhEbwWhPahSREhwhujc1ExIdgHmv74mvm+G581sl6C7Y3NxAipwjh49w/vIyx8+f4/4b9kM+Yrbb4iffdTPf88DN9NotRptbPPPUC/zHJ47z+KvHefL0Kxxf2scNN98O3nN5fYtDx25Ha8WpF5+hb0ecPr/MeKGPURApiONQVzMs8IKjtRBPnMTopEvcbmOMx9kCo8qFanmdeOdLEOlDDLA48iJnbIVdC120neBsCw3B6Us9DYMbt3Q4VnPL+wAzSyyJK8/daJKT7urS6/VxjtKt6fFWEZkShitFmiqUCrVSEYiUCrUgbUEuBfloiC0KJlmB1W2WV5aZoJhvJ0zWthgWQrE14Oy4YDbpMbdHEY23UK0OeZEz0ArJHXHJWpWA9VLGqzq0BFelKyyE+wIqlDsVc1v+LVMVeCzfvMIb2egNas81Xt9bPm68xrbr5eM3cmfl/wb8M+AnlVIfJES8nAUefo22n/oG2q9U9fsjIvIjr2P7Ro0aNWrUqNF3oCpgled5DeCqOocLCwu87W1vo9/vUxQFzz//PM8++yynT59m7969NXSZhkEVBJwGPNbaK0BH9ZMkCdZa4jiuoWWaptxxxx0sLCxw6NAhgNp5Oe0mrByHFdSq9lE9L4qCOI7r7So32nT86M4o0umai5UzcBqYVuvpaZdfNX7T8ZxVPcqrgcIq7vVqEHO6bqb3nlardQX0q/pWQazpY6qiX6frZXY6HTqdDltbW7Tb7SscdtPnq4rVzfP8CtBY7a86hmrMq35XYzB9fqp+Vuem2mY6gnQaeE1HnCZJUo/bNCybrhta7aParnISViBcKXVFrdJqrKvjrJy+1b6ruVJBeKUUJ0+e5Jd/+Zf5vu/7Pn72Z3+WNE3rsan2WTl9q/ar46zmnIjwla98hd///d/np37qp7j//vvreZXnOcvLy3z2s59lZWWFe++9l9nZ2XpsqnNUzSXgCkheXWPTn592XlZjppSqwX91fbwefS1H5deCyN8MvVYfrrW/b7Tfjb6t9UbW4B8ggMdfF5G/Mv2GUmofr33j7dVU7et/EZG/+Q1u26hRo0aNvs30uuGjlVB7sYpXpHQA5taWrigBpUkiQ+IF5T2RBmNUCQkV3XYrhEWKYARs4fDO4VHbPiMtRAbIBec8idH40knopHJvlY5KKb8cSoBwXgTrQRuNkSrGVBBvUTrAQC8hPraTaAZaU2gfDIVAx0A3VsRR+IItCDYHoxXOBWemQ5G5En4Zg3Ohn4jH2zx8AS/KBUgJ1ATBaEXaapVAo/pSprZdiVfcKalDvwP1oXJrlrgSqhp+pQRfAr3wtqo/qbaBZv175Y4M72ujSNJWGdlaG78oTxpKGYrC4V1YkMStFiYyjLc2iRUoHQMapSHShsRoer1eAL2qYO3yGkVR1msUT1VkUyuFGBPmgtHhKEoXqy+L7ykJCy1vg/dOkG2HmhK823Zu4suxVoE0VfUKlQ+wTgjzw12xSJD6UZduUgmYKiSUI
kTGhL55iwLSKGK+HQYwjSNOXbzI0q5dpO0Os4u7+cJTJ3j86eOkGtpxxBdeWUfmhywtpkS6S3TDvTx06DZuu7DO5545zflLlzj17OM4Ec6deonvf9t9HDh0Pffedz27uwYdaYwpHcJahThiqRaZHueFdmJoz3exIjg/QiaWkSrC+S2hM1W8qVKItXjnKKylt7SX7/2uO7FJh/XLG8TFhCQO809rTRpHGB1uMqC83rWG2ASHpKhw7igXiMPhmJWRZ3lgubC+RdTuMNfrEusA8pwNM3Bra0S71wVx5WIZCgFlFGIVSRxj8wmjccZ4nKGKMUaELgo10bSdp608xBCnKbrVw2qhzRaZnyPPM5yJAE/uHBaIy0VtEkfYYlLXOXU+1I6t/gvgUZi6csI8rJ+pBj1+c3WvUqp/ldiXd5WPX36Nbf+4fHwfASp+Pfog8D8RHI8TgsPxn8tUTcaptn+sbPtjX2fb03qeADDfqpSKRaT4Gp9v1KhRo0aNGv0noMrxCNSQqIIorVaLbrdb10a88847WV5eZnV1lSzLSNO0hl6rq6ucPn2azc1N2u02s7Oz7N27l5mZmdqtaIxhfX2dlZUVJpMJi4uLtdNrYWGBVquFiJCmKXv37mV+fr7unzGG4XDIysoKWZahlKLf77N3715mZ2dreHrp0iUA9uzZw5kzZxiNRgDMzMxw8ODBcHNqktRtVvGtFbypQGBVs29ubo48zzl58iSbm5s455idneXgwYM1+BIR1tbWGAwGjMdjNjY2EBHm5uY4ePAgMzMzjMdjOp0OACsrK1y8eJFLly7RbrfpdrvMzs5y/fXXk+d5DZ8uXLjAxYsXGY1GzMzMsG/fPhYWFgDq8U+ShAsXLnDu3Dk2NzfZt28fy8vLdazncDjEe18DwmmHapIkQIgavXz5Mmtra2RZhoiwd+9eFhcX0VozGAzo9XpYa1ldXWV1dRWlFL1ejz179mCMYTQakaYpaZrSbrc5efIkw+EQrTWHDx8GYHNzk4WFhRryjUYjRIRut0uWZZw/f56iKNizZw+tVoskSRiNRqyurtbna+/evfUxtNvtOq53NBqxvr6OiNDv9+vzVoHhabhtjGFzcxMIUV4VyAVot9t11HC1j8lkUsPOaTBdXTciQq/Xq89dnud0u1263S6XLl1ia2uLOI5rWKmU4o477uAXf/EXiaKIfr9fQ20RIUmS2jUcRRFJkpBlGaPRiE6nQxRFNUit3LZpmjKZTOh2u3jvGQ6H9RyoPns19+k3U68VZ/ut0rUA5LdbPxt9Tb2RNfgN5ePvXuW9d15jm2qt/SfzqeExwv3i73iNfTZq1KhRo+8QvQH4CEqZEFNaugyd9xR5DiYGhEgrjIE0iZE8JzKGKDJlTKmn00qI4pgYjzFC4RziLHlekOcFaWrQEtpBawoboIQpnXytKLjaAiAqYzrKOpQej/gQY+pdGd+aW1QUaiKGeFWHUR6tVIi3jEAVErYRCXGxmWMwHNNOE8QJsQ7epygKULDwBMiD4L3DOQvelbXvHK0kZjMb19BR1+MFqABJVe3YZCpGtXykjOtQCqMCLHUVWJzCj9vxkKHWYhSHuoTVeFACXV2zqinkOfXF0ItsMzgC2BUFWoXvBB5FkReIcyjvaHV7CIpiMgkIRrZr9UVaExlT1jgJd5DOz80wGA6ZZBn4cOdv4Du6jHatkI4nGOmCgxEV+iae0o055aVVCqR08ZVQXJcOWI8KEaUIkYmo/GygMNrg6hogUu0GUSXgBhCFE4/RBu9dAKEI3oeKlk5gNjWoKCU1hl37jnDsljs4/uLTrKxeDvGiSoijiPl+h5ULF/n4Hwm33Pcg7X6/HKs+yfwBHnjoPoy3vPzi0zz6yMeZyCbn1sb09ig+e6lDv52WsbbB5WuUR2swCFp7jAItHmM0hnDtGaWCuzPLUCgiLUTKExmN0eGcxtqjxaMizf4j+1nau0SrlaIJiz+twpnXKkIZHcCveJQWlI5wIogjAG+xeCnPnffkDt7+4FsZOsPmU3/Eyyd6LO49QLudhH7FKSYyOAfFoFXedZugowjvHFuDDBWnxLrg9LkVOr0Zbp6bRxkTrgUreG2QMjrXK4MyMRL1OOO7zMzfxIX1Am8LtIKJ9VgHHo1RwRHdbqf4wYjIKHLnsV6ugP9XXC3VNSsKNX39Nfpmahb4H4D/tnpBKXU/8JOEuyA/8hrb/j5wEvjzSqm/KCIfmn5TKXVQRM5MvyYiG0qp3wR+Fvj7hIXQr16l7Q+W/fqvlFIfFpFHv1bbO/ZjlVK/DPxt4J8rpf6miIx3tLEPmBeRZ1/jGBs1atSoUaNG32Gajv2crtfovWc8Hl/h7qqAW6UqnvRLX/oSzjnSNGVtbY2XXnqJQ4cOcd9999HrhfS5xx9/nGeeeYbJZFLXw5tMJlhrec973sNNN92Ec47nn3+eEydO0O/3WVxcJEkS1tbWePTRRzl+/HgdtZqmKQ888ABve9vbiOOYra0tPvGJT7C+vs7+/ft58cUXybKMoihYWlrife97H7fcckvtXoPttWYFqSoXZuXO29jY4JFHHuHMmTNsbW0xHA7ZvXs373//+7nvvvtotVqkacof/uEf8vGPfxyghpuTyYTv+Z7v4fu///uZmZmhKAqee+45Pv7xj3Pp0qXapVYUBTMzM/y1v/bX2LNnD5PJhC9+8Ys88sgjOOdqQLp7927e9773cffdd9d9fvzxx/nYxz7GmTNnWF9fZ9euXcRxzIULF2qnZtWfCg5N16acTCY8/vjjfPzjH+fixZBsKCLMzs7yzne+k4ceeohut8vKygqf+MQnePzxx2v4ODc3x4MPPsh73/veOjp3MpnwiU98gn//7/89m5ubWGs5cOAAs7Oz9Pt9fuInfoK5uTlGoxGPPvoozz33HIcPH2Z1dZXHHnsM7z033HAD3//938/hw4f5j//xP/L5z3+eLMs4ePAgP/7jP84999zDYDDAe0+32+Xll1/m4Ycf5qmnniLPcw4ePMjb3vY23vve99JqtcjznK985Ss8+eSTvP3tb2d5eZlHHnmEtbU1jh49yp//83+eW2+9Fa01J06c4N/+23/LZDLh+PHjfPCDHwRgdnaWBx98kPn5eZ5++mk++9nPcvHiRZxzxHHMvffey1133cXhw4frOfzkk0+ilOLzn/88g8EAEeHo0aPceeedNfBdXFysYWe73cY5x7PPPssXv/hF1tbWmJub49Zbb+Utb3lL/b5zjpdffpk4jtm/fz/PPfccL7zwAuPxmFtuuYV77rmnhpxxHFMUxVVrfDZq9B2kN7IGP1k+vouwHq+2vx74n6+xzWr5eGjnGyJySSn1b4C/pJT628D/uPPGYKXUMcCLyCuv0a9GjRo1avRtoNcNHydelQAvuNGc9xigFUcUooMjTQU4Y4zGGoPS01xLaKcxJDE2m4QI0/JuO1Gwvjmh342JY4OYiDTVpCbUcdSRxmrIRTHKPEsdgzIapzXO+RKOQEiYDJ4lpSBOEjJvQUJfdRk3akShS1eYlBEoWiush8sTyL1BOyGf5GgBtCaO
VIgfFXBOhZJzWpGUQNQYQ1YUmCjCeRc6o01FGgNQBLwXnHdEpduPK+o1ynZ0qDGI0VNQZBs/ShmPuo1Bwnue4Dit6tGFWFVdArpt56RQxfOYGuqJXLmrykXpvCPLczyQpoZ2t804y1DeBabqXbn/cFevF8FEBlRYYDrR5NZzeXWVdqdLEgUINphYuu2ITmywHiY+9DcxEU4LtiiIjLnCzVl7M5XC2WDO1CrMS4cPtT5l6tirY1IG8Q5ni3BeoghTRpZWXrYrgjQFvA9QtYoq1SjQYJ3Fe02eZRhfMNxa54VnvsJgc7V0vRZ4L2yNLesty4GlPicvXuTlZ77IjXe/lSRpBReoEUQUVhTtTo+jN9/B0UOHue2Bt7F8/jzWzJM7E8ixSO0sLg2a9TVFBWYFFAHcVuBbSflTgt3gTvTbj3gUGpEIrcIxaspxRTDKY5QDPJExxMoHJzMhojY2GlM6BJNIE+kIo2L0UofZSIc6nSpnuPoKY6PDnIEacFZ1UU1kCFA5Qpwtx1u47rrdHNi/ENzSKsQnB3gdofA4a7FeYZ2wsuk5fxYyayh8Hhyu5ZkVVf5tsgWZFbLckmdFeayh/qvRV9ZGrWeDbP9+5TxpvI/fRD0K/KxS6i3AZ4F9wI8Tytb+nIhsXmtDEcmVUn8B+Djwm0qpnyM4FluEgvTfy9X/f+9fEODjAeD3rwYRRWRFKfUTwO8An1RK/QHwVWAGuAu4Djj6NY7tl4C7gf8T8INKqUcIEa+7CbUg3wb890ADHxs1atSoUaP/RDQdDboz3tN7T5qmdLtdnHN89atf5dKlSxw+fLh2Clb1BcfjMQ8++CC7d+8myzK++MUv8vzzz7OwsMCdd97JyZMneeKJJ1haWuId73gHR48eZWVlhccee4yTJ0/WkBNgY2ODU6dOURQF3ntWV1f53Oc+x9NPP83NN9/MzTffzHg85vOf/zyf/OQnWVpa4oEHHmAwGHDhwgXOnz/PaDTi0KFDHD16lOPHj/PYY4/x6U9/muuuu444jmsoA9sRlVXsZxWxGccxZ86cYWZmhptvvplDhw5x4sQJHnvsMT75yU+yZ88e9u/fj4iwsLDAO9/5To4cOcKuXbtYW1vjC1/4Ah/5yEfodrv88A//MKurq3z84x/n9OnTvO9976sh1AsvvMBjjz3G2toae/fu5amnnuIjH/kI119/Pe9///uZnZ3li1/8Ih/96Ef55Cc/yf79+zly5AivvPIKv/M7v8PFixd56KGHuP3229na2uKrX/0q6+vrNXys3HHTN/VW7335y1/mwx/+MEeOHOF7v/d72b9/P5cvX+YTn/gEFy9erLf98Ic/zFNPPcW9997LzTffzHA45LnnnuPhhx8miiLe9a53URQFjzzyCL/5m7/JwYMHeeihh2pH4zPPPMP8/DyXL19m165daK156aWX+PCHP8zBgwe54YYb6vP6pS99iVOnTtHv99nY2ODYsWNMJhOefPJJ1tbW+Lmf+zluvPFG8jzn5Zdf5ld+5VfY2tri8OHDdLtdtra2+NVf/VXW19f5gR/4Afr9PsePH+dDH/oQjz32GHmeMzs7y2g04rd/+7d54YUX+MVf/EXuvvtuNjc3efXVV9nY2ODEiRMMBgOiKGJmZoZ7772XKIp49NFHefjhh5mbm6tB6sMPP8w73vEO/vpf/+vceuutvPLKKzzxxBMMh0OOHz/OiRMn2Nzc5IEHHuCGG25gbW2Nf/bP/hlvetOb+IVf+IX62vvEJz7Br//6r7O8vFz3sSgKfvqnf5qf+qmfqiNq/+AP/oAvfOEL7Nu3jzNnznDp0iWccywsLPCX//Jf5od+6IdqIC8itau4UaPvUL3uNTgBOL4M/E2l1J0El+Qh4P3AR7kKYAQ+SfjH5P9JKXUHsAYgIn+/fP+vEdbHf48AIT9DqEu5n7CufwD4i0ADHxs1atTo21yvGz5mNkRiJlGC00VZPy84qDQGrRVKa7w4lIaCmHHhaCVlQXKBODJErTZ2OEZpsM6jUPTaLebn+2SjEVFsGDnPJHe0jMZEASRqQqxmpAIANGgUDlXeTRonEWPvKGyI10h0BFGEKoI7T2uNNqEtDxSityGlVngPMwYS7WnFik4rIYk0Uli83y4ejigcIT1UEUCroFAm1KYwSodaiCVACc7EchGqt6NN5QroqEBXgY4BLgYP37Zk6pkqw1tLzFrDJmoAG4BgqMUhrC+fIxtu4Jxnbtce+gu7MVFa9p0yyrSMsy2hpnMOvKEoBFcEt2LcbhO3u6yvbRCVMT8igncW5z3OW8R7PIq8KLDWsbV8juFgi+VTz9PrRnSTmJl2j89+8tO87a6jPPDALVxc2+S3Hn2RCQnvu/8WXlrOOH36DCqKiZOEOEk5sHuJw3sWOb28ihPD7tkumROsMkRGMdtukTsLaKwPwKmKvjVaBzeogVwFB6/RGicOqhqSEmoBKoLb1ouQRBGCQ8r2QuSvx9mcTq/PTbffxa7VjBdeeQVtIrwSEhVjc6EoLJc2RrSSHnvnW5y/eIpXX+hw5JY3ESUxgmJ9bY2k3WKcTegsHuDme97M2TNnGA+36M3Mh3Nbxu1KmZ6qpnBpXY+wcrgCiK7ni5LQ7wAuq38I2a43QuX8RNi+rayEl94BGi3hGhFXzrQi1IsMeNITeKcLQ1PWhAzy9awNxltB48FXc5P6PV3Wk1TKofAocSESVWs0uoSVIcY53DTgMAqMNhg8cWQY2ohJPmCua8A7iiInLscGAnCOjEZHhpk0Yn11glIQlVE5uoqOrcar3jaAb1GEPlXnollnfjP1CgHO/cPyMQWeAP6eiPzh19pYRB5XSt0D/N8IEanfDWwRFkT/wzW2+bJS6kngHuBfvkbbHy3vAP1FAsj8fsJC6XlCdOvX6luhlPohQl3JnyEsyHrAMuG4/zbwb75WO40aNWrUqFGj7xxV0LFyw1XRklprNjY2eP7559FaMxqNOHXqFN1ulyNHjjAzM4MxhhdffJFLly5x9913c+DAAUajEfPz89x1112cP3+e5eVlsiyrYzg/8IEPsH//fowxXH/99SwvL3PixIk61rKqnVdFv6ZpyubmJs888wzXX38973vf+2i32xhjmJ+f53d/93d59tlnue222+ptFxcX+dEf/VGOHTtGp9PhlltuYTgccubMGZaXl9m7N5QJG4/HdY3AyvFZRWOOx2MmkwkHDx7kx37sxzh06BBpmtb7+eIXv8iZM2c4dOgQk8mEBx54gG63W8dbRlHE0tISL7zwAq+++mrtdDt79iy33norP/qjP8rKygppmnLs2DHuuOMOdu/ezeXLl3n00UfZv38/P/MzP8OuXbvI85z3vve9rKys8PTTT9f7feGFF5hMJvz4j/8473nPe+p1UxRFPPPMM2RZVsetWmtrx2gFWgeDAadOncIYwwc+8AFuvvlm1tbWuP7667n99ttrh+pnPvMZPv/5z/OBD3yAH/uxH6vrX95+++08++yzPPvsszz00EO8+uqr/PZv/zZLS0v8/M//PMeOHUNEmEwm/ON//I959dVXsdbWtRu
11iwtLfGDP/iDvOtd7+Kmm24C4B/8g3/Axz72Mb77u7+bn/zJn+SBBx4gjmP+9b/+1/zWb/0WX/nKV7j55psB+PVf/3VEhF/4hV+o3YErKyv8w3/4D/noRz/KPffcw7333lvDt/379/Pggw/y9re/Ha01v/Zrv8aHPvQhPvOZz3D06FEeeOABOp0O586d4+1vfzt/42/8Day1dV3UTqfDQw89xDvf+U5uueUWWq0Wo9GIj3zkI/yrf/WvePTRR7n++ut5xzvewWg04nd+53f4oR/6Ib7/+7+/jguem5tjdXWVTqdzRe3Kl19+md/4jd+g3W7zS7/0S9x1111cuHCBD33oQzz88MMcOXKE+++/v45drWqv/tW/+le54447OH36NB/84Af5zGc+w1vf+lYOHz58RW3IRo2+g/W61+AiMlRKfU+57bsIcaknCDfe/r8IEHPnNs8ppX4a+L8C/zXhZmEISUSIyKZS6p2E0ig/Afxo+ZmLwEvALwD/4fUfbqNGjRo1+rPS64aPIVZRSOIWhc1AOZSEQt3KKLwybDhLxygibYhiRSEebQWUx7ngmkq7bTYvOKI4oshDzUetFXGkMe0ItKKfajrisNbjpSzAHpJYUUrXcZkYjRSuDA1XeCAyikkh5LbqX4CASisMmqh0XSkgimMip9AmAI7ZOKKdRLRn5mj3uqgoQhEhSjMuCowO/rueE5L5Dqab4lxBEcUgChMneFcQIRhv0T5gogAvQflQN0+VkEZh2K4ztw0iKzCD+NL55lHel/Sp9IWVdfIEKRe42+DSmFBrAqUQNyGVMTMdwZiIQTZGm2Tb3UhV324bjFKeV1RU111IkxhjInScMB5uEb5zByBtvaBNzOLsDIN8A1FQ5DnWFozGQ/LxmEvLy5w6cZn7brsJZzTjrRXWl1usrS3w7HPHOXX8JSbEnL5uiQvnLnPyK58nc76Mp1W4Y4cZ92KePH6GOEn4c2++ic9+6SVGhXBgcYb3vfu7eOD2Y1xcXuHf/Yc/5vjqiLE3YCLiKEInLaIkIYpiTJKQxCmznRZJpPFKYzFgYkwUFubaaNqtFlLW94yMKZ1xivbsIje89fv4yb/yM/zxo3/Er/3GvyHLM8ZjAeOJYsH64D+9uDFh/3yH6/ekvHzyOXQUc+SWuwDod1soHepettttnECvlYZIW61rtChT/yNTkCzw4inAJwFaS2nP81SJvz7MQYLTsHbGevBqu47mdgSvvxJ6lgBvygQYQKMKsbhV/0TU9u+YKxyYYY5q1PSfoNrsK/XYVhC9+kcbKtBe/h76VV8gQHAe+8LR3TVLu6Mpll8kUrINZMvjzvMcTIiBHjuhJeF1FBgNqaa8cSBck76+OqqxLcGkNMGr3wyJyEmuTLH9wNf4/K8Dv36N914lLGK+Liml+oRaFa8Cf/A19vsM8Je/VpsiV58YEibvvy5/GjVq1KhRo0b/iWu61lzlqqoiG4ui4OzZszjnuHz5Mmmacv/993PkyJHaqWitZTgcsra2xhNPPMFkMqHT6VAURQ1tBoMB6+vrLC0t0el06hp6Va29KvK0qp9YwS2A0WjEhQsXaLfb3H777fR6PSaTCUop9u7dS7/f5+TJk7XTz1pLt9tlfn6eoihqwLe0tMSZM2fq+nzr6+t85jOf4eLFi2itybKMXq/Hvffey5133snGxgbOubp+oPe+3sett97KY489xpkzZ+r41zzPOXXqFMePH2dtbQ0ItR2ttWxtbdXxr3Ecc+7cOT72sY+xd+9eWq0WCwsL7Nq1CxHh7NmzrK2tcfPNN3P8+HGOHz9e14usnGyrq6uMRiOeeOIJkiThzjvvZDgc1jUKq7qF7XabTqdTO9+qOF2gdnbGcczm5iZf/vKXieOY2dnZK4671Wrx6quv4pyj3W7z3HPPMZlM6tqJe/bsYTAYsLW1xYkTJ7h8+TI/8iM/wsGDB9nc3CRJknp8qvqO1dyqnHpvfvObufHGG+vakseOHWNxcZF3v/vdPPDAA4xGI/r9Prfccgvz8/M1ZD5+/DgvvPAC73//+zl06BDnzp0jiiK63S633347Tz/9dB1va60lSRLe9a538cM//MN1jcr3vOc9PProo1y6dKmuETkzM8NkMmE0GuGcq2suVnUbjx49WtcFrWqMXn/99ezZs4dz587VrsXFxcU6ljWOY2ZmZhgMBmxubtZzvKqz6r3n5ZdfxlrLT//0T/OWt7yFKIro9Xr81E/9FP/oH/0jnnjiCb7ru76rPp577rmHn/3Zn+Xmm2+m3W6zf/9+VlZW+N3f/V0uXbrEwYMHa8g7fe4bNfpOlIg8x+tcg4vIaUJE69V0rXXxbwC/8Rr7yoFfKX8aNWrUqNF3qF43fNQmRvIsOPlUFIAYKoA0QGnDyiRjmHsOzqV0u22Uj4mNR/IcUQatDL1+h9NOSHBMchvcj6bycKkQn+oUua1TJfGiMEaItK4jFK2EWhomiUJ9SFFEcUG302IuTvAY+nPz9LUhiSMwEcbE7FYKZR1pbJhYTyFhoWCtxcQxvV6XOE5QOtSmMJEmLxw9cSRpO/i9vEVphUpidAlftPcwHJBvrDLTimGyFr6UCuTZGG0ihuMxvrDESUxkDGma0Ol00caQxjGiNIW1OOeJIkMUp4wmExKt8VJGxkamjokMOEbjnMWjAljTajvuFciGI/AF48kGS3sPkU106ZacvlOvBD6Vo6uCSiJY64NL0AuiIxyKyWCARhPHLbYyw6TVp0g77Dp2K0N9knYcB2esUkRJGxFNtrnBZGOFdnQbkTEgsLx8mfUL55mNYHGmz9rYsWgyLrtN7tjX5/jyFqNC6LUSzl+8wIXzBVvjgp54Lq6uM9za5K4bjyBJj73HboO9t7N314TDL8W8unKejfV1lFllYeYyk0HO6tDixBBHislkk//i//Be7rv5IK+cPMP/+lsfY2XkytqmAUopY1A6QpsYE0VoY0jabfy7H2J+/zFOrmaMSdh34BAbmwPGkwnOeWITkbYCyMrygnOXhyzNdiis5fzJF1ncc5h4T5elfQcYb14mtxYVd0k7ffbt3sXjj38JvGDFTzkdSwhYxqPW8Z8hTxVXgcoydtWLRwkBrJfRrZ7g3KzOucg2eBYRvFSgsJwTqgKA25HAYdPyu6SEyNIAGCsQXoJKCS378r2q7qbUbYW+C1QlZOv6ihVMr4CjYrtPznu0Cm5jpHSGutB3KTwX1wbgLFle1LfSOUBFEVEUkUSedqTxNsQ14z3WedpxxP6OY1gIm7aCnNtXSfXoy+MW1QDI73D9VwQH4t8XqS+KRo0aNWrUqFGjN6zK6WiMuSKWs3Jo3XnnnfT7fV555RWeeuop1tbWOHToUA0q8zyvH/M8x1rL5mZIwNu/fz+HDh1C63Bj3cLCQg03W63WFfUlkyTckFq5MCvHlnOO9fV1kiRhaWkJa+0Vcan9fp/hcEhRFHV8apqmtNvtkDBU1jssioIkSWi32+R5zng8Zm1tjY2NDaIoYjAYUBQF4/G4dkRW9QKr2oxVLciFhQV6vR7D4bB2Bz7++OM88sgjTC
YT5ufnabVa9ednZ2drWHrffffxuc99jg9/+MOkacrc3BwHDhzgLW95CzfeeCMAWZbxzDPP1NDPWltH0s7NzdUw7NKlS3S73RpuVduGdbGt42MrJ6a1to6bzfOcVqvFrbfeyksvvcRHP/pR/uiP/qiOLr3zzju5/fbb6ff7nD59uq6n+elPf7oe/ziOGQwG9XlZWVmh2+1y/fXX1wBbRGrHX3U+p9+bhtTee4qiYH5+/goYnaYp1toaoFaQ8KWXXqrjWCuIXM2Bs2fPkud5DYIrh6uI1GC5gntKKYbDYV1PEaDT6aCUIsuy+jxW8G55eZkvfOELPPvss4zHY0ajEaurq6ysrNSgt9pfdZ1YaxmNRiRJQhzHiAij0YjxeFz3+eTJk3S7XQ4cOFCfyyzLWFpa4ujRo3V069zcXD339+/fT5IkjEYjtNYsLi7Wx1ONeeN6bNSoUaNGjRo1urpeN3y03oFy5MUYT/jCnuAQ0aHmns2x3uG04s7bb8VEmpiwkFl59XSoH+g8vW4bFcVEJgCBEC4aoVtdSFroKKEdFahxjokiTNIhSlKiNERwJq0ekyInjiPSTo9QN09wPtShnJ+bI2l1cOIDiCOUytsaDJlsrdNqtUi6XQrvsZMJrsjxAlYcOBiNx/jRqISgBUkSoUzE3OwcURzhbYESRxrHeIS8yBiPxhgTMxxsMhoNaCUG0YbJaAxK0eu22ZzkLK9eRoujlcR0WgnjKMLPztFqpeh2iiiNdY7NtctoDe3OLEm7i50U2GyIVoaJ82SFw1uL6OBG1UbTSlpMECJjGI3HYCL6vT7WOpzNUUXG5up5aC2UwAmoXGRswySUKiESeOdw1pEmEePC43TKxtaIQW4wvV20bzjEi1sZk+VLKKW4sLJKkrZIW21y6ymcYLIhbjJktpcyo/rsmp/BqeAijOOYO+68lVE24UvncmTo8fmI9vgCc/MdJnlB2ulzaO8Sp1fWWJjtMz/bZ2lhnvnZOW7adwNPXtji3NaIl1/xfPSRZY4e7nF+807mF24m6XhQlj0HNAvzwvPPbrK6oYgix9tuPMt7f/DtXH9kL7ufe5Zf++0/wBUZrqzJgoDSwfFYOBfgK2CShM9/9lOknS/zwnNP8453vJ2D1x3lwvmLbG5ukGUjtNHEJsZOJihjyIucM5fWUIAbD/j8J36Pm+9/F2sXz9CdW8IVDp0oBpcvc/yFZS5fusCXzpzBJAlRFGGimEgbTGRIoog4Cg5NrU1dbzQsDKNQFlIptAq1NeM4QHSUBAhZugdRJWQNV0gZjVoupEo2qKbqSU5ZHsu2dO3LrA2KbDsfK5tktX1wRVbhwRVMZAosllxUCeKnIacqnZqBjmulq5zgEp6W+1Ew3FpjrA0OjXeujCAOi3DxlktrG1gVcXplnawoSJIA4r0PNV/nWgajHK6Eobq6LlTd6xKKllGujb6jpJSaJUDHA8B/AZwn1H5s1KhRo0aNGjX6pmk8HjMzM1PDoApyVYBp7969zMzMkCQJZ8+e5dlnn2V2dpZ9+/ZR1RHs9/scPXqUo0eP1nUaK2jT7/dr19t4PCZNU7z3bG1t1YCwcodVqmJQlVI1mBqNRuR5ThzHV9RkHI1GxHFcA7nJZEKrFW7rq6DmaDRiMpnUEK8CiO95z3vq/VXH22q1MMbUsC5N0zoeM89zoihCl6VUkiTBGMOJEyf4vd/7Pfbs2cMP//APs2/fPjqdDisrK7z00kv1cc7MzPDe976X22+/ncuXL3Pq1ClOnTrFZz7zGV555RV+/ud/voZe9957L+9+97vZ2NggTdMaWAHs3r27hovz8/N1XccK4E2fwwp+VfGe1XFWEPfYsWP8pb/0l3j++ee5cOECJ06c4Mknn+TLX/4yP/RDP8S73/3u+lz9wA/8QF3TM8uyep+Li4vMz8/X0b3VGFbQcTKZ1MCzgrnGmDoOtoJ7Vb8rWGeMIU3TGtBV57mqR1qd+71793LHHXfUjsiq3ueDDz7IXXfdVbdbORmBuiRLdRxJuZaFsO6vok3TNAW2a6Dmec4HP/hBnn76afbt28eb3/xmFhYW2Nra4iMf+UjdfuUknq4hWsHWan/dbrd2Hlcgt9Pp0O12622quNhdu3bx4osv1jG6s7OznDt3jsFgwPz8PEmS1Oc5TdMa+Ffz+LU0fdPBG1HlWt4u1/Ptq+3kokaNGjVq1KjRf8563fBRRCHO4Z3FO4e1BUyBDieOJIoxWtPvtGj3E1yWUUiHySij6O3jsu/TW+jywLuWQAQnno12B2sdfrGH1oZuf5Y5gSwrUCrEjkZGo42h3WmjtWaytkrhwhdvV9aQ63Y7oDRZNuT/z96fBVmW3ee92G+ttccz55xVlTV1VVdXNXoA0ADZ3YBEAhQoUhAJSZRESlcWr6+kuCHLfvKT7Qi/2iE98MVx7ZDDtmhHyNK9uleiSILiIBIkARJoNNBo9NxdXXNWDpXTGfe0Bj/s3Kd3JaoBsCWautL+orMzc589rL32OhW5zm9933+SzaBI6bZ8ZrlGFwVZmmJ0jslj8qhFkmckSYazxQcrU5WPnU6xRuN5ijj0mWl57B5zqNkMZwoCCY4uAkmRJ5g8I9FTijwtI11FRKETdJoBBl9otAZhCjxPgNPkBWijkZMhWebjXAehfCbjGTovsAjGo/u04gDnIMstVfBjlmc4a5GeR5okFEVBHPn4vocSglmSkRc528pH+WUkThh4OKuxRVY61soyk/M/EOvRq1XspTFljT4VBMy04P5UMplp7rlFbryzw9HRiG6nRSsIwBRIIZhqS+EEaZZhTIHJJphsWkZ6iuN6irqcoE2SjEJrLJLhZMrR2LB5OCDPS2fqs5dP8/Snf5ROt82Vpz5Of2GJPLOMtg/Ze/M6X7n7Du/vjjh3eoHhUUQce2xcHLB+NmKh38ZklvEo55039hnEHc6f6uDShOHEMHMX+Pff7PPVVx6wIG6RaY2SpeMTKYmikDgKkZ6HtJbhaExhDHEcIrIjpqMdvrN1g3Y75umnnmZ57RTbe7sloLIFUsDl8+tk0yn3dvZJ8wJnbdnPVoMpeHDnXVAeCEcUxRzs3uX62++iXMY7r3ytrMPoHKFXjr/COjyOJzMlZUQJgZAKJBjhYSgjYz2piJREeB5SKjyvdHH6nkKqEmjGxw5cLattHp4q43WVJ/GUh/BUWVvxeHVv6CmElEihjkuVytIFTOlIlqJy1lbMsopUpSKax3VFS3AoBeDKCqlUE5b5nKV0a1Y1OYWowcQq4rX0L5YTTM9DOEdeFDhbuiElAuPKCFada4yw7O4f0HYOZwWhX7qTERYlBPI4lhkoFy84W9YCFeV5fKWObZpNxM7/DLVAWacxA74F/G+cc+M/2yY1atSoUaNGjf5zlT1e1FhBNuccrVYLrTXD4ZBOp8Nzzz3HN7/5Ta5fvz6PlZRSMhqNuH37NufPn5/DoSAImE6nZFkGQLvd5vDwkKIo5k656XTKdDolCII5KKscYxXkE0LQarUwxnB4eDh3oFVQMcsyFhYW5m1ttVpzqOV53hx2VXCzckgCLC4uz
kFJ9VXBoAqgaq3JsuwhsPPWW28xnU7Z2Nig3+/zzW9+k+l0OncLVkALPnD6VbBrYWGBhYUFJpMJn/rUp9jd3eX3fu/3+PKXv8y7777L6dOnKYqCNE1ZWVlhY2NjHvlZQb2iKBiPx3S7XYbD4bydRVHQ7/fpdrvzPqycnvAB1AXmbkhjDOvr65w/f57xeEySJNy4cYN//a//NS+//DIvvPACZ8+e5fr165w6dWpeh1BKyWAwQAgxd4qurKyQZRkHBwfz+plRFNHpdGi328RxTLvdnjsDq7FWQb5q3FWOw8r1WS1eDcNwDtkALly4QJ7nrK2t8bM/+7Pz/YMgmI+d6vODKuK0GjsVKA2CgCAI5hC8DhmzLCPPc8IwnD/D1157je9+97u88MIL/PW//tc5ffo0aZoymUzmrtAkSeYOR2stSZLMQXZ9jNVBXeWQvX///nzsVnC9AsoV0K0idiu4W52ruufq2Vf3Xo2976dHwcKPckx13H/q8LFRo0aNGjVq1Aj+A+BjtxVTFFNUFJEUFkwBFMf1FCWBsXRDhZVeCREsFFlGLhXxYICOF5kVCj/08CxMJkPyPGM8m5Y19ST4fkhWFAjhCMKYPE8QQqEF5IVhOhZEgY/WObbISSaTEjAIKJKyfh7OME0yQgWhNyiBqS1BmjbgREGRjMuoSyiBg5IUtoRGVhcloDlGcb4q/U8mT7E6xVlLEEdMR0f4noc9jmFN05QsTctox7CNKRKktHieT5YXyGPwY6wuJwbGEIYBWV6gC40SFuX7ZEmK1oY8Lwh8iZQtjHMMhwdYo2nHMYOVdYaHBx9EaDrHbJaglKTbjpFSMEsLAlnQijyKIscTHl4ck8wdaRW8ecj7RlVHz1pHqjVRq82p02dw3QFfeWeXe9s72DwtHXdIRKjQTtMOQrLZDEvA5tYOxjmmOkL6XbSVFIVBGIcQCivK5zFOM9LCoZSkF0cYa7CjIyIriXKBNyp48OYtRu0WSRbhDo8we4ecWW4R9du0Bh46n9KO19i+N+XsldOgcxYGLRZXegwWAu7cHrJ4OqDVjui0zzI+yjjcnXHn3gzpBexupyyuOda6Ab6AiRZov4UKAzzpWOu3cMYQtVrHUMoilUD6imSS8PIf/juSJGE8mVBoTVZosAULiz2uXLlMcrTHeDqlOCzInQUJnu/RjmP8cJV+t8f0cAslIUumtEOf2XSGpxS+kqwvtPnis2co0pRX3tnk/f2Mti8w2jApNJlxOCFBKqw1VJ68QiqsUkhbkBl7jMrcvMaoUmUEqich0+44bvaD0VASwxIsKikIfY9BKyDyBaCwiDIKWJSwU3oeyg8QUiFVGftbMjoJSBwlxA+CEOVJtANPxXTbMXmeIbyAQB7XX5UK6St86X3gOqxigZHH4LWEr5LjuCYp6S8tkuztoI0+BpLgnMVZUMrD8wvAox0GRIUpwaJ22EAihEQKhxSSQIF0DmctuS0/OLJOEvqSUIoyarZZ1Pk/Oz2ixmSjRo0aNWrUqNF/dIVhOI/prEdhAqRpOo9FtdZy4cIFZrMZ7733Hu+88w5PPfUUa2trXLhwgXv37vHtb3+ba9eu4Xkeo9GIW7dusbi4yMbGBo899hhf+9rXeO211yiKckHtvXv3uHv3LsaYuVOygpJVXbsqWjIMQ9544w3Onj3LysoKWmu+8Y1vkOc5Fy9enMeJFkXxELisQGi1rYrurFSHJMYY8jwnjstFxJXjUUpJr9cD4ODggFdeeYW1tTWuXr06h0kAu7u78/qMe3t78xp+Vf9ub2+zs7PDxsYGZ86cwVrLuXPnWFlZmbvVut0u6+vrvPPOO7z00ku8+OKLc6A7Ho95//33WVtb49SpUzz55JO89NJL3Llzh1OnTrG0tMSbb745h6EV1K1gagXdqvtO05StrS2gjMitonEr4FaB46tXr/Kbv/mb/MZv/AYrKyusr68/5Go8PDzk/PnzXLp0Cecc//bf/luWl5d5/PHHmU6n3Lhxgzt37sydhmmazuFc3RlZtbMChEopkiSZ1/KsxkcF865cucLFixd544032Nzc5LHHHiPLMtrtNkmSzJ2HFewE5rU3627LJEnmUaxCCPI8J01TptPp/NlW4+Lu3btzZ+XKysq8rqVSina7jed582tUsLgOtitXbTVeZ7PZfN+1tTW+/OUvc+vWLTY2NkjTdF6T8/3335/XjazufzabATxU07KCkhUQzvN8Dh+/X83HxgXY6D9FNXPiRo0aNWr0p62PDB+jVoiclO4m5fkcl2gDyuhPnKOvLLnNSaZjuv0I5wmSLKcVRPQ7LaazCQf7DzB5irEWazSFNuAcyhOkRY7DIZVXOiytBgdSKpCKPNdIypgTARTWzX9WnsUHpOcThSWMG41nhFGItqaM0PTK6EqLK+tAumOwaA2+qJIjVel4Uz5KgpASYyEMfIQSmMKUExlPIRAUWiOFJvA8CAICX6GkIPB98txhrEMKR5GnOMcxaJRoDNoYcm3xpKIoLL4vEcrD5gVCgPJ8ktns2ClmUUriBT4qiPD8AGcyHI44ChBSoosyqtaXHu3YkmcpwjmsNXh+hBd4SCNxzoJQzAkNFXIsf7XOMh4eMty5wzSZci9x3Ln+JlmSEkcBcVhOsmZFBkrQCQOuXTjHnc1NcI47W1scHR0Qr15k0Oli87ysyacNzpa1II2DaW5wKHq9DlfPrLO9eUQ7T0ArMAZlwZc+bvU814cz7O27nFlc5fKLV+l2JJvFDN9qDnbuceqxJ3jsvOXTlyOM9Nl9sM+k6JKPp2hKx9rIOawQnH1ime7SDCtCup0jNu/lyFYHX1ik9WkFAbrQ5DojNY6zSz2KnUOmziOUpQNYIGiFHrPhLi//7r9mkuTkxuB7HnEcEUrL3e19rl44xdmDQ6azlFyXMba+59Ftt1lePI0LYozRKOXRH6yjwg5bt97h7Pop/KjFQsvwsavnsUWBsYKPnzOc10Nms4yxEdzcn/H6VLBjQLrjVbdCEMcBeB1absSDcYpzEg9bAlTh8GUJBzPjMFVtRVsD0AJ85Yh9RSdUxIHAV6acYDmLLgzaWKwVWCXAU+hUYly5utqYMj7VmoLClPUepRT4gY82lkxbegsrqOUFjh5sg7O0I0XglyuapQqQXkhuLIlxtDzwnCaZpRQaUu3QlPVNwyhm9cJTjA52kcrHFjl5nuGpcgmBATqtFklRMOgvEpqUWWpQqvo3TCCFwxPgKUFLgDWWWVHWq/SBAvClohUInD2uF9moUaNGjRo1atSo0QnVaz1WEZ1SStbX15lMJiil6HQ6TCYThBBzAHnr1i2EEDz55JM899xzvPzyy7z55pvcvn17HvfonKPb7dJqtXj88ce5e/cub731Fm+++eYc1lT7KqXmzq4KIFaOvpWVFS5dusR3vvMdfv3Xf50zZ84wm8145513ePzxx3n22Wfnx1ewpYoqBeZApl4XEr43elEpNY+YnUwmaK25efMmv/Irv8LKygrWWu7cucNoNOLzn/88S0tLZFnG2toaURTxR3/0RyiliKKIzc1Nbt++PXe5eZ7H
4eEhv/Irv0IQBDzzzDMsLy+zv7/PN7/5TZ577jmuXbvGYDDgp37qp/hX/+pf8cu//Mu89tprXL58GWstb731Fjs7O/y9v/f3uHz5Mk8//TTf+c53+Jf/8l9y/fp1nHPcvHmTzc1NRqPR/HlWILFqSwWTh8Mhf/RHf8S3vvUtPvOZz8ydqy+//DJ7e3t85jOfQSnFlStXeP755/nGN74BwHPPPUe73eb+/fu8/vrrDAYD/tbf+lusrKzwpS99id/6rd/in/7Tf8rFixc5PDxkb2+PW7duceXKFYQQ8+cipZxHt1aQtgK1aZrOwWPV7grcVQ7CbrfLj/zIj/Av/sW/4J/9s3/Gz/zMz7C8vIyUks3NTb7+9a/z4osv8pnPfGbuhqxgojFm7qZstVpzEGqMYXl5matXr7K/v8+NGze4dOkSSZIQRRFnzpyZu1+ff/55+v0+R0dH/Nqv/Rr37t3j8ccfp91u02q15u7gN998kx//8R/nzJkzbG5usrS09NCYqxyoTzzxBKdPn+YrX/kKZ8+eZXV1Fd/3+eM//mPeffddfvInf/IhB2mlCqRW7tgKnMZxPI+sraDzo9Q4FRs1atSoUaNG/6XqI8PHWZLSloLEanA+IDEIMgvHiYlo7dBCgJRIUUY/SmmZJgmjyZDZ6ABjNEWeg7N4XunOEtaVQNNYAt/HCVBKwnGdOKVAO0uWa3xVxok4q45r0oFSPnleIEIfz+kSYIqy3kaRZ4jjiBbf9/C9AKnKOo6C8rpldEuAdQ4lHMpT6MIgpSWMO+R5gUSjfA/hMqI4Rvo+VhuQHtY4lJS04wikwg9CjC9KlyMWT3lIaUvgahVSlI5KdxxhW1hDlkEUR4SBh++1Stdl4ON5AVmeEefFcR1CyeRo93iVJaiswDpNoBTgI72ATtwiz3KCoI1TAUqWk0WjM6yWCMqoyWPuUhooXRlTOR2P2N+8ycHOHd587z12H+yV0a5akxcahKSwpSHNk4LLG+v4SvL69ZscjKa0+wNMnnM0GRL21sldRjKdEUoQnuDw8ACE5OOXznJze5fB4jKPnVvl0tlN/ui7N/H8iPNnrxJ2ImbZjJvCZ3rvFkpKzl66zF5m2RxpOiYgw6fdjijyhM37d3lr0OEvvdBnsHWbJ9fa6O2b/P7uDHPhPJ96coVZUqCNZDyZ0TsVc5g69nYTRuMpe65PHoFnLc6Z48mCYG88I4pCzq4t8tbtHWaeR+wJrNEIKYhDnyQrCFQJVMGRZxmHwxHvvn+TXr/DhfNnub+9x3SWEkmFJwW+hHC6iQzO40ufVhRTpBNIxqyuLLPWfpYsSVD2AIPk9vYYvyjYOHgfk2oW4oiN1VXW9JAnB23SK89ye2uHN3d9DpIU56b4QUxHWTq+jzaa0TSnHSpCT3FqIcaXgsNJypsPEnIn5vUXhYB2HHD+1BKDXou43cGDubPSFAWBdTjjcL5PbixjbdgfzZhmGiEVQliM1oDAUxYEtEIfYyyztCCMW7SjkLjVZuHCWWSyR64NcXeRdqePdYJCW6bJjORgxNJih8jlpH7GaDjDzlLS3OB5PtgIl884eHCH3qnLFEVeAnYncQ4MgrwoMBa0dShbwtf5zYqy5mMvVCSFJnOOaVG6QZ0DhcD3BKHvkFBGFrtmMtmoUaNGjRo1atToe1XBmKpGXFXL8PLly/MaghX4mUwmhGHI5cuX6fV682P7/T4vvPACW1tbbG9vk2UZKysrDAYDVldX5+DjJ37iJ7h+/TpJktBut1lZWeG1117j7bffnseDCiE4f/48/X5/7kobDAa88MILBEHA9vY277//PkEQ8Oyzz/Kxj32MdrvNdDrF932eeOKJOXQxxsxdlWfPnqXdbs9dhPC9sZH1WoLOOT7+8Y8znU4xxvD666/Po2h/6qd+ih/90R+dO/E+9rGP8bM/+7P88R//Me+88w5SSs6dO8df+2t/jZdffpnl5WWcc6yurvLYY49x//59vvvd76K1JooiLl26xGc+8xn6/T55nvOJT3wCIQQvvfQSDx484NatW/i+z/LyMp/97Gc5c+YMk8mEK1eu8KUvfYmvfOUrfPOb38TzPJ555hmeffZZdnZ2WFpamsPkOmCuIF8QBKyvr9Pv9/nqV7/Kb//2b8/rKn7mM5/hJ37iJ+YQ7Rd+4Rfodrtcv36dt99+e+4o7XQ6XLt2bQ6pv/SlLxGGIa+//jq3bt2i1+vx/PPPzx2OFeBLj1OYKldrBcCqPjl37hxRFOF53nz7ZDJ5qA6jUoqf/umfZjgc8uabb/JLv/RLcwhtjKHT6fDCCy8ghKAoCsIwJIoifN+f1xmt2lRF/mqt6fV6c6j5T/7JP2FpaYkgCPhH/+gf8cQTT/CpT32Kr371q9y5c4fFxUWm0ylpmjIej5nNZvM6qr1ej0996lP8+q//Ov/4H/9jVldXiaKIv/N3/s487rUC7J7nceXKFX7+53+eX/7lX+aXfumXuHTpEmmacuPGDS5evMgXvvCFuTM4yzKcc4zHY1ZXV+f3WMH3qsYp/HBw8fs5Hxsw2ahRo0aNGjX6z1UfGT4aXRD6ikwYpPDxlQBzHFsqPQptGOUZWpR/mMa+IA8V5AVFluPLEghaNJ5SOFdFO1qMcIxnKe0oQApJWQHO4oUhhdbgDM6CwOKsBSlxgO9JCufwQx+dZQhnUNLHSovnlavVjHMIW074Aj/ACYl1rgSeSoE1iKoYuy2LkBun8ZXFDyKUkhgpiPyIqN0m9RIcYJzCUcLS0I/QRbkCrnAwSQuKwmFlgBSCoLdwHIWT4NwUI0HGLZySeMcxmKkDk3pErTZ+5OMQFEFA4QRaWsLlJYwuGI1H5JMhUhb0e11WNi6ihCTHQ/oRezubxJmlu7JBe+UcTgYUR5sYnYDWGOtwGASqhE3HAazCGW699w7vv/Ndbrz/HrrIKApDNp2ii9K95pzA6YLAE3TjgHPnL3Hh0hOMD3cZHhxw5dSAdqtNK4zYf7CF2b9JKgWxr3jiVB8hBiTTKcZoLjz+OMunTzEpYFj4hL1VVpbOYNtnWXniMsIO2brzGm56yMryKk99+rNcfuJj/OZ////i33/lNq3uKlonCJPh+4KL6xFX24Jvv3Sdx7sSMct54/0tfvfGEV984gwLS4IlGZZxsTJiPE74P/1ffpPB8gqfePHz3NjPuXH7BqPhEep49ahzlkIX3N8b4asBZ1e63Nw5IhcRAokQjjD0cVagAklkLaPxDOMsaQYP9g94450bnF5ZxEifpYVFAt+jE4XY0S4HIicUPSLl8BSkwyM6geR0P+Dnfuyz3H77Hf7H377L2+9vcffOA57O7qGtpZAQFCmjrR00irWwoAgs6y98jmfyPm9ef4dDpTDFiF62yeXFDtlsxhvXb3N6qUc79nn2qQv0Yp+7d7cY/f57zIwiKQzjvHQJn1sd8LGnrtEfdLHWIYRH3OrSHSxi0wlRkUCmSaXH733rdfaLglw6VATt3gBhDJ4sR9douMfO3h4zF3BueYGVRYHfHvDMp17g3MZpvPyAbPcdJqlD+S0Wzn2MxcVldJ6RTqc
8eLBDHFlcNkSnE3bu77Bzf4c3bmyjAoWnFFKBrywCiy6y0tBb1ZeUkOcZDhiPRlhfEB1PBqu6skpKvOPFDAKIlSA/BpehKoGjJ0EJh3QO2ySVNGrUqFGjRo0aNXqEqrjVCiRW0Ze9Xm8OjOqRrFDWb9zY2JhHtCZJwsLCAv1+n/PnzwPM685VsCuKIqSUXL58eR6DWgEbz/NotVpzd9nly5fJsmxeB7CCOZ/73OfmsZ29Xo92u83R0RHD4ZAoilhYWOCnf/qn587JSr7v8/GPf3zeDjf/2/oDCFl9GWMoioLBYMBP//RP0+l0GA6H7O3tEccxrVaLxcVFoFw8HMcxxhief/55Lly4gLUW3/fpdDpEUcTVq1fRWmOtZWVlhb/xN/4GWmv29vaw1hJFEevr6/PamZUL7+Mf/zjXrl0jSRKm0ylCCJaXl+l0OkgpybKMKIp47rnnuHjxIru7u4RhyPr6Ot1ulyRJ5vGelbuuerYVjKqgcQUrR6MRURTRbrdZXV2dx5emacrp06f5xV/8Rba3t7l37x7GGBYWFlhdXWVhYQEoweZgMOBv/s2/yRe+8AXyPKff7xNFEd/97ndJ05QoiuaA7LOf/SzXrl3j7Nmz8xqUSZJw7do1fvEXf5HHHnuMNE3nMazr6+vzOosVrO52u/z9v//3ee+993j99dfnY6jdbvP888/P3anPPPMM/+Af/AOefPLJeW1JKSX9fp+//bf/NnEcE0URUMYNV87Wt956i9lsxsbGxrxe5S/8wi/w+OOP8+677+L7Pp/4xCd48sknefDgwby2YwUx/9Jf+kt0Oh1u3bqFtZbFxUVarRZSSr74xS/O+63SCy+8gO/7fPWrX2U8HqO15sd//Md58cUX59G4SimefvppWq0WvV5vPtaNMZw/f54vfvGLPPnkkw+5XL+fGudjo0aNGjVq1Oi/VH1k+GjTDC9SKCRK+pxZ6DHLEkJbcOrUKYbjEfn+FsZaDo5G4FaxxiARaGuZjA4IKCiMRnoBee4QpnRHSaXwfUm708YLQopco3WOJ8qVobowZNogjEX7PhjIdbmCMvB98qyMe5lkBpklZXSosVglyQuNLnLCQBOGBuPKCNfQ9xn0uiwsLbF9OOT1V99kfbHPc5/5DFlh2L53h9uvv0O3E7Ny6Rqq3WZsLG++9Md40rF48RKeH0KeYXZvMVjocPqZF9EqZrR5h4WlBQrtSArI/A7WgyCGYEWVDktZ1pnzj78LKZBCIpTECoV1FikEUpU1NH0l8IwmCLfoBKvkThCFIadPn6XTW2D7aIYWIX7QYrkXkaQpweIGhVOMZke4YoaSpfvTGgtSIGwJFH1PEfqSW299k8mD+2z0PDwVopTC99fBgbYWIRUqCJBKIYXC667xzOf+Mu9++6vsH+4T5BlCSJzy6fRWESokMQ4bRGzt7DAdHfLYhfPMDHz7m2/w7Mef4rdfvs7R13O8cJnw3NMs9XrMkj2GO2+CTugPFrn6iR9l/exjfOe1V/nO9Vv81Z/6PD/+4vMc3XmX5SBhezild67L2WurnFpcRPkKZR3XwhanN44IZiPe/9obCF8iwgCkYppkJOMJf+nn/yrXHlvlpVff4uad24BAG1uewzsuJG8tm3sjzi23Ob3Q5saDMZ7vH7v7BFZKcGAAi0Mbh3GaOJJMjsbcnMwIgEGoGMSKMPJYDVKyNGF481W63QFqtE0risEPaUUzFhe77IaCo0mOZw1nVpfwbt/CSUkr9vCsZaQduRO0dU5ruMkX/+HfpdOO+PJvF+jVJ0l275KNtuiEimwyxHohBotwmjsjTc94jF3AUi+mpx2IiPf3E6ySfOKxJc71fKKNC0zGRxhnUVIiyekOBvTVIsnhiH//tW/z5t0HDFZW6fYi0izD6gKBIOivMNq6w9b+EWluSYoRVgievrCOMgm333uTtUtPcu3q48y2u+jCYI4jiJc3LrLQHyAE6Nkh0+F90ukQUyS0O++xuLjI3fv7ZFLhKQ/Pb9FdvYBTAfa4vkng+8QOjIwYLA2Y5JbCgq/H2HRyvKhAHtfyrCaJ5XvNl5KWXzojpSijVkvPsMNKgWzgY6NGjRo1atSoUaNHyFo7r+3n+/7cAVmPuazgY1U7r6r3V8VWCiHKuW4QzONOq5pzVT28LMv4wz/8Q7rdLoPBgDRN2d/f5969ezz33HOEYTiHJFUMZRVXWkEzIcpUodXV1TIpxxiiKCIMw7nbrZwT+vN2Vu2pIFzlrqzaX/0MH8SwVtuqcy0uLtLpdB6KDM2yjCAIyPMcrTWDwYDl5eU5cK2iROM4ngO/6p6iKGJlZWUOBytnaKvVwvM8xuMxvu/T7/fp9XpzN2oFkqr6h7PZDM/zWF1dnbsrq5qW3W73obqC1b3Xa3hWfVq5WSvnX9Uv1bio+trzPC5cuMDp06eBEkKnaTrvJ+ccd+7cIYoilpeX5+f/1re+xeuvv84Xv/hFWq3WvL/PnDnDxsbGfHxUfbG0tESr1ZrXqHTOkec5QRDw4osvzqF3NV4rF+zVq1cB5n1ejd0sy7h48eIcqFZOQ+cc7Xabz3/+8/P3g+d5zGYzFhYW+NznPseLL76IEGIO02ezGadPn2ZtbY3Pf/7z8+cdhiFXr16dOyrzvFwovrq6yl/5K39lDqCllMRxTFEU/NiP/RidToeiKL4HQD7zzDNoredOznq9zjzP+eQnPzmvr1r1kbWWhYUFfvInf5Jerzd3ddadr40aNWrUqFGjRo0+0EeGj9Lm+F5Mv7/GbH+C8BUL/T6roeIv/LW/ye37W/zWr/xL9GRMkmkmmSMtwDi4t7vPe1v7rPQ7nDt7msQIDkYzDnZ3aXmCxaUFer0BuWoxmaaEUcD23hGbN+9gshQPixLgS0Er8hFSEnuyrAEpwB67nCTHsSdCIJTCX1ggP47PyCWohQHd5VMUWemyPL2xwZnHrnD4xnXefu1/wrtwio1f+DsMU8PwaML+9hbtlT5XnrhGITz2Dg6Z7D+g7UH/Y9eI2l3GIxgnY1RHMVhZIRMx/Syh7Qk8LyYvNIUI2RvnGGuRykfUJmBSqtKhBUgpEMe/O3M8aZOlw9QWhpgZ7a6PkwqnNbPxiOn4iNl0AtEiCEeaTNncv8nC0koZD5JZrHNYB4UFg6bIM5AexpSV6yaTgq33vsNSL2Jl4TIIWUIfbcjzDClVWW9SeiR5gSk0cpwwvvcu/+PR/5NeoGkHEeNpTjYacv3gkDydce4Tn2F/OEWkhjt7+8yOdsmsIck1W1s7TPOcz77wScTkTUJ1hqPkPtPCkM6GeMIRtTtsXHqK3tIp/viP/oDf+e3fZHi4z6/99u8xzC3XnnyaJ7/w1+jevsn/8D/9Gl/52h+xfPExTq+tEsct1leWiQKf1V5APw6JfUXs+7gkxbcpvjfj7VubTGYjMqvo9BZIkoQsmZbuWOUTSg+tC7Is5c7uGCWP+7LQxFEIQpBnBaNZyjTNcdYShRGry8t02218KegoS9dmtCgIc8XjT1zEtbp896Vvsz+cYswdfO9tNs6cYf3cFdaefRKvv8bZpz/NXy
96/ME3XmfxwVscjVI66+tcO6VQ40Pu7Oa8cWTxOoLHOxGtXg/PE0DBx5//LKdXetg8o8hTijThJ9IZyWxCMh4yGR4xPjpgPDxi9fIhk9GYdDbh4t0dHkwSlpcXceM9Zg+2MVFM1B7grEU7SlAufd59/zrX7+/SDUPyyRixNCCOohKiO0sx2kfrgkAK/CgoXYXWIYOYvoJAaI42b+N/7HFk2CFUCaNpRuAHPLj7Hp7/FJ1WjMUiPQ8v6qK8gG5/gai/wqXHNjlKDak2pYPRaFBlPGxZzxQiPyJYO8df/Vt/m7vvv8/Zi4/xe//8v+PG4T7S87h8esD24RQc2OMYVk+UkawlYCyft1NlUVhjHUJIlPz+q10bNWrUqFGjRo0a/ZcpKeX85woWVTXkKlBXgaHKNVj/qoBc5dir12ysO+2AeWRqVd+v3W7z5JNP8vjjj8+BUQVhKthWjwv1PG/uTqtqGFb1Iq218/bGcQwwf83zPIrjBX8VnKsiZesgrooirUAmwHQ6nW+rrlHvtwogZlk2r61XOc4qVdevokare6zaW8FSKN1rFYidTCbzGod1SFy/dlUjsTquus/KqVpB1Op+1HGJFyHEHDJba+euxOocVZ9U96xUudC1ivuswGa9TijAV7/6VX7nd36HjY0NOp0O4/GYt956i7Nnz/Liiy/OwVsFlivwWN1XEASkaToHntVYrMfHVrUiK8duBemqZ1v1f3V89Yyr1+uwuYomrcZE1S5jzEPPph5VW/VrBVLrzsGqnypIXZ2jql1ZXbcaNyfHSnV8q9Wab6veP9Xx1VgMgmB+//X3cxzH8z75YaHjh8Wufr841rq7+E9yXF319tX74aR+kHuzUaNGjRo1atToo+gjw8dIOC6cO096/kfZ+v3fQQjLqbV1Ti92CaIWZ86cZ2njMY7eepWi0KSFoTAgkIzTnBtbh3TOOk596hR39/ZpRT53D8cs9UOefPwyp89dQnRX2bz9HoEzLHe7LHc7vPmtVzCzhAw4NI525tMKPYLIRwLOOiQOrcswUU8BTqALg+ccQkqctfR6HdZOn0b6EVppwjBgsDAgDkNAqdZ+AAEAAElEQVT8wMMgmKQ5EsFCt83F8+f4lgww2uCEJc0SIgm+F+JIKNIZcdwhCEIKS/kHv84J2l2iKGY8mbLQNkS+JERjuiEHowSOgYaghIsOEFXtOVm6rKrsR4vDEwCOYrRJJ0wJ+gukGpzqIJTmwb3reF5A73wXYyXO5OTjPWa+Qy2cpzCSLMsJlKAwjiJPyPZ3QXoIKdHWkiYJ6Wxc1vPThlYcwHxiIEGUf8SmWQGZRu4ckIxGJE4wfetljsyMSZZzY+eIfsvHqJAw8BBS0F1Yxlldhrtay97eAzJjcVhm40M8Y+jGjsgfkg636QzatGOf4ThheXmDzuIyv/tbX+Yb3/g6o+mEOPB4sLvLb/36r/LyV7/K8tIKe8MjHjt3movLXW5Pxtx+71Vee/Mm/YVFet02/cU+Egik4mw3wjmPw0Lyzv0Ru1/5bdbXTxO0O6ysrpPMJhhTYIwmUBKBJUDhiZA8T0nz0gHnhEBbx2iaMJmlpHlZZ7AVtzlz+jSdVgtfScJiRtekdJTFAg80HLx9k7On1/nk01fZvH+fotBMRlO2N28zHg955tlLmGgBHRuOZgmTLCc+OmJYQJxNWOtFLC5JhDG8M7YYq5CdJbzeCkWRsrK2TjsOEcpHhoowbBF2q3dyWdexHIOUtT5NgdEaU+ToPCVLZiSTEbPhIZPJhFlWkGYpszRlNh6RjFO0L1i78iz/zdVPEbXahHFEt9ul3WqVjt3jsWx0gYPjiRogJHEU4XsKoTz8qE2r02Wp8wJwHJMMIBRC+SjPgyim3V/H2nKCunTukxTacenjP8X4wX1+9Z//P9gpZrjkACsidJZinSMvNFMcg8Eyrf4Af7DMxqVLKFE6HIVSXDy/RjJ5H0NZ81QK8IUAJ47bI8t/X47rnEopkJLG+dioUaNGjRo1atTokaocWhWoqEODTqcDMHdtVapcYPCBWzDP87nTqv5Vjzj9mZ/5mblTsHJN9no9Op3OvM5iXXXoVAc8FbA56eqqux4r2FO9XgEba+3cTQbMX6+uVUGnOsCrwF0dpFagpIoLrW+rOyvrba+uWwHR6XQ6379+P3Wwq5Sav1YHl3VV+1X1EetfdVXnrvqwfmzldK3aXLlKq5/rYFopNXfDVueqAPXGxgYLCwscHR1x69YtWq0Wn/3sZ/kLf+Ev8NxzzzGbzebHVdeqP5vq3qt6nfVxVH8WJ12q9W11GFqP061gZX2snxzXJ4+r93EFJ6uv+n6PivI9OZZPHvP9wOD3e616Xo9aCFD/+eSz/35w78P0/SDih52vfq8nVfX3yX6qP6NHqXFuNmrUqFGjRo3+NPSR4eNCFHL12U/xLsulMw/o9/ssr6+jlEAEAUtrZ9h8/20wGmtL65HEIaTgYi/kiY0BRwc7uOmYIksx1mCMpjNYRGJRUcxsmuK1fIzJWV9bwHv+Od79zquMhmNmheVgqukbQVE4ekEZVyoQOGcBiVVlLUMpJEVhkJ5Pu6VYWF7G8z2ENOyNpqidXS5evorLUxYHfQLfRzkHOmMw6MLaKnErBgp2d7c52N1mbe0UfuBjZyN0mmBMga8UnhBIawmUZJaneH5A4iwytbS9HGktkZP0fEWCIzcl2HCurHnphMCTEPo+ylfowmKMBefwgHy8TZjvIpRCiD4pERktnD1EJ1NyzxKlMwp1/Aepc+gsZf/BFkkO+eiQ/oJHbgU6y9i59Q7SCwAotCbPMgIFoQdSCKq/Q6VSBGFAlhfMxgn6KMGMJiTphNlkyO7hHkI5lHLMCsusyMknmsVBiOcplPIQBAgESpZf4GiHIZ989hMYa0isItc+e/sjbtzbwtoVFnttJrOUyXTMb//mr3P9vffIsxxdFJx/4knAsru9yeH+hK3NHZYXesTnN/D6y1zAYo+GKGsZ9NvoPGVrJ6flC9YCn5sHmut3H1B4HXpLp9i/fxdnDWunz7KyvkEym+KsIZ1NKYqUpU7M5bPL3N/e5f5OgdamdNY6x3A45miaYinrBraiFhunz9DptAl9j1OBxU80keeTaMuDzBCGPq1AkRY54zQn7nRoC4jCgIV+C5Mbfvc3fgtfz8j3t2m7gqsdy61pinGClspYWQjwjCCIBYuxoO9JemfOlW5fFdHptciKAmM/qEx4POJKR+CxHJSuW8/H8wK8qEUItKtXjwEcOJwpPyCxWmO0RhcZeZqRZxlZlpLlmkJbzPE5ERKhFMoL8MOw/PJLmOhJhVQSISVSHMeeHrt/EaLWRlf9h7MWZx2WMgY3LwytjsGM93FYhBfiRX0SSpiKc1hnmGYF2f1b/Ktf/qcU7VU8PSVNE3whyKzFiwN6oeIgLc+flyUikUKCsyghKCxk2iCVIvYlimaVaKNGjRo1atSoUaNHqw5wKriTZdkcONXBEDB3kVVuuLq7qg4/TsI9KSWDwWDuNqsDwoODA8IwfAgE1mNK6xDu+8El3/cfcu5V91CPPa1cd
3WoVwGwCtBUv1eq7rneV3VgWHeeneyL6prVeap7qsPCusuw2q/ehgrWVuetQ7pqW93pVx1Xd/OdBEUnIWS1f9Xmuiv05HOsVIHeqg+qWoSf/vSnSZKE2WyG7/ssLS0RRRGj0QhrLUVRfA9UrPdpdU/1MVR/1iehYf3nk67PR0Gu+tiq9jsJ5+p9W/1cd1vW21Pdf/0aj4KP9eN+EAj8fq/XYevJtp6MEq7v+yd1D/6w7sVH6cPaf3IhwQ+CsI0aNWrUqFGjRn+a+sjwcXllle6Zi9g7M4SQFMZitEEoySTJ2Nob8SPPPMXmnVtkk3sILA6HdZquLzl9ekDUCtF5hrSW0FNYIRinOfdv32Q66KKGY1799sssR3D6/BmCMGLQDfnYpz7Bm6++Tbr9gAeZxSSGWVJgpUQ7WPUFPV9yZA2e1PjHrqtZesj68gLrp5bxfYXEsrk75N033uFCv8V0POL+jdcJ+meRnoc1GgkIZ4migCDwwWakoyNMNiVNxngKCucw2qDzlCCM6cYB7ViRzKYYTxNHLVqxwApJ5mZ45gjhILBgMRQ2wgmJc7YEcg6QIGX5B3hRWKwxWKspJg8I0k1aocDzPXaOUqZECJkQWIfJc2yakU6HHCWH6OkIoXMElunhLofDCaGZ4RaXyLIU4zTpdIz0ApyjjBTNC2wQEvVilKcoCk2hDZ7voQvNeHefdOcQPZsxnQ05mg6xpiAxjtjz8aQ4ftZQGPA8SRz44GB8uIOHxROgpKTX7fH0c8+jpWLz7i0ORoe8df0mo/GE6XTGaJLw8ScuEEchr732Gnv7h4hjgN2JIqbjCQeHexR5Sivw6bRazEYzxP1DtqcpS5fOs6HhJ53j1rDg/tShlOD8qdM8vtjlMDF8+70dZmZIq7dIYAum4yPGwy7LyytcuHwFKeDenZs4k7G6tsSlxy8RBorxZEpeGLRxOCyeFFhbQqkgCjhz+jT9Xg/fU5xpK1ZlyoyI1BgOjabbiWh1Opw5e552K2Y63MOYgiDw6a0tk6UJWZpxeDDmm1//Np/6xFMcvfttxlv3mWYWgcNox+hQ0+tLHhxAAHTiFqefeQ4pS3B9553X+N3feIX/+n/9v+XC+ko5MTK6nKRXE0EhkKqM0xVCIIWDOap0te/HEy/loRQoL8ADQhztinVXbsra6mZjbVl3VWu00ZhCY/IUnVqy41W31paxv9Y5HGUEsTj+wEUqH6nkcRtVCSmrVdgOrBAIr3TXelKBUKioj0szsAZwGGspjIGjPcz0kNXnrhIoSRgqBqFiOzfcvP2AwkmcK9tsjMNgkcKRFpYCwDjCwEMKR6hUGe/80eeNjRo1atSoUaNGjf4zVgUQ6/UE8zwnz3Nms9kcFlSArdoXeAja1B2IJyFeBW2qmoxVPcSqVmMFBOvwUWs9h3ZV1GU9PrQCX0EQIEQZDRvHMVJKkiR5yEFYOdqq+6viWiv4J6V8qE7kSVfloxxrj4KQH6Ysyx5ZPzMIygW2FXis1wWsx3TWI1Orvq/D0Gp71b/VM6hAX1WT8MPceFXf1+FU9Xv9GZ90iFbnr65pjKHT6czjeXu93vw6RVHMIWq9f6ufq3NW/Vz9XOkkUKu7/046Cevu1Po1qtfr0azV2HyUY/Hktrrz9eR4qK77KBfko5yAj3In/rCqw+6T+jAH5kdxPn7U9v2g476fO/TD9GHg9IcBuY0aNWrUqFGjRh+mjwwfn37qKsoP6HUU7f4V8vEdhqMxdx8MefP+G5xfXmJw4SqXnnyWu9+4h3CG0FN4Epz0KIxGiWM3pBB4UmCEJCtyJgc7eLJAaYvWKe++t8V4NObKtcdQrQ7tyOfpH32O3T/4BuHemLOeYEU5XpoVGFdGJN5PLUd5weOtEBcF3C80q7HPhY0VAt8jCiTTLOfWjbvY2RTRCRke7OIv9IhDD6ckmTbkWqN1gfAiWlGIPxsR+h5TJVFSEfgKlADhsM5idEbgSzxPkGVTdFbQXxkQRFAYh3U9tPAIskOkSwlMQpuIqevihIcxgCtdZlp7IAVFnpHnCXZ6gEq3Cdo+QnlsjRx70xlJdoQwBWcWAjxnsQ4mowP2947oKE2IpchzEj0mT2cIMnQ+YzIc4vsKo3OcEDhXTmis0WgNReHje4rjVE7yvSF6exeVT8gmh4zGY4wtyLUp42EBX4HnCWw6x1REYUAcefhK0OsvgC2QSnLx/HkuXH2Gzb0DlvsBUhTc3dxjNpngjCaQgiRJeOf2NmHgM55MykmxlICl1+uy/+A+Ni8IPIX0PWwv4sHmDl9+7TU+d+VJ+t0hs6jH+WfP0L17g4WdAw6GY5aLQ8ab24zGCXo2RTvH6OgQGcR4vs/R/gM63R7Ly+uMDg94/53XUE4zHM/YO5py6tQpdnb3mcwSillOVpS1BqWUICSLyyssLa/SjiMGAZz1E/KkwCrJncMZZ0+t0IpjNi5eYtDrcPf9tzF5hjGOdrdHlmcYJ2l1uhgHw9GQP/j9r3Kq69E7dRZx5w6twpEXlgd7OSaHblvSGzv6l55l/enncTLCuSOk0+y+9TLbd28RuZzcGJy1mCLHHH8Q4ZzFWlO6GLXGOIdUijCMCeOYKIoJo6isU+IHeJ6au2LnU5oaqyzxc1kPUgkfiSudikGAX2OZbv79gwmjc+V7yRmDNQZjjttmNbbIyKzBaFu+30xZw1Rrg9aGwwcPkHGfjlAE1jCaFsShIlLlhbRzBMrDCcXq+houGRH5iqATsLWb8O77WzilWIwUzglC5VDVpFYLrHYEnmK9FxBJCH0JtmxLo0aNGjVq1KhRo0aPUr0OXlVjsAJ29XpzFdyqQF0VjVl3j9UBWnV85cg7ODiYR3kWRcFsNpvXdiyK4nvgY1Ub0vM8siyb1+Krg8gKPFQ1D4MgIEmS74lmrUBjBeOqeoNVO6vX607BOuwrimJ+vQ9zAFaqAJ1zZY3ACrpVLtKqn09CqfpxJ4Fw3S1ZqQJfFUCrQGD92VXtrvq9DtIqgFhvTx0K1p2X9X6pvqIomsPaqv+01rRaLbIsm997BZqrfjgZQ1sHkNW56+5W4KFnVW1/lPuzDkirfqjDx3p/B0Ewb3d9nNavexJM1o8/CWXrY+Cki/JPGrv6/YDaSedgvW0fFmtaj6z9j6WPAidPwtsf9jx1J3KjRo0aNWrUqNF/LH105+O5x0jynNu3tlAiIHchD/YnnFpNWe8NuPLk4wRxxNUrj7P9xiIKUBI8IfE9hS8dnTAgVR5FoWlJCVLhCosSDgW0PEcU+ExwbN/fwbcFV559CoNkaaXPp569wtr1myT7hxSmjCRd8gR9X3EvtfSDgFkUs74+INSGZ65cwg8UoWcZ55ZbN+5i8gwlQGuD1Zqi0LQFSCVJipQ0SZDBOlY7LII0zen1+kzSjCzPSXNNnuYUeYZztoQ6QqK1QQlASqQU4Er4ogFjA6amRdcDz44QdkpRZBTBEpkuo099CZ6EvMgYj8aoYkZHjMGXSD/g/jDj5t0DJtMpzhgCT7ISnSbwBYXJmY4OyZIpvW5IgEXaAp05cAbnLNJZOqEonVzHsLPy
tZXOOMt0lrK4vE5xOCG/fZPZ5AjVAj/QjA4TtC2QDpQUSEq3nFKKwJME0iAdSHUc5yIEFoEXxWA81k6d5rFLl3j7xh12d3eYjANCzyc4jmeVStGJfbCGo8mIKYog8PBUGdHpKUmrHSOFo9VrYTPLUZKSCUG83iUbFvy799/kmf6Az/3oc7z/yjdY3jjHRhDzF7/wMe6/+yq/9nt/yFK/x/MXBmweTbm+t4WTPhy7Nne2AqbjQ15/9SW0zpB+wHA04ea9bVqXz3H2zGnu7uwxPhgzTfPSVeccC70e5zfO0usN6LQiHotyzGhK99RZbl6/zYWzp1jsd5BCcrh1m52bKTrPUULSbrewhUZa6LZiotAnn02YSrj8sSf5X/7CFwmE5lf+O82NP/wDXF5w/4Hm/pYhzaC3tsGz/9U/oNUf4HAIqWhFPu1I4tmUfP82W3dusbe3x3gyRRdlPUttNL7vo4uMNM3IkhlZnqELgwaQZc1F6fl4YZuo3aXd69PtL9IbLNDr9uh0u3R7PRYHfZQAi6p96CBQopqUlrGplirG9Vii+iaQKPAkUgUoXOnyPB6qOMr3GoAF6yzWaLIsJxI5vThm68EhufSx2qI8xaAdMEosxjnCKMQD1k+fwm7uIHAMuhGtvRl5XmA8hwsV0lP0lFe6dHHYwGFc+UGCEo7QV0hZRrL+ByTmNGrUqFGjRo0aNfrPWNWH+hUUqByBzpV1HKs402pbtX8Fuk7GJ9ZBz8k4S9/3CYIA3/fnYKxyLwIPOdHgYUBWHVcHPvWagCchXHVMBbKq1yqQV7/eSeBUnbfujqygah2W1fut3gd1iFhBv5N1+U5GrdZjZesxqXWYVLn9qrafdPaNx+M5CKxD2ZO1L6vt9dd835//XsHUat8sy+ZuO9/358+v6qNKQRCQZdn8mVb34vv+I2uK1lW5KKtnUhTF/DpVe086GE86DevAtoK2HwYs607ROnw86eytg7EKwFavnRyLJ4856Xz8k8DHHwZMPuo8J69Rb+tHAXgfpY0/6Fon23jSrfknudZHdWc2atSoUaNGjRrBfwB8dEGLd+/uc+fdTXSWgPAYFn3ubY350bVVltcWMMJjbWWF0xeeIFZlRKmUglYo6UhFHCmck2RSYLUuYQmCWEk6StOSFik9ksKiC83uzj7uldc4/8QlppOQdhxz9doldu5ucrC1y3kBaWbR1hEBVyOPtwuLdYKnr1yi1e2CzpgZePk7b3MqlighcEJwOEsQwsM5gS0KhFCgNcIUFFkGKiSMY0Q7Rikfg2B8uEusDDLwCD0P4RwqCNBOMR4POeUcUX8JAccRmWV8aZJm5GlBIgxLcYSzCTIfIo1mPHUcDadIW9BpR6XDTCesdCVh6OHFLXaGKfe2D5iNj0iTFJwF36cocryWjy5ysjzF5Bm6ELQ9Q+EKlCwndsJBS1mCbsjeqKyHhyjrFpZgx2GtY+/giJ2tbyKzjFY3wrR9eqQUWc7RNKePKKNhBXhSEAjotEP6kSK1ArmfIIUgywr2s4JWXjBJ9yAfgxfw7dffZTqbkSRTZqN92u2Yfq9DazzlcDjCSkkn9um3PXy/nFwNFgakSUIQBIxGM1YWOvz4n/8sTsO/+be/xvRgQm9jQLgQ8OC9HW48GHH7y1/l3GwHNc2R/Rbfevs646MZh4VgZ3eMMJrt4ZRpbrAHu7TiNr4fsH33Bkf722ALvDDEaEuaF2zvPEBJyGYz7u+NSPICa8oagZ4XcOHiZc5snKPX6/LshXXsg3tsS0nQH3DmvODMYhtnLKPhIcksoe0pWr0+ftxCxgFeGKCUQCmPLM8JWh3E5IgMRdDpECvBl/5X/4gbz36M26++SrG/hRnN2Dj/GB/76/8Va9ceQ5ghOEtR5OTasrm9zz//td/l//C//9/x2U/+eZzNy4hea0rnnjXHY8BitMYUOTrPKLKULJ2Rpymz0SG721skaUqapqTJEdn9B2zfc2wLSeD7xK2Y06fXeOfVb/Eb37pJ7nfo9Hr0+30G/QH9Xp/B4iKDfo9er0e/06HVbtGKIsIgwPc8fM9DyTJiVR47KOeQ0pWRruVQtcdYUnBcmZHZ6JCD8ZRCBuTOQ7sxzuYEvsLNTLm/LhgmKd/5vV8lmj1gVUr8MGQ59jhINcaVvs3UGBASX4AnILUWqSRKSTwp0NZyXNSyfA82atSoUaNGjRo1anRCURQ9BKLqMZ5SyodqO9ZjJ+uRpfABKKwDyErVMZX7rTq+DlEqKFgBrTosTNN03rYKhAIPuSwrMFcBJM/ziOMYIQR5nj8EKOM4fuhe647EegxsBUurdlR1DU/Cx0c5JevbgyCYR81Wba/6te5qrNx/Vf9UqsfXnnRu1uFxBVvrfVu1ow6LT7pWK6BYwea6S7SCz1Wf1q9Z7VvVtWy1Wt9TI7J6rXLUZlk2v6/6WKv6uDp/nucPwcc6aKvDvCAIvmfc1uuV1qF0pQpOPiqy8yRQr3+vj+c60KuPsXpb6/ufhK4/CD5+P31YDHAdcD+q/R/let/PgflR4emHAdKPer5GjRo1atSoUaOPqo8MH4eZ49XXbpOMDrBW45wlNYLrWxmnF+7wZDrCRQPiyOfq009T3HqJlaDASUHsKxQOTwqkE/jCURzXyktmBmkNHV8Th4JWp4MvHFqAdoKDgyPMG+9w5spjqLhFp9Pj4hOP44cxwb1NUj9nOzVo6bFZOFotxYWNMwRRRJFneMrx7vVNPG3QooOKQqSBdDbB8xTdhRXCwCcKfOysnODNZglhS2Kkh0kLhod7pNMJsecRxC3cdFjW+zOGotCMM81kPAXrCOMOUnloPWI6TZhMJ0wnE/I0AVMwavmcX+sQhJbQZegAtofbpMkMsdhnoddnpR8SKUurHbN5OCtdXVmG1gVOl/DQquN4FenBscPMITBaE3Qk0yTHVx0cDiXAp6CwBussUpZuNGst1hmsdYBlNpsxGY+JwxCFwEPgO8M0t2SZJepIlBIgFFIJPAvLC12WBzGylfHG/TFSKCazDF9JtLa0uj3SqSOfTZiMh0ymU/JsSq8V8WDvAKkkvU6bw+GIThzQilssLvdBQKcV019a5803X2M0mSKB3uJpbm2P8HzJ+mOX2N26z9GtQ/pLa5xeWiEZb5GmU14ZHrE7OcRLcx47d4YczUFSMJ5m9Fpt1pdWuL+zy2w8YpqMCELF/t4uvu+hfB8AK8FYw2g8ZW9vj+k0IS/ysuagKzHYwmCR8+cvsbq2ThT6rJze4M27d7j64ud45eVv8Oc+8zwP7rzPdDimrTziKCZa6BN0Oggp0UYjAM/zkZ6HNJb+0gI6z9m7t8n/7f/+/yWKAtpRwMJgwPLnv8RCt43UORPrsa9abL75PkJ5XDi1TEtqyDN2D4bc+v1/z+4//EesDToIFaBChbRlxKksbwKEQOFw7oMM1flEzjmuOIusVk86W8agziaMDx9wsH2HzZtv896br3L7/jabt95je5IjhQTn0M4CHzhh41aH2PcpjCFut1hbXKAdxfjtLssLAzqdNu1un94xtGy3O3Q6XVqtuKxf4/kMD/e
8vs2bOHLVu2UCgUsn0xEMmAsTAMqVarPPjggxl8MufedV3uu+8+isXiNX2J5XKZd7zjHbz73e/OzpUByL2Q3MDo7du38yu/8ivZ2Bjn7549ezIwdtNNN3HgwIGs93THjh3Mz89nLtsHH3yQ++67D0hhfa1WA+AjH/lINkd/6Zd+Ka2L8TyKxSL5fP6a7kpz3mzb5p577uG+++4jSRJKpRJhGLJ9+3aq1Sq+7zM9Pc3w8DDVapW1tTXe8573cNddd2W9qCYqOY5j3ve+9/Ge97wnmy/md0A2Z6vVaja2BsBv3Lgxm89mvmzZsoX5+fnsOM3cMnO3F86b+WLmoXHcGtfhD5uvb1QURdn7jplzxr37o64B4Jr+RhOfa/bZXAe973Ou268k6auvvvrqq6++Ur1p+LjU6IDwsx42pVMIFgmQaDox+IlFrAXbx4e5+eZbcAoVXMciThSdVgMVezg64dLyRbxmE0dr0ApLaIQtsaSNKwR5qaii+a+Hj3Pywhmk1GDZPK0V2yzJrF0id+IlKEXg67S3MTQuPAUFH6ZX0cWIE+tbePlik5E8HC7A504sUpsZ5uLGCULbAdticGSC4sRWLhw/Rf3MCSbdiGoUU5E+A55HIQywgxChdLcLToG005xPUqeZFFBxbHYPVpkN8pxoNXm502JI2vyi6/J8EvNnUcjZfJHQdRGRWSmXgqosChPDDxUSQcqx0qI70XWk2ZaFZTnEUYiQVjdytYujtHFzKaS00Eq87mpDY1ndOMwkQYc+dNoUvA478iUe/PEPUjz0MqK51q0QlCl0jVN3JSpJ+xs1EAXouOtORBMLSRSGhHGc9jEmCUESo6QkkgKF1Y1X1SRCoi2HRKQxmNp20MJBaUUUhinnFJqke+imHzIBEpGCSNXtfYwFKXTUmkhoEikQAzk6dY8AkXY8drsLNeYDdkJJQMWymBWSryvFGSlR3Q/be+Y3YqP5vS8+wdXVOi0/JBEW1914K/lSFRB02g0adR/fazFSG2CoUmB4Yphbdm/hT578Lru2zvODV47z0ulL3LJtOj13gLTzlEtVSuUSnasRKo5p11cZkBKSmML6Gu2VOuXJQc5duMzgxBTHlw9zReTZSOp6VUIgpEAKgSSF0qmRT3ejRV9PUdUynRcpdI2vuXl4XSaMtJvRq0X3BiWtTI27cLAVx7R0yqQtN4dtyW5XpEILY6DUaWSreY0uENdCpLGqXfgpdIxGZWMOFnQhsRACRAqeu7uHUJowUakrUkrcXIGhqU0It8ha/SqR1qjuw20piZVCC0mUKCK68NVAxnTHMhiLiaI1P1WpU1Z2R0YlCWGxxtQN1zM1OoofxtTjfnxMX3311VdfffXVV1//rXq7Ho2jzYAw4x7qBVe98MSAHSDb3jiNIIUCvS4nE4touvfM8xtHnoFxxnlnIITpDzQAxAAr8/wGmPQCEuMaM2DijcDRuKAMIAGybc3x9UI34BrgYiCled7e1za9j+YYemXGzfzf7Id5rd7+wR9+L/TfPof5v9k/s7/GnWZcgL1AyEA9z/My52fvOBhgZ8bdPLeBz6Zf0mzjeV4WTet5XvY481gzn5IkyWCtOYdmzHsdkEmS0Gw2s/kYRVHm0DXQrdeZaxyyZk6Y4zPwWSnF1NQUw8PDGZAzENmAU3NvZ/4fBEE2X41z1PQxGseo67q0Wi1KpVIGAs25EEJkALPdbv838N4AWsdxcByHIAjodDrZ8XU6ney8GKek+XmpVKLdbpMkSRZHbI7TuHyNOxRej+k1rloz9wzIdV33GieqicI125r9Ne5bs9CgF7r3ztXeRQRmG/P1RwFIAxvNe4k5D+bc987/H/b1jXDTnB8zt3p/3o9X7auvvvrqq6++4K8BH9MP792VWUKgE4VlSWzbSaGFJVAapLDxg4C1hTM4rovW4IcRQeBTsATVSpnILeIXJQNjNdbUEssrq7haY1kJWsPqWp2FY8dYW1wjihNk2mLHRKnA3ZvncRdOg5VAR4EfdSNBdQralIJ9UzBdwQtdXlraSqP5XfK5CsW8RbhhhDOrK1yZn8IplaiMTVC77a0MbtvH0mtn+cqnf5dxlsgX8iTBEKLjUe14OJdXmFlrssN1qUZgxd3uROiBgynCqNiSAwNVZl2HF5stXvY6bLNt/omGL8Z1vlss0XSLKFKYopVCo7BkOp4mWtWSdtcIJrrxmClCdCybqHujI0TqnlRao1TqXBNCYEmZ9fGpJOk60ySJVog4hk6L4XaLf1QpspZYvKhimgiq5SH0+ioiiSDwMyCESkAl6ER1+yNDIhUTS4vEkgRRSBQnRDpJuxe1ItJpl6MSFrFI4zoNQER0AVmhhHBzJIkiiXwiEaGEQAubRCvSkQGFTDsIkSnsJG3lSzCvAVgSmbdo+yHtIP053eMWJFhSZMZVR0h25/KsuA7fTSKCOI2tTdC88Npp4jiNBJXCwpaCXE4St9YQ5QJnLl6k0+mA0FTyBbZt28lg2CAKOgxXCggh2LtjnqSxxlPPH+G6uXEQAqU0wrIoVarkcnmiKCRSCe0worBtJ2LpElanhVy4hD9S5fyJY4zefDMdeZQX/Q4TbgGlEiDtL7SkxBbdP2QgsNIy0Wwept2e3QtY6DTGtCcdNYWU6ckQujc2VaOTOIW+ShGrFMLVo5iw6zB1CwWs7v2GVipzHuquhVCgM3elsFxsN0ecdJ2VWpNEQZf7SUB1nZepCxIh0phXdOr6FQlaK8LYRJ2m0bjLq2sMjxfS6GFtRoHUYQrYtsT3I+JEo4UBpQYyvq70eiOzjCpNNz7X6sbXQlhfIZ/LUxsZYW3hCrbsx8n01VdfffXVV1999fXfykCgHxatamAOpLGJBkqYP+gbYBZFUeaq6oU/BkbYtp11RvY6pQxYNMBufX09c0Iax1Ovy8r0RIZhSBzH18Av47wyIMlAFgOXDIwyIMNApt7o2F7w2AsZzWubsTGAxERolsvlDJIY96GBMmbfeoGPee1et+MPg43mMb1OSKNeB6V5bO8/E+0KZO474yyNoogwDCkUCpmT1OyLOd/mWM34mO97naS9cLoXSBuY1nu+e12Zpl/SAKsf1qvZC5vMa7iuS6PRyLYzMNLss4mmNT2lpnfUQNef/MmfzMYgSRKiKKJQKOC6Lr7vX+PcM8cQRVEW8WvOqeM42f6ZuWKidc14GfdfL9w014yZ873w3UBIAyR7ry8D7sxcNM9vxu2N0L4XCBqHqNlvc55644zNNuZ9wDh9jRvauIN7oX7v3H3jXO095+Z6632+HxV1arphzVwzx1cul7MFCOa1evWjHJHm+HqjWc3Cgx92TfXVV1999dVXX3//9Kbh48z4GIkWXedS2vFmW4JKzsFxXLSTp6NXIdGEYUSiBI5SCJVQECGurbDRyLBJQUq0K8jni5SLkywPV/E8nySO8doeUloISyJkd2VWEjNfLPIvb76B/aUEkgCRJn2mkahBiE6gEyWsyISpg/cjxWmW/WE6Z+rccPdbeeWZb5KPJI5tsS5cbl1PsNeWODm3i2889V3aX3qMyc2bsS1Nu9ki
AXw/QGnNiitRsyOcnBvnsG0z0vbYdHWdzWstql6I7MITQ1AEEiE0w3mXu5wqZ9otDnVCcnHC+12XTa0mj7ghi8USsbRImakmUTFCS0gEQmgUXaioFSrurm5Eo7QmSWJs2Y1w6TpI0QKkQCJT0CVevxFLuUqC8n2cRpOtYcjHR6vcOVil1WxRWq3z9ON/wr21aTYogfYDUN0P1kJ3bzRDYiGItE5hIzqNOI1iojgiiNOfRQhiNMqyU9gnJYkUqChECYlCpNGcxRLuhg0kDiQqIlY5orjQdToqkkQRJyqNbNU67W00Ea4qdeYplZDE6YrCKNH4ax7tMMHvfvCV2iAx0YWaAhLFuG1Rs22endvIyrnzhF4bN5/D0RopBJVCjj3zmzgwP8P02Ai1SolSsUDg+/ygIhmolFj1EqojGzh18jU279/Pke9+m++9eopYSIYGytx2YBe/9cVvcXGlzlQhBYMIycUTR5nceR3SsnDdPGrpEmKgCjp11w6sr7N0ZZFSJc+FxVWGxid47dwFDiY2A1qnbkYhEcJCCKvL3UQagWqyTLvnTXcBYG8fY8/td9cbari2NBt3IZwiVknqOk0Uq3GMIo24dfMuVrdnVKv05kxlzkWJVknmehTSIoljU7SIsARaRV2HbgJCmvbF7jbp9SN0F5gLSJQi7NY6CttGELO+tkR1eJQ4Cnq2h7xrg1a4tqSjUmid3sSK1LltBqDnJVMEClLI7OYp0QrZ3R9p21j5PAiB12nj+a/3W/TVV1999dVXX3311ZdRL4AwUMLzPMIwZGBggDAMWVlZoVwuUy6X8Twvg1EGLBjoZ6CeARQGPgwMDGQAst1u0263sW07i34MgoDFxUUuXrzItm3bqFarmcMLYH19nSiK2Lx5M47jUK/XM5jZ61SD111Nva7DXsBiohgNWOt1cxkgYVxrBgBprSmVShlUsSyL1dVVyuVy1if5xljKXnfkG/exF9LCtQDnjcfSC1x63WA/zPnY+/oGGgkhqFQqmau11/FnInYNWGu325k7tdcVZ74aGdhsznFvp2AURRSLxQy+mX3pddCa4zf7bL7vHTMgA05aa/L5fAaVDXw286cXZBswZsCj2f92u83o6ChCCJrNJvl8/hrHY6+Lz/zcHGM+nwded5YakGXAY6FQyICZieE1z2muBfMzs3+9kbye5wFk7kMTD2quJwO1zfbmmskWd3dfq9c52uvCNefFwEzzWsZhaMbaAHbjnvxh8L4Xxpt5+UaZc2ugai+I753vb1Q+n6fT6VCpVLJ5FQRBNiffCNh7415/mBsyn89fc230Rjf3wWNfffXVV1999QV/Dfi4YXQcIe2ug+h1d5MtJInME4YBYRSlnYRdF6BUMS4xNhFKR7hS40cWibBwLReFIlQx5WKBkVqNSrlAEESoIORiJ0AHLRxLMuXk+H8d2M2NgxIRdVLrnLAgAe0FNL2Aw1HEt3yfsXLEz5z+Y3Q14Njl62i3oDwwTDNxcOOE1uVFviFCXjp9Dj/2GZ2a4MLVOnauzNnzZ7m6tMz4gI0Vd2g2W6AVcRShddo3dyyMiaOYJE4YdAX7OxFvTRI2S7s7uKrr5gIhJQ6SrcUyY07E95stjvhttjkuP6sVf6o1p4slOnRjNRDoJIUjWkPUXVWnum4wpRQ6u+lQoNJzIGXqDE3dkTr1A3adYHT7IG0BcauNaDS4y4Jf2jTGtmoJqxNSLZV5UAmeuHyGR9bWuLeygTGvg9AKLVIHW9bdaEkCpYhUgnJcklwB328TW5IgUsRap4DRttG2QyxB52xiFUPZRRUclG1Dvgj5Mg0Ffieg1engB+kcCrUiStIVh6n7MYVGiU6fP4ljEui60lI4pJUm0XHqjJRgabDRWEIjVRo/qrRAWjY1FbPLsbk0sYHjjo1bKDAIxIliYqhK3pa8655buGXvdvKOlcLwLKRzgKnxUQSa4xcWseb202rU8WWOkdENHD9zlkSBZUlmpqcYcCVnLi8zuXkIISW241CpDZLL5YnDgFK1Rri+SqwUtopBa0pBQL7VZLU2yIVjx7j5LTfy6uXLvBx2uEUrJOn5tEmhWdrzqNBaIbqz0LgcJSIF03RzWLu5oxqduWlNJK9GdfsX6fY9xoQqIdEJkYpZSxQKiADX7d5YmshgkV6TQurUOihTAChSNJ86C7tTMolidGyiX7r9HDp1OWqdZLHO6Y6mcz0KQyKVRujatksuX2RifArHsoij11efClK3Z6TBSaAVk0bUdl9b9rRfZjvUNRVL0pu9uBtjlCiQVurCLA5voFCqQJIQeB71xdU3+1baV1999dVXX3311dffYfW6HM0f5FdXV1leXua6664D4PLly4yMjCCl5Ny5cxlUEEIwOTnJ4OAgQRCwupp+5rQsi8HBQRzHodVqkSQJYRgSBAFDQ0PYtk0URZkLzYASE01pIJSJmjx//jxxHFOpVCiVSpkzstdhafoF4fV4VwNNDFDsjX7sdWkZ91hvrKo5DhNhGoZh9vizZ89y4sQJbr75ZsrlcjZ2vbCw193Z+1pAth9GvT9/o/PxjW6vHwYdjXqjMW3bxvd96vU6cRyzYcMGPM/L+i1t26bVanHu3DkqlQpTU1MUCoUsNtQ8xoyDAZcG9hgXmwFESqkMRPeOtQG8BpyZc2ogpDnuXkBsIj0N5DLAyLgZjSvWRLgaV+4bXXW9PaVmDgIZeDRRpwbEGsefOWYD+ExHoAGFJhbV7LcBdub7XC6XQTsz180+m3NqHIkG8vUCz96oVdNv2ttxarbtvVbMsb4xPtXA9t44Va3TPk0z9835Ne5Gsw+5XO6aiNk3ulnfON/N/K3X65w9e5b5+Xmq1SqNRiN77j/Pceh5Hl/5yle4+eabGRkZQSnFuXPnePrpp/nwhz+cuUTf6LI0YPyNbuAoilhdXSWXyzE5OXnNtd1XX3311VdfffUFfw34iLCy3kG6UaFCCLSEThgjLI20ZBp5aKUfEIs5i1zcxIpBW6mbyZI2Wkv8KCLWAm3nKLgOlXKZnFugEfm8dPworfo6bqXKdZu38A8rBe4YGUCGr8cragRhpHgtiPimpTkjIRA2W8sCefV7qLUOr7ym+MaRNcYGcozMbGJgcJSrosip9UUul0tYApoXLlOtDjG3/wZePvQDFlaOceFShzCOCYMQzw/wgpA4TkFY9uErtW3xZSH4HSF5T7HAB0pl5qXE0jp1dOn0Q6XQMGBb3DUwwLFOm+e8gEGl+KBWfDmJeTFfpGM7SMtGym4PIrweeao1KtFoHZNiIkGiFIlSOFJkAEepGBQZJhNSggIVBkRhwEi7zQeqJT6wcZRhKRCdAOIEopiaVNxXKfGVRp0/i2MO1sbY2PawdJzGnKqY2HaIEWnXYq6AcmwSy8ZXmlhKQmmhbQfyLqqQQxUcIhvCKCCMLDpBQrsR0k5CQhkSigaRSDsCzXEaECaEhbC7U7b7wVtbNnbORYUhgRcQRjFJHKOFTF2RcYydS2/i4iRBaYGNwnYkOklQCvJJxE2OQzRc48holVOXFlFJwrbZae67aR97t86CShgol7C
k+fCf9g9m+ygkQiimhsq8ev442/cd4OXnv8/2LTtYWrrKzmmXkXKBnIShcoG1RgfNcLpS0nGpjk7i5vOEcUyxOsxqs0E8NoFz8TgCjQXssSWPN+rYOYfVVkB1eJijV6+yS8OgSONWLWFhSdDd6NVu+wp0AZxhgqqLTiVprGkKDUFrkUWRmmPTaLS0UdIiUbrrPNV4iaKp0yjbSKQ3mFKk/Yjw+mphgSbRCUK8HlmadlKqNJ5ZabzQw9Lq9YUKkIJQVJeXp3/k6L3RiaOIMEmhumXbqePSzYFQRHHYhZvpvHEkRJFCWJogjjHtjRlzNA5l8XoPZHfrdMW2lMQq/YOEkyuCkJSGhikWi1gqweu0aV05/6bfSvvqq6+++uqrr776+rur3j/eG1DUCxKNK864lxYXFymVShSLRZrNJktLS7iuy9raGktLSxkoWllZYWpqinPnzmVwCmBwcBDXdanX6ziOw/bt2xkeHkYIge/7XLlyBYBcLsfU1BSDg4NMTEwQhmH6+dayaLVaLC0tZe7FSqWSuTSNY8xAIeOkMu4v40Cr1WoZbOp1KvZ21Rn41dtB5zgOCwsLXLx4kRtvvJF8Pn+N69EARzO25v+9Lspe6NYLkcx56AWTvdv2RnG+MVrSHK+BaMZN+uyzz5LP53nwwQc5e/YsruuyZcsWkiRhaWmJZ599luHhYSqVSja+bwRbQOaKM2NlIm8NuOqN8TT7aPbHPIdxX/ZGtV65coXXXnuNnTt3Mjs7e42L1HVdVldXuXr1KmNjY4yNjWVRpr0w1MzPXthpXJCmX9IA81KplMHq3mvAnGNzvgzINe5OE8X6w+CVgd0mztXA16WlJc6cOcOOHTuyTlPTN2kihHujes35N72ZZk6Yc3vq1CmEEOzZs+caB2xv96k5jt45Zs6fgbu9INNcA72OQHPee0Gh2Y8fBsHNfDPwd3FxkUcffZSHHnqIffv2ZddpbwSsUS+IDIKA559/ni1btjA5OZnB7e9973t86EMfwrIsPM/jwoULrK6uMjIywvbt21lZWeHUqVMAtFot5ubmyOfzfP/73+f06dMA3H///Wzfvj27nvrqq6+++uqrr77grwEfBSCFQFoWGpGCA9IbgdAPqA2VaDoOVrfLrb7eQAwO4GsBsYVUGltoWmFIjEUjBC8OiBKNZbks1CNajQs01tdYrDdYa3Yo1Ot8ZNs890+OYCU+xCnA0EqxELb4qla84kjWvIRWy6MdKZZtja630MUExxHs2LqJ8Q018tUhGpFFedceiq+9Qnt9GVsFDA4KLiwt8vTv/CZx5OMHQTcCNYUzTrFMoeKgpaDR6tBpNsESlPIF4kTTatY5E8f8Zr3Jf222+MlSkQ8WykxLgaXjbtJk2tFnC8nuUokhW/K1ZgelFe/SmhKa5/JF6lohur0KSpt+OrN6NE7dYOYGLE5IkpgYjYjSHkgN0HWHSVIHmvI9nFaLu1ybf7hlit3DA1iRj2j7EMboOIYkvUkrW5K3lgp8u+Px1fpV9oxMM1evY4ed9IbFsoiUJhESLSXacaFUxms1SGyBGqmhKyUiCzpBRNMLaPgRrTAm0gm2EBSEpKI0Q0JTcWwq+QI5rXG6LkYtU1AYhR46jkm6+xYrjQ9EbQ9PJTSThBaKthB4OgVjbSEJkgQlBZFSJFLi2m4Ks4QgF8Vcb0ly1RqL997G+ZcOUUzaxCrh9gO7uPP6XSj9unMV0QW4WvV0JXY7FYWkUHApr6xSKO0gn3PRxSqDw8PUVNqHqnWE1gopUlehQiOUz4XjJ5g78BakEJTKNZa6q5ILSYJAECcx64vLOGUfd9t2zp54lf17dnJyvc6JyOMmIdHSSvcDEBIEVubcE1Kiux2PqscBqzF9hwJLk3ViatG9mRcWqJhIxagkJtGaUKe9mo0kwSM1HSshKOVzSCRpi6eJSOV1F6NK0EKikxi8JdTlDlpYCGkjkggdNLM3Fm0ig7v9nLoLQVMomd6shVFMrFJQaLsuQXONxZV1CjPT6crY9AAQAiyhCRX4MbSjBMXrN3WiZxxMHK/u4knV3X8TZwwwMjFN3GlRGayRsy28VhO/1SRaW3qzb6V9/S2SEOJjwCeBn9Va/95f43n+V+B/Ae7RWn/zv8e+9dVXX3311Vdf/2PKAAzTl6e1JpfLZZGcBhYkSUIul6NcLjM+Ps7U1BSnT59mbW2NLVu2ZFARUufk0tISc3NzmStrbm6OdrvNyZMn2bhxI7VajYsXL3L+/Pms0833/QwqLi0t0Wq12LVrF5cvXyaOY2ZnZ/E8jxdffBHP87BtO3udu+++O4OCe/bsYXh4mBMnTnD69GnuvfdeAI4dO8bS0hJBEDA3N8e2bdsyl9bOnTspFovU63WuXr3KzMwMQgjOnj3L0lL6WbpYLHLgwAHy+XzmRjt79iy5XI6RkRG01jQaDdbW1hgeHs7cZAbaGHjpOA5ra2usrKzg+z7VapUNGzZk8MbzPNbX1zP32fT0dOaaM4DKuO0AyuUyYRjieR6NRoPR0VEcx8mg6+DgIJ1Oh5deeolqtcrGjRszl+na2lr2eLOPvXGqvW43KWUWEQpk3Ze94M04I02sp4FOxWKRTqdzTSdhHMcsLS3x9a9/nc2bNwPp/aGJ3zTn4/Of/zzvfve7mZmZodPpZICzd7/M/VNvrGmSJHzve9/D933e+ta3XhPdamCb2eaNblfP88jlcpmD1HRRGidhLyQ1gLXXPQxw5coV/uAP/oBf+7VfY/PmzRl0hdfdnkIIisUikELM3thb838DTh977DGklOzYsSM7T2ZOmPNnolp7Ow5N3CiQXaNm/HrHy/zMRND2unnN3DQQutFoUKlUMniZz+ep1+usr6/TaDSyBQcmztXMj174+EaXr3kN835k3MYDAwMZIH7hhRd44YUXaDabtNttPvShD6G15lOf+hSzs7NUKhUKhUL2/vC2t72NL37xi3zlK19hfn4+c7v2xh739T+2hBCbgDPAp7TWH/u/6DU/xn+H+/K++uqrr77+5vXm4aMES6a9dYlKSJIo7RZMFFEYgZRY3T/85204en4ZLrco2ZrRsiKIFLYUnLzaYKAyQNyFiKlbShBrSc5v8raDc4R2hT989BkOqpgPTo3jxHEasaoVUZzw7Moaf5rEnLNsgliRaNGFdYpTDYXv5clpn4kNI9S9QXLlAu1II2yXVqNNGPh0WuvEUcTTR+p0ojSuM0kUUxMzDI5N01pf4cANN5MvlnHzOc5eusLpIy9y/Q0H6cSCY0dfYWl1naMvP0+sFKGGUwn8b/UmX/EDfrlS5b6cS0klWMI00qWuuYl8noekxdcaDc4nirckCldrns4XqQNWroAlrRQgCQ1SptA1TqNFScg+aKbxqwqVKGxp4dh2Cn6SGNHqMJ9EfGRylHfOjFBBIQIP/ADiOO3lU5pEayKliJTCtQW3lfLYfsB3Fs+xNLuNzc0OzvoyCIky4NGywbHQOQhnNuDHMe22x/LVVdb9gFApCkIwZlvsdPNMuxVsS9KKExpKs6g0x4OIxSCgGcesRQGeUoSA7Vg4lq
Tg2KgoxIoT7CQFrFXXphzGFCJNVUjmpKCARtqAtOigaQlYdRzWkgSlFVJDFcG0ZUOxyKmD+2m3rjJfhUIgORvFXW+cgiROgVeiEJaVwr3eD/Hd85ga5gTTwxWOXzzN1l17OXP6JLPbd3Pspee5srxOSUQsNXzesmG4C+oh71gUS8XUXRdHlPKF7g1MRBCGWFrhCIeJfJ7dccxrrTaOC56yyVcGOBbF7IvBlQJIIaTs3ighur2SUiK1RsnXI0WVBoRAknYwIiUSjcKCJEmBoRTdiN6ukzRJCLUiBlaVJhKCkNRJmcu5mWPVwMKu+TAFejp1jIIGFZF4a9noCborRoUJQTXxrXT3N31PQKn0eTUEYUykNEoLhO2Sc3PUKgPp3I+7GFWAJQWWEAQakkQTxJpYaPKWQJFCY5TZ27RbNTUyC2RKT9M+TTRaSCy3wHB1gFx5gJxtUW+36TTrBEH7zb6V9tVXX3311VdfffX1d1i9EYZxHF/ToWcAhOn4M1GrQghc12VgYCCDZOvr61y8eDGLqQzDMIt4HB0dZXx8HMuyuHjxIlNTU0xOThLHMc1mM4Mdruuyb98+RkZGeO655zh//jzz8/NYlkW73cayLE6fPk29Xuf666+nVqvx1FNPsbi4mLkcm81mdgytVovFxUWiKGJhYYETJ04wNjbG0NAQR44cIZfL4TgOL774Ijt27Mig4Pe//33Gx8fxfZ/nnnuOmZkZoiji/PnzjI6OZgDJdV2OHj2K7/u84x3vQGvNsWPHWFxc5Pbbb88iOA0MsiyLfD5PEAS89tprvPDCCwwODhKGIdVqlbe//e34vs8zzzzD4uIiSilKpRLve9/7GBwczFxtBu4ZsBMEAS+88AIvv/wyURQxMTHBbbfdRq1WyzoVT506xYkTJ6hUKrTbbfL5PPv370cpxfHjx7NOwEqlwq233ppFmR46dIjLly/T6XTYunUr27ZtIwxDTp48SbVaZX19Ha0127dvZ3R0NIsJjaIoO27bttFa8+qrr7KyssLAwADbt29nZGSEQqFALpej1Wpx+PBhoihi69atDAwMUK/XSZKE++67j6mpKXzfx3Ec2u02V69exbIspqenMygmpWRpaSnrGS0Wi5w7d47FxUXe8pa3ZC5VE036w9Qb3dpsNimXyziOk8HIUqmUOfAMhDZux14gaKC753mZu9G4M4EscrXRaOD7PrVajUqlgu/7ABQKBZaXlxFCUC6XKRaLmGhiAwiXlpbI5XIMDg7ieR5LS0usr68zODiIbduMjo7S6XQyl+DCwgLr6+tcf/31tNttXnnlFc6ePcuWLVvYu3cvlmXx3HPPsXHjxmwxwTPPPMOGDRvYvHkzzz77LJcuXcqu9RtvvJG3ve1tFAoFrly5wh/+4R9y+vTp7L3CzL12u5393wDVHzX2CwsL/NEf/RHPPPMMSZJw9erVrPPV7PN9993H5OQkf/Znf8ZTTz3FzTffzPz8PPfddx+7d+9mYWGB//Jf/gulUimDoJcvX6bdblOr1bII3L766quvvvrqq683Dx9FGsOoVZjFZ8SxD0oRhimE1Gh0HFAUio5OkLaF60iKMiTUcdpRJzRRksIiIWXqLtOCldUmUkV8+6Uz5EsV3DjmumKegcER6LTQKmQ9iHj48jLfQNN2c6i0pQ6k7LqtBCcCi2evDnLX1DoTjs/xOI+fQMuLuFq/wsK50zRWr9LpeBRLJaqjG5jO57n59nuwcgWuLi4h44DpmbsJtEXYaXF5aZWThw9xy403MDwzx8VLl1hvtbl8/iRaJVmfnBCCfLlMc3yC/8PN891mg3+gBVu0wtZx2j0oFEIKBhyHdw5U+FajyVG/w7xKYeKzxTLNJELotHBcSpl6tbquLqEFGoWUxk/pAApLSlzLwpUWrh9xg9bcP1bjwPgwY5Uc0vcQXpCCxyhGJwpUQqwUkdJEcUykBDEJWsH1rkUF+Pr5YyyMTrF9fJL82jKWAHISlZNEFrTWGyy1W6x6bVScUBWSbbbDeK5EzrZYk5pLUnJuuMLETTfwvVMneOX8RS6trHN1rQG2RaFQYLVZx7UkEwN5PC154MF3M1CUXDx/gbXVNVrNBpcXligNj1DMFzjx6hFklLBvegM1CzrLddwooioFQ5ZFLZdng5TUcjnwfVShwNWhATpz0/iupH11nbYsIUuKcrdbFNWNhVFJ2lcoXLBtlE6y1M60Q1F33YWCYjFHqd5gYHQXnHgNWRmhWCxz6twlri5cxilX2To9gk5aKDTaLTC6cQuO4+BHMbW8i+U6RJZFM0nwNAygyFuCzRrW6qu0Ns1x/sQxNm/axIVWi9NJyAEhkRKQpm9RgBTEpF2XWc9heoWQef5EivqESPtFZdeRiRAIrcGykMq4TRVhkhBoWEsUMZoAgRKCfM6lm7mK1mnfpI4TkKLrVlTpr83qRxEbAtr1Hqb+Q9UFldrEoNLtyVEmgjV9fBRFxKnxF8fJ4eSKDLolkrBNnMTdI0yvF1cKQpkeT6AUCQnScbPFDmSPzvyVmLOLACV02o8pJM21FWI9zLbqABaaZqtJ0GqisxLLvvoC4H8HHgb6ebx99dVXX3319fdcvQ6vXuebUgrf9zN3kHF+9bqkDAhLkoR6vc7g4CCbN2/G933W1tau6cIzwAvI4I1x0plIVdNDJ4RgeHiYixcvXhOz6Xke58+fp1gsMjo6im3blEolms1mBlDNcTQaDcrlMlJKVldXOXnyJCMjI9x4440ZxDl58iTT09PXgBEDsfL5POfOnWN5eZn777+fzZs3c+nSJXK5HOfPn8/A5eTkJF/5yle4/vrrGR4e5sKFC1SrVYrFYuY6M88rhKDdbiOEYGJigp/8yZ+kVCpx9OhRDh06RLvd5vjx4ywvL/Oe97yHfD7P0aNHs+hQSGNwjdvMjO25c+c4cuQI+/fvZ3h4mKeffppTp05x/fXX8+qrrzI4OHiNKyyXyzEwMEC1WkVrzfr6Ouvr63iex/PPP0+tVuPgwYO89tprfO1rX2PTpk0AfPWrX8X3fYaHh/m93/s9pqamAKhWq9RqNUZHRzNIZgCclJJCocC3v/1tHnvsMSB1PE5MTPDhD3+YIAhYWFjgs5/9LK1WC601t956Kx/60Ic4fPgwjz76KPl8nlwux6ZNm3jttdd4V3wXxAABAABJREFU5JFHOHHiBL7v8853vjMbq8cff5yvfOUrQHqfddNNN/HUU0+xurrKuXPnGBwc5B/8g3/A3Nzcj+wdFELw7LPP8uijj+L7PpVKhR//8R9n586dPProo7RarQyC33zzzdx9993XRKD2xqAaKP+5z30uizG+//77ee9738uVK1f4/Oc/z7e+9S0sy2J2dpabb76Z++67DyEETzzxBN/85jdZXFxkbm6OD37wgxnoFELw8ssv88QTT7Bnzx7uvfdevvKVr/C5z32OtbU18vk8GzZs4J//839OoVDgU5/6FFevXs0g4D/7Z/+MJ598kldeeQXXdfna177Gj/3Yj/H2t7+dr3zlK+zbt48HH3yQJEn4zGc+w2233cb4+DiPPvpoB
h2np6d5+OGHGRwc5ODBg3zhC1+g0WjwkY98hNXVVR5//PHsPcJc13+RDJQ0ix3M3/Fs26ZYLHLs2DGeeuopjh07Rq1Wo9lsMjw8TLvdxrZtNm7cSJIktNvtDNSfO3eOzZs3Mz8/T6VS+XPhZ1999dVXX3319fdPbxo+xkmMFCK1F2mFhUJaNjExkVJolZAkChnrbg+hQnYjFQUJUmgKtqToppGQaW9e2lMnpGByeoorC1dY8hKk3+RgKc8GaZM4Lsou89ryEp9ZW+UZz8PK53GE7HYaJl34IlEqoa01/79zOV7zJxhbX2CVHJFTIFSChfPnWF28QqVU4Oa33M7mnXv57vOH2D+/EZmvsLa2yomTpxgfcKkODuLmC3hhxKsv/4Ctm+eItE27vk6n1SJoN3CEplTME8URUZQghaBQKtPs+NiWw9ntu/itKOKDAvaeOUYh8pFaIBKNkFB0bO4ZGCDXbPJdv81GlcZHPickdVuDlbrUbJl23CkpkFJ0D7nb4ycEUljYQuIkMbOBz8/NznDvpgnyfhPaTUS73YWOETqMIE5QUUKcaKIkIYwVcZIQk4ZoKg1KC+aExXsLeZ64epFnSwNsHhik4HUIfJ9mx6OhNUHoU1WaLZbDUL7IGppXdcKhkRLnogDftamNDnHHnbdw10d/lpnTp+l88vfwjp6gpRS5nEu+UKDZ6SDRDJdy5MbmeOj9P8alsyfpNDsUi2WE7bBpq8d602dsapZOp8XS1QWmd2xiYrDI0mqDM+evcr7ZolopM1yrUKmUqA4MsL66wuTkBNWhQZxEc/HcGXzh4hZcBkol3GrAamOd8wtLjFTyuFbab6q1Rqu4GwWanl9IkRVKIbvnYLQkuXjlIpvmt3Pp0nkmNs1z4vAhXju/zP1330K14BK0JCAhjjhz4lV2HLyNnONQyBfJ5fJ4nsdqFLEWa9rEFJYWGXMdJm2Ll9oeQofoXAmZy/Gy7zMX+1SkhbDTyFPXspDSBRWjIgm2A9JJ4T5gkcblphG0pG5albweJdu9zrWwSCxBJDR+EtPRGk8rrijNGgKv6xbMOVbqFCVBqQSddG8OE4UWEtF1PqYQUSCwwBJo1S2vN4WUOu16TG8we3oYu45HrUErTRDGxF0AbDsOywvnSXID1HJW6lJNdx4pBH6YEMQai4RQp45H6eZRvpeC1u45fD1+NX1dE68bkzosdRzRXF9GDI2Rr1RRgU+r1SBst7rOzr76SqW1XgaW/6b3o6+++uqrr776+puXgYvmD/3G0VipVDh06BCu67K+vs7o6Gjm3tNaE4Zh5ioyji7jdGs2m4RhmC60c11c16VarRJFEeVyOYOCplvOsiziOKZQKFAoFAAy56Tv+9dEjBowWCwWr3GvGfBoWRaWZTE8PMyVK1cygBqGIcPDw1l849jYWNahVywWkVJmUZwmdrNarVIqlfjCF77A4OAgO3fu5ODBg1lfoOM4zM7OUigUuHjxIlJKms0mN998c9bTZ4ArvO4yHRgYYG1tjeeff55Go0GSJKysrGRRp8ZZduDAAfbv30+5XEYpRT6fz7oVe6M/jx49Sr1eZ3Z2Nu26l5KXXnqJrVu3Zl2DGzZsYHZ2loGBAQ4ePJhFhSZJws6dO3nf+96HlJL/+B//I6dPn2bv3r28/PLLTE1N8dBDD5HL5Xj44Yc5efIko6OjuK7LbbfdxvXXX49lWdk5zeVyhGF4TSRqp9Ph61//OlNTU/z8z/88r776Kl/4whc4evQolUoFIQS33norBw8e5IUXXuCP//iP2b9/P3fddReO4/CJT3yCWq2Gbdt8+ctfRkrJr//6r6OU4jd+4zeYnp4ml8vx5JNPcvfdd7N3716OHz9Oo9Fgy5YtbNy4kQceeCDr+vxRrkeAo0eP8vDDDzM8PMwDDzzAc889x5/+6Z8yNTXFkSNHePzxx7n33nuv6Qw1PZtmPpu5ZOJj19bWuOuuuzh+/Di/93u/x/T0NDt27GD37t3Mzc0xMzPDt7/9bb70pS8xNzdHEAR8+tOf5u677+YnfuInuHz5MgsLCzQaDQqFAkopPv/5z6O1ziDxF77wBW644QYeeughvvWtb/HYY4/heR5DQ0MsLCwQhiEf/vCHmZycJEkSDh8+zEc/+lHuvvtu/sN/+A984QtfYP/+/VQqFTqdTtadWqvVsvjWSqXCu971Lj70oQ+RJAmXLl1ieXmZ1dVVXn31VT784Q9zzz33cPLkSY4ePYrjOHQ6nazj0ozNj9LKygqVSoUPfOAD3HnnnSileOGFF/jt3/5tgiCg3W4zPz/PnXfemTk+p6enOXXqVBZZm8vlKBaLVCoV7r33Xu644w7a7TatVos4jrl48SJra2vs2rXrmmuzr7766quvvvr6+6k3DR8taSF02u+mSFBJDCrBkqIL30KkZZGQusPiKCKUHloKZAGKtu7GsqZdc7bddfYJyFk2O/fuYWGkikBz5coi9UtXOZvE/ODCWb7jJzy7ukJTaApDw8RhgG1LpGWTKEGUaOI4SYGCEKwlgj+7ohELTbR4DSUlKkkYHR1laPtubr/tVtzaGC++dBjp18EpUnAtWrkceR2w58Bt5EsVLMthqXWe+ZlJrrv5NkCgdMJKo8GGWpmd2+6m3mjSbNZZWV3lyuUrLC4uMVAuMrFhlHwhzyVp85tBm/fuOcjtRw8x6Lew0Fhds5UrJbcPDGC3mnyr4zOlNddreL4yQBMbLUDoNE5TIhFW9wZQSoSQ2FIgkwS7VeeBgTI/d/1bmCq6sHgR0WqB70EYooMQ4gSStHg9jGPCLnyMEkWsNLHOgjLTeFVLUlGKd5UKHEoCXli6RNFyKGlNFAZUhWSDm2cFxZNxwOG1OstaMzYxxlQph0pscnkXt1BgcWmJxx/5Amv1Ju1mk2K5zM7dQ1w4d56ZyVHCwGNtrY6fSG6/9VZGR4eJA4+33HEbiwsL6e+CkIkpm0TD7NxmbJFQG6zh5CXDIw7DgzU8r83KWhMsiw0TI1TKRdYbdQaHB3GLZRYvLTBQrTA4OECn47O6uk6hkKfebvKtFw4xUMgzPznG/Ow0QiSgnTT6Vsgud++OkHjdKXn+3FmeWTjDe9//AU6+doTcxDz5QpFb921n91QVFXmp606AsCxy+TyCFNjbjkM+lyf0O0zUSuS9Gs0w5mgQEkuH0XYLdfUKA5tmuXD6DFPTM1zqeFyxLYYdB1uAFDKlZ7aFUKkDMQgDtA5QqNQ12+03FEKipSDSCpXEiO6CSaESkDaJjGnW61xcX+O079OWkrqKuYygQ1oyaUlJIWd3DYMCrXQaewyvA8VuZ2lPIGv6M1Q34VVm/ZDK9GiSjm8KBbstjBq0SgiikEinuNN1c8g4xnZs4jgkVEkGEi00UZQQKk0SalTXbRyTdnskcZQeL2SO4tSNCQkqBe8q/YqO8bwO5WIZO5fH8+p0Wk0C3+9eJX39bVNvPwXw77r/7gRywCHgX2qtv/KXeJ57gA8CtwPTgAOcAv4Y+H9rrf03PP5/5Yd0PgohNPAt4P3AbwDvBoaAk8B/0Fp/8k0fbF999dVX
X3319bdSBk4Z+JgkCbVajW3btmWxqVEUMTw8jGVZ1Gq1rDcul8tl8ZzVapUzZ87QaDQQQlAqlbLnC8MwczCaeNRKpZIBShNF2mq1WFpaotFocOXKFcbHxxkZGcmiHg1UNF2RuVyOpaWlDAIZR6PZZ+MCM9GZjUYDKSWjo6McPXo0g0XG5WmO0TgWp6ameOCBB1heXubVV1/l61//egYAzZgNDAywdetWTpw4wcrKCuVymbGxsQxAARlQNXB3eXmZb37zm2it2bNnT+Z4zOVy7NmzJ+tBfPbZZ5mamuKd73xnBm+MM9P06LVaLS5fvozv+3z2s5/NHH1jY2MZCDZQ1jhWy+Uytm1Tr9czV1mSJJRKJarVatY7eenSJZRS/P7v/z5CCOr1OqVSiZWVFaSUDA4OsmHDBpIkIYoiWq1WBl2N600pxfLyMisrK7zrXe9CCMGuXbv42te+xmuvvcYdd9zB+Pg4e/bsYcuWLVSrVb761a/y2muvceONNzI8PEy5XCaXy3HixAlOnjzJzp07WVpayuJqDx06RLVaZXh4OHNBbtq0iTiOefzxxzlx4gT79u3LgKvpNvxhOnHiBFEU8bM/+7Ps3LmTwcFBfuu3fouzZ88yMDDA5s2b+ZVf+RUmJyeBFHp3Op0MBBuYbsZ848aNfPCDH+S+++5jeXmZX/iFX+D5559n3759zM/Pc/z4cU6fPk2n02FlZYWlpSWOHDnCjh07+OAHP8jU1BSrq6vYts3Fixd58skn+ff//t8TRRE//dM/zcTEBI8++ihJkvCxj32MgYEBbrzxRr7//e9noG/Dhg1s3749m0d/8id/QhzHbNu2jSRJeN/73se3vvUtzp49i+u6mXvYODnNmBngF0URzWaTmZkZtNasrKzgeV7mIjYRqXEcZ7HD5t+f54A070VBEGRz07Ks7NqfmJjAsixGR0fZs2cP6+vrNJvNbMGAiYiemppiZGSEP/7jP2ZpaYnLly8zOTnJLbfcwnPPPYdlWezcufO/zxtoX/+nSwhxE/BPSe91R4BV4DDw21rrP/oLtp0A/mfgXcAkUAeeAv6N1vqFH7HNB4BfAA4ARWABeBb437TWz/8FrzcIfLG7r/9PrfW//UseZl999dVXX39DetPwMQo9VJJ24yUqSbvjhEYnEIUaFYfYUpJIkfaudQFNgibWEkd0txMQJgl21xGVAr0USpQLeZTQVCtFgrFBjiYJhxev4CmNEpqCa1OtFWm0U/CWutMkWsVYUiAsizT/NP1QJ7vRlInWjE1Msfu6g9hujkg4+MuL+K11bnvLLYzNbcO2HVZbJ9izYxvTc1tRShOEEYuXL3L9nn3UhkZR3Q/Cly9f5o7b76A2sZF2q4XXaXHs5Ami8Du0vQKtZpPjJ0+wuLRIqVzB83wulIt8zfP5J7k8G2M/pR9JGj9pCcktlQE0km+0m4wpxU5LcqQ0QDPSIBSWFEgslAJpSRzLRqJJOm1G0PzidQd51/wcudYqXLoA7Wbqdgz8zO2YJDFREhNEMWGSEJrIVa2JlCbuWs+0sNCWTYwgjkN8LZixbfIDRY7HClcJatUqz66tcjbxWNOaBOg4NkEUMzY+SrFcotFoEyaCIBFcubrG4upLRLEiUhZOscrmLfOcPXue8sAAO3du53svHOaKF7Ft124CP+DC2VMUS2UGajUGBgeR0iaMQkBz4sRJakMjWLky+VqFxOuwvrqGki5jY6PkHBvLsul4MZs2zVEoDdDwAhqtdXKORFkWx0+cYSCfY2xslGazSRT6RGFAxc2xeeNEaoTr9g6m6b7d1ZdxQhJHBGFAvdHka987wvG1mL0H38KWbTtZXL7K6OQ0C+fP0Gh3GKkW0zhRaWPlyszMb0cIgR9GCGlTLBdptT2mR3JMqxJxAlEQsSwcPDePbYMfKsLVK+S2bkblznAoCBmIFCUhKEiBKyxsIbBEuhrWsew0ArULSOMkxm83iZwciUrhZByHCNHttdRpB2Mr8jmzssoV36OuFEtK8YMwIejedAshEEgKuXwXMKZjo1FpxKpSCJ0CRNWNfH09aPV1N6Mi7TIVWqVQVKdgv9f9qLsxqGhNECoSrdFY2K7L4GAVYbs0Vi6TdCOfRbe3UScJtpD4UeqyRIPv+QxWCoStOH2eLsjUWgHpexaqG6mLwOr2ZErLprxhjJxr4y138BoNkij8S8Xc9PU3qjnSG5rDwH8BJoAPAI8JIT6ktf7cX7D9/wPYATwDfBnIA7cB/ytwtxDibVrrv2yxSQ34DhACnycFoT8B/K4QQmmtP/VXOK6++uqrr7766utvuUwfnQGKxtVXKpWoVCoopTKwEgQB09PTmWvR9NS5rsvw8HD2OAPIyuUy4+PjWQdkGIaMj4/jOOnCXtO/1mg0yOVyaK05efJkFtm5b9++DECY6MYtW7ZQr9d56aWXKBQKNBoNNmzYQBAEWWfed77zHRzH4erVqxmInJmZ4eWXX+bw4cOZU3Fubo6BgQGCIODw4cPZ10ajgWVZnD17lnq9nvVOrq+vZ25OA1ILhQJ79uzh8OHDLC4uZt2PSqVpKcZBJqXMYNDa2hqnT5/mvvvu4+DBg5w+fTpzyRWLRd761rdi2zYvvPACTz31FAcOHGBubo4wDDMgZM4BpLGnjuPw0EMPZVDNtu0MtlmWRafTwfM8BgcHs/0KwzDrZOx1KprnFkIwNjbGjTfeiO/75HK5zFlp4lujKMp6Hs1zGGeh6TU0MaTGaWbcpgaURlGElDJ7jd59NPGl5rGdToeFhQWeffZZtNZMT08zMzPDysoKGzduRClFq9XK9jMIgmt6Sw0U/1Gxq0tLSyilGBsby/opC4UCV69eJQgCarUag4ODBEGQ9VqauWuO3fwzkFpKycLCArVajeHhYdbX11lZWeEP/uAPuHjxItPT07ium4H8er1OpVJhcHCQ1dXVLLLXuA0vX76cRQGvra1lblrTZWicySYS2VzTa2truK5LLpfDdd0MFJtzYq5Tc67Mc5muTBP1a94fzKKCUqlEo9Hg1VdfZcuWLdk8NbD86NGjLC8vc+DAgcy5+aPUC4UNrMzlcvi+z/z8PPPz83zqU59ibm6OKIq4/vrrGR0dZWRkBMdxsrn00EMP8eSTT3Lp0iVKpRJzc3PZc2/bto1cLveXfIfs629SQoh/CPwfpKFnjwAngA3AQeAfAT8SPgoh5oCnSaHjN4DPAjOk97bvEkL8uNb6Sz2PF8AngZ8hTQn6E2CJdHHvPcAx4EfCRyHERuBxYB74qNb6D9/UQffVV1999fV/qd40fHRsiRYWSnfjWQSAIFYJYRSjojCNcRQC6VgIEeA6aRF6EClsoQiiNIoVrZHSQkjRhROpmwohcG1JtTbAVT8gCDySWCBEguzGvgRhTBDGGXDQSCwpcJ30w56Q6YdyoTW241AdHOHgbXfjaZdjRw+zY/Ms5WKBcrXGpUsX2Lh1JyJXJApDVtfWGCuXuHj2FIXyAGfPncdKApYaTeonXsV2XLwwYrCYozg4ikqS1AVWKNCqr3PHLW+hEWpePXKEKxfPc3nhClI
uopVi2bZYHx4hcPL8ii3ZFXrIHleaJSS3DAyQoPlaq8VofZ1py+ZMsUw7SZAyvUGwbRvHsiAMCNdXuW5inP/pV3+dHRcvYh3+Pvg+tDvQ9tBhCGFMEkVESYwfxwRxTBgnXeioiBSEaGIp0JaLQpBYEp0r4GtNOwwItSaJIpa9gKO25GKlhJOXnIpDXCd1u6o4wXFsHMdmbb1JPu9SLOTRSPJODo1FFCtW15so4bBjzz6OH32FYrmKtsv43jpeELBt20627dhNs9VgZbXOer3J8NAQEKJUwoaRGi/94AccP/YqkzObaceK++94K6ePfA9XBzRbFo2OTyMIGNAwOTlFsVAk0pJTZ1/D67Rpa00YK6xCkfldW1lZXUVLC7dcZm1llQsr63S++xKye8MgpZ0645IEzw+4vLjK1dUm660W7UgTIdlQK7N27hizt9zFyWOvsHnTLPHpU1xYXGdksIYgAGkReQ3OnL/A/K69OI7EdW2mRkc4Xj/N9IZxNkzWKA7UKJYHyBdKuI5LFEd86akXqZarXD53idGxcU5fuUp5coKiUsS+T9RooYMO+CF2HGMrRU4IclpR0JAXgna7yeDwBqYHB8knEUmUoJIUSnudNmvtJhcadU56HudVGrV6rBOx3nUm2t37SVtKCq6FSpIUMSZJGscMKCFAy7QDUim0EKgkJonTC113HZK2baW9mpgeSLrvBN1I1rQOFnR6oxnFCVJIbEviSMnC5fMMTsyRxFG3+7W7b5ZEaYUSkkjRNVBqkiQkkWUcSxLFcbpGwcSuCoFtSRJUCpy78a4CgZ0rMjAyQk5KlttN/EYjXXXc73z82647SZ2F/3fzAyHE/04KJD8hhHhMa934c7b/R8AZ/QbKLIT4V6QrPd8P/EUA02g/8DvALxpgKYT4/wIvk0LOvxA+CiF+6CpSYIdSim9+85t/yV3pC6DZbAL0x+2voP6YvTn1x+2vrr/OmJlt++rrb1rGrRXHMVLKzCFnOgaN487IgMMgCLKYTQOqjKPOdEYCmVPSuATz+XwGg0ZGRjJQMjw8zIEDBzIoopRiaGgIpRQbN27M4Egul+PgwYMZPDLdjgAjIyPs2rWLS5cuMTIywsjISAaaNm3aRLvd5urVqxSLRebm5pifn6dUKrFv3z4WFhYy1+TU1FS6+NL3OXnyJGfPniVJEjZt2kStVuPixYtUq1U8zyOfzzM7O5tFo27cuDFz/tm2fU20rAF7uVyOmZkZfvCDH7CyssLy8jKtVgvf97ly5QqLi4vMz8/j+z5jY2OUSqXsuUz0ba+rctOmTTz11FOcO3eOWq1Go9EgjmM2btwIkPUUGvfc6dOn8X0/Gx8zRkEQ4HkepVKJfD7P+Pg4q6urVCoVNm3axOLiInEcZy7I3ihY03XoeV52bgw4tG2b0dFRXnnlFa677joWFha4cOEC1113XRbda0DukSNHuHjxIvfee2/mZjNztFKpMD09zf79+3nve99Lq9XK4PmTTz7JwsICSimq1WoWtWkAVrFYJAiCa86NgVEGstm2zfj4OEEQEIYhIyMj2bk389hsb44tjuPsGjHHauI/zfmSUjI0NMTly5dZXV1l//79nDx5kpMnT/KRj3yEd7zjHbzyyiucPn2aOI4pl8ucPn2adrudzTPjLt24cSN33XUXTz75JI888gg/9VM/Ra1W4+zZs5w4cYKtW7eysLCA53n4vk8Yhtn4WpaVLRRotVocOXKEW2+9leeeey5zGWutefXVVzl37hwXLlzg5MmT7Nq1KzsPnU4n62r1fR/P85iYmOD666/nM5/5DBcvXuTMmTNZpHGSJLz88su89NJLzMzMMDs7+yPhoxCC/fv3UywWs/eYoaEh3ve+92XRre9+97s5cOAAKysrbNiwgenp6ax71YBlpRSjo6O8//3vx/d9SqUSjuNw7tw5JiYmGB0dBcjg/Y8C0X39zUoIsQv4LaAB3KG1PvKG30//BU/xCVLw+D9rrf9Nz3a/BXwb+JQQYlZr3er+6h+SgsfvA/dpres921ik0PNH7et+4DGgBDygtf7aX/IYf+R9819m+7766uva+7D+/ey1+rsyHv9n3ze/afiYxGEKGbrwUKG7nY0KIQVxHKVGRjRSg0o0SZzCxlYs0j/qC4EkjVFUgOy6nFKwZqFJYxPLpQJetYRuaLxOJ/U0CYdWrNHNDolKHVSJ0ggRAzZBnMbC2JYEDQOlEntuvJViaQBZGsRbWqLiSrbt2MHAyBhtP8Z1c1y4coVWo856fR3tt2laZaJ4kZmBCkEYUixXaDRbFAoJthMjgOlNcwS+Txj6REHA2vo6IwMVtuzYS7PtkXNdiqUCZ06fpF5fRytNnESEVxdYsmwulgr8L6Uc11kaO9FI0j5HS0huG6jia3jc6xCuLjGZy3HJdgi6HYMumqSxTrK+yoPDw/zyLXcwcfw44sxJaLZAKfATtB+hwogwigjiBD+JCGJFkCSECURIQqWIBSR2Dp0roF2X2LIJlaKjEiINbTfPyTDk+2HAsSAidm02FopcvLqMJr0hovshM+/a5FyH+c3TWNJiZW0dP4ywLcFIfjiNollbZ3bzPNVKnqGRIaqDI+TzLmfOn0NpaLaavPjCC+zatZNjJ86ysrzExo3TzG/ZhI47jI4O89prx1lZXWV0bIoLl67wwnNPEdbXcJ0cYyMlNghoBwFnz1yiOtDGkjZnLl8ljCOQglyuQK1QYEupRM51GBsdQYj0Bn91tc5is4WwbMo5l0BHKBUihMCSaWTq4GCV4ZEhRmsVXj11kRdOXebdtx9g60Seq4sLjIzPcOriApXhMRZWl/DIYYkWQtpIaWEJC1sFOEJ1YZqPbWm23nQD9975FqTWqDhBxzEqjIh9n+W1NS4swdmL55m/93YWFhZJBsrcdtN+VBQRRwlhFKVgOUrw/RAviNKVrM0W9fU1Guur2PkiH/zQjzEgE5QfoH0f1fHItdrIeh3VamP7HsUr5zj5/KskeQcnTON6DSSUliTvpD2YSqsM1qkucMwckV2Ho1aaOI7S9w4hU5irBUJrEg1CdG+WtEh7XI09UqQQM4kjvDBCkf7xxrElbd9DJxFRGAICSwgUGkdK4iTBdgVhkka/5l0HYeVwnQIqnxDHzSwMVuvUeZ0ojZASqRVapvuuhSBXGaA8OISII9qNFn6rcc3q477+1qoO/MveH2itnxdCfJr0Buh9/DnQT2t9+kf86j+Swsd38JeHjx3g13udklrro0KI7wB3CiHKPTdoffXVV1999dXX/+AysYi9wPCNMZ1xHGdAoNc1ZpxdrutmvY0motG46HoddQbyGGDjOE72WVVrTalUwnXdDOr4vp/1RRoAdfXqVS5evEg+n88AyObNmzNIs3nzZubm5iiVShlQMw6s3bt3Mz4+TqVSIZ/PZzDm4MGDtFqtrKcvDENc12V6eppKpZI5Fk0npXkNKSXtdpt8Pk8+n2d+fp7R0VGCIKDT6WSgzkTCGvBYrVa58cYbOXfuHLZts337diYmJtiwYQOu63Lp0iWOHDlCPp/n1ltvZXZ2Fkgdgr
2xsMYVtnv3bjzPyzo6bdtmenoaIQRzc3NMTk5SKBSYn5/niSee4MKFC2zatIlbb701izQ1MM30X0opOXjwIF/96lf57Gc/m5373bt3s2nTJgqFAmEY0ul0srmitcbzPAYGBrLYWcdxKJfLzM/P84Mf/IBWq5XB0f3799NsNjl37hyf//zneeqppzh9+jQbN25kfn6edrt9Tcfo6OgoO3fu5PDhwziOw/T0NIcOHaJWq7FlyxYOHz7Mpz/9aTZv3kyr1UIplbkWn3zySSqVCps3b6ZWq12TDNMLoXbt2sWf/umf8tnPfpZ77rmHZ599ljiOGR8fT9OHogjP865x8Bk4b1yOxiXYaDSo1+u88MILKKX4wQ9+QBzHHDhwIIOUJ06cQGvNs88+y8mTJ5FSctNNN/Hkk0/yR3/0R2zZsoULFy4wNzdHp9NhYmKCj3zkI0RRxJe//GVKpRI33XQTjz76KP/0n/5TduzYwfLyMuvr69k1ZmJ0IQV6c3Nz7N27l09+8pM89thjLC8vc/vtt7N161Z27drF7/7u7/Jv/+2/JZ/PMzw8TLFYJAxDpqens95Uy7LYtWsXg4OD1Go1PvzhD/P5z38ez/PYt28ft956K6OjowgheOtb38q2bdsYHBzMnJU/TIVCgQceeIBqtZrN98HBQW6//fasj7VUKrF37148z8ucvUC2UMFE6prrw7g7pZRUKhV27dpFrVb7ocCxDyH/1unjpH8T/ldvBI8AWuuLP2rDLph8O3Ae+P+8YbtnhBCfBT4M/Bjw+91f/Wr36y/2gsfuNglw5Ue81n3AF4AmcKfW+qW/+ND66quvvvr626I3DR91otJeNkk33rAboyhl6lqSaWShtC1UF0hZUoO0aCtFOxTEQYSrIyINThwRd+MOc04eLEG+VEGFHfKOYKhWRWmLTqiIwgBLGHdUGqUqJeRthyRRKJWQqJgkVvjtkA0jw9z9zndzetkjajSpDvrMbNxIjoBCpcrS8grnzp3FsmBh4TKdVockjnBsizCM8YOY+uGjJCpJuxADjyCMKBRLaUyklERhiNYJUqT9lTNzm/F8H0ukH9SGh4dZX7qCjabeTMu4wygiimJeCAP+p5bDvxmqcJNrgVJI1wUtcJDsHKjw4lANu97m6uoiW8enWXVyhI11GvU65Tjiwzce5IPv+wlKp84gzp2FMAKrgCaGXJmo7eHFAV6sCZQgUIIQQegUSGo1PL9DomLI59GFMpFlEwBeGOAFAYsaXokTXlUJi0lCRykSSzI5VGV8fAMXF5aQpKsvtUojNwUwOTbKDdftxQ98Fq4ucWXhKjqJkDqh1ezQ8n1eOXyYc+fOsnPfAW69862cOn6Yp5/+NqVigSgM+LMv/xk7dm5n9569fPe7z3HpylUGymV279nNpUtXOXfxMpZlM1TJsWXbDl548RU2jVYpug6eBjeXI4lCinmXYjGPVRlhPVigUKzQbsTU6006no9lCVx3gMpABVDU1+sMDlQYHqoyVSmza24WaYnUgSe6UAzVvZkS+L7Hy8fOUSvnmZ/eQK3ocuLkYcb238WRlw+xf+9erl48x7GTZ9g+WcFyXGS5ytz2EjJco+WFKARWropOTlMsFymWCpmrGLrNgkpxx7vu5pOffJzNmzaTJBYbNgxx/OIlhiaGmZ/fws75jZAk6CgiDgKSICKOIiI/IAojojBdwSpzDkPTQyAg57oZKBxAMKo1mzXEccChrz7CmYUFZkJFy1dE0mFwchrLslDAhkoBrRKSxPyLEYh0gUI3XhXoicvpWh/RaJkuXBAi7V1ME19FtrABUhBM1yWpEo0fppHNecch5+aZ2DiHlXNYjUIS/brzEa3oSJuCmyNWPkqn+1DJO+TyBYLIw7JlBj6VTt3HSfd7mXVxdN/X8iUGKiWCTguvuU7kdbpwtA8f/5brB1rrH7aU55uk8PEAfw58FEKUgP8bKaTcBlQwF2Wqqb/Cvpz4ES7LC92vg8CfCx+11jf8iP18QUp5/d133/1X2J2+zCq1/rj95dUfszen/rj91fXXGbNKpfLfd2f66utNygBCy7IIgoAgCK4BXG+MbwzDMHM3GhnXlwGMvc9pHqe1ziIwDegyINPARuMsM9AAyMCBAR5KKTzPo9Vq4bou+/btY+PGjdnr2baddqd3QaeBVwZ4Tk9PZ9GMxu2Wz+ezGEYDHo27bWxsLIuT9X2fJEmYmprKHus4Dk8//TRLS0u87W1vuyYu1OyviTc1MKhcLrN9+3Y2b96cOeyMOzGfz3P//fdTKBSyKFzbtq95zt7xg7TT77777kNrnUXYDg4OorXmHe94B4ODg3Q6Hfbv38/o6CjLy8tMTEwwPj7O/fffz+DgIL7vZ69t4PPOnTsZGRnhypUrLC8vMzQ0xPz8PFEU8e53v5vZ2dnM5WdZFr7vZ9DN9FwKISgUCrz97W+nUqlw6tQpKpUKd955J/v27ePEiRPcd999WT/l/Pw8t956K3NzcxmANuBJSsm73vUubNvm2WefxfM85ufn2bdvH7fddhue5/Hcc8/x0ksvkcvl+PEf/3FmZmY4ceIEn/zkJ5mamuJnf/ZnGR4ezmC2mY+QwtW5uTk++tGP8sUvfpGnn36aYrHIj//4j7Np0yZGR0cpFovZvDXH3RvvaoA2pK7fHTt28Mwzz/Bnf/ZnjI2N8Z73vIctW7ZgWRa33XYb3/zmN7Nuz4MHD1IoFLj++uv55V/+ZR5//HG+9rWvMTo6yvbt29m1axevvPIK9Xqd97///TiOw/nz57n99tv5xV/8RV566aXMjfrYY4+xtLTE9PQ0d955J/l8PrsOBgcH+fjHP843vvENGo0GO3bsYPfu/z97/x1t2XWYd4K/vU+6Obwc61XOAYVMgIikCJIgKVlMkmVLpCVT6pFD2+202tMzrZk1a43WeI2tGc9Y3R7LGkk9soYSKYoiCUigGACCREYBBRRQuepVvRxuvifuPX+cuw9fUaR6BHXbEnW/td56Ve+mE/Y56+7zO9/3HSOKIj74wQ9y8OBBtre3s31QKBQYGRnhQx/6UHaMep6X7bdOp8PevXv5zGc+Q7/fz3pAzXFdr9cZHR3Njo0fpCRJmJqaAqDb7WYQ3ABUA7iNi9Ocb3Z2SZrPMEDcgEghBNVqlVqtlu37of7S697B76+8g9eeHvx+WmsdfZ/H/4QUPp4GfmMwnz4OrGqtX/lzfM7HSCHnBeADWuvrf56F/LPmzcDtf573Gmqov67aOQ8bzmdv1Q/L9vhfe978juGjZaXAUUiBsDQCifFCpf2PNui0Qy6xLJDg2Hbac2cJVBDR6gaooJveuWfZg65HBkBDIIUmjCK2ejGbzTbNZieNV9WSJFFImXY6aq1QCpLQJ4njtG8gCkBp9u/ezf2PfQSVr9FvLHLi9hNMzOymFSYoLTlz5gz9XpsoSoiThK7fGMTaSJIwIk7SSBCtFNKyUCq901RaEhhM5jB3rEaDiZjCDyykEEhpkfdyTExOoeOIG4tXQUC73SGOE9AQK80bQcB/ux7zryZqnHYtEhWilMXNRHE+5/Fzx
05x/cyr/MbKEns3V7j7wDG2d+/n+ttvsGvfPh599D3k37oAW5sQJWhhg51+wYy8Ar3qBF1s+t02fhIT2jZRotCVGoEWJBIS20HZDoFj04tjmnHE1TjibByyqBJ8KdL4yijdT47jcOzwAZY3m+n2kTLrxhRaA5KJ8VF6vQ69Xp8kCrBJcPIujVaTsYlJZib6NDs9Nhstul2f8+feYHlpkanpOSYmNe3GNotXr3Dp8mXuvPseDhw6RLfVgiSg09yi29zktqMHCYOIXmubTrNFLxS8cXMbJ/GZH6tSL5dIkpipsTqVyV0sN0Puve9BLr91hrXVdTQOsR+xsrzO5sY29ZERdu+aAdXAdnPkCkVanR69XgdbaqIoIVGaOFZonQzGi02j47O43WVytIZE48cJY3lYW1lm1669LK1vUq6NcmNlk/nJEQrFAtvNTbZbTebGSrhujqILs7M1rp23sB0H4gg1gJwpGIQk8Ok3tykXNH5+jMtvvMmeo3u5+tzzfOfMOa4vr7B73zzVUh7I4w3uMExdiAo9uFCgVOpC9JsbnL90nZOnb0MM4kMNvhNSYAuHiV17eP/7BMpywHIQtoNwXBASIW1yIiBRcepiTmK0AqWS9PMGxwcDuKiSBBXHaGFhbn60jGNSJekZRAuk3MH0ZOpg1klCHIWEcYJG49gOQkf0Om1Kbi51ZGsTkgqhglx9lDjqEw0ijeMoot3pUBnV5PNF+v0eURyh1QBukk6q5ACYCiRaDG51KJZxbZteY5tep0kSR+k6Djsf/7Jr9Qf8fWXwu/qDXiiEcEgnT3cDZ0kdjuuAmWT970l7G///VeMH/H1QSor1Ax4faqihhhpqqKH+CsrEMu4EhpBe/HddNwMpzWYT13WzmE3jJjQxrKYPzlzwN8DAOCd3dsCZyFADDrK6joETMkmSDIBFUfqVxizH+Pg4k5OTmUPS8zw6nU4W22lAqQGnJq41jmO63S5CCCqVSgYxzLIZIGk+p9/vZ05D835mWdvtdva+lmVRKpW477772Lt3bwZCjOPRdB6abWW2k4mizefzWZSnWQ7zvgaudDqdbDubmyV3Qi8DAA14tW0b3/eRUjI7O5v1+gVBwK5du5idnc1g6szMDJZlZXBzfn6ebrdLkiS0Wi3K5TIzMzNsb29noLbVamVAq9VqZctl23bmejRwt9froZRidHSUxx9/PHOKep5Hu91mcnKST33qU9m4MW7OTqfD6uoqV65cIY5jcrkcnU6HarXKj/7oj/LYY49h2zaFQiGtuwkCHn30UR588EGazSYjIyMEQUChUOAf/aN/lPV1TkxM3BLlasaeiQhVSnH//fdz7NixLJq2WCzi+z4/8RM/kTkwTYRrFEWEYUg+n8+6Ts3xsHfvXv7xP/7HlEqlWz7fODl/8Rd/kR//8R8njmPGx8dvccg+9thj3HPPPdk4MT2MH/jAB+h2uxSLRT71qU/d8vkf+chH6HQ6fOUrX6FWqzE5OYnneTz00EPZ8aSUyrosP/rRj2bjOgxD1tfXqdVq7Nq1iwMHDmRxwcbJOzY2lgFh42w1MmOlUChkx5XruiRJQhAElEqlP9VV+r2yLCs7Rs16m2PQdJPuPF/s7Cg1+9J1Xbrdbnb+AW6BjVLK7Fj/QQ7Mof7SqDb4ffMdvNbMn7+vW3HH32vf8/vP+1nvAhzgOb57s+5QQw011FB/hfSO4aPrONhSIAedavDdknKhNUInOBbElgQhcS2LYj6HsFJAYIkUGoaJRqAp5nOkjW8aJQTbm2tEgY/f92l3egRBBJq0580GsLAtG0Q6ofMDn363k36pR2NJi12zM5y8/73EdoG422GsWmBmfoF+LLh+5RLtThs7GhTBR8ngi5s/iJNI3VtJkqQGrAE8gfQLlW3ZJFGEEBLXcwnDGHSCFHlsJ/0SmGiVRkgKqFfrqEThBwEqTpBAq9sjDNPJngbeihP+u40W/3q6zj5bcDWKuGE53Ds5h/fGWY6g+an6CK8KwYTrctfMFNbuvYjtTcS3ngVpg5QoDUmsCaMEP4rxkfSkRV9KwmqNXrdLGPSRI5PpuquQWEp6cUw/CtmOHc6FIecCnw2tCOMYSwos4aR7SMPRY7cTxQkLC7t45c0/QaOxbQcEKK2Rg5XqBSHXb6wQBAHtdoeRWp1Spcrq+ipCwNTkFCdvm2Vuz0Fy+SL9MOLa1cvs23eATrfP3PxeLl16G5CoJCDsNXGk5s3XX2b5xg2mxkeYnRzn5KHdLK1t0mh3KNdGWF6+gbBd3L4kcgSeEBSLVTbaPoXqCDkHoiikWh9N4VgSopMYdEwYRizdXCb0fSr1Gq6XR2oFrkuhVMS2UjCukogkConiBD+IWW31iBPF5FiVVrdL0lYESrF+9RwH73qQp5/4fU4dO8L5V57n4s0NKJQRQqdADQdEGhd6aP9u3njpJaSbR4hBFjGp01IICyktZvfs5/DhJX7v2zcpVqpsRZIP/9iHuPL6C4SdbS6+dY7bDy+kXYbSInUPWqkjVQi0lEhbAoJeMwEpkJYGi0EEqkILjcYGLOZO3cHUsZPEUUIcpe7JOIqJY0Wnuc2rL5xncmYSSycolaTbNAszNdBxEJsTJyRxkq6S0IgkIZEydT7q1FGtkvRHCBBYaKWRInUmxnFEpBR5R+J5DjLyiXtNdKWWTfyFTmsnYyTFUon2coNI6cED6WR1dWWFXbt24bQaxEmEEsZoqVFiYN5mJwSV5GpjeI5Fo9PGbzbRySDyyhryor/kmvwBf58a/G7+gMcBfpQUPP661vrTOx8QQkyTwsehhhpqqKGGGmqo7yvjojNAy3EcGo0GS0tLzM3Nkc/naTabXL58md27d9/SE2hAggFnBhZ+vy61nXDTfJ5x7hnoZN53Zw+h6Xk0IMI834APIQSO4xDHMUEQZHGqQOayMk5Nk4JinFFGZnl3ujTNupnHHMfJIKZxfxo32dGjRykWi+lNwVpnfzcA10Ap4wQzcNX8LQzDDB7uhCIGYu6MwDXb12xvAyHjOM6iKIHMyWjW2fzbwECznUwvoFk2KWW2fsZxaGCk2Yb5fD6DogZ87RwDO8eGgV6tVotCoZBFvLbbbeI4plwuZ65Bx3FotVrU63Vu3LjB7/7u77K8vMz+/fuZm5vLYOtOR6uBkq7rpjd57wB51Wo1c5TOzs5my2yWwexbs53N2PJ9n5GRkQx29fv9DP6VSiX6/f4tHYrmeQZkQzqfy+fzWWTw7OwsURRlHZfGpWp6Und2SRpAlsvlsnhb855hGFIoFGi1WliWRT6fp9fr8aUvfYmXXnqJsbExfN/nIx/5CLOzs3S73ez9fd/HcRwKhQL9fp9+v5/tL601IyMjGSw0Y8o8ttPZbG5AMMdvoVDIImlN7Knrutm4Mj2vZjv+IOhnHMvALeeAnePKjEMTb+t5XgZQ+/0+5XI5cyrvvBHAHO+m73OovxJqDH7PAm/9OV9r5s9TP+Dx6e953s7P+vPovwU+CHwaEEKIn9V6GDs11FBDDfVXSe/4W0G5mB98gdQIaaVuR0sQxSqFj9JCiwSlNcK2cJ3U
Aagsl0avRRCE6DikYAlcATnPQzG4yi8kjc1W+oVIJ3huDstxUWow8VIKrSAMA3rdDt1umzgI0Gg8xyaXK1Cr1bj/ve8nFA62UEzPzxGFITfX1rixuEi33QYBSRASRTGB79PttImiAGmlzkqVxGl8pNbf7aK0bFzLQqEQKCzLxrXTO0BdewCQLAvh5VLQqGLCKAZN2i1Yn6DX7aKVQglJq9VGDQCGQvFqFPPfrTX58UoJJW1+bKQO16/SUzG25XCiVOJooYjtWNjdLmKjAb0O2rZROiYKFUEUEkQxfhjg97sESUSooB+F+GGAyuXR1RG67QYqSfAti1bocz2KeCOJuQ70pcCy3fQLtGXheg75Yp1mc4u5uQV+/Cf+Fl/+/H/iyvUlur0+0pLYjkUcxSSJIlIKIW1UrIijmFwuz9XrN1nYs5+xyRmCsI8fRBy97T6O33Yax4Ibb3yHF55/HYSLlBalcgXLsjlw4BCFfJ7tjTUWr1yiubWG39lmcrzOdqNBIZ/H7/fp+QET+SKdZoN+r8Pswj4eft/7KOfg8vmLbDQ7LF56nZ/7zN9FhX1m5ubJF8tEQZBG58YRYa+N32nS0wrLkuRtCPw+ju3xdsPF05V0vOsEqRJE3MdzJY4j2I67JEoxOzvH+Mw0OQtsqcldXaPRbDE7t4vNrk+pUmWz2eLe00c5tyWZGJ+gubnC7NQECocLK13aOsf1G0v0900hrfTLv5ASITU6UbSbTS6uxjz6Yz+K19vmyuVLTIzVadSrhH6PV155nRMHd+NaMnUzCkAnA4g+yDgd0HTH8fBybnZsp/NCix0tHUhL4EoHx3WAfDYh1zrB1l0uXbnO6MQ46HgwAU8nMwxiTtEpzNekk5hYKYQCRHrzQTSY2OskAamRQgIaLQRax4BEoVE6IYpiEJLZWgmvVKJQrOAJiT+4YKJ1Gsnseh4j5RJWHNEJ0gsVKW5NnZFBv0un2yeXLxAmCTr0SUg9k2nYaypLSDSaRNpUR0dx0fQ7LaJuF8tyKeSKRGHwTk+lQ/3n0e1CiPL3iV59ePD7z4p+2T/4/bnv89hDf9EFG2qooYYaaqihfri1M+oU0pjT7e1tbt68yejoaHbRv9lsZh2LURRlYMq4v3a6kUzk6U6ot9O1ZB4z/9/pFjOP73SB9fv9DGSZzzUOwa2tLUqlEoVCIYOTBtbsfC+zHDsjXg042umANPBkZ4SrASFm+Qx8M7DPPNdAOgMKDfAzIAfYMU/RGRACKJVKmSPLwJGdoNGAwZ2QywA749ozDjHzY/aPce7thLbfGw0L3+0sNOu0MxrXONe+97nm8w0g2tnVabaZcbuaTkiz33dCY7MvPM9DKcXMzAwPPfQQWmv27dtHuVzOgJsZL47j0G63M5hoIKaBywau2baddSCa5doJxneul4nHNQAwiiLK5TK9Xo8gCDIHqtk+Zrvv3CZmPYwM9DQQzDiAzWf6vp8dBzshbhiG5HI5gMypWi6Xs7hfA/ur1SqPP/44hw8fxvM8ZmZmmJ+fz8ayWQaz7ObvBuTudEVCChPNupp9tfOGAXPMGGejAaVme5jP2wkNDdD8s3oVzXYwx71Zv52A3dyQYOKXDfSXUnL+/HkmJiaYn5/PxvvOsWmWf2dM61B/qfUd4E7gA/z54aOZP79bCGHr9KLNTj0y+P0ygNa6K4Q4CxwXQpz+c0SvBqTRq/8T8CnAE0L89Pf5vKGGGmqoof6S6h3Dx0LOI1HpF3LN4AuHkKRJjBLpCBw02lYIL4ewbRAaP/AJw4Cg38GT4DkWAo1r22ClX5ixLOIowhICgZ0GKGZdcYper0+rsU2n0yKJwiyKpVSu4BWK2I7LiePH8UbnaN24Rn3PLtximUhpbi5eo91qk0QJjpt+2YviiM3Ndfq9DkolgzhZSej7aJ2QhCGQYA3uKsuNjFCv1qnURnG9PKDJ5/LEQOAPQFaYIIQmTqw0KlaBlxOUK1VKlTp+r0spiXGkRbPTQZBGzKpE8UwQsbrV5n/ctYtwfQOFRgqRBmEmYPshcmObxI/RXp5EiLTPD02ooR8EhLZNH03f7+N328RSQnUEsWuUKAho3LhGIC1aQnLJDzjjd1lWitiSuF6eUrmCm8tT8H3CoEulNoLjeGxtrfO+97+f7c01NtZXuDz4km9bNpaw6CchsUoQWhPFisXlFfpBjT17D7Kwey/jU/PMzs/R7zZAKXp+n1ariS0EF2+scenyFRJhURudYnxiiiAIGB0ZY2x0BEdq3ui8CpaHl89Tr1bw+z1anQ6b2w2uXL9JpD12LezBsV2q9VEsKUlinyQJeP3MK8wfPI7jOKyur1GqlCkUi+l6ba7jByHzp05z5ltPsba+yej4CHGiyOdcWt2YpFok8BXoGD2I9dQUINJoBNc7gljDelLmte0iQsdUHEnLnWP52hWOHT/Jd/7kSQ7u28u1t97g7SsrhE4Fx9aQBLj5HJaEa9duIPIVnvvWdzhSDHAsm15PsdXy2W77rC2t0G406bRCGuevMzE9RSFuc+GbW0gi+q02gR3S7kbUqsXvTkKMM3lQp4hOnYlOrsDkzDyW4w0iRNOIV8nggoYWKBLSfkuRPQ5pTGrs9wYROgmxUgNwP7jxAIlWEUorlLnbOknSqGSd+iMRApId7xlHKCEQQmYAUIgUG6o4dabGg1hmLQRJHCEsG50oEuN8RFCpVBmt5OgFPnEyiFMdoMX0ok1Ec3uTgmuRaIHjuiRBgN5RNykHczetNdJ2KdbrJIFPr90hCSOKpTqj5SJrKz+wi32ovxyqAv874J+aPwgh7gR+ivRuzM//Ga+9Ovj9MPDFHa/fC/zy/8LLOdRQQw011FBD/ZDpey/Im4jFbrfLtWvXGB0dpVwuI6Wk2WzSaDSwLIvp6Wny+fwtEZYGkBlYYECkgSo7ewp3OiHN8w3YMBDDPGcnrDG9iAb+fec732FhYYEjR45ksGInXNzpkjL9iztdhwbuOY6TuaiADN6Z5TEQ0rzeRHSadVlfX6dUKmXOyGazmXXU5HK5Wxxfvu/T6/VugSM7XZ3GgWqcZCZG1UTTQgqQdkZLGlhm3nNnDKh5n51uTgM0dwLgnZBzpzPUyCyL2Q8GKBoXoFlmz/OyZbFtm16vR71ez8CUgUsG9BUKhaxP1DjmPM/jjjvuyJat0+lkkZ4G+O0cU2aZCoVCFhG8c9yVy2W63W4W4QrcAtWMzNgwQMtxnCzGE8icpN/be2oigIHM8bcTTgoh8H2fYrGYPbYTDO6Ej2aZcrkcvV4vcxqa7WSOCfNba82ePXtYWFjI9qMQgk6nk7lSzfIGQUA+n8+AnOkTNePc7B8D+HbCUHMMGvejGf9m/Nq2jeu6GVA3gNQs406Y/f1k1hHSiODnnnuO119/nePHj/PII49k28wcb5cvX2bPnj1MTEwQRRHPPfccc3NzLCws4Hletq927qOh/krp3wG/APx3QogntdZv7nxQCDGntf6+Fzq01jeEEH8M/AjwXwP/asfr7gH+JrDNrfPs/xvwPwL/gxDiR7TWzR2vkcCk1vpPxbhqrSMhxE+
Sgsi/RQogf0J//67JoYYaaqih/pLpHcNH6bjINMEQoXUKFpK0C1EKi0GGItISCG0jEQSBT7sfph0OUciekSK4FludCKUSbMcDIdEoxKDjDa1JVPplq9dt02k16Pe6oMBzbMq1CSr1EQqFIsKyUTphYmycsV0HWF1fY6RWxitWefvCBRqNDaQQg4hUTRxHSClobm+zvb0FOgJFGusY+FhCkfdcpkZrTE1NMTU7x8zcXuoT05TLdZxcLg2WVGlgrGNbJErR7/tsN5usrqzS7/Xo9rppr1wYUamUCCcmCYIeYRxhK0W9UiGMIuKkg0AQJwlXk5j/sLLM3ykUqEhwpINGgoBYa0IVIQcur0ilzrHIFgROnn6ng9/tEJGQeDmi6Xl0rkCkFe1ul8bKEptK81bY51zQZ0srYsB2HcqVEYqlKpadxny6ZRfXdYijkFarievmOHjkOL/1W7/FdqNJnMRpTKaGSKW9mab/MYpjel2fwnSBXK7I3PwuttdvMjE9y/yuvbjKpza7h6XLbzEyMcuugyc53Ax4+htfo9XuImQ62Tqwfy+5nEun2WBkdAy/18HvBFhOHuk4uJ5CaYEcACnbSicVs9NTvHn2LKWiR2uzSRAm3H3PfQRRwtrqMq7r0es3KCCYmp2n29oi51r0el2QFo7robUgjhOUXcwmRGiF7bgIIYkTnXYbAlGc5oQq6eDHgLCJQkhwUUh6fsTo2BjtQOG4eVYbPSb27sIhRsdLJFEECEbGxtGJort8g6e/cZ6w1WV1u8Fmu03khxTihLq0cZOY1pVFnIV5ZNyns9kgxkLbArtY4HON/8Dc4QOMzc8wOj1CuVLA89K7Ym3HQVoWQlrYXo6x8hhaWimc0wBqcGwrUAqpY1AxKIVWMVqbeBhJv9sjCGOiMMQmIY6jLL5UA0qBTszhPOh+1GQTLSktrME5BAVoUGLQq6pNcGucdqsmijhWKYjXICybsLOJ1ILEypMoRdrGCtKyaQaKqNcnURqtBTINUoXBecvvtXFkGUsIpJPDiWOibAKVRgynLkiN9HLkRkbRYUDQaSESjWUpQg2e8+ep/Bvqv4C+CfzcYCL0LdIYmE8CEvh5rXXrz3jtF4GLwD8WQpwgvctzF/Ah4EuDfw811FBDDTXUUEN9XxkX3c5oxYmJCVZXV1lYWMiAT7/fp9Vq4XkejUYD27ZZWFhgaWmJjY2NDBjt2rUri2rN5/MUi0XiOKbT6WRuJOPGq1Qq+L5Pu91mfHw8c+UZiGiiKw2ECYKASqWSuZ/iOGZzc5NarXaLo9G4ujzPyyJbjbvOuLWM4ysIggwmGdhiYKeBUCY60gBHAwFN36Lv+5lbzGwH0+VoXGUGSjqOQ7FYzJbPrKvZZltbW7TbbWZnZxkdHaXb7f4px6KJGHVd9xZ3nwFg3xtda+IyDVh0HCeDV2aZdvZymvUy22Bn76UBdSbu04BEz/MyYAffdbEqpSgWi3S73QyI7uzt2wlRDcQ0r3NdN9umBkoZsGqAqgHHYRje4rbzPC/r7TQgzUC1nQ4+A4WNWxP4U4DMLM9OZ6LZloVCIQPJO4+hnfverJOBiAYI5/P5zNW7MyrUfIYZQzv3ldluZmxCCrdN5K9x3GqtM1hp3JhmX5nxbtbRrLfZdgbYmbFZKBQyF+3ObWCWxWwjsx3NeDLH0s7+Vtu2s7FlxpsZD+aYVkpx4cIFnn/+eU6ePJlt450wUwjBV7/6VR577DGmpqayayFBEPD2228TRRELCwvZ2Nvc3OSP/uiPuHLlCh/4wAe49957M1BqQO/3wsqh/stKa/2mEOJ/A/wq8IoQ4gvABWAUuAto8V0H4/fTL5DOrf8vQoj3AS8C88DHSS/JfPp7kof+X8ADwN8GLgw+bx2YAR4Ffg3473/AsiZCiJ8BfODngM8JIT6mtR5GUA011FBD/SXXO+98tB2QMo0PVQm2FigHIIUxcaKxROqssiyBJQWR3ycKEsJ+j9G8zcRoCcu2CFR7ELeaUonUESXQAsLAp7m9Tbe1TRQFaA22ZTE2PkZtbArHyyNk6meSQD6fY2x2N0GUMFqroVXM2+cv0N5aHbw3WJak2+uRxAFoRbfTotdt0mu38ByLnOswWa8wPTHOnoVdnLrjbuYOHCPQkpWVNW4uXufsG2/Q63TSu9Jcj2q9xtTkJGMTU1RqdUZGx9i9sECv32dzs8ny8k02NtbodTrpl+kkjZLcXFsh6HdRgOXYREGIJSWh1nzO95keGeVxBPk4xNYJtpJYloI4Br9PYoWEGiINkS0J3YhQSuJSGeW5RJZNEEX0/R5b/R5XWw3e8HtcDHw6aYkm0rIoFstUqiO4bo4UuigEoFQKV6UQJHHC9NwCl64vc+XyRZIkxhqAR41Onzv4wh3F6ZfKRrtDGPTJuxAHHYqlEqVKjd52RLx8nYbfY+nmKi995zt8+GM/yT1338PrL7/IjeVlHNuhVKnh2hYqSYjiCGFZ+AMnW0/ZHL/jPs48/zTbzRaWTNfj0oXzRElCwbW5uHiZUx/5BN7xOzlw6n7uuP1Oeu0mN2+u0ul2GRsfIwwbRFGMbWvWV25iOR4lO0+9XKHZauPHDuXxMXQcYDkOSgqSyE8ns65FHEviRBEF/fQLtd/Goo6WNiCQUlOpj3D1yiX27z/My889w565ORYvXqTfbKCqdUJlcX1tjX1HYdp1qGlFUqmxtnSTt65ep+PHjFsOc45NUSlcKyYKA5IY2G4idIBsdzOIF21uc+3GEjdePItVLlIcH2Fq3xzz+yYplj2cfA7Xkdi2g+N5WG4e28thOTksx0FYLpbtYjku0rIHdkkLsJF2Hkh7ULVOWF7eYKvVpt/rk7dBJyYCSqO0aX7UmWNSDIikVqn7UaOxVJL+XTM4r2i0SGNSkyRBCokegE0/CIlihbYccvkC+UKOfggqHFzYEen5wBLQabfx/T6JTuNdtUiho+lvTeKIfj+dzOYdD8/LE/e6O85BAgbHgl2sUK+UiZsrBJ02liWROo0VLhSK7/RUOtR/Hl0hnRz9nwe/PdIImP+D1vrJP+uFg4iYRwevfZh0wnQZ+D8C/1dSiDnUUEMNNdRQQw31fbXTmQhkMMZc4Pc8DyEEtVqNPXv2MDY2xrVr12g2m2xvb7O5uUmhUODQoUOsrq6ytrZGtVqlXq/f0rMohODZZ5+lXq9n0MnEmpqIx9tvvx3btrlw4QIbGxsopRgZGeH48eOMj4/Tbre5fPkyuVwuc1kZwGkcao1Gg3K5zOTkJBsbGzQaDQBqtRqu67K2tobneVQqFdrtdgYhDYSoVqtZ76XpdTSxmbZtZzGPBkAaF1gQBFlspAFkzWbzFtdao9Gg1+tlcbamq7BQKHDlyhW+8IUvEMcx1WqV2267jVKpRLFYzABaEASZc09rnd40vSMG1oAd40QzUZgbGxvZdu92u5mD7Nq1a7zwwgs88MADjIyM3BIpWiqVMpi0s/PQAMher0ej0WB6ejrbx5Zl0Wg0qFQqWdRrLpfD931qtVoGA8Mw5Pz58ywsLDAzM0O/3we+271pYFO326VQKGTAyU
C3nfG85nWe51EsFmk0GhmkM9sFUihlug7NfjMuQQOmd2oncDRuOwN+TWSsGRfFYhGtdba+QRBkzszv3tD63Y5CM96MO9b8f2ccrWVZ9Hq9bHyZZQLo9/tZTG0QBLRarcxla45hgG63S6VSyXo7zfsakGjGgXGrmthg05VptuHGxga5XC4bizsdrv1+n9dee41+v8973vOeDMwayGuAr4GsO7eB2Re5XC4bywakvvXWW9i2zdzcHEePHs32u/l97tw5fud3foelpSVeffVVPvShD7G1tcVTTz3FM888w9WrVzl9+jT/8B/+Q8rlMk899RSe53Hw4EE+97nPMTY2xsGDB7OxtfM8NdRfHmmt//0gDvWfkM51fwzYAF4jhYV/1msvD9KE/rekvYwPkwLLJ4D/k9b6he95vgZ+WgjxJPAZ4BOk8/Jl4GngD/5nPk8JIT5DCiD/HvAHQogf01r3/xyrPNRQQw011H9mvXP46LkpzFMSgYET4rsX7XVC5PuoRKO0IJY2vb5Pr+tj64S9U+PkcjZojWenXiUtBJFKL+Z3Oy0a21v0ux1UHCMHPXBSSkrFEmOT09heYQArANKo1Jn5eVqBYn7EwSlVuXntKlGSEMaKOE4QMu0oCP0+/X6PZnObzY01VBjgWJLJepndu2aZmp7lwJHTTM7v5uqNRb7wq7/K2+feYGNjnWaznfY4qtTFVfFccp5DrlhEIBkdHeP48WPce//DzOxaYM/uOWamJ1hd3+TG4jXWV1cpFAqUShW8fJ615ZuodiOdIApJEIVYwJGjxzj4mV/k5p88w/SzT+EOegatRIJSqDAgkZJISGJhESYJcZwQC4tIaAK/T0/DZhRxod/htU6LG1FIJATakkghyRdKlEpV3FwOaVnIwX7s9/pp7K2UWLYNKKJYceDgMV584QX8XgcpGDjgAJFCTABbSoQNkdL4YcTl6zeYm51mZnaGWn0Mz7FwJ6fZBl5//WWu3Vim3+1w5vlvU5+cottpo4OAxvoKYeBjW5I4Tgj8gG6ng04UcRyl2259DSUk46M1coUqec9jpddm1+499FobTE9NUq9XuX75Es12n/Pn3mJmehzPK/Dscy/iui579syzd36WUilPY2ONXC5PkmjWtxp4+SKOU6DX2UZ2m6lT0LIROiHJFbAtOwPmhG3CKKDT7abwVg2iglSIZ9m0o5BAQTGfIxQWrpQ019cYLRSZnqjjRprp1hbFUg3tFXj14ptcX1olH2tukxa1JMEadDbqIIQkTt3HnQ5hFCAShZXujDTONNYQtZDtNr31Dc6fv8SFSpnC9CSFhXnssTEs10YArogoeBaeSJACPFtgi/SCRT7v4dg20nJQIu0CRdoI20FYDsrOs3vffnqxQAsLHYfpcZ9E6d2xAHGMFnoQpUzaDapitAJpO8QyHABGgZCpVVIkMXrwNyXUwC2Z0Ov3aQcJvtAUtQJhg2Oj+tswiPxFCsJeF51EoCFRmkHFJJACSC3Sbscg9HEcl/FKDeV3iKKIIAxhEL9quy57R4okE5OUci6dpS5Bt0Mc+VRrY/R9H684hI9/2aW1Pgf86P/Mc34d+PXv8/dF0ojW76c/NYPWWv/3fJ+7NrXWP3C2rbX+FGmPxVBDDTXUUEMN9UOk7wUFOyNKjWOq2+3S7/cz+GW686SUbG1tkc/nGRsbI4oiWq1WFhVpnE8GqLiuy9GjR5mYmODSpUusr6+za9cuqtUqzz//PNvb28zNzTE5OUm9Xsd1XRqNBq+++ir33HMPV69e5ZVXXmFqaopcLsexY8eyZev1erz00ku0223uu+8+rly5wtNPP505sfbu3cvdd9/N9evXuXjxIlNTU/R6PQ4ePEgURVy9epV+v8/o6CgnTpxgZGQki1mF70anLi4usr29nYG98fFxCoUCm5ubGahdXV3lwIED5HI5tre32d7eplKpZO7OVqtFPp/PXIpRFLG1tQXAww8/zK5duyiXy5n702xPE9nZarUyN5lxHBr3pXGjhWFIr9cjn89TLpfxfT9zkRr3YxRFXL58mfvuuy97rYGXxnEnhKBYLGbOUwP9zp49y7PPPsuP/MiPcPDgQdrtdgYtfd/PXttoNPj85z/PAw88wJEjR7Juvv/0n/4TH/rQh5icnMy2w04YaDowd8Kund2hxslnWRavvPIKWmtOnjyJ67pUKhVarRZBEGQRuf1+P9suxpm603VqYm8NGDfQcWc0bz6fp9vtZm5eE4Ha6XSyyNXvja9dWlpCSsnk5CT5fHqTbKPRIJ/PU61WEUJky2mcgUKIzE3peV4G513XvcVBurOX0TgGzd8NWAzDMDvGgWy8GChr9ncQBBlYbzQa2TFr9oXZdgYeGqAJ8Prrr3P+/HkOHDhAuVymWCyysbHBzZs3Mzh77NixzLELZH9vNBr84R/+IW+++SbNZpPHHnuMAwcO8K1vfYsrV66wsrLCz/zMz3D48OFbOiULhQLz8/OcPn2a48ePk8vl8DyPI0eO8PM///Ncu3aNp556ina7zaVLl7hx4wYf/vCHAbh48SLNZjOLoIXvxul+L4Qe6r+8tNbfBj76Zzx+le8z5x08dhP4r/6cn/c/kXY4/lnP+XW+/7xcA39/8DPUUEMNNdRfAb1j+GhZaVcjOnW+mU45rQZ3xwGxkPgxAzcW+EFA1O8yVi5SrdaQcTt1PUkIgwAl+rSaTTrNBqHfTQGEHnwZVRolFUhBLp/Dcr2BSSqlj0pD3nWojc3QWV6hWC6y0e6wur5CIecBAiEswsCn1+vQ7XTY2FyjtbWBRFEr59k7P8P09DRzC/uoTS1w5u0LPP+bv8nG2jJaJSSxJlGKnONQyufJuw6ua+PZNghwLVDCptPr8Y1vPk2UKG4sr3HvHae54977mZ+dYnS0zrVri6ws38BxPVw3/ZK7unyTbbFF0OvhuS6P3H8/P/N3f5GVmys0Th1HtrYZeeMVpIErKt0WCZJIaxKVEEtJpDSBBS0hWOr3ON9pc8732VAxkQAtJdKyyOdLFMpVXC/dNuhBdIzW2HZ6t560JEIlSCno9dM0gzhOWL5xBSm/+xqQOINukF63m8bvSoktII4iGu0+G1sNcvk8vh/iugUqo5NcX7zO9vYml69cYWZqjAuXz3P+ia+wsbmNTmKSKKRWqzE6OopWCduNbfp+SKFQolT0GK8UWFleR4U+5UKO+vQcfreDShL8bovtjVW8XI43XnqGlaUbbLUSVpZu8NAjj3Dn3fcgbI+VlSVmpieYmZmm3dym0+6Sz+XQAupjU0xPzyClRZLExHFCGKYRwWmEiSIKg9T5m8SUcw5CKdobSzA/Spxo8vkclo7RWlAr57lx7Qq7DxzjtVeeY9fUJKuLNxHba0x7HguVEYpKsHHxTb5z5RKrjSbzkeIuKSnLdOzFiSKCQb+nJkxA+j3iRGMP4L8lDH7UCGmhtELECidJiFfX6W5s0bx0HTUxgdi9FzExA3YO0REUPYcoTpAkEEdYtoUjFZZuI5MQS0W4no1LgkOCY4HtFTm8dx5PRji2hRIeQlgoBEpp4kSDTohiN
ZjgJuDG6DgmiiN0olFBjFYRiVKgEtAxEpFNDqUUaWWl1gQxWI4DiSCINPk4wh7cKRtpjZAC25YkcYxlO2iV3nQghACddj6mWbADuIhAJjHdbptaqUicRMRJTDyYFN92ZB8/ebDCC9E0GsVap40V9KiogKJUIBOSZNh3PtRQQw011FBDDTXUn1Ycx1k0qenNA7LuPePyM2DStm3iOM6cU8Z9aJ43PT1NuVzO3mtnpGM+n88AmOM45PN5RkZGMtdaq9XKIkk3NjbwfT+LfIUU7JTLZe666y7Gxsay5YyiiLfeeovt7W327dtHLpfjySefpFKp8Oijj7K2tsZrr73G3NwcjUaDxcVFJiYmMjfea6+9Rr1e5/jx4ywtLbG1tUWpVKJQKGTuTeM0vHbtWhaPubS0xPz8PPfccw9nz55lfX2dSqWC4zhUq1U2Nzd56aWXKJfLBEHAwsICd955Z+b2MtGhW1tbXL16latXrzIyMsLW1hZHjx5ldXWVxcVFgiBgYmKCEydOkM/neemll5ienmb37t0EQcDrr7+OlJJjx46xvr7O0tJS5k4UQnD69OlsXZaXlzl//nwGPQ3kFUJkMM3APoDl5WVarRbtdhvXddm1axeVSoXl5WVu3rzJysoK9XqdarWa7WcTxVkoFLhx4wZvvvkm8/PzzMzMZPGtSZJQqVTo9/v0+/1sexvAbSCtgWiFQiGDgAaqmTH7uc99jl27dnHixIkM+pmuQwPvTG9pPp/P3HNAFiNsImZN5+fOqFsDpkz8q4GLBkAHQXBL/KsZw3Ec8+STTyKE4NOf/vQtHZeWZWVuxJ0xpiZG18SSmv5ErTXNZjODh0oper0eAKVSKYvxNaDV9D2aSGIzjk2MrXEjbm5u0uv1WFxcpFQqcdtttzEyMsLKygpf//rX6Xa7aK0ZHx/nkUceodVq8dnPfpapqSmuXLlCGIZcvHiRxcVFfvu3f5vZ2VkefvhhnnrqKa5fv06n0yEIAn7u536OI0eOZOtiOldfe+01Ll68yJ133smNGzf4nd/5HX7hF36Bu+++m0OHDvHJT34y265mXBYKBQ4cOMAdd9zBu971Lk6ePEm/36dYLDI1NcXhw4eRUlKv17Pt9sILL1CtVomiKDv2d3ag7uw9HWqooYYaaqih/vroHcNHVOpUlMIirWYTgz63QYa+0kTKIkpSIKKFpNtp40mYnlsg1ho3ihFCI9HcWF4h0qtEUYBn24zWauRKZZZX10mSDgaQuZZNrT6C5+XRGhLjvIsjRsbGWWu0KeU9+pFi6dplJBAnCUmiCcMAv9em1emwvrpCt7WNYwnGqyUOH9zL6OgEU7sPsbLd4ff+439g8dpVpE7SfjxpMTkyxshIHc9zCaMInSR4OQ/PdenHCW9fvMLRPePkihWE1lQ8h9ir8K//H/9PTj71FB94/MPcdtfdHD64j/HxMa5fv04un8exbAr5ArncDbqtNj/6wffzUz/zM2w12tgrktLsLMH7P8C1tWUmVpeQWqCVAlIwG1kQKUXgumyjuNJtcrbZ4GLQpysA2x449iyKlRojo5O4hRJxFBKFAXoQkysAaVugNI7toHRCnKSwqNfvUyxVuH7jOs2t9cz1qQexGYlKo1YUA8ApJTpKe/riOKLV7hDFMf2gT7u1TXVsCstKHbRjY2M0mx2uXrlOq91BI7BsmyBJ6Pe7lEsFgsAniRNc12Gz0aCzvU7YbdEPevTDmEariy3a9Lpd3HyRKI5AJziFGo6bo9NqUSqNc+L4EVZXbrKwZz933n4bcIrt9Zu8dfYVPNdCWgKlEoSQeLaFIyWWFCgkniUpujaItKcwNXyKlGOphEoxxwuvvoYfBMyMj2JLgWODLUhdgEpz6coWVr5E3vOIHQ8riWlduEhpbIpSv0djdZmvL93khh+wIG0+XK4wKVOXamJF6ThWGj9R9KWki6IfxARKEUpJIuWgo1Ck4D6J0i5M0rhTicBWCUmrSdjrEm2uE09OkSwcRFVH6EVpZ6IUCVJDGJGCwCRKmSAuUVsiEUR+DELgOgE6jkhUgq3DdLsLTc4ROLbGVjEOCZbQuETYUmBbEldq8m66LREeWg/uZBUWSmsSLBBpf0uiU5dikmjqe8o8Mn8MHJeVjQ7uyCjt1VWSOEJqBu5KQbFSpegKGpshri1RWqbnCjSWksRJgmfZFCwI4oR+p02pWML18uTjmE63g+04vGvPOP1+B3t8nCSJ8LttcspHRn2Cxho5FRN2t97xqXSooYYaaqihhhpqqB9eGYBjLsKbONRyuczi4iLtdju7iL8zktC4w8rlMq7rUqvVmJiYyOJJDagEMhec6Uw0UBPInGlSyixa9OzZs2itM/ehea0QgkKhwMjISAaQLMvi0qVL1Go19u/fz6FDh0iShPX1dUqlEmfPnqXX69FsNllfXwdgcnKSO++8k1KpxPXr12k2m4yMjFCtVpmamqJUKtHr9bKewW63m62ref9ms8nGxgYbGxtIKen3+xQKBe6+++7MGffcc89x3333MTMzw4ULF7h8+TL79+9nfn6eXq+XRZjmcjnGx8epVqvMzc0xNTXF1atXef7559mzZw+7d+/m3Llz9Ho97rnnHs6cOYPv+8zMzNDr9XjxxReZmZnh9OnTvPnmmzz99NOcOHECz/N44403CIKAxx57jKWlJb785S/T7XaZnJxkfX09g21RFGUuN+N+9X2fs2fPcv36dRzHodfrce7cOe69916uXr3K2toaL730Em+99RYPPfQQBw4cyHr8LMui1WrxxhtvsLa2xosvvsi1a9c4fPgw9XodIQTPPPMMzz//PGEYsm/fPu6//37m5ua4du0aTz75JDdv3kRrzenTp7n33nupVCrZ8sZxTD6f5+mnn856R+M45tixY8zMzPDMM8/woQ99iFqtxjPPPMP09DSnT5/OluWee+6hUqnwzDPP8Nprr1GpVLj//vvZvXt3NmbNGDdOUcdx+OIXv8i1a9eoVqvccccdHD9+nOeff561tTUeeOAB6vU6L7/8Mm+//TbT09OcOXMmcxLv3buXd7/73bz22mtorWm1Wly7do33vve9zMzM8MQTT3Dx4kWklHzwgx/kyJEjXLlyhWeffZY4jmk2myRJwnvf+15eeeUVbty4QaFQ4MMf/jCnTp3C933efvttnnjiiSzG9N3vfjcf/OAHM9gMZDcbfPvb3+b3f//3GR8fRwjB1atXeeihh/jZn/1ZOp0OjUaD0dFRoijiG9/4BqVSidnZWT7/+c9z9OhRbr/9diqVSgYoH3/8cVzXZXNzk9dff50PfOAD3HPPPdy4cYN6vU6/38fzvAwQm/Xfv38/Dz30ELlcjtXVVS5cuEClUgHIIpVN1K45Fxinp3GgmnOYbdt0Op2sM9S4SI8dO8YnPvEJSqVSNj53dp3u7IIdaqihhhpqqKH++ugdw8c4SRAClNAIkV7Ih8HkSmvQCinTzHsLC2k7CA1jI3XKo1MErQ0SP8KSikRpOr0uuVyB8dEJxicmyBfLbDXbROENNOkXHUtICoUipXINx3YRlkQpPUh8
LaKsHJ4O8UoVVlbXiML0LjalII4jut0W7VaTzc1NOs0tHCmYrBU5efQQtdFxKhMLPP/KGS6dP0u3F5DzHJotn7IQ3HbiMNX6GEgLpdN1lULgWDbCsijEMccPHyTnWMRBSKfVYPPGFY49+AFeeq7IG2+9RWNrgxdfeJ4PfPCD7D92ksLhgxRKRVzPw3JsKuUy73nw3Tz8nvegwz7FqRFc22VpeZUnv/YUl1Zv8vEkYc6S6BiwIBaCjtKsA5d6Ca+uLXM9ifHlwC0mJY5tU6mMMrOwh2p1LO0uCH3anXY68UlirDjGti2klU4ybceh3W6idQoTozhmfGSM5aVFAr+HJdP4TNt2Urer1sRRjKXT7krLkkgUwkn7+67duMnczASOY9Pvduj32mysr9HpdFleXqbX6+P7Kby0bIvRsRFsy6JWTe8kTJQml88TxekEeLRWxdIJW602eTdHsZin2euQK1bQOqKSs3CLI3zooz/Fq9/5KlrY3HHnnYR+n+mZOYL2Fn0/4OqFczQ2V2g0tskVini2RbfdIl8u4wc9lq5fYX5+DksODhUpQUgEYpDhKZAy3UZj4+PMTI6zvL1NN4iYmxxFa7CkIA59HMthzx6XtxavcPj4ac68/BzT01M0zrxO2OlSixO+vbzE1V6fghB8bGyUu0tFbMtCWhYqTgbdkgFhktBPFJ1E04ojOklEC+gCkRBEiULotHc1UQqh5QCWgtTgWgIviUmaTQLfx280iGfmcfcfwi6V0doiDgNkEpPEIZFKyGB3nE6qYmGRJIp+HKFVGstLukXSqNWeSh25UYxOEoRWWDpEooiUQKoEB4VrgyMVnlBIKQbQV5FzLXK2hWenbl3HtnCFpFRysKQkSfr06XP1xnX8QJPEIXKQqypth2KpyJ6yzblWC7QmjBWm3l5agoJjUbfS3tp+lBAEfTqdFpVyBddLKCQJ9VqRKSumF0hEsUoU9Ak7XXaP1sjRZaXdoq/AEs47PZUONdRQQw011FBDDfVDLnPh3vzbdV2mp6dpNpvYto3neVm8qLlwb7oKJycnuXnzJufOncs6HI2r0VzMN9DAuCpNLKfneTiOg+/7Wazj5uYmKysrnD59mpMnT3Ljxg3OnDmTOdFc180iXU18pIF/nU4nc34BjI6OZo63qakpRkZGsp450+k3MTHBqVOnWFpa4otf/CKzs7PccccdjI6OZpAjrXrI02g0+Pa3v8329na23MViMXOrTU9Ps2/fPgDOnDlDo9HgjTfe4Pz58/i+z+bmJu12G/iu+8vAx7GxMVzXZffu3UxOTvLWW28xNzfHAw88wNjYGEEQcPny5cx5atyTJtrWbJc4jhkdHeXhhx9mcnKSTqfDlStXiOOYN998k+3tbR5//HEOHTrEc889x3PPPZfFixo3mulMdF2XU6dOcfvtt1MqlXjzzTez9V9YWGBxcZF7772X8fFx5ufnM7eeccHm83n27t3L1NQUJ06cYGpqisnJyazPMQgCDh48SKPR4OWXX2bXrl3Mzc3x9NNPc+HCBW677TbiOOab3/wmhUKBu+66K9tvJjp1ZGQkc9TW63UqlQpRFPH1r3+do0ePsmfPHr7yla9Qr9c5efIk169f58knn2Tv3r189atf5dvf/jbvete7uHnzJr/xG7/BT/3UT7Fnz54sbtUAdyEEv//7v89XvvIVjh8/zs2bN3n99df5J//knyCl5Atf+AJaax566CG+9KUv0ev1GBsbI5/PZ32iZj9961vf4tlnn+X48eNZ/Osf/dEf8a1vfYsHH3yQq1ev8qu/+qv803/6T9nY2OCLX/wit99+O/v37+fZZ5/ll3/5l7n//vs5ffo0zz//fAYDW60WTzzxBFNTU3zsYx/jhRde4Etf+hJHjhxh79692biD1Nnp+z5RFPGhD32II0eO8Id/+Ic8//zz3Lx5k5mZGX70R3+Uy5cvs7q6mrlwp6enyefzfOITn8j2z+rqKr7vc+TIEZRSXL9+nSiK+M3f/E1effVV7r33Xg4fPpxFpprY3E6nQ6vVYm5uLuv0NLDQdd3seDYdlgYEG8ewuckhn89nrzXOUnMuAzhw4AC/93u/x2//9m9z+PBhlpeXef/730+5XM6WaaihhhpqqKGG+uupdwwfE3P3JkkavZqk8ZxKg0ZjC0nOdQj7ABZCWniOw/jMHNJx6NoF+qKMRNB3LeYWCoyPjeMV0h69OEnY2r6OiiMEoIVEWk5aZF8ogJRY0kJaoJXGzRfwo4jxySk6QYLfbqA0KK2Jo5BeN41a3dzaor29iSWhXs5x8sh+6iMj5Eemefq57/DKK6/gSo1rO8yOjbBvfpZ2oLl4dZGZbofp2QUcx8WyLYRMPWWJVmhAxSFXrt1kc32Nju8jheT4XS3q45M0brRYWltns/kdrl69xic/9uPc+cBD7Nudvl8xX+Cu246za89u+u0mjiWxbYdiyeOpr/0Rf/KNr7LVbNBSmp+s1JhwBb7S3FABb0cR0cwYLy2tsh36WJZECBvbcaiPTDC3cIBybQxLCoqlEgiLqj2C423TbTZIkih1LMYRluNQKEK308ZxXdA2UehjOx7FUo3Fq5eBFGJZwkodeSIBy0LFMRpNznXI5xx8WxIEIVjQ6fk899JrHNgzT3WkxeLNG5y/eBGloF4p4VgWjmMRhSFxrOl1e1RKRarVKhro93tsbm2ztryMY4MjEkqlMgt7DxNHIZXRSVqdLlPTU6zdvEYncvnIx/82cXeDxsYK83sPIqWkWCohpaZUrHD1/Hd4680z5FybQj6PFJpOq0Gr2eTMG+eZn5kmbDc5dvI2Ttx+GtuSpAWjkkEGaAr0pEjddpbk5NHDXPvq05y/cIX56QmsQRyxm08L6ss1F/vKdaxCBdeysCp1hCM5v7yJVyzwtlL4saIgQMUR63FI0clTLuQoWS52rFKXYeAThBG9KKKXOHTiiEYc00wS2hK6aAIliBTIrGZOgRo4XIXEGYA3zw/IR+u0tzbo3lzEPXkaOTUDKsGTCbGlQYcoHBAKW8aEShLr1I2slUZKsIUiidMuR6SFhUaoCKGCQfxqTJikE5lE28Q6hdbSV4ObFWzUYPKplMQSCsfWCDS2JbClxLUsCnY/vZigQja6im6/D3YBVELOsQmTCMd1yVmaXRNF1jfzNHsCFSv6UUwYaxzbYqpgofyQ5SAm1KBVTKfTwrYdcp5HkoT0un3++PWrLExNEbg5wl6bqN9mouxR80ZZP3+DVpCwb//COz2VDvW/ov6sfoqhhhpqqKGGGmqo/xwykMXI9MZNTEwwOTmJUopisYhSKrtYv3v3bvr9PkopxsbGmJycZHl5GUidjLVaLb0pcOBMNN16cRxTKBQoFArpDaeDrrtSqZR105VKpSyy9MUXX2RtbY1ut5v1Gpp/u66bdRHecccdlMtlnnrqKQqFAgcPHsziNu+44w4syyIIAlzX5eLFixn4hDSy8vTp05w4cYKtrS1efPFFzp07x3vf+146nQ5SSgqFQhYvee3aNd7//vdz9OhRXnnlFc6fP591Pfq+n7kHTZ/ciRM
e7O7dcFGwcU05G/NRnfyvXXlzkwrmzdDpdDDDNcrKiZOf+BlcePUP3qcv8+G/5DC+/8HVW+5IPfvRj3L2/zub6fXr9AdMspywL4kixML9AkU8osgFVWZBPRxhdIohrCJMXlas9WGSsrU2CvcvhFusdXsL1dzXNmGYZ65tbqGuv0Uk6zM/NsXz6NEunz5JND/nok4+wnDq3WmfhFAuLqyzP99m4+So/+rEPs/XYeYp8TCwEcaSIkg6l1ty6d5vdv/s36fX7bO/s8tq111lbWyPLMozexQpfVySKMEg2NraJlOLeIezt7ZPEMVZrrNHEsaI/6JGVhwghuXd/Ey1ihgpiqUi7Q06de5Srz3yQJ558huXlFXbu3eAffO4fkE1HJKIkomJ/94DdrfuIqiJSPr5XRoBEG01WlggExmggTPIESRRRiNIDRQeshHXhncb3bRopykpTGYuw2oMoZ4REuIjVACPDmHXxrYK5TsIwKdieep+dEGxu77C5O2J5bsD2/oiFuT7ru7tMDw5YXlrg3MocKYb1W29y9949Vs8+wo3XX+b8o5dZ+c6b3F2/zT987WVUHLG6usIwTjl3ep6t22+RFT12RzkLcz3Xfik49+hF7m3usLY/AWt5ZHnI+b5CWOPch+Dgs42xvWWWHn2CF7/0efLOIj/9W38zS3fuszvJiQQgJWkn5fBwTKU1/fklFnqKrd1DjIgQtiSrKlCCg0zSVZLI17w0mBrMCtx7hUB40OscifV7j3V9ZXGTZC2c8zrUchRSgAeXQimEtd4h6eJVZXjOv0IkDua7pF0HmQXODS3wK2wRIMM7X4jKFb7UpMB893y6VatWrVq1atWqVatjYOckqPWgGEo4AnVNaBAgXNP9F/YLDsPg6Gs6udI0/S4nXjOitVl7sQlMAuho1oYM7QrnCDC1Ge9ZFEUNGUM0KFDXaGy68JrgMFx/AI4hirbT6dDpdKiqqoaPURTVEakB5jadoE3HXxRF9Pv92gXZ7PtmLcVmlGTTadn8d9gnPN50GjYjS0N/BxDbjHadhX7Nez4LY8Pj4bHmdTYhVdNhO+scbMLLcO6T3IWzwDi08SQHanP/JtBubhvu5ezjzf2bkG72uA+C0ifpJCA26yh8kMOvqabDchZWvlMbmsAy7D87fk6KcJ2FzycB4eY5HvTc7PtLsx1v1/a3u6YHuUffrh2tWrVq1apVq1bwHuCjVDFR7CJOrdFoUzloJH1BeaGQylV3tEBlAOG+xD+qp+biHqWKHGQ0BmsMUgiiJEaICKFiyqpyX+5bH3kpw8TA1YmsP+ApgbASa12kozuP5dFTyxSV4Rvf+AaJgE6vjzaWsijcpE0KrDJYlIdLgtG04Csvvcat22t88LkPsHD2Amk6T1mU9MuKlSV49MI5irIiL0u290dMxxOmVnCmE3FnbkgymCft9RjOzdHr9+h2OywtzNPvdrBGk+UFP/ETn0aXBa+/fo2NrV0QEUWVU5QFk+mEosiRAhbml5BLkru3r1MWhQdllkgq5gcdslLTsSVWa4xHK8LXKAyl7mQNa60DktYBl0k5YTKecH9jA/XtVxhXhlOLQ8qipNtLObO6wpzM2bx3A4RgOjpAewfpxsEhVV4xv7RIUVQc7KzT7cQkSYrFEkdw+swqZWUoCgdgp9MJZVEihKTUEZUVYEoORznGGiqt0bpiYTCgn1UY7zCrSOn355lfXuHchUd5/MknOXv6LFIqNu9c54Uvf4FsckCqNGkE08mIw91tyskYfH/pyrgahUa7HvKuXIxxLlgsaZKgpEJgHQSuKj+WXJ9KAVEkiSOBNobKGufotRasqy4YoJSwrt8drHKAO4BgJWG5G7GXlVTWQbiiKHj1jRt86iPP0OslLM0P2d4bMZpmRIcjTi/OUUnN4f4ub7z2bT75iR8juvkmRkRcfewiW3v77IwmTHPN2Uce5Xf97t/Ozq3XudGVbEwnrG+XdJJlOnGEsNDvz3Hxwln2p9excZ9M9nlza0piCyLhxooWKUuXnuHSpUd465tf597uiCtnY4rplPNnV9l58yYq7TLe36PX7ZB2E7KDzL22TEyvk6J1RYFlUmrSNOJwWrCQdokjgcDgg5WOFhQYV28S613SHhi6upDelSiEcz36aFvt4XGYWDn3aeNNy48j7WvVhkUSDjwaB5CFQFrpzxdQo3utCGO9C1Z4x/SRo9XadsVnq1atWrVq1apVq+9WE9LNAoZZZ1mosRegWxP2BYgx6zALOglUhceDu64JQZqQo178648R2jPrkGv+u9vt1hGjRVHUbshut1vDxybIa9Zw7Pf79TUFkBnqIzbjZIOzczKZMBqNjkW3lmVZ1zZM07R2ZoZ+LIqCNE2JoogkSeramuPxuI5+bfZHgJ1NN1+omxj6PUDU8NOMnp2Fi+E+BDWdlk3XYBNchusJwHkWXIWfJnwUwsXdNsdBUBNShzYG8DVbp3B2n9ljhfbMwqfZn5Mcck036SxkPAl6zYJQoL4HzXbNgsXZNh/NC7/7mLPgs/nYSe7XJoiddUiGbWdfd7OPN7cP9z28P8xee/P4szoJ5Dafa97Pk67vJL0TUD3p7wfdv1atWrVq1apVq6D35Hw0RqO1wVjtY1UVQimkjEgiF91orQMJFosVysEbGdWuJbBoY5w9TDRgkBDEUYyVbkKTxBHaaIyxVNqg9VFtCCElSrjIy+A0M8ZizdFk5/FHziDER7nx1psuVsZqyqpAa4FUisjGKAkiUigslbUYbbmzO+Xwa9/k4pm7XL7yGAurZ7CDPnmpMVozL90Hx0fOnnGwVSqMFVy8coVBv0sniemkCUmSUGnNZDylqAxRkjKfJty6cZ0333yLrNSUhaYsC7JsyujgkMloTF4USKlYXVlFCLh14w2KsgBriSNFr5sgheDswoDR3jbWGqSQHoJ5MCKsByR4B6qLkTRW1ODE+shJbSsiC+tbe9w7mFJozQtv3GXY73BqeYkzp5ZZWJin1+sx309YWXoEhHS1J7VlYXmebDIiz3NsWWFNRQz0ezHpQg8jJKPJlCybApJOt09vuEinO0BKxSTLKfICrSsEkHY7zA0XGc4vMDcYMNrZ4HBjDTXZYustzf03v8PeeIKU0OtGDDuCMpuyu7vL9GAfXRYIAZGUSKUwphE5w5EDVwiBkr4vLKhYknYSer2UyljKsjF5F4JYCvJKU2nn2FNCoH0dQiuOIj4Rxo9H/4FeOMetFM5Z1+/EzCUlu7nG1UC03F3f4P72eRaGQ/b2x5w/vcLt9S0ORmNG04LFOCKbTli//SZ3L15kMJjjcFzw1DPPcvPOGoeTnHGhee31t3jz1W9RjQ/oRgJ9kHFYlVy/U/HM1UcwxsXdnjl7jtt37rFvBf1ewtb9bQ6mGUpKojjl2Y+8n1PzKS9+6fOMsylXVnr0qVi/8QaPPf1+bt1dZ1tboigmyzIiKcAY8umEJFUY45yjGsC6lbyjwrA9jegOEqSQziftYaCuXJ8J5eJQbYhSFQbh31S0DQDRT7qNc6Nav0BBW1ef0ZlOnetUCuHqUloDxt0ba/CP4
StOOgd1PZmXwkXBAhbpF00QbqgD+g3A36pVq1atWrVq1apVU0VRHANYTSdjgFUhvrRZG3AWJjahGByHACFqtQklwnECwAsxpwFyNaFnaNssQMrzHOBYvcfmMcJPgKYBnkVRRJqmx+BQFEV1TcrwXHAhFkVBt9s9BuWafbCwsFA7AoPLMsBHa20NAUNErTGG8XhMnucIISjLkizL6uNNJpMTnaMBQAYHZNg/OCs7nU7t6AxQM8ClwWBQuy9DvcwAfJt1P4NDdBYMhnY04WaAu03oJaVkPB7XsbuzNUGbkbLN8RLGUaiP2bw3wR06CyID6Gy2rXlfmjG0zfM1AeOsw3LWdTn794OcibNO0HDe5u+TwOdJTs4HbTvbviYsbd6b0EfN19Es+G+O47BvM9539t432xT0doDx7Z6bva7vFyCcvT/vdOwWTLZq1apVq1at3gN8PKpt11whqVSEVJGP1QSrDSgHAWT4It+DL2ssVjhnmNEGa7X3SfoPfYCuNNgKKax38fkPutZFiIYakODjFn39NiVdCmsAnArJlfOr5FnGnZvXsUCSdtBlSVW66FVjhXNgasiLEqUM3ShinJW8sbbF/Z0Dzqzc5MypVc6eP093uIhVCaX2EwStUUAnViRJhDIGW2lkCkkSg47oS4UcHXLn1pvcuHmL8STHWEFeFPVqz/2Dffb39xgfHqJ1Ra/XYW7Q543Xv0MxOXD16ayrEaikYGnYZ7K36+rTAcYaQDZcYsGx52vfqQglQCEwwt8j6/YzPiY3VoI0isjKiklRMikqNg6m3Li3QaIgUor5Xodet8PiXI/FxUUGw3lXO3JlmShOQEjnDM2m5HlGnuWYsiRNBBKJEpYkFnTtlPm4x8LSEp3hkgNKViIatURBcHiwz92b17l7/y5xp8P8/D5nTq+yuthHYRkd7rO+vo+uciJhXZ8nHe9a1Egf7WmFA+JGH03gwvVrY5ESlIyoZEm/1/ErZHOKskIgiJRzP1prMEAUYjelQBjn5xXiCOxK/O0QDpwZV3AQAyhhWexFHJaayrgxX5Ql33nzFj/64WcYDvqUWtNPY7bLkvs7e8wPuuiqYmdzgxe++i/oLZ7h1OlVRCJ49Px57m/sMJxT3Lt7n//vP/0Sz169gEhTFpcGHIynHExyxpOcQScGLGmvx4VzZxjfuMPm+l2UFCz2u0SRQMmIydo13rpTsdCJONvvur6zhr3Nu5in3s/q4hw7oynzi4vs7e5QVhUq7TAtcsbjAl0WyBBdKyAvNbEAbaWDe4CuLGBQAl8h041djUVY0N4tXYel1tGrxkUs4yeFwr03WWOpcPdaCudANdb69w0HnPFAWEiB8s87x6MDlFJIt18d0hpApD36n3Vjp4WPrVq1atWqVatWrd5JTbfZbJzlbNziLLQJzwVwBcfr/M1CkLD9bNRimP+EcweoGNoXjhvgUhNmNAFOAFhNh17TwTnr3AvnybKMyWRyDNAEmAnHIynD8dM0rdtcliVVVdHr9er6mUANdQOQ63a7x+Jmm662AOma52neowAL8zw/BtMCNAo1HkPMbICb4V6Ec1RVVd+T2fbN3peTwE7Ytwm9ghO02+3Wjs84jo+B36YDLwDo5j0PfRbO2QTiTcertZZer/ddYyc4RMNxToKPzfsZ+qt5zQ9ySTbh4INcjbNQsenabW7TfL2Fdoa/w1gN2zb3De09adyHfmz23+zrZBb8BWdqs+ZoE0w2r+/dOhXfDtKeFBsb7sX3qvr7Ejj2Wng3erfbtWrVqlWrVq1+cPXQ8NE57ATWuxSdCUm6L/+lczAK4zmSDV/X+xBWa1FSIaPYARjpHEWuthpoYxHWuRutNQghiaQiiiTa+g/ulaaqHFgU3n0kjfARmBIhQuF2v42wYA1XL56lKHNuXr/O4d4eg8EcKOXdl5DbHIwltomLcDWWXLtJx6GA6b0dbq3v0nv9BqeXFlhdXWY4P8/y8jKd/gCVpCgVESVpXetS2JK9jXtsbW1z6/Yau4cjpnlFVWkQgmyao41mOp0yOjzk4HCfg4N9ijyn3+9jjeHW7be4dfN1qgDA4siBOavJR4dUZeEdYsI7HX1vi1CqzoFVFfm6GmHVKg66Cf+34zgGaQXDTsykqtCVdlDSQ7pKQ1YUHIxzhNjjVC/h3t11pJB00oRur0un26U/GDA3HNDrDZjv9YgXF1BxgsVSVobpNGM8Gbu42p199iY5aXeHOE6xBqRUOLwnUXGE1hVLZ09z9pGzdNMITEWeZYz2dqmM5e7du+TZhMuXLjLXG+AAlkFXObosqMqCIptSagcZtTEIA0oeDVThHXPYMKGMEUIipWI6LVztDmuRCDQS5eGWFSCsBGmRBKec9WNf+H51/S58PVQECKFY6Ah2JhV7eZgEwr2NLdbWdzizNGRzZ5e5fpf90ZTd/QMOJkss9lNG4zGV0Tz53Id5/LFHWb/xHS5fucLrb93ijdtrTLOCbFyyc7hEGqd89Ed+hK/9yhcpk4Sd/RFz3aV68nP67FneuH6bcam5sjqkl7prjqUgjSBWCcIaKg3auAjaqpiyce82F86f49b6DoWxRFgqXL1WYS1zg3mELTk43Cef5iRxTKmda3BSVkxLSTdyMNo4GlgvHEBYX28zvIeEGNQwebSI8AYjqMGlMcbdH+uqVup6Pw8YA8IMwLE+ZnOC1pgU4/7p2SS+AiTGGn8nHdRv1apVq1atWrVq1WpWaZrWrqeTYhSDu635fABPTUAXwNNJEanw3e6vJlgLMaTBdRiAUHAsCnFURzDAqwBnQtvgeO23Jtw6CaY0I0uBY47C4Kibhayz7rSgwpdLacahBsdjURQ10GteU9NN2uyX0GfhWpRS9bYBZAVnZujTsiyJ45jxeHwM/jZjXpvHrKqqdnmGfgkwMvRx6JfQR836nsF1GraFI5cdQKfTIUmSuh5mlmXH6mOG6woANfRR2H86ndbnzfO8hm1Nd2sTtDZdouHYYQw1x21wfIbx0QTS4R43HcCz9yLoJAB3ElQNx2rWTZxVc0w2x1kAqM3XSng+QOWT3IMPAoJNSDe7T7MtTc226+1g9Enne9DjD3rupAUN76ST2vCwx2hhZKtWrVq1avXDp4eGjwjlvrAPbjohHFAxQKWx2n3xL4T70l8p/4HeeDggBCiJFJZIRejgosS656T0dfkUSOW/4HcQR8rYfdlvDaWPXRHCuaQQIK1ACO0Rhav95uiBi8t85spjRGmH77z8baJIYoWsJweaikoqoAAsspQoGZPlGUJIbCywUUSmYW13wua4IFLrRBJ6SQTGkCSJm2SCB3eWUltKC1q7D5FlVbkYG2swVpDlGYeHBxweHjCdTtBG0+32mB8OeembL7B2802s8bE8aUw3SV1MaDYmnxYo6WvfhRWEOFeZVAqlHEQ1Qvhaj26SYI1zlQWYg3DVCUN8baokK3M9RnlFVhT0OjGmqoiUQlk32Sgqw2FWEUcSKwyTacVoMnVjAhdHquKIJI7o9rrMzc0xnB/SnxvS6fUZLM4RnV45mlxLASLCGn20ctcatC4xGoppxXi0
x+baHqPRhKTX4/Jjj7GyOM/5c2fZ3dtj2EuxZUFR5OiqxFYFpsyo/ITPGl1HaTrY7eJOw2foSmuUdDE62kiiyBLFESqSTCaQZyVIgXS5rbUDOAB21/+mhuLOJSyQWJR04Z5uO1sDyNODlHFpKDyAL6uKb79xg6WPPMu51WUm05x+L+VgnHF/a5fh4DyRSoj7cxxu3OBWtk1sczY3t5kbdIkQkPQZxhGTwwkTMeHOm69hizGjzGB1j2JpSKQECEG3P+D0yiLbd9a5O6pYFSkLiXN5Yp2TUCpJqgSqCuAUDtbvcPHy46zMdbm5dUCcdpjkGUmSUOQ5h3rM8rDH6rDPKJJMS4MxGmsFRaE5KAyxCDUc/cID7xzFGvytca5nC4hQZ1F4p3Od2Ow3cH+7+rFHwNKExQ9+jLmoV3dvXByscPejPrqs2yQCXDYGTZgIurqPNmzTzqNatWrVqlWrVq1anaAmQIQj91PTIdaEcE23W3gsOLgCZGlCtCZAaYLM4IIL2zXdaU3YFpxZwUXYhE+hveE6mn83QU5Qs+Zh050ZYlEDcAr7Nus+Nq8rALsAE2ehTABWURTVcHH2nM0ahbOAtLltUOi3EJ3ahJWhjaGfmv0Rfnc6nfo7hTiOSdP0WIxuAHnhfgRw2Tx+0xkXHIdN6BquK47juo3NNgeYGGBy6PPwXHNMNfulKIpjfTIbDToLCMNjAZbOAsFwXeHvJnxsXm9zzDejacN2zd/NGN9m+wNcbx67eYxZl2Tz3s3e41mgfxJ0fBBA+16BXKtWrVq1atWq1Q+DHho+SqW8Uy4AlgAOXEihqUGWc0E64GU8ADCUlUVJ6Ws0lhitG/DHIq1fVakiQLgoUROiDp2jLVYCISN3XOOcl4BPVPRQwMeNSiVdzUhtkNby9KMXmOv3eeVbL6NshSUCa4nimEpbiqpCSMgyiZQlkVJgJyQ2xZiYWCmssg4mYtFGMvWr/KKqQExKbIh3FCF2RKDLwjumBEWRY0zFNHMrUfPcOQEFMNcfECcJd+/eZuv+GmVZIayh00np93okkaQcH7pYWiUhikmiiCRJUVHkoigtGF1hrYuEzYqSSmuc+StCREcRlcZorAdfzuVlkAIGkSLpDJhkBYmCXAgm09xBXgXKQq49HRLUIA4cNNJAVZTkecH+4Zj1jS0iIZFKEEVulWnaSel2OiRxiooj4iQh6aQkcYQQfhVwVVFWFaWRZKWh00k4dXqVlYUhg8RSjHYophlxmXGwuU1VlujK1dDEaCpjXO1R6yeIPsJTW+1qZIbJhjRoo9HajfFISqyESAniKCKOI8ZRTlmU4KNbHUx31wtH8bXWGJR0TjmscTiyBmgCa4R39grmuxGL04jNaRmMfOzuH/DW7Xs8dekc84MuIJhmm6goIptmPHH1KfoR3H3r23zt5g26aUoUJ+wfHiLjiHMXzrlJp55gignffulFNg8zhvPzTKcZh6cWGKQR1jqSuHJqhXhtnUwLdH+VfV2xvrNFVxn6sWDQiejEMSoKk0PD4f6I/cMp506tcndnhEm7CLGPsIZICCZFwbiI6XoQ3pMKjEbjXvtZaTCJWyDg4pfBWI2QonYdinqBg6zHprbUDlPnQsS993jg6GqemtqlKAS+PmOYFPrIGBwgd2xT1E5VhyCPgKU3bIeh7VyathGLY9vJZqtWrVq1atWqVavvVtPxFf5uxqQ2YyObMBGOOwEDMGk60JqOxFmwEuBKOHan06nP1YRC4bGmoy+45Zp16WbhSqgH2YSBTZDYdMYF190sfGy2tQkGm9fSdA/OgqAA3cJjoT9DPcgQuRquYTZKswlBw/GakLUJjZtQKvx7FobNPhbqXAYXY/NcoX+aALN5njiOj8W4zkbRhraGMTAbc9u8x00Xa7P94foCHG6ep+labf7dPPZszczZPpoFvs1zh/bMxvY27+/bOQ9n++JB+zTvSbOtzWt7EIR8UFtaOb2Tk7Htq1atWrVq1arVQ8PHAA4RolGjTSOE9DX6wgdrgZLSwynht/FRlR5VWuPrwXm4KITytfekj3i0KCVBRGC0q62HRimLwsUvWuMCEMuycnGaUrj4Tv+BXFoHL5zryV3444+cZdDr8eKLL3KweZ/paMSZs+eQSUKRF2R5garcykLtI0QiPwkrioIkTVAqpt/tknY6VJVbMVhVkrIoXZxpAE++fdZUVFoTRYosy5hOJ+SFA2VSKTpplziJGe9tc2dnmxvXX6csSg/GIoqyZEEYyomrKdEdDOh0OygZISUOBPvPgBaJiGKiJCFJugglUcq5voSAIs+p8gmT0YjJ6NCtKgXf1wJrDGjN4nyX/nCRg4N9EjLmhwOEgI2tHULtwqwypOp4dKUUDq7VrQnGWGNQFoqqQmYF4/G0ztXUQLcTs7jQp9dJUcrV8LRIIhWRxhG9ToRSJXa8w9Zkh21AaxdtU5aVn2C4ISmVg14GRcTRh2OppIORRtZA3FqBlKCEwgTgbQEhMUiiGObUgDROKIqSoigpi4LS379Q07C+ZAG6/kBuPfx0x5VS1BsJKgSS1UHKYV4xcZmjGGN48+ZdTi0tcP7sKYqbdzm7NI+NFLfX7jNYvENiltm4d4fNrW0qK+kmMaPDA5RU9CgRnXl6SZfde4fsHI7Rxo+jbMykKBmkEZV27RnMzTHf77KfTdlev0s3Tbm3uUOhDQpLoiRpFLFy6iy9boKxoFXKwqji8qlTzN1aZ38yYThcYGtnx70urWF0OCYTrj6pFAIVKcpCo41hWlm0FUhrKa277lAHEsD6xQ2So8ecs9RPIv1j7rUGrqKmrGOgZXAp+vvh93SQ3Fr/mvH1Z7FHDkhrj96fgrvVhvc7D0Itfj+QsrU+tmrVqlWrVq1atfpuNSHXLNwIzr1ZJ+RJMC8cowmhwly36cJrwpVwPHDOsSaoau4DHHPphe2a7rwmGGy68IKaIOpB9QwDaGrCqGZkZ3AHztbmmz1/0xkZwGYTfIV9Q/ub55sFm82+DXBXCFG7GJMkqc8f+ugkp1yzTmCIuZ2FlQG4zd7DJpAMP2FsFEWBMaa+lhAv21QTMAbgG/quCS+bY7CpposwnLvpnJy9n83tQ182YWazXbP7NsdME6yH4z4IaDUdrs39hTiqqXgSIAzbnOR+fND5TopDbS4iOEk/rJGirRO0VatWrVq1avV2emj46JxJjhlJ73aUQiBE+EANSkUORoZVft7l5dyIsq7l5p4LPkkHCIXFARohwZq6LiTSgTEpwyo2F5NpcZMUJQVRFGOs8HUN8asfC4SwqChBRbEDQVhOLwz4sR/5GC+/cYtvf/MFJtMx3a6k1+1SGeuBVoExCoQ/l7Voo5GZIlIR2XRCHMVUlYs4qbT2bk9LHCUIQe18q8qSosixCPIiJ89zjDYM+n163S5RFLG/v8eLLz5PPp3UNe+SNGbQ6ZDlU4rJhCiK6c0NGQzmSLs9kk6PJO3S7fXctv05+nPzpJ1OXdejKnLGh3vsbW6wu7XO1sE2o71dsmlGWVVYA0YAfsJrgEJXHGxu89xHfxNb21121u9xbrlPp5ugJBw
e5uRlQaE1kRAoYT2M9nGuHvg0uF/DqebueaUNkXSQKRaCThqTJjFCydpQpnWFNpqscPG3kfTA27sorTUUhT6a7HpYiQegSviIYOscjdZabD3pMyih0I3xKax18Ey4saqExZgKow1KCXq9lDSNKYqYIssZTXIoK3dFvmlSePBVQzIH4F1CqAChwSo36qWllwiW+jH5YU5lXSTuZJrxyps3+dgzV1ldmmea5UyKjL3xmG+/dg1TnMcaTXcwT29uyGg0YX99h0GsiXCwd3dzi+2dA1fnsqowVrC6tIiUkiTtUGlXEzXpdDm9OMdZDKO8YlJOGQ565JWm0hptBXZukc6pC8zPD+l3OywuDjm/kDKY67E87HA4mZAmCdJa4jiqa13meUUcKYo8I01iMK7uZmU0hZZ0juWWHv0dXIeVxV1NPQF0iNFaG0y3zu0oZH2MOkU1xOLaBhx3B6nvNcKijYOX4YACEPX7iN/Xurqodd1Jv3jih3Wy2apVq1atWrVq1eqdNQs8TnITNgFYE06dBC7DsQLMAwd6yrL8ruhLoK79F6BSWEwbnINxHJPnOVVV1S64WfA4614DjtUoDM7Gqqro9XrHoGkAYrMuxnCtof1NQBfcbME9GP5tra3b3Kwv2FSzrmD4u9lnAbCddP4Al5rOwiYgbLpUm9GdId403MNQBzJcf3Pb8HwAeM1+aUbHhnYHB2TTHdjcp+mqbcLjsH/TIXsSFA59NFuD86Rx1nxsFsQ229Ucg817NOsObUa6zh6zuc/s66UJLk96HTW3O+n3g9Q8XhNQzrap1Q8vcG3VqlWrVq1avXs9NHw0wa0I3vHl4lWFaBTr9j/GZaL6Gmvu91GdPIHRlY9wlQgVIYQDB8K7noRUBGiAn/MoKzCmQgiFEmCVg5ZWSYy1SGuptKUqCrSPd3HsoMKYUPfNtXOh3+XHPvwMj148xze/+S121u+SRBFKxcRSIRNXv6GqKhClh2mWtJNidIUxFRMDRZYhpIMVSRKhohgQlJVzxlkLWTY9Wj0oBVJGREpSlQVUEXfWbnLjzWvk0zFVWYGQxEnE4qDHoJeSJgssLK6ycuYsp06doj8YkKSJi9yMlHNARgmYijRxUS2jgz22N+9z7/Zt1u/eY3Nrk4ODEVmeU+kKYS1HzMbVtzNWE6Iwu9Ky0E1Iz59HFxOKfMIgkXzoiUeI0g5rG9vcuHmPfDKlE3n/mQWhhKu/CX4cuMctLqY0jAXrkA+R9CDIGJSSzvHo9xFCIQVoi69hYlECklQhgcpYlJAY4fYVWJDOYatEsIO6uoWijtx0cBGUg5UhyFOACJNDKYki5frFRFRK+xhf53DsdlPKboe4k5PnrqZkWRnKosBUR3UksR5Y2RAl2ogRNWCMc9St9BLGecVebmrAtb61x5t31nny0bPMj0ZUexalIvb2D7h5N+Gx86eJhEUZzaXzZxEyZnJ4iOwvYXSJOdzidEewd2DIrGH/YJ9B9xSxUggsUlqkUIhY0Ot26MmSlX6EsZBr3+dCoZKEcxcvszQcIEyFmR4wvnOdW29V9D7xW+mnibtHKKI4QusKCeR5iVSSbizpRinjzNXkKLSlYwWFtiRR6KYjB6wzJbrIVYwbjYR3Amt9HUfnVnRs0CCs294SwKCox3UgkQ5oWz8mwopdH60qXNyqszMqt2DBx+VaZiKw8HU/sXXbWrVq1apVq1atWrVqatZp14SPVVXVkKkJYZpxmie582bBy6wzbxbauMW4ZQ2+mjArONwCUGrGdTZdanC8TmITlAUY2AQ3wcEZXIGz0ZjNn+a5oig6Bt5Cf0VRdAx2nOS+PAnGNtvbPGcAnM3I11mnYvgdak/O1nts1uBsulillCRJUl9HM9Y0HDc4+Wb7cBZOhutuuiRnYXazf5v3qNm3TXjXnNPA8Ujg5vU1z9XcfhY6vR0YPGm/WXdmAMUnAb6TIHzz+WZ06+x9a2q2DSfB9JMcoQ86XqtWrVq1atWqVat31sPHrqrIURPrQw6F/yDIkSPRGosQvig91ZEDyf3yDiKvADCtc7MhjIeEOCIgpYcKHnpa0MZTCVwso/IxrcK6SZOSEEUSa6yDbIDR1kWPRrGLOhUWFTlI98jykNOf/hTXbt7j2998iRtvvc7SsM/i6hk6aQfS1NWN9B/Cx+OJi9AUzvFZVRXWT17GE4iUci5M4yI3kS5yMu2kJHGMFAqERFcVr3zrG9hiRJYXTKZTBzDTlEprVuZ6XDyzyOnTpxgurHDq9FmGi0t0u12iOHbxkh6saK3B5OzubnNn6x6jwwN290fs7I3IK9Cig+wtEGmJLncoixJrNJHyMBaBkM6lWlSGotJESlJMx5y79DiTg13GO+vEaY/BYEBZ5pxe6BGJs7zx5hqY6gggGV91T4gaCklf71CIAHBczGu3kxIpRVlVVFZQldpPvpqrDaGTSKrCUhnjwJ21GA1u7mJR0oNCXLyuH1wAviaoczpaDEK4iNUAWXVzch0mh2GSisQqQxolWOEiZLHO2eiiYFNKXaF1RZYVTKY52cQBSW3A0zMHO61FO1MoykNSvENXScGpQcq0mpL5ZBljDG/cXmNhrse5M6exYos0TbizucfG1jadNOGR1WUm04zS7HDlsYuMc8P122vYMkfhwGwkJVIatK5AKpLYrwIOTj8BSZygy9KPBeio8KWGRpgJhzdfYWyPx9YIGTEejZmfH5LE24yynDRJKQp370bjMf3uACFcJGqsBLk3MUoBmbb0Yo+irXtTqCd51r22rQymx/DFiEeJlrqGqqI5IQyTxOPvWwGEIyQK653XzVhWdzwQKOvjoX07jQfIwscW4wGkpvE+1qpVq1atWrVq1arVCToJ0DThFxzBswAL4QiaNWs4BlhUp76IIxdhM8IznKcJoUL9QWNMDQxDfGkz4jTEiIa5bzNaNER/BrgYzpEkSV2rcNY5GK6/6bKbhZuzQDJcV4B+zT4I+4frmK2fGY4RHIrGmLquZei30D8BBAbn4uzz4dwhunbWHdeExeE6Zn8HuJam6bHjNtvQvP8BRDbPEaDxLLwN7QvnCvflJIA8C0GbEDDsE8bfrOPxQbU3Z2F3aM9JLsomPDwpzrWpk5yTs5rto+b1zbbvpH2bP7PQc/bamlGz70Yn9cv3otl712zb7KKEd3P82YUJzeO9034Po9n7/27O1apVq1atWrX6wdLDx64K6RyH1vpqbAJrwwduUcdMmpCt6pjLETEgxHMKF6EplXOsCQ+IkA5cChe3aq0mEIJQR1AqCcj6eMGJqZRfhSkkcXBH4eo/lpVGSEWljXd9SVQUUeQFVVkiheSpi6e5ePozvPLGZd669h22N++jpKTb7ZFNx5w6exEZp2hjMKYCK9BGgxBY49qllKzrPMZp4ldrKl/0HueuTFJ2drbZWL/Lzt6ui29VMQjJoNdjca6LQtONwOQTRrs7lJMxerzLaG6e3qBP2usSxx2kkBT5lMODffYP9smyDBkn9OfmOX/xUc5fisimOZsb69wY7TA53MdWBYN+ChytusVaB1OspTTu77LS7OwdcLXfo9
tJGVvN0uIct27fJUpT+t2Y+UGXs2eWWb+77sZFcJnhXH1CWCIBIGsgJJWboPZ7KYNehygKRectUaTqun1JpHDwTmJt5e+xcgxJCB+7Sw2Bg6M1CitTAWsNyloPkhTGVh74Of+cH3YONhmL8dmpwhpMZWuKpZSH7R6SGT8GpFCkcYyJJJEQxHFEL00ZjadMpjllWTlgGmoXWsfuK6hjhkEihWUuVSx1ItbHFdp7Q/O85Ntv3ubjzz7O2dUlxpMJ1hpu3t/h+u17RFJxZnWJqqrY2tzk6Wee5tL5Vb709W+R60UqU5H2Fepwj26/x9kzq/SSGOt8oD4q1hFIbSxxJJyXzwqMY/8QXKy1XxWsFFg0k8N9VlZX6cS3GGeC+flFNjbvU5QVUadDVRSu7qNfcCCFg73aOgCocJNL44Fu7XRuYr36de4ntN69KH0cqq/G6I5BCHF27laBc7+KRiyrqyfpJ2AeeApASoU1+J4RPv4ZlHEOW2NdTC2+3wQgH3JC1qpVq1atWrVq1eoHWyH1pgnQmq67AHjC3wGiBdhUliW9nlv4mWXZsZqEYd8Qpzoej485EpuQr9/vH3OzNR1xATyVZcl0OqXf79Pr9erzNOswNmM4AwRpOhVDXGvTqRjcleEYTdDWvN5ZwBaA56zTc/bvsG04Lrj5YpIkRFFElmVorUmSpG5vs43NmFc4DqWAE6Fu+DtcY4i1DbAz9HETLDcjXpvHbAK7Jjiddbc2+3q2HSfFsTb7vNm/s3Cu2WfBQRvciUmS1MdtgtZQh7L5+Oxxm2qCr2ZU7Gxfz2oW+jVBWOjD5s/sPZxtQ/g9C0Wbatb+fC/grXkN77Rtcyy/0zln4fe7BXpN8Pi97PcgfS/7tw7SVq1atWrV6odT76nmI1YSKjU6J1AjpgQHGBDKOxWPgEVAC0IIB15ql1KAjwE9eCKEA1IOBxzVl6xrvgHg3WhKIoXy7rQQvykdKLKWSPn6DlZQlCWlNhSFi1QVUhHFKSqOUbLkE889wweeeYrvvPkWr778Mtvb6+xtbRAJydzCAt3+gFGWgS6YWz6FlIo4itjf2SVJO6g4Jokj8mJKfnhAd7jA9GCHzc0N3nzrTSK0q7Vora8lmGCs4czykLmOQlgXW4kQTIuKYncXg2Vja4s4ioikdcBWSoR19SjhaKIVJzHZwQ5V5Va4jqc5+4djNnYOOTicYI0DsMuLcyBSDg7HFKXGYlFC0E0U1ioqD3YG/Q6DuT47SlDmObdv32cwN4BTSySx4tTykO2tbXRR1eDQm9WIpEJFysepSuI4Ikkiep2ENHbjRgqBRdXwMLgLq1IjBGhT1m4z44GVNhopBZEQ1AsDhXO3YisMwjtz/Yd4a9H4aB0RDLfOVie8E05IhfLuSGvdhA0hUFJhjBtnfheUEK6GZN0WUJEiFoJYKZRyIHKaZYwmrr6ntQaMr3spLda4MR+rEAGkWJ3rMtVT9rIKgyWSkvFkyqvX7/Dc448y3+8y6SZsdyJ2RznXb68hpeTsmVW0heu31njq6hU++cEn2dw74N7mDueGGb29OXrdLo+eWcQWBaWuQAiM0WAFeZ4xGmWsDjvOlekBrxQC413Gxr+KA6yzFiajA6ILF+gmEdZqZJQgjCWJI+d6tRV5UZDpCqwllpKy0uSlpogEeQmRlCiFvw+e6llXh1TgYK2xzk2LdVG9waHpSLR3J1rroKZxj0fK1m0Fd8/cegjhanp6VKmtA9zCrYXw70Py6H3Lu0Ot8e959mjBw9E651atWrVq1apVq1atjhRqATYh1GwNwwB/ZuGe1pr19XXiOOb8+fN0Oh12d3cBGA6HdDqdY7Cu3++jtaYoCsqyJEmSGkoFdx9Qx4I2oWOIYZ2bm0MpVYPOOI5rF2YTcDVrHgYHZVEUAMeuaxYyZVlWtyWO47pPArxqRnLOgrOTnHDNGo0BmoW+DnGtwUUYwFkAwk0IFuBVM4Y29FUTBM+CwbBd6KPmtQaYF5ylTdjXjMkN0LIJFJuxrLPO2Gb/NJ2cTcdsaEPTPReuI9y/2e3CGJhtZ9Oh2HQ0NutpNs8RjnuSHvR8aFOzT2eB6axbdNZZeFKUatDbtef7BePeTm8HFGfB4682oHtYwAotTGzVqlWrVq1avb0eGj5CgDbgwF/zQ4etnzfaIK2fNGBq0IiveSg8CAhw0OoKb0NyzxmLscbDLOd2k74WoDb+eMLBqOC3NFbXdd2EEGCqo5VhQFUW7sOtkKSdCENE7CchcRRjhYLEwa2FGD769FXed/Ux9g7HvPjt11i7/jrXb91EmpLRNCeSglO7O1gpSZKYu3fvc2plCRXHVGXJ4XjCzuYm3V6HsigpqoqiKjFaO2AZR8SJYqkTo4SlHxms9tccRYAlUhFSCRSuBmJWlAg88AIiJUF6t5qomExzB1SM/9GmBsAL812kUuwfjJgWFXv7Ewb9FKMdfHP3UhDFkkhJdGVZmks5vzLHrVt9ytJw7cZdDiY5lbUMhn2U6tJJYpaXFhiPxiipEFJ4N2uIoHV1HJNIEccKKSSRdC5JYwxCgrQSbX1UDm58GWOcE83n9NYfjYXAGldLM8AhNxYj95yHqGEfi/DXdzRJUUKgPejUPgrUhA/+RqKUc+FK6SYzUkUID78iKUCBMT5iVhgkCqREWlcLNEkipJIkSQQIRqMpVWiPdw57EkapjYulFS6a9OxcSmUtk9Kg/KRvY2ef12/f5/FHzzI3N+BclpOXhqzSXLt5m26/y6nlZaSQTPKSIisZpAlXz61irKXUZxiPJ5RZhjYaa0BJN0aEsEymGbd3RiglWezFKP9yNcY68Col1roFA26O4vpyPJkgophut4MSkko7QFdUmkgqsrKi303oRB3yvGQ8DePX1ve/wvjXuHMkupvkYlBl/Tq3/n77mGC/mkH47aUHikJIjHDxrNr6FbD+PSK4IoNr2022fLRugMp13K67v8E/jcDXInV/SwLIbNWqVatWrVq1atXqu9V0PDZ/NyNBg5rwITy+sLDA4eEht2/fZmFhgTiO2d/fZ3Nzk+FwyKlTp6iqim9961tcvnyZNE1J0/SYsy7EpIbjSykpCjcnjuO4hqMBkOV5fgyaBigVHIILCwsURUFwXTZhY5ZldXxpAGgBKAYQFyBeMxI1iqLaZRfaO+sODf0WjgmQJMmx/pqN0QxQaxZazsLDWQfeSe64oii+63zhudm/Z4/TdNIFUNYEuLM/ZVnWfdrsx9Afs1AwuD6b46z5/Kwbdha+BjAb2tY8bthuFiTPOjdn+6KppovzpO0e5FIM46YJFmddkM37NnvvZ8/xoDa8F5fjO+mdgOKvVjveTg9zzhY8tmrVqlWrVq3eSQ8NH6X/At5Yi8A44ENzxSZUukRYF68YYgqPPuiB0aFwm3eBaReDKb21zDnn3N/GGDDug6fQugaXQkiM0d5JqUAIqrJEm4pYCWIVIVSEtmCNq8kY3FxYi9GWOE2I4gijIwQuWlFYH0VqDKaqiAWszvf57Cc/wuGH38e1t27yL37li4w3ryGFqOs/CgRlpRmP9p0jTls0lizPy
cvCtdc72Spt6MaKYT8mjlwNOiEk1sNapELhnHRR7CCkEqKGiVJJMA7QVtZiS3MEzsKqRhofYK0Dl4mKWFmISZKIvf1DpmWJPtS1Gwwg7cQsLw3pJC5K5cqj5xjEAlMVlNpweDDmcFqQVxULo4y5QZ/SA6ZYOlgaRQ7uecTj4KL0NUGsQKkAnty5jXFRuxbhnIHKbWdsAD4OASkpUFHkIjelg0NhHEklkIGJ4wE3FuEZn3EMyjnjtK2htpUOQBqDc4RKH7fp+0+iiIREWONqQwqLtopISYT1bkgVgXFwM1IKjRuLwcUZSdz1CQJ5rONpA3wXYaWptXRjyelezNq4ojLelSksN+9tkCYxF88ss7ykqazg1uYeWaXZ2Fhnd2eP973vfSRJQolgWpZYY7AWtDYoGfmYJvc6rrTGConJMvZHEyoEt3fHJPE8i70Yiaux6dydrv5haLY2BmPBTDOQEWmSkMQxk7IkTmImRYbWFWVZUShJIgTdNCaJJDsHGuFX+MaJG+v1egZA2BAHS71sQQrp3KJWAqYGkdaDRVOTQD9OPNwWHE14XY1Gfzx/f/HAXTrUTMhTNQEUe8CsA+l2A8PfumZ90VatWrVq1apVq1atjtSscRi+4A/ALoqiGug14WD4iaKIU6dO0el0WFtb48aNG5w7d465uTm63S6j0Yj19XWWl5dZWVnh9u3bpGnKmTNnWFlZIdR4DJBRa81kMmEwGGCtpdvt1m7BwWBAHMccHh7WdRQDhAzwyVrLzs4OcRz7ciKCoijQWtfRr02HYgBowY0Z4OYsSAogMfRJgJ4nAaRZNaM/w3nD7wBYm5Ck+XzzvsxCt9ltgDryNADBJihu1kwMOgksNtsR6mM2jx0AX4iIraqqPlcYT+F3+AmuzGZ9zHCOJpRrPh6usalZCBaen4V1zXHaPP7s/ifpQdDxpPvcdAM23cGz+zbdqM19Z2tWNvebBae/1s7H5nYt1PvhkxDiBoC19tKv8nk/DXwe+HPW2p//1Tx3q1atWrX64dBDw0fHsszRl/cu49B/iHPuI+uhhbDOUhaghZLKQ0OLNpWHbSGaJKq/zK/rtCGwnjIJR2AcuMRFqwbABhatK7SpXCyrVIgoqutOHkEG1wakC1w0pkJKhdEarasQJIuSrkZdiWursA52DNKYJx69gBI/xj8vS+5dvwZGe2eUA3y6dNGixjoIIq0GY1BKkkhJJ4nozXWIY9kIdjz6wCykc9sJ6WIi6zoGPhoUKer+t8L1iZQC7SEW1kE4axorA/HgSIISsDBIwRj29kfuPnmipJB0k5g0UihhQQrSWHE4Kjg8HCGVu7+VsehCs70/4ty5c5gqoyg1k6yk35XEKB9ria+vRw1prHVtE8IBJRtq7yGOnrDO9Ya1VNo7bXHXJ70bTiBqJ6NUrs8iJWunY22LbJ5YuD6nHjcGHUqK2oYbV7r4zzBeXSyn60lr3X0JAFxKf18iwBqMhihSSAF5URByWl2FQjcOEM5x6I4XJtcGVyPTjdX5bkIlFBvjgkr7yGHgrTv3iZXk9MoC55Qiy3Pu7BxSGbh3Z42D8YRemjLspWD9QgHvIjahbzBU/ksOIQUHuzvkxsHeorLc2RkTqwFzifQjw9VwdeNN+teze1Hnpbt3SriYWakN83NzjMdjSm0wBqZZjrQxg25MmsSk/ssHASihUNIcuQiFHzPWvxb8vbPC1vesfq34wW3wX16Y8LiLg1a+lqTnmC64WQgfmSr8e4P2E1Y3VtzzxsFv4/pJazdOKt+mOHLwUgb7ZatWrVq1atWqVatWM2rWLIQjqHVSxGIAKFrrGvbdvXuXg4MDhBDMz8/XTsU4jtnd3WV/f59er8fp06cZDodcv36dV199lYsXL7K6ugpQx6GORiPW1tYYDofs7e2xtLREt9tlcXGRnZ2dug7i8vIyWmuyLCOOY7rdLoeHhyiluH37Nvfu3ePjH/94HR8bIGpVVSRJcsyV13TchWsNkC6AtrB/lmVIKel0Osf6rgn1mlGs4bjNvoUjEBbgZ3P/sG0zEjbs03y86fQMECvcmxDd2nQANqNTZx2VzfM+COSFxwIUCy7QsH0YE0opiqI4dj2hb086VhOuCiFqJ2Ez4rRZi7J5r8I1zbZ51mUa1Dz3SU7GBz0ejhUg9Elu1PBcUycB1ea5ZuHobH83t/+1dj423w9+ozgf4YfX/SiEuARcB/5f1to//GvbmlatWrVq1erXr94DfLTe9WO8rw3AOGYknKvRaO1cZ9hjkFKH+o32qJYjOAAnAyC0xtWhA5RyqzSdIVL6um6GJElxwLOqHX9SRiiFA1bGYsrKtUNAFMXIKEYQ4mVSjLVUWlPkOQiodOWiPD34kEIRSYlUvi6eb4NSPd735BXm5+Z46Vvf5Nab1zjY2UJXbnWlBISzXRFHkMQ9OnFMFAki6XCjq0hpfJSk/9AmJQrquocCQaUtQmukdL2NAOn7IAAUi/EOO+lqJ1rjgJlw/SB8jcFAdrR1fdTvpUyygsPR2LfBxYlaD4qtde5BaxXru2NGk4xeJyFanOPgcEqcJuyOcirrYFtWlKg4IS81SQRxpMC6a8VP1kSYOPq7Lj0kklI6l6x3vAoPJI8iMh3YllK5OF6sg5seOrrrF/U1yrpvjuCzqmuKWg8o3TiWUtQ1BQXu386O6ydQRh8DokIqyqIi95O1UEMSf04lHSATCJIkxiLplCkDbX1dTUGkBJU2VKWmqjSm8Xm/roQqYLkbURnYHGVojw3LquTarXsoKVleHDLoD5ifFlSli0fa3h/xD//ZP+fKxQtcvniBJFG+xqUD+NY7OK0BpCHf3+HN6zc5d2oJvbHNaJIzKSo2DgvihdQxVUf5j+J9fTuVlGQ4QBv5Op9aV/4LBzcGHC+s2B9n5GVFL4noJxHjygF1hPW1FN05BAKFOIqm5chwGFyjdQ6q/yV9jyv/jlQZi7ESIZqx0AJfqcbVj5S2jnQFB5K1LyjqESeuMqpyDk9h6vGDBSEUUlj/PtaqVatWrVq1atWq1XHleV5DuGb9QKCOM23W/gswKzgHJ5MJcRyzurpKr9cD4ObNm2RZxvz8PCsrK3S7XXZ3d9nb20NrTb/fr111ASiG6E5jDEmSMJlM6Ha75Hlex7Dmec50OsUYQ6fT4dq1a2RZxuLiIqPRiGeeeYZ+v09RFNy7d4+9vT0GgwGnT5/m8PCQvb09hsMhZ86cIYoiiqI4VlMxAKGma68JKIPLL7j4Qv+EqNMmVAqgpukKhCO4FM7XrI8I3w1ZmhAu9HsTIs46EgOQrKrqWM3KJvicdeQ1o2Nnj2Mb88kmaGwCxuYxQ1vCduHcoe5mlmU1rAsAtnndzWtpOmzjOD72OFDfw9nY1aYrdBasvRvQFq5p9vHZ+9j8acLe2T6edTw2/551n57UnpMWAvxq69dLO1q1atWqVatWrb6femj4qKsSIQXa5ZlijfVRrMZDFBchqvwHVamc88tYi7C2hoXKR6yGD+RFkZPEMVZILC6yFSqElFjjYJsxDp6UZY4nAMgAlpSLTAwwwGiDEIooUjWc
Ci6wsshdrItSGBOBMSRxijFuAuOaFrlYyvAh1lqsr2PZjQRPXDrLoxdOsbXzETY2Nli/f5+9rXUmB7uYKvNuTd8fjk5hTQAXFqzHOJ5/OGenbEyArI+hdaBMeOBiPIqRwtWdE1I6ECf9xE47vCakIJKRg3Vl5SddzjEqhEBGluWFAVlekJcVWOdozIuKShuMgtgodNRjb5Ixmk5dDK0SrK7MY7Tl3Jl5rIxAGKqqYH9/yqmlocNA1oNbHSJgHdOTEh+JKmrQqL2zVSrnQKt0gI1upzCuZKByAj8BCmDbTz4AYUXtqBTWP+bdjw5yhUmJ93saW8eQgodJAqy2lGXlo1z95BEHfyMJyt84pSK0qbwT00e5aPcaEALSVKLUgCRNyYvKRxELIhVhjGZv74DJJKsBsUEjrXTOV+BUT5EXEftlVY+JvKy4dnONp5Tk3LkzWBWTV4Y47TPOcpSKuLc7ouA+Z5YXUDqj20mIIgfStNbosmB/d5trb95ka5QxNYYrZ1d4694Wo0nBpNQMhkP0ZETuv7Cw1vjlBM7tWWiJSTrOEYvx8ND1cBS5eOEYyWFW0h/0WO4ljKY5pcUvTnCvU2sNRggi5cZ1eK27SZh3xAKB01sCaK6ZL94E7IeHGwPuFvn3CY9NLS6y1zlzhR9DCqX8ePCT0wiBMhYdGeLYOUfda9P1Q2iHOJ5Y1KpVq1atWrVq1aoVQF3LUEp5rI5fgGJaa5IkqWFR04EnpeT06dN1HUchBFmWcfr0aaIootPpkKYpeZ5TFAVpmnL69Gn6/X4N0MLxOp1ODZ56vR7z8/OcP3+e9fV1xuMxe3t7CCE4ODig0+nwzDPPMBgM2N/f5/z580gp6ff7HBwcsLOzwze+8Q0WFxcZDAasra2xvb2NMYbr169z//59rly5wrVr19jd3WU4HFIUBcPhkCtXrrC2tsbe3h79fp/hcMjFixePgbkA00L9yG63i9aa6XRKp9M5BieDO288HjMYDJhMJseAVOjzTqdDnuckSUKWZSRJQlVVZFlGv99HSllH1IbalM0o2XCf9vf3sdYtZg5ttNbWx43jmCRJmE6nDIdDrLXkeV5vH44TrqHpMmwCzTAGAoTudDp1RG7YNsDJKIrI87wGqaH/xuNxPY5CXwUoPZlM6HQ6NWAM+zVdn3Ec10BYKVWP2U6nU7cr9HG/32c8HhPHMWVZIqWsoXGo8RnubRMga63rGN9wvKDw/EkOy1l3ZFAT0s7G+zYhZLNm50lq1madhb+z52qe43vVbPTtu9HDQN930uz1zTpqH6QH9eODIn1/LVydrVq1atWqVatfOz00fHQ1F32NPuE8R5XRWKORUrlISiVRQmLFUUF4UWcqOgjnOJzzoOFjW4siRygFwrmoHAR0vjes9V4kPGB0rimDwJjSwbcaQCgPIISPjDSARCrn0rL4SEzrQFMnTTEyoiwKrNV1fUWlJNa4WnFY58wzxsEHaTW9WHHh1DLnVpfRzzxJVWkOx2MODkYcjkZMRyNGh3tk40OK6Ygqz7E+HtZUpQNyxnrwiHeUhi7xENUCKAzeuRagi7GoWBJFMUJApQuEAKVwwEYo55QzFVIKpHSTGaUisBopXT2J5WqRvYNRCJylEjFRZ44kTYiURCYdsvGEosjppl3OnHuUS50uQkakcUSqLHv7+8wtV3RXYkR+SKEnVNZNWtJO302qffws1mCt9r89zPEOWmGC29F1gtE+0hQXrRpFITbHIJE1pIojCT6WNMTfCjhyvgkPliIHnIzB1Vj0TkAZCV9n0NZjy3qI7gmli171MtZitEFKSxRHxEohhUQbTVFWtZM3iqQfYhKbxkRxRKXdfVSRwmpJtDzkIInY3Z9ija5fF8IIf52C1UGEmcAorxzsF5Zp4QHkY4orj12iOzdPnHS5ub7Lvfv3sLoCoRgVlk7SZ2vtDh2mdOIYrKEsMkzUJbcCKyTru2N6UcTjF05x4942e+Mp1+9uc34+pawsSuEgf3jdCsVhAQtzQyIpfDSse0mlvR6VNq7eY+le051IIoRkca5Hnhdo4+qfGlyEcHAhhgUCxkoXWywsxttxpc9vDrVEEW5hQl5WVAY/eVfEsV+xDDXAl/hasm5g+L8duNbWxRG7lcfKAVTvdJVa1+PE1Sj18a7WYq32bWv1XiWE+H3A/wZ4DkiAN4BfAP4ja23e2O6G//MDwM8D/wpwHviL1tqfF0KcA/5nwE8CV4AlYAv4AvAXrLXfnjnvJXxsjj/eXwZ+KzAAXgZ+3lr735/Q3nngzwG/F1gBbgB/HfhvgTc5IYZHCNED/gTwrwGP497pvwX8NWvt/+fd9VSrVq1atWrV6jeKmhGVTRgIHAMrAUoGABd+mjAofHHf6XRqmBXcksvLy1RVRa/Xq+sFNh2AANPplMPDQzY2Ntjf3+fg4ID9/X2SJGFxcZEzZ87wxhtvEMcxvV6PbrfL6dOnuXDhQu3gtNaysbHBYDDgwoULnDt3jueff57t7W2effZZoijilVdeqWNgb9++zdmzZ2tgev/+fa5fv16Dz42NDS5evHjMdZfnOXmec+PGDcqypNvt1s+fPXuWfr9fA92iKOj3+9y/f7+GaAGerq+vc3h4iJSS5eVlwLkr0zTFGMNoNKohVbfbrYFJHMdMJhOAY+AuuO8CXAsOyCiK6ljUEEUbYlNDFG2n02E6nTIajVhaWqphXrg/AfBZazk8PKTX6zGdTpmbm6vBdWjPeDzm9OnTdTvDOAqu1wCjoyii3+9jrWU0GqGUcik529sMh8MawHa7XcbjMUVRsLCwUI+X5nUEwBquOThom47WXq9HURQ16J5MJiRJUl9XaGNwegKkaVqfa1bvxg14EoQLr7fma6v53LvVSfGss499P1yKD3uMk/rnna5xNt71nc49C3G/F/2gOjiFED8P/Fn/zz8khPhDjaf/iLX2v/Tb/SRu3vdxYA64A/w93Jx173s43+8H/ufAh4AObt76t4D/sDlH9tta4J8DPwf8Fdx8eA74NvB/sdb+wtuc54PAXwR+FDcX/xrwp621Xzph23ngf4+bhz8KTIGv+jb905ltP42vK4mbJ7/bc0T+uv+nwDO4769fA/4L4P9qrX37FQStWrVq1erXhR4aPkoVIaxBJe4QZeWKoYVITaFCxIiF2vXmnY8eCGqCA016RxFUZYUUFmU0UZSgfM1GYatgDSRWkYMCWqOkczs6Q1+IcXUSwiKlz08UDoQqFWF0SbAaikZh8rKsiBNBEsdUpdtfKYkkAmtQRh9FpgBSupWQBos0Dk4IK+l0IgadlHMry+4Dr4rQRjsQU2mmWUGR52RZxmQypiwLiiyjLEqqsqAscsoip6pKH+XpAIgxul4Rq6uqdvy5OocRYLwrzl2zkgoVxe4a3WGIopgkTlGRQghLJCOiJMLIiMNRjlIxKnbXtTg/oBNH7jzGcH/7LXSpufjUEzz3wQ+DklhtsEajy5xTZckjl68yPjhkf38Xaw1KuZjbQb+DkhJrDdpPyMo8oyozjDb+mgtMVYDVCFyMKx4Guut0EbhCWIQINVFcLVA
lgwkuwEIPqWwAty48U/hY0LqeqLC181FY66JyvXsxRAJbYzACYuX2rQzoqkLjaxQay2iSueelqMGUlNT1B61xdR2VFGArX7PTR9X4la7D+QFJ2mF0OGE0nnh4WiM2upFktRNhLBzmJd77x3ia89qNOwwWT3H5kUsMBgs88WyPjb0RL7/8Etn4ECUFcZpw5rGnGe1usrd+g9RkSAGJgMfOr/La7XWmWcmt7QM6keDKmWVubO5yc3Of3cOEfiJIlGB5rkfkCC+FFhyYmMvLK0j/GgKBVIpYKZQ1lFYTJV10kbE/miCMYdhLiZS7BwHwOjgualAslSCWMWkaUxQFQkU+Ylf5qCFZQ+i8qKCoSDy4V9K5rtMk9bG37vUjparfH9xkSxIp6QGmRlcVBoOSyjk8tfs8q6V2rz+jERasMSCMd1Jb+B4nsq2+W0KIvwT8aRwk/AVgBPwU8JeAnxRCfNZaWzR2SYDP4cDiPwYOcBMxgN+Mmwx9Hvi7/liP4yDh7xJC/Ki19qUTmvEobtL0FvA3/bH/NeAXhRC/1Vr7+UZ7O/78HwZewE0A54F/D/jUA65xwe/zIeAbwN/AJQb/JPALQohnrbV/5l10V6tWrVq1atXqN4iCE6/p6mvWEZyt3dd0Wp0Ur9msRdiMU+33+/XfQA3CAjALgGowGNQuxgAHtra2AFdncDQaUZYlN2/eZHt7m6qquHfvXg0olVKcOnWqdgEuLi7WrsEANK9cucJwOGQ4HLK9vc373/9+BoMBBwcHrK2t1fGxq6urdVxoURTEcVw75pRStVvx8PCwdhQGQBtFUQ36NjY2uHbtGm+99RanT59mNBqxvb3NysoK3/jGN2pn5PLyMk8//TRLS0sIIeh0OjWEDCDv+vXr9Ho9rLX0+31WV1dJ05TXX38day2rq6ssLCyws7ODEILFxUXW19drkNPr9VBKkWUZBwcHLCwsoJSq4d/CwgJSSrIsQylVg9AQl1oUBTs7O/zjf/yPWV9f57Of/SyXL18+9twv/uIv8tnPfpZLly5x9+5dXnrpJZ566imuXr0KUI8xKSV7e3s17CuKoq5PmSRJ7XIty5K5ubnaERnGSThncCUGN2S/3yfLMm7fvs1kMuGJJ56owW9wl4bx3owWjqKoHivhPo/H4/r4cRwfc9o1fz8IZL0ddAuvpfAdTnO7d3L0BTVBHRx3+s229UGQ7p2A4IOe/5cB72bB49uB2XcDNN+ujT/ALscvAAs4sPgSDqgFvQgghPizuEWtO8B/D2zgFs7+b4HfLoT4hLX24J1OJIT4G8AfwYHLvwvsAT8C/Hngtwghfpu1tprZbRH4kt/2/+nb+vuAvyWEOG+t/Q9PONVHgf8d8GXg/wFcBH4W+GdCiA9aa19rtGkB+BUcEPwa8FdxC3F/H/CPhRD/S2vt//09niMG/gFunvwa7vuBDPgJ4D8FfhPwBx/Yca1atWrV6teNHr7mo/YfJnTlwKGvl4b/EGO0c7cJQCrlnVwBEBlXDU8qYuFq9imlEDLGHbL04MoVnremcohSuH1qN1vkagNaD2lUFLmYRetiXV0sp7tEYypiFXtXlo9lFYI4SrChhp2ASmuErZzbEcBadFkhpUVFMbosqIxBhShHtKvdKAXWuPjGKHbAVPvIVaErBJAoSS+JWRwM3GTT13+oXZ/eSVfpEG9pa6docHBZo6m0pvQrGZsfHIMjywG4AB8ThAiwRHigZ8HX0wzRpdYYSm0oS12DYCUFaRxR6YrNzX1yKykrzcryEktLC77unUVb0D72ptKacnmZqjjrJ9CurmC4f9a4bbAuztRoTVVOKbIp+zubHOxuI4yLi1VSEcURQkZYa8BohDWUWuNct6F2prsO55w0SAxSKAQOBgprvHdQuMhbITDWOLDp+8UKi9aAqbAIqko7niQscRQhLBSlQckQKOqcjMKft6w0WvvoYQRSujqbVeUA59FrxGFQqVx9RGONq+kpHaiLpSKW7viTLHdfINhQ/9PSTQSnhKLShlybOip2mhW8/NrrXLr6NFefukikFJcuX+EDH3g/z3/ta9y68SamqrAYhiunSftzjLbuUR5sYkvD0qDH449d5tpb18mLgrc2DjDWcunUEp3eHPc2NjksDJEQ7E32ObfQp5t22BiDGg5ZWVrA6oqscI5eawx5kfuIZIi803AwcJFJ93cOGfZSBG6cOYCuQEmUcDHNUkVIGaNURK/bd4sRfHxUHMdIpVDKfeFSVhVlpbHWAW58f8kaarpxon28qw4TUP++ZJ2V2Y1pbVzNWhsqPgq/sKHAauvf4yzSO3NdbPAR1Gz1vUsI8QkceLwNfNxae98//qeBvw/8NG6i9pcau53FreD8cWvteOaQnwNOW2sPZ87zHG6i9JdxYHNWn8a5HP9cY59fAP4H4N/BwcygfwcHHv9r4Oesn10LIf4iDiyepL+KA4//rrX2/9Q4Rwc3af0/CCH+jrX2xQfsH7Z//gFPPWWM4Qtf+MLb7d5qRoeHbpi0/fbu1fbZw6ntt+9d76XPwr6tWv1aS2tdu8UCPAxRnaHOXjPqMwDEpnMrKACO4DYzxhxz2iVJQlEUNXhqQkxjDMPhsI64NMaQpilLS0tYa+taj5cuXaKqqjqWczqdcvPmTebn58nznHv37tHpdHj88cd54YUXGI1GDIdDVldXkVJy4cIFtNbkec7e3h6j0Yhbt26xsLDAwsJCfX5rLVtbW7XzremAC9fW7JMAYpvus+C2k1Jy9epVjDF8+MMf5o033mBjY4MrV67UsPXZZ5/l/v37vPDCC3ziE59gOBweqykppeT69eu8/vrrDAYDXnrpJZ599lk+9alP8c/+2T/j4OCgdhE+/fTT5HnOq6++ytLSUt3G97///aRpytraWu0ovXLlCmfOnOHGjRtsb29jrWUwGHD16lWGw2F9L8M8R0rJ4uIiH//4x/mlX/olyrIkitx3GkopJpMJly9fZjAYUFUV0+mUl156ibm5OS5dusR4PK7veYCcwQEJ1OMgxOO+733voyzLun+73S7GGPb29uj1esRxXEe6hrGYZRlFUfDtb3+be/fusbKywvLyMlJK0jSlKNx6weAIzbKsri/arLGZZRmvvPIKy8vLPP7448fcj7Pg8e0gV/hO5J2iQt8uMvVBmj3u24HCt3Nnvt3xv9+Q8d1Av3dyPr7XNv3LuK5fL7LWfkG4JJ4/Abxorf355vNCiJ/AgccvA7+96XIUQvxhHBD8c8Cfervz+G3/CG4+/AestdPGcz+Pc1/+r4H/ZGbXDwD/DfA/Du5AIcRfBp4H/qIQ4u9aa9+a2ed30HBt+n3+F8D/zV/n/6qx7V/Bgce/Dvzxxjz4rwBfB/6aEOKXrLU33sM5/j0cePzPgD9prdV+e+XP+0f9vPkXZ/utqbebN7/dfq1a/XrVr/U8sp3PHtcPSn/8y543PzR8rHThok/BxYXWEZXKOxSFRzQOBLq6j5FzexlXtw5rjk+mPIhTyjne3NHxMYwGq11MJ9J/YDXGnQrq+oFSKISKEP45FYVVf6qus6eiyENR6SMcj+
q3SRwsRUpMVYF17kdjNbbKkUAaxzhqYbzhyUU/KumgkjHemSgd/HL5nlCUJdZGxMpihXLH93DLwRKwsUJr5yiUdaF71zY3MZEgYuh1PPwTdVuM1VikS2oVjYgMIZAyQfn2WANVmbl75sGtEYI0jl3UZWm8e9I70ITCWENRapRU9Ls9upEEXWIxRJ2ui6v1NSW11pQeREKjT3SFqap6vJRaY6uS0Z5hf/MuB7tbZHmBwsWRxnFCt9tHKUFR5JSlpaoMFuXHl5tQq9hSlZrKGIRybke36NBF+7q6l9LXevQTBiFd4UYDlTGYUjtAKQTaugmvFAJjXN1AJS2ZhrzUHhVbrHXRwlIIFw8qXN8La139TF+j01pLZWxdC1JY91qpwJ8PX7PU9VW310VGKXNlwd7+mMkkd447XCxsV0ac7ks2s5JpXjnXbyQpteVXvvxlhotL/Nhv/gmkkCzHikd+5ndy7Y23ePnFF1hbu0VVlqgkobN4hknUpcxGxMLyvqcfR6qYG2t3yadT7uyMEViefuoZHrvyON+59iZpb448G3NnOiHVCtIu585e4NTCPPl4l0leorV7nQmpHNiXEl1VDtpLxVw3pUwrsrxACklHKSLXUe71J0D5hQ0+LRVpJFQWo10dT1PGqDgm8jFIBgerQ21HrHFxvUZTlu69wGoXDS2EwFRl/UWN9u8VYFFSuNePNSD9pAnlYLL29WrD+xJ+EopF8s4TvFZvqz/qf/+FAB4BrLWVEOLfBn47Lkb1L83s92+fAB6x1m6cdBJr7UtCiM8BnxVCxNbacmaTm8BfmNnnl4QQt3BxOU39Idx/pP60bczwrbW3hRB/dfY4Qohl4H8CfL0JHv0+mRDi38VNsH4Ov2K2VatWrVq1avUbXyEaNQCZANaAGlqFeVuI+IQjmNGEiKFeXhPIBGgUjheOH/4OdQtD3b3gLkvTFCllHes5HA6P1ekLUExrTVEUSCnp9XpcvHiRTqdDt9vl0UcfpdvtMhwO6XQ67O7uMplMWFxcxBjDeDym1+tx8+ZNDg4OGA6HtUMynD9JknpRbWgzOBfmyspKHSEb6jIGiDbbd1VV0el0OHPmDPfu3WN7e7tu5/z8PB/84Ad55ZVXWFtbq8+7s7NTQ1MhBOfOnWNxcZG33nqLD37wg7zvfe9je3ub9fV1zp07x/LyMi+99BKdTocPfehDXLt2jbt37/LhD3+4Brxf//rXieOYy5cvc+fOHX7lV36FD3zgAzz//PMsLS3R6/W4du0aCwsLDAaDY9GrRVHUrsOVlZXaFRpFEZPJhBs3bvDVr361jkztdrs89thjXL58mSzL+OIXv8idO3d49tlnuXTpEi+//DJ37txhbm6O8+fP8+yzz9b36Ytf/GJ97XEcs7Ozw82bN8myjLm5OYbDIY899hgHBwe8/PLLZFnGuXPneP/73w/A9evXqaqKp59+mpWVFYqi4PXXX2c8Htd1NsNz4/GYr3/962xtbdHr9bh69SqPPfYYWmvu3bvHyy+/zJkzZ+j1eie6Ht+tmmB61lH8XvR27sAmZHvY83w/nY/vpg1N9+dJ1zZ73odt3w+w8/Gd9G/63/+6nYlXtdb+l0KIPwH8Ad4BPuKAXAX80SZ49PrzuHIlf4Dvho8at9i1tulaa68LIf4aDlj+QRz8bOpXmlDQ62/g4F89DxZCJLg57Yjvnge/7s/xZ3BRqf/BQ55DAv8GcB/4UwE8+nNo//3AH/HX/rbwsVWrVq1a/drroeGjc4/5/84I6aMSQxSmA2JHjiLnTrTGxa86GAaIyO0rBUpFaO9Eci4/ReUdkG4Xl6vpECMIoTBS1PCw9lX6qNQo8u4pITFRmIhp71BysZpKKSqjMZXGmMq56aRwUaLVUSyH8M7DqrLekengp6nb4voAa328LMRxAli00XUNPCkdfDVIF6cqpYeEwm8QojokhkZhdWevI1LK97mtXaDOuaVd7UctkZGbkFaVqwtpPQCTQnhQK0AYhFIY7dtmLAKDlNBJ4iN3oHHOOiEFpRHoyqC1Ie31KBEgEw+GBQoXh4nVGOFWOSZp6mtRuqDTcD+d8bLkcHeL+2tvce/2TfYO9h08RLiKntZSVBXWx9PIOCWdGzJIu6jIxfFYXSGxVJWLsdVFgdYV1IG+Dj5JY5370gPFUKuvrCq0dijJWoiU9TUb3SrbyriJxLQo0ZUB6eoFqighTrtEKkIoRRKnCOngeVHk2DKnKDJ0WVIZQ+VtvwYHkD1n83D2yEcpPBMV1qJ93OfCsE+aJuwdjKhK95nLYOhFsNqJ2DKWcelclVJAluV8/gtfIIoTfvtP/04AlBD85h/9ET7ywffz5S99ma98+Utsb2+712HaJdewVUwpbt/jqaeepqw0+5MMiWRnvMf1N17n8uUrfPwjH+L1t24QRXPEvQFKRXQ7HZ5+6kkWBl3W1t5AC1kD3jROXG1NC0Xlo3bLCtKIJJboUkLkYLx7rRlsZdFUlOF9pSh8zVY/uRHuHuFjk0FgcDBfSfd2Jv3rSuuwKMFPfKyLL3bn0q7mpH8Lc6xaY617b5IClDi6Ny4u1y2kCH1NczL1g7mo81dTH/a/Pzf7hLX2mhDiDvCYEGLeWrvvn8qAbz7ogEKI3wH8cVy8ywrf/d+7FeDezGMvNic3Dd0GPtE49hBXS/L2CSs6Ab54wmMfAxRg/UrVWcX+99MnPHdM1tqPnPS4EOJ5KeWHP/3pT7/TIVo1FFaptf327tX22cOp7bfvXe+lz+bm5r6/jWnV6iEVok/zPCdN09qRFmr+hbjKbrdbf5n/oCjWABXBzStCPcE4jv1n7SMHWwCGIcq00+nU7QnHrUt6eOdkWZYcHh6SJAlpmpLnOdbaY7X9VlZW6vNfuXKFJEnIsoyLFy9y9epVqqqqAWcURbXjLYDTs2fPsrKyws7ODqdPn2Zubo44jutozGak7NzcXO2SDKAOqJ8P8asHBwfcunWL/f193nzzTW7evMnm5iYbGxtMJhM2NzcxxrC5uUmapqRpytbWFl//+te5dOkSTz31FNa6WpFf+9rXiKKI3/SbfhOTyYStrS3yPGdxcZEkSXj88ccZDocsLCywsrLCqVOn+NjHPkaWZdy9e5fxeMxzzz3Hxz/+cc6dO8c//If/kP39fc6dO8fGxkYd/9rr9chzV66tqqo6djQAx1DnsSiK+j6fOXOGJ554gn/yT/4JP/qjP3oMet24cQNjDFevXuX06dO8+OKLfOMb3+CjH/0oAC+88AK9Xo8nn3yy7qOPf/zjddzq3/k7f4eqqrh06RLPP/883W6Xn/u5n+MXfuEX6HZdes23vvUt7t27x0/+5E9ycHDA888/zwc/+MHa0Xj9+nX+0T/6Rzz11FNkWcbW1hY/+7M/yy//8i/zzW9+k6eeeoovf/nL7O3tcebMGebn53n00Ud58cUXuX37Nk8++SRwMnhsxp021Yw7nYVdAcTPAsh3gmbN52eP+XZw9GFg26+F87E5bmbB6azT+r0A1R9U5+O70CeAEvhXhRD/6gnPJ8CqEGLZWrt90gGEED3gOVxJk
j/5gL7MOXnueMtae/2Ex7+Ag48fOuG5r88+YK0thRDruBjXoCeBHg4k7pxwnM/h4ON7OccTuPInrwN/5gHXPuU9zps5+g6iVavfMPq1nke289nj+kHpj3/Z8+aHho9KRQ44Ej5Y+Bp53gkoPTjTIZrTgPHf6UopsQaUch9olJC1g8iCiw/19excDUkHzqQHGwjrarIB1seHCoSrYyhdhKjx7bLCuKhW4YCl8vGl+Pgbn7qJUjEIMNaiq7KOz/S+QrCglKSsfNyGP6fwMCR8sBUBwOIciwKFULaGMLVj0oja7aitc7UpJAJFFAmMhx0W66MeTV230TqSiTQaqRTCSKy02OCSlBIVGRCunkIUeSentRhdOGhiXb07Wx9XYLWDpVI6QGcRZHlZR69WVYE2FbGP+pFSulWqUqGrCjAIGZEmEaU2WF356EsBVU4iLNpaRgfbbK7d4PatG+zs7rsJMIIkchGbyIi0N2AwXGIwv0CvP0en2yFNI1SjH3VVYrXGaDcpc3UyK6zVGF1h8pwsm1JVJbqssKZC66qOta20pdLGjy+L1hKh8JGtEPn6glEcY5CuFqaMXK3PJPYQSqGUc9oJaVHEVMJS6dLV30R7J2rAWI4wGmO9adWNbWvBCMBYN8ksnXNP+LIR3V6P6TRjMsmxuPjXXqRY6cYgLVq6OqhlWZBlU776L75CkU35md/zM8yvnCKJXQ3Pn/zsZ1g9dYr/3xe/xPW3Xoe8IO10UFHEqKx449Z9nnzqGV599RU0km73DKN8yqt3NhmONFcff5wbN26SCgnGcuHKkzx6dolIWO7d26CyYI2r+1hWBUa7uojCv172DyeUZUEnjomEII0k3U6HpHaQuleWQjhwKHAuROOiUrU23plq0JWhsg7m1q9362pGKimxYUIo/cvOWrR3SEphqbSLysW/LKWwGIuvO2mdq1u4hQ9G1i8bt8hCuHEixfdnNW0r5v3vWRhI4/GLuHoVAT5u2Ad0vF9N+leBXeCfALeACe4t9GdwE7n0hF33HnD+Cs+ovYb+9/oDtj/p8WX/+2P+50EavM1zrVq1atWqVavfYDo4OEBrzXQ6ZTAY1AtZg1OuKIpj4CnMsZrgBKghYagVGcBlcDMGkBWcgiFStAn2grsyOA1DEki320UpxXQ6ZW5urnbhdbtdptMpQgiGwyGTyQRjDJ1Op4aCVVUdi30NtQuTJGF1dZWiKEiShF6vx3Q6pdPpIKWsnX0hmjO0P8DHcG0hWjacJ1xLmqYkSVLDzg984AMcHByQJEntyNRaMz8/T6fT4f79+/T7/TqyVGvN/v4+N27c4OrVqyiluHbtGq+99hqnT5/mc5/7HOPxmM985jPcvHmT6XTKwsJCHaO7vr7OzZs36fV6fOUrX+GRRx6h3+9TliVra2vcuHGD8XjMZDJhMBgQRRGXL1+mKAq+8pWvcPv27bpt0+mUyWRSw+np9MjkNFvX8+rVq3z961+va3qGsbK5ucni4iIf+9jHSNOU+/fvs7Ozw8svv4wQgs3NTW7evMkTTzzBeDwmSRLOnz9PURRsbGywv7/P7/k9v4cPfehDfPWrX2Vvb48XX3yR69ev88f/+B/nAx/4AL/0S7/EV77yFT75yU/y/ve/n729Pd566y2yLGN5eZkzZ86wsLDAz/zMz3Dnzh2+9KUvURQFSinu3bvHuXPnuHLlSl0zVGvNxYsXiaKItbU1nnjiiXesmzgL+05y6TWBWvNnNsI1PDb7e/ZYs+du/n63rse3A3En7fsg8HcSOHwYzV7bgxRegw967p3a+EOqZdx3rX/2HbYbACfCRxyME8DquzjOrB40Rw0JQ/MnPLf3gH0q3ALaoHczbwc3b3/Yc4R58+O8/bW38+ZWrVq1+g2gh4aPwn01f8zRKL0zzGiD1qVzwkHt3HN/SwTeoef3McZi8WaTYKZsrGATAh+N6DCZko1ah1I6R5rwsYtKUmkPJKxzU4WYS0d7/DmF9GDBxYE6YOgccgYw2tSOR2eE8dDLuppxKoqPqCTe4amkc9WBd09JZMQRSAUqXaKN7ysccLH2yCVqhcZoMNrVHjTesSk9BAlwU/roWceyJKG+piu1aTBaY0yBEq6WnvIQt7IWrX3cpA01A70TU+BgG35yKgSVFa4/0Qhp6KYKW2UOjFpDVRbeJelAWZKkgCCOJKjEjZEyZ7y7xfa922ysr7G9s8t4muGMrpYo7ZP25phbXGL19FnmF1eYmxvSSROMLhHCHjOWuYm4i4zF+lqTVYkxGl1pV7fRGqqqpCoqirKgKosaUBZFQZXn5NnUORQrjTEaibuGYGoT0gHtNImJ4wQZR0Qqce5YLFVRkE3GZHmO1iXC32f3JUEY7pI4Fg6U+lFgrfW1I6WLTPXXUwNKFbt7oPCOYktkfU2YNGV375DROEP5OomLaYztzVEa7wTUhtFoxDe/9U0O9/f4Xb/rp3nuIx/F6gpTVjx55TFOra7y/Ddf4Rtf+xqbG/fIphPi2L0dHGYFj1+9wutv3SQdzJNlHbSuOCzhzv11rly+xObOPnNnLvKBxy/Q73TYu3ud9f0JVihX+zKOsbqiNC6+1HFVV19ksZ9wMHIT6vlYkcgQOevGqXvNOWdyCG/GGh+tpOu6qNqWSO1e4xiLsRptpQOJaIS0WCuDB9vDYoGVlV8wYJD26HEpQdbuRjBSekekq9+JtZTaLaQQwhJFkEjhQ1hPMsu1+h4UgOIZ4M0Tnj87sx0c/VflmIQQEa7Gxn3gw9baezPPf+Kk/b5HHfjfpx/w/EmPh7b/x9baf+v70IZWrVq1atWq1W8Azc3NMR6Pa3dbAI1ADcgmk8kx91uIEw3194qiqN2ETddfAHVa6xr4BWjZ6XQQQlAUBb1e71jdxADxer1e/e8oimr3Y3isqqr6nM0Y1jzPaydliEItioLxeEyn06mvJ9QqFEKQ5zlxHNfOy1AbUClV18M0xtRuxuAMDdCxLEs6nQ7W2toR2Ol06HQ6KKVQStVxpKdPn+by5cvs7e2xvb3N+fPn+djHPlb3Veifn/7pn65dqADPPPMMZ86c4eDggCzLWFpa4ty5c3zmM5/h+eef58tf/jKnT5/m/PnzrK2tIYTg8PCQl156iTzP+djHPsaP//iP8/nPf56XX36ZNE158sknSdOUX/7lX2YwGLC6ulo7SpuQ1/r5Xr/f580332RnZ4eDgwNu3LhBmqacO3eOvb099vb2iKKIjY2Nus92d3f59Kc/zfb2Nl/4whf4xCc+gVKKJ598kp/6qZ9iOp2yt7fH2bNnSZIEoB6DAfLu7+8zGAyI45jHHnsMay3f+ta36jGc5zmDwYA8zynLkrm5OTqdDv1+v65FOTc3R5IkLC0t1cdL05TnnnuOKIpYX1/nW9/6Fi+//DKf+tSnmEwmtdMzjM8mBAxu2eYYb24DHAP0R98fHYeEAZ6F+xz+/XbwrrlPOJ8xpoa9J+lBxzsJqL4TmHuQ4/Kk7WaB5Ntt2zzuO7lBm+D2JIWFEE04G47TrN/5Q6h9QFprl97jMQBesNZ+rw69B81Rz8wc+2HU
nLefpJPm7Q97jr9vrf1X3sNxWrVq1arVrwM9fM3HynhgB8YKrDFIXRFFCm0sRjvnnfA/rvRg+LeDEVJ6oFfbC32coVT+oeDJcx88Db7mmtUIKwi2MOljOiutkdZBQ4uLLxF4a6OPgo2sQkQKkP4DZahJV/roRIk1xsV3HvsgRw3+lIeqjeBVpFRIK7HSffitdOVrUlqUitHWRcoKK2o3Jx6IKiF9XUQ/EfQfbI2xfsIX6g+6v5M4wdjwmEEIV9dSyPAh2aIiSSRcOx0EdmDNTQCti3utQEXuvuiqQnv3qbtmQ6WtB6YO0ghrWZ7r8sarr6KzKWcffQzV7SMAFbkIWF1VSCGI4hiMYX/7Ptdfe5k7t24ymUzRgFQJ8WCJweJpFldOsbi4xNxwjl63Q5pERMoBbGs0GllPOpyDzYNIqXARuhYpFZV2LsiwKtY2InsqXTlIaoWfuBbooqD0E3xr3KRPVznGlFht0GWBNSVVnlEUFWVRYIEkcZM0KQSmqsjKkqKoPKCt03dRQvnxZGsYqT0gdmNR+LHkYCE46GX8MZTwdQ+tAaHAupqRczIliRQ7h2N29w4ZT0uEKImqinS4QpTGaFMxnU6IYsXa3bv8vb/337Jxb40f+eSPknS7HIwmSCwfed/TXHn0Ea69eYNr115jd3uL6fjQfWmwtMSlRy9xeHhAQkylHVgfj6fc2Trk2fc/x6UzC0RAZEuuXXudKoqxVelgaRShdIYQUJZubEVRRD+NiKVicdBhmpfMdd1kU1vroKWxaEpK7dzAAaoLYTH+ywejjXMoW+PjdI1bOGAFUVjnEECucONDCF9PEunc0P7Va7yLUR17NfuIVm+LtkDkY16TyL1nuYUTbsmEMccTWFs9lF7AxZ58mhn4KIS4ClwArtuZmhkP0ApupeXfOwE8Dvg+xKtYaw+EEG8Bl4QQl06IXv2xE3b7Km5ty6fe6/lbtWrVqlWrVr9xJISg2+3W7rWyLMmyjKIoSNO0hixZlrG5uVnDoU6nw8HBQQ109vf3iaLoGIQMdRO73W7tYgy1HXu9Xu0qVErV0anhfAHsBSemW9BZ1eAvtFsIwXg8Js/zOqY1TdMakIbjhWMGkNmMkA3HA2ro2ARAwd1XlmUNapqwc/YY4d9KqfpYJ7nWwvXdvn2bqqq4cuUKw+GwPl9wmgZokiQJFy5cqN2ZoU7nhQsXOHvWfadeFEVdk/Hpp58+VlMziiLOnTvH7/7dv7t2jJ4+fbqOn93d3aUsSx555JE6sjZErM7NzdWQtigKrl27xnA4ZG9vj1dffZXRaMTa2hobGxuMRiN++Zd/mYWFBYbDIdPplG63y+OPP17Hp168eJFvfvOb/OIv/mINlZ9++mnm5+dZXl6mqioODw85c+YMaZpy9epV/vbf/tt86EMf4ubNmwyHQz71qU+xsrLC5z//eVZXV3n55Zd5/PHHybKML3zhC1y7do2trS0+97nP8cwzz7C2tsbu7i43btzgtdde48aNG7z55pt89atfrWNewzgO/f3WW29RVRXD4bAGVsEZ/KD41VlQ1oSDzd/h3jbHw0nHfC96Nw7EpnOwef734gycHeuzAPZBbW3+frvjvRs1jxf6e/ZYP8AKdFWd8NxXgN8hhHjWWvvKwxzcWjsSQrwCPCuEWHpAxOmDdPEBc9RP+98vPEybvF7DJQo9J4RYOGF+/hP+9zfewzm+g3NJ/ogQIrbWlu/hWK1atWrV6tdYD+98FO5LeCE8QBFHMatCgIpihFTEcYSUblWm0b6moxBYH3sofc1HvNvJ1Yd0q9ocANBoa13Mq1QYW1EVJUpJD+GC38w51dw+VR17qWSEVNKzR++YqzTWlrXjEB1qUR5Fw9YOzMb/GevjU02FNa5GH1LWLkekwBi3XaQijNQOZBrteJ+QSOVrSuJAk7YaKSyiAqkCFPUBq9YQKQd3Sz+5k1JSaYvycanauMmdsRahtauHKAQyrMizBmM1Elf/Ukjr64w4YKq1nzR695m1Lka2FydMsoJSG6qyIitKImGJI8n5J57l+u273Hjzn/KbPvNZ4l4PU1UI6yabh7ub7G/dZ2v9Pls7O4yzEpF0mDt1idVz51hePcPi4iK9TsfHV7qITetja90YAesdnkp5yOoGXmPSfLSyTuAcsHEUUZSVqx/q3YS2rrtokcJSlCVl6d2sYbWeMRhTYXSJLjVVVVDmU4rphGk2Ic8yijwjK3LnxsQBQl1pqsrg2JSrSOpqjfrxZh1d9798VK8bI86d6/o+UhITYBpgrfAxxs65Z7zLVUpJkihWFwcMuh129ifsHY6YZhlG7nJueZm40yfPM8ajQ4QxVFXB//BPD3nl5Vf56Ec+yJPv+wCIAbs720RonnzsApcvnmV3/5D1rW3W799n/f595vtzFBpsXNJPO8zNL7K8ssIjZ1foK4OSkk4a8/rzX+bufobFgUakIlaKw90DN/HDoqTEINg7nFCmEYNOTG8wx7kzF+gkEbFyXyBYX7smy6ZkkzFFPqWsSpS1GONcj9oarAe4PtO2hn/WWVZ9RLN7L4pC3UcM3qPta0hav3ABkKCsG3PKT560Fa4mpLXk1t1jJV092eCadvekhY/fB/0N4I/hajr8d9baTQDhVhj8n3GRp//FuzzWBm5C9BEhxMBaO/LHioH/BAcnvx/6f+Mclv9HIcTPhQhYIcQjwJ+c3dhauyGE+FvAHxRC/PvAX7Iz9SWFEFcAY0+u0dGqVatWrVq1+g2opnswRIsGsFVVVQ264jhGKcX8/Dyj0QitNePxuAYou7u7CCE4f/48+/v77O3t0ev16PV6xHHM/v4+eZ6zsLBQw8Fw3izLatgYzhXO1wSRzcjXZvRpOFaYjwYo13SpBZdjiHeFIzATAGdwPzYBSVmWtaMsgNemYy3AxyZgCtsIIciyrHYAhkWr4e8LFy5w5syZGjxFUcR4PD5WdzPsA0fOuCzLAGrYGgBj+PdoNKqjX4uiIIqiukZjFEUsLy9TlmVdC1EpxdWrVwFqt2eWZdy+fRtjDN1ut3YiRlHEhQsXOHXqFP1+H2stWZaRJAlXr16t+ypA1ziO+ZEf+ZEaMP7cz/0cvV6PhYUFHn30UdbW1ijLkuXlZc6dO0eapqyurpKmKdevX2c4HJKmKT/7sz/LN77xDTY2Njh79iwf+MAHWF1d5Y/9sT/Gl7/8Zba3t3nf+95XuyrX19dZXFxkfn6enZ0d7t27RxRFfPKTn2Q8HrOyssKHP/xhqqri6aefruNoz507x0c+8pEaiu/s7CCE4OzZs3WkbvP+hnEQxlOYvwfoPQvNwpgJLtrmGA/7NsHmr4YCVG22+/sVSXqSM/Gdomu/1+O/2/M3X4M/BNrFfZNw8YTn/mPgdwD/uRDi91pr7zafFEL0gfdba7/yDuf4j3Bz4L8hhPjDs6BPCLEIPGatnQV9CvgrQojfb601ftvHgH8TF3H6X72bCzxJ1trCz2n/deDPA/9Goz1X/DlK4G++h3NUQoj/FPj3gb8mhPi3rLXT5jZCiLPAorX22w97nlatWrVq9aujh4ePUni
QAs6x6OChm2DIGg5JIbFGEyuFka74mrVQlaWDTVGERdVOI+mtY8bDNWO0d5IZhFIcRb1CXpYoIYilgwRCW6wUzkVlQmSl//BVux+9U8raI2Agj2JhpYdgSiqsMN5p9/9n78/jLcvSus7/86y19jn3xpBjzVUUyWQVFFMDMohAojYiyKA4dAutpa2NA9ovwQHtxga1f+KATSsi2t10deOEODDYMgkmU4kIKIqWUCBZAzVnZmRM956z11rP749n7RO3IiMyMyJu5I3I/L5fXG7GGfdZZ59bd9/vfp7Hdq/HLGZbdo8Dj9SXysKYG2hWRkvVEYpYiiAMWK8Klgt1nkfQ5Cy/83Z35rnR6+H496j89B5VlWaU1QqzPOZeVEpJlGlNzols0FvMa1yqA+PgKY/wM0K4lDJReJmARvdor5o9Wr0yKsVSyqxXhXawJVm8N5cPDvHWOHv6NB/24R/OL/zHf4fhXH7i3Tz2rnfwvne+nYsXL3L5cMNhNfbO3MuLPvAjee0rP4D77r+fs6dPsTdNo4WsjYOLFu+RGS0Z3kfwHEkd3hvzdhzAWqxrThHexbaXaL3aWswOZYSZZYqqzTpjCdaZUS2aKGWiraNilCMHLb1V2lxHa5/KvJ2pdcPh4WXaXNluNmy20cJ13h7Stltss6GsK06LqrveceK9TcnG/FKoc2JubcxJ9XGeXFS9WopAOmO72YJxgNXoYyZhTjn22w7miWzG6XVi78UTp/dXvOfcBQ43W97ztkd55Qd+EOuyYtMaFy9eYjvPHG42XLx4ibe945285F/9BB/3sR/DB33Ya1itV5w7d466PWR/Sjz08hfz6pc+yOYjfgUHm2hTuxzwTwmsV7xV8rRif0o8+u9/ip999N10S7TthlpnwNhbr3j7k+di24HZO2fPnOGBM3scXD7gsMKHv/ZD+YBXv5SSDFq0U9pcvsilzWX65hJ9cynWeJwkED9botJ5CXB9OQnCozKR8TPJLJNznFFrKT67jIPZ3jruLU5a8NEUeQSQUWXt9B4Bch9nJcQM1Ag+U4/H8yggZpn9KjfP3d9oZn8J+BPAz5rZPwIuAb8B+EjgR4G//Cwfq5vZXwO+EvgPZvYdwIo4C/MB4F9y5YzMW/GXiPmR/w3wGjP7PmIGxm8Dfnhcd/WAlC8jZlf8WSKE/FFiJscrgA8nZkH+t4DCRxERkeeJw8PDXbvR3vtu/uFS9VhrZbPZ7EKRJRi89957OX/+/O6P+0sb0osXL/LEE09w+vTp3dxCgCeeeGJXXXnvvffuwrxLly5x7ty595snubTIXILE9Xr9ftWTpRQee+wxttst99xzz24u4eOPP879998PxGzJZfuPVqMdDQiX8HGapl3Aucx+XEK4pc3sEmBePV/u6jaOy/VLyLQEfsvzHQ1Dllam6/V61y50sbTxXFyrunKZS7iEl8tsy6P3W7br3Llz3H///btgcQmbl9mUBwcHu5Cytcb58+d529vetluHZd3uvfdeXve61+1a5S77yxJALpft7+8DEWbef//9XLx4ETPjZS97GavVipQSDz74IC960Yt2rXtXqxWbzYZXvvKVfNRHfRSPPfYYm81m97if+ImfuJtLulS6vvjFL+YLvuALdi2BlzX43M/93F1QvexbSyXq0dake3t77O/v85rXvIZTp07tqluXcPutb30rH/mRH8nLXvayXXB1dB9aAsllja6usjvqaOXd0eB7Cf2Otm693hzD22HZ5iV4PBrQ34xrVU1eXYV4M9t4HG72+e82ozLxXwOfNsK4nyf+yvOd7v4DZvaVwF8A3mxm/5w4vjsDfCDwGcTx7Wc/w3N8s5l9PPAHgV80s+8F3koc034Q8OnA/w38/qvu+u+BTwJ+ahyj3kcco94H/Al3v9aYkxvxlUQ3ny8zs19JHF+/aDzHWeDLjuFk2j8HfAzx2j7PzH4Q+GXgJcTx9KcC/xOg8FFE5A53S21X45e4RM5OyRm3BL3FfLfRxLDWhhk0GpjRaouwLEelYMqGex+zDWP2Y8y/Mywn5ub0NlNKJmWjtQg+02jV0VtjW9to4er0GonAlHO0OoURMHTwRG+V7kulWo4KNSxauVo0bE15CaPAbSlZ89GtNMIKSyXaOB4JrlJe4d3jK3KNaEHb42Bqu52Jk4DGwRgRoqacyOaUqdDHL+rRxqaDTTHL0og1sxRVciVj7qPq0kZ7zt2jjraxeRcAW06YQ8oRfvn4Bb77CFhHdap7G/MiG5vzT3Dxve+i5z3OcMjZqXLZGhceexdvfctb8e153vgvvosLly6yneN9O3XmHh76qE/hpa94Fffddz+n99esp4nWowrUe4SGrc7jl+YOxMxKRnXfEg7HgqdRJVt3AXesQ+x7CUa7WBtn6Ea72PVeYTUVthsbM0ET280G7/N4jqiCzCmRS2aaVrTeaVONfaR3Wot1PFXv3bVz7bVS5w1tnjm4fIGDixew3qj1kHlziPdYu7lGMJ5KAQwvKw7mLbU1cIvWqmkc3Npo3WmJ9TRh5mznxmbeUrc1Wvg6FJxmPfae8V5mjBe/6BQPvPglXDhsXN52Ll4+4L4Hz3B6b4+LFy9R6xwh5GbL5UuXOXf+Sd72znfx0n/zk3zYB30gD33wh/DAAw9wuNlycPkSh4cHZK/sJ2dVMj0DRFibVvuUbGzOP8a//zf/jp9723vYksErda4R+peJenCRuVZqa2znLXurFZPF5+f0/prV2Qf5mNd9OGeKsz045ODik1w6/wQHF89x6fyTXL58wLbOEfD50kI1XTnhYVQSmxnZwFIm5UJKhVwmcomZqku6b31ms9kCNVq0VsOXCuPe8ZyiJfKoSF4+oMY4GcEMs/jMsXv+5SPnV37WyE1z9z9pZv+WCOh+JzARLVj/Z+Dr3H17Aw/3VcB7gd8LfCkxN+L7x2N9zTFt74GZfSYRJP4W4I8SB5X/P+BHiPDx/FX3OW9mnwH8D8DvAL4I2CMCyDePx/j+49g+ERERuTPcc8895JxZr9e4+66VqZntQr2lBeXe3t6u/ekSFB4cRMHHEsgsYd+LXvQiLly4wOXLlzk4ONjNQXzsscdYr9ecPXuWCxcu8Nhjj3H69GnOnTu3a+353ve+l3vuuYdTp06x3W7Zbrfcd999vPKVr2SaJs6dO8eTTz7Jvffey+XLl7l06RIXL17k4OCAl7zkJUzTxOXLl9nb29uFqHAlEFyq2pYKymXbDw4OaK2xt7e3awO7zIc82i5zqVhbAskl+Dtqea5lLY8GU8u/L126xAMPPEApZRfeHd3GZduWQOhoy9glPFwqDZdA7eDgYPeal6q6g4MD7r333nGS8LybY7kc1x8eHu6qNFerFdM08eCDD/LRH/3R7xeULX9buP/++7l06RLr9RpgF/gt63l0buNSfXm0/ecSNK5Wq12L3+X5l9mNH/dxH8fb3vY2Tp8+jZnx4IMP7va1ZV3ibxgxb3Rvb495nnePuQSOy/u1VMMugfbRGaJLa+Cj79kSAH7yJ38y+/v7u/d8eW+PVrou4eOu69GRqsjls7FcvgTRy+
VH77M8xvJ+P5eurrQ8um036unaph53ReeNBInL+3KtcPR56r8jqhw/mziB1IC3A//e3f+imf0YUQn4q4EvII5Jfxn428DfezZP4O5/yMy+mwjhfh0RID5OhJB/mWtXMT5BnMT7l4DfDdxDhHR/xd2f1fM+wzY9bmafAvwp4DcDXw4cEGNG/rK7f98xPMdsZl8IfAnweuA3EuHte4lj7q8C/u6tPo+IiNx+Nx0+plTovUVYlyL8ShYViL23aE1qCVt+kR9hUfeYZZgsxdy21mit494pOUPOMKqKcGdVJmy1AotqyVrjOVudcYw8Wiou4WX3TsIoq4nJogVpdx8VdU4bQyZzsjFTMsKF3Yw37yMYHQHXCBt8mdXoURk47koecy+jPWimeo2ZkCmzObw8Vmup1BrtV5df0lMmr0q0cvWlwnMJNyda7ZS8VFsuVZjjl/UU7VltrGerdVQKOu5x0NTGvI+Ur2zvMqfTgDKtKNN6VJg6TmK7PaTTmDdb3v2u93LxXW/j3pe8gpXPrHxm651f+vn/xOZww6WDC0yrFe5GWe2xOnUvH/LhH8XrXveRlCnvZm92v9I6peGjchTonZSX20UguVS3GSNkxsnZqB5VkSnFQZ/3TibmKDqxb7UWLXpTlIFStzEf0HrDzUZ74AlPTvGlOjcvpYbkUV2KrXehOUBtjVZrtF09PKTOmbY9ZLu5jCWL/bcDlvEUVafTFFW0hnF4cEhrsFrvs86JlNeUacW0WmEpYylTyoqUjamkaJtr0Wp3c7jl0qWLXD485PDwgHkbB7ERqGZyKZTVHqdOn2W1t4/nwsFhYztXOnDPmdO8932Ps91uOdwcspln1nWP7VzZbmfe/Z7H+JmffRMP3n8vL33Ji3nwRS/m1JnT5L0z1NapNcLo1mbq9pBLT7ybd771Lbz5LW/n3EHDycytUus85o5mTp/Z5z1v+UVadyaDGWPbOhcONvRauefe+/joj/8EXnrfGermgH7obA4ucuHJc1w6f57t5nI81giiex8H5GmcUb0EgGMOY86xDjH/ZrRfHo2NEzYqVbfU2sBbVDHGmQWxfxqk5eDI4j1z8+V8g/jsxk+KUR0ZoWVv0Inqy6WqWm6Nu/8D4B88i9s99AzXV6JNzV+9xtWvH19Hb/8oY6+6zuM9fJ3LzxEHlH/k6OVm9vvGf77pGvfZAt8wvkREROR5bqnygph3uIQjy2zFJaRZrVacOXMGiFDp8PBw9wf8g4ODXQizBD7ve9/7ODw83IVlKSVe/OIXk3PeVTVevhy/V7/kJS8hpcSFCxdwd1arFQ8++CAXL14EIiB97LHHuP/++6m18t73vpczZ85QSuHtb38773znO8k5c/r0aS5duoSZ7dpuPvDAA7s5lEtot8xaPBqILCHd29/+dh566CFWqxXnz5/niSeeAOClL33pLmA7ep+jjoYbSyedZR2X9VnCxKOtUK8OQZcqvaPv0fJ48zzv3qej7WYPDg44e/Ys2+12tw7L+tdaOTg44L777tvddpnJefbsWYDdWq/Xax577DH29/c5c+bM+1W2njp1is1mw/ve977d+pw6dWq3n+ScdyHe3t4em82GlBLnz5/fvZ/LutVaecc73sHLX/7y3f60Xq85deoU58+fp5TCa1/7Ws6fP896vebw8JBTp05x8eJFSim78PpoK9sljDxz5syuunPZt6dp2s0m3dvb21XqllJ48sknd6HrqVOnePe7382rXvUqLl26xCtf+UoODg52YdzVX1e//9dqtXp1KHl1Fe7RVq7Acx48LkH81Y7u6zfrWut0veDv6Z7nZsLCa70/19qe5yt3/wXg857m+h8lKhyfzWM99DTX/TPgn93gtr2DCO6e6XaP8PTHwdfcrnEc/CfH1+16Difat950C1cRETl5Nx0+tjZHG8wRFLoZaaRaPSLJ6CzZorXmlDLdRssS7xgx3y/aaY7WEUT7yeiwOcLJPpM90TBanek4JSVshEZz76RUx8w/G9uURnvLES6QxoFDJaVlXkO90t7VO8ucuO5jHl+PSi8n4W6k0YIVM5L5qLaD0QQy2j+OX8hba9ArS4tZGJV5HG214VGB1Sp+5Ay8ViutR5XflDPQxy/0RusVY5kN6SSD1vpu7XrvtFpHAWEb71OsT7JMWe/TWsUs2sq2VrE2R/vXXJimzHp9L7VH+8sHP+CD2bTMfOm94DN76z32X/4qLl6eWT/wAK94xa9kNU3snzrFmdNnOHPmNPt7K+q8pbcITXeZDolpvcZY5ktGu9TeYw6fLaVkRLhbSonQz3vMyGwjBLPOar0er3WOMNkSyeL7VEZb2rqNtUp5JJmxPXlUSbYWwVa0arWRP1ZIiZIS6ynWuHcnW6YCvU2spko2pwKr9R61Veq8orfGqle8z7S5YmO64OHBhscvb7k8d8rBhvtP73PqVGI1rTl7731Q4vtqtSKbj6rTHjMoa2WeZ/ZO7XP6cEOtbdfClxQtd6dpxWq9pqz2Wa3X5JIxy2xr53DbuHjxMvfe9wDve+wxzj35ZMxRPDxgs4mzo/f399nWmcfPX+TNb30HJcFeTuyvJ1ZTjsC2NQ4OD7h8ecP5yxs2rTEvlcruzHOl1mgvu16v2DPnwqWLbOZ5tNU17jl7lvtOrzB3PuDDXstHPPRKsjnb1qljFqx7p7VKbVfaIrvHSi77kntUFWcf5Ye90jz2+3kb7WDLKjGViZJLnAjgjXm7zBU1kndavLvxeX6/n2xLFe1ofezLfhlPF9XKPkJ1x/KYWfuCOLFTrmZmr/CnzvB4NXEmZgW+60Q2TERERO4YP/RDP/TC+Eu8iIiIiIjIETdf+ZhjJtoyl6/kCHmyJdbTCgfmGnMdoUTlIU7fzWJMu9aJOacIA8fsu+7gPVo2mhuePcINj0q5uTaSdXJK0X51qUi0Kz3+59oxi3olWFo3GrmAMYLE0X50RAlERHblrLldnjBCsrS0AWlXZljMfR5FWCnm++3mEbCb6egWM+NyMqYyAU4uaVdlBdF+1HvbtUHFPMYykskp1iqznBl35Cy5JWgdLU0hKk83m3m3zmaFs/fey/7ps5x/8nEODy5TfdnuqBosU1Ri9t5GO1nj7KmJ17zuNVy4/MFcOP8kr33oozl15iynzpxhL0HdXo4WLiN8WSo6W+84eaxrPK6N6lj3HqHgkteMjquM6seUEnmK/WVVMp1E2/ZoVmvj8ccsk7nV0aIXcCfnQl5NTDkzj/ew9w6jGnRXVWlR6dma02mj/enYLuI6vI0qXYCY8ZlzZr3ao6bYS87ecw+rvX3mOVp57q3X0Du1bkaw7eDGyz/UmDs0Mk+ev8S83fDk5hKlJV7y0pdy/733sL+3h7fKdnNAbZWDywfMzbGpcPqeFWfuccyunKlro9I2pYLlCSuZnGLOpZuzT+L+VGgvuo+XXH6Qxx6/n7e85S28891ObY06bzncHtJ6ZyqFkgspZ7bAZYd+Pj4XPmYkLq1Gmy8VxvE+1NqodcbNmVb7nN5b8ctv+UXm1plyYtOjEtVaxXrh3rNn+RUvvw87vERf75FKp
qzWrE6dZrXeI08rbJ5pdRsh7K5Od3wsxlzGXSlvnMFAyhPTao/Vao3lTMkFo1PrTO2M6kQfJxo45lF9u3x2ffwswzxiY4fafRRZ2vLjapw0wHjutHxUFT6+cP1jM5uAnwLOAQ8RLWFOAX/q6mBSRERERERERERE5IXgFiofl7aqsK0NfEsZQVidCq1WbISBUf0E3tquBegSyqWRH1hKUck2z6PSyGi9j1wpk3KmVWe72YIZ01RIyxyAWoEYdp/N6GPOXrRIiTCN0QKzNqf3eAzvHts2quFSzmPeYxqtY/vY0FFt1jppzBuk+64VY8ejVSbE6yT+I1qKxm28d6ZptdsmSyPo8r6bZdhZWs8mICrOzMDbjLcGecJxSpl2FVhjmiaJhJtF8GKJuUVVYHQgrVw6/zjnzz0W22wlApkW1Znr9Yp5O9NSJRuU0fanzVtaq7zkvjUvf/CVtLrFcoHRbqXkhO3t7SpCl1AWjDJNY3ZijTa2GIebTbTBMfDeSSX2hd5atEp1J5Wya9+7nbdjMZeWOzFHNOcUry0XnAh8uxMVbfOMTYWcEtNqRd1uY62WULSPwHnMjiwpYyntAt2lOnUebVfNovLV6GQDSqIw4Q7TtOI0RnNnro2cEnvrmIXRvY2qXou2qrlEFWHvtN7Z1k5OhVIyJY02wGliwmGu7J1ZkaYNEd4m6jz2mTRa/I4A1FLMGSHFGrcj1XmMKuHp7D6n1hOn9ve474EHeeKJJ7hw8QJPPH6O1iptO2N5tLAhKgTTeDwba+K+hMc+Wik1aqv0UZW7WkfboMP3/TLveO9j1N4jbF9P3Hf2DCVBq5VT/TLv+rl/y+G7HuXe+x9k/+w9rNan2D91hvtf9HKaF/LqAgeXzscMzVZHa9TYliUjxCNkdRIl3hbSOKlgVSamaUX3OX5OtYaP9s7NfXxOEykz/nt53BEmmkGC7I3q0arZ3PDxATfzXQtpyKONsE5of4H6FmLWxxcB9wIXgX8NfIO7/5OT3DARERERERERERGRk3LzlY9L5WKPgKOURLdEb51WG/Romeq9U0omWcJKobVGcidj5JLHvyOQ6Q5WVhEGpnxktiMRTI7wz3I83pX+/TZmOxi1x5w399HKkwguSs402mhDGjMgyEbK09jmGa8No2ClRPUlQG+kESBBhFEx+80olnDG/IhWqR6PnUY1p+dpBJwdN6fVLUswR3cafQSdVyopIdrUsquIhJRX2JRwj3W13phyoeL02kgpWrJ2h5wLZCel1WhZGnMYDmcfzx2zIhIZ9xzhkRtuEQKSEr7djPUdLUrnGa9zbIslao1QsJTCupQR6ka7zN46pUyknJh9VL16hI2G7Tq9O7GvLHMY3J2SC71VttETdbkhUXnImMNoQI1Zf8lG29TMdo42urU1OrAqmdS2tLZlDOob3Vf7lYd2aL1Rljl/jJaa5rtZEjaqKjsJt455zCWd1mtSykdmm1ScRE6J0zmz2Wyi6tc8qmnNdhWfpWT29zK9w2azZe4RvpeSmFbrqGp0OLW/v6vS7GOeJeMxlkDbbJlDGuFgtmjv27yP9qWxRntT4qUP3MuZvYn3nDnNO97xDuZtBKSXDw64fPkSPip3sUROGR/tkLMZ3sd8Q0Y/UouvlDNltc+D953lXb/087zrfe+LlrmbLZu5sV8K2Zz79ibuzcaKxqXz57l0/jzvfucvs97b48zZe9g/dZoyTdxzZp9TexOX1oXz555gc3AZ704xw1NUohqOj9cdn8l4/yzOcKDPB8zzIbXPbDaHbLcztVbGOBKaxX69awvsS5PceB/yrrAyQ+q7z3PKmZxLnIBAirm2OIxgU1543P0bgW886e0QERERERGRFzZ3nRUtIiJ3lpsOH3dtJUeatN1GpWAiKufImeZ9jEtzttsNpWTa3Jh7I+VKmsFSoeQ85h8eqSz0Ts6GlYlWo8LK3ckW7V6bLyVevmsfOteYJRmPFpWNUR0G2Giz6Y3co7Kx5EwqiZygJ8N8lC06EaZ6zGCsrWGt0R2mkiOYwXE6vUX1ohlki5ayS9vVZE6nw5hJx24Ad7RyzKVE9aiP2XNpVP+lPFp2OqtpxYMPvpjqzrknHmPeHO6CwagoLLuwsuQUFWveo3qtO82jMiunFGEZQK/kPEWrWI4EbLsg0KjUCFVHpVhKI4Cd5zHbMkF3UovYxtxZlUxL8TqihapFe9PuowWskS1mKcbdewS7MXCRhkOP5+qjwi6lqKiMtyXRWseJithtqzG3MWf6qKAsU+zSm+2WzTaGyUc1X9qtqQGplF3r2lZnerfRwjYqLHOKCtXefLTDje2K4C3ex0THLMdryNH+12tlXdbsr1fM2y2H200EXJZ2swR9xHhlmuhEFXHs4xEeu0NOS0NeRo/aqGLFYp+MgtBYy9qd1OvYz20E4Ia3Ru2+q0AGmHLmJQ/cyz1nTvHil5znscfP8eRj7yMBeVpxuDlke3hI621XyVrHvNKUUrQVLitSisre06dPsb10nve+5c388rvfw+V5Q7GMm3Hq1ClOn1pzqsB9pTGNiK+PSsLZK/N8kUsXLmLZyClTpvh89drYbLcxDxIHmyh5Yio+ZqpGK9gE0J06b6jzlmTGZcY8xl0psu/WM9pEF3qKz6qNtsOW0vixNuZu4ljJ7OVEyoXVNDHtnYrtG++Re2PezmwOolWuiIiIiIiIiIiIiIjcQvh4cHgARDhS5y3RAbSxf+oUjZjZ6G54b9RNI08rah0z3CzaSOLOtErM/UpF17zZxjzJXGjV6N6p8zyClQgIWmski4CvL0PXRivNaGs603uEeCklam1ROdmjGtJLJvVEr1tyzVFladFu0m20ie0Rxnlf5sQR4Ys7rc0RbnqnjTAquRNNTCtzjdCmlAj8em+kEkudcxmVjg7NwSJISoDljI1w1Xpshzs8+cTj0YLWGLM1o+orWttarEWK9rLuHXOovdNaVAqWUiI4a23XmpXedoGSYfRW8dYhLwGckbLFe0inu5Nhtw3ukI8Mu7NkY56jRwdQN2w8b9+tZYTD3loEoGnaVQSSPCoAveOdUb0H3iIszDnmRqZa2daZkqNy0DGKZaYpAzb2O8dSobUZS5nWIY02vKmMNpkxWJTWe7QxtUSfZ+qYKDgT3W/7eH/jfR8BpCewCGmTAxksTVGlmBK1zkwWrT997K8R2BrJCnOdR3jslJyATN3WCFz7TM4l2nzaEu73aEvcaswfHfNKLWXmWoFEHcF8ycssyA6psBrVnfNc6R1Wq8JEZzVl9tcTL3ngPt5z/328873vYz485NKTlcPpDOu9UxweXGJuTimFzeEBZVqBOaWsuff0Pk+8913Uc+/h0UffwsXNFnMnWyZNK85MiTNT4qVnEqezUyxC0itTWPsu7DdzmI2Zjm3nqEG1CKCjA3PMXCwJLE/kMkWFs1tUH3pUA/fuVBv7j8VjlxRVjqS0q4S2kmIG6RI6enxutttOHRXEqRRWpbBar5imFdO0ZrVex2ditGt2d8xbhP1LOa2IiIiIiIiIiIiIyAvcTYeP2+0Wp1PnRquVlIySC5vD
Q9wOo0WoM1pwZrbzNsLA0cLU36/iyKIy8PyT5FP7dFvjyeneI1ColVKiFeRIIUflYfzB33uEbDFiL0KFbB33aMvqPaqYUqQMuBtzrfTeWU3ONEUsYRbz9OoI7SzluDylXcVc6203Bw+HMk2UaT3alzbS0p415aiIhF0IugQg3S3CuVFJZvQrt/EI3lKGZJmUjLm3CCg7Ua3ofReAHK3c6m6j0m+0y1xmXy5vWh5BD+M5zGJeIg5LZeRo8xlBYoTH0fbUaa3vwi1SirmIY07lEhK5Q5vrmOMZAU4frVDLtMd2s2XbGm45grRt3VWd9R4Bau+VlAt7qynm9S2bbwY5kfM6imNL3s3rNDOmacKJeaN9VL22WqOa1lMEX63GvMSUSClq8XrrjC7C0Hd7FXMbFXT4bs7gPNeY5ZlSzPBMCSNatOYcLXS3tbGtlb31HinHfpCwERqyCzO326jcjPmFS9vdMctxVMe6+5iTmbFpitazHlWZjkXA6U6tDfdE5N5jBunYF3OCvangZmznynaOuZbmznrKvOplL+IlD97HhUsHvPNd7+Gtb387l554D5cvnGfvzD2cmvY4OLjA/uo+Di4+yZOXDzjXZt77+BMczvOumrd2Z39vzUvOnuKUVbI3Vq3iZszxwrGx7xm2FC9iZnSL6lncxnzHK3uuO9Bmtq1hth3vTgSMMWl1zFctjBMSxmzPZWFZBrF6zHy0qArGIoDGfQTkDSNCXPNOrzW2251EZ/aZkkdr6daobbRzrfXKeysiIiIiIiIiIiIi8gJ30+FjSTEHr1ljmqYR1qUIlrxhlnCMdrglZaJaqdZoqdg3mDl5WjPXugvm+r330c2iym9bx5y+aGk5z1sqW3KLeYnJOp5SzGAzw+ij9WbHeyMngxRtKVOKcKSUhPsy3S+qHTsRouWUxsy/RvN2ZNbdMlcuRQtVlkAKUs54q1gpTCXhPtqxdmeaCrW1iJLSqOwcrT9tVFG21mh1GwHMBD63UfEFjIDGPY1KvojAYhv7rmWoe4QmyRIjmxs5i5MsXWl9Wis5J8yigs4wPOdolZtHgMO4r+0afo6A10Zr09FCdpnJGP1WR0gcrUIj2DNa65g3eo9qw0wEuOtVoUxndoHNZrOl1xaVaBir1YraYlaoL3P0dmGv74JUA3Ip1LlGxatbVEnaEloZKa9oPlPrDJZYTetdtepUIBdjskTeWwPR/rSOlp61Vpp3Sk5jH7BoCzvC8hj1F2GVTZlM7Hu1tQiIe+fg8JBTp06xWq2jeneZ2+ijs/BoAxvBt433B+ZWKVauVDEeCdFLLjFjstsYeZhI6UqomVKm1TmC7vGZ7D0qYaMy0ViNilQzmOdOTtF6eG91hrOn1rz6A17B4+cv8Na3vY23PPpW3vnOX+LSxfO09l8oJTFvW3yepsLeaoVZZ51X7BdjnZwVByR33IkwtI9werTLTSN8HbExrUeQGvM7j+x1Izu08ZlLdiUEXi5jfCZbW9qojvd/zNpc/tu7R8EqS8g+ZqoygscxfzXZlREJ7jXaRM8bLl+60lI5eirHe1hr330URERERERERERERETkFsLHZDDXBr2Rp0KZVtTaSDkxbyvet5RSaFGihLeOWcz/8w6lZCBCGnZBWrQW9d1sPKJNZ48KMcc4vHiebDCtVpjFvL4leCtlovdOnbd4SljKrFerqJAaFXI+6q5KSXiPGYmdmA24zBpMKcMIMhltRm2paAR6itAjquui2jBamxZq67hFQNdbw3KitxFz9Bmw0c4z5lh2H21el3I3YsZhzkbOJYIZjxl52UYL0xE+eo/gxQy6+ZiHSASAZuQC7hEELt+TQevR+rR4xi1jHRjz+OLxYm4gFgmLt7ZrAdtHlaG7YT5mP2Lx3rrjI+TMJULhZS5ozjnm4rkz5YKNBpxmNqr5OiklVquJWhPzPDNv5whilzA4GWVa7WZRlhyVtHW0uW29k3KBZGTrWJ6YVjHHMPY/2wVyUbG4zIRskCIQiyanUa1qsYDUMZtyGpVzS9XqUhFKnfEoSyXnCIujPatR55nVeh1hvEW1neUct8m+CwlTSvTWaN3J2caYx7GP2RIF22iTu0RuMduz10qyNFqNRjDYvZNJu8AxZcOJEwJ6n8lm7K0mEnWE3TE7dL3KrFcTp9YTL7nvHj7qwz+cx86d45d/+V287W1v473vfifzfA76zD7O2dVESdGCN5vH++pGTCE1LKd4zSwViL5rWRwvyXext3MlXMZjsmeMQ027lsJtfI5TitfWjn6ufQnofeSDPip9LWarLhWh0ec3/i8VyiqPlsqApThhwBwflcERDqfR8jZOYrBRnVm905qiRxERERERERERERGRxc2Hj2WPVe5sAW+duR+Oqr5Eyom8Wo0qoZgHmFLeVeulMecv8pkcMwVrpfe6q7dLACkzzzPeZlIZs/vGY87zjLlTVis6nVwK3Z3aGq1HBVpJEVL1OjPt7ZHzNAIap9c+Zr4VSi64RSUbgHneVVZF0BZz5NIIr9JodZquFAhGgDgqrWqL6Y+MYJLRcnOpnKojmHTv5Jwou3aaEQ5aLjh9BDQZY1QurveiLWQyWm10Ot77WMcIVaZpDyzT6iExvC6CtKXNah8VkZSo3pumiQ60eR7vU7qy3ebk0V61tk7OmVJytGI1Z259vE/xfq6mEpWnxAzLbLZr3dpiIGe0su0N604qhWlvovVOq1tSisqzNqrmLGUmM1ofsz57ZzJYTdOuArTVeM+6x15TnKj6TImSOt6cVU7MHoHtlGMGp0PMJh37Uik5Klndo4oz5wgWR8jZWqf22L93swPHNnQ30tjHlyrQaP3ZwYyyBFgpkeis0gTGaNUb+4mN106PmZ1wdC6lQ4rngCMVkzFQcimjJKVR+Tuqa3vv432zUUXYcW+xH4z9NS3zUkcAOlmJtrs5ZonuTYX98gCveNGDfPxHfwSXDg553xNP8p73vof3/vLbmS+dY6oH2LLP+5V9PpVMHq2El5LFaKk69hFjtFH2EfhFjNia79rQ7up9x+NcCRo7EUmC23J/gD4a10acbqPiNqfxg2upuuwzdR7hfUqjYjeDGblkplxi9maZwEYF6mira7mwWq1Gi2djWzvTanUzP0ZFRERERERERERERJ53bn7m4zxT65a63WCWIrwZf4yfLJOT7WYuplQoBlZWEc60Tusz3TslRRtQd2fexOMlGzMJSx5tEhOHh3OEcyXv5rflNAIHS7vqwQikKt4bPWU2ly/T5pnp0iX21yvO3P8i6lxH5ZuPmYeV7TzT5mj1urRgNYtKvGWZaqujfmupoDOSpWghWmfytAKMNir8UjJaZ8yO9JgnOKr4Si6MyXUkM8qqRBg62m+2diW8ijDMyKOFbCfjOY3AbY6qLkvgjWk14STmlqDDaorAcLvtu/mNkWGN96fNWCrkknfBY1SnZWxUhuXJKKt4H7xX5tZo8zaqO81YM2Eps91soyJxyrQGtc67Fpc5xbxJ90ZrUdGXerRbnUoipzXGCJXzxP5o0xvrUeh1prXG3t4a98Y81wiyco6K19aptQF
thFOJNveY42ceFbC+VLimEYYv7Wud7dyYfCLneK8ipIWeYsZfKpleI1S1JQizK4FUbzUq60ar36VNausjuE3RPpVR2Wsp0SyNYHu0lTWjmEUNqkfA1j1CuBQ7d1TTLmtq0brXko01HpWNo0WpE/NPvYMng96Ya6e1PtoVOzlH0L6tLaoJx2c4pz7287KbA5pS4tTexANnT/OhH/AK6sd8FIcHh1y8eIEnH38vF849wfnH38Pm8kVSr6SoXRytgZcZi1HRmEYFZFvmmY4A1cxG8Bj1nT7Wi3EbB/IILFtfLluaA48KaltOcLgyM3Np1cpScZsyU7JoJbxEmrWOKuj4TMV6RBthUh5rNVpEz4ekXJhWE9mhbdrN/igVEREREREREREREXleuenwsbeZXiPEKzmTp/WRNpZRvRcVUBF6ec4ki4rGVDKtdua6ZXOwjXarZFpv9BQlSilFVaRh1B5BTCkR0FFnSjbKNEEq0HpUO9bGwcEhlksEQynjPpNWa+bWyW5sNxtab6SecXNyb6P1KxHUtBqhF9Gq0nBstbQa7aN1acQYueRdNV8bYVouhezRNtSW8KeNdrK2VMDFbDofrV5rjxA058Q0rXZtJUl5N2Oxt0rzBrmQ8pq8GtVr3WIuIRGSbQ8vk3JmVaZY91Yp68y0WjPPh5hHVV4alYiOYb3h5tQ5ApRpmji1fwqIoK2MlqW9NTxFK1FLhZwKaVToJa/UER6lZliO0Ki1GtWMeFStEmFqzNprFBLmmWxQa8XrzGqaojVpj3UrZtSlgtOdOvcx53FUxfU6gqeGtzHHzyJtshHQudcrLUvd8TpHa14bMzXNqa1S6zba+KYEpAh9U8HdyVPMI42FWeYLRpXkbi1HaOh9BO8er4MRlPuokswpk4j2wrPXXUDGqKx1N9IIHiOIg2gfCr3FnMuUosqYUUkc1xslJZyGkSJMHQFcN+hlmb/oI2xPOD0qWmPgJ7kY2SNUjdebSDZaA/c29mVjSom96TT3nT3FB7zipRjOdq4cHFziwrlzPPn4Y5x/4n0cXLjA4cFlWttgoy1szDLNYBFy9jH7EneSGz4uX6oml8phM9utk5nT4y7x/rvHSQWjC2qyaCHsy3s+/jve2agOxW18nkeVsDt4xiEqlL3uwuK83I5EKTFP9XI8E8tuISIiIiIiIiIiIiLyQnfT4WOrc9QbeVQ20hub7QbcWeVMT0avM2aJebON3KLOlNUeGGy2mzFPLaoH1/v70TZ1O9pv5onWG/Qa4ZvHc5WcSXtrVj2CPcqEl6iGOv/kOaZVYbXei2olh5LLqKrqu7aaAGUqbGuLyrHRhjIlJ+USFZa5jIot2Gw2uzaZKefdjD7zjNcI10ouEd70qG5LOUXr1xyBRffR+pKOtaiG7N7JniilYHmCMXPSeqWkjKXEZtugVRynNsAb2TcRtbV5BFXQ+jxagCamlJmy0dOoRsVICaYcLTWj1s2xtI5K0dYie8kR3KRcoorOe7yHYz2sZPDEtFqT2xYw5nlL86VFLWNWZoSlMTdytNV1KGNeYlTrRZVj7y3mZVqi1srh3LA5XldKmY7RbAMwQqp25H1bWnMSabWNKsLeqaPiLiejlIlEhJtXQqwImnKKMLh1H5WRMPdGKbBeRUvY3hs5p9jP0/IcS3g7ZoLCLiizlCjrfSYzvNdoKcsqNnEEjq1FmGfYrmVo96Vlavx7N3fTl9o+J1vsZ7W1aJk6ArS+VPgRjx3B2ygMht1sxL1SqCPAqxBVrG1pPWqjanDMYOyJ1Bu1xvPZUtE5qmMZLVS7d3ptpGzsrwun1/fy4L330D7gVcxzo86VS5cvce6Jx7l44TwHF89x+cJ55sOD+DnSozo1hk/GjMhkZeloHOEpjDmwPt7zUSk5qj6jPPPI3NgRQDoWFahjX0hmjKmOu/a0sdZGH21d4/Oy7Mw+gsfxvClmj5rF7VvvYybozf4kFRERERERERERERF5frml8DHlgqUI2fBokTm3hlki10qrnVwglYlkidydWueYZeg9gqFcyKVEG1N3ptWaaSq4G9YNb7CXC4dALhN5tcIswpTuTm6VNm+xlNhbryIwbJ3kRsqJVSnRWtFjpl4uGW9RxZeT4S3CKhjBRqtRoWkxQ9J7x0brTTPIBt2gtRphZDLKeI7NwSElJ9anTmFExdwYeom3ituYh+dRJZlSBndaa+QMyUc7zu5RBdijajQZMa8Si6o3Ym4kveN0anNa75Sy2lWiphRhJw4+ZllG0BvzNfMUtyUn0pjjmIhAJedE90bdbthuDkjJWK32SClTawXv8R6Xwtwa2Tvu0Vp1u93QMMpqhVkmmzHXGRp4n5hWE1NZsT08YK6NwgQWMx/BWa1iX5jnynauo0XqqDwrhe6wtx9ht6UU8wkZMyIxttstDd9VNNbaaW2zawkcbWwjzJrKqO5rI1AzI1khlWgx2nvMx8xTGasXoVS3Hq1RlzGDY9ahj7wq50LOCXqjYcybytavzIksU8wLtFHFGPM1x1DCMa/SPVqlRodcI6USbX5zjurVUevpvVNbpdUaAWOHuUa72QRMPlFS2rXaxZyVxTzLlDK5ZEruESAv1ZojSOsWJwaUHK2IW53pHu18a6+jInTMrXSnN6dTSSWCQ/POapVZrzKnTq140YseoLfKPMfXweEhh4eHHFy+zPlzj3F46RJeZ/r2gDZvqKPVLtgICX10WV5aA0PrMZfUfHkzlhbMBp52bW5J8Xqy2S5wZAkhccxttNS9MtfVoxRyafo62upGG9be+xjXaWSiwlZERERERERERERERG6l7ap38DZm0Y3qvt6ZUsx/NAOmQslpF7bUeabOy4zIAhgplwhUACOqwXpvJEtMOUMpEVSWxFQmuju9VWqbKanQWh0z9aJCbDVNI8hxYr5bZ/mX0SPgnPKYx+f0GlVtlqPSz1IZrTfnaNO4BIA5AzDPEbK2USbnvbK3t8e2w+HlQ/ZWBVLMAsw54z1hNgKUpVrLGfMGO+4Zs2hD60YEdyVeV6sV6400rUkecwOXWZUl57h9i1l9hZhBuZuvt8xbHPMrvdVdO0pSvO3e4760OsLaCGbmzSFb78zbqDgsZYr3rm1orbJarcgp494oKVHnGgGQxYxIiLAq50TdbqNSr1b6XKmbA1braVd5llIerWijPWo9PGC1WrO/vxdzHfsI7VIEf1Na2oFGULxUPk6rVbSo9cK03o85nL3Ru++qAm207fTU2JsmYGwXSzvOZQ1ttNF0jE5JNl5ftJgtKdHHXMicx+xQop2nG3jdMNfRCNUSqRRw2G7nXQh2JRB0SFFtGy18U1RWGrvvabQYzeN+eMyRxBI+Wrj2nMe8QqesnMPDA+Z5JrXx3hNtgu3IOlhKZHc8xXOtViXWoy+Vg53dsEQfFZ1mtNrA4nPZao0TBejgxmau1NbIkRKOfW4EdR6tVKdSmErh9OlTGJk0FVqdmbdRCbvZbDi4fJmDg0tsNhsOL1+iHlxm3hxQ503MG60V94a3vpvb6HTMofmoXrQr7WrhSjVkXLdMXHW8Q/W+a9ka2267ea
DdYcrRinc3P9ISnVGd2nv8PBARERERERERERERkZsPH8tqj95bhC4pRVVdiogvpURtnWSd1WoPgO1mw+bwMpaMadoDh9V6xWq1orXOPG9xr/Q5Ktq8ZGhX/vifUooKNe/R4pOYs2g402qKeWzZRiVXpo9Zf701nEayaLna+9I2cyaXCXJaumWO9o1Lq9XDXdiQxhzKPtqLjv6PoyKvMFcnlcx6vcIN5lrJXvA+5lSSdi0w05gd2b1DitC1tYb1Bt6wsmIzb0kW1XvdEsk7vTk+2r6693hdo4VozHz0aF/p4LXjacwnNEg5j0qtCEhSLrTeyClH4DdXSo6qRm9Op9MxLMdrmLdb6tIC1OKxSsnRwjVBjQl5FCus9tdYziRLbLdbequUacIYwZYZvm04TpkKiUaxCFKxRB89M5N3VlOmznPMYMxppEcRiLXxegzDLdFrZz1NUZ2YE2mv0JuPkM4jjGzxvEYe7XFH+E2EkG28rgiNx74yQuKeR3gM5JTBICcjW1T8mTl1jiBzyiXa7Y4Kut5gO89j1mbCa6dmKCmRDUqOateYE+mjNWzatfctKeLN2jttu40qQzrel1QQRnYZ1aYJ9tbraA9KVHBu50pxRstQ4n5EJF5ytMbFRxUhnbn5Lq0zEtiVCkczx1qn9zqqhuOzaSmTy4Qb5CmTICoUR7vUqOrtpDRRUsEi5cMsAsm99f5omzrmq/aoom69U+dKq/P4OXLAweUDDg4uUectdR4B5XZD3Ryy2W6jYtk7TlT9XtkOcPOoarSl2tF3ldTLnEkjAnRLI5hOmZJ9BNNjPmVfWv7mXYtYEREREREREREREZEXupsOH+mdaVpFu8EebR6dqMKKSrYIa+Z5y2azHW1aM6RM98Y0rQG4fPkSvc6jramTpmjX6d3JuUTIs7TK9Ag4rBspj9arbSb1znYEK7VWWsns7e1jpdAMvEfLyDSqmZq3CJS2m6hqGhWVOCTrlGzMnWiNWla71qh9hDMpJ6r7CFyjLWhOCUbFZ8w+bHhPbLcbsuVoLTsqPDsRdGaLf/nSOrJFjBeDF0fY16PazoqN1o5ReWYpMa1W9FbH/ECj1TqqsqD3MZcyRUvV3hutdaYyjdA0A0bvhpuxnbfknHAzzDKrUijTis3hAVvfkFJiNa2ixWyrtHkecy97vKYOs1dWZaLkEi0qcSxP9NbJOTGt9ke1XISivVYubw6jFWYu7O3tUfI62qm6Y6OFLtlJY8Zia05rfdeW1nBWq32i8NVZraZoJ4uTrI+qPY8WsUTIaGZ0i/fAa4WUyKlQUszway3Wcaki7bXj5DEbtGPWmUrB3KnuzNsZM6eUqPQ1c1pto2oz9tvDbY25nxa1uNnjdXiC5EtL3mh9amMf6C32IVLUVfpu9mPMeGwtKv7YVXYmUmq7YHRvNSqFfYXlEvNTtxvytBcnDBgRmPbKMh1yaWG7BNWMa7AUj1FnsPgMuEHq0Nt2V228NB9NI0Tuo0UvRFXoNE0RYiYj57JrMbsL4VOG7uSyouAxEzQXppRgtcLOnN79LPDxGeytMdcth5cucXBwwOF2Q6+Nebthu9nQe6W3UVnZGvQ5wmVveGsRpo/5tb15fP7G7M6YnZlGi91R0Z1GnaVF29c2QmMREREREREREREREbmV8BHIlphbo7cIJPIucKisVmtajyrAUgplhF59tA9NONvtId47KSWm9T42KvnAsVyi8q/N5Bzz7iKMqaN6K5FTYjMn2nYDZjEjMCdoLWYi0gAn54wxsd1uIkzIBW/RkrWUidrrCE2iWrC2FgGMRUtNcqa3GnMEx2zAMmboWaQTEZwwJgP6lSov804vI6DFd9WXeLSAZbwW72NuoAO5sEQZxqjUqo08xXp0dyYgMdo9jiAzTesxp7IRrS4jwGs1Xo9ZtFr16hHimpESpFJo9BFeJVLKpFG9uFrvR2WcO73OHGw35JzpvZFTVFwurUct5THHslPnLa1V9laraC2bPFqwdo/3yIGc2Y62nTnFHNGcc1Sp9s5cK807yfOu2nRZL7NEGxV1zDNsG9M0UVZrSrIILTFKGmlRLqSSKXWKtq3urEpmrmU3V7C3Nt6TNEK7CB7Jid4gWcz4NHNab1ElOipxV6s1q5Li37XRxns31xoBHEu705hduYwdbL3RjF3b4D6CZOst9imHmmJP6D0qZqdptWsLammEhh5BXW0RJCaDaSrR/jYlci7UreOrfcCZax1zMIGU8dbx0Zc3Ir2x/yfDWxszQWMOZfIrjWbTaqJWI7VG807zRiLmwJacYt4rSyIf7V2jLavhPQJZGK2Bl7bESyvTXqNK2WPeqo9hjXE9EYT3RjeYyimyG9NqzdlYLXDGPM94XTEnNULH1hq9xnq1cflSWTnXDe49QnHiPWmtjZMKfLfWvVVSa+Sl1bGIiIiIiIiIiIiIiNx8+Ogj6PDuo1qpx2w7nEQESN3A+zayn1zIObPZzNTWmHtnvbfPtH8q5jtOhbk5dbuJGZCWR3VTtEZMJaoNe5rotWLulASrUpi9Q+/kUsglgrxa66hcSqOdZYQQ3pxpMppFABlVmkvL0QpEBVnOabQ4jdmOOWfIhVbrLoiwlPDWsUwEgDnRPUKsZInWW4SM20Yu02jjmKI9KmPOXIsix5YS5Dzar452l7tZexG6zPM2gkrvVIe5XqbkjOVC7T1m1UG0xSSNGYDL3L685D/MtUXb2WTjPYsqSVIerTczqUzknCgkDtscLV9TpnYH65Q8Kt1SzBFkqXrtjYODgwhoc4n3ECLc6Y3WnWJQcgF39td7EVznxHZzCNuZ9bqPTS4kt1HhGpWx1sGssFqtKGmm9QjSaq0jLIv1Symq1VqN4CpbIrlj2fBcWJmNVruwaY0+ttMtZjcmi3auqRCVfsnAo6rRxn23PVq65qmwmlKEc060R+2dMk2sVyt672y3M9liHuqqlJjbOM+7SlUfLU07Y+ag2y6c7JGy7ioxW9uwWk0sVY9GtCXuPloKj4rAOgLxPPajnEbY6LEf0qJiednHYt8kPgMpL51ZR1VjzGKNNqvxODknvDlpmqjJoLXdfmuWI0Qd1bre+64quJR4vDpXAPK0iipaY1S7NubxuqNycmnnnKMKtPdddevSgrfXhucSMyy94W7R/tXS2L8mVq0TO1C8n0ZUSi/zPltvMWt1/EwzRlVva8zjtRENiaPd63bLdt5GeBkLJyIiIiIiIiIiIiLygnfz4eOo5EppzN3LZcwkTFHxNUcrxmm9h+VC7lGBmFdGm2csNcyius4xam2UUvBSmDcb8uS4GdN6PebMGev1NEKmqIpMBtVhYsU8b8k5M63WbDaHUZE3Zhrmkqke7RPzKtPmCowZdTnT3alEaJFyoeSyqyC0lEm54D0qnMp6D+8jqOoNSzaqBGN+IHWOuYZ2pSLNco6ZfbVGBWePto5TypBi3l9tWwxGO8825k9GoGQtKkJTWQLEHG0vR9DEaNe6qTVCvTH3sJhF5VeHnKOKq7ZOrY1RXEaqlZxiDUqCbomO07eHo41sokwrZmDeHpKsQY9AMCbnWQSVl
kl0aus0ogq2tUb2NOZqLm1gjUQEu3lUs7ZaSUyUaaLVxnY7x4y9UWnYRrhJH607S7Tq7CnBqKSd9gqlZNJI2rzVUfE2KkpjSOcuIMTB6XjrtKVlpjvZEq1XWo3bz7WSc6Z1G61d02hj66RkpNFu9PDgkGyQp4kyFTIxz7G2OuZOjpamQK0z3mvMScyZbPHZIC0VqREwd4v9w8aMwaWS1Ucb4Kgmjs9B7Gsww24eqPdEtQjFIlw0UjcYJwK0WqFusTQqHt3Z1k0EvSnFjNPeRuVfZRlsGJWiPiqBY93SaE26BHc553g/LOZOeiw4rdVoI7w0kLUUl7XxeRqJZ6ujdtQMG/ezCWgRZHqt5DJFGOyMz5RTSqa18W/3Ub0MjM/qlereCGBTyiNU9V1VarIRznolpSnCYZYgP3769dZihumRSkgREREREREREREREbmF8HE1rVjt7TPXLSkl6jyTU4QhEUhAtDUtpDyNwKVFtVTPrNenRiAS7T/pnXl7GG0YU1RGlZTIKQK0EuVZV9pNZuijXadZoUxTtHA1mKYV3ju5ZLp3ErC/t6LVTGsVTwn3hPVO9Rl3j7mSOeYFLvPuItwwkjmUAmPmH72RS4FURoiWdte5GblMEUimRJ2B3nEiJItqQSPGAXa8+S5oTMaoXuuUNIJGA0uFToQseI9ZhR3oS4vQPiovyy5Qifl1LSoA84qUE701ptVELn03M28U2QGMuZxThIKeY92zURLk9ZopZy5dusC2tZitaIkpT7HOvbOdoxK2lBKBDIz3LtrbWma3DksQ5MQszdaj5WlrETxaMmpv0eKztXjvS4lQs0brWRuzB1NaJg1Gu9Wcou1nBM+FXGI9lta6ZrZrvbnZbnbVu9NqFfMEU6Z5VHtOU1Ro9t7JeTWqKRsdmGxUx452qNvu5JGTplTwSAUj9LMIeC1FGBbzH21XYRlhoY8GpD3ahiZjlQz3CFS7O9GxN1oGe69YTlFxakZvY3YosX9ZzuSSKbmQs8UbbXnMFb3SwtR7pdXKdrOhjXXzEeAtn4UI10Z4lzO9VmqroyL0ShVpzETNu4A3ijOj+taw0Yo1gxu1zVArvdUja1Bj/yX2n5gFGUFyq5ByuVLhOW9HG17b7YPufcwlZezjPaaPetsVJ7onjAmIEwy8ezxbshivSR8VzhVLdbePp6XKsvdoY5wSxVIE+5Zv9kepiIiIiIiIiIg8Rz7wnsTPf+3nnvRm3DEeeeQRAB5++OET3Y47idZE5HjcdPi4f/oUqUyjQ2ijTNOVFMuibSVpFRVqPQK1GC9XWO8VcjLc+6ioivAilzUlZbbzTK+bqKj0Gq0sR2iVc2G73TLVhuWJeVtZraZRGdZpdYR8JUcoMxltO1PnQ7xVptUeENVnzR1GxRPuEZCN6rycM5byqBxcZkJGS87qnYxH8DgqtXqr+DKz0mCU1lFKBitRJdVbVLhZIuP0pR1litIqT0TQNCon+yg1WwKgaBFrOIa3TspHWluOtpbJLGY0EjMnY2RlwlKOCrZ5Hs/XxwPGa63bmead7jOMwLX3Tpu3lKmMmZUwrVaU7qN9ZsZH29N53rJ783u0PS1TVGH2VmlL69rmEa6WAiljIwR1d+bupDIqzTxCRFIGEtFF1UfI67tALiWIvrc+5kL6qIKzXbVhbFbcr/dRTTlatbbW6d0puUQr35wooyy0YlFl2TuVPlrSRtAccxQjcIp2u5k8QrHuRBUjsV+n0aY0WYRVqcT95tqXFaO3emVuYIuwOv5pmHdI7ILwHi96hNCdhmE5RQiXGOFdBPvTNEXb1RQViJYyrcd2ptECuNXK9vCQeZ7HddFid6ksTmajjbCT8giDR9vSaEEcJw/UWiEXaDH3tCd28zuXmY5pzFE0c1Y5M/cRCvYWIeVIMc1srFsir/fjOl/WINEtbu99Hm1UV0sXWtzjs9H7aLtLx6xgFlWb3cdJBO5RfZpsBPujkWsfW2sJ3GL25DzasVqOkxIY80ZHhbSaroqIiIiIiIiIiIiIhFtou5rpY2bc8sf6CNoSvgQS46/53mrMRix5/LG+0tzonahu8s5qtWYZSjhNhZaNebuF7UwuEx3w1iKQMaM5MCrbOnHbaSp4HxV1o0IymXFQR3VjWQGjlWSNkM1yGVWSJaoZHVI28hKSJLsSVozL0piNmFKi1gZ9BkboM4IYupMsWsem0UqzbjejEsuiOrB3PKdRyQYRbUbwYimPijEb8/watTWK26j+avQ6qvksJm3GdkewChF2RctX6HWmtzrSldGKkg5Ee01LGevx+rxVRtNLusFc65VtHK/bLNObj/aUne4e7VRT2q1bttj2TRvzCpvTmkcBa+ukTgSAY47gaprGa3U2h4e06uyfmrAU2xDVh0sYOyo3e8y49FHW1keb1ZwyU0m0FlV38bgVG+1g624OqI0wsDPXtmvLmUsm5RyzLc0otVJbzGmMrNpjf0xGSYkpG70nukWT0aU96pX2orZ0gI1ge1RLtjrjOVoE24joujm1xv0iJ44Pijsxq3CE28lSBKqAjTajrc7UDiVnUhmtVwEbbWPrvMUZfURxtpsD5u2G2tvS5ZTmV4LeqIJtWErk6G862qwaeZrwXOi9sZm31BptU6MqcNq1h10qbG2EtbW2ES5bzFndVVeOql4fla4GKSemKQJpeoufCWbk5GM7C0aC5KS+/AjJ432Pkx0Y8zzNoESKC+bsiqmPBNSMdsJLkO1EtWlaTaMKll0Fc+zTM605o2RSREREREREREREROQF76bDR3DmVukt/kif86iMoo0QIUKhPsKICAZ7tL/MebTSNGgt5iO2BjmzbZU2gsQpGZQJd6NutzEfsXXW04TlZVZcJREtMxnVhDbCjto75jCt1qPqMqqp5mqkKUKyyBEK0KM9qkeo4Us7WKL9ZHcnGTEvzhlBpcecwtYw76QUrU97j/+u84x7j1lyOdGnaSxdrEkflVyOjXl3RBqC0fpYRwCPaj0Y8/aWYDVdaUObRzVjnbdX2rhm8BpBTlRxRYi3zCpMpUTQOUKx7g3aTMXGexghnNd4j6YpXlvvTpmibWdMc3SmUsb8vI63GnMKRw5rsReMRpoR3jpO9UYfFYqRBMV6GE5ZrSI06j0qJHftSmEpcWvdSb1TCiM4jLmGJUfL2nm7jZa188x2nqmj+rKN19AxpimPtTXqEvKN92h0vY0q1dFetFm0HM1pRIUOtUUIHJWEo3wuRXBXawWiKjGSRN+Ftb13LJcxOzXan0ZlZqKUTG0RhnZbPnFOXcJWPLarO9ZrtN4d6+PEXNVUZ9pcsVQoUyGlxjzP2Ngf+gjlWo8WvT7es6Wi00fwaRiWRiA6Kju9d7Ilckp4q3EighlptELem6LqEoO23cZnqtUxzzKqLeuYk7jMA81livDfDDdG+Jsp2caaxazJ3toILBPdPFovj8C7RYYaMyiT71rtLlXZ3Xz3WXKPHd9G0Ll83n1UhvZRjZlz2QWOpKVmOYJRbIrZrcoeRURERERERERERESAWwgfD7dbfN6SysR2nllRyNOK3hpzaxHG5PiD
fhmtRlOOGXMxU66PuX2QzTjYbimtspkr8+Ehq/VezG9creL2ZhSDaZpwOvNmG5V10zSCoh5hzZidZ7nsWo2WnCglgqE2SqBKSWNGXY4KJm+4G701VjlTVmtaj8qtMqXdduecdzMCGdu/dFrNKUVlmkd1ZcxwjBAypcyqJOYWFXNWRs7Yo61ra53WWrRN9YznMirlnE60f4z8a1RdWhotWyFb2lWcdXy0PY0KulYjqEopQYoQtI6EJqLQeF6AtJuzt8yDdObN4ZjBWNgeHkZL1pSJjqhTVCh6pZRVhH8WYVJv45F6hIK1Voxoc1lnY1qVaOeZMo2YdTlZREW9EwGyJXx5bSMYjna4IzjuUTm4mevYrmiZi0HvFQfmzSEXtlu225mUC3lUM3qK4CwnWCojp1yoPWYERtAZFZVzj8rQPBXMGz5C8N4aZjF3M9qCjn0vRQDurY4uotFmlDGTsM6jUnOZ+ThahcaOlHehc0kp9hGiZSyA5Wh3OsYpxmUJaBEits6oPo0Kz943pFSpNe613Ubb3fV6Tc4RvK7X+2xzZbPZxJzDsR4ZqGZRIDvKInO28TqiHSkW+1P1WP/VKrPKOfahXscGjn2rR/hMjxCx1e0IRte7NTM7UkE8eqHWecZbzIytc6W7jZarESb3FCc05FyiAy9Oq7FPLxWnfVQxRtjfIzZfWsqW2BdiBmnHex6tfcdJCSNstZRIY1+Nn0cWLXyd3fsjIiIiIiIiIiIiIvJCd9PhY68ztc6k3ihlFa1ARyvLbIlpNe1m+fXeKTlH1RVASnSisqjOc8z/IzHPM/M8s5pWrNZ7MQew1giMSmYMh2Oujd5azHWrUdGWLBGxV8wiLCWTUqLlFJVSRKWTWaLkCG96c7abQ6bValQDZtarVbTh7J1pNZHLKebRsjQqqjo9RdDoIxTrvWN+JXywXCjuzFTSaPM5oo9dsBNz8pzWG6QcsyKX1qYpXsvSWjPbWEcfjVlzGhWVNtrQRsWae4R23Z3W2VVdOhG+ZO+jSjEC1zrmakb7z4oZ1DEDM6UImyytyLmAV2oyplxi7qQ7vVdqr2PWZsXoUdXoPd7j8f4uK1NbVECWnKijLeZ6VcbMykat22h/mzMtxazEZCla+aa+lIrugtGSMp4j0FtaYNKjwjDCsdiOMq2wnNluNvQe1bSrqURFnxFr1FvMACWRM7EPeGKZJRrFctGC18vEqjfm7SHb7TwqE68ErEampDLCs9i2mF8Y+0DsD0Ye6xyhXB9hWASOvfuoAo3wcplvGVlaBJlt3B5iBuWu/M4s2sV6tMSNkDP292QRjG83h0w55m6mnJhKxmyfuTV6rdFO1Nm1q13Czj4+00sb3rlHS+NpKqScWJVCTqNCtfsImG2EgWNyqo1dvDcODw/Y33emvVPkaaLXTYTl432GaMtcW2c7z6Oq1zHv9F7Z9kqeVuTRJjWXPGZDsvuZ4Bi5JHqdd5XGNtZk9IWOHy1pVK1ilDxmi5J2syCjUnSsoV+phralolVERERERERERERERG4+fGytxrxF76M9ZoWco4Kpd3pttN6iZeFqRY9BcKNCr9NaZd5ud3PrIKrx8hSVab03Wt2OlpBQt/H3/dVqIuVCyoXafcyM6yPQKFiJUHDebkk57UI5ryMo9HlUqPVRMhYhmpmR16uYlWcGOdq40ufR1jGNdoxQphKB0Kg0NLMIzDyqutyXeZcR2SQzenO69QjoSo51s2UmXYSFsARhjiewFM9ZPardcjZaj7aTBrsQs7VOPhKSOI65vV9Q15rTvGMptidaTdqV9p8pZhzupTRm3Y35myXT3djOlVJKBHkjrNtut7u2nCnZqHiM+YDRHnMGYi5kskSZpl1A3WodbVxjDuVS7WaWo2oUp7dOxdhbr9lfraIqsXtUwNVoLztvtxF8LtWHlmgjKptHha1ZtP3deqxN9x6tU3PBm5OngicbVZbRHdXcSXQs5xhz2KNSt5QVvVXmGm2AbcyoTHmirBLJYj2XwDKb4WZjDZZWvrZbX0tGJMXjc2R9hG5jDqIz2p6mEURGCLhtsZ+5xWtONoLabKScx35RYxf0CDZbm0fASmyDpVGVGftW/De0BL2P/b11yIXWOxYDRTF8VBIy9jYoOZFLhHzR4ndUGVoaAV2PsYg95kLmnEj7pwDj8uXLnJlWZIuAz7pBr9Ta4sQEb2DRWtgMvDcO5w21jnDTZlousW4W1Yw5p3HyQ3yO4o452sraqHrctXONfSIu9xE02njPOykVcjbS+IzFezkqVkf17pU6VBERERERERERERGRF7abDh/X+6fxESI5UQ2VnJhxN6qQkhm5jDaSIxyJOXxQ57jdtJ5iBmQnKpdSBAcArFa0zRaWdqAJamv4CJuMaDXp3pmmVZRTWcYs0Vq0Uk1LEOYxA3C7PYTeRytRwDvbrXH61BnaXMe8xkSymNXHCCYgQg0rmaVKk16xZOQUPVR9VKMt8U4fQUtvFUuZWivZjNHHclfF1bvvwpqlcst9hCTJwFtkJ0tFVy701mk9gkWzmH8YLTrTCJscI43Ws9HGs+Ex37FHFR0jNI1OpdEitfaYp5dGmLftEeqWPCrreo9WsL1hyaKDpkVQaGOGITDC52V+oEVL3DpH986U6Q69RZjYxozM/VOnotKutV17XO/OdrshpxxzMN1H1W2l1ZnDzYz70nJ17NQpWvz21qk1tjNPxt7eirk587yl90IZwVxpjalMrPdX2DJf0X3MH3RyidC09QbLXMZcYm6mxaxT92gp27lSbdp6jyA72WjbaWO+YYmQ1fsodh1B4BKupfgcLVW1S+tZiLWLEDLReovg1ywqDUf4Zt5JliOCzQX3HvMLR1Boo/QwjdmOjPffloAPqD0COEtG8hTBYHdqlCzSx5on0qgQjDmoyzzRqDAcM037mF3Zo+IzvkcAmFcrTu/tUaZEnTfU2nBahKEppoUyPkO0Fp8H4jNnpdPc40QIjJacfjhaHK+m3TZ0b2OfT9FS10YrXAz3BHXGLaZoLuNH3fs4sQLcK81HaD9a2vp4z338rHNljyIiIiIiIiIiIiIiwC2EjzbCmZwLJKjbLZONQGO0P8x5YjtvKSlm9zkJRhvUlIzVep9cSnSnLNBbi2DCO9NqjfXO/t4aS5l5u4m2oC2qo9jWqLbKiZwnLOdobelOrXNUXxFhZ4Qv0eJ0u5nBOylXjEQpmfX+Hm5Ql4q7OmOWI0PtjVwm3KIdad/OkGbmOdqUJoecnTRZhEGjCq22Niokr1QYWopKUW8RGsU8uXhcy3k3I9C9R7iVokKx1pgjSI8KMiDaojJaR5pFJWEudBLeIqhq3UnWRxtSKFZ2lZFLmWStW7IlIvuNdDPlvOTHo8VnG1WJieYRALZ5VJyOtqF9zKEsyckpQxrhWmsRWvUeMzFJ0KICtM11zOzsY1eM5CeXTOrOXKNKtdWG5z7+e6bWRm2d1g3Pq6hizUYe7WCdCAK9d3Iy9k7tM00TdZ6pbRvBtyWK9xFORWBcxzzECLsiuO5m5DSqDPvMdtx3vVoDUN3o82a0KM2jTauPNqi
jkq6P2Zat4d5I2TGi3e7u8zQqIpcWrDb+f22Q8pVWn+b9SBXu6MNLzCBd5k62XgHf7X+9VkrOo6Ky0T1CYRuVnQmjrNb0VmnbymyZnAHvu9mJY/DhrqrS0tIWOOLM5TW7R8Ad+2gEfX1pb8vS1TTHzNKS6NVjX4/yV1KO966RWJcJX1oem9FjfGPMk00R4efRytWykc1ozaLyealspY8qxbg8ihRjX+4t9ututjt5AHxUNDLeO+IzCpALqfdxIkSkjT4CWsYJCiIiIiIiIiIiIiIiL3Q3HT7+7j/45ar1EREREREREREREREREZGddNIbICIiIiIiIiIiIiIiIiLPDwofRURERERERERERERERORYKHwUERERERERERERERERkWOh8FFEREREREREREREREREjkU56Q0QEREREREREREREbnd3nK+89BX/n8nvRl3nu+5/Wvy6Nd+7m1/DhG5c6jyUURERERERERERERERESOhcJHERERERERERERERERETkWCh9FRERERERERERERERE5FgofBQRERERERERERERERGRY6HwUURERERERERERERERESOhcJHERERERERERERERERETkWCh9FRERERERERERERERE5FgofBQRERERERERERERERGRY6HwUURERERERERERERERESOhcJHERERERERERERERERETkWCh9FRERERERERERERERE5FgofBQRERERERERERERERGRY6HwUURERERERERERERERESOhcJHERERERERERERERERETkWCh9FRERERERERERERERE5FgofBQRERERERERERERERGRY6HwUURERERERERERERERESOhcJHERERERERERERERERETkWCh9FRERERERERERERERE5FgofBQRERERERERERERERGRY6HwUURERERERERERERERESOhcJHEZHnGTN7vZm5mb3+Fh7jUTN79Pi2SgDM7OHx3nz1SW+LiIiIiIiIiIiIyO2g8FFERJ43FO6JiIiIiIiIiIiInKxy0hsgIiLH7p8CPw6886Q3REREREREREREREReWBQ+iog8z7j7k8CTJ70dIiIiIiIiIiIiIvLCo7arIiJ3ODN7aLQSfYOZvdbMvt3MHjezS2b2o2b2WVfd/rozH83sVWb218zszWZ2MB7nJ8zsq57ltvwOM9uY2ZvM7KFxmZvZI9e5/RvG9Q/d7Ot5tszsDcC/HP/8X8ZzLF8Pj9vs1sbMPtvMHjGzJ83Mr9626zzHI8ttr3HdZ5nZd5nZe8Yavc3MvsPMft2z2PY9M/tH47n/hpnpf59FRERERETkjmRmn2hm32pmvzyOf99pZt9nZr/tyG1eb2b/2Mz+y/j7w3kz+zEz+5LrPOYj45i4mNmfHn+3WI6t/6KZrZ67VygiIrdKlY8iInePDwL+FfAfgL8FvBz47cB3m9nvcPdvfbo7m9knAN8LPAD8MPBPgFPARwBfDfy5Z7j/nwC+Fngj8Pnu/vitvBhu8fVcw7eP778L+CHgkSPXPXrVbX8L8NnAdwPfBHzgDT7X+zGzrwH+DHBxbMfbgFcAvwr4EuBfPM197we+E/hU4E+5+9feyraIiIiIiIiI3C5m9vuAvwk04lj2zcBLgE8A/iDwD8dN/ybwH4m/P7wTeBD4HOBbzOw17n69k6D/HvBpxPH6+XGfPzGe43ffhpckIiK3gcJHEZG7x6cDf8Xd//hygZl9AxHgfZOZfbe7n7/WHccZgt9GBI9f7O5/76rrX3W9Jx1VeP878GVEYPnF7n54qy+GW3g91+Lu325m54jw8RF3/+qnufnnAJ/j7t9zU1t+xKjU/DPALwGf5u6/fNX1T7e2H0gcUH0o8N+5+999Fs/3U9e56rW9dx555JFnu+kCXLhwAUDrdgO0ZjdH63bjbmXNlvuKiIiIyPExs48AvpEIBT/N3f/jVdcfPf79SHf/xauuXxHHwF9pZt909fHz8CHA65YTns3sfwJ+BvidZvan3P1dz2I7r3vc/Ez3ldvnbjkW0rHbU2lN3t/zZT1u93Gz2rqJiNw9ngT+7NEL3P0ngb8L3Af8pqe57+cBDwHfeXXwOB7n7de6k5ntAf+ICB7/OvBbjyl4hFt7PbfqO44jeBz+8Pj+Fdc6cHqatf1YImh9JfAbnk3wKCIiIiIiInKC/gBRzPLnrg4e4f2Pf68OHsdlW+BvjMf4tdd5jj95tNOSu18i/k6QiOpKERG5C6jyUUTk7vHT7n6tU1IeIar9/ivg/7nOfT95fP/uG3i+feAHgE8hfvn/Szdw32fjVl7PrfqJY3ysTwYcuJEw81cDXw5cAD7d3X/m2d7R3T/+Wpeb2U+llD7u4YcfvoHNkOUsNa3bs6c1uzlatxt3K2t29uzZ490YEREREYEb+NuCmb0a+JNEyPhq4m8MR73yOnf9yWtc9rbx/f5nsY1Pe9wMfNyzeQw5fnfLsZCO3Z5Ka/L+ni/rcbuPmxU+iojcPd59ncuXliP3Ps197xvfr9XS5HrOEr+UnydmRR63W3k9t+oZ27TcgPuAJ9z94Abu818R6/tG4D8f47aIiIiIiIiI3C73je9P+7cFM/tg4qTf+4EfAb6P6H7UiK5MvwtYX+u+7n7uGhfX8T3f4PaKiMgJUdtVEZG7x0uvc/nLxvcnn+a+58b3651ZeC3vAX4jMAH/0syu197Euf7JLPc9zePfyuu5VX6dy/v4fiOv5xxwv5ldfRbn0/kG4JuAXw985w3eV0REREREROQknBvfn+lvC18OPAj89+7+sLv/EXf/Knf/am7Pyc0iInKHUfgoInL3+Dgzu1Y9/MPj+799mvv++Pj+G27kCd39B4DPJsK4f2Fmn3KNmz0BfMDVF5pZBj72aR7+Vl7P9bTx/WbPhnxifL/W67kH+BXXuM+PA0as07Pl7v4HgK8HPgv4/8zs9I1tqoiIiIiIiMhz6tn+beFDx/d/fI3rPuP4NkdERO5UCh9FRO4e9wJ/5ugFoxrxi4kqwX/6NPf9LuBR4PPN7L+9+koze9X17ujuPwL810S14PeZ2dUHCj8BvNrMPuuqy/9n4AOfZptu5fVcz2Pj+6tv4r6MGZT/GfhUM/uII9uVgb/KU2dUAPz18f3rzOwpZ39e67Ijz/dHgb8AfCbwvSPgFBEREREREbkT/U2iBepXHT1mXhz528Kj4/vDV13/64Hfexu3T0RE7hCa+Sgicvf4YeD3mtknAT8GvBz47cSJJF/q7uevd0d335rZbyXmLPw9M/tS4ozFPeDDiQHw1/3fBHf/12b2a4DvB/65mX2hu3//uPqvEO1Dv8PMvhV4HPhVwAcBj3DVwcZxvJ6n8XPE7In/xsxm4C1EaPot7v6WZ/kYfxn4v4AfM7NvAw6JcHACfgb4mKM3dvfvM7M/T4StbzKzbwfeRrSV/dXEOr/+ek/m7n/azA6BrwG+38w+292fuN7tRURERERERE6Cu/8nM/uDxBiRf2tm3wG8mWix+iuB88Tx8zcCvxv4NjP7R8A7gI8kOgb9Q+LYX0REnsdU+S
gicvf4JSLUewL4/cBvA34a+Bx3/9ZnurO7/yTRBvVvEhWJXw78d8Qcwz9z3Tteuf+/JYLEC8B3mdnnjst/APhC4D8C/w0xOP5R4BOJ8O+2vJ7rbGMDfhPwo8BvJQK9P0cEoc/2Mb6ZOBPzHcRr+W3AG4FP5cp8i6vv81XA547b/UbgjxGB7JuA//dZPOefBf4EsWY/YGYverbbKyIiIiIiIvJccff/gzjR9p8RfyP448DnA+8F/sa4zb8nQsg3EsfKfwC4B/jNRHApIiLPc6p8FBG5i7j7m4AveIbbvAF4w3WueyvwB5/F8zx0nct/FnjZNS7/TuA7r3GX1/P0VX/P+HpulLv/G6KS81rXvYHrrM1Vt/u/iOrHqz38NPf558A/f4bHfYSYD3mt6/4yUXUpIiIiIiIicsdy938FfNEz3OaNwK+5ztVPOS5294ef5rHewLM4lhcRkTuHKh9FRERERERERERERERE5FgofBQRERERERERERERERGRY6G2qyIicscys48l5kk+I3f/6tu5LSIiIiIiIiIiIiLyzBQ+iojc4dz9Ua4zJ/BudIOv52OB/+VZ3varb2JzREREREREREREROQYqe2qiIjcsdz9De5uz+brpLdVRERERERERERERBQ+ioiIiIiIiIiIiIiIiMgxUfgoIiIiIiIiIiIiIiIiIsdC4aOIiIiIiIiIiIiIiIiIHAuFjyIiIiIiIiIiIiIiIiJyLBQ+ioiIiIiIiIiIiIiIiMixUPgoIiIiIiIiIiIiIiIiIsdC4aOIiIiIiIiIiIiIiIiIHAuFjyIiIiIiIiIiIiIiIiJyLBQ+ioiIiIiIiIiIiIiIiMixUPgoIiIiIiIiIiIiIiIiIsdC4aOIiIiIiIiIiIiIiIiIHAuFjyIiIiIiIiIiIiIiIiJyLBQ+ioiIiIiIiIiIiIiIiMixUPgoIiIiIiIiIiIiIiIiIsdC4aOIiIiIiIiIiIiIiIiIHIty0hsgIiIiIiIiIiIiInK7feA9iZ//2s896c24YzzyyCMAPPzwwye6HSLy/KPKRxERERERERERERERERE5FgofRURERERERERERERERORYKHwUERERERERERERERERkWOh8FFEREREREREREREREREjoXCRxERERERERERERERERE5FgofRURERERERERERERERORYKHwUERERERERERERERERkWOh8FFEREREREREREREREREjoXCRxERERERERERERERERE5FgofRURERERERERERERERORYKHwUERERERERERERERERkWOh8FFEREREREREREREREREjoXCRxERERERERERERERERE5FgofRURERERERERERERERORYKHwUERERERERERERERERkWNh7n7S2yAiIiK3yMweW6/XD7zuda876U25q1y4cAGAs2fPnvCW3D20ZjdH63bjbmXN3vSmN3FwcPC4uz943NslIiIiIncnHTc/lY5Tnkpr8lRak/f3fFmP233crPBRRETkecDMNkAGfuakt+Uu89rx/T+f6FbcXbRmN0frduNuZc0eAs67+wcd3+aIiIiIyN1Mx83XpOOUp9KaPJXW5P09X9bjIW7jcXO5HQ8qIiIiz7mfBXD3jz/pDbmbmNlPgdbtRmjNbo7W7cZpzURERETkmOm4+Sr6nfuptCZPpTV5f1qPZ0czH0VERERERERERERERETkWCh8FBEREREREREREREREZFjofBRRERERERERERERERERI6FwkcRERERERERERERERERORYKH0VERERERERERERERETkWJi7n/Q2iIiIiIiIiIiIiIiIiMjzgCofRURERERERERERERERORYKHwUERERERERERERERERkWOh8FFEREREREREREREREREjoXCRxERERERERERERERERE5FgofRURERERERERERERERORYKHwUERERERERERERERERkWOh8FFEREREREREREREREREjoXCRxERkRNkZq8ys282s3eY2cbMHjWzrzez+2/wcR4Y93t0PM47xuO+6nY/90m41W03s9Nm9sVm9vfM7D+b2SUzu2BmP2lmX2Fmq+vcz5/m68eP91Uer+N4v83skWdYg73r3O8jzOwfmtl7zOzQzH7OzL7GzPaP7xXeHsewrz38DGu2fH3AVfe7K/c1M/stZvbXzexHzOz82N6/c5OPdcNrfzfvayIiIiJybTpuvn3bdaNrMm53veOUdx3Pq7s5x3TM+1+b2deZ2Q+Y2WPjdf3os7jfHXkcclJrcqcez97qethN/i1p3PeO3EduJ3P3k94GERGRFyQz+xDgjcBLgO8A/jPwicBnAj8HfKq7P/YsHufB8Ti/AvhB4N8ArwW+AHgP8Cnu/l9ux3OfhOPYdjP7bOC7gceBfwn8AnA/8PnAy8bj/1p3P7zqfg68BXjDNR727e7+f970C7uNjnFfewT4DOBrrnOTP+/u9ar7fBKxX07APwLeBvwa4BOAHyPWeXPjr+r2O6Z97SHg9de5+qOA3wz8rLt/1FX3u1v3tX8HfAxwEXg78bPo77r7l9zg49zw2t/N+5qIiIiIXJuOm5/qhNfkUeA+4Ouv8ZAX3f2v3MxrulXHuCbfTrz+Q+LvBB8J/Ji7/+qnuc8deRxywmtyxx3PnvDfku7IfeS2c3d96Utf+tKXvvR1Al/A9wIO/OGrLv+r4/JvepaP87fG7b/uqsv/yLj8e27Xc9+t6wZ8LPDFwOqqy88CPzUe5yuucT8HHjnpNTjBfe2R+PXxWT9vBv7TeI7PP3J5In7hduArT3p9bve6Pc3j//3xOH/kGtfdrfvaZwIfBhjw8Hgdf+d2r/3dvq/pS1/60pe+9KUvfenr2l86br7j1uRR4NGT3i9u45p8CvC6cXzx0Ljvjz7N7e/Y45CTWpNxnzvuePY41oOb+FvSnbyP3O4vVT6KiIicgHHG1S8Qv7h/iLv3I9edBd5J/PH+Je5+6Wke5wxxRmIHXu7uF45cl4D/AnzgeI7/cpzPfRKei203s98B/F3gn7n75111nQM/5O4P39QLOAHHuWZL5aO727N87l8D/ADww+7+GVdd98HALxJnQ36Q32G/lN7ufc3MXkRUBnbgFe5+7qrr77p97Wpm9jBxNugNVT7ezNrfzfuaiIiIiFybjpuf6iTXZFz3KIC7P3RsL+oW3a73anSx+SWepsrvTj0OOck1Gbe7o45nT/JvSXfqPvJc0MxHERGRk/GZ4/v3Hf2lB2D80v9jwCngk5/hcT4Z2Cd+8btw9IrxuN971fMd53OfhOdi2+fxvV7n+vvM7PeY2Z82sz9kZnfiOh117GtmZr/dzL7SzL7czH6Dma2vc9NfM75/z9VXjAPYnycOaD/42T73c+h272u/C1gD33Z18HjE3bavHZebWfu7eV8TERERkWvTcfNTneSaLNZm9iXjOOV/NLPPNLN8oy/kGJ3ke3WnHofcCfvvnXQ8e5J/S7pT95HbTuGjiIjIyXjN+P7z17n+zeP7r7gNj3Ncz30Snott/z3j+1N+MRw+B
vi/gP8V+AbgX5nZvzOzj7rO7U/a7VizfwD8BeDrgH8OvNXMfstz9NzPldu97b9vfP9bT3Obu21fOy4vtJ9rIiIiInJtOm5+qpNck8XLgG8hjlO+nphl92Yz+4xr3Pa5cJLv1fN9P7kVd9Lx7En+LelOeC9OhMJHERGRk3Hv+P7kda5fLr/vNjzOcT33Sbit225mXwZ8NvDvgG++xk3+KvCpwIuJnv6/kujR/zHAD5rZK2/meW+z41yz7wA+D3gVcZbsa4kQ8j7gW8fw9dv13M+127bt46D8NcDPuvsbr3Ozu3FfOy4vtJ9rIiIiInJtOm5+qpNcE4D/G/i1RAB5Gvgo4oTKh4DvNrOPeYbnvR1O8r16vu8nN+tOO549yb8lnfR7cWIUPoqIiIgAZvabibM23wV8kbvPV9/G3b/C3d/o7u9z94vu/pPu/luBfwy8CPhjz+lGP8fc/X9z93/m7r/s7ofu/nPu/qeBryB+r/wLJ7yJd4v/YXz/29e7wQt9XxMRERERkTuPu3+Nu/+gu7/b3S+7+8+6++8nwqZ94KtPdgvlTvBCOp59Nn9LeqFS+CgiInIyljOb7r3O9cvl527D4xzXc5+E27LtZvaFRCvR9wAPj777N+KbxvdPv8H7PReei/f7/yTmGnzsGNb+XD737XK79rUHgC8CDohWRTfqTt7XjssL7eeaiIiIiFybjpuf6iTX5Omc5HHKSb5Xz/f95Lid1H5ykn9LulPfi9tO4aOIiMjJ+Lnx/Xo93T9sfL9eT/hbeZzjeu6TcOzbbma/Ffg24N3AZ7j7zz3DXa7lveP76Zu47+12299vdz8ELox/Hl0D7WtP9buANfAP3f3cTWzXnbyvHZcX2s81EREREbk2HTc/1UmuydM5yeOUk3yvnu/7yXE7qf3kJP+WdKe+F7edwkcREZGT8S/H988ys/f73+NROfapwGXgx5/hcX6cqKD61KsqzhiP+1lXPd9xPvdJONZtN7MvBv4+8A7il8U3P8NdrueTx/cbrZh8Ltz299vMXgPcTwSQ7zty1Q+O71fPgsTMPpj45fstvLDW7feN79dtufoM7uR97bjczNrfzfuaiIiIiFybjpuf6iTX5Omc5HHKSb5Xd+pxyJ26/57UfnKSf0u6U/eR207ho4iIyAlw918Evo8Yyv6Hrrr6a4izwL7F3S8tF5rZa83stVc9zkWideNpnjpb4cvG43/v0dYPN/Pcd4rjWrdx+e8C/l/grcCnP1OrVTP7aDObrnU58L+Of/6dZ/9qnhvHtWZm9kGjZShXXf5i4P8e//wH7l6PXP1DwJuATzezzz9ynwT8xfHPb3J3v5nXdjsd57525PpPAz4c+Fl3f+PT3O6u3NdulJlNY80+5OjlN/kz6q7d10RERETk2nTc/FQnuSZm9uFm9pSKNTN7CPiG8c/n/Djldhy73YA78jjkJNfkTjyePcm/JXGH7iPPBXseviYREZG7wviD+xuBlwDfQfwy8knAZxLtFn6Vuz925PYO4O521eM8OB7nVxBnVP0EEXB8AdF3/leNX7Ru+rnvJMexbmb2mcC/IE7E+mbgbdd4qnPu/vVH7vMG4POAHxm33wCvJc5ey8D/AXzpnfgL4zGt2euJ+Qw/SpyR9zjwauBziBkFPwn811e3EjWzTyL2ywn4R8Qv6L8W+ATgx4Bf6+6bY37Jx+K4PqNHrv8W4EuAP+Luf/1pnvcN3L372hcCXzj++TLg1xP7y4+My97n7n9s3PYh4JeAt7j7Q1c9zg3/jLqb9zURERERuTYdNz/VSa2JmX018BXADxOVWheADwE+F9gD/jnwm9x9e9yv+Zkc45r8auD3jn+eAb6IWIvvXm7j7q+/6j535HHISa3JnXo8e1J/Sxr3uyP3kdvO3fWlL33pS1/60tcJfQEfQFSNvRPYEr/Afz1w/zVu6/E/3dd8nAeA/33cfzse75uBVx3Hc99pX7e6bsDrl8uf5uvRq+7zhcA/AX4BOH9knb8L+PyTXpPnYM0+CngD8B+Ax4CZCCB/BPjDwOppnvsjiFkI7yMOPH6eOLtw/6TX5Xav25Hr7ifaGl0G7nuG57xr9zXirOln9bkizjp9ymftZtb++bCv6Utf+tKXvvSlL33p69pfx/g7+fPmuPkk1gT4DKLV5H8Gzo1jwvcC3w/8Tkah0928JjyLvxVc57nvyOOQk1gT7uDj2Vtdj2ezFlz/+PaO3Edu55cqH0VERERERERERERERETkWGjmo4iIiIiIiIiIiIiIiIgcC4WPIiIiIiIiIiIiIiIiInIsFD6KiIiIiIiIiIiIiIiIyLFQ+CgiIiIiIiIiIiIiIiIix0Lho4iIiIiIiIiIiIiIiIgcC4WPIiIiIiIiIiIiIiIiInIsFD6KiIiIiIiIiIiIiIiIyLFQ+CgiIiIiIiIiIiIiIiIix0Lho4iIiIiIiIiIiIiIiIgcC4WPIiIiIiIiIiIiIiIiInIsFD6KiIiIiIiIiIiIiIiIyLFQ+CgiIiLyAmdmD5uZm9lX38bneGg8xxtu4D6vH/d5/VWXP2pmjz6b24qIiIiIiIjcKh03i9wYhY8iIiIi8rx1rQMuEREREREREQk6bpbboZz0BoiIiIiIXMc/BX4ceOcx31ZERERERETk+UDHzXJHUvgoIiIiInckd38SePK4bysiIiIiIiLyfKDjZrlTqe2qiIiIyAk7OtfBzF5rZt9uZo+b2SUz+1Ez+6yrbr+b02Bmn21mj5jZk2bmR25zr5n9BTP7OTM7NLMnzOx7zezXPcO2fIqZ/YvxeBfGfT7hGrd7hZn9GTP7MTN7l5ltzewdZvb3zOwjnuE5nvE1Xv06n8Uavt9tl3kcwAcCHziuW77eYGb3m9llM/tFM7PrPOZ3jds/5fWLiIiIiIjIc0fHzTpulruLwkcRERGRO8cHAf8KeAD4W8C3AR8PfLeZ/fZr3P63AP8MuAB8E/CtAGZ2H/BG4CuJsxq/HvjHwKcA32dmX3qd5/8k4BFgA/wN4LuBXwv8iJl92lW3/fTx+OfGY/9vRPuW3wL8hJl9zDG9xpv1KPA1xOt/cvz38vXt7v4E8A+ADwaecmBpZh8A/Abgp9z9J49xu0REREREROTm6bj5+DyKjpvlNlHbVREREZE7x6cDf8Xd//hygZl9A3HQ8U1m9t3ufv7I7T8H+Bx3/56rHucvAh8B/G3g97u7j8f6i8BPAn/NzL7X3R+96n6fDfxhd/+GI8//BcC3A99sZq9x9z6u+kHgpe5+4egDjIOnHwO+ljgIudXXeFPGa/vq5YxOd//qa9zsG4HfDXwp8P1XXfffA5k40BMREREREZE7g46bddwsdwFVPoqIiIjcOZ4E/uzRC8bZg38XuA/4TVfd/juuPoAysxXwJcBF4E8tB1Djsd4M/DVgBfzOazz/LxAHFkef/zuAHwI+FPi0I5e/5+oDqHH5zxAHWJ9pZtMxvMbbZjzvTwJfYGYvWy43s0wcRF0A/v5ztT0iIiIiIiLyjHTcrONmuQsofBQRERG5c/z0tQ5MiJYuAP/VVZf/xDVu+xrg
FPAz7v74Na7/wes8FsCPHDlD8xmf38w+d8x3eKeZzctsCODzgDXwoms81o2+xtvtG4luIL/nyGWfA7wK+DvufvE53h4RERERERG5Ph0367hZ7gJquyoiIiJy53j3dS5/1/h+73UuP2q5zTuv81jL5ffdyvOb2f9IzMR4gmi98lbgMuDAFwIfQxxI3fRzPEf+AfB1wO8zs6+1i/b7AAEAAElEQVQdB5H/w7hOrWNERERERETuLDpu1nGz3AUUPoqIiIjcOV56ncuX1iZPXnW5X33DI7d52TWuA3j5dR7rWT+/mRXgq4kDn49z9/c7YDOzT7nO4zzr53iuuPuBmb0B+KPAZ5nZfyRmbvzr0QpHRERERERE7hw6btZxs9wF1HZVRERE5M7xcWZ29hqXPzy+/9tn8Rg/R5xJ+TFmdt81rv/M8f2nr3Hdrzaza/1+ePXzv4g4A/SN1ziAOgN83NNs33G8xhvRgPwMt/mbxAHplxIzKzI6e1NEREREROROpONmHTfLXUDho4iIiMid417gzxy9wMw+Afhi4szGf/pMD+DuW2II/Vngz131WB8C/BFgBr7lGnf/MOAPXnWfLwA+A/gF4EfGxe8hDtQ+fhw0LbedgP+da8+sWNzya7xBjwEvNrP9693A3d8M/ADwG4HfD5wj2sqIiIiIiIjInUXHzTpulruA2q6KiIiI3Dl+GPi9ZvZJwI8RrV5+O3HC2Je6+/ln+ThfCXwa8GVm9iuBf0kc2Pw24uDqy9z9l65xv+8Bvs7MfgPwM8CHAr8ZOAR+z5jrgLt3M/tr43n+g5l9B7Aizg59YDzfZ17j8Y/zNT5bPwD8SuB7zOyHgQ3wM+7+XVfd7huBX0e0t/nr7n5wzNshIiIiIiIit07HzTpulruAKh9FRERE7hy/BPwqYhj97ycOen4a+Bx3/9Zn+yDu/jjwKcBfAh4Evhz4rcBPAJ/t7t94nbv+a6KNyxr4MmKGww8Cn+7uP3LVbb8K+ArggGi78puBnwQ+EXjr7X6NN+DPA98EfAjwp4izWr/oGrf7TuB947/VOkZEREREROTOpOPm46fjZjl25n6teasiIiIi8lwxs4eIg4v/x91ff7Jb88JkZh9MtMj5MXf/tJPeHhEREREREblCx80nT8fNciNU+SgiIiIiAn8MMOAbTnpDRERERERERO5AOm6WZ00zH0VERETkBcnMXg38DuDDgN9NzOv4thPdKBEREREREZE7hI6b5WYpfBQRERGRF6oPBv4CcBn4fuAPuHs/2U0SERERERERuWPouFluimY+ioiIiIiIiIiIiIiIiMix0MxHERERERERERERERERETkWCh9FRERERERERERERERE5FgofBQRERERERERERERERGRY6HwUURERERERERERERERESOhcJHERERERERERERERERETkWCh9FRERERERERERERERE5FgofBQRERERERERERERERGRY6HwUURERERERERERERERESOhcJHERERERERERERERERETkWCh9FRERERERERERERERE5FgofBQRERERERERERERERGRY6HwUURERERERERERERERESOhcJHERERERERERERERERETkW5aQ3QERERG6dmf0ScA/w6AlviojIneAh4Ly7f9BJb4iIiIiI3Bl03Cwi8n4e4jYeNyt8FBEReX64Z71eP/C6173ugZPekDvBhQsXADh79uwJb8mdQetxhdbi/T1f1+NNb3oTBwcHJ70ZIiIiInJn0XHzTXi+HjPcTlqzG6c1uzm3sm63+7hZ4aOIiMjzw6OvfvWrH/ipn/qpk96OO8IjjzwCwMMPP3yi23Gn0HpcobV4f8/X9fj4j/94fvqnf/rRk94OEREREbmj6Lj5JjxfjxluJ63ZjdOa3ZxbWbfbfdysmY8iIiIiIiIiIiIiIiIiciwUPoqIiIiIiIiIiIiIiIjIsVD4KCIiIiIiIiIiIiIiIiLHQuGjiIiIiIiIiIiIiIiIiBwLhY8iIiIiIiIiIiIiIiIiciwUPoqIiIiIiIiIiIiIiIjIsVD4KCIiIiIiIiIiIiIiIiL/f/b+Pdi6L9/vut7fMca8rLX27bn9bn06p89pknMwlAhCCYrkcClFUhRCKUUCYiqFJVHAQiskUGChlpUgZamRIkIBBUJiRVOIWDEhinRAykuCEDUGEpLT53T37/pc9mVd5mWM8fWPMee67Gc/v9t5Yne6v6+up397rzXXvK21ds0xP+M7xlth4aMxxhhjjDHGGGOMMcYYY4wx5q2w8NEYY4wxxhhjjDHGGGOMMcYY81ZY+GiMMcYYY4wxxhhjjDHGGGOMeSssfDTGGGOMMcYYY4wxxhhjjDHGvBUWPhpjjDHGGGOMMcYYY4wxxhhj3goLH40xxhhjjDHGGGOMMcYYY4wxb4WFj8YYY4wxxhhjjDHGGGOMMcaYt8LCR2OMMcYYY4wxxhhjjDHGGGPMW2HhozHGGGOMMcYYY4wxxhhjjDHmrQg/7B0wxhhjzNvxS7eZb/32P/DD3o0fLX/IzscJOx8Hdi5O/Yiej+/+zl//w94FY4wxxhjzY8Tazb8CP6Jthh9pds6+OjtnX9m/8Desfti78CCrfDTGGGOMMcYYY4wxxhhjjDHGvBUWPhpjjDHGGGOMMcYYY4wxxhhj3goLH40xxhhjjDHGGGOMMcYYY4wxb4WFj8YYY4wxxhhjjDHGGGOMMcaYt8LCR2OMMcYYY4wxxhhjjDHGGGPMW2HhozHGGGOMMcYYY4wxxhhjjDHmrbDw0RhjjDHGGGOMMcYYY4wxxhjzVlj4aIwxxhhjjDHGGGOMMcYYY4x5Kyx8NMYYY4wxxhhjjDHGGGOMMca8FRY+GmOMMcYYY4wxxhhjjDHGGGPeCgsfjTHGGGOMMcYYY4wxxhhjjDFvhYWPxhhjjDHGGGOMMcYYY4wxxpi3wsJHY4wxxhhjjDHGGGOMMcYYY8xbYeGjMcYYY4wxxhhjjDHGGGOMMeatsPDRGGOMMcYYY4wxxhhjjDHGGPNWWPhojDHGGGOMMcYYY4wxxhhjjHkrLHw0xhhjjDHGGGOMMcYYY4wxxrwVFj4aY4wxxhhjjDHGGGOMMcYYY94KCx+NMcYYY4wxxhhjjDHGGGOMMW+FhY/GGGOMMcYYY4wxxhhjjDHGmLfCwkdjjDHGGGOMMcYYY4wxX4uIqIh854e9H8YYY350WPhojDHGGGOMMcYYY4wx5keOiHxHRPSHvR/GGGO+GgsfjTHGGGOMMcYYY4wxxhhjjDFvhYWPxhhjjDHGGGOMMcYYY4wxxpi3wsJHY4wxP1JE5FvTfBH/goh8W0R+v4i8EJE7EfnDIvIXTcs9E5F/RkQ+EpFORP6oiPw1D6zvUkR+h4j8h9Nyr0TkXxeRv/6BZX/TtO3f9IZ9e20eCxE5F5F/VET+PyJyO+3nnxGR3yci/8kH1vGfmo7pYxEZROR7IvJPi8gHX/ecGWOMMcYYY4z5yXavLf1rpjbppyKSReQXRMSJyN8ztZ3XIrKZfv4tIvLgPWIR+XkR+edF5Lsi0k/
r+7dF5Ld8yX36rdP2/x0ReXz0+Be2i+fjAX7d9Lse/fvOr+xsGWOM+XMt/LB3wBhjjHmDbwH/d+BPAv/C9PvfAnxHRP5K4A8Bt8DvAx4DfzvwB0Xk16jqLwOIyBXw7wD/MeCPAv9T4CnwtwF/WER+i6r+0193B0VEpv34TwP/V+CfBSLwU8BfA/zbwL97tPxvBv4ZoAf+NeB7wK8G/m7gbxKRv2Led2OMMcYYY4wx5mv4NqUt/aeA3wMsKG3nfwn4jZR26D8LKKWN/U8BfxXwdxyvRER+PfC/ARpKu/d/BVwBfzHwDwK/+007MIWZ/1Pg7wP+FeDvUNVueu7Ltouvgf8e8JuAn55+nn33q5wQY4wx//9n4aMxxpgfVb8O+EdU9X84PyAi/yjw36c0pP7XwH9DVfP03P8R+F8C/8D0D+AfpwSP/wzw96iqTsv+48AfA36XiPzrqvrdr7mPfxElePxXVfVvOX5iamxdHv3+a4D/BaWR9OtU9QdHz/11wB8G/meUxt8bici/+4anfv5r7L8xxvxI+853vvO1Xnd3d/d2d8QYY4wx5s8ffxXwO1T1H54fEJHfQAke/z3gr1bV9fT4PwL8EeA3isgfUNXfOz3+FPi9lHvHf62q/pHjDYjIT71p4yLSUkLPvxX4J4H/1lG7/Uu3i1X1GvjHROQXgJ9W1X/sy54AazcbY36S3N3dfa2285/rdrMNu2qMMeZH1XeB33nvsX9x+m8D/Na5ATP5vZSqw/8EgIjUwN8JrIF/aA4eAVT1TwO/C6iBv+st7Ovu/gOqmlX11dFDvwWoKA2vH9xb9t+g9Pj8m0Tk/C3sjzHGGGOMMcaYn0yfcFolCPCbp//+9jl4BFDVDfDbpl//7qPl/6vABfC77weP0+u+/9CGp6FV/0+UTrW/TVX/vnvtdmsXG2PMTwirfDTGGPOj6t9X1XTvsQ+n//4pVT3pnqOqSUQ+oQx5CvBzwBL4d1T15QPr/z8D/wjwl/wK9vH/C/z7wG8QkZ8G/nfA/wX4Y6o63Fv2r5z+++tE5C9/YF3vAB74NRwN1Xqfqr42jyTse3b+pV9p740x5kfcL/zCL3yt152f2/0qY4wxxvzE+uOq2t977C8FMvCdB5b/I0DitG38V0z//YNfYbvvUqY9+Vng75yrKO95K+3iL2LtZmPMT5Lz8/Ov1Xb+c91utvDRGGPMj6qb+w+oaizTLL7+3CRSelHCYcjTj96w7Pz41dfcvznw/GuB/y7wX6IM8wpwJyL/IqXicu5V+mT672/9gtWefd39McYYY4wxxhjzE+/jBx67BF4+0El2bmc/pwR/s6vpvz+4v/zneI9SLfl9Sqfch1i72BhjfkLYsKvGGGN+XM0B5XtveP79e8tB6QkKD3TOEZGrh1aiqq9U9R9Q1W8Cv5oyVM1/APy9wO9+YH8uVVU+599rQ9oYY4wxxhhjjDFfkj7w2A3wWESq+0+ISACeArdHD19P//3GV9juH6cM1/oN4N8SkZ99w36AtYuNMebHnoWPxhhjflz9h8AW+IvfEBz+NdN//59Hj81zNH7zgeX/si/aoKr+R6r6zwG/jjLX5N989PT/bfrvf/aL1mOMMcYYY4wxxrxF/x7lPvBf/cBzfzVlqNPjtvHcfv0vfJWNqOq/DPztwAeUAPLX3Fvk67SLE4CI+K+yL8YYY364LHw0xhjzY2kaTub3AOfA/+D4ORH5NvD3AyPwLx099cco1Y+/UUSWR8s/Bv5H97chIj/zht6cj4AG2B099k9O2/ufPNAAQ0RqEbFg0hhjjDHGGGPM2/bPT//9Hffaukvgd06//nNHy/+LlErI3yIirwWWIvJTb9qQqv5+yrQkT4E/IiK/9ujpr9MufjH991e9aZvGGGN+9Nicj8YYY36c/XZKj8q/d5rM/t+kNID+Nkoo+feq6i/OC6vqRyLye4D/CvDvi8gfoMxZ8TcC/xbwl9xb/18M/Csi8keBPwl8CDyjVDxWHOaARFX/AxH5zZRG358QkT8E/KlpuV817ednwM+/1TNgjDHGGGOMMeYnmqr+XhH5mylt4T8hIv8qZXjW/yLwM8DvU9Xfc7T8cxH5jcDvB/5NEfmDwP+L0j7+j1NGC/qZz9nevzZt738LfEdE/npV/eNfs138bwD/ZUrb+/9A6eT7S6p63JHYGGPMjxgLH40xxvzYUtWXIvJXAv8Q8LcC/21KQ+X/AfwTqvqHH3jZfw34BPgNwH8T+GXgdwH/BKWhduyPUXqJ/jrgb6BUPH4G/LvA71LVP3hvf/5lEfnjwH+HMuzrfw7YUELL3w/8vl/hIRtjjDHGGGOMMQ/5DcAfAX4z8F+fHvuTwP8Y+N33F1bVPyAifxnw24C/jtJ+fQX8B8Dv+KKNqeq/LiJ/I/C/pwSY/3lV/aNfo138zwI/TRnO9R+k3M/+I5yOYmSMMeZHjIWPxhhjfqSo6ncB+ZznP++5bz3w2DWlsfTbvuT2e+C3Tv/uk3vLfh/4h7/Meo9e8/8GftNXeY0xxhhjjDHGGPN5vkRbOgP/1PTvy67zTwB/15dY7sHtqup3KKMO3X/8S7eLVTVR2t1fqe1tjDHmh8vmfDTGGGOMMcYYY4wxxhhjjDHGvBUWPhpjjDHGGGOMMcYYY4wxxhhj3goLH40xxhhjjDHGGGOMMcYYY4wxb4WFj8YYY4wxxhhjjDHGGGOMMcaYt8LCR2OMMcYYY4wxxhhjjDHGGGPMW2HhozHGGGOMMcYYY4wxxhhjjDHmrbDw0RhjjDHGGGOMMcYYY4wxxhjzVlj4aIwxxhhjjDHGGGOMMcYYY4x5Kyx8NMYYY4wxxhhjjDHGGGOMMca8FRY+GmOMMcYYY4wxxhhjjDHGGGPeCgsfjTHGGGOMMcYYY4wxxhhjjDFvhYWPxhhjjDHGGGOMMcYYY4wxxpi3wsJHY4wxxhhjjDHGGGOMMcYYY8xbYeGjMcYYY4wxxhhjjDHGGGOMMeatsPDRGGOMMcYYY4wxxhhjjDHGGPNWWPhojDHGGGOMMcYYY4wxxhhjjHkrLHw0xhhjjDHGGGOMMcYYY4wxxrwVFj4aY4wxxhhjjDHGGGOMMcYYY94KCx+NMcYYY4wxxhhjjDHGGGOMMW+FhY/GGGOMMcYYY4wxxhhjjDHGmLci/LB3wBhjjDFvx09fOP7U7/z1P+zd+JHwne98B4Bf+IVf+KHux48KOx8Hdi5O2fkwxhhjjDE/Sazd/NVZm+Grs3P21dk5+3rm8/ajyCofjTHGGGOMMcYYY4wxxhhjjDFvhYWPxhhjjDHGGGOMMcYYY4wxxpi3wsJHY4wxxhhjjDHGGGOMMcYYY8xbYeGjMcYYY4wxxhhjjDHGGGOMMeatsPDRGGOMMcYYY4wxxhhjjDHGGPNWhB/2DhhjjDHGGGOMMT+OHv/sf0bLTwLo/nERAUB1fuz+82/qJ6xlyRyRmz9J7RzJtaQIDC9R1yB5ZBeV4EFUEfE4p5ATXRQQz/Lpr4acCOmGtHofGe4gK9fPv8fZ5SWhXjC27+NxqLz5+Obj2O/ddDwicnRsp7/ff83xa0
XkgXPz5td82ecf2scvsWR5S472aXp0XhECCF9+28frcNOK7p9fndar05ZkXg4BKa9VACnvL1rWIfOK5fiTdH/DevLkQ+9teR9OXzLvh4rut7N/7X7/7m31+CN99C14eLfuLThvkMPD5VdFpi2pCE7Lb1nAnXyXZnn6XVBN02PusNJpe6oK8znk9c/f0Y4ePstHn2mVfNhlOSx7ckhSPiun650XzsxnMCPlfSWDuLJMzkBGxU37wL3zLohGmu4TVo+ekjPocMvd9UsWbQCp0JyAAVWhjxA8BHE0iwX9GNhyzuN6TdaB2y4SB086+wCXFXXT5qbPz+d9hz+PqtK9+NNf/QtjjDHGmD8vfe3w8X/+j/79qjmT83SRNF1kOCc4Vy7unHM456YGhEPw4MrjIIhw0rgAQJUIaMpoyuQ0EseenHYsQuCsqQjeEVOm33XEvifHAYYeshJCQFUY+x4yjCkRFcQ50Ezf9cRRCb6iqmsUpa4qyEq32bDdbBhioq4afFXTjT2P33+Py6dPaZdLVqtzQhCyKjErwxjxAou2QcXRdwNxjDjJiBPQzDD2qDqaegGqdLuOlCPtoqKuHbu7NXe3d5Azq/MldVsRfIU4T4wjMQ5stxvqpkEUQnAIuVzwe8hZqasalYRmJY6Zly9esd1tePzkEf0wMOwiIXhiGuk2O15d34AIwzhyfrHiZ7/9M9zevCIEiFH5j/7sh3z84pZujLz/7CnfeLZCEGLKNCHw8YtXXFxdsVpULNoFwTv6bsdmu6MfEqvVGZeX53z86WcomXefXJFQrh4/Ybft+P4PPuTl9Zqf+fav4Z1n7/An/uSf4tXNDX/Rr/05zpctwXtWZ+fEOFJ5T44jd7c3fPjxR6hmyMKf/qVP+ehmx7Yfee/pgr/81/4cS9+wGbd84xvvc9a2xBzZdlu2mw1e4ZNPr/no5YY/+/ELPnr+ilAFQsgsveNb7z7i8dmK69uOvh95fHXG5UV5X1+8vGHbbfHBIznyaLXi/HzFxdUl4gPbbcdm1823Anh8eUZdeXwdOFsu8c6RVNlu73AoTjxeHCnF8n3JI84JVVWT8+G7hMAwRMaYyXhUBNQj4qiqgHMecCAO8R4hgM6NIYd3Ae+r/few/Bxw4sv3zgmquTRXVHDeUzcLNCspZ5w4NGfSGIlDX76LMaNZSyNbBSTjgkz7ouScyzFoJqeMilA1LSJCjJE8JnLMkDM6/QPQnNAskObHIuKEUAdcUxNz5q7b8uLmFR++fMGHzz/ldrtmHEc0Tw3HqQUsIuRcvoN5+vsk03PuqOWrqlMzb2rUIUBGNU+/uUMDyh3dMIGpwTc19UT49375lTWijDHGGGPMA+Tov6c350tTeA5Hyu9feAMfxUu1X18cRlSVdlFzdxc5bwOJzKKpUBKuWpH7HSqRtnU4EbrbX2SkJp9/gN+8RL1n13cs2gp39tNEVyEIWfIbw7WHAojjx+4/f/j9OKE5ff7zXv9F2/6y7h/PaWg2twlKe6f8f2kjHC8x7cRJPvaV9kGmMO8NT05xEqr6ekAoR3sqh19Os8A37JicLji//PXPnJy85Hgbx0GdHi86N7OmUE72MeH02s85V4fA+fh3La9X2e+sTPHcPnibvj/uDZ+pz9vG6899iTdzem+4/9nWEmgK+Wj95R7Unp5+1k4D+kNbdN4O6qbP3+H3wxutR8uxfyPOzlagmTo09MOA4rjbjTgSQRTnhDFnKh+og9KPIJuM+B6XMy+0QeQcVwtVfkHKevJ50y97nt54+qzJbIwxxvwk+drhYwg1KSXEaQkvNE1XcYrq3Gg4/jc1JqZueLLvYQaZqUFz1NByIqh3aPbgPKRATIlhZLoIU5wry8cxk4eE14xTJaJTN0LBZUh9DxJw3pdeYpoREVLK4ATxHh88Pkak75GYwTnaRQse7q5fUS8anHMsFgt8aAleqMQRxkgc46H1IBAqj3Me70BEEQcpZZxTfHCEsKTrOoZxBITF+ZJ22XB7fY1KJsaIF08Igd0wMAw77u7uWGVl0TbENFIFTxwH+iGSYiK3LS44cswM/UjlS6OyH0YycLfZsFotCE7IWfHOl+vUEKirmvXNLcNuxxYYRuVms2Yz9DgyVUg479isd9T1gk3XE6rAclEDPU29JA4jY0yoC1w8WnJ1cU6MPXWtXD16xs3NDVVVE3xFN9zQNA3f/MYl773zDrfbDhC++cE3OFsuCU4RTTgnhKqi73akNLLuRhKOxXLJ3e2Os7NL5KZj0EgfI58+f0mQwOPHF+SY2XY9WTM5QR0qRDNtG2gah5ConOCAVdWyqh1NqBCU5cKTRbnZbXB+JPiKIEJbNdz2HX3XA57Q1Mh2TVUvGHNmO4zcbjrWuy136w1PL1csVy1tW4GrkCkA1KQ0iwrvPMOQy+c4uen7MAV3ZFIqn3GmIA8tPR9TjoSqKd8nXwJG53zpA6qQYyLFPIWLuQRkUj6TZEhknMgUyE9NOi1fxqZtiewYhoj3HqlKCB6aGvEO5wNpjGgcyWkkaemBmbOW3phOQBzOOyAQWo84R06JcRjJMZXe1w4QXxqQ4qeGjAfJpUGXBRVBvOCrAF5ImuniwN1ux812y26IpVHr3L7np+6bZ9N6neI0T713fWn+TsHiHLiWR/PUcJOyX/hy3jlqFM7tOlWccyUEL1tCbARrY4wxxhjzgNdvtj8cuum+mki/4Aa9QyWRNSN+gYYKJ5E0rHGO0oHQt4zdFucEyRF2t7gwQg74RSBraQ81Xom3v0xsnyDtOb67w/uG5CqCePI+8DnszxdVL37eeTh05HMnp2GueFTV/c9vPn9vw2nQU/ZnqpDT4+enjOl+0HT009fdvX3l4lECeNzR8ZC3yelrOOSHqocoS+b/l+n83cuHHqoivR+EfVn7ZedcSjj6/B4voyfLzqHZ/WLfk0z6dEtHAaZMGZvgcMh07+UQ4MpJAHvYzuk9qbJvnPx+GgC+fpyf1xng9LQdAtC54+v9D8jr3597oanK4f09+aTNwd/Re43uz18JqoVhjFQEgl+zSwPnZ1XpJN8Lu20ikRB1LJfCrg+IW9L5FudGQqjoXUMlgSSO2jskJ9T7k/dm/ntw8tl8Q8WzMcYYY36yfe3w0fsakYRqIueMqhyFjzoNEzPdzN9fKM4XwnOXTjhcUum+U5hOF/6lUsuj1Iw5Mw4bRhLBBZwXvBecF/ACIowxE9OISqksE6ZwJWdi6qnqugxfkTOkSAa8C+SsVG2gXa3oh2GqMnOEumXV1lzf3XB7e4OvahbDiqoOOAUXKtqmJjpXeoVqxMt8aZyQKUBMKSEy7SsJFxzNokIGJceIItRNzeWjC8ZhICfY7bZAwnslpURMiTEmWsrpm//1/YgTYbPZMsQR7wLbzZbKl+rMoe+ni8KMkPHOEyrPxcWKYejxUXEifPLJZ6XazMN2zPRD5rwNnC0azpqG9WbHerPhcbugS5EP3n8H8kgk8Or6Gkdg24/UiwUxZ7IIn7645fL8DHGeH3x2y8/96m+XYNRVrFaXNE3LRx9+j+XFFd/8xjOWdc3Foma72
6IuEOqGfrtlfXND13e8uF6zXLS0VWAIjjF3DLkHzWy3Pd//5FOeXp7xXrik7zckrRgzDLse8ohIpm5qKu/wXli1gadXl5y1FUFH6sqxHXY8urjg4jLwyacvuN10VNVI5QOV96x3kU9e3rFNmWZRE6oK5zJBhMoJ49Dz8npD7DNDnzk/76jrmvOzFeICTgPKiHce5xxpjCQU5/ToIt0h4qZhYQQnJTD34kiSS0OLiFI+486X5cmQp0o+ESlVsCmWSkLnyfuK41IpOX8HBSGnEmw6EbZ3a1JKhKqa/gWcD+AcrnI4X6PJk5JHYibnEY2RlFI5Bif4UFO1NeJ9+fzGUgVJVrJODaxcAtJSIaw471CXkSTgHQSPCx680OdIl0bu+o6b3ZZN15E14cQxfc3Lcc1/RDK46SaCTn9LdP/3xrHvD6yHv1Gl0Tw12JFSUHp0IyRPjdSs5TvD4S/XvjOFMcYYY4wxr3s4Xbkf1uw7tx2Vmr1+Iz8hGsi5J48941DaQ41LZFVUU+ko68FrxocKxBF8ZuiAPOKbtgQWoWLJii7dEW+uSb7GaUDEkUmlfaByUnn3dcLA+TWH1/rpv/nknLy+HK+Fkftlj8qxHgo7Hgrg9svDIfOZqslKlHO4TzEHVDq1ozna3j50+5zjfdOQs/d2hX0l3dFxl8xsLu07Xu/pIR1XPe6XnY9LpqE7TwKr02M/PUf62rrv77McvfZ+9d9xxa7sd1sOj7s5KDx6VuTeVg+h5dFiR1WQR3uix5Gi7t+h18Pr145gzuweONb77+rDgdpJwEk6ekROwsN5DSe1jV/QGWGuspX9e+lK5/m5/XoaPZaf5zdVSigbgrDd3MGQGLLStEty2lI3gvOX5Z6TLGB5xUiFE8XHG6JfsIwbOneOF8WHljyO+FBuG87DAc+B8D7bzW/4/s3n4OjcWTBpjDHG/GT52uFjFWpSKgFIzomc3VQBmZF8uGjPUi7Z96PqP9CT8XAxV3qnZS2VXpp1ClQyKUVyHBnJNLUi6ku9kROcUwZNxHHAiRC83/cKc97jvZBzwst8CaykOJRhLdWRYyTlTKhrFssVcVRUAlI1VLVnKcqu6+j6gWEYaGJN7TykUhFYVb4UpalDXEZTqa4ax4RzDd7LVG01BbGaqYIj+IZx9KQ00udEVVV47xm6ge22RyRRNTXeOZaLBTGPxFQq0rIqOu1/XbdcX7/i1d0tVxeXdP3A6EHEo0mp60BdBVQTznuqtsJLTb7pqUJAXGLX7RhHpW7K8D2PLs4RRhZNQBzE7OgH5fZuzdlZi3eZnIVxm3i53vD40ROch91u4MmTR3z22SvGqFTNgrubW1bNisWi5dXtdWljZsf67o6kytlySVN7HMKL61ekHLm4vKLrB16+eM6rly/YdT39mHh8dV7O19DR7XYEEZwqXT+y60fSFCSH2hNzQjJlSNhBESfUdc3l+ZLLs8BFs+Sn3n/KZrOj30b6qOyGyPu1Z9HUjBdLbjZbRBxxTCiOm23H3RgJ256XN1sWdQMCbduwWjY064YuJra3a9bDwGUXqOsaL4G6TigJRcmUoWP7PpJixHvwXvCtp648Ko4YhRA8EhJJE5ozIg4fHJDLZ1jc9C0uFcflfkX5XmTK0MUpJXLKU0BOGYLVufKtnIfDyVC1LTFGuu0WzYk4+DIsrfeID/iqwgVfhm6tPK7yhAg5RaIbppBxRAR8FUo4vu331Y4kyElLyK2KZsWLzC1VkNKoElfmsHChQipH1FyGz+077jZr1rstMQ04oQSW6srQqgKi5XOEmxunrvw9ca402MofGEDAlcF5SqWpn4qXD1GiUALMuSE634BwqqAyBZBTZwlrRBljjDHGmAfNMUG5uhSZO8GdRBJHPz+whuPAYh4FyFe4y2+Txi3j9gbiNY33IAObfigj8PgaghD7ke0APiuVVqRdxYCSxw7nwakrUwyMHck1eMllX7lXbTVnN6/NWSf74zpK9UqwJ/m1ewDoaXr3eiBzPwiaOw/OYRCH0sCj179WVbYPF/Ph9+mHsoybVzb3RSyrdIeErXSqnoPAo+0f4sqT4USPA9Q3BpBTMJjfkGDKfp1TB8lpIsHTPFJPzpJO+3UIFO+dwwdCUz16/LUw7iQUPgoTj0PYoxWVjuaHZtFrmZ/Orz8EojB/ytgPLysc5jO8H33Ob/v86SzrOgSepzHr8fdu/uC6/UNyNMfi/pi17NGUqIFk7o95+3pU6OZTcHJCSts07491//96WFoPT5wklPOoYIf9PH5a5wG3pt/KAWXAq5JSJnFDHkaGQXF1wyjnVAvh1TrTNue4yjNULcvQ0Y0jY17C1OGZ0BLimhjOwVXQ71BZlvsGU6oqU7O6TG3CV2LDrhpjjDE/Wb5+5WMI+3kUYyrz1ZUKyAyuBIZFnq6xDkMXwtEF1Ym5l1gZpjGOkRhHUorEfkvqN4RWyFlw4sha5pmrqpqxGhn7vlx+aSTFTFW3ZE2IE3wZg4a5C1nOGZ8SCiRxU4Wgo12dEbMwjhkJgXq14qpt0OuXDMPArtvRtBVVtaDMaZfKeaCEEVUViBJBHcMw0vc9bdsg4kkpk1O5qHNNGcbGB0fqYRwHcnZT+JHxDrbbDQtVhmFASQxDz4u+4+Lygrau6LuOfhwJVTXNJ9jRVCVASgirtqFtW+rKs9ltgIzzQqgrRJWkyqPLC168uiFnGIeRR48vaeqKnJXb9ZqUMyqBu/WO9W7g4uKC8+UCncKs9d2apqp5dXvH9z/9hKeXVzRNhZPM00dLYr/js0+e884H3yANHdvNHTFCXTdUVc3jy0d4lNu7Ld/78EMIgZ969oy7V2vQGhWlXTRojrz7+IK2hmFQghfOz1p+8PIlu6Hjom0JOJw4hr6jrSrWdzuWyxYfPH2OnC9XtHXNZVbevbwibrdcNBVp2JFqx8vbO7oY6YbEYqFcPT5j24/sdjsWTUtSx6JuuFwCOXO77bjYdrRNBSmyalrOFg1RYbPtyuctCt/jIwLK+aKlWQSatgIU8UJG6PsBSCwWNZorgDLXopYiXT8NoZvIJZhEiEmJOZLSSEoDzlXlRoSyr4AsIaSf1nNopJQRXBWHIsHvOwDUVU2OqTQApXQmyMPAqALOI34owxNXDb4KeO9xLuDrgKuErEJNmVsxjYmu60jjCCmXCkIEp2UoqLkKUqU0PksbJCNu7jDgppFPM2OKbHcdt3d33K3X9EOPiOK9QJqWm/5+zO3Pcj8gIXhynoP6Em6q59CIFCF7yrA4WmLHuXG/b2Dx+lwrkuchpLUMC/2GG0XGGGOMMcacmMOPh6r5HqjAen3eRPbDbUpocdWCZvUUd/ddxuEOp550/k10vSEvG2LzCHfhaHbPIWc6Ab/4AOc9uvmYvLhCXACEetyhd5/sg5g5BLofOp4MpXkSnh6HPVO4hyAy3xs4DiA5CqHKhg6Hqvt1z//bh233XjNv83MDzHslg4dhZI+CQo4fmrY5V9zdX7UI8+gpIvrGlsCb57w8PvY3kZOjuB/3zoHsvOf7t+qQpu1fD4e89+Q+zPRZOt7APlh9ILu8
HzTtF5lWqcdVf9POlP9MIdl+3XpYZn79flSewxGe5tWH+PNokf1r5s/Im96MfaB6FITu08zTU7bfsZNhT4870B/tT3kf8v490eN1yGHPTl88/XicPu4f12k0Hg7nq0S7h3PB0ech5ymHL21pn4WbXaB1NeP5FVk84AhZQW5KP9zQouq4HWoal2hlPRUAwOAXVK4npZGqUSogHY0OVO7zle/b8Xk4HVb58H2b30+ZntMv/NwbY4wx5sfJ1w8fqxqXYqliEsi5NCrKnHW5zKOm0/yPlAuQedhDZBq4REoIsh8mAoBSWTQ1IUiSGePI0PekYaD1jhx9ueDSMi+cqxx1XZPrmrHrGGMqwQqlGktTIqUErkJd6W3qNKF5wPsGySNpgD47FmcXnF1esNt2pUosVFR1RdNtWe929H3PMPQ0tcc3FTmXUMNJmXcueI93jjwNKVkCWSUED5Sqr4ySpipRzYqbruRTSsShpw6BnGG33ZFTZkyZMSZQz6vrW5Jmnj15QoyKkOiHHYRyoRvjSFMFfO1ZLhfUdYO4jLjMYrFi0a6gHxnHARccVV3T9YlhTLTLJW2z4PJyxWZ9R+Uyjy/O2A0j690tq+WSy/MVy0XD2arho4+esx1Hlk3Df/Rnf5nr7Y6ri3NElLZuWLUtf/YHn/Dp3R3ve2W327LZdSQtlalNXbHZ3vD85XNuusQnz1/wl/zan2ez2VDXLc2yJaaWbrPm7OKcUNdlOM6sNKsVetdzux3oBsWHyCL2XN/d8snHDd0VZOdo8WzWO1JUVoslQqaqAm3d0k0BdVPtiHGk67ZUzYLNtmO1bFDxiPMs2wWgDH3Hu5c1j84bXt7uSJpJSJl/MTi6WILoyjvImZgS6y4RbpS2eUl3vuKZO6eqPF0/sGwr2rZi7JWclDGNjKlCxkAmM/R9qZANU8blPOqFHJWUMzGD5ESOA9llUD9VIIPqXNU4TXxfvjDk/VApbmqvagnwm4YqeLpxINQlAN2P7Ztz+TEnYkyQE3lwZB+QUOFCqYqs6prgAsMwEvsyB+zcK5K5ehBHFSqSlPBRNOIA50oTStw0X2TwiIcxJnZDz81mw816zbbfkrVUPYp39xpllApjnRvLZYgo72W6f+E43MvR/SEeosPpJsJ0AyHvW+9TAEkClbJ+vx8jempgWiPKGGOMMca87lAlJ4fQ8HOuHb+oMmif2U2dFWW6ziWDapn2IwxrxrpCBCocOqwJYUG/eAzDltS/RJbvQHMFw5a8eIbTEXWhXEPrIfBjrvw7aluUvbg35/kUmMzhaAl7UllOyxMPVs299thRiHSS+s1Vo/O25bDstI1DhKiHgHA/Wsm89qMQ7N5eHN4dd/LM/lzslzwKoh6YS/Hrej3IOVT7HUdPh+zweF9KeKSiZahcnYfFnOLbuQ00h0DT/81nTe6d/9eO5ThXhnv3cKa9uRdW7oNTAXcSTh2WVw4VnCXrdYcNHAfGD55b4bTCVKapS0Cm+zPH+//6XJeny9yv6JV5rFj2p23/GXt9Pkg5ee3xWu4PLcu9Jd/87H7XT385Ps/7akSPqtANytg+RfKGLDVMUyXVTvF5JEsJzLNmvASGJDhXUflbZLgjkXH+gtC/pJMLnO/KiEpHx3Oy/w91onjwgYe++8YYY4z5cfe1w8cQAjqHbFHIzpdgTdM0DGsCdBrSEPYXw/Owh6JIPr54ObmC2ldjeRGGnInjyNh39M4xDG4agjKgKqSYSDkyd7BSwPky12LfDeQ04qfhWMVN809OV3maMxDL/HhRiVXF6vKcUFVsdj19t2O1aHBuqsaMI33fUVeOugplTsacyOLx3u8DRyRT1Z6chaHryL48772gsQwjW4WaNPSMw0jOkSpU7IaBYehBhCTCtutBIeYMbg54EyE4ogcdSgXiom5L5VpOVIuWdtWwOlsSxxHRTBUamnaFhAY/ZupFS1M9pR8ju3Eg5sz5oqZpKlLK7LqBZ+884fL8nF/6pR/w+Oqc4CvqOrBcnTHkxHrX0dY1203PxXLJB+8+5Z3H55Azq7NL1ts1L1/ecXn+iJQzfT8w9iPnZwtyVjabLarCzbbn7OKKX/X+N8iphLJPHl+xaCr6jVJVFeOYCaGmbRpSUsg77tYbhiiMGda7RO0HGhyf+Q0x15xfnXFzvWHod/hKynsYM3nMtE3Fsr1k0VaMQ0UcK56erwh1ze16TVPXrNqGi7MWzcrzl7cMMfHoYkHM0PWJPCZ8huAF74Q8Kj4rj5cNYzfQDQPqhfWYeHFTwtwnFy05NWy3PY4AuDKMqZT3uBsjylgC574nOiFMIXFd1WjOjONIjhlxHs2JnCKamYY9BZzDS8CJB/J0U6I0vkUEchl+FVWcKDioQijzj+ZcAkD81FDKoGUux6zzUDqC5kxKA8SIC4HGgc+eboikWOa98OIQHEnneWBL48Q5wTs/DdM8VWBONwFEfOmFLcKYEt04cLtZc7O+4267oR/G0gz0oaxzqmicm4HiSjgoGZRYbspQft/f8JkaXIf/TUMiT3ck5gaUmxrvHD23/+MxD58zn1NjjDHGGGP+HJjDlUO14RRmzNVjMk/qkMji8OffQLXD1Qt8fcEYb6l8oG8ukeSQqi2dYbvPcMtnuOEGiQMaQolJ5qnhpzaE7qv87kckp9OnlEfcIdVSB6STAOeBo5v+7ccxAVxpg3Aa3MK8mTmJmx7X49Dr6N7C8T4LiB4Nn/pgArKPxDgeHndPDyHSXBVZRl06DmR/ZU6qx6bzPtWbHW3jMEVEOa7yw1wtdwj9jqtNj17DPESuHAVqR4eplEpVOQ1D599Ptn3vPO77bh49f1yzJ0IZ/eZ4R+9FqPM9nfng5m3cr7rdf+qmEl2B0+/Jyd7pyU9zm/Zk3+9X9TIHjodwdx+QHi1fXnMIM08rNO+fn+P3snxETz41c+B6lNPNEfQhwDxef5EBJ6WdmyWXIoGps7Gq4vHEvEPrM+o0MrqACzXzbEkkxVUtLvVlLtnYIa7C717Rx4T6iPhwSGDfYP78vlYpzNv4dhhjjDHmz0dfv/LR+1JFmPNUQFXmlpPpam4eklWmOR50H1xMY9ZrCSbnHmnlX7lYEw7hoyaZApeBbrejEc/YekgR52rAEVMixbEEod7hKBVROU0Vic7hkDJXImWfRJQsCuQSSDESvJL7HUPvaVdnZBrGGMswocGzWrQ0VUXOmWEYp4B1mn8vJVQdzpXGUhxGQghTD7/MMMQS2CrldblUsuGFbugYui0XZyvGKWw5W65Qpcw1OCbwgnjHoq5pqkBTB4ZOSXHkfLUkZWHRBBZNTdvW1FWFiNJ1G+qqIsaISDlXY4o0dcP5xTm/9P0P6ceIrzxNHfAeNps1EjzvvPcOaYycXV5xN4xstztWZyuquuWX/+wvkrOn6zoynp/9qQ9YLCpe3d7y5IPHkHv6rqMNFRdnKxZ1zS6ONFXFersFgSePnrFZb3j25BHf+OADRByfvnjOk8ePcA5SjATn2W23LJbnPHr8mHEYWG93vHr5ivXNDe9enPHOuefl7Q0pJnZjJIQKjZHr5y9oFy2
X5wsePzoj5QFU8F44O6vxzpPGHSGUIUr6cWTEse0j/SfP+ZmfepfgM6pC4x3VsmG5aLleD2Qcq7OapvbkcSTjyNmxXLR88/2n3HU9d7sduBoZ4XbbcdZ67rY7kEA/RvIYOTtblLlHXYWTMryq5ghahgYey7gohFCGGx5TJI5lOFHv3NQnN5PySNclUlTqpkUqjzt0Ut4H+k4EfAkWxXnEC1XdICEwDhn1YWoSuX14CL40naZKSEWnSsgSGIaqggzr6xvimHHO70M550MJABP78FPQsm8ISKkILk2mqQnllKSJ3TBwt9lyfbfmdhpuFc0Ecbi5oaZTj1Sde+y6ErRCCRlxc43j0T2LPA33MjXoRY/aUXNlaAloy9wlU29t0r6ROo/c5MofM2tNmR8Zv3Sb+dZv/wM/7N340fKH7Hyc+DE7H9/9nb/+h70LxhjzBY6qoL5Cp7XXK6vuBx6HEMYpOBJDgrBYITnjozKk5/imJTWPQBO4VPrR1ef4PBJ3r6gWj8i7G6gf49RNbfFSWVhG/sjcD1EOZYSn8/wdjvho+NgvnBTuMNXB/Pv9jPD1cLFsu+RO/qGz98D+zkmbHP1YHj9MxThVUL62z3LYz6n6Uu+t4yQkpQRSDwVa8zG+Kcc5XVaO/s2tj9Je0SlwO0qoTsM2cVNkpfuHdKr+U47Ox4PccfNpeoP1KOi9F+YpDwZOCKUKc16H3Nvv+/bVmkcrODnPD7zkaPFybNPIP8w1imVakMOINvdCzC8I1O5va97PeRsnQfgbDmt+dF9POw/zK0zT7hxWrnI6D+bhJXJyrPN5n4e5nZf0fuqAPIXXDkdOG1xKSFZkcU7or1G5BK+AJ/lMGyKbvrT/F1UZoQjvYbebKle1nNvPOV+fdy51+v58lfNtjDHGmD///YoqH3POOCnjvud5FELVEnAcVVrNQ4/ODZD5gmPut3fodTY1bKZfnSuBoRNHjmXIxxhLhZefkhURLRPjaalATLkMuzjEEoT6aQhTmXrYHfcXE1Uk53JhLaA5IaIMQ4+rK1ZnZ8RYqsoW7YKmaXAhIKHs4ziO1LVHcoKkpDxO4U0JUfu+I4SA925fDTqOkb4fqKoKFS37IDDGkdu7G8Yx0e86mspTVxX9docmpW0bxpgIPhBCtb9YFlGa2tMPA+dnCxZtQ3COrI6+78tws1XFGEtDc4w9m82G87MzvG8ATxUCIYSyvWFgjCPn52d454g4Qt2w3u64OL/g6tEj1jevGHY9oWkh9FwsKt559ohXNzdcXV4xxkhOysWqpX+64snTK87rmu3NLTEJ+Ip3n14iOfPOO4+pFw2L1YKbmztWy5qmDuUcNMv9UL1nZwtEErvdGs1lmNKL5ZL33nvM9z78DEFLReKYGWJPHMvcgsumoQmBui5Bta88IWcaXSCaud2M3K57Pnt1R5YakYqXt3csa8euH6i84p3n/HxBCBUueBbJ09QdWTOJMi9hjErMCfEVi7YulalZyWMZ9rcSYb0buL7bMg4wDAP90BGaEkCToApVmeRdE26a/1BTeZ9L4D0wpoGcIIT5++JwrgxVLOSp8aH78LK0Ah3iy3fJuwrvA276nDovVFWDiKPkhKU6WPDT+nXf47Z83qZvalY0Q1VVOBG63Y6xj5Ah7Xuc6lTp6HGVp3QLENC4/xY6N/fIFDSXXttjTgxxZN3tuNttWG83jGMPmgiuNMUipbFX/takfQNOScz3LoT5psnUyJWjHrLzHwE3NVA1nzwuInidgsrpwEX8dC9E9r1Sy2GWSk9jjDHGGGO+yBcOq3rv5vzJXHMPvVaFHLe4HImqtBIYRfDjJ4TlN9DmEaC4/fCpAg5ye0Xd3xDziJCQMaK6xjNOgSPMMcj9edrmjsWft1+HqsWvdvyvP//wOo6r3N68zjc8Lqe/HIeb94fIlH0ZaPl93xlx/ncSnM5eDx1PK+oeCjgfyHXmYGluyBwHp4cdev3w5rbRvipw6th9cmynw6QegtLXVrc/WufmezmlnTjPC3ryWWAOIqe7PftwVO6vDvatzONlDsHeFw1X+toO6/3awnuv2K/+9H05rZh8w/ZOQuajzFGhTOTx8OdQVe99Og5D+h6G7J3jYDmE+UfvxxxOyvFbPp1DneYzKp3ePb5qyGNPdudAIuVEk+6QsEKGHck5aM5gvEPDExIDITuuh0Ajgd4vieIRydRxQxzHMg2LC/v9Ob6nd/8cfVG4aKMGGWOMMT9Zvn746GuyRLLkcgWSy1xwITvSPBeklgpDN40pn7OCzkOzCjJVEt3vcci9i/S5IlKnQCWlDLWAc+Sk5GlOx5TK+P6KEmMkj4l6Cv+cd6i4fc9N5zzgS9GSKs47fBBcAAnlQtcHR71YEocICs45fKiQypPySEqZ2A9TcKmoL1WNwhQojiMxjtR1VQId75Ds2HU92922VL2pIJoJXthsdywWDW1ToSUzpKocWgtnlyt22zJ3YRVqsgpjyiVE8o5u6AFHu1iR0sA49LjsShWcCIu6wWdhfbvj5nrN4ydPWHlPu1jQNC3kTD9EXt2sefrsisurMzbbHW1zxq57QU6Jy4szhnHgw5cv8W3D+aJlUQttXdH3HZqFmDJ5yHzzp34KUke7OqNpG+5ub+hSOS8/97O/CtXMol2UIHa1xPmKu9s1Ve1Zrzvu7jYsLy5Zb7eIgJdEv9uS4kAdlLrxfPP9d/jBi1s+urtlBzQ4vHh2w8DZouZ8taRuHH2KbPvI1fklOQ+IE9qmQVPi5d2OP/XdjxAR3nt0SfCe7mLFrut5edeTU2S1rHl0vuLR1RVjHBDv2ew2dEOmHzJjFBZnDbnvUU14hFXdsGhatn2HisepZxwS201PHDK7YWA39qxWS86XC1TLkL5OpAwXPIyIE4IXnCufqxgHRk2lsjeVwK1pAlXl8YBGYRgySi7VtaJ4CfjK45zfB+FlXlKPiEemMDulhPOlirEMH+yO2rGlxTPPgSg+lLDSe1Bh6HZlPY2QUyoNo1ya7g7KvCPCFBwK6DQcjTjEaRlOmDKE8xATXRzYdj232w23mw27oUc149zUC3uaz9LPPZOnQFBlamjq3As1HyWNbuosUSo5D6bewFpmfpwHYj20LefEsgSOqm7/9+jwd8oaUMYYY4wx5mFftdrxq1YHOSDnkaQjREFrj75aI4vHZPJJaAFu6rSneBy5uYLdZ7iwRPtbXN2SNJT2hjqQVNrQX3BMrw0XeuTksvlz1vHQ+soy83Cpp+t/vUIQji7+v9D9MHB+/fzwyfqP708cVWV+zhF80dYfXOb0lMxdtU/irv1CJ0OCzrna8XyO0z2P16rnjoK2hyppD4na/M/tlz2e37BkUHr0836X94Hpfp+O3p65n/rxuZj2Zmp+yWHZB07y8a+vfU+OPodlz+fz88WB2JvCtNPtTO+bHM7P/ZD9/ro+v8Ly+KTNqfL02FwJebwoh8XLOuenSpWyOKhFiLsdGi7Lk+MG9SsiFcq2BJ6uQaQn9TtcXSPiSJoRL0h2qAeHY0g14j2aRiQ0COyHan3TcX3RuTTGGGPMT5ZfUeVjjI
Ckcq0khzndRMpQqZpKxWOehl51ktGccDp10dqHivn0QnMqo8xZp3nsxvK66SIslULDqVebzvVZ4ARSCSgVLRdPrsz1V3qPZVKcL5Dz1CdO9kPA1qHChVACmBBIMdIsWrwLxBQRHD74sgyOHMtcicGVMCTHdAgsnENRhmEEyYg4YiyVYlXj+fST5+Q00ISKHMdSgTnlHOcXZwzdQM6JxdmKmJW6rfBO6PtMXbVkdWy7kbqqy9x+Sdn1A4+cgwg69GioqdsWRanaMoTs3XrDtu+5vr3j7PwcjyPFSEyZpDtCVXFxfk4ljlFL9eA49Czqhsp5PvvkU4Zd5PHjC1aLijgKXZe4Xa95cX3De0/f5b333mG5WPHyeqRtWyASs9Ksai6fXFE1gaZdstv1SKh4fPGY9W4gi+NssWC7WbNcNpRqVuXs7AJV2O06ttuOMWaePX3K3d0WzcowpFLNh9LWNZV3rJYt777/DlXt2Gx2OOeIsafre4Z+y8VqwbrvuV533PWRq1VDGre89+QZ4j3Pr2/Z7HasdyPeB3SZcb70TlzVC67OV/RD4vrVLZvdwGJRIc4hItQinC8WLOo7YvQsa8+i8rRVYFEFgocoCimT+hFtaoZxJJPwzrPb9Wy2G87OzqgWC1SFfhgY+pHklOArsiqVD3jxOCnzJ4YwVRBShv8V58t8ku70JkaeOwY4IYQSdo7jAJJxVGU4VZcRpqCttBpxWoLC4D1V1ZAzjEOP5lwCfpQ8ddXUlKcOsVoqg8lkzaX2UUplpogrlYfT1DBJE2OObIeOu37DzfaObb8j54gTNx33dANF0/T9dcjciQBK1XHp20v5Qs2dGwSRdOiZvO/NLfsG8aFn7TzRTXn+0Jouf5+cOLLqfjhbOH6tMcYYY4wxX81DYd5xynCYxkReWyZDmXpBlqB36OYzxFVIdQbaQxqQ0BzCKilX0Hm6ag7Ld4m7TyFtkRGQwEko8iX3++Ewb//ItAxvWOf9Tn2CyNzKfz3geT28Ox4O9Mtcl5++/rD+fHSej5eZ1yuHcOhQgnj48cFA9FjeL/fFAfMhctzPkQjMVZhlaNvyuGo+3IfhaDkO89M/FA6fvmdH5XQP7c394+YQG4JDFLKk/RyU81Cn+3kA5+WPD+U4aJQyXc3xsb92Or5EKD+v8+ht4RDizlWIp9+tw/EdreeB0HY+P/vPmp6uax6Z6PMCx3KeHtrzKULUaeBYdzhvx7OKllGKIE/rLtORlO0qnnGMZEql89jfIuEcp7eMocGRyAheHdQXuOGaFMtULCKBRA2MqNRlPcMdF1fvcLfrpyPXk7fmoeOb9/kkcH7tPBpjjDHmJ8XXDh/LkKgOp0LaD+WRKPMpJrJI6SWZM+T5QlLI831/nceMz9Ok4zr1oprCi6xoUuI4EIceJ5m2coSpikqzkmMs1ZQyhYzAMA5oKsOT+spBVlKMJZDMCukwp2QpgDpMG484QlXTtEt8U5NVGePIslki3hFzQl1plDgE8WHqZVouwuMwMgwDzjtCUyOuVHz2XU9dNeQc8R6Wixo08/LFK5oqEJzjbLUCOrpux9XFBclHUsqcnZ3TD+N+iEskUTcVZc8h5gxOWCwWfHa7JSelChVb8fjg8d4zjGVuytvNmpvNHZvtltvbDd2Tge12RxpHsgq+rVkuFtO+Zpq2Yb3ZkjUzpszddsdms+Hy6hEXFxek1FNVDXHs2faR9ZB49s4zVssFn3z6CSIev1zQbbfUVUMdhapeAoGcMnc3tzx78g43t9fcrLecny2oa4eTmvWmp+8jgqeqyhyiL15s+ezTz3jvnadsth27fuT69przNnB2fs6rFy9QEo8enXFx3vLTP/UeQ4zcXn8XL0LXb8lZy/C3ceBus6EfOpras+t70vIMvOfyzFPXFR9+8py28rR1IAN3246URh5dnfHuk2c8f3XDJ+Nn3GyEqnZUoVQEjhpx3uPFEZzQ1p6mEpZtzaOLBZWHVevJKogmYowMcUQ1kZyy3Y103UhVlXA54lhvtux2O9q2pl04CIKLiZRyqUgEnBdqcYg0IFUZQlUc4xhLRaavSqbmSvPET9+bcRgYh7HMtThOoWOAIAHnfAn85u99FZCQGfodYz8Qx0RKI2RFspQ5K1Rx6lAt1dCaY2moxPJdS1MnBRc8kn1pMZEZU6IbI5u+Z73dsut7Yor7oNJNjUUHJHdo+CogWqog55soMlVBlvkfy98U5/2+IZingu3SE/vQANoPDb1f9zxnCCi+7Avlb1AJNh3TSDfGGGOMMcZ8oa9WAXQIkA6vOw0FBUHSjpwiokJoLxl2dyAV1Evc9jmED1BX5oFzaa58LNfJSZW6fUbWD4l3H+Lrc8r1cJ7CpTk9mDvn3T+G10PKcn3tOIQ1mX1ll3IyysqbA8lS8TgHK/fP3zx/4amHwrN5/w4Botyr5lQ9tC32waMcKvLKMod176eYYLqnIYdKNZh+fW3f7r1v94K/N1UjzqGdHrXJZDozczvmMBflHAjKa+t9LYSE0/dgOu75vO5ztZOtsm9ETf1Sjzphyj5cPH3VFNiKIvMwNa+djWnJ4/BUDssC80A0p684DrOOq29P92i/Pj06KBU5SgH16N8D697/XpYtr3IokePP5vHPnzt88knX1XuVvfv/L/e53H6Xj9anR5/06V5cWSSRFJxbMMSIdw34gI4C1HinRJTgphGC/BI/3JH9I3T6LHudRhXbPscvn0HweNmU/ZxGNvs8h5B6/8D0WdPXzokxxhhjfvz9CsLHANPchiKORBmioQSQpbopS5lDEUmUedlKEJmljElf5o1LzMNWCOV3VSWnTM6RGAfi2OOAqqmopqopsqK5zC2YUplXL+dUhnyNCfGKJEccE3GM09xwiqjDicP5qgz5SglSVXyp/ksQ6oZ62RJzJsZIrCJ1XROo2M9pSdntqIkUE14gp0zXdWQnLFKkqmuCg2GIaMoEH+h2G5w43n32lE8//ozr61uqUNEuF7R1xW63Lb3YnCfFjBPPsg3cbdYsFi1j7KkWGU0Rp8IwJnbDSHTQDQPb7Za6Cmx2I87XeMl8+skLqhDohp4xZYZxIKfI2HfkHFk0DZtdx7KpqYIj1BX90OEypAjbXaRPwqaLrLdbfvbbP8Ny0bC+2eFdIDQQY+Tb3/oWF5fnvLq54cXzl3zw3jvcXt+QUyKmkRxHLhYNm9tbXlzfsFyt2A1bbm5uUFfxwfvvs2odz5/vuLm549HVe4TgQZTb2zvu1mvaxQLnS1/dVzc35BT54MkZXRLauuYb7zxhVTs+ePcpF2fnfPzqFavVgloSm6Fn0S4hN9SLJU0TQT2b9cj5okG1Yb2OPHtyTlUn1pszHomjqWqGAV692oBL1M2Gs8WqzImZhOu7DnGBtmmpvKeqPJqVpqrohkBG6FJGgqddLVg2nrOcGIeMuGkewQQxKSGUalPvSu/RoY90Q2a768kx0jYNToWclF4StWaUVBp/zoOrQML0HYSUIjmNpAh4yucwCOIjdagRX+asdN6RNDKMQ5mbMUsZUlkpsx4qhFDRVjW7zYbtekOKC
YkglJB/7t2KCE4dEBHNZV5TdO51UKqknaCiqFPSqAwps4uRTT+w2Q3sukiKoHm+0eKAeUhjwe9vWLjphkRpdDnnyv7o1H7W0ut2bsbO9wxOb1S4fSM7H83Jse+1uW8Czr2M58emvyly1Kg1xhhjjDHmyOcNSfqGV/D51Xvy2s/jsGbsO0K7RJ1DXemsKq4mVWfIcIc0F6Ud7UoHvSweIRMUoiqyep94/QngCK8FnvL6pk/24XT+wEJPljkNMr98CLvPJlVOA5wveQFegsbT8PH+ULCvDzl6+HkfQO7XMx3x0Ryaem9/5jkPX6+EPN3pN1UjPlwhdhoivjkEmue9zw+eo5MQUiidOPdB6tGws9Oj++LEeX+m8Fim9LJ0/5wj0teH6L3flroXrzFXcpa5EU82/sC+H87gm4Ks+7E4UjrIHvK9efsyNxhPXv3wULRzkHv/mDzHn+k3vfbkudeObR63Zzrjevoe7I9jaq/OwTPIfvjaKTYGmSoeVRHtcNV5WVooo485jw535GpJdjU+tOUI+jtSe46qw2lHun2JX13hQ2BQITghqr72lrxeOTvvz+sfPKt6NMYYY34yff3w0bv9BcRJ76ZMGX5y7ikpuTRwchmHXqUEgIoHSdM8crmkL9MFYdIyZ13OqYSR0/j1zjtEcxmKdRxx4khRGaZJsMexVB7qGIneIyqkvqw3+ICIlHnvQgBxOC84caUHqAuI88QcicNAs1ywaFo66RmHoVQRVmVIykOn0xK+9sNAGgf8FKrEYWSbBlZA8BXJCevNmqaqqIMjUsKTs+WCrtvtK/EuL1Y4B06Eqq4ARTXR1ks24kE946C8enlLjon17R3VoiUl8L4hjiM31zecr1Z0ux2rxYIffPYxr25uWLUNIo7KOc6WK8ahZ7NZ07QVy4tS8ddWpVo0psj6bodzFWl6JxXwLvCN995nUQd2uzVVXaPqGDal2vSdxxdoGnn58iWLxYJhHPjk009olitevnzJxWrB808/5fnLa7I4Lq6e0I+Jum6o6pqLs5bb2+e8evmStlkgzlF7R9Yyf+Zq0XK2OuPibEHsexaN5xvPnrBanvFnPvyMp48uubpY0CwqFstzFGHst1ycL6gcXLgFIo4+51Il5wLPX90Sx8zZoxaPY7cbkVARh0hdBxZVQ9+PqHMMY2RMA+2y591nz2g3Ha6q2Nytkc2WC4TzpubiqiHHiu1ZQ84ju2FEc2bbZ3aDslq2tK2nqgaSznOmZlKMLBYLzlcL7u5KQJxzjSI48dRNQxOq8h1JSggOTbmEgK7My1LyOCW4EqrnrAQvJfdPsTQpRVjWK9q2Jani3dRIzIoLZahVmW9K5FK9G3ygrhvikFivt8S+h5SR5NCkkEpQLk7x3pXGueQpnJv/MEyNWindZMtwzEKfMtuhK/92HXHo0ZQpi83BY2nMqkxh6JQClmFWARUcbl/5WB6ab4TIdLMC5j8yx72d8/zcviE9DbdEWf98w+HQbD7URSrl7521o4wxxhhjzJvcD5dO55AD9tew+6jrNAPb/yj7oGaa7REniqaE04Eg5zDe0oiCL9exUq9wmxfQnAGJLAFPKCOWSEbF4RjJmxuCB+/m/Z037L4g6HsocLof1BwnLq/XvN0/Rydr0rmi7su/5s0OAd79l873NU7nf5wGqD0JK+VekVzpeC3ofhjUzw+PP3//71eIHUegDx3L0SuPHn+oujNzvF/7Dp5yrwKS41D09X09GQL4tflAH96nckpOE7V9FeH8fdi3xw6trjKlx0On9KEhiO9tWUsYN+eO7mS56TN58rLT+TCPz8Nr29C5VXgSD55U8X5u0HYSIs6h7tT63J/nqePu9HXROXk9epsVDgXF82dUPMIO6qeoZlQygZqrVWI7OKJvkKz4cY04SK4mVEoadmjwDN0NVX2JVOcgEclVGU1LI+L9YcOvHdIXH7cFkMYYY8xPnq8dPoqbeshNFw+qvjx+FEjKNJSFqJRKSOdxKZX5IBE0lznnNCXGBDmnEgppqf6KKaEpIqoEN12IZei7gTGW+eNSmi4QszL0kRQzKWYYE3FMaFKW7QIXAij4qiZUofQKC54qVKBC1DIEpaJ03Q63qTjz57RNQxrGMsRkSqVKUhXnHFXweCeknLhbr8k6sloscCL0fQcqLJoFqplut+Pu9pbL83OqqqbvOnyAs4sFKSWcF5pmATjqpiblMlSmAD442qYmpYG2DXz/w4+5fnWLJuWDb7xLGhLN8oxFteTuZouOSl23RITr9ZZdX4bFLL3jHFdXj/n0s89YnnW8/+5TPnt1y/nlFZIzlxcXbO42vLq+wTc1TVODRurac3G+4Lyt2NzeMmbl/W/8FJoTnzx/xcXZGZKVzz75jDj0hNWSjz/+lGEYSQz80kcv+PY332cY1iR1PH50yXb9kmdPn3DTb3n06IrdrufF8xtwFbfrHZfjwOX5OZ9+/AM0w7tPn3JxvmDs1jS18NPfeALi6brMZrfh6bvPqCVyfnmBC4HNZlNCykWDd0KzaOjHgf52ZCU1v/TJNS83Wy5XFU8uW9a7nhc3a1a3V6xvbmmammVb0Xdr4ghxqv7sBsX5QKgcl1dnvLy5ZRg6Bi/41nGx8ARpOFu9z9WrNb/4/U/pug7Fc7sZaduMSiBHKevxgRB6hljmPz1bLRnHkRRTGS5YlSY4Kl+G9BlTRH2gwpfvS8xl3sSk4KEOgaouQblzAqEMF+qyoE4IwbFYLfDeM3ZdGUJUwHtBvEe0hHiqIE5oQl3mGR0jQzcgWairBXhFEuSUyDGW75dT3PRvHgKmNBanvwlOcJ5yVwMYVdmNA3fbDZt+zXbXMwwjUcuNlBLsCSkrKg5RwWmpmJxDx8MNDQ5B49RwOwSR09+JfWNnqpach4ZmXoUcevjqYcjVozWhWqq39y+613PaGGOMMcaY+16r2juqwrqXZh397I7Lno6ePowOklVLYOg9uT4nS4vzI/V4TSThqytoz9HuGhaPcRlURsT5MjBJ94IcNyzIdJcf4IbhXvh4/PO8nyX4fOjYDq/JDzxW1vHlQ8M5RJOv8Jo3hZL3Q6XTZV4bnnT+WT3TZPaH1wiUcxCnJsT82rka8qG9Oj5ufWPnxf0yepTX3S+tfGjt98K519fr7j1+CIvuD+l7CIlKAPv6sLQna56bUff2Z/6c3NuvowJA2Ud4cv8ATtYrRy8v+3Aact0PDY/389BtdH909995Xg/H763jZN/KiDyQp3M6fRfkze/pffPWZL++/UEdLTRNUXRUWTm/+NDGnYPZo3MBOIlllDDnqEi4ylETuc6hVEbXDVQVOYFLAzFnJF3T94EmtOT2omxIPVEyVXDokMq9gpMOEJ9zjF8mxDXGGGPMj72vHz5OPfzmoTncNL9cqRrKOFeGQ82SyhCMmiEJ3pWee/nolr5OgaNmJWmp5MopQ0rkNOIpIYqjDJEyTqEiCnlM0zCXZYjUNIWbMWbiGPHi9hfqqmXfckplAu+kVJWjahqEaRhYHDEru24HDparBXVVkVUZ0sjYR9IUVrR1A97R1A193fDq+g5U
aZsG7yr6biBnCMFzcXHB9fUd17drlouWHCPOQV0FXNvgfUVSAR/KxZ86+m6A7GiasYS0Cm2z5Pxsyfd/8DFxVJ6NAzknqspxdbnie9+7JaO8e/UYTZkmBLJzLNuWV7e3jEl5+uQxmjO3t3f89Dfepa0Ci0VL5SHnyK7bgghtXaEZ7m43rM7OCd6TEWJSFssldR2IsVTWvfvsGU4Dd9uO84sLxq6n6zouLi745LPnXCyXhKplnOa/TKk0oL//0Q9YNDVd1+GkQiTw2fUrNruRbzcNr+42bIbEs8dPeHy5Ah3pSFxerhijcnvXEaqKd55c8e6TK+Kwo6katpst12NPjAOuF1aPnpagr1mSUqmX29zdctbUPD1fsGgbbu62VDh0yOioXD4+Y9lWjLs13TCiWoYxVRW63Yh3nto5rlYN7z2+IA0jjlzmKRTlyeNzFoslfT9wu17z6PKMXd9zu9mVhqQmKhXEeULV4sZEjJmmFnyo6PpEjCOCUleelEuvW5VpcBtXGpExZmIu1ZlNFQjTvxgTTv30/Szft1DVVHVL3bSMKZMl471MU024MgVrTmVbGaq6oWkWpJjpdttyPsUj4hEn+MqTNaIplDk8NAE69Sw9qhachuYRAQmCSgnt+zGy7QfWu55N1zOMkRzTfjjn+cbMPGyyytTpgBLMl3lB5qFWpzkf5Wio1enGjE5DyO47MpcJXznuXbpvCOVpfhl3rwkq5c+YUObL1NIFlWnyEmOMMcYYYx4wDc24D3DmEGi+npxG5ZBDRdlhDre5Qu/eDfupFEzJkIYyTxtCHW9RvUXcJbmpkOGaMNwRF49wMRLjiAs1WR26eYnoiIRAWzWMiydod0OWiBOmudPn4TjvB5HCHJq8OUw4Dp7mcEhQfahSknvrOq3iy5oQ/BSE5dde81XDjYeWeXBORKb2yxvX6Y92dy5FO86R5mPNCH5aLjNXvjGd3fKCfK/YVU6ywuN47IuO6c3B0Ocd9/Fj87IPhIcnyzBVij4Udh7CtLK6+b17YPjOafn82oM8kOXOId+9Nty9cPT+kLZzcaUcLXt6VHPomstIPVN142Ed83bnvXSn34v9edivbb/zpU/+0fuz/3vwemCatXSS3U8LInn/GRGVkz13Rz+JlLlRNTsyLQu5RsI5pIHVyvNyvaCqK4ZU9tEr4AZEEkFb+rQl0JHU49Ia3ALEoyqEUKMaQRtk/pg/EMxaVaMxxhhj7vva4WO5eCsXaPOojaq5XCBlpgvLEnzkrGXKR1Gc89NF2tzHrVzUOjxOlZRKpVfKkZSGcmHjHE48JCXmgYyWgGRMpeorj5ATQ9+XwJGqDN2qGSeOqImA4p1HNRFThii4ANFHQl3TtIupaSXgfRkrfxjYiRJWZ/jKE1SIKbLb7UhjZKgrqrYhqxLHgRQTm7sNTjN15UASkjM5CU3TcLZa8urlNd2uo2kqcGUePcETXF3mpvSOnBLBe1KG27s7EKUKjhQjGeFstWK1XLDZduAcWTNRI+KVmCP0gsuOu7s7Yt8THFxdnrHpO4iJx1dnXN+u2O06dt3Aru9YnZ/TNDV9V4bUrFypoFvvRlyoePz4iqYN1FXgbj1y2S5wruLm5honZbL5z158Rk6JdrHgeveCpqlxoaIfR95/9ymX5y23sefyfMnifMGf+TO/SIqZy4sV2933+PZf8HNstwPPn7/k4tEjvAjETBMqrm9uOFvUCCPL5Yq+G+n6He3igpub6zK3Zrdj2w18+NFLlssl3sP77z0t4bL3pFzm4Vwul+SceHpe4957glfFucCiaWiAs1Ugjp5h7AlB8XVgETwyKre7jqauGVMuQ6UOI4t6yfnFBR9/9CHtoqYfImNKnAlcni95/+ljzpqadtHw8adrYszklFkuGhSldY6qdri+YkyZvi+VvTFmNCeWbUVCGWOkajwCBAWPkuLIqKUx09aeJpQ5J8sgpQ6VaeykAOI8VbNgUS9xzhHjWCqKnZBiQhPkqMScSDkhUhHqmhgT3WbN2A+IQs5lonkRAac4D86V8K+0xXKZkxVKI2n6fJSGu0MDxKyMqgxxYOjL0MZxLJOMlqFZp8aLCojDaZ66/Do0C36+CZCVLPNNGp1iz7mxWyot54bsceMxTQ3WModrmDpRTI3FacjZeeBVpNzYmf8W7fvKqpbg0dpYxhhjjDHmDYQ0/1DMKQi8VtF2HDTOFUaH5w6xSfmptL1jjjhymfTk8qfprn+RtvVo84gsSiLAcIvTAd1+TKrOCLmD5oqcKyRtSc27HE3fvt/z147lJIR7c8j3xvnujp77/JDw3nM63z14uGrybVZTyZyu7H+f06/j8Og4jD16anr/9OhnmUPIfbnrdC9E9vHzUbjE/nk4rdDT04/DFw5v+ZCv8pr5s/hQVeG0tn114xzanbz+sJL9EKv7c3IUaB++Fvcq/A6LH1UtTo8fBV+HjPN0KN3XQ8jjfSvHdYgK95s8LCjHPVdlWv/UQeD++66H92ruNDCvo3SA5+j9Pw2S3RSel+Fh53h0/q7PQ//uT9J+4J+56XpSxTsVBwzUVLT4NPLozNGPmSFFWuchbqhSuX+QfY3LjjFds2wXZFlA3DE6xxg300mvSQ5II74tBQNv+u4/NKyqBZLGGGPMT7avHT4yXazJlCw458iZaYJsIecM0/wIThLqSsgoqiRVRDzOlfAxAU7LXHHZuxK89UocR3KKCOUGf8qZNM0tl1Ji6McppMv02x3j2JeL0WlbWcuIimOMiPQ0IVDVK0IIkEHFgXeoc7g6UFVt2UZOhOCoqgpB6WJPyELOEdVITgO7bsfQe5o0IuJY395we3uHEyF4h9ASx4xqpKkqQuVZupZxXBJjLMPAuhLGjmMm64D3UDcLsjoccLZa8fLlczbrNcE7xn5gqBt8JTx9fEHKiboOZDLr9YZQtSxWK4Zdx263Y9vtSJKp24pHV5dsup5N31M3LYvlYprHsMyXuWgaEKEfE+I8njIXpg+e1WrB06eP0Dyy3e5QoGkWrLcd667j8uoxn378KXfrNRePLhlT5Ha94Vsf/BS3mx3vPnvG08dXjDGiJC6ulqy3O+7u7mjaJS82Pe+9e8XdtufDF6/48NPnPH72jKat2WzgT//id7m6uuCdZ49YVL5E1d7RNp7buw13mxu6bc8wwu1mx4vrG1aLNd/44BmPHj+jCRUpJ1Tg9uaGKjSEqubZ40ua4Lm92zIOHRfnZ1yvNywWNdcvIxoHAkJYNNzeDex2PTlmRKAferZ9x27YIl7YbXclPCOw7RJ1VYbzrZuGp88uaReOcRjJJDZ9Jr1MfPPdp9RVTc6U+TP9jtu7Nf1Q4XAEH4hTD9i5QerEE4IHJ0hWtt0OxBPqhqoOhBBwCqSIUy1TL3o3/auo6oaqrRn6Eecybt9ZAFIay3CvqQxP3C5aILPbbEn9UCqR1e0DRhEpc8mITN+5ubfoUWuodGNFJQFCUiXHSMyZfhjZ9Tt2/W76bJSIkKkhPs2wUvqHOyFPc5K4/RCqgAfRqcGL4CRx6Eec9o1KPzX45h6knqmCVPPUo7zsr5Q
0vtxtVO0OeEWYnByBcyek2+Lped78FU18LM9UMVIL1J/H2Z6CsEYgk1y2fOoGAMqrG8L5YoiuQEU7z6VpuzFexbtqpfsmKdf//sOZjI74nhXrU39XOZ1/zGeNes5fQdIvlzJpP5LguqkWMf6NRh2oY+QVftsXqxgA3Rgih5/ETa/XOsCGBJAlYTx+AQbmmrBPFIpx5NpRTJWW5p80c6OaCSySmhKWOmxOAlKWFN8s4fly/fug0bNmzYsGHDP0H8weSjTtmQl7+vF6lXC9cpN7IECQY1uXjBTyGEZsWKYVc3+N2BFBXSWEiwPC/qDEYc4oQUS407Zy0pJ3KKGLUY62mahr4fcWIYhwFnlLqaFlYpY4zBGyGHEZHI2AWQycbTG1IUBkmEMDL0PdZahghDF6isJfaBrh8RyRwqx01tObiWlIU6GyqbqXaOPivPpxe+f3ziP/5G2FcVN23NYVdRVx7VYv262zXs93tEPGCoKk/KiTBmNEea1gIjMYzkBDlZqt3NVLsQUndkCD3H5w88P3/P+eWF0/ORp5cjHz71PB4D5wDnBFiHs3BoHTFGXs6/JqkSciSr0tQt3TiScyGFjAiNsTiBqnbUTtg5eP70Az/8/V9N832gaWrEGIbTI09pIN3dUzU76taRUuDdTUsfdUUOCWik8pZxDMQUsICvKnLscLXFVw4XB/a7iu58wpo9IXRYo2jqgV2p2aeBEAwqlpQCOSvZOGKIeOcY+wFjPc9PL9R1RYyZunZEU56t2jrO3RHnHcZUVHXFEAOqESFjjWUII1XlSCEzdCe4vSGFgNnXGPH4GnzziDiLrZsla9UYS93coJqphoRTIcWR8/GZHAdCTNzdP3C4e6Budrx0A0Pfc3e4RWbS3AjWWkQNVdXw8HCDI3E8nuiGct92/Yh4i6+Uczcw6sjN/p672x3WWbqQivoX8HuH956xH+jOPVVV4VxRn+YciCQyCbEGNZ4spQZmsYgxS22XOWowIotC0E5WvzORXv4v2ZjWCGSzbBTMykczbResM0xzLrU/lGKhmyelZE7FwiUrkA0lbyETScWm15gpCJ62RjSzqK01YShK6TnwJkvJbn6VNZuXzOmZmJy+5IRCPJpJAamArmvcbNiwYcOGDRs2bNhwjXktfZXV9sb68VL7/Po1BYxmyGeM/5Zaf6D6+l8zPv8at3vAtO9I1pG6R7T7RH3/56ipUCYlpD9gbxv05TfQfk2qbpHdA5y+g5ufouRp4S6lnIApaXYlVJ/W1is277riYhnT5/XzLvzhl8b1u+fty2q5z9Vh5ur1a9JoIvGMx0zWoyH0aCoJwKayWCoSs/JOJ7OUcs9KAuJc216XvY7PxrCcc+n/FYlkVorGi3TtcsYSdszvz2TtNcH42U+9xCs5Z5azpqTL5UaYCwWITu8v5PKK2F2NayEV9VrlOSsE18Rd4Zn0QiDDklgqcmnnLVyUpiuK+Y35fPO8q+duTWxKufMi6NBjZSANj8Tdtzgqstgp7rsmFJex/wjh+aXrzsTj58/uZS/sdfuztWo5pRC66/a/RGzOY7zqx3SMEYu1ZY+qkUCiwSITx16Sads6cbQO7y0m9fT+BiOKSULGoBZUDWkUnNzi/EBjBnLMnPOB0R2oh0/08gDGEmKk9gaRPCUTz1975bo6k+Kf3cUNGzZs2LBhwz9l/MHkY14Wm3NW2rxUvizIYVogUUiAeaFhEKwYvAhJIIlgraFpWrwk0pDRaLFGgQRqyCFxToEcR1IOYBRn3bQuV5yxZVGWS33HZKGui+VlTok01Xk0pvTFolgHfYz048DT6czLqSNn0BSRHKlEcFa4qy1jTIyx1IewAs7XDDmSQ+b9oeWu9mQdqeuKcRz5ar/nvq34/vHED089o4dhFEIyWOk57DsOsSONLWHoado9WZXji2CwNE2L8wbJI4LFG4t1hgCMY0cIJ0J/4uXD9zx//MjL8zPn7sz5FHg8jnz30vGhF16CpU+ZJBlv4KYWumCoLPgKWiuIOlIWgkZumrnwe6mvJ5pBhDEOdCP8EDN/+zzgraEyHkvmmxvPt+9vi5puuCEOgXdff4N1FRll1IH94XZ6cCJ1XTOkEtRV1hHGM8YZvLVoVTGcz7iqps3C0L1QWYOzkFPCWkfOgjOmZNhN0UxKGecq+qFDrGCMI6ZCAKoK1hTSLYdAzoHufMa5irre8d33P3B3/5662VHXDSLw/v0dGgMhRDAR7wSS8NX7d6CJGEZSLmS3N4bd/o5+6DBiSCGSxoTxNdbVWANNPSAYTucBK1DXNdq2VFVNjFCJYQgdmUQmY6fPirUGayHHiIjQHu6I/Zlz90w/JHZtTYyRmAJdd6aqoW33NE3FGDPP3bGQoFVFu9sX++IUUIGqspxHw3ns0TRSV5a63iFVW7J+jcH6QobnHIhhIOVIFod1Nca4okwWixiDpSgkc1ZyUrCKEYMVg2IgF7sZA4iVJfhdrFWn7wvJRd2LKmJKXdSEAJFIBluCGlTJ00ZOTiWJQU0s2wRTtm/KuXymTSEJzZT9KxhES+ZoNmVToYS8ceqgoVCbk/0qc81HSj3LWeko602kDRs2bNiwYcOGDRteQa9tBmfSYHn7DULmmuhQckhIVkK2pBAQf0uiRU4fqHZfMT7/BicGefdnoH5SvSmqJUkQ8dj7XxKefo3EQN2+I46/RsYzua5AKiQrxgGYqT7gl9a5QnEVmQmRz8nA12P4h+L1uWZaiy/bDq+UZ6/PuyLQVFEMVoszjg6f+Fn+K2oLY7b8EP4VQQIYT6lBuCb99JVF54oc4w2CcX3NWe35JcvOlfRvURmuqd357blHOlG/qmCmvRW9tLPqcYllrkhDXRIvlYSYKTZbjYr5uZwJxGl8su7rG3N9SThnSvSc25zGZGQiO780b3N5DFbkpF61/db8vv57sX9dvZ5zQiTxP/+F4eu7G37zN8/85adnRvOeTHGzKsTf55/HL9VmfHsMq/l6RRwWzDayb83fPN7rZP75pzHmqi2Rud5qnnhnmW7b/MyVz66xBtFIbYQXcyguYGQwBhMTtfY4lLO542BHXHxG3Q1ZFGukxPQoakxJjg8eWz1wY38gjT8wBIcYT6MfCBb6eALuUSnfH1DKrizq3eUZ37Bhw4YNGzb8KeGPqPlYFpSfJW3OGXSrBdCyEMplcbfQlsZircNah/cVWtWQeuIIUROaM9ZSClbngKZIHEf6sSeTccbhrMV6j7gMFmIqNSisq0hiQVNR22UgK5HMEAYAds5y01TsasWQCGPi40vHeeiogENjQYXKeO4fbjidjwzDgNHMvjZ4f2AYRgCSFYQKcZ6bumYcewThZ1/d8/4+0w0BRdjVFbe7HY4RGwPjCXKIkAMpK15AjNAPcHxSfN3S7m4R6xj6wNj19KHj+HLib//mt7w89XTdSB9GomZOQ+ZpyDwlodcyb21laCvhrhZuKsNh5/DOgkBGik2pCF0qtfVynDJGNSBJMSoYKdaZQxSGIdAly5AyfYL/94ee9tdH3tWeb25rvrk/8LPTkfdfvcfXTfl/FHzdlMWrQuN2
hQRypYB5CBHnLCmAMYKvDc55UnRo3rE/7IjjiCTB2ArFkJIgxhNjYhhHjHUYW4jbYRw5Hk8cDgdSitzcHvj0+JGb23tEHE3blsWwFhtP52v2+3vq2tOdT9zc3HF8fsKpQfXM0I+IJpyzQJrqVJbnSxCqqkKkkJ7jGOn6wG1zg/MVItC0O6y1pDzStjvapiaGEWMcLy8vuLolxUDb7rHGMevuvKsQqYjpjKsM+33DDy9PHI8vNJWhqSzGCGPIGAuVWKzxDBFSHMjAfndD07SIWLrziWEcCcMAORJDwBtL1d7irBBDpO97vBMqZzAkso6T9WkJGio/2auKoqRi1WQKsZhIqErZYxFTgk1jcOImZeIqGCvFWhfSMekUQOU8ZXdDzKWtYuk62bTmyZpmCuAKUTl9J2kq1qoKKpG5pktOgpqSzyy5PGMikw2rzvUbzaSqLJ9BkdkutlxMJiugq3otq++zDRs2bNiwYcOGDRte4zXZOLukQPmxkAtfOL/EPS94F4kxYr0DC0hxSel/+Hc09z9B6vcYlCiTZSiGbARIOBw5B+rbX5JOv6F7+Q3N7c8wx7+C6heIJCyf8GY/2V4ys17Xsrip++v17xfrAPI54fTm+F4pwJjUhte19tYawIlA+8J1Lm2tyKKcyp5EBrN7R//893gn6CBI1ZbyEJNXU1EGlnhDzIVKu5BUZvo7v+rj59f/TK1JiS10YQ7n1i/bKLO7ipp8IQFXY5yViwuxqJdz5xuWpxIWorpUh7CSiGEgxEjVHEANs2ntTH1d3SG9EMzz3s56bOu+v8baknY59vLSq3l6u5V1G1cE7ardNdk8H78m+aytyZLo7QNVY4numej2U3tzWHlNNn6pluRrEvD3IUZfE4lvEZNFtZvfnIPXYy4/ZzrZTL/l6+Onz4cRUDVln4liLyvGIiniwyfOZsQah8ZM9O8x+Tti8oir0alQqEFIqoixqCZ2bqA7J6Lu8BV4SaAVg0Iaz+WjkZjuQ3lODVKcd5fnd8OGDRs2bNjwp4Q/gnz8kWxN0cviYrXYVIWsMmmPSkgkptRrE1s2/mNShgAplNWgIWAo8VXOiRgiOeZJcKmFmDCOrEIYE8YKu7ZBjCWEgaE7YozinCNFQZLijSOmQCZROYOo0NSOb24qGk08EohRSw3CnAgxYsh8++7A+ewJY5zqZ1hubh4Y+oGEcLtvaOoK7y0SiuUnkjkcGva7YgdbNx5bWW5uDhQ71UDlHNr3DKcTY0ol+PG2EFIhoRm8d3SnjuPzmXFIfP/9Cz/89syghqdTok/KkOA4wikJo0JbK1/tK+4q4a413B8qqsqgVkgIKpYYIOZEheCSkNJIlHFa/vpiP6O5qN9gUnoZrCi1ZJwVgjWcB+VXL5H/8dNI8zdH/sV3Z/5X/yX87Kt7bNejIXH3VYWqmWoAasnOxVB5z/k8EImAoW1bTjERY2C3azidTiAVipLUEqJhHDN3hz3WVthSkIO+P4M4ckqMwzgt1oW6rjifj+QU8a7C2EJYConjyyPGOpyrqaqKYTjTn1/wlSeMAe8rKlcx5kRd1zgreO9x1kAyZAFxijVQH/Y432C8w0TAFFNRZyy+3pU8VjGlXiXQHh5o25rj8UQaeyxCW7UYsUAJcIxYjCnBQts0aBiIMWJEaZoK5wz9eaDC4KvMmBLEyH5SrTa+QnKmO3eMWeiHEdWRtqrwbkcIZ8iR0Hf03Zl+jMSsDCmzNw27uqb1npggZQFTVI5JmeyLDMZajHWAKdVfskw2KxfCcbZmVVVmY9Ry4ygEoSomZ1LKxU43C1qSc9EMKqWujEixgyoaxlzIT+KyUZH1QhaW8o8Zzbl8xySY8yUyuZCPMmeMGuwUHC3nLpm75TldNiFg+f5ZsoE3bNiwYcOGDRs2bPidWHRogE7JbWZRrKkKFlCZKsSpgFFi6DBkwnimbm5KUm3saaq6JM3ZG4rfUInT8kKo5NIGGTGWTMLcfIvvnugf/5b69mvM+bfk/U/pU43LNzgphNVCvL1WmQGIWRR5U97xFCuykF1XewRX9dGvyaOS0Ff2B2QhuUo0umwlrFR55RrX9pWfKy9LUFBKfpRSEYhgrJZE1KZl11Y8q8WowU6Kw6RaykqUu7D09XP139zfmRTis/6a1ZhVKdsfs9pwGrKZSJ6ZAxOzii3WtSunfZOZzNEpcbKoIKfpNa9J2hLjKII1kdal4rQznspz0D4gaqf5zORFd5mnG2nmaWRhDmdSdaW2nInRq5hovq+rCbgQuat9JFgSPC8st1x+zM+TTHHYmoyW+R7NiaQlrpw+ONNlM0aFMZV9AYfgJBPEYbJOMeHnVqhv4Us2rD+mlFy//kVVrs7jKiTk5Trzntl1uSPQV30pcerS7jQFBsHGgaBlrlQTlgTxI9LeMKYp1k6hPPvua+rxB0YBlR06PU9GyzwZMXzXH7ghkW1Nrw2GhOdMzKAxkjUh2KKwnO/fTOivFJobNmzYsGHDhj8d/BE1H/Wzv5fXRLlea062LzmjMv9dAqyc86Q+y8Q4EuIIxGKzqRmNhfwLqkAiJ1AsTKpJsQax5Xcjtqjg+gEVQ04jIUSMNSSTCSEuCzxVQwqGlyFx6gfGoYcY0ZzJWbDGUFc1KSdi7jl3Z7yteHezI8TIeQwMYcBHuN/XVJVjvzf4ymJ9S6sVeWiwxvHYDex2Lffv7jBicLUnpcjdzT0pDgzdC0kTxjvEWpxTjIkoPSYJDIaXUyIlQxgT/ZBRNbTtnvM5cA6ZYCxdTpw0E0i0jfCzdw1f31TcVML72x2VLyRvRAhZiBmcU5IWRZiPSgxKMErIWtRzRhlzos/Kyzkg6rBTuqZMqZtihX1raTCchshLn/kffvPEy/Hf87/8i5/xy58+cD73qFjubm/JmjG+IqSIsyVo89aSwkDMETFK09b0uahRm3ZPXTclmHYVu8MDbbsjaanbaICUEjlnjBXOp1OxS50IsqSJvu9pmrZkerqKmALOOjKG3eEWV3lSjhxPR8Q5rPeMKSJG2B1awlNHu98hBNq6wtsKK56qcigjzheS09gS/RlbakzkPJKlBhFijHTDMJF1ntv2ABrYo+ScaNsG73z5rMSiBBVji6pQBO9qRAyV8zR1VYI4hRgCrbSIRsJwpm4bUg7UVUVKicfTM9Vuj3MVh7YiZ8FoxGpPzj2n4wvnriOSqKuWvfcgI94qTiKKkHKxKyIrWME5T5ZCOhrnUGMgT4Gb2hLgzJmZq0SFzzNJ5xqRZSPBGEPSyS41KiplEwUpAavkKShjIjgn9Wq51yUzUzMYNWTJE3OpxfpFSqhegvC8bHYU65qprSlQNVP7FwJyIhuXgehESMKrHOENG/5R8Ytbw7/7P/5v/7G78Z8F/u2//bcA/Jt/82/+Ufvxnwu2+diwYcOGfyys18LzKxcF05qOXC02VycY8thT1wdO3UhlPf35I85WuNt/jriK/tN/pP7qX5ENi4pvaUcWbdckuhRs8w7jarrn37KvDHl8wWRlLhN4rXT80lp37v+8tp+Xxp+r0ViNl2m8V+2uLlpqLM42jazIoYU
JWxE/eWnrc7WhmdqiEHdMCjYRDrf3qB4xrgW9jk9kGYss45nnb1Ebrsa5no8V/zMxL7MqbYqfmco3rFWF8/Gv2ruOmy52ono5YyEDy202C1F8UcAp1iS+um8IfSA5pfaZmB6RAXJ9j1FLxkwFJ+a2V2TgMhd69Q7yOq5bqRFfqT4vY/v8WVK9PBvXp6xtaGeyO09lOS6trXSGywytP1+IknLCVzWqadatTvf6LWXhjxONr+PZ9Wtv2Q+/hev213NYXJYuBONlX+0txefS1mp6RRVnHG1taLozQb9GSSARHR9R94CXgUFvwX6P5hEtMkn6+p56+J7BW6xWKLkkW8/PlkJWi5GICiQDGhoQD3qGlBF7sQ6W6ZkX0dc3a8OGDRs2bNjwJ4I/gnycV3yXxeVSJFxZiAFmmvFqJQlipoL2yzl5jo2APJGPMKZCPloRjIW6rfBVizgLGYZhpOtHvEs4X+xkxjGiYohRGPtEPw6klDFGqb1j35QaeCqRGMvi8zQE4piIMZERgioaE7WzZGN5HAJDHvHW8O7dDfUYOD8HYorEnNlVhYgKMaASuXu4o+9GUlK+2u1ADLe394gz1JXjeDqRFXZ3D5jKo6nn67pFNJHTQIoDKUFOhqQWo2UZ3e7dZCfp6YeEGSJVVdOHyFMYGZJwu6v4s68q/vm7Gw41tDvP/tCANUQ1qEJKmUpL/cocM1kNVcoMQ40JmTyOiCljCWPF96eO7z+V8dYV3DaO1oAxihjFoHir3LeGQ204DZlfH88c/4f/wP+i+wn/9S9+ytOHv4H8jsPNA7EfqXxNyALkon4MZbxxSFjjii2vEVxTs7vZ4fc7qmqPq3e4qpCXPjnGGIk5IcZjjOele8GI0OwOxBwYz4V4HFOiMYZ+6AlayCPj9uzqA2I9QxhBMpVvqKqKw+GGuqqJQ0dTt6QM+6YBzRgD1paxxxRx3hZS1bjSLpGxPyLWIl5LTc2ccb4uAYMxuGbP2J+wriHEkV3VoMAQRyRnrJ3qJERTaliOAy8vz/ja07TN/CFjv6toG4sVZd82HOqaum2IxjLmzP72QONrjDHEnMhx4OX4idPxE5pHoNRcdK7GieDEEpKlC5GzDjhrpjorFhWH5pJhi53qwmSKIlIN2ZhCmE5BnzAdO9mfFuXiZPsiM4FYLG/UCmRBNGPFlGzhbFEpSQdFrcuUBFpyu7OWjGbNhXTUKbBHlUReaqNk0nJ9MQlJINagarAiU7YwWFtsgHXK+C25tGUkWUsNSyOlluwcPOkWRW3YsGHDhg0bNmz4An6MjJgJKOGainrtfKrjC+JrUnjGVx6TAmn/vrTtLf7mJ4THX9M8/PmSTLfUmpSVmoxC5iXAux3t/b9gePlL5OO/h9AjpsSK5rWKa+kZJf6fFWlzH2UJ5Ce12ppwnZWdMym3bmwa7/KnrgiKWT0ny4mzw0lJENRL33QqxbC65usBqIA1BiHhqpZw+kjiwLyaX9cMnOOYVYtTu7Lqn35GVuVZzbmcI1fHIUzxhJBzXtxbrii0N0g9YKqjdzlymbOpT+XQeTel7M0YSXzzbkdrI+cxkm1GfKn5p/mZmHaIrZfz5/5cykusbtc8rplwW/3+Vt/XeOsTIAvh/DZxN8djn7W04rGu65LOtOKFJFUpyakgWOfw3jHXxNQ5Xn2jz59Zy65Vum+OYzVHX5iD12196b15Tsr9yJ8d83b7l2dIxRAydAFEEoEKJGH6j2j9jiwOSR+J8g3WWAgRVLFAEM9Yf03V/5ah+hY7qZLnGFtEScZgZcSIoONIOj/R3P6cW/0rYoxg3PIsXjjRt5/pDRs2bNiwYcM/ffzB5GOel+JLAKATCbm8uMo8LDURBAUttRSQPNVVMxP3OKmTFDQLamQiB6RYPmqph+gR6rrG1w0xRFIsVqynlwExtqgHxSJGiDkwhEiMiRAGnBPqyqLWc465WLz2gb5PgOWl74vVpslUVth7pa0N+6bG0GAy5DAiIXFbNaQKrDoQZRwGQlVIjHDuCLuK27tbnp+L3aerGpIqlauoqpqDQj8MeNfS3tX03eOUKVbT7m5AlZwSxhj6YcAOidNxwOCoG0NKGd84pHP4dkffPZLItJXy04eGf/b1ga/3O7zNtLcV1a5ijBnUEWNZXFrnMAYEh3EV/TDikxCejiQVxLeYCGMYeD6ficmSFc59KdyuO4szYFXxgGTFWsGbzIOz7CvDyznx3//V94RR+df/4md41yFY2v2BEALeWrIUi1FrPeoScRjRpOx2e4bzmapqEdOwbwqJW9UVOSesEWIcGboBW9WoWvohUoILj/cNz6cXxmHgq3fvyBKx1jF0A0PfY1tP07SY6Tmz1oEa6rrBTIGhM5Zqt0NzpB87clQg4XaAJkQ8WZW68oRYlJhODH13xojimxZjPcY1EMD5hnHo0SyEkOn7QqyreGIuQXOm1CIsdqMT/aUlQMVAs6u4e9gRQyCOEc0lEDjsW3b7Fuc9GQtqudntsKIMw5kxRLqh53h8QseOtnJU7Q4FYkhEBJxDKo+JSg6GnAxjCIgmjFEwQmIq5eAMlXVTIKyTtVCpG1qUgRS7UxEQswo05gzIaSMil8zWPAU2mvNC6JWSkQZrLVlLHy/VT0qAnGaBY87T90zJhp3rNWouJLFKsXQ1U8AqeQo89RIEiWqxeFWz5Mwqgi4qySlwmuJCmd7bsGHDhg0bNmzYsOH3whXBcqGpZsXWxaoQYKqRHs+M2SDxhPv2vyGlgLUVaotCybT3MHYMxw/4m4dljXqlrZvaFclTHTYDWahuf0GfA+Pf/k+0D3+OQ7iiPET4bLm75svkQpaaSSW46AOlkEM6EYcykYg6WYYua/BVwrKIWWZlFk7qMoaJphUmpmwuoXBZq8OFxLmy1JzGb6VYbWoMJQ4ViHo5BgqJaNbDnIlVXabymi6eCTkjS/KjXrlBva5XeN3u60IO819rimlRgpaZWojcKc/zlWqwJHQe2oqv7hrG8ycqL1BbUDfFMhGnn+jk21LihlVMNLfJhUJdj3NWg17TppdjljbWxOTcc7nubJkXMHYiTtNM/9rLHM4DLCzxdA/W5PN07+Uyc3PMZoylH8Lk+uSvPhMXde3nakZjzJWS9kod+wUScn3870+yXRLyS63Ti2pZ5PJJfMt5bPXH5VmSTM4QVDDTwyHDI1p9RTIWr0qMhuyF2nsYAkiJ8cuVHVp9gx++J9ZfYwTyZPaLgZQ8tUa0eyTHJ+ztz4hAVVmG+IKpWmY7Zib71TmRPn/2tGzYsGHDhg0b/qnjj1A+vqpBwJQpqLMpRwkGoFhKlqwtg5LKQiSVQCFTFveFIIyEmEpR6yQ4HEYczhaScQyJnAXkzI5iz5hhqh1nisVqyjSNp/Yt3ngq8ZzPz4wou7Zmd6hImumHkeN55PH5xEs/8jKM9N2ZVuBh59m7il3lubltkBwRhLpu6M7dVA8O6towniPjMFLh0ORxXnBOyLEnjhX7/Z7v/v57zBiom5bK10Q11O0NxnoUsM2em92BeHwmhJ6sBl/vMHkgxxFbNZg0sDsYxmFkGM7ELE
S1JFWGlIhaCNNvbxt++dUd7/ct3gttW1NXrgQNU/3A2mqpcaCleLj1Faoe21SEEPG3jt3uHZmGcBwZjx8w9ZlGB8I4EqJyHkCMcGgFLwJkkpaaAN45rFFqp1TW8XTM/D9+9QOP547/5l/+lJ9/k6i8A98QMYXITIIzNdkZVAcwUO0a2ptE3R7ItsbVbSHgVIhB8I2jHwdc3RY1nkZyHBFj8NUOZyvSmLE4chIq33LqBk6nM+fjmV3TIALWVBjvOZ5P9CHS7jxjf+bm5hZXVaQ0sncGf3JI7mnqHdZ4EMhpJI8B42pyHInhjHUQwom+6/HDgXu/p3YWciGTxdgpiI9EHVFV6qrBupqYMiklVKCVBk2F5o8xgRja5sDQJapmR0xHxhCIw8h+11JVNRlHyg6TwFcOVOn6gTF0DOOZMA40HnxzS1s3OFdqraaUSQiV91OAp5ANRqVYGOdQSDiVYpMsCdFC7BsMxkw2q1KCHGsMxpop4J9qlCwhPKtvDuZyIpBLMXpjykZInDYTirVqCXKtM9hkyHEiGFOp6ZhjYRuTTNVK5kRZLYkP5BKIiRqsmqme5lQxRUttjZyFiGBlqfwI2U5KRzNV2yiBVJ6scEvgb7YwasOGDRs2bNiwYcPvRInJiqPGTDQuBI9eKMgLuVYcPE7HE5IHjCa8rxnHDleXmutILqUx7n5G/vA/oeEG5zxxXpurTgmNZiFqygUsxmRSVDRKacpAFoPoWzTBTKBNbSwWqDCzlZdcw/nsqWb7rOiayaOZTlxUmQsLV65TMpIvszbHFDNJmBNz7T8jLCTjhfixCzM2K8iclH0HiwWNeONQddPx01xdpr3sZCwMoWDmOpTTcBeKaKU8TJQyM2v15hz/zHNzcYwqd3udCDmfW27XK5Ju1WLplinxymLnmlbzW37+5Os7vI1k56i8RZNDc0Ay5OTABZIGRhxzzc35msLM88lnfbj8/QUV33xvzGuV3vyMX/6er2mxxJyWY8r9nidjZQl8pf0sye1ronqO14TLuSFm5tIdzs1lOq7HM9fSzDlPJLpOHxVZJZ7Ow3s7+lvPRc757bn57Pi1S9hr5eQ1vXupd/m63Un1OX2hWEmFxBeHGT6Rm/cgk4sPHSOueP2YCpN7RBSrlkgELMmC8e9wwwdS8810n6fEX7Gcz0/EYPC3P8OoI5Nx9Z70EvHGTKZmc/qETp9/Xm8FbNiwYcOGDRv+BPBH1ny8zslbYgy9LHinXxYSQXWq5ZgSOafJbqScmFImxkhOudgk2kJkUVdEk8gaC/E0Rs7HM66qMMZhrKHZe8YhkFMipIQJoQQhkrAGdm1N21RFCRUjXqCSTC1wDpE6R5yB+31DU1uqqiKJZxgslSuWlt7XuPuGMPZklKb2qMIggcopXgIkwTcN3tc8Pn5it7/l/uGBp5dnnp6eONy8x1tLP4y0TcMwHunOkdubB2x7ABFCHPC+QtUQjUPHSN0Yhn7AZaHdwaenTwxjsXw99gMhK/vW8s3Dnrudp7aJ/d7R7A0qSsLgG1+Wk6JIyPTnESUT45mEZYiWLhoiFVENxyHRj9DWe/7iz/8r+jjw8vzI6eWJ7twR4sAYLFXjsJIhhUllRqnlCHjJNM7x8Xnk1x8eISun04Crau5uFTUDxjpGBDEOXzW4ukXIWN+wu3mHdRXONahOitaU8d6TU0JEMQaGMGJwl+xWMWQB6y3duePleOTh3QPH04kYThDPGLnneHzh3bv3OOcZQ6YfEn0YOR5feLi7o6oqnp5PaEo430CGpin1Fr0zpBRLUAWgiTCcUQxhjJyPR3bGoDoiRqnqhiEGYlaqypE107Z7hq4vdq0ihBDmjwxMtVDDOEwq1aI0DSLkCJ8+dXSnE/u2RsVy6hOtd8VKNCVMTqgmNGe8dfj2hur2YaqPqRNRqFSm3Ks0JQ/EnBeSMRbWDoxfPuqC4MRgTMmoJSdEM0YcYiZiz8yB0uV7Yp35e60i1CXzVwVMLranxkDOZgoolZzK/0YsxllMNGgo3xs6Wa8mdMqKLQF4eb0EfqVfgOos1Ga2oVI1iC0d0CylfKVcwuNJj1r+lcsmw/zt92PWOhs2bNiwYcOGDRv+dPGaKCg5cuU1s2zRUwgvuT6vJOGNVJXDo4QhgHbY8SO+aRH/gIiQJBVi8v2/oP/uPyBf/1fYWaE2LVvLUldLzTYMYfhEOP4AL3+Nf/gZ9nAHpimkzKx2nMismY6SlQpS1nzCRDItXN0yKF2RQoVYFSkDlaX24aTjWxSPAIU8lGn/QCbict5PkIlwXGrIy2y7Ov+9tpxlIZBmy00RxVUeHe2i0pzOvFQ+lAtxOBNEwup+zvO7kmea+VpvsizXyrp1ncFrheD02tLW1NoUuwiFJM0ykWNJMfOYV5dtvPD1/Z7UfSI7A9lAMkg2iAoxRIyxIEdCeqCoSOVy/RLkvEEwTrpcuezxrOdKl3HOU38Z3zxXV/ULp4NjStN7831WZPWQzbzorKozplinLoQ4axvbS3/ylMhqXHnmDEzOPdc2srMydolip+fZrNS2l/qan1ufvrZTNeaaEF9mT94mZFkIvsu4Xx8nq/m5hlz9mgFi5nR6JNx8jRdPtBmTEz51DLYla+KusjzFHuk+gijWesTuyWrIrkFoof8eW78jSsZhCS9/h7OJ9uZbojjyNGPi9mj8bnIXujgImWkeVdiwYcOGDRs2/AnijyIfr+MoWf02Lzxl2vQvr+dcsg5zysQwEmMgxEBKqQQeUxM5JbIkspRsOes8IKSk2Km2XIoZYxPOObz3aFXhjGUcS7tDX+oTem+pd7slKIqAGAEt1ht9TPRjKZjd1A373Z6b1nJzOKACMQYUQ7NrsZWh3tWMg9AfOyRbIJJiIDuLMY79zQ1qPc1uR0xKd+rY31oON3sen858+P57fvLtt2UeFA77Ox4/fWAczjTtgdpazHgmxRFf1VhbI3RkMaRUyKJuSMSUyEA/BLoQSGTu9xX3B8thL9zcevb7Bl97MqVeXlQ7l9Ykm4jzgre+1JTMjjAEJCrjMPL94yOj2SNmRxoyu7uW27t7vnn/NceXT3z/3d/z6dNHck4EFby1GM3F5kMzaspriUTrlG/vLD8c4a8+PNGHgcO+4Rc//wn7wx6vUha4ROq7A83+hhgCSQVX7zDW46tijxrGHjTjXZHLNfWOHAOQiDGT1aLZkbOh7wZOL0eOxyfSbsduV2MEvCvPy/F4JKpwPvdU6hmHgZQiQx94eTmx35XakiCkXGqAemcR47BGCGFgrv+Yc8Y6C6KE0GFEqZu6kOe5kFsZpa5rQhgwpuTOWuuIqcdYT846qQZL8JVTnGyWRlxtiTGARkQzjx8/cTq+UFee/b4lJqWyHowjKzjriKkohnNWDoc9VeUxUgjTrErKCU0BVAkpEbIW1Z8YqgqqypBjYoiRcUilpqWAdw6xDrElwBUzbQzkSI5KNmaqBWowtiTtliB/DtbNdQArUgLpnCfb00vMfjlnDtSkqDRTqeForZ3uT0IzhXzMxbY2U4jHnDPGFNtaayxiDGkKlK215f25H2ixX
1KZMkdL7Uiz9HkK9LIsmwwyB80bNmzYsGHDhg0bNrzCOmU3z8ltsq6Htv7lWukkIuTxTE4ZX7UM6VyUinaHaEA+/gq1HrP7Gpo7yInm/ueET3+NffdLphoEGBxZcnEX6X4g9Z+wClUK+J/9t2h1wH7/q6J8NJd4fnaELGTeSp2FrsYwk4fz25MibxmULL+WM+ayDLlsEFzZZV7W/rrURKS43EwwhqJsnK+uSin3uFaE6YVAA4TJ5tWAs1M9xzyVfXjFFV7bZ67unshqBlhKL6zv79LUa0JRrkSMVwTk1XHKEnNcRjJfr/wzE2DOGNBMEiZX0usalF+/v6PxMMQSPyFKNhl1QJpVpEpre150Vg6ayVHoMofXfZ5en7JGZ2Jy/d4yYC41MGcyb7YyfX2vZ6yJ6ouKUa/en+c5pzw9g4JqRsw0U6t51nnuAWs9zgjkcE36r68vK5ZTZamzOc/p+kbPVHle1LbXpPJyzo/g2vZ17vHb51zubb663uqIQloKWDVgLA83dzyHE9E7jDSUXndke8e3uyPx5BAjaP1QYmftkfiMTYkggrMtYiype8Q1t4zPf1NcrtpSliZRoZNhq5oabzJKnujx+ZtiIkxXXxEbNmzYsGHDhj8d/OE1H/O82Ltk2M0L7avfF8uNKfOLYnMYw8g4/R/HYVI8JjRdFlOYQg446wlip8LpQooJUKz3IJaoGWct1hkaU6O0DEOP956maRAR+r4nhJEYijqy70f6vmcYB6KOCIb97kDta8YYscZwc7OblHYR6wQxYI3j5mZPGD+R4kiWQMzCMCohlSyzoqKLtIcdxo8Mw0BVt9weGobuiR++z7z76idYW2Gd5/7+a2IYSTlhrKfZ3yFxJDH5gViDU493MKQR7yuMtZzHnlM/ElNi3xje31R8dVfz1Vd7fGupmh1INSVJFmLHaibFCDZRtR5vDlh/IKpBqoAO0D119OcT//p//b8hJcu//e/+b3z48Lc0TUvb7kFKfpv3jjhOSjQ72ZSQUCmWNFYT1lmcGGorfO0Mv8mZv37u2f37v8ao8s++/Yb9zR6xFsURhoiqQ6wtcai1VFUzKR1HYozsdg3jOOKcRcSTCKQkNO2Bl+cjTdtAHhhjIKcRUkJQunFADMQQeHwZCJ86Hh7e4auGvhtxrmK/axnHgHMVKU5Wsr4h9SOaE411jFFp64qkPcQRZwS1BmccISbEZPaHA75qSh3JLOSkJM207Y4QI1kjmpQQYnn2tfRLoZB2KN35hDeWrGW1HoaBnOF4HHh5PrNrag77HWSovGdXt4gKRgxkxVjBVjXGeVw1BRaAtYITIcbMiCOMEcFhSFixWGdJTMS6MTgETZacCzFojEVV0ATOlU81RJAp8FBB8aDluDmD9VIfQ5bAcx3cL5nQzN8nBpFchJdmIiin2o0pJcI4kmLCWlsIyRjJOqsdi15xrgOZElhrpuxxg6cQxaIKOaNYkEwWiyS7fAfNTj9iprq1Kph5g2RSbWb93ZY6GzZs2LBhw4YNG/5EcZV0B0bNRGTlhVmaFVeWQoCsKYU8nEmhJ9iR3eEdxjWE8Vfs736Cad+jBHQYyd1fgm8w9Tt8XRFO3+EPPwFjSakjPT+R+k9U+3vwByrt0Xe/RMWRJKE5FWKPCwU6204W85A3LDZndofVHsBCoM3L5YmgmhRdKnmi1rQoJpckvzwl9glJp+Tk9RyKcNFGXoiYOSm6rN0vpOeVnen873ScEUPMc4X30pcryme6JxcCcu6GTEnTlxhmmZPlnMsV12To549Fee91bcFlWldEHIva86JkLa9ZRIpW0xhDoji8GIR3DzcICWsUb8F5C1pKbJiJR8so3mSaFOmym3ZbSty1mrLL9dZzv+LJS1KnKYmbeSKZZ8Xbipy7YmC5nHtJNr3cXV1+m/+aLHancz6raapm4e91yrgWCsmaYkbV4CsPjBPFN113foZXcz0LKAt5dlFrLr1fkbBrJeLnhOCP4/PzLiP/3Hr1+jm8Jq8Npc6IwpT0nIGXUbk7OEI+YqXjmBocFQ/1QIqWl1yX8eaEGIvYA9keUE2Ijsjwgk8DOj4xvvwG194i+5+g+UjSOFnXGhKZiMFZQ0gB7+uV87KBnJZ526LmDRs2bNiw4U8LfzD5WBb0U1betGibF2mzZcVy5LLOnOurJYqlZCok5DgQx6I6U03LeVZsIR0yQMY6Q46lwYwSwkhIEeM9be1AoGoqBIsYCDHQDR0pZI4vR3KKGBMxZFJI5GGkAhpTAp7bpuLm0JAl4ptSj9EZy/5wR0yBGEayWpIKze5AHAe6cyGOigox486BSgdcDWIcxtV4CcQYuL+959x1GG9AFOsrUlKaZk+XIYw9TXvAuarU2UsDIYwYMWTJiCQMWuo2ti39MNJPhEvrLHc3Nff3+0ml6TDOg7GoQlYDKZFSKio+BIzH+h2+vYXsIPU4hW4Y6dOBwzf/LTlmDof/nvgSSacznx6f6VPA+NnmJKNZcL4GVzIErckYyYgpxjXWlezXG6d8c9vwNx8G/urjmdZ/R+UcPzFC1TZUlSOlRMwUZZs1NHVdVHbiyDpOBJUjxELojiGRYslkjCHgnMUYOJ5O+Kri5nBH3/e0+z3DmEihJ6XE8Xhiv7thv9uTcmYMPd55vPMM48j+5oBYVzJiY1HPtU2NtYUI884RgyC2qHPNFGnHGPBtTbu/w9ihWAUbyLnUXYhJS+ZwElIMDH3pT4yJbhjw3mOSFlVlzoh3YC0hjsU+1JbainVb0zaCtwbnHc2uxnmDWFNsZYyhbmqscwzDSAwTUzgFrilHYiiknOAwTkgJkhRyzZoKzQ7rwNqE84kYYyH48pxxqVOQYotlqRjUWFQsTApDa0ttxTJHswXSW3UqmEO71f6MTv/lZSNBVUk5k6cAPYRASCOo4LwjjpGc85TJPJOPc8Ztxk5BUJraX28M5FxsdTBSasvCpHg0JSliyvhOKiB52fxYV23ZsGHDhg0bNmzYsOE1LmRGscuEKTGPy4r0ogyShW9QEuN4Yl85UopU7R6tWqr657wcP7D3D0hzQKzB8g5CR+6+R8NA7L4v7cQRiR31zTfo7l8yPP2apt2Td3+BSCwdTGaKV9xsSjJnE0+/66Q2nDERdjKTVbLEGYVwvCQfXtRwE+m6ruW4kIgXwsVwqa0+07K6mqs5VMgTwVTsN6dZXpE4ZiKLREsibkk8LLUbNeel5IQYM21WrP+/jlVm4myOK9Zqt8uMlPHML/0OzdtyxpysrWiJRVjN23zdFec6E7pzH9aEmJFCQmVR7g8tlT0jqSINZ55eHhmPLww5Y73F+7IVZIxgY4DkWJ696acs96Ao/MxU61ORZc9nIQ8VZoGqzkTv6v6KTHpYzRNpd60UvCZyZZmX9f1YXpJJfXuZxdV8zWHbpY2YdXIRMiVmHkt5HhBUJuJX56eNpZ3lgzAlyq7fmgnMq2eECzG4fqYvSbivyNi551fE/vwcXZ7Fy17a2+ctRKXO3yAGiyWMiSfe05pHjslzWwVcPmON4W//v+z92ZLsSJauiX1rqSoAG9x9+54i
IzKzKk+dqjoTRdhsYZMXvOAD8IX4CHwRPgJFeEvedYt0y5HuM9eYU2RE7MknMwOgw+KFAmbmvn1H1skSYZOV+EM8thvMACgUMBcs/Ov//8MLGnU4l6GMVSwQE8HKsUGgNBssBkgPrC86sm5J4w05HyqDrYGiLdQ2ZtZt4DaN0HRglQAVA5uuy+O1sWDBggULFiz4o8E/MvORs5vT6cbs7D59vhmtVihVrTSrG+vrRE4jMQ4Mw4E4jpM3a7VgsSqkwomn8Q3ZBPFKyUy5dJX8asTw1Fy2FHtyhlgyu/2e+/s9KWXu73aklOmCElQZY+Jmv0fwbLqWq6uGqwthuxZWq0u6dkU2KumnRtCAdwFxDt+FWgBZ5upqQ4l7ShrIRYhjRqSSZN16g2UYxxHLcHf/wOZyQ7va0LTtlD9Xx2ICcYh0Xb2pzMVQ8XitRNF8w6oqpJR4eNixHwZiyXgnbLrAi8sVofOYV7QLCA5DK2FmIBQEz2G65XOugdCiocVGwbsOp5H7uz13dwf+h//xv682P+MD15s1lIbdOPDh7obDOFaShprBaFZzNb14JI44yZhUW9EQHJ2AMCJWGEfPbz+N/NUPn+iaSkxdXmy5fOHxjRFCU8klmKxPPNlANaDakLLQtBcTYTTWC9lX29KmaRjTgcOwo1uvOaTIavOC9faa3/72W1aNcn/7ie0msOoEJHFze4MUY3PdIaoE72m7Nc477m/vKHkkxshm1aIqjHGglwMljXg1EI8BJUeQXElyDFTx2qDqKGakkhnTjpyq2nH3sGOMI6VkDocdY5xsUhtfScwp87BpGyznWgppg28bXr19RTzsaZyjXTc0XYt6dyRqRaplcEoRFa02seNAihl1nlIgZiNZqkTvpBRUhBAcLniCb2h8gzOjCWXKU43EmBjiSCqZnA2RVIlFd+qFlVkdeNYua3Ouy0TUPi3MRGrmCzYpEqdu5+nXSvLPf0/mnEczUkqklCtR6BXUKCmTcjpaNCkgvp4HsULxctwXZDCZoiUV8QaU2WWqLheFXI7ltzkhz5Y36KOG9gULFixYsGDBggULjpDj/6Z7Rnny3lNUEqpYpULK7oE2CGXM9f704R0mI5v1Fnv4DfF2pN3+hEKupEu7oqQRyoH4/f9Ac/mXdNff0GeQ278nXP8JaIsjUXATGVSmYbqJr5FjaV/JlslG8YzoOCdLjoTN6Z/TIZ6RLpUs0dORPlGJPbJKnTIdT69lym0/5RCe5y/ak23Y4w0ff61xDDXuwooHSm32PR7XTEY+Hvt/ze3+OX15Pq6ZQAJOqrojOfuE9nymwHiOuJJ5J9NmdHpmsF23xGHg4eNH/st/+U9898N3HJKh4ZIL9rx+uebq9Stc09A4QbJMtdN87h8rcPX4zGcmJWu1NI8ll8I5nl4rJ1XlkeL77PPPvX58iZzN0JFvO87wabJtul6PBR1HZyWvnxPMT4nB08TKcX7PlYhnlPqzYz9f9sjp55nz9/l659+Pp+MrjwjIx8racroYTCgGZolkxs6uuNRbSok0oeHmIdPwW9qsjF6h9Agrst/UmCK0Oh7tb7kInzhsvyK6S67khvvYQbNmbe+4TxEdD1AK2bnqpZQeaLg8zuEZhzpxo0vhvGDBggULFvwx4R9NPj7nbT+/PweA14y2QkqZnGJVK8V4VD2WWAnIMQ3kHFHLJMt4PKD1Bjr4eptnSrFqo9k4wfum+tBnGGKiHw4MfSaVut/DrmfX9+yHkSFmfF9vF8dUSDmy9oX1ZsXLzrNqAm7qnMwUxCldsyK0Dc77ehNXqsputW65GXvUZV5ebxgPDssDXo2UI8PeWG/hYntB4x23N7c83O/xTWB7cV3npCQaFxjHSDEjhA39YaRDca7ay4pC0A6THheqGk1lZByq5WS2ao+zWbWs2obgwEmBkusNJ1bn/ag6rZl2PrSo62qepjTV0tYrYxF++HTP9x+/5Vf/r/87Wgqb4Fj99Kdcbi8IcUPKmXR3QyrVEtOyw7IjaMuqVWh7yAOSEsErXo3GG5KF4ozrjfLdg/B+MP79727Zti0/S5HGe0Qdq7XQeE8RR06CSgEENU8TNqiA845iieA7shRSGmqBCpQY6YJHzej7PZvNFaVMNjA5IwKlJLZXV4BAjqwvtjwcHhBxdG1XG/OScXt7M+3H4duWVDLmWrIKoqF2TopWa1MfEA1gxng4UGjq3JpR4oD6QElGjLle/8MDWLWB2e8yzWpNTCNdqB2zKRfWXaCkRIqxdu/mao0q4jDJuCbQti0iVZHrxDCLlYAko+rJVhgOkRQrgSm5KgdVFC+KU1CveO8qKa0OU63q4RwBCN7hUERCJVNR0jAwJMOJ1G2IoVLq93XuHJgfEJR6DqdwRY6dj3Z6oFDtWsvZ35CqjmZKLa0MqSFTMZnKlH2aKxmZSsbGKQdSBCs130SkFqWWBeeoY8qGia9WMcVhJLwIWqg5m+LIZkjOYA7EVYus+kcPMTkrqOMx82XBggULFixYsGDBgnM8VjQ9IbXO3nkEm4k/JY8PNBeBIRubdcs+DXjXoeuf4LorePievn+ge/Fzxv4eu/1Iu+5Y//z/gM8jh7t37D/8GpGRcPG2upv6KZnNDFNBcm0CFOfrfa3OqW3z+M6JmpkUmRVaT/MNHyuzZpzn/X05929Srz3Z31F5WOWgU+THUzLuRMQ8aw87zXaNcyg03oP5+izDzsmyc1UeZ2TPiYB8TrXHk3Nazo+ReXxlUrRN8zetN79r9vm18KNKt7M5mpkeQXBSjzPtHvjP//4/8f5ww+XLa2QHtx9veLPuuf3uDomZ17/4hjA1VupZpM5xbudjktN1MOnxfpSA+2yMMwHP43P3pfWfO97PMigpx+19UVkoJ9JbnaKlOhOJnRGiZ/vi7AiPpPbZsT/a/5NjeLqd30c2Pm/X+txfheeP/7g/PV5Bx2so46Ak1BSjMLLln19+5L/8tmfUDRpe4beRa9twmwayXlPwiKV6rIcPXLY39OUNiTUZ456XbN07YvZYafAuUFTRNGCSab3gDv1pjI/O1UI7LliwYMGCBX+M+EfYrs6oN+dyrno8KpTKZCmZSCmSc/13POY9DsQ4kOIIliphlmq2XrFE45TihFgiIQS889Um0hWcVtsXE6kWkrnQj4lPdyO7+z0xRXIuDGNkSJFUjHE0PsWRMRlimcvO0SgEM4ahRxQ2TYu6BudWmCjqGlIuNF2D4ZFSi46YRppmhZhhLmClcDgcyFIIvmF3d+C3v/4NX//851xcbmiageEQGQ8HhkNP0ykPDw9crC+O89O2HXUSFStCaDqGYU9OPc6Bd4J5nZr3aj5GtbQsNKFaleYsUBwWc1WKuuk2ORsFI5vhg6OY4lzNVlQ3ET2mjFnox4ilESmFXIxDcvR95PVlg3rhYVjhd/eAMOZMFOOQM1vAhYagiiSQyuQQnMNJwhR88DTe2IaRfih82I38p28/0PjAan1A2xXtaoUXX+0zJVFyHWu2XAtNqfmJTh3qPIf+DqdacyGLoa6jcdVC52J7SdOuaZu25lCIcv8w8LOffUP
Xdoyj0a02dN2a/vYWSAx24K4/8Or6ut4op8Jqs6LkqhJcNQ1KRrTBeYeVgg8rrBiuJnUyRkOUes2a1pxEmNSryjgc2O/vWG82lJIwC2x8II4JrBbm3tVrvO/3tdNUJ0VkCBweCoLD+wZEyBbJw56YCyGsyCmTMpjlaruaBrwPOB+wWFWB3vua6yKO4D0hNIDW75ZwbB6YtYM1vrFe/06VJjS46QHBsem4VPulYlTbWC218/RoZ/O4CK8doHMndJ4UjqXaKM37P1c8lqqitlwqSViqAhIDm+xWc0qo6rEgO3Y2n3UEg2BlephRq9+qfjSDbOeVUhViy1kWLU8emJhxCrVYsGDBggULFixYsOApTgTkbA4yLz5/QH/8tNkxRqRtMr5pGW8G8vYX5Pt3tN0V7Hek4R7vLxjLHYdf/0+Eq28Ib38BNPjygb2tyOMBFy7xr/83WDqghzvsMKLNGmkvEQKJeo8t03393Ox3PvSnJNBRlXgW/vdjCrDzjMTPiJuZ/DvuskrW5krEytm9uOPIgD5L6thMoJ0UhXX7dZs6RYMoUi1wzWNTs+uJxJxtL2di70SayjSO/AyBeqSPnpBKRlUkms1qQa22r9NKOpFbj7WGZyTntO1HircztedMBtZlNR7FYdx/uqVvL9F+oLDmYjNAMlZNRxwy+5t7SkqTjtHVmo1a2pyUmY9OEzKf78+m/rEe8PNcQvDeU3I+jv3HCLync/t0OYAerYAfFXpnnzVEDTPFRAlNg87qTsDIR2JY0DNV7azSdLVGZSKt7WwME+Erj/b3ZaLxuWN5/rNP7VXPFaX65PX592v+ok6fdAEHFEt4Vb6+3PHdA6yuXpP2ESvG3V2iyx4pI5vxBwa3orAhHn7g7QvPQ/wJAy1imZAGJEcGcSi3k/HZGtGWoV3Vvxu+hYdvMavq7WwFx5xTWq8oeTz8BQsWLFiwYME/cfzjyMfjjbgcf7cCxWpgfS6FkiZr1VStK1NKxLEqHse+Zj3GsSfFEcuRYoUUC1JqxmFMmTY4XAioWFU/lQST5UTKkK2qyYZDTxp2WBkY4sDukDkMhSKlEm/FyBjJclVriRK8Y71pWG987YYT8I2j7Vp8swJ1jHHk5nZP07Q45+i6jpwz+zEixVCB9WbNOO459CPet7x+ueHb7z/xN//lr/jFP/sF682WnOD25iPOe16//Yqm6bi9vWW93qDqGcfIarViTIngA2IQfENOO0oqePUkLTRti6mjVDaoknAimGWyFTKQi5AsYVbAhFgyyaqa00q1lbHqslNz/HKBrIyHiKVSrSnLCKokM+77A/eHA9cXF1ysN3wKLXHoa3fk1MXpPDRe6FzACJhm1Apep2KLTOcL1hUu145PfeEQHb+877l4f8O2a8B7mqalbcEmZZ6VTCmKiMewKbM847VevmaCD021RB17mqYjpkwIK3zj8KEhT0rQ3e6Aac1kFDyH/p7r128YxhHvPALshx0xjmSMzcUWtQ3rLmBWEKm2PN551HsKmWIjGiOGqxo9EwqF9aqlbTpiMVI2xkNfyUHva9mQesQ6So6o94wxAcqQEjknfBNIKdVjDS0p1XySod8zjj3Be3a7HaLQrDoQxYlHUVLsiSkyjAkhVDVjTKQ4MI4HUso0ocN3HaFpKSWRU7W2NZVKxFEzGp1KJagLoIqhuAKN6GS9W585FIAzm5iCoRSEmr+oZ93E1DSXY/fyyZ65/u2wiWSsP/n4e1VDTpmO5WTpfHq/8og5z53NtYgUV4vT+WGHF4fTqqqeHyAg0zEzFZZl6iKdidVprADl7AGBMNnELliwYMGCBQsWLFjwGZ4hGB7xBM+4CEkl4EoaebEW0A3O3SGuI+cBcQ3Wbki7d8jhV6y6S2T1Dff3H+FmRXP9inj/iZI97Vf/LcPdL7HhAd9dUNoNrhQs70kP3+PMIVOtbc7hnhny4/E9yW+XmjX4HIn0uHFvPs4TSXVUQ8qsAJwMYKdcwOf2/2PkTv2AHR9RHBdpbSg0qS5BaoJJnlRi5WxIJyvUk1ar1i6QT2TZF4mj0xi/RKzBiVCc52NqmeScmbaZmTz+//dAJ0J2VkAqdE1De/+Oj6OS0gG1j+wOB94NytYJTdPg1dVIi9mj9Ex+OTeLMtGTMo3pJOZ9ct0+GehTe9ic82ck3bFhdL5GOC1/mp34FE+39ZTQm6ewkBmGVONIPDBOBL/IkUedz7uiU/MriJSjXvPpSbDza8xO431Oqfl07E/J1fPXRxLx0czVf2eF65euK6bvDoA6j4jhRfjp5QPv+0AaoNdL1uEj+wLSXGOHO/oUMLmGwzvK4TdsN0LfX5PTiJMe0wAaMLcmmoC8IIy/mr4tHjc9izI6OvXEklGp+58eEhxVxJ8d3oIFCxYsWLDgnzT+UeTjfKt87LyzGnxeSqbkVC0R41hJkDgQh3GyXB2I44EUB0pOlByxUu0wg3M4b+R0smps2gbvfVU4psQ4DKScjt1pKWf6PjIcBmwckVTIKdOPI31MOK+s2w4XHGMZSU3BI7y9WvP2xZo3rzZsO8cQM6jDcmbf7wkTqVWD6bWqz4CcC843ON8AmTSMtMGx3WyJk6IzhDWvX13z4dMt93fvMYlcXV8zxgO3n25ZdRs2VzXv8Pb2E9vtliasMGr4ed8/EIIQnKdtNlASQ4yo8+SS6YcDxTL1Bh1yTrXD0RWEiNNqHTrmVPMGcyVfrRRElZxBs1WLHVcplJhrll+0QgQyhk7Zmu/vPvHQ7/nXf/YXbFddzWU8HCrxlDJeA5vG0YjhULIEVBKNzwRnqCqtODQl1CLbRuicsB9hyMp3twderO4Qp2w3W4JvKCg+dFPnn4epqFV1eOcJIbDbPeDDCucc47Cv5JUYop5iQs6ZGHeoc6w2G24fdrx4+YqmWTOkgWKF/WFHihnnqupPRAmh5WF/4OOHj7x5dU0xiKmqOL0PzBmnApANKxEXjH4/ENo1ViIp9oTQVtILyDGSSyH4qtJcrS4YhoE2tPhuTU6JYeixrHgP1ebXCL5FxKNScE5JOXJ5dcHY7xDX4EILEvDaYiYMw0BKIylnQHDOYyXycNiz3+/o+x0CrLoVzapjvV5jzRprWkLTIpImErV2iXoXaKxmV6J1DMEEtUKeCEFs6gbVue90svgtQi3U5w5OnayOpj8ik+3qTCLOKsdq1ZzPiMby+Mfmn6roLccM2LmrWShTh7RzkzWuuGrZO6lP64+g6iYFqHxWQM/LilQCnbOObKZvX/nig4UFCxYsWLBgwYIFCyp+nEA7I1wKk42ikfMOETgkgRjpb/6O0h/o89+zCkZ7+XO0+5riHBYH1t0Fh3f/lnzv8e4ad/UTxrvfoKLEH/4D+cVPcFLrTTWH8wL5gB2+g1JwVaL3iPiDE6nzeS4fPGX6zsmUeT0Vd+aqIrUBUM6JoikbASoHN0c12Pl+PidxvmTJ+VSxB3WbZhzjSWqKoTvjj+Y895kgkemXE/nDURH5OAPynCh8SpqdqyI/s+P8bJSnpfPc1kctJ1LydIyfb89xavYsKlx+/ZZ/8b/9l/hfvWOddo
zjBfnigqJGKpFXr96gbcAPOpFxVNvdR+RYHdNU/h7fepb++kJd9CW70GeO+tFnvkgqPrm+zvMYj6Ob689peclGLrnWf8xnUE/nYHaysdo8e4oLkUeNpufWrGenvS4RjsT0cwTjlyKLno0wMjvOfV32+POPCcvPvxepjJjBN5s9N4eOPOzoZY1R2HHBBZ/YDQc8CUdG8o5cjHb7itR2+DLivDH4C8QadCKms4CY0PkLfLrnztVYHMwxYKw6YRh7ZLVFzYNYVRnXs1DVxgsWLFiwYMGCPxr8I21XpxuhiRygFJIVSo7kFEk5kWMkppEUR4bhwNAfGIeemHpyGsnjQMkRSq5aKAG0oF5rVoGrRU4pRkylqr+kdkSO/cg4DIgZY8w89Af2Y8/DIbIfI4awXTUIBSsjGWEbPFdXa1bBs1kHVquqcPRdi+sMwZGK1fEaxJhoQ0fbtLRtSxYYY2K1WrHZXpEO4EqipD1tp7Rdy9ArY4yAY9OtEVG8wu7+I6uubmff79DWs1lfIJ2nH/Y0XUcuCecqMZJzovE1QzCbod5RhkgTHNcvLgjhHekw0vlqnSpidF1DCI5cMs4HggbGFKtgFKBAtjxZrGYsQ0mZlAppyNzffOLw8ADFardtqZ2s/dBjudAPBzatQ8VIJYHWm2fvXFVrMtnBukoQto2ianhR1g2UvmccC62DzgnODDPh0yHx248PvLno6PcHum6FhoCI4dxkWaoy2bzMxJKSs9A0HiMxxgGh2vA2TUMqmd3DLevNFivQH0aGfuDrr75mGEb6cWCz3XI4HBjHyGa7Rb0S7xNmcNjfsru/45uvv+LQj+SUcF0g5Uzja9ekFKth7GrkOFLSCM6zajwxjRTLNRvTBaASYaUY4gOu2XK4+8TFZUv1BU0cdrfsLfPV26+qTeqUTVFKvWEvBm27ghIR9TgVwCO0jGPi0N8xxp4xFoaY2Q2JfoyMKXG/O3D30LPvB3IG5zzbruVy23G56bjYdGw3HRfbC7p2hQstTdtRmpaSx1qxq+DVo04RPKRESZlimVwU02q5WkwwB+rqQwwRRaTmQlqRascqUgvpAvlIMFrNcCyTiveJ8jHn6d9SavZozuSUJ1vWx4Wdm2xr6wOPqhB2Ak5BnNYfVUQ9ojY9BHHoREDrI7ep5x4QnR4k2GfvL1iwYMGCBQsWLFhQ8eNqvdo4d6S4tDImghAP9+wGoWjBiUG6JQRYX13gt39KkUDR6jDj2kvyTmlf/hzfvqSMN0h7TbhYAQLbr0gf/h75yb+sPIsKDoeVhKUD9u3fUlwgmH6mTvqHkCUzTvatp88UqznscuQTZWoYZHKXmZSGUhsaj6SRyUREnoi8L43lxxRns4PJXI/kcagCSdMTwTOrMifSz6ZBHBVy6EQCTln2Z1afs6XrUwLy0RzN453eL0fl3VNi67HasC7/nOB8erwmQgQaGWjtQOMgi/Dq65+xfXVNGh4Yhp5YEiVF8hC5u3+oWfZMjZaP2F77jNSbp+jH+i7/a5V/04JnycXntnn+eq4xP19n3v/0zSq11vPO4Z1OZ28m2U9sqpwd+yygnVW559utn53XtuM7clQnfnkunr02nryeFcGn6+DxcT3FY5J2Ji8dL688Y8wMQ2SV7hgFRB2WYe8uWIUBPwjDYUR8oVutSe6SqB1RC0EGtumBUgqjbigacDbF4LBm1STa4T2ZtySFlD3rpkX3kTypcE1yfW4wz9jStLtgwYIFCxb8UeEPJh/nm6ZZeWRTDlsuk81qHIk5ksdISpWAjGPPOB4Y+j3DuMfyiJWEWEYsQynknIgxHm+2wpTD1oaAOsEHR9N4xnEkHQ6QBkDqeinTjxlLxoUqmxAIwbNLiYd+rOMVxQEXnXJ9ucZ7h5REKh7vAk4DRQsSC6UIqSSCS4iGqgRUT9MqZpnVasV+HJAuYSkjGUQKSUdyzDWD0TI5VXWgU8d9/0C3WrNab4+EbNO0NMGx39/Stiu67gLXdQhQ0ohQ7VNFhRB83Xfb0jitVhcC4mpOX84F0QC+Fkem4EKHjhEZp7nVhmLGfNtdrLAb9vzdb77j3/7Hv+bj7pZkuRKQohQDr0rKibuHW15drDCpmZ5q4LQqJ/tclYHBO4IKmgdQRZ3QBkW9kFPGGHHO0TaF4BKjOXozboYDD/2BQ7/jslyiUjMVxSupVKtcDLTIURUrqpWItARCVTCGBiuFh/09uUSCF/a7PXc3n/jmq7eowKdPN/g2sEJpQuD29p6LyyssGzefPuG8Y7PZsO4axAofPnzgYrPBu479bo9uOryAcwLBIxht0zBGGPqBsFqBFcY0on5VScOu4zD0tYz0gdCs8E0/kcJVxVpVw5m2aSrh5iu5b6aIOlLKhNByOIz40FRb3AL7+3v6w8Cnhwfe391z10f2MRNzhpIRK+SYSNHYDYnb/ciQ6ve4UUfrlU3nuFo3vHmx5e3LS15eX3F9/Yp2c0HTrmnaDitCtIRzDu/quSmmpLESgUihoJUoLeCKgs+TsjAfi7eaIzOZlpaqdCxmWDZyTvX1pH6cfy/HZakqXM2mZflYw5SpCJ0fIKjUq9ypTopHQVRQ5ybicVI/CvX3iais5dv8EKGO9FTrTXmSx4cisx3PggX/6+OXd4Vf/F//H/9rD+P/t/D//P3z8ff/t//L/xcGsmDBggUL/hjxY4rH2cR/krEh1T+EqshScn/gsrvgwy7h1BHaN+R+wGgxEuK31WbRlMOnX9GEFe2bf12tJuU16eOvyO0l7fYraFawfUO8+ZZw/TVqtaYtCmYJtRpL8rSp7kvE6XMkypxvDxxdSGZiTGb1nNmUAVdhlSGqlWmVJqKiHO+5KfXzNucRPlZhfq5+47j8OH6r+xExQlCaImAOyTWORJ07ZhnW5Pn5fDw95koxPeVP5Gg7W8d/3hR5HM+j8z4RiJwrGLW6whxJzGnZtM1zFdxzUCt4cZBGAg9IqZEy2TJqU5OleDyZJEJ2ynqzPioD55y+mXSd66nzOuioDZ2WP822PM3H59fQ71NE2jPLf8ye9PE+7NE6hkykamG2IXXq8CEgJdUjkaq2PSpibW4sPVnimk2vT/LIs//X2tNkKnHnN86VkM8c8+9zzHmq+jw2vB7X+/H1rYDDITjuHxIZcD4QfUPWF8QCrq1NB4mOrbvHuKPzRh8uEWtRIgVlsJbRNQSXadMNeRSSrjHfUIqrTdbhAhnf4dvXiBVCk/H7e0xeoAjZPGZl+hvgfnTsCxYsWLBgwYJ/eviDyUczm0iRSl6VnLGciSWS40gaB8ZYFY8xReI41hy6sSeNB8p4oOQBxXCudojlidAqJZPGSMmZMjq8QesU1wa886SUSDmS0sjsy5Jzpu8H+kOk5IQGB74mzsVcyGY0zrEKQqNTX5o4mrat1jJSyZ1sZbrRLKhYzUik5tWt2o6cC4hhJUNJqDNSMkwVS5D6iImn7Ty3t3tSzKy6hv3+wNX1S65C4DDEqv4Sx2H/QI4RxNF0a/b7npyVi6srRMA3LWRPo45xHKAYzgXa1vPiakW4vcMKpOzIUdkfEheXntY7c
qmKwywT6adQVBhzQbVBgZR7hnjgh5sP/Ob9t3y4/0gqU+7F3GlJOf6eUprGX4kfUQUxmsbhFZwYTgpOrfJpxWgaJYRTT69rAk0YaTx4B30yCp4xFnZDpkxtseoCiGLFKsHjfbUCRXA+1NK8REQDcYyVaPKO0DaM/UBQZd1dUErhfnfL9qKh6wIiQh97Xl1dkrMxDJE4DqQx8mm3Z7ff8farN1xerrkbH7i/veHm0wdeX18Sc+Fht2ezXuO8x3lHNI/zlVB0TWGI91gc8aEjDz1SPImCC57SFx7u71l166p2dJ6UDecNN2VJuhCmIh0SgkzWLyUXEE9MB5wPII7hMDIc7rjfPfD9zY7ffLzldhgR53FO8U7ofLUU8qoEn1EPSOF+nzgMiT4VLClaIq4kAplQBnxJeBFiGtlsE04F71eYM0pOZCuoKs4Jfsq/jLmQSqSIIHgyVhsMfEG1oOJAtNq2iNbSunBUM1bVYzmSsFYmArLkKbdzIvVnxWOuqsfqnaQUMXQu/KeHGqrTgwulOjppffAxO77U63K23LHpkYOcOnBne6j6Nqo123ZyNJqsYxb6ccGCBQsWLFiwYMHzeEpAnlsvykQ8zveutWKqzXwSd4TtJeX2HZvNVwzDnrB+hWx+gjTb2gg87Ej3v2V1/adYaCli0+aU8OYvGD/+kt3tr7m4/BPa7VuG27+h7B9g82K6J9bpvtdN+e7TPfuEeUwwxwqeyL6ZEFSZcuCtZqEbhlLv+22yvzQ7Za2bTmoxqw2AMCkmJ9LNjru0R0SfQq0zZmmmzNv5PcTWGZHoFZrOU8Zc7W3zPJA5/30mGGcVIJ/xPUcl4vyLZeaqYhrlZ2PQeZzH825HkvO0I2W2dTU75eTNy58je89ekcg4Bl5ed+Bq3VLLL2PXR6wU9vcD63VAJeKcTRmbTNefYTpP61zjVHZtJknP1ZCzyo7P3jtX4T1PRp7DHq3zmJCcRjDN6DOYWb9z0m9eb1KkisAhZbZO8aGh7KkHaQUzfbSuTmTkTEJyPj/PZJGKgWltGJZZFPtfiS/ngz79zOkIqztY+YyoFVWKFbJoJQd1XV2r8oaiSpAqGDDROl4fCAVGU9B2UgQH1Apleg4UzRHda5wbCOwh3hLdmlKEKA0hbIjDB2LzhtEuicN7dKMoaTo9VaFaahfAf/0ELViwYMGCBQv+/xZ/MPmYc8JmC8ScKSmTc2IskTwOjMMwZTsOxDQypp48DqTUk+NAziOW4nRXqJOlZj5afljO5JQQCjkpJTpMIVpiGEZ2ux0PDz39OJAN0ljoh8Q45pobKUJQ2HaOdbdhjJUkXTfC9WVL1zhKHoij0a3XhBCmLD9XLV1HIaWCSqFVV2/mTFivLsh5JKUBykDXBkpUinkGU2JRUhxRB9mEvo+sVoFm1fHuwwdeXr/ixfVrpiqA/UNk3/c0TYcNA123Zjj0eB9ougYT8F5xrsGXQhx3FIuoF3zwBPVYFlI0hj6Rh0yKGcPwTmqOYMwIgleheIeJnwLdI/v9gY+HPfthpFhVWFqZuhldJZidq7f8xTLff3zPoe9JOdWbURO6ANuVo3OG0/qjGNmEtm3oWgUbKMVovCcHY9VGGhdrp+HU4Tua8eH+wO3DyKtk+EndWLJhUi06S5m65pwyjONUGWVyrurAJnjAWK874hgITcP9wz0pQcyOQxRy6pFSiMOAOU8cBq5fXBBzZH8YWG8uaELNNXjx8gXfvf/I5dUVXRuIeQSg4Cg0DLFgxePDClHwfqRpG0ouVakpMAx7xgzriytEA8PwQGgyOdci3EzwLoBlNtsV61UzWdo4Sp7Unq7aH4lvUINx2HHYH/j48YZvf/jEd3f3vD8M9DnjvGO7ErwqnXME72vx5Axvjk3r6bzS+oH398JuHxGUtfdceMdaazE8Dj2H+zsEo1ElOoeuBOdaZvuaSsopzhnOhGzUIqaUiQjM9TrJbrLwnXNe6gON2r0MpdhkvQplIhdTydW6OSdyqn8PcsxV7ZhKtXvN9WdWIZrY1NXM9BAF5lza6fnFtKwW9XJsLJ4sYOcHDseuZpk+p7Urm/q9roO3o73OYru6YMGCBQsWLFiw4MfwRQXkeQbanINYMiYZSo80X2Pxbwg//d9R+nvIh0o8kBhvvkMs0bz9S8QcaEKmnDWjgGW6l39K3n/i4f1/pHv5JzQv/hnj+7/Chw6CrzVHypQiWFHUn2UPwES2zA15diQdjw1+Nuk1J5JJpiZWsZOKcoqGB6bmwGOGYd1utpkskpNS0SbCqRZG1alEZvLic0Xcj9l2znuuKtPaWJwnQZ+o40h4nlGtx+0+IdmO0CdKPOZYiZn0mrc31R2TYrDYHPcgU03+HE72nf8QYgpADcQcUoQxKt+9uyMOO16uHAOw6jx/+zcf2WwEdQWSI40jf/X373jzzcV8Yo7HO0/xbAULpxzC03Bm4uts5I/Uk9MszF2bZ3P86GjP1H6nvEM51nJQ61OeO8fnm5sJuDO1oIqSLXMYRy46wemc2ymI1hr1pG58nFd6HNsT0v1IQD468Cfj+hF8yY72S597jBMhP6uMnxubWCVGs2VWFjloW8+dKJoTJoZYz8b13LQNwXnS+InSXFOV0/P8TfMhRi6BbJe4BlbcsXHG0N9Q3AZfCnn4QOo8YhEhYY/q5Ol7sGQ+LliwYMGCBX9U+IPJxxhr51xOmZhiVSGlyJAG0kQ6jmNPHAbG2JNyT46Rkkckp6pkY+putNnCtVAsYyVTqD99isSHkcN4IHiHmNKPkf0wcvdw4DCOlGx0ztE4wa1DVcBJqTapbcP1qkNQ0jgiZNbrjvW6Q7xHRXG+RV2D4fCuxYdA210wjpkYI+o8JkIfR/xqhbq23oTlgsXEZrUiDsZ61bJ7uGe3uyf2I2VMeIWHhztW6xYvcHf3iWzG5dULnHesVhf87nffsrkovL64oGkaSoGUBzQKWaAUoQ0Bp5Vc227WbDdrurbBO6XPhcMwcBiMZC1D7FFpSQlSNmIuiKu5mTUfr/YNxnFgtxtIKaNond/JjobjDTnUkHDwvqPkzN3hodqequKccrFes1mtgFJjAb1iJdfcx7ACywgFsxp67sTReMU7xeGqGA1HKcJ+NG7v99zf39N2ayxn1ClpKnadc+RS0CLkYjTtihwHnPeYQQgtGgQsksuAN2Eceob+wO5QrxXBuLi6qJmaoWW7ueAQ93Rdx6tXjuGwYxwPxEa4vLhkGN5zebFBFIZ9z3q9IYSa05jGsdrZ5AilZmQ6DahO15+rlic5VYLUoYjBfrerJC+C81Mna8msVyuapl6LmE7dw4JqYEiJXIQ4Jm5vHvjVt9/yV7/7yN+9f+C2j7jguNyuWIubOmWrLa1zDhPBidGp4THWQVg3ijBSYsZyQQU6r1x0HZebDV0bECnASIwH+r3HgIaC9y1O3VROGKq1S1Qn0q+qEWuv5PxQYq5bTaZ+YpPp2YHVrEeDkqa8xzTlPGYjp8lyNWVKqhbHJedqvVoqQQ61ExV3KnZPDz+mzuFJ3yjFamE9X+bzTx3cudvqqaO5PmqZCsCp
uJtMmYzPrZcWLFiwYMGCBQsWLIATQfAlHImI+gosoCTyuGMlI6MGJCdcc0Ec99i4R/e35JsHVtvXyOYlMN1X4xHmPMGaYy6A76646C65f/9faDff0L38Mw7v/pbuqz+niODtwGob6VxipH1C5B0Hdzyec5KojnqybBWYw/LK1PB3Or6ZxGBqHJy3MTUHyqQYlFM23Cxqm+/XjxmNU2OhyfPxB5+TNhMpY5VQyVbIQKnVAaKnpsPzXc5E6NybaFONUebPyal5sRI/NQLiUVaicNYEKUcS9pHt6nGUTyb7+NnHx/ac1WyBY479OCSG7BhZYQygkJLw5s2m2m3GBGJkAS2ZscTq+mrzKM5GdCQOj12bj0Z9Gs+kGDwnBeVcvcmT3x8TcI/UwPO/T1SGX7JhPY5Bp2CZs/mf1YGl1O07FYIKMXO8Xue80WM9eLxuZZ7ZuUX1eH6etVSV03X9+Px9Ptan+JKN8Ofn+kRMzxmkj64JajdwoT4LMCs4erJeTqPPqPOEuMflj2SFkhOHsGUV9+T+E9a8Iovh1GMlwvQ9QQRUSKXgmjWt7dnojlgiYd2QYo+LI1oy5ASuo/YsWyXHj38FFixYsGDBggV/LPhHkI8jJWdSSozjSE4jKcZKNMaBcTgwjj1j3xNjTy4jJSbMMl4MJzZ1QE4ZbCJTIQOpGMWEYsIYq+JpvxNCcFXNFyNjTCiFIELG8FKtH51T1quW0AR8cGw2HdvNGkGIQ7Vp3WzWhKYhdCva0KLOgxNizBSBUhLBtay3m5pdZ4bTaqMy9j1tU7PvrAgmSrKChEBolBfhmtB4Hm7vK1HSR8Yxc3jY0a1X9MO+KkVzZnN5RYqFi4sLckmMaYc6Ifg1QxoZx4HQNFgRhmGgCQpe8EG4vlrz9ZsrfvX9O3738Z7dwbh9gP1QuEjgXS2JTBRz1XIDqwSXAaqeIWaMjHPgPISuwTcOhgilUjVGzfO8uFrz5sVL7m7v6cvIw/5A4zzrbs3rV6+42F6gZUQkVoUb4JyjkMlSaLSeT0uJc8JGpKAKXgtihSElPtzc8PXumqurgZQiXl3N2vQBEHKpnbXq/GSVU2+MnRiq9Zb2sN+DlYn0iziX6Fpj3dZ8hxgjhhLTSLNZI6Xl8vKacRj41ccbPr7/lut/+RdTJiD4ENgfIiUZuIySaUJDTgZW6rGZos6RkqDqSTGDQHAtlvcA1eKExKcPN6zWl6y6BixX4syqalGdp1glYwFc01XieIw8PHziw6cb/uPf/pr/8Mvv+PZ+YJ9qsbFC6bKwEl9tgEUxFESqUlaU4JVWQZzSeo8VRxoKtw/jZGmrOFE6H1h3HX4VUOfIJXM47Ik501mhbQpN0+KpORfFauarlTJZ8iaKGM75ifyshHflWyu7Z1OmyVykFoOc7ZjzeG6tmlI6fm9KSnX7k9VznTt5VPhCfUAgxRCdLHBmC6FJxTwTj7XmnIx8dCo4y/lTglpc21xwzsW1cFY+/fhDpQULFixYsGDBggV/vHiqzJtxnusmM1FlqZIl6cC6cYxjddXBtVjMhPaK8f2/5+Jn/0dotxNBUxA8hYKpoejxNlYA8aDZ8+Krf8Xu46/phz3N9VvS7a9wL/6M6BosrUmu/YwEgdo8qDIRMSKUMhOGEyljJ5rIZiLUnpCAEzEyC59sWnaagrlGnAg6Ob8fZyLEaoU6O5AYtXF2nrunc36282qZOTdmJjB1oA7Lc+QCUwOjTaTOo6FXUndqSJy2WM/bkRiayaonZNOx7n08xNP7PKlj5s+fyL5HKrwvYCZKiyWSCevLC16mTDrc4gzaFrx3jL2BKXE8cNm1NOsOkfY4h6fsys/H9WhfPBb+PV7++Ho/rX9a4Uvfiefw+z77VK34aLnMxyE4FyilNkaL+lPD6Xz+mankPNWX9RnQTJKfSMnHJOO5y848TecKzi/hS7mpP/b6dJ0Z52a0j8hcUTBHcEopcSptK7kvODQ9YHmHhAv60iP2Hkoh+Rc0+R19usH7CxLgCLWWL2dppKLcxpZYLlFp6N0LRAa0VfK+J6UBiz2NX02iA5tGrJ99TxcsWLBgwYIF/7TxB5OP4zjU7MVxZBgO5FjJsnE8EOPA2B+IY884VPKxWIIpxyFrJUG8E3wIeOeq7arWzrQmZBQYSmEskMZEypkQHV1waMn4nPHqWHWBaIVcKjXQBuVy27DZbiYrVY8J5JxxwbNardhebHHegwt41+JUMRXalVIMTBVyLSScc1ihWrJi7A974nDAO49ZRCioeqwYw5jp2pbV6pKYMv1wIJSGYUw87AaatqNtWvaHgXeHdwxjog0NMWdC17F/OFCy0baJ0FzhQ0t/6PHbNaEJpJxQv2a1CqRReHX9ghebjg83B8Yk3PaZ2/vI9cuCNoZzlW0xy5RJAYmras8hFoZUSMAQEzl72sazXnXc3Y0TyVLtWtV5UoHvP32kP+wrOYyQS6JYT2ZgzB5JI1p6HIHWC02Y8vGotj/rNmAKD7Gnj4kxV4sdRFBRFCWmxMPhQKGQcyTnjBeHb1rUBcxqNuKciYgoZmmyhk2gVi09i2Oz3nBze0PTBFwfWK9XvH3zhvfvPvBwe0+3XqNNoKSIYBwOe1QVdcLLly9ZrTeMMRGcgzzSD8ambVivAs7XBwNx6Gm7DhFPGhOqwhjjpF6sx9OGqhCtN96wWW9IMeK8VPKVSWwnlbQ0wpR9YoTQYMB+f8/HTx/469/8kv/4qx/4q99+4vYwIBrougYwHKXalNIgztfMFqu2Rh7BO5mOz+GcQ1LkyqAfM2PMiBmlSCUAS0KkEFRxWjuoU0rElMD7aX3DLGCiZIOYMuOYGcfEmDPZBNVq+6qhwTlfH41otTEVY8qGmWxXrVAyRyvVHDM5RXJKpBhJMRJzrIrHmNFcIBe0TLkyBkW0NjLMncdTmSWSAK0PM8pk4WNU+55s4CZl5Nzta1aJbTg+XJnxebfu/PBhwYIFCxYsWLBgwYInsJmcmB+8n6m3ACqddrzhFAQTocSMa9aMUfB+jQLj/oZmteHin/+fOXz/N3Q/+ReoFDK+5sSrHvkIFcPNBKG46hCCsn71C8bdD/S7D7TqKfv3mCmmHstgfibfpgzD6T64AHNe20w6HgnFmXCcDuNLkegngovjds4m6agurCTHudLrtN75XNU1Z02aHf8/xy4cP2OGTq4tjYKT2kRc1ZU6ESqP1XzzehzHc36OOC60Se1YZ2gO/Xu8/6OA0ubfT9aZz3Exc+78+fVyJLLOSN3zcdbqp9CIMAyFt+vCeIAhKtEENaEUj/dGTrVujSXxL/78ind75SbaZ9s84XPycybjPjuPnJOWX7aMfU7J+Nzyp/t9bvkjovxzRhQQxAmN9zSdg7vquMVM0n+2rp7OwUR+z6o9ObuAq2J22v/5DMhcbz5WCD93PM9lZT733mPCd27gfVqn1hGqCV0L26sL7uOBIhtcqUpXN95gNpLbt7Tpd4x6hVetDkQtDPKWMP6OmAQNLzBJlYhFKFNDca27CwWPB7II++Jw8hKRQrEPkz30dX0ewdG8mfzsmVywYMGCBQs
W/FPFH0w+DsOeOEbSODIOA8N4YBx6xuFAGg+MQyUhUxzIacAo1XJSFSsFxVfCphiaCs5NxIA4vHeTQm4qIRRKyhxinjL7MiJGFwKXqxUr9ahCEyoZ6UNA1aCkmn3B1BtmVHKmZBrfIt5hpTAMCQ2KWCV91AkaXCVccuKwHxB1hKZBFVJJHIZ+6rrMNL7SR6Vk+mHEO0fTdVxcXbHXgEmAktkdBlzIhKah9InbTx9Zry/wTYPEgbZZk8aRh7sH1heF9faK1WpNGhNm0LYNOSdEPaFbs9le8fb6Bd9/fOD9oScmY7/bk8YBpw3auGo9Yor6jCbh0KfpJrZm8akBuUDKOITNdoX+cE8uUsPmAVXhfvdQb2aPNaCAwmrjQXr6PtJS6HzN+fMu4JynlIRTweNogzCmkVwyYxwZcyaWqdNQwCgUUZIJYywUs2oT4lztbjXDBU+MkeA86hwxxWk/QyW5VLBc2GwugEzjHSKee7vlzeuXIBBTJPjMeu0rcW41x2+3O7C9WOG98uLlTyg47m6q/WsB7m9v6d5c0a0ClkfM3CTiqzmEfX9gvVohvkVcg5Kn4jIRWsU7MCfQNvRtS0z9VE9W61DvPIKDSRGo6kA8D/c3fPfhHf/p73/Jv//lt/zuw559hKZtcRMRaJNF6ZhKnVN1qLopg9NVi9uJ4Ee1kpCqWC5cbYyHXeTQD/Ql0afIOI7EwdM0AXFSSTqrDyByHMkp41wha7V5yZMFswHqHIqQYs1uLMXQAt4ZwQfMFJmeithU8JkZVqjHYVa3VzJ5yn0sOZFTnCxXE8WqLWvNiZ0KcgpioDmjDpwpNWlmyqBRO843JlipXZuGHgvDYvVpiYiR81TMWc2RnKu7WkCeHg6dPYJYsGDBggULFixYsOARRGZP/8fKr5OCSY7kFEz8nkKJe7RtGe9vKZbpP/09Yf2CZv0a0Zb16z9l+OGvCW//FbgMeCxHVGuJ70Q+I2dqs10iXL7FtxsePvwGuf9bdPsWrNQmXJmJxyPtxfFeV55kqc939WfjlyNNM69ysr48jcVNCsMzddhZqfk54XKKPzAekzJGbeKUZ+d33uhEygi1ITPVOlPFUSQh6EQkfa5wO659lvEo01w8T6s9ZV5n8vBETNbzbMd5nOeWR4Td6XgeK0if7O1IVhVEpTo22YBobbQUCTgZMTcRZ5FpHl3N/2OKzvhs+ubj+/3KvS8Thue2q58fy5cIw3/I8ueIuWfHNxHCY8z40NE07qxxVJ49PDk7R/PrmaA+KkFnRvrsMpPTBpjZclWllPJ7j+u5a/5L2aWn9R5ni9bvZn2/HzPBHM7Dnhbziutvq81ue42fIkTytE4pI2orTIyx+Yp2fE8c7intpn7LZXL/QavrkdUnfKKClAyuJcWRMg5sNpfsxx6ZmqBnvbAsTbsLFixYsGDBHx3+YPLxcDgQx5E0jPT9gaHfEYd9JR3HPWM8YDliJQN5UhEGdOq4nHVJxSqZV0w5PcZX5uxBFcO5erOfYiHmqpJqvKJW7V/EOdpG6ZqqAnPe44Lgg6cNAVSnG7P5RqkwjiNSoPENztflMY2AIMXhfVMVczi6NrA79AxjT1BHt2pJQEkDKY7c7QdWXUvTBrCa0Rj8Ct0YbePZXmwYDge8d9w/3JPGarXqgzCOEbOBlBw5NzRhharx7a9/y+bijm9+9nM26wuco85nGuiHAwVluw1885OXvLvb8emXv2M/FA4RdvuRi1RYXzSgwhiFYUzEVHC+Fr9eDS+VKFq3DWgl/i4vGnwQSpaqQANinspHq9l+hYJT5fJyw8XW4xykcWTTetopyxETUi6sNy2Ng1YFFwo61sJLnVSbGzIB8JOiETPGaNzc3fPVmxfkHEk5ghnOBxrXcTgcqiLPefb9iHdCSoVGPSkXSklIyZQc67lKiVevX+FD4P7+QLe6wF1ds7u/ZbO9pB8yrmkwqpLx9tMnXr28ZnfYkXJPyiPr9oLhcMOhjyCeYYiERnDakmJE6UmpJ6XZUhSceFKp2ajBN5WwskJOiTpFVZWai9B4RZyr16pzWKkdtP1hz29/+B3/89/+DX/7w3s+9SObbsPLVYdZ5hAHDjmSqUV3Kco4FEoWfNfgpHrgOufq98gZ6qsq1plHsmfTGZvWc78f2B1Gdipsm4Z1V0k/zUwVoyDqsTgQh92k2uww5yjZKMlI2TBxhKbF+Zo3Y6V2OJoJKRnmaqFfO6QnItDqQ4naEVzbOGWq4mbrn1P/deGUGVuL6ukPBxSruaNmtdimoFUii5ZSO1AFsNlSGEwTlCnvRSrHOjcal2NxdKooz2vAuY5eqMcFCxYsWLBgwYIFX4Q8/r2qp/RENJxzICYIGc134FvK4T2byze4y69hdw+udohqu6V58Zbdx7+le/vPcAbFN9QYdJvue2dSY1IxYqgGchFE1mxe/zOG279i+P7fonRHu9NyvN89z747YVZ3zRl3nx3iM9aRM2GkegpYf6rg++L0/QM+V8mhqrKaLHKOysWZ9CslM/Qjfoz02dfICLTSI3NJ8Qw59vQ47PjeTCwCMm/Hjuf4ZGF6TmZOpBzl2NxYzvcx72f+t5S6vScE1efEW22QLAZSCl6N0EAbwGfHOColKyUHRBr6vufhYSDGuWH2SLE9Unw+noNz4mse7aMZ+sLrz8n351WR559/vI/nVJI/huM1R31mtN+NxFQdkhQBCZg9nxlaNzBbBD+5/o+H8WTs9uTtefHvUYCeH9s/JBvy/Pjmc/5oAKaYZJSA0eDynuga9HCLSSKvXiMl4y0ySocQcM4xlulZGIaJkcJbmvF39LEgblObqIvU5l+pf4NKrn9XvGTSkInDe8LqNdsXa3bf/oYkVmNajEdZrwsWLFiwYMGCPx784eTj/oE41EzHod/T97tqtRoPpHggxQGl4JzgnaKuWq067zETnDicUxrvK4khMqnIqj+9mCenQEwZJ4rH04VCnmwZnYN1G+g6T+MdTRtom4Z2tcKHBucF75VVtyKEwOzXH1Os1g/qpk62jFMHAmmMNE3Har2uFpHiQafsQxHGISIGQz+gU7i9c0Icjb4fySWx6rpKrKHsd5mSMsVGNhcdORW6VUfO0Pc9K11zsb1gGAZwgRgTY3xg1XW8erlmGHsOuxvUVdVlcA5oUUkImfXa8erllj/76Ws+fLrh3e7Afhg5DCMPtzuC86w3KyQlYsyVmDOrCrIC6xCIUomrVhzZCtuNY71Sxj6dbv3NalekCsUK6pTr6zVvX29oXMGVwqqplrgi4IMnBKVtHKvG0Tpj01RFYimVILOilAxOGxwOLxxzQHMx9vuB/X7k6oVRJsKuNaOUgvcBEyGVfLRJ3Wy3gJD6oSrtKHjvyHmkWKFpGz59+kTKsFpdcHPzEe9bvG/opNAPEcXx4f17vPOUXMipsH/YsdquiXHArNC2K/b7HodV+0+rZLgrRuMD3nkYB0QKIkpJCcqIOsfD7jAFtkPjlZIy4zjStmvaZkVRRXyDFSVbIg8Hvn//Pf/+l3/Fbz9+II9Gqy1vLt7S6Zoxj9wfHij3N+Q8YlNGZE
lVBalSc1LFJZyvXb4yPdioXb+Kto5SCquVhw9KzEZKSrJqQZzTiJCq7bCGI0GY+oLYCKmniGPMMCYl4ikSUPOE0BCcB4Q8WbVgMn2najVb+y2BidC2XGpTglQrXqdKUAdaQDMmtZvXO625ljo9HBFBUQr1OnJnRb7aTFJWBaeqTCSlTMuor6cubstnDxOOHaenwm+2NYK5jJ4f6CxYsGDBggULFixY8AQTgTKTEiL1HtSOhEzF/EowLHt2DzfsP9wSC+iLn2J5REODia9Kv2L41UtsOGCffke5/gmWtDabTk/6ReZMxWoLalNWvVisDYQWCBd/yf13f0WjeWrCK4CrYz6u/5R8tEe/nxMKz5EvNinAjuSdnUjNcxJv/jzzPD1jSXlOZD4eU0WxUx3Ak/W9d9RsSSPZHFMxEbX8fqvQeVyfEURnkjcRpVCJq6r6e6pOm5o6ORGMx+OeScfjlNuj+fzR/EAUkXrunBqvrkC3jv0DBLdi2NeIDMzw3tE4x+VmQ4yJj8mht3Nky9mxixwzOuU4mt8/L6c1TsTYU9Xfl8jE42IpzF6+/1Dr1qfbqvM/hcCYQ1ztMg0Kko/9tU+2V8d7+pk54sc14Wfj+Ez+eBzAo3H92LX13DE9Xe9skzN3ffy9vl+bgS0bwcHIBjncAoJ0r7CSURxqN/TaoQhOPaWMlVCkIEUwLfTdG7rD99QneytUa2LJTFVnhIyjHG7JOdFtvyaLYpZpFKREwE9Wyna8phcsWLBgwYIFfzz4g8nH/eGOsT8wHA4Mhx1DvycOAzkPlDxiJWEKznm883jv8c5Vq0eZVIuqkz2nVJtEFXLO1YJEjK5kLBlJFQlVNYlByhGzxKptWDUtwTlc0xDalm6zomm7SrqoEHxLCNV61Qqod+RUrSBLrmpHaZraJSmQUySOI9rWbDwfap6feE8uO+I4ErxjHEZKHvFOCU1LSQWnnv4wViJEjVLAh46724Gx79muO5q2AzO61hhS4uPHPU3bkccDzgWc9+Sm0HUrQtMhIoz9AVWHGgQHIThSygTXcLF9wdVmX+1X7/Z8ehj59JBpmgErH0jDCoJDcQiOOA4TEQNx7DEplJgoRZBSCcPNdsXd/QMlgpWqdkQgm+Gccf2y45ufbNk04FVpJbP2gteamUHRSiJTsw6DzzhvjDHT94XDCH1UrHheXl3jTTjsbhHisTgex8zhMDCOkQupGjdVIcaRYRxZv7ic7FYLWKq5lCnTtB0pRSQXUuyJ40CxSrqOfY+IQyXx6sUFD4eROCb2/Z4UM4c9fLz5wC9+/ifEWLh/OLDrIy/eXPDrX/0Kp7BZr3EO2uCATIr7qsCTDgARxfsGUIah3sBbLsQSOQx7xDI+eIolhnEAg9Y5vPNEcWCOkgsljXy4/cBf/eZveX97Q0GJVhgTSGhZtVtCGkCVh2GgP9SMRkHIuZAnBaJOuZ8iHicGWlBnqKuNAU4gJUfXNASniBm+9bSdor5awKYYqzWvM8ZiJBkIsSHFyOgGxgLRlKIt+A3iQ7VWzqWqLUVqTuec9uBONqfCTG7X68/sZLesNhUpky0tfgqrt2qTJKWgGJqnrA6DTM220am4OSogVZBJ+YgJYnUdSgYMVCtpXcM3p8LfaqfxZBQjTErOiXyci9Pa8b0UUQsWLFiwYMGCBQuegZz4iBOZN9GRZ8zSsRGOTLZIjpFGPORIs2rp4wPSvEA0oFLrrowh1z8nfvxrdH9Ht76q5JtUZ5FTE13GYeRJnSc4+mHP+OHvcOkd6+0VaZdBG0Tq+8+p1eZxzvyMImRmgkSeKMJO683EI5zIlEcT9EQR99n+nlhQPkfkyDyr8tyYZ98lcGqIlOpEUyYKZcobeS5L8TmoPL77f2r6Ws9tLRROhyPMgZxm5fN9PB222FmT5dzyeH68p9dzXSJAzpHd4SP9sKMLyhgN7w31giRPKUayzGFMSC6s1oFEeLT1z2xe7binx4vPPv+UGK413kxBHr8BZ2TZ53P7eFFt6P4xsur3EXlMazupc940tT5WGYDVFMuhT7YzM3sy1XxQ/XDm8ynTpT4f08lSdSYG5zZVEZnI8NM8PTfux+Trlwn20+dPBP5MsD5CqerFbJnDwz10P0Pb7WR/rBhlamZ4UbNPVSAlVGpNLArFEorSN1/hh2/BhNKsAKv2zKU2C+8f7oBMc/HzSrqLMeDxXhlTj4Tt8btZif9F+bhgwYIFCxb8MeEPJh939zeVeNzv6Q97xvFAyeOx/0mFqljUqrBq5mw6YWrXYsqfE0KoFo2q4EumZi0aOEfsHBKrwq0NTc2uo2BWQ9JFFNQR2o6uW9G1LU3T1ALHedS3oJV8dFozLqyM5BRJpd5A+iKExk/KMEcuMAyJ0CRwGRGH9x1tCyUbh6GvOXq+A8sE70kyIg5a33DYD0SJtZ1O4eKy4+5mx83tnq6rBMx6s2HlPLe3B/r+QNsolMgP339gc7hme3FF07b093ucRmKG7WZDTrl2KrYbSi5cbIWv3rzgTx9u+eH2Ez/cPvDr727wekXXbIixkitN5/HBIVLY7ceJUCw4NboguOJwQGyVr15fcHs3cH/TV4UoAIVVI7x+veXt6xWth1aMtVc2nZ+yBQtNEHyTaVaeVdvQhOoMNBxG+t3I4ZDYHQq3O89m+5YX11/TD/d82+8oMeKCQynEODCOAzFFSi5sVmsw2O8PbC+2hFCJR6eOcci1u65tSMMwZURGsELjlf1hZIwF7xpW3YrL9Ybbhwe6riHnwn63o21XxOHA21evCV3LD+/eUXLim6+/4WKz5cXVJRfbFS9evCCOPbkkSq4ZpUpVP6ZiHMahZlI2SsyJJgRSMooTvG/Z7R7otF53OVe1ZGgazDnISs6ZlEduH27461/9Lb/79J5EYTS460cOu8L16sCL9gVtt8K80u3veRh2xFTIVtWl1cq4foeEgFGLCqcZ5ws+TNbG6nE+0QRH8IIUoW2rYrbt6veSYmQrWIyklIk548YB148UHNGgaItrlLAqeKml8EimkHCiqG8JTvHOkakF+7FktlocqVIbE0pGnFIkgdaiRkRwKiRXf1dXc2yyD6ScK0lYMnM5Y6VUleRkzWxS7YSU2r0rNtvolKlerASjZThRicbJgmmyi5kK73P1o4g+KioXLFiwYMGCBQsWLPgMj/icE4FRG2/l0Xs5jTiD1YufEL//O8pwh8YDvoDrVmBbkjfEHCqZzfU/Z/fuP5NDh2vWGDWbnaKoekSMQo01OexvGW9+iw13XKw79O3/if3Nr8mHv0algHmOTo5PSMFT5uHkqGLzff10gLOy6ezWuNqsnrbxFD+mNjzPinxqh/olcuaYlzipwmqeYW2ydDplPjqHF4/lqXl2UoZ+dpyfqc7ObFen2ZFZHTgrHc/IumM+5nEblUw7J4weZ0ue7eeojHw6pUe2+qjWPM+/9E7IKfOrX3/ixbbju9/tCI3R+FKdq9LAMERub+4Zh5FvfvKa33zQM6XhSb53VG8+SxhNWshnyOLTMZyN+kztCo/ns75/dmzA2UX4vNr06dw9W4+dSPFiI
CaEtsFKfzzGp8ra0z7Op3qmE0/n6PhMazqeuSnVBNTqvr9EJP6Yuvb3W8qe07727Fwwx5OIR8SB8/V5gK9uVJ5CNkcRxVvCN0K+v0f770E9oh0mDTY31a9+hu5+Q7YLNFzispEYiXff0zZCaLbsJ3WzWaHgWLUtQ4xo4GxuTkraBQsWLFiwYMEfB/5w8vHu42S5emA47MkpIpR6Mz8FnXv1BNfgnUfVIToFudf+MuY6RFVx3iFaw+G891j0ZHE49eDAzdatwdM2AZzhmLbpHC40+KbmOyaqlYb3NUMPEbz6SkJowIWW0BaakimpVOtF73DqEHEEXxWIVqiEqjhACaEjN5mYajag90LjHZYjIQjBV3uadtVyOBT2+8K+JEQiTdsy9iN3dwNd5yg8IM7jQsCXhofdgZIKbdvy4cP3lJTYXFzSrje41pNL4uHhjuAd61UDJWEWUc1sty1fvb7mT97ccnPfc3s/8u52z+XG0TRKkIDkEecCTVDyqqXkVAnUYDTFEUdwVtAO9i88H68b+t1ITPUmdrOqpOTbVxe0DagNrIOx8oXWK06FNjh8cLTrhs22pWtdDTIfjZKNcUjsDpG7h8RQOt6+/Ya3r77h+/ffIuoQB6VUwjZno+8HBGEch6oYLcpmvWa1Wk8diDLZrxqrpiGEBisJimOIia7rGPsdfT+QMrjQ0KxW7PqBYRy5un7N+48fefHyBZaNOI588/XP+OVvfktwDav1mq7x3N18pGsbQugYo2FWVY8xZ8S1KNUa1ocVKaV6LFpVs03r6e/3FEsUYBgjZpmgtSgJbYM0LainpEgphWE88OvvvuWHjx/IOVX702z0Q2Q/JL77+IHWdby9vqb1k/pXlFQAgVSMXCAL4Fzt9FRDvE7fz1yVgCI4pF4DjRKcw7B6XbiWtlvTNYKUQEqRnAvqHcHqw4tUCkMyRnNocDhtUG1wLoB4VALBtTj1aPCT7lFrD+v0N6KIYKUWTTnX/IhsVT1r1DzX5GrzQfEOFx3OOXJweB/IqVAm8tFyqrmoTHZLNil3mUpoO5XNs4Jy7lKmzL3Dc/F7eqRQrac4kY7Tg4P6CaE8SmlZsGDBggULFixYsOCESgKcyItTwONzaYmVMCjjPTFGcuxxXinqsOxxDnj4NXL3HWX1Ere6QMKG4hzt279g/P4/0Lz+V4gLqEzRBVYoIoy33zMe3uP9Cu+V5sWfUbo3ZBEKA0gLVFvS01i+TCwx3VtPFBQiM1nISQEm8oRgss+Wzc43sx3pTPE83f+PKcE+H+NMnpXpeM7mvxjBO/KgJ5KQMqnJvqQ0e2IXO/tdnqkWReQsL/5L45r3IVSL1Mfbf0pCy6xinUhUs8/rE86au7UITaNsLgJb+0g5CNcXPZ8+7HHbq+omZEZg4E/erri83NL3yr/8puff/XqFqVHK53Nwfj5PROdJC/ljjZhHheYzx/hYzTpv7Xz5l6/F5+f3czXsfD0c+kgpgpNqMyrFKiFH+WwbM1RnG9aJNAZO1rTnWtBzQvDLY5t/P5+Hf6il7ON5qxa754Tt088akJLRNI6Q3wMb9gRwVwS5Z9AVQTJv2nt2hxXCnty+peQeKSOaq2sT4jAXoHuLP7wjScCA4eF3dN01GjZYuUPDac5zETR0lIc9yMsnc7VgwYIFCxYs+GPCH04+PnwiDiNpGBjTSM4JL4LXAFRLVQ0e9Q4XXCUXp/LEqUwETcFpnv5tUHGYZoqLZC2oCl6qTWvtVqt2F+KU0FYL09CsCKFBpi4rSqaUMqkia5i4q2GSxFzIYzwqJpvQIE0tfnQiHmeln6ojWyZZqZaT4lD1BO/puo6HOHJ7e4OTyGbdVGJFAqEJdI1nLJ7xLjHuD+zvd5QS2W62xHFkv4fLyy3qItlGNustuhK+//4DzivbbsXh7oZh2ONCx+vXX/Pq66uauRhHBKkELYqEgvOetznzze0Dv/lwz28/3PHDxx3btlrdvnjhWG08wTvatqHrHKkMxAiWMtFqnh4aSZa46go//6olDiO7+8iqbXl53bJZBTpX59hLphFHMI9DWTVK2wraBC62K7ablsbVgm9MmViUw974dJu53SsubHl5/ZJV1xHHAeeUGCFRb+tzzhwOkYe7e9abNWMbuGgDwYOVxGEsIFByYb3ZEIKjlEzwjjgIFxcvKHnkw4cP3N7dgihvvvqGFEd2+x3takV/GHi423H9+ppkPZuLCyQ0jHFk7VtiStzd37Db3fPmzRv2+wMf9COX2/VkV5rpmvCIxIop0bYNKjW4PU9kmpVMShHVRMpCSkLB45sN6jtEGtQJY9pxc3fDDx+/Y8xjLYjEMBJWMrkU7nZ3fOpWvLxcEVxLcE21T9FqLytAKWAoRZTqpGI4VdRT7YudoWIoQtM4Gq84Z3V9rdaxbbum7aoVUWMJszIRcJXgHFPBj4LPHtddsdq+pF1f4EJbv0uT2tmpo6hCqTmUPvjJhrUqRuf/vNYHALkYWjKox+WM10xRV+1kfSL5SM4tISRKyuQcsZKxnMnT959Spn8BK5jNoR6GlXzUNtbcR6ufoRKURjk9SDiSjTXvFNNa9Es5PhaZC/EFCxYsWLBgwYIFCz6HHlVxnFrfgKoEeuI1iQF5uMOR0HKP0y3dyz9jX35FP+5Zv/pL3PoSy5nU38HDR1TBtS9pX/+C8vFX8PpPKdaQizHefgv9He3Va3T1ijJ+pH3154h6ahyBw3KqDbdSpjGeqRWnERc5qR4rfzrRLpUVw6zW4Mf1jvfHM7E4N/JNqkmRI4l2spE8J1FmErE82d7T7T/GkRg6klnTz5EUrTVJwqNSj1OONqizdLG6Mp2rClVP9py1bjgjY5mJ5cdjk7Nlx9mU5+uHIyk1r3/iTGutd6xP6nbKccdSlXZmIB4zz//+v/lnXF+M/PKHO/7kT/41223h7jBih3tKbmhboWsiweBim3m53fHbDw0f9udjekyoyXEsz+QscsapT0OxSRUpk3yz1KcXj453/uzZgT16r77/OTn3HHH3LKF3GjSp1Gu7FCN4YUr7+KIK8TSGOXdTngyzXmOP9/Xkuyz8qNLvH0JCPrdO/fdzFW0dB4gJmUzJwl0vXLx8Qx4PrINS4nss36PhJ3y9eeD7/SW4A952k5igBdeQg1ZSPidK2RHGH5ByoNx+IBZldfFTWF2SS8JlICdEJ7WkgYYOSe+PYlrhsz91CxYsWLBgwYI/AvzB5GPcHxjHAymNJCphYOIgF1qFoEJwnjZ0eK2Zj5TaDYlMakeneB8QUUqJlZA0wavDgidFgWRoNnIqCB4LHpEW9R2h3dCEFT40OBVKSfUnF6ASNWjBSiVlVBTVmolXCyDFOY/zDd4HkEqalmKUUipJgmDZGGNiGPakYSCXRAieEByHhwfGw5511zIMHh8CXdficqRTwwVFN55Pnw7c3DyQSiJ4Zb1ZUWIhF/Ahs77Y8rVz/PDDB3L2mGYsRR4e7hn7nj5F3rz9CZt1RynCOBqhCdjkhbJabfnFz7/h/qHn9u6B24fMt+93rFeBdRdwprS+RX2gwROzJ4kjSk+xkXHI
5AzeCtcu0V7A5mctw9iRi2BqOB9pAngRgnN03hG84TsInRCCY7VtWXUOJ5mcqJl6ptzd9Xz74Y73B9jT8urqNaDs+3vu9/cUK0enl2RGsmq9Gse+ZoWKY+xHMK12tGlERNlstowxklMl54xCt1rhJPPh3Sd2hwOb7Yau7RCBlAacA+cdh92OF5crtp2j1zVjKrx7/x05JeJobF5sGYeBtlmBwXi4R6znxcXPcC5w2N2xbj1NCIw5k8ax2vpYoeRITpmMnzpqC9Wit+aejmMitB1tt8FpqJYomtgfdnz7/ncchh3qM65AQI9WPpqNROL+cM++33MRPEjV3mWbsgltKvKKoWa4o2nPZIukiklGnUPJSAj4pj02BHjvCG2Dbxp84/EenJapm3l6IGIwpkw7KkkaXHdF6C7wzapatYgHVZxWvWNWRZ3gnQdXrVBrEWLHYytWyUItBTWHpYx6hxWj5EzJmZQzJTTkksmpftdzjljKWMnEkqoKcvoOk/O03bo+M9Fo6ditjRnF8jHfUawqWW36e1VrecEkTfylTp279cFEmbI2FyxYsGDBggULFix4iiOJYSBHTm9mauT06/EdIe4f2HY10MS1LSUlirasVq/Y3/wGt3tBe3FJweFWl5Azw/135PEjKpny2zuk3dAEIWy/Qi9/wvDhP7PevoE3/4psgprgpFAQSk7VUch8JVSY7SI5kipylr83267Oajab7VHPSTdmsmH2PfpczShyts3jcpnIyNNnZpLyOTy1tjwncyoh5kCNUgzvBCOTciHafEJSjWe0x8dn03OLZ1VrU4PocQzPnPMfJZPkZN365c/Ik+tiUj7KXEvNOstTU6WZ0VvHd+++ZRg7rl9e027esus/ohpREcQJcTR2g5AT/PUP72g7z9grItec1JmnEcrMotq09Jw4/UyR+the9UQln8i5z1V98xEeN3lGzD0/jwaPiK2n52metJn0KpWbRUVQSVRSvExk+JcIQDuN7cgunx/JpOicj7Z8/l0+fv1/RPX5D1U/Hg+K05zV78YZiTl9RHEYBcmZvqxZeSX2d+T2LVfcs1695/3thihCED8lyO7x2XAknGTMSnUn0oCsromuRdSzCZ6UE6X/hDMDO+DEYxIoUpt2rWkRRkrJ6Bnt/GPq4AULFixYsGDBPz384eTjeCClSMyZNN0c1u67arvaFOFI7jlfLSuwarFYQJyizlU7VT27wVRB8IhrEB8xekR0ymeY7FmdHG0jUSNbIqdKKkAhlepx751DXVPtVwFVP5GPiTnvrnY1uqPZRs5VOdX3PSJC8A1eHCrgvaDFV5LMMk3TYqs1437Pw/2OEATnAiVV0urmbkfXBEIbuLjoyMkYRtjtDtzdPnD54gLvPf0hM6aezbpjtd2y2+1RccSYKyllws37d/T9nrdv3nB1cYV39VZetfYQeg8X28AvfvaaHz7e8O9+9T2fHgbe345sVgkf7rlMidW2pd10uOAZxVGKow0Butohhxqyj/gSuWgKrkAuNT/TOUcTKnkmVmgaj/NC0wS6dUfTCm3rUK0kUo6ZMkb2tz3vv7/j7mDcD462fcG6uyAOIze7O/r9PaSIYBQKGSh4xmz0Q8JywUrk0Ee69ZqUKlHUrdfknNntdiBVObderehaz8P9J0Th5csrck6kVGhbz/24w6nStR0lZtyUJfrD9z/w0Pcc+gNXV5cE59lut8RGSDHR9z1t0xA8NN7AYs0PtUwRh1EzSNerbc1tjBEmBV2MB9puQ9e2pOGAitE2Dd3qgtCsqEEIkHPi/c0HPt7dkErN6hRRshoJyFPovZmx7wduHu4IwWOT2i9nm7ISZbIbBmZ9nhXEKompZnhxOJmtVh3O1WJU1eFd/Y6pqwrIEBw+1Pdm0i2VjKaMbxVzLRq2iF9hoohoVSN7XxtwEbxzOHHVhmXKtXFS+5jnur1gkA0tlTDFFfKkYLTpXz9ZrKacydPvOVfVI6UQSyUh58/nnCZb1rNcyJLr3worNZMiJ6QolgulZKTINGsFUZtIyYmINJmqSsPKnMWiPKq3FyxYsGDBggULFiyYMZMEUhV+Yic6ZeZWzkmGIkY6PHDZddz2iVZ2pJtf4ixg0dhuXoCNPHz/P9Ne/SniHeo8bnuJHJTy8BvW/h2x3yGrP8N2H0jj79i8/RfgAsXcZPlZsw4FsJxrbTw1tsojguPcYLO+nm1Hi9X63qkDmew3nxBR53TMyXJ1yos43+bxM7Oiy47rPyZteJbQeZxJOc2rTsRpATElCGQrtZYRN5FDcr73z0jXz7c9kaxPFWc8IeWerncki+b67DwRkek5SHVleY6MqmRbXUenGuvIi02xEELGmfJ3v/xA8+c/4dXrNckc0OB0j6hDRevzmywQlCSeX/6uYb9P2KqgCOVsrOek8PmojqTvZ8cqx6k7PwydJqGqSZ8e3dN54uy8TqSwnV8lZ2rDM/L56TkwqjWpSt2OOo/znpR6ajE6qXDrJB53blNT72Ni9XRtHbWQz5CwAo+vw7P54ZnXT5f/PgXkuWJ4nrt5VPWVUQTUFJxQJGMCDwS61Zb1+B0vNpnv92u03NLlDzSauS8jIUVwHaO2lYxXV7NjEfzwjlYy8eKnRHFc60duhxcUgZAKlneMgB9HDE+WQkBrLIq20zwbTw53wYIFCxYsWPBPHH8w+ZjMSIX6k3O9qVMgVWKvzZlUCplC66vNaZluknJOVflY9EgS1Js3h7pq24pziA+gDWgkqNF0nrathJdzNTS+KqIqAXHsP8ul2r66SgyAm/kXYhxJKYMqqlYVmQhYtYaFahezWq1JKZFymYROikgAX7AoqHpW3Qa1jFeQ0lHKQEqFnDKbywuSFT5++EgTlGGENEa2m4aSK5F3e3PP5eWGbr3hdvfA4bAHUcYo7Hc9beNIaeDSFxo3cvv+ntz3lJ8ktpeXtFbJ1FXbEFolJXjzSvg3f5HYx8Rf//aGX36/B+8ILajPmI0YidC2BBfQVUvKQtNAcMaOSOkdm7bDu8y6KfSxIM5VQlgd7SrUnBCveFfouo7QNIirYwje4dQQLez7yHe/u+HdTeLTXijWsV5dVOvUfsf7Dz8w9PuqWKVQMIopscDD/sAPP7znq5+8wTWei6uXZDPiMLBarWmaht1+z3q9JvhalGxWa3LakXNkvV4RnbHbPRxv4lftGhCC04k8dewOe9J4IB4OfP3VV4TGkUum7/dYLuRccM5hOeMdjMNushA1chqBqshTbSblbAYyqlQbEoUUB3y7Yr3qQB2Gp+vWqAZEHeRMPx74ePuJfT9QzGFotS2ZCH1RTyFhGH0a+e2Hd/QxAsIQR0oBV/lB1OtUi5WjXbEJmAqmtfDy3hF8JSpF9oDi1OG0RTXgfYsPDd4r3oXaLOA9xabvuTc89VjKpHRU3+B8i3MB02pRqqJ1n6LVdlVnA+ZaGB3JRwMm61crJ/Lx9DeiEoAl5+NyipHKnPlYLYRLzlCMXPKRULRpeX0v12WlVCvbXFWzJSdKrGQlmqpqcqrrLHP6+2K1gQKMYlAkT223CxYsWLBgwYIFCxY
TtiDBz7LV3XkqRh1a5xFY7jI99/zHy3i+wftrTq/KdfXdMkIQ8ju+2RPDi5ZO7ahNtI067QpCQNxFgzAykCBtv3H8j9kdAklEA+9PQYm6trxsMRGUeurhNX6xX98QEbM2ojD+/fsr5+QdbJRgWq2lEnWxwgF1itr+mPR4ZcEI2MDlcv3hDDA4dDQeix7OyPR0YzkkF/GKFtyLnmc6wipFDQ2BC0kMfMetOwF3jY9tjoaOxxjDEb+8OB41CVoZvVir4fORx7xtGJjSLBMKCJAQ2BPBbKOGLZEGkJoYPQgrYw263OBeRUJNUa+pxbU5XM/kTBOHf2nknL6ctDJuuhi38qjmvNuzwVq5/74nGmzuGL/BCZbhCIEzQQNCIhTIRjzXpsYpqyHhva1BJiQoMSQ0RFq6J6VnSKoOhpg1XJWb8bVOYcmKkT1QJiSxW1YMGCBQsWLFiw4FNM7XhnlRqABg7bD7z59g1v3z3ilmnaQNn/Ayvt6K42tDffIGmNqpB/9i9IfqR/+7ew/jnt9WvG8Uh++C2pHLj52c94//ZHdh9+RRy3NKmhbN8R8oA310iqNVtd5xpmTkA49v20Bk/AU1Xdafwn0lROi/NPMhpnkk3Ocit35dQzKE5tKPbTBkR8aug7566f+xFnwqXO4NMMvk/JLHGlYIgX8Mzw/t/j2Vnf/QLRhMlIMaF98c/Zv/t3tE0DMaFUy1l1prsOT49bLWDsRCSeVW/1tar6RJl2SbjpJSkpU01k1THGToTdvC2pZiuXijg/E3wCnxBg9adzsokRqtoM57vf/JpXt4VXL/4CvNZkGqY6Zv43jf8mHti9/4H96q8QaTmPiNM9kfkcmI/DT5GHP2WJ+lPKxs+RmD/196fn4KePq2fG3BNEGA28vUG8ijw9vcLiDyRVLI+Id+T+A7J+dSZ/TxmZX1b5fpqBOT9+jiSpfz/9eZmZOR/R5yrHp1a/cqqsz5/pp888Z2ye3nAip11mMhWgMI7KB2/42fo9m2bHbjuyH1esrDb823Qdr8sHPL2g0DFGJSMgkaMb4sJKPtAEZ+e3DM0VCefuZuC7D4qJIl7oQmTVPRDYoiF9vr5fsGDBggULFvzB4vcmH2eFXwjV4tAni1EVrxmD0+KnWAGplqpjHkg2YpYJKRAkIgSwgpsik8WFT3arKoJN3YEi1TYkhMisuMTBqbmKjmF2JgSAKd8g4BoqgWd18Vg7vwRMqve8DLgI2W3KRajkRYhxIiRqCL2oIqLYFNodZKJGrSAOGqbsBq9klFlmNqYJGqbOQ5maNkO1Ew2Cj0MdeBDcleIOsdq6BlVibDEBEauPSQJxxmEEDWgyXqzW3OWR46FnHAuHUtAYuH1xy+ukuGVSu8I08e7jewTYrFZs7q75Rdfy4cMHHh/uubnZ8PL1V4gob9+/5eHhIyqBl69e4e50XeKreM2qXaFxxcfdnuPDB16/WNNoQ98PjKI0baTYwO3dHeZOEOObb9+wffxIaFrG3hiGgSYFjgfjhx/ek0KgW7fE2DKOhkjmxctr1l1EFMqQ2W4zu0H4YW/c9866SXxzk7jdtMRYwJXDcWAYjJBCPRY4BAixQR129w+0q8RYlGHMrFZX5JyBTGjqMXCr50jqEsMwMI6Z+8f3uBkxRA7btyANnlvi6gWEVBV3jGQbaneqB1yFoR+BkaCZ/f5IKdV+d+iPCNAfezRG1qsWRFGBXAw1J1jhen3DatXSH/a4j8SYGMeR7W4gj06KDebGuO9hlErqG1xt1lytW1SMnpHUCKlVshX6vkdQVJUyjpRcyNkIIdE0a0J7hYcO0eZUoNbrSjnnVJwYyOlLAeRJ1+U0kOlJ59zFOyseEUFCHUcl3MP8BXP+roFKcl7UU5U6vegIlonADIpozcwMKdW8zol8jFPmY1VzVtVjCBNRqVPm46Surt9vet4XEZCqwr7MYHGbVM+L8HHB/0rwz26Uf/9//b/8Lz2M/1Xgv/wv/0sA/s2/+Tf/i45jwYIFCxb8cWNew861KEh12+m3/MmfvuS//h9+R4qB1Vd/xXa/JaaOECNl+xusDKhH6DZY94L2q3/B9se/YfjxP9Btrli9+XPURrb3v0a7N7Qpw+bPYP2yrpeHHdI/UvbfISWjqUPSBpprSOu6HlaQUGMIfG4ilJl4errO52Sh6dN6fXYysek9M7VQSRV/ogqs1YBodSuaH1YEn9VlMhFqJ8ys31PVl8zPnebYaj3uI+P2Bzav/4RsYCokCfX+w0TGpHTLLr8nhobRcm12lqoSPJFBs2flrFA7eVjK6T7H5zIQYb5PMhPOcKluC6I0TeA4ZtxmW9WpQ/OCiHTOnzt9xImQnPnX+tbza+zEzyquiTcv1xwPw+kejU32q2fb1TrO7CNaRqJEBua8e58IrTMBeXkM5nPkNP+fIV9Pr/2CAvJL+BKp9+T5C5b8S7akIoLlDP0DYwLpXhOIiAqFTJAetQKhoS09xZzj/p64ns4nmC7eS0yxHP/I2J+Tg+fG2tmK+FNr1Tm65NKC9fS5nM+DS5vf+ZjUc8FP50m5OOeqS1hAJ4VwoL721ebAy7XyeLyhtMp6bRjK0BtJRtqg9HJNFGh8pPdElgY1RWXg26uecuz5UF6w95aIMpK53yf+5M3IP3xvqMIq9Xiv7I8DJmX6HlywYMGCBQsW/LHg9yYfo8aapSjTwptQrRdMMK82qbggVokCD5WIyHmglHbKYmuqSiwUvChqhmistIYV3AqXeRLFQSZLxaBa1+hSF+4mVvMfy2QsMVmiqDohKmbV2tRyxi3XVLmglYRIGR17JDaE2BKaFmmUJErUVMcoVBJStBYoXqqqc8pxFLNqyyqKuZMpp065EOtYMTuvkyfSRUgIipUMUgnXqAEJlfgMMSBRiKKIwjgWHK/ZlKFBNNEPlbzs1pHVTbW5HYce0UiKAfMeK455IWokTgvaw36LUdWCL26uuL29YnO1IQ9HUjBu1ivcYbVaV6vTcc9m3bHbjfzD3/9ADIHbV3c0wTgcPrJetwRryPuRZrWCEnn//j2brmO1avn4uCWXTNdEShm42lzTD4VhOPDy1S1RI31/YNN1PHz8yLpNNCmeVu/98chutyfnkaBw0xrf3gZ+ftuiZGJM5DEymvE4OnYYEH3k9qoq3EoupBApudAfCm3T4FrViG2XaJqAqBJCoJixP25JMbLeXBFSUztJQ2WZhuOO2Bp52KPNDSldkceMxljPJar9p1DJ68PxUMlGy+zu7zGDXIRcCrENSFTUldXqmmHImEC76lANjFZoVYirljwWcql5JG2KhFhJw5LrOYFAyYXBqja4Lwae0QDX6w2OcOiP9McecWU89gzjMF0LDbFZE7praK/QkBCP4FCkXr/CdCF6JfyrErlUQnKqmX3q3pzNWJjOd2ZiUiaCUBRXqURh0MnYRk43MuZGXsfrteM+XVOgfk5/qeLnqUkhBFTSiVyMMRJiqkRkbEgh1WsgxtpoEGpTQbUgqirlU/Yjguv5BoJNCulT4Td1kVYbovD7fpUuWLBgwYIFCxYs+ANGmAmricByHMk9
X71qWa0CTVS6dSQpDP2em+uvSbffQNMQMCgFH/bsH36H/PrfsurWWDnw/vsfWGXhZbel+Cviy19gZOKH/4HivySkDcUN7dY0tqqKpnJE+o/4499Rhn1tAta6lnXzqRnvmQpxxjP120zS+dSUeGouvHiDTw2EszmmM6kCpVqa1obNyYr2zK5cNABOP0+uRBeE6KWKcspqL/lIuH5JaN+QxwMhdtNnBxyn7L9Du1ua4PT9R5rmDiTM7cmIeL3H4Q5iF6o04ZRHOc2QzLacOCbyNJ/xmQptJtBMhMGm/MfqyXommqQSRdWWt+7yPLs1AmJ60UktdznVgk73YxDhw3bg/f6am5GqMFVFrd6rEBUmsxmGsfB+11F8g/Uj2qxhcnuRafuXqlO/IEmfHKIvkI7PlZDPn7tUFX5KUsqn+zk//+zx0+dNNdqJ2FZF/Ih1f1H3R6aoFCDagSwdbntyfAWaEPkOsHo+iZ8IvXk/Ha9OW3MjwYmUrtv93BxcPjafT89J66cqyKeEqwN60QigDmUaC3K+sk60/1Rzz9dnJZiVsRimhatW+ZPrkR8fA/HVC757a7Sy53fHDWGq+dP6IykUPL8ltW8IOrKydzxsA6GBu1XisBNGD6wbYyMHijmGkU15fx/49k3mh/fG28OKmyDkYBCuwJ/aGy9YsGDBggUL/rDx+ysfq9YQMz8RD3j93XKZFEF1uVzMETNEIkrEraoDzeoCS7WSL26G5bEq/WzETMHzqevSrDCOuRYskwpRvSoBTaFgFJyIMUohj2NdesbZt78qEt0KxQ23/KQ7MwYlhJYYhBQjQWumZCUzq6XmKWwcwAqlZNwNccc9TyQLlFImcrbaus49iZWcrQVHSB2pcbwUvJRK6NhE4ojWbLqmwSVW0lOrjWTOhZwLeM0QTM2KY98z5AFV0NjQdR1VClpo45r+EBkOR9QEHwrH4YgjtO2KzbqD4Bz6A0Zh+3jg8WFPtwqs1mt2jwMRqTakuVqXtKvE1fU1AlytOlJUDoeBVdfw+k1DP/aUAqnr0K7jUKC44ibEkBACx+OICtzd3WA0fP/9e2w8kFTYdIF1EtQywQPDmMnuqATWMfFnd0rbXpEitJumklNmNcMEoGRKcXYHw/OAjc7qquHm9pqrF7ccDgcKEKISUHKfOe62dJsVaGC9WVMsctz3DONIUggxkTTQHw50bYuVaqfjJZPLgMQAPvUyWiGPA6JxIpaVoS9YdoY+Mww9sV1xfbNGVNhuj4Sk5NHIBV6+2NC1QoiKBiH3AylG2lVivz0wjkZKga5p8AbGbJXI7XvGXKYMQ6qlkqTqmopQfKTkARszlqs1rQi06xWx26DpltDeIbGFEJjD6oNT7YUnxaF5waxMxZ/glIvu3Kp2Ri8LZ5gbZFUCqorGgIfaSKCiU4brRPqJINM2zUrtpmb6rjFOhGfNrqhdoqrU6yooKURCiJPV6pT9GKuaOaT6XAg1U1U1TN8nMtkjzR2rgk03YNTldDNm1kPOZKgj9VpbsGDBggULFixYsOAZTumFE6HgwDFveXj3jvfXa3Z7aE3IVmDoyYf3tX7tIxnIu/eU3Ts2tz/Dfvmf49phxx3hu79m/PAr3rcN8aufI2OPqFFe/AvK/b/Hwhvi7SvclCI+rZcbrLnF5c8QNbr9b9Dv/gOqcVrlz2QX9S8/j/1yjy5/lakJ8KLL9kSEnFWAE7kyFwSX7KYwNRbXwsEn0uTZJD77dVbeTYSuFIIIg4+sbn5Bf3iHeCA262oVqQPsHtDmGkJE1cn9kRirQ49aYA6Y12lX5jiGkyqNmYassStMhGMVlPm0136ag/Ogn5FNdpHzd9qp81yISCWCTxa25ynjRCx9Lg/R6zzifPfbf+Aq7fnzP//n5+MwNVXqPG9iUODx4/f0ux1Xv3x9qnPOGZdPD8Jsrevz8bvM4Lz4+SX70DPRdibi5tGfD+uzx56xnU8JvUty1E8/ZuLU3JGwxssHNL2kTPeVmtJTdIWJEtTxbIx2wD2CZdDms2PDvTpoyUyan349va7+Pf9+Jlmf28o+Vz5Ctf5VkanuvjiLZKb4z/t6eUU+IR7l6QyaKKhT8sifvN6Rwoq/edew7ox373qywyZFfna14+E41cDDsd47ChucSO8KltisHzkMxm4suK7Y2hWldxStfKzWulhRdh9afvmq57t7pZRA54XDfFNgwYIFCxYsWPBHg9+bfDQzrBSKWSUkjImELFgp9Xk3zApWDMuVXKzWo/W1jlULU5OTulA0oGGK+p5VRdSFfZnIwlKMnEdmVZmGSa2kQgx6slmtxZMhxU6kR0YZi+CloOKoZ4qOpLaZCFM5WVCqVmVUjIEQYh3bZIfhOLidiwcziudpMT/brk4R5yrIlBmpqZmIxNrxJW64FkiGYHVezE4EpUtV4lW714BZzYcMauRcKkmD0HUdZoGcM8fjEet7QhDEjeMh06RESC2H45EQIm2b2G23HHNmtbqh6zpUIuJCDIHVuuPqek3brYip5diPHIdMzoUQWrDA/YcHmhTwsWaEhJiw/EhMoSrJSuZmc0XTdrx9f09/OHC1ahEE84yVwou7l3zcbvmPf/MfGPrC9aajXTes24YokLNyv92zXndcbRJNCqxWAdXA8XgkpZa+GKv1in63I8XA9TrRJqPPTkoBjUpfjGSF4/FAE5WkwvFw4Ob6GokwjsZYjHAYUR3JooSmYdUlso2UsR7jmBpwwQiE0DJmR9yIUol0KwUFSh4Z+y1IZtxvycMANjBaoWkbQmoouRCDgBfGIdftuDLmkWFMtJ4IoSE1iZKdw76naRNNUDLGbrsnhEgpUMyQ+TpwQYjEWK1+c67XY86OuZGHOkbEiaqktqHtrgjdLdLcEporVCvhPV9DWM0KKV5A6j6LZbw4xTPiVZFbC816M6DWFmfSTqRaDWlItfs2hqn4n62Nq1a03neQWsT7fJ3l+oGl1EJ5wpw/KTIRjxKIGgkxEGMihkjSSNDZZjUSQyCEgAadun8nMnS6lkTPNwpUTk3Jp6zH2vU89z5PNwB0KaIWLFiwYMGCBQsWfIrnxEt0KLsHxpKx5hXZf8P16ha//iXx4Ui6/SXcvaY8vCVvv6fr7tBv/hJCqtmGH3+Njgde/af/htDcEGQPH37N0TPx6tta967/95Tf/TeUfEe4fgU2kRdSmwpBJnciqTaI1YfzghA659WdnElO/rETLXJehCP+yV5f/PfpPNQ/JtJwJuUm1ZrMnOKJwJnpl2eNfrMdKvP2A+KC9wMpdWh7w/aHvyaur1EP2PEB2o4Q1+TdB4I4HiKuAZmIvrNjCyeCb8Y5891Pf59yAd1rzSDPcwE/zSc8z1NVI5rZRQOn06TAWEqNYTlp7S5m9GIcn9qRnu+zrLvEn/3y57y8u0FVKHMdM5GQqooGJTaBF9cvuG/vEOaxnInTT5WLRpQaE2NmIGei+ROr0E9kiz41yNYYj3l/LpvB565Vn84zmQjQ51Tbmeyb1YeXk3T50oyHNYGCH36Lrd4QHVyNrA0SAsU7ZHyHxJawvgEyom09rjMZfrl/s0L
xk/LvRBGfGw2eE8SfUYZePIAwv2e2/302NfMleDqf/PTemRD/FAY5cnfdMhxHdt6w1j2/eBH51Q8FFeXj0Vm1HZvmwNtDR293IL9h1ztldcMqDnz1Br7/IRA652hrVEZu4pahd466RkiIlZov6UYx5e/etnz7+shx6/QGh2IsjkELFixYsGDBHxd+b/JxHEdKyRPBNhFlZmdlUp5VkWAlM5ozDj22KtSF+GV3nE5/V+tHQ0FCVSbaRKaIE4JP5GOui103TCNmXu0f5y5JqZ18orUQcK/2KVbspEyzYlOGhlWio/SU4cB43NciJHQwkRipaUlNSwiJ2DbEmCoZMRcEk5oSB9GaF2g5Y5anxVUhhhYNCU0tMpEhKkLJI26VCK0Ky8JsPTMTMYKCVu1mCKFatpgTQqF4JZa8gGqk6yLr9YqcB4ax2szGzsHGakEbWobjQIzCz3/2TSXTNBBTO3VTZjZXK9brjmE8st9lUlMqcSOGRwGPlKGwahru7q6IKVVVYoxYMR7uP6JeaFIkj8LhONBEYfXqJev1ChP44Ve/Yt12vHt/z9/86neIKG/urmruQjY+bne0bWS1WbPerIlquI2kZKTUMIyZjTQEVa5jYCyZwY1iTrGaa7hZt6hk8jDQj1XFl3uwwenaSGgarBheDCvGen3DerPGco+LUHKmaep51aSEjYUsYyXNcyY0gZBindvS0x8HUtNQJiIuiHPcPjDstgRRPGdiSPQyIMEoxyP7o/P+45Zjb6w1VlJZCu8fHnFR7iQQ20r8DUOmFGjbQNu19MNxsvk1bCy0sSWkWnCaFWw0jsNQSUSHzECIQkqRqKGqiHG6zQ2pvcHbG0KzQWODaqQat4Lp3FygqDmmjnhGLEzqRq25NcVPTQgeFPWMy6RG1dopG1VPuaoaI4RKkKqc1Y6nbMgCTOpGL4oXQ7Xub2G2aK2WTSoQQyU2T1mOIdZrLiZSTMSQCDHVjMkQTqRnUJ3yXKfGgqlomxM9pniUaqc0ff/NJkunLu7PFnoLFixYsGDBggULFlTMZEOJQtl/ZNN2DN5h45F4tcJ27yAqZfsD/fbvWV+9ofv2X5JDg5hx2H/APv497c3XxJf/GjFwMUZpCK/+gtXwHcd3/w69/jPCakX85q/I3/83da189XqqXBWXAIyoKZInC1TViUcU3KdG25kAOVE/T9e7MtXzPjOGl8QKZ1Kt1pjn2v9MKl4SVJdWouftnLd//izDCTJ3Ovp5G8EhfySXSCOBaCPHj78hdbdoKKh15MffwXFLbK/J41i9kXTa5onHEczPZGcQrSTY3GQ8E4rTHE09xZ8QTU/3jynn8mxre7KanYlYLnMb5xmXSe14yURdkE+fKA6nWAyEdZuAXGMrpDZ7i0p1jlIlSUJ05Lpp0RjZES5UiV+qcUJV5Tn4PC8TdG7QvpiLSxJORGtdN9nZnl/9XMOnF7t6YYeLIlO8zqW68EJ6yPm8YiLpAlkC2rwAvScOH2gaZeSKzmvMz3H7O9qQKdnAnFJGQpgI72f7f56PSzJ8JoLP5/Llvj9/7+eJ43r+zcfb5vl3puN3+bnT7S7q+TKT9TWU5IKwnphKAUZ1hm3hkDaMukbc+fNV4rEvFDXA6A/CXbfiZXvk7dFrPEnJ3Lb3dG3i77/bk1LLjlU9ZUtLT0eII7dpDww8jg05B1xrs7lL4Dc/dvzyVWATMo/3hf8JtyAXLFiwYMGCBf9/iN/7//nzONSONamLwUoeeLUzPVlJCGZTAeNwPB5I7YEQWkQTHiMiESYlEdSFVyUnajajFa1EIQVMQQoCRK0LPKleiyhCMaOYVYKgCqaqbkvmoqhmKDYeKSTKUKAYTlVrihe8jLgq7ntMGkRbjqmveXFNolltaLuuKqdkUntiVHrIJpVnIY/HSkjCZL1ZSO26jiHUbq8ipZZSGieFWakKSWoQuEyh9z4TqtNjgYngcSVb7ZhULScbWxGhaZXYUAlQz7grVgpRA306cNg9sn18qIrJUsu41ChNu0JCJZ5S08JYOB729FNOoFkhNAErxma95phHErC5uaHpOnYPezbXt4zDgevra0LTUczZ77bgGUPY7Y6s1xs2q462afnLv+w47nbsH+65Xq/IQ0ZLrBknw4G7F3cMQ8/7+x7cWXeJVXSkqVavZqkuvksBDJPMahWqonCsNJUGZ3840qQ1qnA8HEl5IFhT1bEpEqJzHLaoQIodMSb2hwMSAl2b0OCM44DGQCkDMuyrRWuI5DJW9WHJgDGOA6UYQx4oZQQJpBgZXMjZiSFhPvLjjx/48f0RQkNslEbm5wLHQ2EXjmSM1CRWVw2aApYBTzSNYsXxPCCiDPmsmBUZsTLSpBYT4dgXNARikxBxxqGAB9q2pd3coukaSWtC0yFSCWmfSO8wdZee/plTLOBWKjHpCkVrfENR3GtWa/FKlCvU/NLpn4ZAmMlHlUnBLNhM6ItgXhsE1IxiBZdQvwMmgt5tTlmZslZrHGslFSfSMaTJZjXGk82rBkVCqDdYtKocdfoOkdO/SZV80XF62f0szHks9Yvmc7ZHCxYsWLBgwYIFCxacIJzIgFCE/eNHvr255XFUyEdiEI73v0Ie39Hc3LL+5f+RsrrDxJBhYPfu74gCV9/+JSNV5aRaP1Mlom4M6VvWX71ivP81/aGju/0F4Zv/nPF3/zVJFNl8jXgBKbg6wYVcBqx2uJ5VVRdE2+Xwz7K9SfYl8/rYOS2Yme0in773xBVOZN2lUnLOW7ygEesnXBA/4nYaT5gdSGS+51DdlcygjHuSKo9vv0cxUvMzhg+/ort9he8/om2LtRH/+JHY3KIapubl2jD8ZP9O4/OTKq2u/af9rVXBef8/PeDnT3A/uaw8mc9nBG0uNtUl0zsvCcALcnTe7yfEnTuiATACNV6kW3W1CVaV4iBa42JEFcdwgx+HBOr4dJ/l6djPezP/1wBEEa8E4om3nciz+XPcpubSKoQ7zauc8gtrxM2s5DsRlxdtnhCmv5lu7viTXb6cj7mB+6JKmyI1HMXRuGIV7gnDPSoDg3YohfW6IdqRPu8osoaSgVI9eeZG9kul5fPjMm/tkgPlTI76fAFc2O1+PgPyTMqrz8Ti+XXn+pMnNeppPqZz1C5Us9PQatOBCeMwENoNd9eFh71iOqDTaVhw3u/hxbrhq7bnowoyHtiPge1uh7QbDr5C3SuB7AJiZJQPww1tNO7WI2ZbtofIUFK9dyfKr38U/uzrwNcp8P17FixYsGDBggV/RPif0HZUiQARrTabCKWK/8hiNVhepBIeeM1Es0weD4y5RWNDOhEltbPs1LymgpDqYlUcIdf4QnHcy2kEp8wCmVVKNVtyJuxUHHSyYU2VBLEyL511UsIJVjLZMoFKaio69bIVYCBQO/nwgJWBPAikBk2JEKoKUsxApw5PMZLGSblZauajhqqQLFPBJyOuMnU8ztaylYSsti0Xi0abut6o7I9PJIm7ErRaxcaYMKsrUndqjuQ4IGLEJhFjSwhKUGjahIZq7xliAzrS9wPiDet2TYgdxQrH457+ODAMI4pxd7
MiRqVpEzFdEVNDsZ7Hj+95ePsdopGua7lar9GbDYdjz8f3H6pS1QvjceCbr16z6gJX1y+5vrrisD3CQXi7e4dqQqMiGTQJ69UVQqEfDqzWLV+lF3z48MD+cOD6eo3ZQEzgxdnvjxzHQj86IgF3IUjtXgwxkEsNjh+zcBycKE7OytuPB16+vqNbt3jJtfO4bcl2oIxDJWxDIOcBG/dV/eeJkCppiQjFIkhLbJWSB5qwom3r4n9984rHscfcSG3HcXtg7I8MFrh/PHIYjNELm1WkWyeCGMEEl4B7JluhscRxXzNBJXu1LZXIMTv73YHDoSeEQEo1qQXKlIXa4KYcjj39aMSmqbT21DkaUkPqNmjaILEjxDCpeRUnImGyIJ1zLZjJR0NKtQFWrfMqQfECRWVqSqjXLVOPKiqoysnyNExWpz6pHsNEPlYVdC1qTY1SDKxgZJCIaQHJiNWuXbcyNT9otfoNOlmrJqJWhWeUyWZ1IhkrS1l/VqJRLgrc+r0wF4I6t5Wea7fzTRnO9kyL8nHBggULFixYsGDBPwYRMArj4ZHmzR3v+wMqBfUDiNF881fEm1fYakWwwuHjd/j+HZuv/pTQ3p7Wvqdmucmiw4HgzlAi4fbP2eSPDO/+Grn6Oeuf/xf03/9XEAq6+TngaDFMpTZaSiR4ANepzuTkqDqN+pO1rs8kkJwNMS8JlVrKXpBm8owzkkow+UxYcaKmJuXhU7vReW3OxTZO9w44u5MkaVh/8xe0eWB4+AGJSnv3Df3HvyFtfkYTN4z9jl7WtONDrXeoTdRPCNOpHp/z9y4fh7O9qsin1pr18WeWq6c3TNTsREbPe3lp4/qTn/PkOX326lqbiEMMQts2tKmZYmM4kZ91vpygEAPECH0WQpTTfYl6/Cvtd1KqPiH+ai1kTLE6rtP2K7np0z5WMmw+tj7VlicG7jQnwjNCbs6V9Om4z63qT0jK83Ce07/zmEWFlR/x8hHzhpHAGO7Ix3ti3BJSw+DGsTSs5IFSCkY83Vu6tED9qeN8Omdldq961qD6mfP2OU57Ne3UrCs9qUunN19clhf7+umnnT5vahau97mcgvDtdcPfvRsmCrjGpijV4ejjIfAn107bdbx/fKDvH4jta3KpilSfIlD8RP7Xc+WYjUMvpLjhuitc07M7Cn2OmAS++2Hg9a3wizdL0+6CBQsWLFjwx4Tfm3xUDViZSDyXSlyoIihWBqyMCE4KjqGYxkkBaJRsmBfMMu6Jak06db5RLUl0UkCagLiiUjAVvJxVWEEmssXAJU8LWsfzlEEwLc6CVmLPRHFTggo5TJl2ouThiJtShJoNlxpiiKARCQk0nmwazYxjfzwRp6nRmg0ZZ+96R6VQ6DF3zAsiVvMUJ2aw2Ihh4FXlOLqhKhMhE6rtJVIXvFafwzMqNn3mZNmidQEYtBKylScSzIySBzwUvJSa9VeEbr1CU8d1t6bpOh4+vGUcRoIom/UNw3HPb//uV6yvr1ltNrQpMZSehhGJAXfjcHC2+0xMA6/fvCTGyM3tCw77A03XkZpETC33jwcePu7Y3r8jqnN9fc2Lr16y3T9CyVxpy9vf/ZbdbuDq6poYhJubK9Zdw7Zsse0O95EYa47lYW9IrMrX4+EAI4QUGM3Z73fse+d+OyIqXHUR9YyKIwolVwVk27UIBacQVg2eC1dty3g8MB62k2I0ImEkNZGcHQktIXW4OKaJcSikrqNtN+yOA0lastSMEfVI03ao1uOk40jSjHgk9wfycUt/GGiaRDHl/8feny1Z0mTZmdi3t6qancGHGP4p58waUANQBAi0kOwmW4Qi5F2L8Hn4CHwPPgNveEWRFgoHEAAxNKaqrMqqHP4pBg/3M9igqpsXqmbHTkRUAfJ3XxCZtkQ83P0cO2aqamoeunXtvZaGliGfiaaoKbHPJMmIE5qtR8UxDh222dBuNogqyar3oRnD2CNa2jyMHekcC5GnRrtrEClVpqdzLB/Jmdj1iPc0my0StDwTNdTOpkzVjqLM8qUihbSzXAKenDPOT36uhsuRlCJJqsyPZbIzsAg1aC+yq4q6Ms9d9ad0oiXjuJKPU2ap5Uo+ukxMmaS+BoM1TzZLfWa1VB07QZzgvdbqRnBOUS+IL9WOKoqXKvOaBfXl+Z+uXQLkSV6nSgTJkpi8BEqGLDZb0lr5uGLFihUrVqxYseKjmKoDy2Z9hv6Ap2fb3NK9eUAl4cMzIs9wmy1ue0c8dHSvf8H++ae4H/1Z+awZpg5dMIMTaTP7Mtbqw+ReEr64h3e/oH/1lvbTf0h++2/IKG73Kagnk+gffoXIgZRPKLeYJkBxldj629a4k925cU3ATP29yHFO5Ghps1XJTJn5lDI2KhOhV8me8ubcR6oNy3I9/kHyXy3hcq4l9h3qHcIGaT37n/w3HL7+D3i9xTVgacD2LwptWWSFrpb7U0oiUInNqX+yuPbkPf8xiVJbHFOIINUpmfNykWvC80Oi8f3zfuirOI1Z/Xz1Scw5F1lZrSRgVXeRmvQp03RRcMrct4t0J5WQoiSI1n5SSVnLRfrWOyXmsudxYavlQhpKpnBV0727JF/b1VjI1VhM88Tm1tSxkctRl7vxt4xflUHt/Q3ZPWeTDnT6Am8D0gaSRSSfiCaY/4Q+bdC7z3D9K2yuzn1f4WYijz9Cxk9tyHb9ml0Tjst7uzz/bLsDRc2IC/E4xaVXhOtH5sq1lGu1QLKqRKYgllGXuNlvOXwZ8QipzsNs4MTxvD1wHB0xl3vm3DOSBZJmPBHS0geztien8kfBeZJl3nYOsS13beZ+e+LUlUTlb9+O7J/dsWLFihUrVqz43cH/CPJRqvJFkfvMucgXOq+4pFgsHmmiiqoSE1XqsHq2pVQIi5RBXK1iLMtdnSsIpVQNerAsqBnFMVIwrZr/lMUvUhaoDjBX36mvF92KQpiac3gNqBtRrzQxkIZNqS7ESnVg8KgTUgbDYxnGGKu9XM1+TLlKvmQkWPGYEIqMCeDUId5jWj0v4wAIziWcb1AfwAynHq3SszkZKSeyjXjny5ioQ9UjEgoRiUGKmJVqwhLEaiFwDZwqOSeomYU5DfNqN8XIKD0SPG2z4fmLT7FkjKkQSc1uw82zW5xvENcyDGf8dkPwt6jf8PbNG5TMZhtKXmOO+OCwDPvdDcfTia+/+gZxjtBscGp88dlneF8qGY/HE7FP3O73dF3PuRu5udmTLRG84LT4Go7DgHdCSpEmeHLMJIvcbm/wd3vU4HAciVGJSXl6Mk4x0Y+R/aZhiBHnBVIqZCzGfr8hx4RIoO+Fvj+z2zq2baDvelxoGcdEaFu8c4gKoRgJ4h3zXPVNkd/NORNCIMcRxhPNZl+8AzH6/jz7WeQ80m4aTsd3VL0ZJBund48cHzsckW0oBNrj04HdfsO+bdhvAzkZp5h5dzhw5xwvb/d4VbpxZIxGCErTKOfTiCWlHxLaNDS7lmYbClFPxziOJSiolbZBt3gvBO8RDWQ8mEOtBqTO4UPA+wbnC/lYCP8SZeRcicesJDUsl6rD4smaiwepWamehJl81Cpxe
vmq1YhMwfJUWVk8TVMyqJXCkyTqnEWalZxK5WJ5bgXnyt+QIr3qUHFFhkp1DrY/iFbfk48SKX87RC7ZphcJnwtMbP6oTZs9K1asWLFixYoVK1a8j1otJ5WEstzx7LYhaqB//DXebQmf/zHjt/8Bl19y+vbneBe4//EfY+JxVmgJMIIZWbQIUVYSr1QGTuojxcBDBRgDcv+HbMd3nN/8K9j/GD38ulSi3bxE8PR5D0OLhjtMI2JNrZSqTZ/IweVaWCaa6oL8Hik0VfjNxFhdTU/VcJe185JQXBIn18RSLRVcVGXK4nBDTcipQ+NITEY8fUOWQLO7xXuPJeP25R/RvfoP0N7htp+jeka8AxMcSpY4kyqSFwTvIg65YEE22eRXuSS/eI8Iuo4V/jZC8WOv/23Vj+9XXRaVpJo2qUpomkL4Lq6rKhha9jy84VTABsgJlWeL601jXFVspv5UglgEnEBQLUnl032tVXTzeabxq5P1433NTGSt5Y9XgJbj9dKO/xzMpCsEGzEcDT1JlDHcEtKRLEY6/hqePYdug89nzIxc+5Lnef7h/Vu2Y5JELXzlx94rexKXz82BJFPlbYm3K8E6X07mvRyFj47M+8T/8vwioKbAUJJ+Y8fNHr59O5T2WIl4TRKtZJ6HIw/jDecEd80OsQfIrxH3OSE7ss6cLtNjPF/fDLEqtZsNU3jXw7u84aYNfH7b8+7pyJtX8T/v/q1YsWLFihUrfivwncnHaUljuZQpCQmrxX9OlVgr8CQb4hdVRjrJrBop5ykMmVPvyuJnWjwzkwYijmyCFyU7V7KrLGOUKsoSzOVCAnqFFImpvK5ztlnN5TPDciLZBlImDSNxTMR+IA09/amDNBYiEEeMZfnpgqdpShVaEwKqQk4jEcPlNC+stcpVqnjEudpeq6RMxiwWzzor/gXON4UsoVQughHTUMlHX4K7WjkmCHghm1TpyTRLwRSvTAWKtyVJ8b5UwKU4gqPeh+JSmXGYZFwwlIbW3RYPDrHiJZiVNIC6wOF45Ob2lu2m5Xw6lkW8Njy+faQ/n+i7HnWeZ3d72t0OQzkdjkDx5zsdRjbbDZ9+8pKh64hPJ/ZtixqcjgdaH3h884iZ0Yqw9UrbKPvbhrRTshljnORhM0EyKUcEYbuFYAG1Dq+JtgmYCd2QUTFa77A4YhRZGe+KF2lOmRhHmm0g+AafjWiJY2+40Wg2GzRlDodDIZPV8E3JkjQMp4qlJzQZygswYxxirbrNtO2O2B9BPNvNHstGP5wZhmGWI75rA5qN1pe57kk4zbRNy/nc0TYOv2lRKRW3znmSCb5pQK0Qp+IYyIwp4YqIDk+PHVgmRcMLqCvBWkyGqZW54gPq2lLd6Vtc0xQp1qbF+UlSuBDgEzlnTOSjkrOiapg5XNLySJJm8rHIMpcgyOn0pbP8qTqdg0hEKzdrNZkh4xxoisQ0/Q0owZ9zjhwTSZScHSIJ1fJ3x3sphL76mdycNgyWwfeHVKHNfZzIxrSICy9bJ1PQLfPrl8rIFStWrFixYsWKFSveRyWnBMSU4+lE42HgFhs7mnZHuP2C/ss/Z/P633Dzk/8Wvf2k2pZAcmUVrkAUUKlUZCUurkiyaS0rGQkCCczv2H/+j7CHX3DSltD9hgy4/Wcgx1qaNdeXVcJoIg2XFWnXBORFF3PyYJzKod5jJ6a4fvlRrCY4Xmq5LmTUktAr+w3zWly40J6XUxcL+jES80B8/a8Zh8jNp3+KaiAZJSkxONrP/oinX/4/2T77GTr2SE6oKEny3MapCtBkkac4VeDNTSzk55J4fB8TEaSLirWJqBEuJFshEZdjZzNxdn2+5dhMI/Le+FWG13vHtirniFS7i1zUbZa3UaVY1XhJc1sv5WxShzgvrr+s9oQuZbJVKxxj3pcoTalKVFKTt6c9i2l8ZzL5mqy7EJWLm8x07QWx9l7fJ+uPEs9lSpg54izi6Irij2xIZrgsJN3hco+4W3zqgUSmQfOJVCVxp3/fH//r9tbnREr/84KYnm6p5OXxNqXQX3Jha9J9ISHzPJtmq8xFMsB0/Zn4RgrpLFB8QCdis7bDHKKR+3tDOsdOBv7mtceZx8iYCPdyxER4NeyIBo13PFPj6wzeb9i2jzx1d2j2JC17WSV92sq9QooykZT+m1AT5xXUcRoSrxJoFG73/yOcn1asWLFixYoV/8XhO//Pn1KpTMq5kHkpgfcGlUtAy4J9iBHJGe8bmrbFe4/ThbxEJWpKhdJl4TQHOpXEKFl3oRCUkorMal0YG9UXrlaWlXN4nOZKZhayrvgqpFKdlSKMSk4jZqUyse8P9KcTcTiTYyR4h3o3S7GGtqHZbGg2O3wonpTJcvF2NJuJDmOqsqqEiRZfSNVcJFctY3ksVWcpAhMRWdqqooiUZWuOiSwRN5GwKohTyvrVV9nKTEoRyRGT0mbJIHhSTqhz3Gz2xDxQqsgKoXi7vwczUuzo+zPj2CNZQKvU6fmJrjugbk/jSzbpdrNnf/MFh8OBX/3mKyyPeIG28YS2IafM2HX0Q0eOiYdzR0zGi7sbmsZzPD0R+w7RjPNSKmU7ePfqLS+fPeP+xS1vvv2aTdhx/+K2VLHGgTj09KmMddMom+2eYYR3j0f2W0dOmbvNnpQicYzkLKCpkG2mjElwrcPyiPOGeuFmt8G3iqmRLLFpN5z7jn7MbDYNoJg4smXiMHBzsyvE+jjMQYaqItqQckYMUhZCs2c4H4gkJLQkIjH35Cy0jdKd4TyOeKccT5F+TISt4JwiztN1kYe3D3jv8C6USlctQZ0GR46Z4+lAzInTOXN86lGEm5s9bdty7nvGfsQ58CGw23tEoR8GsIQLm/LlN2gIaCgkufMO7x2qLCoVXSUfKeMBJXg1JWdfycdEdpU8tDw/hyLTd8Ep8/NR5H+msdPLlkUNPHO+EJCiChIpHq0lSzLnTK7nyTnVYM+KpKvTUgHppkzXmqk8aUNNFZZwyc6sLZgqL5cZtTZL3lxnDU/B4pSHulKPK1asWLFixYoVKz6GKw5JlO74hk83gbN6+vMjt3c73n35b/Dtc/SzHzG8+ks2fkNud6jXq1WoLs85830TGcP8oooWNQ8FLBBzhPufcpNP9G//Enn37zFxBFXM+eLFR1FOzPX7pcqtnnlBgEx828WBrlZ2zf1dkGPzetsun53Wz1P7bdnLfDVmMlkd2EzzVJKtkBxCqSpN8UjsH/E//DPa/kAueqI4M0QzlhWIuM0djE+gu2rpkmbSavJM5BI6LBqy5IHkamzmAZoZ04u06OLj15iJ12Vnl0TbZbzL4caHl1tWoE7jm8k5MvYdYqmQnwKolnkxx2EedZGke5ImVDNTFWKRwZ1kXJeQxb8T1ym1uq9IsWZKvDbfYLnc5ktLr1NC52FjIsJnuvZSzSf1Dn2kCvRCulVSe0p2j5Emj5ASo9/V+akYEXB4yUg+g/sxbXpDTpmn/gm3z4CrbUnXI7C4H1O3psrFfHXHL7EkxuwfOj+nyyG1y32fq5mnu1HJSWwxavL+bLomZi+kpIGMnGMg2RbfNHTdiWj3
GJlgAxs9MbLhKTf4JKCZZ01PPJdYvx8chD2f7s58dW6Q5Kvdzwg2qSRVwtEWfxEW0s1JMg/xhudiPA0tK1asWLFixYrfHXxn8jFHI1VvRcuFZEipeKy5mWQo32OMpFRkT33TzJlpMQ70gyMjBMB7VyrM0EvF0qyFny/EgJQFHrlmt4mQzMg5kcyQLLM3IlaEWg0gC5mIpUSOmTyOpHEkjRFLtQrRKdkFJu1FVUfTtmx3e5p2A6pFRjUnrHrYFaYDshre+7I4dG4Rrejl3+zKwtwVOVWVkg1WIr1a3aiOsmxPZDJmQrSEUy3H4JgkRHLNovM+kKrkpdOS7QiZOETGNNC2DfvtHV3fkfOAM6mViRnVhJIhjZy6A6agTcvt/Q7nM0N/xjnPbrtlHEf684Bh7G+2HB4HsMjNbgcitJstu/0NKRtvXr3Be8/u5oZm09L3I6fDidtdQ9Ns6LuO7njEI3zv+y/ZbwJ9/8Sz+y1OtUjyRpl9+nwbCOw4Hk+MMeNcGb8UE23bIGKkwZCUiGlk66FXIeYyB0iRfqSYrUuRuI1dIpvQtp7j6YwobDdbxnEo8qsG3rcc+6HO8QHvN6Qk+KDEJLSbZ2jYoiHgJJNMwDeczk9s2h3t5o48DpzPR4xI0yptbHjz2HMaRsY0smsdbRvI2RjGSN9T5oEzTscOHzLiOjbJiAanY1e8SKVBJLFpPff3NyWTMkWEmlVqCd+ULEwNgXazY3NzhzY78G3xNZ2lXS6EuU5fqrNf40Q+FpIwo5pRhZwn4rBcz6YkAKCqqqIyVQNTJVEvxOAUF03nKJWVmZTKFsaU1a2qaNTi/SiQpFwbMmJ5JjhVHMpU9ai8H+qbCVP6udkksyo1WzdjapVcvBCTU6JDwWUDZgoe7YMAcMWKFStWrFixYsUKii2HCDmXZDkZj/jQkAZFU8/m/gc0d9/DTmd8s6d9/kOG3/wLmk//GLe/w2ZpxIXX3FX12+Ja0/tS6D2ZCTpQSYzS0n7yD5Dul/Tf/j+IT18RNBSVnSmhj5KgN3F8k93AdYWfzeTnRBO5WrE2VaQtl8/X3oZQtF9LCLyspIM8f2gmmXRRCTmt46v/46VJmXg+4jafY65FdUA0lJiv+vdZPMJwwLW3WDyRhiO6e4Gqr43ItQIxL9q6oIpsqgr9UE3FFscuSchLdRqL3yeZ0any0yg2KuVM02uXMeE6XptlSi/jtFSJIRvbtuHu7oYQfCEEJ1Z4+qLsTaiAt6KKFGdPzQsRthwLs1ySpGuUlJnkeWu8VEm0Sc2pjNN1qmZmIhGn8VyO4HuDSn6PXGW+zpV06fI+2bzzU6I5ESQdOe9+UO5tZibNXT4yyBaRiDHQ6XPEt5C+hGyY5rlKcxrn97/Pia5MxOnlnsM012sP67SdhuxKilaKxZBNH5o+P4/uYgSu5tDFJuRqPJafyFbtV5S7beb18QZM2LoTrfa8S3eA4rNhLpMyPNtFvj7sUOcwGxjGe77KW77Yd7w9Jfq0IYngRUj58sx+KE9c+5CLVPRohsg1mbtixYoVK1as+O3GdyYfh7EnxUROE/lo5KS4DHhfq+uE0AQ0OVKCPkZkGFDfoq5IrqYUYRzqGthjrkhElgVVyVaUeblq88Iu54ylCERUDamVg06owUjGciKnVCsNM5arJ10yqOSkSdGjF+8JbBFVvA/EMSICvgm0zQYVz9j3M9lnQBKHqOJ8gw/FHw9VxBy+vleCuIkIobSjiFGUfqkHldmnbiIxCjlTqkLLorNWkXFZYBoGrsiHIoJzAQeMYyJLBlFC8CQbGceOJjQ8u3tGP5wZhwM5nRmGEzmlKsWSaLZ3tNs9Y4zkPLBpd7gMiSInejoeC1eqwjYIzbMbLBXS1wyCCTFlHh/fYSTun90Aypd//WuGseP+/pah78lxoDufCcHTbD0uZVQczWZLjqVCb+w6hv7MpvGkHBljxrtAysLYD4gH75Vt22CinLqObsiF6CbgneKskLjFo7Tcd1HY7hosGed+IObMdr9BpeVwOHF3H2p1rpJG43A6sNtuSnVjKp6LT+czt/stu7sXZG1IWYtvqFMEh7Gh2QjjeCa0n3A6dMBA353ourH4Wvpyr5M5zn2Zm1gibDZIrdwjCzGB857Hd2dO50RoApaKlG5wwt2uIWxCkdRNGe+EOBZf1ZhGcja8D+xubvDtDTQBXMDQsm9g0yaBwVKqBWoV7kSIT8FVNa2vCQE5U38Hm5wSF4FykQK2Ukkpiuh03gsBmat8zEQ+ppQW3q/UDZtLJWWaA/KaYGAZFcq5a1Zv0dqphP57acLlGbpsqkwB3iQJNNGPE9J7Wb6Sy4YBIiXwfm/jZ8WKFStWrFixYsUKYCZbRAzLI/3hFb88vqP3HZDY3X5B150I2z1ON3gXkB//E7pf/kt8+gnt7edz6ViRN5287y6x4TU5UY69vBrLutUEL5AtETc/YtiPDMc/x92+KBWSUirdCi7kx0SoTK+/X7D30RQ8M3ReOFNjDGFKyhX9kDiSKv86r8IX6/fl9StzM1+5xDLKMPQ8+95P6d99C2R2+5dAwmclpYHUPxLuvoBvH7HxTN+d8fcA44J8ZY4RpsTMmTycKz7lwz6/RxZ9UAFpxixlW0nFi3zq9dkmkvFaXpWrBM/luFyRntXiIhsMJfu7+DFWoldFMVXUTUmmjigQxdBMrZidrHFsEeIs4sP6+7LlM80ohWx3tfIWyu9ax3JZ4WdmLCfJdeXecmivx+067rqe/YXor9qyVuSHo7TI+QGaFxey20Yciey2iOzReMbIpO5UElLJiHjMKoFv1/N16bM4XfcyXHbdLi3T3yoxu5zXJbk3X43l+5Kubq78XEq+Xj+ny/sll1lYfhLFLJItsm+3/M2bhk92D+C2fHt4Volkq4kDwv0mY25DYiAER4wDskuoeb487vhi3/E4dJy7lqhxiv4/IB3nOyRglnACGU/I40ePW7FixYoVK1b8duK7Vz7mkRQTMeY5jDCZiJ9S5YQ4JHgaX3b4TRXnPaZK4z0uBJz3CJBywsZCZLi68C3EwaXKb5JiXWaYFRawZiFaJTdcqY5CJumUQmqkFEk5YhmKjyKAINmhRYsEyyU5zEshc0QKEZKHHsux1H4piDogEBHiWPwlN22LSqFKVXzx2auaNTEVHwWpQYNRvPegeFCQK6mzWMzmKaNT6uK5Ep+i1dcOLbKTRvWwzKg6QvBkUhlL5/EIpMz5fADN7PY3uGC0bUDYk8yKx0A2wFU5kJ7D4cDjw2u684nb21s2G4cD4tDTpX6WzxWM7jwQx8TT2zeEEGg3W7xTunPHOCb2NztebG5RInkccSrsdw0pRoLfII3DO0dIPdGVCsY0ntnvileFpyGfznTnI3HsAKX1gpPiiRnNCGokKb6GPiiWI0+nAVXPdiN4FeJgnDUTNr4GV5HtzQ7vW/q+xwdP3/c0QTEbi8eBa0g5kcwxngdyUsYh0YeBNhtpHPAp4RtlGBLtbkNO4LyS00BMQyFUU2aMGcuUqldGxAQx4Xg8k2LDzd0ecw51St8PNEHYblq2ux0pG/0
4cLvds987Ykyc+kjXjXSHyDhGxDLD0KMocRxJKeNCoN3v2Ox2aAiYuiqpWoIMraSjkMsDmBL4Wi1KJR8reUh5pcxTpHiZqmcKIIvXRn2+pkxXVaQGMyIOcYVoLucugZkySa7m+fkuz3Bphkp9pidIhlg3KGqspVLbOFdUWs3uZZZPmmRskJqwkKeAfaJftWwSMAWI1dGxZA6UPtdNgFnO5qM7LitWrFixYsWKFStW1Li2yv3H1HF4fEceI+19QxAIu084Pb0l7J7jG0dUcNmx+/E/4fjrf4WmjvDyJ5Ar7ZOtFrBVksFgKkGcyagpYVZlXv8WcqisYYc3vyS++UvcxhFqkuGckDcRQtP5lp2R9/0dL+RaacalLm7xkemfqyrA6fipqlG4kCzXl1iQTxPpUvs8xR5qHh2PuLDl9sUz3vz6PxDP72g2LzHrSN0Dm5vPi0XI+ZHWK7hNsTQRx+wxOLdFF/29JCt+rLKrpgSj1bvxWiZ14cs3E2tTf+YhnGOi6fepv8vzLCshP4apkjLniMXE21eviWNJ0GTyfJTLWAOoZHR4S0PDKLcsZsrMv85Vp5U8nGjk6Ri7NB2ZyEuzqmCl1fLC5rGdP1lfX0zXeusXpN5ilJek8/vSq7NCVm3/NAvFFAl35ffxa3J4CXhcPjLKLS57cBHU47o3RDa47UtSjqhrMEvkj4z5dD+X1KcIiwiY+f253faxeWSLas1Jd+f9dAJjluO9ImiXhGj9u8A1QVuOjmQTkimPTye8nBnSJzyc3OxHCplc/1Z8cWe8fVKiKsELY4pgHrOIonx92PLZbU+rJ96cN+RaCKBlN2EeH5v2B5BCgKZM8o7W9R8O6IoVK1asWLHitxbfmXwUk1LxmDOJBOKKobQJycCZKwQcJbPOeUWdB/V45xckIhTLxlzlOUr1olW/OITyugqiJfNLxKGuLKVSKrl5lX3EpJAX3tUlZxZyniRNHOXAsmQuMYsr8jdayAUXAhpbqJWVw/nIuTtBHosZe5Wh9N4jRDAhq8Ms0tuI2QZLIzGO5VzOMeUClsovqRKUWshZEUQCJlMVWqmiStVHUsTjakBU2l099VIun3eCc75Kzhai1PuL511ZfHpQQ/PI8d1b+u5E0wY2bUCsVst5R0wd59M7+vMRYsRi5Gbb8uL+Gbu7T2h2Wx7fvWF4eEvsz5gY7aYpAe1G0J2Szh390BMHI+x2BOdofFNVZRNON7T7e/Z3d5yOj8TxTE6JFBOI4dviNXAcOkQNHzyIkrIVM3vXkHTAhoykhKSebCVrU0iIZPphJIQbhjhW4drMaMowRIZ+ZI/D3p1BwAdls1isb292uCaQcsIpIEpwZSLGlElm3NztaAFR43w6s9nvicNIloFoQmNluZ1iIR3T0BG80Fmm8Q1iwyxHmmNkt9lwf7/h+bM9u92W43lkGCLOOZpNC87RtIqIJ5+NrhvZ7TzqHE4jpAEMmm0JYkax8sxZwDXK9vaW/c2e0LRkKZIn1Ozn4suYAIdZYqr5I5dM2RI0pDkDV1RLxihSg2twonNAOYU5WoNfEWrlY64ViIrohXycdGimqseJCBRxl4DKc3luqHxkKpXOOUeK3GuxvJcp2JTL35cSYNdK6gVquFc3Scr1Wfwtmf1GrjZFYKqMLMFxvlizrFjx/wf468fMT/+P/5f/yc73i//Tf/c/2blWrFixYsWK30lM3INA6s80zZbnX3zGq2++wqkQtrf0r37DbfCINrhcqq6wzP6Hf0b/7b+Hr/+K5vOfYKnYd5R1a93kn3gKlcta1VWS0EDFYyTAOL/+Ffn1v6V99j1uf+9/w8PrXxaFk+qTtyTB5rVvxUXSVZYvXr9XOzwRozNXIvLeRxcJfiwIm7p+N3Il/q6z/GR5HrPad8MkkjG8b8nZuHn5Ken8jnM6I3nE7z+l796ST6+w1KPNHvIDajoTryXGea/PLF6T67ZMCYnluAuheE0eVuUWvVQ7XsM++Pl9P79LtdvfjukziqHO2O09P/rBC273bY2PtCZ1l2NLIupkdVMoM+9qzIahuHnOzLedaawvzbb6xkwk1yNn6VOzeUAn2dEl1CiVmYuRnFJCL/Te9YeEai8iXEj2Oj66YPmsksjJFG1ekPORZnxLEo9oizjDcUZtZOifcCrkONC0N+TUo3pDqnYt79+DCwl+mYNTH66I0fweCT+R9HZpt7Co1mQidJcz8XKtq+LR6RrTOS936PKZubJWGJPR9wOnY0PcaonEc5oTFNQUscjd3vNXr8Zi7+ICOXazF2yRSza+flKe32z53m3PV08N4MnEMm+MqlJUWqNmmChJMykrd9v3pXRXrFixYsWKFb/N+M7ko9eM+cmqMOBwTJr8OVXTcStSmWaGBocPLb4pi6tSKTktuBz4gDqds+omogKmBXIxa5iDA/VzFmfOESPW9VlGiIUc0IxzlWp0imZwUckpzUsyRUCnKqZyiiYn4jBwPjxhqXhDxrFDLRNV8U4geJwWcnGqigIhW/FMKJmHiuBx3lUSRupi2Qr5o75UUNasP5Ui+2iVvCn99oXopVQ1OhdAqoRsLv5+JqXi06mbgxrNesmMU8NixHvB+4DzhShOCdSMnAaGmhXpJND4LeKNBhAtJFDfHxnOjxBHRDK7/ZbgFRXj+O6ApUzXd6gk2uBRD2k4E4cOcb54boSAaxx5HHh49U2pUjMjOGW329C0DRZHzgcjm2N7+6x4cMZEdzoSuxH1yt3+lsfzE33XEwJ4D+euY+xHMLjd70pgpUpoCsllObJplE3b4BtFg6cJATE4H0c2m57NdkPMwtglzCK7fYupYM6wZDTNjmEYwYRNUIaYsBQJCin2NPt7nHiGoUNiT398YBweCc4492dS7EmxI/hATAaS2G2F7b4h+IAlGPoBR2a3dWQTmuAZc6bvenKOjDHSpR4sz9mt290GNUNFGFNGxSPq2d7sisfjfo8PTZH3teI3k3Oda5qBiJmSs6LZY9lj5sg5oak8SyaUOVurCVWYSb4pGJuqAKdQ6bI/oUAqz726xabGh4H1FJ/lnHHTs8UimLLyXBmuPuuOnEsmedkUsPJ8kDBzc67BJdgrz3kJcmubF/I/E+doJlNh9BxAmiz8WuZGf7gPsWLFihUrVqxYsWLFBK0Mi1PoDm9JceD+5ff58pd/yc4HTm//hpAzwQTvHSYlyE7icCmz++QfkN79Fd2X/5b2+/8AyxObMxGElYCUaQ1uJCvxpoqR08j5za8Zvv4XbPbPufnZf4s1O7IlUszozpd1eiWmJtJnliEFuPr5Qo7UIsX5+HooVbjnikhb/mw1dp5Jo0uXmMnL94iWeuqZYLpcsTai7xhPb0jjodhTbF9wfv1ztL1BxhMmHre7gadvkNNrNGxxlRmeJF9lImBrXHKRNV2SYBP1VF6fiMkSblzGySzVfk/9fX8MPiQbP0YyLisg/y4U0lDAlGEE1aaSjNQEbStWK1M/NaNecJsdceyZ5D8v9+FChE0qNcsq14lMnJJUayMvxCNVVLQO3xQrTuecq0LtQmJ+wMXWOTmPVb2GGpcKTO
NyjvdOI5JRjKRGkIagHX54xRhbTO9hHEgqbHbCeHzE6R0JD5ZIPiN58gG9EMtXTaz7QBMJOJPs89eFQr06yWXEkGqFsnxR5hi0Hls/k22qbrSJ6SxzzxbXmLId5vvhMMs0jeflszsOsSfndxxsQ5aSSJ4xXB55fg/HDmKCVhQNAWJmKi29pBoLbx6Nu43y/Wc937z1JISRRNGlKs9Tnn9O7ILje7eJxq3k44oVK1asWPG7hO9MPobG4b0Qx0QywZIwjolkpYrNiRCC4gRSzlhUcsrFt60tcpkapqikMATCZJqtdaFSK56qh9scDEjxJ7z4j5cqrGxFUtWq+XkJzAQnDld1MLI3ci5kSZ5CllwWXCqCJMOGHiQSgke3LWIjgxopjYXwEyFaabOqwwfFBYdrAs43RRK1koGF7POz9CRMWYGlFqv42A2F8MhlFEQV0OIfWX0js10WeiIe5wu5GFOHWSbGAVRwrsFMiDGV6kwRkExW6Ptzqd60RB6ktN0pw3BkHM+QIqoB71u8V2LsOD28Zeg6TAwfGkLTsmlbRGCMER8C+xtj6I+gDqcB733pzwhJAmMUDocnmgbi1rFrW3zr6LoRJ45wc4OZEiOkMTGOEecUp+VeiveETYPFnqHviUMiVF8SVSWlzDgk8pCIGcQlvG94OiVUAyIOr0IyIw4jrt1x6jPnrsPX6r5wysWvUZSu69ltAofHI/u7m7rE9nSnAfXKMIxY9owp4bwj5g6LJzg8stnd49pmKiwkjj0pnwrZno1m0xSZ26Tc39/iNBTeOkdiBB8CZpHg2lKrp5Egwuk8cD5H1Dvu7naoM4xME1oAYiwSq4bigtK0G8Jmy3Z3g282JdCkzP2iO1x9Wi3XuCVCViyNoI6cpJJ0xVNGVKu3K9SdiTnIFtFqzypXcdP1ZkV5FpdVkFNUVIInmz1mpmQDs+m4Keg2chac0xrma/XXmKLYPEeeU2w+CdfMdY9WtyoWyaRT8DZtHFziRmGuzJzPttwgmAR9/u6NgBUrVqxYsWLFihW/u1ARkmQUpXv8tlRSYcS+p7l/zvblT3h6+Gcc3/6a8OxzGg/kWIkjh0nGPfshm8NXDH/zr2m//6eYunLyxZpWKkmSEZwIMWfOb/6a8Zv/ATF49rP/GnfzGZamnN0RbGCyK5hoE5Vrac0PLlRxSTSczFGYianp9etP13/F5iRiWRyXzYoaUQ05JvJlijGEQsKUZfpUKVZ1W8wIbUNz84Kxa5C+g+GR7Se/T/fw5/R94vb5z8jqCOqJ0hLSW7JKsSjBlbhzJnGnKlBb9GDZ34kIutCgF3JQ5uM+lAe9Ps/SO3Cyn7gi+d47fjrHx+Rfy2slsdQ5w7lJnldBPHMVZo3NVALOFQUckbsSA6E14XKyobjc+isieb6p05t29fqSCCvWOMt7ysRAltuYJz2cRVwlH4+xlrK9y9moImQp8faSfRRzeH2kHZVBlRhHovuMsXvExW8Imz1bFzjZFuKefejoUjmPs3uKQcl1O6YK2SvidJnMOo/DND+v8cF9rAM2zWNmWv1y/+Vq2KySivWe5/fnQTnyMlJlXmcDCYG7zZ5vusytT6gceBwCkQbF84PnmZ9/nfEI0WCrHnKH5MR8s6zQ/2qJY6ekGPjei4HfvBN8MszcgszPOKBxI/ebRESRtPvgvq5YsWLFihUrfnvxncnHlHJduCrkkZxqhmBKWMxEKdKK6l3Rf0/G0HXEmKsMqqNVB20g+Frhp5csqCnL0HtfSI5KxOlso3jxmMtUv7q6ELucZVqlVQJFJjvskhGYJNfAp+jQlyTHXEjMBEYkpxGLuVQYWi7SFJaLd2OtuCrXYCZlcq4EqZUsVHWuZnzVVmUjW6rkY4Q0+SLUak9xJUBwF8KRWsSVMEgJ7xwCZXzMMFViHsk24jVgyRjGAR8cmkt7RYRsmaEfMYvkHMkxk1KPU8N75Xh8R44Rp4bTDLmnCYKgnLsjsSvtc+KIGQ4xVglaYYxCZCSELcdDR3c6E8fM48OBYRjZ37Xopy/YbG45d4nj05HNplRDKh1x6Ekx4pxnu99zOrzj9O4dThTfNGy2O/Y3d5wOR472BHi6fqQ7Dzh1NBsgJvqhSK9mU4ZkqMKpG3DOgylNNESVpmnY7TaIFrbpcD7y/JMXeDW6aIxdImzGeqwHIuIdYzbGIRHjwLPtM46HE2HjyakQ7Fv3AjNlTA7wnI49TpSYFTXouyMxJcYUMRJjzDS7MAdhjfecTwMJZSMN7XaDT5F0HmYy23lHMqPdbIg5MVpms9kVy0ZTQtPSbve4ZoM6X7Inc3lmVWzm62Ri6nICTXVejEixTC0VtEnIdVNBzc+ZyqiBFm9WnSWSZA7Iy3NcHo5lRux1dmz9NrWnPvc5Z1Qn2eApQJP6mqCmWNaaxTsFa2X8MuDs8pnpb4BIFWYVxSFTpHix0KBumNR+FMKy5mtK3eyo5OQy9XslH1esWLFixYoVK1b8rZjjU6M/vkW0IZ7fQR5p9nv68yPN3fe5efmC/PAXnPxzbu6/AJQsETXB8HDzYxrf0H35r2i/9/dx2pSEQnFF2hDDnMOScX71V/SPf43rR24+/RN2n/6UXB3pbEoaFMVSxLlS/TTLtopcyI6PSHxc5FfLt4kGLN+FxfL9umKNSxyg7xFZ0+cr81eTEGV+N8+VltO1cyX9ytimlPBNi6gjxQjWIdtbnDmef/9/yfHwJX03cnP7Bfn2K3JU0tCVuENHtDq9X3d3UZ23IIxmD0au+zb1b0lCXn6Xq/N9DBdJ2cv4XpOaHxJX5edJgSmjoqi0OG0RPCK+EMkC6krlI3mygimJqTEmRF2xzah+jZlCGgnM86KGTlf3dU7cNFmQkRdCtnx2US83HXaZDMVuJ1cCkZIUXvjMyQFxIkInYq+25v1hXDRQKsmNZBJ3DH4PuQN/TzSHD5lB9qRRCHKmHwXZf8qY3uKbT5Hzr8nmWXqXzhcxqr+hXYjqqX2zjcdyfN6fJwtq1iopON/r/B7Pf6Htp+P5KDm9mCNG6XsdD7NMFsg4ng49r8Yb7tueV6cGY8N96GnckXN0tM0t53Mse1riyp6TlW2uaJO8a0ZyIosAmeOYGd8KP35pfPVgHIdE+SOT2aji5UDrhewd377bcOuOrFixYsWKFSt+d/Ddyccxz5lZpQYrYZYQMj5IzXSqB08BTjSGrielIn+fq/Ro8C3eOUJoUFeIFdWaUSWV3FCdq450WsTnXBZ4uSy+ci7kZ44jlmP5slSCk7owtmxFc9GEaKnIiIgjATlF0jhWcrHIoFqOxY8ylaXwlA4pojgNs5eleo9TxTmPDw4fGlxoCvHo3MKzbqpizJBiIWtzKiRkLkFDziX80lr1aNMCsnCbszTrZLigoqirlZ9W7oYEwWXItf+CoC6AOeLQczg8Mg5nvFfapmEK5p7d3+AExu5MzkMJJlRo2y27DOMwzmNF33Fzs0XFM8YR7yPeNygJJwMxRt49nXg6dzx1I785DPzyIdL6b/ji5Q3Pb3eotByPI2KZTfC0TakYjV2Hy5nnL14QQkPf94xdT
+w7MKPd7DmfjpUcyqRhxAePNIFu6Ego/ZjJoowp0feJm5uGm+2W4Eomqxdh7EayJm5v9rStZxxHNs2WM5HmNuA2nmEYCaLsbvb048gwJFJObDZtkWHFIc5z7o6kbAwp0mxuyRI4DUbMivgWbfeMpxPjWLJPY8wcTmMhurWn3TS4tmXbNohEzqMRhwQ6YplSpZky51PPZrtF1JNTCUImv1XnPE49zWZH02zKPRe9hMY1CJNKtiupBFUmZa6ksVY7TsG+FjJvCvD8hbRU8czSQjKRjsL7GxITGXnxIK3nXgRl0y7FFKAXEnKqkpwqIa1UPeaydZLnrFybM04tL4MxFkkK5blRKfI/MknHTEGq8NGgLkP1+ZjaBjL9ztT8VTpmxYoVK1asWLFixd+GSWI00h0f2Drl+PAbRBMbJ5y++rdgwnl8zfblD+H0S7789i94+cP/OX63KdKONeHX7T6lcS39r/8l2y/+Ca5RjIThiTbQff0L3Pk1wzjS7r/P/qc/A9/U9XXR/SkkU4KciDkiIVQyspJBH5Bw1WPPlpVnsljOX/wlp4/N9NtCMeT9lL1yypLuJ1A8J6c3Lu59M5l5RYgKM/FoGVI6smmbosp0+gYXbnDtLSpboo7s7n5A577k6du/RrLgb74gvnqDoQRRsi32GBZt/5j/4vL7pW/Lvtr8+1W4s2j/LDdbY5y/C38nATy1kxKTTPc5BCWnOMt1TuMp82cnMtQIQRhytbRQkHSpmZuovvLTx6pabY7lhHIv0LLvYXYhDuc5Uw+fBuMSo3I1f67m2iKxddmOD+aaXXwGmd7XQpq5XF4fUibYQG7vsZTAjhz7iPEGd/MDLCmSRzQ0xIn0nEeLyxhS4m+bCNWZYPwI6XjV0PocTDK8aBkryv7QPOevCOwF2Vrnkc0DvnjNLvtFBflyjun5tAw58m3X8MnmxNdd4M3YYuOGP3gWeffwjq0TTrHsD4XQYLxjp08EbUjmsSiM9Xk0c4hlYhb+5lvjR59k3h0zb07woxfKzc7z668NbVu+eWgwS5ze53NXrFixYsWKFb/V+O7kY0zF39uqh2EsmV5BPeK0ECI6kYWubNZ7COoImy3tzZ6wvaXd3iJNS66ymOQMEUzLYm8s4gwAmArKVGVVV1k512rFVKoSUyTHsZJ5hYTMMslElOVgTkbOmVSlMpIVadicEpLLe1gmY5WQjKQ0lirIFEkxzZ6LJtWLURSRUq0lvsE3La7dEZoWcR51vlathVLB6UoFW/AN3geaVnEuIF4R54p0qy8iMFSpTEv1usQL0ZpLmyxXbz5LRb41TxmlJfMuxogW2pCce9omEEJgsynejSn2qCvX6c4nfJW6jRnaZss4Jk7HE84pPjjatqVpWoZ+4HR6oG1adts94xg5n49sd1ua7R0+PJBi5tgNHI89377ruL3ZcbPbs20SKZ3Y77dYznRPj5AGEKNpPE1T5HKPhyOHx0dyMra7ItmqrUOCcD4c8aFh6M6c+x7X3uAaIR47trsNh9E4Hgf2u1tudp5NE+i7HvWgLnN7e0Pfd4w5wiic+p6bveJrdaGoxzcK3hPN8fB4xPkN+5tbTI3RMsEFmvaGxJl+POIK2844dDw9PeBdZhNaMoppS9hQ/DFrAJAsMQ7Q+OIXub9x7G8DTaJkYTaKDZntviXnMteHcSTmkT4VidrgXfHQVMW3G5qmxTkt1YlVWrU8RLlmtFZpYor8anlLyGLkrJBLtTBJ56pErDw/5sABVGnWEniBqZY2MPm0yFVU+L4U0fth9rzZUAPSeYNBS+arTGSj1C+rJKJlLOe5EtrmCmpj1nSaf5y2MKbEidq2eQNlChxLkCZWEjdr4xZtXWQ92+X1FStWrFixYsWKFSuWmIiaNJ4ZuwPPX9yT4wlnyuazP2A4vKW5/5y7208Ju1tsHNGbd+Tjn/P6r1+xufsMH3aEdkMMO1y7Zff9P6X78v9D88nfB+c4PfwVjG8w2RDxPPvxH+H9M7IrycEsc+WMEnOTSlKfupngmC0PFv9MyX6ztOZEeqAz0aFXzFF5BS5r66I/8r4MqV2IFS4fFwHJNuXZziTjbGFSO6ETMaZKHnsSjvTq32FpYPPyjzBXvQGlJC+2+y8QvuHwy3/H9tM/wYLhAmBSEjnJ1SKC96rKLvdw+f4SVx6G75GFy4rGj8muXhIv5XKuSlRd6L4PP1eGJ19FKK5WZA59B1b3JkTI1dbFOQdWYjbREn9uGscwCJJlVoVyc8XdNdk6de293l/6NvOIl1hLK3loeWF7w8JbtMZ+M9E8s7R1ytWplWVB7i3Ho36kKPHKhZgDAhEEnBwZRkPVkf09VtWikt/je0HDHRIPZf7nQlDGKo/suNyXiVRc0p+XsZnGqrRqksG9eGB+pOpVShKwXPVnSXVS9oK4zLG6QVRj1wtFL7Ic9Q9hotzctIRBOPWRd2PDZ23Hq1MLqnz+6ZZ/+nPHxg383s2BFxvhNw+CU0dvnhyFVnokGLdaJHOLV6xjyEKKypfv4MUu8T07cDjveDwckfaWtw+QtST4j/k7b0GuWLFixYoVK/4LxHf+n7+LqSyLUsJryaBUFXzQWRrRNw2uaQne40ODuRZThxkMQ6R7euLw5oFkGfWB/c0du90N2+2OJgRclVoF5gVr1eTARIv8qgpiWhmCukyTIhdpWcmp+NjZIkCJVKnWSmCmVIhHpSyuzDlyToWMlFKBmM1hKZGyXggbK2VnKY6lDwqiDonlCq0IQxrKa+KIPoBzJcPNOZwWuVlEUO8ILoC6WfYEVVx9XZyirnxOlTLWvkVdg4ormY61DC6njOVYuKM8EseOcfDk1JNTT4snByGOA2P/SH8esBRpvEMYwCJdzjS+RbJxeHgkJSO02yLjqhHvHDmBd4727hmoL6QX4JoAaWTsBkQHbm8cTbhn03Z88/ZEdzryq18PkF/wgy8+IVssVYvaksdSzdqdeoiZxjeoU/a3O7IJOY4M5w4RxbuADw2n8xEfWvbe0Y8jYz+QhkgmYm6D84rzQp+Mp3dHdrsd3sGQIGclRU8yI6sn4+iyoH3PjdswDJGYIjFmOkmcxszLZ3e4tmEcemIyxDv6mArp7AIpe/rhTOoPeFeC9ePxVGZgs+V4PBHEcbffoSY8Hs5YNlJMpKScTmeeffKCzS7gTwPnccSrJ7iGrh8wU1LMJMt4a2iaDS44VBxoU0jshaeiEyppbVAJaJu9HmvWpVH8PQRyHIrUTm6wVGRVzBuqhqtEX7ZcfraAag1kK0pQvsxQvWwqXFADpJmUvA4gkTLHcy7er0iRthGxhWcIl4rnmjwwu4UsyE6zIt/j6okv2aRSNzUmr5hF8+YodsFf5upDM49sCZxtzrJdsWLFihUrVqxYseJDGJmuO2Bp4OWzHX/zzSNOjb4/0x+ecH7PUzY4vKsWIxnyLdJ2vHvzFc3uDnt05OEVFge2mz1Y5uHX/2e8f8Ht9/6E89m4/ewF7csfgWlRCbGAkZCJXKvkRUZJQ8RSRl1bCcapCjFdYonZC7IujefKqnJ8sfVYEorTMrqWNwmFsMlW
z7Wog5Ppn3mQLr9OpOQUGCyuMRFzIlISkUUYugd4+prd9/8h1j+RXemxaZ49B5NlgmtLPD084sNzJAcEIZOuyFEWZOD7kqtLvE84foxgnD77sc997BiZu20XCc15jGobFkTfsvJuIu9EDBMlW0kanZRk5jGUkkCuAmIRnzPRwnSGCwm4DOMut+46JXM6TkvbM5dxmojH6RyKMBXxXY2HCmQr5KnJfOkpkVrqDJ4adDVmcmmmze+VF4csODvRpQDtc1Q9uZRnYjYSlLIvIIL6O0L6JWOGrjtCEBzV9uYDTITf+7Hs5dhJYWdWQOWaoJ6J59qH+Tla/FtIZptjVirpnKdBlOvrykfGdmpLyoqq0QRFT46ezEF3PNsciG6PusCL5sBn9/DVuw1/c3Q841wkneNI1Bayh9TTK6gknIyojOx9pt0lboJivqELka08IuGWbIGb1ngaiq3Nefw4ObpixYoVK1as+O3EdyYfC/FQZD1Tyojz+GbLdr/HtzvUh7kKKqfIGCPp1JHGRMyZaIArPgRiQhoz5+NTqWACzLY0TVMWoPXLLxavtlxYCdUvkZpkaVfrZKAQiUUctvgwVgnLnHPJ1HTFm7JUako1LRfSfC4jJQUtVYVTFZjMFYlFS995AUu4mMk+4dSVqi0oHpFSCAxF5sy5OWvNZK7yFLNCgEZjyAmJF0nLUvU1hW6VPFJFtGZNqqJNkbEV1+BDoN0/I6dY5FJTTx47xv7Ew9tvODy+QSzibu/wviGZsNm0OC0ypJY6RI1MJKVMCAHLwjDEUt7aFHlZ1KE4dvsXjLFn58fiYcCBpoHNfs+zZzeknLm52XCz3SApYskhAkETfufxLoDsSHHAJCGupDJ6CYTdjrM7czyeSf1ACIG2bTkcz4jCOCbUBwYrpoWboMQhM5w7Ns0N6pRUJW4x5elwJmWDpGSX8V6JYyJ4z2CBPApN2HA+H8EiZopzgtmIc4ZzHnWOYThj0mDm0CCIGcE5UvCQM91p4HQcGHPi2PXsGo9PxjCOVRIHVDxeHV5Djd+qTG8aiTnSbgKbTUM/JMaYasWj4p0i4krVYwiIqw4dMsn2Vu+IbDBV7C4C11nSJoMRp+TKYtVQg0gzwTnDcib5jMuZ7AvxtyQec85lTCpBLh88ieVCs9xpxZQhOgWwZlMmZyVO5/cpmdpWHUGsVj7nVCqWa0VkZo7PFsHrNUM4EY5W/5jMFZ6ztI/NbaL+aVnG/tPmyMekkFasWLFixYoVK1asgFIdhEF/fmC/a8E7Hh8eufEeTQ+k7gGJz9jIFm0drr0jtC3Ob0j8AHFKfPdrTo9n7n/ypySMp1e/JIxHnt1v+PrrLxn7I3df/Ig0dPRf/juSQRMCNDs03EJocWFXEvpysWAY+jMpZbxv5qqtSfqxwOb4833CZD7ClrKatb+l00zVbBN/OFVtmTFLTU5r9CnevqzCrb5u8xH2PmtVF/kqQhxO2PZTQntDbwmvBmgRgamJgun8UFSG2juyGf3wjpu6dzBLh8p17y+lfh8ShX8b4fjxOTCRtwvabvG5K1/A+ZD8Xg7ngrRdjpqVitW5ilSFTduwv9uDaLGfkQxWkzilkLGlKrKkZnqXSclq7F7vU6W5pmGYbS3m8blESmbXvo4lo/Vy1DwPqDHXVUxV55UAlucZsOjy/OMyhJzuq9nlzeWYmIDHcOmEbn6vzOxcnsdshpJx8cggd2h+xOSO6G4Q3cDx1xjF13DynpyVt2r7ljY4l0GpYzY1fD58QRLa5fs0/y3nel8v58uTzFjdACuVnYvn4X0y/G8hHufjUcahL+kFMmJmdIPD646f3L7BTg88ayJfv/aIEz4L5TPeKTvXc7eL9KMwSMuYQc0Rs+cclW7seZYTObfYeWRgx6n3fPGJ8OXDQOM999vM/uWGXbOSjytWrFixYsXvEr4z+bjdNmVZpbta4bhFXcAs0g8dnB9RK0RbNiNmI1nxiUMVzSW7y3mHNA1F3kPRFBnPRyRncoy4EAjOFxnMEHCiTMbj2SbPN5hlX4wSVEmpjhT1KIIxQvWAdOoulWBmZMu1oqkQNoXnENQVb74sigN8zmAlczRVO3RyraCUIk9ZL1+y11LC+SpxSV3MG7i5t6UiUtUhrlYvzgHetNKuBE7NAMwksEy2WLL2LFYZWvBTgioCZyE6j5mRUiKnkZxHYhrZbjwiQoojt7fPub27ZxzOWOqx3NOGwLkbeDw80jQewYhDkZ0NwTOMG4IWItOmSric6M4ntpsN4j0xjgzHMzGD3+wgJHYhsOu3jMPIzd2eEDzeCeoT27ZFcvFv7PsRhxRS73wi9gNjHHDqubu/o90GXOMZh8jD67f0Q4cpDFE5npURKTK+SWCI7HdbzuPIu8MRQ9ii3ISGlDNP5x71G8SU7qnn7m6PN8gpM/YjlpTjeWDbtqgYN42DlHn37hHfeO6fvcSovoo5cji8A+fYbhpid4DU03ilaUuF6+HYYyhDErpuZIixzsNChjstZFt3HjEamnbHi+0Nj4cT4zhW4rEQzBNxqBpoNhsMV0h/5/HqS4WwTQ4SZZ5nK8+NVW/R4m8KJlaJ+BLolAAoFuI3ZVLOqKZSsZtKxanmhHcN3vsa/HhUCxFpppjpnDhwkVS6DrCnoOkDSdZKGl5HUOW4XJ9by5mUEymNpJwumx9yOfZaxmiq/rzEwnn++7HYCJniyNocs7zoR31ZLm1atnvFihUrVqxYsWLFiiXMDHWO7ukNf/onP+NvfvMKhoHm+XP2P/pfcLL/yO0XP2K3e0YcjuThke5wIllEpSSTShNobnb85b/+v7LxLZ/+5M8YTwNP6Y7v/cP/msM3PydGZf/5H5IQthgxdZB67PxIOg70scNsBARxN1jqcKo4t6uSjVqrsKYas0vi78ckRZeVju8n+s1hORfSbSYaa9bjxcPx+nzzzzOrxCXGnca0/msITiANmc9+/Ps8vf0NbdigNTHZpEiqno6v8N7TNM84xIE0HujPPTsEp4LkYqFy3YYpILm06f1qxSVm0nW+71NLy9fFA9LmwVnuX1x6ZtWrfqb26lsXAc/FBZHF6BsC2di2Ld//4Q8Rtct1ROejS/yZMTWyaxiHBnM18TRLJSE/FPCUStLO3VhUxs5Emi3bd7lXl9tX2cerbPLFGCyI7g/G/b0xXl7h6rc6X0qsF7DxiGvuyJXgdCLI2DGEgKYOxgYbD1gOZBtAGrAR0abMMoPJQ/H9ub7s80UuN1cSVwuhOpHxZggljp8JdezCxtpEnnJJCFjymx9JALj0++Ok3iRDm0SIMWPJUBwZ+Pxu4PM9iHn+zS9OHNnhciTV53PnM6FpOA/KQ9yxpUc4Em3HgMfEuNUT+3YAbemGHnM3vD2V+/Z0Uv7xHzr+v38xMMaWw6HjqTt9tJ0rVqxYsWLFit9OfGfy8eaT75GzMY4j49DRvX1LGjqclAWKU0V88RbIBikLORrRUq1Q9IgEIBC0JbQNWiVZxSDGARMjSCVKLBMA8R6nOpuYT1mKZQ07+U4o4sry0Kp3nXNWzmfF8yHZSKEeEmYRiyU7MqO
1aqoSPHXRKK5WEiKIi5gMZHGQKqGRYpHsz4aqEIdITsVfUrVUpanziE84F6EFX+UqrXrZpamiKwsOh/hSwSZ4sqVZVhUSOUZKlVcGEVL1xRBRmiYgofgOelV8UATF1JOSJ8WBGEeypSIV6z1e91hsGbsDT48PdOcjcYwcH4/kLIxjZBjOPH95z/PtM2hbJEcsjrx+e+Lx3SPBKZvWUO/wwZNNSWPEe8VvPEbi/vkNTdiiXonjgMXIrtnQOIdJJqURVaM7nUlDRLVBnOdmc0Oz2xLTyNANqG7IUehOieOxpxtG+qEEF6LG5/d7xgRP546YYBgN1wS8cySUQyd4v6HrB3ADdzc7nPecupFD73EKtzuPSiamAXHKdtNy6jvGmDkdBl5+fkcfBUgMXUcIgaYJZAMfAjk2RX41JWJMOC9sNoFndku2zCiZFFviUIhArXM55VzmuK8RoAnOKedzR06Gdw2o0myKL+Zm09C0W3IxXpjJ6IkMzzaHvJiV56ZMtYUDRSXIix+kkiwh4ks1pGQ0JtAEzuFdwqeE84HkCrk9+S2WZ/hailVVmTwvFuHnnCULNZBd6sRMP88xaL7Iq5rViuuxfhW/VzKoB6RcR66C+EsO6iQHJO8H//PmRnkhy0WmaIlZQmfeEFiJxxUrVqxYsWLFihUfh6DkHNF0IA6Rvk+4RmnahuOrX5D7R4Z3X+NtwKo7Is0NYhERoTs+0n39Da1k7vZbHr79kp//86+4ef4ZL374hzx983NEhe7hFxy+/Au2n/4AJKEaSuWfGohH/S0QMYuM+YjkE2IZ8WBoJafy1bp4rkRcJAXOFVq2+PkDaCE0ayXblSHDVD34sbH6SGXh3yV1ajWxUnKHhC3b559y+OavcdsbtN0jZpyPX9OEDa69ZYyRTMKTMWnmGGWqCOSqZbmSdVqSHz/Shut2X2dOLpMVp6Tn+QofDNnCWmIiCD/IxLyoOy29MidCN5NxVmLBZJmUQH31cVRHski9uZRtDo+XTOuUk4Ja2bdBEoiiOZHe6/MV5SaLWI6ayGmQrvo2Va9eKmSn5NhlpSsUlZllj/8uadqPY4olr4nB6O8IRGL3Fda8wEsD6YwRkXRT9lqcILkjx0wCms09o0Wwhol0/MhNm/v+sdeM9+/f9GZ9fTqf5Hn8ZlJ2Yv3/E31+/3n52LEqU9yrpGyoDny6z/zgPvDNk+frk/Di7hMGfYPFTJYXRJdwAuojrb5lkJ6dgzwaSkfDI8GETTCC33LoAqFJ3N/vCa7F6Zld25JF+PmvMn//97d882rky9dCN4S/tT8rVqxYsWLFit8+fHfPx8OBoR8Y+q7IRmJFytSKp0OppLK5kirFzDgkYjZQj3NGCIo0ZSGqOIJvyxIt51qFNPkiFjFSy5EYDau+h0K9ntRFrYCYAB7JGXWKSSSjYK4kceZIzBHLEUs9pFi8IpJVM/qJrClykiIOpw7nfPGZVIdLEaeueAXGgTwqGUeyHgFyjIznrgxU0TRBtJCXvm1pNtuaXWqYxVIxiieor5KXOldiFlmQRK4efSqVRBJFzcCV8Qkql0o3FZzzhFAkQc0M70qgYZYY+r5IVwLBO3KOdKcjWUHdhmZ7iwsNlgZOhyP9eaQ7RxDHqcvYw5H+/IbTqWcYMk9PJ56OTyQbeXG347NP73l2e8N+vyP1iTRGdu2eTCQ4kDzQnROb0BJz5vT0jg4Yur54Z7oAAmHbcLO/q5maGRdammzo6USOiewG9rcb+jgynI1DN+KC0aigKWIJUjYeTmdMHW7IiPc8nTpubjxbFQZzWAQ5lqrOPiXOXcfNtmXsEk2A7b5lGCLjIJzOJ3a7Lc4Fsnn6KISmYRg6hjGX+9huSNljBLrOCF4RLdWkFsfKfmWGcQQVQuPAJ/Y3DdtdizYO9aCacU1DSuCiIzSOMQ7EcSCjhG3A+4YmtDjnEdwsu2MwS4raJLWaIZmR8pxTO28GWA0eTQoBmXOtMDarAajDJIILJJ+JKeJ9xIJhlkAKCelcqESqVg/U+vxKSRhQlYvsMFyqEqUQ/mUfY6rGzPVvQdm4KPKxZexStvKVEjGN5FQ3B8xjWZD3wkCxhWyqWZEfmjZTuASNWUr145wXPGclTx6aVTp5llkq4ajq9fVWrFixYsWKFStWrABAEmKJh6+/RXbG9vZzzo9PvHzxGUc8N8++x/bZS9rnX+DIRBEa4Nz1PH71l3hRvviT/zXnYeTw9V/x/X/wp7T7ZwyHb+lH4+UXP0N9C2Se3vwl9nji9vf+IWql9kudlKTDuTpLSCTGx18UQsw1pUasZLLOVVoyVczBpdQNwGSuyrpYLFzkWQu/Naf9fXRIVC8VXMaHa/er4ftIlaFW8k0BSxG0VHOpOW5f/Ji+f0TTiOSeJuwQv6V7+hbGMx5f7F801fMEIM19mrzh50hhUQEpojMpeJGqvSaIJl5wWYl2SXJkVmtZfu6KLrOP1RwyV8fNRNXUvksmNliA9JokkXcPT7z87KbEQNQ9E5n8HktsFlMmm+IYiMMTuvmCJKC52Fi8T2iVK9llPkxEmVGqZtEpj3P+hEwVjtO8uqLl8uXE9bzCRd51SbF+DEuCdkq+pVrUFC4vY9IS/R4nB+heY26PkKC9QyUXK5TuDT7cVfWkTPZ7SAMiu6ubc7neTKVejc3lpUq2miyIa/vI1+L2TvF4nVsllr9c5z8l7bto5dXPqbY3m5G7jr/3gx2vHhK/elC8Zv7RH93zq286Pr1rOB0PJHuDb25Qha0bOWwcQ38kb54z4hnintv2QPANh5NC/8R+Jzx2G15/4zAGJDlMY/ErFeVXbzv+yR/dAh3/4dffeQtyxYoVK1asWPFfIL7z//yH05lxjKSYwMA7xbuAk1LhFlNkTCMikZRhGDNxTIzJEJcIjeJDWzb6ncMqeRbUzwskp67IPKqWn4sCTPF2mwOi6n04VUFSJE0nM3oTV3wDq7xFyoalBDlCGopsqk3Eqc1iGmoUfzylZIHWhXoWh1eHc4HoRqI6oigixf/QUiyk7NCRYypMpjpc8MgmloxGdWQcEQFTms0GwRMRMhHFiCYQF4tM1erLUPz9nG8KoeKFosCZSDmSU8Kykc4946kDKQRnGxSnIGJ4V4ZqjCN9F0nxTB6PxLHD4gD9QDx3dP2Z7jTQd4VMur2/4/mnLwsheuuwGHh8eCSeTrwZzrx5GumHyPbmBnU9h8cn2sbz4pOXaNsSwp4QAsEHnm9viTESx56cRyRHdrkIo6iU4C9nw3khpcR2u8OyEVNie7fHTMiPQv/mLd2553gaeDgZp/7EJ8833GyVnOA0ZKI5snnwnhgB1/I4JB77vnpECo03RBPjGHFWfC1LNSp0w4HdbkvQSIrG4enEJy9fkGKm649s9kLT3CHO8/r1A36MmI2oeEa3J44jeRzqs2J458gZvN+SkyFuwDul2GYaIQRUA/25x04DzWZb5qMo0YTHx0N5Prywb7ZYGshZyOKxIl5cg5xK2nGRILUaFU2BzRwW10zV8miVHzKQcyXvpQQQksocc86TUvEAbVJDHCPeF7/O5E
OVSXY4XCUfqXPxElBfZ7Be/5zzJeCaqjWnzOacMikmUix/f6Y5r1UiqkT15VkpNb/L5NGp15Pk00R4Xv6eLDNxL3Krdi0JVQnT/F3/gK5YsWLFihUrVqz4nYCgjP2JsXti9+nnvHO39DHDsx8yHA74u+eob3ASMIXx+I7XX/0VrRrPv/8zjIZvfvNv2TQ3fP9nfwaumHg02zv26cDrX/8PbHdfcPPp99m9+H06+TWP//H/zbPf+68wJxShnOrgN5FFyerPGe+aEmMCMwG0iLNn7795/b5ghha9nIk6mBm4C4W3kKyUKUlyeRK7Pg+X01zkS5eo9isCQ4poEobjt6TxQLZMu73j8M2f45xizYY8nHFNi2+3pDwQ3/0NbbilKKZMxKPMLRAgz4t/prK0OgYfu8eLltkyxXFJFL6PJSk1MZbX1Nz7RNKl6pNyrIJUK41g0LpHbt0bxlPP/+u//+/53/13/weaZl9IUyn3Z/KHFDKtc7x033I+P+DsFuv+Gt98DzNHcobkBfm1JB2xS8eWROHVQHBFmE6fq2HUJTZbEKlyNX4zrf3etS8XEcmXz0/VsgKTuYZTiFTbG7enbYUwfs1ogotnYlIkdwR6Gpfpzq/w4TNEPBZPSJiuVRWD5qpF5UIvX+7VdeummNLmzy7nc1EcmmJOqkrQ3BWW8/F6Ck3nsas4+nrgL9+ns0TA71tk9wl//vMHsKJYpWHDv/vlkZw8Jvfc64nXT4lEIIRAO/Sce8VvN8ThkY30HMd7xl5p5MzN/p5354CNbwmWSXpLck3Zv7CqgKbwz//NI3/y9275X/1py4oVK1asWLHidwffmXyMyUpWWK6SjapAwmlZmKVUiCJLmZSFlAvpoDItO+uCSwCR4nsoDtVicD1l4zmn5TVRVA0RNy+WJ+JgioEmX4eSMVcX2FTZUxNyKkQFGOoElwshmXNd6IqUNswVT0JGSaksBC3nefE7BUyGILXNXoxoubbFgWTEl0ovp5UgzZFssWbAFmIw5p6cMpoDOlWtOXDBId4jzuGclEpHChkzhQEpRsYcIUdEM5ZTkZ5RUA2EJuB1QwgAGdEi7Sqi3OzuShGeRdLYEfsjQ38gno80bYfvPCInYuzo+kiMkbevXhOCcH93R3Ozp/HKvgnc7d/x9esD7449X3/5lldOePlszycvPE+PB9R1tNsNJ1eq9R5evyWOPeqUm7s9TgDL5DSyaTeEZkM/juAbvBfGlAtpbA7nA6aKDwP73Z7uduTtIdGlnm+7zNvXHT/9bMdOITkjxxHnHIdTZhhiOXdKpAxxzLjRsFbwvvhbJODheGbrPF0QnHOMloqkrCrdsefu3nF6cyIZvPS3jJJQB0274d27t1gKKMXfIeN4dzizaYWhH0kx0gSHWSLXkETFI+pwrvpDOiH2mb4bSWYgpQKy7yNDLM9ZHHv68cwYB9p2i+AwhGSpSqvaLFWaUqkSTNWfM+eygaA1PtPqWYpQPFLFSsarpfKVR3IeCpmvDnEeFxqC35CaDU3TAC2qZVMjU2IzNSHlCFaqeVUKsci8wcGcXf1R74qJdJz7QulHSsSUiufjlIzwsZxpuWRwzxI/cvHLnJIKLkFw8amlKEYzBZqZyXeFq/hO5sj5PzcTdcWKFStWrFixYsXvErIIff+EZaO5+Yzu9ZFWoNk+wx4eaNkQ2h3d6S1vv/w53gc+/8Hvk9sNb37zc3Lf8dkP/5SmaQE/K+ioKKL3fPqTf8T54a95/Vf/grsv/pC7Fz/iyXu+/ot/yqd/8I9RH4o9RwYcWDZwRoxDiR4qa6RSbTwqySVayLAqzfNhBeJMLl2oIpmC8un9JZFYTrKIAfjIcdck5IWEsYn+Kb+LoeaqFcMBv2kI2z2WB5xT4vkNzz77KceHrzg/vmF//xkaGhCFzZam3TMc30FWzE8KKhd/+Ktu2nv+fFcDsGgSZX/gfVnYy7CV/Ysrsq2SSDIN88KGYllZCReu73LeTK7E8ost/Nn3BlI6cjoG8v6e/+3//r/m7nZflW1KtagTLQnQC0LRAtUCJ7FrvsGHyCFtOeTvMdHGM8lmixGayOnaMLu6oZOKjM5+iYJVArR0VqrqjdTE2EmNZzknSiXtRHJ+SEBf5saFH57u4zTf9hzJKZLE0fpIds+JhweG4ZFN62nCljgKvr3BxzPYgSEO9VZM17jMAJlGpT4rllnEsdNIXeZs3Z6a76sDrO5VXfbF3o8m5b3zfQwTaTkNz2X+fkDpi6IUS5W2bUnJUNfwk8/hV18lUprurfCYdmx4xTnfMaYGn5WuO/KZvWb0nk4+QVLm1j/it1veHkLZf/Iv0GyoPRHGA6M0jHqHaEZyIqL8u7848uMfbD/amxUrVqxYsWLFbye+M/no1BCvxUMiR1QKoTHmXIg1QM2IsSxmkgleA433ZAohMY4Dw9CDC+AC3reIU5oQquSpVmKgVk5pMegWnciEiYSrhEFdvIKQJ983K4SlZbAkRQLVFPD40OBcIVBTNgxH8acoXotA9UvI5Jgxi0X10YocZM6pEnmlQg8JmBitCc6BDUJOI1FK3BalSGgEy+SckZQQH7Fc/D2cb1AXSpUnRfaSWg041govEUG9L7xIbcvEVjoJOC8ohvfgVEr1WePZNg3Ohzl9VMVK28YeMUdob3BugwstsdkQhyOuK4QYCntRdvtbQrOl3e0IGhiHjpEe3zr2t57dufj79eMI6gmblmZ3g6jSnQZev/6avutp2w3PXtxxf3/LbtuWzNKc8a6lbfeMY088n7GcGPszoh7ftDTb2xKrxIGh7wkBdrctXdew2bakfCa7wJdPI9md+eK2BJjiN5yHSJ8NCYFRheP5zLZpuNsVLxRnmSZHIIEJEkd2TUOKmdPQ0SdDjg4RCJp5ejqScsa7wKsvvyUERyKz2+9gjJBaxnGg7w6YbjmcB94deoauBxyWOvYbYde0Za4qiPOMydA+Il7peyNGz7tzj7pETopTT8qZMSb2acupG5DDiY151AVStiq/K7PsakwlqzKlXCRS52e4EN2ik8RqzcGcycpynWHMxJTISTAJoB5tWnw2EIfG6Zl0TGSdVf8UsYxlB86hVuSCRTKi5fkG5upCqxJOF8LxIrt6IR/L35mUY/ma/B5ZJDZIkRUynTITKvE4R2fTGNRAUgSx8n2SW11KI00SyJedg0VWdN2T+WgK9IoVK1asWLFixYrfeWge6I8PbBtH2H1K94t/QeOMp4evSecTXX/k8Ff/irC75bMf/RHtds833/yG86/+HZ989jM2P/wENSOp4ipBdV0tpWyf/ZTNbcfbr/6cp7c7nn3vZ/gft3z783/KZ7/3j9DNHp2SWTWWdX0eSRjiA0zikKbl/DpxRDVZ2C7JfHK1Jp4YsTy/JGILVSIoVM20fr4mK4EFOTOzawuicTqukDilmqoswK3Ka+Y+4tuXRT7WOcbhxGb7CWM8s/38D9jlRHd8pLn7BDHFvf5z2H6fEI+IA8Ehli4EIouKNSv7DzUvee63sCAqK194nUh5oamWuJCSl9fm8azEY7mv15+9kj6dyL5ck
qwF4eEc+b//osHnFzxvRu54x9tvX/Hyk5+SMJwT8mT5OIU2ONDEkO+Q3XOInifuGeI9JqlSpZdqvInMe78dF2nWep/qvZxNLuTy3pzkTZXMnSRZK+toc+XsgsilEooy8ZDvkZHXHPBl3E1wCGduyGFL0AwMHGJLuw3k4YmoO1LuaTa3PA7g2u+xGV9j4ROs+4oslYCs11KhyhYz9/UyNy5VjssZ/jHK9DJpFgf/LVjOJLVC4yLvvzP9nq+G5nKCTDaI0VCXQTw5R/7gB5/wf/tnD6jTstdE+Utw1heE+IoxPStyxmIMcsugAfLA7bYD7nh7KH6SIkY2ITsBbkkZnHS83LxjTA3vuoYgSiLxi191f3eHV6xYsWLFihW/VfjO5GPbKDlNX8XnMI7GYBGzhJNSvVi82TJjKgtGJyDqyDHRxRM5wRiNbODUF7lGVzwWCwFJJSlK1pvUKskSgCxWalVeUqvYf6mrqhVOeARBXcZZrotFqRVRGXVWpCozTH6LloxsGbUi3ZgQcirekDL5TDpXq6MKuYmNaIAogkkkKaSukJcSU5GPRXFaSEZtGtT5Us2ZYex7jAHnHMEF1LvKlYw4UYIrkqXiq9eEGbhaISpSKka1hEdt44q8J5lsZfzH2APgpMjDmlWvTgciGR8E5zd4L7DZk3f3bJo9N7sdY+4JYYfoBnGBFEdS6tluPZId29bxvc/vcNLS95F2d8P+/p4YRw6PT6Qo9KPw+t1IH880r95xswt88elz7m4adhvHbrslpwbvGkK7wQfFsiuyo8GTLRDjgKUizZrSiBBw7Z79Tc/L+z1PrwdUHF8/dZyGns/um5LR6wOpT8QhkRDIgVYC2zzyfOPYe1cqFVWRXDIR1fegLedB6MceXMOQIq7xvP36FZ989oLgjPPhDX6/5/B0JJ47hjFy7ouMydNTz5g6Tl3POPYl2zRHso1oaJEh4rMxUmROTSCmjDByPnccjpE354GuG9hvb3DNhuNgjH3Pza7nZtcSrMfT4Z0SnUO1KdV9lCpDnwqRF1MkpXGORkSkzt+qDkwJpsjFp9WqvGmMkTEmUgKzETTgcsJyqgb26SJLKrlm6RpkB5ZxzmE2ya/W78YsozwnF8AcuObZ67H8XEjHVKseI5YSlsbi3ZrzLAslUqorVXRiBWvCgNUg9iL/VGSMpzaUbODiUVl8KutfCep+y/z7JO46xXtTgL1ixYoVK1asWLFixfsQ53l885pN22LtLfH4yH4TOB7ecn73FQxv+PSHv8/zn/x9jodHvvnlP+f2/hNe/sE/BgGHkV3GJatr/IvnYVnLRoyiDvPJD/8B3fEb3vz8X7L99Id88rN/yOuf/wte/PQfofsiv5mTImKMhwf2IeCsB60+jrUKryT1yawwdCE7LgThhXvLCDrH6pe8vMv6WC4HL04412WVviwW2RcvxErn1DX3lEsoWFWQMYb+NY1rsZw4PXzJbn8PZNrbz0ucEKC9cYzvvqa5eQkE3OYT5OE3ZZEviUTGcVn/X7V+lg59vzZNptZ+QMzNxywI2ImPtUrWLKskzQoxNxNdiwrI6/Nc4MQVythyIRJzJqOc2fPSv6UJDU4cJq6oLE3JnzUmMquCpH7D8dhy6594kjtyymRRnEw9qwRT7XHGrkKfjxGjs+3FghCTmT2c7HEW5B2XMZbpPPO9voz49MtMgk/KUDOlN2eGTgOHoHgz9hp5GFtazozqkc1n5OER52/YW8chRgh7xsEwIk59FW+diNcl3VhjVi4E6dRducyI67lQv+f3Xij9/ci8u4zq/FOe1L+MyzhBnQM2c5qXtjAnemctFivBCSKJ+60yJsd5jBjpcscyoEr2z3jGL8khk8dETILmM7c745huGcY63gqYq38zDDMhiWG2510nvLgzPr0b+OrtmdOwJeklGXrFihUrVqxY8duP70w+BhGypPKlQnKQcvVjS8W10amQk0EGs0xMIxnDhwbVgIiSUmIceugCzp9w3i+Wc1alGrXIykhZ3EhNLxMKWVAIBYcVdzrmVddEWpqREJx6cNOiTIvMSnG2Q9Dix0ghKiYN/rJ6q/5xzpcM0LyonKoVlinm4rtnhhqoC1g2wkYI3uNDS2gaEsLZjL7r8HGkbQOb7RZ1Qtvc4NsNvm1omhYNDlWHdwEfyngJQjQrcjm5EKQ5jViKZXxTxjthGIze+kIOCfhKTCZLjLGMq4mw2WxpNy2WIMWRnAZyPqGuEErNdl8ILEl43+DcdN8c+33AUqJpPM1uy/k88PrbR1LOBO84PT3y+O4dinI6J56OZ8ahozt1PCVDXzzjlb1Fxi0vfvoDbu6eg/NYTohAPyacV0xLRWWOHXk4lzZaJEkm7LfciHI4DuyaI5/ulEEc356NN6eMkHh5v8EbOB84j5EUe/YCL9Txxd0WZ4kgrlbXCk1o5zkAidYb2RmJnieLiBleGoZzz/lwZLvdMAwjN3fF93HMcHh44uFwYiDTxYFu6EsFXVZyzjifcRbRXWDrM75xNJuGzaYhx0IUiiin88DT04mb3Y6bNuBc5hyEjkAIDaFtCW2L8x5U8RJASmCZa4BnkqqksFwFrqXSrwbONerJ1E0NydM7JQxJmRxTybRMGSyhZLITsirRFe9RFSMKqGQSDaLMz5Kqopox87MHRGmrfbBBceXxWMnH4jFZqx2rpLOlhGTD6cXfUeumx5RZmytRXzxLCgq5eCFhRZfyq6XXWv8giEyyQZfAcx7D62avWLFixYoVK1asWHEFYUCGb9lsIymOxCGibaZVaH/w93jxvZ/gdi/49S/+FZvQ8MXv/ynObYAEEkg1IdZqslzZ4K+ETi1jE4Nc39/efErz+y95+vYvefvwFfc//mPe/ebfcvfZz9jffUJ2MCAkg6gNhB2XdbHNa2lgUYFnc9XcRLVMyXk6VadNkcPEpkz9F3mfS5sGpl7kvdcXUqXTWruolFR+T6ePZlAhjRHd3/Luy38LOaH7l7jQYhJBFcuFqNPbL+iffoV5RcMO1TMqZf8g1cTNZWWdiJCsVIROJOyFblx0Y67846OVbJejlx1ejs907uquKSyOuxBM812pH02Sq11GoQQFJUmmH1pSKInJooX8U4GYJ2JQ6j0u+yvRwBM5DS0plHhOslRCaoqZalxFUV2aeOLJydMWhDHzbJhkfK/HLmM1Cbwo2yyHRubq18scm4pqJ9/H96fLcj5eV9EKjcuMeWTnPX0UNvnMKNsSNZqh/g7L70gWi31QPoAEJHd4iUQbAHeJoW1KZr3q1DVRXDs9HTcTlhMZKPWYqYJ3JlyXn1ne/4muLOcVy/Xhm8bD5qsIZT9qHtLlzyiGw6lAFn7/R1v+/S8ey2erh2UyI4uw4x2fPFMeuh8Snv6mzM34Ndu7lxz6PUOa1M4c2SZLpEV/rM4fM16983wrjo1r+f6niZrxv2LFihUrVqz4HcF3Jh/7cSCNEcuGV8E7j3cQXCQ7rQQBRIRoqcogApTKqE3r8e0GdQ3qQ5Ho9A6sVFTFcSCnWDP0Cmky+T8GX8gzJ4rW6EMqKzAveqWSiFa9FaTK
mKqioSnedeaRVK4HqchYWEYslUzAnCAnilKl1cotRbwDV+RbxZVMMnUJS44QWpLB7sWnhKbBqasEYfGScCHQtFvatsUHh/ct6n2RysyTRCaMScjDSExHLOdFJRfkUrtXxthKJmKR0yzjtdm0bHcbgnc4BafgpfYt18WpGTklHh+eKilUvoJvyDkzjCfImZiFsLths79D1Zc2DgNBMiKJceiISZBecCSe39/iP7ktErkW+P73f8bpdGY49HRdx6tXj3z9Svny8cSbwxEnLV98smUcOt68/gYfmjmwyKaEzQ51W8ZuwFJfAhXf0vpAYy3j0JGz8Nkn92w2gV9/9Zbw1YmtDjx2I33sePNkePFsvXIvyjYYn+wD95tAa4Y4UEk4pFbbJryCkBmHsfifUGRUgiu+mWaJeDhy6jrGU8+QM5ubHTfP73n9+i05GY1kfOwJNuJSj5hnSEYfjZQ9jwZOImFnNH6D14BJwG08zSbg95lRA4SA14D3jnbj+V54RjLY7Fs22w3OFx9N9b7IndSEWasBXxIlUXKSRVyRQ4Wa3VvmpSKY5kLQSSZVQm6K/CepY3IqEaBRyPs4YFERp0W+VQVz1dcxAzGXeaNFFki1+JdmK7LMMmVILysN52zciXjMCwIykXIk5pFo1T91CrlEECv+rmVzwsi1f6VfpR8Zm8STLkkMSE1wkAvZyOQ7MoVy1yihvNXPfpjdumLFihUrVqxYsWKFUhR02t2ndGPC2Yj3G8LdC1J0vHn1JZvwLS+/9xM2u2dF6pRJ2SPNxIfIpcJr+r3+BFI8IMEwU9Qpz774Q8b+yJvf/Hs22+c8vfobZOzYfvJ9vCn9uSOPCfx+JnAQxdf4ccwlEXBKVHx/tbv0JJzW1mIfq/m61KbZR84zVcBdjl1Uss1yl1YSAyd/RMBSjReGI0+Hb7n9/PfIcST4TfFPnInTXPYCBLTZkdMAKdKGHSbCJDY5jeny+nV063uLc14YovmY6bhsVkpHJ6/MCz22+JDUe2XLXysxtbzg5VKTuMzFV/NCbUm1vABFHdztGhj7+fM6JaHWRMuiRqUwJDQJ2yYwDIlRDct138Qu1hUlJJr6bXNXHCWazIuG2sQ25mmO1BGQqWrP5m/L2TBXMwpYvlQcFu9Rqpfje+S2WJXfvZ5DKiVJW9SzISOpJ5swyA2OsXwOIWE07gZnD4jeIjkSeEOKLZbPU8suUrfvEXu1WPhCSi+504konYnIai8y3zXq6F2Ix3IHyz3O82DW+5xzIabrHJjbVGfrosj2Q07UDK+ZVk+04ZFdC997ueef/cdXZLFyB83hLPL93ZHot3z55IkGe3eHk9doaDmeN6TJv3Nq/+JZmPYZpkltTFYsymDCb14ZN/u5xHnFihUrVqxY8TuA70w+jlFIY0ZSRL2goSnEjXrEGZodKSZMExKFlLQurZSMJ5vDaUtotmjT0Gz3bNoN3nvUTYQjQCEgEMhZ8b4sY7xzZLXl6nxONJyIyLIOUszyLJPqsoFlslfIkOrSKedYF3EZy2PJyLJITiM5V4LEPC6EUrkZ2lIJGAKirhAXqhfZDStys0WOtmccBs7njuPDE93pW+LQk1OPOMd2v6PZNKgUSVTvQpH5ECVLIcR8aPDOI1IkQafFZdCAc4VKEopMq1dXK8CsyLGaMcaROA6kONYFbqkiEzJDTmDgJZBDCwIxJcwZodnSOE/O1Q8TIWw8KQ6knME3tHc3SHPAwgPNOOK9EkLxmEwpEfsztAkTeP5yXz7nEvtNy/c/f8Ht/S2nc2S3C4VoDY6m2ZBjwrnMcH5bslqzsr97RttsUfF4p4yxoz0fabbvaPYnXNjh9RW78Mi7k+PtWXg8Z8bxzCcvtny627LD2HpPcB4nQKoZvtUP1FUCzizhSHOwaGlgox5jxHmhTxnTiGNEUsQNQn4StrGnbQIpDuAikcTdphinjNHogueEksTjVdjvA3cv7tjd7VAXytSjkO13Nzeo8/T9QBMawqYhJAoJ7iBbIo0D5gOGK56KzhVSLU/PgyGSL3OnGoaWeaQfBNlS0qqhBhaTrK/pJHkM6gSnhbQVi2AjmMcskpMj6/QcZHDluXIssrQtYzhUS6alYgvJmov8kE3PUa7eqyktfs5lQyRT/B25ru7MlbTPdZPCMQV+l42R0r/q36HXn59iPcPmjYC5jfUPjQBq8mEK7ooVK1asWLFixYoVwBjPpPGE336fd4e3qCRufvDHHL/8OeY3fPbT/xn3n/2gJM7mXKsLpzC3EjbX+qdcs1/v0XlaPOyTKK7d88XP/iuOD7+iP3zDw5c/x/LI5rOfklL1DKyxART7Dic1MdHyItYuZM1FNnRBdnBdMfh+e0poLpU8WmJJ5Fw+tuT3lm8tmlLIRys7C6k/cfvFH9A8/xw7HC8HzxcLeMv0p69p3A4LWyydGIdMO4lgSrGL0AUJmRfk4XzhRdemirWJkyuE1pIknDg6+0issJRqtZm0s/pauSUXynKRD1lf/jD4mOi0m5vAH/3BFwynA6oeMTefw0nd/7CSU+oFnEvcbVqOp55zrG23qY3THotd+rfoWa6E4NR/mdpR58iSUGQiD+e+LedznntRhqDK0tpEqNdzLwa3+B/a9XhLpbxq5SNmON5xtM8xGpyMM+lpknDZ8HoG82AHxO3IaYc46A/HJbP4t6K0dVn5ydXPNSieJVzVStKszbRhvXsTz5hlnjdTX0pV52JG2WXUZLpPlaB9vw2FhDWGCG/fnvny19/w7OYZX78dMUtoKlH/Vk58+nLkbbfj8cnDLAUsKIl0PiHbtHi0Qr1G5O8apMmSJSGIyxxOa+C8YsWKFStW/C7hO5OPXhO4XOQOBSCTki0qgbQaPDrUlyq9OGZiNOI4khEi0OZEa3vazQZVxYWAm6qgKhFgmZkkeb/CKE8yFFPWmShOqucEJQPUZFr0GyahHljIGCUyCYdYLtIsTgUXijSjU09oGnyzwTUtzm9rsJOJY6nQHIeePA70w5k8jqRhYBxOjP1AjpEh94znnv54ou870pgwy7RtYHd3S2Qg947gG6xpIWQSsfg5eo/54t+QrPo7qsxytGaQcgIxvAPvHN4LwZWMRiwzxpFx7EhxLFKUNQMxWyE6vVRykipvS/HAVA0IELPQbreoKmN3JnYd0SJmhldlPB05vXsFeWB/c4OGwBgzZo7Yn3BYqcL0AckD8Tay3Ta0jaNpwXlhu9kVkvR8hpNwiEdu7u+5ef4CH9pCGJsHEdIQMSCrEDY7/CawbW4J95nty4Tbf83u+QO/+epbwsMjbdvx7unA6TwwbloklGrUmAZiMiDhRXEi5CykSkoWeVUlpjT7nlhKiBmaYaOO4JScI02O+DSg54zagA198SbMCYfRqCPmjPNaqjDDhuM4sHXGrgk02wbXbBHx5BjJeUTEod6z3e7Ybfel8s/KJoEZOLEyv+SMBE+oVbS+aRH15AwpZdCMc2GuHGTKpsYusqtT2DhH9YqoLyRjZn4GrQaGzrnSFy94B07KEy3mkByxVM6Za6Co2YossyttV1dkkgtBuPSEvAS
8+u+Mmja9568oiiCqlw/fwJCy+cLQKpFFQDd06Fd86Ve2eJfix8+Lilz8J2dcnrp+esVok/eX/FLz34E57+4HfY9g954/XvMn74hOVCObm74Nll4vlNZr2Dm5zY9dbTWQDvlC4IRZQgcNQqjVN2MdIrjLHOxwreGwnr1MjfTZ/YjnU2LIrmwk6E40a42Ra+/zhx746yjoXf/WHPf/SrLWfHjmV3n5wvWSdh2QqyPCIc3eHZ5hkxwqPP1vRj5v6ZJ0pHL8f4JtKEgs+ZXXYs25HNurCLhRgjMWZSLlNTz8Tt7e9JwDT/vjwFayWV92T/50SsPy2p5/bvbv/bzqFcn+zlANc9kXn4AaYpL4w584Mf/Jgub3l4umYYLV546AOdeh7dKG4Q/vCDpzSq+EZZhkC3DPg80HhHHCNHR2e8+uAVjo8aXNvQNC3f++Pvcnn5p6xXI1frkW0vjMWzucmIK/zKLzz8sx+SM2bMmDFjxow/t/iZyceb9Qbfj0glX9QpwSnBu0p02M36tnE0XcdicUTJhTTuGLeDZfZLJqsg+445I24MRniorf4pORFCoF0sicNA2vYUCs43+4WbqqtzW0HF1W5Id2v4moY7tfrFGnczxZ+oOpomMObIGCOlgJvckcAUb0p9PUrZu+niWGV6MsXbVAWaCpTqhNx3SGolOm3IdF5ZHh9DgcurS+IwIlQ3pbOuxkx10dnbq5udwSlnd86Aws3VJTlmtA6R1X5WY3PsOJhjNZFi2pOBTp29t5LplktElPVqZdtb3Xm13c5iT3MmxTjtunq4JsmukTGoxVySbw1Wyfa3iOCDo5RMjFPhulT9bwQpNF2DOEeOsbrzrANCnTlOqdGkRa0XMee87yTcdyq6UlWmtq/VWwcipZBKxDeNRbtI7fITpUip7lmFkq2XxFs3yfQ4733t5KjnVT3u09gvCEWFLIcOwULa7x97t9TBNtntB+cpmPtPnNvvU+tDNJdxyoVelNB0OHXEbLHE3nkjvyrJ1zUeahSu954QLBbZrklzPe4dljUWtlQ15OTNBd1HLU2xRwWLk1HNlVDORvDeGnNEjIgr2CBmDlAjhEvJVQ1aSMWGtZwzMUViMvdrjANx6En9QOwHYkrWHYFDNeBcIcUdMo6EZiS4gPqJdPSENpGzEauiNlxN5Pd0/CbXoj2muhjrd+sdKQe3o5seX7tqp3PRVSJedP++Z8yYMWPGjBkzZsz4PGK0FBVokCKAzVG5zr82Lpgg12um0Q2kkZvcMpazOjc4vAo+35BQVumYrvQsZcWWxb77sVBJxjq/WjehzYZeCurhrBWeZGjbjqANqo7gHeqoiSKOcayVCMmqQkTMwXdxseVrX3+dJ0+u+Q/+g7/CxfWGL715zjtvv8Wnn11ycn6HH3/4hKN37/PGayfknLl4vuLFxYYxYiTmOJCHiC+FXCNLvVPGLFYLQdl3IU5yVqXg1eF1oGlaQhByKWxioow2Py0bx3pQrneJ9qxwuVJEMt/+ITxZF573SigtNzvhl96Ck6URYSqZMdqMGet7leomtS2ortDqBp16K020KvS1wiQ4jzrMVVjFjFDox0TZDcRUk4jERJqo47gR/uY3PI2DZbgLpRJuzrEbhH7Y8cb9Y06Xd5GcieUYEUfnPSI2Jzln4tL83h3AZnDr24OPvv0vajUNvLje8KOfPOPNxZacDiLML991/OLDL9u9HAfOpSro9aRUSClyeqS4o7tse8f12PDanRN0u+Wz54EHDzo2n6z4yQeJr34RStrxxdcyH7yIrFNmtxX66OjjyFBq1YkUQuNoqnjdqdIGpQmFqVPTUrBMKK0usI2ZIpHgC6H4ekwsfemk8zy6yVz1jqMkOMl8fDHy4VP44hvv8MMXO770lTdp2ifsbkbeeft1dqlwdn7CkODHn/RkV/BtpOk8jV/SLWFMgaZcsdl4hrgmDpGUdG8kVLXqoanCIwu4onsX5OQcpop5D/cGyn7u/mmdjn/mvycBPbeetxKYk0DePifK3mE73V0puHpOAyWT1dpM771yTp+OeP/7eb+NKvDmGw3f+Thx9zjz6r1X+M5HV3aPrhTQgU53nLrCx59e8v6n37Mf58S7b7S8+cYdvvuDZ1yuAyw8v/6rX+ajT6/51rc/QRj45S+9yrf+6Mn/IJ+rM2bMmDFjxox/O/Azk4/Prm/wztG6gFclqNIEZXRCcAX1nsXymG55D+da1jcrUhxwKgSnRgR5yGaToxFvC5oMWcxVZqSkEHzH0ckdQtOyXd/Y0FYXe4gRgFIdSVMnoak2PZPLacq1KDKpyHTv2HPe70mF3ThU9dhBsyZSYzI5xKVS/8666iJTjKuIUVAHwnLaY4cIjWKSM1SE0AaWywXb7ZbtZmPDQt1qV7v3LAK0Ehx1AZ5Txqnj9O4Z3nkuX1wQx1gVl2oxk9nIliIQKtmXU4RcSDHunWA67ecQODpacvHihRGFlV1TW1GTEZqmY7vbvBRTya39ZG9NjDDKaf+7nKO5zkSMQCy185J6HIsNtAUhtC1OHSlb1GbbeVLJ1t+pjrG6USflqMUCVaI5m5PRIkwtCrfUGBSL0Ty44ESE7CZ3rsN5bzcCct675tQ7fKgOuOp8m9xvk3vV3sT+bLHvIpUgrV0uIkwDAxNhd5g46nuxzgiCRYnaCzhKHqvattAPAyJK41sj9HLGB4fTANV56Z0RjSLgvSOEFufV+khvOfqkuvamIWbffViy7dOCxSmT9+exYhFEBwJSpulm/5jDlTMRxYVc55W8fx173zlnck6knIgpMo6JOCZijAzDwDiOpFIo2IA9ORLxiut7Gh8s4iU0dE2HaEYCOA3mGq6Dv9jdFFQmV6Pip8jcycHqnBHaUllcjGScyMdpwNxHtTJ1fOba6TpjxowZM2bMmDFjxueg1lvHJKrVSSRaKERyKQRVFm6DxA396OnlGMkZNGIRHYWxFJyeEoj4smJMnqgLjmXDKB29NEjlESyNJSPq8c7mnM470ELjtsQUaYKH0OK9o2kdzrNXlgoNOUNMQiyBlDJBHVebns115K1377Hb9PyHf+uv0siIyCl3779C0ym/9Zu/yPGR8uGHj/jxZ5f02ZOLsrrZsF6PpJjxJaNVUZuioCTGJHuyyYS7ti1eBS8O7xMwkqVlTMrUuZfFo9mzyUZk5uTZ9lbXse1bvv2TkeOzxKOLxPOrxL1Tx/vPE2ddw6Ir+LElq4luA5BjISagTDOhWEJuPWa5OiFzrPGZ9R6AD0oI1qN5GPRMzNwGj0pmGBKlRu0KI2eLwK9/aYFTm5tzSkx1FxllLGqZUJKsXiSD0wJSU3qqk62UaPdR9pGeCaTOr3VycU5AalKV83uxcylC1zi85nofw9f3aOSsdxaFOxRHLEobMqdL2Fwf4X1isXCsn0SSHHG9gqZVTpc7lk55kSAraONw4vA5G6krQh8zu5gJDrybRKL1Pkqq91dUiblAsr9pvN+LZqcrSIHlccvzbWZkIBe7lzSmwk8ej/zKOx2vpoa7b9/DBcf6ese7X3yVH3zwlKM7x4zPtrz++pKnlzvu
vHLGW2/suOkbtjvl4mbHzRb6XU8ciyVdFaugsahTc+3aWVIq6V/24nebkavwnQMBaQLj6V5G/fuJxPw8Gbn/Pn2gHK4NqfuLl9KJMlJJRytPsX0UgUbttwVhtUocH3cohVjsxuC9o5Gltgx9z2e7yPlCefVOw+PnA0kymiAWpaiDYUBD4f5Zw8994R6Xq8Tv/OEjYlRQzw9/cs1HH/8BX//yG/z7v/UFfKv8s3/5MU+u+///P0NnzJgxY8aMGf/W4mcmH6cFeaM1QrRkNFuHn4bA6ekdjk5OGFPi4vlTJEV88IhTYnGIeigHghHsxn4umVQyTgTnPN3yhMXyhFxgvVmbGlAdXq1/T6vycE8K3CKXzJkFttDbsyccev+McLBOQMhpNOJxcmHWFV6Rw9+VSjipao1CzbdIz0MPXMrThHKIrZw0aZMDa7Fc4pxjvVqz67d1P9recNX9llJiisScHHylFJx3nJ2c4X3g+uraSCsOXZZSFMSGDyOnHH3fIwJjNJfmpHa054Tj4xO2q7WRqZVRmtyZBXP85Vwo1S04EXgTSmHftzfto70yDxB3IGhKgtuMVSmFlJKdP1r7NKurNpdSVYVKignnm0pcJYR8cK0564/cx2SKmqJY1FyIYkPldK6VUlA3bb8Rd5PT1GY+U+F67/YkrN6O11Sx7pRiUTmT29V+Np3V9h+3g3oq71bJOMgp4UUoOTNGi+5tmg4Rh4jHSLxkiuOSiOOId+ZmtCTeGgWrto3eq3U61thV7/2tDkNnMciVpN/Hw6gRxrm6E0s2t6LT6jDFOhpFjIAsye5sFNH9+XMbFpE0qaQLOVWyFSNlyfXntfcx50RMo0XYpMgYR+I4EmMiZciMwFhjUx04O1YpNDRtY25fFyyKFgjqcMETXLBYXWeKWhW1ro5b1/7tr4l8nI6RSNmT8yLFiEijYOv5O0kFZufjjP9x4J1T5Qf/l7/9b3ozZsyYMWPGjBkVqtYNL866FBNGMGkRVAttGcllxTB6hl5IjDi2ZMkEbYAIvqGIzbsjDUrHQgZy3LGRJcFtWDDQuxO0eIq3edBm6im2NOIFSonsRiVqpqgJTmUsiDja1tEGRxMczgWyCC4UFu2SQmDMI48vLvjtv/lL/PjHn/Hrv/G3cRr46JPv8Fd+7TcQWfPRR3/KH/zJR4x4FsuOTz7+lCePrhn6WiNSMqUoUUCx2X/ZFPokt2pGxLZdCsGBqhE945AJTkl+Q8kR0SOyNIwlEaO5Eyex8EkrfHKl/OBx4punDZ9ejqy3oK2izxe8cke4d9xxPSRCTCQxwjL6aC7Nkqv7DmwEPFRRmNDWUo5KKcSc2WwzbfYEb4kppRiZ5wV840mhMFbHpjkUla4RIzFrYssUzCTFRM9SCrXhpc7IBa8mjCwUkth8JgDOZvaJvJLpHgZUV1umUUVIFBzBZdRBStO8LhQxoWmu9wByFlKGlIVs3Pn+uLx4MTJmRxoLVzeJbVQ+eTry4JWW3U75wquBp9uRzRAZ4kAfYcyW5jP1j6oqqYiJhWOss2YVUOfMrh8BQbRWmGDuXJtzBecdjQrHZ4FhyJzfg8Vxw5E44jqyycoPPlrz27/1dX7w2QW/9pd+ifXqmtfeeIs/ev85X/jKW/zTT77Hyb1zfvXX7tFQuH72mNUYGceeOA5s1pE07lDvaBqIyeOqiPY29mJeXnYt2r0go0lzdaYeHlsdjPXxOWcjJSvBbfc86o2DveZZbwndy3Ry1vszwpTCKsWDS9wJhT72bHKo56MR57vNyL0HRzRB6ETJJfLbf+kB3/ngCh9aMsoPPtvytbeXrPvIdlfPCwGHA8l8870zxpL4/e9e0Y8RJwHxNisLhUTm6YsX7DYrig/8td/4CheXq//BPltnzJgxY8aMGf/jx89MPrbOGTEjtpBVsdx+vzzi3quvIQKb7ZYSe5SEa1yNNRHrUEgJyIhmW8hCjTj0iDMn19HxGe3yhHEcKbkQqoNJBNQroqGSCAfXoy3uTG2Y0ucYEaQqy8x91zhv8SrjYHqw+ndCqSSk7Bd504JwIgXj5BzUA+Foj6OylZOi9fDae/WjepZHS3JJ3KyuSTFWks0UfaL2XlKylePUnikq1c3oOD09I/iGm9U1wzACVGLkNuUFLhjxNPSDLe5tKmPqb1S1AaXrFhbr2fdG5NgoaBOGWDSu9w1Db0q1l0le09Rpje2kTA5PDmRpjac1J+ehj8CIX3O+SSWRc0q1Rw+L6qRGiyKm2GsCKSVKyuRSI04xNWfJatGw5UAgm5HWyKqJXFQxortgxJtU2aF1nXiyZNCyd8kh7I/17WiUPBGwt1SMBW45Q/OeaZwGkYlwo8aS1uBYSkmkcWQoAwgEv6iOvUr+VYIzU0g54b2nCcEGAGek4xQLO8UBHb6m/ha14b26+YyEti31zpFyda0WbDCSlwlmxDpqSnWqIlDLMOs5Pykxp44WqabPGh+bpt9X5XcdvMjZ+i1jsvMhxhrzlMnJOkf3BG4ZKdG6P8EG8hIaKBnvjHBsm5ama/HO9oUGj6JIVdSa+3NyslYSVqRqQyfS1I6fTJM7+XM0Y1WtCpWMnDFjxowZM2bMmDHjc9BgM6xa/3woDtVMkAH6S4ZhJNLUmaLHe4G8re0ISk4F5xc2J1eFYynKtrQEH2jSDlLDKMKCS5I7ZcwNhWzCx6iUEilqMaTjsGU7DIj2xORJThgHR4yekh1eGrRVlsdCaBd0i5a2VZrguXv+gIdvvsLDB2/zlZ/7Nbxf0i7Oee9Lv0HfX/G7v/d3Wa0jb735Fv/0n/8R3/uTH7NaDYeqC7F6Ei2CukLjPMEpx34AEqdHFrvaBEt1MYFtIeUI4hnzyC4NiLR4PG1Y4FGGYiJPI2oSTYCOlqcrGNQzJE8UcOooDNwMW8L2lJNlx73Yc7N1bIZcZ0pzMR5IIhMZ55LJsTCmwpASpWpS8zQ7FWGzi6jYfQ1Xk45KKaCyv0/gal1GcRBCQbTO03Lo7lStY7gaCStKjaOlRtJOyUhwmJLsf7WKJ287NaFw5+yIb36lpWkCIgUnNktlmcS6VYxaX9dmXSMny16UnPG+MGa43gjqMusNXO5M5LmKC5q18Pj6Uxah5XwpXG8GSmmIKVCAmJLN8MXub8QkVWBqcOooJRIrEWfpRlBIJvoVRV0mZaVTODpesOwafPA86JRXzpQHD5VdbljqliQDQ2l58/XXOb77Ou9+6RcQdSyP/pC3H7xK+avCg7tnPHv8jDiu6PsdVxvh+iaz2SjICCWSIkhxTPUjkyh5mu0PkarUWz+3xLe5gKS9WPnQx/jy/SZVxe1H6vq4qtWf+EdfU3hyFdpPovDpNUtREgI54bzjb/36Of/wdz/G5YaM7b+mET7+9BGbbUGczb0nS6FpGu42AfGZkAKpgQ8fb7l/5nk8KkMpHDXK0jm2Q+bDxzdcbgQnIOpJVJE7QEm8ebdjKIWPPruhoHz40b/kwd2j//6fqTNmzJgxY8aMf2vwM5OPjRemHPmcgW7J6d17HJ8es92uScOOxilNsI45ceB8qKo72S+G99IsjJT
EC4vFkuOTu1Ac29UlznucBFvUUChqvQtaSo0hkX0kin0z0imXGtdR5EAaqO477/q+37v3JmchEyElFm9ja8YaZVGdfDFGplhUqcrDg/JtWkyKuezK5HWjxp82LBcL+r5ns1u/FF86xWCKqnXdmT2wEmO2jBPnOD1Z4NRzs1oxjuMt5509xxRtKerxITAMPeKUkiKlJFT0wIuKDVhd13Jzs7J9UOM79tsjgnrdv7eXokvru9s76Mrh39N72kecvjQWHX6XSyUInQ3Xe6Kv9jNa16MnjZHQNPZeYjR3aX2+6Rg556AqJcU5iIkC+NBSdHKx1c4/0brPKnFZQL0Nao5Dx+O0X/fv+SXilT2ZdjtChWJK1VLPp0kFmXImxZGcol039fzI2foZU8zEkthtd8gy2PmtwXZsdXlKPf+pKloRU1KbAtS6D733eyek9SFOJPlth+7hvUyOWVeHqZTS/pye4mq1uiN1r7yd1JmgzqJzD67Ost+3IoKrHZoxJRKlPr8pOVPKtVPGlMQxlxq1OtF71YlZ/zvmUp2o5vxELErKhUDTtHRdR9d1hEVnzkdRxE/E66G/cfrcOJCrk4sxVxHp7W4OO6b7bh7S/kSeaccZM2bMmDFjxowZ/zoU8VUgK2jJtH5AxgvSKOzKEWMacBohjahrTJyYRxS/n55KtohW2QsSLfUmFkjSWpxqWhFzg8qKVjoGPTLBIAXNkLXUiFQhOEdWhVRjRBHoBySuyMOS1XqBe3FD1zYcnZ7yztv3+OKXX+GNh/c4Pj5nKJGHZ++gbokIBF2QpOcXv/4bfPtPfpe/83f+G3788RXjaLUJOWdwSiHZTJ9N6DvmwhhH7nqLXXXO48ikksgpE2NNWRIlYvMDqcGJkKTgIqg3YWnjk4k2s9IFhUH49HIgpUyfMptdxg8vKOPIqjj0piarpBviqOx2gciI5EK7XCAlVzGzVbeMMZNTJiWbgVLORu/u6xeqgDgbwcZEtpZiOsZbwmTrmoegQhy2Nkth4lhB670ISMnElea4HE0kW7snc8n7No9a9FE7RSvtPJFaYrUoXmGRtQrHjeBzYsLtMRU0RqaoTpvx7D6K3dooCAHRRBsS/VDYjeYEXO08MUe0QNbM5Y3y5OKUt79wBh+9IOVA2+y42VoUKkDCKlVyLrecmia0HuyMNUl4gZJNmOwD9f4K5Gg1HTFCwxqXIjk0nN455uh4Sdc6thJ47d6C4zuv0J4t+Ln3fpVnF4+5f+/rbPorfv2v/jXun9/l3vn7/Oj999mNI3GXePbshiePG/JYiBly6UgIrmwOc33JlCJAYqIR4VAxsk+PmsTJ0/2DfOsk4HA/4c+6KG+Rj7eeRxCmYpv9PZB6T8HuXdg5GHPB1Xjn826JaEs/KEUSThPbAVIU1kNkTHYv7O6x8OmTnmUH5IEeRQdl4zLX68zJInJ22rJohHSzQG9uGMYdJQeSWoqTagAiIpkH547LbeZ6HSla8HazhSeXc+zqjBkzZsyY8RcJPzP5OOTBFj/iuXd+j3t37uCAzeUlKhlf3VbOu73jjOq8QqaOvmKqwFzjLIvSLY8IR8dsNmvSMOKcIzqHc7728gUkBetz9LaI09rxeIgKrUNWJeKmSEznAgQl5cIwjDUOohKa+5V7jRgppgAsdVHvqmMupmkgOHTBFQ6Pve1wPJgfzVXXtR0+NKw3GyMNJ+KxOgVzdW/mYh6riUQVpqFBOT4+IUthtV5bdGVK+6jWl1g/tRiSmNN+KEnp0FcpYC6wGv+62e4Yow0N1jculDKpLysJV6ysPlfyeCI6D6TWgcgSkUNXpdbHcSAip0jWXCzuZSqWFwEXwuRZtP2abXGvGnB+ciyag86iWmqkbyXQUkzV6ecpJJv3nIJSo2Mt7kWVvStyWqwbh6v78+Ilcu6nwDxxh15ROwfSoSM0p5dcjykXUorkZA4/mUoeUIsNFUdJiViSdXiKR53Hqbehwkl1NboaIWqEI2oEqqvXytT5CBPRfyAdRaZoFnn5fe2JQht2JwXn9PeTB3c61qquRsdkI4353POVchgaJ0KeqjfI5kxOEXKCmK3KY082inKg5QuK9ZEUiqlBVUFMQa7O41wghAbnA75pCKHBe09wHqcOnLv1/m8LFqbLrLwsUJ2EB5V4zJaPu7+0S/3sKAXrV50xY8aMGTNmzJgx46dAxMSRrQ408YrdZiS6MwgdSqR1RuqQy8HFVCy2EC2odCQxImQv7puEvFXg6VKh5wznRkJOFNmxLFsGvUPGkVUocY0PA9sRBE/UYGt3Z6SopBtGRsbNjqPuFXxwjEPixz96wscfP+Zb3/our79ywm//9jf5D//jX8e5JZAoNU2lbU+4uvw+ub/hF77xkOVR4MknN1xerthshWFM5sPLmYQguTrrSsH5xGpUNrtdLTioc/pk6MpWPyElUIrQjAPJJV6snTn9qnBYncNjrrDVNnPV2/y1WSW2Y+LMD+xoGaLH50weL+jcc47bN1n1GYmF1m2Q1JpZDbuvINnSVsx1B0E8KVmUbalzw554omp5nSOnaGLlcphrDt+FOBTGseDFRMfWeZlJKZrAmcIwDKBCEM82j2jJZK1Teik2TxaHuELJQpYE2VOSEXyiEHNCguN6HUEKng51hYSSYibniFMTgCJmtcu5zjw1bzVqwQeb/ba90KcBFz192jEOh0qKbYp8ep1YXhZUA0Edq2FNkWgkXKkiYAS/r92pBPsUGiSlztaH3ZWT/byImKtUHSKFGAtXq55N2vD8smd7s+ELbyx58PZbvPf2HUo4tyQuVd794l9GXGDpXuHLX/wtnj76Q3CgJ0fcfPAxf/yDJ7jViMuJlFtIESkZkYi5VgtmwLwVnVqmGbKKres5bRd/nX1vuR1vR7IePiNupyu9/PzTXGq7oZDkQNaqs2Svkm85KqXgpJCypTvtkrBsW57sEloKoo6T5cD5ceGv/fV3+eCz75LGnphbPnmR+fqrDTAgGbJkvATOjoT7J0c8v4E//mTN26d2v8KJJ2dFst2vKuNI6xNnR8KjKxjHwe49JCUJkA91NDNmzJgxY8aMvxj4mcnHXDyLbsGrr7zKUdey29yQ44j3FtfgQyC0Dc47I7A0gDe3oBGJoUaCSHVOeRaLE0SUm6sXpLGHnMneI+pIGvAukGNAnUfUk1Mg+oBPAeeC9dMhFHWIWrG3iFIEmhAQUYZhZ6SWHsiZXKiLSlsEW9RijUFVwYvaQimmuqSuZJQaGXI7ZoP9InGKX7G+xqPlMTlnNutVjVO1xztRI1IQfLFcfEo25eCe9BFEMienRwiwurlB0Bpf402JOE1nADnjvC02c8p4cYxjrEOPDSlFjMwLTUNJMPZbNJtCsuydj1Z2XxBUPDGNe3J2Ymlkmn6FvUJzKlm/HUUyhVaa09KeJsZYI2Uc6rS6zSYlZqGUVElLtyc7yzSYgBFUxeJRrbtRSFiskHfWc+mdr+SQ7Lv8nFOKJpzYwIKW2ltgb82IrtqvOZF1BZDa74kt8qtn0SoMRex7gYKzIaCYK5BpGLXTw/5eQJ3tWyPbzJGpar
0gSCGmHi8tIgUfPDlFVOr78p7gXXU2Tr2nWp2dIOLx3npTLOrVrjXLbXmJgq7XgbKX5ErZx7RajExBJFUC11fStBKVdbpOuQCJKcp1Og+mmKG9RFMVqd2qKUdyGck5kupXripoqfup1C9R23JzUNo17pzDqwP1SP1MUOcQ5yiquFI7aJ2nuGm/G7FsNyhyHWwnB25GitQIm2L/Fov3JUMiQcp75Sm3yMl5iJoxY8aMGTNmzJjx09CGHp83lM3Ixi3Q7hXa0FgNSbY6ELAaE2HqWJ9YGEW0kARiFoaklbY6rD0VRxHrb0zFUzjDS0+ip8vPGdRqTMATjhaMqQcESZFRPD6O5LJF/REx9QTf0ucb0nCHMY7kktChh6O7PHj9Ll/+2lcJ7og0bnF+Ackch2PaMBbHm1/4EhIWqBOOW8/TZ56rix03N1t2u4GcSg11KQxjJMbC/7e9+4637KwL/f/5rr1Pm5aZTHog1QCCGHoviSICShFF7xVLRNoPEQt4jegVEHwhV1G6tBvxIiByVfCCNEkiEEAMJYKGkJAMLSGQMu30vdf398ez9pkzh3OmnFmnzOTzfr02O2eVZz3rOWsO+7u/Txnr9rllvElEUdZwrwdZHQYJyNIxM+nSy6SbFR0myRidG+nX6/epOjWdhPGZKfZM9pjqdZjtz9BN6FabmZ7sMNXr0c8u0f8eQ5u2MN2fLZ1A6VDXMDk5MxeL1OyL4ep6MFK0hMD9LDE/zZqQpcNiQlRErw9R0atnGao7JcZuksvZdDLu1xX9rNm7u09Vdej3c14H38GUr32qqstMf4LsJ8PdBLrNUhrN7FFRkXWfbreiV0PVmYbsQvTpdoK6X0Y39vp9qIfpVnvJLKMcZ/s9hodr+v3yHUhZpiQZ6pQOnBXQy5pu9qiqDpOTNbO9YLI3S29oiD1TPSany++oV0Nkn12TcPOuPtOzPcY6XfZQQ5bZomoGU9r2qClLWNSDdmyStHOJuCbWjBwsiVFit06nohOlY+h0NUzmLL1+zWxvmptvm6XfH+cLO/by7W9s4ry734t73f8+jG05rRmZV6ZSTTpsP/ke7PjmV/ny5/+Lr15/K9N7p9jYrZrf1SwRNUmPKvr06zGGqtmyPMigo3Fm00G2JLzr+R0ICKgH97OvM/NiSciFCcn9l/WZX+bcP4nm+k2H+Kqa+04qoiyRNF73ieyze6rP5rEhql29MjMSNXvHg/rEii3DPWYm+5x+QnDTbX02jc2weWwD3Uj6BCduqzhl2zDfuX2WL94wzp22VYx2k16vQ92vyf7s3HdGZM3ocJ+x0Q637Ar6dZlxazZ6VPWgo3h+371KkqRj27KTj2ff6VS2HreVut9nz+49UNd0h4apoqbT6TI8PMLw8HCZIrLp4ZT9MoVmdCDqflnQnKAzPMLo6Bh1v8fE+HhJcg2mo8yaTpZRd/1mWtTIuizXnTXUffpESW5lk0DJsiZCVXWpqoqhoS51Jr2ZmTLSLUrvwxyMrRqsOQDNFCblw35V5pcsIwbnPszFfh8c5/d4G0zAOD850e0OMTIyysz0LDPTM/t92Bwkt2Awyi+JrJpRf/Xc6MBO1WFs82ZI2L1nd0mgzX0ALfUsiZLyob3T7RId6M306FQderOzJUCdP9QrysjUoaFhJsb37KsPzH0oHCTaOtW+KVebBtqXyGxGd5ZjB2mnaOrVoa77c0neipKgi2Z6l4iy5kDOrT8YkP1mbYl6LsE1GCk5NxJ1bqRq2ddp1u6smqRzRKdJQlV0GCLqugSTURNVRbfTbZ6RZg3H6O/rjRpNMD9Xz+a3EzE3DSxNMDEY8cncCMcyNrCMehwkvqsmkVWSap0oI+Wi023adBCwlqlNu90u1CWgGUyNOlhrsiQXg6GhIbrd7oK1HcvalGVk6CAh3mlG7O57VsvvYl/99yUe9yUi55K8TeJ0MBqRhE4EdTOt7eA33anKiMhe3fybHOzJuQeieeYX/BHJksSu65IkL9MYZfNPfzA6unzB0qFqelVXTQeDwVqNTTKdfb1C+/1m2thu0m/Sl1UzmnVuUqRsvjgYTJnTTJsUdTRfIpTpoPsMko9Z/rvOZq3RfR0IXO9RkiRJS6knx6mHR5gdPpGhDZvo0qHXn6aijDYr8Uf5tDqYjaWsgVdDXeKmTiQj3aDTSyZn98WSACVN0KFTutCRBDOMUkcXcpKR3Eknukx2NjHanWbvVFLXPXpTu+l3g+GYpB9bStw+24PhLWR/lunxcTK7VJ1Zjj/pZB7y4HO54OH3ZHrvZj7zb1dw//vejz4Vo0Mb2DnxbcaGNvEP//Auvn7NDQyPdti0aYTNoxX1limGq3E2jsywZ9cMdb8qn7n7wXRviL0zyabRPt/b22Wo26ff75elIOp5o8mifH/QpGepMujVHToxQpWTlHRtl4ohup2kX/fZOVkzNTNLVn2memOQPSYZZbjqUWef2V5NZyYY7wSz2SytF00MXlel7ZuEUj+zWaewJqN89q/oluVkYC5BFjSzojRLZgxVwT3PHuHfr5spm+bioYq6grruMzzUZ2JyklmC7tAYvd5M05E3iDqoAyan9/KlG3YyHMEPnbudzlCZVah0pC4Rz399cxdbNlScvGWYTrd8j0JdM9zMQjU+McmWzWP0ZqeY7s+WKV27MFuPMMvGMloyeuRMRaeqqIe67Braymh/gg1jFf0YZmr8ZqZ6s4xPz9LLoBPJ7r19duzssXXDMBvHOnRHRvn2rh7bToCdEzWdqs9oBTPdWeq6YibrZlRl0MtBh+2cSzzO/bsZfC8zCCJrmntuOvk2M/9M98rSMj3K7320rujPzlLPDrPz9l1c/bXrOf3sEzjl9POhm01n8WgStjXnnHkW8ei9nHDVV7n6yzvojU9RjScbmWB3P5judxmuu0wnjFZ9BsP86iYROtfJmHlfFdAkhee+H9r/b8L8kZDl8Ynv277fGpLzztt3bOnkPZitqcw8lWQfokrGusHsLOyZ6LNlY4nvh4YqqqyZ7ldMTMwyPRMcvxHqepiJ6Rm6QzNsHDuOEzcmY5sqJie6fPn6qaZjdvKtncnJm2HvBAxVQ0z3ewzFGLNZs3moJNtv31OV73QonXo7dbdZfbbp4G7sLEnSHcqyk48nn7idmelp+jPTdDtBnw6d7jDdZu05Avp1v4wYqvslaUiQnWY++GbE39iGzYyObGRycoLezATl0xJEp9MsXN0poxo7XWAwjWuZbpJOlPXnaKZ+zLJ2YFCSEiMjI0DQ6/UY5N72fWDNQQ5m3we7uoZqkIipmik99088zrf/dBhNgq5u+qJm0u2WkYWTk+P0Z5spG+vBmpIxN4hwkKjZtwRAlNF+lNFmGzZsAoI9e3YTGXSa6TWqoPkYVwrKKIneTqdDrz9Dp+ruWz9g7t4rMkpCa3R0pKyV0O81vSZrssnODBKf0ExVSjP1zyBhG/vig8FIQJrF6gcDBcmYWxtw353ta7tuZ4jBiMQqOvT7/bm76XSaNTejtE3V/F7meg1WVZNyLs9APxOqDlWWKViqZo3DqlN6b3aiQx01napJ0
jWjE6MZtTgYr5cxGHHH3HS18yu+X4p1Xq/DwbQ80ZQ1WPujrpsxn4MegZRnOrKM5hsk6waxR6fToe50yloeVfMBvaroNCN1qyrodvclH7uDKVibKXQ7VQdysOh9lERsFXMjGKumV+78xOlgXdT5ieNB4rl0YiyJtjqTqrMvSZ11ziWFqZKqCcyZV87g3uYGBteDaVibZOfcqMOylmOvX6ZGokm2dqsuVQ6mTu40yflmiuKqahLj0O/VzM72mJmdpdsdot/t06v7VM19dzrM/b7n1umgWQ+lX34P5T6DfjYjMDPpR032a7Kf1PSo+02iNGf2LfZY8X1/GyRJkiSA7GxistrIlk1by9IWvQm6BFUzG0zUzbIUMegA22tm66igk1A3MV6VVMMluTg+zVznRiKpmnXmS6e8PlnWymCiP81MtZXh7iSbq52MVV0mZ2apo8NMP4mZW5kZOb4kHvsxN4JqeGQjWd9KN8Y4+6yz+eF7ncmWLUN87LJrOG77bZxx6hi37L6GKz/9Je59v/vy2X//CGeedg6n3vlEPvKBj9Of7lH3a4a6JQ7odpONGzYQ9BmpyhSqmRVdRhmqk9EuVNFjQ7dLdoKoumSW+Hm2runXMD07S8YQNHFD1B2iA3WMEDFDXU8x3O3RyYq6P03Ut7N1dANTvT49ZhllgtOP63DqcV36dBkdupWtI7Mcv3k3nbiNXq/D7tlhbh/vc/Pu5OaJYHximFsngpleRSdyrpNnFfs651YkZFXWoexAlYPYvMNsf5Zbdw3TDajnguR9y31UVYdud4iTThpjw9gQIyNjTOwZL991RBBU1HWf3eNTXPfNvWw/bgPnnLaNTtUh57pV1kDFd6e69KZrzjtjU0lkdyqSsmZmnbB3coZNoxUzM9NElvh702hFdjcyPtMpidMqyLrEod3RTfzEk57WfD1Ssee7X2fHZf9A9vpsbbp3Zq9m23E1t8Ysw92aTRtHOHVbl0/ErYxPzDAx3WNstE8ne4wMl7U8Z2Y69OuyBmo9GCE4b1rawUIxJZnXRKjN8irRfN/Qbzq39/qD6W6B6JHZYaZXzz1zZ593Gg986CPYdvypfOuGKzn93PtSUTE5vYtudpntT/O1q69m986dbBkb4253OZXe+Ab61R5uvGWIEcbp1X2YadZsbTq0Dr4TqaOZxWhex12Yn0QsAWgu2Dbo4Dz4kmGQYJ3r7L1/ynHf/87rgB5VObfffM82PzE724cNI0Nkr+a23dOcvLFDt9NhtBNMzZSk5cxMn7qe4ZStFf9100y5n14SXTjzpI2cefIGqk6X7926l5t3zTA922PD2CgT41Pc9eST+MHTzmPP1J7yLVEGN+1MbrxlL7sm++yeSmYS6n5J0g4F1BWcsX0j9zh9wzL/kkqSpKPRspOPv3fJpX7bLkmSJEnSEvpVxaZNW6i6Q9Szk3QCOp1mTcOAqhrMZjNIOFT7OqoSVNGhl2XUXxV9xoaGmJyhdJSrk6pDk2QqnTa3Hddl7+Q49CchtjBbVfTrrUTO0O2OM76nCzlNRJnms0OUBBpJv4YTt45ywgnBrp2b2L79BE489Ti+/vVbmJqeYXRslOt23MjofU5jcuJ2rr/+ek678yamJ/by0Q99jAse81DudKcTue4r36TbCWZma2ame8zMTnNTbycjQxXDnWCk26Wfs0DF0DB0YjN1vZecDXr1CENDJ9IZCvq9mk5UdKpgJntkdMneBNRTUHWgP5iqNihLQIyTJP16hBH2cP6ZE+wdn2FyapjhbVP8xo8lW7Zt5IZv7ObsO49SdbpMT07QGSrlDHVG6PWTrHc3a9LX/PWlk1z2peTkLV22bOxy/OYuQ1VdZgsCuhVkls7XdVaMDI0wMjpCdDvUJMOdcXrZoVfXTE8HH7gqmO3VZb2+ALJDRcXw8IayjEzze4xOk1iugKipottMs9kpy50EpYN3lM6Z3ejQq5j7uSiJxOgnUQ0328rai9GpiKpTEtWUaUP7dRB1SbDVs9MwtYcYHqM7tJFNW05hdMtx9KemmR2fKlOlVqVj7dgoDEVF1S1L3fQy2T01y96pkggvKagunapJcmXOzY4FVZn5pkmy9SP3z73NKYn3fb/v8nNdJ3X0qGKidGDtDHHyKRu48zmnc/e7nMl1V3+F23Z9nJNPOYnN207iO9/6Kmef8wC++MV/YvP2k/nmjd/iq1/6GqPDw3RimopZxsbg5O2jTIwm4yNT7N7TY2aygihL6pTqDNZ7nD8b1vePYCSzmbp3Xnay9DFmMCnR3HIlzV3NDaMc/BbnlsUpHbfnEpL14PhsOqeX33evV5PDyeYNw4xPzbJ5+yjdai+bRoeZnOpTA9O9PuN7e4xPJf2JHj/7sDuzc89ONm8Y5pRNXX7ojE3ccusuzjhziMu+tJNNIxUPuudJfP7qG6noc/1397BhuOL0rcG5p2/jhptrbts1zs7xHmNDHbZ2u2zqVowMB1U36FbJhqE+U3tuObQ/nJIk6Ziw7OSjJEmSJEla2re+cd0x32n3kY/c99+P+tHfWLuKHKLth3HsC38VXtjitX9/mec94SD7H7LMcg/VKHCfH/jJRfc9bsHPj714hStzBE467YcBeOijng3AD9/7p9ayOqvm1Uts/91DOPe5h3iNdfxrlyRJa6Q6+CGSJEmSJEmSJEmSdHAmHyVJkiRJkiRJkiS1wuSjJEmSJEmSJEmSpFaYfJQkSZIkSZIkSZLUCpOPkiRJkiRJkiRJklrRXesKSJKkdnx9d81ZF3+glbJ2/MlPtFKOJEmSJEmSpDsWRz5KkiRJkiRJkiRJaoXJR0mSJEmSJEmSJEmtMPkoSZIkSZIkSZIkqRUmHyVJkiRJkiRJkiS1wuSjJEmSJEmSJEmSpFaYfJQkSZIkSZIkSZLUCpOPkiRJkiRJkiRJklph8lGSJEmSJEmSJElSK0w+SpIkSZIkSZIkSWqFyUdJkiRJkiRJkiRJrTD5KEmSJEmSJEmSJKkVJh8lSZIkSZIkSZIktcLkoyRJkiRJkiRJkqRWmHyUJEmSJEmSJEmS1AqTj5IkSZIkSZIkSZJaYfJRkiRJkiRJkiRJUitMPkqSJEmSJEmSJElqhclHSZIkSZIkSZIkSa0w+ShJkiRJkiRJkiSpFSYfJUmSJEmSJEmSJLXC5KMkSZIkSZIkSZKkVph8lCRJkiRJkqRVFhE7ImLHGlz3gojIiHjxal9bknTHYPJRkiRJkiRJkg4iIs5qknZvW+u6SJK0npl8lCRJkiRJkiRJktQKk4+SJEmSJEmSJEmSWmHyUZKkJcyfUici7hIR746I70ZE3ayRcd+IeHVEXBURt0XEVERcGxGvjIhtC8p6VlPWi5a41ikRMRsRX1qdu5MkSZIkHapmfcQbmh9/uYnvBq+L5h334xHxzxFxS0RMR8TXIuJPI2LrYV7vv0fEZRGxs4k1r46IP4iIkUWOzYi4PCJOi4i3N3HrZER8LiJ+/iDXuVdEfKC5zkRE/GtEPGSJY4+LiJdHxDVNnW6PiA9HxKMWOXZuXcnDvEY3Ip4TEZ+JiN3N8V+IiOdG
hN9lS9JRwj/YkiQd3LnAvwFnAe8A3gzsBp4B/DfgGuCvgL8EbgJ+G7giIjbPK+MdzTm/GhGdRa7xNKALvGllbkGSJEmSdAQuB17d/PdVwEvmvb4I0HQ2/RDwQOADwGuA64AXUGLELYdyoYi4BHgn8APA3wOvB24DXgp8KCK6i5y2DfgUcE9KfPp/gHOAd0TE7yxxqfs154wCbwXeDzwM+FhE3HVBnbY2x14M7AJe1dTtwcBHIuJZLVxjqNn/emBr0wZvpnyH/Vrgr5e4hiRpnYnMXOs6SJK0LkXEWezr2fryzHzhgv1nAt/KzP6C7b9KCaouzsxXzNv+OuDXgMdn5vvnbQ/ga8DJwGmZuesAdfrcErvuNnzyuRtOvejVS+w+PG97zMZWylkre/bsAWDz5s0HOfKOwfbYx7bY37HaHs985jO59tprP5+Z913rukiSpGPHvBjxrzPzogX7LgQuBT4NPC4zd87bdxElIfiqzPytedt3AGTmWYsc+4/AUzNzct6+FwMvAn4zM189b/vgC973AP8tM+tm+9nA54BNwN0y8/pm+wXAZc05v5KZb5tX1rOANwJ/mZnPmbf9TcAzKcnAZ2fzpXJEnAdcSUku3jUzdxzBNQb397rmHvvN9k5z3acBT8rM93EAB4qbzz333A1vfetbD3S6FjhWY4aVZJsdPttseY6k3VY6bnbkoyRJB3czpTfrfjLz6wsTj41LKKMcf3zB9r9s3hf2CH00cDbw7gMlHiVJkiRJ69bzmvdnzE88AjSJty8CTz2Ecn4D6AFPm594bLwUuHWJcvrA7w4Sj811b6CMvhwCfnGRc66YnxRsXNJc/wGDDRExDPwCsBf4vUHisbnGtc01hoFfOoJrVMCvA98Bfmt+rN389/OB5NDaUJK0xhYboi9JkvZ3VWZOL9zYTAnzLMrUq3cHjmP/jj2nzz8+M/8zIj4OPDYi7pyZ32x2PbN5f+PBKrJUb6SmZ+d9Dnb+obrgggvaKmpNXH755cDRfx9tsT32sS32d6y2h71lJUnSGngwMAs8JSKessj+YeDEiNiembcuVkBEbADOB24BfrNMkvN9poEfXGT7N5pk40KXU0YT3nuRfVcu3JCZsxFxM2Ua14G7AhsoicTbFinnUuAPjvAadwGOB64F/mCJe59k8XtfWP6ScXNVVfc51j77rrRjNWZYSbbZ4bPNludI2m2l42aTj5IkHdx3ltj+buCngOuB9zXHDZKUvwmMLHLOG4BHAE8HXhQRpwBPAL6YmZ9tsc6SJEmSpNWznfJd64sOctwmyujFxWwDAjjxEMpZ6OYltg/i2eMW2bdziXN6QGfez4Nzb1ri+MH2rUdwje3N+3kc+N43HWCfJGmdMPkoSdLBfd8CyRFxP0ri8V+Ax2Zmb96+CvgfS5T1D5Sg8Fcj4o8oa1Z0gTe1XWlJkiRJ0qrZBVSZefwRlgHwhcw83JltTl5i+ykLyl6OwbmnLLH/1Bav8Y+Z+eQjKEeStA645qMkScvzA837P81PPDYeAIwtdlJmzgJvpUzJ+njKCMi9wDtWqJ6SJEmSpHYM1iHsLLLvM8C2iLjHcgvPzL3AfwL3iIjDTWKeERFnLbL9gub9C8utF3ANMAGcHxFbF9l/YfP++SO4xlcooyQf1CxxIkk6ipl8lCRpeXY07xfM3xgRJwGvP8i5b6YEra8DzgbemZl7Wq6fJEmSJKldt1NmxjljkX1/0by/JSJOW7gzIjZGxIMO4Rp/Tlkf8pLFEn0RsS0iFhsV2QFe0czEMzj2bOB5lClO/+YQrr2ozJyhdJjdDLx0QX3Oba4xC7z9CK7RA15LGUX5moj4vg69EXFqRNx9udeQJK0ep12VJGl5/h24AnhyRHwK+CRlmpvHUnqF3rjUiZn5jYj4AGWtR3DKVUmSJEla9zJzb0T8G/DwiHgH8FVKx9J/ysyPRcTFwMuBayPin4EbKGsUngk8khI3PuYg17gkIu4LPAf4WkR8GPgGcDyl8+ojgL8Cnr3g1P8AHgh8LiI+Qll/8Web9/+RmV87wtu/GHg48NyIuD9wGXBCc43NwHMz84YjvMZLgfMp9/b4iLgU+DZwEmUtyIcCvw/81xFeR5K0wkw+SpK0DJnZj4gnAC8DHkfp6fltypSqL+PgwdAllOTjlZl5JFPTSJIkSZJWzy9SRjk+BvjvQADfAv4jM18REVdQ4sOHAU+krGX4bcoMOO88lAtk5q9FxAcpSbhHURKIt1GSkH/K4qMYb6d0hv1fwK8AWyhx6Z9l5iFd9yB1ui0iHgz8HvBk4LeBSeCzwJ9m5kdauMZsRDwJ+AXgIuAnKcnb71ESuf8TlyyRpKOCyUdJkpaQmTsogeRS+2+j9EZdzFkHKf7ezfsbD7tikiRJkqQ1kZnXAY8/wP5PUkY4HkpZZx1g3/uB9x9m3W6kJO4OdtzlHDjWXbRembkT+N3mtVLXSMr0rcuewlWStPZc81GSpFUWEZspPVhvA961xtWRJEmSJEmSpNY48lGSpFUSET8B3IfSS/Zk4AWZObG2tZIkSZIkSZKk9ph8lCRp9TwF+GXgZuDllHVCJEmSJEmSJOmYYfJRkqRVkpkXARetcTUkSZIkSceQzFxybUVJktaCaz5KkiRJkiRJkiRJaoXJR0mSJEmSJEmSJEmtMPkoSZIkSZIkSZIkqRUmHyVJkiRJkiRJkiS1wuSjJEmSJEmSJEmSpFaYfJQkSZIkSZIkSZLUCpOPkiRJkiRJkiRJklph8lGSJEmSJEmSJElSK0w+SpIkSZIkSZIkSWqFyUdJkiRJkiRJkiRJrTD5KEmSJEmSJEmSJKkVJh8lSZIkSZIkSZIktcLkoyRJkiRJkiRJkqRWmHyUJEmSJEmSJEmS1AqTj5IkSZIkSZIkSZJa0V3rCkiSpHacuaXiq3/yE2tdDUmSJEmSJEl3YI58lCRJkiRJkiRJktQKk4+SJEmSJEmSJEmSWmHyUZIkSZIkSZIkSVIrTD5KkiRJkiRJkiRJaoXJR0mSJEmSJEmSJEmtMPkoSZIkSZIkSZIkqRUmHyVJkiRJkiRJkiS1wuSjJEmSJEmSJEmSpFaYfJQkSZIkSZIkSZLUCpOPkiRJkiRJkiRJklph8lGSJEmSJEmSJElSK0w+SpIkSZIkSZIkSWqFyUdJkiRJkiRJkiRJrTD5KEmSJEmSJEmSJKkVJh8lSZIkSZIkSZIktcLkoyRJkiRJkiRJkqRWRGaudR0kSdIRiohbR0ZGjr/HPe6x1lVZF/bs2QPA5s2b17gm64PtsY9tsb9jtT2uvvpqJicnb8vM7WtdF0mSJK0Pxs3Lc6zGDCvJNjt8ttnyHEm7rXTcbPJRkqRjQERMAx3gqrWuyzpxt+b9K2tai/XD9tjHttjfsdoeZwG7M/Psta6IJEmS1gfj5mU7VmOGlWSbHT7bbHmOpN3OYgXj5u5KFCpJklbdlwEy875rXZH1ICI+B7bHgO2xj22xP9tDkiRJdyDGzctgzHD4bLPDZ5stz3puN9d8lCRJkiRJkiRJktQKk4+SJEmSJEmSJEmSWmHyUZIkSZI
kSZIkSVIrTD5KkiRJkiRJkiRJaoXJR0mSJEmSJEmSJEmtiMxc6zpIkiRJkiRJkiRJOgY48lGSJEmSJEmSJElSK0w+SpIkSZIkSZIkSWqFyUdJkiRJkiRJkiRJrTD5KEmSJEmSJEmSJKkVJh8lSZIkSZIkSZIktcLkoyRJkiRJkiRJkqRWmHyUJEmSJEmSJEmS1AqTj5IkrVMRcaeIuCQiboyI6YjYERGviohth1nO8c15O5pybmzKvdNK1b1tbbRFRPxYRLwyIj4WEbdGREbEJ1ey3ivlSNsjIjZGxFMj4p0R8ZWIGI+IPRFxZUQ8PyKGV/oe2tTS8/E7EfHPzbl7I2J3RHwpIv78jvZvZZEyHxER/ebfzMvarK8kSZJ0MGsZG6/E5+vVslZxYxM3LPX6TLt32a6WYsvLD9IGo0ucd/eI+LuI+G5ETEXENRHxkogYa+8O29fCc3bBQdpr8LrzgvOOyucsIn4mIl4bEZ9ovnfIiPibZZZ12G2/ms9ZZGbbZUqSpCMUEecCnwJOAt4HfAV4AHAhcA3w0My89RDK2d6UcxfgUuDfgbsBTwS+Czw4M69fiXtoS4tt8V7KfU8B1wE/BFyRmQ9bmZqvjDbaIyIeA3wQuA24jNIe24AnAKc05f9oZk6t0G20psXn4zpgL3AVcDMwBNwbeCSwG7ggM7+wEvfQlrbaYkGZm4H/AE4ANgF/nJl/0Ga9JUmSpKWsZWy8Ep+vV8taxo0RkcDXgbctUuy3MvOty76xFdTis3Y5JY58yRKHvCwzewvOeSDluRwC/i/wTeBHgPsBV1Daefrw72pltfScnQVctMTuewJPBr6cmfdccN7R+px9ETif8v3Dtyh/h96Rmb9wmOUcdtuv+nOWmb58+fLly5evdfYCPgwk8OsLtv95s/2Nh1jOm5rjX7lg+/Oa7R9a63tdxbZ4MHAPoAOc1Zz7ybW+v7VoD+BewFOB4QXbNwOfa8p5/lrf6yo/H6NLbH9GU84/r/W9rlZbLDj3EsqXDS9synjZWt+nL1++fPny5cuXrzvOay1j45X4fH00tdty48Zm++Vr3QZr+KxdDuRhXLcD/FdzjSfM215REkQJXLzW7bOSbXaA8t/VlPO8Y+g5uxA4DwjgguY+/mal234tnjNHPkqStM40vZeuA3YA52ZmPW/fZuAmyoeUkzJz/ADlbKL04KyBUzNzz7x9FXA9cGZzjXU5+rGttlik3LOAGzjKRj6uVHssuMbPA+8A3p+Zjz/iSq+gVWqP44CdwHWZed6R1nmlrERbRMQTgfcCvwh0gb/CkY+SJElaJWsZG69GrLFS1jpubEak/WtmXrCsG1gDbbbZYORjZsYhXvtHgI8BH8/MRy7Ydw7wNcoIv7NzHSVzVvo5i4gTKCMDa+C0zNy5YP9R95wtFBEXUEYVH9bIx+W0/Vo8Z675KEnS+nNh8/6R+R8gAJog6QpgA/Cgg5TzIGCMkmDbM39HU+6HF1xvPWqrLY4Vq9Ees81774BHrQ+r0R6DQPo/jqCM1dBqW0TEScBbgPdm5rLWn5AkSZKO0FrGxkdzLLoe4satEfG0iHhhRPxaRKzHdpqv9TaLiJ+LiIsj4rcj4rERMbLEoT/SvH9o4Y4mGf5VSnL8nEO99ipZ6efsl4ER4D0LE4/zHG3PWVuW0/ar/pyZfJQkaf25a/P+1SX2X9u832WVyllLx8I9tGk12uNpzfv3fSBdh1pvj4h4ekS8OCL+LCI+DPw1pfffxcuv5qpouy3eQokVnn0klZIkSZKOwFrGxkdzLLoe4sbzgf8N/DHwOuDTEfHFiLjnEsevtZVos78FXg68Evhn4BsR8TOrdO3VsNL1fkbz/qYDHHO0PWdtOSr+ppl8lCRp/Tmued+1xP7B9q2rVM5aOhbuoU0r2h4R8VzgMcAXKWv9rXcr0R5PB14EPB94NGUtk0dl5rUHPGvttdYWEfE04AnAczLz5iOvmiRJkrQsaxkbH82x6FrHjX8OPBQ4kbI+5P0pa8qdD1waEacv57orrM02ex9lBp07UUbc3o2ShNwKvDsiHrOC115NK1bviHgkJVn25cz81BKHHY3PWVuOir9pJh8lSZJERDwZeBXwHeCnM3P2wGccmzLzQc3aHCdQko8An4uIH1/Daq2aZj3UV1Gmtvm7ta2NJEmSpPXkUOLGzHx+Zn4qM2/JzL2ZeWVmPgX4e0qc9YJVrfQqy8y/yMz3Z+a3M3MqM6/JzBdSOrhWlESkDuyZzfublzrgjv6cHQ1MPkqStP4Mehsdt8T+wfadq1TOWjoW7qFNK9IeEfEkypQw3wUuaOb7Pxqs2PORmbdm5kcpCchJ4O0RMXbYNVw9bbXFJZT7fU4LdZIkSZKOxFrGxkdzLLpe48Y3Nu+POMzzVsNq/L7fSlkj814RsXmVr70SVuo5Ox74aZo4fBn1Ws/PWVuOir9pJh8lSVp/rmnel5pn/bzmfal52tsuZy0dC/fQptbbIyKeArwHuBl4ZGZec5BT1pMVfz6ahe0/TZnK5R7LLWcVtNUW9wFOAr4XETl4AX/V7P/9Ztt7j6i2kiRJ0sGtZWx8NMei6zVu/F7zvnEZ56601Ygtp4A9zY/z2+BofdZWqt6/DIwAf9fE44drPT9nbTkq/qaZfJQkaf25rHl/dETs9//VTe+4hwITwGcOUs5nKD3FHrqgVx1NuYMpJS9beOI60lZbHCtabY+IeCrwLuBGSgC53tc1XGi1no/BWhG9IyxnJbXVFv8H+N+LvD7e7P9i8/NHW6m1JEmStLS1jI2P5lh0vcaND2re1+NMOyv++46IuwLbKAnIW+bturR5X7gWJBFxDiVZ9HXWX7utVJs9o3lfcsrVg1jPz1lbltP2q/6cmXyUJGmdycyvAR8BzgJ+bcHul1B6b709M8cHGyPibhFxtwXl7KVMUbERePGCcp7blP/h9TzFZlttcaxosz0i4pcpiaZvAI9Yz8/BUtpqj4g4IyJOXuwaEfEsysL13wS+1F7t29Xi343nZebTF77YN/LxA82216/YzUiSJEmsbWy8nGuvF2sZN0bED0fE0GLbgT9ufvybQ7+b1dFibHl2M20oC7afyL6Y6m8zc37H1n8FrgYeERFPmHdOBbyi+fGNmZnLubeVshLf10TEw4EfBL6cmZ86wHFH5XN2uCJiqGmzc+dvX+bfp1V/zmKdPbOSJAloPlh8ijL94fsoHxAeCFxImQLhIZl567zjEyAzY0E525ty7kLp5fRZyge5J1LWaXhI86Fl3WqxLR4GPL35cRNlDYHvAh8cHJOZF63UfbSljfaIiAuBf6F0RLuEklhbaGdmvmpl7qI9LbXHkyhTCH0auI4yldB2So/JewJ7gZ/MzH9d+Ttavrb+rSxR9kWUYPmPM/MPWq+8JEmStIi1jI0P99rryVrFjRHxNuDxwCea46eBu1FGW3WAtwDPWm+JNGitzS6irDn4ScoIstuAM4DHUdbUuxL4sYXTiUbEAynP5RDwfynJ3h8F7gdcAfxoZk63fMtHrO0YNCLeDvwC8LzMfO0Brvs2jt7n7EnAk5
ofTwF+nPKsfKLZdktmvqA59izgBuDrmXnWgnIO++/Taj9nJh8lSVqnIuLOwB9RPjxtB24C/hF4SWbevuDYJT/ANb3uXkT5cHMqcCsl4faHmfmtFbyF1rTRFvOSJ0s6lCTMenCk7XEobcEiH27Xqxba4wzgecDDKT0HjwemKAHAR4FXZ+Zigfa609bfjUXKvQiTj5IkSVoDaxkbH86115u1iBubxMovAT9MSYqMUtr5SuAtmflPR3JPK62FNrsn8HzgvsBpwBbKNKv/Cfwd8KbMnFni2nenjFq7ENhMmQLzXcCfZOZke3fZrhb/fW6jTO2bwGkHWu/xaH7OIuLFlL9DS5n7N3Wg5GOz/7D/Pq3mc2byUZIkSZIkSZIkSVIrXPNRkiRJkiRJkiRJUitMPkqSJEmSJEmSJElqhclHSZIkSZIkSZIkSa0w+ShJkiRJkiRJkiSpFSYfJUmSJEmSJEmSJLXC5KMkSZIkSZIkSZKkVph8lCRJkiRJkiRJktQKk4+SJEmSJEmSJEmSWmHyUZIkSZIkSZIkSVIrTD5KkiRJkiRJkiRJaoXJR0mSJEmSJEmSJEmtMPkoSZJ0BxcRF0RERsSLV/AaZzXXeNthnHNRc85FC7bviIgdh3KsJEmSJElHyrhZOjwmHyVJknTMWizgkiRJkiRJhXGzVkJ3rSsgSZIkLeEfgc8AN7V8rCRJkiRJxwLjZq1LJh8lSZK0LmXmLmBX28dKkiRJknQsMG7WeuW0q5IkSWts/roOEXG3iHhvRNwWEeMR8cmIePSC4+fWaYiIx0TE5RGxKyJy3jHHRcTLI+KaiJiKiNsj4sMR8aiD1OXBEfEvTXl7mnPut8hxp0XEH0bEFRHxnYiYiYgbI+KdEXH3g1zjoPe48D4PoQ33O3awHgdwJnBms2/weltEbIuIiYj4WkTEEmX+v+b477t/SZIkSdLqMW42btbRxeSjJEnS+nE28GngeOBNwHuA+wIfjIifW+T4nwHeD+wB3gi8GyAitgKfAi6m9Gp8FfD3wIOBj0TEs5a4/gOBy4Fp4PXAB4EfBT4REQ9fcOwjmvJ3NmX/BWX6lp8BPhsR57d0j8u1A3gJ5f53Nf89eL03M28H/hY4B/i+wDIi7gw8FvhcZl7ZYr0kSZIkSctn3NyeHRg3a4U47aokSdL68QjgzzLzdwYbIuJ1lKDjjRHxwczcPe/4xwGPy8wPLSjnFcDdgTcDz87MbMp6BXAl8JqI+HBm7lhw3mOAX8/M1827/hOB9wKXRMRdM7Nudl0KnJyZe+YX0ARPVwB/QglCjvQel6W5txcPenRm5osXOewNwK8AzwI+umDfrwIdSqAnSZIkSVofjJuNm3UUcOSjJEnS+rEL+KP5G5reg+8AtgI/teD49y0MoCJiGPgFCnzCigAABKtJREFUYC/we4MAqinrWuA1wDDwS4tc/zpKYDH/+u8D/hX4AeDh87Z/d2EA1Wy/ihJgXRgRQy3c44pprnsl8MSIOGWwPSI6lCBqD/Cu1aqPJEmSJOmgjJuNm3UUMPkoSZK0fnx+scCEMqULwL0XbP/sIsfeFdgAXJWZty2y/9IlygL4xLwemge9fkT8RLO+w00RMTtYGwJ4PDACnLBIWYd7jyvtDZTZQJ42b9vjgDsBf5OZe1e5PpIkSZKkpRk3GzfrKOC0q5IkSevHzUts/07zftwS2+cbHHPTEmUNtm89kutHxG9Q1sS4nTL1yjeACSCBJwHnUwKpZV9jlfwt8ErgGRHxJ00Q+cxmn1PHSJIkSdL6Ytxs3KyjgMlHSZKk9ePkJbYPpjbZtWB7Ljxw3jGnLLIP4NQlyjrk60dEF3gxJfC5T2buF7BFxIOXKOeQr7FaMnMyIt4G/Bbw6Ij4T8qaG//WTIUjSZIkSVo/jJuNm3UUcNpVSZKk9eM+EbF5ke0XNO9fOIQyrqH0pDw/IrYusv/C5v3zi+x7WEQs9vlw4fVPoPQA/dQiAdQm4D4HqF8b93g4+kDnIMf8JSUgfRZlzYoO9t6UJEmSpPXIuNm4WUcBk4+SJEnrx3HAH87fEBH3A55K6dn4jwcrIDNnKIvQbwZeuqCsc4HnAbPA2xc5/TzgOQvOeSLwSOA64BPN5u9SArX7NkHT4Ngh4NUsvmbFwBHf42G6FTgxIsaWOiAzrwU+Bvwk8GxgJ2VaGUmSJEnS+mLcbNyso4DTrkqSJK0fHweeHhEPBK6gTPXyc5QOY8/KzN2HWM7FwMOB50bE/YHLKIHNz1KCq+dm5g2LnPch4JUR8VjgKuAHgCcDU8DTmnUdyMw6Il7TXOdLEfE+YJjSO/T45noXLlJ+m/d4qD4G3B/4UER8HJgGrsrM/7fguDcAj6JMb/PazJxsuR6SJEmSpCNn3GzcrKOAIx8lSZLWjxuAh1AWo382Jej5PPC4zHz3oRaSmbcBDwb+F7Ad+G3gKcBngcdk5huWOPXfKNO4jADPpazhcCnwiMz8xIJj/yfwfGCSMu3Kk4ErgQcA31jpezwMLwPeCJwL/B6lV+tPL3LcPwG3NP/t1DGSJEmStD4ZN7fPuFmti8zF1luVJEnSaomIsyjBxV9n5kVrW5s7pog4hzJFzhWZ+fC1ro8kSZIkaR/j5rVn3KzD4chHSZIkCV4ABPC6ta6IJEmSJEnrkHGzDplrPkqSJOkOKSLOAH4eOA/4Fcp6He9Z00pJkiRJkrROGDdruUw+SpIk6Y7qHODlwATwUeD/y8x6baskSZIkSdK6YdysZXHNR0mSJEmSJEmSJEmtcM1HSZIkSZIkSZIkSa0w+ShJkiRJkiRJkiSpFSYfJUmSJEmSJEmSJLXC5KMkSZIkSZIkSZKkVph8lCRJkiRJkiRJktQKk4+SJEmSJEmSJEmSWmHyUZIkSZIkSZIkSVIrTD5KkiRJkiRJkiRJaoXJR0mSJEmSJEmSJEmtMPkoSZIkSZIkSZIkqRUmHyVJkiRJkiRJkiS1wuSjJEmSJEmSJEmSpFaYfJQkSZIkSZIkSZLUiv8fvIWOEH2MylYAAAAASUVORK5CYII=", - "text/plain": [ - "<Figure size 1152x1152 with 16 Axes>" - ] - }, - "metadata": { - "image/png": { - "height": 914, - "width": 911 - }, - "needs_background": "light" - }, - "output_type": "display_data" - } - ], - "source": [ - "plt.figure(figsize=(16, 16))\n", - "\n", - "for i, image in enumerate(original_images):\n", - " plt.subplot(4, 4, 2 * i + 1)\n", - " plt.imshow(image)\n", - " plt.axis(\"off\")\n", - "\n", - " plt.subplot(4, 4, 2 * i + 2)\n", - " y = np.arange(top_probs.shape[-1])\n", - " 
plt.grid()\n", - " plt.barh(y, top_probs[i])\n", - " plt.gca().invert_yaxis()\n", - " plt.gca().set_axisbelow(True)\n", - " plt.yticks(y, [cifar100.classes[index] for index in top_labels[i].numpy()])\n", - " plt.xlabel(\"probability\")\n", - "\n", - "plt.subplots_adjust(wspace=0.5)\n", - "plt.show()" - ] - } - ], - "metadata": { - "accelerator": "GPU", - "colab": { - "collapsed_sections": [], - "name": "Interacting with CLIP.ipynb", - "provenance": [] - }, - "kernelspec": { - "display_name": "Python 3", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.8.10" - }, - "widgets": { [... Jupyter widget-state JSON omitted: layout/progress-bar/HTML models recording a finished 169001437-byte download ...] } - }, - "nbformat": 4, - "nbformat_minor": 0 -}
diff --git a/kosmos-g/open_clip/docs/clip_conceptual_captions.md b/kosmos-g/open_clip/docs/clip_conceptual_captions.md deleted file mode 100644 index 7ddf31522..000000000 --- a/kosmos-g/open_clip/docs/clip_conceptual_captions.md +++ /dev/null @@ -1,13 +0,0 @@ -## Additional training curves for CLIP on Conceptual Captions - -### Zero shot accuracy -![](/docs/clip_zeroshot.png) - -### Training loss curve -![](/docs/clip_loss.png) - -### Validation loss curve -![](/docs/clip_val_loss.png) - -### Validation recall -![](/docs/clip_recall.png) \ No newline at end of file diff --git a/kosmos-g/open_clip/docs/clip_loss.png b/kosmos-g/open_clip/docs/clip_loss.png deleted file mode 100644 index fe3f41557..000000000 Binary files a/kosmos-g/open_clip/docs/clip_loss.png and /dev/null differ diff --git a/kosmos-g/open_clip/docs/clip_recall.png b/kosmos-g/open_clip/docs/clip_recall.png deleted file mode 100644 index 13b57e4a9..000000000 Binary files a/kosmos-g/open_clip/docs/clip_recall.png and /dev/null differ diff --git a/kosmos-g/open_clip/docs/clip_val_loss.png b/kosmos-g/open_clip/docs/clip_val_loss.png deleted file mode 100644 index 4c14f76fd..000000000 Binary files a/kosmos-g/open_clip/docs/clip_val_loss.png and /dev/null differ diff --git a/kosmos-g/open_clip/docs/clip_zeroshot.png b/kosmos-g/open_clip/docs/clip_zeroshot.png deleted file mode 100644 index 89e6ebc67..000000000 Binary files a/kosmos-g/open_clip/docs/clip_zeroshot.png and /dev/null differ diff --git a/kosmos-g/open_clip/docs/effective_robustness.png b/kosmos-g/open_clip/docs/effective_robustness.png deleted file mode 100644 index 536d1d05b..000000000 Binary files a/kosmos-g/open_clip/docs/effective_robustness.png and /dev/null differ diff --git a/kosmos-g/open_clip/docs/laion2b_clip_zeroshot_b32.png b/kosmos-g/open_clip/docs/laion2b_clip_zeroshot_b32.png deleted file mode 100644 index d0d484fdb..000000000 Binary files a/kosmos-g/open_clip/docs/laion2b_clip_zeroshot_b32.png and /dev/null differ diff --git a/kosmos-g/open_clip/docs/laion_clip_zeroshot.png b/kosmos-g/open_clip/docs/laion_clip_zeroshot.png deleted file mode 100644 index 811787d46..000000000 Binary files a/kosmos-g/open_clip/docs/laion_clip_zeroshot.png and /dev/null differ diff --git a/kosmos-g/open_clip/docs/laion_clip_zeroshot_b16.png b/kosmos-g/open_clip/docs/laion_clip_zeroshot_b16.png deleted file mode 100644 index db2d6a43e..000000000 Binary files a/kosmos-g/open_clip/docs/laion_clip_zeroshot_b16.png and /dev/null differ diff --git a/kosmos-g/open_clip/docs/laion_clip_zeroshot_b16_plus_240.png b/kosmos-g/open_clip/docs/laion_clip_zeroshot_b16_plus_240.png deleted file mode 100644 index 94a14f7c7..000000000 Binary files a/kosmos-g/open_clip/docs/laion_clip_zeroshot_b16_plus_240.png and /dev/null differ diff --git a/kosmos-g/open_clip/docs/laion_clip_zeroshot_l14.png b/kosmos-g/open_clip/docs/laion_clip_zeroshot_l14.png deleted file mode 100644 index 0b76fd5f8..000000000 Binary files a/kosmos-g/open_clip/docs/laion_clip_zeroshot_l14.png and /dev/null differ diff --git a/kosmos-g/open_clip/docs/laion_openai_compare_b32.jpg b/kosmos-g/open_clip/docs/laion_openai_compare_b32.jpg deleted file mode 100644 index 490a5fbf2..000000000 Binary files a/kosmos-g/open_clip/docs/laion_openai_compare_b32.jpg and /dev/null differ diff --git a/kosmos-g/open_clip/docs/scaling.png b/kosmos-g/open_clip/docs/scaling.png deleted file mode 100644 index 30115426d..000000000 Binary files a/kosmos-g/open_clip/docs/scaling.png and /dev/null differ diff --git a/kosmos-g/open_clip/requirements-test.txt b/kosmos-g/open_clip/requirements-test.txt deleted file mode 100644 index 24cb3902e..000000000 --- a/kosmos-g/open_clip/requirements-test.txt +++ /dev/null @@ -1,2 +0,0 @@ -pytest-xdist==2.5.0 -pytest==7.0.1 \ No newline at end of file diff --git a/kosmos-g/open_clip/requirements-training.txt b/kosmos-g/open_clip/requirements-training.txt deleted file mode 100644 index 2780eb068..000000000 --- a/kosmos-g/open_clip/requirements-training.txt +++ /dev/null @@ -1,8 +0,0 @@ -torch>=1.9.0 -torchvision -webdataset>=0.2.5 -regex -ftfy -tqdm -pandas -braceexpand diff --git a/kosmos-g/open_clip/requirements.txt b/kosmos-g/open_clip/requirements.txt deleted file mode 100644 index c1908ba20..000000000 --- a/kosmos-g/open_clip/requirements.txt +++ /dev/null @@ -1,5 +0,0 @@ -torch>=1.9.0 -torchvision -regex -ftfy -tqdm diff --git a/kosmos-g/open_clip/setup.py b/kosmos-g/open_clip/setup.py deleted file mode 100644 index 7cce7e983..000000000 --- a/kosmos-g/open_clip/setup.py +++ /dev/null @@ -1,56 +0,0 @@ -""" Setup -""" -from setuptools import setup, find_packages -from codecs import open -from os import path - -here = path.abspath(path.dirname(__file__)) - -# Get the long description from the README file -with open(path.join(here, 'README.md'), encoding='utf-8') as f: - long_description = f.read() - -exec(open('src/open_clip/version.py').read()) -setup( - name='open_clip_torch', - version=__version__, - description='OpenCLIP', - long_description=long_description, - long_description_content_type='text/markdown', - url='https://github.com/mlfoundations/open_clip', - author='', - author_email='', - classifiers=[ - # How mature is this project? Common values are - # 3 - Alpha - # 4 - Beta - # 5 - Production/Stable - 'Development Status :: 3 - Alpha', - 'Intended Audience :: Education', - 'Intended Audience :: Science/Research', - 'License :: OSI Approved :: Apache Software License', - 'Programming Language :: Python :: 3.7', - 'Programming Language :: Python :: 3.8', - 'Programming Language :: Python :: 3.9', - 'Programming Language :: Python :: 3.10', - 'Topic :: Scientific/Engineering', - 'Topic :: Scientific/Engineering :: Artificial Intelligence', - 'Topic :: Software Development', - 'Topic :: Software Development :: Libraries', - 'Topic :: Software Development :: Libraries :: Python Modules', - ], - - # Note that this is a string of words separated by whitespace, not a list. - keywords='CLIP pretrained', - package_dir={'': 'src'}, - packages=find_packages(where='src', exclude=['training']), - include_package_data=True, - install_requires=[ - 'torch >= 1.9', - 'torchvision', - 'ftfy', - 'regex', - 'tqdm', - ], - python_requires='>=3.7', -) diff --git a/kosmos-g/open_clip/src/data/gather_cc.py b/kosmos-g/open_clip/src/data/gather_cc.py deleted file mode 100644 index 44da701cf..000000000 --- a/kosmos-g/open_clip/src/data/gather_cc.py +++ /dev/null @@ -1,93 +0,0 @@ -import requests -import os -import multiprocessing as mp -from io import BytesIO -import numpy as np -import PIL -from PIL import Image -import pickle -import sys - - -def grab(line): - """ - Download a single image from the TSV.
- """ - uid, split, line = line - try: - caption, url = line.split("\t")[:2] - except: - print("Parse error") - return - - if os.path.exists(ROOT+"/%s/%d/%d.jpg"%(split,uid%1000,uid)): - print("Finished", uid) - return uid, caption, url - - # Let's not crash if anythign weird happens - try: - dat = requests.get(url, timeout=20) - if dat.status_code != 200: - print("404 file", url) - return - - # Try to parse this as an Image file, we'll fail out if not - im = Image.open(BytesIO(dat.content)) - im.thumbnail((512, 512), PIL.Image.BICUBIC) - if min(*im.size) < max(*im.size)/3: - print("Too small", url) - return - - im.save(ROOT+"/%s/%d/%d.jpg"%(split,uid%1000,uid)) - - # Another try/catch just because sometimes saving and re-loading - # the image is different than loading it once. - try: - o = Image.open(ROOT+"/%s/%d/%d.jpg"%(split,uid%1000,uid)) - o = np.array(o) - - print("Success", o.shape, uid, url) - return uid, caption, url - except: - print("Failed", uid, url) - - except Exception as e: - print("Unknown error", e) - pass - -if __name__ == "__main__": - ROOT = "cc_data" - - if not os.path.exists(ROOT): - os.mkdir(ROOT) - os.mkdir(os.path.join(ROOT,"train")) - os.mkdir(os.path.join(ROOT,"val")) - for i in range(1000): - os.mkdir(os.path.join(ROOT,"train", str(i))) - os.mkdir(os.path.join(ROOT,"val", str(i))) - - - p = mp.Pool(300) - - for tsv in sys.argv[1:]: - print("Processing file", tsv) - assert 'val' in tsv.lower() or 'train' in tsv.lower() - split = 'val' if 'val' in tsv.lower() else 'train' - results = p.map(grab, - [(i,split,x) for i,x in enumerate(open(tsv).read().split("\n"))]) - - out = open(tsv.replace(".tsv","_output.csv"),"w") - out.write("title\tfilepath\n") - - for row in results: - if row is None: continue - id, caption, url = row - fp = os.path.join(ROOT, split, str(id % 1000), str(id) + ".jpg") - if os.path.exists(fp): - out.write("%s\t%s\n"%(caption,fp)) - else: - print("Drop", id) - out.close() - - p.close() - diff --git a/kosmos-g/open_clip/src/open_clip/__init__.py b/kosmos-g/open_clip/src/open_clip/__init__.py deleted file mode 100644 index e604475ac..000000000 --- a/kosmos-g/open_clip/src/open_clip/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -from .factory import list_models, create_model, create_model_and_transforms, add_model_config -from .loss import ClipLoss -from .model import CLIP, CLIPTextCfg, CLIPVisionCfg, convert_weights_to_fp16, trace_model -from .openai import load_openai_model, list_openai_models -from .pretrained import list_pretrained, list_pretrained_tag_models, list_pretrained_model_tags,\ - get_pretrained_url, download_pretrained -from .tokenizer import SimpleTokenizer, tokenize -from .transform import image_transform diff --git a/kosmos-g/open_clip/src/open_clip/bpe_simple_vocab_16e6.txt.gz b/kosmos-g/open_clip/src/open_clip/bpe_simple_vocab_16e6.txt.gz deleted file mode 100644 index 7b5088a52..000000000 Binary files a/kosmos-g/open_clip/src/open_clip/bpe_simple_vocab_16e6.txt.gz and /dev/null differ diff --git a/kosmos-g/open_clip/src/open_clip/factory.py b/kosmos-g/open_clip/src/open_clip/factory.py deleted file mode 100644 index fdf2b571a..000000000 --- a/kosmos-g/open_clip/src/open_clip/factory.py +++ /dev/null @@ -1,162 +0,0 @@ -import json -import logging -import os -import pathlib -import re -from copy import deepcopy -from pathlib import Path -from typing import Optional, Tuple - -import torch - -from .model import CLIP, convert_weights_to_fp16, resize_pos_embed -from .openai import load_openai_model -from .pretrained import 
diff --git a/kosmos-g/open_clip/src/open_clip/__init__.py b/kosmos-g/open_clip/src/open_clip/__init__.py deleted file mode 100644 index e604475ac..000000000 --- a/kosmos-g/open_clip/src/open_clip/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -from .factory import list_models, create_model, create_model_and_transforms, add_model_config -from .loss import ClipLoss -from .model import CLIP, CLIPTextCfg, CLIPVisionCfg, convert_weights_to_fp16, trace_model -from .openai import load_openai_model, list_openai_models -from .pretrained import list_pretrained, list_pretrained_tag_models, list_pretrained_model_tags,\ - get_pretrained_url, download_pretrained -from .tokenizer import SimpleTokenizer, tokenize -from .transform import image_transform diff --git a/kosmos-g/open_clip/src/open_clip/bpe_simple_vocab_16e6.txt.gz b/kosmos-g/open_clip/src/open_clip/bpe_simple_vocab_16e6.txt.gz deleted file mode 100644 index 7b5088a52..000000000 Binary files a/kosmos-g/open_clip/src/open_clip/bpe_simple_vocab_16e6.txt.gz and /dev/null differ diff --git a/kosmos-g/open_clip/src/open_clip/factory.py b/kosmos-g/open_clip/src/open_clip/factory.py deleted file mode 100644 index fdf2b571a..000000000 --- a/kosmos-g/open_clip/src/open_clip/factory.py +++ /dev/null @@ -1,162 +0,0 @@ -import json -import logging -import os -import pathlib -import re -from copy import deepcopy -from pathlib import Path -from typing import Optional, Tuple - -import torch - -from .model import CLIP, convert_weights_to_fp16, resize_pos_embed -from .openai import load_openai_model -from .pretrained import get_pretrained_url, download_pretrained -from .transform import image_transform - - -_MODEL_CONFIG_PATHS = [Path(__file__).parent / f"model_configs/"] -_MODEL_CONFIGS = {} # dictionary (model_name: config) of model architecture configs - - -def _natural_key(string_): - return [int(s) if s.isdigit() else s for s in re.split(r'(\d+)', string_.lower())] - - -def _rescan_model_configs(): - global _MODEL_CONFIGS - - config_ext = ('.json',) - config_files = [] - for config_path in _MODEL_CONFIG_PATHS: - if config_path.is_file() and config_path.suffix in config_ext: - config_files.append(config_path) - elif config_path.is_dir(): - for ext in config_ext: - config_files.extend(config_path.glob(f'*{ext}')) - - for cf in config_files: - with open(cf, 'r') as f: - model_cfg = json.load(f) - if all(a in model_cfg for a in ('embed_dim', 'vision_cfg', 'text_cfg')): - _MODEL_CONFIGS[cf.stem] = model_cfg - - _MODEL_CONFIGS = {k: v for k, v in sorted(_MODEL_CONFIGS.items(), key=lambda x: _natural_key(x[0]))} - - -_rescan_model_configs() # initial population of the model config registry - - -def load_state_dict(checkpoint_path: str, map_location='cpu'): - checkpoint = torch.load(checkpoint_path, map_location=map_location) - if isinstance(checkpoint, dict) and 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - if next(iter(state_dict.items()))[0].startswith('module'): - state_dict = {k[7:]: v for k, v in state_dict.items()} - return state_dict - - -def load_checkpoint(model, checkpoint_path, strict=True): - state_dict = load_state_dict(checkpoint_path) - resize_pos_embed(state_dict, model) - incompatible_keys = model.load_state_dict(state_dict, strict=strict) - return incompatible_keys - - -def create_model( - model_name: str, - pretrained: str = '', - precision: str = 'fp32', - device: torch.device = torch.device('cpu'), - jit: bool = False, - force_quick_gelu: bool = False, - pretrained_image: bool = False, -): - model_name = model_name.replace('/', '-') # for callers using old naming with / in ViT names - - if pretrained.lower() == 'openai': - logging.info(f'Loading pretrained {model_name} from OpenAI.') - model = load_openai_model(model_name, device=device, jit=jit) - # See https://discuss.pytorch.org/t/valueerror-attemting-to-unscale-fp16-gradients/81372 - if precision == "amp" or precision == "fp32": - model = model.float() - else: - if model_name in _MODEL_CONFIGS: - logging.info(f'Loading {model_name} model config.') - model_cfg = deepcopy(_MODEL_CONFIGS[model_name]) - else: - logging.error(f'Model config for {model_name} not found; available models {list_models()}.') - raise RuntimeError(f'Model config for {model_name} not found.') - - if force_quick_gelu: - # override for use of QuickGELU on non-OpenAI transformer models - model_cfg["quick_gelu"] = True - - if pretrained_image: - if 'timm_model_name' in model_cfg.get('vision_cfg', {}): - # pretrained weight loading for timm models set via vision_cfg - model_cfg['vision_cfg']['timm_model_pretrained'] = True - else: - assert False, 'pretrained image towers currently only supported for timm models' - - model = CLIP(**model_cfg) - - if pretrained: - checkpoint_path = '' - url = get_pretrained_url(model_name, pretrained) - if url: - checkpoint_path = download_pretrained(url) - elif os.path.exists(pretrained): - checkpoint_path = pretrained - - if checkpoint_path: - logging.info(f'Loading pretrained {model_name} weights ({pretrained}).') - load_checkpoint(model, checkpoint_path) - else: -
logging.warning(f'Pretrained weights ({pretrained}) not found for model {model_name}.') - raise RuntimeError(f'Pretrained weights ({pretrained}) not found for model {model_name}.') - - model.to(device=device) - if precision == "fp16": - assert device.type != 'cpu' - convert_weights_to_fp16(model) - - if jit: - model = torch.jit.script(model) - - return model - - -def create_model_and_transforms( - model_name: str, - pretrained: str = '', - precision: str = 'fp32', - device: torch.device = torch.device('cpu'), - jit: bool = False, - force_quick_gelu: bool = False, - pretrained_image: bool = False, - mean: Optional[Tuple[float, ...]] = None, - std: Optional[Tuple[float, ...]] = None, -): - model = create_model( - model_name, pretrained, precision, device, jit, - force_quick_gelu=force_quick_gelu, - pretrained_image=pretrained_image) - preprocess_train = image_transform(model.visual.image_size, is_train=True, mean=mean, std=std) - preprocess_val = image_transform(model.visual.image_size, is_train=False, mean=mean, std=std) - return model, preprocess_train, preprocess_val - - -def list_models(): - """ enumerate available model architectures based on config files """ - return list(_MODEL_CONFIGS.keys()) - - -def add_model_config(path): - """ add model config path or file and update registry """ - if not isinstance(path, Path): - path = Path(path) - _MODEL_CONFIG_PATHS.append(path) - _rescan_model_configs() diff --git a/kosmos-g/open_clip/src/open_clip/loss.py b/kosmos-g/open_clip/src/open_clip/loss.py deleted file mode 100644 index de31426df..000000000 --- a/kosmos-g/open_clip/src/open_clip/loss.py +++ /dev/null @@ -1,121 +0,0 @@ -import torch -import torch.nn as nn -from torch.nn import functional as F - -try: - import torch.distributed.nn - from torch import distributed as dist - has_distributed = True -except ImportError: - has_distributed = False - -try: - import horovod.torch as hvd -except ImportError: - hvd = None - - -def gather_features( - image_features, - text_features, - local_loss=False, - gather_with_grad=False, - rank=0, - world_size=1, - use_horovod=False -): - assert has_distributed, 'torch.distributed did not import correctly, please use a PyTorch version with support.' 
- if use_horovod: - assert hvd is not None, 'Please install horovod' - if gather_with_grad: - all_image_features = hvd.allgather(image_features) - all_text_features = hvd.allgather(text_features) - else: - with torch.no_grad(): - all_image_features = hvd.allgather(image_features) - all_text_features = hvd.allgather(text_features) - if not local_loss: - # ensure grads for local rank when all_* features don't have a gradient - gathered_image_features = list(all_image_features.chunk(world_size, dim=0)) - gathered_text_features = list(all_text_features.chunk(world_size, dim=0)) - gathered_image_features[rank] = image_features - gathered_text_features[rank] = text_features - all_image_features = torch.cat(gathered_image_features, dim=0) - all_text_features = torch.cat(gathered_text_features, dim=0) - else: - # We gather tensors from all GPUs - if gather_with_grad: - all_image_features = torch.cat(torch.distributed.nn.all_gather(image_features), dim=0) - all_text_features = torch.cat(torch.distributed.nn.all_gather(text_features), dim=0) - else: - gathered_image_features = [torch.zeros_like(image_features) for _ in range(world_size)] - gathered_text_features = [torch.zeros_like(text_features) for _ in range(world_size)] - dist.all_gather(gathered_image_features, image_features) - dist.all_gather(gathered_text_features, text_features) - if not local_loss: - # ensure grads for local rank when all_* features don't have a gradient - gathered_image_features[rank] = image_features - gathered_text_features[rank] = text_features - all_image_features = torch.cat(gathered_image_features, dim=0) - all_text_features = torch.cat(gathered_text_features, dim=0) - - return all_image_features, all_text_features - - -class ClipLoss(nn.Module): - - def __init__( - self, - local_loss=False, - gather_with_grad=False, - cache_labels=False, - rank=0, - world_size=1, - use_horovod=False, - ): - super().__init__() - self.local_loss = local_loss - self.gather_with_grad = gather_with_grad - self.cache_labels = cache_labels - self.rank = rank - self.world_size = world_size - self.use_horovod = use_horovod - - # cache state - self.prev_num_logits = 0 - self.labels = {} - - def forward(self, image_features, text_features, logit_scale): - device = image_features.device - if self.world_size > 1: - all_image_features, all_text_features = gather_features( - image_features, text_features, - self.local_loss, self.gather_with_grad, self.rank, self.world_size, self.use_horovod) - - if self.local_loss: - logits_per_image = logit_scale * image_features @ all_text_features.T - logits_per_text = logit_scale * text_features @ all_image_features.T - else: - logits_per_image = logit_scale * all_image_features @ all_text_features.T - logits_per_text = logits_per_image.T - else: - logits_per_image = logit_scale * image_features @ text_features.T - logits_per_text = logit_scale * text_features @ image_features.T - - # calculate ground-truth labels and cache them if enabled - num_logits = logits_per_image.shape[0] - if self.prev_num_logits != num_logits or device not in self.labels: - labels = torch.arange(num_logits, device=device, dtype=torch.long) - if self.world_size > 1 and self.local_loss: - labels = labels + num_logits * self.rank - if self.cache_labels: - self.labels[device] = labels - self.prev_num_logits = num_logits - else: - labels = self.labels[device] - - total_loss = ( - F.cross_entropy(logits_per_image, labels) + - F.cross_entropy(logits_per_text, labels) - ) / 2 - return total_loss
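For orientation, a minimal single-process usage sketch of the `ClipLoss` deleted above; with the default `world_size=1` no cross-rank gathering happens, and the random features and batch size below are stand-in assumptions, not encoder outputs:

```python
import torch
import torch.nn.functional as F
from open_clip import ClipLoss

loss_fn = ClipLoss()  # defaults: local_loss=False, world_size=1, no horovod

# Stand-ins for encoder outputs: unit-norm image/text embeddings for a batch of 8.
image_features = F.normalize(torch.randn(8, 512), dim=-1)
text_features = F.normalize(torch.randn(8, 512), dim=-1)
logit_scale = torch.tensor(100.0)  # the exp() of the model's learned temperature

# Symmetric cross-entropy over the 8x8 similarity matrix; diagonal pairs are the targets.
loss = loss_fn(image_features, text_features, logit_scale)
print(loss.item())
```

The distributed options (`local_loss`, `gather_with_grad`, `use_horovod`) only change how features are pooled across ranks before the same similarity matrix is formed.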
b/kosmos-g/open_clip/src/open_clip/model.py deleted file mode 100644 index b5b0c94ef..000000000 --- a/kosmos-g/open_clip/src/open_clip/model.py +++ /dev/null @@ -1,600 +0,0 @@ -""" CLIP Model - -Adapted from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI. -""" -from collections import OrderedDict -from dataclasses import dataclass -import logging -import math -from typing import Tuple, Union, Callable, Optional - -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn -from torch.utils.checkpoint import checkpoint - -from .timm_model import TimmModel -from .utils import freeze_batch_norm_2d, to_2tuple - -from argparse import Namespace -from torchscale.component.multihead_attention import MultiheadAttention - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1): - super().__init__() - - # all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1 - self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - self.relu1 = nn.ReLU(inplace=True) - - self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - self.relu2 = nn.ReLU(inplace=True) - - self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity() - - self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.relu3 = nn.ReLU(inplace=True) - - self.downsample = None - self.stride = stride - - if stride > 1 or inplanes != planes * Bottleneck.expansion: - # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1 - self.downsample = nn.Sequential(OrderedDict([ - ("-1", nn.AvgPool2d(stride)), - ("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)), - ("1", nn.BatchNorm2d(planes * self.expansion)) - ])) - - def forward(self, x: torch.Tensor): - identity = x - - out = self.relu1(self.bn1(self.conv1(x))) - out = self.relu2(self.bn2(self.conv2(out))) - out = self.avgpool(out) - out = self.bn3(self.conv3(out)) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu3(out) - return out - - -class AttentionPool2d(nn.Module): - def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None): - super().__init__() - self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5) - self.k_proj = nn.Linear(embed_dim, embed_dim) - self.q_proj = nn.Linear(embed_dim, embed_dim) - self.v_proj = nn.Linear(embed_dim, embed_dim) - self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim) - self.num_heads = num_heads - - def forward(self, x): - x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute(2, 0, 1) # NCHW -> (HW)NC - x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC - x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC - x, _ = F.multi_head_attention_forward( - query=x, key=x, value=x, - embed_dim_to_check=x.shape[-1], - num_heads=self.num_heads, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - in_proj_weight=None, - in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]), - bias_k=None, - bias_v=None, - add_zero_attn=False, - dropout_p=0, - out_proj_weight=self.c_proj.weight, - out_proj_bias=self.c_proj.bias, - 
use_separate_proj_weight=True, - training=self.training, - need_weights=False - ) - - return x[0] - - -class ModifiedResNet(nn.Module): - """ - A ResNet class that is similar to torchvision's but contains the following changes: - - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool. - - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1 - - The final pooling layer is a QKV attention instead of an average pool - """ - - def __init__(self, layers, output_dim, heads, image_size=224, width=64): - super().__init__() - self.output_dim = output_dim - self.image_size = image_size - - # the 3-layer stem - self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False) - self.bn1 = nn.BatchNorm2d(width // 2) - self.relu1 = nn.ReLU(inplace=True) - self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(width // 2) - self.relu2 = nn.ReLU(inplace=True) - self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False) - self.bn3 = nn.BatchNorm2d(width) - self.relu3 = nn.ReLU(inplace=True) - self.avgpool = nn.AvgPool2d(2) - - # residual layers - self._inplanes = width # this is a *mutable* variable used during construction - self.layer1 = self._make_layer(width, layers[0]) - self.layer2 = self._make_layer(width * 2, layers[1], stride=2) - self.layer3 = self._make_layer(width * 4, layers[2], stride=2) - self.layer4 = self._make_layer(width * 8, layers[3], stride=2) - - embed_dim = width * 32 # the ResNet feature dimension - self.attnpool = AttentionPool2d(image_size // 32, embed_dim, heads, output_dim) - - self.init_parameters() - - def _make_layer(self, planes, blocks, stride=1): - layers = [Bottleneck(self._inplanes, planes, stride)] - - self._inplanes = planes * Bottleneck.expansion - for _ in range(1, blocks): - layers.append(Bottleneck(self._inplanes, planes)) - - return nn.Sequential(*layers) - - def init_parameters(self): - if self.attnpool is not None: - std = self.attnpool.c_proj.in_features ** -0.5 - nn.init.normal_(self.attnpool.q_proj.weight, std=std) - nn.init.normal_(self.attnpool.k_proj.weight, std=std) - nn.init.normal_(self.attnpool.v_proj.weight, std=std) - nn.init.normal_(self.attnpool.c_proj.weight, std=std) - - for resnet_block in [self.layer1, self.layer2, self.layer3, self.layer4]: - for name, param in resnet_block.named_parameters(): - if name.endswith("bn3.weight"): - nn.init.zeros_(param) - - def lock(self, unlocked_groups=0, freeze_bn_stats=False): - assert unlocked_groups == 0, 'partial locking not currently supported for this model' - for param in self.parameters(): - param.requires_grad = False - if freeze_bn_stats: - freeze_batch_norm_2d(self) - - @torch.jit.ignore - def set_grad_checkpointing(self, enable=True): - # FIXME support for non-transformer - pass - - def stem(self, x): - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.avgpool(x) - return x - - def forward(self, x): - x = self.stem(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - x = self.attnpool(x) - - return x - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - x = F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps) - return x.to(orig_type) - - -class QuickGELU(nn.Module): - # 
NOTE This is slower than nn.GELU or nn.SiLU and uses more GPU memory - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - def __init__(self, d_model: int, n_head: int, mlp_ratio: float = 4.0, act_layer: Callable = nn.GELU): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - args = Namespace(**{'scale_length': 0, 'multiway': False, 'flash_attention': True}) - self.ts_attn = MultiheadAttention(args, d_model, n_head, self_attention=True) - self.ln_1 = LayerNorm(d_model) - mlp_width = int(d_model * mlp_ratio) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, mlp_width)), - ("gelu", act_layer()), - ("c_proj", nn.Linear(mlp_width, d_model)) - ])) - self.ln_2 = LayerNorm(d_model) - - def attention(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None): - # original_val = self.attn(x, x, x, need_weights=False, attn_mask=attn_mask)[0] - ts_val = self.ts_attn(x, x, x, attn_mask=attn_mask)[0] - return ts_val - - def forward(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None): - x = x + self.attention(self.ln_1(x), attn_mask=attn_mask) - x = x + self.mlp(self.ln_2(x)) - return x - - -class Transformer(nn.Module): - def __init__(self, width: int, layers: int, heads: int, mlp_ratio: float = 4.0, act_layer: Callable = nn.GELU): - super().__init__() - self.width = width - self.layers = layers - self.grad_checkpointing = False - - self.resblocks = nn.ModuleList([ - ResidualAttentionBlock(width, heads, mlp_ratio, act_layer=act_layer) - for _ in range(layers) - ]) - - def forward(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None): - for r in self.resblocks: - if self.grad_checkpointing and not torch.jit.is_scripting(): - x = checkpoint(r, x, attn_mask) - else: - x = r(x, attn_mask=attn_mask) - return x - - -class VisualTransformer(nn.Module): - def __init__( - self, image_size: int, patch_size: int, width: int, layers: int, heads: int, mlp_ratio: float, - output_dim: int, act_layer: Callable = nn.GELU): - super().__init__() - self.image_size = to_2tuple(image_size) - self.patch_size = to_2tuple(patch_size) - self.grid_size = (self.image_size[0] // self.patch_size[0], self.image_size[1] // self.patch_size[1]) - self.output_dim = output_dim - self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False) - - scale = width ** -0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter(scale * torch.randn(self.grid_size[0] * self.grid_size[1] + 1, width)) - self.ln_pre = LayerNorm(width) - - self.transformer = Transformer(width, layers, heads, mlp_ratio, act_layer=act_layer) - - self.ln_post = LayerNorm(width) - self.proj = nn.Parameter(scale * torch.randn(width, output_dim)) - - def lock(self, unlocked_groups=0, freeze_bn_stats=False): - assert unlocked_groups == 0, 'partial locking not currently supported for this model' - for param in self.parameters(): - param.requires_grad = False - - @torch.jit.ignore - def set_grad_checkpointing(self, enable=True): - self.transformer.grad_checkpointing = enable - - def forward(self, x: torch.Tensor): - x = self.conv1(x) # shape = [*, width, grid, grid] - x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2] - x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] - x = torch.cat( - [self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), - x],
dim=1) # shape = [*, grid ** 2 + 1, width] - x = x + self.positional_embedding.to(x.dtype) - x = self.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - - x = self.ln_post(x[:, 0, :]) - - if self.proj is not None: - x = x @ self.proj - - return x - - -@dataclass -class CLIPVisionCfg: - layers: Union[Tuple[int, int, int, int], int] = 12 - width: int = 768 - head_width: int = 64 - mlp_ratio: float = 4.0 - patch_size: int = 16 - image_size: Union[Tuple[int, int], int] = 224 - timm_model_name: str = None # a valid model name overrides layers, width, patch_size - timm_model_pretrained: bool = False # use (imagenet) pretrained weights for named model - timm_pool: str = 'avg' # feature pooling for timm model ('abs_attn', 'rot_attn', 'avg', '') - timm_proj: str = 'linear' # linear projection for timm model output ('linear', 'mlp', '') - - -@dataclass -class CLIPTextCfg: - context_length: int = 77 - vocab_size: int = 49408 - width: int = 512 - heads: int = 8 - layers: int = 12 - - -class CLIP(nn.Module): - def __init__( - self, - embed_dim: int, - vision_cfg: CLIPVisionCfg, - text_cfg: CLIPTextCfg, - quick_gelu: bool = False, - ): - super().__init__() - if isinstance(vision_cfg, dict): - vision_cfg = CLIPVisionCfg(**vision_cfg) - if isinstance(text_cfg, dict): - text_cfg = CLIPTextCfg(**text_cfg) - - self.context_length = text_cfg.context_length - - # OpenAI models are pretrained w/ QuickGELU but native nn.GELU is both faster and more - # memory efficient in recent PyTorch releases (>= 1.10). - # NOTE: timm models always use native GELU regardless of quick_gelu flag. - act_layer = QuickGELU if quick_gelu else nn.GELU - - if vision_cfg.timm_model_name: - self.visual = TimmModel( - vision_cfg.timm_model_name, - pretrained=vision_cfg.timm_model_pretrained, - pool=vision_cfg.timm_pool, - proj=vision_cfg.timm_proj, - embed_dim=embed_dim, - image_size=vision_cfg.image_size - ) - act_layer = nn.GELU # so that text transformer doesn't use QuickGELU w/ timm models - elif isinstance(vision_cfg.layers, (tuple, list)): - vision_heads = vision_cfg.width * 32 // vision_cfg.head_width - self.visual = ModifiedResNet( - layers=vision_cfg.layers, - output_dim=embed_dim, - heads=vision_heads, - image_size=vision_cfg.image_size, - width=vision_cfg.width - ) - else: - vision_heads = vision_cfg.width // vision_cfg.head_width - self.visual = VisualTransformer( - image_size=vision_cfg.image_size, - patch_size=vision_cfg.patch_size, - width=vision_cfg.width, - layers=vision_cfg.layers, - heads=vision_heads, - mlp_ratio=vision_cfg.mlp_ratio, - output_dim=embed_dim, - act_layer=act_layer, - ) - - self.transformer = Transformer( - width=text_cfg.width, - layers=text_cfg.layers, - heads=text_cfg.heads, - act_layer=act_layer, - ) - - self.vocab_size = text_cfg.vocab_size - self.token_embedding = nn.Embedding(text_cfg.vocab_size, text_cfg.width) - self.positional_embedding = nn.Parameter(torch.empty(self.context_length, text_cfg.width)) - self.ln_final = LayerNorm(text_cfg.width) - - self.text_projection = nn.Parameter(torch.empty(text_cfg.width, embed_dim)) - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - self.register_buffer('attn_mask', self.build_attention_mask(), persistent=False) - - self.init_parameters() - - def init_parameters(self): - nn.init.normal_(self.token_embedding.weight, std=0.02) - nn.init.normal_(self.positional_embedding, std=0.01) - nn.init.constant_(self.logit_scale, np.log(1 / 0.07)) - - if hasattr(self.visual, 
'init_parameters'): - self.visual.init_parameters() - - proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5) - attn_std = self.transformer.width ** -0.5 - fc_std = (2 * self.transformer.width) ** -0.5 - for block in self.transformer.resblocks: - nn.init.normal_(block.attn.in_proj_weight, std=attn_std) - nn.init.normal_(block.attn.out_proj.weight, std=proj_std) - nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) - nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) - - if self.text_projection is not None: - nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5) - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - def lock_image_tower(self, unlocked_groups=0, freeze_bn_stats=False): - # lock image tower as per LiT - https://arxiv.org/abs/2111.07991 - self.visual.lock(unlocked_groups=unlocked_groups, freeze_bn_stats=freeze_bn_stats) - - @torch.jit.ignore - def set_grad_checkpointing(self, enable=True): - self.visual.set_grad_checkpointing(enable) - self.transformer.grad_checkpointing = enable - - def encode_image(self, image): - return self.visual(image) - - def encode_text(self, text): - x = self.token_embedding(text) # [batch_size, n_ctx, d_model] - - x = x + self.positional_embedding - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x, attn_mask=self.attn_mask) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x) - - # x.shape = [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection - - return x - - def forward(self, image, text): - if image is None: - return self.encode_text(text) - elif text is None: - return self.encode_image(image) - image_features = self.encode_image(image) - image_features = F.normalize(image_features, dim=-1) - - text_features = self.encode_text(text) - text_features = F.normalize(text_features, dim=-1) - - return image_features, text_features, self.logit_scale.exp() - - -def convert_weights_to_fp16(model: nn.Module): - """Convert applicable model parameters to fp16""" - - def _convert_weights_to_fp16(l): - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - if isinstance(l, nn.MultiheadAttention): - for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]: - tensor = getattr(l, attr) - if tensor is not None: - tensor.data = tensor.data.half() - - for name in ["text_projection", "proj"]: - if hasattr(l, name): - attr = getattr(l, name) - if attr is not None: - attr.data = attr.data.half() - - model.apply(_convert_weights_to_fp16) - - -def build_model_from_openai_state_dict(state_dict: dict): - vit = "visual.proj" in state_dict - - if vit: - vision_width = state_dict["visual.conv1.weight"].shape[0] - vision_layers = len( - [k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")]) - vision_patch_size = state_dict["visual.conv1.weight"].shape[-1] - grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5) - image_size = vision_patch_size * grid_size - else: 
- counts: list = [ - len(set(k.split(".")[2] for k in state_dict if k.startswith(f"visual.layer{b}"))) for b in [1, 2, 3, 4]] - vision_layers = tuple(counts) - vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0] - output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5) - vision_patch_size = None - assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0] - image_size = output_width * 32 - - embed_dim = state_dict["text_projection"].shape[1] - context_length = state_dict["positional_embedding"].shape[0] - vocab_size = state_dict["token_embedding.weight"].shape[0] - transformer_width = state_dict["ln_final.weight"].shape[0] - transformer_heads = transformer_width // 64 - transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith(f"transformer.resblocks"))) - - vision_cfg = CLIPVisionCfg( - layers=vision_layers, - width=vision_width, - patch_size=vision_patch_size, - image_size=image_size, - ) - text_cfg = CLIPTextCfg( - context_length=context_length, - vocab_size=vocab_size, - width=transformer_width, - heads=transformer_heads, - layers=transformer_layers - ) - model = CLIP( - embed_dim, - vision_cfg=vision_cfg, - text_cfg=text_cfg, - quick_gelu=True, # OpenAI models were trained with QuickGELU - ) - - for key in ["input_resolution", "context_length", "vocab_size"]: - state_dict.pop(key, None) - - convert_weights_to_fp16(model) - model.load_state_dict(state_dict) - return model.eval() - - -def trace_model(model, batch_size=256, device=torch.device('cpu')): - model.eval() - image_size = model.visual.image_size - example_images = torch.ones((batch_size, 3, image_size, image_size), device=device) - example_text = torch.zeros((batch_size, model.context_length), dtype=torch.int, device=device) - model = torch.jit.trace_module( - model, - inputs=dict( - forward=(example_images, example_text), - encode_text=(example_text,), - encode_image=(example_images,) - )) - model.visual.image_size = image_size - return model - - -def resize_pos_embed(state_dict, model, interpolation: str = 'bicubic', seq_dim=1): - # Rescale the grid of position embeddings when loading from state_dict - old_pos_embed = state_dict.get('visual.positional_embedding', None) - if old_pos_embed is None or not hasattr(model.visual, 'grid_size'): - return - grid_size = to_2tuple(model.visual.grid_size) - extra_tokens = 1 # FIXME detect different token configs (ie no class token, or more) - new_seq_len = grid_size[0] * grid_size[1] + extra_tokens - if new_seq_len == old_pos_embed.shape[0]: - return - - if extra_tokens: - pos_emb_tok, pos_emb_img = old_pos_embed[:extra_tokens], old_pos_embed[extra_tokens:] - else: - pos_emb_tok, pos_emb_img = None, old_pos_embed - old_grid_size = to_2tuple(int(math.sqrt(len(pos_emb_img)))) - - logging.info('Resizing position embedding grid-size from %s to %s', old_grid_size, grid_size) - pos_emb_img = pos_emb_img.reshape(1, old_grid_size[0], old_grid_size[1], -1).permute(0, 3, 1, 2) - pos_emb_img = F.interpolate( - pos_emb_img, - size=grid_size, - mode=interpolation, - align_corners=True, - ) - pos_emb_img = pos_emb_img.permute(0, 2, 3, 1).reshape(1, grid_size[0] * grid_size[1], -1)[0] - if pos_emb_tok is not None: - new_pos_embed = torch.cat([pos_emb_tok, pos_emb_img], dim=0) - else: - new_pos_embed = pos_emb_img - state_dict['visual.positional_embedding'] = new_pos_embed diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/RN101-quickgelu.json 
b/kosmos-g/open_clip/src/open_clip/model_configs/RN101-quickgelu.json deleted file mode 100644 index d0db2c161..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/RN101-quickgelu.json +++ /dev/null @@ -1,22 +0,0 @@ -{ - "embed_dim": 512, - "quick_gelu": true, - "vision_cfg": { - "image_size": 224, - "layers": [ - 3, - 4, - 23, - 3 - ], - "width": 64, - "patch_size": null - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 512, - "heads": 8, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/RN101.json b/kosmos-g/open_clip/src/open_clip/model_configs/RN101.json deleted file mode 100644 index b88b4d3ac..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/RN101.json +++ /dev/null @@ -1,21 +0,0 @@ -{ - "embed_dim": 512, - "vision_cfg": { - "image_size": 224, - "layers": [ - 3, - 4, - 23, - 3 - ], - "width": 64, - "patch_size": null - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 512, - "heads": 8, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/RN50-quickgelu.json b/kosmos-g/open_clip/src/open_clip/model_configs/RN50-quickgelu.json deleted file mode 100644 index 8c2f91260..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/RN50-quickgelu.json +++ /dev/null @@ -1,22 +0,0 @@ -{ - "embed_dim": 1024, - "quick_gelu": true, - "vision_cfg": { - "image_size": 224, - "layers": [ - 3, - 4, - 6, - 3 - ], - "width": 64, - "patch_size": null - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 512, - "heads": 8, - "layers": 12 - } -} diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/RN50.json b/kosmos-g/open_clip/src/open_clip/model_configs/RN50.json deleted file mode 100644 index 33aa884d5..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/RN50.json +++ /dev/null @@ -1,21 +0,0 @@ -{ - "embed_dim": 1024, - "vision_cfg": { - "image_size": 224, - "layers": [ - 3, - 4, - 6, - 3 - ], - "width": 64, - "patch_size": null - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 512, - "heads": 8, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/RN50x16.json b/kosmos-g/open_clip/src/open_clip/model_configs/RN50x16.json deleted file mode 100644 index 3161e1a2c..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/RN50x16.json +++ /dev/null @@ -1,21 +0,0 @@ -{ - "embed_dim": 768, - "vision_cfg": { - "image_size": 384, - "layers": [ - 6, - 8, - 18, - 8 - ], - "width": 96, - "patch_size": null - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 768, - "heads": 12, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/RN50x4.json b/kosmos-g/open_clip/src/open_clip/model_configs/RN50x4.json deleted file mode 100644 index e155237f8..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/RN50x4.json +++ /dev/null @@ -1,21 +0,0 @@ -{ - "embed_dim": 640, - "vision_cfg": { - "image_size": 288, - "layers": [ - 4, - 6, - 10, - 6 - ], - "width": 80, - "patch_size": null - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 640, - "heads": 10, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-16-plus-240.json b/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-16-plus-240.json deleted file mode 
100644 index 5bbd12bcd..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-16-plus-240.json +++ /dev/null @@ -1,16 +0,0 @@ -{ - "embed_dim": 640, - "vision_cfg": { - "image_size": 240, - "layers": 12, - "width": 896, - "patch_size": 16 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 640, - "heads": 10, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-16-plus.json b/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-16-plus.json deleted file mode 100644 index 5dc1e09ba..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-16-plus.json +++ /dev/null @@ -1,16 +0,0 @@ -{ - "embed_dim": 640, - "vision_cfg": { - "image_size": 224, - "layers": 12, - "width": 896, - "patch_size": 16 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 640, - "heads": 10, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-16.json b/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-16.json deleted file mode 100644 index 395eea77e..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-16.json +++ /dev/null @@ -1,16 +0,0 @@ -{ - "embed_dim": 512, - "vision_cfg": { - "image_size": 224, - "layers": 12, - "width": 768, - "patch_size": 16 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 512, - "heads": 8, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-32-plus-256.json b/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-32-plus-256.json deleted file mode 100644 index 2f09c857d..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-32-plus-256.json +++ /dev/null @@ -1,16 +0,0 @@ -{ - "embed_dim": 640, - "vision_cfg": { - "image_size": 256, - "layers": 12, - "width": 896, - "patch_size": 32 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 640, - "heads": 10, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-32-quickgelu.json b/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-32-quickgelu.json deleted file mode 100644 index ce6bd9235..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-32-quickgelu.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "embed_dim": 512, - "quick_gelu": true, - "vision_cfg": { - "image_size": 224, - "layers": 12, - "width": 768, - "patch_size": 32 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 512, - "heads": 8, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-32.json b/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-32.json deleted file mode 100644 index 07c8e28eb..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-B-32.json +++ /dev/null @@ -1,16 +0,0 @@ -{ - "embed_dim": 512, - "vision_cfg": { - "image_size": 224, - "layers": 12, - "width": 768, - "patch_size": 32 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 512, - "heads": 8, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-H-14.json b/kosmos-g/open_clip/src/open_clip/model_configs/ViT-H-14.json deleted file mode 100644 index 3e3a7e934..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-H-14.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "embed_dim": 1024, - 
"vision_cfg": { - "image_size": 224, - "layers": 32, - "width": 1280, - "head_width": 80, - "patch_size": 14 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 1024, - "heads": 16, - "layers": 24 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-H-16.json b/kosmos-g/open_clip/src/open_clip/model_configs/ViT-H-16.json deleted file mode 100644 index 588485455..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-H-16.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "embed_dim": 1024, - "vision_cfg": { - "image_size": 224, - "layers": 32, - "width": 1280, - "head_width": 80, - "patch_size": 16 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 1024, - "heads": 16, - "layers": 24 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-L-14-280.json b/kosmos-g/open_clip/src/open_clip/model_configs/ViT-L-14-280.json deleted file mode 100644 index 2262deaef..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-L-14-280.json +++ /dev/null @@ -1,16 +0,0 @@ -{ - "embed_dim": 768, - "vision_cfg": { - "image_size": 280, - "layers": 24, - "width": 1024, - "patch_size": 14 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 768, - "heads": 12, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-L-14-336.json b/kosmos-g/open_clip/src/open_clip/model_configs/ViT-L-14-336.json deleted file mode 100644 index 8d1f74c26..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-L-14-336.json +++ /dev/null @@ -1,16 +0,0 @@ -{ - "embed_dim": 768, - "vision_cfg": { - "image_size": 336, - "layers": 24, - "width": 1024, - "patch_size": 14 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 768, - "heads": 12, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-L-14.json b/kosmos-g/open_clip/src/open_clip/model_configs/ViT-L-14.json deleted file mode 100644 index d4a4bbb1d..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-L-14.json +++ /dev/null @@ -1,16 +0,0 @@ -{ - "embed_dim": 768, - "vision_cfg": { - "image_size": 224, - "layers": 24, - "width": 1024, - "patch_size": 14 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 768, - "heads": 12, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-L-16-320.json b/kosmos-g/open_clip/src/open_clip/model_configs/ViT-L-16-320.json deleted file mode 100644 index fc2d13ca9..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-L-16-320.json +++ /dev/null @@ -1,16 +0,0 @@ -{ - "embed_dim": 768, - "vision_cfg": { - "image_size": 320, - "layers": 24, - "width": 1024, - "patch_size": 16 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 768, - "heads": 12, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-L-16.json b/kosmos-g/open_clip/src/open_clip/model_configs/ViT-L-16.json deleted file mode 100644 index 82a1cedfa..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-L-16.json +++ /dev/null @@ -1,16 +0,0 @@ -{ - "embed_dim": 768, - "vision_cfg": { - "image_size": 224, - "layers": 24, - "width": 1024, - "patch_size": 16 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 768, - 
"heads": 12, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-g-14.json b/kosmos-g/open_clip/src/open_clip/model_configs/ViT-g-14.json deleted file mode 100644 index 8c4b7325c..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/ViT-g-14.json +++ /dev/null @@ -1,18 +0,0 @@ -{ - "embed_dim": 1024, - "vision_cfg": { - "image_size": 224, - "layers": 40, - "width": 1408, - "head_width": 88, - "mlp_ratio": 4.3637, - "patch_size": 14 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 1024, - "heads": 16, - "layers": 24 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/timm-efficientnetv2_rw_s.json b/kosmos-g/open_clip/src/open_clip/model_configs/timm-efficientnetv2_rw_s.json deleted file mode 100644 index e07e6beb3..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/timm-efficientnetv2_rw_s.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "embed_dim": 768, - "vision_cfg": { - "timm_model_name": "efficientnetv2_rw_s", - "timm_model_pretrained": false, - "timm_pool": "abs_attn", - "timm_proj": "", - "image_size": 288 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 768, - "heads": 8, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/timm-resnet50d.json b/kosmos-g/open_clip/src/open_clip/model_configs/timm-resnet50d.json deleted file mode 100644 index 7bb0957cd..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/timm-resnet50d.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "embed_dim": 1024, - "vision_cfg": { - "timm_model_name": "resnet50d", - "timm_model_pretrained": false, - "timm_pool": "abs_attn", - "timm_proj": "", - "image_size": 224 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 512, - "heads": 8, - "layers": 12 - } -} diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/timm-resnetaa50d.json b/kosmos-g/open_clip/src/open_clip/model_configs/timm-resnetaa50d.json deleted file mode 100644 index c011e0c02..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/timm-resnetaa50d.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "embed_dim": 1024, - "vision_cfg": { - "timm_model_name": "resnetaa50d", - "timm_model_pretrained": false, - "timm_pool": "abs_attn", - "timm_proj": "", - "image_size": 224 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 512, - "heads": 8, - "layers": 12 - } -} diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/timm-resnetblur50.json b/kosmos-g/open_clip/src/open_clip/model_configs/timm-resnetblur50.json deleted file mode 100644 index 05d0b209a..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/timm-resnetblur50.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "embed_dim": 1024, - "vision_cfg": { - "timm_model_name": "resnetblur50", - "timm_model_pretrained": false, - "timm_pool": "abs_attn", - "timm_proj": "", - "image_size": 224 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 512, - "heads": 8, - "layers": 12 - } -} diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/timm-swin_base_patch4_window7_224.json b/kosmos-g/open_clip/src/open_clip/model_configs/timm-swin_base_patch4_window7_224.json deleted file mode 100644 index 29dd08f6f..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/timm-swin_base_patch4_window7_224.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "embed_dim": 512, - "vision_cfg": { - 
"timm_model_name": "swin_base_patch4_window7_224", - "timm_model_pretrained": false, - "timm_pool": "", - "timm_proj": "linear", - "image_size": 224 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 512, - "heads": 8, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/timm-vit_base_patch16_224.json b/kosmos-g/open_clip/src/open_clip/model_configs/timm-vit_base_patch16_224.json deleted file mode 100644 index 39341ce1b..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/timm-vit_base_patch16_224.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "embed_dim": 512, - "vision_cfg": { - "timm_model_name": "vit_base_patch16_224", - "timm_model_pretrained": false, - "timm_pool": "", - "timm_proj": "linear", - "image_size": 224 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 512, - "heads": 8, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/timm-vit_base_patch32_224.json b/kosmos-g/open_clip/src/open_clip/model_configs/timm-vit_base_patch32_224.json deleted file mode 100644 index 39b845271..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/timm-vit_base_patch32_224.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "embed_dim": 512, - "vision_cfg": { - "timm_model_name": "vit_base_patch32_224", - "timm_model_pretrained": false, - "timm_pool": "", - "timm_proj": "linear", - "image_size": 224 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 512, - "heads": 8, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/model_configs/timm-vit_small_patch16_224.json b/kosmos-g/open_clip/src/open_clip/model_configs/timm-vit_small_patch16_224.json deleted file mode 100644 index 45863ab38..000000000 --- a/kosmos-g/open_clip/src/open_clip/model_configs/timm-vit_small_patch16_224.json +++ /dev/null @@ -1,17 +0,0 @@ -{ - "embed_dim": 512, - "vision_cfg": { - "timm_model_name": "vit_small_patch16_224", - "timm_model_pretrained": false, - "timm_pool": "", - "timm_proj": "linear", - "image_size": 224 - }, - "text_cfg": { - "context_length": 77, - "vocab_size": 49408, - "width": 512, - "heads": 8, - "layers": 12 - } -} \ No newline at end of file diff --git a/kosmos-g/open_clip/src/open_clip/openai.py b/kosmos-g/open_clip/src/open_clip/openai.py deleted file mode 100644 index ef0da78dc..000000000 --- a/kosmos-g/open_clip/src/open_clip/openai.py +++ /dev/null @@ -1,126 +0,0 @@ -""" OpenAI pretrained model functions - -Adapted from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI. 
-""" - -import os -import warnings -from typing import Union, List - -import torch - -from .model import build_model_from_openai_state_dict -from .pretrained import get_pretrained_url, list_pretrained_tag_models, download_pretrained - -__all__ = ["list_openai_models", "load_openai_model"] - - -def list_openai_models() -> List[str]: - """Returns the names of available CLIP models""" - return list_pretrained_tag_models('openai') - - -def load_openai_model( - name: str, - device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", - jit=True, -): - """Load a CLIP model - - Parameters - ---------- - name : str - A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict - device : Union[str, torch.device] - The device to put the loaded model - jit : bool - Whether to load the optimized JIT model (default) or more hackable non-JIT model. - - Returns - ------- - model : torch.nn.Module - The CLIP model - preprocess : Callable[[PIL.Image], torch.Tensor] - A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input - """ - if get_pretrained_url(name, 'openai'): - model_path = download_pretrained(get_pretrained_url(name, 'openai')) - elif os.path.isfile(name): - model_path = name - else: - raise RuntimeError(f"Model {name} not found; available models = {list_openai_models()}") - - try: - # loading JIT archive - model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval() - state_dict = None - except RuntimeError: - # loading saved state dict - if jit: - warnings.warn(f"File {model_path} is not a JIT archive. Loading as a state dict instead") - jit = False - state_dict = torch.load(model_path, map_location="cpu") - - if not jit: - try: - model = build_model_from_openai_state_dict(state_dict or model.state_dict()).to(device) - except KeyError: - sd = {k[7:]: v for k, v in state_dict["state_dict"].items()} - model = build_model_from_openai_state_dict(sd).to(device) - - if str(device) == "cpu": - model.float() - return model - - # patch the device names - device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[]) - device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1] - - def patch_device(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("prim::Constant"): - if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"): - node.copyAttributes(device_node) - - model.apply(patch_device) - patch_device(model.encode_image) - patch_device(model.encode_text) - - # patch dtype to float32 on CPU - if str(device) == "cpu": - float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[]) - float_input = list(float_holder.graph.findNode("aten::to").inputs())[1] - float_node = float_input.node() - - def patch_float(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("aten::to"): - inputs = list(node.inputs()) - for i in [1, 2]: # dtype can be the second or third argument to aten::to() - if inputs[i].node()["value"] == 5: - 
inputs[i].node().copyAttributes(float_node) - - model.apply(patch_float) - patch_float(model.encode_image) - patch_float(model.encode_text) - model.float() - - # ensure image_size attr available at consistent location for both jit and non-jit - model.visual.image_size = model.input_resolution.item() - return model diff --git a/kosmos-g/open_clip/src/open_clip/pretrained.py b/kosmos-g/open_clip/src/open_clip/pretrained.py deleted file mode 100644 index 3e28588b5..000000000 --- a/kosmos-g/open_clip/src/open_clip/pretrained.py +++ /dev/null @@ -1,163 +0,0 @@ -import hashlib -import os -import urllib -import warnings - -from tqdm import tqdm - -_RN50 = dict( - openai="https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt", - yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-yfcc15m-455df137.pt", - cc12m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-cc12m-f000538c.pt" -) - -_RN50_quickgelu = dict( - openai="https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt", - yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-yfcc15m-455df137.pt", - cc12m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn50-quickgelu-cc12m-f000538c.pt" -) - -_RN101 = dict( - openai="https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt", - yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn101-quickgelu-yfcc15m-3e04b30e.pt" -) - -_RN101_quickgelu = dict( - openai="https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt", - yfcc15m="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/rn101-quickgelu-yfcc15m-3e04b30e.pt" -) - -_RN50x4 = dict( - openai="https://openaipublic.azureedge.net/clip/models/7e526bd135e493cef0776de27d5f42653e6b4c8bf9e0f653bb11773263205fdd/RN50x4.pt", -) - -_RN50x16 = dict( - openai="https://openaipublic.azureedge.net/clip/models/52378b407f34354e150460fe41077663dd5b39c54cd0bfd2b27167a4a06ec9aa/RN50x16.pt", -) - -_RN50x64 = dict( - openai="https://openaipublic.azureedge.net/clip/models/be1cfb55d75a9666199fb2206c106743da0f6468c9d327f3e0d0a543a9919d9c/RN50x64.pt", -) - -_VITB32 = dict( - openai="https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt", - laion2b_e16="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-laion2b_e16-af8dbd0c.pth", - laion400m_e31="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e31-d867053b.pt", - laion400m_e32="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e32-46683a32.pt", -) - -_VITB32_quickgelu = dict( - openai="https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt", - laion400m_e31="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e31-d867053b.pt", - laion400m_e32="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_32-quickgelu-laion400m_e32-46683a32.pt", -) - -_VITB16 = dict( - 
openai="https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt", - laion400m_e31="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_16-laion400m_e31-00efa78f.pt", - laion400m_e32="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_16-laion400m_e32-55e67d44.pt", -) - -_VITB16_PLUS_240 = dict( - laion400m_e31="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_16_plus_240-laion400m_e31-8fb26589.pt", - laion400m_e32="https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_b_16_plus_240-laion400m_e32-699c4b84.pt", -) - -_VITL14 = dict( - openai="https://openaipublic.azureedge.net/clip/models/b8cca3fd41ae0c99ba7e8951adf17d267cdb84cd88be6f7c2e0eca1737a03836/ViT-L-14.pt", - laion400m_e31='https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_l_14-laion400m_e31-69988bb6.pt', - laion400m_e32='https://github.com/mlfoundations/open_clip/releases/download/v0.2-weights/vit_l_14-laion400m_e32-3d133497.pt', -) - -_VITL14_336 = dict( - openai="https://openaipublic.azureedge.net/clip/models/3035c92b350959924f9f00213499208652fc7ea050643e8b385c2dac08641f02/ViT-L-14-336px.pt" -) - -_PRETRAINED = { - "RN50": _RN50, - "RN50-quickgelu": _RN50_quickgelu, - "RN101": _RN101, - "RN101-quickgelu": _RN101_quickgelu, - "RN50x4": _RN50x4, - "RN50x16": _RN50x16, - "RN50x64": _RN50x64, - "ViT-B-32": _VITB32, - "ViT-B-32-quickgelu": _VITB32_quickgelu, - "ViT-B-16": _VITB16, - "ViT-B-16-plus-240": _VITB16_PLUS_240, - "ViT-L-14": _VITL14, - "ViT-L-14-336": _VITL14_336, -} - - -def list_pretrained(as_str: bool = False): - """ returns list of pretrained models - Returns a tuple (model_name, pretrain_tag) by default or 'name:tag' if as_str == True - """ - return [':'.join([k, t]) if as_str else (k, t) for k in _PRETRAINED.keys() for t in _PRETRAINED[k].keys()] - - -def list_pretrained_tag_models(tag: str): - """ return all models having the specified pretrain tag """ - models = [] - for k in _PRETRAINED.keys(): - if tag in _PRETRAINED[k]: - models.append(k) - return models - - -def list_pretrained_model_tags(model: str): - """ return all pretrain tags for the specified model architecture """ - tags = [] - if model in _PRETRAINED: - tags.extend(_PRETRAINED[model].keys()) - return tags - - -def get_pretrained_url(model: str, tag: str): - if model not in _PRETRAINED: - return '' - model_pretrained = _PRETRAINED[model] - tag = tag.lower() - if tag not in model_pretrained: - return '' - return model_pretrained[tag] - - -def download_pretrained(url: str, root: str = os.path.expanduser("~/.cache/clip")): - os.makedirs(root, exist_ok=True) - filename = os.path.basename(url) - - if 'openaipublic' in url: - expected_sha256 = url.split("/")[-2] - else: - expected_sha256 = '' - - download_target = os.path.join(root, filename) - - if os.path.exists(download_target) and not os.path.isfile(download_target): - raise RuntimeError(f"{download_target} exists and is not a regular file") - - if os.path.isfile(download_target): - if expected_sha256: - if hashlib.sha256(open(download_target, "rb").read()).hexdigest() == expected_sha256: - return download_target - else: - warnings.warn(f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file") - else: - return download_target - - with urllib.request.urlopen(url) as source, open(download_target, "wb") as output: - with 
tqdm(total=int(source.info().get("Content-Length")), ncols=80, unit='iB', unit_scale=True) as loop: - while True: - buffer = source.read(8192) - if not buffer: - break - - output.write(buffer) - loop.update(len(buffer)) - - if expected_sha256 and hashlib.sha256(open(download_target, "rb").read()).hexdigest() != expected_sha256: - raise RuntimeError(f"Model has been downloaded but the SHA256 checksum does not match") - - return download_target diff --git a/kosmos-g/open_clip/src/open_clip/timm_model.py b/kosmos-g/open_clip/src/open_clip/timm_model.py deleted file mode 100644 index 071dd148c..000000000 --- a/kosmos-g/open_clip/src/open_clip/timm_model.py +++ /dev/null @@ -1,106 +0,0 @@ -""" timm model adapter - -Wraps timm (https://github.com/rwightman/pytorch-image-models) models for use as a vision tower in CLIP model. -""" -from collections import OrderedDict - -import torch.nn as nn - -try: - import timm - from timm.models.layers import Mlp, to_2tuple - from timm.models.layers.attention_pool2d import RotAttentionPool2d - from timm.models.layers.attention_pool2d import AttentionPool2d as AbsAttentionPool2d -except ImportError as e: - timm = None - -from .utils import freeze_batch_norm_2d - - -class TimmModel(nn.Module): - """ timm model adapter - # FIXME this adapter is a work in progress, may change in ways that break weight compat - """ - - def __init__( - self, - model_name, - embed_dim, - image_size=224, - pool='avg', - proj='linear', - drop=0., - pretrained=False): - super().__init__() - if timm is None: - raise RuntimeError("Please `pip install timm` to use timm models.") - - self.image_size = to_2tuple(image_size) - self.trunk = timm.create_model(model_name, pretrained=pretrained) - feat_size = self.trunk.default_cfg.get('pool_size', None) - feature_ndim = 1 if not feat_size else 2 - if pool in ('abs_attn', 'rot_attn'): - assert feature_ndim == 2 - # if attn pooling used, remove both classifier and default pool - self.trunk.reset_classifier(0, global_pool='') - else: - # reset global pool if pool config set, otherwise leave as network default - reset_kwargs = dict(global_pool=pool) if pool else {} - self.trunk.reset_classifier(0, **reset_kwargs) - prev_chs = self.trunk.num_features - - head_layers = OrderedDict() - if pool == 'abs_attn': - head_layers['pool'] = AbsAttentionPool2d(prev_chs, feat_size=feat_size, out_features=embed_dim) - prev_chs = embed_dim - elif pool == 'rot_attn': - head_layers['pool'] = RotAttentionPool2d(prev_chs, out_features=embed_dim) - prev_chs = embed_dim - else: - assert proj, 'projection layer needed if non-attention pooling is used.'
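For reference, the `pretrained.py` helpers shown above are normally chained together to resolve a (model, tag) pair to a local checkpoint. A minimal sketch, assuming the vendored module layout `open_clip.pretrained`; weights land in `~/.cache/clip` by default, and openaipublic URLs are SHA256-verified from the URL path:

```python
from open_clip.pretrained import list_pretrained, get_pretrained_url, download_pretrained

# all (architecture, pretrain-tag) combinations with registered URLs
print(list_pretrained(as_str=True))  # e.g. ['RN50:openai', 'ViT-B-32:laion400m_e32', ...]

url = get_pretrained_url('ViT-B-32', 'openai')
if url:  # empty string means no weights registered for this pair
    checkpoint_path = download_pretrained(url)  # cached under ~/.cache/clip
    print(checkpoint_path)
```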
- - # NOTE attention pool ends with a projection layer, so proj should usually be set to '' if such pooling is used - if proj == 'linear': - head_layers['drop'] = nn.Dropout(drop) - head_layers['proj'] = nn.Linear(prev_chs, embed_dim) - elif proj == 'mlp': - head_layers['mlp'] = Mlp(prev_chs, 2 * embed_dim, embed_dim, drop=drop) - - self.head = nn.Sequential(head_layers) - - def lock(self, unlocked_groups=0, freeze_bn_stats=False): - """ lock modules - Args: - unlocked_groups (int): leave last n layer groups unlocked (default: 0) - """ - if not unlocked_groups: - # lock full model - for param in self.trunk.parameters(): - param.requires_grad = False - if freeze_bn_stats: - freeze_batch_norm_2d(self.trunk) - else: - # NOTE: partial freeze requires latest timm (master) branch and is subject to change - try: - # FIXME import here until API stable and in an official release - from timm.models.helpers import group_parameters, group_modules - except ImportError: - raise RuntimeError( - 'Please install latest timm `pip install git+https://github.com/rwightman/pytorch-image-models`') - matcher = self.trunk.group_matcher() - gparams = group_parameters(self.trunk, matcher) - max_layer_id = max(gparams.keys()) - max_layer_id = max_layer_id - unlocked_groups - for group_idx in range(max_layer_id + 1): - group = gparams[group_idx] - for param in group: - self.trunk.get_parameter(param).requires_grad = False - if freeze_bn_stats: - gmodules = group_modules(self.trunk, matcher, reverse=True) - gmodules = {k for k, v in gmodules.items() if v <= max_layer_id} - freeze_batch_norm_2d(self.trunk, gmodules) - - def forward(self, x): - x = self.trunk(x) - x = self.head(x) - return x diff --git a/kosmos-g/open_clip/src/open_clip/tokenizer.py b/kosmos-g/open_clip/src/open_clip/tokenizer.py deleted file mode 100644 index 08e6d6638..000000000 --- a/kosmos-g/open_clip/src/open_clip/tokenizer.py +++ /dev/null @@ -1,181 +0,0 @@ -""" CLIP tokenizer - -Copied from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI. -""" -import gzip -import html -import os -from functools import lru_cache -from typing import Union, List - -import ftfy -import regex as re -import torch - - -@lru_cache() -def default_bpe(): - return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz") - - -@lru_cache() -def bytes_to_unicode(): - """ - Returns a list of utf-8 bytes and a corresponding list of unicode strings. - The reversible bpe codes work on unicode strings. - This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. - When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. - This is a significant percentage of your normal, say, 32K bpe vocab. - To avoid that, we want lookup tables between utf-8 bytes and unicode strings. - This also avoids mapping to whitespace/control characters that the bpe code barfs on. - """ - bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1)) - cs = bs[:] - n = 0 - for b in range(2**8): - if b not in bs: - bs.append(b) - cs.append(2**8+n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -def get_pairs(word): - """Return set of symbol pairs in a word. - Word is represented as tuple of symbols (symbols being variable-length strings).
- """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -def basic_clean(text): - text = ftfy.fix_text(text) - text = html.unescape(html.unescape(text)) - return text.strip() - - -def whitespace_clean(text): - text = re.sub(r'\s+', ' ', text) - text = text.strip() - return text - - -class SimpleTokenizer(object): - def __init__(self, bpe_path: str = default_bpe(), special_tokens=None): - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - merges = gzip.open(bpe_path).read().decode("utf-8").split('\n') - merges = merges[1:49152-256-2+1] - merges = [tuple(merge.split()) for merge in merges] - vocab = list(bytes_to_unicode().values()) - vocab = vocab + [v+'</w>' for v in vocab] - for merge in merges: - vocab.append(''.join(merge)) - if not special_tokens: - special_tokens = ['<start_of_text>', '<end_of_text>'] - else: - special_tokens = ['<start_of_text>', '<end_of_text>'] + special_tokens - vocab.extend(special_tokens) - self.encoder = dict(zip(vocab, range(len(vocab)))) - self.decoder = {v: k for k, v in self.encoder.items()} - self.bpe_ranks = dict(zip(merges, range(len(merges)))) - self.cache = {t:t for t in special_tokens} - special = "|".join(special_tokens) - self.pat = re.compile(special + r"""|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE) - - self.vocab_size = len(self.encoder) - self.all_special_ids = [self.encoder[t] for t in special_tokens] - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token[:-1]) + ( token[-1] + '</w>',) - pairs = get_pairs(word) - - if not pairs: - return token+'</w>' - - while True: - bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf'))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - new_word.extend(word[i:j]) - i = j - except: - new_word.extend(word[i:]) - break - - if word[i] == first and i < len(word)-1 and word[i+1] == second: - new_word.append(first+second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = ' '.join(word) - self.cache[token] = word - return word - - def encode(self, text): - bpe_tokens = [] - text = whitespace_clean(basic_clean(text)).lower() - for token in re.findall(self.pat, text): - token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8')) - bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' ')) - return bpe_tokens - - def decode(self, tokens): - text = ''.join([self.decoder[token] for token in tokens]) - text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ') - return text - - -_tokenizer = SimpleTokenizer() - - -def tokenize(texts: Union[str, List[str]], context_length: int = 77) -> torch.LongTensor: - """ - Returns the tokenized representation of given input string(s) - - Parameters - ---------- - texts : Union[str, List[str]] - An input string or a list of input strings to tokenize - context_length : int - The context length to use; all CLIP models use 77 as the context length - - Returns - ------- - A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length] - """ - if isinstance(texts, str): - texts = [texts] - - sot_token = 
_tokenizer.encoder["<start_of_text>"] - eot_token = _tokenizer.encoder["<end_of_text>"] - all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts] - result = torch.zeros(len(all_tokens), context_length, dtype=torch.long) - - for i, tokens in enumerate(all_tokens): - if len(tokens) > context_length: - tokens = tokens[:context_length] # Truncate - tokens[-1] = eot_token - result[i, :len(tokens)] = torch.tensor(tokens) - - return result diff --git a/kosmos-g/open_clip/src/open_clip/transform.py b/kosmos-g/open_clip/src/open_clip/transform.py deleted file mode 100644 index ed5752758..000000000 --- a/kosmos-g/open_clip/src/open_clip/transform.py +++ /dev/null @@ -1,79 +0,0 @@ -from typing import Optional, Sequence, Tuple - -import torch -import torch.nn as nn -import torchvision.transforms.functional as F - - -from torchvision.transforms import Normalize, Compose, RandomResizedCrop, InterpolationMode, ToTensor, Resize, \ - CenterCrop - - -class ResizeMaxSize(nn.Module): - - def __init__(self, max_size, interpolation=InterpolationMode.BICUBIC, fn='max', fill=0): - super().__init__() - if not isinstance(max_size, int): - raise TypeError(f"Size should be int. Got {type(max_size)}") - self.max_size = max_size - self.interpolation = interpolation - self.fn = min if fn == 'min' else min - self.fill = fill - - def forward(self, img): - if isinstance(img, torch.Tensor): - height, width = img.shape[:2] - else: - width, height = img.size - scale = self.max_size / float(max(height, width)) - if scale != 1.0: - new_size = tuple(round(dim * scale) for dim in (height, width)) - img = F.resize(img, new_size, self.interpolation) - pad_h = self.max_size - new_size[0] - pad_w = self.max_size - new_size[1] - img = F.pad(img, padding=[pad_w//2, pad_h//2, pad_w - pad_w//2, pad_h - pad_h//2], fill=self.fill) - return img - - -def _convert_to_rgb(image): - return image.convert('RGB') - - -def image_transform( - image_size: int, - is_train: bool, - mean: Optional[Tuple[float, ...]] = None, - std: Optional[Tuple[float, ...]] = None, - resize_longest_max: bool = False, - fill_color: int = 0, -): - mean = mean or (0.48145466, 0.4578275, 0.40821073) # OpenAI dataset mean - std = std or (0.26862954, 0.26130258, 0.27577711) # OpenAI dataset std - if isinstance(image_size, (list, tuple)) and image_size[0] == image_size[1]: - # for square size, pass size as int so that Resize() uses aspect preserving shortest edge - image_size = image_size[0] - - normalize = Normalize(mean=mean, std=std) - if is_train: - return Compose([ - RandomResizedCrop(image_size, scale=(0.9, 1.0), interpolation=InterpolationMode.BICUBIC), - _convert_to_rgb, - ToTensor(), - normalize, - ]) - else: - if resize_longest_max: - transforms = [ - ResizeMaxSize(image_size, fill=fill_color) - ] - else: - transforms = [ - Resize(image_size, interpolation=InterpolationMode.BICUBIC), - CenterCrop(image_size), - ] - transforms.extend([ - _convert_to_rgb, - ToTensor(), - normalize, - ]) - return Compose(transforms) diff --git a/kosmos-g/open_clip/src/open_clip/utils.py b/kosmos-g/open_clip/src/open_clip/utils.py deleted file mode 100644 index 51e80c5e2..000000000 --- a/kosmos-g/open_clip/src/open_clip/utils.py +++ /dev/null @@ -1,60 +0,0 @@ -from itertools import repeat -import collections.abc - -from torch import nn as nn -from torchvision.ops.misc import FrozenBatchNorm2d - - -def freeze_batch_norm_2d(module, module_match={}, name=''): - """ - Converts all `BatchNorm2d` and `SyncBatchNorm` layers of provided module into 
`FrozenBatchNorm2d`. If `module` is - itself an instance of either `BatchNorm2d` or `SyncBatchNorm`, it is converted into `FrozenBatchNorm2d` and - returned. Otherwise, the module is walked recursively and submodules are converted in place. - - Args: - module (torch.nn.Module): Any PyTorch module. - module_match (dict): Dictionary of full module names to freeze (all if empty) - name (str): Full module name (prefix) - - Returns: - torch.nn.Module: Resulting module - - Inspired by https://github.com/pytorch/pytorch/blob/a5895f85be0f10212791145bfedc0261d364f103/torch/nn/modules/batchnorm.py#L762 - """ - res = module - is_match = True - if module_match: - is_match = name in module_match - if is_match and isinstance(module, (nn.modules.batchnorm.BatchNorm2d, nn.modules.batchnorm.SyncBatchNorm)): - res = FrozenBatchNorm2d(module.num_features) - res.num_features = module.num_features - res.affine = module.affine - if module.affine: - res.weight.data = module.weight.data.clone().detach() - res.bias.data = module.bias.data.clone().detach() - res.running_mean.data = module.running_mean.data - res.running_var.data = module.running_var.data - res.eps = module.eps - else: - for child_name, child in module.named_children(): - full_child_name = '.'.join([name, child_name]) if name else child_name - new_child = freeze_batch_norm_2d(child, module_match, full_child_name) - if new_child is not child: - res.add_module(child_name, new_child) - return res - - -# From PyTorch internals -def _ntuple(n): - def parse(x): - if isinstance(x, collections.abc.Iterable): - return x - return tuple(repeat(x, n)) - return parse - - -to_1tuple = _ntuple(1) -to_2tuple = _ntuple(2) -to_3tuple = _ntuple(3) -to_4tuple = _ntuple(4) -to_ntuple = lambda n, x: _ntuple(n)(x) diff --git a/kosmos-g/open_clip/src/open_clip/version.py b/kosmos-g/open_clip/src/open_clip/version.py deleted file mode 100644 index 19b4f1d60..000000000 --- a/kosmos-g/open_clip/src/open_clip/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = '1.3.0' diff --git a/kosmos-g/open_clip/src/training/.gitignore b/kosmos-g/open_clip/src/training/.gitignore deleted file mode 100644 index 333c1e910..000000000 --- a/kosmos-g/open_clip/src/training/.gitignore +++ /dev/null @@ -1 +0,0 @@ -logs/ diff --git a/kosmos-g/open_clip/src/training/__init__.py b/kosmos-g/open_clip/src/training/__init__.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/kosmos-g/open_clip/src/training/data.py b/kosmos-g/open_clip/src/training/data.py deleted file mode 100644 index 23ff21400..000000000 --- a/kosmos-g/open_clip/src/training/data.py +++ /dev/null @@ -1,458 +0,0 @@ -import ast -import json -import logging -import math -import os -import random -import sys -import time -from dataclasses import dataclass -from multiprocessing import Value - -import braceexpand -import numpy as np -import pandas as pd -import torch -import torchvision.datasets as datasets -import webdataset as wds -from PIL import Image -from torch.utils.data import Dataset, DataLoader, SubsetRandomSampler, IterableDataset, get_worker_info -from torch.utils.data.distributed import DistributedSampler -from webdataset.filters import _shuffle -from webdataset.tariterators import base_plus_ext, url_opener, tar_file_expander, valid_sample - -try: - import horovod.torch as hvd -except ImportError: - hvd = None - -from open_clip import tokenize - - -class CsvDataset(Dataset): - def __init__(self, input_filename, transforms, img_key, caption_key, sep="\t"): - logging.debug(f'Loading csv data from 
{input_filename}.') - df = pd.read_csv(input_filename, sep=sep) - - self.images = df[img_key].tolist() - self.captions = df[caption_key].tolist() - self.transforms = transforms - logging.debug('Done loading data.') - - def __len__(self): - return len(self.captions) - - def __getitem__(self, idx): - images = self.transforms(Image.open(str(self.images[idx]))) - texts = tokenize([str(self.captions[idx])])[0] - return images, texts - - -class SharedEpoch: - def __init__(self, epoch: int = 0): - self.shared_epoch = Value('i', epoch) - - def set_value(self, epoch): - self.shared_epoch.value = epoch - - def get_value(self): - return self.shared_epoch.value - - -@dataclass -class DataInfo: - dataloader: DataLoader - sampler: DistributedSampler = None - shared_epoch: SharedEpoch = None - - def set_epoch(self, epoch): - if self.shared_epoch is not None: - self.shared_epoch.set_value(epoch) - if self.sampler is not None and isinstance(self.sampler, DistributedSampler): - self.sampler.set_epoch(epoch) - - -def preprocess_txt(text): - return tokenize([str(text)])[0] - - -def get_dataset_size(shards): - shards_list = list(braceexpand.braceexpand(shards)) - dir_path = os.path.dirname(shards) - sizes_filename = os.path.join(dir_path, 'sizes.json') - len_filename = os.path.join(dir_path, '__len__') - if os.path.exists(sizes_filename): - sizes = json.load(open(sizes_filename, 'r')) - total_size = sum([int(sizes[os.path.basename(shard)]) for shard in shards_list]) - elif os.path.exists(len_filename): - # FIXME this used to be eval(open(...)) but that seemed rather unsafe - total_size = ast.literal_eval(open(len_filename, 'r').read()) - else: - total_size = None # num samples undefined - # some common dataset sizes (at time of authors last download) - # CC3M (train): 2905954 - # CC12M: 10968539 - # LAION-400M: 407332084 - # LAION-2B (english): 2170337258 - num_shards = len(shards_list) - return total_size, num_shards - - -def get_imagenet(args, preprocess_fns, split): - assert split in ["train", "val", "v2"] - is_train = split == "train" - preprocess_train, preprocess_val = preprocess_fns - - if split == "v2": - from imagenetv2_pytorch import ImageNetV2Dataset - dataset = ImageNetV2Dataset(location=args.imagenet_v2, transform=preprocess_val) - else: - if is_train: - data_path = args.imagenet_train - preprocess_fn = preprocess_train - else: - data_path = args.imagenet_val - preprocess_fn = preprocess_val - assert data_path - - dataset = datasets.ImageFolder(data_path, transform=preprocess_fn) - - if is_train: - idxs = np.zeros(len(dataset.targets)) - target_array = np.array(dataset.targets) - k = 50 - for c in range(1000): - m = target_array == c - n = len(idxs[m]) - arr = np.zeros(n) - arr[:k] = 1 - np.random.shuffle(arr) - idxs[m] = arr - - idxs = idxs.astype('int') - sampler = SubsetRandomSampler(np.where(idxs)[0]) - else: - sampler = None - - dataloader = torch.utils.data.DataLoader( - dataset, - batch_size=args.batch_size, - num_workers=args.workers, - sampler=sampler, - ) - - return DataInfo(dataloader=dataloader, sampler=sampler) - - -def count_samples(dataloader): - os.environ["WDS_EPOCH"] = "0" - n_elements, n_batches = 0, 0 - for images, texts in dataloader: - n_batches += 1 - n_elements += len(images) - assert len(images) == len(texts) - return n_elements, n_batches - - -def filter_no_caption(sample): - return 'txt' in sample - - -def log_and_continue(exn): - """Call in an exception handler to ignore any exception, isssue a warning, and continue.""" - logging.warning(f'Handling webdataset error 
({repr(exn)}). Ignoring.') - return True - - -def group_by_keys_nothrow(data, keys=base_plus_ext, lcase=True, suffixes=None, handler=None): - """Return function over iterator that groups key, value pairs into samples. - - :param keys: function that splits the key into key and extension (base_plus_ext) - :param lcase: convert suffixes to lower case (Default value = True) - """ - current_sample = None - for filesample in data: - assert isinstance(filesample, dict) - fname, value = filesample["fname"], filesample["data"] - prefix, suffix = keys(fname) - if prefix is None: - continue - if lcase: - suffix = suffix.lower() - # FIXME webdataset version throws if suffix in current_sample, but we have a potential for - # this happening in the current LAION400m dataset if a tar ends with same prefix as the next - # begins, rare, but can happen since prefix aren't unique across tar files in that dataset - if current_sample is None or prefix != current_sample["__key__"] or suffix in current_sample: - if valid_sample(current_sample): - yield current_sample - current_sample = dict(__key__=prefix, __url__=filesample["__url__"]) - if suffixes is None or suffix in suffixes: - current_sample[suffix] = value - if valid_sample(current_sample): - yield current_sample - - -def tarfile_to_samples_nothrow(src, handler=log_and_continue): - # NOTE this is a re-impl of the webdataset impl with group_by_keys that doesn't throw - streams = url_opener(src, handler=handler) - files = tar_file_expander(streams, handler=handler) - samples = group_by_keys_nothrow(files, handler=handler) - return samples - - -def pytorch_worker_seed(): - """get dataloader worker seed from pytorch""" - worker_info = get_worker_info() - if worker_info is not None: - # favour the seed already created for pytorch dataloader workers if it exists - return worker_info.seed - # fallback to wds rank based seed - return wds.utils.pytorch_worker_seed() - - -_SHARD_SHUFFLE_SIZE = 2000 -_SHARD_SHUFFLE_INITIAL = 500 -_SAMPLE_SHUFFLE_SIZE = 5000 -_SAMPLE_SHUFFLE_INITIAL = 1000 - - -class detshuffle2(wds.PipelineStage): - def __init__( - self, - bufsize=1000, - initial=100, - seed=0, - epoch=-1, - ): - self.bufsize = bufsize - self.initial = initial - self.seed = seed - self.epoch = epoch - - def run(self, src): - if isinstance(self.epoch, SharedEpoch): - epoch = self.epoch.get_value() - else: - # NOTE: this is epoch tracking is problematic in a multiprocess (dataloader workers or train) - # situation as different workers may wrap at different times (or not at all). - self.epoch += 1 - epoch = self.epoch - rng = random.Random() - if self.seed < 0: - seed = pytorch_worker_seed() + epoch - else: - seed = self.seed + epoch - rng.seed(seed) - return _shuffle(src, self.bufsize, self.initial, rng) - - -class ResampledShards2(IterableDataset): - """An iterable dataset yielding a list of urls.""" - - def __init__( - self, - urls, - nshards=sys.maxsize, - worker_seed=None, - deterministic=False, - epoch=-1, - ): - """Sample shards from the shard list with replacement. 
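-        Editorial note (not in the original docstring): because draws are with
-        replacement, a given shard is missed in an epoch of n draws over k
-        shards with probability (1 - 1/k)**n; the deterministic per-epoch
-        seeding in __iter__ keeps those draws reproducible across dataloader
-        workers.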
- - :param urls: a list of URLs as a Python list or brace notation string - """ - super().__init__() - urls = wds.shardlists.expand_urls(urls) - self.urls = urls - assert isinstance(self.urls[0], str) - self.nshards = nshards - self.rng = random.Random() - self.worker_seed = pytorch_worker_seed if worker_seed is None else worker_seed - self.deterministic = deterministic - self.epoch = epoch - - def __iter__(self): - """Return an iterator over the shards.""" - if isinstance(self.epoch, SharedEpoch): - epoch = self.epoch.get_value() - else: - # NOTE: this is epoch tracking is problematic in a multiprocess (dataloader workers or train) - # situation as different workers may wrap at different times (or not at all). - self.epoch += 1 - epoch = self.epoch - if self.deterministic: - # reset seed w/ epoch if deterministic, worker seed should be deterministic due to arg.seed - self.rng.seed(self.worker_seed() + epoch) - for _ in range(self.nshards): - yield dict(url=self.rng.choice(self.urls)) - - -def get_wds_dataset(args, preprocess_img, is_train, epoch=0, floor=False): - input_shards = args.train_data if is_train else args.val_data - assert input_shards is not None - resampled = getattr(args, 'dataset_resampled', False) and is_train - - num_samples, num_shards = get_dataset_size(input_shards) - if not num_samples: - if is_train: - num_samples = args.train_num_samples - if not num_samples: - raise RuntimeError( - 'Currently, number of dataset samples must be specified for training dataset. ' - 'Please specify via `--train-num-samples` if no dataset length info present.') - else: - num_samples = args.val_num_samples or 0 # eval will just exhaust the iterator if not specified - - shared_epoch = SharedEpoch(epoch=epoch) # create a shared epoch store to sync epoch to dataloader worker proc - if resampled: - pipeline = [ResampledShards2(input_shards, deterministic=True, epoch=shared_epoch)] - else: - pipeline = [wds.SimpleShardList(input_shards)] - - # at this point we have an iterator over all the shards - if is_train: - if not resampled: - pipeline.extend([ - detshuffle2( - bufsize=_SHARD_SHUFFLE_SIZE, - initial=_SHARD_SHUFFLE_INITIAL, - seed=args.seed, - epoch=shared_epoch, - ), - wds.split_by_node, - wds.split_by_worker, - ]) - pipeline.extend([ - # at this point, we have an iterator over the shards assigned to each worker at each node - tarfile_to_samples_nothrow, # wds.tarfile_to_samples(handler=log_and_continue), - wds.shuffle( - bufsize=_SAMPLE_SHUFFLE_SIZE, - initial=_SAMPLE_SHUFFLE_INITIAL, - ), - ]) - else: - pipeline.extend([ - wds.split_by_worker, - # at this point, we have an iterator over the shards assigned to each worker - wds.tarfile_to_samples(handler=log_and_continue), - ]) - pipeline.extend([ - wds.select(filter_no_caption), - wds.decode("pilrgb", handler=log_and_continue), - wds.rename(image="jpg;png", text="txt"), - wds.map_dict(image=preprocess_img, text=preprocess_txt), - wds.to_tuple("image", "text"), - wds.batched(args.batch_size, partial=not is_train), - ]) - - dataset = wds.DataPipeline(*pipeline) - if is_train: - if not resampled: - assert num_shards >= args.workers * args.world_size, 'number of shards must be >= total workers' - # roll over and repeat a few samples to get same number of full batches on each node - round_fn = math.floor if floor else math.ceil - global_batch_size = args.batch_size * args.world_size - num_batches = round_fn(num_samples / global_batch_size) - num_workers = max(1, args.workers) - num_worker_batches = round_fn(num_batches / num_workers) # per 
dataloader worker - num_batches = num_worker_batches * num_workers - num_samples = num_batches * global_batch_size - dataset = dataset.with_epoch(num_worker_batches) # each worker is iterating over this - else: - # last batches are partial, eval is done on single (master) node - num_batches = math.ceil(num_samples / args.batch_size) - - dataloader = wds.WebLoader( - dataset, - batch_size=None, - shuffle=False, - num_workers=args.workers, - persistent_workers=True, - ) - - # FIXME not clear which approach is better, with_epoch before vs after dataloader? - # hoping to resolve via https://github.com/webdataset/webdataset/issues/169 - # if is_train: - # # roll over and repeat a few samples to get same number of full batches on each node - # global_batch_size = args.batch_size * args.world_size - # num_batches = math.ceil(num_samples / global_batch_size) - # num_workers = max(1, args.workers) - # num_batches = math.ceil(num_batches / num_workers) * num_workers - # num_samples = num_batches * global_batch_size - # dataloader = dataloader.with_epoch(num_batches) - # else: - # # last batches are partial, eval is done on single (master) node - # num_batches = math.ceil(num_samples / args.batch_size) - - # add meta-data to dataloader instance for convenience - dataloader.num_batches = num_batches - dataloader.num_samples = num_samples - - return DataInfo(dataloader=dataloader, shared_epoch=shared_epoch) - - -def get_csv_dataset(args, preprocess_fn, is_train, epoch=0): - input_filename = args.train_data if is_train else args.val_data - assert input_filename - dataset = CsvDataset( - input_filename, - preprocess_fn, - img_key=args.csv_img_key, - caption_key=args.csv_caption_key, - sep=args.csv_separator) - num_samples = len(dataset) - sampler = DistributedSampler(dataset) if args.distributed and is_train else None - shuffle = is_train and sampler is None - - dataloader = DataLoader( - dataset, - batch_size=args.batch_size, - shuffle=shuffle, - num_workers=args.workers, - pin_memory=True, - sampler=sampler, - drop_last=is_train, - ) - dataloader.num_samples = num_samples - dataloader.num_batches = len(dataloader) - - return DataInfo(dataloader, sampler) - - -def get_dataset_fn(data_path, dataset_type): - if dataset_type == "webdataset": - return get_wds_dataset - elif dataset_type == "csv": - return get_csv_dataset - elif dataset_type == "auto": - ext = data_path.split('.')[-1] - if ext in ['csv', 'tsv']: - return get_csv_dataset - elif ext in ['tar']: - return get_wds_dataset - else: - raise ValueError( - f"Tried to figure out dataset type, but failed for extention {ext}.") - else: - raise ValueError(f"Unsupported dataset type: {dataset_type}") - - -def get_data(args, preprocess_fns, epoch=0): - preprocess_train, preprocess_val = preprocess_fns - data = {} - - if args.train_data: - data["train"] = get_dataset_fn(args.train_data, args.dataset_type)( - args, preprocess_train, is_train=True, epoch=epoch) - - if args.val_data: - data["val"] = get_dataset_fn(args.val_data, args.dataset_type)( - args, preprocess_val, is_train=False) - - if args.imagenet_val is not None: - data["imagenet-val"] = get_imagenet(args, preprocess_fns, "val") - - if args.imagenet_v2 is not None: - data["imagenet-v2"] = get_imagenet(args, preprocess_fns, "v2") - - return data diff --git a/kosmos-g/open_clip/src/training/distributed.py b/kosmos-g/open_clip/src/training/distributed.py deleted file mode 100644 index 27bdd9b11..000000000 --- a/kosmos-g/open_clip/src/training/distributed.py +++ /dev/null @@ -1,113 +0,0 @@ -import os - 
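-# Editorial sketch (not part of the original file): the helpers below resolve
-# rank / world size from launcher-specific environment variables. A minimal,
-# hypothetical illustration of that lookup pattern:
-#
-#     def _first_env_int(names, default=0):
-#         for name in names:
-#             if name in os.environ:
-#                 return int(os.environ[name])
-#         return default
-#
-#     global_rank = _first_env_int(('RANK', 'PMI_RANK', 'SLURM_PROCID',
-#                                   'OMPI_COMM_WORLD_RANK'))
-#
-# world_info_from_env below applies this exact fallback order for the local
-# rank, the global rank, and the world size.
-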
-import torch - -try: - import horovod.torch as hvd -except ImportError: - hvd = None - - -def is_global_master(args): - return args.rank == 0 - - -def is_local_master(args): - return args.local_rank == 0 - - -def is_master(args, local=False): - return is_local_master(args) if local else is_global_master(args) - - -def is_using_horovod(): - # NOTE w/ horovod run, OMPI vars should be set, but w/ SLURM PMI vars will be set - # Differentiating between horovod and DDP use via SLURM may not be possible, so horovod arg still required... - ompi_vars = ["OMPI_COMM_WORLD_RANK", "OMPI_COMM_WORLD_SIZE"] - pmi_vars = ["PMI_RANK", "PMI_SIZE"] - if all([var in os.environ for var in ompi_vars]) or all([var in os.environ for var in pmi_vars]): - return True - else: - return False - - -def is_using_distributed(): - if 'WORLD_SIZE' in os.environ: - return int(os.environ['WORLD_SIZE']) > 1 - if 'SLURM_NTASKS' in os.environ: - return int(os.environ['SLURM_NTASKS']) > 1 - return False - - -def world_info_from_env(): - local_rank = 0 - for v in ('LOCAL_RANK', 'MPI_LOCALRANKID', 'SLURM_LOCALID', 'OMPI_COMM_WORLD_LOCAL_RANK'): - if v in os.environ: - local_rank = int(os.environ[v]) - break - global_rank = 0 - for v in ('RANK', 'PMI_RANK', 'SLURM_PROCID', 'OMPI_COMM_WORLD_RANK'): - if v in os.environ: - global_rank = int(os.environ[v]) - break - world_size = 1 - for v in ('WORLD_SIZE', 'PMI_SIZE', 'SLURM_NTASKS', 'OMPI_COMM_WORLD_SIZE'): - if v in os.environ: - world_size = int(os.environ[v]) - break - - return local_rank, global_rank, world_size - - -def init_distributed_device(args): - # Distributed training = training on more than one GPU. - # Works in both single and multi-node scenarios. - args.distributed = False - args.world_size = 1 - args.rank = 0 # global rank - args.local_rank = 0 - if args.horovod: - assert hvd is not None, "Horovod is not installed" - hvd.init() - args.local_rank = int(hvd.local_rank()) - args.rank = hvd.rank() - args.world_size = hvd.size() - args.distributed = True - os.environ['LOCAL_RANK'] = str(args.local_rank) - os.environ['RANK'] = str(args.rank) - os.environ['WORLD_SIZE'] = str(args.world_size) - elif is_using_distributed(): - if 'SLURM_PROCID' in os.environ: - # DDP via SLURM - args.local_rank, args.rank, args.world_size = world_info_from_env() - # SLURM var -> torch.distributed vars in case needed - os.environ['LOCAL_RANK'] = str(args.local_rank) - os.environ['RANK'] = str(args.rank) - os.environ['WORLD_SIZE'] = str(args.world_size) - torch.distributed.init_process_group( - backend=args.dist_backend, - init_method=args.dist_url, - world_size=args.world_size, - rank=args.rank, - ) - else: - # DDP via torchrun, torch.distributed.launch - args.local_rank, _, _ = world_info_from_env() - torch.distributed.init_process_group( - backend=args.dist_backend, - init_method=args.dist_url) - args.world_size = torch.distributed.get_world_size() - args.rank = torch.distributed.get_rank() - args.distributed = True - - if torch.cuda.is_available(): - if args.distributed and not args.no_set_device_rank: - device = 'cuda:%d' % args.local_rank - else: - device = 'cuda:0' - torch.cuda.set_device(device) - else: - device = 'cpu' - args.device = device - device = torch.device(device) - return device diff --git a/kosmos-g/open_clip/src/training/imagenet_zeroshot_data.py b/kosmos-g/open_clip/src/training/imagenet_zeroshot_data.py deleted file mode 100644 index 27abd8bf2..000000000 --- a/kosmos-g/open_clip/src/training/imagenet_zeroshot_data.py +++ /dev/null @@ -1,254 +0,0 @@ - - -imagenet_classnames 
= ["tench", "goldfish", "great white shark", "tiger shark", "hammerhead shark", "electric ray", - "stingray", "rooster", "hen", "ostrich", "brambling", "goldfinch", "house finch", "junco", - "indigo bunting", "American robin", "bulbul", "jay", "magpie", "chickadee", "American dipper", - "kite (bird of prey)", "bald eagle", "vulture", "great grey owl", "fire salamander", - "smooth newt", "newt", "spotted salamander", "axolotl", "American bullfrog", "tree frog", - "tailed frog", "loggerhead sea turtle", "leatherback sea turtle", "mud turtle", "terrapin", - "box turtle", "banded gecko", "green iguana", "Carolina anole", - "desert grassland whiptail lizard", "agama", "frilled-necked lizard", "alligator lizard", - "Gila monster", "European green lizard", "chameleon", "Komodo dragon", "Nile crocodile", - "American alligator", "triceratops", "worm snake", "ring-necked snake", - "eastern hog-nosed snake", "smooth green snake", "kingsnake", "garter snake", "water snake", - "vine snake", "night snake", "boa constrictor", "African rock python", "Indian cobra", - "green mamba", "sea snake", "Saharan horned viper", "eastern diamondback rattlesnake", - "sidewinder rattlesnake", "trilobite", "harvestman", "scorpion", "yellow garden spider", - "barn spider", "European garden spider", "southern black widow", "tarantula", "wolf spider", - "tick", "centipede", "black grouse", "ptarmigan", "ruffed grouse", "prairie grouse", "peafowl", - "quail", "partridge", "african grey parrot", "macaw", "sulphur-crested cockatoo", "lorikeet", - "coucal", "bee eater", "hornbill", "hummingbird", "jacamar", "toucan", "duck", - "red-breasted merganser", "goose", "black swan", "tusker", "echidna", "platypus", "wallaby", - "koala", "wombat", "jellyfish", "sea anemone", "brain coral", "flatworm", "nematode", "conch", - "snail", "slug", "sea slug", "chiton", "chambered nautilus", "Dungeness crab", "rock crab", - "fiddler crab", "red king crab", "American lobster", "spiny lobster", "crayfish", "hermit crab", - "isopod", "white stork", "black stork", "spoonbill", "flamingo", "little blue heron", - "great egret", "bittern bird", "crane bird", "limpkin", "common gallinule", "American coot", - "bustard", "ruddy turnstone", "dunlin", "common redshank", "dowitcher", "oystercatcher", - "pelican", "king penguin", "albatross", "grey whale", "killer whale", "dugong", "sea lion", - "Chihuahua", "Japanese Chin", "Maltese", "Pekingese", "Shih Tzu", "King Charles Spaniel", - "Papillon", "toy terrier", "Rhodesian Ridgeback", "Afghan Hound", "Basset Hound", "Beagle", - "Bloodhound", "Bluetick Coonhound", "Black and Tan Coonhound", "Treeing Walker Coonhound", - "English foxhound", "Redbone Coonhound", "borzoi", "Irish Wolfhound", "Italian Greyhound", - "Whippet", "Ibizan Hound", "Norwegian Elkhound", "Otterhound", "Saluki", "Scottish Deerhound", - "Weimaraner", "Staffordshire Bull Terrier", "American Staffordshire Terrier", - "Bedlington Terrier", "Border Terrier", "Kerry Blue Terrier", "Irish Terrier", - "Norfolk Terrier", "Norwich Terrier", "Yorkshire Terrier", "Wire Fox Terrier", - "Lakeland Terrier", "Sealyham Terrier", "Airedale Terrier", "Cairn Terrier", - "Australian Terrier", "Dandie Dinmont Terrier", "Boston Terrier", "Miniature Schnauzer", - "Giant Schnauzer", "Standard Schnauzer", "Scottish Terrier", "Tibetan Terrier", - "Australian Silky Terrier", "Soft-coated Wheaten Terrier", "West Highland White Terrier", - "Lhasa Apso", "Flat-Coated Retriever", "Curly-coated Retriever", "Golden Retriever", - "Labrador Retriever", "Chesapeake Bay 
Retriever", "German Shorthaired Pointer", "Vizsla", - "English Setter", "Irish Setter", "Gordon Setter", "Brittany dog", "Clumber Spaniel", - "English Springer Spaniel", "Welsh Springer Spaniel", "Cocker Spaniel", "Sussex Spaniel", - "Irish Water Spaniel", "Kuvasz", "Schipperke", "Groenendael dog", "Malinois", "Briard", - "Australian Kelpie", "Komondor", "Old English Sheepdog", "Shetland Sheepdog", "collie", - "Border Collie", "Bouvier des Flandres dog", "Rottweiler", "German Shepherd Dog", "Dobermann", - "Miniature Pinscher", "Greater Swiss Mountain Dog", "Bernese Mountain Dog", - "Appenzeller Sennenhund", "Entlebucher Sennenhund", "Boxer", "Bullmastiff", "Tibetan Mastiff", - "French Bulldog", "Great Dane", "St. Bernard", "husky", "Alaskan Malamute", "Siberian Husky", - "Dalmatian", "Affenpinscher", "Basenji", "pug", "Leonberger", "Newfoundland dog", - "Great Pyrenees dog", "Samoyed", "Pomeranian", "Chow Chow", "Keeshond", "brussels griffon", - "Pembroke Welsh Corgi", "Cardigan Welsh Corgi", "Toy Poodle", "Miniature Poodle", - "Standard Poodle", "Mexican hairless dog (xoloitzcuintli)", "grey wolf", "Alaskan tundra wolf", - "red wolf or maned wolf", "coyote", "dingo", "dhole", "African wild dog", "hyena", "red fox", - "kit fox", "Arctic fox", "grey fox", "tabby cat", "tiger cat", "Persian cat", "Siamese cat", - "Egyptian Mau", "cougar", "lynx", "leopard", "snow leopard", "jaguar", "lion", "tiger", - "cheetah", "brown bear", "American black bear", "polar bear", "sloth bear", "mongoose", - "meerkat", "tiger beetle", "ladybug", "ground beetle", "longhorn beetle", "leaf beetle", - "dung beetle", "rhinoceros beetle", "weevil", "fly", "bee", "ant", "grasshopper", - "cricket insect", "stick insect", "cockroach", "praying mantis", "cicada", "leafhopper", - "lacewing", "dragonfly", "damselfly", "red admiral butterfly", "ringlet butterfly", - "monarch butterfly", "small white butterfly", "sulphur butterfly", "gossamer-winged butterfly", - "starfish", "sea urchin", "sea cucumber", "cottontail rabbit", "hare", "Angora rabbit", - "hamster", "porcupine", "fox squirrel", "marmot", "beaver", "guinea pig", "common sorrel horse", - "zebra", "pig", "wild boar", "warthog", "hippopotamus", "ox", "water buffalo", "bison", - "ram (adult male sheep)", "bighorn sheep", "Alpine ibex", "hartebeest", "impala (antelope)", - "gazelle", "arabian camel", "llama", "weasel", "mink", "European polecat", - "black-footed ferret", "otter", "skunk", "badger", "armadillo", "three-toed sloth", "orangutan", - "gorilla", "chimpanzee", "gibbon", "siamang", "guenon", "patas monkey", "baboon", "macaque", - "langur", "black-and-white colobus", "proboscis monkey", "marmoset", "white-headed capuchin", - "howler monkey", "titi monkey", "Geoffroy's spider monkey", "common squirrel monkey", - "ring-tailed lemur", "indri", "Asian elephant", "African bush elephant", "red panda", - "giant panda", "snoek fish", "eel", "silver salmon", "rock beauty fish", "clownfish", - "sturgeon", "gar fish", "lionfish", "pufferfish", "abacus", "abaya", "academic gown", - "accordion", "acoustic guitar", "aircraft carrier", "airliner", "airship", "altar", "ambulance", - "amphibious vehicle", "analog clock", "apiary", "apron", "trash can", "assault rifle", - "backpack", "bakery", "balance beam", "balloon", "ballpoint pen", "Band-Aid", "banjo", - "baluster / handrail", "barbell", "barber chair", "barbershop", "barn", "barometer", "barrel", - "wheelbarrow", "baseball", "basketball", "bassinet", "bassoon", "swimming cap", "bath towel", - "bathtub", "station wagon", 
"lighthouse", "beaker", "military hat (bearskin or shako)", - "beer bottle", "beer glass", "bell tower", "baby bib", "tandem bicycle", "bikini", - "ring binder", "binoculars", "birdhouse", "boathouse", "bobsleigh", "bolo tie", "poke bonnet", - "bookcase", "bookstore", "bottle cap", "hunting bow", "bow tie", "brass memorial plaque", "bra", - "breakwater", "breastplate", "broom", "bucket", "buckle", "bulletproof vest", - "high-speed train", "butcher shop", "taxicab", "cauldron", "candle", "cannon", "canoe", - "can opener", "cardigan", "car mirror", "carousel", "tool kit", "cardboard box / carton", - "car wheel", "automated teller machine", "cassette", "cassette player", "castle", "catamaran", - "CD player", "cello", "mobile phone", "chain", "chain-link fence", "chain mail", "chainsaw", - "storage chest", "chiffonier", "bell or wind chime", "china cabinet", "Christmas stocking", - "church", "movie theater", "cleaver", "cliff dwelling", "cloak", "clogs", "cocktail shaker", - "coffee mug", "coffeemaker", "spiral or coil", "combination lock", "computer keyboard", - "candy store", "container ship", "convertible", "corkscrew", "cornet", "cowboy boot", - "cowboy hat", "cradle", "construction crane", "crash helmet", "crate", "infant bed", - "Crock Pot", "croquet ball", "crutch", "cuirass", "dam", "desk", "desktop computer", - "rotary dial telephone", "diaper", "digital clock", "digital watch", "dining table", - "dishcloth", "dishwasher", "disc brake", "dock", "dog sled", "dome", "doormat", "drilling rig", - "drum", "drumstick", "dumbbell", "Dutch oven", "electric fan", "electric guitar", - "electric locomotive", "entertainment center", "envelope", "espresso machine", "face powder", - "feather boa", "filing cabinet", "fireboat", "fire truck", "fire screen", "flagpole", "flute", - "folding chair", "football helmet", "forklift", "fountain", "fountain pen", "four-poster bed", - "freight car", "French horn", "frying pan", "fur coat", "garbage truck", - "gas mask or respirator", "gas pump", "goblet", "go-kart", "golf ball", "golf cart", "gondola", - "gong", "gown", "grand piano", "greenhouse", "radiator grille", "grocery store", "guillotine", - "hair clip", "hair spray", "half-track", "hammer", "hamper", "hair dryer", "hand-held computer", - "handkerchief", "hard disk drive", "harmonica", "harp", "combine harvester", "hatchet", - "holster", "home theater", "honeycomb", "hook", "hoop skirt", "gymnastic horizontal bar", - "horse-drawn vehicle", "hourglass", "iPod", "clothes iron", "carved pumpkin", "jeans", "jeep", - "T-shirt", "jigsaw puzzle", "rickshaw", "joystick", "kimono", "knee pad", "knot", "lab coat", - "ladle", "lampshade", "laptop computer", "lawn mower", "lens cap", "letter opener", "library", - "lifeboat", "lighter", "limousine", "ocean liner", "lipstick", "slip-on shoe", "lotion", - "music speaker", "loupe magnifying glass", "sawmill", "magnetic compass", "messenger bag", - "mailbox", "tights", "one-piece bathing suit", "manhole cover", "maraca", "marimba", "mask", - "matchstick", "maypole", "maze", "measuring cup", "medicine cabinet", "megalith", "microphone", - "microwave oven", "military uniform", "milk can", "minibus", "miniskirt", "minivan", "missile", - "mitten", "mixing bowl", "mobile home", "ford model t", "modem", "monastery", "monitor", - "moped", "mortar and pestle", "graduation cap", "mosque", "mosquito net", "vespa", - "mountain bike", "tent", "computer mouse", "mousetrap", "moving van", "muzzle", "metal nail", - "neck brace", "necklace", "baby pacifier", "notebook computer", 
"obelisk", "oboe", "ocarina", - "odometer", "oil filter", "pipe organ", "oscilloscope", "overskirt", "bullock cart", - "oxygen mask", "product packet / packaging", "paddle", "paddle wheel", "padlock", "paintbrush", - "pajamas", "palace", "pan flute", "paper towel", "parachute", "parallel bars", "park bench", - "parking meter", "railroad car", "patio", "payphone", "pedestal", "pencil case", - "pencil sharpener", "perfume", "Petri dish", "photocopier", "plectrum", "Pickelhaube", - "picket fence", "pickup truck", "pier", "piggy bank", "pill bottle", "pillow", "ping-pong ball", - "pinwheel", "pirate ship", "drink pitcher", "block plane", "planetarium", "plastic bag", - "plate rack", "farm plow", "plunger", "Polaroid camera", "pole", "police van", "poncho", - "pool table", "soda bottle", "plant pot", "potter's wheel", "power drill", "prayer rug", - "printer", "prison", "missile", "projector", "hockey puck", "punching bag", "purse", "quill", - "quilt", "race car", "racket", "radiator", "radio", "radio telescope", "rain barrel", - "recreational vehicle", "fishing casting reel", "reflex camera", "refrigerator", - "remote control", "restaurant", "revolver", "rifle", "rocking chair", "rotisserie", "eraser", - "rugby ball", "ruler measuring stick", "sneaker", "safe", "safety pin", "salt shaker", "sandal", - "sarong", "saxophone", "scabbard", "weighing scale", "school bus", "schooner", "scoreboard", - "CRT monitor", "screw", "screwdriver", "seat belt", "sewing machine", "shield", "shoe store", - "shoji screen / room divider", "shopping basket", "shopping cart", "shovel", "shower cap", - "shower curtain", "ski", "balaclava ski mask", "sleeping bag", "slide rule", "sliding door", - "slot machine", "snorkel", "snowmobile", "snowplow", "soap dispenser", "soccer ball", "sock", - "solar thermal collector", "sombrero", "soup bowl", "keyboard space bar", "space heater", - "space shuttle", "spatula", "motorboat", "spider web", "spindle", "sports car", "spotlight", - "stage", "steam locomotive", "through arch bridge", "steel drum", "stethoscope", "scarf", - "stone wall", "stopwatch", "stove", "strainer", "tram", "stretcher", "couch", "stupa", - "submarine", "suit", "sundial", "sunglasses", "sunglasses", "sunscreen", "suspension bridge", - "mop", "sweatshirt", "swim trunks / shorts", "swing", "electrical switch", "syringe", - "table lamp", "tank", "tape player", "teapot", "teddy bear", "television", "tennis ball", - "thatched roof", "front curtain", "thimble", "threshing machine", "throne", "tile roof", - "toaster", "tobacco shop", "toilet seat", "torch", "totem pole", "tow truck", "toy store", - "tractor", "semi-trailer truck", "tray", "trench coat", "tricycle", "trimaran", "tripod", - "triumphal arch", "trolleybus", "trombone", "hot tub", "turnstile", "typewriter keyboard", - "umbrella", "unicycle", "upright piano", "vacuum cleaner", "vase", "vaulted or arched ceiling", - "velvet fabric", "vending machine", "vestment", "viaduct", "violin", "volleyball", - "waffle iron", "wall clock", "wallet", "wardrobe", "military aircraft", "sink", - "washing machine", "water bottle", "water jug", "water tower", "whiskey jug", "whistle", - "hair wig", "window screen", "window shade", "Windsor tie", "wine bottle", "airplane wing", - "wok", "wooden spoon", "wool", "split-rail fence", "shipwreck", "sailboat", "yurt", "website", - "comic book", "crossword", "traffic or street sign", "traffic light", "dust jacket", "menu", - "plate", "guacamole", "consomme", "hot pot", "trifle", "ice cream", "popsicle", "baguette", - "bagel", 
"pretzel", "cheeseburger", "hot dog", "mashed potatoes", "cabbage", "broccoli", - "cauliflower", "zucchini", "spaghetti squash", "acorn squash", "butternut squash", "cucumber", - "artichoke", "bell pepper", "cardoon", "mushroom", "Granny Smith apple", "strawberry", "orange", - "lemon", "fig", "pineapple", "banana", "jackfruit", "cherimoya (custard apple)", "pomegranate", - "hay", "carbonara", "chocolate syrup", "dough", "meatloaf", "pizza", "pot pie", "burrito", - "red wine", "espresso", "tea cup", "eggnog", "mountain", "bubble", "cliff", "coral reef", - "geyser", "lakeshore", "promontory", "sandbar", "beach", "valley", "volcano", "baseball player", - "bridegroom", "scuba diver", "rapeseed", "daisy", "yellow lady's slipper", "corn", "acorn", - "rose hip", "horse chestnut seed", "coral fungus", "agaric", "gyromitra", "stinkhorn mushroom", - "earth star fungus", "hen of the woods mushroom", "bolete", "corn cob", "toilet paper"] - - - - - -openai_imagenet_template = [ - lambda c: f'a bad photo of a {c}.', - lambda c: f'a photo of many {c}.', - lambda c: f'a sculpture of a {c}.', - lambda c: f'a photo of the hard to see {c}.', - lambda c: f'a low resolution photo of the {c}.', - lambda c: f'a rendering of a {c}.', - lambda c: f'graffiti of a {c}.', - lambda c: f'a bad photo of the {c}.', - lambda c: f'a cropped photo of the {c}.', - lambda c: f'a tattoo of a {c}.', - lambda c: f'the embroidered {c}.', - lambda c: f'a photo of a hard to see {c}.', - lambda c: f'a bright photo of a {c}.', - lambda c: f'a photo of a clean {c}.', - lambda c: f'a photo of a dirty {c}.', - lambda c: f'a dark photo of the {c}.', - lambda c: f'a drawing of a {c}.', - lambda c: f'a photo of my {c}.', - lambda c: f'the plastic {c}.', - lambda c: f'a photo of the cool {c}.', - lambda c: f'a close-up photo of a {c}.', - lambda c: f'a black and white photo of the {c}.', - lambda c: f'a painting of the {c}.', - lambda c: f'a painting of a {c}.', - lambda c: f'a pixelated photo of the {c}.', - lambda c: f'a sculpture of the {c}.', - lambda c: f'a bright photo of the {c}.', - lambda c: f'a cropped photo of a {c}.', - lambda c: f'a plastic {c}.', - lambda c: f'a photo of the dirty {c}.', - lambda c: f'a jpeg corrupted photo of a {c}.', - lambda c: f'a blurry photo of the {c}.', - lambda c: f'a photo of the {c}.', - lambda c: f'a good photo of the {c}.', - lambda c: f'a rendering of the {c}.', - lambda c: f'a {c} in a video game.', - lambda c: f'a photo of one {c}.', - lambda c: f'a doodle of a {c}.', - lambda c: f'a close-up photo of the {c}.', - lambda c: f'a photo of a {c}.', - lambda c: f'the origami {c}.', - lambda c: f'the {c} in a video game.', - lambda c: f'a sketch of a {c}.', - lambda c: f'a doodle of the {c}.', - lambda c: f'a origami {c}.', - lambda c: f'a low resolution photo of a {c}.', - lambda c: f'the toy {c}.', - lambda c: f'a rendition of the {c}.', - lambda c: f'a photo of the clean {c}.', - lambda c: f'a photo of a large {c}.', - lambda c: f'a rendition of a {c}.', - lambda c: f'a photo of a nice {c}.', - lambda c: f'a photo of a weird {c}.', - lambda c: f'a blurry photo of a {c}.', - lambda c: f'a cartoon {c}.', - lambda c: f'art of a {c}.', - lambda c: f'a sketch of the {c}.', - lambda c: f'a embroidered {c}.', - lambda c: f'a pixelated photo of a {c}.', - lambda c: f'itap of the {c}.', - lambda c: f'a jpeg corrupted photo of the {c}.', - lambda c: f'a good photo of a {c}.', - lambda c: f'a plushie {c}.', - lambda c: f'a photo of the nice {c}.', - lambda c: f'a photo of the small {c}.', - lambda c: f'a 
photo of the weird {c}.', - lambda c: f'the cartoon {c}.', - lambda c: f'art of the {c}.', - lambda c: f'a drawing of the {c}.', - lambda c: f'a photo of the large {c}.', - lambda c: f'a black and white photo of a {c}.', - lambda c: f'the plushie {c}.', - lambda c: f'a dark photo of a {c}.', - lambda c: f'itap of a {c}.', - lambda c: f'graffiti of the {c}.', - lambda c: f'a toy {c}.', - lambda c: f'itap of my {c}.', - lambda c: f'a photo of a cool {c}.', - lambda c: f'a photo of a small {c}.', - lambda c: f'a tattoo of the {c}.', -] diff --git a/kosmos-g/open_clip/src/training/logger.py b/kosmos-g/open_clip/src/training/logger.py deleted file mode 100644 index 6d9abed92..000000000 --- a/kosmos-g/open_clip/src/training/logger.py +++ /dev/null @@ -1,26 +0,0 @@ -import logging - - -def setup_logging(log_file, level, include_host=False): - if include_host: - import socket - hostname = socket.gethostname() - formatter = logging.Formatter( - f'%(asctime)s | {hostname} | %(levelname)s | %(message)s', datefmt='%Y-%m-%d,%H:%M:%S') - else: - formatter = logging.Formatter('%(asctime)s | %(levelname)s | %(message)s', datefmt='%Y-%m-%d,%H:%M:%S') - - logging.root.setLevel(level) - loggers = [logging.getLogger(name) for name in logging.root.manager.loggerDict] - for logger in loggers: - logger.setLevel(level) - - stream_handler = logging.StreamHandler() - stream_handler.setFormatter(formatter) - logging.root.addHandler(stream_handler) - - if log_file: - file_handler = logging.FileHandler(filename=log_file) - file_handler.setFormatter(formatter) - logging.root.addHandler(file_handler) - diff --git a/kosmos-g/open_clip/src/training/main.py b/kosmos-g/open_clip/src/training/main.py deleted file mode 100644 index 216c367ff..000000000 --- a/kosmos-g/open_clip/src/training/main.py +++ /dev/null @@ -1,307 +0,0 @@ -import logging -import os -import random -from datetime import datetime - -import numpy as np -import torch -from torch import optim -from torch.cuda.amp import GradScaler - -try: - import wandb -except ImportError: - wandb = None - -try: - import torch.utils.tensorboard as tensorboard -except ImportError: - tensorboard = None - -try: - import horovod.torch as hvd -except ImportError: - hvd = None - -from open_clip import create_model_and_transforms, trace_model -from training.data import get_data -from training.distributed import is_master, init_distributed_device, world_info_from_env -from training.logger import setup_logging -from training.params import parse_args -from training.scheduler import cosine_lr -from training.train import train_one_epoch, evaluate - - -def random_seed(seed=42, rank=0): - torch.manual_seed(seed + rank) - np.random.seed(seed + rank) - random.seed(seed + rank) - - -def main(): - args = parse_args() - - # sanitize model name for filesystem / uri use, easier if we don't use / in name as a rule? 
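-    # Worked example (editorial, not in the original): "ViT-B/32" becomes
-    # "ViT-B-32", so an auto-generated run name such as
-    # "2024_01_01-12_00_00-model_ViT-B-32-lr_0.0005-b_64-j_1-p_amp"
-    # (using the defaults from params.py) stays safe to use as a directory
-    # name under --logs.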
- args.model = args.model.replace('/', '-') - - # get the name of the experiments - if args.name is None: - args.name = '-'.join([ - datetime.now().strftime("%Y_%m_%d-%H_%M_%S"), - f"model_{args.model}", - f"lr_{args.lr}", - f"b_{args.batch_size}", - f"j_{args.workers}", - f"p_{args.precision}", - ]) - - # discover initial world args early so we can log properly - args.distributed = False - args.local_rank, args.rank, args.world_size = world_info_from_env() - - args.log_path = None - if is_master(args, local=args.log_local): - log_base_path = os.path.join(args.logs, args.name) - os.makedirs(log_base_path, exist_ok=True) - log_filename = f'out-{args.rank}' if args.log_local else 'out.log' - args.log_path = os.path.join(log_base_path, log_filename) - if os.path.exists(args.log_path): - print( - "Error. Experiment already exists. Use --name {} to specify a new experiment." - ) - return -1 - - # Set logger - args.log_level = logging.DEBUG if args.debug else logging.INFO - setup_logging(args.log_path, args.log_level) - - # fully initialize distributed device environment - torch.backends.cudnn.benchmark = True - torch.backends.cudnn.deterministic = False - device = init_distributed_device(args) - - args.wandb = 'wandb' in args.report_to or 'all' in args.report_to - args.tensorboard = 'tensorboard' in args.report_to or 'all' in args.report_to - if is_master(args): - args.tensorboard_path = os.path.join(args.logs, args.name, "tensorboard") if args.tensorboard else '' - args.checkpoint_path = os.path.join(args.logs, args.name, "checkpoints") - for dirname in [args.tensorboard_path, args.checkpoint_path]: - if dirname: - os.makedirs(dirname, exist_ok=True) - else: - args.tensorboard_path = '' - args.checkpoint_path = '' - - if args.copy_codebase: - copy_codebase(args) - - assert args.precision in ['amp', 'fp16', 'fp32'] - if args.precision == 'fp16': - logging.warning( - 'It is recommended to use AMP mixed-precision instead of FP16. ' - 'FP16 support needs further verification and tuning, especially for train.') - - if args.horovod: - logging.info( - f'Running in horovod mode with multiple processes / nodes. Device: {args.device}.' - f'Process (global: {args.rank}, local {args.local_rank}), total {args.world_size}.') - elif args.distributed: - logging.info( - f'Running in distributed mode with multiple processes. Device: {args.device}.' - f'Process (global: {args.rank}, local {args.local_rank}), total {args.world_size}.') - else: - logging.info(f'Running with a single process. 
Device {args.device}.') - - random_seed(args.seed, 0) - model, preprocess_train, preprocess_val = create_model_and_transforms( - args.model, - args.pretrained, - precision=args.precision, - device=device, - jit=args.torchscript, - force_quick_gelu=args.force_quick_gelu, - pretrained_image=args.pretrained_image, - ) - random_seed(args.seed, args.rank) - - if args.trace: - model = trace_model(model, batch_size=args.batch_size, device=device) - - if args.lock_image: - # lock image tower as per LiT - https://arxiv.org/abs/2111.07991 - model.lock_image_tower( - unlocked_groups=args.lock_image_unlocked_groups, - freeze_bn_stats=args.lock_image_freeze_bn_stats) - - if args.grad_checkpointing: - model.set_grad_checkpointing() - - if is_master(args): - logging.info("Model:") - logging.info(f"{str(model)}") - logging.info("Params:") - params_file = os.path.join(args.logs, args.name, "params.txt") - with open(params_file, "w") as f: - for name in sorted(vars(args)): - val = getattr(args, name) - logging.info(f" {name}: {val}") - f.write(f"{name}: {val}\n") - - if args.distributed and not args.horovod: - if args.use_bn_sync: - model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model) - ddp_args = {} - if args.ddp_static_graph: - # this doesn't exist in older PyTorch, arg only added if enabled - ddp_args['static_graph'] = True - model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[device], **ddp_args) - - # create optimizer and scaler - optimizer = None - scaler = None - if args.train_data: - assert not args.trace, 'Cannot train with traced model' - - exclude = lambda n, p: p.ndim < 2 or "bn" in n or "ln" in n or "bias" in n or 'logit_scale' in n - include = lambda n, p: not exclude(n, p) - - named_parameters = list(model.named_parameters()) - gain_or_bias_params = [p for n, p in named_parameters if exclude(n, p) and p.requires_grad] - rest_params = [p for n, p in named_parameters if include(n, p) and p.requires_grad] - - optimizer = optim.AdamW( - [ - {"params": gain_or_bias_params, "weight_decay": 0.}, - {"params": rest_params, "weight_decay": args.wd}, - ], - lr=args.lr, - betas=(args.beta1, args.beta2), - eps=args.eps, - ) - if args.horovod: - optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters()) - hvd.broadcast_parameters(model.state_dict(), root_rank=0) - hvd.broadcast_optimizer_state(optimizer, root_rank=0) - - scaler = GradScaler() if args.precision == "amp" else None - - # optionally resume from a checkpoint - start_epoch = 0 - if args.resume is not None: - if os.path.isfile(args.resume): - checkpoint = torch.load(args.resume, map_location=device) - if 'epoch' in checkpoint: - # resuming a train checkpoint w/ epoch and optimizer state - start_epoch = checkpoint["epoch"] - sd = checkpoint["state_dict"] - if not args.distributed and next(iter(sd.items()))[0].startswith('module'): - sd = {k[len('module.'):]: v for k, v in sd.items()} - model.load_state_dict(sd) - if optimizer is not None: - optimizer.load_state_dict(checkpoint["optimizer"]) - if scaler is not None and 'scaler' in checkpoint: - scaler.load_state_dict(checkpoint['scaler']) - logging.info(f"=> resuming checkpoint '{args.resume}' (epoch {start_epoch})") - else: - # loading a bare (model only) checkpoint for fine-tune or evaluation - model.load_state_dict(checkpoint) - logging.info(f"=> loaded checkpoint '{args.resume}' (epoch {start_epoch})") - else: - logging.info("=> no checkpoint found at '{}'".format(args.resume)) - - # initialize datasets - data = get_data(args, 
(preprocess_train, preprocess_val), epoch=start_epoch) - assert len(data), 'At least one train or eval dataset must be specified.' - - # create scheduler if train - scheduler = None - if 'train' in data and optimizer is not None: - total_steps = data["train"].dataloader.num_batches * args.epochs - scheduler = cosine_lr(optimizer, args.lr, args.warmup, total_steps) - - # determine if this worker should save logs and checkpoints. only do so if it is rank == 0 - args.save_logs = args.logs and args.logs.lower() != 'none' and is_master(args) - writer = None - if args.save_logs and args.tensorboard: - assert tensorboard is not None, "Please install tensorboard." - writer = tensorboard.SummaryWriter(args.tensorboard_path) - - if args.wandb and is_master(args): - assert wandb is not None, 'Please install wandb.' - logging.debug('Starting wandb.') - args.train_sz = data["train"].dataloader.num_samples - if args.val_data is not None: - args.val_sz = data["val"].dataloader.num_samples - # you will have to configure this for your project! - wandb.init( - project="open-clip", - notes=args.wandb_notes, - tags=[], - config=vars(args), - ) - if args.debug: - wandb.watch(model, log='all') - wandb.save(params_file) - logging.debug('Finished loading wandb.') - - if 'train' not in data: - evaluate(model, data, start_epoch, args, writer) - return - - for epoch in range(start_epoch, args.epochs): - if is_master(args): - logging.info(f'Start epoch {epoch}') - - train_one_epoch(model, data, epoch, optimizer, scaler, scheduler, args, writer) - completed_epoch = epoch + 1 - - if any(v in data for v in ('val', 'imagenet-val', 'imagenet-v2')): - evaluate(model, data, completed_epoch, args, writer) - - # Saving checkpoints. - if args.save_logs: - checkpoint_dict = { - "epoch": completed_epoch, - "name": args.name, - "state_dict": model.state_dict(), - "optimizer": optimizer.state_dict(), - } - if scaler is not None: - checkpoint_dict["scaler"] = scaler.state_dict() - - if completed_epoch == args.epochs or ( - args.save_frequency > 0 and (completed_epoch % args.save_frequency) == 0 - ): - torch.save( - checkpoint_dict, - os.path.join(args.checkpoint_path, f"epoch_{completed_epoch}.pt"), - ) - if args.save_most_recent: - torch.save( - checkpoint_dict, - os.path.join(args.checkpoint_path, f"epoch_latest.pt"), - ) - - if args.wandb and is_master(args): - wandb.finish() - - -def copy_codebase(args): - from shutil import copytree, ignore_patterns - new_code_path = os.path.join(args.logs, args.name, "code") - if os.path.exists(new_code_path): - print( - f"Error. Experiment already exists at {new_code_path}. Use --name to specify a new experiment." 
- ) - return -1 - print(f"Copying codebase to {new_code_path}") - current_code_path = os.path.realpath(__file__) - for _ in range(3): - current_code_path = os.path.dirname(current_code_path) - copytree(current_code_path, new_code_path, ignore=ignore_patterns('log', 'logs', 'wandb')) - print("Done copying code.") - return 1 - - -if __name__ == "__main__": - main() diff --git a/kosmos-g/open_clip/src/training/params.py b/kosmos-g/open_clip/src/training/params.py deleted file mode 100644 index 7bffef26f..000000000 --- a/kosmos-g/open_clip/src/training/params.py +++ /dev/null @@ -1,289 +0,0 @@ -import argparse - - -def get_default_params(model_name): - # Params from paper (https://arxiv.org/pdf/2103.00020.pdf) - model_name = model_name.lower() - if "vit" in model_name: - return {"lr": 5.0e-4, "beta1": 0.9, "beta2": 0.98, "eps": 1.0e-6} - else: - return {"lr": 5.0e-4, "beta1": 0.9, "beta2": 0.999, "eps": 1.0e-8} - - -def parse_args(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--train-data", - type=str, - default=None, - help="Path to csv filewith training data", - ) - parser.add_argument( - "--val-data", - type=str, - default=None, - help="Path to csv file with validation data", - ) - parser.add_argument( - "--train-num-samples", - type=int, - default=None, - help="Number of samples in dataset. Required for webdataset if not available in info file.", - ) - parser.add_argument( - "--val-num-samples", - type=int, - default=None, - help="Number of samples in dataset. Useful for webdataset if not available in info file.", - ) - parser.add_argument( - "--dataset-type", - choices=["webdataset", "csv", "auto"], - default="auto", - help="Which type of dataset to process." - ) - parser.add_argument( - "--dataset-resampled", - default=False, - action="store_true", - help="Whether to use sampling with replacement for webdataset shard selection." - ) - parser.add_argument( - "--csv-separator", - type=str, - default="\t", - help="For csv-like datasets, which separator to use." - ) - parser.add_argument( - "--csv-img-key", - type=str, - default="filepath", - help="For csv-like datasets, the name of the key for the image paths." - ) - parser.add_argument( - "--csv-caption-key", - type=str, - default="title", - help="For csv-like datasets, the name of the key for the captions." - ) - parser.add_argument( - "--imagenet-val", - type=str, - default=None, - help="Path to imagenet val set for conducting zero shot evaluation.", - ) - parser.add_argument( - "--imagenet-v2", - type=str, - default=None, - help="Path to imagenet v2 for conducting zero shot evaluation.", - ) - parser.add_argument( - "--logs", - type=str, - default="./logs/", - help="Where to store tensorboard logs. Use None to avoid storing logs.", - ) - parser.add_argument( - "--log-local", - action="store_true", - default=False, - help="log files on local master, otherwise global master only.", - ) - parser.add_argument( - "--name", - type=str, - default=None, - help="Optional identifier for the experiment when storing logs. Otherwise use current time.", - ) - parser.add_argument( - "--workers", type=int, default=1, help="Number of dataloader workers per GPU." - ) - parser.add_argument( - "--batch-size", type=int, default=64, help="Batch size per GPU." - ) - parser.add_argument( - "--epochs", type=int, default=32, help="Number of epochs to train for." 
- ) - parser.add_argument("--lr", type=float, default=None, help="Learning rate.") - parser.add_argument("--beta1", type=float, default=None, help="Adam beta 1.") - parser.add_argument("--beta2", type=float, default=None, help="Adam beta 2.") - parser.add_argument("--eps", type=float, default=None, help="Adam epsilon.") - parser.add_argument("--wd", type=float, default=0.2, help="Weight decay.") - parser.add_argument( - "--warmup", type=int, default=10000, help="Number of steps to warmup for." - ) - parser.add_argument( - "--use-bn-sync", - default=False, - action="store_true", - help="Whether to use batch norm sync.") - parser.add_argument( - "--skip-scheduler", - action="store_true", - default=False, - help="Use this flag to skip the learning rate decay.", - ) - parser.add_argument( - "--save-frequency", type=int, default=1, help="How often to save checkpoints." - ) - parser.add_argument( - "--save-most-recent", - action="store_true", - default=False, - help="Always save the most recent model trained to epoch_latest.pt.", - ) - parser.add_argument( - "--zeroshot-frequency", type=int, default=2, help="How often to run zero shot." - ) - parser.add_argument( - "--val-frequency", type=int, default=1, help="How often to run evaluation with val data." - ) - parser.add_argument( - "--resume", - default=None, - type=str, - help="path to latest checkpoint (default: none)", - ) - parser.add_argument( - "--precision", - choices=["amp", "fp16", "fp32"], - default="amp", - help="Floating point precision." - ) - parser.add_argument( - "--model", - type=str, - default="RN50", - help="Name of the vision backbone to use.", - ) - parser.add_argument( - "--pretrained", - default='', - type=str, - help="Use a pretrained CLIP model weights with the specified tag or file path.", - ) - parser.add_argument( - "--pretrained-image", - default=False, - action='store_true', - help="Load imagenet pretrained weights for image tower backbone if available.", - ) - parser.add_argument( - "--lock-image", - default=False, - action='store_true', - help="Lock full image tower by disabling gradients.", - ) - parser.add_argument( - "--lock-image-unlocked-groups", - type=int, - default=0, - help="Leave last n image tower layer groups unlocked.", - ) - parser.add_argument( - "--lock-image-freeze-bn-stats", - default=False, - action='store_true', - help="Freeze BatchNorm running stats in image tower for any locked layers.", - ) - parser.add_argument( - "--grad-checkpointing", - default=False, - action='store_true', - help="Enable gradient checkpointing.", - ) - parser.add_argument( - "--local-loss", - default=False, - action="store_true", - help="calculate loss w/ local features @ global (instead of realizing full global @ global matrix)" - ) - parser.add_argument( - "--gather-with-grad", - default=False, - action="store_true", - help="enable full distributed gradient for feature gather" - ) - parser.add_argument( - "--force-quick-gelu", - default=False, - action='store_true', - help="Force use of QuickGELU activation for non-OpenAI transformer models.", - ) - parser.add_argument( - "--torchscript", - default=False, - action='store_true', - help="torch.jit.script the model, also uses jit version of OpenAI models if pretrained=='openai'", - ) - parser.add_argument( - "--trace", - default=False, - action='store_true', - help="torch.jit.trace the model for inference / eval only", - ) - # arguments for distributed training - parser.add_argument( - "--dist-url", - default="env://", - type=str, - help="url used to set up distributed 
training", - ) - parser.add_argument( - "--dist-backend", default="nccl", type=str, help="distributed backend" - ) - parser.add_argument( - "--report-to", - default='', - type=str, - help="Options are ['wandb', 'tensorboard', 'wandb,tensorboard']" - ) - parser.add_argument( - "--wandb-notes", - default='', - type=str, - help="Notes if logging with wandb" - ) - parser.add_argument( - "--debug", - default=False, - action="store_true", - help="If true, more information is logged." - ) - parser.add_argument( - "--copy-codebase", - default=False, - action="store_true", - help="If true, we copy the entire base on the log diretory, and execute from there." - ) - parser.add_argument( - "--horovod", - default=False, - action="store_true", - help="Use horovod for distributed training." - ) - parser.add_argument( - "--ddp-static-graph", - default=False, - action='store_true', - help="Enable static graph optimization for DDP in PyTorch >= 1.11.", - ) - parser.add_argument( - "--no-set-device-rank", - default=False, - action="store_true", - help="Don't set device index from local rank (when CUDA_VISIBLE_DEVICES restricted to one per proc)." - ) - parser.add_argument( - "--seed", type=int, default=0, help="Default random seed." - ) - args = parser.parse_args() - - # If some params are not passed, we use the default values based on model name. - default_params = get_default_params(args.model) - for name, val in default_params.items(): - if getattr(args, name) is None: - setattr(args, name, val) - - return args diff --git a/kosmos-g/open_clip/src/training/scheduler.py b/kosmos-g/open_clip/src/training/scheduler.py deleted file mode 100644 index e0bfdf796..000000000 --- a/kosmos-g/open_clip/src/training/scheduler.py +++ /dev/null @@ -1,23 +0,0 @@ -import numpy as np - - -def assign_learning_rate(optimizer, new_lr): - for param_group in optimizer.param_groups: - param_group["lr"] = new_lr - - -def _warmup_lr(base_lr, warmup_length, step): - return base_lr * (step + 1) / warmup_length - - -def cosine_lr(optimizer, base_lr, warmup_length, steps): - def _lr_adjuster(step): - if step < warmup_length: - lr = _warmup_lr(base_lr, warmup_length, step) - else: - e = step - warmup_length - es = steps - warmup_length - lr = 0.5 * (1 + np.cos(np.pi * e / es)) * base_lr - assign_learning_rate(optimizer, lr) - return lr - return _lr_adjuster \ No newline at end of file diff --git a/kosmos-g/open_clip/src/training/train.py b/kosmos-g/open_clip/src/training/train.py deleted file mode 100644 index 05dee34be..000000000 --- a/kosmos-g/open_clip/src/training/train.py +++ /dev/null @@ -1,248 +0,0 @@ -import json -import logging -import math -import os -import time -from contextlib import suppress - -import numpy as np -import torch -import torch.nn.functional as F - -try: - import wandb -except ImportError: - wandb = None - -from open_clip import ClipLoss -from .distributed import is_master -from .zero_shot import zero_shot_eval - - -class AverageMeter(object): - """Computes and stores the average and current value""" - def __init__(self): - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - -def unwrap_model(model): - if hasattr(model, 'module'): - return model.module - else: - return model - - -def train_one_epoch(model, data, epoch, optimizer, scaler, scheduler, args, tb_writer=None): - device = torch.device(args.device) - autocast = 
-
-    model.train()
-    loss = ClipLoss(
-        local_loss=args.local_loss,
-        gather_with_grad=args.gather_with_grad,
-        cache_labels=True,
-        rank=args.rank,
-        world_size=args.world_size,
-        use_horovod=args.horovod)
-
-    data['train'].set_epoch(epoch)  # set epoch in process safe manner via sampler or shared_epoch
-    dataloader = data['train'].dataloader
-    num_batches_per_epoch = dataloader.num_batches
-    sample_digits = math.ceil(math.log(dataloader.num_samples + 1, 10))
-
-    loss_m = AverageMeter()
-    batch_time_m = AverageMeter()
-    data_time_m = AverageMeter()
-    end = time.time()
-    for i, batch in enumerate(dataloader):
-        step = num_batches_per_epoch * epoch + i
-        scheduler(step)
-
-        images, texts = batch
-        images = images.to(device=device, non_blocking=True)
-        texts = texts.to(device=device, non_blocking=True)
-
-        data_time_m.update(time.time() - end)
-        optimizer.zero_grad()
-
-        with autocast():
-            image_features, text_features, logit_scale = model(images, texts)
-            total_loss = loss(image_features, text_features, logit_scale)
-
-        if scaler is not None:
-            scaler.scale(total_loss).backward()
-            if args.horovod:
-                optimizer.synchronize()
-                scaler.unscale_(optimizer)
-                with optimizer.skip_synchronize():
-                    scaler.step(optimizer)
-            else:
-                scaler.step(optimizer)
-            scaler.update()
-        else:
-            total_loss.backward()
-            optimizer.step()
-
-        # Note: we clamp to 4.6052 = ln(100), as in the original paper.
-        with torch.no_grad():
-            unwrap_model(model).logit_scale.clamp_(0, math.log(100))
-
-        batch_time_m.update(time.time() - end)
-        end = time.time()
-        batch_count = i + 1
-        if is_master(args) and (i % 100 == 0 or batch_count == num_batches_per_epoch):
-            batch_size = len(images)
-            num_samples = batch_count * batch_size * args.world_size
-            samples_per_epoch = dataloader.num_samples
-            percent_complete = 100.0 * batch_count / num_batches_per_epoch
-
-            # NOTE loss is coarsely sampled, just master node and per log update
-            loss_m.update(total_loss.item(), batch_size)
-            logit_scale_scalar = logit_scale.item()
-            logging.info(
-                f"Train Epoch: {epoch} [{num_samples:>{sample_digits}}/{samples_per_epoch} ({percent_complete:.0f}%)] "
-                f"Loss: {loss_m.val:#.5g} ({loss_m.avg:#.4g}) "
-                f"Data (t): {data_time_m.avg:.3f} "
-                f"Batch (t): {batch_time_m.avg:.3f}, {args.batch_size*args.world_size / batch_time_m.val:#g}/s "
-                f"LR: {optimizer.param_groups[0]['lr']:5f} "
-                f"Logit Scale: {logit_scale_scalar:.3f}"
-            )
-
-            # Save train loss / etc. Using non avg meter values as loggers have their own smoothing
-            log_data = {
-                "loss": loss_m.val,
-                "data_time": data_time_m.val,
-                "batch_time": batch_time_m.val,
-                "samples_per_second": args.batch_size*args.world_size / batch_time_m.val,
-                "scale": logit_scale_scalar,
-                "lr": optimizer.param_groups[0]["lr"]
-            }
-            for name, val in log_data.items():
-                name = "train/" + name
-                if tb_writer is not None:
-                    tb_writer.add_scalar(name, val, step)
-                if args.wandb:
-                    assert wandb is not None, 'Please install wandb.'
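-                    # keyed by the global step so the wandb curves line up with the tensorboard scalars above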
- wandb.log({name: val, 'step': step}) - - # resetting batch / data time meters per log window - batch_time_m.reset() - data_time_m.reset() - # end for - - -def evaluate(model, data, epoch, args, tb_writer=None): - metrics = {} - if not is_master(args): - return metrics - device = torch.device(args.device) - model.eval() - - zero_shot_metrics = zero_shot_eval(model, data, epoch, args) - metrics.update(zero_shot_metrics) - - autocast = torch.cuda.amp.autocast if args.precision == 'amp' else suppress - if 'val' in data and (args.val_frequency and ((epoch % args.val_frequency) == 0 or epoch == args.epochs)): - dataloader = data['val'].dataloader - num_samples = 0 - samples_per_val = dataloader.num_samples - - # FIXME this does not scale past small eval datasets - # all_image_features @ all_text_features will blow up memory and compute very quickly - cumulative_loss = 0.0 - all_image_features, all_text_features = [], [] - with torch.no_grad(): - for i, batch in enumerate(dataloader): - images, texts = batch - images = images.to(device=device, non_blocking=True) - texts = texts.to(device=device, non_blocking=True) - - with autocast(): - image_features, text_features, logit_scale = model(images, texts) - # features are accumulated in CPU tensors, otherwise GPU memory exhausted quickly - # however, system RAM is easily exceeded and compute time becomes problematic - all_image_features.append(image_features.cpu()) - all_text_features.append(text_features.cpu()) - logit_scale = logit_scale.mean() - logits_per_image = logit_scale * image_features @ text_features.t() - logits_per_text = logits_per_image.t() - - batch_size = images.shape[0] - labels = torch.arange(batch_size, device=device).long() - total_loss = ( - F.cross_entropy(logits_per_image, labels) + - F.cross_entropy(logits_per_text, labels) - ) / 2 - - cumulative_loss += total_loss * batch_size - num_samples += batch_size - if is_master(args) and (i % 100) == 0: - logging.info( - f"Eval Epoch: {epoch} [{num_samples} / {samples_per_val}]\t" - f"Loss: {cumulative_loss / num_samples:.6f}\t") - - val_metrics = get_metrics( - image_features=torch.cat(all_image_features), - text_features=torch.cat(all_text_features), - logit_scale=logit_scale.cpu(), - ) - loss = cumulative_loss / num_samples - metrics.update( - {**val_metrics, "val_loss": loss.item(), "epoch": epoch, "num_samples": num_samples} - ) - - if not metrics: - return metrics - - logging.info( - f"Eval Epoch: {epoch} " - + "\t".join([f"{k}: {round(v, 4):.4f}" for k, v in metrics.items()]) - ) - - if args.save_logs: - for name, val in metrics.items(): - if tb_writer is not None: - tb_writer.add_scalar(f"val/{name}", val, epoch) - - with open(os.path.join(args.checkpoint_path, "results.jsonl"), "a+") as f: - f.write(json.dumps(metrics)) - f.write("\n") - - if args.wandb: - assert wandb is not None, 'Please install wandb.' 
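-        # unlike the per-step train logging above, eval metrics are reported once per epoch under the val/ prefix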
- for name, val in metrics.items(): - wandb.log({f"val/{name}": val, 'epoch': epoch}) - - return metrics - - -def get_metrics(image_features, text_features, logit_scale): - metrics = {} - logits_per_image = (logit_scale * image_features @ text_features.t()).detach().cpu() - logits_per_text = logits_per_image.t().detach().cpu() - - logits = {"image_to_text": logits_per_image, "text_to_image": logits_per_text} - ground_truth = torch.arange(len(text_features)).view(-1, 1) - - for name, logit in logits.items(): - ranking = torch.argsort(logit, descending=True) - preds = torch.where(ranking == ground_truth)[1] - preds = preds.detach().cpu().numpy() - metrics[f"{name}_mean_rank"] = preds.mean() + 1 - metrics[f"{name}_median_rank"] = np.floor(np.median(preds)) + 1 - for k in [1, 5, 10]: - metrics[f"{name}_R@{k}"] = np.mean(preds < k) - - return metrics diff --git a/kosmos-g/open_clip/src/training/zero_shot.py b/kosmos-g/open_clip/src/training/zero_shot.py deleted file mode 100644 index f9dfc9675..000000000 --- a/kosmos-g/open_clip/src/training/zero_shot.py +++ /dev/null @@ -1,89 +0,0 @@ -import logging -from contextlib import suppress - -import torch -import torch.nn.functional as F -from tqdm import tqdm - -from open_clip import tokenize -from .imagenet_zeroshot_data import imagenet_classnames, openai_imagenet_template - - -def zero_shot_classifier(model, classnames, templates, args): - with torch.no_grad(): - zeroshot_weights = [] - for classname in tqdm(classnames): - texts = [template(classname) for template in templates] # format with class - texts = tokenize(texts).to(args.device) # tokenize - if args.distributed and not args.horovod: - class_embeddings = model.module.encode_text(texts) - else: - class_embeddings = model.encode_text(texts) - class_embedding = F.normalize(class_embeddings, dim=-1).mean(dim=0) - class_embedding /= class_embedding.norm() - zeroshot_weights.append(class_embedding) - zeroshot_weights = torch.stack(zeroshot_weights, dim=1).to(args.device) - return zeroshot_weights - - -def accuracy(output, target, topk=(1,)): - pred = output.topk(max(topk), 1, True, True)[1].t() - correct = pred.eq(target.view(1, -1).expand_as(pred)) - return [float(correct[:k].reshape(-1).float().sum(0, keepdim=True).cpu().numpy()) for k in topk] - - -def run(model, classifier, dataloader, args): - autocast = torch.cuda.amp.autocast if args.precision == 'amp' else suppress - with torch.no_grad(): - top1, top5, n = 0., 0., 0. - for images, target in tqdm(dataloader, unit_scale=args.batch_size): - images = images.to(args.device) - target = target.to(args.device) - - with autocast(): - # predict - if args.distributed and not args.horovod: - image_features = model.module.encode_image(images) - else: - image_features = model.encode_image(images) - image_features = F.normalize(image_features, dim=-1) - logits = 100. 
* image_features @ classifier - - # measure accuracy - acc1, acc5 = accuracy(logits, target, topk=(1, 5)) - top1 += acc1 - top5 += acc5 - n += images.size(0) - - top1 = (top1 / n) - top5 = (top5 / n) - return top1, top5 - - -def zero_shot_eval(model, data, epoch, args): - if 'imagenet-val' not in data and 'imagenet-v2' not in data: - return {} - if args.zeroshot_frequency == 0: - return {} - if (epoch % args.zeroshot_frequency) != 0 and epoch != args.epochs: - return {} - - logging.info('Starting zero-shot imagenet.') - - logging.info('Building zero-shot classifier') - classifier = zero_shot_classifier(model, imagenet_classnames, openai_imagenet_template, args) - - logging.info('Using classifier') - results = {} - if 'imagenet-val' in data: - top1, top5 = run(model, classifier, data['imagenet-val'].dataloader, args) - results['imagenet-zeroshot-val-top1'] = top1 - results['imagenet-zeroshot-val-top5'] = top5 - if 'imagenet-v2' in data: - top1, top5 = run(model, classifier, data['imagenet-v2'].dataloader, args) - results['imagenetv2-zeroshot-val-top1'] = top1 - results['imagenetv2-zeroshot-val-top5'] = top5 - - logging.info('Finished zero-shot imagenet.') - - return results diff --git a/kosmos-g/runalign.sh b/kosmos-g/runalign.sh deleted file mode 100644 index ab1e86f39..000000000 --- a/kosmos-g/runalign.sh +++ /dev/null @@ -1,60 +0,0 @@ -python -m torch.distributed.launch --nproc_per_node=8 --nnodes=8 \ - --node_rank=$RANK --master_addr=$MASTER_ADDR --master_port=$MASTER_PORT train.py None \ - --task kosmosg \ - --tokens-per-sample 2048 \ - --criterion kosmosg \ - --arch kosmosg_xl \ - --required-batch-size-multiple 1 \ - --optimizer adam \ - --adam-betas '(0.9,0.98)' \ - --adam-eps 1e-6 \ - --clip-norm 2.0 \ - --lr-scheduler polynomial_decay \ - --weight-decay 0.01 \ - --lr 1e-3 \ - --warmup-updates 375 \ - --total-num-update 300000 \ - --max-update 300000 \ - --max-sentences 2 \ - --update-freq 1 \ - --log-format simple \ - --log-interval 50 \ - --disable-validation \ - --save-interval-updates 2000 \ - --no-epoch-checkpoints \ - --memory-efficient-fp16 \ - --fp16-init-scale 4 \ - --fp16-scale-window 256 \ - --min-loss-scale 0.0001 \ - --seed 0 \ - --dict-path data/dict.txt \ - --spm-model data/sentencepiece.bpe.model \ - --save-dir /path/to/save-dir \ - --tensorboard-logdir /path/to/tensorboard-logdir \ - --ddp-backend=no_c10d \ - --distributed-no-spawn \ - --batch-read-ahead 32 \ - --reset-dataloader \ - --train-json-split-name train-nogithub-noarvix-nopubmed-mtnlg \ - --image-encoder clip \ - --visual-model-name ViT-L-14 \ - --visual-output-dim 1024 \ - --visual-pretrained /path/to/ViT-L-14-sd.pt \ - --laion-data-dir /path/to/laion-data-dir \ - --laion-batch-size 56 \ - --instructpix2pix-data-dir /path/to/instructpix2pix/ \ - --instructpix2pix-batch-size 16 \ - --openimage-data-dir /path/to/openimage/ \ - --openimage-batch-size 16 \ - --latent-query-num 64 \ - --connector xconnector \ - --no-freeze-layer resblocks.23,ln_post \ - --subln \ - --flash-attention \ - --sope-rel-pos \ - --data-weights 1,0,0 \ - --pretrained-model-name-or-path runwayml/stable-diffusion-v1-5 \ - --pretrained-ckpt-path /path/to/checkpoint_stage1.pt \ - --checkpoint-activations \ - --random-drop-caption-prob 0.5 \ - --align \ No newline at end of file diff --git a/kosmos-g/runapp.sh b/kosmos-g/runapp.sh deleted file mode 100644 index 983cbdf11..000000000 --- a/kosmos-g/runapp.sh +++ /dev/null @@ -1,16 +0,0 @@ -python -m torch.distributed.launch --nproc_per_node=1 --nnodes=1 \ - app.py None \ - --task kosmosg \ - 
--criterion kosmosg \
-    --arch kosmosg_xl \
-    --required-batch-size-multiple 1 \
-    --dict-path data/dict.txt \
-    --spm-model data/sentencepiece.bpe.model \
-    --memory-efficient-fp16 \
-    --ddp-backend=no_c10d \
-    --distributed-no-spawn \
-    --subln \
-    --sope-rel-pos \
-    --checkpoint-activations \
-    --flash-attention \
-    --pretrained-ckpt-path /path/to/checkpoint_final.pt
\ No newline at end of file
diff --git a/kosmos-g/runeval_coco.sh b/kosmos-g/runeval_coco.sh
deleted file mode 100644
index 463bfe380..000000000
--- a/kosmos-g/runeval_coco.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-accelerate launch --multi_gpu sample_kosmosg_coco.py None \
-    --task kosmosg \
-    --criterion kosmosg \
-    --arch kosmosg_xl \
-    --required-batch-size-multiple 1 \
-    --dict-path data/dict.txt \
-    --spm-model data/sentencepiece.bpe.model \
-    --memory-efficient-fp16 \
-    --ddp-backend=no_c10d \
-    --distributed-no-spawn \
-    --subln \
-    --sope-rel-pos \
-    --checkpoint-activations \
-    --flash-attention \
-    --pretrained-ckpt-path /path/to/checkpoint_final.pt
\ No newline at end of file
diff --git a/kosmos-g/runeval_dreambench.sh b/kosmos-g/runeval_dreambench.sh
deleted file mode 100644
index 18e6dbca8..000000000
--- a/kosmos-g/runeval_dreambench.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-accelerate launch --multi_gpu sample_kosmosg_dreambench.py None \
-    --task kosmosg \
-    --criterion kosmosg \
-    --arch kosmosg_xl \
-    --required-batch-size-multiple 1 \
-    --dict-path data/dict.txt \
-    --spm-model data/sentencepiece.bpe.model \
-    --memory-efficient-fp16 \
-    --ddp-backend=no_c10d \
-    --distributed-no-spawn \
-    --subln \
-    --sope-rel-pos \
-    --checkpoint-activations \
-    --flash-attention \
-    --pretrained-ckpt-path /path/to/checkpoint_final.pt
\ No newline at end of file
diff --git a/kosmos-g/runtrain.sh b/kosmos-g/runtrain.sh
deleted file mode 100644
index 436d0cc8e..000000000
--- a/kosmos-g/runtrain.sh
+++ /dev/null
@@ -1,59 +0,0 @@
-python -m torch.distributed.launch --nproc_per_node=8 --nnodes=8 \
-    --node_rank=$RANK --master_addr=$MASTER_ADDR --master_port=$MASTER_PORT train.py None \
-    --task kosmosg \
-    --tokens-per-sample 2048 \
-    --criterion kosmosg \
-    --arch kosmosg_xl \
-    --required-batch-size-multiple 1 \
-    --optimizer adam \
-    --adam-betas '(0.9,0.98)' \
-    --adam-eps 1e-6 \
-    --clip-norm 2.0 \
-    --lr-scheduler polynomial_decay \
-    --weight-decay 0.01 \
-    --lr 1e-3 \
-    --warmup-updates 375 \
-    --total-num-update 200000 \
-    --max-update 200000 \
-    --max-sentences 2 \
-    --update-freq 1 \
-    --log-format simple \
-    --log-interval 50 \
-    --disable-validation \
-    --save-interval-updates 2000 \
-    --no-epoch-checkpoints \
-    --memory-efficient-fp16 \
-    --fp16-init-scale 4 \
-    --fp16-scale-window 256 \
-    --min-loss-scale 0.0001 \
-    --seed 0 \
-    --dict-path data/dict.txt \
-    --spm-model data/sentencepiece.bpe.model \
-    --save-dir /path/to/save-dir \
-    --tensorboard-logdir /path/to/tensorboard-logdir \
-    --ddp-backend=no_c10d \
-    --distributed-no-spawn \
-    --batch-read-ahead 32 \
-    --reset-dataloader \
-    --train-json-split-name train-nogithub-noarvix-nopubmed-mtnlg \
-    --image-encoder clip \
-    --visual-model-name ViT-L-14 \
-    --visual-output-dim 1024 \
-    --visual-pretrained /path/to/ViT-L-14-sd.pt \
-    --laion-data-dir /path/to/laion-data-dir \
-    --laion-batch-size 16 \
-    --instructpix2pix-data-dir /path/to/instructpix2pix/ \
-    --instructpix2pix-batch-size 16 \
-    --openimage-data-dir /path/to/openimage/ \
-    --openimage-batch-size 16 \
-    --latent-query-num 64 \
-    --connector xconnector \
-    --no-freeze-layer resblocks.23,ln_post \
-    --subln \
-    
--flash-attention \ - --sope-rel-pos \ - --data-weights 1,2,2 \ - --pretrained-model-name-or-path runwayml/stable-diffusion-v1-5 \ - --pretrained-ckpt-path /path/to/checkpoint_stage2.pt \ - --checkpoint-activations \ - --random-drop-caption-prob 0.5 \ No newline at end of file diff --git a/kosmos-g/sample_kosmosg_coco.py b/kosmos-g/sample_kosmosg_coco.py deleted file mode 100644 index 1e344b2ee..000000000 --- a/kosmos-g/sample_kosmosg_coco.py +++ /dev/null @@ -1,182 +0,0 @@ -import json -import os -import random - -import numpy as np -import torch -from PIL import Image -from accelerate import Accelerator -from omegaconf import OmegaConf -from torch.nn.utils.rnn import pad_sequence -from torchmetrics.image.fid import FrechetInceptionDistance -from torchvision.transforms import functional as F -from tqdm import tqdm - -from app_model import AppModel -from app_utils import randomize_seed_fn -from fairseq import options -from fairseq.dataclass.utils import convert_namespace_to_omegaconf - - -class COCO_Dataset_Image(torch.utils.data.Dataset): - def __init__(self, files): - self.files = files - - def __len__(self): - return len(self.files) - - def __getitem__(self, index): - filename = self.files[index] - real_image = np.array(Image.open(filename).convert('RGB')) - real_image = torch.tensor(real_image) - real_image = real_image.permute(2, 0, 1) / 255.0 - real_image = F.resize(real_image, 256) - real_image = F.center_crop(real_image, (256, 256)) - return real_image - - -class COCO_Dataset_Caption(torch.utils.data.Dataset): - def __init__(self, args, preprocess_fn): - self.args = args - self.preprocess_fn = preprocess_fn - # get text prompts - with open(os.path.join(args.data_dir, 'annotations', 'captions_val2014.json'), 'r') as f: - self.coco = json.load(f) - self.files = self.coco['annotations'] - # random sampled 30K images from COCO - random.seed(args.seed) - self.files = random.sample(self.files, 30000) - - def __len__(self): - return len(self.files) - - def __getitem__(self, index): - prompt = self.files[index]['caption'] - - src_tokens, _, img_gpt_input_mask, negative_tokens = \ - self.preprocess_fn(prompt, - "" if self.args.negative_prompt else "", - None, single_batch=False) - - return src_tokens, img_gpt_input_mask, negative_tokens - - -def collate_fn(batch): - src_tokens = [x[0] for x in batch] - img_gpt_input_mask = [x[1] for x in batch] - negative_tokens = batch[0][2].unsqueeze(0) - src_tokens = pad_sequence(src_tokens, batch_first=True, padding_value=1) - img_gpt_input_mask = pad_sequence(img_gpt_input_mask, batch_first=True, padding_value=0) - - return src_tokens, img_gpt_input_mask, negative_tokens - - -def main(cfg): - cfg.model.pretrained_ckpt_path = "/path/to/checkpoint_final.pt" - args = OmegaConf.create() - args.data_dir = "/path/to/coco" - args.batch_size = 16 - args.num_workers = 4 - args.scheduler = "ddim" # ['ddim', 'pndm', 'dpms'] - args.num_inference_steps = 250 - args.guidance_scale = 3.0 - args.num_images_per_prompt = 1 - args.seed = 0 - args.negative_prompt = False - args.override = False - args.output_dir = "/path/to/output-dir/" + cfg.model.pretrained_ckpt_path.split('/')[-2] + '_' + \ - cfg.model.pretrained_ckpt_path.split('/')[-1].split('.')[0].split('_')[-1] + '_' + args.scheduler \ - + '_' + str(args.num_inference_steps) + '_' + str(args.negative_prompt) - - accelerator = Accelerator() - if accelerator.is_main_process and not os.path.exists(args.output_dir): - os.makedirs(args.output_dir) - - fid = FrechetInceptionDistance(normalize=True) - fid = 
accelerator.prepare_model(fid, evaluation_mode=True)
-    with open(os.path.join(args.data_dir, 'annotations', 'captions_val2014.json'), 'r') as f:
-        files = json.load(f)['images']
-    files = [os.path.join(args.data_dir, 'val2014', file['file_name']) for file in files]
-    image_dataset = COCO_Dataset_Image(files)
-    image_dataloader = torch.utils.data.DataLoader(image_dataset, batch_size=16, num_workers=args.num_workers,
-                                                   shuffle=False, pin_memory=True, drop_last=False,
-                                                   persistent_workers=True)
-    image_dataloader = accelerator.prepare(image_dataloader)
-    accelerator.print("Number of real images: ", len(image_dataset))
-
-    for batch in tqdm(image_dataloader):
-        fid.update(batch, real=True)
-
-    # stat existing images in output_dir
-    image_paths = list()
-    for root, dirs, files in os.walk(args.output_dir):
-        for file in files:
-            if file.endswith(".png"):
-                image_paths.append(os.path.join(root, file))
-    if len(image_paths) >= 30000 and not args.override:
-        accelerator.print("Already generated enough images")
-        image_dataset = COCO_Dataset_Image(image_paths)
-        image_dataloader = torch.utils.data.DataLoader(image_dataset, batch_size=128, num_workers=args.num_workers,
-                                                       shuffle=False, pin_memory=True, drop_last=False,
-                                                       persistent_workers=True)
-        image_dataloader = accelerator.prepare(image_dataloader)
-        accelerator.print("Number of fake images: ", len(image_dataset))
-
-        for batch in tqdm(image_dataloader):
-            fid.update(batch, real=False)
-        accelerator.print("FID: ", fid.compute())
-        return
-    else:
-        # clear all existing images
-        if accelerator.is_main_process:
-            for root, dirs, files in os.walk(args.output_dir):
-                for file in files:
-                    if file.endswith(".png"):
-                        os.remove(os.path.join(root, file))
-
-    model = AppModel(cfg)
-    model.set_ckpt_scheduler_fn(cfg.model.pretrained_ckpt_path, args.scheduler)
-
-    caption_dataset = COCO_Dataset_Caption(args, model.kosmosg_preprocess)
-    caption_dataloader = torch.utils.data.DataLoader(caption_dataset, batch_size=args.batch_size,
-                                                     num_workers=args.num_workers, shuffle=False, pin_memory=True,
-                                                     drop_last=False, persistent_workers=True, collate_fn=collate_fn)
-    accelerator.print("Number of prompts: ", len(caption_dataset))
-
-    model, caption_dataloader = accelerator.prepare(model, caption_dataloader)
-
-    kwargs = {
-        'num_inference_steps': args.num_inference_steps,
-        'text_guidance_scale': args.guidance_scale,
-        'num_images_per_prompt': args.num_images_per_prompt,
-        'lora_scale': 0.0,
-        'output_type': 'numpy'
-    }
-
-    for batch_id, batch in tqdm(enumerate(caption_dataloader), total=len(caption_dataloader)):
-        src_tokens, img_gpt_input_mask, negative_tokens = batch
-        # generate images
-        randomize_seed_fn(args.seed, False)
-        images = model.model.sample(src_tokens, None, img_gpt_input_mask, negative_tokens, **kwargs)
-
-        # save image
-        for image_id, image in enumerate(images):
-            pos = batch_id * accelerator.num_processes * args.batch_size * args.num_images_per_prompt + \
-                  image_id * accelerator.num_processes + accelerator.process_index
-            model.model.vae.numpy_to_pil(image)[0].save(os.path.join(args.output_dir, "{:05d}.png".format(pos)))
-
-        images = np.stack(images, axis=0)
-        images = torch.tensor(images).to(accelerator.device)
-        images = images.permute(0, 3, 1, 2)
-        fid.update(images, real=False)
-
-    accelerator.print("Number of Real Images: ", (fid.real_features_num_samples * accelerator.num_processes).item())
-    accelerator.print("Number of Fake Images: ", (fid.fake_features_num_samples * accelerator.num_processes).item())
-    accelerator.print("FID: ", 
fid.compute()) - - -if __name__ == "__main__": - parser = options.get_training_parser() - cfg = options.parse_args_and_arch(parser, modify_parser=None) - cfg = convert_namespace_to_omegaconf(cfg) - main(cfg) diff --git a/kosmos-g/sample_kosmosg_dreambench.py b/kosmos-g/sample_kosmosg_dreambench.py deleted file mode 100644 index fdf02ff61..000000000 --- a/kosmos-g/sample_kosmosg_dreambench.py +++ /dev/null @@ -1,215 +0,0 @@ -import os - -import torch -from PIL import Image -from accelerate import Accelerator -from omegaconf import OmegaConf -from torch.nn.utils.rnn import pad_sequence -from torchmetrics.multimodal.clip_score import CLIPScore as CLIP_TScore -from tqdm import tqdm - -from app_model import AppModel -from app_utils import randomize_seed_fn -from eval.clip_score import CLIPIScore as CLIP_IScore -from eval.clip_score import CLIPTScore as CLIP_TScore -from eval.dino_score import DINOScore as DINO_Score -from eval.dreambench_prompts import * -from fairseq import options -from fairseq.dataclass.utils import convert_namespace_to_omegaconf - - -class Image_Dataset(torch.utils.data.Dataset): - def __init__(self, args, files): - self.args = args - self.files = files - - def __len__(self): - return len(self.files) - - def __getitem__(self, index): - image_path = self.files[index] - object_name, object_id, image_id, prompt = image_path.split('/')[-1].split('.')[0].split('+') - image = Image.open(image_path).convert('RGB') - real_image = Image.open(os.path.join(self.args.data_dir, object_name, object_id + '.jpg')).convert('RGB') - - return image, real_image, prompt - - -def image_collate_fn(batch): - image = [x[0] for x in batch] - real_image = [x[1] for x in batch] - prompt = [x[2] for x in batch] - return image, real_image, prompt - - -class DreamBench_Dataset(torch.utils.data.Dataset): - def __init__(self, args, preprocess_fn): - self.args = args - self.preprocess_fn = preprocess_fn - # Traverse all images in the dataset - self.image_paths = [] - for root, dirs, files in os.walk(args.data_dir): - for file in files: - if file.endswith(".jpg"): - self.image_paths.append(os.path.join(root, file)) - - def __len__(self): - return len(self.image_paths) * 25 - - def __getitem__(self, index): - image_path = self.image_paths[index // 25] - real_image = Image.open(image_path).convert('RGB') - object_id = image_path.split('/')[-1].split('.')[0] - object_name = image_path.split('/')[-2] - if object_name in OBJECT: - object_class = OBJECT[object_name] - prompt = OBJECT_PROMPTS[index % 25] - input_prompt = KOSMOSG_OBJECT_PROMPTS[index % 25] - else: - object_class = LIVE_OBJECT[object_name] - prompt = LIVE_OBJECT_PROMPTS[index % 25] - input_prompt = KOSMOSG_LIVE_OBJECT_PROMPTS[index % 25] - - prompt = prompt.format(object_class) - input_prompt = input_prompt.format('<i>' if self.args.drop_object else (object_class + ' <i>')) - - src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens = \ - self.preprocess_fn(input_prompt, - "" if self.args.negative_prompt else "", - real_image, single_batch=False) - - return src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens, object_name, object_id, \ - real_image, prompt - - -def dreambench_collate_fn(batch): - src_tokens = [x[0] for x in batch] - gpt_img_src_tokens = torch.cat([x[1] for x in batch]) - img_gpt_input_mask = [x[2] for x in batch] - negative_tokens = batch[0][3].unsqueeze(0) - src_tokens = pad_sequence(src_tokens, batch_first=True, padding_value=1) - img_gpt_input_mask = pad_sequence(img_gpt_input_mask, batch_first=True, 
padding_value=0) - object_name = [x[4] for x in batch] - object_id = [x[5] for x in batch] - real_image = [x[6] for x in batch] - prompt = [x[7] for x in batch] - return src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens, object_name, object_id, real_image, prompt - - -def main(cfg): - cfg.model.pretrained_ckpt_path = "/path/to/checkpoint_final.pt" - args = OmegaConf.create() - args.data_dir = "/path/to/dreambench/dreambooth/dataset" - args.batch_size = 5 - args.num_workers = 4 - args.scheduler = "dpms" # ['ddim', 'pndm', 'dpms'] - args.num_inference_steps = 100 - args.guidance_scale = 7.5 - args.num_images_per_prompt = 4 - args.seed = 0 - args.negative_prompt = False - args.drop_object = True - args.output_dir = "/path/to/output-dir/" + cfg.model.pretrained_ckpt_path.split('/')[-2] + '_' \ - + cfg.model.pretrained_ckpt_path.split('/')[-1].split('.')[0].split('_')[-1] + '_' \ - + args.scheduler + '_' + str(args.num_inference_steps) + '_' + str(args.guidance_scale) \ - + '_' + str(args.negative_prompt) + '_' + str(args.drop_object) - - accelerator = Accelerator() - if accelerator.is_main_process and not os.path.exists(args.output_dir): - os.makedirs(args.output_dir) - - dino_score = DINO_Score(model_name_or_path='dino_vits16') - clip_i_score = CLIP_IScore(model_name_or_path='openai/clip-vit-base-patch32') - clip_t_score = CLIP_TScore(model_name_or_path='openai/clip-vit-base-patch32') - - dino_score = accelerator.prepare_model(dino_score, evaluation_mode=True) - clip_i_score = accelerator.prepare_model(clip_i_score, evaluation_mode=True) - clip_t_score = accelerator.prepare_model(clip_t_score, evaluation_mode=True) - - # stat existing images in output_dir - image_paths = list() - for root, dirs, files in os.walk(args.output_dir): - for file in files: - if file.endswith(".png"): - image_paths.append(os.path.join(root, file)) - if len(image_paths) >= 3000: - accelerator.print("Already generated enough images") - dataset = Image_Dataset(args, image_paths) - dataloader = torch.utils.data.DataLoader(dataset, batch_size=16, num_workers=args.num_workers, - shuffle=False, pin_memory=True, drop_last=False, - persistent_workers=True, collate_fn=image_collate_fn) - dataloader = accelerator.prepare(dataloader) - accelerator.print("Number of Images: ", len(dataset)) - - for batch in tqdm(dataloader): - images, real_images, prompts = batch - dino_score.update(images, real_images) - clip_i_score.update(images, real_images) - clip_t_score.update(images, prompts) - accelerator.print("Computing Scores...") - accelerator.print("DINO Score: ", dino_score.compute()) - accelerator.print("CLIP Image Score: ", clip_i_score.compute()) - accelerator.print("CLIP Text Score: ", clip_t_score.compute()) - return - else: - # clear all existing images - if accelerator.is_main_process: - for root, dirs, files in os.walk(args.output_dir): - for file in files: - if file.endswith(".png"): - os.remove(os.path.join(root, file)) - - model = AppModel(cfg) - model.set_ckpt_scheduler_fn(cfg.model.pretrained_ckpt_path, args.scheduler) - - dataset = DreamBench_Dataset(args, model.kosmosg_preprocess) - dataloader = torch.utils.data.DataLoader(dataset, batch_size=args.batch_size, - num_workers=args.num_workers, shuffle=False, pin_memory=True, - drop_last=False, persistent_workers=True, - collate_fn=dreambench_collate_fn) - accelerator.print("Number of Images: ", len(dataset)) - - model, dataloader = accelerator.prepare(model, dataloader) - - kwargs = { - 'num_inference_steps': args.num_inference_steps, - 
'text_guidance_scale': args.guidance_scale,
-        'num_images_per_prompt': args.num_images_per_prompt,
-        'lora_scale': 0.0,
-    }
-
-    for batch_id, batch in tqdm(enumerate(dataloader), total=len(dataloader)):
-        src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens, object_name, object_id, real_image, prompt = batch
-
-        # generate images
-        randomize_seed_fn(args.seed, False)
-        images = model.model.sample(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens, **kwargs)
-
-        # save image
-        for image_id, image in enumerate(images):
-            pos = batch_id * accelerator.num_processes * args.batch_size * args.num_images_per_prompt + \
-                  image_id * accelerator.num_processes + accelerator.process_index
-            name = '+'.join([
-                object_name[image_id % args.batch_size],
-                object_id[image_id % args.batch_size],
-                str(pos),
-                prompt[image_id % args.batch_size]
-            ])
-            images[image_id].save(os.path.join(args.output_dir, "{}.png".format(name)))
-
-        real_image = real_image * args.num_images_per_prompt
-        dino_score.update(images, real_image)
-        clip_i_score.update(images, real_image)
-        clip_t_score.update(images, prompt * args.num_images_per_prompt)
-
-    accelerator.print("Number of Samples: ", (dino_score.n_samples * accelerator.num_processes).item())
-    accelerator.print("DINO Score: ", (dino_score.compute()).item())
-    accelerator.print("CLIP Image Score: ", (clip_i_score.compute()).item())
-    accelerator.print("CLIP Text Score: ", (clip_t_score.compute()).item())
-
-
-if __name__ == "__main__":
-    parser = options.get_training_parser()
-    cfg = options.parse_args_and_arch(parser, modify_parser=None)
-    cfg = convert_namespace_to_omegaconf(cfg)
-    main(cfg)
diff --git a/kosmos-g/scripts/README.md b/kosmos-g/scripts/README.md
deleted file mode 100644
index 2740bdd30..000000000
--- a/kosmos-g/scripts/README.md
+++ /dev/null
@@ -1,23 +0,0 @@
-# Data Preparation
-
-## InstructPix2Pix
-```shell
-bash scripts/download_instructp2p.sh clip-filtered-dataset
-python scripts/convert_instructpix2pix.py --data-dir /path/to/clip-filtered-dataset/ --output-dir /path/to/output-dir/ --num-process 64
-```
-
-## OpenImage
-```shell
-wget https://storage.googleapis.com/openimages/2018_04/image_ids_and_rotation.csv
-python scripts/convert_openimage.py --data-dir /path/to/image_ids_and_rotation.csv --output-dir /path/to/output-dir/ --num-process 8 --cuda-device [0, 1, 2, 3, 4, 5, 6, 7]
-```
-
-If you want to preprocess the data on multiple nodes, specify the `--num-machine` and `--machine-id` arguments. For example, to preprocess the data on 8 nodes, run the following command on node 0:
-```shell
-python scripts/convert_openimage.py --data-dir /path/to/image_ids_and_rotation.csv --output-dir /path/to/output-dir/ --num-process 8 --cuda-device [0, 1, 2, 3, 4, 5, 6, 7] --num-machine 8 --machine-id 0
-```
-and run the following command on node 1:
-```shell
-python scripts/convert_openimage.py --data-dir /path/to/image_ids_and_rotation.csv --output-dir /path/to/output-dir/ --num-process 8 --cuda-device [0, 1, 2, 3, 4, 5, 6, 7] --num-machine 8 --machine-id 1
-```
-and so on.
\ No newline at end of file
diff --git a/kosmos-g/scripts/convert_instructpix2pix.py b/kosmos-g/scripts/convert_instructpix2pix.py
deleted file mode 100644
index ca3ced99a..000000000
--- a/kosmos-g/scripts/convert_instructpix2pix.py
+++ /dev/null
@@ -1,98 +0,0 @@
-"""
-this script will convert the original data format to tsv format, in which jpg files are converted to base64 strings and
-saved in a resolution of 512 (shorter side).
-each contains 3 columns: input, edit, seed0_0, seed0_1, ..., seedN_0, seedN_1 -each tsv file should contain 1000 samples. -""" - -import argparse -import base64 -import io -import json -import os -from multiprocessing import Process - -from PIL import Image -from tqdm import tqdm - - -def save_tsv(args, i, sub_seeds_list): - with open(os.path.join(args.output_dir, 'data', f'{str(i).zfill(4)}.tsv'), 'w') as f: - for name, seeds in tqdm(sub_seeds_list, desc=f'processing {i}th tsv file', leave=False): - # load prompt - prompt = json.load(open(os.path.join(args.data_dir, name, 'prompt.json'))) - # load images - images = [Image.open(os.path.join(args.data_dir, name, f'{seed}_{j}.jpg')).convert('RGB') for seed in seeds - for j in range(2)] - # resize the shorter side to 512 - images = [im.resize((512, int(512 / im.size[0] * im.size[1])) if im.size[0] < im.size[1] else - (int(512 / im.size[1] * im.size[0]), 512)) for im in images] - # encode image using base64 - for j, im in enumerate(images): - buffer = io.BytesIO() - im.save(buffer, format='PNG') - images[j] = base64.b64encode(buffer.getvalue()).decode('utf-8') - - # write to tsv file - f.write('\t'.join([ - prompt['input'].replace('\t', '').replace('\n', '').strip(), - prompt['edit'].replace('\t', '').replace('\n', '').strip(), - *images - ]) + '\n') - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--data-dir', type=str, default='/path/to/clip-filtered-dataset/') - parser.add_argument('--output-dir', type=str, default='/path/to/output-dir/') - parser.add_argument('--max-image', type=int, default=1000) - parser.add_argument('--num-process', type=int, default=64) - - args = parser.parse_args() - - if not os.path.exists(args.output_dir): - os.makedirs(args.output_dir) - if not os.path.exists(os.path.join(args.output_dir, 'data')): - os.makedirs(os.path.join(args.output_dir, 'data')) - if not os.path.exists(os.path.join(args.output_dir, 'json')): - os.makedirs(os.path.join(args.output_dir, 'json')) - - # load seeds - seeds_list = json.load(open(os.path.join(args.data_dir, 'seeds.json'))) - - # split seeds into 1000 samples per tsv file - seeds_list = [seeds_list[i:i + args.max_image] for i in range(0, len(seeds_list), args.max_image)] - - # save tsv files - processes = [] - for i, sub_seeds_list in enumerate(seeds_list): - p = Process(target=save_tsv, args=(args, i, sub_seeds_list)) - p.start() - processes.append(p) - if len(processes) == args.num_process: - for p in processes: - p.join() - processes = [] - - for p in processes: - p.join() - - json_content = [ - { - "source": [], - "source_lang": "instructpix2pix", - "weight": 1.0, - "name": "instructpix2pix", - } - ] - - # find all tsv files in the output directory, save name to json file - for file in os.listdir(os.path.join(args.output_dir, 'data')): - if file.endswith('.tsv'): - json_content[0]['source'].append(file) - - # save json file - with open(os.path.join(args.output_dir, 'json', 'train.json'), 'w', encoding='utf-8') as f: - json.dump(json_content, f, indent=4) - - print('done.') diff --git a/kosmos-g/scripts/convert_openimage.py b/kosmos-g/scripts/convert_openimage.py deleted file mode 100644 index db2206265..000000000 --- a/kosmos-g/scripts/convert_openimage.py +++ /dev/null @@ -1,370 +0,0 @@ -import base64 -import io -import json -import multiprocessing -import os -import random -from argparse import ArgumentParser -from multiprocessing import Process - -import numpy as np -import requests -import torch -import torch.nn.functional as F 
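-# pipeline per image: BLIP-2 writes a caption, MPT-7B-Instruct extracts foreground object tags
-# from it, CLIPSeg segments each tag, and the square crops are written as base64 PNG columns of a TSV row:
-#   caption \t tag1,tag2,... \t full_image_b64 \t obj1_crop_b64 \t obj1_masked_b64 \t ...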
-from PIL import Image -from scipy.ndimage import label, find_objects, grey_dilation -from torch.utils.data import Dataset, DataLoader -from tqdm import tqdm -from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, Blip2Processor, Blip2ForConditionalGeneration, \ - CLIPSegProcessor, CLIPSegForImageSegmentation - -Image.MAX_IMAGE_PIXELS = 1000000000 - -INSTRUCTION_KEY = "### Instruction:" -INPUT_KEY = "### Input:" -RESPONSE_KEY = "### Response:" -INTRO_BLURB = "Below is an instruction that describes a task. Write a response that appropriately completes the request." -PROMPT_FOR_GENERATION_FORMAT = """{intro} - -{instruction_key} -Extract all objects mentioned in the caption and separate them using commas. Exclude background elements (site, location, environment) and only include foreground objects. Ensure that only nouns are included and exclude adjectives entirely. - -{input_key} -{input} - -{response_key} -""".format( - intro=INTRO_BLURB, - instruction_key=INSTRUCTION_KEY, - instruction="{instruction}", - input_key=INPUT_KEY, - input="{input}", - response_key=RESPONSE_KEY, -) - - -@torch.no_grad() -def save_tsv(args, shard_id, shard, device): - os.environ["CUDA_VISIBLE_DEVICES"] = "0, 1, 2, 3, 4, 5, 6, 7" - random.seed(args.seed) - torch.manual_seed(args.seed) - torch.cuda.set_device(device) - model_dtype = torch.float16 - # blip2 - blip2_processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-6.7b") - blip2_model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=model_dtype) - blip2_model.eval().to(device) - - # mpt - mpt_config = AutoConfig.from_pretrained('mosaicml/mpt-7b-instruct', trust_remote_code=True) - mpt_config.init_device = device - mpt_config.attn_config['attn_impl'] = args.attn_impl - - mpt_tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b') - mpt_tokenizer.pad_token = mpt_tokenizer.eos_token - mpt_tokenizer.padding_side = 'left' - mpt_model = AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b-instruct', config=mpt_config, - torch_dtype=model_dtype, trust_remote_code=True) - mpt_model.eval() - - mpt_generate_kwargs = { - 'max_new_tokens': args.max_new_tokens, - 'temperature': args.temperature, - 'top_p': args.top_p, - 'top_k': args.top_k, - 'repetition_penalty': args.repetition_penalty, - 'no_repeat_ngram_size': args.no_repeat_ngram_size, - 'use_cache': args.use_cache, - 'do_sample': False if args.temperature == 0 else args.do_sample, - 'eos_token_id': mpt_tokenizer.eos_token_id, - 'pad_token_id': mpt_tokenizer.pad_token_id, - } - - # clipseg - clipseg_processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined") - clipseg_model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined", torch_dtype=model_dtype) - clipseg_model.eval().to(device) - - cnt = 0 - - for image in tqdm(shard): - if image is None: - continue - if cnt % 1000 == 0: - # close previous file if any - if cnt > 0: - f.close() - f = open(os.path.join(args.output_dir, "data", f"cnt_{args.machine_id}_{shard_id}_{cnt // 1000}.tsv"), "w", - encoding='utf-8') - cnt += 1 - - blip2_input = blip2_processor(images=image, return_tensors="pt").to(device, model_dtype) - - blip2_gen = blip2_model.generate(**blip2_input) - caption = blip2_processor.batch_decode(blip2_gen, skip_special_tokens=True)[0] \ - .replace('\t', '').replace('\n', '').strip() - - # tag extraction - prompt = PROMPT_FOR_GENERATION_FORMAT.format(input=caption) - - # Run HF generate - mpt_input = mpt_tokenizer(prompt, return_tensors='pt', 
padding=True) - for key, value in mpt_input.items(): - mpt_input[key] = value.to(device) - mpt_gen = mpt_model.generate( - input_ids=mpt_input['input_ids'], - attention_mask=mpt_input['attention_mask'], - **mpt_generate_kwargs, - ) - tags = mpt_tokenizer.batch_decode(mpt_gen, skip_special_tokens=True)[0][len(prompt):] - - if '#' in tags: - continue - tags = tags.split(",") - - tags = [tag.replace('\t', '').replace('\n', '').strip() for tag in tags] - tags = [tag for tag in tags if len(tag) > 0 and tag in caption] - - if len(tags) == 0: - continue - - clipseg_input = clipseg_processor(text=tags, images=[image] * len(tags), padding=True, return_tensors="pt") - for key, value in clipseg_input.items(): - clipseg_input[key] = value.to(device) - if value.dtype == torch.float32: - clipseg_input[key] = value.to(device, model_dtype) - - # predict - clipseg_gen = clipseg_model(**clipseg_input).logits - - if len(tags) == 1: - clipseg_gen = clipseg_gen.unsqueeze(0) - - image_size = image.height - - # interpolate to original size - clipseg_gen = F.interpolate(clipseg_gen.unsqueeze(1), size=image_size, mode='bilinear') - masks = torch.sigmoid(clipseg_gen).squeeze(1) - masks = masks.cpu().numpy() - - sub_images = [] - tags_to_keep = [] - - # save the masked image - for mask_id, mask in enumerate(masks): - image_array = np.array(image) - thresholded_mask = mask > args.threshold - - if thresholded_mask.max() == 0: - continue - - thresholded_mask = grey_dilation(thresholded_mask, size=(image_size // 100, image_size // 100)) - labeled_matrix, num_features = label(thresholded_mask) - regions = find_objects(labeled_matrix) - sizes = [np.sum(thresholded_mask[region]) for region in regions] - max_index = np.argmax(sizes) - max_region = regions[max_index] - thresholded_mask[labeled_matrix != (max_index + 1)] = False - - tags_to_keep.append(tags[mask_id]) - - # Determine the dimensions of the region - y_start, y_stop = max_region[0].start, max_region[0].stop - x_start, x_stop = max_region[1].start, max_region[1].stop - height = y_stop - y_start - width = x_stop - x_start - - # Calculate the desired side length for a square region - side_length = max(height, width) - - # Calculate the center of the region - center_y = (y_start + y_stop) // 2 - center_x = (x_start + x_stop) // 2 - - # Calculate the new boundaries for the region - new_y_start = center_y - (side_length // 2) - new_y_stop = new_y_start + side_length - new_x_start = center_x - (side_length // 2) - new_x_stop = new_x_start + side_length - - # Adjust the boundaries if they exceed the image boundaries - if new_y_start < 0: - new_y_start = 0 - new_y_stop = side_length - elif new_y_stop > image_array.shape[0]: - new_y_start = image_array.shape[0] - side_length - new_y_stop = image_array.shape[0] - - if new_x_start < 0: - new_x_start = 0 - new_x_stop = side_length - elif new_x_stop > image_array.shape[1]: - new_x_start = image_array.shape[1] - side_length - new_x_stop = image_array.shape[1] - - # Create a new mask with the adjusted boundaries - object_image = image_array[new_y_start:new_y_stop, new_x_start:new_x_stop] - max_region_mask = thresholded_mask[new_y_start:new_y_stop, new_x_start:new_x_stop] - - masked_image = object_image.copy() - masked_image[~max_region_mask] = 255 - - object_image = Image.fromarray(object_image).resize((512, 512)) - masked_image = Image.fromarray(masked_image).resize((512, 512)) - sub_images.extend([object_image, masked_image]) - - if len(sub_images) == 0: - continue - - image = image.resize((512, 512)) - - # encode image using 
base64 - buffer = io.BytesIO() - image.save(buffer, format='PNG') - image = base64.b64encode(buffer.getvalue()).decode('utf-8') - - for j, im in enumerate(sub_images): - buffer = io.BytesIO() - im.save(buffer, format='PNG') - sub_images[j] = base64.b64encode(buffer.getvalue()).decode('utf-8') - - # write to tsv file - f.write('\t'.join([ - caption, - ','.join(tags_to_keep), - image, - *sub_images - ]) + '\n') - - -class OpenImageDataset(Dataset): - def __init__(self, url_data): - self.data = url_data - - def __len__(self): - return len(self.data) - - def __getitem__(self, idx): - try: - items = self.data[idx].split(',') - image = Image.open(requests.get(items[2], stream=True).raw).convert('RGB') - # caption - width, height = image.size - shortest_side = min(width, height) - left = (width - shortest_side) // 2 - top = (height - shortest_side) // 2 - right = left + shortest_side - bottom = top + shortest_side - image = image.crop((left, top, right, bottom)) - return image - except: - return None - - -def collate_fn(batch): - return batch[0] if batch is not None else None - - -def main(): - """Parse commandline arguments.""" - parser = ArgumentParser() - parser.add_argument('--data-dir', type=str, - default='/path/to/image_ids_and_rotation.csv') - parser.add_argument('--output-dir', type=str, default='/path/to/output-dir/') - parser.add_argument('--num-process', type=int, default=8) - parser.add_argument('--cuda-device', type=list, default=[0, 1, 2, 3, 4, 5, 6, 7]) - parser.add_argument('--num-machine', type=int, default=1) - parser.add_argument('--machine-id', type=int, default=0) - - parser.add_argument('--max-seq-len', type=int, default=None) - parser.add_argument('--max-new-tokens', type=int, default=10) - - parser.add_argument('--temperature', type=float, default=1.0) - parser.add_argument('--top-k', type=int, default=50) - parser.add_argument('--top-p', type=float, default=0.95) - parser.add_argument('--repetition-penalty', type=float, default=1.0) - parser.add_argument('--no-repeat-ngram-size', type=int, default=0) - - parser.add_argument('--seed', type=int, default=0) - parser.add_argument('--do-sample', type=bool, default=True) - parser.add_argument('--use-cache', type=bool, default=True) - parser.add_argument('--trust-remote-code', type=bool, default=True) - parser.add_argument('--attn-impl', type=str, default='torch') - parser.add_argument('--threshold', type=float, default=0.3) - args = parser.parse_args() - - os.environ['TOKENIZERS_PARALLELISM'] = 'false' - - if not os.path.exists(args.output_dir): - os.makedirs(args.output_dir) - if not os.path.exists(os.path.join(args.output_dir, 'data')): - os.makedirs(os.path.join(args.output_dir, 'data')) - if not os.path.exists(os.path.join(args.output_dir, 'json')): - os.makedirs(os.path.join(args.output_dir, 'json')) - - with open(args.data_dir, 'r', encoding='utf8') as f: - url_data = f.read().strip().split('\n') - - # split into 8 machine, and pick the part of machine_id - url_data = url_data[args.machine_id::args.num_machine] - - # split url data into shards - url_data = [url_data[i::args.num_process] for i in range(args.num_process)] - - dataloaders = [ - DataLoader( - OpenImageDataset(url_data[i]), - batch_size=1, - shuffle=False, - num_workers=4, - pin_memory=True, - persistent_workers=True, - drop_last=False, - prefetch_factor=4, - collate_fn=collate_fn - ) - for i in range(args.num_process) - ] - - multiprocessing.set_start_method('spawn') - processes = [] - - for shard_id, shard in enumerate(dataloaders): - p = Process( - 
target=save_tsv, - args=( - args, - shard_id, - shard, - torch.device('cuda:{}'.format(args.cuda_device[shard_id % len(args.cuda_device)])) - ) - ) - p.start() - processes.append(p) - - for p in processes: - p.join() - - json_content = [ - { - "source": [], - "source_lang": "openimage", - "weight": 1.0, - "name": "openimage", - } - ] - - # find all tsv files in the output directory, save name to json file - for file in os.listdir(os.path.join(args.output_dir, 'data')): - if file.endswith('.tsv'): - json_content[0]['source'].append(file) - - # save json file - with open(os.path.join(args.output_dir, 'json', 'train.json'), 'w', encoding='utf-8') as f: - json.dump(json_content, f, indent=4) - - print('Done!') - - -if __name__ == '__main__': - main() diff --git a/kosmos-g/scripts/download_instructp2p.sh b/kosmos-g/scripts/download_instructp2p.sh deleted file mode 100644 index 47d0d64f6..000000000 --- a/kosmos-g/scripts/download_instructp2p.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash - -# Make data folder relative to script location -SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd ) - -mkdir -p $SCRIPT_DIR/../data - -# Copy text datasets -wget -q --show-progress http://instruct-pix2pix.eecs.berkeley.edu/gpt-generated-prompts.jsonl -O $SCRIPT_DIR/../data/gpt-generated-prompts.jsonl -wget -q --show-progress http://instruct-pix2pix.eecs.berkeley.edu/human-written-prompts.jsonl -O $SCRIPT_DIR/../data/human-written-prompts.jsonl - -# If dataset name isn't provided, exit. -if [ -z $1 ] -then - exit 0 -fi - -# Copy dataset files -mkdir $SCRIPT_DIR/../data/$1 -wget -A zip,json -R "index.html*" -q --show-progress -r --no-parent http://instruct-pix2pix.eecs.berkeley.edu/$1/ -nd -P $SCRIPT_DIR/../data/$1/ - -# Unzip to folders -unzip $SCRIPT_DIR/../data/$1/\*.zip -d $SCRIPT_DIR/../data/$1/ - -# Cleanup -rm -f $SCRIPT_DIR/../data/$1/*.zip -rm -f $SCRIPT_DIR/../data/$1/*.html \ No newline at end of file diff --git a/kosmos-g/scripts/remove_dreambench_multiimg.sh b/kosmos-g/scripts/remove_dreambench_multiimg.sh deleted file mode 100644 index dc35f6e43..000000000 --- a/kosmos-g/scripts/remove_dreambench_multiimg.sh +++ /dev/null @@ -1,53 +0,0 @@ -#!/bin/bash - -DATASET_PATH=${1} - -declare -a files_to_keep=( -"${DATASET_PATH}/pink_sunglasses/04.jpg" -"${DATASET_PATH}/can/01.jpg" -"${DATASET_PATH}/candle/02.jpg" -"${DATASET_PATH}/teapot/04.jpg" -"${DATASET_PATH}/backpack/02.jpg" -"${DATASET_PATH}/dog3/05.jpg" -"${DATASET_PATH}/cat2/02.jpg" -"${DATASET_PATH}/dog8/04.jpg" -"${DATASET_PATH}/grey_sloth_plushie/04.jpg" -"${DATASET_PATH}/backpack_dog/02.jpg" -"${DATASET_PATH}/robot_toy/00.jpg" -"${DATASET_PATH}/dog5/00.jpg" -"${DATASET_PATH}/duck_toy/01.jpg" -"${DATASET_PATH}/dog2/02.jpg" -"${DATASET_PATH}/dog/02.jpg" -"${DATASET_PATH}/colorful_sneaker/01.jpg" -"${DATASET_PATH}/red_cartoon/00.jpg" -"${DATASET_PATH}/clock/03.jpg" -"${DATASET_PATH}/shiny_sneaker/01.jpg" -"${DATASET_PATH}/dog6/02.jpg" -"${DATASET_PATH}/berry_bowl/02.jpg" -"${DATASET_PATH}/bear_plushie/03.jpg" -"${DATASET_PATH}/poop_emoji/00.jpg" -"${DATASET_PATH}/rc_car/03.jpg" -"${DATASET_PATH}/dog7/01.jpg" -"${DATASET_PATH}/cat/04.jpg" -"${DATASET_PATH}/fancy_boot/02.jpg" -"${DATASET_PATH}/wolf_plushie/04.jpg" -"${DATASET_PATH}/monster_toy/04.jpg" -"${DATASET_PATH}/vase/02.jpg" -) - -find "${DATASET_PATH}" -type f | while read file; do - keep=false - for keep_file in "${files_to_keep[@]}"; do - if [[ "${file}" == "${keep_file}" ]]; then - keep=true - break - fi - done - - if [[ "${keep}" == "false" ]]; then - rm 
"${file}" - echo "Deleted: ${file}" - fi -done - -echo "Cleanup completed!" diff --git a/kosmos-g/torchscale/.gitignore b/kosmos-g/torchscale/.gitignore deleted file mode 100644 index 4eccb03fb..000000000 --- a/kosmos-g/torchscale/.gitignore +++ /dev/null @@ -1,512 +0,0 @@ -## Ignore Visual Studio temporary files, build results, and -## files generated by popular Visual Studio add-ons. -## -## Get latest from https://github.com/github/gitignore/blob/master/VisualStudio.gitignore - -# User-specific files -*.rsuser -*.suo -*.user -*.userosscache -*.sln.docstates - -# User-specific files (MonoDevelop/Xamarin Studio) -*.userprefs - -# Mono auto generated files -mono_crash.* - -# Build results -[Dd]ebug/ -[Dd]ebugPublic/ -[Rr]elease/ -[Rr]eleases/ -x64/ -x86/ -[Aa][Rr][Mm]/ -[Aa][Rr][Mm]64/ -bld/ -[Bb]in/ -[Oo]bj/ -[Ll]og/ -[Ll]ogs/ - -# Visual Studio 2015/2017 cache/options directory -.vs/ -# Uncomment if you have tasks that create the project's static files in wwwroot -#wwwroot/ - -# Visual Studio 2017 auto generated files -Generated\ Files/ - -# MSTest test Results -[Tt]est[Rr]esult*/ -[Bb]uild[Ll]og.* - -# NUnit -*.VisualState.xml -TestResult.xml -nunit-*.xml - -# Build Results of an ATL Project -[Dd]ebugPS/ -[Rr]eleasePS/ -dlldata.c - -# Benchmark Results -BenchmarkDotNet.Artifacts/ - -# .NET Core -project.lock.json -project.fragment.lock.json -artifacts/ - -# StyleCop -StyleCopReport.xml - -# Files built by Visual Studio -*_i.c -*_p.c -*_h.h -*.ilk -*.meta -*.obj -*.iobj -*.pch -*.pdb -*.ipdb -*.pgc -*.pgd -*.rsp -*.sbr -*.tlb -*.tli -*.tlh -*.tmp -*.tmp_proj -*_wpftmp.csproj -*.log -*.vspscc -*.vssscc -.builds -*.pidb -*.svclog -*.scc - -# Chutzpah Test files -_Chutzpah* - -# Visual C++ cache files -ipch/ -*.aps -*.ncb -*.opendb -*.opensdf -*.sdf -*.cachefile -*.VC.db -*.VC.VC.opendb - -# Visual Studio profiler -*.psess -*.vsp -*.vspx -*.sap - -# Visual Studio Trace Files -*.e2e - -# TFS 2012 Local Workspace -$tf/ - -# Guidance Automation Toolkit -*.gpState - -# ReSharper is a .NET coding add-in -_ReSharper*/ -*.[Rr]e[Ss]harper -*.DotSettings.user - -# TeamCity is a build add-in -_TeamCity* - -# DotCover is a Code Coverage Tool -*.dotCover - -# AxoCover is a Code Coverage Tool -.axoCover/* -!.axoCover/settings.json - -# Visual Studio code coverage results -*.coverage -*.coveragexml - -# NCrunch -_NCrunch_* -.*crunch*.local.xml -nCrunchTemp_* - -# MightyMoose -*.mm.* -AutoTest.Net/ - -# Web workbench (sass) -.sass-cache/ - -# Installshield output folder -[Ee]xpress/ - -# DocProject is a documentation generator add-in -DocProject/buildhelp/ -DocProject/Help/*.HxT -DocProject/Help/*.HxC -DocProject/Help/*.hhc -DocProject/Help/*.hhk -DocProject/Help/*.hhp -DocProject/Help/Html2 -DocProject/Help/html - -# Click-Once directory -publish/ - -# Publish Web Output -*.[Pp]ublish.xml -*.azurePubxml -# Note: Comment the next line if you want to checkin your web deploy settings, -# but database connection strings (with potential passwords) will be unencrypted -*.pubxml -*.publishproj - -# Microsoft Azure Web App publish settings. Comment the next line if you want to -# checkin your Azure Web App publish settings, but sensitive information contained -# in these scripts will be unencrypted -PublishScripts/ - -# NuGet Packages -*.nupkg -# NuGet Symbol Packages -*.snupkg -# The packages folder can be ignored because of Package Restore -**/[Pp]ackages/* -# except build/, which is used as an MSBuild target. 
-!**/[Pp]ackages/build/ -# Uncomment if necessary however generally it will be regenerated when needed -#!**/[Pp]ackages/repositories.config -# NuGet v3's project.json files produces more ignorable files -*.nuget.props -*.nuget.targets - -# Microsoft Azure Build Output -csx/ -*.build.csdef - -# Microsoft Azure Emulator -ecf/ -rcf/ - -# Windows Store app package directories and files -AppPackages/ -BundleArtifacts/ -Package.StoreAssociation.xml -_pkginfo.txt -*.appx -*.appxbundle -*.appxupload - -# Visual Studio cache files -# files ending in .cache can be ignored -*.[Cc]ache -# but keep track of directories ending in .cache -!?*.[Cc]ache/ - -# Others -ClientBin/ -~$* -*~ -*.dbmdl -*.dbproj.schemaview -*.jfm -*.pfx -*.publishsettings -orleans.codegen.cs - -# Including strong name files can present a security risk -# (https://github.com/github/gitignore/pull/2483#issue-259490424) -#*.snk - -# Since there are multiple workflows, uncomment next line to ignore bower_components -# (https://github.com/github/gitignore/pull/1529#issuecomment-104372622) -#bower_components/ - -# RIA/Silverlight projects -Generated_Code/ - -# Backup & report files from converting an old project file -# to a newer Visual Studio version. Backup files are not needed, -# because we have git ;-) -_UpgradeReport_Files/ -Backup*/ -UpgradeLog*.XML -UpgradeLog*.htm -ServiceFabricBackup/ -*.rptproj.bak - -# SQL Server files -*.mdf -*.ldf -*.ndf - -# Business Intelligence projects -*.rdl.data -*.bim.layout -*.bim_*.settings -*.rptproj.rsuser -*- [Bb]ackup.rdl -*- [Bb]ackup ([0-9]).rdl -*- [Bb]ackup ([0-9][0-9]).rdl - -# Microsoft Fakes -FakesAssemblies/ - -# GhostDoc plugin setting file -*.GhostDoc.xml - -# Node.js Tools for Visual Studio -.ntvs_analysis.dat -node_modules/ - -# Visual Studio 6 build log -*.plg - -# Visual Studio 6 workspace options file -*.opt - -# Visual Studio 6 auto-generated workspace file (contains which files were open etc.) 
-*.vbw - -# Visual Studio LightSwitch build output -**/*.HTMLClient/GeneratedArtifacts -**/*.DesktopClient/GeneratedArtifacts -**/*.DesktopClient/ModelManifest.xml -**/*.Server/GeneratedArtifacts -**/*.Server/ModelManifest.xml -_Pvt_Extensions - -# Paket dependency manager -.paket/paket.exe -paket-files/ - -# FAKE - F# Make -.fake/ - -# CodeRush personal settings -.cr/personal - -# Python Tools for Visual Studio (PTVS) -__pycache__/ -*.pyc - -# Cake - Uncomment if you are using it -# tools/** -# !tools/packages.config - -# Tabs Studio -*.tss - -# Telerik's JustMock configuration file -*.jmconfig - -# BizTalk build output -*.btp.cs -*.btm.cs -*.odx.cs -*.xsd.cs - -# OpenCover UI analysis results -OpenCover/ - -# Azure Stream Analytics local run output -ASALocalRun/ - -# MSBuild Binary and Structured Log -*.binlog - -# NVidia Nsight GPU debugger configuration file -*.nvuser - -# MFractors (Xamarin productivity tool) working folder -.mfractor/ - -# Local History for Visual Studio -.localhistory/ - -# BeatPulse healthcheck temp database -healthchecksdb - -# Backup folder for Package Reference Convert tool in Visual Studio 2017 -MigrationBackup/ - -# Ionide (cross platform F# VS Code tools) working folder -.ionide/ - - -# Byte-compiled / optimized / DLL files -__pycache__/ -*.py[cod] -*$py.class - -# C extensions -*.so - -# Distribution / packaging -.Python -build/ -develop-eggs/ -dist/ -downloads/ -eggs/ -.eggs/ -lib/ -lib64/ -parts/ -sdist/ -var/ -wheels/ -share/python-wheels/ -*.egg-info/ -.installed.cfg -*.egg -MANIFEST - -# PyInstaller -# Usually these files are written by a python script from a template -# before PyInstaller builds the exe, so as to inject date/other infos into it. -*.manifest -*.spec - -# Installer logs -pip-log.txt -pip-delete-this-directory.txt - -# Unit test / coverage reports -htmlcov/ -.tox/ -.nox/ -.coverage -.coverage.* -.cache -nosetests.xml -coverage.xml -*.cover -*.py,cover -.hypothesis/ -.pytest_cache/ -cover/ - -# Translations -*.mo -*.pot - -# Django stuff: -*.log -local_settings.py -db.sqlite3 -db.sqlite3-journal - -# Flask stuff: -instance/ -.webassets-cache - -# Scrapy stuff: -.scrapy - -# Sphinx documentation -docs/_build/ - -# PyBuilder -.pybuilder/ -target/ - -# Jupyter Notebook -.ipynb_checkpoints - -# IPython -profile_default/ -ipython_config.py - -# pyenv -# For a library or package, you might want to ignore these files since the code is -# intended to run in multiple environments; otherwise, check them in: -# .python-version - -# pipenv -# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. -# However, in case of collaboration, if having platform-specific dependencies or dependencies -# having no cross-platform support, pipenv may install dependencies that don't work, or not -# install all needed dependencies. -#Pipfile.lock - -# poetry -# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. -# This is especially recommended for binary packages to ensure reproducibility, and is more -# commonly ignored for libraries. -# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control -#poetry.lock - -# pdm -# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. -#pdm.lock -# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it -# in version control. -# https://pdm.fming.dev/#use-with-ide -.pdm.toml - -# PEP 582; used by e.g. 
github.com/David-OConnor/pyflow and github.com/pdm-project/pdm -__pypackages__/ - -# Celery stuff -celerybeat-schedule -celerybeat.pid - -# SageMath parsed files -*.sage.py - -# Environments -.env -.venv -env/ -venv/ -ENV/ -env.bak/ -venv.bak/ - -# Spyder project settings -.spyderproject -.spyproject - -# Rope project settings -.ropeproject - -# mkdocs documentation -/site - -# mypy -.mypy_cache/ -.dmypy.json -dmypy.json - -# Pyre type checker -.pyre/ - -# pytype static type analyzer -.pytype/ - -# Cython debug symbols -cython_debug/ - -# PyCharm -# JetBrains specific template is maintained in a separate JetBrains.gitignore that can -# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore -# and can be added to the global gitignore or merged into this file. For a more nuclear -# option (not recommended) you can uncomment the following to ignore the entire idea folder. -#.idea/ \ No newline at end of file diff --git a/kosmos-g/torchscale/CODE_OF_CONDUCT.md b/kosmos-g/torchscale/CODE_OF_CONDUCT.md deleted file mode 100644 index f9ba8cf65..000000000 --- a/kosmos-g/torchscale/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,9 +0,0 @@ -# Microsoft Open Source Code of Conduct - -This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). - -Resources: - -- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) -- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) -- Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns diff --git a/kosmos-g/torchscale/LICENSE b/kosmos-g/torchscale/LICENSE deleted file mode 100644 index 9e841e7a2..000000000 --- a/kosmos-g/torchscale/LICENSE +++ /dev/null @@ -1,21 +0,0 @@ - MIT License - - Copyright (c) Microsoft Corporation. - - Permission is hereby granted, free of charge, to any person obtaining a copy - of this software and associated documentation files (the "Software"), to deal - in the Software without restriction, including without limitation the rights - to use, copy, modify, merge, publish, distribute, sublicense, and/or sell - copies of the Software, and to permit persons to whom the Software is - furnished to do so, subject to the following conditions: - - The above copyright notice and this permission notice shall be included in all - copies or substantial portions of the Software. - - THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR - IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- SOFTWARE
diff --git a/kosmos-g/torchscale/README.md b/kosmos-g/torchscale/README.md
deleted file mode 100644
index 5b9511d96..000000000
--- a/kosmos-g/torchscale/README.md
+++ /dev/null
@@ -1,214 +0,0 @@
-# TorchScale - A Library for Transformers at (Any) Scale
-
-<p>
-  <a href="https://github.com/microsoft/torchscale/blob/main/LICENSE"><img alt="MIT License" src="https://img.shields.io/badge/license-MIT-blue.svg" /></a>
-  <a href="https://pypi.org/project/torchscale"><img alt="PyPI" src="https://badge.fury.io/py/torchscale.svg" /></a>
-</p>
-
-TorchScale is a PyTorch library that allows researchers and developers to scale up Transformers efficiently and effectively.
-It implements fundamental research aimed at improving modeling generality and capability, as well as the training stability and efficiency, of scaling Transformers.
-
-- Stability - [**DeepNet**](https://arxiv.org/abs/2203.00555): scaling Transformers to 1,000 layers and beyond
-- Generality - [**Foundation Transformers (Magneto)**](https://arxiv.org/abs/2210.06423): towards true general-purpose modeling across tasks and modalities (including language, vision, speech, and multimodal)
-- Efficiency - [**X-MoE**](https://arxiv.org/abs/2204.09179): scalable & finetunable sparse Mixture-of-Experts (MoE)
-
-## News
-
-- November 2022: TorchScale 0.1.1 released [[Paper](https://arxiv.org/abs/2211.13184)] [[PyPI](https://pypi.org/project/torchscale/)]
-
-## Installation
-
-To install:
-```
-pip install torchscale
-```
-
-Alternatively, you can develop it locally:
-```
-git clone https://github.com/microsoft/torchscale.git
-cd torchscale
-pip install -e .
-```
-
-## Getting Started
-
-It takes only a few lines of code to create a model with the fundamental research features above enabled. Here is how to quickly obtain a BERT-like encoder:
-
-```python
->>> from torchscale.architecture.config import EncoderConfig
->>> from torchscale.architecture.encoder import Encoder
-
->>> config = EncoderConfig(vocab_size=64000)
->>> model = Encoder(config)
-
->>> print(model)
-```
-
-We also support the `Decoder` architecture and the `EncoderDecoder` architecture:
-
-```python
-# Creating a decoder model
->>> from torchscale.architecture.config import DecoderConfig
->>> from torchscale.architecture.decoder import Decoder
-
->>> config = DecoderConfig(vocab_size=64000)
->>> decoder = Decoder(config)
->>> print(decoder)
-
-# Creating an encoder-decoder model
->>> from torchscale.architecture.config import EncoderDecoderConfig
->>> from torchscale.architecture.encoder_decoder import EncoderDecoder
-
->>> config = EncoderDecoderConfig(vocab_size=64000)
->>> encdec = EncoderDecoder(config)
->>> print(encdec)
-```
-
-## Key Features
-
-- [DeepNorm to improve the training stability of Post-LayerNorm Transformers](https://arxiv.org/abs/2203.00555)
-  * enabled by setting *deepnorm=True* in the `Config` class.
-  * It adjusts both the residual connection and the initialization method according to the model architecture (i.e., encoder, decoder, or encoder-decoder).
-
-- [SubLN for model generality and training stability](https://arxiv.org/abs/2210.06423)
-  * enabled by *subln=True*. This is enabled by default.
-  * It introduces another LayerNorm to each sublayer and adjusts the initialization according to the model architecture.
-  * Note that SubLN and DeepNorm cannot be used in one single model.
-
-- [X-MoE: efficient and finetunable sparse MoE modeling](https://arxiv.org/abs/2204.09179)
-  * enabled by *use_xmoe=True*.
-  * It replaces every *moe_freq*-th `FeedForwardNetwork` layer with an X-MoE layer.
-
-- [Multiway architecture for multimodality](https://arxiv.org/abs/2208.10442)
-  * enabled by *multiway=True*.
-  * It provides a pool of Transformer parameters used for different modalities.
-
-- [Relative position bias](https://arxiv.org/abs/1910.10683)
-  * enabled by adjusting *rel_pos_buckets* and *max_rel_pos*.
-
-- [SparseClip: improving the gradient clipping for sparse MoE models](https://arxiv.org/abs/2211.13184)
-  * We provide [sample code](examples/fairseq/utils/sparse_clip.py) that can easily be adapted to the FairSeq (or any other) repo.
-
-Most of the features above can be used by simply passing the corresponding parameters to the config. For example:
-
-```python
->>> from torchscale.architecture.config import EncoderConfig
->>> from torchscale.architecture.encoder import Encoder
-
->>> config = EncoderConfig(vocab_size=64000, deepnorm=True, multiway=True)
->>> model = Encoder(config)
-
->>> print(model)
-```
-
-## Examples
-
-We provide examples of how to use TorchScale in the following scenarios/tasks:
-
-- Language
-
-  * [Decoder/GPT](examples/fairseq/README.md#example-gpt-pretraining)
-
-  * [Encoder-Decoder/Neural Machine Translation](examples/fairseq/README.md#example-machine-translation)
-
-  * [Encoder/BERT](examples/fairseq/README.md#example-bert-pretraining)
-
-- Vision
-
-  * ViT/BEiT [In progress]
-
-- Speech
-
-- Multimodal
-
-  * [Multiway Transformers/BEiT-3](torchscale/model/BEiT3.py) [In progress]
-
-We plan to provide more examples covering different tasks (e.g., vision pretraining and speech recognition) and various deep learning toolkits (e.g., [DeepSpeed](https://github.com/microsoft/DeepSpeed) and [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)). Any comments or PRs are welcome!
-
-## Results
-
-### Stability Evaluation
-
-<p align="center">
-  <img src="https://publicmodel.blob.core.windows.net/torchscale/pic/convergence.png" width="800"/>
-</p>
-
-With TorchScale, the training curve is smooth, while the baseline Transformer fails to converge.
-
-### Scaling-up Experiments
-
-<p align="center">
-  <img src="https://publicmodel.blob.core.windows.net/torchscale/pic/scaling_curve.png" width="800"/>
-</p>
-
-TorchScale supports arbitrary depths and widths, scaling up models painlessly.
-
-## Acknowledgments
-
-Some implementations in TorchScale are either adapted from or inspired by the [FairSeq](https://github.com/facebookresearch/fairseq) repository and the [UniLM](https://github.com/microsoft/unilm) repository.
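To make the Key Features list above concrete, here is a short sketch that combines several of the feature flags in a single decoder config. It is illustrative only: the keyword names (`subln`, `use_xmoe`, `moe_freq`, `moe_expert_count`, `rel_pos_buckets`, `max_rel_pos`) are assumed from the corresponding flags that appear in the fairseq examples below and are not checked against any particular TorchScale release.

```python
>>> from torchscale.architecture.config import DecoderConfig
>>> from torchscale.architecture.decoder import Decoder

>>> # Assumed kwargs (see note above): SubLN, X-MoE in every 2nd FFN layer,
>>> # 8 experts per MoE layer, and relative position bias.
>>> config = DecoderConfig(
...     vocab_size=64000,
...     subln=True,            # do not combine with deepnorm=True
...     use_xmoe=True,
...     moe_freq=2,
...     moe_expert_count=8,
...     rel_pos_buckets=32,
...     max_rel_pos=128,
... )
>>> decoder = Decoder(config)
>>> print(decoder)
```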
-
-## Citations
-
-If you find this repository useful, please consider citing our work:
-
-```
-@article{torchscale,
-  author = {Shuming Ma and Hongyu Wang and Shaohan Huang and Wenhui Wang and Zewen Chi and Li Dong and Alon Benhaim and Barun Patra and Vishrav Chaudhary and Xia Song and Furu Wei},
-  title = {{TorchScale}: {Transformers} at Scale},
-  journal = {CoRR},
-  volume = {abs/2211.13184},
-  year = {2022}
-}
-```
-
-```
-@article{deepnet,
-  author = {Hongyu Wang and Shuming Ma and Li Dong and Shaohan Huang and Dongdong Zhang and Furu Wei},
-  title = {{DeepNet}: Scaling {Transformers} to 1,000 Layers},
-  journal = {CoRR},
-  volume = {abs/2203.00555},
-  year = {2022}
-}
-```
-
-```
-@article{magneto,
-  author = {Hongyu Wang and Shuming Ma and Shaohan Huang and Li Dong and Wenhui Wang and Zhiliang Peng and Yu Wu and Payal Bajaj and Saksham Singhal and Alon Benhaim and Barun Patra and Zhun Liu and Vishrav Chaudhary and Xia Song and Furu Wei},
-  title = {Foundation {Transformers}},
-  journal = {CoRR},
-  volume = {abs/2210.06423},
-  year = {2022}
-}
-```
-
-```
-@inproceedings{xmoe,
-  title = {On the Representation Collapse of Sparse Mixture of Experts},
-  author = {Zewen Chi and Li Dong and Shaohan Huang and Damai Dai and Shuming Ma and Barun Patra and Saksham Singhal and Payal Bajaj and Xia Song and Xian-Ling Mao and Heyan Huang and Furu Wei},
-  booktitle = {Advances in Neural Information Processing Systems},
-  year = {2022},
-  url = {https://openreview.net/forum?id=mWaYC6CZf5}
-}
-```
-
-## Contributing
-
-This project welcomes contributions and suggestions. Most contributions require you to agree to a
-Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
-the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
-
-When you submit a pull request, a CLA bot will automatically determine whether you need to provide
-a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
-provided by the bot. You will only need to do this once across all repos using our CLA.
-
-This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
-For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
-contact [Furu Wei](mailto:fuwei@microsoft.com) and [Shuming Ma](mailto:shumma@microsoft.com) with any additional questions or comments.
-
-## Trademarks
-
-This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
-trademarks or logos is subject to and must follow
-[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
-Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
-Any use of third-party trademarks or logos is subject to those third parties' policies.
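The construction examples above stop at `print(model)`. The following is a minimal end-to-end sketch of a dummy forward pass through a decoder-only LM, wired the same way as the `LMDecoder` wrapper in `examples/fairseq/models/language_modeling.py` below (an embedding table passed as `embed_tokens`, a tied `output_projection`, and a `(tokens, padding_mask)` call). The exact `Decoder` signature and its `(logits, extras)` return layout are assumptions here, not a documented contract.

```python
import torch
import torch.nn as nn

from torchscale.architecture.config import DecoderConfig
from torchscale.architecture.decoder import Decoder

PAD = 1  # assumed padding index, as in fairseq dictionaries

config = DecoderConfig(vocab_size=64000)
embed_tokens = nn.Embedding(config.vocab_size, config.decoder_embed_dim, padding_idx=PAD)

# Tie input and output embeddings, mirroring --share-decoder-input-output-embed.
output_projection = nn.Linear(config.decoder_embed_dim, config.vocab_size, bias=False)
output_projection.weight = embed_tokens.weight

decoder = Decoder(
    config,
    embed_tokens=embed_tokens,
    output_projection=output_projection,
    is_encoder_decoder=False,
)

tokens = torch.randint(2, config.vocab_size, (2, 16))  # (batch, seq_len) dummy ids
logits, extras = decoder(tokens, self_attn_padding_mask=tokens.eq(PAD))
print(logits.shape)  # expected: (2, 16, 64000)
```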
diff --git a/kosmos-g/torchscale/SECURITY.md b/kosmos-g/torchscale/SECURITY.md deleted file mode 100644 index e138ec5d6..000000000 --- a/kosmos-g/torchscale/SECURITY.md +++ /dev/null @@ -1,41 +0,0 @@ -<!-- BEGIN MICROSOFT SECURITY.MD V0.0.8 BLOCK --> - -## Security - -Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/). - -If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below. - -## Reporting Security Issues - -**Please do not report security vulnerabilities through public GitHub issues.** - -Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report). - -If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey). - -You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc). - -Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue: - - * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.) - * Full paths of source file(s) related to the manifestation of the issue - * The location of the affected source code (tag/branch/commit or direct URL) - * Any special configuration required to reproduce the issue - * Step-by-step instructions to reproduce the issue - * Proof-of-concept or exploit code (if possible) - * Impact of the issue, including how an attacker might exploit the issue - -This information will help us triage your report more quickly. - -If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs. - -## Preferred Languages - -We prefer all communications to be in English. - -## Policy - -Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd). - -<!-- END MICROSOFT SECURITY.MD BLOCK --> diff --git a/kosmos-g/torchscale/SUPPORT.md b/kosmos-g/torchscale/SUPPORT.md deleted file mode 100644 index eaf439aec..000000000 --- a/kosmos-g/torchscale/SUPPORT.md +++ /dev/null @@ -1,25 +0,0 @@ -# TODO: The maintainer of this repo has not yet edited this file - -**REPO OWNER**: Do you want Customer Service & Support (CSS) support for this product/project? - -- **No CSS support:** Fill out this template with information about how to file issues and get help. 
-- **Yes CSS support:** Fill out an intake form at [aka.ms/onboardsupport](https://aka.ms/onboardsupport). CSS will work with/help you to determine next steps.
-- **Not sure?** Fill out an intake as though the answer were "Yes". CSS will help you decide.
-
-*Then remove this first heading from this SUPPORT.MD file before publishing your repo.*
-
-# Support
-
-## How to file issues and get help
-
-This project uses GitHub Issues to track bugs and feature requests. Please search the existing
-issues before filing new issues to avoid duplicates. For new issues, file your bug or
-feature request as a new Issue.
-
-For help and questions about using this project, please **REPO MAINTAINER: INSERT INSTRUCTIONS HERE
-FOR HOW TO ENGAGE REPO OWNERS OR COMMUNITY FOR HELP. COULD BE A STACK OVERFLOW TAG OR OTHER
-CHANNEL. WHERE WILL YOU HELP PEOPLE?**.
-
-## Microsoft Support Policy
-
-Support for this **PROJECT or PRODUCT** is limited to the resources listed above.
diff --git a/kosmos-g/torchscale/examples/__init__.py b/kosmos-g/torchscale/examples/__init__.py
deleted file mode 100644
index 3ae31e250..000000000
--- a/kosmos-g/torchscale/examples/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-# Copyright (c) 2022 Microsoft
-# Licensed under The MIT License [see LICENSE for details]
diff --git a/kosmos-g/torchscale/examples/fairseq/README.md b/kosmos-g/torchscale/examples/fairseq/README.md
deleted file mode 100644
index b65434d13..000000000
--- a/kosmos-g/torchscale/examples/fairseq/README.md
+++ /dev/null
@@ -1,233 +0,0 @@
-# Example: Integration with FairSeq
-
-## Setup
-
-```bash
-# Install the repo as a package:
-git clone https://github.com/msranlp/torchscale.git
-cd torchscale
-pip install -e .
-pip install git+https://github.com/shumingma/fairseq.git@moe
-pip install git+https://github.com/shumingma/infinibatch.git
-pip install iopath
-pip install --upgrade numpy
-```
-
-## Example: BERT Pretraining
-
-### Data Format
-
-We use a [streaming dataloader](https://github.com/microsoft/infinibatch) to read the data on-the-fly from disk. It requires the data to be sharded into multiple small files (e.g., 10K lines per file), along with a JSON file that contains some metadata and the paths to these files.
-
-The overall data directory should be organized as follows:
-```
-Data/
-├── json/
-│   ├── train.json
-│   └── valid.json
-├── shard/
-│   ├── train/
-│   │   ├── 00000.txt
-│   │   ├── 00001.txt
-│   │   └── ...
-│   └── valid/
-│       ├── 00000.txt
-│       ├── 00001.txt
-│       └── ...
-├── dict.txt
-└── sentencepiece.bpe.model
-```
-
-We recommend that each sharded data file contain no more than 10K lines, with one sentence per line; two documents should be separated by an empty line.
-```
-Document 1 Line 1
-Document 1 Line 2
-Document 1 Line 3
-
-Document 2 Line 1
-Document 2 Line 2
-
-...
-```
-
-Also, the JSON file should be in a format like this:
-```
-[
-  {
-    "source": [
-      "shard/train/00000.txt",
-      "shard/train/00001.txt",
-      ...
- ], - "source_lang": "en", - "weight": 1.0 - } -] -``` - -### Training Command -```bash -cd examples/fairseq/ -python -m torch.distributed.launch --nproc_per_node=8 --nnodes=8 train.py ${PATH_TO_DATA} \ - --task pretraining \ - --tokens-per-sample 512 \ - --mask-prob 0.15 \ - --span-length 3.0 \ - --leave-unmasked-prob 0.0 \ - --random-token-prob 0.0 \ - --criterion masked_lm \ - --arch mlm_base \ - --share-encoder-input-output-embed \ - --required-batch-size-multiple 8 \ - --spm-model ${PATH_TO_DATA}/sentencepiece.bpe.model \ - --dict-file ${PATH_TO_DATA}/dict.txt \ - --optimizer adam \ - --adam-betas '(0.9,0.98)' \ - --adam-eps 1e-6 \ - --clip-norm 2.0 \ - --lr-scheduler polynomial_decay \ - --lr 0.0005 \ - --warmup-updates 10000 \ - --total-num-update 125000 \ - --max-update 125000 \ - --max-sentences 32 \ - --update-freq 1 \ - --log-format simple \ - --log-interval 100 \ - --disable-validation \ - --save-interval-updates 5000 \ - --no-epoch-checkpoints \ - --fp16 \ - --fp16-init-scale 4 \ - --fp16-scale-window 256 \ - --min-loss-scale 0.0001 \ - --seed 1 \ - --save-dir ${PATH_TO_CKPT} \ - --ddp-backend=no_c10d \ - --distributed-no-spawn \ - --reset-dataloader \ - --batch-read-ahead 10000 \ - --rel-pos-buckets 32 \ - --max-rel-pos 128 \ - --deepnorm -``` - -## Example: GPT Pretraining - -### Data Format - -We use the format as in the FairSeq's [language modeling example](https://github.com/facebookresearch/fairseq/tree/main/examples/language_model#1-preprocess-the-data). - -### Dense Model - -```bash -cd examples/fairseq/ -python -m torch.distributed.launch --nproc_per_node=2 --nnodes=1 train.py \ - ${PATH_TO_DATA} \ - --num-workers 2 \ - --activation-fn gelu \ - --share-decoder-input-output-embed \ - --validate-interval-updates 1000 \ - --save-interval-updates 1000 \ - --no-epoch-checkpoints \ - --memory-efficient-fp16 \ - --fp16-init-scale 4 \ - --arch lm_base \ - --task language_modeling \ - --sample-break-mode none \ - --tokens-per-sample 128 \ - --optimizer adam --adam-betas "(0.9, 0.98)" \ - --adam-eps 1e-08 \ - --clip-norm 0.0 \ - --lr 5e-4 \ - --lr-scheduler polynomial_decay \ - --warmup-updates 750 \ - --dropout 0.1 \ - --attention-dropout 0.1 \ - --weight-decay 0.01 \ - --batch-size 4 \ - --update-freq 1 \ - --required-batch-size-multiple 1 \ - --total-num-update 50000 \ - --max-update 50000 \ - --seed 1 \ - --ddp-backend=c10d -``` - -### Sparse (MoE) Model - -```bash -cd examples/fairseq/ -python -m torch.distributed.launch --nproc_per_node=2 --nnodes=1 train.py \ - ${PATH_TO_DATA} \ - --num-workers 2 \ - --activation-fn gelu \ - --share-decoder-input-output-embed \ - --validate-interval-updates 1000 \ - --save-interval-updates 1000 \ - --no-epoch-checkpoints \ - --memory-efficient-fp16 \ - --fp16-init-scale 4 \ - --arch lm_base \ - --task language_modeling \ - --sample-break-mode none \ - --tokens-per-sample 128 \ - --optimizer adam --adam-betas "(0.9, 0.98)" \ - --adam-eps 1e-08 \ - --clip-norm 0.0 \ - --lr 5e-4 \ - --lr-scheduler polynomial_decay \ - --warmup-updates 750 \ - --dropout 0.1 \ - --attention-dropout 0.1 \ - --weight-decay 0.01 \ - --batch-size 4 \ - --update-freq 1 \ - --required-batch-size-multiple 1 \ - --total-num-update 50000 \ - --max-update 50000 \ - --seed 1 \ - --ddp-backend=no_c10d \ - --moe-expert-count 2 --moe-freq 2 \ - --moe-gating-use-fp32 --moe-second-expert-policy random --moe-normalize-gate-prob-before-dropping \ - --moe-eval-capacity-token-fraction -1.0 \ - --criterion moe_cross_entropy --moe-gate-loss-wt 0.01 --moe-gate-loss-combine-method 
sum \ - --use-xmoe -``` - -## Example: Machine Translation - -### Data Format - -We follow the FairSeq's [neural machine translation example](https://github.com/facebookresearch/fairseq/tree/main/examples/translation#training-a-new-model) to preprocess the data. - -### Dense Model - -```bash -cd examples/fairseq/ -python -m torch.distributed.launch --nproc_per_node=2 --nnodes=1 train.py \ - ${PATH_TO_DATA} \ - --arch mt_base --share-decoder-input-output-embed \ - --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \ - --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --dropout 0.3 --weight-decay 0.0001 \ - --max-tokens 4096 --fp16 -``` - -### Sparse (MoE) Model - -```bash -cd examples/fairseq/ -python -m torch.distributed.launch --nproc_per_node=2 --nnodes=1 train.py \ - ${PATH_TO_DATA} \ - --arch mt_base --share-decoder-input-output-embed \ - --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \ - --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --dropout 0.3 --weight-decay 0.0001 \ - --moe-expert-count 2 --moe-freq 2 \ - --moe-gating-use-fp32 --moe-second-expert-policy random --moe-normalize-gate-prob-before-dropping \ - --moe-eval-capacity-token-fraction -1.0 \ - --criterion moe_cross_entropy --moe-gate-loss-wt 0.01 --moe-gate-loss-combine-method sum \ - --use-xmoe \ - --max-tokens 4096 --fp16 -``` diff --git a/kosmos-g/torchscale/examples/fairseq/__init__.py b/kosmos-g/torchscale/examples/fairseq/__init__.py deleted file mode 100644 index 3ae31e250..000000000 --- a/kosmos-g/torchscale/examples/fairseq/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] diff --git a/kosmos-g/torchscale/examples/fairseq/generate.py b/kosmos-g/torchscale/examples/fairseq/generate.py deleted file mode 100644 index 37b894507..000000000 --- a/kosmos-g/torchscale/examples/fairseq/generate.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -# flake8: noqa -import models -import tasks -from fairseq_cli.generate import cli_main - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/torchscale/examples/fairseq/interactive.py b/kosmos-g/torchscale/examples/fairseq/interactive.py deleted file mode 100644 index dca22d301..000000000 --- a/kosmos-g/torchscale/examples/fairseq/interactive.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -# flake8: noqa -import models -import tasks -from fairseq_cli.interactive import cli_main - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/torchscale/examples/fairseq/laion-token-base.sh b/kosmos-g/torchscale/examples/fairseq/laion-token-base.sh deleted file mode 100644 index fa80f7fc4..000000000 --- a/kosmos-g/torchscale/examples/fairseq/laion-token-base.sh +++ /dev/null @@ -1,45 +0,0 @@ -python -m torch.distributed.launch --nproc_per_node=16 --nnodes=8 \ - --node_rank=$OMPI_COMM_WORLD_RANK --master_addr="$MASTER_IP" --master_port=$MASTER_PORT train.py /mnt/unilm/shaohanh/data/tnlg_config/ \ - --task vl_gpt_pretraining \ - --activation-fn gelu \ - --share-decoder-input-output-embed \ - --save-interval-updates 5000 \ - --no-epoch-checkpoints \ - --memory-efficient-fp16 \ - --fp16-init-scale 4 \ - --arch lm_base \ - --sample-break-mode none \ - --tokens-per-sample 2048 \ - --optimizer adam --adam-betas "(0.9, 0.98)" \ - --adam-eps 1e-08 \ - --clip-norm 0.0 \ - --lr 6e-4 \ - 
--lr-scheduler polynomial_decay \ - --warmup-updates 750 \ - --dropout 0.1 \ - --attention-dropout 0.1 \ - --weight-decay 0.01 \ - --batch-size 1 \ - --update-freq 2 \ - --log-format simple --log-interval 50 --disable-validation \ - --required-batch-size-multiple 1 \ - --total-num-update 300000 \ - --max-update 300000 \ - --seed 1 \ - --ddp-backend=legacy_ddp \ - --batch-read-ahead 100 \ - --rel-pos-buckets 32 \ - --max-rel-pos 128 \ - --dict-path /mnt/unilm/shumma/data/16g/dict.txt \ - --spm-model /mnt/unilm/shumma/data/16g/sentencepiece.bpe.model \ - --save-dir /mnt/unilm/shaohanh/exp/unigpt_exp/torchscale_base_laion_gpt \ - --tensorboard-logdir /mnt/unilm/shaohanh/exp/unigpt_exp/torchscale_base_laion_gpt/tb-logs \ - --laion-data-dir /mnt/conversationhub/shaohanh/bvt/data/laion_dataloader_config/ \ - --laion-batch-size 8 \ - --checkpoint-activations \ - --subln \ - --criterion vl_cross_entropy \ - --decoder-embed-dim 768 \ - --decoder-ffn-embed-dim 3072 \ - --decoder-layers 12 \ - --decoder-attention-heads 12 \ No newline at end of file diff --git a/kosmos-g/torchscale/examples/fairseq/laion-wild-token-base.sh b/kosmos-g/torchscale/examples/fairseq/laion-wild-token-base.sh deleted file mode 100644 index 3ee430274..000000000 --- a/kosmos-g/torchscale/examples/fairseq/laion-wild-token-base.sh +++ /dev/null @@ -1,47 +0,0 @@ -python -m torch.distributed.launch --nproc_per_node=16 --nnodes=8 \ - --node_rank=$OMPI_COMM_WORLD_RANK --master_addr="$MASTER_IP" --master_port=$MASTER_PORT train.py /mnt/msranlp/shaohanh/data/tnlg_config/ \ - --task vl_gpt_pretraining \ - --activation-fn gelu \ - --share-decoder-input-output-embed \ - --save-interval-updates 5000 \ - --no-epoch-checkpoints \ - --memory-efficient-fp16 \ - --fp16-init-scale 4 \ - --arch lm_base \ - --sample-break-mode none \ - --tokens-per-sample 2048 \ - --optimizer adam --adam-betas "(0.9, 0.98)" \ - --adam-eps 1e-08 \ - --clip-norm 0.0 \ - --lr 6e-4 \ - --lr-scheduler polynomial_decay \ - --warmup-updates 750 \ - --dropout 0.1 \ - --attention-dropout 0.1 \ - --weight-decay 0.01 \ - --batch-size 1 \ - --update-freq 2 \ - --log-format simple --log-interval 50 --disable-validation \ - --required-batch-size-multiple 1 \ - --total-num-update 300000 \ - --max-update 300000 \ - --seed 1 \ - --ddp-backend=legacy_ddp \ - --batch-read-ahead 100 \ - --rel-pos-buckets 32 \ - --max-rel-pos 128 \ - --dict-path data/dict.txt \ - --spm-model data/sentencepiece.bpe.model \ - --save-dir /mnt/msranlp/shaohanh/exp/unigpt_exp/torchscale_base_laion_wild_gpt_rerun \ - --tensorboard-logdir /mnt/msranlp/shaohanh/exp/unigpt_exp/torchscale_base_laion_wild_gpt_rerun/tb-logs \ - --laion-data-dir /mnt/conversationhub/shaohanh/bvt/data/laion_dataloader_config/ \ - --laion-batch-size 8 \ - --wild-data-dir /mnt/msranlp/shaohanh/bvl/wild_token \ - --wild-batch-size 4 \ - --checkpoint-activations \ - --subln \ - --criterion vl_cross_entropy \ - --decoder-embed-dim 768 \ - --decoder-ffn-embed-dim 3072 \ - --decoder-layers 12 \ - --decoder-attention-heads 12 \ No newline at end of file diff --git a/kosmos-g/torchscale/examples/fairseq/models/__init__.py b/kosmos-g/torchscale/examples/fairseq/models/__init__.py deleted file mode 100644 index 7a0f259c9..000000000 --- a/kosmos-g/torchscale/examples/fairseq/models/__init__.py +++ /dev/null @@ -1,36 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import argparse -import importlib -import os - -MODEL_REGISTRY = {} -MODEL_DATACLASS_REGISTRY = {} -ARCH_MODEL_REGISTRY = {} 
-ARCH_MODEL_NAME_REGISTRY = {} -ARCH_MODEL_INV_REGISTRY = {} -ARCH_CONFIG_REGISTRY = {} - -# automatically import any Python files in the models/ directory -models_dir = os.path.dirname(__file__) -for file in os.listdir(models_dir): - path = os.path.join(models_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - model_name = file[: file.find(".py")] if file.endswith(".py") else file - module = importlib.import_module("models." + model_name) - - # extra `model_parser` for sphinx - if model_name in MODEL_REGISTRY: - parser = argparse.ArgumentParser(add_help=False) - group_archs = parser.add_argument_group("Named architectures") - group_archs.add_argument( - "--arch", choices=ARCH_MODEL_INV_REGISTRY[model_name] - ) - group_args = parser.add_argument_group("Additional command-line arguments") - MODEL_REGISTRY[model_name].add_args(group_args) - globals()[model_name + "_parser"] = parser diff --git a/kosmos-g/torchscale/examples/fairseq/models/bert.py b/kosmos-g/torchscale/examples/fairseq/models/bert.py deleted file mode 100644 index 7bbe382ce..000000000 --- a/kosmos-g/torchscale/examples/fairseq/models/bert.py +++ /dev/null @@ -1,470 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import logging -from dataclasses import dataclass, field -from typing import Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F -from apex.normalization import FusedLayerNorm as LayerNorm -from fairseq import utils -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import BaseFairseqModel, register_model, register_model_architecture -from fairseq.models.squad import SQuADHead -from fairseq.models.transformer import DEFAULT_MIN_PARAMS_TO_WRAP, Embedding -from fairseq.modules import PositionalEmbedding -from omegaconf import II - -from torchscale.architecture.config import EncoderConfig - -from .machine_translation import MTEncoder as Encoder - -DEFAULT_MAX_SOURCE_POSITIONS = 1024 - -logger = logging.getLogger(__name__) - - -@dataclass -class BertConfig(FairseqDataclass): - activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field( - default="relu", metadata={"help": "activation function to use"} - ) - dropout: float = field(default=0.1, metadata={"help": "dropout probability"}) - attention_dropout: float = field( - default=0.0, metadata={"help": "dropout probability for attention weights"} - ) - activation_dropout: float = field( - default=0.0, metadata={"help": "dropout probability after activation in FFN."} - ) - encoder_embed_dim: int = field( - default=512, metadata={"help": "encoder embedding dimension"} - ) - encoder_output_dim: int = field( - default=512, metadata={"help": "encoder output dimension"} - ) - encoder_input_dim: int = field( - default=512, metadata={"help": "encoder input dimension"} - ) - encoder_ffn_embed_dim: int = field( - default=2048, metadata={"help": "encoder embedding dimension for FFN"} - ) - encoder_layers: int = field(default=6, metadata={"help": "num encoder layers"}) - encoder_attention_heads: int = field( - default=8, metadata={"help": "num encoder attention heads"} - ) - encoder_normalize_before: bool = field( - default=False, metadata={"help": "apply layernorm before each encoder block"} - ) - no_encoder_final_norm: bool = field( - default=False, - metadata={"help": "don't add an extra layernorm after the last encoder block"}, - ) - no_token_positional_embeddings: bool = 
field( - default=False, - metadata={ - "help": "if set, disables positional embeddings (outside self attention)" - }, - ) - share_encoder_input_output_embed: bool = field( - default=False, metadata={"help": "share encoder input and output embeddings"} - ) - encoder_learned_pos: bool = field( - default=False, - metadata={"help": "use learned positional embeddings in the encoder"}, - ) - layernorm_embedding: bool = field( - default=False, metadata={"help": "add layernorm to embedding"} - ) - no_scale_embedding: bool = field( - default=False, metadata={"help": "if True, dont scale embeddings"} - ) - checkpoint_activations: bool = field( - default=False, metadata={"help": "checkpoint activations at each layer"} - ) - offload_activations: bool = field( - default=False, - metadata={"help": "move checkpointed activations to CPU after they are used."}, - ) - # config for "Reducing Transformer Depth on Demand with Structured Dropout" (Fan et al., 2019) - encoder_layerdrop: float = field( - default=0.0, metadata={"help": "LayerDrop probability for encoder"} - ) - encoder_layers_to_keep: Optional[str] = field( - default=None, - metadata={ - "help": "which layers to *keep* when pruning as a comma-separated list" - }, - ) - # config for Fully Sharded Data Parallel (FSDP) training - min_params_to_wrap: int = field( - default=DEFAULT_MIN_PARAMS_TO_WRAP, - metadata={ - "help": ( - "minimum number of params for a layer to be wrapped with FSDP() when " - "training with --ddp-backend=fully_sharded. Smaller values will " - "improve memory efficiency, but may make torch.distributed " - "communication less efficient due to smaller input sizes. This option " - "is set to 0 (i.e., always wrap) when --checkpoint-activations or " - "--offload-activations are passed." - ) - }, - ) - max_source_positions: int = field( - default=1024, metadata={"help": "max source positions"} - ) - pooler_activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field( - default="relu", metadata={"help": "activation function to use for pooler layer"} - ) - pooler_dropout: float = field( - default=0.0, - metadata={"help": "dropout probability in the masked_lm pooler layers"}, - ) - # options from other parts of the config - # add_bos_token: bool = II("task.add_bos_token") - # tokens_per_sample: int = II("task.tokens_per_sample") - tpu: bool = II("common.tpu") - rel_pos_buckets: int = field(default=0, metadata={"help": ""}) - max_rel_pos: int = field(default=0, metadata={"help": ""}) - moe_freq: int = field( - default=0, - metadata={"help": "Frequency at which we insert MoE Transformer layers"}, - ) - moe_expert_count: int = field( - default=0, metadata={"help": "Number of experts in each MoE Layer"} - ) - moe_gating_use_fp32: bool = field( - default=False, - metadata={"help": "Use FP32 computations in MoE top2 gating function"}, - ) - moe_second_expert_policy: str = field( - default="sampling", - metadata={"help": "policy for second expert, options: all/sampling/random"}, - ) - moe_normalize_gate_prob_before_dropping: bool = field( - default=False, - metadata={ - "help": "whether to normalize gate probs before or after dropping experts for capacity and randomization" - }, - ) - moe_expert_ffn_dim: Optional[int] = field( - default=None, metadata={"help": "MoE expert FFN dimension"} - ) - moe_top1_expert: Optional[bool] = field( - default=False, metadata={"help": "Use top1 gate instead of top2"} - ) - moe_eval_capacity_token_fraction: Optional[float] = field( - default=0.25, - metadata={ - "help": ( - "Default: 0.25, Fraction of 
tokens as capacity during validation, " - "if set to negative, use same as training. range: (0.0, 1.0]." - ) - }, - ) - moe_normalize_expert_grad: Optional[str] = field( - default="world_size", - metadata={ - "help": "Divide expert gradients by (1) 'world_size' (2) 'sqrt_world_size'" - }, - ) - record_a2a_perf_stats: Optional[bool] = field( - default=False, - metadata={"help": "records all to all perf stats during distributed training"}, - ) - dummy_a2a: Optional[bool] = field( - default=False, - metadata={ - "help": "By passes all to all during distributed training by returning the input buffer as output" - }, - ) - moe_batch_prioritized_routing: Optional[bool] = field( - default=False, - metadata={ - "help": "if true orders token by the gate prob before capacity dropping." - }, - ) - ddp_rank: int = II("distributed_training.distributed_rank") - deepnorm: Optional[bool] = field( - default=False, - ) - subln: Optional[bool] = field( - default=False, - ) - - -@register_model("mlm", dataclass=BertConfig) -class BertModel(BaseFairseqModel): - def __init__(self, args, encoder): - super().__init__() - self.args = args - self.encoder = encoder - self.padding_idx = self.encoder.embed_tokens.padding_idx - self.classification_heads = nn.ModuleDict() - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - args.max_source_positions = getattr( - args, "max_source_positions", DEFAULT_MAX_SOURCE_POSITIONS - ) - - embed_tokens = cls.build_embedding( - args, task.dictionary, args.encoder_embed_dim - ) - - embed_positions = ( - PositionalEmbedding( - args.max_source_positions, - args.encoder_embed_dim, - task.dictionary.pad(), - learned=args.encoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - lm_head = cls.build_lm_head( - args, - args.encoder_embed_dim, - len(task.dictionary), - args.activation_fn, - weight=embed_tokens.weight, - ) - - config = EncoderConfig() - config.override(args) - - encoder = Encoder( - config, - embed_tokens=embed_tokens, - embed_positions=embed_positions, - output_projection=lm_head, - is_encoder_decoder=False, - dictionary=task.dictionary, - ) - - return cls(args, encoder) - - @classmethod - def build_embedding(cls, args, dictionary, embed_dim, path=None): - embed_tokens = Embedding(len(dictionary), embed_dim, dictionary.pad()) - return embed_tokens - - @classmethod - def build_lm_head(cls, args, embed_dim, output_dim, activation_fn, weight): - return LMHead(embed_dim, output_dim, activation_fn, weight) - - def output_layer(self, features, masked_tokens=None): - return self.encoder.output_projection(features, masked_tokens=masked_tokens) - - def register_classification_head( - self, name, num_classes=None, inner_dim=None, **kwargs - ): - """Register a classification head.""" - if name in self.classification_heads: - prev_num_classes = self.classification_heads[name].out_proj.out_features - prev_inner_dim = self.classification_heads[name].dense.out_features - if num_classes != prev_num_classes or inner_dim != prev_inner_dim: - logger.warning( - 're-registering head "{}" with num_classes {} (prev: {}) ' - "and inner_dim {} (prev: {})".format( - name, num_classes, prev_num_classes, inner_dim, prev_inner_dim - ) - ) - self.classification_heads[name] = ClassificationHead( - self.args.encoder_embed_dim, - inner_dim or self.args.encoder_embed_dim, - num_classes, - self.args.pooler_activation_fn, - self.args.pooler_dropout, - ) - - def register_question_answering_head(self, name, num_classes=None): - 
self.classification_heads[name] = SQuADHead( - self.args.encoder_embed_dim, - ) - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + "." if name != "" else "" - - # upgrade children modules - super().upgrade_state_dict_named(state_dict, name) - - # Handle new classification heads present in the state dict. - current_head_names = ( - [] - if not hasattr(self, "classification_heads") - else self.classification_heads.keys() - ) - keys_to_delete = [] - for k in state_dict.keys(): - if not k.startswith(prefix + "classification_heads."): - continue - - head_name = k[len(prefix + "classification_heads.") :].split(".")[0] # noqa: E203 - num_classes = state_dict[ - prefix + "classification_heads." + head_name + ".out_proj.weight" - ].size(0) - inner_dim = state_dict[ - prefix + "classification_heads." + head_name + ".dense.weight" - ].size(0) - - if getattr(self.args, "load_checkpoint_heads", False): - if head_name not in current_head_names: - self.register_classification_head(head_name, num_classes, inner_dim) - else: - if head_name not in current_head_names: - logger.warning( - "deleting classification head ({}) from checkpoint " - "not present in current model: {}".format(head_name, k) - ) - keys_to_delete.append(k) - elif ( - num_classes - != self.classification_heads[head_name].out_proj.out_features - or inner_dim - != self.classification_heads[head_name].dense.out_features - ): - logger.warning( - "deleting classification head ({}) from checkpoint " - "with different dimensions than current model: {}".format( - head_name, k - ) - ) - keys_to_delete.append(k) - for k in keys_to_delete: - del state_dict[k] - - # Copy any newly-added classification heads into the state dict - # with their current weights. - if hasattr(self, "classification_heads"): - cur_state = self.classification_heads.state_dict() - for k, v in cur_state.items(): - if prefix + "classification_heads." + k not in state_dict: - logger.info("Overwriting " + prefix + "classification_heads." + k) - state_dict[prefix + "classification_heads." + k] = v - - def forward( - self, - src_tokens=None, - features_only=False, - return_all_hiddens=False, - classification_head_name=None, - masked_tokens=None, - **kwargs - ): - encoder_out = self.encoder( - src_tokens, features_only=True, return_all_hiddens=return_all_hiddens - ) - x, extra = encoder_out["encoder_out"], encoder_out - x = x.transpose(0, 1) - - if classification_head_name is not None: - x = self.classification_heads[classification_head_name](x) - elif not features_only: - x = self.output_layer(x, masked_tokens=masked_tokens) - - return x, extra - - -class ClassificationHead(nn.Module): - """Head for sentence-level classification tasks.""" - - def __init__( - self, - input_dim, - inner_dim, - num_classes, - activation_fn, - pooler_dropout, - ): - super().__init__() - self.dense = nn.Linear(input_dim, inner_dim) - self.activation_fn = utils.get_activation_fn(activation_fn) - self.dropout = nn.Dropout(p=pooler_dropout) - self.out_proj = nn.Linear(inner_dim, num_classes) - - def forward(self, features, **kwargs): - x = features[:, 0, :] # take <s> token (equiv. 
to [CLS]) - x = self.dropout(x) - x = self.dense(x) - x = self.activation_fn(x) - x = self.dropout(x) - x = self.out_proj(x) - return x - - -class LMHead(nn.Module): - """Head for masked language modeling.""" - - def __init__(self, embed_dim, output_dim, activation_fn, weight=None): - super().__init__() - self.dense = nn.Linear(embed_dim, embed_dim) - self.activation_fn = utils.get_activation_fn(activation_fn) - self.layer_norm = LayerNorm(embed_dim) - - if weight is None: - weight = nn.Linear(embed_dim, output_dim, bias=False).weight - self.weight = weight - self.bias = nn.Parameter(torch.zeros(output_dim)) - - def forward(self, features, masked_tokens=None, **kwargs): - # Only project the masked tokens while training, - # saves both memory and computation - if masked_tokens is not None: - features = features[masked_tokens, :] - - x = self.dense(features) - x = self.activation_fn(x) - x = self.layer_norm(x) - # project back to size of vocabulary with bias - x = F.linear(x, self.weight) + self.bias - return x - - -@register_model_architecture("mlm", "mlm_base") -def base_unilm_architecture(args): - if hasattr(args, "encoder_final_norm"): - args.no_encoder_final_norm = not args.encoder_final_norm - - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.pooler_dropout = getattr(args, "pooler_dropout", 0.0) - - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 768) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 3072) - args.encoder_layers = getattr(args, "encoder_layers", 12) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 12) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True) - args.activation_fn = getattr(args, "activation_fn", "gelu") - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - - args.encoder_layerdrop = getattr(args, "encoder_layerdrop", 0) - args.encoder_layers_to_keep = getattr(args, "encoder_layers_to_keep", None) - - # args.add_bos_token = getattr(args, "add_bos_token", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.share_encoder_input_output_embed = getattr( - args, "share_encoder_input_output_embed", True - ) - args.encoder_output_dim = getattr( - args, "encoder_output_dim", args.encoder_embed_dim - ) - args.encoder_input_dim = getattr(args, "encoder_input_dim", args.encoder_embed_dim) - - # Model training is not stable without this - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.no_encoder_final_norm = getattr(args, "no_encoder_final_norm", False) - - args.no_scale_embedding = getattr(args, "no_scale_embedding", True) - args.layernorm_embedding = getattr(args, "layernorm_embedding", True) - args.checkpoint_activations = getattr(args, "checkpoint_activations", False) - args.offload_activations = getattr(args, "offload_activations", False) - if args.offload_activations: - args.checkpoint_activations = True diff --git a/kosmos-g/torchscale/examples/fairseq/models/language_modeling.py b/kosmos-g/torchscale/examples/fairseq/models/language_modeling.py deleted file mode 100644 index 465157c27..000000000 --- a/kosmos-g/torchscale/examples/fairseq/models/language_modeling.py +++ /dev/null @@ -1,360 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -# Copyright (c) Facebook, Inc. 
and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from dataclasses import dataclass, field -from typing import Optional - -import torch -from fairseq import distributed_utils, utils -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.models import ( - FairseqIncrementalDecoder, - FairseqLanguageModel, - register_model, - register_model_architecture, -) -from fairseq.models.transformer import DEFAULT_MIN_PARAMS_TO_WRAP, Embedding -from fairseq.modules import PositionalEmbedding -from omegaconf import II - -from torchscale.architecture.config import DecoderConfig -from torchscale.architecture.decoder import Decoder - -DEFAULT_MAX_TARGET_POSITIONS = 1024 -logger = logging.getLogger(__name__) - - -@dataclass -class LanguageConfig(FairseqDataclass): - activation_fn: ChoiceEnum(utils.get_available_activation_fns()) = field( - default="relu", metadata={"help": "activation function to use"} - ) - dropout: float = field(default=0.1, metadata={"help": "dropout probability"}) - attention_dropout: float = field( - default=0.0, metadata={"help": "dropout probability for attention weights"} - ) - activation_dropout: float = field( - default=0.0, metadata={"help": "dropout probability after activation in FFN."} - ) - relu_dropout: float = field( - default=0.0, metadata={"help": "dropout probability after activation in FFN."} - ) - decoder_embed_dim: int = field( - default=512, metadata={"help": "decoder embedding dimension"} - ) - decoder_output_dim: int = field( - default=512, metadata={"help": "decoder output dimension"} - ) - decoder_input_dim: int = field( - default=512, metadata={"help": "decoder input dimension"} - ) - decoder_ffn_embed_dim: int = field( - default=2048, metadata={"help": "decoder embedding dimension for FFN"} - ) - decoder_layers: int = field(default=6, metadata={"help": "num decoder layers"}) - decoder_attention_heads: int = field( - default=8, metadata={"help": "num decoder attention heads"} - ) - decoder_normalize_before: bool = field( - default=False, metadata={"help": "apply layernorm before each decoder block"} - ) - no_token_positional_embeddings: bool = field( - default=False, - metadata={ - "help": "if set, disables positional embeddings (outside self attention)" - }, - ) - share_decoder_input_output_embed: bool = field( - default=False, metadata={"help": "share decoder input and output embeddings"} - ) - decoder_learned_pos: bool = field( - default=False, - metadata={"help": "use learned positional embeddings in the decoder"}, - ) - layernorm_embedding: bool = field( - default=False, metadata={"help": "add layernorm to embedding"} - ) - no_scale_embedding: bool = field( - default=False, metadata={"help": "if True, dont scale embeddings"} - ) - checkpoint_activations: bool = field( - default=False, metadata={"help": "checkpoint activations at each layer"} - ) - offload_activations: bool = field( - default=False, - metadata={"help": "move checkpointed activations to CPU after they are used."}, - ) - # config for Fully Sharded Data Parallel (FSDP) training - min_params_to_wrap: int = field( - default=DEFAULT_MIN_PARAMS_TO_WRAP, - metadata={ - "help": ( - "minimum number of params for a layer to be wrapped with FSDP() when " - "training with --ddp-backend=fully_sharded. Smaller values will " - "improve memory efficiency, but may make torch.distributed " - "communication less efficient due to smaller input sizes. 
This option "
-                "is set to 0 (i.e., always wrap) when --checkpoint-activations or "
-                "--offload-activations are passed."
-            )
-        },
-    )
-    moe_freq: int = field(
-        default=0,
-        metadata={"help": "Frequency at which we insert MoE Transformer layers"},
-    )
-    moe_expert_count: int = field(
-        default=0, metadata={"help": "Number of experts in each MoE Layer"}
-    )
-    moe_gating_use_fp32: bool = field(
-        default=False,
-        metadata={"help": "Use FP32 computations in MoE top2 gating function"},
-    )
-    moe_second_expert_policy: str = field(
-        default="sampling",
-        metadata={"help": "policy for second expert, options: all/sampling/random"},
-    )
-    moe_normalize_gate_prob_before_dropping: bool = field(
-        default=False,
-        metadata={
-            "help": "whether to normalize gate probs before or after dropping experts for capacity and randomization"
-        },
-    )
-    moe_expert_ffn_dim: Optional[int] = field(
-        default=None, metadata={"help": "MoE expert FFN dimension"}
-    )
-    moe_top1_expert: Optional[bool] = field(
-        default=False, metadata={"help": "Use top1 gate instead of top2"}
-    )
-    moe_eval_capacity_token_fraction: Optional[float] = field(
-        default=0.25,
-        metadata={
-            "help": (
-                "Default: 0.25, Fraction of tokens as capacity during validation, "
-                "if set to negative, use same as training. range: (0.0, 1.0]."
-            )
-        },
-    )
-    moe_normalize_expert_grad: Optional[str] = field(
-        default="world_size",
-        metadata={
-            "help": "Divide expert gradients by (1) 'world_size' (2) 'sqrt_world_size'"
-        },
-    )
-    record_a2a_perf_stats: Optional[bool] = field(
-        default=False,
-        metadata={"help": "records all-to-all perf stats during distributed training"},
-    )
-    dummy_a2a: Optional[bool] = field(
-        default=False,
-        metadata={
-            "help": "Bypasses all-to-all during distributed training by returning the input buffer as output"
-        },
-    )
-    moe_batch_prioritized_routing: Optional[bool] = field(
-        default=False,
-        metadata={
-            "help": "if true, orders tokens by the gate prob before capacity dropping."
- }, - ) - use_xmoe: Optional[bool] = field( - default=False, - ) - flash_attention: Optional[bool] = field( - default=False, - ) - sope_rel_pos: Optional[bool] = field( - default=False, - metadata={"help": "use SoPE as the relative position embhedding"}, - ) - scale_length: Optional[int] = field( - default=2048, - ) - - # options from other parts of the config - add_bos_token: bool = II("task.add_bos_token") - tokens_per_sample: int = II("task.tokens_per_sample") - max_target_positions: Optional[int] = II("task.max_target_positions") - tpu: bool = II("common.tpu") - memory_efficient_fp16: bool = II("common.memory_efficient_fp16") - fp16: bool = II("common.fp16") - fp16_no_flatten_grads: bool = II("common.fp16_no_flatten_grads") - ddp_backend: str = II("distributed_training.ddp_backend") - world_size: int = II("distributed_training.distributed_world_size") - distributed_rank: int = II("distributed_training.distributed_rank") - ddp_rank: int = II("distributed_training.distributed_rank") - deepnorm: Optional[bool] = field( - default=False, - ) - subln: Optional[bool] = field( - default=False, - ) - rel_pos_buckets: Optional[int] = field( - default=0, - ) - max_rel_pos: Optional[int] = field( - default=0, - ) - - -@register_model("lm", dataclass=LanguageConfig) -class LanguageModel(FairseqLanguageModel): - def __init__(self, args, decoder): - self.args = args - super().__init__(decoder) - - @classmethod - def build_model(cls, args, task): - - if getattr(args, "max_target_positions", None) is None: - args.max_target_positions = getattr( - args, "tokens_per_sample", DEFAULT_MAX_TARGET_POSITIONS - ) - - embed_tokens = cls.build_embedding( - args, task.source_dictionary, args.decoder_embed_dim - ) - - embed_positions = ( - PositionalEmbedding( - args.max_target_positions, - args.decoder_embed_dim, - task.dictionary.pad(), - learned=args.decoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - if args.share_decoder_input_output_embed: - output_projection = torch.nn.Linear( - embed_tokens.weight.shape[1], - embed_tokens.weight.shape[0], - bias=False, - ) - output_projection.weight = embed_tokens.weight - else: - output_projection = torch.nn.Linear( - args.decoder_embed_dim, len(task.dictionary), bias=False - ) - torch.nn.init.normal_( - output_projection.weight, mean=0, std=args.decoder_embed_dim**-0.5 - ) - - if getattr(args, "moe_freq", 0) > 0 and ( - getattr(args, "fp16", False) - and not getattr(args, "memory_efficient_fp16", False) - and getattr(args, "ddp_backend", None) != "fully_sharded" - ): - assert ( - args.fp16_no_flatten_grads - ), "If training moe models, set --fp16-no-flatten-grads to calculate correct gradnorm" - - args.ddp_rank = distributed_utils.get_data_parallel_rank() - - config = DecoderConfig() - config.override(args) - - decoder = LMDecoder( - config, - embed_tokens, - embed_positions, - output_projection, - is_encoder_decoder=False, - dictionary=task.dictionary, - ) - - return cls(args, decoder) - - @classmethod - def build_embedding(cls, args, dictionary, embed_dim, path=None): - return Embedding(len(dictionary), embed_dim, dictionary.pad()) - - -class LMDecoder(Decoder, FairseqIncrementalDecoder): - def forward(self, src_tokens, **kwargs): - self_attn_padding_mask = src_tokens.eq(self.dictionary.pad()) - return super().forward(src_tokens, self_attn_padding_mask, **kwargs) - - def max_positions(self): - return self.embed_positions.max_positions - - def reorder_incremental_state_scripting( - self, - incremental_state, - new_order, - ): - for 
module in incremental_state: - for key in incremental_state[module]: - result = incremental_state[module][key].index_select(0, new_order) - incremental_state[module][key] = result - - -@register_model_architecture("lm", "lm_base") -def base_lm_architecture(args): - # backward compatibility for older model checkpoints - if hasattr(args, "no_tie_adaptive_proj"): - # previous models defined --no-tie-adaptive-proj, so use the existence of - # that option to determine if this is an "old" model checkpoint - args.no_decoder_final_norm = True # old models always set this to True - if args.no_tie_adaptive_proj is False: - args.tie_adaptive_proj = True - if hasattr(args, "decoder_final_norm"): - args.no_decoder_final_norm = not args.decoder_final_norm - - args.dropout = getattr(args, "dropout", 0.1) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 2048) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.adaptive_softmax_factor = getattr(args, "adaptive_softmax_factor", 4) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.activation_fn = getattr(args, "activation_fn", "relu") - - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0) - args.decoder_layers_to_keep = getattr(args, "decoder_layers_to_keep", None) - - args.base_layers = getattr(args, "base_layers", 0) - args.base_sublayers = getattr(args, "base_sublayers", 1) - args.base_shuffle = getattr(args, "base_shuffle", False) - - args.add_bos_token = getattr(args, "add_bos_token", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.character_embeddings = getattr(args, "character_embeddings", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - # Model training is not stable without this - args.decoder_normalize_before = True - args.no_decoder_final_norm = getattr(args, "no_decoder_final_norm", False) - - args.adaptive_input = getattr(args, "adaptive_input", False) - args.adaptive_input_factor = getattr(args, "adaptive_input_factor", 4) - args.adaptive_input_cutoff = getattr(args, "adaptive_input_cutoff", None) - - args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False) - args.tie_adaptive_proj = getattr(args, "tie_adaptive_proj", False) - - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - args.layernorm_embedding = getattr(args, "layernorm_embedding", False) - args.checkpoint_activations = getattr(args, "checkpoint_activations", False) - args.offload_activations = getattr(args, "offload_activations", False) - if args.offload_activations: - args.checkpoint_activations = True diff --git a/kosmos-g/torchscale/examples/fairseq/models/machine_translation.py b/kosmos-g/torchscale/examples/fairseq/models/machine_translation.py deleted file mode 100644 index 1e7761a02..000000000 --- a/kosmos-g/torchscale/examples/fairseq/models/machine_translation.py +++ /dev/null @@ -1,453 
+0,0 @@
-# Copyright (c) 2022 Microsoft
-# Licensed under The MIT License [see LICENSE for details]
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-from typing import Dict, List, Optional, Tuple
-
-import torch
-from fairseq import distributed_utils, utils
-from fairseq.distributed import fsdp_wrap
-from fairseq.models import (
-    FairseqEncoder,
-    FairseqEncoderDecoderModel,
-    register_model,
-    register_model_architecture,
-)
-from fairseq.models.transformer import Embedding
-from fairseq.modules import PositionalEmbedding
-from torch import Tensor
-
-from torchscale.architecture.config import DecoderConfig, EncoderConfig
-from torchscale.architecture.encoder import Encoder
-
-from .language_modeling import LMDecoder as MTDecoder
-
-logger = logging.getLogger(__name__)
-
-DEFAULT_MAX_SOURCE_POSITIONS = 1024
-DEFAULT_MAX_TARGET_POSITIONS = 1024
-DEFAULT_MIN_PARAMS_TO_WRAP = int(1e8)
-
-
-@register_model("mt")
-class TranslationModel(FairseqEncoderDecoderModel):
-    def __init__(self, args, encoder, decoder):
-        super().__init__(encoder, decoder)
-        self.args = args
-
-    @staticmethod
-    def add_args(parser):
-        """Add model-specific arguments to the parser."""
-        # fmt: off
-        parser.add_argument('--activation-fn',
-                            choices=utils.get_available_activation_fns(),
-                            help='activation function to use')
-        parser.add_argument('--dropout', type=float, metavar='D',
-                            help='dropout probability')
-        parser.add_argument('--attention-dropout', type=float, metavar='D',
-                            help='dropout probability for attention weights')
-        parser.add_argument('--activation-dropout', '--relu-dropout', type=float, metavar='D',
-                            help='dropout probability after activation in FFN.')
-        parser.add_argument('--encoder-embed-path', type=str, metavar='STR',
-                            help='path to pre-trained encoder embedding')
-        parser.add_argument('--encoder-embed-dim', type=int, metavar='N',
-                            help='encoder embedding dimension')
-        parser.add_argument('--encoder-ffn-embed-dim', type=int, metavar='N',
-                            help='encoder embedding dimension for FFN')
-        parser.add_argument('--encoder-layers', type=int, metavar='N',
-                            help='num encoder layers')
-        parser.add_argument('--encoder-attention-heads', type=int, metavar='N',
-                            help='num encoder attention heads')
-        parser.add_argument('--encoder-normalize-before', action='store_true',
-                            help='apply layernorm before each encoder block')
-        parser.add_argument('--encoder-learned-pos', action='store_true',
-                            help='use learned positional embeddings in the encoder')
-        parser.add_argument('--decoder-embed-path', type=str, metavar='STR',
-                            help='path to pre-trained decoder embedding')
-        parser.add_argument('--decoder-embed-dim', type=int, metavar='N',
-                            help='decoder embedding dimension')
-        parser.add_argument('--decoder-ffn-embed-dim', type=int, metavar='N',
-                            help='decoder embedding dimension for FFN')
-        parser.add_argument('--decoder-layers', type=int, metavar='N',
-                            help='num decoder layers')
-        parser.add_argument('--decoder-attention-heads', type=int, metavar='N',
-                            help='num decoder attention heads')
-        parser.add_argument('--decoder-learned-pos', action='store_true',
-                            help='use learned positional embeddings in the decoder')
-        parser.add_argument('--decoder-normalize-before', action='store_true',
-                            help='apply layernorm before each decoder block')
-        parser.add_argument('--decoder-output-dim', type=int, metavar='N',
-                            help='decoder output dimension (extra linear layer '
-                                 'if different from decoder embed dim)')
-        parser.add_argument('--share-decoder-input-output-embed', action='store_true',
-                            help='share decoder input and output embeddings')
-        parser.add_argument('--share-all-embeddings', action='store_true',
-                            help='share encoder, decoder and output embeddings'
-                                 ' (requires shared dictionary and embed dim)')
-        parser.add_argument('--no-token-positional-embeddings', default=False, action='store_true',
-                            help='if set, disables positional embeddings (outside self attention)')
-        parser.add_argument('--adaptive-softmax-cutoff', metavar='EXPR',
-                            help='comma separated list of adaptive softmax cutoff points. '
-                                 'Must be used with adaptive_loss criterion')
-        parser.add_argument('--adaptive-softmax-dropout', type=float, metavar='D',
-                            help='sets adaptive softmax dropout for the tail projections')
-        parser.add_argument('--layernorm-embedding', action='store_true',
-                            help='add layernorm to embedding')
-        parser.add_argument('--no-scale-embedding', action='store_true',
-                            help="if True, don't scale embeddings")
-        parser.add_argument('--checkpoint-activations', action='store_true',
-                            help='checkpoint activations at each layer, which saves GPU '
-                                 'memory usage at the cost of some additional compute')
-        parser.add_argument('--offload-activations', action='store_true',
-                            help='checkpoint activations at each layer, then save to CPU. Sets --checkpoint-activations.')
-        # args for "Cross+Self-Attention for Transformer Models" (Peitz et al., 2019)
-        parser.add_argument('--no-cross-attention', default=False, action='store_true',
-                            help='do not perform cross-attention')
-        parser.add_argument('--cross-self-attention', default=False, action='store_true',
-                            help='perform cross+self-attention')
-        # args for "Reducing Transformer Depth on Demand with Structured Dropout" (Fan et al., 2019)
-        parser.add_argument('--encoder-layerdrop', type=float, metavar='D', default=0,
-                            help='LayerDrop probability for encoder')
-        parser.add_argument('--decoder-layerdrop', type=float, metavar='D', default=0,
-                            help='LayerDrop probability for decoder')
-        parser.add_argument('--encoder-layers-to-keep', default=None,
                            help='which layers to *keep* when pruning as a comma-separated list')
-        parser.add_argument('--decoder-layers-to-keep', default=None,
-                            help='which layers to *keep* when pruning as a comma-separated list')
-        # args for Training with Quantization Noise for Extreme Model Compression ({Fan*, Stock*} et al., 2020)
-        parser.add_argument('--quant-noise-pq', type=float, metavar='D', default=0,
-                            help='iterative PQ quantization noise at training time')
-        parser.add_argument('--quant-noise-pq-block-size', type=int, metavar='D', default=8,
-                            help='block size of quantization noise at training time')
-        parser.add_argument('--quant-noise-scalar', type=float, metavar='D', default=0,
-                            help='scalar quantization noise and scalar quantization at training time')
-        # args for Fully Sharded Data Parallel (FSDP) training
-        parser.add_argument(
-            '--min-params-to-wrap', type=int, metavar='D', default=DEFAULT_MIN_PARAMS_TO_WRAP,
-            help=(
-                'minimum number of params for a layer to be wrapped with FSDP() when '
-                'training with --ddp-backend=fully_sharded. Smaller values will '
-                'improve memory efficiency, but may make torch.distributed '
-                'communication less efficient due to smaller input sizes. This option '
-                'is set to 0 (i.e., always wrap) when --checkpoint-activations or '
-                '--offload-activations are passed.'
-            )
-        )
-        # args for mixture-of-expert layers
-        parser.add_argument('--moe-freq', type=int, metavar='D', default=0,
-                            help='Frequency at which we insert MoE Transformer layers')
-        parser.add_argument('--encoder-moe-freq', type=int, metavar='D', default=0,
-                            help='Frequency at which we insert MoE Transformer encoder layers')
-        parser.add_argument('--decoder-moe-freq', type=int, metavar='D', default=0,
-                            help='Frequency at which we insert MoE Transformer decoder layers')
-        parser.add_argument('--moe-expert-count', type=int, metavar='D', default=0,
-                            help='Number of experts in each MoE Layer')
-        parser.add_argument('--moe-gating-use-fp32', default=False, action='store_true',
-                            help="Use FP32 computations in MoE top2 gating function")
-        parser.add_argument('--moe-second-expert-policy', type=str, default='sampling',
-                            help="policy for second expert, options: all/sampling/random")
-        parser.add_argument(
-            '--moe-normalize-gate-prob-before-dropping', default=False, action='store_true',
-            help=(
-                "whether to normalize gate probs before or after dropping experts "
-                "for capacity and randomization"
-            )
-        )
-        parser.add_argument('--moe-expert-ffn-dim', type=int, default=0,
-                            help="MoE Expert FFN dimension")
-        parser.add_argument('--moe-top1-expert', default=False, action='store_true',
-                            help="Use top1 gate instead of top2")
-        parser.add_argument(
-            '--moe-eval-capacity-token-fraction', type=float, default=0.25,
-            help=(
-                "Fraction of tokens as capacity during validation; "
-                "if set to negative, use same as training. range: (0.0, 1.0]."
-            )
-        )
-        parser.add_argument('--moe-normalize-expert-grad', type=str, default='world_size',
-                            help="Divide expert gradients by (1) 'world_size' (2) 'sqrt_world_size'")
-        parser.add_argument('--use-moe-pad-mask', default=False, action='store_true',
-                            help="Don't route padding tokens to any expert")
-        parser.add_argument('--use-xmoe', default=False, action='store_true',
-                            help="Enable X-MoE")
-        parser.add_argument('--freeze-moe', default=False, action='store_true',
-                            help="Freeze MoE Params")
-        parser.add_argument('--deepnorm', default=False, action='store_true',
-                            help="Enable DeepNorm")
-        parser.add_argument('--subln', default=False, action='store_true',
-                            help="Enable SubLN")
-        parser.add_argument('--pretrained-dense-mt-model-path', type=str, default='')
-        # args for pseudo-MoE layers
-        parser.add_argument('--alternate-ffn-embed-dim', type=int, default=0,
-                            help="FFN embed dim of alternate pseudo-MoE blocks")
-        parser.add_argument('--rel-pos-buckets', type=int, default=0,
-                            help='')
-        parser.add_argument('--max-rel-pos', type=int, default=0,
-                            help='')
-        # fmt: on
-
-    @classmethod
-    def build_model(cls, args, task):
-        """Build a new model instance."""
-
-        # make sure all arguments are present in older models
-        base_architecture(args)
-
-        if getattr(args, "max_source_positions", None) is None:
-            args.max_source_positions = DEFAULT_MAX_SOURCE_POSITIONS
-        if getattr(args, "max_target_positions", None) is None:
-            args.max_target_positions = DEFAULT_MAX_TARGET_POSITIONS
-
-        args.ddp_rank = distributed_utils.get_data_parallel_rank()
-
-        src_dict, tgt_dict = task.source_dictionary, task.target_dictionary
-
-        if args.share_all_embeddings:
-            if src_dict != tgt_dict:
-                raise ValueError("--share-all-embeddings requires a joined dictionary")
-            if args.encoder_embed_dim != args.decoder_embed_dim:
-                raise ValueError(
-                    "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim"
-                )
-            if args.decoder_embed_path and (
-                args.decoder_embed_path !=
args.encoder_embed_path - ): - raise ValueError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - encoder_embed_tokens = cls.build_embedding( - args, src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = encoder_embed_tokens - args.share_decoder_input_output_embed = True - else: - encoder_embed_tokens = cls.build_embedding( - args, src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = cls.build_embedding( - args, tgt_dict, args.decoder_embed_dim, args.decoder_embed_path - ) - if getattr(args, "offload_activations", False): - args.checkpoint_activations = True # offloading implies checkpointing - - encoder_embed_positions = ( - PositionalEmbedding( - args.max_source_positions, - args.encoder_embed_dim, - src_dict.pad(), - learned=args.encoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - decoder_embed_positions = ( - PositionalEmbedding( - args.max_target_positions, - args.decoder_embed_dim, - tgt_dict.pad(), - learned=args.decoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - if args.share_decoder_input_output_embed: - output_projection = torch.nn.Linear( - decoder_embed_tokens.weight.shape[1], - decoder_embed_tokens.weight.shape[0], - bias=False, - ) - output_projection.weight = decoder_embed_tokens.weight - else: - output_projection = torch.nn.Linear( - args.decoder_embed_dim, len(tgt_dict), bias=False - ) - torch.nn.init.normal_( - output_projection.weight, mean=0, std=args.decoder_embed_dim**-0.5 - ) - - encoder = cls.build_encoder( - args, - encoder_embed_tokens, - encoder_embed_positions, - src_dict, - ) - decoder = cls.build_decoder( - args, - decoder_embed_tokens, - decoder_embed_positions, - output_projection, - tgt_dict, - ) - - if not args.share_all_embeddings: - min_params_to_wrap = getattr( - args, "min_params_to_wrap", DEFAULT_MIN_PARAMS_TO_WRAP - ) - # fsdp_wrap is a no-op when --ddp-backend != fully_sharded - encoder = fsdp_wrap(encoder, min_num_params=min_params_to_wrap) - decoder = fsdp_wrap(decoder, min_num_params=min_params_to_wrap) - return cls(args, encoder, decoder) - - @classmethod - def build_embedding(cls, args, dictionary, embed_dim, path=None): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - emb = Embedding(num_embeddings, embed_dim, padding_idx) - # if provided, load from preloaded dictionaries - if path: - embed_dict = utils.parse_embedding(path) - utils.load_embedding(embed_dict, dictionary, emb) - return emb - - @classmethod - def build_encoder(cls, args, embed_tokens, embed_positions, dictionary): - config = EncoderConfig() - config.override(args) - - return MTEncoder( - config, - embed_tokens, - embed_positions, - is_encoder_decoder=True, - dictionary=dictionary, - ) - - @classmethod - def build_decoder( - cls, args, embed_tokens, embed_positions, output_projection, dictionary - ): - config = DecoderConfig() - config.override(args) - - return MTDecoder( - config, - embed_tokens, - embed_positions, - output_projection, - is_encoder_decoder=True, - dictionary=dictionary, - ) - - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens, - return_all_hiddens: bool = False, - features_only: bool = False, - **kwargs - ): - encoder_out = self.encoder(src_tokens, return_all_hiddens=return_all_hiddens) - decoder_out = self.decoder( - prev_output_tokens, - encoder_out=encoder_out, - features_only=features_only, - return_all_hiddens=return_all_hiddens, - ) - return 
decoder_out - - def get_normalized_probs( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - """Get normalized probabilities (or log probs) from a net's output.""" - return self.get_normalized_probs_scriptable(net_output, log_probs, sample) - - -class MTEncoder(Encoder, FairseqEncoder): - def forward(self, src_tokens, **kwargs): - self_attn_padding_mask = src_tokens.eq(self.dictionary.pad()) - return super().forward( - src_tokens=src_tokens, encoder_padding_mask=self_attn_padding_mask, **kwargs - ) - - def reorder_encoder_out(self, encoder_out, new_order): - new_encoder_out = encoder_out["encoder_out"].index_select(1, new_order) - new_encoder_embedding = encoder_out["encoder_embedding"].index_select( - 0, new_order - ) - new_encoder_padding_mask = encoder_out["encoder_padding_mask"].index_select( - 0, new_order - ) - - encoder_states = encoder_out["encoder_states"] - if len(encoder_states) > 0: - for idx, state in enumerate(encoder_states): - encoder_states[idx] = state.index_select(1, new_order) - - return { - "encoder_out": new_encoder_out, # T x B x C - "encoder_padding_mask": new_encoder_padding_mask, - "encoder_embedding": new_encoder_embedding, # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - } - - def max_positions(self): - return self.embed_positions.max_positions - - -@register_model_architecture("mt", "mt_base") -def base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.activation_dropout = getattr(args, "activation_dropout", 0.0) - args.activation_fn = getattr(args, "activation_fn", "relu") - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - args.adaptive_input = getattr(args, "adaptive_input", False) - args.no_cross_attention = getattr(args, "no_cross_attention", False) - args.cross_self_attention = getattr(args, "cross_self_attention", False) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - 
args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - args.layernorm_embedding = getattr(args, "layernorm_embedding", False) - args.tie_adaptive_weights = getattr(args, "tie_adaptive_weights", False) - args.checkpoint_activations = getattr(args, "checkpoint_activations", False) - args.offload_activations = getattr(args, "offload_activations", False) - if args.offload_activations: - args.checkpoint_activations = True - args.encoder_layers_to_keep = getattr(args, "encoder_layers_to_keep", None) - args.decoder_layers_to_keep = getattr(args, "decoder_layers_to_keep", None) - args.encoder_layerdrop = getattr(args, "encoder_layerdrop", 0) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0) - args.quant_noise_pq_block_size = getattr(args, "quant_noise_pq_block_size", 8) - args.quant_noise_scalar = getattr(args, "quant_noise_scalar", 0) - args.is_moe = getattr(args, "is_moe", False) - args.selected_expert_count = getattr(args, "selected_expert_count", 2) diff --git a/kosmos-g/torchscale/examples/fairseq/tasks/__init__.py b/kosmos-g/torchscale/examples/fairseq/tasks/__init__.py deleted file mode 100644 index dfbe5e026..000000000 --- a/kosmos-g/torchscale/examples/fairseq/tasks/__init__.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import argparse -import importlib -import os - -# register dataclass -TASK_DATACLASS_REGISTRY = {} -TASK_REGISTRY = {} -TASK_CLASS_NAMES = set() - -# automatically import any Python files in the tasks/ directory -tasks_dir = os.path.dirname(__file__) -for file in os.listdir(tasks_dir): - path = os.path.join(tasks_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - task_name = file[: file.find(".py")] if file.endswith(".py") else file - module = importlib.import_module("tasks." + task_name) - - # expose `task_parser` for sphinx - if task_name in TASK_REGISTRY: - parser = argparse.ArgumentParser(add_help=False) - group_task = parser.add_argument_group("Task name") - # fmt: off - group_task.add_argument('--task', metavar=task_name, - help='Enable this task with: ``--task=' + task_name + '``') - # fmt: on - group_args = parser.add_argument_group("Additional command-line arguments") - TASK_REGISTRY[task_name].add_args(group_args) - globals()[task_name + "_parser"] = parser diff --git a/kosmos-g/torchscale/examples/fairseq/tasks/data/__init__.py b/kosmos-g/torchscale/examples/fairseq/tasks/data/__init__.py deleted file mode 100644 index 3ae31e250..000000000 --- a/kosmos-g/torchscale/examples/fairseq/tasks/data/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] diff --git a/kosmos-g/torchscale/examples/fairseq/tasks/data/basic_loader.py b/kosmos-g/torchscale/examples/fairseq/tasks/data/basic_loader.py deleted file mode 100644 index b7b7866c6..000000000 --- a/kosmos-g/torchscale/examples/fairseq/tasks/data/basic_loader.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import torch -from infinibatch.iterators import CheckpointableIterator - -from . 
import utils
-from .utils import ConcatIterator
-
-
-class BaseBatchGen(CheckpointableIterator):
-    """
-    This is a base class for batch generators that use infinibatch
-    """
-
-    def __init__(self):
-        self._iter = None
-        self.epoch = 1
-        self.next_epoch_idx = 1
-        self.sharded_checkpoint = False
-        self.should_close_after_finished = True
-
-    def _build_iter(self):
-        """
-        Build infinibatch iterator and assign to self._iter
-        """
-        raise NotImplementedError()
-
-    def _move_to_tensor(self, batch):
-        def to_tensor(x):
-            return torch.tensor(x)
-
-        return utils.apply_to_sample(to_tensor, batch)
-
-    @property
-    def iterator(self):
-        if self._iter is None:
-            raise NotImplementedError("_build_iter() must be called first")
-        return self._iter
-
-    def __iter__(self):
-        if self._iter is None:
-            raise NotImplementedError("_build_iter() must be called first")
-        return self._iter
-
-    def __next__(self):
-
-        return next(self._iter)
-
-    def setstate(self, value):
-        self._iter.setstate(value)
-
-    def getstate(self):
-        return self._iter.getstate()
-
-    def close(self):
-        self._iter.close()
-
-    def __len__(self) -> int:
-        return 819200000
-
-    def next_epoch_itr(
-        self, shuffle=True, fix_batches_to_gpus=False, set_dataset_epoch=True
-    ):
-        return self
-
-    def end_of_epoch(self) -> bool:
-        return False
-
-    def state_dict(self):
-        """Returns a dictionary containing a whole state of the iterator."""
-        return self.getstate()
-
-    def load_state_dict(self, state_dict):
-        """Copies the state of the iterator from the given *state_dict*."""
-        self.setstate(state_dict)
-
-    @property
-    def first_batch(self):
-        return "DUMMY"
-
-
-class ConcatLoader(BaseBatchGen):
-    def __init__(self, dataloaders):
-        super().__init__()
-        self.dataloaders = list(dataloaders)
-        self._build_iter()
-
-    def _build_iter(self):
-        """
-        Build infinibatch iterator and assign to self._iter
-        """
-        self._iter = ConcatIterator([dataloader.iterator for dataloader in self.dataloaders])
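The `basic_loader.py` hunk above is the contract every loader deleted below implements: a subclass assembles an infinibatch pipeline in `_build_iter()`, and training code drives the result through `getstate()`/`setstate()` so the stream can be checkpointed and resumed exactly. Below is a minimal sketch of that contract, assuming only the `infinibatch` package is installed; `ToyLoader` and its toy data are illustrative, not part of this repo:

```python
# A minimal stand-in for the BaseBatchGen contract (illustrative, not the
# repo's class): build a checkpointable pipeline, then iterate and resume.
from infinibatch import iterators


class ToyLoader:
    def __init__(self, documents, batch_size):
        self._iter = None
        self.documents = list(documents)
        self.batch_size = batch_size
        self._build_iter()

    def _build_iter(self):
        # Checkpointable source -> per-item transform -> fixed-size batches:
        # the same pipeline shape the loaders below construct.
        source = iterators.NativeCheckpointableIterator(self.documents)
        tokenized = iterators.MapIterator(source, lambda doc: doc.split())
        self._iter = iterators.FixedBatchIterator(tokenized, self.batch_size)

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._iter)

    def getstate(self):
        return self._iter.getstate()

    def setstate(self, value):
        self._iter.setstate(value)


loader = ToyLoader(["a b", "c d", "e f"], batch_size=1)
print(next(loader))        # [['a', 'b']]
state = loader.getstate()  # checkpoint mid-stream
print(next(loader))        # [['c', 'd']]
loader.setstate(state)     # rewind to the checkpoint
print(next(loader))        # [['c', 'd']] again
```

The real loaders layer sharding, shuffling, prefetching, and dynamic batching onto this same skeleton.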
diff --git a/kosmos-g/torchscale/examples/fairseq/tasks/data/laion_loader.py b/kosmos-g/torchscale/examples/fairseq/tasks/data/laion_loader.py
deleted file mode 100644
index 91c1b0328..000000000
--- a/kosmos-g/torchscale/examples/fairseq/tasks/data/laion_loader.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import json
-import os
-import multiprocessing
-import itertools
-import random
-
-from infinibatch import iterators
-from functools import partial
-from tasks.data.lm_loader import LMLoader
-from tasks.data.utils import NativeCheckpointableIterator, WeightIterator, EOL_SYMBOL, BOI_SYMBOL, EOI_SYMBOL, image_code_to_token
-from fairseq.data.encoders.gpt2_bpe import GPT2BPE
-
-
-class LaionLoader(LMLoader):
-    def _tokenize(self):
-        multilingual_iters = []
-        weights = []
-
-        for data in self.data:
-            multilingual_iters.append(
-                self._tokenize_foreach_lang(data)
-            )
-            if 'weight' in data:
-                weights.append(float(data['weight']))
-            else:
-                weights.append(int(data['count']))
-
-        if len(multilingual_iters) == 1:
-            return multilingual_iters[0]
-
-        sampling_iterator = WeightIterator(weights, self.seed)
-        control_iterator = NativeCheckpointableIterator(sampling_iterator)
-        tokenized_lines = iterators.MultiplexIterator(control_iterator, multilingual_iters)
-
-        return tokenized_lines
-
-    def _tokenize_foreach_lang(self, data):
-        dataset = list(zip(data['source']))
-        if self.shuffle:
-            chunk_files = iterators.InfinitePermutationSourceIterator(
-                dataset,
-                seed=self.seed,
-                shuffle=self.shuffle,
-                num_instances=self.num_shards,
-                instance_rank=self.shard_id,)
-        else:
-            chunk_files = iterators.ChunkedSourceIterator(
-                dataset,
-                num_instances=self.num_shards,
-                instance_rank=self.shard_id,)
-
-        tokenized_lines = iterators.SelectManyIterator(chunk_files, lambda files: self._read_from_files(*files))
-        tokenized_lines = iterators.SamplingRandomMapIterator(tokenized_lines, self._prepare, self.seed)
-
-        return tokenized_lines
-
-    @staticmethod
-    def fs_encode_line(fs_dict, words, append_eos=True):
-        ids = []
-        for i, word in enumerate(words):
-            idx = fs_dict.index(word)
-            ids.append(idx)
-        if append_eos:
-            ids.append(fs_dict.eos_index)
-        return ids
-
-    def _read_from_files(self, source_file):
-        """
-        <s> <image> image token </image> sentence </s>
-        <s> sentence <image> image token </image> </s>
-
-        """
-        file_path = os.path.join(self.data_dir, source_file)
-
-        if not os.path.exists(file_path):
-            print('| file {} does not exist'.format(file_path), flush=True)
-            return iter([])  # skip bad file
-
-        try:
-            with open(file_path, 'r', encoding='utf8') as f:
-                lines = f.read().strip().split('\n')
-        except Exception:
-            return iter([])  # skip bad file
-
-        for doc_jsonstr in lines:
-            try:
-                obj = json.loads(doc_jsonstr)
-                if int(obj['width']) < 200 or int(obj['height']) < 200:
-                    continue
-
-                line = obj['caption']
-                spm_tokenizer = self.tokenizer
-                if isinstance(spm_tokenizer, GPT2BPE):
-                    tokens = spm_tokenizer.encode(line).split(' ')
-                else:
-                    tokens = spm_tokenizer.encode(line, out_type=str)
-                tokenized_tokens = LaionLoader.fs_encode_line(self.dictionary, tokens, append_eos=False)
-                image_tokens = [image_code_to_token(i) for i in obj['input_ids']]
-                image_tokens = LaionLoader.fs_encode_line(self.dictionary, image_tokens, append_eos=False)
-                r = random.random()
-                doc = [self.dictionary.bos()]
-                if r < 0.5:
-                    doc.append(self.dictionary.index(BOI_SYMBOL))
-                    doc.extend(image_tokens)
-                    doc.append(self.dictionary.index(EOI_SYMBOL))
-                    doc.extend(tokenized_tokens)
-                else:
-                    doc.extend(tokenized_tokens)
-                    doc.append(self.dictionary.index(BOI_SYMBOL))
-                    doc.extend(image_tokens)
-                    doc.append(self.dictionary.index(EOI_SYMBOL))
-                doc.append(self.dictionary.eos())
-                yield doc
-            except Exception:
-                continue
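`LaionLoader._read_from_files` above implements the two layouts named in its docstring, flipping a coin per document so the image tokens appear before or after the caption. Here is a self-contained sketch of that interleaving with toy string tokens (the actual loader maps every symbol to a fairseq dictionary index):

```python
# Toy illustration of the two LAION document layouts; BOS/EOS/BOI/EOI and
# image_code_to_token mirror the symbols used above, everything else is made up.
import random

BOS, EOS, BOI, EOI = "<s>", "</s>", "<image>", "</image>"


def image_code_to_token(code):
    return "<image{}>".format(code)


def build_doc(caption_tokens, image_codes, rng):
    image_tokens = [image_code_to_token(c) for c in image_codes]
    doc = [BOS]
    if rng.random() < 0.5:
        # <s> <image> image tokens </image> caption </s>
        doc += [BOI] + image_tokens + [EOI] + caption_tokens
    else:
        # <s> caption <image> image tokens </image> </s>
        doc += caption_tokens + [BOI] + image_tokens + [EOI]
    doc.append(EOS)
    return doc


rng = random.Random(0)
print(build_doc(["a", "red", "bus"], [17, 512], rng))
```

Note that captions whose source image is smaller than 200x200 pixels are skipped entirely, as the width/height check above shows.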
diff --git a/kosmos-g/torchscale/examples/fairseq/tasks/data/laion_loader_test.py b/kosmos-g/torchscale/examples/fairseq/tasks/data/laion_loader_test.py
deleted file mode 100644
index b441d6301..000000000
--- a/kosmos-g/torchscale/examples/fairseq/tasks/data/laion_loader_test.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import sys, os
-sys.path.append(os.getcwd())
-
-from typing import NamedTuple
-import os
-import argparse
-import json
-
-import sentencepiece as spm
-# from fairseq.data.dictionary import Dictionary
-# from laion_loader import LaionLoader
-import tqdm
-def image_code_to_token(code):
-    return "<image{}>".format(code)
-
-
-def to_word(item, dictionary):
-    print(dictionary.string(item['net_input']['src_tokens'][0]))
-    print(dictionary.string(item['target'][0]))
-
-def run():
-    parser = argparse.ArgumentParser()
-    parser.add_argument('--seed', type=int, default=1)
-    parser.add_argument('--data', type=str, default='/mnt/msranlp/shumma/data/16g')
-    parser.add_argument('--spm_path', type=str, default='data/sentencepiece.bpe.model')
-    parser.add_argument('--tokens_per_sample', type=int, default=2048)
-    parser.add_argument('--sample_break_mode', type=str, default='')
-    parser.add_argument('--batch_read_ahead', type=int, default=1)
-    parser.add_argument('--mask_prob', type=float, default=0.15)
-    parser.add_argument('--span_length', type=int, default=3)
-    parser.add_argument('--dynamic_mask', default=True)
-    parser.add_argument('--max_sentences', type=int, default=1)  # batch size
-    parser.add_argument('--max_image_num', type=int, default=5)
-    parser.add_argument('--image_token_length', type=int, default=64)
-
-    args = parser.parse_args()
-
-    Dataset = NamedTuple('Dataset', [('data', str), ('data_dir', str), ('shuffle', bool)])
-    dataset = Dataset(json.load(open(f'{args.data}/json/train.json')), args.data, True)
-    dictionary = Dictionary.load(os.path.join(args.data, 'dict.txt'))
-    dictionary.add_symbol('</line>')
-    dictionary.add_symbol('<image>')
-    dictionary.add_symbol('</image>')
-    for i in range(8192):
-        dictionary.add_symbol(image_code_to_token(i))
-
-    tokenizer = spm.SentencePieceProcessor(model_file=args.spm_path)
-
-    mlm_loader = LaionLoader(
-        args,
-        dataset,
-        dictionary,
-        tokenizer,
-        max_tokens=args.tokens_per_sample,
-        max_sentences=args.max_sentences,
-        max_positions=None,
-        ignore_invalid_inputs=False,
-        required_batch_size_multiple=1,
-        seed=1,
-        num_shards=1,
-        shard_id=0,
-        disable_prefetching=True,
-    )
-
-    num = 0
-    i = 0
-    for item in mlm_loader:
-        print(item)
-        i += 1
-        if i > num:
-            break
-
-    # for item in tqdm.tqdm(mlm_loader):
-    #     i += 1
-
-def cook_json():
-    data = []
-    item = {
-        "source": [],
-        "source_lang": "wild",
-        "weight": 1.0,
-        "name": "wild"
-    }
-    for i in range(7190):
-        item['source'].append("../nemavq2_encoder_base_decoder_centercrop_wild/partition.{:03d}.ndjson".format(i))
-
-    data.append(item)
-    json.dump(data, open('train.json', 'w', encoding='utf-8'), indent=2)
-
-# def cook_json():
-#     data = []
-#     item = {
-#         "source": [],
-#         "source_lang": "laion",
-#         "weight": 1.0,
-#         "name": "laion"
-#     }
-#     for i in range(128):
-#         for j in range(94):
-#             item['source'].append("../laion2b_filtered_tsvs_v1/{:05d}/{:05d}_{:05d}.tsv".format(i, i, j))
-
-#     data.append(item)
-#     json.dump(data, open('train.json', 'w', encoding='utf-8'), indent=2)
-
-if __name__ == '__main__':
-    # run()
-    cook_json()
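The `lm_loader.py` hunk that follows carries the GPT-side batching. When `max_sentences` is not fixed, its `_batchify` sizes each batch from a token budget: the largest count whose longest sample still fits `max_tokens`, rounded down to `required_batch_size_multiple` and clamped to at least one sample. A worked sketch of that rule, with illustrative numbers:

```python
# Stand-alone version of the dynamic batch-size rule used by _batchify below.
def dynamic_batch_size(lengths, max_tokens, required_multiple):
    # Largest batch whose padded length fits the budget, rounded down to a
    # hardware-friendly multiple, never smaller than one sample.
    batch_size = max_tokens // max(lengths) // required_multiple * required_multiple
    return max(1, batch_size)


print(dynamic_batch_size([2048, 1536, 1024], max_tokens=65536, required_multiple=8))  # 32
print(dynamic_batch_size([2000], max_tokens=1024, required_multiple=8))               # 1 (clamped)
```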
diff --git a/kosmos-g/torchscale/examples/fairseq/tasks/data/lm_loader.py b/kosmos-g/torchscale/examples/fairseq/tasks/data/lm_loader.py
deleted file mode 100644
index a0b68e10f..000000000
--- a/kosmos-g/torchscale/examples/fairseq/tasks/data/lm_loader.py
+++ /dev/null
@@ -1,348 +0,0 @@
-import glob
-import os
-import torch
-import numpy as np
-import time
-import json
-import random
-import itertools
-import hydra
-import copy
-from omegaconf import DictConfig, OmegaConf
-
-from infinibatch import iterators
-from .basic_loader import BaseBatchGen
-from .utils import NativeCheckpointableIterator, WeightIterator, EOL_SYMBOL
-from .utils import safe_getattr, safe_hasattr
-
-
-class LMLoader(BaseBatchGen):
-
-    def __init__(
-        self,
-        args,
-        dataset,
-        dictionary,
-        tokenizer,
-        max_tokens=None,
-        max_sentences=None,
-        max_positions=None,
-        ignore_invalid_inputs=False,
-        required_batch_size_multiple=1,
-        seed=1,
-        epoch=1,
-        num_shards=1,
-        shard_id=0,
-        disable_prefetching=False,
-        data_name='gpt',
-    ):
-        super().__init__()
-        self.args = args
-        self.data = dataset.data
-        self.data_dir = dataset.data_dir
-        self.shuffle = dataset.shuffle
-        self.dictionary = dictionary
-        self.tokenizer = tokenizer
-
-        self.max_tokens = max_tokens
-        self.max_sentences = max_sentences
-        self.max_positions = max_positions
-        self.tokens_per_sample = args.tokens_per_sample
-        self.mlm_cut_length = safe_getattr(args, "mlm_cut_length", 0)
-        self.mlm_tokens_proportion = safe_getattr(args, "mlm_tokens_proportion", 0)
-        self.pad_to_max_len = safe_getattr(args, "pad_to_max_len", False)
-        self.ignore_invalid_inputs = ignore_invalid_inputs
-        self.required_batch_size_multiple = required_batch_size_multiple
-        self.seed = str(seed)
-        self.epoch = epoch
-        self.num_shards = num_shards
-        self.shard_id = shard_id
-
-        self.batch_read_ahead = args.batch_read_ahead
-        self.disable_prefetching = disable_prefetching
-        self.data_name = data_name
-        self._setup()
-
-        self._build_iter()
-
-    def _setup(self):
-        pass
-
-    def _build_iter(self):
-        tokenized_lines = self._tokenize()
-        self.padded_batches = self._batchify(tokenized_lines)
-
-        if self.disable_prefetching:
-            prefetch_batches = self.padded_batches
-        else:
-            prefetch_batches = iterators.PrefetchIterator(
-                self.padded_batches,
-                buffer_size=10000,
-                buffer_in_main_process=True,
-                log_empty_buffer_warning=True and self.shard_id == 0,
-            )
-
-        prefetch_batches = iterators.MapIterator(
-            prefetch_batches, self._move_to_tensor
-        )
-
-        self._iter = prefetch_batches
-
-    def _tokenize(self):
-        '''
-        data:
-        {
-            'source': list[Path],
-        }
-        '''
-        dataset = list(zip(self.data['source']))
-
-        if self.shuffle:
-            chunk_files = \
-                iterators.InfinitePermutationSourceIterator(
-                    dataset,
-                    seed=self.seed,
-                    shuffle=self.shuffle,
-                    num_instances=self.num_shards,
-                    instance_rank=self.shard_id,
-                )
-        else:
-            chunk_files = \
-                iterators.ChunkedSourceIterator(
-                    dataset,
-                    num_instances=self.num_shards,
-                    instance_rank=self.shard_id,
-                )
-
-        tokenized_lines = iterators.SelectManyIterator(chunk_files, lambda files: self._read_from_files(*files))
-        tokenized_lines = iterators.SamplingRandomMapIterator(tokenized_lines, self._prepare, self.seed)
-
-        return tokenized_lines
-
-    def getstate(self):
-        state = super().getstate()
-        state["epoch"] = self.epoch
-        state["iterations_in_epoch"] = None
-        return state
-
-    def _batchify(self, lines):
-
-        if self.max_sentences is not None:
-            if self.batch_read_ahead > 0:
-                lines = iterators.BlockwiseShuffleIterator(lines, self.batch_read_ahead, self.seed)
-            batches = iterators.FixedBatchIterator(lines, self.max_sentences)
-        else:
-
-            def dynamic_batch_size(sample):
-                lengths = [len(x) for x in sample]
-                batch_size = self.max_tokens // max(lengths) // self.required_batch_size_multiple * self.required_batch_size_multiple
-                return max(1, batch_size)
-
-            batches = iterators.BucketedReadaheadBatchIterator(
-                lines,
-                read_ahead=self.batch_read_ahead,
-                key=(lambda x: max(len(x[0]), len(x[1]))) if self.shuffle else None,
-                batch_size=dynamic_batch_size,
-                shuffle=self.shuffle,
-                seed=self.seed,
-            )
-
-        def collate(batch):
-            batch_size = len(batch)
-            mlm_batch_size = sum([len(x[2]) for x in batch])
-
-            gpt_max_length = max([len(x[0]) for x in batch])
-            if self.pad_to_max_len:
-                gpt_max_length = self.tokens_per_sample
-
-            mlm_max_length = 0
-            mlm_ntokens = 0
-            for x in batch:
-                for y in x[2]:
-                    mlm_max_length = max(mlm_max_length, len(y))
-                    mlm_ntokens += len(y)
-
-            gpt_source_ids = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32,
-                                     fill_value=self.dictionary.pad())
-            gpt_target_ids = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32,
-                                     fill_value=self.dictionary.pad())
-            mlm_source_ids = np.full(shape=(mlm_batch_size, mlm_max_length), dtype=np.int32,
-                                     fill_value=self.dictionary.pad())
-            gpt_input_mask_all = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, fill_value=0)
-            gpt_loss_mask_all = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, fill_value=1)
-            mlm_mask_all = np.full(shape=(mlm_batch_size, mlm_max_length), dtype=np.int32, fill_value=0)
-
-            mlm_index = 0
-            for i,
(gpt_ids, gpt_input_mask, mlm_ids_list, mlm_mask_list, gpt_loss_mask) in enumerate(batch): - gpt_source_ids[i, :len(gpt_ids)-1] = gpt_ids[:-1] - gpt_target_ids[i, :len(gpt_ids)-1] = gpt_ids[1:] - gpt_input_mask_all[i, :len(gpt_ids)-1] = gpt_input_mask[:-1] - gpt_loss_mask_all[i, :len(gpt_ids)-1] = gpt_loss_mask[1:] - - for j, (mlm_ids, mlm_mask) in enumerate(zip(mlm_ids_list, mlm_mask_list)): - mlm_source_ids[mlm_index, :len(mlm_ids)] = mlm_ids - mlm_mask_all[mlm_index, :len(mlm_mask)] = mlm_mask - mlm_index += 1 - - ret_batch = { - 'text':{ - 'net_input': { - 'src_tokens': gpt_source_ids.astype(np.int64), - 'mlm_src_tokens': mlm_source_ids.astype(np.int64) if mlm_batch_size !=0 else None, - 'gpt_input_mask': gpt_input_mask_all.astype(np.bool_), - 'gpt_loss_mask': gpt_loss_mask_all.astype(np.bool_), - 'mlm_mask': mlm_mask_all.astype(np.bool_) if mlm_batch_size !=0 else None - }, - 'target': gpt_target_ids.astype(np.int64), - 'nsentences': batch_size, - 'ntokens': sum([len(x[0]) for x in batch]), - 'mlm_ntokens': mlm_ntokens - } - } - - return ret_batch - - def collate_for_gpt(batch): - batch_size = len(batch) - gpt_max_length = max([len(x[0]) for x in batch]) - if self.pad_to_max_len: - gpt_max_length = self.tokens_per_sample - - gpt_source_ids = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, - fill_value=self.dictionary.pad()) - gpt_target_ids = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, - fill_value=self.dictionary.pad()) - gpt_input_mask_all = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, fill_value=0) - gpt_loss_mask_all = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, fill_value=1) - - for i, (gpt_ids, gpt_input_mask, mlm_ids_list, mlm_mask_list, gpt_loss_mask) in enumerate(batch): - gpt_source_ids[i, :len(gpt_ids)-1] = gpt_ids[:-1] - gpt_target_ids[i, :len(gpt_ids)-1] = gpt_ids[1:] - gpt_input_mask_all[i, :len(gpt_ids)-1] = gpt_input_mask[:-1] - gpt_loss_mask_all[i, :len(gpt_ids)-1] = gpt_loss_mask[1:] - - ret_batch = { - self.data_name:{ - 'net_input': { - 'src_tokens': gpt_source_ids.astype(np.int64), - }, - 'target': gpt_target_ids.astype(np.int64), - 'nsentences': batch_size, - 'ntokens': sum([len(x[0]) for x in batch]), - 'mlm_ntokens': 0 - } - } - - return ret_batch - - if self.mlm_tokens_proportion == 0: - padded_batches = iterators.MapIterator( - batches, collate_for_gpt - ) - else: - padded_batches = iterators.MapIterator( - batches, collate - ) - - return padded_batches - - def _prepare(self, _random, doc): - mlm_tokens, mlm_mask, gpt_input_mask, gpt_loss_mask = self._mlm_cut(_random, doc) - full_tokens = self._gpt(doc) - return full_tokens, gpt_input_mask, mlm_tokens, mlm_mask, gpt_loss_mask - - def _mlm_cut(self, _random, doc): - eod_index = self.dictionary.indices[EOL_SYMBOL] - - if self.mlm_tokens_proportion == 0: - mlm_tokens = [] - mlm_mask = [] - gpt_input_mask = [0] * len(doc) - gpt_loss_mask = [1] * len(doc) - return mlm_tokens, mlm_mask, gpt_input_mask, gpt_loss_mask - - cut_start = np.arange(1, len(doc)-3/2*self.mlm_cut_length, self.mlm_cut_length, dtype=int) - - _random.shuffle(cut_start) - mlm_tokens = [] - mlm_mask = [] - start_list = [] - gpt_input_mask = np.zeros(len(doc), dtype=int) - gpt_loss_mask = np.ones(len(doc), dtype=int) - mlm_tokens_total_num = (len(doc)-1) * self.mlm_tokens_proportion - - mlm_tokens_cur_num = 0 - - for start in cut_start: - eod_num = doc[start:start+self.mlm_cut_length].count(eod_index) - if eod_num >= 2: - continue - elif eod_num == 1: - eod_pos = 
doc[start:start+self.mlm_cut_length].index(eod_index) - if self.mlm_cut_length - eod_pos < 20: - continue - start_ind, end_ind = start+eod_pos+1, start + self.mlm_cut_length - else: - cut_pos = _random.randint(0, self.mlm_cut_length-1) - if cut_pos >= self.mlm_cut_length/2: - start_ind, end_ind = start, start + cut_pos + 1 - else: - start_ind, end_ind = start + cut_pos, start + self.mlm_cut_length - - assert eod_index not in doc[start_ind:end_ind] - - start_list.append(start) - mlm_tokens.append([self.dictionary.bos()] + doc[start_ind:end_ind]) - mlm_tokens_cur_num += end_ind - start_ind - mlm_mask.append([0] + [1]*(end_ind - start_ind)) - gpt_input_mask[start_ind:end_ind] = 1 - gpt_loss_mask[start_ind:end_ind-1] = 0 - - if mlm_tokens_cur_num > mlm_tokens_total_num: - break - - ind = np.array(start_list).argsort() - start_list = np.array(start_list)[ind] - mlm_tokens = np.array(mlm_tokens, dtype=object)[ind] - mlm_mask = np.array(mlm_mask, dtype=object)[ind] - - return mlm_tokens, mlm_mask, gpt_input_mask, gpt_loss_mask - - def _gpt(self, doc): - return doc - - def _read_from_files(self, source_file): - data = [] - file_path = os.path.join(self.data_dir, source_file) - - if not os.path.exists(file_path): - print('| file {} not exists'.format(file_path), flush=True) - return iter([]) # skip bad file - - with open(file_path, 'r', encoding='utf8') as f: - lines = f.read().strip().split('\n') - - gpt_format_text = [] - for line in lines: - gpt_format_text.extend(list(filter(None, json.loads(line)["text"].split("\n")))) - gpt_format_text.append('') - - tokenized_lines = [self.tokenizer.encode(line) for line in gpt_format_text] - tokenized_ids = [self.dictionary.encode_line(line, add_if_not_exist=False) for line in tokenized_lines] - - doc = [self.dictionary.bos()] - for ids in tokenized_ids: - if len(ids) > self.tokens_per_sample: # drop too long sentence - continue - - if len(doc) + len(ids) > self.tokens_per_sample: - if len(doc) > 5/2*self.mlm_cut_length + 1: - data.append(doc) - doc = [self.dictionary.bos()] - doc.extend(ids) - - if len(doc) > 1 and len(doc) <= self.tokens_per_sample: - if len(doc) > 5/2*self.mlm_cut_length + 1: - data.append(doc) - - return data \ No newline at end of file diff --git a/kosmos-g/torchscale/examples/fairseq/tasks/data/mlm_loader.py b/kosmos-g/torchscale/examples/fairseq/tasks/data/mlm_loader.py deleted file mode 100644 index eb9cd72d8..000000000 --- a/kosmos-g/torchscale/examples/fairseq/tasks/data/mlm_loader.py +++ /dev/null @@ -1,334 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import copy -import itertools -import os - -import numpy as np -from infinibatch import iterators - -from .basic_loader import BaseBatchGen -from .utils import NativeCheckpointableIterator, WeightIterator - - -class MLMLoader(BaseBatchGen): - def __init__( - self, - args, - dataset, - dictionary, - tokenizer, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - ): - super().__init__() - self.args = args - self.data = dataset.data - self.data_dir = dataset.data_dir - self.shuffle = dataset.shuffle - self.dictionary = dictionary - self.tokenizer = tokenizer - - self.max_tokens = max_tokens - self.max_sentences = max_sentences - self.max_positions = max_positions - self.tokens_per_sample = args.tokens_per_sample - self.sample_break_mode = args.sample_break_mode - self.ignore_invalid_inputs = ignore_invalid_inputs 
- self.required_batch_size_multiple = required_batch_size_multiple - self.seed = str(seed) - self.num_shards = num_shards - self.shard_id = shard_id - - self.batch_read_ahead = args.batch_read_ahead - - self._build_iter() - - def _build_iter(self): - tokenized_lines = self._multilingual_tokenize() - self.padded_batches = self._batchify(tokenized_lines) - - prefetch_batches = iterators.PrefetchIterator( - self.padded_batches, - buffer_size=10000, - buffer_in_main_process=True, - log_empty_buffer_warning=True and self.shard_id == 0, - ) - - prefetch_batches = iterators.MapIterator(prefetch_batches, self._move_to_tensor) - - self._iter = prefetch_batches - - def _multilingual_tokenize(self): - multilingual_iters = [] - weights = [] - - for data in self.data: - multilingual_iters.append(self._tokenize(data)) - if "weight" in data: - weights.append(float(data["weight"])) - else: - weights.append(int(data["count"])) - - if len(multilingual_iters) == 1: - return multilingual_iters[0] - - sampling_iterator = WeightIterator(weights) - control_iterator = NativeCheckpointableIterator(sampling_iterator) - tokenized_lines = iterators.MultiplexIterator( - control_iterator, multilingual_iters - ) - - return tokenized_lines - - def _tokenize(self, data): - """ - data: - { - 'source': list[Path], - 'source_lang': str, - 'count': int, - 'weight': float, - 'name': str, - } - """ - dataset = list( - zip( - data["source"], - itertools.repeat(data["source_lang"]), - ) - ) - - if self.shuffle: - chunk_files = iterators.InfinitePermutationSourceIterator( - dataset, - seed=self.seed, - shuffle=self.shuffle, - num_instances=self.num_shards, - instance_rank=self.shard_id, - ) - else: - chunk_files = iterators.ChunkedSourceIterator( - dataset, - num_instances=self.num_shards, - instance_rank=self.shard_id, - ) - - tokenized_lines = iterators.SelectManyIterator( - chunk_files, lambda files: self._read_from_files(*files) - ) - tokenized_lines = iterators.SamplingRandomMapIterator( - tokenized_lines, self._prepare, self.seed - ) - - return tokenized_lines - - def _batchify(self, lines): - - if self.max_sentences is not None: - if self.batch_read_ahead > 0: - lines = iterators.BlockwiseShuffleIterator( - lines, self.batch_read_ahead, self.seed - ) - batches = iterators.FixedBatchIterator(lines, self.max_sentences) - else: - - def dynamic_batch_size(sample): - lengths = [len(x) for x in sample] - batch_size = self.max_tokens // max(lengths) - batch_size = ( - batch_size - // self.required_batch_size_multiple - * self.required_batch_size_multiple - ) - return max(1, batch_size) - - batches = iterators.BucketedReadaheadBatchIterator( - lines, - read_ahead=self.batch_read_ahead, - key=(lambda x: max(len(x[0]), len(x[1]))) if self.shuffle else None, - batch_size=dynamic_batch_size, - shuffle=self.shuffle, - seed=self.seed, - ) - - def collate(batch): - batch_size = len(batch) - - mlm_source_max_length = max([len(x[0]) for x in batch]) - mlm_target_max_length = max([len(x[1]) for x in batch]) - s2s_source_max_length = max([len(x[2]) for x in batch]) - s2s_target_max_length = max([len(x[3]) for x in batch]) - - mlm_source_ids = np.full( - shape=(batch_size, mlm_source_max_length), - dtype=np.int32, - fill_value=self.dictionary.pad(), - ) - mlm_target_ids = np.full( - shape=(batch_size, mlm_target_max_length), - dtype=np.int32, - fill_value=self.dictionary.pad(), - ) - s2s_source_ids = np.full( - shape=(batch_size, s2s_source_max_length), - dtype=np.int32, - fill_value=self.dictionary.pad(), - ) - s2s_target_ids = np.full( - 
shape=(batch_size, s2s_target_max_length - 1), - dtype=np.int32, - fill_value=self.dictionary.pad(), - ) - s2s_prev_input_ids = np.full( - shape=(batch_size, s2s_target_max_length - 1), - dtype=np.int32, - fill_value=self.dictionary.pad(), - ) - - for i, ( - mlm_input_ids, - mlm_label_ids, - s2s_input_ids, - s2s_label_ids, - ) in enumerate(batch): - mlm_source_ids[i, : len(mlm_input_ids)] = mlm_input_ids - mlm_target_ids[i, : len(mlm_label_ids)] = mlm_label_ids - s2s_source_ids[i, : len(s2s_input_ids)] = s2s_input_ids - s2s_target_ids[i, : len(s2s_label_ids) - 1] = s2s_label_ids[1:] - s2s_prev_input_ids[i, : len(s2s_label_ids) - 1] = s2s_label_ids[:-1] - - ret_batch = { - "net_input": { - "src_tokens": mlm_source_ids.astype(np.int64), - }, - "target": mlm_target_ids.astype(np.int64), - "nsentences": batch_size, - "ntokens": sum([len(x[0]) for x in batch]), - } - - return ret_batch - - padded_batches = iterators.MapIterator(batches, collate) - - return padded_batches - - def _prepare(self, _random, doc): - nonmasked_tokens, masked_tokens = self._mask_lm(_random, doc) - nonnoise_spans, noise_spans = self._span_corruption(_random, doc) - return nonmasked_tokens, masked_tokens, nonnoise_spans, noise_spans - - def _mask_lm(self, _random, doc): - def mask_tokens(): - return "<mask>" - - length = len(doc) - mask_tokens_num = int(length * self.args.mask_prob) - mask_tokens_num = min(max(mask_tokens_num, 1), length - 1) - possible_mask_positions = _random.sample(range(length), k=mask_tokens_num) - possible_mask_positions = sorted(possible_mask_positions) - - nonmasked_tokens = copy.deepcopy(doc) - masked_tokens = [self.dictionary.pad() for _ in range(len(doc))] - - for position in possible_mask_positions: - # masked_tokens.append(nonmasked_tokens[position]) - masked_tokens[position] = nonmasked_tokens[position] - nonmasked_tokens[position] = self.dictionary.indices[mask_tokens()] - - return nonmasked_tokens, masked_tokens - - def _span_corruption(self, _random, doc): - def mask_tokens(i): - return f"<mask_{i}>" - - length = len(doc) - noise_tokens_num = int(length * self.args.mask_prob) - noise_tokens_num = min(max(noise_tokens_num, 1), length - 1) - noise_spans_num = int(noise_tokens_num / self.args.span_length) - noise_spans_num = max(noise_spans_num, 1) - nonnoise_tokens_num = length - noise_tokens_num - - if noise_spans_num == 1: - noise_split_positions = [0, noise_tokens_num] - else: - possible_split_positions = list(range(1, noise_tokens_num)) - _random.shuffle(possible_split_positions) - noise_split_positions = sorted( - possible_split_positions[: noise_spans_num - 1] - ) - noise_split_positions = [0] + noise_split_positions + [noise_tokens_num] - - possible_insert_positions = list(range(nonnoise_tokens_num)) - _random.shuffle(possible_insert_positions) - noise_insert_positions = sorted(possible_insert_positions[:noise_spans_num]) - - nonnoise_spans, noise_spans = [], [] - last_end = 0 - for i in range(noise_spans_num): - start_pos = noise_insert_positions[i] + noise_split_positions[i] - end_pos = noise_insert_positions[i] + noise_split_positions[i + 1] - mask_id = self.dictionary.indices[mask_tokens(i)] - - if getattr(self.args, "remove_target_sentinel", False): - noise_spans.append(doc[start_pos:end_pos]) - else: - noise_spans.append([mask_id] + doc[start_pos:end_pos]) - - if getattr(self.args, "remove_source_sentinel", False): - nonnoise_spans.extend(doc[last_end:start_pos]) - else: - nonnoise_spans.extend(doc[last_end:start_pos] + [mask_id]) - - last_end = end_pos - - 
nonnoise_spans.extend(doc[last_end:]) - noise_spans = sum(noise_spans, []) - - return nonnoise_spans, noise_spans - - def _read_from_files(self, source_file, source_lang): - # data = [] - file_path = os.path.join(self.data_dir, source_file) - - if not os.path.exists(file_path): - print("| file {} not exists".format(file_path), flush=True) - return iter([]) # skip bad file - - with open(file_path, "r", encoding="utf8") as f: - lines = f.read().strip().split("\n") - - doc = [self.dictionary.bos()] - for line in lines: - if line == "": - if self.sample_break_mode == "complete_doc": - # data.append(doc) - yield doc - doc = [self.dictionary.bos()] - continue - - tokenized_line = self.tokenizer.EncodeAsPieces(line) - tokenized_id = [ - self.dictionary.index(token) for token in tokenized_line - ] + [self.dictionary.eos_index] - - if len(tokenized_id) > self.tokens_per_sample: - continue - if len(doc) + len(tokenized_id) > self.tokens_per_sample: - # data.append(doc) - yield doc - doc = [self.dictionary.bos()] - doc.extend(tokenized_id) - - if len(doc) > 1 and len(doc) <= self.tokens_per_sample: - # data.append(doc) - yield doc - - # return data diff --git a/kosmos-g/torchscale/examples/fairseq/tasks/data/spm_lm_loader.py b/kosmos-g/torchscale/examples/fairseq/tasks/data/spm_lm_loader.py deleted file mode 100644 index e1eb90643..000000000 --- a/kosmos-g/torchscale/examples/fairseq/tasks/data/spm_lm_loader.py +++ /dev/null @@ -1,134 +0,0 @@ -import json -import os -import multiprocessing -import itertools - -from infinibatch import iterators -from functools import partial -from .lm_loader import LMLoader -from .utils import NativeCheckpointableIterator, WeightIterator, EOL_SYMBOL -from fairseq.data.encoders.gpt2_bpe import GPT2BPE - - -class SpmLmLoader(LMLoader): - def _tokenize(self): - multilingual_iters = [] - weights = [] - - for data in self.data: - multilingual_iters.append( - self._tokenize_foreach_lang(data) - ) - if 'weight' in data: - weights.append(float(data['weight'])) - else: - weights.append(int(data['count'])) - - if len(multilingual_iters) == 1: - return multilingual_iters[0] - - sampling_iterator = WeightIterator(weights, self.seed) - control_iterator = NativeCheckpointableIterator(sampling_iterator) - tokenized_lines = iterators.MultiplexIterator(control_iterator, multilingual_iters) - - return tokenized_lines - - def _tokenize_foreach_lang(self, data): - dataset = list(zip(data['source'])) - if self.shuffle: - chunk_files = iterators.InfinitePermutationSourceIterator( - dataset, - seed=self.seed, - shuffle=self.shuffle, - num_instances=self.num_shards, - instance_rank=self.shard_id,) - else: - chunk_files = iterators.ChunkedSourceIterator( - dataset, - num_instances=self.num_shards, - instance_rank=self.shard_id,) - - tokenized_lines = iterators.SelectManyIterator(chunk_files, lambda files: self._read_from_files(*files)) - tokenized_lines = iterators.SamplingRandomMapIterator(tokenized_lines, self._prepare, self.seed) - - return tokenized_lines - - @staticmethod - def fs_encode_line(fs_dict, words, append_eos=True): - ids = [] - for i, word in enumerate(words): - idx = fs_dict.index(word) - ids.append(idx) - if append_eos: - ids.append(fs_dict.eos_index) - return ids - - @staticmethod - def _doc_jsonstr_to_ids(doc_jsonstr, spm_tokenizer=None, fs_dict=None): - assert EOL_SYMBOL in fs_dict.indices - eol_index = fs_dict.indices[EOL_SYMBOL] - tokenized_ids = [] - for line in filter(None, json.loads(doc_jsonstr)["text"].split("\n")): - if isinstance(spm_tokenizer, GPT2BPE): - tokens 
= spm_tokenizer.encode(line).split(' ') - else: - tokens = spm_tokenizer.encode(line, out_type=str) - tokenized_tokens = SpmLmLoader.fs_encode_line(fs_dict, tokens, append_eos=False) - tokenized_tokens.append(eol_index) - tokenized_ids.append(tokenized_tokens) - if len(tokenized_ids) > 0: - last_line_ids = tokenized_ids[-1] - if last_line_ids[-1] == eol_index: - last_line_ids[-1] = fs_dict.eos_index - else: - print("[W] At SpmLmLoader._doc_jsonstr_to_ids, last line does not end with eol!") - last_line_ids.append(fs_dict.eos_index) - else: - print("[W] At SpmLmLoader._doc_jsonstr_to_ids, A null document with no lines!") - tokenized_ids = [[fs_dict.eos_index]] - return tokenized_ids - - def _read_from_files(self, source_file): - data = [] - file_path = os.path.join(self.data_dir, source_file) - - if not os.path.exists(file_path): - print('| file {} not exists'.format(file_path), flush=True) - return iter([]) # skip bad file - - try: - with open(file_path, 'r', encoding='utf8') as f: - lines = f.read().strip().split('\n') - except: - return iter([]) # skip bad file - - # NOTE #### simple spm implementation ############### - tokenized_ids = [] - for doc_jsonstr in lines: - ret = SpmLmLoader._doc_jsonstr_to_ids(doc_jsonstr, spm_tokenizer=self.tokenizer, fs_dict=self.dictionary) - tokenized_ids.extend(ret) - # ################################################### - - doc = [self.dictionary.bos()] - for ids in tokenized_ids: - - if getattr(self.args, "debug_p100", False): - if len(ids) > 256: - ids = ids[:256] - - if len(ids) >= self.tokens_per_sample: # drop too long sentence - # data.append(doc[:]) - ids = ids[:self.tokens_per_sample - 1] - # continue - - if len(doc) + len(ids) > self.tokens_per_sample: - if len(doc) > 5/2*self.mlm_cut_length + 1: - data.append(doc) - doc = [self.dictionary.bos()] - doc.extend(ids) - - if len(doc) > 1 and len(doc) <= self.tokens_per_sample: - if len(doc) > 5/2*self.mlm_cut_length + 1: - data.append(doc) - - return data \ No newline at end of file diff --git a/kosmos-g/torchscale/examples/fairseq/tasks/data/utils.py b/kosmos-g/torchscale/examples/fairseq/tasks/data/utils.py deleted file mode 100644 index c551398cf..000000000 --- a/kosmos-g/torchscale/examples/fairseq/tasks/data/utils.py +++ /dev/null @@ -1,156 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import collections -from random import Random -from typing import Dict, Iterable, Optional - -import numpy as np -from infinibatch import iterators - - -EOL_SYMBOL = "</line>" -BOI_SYMBOL = "<image>" -EOI_SYMBOL = "</image>" - - -def apply_to_sample(f, sample): - if hasattr(sample, "__len__") and len(sample) == 0: - return {} - - def _apply(x): - if isinstance(x, np.ndarray): - return f(x) - elif isinstance(x, collections.OrderedDict): - # OrderedDict has attributes that needs to be preserved - od = collections.OrderedDict( - (key, _apply(value)) for key, value in x.items() - ) - od.__dict__ = x.__dict__ - return od - elif isinstance(x, dict): - return {key: _apply(value) for key, value in x.items()} - elif isinstance(x, list): - return [_apply(x) for x in x] - elif isinstance(x, tuple): - return tuple(_apply(x) for x in x) - elif isinstance(x, set): - return {_apply(x) for x in x} - else: - return x - - return _apply(sample) - - -class NativeCheckpointableIterator(iterators.CheckpointableIterator): - def __init__(self, iterable: Iterable): - self._input_iterable = iterable - self.setstate(None) - - def getstate(self) -> Dict: - return {"num_items_yielded": 
self._num_items_yielded}
-
-    def setstate(self, checkpoint: Optional[Dict]):
-        self._iterator = iter(self._input_iterable)
-        self._num_items_yielded = (
-            iterators._advance_iterator(self._iterator, checkpoint["num_items_yielded"])
-            if checkpoint is not None
-            else 0
-        )
-
-    def __next__(self):
-        item = next(self._iterator)
-        self._num_items_yielded += 1
-        return item
-
-    def close(self):
-        pass
-
-
-class WeightIterator(object):
-    def __init__(self, weights, seed):
-        self.weights = weights
-        self.seed = seed
-        self.control_index = list(range(len(weights)))
-        self.setstate(None)
-
-    def __iter__(self):
-        return self
-
-    def getstate(self):
-        return {"random_state": self._random_state}
-
-    def setstate(self, checkpoint):
-        self._random_state = checkpoint["random_state"] if checkpoint else None
-        self._random = (
-            None  # this will trigger the lazy initialization in self.__next__
-        )
-
-    def __next__(self):
-        if self._random is None:
-            self._random = Random(self.seed)
-            if self._random_state is not None:
-                self._random.setstate(self._random_state)
-        idx = self._random.choices(self.control_index, self.weights)[0]
-        self._random_state = self._random.getstate()
-        return idx
-
-    def close(self):
-        pass
-
-
-def safe_getattr(obj, k, default=None):
-    """Returns obj[k] if it exists and is not None, otherwise returns default."""
-    from omegaconf import OmegaConf
-
-    if OmegaConf.is_config(obj):
-        return obj[k] if k in obj and obj[k] is not None else default
-
-    return getattr(obj, k, default)
-
-
-def safe_hasattr(obj, k):
-    """Returns True if the given key exists and is not None."""
-    return getattr(obj, k, None) is not None
-
-
-def image_code_to_token(code):
-    return "<image{}>".format(code)
-
-
-class ConcatIterator(iterators.CheckpointableIterator):
-    """
-    Concat items from all given iterators.
-    """
-    def __init__(self, source_iterators):
-        """
-        Args:
-            source_iterators: list of checkpointable iterators whose items
-                are merged, one item from each iterator per step
-        """
-        # TODO: use all() here?
-        for source_iterator in source_iterators:
-            if not isinstance(source_iterator, iterators.CheckpointableIterator):
-                raise ValueError('all iterators in source_iterators have to be CheckpointableIterator')
-        self._source_iterators = source_iterators  # type: List[CheckpointableIterator]
-
-    def getstate(self):
-        return {'input_states': tuple(iterator.getstate() for iterator in self._source_iterators)}
-
-    def setstate(self, checkpoint):
-        if checkpoint is None:
-            for iterator in self._source_iterators:
-                iterator.setstate(None)
-        else:
-            # TODO: Add check that both lists have the same length?
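-            # note: zip() stops at the shorter sequence, so a checkpoint
-            # carrying a mismatched number of states is silently truncated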
- for iterator, state in zip(self._source_iterators, checkpoint['input_states']): - iterator.setstate(state) - - def __next__(self): - res = {} # (note: can't use a generator expression, as it gets confused when a next() call raises StopIteration) - for iterator in self._source_iterators: - res.update(next(iterator)) - return res - - def close(self): - for it in self._source_iterators: - it.close() diff --git a/kosmos-g/torchscale/examples/fairseq/tasks/data/wild_loader.py b/kosmos-g/torchscale/examples/fairseq/tasks/data/wild_loader.py deleted file mode 100644 index 9b34a0d8b..000000000 --- a/kosmos-g/torchscale/examples/fairseq/tasks/data/wild_loader.py +++ /dev/null @@ -1,164 +0,0 @@ -import json -import os -import multiprocessing -import itertools -import random -import re - -from infinibatch import iterators -from functools import partial -from tasks.data.lm_loader import LMLoader -from tasks.data.utils import NativeCheckpointableIterator, WeightIterator, EOL_SYMBOL, BOI_SYMBOL, EOI_SYMBOL, image_code_to_token -from fairseq.data.encoders.gpt2_bpe import GPT2BPE -from spacy.lang.en import English - - -IMAGE_KEY="Images" -TEXT_KEY="Extracted" - - -class WildLoader(LMLoader): - def _setup(self): - self.nlp_sentencizer = English() - self.nlp_sentencizer.add_pipe("sentencizer") - self.max_image_num = 5 - - def _tokenize(self): - multilingual_iters = [] - weights = [] - - for data in self.data: - multilingual_iters.append( - self._tokenize_foreach_lang(data) - ) - if 'weight' in data: - weights.append(float(data['weight'])) - else: - weights.append(int(data['count'])) - - if len(multilingual_iters) == 1: - return multilingual_iters[0] - - sampling_iterator = WeightIterator(weights, self.seed) - control_iterator = NativeCheckpointableIterator(sampling_iterator) - tokenized_lines = iterators.MultiplexIterator(control_iterator, multilingual_iters) - - return tokenized_lines - - def _tokenize_foreach_lang(self, data): - dataset = list(zip(data['source'])) - if self.shuffle: - chunk_files = iterators.InfinitePermutationSourceIterator( - dataset, - seed=self.seed, - shuffle=self.shuffle, - num_instances=self.num_shards, - instance_rank=self.shard_id,) - else: - chunk_files = iterators.ChunkedSourceIterator( - dataset, - num_instances=self.num_shards, - instance_rank=self.shard_id,) - - tokenized_lines = iterators.SelectManyIterator(chunk_files, lambda files: self._read_from_files(*files)) - tokenized_lines = iterators.SamplingRandomMapIterator(tokenized_lines, self._prepare, self.seed) - - return tokenized_lines - - @staticmethod - def fs_encode_line(fs_dict, words, append_eos=True): - ids = [] - for i, word in enumerate(words): - idx = fs_dict.index(word) - ids.append(idx) - if append_eos: - ids.append(fs_dict.eos_index) - return ids - - def text_transform(self, line): - spm_tokenizer=self.tokenizer - if isinstance(spm_tokenizer, GPT2BPE): - tokens = spm_tokenizer.encode(line).split(' ') - else: - tokens = spm_tokenizer.encode(line, out_type=str) - tokenized_tokens = WildLoader.fs_encode_line(self.dictionary, tokens, append_eos=False) - return tokenized_tokens - - def clean(self, text): - # python re, remove html tags - clean = re.compile('<.*?>') - return re.sub(clean, '', text) - - - def _read_from_files(self, source_file): - """ - <s> <image> image token </image> sentence <image> image token </image> sentence </s> - 1, sample a random subsequnece: 3 sentences + the first image ... 
take up to 5 images + 3 sentences - 2, filter html tags <p>, <br>, <br/> - 3, single image, random sample rate as 0.5 - """ - file_path = os.path.join(self.data_dir, source_file) - - if not os.path.exists(file_path): - print('| file {} not exists'.format(file_path), flush=True) - return iter([]) # skip bad file - - try: - with open(file_path, 'r', encoding='utf8') as f: - lines = f.read().strip().split('\n') - except: - return iter([]) # skip bad file - - for doc_jsonstr in lines: - try: - json_obj = json.loads(doc_jsonstr) - doc = [self.dictionary.bos()] - start_idx = 0 - image_num = len(json_obj[IMAGE_KEY]) - if image_num == 1: - r = random.random() - if r > 0.5: - continue - for image_idx, image_item in enumerate(json_obj[IMAGE_KEY]): - if image_idx >= self.max_image_num: - if len(doc) < self.tokens_per_sample: - yield doc - break - - text_snippet = json_obj[TEXT_KEY][start_idx:image_item['Span'][0]-1] - text_snippet = self.clean(text_snippet) - if len(text_snippet) != 0: - if image_idx == 0: - # crop 3 sentences before the first image - sentences = list(self.nlp_sentencizer(text_snippet).sents) - text_snippet = ' '.join([str(sent) for sent in sentences[-3:]]) - text_token = self.text_transform(text_snippet) - doc.extend(text_token) - if len(doc) >= self.tokens_per_sample: # drop too long sentence - # data.append(doc[:]) - doc = doc[:self.tokens_per_sample - 2] - doc.append(self.dictionary.eos()) - yield doc - break - - image_tokens = [image_code_to_token(i) for i in image_item['input_ids']] - image_tokens = WildLoader.fs_encode_line(self.dictionary, image_tokens, append_eos=False) - doc.append(self.dictionary.index(BOI_SYMBOL)) - doc.extend(image_tokens) - doc.append(self.dictionary.index(EOI_SYMBOL)) - - start_idx = image_item['Span'][1] + 1 - if image_idx == image_num - 1: - # crop 3 sentences after the last image - text_snippet = json_obj[TEXT_KEY][start_idx:] - text_snippet = self.clean(text_snippet) - sentences = list(self.nlp_sentencizer(text_snippet).sents) - text_snippet = ' '.join([str(sent) for sent in sentences[:3]]) - text_token = self.text_transform(text_snippet) - doc.extend(text_token) - doc.append(self.dictionary.eos()) - if len(doc) < self.tokens_per_sample: - yield doc - break - except: - continue \ No newline at end of file diff --git a/kosmos-g/torchscale/examples/fairseq/tasks/data/wild_loader_test.py b/kosmos-g/torchscale/examples/fairseq/tasks/data/wild_loader_test.py deleted file mode 100644 index c76486796..000000000 --- a/kosmos-g/torchscale/examples/fairseq/tasks/data/wild_loader_test.py +++ /dev/null @@ -1,113 +0,0 @@ -IMAGE_KEY="Images" -TEXT_KEY="Extracted" - -import os, json, random, re - -max_image_num = 5 -tokens_per_sample = 2048 -from spacy.lang.en import English - -import sentencepiece as spm - -nlp_sentencizer = English() -nlp_sentencizer.add_pipe("sentencizer") -spm_tokenizer = spm.SentencePieceProcessor(model_file=r"C:\Users\shaohanh\Desktop\sentencepiece.bpe.model") - -def text_transform(line): - tokens = spm_tokenizer.encode(line, out_type=str) - return tokens - -def clean(text): - # python re, remove html tags - clean = re.compile('<.*?>') - return re.sub(clean, '', text) - - -def _read_from_files(source_file): - """ - <s> <image> image token </image> sentence <image> image token </image> sentence </s> - 1, sample a random subsequnece: 3 sentences + the first image ... 
take up to 5 images + 3 sentences
-    2, filter html tags <p>, <br>, <br/>
-    3, single image, random sample rate of 0.5
-    """
-    file_path = os.path.join(source_file)
-
-    if not os.path.exists(file_path):
-        print('| file {} does not exist'.format(file_path), flush=True)
-        return iter([])  # skip bad file
-
-    try:
-        with open(file_path, 'r', encoding='utf8') as f:
-            lines = f.read().strip().split('\n')
-    except Exception:
-        return iter([])  # skip bad file
-
-    for doc_jsonstr in lines:
-        json_obj = json.loads(doc_jsonstr)
-        doc = ['bos']
-        start_idx = 0
-        image_num = len(json_obj[IMAGE_KEY])
-        if image_num == 1:
-            r = random.random()
-            if r > 0.5:
-                continue
-        for image_idx, image_item in enumerate(json_obj[IMAGE_KEY]):
-            if image_idx >= max_image_num:
-                yield doc
-                break
-
-            text_snippet = json_obj[TEXT_KEY][start_idx:image_item['Span'][0]-1]
-            text_snippet = clean(text_snippet)
-            if len(text_snippet) != 0:
-                if image_idx == 0:
-                    # crop 3 sentences before the first image
-                    sentences = list(nlp_sentencizer(text_snippet).sents)
-                    text_snippet = ' '.join([str(sent) for sent in sentences[-3:]])
-                text_token = text_transform(text_snippet)
-                doc.extend(text_token)
-                if len(doc) >= tokens_per_sample:  # document hit the length limit: truncate, close with eos, and emit
-                    # data.append(doc[:])
-                    doc = doc[:tokens_per_sample - 2]
-                    doc.append('eos')
-                    yield doc
-                    break
-
-            image_tokens = [i for i in image_item['input_ids']]
-            doc.append('BOI_SYMBOL')
-            doc.extend(image_tokens)
-            doc.append('EOI_SYMBOL')
-
-            start_idx = image_item['Span'][1] + 1
-            if image_idx == image_num - 1:
-                # crop 3 sentences after the last image
-                text_snippet = json_obj[TEXT_KEY][start_idx:]
-                text_snippet = clean(text_snippet)
-                sentences = list(nlp_sentencizer(text_snippet).sents)
-                text_snippet = ' '.join([str(sent) for sent in sentences[:3]])
-                text_token = text_transform(text_snippet)
-                doc.extend(text_token)
-                doc.append('eos')
-                if len(doc) < tokens_per_sample:
-                    yield doc
-                break
-
-all_length = []
-image_num = []
-token_length = []
-j = 0
-for item in _read_from_files(r"C:\Users\shaohanh\Desktop\partition.000.ndjson"):
-    # all_length.append(len(item))
-    # image_num.append(item.count('BOI_SYMBOL'))
-    # token_length.append(len(item) - item.count('BOI_SYMBOL') * 197)
-    print(item)
-    j += 1
-    if j > 10:
-        break
-    # if j % 1000 == 0:
-    #     print(len(all_length), flush=True)
-    #     print(j)
-
-
-print('average length: ', sum(all_length) / max(len(all_length), 1))  # avoid ZeroDivisionError while the appends above stay commented out
-print('average image num: ', sum(image_num) / max(len(image_num), 1))
-print('average token length: ', sum(token_length) / max(len(token_length), 1))
\ No newline at end of file
diff --git a/kosmos-g/torchscale/examples/fairseq/tasks/data/wild_loader_test_2.py b/kosmos-g/torchscale/examples/fairseq/tasks/data/wild_loader_test_2.py
deleted file mode 100644
index 4cb92f6c6..000000000
--- a/kosmos-g/torchscale/examples/fairseq/tasks/data/wild_loader_test_2.py
+++ /dev/null
@@ -1,96 +0,0 @@
-import sys, os
-sys.path.append(os.getcwd())
-
-from typing import NamedTuple
-import os
-import argparse
-import json
-
-import sentencepiece as spm
-from fairseq.data.dictionary import Dictionary
-from wild_loader import WildLoader
-
-import tqdm
-
-
-def image_code_to_token(code):
-    return "<image{}>".format(code)
-
-
-def to_word(item, dictionary):
-    print(dictionary.string(item['net_input']['src_tokens'][0]))
-    print(dictionary.string(item['target'][0]))
-
-def run():
-    parser = argparse.ArgumentParser()
-    parser.add_argument('--seed', type=int, default=1)
-    parser.add_argument('--data', type=str, default='/mnt/msranlp/shumma/data/16g')
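-    # the default paths above and below are machine-specific; override them
-    # on the command line when running elsewhere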
parser.add_argument('--spm_path', type=str, default='data/sentencepiece.bpe.model') - parser.add_argument('--tokens_per_sample', type=int, default=2048) - parser.add_argument('--sample_break_mode', type=str, default='') - parser.add_argument('--batch_read_ahead', type=int, default=1) - parser.add_argument('--mask_prob', type=float, default=0.15) - parser.add_argument('--span_length', type=int, default=3) - parser.add_argument('--dynamic_mask', default=True) - parser.add_argument('--max_sentences', type=int, default=1) # batch size - parser.add_argument('--max_image_num', type=int, default=5) - parser.add_argument('--image_token_length', type=int, default=64) - - args = parser.parse_args() - - Dataset = NamedTuple('Dataset', [('data', str), ('data_dir', str), ('shuffle', bool)]) - dataset = Dataset(json.load(open(f'{args.data}/json/train.json')), args.data, True) - dictionary = Dictionary.load(os.path.join(args.data, 'dict.txt')) - dictionary.add_symbol('</line>') - dictionary.add_symbol('<image>') - dictionary.add_symbol('</image>') - for i in range(8192): - dictionary.add_symbol(image_code_to_token(i)) - - tokenizer = spm.SentencePieceProcessor(model_file=args.spm_path) - - mlm_loader = WildLoader( - args, - dataset, - dictionary, - tokenizer, - max_tokens=args.tokens_per_sample, - max_sentences=args.max_sentences, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - disable_prefetching=True, - ) - - num = 0 - i = 0 - for item in mlm_loader: - print(item) - i += 1 - if i > num: - break - - # for item in tqdm.tqdm(mlm_loader): - # i += 1 - -def cook_json(): - data = [] - item = { - "source": [], - "source_lang": "laion", - "weight": 1.0, - "name": "laion" - } - for i in range(128): - for j in range(94): - item['source'].append("../laion2b_filtered_tsvs_v1/{:05d}/{:05d}_{:05d}.tsv".format(i, i, j)) - - data.append(item) - json.dump(data, open('train.json', 'w', encoding='utf-8'), indent=2) - -if __name__ == '__main__': - run() - # cook_json() diff --git a/kosmos-g/torchscale/examples/fairseq/tasks/gpt_base.py b/kosmos-g/torchscale/examples/fairseq/tasks/gpt_base.py deleted file mode 100644 index e4c8abb38..000000000 --- a/kosmos-g/torchscale/examples/fairseq/tasks/gpt_base.py +++ /dev/null @@ -1,209 +0,0 @@ -import os -import json -from argparse import Namespace -import torch - -from fairseq import utils -from fairseq.data import Dictionary -from fairseq.tasks import FairseqTask, register_task -from fairseq.tasks.language_modeling import LanguageModelingTask, LanguageModelingConfig -from fairseq.data.encoders.gpt2_bpe import GPT2BPE -from dataclasses import dataclass, field -import sentencepiece - -from .data.spm_lm_loader import SpmLmLoader as LMLoader -from .data.utils import EOL_SYMBOL - -DEFAULT_ENCODER_JSON = "https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json" -DEFAULT_VOCAB_BPE = "https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe" - -@dataclass -class GPTLanguageModelingConfig(LanguageModelingConfig): - spm_model: str = field( - default="", - metadata={ - "help": "sentencepice model to tokenize the data" - }, - ) - gpt2_encoder_json: str = field( - default=DEFAULT_ENCODER_JSON, metadata={"help": "path to encoder.json"} - ) - gpt2_vocab_bpe: str = field( - default=DEFAULT_VOCAB_BPE, metadata={"help": "path to vocab.bpe"} - ) - dict_path: str = field( - default="", - metadata={ - "help": "sentencepice model to tokenize the data" - }, - ) - batch_read_ahead: int = field( - default=10000, - 
metadata={"help": "batch read ahead size for infinibatch"}, - ) - pad_to_max_len: bool = field( - default=False, - metadata={"help": "pad each sentence to max length"}, - ) - - -@register_task('gpt_pretraining', dataclass=GPTLanguageModelingConfig) -class GPTPretrainingTask(LanguageModelingTask): - def __init__(self, args, dictionary, tokenizer, output_dictionary=None, targets=None): - super().__init__(args, dictionary, output_dictionary=output_dictionary, targets=targets) - self.cfg = args - self.tokenizer = tokenizer - - @classmethod - def setup_task(cls, cfg, **kwargs): - """Setup the task (e.g., load dictionaries). - - Args: - args (argparse.Namespace): parsed command-line arguments - """ - paths = utils.split_paths(cfg.data) - assert len(paths) > 0 - - if len(cfg.dict_path) > 0: - dictionary = Dictionary.load(cfg.dict_path) - else: - dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt")) - dictionary.add_symbol(EOL_SYMBOL) - - output_dictionary = dictionary - - args = cfg - # upgrade old checkpoints - if getattr(args, "exclude_self_target", False): - args.self_target = False - - targets = [] - if getattr(args, "self_target", False): - targets.append("self") - if getattr(args, "future_target", False): - targets.append("future") - if getattr(args, "past_target", False): - targets.append("past") - if len(targets) == 0: - # standard language modeling - targets = ["future"] - - if len(cfg.spm_model) > 0: - tokenizer = sentencepiece.SentencePieceProcessor(model_file=cfg.spm_model) - else: - tokenizer = GPT2BPE(Namespace( - gpt2_vocab_bpe=cfg.gpt2_vocab_bpe, - gpt2_encoder_json=cfg.gpt2_encoder_json)) - - return cls(cfg, dictionary, tokenizer, output_dictionary, targets=targets) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - if "tnlg" in self.cfg.data and split == "train": - self.datasets[split] = { - # 'data': json.load(open(f'{self.cfg.data}/json/{split}-nogithub.json')) if split == 'train' else json.load(open(f'{self.cfg.data}/json/{split}.json')), - # 'data': json.load(open(f'{self.cfg.data}/json/{split}-nogithub-noarvix-nopubmed.json')) if split == 'train' else json.load(open(f'{self.cfg.data}/json/{split}.json')), - 'data': json.load(open(f'{self.cfg.data}/json/{split}-nogithub-noarvix-nopubmed-mtnlg.json')) if split == 'train' else json.load(open(f'{self.cfg.data}/json/{split}.json')), - 'data_dir': self.cfg.data, - 'shuffle': True if split == 'train' else False, - } - else: - self.datasets[split] = { - 'data': json.load(open(f'{self.cfg.data}/json/{split}.json')), - 'data_dir': self.cfg.data, - 'shuffle': True if split == 'train' else False, - } - self.datasets[split] = Namespace(**self.datasets[split]) - - def dataset(self, split): - if split not in self.datasets: - raise KeyError("Dataset not loaded: " + split) - - return self.datasets[split] - - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - data_buffer_size=0, - disable_iterator_cache=False, - skip_remainder_batch=False, - grouped_shuffling=False, - update_epoch_batch_itr=False - ): - disable_prefetching = False - if not dataset.shuffle: # for valid and test - shard_id = 0 - disable_prefetching = True - - return LMLoader( - self.cfg, - dataset, - self.dictionary, - self.tokenizer, - max_tokens=max_tokens, - max_sentences=max_sentences, - max_positions=max_positions, - 
ignore_invalid_inputs=ignore_invalid_inputs, - required_batch_size_multiple=required_batch_size_multiple, - seed=seed, - epoch=epoch, - num_shards=num_shards, - shard_id=shard_id, - disable_prefetching=disable_prefetching, - ) - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False - ): - """ - Do forward and backward, and return the loss as computed by *criterion* - for the given *model* and *sample*. - - Args: - sample (dict): the mini-batch. The format is defined by the - :class:`~fairseq.data.FairseqDataset`. - model (~fairseq.models.BaseFairseqModel): the model - criterion (~fairseq.criterions.FairseqCriterion): the criterion - optimizer (~fairseq.optim.FairseqOptimizer): the optimizer - update_num (int): the current update - ignore_grad (bool): multiply loss by 0 if this is set to True - - Returns: - tuple: - - the loss - - the sample size, which is used as the denominator for the - gradient - - logging outputs to display while training - """ - model.train() - model.set_num_updates(update_num) - with torch.autograd.profiler.record_function("forward"): - loss, sample_size, logging_output = criterion(model, sample['gpt']) - if ignore_grad: - loss *= 0 - with torch.autograd.profiler.record_function("backward"): - optimizer.backward(loss) - return loss, sample_size, logging_output - - def valid_step(self, sample, model, criterion): - model.eval() - with torch.no_grad(): - loss, sample_size, logging_output = criterion(model, sample['gpt']) - return loss, sample_size, logging_output diff --git a/kosmos-g/torchscale/examples/fairseq/tasks/pretraining.py b/kosmos-g/torchscale/examples/fairseq/tasks/pretraining.py deleted file mode 100644 index 57b062806..000000000 --- a/kosmos-g/torchscale/examples/fairseq/tasks/pretraining.py +++ /dev/null @@ -1,205 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import json -import logging -import os -from argparse import Namespace - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -from dataclasses import dataclass, field - -import sentencepiece as spm -from fairseq import utils -from fairseq.data import Dictionary -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from fairseq.tasks import FairseqTask, register_task -from omegaconf import II, MISSING - -from .data.mlm_loader import MLMLoader - -logger = logging.getLogger(__name__) - -SAMPLE_BREAK_MODE_CHOICES = ChoiceEnum(["none", "complete", "complete_doc", "eos"]) -SHORTEN_METHOD_CHOICES = ChoiceEnum(["none", "truncate", "random_crop"]) - - -@dataclass -class PretrainingConfig(FairseqDataclass): - data: str = field( - default=MISSING, - metadata={ - "help": "colon separated path to data directories list, \ - will be iterated upon during epochs in round-robin manner" - }, - ) - sample_break_mode: SAMPLE_BREAK_MODE_CHOICES = field( - default="complete", - metadata={ - "help": 'If omitted or "none", fills each sample with tokens-per-sample ' - 'tokens. If set to "complete", splits samples only at the end ' - "of sentence, but may include multiple sentences per sample. " - '"complete_doc" is similar but respects doc boundaries. ' - 'If set to "eos", includes only one sentence per sample.' 
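-            # the choices mirror fairseq's --sample-break-mode option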
- }, - ) - tokens_per_sample: int = field( - default=1024, - metadata={"help": "max number of tokens per sample for LM dataset"}, - ) - mask_prob: float = field( - default=0.15, - metadata={"help": "probability of replacing a token with mask"}, - ) - leave_unmasked_prob: float = field( - default=0.1, - metadata={"help": "probability that a masked token is unmasked"}, - ) - random_token_prob: float = field( - default=0.1, - metadata={"help": "probability of replacing a token with a random token"}, - ) - freq_weighted_replacement: bool = field( - default=False, - metadata={"help": "sample random replacement words based on word frequencies"}, - ) - mask_whole_words: bool = field( - default=False, - metadata={"help": "mask whole words; you may also want to set --bpe"}, - ) - mask_multiple_length: int = field( - default=1, - metadata={"help": "repeat the mask indices multiple times"}, - ) - mask_stdev: float = field( - default=0.0, - metadata={"help": "stdev of the mask length"}, - ) - shorten_method: SHORTEN_METHOD_CHOICES = field( - default="none", - metadata={ - "help": "if not none, shorten sequences that exceed --tokens-per-sample" - }, - ) - shorten_data_split_list: str = field( - default="", - metadata={ - "help": "comma-separated list of dataset splits to apply shortening to, " - 'e.g., "train,valid" (default: all dataset splits)' - }, - ) - seed: int = II("common.seed") - span_length: float = field( - default=3.0, - metadata={"help": "average span length for masking"}, - ) - remove_source_sentinel: bool = field( - default=False, - metadata={"help": "remove the source sentinel for the span corruption task"}, - ) - remove_target_sentinel: bool = field( - default=False, - metadata={"help": "remove the target sentinel for the span corruption task"}, - ) - batch_read_ahead: int = field( - default=100000, - metadata={"help": "batch read ahead size for infinibatch"}, - ) - required_batch_size_multiple: int = II("dataset.required_batch_size_multiple") - spm_model: str = field( - default="", - metadata={"help": "sentencepice model to tokenize the data"}, - ) - dict_file: str = field( - default="", - metadata={"help": ""}, - ) - - -@register_task("pretraining", dataclass=PretrainingConfig) -class PLMTask(FairseqTask): - def __init__(self, cfg, dictionary, tokenizer): - super().__init__(cfg) - self.cfg = cfg - self.dictionary = dictionary - self.tokenizer = tokenizer - self.seed = cfg.seed - self.mask_idx = dictionary.index("<mask>") - - @classmethod - def setup_task(cls, cfg, **kwargs): - paths = utils.split_paths(cfg.data) - assert len(paths) > 0 - if cfg.dict_file != "": - dictionary = Dictionary.load(cfg.dict_file) - else: - dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt")) - - # add mask token - dictionary.add_symbol("<mask>") - for i in range(100): - dictionary.add_symbol(f"<mask_{i}>") - - dictionary.pad_to_multiple_(cfg.required_batch_size_multiple) - logger.info("dictionary: {} types".format(len(dictionary))) - - # tokenizer = SentencepieceBPE(Namespace(sentencepiece_model=cfg.spm_model)) - tokenizer = spm.SentencePieceProcessor() - tokenizer.Load(cfg.spm_model) - return cls(cfg, dictionary, tokenizer) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - self.datasets[split] = { - "data": json.load(open(f"{self.cfg.data}/json/{split}.json")), - "data_dir": self.cfg.data, - "shuffle": True if split == "train" else False, - } - self.datasets[split] = Namespace(**self.datasets[split]) - - def dataset(self, split): - if split not in self.datasets: - raise 
KeyError("Dataset not loaded: " + split) - - return self.datasets[split] - - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - data_buffer_size=0, - disable_iterator_cache=False, - ): - return MLMLoader( - self.cfg, - dataset, - self.dictionary, - self.tokenizer, - max_tokens=max_tokens, - max_sentences=max_sentences, - max_positions=max_positions, - ignore_invalid_inputs=ignore_invalid_inputs, - required_batch_size_multiple=required_batch_size_multiple, - seed=seed, - num_shards=num_shards, - shard_id=shard_id, - ) - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary diff --git a/kosmos-g/torchscale/examples/fairseq/tasks/vl_gpt_base.py b/kosmos-g/torchscale/examples/fairseq/tasks/vl_gpt_base.py deleted file mode 100644 index 00e3147ce..000000000 --- a/kosmos-g/torchscale/examples/fairseq/tasks/vl_gpt_base.py +++ /dev/null @@ -1,297 +0,0 @@ -import os -import json -from argparse import Namespace -import torch - -from fairseq import utils -from fairseq.data import Dictionary -from fairseq.tasks import FairseqTask, register_task -from fairseq.tasks.language_modeling import LanguageModelingTask, LanguageModelingConfig -from fairseq.data.encoders.gpt2_bpe import GPT2BPE -from dataclasses import dataclass, field -import sentencepiece - -from .data.spm_lm_loader import SpmLmLoader as LMLoader -from .data.laion_loader import LaionLoader -from .data.wild_loader import WildLoader -from .data.utils import EOL_SYMBOL, BOI_SYMBOL, EOI_SYMBOL, image_code_to_token -from .data.basic_loader import ConcatLoader -from .gpt_base import GPTLanguageModelingConfig, GPTPretrainingTask - -DEFAULT_ENCODER_JSON = "https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json" -DEFAULT_VOCAB_BPE = "https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe" -IMAGE_COOEBOOK_SIZE = 8192 - -@dataclass -class VLGPTLanguageModelingConfig(GPTLanguageModelingConfig): - wild_data_dir: str = field(default="", metadata={"help": ""}) - wild_batch_size: int = field(default=4, metadata={"help": ""}) - laion_data_dir: str = field(default="", metadata={"help": ""}) - laion_batch_size: int = field(default=32, metadata={"help": ""}) - - - -@register_task('vl_gpt_pretraining', dataclass=VLGPTLanguageModelingConfig) -class VLGPTPretrainingTask(LanguageModelingTask): - def __init__(self, args, dictionary, tokenizer, output_dictionary=None, targets=None): - super().__init__(args, dictionary, output_dictionary=output_dictionary, targets=targets) - self.cfg = args - self.tokenizer = tokenizer - - @classmethod - def setup_task(cls, cfg, **kwargs): - """Setup the task (e.g., load dictionaries). 
- - Args: - args (argparse.Namespace): parsed command-line arguments - """ - paths = utils.split_paths(cfg.data) - assert len(paths) > 0 - - if len(cfg.dict_path) > 0: - dictionary = Dictionary.load(cfg.dict_path) - else: - dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt")) - dictionary.add_symbol(EOL_SYMBOL) - dictionary.add_symbol(BOI_SYMBOL) - dictionary.add_symbol(EOI_SYMBOL) - for i in range(IMAGE_COOEBOOK_SIZE): - dictionary.add_symbol(image_code_to_token(i)) - - print('| dictionary: {} types'.format(len(dictionary))) - output_dictionary = dictionary - - args = cfg - # upgrade old checkpoints - if getattr(args, "exclude_self_target", False): - args.self_target = False - - targets = [] - if getattr(args, "self_target", False): - targets.append("self") - if getattr(args, "future_target", False): - targets.append("future") - if getattr(args, "past_target", False): - targets.append("past") - if len(targets) == 0: - # standard language modeling - targets = ["future"] - - if len(cfg.spm_model) > 0: - tokenizer = sentencepiece.SentencePieceProcessor(model_file=cfg.spm_model) - else: - tokenizer = GPT2BPE(Namespace( - gpt2_vocab_bpe=cfg.gpt2_vocab_bpe, - gpt2_encoder_json=cfg.gpt2_encoder_json)) - - return cls(cfg, dictionary, tokenizer, output_dictionary, targets=targets) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - if "tnlg" in self.cfg.data and split == "train": - self.datasets[split] = { - # 'data': json.load(open(f'{self.cfg.data}/json/{split}-nogithub.json')) if split == 'train' else json.load(open(f'{self.cfg.data}/json/{split}.json')), - # 'data': json.load(open(f'{self.cfg.data}/json/{split}-nogithub-noarvix-nopubmed.json')) if split == 'train' else json.load(open(f'{self.cfg.data}/json/{split}.json')), - 'data': json.load(open(f'{self.cfg.data}/json/{split}-nogithub-noarvix-nopubmed-mtnlg.json')) if split == 'train' else json.load(open(f'{self.cfg.data}/json/{split}.json')), - 'data_dir': self.cfg.data, - 'shuffle': True if split == 'train' else False, - } - else: - self.datasets[split] = { - 'data': json.load(open(f'{self.cfg.data}/json/{split}.json')), - 'data_dir': self.cfg.data, - 'shuffle': True if split == 'train' else False, - } - self.datasets[split] = Namespace(**self.datasets[split]) - - def dataset(self, split): - if split not in self.datasets: - raise KeyError("Dataset not loaded: " + split) - - return self.datasets[split] - - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - data_buffer_size=0, - disable_iterator_cache=False, - skip_remainder_batch=False, - grouped_shuffling=False, - update_epoch_batch_itr=False - ): - data_loader_list = [] - - disable_prefetching = False - config_split = 'train' - if not dataset.shuffle: # for valid and test - shard_id = 0 - disable_prefetching = True - config_split = 'valid' - - if self.cfg.wild_data_dir: - wild_dataset = Namespace(**{ - 'data': json.load(open(f'{self.cfg.wild_data_dir}/json/{config_split}.json')), - 'data_dir': self.cfg.wild_data_dir, - 'shuffle': dataset.shuffle}) - - wild_vl_loader = WildLoader( - self.cfg, - wild_dataset, - self.dictionary, - self.tokenizer, - max_tokens=max_tokens, - max_sentences=max_sentences, - max_positions=max_positions, - ignore_invalid_inputs=ignore_invalid_inputs, - required_batch_size_multiple=required_batch_size_multiple, - seed=seed, - epoch=epoch, - 
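-                # num_shards/shard_id track the data-parallel world size and
-                # rank, so every worker reads a disjoint shard of the corpus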
num_shards=num_shards, - shard_id=shard_id, - disable_prefetching=disable_prefetching, - data_name='wild' - ) - data_loader_list.append(wild_vl_loader) - - if self.cfg.laion_data_dir: - laion_dataset = Namespace(**{ - 'data': json.load(open(f'{self.cfg.laion_data_dir}/json/{config_split}.json')), - 'data_dir': self.cfg.laion_data_dir, - 'shuffle': dataset.shuffle}) - - lain_vl_loader = LaionLoader( - self.cfg, - laion_dataset, - self.dictionary, - self.tokenizer, - max_tokens=max_tokens, - max_sentences=self.cfg.laion_batch_size, - max_positions=max_positions, - ignore_invalid_inputs=ignore_invalid_inputs, - required_batch_size_multiple=required_batch_size_multiple, - seed=seed, - epoch=epoch, - num_shards=num_shards, - shard_id=shard_id, - disable_prefetching=disable_prefetching, - data_name='laion' - ) - data_loader_list.append(lain_vl_loader) - - lm_loader = LMLoader( - self.cfg, - dataset, - self.dictionary, - self.tokenizer, - max_tokens=max_tokens, - max_sentences=max_sentences, - max_positions=max_positions, - ignore_invalid_inputs=ignore_invalid_inputs, - required_batch_size_multiple=required_batch_size_multiple, - seed=seed, - epoch=epoch, - num_shards=num_shards, - shard_id=shard_id, - disable_prefetching=disable_prefetching, - ) - data_loader_list.append(lm_loader) - - concat_loader = ConcatLoader(data_loader_list) - return concat_loader - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False - ): - """ - Do forward and backward, and return the loss as computed by *criterion* - for the given *model* and *sample*. - - Args: - sample (dict): the mini-batch. The format is defined by the - :class:`~fairseq.data.FairseqDataset`. 
- model (~fairseq.models.BaseFairseqModel): the model - criterion (~fairseq.criterions.FairseqCriterion): the criterion - optimizer (~fairseq.optim.FairseqOptimizer): the optimizer - update_num (int): the current update - ignore_grad (bool): multiply loss by 0 if this is set to True - - Returns: - tuple: - - the loss - - the sample size, which is used as the denominator for the - gradient - - logging outputs to display while training - """ - model.train() - model.set_num_updates(update_num) - - agg_loss, agg_sample_size, agg_logging_output = 0., 0., {} - - with torch.autograd.profiler.record_function("forward"): - loss, sample_size, logging_output = criterion(model, sample['gpt'], loss_name='gpt') - if ignore_grad: - loss *= 0 - with torch.autograd.profiler.record_function("backward"): - optimizer.backward(loss) - - agg_loss += loss.detach().item() - agg_sample_size += sample_size - agg_logging_output.update(logging_output) - - if 'laion' in sample: - with torch.autograd.profiler.record_function("forward"): - loss, sample_size, logging_output = criterion(model, sample['laion'], loss_name='laion') - if ignore_grad: - loss *= 0 - with torch.autograd.profiler.record_function("backward"): - optimizer.backward(loss) - - agg_loss += loss.detach().item() - agg_sample_size += sample_size - for key, value in logging_output.items(): - if key not in agg_logging_output: - agg_logging_output[key] = value - else: - agg_logging_output[key] += value - - if 'wild' in sample: - with torch.autograd.profiler.record_function("forward"): - loss, sample_size, logging_output = criterion(model, sample['wild'], loss_name='wild') - if ignore_grad: - loss *= 0 - with torch.autograd.profiler.record_function("backward"): - optimizer.backward(loss) - - agg_loss += loss.detach().item() - agg_sample_size += sample_size - for key, value in logging_output.items(): - if key not in agg_logging_output: - agg_logging_output[key] = value - else: - agg_logging_output[key] += value - - return agg_loss, agg_sample_size, agg_logging_output - - - def valid_step(self, sample, model, criterion): - model.eval() - with torch.no_grad(): - loss, sample_size, logging_output = criterion(model, sample['gpt']) - return loss, sample_size, logging_output diff --git a/kosmos-g/torchscale/examples/fairseq/train.py b/kosmos-g/torchscale/examples/fairseq/train.py deleted file mode 100644 index 0b0404e41..000000000 --- a/kosmos-g/torchscale/examples/fairseq/train.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -# flake8: noqa -import models -import tasks -from fairseq_cli.train import cli_main - -if __name__ == "__main__": - cli_main() diff --git a/kosmos-g/torchscale/examples/fairseq/utils/__init__.py b/kosmos-g/torchscale/examples/fairseq/utils/__init__.py deleted file mode 100644 index 3ae31e250..000000000 --- a/kosmos-g/torchscale/examples/fairseq/utils/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] diff --git a/kosmos-g/torchscale/examples/fairseq/utils/sparse_clip.py b/kosmos-g/torchscale/examples/fairseq/utils/sparse_clip.py deleted file mode 100644 index 2e567cad2..000000000 --- a/kosmos-g/torchscale/examples/fairseq/utils/sparse_clip.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import math -import warnings - -import torch -import torch.distributed as dist -from fairseq.utils import 
multi_tensor_l2norm_available, multi_tensor_total_norm - - -@torch.no_grad() -def clip_grad_norm_( - params, max_norm, moe_expert_count, aggregate_norm_fn=None -) -> torch.Tensor: - def grad_exists(p): - return p is not None and getattr(p, "grad", None) is not None - - if isinstance(params, torch.Tensor): - params = [params] - params = list(params) - params = list(filter(grad_exists, params)) - grads, expert_grads, base_expert_grads, sharded_grads = [], [], [], [] - denom = math.sqrt(max(dist.get_global_world_size(), moe_expert_count)) - for p in params: - if hasattr(p, "expert"): - expert_grads.append(p.grad.detach() / denom) - elif hasattr(p, "base_expert"): - base_expert_grads.append(p.grad.detach()) - elif hasattr(p, "_is_sharded"): - sharded_grads.append(p.grad.detach()) - else: - grads.append(p.grad.detach()) - if len(grads) == 0: - if len(params) > 0: - total_norm = params[0].new_tensor(0.0) - else: - total_norm = torch.tensor(0.0) - elif len(grads) == 1: - total_norm = torch.norm(grads[0], p=2, dtype=torch.float32) - else: - if multi_tensor_l2norm_available: - total_norm = multi_tensor_total_norm(grads) - else: - if torch.cuda.is_available(): - warnings.warn( - "amp_C fused kernels unavailable, disabling multi_tensor_l2norm; " - "you may get better performance by installing NVIDIA's apex library" - ) - device = torch.cuda.current_device() - elif grads[0].device.type == "xla": - device = grads[0].device - else: - device = torch.device("cpu") - total_norm = torch.norm( - torch.stack( - [torch.norm(g, p=2, dtype=torch.float32).to(device) for g in grads] - ) - ) - - # calculate split_norm and all_reduce with other workers - norms = [total_norm] - for split_grads in [expert_grads, sharded_grads]: - if len(split_grads) == 0: - continue - split_norm = torch.norm( - torch.stack([torch.norm(g, p=2, dtype=torch.float32) for g in split_grads]) - ) - if dist.is_initialized(): - split_norm.pow_(2) - dist.all_reduce(split_norm) - split_norm.sqrt_() - norms.append(split_norm) - if len(norms) > 1: - total_norm = torch.norm(torch.stack(norms)) - - if aggregate_norm_fn is not None: - total_norm = aggregate_norm_fn(total_norm) - - if max_norm > 0: - max_norm = float(max_norm) - clip_coef = (max_norm / (total_norm + 1e-6)).clamp_(max=1) - for g in grads + expert_grads + sharded_grads + base_expert_grads: - g.mul_(clip_coef) - return total_norm diff --git a/kosmos-g/torchscale/examples/fairseq/wild-token-base.sh b/kosmos-g/torchscale/examples/fairseq/wild-token-base.sh deleted file mode 100644 index 4e01e43d2..000000000 --- a/kosmos-g/torchscale/examples/fairseq/wild-token-base.sh +++ /dev/null @@ -1,45 +0,0 @@ -python -m torch.distributed.launch --nproc_per_node=16 --nnodes=8 \ - --node_rank=$OMPI_COMM_WORLD_RANK --master_addr="$MASTER_IP" --master_port=$MASTER_PORT train.py /mnt/unilm/shaohanh/data/tnlg_config/ \ - --task vl_gpt_pretraining \ - --activation-fn gelu \ - --share-decoder-input-output-embed \ - --save-interval-updates 5000 \ - --no-epoch-checkpoints \ - --memory-efficient-fp16 \ - --fp16-init-scale 4 \ - --arch lm_base \ - --sample-break-mode none \ - --tokens-per-sample 2048 \ - --optimizer adam --adam-betas "(0.9, 0.98)" \ - --adam-eps 1e-08 \ - --clip-norm 0.0 \ - --lr 6e-4 \ - --lr-scheduler polynomial_decay \ - --warmup-updates 750 \ - --dropout 0.1 \ - --attention-dropout 0.1 \ - --weight-decay 0.01 \ - --batch-size 1 \ - --update-freq 2 \ - --log-format simple --log-interval 50 --disable-validation \ - --required-batch-size-multiple 1 \ - --total-num-update 300000 \ - --max-update 
300000 \ - --seed 1 \ - --ddp-backend=legacy_ddp \ - --batch-read-ahead 100 \ - --rel-pos-buckets 32 \ - --max-rel-pos 128 \ - --dict-path /mnt/unilm/shumma/data/16g/dict.txt \ - --spm-model /mnt/unilm/shumma/data/16g/sentencepiece.bpe.model \ - --save-dir /mnt/unilm/shaohanh/exp/unigpt_exp/torchscale_base_wild_gpt \ - --tensorboard-logdir /mnt/unilm/shaohanh/exp/unigpt_exp/torchscale_base_wild_gpt/tb-logs \ - --wild-data-dir /mnt/unilm/shaohanh/bvl/wild_token \ - --wild-batch-size 8 \ - --checkpoint-activations \ - --subln \ - --criterion vl_cross_entropy \ - --decoder-embed-dim 768 \ - --decoder-ffn-embed-dim 3072 \ - --decoder-layers 12 \ - --decoder-attention-heads 12 \ No newline at end of file diff --git a/kosmos-g/torchscale/setup.py b/kosmos-g/torchscale/setup.py deleted file mode 100644 index eb6a3f277..000000000 --- a/kosmos-g/torchscale/setup.py +++ /dev/null @@ -1,25 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -from io import open - -from setuptools import find_packages, setup - -setup( - name="torchscale", - version="0.1.1", - author="TorchScale Team", - author_email="Shuming.Ma@microsoft.com", - description="Transformers at any scale", - long_description=open("README.md", "r", encoding="utf-8").read(), - long_description_content_type="text/markdown", - keywords="Transformers at any scale", - license="MIT", - url="https://github.com/msranlp/torchscale", - packages=find_packages(exclude=["*.tests", "*.tests.*", "tests.*", "tests"]), - install_requires=["apex", "torch>=1.8", "fairscale==0.4.0", "timm==0.4.12"], - python_requires=">=3.8.0", - classifiers=[ - "Programming Language :: Python :: 3", - ], -) diff --git a/kosmos-g/torchscale/torchscale/__init__.py b/kosmos-g/torchscale/torchscale/__init__.py deleted file mode 100644 index 3ae31e250..000000000 --- a/kosmos-g/torchscale/torchscale/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] diff --git a/kosmos-g/torchscale/torchscale/architecture/__init__.py b/kosmos-g/torchscale/torchscale/architecture/__init__.py deleted file mode 100644 index 3ae31e250..000000000 --- a/kosmos-g/torchscale/torchscale/architecture/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] diff --git a/kosmos-g/torchscale/torchscale/architecture/config.py b/kosmos-g/torchscale/torchscale/architecture/config.py deleted file mode 100644 index 75ca61ac3..000000000 --- a/kosmos-g/torchscale/torchscale/architecture/config.py +++ /dev/null @@ -1,212 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - - -class EncoderConfig(object): - def __init__(self, **kwargs): - self.encoder_embed_dim = kwargs.pop("encoder_embed_dim", 768) - self.encoder_attention_heads = kwargs.pop("encoder_attention_heads", 12) - self.encoder_ffn_embed_dim = kwargs.pop("encoder_ffn_embed_dim", 3072) - self.encoder_layers = kwargs.pop("encoder_layers", 12) - self.encoder_normalize_before = kwargs.pop("encoder_normalize_before", True) - self.activation_fn = kwargs.pop("activation_fn", "gelu") - self.dropout = kwargs.pop("dropout", 0.0) - self.drop_path_rate = kwargs.pop("drop_path_rate", 0.0) - self.attention_dropout = kwargs.pop("attention_dropout", 0.0) - self.activation_dropout = kwargs.pop("activation_dropout", 0.0) - self.no_scale_embedding = kwargs.pop("no_scale_embedding", True) - self.layernorm_embedding = 
kwargs.pop("layernorm_embedding", False) - self.moe_freq = kwargs.pop("moe_freq", 0) - self.moe_top1_expert = kwargs.pop("moe_top1_expert", False) - self.moe_expert_count = kwargs.pop("moe_expert_count", 0) - self.moe_gating_use_fp32 = kwargs.pop("moe_gating_use_fp32", True) - self.moe_eval_capacity_token_fraction = kwargs.pop( - "moe_eval_capacity_token_fraction", 0.25 - ) - self.moe_second_expert_policy = kwargs.pop("moe_second_expert_policy", "random") - self.moe_normalize_gate_prob_before_dropping = kwargs.pop( - "moe_normalize_gate_prob_before_dropping", False - ) - self.use_xmoe = kwargs.pop("use_xmoe", False) - self.rel_pos_buckets = kwargs.pop("rel_pos_buckets", 0) - self.max_rel_pos = kwargs.pop("max_rel_pos", 0) - self.deepnorm = kwargs.pop("deepnorm", False) - self.subln = kwargs.pop("subln", True) - self.bert_init = kwargs.pop("bert_init", False) - self.multiway = kwargs.pop("multiway", False) - self.share_encoder_input_output_embed = kwargs.pop( - "share_encoder_input_output_embed", False - ) - self.max_source_positions = kwargs.pop("max_source_positions", 1024) - self.no_output_layer = kwargs.pop("no_output_layer", False) - # Text - self.vocab_size = kwargs.pop("vocab_size", -1) - # Vision - self.img_size = kwargs.pop("img_size", 224) - self.patch_size = kwargs.pop("patch_size", 16) - self.in_chans = kwargs.pop("in_chans", 3) - # Fairscale - self.checkpoint_activations = kwargs.pop("checkpoint_activations", False) - self.fsdp = kwargs.pop("fsdp", False) - self.ddp_rank = kwargs.pop("ddp_rank", 0) - self.flash_attention = kwargs.pop("flash_attention", False) - self.scale_length = kwargs.pop("scale_length", 2048) - # Lora - self.lora = kwargs.pop("lora", False) - - if self.deepnorm: - self.encoder_normalize_before = False - self.subln = False - if self.subln: - self.encoder_normalize_before = True - self.deepnorm = False - if self.use_xmoe: - self.moe_normalize_gate_prob_before_dropping = True - self.moe_second_expert_policy = "random" - assert self.moe_freq > 0 and self.moe_expert_count > 0 - - def override(self, args): - for hp in self.__dict__.keys(): - if getattr(args, hp, None) is not None: - self.__dict__[hp] = getattr(args, hp, None) - - -class DecoderConfig(object): - def __init__(self, **kwargs): - self.decoder_embed_dim = kwargs.pop("decoder_embed_dim", 768) - self.decoder_attention_heads = kwargs.pop("decoder_attention_heads", 12) - self.decoder_ffn_embed_dim = kwargs.pop("decoder_ffn_embed_dim", 3072) - self.decoder_layers = kwargs.pop("decoder_layers", 12) - self.decoder_normalize_before = kwargs.pop("decoder_normalize_before", True) - self.activation_fn = kwargs.pop("activation_fn", "gelu") - self.dropout = kwargs.pop("dropout", 0.0) - self.drop_path_rate = kwargs.pop("drop_path_rate", 0.0) - self.attention_dropout = kwargs.pop("attention_dropout", 0.0) - self.activation_dropout = kwargs.pop("activation_dropout", 0.0) - self.no_scale_embedding = kwargs.pop("no_scale_embedding", True) - self.layernorm_embedding = kwargs.pop("layernorm_embedding", False) - self.moe_freq = kwargs.pop("moe_freq", 0) - self.moe_top1_expert = kwargs.pop("moe_top1_expert", False) - self.moe_expert_count = kwargs.pop("moe_expert_count", 0) - self.moe_gating_use_fp32 = kwargs.pop("moe_gating_use_fp32", True) - self.moe_eval_capacity_token_fraction = kwargs.pop( - "moe_eval_capacity_token_fraction", 0.25 - ) - self.moe_second_expert_policy = kwargs.pop("moe_second_expert_policy", "random") - self.moe_normalize_gate_prob_before_dropping = kwargs.pop( - 
"moe_normalize_gate_prob_before_dropping", False - ) - self.use_xmoe = kwargs.pop("use_xmoe", False) - self.rel_pos_buckets = kwargs.pop("rel_pos_buckets", 0) - self.max_rel_pos = kwargs.pop("max_rel_pos", 0) - self.deepnorm = kwargs.pop("deepnorm", False) - self.subln = kwargs.pop("subln", True) - self.bert_init = kwargs.pop("bert_init", False) - self.multiway = kwargs.pop("multiway", False) - self.share_decoder_input_output_embed = kwargs.pop( - "share_decoder_input_output_embed", False - ) - self.max_target_positions = kwargs.pop("max_target_positions", 1024) - self.no_output_layer = kwargs.pop("no_output_layer", False) - # Text - self.vocab_size = kwargs.pop("vocab_size", -1) - # Fairscale - self.checkpoint_activations = kwargs.pop("checkpoint_activations", False) - self.fsdp = kwargs.pop("fsdp", False) - self.ddp_rank = kwargs.pop("ddp_rank", 0) - self.flash_attention = kwargs.pop("flash_attention", False) - self.sope_rel_pos = kwargs.pop("sope_rel_pos", False) - self.scale_length = kwargs.pop("scale_length", 2048) - # Lora - self.lora = kwargs.pop("lora", False) - - if self.deepnorm: - self.decoder_normalize_before = False - self.subln = False - if self.subln: - self.decoder_normalize_before = True - self.deepnorm = False - if self.use_xmoe: - self.moe_normalize_gate_prob_before_dropping = True - self.moe_second_expert_policy = "random" - assert self.moe_freq > 0 and self.moe_expert_count > 0 - - def override(self, args): - for hp in self.__dict__.keys(): - if getattr(args, hp, None) is not None: - self.__dict__[hp] = getattr(args, hp, None) - - -class EncoderDecoderConfig(object): - def __init__(self, **kwargs): - self.encoder_embed_dim = kwargs.pop("encoder_embed_dim", 768) - self.encoder_attention_heads = kwargs.pop("encoder_attention_heads", 12) - self.encoder_ffn_embed_dim = kwargs.pop("encoder_ffn_embed_dim", 3072) - self.encoder_layers = kwargs.pop("encoder_layers", 12) - self.encoder_normalize_before = kwargs.pop("encoder_normalize_before", True) - self.decoder_embed_dim = kwargs.pop("decoder_embed_dim", 768) - self.decoder_attention_heads = kwargs.pop("decoder_attention_heads", 12) - self.decoder_ffn_embed_dim = kwargs.pop("decoder_ffn_embed_dim", 3072) - self.decoder_layers = kwargs.pop("decoder_layers", 12) - self.decoder_normalize_before = kwargs.pop("decoder_normalize_before", True) - self.activation_fn = kwargs.pop("activation_fn", "gelu") - self.dropout = kwargs.pop("dropout", 0.0) - self.drop_path_rate = kwargs.pop("drop_path_rate", 0.0) - self.attention_dropout = kwargs.pop("attention_dropout", 0.0) - self.activation_dropout = kwargs.pop("activation_dropout", 0.0) - self.no_scale_embedding = kwargs.pop("no_scale_embedding", True) - self.layernorm_embedding = kwargs.pop("layernorm_embedding", False) - self.moe_freq = kwargs.pop("moe_freq", 0) - self.moe_top1_expert = kwargs.pop("moe_top1_expert", False) - self.moe_expert_count = kwargs.pop("moe_expert_count", 0) - self.moe_gating_use_fp32 = kwargs.pop("moe_gating_use_fp32", True) - self.moe_eval_capacity_token_fraction = kwargs.pop( - "moe_eval_capacity_token_fraction", 0.25 - ) - self.moe_second_expert_policy = kwargs.pop("moe_second_expert_policy", "random") - self.moe_normalize_gate_prob_before_dropping = kwargs.pop( - "moe_normalize_gate_prob_before_dropping", False - ) - self.use_xmoe = kwargs.pop("use_xmoe", False) - self.rel_pos_buckets = kwargs.pop("rel_pos_buckets", 0) - self.max_rel_pos = kwargs.pop("max_rel_pos", 0) - self.deepnorm = kwargs.pop("deepnorm", False) - self.subln = kwargs.pop("subln", True) - 
self.bert_init = kwargs.pop("bert_init", False) - self.multiway = kwargs.pop("multiway", False) - self.share_all_embeddings = kwargs.pop("share_all_embeddings", False) - self.share_decoder_input_output_embed = kwargs.pop( - "share_decoder_input_output_embed", False - ) - self.max_source_positions = kwargs.pop("max_source_positions", 1024) - self.max_target_positions = kwargs.pop("max_target_positions", 1024) - self.no_output_layer = kwargs.pop("no_output_layer", False) - # Text - self.vocab_size = kwargs.pop("vocab_size", -1) - # Fairscale - self.checkpoint_activations = kwargs.pop("checkpoint_activations", False) - self.fsdp = kwargs.pop("fsdp", False) - self.ddp_rank = kwargs.pop("ddp_rank", 0) - self.flash_attention = kwargs.pop("flash_attention", False) - self.sope_rel_pos = kwargs.pop("sope_rel_pos", False) - self.scale_length = kwargs.pop("scale_length", 2048) - # Lora - self.lora = kwargs.pop("lora", False) - - if self.deepnorm: - self.encoder_normalize_before = False - self.decoder_normalize_before = False - self.subln = False - if self.subln: - self.encoder_normalize_before = True - self.decoder_normalize_before = True - self.deepnorm = False - if self.use_xmoe: - self.moe_normalize_gate_prob_before_dropping = True - self.moe_second_expert_policy = "random" - assert self.moe_freq > 0 and self.moe_expert_count > 0 - - def override(self, args): - for hp in self.__dict__.keys(): - if getattr(args, hp, None) is not None: - self.__dict__[hp] = getattr(args, hp, None) diff --git a/kosmos-g/torchscale/torchscale/architecture/decoder.py b/kosmos-g/torchscale/torchscale/architecture/decoder.py deleted file mode 100644 index f2ca6771a..000000000 --- a/kosmos-g/torchscale/torchscale/architecture/decoder.py +++ /dev/null @@ -1,510 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import math - -import numpy as np -import torch -import torch.nn as nn -from apex.normalization import FusedLayerNorm as LayerNorm -from fairscale.nn import checkpoint_wrapper, wrap - -from torchscale.architecture.utils import init_bert_params -from torchscale.component.droppath import DropPath -from torchscale.component.feedforward_network import FeedForwardNetwork, make_experts -from torchscale.component.multihead_attention import MultiheadAttention -from torchscale.component.relative_position_bias import RelativePositionBias -from torchscale.component.xmoe.moe_layer import MOELayer -from torchscale.component.xmoe.routing import Top1Gate, Top2Gate -from torchscale.component.sope_relative_position import SoPE - - -class DecoderLayer(nn.Module): - def __init__( - self, - args, - depth, - is_moe_layer=False, - is_encoder_decoder=False, - ): - super().__init__() - self.args = args - self.embed_dim = args.decoder_embed_dim - self.dropout_module = torch.nn.Dropout(args.dropout, inplace=True) - - if args.drop_path_rate > 0: - drop_path_prob = np.linspace(0, args.drop_path_rate, args.decoder_layers)[ - depth - ] - self.drop_path = DropPath(drop_path_prob) - else: - self.drop_path = None - - self.self_attn = self.build_self_attention(self.embed_dim, args) - - self.normalize_before = args.decoder_normalize_before - - self.self_attn_layer_norm = LayerNorm(self.embed_dim) - - if not is_encoder_decoder: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = self.build_encoder_attention(self.embed_dim, args) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim) - - self.is_moe_layer = is_moe_layer - self.ffn_dim = 
args.decoder_ffn_embed_dim - - if not self.is_moe_layer: - self.ffn = self.build_ffn( - self.embed_dim, - self.args, - ) - else: - if args.moe_top1_expert: - gate = Top1Gate( - self.embed_dim, - args.moe_expert_count, - use_fp32=args.moe_gating_use_fp32, - moe_eval_capacity_token_fraction=args.moe_eval_capacity_token_fraction, - use_xmoe=args.use_xmoe, - ) - else: - gate = Top2Gate( - self.embed_dim, - args.moe_expert_count, - args.moe_gating_use_fp32, - args.moe_second_expert_policy, - args.moe_normalize_gate_prob_before_dropping, - args.moe_eval_capacity_token_fraction, - use_xmoe=args.use_xmoe, - ) - experts = make_experts(args, self.embed_dim, self.ffn_dim) - self.moe_layer = MOELayer(gate, experts, args) - - self.final_layer_norm = LayerNorm(self.embed_dim) - - if args.deepnorm: - if is_encoder_decoder: - self.alpha = math.pow(3.0 * args.decoder_layers, 0.25) - else: - self.alpha = math.pow(2.0 * args.decoder_layers, 0.25) - else: - self.alpha = 1.0 - - def build_ffn(self, embed_dim, args): - return FeedForwardNetwork( - embed_dim, - self.ffn_dim, - args.activation_fn, - args.dropout, - args.activation_dropout, - args.subln, - ) - - def build_self_attention(self, embed_dim, args): - return MultiheadAttention( - args, - embed_dim, - args.decoder_attention_heads, - dropout=args.attention_dropout, - self_attention=True, - encoder_decoder_attention=False, - subln=args.subln, - ) - - def build_encoder_attention(self, embed_dim, args): - return MultiheadAttention( - args, - embed_dim, - args.decoder_attention_heads, - dropout=args.attention_dropout, - self_attention=False, - encoder_decoder_attention=True, - subln=args.subln, - ) - - def residual_connection(self, x, residual): - return residual * self.alpha + x - - def forward( - self, - x, - encoder_out=None, - encoder_padding_mask=None, - incremental_state=None, - self_attn_mask=None, - self_attn_padding_mask=None, - self_attn_rel_pos=None, - cross_attn_rel_pos=None, - self_attn_sope_rel_pos=None, - cross_attn_sope_rel_pos=None, - scale=1.0, - ): - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - - x, attn = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - attn_mask=self_attn_mask, - rel_pos=self_attn_rel_pos, - sope_rel_pos=self_attn_sope_rel_pos, - scale=scale, - ) - x = self.dropout_module(x) - - if self.drop_path is not None: - x = self.drop_path(x) - - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - if self.encoder_attn is not None and encoder_out is not None: - residual = x - if self.normalize_before: - x = self.encoder_attn_layer_norm(x) - - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=None, - rel_pos=cross_attn_rel_pos, - sope_rel_pos=cross_attn_sope_rel_pos, - scale=scale, - ) - x = self.dropout_module(x) - - if self.drop_path is not None: - x = self.drop_path(x) - - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.encoder_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - if not self.is_moe_layer: - x = self.ffn(x) - l_aux = None - else: - x = x.transpose(0, 1) - x, l_aux = self.moe_layer(x) - x = x.transpose(0, 1) - - if self.drop_path is not None: - x = self.drop_path(x) - - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.final_layer_norm(x) - 
- return x, attn, None, l_aux - - -class Decoder(nn.Module): - def __init__( - self, - args, - embed_tokens=None, - embed_positions=None, - output_projection=None, - is_encoder_decoder=False, - causal_mask=True, - **kwargs - ): - super().__init__(**kwargs) - self.args = args - self.causal_mask = causal_mask - - self.dropout_module = torch.nn.Dropout(args.dropout, inplace=True) - - embed_dim = args.decoder_embed_dim - self.embed_dim = embed_dim - self.embed_scale = 1.0 if args.no_scale_embedding else math.sqrt(embed_dim) - - self.embed_tokens = embed_tokens - self.embed_positions = embed_positions - - if ( - output_projection is None - and not args.no_output_layer - and args.vocab_size > 0 - ): - self.output_projection = self.build_output_projection(args) - else: - self.output_projection = output_projection - - if args.layernorm_embedding: - self.layernorm_embedding = LayerNorm(embed_dim) - else: - self.layernorm_embedding = None - - self.layers = nn.ModuleList([]) - - moe_freq = args.moe_freq - for i in range(args.decoder_layers): - is_moe_layer = moe_freq != 0 and (i + 1) % moe_freq == 0 - self.layers.append( - self.build_decoder_layer( - args, - depth=i, - is_moe_layer=is_moe_layer, - is_encoder_decoder=is_encoder_decoder, - ) - ) - - self.num_layers = len(self.layers) - - if args.decoder_normalize_before: - self.layer_norm = LayerNorm(embed_dim) - else: - self.layer_norm = None - - self.output_projection = output_projection - - self.self_attn_relative_position = None - self.cross_attn_relative_position = None - self.self_attn_sope = None - self.cross_attn_sope = None - - if args.sope_rel_pos: - assert args.rel_pos_buckets == 0 and args.max_rel_pos == 0 - self.self_attn_sope = SoPE( - args.decoder_embed_dim // args.decoder_attention_heads - ) - if is_encoder_decoder: - self.cross_attn_sope = SoPE( - args.decoder_embed_dim // args.decoder_attention_heads - ) - - if args.rel_pos_buckets > 0 and args.max_rel_pos > 0: - self.self_attn_relative_position = RelativePositionBias( - num_buckets=args.rel_pos_buckets, - max_distance=args.max_rel_pos, - n_heads=args.decoder_attention_heads, - ) - if is_encoder_decoder: - self.cross_attn_relative_position = RelativePositionBias( - num_buckets=args.rel_pos_buckets, - max_distance=args.max_rel_pos, - n_heads=args.decoder_attention_heads, - ) - - if args.bert_init: - self.apply(init_bert_params) - - if args.deepnorm: - if is_encoder_decoder: - init_scale = math.pow(12.0 * args.decoder_layers, 0.25) - else: - init_scale = math.pow(8.0 * args.decoder_layers, 0.25) - for name, p in self.named_parameters(): - if ( - "fc1" in name - or "fc2" in name - or "out_proj" in name - or "v_proj" in name - ): - p.data.div_(init_scale) - - if args.subln: - if is_encoder_decoder: - init_scale = math.sqrt(math.log(args.decoder_layers * 3)) - else: - init_scale = math.sqrt(math.log(args.decoder_layers * 2)) - for name, p in self.named_parameters(): - if "encoder_attn" in name: - continue - if ( - "fc1" in name - or "fc2" in name - or "out_proj" in name - or "v_proj" in name - ): - p.data.mul_(init_scale) - - def build_output_projection( - self, - args, - ): - if args.share_decoder_input_output_embed: - output_projection = torch.nn.Linear( - self.embed_tokens.weight.shape[1], - self.embed_tokens.weight.shape[0], - bias=False, - ) - output_projection.weight = self.embed_tokens.weight - else: - output_projection = torch.nn.Linear( - args.decoder_embed_dim, args.vocab_size, bias=False - ) - torch.nn.init.normal_( - output_projection.weight, mean=0, std=args.decoder_embed_dim 
** -0.5 - ) - return output_projection - - def build_decoder_layer( - self, args, depth, is_moe_layer=False, is_encoder_decoder=False - ): - layer = DecoderLayer( - args, - depth, - is_moe_layer=is_moe_layer, - is_encoder_decoder=is_encoder_decoder, - ) - if args.checkpoint_activations: - layer = checkpoint_wrapper(layer, offload_to_cpu=False) - if args.fsdp: - layer = wrap(layer) - return layer - - def forward_embedding( - self, - tokens, - token_embedding=None, - incremental_state=None, - ): - positions = None - if self.embed_positions is not None: - if tokens is not None: - positions = self.embed_positions( - tokens, incremental_state=incremental_state - ) - else: - positions = self.embed_positions( - token_embedding, incremental_state=incremental_state - ) - - if incremental_state is not None: - tokens = tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - if token_embedding is None: - token_embedding = self.embed_tokens(tokens) - - x = embed = self.embed_scale * token_embedding - - if positions is not None: - x += positions - - if self.layernorm_embedding is not None: - x = self.layernorm_embedding(x) - - x = self.dropout_module(x) - - return x, embed - - def forward( - self, - prev_output_tokens, - self_attn_padding_mask=None, - encoder_out=None, - incremental_state=None, - features_only=False, - return_all_hiddens=False, - token_embeddings=None, - scale=1.0, - **kwargs - ): - # embed tokens and positions - x, _ = self.forward_embedding( - prev_output_tokens, token_embeddings, incremental_state - ) - x = x.transpose(0, 1) - - # relative postion - self_attn_rel_pos_bias = None - self_attn_sope_rel_pos = None - slen = x.size(0) - - if self.self_attn_sope is not None: - offset = 0 if incremental_state is None else incremental_state[0]["prev_key"].shape[2] - self_attn_sope_rel_pos = self.self_attn_sope(slen, offset) - - if self.self_attn_relative_position is not None: - self_attn_rel_pos_bias = self.self_attn_relative_position( - batch_size=x.size(1), qlen=slen, klen=slen - ) - if incremental_state is not None: - self_attn_rel_pos_bias = self_attn_rel_pos_bias[:, -1:, :] - - cross_attn_rel_pos_bias = None - cross_attn_sope_rel_pos = None - if self.cross_attn_sope is not None: - cross_attn_sope_rel_pos = self.cross_attn_sope(slen + encoder_out["encoder_out"].size(0)) - - if self.cross_attn_relative_position is not None: - cross_attn_rel_pos_bias = self.cross_attn_relative_position( - batch_size=x.size(1), - qlen=slen, - klen=encoder_out["encoder_out"].size(0), - ) - if incremental_state is not None: - cross_attn_rel_pos_bias = cross_attn_rel_pos_bias[:, -1:, :] - - # decoder layers - inner_states = [x] - - if encoder_out is None: - l_aux = [] - else: - l_aux = encoder_out["l_aux"] if "l_aux" in encoder_out else [] - - for idx, layer in enumerate(self.layers): - if incremental_state is None: - self_attn_mask = torch.triu( - torch.zeros([x.size(0), x.size(0)]) - .float() - .fill_(float("-inf")) - .type_as(x), - 1, - ) if self.causal_mask else None - else: - self_attn_mask = None - if idx not in incremental_state: - incremental_state[idx] = {} - - x, layer_attn, _, l_aux_i = layer( - x, - encoder_out["encoder_out"] if encoder_out is not None else None, - encoder_out["encoder_padding_mask"] - if encoder_out is not None and "encoder_padding_mask" in encoder_out else None, - incremental_state[idx] if incremental_state is not None else None, - self_attn_mask=self_attn_mask, - self_attn_padding_mask=self_attn_padding_mask, - self_attn_rel_pos=self_attn_rel_pos_bias, - 
cross_attn_rel_pos=cross_attn_rel_pos_bias, - self_attn_sope_rel_pos=self_attn_sope_rel_pos, - cross_attn_sope_rel_pos=cross_attn_sope_rel_pos, - scale=scale, - ) - l_aux.append(l_aux_i) - inner_states.append(x) - - if self.layer_norm is not None: - x = self.layer_norm(x) - - x = x.transpose(0, 1) - - if not features_only and self.output_projection is not None: - x = self.output_layer(x) - - return x, { - "inner_states": inner_states, - "l_aux": l_aux, - "attn": [layer_attn.mean(dim=0)] if layer_attn is not None else None, - } - - def output_layer(self, features): - return self.output_projection(features) diff --git a/kosmos-g/torchscale/torchscale/architecture/encoder.py b/kosmos-g/torchscale/torchscale/architecture/encoder.py deleted file mode 100644 index 5709016b2..000000000 --- a/kosmos-g/torchscale/torchscale/architecture/encoder.py +++ /dev/null @@ -1,384 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import math - -import numpy as np -import torch -import torch.nn as nn -from apex.normalization import FusedLayerNorm as LayerNorm -from fairscale.nn import checkpoint_wrapper, wrap - -from torchscale.architecture.utils import init_bert_params -from torchscale.component.droppath import DropPath -from torchscale.component.feedforward_network import FeedForwardNetwork, make_experts -from torchscale.component.multihead_attention import MultiheadAttention -from torchscale.component.multiway_network import MultiwayWrapper, set_split_position -from torchscale.component.relative_position_bias import RelativePositionBias -from torchscale.component.xmoe.moe_layer import MOELayer -from torchscale.component.xmoe.routing import Top1Gate, Top2Gate - - -class EncoderLayer(nn.Module): - def __init__(self, args, depth, is_moe_layer=False, is_encoder_decoder=False): - super().__init__() - self.args = args - self.embed_dim = args.encoder_embed_dim - self.self_attn = self.build_self_attention(self.embed_dim, args) - self.self_attn_layer_norm = MultiwayWrapper(args, LayerNorm(self.embed_dim)) - self.dropout_module = torch.nn.Dropout(args.dropout, inplace=True) - - if args.drop_path_rate > 0: - drop_path_prob = np.linspace(0, args.drop_path_rate, args.encoder_layers)[ - depth - ] - self.drop_path = DropPath(drop_path_prob) - else: - self.drop_path = None - - self.normalize_before = args.encoder_normalize_before - self.is_moe_layer = is_moe_layer - self.ffn_dim = args.encoder_ffn_embed_dim - - if not self.is_moe_layer: - self.ffn = MultiwayWrapper( - args, - self.build_ffn( - self.embed_dim, - self.args, - ), - ) - else: - assert not self.args.multiway - if args.moe_top1_expert: - gate = Top1Gate( - self.embed_dim, - args.moe_expert_count, - use_fp32=args.moe_gating_use_fp32, - moe_eval_capacity_token_fraction=args.moe_eval_capacity_token_fraction, - use_xmoe=args.use_xmoe, - ) - else: - gate = Top2Gate( - self.embed_dim, - args.moe_expert_count, - args.moe_gating_use_fp32, - args.moe_second_expert_policy, - args.moe_normalize_gate_prob_before_dropping, - args.moe_eval_capacity_token_fraction, - use_xmoe=args.use_xmoe, - ) - experts = make_experts(args, self.embed_dim, self.ffn_dim) - self.moe_layer = MOELayer(gate, experts, args) - self.final_layer_norm = MultiwayWrapper(args, LayerNorm(self.embed_dim)) - - if args.deepnorm: - if is_encoder_decoder: - self.alpha = ( - math.pow( - math.pow(args.encoder_layers, 4) * args.decoder_layers, 0.0625 - ) - * 0.81 - ) - else: - self.alpha = math.pow(2.0 * args.encoder_layers, 0.25) - else: - self.alpha = 1.0 - 
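The DeepNorm branch above is worth a brief annotation (this note and the sketch below are editorial additions, not lines recovered from the deleted files). When `args.deepnorm` is set, the `residual_connection` helpers in both `DecoderLayer` and `EncoderLayer` compute `residual * alpha + x` instead of a plain residual sum, with `alpha` derived from the layer counts in the constructors. A minimal sketch of the encoder-only case; `deepnorm_alpha` is a hypothetical helper name used only for illustration:

```python
import math

import torch


def deepnorm_alpha(encoder_layers: int) -> float:
    # Encoder-only rule from EncoderLayer.__init__ above: alpha = (2N) ** 0.25.
    # (Hypothetical helper; the deleted sources compute this inline.)
    return math.pow(2.0 * encoder_layers, 0.25)


alpha = deepnorm_alpha(encoder_layers=12)  # ~2.21 for a 12-layer encoder
x = torch.randn(4, 8)                      # sublayer output
residual = torch.randn(4, 8)               # sublayer input
out = residual * alpha + x                 # what residual_connection() computes
```

Scaling the residual stream up while dividing selected weights (`fc1`, `fc2`, `out_proj`, `v_proj`) by a matching `init_scale` at construction, as the `deepnorm` blocks in `Decoder.__init__` and `Encoder.__init__` do, is what allows DeepNet-style stacks to train stably at large depth.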
- def build_ffn(self, embed_dim, args): - return FeedForwardNetwork( - embed_dim, - self.ffn_dim, - args.activation_fn, - args.dropout, - args.activation_dropout, - args.subln, - ) - - def build_self_attention(self, embed_dim, args): - return MultiheadAttention( - args, - embed_dim, - args.encoder_attention_heads, - dropout=args.attention_dropout, - self_attention=True, - encoder_decoder_attention=False, - subln=args.subln, - ) - - def residual_connection(self, x, residual): - return residual * self.alpha + x - - def forward(self, x, encoder_padding_mask, attn_mask=None, rel_pos=None, scale=1.0): - if attn_mask is not None: - attn_mask = attn_mask.masked_fill(attn_mask.to(torch.bool), -1e8) - - residual = x - if self.normalize_before: - x = self.self_attn_layer_norm(x) - x, _ = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=encoder_padding_mask, - attn_mask=attn_mask, - rel_pos=rel_pos, - scale=scale, - ) - x = self.dropout_module(x) - - if self.drop_path is not None: - x = self.drop_path(x) - - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.self_attn_layer_norm(x) - - residual = x - if self.normalize_before: - x = self.final_layer_norm(x) - if not self.is_moe_layer: - x = self.ffn(x) - l_aux = None - else: - x = x.transpose(0, 1) - x, l_aux = self.moe_layer(x) - x = x.transpose(0, 1) - - if self.drop_path is not None: - x = self.drop_path(x) - - x = self.residual_connection(x, residual) - if not self.normalize_before: - x = self.final_layer_norm(x) - return x, l_aux - - -class Encoder(nn.Module): - def __init__( - self, - args, - embed_tokens=None, - embed_positions=None, - output_projection=None, - is_encoder_decoder=False, - **kwargs - ): - self.args = args - super().__init__(**kwargs) - - self.dropout_module = torch.nn.Dropout(args.dropout, inplace=True) - - embed_dim = args.encoder_embed_dim - self.embed_scale = 1.0 if args.no_scale_embedding else math.sqrt(embed_dim) - - self.embed_tokens = embed_tokens - self.embed_positions = embed_positions - - if ( - output_projection is None - and not is_encoder_decoder - and not args.no_output_layer - and args.vocab_size > 0 - ): - self.output_projection = self.build_output_projection(args) - else: - self.output_projection = output_projection - - if args.layernorm_embedding: - self.layernorm_embedding = MultiwayWrapper( - args, LayerNorm(embed_dim), dim=1 - ) - else: - self.layernorm_embedding = None - - self.layers = nn.ModuleList([]) - - moe_freq = args.moe_freq - for i in range(args.encoder_layers): - is_moe_layer = moe_freq != 0 and (i + 1) % moe_freq == 0 - self.layers.append( - self.build_encoder_layer( - args, - depth=i, - is_moe_layer=is_moe_layer, - is_encoder_decoder=is_encoder_decoder, - ) - ) - self.num_layers = len(self.layers) - - if args.encoder_normalize_before: - self.layer_norm = MultiwayWrapper(args, LayerNorm(embed_dim)) - else: - self.layer_norm = None - - if args.rel_pos_buckets > 0 and args.max_rel_pos > 0: - self.relative_position = RelativePositionBias( - num_buckets=args.rel_pos_buckets, - max_distance=args.max_rel_pos, - n_heads=args.encoder_attention_heads, - ) - else: - self.relative_position = None - - if args.bert_init: - self.apply(init_bert_params) - - if args.deepnorm: - if is_encoder_decoder: - init_scale = ( - math.pow( - math.pow(args.encoder_layers, 4) * args.decoder_layers, 0.0625 - ) - / 1.15 - ) - else: - init_scale = math.pow(8.0 * args.encoder_layers, 0.25) - for name, p in self.named_parameters(): - if ( - "fc1" in name - or "fc2" in name - or 
"out_proj" in name - or "v_proj" in name - ): - p.data.div_(init_scale) - - if args.subln: - if is_encoder_decoder: - init_scale = math.sqrt( - math.log(3 * args.decoder_layers) - * math.log(2 * args.encoder_layers) - / 3 - ) - else: - init_scale = math.sqrt(math.log(args.encoder_layers * 2)) - for name, p in self.named_parameters(): - if ( - "fc1" in name - or "fc2" in name - or "out_proj" in name - or "v_proj" in name - ): - p.data.mul_(init_scale) - - def build_output_projection( - self, - args, - ): - if args.share_encoder_input_output_embed: - assert args.encoder_embedding_type == "language" - output_projection = torch.nn.Linear( - self.embed_tokens.weight.shape[1], - self.embed_tokens.weight.shape[0], - bias=False, - ) - output_projection.weight = self.embed_tokens.weight - else: - output_projection = torch.nn.Linear( - args.encoder_embed_dim, args.vocab_size, bias=False - ) - torch.nn.init.normal_( - output_projection.weight, mean=0, std=args.encoder_embed_dim ** -0.5 - ) - return output_projection - - def build_encoder_layer( - self, args, depth, is_moe_layer=False, is_encoder_decoder=False - ): - layer = EncoderLayer( - args, - depth, - is_moe_layer=is_moe_layer, - is_encoder_decoder=is_encoder_decoder, - ) - if args.checkpoint_activations: - layer = checkpoint_wrapper(layer, offload_to_cpu=False) - if args.fsdp: - layer = wrap(layer) - return layer - - def forward_embedding( - self, - src_tokens, - token_embedding=None, - ): - if token_embedding is None: - token_embedding = self.embed_tokens(src_tokens) - x = embed = self.embed_scale * token_embedding - if self.embed_positions is not None: - if src_tokens is not None: - x = embed + self.embed_positions(src_tokens) - else: - x = embed + self.embed_positions(x) - if self.layernorm_embedding is not None: - x = self.layernorm_embedding(x) - x = self.dropout_module(x) - return x, embed - - def forward( - self, - src_tokens, - encoder_padding_mask=None, - return_all_hiddens=False, - token_embeddings=None, - multiway_split_position=None, - features_only=False, - scale=1.0, - **kwargs - ): - assert src_tokens is not None or token_embeddings is not None - - if encoder_padding_mask is None: - if src_tokens is not None: - encoder_padding_mask = torch.zeros_like( - src_tokens, device=src_tokens.device - ).bool() - else: - encoder_padding_mask = torch.zeros( - [token_embeddings.size(0), token_embeddings.size(1)], - device=token_embeddings.device, - ).bool() - - if multiway_split_position is not None: - assert self.args.multiway - self.apply(set_split_position(multiway_split_position)) - - x, encoder_embedding = self.forward_embedding(src_tokens, token_embeddings) - x = x * (1 - encoder_padding_mask.unsqueeze(-1).type_as(x)) - - x = x.transpose(0, 1) - - encoder_states = [] - - if return_all_hiddens: - encoder_states.append(x) - - rel_pos_bias = None - if self.relative_position is not None: - rel_pos_bias = self.relative_position( - batch_size=x.size(1), qlen=x.size(0), klen=x.size(0) - ) - - l_aux = [] - for layer in self.layers: - x, l_aux_i = layer( - x, encoder_padding_mask=encoder_padding_mask, rel_pos=rel_pos_bias, scale=scale - ) - if return_all_hiddens: - assert encoder_states is not None - encoder_states.append(x) - l_aux.append(l_aux_i) - - if self.layer_norm is not None: - x = self.layer_norm(x) - - if not features_only and self.output_projection is not None: - x = self.output_projection(x) - - return { - "encoder_out": x, - "encoder_embedding": encoder_embedding, - "encoder_padding_mask": encoder_padding_mask, - "encoder_states": 
encoder_states, - "l_aux": l_aux, - } diff --git a/kosmos-g/torchscale/torchscale/architecture/encoder_decoder.py b/kosmos-g/torchscale/torchscale/architecture/encoder_decoder.py deleted file mode 100644 index 91a906ec4..000000000 --- a/kosmos-g/torchscale/torchscale/architecture/encoder_decoder.py +++ /dev/null @@ -1,61 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import torch.nn as nn - -from torchscale.architecture.decoder import Decoder -from torchscale.architecture.encoder import Encoder - - -class EncoderDecoder(nn.Module): - def __init__( - self, - args, - encoder_embed_tokens=None, - encoder_embed_positions=None, - decoder_embed_tokens=None, - decoder_embed_positions=None, - output_projection=None, - **kwargs - ): - super().__init__() - self.args = args - if args.share_all_embeddings: - args.share_decoder_input_output_embed = True - - self.encoder = Encoder( - args, - encoder_embed_tokens, - encoder_embed_positions, - is_encoder_decoder=True, - **kwargs - ) - - if args.share_all_embeddings and decoder_embed_tokens is None: - decoder_embed_tokens = self.encoder.embed_tokens - - self.decoder = Decoder( - args, - decoder_embed_tokens, - decoder_embed_positions, - output_projection, - is_encoder_decoder=True, - **kwargs - ) - - def forward( - self, - src_tokens, - prev_output_tokens, - return_all_hiddens=False, - features_only=False, - **kwargs - ): - encoder_out = self.encoder(src_tokens, return_all_hiddens=return_all_hiddens) - decoder_out = self.decoder( - prev_output_tokens, - encoder_out=encoder_out, - features_only=features_only, - return_all_hiddens=return_all_hiddens, - ) - return decoder_out diff --git a/kosmos-g/torchscale/torchscale/architecture/utils.py b/kosmos-g/torchscale/torchscale/architecture/utils.py deleted file mode 100644 index 58a5c15be..000000000 --- a/kosmos-g/torchscale/torchscale/architecture/utils.py +++ /dev/null @@ -1,33 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import torch.nn as nn - -from torchscale.component.multihead_attention import MultiheadAttention -from torchscale.component.multiway_network import MultiwayNetwork - - -def init_bert_params(module): - def normal_(data): - data.copy_(data.cpu().normal_(mean=0.0, std=0.02).to(data.device)) - - if isinstance(module, nn.Linear): - normal_(module.weight.data) - if module.bias is not None: - module.bias.data.zero_() - if isinstance(module, nn.Embedding): - normal_(module.weight.data) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - if isinstance(module, MultiheadAttention): - if isinstance(module.q_proj, MultiwayNetwork): - normal_(module.q_proj.A.weight.data) - normal_(module.q_proj.B.weight.data) - normal_(module.k_proj.A.weight.data) - normal_(module.k_proj.B.weight.data) - normal_(module.v_proj.A.weight.data) - normal_(module.v_proj.B.weight.data) - else: - normal_(module.q_proj.weight.data) - normal_(module.k_proj.weight.data) - normal_(module.v_proj.weight.data) diff --git a/kosmos-g/torchscale/torchscale/component/__init__.py b/kosmos-g/torchscale/torchscale/component/__init__.py deleted file mode 100644 index 3ae31e250..000000000 --- a/kosmos-g/torchscale/torchscale/component/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] diff --git a/kosmos-g/torchscale/torchscale/component/droppath.py b/kosmos-g/torchscale/torchscale/component/droppath.py deleted file 
mode 100644 index 18c064408..000000000 --- a/kosmos-g/torchscale/torchscale/component/droppath.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import torch.nn as nn -from timm.models.layers import drop_path - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).""" - - def __init__(self, drop_prob=None): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - - def forward(self, x): - return drop_path(x, self.drop_prob, self.training) - - def extra_repr(self): - return "p={}".format(self.drop_prob) diff --git a/kosmos-g/torchscale/torchscale/component/embedding.py b/kosmos-g/torchscale/torchscale/component/embedding.py deleted file mode 100644 index f6cc62e7f..000000000 --- a/kosmos-g/torchscale/torchscale/component/embedding.py +++ /dev/null @@ -1,113 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class VisionLanguageEmbedding(nn.Module): - def __init__(self, text_embed, vision_embed): - super().__init__() - self.text_embed = text_embed - self.vision_embed = vision_embed - - def forward(self, textual_tokens, visual_tokens, **kwargs): - if textual_tokens is None: - return self.vision_embed(visual_tokens) - - if visual_tokens is None: - return self.text_embed(textual_tokens) - - x1 = self.vision_embed(visual_tokens) - x2 = self.text_embed(textual_tokens) - - return torch.cat([x1, x2], dim=1) - - -class VisionEmbedding(nn.Module): - """Image to Patch Embedding""" - - def __init__( - self, - img_size=224, - patch_size=16, - in_chans=3, - embed_dim=768, - contain_mask_token=False, - prepend_cls_token=False, - ): - super().__init__() - img_size = (img_size, img_size) - patch_size = (patch_size, patch_size) - num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0]) - self.patch_shape = (img_size[0] // patch_size[0], img_size[1] // patch_size[1]) - self.img_size = img_size - self.patch_size = patch_size - self.num_patches = num_patches - - self.proj = nn.Conv2d( - in_chans, embed_dim, kernel_size=patch_size, stride=patch_size - ) - - if contain_mask_token: - self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) - else: - self.mask_token = None - - if prepend_cls_token: - self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) - else: - self.cls_token = None - - def forward(self, x, masked_position=None, **kwargs): - B, C, H, W = x.shape - assert ( - H == self.img_size[0] and W == self.img_size[1] - ), f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})." 
- x = self.proj(x).flatten(2).transpose(1, 2) - - batch_size, seq_len, _ = x.size() - - if masked_position is not None: - assert self.mask_token is not None - mask_token = self.mask_token.expand(batch_size, seq_len, -1) - w = masked_position.unsqueeze(-1).type_as(mask_token) - x = x * (1 - w) + mask_token * w - - if self.cls_token is not None: - cls_tokens = self.cls_token.expand( - batch_size, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - - return x - - -class TextEmbedding(nn.Embedding): - def reset_parameters(self): - nn.init.normal_(self.weight, mean=0, std=self.embedding_dim**-0.5) - self._fill_padding_idx_with_zero() - - -class PositionalEmbedding(nn.Embedding): - def forward( - self, - x, - positions=None, - **kwargs, - ): - if positions is None: - # being consistent with Fairseq, which starts from 2. - positions = ( - torch.arange(2, x.size(1) + 2, device=x.device).long().unsqueeze(0) - ) - return F.embedding( - positions, - self.weight, - self.padding_idx, - self.max_norm, - self.norm_type, - self.scale_grad_by_freq, - self.sparse, - ) diff --git a/kosmos-g/torchscale/torchscale/component/feedforward_network.py b/kosmos-g/torchscale/torchscale/component/feedforward_network.py deleted file mode 100644 index 31c0651b1..000000000 --- a/kosmos-g/torchscale/torchscale/component/feedforward_network.py +++ /dev/null @@ -1,131 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import torch -import torch.nn as nn -import torch.nn.functional as F -from apex.normalization import FusedLayerNorm as LayerNorm - - -class set_torch_seed(object): - def __init__(self, seed): - assert isinstance(seed, int) - self.rng_state = self.get_rng_state() - - torch.manual_seed(seed) - if torch.cuda.is_available(): - torch.cuda.manual_seed(seed) - - def get_rng_state(self): - state = {"torch_rng_state": torch.get_rng_state()} - if torch.cuda.is_available(): - state["cuda_rng_state"] = torch.cuda.get_rng_state() - return state - - def set_rng_state(self, state): - torch.set_rng_state(state["torch_rng_state"]) - if torch.cuda.is_available(): - torch.cuda.set_rng_state(state["cuda_rng_state"]) - - def __enter__(self): - return self - - def __exit__(self, *exc): - self.set_rng_state(self.rng_state) - - -def make_experts(args, embed_dim, expert_ffn_dim): - world_size = ( - 1 - if not torch.distributed.is_initialized() - else torch.distributed.get_world_size() - ) - expert_list = [] - ddp_rank = args.ddp_rank - start_seed = torch.randint(1000000, (1,)).item() - # at least as many experts than gpus - if args.moe_expert_count >= world_size: - assert ( - args.moe_expert_count % world_size == 0 - ), f"{args.moe_expert_count}, {world_size}" - local_moe_expert_count = args.moe_expert_count // world_size - for i in range(local_moe_expert_count): - with set_torch_seed(start_seed + ddp_rank * local_moe_expert_count + i): - expert_list.append( - FeedForwardNetwork( - embed_dim, - expert_ffn_dim, - args.activation_fn, - args.dropout, - args.activation_dropout, - args.subln, - ) - ) - else: - assert ( - world_size % args.moe_expert_count == 0 - ), f"{world_size}, {args.moe_expert_count}" - - with set_torch_seed(start_seed + ddp_rank % args.moe_expert_count): - expert_list.append( - FeedForwardNetwork( - embed_dim, - expert_ffn_dim, - args.activation_fn, - args.dropout, - args.activation_dropout, - args.subln, - ) - ) - experts = nn.ModuleList(expert_list) - return experts - - -def get_activation_fn(activation): - if 
activation == "relu": - return F.relu - elif activation == "gelu": - return F.gelu - else: - raise NotImplementedError - - -class FeedForwardNetwork(nn.Module): - def __init__( - self, - embed_dim, - ffn_dim, - activation_fn, - dropout, - activation_dropout, - subln=False, - ): - super().__init__() - self.embed_dim = embed_dim - self.activation_fn = get_activation_fn(activation=str(activation_fn)) - self.activation_dropout_module = torch.nn.Dropout( - activation_dropout, inplace=True - ) - self.dropout_module = torch.nn.Dropout(dropout, inplace=True) - self.fc1 = nn.Linear(self.embed_dim, ffn_dim) - self.fc2 = nn.Linear(ffn_dim, self.embed_dim) - self.ffn_layernorm = LayerNorm(ffn_dim) if subln else None - - def reset_parameters(self): - self.fc1.reset_parameters() - self.fc2.reset_parameters() - if self.ffn_layernorm is not None: - self.ffn_layernorm.reset_parameters() - - def forward(self, x): - x_shape = x.shape - x = x.reshape(-1, x.size(-1)) - x = self.fc1(x) - x = self.activation_fn(x.float()).type_as(x) - x = self.activation_dropout_module(x) - if self.ffn_layernorm is not None: - x = self.ffn_layernorm(x) - x = self.fc2(x) - x = x.view(x_shape) - x = self.dropout_module(x) - return x diff --git a/kosmos-g/torchscale/torchscale/component/multihead_attention.py b/kosmos-g/torchscale/torchscale/component/multihead_attention.py deleted file mode 100644 index b59e0303c..000000000 --- a/kosmos-g/torchscale/torchscale/component/multihead_attention.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import math - -import torch -import torch.nn.functional as F -from apex.normalization import FusedLayerNorm as LayerNorm -from torch import nn -from diffusers.models.attention_processor import LoRALinearLayer - -from .multiway_network import MultiwayWrapper -from xformers.ops import memory_efficient_attention, LowerTriangularMask, MemoryEfficientAttentionCutlassOp - - -def rotate_every_two(x): - x1 = x[:, :, ::2] - x2 = x[:, :, 1::2] - x = torch.stack((-x2, x1), dim=-1) - return x.flatten(-2) # in einsum notation: rearrange(x, '... d j -> ... (d j)')\ - - -def duplicate_interleave(m): - """ - A simple version of `torch.repeat_interleave` for duplicating a matrix while interleaving the copy. 
- """ - dim0 = m.shape[0] - m = m.view(-1, 1) # flatten the matrix - m = m.repeat(1, 2) # repeat all elements into the 2nd dimension - m = m.view(dim0, -1) # reshape into a matrix, interleaving the copy - return m - - -def apply_rotary_pos_emb(x, sin, cos, scale=1): - sin, cos = map(lambda t: duplicate_interleave(t * scale), (sin, cos)) - # einsum notation for lambda t: repeat(t[offset:x.shape[1]+offset,:], "n d -> () n () (d j)", j=2) - return (x * cos) + (rotate_every_two(x) * sin) - - -class MultiheadAttention(nn.Module): - def __init__( - self, - args, - embed_dim, - num_heads, - dropout=0.0, - self_attention=False, - encoder_decoder_attention=False, - subln=False, - ): - super().__init__() - self.args = args - self.lora = hasattr(args, 'lora') and args.lora - self.embed_dim = embed_dim - self.num_heads = num_heads - self.head_dim = embed_dim // num_heads - self.scaling = self.head_dim ** -0.5 - self.scale_length = args.scale_length - - self.self_attention = self_attention - self.encoder_decoder_attention = encoder_decoder_attention - assert self.self_attention ^ self.encoder_decoder_attention - - self.k_proj = MultiwayWrapper(args, nn.Linear(embed_dim, embed_dim, bias=True)) - self.v_proj = MultiwayWrapper(args, nn.Linear(embed_dim, embed_dim, bias=True)) - self.q_proj = MultiwayWrapper(args, nn.Linear(embed_dim, embed_dim, bias=True)) - self.out_proj = MultiwayWrapper( - args, nn.Linear(embed_dim, embed_dim, bias=True) - ) - self.inner_attn_ln = ( - MultiwayWrapper(args, LayerNorm(self.embed_dim)) - if subln and self.self_attention - else None - ) - self.dropout_module = torch.nn.Dropout(dropout, inplace=True) - - if self.lora: - self.to_q_lora = LoRALinearLayer(embed_dim, embed_dim, rank=4) - self.to_k_lora = LoRALinearLayer(embed_dim, embed_dim, rank=4) - self.to_v_lora = LoRALinearLayer(embed_dim, embed_dim, rank=4) - self.to_out_lora = LoRALinearLayer(embed_dim, embed_dim, rank=4) - - def reset_parameters(self): - nn.init.xavier_uniform_(self.k_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.v_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.q_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.out_proj.weight) - nn.init.constant_(self.out_proj.bias, 0.0) - - def forward( - self, - query, - key, - value, - incremental_state=None, - key_padding_mask=None, - attn_mask=None, - rel_pos=None, - sope_rel_pos=None, - scale=1.0, - ): - tgt_len, bsz, embed_dim = query.size() - src_len = tgt_len - assert embed_dim == self.embed_dim, f"query dim {embed_dim} != {self.embed_dim}" - assert list(query.size()) == [tgt_len, bsz, embed_dim] - - src_len, key_bsz, _ = key.size() - assert key_bsz == bsz, f"{query.size(), key.size()}" - assert value is not None - assert src_len, bsz == value.shape[:2] - - if self.lora: - q = self.q_proj(query) + scale * self.to_q_lora(query) - k = self.k_proj(key) + scale * self.to_k_lora(key) - v = self.v_proj(value) + scale * self.to_v_lora(value) - else: - q = self.q_proj(query) - k = self.k_proj(key) - v = self.v_proj(value) - - q = q.view(tgt_len, bsz * self.num_heads, self.head_dim).transpose(0, 1).contiguous() - k = k.view(-1, bsz * self.num_heads, self.head_dim).transpose(0, 1).contiguous() - v = v.view(-1, bsz * self.num_heads, self.head_dim).transpose(0, 1).contiguous() - - if incremental_state is not None: - if "prev_key" in incremental_state: - prev_key = incremental_state["prev_key"].view( - bsz * self.num_heads, -1, self.head_dim - ) - prev_value = incremental_state["prev_value"].view( - bsz * 
self.num_heads, -1, self.head_dim - ) - k = torch.cat([prev_key, k], dim=1) - v = torch.cat([prev_value, v], dim=1) - incremental_state["prev_key"] = k.view( - bsz, self.num_heads, -1, self.head_dim - ) - incremental_state["prev_value"] = v.view( - bsz, self.num_heads, -1, self.head_dim - ) - src_len = k.size(1) - - if sope_rel_pos is not None: - assert rel_pos is None - sin, cos, scale = sope_rel_pos - if self.self_attention: - k = apply_rotary_pos_emb(k, sin, cos, scale=1 / scale) - q = apply_rotary_pos_emb(q, sin[-q.shape[1]:], cos[-q.shape[1]:], scale=scale[-q.shape[1]:]) - else: - k = apply_rotary_pos_emb(k, sin[:k.shape[1]], cos[:k.shape[1]], scale=1 / scale[:k.shape[1]]) - q = apply_rotary_pos_emb(q, sin[k.shape[1]:], cos[k.shape[1]:], scale=scale[k.shape[1]:]) - - if k.shape[1] > self.scale_length: - scale_attention = torch.maximum(torch.ones(q.shape[1]), - torch.arange(k.shape[1] - q.shape[1], k.shape[1], 1).log() / math.log( - self.scale_length)).to(q) - q = q * scale_attention.unsqueeze(-1) - - if self.args.flash_attention and rel_pos is None and attn_mask is not None: - attn_bias = LowerTriangularMask() - attn = memory_efficient_attention(q, k, v, attn_bias, op=MemoryEfficientAttentionCutlassOp) - attn_weights = None - else: - q *= self.scaling - attn_weights = torch.bmm(q, k.transpose(1, 2)) - - if attn_mask is not None: - attn_weights = torch.nan_to_num(attn_weights) - attn_mask = attn_mask.unsqueeze(0) - attn_weights += attn_mask - - if key_padding_mask is not None: - attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.masked_fill( - key_padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool), - float("-inf"), - ) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - if rel_pos is not None: - rel_pos = rel_pos.view(attn_weights.size()) - attn_weights = attn_weights + rel_pos - - attn_weights = F.softmax(attn_weights, dim=-1, dtype=torch.float32).type_as( - attn_weights - ) - attn_probs = self.dropout_module(attn_weights) - - attn = torch.bmm(attn_probs, v) - - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - - if self.inner_attn_ln is not None: - attn = self.inner_attn_ln(attn) - - if self.lora: - attn = self.out_proj(attn) + scale * self.to_out_lora(attn) - else: - attn = self.out_proj(attn) - - if attn_weights is not None: - attn_weights = attn_weights.view( - bsz, self.num_heads, tgt_len, src_len - ).transpose(1, 0) - - return attn, attn_weights diff --git a/kosmos-g/torchscale/torchscale/component/multiway_network.py b/kosmos-g/torchscale/torchscale/component/multiway_network.py deleted file mode 100644 index ea313206c..000000000 --- a/kosmos-g/torchscale/torchscale/component/multiway_network.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import copy - -import torch -import torch.nn as nn - - -def MultiwayWrapper(args, module, dim=0): - if args.multiway: - return MultiwayNetwork(module, dim=dim) - return module - - -def set_split_position(position): - def apply_fn(module): - if hasattr(module, "split_position"): - module.split_position = position - - return apply_fn - - -class MultiwayNetwork(nn.Module): - def __init__(self, module, dim=0): - super().__init__() - self.dim = dim - self.A = module - self.B = copy.deepcopy(module) - self.B.reset_parameters() - self.split_position = -1 - - def forward(self, x, **kwargs): - if self.split_position == -1: - return self.A(x, **kwargs) - if 
self.split_position == 0: - return self.B(x, **kwargs) - x1, x2 = torch.split( - x, - [self.split_position, x.size(self.dim) - self.split_position], - dim=self.dim, - ) - # x1, x2 = x[:self.split_position], x[self.split_position:] - y1, y2 = self.A(x1, **kwargs), self.B(x2, **kwargs) - return torch.cat([y1, y2], dim=self.dim) diff --git a/kosmos-g/torchscale/torchscale/component/relative_position_bias.py b/kosmos-g/torchscale/torchscale/component/relative_position_bias.py deleted file mode 100644 index ed29e4e6d..000000000 --- a/kosmos-g/torchscale/torchscale/component/relative_position_bias.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import math - -import torch -import torch.nn as nn - - -class RelativePositionBias(nn.Module): - def __init__( - self, bidirectional=True, num_buckets=32, max_distance=128, n_heads=12 - ): - super().__init__() - self.bidirectional = bidirectional - self.num_buckets = num_buckets - self.max_distance = max_distance - self.n_heads = n_heads - self.relative_attention_bias = nn.Embedding(self.num_buckets, self.n_heads) - - @staticmethod - def _relative_position_bucket( - relative_position, bidirectional=True, num_buckets=32, max_distance=128 - ): - ret = 0 - n = -relative_position - if bidirectional: - num_buckets //= 2 - ret += (n < 0).to(torch.long) * num_buckets - n = torch.abs(n) - else: - n = torch.max(n, torch.zeros_like(n)) - - max_exact = num_buckets // 2 - is_small = n < max_exact - - val_if_large = max_exact + ( - torch.log(n.float() / max_exact) - / math.log(max_distance / max_exact) - * (num_buckets - max_exact) - ).to(torch.long) - val_if_large = torch.min( - val_if_large, torch.full_like(val_if_large, num_buckets - 1) - ) - - ret += torch.where(is_small, n, val_if_large) - return ret - - def compute_bias(self, qlen, klen, step=None): - step = 0 if step is None else step - context_position = torch.arange( - step, - step + qlen, - dtype=torch.long, - device=self.relative_attention_bias.weight.device, - )[:, None] - memory_position = torch.arange( - klen, dtype=torch.long, device=self.relative_attention_bias.weight.device - )[None, :] - relative_position = memory_position - context_position # shape (qlen, klen) - - rp_bucket = self._relative_position_bucket( - relative_position, # shape (qlen, klen) - bidirectional=self.bidirectional, - num_buckets=self.num_buckets, - max_distance=self.max_distance, # use the configured range, not the bucketing default of 128 - ) - rp_bucket = rp_bucket.to(self.relative_attention_bias.weight.device) - values = self.relative_attention_bias( - rp_bucket - ) # shape (qlen, klen, num_heads) - values = values.permute([2, 0, 1]).unsqueeze( - 0 - ) # shape (1, num_heads, qlen, klen) - return values - - def forward(self, batch_size, qlen, klen, step=None): - # shape (batch * num_heads, qlen, klen) - return ( - self.compute_bias(qlen, klen, step) - .repeat(batch_size, 1, 1, 1) - .view(-1, qlen, klen) - ) diff --git a/kosmos-g/torchscale/torchscale/component/sope_relative_position.py b/kosmos-g/torchscale/torchscale/component/sope_relative_position.py deleted file mode 100644 index 38295b7b7..000000000 --- a/kosmos-g/torchscale/torchscale/component/sope_relative_position.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import numpy as np -from scipy.optimize import minimize - -import torch -import torch.nn as nn - -def fixed_pos_embedding(x): - seq_len, dim = x.shape - inv_freq = 1.0 / (10000 ** (torch.arange(0, dim) / dim)) - sinusoid_inp = ( - torch.einsum("i
, j -> i j", torch.arange(0, seq_len, dtype=torch.float), inv_freq).to(x) - ) - return torch.sin(sinusoid_inp), torch.cos(sinusoid_inp) - - -class SoPE(nn.Module): - def __init__( - self, head_dim, scale_base = 512 - ): - super().__init__() - self.head_dim = head_dim - self.scale_base = scale_base - self.register_buffer( - "scale", (torch.arange(0, head_dim, 2) + 0.4 * head_dim) / (1.4 * head_dim) - ) - - def forward(self, len, offset=0): - min = -(len + offset) // 2 - max = len + offset + min - scale = self.scale ** torch.arange(min, max, 1).to(self.scale).div(self.scale_base)[:, None] - sin, cos = fixed_pos_embedding(scale) - return (sin, cos, scale) \ No newline at end of file diff --git a/kosmos-g/torchscale/torchscale/component/xmoe/__init__.py b/kosmos-g/torchscale/torchscale/component/xmoe/__init__.py deleted file mode 100644 index 3ae31e250..000000000 --- a/kosmos-g/torchscale/torchscale/component/xmoe/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] diff --git a/kosmos-g/torchscale/torchscale/component/xmoe/moe_layer.py b/kosmos-g/torchscale/torchscale/component/xmoe/moe_layer.py deleted file mode 100644 index fe5d691d7..000000000 --- a/kosmos-g/torchscale/torchscale/component/xmoe/moe_layer.py +++ /dev/null @@ -1,363 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved. -# -# This source code is licensed under the BSD license found in the -# LICENSE file in the root directory of this source tree. - -# NOTE: This is a mirror of the code in -# https://github.com/facebookresearch/fairscale/tree/master/fairscale/nn/moe - -import logging -import time -from typing import Any, Tuple, cast - -import torch -import torch.distributed as dist -from torch import Tensor -from torch.nn import Module, ModuleList - -try: - from fairseq.modules.moe import MOELayer - - has_fairseq = True - Base = MOELayer -except ModuleNotFoundError: - Base = Module - has_fairseq = False - -try: - # To enable Tutel MoE optimizations: - # python3 -m pip install --user --upgrade git+https://github.com/microsoft/tutel@v0.1.x - from tutel import moe as tutel_moe - - has_tutel, fused_cumsum_sub_one = True, tutel_moe.fast_cumsum_sub_one -except ModuleNotFoundError: - has_tutel, fused_cumsum_sub_one = False, lambda mask: torch.cumsum(mask, dim=0) - 1 - -logger = logging.getLogger(__name__) - - -# einsum dimensions: (g)roup, (s)equence, (e)xpert, (m)odel, (c)apacity -# See https://arxiv.org/pdf/2006.16668.pdf for details. 
- -# Based on https://github.com/pytorch/pytorch/pull/40762 -class _AllToAll(torch.autograd.Function): - @staticmethod - def forward(ctx: Any, group: dist.ProcessGroup, input: Tensor) -> Tensor: # type: ignore - ctx.group = group - input = input.contiguous() - output = torch.empty_like(input) - if torch.distributed.is_initialized(): - dist.all_to_all_single(output, input, group=group) - else: - assert group is None - output = input - return output - - @staticmethod - def backward(ctx: Any, *grad_output: Tensor) -> Tuple[None, Tensor]: - return (None, _AllToAll.apply(ctx.group, *grad_output)) - - -def _find_my_group_index(grouped_ranks): - my_rank = dist.get_rank() - for i, group in enumerate(grouped_ranks): - if my_rank in group: - return i - raise RuntimeError - - -def get_moe_group(moe_expert_count): - if dist.is_initialized(): - if not hasattr(get_moe_group, "_moe_groups"): - world_size = dist.get_world_size() - - if world_size <= moe_expert_count: - assert moe_expert_count % world_size == 0 - moe_groups = [[i] for i in range(world_size)] - - else: - assert world_size % moe_expert_count == 0 - ranks_per_group = world_size // moe_expert_count - moe_groups = [ - [i + j * moe_expert_count for j in range(ranks_per_group)] - for i in range(moe_expert_count) - ] - - get_moe_group._moe_group_idx = moe_groups - get_moe_group._moe_groups = [dist.new_group(g) for g in moe_groups] - - my_group_idx = _find_my_group_index(get_moe_group._moe_group_idx) - return get_moe_group._moe_groups[my_group_idx] - - -def get_all2all_group(moe_expert_count): - if dist.is_initialized(): - if not hasattr(get_all2all_group, "_all2all_groups"): - world_size = dist.get_world_size() - - # more experts than world size - if world_size <= moe_expert_count: - assert moe_expert_count % world_size == 0 - all2all_groups = [[i for i in range(world_size)]] - - # larger world than num experts - else: - assert world_size % moe_expert_count == 0 - ranks_per_group = world_size // moe_expert_count - all2all_groups = [ - [i * moe_expert_count + j for j in range(moe_expert_count)] - for i in range(ranks_per_group) - ] - - get_all2all_group._all2all_group_idx = all2all_groups - get_all2all_group._all2all_groups = [ - dist.new_group(g) for g in all2all_groups - ] - - my_group_idx = _find_my_group_index(get_all2all_group._all2all_group_idx) - return get_all2all_group._all2all_groups[my_group_idx] - - -class MOELayer(Base): - """MOELayer module which implements MixtureOfExperts as described in Gshard_. - :: - - gate = Top2Gate(model_dim, num_experts) - moe = MOELayer(gate, expert) - output = moe(input) - l_aux = moe.l_aux - - .. 
Gshard_: https://arxiv.org/pdf/2006.16668.pdf - - Args: - gate (torch.nn.Module): - gate network - expert (torch.nn.Module): - expert network - """ - - def __init__(self, gate, experts, args): - if has_fairseq: - super(Base, self).__init__() - else: - super().__init__() - self.gate = gate - if type(experts) == ModuleList: - self.experts = cast(ModuleList, experts) - else: - self.experts = ModuleList([experts]) - self.expert_group = get_moe_group(args.moe_expert_count) - self.all2all_group = get_all2all_group(args.moe_expert_count) - self.world_size = dist.get_world_size(group=self.expert_group) - self.all2all_size = dist.get_world_size(group=self.all2all_group) - for p in experts.parameters(): - p.expert = True # type: ignore - self.num_local_experts = len(self.experts) - self.args = args - self.in_generation = False - self.a2a_cuda_event_intervals = [] - self.a2a_cpu_time_ms = 0.0 - - def forward(self, *input: Tensor, input_padding_mask=None, **kwargs: Any) -> Tensor: - assert len(input) == 1, "only single input Tensor supported" - input = input[0] - assert ( - len(input.shape) == 3 - ), "input Tensor must have dimensions: (s)equence, (t)oken, (m)odel" - if input_padding_mask is not None: - assert ( - len(input_padding_mask.shape) == 2 - ), "input Tensor must have dimensions: (s)equence, (t)oken" - assert input_padding_mask.shape[0] == input.shape[0] - assert input_padding_mask.shape[1] == input.shape[1] - # assert input.shape[0] % len(self.experts) == 0, "num tokens must be order of number of local experts" - - # Implement Algorithm 2 from GShard paper. - d_model = input.shape[2] - # Pad to expected batch size - input_shape = list(input.shape) - expected_bsz = ( - getattr(self.args, "batch_size", 0) - if self.training - else getattr(self.args, "batch_size_valid", 0) - ) - # This indicates that --batch-size or --max-sentences is not specified - if expected_bsz is None: - expected_bsz = 0 - # Note: Padding is not necessary at generation time at present - # because all DDP workers process the same batch. Also, batch size at generation time - # can be different from that present in the checkpoint state - if ( - not self.in_generation - and expected_bsz != 0 - and input_shape[0] != expected_bsz - ): - logger.warning( - f"padding batch with unexpected size {input_shape[0]} (expected: {expected_bsz})" - ) - assert input_shape[0] < expected_bsz, f"{input_shape[0]} < {expected_bsz}" - padded_input = torch.zeros( - (expected_bsz, input_shape[1], input_shape[2]), - dtype=input.dtype, - layout=input.layout, - device=input.device, - ) - padded_input[: input_shape[0], :, :] = input - input = padded_input - - padded_input_padding_mask = torch.ones( - ( - expected_bsz, - input_shape[1], - ), - dtype=torch.bool, - device=input.device, - ) - if input_padding_mask is not None: - padded_input_padding_mask[: input_shape[0], :] = input_padding_mask - else: - padded_input_padding_mask[: input_shape[0], :] = False - input_padding_mask = padded_input_padding_mask - - # Reshape into S tokens by dropping sequence dimension. 
- reshaped_input = input.reshape(-1, d_model) - reshaped_input_shape = reshaped_input.shape - reshaped_input_padding_mask = ( - input_padding_mask.reshape(-1) if input_padding_mask is not None else None - ) - - # Doing padding here when --max-tokens is specified and not --batch-size or --max-sentences - # Pro of --max-tokens: more flexible for MT variable sequence lengths - # Con of --max-tokens: extra all-reduce needed to figure out optimal padding without running OOM - if expected_bsz == 0: - expected_dim = reshaped_input_shape[0] * torch.ones( - (1,), dtype=torch.long, device=input.device - ) - dist.all_reduce(expected_dim, group=dist.group.WORLD, op=dist.ReduceOp.MAX) - expected_dim = int(expected_dim.item()) - padded_input = torch.zeros( - (expected_dim, reshaped_input_shape[1]), - dtype=input.dtype, - layout=input.layout, - device=input.device, - ) - padded_input[: reshaped_input_shape[0], :] = reshaped_input - reshaped_input = padded_input - - padded_input_padding_mask = torch.ones( - (expected_dim,), dtype=torch.bool, device=padded_input.device - ) - if reshaped_input_padding_mask is not None: - padded_input_padding_mask[ - : reshaped_input_shape[0] - ] = reshaped_input_padding_mask - else: - padded_input_padding_mask[: reshaped_input_shape[0]] = False - reshaped_input_padding_mask = padded_input_padding_mask - - if has_tutel: - l_aux, self.metadata, C, E, indices_, locations_, gates_ = self.gate( - reshaped_input, reshaped_input_padding_mask - ) - S, M = reshaped_input.size(0), reshaped_input.size(1) - - if not hasattr(self, "_tutel_dispatcher"): - self._tutel_dispatcher = tutel_moe.fast_dispatcher( - E, C, M, dispatch_dtype=reshaped_input.dtype - ) - self._tutel_dispatcher.update(indices_, locations_, gates_, capacity=C) - dispatched_input = self._tutel_dispatcher.encode(reshaped_input) - else: - l_aux, combine_weights, dispatch_mask, self.metadata = self.gate( - reshaped_input, reshaped_input_padding_mask - ) - - dispatch_mask = dispatch_mask.to(input.dtype).permute( - 1, 2, 0 - ) # S,E,C -> E,C,S - E, C, S = dispatch_mask.size() - M = reshaped_input.size(1) - assert reshaped_input.size() == (S, M) - # einsum("sec,sm->ecm") - dispatched_input = torch.mm( - dispatch_mask.view(E * C, S), reshaped_input - ) # -> (E*C),M - - if self.all2all_size > 1: - dispatched_input = self.all_to_all_wrapper(dispatched_input) - - # Re-shape after all-to-all: ecm -> gecm - dispatched_input = dispatched_input.reshape( - self.all2all_size, self.num_local_experts, -1, d_model - ) - chunks = dispatched_input.chunk(self.num_local_experts, dim=1) - expert_outputs = [] - for chunk, expert in zip(chunks, self.experts): - expert_outputs += [expert(chunk)] - expert_output = torch.cat(expert_outputs, dim=1) - - if self.all2all_size > 1: - expert_output = self.all_to_all_wrapper(expert_output) - - # Re-shape back: gecm -> ecm - expert_output = expert_output.reshape( - self.all2all_size * self.num_local_experts, -1, d_model - ) - - if has_tutel: - combined_output = self._tutel_dispatcher.decode( - expert_output.view(E * C, M) - ) - else: - # einsum("sec,ecm->sm") - combined_output = combine_weights.view(S, E * C).mm( - expert_output.view(E * C, M) - ) - - # Remove padding here when --max-tokens is specified and not --batch-size or --max-sentences - combined_output = combined_output[: reshaped_input_shape[0], :] - combined_output = combined_output.reshape(input.shape) - combined_output = combined_output[: input_shape[0], :, :] - - self.record_all_to_all_stats() - - return combined_output, l_aux - - def 
prepare_for_inference_(self):
-        self.in_generation = True
-
-    def all_to_all_wrapper(self, input: Tensor):
-        dummy_a2a = getattr(self.args, "dummy_a2a", False)
-        if dummy_a2a:
-            # Skip the real all-to-all: return a detached copy of the
-            # contiguous input as a stand-in for the communication result.
-            input = input.contiguous()
-            output = input.detach().clone()
-            return output
-        # Always record times, since the overhead is small; if the stats are
-        # not logged, record_all_to_all_stats simply clears them.
-        cuda_start = torch.cuda.Event(enable_timing=True)
-        cuda_end = torch.cuda.Event(enable_timing=True)
-        cpu_start = time.time() * 1000
-        cuda_start.record()
-        output = _AllToAll.apply(self.all2all_group, input)
-        cuda_end.record()
-        cpu_end = time.time() * 1000
-        self.a2a_cpu_time_ms += cpu_end - cpu_start
-        self.a2a_cuda_event_intervals.append((cuda_start, cuda_end))
-        return output
-
-    def record_all_to_all_stats(self):
-        # controlled via an argument as we want to minimize any impact from torch.cuda.synchronize()
-        record_a2a_perf_stats = getattr(self.args, "record_a2a_perf_stats", False)
-        if record_a2a_perf_stats:
-            torch.cuda.synchronize()
-            self.metadata["all_to_all_cpu_time_ms"] = self.a2a_cpu_time_ms
-            a2a_cuda_time_ms = 0.0
-            for ev_start, ev_end in self.a2a_cuda_event_intervals:
-                a2a_cuda_time_ms += ev_start.elapsed_time(ev_end)
-            self.metadata["all_to_all_cuda_time_ms"] = a2a_cuda_time_ms
-        # reset stats
-        self.a2a_cpu_time_ms = 0.0
-        self.a2a_cuda_event_intervals = []
diff --git a/kosmos-g/torchscale/torchscale/component/xmoe/routing.py b/kosmos-g/torchscale/torchscale/component/xmoe/routing.py
deleted file mode 100644
index 751a76a16..000000000
--- a/kosmos-g/torchscale/torchscale/component/xmoe/routing.py
+++ /dev/null
@@ -1,525 +0,0 @@
-# Copyright (c) 2022 Microsoft
-# Licensed under The MIT License [see LICENSE for details]
-
-# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
-#
-# This source code is licensed under the BSD license found in the
-# LICENSE file in the root directory of this source tree.
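The `all_to_all_wrapper` above pairs CPU wall-clock timestamps with CUDA events and defers `torch.cuda.synchronize()` to `record_all_to_all_stats`, so the communication hot path stays asynchronous. A standalone sketch of that timing pattern (the helper name and the op being timed are illustrative, not from the repo):

```python
import time

import torch


def timed(op, *args):
    """Run op(*args); return its output plus (cpu_ms, cuda_ms).

    cuda_ms is None when no GPU is available (sketch-only fallback;
    the deleted layer assumes CUDA).
    """
    if not torch.cuda.is_available():
        t0 = time.time()
        out = op(*args)
        return out, (time.time() - t0) * 1000.0, None
    cuda_start = torch.cuda.Event(enable_timing=True)
    cuda_end = torch.cuda.Event(enable_timing=True)
    cpu_start = time.time() * 1000
    cuda_start.record()
    out = op(*args)
    cuda_end.record()
    cpu_end = time.time() * 1000
    # elapsed_time is only valid once both events have completed.
    torch.cuda.synchronize()
    return out, cpu_end - cpu_start, cuda_start.elapsed_time(cuda_end)


# Usage: out, cpu_ms, cuda_ms = timed(torch.mm, a, b)
```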
-
-# Implementation of Top2Gating described in https://arxiv.org/pdf/2006.16668.pdf
-# Code is inspired by Top2GatingOnLogits from lingvo:
-# https://github.com/tensorflow/lingvo/blob/21b8106c5f1d30a196c98eedc441d4fd70833b11/lingvo/core/moe_layers.py#L477
-
-# NOTE: This is a mirror of the code in
-# https://github.com/facebookresearch/fairscale/tree/master/fairscale/nn/moe
-
-import math
-from typing import Callable, Dict, Optional, Tuple
-
-import torch
-import torch.nn.functional as F
-from torch import Tensor
-
-from .moe_layer import fused_cumsum_sub_one, has_tutel
-
-# use a fixed temperature to compute balance loss
-TEMPERATURE_FOR_L_AUX = 0.07
-
-# maximum capacity of 1 expert as a fraction of number of tokens in the batch
-# Note: setting this to 1.0 causes inference to significantly slow down
-EVAL_CAPACITY_TOKEN_FRACTION = 0.25
-
-# logging
-SAMPLE_FRACTION = 0.2
-
-
-def top1gating(
-    logits: torch.Tensor,
-    input_mask: Optional[torch.Tensor] = None,
-    use_fp32=False,
-    capacity_factor=1.0,
-    eval_mode=False,
-    moe_eval_capacity_token_fraction=EVAL_CAPACITY_TOKEN_FRACTION,
-    use_xmoe=False,
-    gate_obj=None,
-) -> Tuple[Tensor, Tensor, Tensor, Dict]:
-    """Implements Top1Gating on logits."""
-    metadata = {}
-    if use_fp32:
-        orig_dtype = logits.dtype
-        logits = logits.float()
-
-    gates = F.softmax(logits, dim=1)
-    metadata["entropy_gating"] = entropy(probs=gates).mean().detach()
-
-    # gates has shape of SE
-    num_tokens = gates.shape[0]
-    num_experts = gates.shape[1]
-    if moe_eval_capacity_token_fraction > 0.0 and eval_mode:
-        capacity = math.ceil(moe_eval_capacity_token_fraction * num_tokens)
-    else:
-        # capacity = capacity_factor * S/E
-        capacity = int(capacity_factor * math.ceil(num_tokens / num_experts))
-
-    # Create a mask for the 1st expert per token
-    indices1_s = torch.argmax(gates, dim=1)
-    mask1 = one_hot(indices1_s, num_classes=num_experts, unsqueeze_indices=True)
-    if input_mask is not None and input_mask.any():
-        nonpadding = ~input_mask
-        mask1 = mask1 * nonpadding.unsqueeze(-1).to(mask1.dtype)
-
-    # for logging (percent of tokens routed to each expert)
-    expert1_hist = (
-        100
-        * torch.histc(
-            (indices1_s.squeeze() + 1), bins=num_experts, min=1, max=num_experts
-        )
-        / num_tokens
-    )
-    metadata["unused_expert1_count"] = (expert1_hist == 0).sum()
-    expert1_hist = (
-        torch.sort(expert1_hist, dim=0, descending=True).values
-        + torch.finfo(torch.float32).tiny
-    )
-
-    sample_count = max(math.ceil(num_experts * SAMPLE_FRACTION), 1)
-    metadata["expert1_balance_top"] = expert1_hist[:sample_count].sum()
-    metadata["expert1_balance_bottom"] = expert1_hist[-sample_count:].sum()
-
-    gates1_s = (gates * mask1).sum(dim=1)
-
-    # Compute locations in capacity buffer
-    locations1 = fused_cumsum_sub_one(mask1)
-
-    # Compute l_aux
-    me = torch.mean(gates, dim=0)
-    ce = torch.mean(mask1.to(gates.dtype), dim=0)
-
-    l_aux = torch.mean(me * ce)
-    l_aux = l_aux * num_experts * num_experts
-
-    if has_tutel:
-        locations1_s = torch.sum(locations1 * mask1, dim=1)
-        return (
-            l_aux,
-            metadata,
-            capacity,
-            num_experts,
-            [
-                indices1_s,
-            ],
-            [
-                locations1_s,
-            ],
-            [
-                gates1_s,
-            ],
-        )
-
-    # Remove locations outside capacity from mask
-    mask1 = mask1 * torch.lt(locations1, capacity)
-    # Store the capacity location for each token
-    locations1_s = torch.sum(locations1 * mask1, dim=1)
-
-    # Calculate combine_weights and dispatch_mask
-    gates1 = gates1_s.unsqueeze(-1) * mask1.to(gates1_s.dtype)  # einsum("s,se->se")
-    # locations1_sc = num_tokens * capacity
-    locations1_sc = one_hot(locations1_s, num_classes=capacity, unsqueeze_indices=True)
-    combine1_sec = torch.bmm(
-        # einsum("se,sc->sec")
-        gates1.unsqueeze(-1),
-        locations1_sc.to(gates1.dtype).unsqueeze(1),
-    )
-    dispatch_mask = combine1_sec.bool()
-    if use_fp32:
-        return l_aux, combine1_sec.to(orig_dtype), dispatch_mask, metadata
-    else:
-        return l_aux, combine1_sec, dispatch_mask, metadata
-
-
-class Top1Gate(torch.nn.Module):
-    """Gate module which implements Top1Gating as described in Gshard_.
-    ::
-
-        gate = Top1Gate(model_dim, num_experts)
-        l_aux, combine_weights, dispatch_mask, metadata = gate(input)
-
-    .. Gshard_: https://arxiv.org/pdf/2006.16668.pdf
-
-    Args:
-        model_dim (int):
-            size of model embedding dimension
-        num_experts (int):
-            number of experts in model
-    """
-
-    wg: torch.nn.Linear
-
-    def __init__(
-        self,
-        model_dim: int,
-        num_experts: int,
-        use_fp32=False,
-        input_noise_type=None,
-        capacity_factor=1.0,
-        moe_eval_capacity_token_fraction=EVAL_CAPACITY_TOKEN_FRACTION,
-        use_xmoe=False,
-    ) -> None:
-        # TODO: merge this to top2gate.py
-        #
-        super().__init__()
-
-        if not use_xmoe:
-            self.wg = torch.nn.Linear(model_dim, num_experts, bias=False)
-        else:
-            self.wg_reduction = torch.nn.Linear(model_dim, 16, bias=False)
-            wg = torch.empty(num_experts, 16)
-            torch.nn.init.orthogonal_(wg, gain=0.32)
-            self.register_parameter("wg", torch.nn.Parameter(wg))
-
-        self.use_xmoe = use_xmoe
-        self.use_fp32 = use_fp32
-        self.input_noise_type = input_noise_type
-        self.capacity_factor = capacity_factor
-        self.moe_eval_capacity_token_fraction = moe_eval_capacity_token_fraction
-
-    def forward(self, input, mask=None):  # type: ignore
-        if self.use_xmoe:
-            input = self.wg_reduction(input)
-            with torch.no_grad():
-                wg_norm = self.wg.norm(p=2.0, dim=1, keepdim=True)
-                self.wg.mul_(1.5 / wg_norm)
-            logits = self._cosine(input, self.wg)
-            logits = self._make_finite(logits)
-        else:
-            logits = self.wg(input)
-
-        return top1gating(
-            logits,
-            mask,
-            use_fp32=self.use_fp32,
-            capacity_factor=self.capacity_factor,
-            eval_mode=not self.training,
-            moe_eval_capacity_token_fraction=self.moe_eval_capacity_token_fraction,
-            use_xmoe=self.use_xmoe,
-            gate_obj=self,
-        )
-
-    def _make_finite(self, scores):
-        ok = scores.isfinite()
-        if not ok.all():
-            # NaNs here can break the assignment algorithm
-            scores[~ok] = scores[ok].min()
-        return scores
-
-    def _get_gating_temperature(self, eps=1e-4):
-        # NOTE: relies on a `gating_t` parameter that is never defined in this
-        # class; kept for reference but unused by the forward path.
-        if self.gating_t.data.item() < eps:
-            return eps
-        return self.gating_t
-
-    def _cosine(self, mat1, mat2, eps=1e-4):
-        assert mat1.dim() == 2
-        assert mat2.dim() == 2
-        # mat1 = F.normalize(mat1, p=2.0, dim=1, eps=eps)
-        mat2 = F.normalize(mat2.float(), p=2.0, dim=1, eps=eps)
-        return mat1.float().matmul(mat2.transpose(0, 1)).type_as(mat1)
-
-
-gumbel_map: Dict[torch.device, Callable] = {}
-
-
-def gumbel_rsample(shape: Tuple, device: torch.device) -> Tensor:
-    gumbel = gumbel_map.get(device)
-    if gumbel is None:
-        one = torch.tensor(1.0, device=device)
-        zero = torch.tensor(0.0, device=device)
-        gumbel = torch.distributions.gumbel.Gumbel(zero, one).rsample  # type: ignore
-        gumbel_map[device] = gumbel
-    return gumbel(shape)
-
-
-def one_hot(indices: torch.Tensor, num_classes: int, unsqueeze_indices=False) -> Tensor:
-    if unsqueeze_indices:
-        indices = indices.unsqueeze(-1)
-    assert indices.shape[-1] == 1, "last dimension of indices must have size 1"
-    output = torch.zeros(
-        indices.shape[:-1] + (num_classes,), device=indices.device, dtype=indices.dtype
-    )
-    output.scatter_(len(output.shape) - 1, indices, 1)
-    return output
-
-
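Taken together, `top1gating` above reduces to four moves: an argmax picks each token's expert, a cumsum-minus-one over the one-hot mask assigns capacity slots, overflowing tokens are dropped, and a balance loss rewards uniform routing. A toy end-to-end sketch (token, expert, and capacity counts are assumed):

```python
import torch
import torch.nn.functional as F

num_tokens, num_experts, capacity = 6, 3, 2  # toy sizes (assumed)
gates = torch.softmax(torch.randn(num_tokens, num_experts), dim=1)

indices1_s = gates.argmax(dim=1)             # top-1 expert per token
mask1 = F.one_hot(indices1_s, num_experts)   # (S, E) assignment mask
locations1 = torch.cumsum(mask1, dim=0) - 1  # fused_cumsum_sub_one equivalent

# Balance loss, computed before overflow dropping as in top1gating:
# mean gate prob times routed fraction, scaled by E^2 (1.0 when uniform).
me = gates.mean(dim=0)
ce = mask1.float().mean(dim=0)
l_aux = (me * ce).mean() * num_experts**2

mask1 = mask1 * (locations1 < capacity)         # drop tokens past capacity
locations1_s = (locations1 * mask1).sum(dim=1)  # buffer slot per kept token

print(indices1_s.tolist(), locations1_s.tolist(), float(l_aux))
```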
-def entropy(probs): - logits = torch.distributions.utils.probs_to_logits(probs) - p_log_p = probs * logits - return -p_log_p.sum(-1) - - -def top2gating( - logits: torch.Tensor, - input_mask: Optional[torch.Tensor] = None, - use_fp32=False, - second_expert_policy="sampling", - normalize_gate_prob_before_dropping=False, - eval_mode=False, - moe_eval_capacity_token_fraction=0.25, - batch_prioritized_routing=False, -) -> Tuple[Tensor, Tensor, Tensor]: - """Implements Top2Gating on logits.""" - metadata = {} - if use_fp32: - orig_dtype = logits.dtype - logits = logits.float() - gates = F.softmax(logits, dim=1) - metadata["entropy_gating"] = entropy(probs=gates).mean().detach() - # gates has shape of SE - num_tokens = gates.shape[0] - num_experts = gates.shape[1] - if moe_eval_capacity_token_fraction > 0.0 and eval_mode: - capacity = math.ceil(moe_eval_capacity_token_fraction * num_tokens) - else: - # capacity = 2S/E - capacity = 2 * math.ceil(num_tokens / num_experts) - - # Create a mask for 1st's expert per token - indices1_s = torch.argmax(gates, dim=1, keepdim=True) - mask1 = one_hot(indices1_s, num_experts) - if second_expert_policy == "sampling": - # Create a mask for 2nd's expert per token using Gumbel-max trick - # https://timvieira.github.io/blog/post/2014/07/31/gumbel-max-trick/ - logits_w_noise = logits + gumbel_rsample(logits.shape, device=logits.device) - else: - logits_w_noise = logits - # Replace top-expert with min value - logits_except1 = logits_w_noise.masked_fill(mask1.bool(), float("-inf")) - indices2_s = torch.argmax(logits_except1, dim=1, keepdim=True) - mask2 = one_hot(indices2_s, num_experts) - gates1_s = (gates * mask1).sum(dim=1) - gates2_s = (gates * mask2).sum(dim=1) - - if normalize_gate_prob_before_dropping: - # Normalize gate probabilities - denom_s = gates1_s + gates2_s - # Avoid divide-by-zero - denom_s = torch.clamp(denom_s, min=torch.finfo(denom_s.dtype).eps) - gates1_s = gates1_s / denom_s - gates2_s = gates2_s / denom_s - - if second_expert_policy == "random": - sampled = (2 * gates2_s) > torch.rand_like(gates2_s) - mask2 = mask2 * sampled.repeat(num_experts, 1).transpose(1, 0) - - # Compute locations in capacity buffer - if input_mask is not None and input_mask.any(): - nonpadding = ~input_mask - mask1 = mask1 * nonpadding.unsqueeze(-1).to(mask1.dtype) - mask2 = mask2 * nonpadding.unsqueeze(-1).to(mask1.dtype) - - if batch_prioritized_routing: - # if batch_prioritized_routing: - importance_scores = -1 * gates.max(dim=1)[0] - sorted_mask1 = mask1[importance_scores.argsort(dim=0)] - sorted_cumsum1 = fused_cumsum_sub_one(sorted_mask1) * sorted_mask1 - importance_sorted_locations1 = sorted_cumsum1[ - importance_scores.argsort(dim=0).argsort(dim=0) - ] - - sorted_mask2 = mask2[importance_scores.argsort(dim=0)] - sorted_cumsum2 = fused_cumsum_sub_one(sorted_mask2) * sorted_mask2 - importance_sorted_locations2 = sorted_cumsum2[ - importance_scores.argsort(dim=0).argsort(dim=0) - ] - - importance_sorted_locations2 += torch.sum(mask1, dim=0, keepdim=True) - - locations1, locations2 = ( - importance_sorted_locations1, - importance_sorted_locations2, - ) - else: - locations1 = fused_cumsum_sub_one(mask1) - locations2 = fused_cumsum_sub_one(mask2) - # Update 2nd's location by accounting for locations of 1st - locations2 += torch.sum(mask1, dim=0, keepdim=True) - - # Compute l_aux - me = torch.mean(gates, dim=0) - ce = torch.mean(mask1.to(gates.dtype), dim=0) - l_aux = torch.mean(me * ce) - l_aux = l_aux * num_experts * num_experts - - # for logging purposes - 
metadata["overflow_expert1"] = ( - 100 * torch.sum(mask1 * torch.ge(locations1, capacity)) / torch.sum(mask1) - ) - metadata["overflow_expert2"] = ( - 100 * torch.sum(mask2 * torch.ge(locations2, capacity)) / torch.sum(mask2) - ) - - # Remove locations outside capacity from mask - mask1_, mask2_ = mask1, mask2 - mask1 = mask1 * torch.lt(locations1, capacity) - mask2 = mask2 * torch.lt(locations2, capacity) - - # for logging (percent of tokens routed to each expert) - expert1_hist = ( - 100 - * torch.histc( - (indices1_s.squeeze() + 1), bins=num_experts, min=1, max=num_experts - ) - / num_tokens - ) - metadata["unused_expert1_count"] = (expert1_hist == 0).sum() - expert1_hist = ( - torch.sort(expert1_hist, dim=0, descending=True).values - + torch.finfo(torch.float32).tiny - ) - - expert2_hist = ( - 100 - * torch.histc( - (indices2_s.squeeze() + 1), bins=num_experts, min=1, max=num_experts - ) - / num_tokens - ) - metadata["unused_expert2_count"] = (expert2_hist == 0).sum() - expert2_hist = ( - torch.sort(expert2_hist, dim=0, descending=True).values - + torch.finfo(torch.float32).tiny - ) - - sample_count = max(math.ceil(num_experts * SAMPLE_FRACTION), 1) - metadata["expert1_balance_top"] = expert1_hist[:sample_count].sum() - metadata["expert1_balance_bottom"] = expert1_hist[-sample_count:].sum() - - metadata["expert2_balance_top"] = expert2_hist[:sample_count].sum() - metadata["expert2_balance_bottom"] = expert2_hist[-sample_count:].sum() - - if not normalize_gate_prob_before_dropping: - # Normalize gate probabilities - gates1_s = (gates * mask1).sum(dim=1) - gates2_s = (gates * mask2).sum(dim=1) - denom_s = gates1_s + gates2_s - # Avoid divide-by-zero - denom_s = torch.clamp(denom_s, min=torch.finfo(denom_s.dtype).eps) - gates1_s /= denom_s - gates2_s /= denom_s - - if has_tutel: - locations1_s = torch.sum(locations1 * mask1_, dim=1) - locations2_s = torch.sum(locations2 * mask2_, dim=1) - return ( - l_aux, - metadata, - capacity, - num_experts, - [indices1_s, indices2_s], - [locations1_s, locations2_s], - [gates1_s, gates2_s], - ) - - # Store the capacity location for each token - locations1_s = torch.sum(locations1 * mask1, dim=1) - locations2_s = torch.sum(locations2 * mask2, dim=1) - - # Calculate combine_weights and dispatch_mask - gates1 = gates1_s.unsqueeze(-1) * mask1.to(gates1_s.dtype) # einsum("s,se->se") - gates2 = gates2_s.unsqueeze(-1) * mask2.to(gates2_s.dtype) # einsum("s,se->se") - locations1_sc = one_hot(locations1_s, num_classes=capacity, unsqueeze_indices=True) - locations2_sc = one_hot(locations2_s, num_classes=capacity, unsqueeze_indices=True) - combine1_sec = torch.bmm( - # einsum("se,sc->sec") - gates1.unsqueeze(-1), - locations1_sc.to(gates1.dtype).unsqueeze(1), - ) - combine2_sec = torch.bmm( - # einsum("se,sc->sec") - gates2.unsqueeze(-1), - locations2_sc.to(gates2.dtype).unsqueeze(1), - ) - combine_weights = combine1_sec + combine2_sec - dispatch_mask = combine_weights.bool() - if use_fp32: - return l_aux, combine_weights.to(orig_dtype), dispatch_mask, metadata - else: - return l_aux, combine_weights, dispatch_mask, metadata - - -class Top2Gate(torch.nn.Module): - """Gate module which implements Top2Gating as described in Gshard_. - :: - - gate = Top2Gate(model_dim, num_experts) - l_aux, combine_weights, dispatch_mask = gate(input) - - .. 
Gshard_: https://arxiv.org/pdf/2006.16668.pdf - - Args: - model_dim (int): - size of model embedding dimension - num_experts (ints): - number of experts in model - """ - - wg: torch.nn.Linear - - def __init__( - self, - model_dim: int, - num_experts: int, - use_fp32=False, - second_expert_policy="sampling", - normalize_gate_prob_before_dropping=False, - moe_eval_capacity_token_fraction=0.25, - batch_prioritized_routing=False, - use_xmoe=False, - ) -> None: - super().__init__() - if not use_xmoe: - self.wg = torch.nn.Linear(model_dim, num_experts, bias=False) - else: - self.wg_reduction = torch.nn.Linear(model_dim, 16, bias=False) - wg = torch.empty(num_experts, 16) - torch.nn.init.orthogonal_(wg, gain=0.32) - self.register_parameter("wg", torch.nn.Parameter(wg)) - self.use_fp32 = use_fp32 - self.second_expert_policy = second_expert_policy - self.normalize_gate_prob_before_dropping = normalize_gate_prob_before_dropping - self.moe_eval_capacity_token_fraction = moe_eval_capacity_token_fraction - self.batch_prioritized_routing = batch_prioritized_routing - self.use_xmoe = use_xmoe - - def forward(self, input, mask=None): # type: ignore - if self.use_xmoe: - input = self.wg_reduction(input) - with torch.no_grad(): - wg_norm = self.wg.norm(p=2.0, dim=1, keepdim=True) - self.wg.mul_(1.5 / wg_norm) - logits = self._cosine(input, self.wg) - logits = self._make_finite(logits) - else: - logits = self.wg(input) - return top2gating( - logits, - mask, - use_fp32=self.use_fp32, - second_expert_policy=self.second_expert_policy, - normalize_gate_prob_before_dropping=self.normalize_gate_prob_before_dropping, - eval_mode=not self.training, - moe_eval_capacity_token_fraction=self.moe_eval_capacity_token_fraction, - batch_prioritized_routing=self.batch_prioritized_routing, - ) - - def _cosine(self, mat1, mat2, eps=1e-4): - assert mat1.dim() == 2 - assert mat2.dim() == 2 - # mat1 = F.normalize(mat1, p=2.0, dim=1, eps=eps) - mat2 = F.normalize(mat2.float(), p=2.0, dim=1, eps=eps) - return mat1.float().matmul(mat2.transpose(0, 1)).type_as(mat1) - - def _make_finite(self, scores): - ok = scores.isfinite() - if not ok.all(): - # NaNs here can break the assignment algorithm - scores[~ok] = scores[ok].min() - return scores diff --git a/kosmos-g/torchscale/torchscale/model/BEiT3.py b/kosmos-g/torchscale/torchscale/model/BEiT3.py deleted file mode 100644 index 597a063e0..000000000 --- a/kosmos-g/torchscale/torchscale/model/BEiT3.py +++ /dev/null @@ -1,86 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] - -import torch -import torch.nn as nn - -from torchscale.architecture.encoder import Encoder -from torchscale.component.embedding import ( - PositionalEmbedding, - TextEmbedding, - VisionEmbedding, -) -from torchscale.component.multiway_network import MultiwayWrapper - - -class BEiT3(nn.Module): - def __init__(self, args, **kwargs): - super().__init__() - self.args = args - assert args.multiway - assert args.vocab_size > 0 - assert not args.share_encoder_input_output_embed - self.text_embed = TextEmbedding(args.vocab_size, args.encoder_embed_dim) - self.vision_embed = VisionEmbedding( - args.img_size, - args.patch_size, - args.in_chans, - args.encoder_embed_dim, - contain_mask_token=True, - prepend_cls_token=True, - ) - embed_positions = MultiwayWrapper( - args, - PositionalEmbedding(args.max_source_positions, args.encoder_embed_dim), - dim=1, - ) - self.encoder = Encoder( - args, - embed_tokens=None, - embed_positions=embed_positions, - output_projection=None, - 
is_encoder_decoder=False, - ) - - def forward( - self, - textual_tokens=None, - visual_tokens=None, - text_padding_position=None, - vision_masked_position=None, - ): - assert textual_tokens is not None or visual_tokens is not None - - if textual_tokens is None: - x = self.vision_embed(visual_tokens, vision_masked_position) - encoder_padding_mask = None - multiway_split_position = -1 - elif visual_tokens is None: - x = self.text_embed(textual_tokens) - encoder_padding_mask = text_padding_position - multiway_split_position = 0 - else: - x1 = self.vision_embed(visual_tokens, vision_masked_position) - multiway_split_position = x1.size(1) - x2 = self.text_embed(textual_tokens) - x = torch.cat([x1, x2], dim=1) - - if text_padding_position is not None: - encoder_padding_mask = torch.cat( - [ - torch.zeros(x1.shape[:-1]).to(x1.device).bool(), - text_padding_position, - ], - dim=1, - ) - else: - encoder_padding_mask = None - - encoder_out = self.encoder( - src_tokens=None, - encoder_padding_mask=encoder_padding_mask, - token_embeddings=x, - multiway_split_position=multiway_split_position, - ) - - return encoder_out diff --git a/kosmos-g/torchscale/torchscale/model/__init__.py b/kosmos-g/torchscale/torchscale/model/__init__.py deleted file mode 100644 index 3ae31e250..000000000 --- a/kosmos-g/torchscale/torchscale/model/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# Copyright (c) 2022 Microsoft -# Licensed under The MIT License [see LICENSE for details] diff --git a/kosmos-g/train.py b/kosmos-g/train.py deleted file mode 100644 index 9164beece..000000000 --- a/kosmos-g/train.py +++ /dev/null @@ -1,7 +0,0 @@ -import unilm - -from fairseq_cli.train import cli_main - - -if __name__ == "__main__": - cli_main() \ No newline at end of file diff --git a/kosmos-g/unilm/__init__.py b/kosmos-g/unilm/__init__.py deleted file mode 100644 index c4a72444b..000000000 --- a/kosmos-g/unilm/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -import unilm.models -import unilm.criterions -import unilm.tasks \ No newline at end of file diff --git a/kosmos-g/unilm/criterions/__init__.py b/kosmos-g/unilm/criterions/__init__.py deleted file mode 100644 index ddafe01b7..000000000 --- a/kosmos-g/unilm/criterions/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -import importlib -import os - -# automatically import any Python files in the criterions/ directory -for file in sorted(os.listdir(os.path.dirname(__file__))): - if file.endswith(".py") and not file.startswith("_"): - file_name = file[: file.find(".py")] - importlib.import_module("unilm.criterions." 
+ file_name) \ No newline at end of file diff --git a/kosmos-g/unilm/criterions/kosmosg.py b/kosmos-g/unilm/criterions/kosmosg.py deleted file mode 100644 index 8c8141028..000000000 --- a/kosmos-g/unilm/criterions/kosmosg.py +++ /dev/null @@ -1,142 +0,0 @@ -from dataclasses import dataclass, field - -from omegaconf import II - -from fairseq import metrics -from fairseq.criterions import FairseqCriterion, register_criterion -from fairseq.dataclass import FairseqDataclass - - -@dataclass -class KosmosGDiffConfig(FairseqDataclass): - ignore_eos: bool = field( - default=False, - metadata={"help": "ignore mlm output at eos token."}, - ) - sentence_avg: bool = II("optimization.sentence_avg") - data_weights: str = field( - default="", - metadata={"help": "dataset weights"} - ) - align: bool = field( - default=False, - metadata={"help": "use clip supervision"}, - ) - - -@register_criterion( - "kosmosg", dataclass=KosmosGDiffConfig -) -class KosmosGLoss(FairseqCriterion): - def __init__(self, cfg, task): - super().__init__(task) - self.cfg = cfg - global LOSS_NAMES - global SEPERATED_LOSS - LOSS_NAMES = ["image_laion", "image_instructpix2pix", "image_openimage"] - LOSS_NAMES = [loss_name for i, loss_name in enumerate(LOSS_NAMES) if float(cfg.data_weights.split(',')[i]) > 0] - SEPERATED_LOSS = ["diff_loss", "mse_loss", "rec_loss"] - - def forward(self, model, sample, reduce=True, loss_name="image_laion"): - """Compute the loss for the given sample. - - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - net_output = model(**sample["net_input"]) - raw_loss = net_output[1]['loss'] - - ntokens = sample["ntokens"] - nsentences = sample["nsentences"] - - logging_output = { - "ntokens": ntokens, - "nsentences": nsentences - } - - if 'diff_loss' in raw_loss: - loss = raw_loss['diff_loss'] - sample_size = nsentences - logging_output['diff_loss'] = loss.data - - elif 'clip_loss' in raw_loss: - loss = raw_loss['clip_loss']['mse_loss'] + raw_loss['clip_loss']['rec_loss'] - sample_size = 1 - for k, v in raw_loss['clip_loss'].items(): - logging_output[k] = v.data - - else: - raise NotImplementedError - - logging_output['loss'] = loss.data - logging_output[loss_name] = loss.data - logging_output["sample_size"] = sample_size - logging_output[loss_name + "_sample_size"] = logging_output["sample_size"] - return loss, sample_size, logging_output - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_items = [] - seperated_loss_items = {k: [] for k in SEPERATED_LOSS} - # log individual losses - for loss_name in LOSS_NAMES: - num_log = sum(1 for log in logging_outputs if loss_name in log) - loss_sum = sum(log.get(loss_name, 0) for log in logging_outputs) - single_sample_size = sum(log.get(loss_name + "_sample_size", 0) for log in logging_outputs) - seperated_loss_sum = { - k: sum(log.get(k, 0) for log in logging_outputs if loss_name in log) for k in SEPERATED_LOSS - } - - if loss_sum != 0: - metrics.log_scalar( - loss_name, loss_sum / num_log, num_log, round=3 - ) - metrics.log_scalar( - loss_name + "_sample_size", single_sample_size, round=3 - ) - - loss_items.append(loss_sum / num_log) - - for k in SEPERATED_LOSS: - if seperated_loss_sum[k] != 0: - metrics.log_scalar( - loss_name + "_" + k, seperated_loss_sum[k] / single_sample_size, single_sample_size, - round=3 - ) - 
seperated_loss_items[k].append(seperated_loss_sum[k] / single_sample_size)
-
-            else:
-                metrics.log_scalar(
-                    loss_name, 0, round=3
-                )
-                metrics.log_scalar(
-                    loss_name + "_sample_size", 0, round=3
-                )
-                for k in SEPERATED_LOSS:
-                    is_k_exist = [k in log for log in logging_outputs]
-                    if any(is_k_exist):
-                        metrics.log_scalar(
-                            loss_name + "_" + k, 0, round=3
-                        )
-
-        # NOTE: num_log carries over from the last loss_name handled above.
-        metrics.log_scalar(
-            "loss", sum(loss_items) / len(loss_items), num_log, round=3
-        )
-
-        for k in SEPERATED_LOSS:
-            if len(seperated_loss_items[k]) > 0:
-                metrics.log_scalar(
-                    k, sum(seperated_loss_items[k]) / len(seperated_loss_items[k]), num_log, round=3
-                )
-
-    @staticmethod
-    def logging_outputs_can_be_summed() -> bool:
-        """
-        Whether the logging outputs returned by `forward` can be summed
-        across workers prior to calling `reduce_metrics`. Setting this
-        to True improves distributed training speed.
-        """
-        return True
diff --git a/kosmos-g/unilm/data/__init__.py b/kosmos-g/unilm/data/__init__.py
deleted file mode 100644
index e69de29bb..000000000
diff --git a/kosmos-g/unilm/data/basic_loader.py b/kosmos-g/unilm/data/basic_loader.py
deleted file mode 100644
index 05049be21..000000000
--- a/kosmos-g/unilm/data/basic_loader.py
+++ /dev/null
@@ -1,107 +0,0 @@
-import math
-import re
-import sys
-import time
-import torch
-from infinibatch.iterators import CheckpointableIterator
-from unilm.data import utils
-from unilm.data.utils import ConcatIterator, MixIterator
-
-
-class BaseBatchGen(CheckpointableIterator):
-    """
-    This is a base class for batch generators that use infinibatch
-    """
-
-    def __init__(self):
-        self._iter = None
-        self.epoch = 1
-        self.next_epoch_idx = 1
-        self.sharded_checkpoint = False
-        self.should_close_after_finished = True
-
-    def _build_iter(self):
-        """
-        Build infinibatch iterator and assign to self._iter
-        """
-        raise NotImplementedError()
-
-    def _move_to_tensor(self, batch):
-
-        def to_tensor(x):
-            return torch.tensor(x)
-
-        return utils.apply_to_sample(to_tensor, batch)
-
-    @property
-    def iterator(self):
-        if self._iter is None:
-            raise NotImplementedError("_build_iter() must be called first")
-        return self._iter
-
-    def __iter__(self):
-        if self._iter is None:
-            raise NotImplementedError("_build_iter() must be called first")
-        return self._iter
-
-    def __next__(self):
-        return next(self._iter)
-
-    def setstate(self, value):
-        self._iter.setstate(value)
-
-    def getstate(self):
-        return self._iter.getstate()
-
-    def close(self):
-        self._iter.close()
-
-    def __len__(self) -> int:
-        return 819200000
-
-    def next_epoch_itr(
-        self, shuffle=True, fix_batches_to_gpus=False, set_dataset_epoch=True
-    ):
-        return self
-
-    def end_of_epoch(self) -> bool:
-        return False
-
-    def state_dict(self):
-        """Returns a dictionary containing a whole state of the iterator."""
-        return self.getstate()
-
-    def load_state_dict(self, state_dict):
-        """Copies the state of the iterator from the given *state_dict*."""
-        self.setstate(state_dict)
-
-    @property
-    def first_batch(self):
-        return "DUMMY"
-
-
-class ConcatLoader(BaseBatchGen):
-    def __init__(self, dataloaders):
-        super().__init__()
-        self.dataloaders = list(dataloaders)
-        self._build_iter()
-
-    def _build_iter(self):
-        """
-        Build infinibatch iterator and assign to self._iter
-        """
-        self._iter = ConcatIterator([dataloader.iterator for dataloader in self.dataloaders])
-
-
-class MixLoader(BaseBatchGen):
-    def __init__(self, dataloaders, weights):
-        super().__init__()
-        self.dataloaders = list(dataloaders)
-        self.weights = weights
-        self._build_iter()
-
-    def
_build_iter(self): - """ - Build infinibatch iterator and assign to self._iter - """ - self._iter = MixIterator([dataloader.iterator for dataloader in self.dataloaders], self.weights) diff --git a/kosmos-g/unilm/data/lm_loader.py b/kosmos-g/unilm/data/lm_loader.py deleted file mode 100644 index b5d9894dd..000000000 --- a/kosmos-g/unilm/data/lm_loader.py +++ /dev/null @@ -1,344 +0,0 @@ -import glob -import os -import torch -import numpy as np -import time -import json -import random -import itertools -import hydra -import copy -from omegaconf import DictConfig, OmegaConf - -from infinibatch import iterators -from unilm.data.basic_loader import BaseBatchGen -from unilm.data.utils import NativeCheckpointableIterator, WeightIterator, EOD_SYMBOL -from fairseq.utils import safe_getattr, safe_hasattr - - -class LMLoader(BaseBatchGen): - - def __init__( - self, - args, - dataset, - dictionary, - tokenizer, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - epoch=1, - num_shards=1, - shard_id=0, - ): - super().__init__() - self.args = args - self.data = dataset.data - self.data_dir = dataset.data_dir - self.shuffle = dataset.shuffle - self.dictionary = dictionary - self.tokenizer = tokenizer - - self.max_tokens = max_tokens - self.max_sentences = max_sentences - self.max_positions = max_positions - self.tokens_per_sample = args.tokens_per_sample - self.mlm_cut_length = safe_getattr(args, "mlm_cut_length", 0) - self.mlm_tokens_proportion = safe_getattr(args, "mlm_tokens_proportion", 0) - self.ignore_invalid_inputs = ignore_invalid_inputs - self.required_batch_size_multiple = required_batch_size_multiple - self.seed = str(seed) - self.epoch = epoch - self.num_shards = num_shards - self.shard_id = shard_id - - self.batch_read_ahead = args.batch_read_ahead - - self._build_iter() - - def _build_iter(self): - tokenized_lines = self._tokenize() - self.padded_batches = self._batchify(tokenized_lines) - - prefetch_batches = iterators.PrefetchIterator( - self.padded_batches, - buffer_size=10000, - buffer_in_main_process=True, - log_empty_buffer_warning=True and self.shard_id == 0, - ) - - prefetch_batches = iterators.MapIterator( - prefetch_batches, self._move_to_tensor - ) - - self._iter = prefetch_batches - - def _tokenize(self): - ''' - data: - { - 'source': list[Path], - } - ''' - dataset = list(zip(self.data['source'])) - - if self.shuffle: - chunk_files = \ - iterators.InfinitePermutationSourceIterator( - dataset, - seed=self.seed, - shuffle=self.shuffle, - num_instances=self.num_shards, - instance_rank=self.shard_id, - ) - else: - chunk_files = \ - iterators.ChunkedSourceIterator( - dataset, - num_instances=self.num_shards, - instance_rank=self.shard_id, - ) - - tokenized_lines = iterators.SelectManyIterator(chunk_files, lambda files: self._read_from_files(*files)) - tokenized_lines = iterators.SamplingRandomMapIterator(tokenized_lines, self._prepare, self.seed) - - return tokenized_lines - - def getstate(self): - state = super().getstate() - state["epoch"] = self.epoch - state["iterations_in_epoch"] = None - return state - - def _batchify(self, lines): - - if self.max_sentences is not None: - if self.batch_read_ahead > 0: - lines = iterators.BlockwiseShuffleIterator(lines, self.batch_read_ahead, self.seed) - batches = iterators.FixedBatchIterator(lines, self.max_sentences) - else: - # - - def dynamic_batch_size(sample): - lengths = [len(x) for x in sample] - batch_size = self.max_tokens // max(lengths) // 
self.required_batch_size_multiple * self.required_batch_size_multiple - return max(1, batch_size) - - batches = iterators.BucketedReadaheadBatchIterator( - lines, - read_ahead=self.batch_read_ahead, - key=(lambda x: max(len(x[0]), len(x[1]))) if self.shuffle else None, - batch_size=dynamic_batch_size, - shuffle=self.shuffle, - seed=self.seed, - ) - - def collate(batch): - batch_size = len(batch) - mlm_batch_size = sum([len(x[2]) for x in batch]) - - gpt_max_length = max([len(x[0]) for x in batch]) - - mlm_max_length = 0 - mlm_ntokens = 0 - for x in batch: - for y in x[2]: - mlm_max_length = max(mlm_max_length, len(y)) - mlm_ntokens += len(y) - - gpt_source_ids = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, - fill_value=self.dictionary.pad()) - gpt_target_ids = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, - fill_value=self.dictionary.pad()) - mlm_source_ids = np.full(shape=(mlm_batch_size, mlm_max_length), dtype=np.int32, - fill_value=self.dictionary.pad()) - gpt_input_mask_all = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, fill_value=0) - gpt_loss_mask_all = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, fill_value=1) - mlm_mask_all = np.full(shape=(mlm_batch_size, mlm_max_length), dtype=np.int32, fill_value=0) - - mlm_index = 0 - for i, (gpt_ids, gpt_input_mask, mlm_ids_list, mlm_mask_list, gpt_loss_mask) in enumerate(batch): - gpt_source_ids[i, :len(gpt_ids)-1] = gpt_ids[:-1] - gpt_target_ids[i, :len(gpt_ids)-1] = gpt_ids[1:] - gpt_input_mask_all[i, :len(gpt_ids)-1] = gpt_input_mask[:-1] - gpt_loss_mask_all[i, :len(gpt_ids)-1] = gpt_loss_mask[1:] - - for j, (mlm_ids, mlm_mask) in enumerate(zip(mlm_ids_list, mlm_mask_list)): - mlm_source_ids[mlm_index, :len(mlm_ids)] = mlm_ids - mlm_mask_all[mlm_index, :len(mlm_mask)] = mlm_mask - mlm_index += 1 - - ret_batch = { - 'text':{ - 'net_input': { - 'src_tokens': gpt_source_ids.astype(np.int64), - 'mlm_src_tokens': mlm_source_ids.astype(np.int64) if mlm_batch_size !=0 else None, - 'gpt_input_mask': gpt_input_mask_all.astype(np.bool_), - 'gpt_loss_mask': gpt_loss_mask_all.astype(np.bool_), - 'mlm_mask': mlm_mask_all.astype(np.bool_) if mlm_batch_size !=0 else None - }, - 'target': gpt_target_ids.astype(np.int64), - 'nsentences': batch_size, - 'ntokens': sum([len(x[0]) for x in batch]), - 'mlm_ntokens': mlm_ntokens - } - } - - return ret_batch - - def collate_for_gpt(batch): - batch_size = len(batch) - gpt_max_length = max([len(x[0]) for x in batch]) - - gpt_source_ids = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, - fill_value=self.dictionary.pad()) - gpt_target_ids = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, - fill_value=self.dictionary.pad()) - gpt_input_mask_all = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, fill_value=0) - gpt_loss_mask_all = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, fill_value=1) - chunk_tokens_all = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, fill_value=0) - segment_tokens_all = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, fill_value=0) - - for i, (gpt_ids, gpt_input_mask, mlm_ids_list, mlm_mask_list, gpt_loss_mask, chunk_tokens, segment_tokens) in enumerate(batch): - gpt_source_ids[i, :len(gpt_ids)-1] = gpt_ids[:-1] - gpt_target_ids[i, :len(gpt_ids)-1] = gpt_ids[1:] - gpt_input_mask_all[i, :len(gpt_ids)-1] = gpt_input_mask[:-1] - gpt_loss_mask_all[i, :len(gpt_ids)-1] = gpt_loss_mask[1:] - chunk_tokens_all[i, :len(gpt_ids)-1] = chunk_tokens[:-1] - 
segment_tokens_all[i, :len(gpt_ids)-1] = segment_tokens[:-1]
-
-            ret_batch = {
-                'gpt':{
-                    'net_input': {
-                        'src_tokens': gpt_source_ids.astype(np.int64),
-                        'mlm_src_tokens': None,
-                        'gpt_input_mask': gpt_input_mask_all.astype(np.bool_),
-                        'gpt_loss_mask': gpt_loss_mask_all.astype(np.bool_),
-                        'chunk_tokens': chunk_tokens_all.astype(np.int64),
-                        'segment_tokens': segment_tokens_all.astype(np.int64),
-                        'mlm_mask': None
-                    },
-                    'target': gpt_target_ids.astype(np.int64),
-                    'nsentences': batch_size,
-                    'ntokens': sum([len(x[0]) for x in batch]),
-                    'mlm_ntokens': 0
-                }
-            }
-
-            return ret_batch
-
-        if self.mlm_tokens_proportion == 0:
-            padded_batches = iterators.MapIterator(
-                batches, collate_for_gpt
-            )
-        else:
-            padded_batches = iterators.MapIterator(
-                batches, collate
-            )
-
-        return padded_batches
-
-    def _prepare(self, _random, doc):
-        # NOTE: collate_for_gpt unpacks all seven fields below; the MLM
-        # collate path expects five-tuples, so it would need this trimmed.
-        mlm_tokens, mlm_mask, gpt_input_mask, gpt_loss_mask, chunk_tokens, segment_tokens = self._mlm_cut(_random, doc)
-        full_tokens = self._gpt(doc)
-        return full_tokens, gpt_input_mask, mlm_tokens, mlm_mask, gpt_loss_mask, chunk_tokens, segment_tokens
-
-    def _mlm_cut(self, _random, doc):
-        eod_index = self.dictionary.indices[EOD_SYMBOL]
-
-        if self.mlm_tokens_proportion == 0:
-            mlm_tokens = []
-            mlm_mask = []
-            gpt_input_mask = [0] * len(doc)
-            gpt_loss_mask = [1] * len(doc)
-            chunk_tokens = [0] * len(doc)
-            segment_tokens = [0] * len(doc)
-            return mlm_tokens, mlm_mask, gpt_input_mask, gpt_loss_mask, chunk_tokens, segment_tokens
-
-        cut_start = np.arange(1, len(doc)-3/2*self.mlm_cut_length, self.mlm_cut_length, dtype=int)
-
-        _random.shuffle(cut_start)
-        mlm_tokens = []
-        mlm_mask = []
-        start_list = []
-        gpt_input_mask = np.zeros(len(doc), dtype=int)
-        gpt_loss_mask = np.ones(len(doc), dtype=int)
-        mlm_tokens_total_num = (len(doc)-1) * self.mlm_tokens_proportion
-
-        mlm_tokens_cur_num = 0
-
-        for start in cut_start:
-            eod_num = doc[start:start+self.mlm_cut_length].count(eod_index)
-            if eod_num >= 2:
-                continue
-            elif eod_num == 1:
-                eod_pos = doc[start:start+self.mlm_cut_length].index(eod_index)
-                if self.mlm_cut_length - eod_pos < 20:
-                    continue
-                start_ind, end_ind = start+eod_pos+1, start + self.mlm_cut_length
-            else:
-                cut_pos = _random.randint(0, self.mlm_cut_length-1)
-                if cut_pos >= self.mlm_cut_length/2:
-                    start_ind, end_ind = start, start + cut_pos + 1
-                else:
-                    start_ind, end_ind = start + cut_pos, start + self.mlm_cut_length
-
-            assert eod_index not in doc[start_ind:end_ind]
-
-            start_list.append(start)
-            mlm_tokens.append([self.dictionary.bos()] + doc[start_ind:end_ind])
-            mlm_tokens_cur_num += end_ind - start_ind
-            mlm_mask.append([0] + [1]*(end_ind - start_ind))
-            gpt_input_mask[start_ind:end_ind] = 1
-            gpt_loss_mask[start_ind:end_ind-1] = 0
-
-            if mlm_tokens_cur_num > mlm_tokens_total_num:
-                break
-
-        ind = np.array(start_list).argsort()
-        start_list = np.array(start_list)[ind]
-        mlm_tokens = np.array(mlm_tokens, dtype=object)[ind]
-        mlm_mask = np.array(mlm_mask, dtype=object)[ind]
-
-        # Callers unpack six values (see _prepare above), so this branch must
-        # also return chunk/segment tokens; all-zero vectors mirror the
-        # mlm_tokens_proportion == 0 branch.
-        chunk_tokens = [0] * len(doc)
-        segment_tokens = [0] * len(doc)
-
-        return mlm_tokens, mlm_mask, gpt_input_mask, gpt_loss_mask, chunk_tokens, segment_tokens
-
-    def _gpt(self, doc):
-        return doc
-
-    def _read_from_files(self, source_file):
-        data = []
-        file_path = os.path.join(self.data_dir, source_file)
-
-        if not os.path.exists(file_path):
-            print('| file {} not exists'.format(file_path), flush=True)
-            return iter([])  # skip bad file
-
-        with open(file_path, 'r', encoding='utf8') as f:
-            lines = f.read().strip().split('\n')
-
-        gpt_format_text = []
-        for line in lines:
-            gpt_format_text.extend(list(filter(None, json.loads(line)["text"].split("\n"))))
-
gpt_format_text.append('') - - tokenized_lines = [self.tokenizer.encode(line) for line in gpt_format_text] - tokenized_ids = [self.dictionary.encode_line(line, add_if_not_exist=False) for line in tokenized_lines] - - doc = [self.dictionary.bos()] - for ids in tokenized_ids: - if len(ids) > self.tokens_per_sample: # drop too long sentence - continue - - if len(doc) + len(ids) > self.tokens_per_sample: - if len(doc) > 5/2*self.mlm_cut_length + 1: - data.append(doc) - doc = [self.dictionary.bos()] - doc.extend(ids) - - if len(doc) > 1 and len(doc) <= self.tokens_per_sample: - if len(doc) > 5/2*self.mlm_cut_length + 1: - data.append(doc) - - return data \ No newline at end of file diff --git a/kosmos-g/unilm/data/spm_lm_loader.py b/kosmos-g/unilm/data/spm_lm_loader.py deleted file mode 100644 index 0204958cb..000000000 --- a/kosmos-g/unilm/data/spm_lm_loader.py +++ /dev/null @@ -1,137 +0,0 @@ -import json -import os -import multiprocessing -import itertools - -from infinibatch import iterators -from functools import partial -from .lm_loader import LMLoader -from .utils import NativeCheckpointableIterator, WeightIterator, EOL_SYMBOL -from fairseq.data.encoders.gpt2_bpe import GPT2BPE -from tiktoken.core import Encoding - - -class SpmLmLoader(LMLoader): - def _tokenize(self): - multilingual_iters = [] - weights = [] - - for data in self.data: - multilingual_iters.append( - self._tokenize_foreach_lang(data) - ) - if 'weight' in data: - weights.append(float(data['weight'])) - else: - weights.append(int(data['count'])) - - if len(multilingual_iters) == 1: - return multilingual_iters[0] - - sampling_iterator = WeightIterator(weights, self.seed) - control_iterator = NativeCheckpointableIterator(sampling_iterator) - tokenized_lines = iterators.MultiplexIterator(control_iterator, multilingual_iters) - - return tokenized_lines - - def _tokenize_foreach_lang(self, data): - dataset = list(zip(data['source'])) - if self.shuffle: - chunk_files = iterators.InfinitePermutationSourceIterator( - dataset, - seed=self.seed, - shuffle=self.shuffle, - num_instances=self.num_shards, - instance_rank=self.shard_id,) - else: - chunk_files = iterators.ChunkedSourceIterator( - dataset, - num_instances=self.num_shards, - instance_rank=self.shard_id,) - - tokenized_lines = iterators.SelectManyIterator(chunk_files, lambda files: self._read_from_files(*files)) - tokenized_lines = iterators.SamplingRandomMapIterator(tokenized_lines, self._prepare, self.seed) - - return tokenized_lines - - @staticmethod - def fs_encode_line(fs_dict, words, append_eos=True): - ids = [] - for i, word in enumerate(words): - idx = fs_dict.index(word) - ids.append(idx) - if append_eos: - ids.append(fs_dict.eos_index) - return ids - - @staticmethod - def _doc_jsonstr_to_ids(doc_jsonstr, spm_tokenizer=None, fs_dict=None): - assert EOL_SYMBOL in fs_dict.indices - eol_index = fs_dict.indices[EOL_SYMBOL] - tokenized_ids = [] - for line in filter(None, json.loads(doc_jsonstr)["text"].split("\n")): - if isinstance(spm_tokenizer, GPT2BPE): - tokens = spm_tokenizer.encode(line).split(' ') - elif isinstance(spm_tokenizer, Encoding): - tokens = list(map(str, spm_tokenizer.encode(line + '\n', allowed_special="all"))) - else: - tokens = spm_tokenizer.encode(line, out_type=str) - tokenized_tokens = SpmLmLoader.fs_encode_line(fs_dict, tokens, append_eos=False) - if not isinstance(spm_tokenizer, Encoding): - tokenized_tokens.append(eol_index) - tokenized_ids.append(tokenized_tokens) - if len(tokenized_ids) > 0: - last_line_ids = tokenized_ids[-1] - if 
last_line_ids[-1] == eol_index: - last_line_ids[-1] = fs_dict.eos_index - else: - last_line_ids.append(fs_dict.eos_index) - else: - print("[W] At SpmLmLoader._doc_jsonstr_to_ids, A null document with no lines!") - tokenized_ids = [[fs_dict.eos_index]] - return tokenized_ids - - def _read_from_files(self, source_file): - data = [] - file_path = os.path.join(self.data_dir, source_file) - - if not os.path.exists(file_path): - print('| file {} not exists'.format(file_path), flush=True) - return iter([]) # skip bad file - - try: - with open(file_path, 'r', encoding='utf8') as f: - lines = f.read().strip().split('\n') - except: - return iter([]) # skip bad file - - # NOTE #### simple spm implementation ############### - tokenized_ids = [] - for doc_jsonstr in lines: - ret = SpmLmLoader._doc_jsonstr_to_ids(doc_jsonstr, spm_tokenizer=self.tokenizer, fs_dict=self.dictionary) - tokenized_ids.extend(ret) - # ################################################### - - doc = [self.dictionary.bos()] - for ids in tokenized_ids: - - if getattr(self.args, "debug_p100", False): - if len(ids) > 256: - ids = ids[:256] - - if len(ids) >= self.tokens_per_sample: # drop too long sentence - # data.append(doc[:]) - ids = ids[:self.tokens_per_sample - 1] - # continue - - if len(doc) + len(ids) > self.tokens_per_sample: - if len(doc) > 5/2*self.mlm_cut_length + 1: - data.append(doc) - doc = [self.dictionary.bos()] - doc.extend(ids) - - if len(doc) > 1 and len(doc) <= self.tokens_per_sample: - if len(doc) > 5/2*self.mlm_cut_length + 1: - data.append(doc) - - return data \ No newline at end of file diff --git a/kosmos-g/unilm/data/utils.py b/kosmos-g/unilm/data/utils.py deleted file mode 100644 index ec4538e8b..000000000 --- a/kosmos-g/unilm/data/utils.py +++ /dev/null @@ -1,172 +0,0 @@ -import os -import gzip -from sre_parse import SPECIAL_CHARS -import numpy as np -from random import Random -from typing import Any, Callable, Dict, Generator, Iterable, Iterator, List, Optional, Tuple, Union -import collections -from infinibatch import iterators - -EOD_SYMBOL = "</doc>" -BOI_SYMBOL = "<image>" -EOI_SYMBOL = "</image>" -EOC_SYMBOL = "</chunk>" -EOL_SYMBOL = "</line>" - -SPECIAL_SYMBOLS = [EOD_SYMBOL, BOI_SYMBOL, EOI_SYMBOL, EOC_SYMBOL, EOL_SYMBOL] - - -def apply_to_sample(f, sample): - if hasattr(sample, "__len__") and len(sample) == 0: - return {} - - def _apply(x): - if isinstance(x, np.ndarray): - return f(x) - elif isinstance(x, collections.OrderedDict): - # OrderedDict has attributes that needs to be preserved - od = collections.OrderedDict((key, _apply(value)) for key, value in x.items()) - od.__dict__ = x.__dict__ - return od - elif isinstance(x, dict): - return {key: _apply(value) for key, value in x.items()} - elif isinstance(x, list): - return [_apply(x) for x in x] - elif isinstance(x, tuple): - return tuple(_apply(x) for x in x) - elif isinstance(x, set): - return {_apply(x) for x in x} - else: - return x - - return _apply(sample) - -class NativeCheckpointableIterator(iterators.CheckpointableIterator): - def __init__(self, iterable: Iterable): - self._input_iterable = iterable - self.setstate(None) - - def getstate(self) -> Dict: - return {'num_items_yielded': self._num_items_yielded} - - def setstate(self, checkpoint: Optional[Dict]): - self._iterator = iter(self._input_iterable) - self._num_items_yielded = iterators._advance_iterator(self._iterator, checkpoint['num_items_yielded']) if checkpoint is not None else 0 - - def __next__(self): - item = next(self._iterator) - self._num_items_yielded += 1 - return 
item - - def close(self): - pass - - -class WeightIterator(object): - def __init__(self, weights, seed): - self.weights = weights - self.seed = seed - self.control_index = list(range(len(weights))) - self.setstate(None) - - def __iter__(self): - return self - - def getstate(self): - return {"random_state": self._random_state} - - def setstate(self, checkpoint): - self._random_state = checkpoint["random_state"] if checkpoint else None - self._random = None # this will trigger the lazy initialization in self.__next__ - - def __next__(self): - if self._random is None: - self._random = Random(self.seed) - if self._random_state is not None: - self._random.setstate(self._random_state) - idx = self._random.choices(self.control_index, self.weights)[0] - self._random_state = self._random.getstate() - return idx - - def close(self): - pass - - -class ConcatIterator(iterators.CheckpointableIterator): - """ - Concat items from all given iterators. - """ - def __init__(self, source_iterators): - """ - Args: - source_iterators: list of iterators to zip, item by item - """ - # TODO: Use all function? - for source_iterator in source_iterators: - if not isinstance(source_iterator, iterators.CheckpointableIterator): - raise ValueError('all iterators in source_iterators have to be CheckpointableIterator') - self._source_iterators = source_iterators # type: List[CheckpointableIterator] - - def getstate(self): - return {'input_states': tuple(iterator.getstate() for iterator in self._source_iterators)} - - def setstate(self, checkpoint): - if checkpoint is None: - for iterator in self._source_iterators: - iterator.setstate(None) - else: - # TODO: Add check that both lists have the same length? - for iterator, state in zip(self._source_iterators, checkpoint['input_states']): - iterator.setstate(state) - - def __next__(self): - res = {} # (note: can't use a generator expression, as it gets confused when a next() call raises StopIteration) - for iterator in self._source_iterators: - res.update(next(iterator)) - return res - - def close(self): - for it in self._source_iterators: - it.close() - - -class MixIterator(iterators.CheckpointableIterator): - """ - Concat items from all given iterators. - """ - def __init__(self, source_iterators, weights): - """ - Args: - source_iterators: list of iterators to zip, item by item - """ - # TODO: Use all function? - for source_iterator in source_iterators: - if not isinstance(source_iterator, iterators.CheckpointableIterator): - raise ValueError('all iterators in source_iterators have to be CheckpointableIterator') - self._source_iterators = source_iterators # type: List[CheckpointableIterator] - assert len(weights) == len(source_iterators) - self.weights = weights - self.population = list(range(len(source_iterators))) - - def getstate(self): - return {'input_states': tuple(iterator.getstate() for iterator in self._source_iterators)} - - def setstate(self, checkpoint): - if checkpoint is None: - for iterator in self._source_iterators: - iterator.setstate(None) - else: - # TODO: Add check that both lists have the same length? 
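`WeightIterator` above makes weighted dataset sampling resumable by checkpointing nothing but the RNG state. A stripped-down sketch of that contract (class name, weights, and seed are illustrative):

```python
from random import Random


class TinyWeightIterator:
    """Endless weighted index sampler with getstate/setstate checkpoints."""

    def __init__(self, weights, seed):
        self.weights = list(weights)
        self.population = list(range(len(weights)))
        self._random = Random(seed)

    def getstate(self):
        return {"random_state": self._random.getstate()}

    def setstate(self, checkpoint):
        self._random.setstate(checkpoint["random_state"])

    def __next__(self):
        return self._random.choices(self.population, self.weights)[0]


it = TinyWeightIterator([0.7, 0.2, 0.1], seed=1)
ckpt = it.getstate()
first = [next(it) for _ in range(5)]
it.setstate(ckpt)
assert [next(it) for _ in range(5)] == first  # the resumed stream repeats
```

Note that `MixIterator` below draws from a fresh, unseeded `Random()` on every `__next__`, so unlike `WeightIterator` its choices are not reproduced after a checkpoint restore.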
- for iterator, state in zip(self._source_iterators, checkpoint['input_states']): - iterator.setstate(state) - - def __next__(self): - _random = Random() - res = {} # (note: can't use a generator expression, as it gets confused when a next() call raises StopIteration) - idx = _random.choices(self.population, self.weights)[0] - res.update(next(self._source_iterators[idx])) - return res - - def close(self): - for it in self._source_iterators: - it.close() diff --git a/kosmos-g/unilm/data/vl/instructpix2pix_loader.py b/kosmos-g/unilm/data/vl/instructpix2pix_loader.py deleted file mode 100644 index 19793bb15..000000000 --- a/kosmos-g/unilm/data/vl/instructpix2pix_loader.py +++ /dev/null @@ -1,246 +0,0 @@ -try: - from fairseq.data.encoders.gpt2_bpe import GPT2BPE -except: - print("GPT2BPE not found, please install fairseq first if you want to use GPT2BPE") -import os - -from PIL import Image - -try: - from torchvision.transforms import InterpolationMode - - BICUBIC = InterpolationMode.BICUBIC -except ImportError: - BICUBIC = Image.BICUBIC - -import numpy as np -import torch -from tiktoken.core import Encoding -from torchvision.transforms import CenterCrop, Compose, Resize -import base64 -import io -import random -from infinibatch import iterators -from unilm.data.vl.vl_base_loader import VLBaseLoader - -ALT_KEY = 'MMAltTextWords' -CAPTION_KEY = 'MMCaptionWords' -CONTENT_KEY = 'Content' -IMAGE_KEY = 'MMImage' -BOI_SYMBOL = "<image>" -EOI_SYMBOL = "</image>" - - -class NumpyNormalize(torch.nn.Module): - def __init__(self, mean, std): - super().__init__() - self.mean = mean - self.std = std - - def forward(self, img): - """ - Args: - img (PIL Image or Tensor). - Returns: - """ - image = np.array(img).transpose(2, 0, 1) # B, H, W, C -> B, C, H, W - image = image / 255.0 - image -= np.array(self.mean).reshape(-1, 1, 1) - image /= np.array(self.std).reshape(-1, 1, 1) - return image - - def __repr__(self) -> str: - return f"{self.__class__.__name__}(mean={self.mean}, std={self.std})" - - -class InstructPix2PixLoader(VLBaseLoader): - def _setup(self): - self.max_image_num = self.args.max_image_num - self.image_token_length = self.args.image_token_length - self.random_drop_caption_prob = self.args.random_drop_caption_prob - self.dictionary.add_symbol(BOI_SYMBOL) - self.dictionary.add_symbol(EOI_SYMBOL) - - def _build_filter(self): - return None - - def _build_image_transform(self): - preprocess_image = { - 'gpt': Compose([ - Resize(224, interpolation=BICUBIC), - CenterCrop(224), - NumpyNormalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)) - ]), - 'diff': Compose([ - Resize(512), - CenterCrop(512), - NumpyNormalize([0.5], [0.5]) - ]) - } - return preprocess_image - - def _build_text_transform(self): - def text_transform(text): - append_eos = False - fs_dict = self.dictionary - if isinstance(self.tokenizer, Encoding): - words = list(map(str, self.tokenizer.encode(text, allowed_special="all"))) - else: - words = self.tokenizer.encode(text, out_type=str) - # ids = [fs_dict.bos_index] - ids = [] - for i, word in enumerate(words): - idx = fs_dict.index(word) - ids.append(idx) - if append_eos: - ids.append(fs_dict.eos_index) - return ids - - return text_transform - - def _batchify(self, lines): - - if self.max_sentences is not None: - if self.batch_read_ahead > 0: - lines = iterators.BlockwiseShuffleIterator(lines, self.batch_read_ahead, self.seed) - batches = iterators.FixedBatchIterator(lines, self.max_sentences) - else: - # - - def dynamic_batch_size(sample): - lengths = 
[len(x) for x in sample] - batch_size = self.max_tokens // max( - lengths) // self.required_batch_size_multiple * self.required_batch_size_multiple - return max(1, batch_size) - - batches = iterators.BucketedReadaheadBatchIterator( - lines, - read_ahead=self.batch_read_ahead, - key=(lambda x: max(len(x[0]), len(x[1]))) if self.shuffle else None, - batch_size=dynamic_batch_size, - shuffle=self.shuffle, - seed=self.seed, - ) - - def collate(batch): - batch_size = len(batch) - - gpt_max_length = max([len(x[0]) for x in batch]) - - gpt_source_ids = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, - fill_value=self.dictionary.pad()) - gpt_target_ids = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, - fill_value=self.dictionary.pad()) - gpt_input_mask_all = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, fill_value=0) - gpt_loss_mask_all = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, fill_value=1) - chunk_tokens_all = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, fill_value=0) - segment_tokens_all = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, fill_value=0) - - all_gpt_source_image_tokens = [] - all_target_image_tokens = [] - - for i, (full_tokens, gpt_src_image_tokens, tgt_image_tokens, text_input_mask, text_loss_mask, chunk_tokens, - segment_tokens) in enumerate(batch): - gpt_source_ids[i, :len(full_tokens) - 1] = full_tokens[:-1] - gpt_target_ids[i, :len(full_tokens) - 1] = full_tokens[1:] - gpt_input_mask_all[i, :len(full_tokens) - 1] = text_input_mask[:-1] - gpt_loss_mask_all[i, :len(full_tokens) - 1] = text_loss_mask[:-1] - chunk_tokens_all[i, :len(full_tokens) - 1] = chunk_tokens[:-1] - segment_tokens_all[i, :len(full_tokens) - 1] = segment_tokens[:-1] - all_gpt_source_image_tokens.extend(gpt_src_image_tokens) - all_target_image_tokens.append(tgt_image_tokens) - - gpt_image_source_ids = np.stack(all_gpt_source_image_tokens).astype(np.float32) \ - if all_gpt_source_image_tokens else None - image_target_ids = np.stack(all_target_image_tokens) - - ret_batch = { - 'vl_instructpix2pix': { - 'net_input': { - 'src_tokens': gpt_source_ids.astype(np.int64), - 'gpt_img_src_tokens': gpt_image_source_ids, - 'img_tgt_tokens': image_target_ids.astype(np.float32), - 'img_gpt_input_mask': gpt_input_mask_all.astype(np.bool_), - 'gpt_loss_mask': gpt_loss_mask_all.astype(np.bool_), - 'chunk_tokens': chunk_tokens_all.astype(np.int64), - 'segment_tokens': segment_tokens_all.astype(np.int64), - }, - 'target': gpt_target_ids.astype(np.int64), - 'nsentences': batch_size, - 'ntokens': sum([len(x[0]) for x in batch]), - } - } - - return ret_batch - - padded_batches = iterators.MapIterator( - batches, collate - ) - - return padded_batches - - def _prepare(self, _random, doc): - text_tokens = doc[CAPTION_KEY] - src_image_tokens = doc[IMAGE_KEY][:-1] - tgt_image_tokens = doc[IMAGE_KEY][-1] - text_input_mask = doc['input_mask'] - text_loss_mask = doc['loss_mask'] - chunk_tokens = doc['chunk_tokens'] - segment_tokens = doc['segment_tokens'] - - gpt_src_image_tokens = [self.image_transform['gpt'](im) for im in src_image_tokens] - tgt_image_tokens = self.image_transform['diff'](tgt_image_tokens) - - return text_tokens, gpt_src_image_tokens, tgt_image_tokens, text_input_mask, text_loss_mask, chunk_tokens, segment_tokens - - def _read_from_files(self, source_file): - file_path = os.path.join(self.data_dir, 'data', source_file) - if not os.path.exists(file_path): - print('| file {} not exists'.format(file_path), 
flush=True) - return iter([]) # skip bad file - try: - with open(file_path, 'r', encoding='utf8') as f: - lines = f.read().strip().split('\n') - except: - return iter([]) # skip bad file - - bos_id = self.dictionary.bos() - eos_id = self.dictionary.eos() - boi_id = self.dictionary.index(BOI_SYMBOL) - eoi_id = self.dictionary.index(EOI_SYMBOL) - for doc_str in lines: - item = doc_str.strip().split('\t') - # prepare text and image tokens - input_tokens = self.text_transform(item[0]) - edit_tokens = self.text_transform(item[1]) - # random drop caption - if random.random() < self.random_drop_caption_prob: - input_tokens = [] - - num_seeds = (len(item) - 2) // 2 - seed = random.randint(0, num_seeds - 1) - try: - image_tokens = [item[seed * 2 + 2], item[seed * 2 + 3]] - image_tokens = [Image.open(io.BytesIO(base64.b64decode(im))).convert("RGB") for im in image_tokens] - - doc = [bos_id] + input_tokens + [boi_id] * (self.image_token_length + 1) + [ - eoi_id] + edit_tokens + [eos_id] - doc_input_mask = [0] + [0] * len(input_tokens) + [0] + [1] * self.image_token_length + [0] + [ - 0] * len(edit_tokens) + [0] - doc_loss_mask = [1] + [1] * len(input_tokens) + [0] + [0] * self.image_token_length + [1] + [ - 1] * len(edit_tokens) + [1] - chunk_tokens = [0] + [0] * len(input_tokens) + [1] + [1] * self.image_token_length + [1] + [ - 1] * len(edit_tokens) + [1] - segment_tokens = [0] + [0] * len(input_tokens) + [1] + [1] * self.image_token_length + [1] + [ - 0] * len(edit_tokens) + [0] - - yield { - CAPTION_KEY: doc, - IMAGE_KEY: image_tokens, - 'input_mask': doc_input_mask, - 'loss_mask': doc_loss_mask, - 'chunk_tokens': chunk_tokens, - 'segment_tokens': segment_tokens, - } - - except: - continue diff --git a/kosmos-g/unilm/data/vl/laion2b_loader.py b/kosmos-g/unilm/data/vl/laion2b_loader.py deleted file mode 100644 index 92e9f4f18..000000000 --- a/kosmos-g/unilm/data/vl/laion2b_loader.py +++ /dev/null @@ -1,271 +0,0 @@ -try: - from fairseq.data.encoders.gpt2_bpe import GPT2BPE -except: - print("GPT2BPE not found, please install fairseq first if you want to use GPT2BPE") -import base64 -import io -import os - -from PIL import Image - -try: - from torchvision.transforms import InterpolationMode - - BICUBIC = InterpolationMode.BICUBIC -except ImportError: - BICUBIC = Image.BICUBIC - -import numpy as np -import torch -from PIL import Image -from tiktoken.core import Encoding -from torchvision.transforms import CenterCrop, Compose, Resize -from infinibatch import iterators -from unilm.data.vl.vl_base_loader import VLBaseLoader -from transformers import CLIPTokenizer - -ALT_KEY = 'MMAltTextWords' -CAPTION_KEY = 'MMCaptionWords' -CONTENT_KEY = 'Content' -IMAGE_KEY = 'MMImage' -BOI_SYMBOL = "<image>" -EOI_SYMBOL = "</image>" - - -class NumpyNormalize(torch.nn.Module): - def __init__(self, mean, std): - super().__init__() - self.mean = mean - self.std = std - - def forward(self, img): - """ - Args: - img (PIL Image or Tensor). 
- Returns: - """ - image = np.array(img).transpose(2, 0, 1) # B, H, W, C -> B, C, H, W - image = image / 255.0 - image -= np.array(self.mean).reshape(-1, 1, 1) - image /= np.array(self.std).reshape(-1, 1, 1) - return image - - def __repr__(self) -> str: - return f"{self.__class__.__name__}(mean={self.mean}, std={self.std})" - - -class Laion2BLoader(VLBaseLoader): - def _setup(self): - self.max_image_num = self.args.max_image_num - self.image_token_length = self.args.image_token_length - self.dictionary.add_symbol(BOI_SYMBOL) - self.dictionary.add_symbol(EOI_SYMBOL) - self.clip_tokenizer = CLIPTokenizer.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="tokenizer", - torch_dtype=torch.float16, revision="fp16") - - def _build_filter(self): - def width_height_filter(item): - # judge item[3] and item[4] is interger - if item[3].isdigit() and item[4].isdigit(): - return not self.args.align and (int(item[3]) < 384 or int(item[4]) < 384) - return True - - def length_filter(item): - if self.args.align and (len(self.clip_tokenizer.tokenize(item[1])) > 75 or len( - self.text_transform(item[1])) > 85): - return True - return False - - return [width_height_filter, length_filter] - - def _build_image_transform(self): - preprocess_image = { - 'gpt': Compose([ - Resize(224, interpolation=BICUBIC), - CenterCrop(224), - NumpyNormalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)) - ]), - 'diff': Compose([ - Resize(512), - CenterCrop(512), - NumpyNormalize([0.5], [0.5]) - ]) - } - return preprocess_image - - def _build_text_transform(self): - def text_transform(text): - append_eos = False - fs_dict = self.dictionary - if isinstance(self.tokenizer, Encoding): - words = list(map(str, self.tokenizer.encode(text, allowed_special="all"))) - else: - words = self.tokenizer.encode(text, out_type=str) - # ids = [fs_dict.bos_index] - ids = [] - for i, word in enumerate(words): - idx = fs_dict.index(word) - ids.append(idx) - if append_eos: - ids.append(fs_dict.eos_index) - return ids - - return text_transform - - def _batchify(self, lines): - if self.max_sentences is not None: - if self.batch_read_ahead > 0: - lines = iterators.BlockwiseShuffleIterator(lines, self.batch_read_ahead, self.seed) - batches = iterators.FixedBatchIterator(lines, self.max_sentences) - else: - # - - def dynamic_batch_size(sample): - lengths = [len(x) for x in sample] - batch_size = self.max_tokens // max( - lengths) // self.required_batch_size_multiple * self.required_batch_size_multiple - return max(1, batch_size) - - batches = iterators.BucketedReadaheadBatchIterator( - lines, - read_ahead=self.batch_read_ahead, - key=(lambda x: max(len(x[0]), len(x[1]))) if self.shuffle else None, - batch_size=dynamic_batch_size, - shuffle=self.shuffle, - seed=self.seed, - ) - - def collate(batch): - batch_size = len(batch) - - gpt_max_length = max([len(x[0]) for x in batch]) - - gpt_source_ids = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, - fill_value=self.dictionary.pad()) - gpt_target_ids = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, - fill_value=self.dictionary.pad()) - gpt_input_mask_all = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, fill_value=0) - gpt_loss_mask_all = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, fill_value=1) - - chunk_tokens_all = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, fill_value=0) - segment_tokens_all = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, fill_value=0) - - 
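The `collate` functions repeated across these loaders all build the language-modeling inputs the same way: a tokenized sequence of length `n` occupies `n - 1` positions, with `src_tokens` holding `tokens[:-1]` and `target` holding `tokens[1:]`, both right-padded to the longest sequence in the batch. A minimal, self-contained sketch of that shift-and-pad step (the pad id of 1 is illustrative; the loaders take it from the fairseq dictionary):

```python
import numpy as np

def shift_and_pad(batch_tokens, pad_id=1):
    """Pad token lists into next-token-prediction source/target arrays."""
    max_len = max(len(t) for t in batch_tokens)
    source = np.full((len(batch_tokens), max_len - 1), pad_id, dtype=np.int64)
    target = np.full((len(batch_tokens), max_len - 1), pad_id, dtype=np.int64)
    for i, toks in enumerate(batch_tokens):
        source[i, :len(toks) - 1] = toks[:-1]  # what the model reads
        target[i, :len(toks) - 1] = toks[1:]   # what it must predict, shifted by one
    return source, target

src, tgt = shift_and_pad([[0, 5, 6, 7, 2], [0, 9, 2]])
# src[1] -> [0, 9, 1, 1], tgt[1] -> [9, 2, 1, 1]
```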
clip_tokens_all = list() - all_target_image_tokens = [] - - for i, (full_tokens, gpt_src_image_tokens, tgt_image_tokens, text_input_mask, text_loss_mask, chunk_tokens, - segment_tokens, clip_tokens) in enumerate(batch): - gpt_source_ids[i, :len(full_tokens) - 1] = full_tokens[:-1] - gpt_target_ids[i, :len(full_tokens) - 1] = full_tokens[1:] - gpt_input_mask_all[i, :len(full_tokens) - 1] = text_input_mask[:-1] - gpt_loss_mask_all[i, :len(full_tokens) - 1] = text_loss_mask[:-1] - chunk_tokens_all[i, :len(full_tokens) - 1] = chunk_tokens[:-1] - segment_tokens_all[i, :len(full_tokens) - 1] = segment_tokens[:-1] - if clip_tokens is not None: - clip_tokens_all.append(clip_tokens) - if tgt_image_tokens is not None: - all_target_image_tokens.append(tgt_image_tokens) - - clip_tokens_all = self.clip_tokenizer( - clip_tokens_all, padding="max_length", - max_length=self.clip_tokenizer.model_max_length, truncation=True, - return_tensors="np").input_ids.astype(np.int64) if clip_tokens_all else None - image_target_ids = np.stack(all_target_image_tokens) if all_target_image_tokens else None - - ret_batch = { - 'vl_laion': { - 'net_input': { - 'src_tokens': gpt_source_ids.astype(np.int64), - 'gpt_img_src_tokens': None, - 'img_tgt_tokens': image_target_ids.astype(np.float32) if image_target_ids is not None else None, - 'img_gpt_input_mask': gpt_input_mask_all.astype(np.bool_), - 'gpt_loss_mask': gpt_loss_mask_all.astype(np.bool_), - 'chunk_tokens': chunk_tokens_all.astype(np.int64), - 'segment_tokens': segment_tokens_all.astype(np.int64), - 'clip_tokens': clip_tokens_all, - }, - 'target': gpt_target_ids.astype(np.int64), - 'nsentences': batch_size, - 'ntokens': sum([len(x[0]) for x in batch]), - } - } - - return ret_batch - - padded_batches = iterators.MapIterator( - batches, collate - ) - - return padded_batches - - def _prepare(self, _random, doc): - text_tokens = doc[CAPTION_KEY] - tgt_image_tokens = doc[IMAGE_KEY] - clip_tokens = doc['clip_tokens'] - text_input_mask = doc['input_mask'] - text_loss_mask = doc['loss_mask'] - chunk_tokens = doc['chunk_tokens'] - segment_tokens = doc['segment_tokens'] - - tgt_image_tokens = self.image_transform['diff'](tgt_image_tokens) if tgt_image_tokens is not None else None - - return text_tokens, None, tgt_image_tokens, text_input_mask, text_loss_mask, chunk_tokens, segment_tokens, clip_tokens - - def _read_from_files(self, source_file): - file_path = os.path.join(self.data_dir, source_file) - if not os.path.exists(file_path): - print('| file {} not exists'.format(file_path), flush=True) - return iter([]) # skip bad file - try: - with open(file_path, 'r', encoding='utf8') as f: - lines = f.read().strip().split('\n') - except: - return iter([]) # skip bad file - - bos_id = self.dictionary.bos_index - eos_id = self.dictionary.eos_index - - for doc_str in lines: - item = doc_str.strip().split('\t') - - # filter item based self.filter - # if 'laion2b' in source_file: # filter out bad image on laion dataset - is_filter = False - for filter in self.filters: - if filter(item): - is_filter = True - break - if is_filter: - continue - - try: - doc = self.text_transform(item[1]) - - if len(doc) > 128: - continue - - if self.args.align: - clip_tokens = item[1] - image_tokens = None - else: - clip_tokens = None - image_tokens = Image.open(io.BytesIO(base64.b64decode(item[2]))).convert("RGB") - - text_length = len(doc) - - text_tokens = [bos_id] + doc + [eos_id] - doc_input_mask = [0] + [0] * text_length + [0] - doc_loss_mask = [0] + [1] * text_length + [1] - chunk_tokens = [0] + 
[0] * text_length + [0] - segment_tokens = [0] + [0] * text_length + [0] - - yield { - CAPTION_KEY: text_tokens, - IMAGE_KEY: image_tokens, - 'input_mask': doc_input_mask, - 'loss_mask': doc_loss_mask, - 'chunk_tokens': chunk_tokens, - 'segment_tokens': segment_tokens, - 'clip_tokens': clip_tokens, - } - except: - continue diff --git a/kosmos-g/unilm/data/vl/openimage_loader.py b/kosmos-g/unilm/data/vl/openimage_loader.py deleted file mode 100644 index e6e5d2196..000000000 --- a/kosmos-g/unilm/data/vl/openimage_loader.py +++ /dev/null @@ -1,291 +0,0 @@ -try: - from fairseq.data.encoders.gpt2_bpe import GPT2BPE -except: - print("GPT2BPE not found, please install fairseq first if you want to use GPT2BPE") -import os - -from PIL import Image - -try: - from torchvision.transforms import InterpolationMode - - BICUBIC = InterpolationMode.BICUBIC -except ImportError: - BICUBIC = Image.BICUBIC - -import re -import numpy as np -import torch -from tiktoken.core import Encoding -from torchvision.transforms import CenterCrop, Compose, Resize -import base64 -import io -import random -from infinibatch import iterators -from unilm.data.vl.vl_base_loader import VLBaseLoader - -ALT_KEY = 'MMAltTextWords' -CAPTION_KEY = 'MMCaptionWords' -CONTENT_KEY = 'Content' -IMAGE_KEY = 'MMImage' -BOI_SYMBOL = "<image>" -EOI_SYMBOL = "</image>" - - -class NumpyNormalize(torch.nn.Module): - def __init__(self, mean, std): - super().__init__() - self.mean = mean - self.std = std - - def forward(self, img): - """ - Args: - img (PIL Image or Tensor). - Returns: - """ - image = np.array(img).transpose(2, 0, 1) # B, H, W, C -> B, C, H, W - image = image / 255.0 - image -= np.array(self.mean).reshape(-1, 1, 1) - image /= np.array(self.std).reshape(-1, 1, 1) - return image - - def __repr__(self) -> str: - return f"{self.__class__.__name__}(mean={self.mean}, std={self.std})" - - -class OpenImageLoader(VLBaseLoader): - def _setup(self): - self.max_image_num = self.args.max_image_num - self.image_token_length = self.args.image_token_length - self.random_drop_caption_prob = self.args.random_drop_caption_prob - self.dictionary.add_symbol(BOI_SYMBOL) - self.dictionary.add_symbol(EOI_SYMBOL) - - def _build_filter(self): - return None - - def _build_image_transform(self): - preprocess_image = { - 'gpt': Compose([ - Resize(224, interpolation=BICUBIC), - CenterCrop(224), - NumpyNormalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)) - ]), - 'diff': Compose([ - Resize(512), - CenterCrop(512), - NumpyNormalize([0.5], [0.5]) - ]) - } - return preprocess_image - - def _build_text_transform(self): - def text_transform(text): - append_eos = False - fs_dict = self.dictionary - if isinstance(self.tokenizer, Encoding): - words = list(map(str, self.tokenizer.encode(text, allowed_special="all"))) - else: - words = self.tokenizer.encode(text, out_type=str) - # ids = [fs_dict.bos_index] - ids = [] - for i, word in enumerate(words): - idx = fs_dict.index(word) - ids.append(idx) - if append_eos: - ids.append(fs_dict.eos_index) - return ids - - return text_transform - - def _batchify(self, lines): - if self.max_sentences is not None: - if self.batch_read_ahead > 0: - lines = iterators.BlockwiseShuffleIterator(lines, self.batch_read_ahead, self.seed) - batches = iterators.FixedBatchIterator(lines, self.max_sentences) - else: - # - - def dynamic_batch_size(sample): - lengths = [len(x) for x in sample] - batch_size = self.max_tokens // max( - lengths) // self.required_batch_size_multiple * self.required_batch_size_multiple - 
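The `dynamic_batch_size` closure used by each loader sizes batches from a token budget rather than a fixed sentence count: it divides `max_tokens` by the longest sample in the read-ahead window, then rounds down to `required_batch_size_multiple`. For example (values illustrative), with a 4096-token budget, a longest sample of 150 tokens, and a multiple of 8: `4096 // 150 = 27`, `27 // 8 * 8 = 24`, so 24 sequences (at most 3600 padded tokens) form the batch:

```python
def dynamic_batch_size(lengths, max_tokens=4096, multiple=8):
    # Largest batch size, rounded down to a multiple, such that
    # batch_size * max(lengths) stays within the token budget.
    bsz = max_tokens // max(lengths) // multiple * multiple
    return max(1, bsz)  # never return an empty batch

assert dynamic_batch_size([150, 90, 120]) == 24  # 24 * 150 = 3600 <= 4096
```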
return max(1, batch_size) - - batches = iterators.BucketedReadaheadBatchIterator( - lines, - read_ahead=self.batch_read_ahead, - key=(lambda x: max(len(x[0]), len(x[1]))) if self.shuffle else None, - batch_size=dynamic_batch_size, - shuffle=self.shuffle, - seed=self.seed, - ) - - def collate(batch): - batch_size = len(batch) - - gpt_max_length = max([len(x[0]) for x in batch]) - - gpt_source_ids = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, - fill_value=self.dictionary.pad()) - gpt_target_ids = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, - fill_value=self.dictionary.pad()) - gpt_input_mask_all = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, fill_value=0) - gpt_loss_mask_all = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, fill_value=1) - chunk_tokens_all = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, fill_value=0) - segment_tokens_all = np.full(shape=(batch_size, gpt_max_length - 1), dtype=np.int32, fill_value=0) - - all_gpt_source_image_tokens = [] - all_target_image_tokens = [] - - for i, (full_tokens, gpt_src_image_tokens, tgt_image_tokens, text_input_mask, text_loss_mask, chunk_tokens, - segment_tokens) in enumerate(batch): - gpt_source_ids[i, :len(full_tokens) - 1] = full_tokens[:-1] - gpt_target_ids[i, :len(full_tokens) - 1] = full_tokens[1:] - gpt_input_mask_all[i, :len(full_tokens) - 1] = text_input_mask[:-1] - gpt_loss_mask_all[i, :len(full_tokens) - 1] = text_loss_mask[:-1] - chunk_tokens_all[i, :len(full_tokens) - 1] = chunk_tokens[:-1] - segment_tokens_all[i, :len(full_tokens) - 1] = segment_tokens[:-1] - all_gpt_source_image_tokens.extend(gpt_src_image_tokens) - all_target_image_tokens.append(tgt_image_tokens) - - gpt_image_source_ids = np.stack(all_gpt_source_image_tokens).astype(np.float32) \ - if all_gpt_source_image_tokens else None - image_target_ids = np.stack(all_target_image_tokens) - - ret_batch = { - 'vl_openimage': { - 'net_input': { - 'src_tokens': gpt_source_ids.astype(np.int64), - 'gpt_img_src_tokens': gpt_image_source_ids, - 'img_tgt_tokens': image_target_ids.astype(np.float32), - 'img_gpt_input_mask': gpt_input_mask_all.astype(np.bool_), - 'gpt_loss_mask': gpt_loss_mask_all.astype(np.bool_), - 'chunk_tokens': chunk_tokens_all.astype(np.int64), - 'segment_tokens': segment_tokens_all.astype(np.int64), - }, - 'target': gpt_target_ids.astype(np.int64), - 'nsentences': batch_size, - 'ntokens': sum([len(x[0]) for x in batch]), - } - } - - return ret_batch - - padded_batches = iterators.MapIterator( - batches, collate - ) - - return padded_batches - - def _prepare(self, _random, doc): - text_tokens = doc[CAPTION_KEY] - src_image_tokens = doc[IMAGE_KEY][:-1] - tgt_image_tokens = doc[IMAGE_KEY][-1] - text_input_mask = doc['input_mask'] - text_loss_mask = doc['loss_mask'] - chunk_tokens = doc['chunk_tokens'] - segment_tokens = doc['segment_tokens'] - - gpt_src_image_tokens = [self.image_transform['gpt'](im) for im in src_image_tokens] - tgt_image_tokens = self.image_transform['diff'](tgt_image_tokens) - - return text_tokens, gpt_src_image_tokens, tgt_image_tokens, text_input_mask, text_loss_mask, chunk_tokens, segment_tokens - - @staticmethod - def tag_filter(tag, tags, caption): - if tag in ['yellow', 'blue', 'red', 'green', 'white', 'black', 'brown', 'grey', 'gray', 'purple', - 'pink', 'orange', 'cyan', 'silver', 'gold', 'golden']: - return False - - if re.search(r'\b{}\b'.format(re.escape(tag)), caption, re.IGNORECASE) is None: - return False - - if 
sum([re.search(r'\b{}\b'.format(tag), t) is not None for t in tags]) > 1: - return False - - return True - - def _read_from_files(self, source_file): - file_path = os.path.join(self.data_dir, 'data', source_file) - if not os.path.exists(file_path): - print('| file {} not exists'.format(file_path), flush=True) - return iter([]) # skip bad file - try: - with open(file_path, 'r', encoding='utf8') as f: - lines = f.read().strip().split('\n') - except: - return iter([]) # skip bad file - - bos_id = self.dictionary.bos() - eos_id = self.dictionary.eos() - boi_id = self.dictionary.index(BOI_SYMBOL) - eoi_id = self.dictionary.index(EOI_SYMBOL) - - for doc_str in lines: - item = doc_str.strip().split('\t') - drop_object = random.random() < self.random_drop_caption_prob - try: - # prepare text and image tokens - caption = item[0] - tags = item[1].split(',') - - doc = [bos_id] - image_tokens = [] - doc_input_mask = [0] - doc_loss_mask = [1] - chunk_tokens = [0] - segment_tokens = [0] - - tags = {tag: tag_id for tag_id, tag in enumerate(tags) if self.tag_filter(tag, tags, caption)} - # sort tags by order in caption - tags = sorted(tags.items(), - key=lambda x: re.search(r'\b{}\b'.format(re.escape(x[0])), caption, re.IGNORECASE).end()) - - for tag, tag_id in tags: - if tag == '': - continue - - search_result = re.search(r'\b{}\b'.format(re.escape(tag)), caption, re.IGNORECASE) - if search_result is None: - continue - object_head = search_result.start() - object_tail = search_result.end() - - while object_tail < len(caption) and \ - not caption[object_tail] in [' ', ',', '.', '!', '?', ';', ':', ')', ']', '}']: - object_tail += 1 - - object_caption = caption[:object_head if drop_object else object_tail] - object_caption += ' ' if object_caption[-1] != ' ' else '' - object_caption = self.text_transform(object_caption) - caption = caption[object_tail:] - doc.extend(object_caption + [boi_id] + [boi_id] * self.image_token_length + [eoi_id]) - tag_loc = tag_id * 2 + 3 + (1 if random.random() <= 0.5 else 0) - image_tokens.append(Image.open(io.BytesIO(base64.b64decode(item[tag_loc]))).convert("RGB")) - doc_input_mask.extend([0] * len(object_caption) + [0] + [1] * self.image_token_length + [0]) - doc_loss_mask.extend([1] * len(object_caption) + [0] + [0] * self.image_token_length + [1]) - chunk_id = chunk_tokens[-1] - chunk_tokens.extend([chunk_id] * len(object_caption)) - chunk_id += 1 - chunk_tokens.extend([chunk_id] * (self.image_token_length + 2)) - segment_tokens.extend([0] * len(object_caption) + [1] * (self.image_token_length + 2)) - - caption_tail = self.text_transform(caption) - doc.extend(caption_tail + [eos_id]) - doc_input_mask.extend([0] * len(caption_tail) + [0]) - doc_loss_mask.extend([1] * len(caption_tail) + [1]) - chunk_tokens.extend([chunk_tokens[-1]] * (len(caption_tail) + 1)) - segment_tokens.extend([0] * len(caption_tail) + [0]) - image_tokens.append(Image.open(io.BytesIO(base64.b64decode(item[2]))).convert("RGB")) - - yield { - CAPTION_KEY: doc, - IMAGE_KEY: image_tokens, - 'input_mask': doc_input_mask, - 'loss_mask': doc_loss_mask, - 'chunk_tokens': chunk_tokens, - 'segment_tokens': segment_tokens, - } - except: - continue diff --git a/kosmos-g/unilm/data/vl/vl_base_loader.py b/kosmos-g/unilm/data/vl/vl_base_loader.py deleted file mode 100644 index 5be7da5fd..000000000 --- a/kosmos-g/unilm/data/vl/vl_base_loader.py +++ /dev/null @@ -1,374 +0,0 @@ -import json -import os -import multiprocessing -import itertools - -from infinibatch import iterators -from functools import partial - -try: - 
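Both interleaved loaders (`instructpix2pix_loader` and `openimage_loader`) serialize a training example as text spans interleaved with fixed-width image placeholder runs, plus parallel masks: `input_mask` flags the slots that will be overwritten by image embeddings, and `loss_mask` restricts the cross-entropy loss to text positions. A standalone sketch of that layout for one caption/edit pair (the special-token ids and the placeholder width are illustrative; in the loaders they come from the fairseq dictionary and `image_token_length`):

```python
BOS, EOS, BOI, EOI = 0, 2, 100, 101  # hypothetical ids
IMG_LEN = 64                         # embedding slots reserved per image

def build_doc(caption_ids, edit_ids, img_len=IMG_LEN):
    # <s> caption <image> [slot]*L </image> edit </s>
    doc = [BOS] + caption_ids + [BOI] * (img_len + 1) + [EOI] + edit_ids + [EOS]
    # 1 marks the L slots later replaced by image features
    img_mask = [0] * (2 + len(caption_ids)) + [1] * img_len + [0] * (2 + len(edit_ids))
    # loss is computed on text tokens, never on image slots
    loss_mask = [1] * (1 + len(caption_ids)) + [0] * (1 + img_len) + [1] * (2 + len(edit_ids))
    assert len(doc) == len(img_mask) == len(loss_mask)
    return doc, img_mask, loss_mask
```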
from fairseq.data.encoders.gpt2_bpe import GPT2BPE -except: - print("GPT2BPE not found, please install fairseq first if you want to use GPT2BPE") - -import glob -import os -import torch -import numpy as np -import time -import json -import random -import itertools -import hydra -import copy - -from infinibatch import iterators -from unilm.data.basic_loader import BaseBatchGen -from unilm.data.utils import NativeCheckpointableIterator, WeightIterator - - -class VLBaseLoader(BaseBatchGen): - - def __init__( - self, - args, - dataset, - dictionary, - tokenizer, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - epoch=1, - num_shards=1, - shard_id=0, - no_prefetch=False, - ): - super().__init__() - self.args = args - self.data = dataset.data - self.data_dir = dataset.data_dir - self.shuffle = dataset.shuffle - self.dictionary = dictionary - self.tokenizer = tokenizer - - self.max_tokens = max_tokens - self.max_sentences = max_sentences - self.max_positions = max_positions - self.tokens_per_sample = args.tokens_per_sample - self.ignore_invalid_inputs = ignore_invalid_inputs - self.required_batch_size_multiple = required_batch_size_multiple - self.seed = str(seed) - self.epoch = epoch - self.num_shards = num_shards - self.shard_id = shard_id - - self.batch_read_ahead = args.batch_read_ahead - self.no_prefetch = no_prefetch - - # build filter and transform - self._setup() - self.filters = self._build_filter() - self.image_transform = self._build_image_transform() - self.text_transform = self._build_text_transform() - - self._build_iter() - - def getstate(self): - state = super().getstate() - state["epoch"] = self.epoch - state["iterations_in_epoch"] = None - return state - - def _setup(self): - pass - - def _build_filter(self): - raise NotImplementedError - - def _build_image_transform(self): - raise NotImplementedError - - def _build_text_transform(self): - raise NotImplementedError - - def _build_iter(self): - # read data, filter, and transform - tokenized_lines = self._tokenize() - - # batchify and collate - self.padded_batches = self._batchify(tokenized_lines) - - if self.no_prefetch: # no prefetch is for debug - prefetch_batches = self.padded_batches - else: - prefetch_batches = iterators.PrefetchIterator( - self.padded_batches, - buffer_size=self.args.batch_read_ahead, - buffer_in_main_process=True, - log_empty_buffer_warning=True and self.shard_id == 0, - ) - - prefetch_batches = iterators.MapIterator( - prefetch_batches, self._move_to_tensor - ) - - self._iter = prefetch_batches - - def _tokenize(self): - multilingual_iters = [] - weights = [] - - for data in self.data: - multilingual_iters.append( - self._tokenize_foreach_lang(data) - ) - if 'weight' in data: - weights.append(float(data['weight'])) - else: - weights.append(int(data['count'])) - - if len(multilingual_iters) == 1: - return multilingual_iters[0] - - sampling_iterator = WeightIterator(weights, self.seed) - control_iterator = NativeCheckpointableIterator(sampling_iterator) - tokenized_lines = iterators.MultiplexIterator(control_iterator, multilingual_iters) - - return tokenized_lines - - def _tokenize_foreach_lang(self, data): - dataset = list(zip(data['source'])) - if self.shuffle: - chunk_files = iterators.InfinitePermutationSourceIterator( - dataset, - seed=self.seed, - shuffle=self.shuffle, - num_instances=self.num_shards, - instance_rank=self.shard_id,) - else: - chunk_files = iterators.ChunkedSourceIterator( - dataset, - 
num_instances=self.num_shards, - instance_rank=self.shard_id,) - - # read file based filter in self._read_from_files function - tokenized_lines = iterators.SelectManyIterator(chunk_files, lambda files: self._read_from_files(*files)) - - # add image/text transform in self._prepare function - tokenized_lines = iterators.SamplingRandomMapIterator(tokenized_lines, self._prepare, self.seed) - - return tokenized_lines - - # def _batchify(self, lines): - - # if self.max_sentences is not None: - # if self.batch_read_ahead > 0: - # lines = iterators.BlockwiseShuffleIterator(lines, self.batch_read_ahead, self.seed) - # batches = iterators.FixedBatchIterator(lines, self.max_sentences) - # else: - # # - - # def dynamic_batch_size(sample): - # lengths = [len(x) for x in sample] - # batch_size = self.max_tokens // max(lengths) // self.required_batch_size_multiple * self.required_batch_size_multiple - # return max(1, batch_size) - - # batches = iterators.BucketedReadaheadBatchIterator( - # lines, - # read_ahead=self.batch_read_ahead, - # key=(lambda x: max(len(x[0]), len(x[1]))) if self.shuffle else None, - # batch_size=dynamic_batch_size, - # shuffle=self.shuffle, - # seed=self.seed, - # ) - - # def collate(batch): - # batch_size = len(batch) - # mlm_batch_size = sum([len(x[2]) for x in batch]) - - # gpt_max_length = max([len(x[0]) for x in batch]) - - # mlm_max_length = 0 - # mlm_ntokens = 0 - # for x in batch: - # for y in x[2]: - # mlm_max_length = max(mlm_max_length, len(y)) - # mlm_ntokens += len(y) - - # gpt_source_ids = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, - # fill_value=self.dictionary.pad()) - # gpt_target_ids = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, - # fill_value=self.dictionary.pad()) - # audio_source_ids = np.full(shape=(batch_size, self.audio_segment_size, self.max_audio_token_length * self.audio_downsample_rate, self.n_bins), dtype=np.float32, - # fill_value=self.dictionary.pad()) - # gpt_input_mask_all = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, fill_value=0) - # gpt_loss_mask_all = np.full(shape=(batch_size, gpt_max_length-1), dtype=np.int32, fill_value=1) - # audio_mask_all = np.full(shape=(batch_size, self.audio_segment_size, self.max_audio_token_length), dtype=np.int32, fill_value=0) - - # for i, (full_tokens, gpt_input_mask, audio_tokens, audio_masks, gpt_loss_mask) in enumerate(batch): - # gpt_source_ids[i, :len(full_tokens)-1] = full_tokens[:-1] - # gpt_target_ids[i, :len(full_tokens)-1] = full_tokens[1:] - # gpt_input_mask_all[i, :len(full_tokens)-1] = gpt_input_mask[:-1] - # gpt_loss_mask_all[i, :len(full_tokens)-1] = gpt_loss_mask[1:] - # audio_source_ids[i] = audio_tokens - # audio_mask_all[i] = audio_masks - - # ret_batch = { - # 'audio':{ - # 'net_input': { - # 'src_tokens': gpt_source_ids.astype(np.int64), - # 'aud_src_tokens': audio_source_ids.astype(np.float32), - # 'aud_gpt_input_mask': gpt_input_mask_all.astype(np.bool_), - # 'gpt_loss_mask': gpt_loss_mask_all.astype(np.bool_), - # 'aud_mask': audio_mask_all.astype(np.bool_) - # }, - # 'target': gpt_target_ids.astype(np.int64), - # 'nsentences': batch_size, - # 'ntokens': sum([len(x[0]) for x in batch]), - # } - # } - - # return ret_batch - - # padded_batches = iterators.MapIterator( - # batches, collate - # ) - - # return padded_batches - - # def _prepare(self, _random, doc): - # """ - # speech json line to ids, - # <s> <audio> <audio> * N </audio> Text Text text <audio> <audio> * N </audio> Text text </s> - # """ - # full_tokens, gpt_input_mask, 
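Setting the commented-out speech-loader remnants below aside, the live pipeline in `_build_iter`/`_tokenize_foreach_lang` is a chain of composable infinibatch iterators: shard names are permuted and sharded across workers, expanded into documents, transformed per document with a checkpointable RNG, batched, collated, and prefetched. A condensed, runnable sketch of the same chain (`read_shard`, `prepare`, and `collate` are toy stand-ins for `_read_from_files`, `_prepare`, and the collate closures):

```python
from infinibatch import iterators

files = ["shard-0.tsv", "shard-1.tsv"]  # hypothetical shard list

def read_shard(name):                   # shard name -> stream of documents
    return iter(f"{name}/doc{i}" for i in range(3))

def prepare(rng, doc):                  # per-doc transform with its own seeded RNG
    return (doc, rng.random())

collate = lambda batch: {"docs": batch, "bsz": len(batch)}

it = iterators.InfinitePermutationSourceIterator(files, seed=1)  # endless shuffled shards
it = iterators.SelectManyIterator(it, read_shard)                # flatten shards into docs
it = iterators.SamplingRandomMapIterator(it, prepare, seed=1)    # the _prepare-style map
it = iterators.FixedBatchIterator(it, batch_size=4)              # group into batches
it = iterators.MapIterator(it, collate)                          # _batchify's collate step
it = iterators.PrefetchIterator(it, buffer_size=8)               # overlap with training
print(next(it))
```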
audio_tokens, audio_masks, gpt_loss_mask = self._audio_cut(_random, doc) - # return full_tokens, gpt_input_mask, audio_tokens, audio_masks, gpt_loss_mask - - # def _audio_cut(self, _random, doc): - # """ - # # start with audio - # # full token: <s> <audio> <audio> * N </audio> Text Text text <audio> <audio> * N </audio> Text text </s> - # # gpt_input_mask: 0 0 1 * N 0 0 ... 0 1 * N 0 0 0 0 - # # gpt_loss_mask: 0 0 0 * 1 1 1 ... 0 0 * 1 1 1 1 1 - # """ - - # def audio_token_segment(audio_len): - # #TODO: now we use image tag to represent audio, need to change to audio tag - # boi_id = self.dictionary.index("<image>") - # eoi_id = self.dictionary.index("</image>") - # return [boi_id] + [boi_id] * audio_len + [eoi_id] - - # # import pudb; pu.db - # full_tokens, gpt_input_mask, audio_tokens, audio_masks, gpt_loss_mask = [], [], [], [], [] - - # full_tokens.append(self.dictionary.bos()) - # gpt_input_mask.append(0) - # gpt_loss_mask.append(0) - - # shuffled_ids = [i for i in range(1, len(doc)) if int(doc[i]['size']) < self.max_single_audio_length] - # _random.shuffle(shuffled_ids) - # audio_part = shuffled_ids[:self.audio_segment_size - 1] - - # for i, segment in enumerate(doc): - # wav_path = os.path.join(self.audio_root_path, segment['file']) - # if i == 0 or i in audio_part: - # # fbank = get_fbank(wav_path, output_sample_rate=16000, n_bins=40) - # # NOTE: it will be blocked in get_fbank (kaldi feature extraction) - # # waveform, sample_rate = get_waveform(wav_path, normalization=False, output_sample_rate=self.output_sample_rate) - # # TODO: use tensor as input, if the data loader blocked, we can use other format to speed up - - # fbank = np.load(wav_path) - # fbank_len = fbank.shape[0] - # audio_hidden_size = fbank_len // self.audio_downsample_rate - # if self.audio_tokens_size != 0: - # audio_tokens_size = self.audio_tokens_size - # else: - # audio_tokens_size = audio_hidden_size - # full_tokens += audio_token_segment(audio_tokens_size) - # gpt_input_mask += [0] + [1] * audio_tokens_size + [0] - # gpt_loss_mask += [0] + [0] * audio_tokens_size + [1] - # audio_token = np.zeros((self.max_audio_token_length * self.audio_downsample_rate, self.n_bins)) - # audio_token[:fbank_len] = fbank - # audio_tokens.append(audio_token) - # audio_mask = np.zeros(self.max_audio_token_length) - # audio_mask[audio_hidden_size:] = 1 - # audio_masks.append(audio_mask) - # else: - # full_tokens += segment['text'] - # gpt_input_mask += [0] * len(segment['text']) - # gpt_loss_mask += [1] * len(segment['text']) - - # return full_tokens, gpt_input_mask, audio_tokens, audio_masks, gpt_loss_mask - - # @staticmethod - # def fs_encode_line(fs_dict, words, append_eos=True): - # ids = [] - # for i, word in enumerate(words): - # idx = fs_dict.index(word) - # ids.append(idx) - # if append_eos: - # ids.append(fs_dict.eos_index) - # return ids - - # @staticmethod - # def _doc_jsonstr_to_ids(doc_jsonstr, spm_tokenizer=None, fs_dict=None): - # """ - # """ - # doc = json.loads(doc_jsonstr) - # for item in doc: - # line = item['text'] - # if isinstance(spm_tokenizer, GPT2BPE): - # tokens = spm_tokenizer.encode(line).split(' ') - # else: - # tokens = spm_tokenizer.encode(line, out_type=str) - # tokenized_ids = SpeechLoader.fs_encode_line(fs_dict, tokens, append_eos=False) - # item['text'] = tokenized_ids - # return doc - - # def _read_from_files(self, source_file): - # def judge_doc(doc): - # if len(doc) > self.audio_segment_size and int(doc[0]['size']) < self.max_single_audio_length: - # for item in doc[1:]: - # if 
int(item['size']) < self.max_single_audio_length: - # return True - # return False - # data = [] - # file_path = os.path.join(self.data_dir, source_file) - - # if not os.path.exists(file_path): - # print('| file {} not exists'.format(file_path), flush=True) - # return iter([]) # skip bad file - - # try: - # with open(file_path, 'r', encoding='utf8') as f: - # lines = f.read().strip().split('\n') - # except: - # return iter([]) # skip bad file - - # for doc_jsonstr in lines: - # ret = SpeechLoader._doc_jsonstr_to_ids(doc_jsonstr, spm_tokenizer=self.tokenizer, fs_dict=self.dictionary) - # if len(ret) <= self.audio_segment_size: - # continue - - # # break the doc based self.max_text_length = 448 # 2028 - 800 - 800 - # doc = [] - # doc_text_length = 0 - # for i in range(0, len(ret)): - # if i == 0: - # doc.append(ret[i]) - # doc_text_length += len(ret[i]['text']) - # else: - # if doc_text_length + len(ret[i]['text']) > self.max_text_length: - # if judge_doc(doc): - # data.append(doc) - # doc = [ret[i]] - # doc_text_length = len(ret[i]['text']) - # else: - # doc.append(ret[i]) - # doc_text_length += len(ret[i]['text']) - # if judge_doc(doc): - # data.append(doc) - # return data - diff --git a/kosmos-g/unilm/models/__init__.py b/kosmos-g/unilm/models/__init__.py deleted file mode 100644 index 4b10d374b..000000000 --- a/kosmos-g/unilm/models/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -import importlib -import os -from fairseq.models import import_models - -models_dir = os.path.dirname(__file__) -import_models(models_dir, "unilm.models") \ No newline at end of file diff --git a/kosmos-g/unilm/models/aligner.py b/kosmos-g/unilm/models/aligner.py deleted file mode 100644 index 64fa322c2..000000000 --- a/kosmos-g/unilm/models/aligner.py +++ /dev/null @@ -1,100 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn -from torchscale.architecture.config import EncoderDecoderConfig -from torchscale.architecture.decoder import Decoder -from torchscale.architecture.encoder import Encoder -from torchscale.component.embedding import PositionalEmbedding -from transformers import CLIPTextModel - - -class Aligner(nn.Module): - def __init__(self, args): - super().__init__() - self.clip_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder", - torch_dtype=torch.float16, revision="fp16") - cfg = EncoderDecoderConfig( - checkpoint_activations=args.checkpoint_activations, - flash_attention=args.flash_attention, - ) - self.encoder_proj = Encoder( - cfg, - embed_tokens=nn.Linear(args.decoder_embed_dim, 768), - embed_positions=PositionalEmbedding(32768, 768), - is_encoder_decoder=True, - ) - self.encoder_query = nn.Parameter(torch.randn(77, 768)) - self.encoder = Decoder( - cfg, - is_encoder_decoder=True, - causal_mask=False - ) - self.decoder_query = nn.Parameter(torch.randn(32768, 768)) - self.decoder = Decoder( - cfg, - is_encoder_decoder=True, - causal_mask=False - ) - self.decoder_proj = Encoder( - cfg, - embed_positions=PositionalEmbedding(32768, 768), - output_projection=nn.Linear(768, args.decoder_embed_dim), - ) - - def forward(self, condition, padding_mask, clip_tokens): - gpt_embed = self.encoder_proj( - src_tokens=condition, - encoder_padding_mask=padding_mask - ) - gpt_embed = self.encoder( - prev_output_tokens=None, - token_embeddings=self.encoder_query.unsqueeze(0).expand(gpt_embed["encoder_out"].shape[1], -1, -1), - encoder_out=gpt_embed - )[0] - with torch.no_grad(): - clip_embed = self.clip_encoder(clip_tokens)[0] - mse_loss = 
F.mse_loss(gpt_embed.float(), clip_embed.float(), reduction='mean')
- gpt_embed = self.decoder(
- prev_output_tokens=None,
- token_embeddings=self.decoder_query[:condition.shape[1]].unsqueeze(0).expand(gpt_embed.shape[0], -1, -1),
- encoder_out={"encoder_out": gpt_embed.transpose(0, 1)}
- )[0]
- gpt_embed = self.decoder_proj(
- src_tokens=None,
- token_embeddings=gpt_embed
- )["encoder_out"].transpose(0, 1)
- rec_loss = F.mse_loss(gpt_embed.float(), condition.float(), reduction='mean') * (77 / condition.shape[1])
- return {'clip_loss': {'mse_loss': mse_loss, 'rec_loss': rec_loss}}
-
-
-class Aligner_encoder(nn.Module):
- def __init__(self, args):
- super().__init__()
- cfg = EncoderDecoderConfig(
- checkpoint_activations=args.checkpoint_activations,
- flash_attention=args.flash_attention,
- )
- self.encoder_proj = Encoder(
- cfg,
- embed_tokens=nn.Linear(args.decoder_embed_dim, 768),
- embed_positions=PositionalEmbedding(32768, 768),
- is_encoder_decoder=True,
- )
- self.encoder_query = nn.Parameter(torch.randn(77, 768))
- self.encoder = Decoder(
- cfg,
- is_encoder_decoder=True,
- causal_mask=False
- )
-
- def forward(self, condition, padding_mask):
- condition = self.encoder_proj(
- src_tokens=condition,
- encoder_padding_mask=padding_mask,
- )
- condition = self.encoder(
- prev_output_tokens=None,
- token_embeddings=self.encoder_query.unsqueeze(0).expand(condition["encoder_out"].shape[1], -1, -1),
- encoder_out=condition,
- )[0]
- return condition
diff --git a/kosmos-g/unilm/models/connector.py b/kosmos-g/unilm/models/connector.py
deleted file mode 100644
index e54051bde..000000000
--- a/kosmos-g/unilm/models/connector.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import torch
-import torch.nn as nn
-from fairseq.modules import MultiheadAttention
-from fairseq import utils
-
-
-def build_connector(args, input_dim, output_dim):
- connector_name = args.text_connector if hasattr(args, "text_connector") else args.connector
- if connector_name == "none":
- connector = None
- elif connector_name == "complex":
- connector = ComplexConnector(input_dim,
- output_dim,
- args.activation_fn)
- elif connector_name == "simple":
- connector = SimpleConnector(input_dim,
- output_dim)
- elif connector_name == "xconnector":
- connector = XConnector(input_dim, output_dim, args)
- else:
- raise ValueError("Invalid text connector type: {}".format(connector_name))
- return connector
-
-
-class SimpleConnector(nn.Module):
- """Connector model of GPT and MLM."""
-
- def __init__(self, input_dim, output_dim):
- super().__init__()
- self.dense = nn.Linear(input_dim, output_dim)
-
- def forward(self, features, **kwargs):
- x = self.dense(features)
- return x
-
-
-class ComplexConnector(nn.Module):
- """Connector model of GPT and MLM."""
-
- def __init__(self, input_dim, output_dim, activation_fn):
- super().__init__()
- self.dense = nn.Linear(input_dim, input_dim)
- self.activation_fn = utils.get_activation_fn(activation_fn)
- self.predict = nn.Linear(input_dim, output_dim)
-
- def forward(self, features, **kwargs):
- x = self.dense(features)
- x = self.activation_fn(x)
- x = self.predict(x)
- return x
-
-
-class XConnector(nn.Module):
- """Connector model of GPT and MLM."""
-
- def __init__(self, input_dim, output_dim, args):
- super().__init__()
- self.dense = nn.Linear(input_dim, output_dim)
- self.latent_query = torch.nn.Parameter(torch.randn(args.latent_query_num, output_dim))
-
- self.x_attn = MultiheadAttention(
- output_dim,
- args.decoder_attention_heads,
- kdim=output_dim,
- vdim=output_dim,
dropout=args.attention_dropout, - encoder_decoder_attention=True, - ) - - def forward(self, features, **kwargs): - - x = self.dense(features) - # x = attention_i(q=latent_query, kv=concat([x, latent_query])) - # shape of x is [batch_size * seq_len, output_dim] -> [seq_len, batch_size, output_dim] - x = x.view(-1, kwargs['src_len'], x.size(-1)).transpose(0, 1) - bsz = x.size(1) - latent_query = self.latent_query.unsqueeze(1).expand(-1, bsz, -1) - x, _ = self.x_attn(latent_query, torch.cat([x, latent_query]), torch.cat([x, latent_query])) - return x.transpose(0, 1).contiguous().view(-1, x.size(-1)) # [batch_size * seq_len, output_dim] - diff --git a/kosmos-g/unilm/models/diffusion.py b/kosmos-g/unilm/models/diffusion.py deleted file mode 100644 index eeb69c858..000000000 --- a/kosmos-g/unilm/models/diffusion.py +++ /dev/null @@ -1,285 +0,0 @@ -from fairseq.dataclass import FairseqDataclass -from fairseq.models import ( - BaseFairseqModel, -) -from fairseq.models import ( - register_model, - register_model_architecture, -) -from fairseq.utils import safe_getattr - -DEFAULT_MAX_TARGET_POSITIONS = 1024 - -import torch -import torch.nn.functional as F -import torch.utils.checkpoint - -from diffusers import AutoencoderKL, DDPMScheduler, DPMSolverMultistepScheduler, UNet2DConditionModel -from diffusers.utils.torch_utils import randn_tensor -import inspect -from PIL import Image -from diffusers.image_processor import VaeImageProcessor -from diffusers.loaders import LoraLoaderMixin -from diffusers.configuration_utils import FrozenDict - - -@register_model("diffusionmodel", dataclass=FairseqDataclass) -class Diffusionmodel(BaseFairseqModel, LoraLoaderMixin): - def __init__(self, noise_scheduler, unet, scheduler, vae_scale_factor): - super().__init__() - self.noise_scheduler = noise_scheduler - self.unet = unet - self.scheduler = scheduler - self.vae_scale_factor = vae_scale_factor - - self.config = FrozenDict([ - ('unet', ('diffusers', 'UNet2DConditionModel')), - ('scheduler', ('diffusers', 'DPMSolverMultistepScheduler')), - ]) - - @property - def components(self): - components = {k: getattr(self, k) for k in self.config.keys()} - return components - - @classmethod - def build_model(cls, args, task, vae_scale_factor): - noise_scheduler = DDPMScheduler.from_pretrained( - args.pretrained_model_name_or_path, subfolder="scheduler", torch_dtype=torch.float16, revision="fp16" - ) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", torch_dtype=torch.float16, revision="fp16" - ) - if args.checkpoint_activations: - unet.enable_gradient_checkpointing() - if args.flash_attention: - unet.enable_xformers_memory_efficient_attention() - - scheduler = DPMSolverMultistepScheduler.from_pretrained( - args.pretrained_model_name_or_path, subfolder="scheduler", torch_dtype=torch.float16, revision="fp16" - ) - model = cls(noise_scheduler=noise_scheduler, unet=unet, scheduler=scheduler, vae_scale_factor=vae_scale_factor) - return model - - def forward(self, latents, condition, null_token): - noise = torch.randn_like(latents) - bsz = latents.shape[0] - timesteps = torch.randint(0, self.noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - noisy_latents = self.noise_scheduler.add_noise(latents, noise, timesteps) - # Get the target for loss depending on the prediction type - if self.noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif self.noise_scheduler.config.prediction_type == "v_prediction": - 
target = self.noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {self.noise_scheduler.config.prediction_type}") - # Predict the noise residual and compute loss - random_p = torch.rand(bsz, device=latents.device) - text_classifier_free_idx = random_p < 0.1 - condition[text_classifier_free_idx] = null_token[0] - - model_pred = self.unet(noisy_latents, timesteps, condition).sample - loss = F.mse_loss(model_pred.float(), target.float(), reduction="none").mean([1, 2, 3]).sum() - return {'diff_loss': loss} - - def sample( - self, - condition, - height=512, - width=512, - num_inference_steps=50, - text_guidance_scale=7.5, - eta=0.0, - lora_scale=0.0, - **kwargs, - ): - batch_size = condition.shape[0] - device = condition.device - - do_classifier_free_guidance = text_guidance_scale > 1.0 - if do_classifier_free_guidance: - batch_size //= 2 - - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - shape = (batch_size, self.unet.config.in_channels, height // self.vae_scale_factor, - width // self.vae_scale_factor) - latents = randn_tensor(shape, device=device, dtype=condition.dtype) - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - for i, t in enumerate(timesteps): - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=condition, - cross_attention_kwargs={"scale": lora_scale}, - ).sample - - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + text_guidance_scale * (noise_pred_text - noise_pred_uncond) - - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - return latents - - def sample_controlnet( - self, - condition, - control_image, - controlnet, - num_inference_steps=50, - text_guidance_scale=7.5, - num_images_per_prompt=1, - eta=0.0, - controlnet_conditioning_scale=1.0, - control_guidance_start=0.0, - control_guidance_end=1.0, - **kwargs, - ): - control_guidance_start, control_guidance_end = [control_guidance_start], [control_guidance_end] - - if not hasattr(self, "control_image_processor"): - self.control_image_processor = VaeImageProcessor( - vae_scale_factor=self.vae_scale_factor, do_convert_rgb=True, do_normalize=False - ) - batch_size = condition.shape[0] - device = condition.device - - do_classifier_free_guidance = text_guidance_scale > 1.0 - if do_classifier_free_guidance: - batch_size //= 2 - - image = self.control_image_processor.preprocess(control_image).to(dtype=torch.float32) - image = image.repeat_interleave(num_images_per_prompt, dim=0) - - image = image.to(device=device, dtype=torch.float16) - image = torch.cat([image] * 2) - height, width = image.shape[-2:] - - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - shape = (batch_size, self.unet.config.in_channels, height // self.vae_scale_factor, - width // self.vae_scale_factor) - latents = randn_tensor(shape, device=device, dtype=condition.dtype) - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - controlnet_keep = [] - for i in 
range(len(timesteps)): - keeps = [ - 1.0 - float(i / len(timesteps) < s or (i + 1) / len(timesteps) > e) - for s, e in zip(control_guidance_start, control_guidance_end) - ] - controlnet_keep.append(keeps[0]) - - for i, t in enumerate(timesteps): - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - control_model_input = latent_model_input - controlnet_condition = condition - - cond_scale = controlnet_conditioning_scale * controlnet_keep[i] - - down_block_res_samples, mid_block_res_sample = controlnet( - control_model_input, - t, - encoder_hidden_states=controlnet_condition, - controlnet_cond=image, - conditioning_scale=cond_scale, - return_dict=False, - ) - - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=condition, - down_block_additional_residuals=down_block_res_samples, - mid_block_additional_residual=mid_block_res_sample - ).sample - - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + text_guidance_scale * (noise_pred_text - noise_pred_uncond) - - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - return latents - - -@register_model("vae", dataclass=FairseqDataclass) -class VAE(BaseFairseqModel): - def __init__(self, vae): - super().__init__() - self.vae = vae - self.scaling_factor = 0.18215 - - @classmethod - def build_model(cls, args, task): - vae = AutoencoderKL.from_pretrained( - args.pretrained_model_name_or_path, subfolder="vae", torch_dtype=torch.float16, revision="fp16" - ) - vae.requires_grad_(False) - model = cls(vae=vae) - return model - - def encode(self, x): - self.vae.eval() - return self.vae.encode(x).latent_dist.sample() * self.scaling_factor - - def encode_mode(self, x): - self.vae.eval() - return self.vae.encode(x).latent_dist.mode() - - @staticmethod - def numpy_to_pil(images): - """ - Convert a numpy image or a batch of images to a PIL image. - """ - if images.ndim == 3: - images = images[None, ...] 
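Both `sample` and `sample_controlnet` above implement standard classifier-free guidance: the latents are duplicated, the UNet runs on an `[uncond; cond]` condition batch, and the two noise predictions are recombined before each scheduler step. The combination step in isolation (a minimal sketch; `guidance_scale` plays the role of `text_guidance_scale`):

```python
import torch

def cfg_combine(noise_pred: torch.Tensor, guidance_scale: float) -> torch.Tensor:
    """noise_pred: (2B, ...), unconditional half first, as in sample()."""
    uncond, cond = noise_pred.chunk(2)
    # Push the prediction away from the unconditional direction.
    return uncond + guidance_scale * (cond - uncond)

x = torch.randn(4, 4, 64, 64)                              # 2B = 4
guided = cfg_combine(x, 7.5)
assert torch.allclose(cfg_combine(x, 1.0), x.chunk(2)[1])  # scale 1 => purely conditional
```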
- images = (images * 255).round().astype("uint8") - if images.shape[-1] == 1: - # special case for grayscale (single channel) images - pil_images = [Image.fromarray(image.squeeze(), mode="L") for image in images] - else: - pil_images = [Image.fromarray(image) for image in images] - - return pil_images - - def decode(self, latents, output_type="pil"): - self.vae.eval() - latents = 1 / self.scaling_factor * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - if output_type == "pil": - image = self.numpy_to_pil(image) - return image - - -@register_model_architecture("diffusionmodel", "diffusionmodel_base") -def diffusionmodel_base(args): - args.pretrained_model_name_or_path = safe_getattr(args, "pretrained_model_name_or_path", - "runwayml/stable-diffusion-v1-5") - - -@register_model_architecture("vae", "vae_base") -def vae_base(args): - args.pretrained_model_name_or_path = safe_getattr(args, "pretrained_model_name_or_path", - "runwayml/stable-diffusion-v1-5") diff --git a/kosmos-g/unilm/models/gpt.py b/kosmos-g/unilm/models/gpt.py deleted file mode 100644 index e18bd38ae..000000000 --- a/kosmos-g/unilm/models/gpt.py +++ /dev/null @@ -1,378 +0,0 @@ -from dataclasses import dataclass, field -from typing import Any, Dict, List, Optional -from fairseq.dataclass import ChoiceEnum, FairseqDataclass -from torch import Tensor - -import logging - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from fairseq import distributed_utils, utils -from fairseq import checkpoint_utils -from fairseq import utils -from fairseq.utils import safe_getattr, safe_hasattr - -from fairseq.models import ( - BaseFairseqModel, - register_model, - register_model_architecture, -) -from fairseq.models.transformer_lm import ( - TransformerLanguageModelConfig, - TransformerLanguageModel, - base_gpt3_architecture, -) -from fairseq.models.transformer.transformer_decoder import TransformerDecoder -from fairseq.models import ( - FairseqIncrementalDecoder, - FairseqLanguageModel, - register_model, - register_model_architecture, -) -from fairseq.models.transformer import DEFAULT_MIN_PARAMS_TO_WRAP, Embedding -from fairseq.modules import PositionalEmbedding -from omegaconf import II - -from torchscale.architecture.config import DecoderConfig -from torchscale.architecture.decoder import Decoder - -from torchscale.architecture.config import DecoderConfig -from torchscale.architecture.decoder import Decoder -from torchscale.component.embedding import TextEmbedding - -DEFAULT_MAX_TARGET_POSITIONS = 1024 - - -@dataclass -class GPTModelConfig(TransformerLanguageModelConfig): - scale_final_logits: bool = field( - default=False, - metadata={ - "help": "scale final logits by sqrt(d)" - }, - ) - - gpt_model_path: str = field( - default="", - metadata={"help": "gpt checkpoint path"}, - ) - rescale_init: bool = field( - default=False, - metadata={ - "help": "whether to use rescale initialization" - }, - ) - deepnet: bool = field( - default=False, - metadata={ - "help": "enable deepnet in decoder" - }, - ) - last_ln_scale: bool = field( - default=False, - metadata={ - "help": "enable last_ln_scale in decoder" - }, - ) - - # options from other parts of the config - add_bos_token: bool = II("task.add_bos_token") - tokens_per_sample: int = II("task.tokens_per_sample") - max_target_positions: Optional[int] = 
II("task.max_target_positions")
- tpu: bool = II("common.tpu")
- memory_efficient_fp16: bool = II("common.memory_efficient_fp16")
- fp16: bool = II("common.fp16")
- fp16_no_flatten_grads: bool = II("common.fp16_no_flatten_grads")
- ddp_backend: str = II("distributed_training.ddp_backend")
- world_size: int = II("distributed_training.distributed_world_size")
- distributed_rank: int = II("distributed_training.distributed_rank")
- ddp_rank: int = II("distributed_training.distributed_rank")
-
- deepnorm: Optional[bool] = field(
- default=False,
- )
- subln: Optional[bool] = field(
- default=False,
- )
- rel_pos_buckets: Optional[int] = field(
- default=0,
- )
- max_rel_pos: Optional[int] = field(
- default=0,
- )
- flash_attention: bool = field(
- default=False,
- )
- sope_rel_pos: Optional[bool] = field(
- default=False,
- metadata={"help": "use SoPE as the relative position embedding"},
- )
- scale_length: Optional[int] = field(
- default=2048,
- )
- max_chunk_emb: Optional[int] = field(
- default=0,
- metadata={"help": "chunk embedding, text image text image text: 0, 1, 1, 2, 2"},
- )
- segment_emb: Optional[bool] = field(
- default=False,
- )
-
-
-@register_model("gptmodel", dataclass=GPTModelConfig)
-class GPTmodel(TransformerLanguageModel):
-
- @classmethod
- def build_model(cls, args, task):
- model = TransformerLanguageModel.build_model(args, task)
-
- if getattr(args, "max_target_positions", None) is None:
- args.max_target_positions = getattr(
- args, "tokens_per_sample", DEFAULT_MAX_TARGET_POSITIONS
- )
-
- embed_tokens = cls.build_embedding(
- args, task.source_dictionary, args.decoder_embed_dim
- )
-
- embed_positions = (
- PositionalEmbedding(
- args.max_target_positions,
- args.decoder_embed_dim,
- task.dictionary.pad(),
- learned=args.decoder_learned_pos,
- )
- if not args.no_token_positional_embeddings
- else None
- )
-
- if args.share_decoder_input_output_embed:
- output_projection = torch.nn.Linear(
- embed_tokens.weight.shape[1],
- embed_tokens.weight.shape[0],
- bias=False,
- )
- output_projection.weight = embed_tokens.weight
- else:
- output_projection = torch.nn.Linear(
- args.decoder_embed_dim, len(task.dictionary), bias=False
- )
- torch.nn.init.normal_(
- output_projection.weight, mean=0, std=args.decoder_embed_dim ** -0.5
- )
-
- if getattr(args, "moe_freq", 0) > 0 and (
- getattr(args, "fp16", False)
- and not getattr(args, "memory_efficient_fp16", False)
- and getattr(args, "ddp_backend", None) != "fully_sharded"
- ):
- assert (
- args.fp16_no_flatten_grads
- ), "If training moe models, set --fp16-no-flatten-grads to calculate correct gradnorm"
-
- args.ddp_rank = distributed_utils.get_data_parallel_rank()
-
- config = DecoderConfig()
- config.override(args)
-
- decoder = LMDecoder(
- config,
- embed_tokens,
- embed_positions,
- output_projection,
- is_encoder_decoder=False,
- dictionary=task.dictionary,
- )
- decoder.chunk_emb = None
- if args.max_chunk_emb > 0:
- decoder.chunk_emb = TextEmbedding(args.max_chunk_emb, args.decoder_embed_dim)
- decoder.segment_emb = None
- if args.segment_emb:
- decoder.segment_emb = TextEmbedding(2, args.decoder_embed_dim)
- model.decoder = decoder
- if args.gpt_model_path != "":
- raise NotImplementedError
- # state = checkpoint_utils.load_checkpoint_to_cpu(args.gpt_model_path)
- # model.load_state_dict(state["model"], strict=True, args=args)
- return model
-
- @classmethod
- def build_embedding(cls, args, dictionary, embed_dim, path=None):
- return Embedding(len(dictionary), embed_dim, dictionary.pad())
-
-
-class
LMDecoder(Decoder, FairseqIncrementalDecoder): - def forward(self, src_tokens, **kwargs): - self_attn_padding_mask = src_tokens.eq(self.dictionary.pad()) - return super().forward(src_tokens, self_attn_padding_mask, **kwargs) - - def max_positions(self): - return self.embed_positions.max_positions - - def reorder_incremental_state_scripting( - self, - incremental_state, - new_order, - ): - for module in incremental_state: - for key in incremental_state[module]: - result = incremental_state[module][key].index_select(0, new_order) - incremental_state[module][key] = result - - def forward_embedding( - self, - tokens, - token_embedding=None, - incremental_state=None, - mlm_features: Optional[Tensor] = None, - gpt_input_mask: Optional[Tensor] = None, - img_features: Optional[Tensor] = None, - img_gpt_input_mask: Optional[Tensor] = None, - aud_features: Optional[Tensor] = None, - aud_gpt_input_mask: Optional[Tensor] = None, - chunk_tokens: Optional[Tensor] = None, - segment_tokens: Optional[Tensor] = None, - ): - positions = None - if self.embed_positions is not None: - positions = self.embed_positions( - tokens, incremental_state=incremental_state - ) - if self.chunk_emb is not None: - chunk_emb = self.chunk_emb(chunk_tokens) - positions += chunk_emb - if self.segment_emb is not None: - segment_emb = self.segment_emb(segment_tokens) - positions += segment_emb - - if incremental_state is not None: - tokens = tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - if token_embedding is None: - token_embedding = self.embed_tokens(tokens) - - gpt_embed_output = token_embedding - if mlm_features is not None: - gpt_embed_output[gpt_input_mask] = mlm_features - if img_features is not None: - gpt_embed_output[img_gpt_input_mask] = img_features - if aud_features is not None: - gpt_embed_output[aud_gpt_input_mask] = aud_features - - x = embed = self.embed_scale * gpt_embed_output - - if positions is not None: - x += positions - - if self.layernorm_embedding is not None: - x = self.layernorm_embedding(x) - - x = self.dropout_module(x) - - return x, embed - - def forward( - self, - prev_output_tokens, - self_attn_padding_mask=None, - encoder_out=None, - incremental_state=None, - features_only=False, - return_all_hiddens=False, - token_embeddings=None, - scale=1.0, - **kwargs - ): - # embed tokens and positions - x, _ = self.forward_embedding( - prev_output_tokens, token_embeddings, incremental_state, **kwargs - ) - x = x.transpose(0, 1) - - # relative position - self_attn_rel_pos_bias = None - slen = prev_output_tokens.size(1) - if self.self_attn_relative_position is not None: - self_attn_rel_pos_bias = self.self_attn_relative_position( - batch_size=x.size(1), qlen=slen, klen=slen - ) - if incremental_state is not None: - self_attn_rel_pos_bias = self_attn_rel_pos_bias[:, -1:, :] - cross_attn_rel_pos_bias = None - if self.cross_attn_relative_position is not None: - cross_attn_rel_pos_bias = self.cross_attn_relative_position( - batch_size=x.size(1), - qlen=slen, - klen=encoder_out["encoder_out"].size(0), - ) - if incremental_state is not None: - cross_attn_rel_pos_bias = cross_attn_rel_pos_bias[:, -1:, :] - - # decoder layers - if encoder_out is None: - l_aux = [] - else: - l_aux = encoder_out["l_aux"] if "l_aux" in encoder_out else [] - - for idx, layer in enumerate(self.layers): - if incremental_state is None: - self_attn_mask = torch.triu( - torch.zeros([x.size(0), x.size(0)]) - .float() - .fill_(float("-inf")) - .type_as(x), - 1, - ) - else: - self_attn_mask = None - if idx not in 
incremental_state: - incremental_state[idx] = {} - - x, layer_attn, _, l_aux_i = layer( - x, - encoder_out["encoder_out"] if encoder_out is not None else None, - encoder_out["encoder_padding_mask"] if encoder_out is not None else None, - incremental_state[idx] if incremental_state is not None else None, - self_attn_mask=self_attn_mask, - self_attn_padding_mask=self_attn_padding_mask, - self_attn_rel_pos=self_attn_rel_pos_bias, - cross_attn_rel_pos=cross_attn_rel_pos_bias, - scale=scale, - ) - l_aux.append(l_aux_i) - - x = self.layer_norm(x) if self.layer_norm is not None else x - - if not features_only: - logits = self.output_layer(x.transpose(0, 1)) - else: - logits = None - - return logits, x, { - "l_aux": l_aux, - "attn": [layer_attn.mean(dim=0) if layer_attn is not None else None], - } - - -@register_model_architecture("gptmodel", "gptmodel_small") -def gptmodel_small(args): - # 125M params - args.decoder_layers = safe_getattr(args, "decoder_layers", 12) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 768) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 12) - args.decoder_learned_pos = safe_getattr(args, "decoder_learned_pos", False) - base_gpt3_architecture(args) - - -@register_model_architecture("gptmodel", "gptmodel_medium") -def gptmodel_medium(args): - # 350M params - args.decoder_layers = safe_getattr(args, "decoder_layers", 24) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 1024) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 16) - args.decoder_learned_pos = safe_getattr(args, "decoder_learned_pos", False) - base_gpt3_architecture(args) diff --git a/kosmos-g/unilm/models/kosmosg.py b/kosmos-g/unilm/models/kosmosg.py deleted file mode 100644 index 896ef0229..000000000 --- a/kosmos-g/unilm/models/kosmosg.py +++ /dev/null @@ -1,622 +0,0 @@ -import copy -import logging -from dataclasses import dataclass, field - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from fairseq import checkpoint_utils -from fairseq import utils -from fairseq.models import ( - BaseFairseqModel, - register_model, - register_model_architecture, -) -from fairseq.models.roberta import ( - roberta_large_architecture, - RobertaModel, -) -from fairseq.models.transformer_lm import ( - base_gpt3_architecture, -) -from fairseq.utils import safe_getattr -from torchscale.architecture.config import EncoderConfig -from torchscale.model.BEiT3 import BEiT3 -from unilm.models.aligner import Aligner, Aligner_encoder -from unilm.models.connector import build_connector -from unilm.models.diffusion import Diffusionmodel, VAE -from unilm.models.gpt import GPTmodel, GPTModelConfig - -logger = logging.getLogger(__name__) - - -def slice_tokens_for_mlm(A, indx, num_elem=2): - all_indx = indx[:, None] + torch.arange(num_elem) - return A[torch.arange(all_indx.shape[0])[:, None], all_indx] - - -@dataclass -class KosmosGModelConfig(GPTModelConfig): - text_encoder: str = field( - default="none", - metadata={ - "help": "enable text encoder, options: none, roberta, electra" - }, - ) - image_encoder: str = field( - default="clip", - metadata={ - "help": "enable image encoder, options: none, clip, beit" - }, - ) - audio_encoder: str = field( - default="none", - metadata={ - "help": "enable audio encoder, options: none, " - }, - ) - - # parameters for MLM - connector: str = field( - default='xconnector', - metadata={ - "help": "connector: none, complex, simple, xconnector" - }, - ) - latent_query_num: int = field( - 
default=64,
-        metadata={
-            "help": "number of latent query tokens"
-        },
-    )
-    remain_tokens: int = field(
-        default=300,
-        metadata={
-            "help": "at least k tokens to produce gpt loss"
-        },
-    )
-    mlm_model_path: str = field(
-        default="",
-        metadata={"help": "mlm checkpoint path"},
-    )
-    mlm_dict: str = field(
-        default="",
-        metadata={"help": "mlm dict path"},
-    )
-    mlm_tokens_per_sample: int = field(
-        default=512,
-        metadata={"help": "mlm max length"},
-    )
-
-    freeze_gpt: bool = field(
-        default=False,
-        metadata={
-            "help": "freeze gpt parameters"
-        },
-    )
-
-    # parameters for visual
-    visual_model_name: str = field(
-        default="ViT-L-14",
-        metadata={"help": "model_name for open_clip"}, )
-    visual_pretrained: str = field(
-        default="",
-        metadata={"help": "path to pretrained weights for the visual model"}, )
-    visual_output_dim: int = field(
-        default=1024,
-        metadata={"help": "output dimension for visual_pretrained"}, )
-    no_freeze_layer: str = field(
-        default='resblocks.23,ln_post',
-        metadata={
-            "help": "comma-separated layers of the visual model to keep trainable"
-        }, )
-
-    # parameters for speech
-    speech_model_path: str = field(
-        default="",
-        metadata={"help": "speech checkpoint path"},
-    )
-    audio_output_dim: int = field(
-        default=768,
-        metadata={"help": "output dimension for audio_pretrained"}, )
-
-    # parameters for fine-tuning
-    ft_type: int = field(
-        default=3,
-        metadata={
-            "help": "fine-tuning type: \
-            1: gpt only \
-            2: roberta only \
-            3: roberta + gpt \
-            4: roberta + gpt(freeze) \
-            5: roberta(freeze) + gpt "
-        },
-    )
-    pooler_dropout: float = field(
-        default=0.1,
-        metadata={"help": "dropout probability for the classification head pooler"},
-    )
-
-    pretrained_ckpt_path: str = field(
-        default="",
-        metadata={"help": "model checkpoint path"},
-    )
-
-    pretrained_model_name_or_path: str = field(
-        default="runwayml/stable-diffusion-v1-5",
-        metadata={"help": "model name or path"},
-    )
-
-    align: bool = field(
-        default=False,
-        metadata={"help": "use clip supervision"},
-    )
-
-    lora_dir: str = field(
-        default="",
-        metadata={"help": "lora dir"},
-    )
-
-    lora_name: str = field(
-        default="None",
-        metadata={"help": "lora name"},
-    )
-
-
-@register_model("kosmosg", dataclass=KosmosGModelConfig)
-class KosmosGmodel(BaseFairseqModel):
-    def __init__(self, args, gpt_model, diffusion_unet, vae, aligner=None,
-                 text_model=None, img_model=None, aud_model=None,
-                 text_connector=None, img_connector=None, aud_connector=None,
-                 bos=0, eos=2):
-        """
-        text_model: bidirectional text model, such as roberta, bert, electra
-        img_model: image model, such as ViT, CLIP, BEIT
-        """
-        super().__init__()
-        self.args = args
-        self.gpt_model = gpt_model
-        self.diffusion_unet = diffusion_unet
-        self.vae = vae
-        self.aligner = aligner
-
-        self.text_model = text_model
-        self.text_connector = text_connector
-        self.img_model = img_model
-        self.img_connector = img_connector
-        self.aud_model = aud_model
-        self.aud_connector = aud_connector
-
-        self.bos = bos
-        self.eos = eos
-        self.classification_heads = nn.ModuleDict()
-        self.ft_type = args.ft_type
-
-    @classmethod
-    def build_model(cls, args, task):
-        if hasattr(task, "all_dict"):
-            task.dictionary = task.all_dict
-        if args.align:
-            original_checkpoint_activations = args.checkpoint_activations
-            args.checkpoint_activations = False
-        gpt_model = GPTmodel.build_model(args, task)
-        if args.align:
-            args.checkpoint_activations = original_checkpoint_activations
-        logger.info("gpt args is {}".format(args))
-
-        if args.align:
-            aligner = Aligner(args)
-            vae = None
-            diffusion_unet = None
-
-        else:
-            aligner = Aligner_encoder(args)
-            vae = VAE.build_model(args, task)
-            diffusion_unet = Diffusionmodel.build_model(args, task, 2 ** (len(vae.vae.config.block_out_channels) - 1))
-
-        text_model, text_connector = cls.load_text_model(args, task)
-        img_model, img_connector = cls.load_image_model(args, task)
-
-        if args.checkpoint_activations:
-            img_model.set_grad_checkpointing(True)
-
-        model = cls(args, gpt_model, diffusion_unet, vae, aligner=aligner,
-                    text_model=text_model, text_connector=text_connector,
-                    img_model=img_model, img_connector=img_connector,
-                    bos=task.dictionary.bos_index, eos=task.dictionary.eos_index)
-
-        if args.pretrained_ckpt_path != "":
-            state = checkpoint_utils.load_checkpoint_to_cpu(args.pretrained_ckpt_path)
-            msg = model.load_state_dict(state["model"], strict=False, args=args)
-            logger.info(msg)
-
-        cls.freeze_params(model.parameters())
-        cls.unfreeze_params(model.aligner.parameters())
-
-        if hasattr(model.aligner, "clip_encoder"):
-            cls.freeze_params(model.aligner.clip_encoder.parameters())
-
-        if not args.align:
-            cls.unfreeze_params(model.gpt_model.parameters())
-            cls.unfreeze_params(model.img_connector.parameters())
-            if model.img_model is not None:
-                for p_name, p in model.img_model.named_parameters():
-                    if args.no_freeze_layer:
-                        no_freeze_layers = args.no_freeze_layer.split(',')
-                        for no_freeze_layer in no_freeze_layers:
-                            if no_freeze_layer in p_name:
-                                print("no_freeze_layer: {}".format(p_name))
-                                p.requires_grad = True
-
-        cls.stat_params(model)
-        return model
-
-    @staticmethod
-    def freeze_params(params):
-        for param in params:
-            param.requires_grad = False
-
-    @staticmethod
-    def unfreeze_params(params):
-        for param in params:
-            param.requires_grad = True
-
-    @staticmethod
-    def stat_params(model):
-        from tabulate import tabulate
-        stat = []
-        for n, p in model.named_parameters():
-            stat.append([n, p.shape, p.requires_grad])
-        print(tabulate(stat, headers=["name", "shape", "trainable"]))
-
-    def forward(self, src_tokens,
-                mlm_src_tokens=None, gpt_input_mask=None,
-                gpt_img_src_tokens=None, img_gpt_input_mask=None, img_tgt_tokens=None,
-                aud_src_tokens=None, aud_gpt_input_mask=None,
-                gpt_loss_mask=None, mlm_mask=None,
-                clip_tokens=None, **kwargs):
-
-        if mlm_src_tokens is not None:
-            # mlm
-            mlm_output, _ = self.text_model(mlm_src_tokens, features_only=True)
-            mlm_output = mlm_output[mlm_mask]
-            if self.text_connector is not None:
-                # linear projection layer
-                mlm_output = self.text_connector(mlm_output)
-        else:
-            mlm_output = None
-
-        if gpt_img_src_tokens is not None:
-            img_output = self.get_image_representation(gpt_img_src_tokens)
-        else:
-            img_output = None
-
-        if aud_src_tokens is not None:
-            aud_output = self.get_audio_representation(aud_src_tokens, kwargs['aud_mask'])
-        else:
-            aud_output = None
-
-        # gpt
-        x, condition, extra = self.gpt_model(src_tokens,
-                                             mlm_features=mlm_output, gpt_input_mask=gpt_input_mask,
-                                             img_features=img_output, img_gpt_input_mask=img_gpt_input_mask,
-                                             aud_features=aud_output, aud_gpt_input_mask=aud_gpt_input_mask,
-                                             **kwargs)
-        condition = condition.transpose(0, 1)
-
-        if self.args.align:
-            extra["loss"] = self.aligner(condition, src_tokens.eq(1), clip_tokens)
-
-        else:
-            # diffusion
-            latents = self.vae.encode(img_tgt_tokens)
-            condition = self.aligner(condition, src_tokens.eq(1))
-            null_token = torch.LongTensor([0]).unsqueeze(0).to(condition.device)
-            null_token_embeds = self.gpt_model(null_token)[1].transpose(0, 1)
-            null_token_embeds = self.aligner(null_token_embeds, null_token.eq(1))
-
-            extra["loss"] = 
self.diffusion_unet(latents, condition, null_token_embeds) - - # loss mask - extra["loss_mask"] = gpt_loss_mask - return x, extra - - def encode_condition(self, src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens, - num_images_per_prompt): - model_device = next(self.gpt_model.parameters()).device - if src_tokens.device != model_device: - src_tokens = src_tokens.to(model_device) if src_tokens is not None else None - gpt_img_src_tokens = gpt_img_src_tokens.to(model_device) if gpt_img_src_tokens is not None else None - img_gpt_input_mask = img_gpt_input_mask.to(model_device) if img_gpt_input_mask is not None else None - negative_tokens = negative_tokens.to(model_device) if negative_tokens is not None else None - - if gpt_img_src_tokens is not None: - img_output = self.get_image_representation(gpt_img_src_tokens) - else: - img_output = None - - condition = self.gpt_model(src_tokens, img_features=img_output, img_gpt_input_mask=img_gpt_input_mask)[ - 1].transpose(0, 1) - condition = self.aligner(condition, src_tokens.eq(1)) - - null_token = torch.LongTensor([0]).unsqueeze(0).to(condition.device) - null_token = max(null_token, negative_tokens, key=lambda x: x.shape[1]) - null_token_embeds = self.gpt_model(null_token)[1].transpose(0, 1) - null_token_embeds = self.aligner(null_token_embeds, null_token.eq(1)) - - condition = torch.cat([ - null_token_embeds.repeat(len(condition) * num_images_per_prompt, 1, 1), - condition.repeat(num_images_per_prompt, 1, 1) - ]) - return condition - - def sample(self, src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens, **kwargs): - condition = self.encode_condition(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens, - kwargs['num_images_per_prompt']) - - latents = self.diffusion_unet.sample(condition, **kwargs) - image = self.vae.decode(latents, output_type=kwargs['output_type'] if 'output_type' in kwargs else 'pil') - return image - - def sample_controlnet(self, src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens, control_image, - controlnet, **kwargs): - condition = self.encode_condition(src_tokens, gpt_img_src_tokens, img_gpt_input_mask, negative_tokens, - kwargs['num_images_per_prompt']) - latents = self.diffusion_unet.sample_controlnet(condition, control_image, controlnet, **kwargs) - image = self.vae.decode(latents, output_type=kwargs['output_type'] if 'output_type' in kwargs else 'pil') - return image - - def get_image_representation(self, gpt_img_src_tokens): - # image - if self.args.image_encoder == "clip": - img_output = self.img_model(gpt_img_src_tokens) - elif self.args.image_encoder == "b3-large": - img_output = \ - self.img_model(textual_tokens=None, visual_tokens=gpt_img_src_tokens, vision_masked_position=None)[ - "encoder_out"] - img_output = F.normalize(img_output, dim=-1) - elif self.args.image_encoder == "b3-base": - img_output = \ - self.img_model(textual_tokens=None, visual_tokens=gpt_img_src_tokens, vision_masked_position=None)[ - "encoder_out"] - img_output = F.normalize(img_output, dim=-1) - src_len = img_output.size(0) - img_output = img_output.transpose(0, 1) # T x B x C -> B x T x C - img_output = img_output.reshape(-1, img_output.size(-1)) - - if self.img_connector is not None: - # linear projection layer - img_output = self.img_connector(img_output, src_len=src_len) - return img_output - - def get_audio_representation(self, aud_src_tokens, aud_mask): - # audio - if len(aud_src_tokens.size()) == 3: - aud_src_tokens = aud_src_tokens.unsqueeze(0) - aud_mask = 
aud_mask.unsqueeze(0) - # aud_src_tokens B * Seg size * Fbank size * Dim - fbank = aud_src_tokens.view(-1, aud_src_tokens.size(2), aud_src_tokens.size(3)) - # B * Seg size * Fbank size * Dim -> B * Seg size * Fbank size * Dim - # aud_mask B * Seg size * Token size - padding_mask = aud_mask.view(-1, aud_mask.size(2)) # B * Seg size * Token size -> B * Seg size * Token size - padding_mask = padding_mask.unsqueeze(-1).repeat(1, 1, 2).view(-1, fbank.size(1)) - padding_mask = padding_mask.unsqueeze(-1).repeat(1, 1, fbank.size(-1)) - aud_output = self.aud_model.extract_features(source=None, - fbank=fbank, - padding_mask=padding_mask)[0] - aud_output = aud_output[~aud_mask.view(-1, aud_mask.size(-1))] - if self.aud_connector is not None: - # linear projection layer - aud_output = self.aud_connector(aud_output, src_len=aud_src_tokens.size(2)) - return aud_output - - def register_classification_head(self, name, num_classes=None, inner_dim=None, **kwargs): - """Register a classification head.""" - if name in self.classification_heads: - prev_num_classes = self.classification_heads[name].out_proj.out_features - prev_inner_dim = self.classification_heads[name].dense.out_features - if num_classes != prev_num_classes or inner_dim != prev_inner_dim: - logger.warning( - 're-registering head "{}" with num_classes {} (prev: {}) ' - 'and inner_dim {} (prev: {})'.format( - name, num_classes, prev_num_classes, inner_dim, prev_inner_dim - ) - ) - - self.classification_heads[name] = ClassificationHead( - self.args.encoder_embed_dim, - inner_dim or self.args.encoder_embed_dim, - num_classes, - self.args.pooler_activation_fn, - self.args.pooler_dropout, - self.args.ft_type - ) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - """Get normalized probabilities (or log probs) from a net's output.""" - logits = net_output[0].float() - if log_probs: - return F.log_softmax(logits, dim=-1) - else: - return F.softmax(logits, dim=-1) - - @property - def supported_targets(self): - return {"future"} - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + '.' if name != '' else '' - - # upgrade children modules - super().upgrade_state_dict_named(state_dict, name) - - # Handle new classification heads present in the state dict. - current_head_names = ( - [] if not hasattr(self, 'classification_heads') - else self.classification_heads.keys() - ) - keys_to_delete = [] - for k in state_dict.keys(): - if not k.startswith(prefix + 'classification_heads.'): - continue - - head_name = k[len(prefix + 'classification_heads.'):].split('.')[0] - num_classes = state_dict[prefix + 'classification_heads.' + head_name + '.out_proj.weight'].size(0) - inner_dim = state_dict[prefix + 'classification_heads.' 
+ head_name + '.dense.weight'].size(0) - - if getattr(self.args, 'load_checkpoint_heads', False): - if head_name not in current_head_names: - self.register_classification_head(head_name, num_classes, inner_dim) - else: - if head_name not in current_head_names: - logger.warning( - 'deleting classification head ({}) from checkpoint ' - 'not present in current model: {}'.format(head_name, k) - ) - keys_to_delete.append(k) - elif ( - num_classes != self.classification_heads[head_name].out_proj.out_features - or inner_dim != self.classification_heads[head_name].dense.out_features - ): - logger.warning( - 'deleting classification head ({}) from checkpoint ' - 'with different dimensions than current model: {}'.format(head_name, k) - ) - keys_to_delete.append(k) - for k in keys_to_delete: - del state_dict[k] - - # Copy any newly-added classification heads into the state dict - # with their current weights. - if hasattr(self, 'classification_heads'): - cur_state = self.classification_heads.state_dict() - for k, v in cur_state.items(): - if prefix + 'classification_heads.' + k not in state_dict: - logger.info('Overwriting ' + prefix + 'classification_heads.' + k) - state_dict[prefix + 'classification_heads.' + k] = v - - @classmethod - def load_text_model(cls, args, task): - """Load a roberta model from the fairseq library.""" - if args.text_encoder == "none": - return None, None - mlm_args = copy.deepcopy(args) - mlm_task = task - logger.info("Roberta dictionary: {} types".format(len(mlm_task.dictionary))) - - mlm_args.layernorm_embedding = True - mlm_args.no_scale_embedding = True - mlm_args.dropout = 0.1 - mlm_args.attention_dropout = 0.1 - mlm_args.tokens_per_sample = mlm_args.mlm_tokens_per_sample - mlm_model = RobertaModel.build_model(mlm_args, mlm_task) - logger.info("mlm args is {}".format(mlm_args)) - if args.mlm_model_path != "": - state = checkpoint_utils.load_checkpoint_to_cpu(args.mlm_model_path) - mlm_model.load_state_dict(state["model"], strict=True, args=mlm_args) - connector = build_connector(args, args.encoder_embed_dim, args.decoder_embed_dim) - return mlm_model, connector - - @classmethod - def load_image_model(cls, args, task): - if args.image_encoder == "none": - return None, None - if args.image_encoder == "clip": - from unilm.models.vl.clip import create_model - force_quick_gelu = False - if args.visual_model_name == "ViT-L-14": - force_quick_gelu = True - model = create_model(args.visual_model_name, pretrained=args.visual_pretrained, - force_quick_gelu=force_quick_gelu) - connector = build_connector(args, args.visual_output_dim, args.decoder_embed_dim) - return model, connector - if args.image_encoder == "b3-large": - drop_path_rate = 0.0 - embed_dim = 1024 - depth = 24 - num_heads = 16 - mlp_ratio = 4 - encoder_args = EncoderConfig(vocab_size=64010, multiway=True, layernorm_embedding=False, - no_output_layer=True, drop_path_rate=drop_path_rate, - encoder_embed_dim=embed_dim, encoder_attention_heads=num_heads, - encoder_ffn_embed_dim=int(embed_dim * mlp_ratio), encoder_layers=depth) - encoder = BEiT3(encoder_args) - encoder.load_state_dict(torch.load(args.visual_pretrained, map_location='cpu')["module"], strict=True) - model = encoder - connector = build_connector(args, args.visual_output_dim, args.decoder_embed_dim) - return model, connector - if args.image_encoder == "b3-base": - drop_path_rate = 0.0 - embed_dim = 768 - depth = 12 - num_heads = 12 - mlp_ratio = 4 - encoder_args = EncoderConfig(vocab_size=64010, multiway=True, layernorm_embedding=False, - 
no_output_layer=True, drop_path_rate=drop_path_rate, - encoder_embed_dim=embed_dim, encoder_attention_heads=num_heads, - encoder_ffn_embed_dim=int(embed_dim * mlp_ratio), encoder_layers=depth) - encoder = BEiT3(encoder_args) - # trim encoder in model state_dict - state_dict = {} - old_state_dict = torch.load(args.visual_pretrained, map_location='cpu')["module"] - for key in old_state_dict.keys(): - new_key = key - if 'head' in key: - continue - if 'encoder.encoder' in new_key: - new_key = new_key.replace('encoder.encoder', 'encoder') - elif 'encoder.text_embed' in new_key: - new_key = new_key.replace('encoder.text_embed', 'text_embed') - elif 'encoder.vision_embed' in new_key: - new_key = new_key.replace('encoder.vision_embed', 'vision_embed') - - state_dict[new_key] = old_state_dict[key] - - encoder.load_state_dict(state_dict, strict=True) - model = encoder - connector = build_connector(args, args.visual_output_dim, args.decoder_embed_dim) - return model, connector - - -class ClassificationHead(nn.Module): - """Head for sentence-level classification tasks.""" - - def __init__( - self, - input_dim, - inner_dim, - num_classes, - activation_fn, - pooler_dropout, - ft_type - ): - super().__init__() - self.dense = nn.Linear(input_dim, inner_dim) - self.activation_fn = utils.get_activation_fn(activation_fn) - self.dropout = nn.Dropout(p=pooler_dropout) - self.out_proj = nn.Linear(inner_dim, num_classes) - self.ft_type = ft_type - - def forward(self, features, **kwargs): - x = features[:, 0, :] # take <s> token (equiv. to [CLS]) - x = self.dropout(x) - x = self.dense(x) - x = self.activation_fn(x) - x = self.dropout(x) - x = self.out_proj(x) - return x - - -@register_model_architecture("kosmosg", "kosmosg_xl") -def kosmosg_xl(args): - # 1.3B params - args.decoder_layers = safe_getattr(args, "decoder_layers", 24) - args.decoder_embed_dim = safe_getattr(args, "decoder_embed_dim", 2048) - args.decoder_attention_heads = safe_getattr(args, "decoder_attention_heads", 32) - args.decoder_learned_pos = safe_getattr(args, "decoder_learned_pos", False) - - args.dropout = safe_getattr(args, "dropout", 0.1) - args.attention_dropout = safe_getattr(args, "attention_dropout", 0.1) - base_gpt3_architecture(args) - roberta_large_architecture(args) diff --git a/kosmos-g/unilm/models/vl/clip.py b/kosmos-g/unilm/models/vl/clip.py deleted file mode 100644 index e307a74b1..000000000 --- a/kosmos-g/unilm/models/vl/clip.py +++ /dev/null @@ -1,196 +0,0 @@ -import logging -import os -import torch - -from copy import deepcopy -from typing import Tuple, Union, Callable, Optional -from torch import nn -from torch.nn import functional as F -from open_clip.model import CLIP, CLIPVisionCfg, QuickGELU, TimmModel, ModifiedResNet, VisualTransformer, to_2tuple, LayerNorm, Transformer -from open_clip.factory import _MODEL_CONFIGS, list_models, load_checkpoint, get_pretrained_url, download_pretrained, load_state_dict - - -logger = logging.getLogger(__name__) - - -class VisualTransformer4Seq2Seq(nn.Module): - def __init__( - self, image_size: int, patch_size: int, width: int, layers: int, heads: int, mlp_ratio: float, output_dim: int, act_layer: Callable = nn.GELU): - super().__init__() - self.image_size = to_2tuple(image_size) - self.patch_size = to_2tuple(patch_size) - self.grid_size = (self.image_size[0] // self.patch_size[0], self.image_size[1] // self.patch_size[1]) - self.output_dim = output_dim - self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False) - - scale = width ** 
-0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter(scale * torch.randn(self.grid_size[0] * self.grid_size[1] + 1, width)) - self.ln_pre = LayerNorm(width) - - self.transformer = Transformer(width, layers, heads, mlp_ratio, act_layer=act_layer) - - self.ln_post = LayerNorm(width) - # self.proj = nn.Parameter(scale * torch.randn(width, output_dim)) - - def lock(self, unlocked_groups=0, freeze_bn_stats=False): - assert unlocked_groups == 0, 'partial locking not currently supported for this model' - for param in self.parameters(): - param.requires_grad = False - - @torch.jit.ignore - def set_grad_checkpointing(self, enable=True): - self.transformer.grad_checkpointing = enable - - def forward(self, x: torch.Tensor): - x = self.conv1(x) # shape = [*, width, grid, grid] - x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2] - x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] - x = torch.cat( - [self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width] - x = x + self.positional_embedding.to(x.dtype) - x = self.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - # NOTE encoder output is T, B, C for seq2seq - # x = x.permute(1, 0, 2) # LND -> NLD - - # x = self.ln_post(x[:, 0, :]) - x = self.ln_post(x) # [*, grid ** 2 + 1, width] - - # if self.proj is not None: - # x = x @ self.proj - return x - - -class ClipVisualOnly(nn.Module): - - # text_cfg for compatibility with original CLIP - def __init__(self, embed_dim, vision_cfg, text_cfg, quick_gelu=False): - super().__init__() - if isinstance(vision_cfg, dict): - vision_cfg = CLIPVisionCfg(**vision_cfg) - - # OpenAI models are pretrained w/ QuickGELU but native nn.GELU is both faster and more - # memory efficient in recent PyTorch releases (>= 1.10). - # NOTE: timm models always use native GELU regardless of quick_gelu flag. 
- act_layer = QuickGELU if quick_gelu else nn.GELU - - if vision_cfg.timm_model_name: - raise NotImplementedError - self.visual = TimmModel( - vision_cfg.timm_model_name, - pretrained=vision_cfg.timm_model_pretrained, - pool=vision_cfg.timm_pool, - proj=vision_cfg.timm_proj, - embed_dim=embed_dim, - image_size=vision_cfg.image_size) - act_layer = nn.GELU # so that text transformer doesn't use QuickGELU w/ timm models - elif isinstance(vision_cfg.layers, (tuple, list)): - raise NotImplementedError - vision_heads = vision_cfg.width * 32 // vision_cfg.head_width - self.visual = ModifiedResNet( - layers=vision_cfg.layers, - output_dim=embed_dim, - heads=vision_heads, - image_size=vision_cfg.image_size, - width=vision_cfg.width) - else: - vision_heads = vision_cfg.width // vision_cfg.head_width - self.visual = VisualTransformer4Seq2Seq( - image_size=vision_cfg.image_size, - patch_size=vision_cfg.patch_size, - width=vision_cfg.width, - layers=vision_cfg.layers, - heads=vision_heads, - mlp_ratio=vision_cfg.mlp_ratio, - output_dim=embed_dim, - act_layer=act_layer,) - self.init_parameters() - - def init_parameters(self): - if hasattr(self.visual, 'init_parameters'): - self.visual.init_parameters() - - @torch.jit.ignore - def set_grad_checkpointing(self, enable=True): - self.visual.set_grad_checkpointing(enable) - - def encode_image(self, image): - return self.visual(image) - - def forward(self, image): - image_features = self.encode_image(image) - image_features = F.normalize(image_features, dim=-1) - return image_features - - -def load_checkpoint4vision_only(model, checkpoint_path, strict=True): - state_dict = load_state_dict(checkpoint_path) - incompatible_keys = model.load_state_dict(state_dict, strict=strict) - return incompatible_keys - - -def create_model( - model_name: str, - pretrained: str = '', - jit: bool = False, - force_quick_gelu: bool = False, - pretrained_image: bool = False,): - model_name = model_name.replace('/', '-') # for callers using old naming with / in ViT names - - if pretrained.lower() == 'openai': - raise NotImplementedError - else: - if model_name in _MODEL_CONFIGS: - logger.info(f'Loading {model_name} model config.') - model_cfg = deepcopy(_MODEL_CONFIGS[model_name]) - else: - logging.error(f'Model config for {model_name} not found; available models {list_models()}.') - raise RuntimeError(f'Model config for {model_name} not found.') - - if force_quick_gelu: - # override for use of QuickGELU on non-OpenAI transformer models - model_cfg["quick_gelu"] = True - - if pretrained_image: - if 'timm_model_name' in model_cfg.get('vision_cfg', {}): - # pretrained weight loading for timm models set via vision_cfg - model_cfg['vision_cfg']['timm_model_pretrained'] = True - else: - assert False, 'pretrained image towers currently only supported for timm models' - - model = ClipVisualOnly(**model_cfg) - - if pretrained: - checkpoint_path = '' - url = get_pretrained_url(model_name, pretrained) - if url: - checkpoint_path = download_pretrained(url) - elif os.path.exists(pretrained): - checkpoint_path = pretrained - - if checkpoint_path: - logging.info(f'Loading pretrained {model_name} weights ({pretrained}).') - # NOTE TODO remove, strict=True is only for debug - load_checkpoint4vision_only(model, checkpoint_path, strict=False) - # reload attn weights into ts attn - dim = model.visual.transformer.resblocks[0].attn.in_proj_weight.shape[0] // 3 - for resblock in model.visual.transformer.resblocks: - resblock.ts_attn.q_proj.weight = nn.Parameter(resblock.attn.in_proj_weight[:dim].clone()) 
- resblock.ts_attn.q_proj.bias = nn.Parameter(resblock.attn.in_proj_bias[:dim].clone()) - resblock.ts_attn.k_proj.weight = nn.Parameter(resblock.attn.in_proj_weight[dim:2*dim].clone()) - resblock.ts_attn.k_proj.bias = nn.Parameter(resblock.attn.in_proj_bias[dim:2*dim].clone()) - resblock.ts_attn.v_proj.weight = nn.Parameter(resblock.attn.in_proj_weight[2*dim:].clone()) - resblock.ts_attn.v_proj.bias = nn.Parameter(resblock.attn.in_proj_bias[2*dim:].clone()) - resblock.ts_attn.out_proj.weight = nn.Parameter(resblock.attn.out_proj.weight.clone()) - resblock.ts_attn.out_proj.bias = nn.Parameter(resblock.attn.out_proj.bias.clone()) - resblock.attn = None - else: - logging.warning(f'Pretrained weights ({pretrained}) not found for model {model_name}.') - raise RuntimeError(f'Pretrained weights ({pretrained}) not found for model {model_name}.') - - if jit: model = torch.jit.script(model) - - return model \ No newline at end of file diff --git a/kosmos-g/unilm/tasks/__init__.py b/kosmos-g/unilm/tasks/__init__.py deleted file mode 100644 index ca688303f..000000000 --- a/kosmos-g/unilm/tasks/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -import argparse -import importlib -import os -from fairseq.tasks import import_tasks - -tasks_dir = os.path.dirname(__file__) -import_tasks(tasks_dir, "unilm.tasks") diff --git a/kosmos-g/unilm/tasks/gpt_base.py b/kosmos-g/unilm/tasks/gpt_base.py deleted file mode 100644 index 61c692f54..000000000 --- a/kosmos-g/unilm/tasks/gpt_base.py +++ /dev/null @@ -1,276 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -from dataclasses import dataclass, field -from typing import Optional -import logging -import os -from argparse import Namespace -import json -from omegaconf import MISSING, II, OmegaConf -from typing import Any - -import numpy as np -import torch -from fairseq import utils -from fairseq.data import Dictionary -from fairseq.tasks import FairseqTask, register_task -# from unilm.data.lm_loader import LMLoader -from unilm.data.spm_lm_loader import SpmLmLoader as LMLoader -from unilm.data.utils import SPECIAL_SYMBOLS -from fairseq.optim.amp_optimizer import AMPOptimizer -from fairseq.dataclass import FairseqDataclass, ChoiceEnum -from fairseq.data.encoders.gpt2_bpe import GPT2BPE -import sentencepiece - -logger = logging.getLogger(__name__) - -SAMPLE_BREAK_MODE_CHOICES = ChoiceEnum(["none", "complete", "complete_doc", "eos"]) -SHORTEN_METHOD_CHOICES = ChoiceEnum(["none", "truncate", "random_crop"]) - -DEFAULT_ENCODER_JSON = "https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json" -DEFAULT_VOCAB_BPE = "https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe" - - -@dataclass -class GPTPretrainingConfig(FairseqDataclass): - data: Optional[str] = field( - default=None, metadata={"help": "path to data directory"} - ) - train_json_split_name: str = field(default="train", metadata={"help": "path to sentencepiece model for dataloading"}) - - sample_break_mode: SAMPLE_BREAK_MODE_CHOICES = field( - default="none", - metadata={ - "help": 'If omitted or "none", fills each sample with tokens-per-sample ' - 'tokens. If set to "complete", splits samples only at the end ' - "of sentence, but may include multiple sentences per sample. " - '"complete_doc" is similar but respects doc boundaries. ' - 'If set to "eos", includes only one sentence per sample.' 
-        },
-    )
-    tokens_per_sample: int = field(
-        default=2048,
-        metadata={"help": "max number of tokens per sample for LM dataset"},
-    )
-    output_dictionary_size: int = field(
-        default=-1, metadata={"help": "limit the size of output dictionary"}
-    )
-    self_target: bool = field(default=False, metadata={"help": "include self target"})
-    future_target: bool = field(
-        default=False, metadata={"help": "include future target"}
-    )
-    past_target: bool = field(default=False, metadata={"help": "include past target"})
-    add_bos_token: bool = field(
-        default=False, metadata={"help": "prepend beginning of sentence token (<s>)"}
-    )
-    max_target_positions: Optional[int] = field(
-        default=None, metadata={"help": "max number of tokens in the target sequence"}
-    )
-    shorten_method: SHORTEN_METHOD_CHOICES = field(
-        default="none",
-        metadata={
-            "help": "if not none, shorten sequences that exceed --tokens-per-sample"
-        },
-    )
-    shorten_data_split_list: str = field(
-        default="",
-        metadata={
-            "help": "comma-separated list of dataset splits to apply shortening to, "
-            'e.g., "train,valid" (default: all dataset splits)'
-        },
-    )
-    pad_to_fixed_length: Optional[bool] = field(
-        default=False,
-        metadata={"help": "pad to fixed length"},
-    )
-    pad_to_fixed_bsz: Optional[bool] = field(
-        default=False,
-        metadata={"help": "boolean to pad to fixed batch size"},
-    )
-
-    gpt2_encoder_json: str = field(
-        default=DEFAULT_ENCODER_JSON, metadata={"help": "path to encoder.json"}
-    )
-    gpt2_vocab_bpe: str = field(
-        default=DEFAULT_VOCAB_BPE, metadata={"help": "path to vocab.bpe"}
-    )
-
-    required_batch_size_multiple: int = II("dataset.required_batch_size_multiple")
-
-    batch_read_ahead: int = field(
-        default=10000,
-        metadata={"help": "batch read ahead size for infinibatch"},
-    )
-
-    spm_model: str = field(
-        default="",
-        metadata={
-            "help": "sentencepiece model to tokenize the data"
-        },
-    )
-    dict_path: str = field(
-        default="",
-        metadata={
-            "help": "path to the fairseq dictionary"
-        },
-    )
-
-    tiktoken_model: str = field(
-        default="",
-        metadata={
-            "help": "tiktoken model to tokenize the data"
-        },
-    )
-
-    # TODO common vars below add to parent
-    seed: int = II("common.seed")
-    batch_size: Optional[int] = II("dataset.batch_size")
-    batch_size_valid: Optional[int] = II("dataset.batch_size_valid")
-    # dataset_impl: Optional[ChoiceEnum(get_available_dataset_impl())] = II(
-    #     "dataset.dataset_impl"
-    # )
-    data_buffer_size: int = II("dataset.data_buffer_size")
-    tpu: bool = II("common.tpu")
-    use_plasma_view: bool = II("common.use_plasma_view")
-    plasma_path: str = II("common.plasma_path")
-
-
-@register_task("gpt_pretraining", dataclass=GPTPretrainingConfig)
-class GPTTask(FairseqTask):
-
-    def __init__(self, cfg, dictionary, tokenizer):
-        super().__init__(cfg)
-        self.cfg = cfg
-        self.dictionary = dictionary
-        self.tokenizer = tokenizer
-        self.seed = cfg.seed
-
-    @classmethod
-    def setup_task(cls, cfg, **kwargs):
-        paths = utils.split_paths(cfg.data)
-        assert len(paths) > 0
-
-        if len(cfg.dict_path) > 0:
-            dictionary = Dictionary.load(cfg.dict_path)
-        else:
-            dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt"))
-
-        dictionary.add_symbol("<mask>")
-        for special_symbol in SPECIAL_SYMBOLS:
-            dictionary.add_symbol(special_symbol)
-
-        dictionary.pad_to_multiple_(cfg.required_batch_size_multiple)
-        logger.info("dictionary: {} types".format(len(dictionary)))
-
-        if len(cfg.spm_model) > 0:
-            tokenizer = sentencepiece.SentencePieceProcessor(model_file=cfg.spm_model)
-        elif len(cfg.tiktoken_model)
> 0: - import tiktoken - tokenizer = tiktoken.get_encoding(cfg.tiktoken_model) - else: - tokenizer = GPT2BPE(Namespace( - gpt2_vocab_bpe=cfg.gpt2_vocab_bpe, - gpt2_encoder_json=cfg.gpt2_encoder_json)) - return cls(cfg, dictionary, tokenizer) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - if "tnlg" in self.cfg.data: - self.datasets[split] = { - # 'data': json.load(open(f'{self.cfg.data}/json/{split}-nogithub.json')) if split == 'train' else json.load(open(f'{self.cfg.data}/json/{split}.json')), - # 'data': json.load(open(f'{self.cfg.data}/json/{split}-nogithub-noarvix-nopubmed.json')) if split == 'train' else json.load(open(f'{self.cfg.data}/json/{split}.json')), - 'data': json.load(open(f'{self.cfg.data}/json/{split}-nogithub-noarvix-nopubmed-mtnlg.json')) if split == 'train' else json.load(open(f'{self.cfg.data}/json/{split}.json')), - 'data_dir': self.cfg.data, - 'shuffle': True if split == 'train' else False, - } - else: - self.datasets[split] = { - 'data': json.load(open(f'{self.cfg.data}/json/{split}.json')), - 'data_dir': self.cfg.data, - 'shuffle': True if split == 'train' else False, - } - self.datasets[split] = Namespace(**self.datasets[split]) - - def dataset(self, split): - if split not in self.datasets: - raise KeyError("Dataset not loaded: " + split) - - return self.datasets[split] - - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - data_buffer_size=0, - disable_iterator_cache=False, - skip_remainder_batch=False, - grouped_shuffling=False, - update_epoch_batch_itr=False - ): - return LMLoader( - self.cfg, - dataset, - self.dictionary, - self.tokenizer, - max_tokens=max_tokens, - max_sentences=max_sentences, - max_positions=max_positions, - ignore_invalid_inputs=ignore_invalid_inputs, - required_batch_size_multiple=required_batch_size_multiple, - seed=seed, - epoch=epoch, - num_shards=num_shards, - shard_id=shard_id, - ) - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False - ): - """ - Do forward and backward, and return the loss as computed by *criterion* - for the given *model* and *sample*. - - Args: - sample (dict): the mini-batch. The format is defined by the - :class:`~fairseq.data.FairseqDataset`. 
- model (~fairseq.models.BaseFairseqModel): the model - criterion (~fairseq.criterions.FairseqCriterion): the criterion - optimizer (~fairseq.optim.FairseqOptimizer): the optimizer - update_num (int): the current update - ignore_grad (bool): multiply loss by 0 if this is set to True - - Returns: - tuple: - - the loss - - the sample size, which is used as the denominator for the - gradient - - logging outputs to display while training - """ - model.train() - model.set_num_updates(update_num) - with torch.autograd.profiler.record_function("forward"): - with torch.cuda.amp.autocast(enabled=(isinstance(optimizer, AMPOptimizer))): - loss, sample_size, logging_output = criterion(model, sample['gpt']) - if ignore_grad: - loss *= 0 - with torch.autograd.profiler.record_function("backward"): - optimizer.backward(loss) - return loss, sample_size, logging_output \ No newline at end of file diff --git a/kosmos-g/unilm/tasks/kosmosg.py b/kosmos-g/unilm/tasks/kosmosg.py deleted file mode 100644 index a20b2689f..000000000 --- a/kosmos-g/unilm/tasks/kosmosg.py +++ /dev/null @@ -1,219 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import json -import logging -from argparse import Namespace -from dataclasses import dataclass, field - -import torch -from deepspeed.runtime.engine import DeepSpeedEngine - -from fairseq.optim.amp_optimizer import AMPOptimizer -from fairseq.tasks import register_task -from unilm.data.basic_loader import MixLoader -from unilm.data.vl.instructpix2pix_loader import InstructPix2PixLoader -from unilm.data.vl.laion2b_loader import Laion2BLoader -from unilm.data.vl.openimage_loader import OpenImageLoader -from unilm.tasks.gpt_base import GPTPretrainingConfig, GPTTask - -logger = logging.getLogger(__name__) - - -@dataclass -class KosmosGConfig(GPTPretrainingConfig): - max_image_num: int = field(default=5, metadata={"help": ""}) - image_token_length: int = field(default=64, metadata={"help": ""}) - laion_data_dir: str = field(default="", metadata={"help": ""}) - laion_batch_size: int = field(default=1, metadata={"help": ""}) - instructpix2pix_data_dir: str = field(default="", metadata={"help": ""}) - instructpix2pix_batch_size: int = field(default=1, metadata={"help": ""}) - openimage_data_dir: str = field(default="", metadata={"help": ""}) - openimage_batch_size: int = field(default=1, metadata={"help": ""}) - data_weights: str = field( - default="1,2,2", - metadata={"help": "laion, instructpix2pix, openimage"}, - ) - caption_dropout_prob: float = field(default=0.3, metadata={"help": ""}) - align: bool = field(default=False, metadata={"help": ""}) - random_drop_caption_prob: float = field(default=0.0, metadata={"help": ""}) - - -@register_task("kosmosg", dataclass=KosmosGConfig) -class KosmosGTask(GPTTask): - def __init__(self, cfg, dictionary, tokenizer): - super().__init__(cfg, dictionary, tokenizer) - self.vlm_model = None - - def build_model(self, cfg, from_checkpoint=False): - model = super().build_model(cfg, from_checkpoint=from_checkpoint) - self.vlm_model = model - return model - - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - data_buffer_size=0, - disable_iterator_cache=False, - skip_remainder_batch=False, - grouped_shuffling=False, - 
update_epoch_batch_itr=False - ): - laion_dataset = Namespace(**{ - 'data': json.load(open(f'{self.cfg.laion_data_dir}/json/train.json')), - 'data_dir': self.cfg.laion_data_dir, - 'shuffle': True}) - - lain_vl_loader = Laion2BLoader( - self.cfg, - laion_dataset, - self.dictionary, - self.tokenizer, - max_tokens=max_tokens, - max_sentences=self.cfg.laion_batch_size, - max_positions=max_positions, - ignore_invalid_inputs=ignore_invalid_inputs, - required_batch_size_multiple=required_batch_size_multiple, - seed=seed, - epoch=epoch, - num_shards=num_shards, - shard_id=shard_id, - no_prefetch=False, - ) - - instructpix2pix_dataset = Namespace(**{ - 'data': json.load(open(f'{self.cfg.instructpix2pix_data_dir}/json/train.json')), - 'data_dir': self.cfg.instructpix2pix_data_dir, - 'shuffle': True}) - - instructpix2pix_vl_loader = InstructPix2PixLoader( - self.cfg, - instructpix2pix_dataset, - self.dictionary, - self.tokenizer, - max_tokens=max_tokens, - max_sentences=self.cfg.instructpix2pix_batch_size, - max_positions=max_positions, - ignore_invalid_inputs=ignore_invalid_inputs, - required_batch_size_multiple=required_batch_size_multiple, - seed=seed, - epoch=epoch, - num_shards=num_shards, - shard_id=shard_id, - no_prefetch=False, - ) - - openimage_dataset = Namespace(**{ - 'data': json.load(open(f'{self.cfg.openimage_data_dir}/json/train.json')), - 'data_dir': self.cfg.openimage_data_dir, - 'shuffle': True}) - - openimage_vl_loader = OpenImageLoader( - self.cfg, - openimage_dataset, - self.dictionary, - self.tokenizer, - max_tokens=max_tokens, - max_sentences=self.cfg.openimage_batch_size, - max_positions=max_positions, - ignore_invalid_inputs=ignore_invalid_inputs, - required_batch_size_multiple=required_batch_size_multiple, - seed=seed, - epoch=epoch, - num_shards=num_shards, - shard_id=shard_id, - no_prefetch=False, - ) - - data_weight = [float(x) for x in self.cfg.data_weights.split(',')] - data_weight = [x / sum(data_weight) for x in data_weight] - - concat_loader = MixLoader([ - lain_vl_loader, - instructpix2pix_vl_loader, - openimage_vl_loader, - ], data_weight) - return concat_loader - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - self.datasets[split] = { - 'data': None, - 'data_dir': None, - 'shuffle': True if split == 'train' else False, } - self.datasets[split] = Namespace(**self.datasets[split]) - - def train_step( - self, sample, model, criterion, optimizer, update_num, ignore_grad=False - ): - """ - Do forward and backward, and return the loss as computed by *criterion* - for the given *model* and *sample*. - - Args: - sample (dict): the mini-batch. The format is defined by the - :class:`~fairseq.data.FairseqDataset`. 
- model (~fairseq.models.BaseFairseqModel): the model - criterion (~fairseq.criterions.FairseqCriterion): the criterion - optimizer (~fairseq.optim.FairseqOptimizer): the optimizer - update_num (int): the current update - ignore_grad (bool): multiply loss by 0 if this is set to True - - Returns: - tuple: - - the loss - - the sample size, which is used as the denominator for the - gradient - - logging outputs to display while training - """ - model.train() - model.set_num_updates(update_num) - - agg_loss, agg_sample_size, agg_logging_output = 0., 0., {} - - if 'vl_laion' in sample: - loss_name = "image_laion" - loss_key = 'vl_laion' - elif 'vl_instructpix2pix' in sample: - loss_name = "image_instructpix2pix" - loss_key = 'vl_instructpix2pix' - elif 'vl_openimage' in sample: - loss_name = "image_openimage" - loss_key = 'vl_openimage' - else: - assert False, "Unknown loss key" - - with torch.autograd.profiler.record_function("forward"): - with torch.cuda.amp.autocast(enabled=(isinstance(optimizer, AMPOptimizer))): - vl_loss, sample_size, logging_output = criterion(model, sample[loss_key], loss_name=loss_name) - if ignore_grad: - vl_loss *= 0 - with torch.autograd.profiler.record_function("backward"): - if isinstance(model, DeepSpeedEngine): - model.backward(vl_loss) - else: - optimizer.backward(vl_loss) - - agg_loss += vl_loss.detach().item() - agg_sample_size += sample_size - agg_logging_output.update(logging_output) - - return agg_loss, agg_sample_size, agg_logging_output - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary diff --git a/kosmos-g/vl_setup.sh b/kosmos-g/vl_setup.sh deleted file mode 100644 index f8ab8eee9..000000000 --- a/kosmos-g/vl_setup.sh +++ /dev/null @@ -1,26 +0,0 @@ -pip install deepspeed==0.4.0 -pip install transformers==4.34.0 -pip install diffusers[torch]==0.21.4 -pip install controlnet-aux==0.0.6 -pip install opencv-contrib-python==4.3.0.36 -pip install torchmetrics[image]==1.0.0 -pip install einops==0.4 -pip install azure-storage-blob -pip install mediapipe==0.10.2 -pip install sentencepiece==0.1.97 -pip install ftfy==6.1.1 -pip install ujson==5.8.0 -pip install pydantic==1.10.8 -pip install tiktoken==0.4.0 -pip install typing_extensions==4.5.0 typing-inspect==0.8.0 -pip install ninja==1.11.1 -pip install basicsr==1.4.2 -pip install chromadb==0.3.26 -pip install einops-exts==0.0.4 -pip install gradio==3.47.1 - -pip install -v -U git+https://github.com/facebookresearch/xformers.git@v0.0.22 -pip install torchscale/ -pip install open_clip/ -pip install fairseq/ -pip install infinibatch/ \ No newline at end of file
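For reference, a minimal self-contained sketch (not the repository's code; shapes and values are made up) of the feature-splicing trick in `LMDecoder.forward_embedding`, where boolean masks such as `img_gpt_input_mask` scatter encoder outputs into the token-embedding sequence:

```python
import torch

batch, seq_len, dim = 2, 6, 4
token_embedding = torch.zeros(batch, seq_len, dim)

# img_gpt_input_mask marks which token slots are image placeholders.
img_gpt_input_mask = torch.zeros(batch, seq_len, dtype=torch.bool)
img_gpt_input_mask[0, 1:3] = True  # two image tokens in sample 0
img_gpt_input_mask[1, 0:2] = True  # two image tokens in sample 1

# Encoder output must be flattened to (num_masked_positions, dim), in the
# row-major order of the True entries -- exactly what boolean indexing expects.
img_features = torch.ones(int(img_gpt_input_mask.sum()), dim)
token_embedding[img_gpt_input_mask] = img_features

assert token_embedding[0, 1].sum() == dim  # image slot was filled
assert token_embedding[0, 0].sum() == 0    # text slot untouched
```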
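The causal mask built in `LMDecoder.forward` can be illustrated the same way; this sketch uses `torch.full` rather than the `zeros().fill_().type_as()` chain in the source, but the result is equivalent: `-inf` strictly above the diagonal blocks attention to future positions.

```python
import torch

slen = 4
self_attn_mask = torch.triu(torch.full((slen, slen), float("-inf")), diagonal=1)
print(self_attn_mask)
# tensor([[0., -inf, -inf, -inf],
#         [0.,   0., -inf, -inf],
#         [0.,   0.,   0., -inf],
#         [0.,   0.,   0.,   0.]])
```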
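`slice_tokens_for_mlm` in kosmosg.py gathers a fixed-length window of tokens per row via broadcasted fancy indexing; a hypothetical toy input makes the behavior concrete:

```python
import torch

A = torch.arange(12).reshape(2, 6)  # two rows of token ids
indx = torch.tensor([1, 3])         # per-row start positions
num_elem = 2

all_indx = indx[:, None] + torch.arange(num_elem)            # (2, num_elem) column ids
out = A[torch.arange(all_indx.shape[0])[:, None], all_indx]  # row-wise gather
print(out)  # tensor([[1, 2], [9, 10]])
```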
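`encode_condition` stacks the unconditional ("null") embeddings in front of the prompt embeddings so classifier-free guidance runs as one batched UNet pass. A hedged sketch with illustrative shapes (the 64 echoes `latent_query_num`; the embedding width here is arbitrary):

```python
import torch

num_images_per_prompt = 2
cond = torch.randn(1, 64, 768)  # aligner output for the prompt (B, T, C)
null = torch.randn(1, 64, 768)  # aligner output for the empty ("null") prompt

# Unconditional half first, conditional half second, as in encode_condition.
condition = torch.cat([
    null.repeat(len(cond) * num_images_per_prompt, 1, 1),
    cond.repeat(num_images_per_prompt, 1, 1),
])
assert condition.shape[0] == 2 * num_images_per_prompt

# At sampling time the two halves are typically recombined as
#   eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```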
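The checkpoint surgery at the end of `create_model` splits `nn.MultiheadAttention`'s fused `in_proj_weight` into separate q/k/v projections for the torchscale-style attention. A minimal sketch of the slicing:

```python
import torch
import torch.nn as nn

dim = 8
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=2)

# in_proj_weight has shape (3 * dim, dim): q, k and v stacked in thirds.
q_w = nn.Parameter(attn.in_proj_weight[:dim].clone())
k_w = nn.Parameter(attn.in_proj_weight[dim:2 * dim].clone())
v_w = nn.Parameter(attn.in_proj_weight[2 * dim:].clone())

assert q_w.shape == k_w.shape == v_w.shape == (dim, dim)
```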
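Finally, the `--data-weights` string in the kosmosg task is normalized into sampling probabilities for `MixLoader`; with the default `"1,2,2"` the instructpix2pix and openimage loaders are each sampled twice as often as laion:

```python
data_weights = "1,2,2"  # laion, instructpix2pix, openimage
weights = [float(x) for x in data_weights.split(",")]
weights = [w / sum(weights) for w in weights]
print(weights)  # [0.2, 0.4, 0.4]
```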